From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:4.19 commit in: /
Date: Mon, 16 Sep 2019 12:26:10 +0000 (UTC)
Message-ID: <1568636755.145454b6a808a552cf3e80041ce442cbae29d912.mpagano@gentoo>

commit:     145454b6a808a552cf3e80041ce442cbae29d912
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Sep 16 12:25:55 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Sep 16 12:25:55 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=145454b6

Linux patch 4.19.73

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    8 +-
 1072_linux-4.19.73.patch | 8877 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 8883 insertions(+), 2 deletions(-)

diff --git a/0000_README b/0000_README
index 5a202ee..d5d2e47 100644
--- a/0000_README
+++ b/0000_README
@@ -323,9 +323,13 @@ Patch:  1070_linux-4.19.70.patch
 From:   https://www.kernel.org
 Desc:   Linux 4.19.70
 
-Patch:  1071_linux-4.19.71.patch
+Patch:  1071_linux-4.19.72.patch
 From:   https://www.kernel.org
-Desc:   Linux 4.19.71
+Desc:   Linux 4.19.72
+
+Patch:  1072_linux-4.19.73.patch
+From:   https://www.kernel.org
+Desc:   Linux 4.19.73
 
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644

diff --git a/1072_linux-4.19.73.patch b/1072_linux-4.19.73.patch
new file mode 100644
index 0000000..0364fc7
--- /dev/null
+++ b/1072_linux-4.19.73.patch
@@ -0,0 +1,8877 @@
+diff --git a/Documentation/devicetree/bindings/display/panel/armadeus,st0700-adapt.txt b/Documentation/devicetree/bindings/display/panel/armadeus,st0700-adapt.txt
+new file mode 100644
+index 000000000000..a30d63db3c8f
+--- /dev/null
++++ b/Documentation/devicetree/bindings/display/panel/armadeus,st0700-adapt.txt
+@@ -0,0 +1,9 @@
++Armadeus ST0700 Adapt. A Santek ST0700I5Y-RBSLW 7.0" WVGA (800x480) TFT with
++an adapter board.
++
++Required properties:
++- compatible: "armadeus,st0700-adapt"
++- power-supply: see panel-common.txt
++
++Optional properties:
++- backlight: see panel-common.txt
+diff --git a/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt b/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
+index 6c49db7f8ad2..e1fe02f3e3e9 100644
+--- a/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
++++ b/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
+@@ -11,11 +11,13 @@ New driver handles the following
+ 
+ Required properties:
+ - compatible:		Must be "samsung,exynos-adc-v1"
+-				for exynos4412/5250 and s5pv210 controllers.
++				for Exynos5250 controllers.
+ 			Must be "samsung,exynos-adc-v2" for
+ 				future controllers.
+ 			Must be "samsung,exynos3250-adc" for
+ 				controllers compatible with ADC of Exynos3250.
++			Must be "samsung,exynos4212-adc" for
++				controllers compatible with ADC of Exynos4212 and Exynos4412.
+ 			Must be "samsung,exynos7-adc" for
+ 				the ADC in Exynos7 and compatibles
+ 			Must be "samsung,s3c2410-adc" for
+@@ -28,6 +30,8 @@ Required properties:
+ 				the ADC in s3c2443 and compatibles
+ 			Must be "samsung,s3c6410-adc" for
+ 				the ADC in s3c6410 and compatibles
++			Must be "samsung,s5pv210-adc" for
++				the ADC in s5pv210 and compatibles
+ - reg:			List of ADC register address range
+ 			- The base address and range of ADC register
+ 			- The base address and range of ADC_PHY register (every
+diff --git a/Documentation/devicetree/bindings/mmc/mmc.txt b/Documentation/devicetree/bindings/mmc/mmc.txt
+index f5a0923b34ca..c269dbe384fe 100644
+--- a/Documentation/devicetree/bindings/mmc/mmc.txt
++++ b/Documentation/devicetree/bindings/mmc/mmc.txt
+@@ -62,6 +62,10 @@ Optional properties:
+   be referred to mmc-pwrseq-simple.txt. But now it's reused as a tunable delay
+   waiting for I/O signalling and card power supply to be stable, regardless of
+   whether pwrseq-simple is used. Default to 10ms if no available.
++- supports-cqe : The presence of this property indicates that the corresponding
++  MMC host controller supports HW command queue feature.
++- disable-cqe-dcmd: This property indicates that the MMC controller's command
++  queue engine (CQE) does not support direct commands (DCMDs).
+ 
+ *NOTE* on CD and WP polarity. To use common for all SD/MMC host controllers line
+ polarity properties, we have to fix the meaning of the "normal" and "inverted"
+diff --git a/Makefile b/Makefile
+index ef80b1dfb753..9748fa3704bc 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 19
+-SUBLEVEL = 72
++SUBLEVEL = 73
+ EXTRAVERSION =
+ NAME = "People's Front"
+ 
+diff --git a/arch/arc/kernel/troubleshoot.c b/arch/arc/kernel/troubleshoot.c
+index 5c6663321e87..215f515442e0 100644
+--- a/arch/arc/kernel/troubleshoot.c
++++ b/arch/arc/kernel/troubleshoot.c
+@@ -179,6 +179,12 @@ void show_regs(struct pt_regs *regs)
+ 	struct task_struct *tsk = current;
+ 	struct callee_regs *cregs;
+ 
++	/*
++	 * generic code calls us with preemption disabled, but some calls
++	 * here could sleep, so re-enable to avoid lockdep splat
++	 */
++	preempt_enable();
++
+ 	print_task_path_n_nm(tsk);
+ 	show_regs_print_info(KERN_INFO);
+ 
+@@ -221,6 +227,8 @@ void show_regs(struct pt_regs *regs)
+ 	cregs = (struct callee_regs *)current->thread.callee_reg;
+ 	if (cregs)
+ 		show_callee_regs(cregs);
++
++	preempt_disable();
+ }
+ 
+ void show_kernel_fault_diag(const char *str, struct pt_regs *regs,
+diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
+index db6913094be3..4e8143de32e7 100644
+--- a/arch/arc/mm/fault.c
++++ b/arch/arc/mm/fault.c
+@@ -66,14 +66,12 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
+ 	struct vm_area_struct *vma = NULL;
+ 	struct task_struct *tsk = current;
+ 	struct mm_struct *mm = tsk->mm;
+-	siginfo_t info;
++	int si_code = SEGV_MAPERR;
+ 	int ret;
+ 	vm_fault_t fault;
+ 	int write = regs->ecr_cause & ECR_C_PROTV_STORE;  /* ST/EX */
+ 	unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
+ 
+-	clear_siginfo(&info);
+-
+ 	/*
+ 	 * We fault-in kernel-space virtual memory on-demand. The
+ 	 * 'reference' page table is init_mm.pgd.
+@@ -83,16 +81,14 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
+ 	 * only copy the information from the master page table,
+ 	 * nothing more.
+ 	 */
+-	if (address >= VMALLOC_START) {
++	if (address >= VMALLOC_START && !user_mode(regs)) {
+ 		ret = handle_kernel_vaddr_fault(address);
+ 		if (unlikely(ret))
+-			goto bad_area_nosemaphore;
++			goto no_context;
+ 		else
+ 			return;
+ 	}
+ 
+-	info.si_code = SEGV_MAPERR;
+-
+ 	/*
+ 	 * If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+@@ -119,7 +115,7 @@ retry:
+ 	 * we can handle it..
+ 	 */
+ good_area:
+-	info.si_code = SEGV_ACCERR;
++	si_code = SEGV_ACCERR;
+ 
+ 	/* Handle protection violation, execute on heap or stack */
+ 
+@@ -143,12 +139,17 @@ good_area:
+ 	 */
+ 	fault = handle_mm_fault(vma, address, flags);
+ 
+-	/* If Pagefault was interrupted by SIGKILL, exit page fault "early" */
+ 	if (unlikely(fatal_signal_pending(current))) {
+-		if ((fault & VM_FAULT_ERROR) && !(fault & VM_FAULT_RETRY))
+-			up_read(&mm->mmap_sem);
+-		if (user_mode(regs))
++
++		/*
++		 * if fault retry, mmap_sem already relinquished by core mm
++		 * so OK to return to user mode (with signal handled first)
++		 */
++		if (fault & VM_FAULT_RETRY) {
++			if (!user_mode(regs))
++				goto no_context;
+ 			return;
++		}
+ 	}
+ 
+ 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+@@ -195,15 +196,10 @@ good_area:
+ bad_area:
+ 	up_read(&mm->mmap_sem);
+ 
+-bad_area_nosemaphore:
+ 	/* User mode accesses just cause a SIGSEGV */
+ 	if (user_mode(regs)) {
+ 		tsk->thread.fault_address = address;
+-		info.si_signo = SIGSEGV;
+-		info.si_errno = 0;
+-		/* info.si_code has been set above */
+-		info.si_addr = (void __user *)address;
+-		force_sig_info(SIGSEGV, &info, tsk);
++		force_sig_fault(SIGSEGV, si_code, (void __user *)address, tsk);
+ 		return;
+ 	}
+ 
+@@ -238,9 +234,5 @@ do_sigbus:
+ 		goto no_context;
+ 
+ 	tsk->thread.fault_address = address;
+-	info.si_signo = SIGBUS;
+-	info.si_errno = 0;
+-	info.si_code = BUS_ADRERR;
+-	info.si_addr = (void __user *)address;
+-	force_sig_info(SIGBUS, &info, tsk);
++	force_sig_fault(SIGBUS, BUS_ADRERR, (void __user *)address, tsk);
+ }
+diff --git a/arch/arm/boot/dts/gemini-dlink-dir-685.dts b/arch/arm/boot/dts/gemini-dlink-dir-685.dts
+index 502a361d1fe9..15d6157b661d 100644
+--- a/arch/arm/boot/dts/gemini-dlink-dir-685.dts
++++ b/arch/arm/boot/dts/gemini-dlink-dir-685.dts
+@@ -65,7 +65,7 @@
+ 		gpio-miso = <&gpio1 8 GPIO_ACTIVE_HIGH>;
+ 		gpio-mosi = <&gpio1 7 GPIO_ACTIVE_HIGH>;
+ 		/* Collides with pflash CE1, not so cool */
+-		cs-gpios = <&gpio0 20 GPIO_ACTIVE_HIGH>;
++		cs-gpios = <&gpio0 20 GPIO_ACTIVE_LOW>;
+ 		num-chipselects = <1>;
+ 
+ 		panel: display@0 {
+diff --git a/arch/arm/boot/dts/qcom-ipq4019.dtsi b/arch/arm/boot/dts/qcom-ipq4019.dtsi
+index 78db67337ed4..54d056b01bb5 100644
+--- a/arch/arm/boot/dts/qcom-ipq4019.dtsi
++++ b/arch/arm/boot/dts/qcom-ipq4019.dtsi
+@@ -386,10 +386,10 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x81000000 0 0x40200000 0x40200000 0 0x00100000
+-				  0x82000000 0 0x48000000 0x48000000 0 0x10000000>;
++			ranges = <0x81000000 0 0x40200000 0x40200000 0 0x00100000>,
++				 <0x82000000 0 0x40300000 0x40300000 0 0x00d00000>;
+ 
+-			interrupts = <GIC_SPI 141 IRQ_TYPE_EDGE_RISING>;
++			interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+ 			#interrupt-cells = <1>;
+ 			interrupt-map-mask = <0 0 0 0x7>;
+diff --git a/arch/arm/mach-davinci/devices-da8xx.c b/arch/arm/mach-davinci/devices-da8xx.c
+index 3c42bf9fa061..708931b47090 100644
+--- a/arch/arm/mach-davinci/devices-da8xx.c
++++ b/arch/arm/mach-davinci/devices-da8xx.c
+@@ -704,6 +704,46 @@ static struct resource da8xx_gpio_resources[] = {
+ 	},
+ 	{ /* interrupt */
+ 		.start	= IRQ_DA8XX_GPIO0,
++		.end	= IRQ_DA8XX_GPIO0,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DA8XX_GPIO1,
++		.end	= IRQ_DA8XX_GPIO1,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DA8XX_GPIO2,
++		.end	= IRQ_DA8XX_GPIO2,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DA8XX_GPIO3,
++		.end	= IRQ_DA8XX_GPIO3,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DA8XX_GPIO4,
++		.end	= IRQ_DA8XX_GPIO4,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DA8XX_GPIO5,
++		.end	= IRQ_DA8XX_GPIO5,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DA8XX_GPIO6,
++		.end	= IRQ_DA8XX_GPIO6,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DA8XX_GPIO7,
++		.end	= IRQ_DA8XX_GPIO7,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DA8XX_GPIO8,
+ 		.end	= IRQ_DA8XX_GPIO8,
+ 		.flags	= IORESOURCE_IRQ,
+ 	},
+diff --git a/arch/arm/mach-davinci/dm355.c b/arch/arm/mach-davinci/dm355.c
+index 9f7d38d12c88..2b0f5d97ab7c 100644
+--- a/arch/arm/mach-davinci/dm355.c
++++ b/arch/arm/mach-davinci/dm355.c
+@@ -548,6 +548,36 @@ static struct resource dm355_gpio_resources[] = {
+ 	},
+ 	{	/* interrupt */
+ 		.start	= IRQ_DM355_GPIOBNK0,
++		.end	= IRQ_DM355_GPIOBNK0,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DM355_GPIOBNK1,
++		.end	= IRQ_DM355_GPIOBNK1,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DM355_GPIOBNK2,
++		.end	= IRQ_DM355_GPIOBNK2,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DM355_GPIOBNK3,
++		.end	= IRQ_DM355_GPIOBNK3,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DM355_GPIOBNK4,
++		.end	= IRQ_DM355_GPIOBNK4,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DM355_GPIOBNK5,
++		.end	= IRQ_DM355_GPIOBNK5,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DM355_GPIOBNK6,
+ 		.end	= IRQ_DM355_GPIOBNK6,
+ 		.flags	= IORESOURCE_IRQ,
+ 	},
+diff --git a/arch/arm/mach-davinci/dm365.c b/arch/arm/mach-davinci/dm365.c
+index abcf2a5ed89b..42665914166a 100644
+--- a/arch/arm/mach-davinci/dm365.c
++++ b/arch/arm/mach-davinci/dm365.c
+@@ -267,6 +267,41 @@ static struct resource dm365_gpio_resources[] = {
+ 	},
+ 	{	/* interrupt */
+ 		.start	= IRQ_DM365_GPIO0,
++		.end	= IRQ_DM365_GPIO0,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DM365_GPIO1,
++		.end	= IRQ_DM365_GPIO1,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DM365_GPIO2,
++		.end	= IRQ_DM365_GPIO2,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DM365_GPIO3,
++		.end	= IRQ_DM365_GPIO3,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DM365_GPIO4,
++		.end	= IRQ_DM365_GPIO4,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DM365_GPIO5,
++		.end	= IRQ_DM365_GPIO5,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DM365_GPIO6,
++		.end	= IRQ_DM365_GPIO6,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DM365_GPIO7,
+ 		.end	= IRQ_DM365_GPIO7,
+ 		.flags	= IORESOURCE_IRQ,
+ 	},
+diff --git a/arch/arm/mach-davinci/dm644x.c b/arch/arm/mach-davinci/dm644x.c
+index 0720da7809a6..de1ec6dc01e9 100644
+--- a/arch/arm/mach-davinci/dm644x.c
++++ b/arch/arm/mach-davinci/dm644x.c
+@@ -492,6 +492,26 @@ static struct resource dm644_gpio_resources[] = {
+ 	},
+ 	{	/* interrupt */
+ 		.start	= IRQ_GPIOBNK0,
++		.end	= IRQ_GPIOBNK0,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_GPIOBNK1,
++		.end	= IRQ_GPIOBNK1,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_GPIOBNK2,
++		.end	= IRQ_GPIOBNK2,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_GPIOBNK3,
++		.end	= IRQ_GPIOBNK3,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_GPIOBNK4,
+ 		.end	= IRQ_GPIOBNK4,
+ 		.flags	= IORESOURCE_IRQ,
+ 	},
+diff --git a/arch/arm/mach-davinci/dm646x.c b/arch/arm/mach-davinci/dm646x.c
+index 6bd2ed069d0d..d9b93e2806d2 100644
+--- a/arch/arm/mach-davinci/dm646x.c
++++ b/arch/arm/mach-davinci/dm646x.c
+@@ -442,6 +442,16 @@ static struct resource dm646x_gpio_resources[] = {
+ 	},
+ 	{	/* interrupt */
+ 		.start	= IRQ_DM646X_GPIOBNK0,
++		.end	= IRQ_DM646X_GPIOBNK0,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DM646X_GPIOBNK1,
++		.end	= IRQ_DM646X_GPIOBNK1,
++		.flags	= IORESOURCE_IRQ,
++	},
++	{
++		.start	= IRQ_DM646X_GPIOBNK2,
+ 		.end	= IRQ_DM646X_GPIOBNK2,
+ 		.flags	= IORESOURCE_IRQ,
+ 	},
+diff --git a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
+index 5089aa64088f..9a1ea8a46405 100644
+--- a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
++++ b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
+@@ -140,6 +140,7 @@
+ 			tx-fifo-depth = <16384>;
+ 			rx-fifo-depth = <16384>;
+ 			snps,multicast-filter-bins = <256>;
++			altr,sysmgr-syscon = <&sysmgr 0x44 0>;
+ 			status = "disabled";
+ 		};
+ 
+@@ -156,6 +157,7 @@
+ 			tx-fifo-depth = <16384>;
+ 			rx-fifo-depth = <16384>;
+ 			snps,multicast-filter-bins = <256>;
++			altr,sysmgr-syscon = <&sysmgr 0x48 0>;
+ 			status = "disabled";
+ 		};
+ 
+@@ -172,6 +174,7 @@
+ 			tx-fifo-depth = <16384>;
+ 			rx-fifo-depth = <16384>;
+ 			snps,multicast-filter-bins = <256>;
++			altr,sysmgr-syscon = <&sysmgr 0x4c 0>;
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts b/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
+index c142169a58fc..e9147e35b739 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
+@@ -40,6 +40,7 @@
+ 		pinctrl-0 = <&usb30_host_drv>;
+ 		regulator-name = "vcc_host_5v";
+ 		regulator-always-on;
++		regulator-boot-on;
+ 		vin-supply = <&vcc_sys>;
+ 	};
+ 
+@@ -50,6 +51,7 @@
+ 		pinctrl-0 = <&usb20_host_drv>;
+ 		regulator-name = "vcc_host1_5v";
+ 		regulator-always-on;
++		regulator-boot-on;
+ 		vin-supply = <&vcc_sys>;
+ 	};
+ 
+diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
+index 83a9aa3cf689..dd18d8174504 100644
+--- a/arch/powerpc/include/asm/kvm_book3s.h
++++ b/arch/powerpc/include/asm/kvm_book3s.h
+@@ -301,12 +301,12 @@ static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
+ 
+ static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
+ {
+-	vcpu->arch.cr = val;
++	vcpu->arch.regs.ccr = val;
+ }
+ 
+ static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
+ {
+-	return vcpu->arch.cr;
++	return vcpu->arch.regs.ccr;
+ }
+ 
+ static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, ulong val)
+diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
+index dc435a5af7d6..14fa07c73f44 100644
+--- a/arch/powerpc/include/asm/kvm_book3s_64.h
++++ b/arch/powerpc/include/asm/kvm_book3s_64.h
+@@ -482,7 +482,7 @@ static inline u64 sanitize_msr(u64 msr)
+ #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+ static inline void copy_from_checkpoint(struct kvm_vcpu *vcpu)
+ {
+-	vcpu->arch.cr  = vcpu->arch.cr_tm;
++	vcpu->arch.regs.ccr  = vcpu->arch.cr_tm;
+ 	vcpu->arch.regs.xer = vcpu->arch.xer_tm;
+ 	vcpu->arch.regs.link  = vcpu->arch.lr_tm;
+ 	vcpu->arch.regs.ctr = vcpu->arch.ctr_tm;
+@@ -499,7 +499,7 @@ static inline void copy_from_checkpoint(struct kvm_vcpu *vcpu)
+ 
+ static inline void copy_to_checkpoint(struct kvm_vcpu *vcpu)
+ {
+-	vcpu->arch.cr_tm  = vcpu->arch.cr;
++	vcpu->arch.cr_tm  = vcpu->arch.regs.ccr;
+ 	vcpu->arch.xer_tm = vcpu->arch.regs.xer;
+ 	vcpu->arch.lr_tm  = vcpu->arch.regs.link;
+ 	vcpu->arch.ctr_tm = vcpu->arch.regs.ctr;
+diff --git a/arch/powerpc/include/asm/kvm_booke.h b/arch/powerpc/include/asm/kvm_booke.h
+index d513e3ed1c65..f0cef625f17c 100644
+--- a/arch/powerpc/include/asm/kvm_booke.h
++++ b/arch/powerpc/include/asm/kvm_booke.h
+@@ -46,12 +46,12 @@ static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
+ 
+ static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
+ {
+-	vcpu->arch.cr = val;
++	vcpu->arch.regs.ccr = val;
+ }
+ 
+ static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
+ {
+-	return vcpu->arch.cr;
++	return vcpu->arch.regs.ccr;
+ }
+ 
+ static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, ulong val)
+diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
+index 2b6049e83970..2f95e38f0549 100644
+--- a/arch/powerpc/include/asm/kvm_host.h
++++ b/arch/powerpc/include/asm/kvm_host.h
+@@ -538,8 +538,6 @@ struct kvm_vcpu_arch {
+ 	ulong tar;
+ #endif
+ 
+-	u32 cr;
+-
+ #ifdef CONFIG_PPC_BOOK3S
+ 	ulong hflags;
+ 	ulong guest_owned_ext;
+diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
+index b694d6af1150..ae953958c0f3 100644
+--- a/arch/powerpc/include/asm/mmu_context.h
++++ b/arch/powerpc/include/asm/mmu_context.h
+@@ -217,12 +217,6 @@ static inline void enter_lazy_tlb(struct mm_struct *mm,
+ #endif
+ }
+ 
+-static inline int arch_dup_mmap(struct mm_struct *oldmm,
+-				struct mm_struct *mm)
+-{
+-	return 0;
+-}
+-
+ #ifndef CONFIG_PPC_BOOK3S_64
+ static inline void arch_exit_mmap(struct mm_struct *mm)
+ {
+@@ -247,6 +241,7 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
+ #ifdef CONFIG_PPC_MEM_KEYS
+ bool arch_vma_access_permitted(struct vm_area_struct *vma, bool write,
+ 			       bool execute, bool foreign);
++void arch_dup_pkeys(struct mm_struct *oldmm, struct mm_struct *mm);
+ #else /* CONFIG_PPC_MEM_KEYS */
+ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
+ 		bool write, bool execute, bool foreign)
+@@ -259,6 +254,7 @@ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
+ #define thread_pkey_regs_save(thread)
+ #define thread_pkey_regs_restore(new_thread, old_thread)
+ #define thread_pkey_regs_init(thread)
++#define arch_dup_pkeys(oldmm, mm)
+ 
+ static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
+ {
+@@ -267,5 +263,12 @@ static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
+ 
+ #endif /* CONFIG_PPC_MEM_KEYS */
+ 
++static inline int arch_dup_mmap(struct mm_struct *oldmm,
++				struct mm_struct *mm)
++{
++	arch_dup_pkeys(oldmm, mm);
++	return 0;
++}
++
+ #endif /* __KERNEL__ */
+ #endif /* __ASM_POWERPC_MMU_CONTEXT_H */
+diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
+index e5b314ed054e..640a4d818772 100644
+--- a/arch/powerpc/include/asm/reg.h
++++ b/arch/powerpc/include/asm/reg.h
+@@ -118,11 +118,16 @@
+ #define MSR_TS_S	__MASK(MSR_TS_S_LG)	/*  Transaction Suspended */
+ #define MSR_TS_T	__MASK(MSR_TS_T_LG)	/*  Transaction Transactional */
+ #define MSR_TS_MASK	(MSR_TS_T | MSR_TS_S)   /* Transaction State bits */
+-#define MSR_TM_ACTIVE(x) (((x) & MSR_TS_MASK) != 0) /* Transaction active? */
+ #define MSR_TM_RESV(x) (((x) & MSR_TS_MASK) == MSR_TS_MASK) /* Reserved */
+ #define MSR_TM_TRANSACTIONAL(x)	(((x) & MSR_TS_MASK) == MSR_TS_T)
+ #define MSR_TM_SUSPENDED(x)	(((x) & MSR_TS_MASK) == MSR_TS_S)
+ 
++#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
++#define MSR_TM_ACTIVE(x) (((x) & MSR_TS_MASK) != 0) /* Transaction active? */
++#else
++#define MSR_TM_ACTIVE(x) 0
++#endif
++
+ #if defined(CONFIG_PPC_BOOK3S_64)
+ #define MSR_64BIT	MSR_SF
+ 
+diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
+index 89cf15566c4e..7c3738d890e8 100644
+--- a/arch/powerpc/kernel/asm-offsets.c
++++ b/arch/powerpc/kernel/asm-offsets.c
+@@ -438,7 +438,7 @@ int main(void)
+ #ifdef CONFIG_PPC_BOOK3S
+ 	OFFSET(VCPU_TAR, kvm_vcpu, arch.tar);
+ #endif
+-	OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
++	OFFSET(VCPU_CR, kvm_vcpu, arch.regs.ccr);
+ 	OFFSET(VCPU_PC, kvm_vcpu, arch.regs.nip);
+ #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+ 	OFFSET(VCPU_MSR, kvm_vcpu, arch.shregs.msr);
+@@ -695,7 +695,7 @@ int main(void)
+ #endif /* CONFIG_PPC_BOOK3S_64 */
+ 
+ #else /* CONFIG_PPC_BOOK3S */
+-	OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
++	OFFSET(VCPU_CR, kvm_vcpu, arch.regs.ccr);
+ 	OFFSET(VCPU_XER, kvm_vcpu, arch.regs.xer);
+ 	OFFSET(VCPU_LR, kvm_vcpu, arch.regs.link);
+ 	OFFSET(VCPU_CTR, kvm_vcpu, arch.regs.ctr);
+diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
+index 9168a247e24f..3fb564f3e887 100644
+--- a/arch/powerpc/kernel/head_64.S
++++ b/arch/powerpc/kernel/head_64.S
+@@ -906,6 +906,7 @@ p_toc:	.8byte	__toc_start + 0x8000 - 0b
+ /*
+  * This is where the main kernel code starts.
+  */
++__REF
+ start_here_multiplatform:
+ 	/* set up the TOC */
+ 	bl      relative_toc
+@@ -981,6 +982,7 @@ start_here_multiplatform:
+ 	RFI
+ 	b	.	/* prevent speculative execution */
+ 
++	.previous
+ 	/* This is where all platforms converge execution */
+ 
+ start_here_common:
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index d29f2dca725b..909c9407e392 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -102,27 +102,8 @@ static void check_if_tm_restore_required(struct task_struct *tsk)
+ 	}
+ }
+ 
+-static inline bool msr_tm_active(unsigned long msr)
+-{
+-	return MSR_TM_ACTIVE(msr);
+-}
+-
+-static bool tm_active_with_fp(struct task_struct *tsk)
+-{
+-	return msr_tm_active(tsk->thread.regs->msr) &&
+-		(tsk->thread.ckpt_regs.msr & MSR_FP);
+-}
+-
+-static bool tm_active_with_altivec(struct task_struct *tsk)
+-{
+-	return msr_tm_active(tsk->thread.regs->msr) &&
+-		(tsk->thread.ckpt_regs.msr & MSR_VEC);
+-}
+ #else
+-static inline bool msr_tm_active(unsigned long msr) { return false; }
+ static inline void check_if_tm_restore_required(struct task_struct *tsk) { }
+-static inline bool tm_active_with_fp(struct task_struct *tsk) { return false; }
+-static inline bool tm_active_with_altivec(struct task_struct *tsk) { return false; }
+ #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
+ 
+ bool strict_msr_control;
+@@ -247,7 +228,8 @@ void enable_kernel_fp(void)
+ 		 * giveup as this would save  to the 'live' structure not the
+ 		 * checkpointed structure.
+ 		 */
+-		if(!msr_tm_active(cpumsr) && msr_tm_active(current->thread.regs->msr))
++		if (!MSR_TM_ACTIVE(cpumsr) &&
++		     MSR_TM_ACTIVE(current->thread.regs->msr))
+ 			return;
+ 		__giveup_fpu(current);
+ 	}
+@@ -256,7 +238,7 @@ EXPORT_SYMBOL(enable_kernel_fp);
+ 
+ static int restore_fp(struct task_struct *tsk)
+ {
+-	if (tsk->thread.load_fp || tm_active_with_fp(tsk)) {
++	if (tsk->thread.load_fp) {
+ 		load_fp_state(&current->thread.fp_state);
+ 		current->thread.load_fp++;
+ 		return 1;
+@@ -311,7 +293,8 @@ void enable_kernel_altivec(void)
+ 		 * giveup as this would save  to the 'live' structure not the
+ 		 * checkpointed structure.
+ 		 */
+-		if(!msr_tm_active(cpumsr) && msr_tm_active(current->thread.regs->msr))
++		if (!MSR_TM_ACTIVE(cpumsr) &&
++		     MSR_TM_ACTIVE(current->thread.regs->msr))
+ 			return;
+ 		__giveup_altivec(current);
+ 	}
+@@ -337,8 +320,7 @@ EXPORT_SYMBOL_GPL(flush_altivec_to_thread);
+ 
+ static int restore_altivec(struct task_struct *tsk)
+ {
+-	if (cpu_has_feature(CPU_FTR_ALTIVEC) &&
+-		(tsk->thread.load_vec || tm_active_with_altivec(tsk))) {
++	if (cpu_has_feature(CPU_FTR_ALTIVEC) && (tsk->thread.load_vec)) {
+ 		load_vr_state(&tsk->thread.vr_state);
+ 		tsk->thread.used_vr = 1;
+ 		tsk->thread.load_vec++;
+@@ -397,7 +379,8 @@ void enable_kernel_vsx(void)
+ 		 * giveup as this would save  to the 'live' structure not the
+ 		 * checkpointed structure.
+ 		 */
+-		if(!msr_tm_active(cpumsr) && msr_tm_active(current->thread.regs->msr))
++		if (!MSR_TM_ACTIVE(cpumsr) &&
++		     MSR_TM_ACTIVE(current->thread.regs->msr))
+ 			return;
+ 		__giveup_vsx(current);
+ 	}
+@@ -499,13 +482,14 @@ void giveup_all(struct task_struct *tsk)
+ 	if (!tsk->thread.regs)
+ 		return;
+ 
++	check_if_tm_restore_required(tsk);
++
+ 	usermsr = tsk->thread.regs->msr;
+ 
+ 	if ((usermsr & msr_all_available) == 0)
+ 		return;
+ 
+ 	msr_check_and_set(msr_all_available);
+-	check_if_tm_restore_required(tsk);
+ 
+ 	WARN_ON((usermsr & MSR_VSX) && !((usermsr & MSR_FP) && (usermsr & MSR_VEC)));
+ 
+@@ -530,7 +514,7 @@ void restore_math(struct pt_regs *regs)
+ {
+ 	unsigned long msr;
+ 
+-	if (!msr_tm_active(regs->msr) &&
++	if (!MSR_TM_ACTIVE(regs->msr) &&
+ 		!current->thread.load_fp && !loadvec(current->thread))
+ 		return;
+ 
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
+index 68e14afecac8..a488c105b923 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
+@@ -744,12 +744,15 @@ void kvmppc_rmap_reset(struct kvm *kvm)
+ 	srcu_idx = srcu_read_lock(&kvm->srcu);
+ 	slots = kvm_memslots(kvm);
+ 	kvm_for_each_memslot(memslot, slots) {
++		/* Mutual exclusion with kvm_unmap_hva_range etc. */
++		spin_lock(&kvm->mmu_lock);
+ 		/*
+ 		 * This assumes it is acceptable to lose reference and
+ 		 * change bits across a reset.
+ 		 */
+ 		memset(memslot->arch.rmap, 0,
+ 		       memslot->npages * sizeof(*memslot->arch.rmap));
++		spin_unlock(&kvm->mmu_lock);
+ 	}
+ 	srcu_read_unlock(&kvm->srcu, srcu_idx);
+ }
+diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
+index 36b11c5a0dbb..2654df220d05 100644
+--- a/arch/powerpc/kvm/book3s_emulate.c
++++ b/arch/powerpc/kvm/book3s_emulate.c
+@@ -110,7 +110,7 @@ static inline void kvmppc_copyto_vcpu_tm(struct kvm_vcpu *vcpu)
+ 	vcpu->arch.ctr_tm = vcpu->arch.regs.ctr;
+ 	vcpu->arch.tar_tm = vcpu->arch.tar;
+ 	vcpu->arch.lr_tm = vcpu->arch.regs.link;
+-	vcpu->arch.cr_tm = vcpu->arch.cr;
++	vcpu->arch.cr_tm = vcpu->arch.regs.ccr;
+ 	vcpu->arch.xer_tm = vcpu->arch.regs.xer;
+ 	vcpu->arch.vrsave_tm = vcpu->arch.vrsave;
+ }
+@@ -129,7 +129,7 @@ static inline void kvmppc_copyfrom_vcpu_tm(struct kvm_vcpu *vcpu)
+ 	vcpu->arch.regs.ctr = vcpu->arch.ctr_tm;
+ 	vcpu->arch.tar = vcpu->arch.tar_tm;
+ 	vcpu->arch.regs.link = vcpu->arch.lr_tm;
+-	vcpu->arch.cr = vcpu->arch.cr_tm;
++	vcpu->arch.regs.ccr = vcpu->arch.cr_tm;
+ 	vcpu->arch.regs.xer = vcpu->arch.xer_tm;
+ 	vcpu->arch.vrsave = vcpu->arch.vrsave_tm;
+ }
+@@ -141,7 +141,7 @@ static void kvmppc_emulate_treclaim(struct kvm_vcpu *vcpu, int ra_val)
+ 	uint64_t texasr;
+ 
+ 	/* CR0 = 0 | MSR[TS] | 0 */
+-	vcpu->arch.cr = (vcpu->arch.cr & ~(CR0_MASK << CR0_SHIFT)) |
++	vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & ~(CR0_MASK << CR0_SHIFT)) |
+ 		(((guest_msr & MSR_TS_MASK) >> (MSR_TS_S_LG - 1))
+ 		 << CR0_SHIFT);
+ 
+@@ -220,7 +220,7 @@ void kvmppc_emulate_tabort(struct kvm_vcpu *vcpu, int ra_val)
+ 	tm_abort(ra_val);
+ 
+ 	/* CR0 = 0 | MSR[TS] | 0 */
+-	vcpu->arch.cr = (vcpu->arch.cr & ~(CR0_MASK << CR0_SHIFT)) |
++	vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & ~(CR0_MASK << CR0_SHIFT)) |
+ 		(((guest_msr & MSR_TS_MASK) >> (MSR_TS_S_LG - 1))
+ 		 << CR0_SHIFT);
+ 
+@@ -494,8 +494,8 @@ int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ 
+ 			if (!(kvmppc_get_msr(vcpu) & MSR_PR)) {
+ 				preempt_disable();
+-				vcpu->arch.cr = (CR0_TBEGIN_FAILURE |
+-				  (vcpu->arch.cr & ~(CR0_MASK << CR0_SHIFT)));
++				vcpu->arch.regs.ccr = (CR0_TBEGIN_FAILURE |
++				  (vcpu->arch.regs.ccr & ~(CR0_MASK << CR0_SHIFT)));
+ 
+ 				vcpu->arch.texasr = (TEXASR_FS | TEXASR_EXACT |
+ 					(((u64)(TM_CAUSE_EMULATE | TM_CAUSE_PERSISTENT))
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 083dcedba11c..05b32cc12e41 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -410,8 +410,8 @@ static void kvmppc_dump_regs(struct kvm_vcpu *vcpu)
+ 	       vcpu->arch.shregs.sprg0, vcpu->arch.shregs.sprg1);
+ 	pr_err("sprg2 = %.16llx sprg3 = %.16llx\n",
+ 	       vcpu->arch.shregs.sprg2, vcpu->arch.shregs.sprg3);
+-	pr_err("cr = %.8x  xer = %.16lx  dsisr = %.8x\n",
+-	       vcpu->arch.cr, vcpu->arch.regs.xer, vcpu->arch.shregs.dsisr);
++	pr_err("cr = %.8lx  xer = %.16lx  dsisr = %.8x\n",
++	       vcpu->arch.regs.ccr, vcpu->arch.regs.xer, vcpu->arch.shregs.dsisr);
+ 	pr_err("dar = %.16llx\n", vcpu->arch.shregs.dar);
+ 	pr_err("fault dar = %.16lx dsisr = %.8x\n",
+ 	       vcpu->arch.fault_dar, vcpu->arch.fault_dsisr);
+@@ -3813,12 +3813,15 @@ static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu)
+ /* Must be called with kvm->lock held and mmu_ready = 0 and no vcpus running */
+ int kvmppc_switch_mmu_to_hpt(struct kvm *kvm)
+ {
++	kvmppc_rmap_reset(kvm);
++	kvm->arch.process_table = 0;
++	/* Mutual exclusion with kvm_unmap_hva_range etc. */
++	spin_lock(&kvm->mmu_lock);
++	kvm->arch.radix = 0;
++	spin_unlock(&kvm->mmu_lock);
+ 	kvmppc_free_radix(kvm);
+ 	kvmppc_update_lpcr(kvm, LPCR_VPM1,
+ 			   LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR);
+-	kvmppc_rmap_reset(kvm);
+-	kvm->arch.radix = 0;
+-	kvm->arch.process_table = 0;
+ 	return 0;
+ }
+ 
+@@ -3831,10 +3834,14 @@ int kvmppc_switch_mmu_to_radix(struct kvm *kvm)
+ 	if (err)
+ 		return err;
+ 
++	kvmppc_rmap_reset(kvm);
++	/* Mutual exclusion with kvm_unmap_hva_range etc. */
++	spin_lock(&kvm->mmu_lock);
++	kvm->arch.radix = 1;
++	spin_unlock(&kvm->mmu_lock);
+ 	kvmppc_free_hpt(&kvm->arch.hpt);
+ 	kvmppc_update_lpcr(kvm, LPCR_UPRT | LPCR_GTSE | LPCR_HR,
+ 			   LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR);
+-	kvm->arch.radix = 1;
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+index 1d14046124a0..68c7591f2b5f 100644
+--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+@@ -56,6 +56,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
+ #define STACK_SLOT_DAWR		(SFS-56)
+ #define STACK_SLOT_DAWRX	(SFS-64)
+ #define STACK_SLOT_HFSCR	(SFS-72)
++#define STACK_SLOT_AMR		(SFS-80)
++#define STACK_SLOT_UAMOR	(SFS-88)
+ 
+ /*
+  * Call kvmppc_hv_entry in real mode.
+@@ -760,11 +762,9 @@ BEGIN_FTR_SECTION
+ 	mfspr	r5, SPRN_TIDR
+ 	mfspr	r6, SPRN_PSSCR
+ 	mfspr	r7, SPRN_PID
+-	mfspr	r8, SPRN_IAMR
+ 	std	r5, STACK_SLOT_TID(r1)
+ 	std	r6, STACK_SLOT_PSSCR(r1)
+ 	std	r7, STACK_SLOT_PID(r1)
+-	std	r8, STACK_SLOT_IAMR(r1)
+ 	mfspr	r5, SPRN_HFSCR
+ 	std	r5, STACK_SLOT_HFSCR(r1)
+ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+@@ -772,11 +772,18 @@ BEGIN_FTR_SECTION
+ 	mfspr	r5, SPRN_CIABR
+ 	mfspr	r6, SPRN_DAWR
+ 	mfspr	r7, SPRN_DAWRX
++	mfspr	r8, SPRN_IAMR
+ 	std	r5, STACK_SLOT_CIABR(r1)
+ 	std	r6, STACK_SLOT_DAWR(r1)
+ 	std	r7, STACK_SLOT_DAWRX(r1)
++	std	r8, STACK_SLOT_IAMR(r1)
+ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
+ 
++	mfspr	r5, SPRN_AMR
++	std	r5, STACK_SLOT_AMR(r1)
++	mfspr	r6, SPRN_UAMOR
++	std	r6, STACK_SLOT_UAMOR(r1)
++
+ BEGIN_FTR_SECTION
+ 	/* Set partition DABR */
+ 	/* Do this before re-enabling PMU to avoid P7 DABR corruption bug */
+@@ -1202,7 +1209,7 @@ BEGIN_FTR_SECTION
+ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
+ 
+ 	ld	r5, VCPU_LR(r4)
+-	lwz	r6, VCPU_CR(r4)
++	ld	r6, VCPU_CR(r4)
+ 	mtlr	r5
+ 	mtcr	r6
+ 
+@@ -1313,7 +1320,7 @@ kvmppc_interrupt_hv:
+ 	std	r3, VCPU_GPR(R12)(r9)
+ 	/* CR is in the high half of r12 */
+ 	srdi	r4, r12, 32
+-	stw	r4, VCPU_CR(r9)
++	std	r4, VCPU_CR(r9)
+ BEGIN_FTR_SECTION
+ 	ld	r3, HSTATE_CFAR(r13)
+ 	std	r3, VCPU_CFAR(r9)
+@@ -1713,22 +1720,25 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300)
+ 	mtspr	SPRN_PSPB, r0
+ 	mtspr	SPRN_WORT, r0
+ BEGIN_FTR_SECTION
+-	mtspr	SPRN_IAMR, r0
+ 	mtspr	SPRN_TCSCR, r0
+ 	/* Set MMCRS to 1<<31 to freeze and disable the SPMC counters */
+ 	li	r0, 1
+ 	sldi	r0, r0, 31
+ 	mtspr	SPRN_MMCRS, r0
+ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
+-8:
+ 
+-	/* Save and reset AMR and UAMOR before turning on the MMU */
++	/* Save and restore AMR, IAMR and UAMOR before turning on the MMU */
++	ld	r8, STACK_SLOT_IAMR(r1)
++	mtspr	SPRN_IAMR, r8
++
++8:	/* Power7 jumps back in here */
+ 	mfspr	r5,SPRN_AMR
+ 	mfspr	r6,SPRN_UAMOR
+ 	std	r5,VCPU_AMR(r9)
+ 	std	r6,VCPU_UAMOR(r9)
+-	li	r6,0
+-	mtspr	SPRN_AMR,r6
++	ld	r5,STACK_SLOT_AMR(r1)
++	ld	r6,STACK_SLOT_UAMOR(r1)
++	mtspr	SPRN_AMR, r5
+ 	mtspr	SPRN_UAMOR, r6
+ 
+ 	/* Switch DSCR back to host value */
+@@ -1897,11 +1907,9 @@ BEGIN_FTR_SECTION
+ 	ld	r5, STACK_SLOT_TID(r1)
+ 	ld	r6, STACK_SLOT_PSSCR(r1)
+ 	ld	r7, STACK_SLOT_PID(r1)
+-	ld	r8, STACK_SLOT_IAMR(r1)
+ 	mtspr	SPRN_TIDR, r5
+ 	mtspr	SPRN_PSSCR, r6
+ 	mtspr	SPRN_PID, r7
+-	mtspr	SPRN_IAMR, r8
+ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+ 
+ #ifdef CONFIG_PPC_RADIX_MMU
+diff --git a/arch/powerpc/kvm/book3s_hv_tm.c b/arch/powerpc/kvm/book3s_hv_tm.c
+index 008285058f9b..31cd0f327c8a 100644
+--- a/arch/powerpc/kvm/book3s_hv_tm.c
++++ b/arch/powerpc/kvm/book3s_hv_tm.c
+@@ -130,8 +130,8 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
+ 			return RESUME_GUEST;
+ 		}
+ 		/* Set CR0 to indicate previous transactional state */
+-		vcpu->arch.cr = (vcpu->arch.cr & 0x0fffffff) |
+-			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 28);
++		vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & 0x0fffffff) |
++			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 29);
+ 		/* L=1 => tresume, L=0 => tsuspend */
+ 		if (instr & (1 << 21)) {
+ 			if (MSR_TM_SUSPENDED(msr))
+@@ -174,8 +174,8 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
+ 		copy_from_checkpoint(vcpu);
+ 
+ 		/* Set CR0 to indicate previous transactional state */
+-		vcpu->arch.cr = (vcpu->arch.cr & 0x0fffffff) |
+-			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 28);
++		vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & 0x0fffffff) |
++			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 29);
+ 		vcpu->arch.shregs.msr &= ~MSR_TS_MASK;
+ 		return RESUME_GUEST;
+ 
+@@ -204,8 +204,8 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
+ 		copy_to_checkpoint(vcpu);
+ 
+ 		/* Set CR0 to indicate previous transactional state */
+-		vcpu->arch.cr = (vcpu->arch.cr & 0x0fffffff) |
+-			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 28);
++		vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & 0x0fffffff) |
++			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 29);
+ 		vcpu->arch.shregs.msr = msr | MSR_TS_S;
+ 		return RESUME_GUEST;
+ 	}
+diff --git a/arch/powerpc/kvm/book3s_hv_tm_builtin.c b/arch/powerpc/kvm/book3s_hv_tm_builtin.c
+index b2c7c6fca4f9..3cf5863bc06e 100644
+--- a/arch/powerpc/kvm/book3s_hv_tm_builtin.c
++++ b/arch/powerpc/kvm/book3s_hv_tm_builtin.c
+@@ -89,7 +89,8 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu)
+ 		if (instr & (1 << 21))
+ 			vcpu->arch.shregs.msr = (msr & ~MSR_TS_MASK) | MSR_TS_T;
+ 		/* Set CR0 to 0b0010 */
+-		vcpu->arch.cr = (vcpu->arch.cr & 0x0fffffff) | 0x20000000;
++		vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & 0x0fffffff) |
++			0x20000000;
+ 		return 1;
+ 	}
+ 
+@@ -105,5 +106,5 @@ void kvmhv_emulate_tm_rollback(struct kvm_vcpu *vcpu)
+ 	vcpu->arch.shregs.msr &= ~MSR_TS_MASK;	/* go to N state */
+ 	vcpu->arch.regs.nip = vcpu->arch.tfhar;
+ 	copy_from_checkpoint(vcpu);
+-	vcpu->arch.cr = (vcpu->arch.cr & 0x0fffffff) | 0xa0000000;
++	vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & 0x0fffffff) | 0xa0000000;
+ }
+diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
+index 614ebb4261f7..de9702219dee 100644
+--- a/arch/powerpc/kvm/book3s_pr.c
++++ b/arch/powerpc/kvm/book3s_pr.c
+@@ -167,7 +167,7 @@ void kvmppc_copy_to_svcpu(struct kvm_vcpu *vcpu)
+ 	svcpu->gpr[11] = vcpu->arch.regs.gpr[11];
+ 	svcpu->gpr[12] = vcpu->arch.regs.gpr[12];
+ 	svcpu->gpr[13] = vcpu->arch.regs.gpr[13];
+-	svcpu->cr  = vcpu->arch.cr;
++	svcpu->cr  = vcpu->arch.regs.ccr;
+ 	svcpu->xer = vcpu->arch.regs.xer;
+ 	svcpu->ctr = vcpu->arch.regs.ctr;
+ 	svcpu->lr  = vcpu->arch.regs.link;
+@@ -249,7 +249,7 @@ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu)
+ 	vcpu->arch.regs.gpr[11] = svcpu->gpr[11];
+ 	vcpu->arch.regs.gpr[12] = svcpu->gpr[12];
+ 	vcpu->arch.regs.gpr[13] = svcpu->gpr[13];
+-	vcpu->arch.cr  = svcpu->cr;
++	vcpu->arch.regs.ccr  = svcpu->cr;
+ 	vcpu->arch.regs.xer = svcpu->xer;
+ 	vcpu->arch.regs.ctr = svcpu->ctr;
+ 	vcpu->arch.regs.link  = svcpu->lr;
+diff --git a/arch/powerpc/kvm/bookehv_interrupts.S b/arch/powerpc/kvm/bookehv_interrupts.S
+index 612b7f6a887f..4e5081e58409 100644
+--- a/arch/powerpc/kvm/bookehv_interrupts.S
++++ b/arch/powerpc/kvm/bookehv_interrupts.S
+@@ -186,7 +186,7 @@ END_BTB_FLUSH_SECTION
+ 	 */
+ 	PPC_LL	r4, PACACURRENT(r13)
+ 	PPC_LL	r4, (THREAD + THREAD_KVM_VCPU)(r4)
+-	stw	r10, VCPU_CR(r4)
++	PPC_STL	r10, VCPU_CR(r4)
+ 	PPC_STL r11, VCPU_GPR(R4)(r4)
+ 	PPC_STL	r5, VCPU_GPR(R5)(r4)
+ 	PPC_STL	r6, VCPU_GPR(R6)(r4)
+@@ -296,7 +296,7 @@ _GLOBAL(kvmppc_handler_\intno\()_\srr1)
+ 	PPC_STL	r4, VCPU_GPR(R4)(r11)
+ 	PPC_LL	r4, THREAD_NORMSAVE(0)(r10)
+ 	PPC_STL	r5, VCPU_GPR(R5)(r11)
+-	stw	r13, VCPU_CR(r11)
++	PPC_STL	r13, VCPU_CR(r11)
+ 	mfspr	r5, \srr0
+ 	PPC_STL	r3, VCPU_GPR(R10)(r11)
+ 	PPC_LL	r3, THREAD_NORMSAVE(2)(r10)
+@@ -323,7 +323,7 @@ _GLOBAL(kvmppc_handler_\intno\()_\srr1)
+ 	PPC_STL	r4, VCPU_GPR(R4)(r11)
+ 	PPC_LL	r4, GPR9(r8)
+ 	PPC_STL	r5, VCPU_GPR(R5)(r11)
+-	stw	r9, VCPU_CR(r11)
++	PPC_STL	r9, VCPU_CR(r11)
+ 	mfspr	r5, \srr0
+ 	PPC_STL	r3, VCPU_GPR(R8)(r11)
+ 	PPC_LL	r3, GPR10(r8)
+@@ -647,7 +647,7 @@ lightweight_exit:
+ 	PPC_LL	r3, VCPU_LR(r4)
+ 	PPC_LL	r5, VCPU_XER(r4)
+ 	PPC_LL	r6, VCPU_CTR(r4)
+-	lwz	r7, VCPU_CR(r4)
++	PPC_LL	r7, VCPU_CR(r4)
+ 	PPC_LL	r8, VCPU_PC(r4)
+ 	PPC_LD(r9, VCPU_SHARED_MSR, r11)
+ 	PPC_LL	r0, VCPU_GPR(R0)(r4)
+diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
+index 75dce1ef3bc8..f91b1309a0a8 100644
+--- a/arch/powerpc/kvm/emulate_loadstore.c
++++ b/arch/powerpc/kvm/emulate_loadstore.c
+@@ -117,7 +117,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
+ 
+ 	emulated = EMULATE_FAIL;
+ 	vcpu->arch.regs.msr = vcpu->arch.shared->msr;
+-	vcpu->arch.regs.ccr = vcpu->arch.cr;
+ 	if (analyse_instr(&op, &vcpu->arch.regs, inst) == 0) {
+ 		int type = op.type & INSTR_TYPE_MASK;
+ 		int size = GETSIZE(op.type);
+diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
+index f23a89d8e4ce..29fd8940867e 100644
+--- a/arch/powerpc/mm/hash_utils_64.c
++++ b/arch/powerpc/mm/hash_utils_64.c
+@@ -1859,11 +1859,20 @@ void hash__setup_initial_memory_limit(phys_addr_t first_memblock_base,
+ 	 *
+ 	 * For guests on platforms before POWER9, we clamp the it limit to 1G
+ 	 * to avoid some funky things such as RTAS bugs etc...
++	 *
++	 * On POWER9 we limit to 1TB in case the host erroneously told us that
++	 * the RMA was >1TB. Effective address bits 0:23 are treated as zero
++	 * (meaning the access is aliased to zero i.e. addr = addr % 1TB)
++	 * for virtual real mode addressing and so it doesn't make sense to
++	 * have an area larger than 1TB as it can't be addressed.
+ 	 */
+ 	if (!early_cpu_has_feature(CPU_FTR_HVMODE)) {
+ 		ppc64_rma_size = first_memblock_size;
+ 		if (!early_cpu_has_feature(CPU_FTR_ARCH_300))
+ 			ppc64_rma_size = min_t(u64, ppc64_rma_size, 0x40000000);
++		else
++			ppc64_rma_size = min_t(u64, ppc64_rma_size,
++					       1UL << SID_SHIFT_1T);
+ 
+ 		/* Finally limit subsequent allocations */
+ 		memblock_set_current_limit(ppc64_rma_size);
+diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
+index b271b283c785..25a8dd9cd71d 100644
+--- a/arch/powerpc/mm/pkeys.c
++++ b/arch/powerpc/mm/pkeys.c
+@@ -414,3 +414,13 @@ bool arch_vma_access_permitted(struct vm_area_struct *vma, bool write,
+ 
+ 	return pkey_access_permitted(vma_pkey(vma), write, execute);
+ }
++
++void arch_dup_pkeys(struct mm_struct *oldmm, struct mm_struct *mm)
++{
++	if (static_branch_likely(&pkey_disabled))
++		return;
++
++	/* Duplicate the oldmm pkey state in mm: */
++	mm_pkey_allocation_map(mm) = mm_pkey_allocation_map(oldmm);
++	mm->context.execute_only_pkey = oldmm->context.execute_only_pkey;
++}
+diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
+index c433f6d3dd64..a840b7d074f7 100644
+--- a/arch/riscv/kernel/ftrace.c
++++ b/arch/riscv/kernel/ftrace.c
+@@ -132,7 +132,6 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
+ {
+ 	unsigned long return_hooker = (unsigned long)&return_to_handler;
+ 	unsigned long old;
+-	int err;
+ 
+ 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
+ 		return;
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 3245b95ad2d9..0d3f5cf3ff3e 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -117,7 +117,7 @@ static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
+ }
+ 
+ #define KVM_PERMILLE_MMU_PAGES 20
+-#define KVM_MIN_ALLOC_MMU_PAGES 64
++#define KVM_MIN_ALLOC_MMU_PAGES 64UL
+ #define KVM_MMU_HASH_SHIFT 12
+ #define KVM_NUM_MMU_PAGES (1 << KVM_MMU_HASH_SHIFT)
+ #define KVM_MIN_FREE_MMU_PAGES 5
+@@ -784,6 +784,9 @@ struct kvm_hv {
+ 	u64 hv_reenlightenment_control;
+ 	u64 hv_tsc_emulation_control;
+ 	u64 hv_tsc_emulation_status;
++
++	/* How many vCPUs have VP index != vCPU index */
++	atomic_t num_mismatched_vp_indexes;
+ };
+ 
+ enum kvm_irqchip_mode {
+@@ -793,9 +796,9 @@ enum kvm_irqchip_mode {
+ };
+ 
+ struct kvm_arch {
+-	unsigned int n_used_mmu_pages;
+-	unsigned int n_requested_mmu_pages;
+-	unsigned int n_max_mmu_pages;
++	unsigned long n_used_mmu_pages;
++	unsigned long n_requested_mmu_pages;
++	unsigned long n_max_mmu_pages;
+ 	unsigned int indirect_shadow_pages;
+ 	unsigned long mmu_valid_gen;
+ 	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
+@@ -1198,8 +1201,8 @@ void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
+ 				   gfn_t gfn_offset, unsigned long mask);
+ void kvm_mmu_zap_all(struct kvm *kvm);
+ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
+-unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
+-void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages);
++unsigned long kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
++void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long kvm_nr_mmu_pages);
+ 
+ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3);
+ bool pdptrs_changed(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index 50d309662d78..5790671857e5 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -53,7 +53,7 @@ int ftrace_arch_code_modify_post_process(void)
+ union ftrace_code_union {
+ 	char code[MCOUNT_INSN_SIZE];
+ 	struct {
+-		unsigned char e8;
++		unsigned char op;
+ 		int offset;
+ 	} __attribute__((packed));
+ };
+@@ -63,20 +63,23 @@ static int ftrace_calc_offset(long ip, long addr)
+ 	return (int)(addr - ip);
+ }
+ 
+-static unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
++static unsigned char *
++ftrace_text_replace(unsigned char op, unsigned long ip, unsigned long addr)
+ {
+ 	static union ftrace_code_union calc;
+ 
+-	calc.e8		= 0xe8;
++	calc.op		= op;
+ 	calc.offset	= ftrace_calc_offset(ip + MCOUNT_INSN_SIZE, addr);
+ 
+-	/*
+-	 * No locking needed, this must be called via kstop_machine
+-	 * which in essence is like running on a uniprocessor machine.
+-	 */
+ 	return calc.code;
+ }
+ 
++static unsigned char *
++ftrace_call_replace(unsigned long ip, unsigned long addr)
++{
++	return ftrace_text_replace(0xe8, ip, addr);
++}
++
+ static inline int
+ within(unsigned long addr, unsigned long start, unsigned long end)
+ {
+@@ -686,22 +689,6 @@ int __init ftrace_dyn_arch_init(void)
+ 	return 0;
+ }
+ 
+-#if defined(CONFIG_X86_64) || defined(CONFIG_FUNCTION_GRAPH_TRACER)
+-static unsigned char *ftrace_jmp_replace(unsigned long ip, unsigned long addr)
+-{
+-	static union ftrace_code_union calc;
+-
+-	/* Jmp not a call (ignore the .e8) */
+-	calc.e8		= 0xe9;
+-	calc.offset	= ftrace_calc_offset(ip + MCOUNT_INSN_SIZE, addr);
+-
+-	/*
+-	 * ftrace external locks synchronize the access to the static variable.
+-	 */
+-	return calc.code;
+-}
+-#endif
+-
+ /* Currently only x86_64 supports dynamic trampolines */
+ #ifdef CONFIG_X86_64
+ 
+@@ -923,8 +910,8 @@ static void *addr_from_call(void *ptr)
+ 		return NULL;
+ 
+ 	/* Make sure this is a call */
+-	if (WARN_ON_ONCE(calc.e8 != 0xe8)) {
+-		pr_warn("Expected e8, got %x\n", calc.e8);
++	if (WARN_ON_ONCE(calc.op != 0xe8)) {
++		pr_warn("Expected e8, got %x\n", calc.op);
+ 		return NULL;
+ 	}
+ 
+@@ -995,6 +982,11 @@ void arch_ftrace_trampoline_free(struct ftrace_ops *ops)
+ #ifdef CONFIG_DYNAMIC_FTRACE
+ extern void ftrace_graph_call(void);
+ 
++static unsigned char *ftrace_jmp_replace(unsigned long ip, unsigned long addr)
++{
++	return ftrace_text_replace(0xe9, ip, addr);
++}
++
+ static int ftrace_mod_jmp(unsigned long ip, void *func)
+ {
+ 	unsigned char *new;
+diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
+index 013fe3d21dbb..2ec202cb9dfd 100644
+--- a/arch/x86/kernel/kvmclock.c
++++ b/arch/x86/kernel/kvmclock.c
+@@ -117,12 +117,8 @@ static u64 kvm_sched_clock_read(void)
+ 
+ static inline void kvm_sched_clock_init(bool stable)
+ {
+-	if (!stable) {
+-		pv_time_ops.sched_clock = kvm_clock_read;
++	if (!stable)
+ 		clear_sched_clock_stable();
+-		return;
+-	}
+-
+ 	kvm_sched_clock_offset = kvm_clock_read();
+ 	pv_time_ops.sched_clock = kvm_sched_clock_read;
+ 
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index b4866badb235..90ecc108bc8a 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -1251,7 +1251,7 @@ void __init setup_arch(char **cmdline_p)
+ 	x86_init.hyper.guest_late_init();
+ 
+ 	e820__reserve_resources();
+-	e820__register_nosave_regions(max_low_pfn);
++	e820__register_nosave_regions(max_pfn);
+ 
+ 	x86_init.resources.reserve_resources();
+ 
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 4a688ef9e448..429728b35bca 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -2331,12 +2331,16 @@ static int em_lseg(struct x86_emulate_ctxt *ctxt)
+ 
+ static int emulator_has_longmode(struct x86_emulate_ctxt *ctxt)
+ {
++#ifdef CONFIG_X86_64
+ 	u32 eax, ebx, ecx, edx;
+ 
+ 	eax = 0x80000001;
+ 	ecx = 0;
+ 	ctxt->ops->get_cpuid(ctxt, &eax, &ebx, &ecx, &edx, false);
+ 	return edx & bit(X86_FEATURE_LM);
++#else
++	return false;
++#endif
+ }
+ 
+ #define GET_SMSTATE(type, smbase, offset)				  \
+@@ -2381,6 +2385,7 @@ static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
+ 	return X86EMUL_CONTINUE;
+ }
+ 
++#ifdef CONFIG_X86_64
+ static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
+ {
+ 	struct desc_struct desc;
+@@ -2399,6 +2404,7 @@ static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
+ 	ctxt->ops->set_segment(ctxt, selector, &desc, base3, n);
+ 	return X86EMUL_CONTINUE;
+ }
++#endif
+ 
+ static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt,
+ 				    u64 cr0, u64 cr3, u64 cr4)
+@@ -2499,6 +2505,7 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt, u64 smbase)
+ 	return rsm_enter_protected_mode(ctxt, cr0, cr3, cr4);
+ }
+ 
++#ifdef CONFIG_X86_64
+ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
+ {
+ 	struct desc_struct desc;
+@@ -2560,6 +2567,7 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
+ 
+ 	return X86EMUL_CONTINUE;
+ }
++#endif
+ 
+ static int em_rsm(struct x86_emulate_ctxt *ctxt)
+ {
+@@ -2616,9 +2624,11 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
+ 	if (ctxt->ops->pre_leave_smm(ctxt, smbase))
+ 		return X86EMUL_UNHANDLEABLE;
+ 
++#ifdef CONFIG_X86_64
+ 	if (emulator_has_longmode(ctxt))
+ 		ret = rsm_load_state_64(ctxt, smbase + 0x8000);
+ 	else
++#endif
+ 		ret = rsm_load_state_32(ctxt, smbase + 0x8000);
+ 
+ 	if (ret != X86EMUL_CONTINUE) {
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index 229d99605165..5842c5f587fe 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -132,8 +132,10 @@ static struct kvm_vcpu *get_vcpu_by_vpidx(struct kvm *kvm, u32 vpidx)
+ 	struct kvm_vcpu *vcpu = NULL;
+ 	int i;
+ 
+-	if (vpidx < KVM_MAX_VCPUS)
+-		vcpu = kvm_get_vcpu(kvm, vpidx);
++	if (vpidx >= KVM_MAX_VCPUS)
++		return NULL;
++
++	vcpu = kvm_get_vcpu(kvm, vpidx);
+ 	if (vcpu && vcpu_to_hv_vcpu(vcpu)->vp_index == vpidx)
+ 		return vcpu;
+ 	kvm_for_each_vcpu(i, vcpu, kvm)
+@@ -689,6 +691,24 @@ void kvm_hv_vcpu_uninit(struct kvm_vcpu *vcpu)
+ 		stimer_cleanup(&hv_vcpu->stimer[i]);
+ }
+ 
++bool kvm_hv_assist_page_enabled(struct kvm_vcpu *vcpu)
++{
++	if (!(vcpu->arch.hyperv.hv_vapic & HV_X64_MSR_VP_ASSIST_PAGE_ENABLE))
++		return false;
++	return vcpu->arch.pv_eoi.msr_val & KVM_MSR_ENABLED;
++}
++EXPORT_SYMBOL_GPL(kvm_hv_assist_page_enabled);
++
++bool kvm_hv_get_assist_page(struct kvm_vcpu *vcpu,
++			    struct hv_vp_assist_page *assist_page)
++{
++	if (!kvm_hv_assist_page_enabled(vcpu))
++		return false;
++	return !kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.pv_eoi.data,
++				      assist_page, sizeof(*assist_page));
++}
++EXPORT_SYMBOL_GPL(kvm_hv_get_assist_page);
++
+ static void stimer_prepare_msg(struct kvm_vcpu_hv_stimer *stimer)
+ {
+ 	struct hv_message *msg = &stimer->msg;
+@@ -1040,21 +1060,41 @@ static u64 current_task_runtime_100ns(void)
+ 
+ static int kvm_hv_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
+ {
+-	struct kvm_vcpu_hv *hv = &vcpu->arch.hyperv;
++	struct kvm_vcpu_hv *hv_vcpu = &vcpu->arch.hyperv;
+ 
+ 	switch (msr) {
+-	case HV_X64_MSR_VP_INDEX:
+-		if (!host)
++	case HV_X64_MSR_VP_INDEX: {
++		struct kvm_hv *hv = &vcpu->kvm->arch.hyperv;
++		int vcpu_idx = kvm_vcpu_get_idx(vcpu);
++		u32 new_vp_index = (u32)data;
++
++		if (!host || new_vp_index >= KVM_MAX_VCPUS)
+ 			return 1;
+-		hv->vp_index = (u32)data;
++
++		if (new_vp_index == hv_vcpu->vp_index)
++			return 0;
++
++		/*
++		 * The VP index is initialized to vcpu_index by
++		 * kvm_hv_vcpu_postcreate so they initially match.  Now the
++		 * VP index is changing, adjust num_mismatched_vp_indexes if
++		 * it now matches or no longer matches vcpu_idx.
++		 */
++		if (hv_vcpu->vp_index == vcpu_idx)
++			atomic_inc(&hv->num_mismatched_vp_indexes);
++		else if (new_vp_index == vcpu_idx)
++			atomic_dec(&hv->num_mismatched_vp_indexes);
++
++		hv_vcpu->vp_index = new_vp_index;
+ 		break;
++	}
+ 	case HV_X64_MSR_VP_ASSIST_PAGE: {
+ 		u64 gfn;
+ 		unsigned long addr;
+ 
+ 		if (!(data & HV_X64_MSR_VP_ASSIST_PAGE_ENABLE)) {
+-			hv->hv_vapic = data;
+-			if (kvm_lapic_enable_pv_eoi(vcpu, 0))
++			hv_vcpu->hv_vapic = data;
++			if (kvm_lapic_enable_pv_eoi(vcpu, 0, 0))
+ 				return 1;
+ 			break;
+ 		}
+@@ -1064,10 +1104,11 @@ static int kvm_hv_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
+ 			return 1;
+ 		if (__clear_user((void __user *)addr, PAGE_SIZE))
+ 			return 1;
+-		hv->hv_vapic = data;
++		hv_vcpu->hv_vapic = data;
+ 		kvm_vcpu_mark_page_dirty(vcpu, gfn);
+ 		if (kvm_lapic_enable_pv_eoi(vcpu,
+-					    gfn_to_gpa(gfn) | KVM_MSR_ENABLED))
++					    gfn_to_gpa(gfn) | KVM_MSR_ENABLED,
++					    sizeof(struct hv_vp_assist_page)))
+ 			return 1;
+ 		break;
+ 	}
+@@ -1080,7 +1121,7 @@ static int kvm_hv_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
+ 	case HV_X64_MSR_VP_RUNTIME:
+ 		if (!host)
+ 			return 1;
+-		hv->runtime_offset = data - current_task_runtime_100ns();
++		hv_vcpu->runtime_offset = data - current_task_runtime_100ns();
+ 		break;
+ 	case HV_X64_MSR_SCONTROL:
+ 	case HV_X64_MSR_SVERSION:
+@@ -1172,11 +1213,11 @@ static int kvm_hv_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata,
+ 			  bool host)
+ {
+ 	u64 data = 0;
+-	struct kvm_vcpu_hv *hv = &vcpu->arch.hyperv;
++	struct kvm_vcpu_hv *hv_vcpu = &vcpu->arch.hyperv;
+ 
+ 	switch (msr) {
+ 	case HV_X64_MSR_VP_INDEX:
+-		data = hv->vp_index;
++		data = hv_vcpu->vp_index;
+ 		break;
+ 	case HV_X64_MSR_EOI:
+ 		return kvm_hv_vapic_msr_read(vcpu, APIC_EOI, pdata);
+@@ -1185,10 +1226,10 @@ static int kvm_hv_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata,
+ 	case HV_X64_MSR_TPR:
+ 		return kvm_hv_vapic_msr_read(vcpu, APIC_TASKPRI, pdata);
+ 	case HV_X64_MSR_VP_ASSIST_PAGE:
+-		data = hv->hv_vapic;
++		data = hv_vcpu->hv_vapic;
+ 		break;
+ 	case HV_X64_MSR_VP_RUNTIME:
+-		data = current_task_runtime_100ns() + hv->runtime_offset;
++		data = current_task_runtime_100ns() + hv_vcpu->runtime_offset;
+ 		break;
+ 	case HV_X64_MSR_SCONTROL:
+ 	case HV_X64_MSR_SVERSION:
+diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
+index d6aa969e20f1..0e66c12ed2c3 100644
+--- a/arch/x86/kvm/hyperv.h
++++ b/arch/x86/kvm/hyperv.h
+@@ -62,6 +62,10 @@ void kvm_hv_vcpu_init(struct kvm_vcpu *vcpu);
+ void kvm_hv_vcpu_postcreate(struct kvm_vcpu *vcpu);
+ void kvm_hv_vcpu_uninit(struct kvm_vcpu *vcpu);
+ 
++bool kvm_hv_assist_page_enabled(struct kvm_vcpu *vcpu);
++bool kvm_hv_get_assist_page(struct kvm_vcpu *vcpu,
++			    struct hv_vp_assist_page *assist_page);
++
+ static inline struct kvm_vcpu_hv_stimer *vcpu_to_stimer(struct kvm_vcpu *vcpu,
+ 							int timer_index)
+ {
+diff --git a/arch/x86/kvm/irq.c b/arch/x86/kvm/irq.c
+index faa264822cee..007bc654f928 100644
+--- a/arch/x86/kvm/irq.c
++++ b/arch/x86/kvm/irq.c
+@@ -172,3 +172,10 @@ void __kvm_migrate_timers(struct kvm_vcpu *vcpu)
+ 	__kvm_migrate_apic_timer(vcpu);
+ 	__kvm_migrate_pit_timer(vcpu);
+ }
++
++bool kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args)
++{
++	bool resample = args->flags & KVM_IRQFD_FLAG_RESAMPLE;
++
++	return resample ? irqchip_kernel(kvm) : irqchip_in_kernel(kvm);
++}
+diff --git a/arch/x86/kvm/irq.h b/arch/x86/kvm/irq.h
+index d5005cc26521..fd210cdd4983 100644
+--- a/arch/x86/kvm/irq.h
++++ b/arch/x86/kvm/irq.h
+@@ -114,6 +114,7 @@ static inline int irqchip_in_kernel(struct kvm *kvm)
+ 	return mode != KVM_IRQCHIP_NONE;
+ }
+ 
++bool kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args);
+ void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu);
+ void kvm_inject_apic_timer_irqs(struct kvm_vcpu *vcpu);
+ void kvm_apic_nmi_wd_deliver(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 5f5bc5976804..262e49301cae 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -2633,17 +2633,25 @@ int kvm_hv_vapic_msr_read(struct kvm_vcpu *vcpu, u32 reg, u64 *data)
+ 	return 0;
+ }
+ 
+-int kvm_lapic_enable_pv_eoi(struct kvm_vcpu *vcpu, u64 data)
++int kvm_lapic_enable_pv_eoi(struct kvm_vcpu *vcpu, u64 data, unsigned long len)
+ {
+ 	u64 addr = data & ~KVM_MSR_ENABLED;
++	struct gfn_to_hva_cache *ghc = &vcpu->arch.pv_eoi.data;
++	unsigned long new_len;
++
+ 	if (!IS_ALIGNED(addr, 4))
+ 		return 1;
+ 
+ 	vcpu->arch.pv_eoi.msr_val = data;
+ 	if (!pv_eoi_enabled(vcpu))
+ 		return 0;
+-	return kvm_gfn_to_hva_cache_init(vcpu->kvm, &vcpu->arch.pv_eoi.data,
+-					 addr, sizeof(u8));
++
++	if (addr == ghc->gpa && len <= ghc->len)
++		new_len = ghc->len;
++	else
++		new_len = len;
++
++	return kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, addr, new_len);
+ }
+ 
+ void kvm_apic_accept_events(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
+index ed0ed39abd36..ff6ef9c3d760 100644
+--- a/arch/x86/kvm/lapic.h
++++ b/arch/x86/kvm/lapic.h
+@@ -120,7 +120,7 @@ static inline bool kvm_hv_vapic_assist_page_enabled(struct kvm_vcpu *vcpu)
+ 	return vcpu->arch.hyperv.hv_vapic & HV_X64_MSR_VP_ASSIST_PAGE_ENABLE;
+ }
+ 
+-int kvm_lapic_enable_pv_eoi(struct kvm_vcpu *vcpu, u64 data);
++int kvm_lapic_enable_pv_eoi(struct kvm_vcpu *vcpu, u64 data, unsigned long len);
+ void kvm_lapic_init(void);
+ void kvm_lapic_exit(void);
+ 
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index cdc0c460950f..88940261fb53 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -1954,7 +1954,7 @@ static int is_empty_shadow_page(u64 *spt)
+  * aggregate version in order to make the slab shrinker
+  * faster
+  */
+-static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, int nr)
++static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, unsigned long nr)
+ {
+ 	kvm->arch.n_used_mmu_pages += nr;
+ 	percpu_counter_add(&kvm_total_used_mmu_pages, nr);
+@@ -2704,7 +2704,7 @@ static bool prepare_zap_oldest_mmu_page(struct kvm *kvm,
+  * Changing the number of mmu pages allocated to the vm
+  * Note: if goal_nr_mmu_pages is too small, you will get deadlock
+  */
+-void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int goal_nr_mmu_pages)
++void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long goal_nr_mmu_pages)
+ {
+ 	LIST_HEAD(invalid_list);
+ 
+@@ -5926,10 +5926,10 @@ out:
+ /*
+  * Calculate mmu pages needed for kvm.
+  */
+-unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm)
++unsigned long kvm_mmu_calculate_mmu_pages(struct kvm *kvm)
+ {
+-	unsigned int nr_mmu_pages;
+-	unsigned int  nr_pages = 0;
++	unsigned long nr_mmu_pages;
++	unsigned long nr_pages = 0;
+ 	struct kvm_memslots *slots;
+ 	struct kvm_memory_slot *memslot;
+ 	int i;
+@@ -5942,8 +5942,7 @@ unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm)
+ 	}
+ 
+ 	nr_mmu_pages = nr_pages * KVM_PERMILLE_MMU_PAGES / 1000;
+-	nr_mmu_pages = max(nr_mmu_pages,
+-			   (unsigned int) KVM_MIN_ALLOC_MMU_PAGES);
++	nr_mmu_pages = max(nr_mmu_pages, KVM_MIN_ALLOC_MMU_PAGES);
+ 
+ 	return nr_mmu_pages;
+ }
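
The mmu.c changes above widen the page-count arithmetic from unsigned int to unsigned long; the sizing rule reserves KVM_PERMILLE_MMU_PAGES shadow pages per 1000 guest pages (20 in this tree) with KVM_MIN_ALLOC_MMU_PAGES (64) as the floor, and with a 32-bit type the nr_pages * 20 product would wrap once a guest exceeds roughly 819 GiB. A standalone sketch of the arithmetic under those assumed constants:

#include <stdio.h>

#define KVM_PERMILLE_MMU_PAGES	20UL	/* value in this tree */
#define KVM_MIN_ALLOC_MMU_PAGES	64UL

int main(void)
{
	unsigned long nr_pages = 1048576;	/* a 4 GiB guest in 4 KiB pages */
	unsigned long nr_mmu_pages = nr_pages * KVM_PERMILLE_MMU_PAGES / 1000;

	if (nr_mmu_pages < KVM_MIN_ALLOC_MMU_PAGES)
		nr_mmu_pages = KVM_MIN_ALLOC_MMU_PAGES;

	printf("%lu\n", nr_mmu_pages);		/* prints 20971 */
	return 0;
}
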
+diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
+index 1fab69c0b2f3..65892288bf51 100644
+--- a/arch/x86/kvm/mmu.h
++++ b/arch/x86/kvm/mmu.h
+@@ -69,7 +69,7 @@ bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu);
+ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
+ 				u64 fault_address, char *insn, int insn_len);
+ 
+-static inline unsigned int kvm_mmu_available_pages(struct kvm *kvm)
++static inline unsigned long kvm_mmu_available_pages(struct kvm *kvm)
+ {
+ 	if (kvm->arch.n_max_mmu_pages > kvm->arch.n_used_mmu_pages)
+ 		return kvm->arch.n_max_mmu_pages -
+diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c
+index e9ea2d45ae66..9f72cc427158 100644
+--- a/arch/x86/kvm/mtrr.c
++++ b/arch/x86/kvm/mtrr.c
+@@ -48,11 +48,6 @@ static bool msr_mtrr_valid(unsigned msr)
+ 	return false;
+ }
+ 
+-static bool valid_pat_type(unsigned t)
+-{
+-	return t < 8 && (1 << t) & 0xf3; /* 0, 1, 4, 5, 6, 7 */
+-}
+-
+ static bool valid_mtrr_type(unsigned t)
+ {
+ 	return t < 8 && (1 << t) & 0x73; /* 0, 1, 4, 5, 6 */
+@@ -67,10 +62,7 @@ bool kvm_mtrr_valid(struct kvm_vcpu *vcpu, u32 msr, u64 data)
+ 		return false;
+ 
+ 	if (msr == MSR_IA32_CR_PAT) {
+-		for (i = 0; i < 8; i++)
+-			if (!valid_pat_type((data >> (i * 8)) & 0xff))
+-				return false;
+-		return true;
++		return kvm_pat_valid(data);
+ 	} else if (msr == MSR_MTRRdefType) {
+ 		if (data & ~0xcff)
+ 			return false;
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index 0f33f00aa4df..ac2cc2ed7a85 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -5622,6 +5622,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+ 	svm->vmcb->save.cr2 = vcpu->arch.cr2;
+ 
+ 	clgi();
++	kvm_load_guest_xcr0(vcpu);
+ 
+ 	/*
+ 	 * If this vCPU has touched SPEC_CTRL, restore the guest's value if
+@@ -5769,6 +5770,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+ 	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
+ 		kvm_before_interrupt(&svm->vcpu);
+ 
++	kvm_put_guest_xcr0(vcpu);
+ 	stgi();
+ 
+ 	/* Any pending NMI will happen here */
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 2e310ea62d60..2938b4bcc968 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -4135,7 +4135,10 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		return vmx_get_vmx_msr(&vmx->nested.msrs, msr_info->index,
+ 				       &msr_info->data);
+ 	case MSR_IA32_XSS:
+-		if (!vmx_xsaves_supported())
++		if (!vmx_xsaves_supported() ||
++		    (!msr_info->host_initiated &&
++		     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
++		       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
+ 			return 1;
+ 		msr_info->data = vcpu->arch.ia32_xss;
+ 		break;
+@@ -4265,9 +4268,10 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 					      MSR_TYPE_W);
+ 		break;
+ 	case MSR_IA32_CR_PAT:
++		if (!kvm_pat_valid(data))
++			return 1;
++
+ 		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
+-			if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
+-				return 1;
+ 			vmcs_write64(GUEST_IA32_PAT, data);
+ 			vcpu->arch.pat = data;
+ 			break;
+@@ -4301,7 +4305,10 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 			return 1;
+ 		return vmx_set_vmx_msr(vcpu, msr_index, data);
+ 	case MSR_IA32_XSS:
+-		if (!vmx_xsaves_supported())
++		if (!vmx_xsaves_supported() ||
++		    (!msr_info->host_initiated &&
++		     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
++		       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
+ 			return 1;
+ 		/*
+ 		 * The only supported bit as of Skylake is bit 8, but
+@@ -10437,28 +10444,21 @@ static void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu)
+ 
+ static void vmx_complete_atomic_exit(struct vcpu_vmx *vmx)
+ {
+-	u32 exit_intr_info = 0;
+-	u16 basic_exit_reason = (u16)vmx->exit_reason;
+-
+-	if (!(basic_exit_reason == EXIT_REASON_MCE_DURING_VMENTRY
+-	      || basic_exit_reason == EXIT_REASON_EXCEPTION_NMI))
++	if (vmx->exit_reason != EXIT_REASON_EXCEPTION_NMI)
+ 		return;
+ 
+-	if (!(vmx->exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY))
+-		exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
+-	vmx->exit_intr_info = exit_intr_info;
++	vmx->exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
+ 
+ 	/* if exit due to PF check for async PF */
+-	if (is_page_fault(exit_intr_info))
++	if (is_page_fault(vmx->exit_intr_info))
+ 		vmx->vcpu.arch.apf.host_apf_reason = kvm_read_and_reset_pf_reason();
+ 
+ 	/* Handle machine checks before interrupts are enabled */
+-	if (basic_exit_reason == EXIT_REASON_MCE_DURING_VMENTRY ||
+-	    is_machine_check(exit_intr_info))
++	if (is_machine_check(vmx->exit_intr_info))
+ 		kvm_machine_check();
+ 
+ 	/* We need to handle NMIs before interrupts are enabled */
+-	if (is_nmi(exit_intr_info)) {
++	if (is_nmi(vmx->exit_intr_info)) {
+ 		kvm_before_interrupt(&vmx->vcpu);
+ 		asm("int $2");
+ 		kvm_after_interrupt(&vmx->vcpu);
+@@ -10756,6 +10756,8 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ 	if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)
+ 		vmx_set_interrupt_shadow(vcpu, 0);
+ 
++	kvm_load_guest_xcr0(vcpu);
++
+ 	if (static_cpu_has(X86_FEATURE_PKU) &&
+ 	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
+ 	    vcpu->arch.pkru != vmx->host_pkru)
+@@ -10808,7 +10810,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ 		"mov %%" _ASM_AX", %%cr2 \n\t"
+ 		"3: \n\t"
+ 		/* Check if vmlaunch or vmresume is needed */
+-		"cmpl $0, %c[launched](%0) \n\t"
++		"cmpb $0, %c[launched](%0) \n\t"
+ 		/* Load guest registers.  Don't clobber flags. */
+ 		"mov %c[rax](%0), %%" _ASM_AX " \n\t"
+ 		"mov %c[rbx](%0), %%" _ASM_BX " \n\t"
+@@ -10971,10 +10973,15 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ 			__write_pkru(vmx->host_pkru);
+ 	}
+ 
++	kvm_put_guest_xcr0(vcpu);
++
+ 	vmx->nested.nested_run_pending = 0;
+ 	vmx->idt_vectoring_info = 0;
+ 
+ 	vmx->exit_reason = vmx->fail ? 0xdead : vmcs_read32(VM_EXIT_REASON);
++	if ((u16)vmx->exit_reason == EXIT_REASON_MCE_DURING_VMENTRY)
++		kvm_machine_check();
++
+ 	if (vmx->fail || (vmx->exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY))
+ 		return;
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index c27ce6059090..cbc39751f36b 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -713,7 +713,7 @@ void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw)
+ }
+ EXPORT_SYMBOL_GPL(kvm_lmsw);
+ 
+-static void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
++void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
+ {
+ 	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
+ 			!vcpu->guest_xcr0_loaded) {
+@@ -723,8 +723,9 @@ static void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
+ 		vcpu->guest_xcr0_loaded = 1;
+ 	}
+ }
++EXPORT_SYMBOL_GPL(kvm_load_guest_xcr0);
+ 
+-static void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
++void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
+ {
+ 	if (vcpu->guest_xcr0_loaded) {
+ 		if (vcpu->arch.xcr0 != host_xcr0)
+@@ -732,6 +733,7 @@ static void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
+ 		vcpu->guest_xcr0_loaded = 0;
+ 	}
+ }
++EXPORT_SYMBOL_GPL(kvm_put_guest_xcr0);
+ 
+ static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
+ {
+@@ -2494,7 +2496,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 
+ 		break;
+ 	case MSR_KVM_PV_EOI_EN:
+-		if (kvm_lapic_enable_pv_eoi(vcpu, data))
++		if (kvm_lapic_enable_pv_eoi(vcpu, data, sizeof(u8)))
+ 			return 1;
+ 		break;
+ 
+@@ -4116,7 +4118,7 @@ static int kvm_vm_ioctl_set_identity_map_addr(struct kvm *kvm,
+ }
+ 
+ static int kvm_vm_ioctl_set_nr_mmu_pages(struct kvm *kvm,
+-					  u32 kvm_nr_mmu_pages)
++					 unsigned long kvm_nr_mmu_pages)
+ {
+ 	if (kvm_nr_mmu_pages < KVM_MIN_ALLOC_MMU_PAGES)
+ 		return -EINVAL;
+@@ -4130,7 +4132,7 @@ static int kvm_vm_ioctl_set_nr_mmu_pages(struct kvm *kvm,
+ 	return 0;
+ }
+ 
+-static int kvm_vm_ioctl_get_nr_mmu_pages(struct kvm *kvm)
++static unsigned long kvm_vm_ioctl_get_nr_mmu_pages(struct kvm *kvm)
+ {
+ 	return kvm->arch.n_max_mmu_pages;
+ }
+@@ -7225,9 +7227,9 @@ static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, char *buf)
+ 	put_smstate(u32, buf, 0x7ef8, vcpu->arch.smbase);
+ }
+ 
++#ifdef CONFIG_X86_64
+ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf)
+ {
+-#ifdef CONFIG_X86_64
+ 	struct desc_ptr dt;
+ 	struct kvm_segment seg;
+ 	unsigned long val;
+@@ -7277,10 +7279,8 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf)
+ 
+ 	for (i = 0; i < 6; i++)
+ 		enter_smm_save_seg_64(vcpu, buf, i);
+-#else
+-	WARN_ON_ONCE(1);
+-#endif
+ }
++#endif
+ 
+ static void enter_smm(struct kvm_vcpu *vcpu)
+ {
+@@ -7291,9 +7291,11 @@ static void enter_smm(struct kvm_vcpu *vcpu)
+ 
+ 	trace_kvm_enter_smm(vcpu->vcpu_id, vcpu->arch.smbase, true);
+ 	memset(buf, 0, 512);
++#ifdef CONFIG_X86_64
+ 	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
+ 		enter_smm_save_state_64(vcpu, buf);
+ 	else
++#endif
+ 		enter_smm_save_state_32(vcpu, buf);
+ 
+ 	/*
+@@ -7351,8 +7353,10 @@ static void enter_smm(struct kvm_vcpu *vcpu)
+ 	kvm_set_segment(vcpu, &ds, VCPU_SREG_GS);
+ 	kvm_set_segment(vcpu, &ds, VCPU_SREG_SS);
+ 
++#ifdef CONFIG_X86_64
+ 	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
+ 		kvm_x86_ops->set_efer(vcpu, 0);
++#endif
+ 
+ 	kvm_update_cpuid(vcpu);
+ 	kvm_mmu_reset_context(vcpu);
+@@ -7649,8 +7653,6 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		goto cancel_injection;
+ 	}
+ 
+-	kvm_load_guest_xcr0(vcpu);
+-
+ 	if (req_immediate_exit) {
+ 		kvm_make_request(KVM_REQ_EVENT, vcpu);
+ 		kvm_x86_ops->request_immediate_exit(vcpu);
+@@ -7703,8 +7705,6 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 	vcpu->mode = OUTSIDE_GUEST_MODE;
+ 	smp_wmb();
+ 
+-	kvm_put_guest_xcr0(vcpu);
+-
+ 	kvm_before_interrupt(vcpu);
+ 	kvm_x86_ops->handle_external_intr(vcpu);
+ 	kvm_after_interrupt(vcpu);
+diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
+index 1826ed9dd1c8..3a91ea760f07 100644
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -345,4 +345,16 @@ static inline void kvm_after_interrupt(struct kvm_vcpu *vcpu)
+ 	__this_cpu_write(current_vcpu, NULL);
+ }
+ 
++
++static inline bool kvm_pat_valid(u64 data)
++{
++	if (data & 0xF8F8F8F8F8F8F8F8ull)
++		return false;
++	/* 0, 1, 4, 5, 6, 7 are valid values.  */
++	return (data | ((data & 0x0202020202020202ull) << 1)) == data;
++}
++
++void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu);
++void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu);
++
+ #endif
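
kvm_pat_valid() above condenses the old per-byte loop into two masks: the first rejects any PAT entry of 8 or larger, the second rejects the reserved types 2 and 3 by demanding that bit 2 already be set wherever bit 1 is. A userspace sketch of the same check, using the x86 power-on default PAT value as one test input:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool pat_valid(uint64_t data)
{
	/* Any bit above bit 2 in any byte would mean a type >= 8. */
	if (data & 0xF8F8F8F8F8F8F8F8ull)
		return false;
	/* Reserved types 2 and 3 have bit 1 set but bit 2 clear; ORing
	 * bit 1 shifted into bit 2 changes the value exactly for those
	 * bytes, so equality holds only when no byte is 2 or 3. */
	return (data | ((data & 0x0202020202020202ull) << 1)) == data;
}

int main(void)
{
	printf("%d\n", pat_valid(0x0007040600070406ull)); /* 1: reset value */
	printf("%d\n", pat_valid(0x0000000000000003ull)); /* 0: reserved 3 */
	printf("%d\n", pat_valid(0x0000000000000008ull)); /* 0: type >= 8 */
	return 0;
}
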
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 4a3e1f417880..af635f878f96 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -816,7 +816,8 @@ void blk_cleanup_queue(struct request_queue *q)
+ 	blk_exit_queue(q);
+ 
+ 	if (q->mq_ops)
+-		blk_mq_free_queue(q);
++		blk_mq_exit_queue(q);
++
+ 	percpu_ref_exit(&q->q_usage_counter);
+ 
+ 	spin_lock_irq(lock);
+diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
+index f4f7c73fb828..0529e94a20f7 100644
+--- a/block/blk-iolatency.c
++++ b/block/blk-iolatency.c
+@@ -560,15 +560,12 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio)
+ 	u64 now = ktime_to_ns(ktime_get());
+ 	bool issue_as_root = bio_issue_as_root_blkg(bio);
+ 	bool enabled = false;
++	int inflight = 0;
+ 
+ 	blkg = bio->bi_blkg;
+ 	if (!blkg)
+ 		return;
+ 
+-	/* We didn't actually submit this bio, don't account it. */
+-	if (bio->bi_status == BLK_STS_AGAIN)
+-		return;
+-
+ 	iolat = blkg_to_lat(bio->bi_blkg);
+ 	if (!iolat)
+ 		return;
+@@ -585,41 +582,24 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio)
+ 		}
+ 		rqw = &iolat->rq_wait;
+ 
+-		atomic_dec(&rqw->inflight);
+-		if (iolat->min_lat_nsec == 0)
+-			goto next;
+-		iolatency_record_time(iolat, &bio->bi_issue, now,
+-				      issue_as_root);
+-		window_start = atomic64_read(&iolat->window_start);
+-		if (now > window_start &&
+-		    (now - window_start) >= iolat->cur_win_nsec) {
+-			if (atomic64_cmpxchg(&iolat->window_start,
+-					window_start, now) == window_start)
+-				iolatency_check_latencies(iolat, now);
++		inflight = atomic_dec_return(&rqw->inflight);
++		WARN_ON_ONCE(inflight < 0);
++		/*
++		 * If bi_status is BLK_STS_AGAIN, the bio wasn't actually
++		 * submitted, so do not account for it.
++		 */
++		if (iolat->min_lat_nsec && bio->bi_status != BLK_STS_AGAIN) {
++			iolatency_record_time(iolat, &bio->bi_issue, now,
++					      issue_as_root);
++			window_start = atomic64_read(&iolat->window_start);
++			if (now > window_start &&
++			    (now - window_start) >= iolat->cur_win_nsec) {
++				if (atomic64_cmpxchg(&iolat->window_start,
++					     window_start, now) == window_start)
++					iolatency_check_latencies(iolat, now);
++			}
+ 		}
+-next:
+-		wake_up(&rqw->wait);
+-		blkg = blkg->parent;
+-	}
+-}
+-
+-static void blkcg_iolatency_cleanup(struct rq_qos *rqos, struct bio *bio)
+-{
+-	struct blkcg_gq *blkg;
+-
+-	blkg = bio->bi_blkg;
+-	while (blkg && blkg->parent) {
+-		struct rq_wait *rqw;
+-		struct iolatency_grp *iolat;
+-
+-		iolat = blkg_to_lat(blkg);
+-		if (!iolat)
+-			goto next;
+-
+-		rqw = &iolat->rq_wait;
+-		atomic_dec(&rqw->inflight);
+ 		wake_up(&rqw->wait);
+-next:
+ 		blkg = blkg->parent;
+ 	}
+ }
+@@ -635,7 +615,6 @@ static void blkcg_iolatency_exit(struct rq_qos *rqos)
+ 
+ static struct rq_qos_ops blkcg_iolatency_ops = {
+ 	.throttle = blkcg_iolatency_throttle,
+-	.cleanup = blkcg_iolatency_cleanup,
+ 	.done_bio = blkcg_iolatency_done_bio,
+ 	.exit = blkcg_iolatency_exit,
+ };
+diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
+index aafb44224c89..0b7297a43ccd 100644
+--- a/block/blk-mq-sysfs.c
++++ b/block/blk-mq-sysfs.c
+@@ -10,6 +10,7 @@
+ #include <linux/smp.h>
+ 
+ #include <linux/blk-mq.h>
++#include "blk.h"
+ #include "blk-mq.h"
+ #include "blk-mq-tag.h"
+ 
+@@ -21,6 +22,11 @@ static void blk_mq_hw_sysfs_release(struct kobject *kobj)
+ {
+ 	struct blk_mq_hw_ctx *hctx = container_of(kobj, struct blk_mq_hw_ctx,
+ 						  kobj);
++
++	if (hctx->flags & BLK_MQ_F_BLOCKING)
++		cleanup_srcu_struct(hctx->srcu);
++	blk_free_flush_queue(hctx->fq);
++	sbitmap_free(&hctx->ctx_map);
+ 	free_cpumask_var(hctx->cpumask);
+ 	kfree(hctx->ctxs);
+ 	kfree(hctx);
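
The blk-mq-sysfs.c hunk above (paired with the blk_mq_exit_hctx() hunk that follows) moves freeing of hctx->fq, hctx->ctx_map and the SRCU state into the kobject release callback, so the resources live until the last reference is dropped rather than until queue teardown, closing a use-after-free window against concurrent sysfs readers. A minimal refcounting sketch of that lifetime rule, with hypothetical names:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical miniature hctx: refs stands in for the kobject
 * reference count, ctxs for the resources (fq, ctx_map, srcu) that the
 * patch moves into the release callback. */
struct hctx {
	int refs;
	char *ctxs;
};

/* Mirror of blk_mq_hw_sysfs_release(): free only on the last put, so a
 * sysfs reader still holding a reference can never see freed memory. */
static void hctx_put(struct hctx *h)
{
	if (--h->refs == 0) {
		free(h->ctxs);
		free(h);
	}
}

int main(void)
{
	struct hctx *h = calloc(1, sizeof(*h));

	h->ctxs = malloc(16);
	h->refs = 2;		/* the queue plus one sysfs reader */

	hctx_put(h);		/* blk_mq_exit_hctx(): nothing freed yet */
	hctx_put(h);		/* last reference gone: really freed */
	return 0;
}
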
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 70d839b9c3b0..455fda99255a 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2157,12 +2157,7 @@ static void blk_mq_exit_hctx(struct request_queue *q,
+ 	if (set->ops->exit_hctx)
+ 		set->ops->exit_hctx(hctx, hctx_idx);
+ 
+-	if (hctx->flags & BLK_MQ_F_BLOCKING)
+-		cleanup_srcu_struct(hctx->srcu);
+-
+ 	blk_mq_remove_cpuhp(hctx);
+-	blk_free_flush_queue(hctx->fq);
+-	sbitmap_free(&hctx->ctx_map);
+ }
+ 
+ static void blk_mq_exit_hw_queues(struct request_queue *q,
+@@ -2662,7 +2657,8 @@ err_exit:
+ }
+ EXPORT_SYMBOL(blk_mq_init_allocated_queue);
+ 
+-void blk_mq_free_queue(struct request_queue *q)
++/* tags can _not_ be used after returning from blk_mq_exit_queue */
++void blk_mq_exit_queue(struct request_queue *q)
+ {
+ 	struct blk_mq_tag_set	*set = q->tag_set;
+ 
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index 9497b47e2526..5ad9251627f8 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -31,7 +31,7 @@ struct blk_mq_ctx {
+ } ____cacheline_aligned_in_smp;
+ 
+ void blk_mq_freeze_queue(struct request_queue *q);
+-void blk_mq_free_queue(struct request_queue *q);
++void blk_mq_exit_queue(struct request_queue *q);
+ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr);
+ void blk_mq_wake_waiters(struct request_queue *q);
+ bool blk_mq_dispatch_rq_list(struct request_queue *, struct list_head *, bool);
+diff --git a/drivers/char/tpm/st33zp24/i2c.c b/drivers/char/tpm/st33zp24/i2c.c
+index be5d1abd3e8e..8390c5b54c3b 100644
+--- a/drivers/char/tpm/st33zp24/i2c.c
++++ b/drivers/char/tpm/st33zp24/i2c.c
+@@ -33,7 +33,7 @@
+ 
+ struct st33zp24_i2c_phy {
+ 	struct i2c_client *client;
+-	u8 buf[TPM_BUFSIZE + 1];
++	u8 buf[ST33ZP24_BUFSIZE + 1];
+ 	int io_lpcpd;
+ };
+ 
+diff --git a/drivers/char/tpm/st33zp24/spi.c b/drivers/char/tpm/st33zp24/spi.c
+index d7909ab287a8..ff019a1e3c68 100644
+--- a/drivers/char/tpm/st33zp24/spi.c
++++ b/drivers/char/tpm/st33zp24/spi.c
+@@ -63,7 +63,7 @@
+  * some latency byte before the answer is available (max 15).
+  * We have 2048 + 1024 + 15.
+  */
+-#define ST33ZP24_SPI_BUFFER_SIZE (TPM_BUFSIZE + (TPM_BUFSIZE / 2) +\
++#define ST33ZP24_SPI_BUFFER_SIZE (ST33ZP24_BUFSIZE + (ST33ZP24_BUFSIZE / 2) +\
+ 				  MAX_SPI_LATENCY)
+ 
+ 
+diff --git a/drivers/char/tpm/st33zp24/st33zp24.h b/drivers/char/tpm/st33zp24/st33zp24.h
+index 6f4a4198af6a..20da0a84988d 100644
+--- a/drivers/char/tpm/st33zp24/st33zp24.h
++++ b/drivers/char/tpm/st33zp24/st33zp24.h
+@@ -18,8 +18,8 @@
+ #ifndef __LOCAL_ST33ZP24_H__
+ #define __LOCAL_ST33ZP24_H__
+ 
+-#define TPM_WRITE_DIRECTION             0x80
+-#define TPM_BUFSIZE                     2048
++#define TPM_WRITE_DIRECTION	0x80
++#define ST33ZP24_BUFSIZE	2048
+ 
+ struct st33zp24_dev {
+ 	struct tpm_chip *chip;
+diff --git a/drivers/char/tpm/tpm_i2c_infineon.c b/drivers/char/tpm/tpm_i2c_infineon.c
+index 977fd42daa1b..3b4e9672ff6c 100644
+--- a/drivers/char/tpm/tpm_i2c_infineon.c
++++ b/drivers/char/tpm/tpm_i2c_infineon.c
+@@ -26,8 +26,7 @@
+ #include <linux/wait.h>
+ #include "tpm.h"
+ 
+-/* max. buffer size supported by our TPM */
+-#define TPM_BUFSIZE 1260
++#define TPM_I2C_INFINEON_BUFSIZE 1260
+ 
+ /* max. number of iterations after I2C NAK */
+ #define MAX_COUNT 3
+@@ -63,11 +62,13 @@ enum i2c_chip_type {
+ 	UNKNOWN,
+ };
+ 
+-/* Structure to store I2C TPM specific stuff */
+ struct tpm_inf_dev {
+ 	struct i2c_client *client;
+ 	int locality;
+-	u8 buf[TPM_BUFSIZE + sizeof(u8)]; /* max. buffer size + addr */
++	/* In addition to the data itself, the buffer must fit the 7-bit I2C
++	 * address and the direction bit.
++	 */
++	u8 buf[TPM_I2C_INFINEON_BUFSIZE + 1];
+ 	struct tpm_chip *chip;
+ 	enum i2c_chip_type chip_type;
+ 	unsigned int adapterlimit;
+@@ -219,7 +220,7 @@ static int iic_tpm_write_generic(u8 addr, u8 *buffer, size_t len,
+ 		.buf = tpm_dev.buf
+ 	};
+ 
+-	if (len > TPM_BUFSIZE)
++	if (len > TPM_I2C_INFINEON_BUFSIZE)
+ 		return -EINVAL;
+ 
+ 	if (!tpm_dev.client->adapter->algo->master_xfer)
+@@ -527,8 +528,8 @@ static int tpm_tis_i2c_send(struct tpm_chip *chip, u8 *buf, size_t len)
+ 	u8 retries = 0;
+ 	u8 sts = TPM_STS_GO;
+ 
+-	if (len > TPM_BUFSIZE)
+-		return -E2BIG;	/* command is too long for our tpm, sorry */
++	if (len > TPM_I2C_INFINEON_BUFSIZE)
++		return -E2BIG;
+ 
+ 	if (request_locality(chip, 0) < 0)
+ 		return -EBUSY;
+diff --git a/drivers/char/tpm/tpm_i2c_nuvoton.c b/drivers/char/tpm/tpm_i2c_nuvoton.c
+index b8defdfdf2dc..280308009784 100644
+--- a/drivers/char/tpm/tpm_i2c_nuvoton.c
++++ b/drivers/char/tpm/tpm_i2c_nuvoton.c
+@@ -35,14 +35,12 @@
+ #include "tpm.h"
+ 
+ /* I2C interface offsets */
+-#define TPM_STS                0x00
+-#define TPM_BURST_COUNT        0x01
+-#define TPM_DATA_FIFO_W        0x20
+-#define TPM_DATA_FIFO_R        0x40
+-#define TPM_VID_DID_RID        0x60
+-/* TPM command header size */
+-#define TPM_HEADER_SIZE        10
+-#define TPM_RETRY      5
++#define TPM_STS			0x00
++#define TPM_BURST_COUNT		0x01
++#define TPM_DATA_FIFO_W		0x20
++#define TPM_DATA_FIFO_R		0x40
++#define TPM_VID_DID_RID		0x60
++#define TPM_I2C_RETRIES		5
+ /*
+  * I2C bus device maximum buffer size w/o counting I2C address or command
+  * i.e. max size required for I2C write is 34 = addr, command, 32 bytes data
+@@ -292,7 +290,7 @@ static int i2c_nuvoton_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ 		dev_err(dev, "%s() count < header size\n", __func__);
+ 		return -EIO;
+ 	}
+-	for (retries = 0; retries < TPM_RETRY; retries++) {
++	for (retries = 0; retries < TPM_I2C_RETRIES; retries++) {
+ 		if (retries > 0) {
+ 			/* if this is not the first trial, set responseRetry */
+ 			i2c_nuvoton_write_status(client,
+diff --git a/drivers/clk/clk-s2mps11.c b/drivers/clk/clk-s2mps11.c
+index 0934d3724495..4080d4e78e8e 100644
+--- a/drivers/clk/clk-s2mps11.c
++++ b/drivers/clk/clk-s2mps11.c
+@@ -255,7 +255,7 @@ MODULE_DEVICE_TABLE(platform, s2mps11_clk_id);
+  * This requires of_device_id table.  In the same time this will not change the
+  * actual *device* matching so do not add .of_match_table.
+  */
+-static const struct of_device_id s2mps11_dt_match[] = {
++static const struct of_device_id s2mps11_dt_match[] __used = {
+ 	{
+ 		.compatible = "samsung,s2mps11-clk",
+ 		.data = (void *)S2MPS11X,
+diff --git a/drivers/clk/tegra/clk-audio-sync.c b/drivers/clk/tegra/clk-audio-sync.c
+index 92d04ce2dee6..53cdc0ec40f3 100644
+--- a/drivers/clk/tegra/clk-audio-sync.c
++++ b/drivers/clk/tegra/clk-audio-sync.c
+@@ -55,7 +55,7 @@ const struct clk_ops tegra_clk_sync_source_ops = {
+ };
+ 
+ struct clk *tegra_clk_register_sync_source(const char *name,
+-		unsigned long rate, unsigned long max_rate)
++					   unsigned long max_rate)
+ {
+ 	struct tegra_clk_sync_source *sync;
+ 	struct clk_init_data init;
+@@ -67,7 +67,6 @@ struct clk *tegra_clk_register_sync_source(const char *name,
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 
+-	sync->rate = rate;
+ 	sync->max_rate = max_rate;
+ 
+ 	init.ops = &tegra_clk_sync_source_ops;
+diff --git a/drivers/clk/tegra/clk-tegra-audio.c b/drivers/clk/tegra/clk-tegra-audio.c
+index b37cae7af26d..02dd6487d855 100644
+--- a/drivers/clk/tegra/clk-tegra-audio.c
++++ b/drivers/clk/tegra/clk-tegra-audio.c
+@@ -49,8 +49,6 @@ struct tegra_sync_source_initdata {
+ #define SYNC(_name) \
+ 	{\
+ 		.name		= #_name,\
+-		.rate		= 24000000,\
+-		.max_rate	= 24000000,\
+ 		.clk_id		= tegra_clk_ ## _name,\
+ 	}
+ 
+@@ -176,7 +174,7 @@ static void __init tegra_audio_sync_clk_init(void __iomem *clk_base,
+ void __init tegra_audio_clk_init(void __iomem *clk_base,
+ 			void __iomem *pmc_base, struct tegra_clk *tegra_clks,
+ 			struct tegra_audio_clk_info *audio_info,
+-			unsigned int num_plls)
++			unsigned int num_plls, unsigned long sync_max_rate)
+ {
+ 	struct clk *clk;
+ 	struct clk **dt_clk;
+@@ -221,8 +219,7 @@ void __init tegra_audio_clk_init(void __iomem *clk_base,
+ 		if (!dt_clk)
+ 			continue;
+ 
+-		clk = tegra_clk_register_sync_source(data->name,
+-					data->rate, data->max_rate);
++		clk = tegra_clk_register_sync_source(data->name, sync_max_rate);
+ 		*dt_clk = clk;
+ 	}
+ 
+diff --git a/drivers/clk/tegra/clk-tegra114.c b/drivers/clk/tegra/clk-tegra114.c
+index 1824f014202b..625d11091330 100644
+--- a/drivers/clk/tegra/clk-tegra114.c
++++ b/drivers/clk/tegra/clk-tegra114.c
+@@ -1190,6 +1190,13 @@ static struct tegra_clk_init_table init_table[] __initdata = {
+ 	{ TEGRA114_CLK_XUSB_FALCON_SRC, TEGRA114_CLK_PLL_P, 204000000, 0 },
+ 	{ TEGRA114_CLK_XUSB_HOST_SRC, TEGRA114_CLK_PLL_P, 102000000, 0 },
+ 	{ TEGRA114_CLK_VDE, TEGRA114_CLK_CLK_MAX, 600000000, 0 },
++	{ TEGRA114_CLK_SPDIF_IN_SYNC, TEGRA114_CLK_CLK_MAX, 24000000, 0 },
++	{ TEGRA114_CLK_I2S0_SYNC, TEGRA114_CLK_CLK_MAX, 24000000, 0 },
++	{ TEGRA114_CLK_I2S1_SYNC, TEGRA114_CLK_CLK_MAX, 24000000, 0 },
++	{ TEGRA114_CLK_I2S2_SYNC, TEGRA114_CLK_CLK_MAX, 24000000, 0 },
++	{ TEGRA114_CLK_I2S3_SYNC, TEGRA114_CLK_CLK_MAX, 24000000, 0 },
++	{ TEGRA114_CLK_I2S4_SYNC, TEGRA114_CLK_CLK_MAX, 24000000, 0 },
++	{ TEGRA114_CLK_VIMCLK_SYNC, TEGRA114_CLK_CLK_MAX, 24000000, 0 },
+ 	/* must be the last entry */
+ 	{ TEGRA114_CLK_CLK_MAX, TEGRA114_CLK_CLK_MAX, 0, 0 },
+ };
+@@ -1362,7 +1369,7 @@ static void __init tegra114_clock_init(struct device_node *np)
+ 	tegra114_periph_clk_init(clk_base, pmc_base);
+ 	tegra_audio_clk_init(clk_base, pmc_base, tegra114_clks,
+ 			     tegra114_audio_plls,
+-			     ARRAY_SIZE(tegra114_audio_plls));
++			     ARRAY_SIZE(tegra114_audio_plls), 24000000);
+ 	tegra_pmc_clk_init(pmc_base, tegra114_clks);
+ 	tegra_super_clk_gen4_init(clk_base, pmc_base, tegra114_clks,
+ 					&pll_x_params);
+diff --git a/drivers/clk/tegra/clk-tegra124.c b/drivers/clk/tegra/clk-tegra124.c
+index b6cf28ca2ed2..df0018f7bf7e 100644
+--- a/drivers/clk/tegra/clk-tegra124.c
++++ b/drivers/clk/tegra/clk-tegra124.c
+@@ -1291,6 +1291,13 @@ static struct tegra_clk_init_table common_init_table[] __initdata = {
+ 	{ TEGRA124_CLK_CSITE, TEGRA124_CLK_CLK_MAX, 0, 1 },
+ 	{ TEGRA124_CLK_TSENSOR, TEGRA124_CLK_CLK_M, 400000, 0 },
+ 	{ TEGRA124_CLK_VIC03, TEGRA124_CLK_PLL_C3, 0, 0 },
++	{ TEGRA124_CLK_SPDIF_IN_SYNC, TEGRA124_CLK_CLK_MAX, 24576000, 0 },
++	{ TEGRA124_CLK_I2S0_SYNC, TEGRA124_CLK_CLK_MAX, 24576000, 0 },
++	{ TEGRA124_CLK_I2S1_SYNC, TEGRA124_CLK_CLK_MAX, 24576000, 0 },
++	{ TEGRA124_CLK_I2S2_SYNC, TEGRA124_CLK_CLK_MAX, 24576000, 0 },
++	{ TEGRA124_CLK_I2S3_SYNC, TEGRA124_CLK_CLK_MAX, 24576000, 0 },
++	{ TEGRA124_CLK_I2S4_SYNC, TEGRA124_CLK_CLK_MAX, 24576000, 0 },
++	{ TEGRA124_CLK_VIMCLK_SYNC, TEGRA124_CLK_CLK_MAX, 24576000, 0 },
+ 	/* must be the last entry */
+ 	{ TEGRA124_CLK_CLK_MAX, TEGRA124_CLK_CLK_MAX, 0, 0 },
+ };
+@@ -1455,7 +1462,7 @@ static void __init tegra124_132_clock_init_pre(struct device_node *np)
+ 	tegra124_periph_clk_init(clk_base, pmc_base);
+ 	tegra_audio_clk_init(clk_base, pmc_base, tegra124_clks,
+ 			     tegra124_audio_plls,
+-			     ARRAY_SIZE(tegra124_audio_plls));
++			     ARRAY_SIZE(tegra124_audio_plls), 24576000);
+ 	tegra_pmc_clk_init(pmc_base, tegra124_clks);
+ 
+ 	/* For Tegra124 & Tegra132, PLLD is the only source for DSIA & DSIB */
+diff --git a/drivers/clk/tegra/clk-tegra210.c b/drivers/clk/tegra/clk-tegra210.c
+index 4e1bc23c9865..080bfa24863e 100644
+--- a/drivers/clk/tegra/clk-tegra210.c
++++ b/drivers/clk/tegra/clk-tegra210.c
+@@ -3369,6 +3369,15 @@ static struct tegra_clk_init_table init_table[] __initdata = {
+ 	{ TEGRA210_CLK_SOC_THERM, TEGRA210_CLK_PLL_P, 51000000, 0 },
+ 	{ TEGRA210_CLK_CCLK_G, TEGRA210_CLK_CLK_MAX, 0, 1 },
+ 	{ TEGRA210_CLK_PLL_U_OUT2, TEGRA210_CLK_CLK_MAX, 60000000, 1 },
++	{ TEGRA210_CLK_SPDIF_IN_SYNC, TEGRA210_CLK_CLK_MAX, 24576000, 0 },
++	{ TEGRA210_CLK_I2S0_SYNC, TEGRA210_CLK_CLK_MAX, 24576000, 0 },
++	{ TEGRA210_CLK_I2S1_SYNC, TEGRA210_CLK_CLK_MAX, 24576000, 0 },
++	{ TEGRA210_CLK_I2S2_SYNC, TEGRA210_CLK_CLK_MAX, 24576000, 0 },
++	{ TEGRA210_CLK_I2S3_SYNC, TEGRA210_CLK_CLK_MAX, 24576000, 0 },
++	{ TEGRA210_CLK_I2S4_SYNC, TEGRA210_CLK_CLK_MAX, 24576000, 0 },
++	{ TEGRA210_CLK_VIMCLK_SYNC, TEGRA210_CLK_CLK_MAX, 24576000, 0 },
++	{ TEGRA210_CLK_HDA, TEGRA210_CLK_PLL_P, 51000000, 0 },
++	{ TEGRA210_CLK_HDA2CODEC_2X, TEGRA210_CLK_PLL_P, 48000000, 0 },
+ 	/* This MUST be the last entry. */
+ 	{ TEGRA210_CLK_CLK_MAX, TEGRA210_CLK_CLK_MAX, 0, 0 },
+ };
+@@ -3562,7 +3571,7 @@ static void __init tegra210_clock_init(struct device_node *np)
+ 	tegra210_periph_clk_init(clk_base, pmc_base);
+ 	tegra_audio_clk_init(clk_base, pmc_base, tegra210_clks,
+ 			     tegra210_audio_plls,
+-			     ARRAY_SIZE(tegra210_audio_plls));
++			     ARRAY_SIZE(tegra210_audio_plls), 24576000);
+ 	tegra_pmc_clk_init(pmc_base, tegra210_clks);
+ 
+ 	/* For Tegra210, PLLD is the only source for DSIA & DSIB */
+diff --git a/drivers/clk/tegra/clk-tegra30.c b/drivers/clk/tegra/clk-tegra30.c
+index acfe661b2ae7..e0aaecd98fbf 100644
+--- a/drivers/clk/tegra/clk-tegra30.c
++++ b/drivers/clk/tegra/clk-tegra30.c
+@@ -1267,6 +1267,13 @@ static struct tegra_clk_init_table init_table[] __initdata = {
+ 	{ TEGRA30_CLK_GR3D2, TEGRA30_CLK_PLL_C, 300000000, 0 },
+ 	{ TEGRA30_CLK_PLL_U, TEGRA30_CLK_CLK_MAX, 480000000, 0 },
+ 	{ TEGRA30_CLK_VDE, TEGRA30_CLK_CLK_MAX, 600000000, 0 },
++	{ TEGRA30_CLK_SPDIF_IN_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
++	{ TEGRA30_CLK_I2S0_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
++	{ TEGRA30_CLK_I2S1_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
++	{ TEGRA30_CLK_I2S2_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
++	{ TEGRA30_CLK_I2S3_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
++	{ TEGRA30_CLK_I2S4_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
++	{ TEGRA30_CLK_VIMCLK_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
+ 	/* must be the last entry */
+ 	{ TEGRA30_CLK_CLK_MAX, TEGRA30_CLK_CLK_MAX, 0, 0 },
+ };
+@@ -1344,7 +1351,7 @@ static void __init tegra30_clock_init(struct device_node *np)
+ 	tegra30_periph_clk_init();
+ 	tegra_audio_clk_init(clk_base, pmc_base, tegra30_clks,
+ 			     tegra30_audio_plls,
+-			     ARRAY_SIZE(tegra30_audio_plls));
++			     ARRAY_SIZE(tegra30_audio_plls), 24000000);
+ 	tegra_pmc_clk_init(pmc_base, tegra30_clks);
+ 
+ 	tegra_init_dup_clks(tegra_clk_duplicates, clks, TEGRA30_CLK_CLK_MAX);
+diff --git a/drivers/clk/tegra/clk.h b/drivers/clk/tegra/clk.h
+index d2c3a010f8e9..09bccbb9640c 100644
+--- a/drivers/clk/tegra/clk.h
++++ b/drivers/clk/tegra/clk.h
+@@ -41,7 +41,7 @@ extern const struct clk_ops tegra_clk_sync_source_ops;
+ extern int *periph_clk_enb_refcnt;
+ 
+ struct clk *tegra_clk_register_sync_source(const char *name,
+-		unsigned long fixed_rate, unsigned long max_rate);
++					   unsigned long max_rate);
+ 
+ /**
+  * struct tegra_clk_frac_div - fractional divider clock
+@@ -796,7 +796,7 @@ void tegra_register_devclks(struct tegra_devclk *dev_clks, int num);
+ void tegra_audio_clk_init(void __iomem *clk_base,
+ 			void __iomem *pmc_base, struct tegra_clk *tegra_clks,
+ 			struct tegra_audio_clk_info *audio_info,
+-			unsigned int num_plls);
++			unsigned int num_plls, unsigned long sync_max_rate);
+ 
+ void tegra_periph_clk_init(void __iomem *clk_base, void __iomem *pmc_base,
+ 			struct tegra_clk *tegra_clks,
+diff --git a/drivers/crypto/ccree/cc_driver.c b/drivers/crypto/ccree/cc_driver.c
+index 1ff229c2aeab..186a2536fb8b 100644
+--- a/drivers/crypto/ccree/cc_driver.c
++++ b/drivers/crypto/ccree/cc_driver.c
+@@ -364,7 +364,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
+ 	rc = cc_ivgen_init(new_drvdata);
+ 	if (rc) {
+ 		dev_err(dev, "cc_ivgen_init failed\n");
+-		goto post_power_mgr_err;
++		goto post_buf_mgr_err;
+ 	}
+ 
+ 	/* Allocate crypto algs */
+@@ -387,6 +387,9 @@ static int init_cc_resources(struct platform_device *plat_dev)
+ 		goto post_hash_err;
+ 	}
+ 
++	/* All set, we can allow autosuspend */
++	cc_pm_go(new_drvdata);
++
+ 	/* If we got here and FIPS mode is enabled
+ 	 * it means all FIPS test passed, so let TEE
+ 	 * know we're good.
+@@ -401,8 +404,6 @@ post_cipher_err:
+ 	cc_cipher_free(new_drvdata);
+ post_ivgen_err:
+ 	cc_ivgen_fini(new_drvdata);
+-post_power_mgr_err:
+-	cc_pm_fini(new_drvdata);
+ post_buf_mgr_err:
+ 	 cc_buffer_mgr_fini(new_drvdata);
+ post_req_mgr_err:
+diff --git a/drivers/crypto/ccree/cc_pm.c b/drivers/crypto/ccree/cc_pm.c
+index 79fc0a37ba6e..638082dff183 100644
+--- a/drivers/crypto/ccree/cc_pm.c
++++ b/drivers/crypto/ccree/cc_pm.c
+@@ -103,20 +103,19 @@ int cc_pm_put_suspend(struct device *dev)
+ 
+ int cc_pm_init(struct cc_drvdata *drvdata)
+ {
+-	int rc = 0;
+ 	struct device *dev = drvdata_to_dev(drvdata);
+ 
+ 	/* must be before the enabling to avoid redundant suspending */
+ 	pm_runtime_set_autosuspend_delay(dev, CC_SUSPEND_TIMEOUT);
+ 	pm_runtime_use_autosuspend(dev);
+ 	/* activate the PM module */
+-	rc = pm_runtime_set_active(dev);
+-	if (rc)
+-		return rc;
+-	/* enable the PM module*/
+-	pm_runtime_enable(dev);
++	return pm_runtime_set_active(dev);
++}
+ 
+-	return rc;
++/* enable the PM module */
++void cc_pm_go(struct cc_drvdata *drvdata)
++{
++	pm_runtime_enable(drvdata_to_dev(drvdata));
+ }
+ 
+ void cc_pm_fini(struct cc_drvdata *drvdata)
+diff --git a/drivers/crypto/ccree/cc_pm.h b/drivers/crypto/ccree/cc_pm.h
+index 020a5403c58b..907a6db4d6c0 100644
+--- a/drivers/crypto/ccree/cc_pm.h
++++ b/drivers/crypto/ccree/cc_pm.h
+@@ -16,6 +16,7 @@
+ extern const struct dev_pm_ops ccree_pm;
+ 
+ int cc_pm_init(struct cc_drvdata *drvdata);
++void cc_pm_go(struct cc_drvdata *drvdata);
+ void cc_pm_fini(struct cc_drvdata *drvdata);
+ int cc_pm_suspend(struct device *dev);
+ int cc_pm_resume(struct device *dev);
+@@ -29,6 +30,8 @@ static inline int cc_pm_init(struct cc_drvdata *drvdata)
+ 	return 0;
+ }
+ 
++static inline void cc_pm_go(struct cc_drvdata *drvdata) {}
++
+ static inline void cc_pm_fini(struct cc_drvdata *drvdata) {}
+ 
+ static inline int cc_pm_suspend(struct device *dev)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+index 5f3f54073818..17862b9ecccd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+@@ -1070,7 +1070,7 @@ void amdgpu_vce_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 seq,
+ int amdgpu_vce_ring_test_ring(struct amdgpu_ring *ring)
+ {
+ 	struct amdgpu_device *adev = ring->adev;
+-	uint32_t rptr = amdgpu_ring_get_rptr(ring);
++	uint32_t rptr;
+ 	unsigned i;
+ 	int r, timeout = adev->usec_timeout;
+ 
+@@ -1084,6 +1084,9 @@ int amdgpu_vce_ring_test_ring(struct amdgpu_ring *ring)
+ 			  ring->idx, r);
+ 		return r;
+ 	}
++
++	rptr = amdgpu_ring_get_rptr(ring);
++
+ 	amdgpu_ring_write(ring, VCE_CMD_END);
+ 	amdgpu_ring_commit(ring);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+index 400fc74bbae2..205e683fb920 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+@@ -431,7 +431,7 @@ error:
+ int amdgpu_vcn_enc_ring_test_ring(struct amdgpu_ring *ring)
+ {
+ 	struct amdgpu_device *adev = ring->adev;
+-	uint32_t rptr = amdgpu_ring_get_rptr(ring);
++	uint32_t rptr;
+ 	unsigned i;
+ 	int r;
+ 
+@@ -441,6 +441,9 @@ int amdgpu_vcn_enc_ring_test_ring(struct amdgpu_ring *ring)
+ 			  ring->idx, r);
+ 		return r;
+ 	}
++
++	rptr = amdgpu_ring_get_rptr(ring);
++
+ 	amdgpu_ring_write(ring, VCN_ENC_CMD_END);
+ 	amdgpu_ring_commit(ring);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 46568497ef18..782411649816 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -82,7 +82,8 @@ MODULE_FIRMWARE("amdgpu/raven_rlc.bin");
+ 
+ static const struct soc15_reg_golden golden_settings_gc_9_0[] =
+ {
+-	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG2, 0xf00fffff, 0x00000420),
++	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG2, 0xf00fffff, 0x00000400),
++	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG3, 0x80000000, 0x80000000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGB_GPU_ID, 0x0000000f, 0x00000000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_BINNER_EVENT_CNTL_3, 0x00000003, 0x82400024),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_ENHANCE, 0x3fffffff, 0x00000001),
+diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+index d4070839ac80..80613a74df42 100644
+--- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+@@ -170,7 +170,7 @@ static void uvd_v6_0_enc_ring_set_wptr(struct amdgpu_ring *ring)
+ static int uvd_v6_0_enc_ring_test_ring(struct amdgpu_ring *ring)
+ {
+ 	struct amdgpu_device *adev = ring->adev;
+-	uint32_t rptr = amdgpu_ring_get_rptr(ring);
++	uint32_t rptr;
+ 	unsigned i;
+ 	int r;
+ 
+@@ -180,6 +180,9 @@ static int uvd_v6_0_enc_ring_test_ring(struct amdgpu_ring *ring)
+ 			  ring->idx, r);
+ 		return r;
+ 	}
++
++	rptr = amdgpu_ring_get_rptr(ring);
++
+ 	amdgpu_ring_write(ring, HEVC_ENC_CMD_END);
+ 	amdgpu_ring_commit(ring);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+index 057151b17b45..ce16b8329af0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+@@ -175,7 +175,7 @@ static void uvd_v7_0_enc_ring_set_wptr(struct amdgpu_ring *ring)
+ static int uvd_v7_0_enc_ring_test_ring(struct amdgpu_ring *ring)
+ {
+ 	struct amdgpu_device *adev = ring->adev;
+-	uint32_t rptr = amdgpu_ring_get_rptr(ring);
++	uint32_t rptr;
+ 	unsigned i;
+ 	int r;
+ 
+@@ -188,6 +188,9 @@ static int uvd_v7_0_enc_ring_test_ring(struct amdgpu_ring *ring)
+ 			  ring->me, ring->idx, r);
+ 		return r;
+ 	}
++
++	rptr = amdgpu_ring_get_rptr(ring);
++
+ 	amdgpu_ring_write(ring, HEVC_ENC_CMD_END);
+ 	amdgpu_ring_commit(ring);
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index 5aba50f63ac6..938d0053a820 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -310,6 +310,7 @@ static const struct kfd_deviceid supported_devices[] = {
+ 	{ 0x67CF, &polaris10_device_info },	/* Polaris10 */
+ 	{ 0x67D0, &polaris10_vf_device_info },	/* Polaris10 vf*/
+ 	{ 0x67DF, &polaris10_device_info },	/* Polaris10 */
++	{ 0x6FDF, &polaris10_device_info },	/* Polaris10 */
+ 	{ 0x67E0, &polaris11_device_info },	/* Polaris11 */
+ 	{ 0x67E1, &polaris11_device_info },	/* Polaris11 */
+ 	{ 0x67E3, &polaris11_device_info },	/* Polaris11 */
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 59445c83f023..c85bea70d965 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -377,9 +377,6 @@ dm_dp_add_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
+ 	drm_connector_attach_encoder(&aconnector->base,
+ 				     &aconnector->mst_encoder->base);
+ 
+-	/*
+-	 * TODO: understand why this one is needed
+-	 */
+ 	drm_object_attach_property(
+ 		&connector->base,
+ 		dev->mode_config.path_property,
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c
+index 2aab1b475945..cede78cdf28d 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c
+@@ -669,20 +669,20 @@ int smu_set_watermarks_for_clocks_ranges(void *wt_table,
+ 	for (i = 0; i < wm_with_clock_ranges->num_wm_dmif_sets; i++) {
+ 		table->WatermarkRow[1][i].MinClock =
+ 			cpu_to_le16((uint16_t)
+-			(wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_min_dcfclk_clk_in_khz) /
+-			1000);
++			(wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_min_dcfclk_clk_in_khz /
++			1000));
+ 		table->WatermarkRow[1][i].MaxClock =
+ 			cpu_to_le16((uint16_t)
+-			(wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_max_dcfclk_clk_in_khz) /
+-			100);
++			(wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_max_dcfclk_clk_in_khz /
++			1000));
+ 		table->WatermarkRow[1][i].MinUclk =
+ 			cpu_to_le16((uint16_t)
+-			(wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_min_mem_clk_in_khz) /
+-			1000);
++			(wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_min_mem_clk_in_khz /
++			1000));
+ 		table->WatermarkRow[1][i].MaxUclk =
+ 			cpu_to_le16((uint16_t)
+-			(wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_max_mem_clk_in_khz) /
+-			1000);
++			(wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_max_mem_clk_in_khz /
++			1000));
+ 		table->WatermarkRow[1][i].WmSetting = (uint8_t)
+ 				wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_set_id;
+ 	}
+@@ -690,20 +690,20 @@ int smu_set_watermarks_for_clocks_ranges(void *wt_table,
+ 	for (i = 0; i < wm_with_clock_ranges->num_wm_mcif_sets; i++) {
+ 		table->WatermarkRow[0][i].MinClock =
+ 			cpu_to_le16((uint16_t)
+-			(wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_min_socclk_clk_in_khz) /
+-			1000);
++			(wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_min_socclk_clk_in_khz /
++			1000));
+ 		table->WatermarkRow[0][i].MaxClock =
+ 			cpu_to_le16((uint16_t)
+-			(wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_max_socclk_clk_in_khz) /
+-			1000);
++			(wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_max_socclk_clk_in_khz /
++			1000));
+ 		table->WatermarkRow[0][i].MinUclk =
+ 			cpu_to_le16((uint16_t)
+-			(wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_min_mem_clk_in_khz) /
+-			1000);
++			(wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_min_mem_clk_in_khz /
++			1000));
+ 		table->WatermarkRow[0][i].MaxUclk =
+ 			cpu_to_le16((uint16_t)
+-			(wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_max_mem_clk_in_khz) /
+-			1000);
++			(wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_max_mem_clk_in_khz /
++			1000));
+ 		table->WatermarkRow[0][i].WmSetting = (uint8_t)
+ 				wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_set_id;
+ 	}
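
Both smu_helper.c hunks above fix the same operator-precedence bug (and one stray divisor of 100 that should have been 1000): the (uint16_t) cast bound to the kHz value alone, truncating it to 16 bits before the division rather than after. A standalone illustration of the difference:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t khz = 400000;                    /* 400 MHz in kHz */
	uint16_t broken = (uint16_t)(khz) / 1000; /* 400000 % 65536 = 6784 -> 6 */
	uint16_t fixed  = (uint16_t)(khz / 1000); /* -> 400 */

	printf("broken=%u fixed=%u\n", broken, fixed);
	return 0;
}
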
+diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
+index 281cf9cbb44c..1a4b44923aec 100644
+--- a/drivers/gpu/drm/drm_atomic.c
++++ b/drivers/gpu/drm/drm_atomic.c
+@@ -1702,6 +1702,27 @@ drm_atomic_set_crtc_for_connector(struct drm_connector_state *conn_state,
+ 	struct drm_connector *connector = conn_state->connector;
+ 	struct drm_crtc_state *crtc_state;
+ 
++	/*
++	 * For compatibility with legacy users, we want to make sure that
++	 * we allow DPMS On<->Off modesets on unregistered connectors, since
++	 * legacy modesetting users will not be expecting these to fail. We do
++	 * not however, want to allow legacy users to assign a connector
++	 * that's been unregistered from sysfs to another CRTC, since doing
++	 * this with a now non-existent connector could potentially leave us
++	 * in an invalid state.
++	 *
++	 * Since the connector can be unregistered at any point during an
++	 * atomic check or commit, this is racy. But that's OK: all we care
++	 * about is ensuring that userspace can't use this connector for new
++	 * configurations after it's been notified that the connector is no
++	 * longer present.
++	 */
++	if (!READ_ONCE(connector->registered) && crtc) {
++		DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] is not registered\n",
++				 connector->base.id, connector->name);
++		return -EINVAL;
++	}
++
+ 	if (conn_state->crtc == crtc)
+ 		return 0;
+ 
+diff --git a/drivers/gpu/drm/drm_ioc32.c b/drivers/gpu/drm/drm_ioc32.c
+index 138680b37c70..f8672238d444 100644
+--- a/drivers/gpu/drm/drm_ioc32.c
++++ b/drivers/gpu/drm/drm_ioc32.c
+@@ -185,7 +185,7 @@ static int compat_drm_getmap(struct file *file, unsigned int cmd,
+ 	m32.size = map.size;
+ 	m32.type = map.type;
+ 	m32.flags = map.flags;
+-	m32.handle = ptr_to_compat(map.handle);
++	m32.handle = ptr_to_compat((void __user *)map.handle);
+ 	m32.mtrr = map.mtrr;
+ 	if (copy_to_user(argp, &m32, sizeof(m32)))
+ 		return -EFAULT;
+@@ -216,7 +216,7 @@ static int compat_drm_addmap(struct file *file, unsigned int cmd,
+ 
+ 	m32.offset = map.offset;
+ 	m32.mtrr = map.mtrr;
+-	m32.handle = ptr_to_compat(map.handle);
++	m32.handle = ptr_to_compat((void __user *)map.handle);
+ 	if (map.handle != compat_ptr(m32.handle))
+ 		pr_err_ratelimited("compat_drm_addmap truncated handle %p for type %d offset %x\n",
+ 				   map.handle, m32.type, m32.offset);
+@@ -529,7 +529,7 @@ static int compat_drm_getsareactx(struct file *file, unsigned int cmd,
+ 	if (err)
+ 		return err;
+ 
+-	req32.handle = ptr_to_compat(req.handle);
++	req32.handle = ptr_to_compat((void __user *)req.handle);
+ 	if (copy_to_user(argp, &req32, sizeof(req32)))
+ 		return -EFAULT;
+ 
+diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
+index 28cdcf76b6f9..d1859bcc7ccb 100644
+--- a/drivers/gpu/drm/drm_vblank.c
++++ b/drivers/gpu/drm/drm_vblank.c
+@@ -105,13 +105,20 @@ static void store_vblank(struct drm_device *dev, unsigned int pipe,
+ 	write_sequnlock(&vblank->seqlock);
+ }
+ 
++static u32 drm_max_vblank_count(struct drm_device *dev, unsigned int pipe)
++{
++	struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
++
++	return vblank->max_vblank_count ?: dev->max_vblank_count;
++}
++
+ /*
+  * "No hw counter" fallback implementation of .get_vblank_counter() hook,
+  * if there is no useable hardware frame counter available.
+  */
+ static u32 drm_vblank_no_hw_counter(struct drm_device *dev, unsigned int pipe)
+ {
+-	WARN_ON_ONCE(dev->max_vblank_count != 0);
++	WARN_ON_ONCE(drm_max_vblank_count(dev, pipe) != 0);
+ 	return 0;
+ }
+ 
+@@ -198,6 +205,7 @@ static void drm_update_vblank_count(struct drm_device *dev, unsigned int pipe,
+ 	ktime_t t_vblank;
+ 	int count = DRM_TIMESTAMP_MAXRETRIES;
+ 	int framedur_ns = vblank->framedur_ns;
++	u32 max_vblank_count = drm_max_vblank_count(dev, pipe);
+ 
+ 	/*
+ 	 * Interrupts were disabled prior to this call, so deal with counter
+@@ -216,9 +224,9 @@ static void drm_update_vblank_count(struct drm_device *dev, unsigned int pipe,
+ 		rc = drm_get_last_vbltimestamp(dev, pipe, &t_vblank, in_vblank_irq);
+ 	} while (cur_vblank != __get_vblank_counter(dev, pipe) && --count > 0);
+ 
+-	if (dev->max_vblank_count != 0) {
++	if (max_vblank_count) {
+ 		/* trust the hw counter when it's around */
+-		diff = (cur_vblank - vblank->last) & dev->max_vblank_count;
++		diff = (cur_vblank - vblank->last) & max_vblank_count;
+ 	} else if (rc && framedur_ns) {
+ 		u64 diff_ns = ktime_to_ns(ktime_sub(t_vblank, vblank->time));
+ 
+@@ -1204,6 +1212,37 @@ void drm_crtc_vblank_reset(struct drm_crtc *crtc)
+ }
+ EXPORT_SYMBOL(drm_crtc_vblank_reset);
+ 
++/**
++ * drm_crtc_set_max_vblank_count - configure the hw max vblank counter value
++ * @crtc: CRTC in question
++ * @max_vblank_count: max hardware vblank counter value
++ *
++ * Update the maximum hardware vblank counter value for @crtc
++ * at runtime. Useful for hardware where the operation of the
++ * hardware vblank counter depends on the currently active
++ * display configuration.
++ *
++ * For example, if the hardware vblank counter does not work
++ * when a specific connector is active the maximum can be set
++ * to zero. And when that specific connector isn't active the
++ * maximum can again be set to the appropriate non-zero value.
++ *
++ * If used, must be called before drm_vblank_on().
++ */
++void drm_crtc_set_max_vblank_count(struct drm_crtc *crtc,
++				   u32 max_vblank_count)
++{
++	struct drm_device *dev = crtc->dev;
++	unsigned int pipe = drm_crtc_index(crtc);
++	struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
++
++	WARN_ON(dev->max_vblank_count);
++	WARN_ON(!READ_ONCE(vblank->inmodeset));
++
++	vblank->max_vblank_count = max_vblank_count;
++}
++EXPORT_SYMBOL(drm_crtc_set_max_vblank_count);
++
+ /**
+  * drm_crtc_vblank_on - enable vblank events on a CRTC
+  * @crtc: CRTC in question
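
With the per-CRTC maximum introduced above, drm_update_vblank_count() still computes the frame delta as a masked subtraction, which stays correct across a hardware counter wraparound as long as the maximum is a power of two minus one. A worked example assuming a 24-bit counter:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t max_vblank_count = 0xffffff; /* assumed 24-bit hw counter */
	uint32_t last = 0xfffffe;             /* sampled just before wrap */
	uint32_t cur  = 0x000003;             /* sampled 5 frames later   */

	uint32_t diff = (cur - last) & max_vblank_count;
	printf("diff = %u\n", diff);          /* prints 5 */
	return 0;
}
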
+diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
+index f9ce35da4123..e063e98d1e82 100644
+--- a/drivers/gpu/drm/i915/i915_debugfs.c
++++ b/drivers/gpu/drm/i915/i915_debugfs.c
+@@ -1788,6 +1788,8 @@ static int i915_emon_status(struct seq_file *m, void *unused)
+ 	if (!IS_GEN5(dev_priv))
+ 		return -ENODEV;
+ 
++	intel_runtime_pm_get(dev_priv);
++
+ 	ret = mutex_lock_interruptible(&dev->struct_mutex);
+ 	if (ret)
+ 		return ret;
+@@ -1802,6 +1804,8 @@ static int i915_emon_status(struct seq_file *m, void *unused)
+ 	seq_printf(m, "GFX power: %ld\n", gfx);
+ 	seq_printf(m, "Total power: %ld\n", chipset + gfx);
+ 
++	intel_runtime_pm_put(dev_priv);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index 03cda197fb6b..937287710042 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -1874,20 +1874,28 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
+ 	 * pages from.
+ 	 */
+ 	if (!obj->base.filp) {
+-		i915_gem_object_put(obj);
+-		return -ENXIO;
++		addr = -ENXIO;
++		goto err;
++	}
++
++	if (range_overflows(args->offset, args->size, (u64)obj->base.size)) {
++		addr = -EINVAL;
++		goto err;
+ 	}
+ 
+ 	addr = vm_mmap(obj->base.filp, 0, args->size,
+ 		       PROT_READ | PROT_WRITE, MAP_SHARED,
+ 		       args->offset);
++	if (IS_ERR_VALUE(addr))
++		goto err;
++
+ 	if (args->flags & I915_MMAP_WC) {
+ 		struct mm_struct *mm = current->mm;
+ 		struct vm_area_struct *vma;
+ 
+ 		if (down_write_killable(&mm->mmap_sem)) {
+-			i915_gem_object_put(obj);
+-			return -EINTR;
++			addr = -EINTR;
++			goto err;
+ 		}
+ 		vma = find_vma(mm, addr);
+ 		if (vma && __vma_matches(vma, obj->base.filp, addr, args->size))
+@@ -1896,17 +1904,20 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
+ 		else
+ 			addr = -ENOMEM;
+ 		up_write(&mm->mmap_sem);
++		if (IS_ERR_VALUE(addr))
++			goto err;
+ 
+ 		/* This may race, but that's ok, it only gets set */
+ 		WRITE_ONCE(obj->frontbuffer_ggtt_origin, ORIGIN_CPU);
+ 	}
+ 	i915_gem_object_put(obj);
+-	if (IS_ERR((void *)addr))
+-		return addr;
+ 
+ 	args->addr_ptr = (uint64_t) addr;
+-
+ 	return 0;
++
++err:
++	i915_gem_object_put(obj);
++	return addr;
+ }
+ 
+ static unsigned int tile_row_pages(struct drm_i915_gem_object *obj)
+@@ -5595,6 +5606,8 @@ err_uc_misc:
+ 		i915_gem_cleanup_userptr(dev_priv);
+ 
+ 	if (ret == -EIO) {
++		mutex_lock(&dev_priv->drm.struct_mutex);
++
+ 		/*
+ 		 * Allow engine initialisation to fail by marking the GPU as
+ 		 * wedged. But we only want to do this where the GPU is angry,
+@@ -5605,7 +5618,14 @@ err_uc_misc:
+ 					"Failed to initialize GPU, declaring it wedged!\n");
+ 			i915_gem_set_wedged(dev_priv);
+ 		}
+-		ret = 0;
++
++		/* Minimal basic recovery for KMS */
++		ret = i915_ggtt_enable_hw(dev_priv);
++		i915_gem_restore_gtt_mappings(dev_priv);
++		i915_gem_restore_fences(dev_priv);
++		intel_init_clock_gating(dev_priv);
++
++		mutex_unlock(&dev_priv->drm.struct_mutex);
+ 	}
+ 
+ 	i915_gem_drain_freed_objects(dev_priv);
+@@ -5615,6 +5635,7 @@ err_uc_misc:
+ void i915_gem_fini(struct drm_i915_private *dev_priv)
+ {
+ 	i915_gem_suspend_late(dev_priv);
++	intel_disable_gt_powersave(dev_priv);
+ 
+ 	/* Flush any outstanding unpin_work. */
+ 	i915_gem_drain_workqueue(dev_priv);
+@@ -5626,6 +5647,8 @@ void i915_gem_fini(struct drm_i915_private *dev_priv)
+ 	i915_gem_contexts_fini(dev_priv);
+ 	mutex_unlock(&dev_priv->drm.struct_mutex);
+ 
++	intel_cleanup_gt_powersave(dev_priv);
++
+ 	intel_uc_fini_misc(dev_priv);
+ 	i915_gem_cleanup_userptr(dev_priv);
+ 
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 16f5d2d93801..4e070afb2738 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -6531,7 +6531,7 @@ enum {
+ #define   PLANE_CTL_YUV422_UYVY			(1 << 16)
+ #define   PLANE_CTL_YUV422_YVYU			(2 << 16)
+ #define   PLANE_CTL_YUV422_VYUY			(3 << 16)
+-#define   PLANE_CTL_DECOMPRESSION_ENABLE	(1 << 15)
++#define   PLANE_CTL_RENDER_DECOMPRESSION_ENABLE	(1 << 15)
+ #define   PLANE_CTL_TRICKLE_FEED_DISABLE	(1 << 14)
+ #define   PLANE_CTL_PLANE_GAMMA_DISABLE		(1 << 13) /* Pre-GLK */
+ #define   PLANE_CTL_TILED_MASK			(0x7 << 10)
+diff --git a/drivers/gpu/drm/i915/intel_cdclk.c b/drivers/gpu/drm/i915/intel_cdclk.c
+index 29075c763428..7b4906ede148 100644
+--- a/drivers/gpu/drm/i915/intel_cdclk.c
++++ b/drivers/gpu/drm/i915/intel_cdclk.c
+@@ -2208,6 +2208,17 @@ int intel_crtc_compute_min_cdclk(const struct intel_crtc_state *crtc_state)
+ 	if (INTEL_GEN(dev_priv) >= 9)
+ 		min_cdclk = max(2 * 96000, min_cdclk);
+ 
++	/*
++	 * "For DP audio configuration, cdclk frequency shall be set to
++	 *  meet the following requirements:
++	 *  DP Link Frequency(MHz) | Cdclk frequency(MHz)
++	 *  270                    | 320 or higher
++	 *  162                    | 200 or higher"
++	 */
++	if ((IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) &&
++	    intel_crtc_has_dp_encoder(crtc_state) && crtc_state->has_audio)
++		min_cdclk = max(crtc_state->port_clock, min_cdclk);
++
+ 	/*
+ 	 * On Valleyview some DSI panels lose (v|h)sync when the clock is lower
+ 	 * than 320000KHz.
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index 3bd44d042a1d..6902fd2da19c 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -2712,6 +2712,17 @@ intel_alloc_initial_plane_obj(struct intel_crtc *crtc,
+ 	if (size_aligned * 2 > dev_priv->stolen_usable_size)
+ 		return false;
+ 
++	switch (fb->modifier) {
++	case DRM_FORMAT_MOD_LINEAR:
++	case I915_FORMAT_MOD_X_TILED:
++	case I915_FORMAT_MOD_Y_TILED:
++		break;
++	default:
++		DRM_DEBUG_DRIVER("Unsupported modifier for initial FB: 0x%llx\n",
++				 fb->modifier);
++		return false;
++	}
++
+ 	mutex_lock(&dev->struct_mutex);
+ 	obj = i915_gem_object_create_stolen_for_preallocated(dev_priv,
+ 							     base_aligned,
+@@ -2721,8 +2732,17 @@ intel_alloc_initial_plane_obj(struct intel_crtc *crtc,
+ 	if (!obj)
+ 		return false;
+ 
+-	if (plane_config->tiling == I915_TILING_X)
+-		obj->tiling_and_stride = fb->pitches[0] | I915_TILING_X;
++	switch (plane_config->tiling) {
++	case I915_TILING_NONE:
++		break;
++	case I915_TILING_X:
++	case I915_TILING_Y:
++		obj->tiling_and_stride = fb->pitches[0] | plane_config->tiling;
++		break;
++	default:
++		MISSING_CASE(plane_config->tiling);
++		return false;
++	}
+ 
+ 	mode_cmd.pixel_format = fb->format->format;
+ 	mode_cmd.width = fb->width;
+@@ -3561,11 +3581,11 @@ static u32 skl_plane_ctl_tiling(uint64_t fb_modifier)
+ 	case I915_FORMAT_MOD_Y_TILED:
+ 		return PLANE_CTL_TILED_Y;
+ 	case I915_FORMAT_MOD_Y_TILED_CCS:
+-		return PLANE_CTL_TILED_Y | PLANE_CTL_DECOMPRESSION_ENABLE;
++		return PLANE_CTL_TILED_Y | PLANE_CTL_RENDER_DECOMPRESSION_ENABLE;
+ 	case I915_FORMAT_MOD_Yf_TILED:
+ 		return PLANE_CTL_TILED_YF;
+ 	case I915_FORMAT_MOD_Yf_TILED_CCS:
+-		return PLANE_CTL_TILED_YF | PLANE_CTL_DECOMPRESSION_ENABLE;
++		return PLANE_CTL_TILED_YF | PLANE_CTL_RENDER_DECOMPRESSION_ENABLE;
+ 	default:
+ 		MISSING_CASE(fb_modifier);
+ 	}
+@@ -8812,13 +8832,14 @@ skylake_get_initial_plane_config(struct intel_crtc *crtc,
+ 		fb->modifier = I915_FORMAT_MOD_X_TILED;
+ 		break;
+ 	case PLANE_CTL_TILED_Y:
+-		if (val & PLANE_CTL_DECOMPRESSION_ENABLE)
++		plane_config->tiling = I915_TILING_Y;
++		if (val & PLANE_CTL_RENDER_DECOMPRESSION_ENABLE)
+ 			fb->modifier = I915_FORMAT_MOD_Y_TILED_CCS;
+ 		else
+ 			fb->modifier = I915_FORMAT_MOD_Y_TILED;
+ 		break;
+ 	case PLANE_CTL_TILED_YF:
+-		if (val & PLANE_CTL_DECOMPRESSION_ENABLE)
++		if (val & PLANE_CTL_RENDER_DECOMPRESSION_ENABLE)
+ 			fb->modifier = I915_FORMAT_MOD_Yf_TILED_CCS;
+ 		else
+ 			fb->modifier = I915_FORMAT_MOD_Yf_TILED;
+@@ -15951,8 +15972,6 @@ void intel_modeset_cleanup(struct drm_device *dev)
+ 	flush_work(&dev_priv->atomic_helper.free_work);
+ 	WARN_ON(!llist_empty(&dev_priv->atomic_helper.free_list));
+ 
+-	intel_disable_gt_powersave(dev_priv);
+-
+ 	/*
+ 	 * Interrupts and polling as the first thing to avoid creating havoc.
+ 	 * Too much stuff here (turning off connectors, ...) would
+@@ -15980,8 +15999,6 @@ void intel_modeset_cleanup(struct drm_device *dev)
+ 
+ 	intel_cleanup_overlay(dev_priv);
+ 
+-	intel_cleanup_gt_powersave(dev_priv);
+-
+ 	intel_teardown_gmbus(dev_priv);
+ 
+ 	destroy_workqueue(dev_priv->modeset_wq);
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index f92079e19de8..20cd4c8acecc 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -4739,6 +4739,22 @@ intel_dp_long_pulse(struct intel_connector *connector,
+ 		 */
+ 		status = connector_status_disconnected;
+ 		goto out;
++	} else {
++		/*
++		 * If display is now connected check links status,
++		 * there has been known issues of link loss triggering
++		 * long pulse.
++		 *
++		 * Some sinks (e.g. ASUS PB287Q) seem to perform some
++		 * weird HPD ping pong during modesets. So we can apparently
++		 * end up with HPD going low during a modeset, and then
++		 * going back up soon after. And once that happens we must
++		 * retrain the link to get a picture. That's in case no
++		 * userspace component reacted to intermittent HPD dip.
++		 */
++		struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
++
++		intel_dp_retrain_link(encoder, ctx);
+ 	}
+ 
+ 	/*
+diff --git a/drivers/gpu/drm/i915/intel_dp_mst.c b/drivers/gpu/drm/i915/intel_dp_mst.c
+index 1fec0c71b4d9..58ba14966d4f 100644
+--- a/drivers/gpu/drm/i915/intel_dp_mst.c
++++ b/drivers/gpu/drm/i915/intel_dp_mst.c
+@@ -408,8 +408,6 @@ static struct drm_encoder *intel_mst_atomic_best_encoder(struct drm_connector *c
+ 	struct intel_dp *intel_dp = intel_connector->mst_port;
+ 	struct intel_crtc *crtc = to_intel_crtc(state->crtc);
+ 
+-	if (!READ_ONCE(connector->registered))
+-		return NULL;
+ 	return &intel_dp->mst_encoders[crtc->pipe]->base.base;
+ }
+ 
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index f889d41a281f..5e01bfb69d7a 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -759,7 +759,8 @@ nv50_msto_enable(struct drm_encoder *encoder)
+ 
+ 	slots = drm_dp_find_vcpi_slots(&mstm->mgr, mstc->pbn);
+ 	r = drm_dp_mst_allocate_vcpi(&mstm->mgr, mstc->port, mstc->pbn, slots);
+-	WARN_ON(!r);
++	if (!r)
++		DRM_DEBUG_KMS("Failed to allocate VCPI\n");
+ 
+ 	if (!mstm->links++)
+ 		nv50_outp_acquire(mstm->outp);
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index b1d41c4921dd..5fd94e206029 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -436,6 +436,32 @@ static const struct panel_desc ampire_am800480r3tmqwa1h = {
+ 	.bus_format = MEDIA_BUS_FMT_RGB666_1X18,
+ };
+ 
++static const struct display_timing santek_st0700i5y_rbslw_f_timing = {
++	.pixelclock = { 26400000, 33300000, 46800000 },
++	.hactive = { 800, 800, 800 },
++	.hfront_porch = { 16, 210, 354 },
++	.hback_porch = { 45, 36, 6 },
++	.hsync_len = { 1, 10, 40 },
++	.vactive = { 480, 480, 480 },
++	.vfront_porch = { 7, 22, 147 },
++	.vback_porch = { 22, 13, 3 },
++	.vsync_len = { 1, 10, 20 },
++	.flags = DISPLAY_FLAGS_HSYNC_LOW | DISPLAY_FLAGS_VSYNC_LOW |
++		DISPLAY_FLAGS_DE_HIGH | DISPLAY_FLAGS_PIXDATA_POSEDGE
++};
++
++static const struct panel_desc armadeus_st0700_adapt = {
++	.timings = &santek_st0700i5y_rbslw_f_timing,
++	.num_timings = 1,
++	.bpc = 6,
++	.size = {
++		.width = 154,
++		.height = 86,
++	},
++	.bus_format = MEDIA_BUS_FMT_RGB666_1X18,
++	.bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_POSEDGE,
++};
++
+ static const struct drm_display_mode auo_b101aw03_mode = {
+ 	.clock = 51450,
+ 	.hdisplay = 1024,
+@@ -2330,6 +2356,9 @@ static const struct of_device_id platform_of_match[] = {
+ 	}, {
+ 		.compatible = "ampire,am800480r3tmqwa1h",
+ 		.data = &ampire_am800480r3tmqwa1h,
++	}, {
++		.compatible = "armadeus,st0700-adapt",
++		.data = &armadeus_st0700_adapt,
+ 	}, {
+ 		.compatible = "auo,b101aw03",
+ 		.data = &auo_b101aw03,
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
+index 59e9d05ab928..0af048d1a815 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
+@@ -353,7 +353,7 @@ static int vmw_recv_msg(struct rpc_channel *channel, void **msg,
+ 				     !!(HIGH_WORD(ecx) & MESSAGE_STATUS_HB));
+ 		if ((HIGH_WORD(ebx) & MESSAGE_STATUS_SUCCESS) == 0) {
+ 			kfree(reply);
+-
++			reply = NULL;
+ 			if ((HIGH_WORD(ebx) & MESSAGE_STATUS_CPT) != 0) {
+ 				/* A checkpoint occurred. Retry. */
+ 				continue;
+@@ -377,7 +377,7 @@ static int vmw_recv_msg(struct rpc_channel *channel, void **msg,
+ 
+ 		if ((HIGH_WORD(ecx) & MESSAGE_STATUS_SUCCESS) == 0) {
+ 			kfree(reply);
+-
++			reply = NULL;
+ 			if ((HIGH_WORD(ecx) & MESSAGE_STATUS_CPT) != 0) {
+ 				/* A checkpoint occurred. Retry. */
+ 				continue;
+@@ -389,10 +389,8 @@ static int vmw_recv_msg(struct rpc_channel *channel, void **msg,
+ 		break;
+ 	}
+ 
+-	if (retries == RETRIES) {
+-		kfree(reply);
++	if (!reply)
+ 		return -EINVAL;
+-	}
+ 
+ 	*msg_len = reply_len;
+ 	*msg     = reply;
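
The vmwgfx hunks above stop keying the failure path off the retry counter: reply is set back to NULL immediately after each kfree(), so the pointer itself records whether any attempt succeeded and a stale pointer can never be freed twice or returned dangling. A minimal userspace sketch of the same pattern; read_reply() and reply_ok() are invented stand-ins for the host round-trip, not driver functions:

#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins for the host round-trip; not driver code. */
static char *read_reply(void) { return strdup("payload"); }
static int reply_ok(int attempt) { return attempt == 2; }

static char *recv_with_retries(int max_retries)
{
	char *reply = NULL;
	int i;

	for (i = 0; i < max_retries; i++) {
		reply = read_reply();
		if (!reply_ok(i)) {
			free(reply);
			reply = NULL;	/* the pointer doubles as the failure flag */
			continue;
		}
		break;	/* success: keep the buffer */
	}

	return reply;	/* NULL if and only if every attempt failed */
}

int main(void)
{
	char *r = recv_with_retries(5);
	int failed = (r == NULL);

	free(r);
	return failed;
}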
+diff --git a/drivers/hv/hv_kvp.c b/drivers/hv/hv_kvp.c
+index 5eed1e7da15c..d6106e1a0d4a 100644
+--- a/drivers/hv/hv_kvp.c
++++ b/drivers/hv/hv_kvp.c
+@@ -353,7 +353,9 @@ static void process_ib_ipinfo(void *in_msg, void *out_msg, int op)
+ 
+ 		out->body.kvp_ip_val.dhcp_enabled = in->kvp_ip_val.dhcp_enabled;
+ 
+-	default:
++		/* fallthrough */
++
++	case KVP_OP_GET_IP_INFO:
+ 		utf16s_to_utf8s((wchar_t *)in->kvp_ip_val.adapter_id,
+ 				MAX_ADAPTER_ID_SIZE,
+ 				UTF16_LITTLE_ENDIAN,
+@@ -406,6 +408,10 @@ kvp_send_key(struct work_struct *dummy)
+ 		process_ib_ipinfo(in_msg, message, KVP_OP_SET_IP_INFO);
+ 		break;
+ 	case KVP_OP_GET_IP_INFO:
++		/*
++		 * We only need to pass the operation type, adapter_id and
++		 * addr_family on to the userland kvp daemon.
++		 */
+ 		process_ib_ipinfo(in_msg, message, KVP_OP_GET_IP_INFO);
+ 		break;
+ 	case KVP_OP_SET:
+@@ -421,7 +427,7 @@ kvp_send_key(struct work_struct *dummy)
+ 				UTF16_LITTLE_ENDIAN,
+ 				message->body.kvp_set.data.value,
+ 				HV_KVP_EXCHANGE_MAX_VALUE_SIZE - 1) + 1;
+-				break;
++			break;
+ 
+ 		case REG_U32:
+ 			/*
+@@ -446,7 +452,10 @@ kvp_send_key(struct work_struct *dummy)
+ 			break;
+ 
+ 		}
+-	case KVP_OP_GET:
++
++		/*
++		 * The key is always a string, in UTF-16 encoding.
++		 */
+ 		message->body.kvp_set.data.key_size =
+ 			utf16s_to_utf8s(
+ 			(wchar_t *)in_msg->body.kvp_set.data.key,
+@@ -454,7 +463,18 @@ kvp_send_key(struct work_struct *dummy)
+ 			UTF16_LITTLE_ENDIAN,
+ 			message->body.kvp_set.data.key,
+ 			HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1;
+-			break;
++
++		break;
++
++	case KVP_OP_GET:
++		message->body.kvp_get.data.key_size =
++			utf16s_to_utf8s(
++			(wchar_t *)in_msg->body.kvp_get.data.key,
++			in_msg->body.kvp_get.data.key_size,
++			UTF16_LITTLE_ENDIAN,
++			message->body.kvp_get.data.key,
++			HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1;
++		break;
+ 
+ 	case KVP_OP_DELETE:
+ 		message->body.kvp_delete.key_size =
+@@ -464,12 +484,12 @@ kvp_send_key(struct work_struct *dummy)
+ 			UTF16_LITTLE_ENDIAN,
+ 			message->body.kvp_delete.key,
+ 			HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1;
+-			break;
++		break;
+ 
+ 	case KVP_OP_ENUMERATE:
+ 		message->body.kvp_enum_data.index =
+ 			in_msg->body.kvp_enum_data.index;
+-			break;
++		break;
+ 	}
+ 
+ 	kvp_transaction.state = HVUTIL_USERSPACE_REQ;
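
The hv_kvp changes above replace an accidental fall-through into a default: label with an explicit KVP_OP_GET_IP_INFO case plus a /* fallthrough */ annotation, and give KVP_OP_GET its own key-conversion block instead of relying on falling out of KVP_OP_SET. A small sketch of the annotated-fallthrough style; the enum and dispatcher below are illustrative only:

#include <stdio.h>

enum op { OP_SET, OP_GET, OP_SET_INFO, OP_GET_INFO };

/* Hypothetical dispatcher illustrating deliberate vs. forbidden fallthrough. */
static void dispatch(enum op op)
{
	switch (op) {
	case OP_SET_INFO:
		printf("copy the SET-only fields\n");
		/* fallthrough */	/* deliberate: GET_INFO needs the shared fields too */
	case OP_GET_INFO:
		printf("copy the fields common to SET_INFO and GET_INFO\n");
		break;
	case OP_SET:
		printf("convert value, then key\n");
		break;	/* no silent fall into OP_GET anymore */
	case OP_GET:
		printf("convert key only\n");
		break;
	}
}

int main(void)
{
	dispatch(OP_SET_INFO);
	dispatch(OP_GET);
	return 0;
}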
+diff --git a/drivers/i2c/busses/i2c-at91.c b/drivers/i2c/busses/i2c-at91.c
+index 3f3e8b3bf5ff..d51bf536bdf7 100644
+--- a/drivers/i2c/busses/i2c-at91.c
++++ b/drivers/i2c/busses/i2c-at91.c
+@@ -270,9 +270,11 @@ static void at91_twi_write_next_byte(struct at91_twi_dev *dev)
+ 	writeb_relaxed(*dev->buf, dev->base + AT91_TWI_THR);
+ 
+ 	/* send stop when last byte has been written */
+-	if (--dev->buf_len == 0)
++	if (--dev->buf_len == 0) {
+ 		if (!dev->use_alt_cmd)
+ 			at91_twi_write(dev, AT91_TWI_CR, AT91_TWI_STOP);
++		at91_twi_write(dev, AT91_TWI_IDR, AT91_TWI_TXRDY);
++	}
+ 
+ 	dev_dbg(dev->dev, "wrote 0x%x, to go %zu\n", *dev->buf, dev->buf_len);
+ 
+@@ -690,9 +692,8 @@ static int at91_do_twi_transfer(struct at91_twi_dev *dev)
+ 		} else {
+ 			at91_twi_write_next_byte(dev);
+ 			at91_twi_write(dev, AT91_TWI_IER,
+-				       AT91_TWI_TXCOMP |
+-				       AT91_TWI_NACK |
+-				       AT91_TWI_TXRDY);
++				       AT91_TWI_TXCOMP | AT91_TWI_NACK |
++				       (dev->buf_len ? AT91_TWI_TXRDY : 0));
+ 		}
+ 	}
+ 
+@@ -913,7 +914,7 @@ static struct at91_twi_pdata sama5d4_config = {
+ 
+ static struct at91_twi_pdata sama5d2_config = {
+ 	.clk_max_div = 7,
+-	.clk_offset = 4,
++	.clk_offset = 3,
+ 	.has_unre_flag = true,
+ 	.has_alt_cmd = true,
+ 	.has_hold_field = true,
+diff --git a/drivers/iio/adc/exynos_adc.c b/drivers/iio/adc/exynos_adc.c
+index 4be29ed44755..1ca2c4d39f87 100644
+--- a/drivers/iio/adc/exynos_adc.c
++++ b/drivers/iio/adc/exynos_adc.c
+@@ -115,6 +115,8 @@
+ #define MAX_ADC_V2_CHANNELS		10
+ #define MAX_ADC_V1_CHANNELS		8
+ #define MAX_EXYNOS3250_ADC_CHANNELS	2
++#define MAX_EXYNOS4212_ADC_CHANNELS	4
++#define MAX_S5PV210_ADC_CHANNELS	10
+ 
+ /* Bit definitions common for ADC_V1 and ADC_V2 */
+ #define ADC_CON_EN_START	(1u << 0)
+@@ -270,6 +272,19 @@ static void exynos_adc_v1_start_conv(struct exynos_adc *info,
+ 	writel(con1 | ADC_CON_EN_START, ADC_V1_CON(info->regs));
+ }
+ 
++/* Exynos4212 and 4412 are like ADCv1 but with only four channels */
++static const struct exynos_adc_data exynos4212_adc_data = {
++	.num_channels	= MAX_EXYNOS4212_ADC_CHANNELS,
++	.mask		= ADC_DATX_MASK,	/* 12 bit ADC resolution */
++	.needs_adc_phy	= true,
++	.phy_offset	= EXYNOS_ADCV1_PHY_OFFSET,
++
++	.init_hw	= exynos_adc_v1_init_hw,
++	.exit_hw	= exynos_adc_v1_exit_hw,
++	.clear_irq	= exynos_adc_v1_clear_irq,
++	.start_conv	= exynos_adc_v1_start_conv,
++};
++
+ static const struct exynos_adc_data exynos_adc_v1_data = {
+ 	.num_channels	= MAX_ADC_V1_CHANNELS,
+ 	.mask		= ADC_DATX_MASK,	/* 12 bit ADC resolution */
+@@ -282,6 +297,16 @@ static const struct exynos_adc_data exynos_adc_v1_data = {
+ 	.start_conv	= exynos_adc_v1_start_conv,
+ };
+ 
++static const struct exynos_adc_data exynos_adc_s5pv210_data = {
++	.num_channels	= MAX_S5PV210_ADC_CHANNELS,
++	.mask		= ADC_DATX_MASK,	/* 12 bit ADC resolution */
++
++	.init_hw	= exynos_adc_v1_init_hw,
++	.exit_hw	= exynos_adc_v1_exit_hw,
++	.clear_irq	= exynos_adc_v1_clear_irq,
++	.start_conv	= exynos_adc_v1_start_conv,
++};
++
+ static void exynos_adc_s3c2416_start_conv(struct exynos_adc *info,
+ 					  unsigned long addr)
+ {
+@@ -478,6 +503,12 @@ static const struct of_device_id exynos_adc_match[] = {
+ 	}, {
+ 		.compatible = "samsung,s3c6410-adc",
+ 		.data = &exynos_adc_s3c64xx_data,
++	}, {
++		.compatible = "samsung,s5pv210-adc",
++		.data = &exynos_adc_s5pv210_data,
++	}, {
++		.compatible = "samsung,exynos4212-adc",
++		.data = &exynos4212_adc_data,
+ 	}, {
+ 		.compatible = "samsung,exynos-adc-v1",
+ 		.data = &exynos_adc_v1_data,
+diff --git a/drivers/iio/adc/rcar-gyroadc.c b/drivers/iio/adc/rcar-gyroadc.c
+index dcb50172186f..f3a966ab35dc 100644
+--- a/drivers/iio/adc/rcar-gyroadc.c
++++ b/drivers/iio/adc/rcar-gyroadc.c
+@@ -391,7 +391,7 @@ static int rcar_gyroadc_parse_subdevs(struct iio_dev *indio_dev)
+ 				dev_err(dev,
+ 					"Only %i channels supported with %s, but reg = <%i>.\n",
+ 					num_channels, child->name, reg);
+-				return ret;
++				return -EINVAL;
+ 			}
+ 		}
+ 
+@@ -400,7 +400,7 @@ static int rcar_gyroadc_parse_subdevs(struct iio_dev *indio_dev)
+ 			dev_err(dev,
+ 				"Channel %i uses different ADC mode than the rest.\n",
+ 				reg);
+-			return ret;
++			return -EINVAL;
+ 		}
+ 
+ 		/* Channel is valid, grab the regulator. */
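
Both rcar-gyroadc hunks return an explicit -EINVAL where the code previously returned ret, which at that point appears to still hold the 0 left behind by the last successful property read, so the caller would have treated the invalid channel as success. A tiny illustrative sketch; parse() is hypothetical, not the driver function:

#include <errno.h>
#include <stdio.h>

static int parse(int reg, int num_channels)
{
	int ret = 0;	/* a prior successful read leaves 0 here */

	if (reg >= num_channels) {
		fprintf(stderr, "reg %d out of range\n", reg);
		return -EINVAL;	/* explicit error, not the stale 'ret' */
	}
	return 0;
}

int main(void)
{
	printf("%d\n", parse(9, 8));	/* prints -22, not 0 */
	return 0;
}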
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index 50152c1b1004..357de3b4fddd 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -265,6 +265,9 @@ void ib_uverbs_release_file(struct kref *ref)
+ 	if (atomic_dec_and_test(&file->device->refcount))
+ 		ib_uverbs_comp_dev(file->device);
+ 
++	if (file->async_file)
++		kref_put(&file->async_file->ref,
++			 ib_uverbs_release_async_event_file);
+ 	kobject_put(&file->device->kobj);
+ 	kfree(file);
+ }
+@@ -915,10 +918,6 @@ static int ib_uverbs_close(struct inode *inode, struct file *filp)
+ 	}
+ 	mutex_unlock(&file->device->lists_mutex);
+ 
+-	if (file->async_file)
+-		kref_put(&file->async_file->ref,
+-			 ib_uverbs_release_async_event_file);
+-
+ 	kref_put(&file->ref, ib_uverbs_release_file);
+ 
+ 	return 0;
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index 88e326d6cc49..d648a4167832 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -410,10 +410,7 @@ static void sdma_flush(struct sdma_engine *sde)
+ 	sdma_flush_descq(sde);
+ 	spin_lock_irqsave(&sde->flushlist_lock, flags);
+ 	/* copy flush list */
+-	list_for_each_entry_safe(txp, txp_next, &sde->flushlist, list) {
+-		list_del_init(&txp->list);
+-		list_add_tail(&txp->list, &flushlist);
+-	}
++	list_splice_init(&sde->flushlist, &flushlist);
+ 	spin_unlock_irqrestore(&sde->flushlist_lock, flags);
+ 	/* flush from flush list */
+ 	list_for_each_entry_safe(txp, txp_next, &flushlist, list)
+@@ -2426,7 +2423,7 @@ unlock_noconn:
+ 		wait->tx_count++;
+ 		wait->count += tx->num_desc;
+ 	}
+-	schedule_work(&sde->flush_worker);
++	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
+ 	ret = -ECOMM;
+ 	goto unlock;
+ nodesc:
+@@ -2526,7 +2523,7 @@ unlock_noconn:
+ 		}
+ 	}
+ 	spin_unlock(&sde->flushlist_lock);
+-	schedule_work(&sde->flush_worker);
++	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
+ 	ret = -ECOMM;
+ 	goto update_tail;
+ nodesc:
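
In sdma_flush(), the open-coded move loop becomes a single list_splice_init(), which relinks the entire flush list onto a local head in constant time while the spinlock is held. A kernel-context sketch of the pattern, assuming <linux/list.h> semantics; the txreq structure and flush_all() are illustrative:

#include <linux/list.h>
#include <linux/spinlock.h>

struct txreq {
	struct list_head list;
	/* payload omitted */
};

static LIST_HEAD(flushlist);
static DEFINE_SPINLOCK(flushlist_lock);

static void flush_all(void)
{
	LIST_HEAD(local);	/* private list head on the stack */
	struct txreq *txp, *txp_next;
	unsigned long flags;

	spin_lock_irqsave(&flushlist_lock, flags);
	/* O(1): relink the whole chain onto 'local' and re-init the source */
	list_splice_init(&flushlist, &local);
	spin_unlock_irqrestore(&flushlist_lock, flags);

	/* Now walk the detached entries without holding the lock. */
	list_for_each_entry_safe(txp, txp_next, &local, list) {
		list_del_init(&txp->list);
		/* complete txp here */
	}
}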
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index 9e1cac8cb260..453e5c4ac19f 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -497,7 +497,7 @@ void mlx5_ib_free_implicit_mr(struct mlx5_ib_mr *imr)
+ static int pagefault_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr,
+ 			u64 io_virt, size_t bcnt, u32 *bytes_mapped)
+ {
+-	u64 access_mask = ODP_READ_ALLOWED_BIT;
++	u64 access_mask;
+ 	int npages = 0, page_shift, np;
+ 	u64 start_idx, page_mask;
+ 	struct ib_umem_odp *odp;
+@@ -522,6 +522,7 @@ next_mr:
+ 	page_shift = mr->umem->page_shift;
+ 	page_mask = ~(BIT(page_shift) - 1);
+ 	start_idx = (io_virt - (mr->mmkey.iova & page_mask)) >> page_shift;
++	access_mask = ODP_READ_ALLOWED_BIT;
+ 
+ 	if (mr->umem->writable)
+ 		access_mask |= ODP_WRITE_ALLOWED_BIT;
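
The odp hunk moves the access_mask initialisation from the declaration down past the next_mr: label, so every pass of the goto-retry loop starts from a clean read-only mask rather than inheriting ODP_WRITE_ALLOWED_BIT from the previous MR. A small userspace sketch of the pitfall; the flag names are made up:

#include <stdio.h>

#define READ_BIT  0x1
#define WRITE_BIT 0x2

int main(void)
{
	unsigned int mask;
	int writable[] = { 1, 0 };	/* first region writable, second not */

	for (int i = 0; i < 2; i++) {
		/* Re-initialize per iteration, as the fix does after next_mr: */
		mask = READ_BIT;
		if (writable[i])
			mask |= WRITE_BIT;
		printf("region %d mask 0x%x\n", i, mask);
	}
	/* Initializing mask only at its declaration would have left
	 * WRITE_BIT set for region 1, granting write access it must
	 * not have; that is the bug the hunk fixes. */
	return 0;
}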
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index 2c1114ee0c6d..bc6a44a16445 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -3401,13 +3401,17 @@ static const match_table_t srp_opt_tokens = {
+ 
+ /**
+  * srp_parse_in - parse an IP address and port number combination
++ * @net:	   [in]  Network namespace.
++ * @sa:		   [out] Address family, IP address and port number.
++ * @addr_port_str: [in]  IP address and port number.
++ * @has_port:	   [out] Whether or not @addr_port_str includes a port number.
+  *
+  * Parse the following address formats:
+  * - IPv4: <ip_address>:<port>, e.g. 1.2.3.4:5.
+  * - IPv6: \[<ipv6_address>\]:<port>, e.g. [1::2:3%4]:5.
+  */
+ static int srp_parse_in(struct net *net, struct sockaddr_storage *sa,
+-			const char *addr_port_str)
++			const char *addr_port_str, bool *has_port)
+ {
+ 	char *addr_end, *addr = kstrdup(addr_port_str, GFP_KERNEL);
+ 	char *port_str;
+@@ -3416,9 +3420,12 @@ static int srp_parse_in(struct net *net, struct sockaddr_storage *sa,
+ 	if (!addr)
+ 		return -ENOMEM;
+ 	port_str = strrchr(addr, ':');
+-	if (!port_str)
+-		return -EINVAL;
+-	*port_str++ = '\0';
++	if (port_str && strchr(port_str, ']'))
++		port_str = NULL;
++	if (port_str)
++		*port_str++ = '\0';
++	if (has_port)
++		*has_port = port_str != NULL;
+ 	ret = inet_pton_with_scope(net, AF_INET, addr, port_str, sa);
+ 	if (ret && addr[0]) {
+ 		addr_end = addr + strlen(addr) - 1;
+@@ -3440,6 +3447,7 @@ static int srp_parse_options(struct net *net, const char *buf,
+ 	char *p;
+ 	substring_t args[MAX_OPT_ARGS];
+ 	unsigned long long ull;
++	bool has_port;
+ 	int opt_mask = 0;
+ 	int token;
+ 	int ret = -EINVAL;
+@@ -3538,7 +3546,8 @@ static int srp_parse_options(struct net *net, const char *buf,
+ 				ret = -ENOMEM;
+ 				goto out;
+ 			}
+-			ret = srp_parse_in(net, &target->rdma_cm.src.ss, p);
++			ret = srp_parse_in(net, &target->rdma_cm.src.ss, p,
++					   NULL);
+ 			if (ret < 0) {
+ 				pr_warn("bad source parameter '%s'\n", p);
+ 				kfree(p);
+@@ -3554,7 +3563,10 @@ static int srp_parse_options(struct net *net, const char *buf,
+ 				ret = -ENOMEM;
+ 				goto out;
+ 			}
+-			ret = srp_parse_in(net, &target->rdma_cm.dst.ss, p);
++			ret = srp_parse_in(net, &target->rdma_cm.dst.ss, p,
++					   &has_port);
++			if (!has_port)
++				ret = -EINVAL;
+ 			if (ret < 0) {
+ 				pr_warn("bad dest parameter '%s'\n", p);
+ 				kfree(p);
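
srp_parse_in() now treats the last ':' as a port separator only when no ']' follows it, so the colons inside a bracketed IPv6 literal are no longer misread as a port, and the new has_port out-parameter lets the destination address require a port while the source address keeps it optional. A standalone sketch of the separator logic; parse_addr_port() is illustrative, not the driver function:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Split "1.2.3.4:5" or "[1::2%3]:5" into address and optional port. */
static int parse_addr_port(const char *in, int *has_port)
{
	char *addr = strdup(in);
	char *port_str;

	if (!addr)
		return -1;
	port_str = strrchr(addr, ':');
	/* A ']' after the last ':' means that colon is inside the
	 * IPv6 brackets, e.g. "[1::2]", so there is no port at all. */
	if (port_str && strchr(port_str, ']'))
		port_str = NULL;
	if (port_str)
		*port_str++ = '\0';
	if (has_port)
		*has_port = port_str != NULL;
	printf("addr=%s port=%s\n", addr, port_str ? port_str : "(none)");
	free(addr);
	return 0;
}

int main(void)
{
	int has_port;

	parse_addr_port("1.2.3.4:5", &has_port);	/* port 5 */
	parse_addr_port("[1::2:3]", &has_port);		/* no port */
	parse_addr_port("[1::2:3]:5", &has_port);	/* port 5 */
	return 0;
}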
+diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
+index 60348d707b99..9a576ae837dc 100644
+--- a/drivers/iommu/iova.c
++++ b/drivers/iommu/iova.c
+@@ -148,8 +148,9 @@ __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
+ 	struct iova *cached_iova;
+ 
+ 	cached_iova = rb_entry(iovad->cached32_node, struct iova, node);
+-	if (free->pfn_hi < iovad->dma_32bit_pfn &&
+-	    free->pfn_lo >= cached_iova->pfn_lo)
++	if (free == cached_iova ||
++	    (free->pfn_hi < iovad->dma_32bit_pfn &&
++	     free->pfn_lo >= cached_iova->pfn_lo))
+ 		iovad->cached32_node = rb_next(&free->node);
+ 
+ 	cached_iova = rb_entry(iovad->cached_node, struct iova, node);
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index 3f4211b5cd33..45f684689c35 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -35,7 +35,7 @@
+ #include <linux/rcupdate.h>
+ #include <linux/sched/clock.h>
+ #include <linux/rculist.h>
+-
++#include <linux/delay.h>
+ #include <trace/events/bcache.h>
+ 
+ /*
+@@ -649,7 +649,25 @@ static int mca_reap(struct btree *b, unsigned int min_order, bool flush)
+ 		up(&b->io_mutex);
+ 	}
+ 
++retry:
++	/*
++	 * BTREE_NODE_dirty might be cleared in btree_flush_write() by
++	 * __bch_btree_node_write(). To avoid an extra flush, acquire
++	 * b->write_lock before checking the BTREE_NODE_dirty bit.
++	 */
+ 	mutex_lock(&b->write_lock);
++	/*
++	 * If this btree node is selected by the journal code in
++	 * btree_flush_write(), delay and retry until the node has been
++	 * flushed and the BTREE_NODE_journal_flush bit has been cleared
++	 * by btree_flush_write().
++	 */
++	if (btree_node_journal_flush(b)) {
++		pr_debug("bnode %p is flushing by journal, retry", b);
++		mutex_unlock(&b->write_lock);
++		udelay(1);
++		goto retry;
++	}
++
+ 	if (btree_node_dirty(b))
+ 		__bch_btree_node_write(b, &cl);
+ 	mutex_unlock(&b->write_lock);
+@@ -772,10 +790,15 @@ void bch_btree_cache_free(struct cache_set *c)
+ 	while (!list_empty(&c->btree_cache)) {
+ 		b = list_first_entry(&c->btree_cache, struct btree, list);
+ 
+-		if (btree_node_dirty(b))
++		/*
++		 * This function is called by cache_set_free(); there is no
++		 * I/O request on the cache at this point, so it is no longer
++		 * necessary to acquire b->write_lock before clearing
++		 * BTREE_NODE_dirty.
++		 */
++		if (btree_node_dirty(b)) {
+ 			btree_complete_write(b, btree_current_write(b));
+-		clear_bit(BTREE_NODE_dirty, &b->flags);
+-
++			clear_bit(BTREE_NODE_dirty, &b->flags);
++		}
+ 		mca_data_free(b);
+ 	}
+ 
+@@ -1061,11 +1084,25 @@ static void btree_node_free(struct btree *b)
+ 
+ 	BUG_ON(b == b->c->root);
+ 
++retry:
+ 	mutex_lock(&b->write_lock);
++	/*
++	 * If the btree node is selected and being flushed in
++	 * btree_flush_write(), delay and retry until the
++	 * BTREE_NODE_journal_flush bit is cleared; only then is it safe
++	 * to free the btree node here. Otherwise the free would race
++	 * with the journal flush.
++	 */
++	if (btree_node_journal_flush(b)) {
++		mutex_unlock(&b->write_lock);
++		pr_debug("bnode %p journal_flush set, retry", b);
++		udelay(1);
++		goto retry;
++	}
+ 
+-	if (btree_node_dirty(b))
++	if (btree_node_dirty(b)) {
+ 		btree_complete_write(b, btree_current_write(b));
+-	clear_bit(BTREE_NODE_dirty, &b->flags);
++		clear_bit(BTREE_NODE_dirty, &b->flags);
++	}
+ 
+ 	mutex_unlock(&b->write_lock);
+ 
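
mca_reap() and btree_node_free() now both spin on the new BTREE_NODE_journal_flush bit: take b->write_lock, and if the journal code has claimed the node, drop the lock, wait a microsecond and retry rather than rewriting or freeing a node btree_flush_write() is still using. A kernel-context sketch of that wait-for-flag shape; the flag and field names follow the patch, everything else is illustrative:

#include <linux/bitops.h>
#include <linux/delay.h>
#include <linux/mutex.h>

struct node {
	struct mutex write_lock;
	unsigned long flags;
};

#define NODE_journal_flush 0	/* bit number, as in enum btree_flags */

static void wait_until_not_flushing(struct node *b)
{
retry:
	mutex_lock(&b->write_lock);
	if (test_bit(NODE_journal_flush, &b->flags)) {
		/* The journal owns the node right now: back off and
		 * retry. The flusher clears the bit under write_lock,
		 * so rechecking under the lock is reliable. */
		mutex_unlock(&b->write_lock);
		udelay(1);
		goto retry;
	}
	/* ... safe to write out or free the node here ... */
	mutex_unlock(&b->write_lock);
}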
+diff --git a/drivers/md/bcache/btree.h b/drivers/md/bcache/btree.h
+index a68d6c55783b..4d0cca145f69 100644
+--- a/drivers/md/bcache/btree.h
++++ b/drivers/md/bcache/btree.h
+@@ -158,11 +158,13 @@ enum btree_flags {
+ 	BTREE_NODE_io_error,
+ 	BTREE_NODE_dirty,
+ 	BTREE_NODE_write_idx,
++	BTREE_NODE_journal_flush,
+ };
+ 
+ BTREE_FLAG(io_error);
+ BTREE_FLAG(dirty);
+ BTREE_FLAG(write_idx);
++BTREE_FLAG(journal_flush);
+ 
+ static inline struct btree_write *btree_current_write(struct btree *b)
+ {
+diff --git a/drivers/md/bcache/extents.c b/drivers/md/bcache/extents.c
+index c809724e6571..886710043025 100644
+--- a/drivers/md/bcache/extents.c
++++ b/drivers/md/bcache/extents.c
+@@ -538,6 +538,7 @@ static bool bch_extent_bad(struct btree_keys *bk, const struct bkey *k)
+ {
+ 	struct btree *b = container_of(bk, struct btree, keys);
+ 	unsigned int i, stale;
++	char buf[80];
+ 
+ 	if (!KEY_PTRS(k) ||
+ 	    bch_extent_invalid(bk, k))
+@@ -547,19 +548,19 @@ static bool bch_extent_bad(struct btree_keys *bk, const struct bkey *k)
+ 		if (!ptr_available(b->c, k, i))
+ 			return true;
+ 
+-	if (!expensive_debug_checks(b->c) && KEY_DIRTY(k))
+-		return false;
+-
+ 	for (i = 0; i < KEY_PTRS(k); i++) {
+ 		stale = ptr_stale(b->c, k, i);
+ 
+-		btree_bug_on(stale > 96, b,
++		if (stale && KEY_DIRTY(k)) {
++			bch_extent_to_text(buf, sizeof(buf), k);
++			pr_info("stale dirty pointer, stale %u, key: %s",
++				stale, buf);
++		}
++
++		btree_bug_on(stale > BUCKET_GC_GEN_MAX, b,
+ 			     "key too stale: %i, need_gc %u",
+ 			     stale, b->c->need_gc);
+ 
+-		btree_bug_on(stale && KEY_DIRTY(k) && KEY_SIZE(k),
+-			     b, "stale dirty pointer");
+-
+ 		if (stale)
+ 			return true;
+ 
+diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
+index ec1e35a62934..7bb15cddca5e 100644
+--- a/drivers/md/bcache/journal.c
++++ b/drivers/md/bcache/journal.c
+@@ -404,6 +404,7 @@ static void btree_flush_write(struct cache_set *c)
+ retry:
+ 	best = NULL;
+ 
++	mutex_lock(&c->bucket_lock);
+ 	for_each_cached_btree(b, c, i)
+ 		if (btree_current_write(b)->journal) {
+ 			if (!best)
+@@ -416,9 +417,14 @@ retry:
+ 		}
+ 
+ 	b = best;
++	if (b)
++		set_btree_node_journal_flush(b);
++	mutex_unlock(&c->bucket_lock);
++
+ 	if (b) {
+ 		mutex_lock(&b->write_lock);
+ 		if (!btree_current_write(b)->journal) {
++			clear_bit(BTREE_NODE_journal_flush, &b->flags);
+ 			mutex_unlock(&b->write_lock);
+ 			/* We raced */
+ 			atomic_long_inc(&c->retry_flush_write);
+@@ -426,6 +432,7 @@ retry:
+ 		}
+ 
+ 		__bch_btree_node_write(b, NULL);
++		clear_bit(BTREE_NODE_journal_flush, &b->flags);
+ 		mutex_unlock(&b->write_lock);
+ 	}
+ }
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index f3dcc7640319..34f5de13a93d 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -949,6 +949,7 @@ static int crypt_integrity_ctr(struct crypt_config *cc, struct dm_target *ti)
+ {
+ #ifdef CONFIG_BLK_DEV_INTEGRITY
+ 	struct blk_integrity *bi = blk_get_integrity(cc->dev->bdev->bd_disk);
++	struct mapped_device *md = dm_table_get_md(ti->table);
+ 
+ 	/* From now we require underlying device with our integrity profile */
+ 	if (!bi || strcasecmp(bi->profile->name, "DM-DIF-EXT-TAG")) {
+@@ -968,7 +969,7 @@ static int crypt_integrity_ctr(struct crypt_config *cc, struct dm_target *ti)
+ 
+ 	if (crypt_integrity_aead(cc)) {
+ 		cc->integrity_tag_size = cc->on_disk_tag_size - cc->integrity_iv_size;
+-		DMINFO("Integrity AEAD, tag size %u, IV size %u.",
++		DMDEBUG("%s: Integrity AEAD, tag size %u, IV size %u.", dm_device_name(md),
+ 		       cc->integrity_tag_size, cc->integrity_iv_size);
+ 
+ 		if (crypto_aead_setauthsize(any_tfm_aead(cc), cc->integrity_tag_size)) {
+@@ -976,7 +977,7 @@ static int crypt_integrity_ctr(struct crypt_config *cc, struct dm_target *ti)
+ 			return -EINVAL;
+ 		}
+ 	} else if (cc->integrity_iv_size)
+-		DMINFO("Additional per-sector space %u bytes for IV.",
++		DMDEBUG("%s: Additional per-sector space %u bytes for IV.", dm_device_name(md),
+ 		       cc->integrity_iv_size);
+ 
+ 	if ((cc->integrity_tag_size + cc->integrity_iv_size) != bi->tag_size) {
+diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
+index baa966e2778c..481e54ded9dc 100644
+--- a/drivers/md/dm-mpath.c
++++ b/drivers/md/dm-mpath.c
+@@ -554,8 +554,23 @@ static int multipath_clone_and_map(struct dm_target *ti, struct request *rq,
+ 	return DM_MAPIO_REMAPPED;
+ }
+ 
+-static void multipath_release_clone(struct request *clone)
++static void multipath_release_clone(struct request *clone,
++				    union map_info *map_context)
+ {
++	if (unlikely(map_context)) {
++		/*
++		 * A non-NULL map_context means the caller is still in the
++		 * map method; we must undo multipath_clone_and_map()
++		 */
++		struct dm_mpath_io *mpio = get_mpio(map_context);
++		struct pgpath *pgpath = mpio->pgpath;
++
++		if (pgpath && pgpath->pg->ps.type->end_io)
++			pgpath->pg->ps.type->end_io(&pgpath->pg->ps,
++						    &pgpath->path,
++						    mpio->nr_bytes);
++	}
++
+ 	blk_put_request(clone);
+ }
+ 
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index 264b84e274aa..17c6a73c536c 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -219,7 +219,7 @@ static void dm_end_request(struct request *clone, blk_status_t error)
+ 	struct request *rq = tio->orig;
+ 
+ 	blk_rq_unprep_clone(clone);
+-	tio->ti->type->release_clone_rq(clone);
++	tio->ti->type->release_clone_rq(clone, NULL);
+ 
+ 	rq_end_stats(md, rq);
+ 	if (!rq->q->mq_ops)
+@@ -270,7 +270,7 @@ static void dm_requeue_original_request(struct dm_rq_target_io *tio, bool delay_
+ 	rq_end_stats(md, rq);
+ 	if (tio->clone) {
+ 		blk_rq_unprep_clone(tio->clone);
+-		tio->ti->type->release_clone_rq(tio->clone);
++		tio->ti->type->release_clone_rq(tio->clone, NULL);
+ 	}
+ 
+ 	if (!rq->q->mq_ops)
+@@ -495,7 +495,7 @@ check_again:
+ 	case DM_MAPIO_REMAPPED:
+ 		if (setup_clone(clone, rq, tio, GFP_ATOMIC)) {
+ 			/* -ENOMEM */
+-			ti->type->release_clone_rq(clone);
++			ti->type->release_clone_rq(clone, &tio->info);
+ 			return DM_MAPIO_REQUEUE;
+ 		}
+ 
+@@ -505,7 +505,7 @@ check_again:
+ 		ret = dm_dispatch_clone_request(clone, rq);
+ 		if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE) {
+ 			blk_rq_unprep_clone(clone);
+-			tio->ti->type->release_clone_rq(clone);
++			tio->ti->type->release_clone_rq(clone, &tio->info);
+ 			tio->clone = NULL;
+ 			if (!rq->q->mq_ops)
+ 				r = DM_MAPIO_DELAY_REQUEUE;
+diff --git a/drivers/md/dm-target.c b/drivers/md/dm-target.c
+index 314d17ca6466..64dd0b34fcf4 100644
+--- a/drivers/md/dm-target.c
++++ b/drivers/md/dm-target.c
+@@ -136,7 +136,8 @@ static int io_err_clone_and_map_rq(struct dm_target *ti, struct request *rq,
+ 	return DM_MAPIO_KILL;
+ }
+ 
+-static void io_err_release_clone_rq(struct request *clone)
++static void io_err_release_clone_rq(struct request *clone,
++				    union map_info *map_context)
+ {
+ }
+ 
+diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
+index ed3caceaed07..6a26afcc1fd6 100644
+--- a/drivers/md/dm-thin-metadata.c
++++ b/drivers/md/dm-thin-metadata.c
+@@ -2001,16 +2001,19 @@ int dm_pool_register_metadata_threshold(struct dm_pool_metadata *pmd,
+ 
+ int dm_pool_metadata_set_needs_check(struct dm_pool_metadata *pmd)
+ {
+-	int r;
++	int r = -EINVAL;
+ 	struct dm_block *sblock;
+ 	struct thin_disk_superblock *disk_super;
+ 
+ 	down_write(&pmd->root_lock);
++	if (pmd->fail_io)
++		goto out;
++
+ 	pmd->flags |= THIN_METADATA_NEEDS_CHECK_FLAG;
+ 
+ 	r = superblock_lock(pmd, &sblock);
+ 	if (r) {
+-		DMERR("couldn't read superblock");
++		DMERR("couldn't lock superblock");
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/media/cec/Makefile b/drivers/media/cec/Makefile
+index 29a2ab9e77c5..ad8677d8c896 100644
+--- a/drivers/media/cec/Makefile
++++ b/drivers/media/cec/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0
+-cec-objs := cec-core.o cec-adap.o cec-api.o cec-edid.o
++cec-objs := cec-core.o cec-adap.o cec-api.o
+ 
+ ifeq ($(CONFIG_CEC_NOTIFIER),y)
+   cec-objs += cec-notifier.o
+diff --git a/drivers/media/cec/cec-adap.c b/drivers/media/cec/cec-adap.c
+index a7ea27d2aa8e..4a15d53f659e 100644
+--- a/drivers/media/cec/cec-adap.c
++++ b/drivers/media/cec/cec-adap.c
+@@ -62,6 +62,19 @@ static unsigned int cec_log_addr2dev(const struct cec_adapter *adap, u8 log_addr
+ 	return adap->log_addrs.primary_device_type[i < 0 ? 0 : i];
+ }
+ 
++u16 cec_get_edid_phys_addr(const u8 *edid, unsigned int size,
++			   unsigned int *offset)
++{
++	unsigned int loc = cec_get_edid_spa_location(edid, size);
++
++	if (offset)
++		*offset = loc;
++	if (loc == 0)
++		return CEC_PHYS_ADDR_INVALID;
++	return (edid[loc] << 8) | edid[loc + 1];
++}
++EXPORT_SYMBOL_GPL(cec_get_edid_phys_addr);
++
+ /*
+  * Queue a new event for this filehandle. If ts == 0, then set it
+  * to the current time.
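
cec_get_edid_phys_addr(), added above and removed from cec-edid.c just below, reads the two bytes at the CTA-861 source-physical-address location and packs them big-endian into a 16-bit CEC physical address. A tiny runnable example of the byte math; the sample bytes are invented:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Pretend the SPA was located at offset 'loc' in the EDID and
	 * holds 0x12 0x34, i.e. physical address 1.2.3.4. */
	uint8_t edid[2] = { 0x12, 0x34 };
	unsigned int loc = 0;

	uint16_t pa = (edid[loc] << 8) | edid[loc + 1];

	printf("pa = %x.%x.%x.%x\n",
	       pa >> 12, (pa >> 8) & 0xf, (pa >> 4) & 0xf, pa & 0xf);
	return 0;
}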
+diff --git a/drivers/media/cec/cec-edid.c b/drivers/media/cec/cec-edid.c
+deleted file mode 100644
+index f587e8eaefd8..000000000000
+--- a/drivers/media/cec/cec-edid.c
++++ /dev/null
+@@ -1,95 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/*
+- * cec-edid - HDMI Consumer Electronics Control EDID & CEC helper functions
+- *
+- * Copyright 2016 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+- */
+-
+-#include <linux/module.h>
+-#include <linux/kernel.h>
+-#include <linux/types.h>
+-#include <media/cec.h>
+-
+-u16 cec_get_edid_phys_addr(const u8 *edid, unsigned int size,
+-			   unsigned int *offset)
+-{
+-	unsigned int loc = cec_get_edid_spa_location(edid, size);
+-
+-	if (offset)
+-		*offset = loc;
+-	if (loc == 0)
+-		return CEC_PHYS_ADDR_INVALID;
+-	return (edid[loc] << 8) | edid[loc + 1];
+-}
+-EXPORT_SYMBOL_GPL(cec_get_edid_phys_addr);
+-
+-void cec_set_edid_phys_addr(u8 *edid, unsigned int size, u16 phys_addr)
+-{
+-	unsigned int loc = cec_get_edid_spa_location(edid, size);
+-	u8 sum = 0;
+-	unsigned int i;
+-
+-	if (loc == 0)
+-		return;
+-	edid[loc] = phys_addr >> 8;
+-	edid[loc + 1] = phys_addr & 0xff;
+-	loc &= ~0x7f;
+-
+-	/* update the checksum */
+-	for (i = loc; i < loc + 127; i++)
+-		sum += edid[i];
+-	edid[i] = 256 - sum;
+-}
+-EXPORT_SYMBOL_GPL(cec_set_edid_phys_addr);
+-
+-u16 cec_phys_addr_for_input(u16 phys_addr, u8 input)
+-{
+-	/* Check if input is sane */
+-	if (WARN_ON(input == 0 || input > 0xf))
+-		return CEC_PHYS_ADDR_INVALID;
+-
+-	if (phys_addr == 0)
+-		return input << 12;
+-
+-	if ((phys_addr & 0x0fff) == 0)
+-		return phys_addr | (input << 8);
+-
+-	if ((phys_addr & 0x00ff) == 0)
+-		return phys_addr | (input << 4);
+-
+-	if ((phys_addr & 0x000f) == 0)
+-		return phys_addr | input;
+-
+-	/*
+-	 * All nibbles are used so no valid physical addresses can be assigned
+-	 * to the input.
+-	 */
+-	return CEC_PHYS_ADDR_INVALID;
+-}
+-EXPORT_SYMBOL_GPL(cec_phys_addr_for_input);
+-
+-int cec_phys_addr_validate(u16 phys_addr, u16 *parent, u16 *port)
+-{
+-	int i;
+-
+-	if (parent)
+-		*parent = phys_addr;
+-	if (port)
+-		*port = 0;
+-	if (phys_addr == CEC_PHYS_ADDR_INVALID)
+-		return 0;
+-	for (i = 0; i < 16; i += 4)
+-		if (phys_addr & (0xf << i))
+-			break;
+-	if (i == 16)
+-		return 0;
+-	if (parent)
+-		*parent = phys_addr & (0xfff0 << i);
+-	if (port)
+-		*port = (phys_addr >> i) & 0xf;
+-	for (i += 4; i < 16; i += 4)
+-		if ((phys_addr & (0xf << i)) == 0)
+-			return -EINVAL;
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(cec_phys_addr_validate);
+diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig
+index 63c9ac2c6a5f..8b1ae1d6680b 100644
+--- a/drivers/media/i2c/Kconfig
++++ b/drivers/media/i2c/Kconfig
+@@ -60,8 +60,9 @@ config VIDEO_TDA1997X
+ 	tristate "NXP TDA1997x HDMI receiver"
+ 	depends on VIDEO_V4L2 && I2C && VIDEO_V4L2_SUBDEV_API
+ 	depends on SND_SOC
+-	select SND_PCM
+ 	select HDMI
++	select SND_PCM
++	select V4L2_FWNODE
+ 	---help---
+ 	  V4L2 subdevice driver for the NXP TDA1997x HDMI receivers.
+ 
+diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
+index f01964c36ad5..a4b0a89c7e7e 100644
+--- a/drivers/media/i2c/adv7604.c
++++ b/drivers/media/i2c/adv7604.c
+@@ -2297,8 +2297,8 @@ static int adv76xx_set_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid)
+ 		edid->blocks = 2;
+ 		return -E2BIG;
+ 	}
+-	pa = cec_get_edid_phys_addr(edid->edid, edid->blocks * 128, &spa_loc);
+-	err = cec_phys_addr_validate(pa, &pa, NULL);
++	pa = v4l2_get_edid_phys_addr(edid->edid, edid->blocks * 128, &spa_loc);
++	err = v4l2_phys_addr_validate(pa, &pa, NULL);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c
+index bb43a75ed6d0..58662ba92d4f 100644
+--- a/drivers/media/i2c/adv7842.c
++++ b/drivers/media/i2c/adv7842.c
+@@ -791,8 +791,8 @@ static int edid_write_hdmi_segment(struct v4l2_subdev *sd, u8 port)
+ 		return 0;
+ 	}
+ 
+-	pa = cec_get_edid_phys_addr(edid, 256, &spa_loc);
+-	err = cec_phys_addr_validate(pa, &pa, NULL);
++	pa = v4l2_get_edid_phys_addr(edid, 256, &spa_loc);
++	err = v4l2_phys_addr_validate(pa, &pa, NULL);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
+index 26070fb6ce4e..e4c0a27b636a 100644
+--- a/drivers/media/i2c/tc358743.c
++++ b/drivers/media/i2c/tc358743.c
+@@ -1789,7 +1789,7 @@ static int tc358743_s_edid(struct v4l2_subdev *sd,
+ 		return -E2BIG;
+ 	}
+ 	pa = cec_get_edid_phys_addr(edid->edid, edid->blocks * 128, NULL);
+-	err = cec_phys_addr_validate(pa, &pa, NULL);
++	err = v4l2_phys_addr_validate(pa, &pa, NULL);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/media/platform/stm32/stm32-dcmi.c b/drivers/media/platform/stm32/stm32-dcmi.c
+index d38682265892..1d9c028e52cb 100644
+--- a/drivers/media/platform/stm32/stm32-dcmi.c
++++ b/drivers/media/platform/stm32/stm32-dcmi.c
+@@ -1681,7 +1681,7 @@ static int dcmi_probe(struct platform_device *pdev)
+ 	if (irq <= 0) {
+ 		if (irq != -EPROBE_DEFER)
+ 			dev_err(&pdev->dev, "Could not get irq\n");
+-		return irq;
++		return irq ? irq : -ENXIO;
+ 	}
+ 
+ 	dcmi->res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+diff --git a/drivers/media/platform/vim2m.c b/drivers/media/platform/vim2m.c
+index 462099a141e4..7b8cf661f238 100644
+--- a/drivers/media/platform/vim2m.c
++++ b/drivers/media/platform/vim2m.c
+@@ -3,7 +3,8 @@
+  *
+  * This is a virtual device driver for testing mem-to-mem videobuf framework.
+  * It simulates a device that uses memory buffers for both source and
+- * destination, processes the data and issues an "irq" (simulated by a timer).
++ * destination, processes the data and issues an "irq" (simulated by
++ * delayed work).
+  * The device is capable of multi-instance, multi-buffer-per-transaction
+  * operation (via the mem2mem framework).
+  *
+@@ -19,7 +20,6 @@
+ #include <linux/module.h>
+ #include <linux/delay.h>
+ #include <linux/fs.h>
+-#include <linux/timer.h>
+ #include <linux/sched.h>
+ #include <linux/slab.h>
+ 
+@@ -148,7 +148,7 @@ struct vim2m_dev {
+ 	struct mutex		dev_mutex;
+ 	spinlock_t		irqlock;
+ 
+-	struct timer_list	timer;
++	struct delayed_work	work_run;
+ 
+ 	struct v4l2_m2m_dev	*m2m_dev;
+ };
+@@ -336,12 +336,6 @@ static int device_process(struct vim2m_ctx *ctx,
+ 	return 0;
+ }
+ 
+-static void schedule_irq(struct vim2m_dev *dev, int msec_timeout)
+-{
+-	dprintk(dev, "Scheduling a simulated irq\n");
+-	mod_timer(&dev->timer, jiffies + msecs_to_jiffies(msec_timeout));
+-}
+-
+ /*
+  * mem2mem callbacks
+  */
+@@ -387,13 +381,14 @@ static void device_run(void *priv)
+ 
+ 	device_process(ctx, src_buf, dst_buf);
+ 
+-	/* Run a timer, which simulates a hardware irq  */
+-	schedule_irq(dev, ctx->transtime);
++	/* Run delayed work, which simulates a hardware irq  */
++	schedule_delayed_work(&dev->work_run, msecs_to_jiffies(ctx->transtime));
+ }
+ 
+-static void device_isr(struct timer_list *t)
++static void device_work(struct work_struct *w)
+ {
+-	struct vim2m_dev *vim2m_dev = from_timer(vim2m_dev, t, timer);
++	struct vim2m_dev *vim2m_dev =
++		container_of(w, struct vim2m_dev, work_run.work);
+ 	struct vim2m_ctx *curr_ctx;
+ 	struct vb2_v4l2_buffer *src_vb, *dst_vb;
+ 	unsigned long flags;
+@@ -802,9 +797,13 @@ static int vim2m_start_streaming(struct vb2_queue *q, unsigned count)
+ static void vim2m_stop_streaming(struct vb2_queue *q)
+ {
+ 	struct vim2m_ctx *ctx = vb2_get_drv_priv(q);
++	struct vim2m_dev *dev = ctx->dev;
+ 	struct vb2_v4l2_buffer *vbuf;
+ 	unsigned long flags;
+ 
++	if (v4l2_m2m_get_curr_priv(dev->m2m_dev) == ctx)
++		cancel_delayed_work_sync(&dev->work_run);
++
+ 	for (;;) {
+ 		if (V4L2_TYPE_IS_OUTPUT(q->type))
+ 			vbuf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+@@ -1015,6 +1014,7 @@ static int vim2m_probe(struct platform_device *pdev)
+ 	vfd = &dev->vfd;
+ 	vfd->lock = &dev->dev_mutex;
+ 	vfd->v4l2_dev = &dev->v4l2_dev;
++	INIT_DELAYED_WORK(&dev->work_run, device_work);
+ 
+ 	ret = video_register_device(vfd, VFL_TYPE_GRABBER, 0);
+ 	if (ret) {
+@@ -1026,7 +1026,6 @@ static int vim2m_probe(struct platform_device *pdev)
+ 	v4l2_info(&dev->v4l2_dev,
+ 			"Device registered as /dev/video%d\n", vfd->num);
+ 
+-	timer_setup(&dev->timer, device_isr, 0);
+ 	platform_set_drvdata(pdev, dev);
+ 
+ 	dev->m2m_dev = v4l2_m2m_init(&m2m_ops);
+@@ -1083,7 +1082,6 @@ static int vim2m_remove(struct platform_device *pdev)
+ 	media_device_cleanup(&dev->mdev);
+ #endif
+ 	v4l2_m2m_release(dev->m2m_dev);
+-	del_timer_sync(&dev->timer);
+ 	video_unregister_device(&dev->vfd);
+ 	v4l2_device_unregister(&dev->v4l2_dev);
+ 
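
The vim2m conversion swaps the timer for a delayed_work item: device_run() schedules the simulated interrupt with schedule_delayed_work(), stop_streaming cancels it synchronously, and the del_timer_sync() in remove disappears. A minimal kernel-context sketch of that lifecycle; the fake_dev structure and handlers are illustrative:

#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>

struct fake_dev {
	struct delayed_work work_run;
};

/* Runs in process context after the delay, like the simulated irq. */
static void fake_irq(struct work_struct *w)
{
	struct fake_dev *dev = container_of(w, struct fake_dev, work_run.work);

	(void)dev;	/* complete the current buffer here */
}

static void fake_probe(struct fake_dev *dev)
{
	INIT_DELAYED_WORK(&dev->work_run, fake_irq);
}

static void fake_run(struct fake_dev *dev, unsigned int msec)
{
	schedule_delayed_work(&dev->work_run, msecs_to_jiffies(msec));
}

static void fake_stop(struct fake_dev *dev)
{
	/* Waits for a pending/running handler; nothing fires afterwards. */
	cancel_delayed_work_sync(&dev->work_run);
}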
+diff --git a/drivers/media/platform/vivid/vivid-vid-cap.c b/drivers/media/platform/vivid/vivid-vid-cap.c
+index 3b09ffceefd5..2e273f4dfc29 100644
+--- a/drivers/media/platform/vivid/vivid-vid-cap.c
++++ b/drivers/media/platform/vivid/vivid-vid-cap.c
+@@ -1724,7 +1724,7 @@ int vidioc_s_edid(struct file *file, void *_fh,
+ 		return -E2BIG;
+ 	}
+ 	phys_addr = cec_get_edid_phys_addr(edid->edid, edid->blocks * 128, NULL);
+-	ret = cec_phys_addr_validate(phys_addr, &phys_addr, NULL);
++	ret = v4l2_phys_addr_validate(phys_addr, &phys_addr, NULL);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1740,7 +1740,7 @@ set_phys_addr:
+ 
+ 	for (i = 0; i < MAX_OUTPUTS && dev->cec_tx_adap[i]; i++)
+ 		cec_s_phys_addr(dev->cec_tx_adap[i],
+-				cec_phys_addr_for_input(phys_addr, i + 1),
++				v4l2_phys_addr_for_input(phys_addr, i + 1),
+ 				false);
+ 	return 0;
+ }
+diff --git a/drivers/media/platform/vivid/vivid-vid-common.c b/drivers/media/platform/vivid/vivid-vid-common.c
+index 2079861d2270..e108e9befb77 100644
+--- a/drivers/media/platform/vivid/vivid-vid-common.c
++++ b/drivers/media/platform/vivid/vivid-vid-common.c
+@@ -863,7 +863,7 @@ int vidioc_g_edid(struct file *file, void *_fh,
+ 	if (edid->blocks > dev->edid_blocks - edid->start_block)
+ 		edid->blocks = dev->edid_blocks - edid->start_block;
+ 	if (adap)
+-		cec_set_edid_phys_addr(dev->edid, dev->edid_blocks * 128, adap->phys_addr);
++		v4l2_set_edid_phys_addr(dev->edid, dev->edid_blocks * 128, adap->phys_addr);
+ 	memcpy(edid->edid, dev->edid + edid->start_block * 128, edid->blocks * 128);
+ 	return 0;
+ }
+diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c
+index c7c600c1f63b..a24b40dfec97 100644
+--- a/drivers/media/v4l2-core/v4l2-dv-timings.c
++++ b/drivers/media/v4l2-core/v4l2-dv-timings.c
+@@ -15,6 +15,7 @@
+ #include <media/v4l2-dv-timings.h>
+ #include <linux/math64.h>
+ #include <linux/hdmi.h>
++#include <media/cec.h>
+ 
+ MODULE_AUTHOR("Hans Verkuil");
+ MODULE_DESCRIPTION("V4L2 DV Timings Helper Functions");
+@@ -942,3 +943,153 @@ v4l2_hdmi_rx_colorimetry(const struct hdmi_avi_infoframe *avi,
+ 	return c;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_hdmi_rx_colorimetry);
++
++/**
++ * v4l2_get_edid_phys_addr() - find and return the physical address
++ *
++ * @edid:	pointer to the EDID data
++ * @size:	size in bytes of the EDID data
++ * @offset:	If not %NULL then the location of the physical address
++ *		bytes in the EDID will be returned here. This is set to 0
++ *		if no physical address is found.
++ *
++ * Return: the physical address or CEC_PHYS_ADDR_INVALID if there is none.
++ */
++u16 v4l2_get_edid_phys_addr(const u8 *edid, unsigned int size,
++			    unsigned int *offset)
++{
++	unsigned int loc = cec_get_edid_spa_location(edid, size);
++
++	if (offset)
++		*offset = loc;
++	if (loc == 0)
++		return CEC_PHYS_ADDR_INVALID;
++	return (edid[loc] << 8) | edid[loc + 1];
++}
++EXPORT_SYMBOL_GPL(v4l2_get_edid_phys_addr);
++
++/**
++ * v4l2_set_edid_phys_addr() - find and set the physical address
++ *
++ * @edid:	pointer to the EDID data
++ * @size:	size in bytes of the EDID data
++ * @phys_addr:	the new physical address
++ *
++ * This function finds the location of the physical address in the EDID,
++ * fills in the given physical address, and updates the checksum
++ * at the end of the EDID block. It does nothing if the EDID doesn't
++ * contain a physical address.
++ */
++void v4l2_set_edid_phys_addr(u8 *edid, unsigned int size, u16 phys_addr)
++{
++	unsigned int loc = cec_get_edid_spa_location(edid, size);
++	u8 sum = 0;
++	unsigned int i;
++
++	if (loc == 0)
++		return;
++	edid[loc] = phys_addr >> 8;
++	edid[loc + 1] = phys_addr & 0xff;
++	loc &= ~0x7f;
++
++	/* update the checksum */
++	for (i = loc; i < loc + 127; i++)
++		sum += edid[i];
++	edid[i] = 256 - sum;
++}
++EXPORT_SYMBOL_GPL(v4l2_set_edid_phys_addr);
++
++/**
++ * v4l2_phys_addr_for_input() - calculate the PA for an input
++ *
++ * @phys_addr:	the physical address of the parent
++ * @input:	the number of the input port, must be between 1 and 15
++ *
++ * This function calculates a new physical address based on the input
++ * port number. For example:
++ *
++ * PA = 0.0.0.0 and input = 2 becomes 2.0.0.0
++ *
++ * PA = 3.0.0.0 and input = 1 becomes 3.1.0.0
++ *
++ * PA = 3.2.1.0 and input = 5 becomes 3.2.1.5
++ *
++ * PA = 3.2.1.3 and input = 5 becomes f.f.f.f since it maxed out the depth.
++ *
++ * Return: the new physical address or CEC_PHYS_ADDR_INVALID.
++ */
++u16 v4l2_phys_addr_for_input(u16 phys_addr, u8 input)
++{
++	/* Check if input is sane */
++	if (WARN_ON(input == 0 || input > 0xf))
++		return CEC_PHYS_ADDR_INVALID;
++
++	if (phys_addr == 0)
++		return input << 12;
++
++	if ((phys_addr & 0x0fff) == 0)
++		return phys_addr | (input << 8);
++
++	if ((phys_addr & 0x00ff) == 0)
++		return phys_addr | (input << 4);
++
++	if ((phys_addr & 0x000f) == 0)
++		return phys_addr | input;
++
++	/*
++	 * All nibbles are used so no valid physical addresses can be assigned
++	 * to the input.
++	 */
++	return CEC_PHYS_ADDR_INVALID;
++}
++EXPORT_SYMBOL_GPL(v4l2_phys_addr_for_input);
++
++/**
++ * v4l2_phys_addr_validate() - validate a physical address from an EDID
++ *
++ * @phys_addr:	the physical address to validate
++ * @parent:	if not %NULL, then this is filled with the parent's PA.
++ * @port:	if not %NULL, then this is filled with the input port.
++ *
++ * This validates a physical address as read from an EDID. If the
++ * PA is invalid (such as 1.0.1.0 since '0' is only allowed at the end),
++ * then it will return -EINVAL.
++ *
++ * The parent PA is returned via %parent and the input port via
++ * %port. For example:
++ *
++ * PA = 0.0.0.0: has parent 0.0.0.0 and input port 0.
++ *
++ * PA = 1.0.0.0: has parent 0.0.0.0 and input port 1.
++ *
++ * PA = 3.2.0.0: has parent 3.0.0.0 and input port 2.
++ *
++ * PA = f.f.f.f: has parent f.f.f.f and input port 0.
++ *
++ * Return: 0 if the PA is valid, -EINVAL if not.
++ */
++int v4l2_phys_addr_validate(u16 phys_addr, u16 *parent, u16 *port)
++{
++	int i;
++
++	if (parent)
++		*parent = phys_addr;
++	if (port)
++		*port = 0;
++	if (phys_addr == CEC_PHYS_ADDR_INVALID)
++		return 0;
++	for (i = 0; i < 16; i += 4)
++		if (phys_addr & (0xf << i))
++			break;
++	if (i == 16)
++		return 0;
++	if (parent)
++		*parent = phys_addr & (0xfff0 << i);
++	if (port)
++		*port = (phys_addr >> i) & 0xf;
++	for (i += 4; i < 16; i += 4)
++		if ((phys_addr & (0xf << i)) == 0)
++			return -EINVAL;
++	return 0;
++}
++EXPORT_SYMBOL_GPL(v4l2_phys_addr_validate);
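
The two address helpers above work nibble by nibble: v4l2_phys_addr_for_input() fills the first zero nibble from the left with the input number, and v4l2_phys_addr_validate() finds the lowest non-zero nibble and requires every nibble above it to be non-zero. A runnable userspace check of the kernel-doc examples; the function body mirrors the patch, only the PA_INVALID define and the test harness are added:

#include <stdio.h>
#include <stdint.h>

#define PA_INVALID 0xffff	/* stands in for CEC_PHYS_ADDR_INVALID */

static uint16_t pa_for_input(uint16_t phys_addr, uint8_t input)
{
	if (input == 0 || input > 0xf)
		return PA_INVALID;
	if (phys_addr == 0)
		return input << 12;
	if ((phys_addr & 0x0fff) == 0)
		return phys_addr | (input << 8);
	if ((phys_addr & 0x00ff) == 0)
		return phys_addr | (input << 4);
	if ((phys_addr & 0x000f) == 0)
		return phys_addr | input;
	return PA_INVALID;	/* all four nibbles already in use */
}

int main(void)
{
	/* Matches the kernel-doc examples: */
	printf("%04x\n", pa_for_input(0x0000, 2));	/* 2000 -> 2.0.0.0 */
	printf("%04x\n", pa_for_input(0x3000, 1));	/* 3100 -> 3.1.0.0 */
	printf("%04x\n", pa_for_input(0x3210, 5));	/* 3215 -> 3.2.1.5 */
	printf("%04x\n", pa_for_input(0x3213, 5));	/* ffff -> depth maxed */
	return 0;
}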
+diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
+index 11841f4b7b2b..dd938a5d0409 100644
+--- a/drivers/mfd/Kconfig
++++ b/drivers/mfd/Kconfig
+@@ -509,10 +509,10 @@ config INTEL_SOC_PMIC
+ 	bool "Support for Crystal Cove PMIC"
+ 	depends on ACPI && HAS_IOMEM && I2C=y && GPIOLIB && COMMON_CLK
+ 	depends on X86 || COMPILE_TEST
++	depends on I2C_DESIGNWARE_PLATFORM=y
+ 	select MFD_CORE
+ 	select REGMAP_I2C
+ 	select REGMAP_IRQ
+-	select I2C_DESIGNWARE_PLATFORM
+ 	help
+ 	  Select this option to enable support for Crystal Cove PMIC
+ 	  on some Intel SoC systems. The PMIC provides ADC, GPIO,
+@@ -538,10 +538,10 @@ config INTEL_SOC_PMIC_CHTWC
+ 	bool "Support for Intel Cherry Trail Whiskey Cove PMIC"
+ 	depends on ACPI && HAS_IOMEM && I2C=y && COMMON_CLK
+ 	depends on X86 || COMPILE_TEST
++	depends on I2C_DESIGNWARE_PLATFORM=y
+ 	select MFD_CORE
+ 	select REGMAP_I2C
+ 	select REGMAP_IRQ
+-	select I2C_DESIGNWARE_PLATFORM
+ 	help
+ 	  Select this option to enable support for the Intel Cherry Trail
+ 	  Whiskey Cove PMIC found on some Intel Cherry Trail systems.
+@@ -1403,9 +1403,9 @@ config MFD_TPS65217
+ config MFD_TPS68470
+ 	bool "TI TPS68470 Power Management / LED chips"
+ 	depends on ACPI && I2C=y
++	depends on I2C_DESIGNWARE_PLATFORM=y
+ 	select MFD_CORE
+ 	select REGMAP_I2C
+-	select I2C_DESIGNWARE_PLATFORM
+ 	help
+ 	  If you say yes here you get support for the TPS68470 series of
+ 	  Power Management / LED chips.
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index 45baf5d9120e..61f0faddfd88 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -636,6 +636,13 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		host->ops.card_busy = renesas_sdhi_card_busy;
+ 		host->ops.start_signal_voltage_switch =
+ 			renesas_sdhi_start_signal_voltage_switch;
++
++		/* SDR and HS200/400 registers require a HW reset */
++		if (of_data && of_data->scc_offset) {
++			priv->scc_ctl = host->ctl + of_data->scc_offset;
++			host->mmc->caps |= MMC_CAP_HW_RESET;
++			host->hw_reset = renesas_sdhi_hw_reset;
++		}
+ 	}
+ 
+ 	/* Originally registers were 16 bit apart, could be 32 or 64 nowadays */
+@@ -693,8 +700,6 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		const struct renesas_sdhi_scc *taps = of_data->taps;
+ 		bool hit = false;
+ 
+-		host->mmc->caps |= MMC_CAP_HW_RESET;
+-
+ 		for (i = 0; i < of_data->taps_num; i++) {
+ 			if (taps[i].clk_rate == 0 ||
+ 			    taps[i].clk_rate == host->mmc->f_max) {
+@@ -707,12 +712,10 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		if (!hit)
+ 			dev_warn(&host->pdev->dev, "Unknown clock rate for SDR104\n");
+ 
+-		priv->scc_ctl = host->ctl + of_data->scc_offset;
+ 		host->init_tuning = renesas_sdhi_init_tuning;
+ 		host->prepare_tuning = renesas_sdhi_prepare_tuning;
+ 		host->select_tuning = renesas_sdhi_select_tuning;
+ 		host->check_scc_error = renesas_sdhi_check_scc_error;
+-		host->hw_reset = renesas_sdhi_hw_reset;
+ 		host->prepare_hs400_tuning =
+ 			renesas_sdhi_prepare_hs400_tuning;
+ 		host->hs400_downgrade = renesas_sdhi_disable_scc;
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index c4115bae5db1..71794391f48f 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -1577,6 +1577,8 @@ static const struct pci_device_id pci_ids[] = {
+ 	SDHCI_PCI_DEVICE(INTEL, CNPH_SD,   intel_byt_sd),
+ 	SDHCI_PCI_DEVICE(INTEL, ICP_EMMC,  intel_glk_emmc),
+ 	SDHCI_PCI_DEVICE(INTEL, ICP_SD,    intel_byt_sd),
++	SDHCI_PCI_DEVICE(INTEL, CML_EMMC,  intel_glk_emmc),
++	SDHCI_PCI_DEVICE(INTEL, CML_SD,    intel_byt_sd),
+ 	SDHCI_PCI_DEVICE(O2, 8120,     o2),
+ 	SDHCI_PCI_DEVICE(O2, 8220,     o2),
+ 	SDHCI_PCI_DEVICE(O2, 8221,     o2),
+diff --git a/drivers/mmc/host/sdhci-pci.h b/drivers/mmc/host/sdhci-pci.h
+index 2ef0bdca9197..6f04a62b2998 100644
+--- a/drivers/mmc/host/sdhci-pci.h
++++ b/drivers/mmc/host/sdhci-pci.h
+@@ -50,6 +50,8 @@
+ #define PCI_DEVICE_ID_INTEL_CNPH_SD	0xa375
+ #define PCI_DEVICE_ID_INTEL_ICP_EMMC	0x34c4
+ #define PCI_DEVICE_ID_INTEL_ICP_SD	0x34f8
++#define PCI_DEVICE_ID_INTEL_CML_EMMC	0x02c4
++#define PCI_DEVICE_ID_INTEL_CML_SD	0x02f5
+ 
+ #define PCI_DEVICE_ID_SYSKONNECT_8000	0x8000
+ #define PCI_DEVICE_ID_VIA_95D0		0x95d0
+diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
+index 91ca77c7571c..b4347806a59e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
++++ b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
+@@ -77,10 +77,13 @@
+ #define IWL_22000_HR_FW_PRE		"iwlwifi-Qu-a0-hr-a0-"
+ #define IWL_22000_HR_CDB_FW_PRE		"iwlwifi-QuIcp-z0-hrcdb-a0-"
+ #define IWL_22000_HR_A_F0_FW_PRE	"iwlwifi-QuQnj-f0-hr-a0-"
+-#define IWL_22000_HR_B_FW_PRE		"iwlwifi-Qu-b0-hr-b0-"
++#define IWL_22000_HR_B_F0_FW_PRE	"iwlwifi-Qu-b0-hr-b0-"
++#define IWL_22000_QU_B_HR_B_FW_PRE	"iwlwifi-Qu-b0-hr-b0-"
++#define IWL_22000_HR_B_FW_PRE		"iwlwifi-QuQnj-b0-hr-b0-"
+ #define IWL_22000_JF_B0_FW_PRE		"iwlwifi-QuQnj-a0-jf-b0-"
+ #define IWL_22000_HR_A0_FW_PRE		"iwlwifi-QuQnj-a0-hr-a0-"
+ #define IWL_22000_SU_Z0_FW_PRE		"iwlwifi-su-z0-"
++#define IWL_QU_B_JF_B_FW_PRE		"iwlwifi-Qu-b0-jf-b0-"
+ 
+ #define IWL_22000_HR_MODULE_FIRMWARE(api) \
+ 	IWL_22000_HR_FW_PRE __stringify(api) ".ucode"
+@@ -88,7 +91,11 @@
+ 	IWL_22000_JF_FW_PRE __stringify(api) ".ucode"
+ #define IWL_22000_HR_A_F0_QNJ_MODULE_FIRMWARE(api) \
+ 	IWL_22000_HR_A_F0_FW_PRE __stringify(api) ".ucode"
+-#define IWL_22000_HR_B_QNJ_MODULE_FIRMWARE(api) \
++#define IWL_22000_HR_B_F0_QNJ_MODULE_FIRMWARE(api) \
++	IWL_22000_HR_B_F0_FW_PRE __stringify(api) ".ucode"
++#define IWL_22000_QU_B_HR_B_MODULE_FIRMWARE(api) \
++	IWL_22000_QU_B_HR_B_FW_PRE __stringify(api) ".ucode"
++#define IWL_22000_HR_B_QNJ_MODULE_FIRMWARE(api)	\
+ 	IWL_22000_HR_B_FW_PRE __stringify(api) ".ucode"
+ #define IWL_22000_JF_B0_QNJ_MODULE_FIRMWARE(api) \
+ 	IWL_22000_JF_B0_FW_PRE __stringify(api) ".ucode"
+@@ -96,6 +103,8 @@
+ 	IWL_22000_HR_A0_FW_PRE __stringify(api) ".ucode"
+ #define IWL_22000_SU_Z0_MODULE_FIRMWARE(api) \
+ 	IWL_22000_SU_Z0_FW_PRE __stringify(api) ".ucode"
++#define IWL_QU_B_JF_B_MODULE_FIRMWARE(api) \
++	IWL_QU_B_JF_B_FW_PRE __stringify(api) ".ucode"
+ 
+ #define NVM_HW_SECTION_NUM_FAMILY_22000		10
+ 
+@@ -190,7 +199,54 @@ const struct iwl_cfg iwl22000_2ac_cfg_jf = {
+ 
+ const struct iwl_cfg iwl22000_2ax_cfg_hr = {
+ 	.name = "Intel(R) Dual Band Wireless AX 22000",
+-	.fw_name_pre = IWL_22000_HR_FW_PRE,
++	.fw_name_pre = IWL_22000_QU_B_HR_B_FW_PRE,
++	IWL_DEVICE_22500,
++	/*
++	 * This device doesn't support receiving BlockAck with a large bitmap
++	 * so we need to restrict the size of transmitted aggregation to the
++	 * HT size; mac80211 would otherwise pick the HE max (256) by default.
++	 */
++	.max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
++};
++
++/*
++ * All JF radio modules are part of the 9000 series, but the MAC part
++ * looks more like 22000.  That's why this device is here, but called
++ * 9560 nevertheless.
++ */
++const struct iwl_cfg iwl9461_2ac_cfg_qu_b0_jf_b0 = {
++	.name = "Intel(R) Wireless-AC 9461",
++	.fw_name_pre = IWL_QU_B_JF_B_FW_PRE,
++	IWL_DEVICE_22500,
++};
++
++const struct iwl_cfg iwl9462_2ac_cfg_qu_b0_jf_b0 = {
++	.name = "Intel(R) Wireless-AC 9462",
++	.fw_name_pre = IWL_QU_B_JF_B_FW_PRE,
++	IWL_DEVICE_22500,
++};
++
++const struct iwl_cfg iwl9560_2ac_cfg_qu_b0_jf_b0 = {
++	.name = "Intel(R) Wireless-AC 9560",
++	.fw_name_pre = IWL_QU_B_JF_B_FW_PRE,
++	IWL_DEVICE_22500,
++};
++
++const struct iwl_cfg killer1550i_2ac_cfg_qu_b0_jf_b0 = {
++	.name = "Killer (R) Wireless-AC 1550i Wireless Network Adapter (9560NGW)",
++	.fw_name_pre = IWL_QU_B_JF_B_FW_PRE,
++	IWL_DEVICE_22500,
++};
++
++const struct iwl_cfg killer1550s_2ac_cfg_qu_b0_jf_b0 = {
++	.name = "Killer (R) Wireless-AC 1550s Wireless Network Adapter (9560NGW)",
++	.fw_name_pre = IWL_QU_B_JF_B_FW_PRE,
++	IWL_DEVICE_22500,
++};
++
++const struct iwl_cfg iwl22000_2ax_cfg_jf = {
++	.name = "Intel(R) Dual Band Wireless AX 22000",
++	.fw_name_pre = IWL_QU_B_JF_B_FW_PRE,
+ 	IWL_DEVICE_22500,
+ 	/*
+ 	 * This device doesn't support receiving BlockAck with a large bitmap
+@@ -264,7 +320,10 @@ const struct iwl_cfg iwl22560_2ax_cfg_su_cdb = {
+ MODULE_FIRMWARE(IWL_22000_HR_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL_22000_JF_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL_22000_HR_A_F0_QNJ_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
++MODULE_FIRMWARE(IWL_22000_HR_B_F0_QNJ_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
++MODULE_FIRMWARE(IWL_22000_QU_B_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL_22000_HR_B_QNJ_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL_22000_JF_B0_QNJ_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL_22000_HR_A0_QNJ_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL_22000_SU_Z0_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
++MODULE_FIRMWARE(IWL_QU_B_JF_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+index 12fddcf15bab..2e9fd7a30398 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+@@ -574,11 +574,18 @@ extern const struct iwl_cfg iwl22000_2ac_cfg_hr;
+ extern const struct iwl_cfg iwl22000_2ac_cfg_hr_cdb;
+ extern const struct iwl_cfg iwl22000_2ac_cfg_jf;
+ extern const struct iwl_cfg iwl22000_2ax_cfg_hr;
++extern const struct iwl_cfg iwl9461_2ac_cfg_qu_b0_jf_b0;
++extern const struct iwl_cfg iwl9462_2ac_cfg_qu_b0_jf_b0;
++extern const struct iwl_cfg iwl9560_2ac_cfg_qu_b0_jf_b0;
++extern const struct iwl_cfg killer1550i_2ac_cfg_qu_b0_jf_b0;
++extern const struct iwl_cfg killer1550s_2ac_cfg_qu_b0_jf_b0;
++extern const struct iwl_cfg iwl22000_2ax_cfg_jf;
+ extern const struct iwl_cfg iwl22000_2ax_cfg_qnj_hr_a0_f0;
++extern const struct iwl_cfg iwl22000_2ax_cfg_qnj_hr_b0_f0;
+ extern const struct iwl_cfg iwl22000_2ax_cfg_qnj_hr_b0;
+ extern const struct iwl_cfg iwl22000_2ax_cfg_qnj_jf_b0;
+ extern const struct iwl_cfg iwl22000_2ax_cfg_qnj_hr_a0;
+ extern const struct iwl_cfg iwl22560_2ax_cfg_su_cdb;
+-#endif /* CONFIG_IWLMVM */
++#endif /* CPTCFG_IWLMVM || CPTCFG_IWLFMAC */
+ 
+ #endif /* __IWL_CONFIG_H__ */
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 5d65500a8aa7..0982bd99b1c3 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -601,6 +601,7 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ 	{IWL_PCI_DEVICE(0x2526, 0x2030, iwl9560_2ac_cfg_soc)},
+ 	{IWL_PCI_DEVICE(0x2526, 0x2034, iwl9560_2ac_cfg_soc)},
+ 	{IWL_PCI_DEVICE(0x2526, 0x4010, iwl9260_2ac_cfg)},
++	{IWL_PCI_DEVICE(0x2526, 0x4018, iwl9260_2ac_cfg)},
+ 	{IWL_PCI_DEVICE(0x2526, 0x4030, iwl9560_2ac_cfg)},
+ 	{IWL_PCI_DEVICE(0x2526, 0x4034, iwl9560_2ac_cfg_soc)},
+ 	{IWL_PCI_DEVICE(0x2526, 0x40A4, iwl9460_2ac_cfg)},
+@@ -696,34 +697,33 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ 	{IWL_PCI_DEVICE(0x31DC, 0x40A4, iwl9462_2ac_cfg_shared_clk)},
+ 	{IWL_PCI_DEVICE(0x31DC, 0x4234, iwl9560_2ac_cfg_shared_clk)},
+ 	{IWL_PCI_DEVICE(0x31DC, 0x42A4, iwl9462_2ac_cfg_shared_clk)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x0030, iwl9560_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x0034, iwl9560_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x0038, iwl9560_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x003C, iwl9560_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x0060, iwl9461_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x0064, iwl9461_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x00A0, iwl9462_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x00A4, iwl9462_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x0230, iwl9560_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x0234, iwl9560_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x0238, iwl9560_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x023C, iwl9560_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x0260, iwl9461_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x0264, iwl9461_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x02A0, iwl9462_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x02A4, iwl9462_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x1010, iwl9260_2ac_cfg)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x1030, iwl9560_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x1210, iwl9260_2ac_cfg)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x1552, iwl9560_killer_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x2030, iwl9560_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x2034, iwl9560_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x4030, iwl9560_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x4034, iwl9560_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x40A4, iwl9462_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x4234, iwl9560_2ac_cfg_soc)},
+-	{IWL_PCI_DEVICE(0x34F0, 0x42A4, iwl9462_2ac_cfg_soc)},
++
++	{IWL_PCI_DEVICE(0x34F0, 0x0030, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x0034, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x0038, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x003C, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x0060, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x0064, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x00A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x00A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x0230, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x0234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x0238, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x023C, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x0260, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x0264, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x02A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x02A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x1551, killer1550s_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x1552, killer1550i_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x2030, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x2034, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x4030, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x4034, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x40A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x4234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++	{IWL_PCI_DEVICE(0x34F0, 0x42A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++
+ 	{IWL_PCI_DEVICE(0x3DF0, 0x0030, iwl9560_2ac_cfg_soc)},
+ 	{IWL_PCI_DEVICE(0x3DF0, 0x0034, iwl9560_2ac_cfg_soc)},
+ 	{IWL_PCI_DEVICE(0x3DF0, 0x0038, iwl9560_2ac_cfg_soc)},
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_mac_common.c b/drivers/net/wireless/mediatek/mt76/mt76x2_mac_common.c
+index 6542644bc325..cec31f0c3017 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2_mac_common.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_mac_common.c
+@@ -402,7 +402,7 @@ void mt76x2_mac_write_txwi(struct mt76x2_dev *dev, struct mt76x2_txwi *txwi,
+ 		ccmp_pn[6] = pn >> 32;
+ 		ccmp_pn[7] = pn >> 40;
+ 		txwi->iv = *((__le32 *)&ccmp_pn[0]);
+-		txwi->eiv = *((__le32 *)&ccmp_pn[1]);
++		txwi->eiv = *((__le32 *)&ccmp_pn[4]);
+ 	}
+ 
+ 	spin_lock_bh(&dev->mt76.lock);
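
The one-byte change above is subtle enough to deserve a note: ccmp_pn[] is laid out the way the CCMP header expects (PN0, PN1, reserved, ExtIV/key-id byte, PN2..PN5), so the TXWI iv field takes bytes 0..3 and eiv must take bytes 4..7. Reading from &ccmp_pn[1] produced a shifted 4-byte window overlapping iv. A minimal userspace sketch of the layout (not part of the patch; values are made up, and memcpy stands in for the driver's pointer cast):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            uint64_t pn = 0x0000aabbccddeeffULL;    /* hypothetical 48-bit PN */
            uint8_t ccmp_pn[8];

            ccmp_pn[0] = pn & 0xff;                 /* PN0 */
            ccmp_pn[1] = (pn >> 8) & 0xff;          /* PN1 */
            ccmp_pn[2] = 0;                         /* reserved */
            ccmp_pn[3] = 0x20;                      /* ExtIV bit | key id << 6 */
            ccmp_pn[4] = (pn >> 16) & 0xff;         /* PN2 */
            ccmp_pn[5] = (pn >> 24) & 0xff;         /* PN3 */
            ccmp_pn[6] = (pn >> 32) & 0xff;         /* PN4 */
            ccmp_pn[7] = (pn >> 40) & 0xff;         /* PN5 */

            uint32_t iv, eiv_old, eiv_new;
            memcpy(&iv, &ccmp_pn[0], 4);            /* bytes 0..3 -> txwi->iv  */
            memcpy(&eiv_old, &ccmp_pn[1], 4);       /* bytes 1..4: overlaps iv */
            memcpy(&eiv_new, &ccmp_pn[4], 4);       /* bytes 4..7 -> txwi->eiv */

            printf("iv=%08x eiv(old)=%08x eiv(fixed)=%08x\n",
                   (unsigned)iv, (unsigned)eiv_old, (unsigned)eiv_new);
            return 0;
    }

With the old offset, eiv shared three bytes with iv; the fixed offset yields the upper packet-number bytes, matching the per-byte assignments in the hunk.
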
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 67dec8860bf3..565bddcfd130 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -206,7 +206,7 @@ static LIST_HEAD(nvme_fc_lport_list);
+ static DEFINE_IDA(nvme_fc_local_port_cnt);
+ static DEFINE_IDA(nvme_fc_ctrl_cnt);
+ 
+-
++static struct workqueue_struct *nvme_fc_wq;
+ 
+ /*
+  * These items are short-term. They will eventually be moved into
+@@ -2053,7 +2053,7 @@ nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg)
+ 	 */
+ 	if (ctrl->ctrl.state == NVME_CTRL_CONNECTING) {
+ 		active = atomic_xchg(&ctrl->err_work_active, 1);
+-		if (!active && !schedule_work(&ctrl->err_work)) {
++		if (!active && !queue_work(nvme_fc_wq, &ctrl->err_work)) {
+ 			atomic_set(&ctrl->err_work_active, 0);
+ 			WARN_ON(1);
+ 		}
+@@ -3321,6 +3321,10 @@ static int __init nvme_fc_init_module(void)
+ {
+ 	int ret;
+ 
++	nvme_fc_wq = alloc_workqueue("nvme_fc_wq", WQ_MEM_RECLAIM, 0);
++	if (!nvme_fc_wq)
++		return -ENOMEM;
++
+ 	/*
+ 	 * NOTE:
+ 	 * It is expected that in the future the kernel will combine
+@@ -3338,7 +3342,8 @@ static int __init nvme_fc_init_module(void)
+ 	fc_class = class_create(THIS_MODULE, "fc");
+ 	if (IS_ERR(fc_class)) {
+ 		pr_err("couldn't register class fc\n");
+-		return PTR_ERR(fc_class);
++		ret = PTR_ERR(fc_class);
++		goto out_destroy_wq;
+ 	}
+ 
+ 	/*
+@@ -3362,6 +3367,9 @@ out_destroy_device:
+ 	device_destroy(fc_class, MKDEV(0, 0));
+ out_destroy_class:
+ 	class_destroy(fc_class);
++out_destroy_wq:
++	destroy_workqueue(nvme_fc_wq);
++
+ 	return ret;
+ }
+ 
+@@ -3378,6 +3386,7 @@ static void __exit nvme_fc_exit_module(void)
+ 
+ 	device_destroy(fc_class, MKDEV(0, 0));
+ 	class_destroy(fc_class);
++	destroy_workqueue(nvme_fc_wq);
+ }
+ 
+ module_init(nvme_fc_init_module);
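
For context on the hunks above: schedule_work() puts items on the shared system workqueue, which carries no forward-progress guarantee under memory pressure, while a queue allocated with WQ_MEM_RECLAIM gets a dedicated rescuer thread - which matters for an error-recovery path in a storage driver. The allocate/teardown pairing added to module init/exit follows the usual shape; a stripped-down sketch with hypothetical demo_* names:

    #include <linux/module.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *demo_wq;

    static void demo_fn(struct work_struct *w) { }
    static DECLARE_WORK(demo_work, demo_fn);

    static int __init demo_init(void)
    {
            /* WQ_MEM_RECLAIM gives the queue a rescuer thread, so work
             * queued from reclaim-sensitive paths cannot stall forever
             * waiting for a new worker to be created. */
            demo_wq = alloc_workqueue("demo_wq", WQ_MEM_RECLAIM, 0);
            if (!demo_wq)
                    return -ENOMEM;
            queue_work(demo_wq, &demo_work);
            return 0;
    }

    static void __exit demo_exit(void)
    {
            destroy_workqueue(demo_wq);     /* drains pending work first */
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");

destroy_workqueue() drains the queue before freeing it, so the exit path above is safe as long as nothing requeues work afterwards.
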
+diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
+index acd50920c2ff..b57ee79f6d69 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-host.c
++++ b/drivers/pci/controller/dwc/pcie-designware-host.c
+@@ -356,7 +356,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 		dev_err(dev, "Missing *config* reg space\n");
+ 	}
+ 
+-	bridge = pci_alloc_host_bridge(0);
++	bridge = devm_pci_alloc_host_bridge(dev, 0);
+ 	if (!bridge)
+ 		return -ENOMEM;
+ 
+@@ -367,7 +367,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 
+ 	ret = devm_request_pci_bus_resources(dev, &bridge->windows);
+ 	if (ret)
+-		goto error;
++		return ret;
+ 
+ 	/* Get the I/O and memory ranges from DT */
+ 	resource_list_for_each_entry_safe(win, tmp, &bridge->windows) {
+@@ -411,8 +411,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 						resource_size(pp->cfg));
+ 		if (!pci->dbi_base) {
+ 			dev_err(dev, "Error with ioremap\n");
+-			ret = -ENOMEM;
+-			goto error;
++			return -ENOMEM;
+ 		}
+ 	}
+ 
+@@ -423,8 +422,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 					pp->cfg0_base, pp->cfg0_size);
+ 		if (!pp->va_cfg0_base) {
+ 			dev_err(dev, "Error with ioremap in function\n");
+-			ret = -ENOMEM;
+-			goto error;
++			return -ENOMEM;
+ 		}
+ 	}
+ 
+@@ -434,8 +432,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 						pp->cfg1_size);
+ 		if (!pp->va_cfg1_base) {
+ 			dev_err(dev, "Error with ioremap\n");
+-			ret = -ENOMEM;
+-			goto error;
++			return -ENOMEM;
+ 		}
+ 	}
+ 
+@@ -458,14 +455,14 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 			    pp->num_vectors == 0) {
+ 				dev_err(dev,
+ 					"Invalid number of vectors\n");
+-				goto error;
++				return -EINVAL;
+ 			}
+ 		}
+ 
+ 		if (!pp->ops->msi_host_init) {
+ 			ret = dw_pcie_allocate_domains(pp);
+ 			if (ret)
+-				goto error;
++				return ret;
+ 
+ 			if (pp->msi_irq)
+ 				irq_set_chained_handler_and_data(pp->msi_irq,
+@@ -474,7 +471,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 		} else {
+ 			ret = pp->ops->msi_host_init(pp);
+ 			if (ret < 0)
+-				goto error;
++				return ret;
+ 		}
+ 	}
+ 
+@@ -514,8 +511,6 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ err_free_msi:
+ 	if (pci_msi_enabled() && !pp->ops->msi_host_init)
+ 		dw_pcie_free_msi(pp);
+-error:
+-	pci_free_host_bridge(bridge);
+ 	return ret;
+ }
+ 
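
The net effect of this conversion: devm_pci_alloc_host_bridge() ties the bridge allocation to the device's resource list, so it is released automatically when probe fails or the device unbinds, and the shared error: label (with its pci_free_host_bridge() call) can go away; only the MSI teardown stays explicit. A rough sketch of the resulting shape, using a hypothetical demo_host_init() around the same PCI core calls seen here:

    #include <linux/pci.h>

    /* Hypothetical, shaped like dw_pcie_host_init() after this patch. */
    static int demo_host_init(struct device *dev)
    {
            struct pci_host_bridge *bridge;
            int ret;

            bridge = devm_pci_alloc_host_bridge(dev, 0);    /* devres-managed */
            if (!bridge)
                    return -ENOMEM;

            ret = devm_request_pci_bus_resources(dev, &bridge->windows);
            if (ret)
                    return ret;     /* no goto / pci_free_host_bridge() needed */

            /* ... ioremaps, MSI setup: each failure can simply return ... */

            return pci_scan_root_bus_bridge(bridge);
    }
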
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 87a8887fd4d3..e292801fff7f 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1091,7 +1091,6 @@ static int qcom_pcie_host_init(struct pcie_port *pp)
+ 	struct qcom_pcie *pcie = to_qcom_pcie(pci);
+ 	int ret;
+ 
+-	pm_runtime_get_sync(pci->dev);
+ 	qcom_ep_reset_assert(pcie);
+ 
+ 	ret = pcie->ops->init(pcie);
+@@ -1128,7 +1127,6 @@ err_disable_phy:
+ 	phy_power_off(pcie->phy);
+ err_deinit:
+ 	pcie->ops->deinit(pcie);
+-	pm_runtime_put(pci->dev);
+ 
+ 	return ret;
+ }
+@@ -1218,6 +1216,12 @@ static int qcom_pcie_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	pm_runtime_enable(dev);
++	ret = pm_runtime_get_sync(dev);
++	if (ret < 0) {
++		pm_runtime_disable(dev);
++		return ret;
++	}
++
+ 	pci->dev = dev;
+ 	pci->ops = &dw_pcie_ops;
+ 	pp = &pci->pp;
+@@ -1226,45 +1230,57 @@ static int qcom_pcie_probe(struct platform_device *pdev)
+ 
+ 	pcie->ops = of_device_get_match_data(dev);
+ 
+-	pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_LOW);
+-	if (IS_ERR(pcie->reset))
+-		return PTR_ERR(pcie->reset);
++	pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH);
++	if (IS_ERR(pcie->reset)) {
++		ret = PTR_ERR(pcie->reset);
++		goto err_pm_runtime_put;
++	}
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "parf");
+ 	pcie->parf = devm_ioremap_resource(dev, res);
+-	if (IS_ERR(pcie->parf))
+-		return PTR_ERR(pcie->parf);
++	if (IS_ERR(pcie->parf)) {
++		ret = PTR_ERR(pcie->parf);
++		goto err_pm_runtime_put;
++	}
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
+ 	pci->dbi_base = devm_pci_remap_cfg_resource(dev, res);
+-	if (IS_ERR(pci->dbi_base))
+-		return PTR_ERR(pci->dbi_base);
++	if (IS_ERR(pci->dbi_base)) {
++		ret = PTR_ERR(pci->dbi_base);
++		goto err_pm_runtime_put;
++	}
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi");
+ 	pcie->elbi = devm_ioremap_resource(dev, res);
+-	if (IS_ERR(pcie->elbi))
+-		return PTR_ERR(pcie->elbi);
++	if (IS_ERR(pcie->elbi)) {
++		ret = PTR_ERR(pcie->elbi);
++		goto err_pm_runtime_put;
++	}
+ 
+ 	pcie->phy = devm_phy_optional_get(dev, "pciephy");
+-	if (IS_ERR(pcie->phy))
+-		return PTR_ERR(pcie->phy);
++	if (IS_ERR(pcie->phy)) {
++		ret = PTR_ERR(pcie->phy);
++		goto err_pm_runtime_put;
++	}
+ 
+ 	ret = pcie->ops->get_resources(pcie);
+ 	if (ret)
+-		return ret;
++		goto err_pm_runtime_put;
+ 
+ 	pp->ops = &qcom_pcie_dw_ops;
+ 
+ 	if (IS_ENABLED(CONFIG_PCI_MSI)) {
+ 		pp->msi_irq = platform_get_irq_byname(pdev, "msi");
+-		if (pp->msi_irq < 0)
+-			return pp->msi_irq;
++		if (pp->msi_irq < 0) {
++			ret = pp->msi_irq;
++			goto err_pm_runtime_put;
++		}
+ 	}
+ 
+ 	ret = phy_init(pcie->phy);
+ 	if (ret) {
+ 		pm_runtime_disable(&pdev->dev);
+-		return ret;
++		goto err_pm_runtime_put;
+ 	}
+ 
+ 	platform_set_drvdata(pdev, pcie);
+@@ -1273,10 +1289,16 @@ static int qcom_pcie_probe(struct platform_device *pdev)
+ 	if (ret) {
+ 		dev_err(dev, "cannot initialize host\n");
+ 		pm_runtime_disable(&pdev->dev);
+-		return ret;
++		goto err_pm_runtime_put;
+ 	}
+ 
+ 	return 0;
++
++err_pm_runtime_put:
++	pm_runtime_put(dev);
++	pm_runtime_disable(dev);
++
++	return ret;
+ }
+ 
+ static const struct of_device_id qcom_pcie_match[] = {
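
Two things change in this probe: a runtime-PM reference is taken right after pm_runtime_enable(), with the get_sync() failure handled explicitly, and every later failure funnels through err_pm_runtime_put so both the reference and the enable are undone. One caveat worth knowing when reading it: pm_runtime_get_sync() bumps the usage count even when it returns an error, which is why later kernels commonly rebalance a failed get with pm_runtime_put_noidle(); the sketch below (hypothetical demo_* names, trivial bring-up) just mirrors the pattern as merged here:

    #include <linux/platform_device.h>
    #include <linux/pm_runtime.h>

    static int demo_hw_init(struct device *dev) { return 0; }  /* stand-in */

    static int demo_probe(struct platform_device *pdev)
    {
            struct device *dev = &pdev->dev;
            int ret;

            pm_runtime_enable(dev);
            ret = pm_runtime_get_sync(dev); /* device is powered from here on */
            if (ret < 0) {
                    pm_runtime_disable(dev);
                    return ret;
            }

            ret = demo_hw_init(dev);
            if (ret)
                    goto err_pm_runtime_put;

            return 0;

    err_pm_runtime_put:
            pm_runtime_put(dev);
            pm_runtime_disable(dev);
            return ret;
    }
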
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 28c64f84bfe7..06be52912dcd 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -5082,59 +5082,95 @@ static void quirk_switchtec_ntb_dma_alias(struct pci_dev *pdev)
+ 	pci_iounmap(pdev, mmio);
+ 	pci_disable_device(pdev);
+ }
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8531,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8532,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8533,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8534,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8535,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8536,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8543,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8544,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8545,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8546,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8551,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8552,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8553,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8554,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8555,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8556,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8561,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8562,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8563,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8564,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8565,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8566,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8571,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8572,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8573,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8574,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8575,
+-			quirk_switchtec_ntb_dma_alias);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8576,
+-			quirk_switchtec_ntb_dma_alias);
++#define SWITCHTEC_QUIRK(vid) \
++	DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, vid, \
++				quirk_switchtec_ntb_dma_alias)
++
++SWITCHTEC_QUIRK(0x8531);  /* PFX 24xG3 */
++SWITCHTEC_QUIRK(0x8532);  /* PFX 32xG3 */
++SWITCHTEC_QUIRK(0x8533);  /* PFX 48xG3 */
++SWITCHTEC_QUIRK(0x8534);  /* PFX 64xG3 */
++SWITCHTEC_QUIRK(0x8535);  /* PFX 80xG3 */
++SWITCHTEC_QUIRK(0x8536);  /* PFX 96xG3 */
++SWITCHTEC_QUIRK(0x8541);  /* PSX 24xG3 */
++SWITCHTEC_QUIRK(0x8542);  /* PSX 32xG3 */
++SWITCHTEC_QUIRK(0x8543);  /* PSX 48xG3 */
++SWITCHTEC_QUIRK(0x8544);  /* PSX 64xG3 */
++SWITCHTEC_QUIRK(0x8545);  /* PSX 80xG3 */
++SWITCHTEC_QUIRK(0x8546);  /* PSX 96xG3 */
++SWITCHTEC_QUIRK(0x8551);  /* PAX 24XG3 */
++SWITCHTEC_QUIRK(0x8552);  /* PAX 32XG3 */
++SWITCHTEC_QUIRK(0x8553);  /* PAX 48XG3 */
++SWITCHTEC_QUIRK(0x8554);  /* PAX 64XG3 */
++SWITCHTEC_QUIRK(0x8555);  /* PAX 80XG3 */
++SWITCHTEC_QUIRK(0x8556);  /* PAX 96XG3 */
++SWITCHTEC_QUIRK(0x8561);  /* PFXL 24XG3 */
++SWITCHTEC_QUIRK(0x8562);  /* PFXL 32XG3 */
++SWITCHTEC_QUIRK(0x8563);  /* PFXL 48XG3 */
++SWITCHTEC_QUIRK(0x8564);  /* PFXL 64XG3 */
++SWITCHTEC_QUIRK(0x8565);  /* PFXL 80XG3 */
++SWITCHTEC_QUIRK(0x8566);  /* PFXL 96XG3 */
++SWITCHTEC_QUIRK(0x8571);  /* PFXI 24XG3 */
++SWITCHTEC_QUIRK(0x8572);  /* PFXI 32XG3 */
++SWITCHTEC_QUIRK(0x8573);  /* PFXI 48XG3 */
++SWITCHTEC_QUIRK(0x8574);  /* PFXI 64XG3 */
++SWITCHTEC_QUIRK(0x8575);  /* PFXI 80XG3 */
++SWITCHTEC_QUIRK(0x8576);  /* PFXI 96XG3 */
++
++/*
++ * On Lenovo Thinkpad P50 SKUs with a Nvidia Quadro M1000M, the BIOS does
++ * not always reset the secondary Nvidia GPU between reboots if the system
++ * is configured to use Hybrid Graphics mode.  This results in the GPU
++ * being left in whatever state it was in during the *previous* boot, which
++ * causes spurious interrupts from the GPU, which in turn causes us to
++ * disable the wrong IRQ and end up breaking the touchpad.  Unsurprisingly,
++ * this also completely breaks nouveau.
++ *
++ * Luckily, it seems a simple reset of the Nvidia GPU brings it back to a
++ * clean state and fixes all these issues.
++ *
++ * When the machine is configured in Dedicated display mode, the issue
++ * doesn't occur.  Fortunately the GPU advertises NoReset+ when in this
++ * mode, so we can detect that and avoid resetting it.
++ */
++static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
++{
++	void __iomem *map;
++	int ret;
++
++	if (pdev->subsystem_vendor != PCI_VENDOR_ID_LENOVO ||
++	    pdev->subsystem_device != 0x222e ||
++	    !pdev->reset_fn)
++		return;
++
++	if (pci_enable_device_mem(pdev))
++		return;
++
++	/*
++	 * Based on nvkm_device_ctor() in
++	 * drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
++	 */
++	map = pci_iomap(pdev, 0, 0x23000);
++	if (!map) {
++		pci_err(pdev, "Can't map MMIO space\n");
++		goto out_disable;
++	}
++
++	/*
++	 * Make sure the GPU looks like it's been POSTed before resetting
++	 * it.
++	 */
++	if (ioread32(map + 0x2240c) & 0x2) {
++		pci_info(pdev, FW_BUG "GPU left initialized by EFI, resetting\n");
++		ret = pci_reset_bus(pdev);
++		if (ret < 0)
++			pci_err(pdev, "Failed to reset GPU: %d\n", ret);
++	}
++
++	iounmap(map);
++out_disable:
++	pci_disable_device(pdev);
++}
++DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
++			      PCI_CLASS_DISPLAY_VGA, 8,
++			      quirk_reset_lenovo_thinkpad_p50_nvgpu);
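
Besides being shorter, the SWITCHTEC_QUIRK() rewrite quietly widens coverage: 0x8541 and 0x8542 (PSX 24xG3/32xG3) were missing from the old hand-rolled list. The second hunk adds a reset quirk for the Quadro M1000M in ThinkPad P50s left initialized by EFI. The table-macro idiom is plain C; a runnable toy version (hypothetical names, truncated table):

    #include <stdio.h>

    struct quirk { unsigned short device; const char *name; };

    /* One macro per entry keeps the table greppable and hard to typo,
     * mirroring SWITCHTEC_QUIRK() in the hunk above. */
    #define SWITCHTEC(id, model) { 0x##id, model }

    static const struct quirk switchtec[] = {
            SWITCHTEC(8531, "PFX 24xG3"),
            SWITCHTEC(8541, "PSX 24xG3"),   /* new in this patch */
            SWITCHTEC(8542, "PSX 32xG3"),   /* new in this patch */
            SWITCHTEC(8576, "PFXI 96XG3"),
    };

    int main(void)
    {
            for (size_t i = 0; i < sizeof(switchtec) / sizeof(switchtec[0]); i++)
                    printf("%04x %s\n", switchtec[i].device, switchtec[i].name);
            return 0;
    }
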
+diff --git a/drivers/remoteproc/qcom_q6v5.c b/drivers/remoteproc/qcom_q6v5.c
+index e9ab90c19304..602af839421d 100644
+--- a/drivers/remoteproc/qcom_q6v5.c
++++ b/drivers/remoteproc/qcom_q6v5.c
+@@ -188,6 +188,14 @@ int qcom_q6v5_init(struct qcom_q6v5 *q6v5, struct platform_device *pdev,
+ 	init_completion(&q6v5->stop_done);
+ 
+ 	q6v5->wdog_irq = platform_get_irq_byname(pdev, "wdog");
++	if (q6v5->wdog_irq < 0) {
++		if (q6v5->wdog_irq != -EPROBE_DEFER)
++			dev_err(&pdev->dev,
++				"failed to retrieve wdog IRQ: %d\n",
++				q6v5->wdog_irq);
++		return q6v5->wdog_irq;
++	}
++
+ 	ret = devm_request_threaded_irq(&pdev->dev, q6v5->wdog_irq,
+ 					NULL, q6v5_wdog_interrupt,
+ 					IRQF_TRIGGER_RISING | IRQF_ONESHOT,
+@@ -198,8 +206,13 @@ int qcom_q6v5_init(struct qcom_q6v5 *q6v5, struct platform_device *pdev,
+ 	}
+ 
+ 	q6v5->fatal_irq = platform_get_irq_byname(pdev, "fatal");
+-	if (q6v5->fatal_irq == -EPROBE_DEFER)
+-		return -EPROBE_DEFER;
++	if (q6v5->fatal_irq < 0) {
++		if (q6v5->fatal_irq != -EPROBE_DEFER)
++			dev_err(&pdev->dev,
++				"failed to retrieve fatal IRQ: %d\n",
++				q6v5->fatal_irq);
++		return q6v5->fatal_irq;
++	}
+ 
+ 	ret = devm_request_threaded_irq(&pdev->dev, q6v5->fatal_irq,
+ 					NULL, q6v5_fatal_interrupt,
+@@ -211,8 +224,13 @@ int qcom_q6v5_init(struct qcom_q6v5 *q6v5, struct platform_device *pdev,
+ 	}
+ 
+ 	q6v5->ready_irq = platform_get_irq_byname(pdev, "ready");
+-	if (q6v5->ready_irq == -EPROBE_DEFER)
+-		return -EPROBE_DEFER;
++	if (q6v5->ready_irq < 0) {
++		if (q6v5->ready_irq != -EPROBE_DEFER)
++			dev_err(&pdev->dev,
++				"failed to retrieve ready IRQ: %d\n",
++				q6v5->ready_irq);
++		return q6v5->ready_irq;
++	}
+ 
+ 	ret = devm_request_threaded_irq(&pdev->dev, q6v5->ready_irq,
+ 					NULL, q6v5_ready_interrupt,
+@@ -224,8 +242,13 @@ int qcom_q6v5_init(struct qcom_q6v5 *q6v5, struct platform_device *pdev,
+ 	}
+ 
+ 	q6v5->handover_irq = platform_get_irq_byname(pdev, "handover");
+-	if (q6v5->handover_irq == -EPROBE_DEFER)
+-		return -EPROBE_DEFER;
++	if (q6v5->handover_irq < 0) {
++		if (q6v5->handover_irq != -EPROBE_DEFER)
++			dev_err(&pdev->dev,
++				"failed to retrieve handover IRQ: %d\n",
++				q6v5->handover_irq);
++		return q6v5->handover_irq;
++	}
+ 
+ 	ret = devm_request_threaded_irq(&pdev->dev, q6v5->handover_irq,
+ 					NULL, q6v5_handover_interrupt,
+@@ -238,8 +261,13 @@ int qcom_q6v5_init(struct qcom_q6v5 *q6v5, struct platform_device *pdev,
+ 	disable_irq(q6v5->handover_irq);
+ 
+ 	q6v5->stop_irq = platform_get_irq_byname(pdev, "stop-ack");
+-	if (q6v5->stop_irq == -EPROBE_DEFER)
+-		return -EPROBE_DEFER;
++	if (q6v5->stop_irq < 0) {
++		if (q6v5->stop_irq != -EPROBE_DEFER)
++			dev_err(&pdev->dev,
++				"failed to retrieve stop-ack IRQ: %d\n",
++				q6v5->stop_irq);
++		return q6v5->stop_irq;
++	}
+ 
+ 	ret = devm_request_threaded_irq(&pdev->dev, q6v5->stop_irq,
+ 					NULL, q6v5_stop_interrupt,
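
Each of these hunks upgrades an interrupt lookup from "special-case -EPROBE_DEFER" to "fail on any negative value, but only log when it is not a deferral", so a missing IRQ resource no longer reaches devm_request_threaded_irq() and deferred probes do not spam dmesg. (The qcom_q6v5_pil.c hunk just below applies the same deferral idea to SCM availability.) The repeated pattern could be factored into a helper along these lines - hypothetical name, sketch only:

    #include <linux/platform_device.h>

    /* Fetch a named IRQ, logging real failures but staying silent on
     * -EPROBE_DEFER, as in the hunks above. */
    static int demo_get_irq(struct platform_device *pdev, const char *name)
    {
            int irq = platform_get_irq_byname(pdev, name);

            if (irq < 0 && irq != -EPROBE_DEFER)
                    dev_err(&pdev->dev, "failed to retrieve %s IRQ: %d\n",
                            name, irq);
            return irq;     /* caller propagates negative values */
    }
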
+diff --git a/drivers/remoteproc/qcom_q6v5_pil.c b/drivers/remoteproc/qcom_q6v5_pil.c
+index d7a4b9eca5d2..6a84b6372897 100644
+--- a/drivers/remoteproc/qcom_q6v5_pil.c
++++ b/drivers/remoteproc/qcom_q6v5_pil.c
+@@ -1132,6 +1132,9 @@ static int q6v5_probe(struct platform_device *pdev)
+ 	if (!desc)
+ 		return -EINVAL;
+ 
++	if (desc->need_mem_protection && !qcom_scm_is_available())
++		return -EPROBE_DEFER;
++
+ 	rproc = rproc_alloc(&pdev->dev, pdev->name, &q6v5_ops,
+ 			    desc->hexagon_mba_image, sizeof(*qproc));
+ 	if (!rproc) {
+diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c
+index a57b969b8973..3be54651698a 100644
+--- a/drivers/s390/crypto/ap_bus.c
++++ b/drivers/s390/crypto/ap_bus.c
+@@ -777,6 +777,8 @@ static int ap_device_probe(struct device *dev)
+ 		drvres = ap_drv->flags & AP_DRIVER_FLAG_DEFAULT;
+ 		if (!!devres != !!drvres)
+ 			return -ENODEV;
++		/* (re-)init queue's state machine */
++		ap_queue_reinit_state(to_ap_queue(dev));
+ 	}
+ 
+ 	/* Add queue/card to list of active queues/cards */
+@@ -809,6 +811,8 @@ static int ap_device_remove(struct device *dev)
+ 	struct ap_device *ap_dev = to_ap_dev(dev);
+ 	struct ap_driver *ap_drv = ap_dev->drv;
+ 
++	if (is_queue_dev(dev))
++		ap_queue_remove(to_ap_queue(dev));
+ 	if (ap_drv->remove)
+ 		ap_drv->remove(ap_dev);
+ 
+@@ -1446,10 +1450,6 @@ static void ap_scan_bus(struct work_struct *unused)
+ 			aq->ap_dev.device.parent = &ac->ap_dev.device;
+ 			dev_set_name(&aq->ap_dev.device,
+ 				     "%02x.%04x", id, dom);
+-			/* Start with a device reset */
+-			spin_lock_bh(&aq->lock);
+-			ap_wait(ap_sm_event(aq, AP_EVENT_POLL));
+-			spin_unlock_bh(&aq->lock);
+ 			/* Register device */
+ 			rc = device_register(&aq->ap_dev.device);
+ 			if (rc) {
+diff --git a/drivers/s390/crypto/ap_bus.h b/drivers/s390/crypto/ap_bus.h
+index 5246cd8c16a6..7e85d238767b 100644
+--- a/drivers/s390/crypto/ap_bus.h
++++ b/drivers/s390/crypto/ap_bus.h
+@@ -253,6 +253,7 @@ struct ap_queue *ap_queue_create(ap_qid_t qid, int device_type);
+ void ap_queue_remove(struct ap_queue *aq);
+ void ap_queue_suspend(struct ap_device *ap_dev);
+ void ap_queue_resume(struct ap_device *ap_dev);
++void ap_queue_reinit_state(struct ap_queue *aq);
+ 
+ struct ap_card *ap_card_create(int id, int queue_depth, int raw_device_type,
+ 			       int comp_device_type, unsigned int functions);
+diff --git a/drivers/s390/crypto/ap_queue.c b/drivers/s390/crypto/ap_queue.c
+index 66f7334bcb03..0aa4b3ccc948 100644
+--- a/drivers/s390/crypto/ap_queue.c
++++ b/drivers/s390/crypto/ap_queue.c
+@@ -718,5 +718,20 @@ void ap_queue_remove(struct ap_queue *aq)
+ {
+ 	ap_flush_queue(aq);
+ 	del_timer_sync(&aq->timeout);
++
++	/* reset with zero, also clears irq registration */
++	spin_lock_bh(&aq->lock);
++	ap_zapq(aq->qid);
++	aq->state = AP_STATE_BORKED;
++	spin_unlock_bh(&aq->lock);
+ }
+ EXPORT_SYMBOL(ap_queue_remove);
++
++void ap_queue_reinit_state(struct ap_queue *aq)
++{
++	spin_lock_bh(&aq->lock);
++	aq->state = AP_STATE_RESET_START;
++	ap_wait(ap_sm_event(aq, AP_EVENT_POLL));
++	spin_unlock_bh(&aq->lock);
++}
++EXPORT_SYMBOL(ap_queue_reinit_state);
+diff --git a/drivers/s390/crypto/zcrypt_cex2a.c b/drivers/s390/crypto/zcrypt_cex2a.c
+index f4ae5fa30ec9..ff17a00273f7 100644
+--- a/drivers/s390/crypto/zcrypt_cex2a.c
++++ b/drivers/s390/crypto/zcrypt_cex2a.c
+@@ -198,7 +198,6 @@ static void zcrypt_cex2a_queue_remove(struct ap_device *ap_dev)
+ 	struct ap_queue *aq = to_ap_queue(&ap_dev->device);
+ 	struct zcrypt_queue *zq = aq->private;
+ 
+-	ap_queue_remove(aq);
+ 	if (zq)
+ 		zcrypt_queue_unregister(zq);
+ }
+diff --git a/drivers/s390/crypto/zcrypt_cex4.c b/drivers/s390/crypto/zcrypt_cex4.c
+index 35d58dbbc4da..2a42e5962317 100644
+--- a/drivers/s390/crypto/zcrypt_cex4.c
++++ b/drivers/s390/crypto/zcrypt_cex4.c
+@@ -273,7 +273,6 @@ static void zcrypt_cex4_queue_remove(struct ap_device *ap_dev)
+ 	struct ap_queue *aq = to_ap_queue(&ap_dev->device);
+ 	struct zcrypt_queue *zq = aq->private;
+ 
+-	ap_queue_remove(aq);
+ 	if (zq)
+ 		zcrypt_queue_unregister(zq);
+ }
+diff --git a/drivers/s390/crypto/zcrypt_pcixcc.c b/drivers/s390/crypto/zcrypt_pcixcc.c
+index 94d9f7224aea..baa683c3f5d3 100644
+--- a/drivers/s390/crypto/zcrypt_pcixcc.c
++++ b/drivers/s390/crypto/zcrypt_pcixcc.c
+@@ -276,7 +276,6 @@ static void zcrypt_pcixcc_queue_remove(struct ap_device *ap_dev)
+ 	struct ap_queue *aq = to_ap_queue(&ap_dev->device);
+ 	struct zcrypt_queue *zq = aq->private;
+ 
+-	ap_queue_remove(aq);
+ 	if (zq)
+ 		zcrypt_queue_unregister(zq);
+ }
+diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
+index 3c86e27f094d..aff073a5b52b 100644
+--- a/drivers/s390/scsi/zfcp_fsf.c
++++ b/drivers/s390/scsi/zfcp_fsf.c
+@@ -1594,6 +1594,7 @@ int zfcp_fsf_open_wka_port(struct zfcp_fc_wka_port *wka_port)
+ {
+ 	struct zfcp_qdio *qdio = wka_port->adapter->qdio;
+ 	struct zfcp_fsf_req *req;
++	unsigned long req_id = 0;
+ 	int retval = -EIO;
+ 
+ 	spin_lock_irq(&qdio->req_q_lock);
+@@ -1616,6 +1617,8 @@ int zfcp_fsf_open_wka_port(struct zfcp_fc_wka_port *wka_port)
+ 	hton24(req->qtcb->bottom.support.d_id, wka_port->d_id);
+ 	req->data = wka_port;
+ 
++	req_id = req->req_id;
++
+ 	zfcp_fsf_start_timer(req, ZFCP_FSF_REQUEST_TIMEOUT);
+ 	retval = zfcp_fsf_req_send(req);
+ 	if (retval)
+@@ -1623,7 +1626,7 @@ int zfcp_fsf_open_wka_port(struct zfcp_fc_wka_port *wka_port)
+ out:
+ 	spin_unlock_irq(&qdio->req_q_lock);
+ 	if (!retval)
+-		zfcp_dbf_rec_run_wka("fsowp_1", wka_port, req->req_id);
++		zfcp_dbf_rec_run_wka("fsowp_1", wka_port, req_id);
+ 	return retval;
+ }
+ 
+@@ -1649,6 +1652,7 @@ int zfcp_fsf_close_wka_port(struct zfcp_fc_wka_port *wka_port)
+ {
+ 	struct zfcp_qdio *qdio = wka_port->adapter->qdio;
+ 	struct zfcp_fsf_req *req;
++	unsigned long req_id = 0;
+ 	int retval = -EIO;
+ 
+ 	spin_lock_irq(&qdio->req_q_lock);
+@@ -1671,6 +1675,8 @@ int zfcp_fsf_close_wka_port(struct zfcp_fc_wka_port *wka_port)
+ 	req->data = wka_port;
+ 	req->qtcb->header.port_handle = wka_port->handle;
+ 
++	req_id = req->req_id;
++
+ 	zfcp_fsf_start_timer(req, ZFCP_FSF_REQUEST_TIMEOUT);
+ 	retval = zfcp_fsf_req_send(req);
+ 	if (retval)
+@@ -1678,7 +1684,7 @@ int zfcp_fsf_close_wka_port(struct zfcp_fc_wka_port *wka_port)
+ out:
+ 	spin_unlock_irq(&qdio->req_q_lock);
+ 	if (!retval)
+-		zfcp_dbf_rec_run_wka("fscwp_1", wka_port, req->req_id);
++		zfcp_dbf_rec_run_wka("fscwp_1", wka_port, req_id);
+ 	return retval;
+ }
+ 
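
Both hunks fix the same use-after-free shape: once zfcp_fsf_req_send() succeeds, the request belongs to the completion path and may already be freed by the time the debug trace reads req->req_id, so the ID is snapshotted first. (The qla2xxx hunks later in this patch fix the sibling problem by logging before qla2x00_start_sp().) A runnable toy of the copy-before-handoff rule, with a hypothetical submit() that frees its argument:

    #include <stdio.h>
    #include <stdlib.h>

    struct request { unsigned long req_id; };

    /* Hypothetical async submit: after this returns 0 the callee owns
     * (and here immediately frees) the request, like zfcp_fsf_req_send(). */
    static int submit(struct request *req)
    {
            free(req);
            return 0;
    }

    int main(void)
    {
            struct request *req = malloc(sizeof(*req));
            if (!req)
                    return 1;
            req->req_id = 42;

            unsigned long req_id = req->req_id;     /* snapshot before handoff */
            if (submit(req) == 0)
                    printf("sent request %lu\n", req_id);   /* not req->req_id */
            return 0;
    }
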
+diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
+index ec54538f7ae1..67efdf25657f 100644
+--- a/drivers/s390/virtio/virtio_ccw.c
++++ b/drivers/s390/virtio/virtio_ccw.c
+@@ -132,6 +132,7 @@ struct airq_info {
+ 	struct airq_iv *aiv;
+ };
+ static struct airq_info *airq_areas[MAX_AIRQ_AREAS];
++static DEFINE_MUTEX(airq_areas_lock);
+ 
+ #define CCW_CMD_SET_VQ 0x13
+ #define CCW_CMD_VDEV_RESET 0x33
+@@ -244,9 +245,11 @@ static unsigned long get_airq_indicator(struct virtqueue *vqs[], int nvqs,
+ 	unsigned long bit, flags;
+ 
+ 	for (i = 0; i < MAX_AIRQ_AREAS && !indicator_addr; i++) {
++		mutex_lock(&airq_areas_lock);
+ 		if (!airq_areas[i])
+ 			airq_areas[i] = new_airq_info();
+ 		info = airq_areas[i];
++		mutex_unlock(&airq_areas_lock);
+ 		if (!info)
+ 			return 0;
+ 		write_lock_irqsave(&info->lock, flags);
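
Without the new mutex, two CPUs can race through the if (!airq_areas[i]) check, both call new_airq_info(), and one area is lost when the second assignment overwrites the first. Lock-protected lazy init is the standard cure; a runnable userspace analogue using pthreads, where slot stands in for airq_areas[i]:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int *slot;       /* stands in for airq_areas[i] */

    static int *get_slot(void)
    {
            int *p;

            pthread_mutex_lock(&lock);
            if (!slot)                      /* check and create atomically */
                    slot = calloc(1, sizeof(*slot));
            p = slot;
            pthread_mutex_unlock(&lock);
            return p;
    }

    static void *worker(void *arg)
    {
            (void)arg;
            printf("slot=%p\n", (void *)get_slot());
            return NULL;
    }

    int main(void)
    {
            pthread_t a, b;

            pthread_create(&a, NULL, worker, NULL);
            pthread_create(&b, NULL, worker, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            return 0;       /* both threads print the same pointer */
    }
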
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 806ceabcabc3..bc37666f998e 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -5218,7 +5218,7 @@ static int megasas_init_fw(struct megasas_instance *instance)
+ {
+ 	u32 max_sectors_1;
+ 	u32 max_sectors_2, tmp_sectors, msix_enable;
+-	u32 scratch_pad_2, scratch_pad_3, scratch_pad_4;
++	u32 scratch_pad_2, scratch_pad_3, scratch_pad_4, status_reg;
+ 	resource_size_t base_addr;
+ 	struct megasas_register_set __iomem *reg_set;
+ 	struct megasas_ctrl_info *ctrl_info = NULL;
+@@ -5226,6 +5226,7 @@ static int megasas_init_fw(struct megasas_instance *instance)
+ 	int i, j, loop, fw_msix_count = 0;
+ 	struct IOV_111 *iovPtr;
+ 	struct fusion_context *fusion;
++	bool do_adp_reset = true;
+ 
+ 	fusion = instance->ctrl_context;
+ 
+@@ -5274,19 +5275,29 @@ static int megasas_init_fw(struct megasas_instance *instance)
+ 	}
+ 
+ 	if (megasas_transition_to_ready(instance, 0)) {
+-		atomic_set(&instance->fw_reset_no_pci_access, 1);
+-		instance->instancet->adp_reset
+-			(instance, instance->reg_set);
+-		atomic_set(&instance->fw_reset_no_pci_access, 0);
+-		dev_info(&instance->pdev->dev,
+-			"FW restarted successfully from %s!\n",
+-			__func__);
++		if (instance->adapter_type >= INVADER_SERIES) {
++			status_reg = instance->instancet->read_fw_status_reg(
++					instance->reg_set);
++			do_adp_reset = status_reg & MFI_RESET_ADAPTER;
++		}
+ 
+-		/*waitting for about 30 second before retry*/
+-		ssleep(30);
++		if (do_adp_reset) {
++			atomic_set(&instance->fw_reset_no_pci_access, 1);
++			instance->instancet->adp_reset
++				(instance, instance->reg_set);
++			atomic_set(&instance->fw_reset_no_pci_access, 0);
++			dev_info(&instance->pdev->dev,
++				 "FW restarted successfully from %s!\n",
++				 __func__);
+ 
+-		if (megasas_transition_to_ready(instance, 0))
++			/*waiting for about 30 second before retry*/
++			ssleep(30);
++
++			if (megasas_transition_to_ready(instance, 0))
++				goto fail_ready_state;
++		} else {
+ 			goto fail_ready_state;
++		}
+ 	}
+ 
+ 	megasas_init_ctrl_params(instance);
+@@ -5325,12 +5336,29 @@ static int megasas_init_fw(struct megasas_instance *instance)
+ 				instance->msix_vectors = (scratch_pad_2
+ 					& MR_MAX_REPLY_QUEUES_OFFSET) + 1;
+ 				fw_msix_count = instance->msix_vectors;
+-			} else { /* Invader series supports more than 8 MSI-x vectors*/
++			} else {
+ 				instance->msix_vectors = ((scratch_pad_2
+ 					& MR_MAX_REPLY_QUEUES_EXT_OFFSET)
+ 					>> MR_MAX_REPLY_QUEUES_EXT_OFFSET_SHIFT) + 1;
+-				if (instance->msix_vectors > 16)
+-					instance->msix_combined = true;
++
++				/*
++				 * For Invader series, > 8 MSI-x vectors
++				 * supported by FW/HW implies combined
++				 * reply queue mode is enabled.
++				 * For Ventura series, > 16 MSI-x vectors
++				 * supported by FW/HW implies combined
++				 * reply queue mode is enabled.
++				 */
++				switch (instance->adapter_type) {
++				case INVADER_SERIES:
++					if (instance->msix_vectors > 8)
++						instance->msix_combined = true;
++					break;
++				case VENTURA_SERIES:
++					if (instance->msix_vectors > 16)
++						instance->msix_combined = true;
++					break;
++				}
+ 
+ 				if (rdpq_enable)
+ 					instance->is_rdpq = (scratch_pad_2 & MR_RDPQ_MODE_OFFSET) ?
+@@ -6028,13 +6056,13 @@ static int megasas_io_attach(struct megasas_instance *instance)
+  * @instance:		Adapter soft state
+  * Description:
+  *
+- * For Ventura, driver/FW will operate in 64bit DMA addresses.
++ * For Ventura, driver/FW will operate in 63bit DMA addresses.
+  *
+  * For invader-
+  *	By default, driver/FW will operate in 32bit DMA addresses
+  *	for consistent DMA mapping but if 32 bit consistent
+- *	DMA mask fails, driver will try with 64 bit consistent
+- *	mask provided FW is true 64bit DMA capable
++ *	DMA mask fails, driver will try with 63 bit consistent
++ *	mask provided FW is true 63bit DMA capable
+  *
+  * For older controllers(Thunderbolt and MFI based adapters)-
+  *	driver/FW will operate in 32 bit consistent DMA addresses.
+@@ -6047,15 +6075,15 @@ megasas_set_dma_mask(struct megasas_instance *instance)
+ 	u32 scratch_pad_2;
+ 
+ 	pdev = instance->pdev;
+-	consistent_mask = (instance->adapter_type == VENTURA_SERIES) ?
+-				DMA_BIT_MASK(64) : DMA_BIT_MASK(32);
++	consistent_mask = (instance->adapter_type >= VENTURA_SERIES) ?
++				DMA_BIT_MASK(63) : DMA_BIT_MASK(32);
+ 
+ 	if (IS_DMA64) {
+-		if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) &&
++		if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(63)) &&
+ 		    dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
+ 			goto fail_set_dma_mask;
+ 
+-		if ((*pdev->dev.dma_mask == DMA_BIT_MASK(64)) &&
++		if ((*pdev->dev.dma_mask == DMA_BIT_MASK(63)) &&
+ 		    (dma_set_coherent_mask(&pdev->dev, consistent_mask) &&
+ 		     dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))) {
+ 			/*
+@@ -6068,7 +6096,7 @@ megasas_set_dma_mask(struct megasas_instance *instance)
+ 			if (!(scratch_pad_2 & MR_CAN_HANDLE_64_BIT_DMA_OFFSET))
+ 				goto fail_set_dma_mask;
+ 			else if (dma_set_mask_and_coherent(&pdev->dev,
+-							   DMA_BIT_MASK(64)))
++							   DMA_BIT_MASK(63)))
+ 				goto fail_set_dma_mask;
+ 		}
+ 	} else if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
+@@ -6080,8 +6108,8 @@ megasas_set_dma_mask(struct megasas_instance *instance)
+ 		instance->consistent_mask_64bit = true;
+ 
+ 	dev_info(&pdev->dev, "%s bit DMA mask and %s bit consistent mask\n",
+-		 ((*pdev->dev.dma_mask == DMA_BIT_MASK(64)) ? "64" : "32"),
+-		 (instance->consistent_mask_64bit ? "64" : "32"));
++		 ((*pdev->dev.dma_mask == DMA_BIT_MASK(64)) ? "63" : "32"),
++		 (instance->consistent_mask_64bit ? "63" : "32"));
+ 
+ 	return 0;
+ 
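
Three distinct megaraid fixes travel in this hunk set: the boot-time adapter reset now fires only when the firmware status register actually requests one (MFI_RESET_ADAPTER) on Invader-and-later parts; the combined-reply-queue heuristic becomes per-family (more than 8 vectors on Invader, more than 16 on Ventura); and Ventura-class controllers are capped at a 63-bit DMA mask instead of 64. The patch does not say why bit 63 is off-limits; caps like this usually mean the device or firmware reserves the top address bit, but treat that as conjecture. DMA_BIT_MASK() itself is a plain bit trick; this runnable snippet reproduces the kernel macro (same definition as include/linux/dma-mapping.h) and prints the masks in play:

    #include <stdio.h>

    /* n == 64 must be special-cased: 1ULL << 64 is undefined behaviour. */
    #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

    int main(void)
    {
            printf("DMA_BIT_MASK(32) = %016llx\n", (unsigned long long)DMA_BIT_MASK(32));
            printf("DMA_BIT_MASK(63) = %016llx\n", (unsigned long long)DMA_BIT_MASK(63));
            printf("DMA_BIT_MASK(64) = %016llx\n", (unsigned long long)DMA_BIT_MASK(64));
            return 0;
    }
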
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index 1f1a05a90d3d..fc08e46a93ca 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -3360,15 +3360,15 @@ int qla24xx_async_gpsc(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+ 	sp->done = qla24xx_async_gpsc_sp_done;
+ 
+-	rval = qla2x00_start_sp(sp);
+-	if (rval != QLA_SUCCESS)
+-		goto done_free_sp;
+-
+ 	ql_dbg(ql_dbg_disc, vha, 0x205e,
+ 	    "Async-%s %8phC hdl=%x loopid=%x portid=%02x%02x%02x.\n",
+ 	    sp->name, fcport->port_name, sp->handle,
+ 	    fcport->loop_id, fcport->d_id.b.domain,
+ 	    fcport->d_id.b.area, fcport->d_id.b.al_pa);
++
++	rval = qla2x00_start_sp(sp);
++	if (rval != QLA_SUCCESS)
++		goto done_free_sp;
+ 	return rval;
+ 
+ done_free_sp:
+@@ -3729,13 +3729,14 @@ int qla24xx_async_gpnid(scsi_qla_host_t *vha, port_id_t *id)
+ 	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+ 	sp->done = qla2x00_async_gpnid_sp_done;
+ 
++	ql_dbg(ql_dbg_disc, vha, 0x2067,
++	    "Async-%s hdl=%x ID %3phC.\n", sp->name,
++	    sp->handle, ct_req->req.port_id.port_id);
++
+ 	rval = qla2x00_start_sp(sp);
+ 	if (rval != QLA_SUCCESS)
+ 		goto done_free_sp;
+ 
+-	ql_dbg(ql_dbg_disc, vha, 0x2067,
+-	    "Async-%s hdl=%x ID %3phC.\n", sp->name,
+-	    sp->handle, ct_req->req.port_id.port_id);
+ 	return rval;
+ 
+ done_free_sp:
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index ddce32fe0513..39a8f4a671aa 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -247,6 +247,12 @@ qla2x00_async_login(struct scsi_qla_host *vha, fc_port_t *fcport,
+ 
+ 	}
+ 
++	ql_dbg(ql_dbg_disc, vha, 0x2072,
++	    "Async-login - %8phC hdl=%x, loopid=%x portid=%02x%02x%02x "
++		"retries=%d.\n", fcport->port_name, sp->handle, fcport->loop_id,
++	    fcport->d_id.b.domain, fcport->d_id.b.area, fcport->d_id.b.al_pa,
++	    fcport->login_retry);
++
+ 	rval = qla2x00_start_sp(sp);
+ 	if (rval != QLA_SUCCESS) {
+ 		fcport->flags |= FCF_LOGIN_NEEDED;
+@@ -254,11 +260,6 @@ qla2x00_async_login(struct scsi_qla_host *vha, fc_port_t *fcport,
+ 		goto done_free_sp;
+ 	}
+ 
+-	ql_dbg(ql_dbg_disc, vha, 0x2072,
+-	    "Async-login - %8phC hdl=%x, loopid=%x portid=%02x%02x%02x "
+-		"retries=%d.\n", fcport->port_name, sp->handle, fcport->loop_id,
+-	    fcport->d_id.b.domain, fcport->d_id.b.area, fcport->d_id.b.al_pa,
+-	    fcport->login_retry);
+ 	return rval;
+ 
+ done_free_sp:
+@@ -303,15 +304,16 @@ qla2x00_async_logout(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
+ 
+ 	sp->done = qla2x00_async_logout_sp_done;
+-	rval = qla2x00_start_sp(sp);
+-	if (rval != QLA_SUCCESS)
+-		goto done_free_sp;
+ 
+ 	ql_dbg(ql_dbg_disc, vha, 0x2070,
+ 	    "Async-logout - hdl=%x loop-id=%x portid=%02x%02x%02x %8phC.\n",
+ 	    sp->handle, fcport->loop_id, fcport->d_id.b.domain,
+ 		fcport->d_id.b.area, fcport->d_id.b.al_pa,
+ 		fcport->port_name);
++
++	rval = qla2x00_start_sp(sp);
++	if (rval != QLA_SUCCESS)
++		goto done_free_sp;
+ 	return rval;
+ 
+ done_free_sp:
+@@ -489,13 +491,15 @@ qla2x00_async_adisc(struct scsi_qla_host *vha, fc_port_t *fcport,
+ 	sp->done = qla2x00_async_adisc_sp_done;
+ 	if (data[1] & QLA_LOGIO_LOGIN_RETRIED)
+ 		lio->u.logio.flags |= SRB_LOGIN_RETRIED;
+-	rval = qla2x00_start_sp(sp);
+-	if (rval != QLA_SUCCESS)
+-		goto done_free_sp;
+ 
+ 	ql_dbg(ql_dbg_disc, vha, 0x206f,
+ 	    "Async-adisc - hdl=%x loopid=%x portid=%06x %8phC.\n",
+ 	    sp->handle, fcport->loop_id, fcport->d_id.b24, fcport->port_name);
++
++	rval = qla2x00_start_sp(sp);
++	if (rval != QLA_SUCCESS)
++		goto done_free_sp;
++
+ 	return rval;
+ 
+ done_free_sp:
+@@ -1161,14 +1165,13 @@ int qla24xx_async_gpdb(struct scsi_qla_host *vha, fc_port_t *fcport, u8 opt)
+ 
+ 	sp->done = qla24xx_async_gpdb_sp_done;
+ 
+-	rval = qla2x00_start_sp(sp);
+-	if (rval != QLA_SUCCESS)
+-		goto done_free_sp;
+-
+ 	ql_dbg(ql_dbg_disc, vha, 0x20dc,
+ 	    "Async-%s %8phC hndl %x opt %x\n",
+ 	    sp->name, fcport->port_name, sp->handle, opt);
+ 
++	rval = qla2x00_start_sp(sp);
++	if (rval != QLA_SUCCESS)
++		goto done_free_sp;
+ 	return rval;
+ 
+ done_free_sp:
+@@ -1698,15 +1701,14 @@ qla2x00_async_tm_cmd(fc_port_t *fcport, uint32_t flags, uint32_t lun,
+ 	tm_iocb->u.tmf.data = tag;
+ 	sp->done = qla2x00_tmf_sp_done;
+ 
+-	rval = qla2x00_start_sp(sp);
+-	if (rval != QLA_SUCCESS)
+-		goto done_free_sp;
+-
+ 	ql_dbg(ql_dbg_taskm, vha, 0x802f,
+ 	    "Async-tmf hdl=%x loop-id=%x portid=%02x%02x%02x.\n",
+ 	    sp->handle, fcport->loop_id, fcport->d_id.b.domain,
+ 	    fcport->d_id.b.area, fcport->d_id.b.al_pa);
+ 
++	rval = qla2x00_start_sp(sp);
++	if (rval != QLA_SUCCESS)
++		goto done_free_sp;
+ 	wait_for_completion(&tm_iocb->u.tmf.comp);
+ 
+ 	rval = tm_iocb->u.tmf.data;
+@@ -1790,14 +1792,14 @@ qla24xx_async_abort_cmd(srb_t *cmd_sp, bool wait)
+ 
+ 	sp->done = qla24xx_abort_sp_done;
+ 
+-	rval = qla2x00_start_sp(sp);
+-	if (rval != QLA_SUCCESS)
+-		goto done_free_sp;
+-
+ 	ql_dbg(ql_dbg_async, vha, 0x507c,
+ 	    "Abort command issued - hdl=%x, target_id=%x\n",
+ 	    cmd_sp->handle, fcport->tgt_id);
+ 
++	rval = qla2x00_start_sp(sp);
++	if (rval != QLA_SUCCESS)
++		goto done_free_sp;
++
+ 	if (wait) {
+ 		wait_for_completion(&abt_iocb->u.abt.comp);
+ 		rval = abt_iocb->u.abt.comp_status == CS_COMPLETE ?
+diff --git a/drivers/spi/spi-gpio.c b/drivers/spi/spi-gpio.c
+index 088772ebef9b..77838d8fd9bb 100644
+--- a/drivers/spi/spi-gpio.c
++++ b/drivers/spi/spi-gpio.c
+@@ -410,7 +410,7 @@ static int spi_gpio_probe(struct platform_device *pdev)
+ 		return status;
+ 
+ 	master->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 32);
+-	master->mode_bits = SPI_3WIRE | SPI_CPHA | SPI_CPOL;
++	master->mode_bits = SPI_3WIRE | SPI_CPHA | SPI_CPOL | SPI_CS_HIGH;
+ 	master->flags = master_flags;
+ 	master->bus_num = pdev->id;
+ 	/* The master needs to think there is a chipselect even if not connected */
+@@ -437,7 +437,6 @@ static int spi_gpio_probe(struct platform_device *pdev)
+ 		spi_gpio->bitbang.txrx_word[SPI_MODE_3] = spi_gpio_spec_txrx_word_mode3;
+ 	}
+ 	spi_gpio->bitbang.setup_transfer = spi_bitbang_setup_transfer;
+-	spi_gpio->bitbang.flags = SPI_CS_HIGH;
+ 
+ 	status = spi_bitbang_start(&spi_gpio->bitbang);
+ 	if (status)
+diff --git a/drivers/staging/wilc1000/linux_wlan.c b/drivers/staging/wilc1000/linux_wlan.c
+index 649caae2b603..25798119426b 100644
+--- a/drivers/staging/wilc1000/linux_wlan.c
++++ b/drivers/staging/wilc1000/linux_wlan.c
+@@ -649,17 +649,17 @@ static int wilc_wlan_initialize(struct net_device *dev, struct wilc_vif *vif)
+ 			goto fail_locks;
+ 		}
+ 
+-		if (wl->gpio_irq && init_irq(dev)) {
+-			ret = -EIO;
+-			goto fail_locks;
+-		}
+-
+ 		ret = wlan_initialize_threads(dev);
+ 		if (ret < 0) {
+ 			ret = -EIO;
+ 			goto fail_wilc_wlan;
+ 		}
+ 
++		if (wl->gpio_irq && init_irq(dev)) {
++			ret = -EIO;
++			goto fail_threads;
++		}
++
+ 		if (!wl->dev_irq_num &&
+ 		    wl->hif_func->enable_interrupt &&
+ 		    wl->hif_func->enable_interrupt(wl)) {
+@@ -715,7 +715,7 @@ fail_irq_enable:
+ fail_irq_init:
+ 		if (wl->dev_irq_num)
+ 			deinit_irq(dev);
+-
++fail_threads:
+ 		wlan_deinitialize_threads(dev);
+ fail_wilc_wlan:
+ 		wilc_wlan_cleanup(dev);
+diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
+index ce1321a5cb7b..854b2bcca7c1 100644
+--- a/drivers/target/target_core_iblock.c
++++ b/drivers/target/target_core_iblock.c
+@@ -514,8 +514,8 @@ iblock_execute_write_same(struct se_cmd *cmd)
+ 		}
+ 
+ 		/* Always in 512 byte units for Linux/Block */
+-		block_lba += sg->length >> IBLOCK_LBA_SHIFT;
+-		sectors -= 1;
++		block_lba += sg->length >> SECTOR_SHIFT;
++		sectors -= sg->length >> SECTOR_SHIFT;
+ 	}
+ 
+ 	iblock_submit_bios(&list);
+@@ -757,7 +757,7 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
+ 		}
+ 
+ 		/* Always in 512 byte units for Linux/Block */
+-		block_lba += sg->length >> IBLOCK_LBA_SHIFT;
++		block_lba += sg->length >> SECTOR_SHIFT;
+ 		sg_num--;
+ 	}
+ 
+diff --git a/drivers/target/target_core_iblock.h b/drivers/target/target_core_iblock.h
+index 9cc3843404d4..cefc641145b3 100644
+--- a/drivers/target/target_core_iblock.h
++++ b/drivers/target/target_core_iblock.h
+@@ -9,7 +9,6 @@
+ #define IBLOCK_VERSION		"4.0"
+ 
+ #define IBLOCK_MAX_CDBS		16
+-#define IBLOCK_LBA_SHIFT	9
+ 
+ struct iblock_req {
+ 	refcount_t pending;
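
Two fixes travel together here: the driver-private IBLOCK_LBA_SHIFT is dropped in favor of the block layer's SECTOR_SHIFT (both are 9, i.e. 512-byte units), and the WRITE SAME loop now subtracts the number of sectors each scatterlist entry actually covers instead of a flat 1, so multi-sector entries no longer leave sectors overcounted. A runnable arithmetic sketch with hypothetical 4 KiB entries:

    #include <stdio.h>

    #define SECTOR_SHIFT 9  /* 512-byte sectors, as in the block layer */

    struct sg { unsigned int length; };

    int main(void)
    {
            /* hypothetical scatterlist: 4 KiB entries = 8 sectors each */
            struct sg sgl[] = { { 4096 }, { 4096 } };
            unsigned long long block_lba = 0;
            unsigned int sectors = 16;

            for (unsigned int i = 0;
                 i < sizeof(sgl) / sizeof(sgl[0]) && sectors; i++) {
                    block_lba += sgl[i].length >> SECTOR_SHIFT;
                    sectors   -= sgl[i].length >> SECTOR_SHIFT; /* was: -= 1 */
            }
            printf("lba=%llu remaining=%u\n", block_lba, sectors); /* 16, 0 */
            return 0;
    }
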
+diff --git a/drivers/usb/typec/tcpm.c b/drivers/usb/typec/tcpm.c
+index fb20aa974ae1..819ae3b2bd7e 100644
+--- a/drivers/usb/typec/tcpm.c
++++ b/drivers/usb/typec/tcpm.c
+@@ -37,6 +37,7 @@
+ 	S(SRC_ATTACHED),			\
+ 	S(SRC_STARTUP),				\
+ 	S(SRC_SEND_CAPABILITIES),		\
++	S(SRC_SEND_CAPABILITIES_TIMEOUT),	\
+ 	S(SRC_NEGOTIATE_CAPABILITIES),		\
+ 	S(SRC_TRANSITION_SUPPLY),		\
+ 	S(SRC_READY),				\
+@@ -2987,10 +2988,34 @@ static void run_state_machine(struct tcpm_port *port)
+ 			/* port->hard_reset_count = 0; */
+ 			port->caps_count = 0;
+ 			port->pd_capable = true;
+-			tcpm_set_state_cond(port, hard_reset_state(port),
++			tcpm_set_state_cond(port, SRC_SEND_CAPABILITIES_TIMEOUT,
+ 					    PD_T_SEND_SOURCE_CAP);
+ 		}
+ 		break;
++	case SRC_SEND_CAPABILITIES_TIMEOUT:
++		/*
++		 * Error recovery for a PD_DATA_SOURCE_CAP reply timeout.
++		 *
++		 * PD 2.0 sinks are supposed to accept src-capabilities with a
++		 * 3.0 header and simply ignore any src PDOs which the sink does
++		 * not understand such as PPS but some 2.0 sinks instead ignore
++		 * the entire PD_DATA_SOURCE_CAP message, causing contract
++		 * negotiation to fail.
++		 *
++		 * After PD_N_HARD_RESET_COUNT hard-reset attempts, we try
++		 * sending src-capabilities with a lower PD revision to
++		 * make these broken sinks work.
++		 */
++		if (port->hard_reset_count < PD_N_HARD_RESET_COUNT) {
++			tcpm_set_state(port, HARD_RESET_SEND, 0);
++		} else if (port->negotiated_rev > PD_REV20) {
++			port->negotiated_rev--;
++			port->hard_reset_count = 0;
++			tcpm_set_state(port, SRC_SEND_CAPABILITIES, 0);
++		} else {
++			tcpm_set_state(port, hard_reset_state(port), 0);
++		}
++		break;
+ 	case SRC_NEGOTIATE_CAPABILITIES:
+ 		ret = tcpm_pd_check_request(port);
+ 		if (ret < 0) {
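
The new state encodes a three-tier fallback for sinks that ignore PD 3.0 source-capabilities: hard-reset up to PD_N_HARD_RESET_COUNT times (the spec value is 2), then drop the advertised revision one step and retry with a fresh reset budget, and only take the terminal hard_reset_state() path once already at PD_REV20. The decision is small enough to lift into a runnable, testable function; the constants below are meant to mirror include/linux/usb/pd.h but treat them as illustrative:

    #include <assert.h>

    #define PD_N_HARD_RESET_COUNT 2
    #define PD_REV20 0x1
    #define PD_REV30 0x2

    enum action { DO_HARD_RESET, DO_REV_DOWNGRADE, DO_GIVE_UP };

    /* Mirrors the SRC_SEND_CAPABILITIES_TIMEOUT branch above. */
    static enum action on_caps_timeout(int hard_reset_count, int negotiated_rev)
    {
            if (hard_reset_count < PD_N_HARD_RESET_COUNT)
                    return DO_HARD_RESET;
            if (negotiated_rev > PD_REV20)
                    return DO_REV_DOWNGRADE;  /* also zeroes the reset count */
            return DO_GIVE_UP;
    }

    int main(void)
    {
            assert(on_caps_timeout(0, PD_REV30) == DO_HARD_RESET);
            assert(on_caps_timeout(PD_N_HARD_RESET_COUNT, PD_REV30) == DO_REV_DOWNGRADE);
            assert(on_caps_timeout(PD_N_HARD_RESET_COUNT, PD_REV20) == DO_GIVE_UP);
            return 0;
    }
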
+diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
+index 40589850eb33..a9be2d8e98df 100644
+--- a/drivers/vhost/test.c
++++ b/drivers/vhost/test.c
+@@ -23,6 +23,12 @@
+  * Using this limit prevents one virtqueue from starving others. */
+ #define VHOST_TEST_WEIGHT 0x80000
+ 
++/* Max number of packets transferred before requeueing the job.
++ * Using this limit prevents one virtqueue from starving others with
++ * pkts.
++ */
++#define VHOST_TEST_PKT_WEIGHT 256
++
+ enum {
+ 	VHOST_TEST_VQ = 0,
+ 	VHOST_TEST_VQ_MAX = 1,
+@@ -81,10 +87,8 @@ static void handle_vq(struct vhost_test *n)
+ 		}
+ 		vhost_add_used_and_signal(&n->dev, vq, head, 0);
+ 		total_len += len;
+-		if (unlikely(total_len >= VHOST_TEST_WEIGHT)) {
+-			vhost_poll_queue(&vq->poll);
++		if (unlikely(vhost_exceeds_weight(vq, 0, total_len)))
+ 			break;
+-		}
+ 	}
+ 
+ 	mutex_unlock(&vq->mutex);
+@@ -116,7 +120,8 @@ static int vhost_test_open(struct inode *inode, struct file *f)
+ 	dev = &n->dev;
+ 	vqs[VHOST_TEST_VQ] = &n->vqs[VHOST_TEST_VQ];
+ 	n->vqs[VHOST_TEST_VQ].handle_kick = handle_vq_kick;
+-	vhost_dev_init(dev, vqs, VHOST_TEST_VQ_MAX);
++	vhost_dev_init(dev, vqs, VHOST_TEST_VQ_MAX, UIO_MAXIOV,
++		       VHOST_TEST_PKT_WEIGHT, VHOST_TEST_WEIGHT);
+ 
+ 	f->private_data = n;
+ 
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index 0752f8dc47b1..98b6eb902df9 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -2073,7 +2073,7 @@ static int get_indirect(struct vhost_virtqueue *vq,
+ 		/* If this is an input descriptor, increment that count. */
+ 		if (access == VHOST_ACCESS_WO) {
+ 			*in_num += ret;
+-			if (unlikely(log)) {
++			if (unlikely(log && ret)) {
+ 				log[*log_num].addr = vhost64_to_cpu(vq, desc.addr);
+ 				log[*log_num].len = vhost32_to_cpu(vq, desc.len);
+ 				++*log_num;
+@@ -2216,7 +2216,7 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
+ 			/* If this is an input descriptor,
+ 			 * increment that count. */
+ 			*in_num += ret;
+-			if (unlikely(log)) {
++			if (unlikely(log && ret)) {
+ 				log[*log_num].addr = vhost64_to_cpu(vq, desc.addr);
+ 				log[*log_num].len = vhost32_to_cpu(vq, desc.len);
+ 				++*log_num;
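
Two independent vhost fixes: the test device now uses the core vhost_exceeds_weight() byte/packet budget (note the new VHOST_TEST_PKT_WEIGHT cap) instead of its private length check, and the descriptor walkers only emit a dirty-log entry when a write descriptor actually contributed buffers (log && ret), so zero-length descriptors no longer produce bogus log records. The budget idea in isolation, as a runnable loop with a made-up packet size:

    #include <stdio.h>

    #define BYTE_WEIGHT 0x80000     /* cf. VHOST_TEST_WEIGHT above */

    /* Returns 1 when this queue has used its budget and must requeue
     * itself so other virtqueues get a turn - the role played by
     * vhost_exceeds_weight() in the hunk above. */
    static int exceeds_weight(unsigned long long total_len)
    {
            return total_len >= BYTE_WEIGHT;
    }

    int main(void)
    {
            unsigned long long total_len = 0;
            unsigned long polled = 0;

            for (;;) {                      /* stand-in for handle_vq() */
                    unsigned int len = 4096; /* pretend packet */

                    total_len += len;
                    polled++;
                    if (exceeds_weight(total_len))
                            break;          /* yield instead of starving others */
            }
            printf("processed %lu buffers (%llu bytes) before yielding\n",
                   polled, total_len);
            return 0;
    }
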
+diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
+index 9bfa66592aa7..c71e534ca7ef 100644
+--- a/fs/btrfs/compression.c
++++ b/fs/btrfs/compression.c
+@@ -42,6 +42,22 @@ const char* btrfs_compress_type2str(enum btrfs_compression_type type)
+ 	return NULL;
+ }
+ 
++bool btrfs_compress_is_valid_type(const char *str, size_t len)
++{
++	int i;
++
++	for (i = 1; i < ARRAY_SIZE(btrfs_compress_types); i++) {
++		size_t comp_len = strlen(btrfs_compress_types[i]);
++
++		if (len < comp_len)
++			continue;
++
++		if (!strncmp(btrfs_compress_types[i], str, comp_len))
++			return true;
++	}
++	return false;
++}
++
+ static int btrfs_decompress_bio(struct compressed_bio *cb);
+ 
+ static inline int compressed_bio_size(struct btrfs_fs_info *fs_info,
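
btrfs_compress_is_valid_type() turns algorithm-name validation into a walk over the existing btrfs_compress_types[] table (entry 0 is the empty "no compression" slot, hence i starting at 1); the props.c hunk further down replaces an open-coded strncmp chain with it, so a future algorithm needs one table entry instead of another branch. Note the match is by prefix, so trailing bytes after a known name are tolerated. A standalone replica - table contents as I read this kernel, confirm against fs/btrfs/compression.c:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Mirrors btrfs_compress_types[]; entry 0 is skipped as above. */
    static const char *const compress_types[] = { "", "zlib", "lzo", "zstd" };

    static bool compress_is_valid_type(const char *str, size_t len)
    {
            for (size_t i = 1;
                 i < sizeof(compress_types) / sizeof(compress_types[0]); i++) {
                    size_t comp_len = strlen(compress_types[i]);

                    if (len < comp_len)
                            continue;
                    if (!strncmp(compress_types[i], str, comp_len))
                            return true;    /* prefix match */
            }
            return false;
    }

    int main(void)
    {
            printf("%d %d %d\n",
                   compress_is_valid_type("zlib", 4),
                   compress_is_valid_type("zstd", 4),
                   compress_is_valid_type("gzip", 4));      /* 1 1 0 */
            return 0;
    }
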
+diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
+index ddda9b80bf20..f97d90a1fa53 100644
+--- a/fs/btrfs/compression.h
++++ b/fs/btrfs/compression.h
+@@ -127,6 +127,7 @@ extern const struct btrfs_compress_op btrfs_lzo_compress;
+ extern const struct btrfs_compress_op btrfs_zstd_compress;
+ 
+ const char* btrfs_compress_type2str(enum btrfs_compression_type type);
++bool btrfs_compress_is_valid_type(const char *str, size_t len);
+ 
+ int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end);
+ 
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 82682da5a40d..4644f9b629a5 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -3200,6 +3200,9 @@ int btrfs_prealloc_file_range_trans(struct inode *inode,
+ 				    struct btrfs_trans_handle *trans, int mode,
+ 				    u64 start, u64 num_bytes, u64 min_size,
+ 				    loff_t actual_len, u64 *alloc_hint);
++int btrfs_run_delalloc_range(void *private_data, struct page *locked_page,
++		u64 start, u64 end, int *page_started, unsigned long *nr_written,
++		struct writeback_control *wbc);
+ extern const struct dentry_operations btrfs_dentry_operations;
+ #ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
+ void btrfs_test_inode_set_ops(struct inode *inode);
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 0cc800d22a08..88c939f7aad9 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -10478,22 +10478,6 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+ 	}
+ 	spin_unlock(&block_group->lock);
+ 
+-	if (remove_em) {
+-		struct extent_map_tree *em_tree;
+-
+-		em_tree = &fs_info->mapping_tree.map_tree;
+-		write_lock(&em_tree->lock);
+-		/*
+-		 * The em might be in the pending_chunks list, so make sure the
+-		 * chunk mutex is locked, since remove_extent_mapping() will
+-		 * delete us from that list.
+-		 */
+-		remove_extent_mapping(em_tree, em);
+-		write_unlock(&em_tree->lock);
+-		/* once for the tree */
+-		free_extent_map(em);
+-	}
+-
+ 	mutex_unlock(&fs_info->chunk_mutex);
+ 
+ 	ret = remove_block_group_free_space(trans, block_group);
+@@ -10510,6 +10494,24 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+ 		goto out;
+ 
+ 	ret = btrfs_del_item(trans, root, path);
++	if (ret)
++		goto out;
++
++	if (remove_em) {
++		struct extent_map_tree *em_tree;
++
++		em_tree = &fs_info->mapping_tree.map_tree;
++		write_lock(&em_tree->lock);
++		/*
++		 * The em might be in the pending_chunks list, so make sure the
++		 * chunk mutex is locked, since remove_extent_mapping() will
++		 * delete us from that list.
++		 */
++		remove_extent_mapping(em_tree, em);
++		write_unlock(&em_tree->lock);
++		/* once for the tree */
++		free_extent_map(em);
++	}
+ out:
+ 	btrfs_free_path(path);
+ 	return ret;
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 90b0a6eff535..cb598eb4f3bd 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -3199,7 +3199,7 @@ static void update_nr_written(struct writeback_control *wbc,
+ /*
+  * helper for __extent_writepage, doing all of the delayed allocation setup.
+  *
+- * This returns 1 if our fill_delalloc function did all the work required
++ * This returns 1 if btrfs_run_delalloc_range function did all the work required
+  * to write the page (copy into inline extent).  In this case the IO has
+  * been started and the page is already unlocked.
+  *
+@@ -3220,7 +3220,7 @@ static noinline_for_stack int writepage_delalloc(struct inode *inode,
+ 	int ret;
+ 	int page_started = 0;
+ 
+-	if (epd->extent_locked || !tree->ops || !tree->ops->fill_delalloc)
++	if (epd->extent_locked)
+ 		return 0;
+ 
+ 	while (delalloc_end < page_end) {
+@@ -3233,18 +3233,16 @@ static noinline_for_stack int writepage_delalloc(struct inode *inode,
+ 			delalloc_start = delalloc_end + 1;
+ 			continue;
+ 		}
+-		ret = tree->ops->fill_delalloc(inode, page,
+-					       delalloc_start,
+-					       delalloc_end,
+-					       &page_started,
+-					       nr_written, wbc);
++		ret = btrfs_run_delalloc_range(inode, page, delalloc_start,
++				delalloc_end, &page_started, nr_written, wbc);
+ 		/* File system has been set read-only */
+ 		if (ret) {
+ 			SetPageError(page);
+-			/* fill_delalloc should be return < 0 for error
+-			 * but just in case, we use > 0 here meaning the
+-			 * IO is started, so we don't want to return > 0
+-			 * unless things are going well.
++			/*
++			 * btrfs_run_delalloc_range should return < 0 for error
++			 * but just in case, we use > 0 here meaning the IO is
++			 * started, so we don't want to return > 0 unless
++			 * things are going well.
+ 			 */
+ 			ret = ret < 0 ? ret : -EIO;
+ 			goto done;
+diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
+index b4d03e677e1d..ed27becd963c 100644
+--- a/fs/btrfs/extent_io.h
++++ b/fs/btrfs/extent_io.h
+@@ -106,11 +106,6 @@ struct extent_io_ops {
+ 	/*
+ 	 * Optional hooks, called if the pointer is not NULL
+ 	 */
+-	int (*fill_delalloc)(void *private_data, struct page *locked_page,
+-			     u64 start, u64 end, int *page_started,
+-			     unsigned long *nr_written,
+-			     struct writeback_control *wbc);
+-
+ 	int (*writepage_start_hook)(struct page *page, u64 start, u64 end);
+ 	void (*writepage_end_io_hook)(struct page *page, u64 start, u64 end,
+ 				      struct extent_state *state, int uptodate);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 355ff08e9d44..98c535ae038d 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -110,17 +110,17 @@ static void __endio_write_update_ordered(struct inode *inode,
+  * extent_clear_unlock_delalloc() to clear both the bits EXTENT_DO_ACCOUNTING
+  * and EXTENT_DELALLOC simultaneously, because that causes the reserved metadata
+  * to be released, which we want to happen only when finishing the ordered
+- * extent (btrfs_finish_ordered_io()). Also note that the caller of the
+- * fill_delalloc() callback already does proper cleanup for the first page of
+- * the range, that is, it invokes the callback writepage_end_io_hook() for the
+- * range of the first page.
++ * extent (btrfs_finish_ordered_io()).
+  */
+ static inline void btrfs_cleanup_ordered_extents(struct inode *inode,
+-						 const u64 offset,
+-						 const u64 bytes)
++						 struct page *locked_page,
++						 u64 offset, u64 bytes)
+ {
+ 	unsigned long index = offset >> PAGE_SHIFT;
+ 	unsigned long end_index = (offset + bytes - 1) >> PAGE_SHIFT;
++	u64 page_start = page_offset(locked_page);
++	u64 page_end = page_start + PAGE_SIZE - 1;
++
+ 	struct page *page;
+ 
+ 	while (index <= end_index) {
+@@ -131,8 +131,18 @@ static inline void btrfs_cleanup_ordered_extents(struct inode *inode,
+ 		ClearPagePrivate2(page);
+ 		put_page(page);
+ 	}
+-	return __endio_write_update_ordered(inode, offset + PAGE_SIZE,
+-					    bytes - PAGE_SIZE, false);
++
++	/*
++	 * In case this page belongs to the delalloc range being instantiated
++	 * then skip it, since the first page of a range is going to be
++	 * properly cleaned up by the caller of run_delalloc_range
++	 */
++	if (page_start >= offset && page_end <= (offset + bytes - 1)) {
++		offset += PAGE_SIZE;
++		bytes -= PAGE_SIZE;
++	}
++
++	return __endio_write_update_ordered(inode, offset, bytes, false);
+ }
+ 
+ static int btrfs_dirty_inode(struct inode *inode);
+@@ -1599,12 +1609,12 @@ static inline int need_force_cow(struct inode *inode, u64 start, u64 end)
+ }
+ 
+ /*
+- * extent_io.c call back to do delayed allocation processing
++ * Function to process delayed allocation (create CoW) for ranges which are
++ * being touched for the first time.
+  */
+-static int run_delalloc_range(void *private_data, struct page *locked_page,
+-			      u64 start, u64 end, int *page_started,
+-			      unsigned long *nr_written,
+-			      struct writeback_control *wbc)
++int btrfs_run_delalloc_range(void *private_data, struct page *locked_page,
++		u64 start, u64 end, int *page_started, unsigned long *nr_written,
++		struct writeback_control *wbc)
+ {
+ 	struct inode *inode = private_data;
+ 	int ret;
+@@ -1629,7 +1639,8 @@ static int run_delalloc_range(void *private_data, struct page *locked_page,
+ 					   write_flags);
+ 	}
+ 	if (ret)
+-		btrfs_cleanup_ordered_extents(inode, start, end - start + 1);
++		btrfs_cleanup_ordered_extents(inode, locked_page, start,
++					      end - start + 1);
+ 	return ret;
+ }
+ 
+@@ -10598,7 +10609,6 @@ static const struct extent_io_ops btrfs_extent_io_ops = {
+ 	.readpage_io_failed_hook = btrfs_readpage_io_failed_hook,
+ 
+ 	/* optional callbacks */
+-	.fill_delalloc = run_delalloc_range,
+ 	.writepage_end_io_hook = btrfs_writepage_end_io_hook,
+ 	.writepage_start_hook = btrfs_writepage_start_hook,
+ 	.set_bit_hook = btrfs_set_bit_hook,
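
The inode.c side of the run_delalloc_range conversion (the fill_delalloc indirection removed in the preceding extent_io hunks) also reworks btrfs_cleanup_ordered_extents(): instead of unconditionally assuming the first page of the range is the locked page (the old offset + PAGE_SIZE / bytes - PAGE_SIZE math), it skips the locked page only when that page really lies inside the failed range, since, per the new comment, the caller of run_delalloc_range owns its cleanup. The containment check in isolation, runnable with made-up offsets:

    #include <stdio.h>

    #define PAGE_SIZE 4096ULL

    int main(void)
    {
            /* hypothetical delalloc range and locked page */
            unsigned long long offset = 8192, bytes = 16384;
            unsigned long long page_start = 8192;
            unsigned long long page_end = page_start + PAGE_SIZE - 1;

            /* Skip the locked page only if it lies inside the range:
             * its cleanup belongs to the run_delalloc_range() caller. */
            if (page_start >= offset && page_end <= offset + bytes - 1) {
                    offset += PAGE_SIZE;
                    bytes -= PAGE_SIZE;
            }
            printf("ordered cleanup covers [%llu, %llu)\n",
                   offset, offset + bytes);
            return 0;
    }
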
+diff --git a/fs/btrfs/props.c b/fs/btrfs/props.c
+index 61d22a56c0ba..6980a0e13f18 100644
+--- a/fs/btrfs/props.c
++++ b/fs/btrfs/props.c
+@@ -366,11 +366,7 @@ int btrfs_subvol_inherit_props(struct btrfs_trans_handle *trans,
+ 
+ static int prop_compression_validate(const char *value, size_t len)
+ {
+-	if (!strncmp("lzo", value, 3))
+-		return 0;
+-	else if (!strncmp("zlib", value, 4))
+-		return 0;
+-	else if (!strncmp("zstd", value, 4))
++	if (btrfs_compress_is_valid_type(value, len))
+ 		return 0;
+ 
+ 	return -EINVAL;
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 3be1456b5116..916c39770467 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -322,6 +322,7 @@ static struct full_stripe_lock *insert_full_stripe_lock(
+ 	struct rb_node *parent = NULL;
+ 	struct full_stripe_lock *entry;
+ 	struct full_stripe_lock *ret;
++	unsigned int nofs_flag;
+ 
+ 	lockdep_assert_held(&locks_root->lock);
+ 
+@@ -339,8 +340,17 @@ static struct full_stripe_lock *insert_full_stripe_lock(
+ 		}
+ 	}
+ 
+-	/* Insert new lock */
++	/*
++	 * Insert new lock.
++	 *
++	 * We must use GFP_NOFS because the scrub task might be waiting for a
++	 * worker task executing this function and in turn a transaction commit
++	 * might be waiting the scrub task to pause (which needs to wait for all
++	 * the worker tasks to complete before pausing).
++	 */
++	nofs_flag = memalloc_nofs_save();
+ 	ret = kmalloc(sizeof(*ret), GFP_KERNEL);
++	memalloc_nofs_restore(nofs_flag);
+ 	if (!ret)
+ 		return ERR_PTR(-ENOMEM);
+ 	ret->logical = fstripe_logical;
+@@ -568,12 +578,11 @@ static void scrub_put_ctx(struct scrub_ctx *sctx)
+ 		scrub_free_ctx(sctx);
+ }
+ 
+-static noinline_for_stack
+-struct scrub_ctx *scrub_setup_ctx(struct btrfs_device *dev, int is_dev_replace)
++static noinline_for_stack struct scrub_ctx *scrub_setup_ctx(
++		struct btrfs_fs_info *fs_info, int is_dev_replace)
+ {
+ 	struct scrub_ctx *sctx;
+ 	int		i;
+-	struct btrfs_fs_info *fs_info = dev->fs_info;
+ 
+ 	sctx = kzalloc(sizeof(*sctx), GFP_KERNEL);
+ 	if (!sctx)
+@@ -582,7 +591,8 @@ struct scrub_ctx *scrub_setup_ctx(struct btrfs_device *dev, int is_dev_replace)
+ 	sctx->is_dev_replace = is_dev_replace;
+ 	sctx->pages_per_rd_bio = SCRUB_PAGES_PER_RD_BIO;
+ 	sctx->curr = -1;
+-	sctx->fs_info = dev->fs_info;
++	sctx->fs_info = fs_info;
++	INIT_LIST_HEAD(&sctx->csum_list);
+ 	for (i = 0; i < SCRUB_BIOS_PER_SCTX; ++i) {
+ 		struct scrub_bio *sbio;
+ 
+@@ -607,7 +617,6 @@ struct scrub_ctx *scrub_setup_ctx(struct btrfs_device *dev, int is_dev_replace)
+ 	atomic_set(&sctx->workers_pending, 0);
+ 	atomic_set(&sctx->cancel_req, 0);
+ 	sctx->csum_size = btrfs_super_csum_size(fs_info->super_copy);
+-	INIT_LIST_HEAD(&sctx->csum_list);
+ 
+ 	spin_lock_init(&sctx->list_lock);
+ 	spin_lock_init(&sctx->stat_lock);
+@@ -1622,8 +1631,19 @@ static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx,
+ 	mutex_lock(&sctx->wr_lock);
+ again:
+ 	if (!sctx->wr_curr_bio) {
++		unsigned int nofs_flag;
++
++		/*
++		 * We must use GFP_NOFS because the scrub task might be waiting
++		 * for a worker task executing this function and in turn a
++		 * transaction commit might be waiting the scrub task to pause
++		 * (which needs to wait for all the worker tasks to complete
++		 * before pausing).
++		 */
++		nofs_flag = memalloc_nofs_save();
+ 		sctx->wr_curr_bio = kzalloc(sizeof(*sctx->wr_curr_bio),
+ 					      GFP_KERNEL);
++		memalloc_nofs_restore(nofs_flag);
+ 		if (!sctx->wr_curr_bio) {
+ 			mutex_unlock(&sctx->wr_lock);
+ 			return -ENOMEM;
+@@ -3022,8 +3042,7 @@ out:
+ static noinline_for_stack int scrub_stripe(struct scrub_ctx *sctx,
+ 					   struct map_lookup *map,
+ 					   struct btrfs_device *scrub_dev,
+-					   int num, u64 base, u64 length,
+-					   int is_dev_replace)
++					   int num, u64 base, u64 length)
+ {
+ 	struct btrfs_path *path, *ppath;
+ 	struct btrfs_fs_info *fs_info = sctx->fs_info;
+@@ -3299,7 +3318,7 @@ again:
+ 			extent_physical = extent_logical - logical + physical;
+ 			extent_dev = scrub_dev;
+ 			extent_mirror_num = mirror_num;
+-			if (is_dev_replace)
++			if (sctx->is_dev_replace)
+ 				scrub_remap_extent(fs_info, extent_logical,
+ 						   extent_len, &extent_physical,
+ 						   &extent_dev,
+@@ -3397,8 +3416,7 @@ static noinline_for_stack int scrub_chunk(struct scrub_ctx *sctx,
+ 					  struct btrfs_device *scrub_dev,
+ 					  u64 chunk_offset, u64 length,
+ 					  u64 dev_offset,
+-					  struct btrfs_block_group_cache *cache,
+-					  int is_dev_replace)
++					  struct btrfs_block_group_cache *cache)
+ {
+ 	struct btrfs_fs_info *fs_info = sctx->fs_info;
+ 	struct btrfs_mapping_tree *map_tree = &fs_info->mapping_tree;
+@@ -3435,8 +3453,7 @@ static noinline_for_stack int scrub_chunk(struct scrub_ctx *sctx,
+ 		if (map->stripes[i].dev->bdev == scrub_dev->bdev &&
+ 		    map->stripes[i].physical == dev_offset) {
+ 			ret = scrub_stripe(sctx, map, scrub_dev, i,
+-					   chunk_offset, length,
+-					   is_dev_replace);
++					   chunk_offset, length);
+ 			if (ret)
+ 				goto out;
+ 		}
+@@ -3449,8 +3466,7 @@ out:
+ 
+ static noinline_for_stack
+ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
+-			   struct btrfs_device *scrub_dev, u64 start, u64 end,
+-			   int is_dev_replace)
++			   struct btrfs_device *scrub_dev, u64 start, u64 end)
+ {
+ 	struct btrfs_dev_extent *dev_extent = NULL;
+ 	struct btrfs_path *path;
+@@ -3544,7 +3560,7 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
+ 		 */
+ 		scrub_pause_on(fs_info);
+ 		ret = btrfs_inc_block_group_ro(cache);
+-		if (!ret && is_dev_replace) {
++		if (!ret && sctx->is_dev_replace) {
+ 			/*
+ 			 * If we are doing a device replace wait for any tasks
+ 			 * that started dellaloc right before we set the block
+@@ -3609,7 +3625,7 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
+ 		dev_replace->item_needs_writeback = 1;
+ 		btrfs_dev_replace_write_unlock(&fs_info->dev_replace);
+ 		ret = scrub_chunk(sctx, scrub_dev, chunk_offset, length,
+-				  found_key.offset, cache, is_dev_replace);
++				  found_key.offset, cache);
+ 
+ 		/*
+ 		 * flush, submit all pending read and write bios, afterwards
+@@ -3670,7 +3686,7 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
+ 		btrfs_put_block_group(cache);
+ 		if (ret)
+ 			break;
+-		if (is_dev_replace &&
++		if (sctx->is_dev_replace &&
+ 		    atomic64_read(&dev_replace->num_write_errors) > 0) {
+ 			ret = -EIO;
+ 			break;
+@@ -3762,16 +3778,6 @@ fail_scrub_workers:
+ 	return -ENOMEM;
+ }
+ 
+-static noinline_for_stack void scrub_workers_put(struct btrfs_fs_info *fs_info)
+-{
+-	if (--fs_info->scrub_workers_refcnt == 0) {
+-		btrfs_destroy_workqueue(fs_info->scrub_workers);
+-		btrfs_destroy_workqueue(fs_info->scrub_wr_completion_workers);
+-		btrfs_destroy_workqueue(fs_info->scrub_parity_workers);
+-	}
+-	WARN_ON(fs_info->scrub_workers_refcnt < 0);
+-}
+-
+ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 		    u64 end, struct btrfs_scrub_progress *progress,
+ 		    int readonly, int is_dev_replace)
+@@ -3779,6 +3785,10 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 	struct scrub_ctx *sctx;
+ 	int ret;
+ 	struct btrfs_device *dev;
++	unsigned int nofs_flag;
++	struct btrfs_workqueue *scrub_workers = NULL;
++	struct btrfs_workqueue *scrub_wr_comp = NULL;
++	struct btrfs_workqueue *scrub_parity = NULL;
+ 
+ 	if (btrfs_fs_closing(fs_info))
+ 		return -EINVAL;
+@@ -3820,13 +3830,18 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 		return -EINVAL;
+ 	}
+ 
++	/* Allocate outside of device_list_mutex */
++	sctx = scrub_setup_ctx(fs_info, is_dev_replace);
++	if (IS_ERR(sctx))
++		return PTR_ERR(sctx);
+ 
+ 	mutex_lock(&fs_info->fs_devices->device_list_mutex);
+ 	dev = btrfs_find_device(fs_info, devid, NULL, NULL);
+ 	if (!dev || (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state) &&
+ 		     !is_dev_replace)) {
+ 		mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto out_free_ctx;
+ 	}
+ 
+ 	if (!is_dev_replace && !readonly &&
+@@ -3834,7 +3849,8 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 		mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+ 		btrfs_err_in_rcu(fs_info, "scrub: device %s is not writable",
+ 				rcu_str_deref(dev->name));
+-		return -EROFS;
++		ret = -EROFS;
++		goto out_free_ctx;
+ 	}
+ 
+ 	mutex_lock(&fs_info->scrub_lock);
+@@ -3842,7 +3858,8 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 	    test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &dev->dev_state)) {
+ 		mutex_unlock(&fs_info->scrub_lock);
+ 		mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+-		return -EIO;
++		ret = -EIO;
++		goto out_free_ctx;
+ 	}
+ 
+ 	btrfs_dev_replace_read_lock(&fs_info->dev_replace);
+@@ -3852,7 +3869,8 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 		btrfs_dev_replace_read_unlock(&fs_info->dev_replace);
+ 		mutex_unlock(&fs_info->scrub_lock);
+ 		mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+-		return -EINPROGRESS;
++		ret = -EINPROGRESS;
++		goto out_free_ctx;
+ 	}
+ 	btrfs_dev_replace_read_unlock(&fs_info->dev_replace);
+ 
+@@ -3860,16 +3878,9 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 	if (ret) {
+ 		mutex_unlock(&fs_info->scrub_lock);
+ 		mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+-		return ret;
++		goto out_free_ctx;
+ 	}
+ 
+-	sctx = scrub_setup_ctx(dev, is_dev_replace);
+-	if (IS_ERR(sctx)) {
+-		mutex_unlock(&fs_info->scrub_lock);
+-		mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+-		scrub_workers_put(fs_info);
+-		return PTR_ERR(sctx);
+-	}
+ 	sctx->readonly = readonly;
+ 	dev->scrub_ctx = sctx;
+ 	mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+@@ -3882,6 +3893,16 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 	atomic_inc(&fs_info->scrubs_running);
+ 	mutex_unlock(&fs_info->scrub_lock);
+ 
++	/*
++	 * In order to avoid deadlock with reclaim when there is a transaction
++	 * trying to pause scrub, make sure we use GFP_NOFS for all the
++	 * allocations done at btrfs_scrub_pages() and scrub_pages_for_parity()
++	 * invoked by our callees. The pausing request is done when the
++	 * transaction commit starts, and it blocks the transaction until scrub
++	 * is paused (done at specific points at scrub_stripe() or right
++	 * before incrementing fs_info->scrubs_running).
++	 */
++	nofs_flag = memalloc_nofs_save();
+ 	if (!is_dev_replace) {
+ 		/*
+ 		 * by holding device list mutex, we can
+@@ -3893,8 +3914,8 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 	}
+ 
+ 	if (!ret)
+-		ret = scrub_enumerate_chunks(sctx, dev, start, end,
+-					     is_dev_replace);
++		ret = scrub_enumerate_chunks(sctx, dev, start, end);
++	memalloc_nofs_restore(nofs_flag);
+ 
+ 	wait_event(sctx->list_wait, atomic_read(&sctx->bios_in_flight) == 0);
+ 	atomic_dec(&fs_info->scrubs_running);
+@@ -3907,11 +3928,23 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 
+ 	mutex_lock(&fs_info->scrub_lock);
+ 	dev->scrub_ctx = NULL;
+-	scrub_workers_put(fs_info);
++	if (--fs_info->scrub_workers_refcnt == 0) {
++		scrub_workers = fs_info->scrub_workers;
++		scrub_wr_comp = fs_info->scrub_wr_completion_workers;
++		scrub_parity = fs_info->scrub_parity_workers;
++	}
+ 	mutex_unlock(&fs_info->scrub_lock);
+ 
++	btrfs_destroy_workqueue(scrub_workers);
++	btrfs_destroy_workqueue(scrub_wr_comp);
++	btrfs_destroy_workqueue(scrub_parity);
+ 	scrub_put_ctx(sctx);
+ 
++	return ret;
++
++out_free_ctx:
++	scrub_free_ctx(sctx);
++
+ 	return ret;
+ }
+ 
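
Both scrub hunks above use the kernel's scoped-NOFS API from
<linux/sched/mm.h>. The pattern in isolation (a sketch, not a hunk from this
patch): any allocation made between save and restore is implicitly degraded
to GFP_NOFS, so direct reclaim cannot re-enter the filesystem and deadlock
against a transaction commit waiting for scrub to pause.

    unsigned int nofs_flag;

    nofs_flag = memalloc_nofs_save();
    ptr = kmalloc(size, GFP_KERNEL);    /* behaves as GFP_NOFS here */
    memalloc_nofs_restore(nofs_flag);
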
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 6e008bd5c8cd..a8297e7489d9 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -7411,6 +7411,7 @@ static int verify_one_dev_extent(struct btrfs_fs_info *fs_info,
+ 	struct extent_map_tree *em_tree = &fs_info->mapping_tree.map_tree;
+ 	struct extent_map *em;
+ 	struct map_lookup *map;
++	struct btrfs_device *dev;
+ 	u64 stripe_len;
+ 	bool found = false;
+ 	int ret = 0;
+@@ -7460,6 +7461,34 @@ static int verify_one_dev_extent(struct btrfs_fs_info *fs_info,
+ 			physical_offset, devid);
+ 		ret = -EUCLEAN;
+ 	}
++
++	/* Make sure no dev extent is beyond device boundary */
++	dev = btrfs_find_device(fs_info, devid, NULL, NULL);
++	if (!dev) {
++		btrfs_err(fs_info, "failed to find devid %llu", devid);
++		ret = -EUCLEAN;
++		goto out;
++	}
++
++	/* It's possible this device is a dummy for seed device */
++	if (dev->disk_total_bytes == 0) {
++		dev = find_device(fs_info->fs_devices->seed, devid, NULL);
++		if (!dev) {
++			btrfs_err(fs_info, "failed to find seed devid %llu",
++				  devid);
++			ret = -EUCLEAN;
++			goto out;
++		}
++	}
++
++	if (physical_offset + physical_len > dev->disk_total_bytes) {
++		btrfs_err(fs_info,
++"dev extent devid %llu physical offset %llu len %llu is beyond device boundary %llu",
++			  devid, physical_offset, physical_len,
++			  dev->disk_total_bytes);
++		ret = -EUCLEAN;
++		goto out;
++	}
+ out:
+ 	free_extent_map(em);
+ 	return ret;
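
The boundary test added above reduces to a single comparison; as a
standalone helper it could be written like this (illustrative name only):

    static bool dev_extent_in_bounds(u64 physical_offset, u64 physical_len,
                                     u64 disk_total_bytes)
    {
            /* the extent must end at or before the device's last byte */
            return physical_offset + physical_len <= disk_total_bytes;
    }
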
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 11f19432a74c..665a86f83f4b 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -528,13 +528,16 @@ static void ceph_i_callback(struct rcu_head *head)
+ 	kmem_cache_free(ceph_inode_cachep, ci);
+ }
+ 
+-void ceph_destroy_inode(struct inode *inode)
++void ceph_evict_inode(struct inode *inode)
+ {
+ 	struct ceph_inode_info *ci = ceph_inode(inode);
+ 	struct ceph_inode_frag *frag;
+ 	struct rb_node *n;
+ 
+-	dout("destroy_inode %p ino %llx.%llx\n", inode, ceph_vinop(inode));
++	dout("evict_inode %p ino %llx.%llx\n", inode, ceph_vinop(inode));
++
++	truncate_inode_pages_final(&inode->i_data);
++	clear_inode(inode);
+ 
+ 	ceph_fscache_unregister_inode_cookie(ci);
+ 
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index c5cf46e43f2e..02528e11bf33 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -827,9 +827,9 @@ static int ceph_remount(struct super_block *sb, int *flags, char *data)
+ 
+ static const struct super_operations ceph_super_ops = {
+ 	.alloc_inode	= ceph_alloc_inode,
+-	.destroy_inode	= ceph_destroy_inode,
+ 	.write_inode    = ceph_write_inode,
+ 	.drop_inode	= ceph_drop_inode,
++	.evict_inode	= ceph_evict_inode,
+ 	.sync_fs        = ceph_sync_fs,
+ 	.put_super	= ceph_put_super,
+ 	.remount_fs	= ceph_remount,
+diff --git a/fs/ceph/super.h b/fs/ceph/super.h
+index 018019309790..6e968e48e5e4 100644
+--- a/fs/ceph/super.h
++++ b/fs/ceph/super.h
+@@ -854,7 +854,7 @@ static inline bool __ceph_have_pending_cap_snap(struct ceph_inode_info *ci)
+ extern const struct inode_operations ceph_file_iops;
+ 
+ extern struct inode *ceph_alloc_inode(struct super_block *sb);
+-extern void ceph_destroy_inode(struct inode *inode);
++extern void ceph_evict_inode(struct inode *inode);
+ extern int ceph_drop_inode(struct inode *inode);
+ 
+ extern struct inode *ceph_get_inode(struct super_block *sb,
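
The ceph hunks above move inode teardown from ->destroy_inode to
->evict_inode. The generic shape of an ->evict_inode implementation (a
sketch, not ceph's actual body):

    static void example_evict_inode(struct inode *inode)
    {
            /* drop the page cache and mark the inode clear before
             * tearing down any filesystem-private state */
            truncate_inode_pages_final(&inode->i_data);
            clear_inode(inode);
            /* ...release fs-private data hanging off the inode... */
    }
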
+diff --git a/fs/cifs/cifs_fs_sb.h b/fs/cifs/cifs_fs_sb.h
+index 9731d0d891e7..aba2b48d4da1 100644
+--- a/fs/cifs/cifs_fs_sb.h
++++ b/fs/cifs/cifs_fs_sb.h
+@@ -72,5 +72,10 @@ struct cifs_sb_info {
+ 	struct delayed_work prune_tlinks;
+ 	struct rcu_head rcu;
+ 	char *prepath;
++	/*
++	 * Indicate whether serverino option was turned off later
++	 * (cifs_autodisable_serverino) in order to match new mounts.
++	 */
++	bool mnt_cifs_serverino_autodisabled;
+ };
+ #endif				/* _CIFS_FS_SB_H */
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index fb32f3d6925e..64e3888f30e6 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -292,6 +292,7 @@ cifs_alloc_inode(struct super_block *sb)
+ 	cifs_inode->uniqueid = 0;
+ 	cifs_inode->createtime = 0;
+ 	cifs_inode->epoch = 0;
++	spin_lock_init(&cifs_inode->open_file_lock);
+ 	generate_random_uuid(cifs_inode->lease_key);
+ 
+ 	/*
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 6f227cc781e5..57af9bac0045 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -1287,6 +1287,7 @@ struct cifsInodeInfo {
+ 	struct rw_semaphore lock_sem;	/* protect the fields above */
+ 	/* BB add in lists for dirty pages i.e. write caching info for oplock */
+ 	struct list_head openFileList;
++	spinlock_t	open_file_lock;	/* protects openFileList */
+ 	__u32 cifsAttrs; /* e.g. DOS archive bit, sparse, compressed, system */
+ 	unsigned int oplock;		/* oplock/lease level we have */
+ 	unsigned int epoch;		/* used to track lease state changes */
+@@ -1563,6 +1564,25 @@ static inline void free_dfs_info_array(struct dfs_info3_param *param,
+ 	kfree(param);
+ }
+ 
++static inline bool is_interrupt_error(int error)
++{
++	switch (error) {
++	case -EINTR:
++	case -ERESTARTSYS:
++	case -ERESTARTNOHAND:
++	case -ERESTARTNOINTR:
++		return true;
++	}
++	return false;
++}
++
++static inline bool is_retryable_error(int error)
++{
++	if (is_interrupt_error(error) || error == -EAGAIN)
++		return true;
++	return false;
++}
++
+ #define   MID_FREE 0
+ #define   MID_REQUEST_ALLOCATED 1
+ #define   MID_REQUEST_SUBMITTED 2
+@@ -1668,10 +1688,14 @@ require use of the stronger protocol */
+  *  tcp_ses_lock protects:
+  *	list operations on tcp and SMB session lists
+  *  tcon->open_file_lock protects the list of open files hanging off the tcon
++ *  inode->open_file_lock protects the openFileList hanging off the inode
+  *  cfile->file_info_lock protects counters and fields in cifs file struct
+  *  f_owner.lock protects certain per file struct operations
+  *  mapping->page_lock protects certain per page operations
+  *
++ *  Note that the cifs_tcon.open_file_lock should be taken before
++ *  and not after the cifsInodeInfo.open_file_lock
++ *
+  *  Semaphores
+  *  ----------
+  *  sesSem     operations on smb session
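
A typical caller of the new is_interrupt_error()/is_retryable_error()
helpers distinguishes transient failures from permanent ones (illustrative
usage; do_send() is a placeholder):

    rc = do_send(server, wdata);
    if (is_retryable_error(rc))
            goto retry;                     /* signal or -EAGAIN */
    if (rc)
            mapping_set_error(mapping, rc); /* permanent failure */
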
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index 269471c8f42b..86a54b809c48 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -2033,16 +2033,17 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
+ 
+ 		wdata2->cfile = find_writable_file(CIFS_I(inode), false);
+ 		if (!wdata2->cfile) {
+-			cifs_dbg(VFS, "No writable handles for inode\n");
++			cifs_dbg(VFS, "No writable handle to retry writepages\n");
+ 			rc = -EBADF;
+-			break;
++		} else {
++			wdata2->pid = wdata2->cfile->pid;
++			rc = server->ops->async_writev(wdata2,
++						       cifs_writedata_release);
+ 		}
+-		wdata2->pid = wdata2->cfile->pid;
+-		rc = server->ops->async_writev(wdata2, cifs_writedata_release);
+ 
+ 		for (j = 0; j < nr_pages; j++) {
+ 			unlock_page(wdata2->pages[j]);
+-			if (rc != 0 && rc != -EAGAIN) {
++			if (rc != 0 && !is_retryable_error(rc)) {
+ 				SetPageError(wdata2->pages[j]);
+ 				end_page_writeback(wdata2->pages[j]);
+ 				put_page(wdata2->pages[j]);
+@@ -2051,8 +2052,9 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
+ 
+ 		if (rc) {
+ 			kref_put(&wdata2->refcount, cifs_writedata_release);
+-			if (rc == -EAGAIN)
++			if (is_retryable_error(rc))
+ 				continue;
++			i += nr_pages;
+ 			break;
+ 		}
+ 
+@@ -2060,7 +2062,15 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
+ 		i += nr_pages;
+ 	} while (i < wdata->nr_pages);
+ 
+-	mapping_set_error(inode->i_mapping, rc);
++	/* cleanup remaining pages from the original wdata */
++	for (; i < wdata->nr_pages; i++) {
++		SetPageError(wdata->pages[i]);
++		end_page_writeback(wdata->pages[i]);
++		put_page(wdata->pages[i]);
++	}
++
++	if (rc != 0 && !is_retryable_error(rc))
++		mapping_set_error(inode->i_mapping, rc);
+ 	kref_put(&wdata->refcount, cifs_writedata_release);
+ }
+ 
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index c53a2e86ed54..208430bb66fc 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -3247,12 +3247,16 @@ compare_mount_options(struct super_block *sb, struct cifs_mnt_data *mnt_data)
+ {
+ 	struct cifs_sb_info *old = CIFS_SB(sb);
+ 	struct cifs_sb_info *new = mnt_data->cifs_sb;
++	unsigned int oldflags = old->mnt_cifs_flags & CIFS_MOUNT_MASK;
++	unsigned int newflags = new->mnt_cifs_flags & CIFS_MOUNT_MASK;
+ 
+ 	if ((sb->s_flags & CIFS_MS_MASK) != (mnt_data->flags & CIFS_MS_MASK))
+ 		return 0;
+ 
+-	if ((old->mnt_cifs_flags & CIFS_MOUNT_MASK) !=
+-	    (new->mnt_cifs_flags & CIFS_MOUNT_MASK))
++	if (old->mnt_cifs_serverino_autodisabled)
++		newflags &= ~CIFS_MOUNT_SERVER_INUM;
++
++	if (oldflags != newflags)
+ 		return 0;
+ 
+ 	/*
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 23cee91ed442..8703b5f26f45 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -336,10 +336,12 @@ cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,
+ 	list_add(&cfile->tlist, &tcon->openFileList);
+ 
+ 	/* if readable file instance put first in list*/
++	spin_lock(&cinode->open_file_lock);
+ 	if (file->f_mode & FMODE_READ)
+ 		list_add(&cfile->flist, &cinode->openFileList);
+ 	else
+ 		list_add_tail(&cfile->flist, &cinode->openFileList);
++	spin_unlock(&cinode->open_file_lock);
+ 	spin_unlock(&tcon->open_file_lock);
+ 
+ 	if (fid->purge_cache)
+@@ -411,7 +413,9 @@ void _cifsFileInfo_put(struct cifsFileInfo *cifs_file, bool wait_oplock_handler)
+ 	cifs_add_pending_open_locked(&fid, cifs_file->tlink, &open);
+ 
+ 	/* remove it from the lists */
++	spin_lock(&cifsi->open_file_lock);
+ 	list_del(&cifs_file->flist);
++	spin_unlock(&cifsi->open_file_lock);
+ 	list_del(&cifs_file->tlist);
+ 
+ 	if (list_empty(&cifsi->openFileList)) {
+@@ -749,7 +753,8 @@ reopen_success:
+ 
+ 	if (can_flush) {
+ 		rc = filemap_write_and_wait(inode->i_mapping);
+-		mapping_set_error(inode->i_mapping, rc);
++		if (!is_interrupt_error(rc))
++			mapping_set_error(inode->i_mapping, rc);
+ 
+ 		if (tcon->unix_ext)
+ 			rc = cifs_get_inode_info_unix(&inode, full_path,
+@@ -1928,10 +1933,10 @@ refind_writable:
+ 		if (!rc)
+ 			return inv_file;
+ 		else {
+-			spin_lock(&tcon->open_file_lock);
++			spin_lock(&cifs_inode->open_file_lock);
+ 			list_move_tail(&inv_file->flist,
+ 					&cifs_inode->openFileList);
+-			spin_unlock(&tcon->open_file_lock);
++			spin_unlock(&cifs_inode->open_file_lock);
+ 			cifsFileInfo_put(inv_file);
+ 			++refind;
+ 			inv_file = NULL;
+@@ -2137,6 +2142,7 @@ static int cifs_writepages(struct address_space *mapping,
+ 	pgoff_t end, index;
+ 	struct cifs_writedata *wdata;
+ 	int rc = 0;
++	int saved_rc = 0;
+ 
+ 	/*
+ 	 * If wsize is smaller than the page cache size, default to writing
+@@ -2163,8 +2169,10 @@ retry:
+ 
+ 		rc = server->ops->wait_mtu_credits(server, cifs_sb->wsize,
+ 						   &wsize, &credits);
+-		if (rc)
++		if (rc != 0) {
++			done = true;
+ 			break;
++		}
+ 
+ 		tofind = min((wsize / PAGE_SIZE) - 1, end - index) + 1;
+ 
+@@ -2172,6 +2180,7 @@ retry:
+ 						  &found_pages);
+ 		if (!wdata) {
+ 			rc = -ENOMEM;
++			done = true;
+ 			add_credits_and_wake_if(server, credits, 0);
+ 			break;
+ 		}
+@@ -2200,7 +2209,7 @@ retry:
+ 		if (rc != 0) {
+ 			add_credits_and_wake_if(server, wdata->credits, 0);
+ 			for (i = 0; i < nr_pages; ++i) {
+-				if (rc == -EAGAIN)
++				if (is_retryable_error(rc))
+ 					redirty_page_for_writepage(wbc,
+ 							   wdata->pages[i]);
+ 				else
+@@ -2208,7 +2217,7 @@ retry:
+ 				end_page_writeback(wdata->pages[i]);
+ 				put_page(wdata->pages[i]);
+ 			}
+-			if (rc != -EAGAIN)
++			if (!is_retryable_error(rc))
+ 				mapping_set_error(mapping, rc);
+ 		}
+ 		kref_put(&wdata->refcount, cifs_writedata_release);
+@@ -2218,6 +2227,15 @@ retry:
+ 			continue;
+ 		}
+ 
++		/* Return immediately if we received a signal during writing */
++		if (is_interrupt_error(rc)) {
++			done = true;
++			break;
++		}
++
++		if (rc != 0 && saved_rc == 0)
++			saved_rc = rc;
++
+ 		wbc->nr_to_write -= nr_pages;
+ 		if (wbc->nr_to_write <= 0)
+ 			done = true;
+@@ -2235,6 +2253,9 @@ retry:
+ 		goto retry;
+ 	}
+ 
++	if (saved_rc != 0)
++		rc = saved_rc;
++
+ 	if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
+ 		mapping->writeback_index = index;
+ 
+@@ -2266,8 +2287,8 @@ cifs_writepage_locked(struct page *page, struct writeback_control *wbc)
+ 	set_page_writeback(page);
+ retry_write:
+ 	rc = cifs_partialpagewrite(page, 0, PAGE_SIZE);
+-	if (rc == -EAGAIN) {
+-		if (wbc->sync_mode == WB_SYNC_ALL)
++	if (is_retryable_error(rc)) {
++		if (wbc->sync_mode == WB_SYNC_ALL && rc == -EAGAIN)
+ 			goto retry_write;
+ 		redirty_page_for_writepage(wbc, page);
+ 	} else if (rc != 0) {
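
The cifs_writepages() changes above record the first non-zero status while
allowing later iterations to proceed, and bail out immediately only on
signals. The pattern in isolation (sketch; write_one_batch() is a
placeholder):

    int rc = 0, saved_rc = 0;

    while (!done) {
            rc = write_one_batch();
            if (is_interrupt_error(rc))
                    break;                  /* signal: stop right away */
            if (rc != 0 && saved_rc == 0)
                    saved_rc = rc;          /* remember the first error */
    }
    if (saved_rc != 0)
            rc = saved_rc;                  /* report the first error */
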
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 1fadd314ae7f..53f3d08898af 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -2261,6 +2261,11 @@ cifs_setattr_unix(struct dentry *direntry, struct iattr *attrs)
+ 	 * the flush returns error?
+ 	 */
+ 	rc = filemap_write_and_wait(inode->i_mapping);
++	if (is_interrupt_error(rc)) {
++		rc = -ERESTARTSYS;
++		goto out;
++	}
++
+ 	mapping_set_error(inode->i_mapping, rc);
+ 	rc = 0;
+ 
+@@ -2404,6 +2409,11 @@ cifs_setattr_nounix(struct dentry *direntry, struct iattr *attrs)
+ 	 * the flush returns error?
+ 	 */
+ 	rc = filemap_write_and_wait(inode->i_mapping);
++	if (is_interrupt_error(rc)) {
++		rc = -ERESTARTSYS;
++		goto cifs_setattr_exit;
++	}
++
+ 	mapping_set_error(inode->i_mapping, rc);
+ 	rc = 0;
+ 
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index facc94e159a1..e45f8e321371 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -523,6 +523,7 @@ cifs_autodisable_serverino(struct cifs_sb_info *cifs_sb)
+ {
+ 	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) {
+ 		cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_SERVER_INUM;
++		cifs_sb->mnt_cifs_serverino_autodisabled = true;
+ 		cifs_dbg(VFS, "Autodisabling the use of server inode numbers on %s. This server doesn't seem to support them properly. Hardlinks will not be recognized on this mount. Consider mounting with the \"noserverino\" option to silence this message.\n",
+ 			 cifs_sb_master_tcon(cifs_sb)->treeName);
+ 	}
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 2bc47eb6215e..cbe633f1840a 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -712,6 +712,7 @@ SMB2_negotiate(const unsigned int xid, struct cifs_ses *ses)
+ 		} else if (rsp->DialectRevision == cpu_to_le16(SMB21_PROT_ID)) {
+ 			/* ops set to 3.0 by default for default so update */
+ 			ses->server->ops = &smb21_operations;
++			ses->server->vals = &smb21_values;
+ 		}
+ 	} else if (le16_to_cpu(rsp->DialectRevision) !=
+ 				ses->server->vals->protocol_id) {
+diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
+index 5fdb9a509a97..1959931e14c1 100644
+--- a/fs/cifs/smbdirect.c
++++ b/fs/cifs/smbdirect.c
+@@ -2090,7 +2090,8 @@ int smbd_recv(struct smbd_connection *info, struct msghdr *msg)
+  * rqst: the data to write
+  * return value: 0 if successfully write, otherwise error code
+  */
+-int smbd_send(struct TCP_Server_Info *server, struct smb_rqst *rqst)
++int smbd_send(struct TCP_Server_Info *server,
++	int num_rqst, struct smb_rqst *rqst_array)
+ {
+ 	struct smbd_connection *info = server->smbd_conn;
+ 	struct kvec vec;
+@@ -2102,6 +2103,8 @@ int smbd_send(struct TCP_Server_Info *server, struct smb_rqst *rqst)
+ 		info->max_send_size - sizeof(struct smbd_data_transfer);
+ 	struct kvec *iov;
+ 	int rc;
++	struct smb_rqst *rqst;
++	int rqst_idx;
+ 
+ 	info->smbd_send_pending++;
+ 	if (info->transport_status != SMBD_CONNECTED) {
+@@ -2109,47 +2112,41 @@ int smbd_send(struct TCP_Server_Info *server, struct smb_rqst *rqst)
+ 		goto done;
+ 	}
+ 
+-	/*
+-	 * Skip the RFC1002 length defined in MS-SMB2 section 2.1
+-	 * It is used only for TCP transport in the iov[0]
+-	 * In future we may want to add a transport layer under protocol
+-	 * layer so this will only be issued to TCP transport
+-	 */
+-
+-	if (rqst->rq_iov[0].iov_len != 4) {
+-		log_write(ERR, "expected the pdu length in 1st iov, but got %zu\n", rqst->rq_iov[0].iov_len);
+-		return -EINVAL;
+-	}
+-
+ 	/*
+ 	 * Add in the page array if there is one. The caller needs to set
+ 	 * rq_tailsz to PAGE_SIZE when the buffer has multiple pages and
+ 	 * ends at page boundary
+ 	 */
+-	buflen = smb_rqst_len(server, rqst);
++	remaining_data_length = 0;
++	for (i = 0; i < num_rqst; i++)
++		remaining_data_length += smb_rqst_len(server, &rqst_array[i]);
+ 
+-	if (buflen + sizeof(struct smbd_data_transfer) >
++	if (remaining_data_length + sizeof(struct smbd_data_transfer) >
+ 		info->max_fragmented_send_size) {
+ 		log_write(ERR, "payload size %d > max size %d\n",
+-			buflen, info->max_fragmented_send_size);
++			remaining_data_length, info->max_fragmented_send_size);
+ 		rc = -EINVAL;
+ 		goto done;
+ 	}
+ 
+-	iov = &rqst->rq_iov[1];
++	rqst_idx = 0;
++
++next_rqst:
++	rqst = &rqst_array[rqst_idx];
++	iov = rqst->rq_iov;
+ 
+-	cifs_dbg(FYI, "Sending smb (RDMA): smb_len=%u\n", buflen);
+-	for (i = 0; i < rqst->rq_nvec-1; i++)
++	cifs_dbg(FYI, "Sending smb (RDMA): idx=%d smb_len=%lu\n",
++		rqst_idx, smb_rqst_len(server, rqst));
++	for (i = 0; i < rqst->rq_nvec; i++)
+ 		dump_smb(iov[i].iov_base, iov[i].iov_len);
+ 
+-	remaining_data_length = buflen;
+ 
+-	log_write(INFO, "rqst->rq_nvec=%d rqst->rq_npages=%d rq_pagesz=%d "
+-		"rq_tailsz=%d buflen=%d\n",
+-		rqst->rq_nvec, rqst->rq_npages, rqst->rq_pagesz,
+-		rqst->rq_tailsz, buflen);
++	log_write(INFO, "rqst_idx=%d nvec=%d rqst->rq_npages=%d rq_pagesz=%d "
++		"rq_tailsz=%d buflen=%lu\n",
++		rqst_idx, rqst->rq_nvec, rqst->rq_npages, rqst->rq_pagesz,
++		rqst->rq_tailsz, smb_rqst_len(server, rqst));
+ 
+-	start = i = iov[0].iov_len ? 0 : 1;
++	start = i = 0;
+ 	buflen = 0;
+ 	while (true) {
+ 		buflen += iov[i].iov_len;
+@@ -2197,14 +2194,14 @@ int smbd_send(struct TCP_Server_Info *server, struct smb_rqst *rqst)
+ 						goto done;
+ 				}
+ 				i++;
+-				if (i == rqst->rq_nvec-1)
++				if (i == rqst->rq_nvec)
+ 					break;
+ 			}
+ 			start = i;
+ 			buflen = 0;
+ 		} else {
+ 			i++;
+-			if (i == rqst->rq_nvec-1) {
++			if (i == rqst->rq_nvec) {
+ 				/* send out all remaining vecs */
+ 				remaining_data_length -= buflen;
+ 				log_write(INFO,
+@@ -2248,6 +2245,10 @@ int smbd_send(struct TCP_Server_Info *server, struct smb_rqst *rqst)
+ 		}
+ 	}
+ 
++	rqst_idx++;
++	if (rqst_idx < num_rqst)
++		goto next_rqst;
++
+ done:
+ 	/*
+ 	 * As an optimization, we don't wait for individual I/O to finish
+diff --git a/fs/cifs/smbdirect.h b/fs/cifs/smbdirect.h
+index a11096254f29..b5c240ff2191 100644
+--- a/fs/cifs/smbdirect.h
++++ b/fs/cifs/smbdirect.h
+@@ -292,7 +292,8 @@ void smbd_destroy(struct smbd_connection *info);
+ 
+ /* Interface for carrying upper layer I/O through send/recv */
+ int smbd_recv(struct smbd_connection *info, struct msghdr *msg);
+-int smbd_send(struct TCP_Server_Info *server, struct smb_rqst *rqst);
++int smbd_send(struct TCP_Server_Info *server,
++	int num_rqst, struct smb_rqst *rqst);
+ 
+ enum mr_state {
+ 	MR_READY,
+@@ -332,7 +333,7 @@ static inline void *smbd_get_connection(
+ static inline int smbd_reconnect(struct TCP_Server_Info *server) {return -1; }
+ static inline void smbd_destroy(struct smbd_connection *info) {}
+ static inline int smbd_recv(struct smbd_connection *info, struct msghdr *msg) {return -1; }
+-static inline int smbd_send(struct TCP_Server_Info *server, struct smb_rqst *rqst) {return -1; }
++static inline int smbd_send(struct TCP_Server_Info *server, int num_rqst, struct smb_rqst *rqst) {return -1; }
+ #endif
+ 
+ #endif
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index f2938bd95c40..fe77f41bff9f 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -287,7 +287,7 @@ __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
+ 	__be32 rfc1002_marker;
+ 
+ 	if (cifs_rdma_enabled(server) && server->smbd_conn) {
+-		rc = smbd_send(server, rqst);
++		rc = smbd_send(server, num_rqst, rqst);
+ 		goto smbd_done;
+ 	}
+ 	if (ssocket == NULL)
+diff --git a/fs/ext4/block_validity.c b/fs/ext4/block_validity.c
+index 913061c0de1b..e8e27cdc2f67 100644
+--- a/fs/ext4/block_validity.c
++++ b/fs/ext4/block_validity.c
+@@ -137,6 +137,49 @@ static void debug_print_tree(struct ext4_sb_info *sbi)
+ 	printk(KERN_CONT "\n");
+ }
+ 
++static int ext4_protect_reserved_inode(struct super_block *sb, u32 ino)
++{
++	struct inode *inode;
++	struct ext4_sb_info *sbi = EXT4_SB(sb);
++	struct ext4_map_blocks map;
++	u32 i = 0, num;
++	int err = 0, n;
++
++	if ((ino < EXT4_ROOT_INO) ||
++	    (ino > le32_to_cpu(sbi->s_es->s_inodes_count)))
++		return -EINVAL;
++	inode = ext4_iget(sb, ino, EXT4_IGET_SPECIAL);
++	if (IS_ERR(inode))
++		return PTR_ERR(inode);
++	num = (inode->i_size + sb->s_blocksize - 1) >> sb->s_blocksize_bits;
++	while (i < num) {
++		map.m_lblk = i;
++		map.m_len = num - i;
++		n = ext4_map_blocks(NULL, inode, &map, 0);
++		if (n < 0) {
++			err = n;
++			break;
++		}
++		if (n == 0) {
++			i++;
++		} else {
++			if (!ext4_data_block_valid(sbi, map.m_pblk, n)) {
++				ext4_error(sb, "blocks %llu-%llu from inode %u "
++					   "overlap system zone", map.m_pblk,
++					   map.m_pblk + map.m_len - 1, ino);
++				err = -EFSCORRUPTED;
++				break;
++			}
++			err = add_system_zone(sbi, map.m_pblk, n);
++			if (err < 0)
++				break;
++			i += n;
++		}
++	}
++	iput(inode);
++	return err;
++}
++
+ int ext4_setup_system_zone(struct super_block *sb)
+ {
+ 	ext4_group_t ngroups = ext4_get_groups_count(sb);
+@@ -171,6 +214,12 @@ int ext4_setup_system_zone(struct super_block *sb)
+ 		if (ret)
+ 			return ret;
+ 	}
++	if (ext4_has_feature_journal(sb) && sbi->s_es->s_journal_inum) {
++		ret = ext4_protect_reserved_inode(sb,
++				le32_to_cpu(sbi->s_es->s_journal_inum));
++		if (ret)
++			return ret;
++	}
+ 
+ 	if (test_opt(sb, DEBUG))
+ 		debug_print_tree(sbi);
+@@ -227,6 +276,11 @@ int ext4_check_blockref(const char *function, unsigned int line,
+ 	__le32 *bref = p;
+ 	unsigned int blk;
+ 
++	if (ext4_has_feature_journal(inode->i_sb) &&
++	    (inode->i_ino ==
++	     le32_to_cpu(EXT4_SB(inode->i_sb)->s_es->s_journal_inum)))
++		return 0;
++
+ 	while (bref < p+max) {
+ 		blk = le32_to_cpu(*bref++);
+ 		if (blk &&
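
The new ext4_protect_reserved_inode() walks the journal inode's logical
blocks with ext4_map_blocks(). Condensed to its loop skeleton (comments
added; error handling elided):

    while (i < num) {
            map.m_lblk = i;
            map.m_len = num - i;
            n = ext4_map_blocks(NULL, inode, &map, 0); /* lookup only */
            if (n < 0)
                    break;          /* error */
            if (n == 0)
                    i++;            /* hole: advance one block */
            else
                    i += n;         /* mapped extent covering n blocks */
    }
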
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 45aea792d22a..00bf0b67aae8 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -518,10 +518,14 @@ __read_extent_tree_block(const char *function, unsigned int line,
+ 	}
+ 	if (buffer_verified(bh) && !(flags & EXT4_EX_FORCE_CACHE))
+ 		return bh;
+-	err = __ext4_ext_check(function, line, inode,
+-			       ext_block_hdr(bh), depth, pblk);
+-	if (err)
+-		goto errout;
++	if (!ext4_has_feature_journal(inode->i_sb) ||
++	    (inode->i_ino !=
++	     le32_to_cpu(EXT4_SB(inode->i_sb)->s_es->s_journal_inum))) {
++		err = __ext4_ext_check(function, line, inode,
++				       ext_block_hdr(bh), depth, pblk);
++		if (err)
++			goto errout;
++	}
+ 	set_buffer_verified(bh);
+ 	/*
+ 	 * If this is a leaf block, cache all of its entries
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index e65559bf7728..cff6277f7a9f 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -399,6 +399,10 @@ static int __check_block_validity(struct inode *inode, const char *func,
+ 				unsigned int line,
+ 				struct ext4_map_blocks *map)
+ {
++	if (ext4_has_feature_journal(inode->i_sb) &&
++	    (inode->i_ino ==
++	     le32_to_cpu(EXT4_SB(inode->i_sb)->s_es->s_journal_inum)))
++		return 0;
+ 	if (!ext4_data_block_valid(EXT4_SB(inode->i_sb), map->m_pblk,
+ 				   map->m_len)) {
+ 		ext4_error_inode(inode, func, line, map->m_pblk,
+diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
+index 75fe92eaa681..1624618c2bc7 100644
+--- a/fs/nfs/delegation.c
++++ b/fs/nfs/delegation.c
+@@ -153,7 +153,7 @@ again:
+ 		/* Block nfs4_proc_unlck */
+ 		mutex_lock(&sp->so_delegreturn_mutex);
+ 		seq = raw_seqcount_begin(&sp->so_reclaim_seqcount);
+-		err = nfs4_open_delegation_recall(ctx, state, stateid, type);
++		err = nfs4_open_delegation_recall(ctx, state, stateid);
+ 		if (!err)
+ 			err = nfs_delegation_claim_locks(ctx, state, stateid);
+ 		if (!err && read_seqcount_retry(&sp->so_reclaim_seqcount, seq))
+diff --git a/fs/nfs/delegation.h b/fs/nfs/delegation.h
+index bb1ef8c37af4..c95477823fa6 100644
+--- a/fs/nfs/delegation.h
++++ b/fs/nfs/delegation.h
+@@ -61,7 +61,7 @@ void nfs_reap_expired_delegations(struct nfs_client *clp);
+ 
+ /* NFSv4 delegation-related procedures */
+ int nfs4_proc_delegreturn(struct inode *inode, struct rpc_cred *cred, const nfs4_stateid *stateid, int issync);
+-int nfs4_open_delegation_recall(struct nfs_open_context *ctx, struct nfs4_state *state, const nfs4_stateid *stateid, fmode_t type);
++int nfs4_open_delegation_recall(struct nfs_open_context *ctx, struct nfs4_state *state, const nfs4_stateid *stateid);
+ int nfs4_lock_delegation_recall(struct file_lock *fl, struct nfs4_state *state, const nfs4_stateid *stateid);
+ bool nfs4_copy_delegation_stateid(struct inode *inode, fmode_t flags, nfs4_stateid *dst, struct rpc_cred **cred);
+ bool nfs4_refresh_delegation_stateid(nfs4_stateid *dst, struct inode *inode);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 31ae3bd5d9d2..621e3cf90f4e 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -2113,12 +2113,10 @@ static int nfs4_handle_delegation_recall_error(struct nfs_server *server, struct
+ 		case -NFS4ERR_BAD_HIGH_SLOT:
+ 		case -NFS4ERR_CONN_NOT_BOUND_TO_SESSION:
+ 		case -NFS4ERR_DEADSESSION:
+-			set_bit(NFS_DELEGATED_STATE, &state->flags);
+ 			nfs4_schedule_session_recovery(server->nfs_client->cl_session, err);
+ 			return -EAGAIN;
+ 		case -NFS4ERR_STALE_CLIENTID:
+ 		case -NFS4ERR_STALE_STATEID:
+-			set_bit(NFS_DELEGATED_STATE, &state->flags);
+ 			/* Don't recall a delegation if it was lost */
+ 			nfs4_schedule_lease_recovery(server->nfs_client);
+ 			return -EAGAIN;
+@@ -2139,7 +2137,6 @@ static int nfs4_handle_delegation_recall_error(struct nfs_server *server, struct
+ 			return -EAGAIN;
+ 		case -NFS4ERR_DELAY:
+ 		case -NFS4ERR_GRACE:
+-			set_bit(NFS_DELEGATED_STATE, &state->flags);
+ 			ssleep(1);
+ 			return -EAGAIN;
+ 		case -ENOMEM:
+@@ -2155,8 +2152,7 @@ static int nfs4_handle_delegation_recall_error(struct nfs_server *server, struct
+ }
+ 
+ int nfs4_open_delegation_recall(struct nfs_open_context *ctx,
+-		struct nfs4_state *state, const nfs4_stateid *stateid,
+-		fmode_t type)
++		struct nfs4_state *state, const nfs4_stateid *stateid)
+ {
+ 	struct nfs_server *server = NFS_SERVER(state->inode);
+ 	struct nfs4_opendata *opendata;
+@@ -2167,20 +2163,23 @@ int nfs4_open_delegation_recall(struct nfs_open_context *ctx,
+ 	if (IS_ERR(opendata))
+ 		return PTR_ERR(opendata);
+ 	nfs4_stateid_copy(&opendata->o_arg.u.delegation, stateid);
+-	nfs_state_clear_delegation(state);
+-	switch (type & (FMODE_READ|FMODE_WRITE)) {
+-	case FMODE_READ|FMODE_WRITE:
+-	case FMODE_WRITE:
++	if (!test_bit(NFS_O_RDWR_STATE, &state->flags)) {
+ 		err = nfs4_open_recover_helper(opendata, FMODE_READ|FMODE_WRITE);
+ 		if (err)
+-			break;
++			goto out;
++	}
++	if (!test_bit(NFS_O_WRONLY_STATE, &state->flags)) {
+ 		err = nfs4_open_recover_helper(opendata, FMODE_WRITE);
+ 		if (err)
+-			break;
+-		/* Fall through */
+-	case FMODE_READ:
++			goto out;
++	}
++	if (!test_bit(NFS_O_RDONLY_STATE, &state->flags)) {
+ 		err = nfs4_open_recover_helper(opendata, FMODE_READ);
++		if (err)
++			goto out;
+ 	}
++	nfs_state_clear_delegation(state);
++out:
+ 	nfs4_opendata_put(opendata);
+ 	return nfs4_handle_delegation_recall_error(server, state, stateid, NULL, err);
+ }
+diff --git a/fs/pstore/inode.c b/fs/pstore/inode.c
+index 8cf2218b46a7..6f90d91a8733 100644
+--- a/fs/pstore/inode.c
++++ b/fs/pstore/inode.c
+@@ -330,10 +330,6 @@ int pstore_mkfile(struct dentry *root, struct pstore_record *record)
+ 		goto fail;
+ 	inode->i_mode = S_IFREG | 0444;
+ 	inode->i_fop = &pstore_file_operations;
+-	private = kzalloc(sizeof(*private), GFP_KERNEL);
+-	if (!private)
+-		goto fail_alloc;
+-	private->record = record;
+ 
+ 	switch (record->type) {
+ 	case PSTORE_TYPE_DMESG:
+@@ -383,12 +379,16 @@ int pstore_mkfile(struct dentry *root, struct pstore_record *record)
+ 		break;
+ 	}
+ 
++	private = kzalloc(sizeof(*private), GFP_KERNEL);
++	if (!private)
++		goto fail_inode;
++
+ 	dentry = d_alloc_name(root, name);
+ 	if (!dentry)
+ 		goto fail_private;
+ 
++	private->record = record;
+ 	inode->i_size = private->total_size = size;
+-
+ 	inode->i_private = private;
+ 
+ 	if (record->time.tv_sec)
+@@ -404,7 +404,7 @@ int pstore_mkfile(struct dentry *root, struct pstore_record *record)
+ 
+ fail_private:
+ 	free_pstore_private(private);
+-fail_alloc:
++fail_inode:
+ 	iput(inode);
+ 
+ fail:
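
The pstore fix above is the kernel's goto-unwind idiom: allocate as late as
possible and keep failure labels in reverse allocation order, so each path
frees exactly what exists. In miniature (sketch):

    a = kzalloc(sizeof(*a), GFP_KERNEL);
    if (!a)
            goto fail;
    b = kzalloc(sizeof(*b), GFP_KERNEL);
    if (!b)
            goto fail_a;            /* only 'a' exists at this point */
    return 0;                       /* ownership handed off to caller */

    fail_a:
            kfree(a);
    fail:
            return -ENOMEM;
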
+diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
+index f9c6e0e3aec7..fa117e11458a 100644
+--- a/include/drm/drm_device.h
++++ b/include/drm/drm_device.h
+@@ -174,7 +174,13 @@ struct drm_device {
+ 	 * races and imprecision over longer time periods, hence exposing a
+ 	 * hardware vblank counter is always recommended.
+ 	 *
+-	 * If non-zeor, &drm_crtc_funcs.get_vblank_counter must be set.
++	 * This is the statically configured device wide maximum. The driver
++	 * can instead choose to use a runtime configurable per-crtc value
++	 * &drm_vblank_crtc.max_vblank_count, in which case @max_vblank_count
++	 * must be left at zero. See drm_crtc_set_max_vblank_count() on how
++	 * to use the per-crtc value.
++	 *
++	 * If non-zero, &drm_crtc_funcs.get_vblank_counter must be set.
+ 	 */
+ 	u32 max_vblank_count;           /**< size of vblank counter register */
+ 
+diff --git a/include/drm/drm_vblank.h b/include/drm/drm_vblank.h
+index d25a9603ab57..e9c676381fd4 100644
+--- a/include/drm/drm_vblank.h
++++ b/include/drm/drm_vblank.h
+@@ -128,6 +128,26 @@ struct drm_vblank_crtc {
+ 	 * @last: Protected by &drm_device.vbl_lock, used for wraparound handling.
+ 	 */
+ 	u32 last;
++	/**
++	 * @max_vblank_count:
++	 *
++	 * Maximum value of the vblank registers for this crtc. This value +1
++	 * will result in a wrap-around of the vblank register. It is used
++	 * by the vblank core to handle wrap-arounds.
++	 *
++	 * If set to zero the vblank core will try to guess the elapsed vblanks
++	 * high-precision timestamps. That approach suffers from small
++	 * high-precision timestamps. That approach is suffering from small
++	 * races and imprecision over longer time periods, hence exposing a
++	 * hardware vblank counter is always recommended.
++	 *
++	 * This is the runtime configurable per-crtc maximum set through
++	 * drm_crtc_set_max_vblank_count(). If this is used the driver
++	 * must leave the device wide &drm_device.max_vblank_count at zero.
++	 *
++	 * If non-zero, &drm_crtc_funcs.get_vblank_counter must be set.
++	 */
++	u32 max_vblank_count;
+ 	/**
+ 	 * @inmodeset: Tracks whether the vblank is disabled due to a modeset.
+ 	 * For legacy driver bit 2 additionally tracks whether an additional
+@@ -206,4 +226,6 @@ bool drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev,
+ void drm_calc_timestamping_constants(struct drm_crtc *crtc,
+ 				     const struct drm_display_mode *mode);
+ wait_queue_head_t *drm_crtc_vblank_waitqueue(struct drm_crtc *crtc);
++void drm_crtc_set_max_vblank_count(struct drm_crtc *crtc,
++				   u32 max_vblank_count);
+ #endif
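
The per-crtc @max_vblank_count above feeds the vblank core's wrap-around
handling; the delta it enables is roughly (assuming, as the DRM core does,
that max_vblank_count is one less than a power of two so it can act as a
mask):

    u32 diff = (cur_vblank - vblank->last) & max_vblank_count;
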
+diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
+index bef2e36c01b4..91f9f95ad506 100644
+--- a/include/linux/device-mapper.h
++++ b/include/linux/device-mapper.h
+@@ -62,7 +62,8 @@ typedef int (*dm_clone_and_map_request_fn) (struct dm_target *ti,
+ 					    struct request *rq,
+ 					    union map_info *map_context,
+ 					    struct request **clone);
+-typedef void (*dm_release_clone_request_fn) (struct request *clone);
++typedef void (*dm_release_clone_request_fn) (struct request *clone,
++					     union map_info *map_context);
+ 
+ /*
+  * Returns:
+diff --git a/include/linux/gpio/consumer.h b/include/linux/gpio/consumer.h
+index acc4279ad5e3..412098b24f58 100644
+--- a/include/linux/gpio/consumer.h
++++ b/include/linux/gpio/consumer.h
+@@ -222,7 +222,7 @@ static inline void gpiod_put(struct gpio_desc *desc)
+ 	might_sleep();
+ 
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ }
+ 
+ static inline void gpiod_put_array(struct gpio_descs *descs)
+@@ -230,7 +230,7 @@ static inline void gpiod_put_array(struct gpio_descs *descs)
+ 	might_sleep();
+ 
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(descs);
+ }
+ 
+ static inline struct gpio_desc *__must_check
+@@ -283,7 +283,7 @@ static inline void devm_gpiod_put(struct device *dev, struct gpio_desc *desc)
+ 	might_sleep();
+ 
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ }
+ 
+ static inline void devm_gpiod_put_array(struct device *dev,
+@@ -292,32 +292,32 @@ static inline void devm_gpiod_put_array(struct device *dev,
+ 	might_sleep();
+ 
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(descs);
+ }
+ 
+ 
+ static inline int gpiod_get_direction(const struct gpio_desc *desc)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ 	return -ENOSYS;
+ }
+ static inline int gpiod_direction_input(struct gpio_desc *desc)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ 	return -ENOSYS;
+ }
+ static inline int gpiod_direction_output(struct gpio_desc *desc, int value)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ 	return -ENOSYS;
+ }
+ static inline int gpiod_direction_output_raw(struct gpio_desc *desc, int value)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ 	return -ENOSYS;
+ }
+ 
+@@ -325,7 +325,7 @@ static inline int gpiod_direction_output_raw(struct gpio_desc *desc, int value)
+ static inline int gpiod_get_value(const struct gpio_desc *desc)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ 	return 0;
+ }
+ static inline int gpiod_get_array_value(unsigned int array_size,
+@@ -333,25 +333,25 @@ static inline int gpiod_get_array_value(unsigned int array_size,
+ 					int *value_array)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc_array);
+ 	return 0;
+ }
+ static inline void gpiod_set_value(struct gpio_desc *desc, int value)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ }
+ static inline void gpiod_set_array_value(unsigned int array_size,
+ 					 struct gpio_desc **desc_array,
+ 					 int *value_array)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc_array);
+ }
+ static inline int gpiod_get_raw_value(const struct gpio_desc *desc)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ 	return 0;
+ }
+ static inline int gpiod_get_raw_array_value(unsigned int array_size,
+@@ -359,27 +359,27 @@ static inline int gpiod_get_raw_array_value(unsigned int array_size,
+ 					    int *value_array)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc_array);
+ 	return 0;
+ }
+ static inline void gpiod_set_raw_value(struct gpio_desc *desc, int value)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ }
+ static inline int gpiod_set_raw_array_value(unsigned int array_size,
+ 					     struct gpio_desc **desc_array,
+ 					     int *value_array)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc_array);
+ 	return 0;
+ }
+ 
+ static inline int gpiod_get_value_cansleep(const struct gpio_desc *desc)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ 	return 0;
+ }
+ static inline int gpiod_get_array_value_cansleep(unsigned int array_size,
+@@ -387,25 +387,25 @@ static inline int gpiod_get_array_value_cansleep(unsigned int array_size,
+ 				     int *value_array)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc_array);
+ 	return 0;
+ }
+ static inline void gpiod_set_value_cansleep(struct gpio_desc *desc, int value)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ }
+ static inline void gpiod_set_array_value_cansleep(unsigned int array_size,
+ 					    struct gpio_desc **desc_array,
+ 					    int *value_array)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc_array);
+ }
+ static inline int gpiod_get_raw_value_cansleep(const struct gpio_desc *desc)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ 	return 0;
+ }
+ static inline int gpiod_get_raw_array_value_cansleep(unsigned int array_size,
+@@ -413,55 +413,55 @@ static inline int gpiod_get_raw_array_value_cansleep(unsigned int array_size,
+ 					       int *value_array)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc_array);
+ 	return 0;
+ }
+ static inline void gpiod_set_raw_value_cansleep(struct gpio_desc *desc,
+ 						int value)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ }
+ static inline int gpiod_set_raw_array_value_cansleep(unsigned int array_size,
+ 						struct gpio_desc **desc_array,
+ 						int *value_array)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc_array);
+ 	return 0;
+ }
+ 
+ static inline int gpiod_set_debounce(struct gpio_desc *desc, unsigned debounce)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ 	return -ENOSYS;
+ }
+ 
+ static inline int gpiod_set_transitory(struct gpio_desc *desc, bool transitory)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ 	return -ENOSYS;
+ }
+ 
+ static inline int gpiod_is_active_low(const struct gpio_desc *desc)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ 	return 0;
+ }
+ static inline int gpiod_cansleep(const struct gpio_desc *desc)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ 	return 0;
+ }
+ 
+ static inline int gpiod_to_irq(const struct gpio_desc *desc)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ 	return -EINVAL;
+ }
+ 
+@@ -469,7 +469,7 @@ static inline int gpiod_set_consumer_name(struct gpio_desc *desc,
+ 					  const char *name)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ 	return -EINVAL;
+ }
+ 
+@@ -481,7 +481,7 @@ static inline struct gpio_desc *gpio_to_desc(unsigned gpio)
+ static inline int desc_to_gpio(const struct gpio_desc *desc)
+ {
+ 	/* GPIO can never have been requested */
+-	WARN_ON(1);
++	WARN_ON(desc);
+ 	return -EINVAL;
+ }
+ 
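
The gpiolib stub changes above replace WARN_ON(1) with WARN_ON(desc) so the
!CONFIG_GPIOLIB stubs stay silent for NULL descriptors, which is exactly
what the *_optional() getters legitimately return. Illustrative caller:

    struct gpio_desc *reset;

    reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
    if (IS_ERR(reset))
            return PTR_ERR(reset);
    /* with GPIOLIB off (or the line absent), reset may be NULL and
     * every gpiod_*() call below must then be a silent no-op */
    gpiod_set_value(reset, 1);
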
+diff --git a/include/media/cec.h b/include/media/cec.h
+index dc4b412e8fa1..59bf280e9715 100644
+--- a/include/media/cec.h
++++ b/include/media/cec.h
+@@ -333,67 +333,6 @@ void cec_queue_pin_5v_event(struct cec_adapter *adap, bool is_high, ktime_t ts);
+ u16 cec_get_edid_phys_addr(const u8 *edid, unsigned int size,
+ 			   unsigned int *offset);
+ 
+-/**
+- * cec_set_edid_phys_addr() - find and set the physical address
+- *
+- * @edid:	pointer to the EDID data
+- * @size:	size in bytes of the EDID data
+- * @phys_addr:	the new physical address
+- *
+- * This function finds the location of the physical address in the EDID
+- * and fills in the given physical address and updates the checksum
+- * at the end of the EDID block. It does nothing if the EDID doesn't
+- * contain a physical address.
+- */
+-void cec_set_edid_phys_addr(u8 *edid, unsigned int size, u16 phys_addr);
+-
+-/**
+- * cec_phys_addr_for_input() - calculate the PA for an input
+- *
+- * @phys_addr:	the physical address of the parent
+- * @input:	the number of the input port, must be between 1 and 15
+- *
+- * This function calculates a new physical address based on the input
+- * port number. For example:
+- *
+- * PA = 0.0.0.0 and input = 2 becomes 2.0.0.0
+- *
+- * PA = 3.0.0.0 and input = 1 becomes 3.1.0.0
+- *
+- * PA = 3.2.1.0 and input = 5 becomes 3.2.1.5
+- *
+- * PA = 3.2.1.3 and input = 5 becomes f.f.f.f since it maxed out the depth.
+- *
+- * Return: the new physical address or CEC_PHYS_ADDR_INVALID.
+- */
+-u16 cec_phys_addr_for_input(u16 phys_addr, u8 input);
+-
+-/**
+- * cec_phys_addr_validate() - validate a physical address from an EDID
+- *
+- * @phys_addr:	the physical address to validate
+- * @parent:	if not %NULL, then this is filled with the parents PA.
+- * @port:	if not %NULL, then this is filled with the input port.
+- *
+- * This validates a physical address as read from an EDID. If the
+- * PA is invalid (such as 1.0.1.0 since '0' is only allowed at the end),
+- * then it will return -EINVAL.
+- *
+- * The parent PA is passed into %parent and the input port is passed into
+- * %port. For example:
+- *
+- * PA = 0.0.0.0: has parent 0.0.0.0 and input port 0.
+- *
+- * PA = 1.0.0.0: has parent 0.0.0.0 and input port 1.
+- *
+- * PA = 3.2.0.0: has parent 3.0.0.0 and input port 2.
+- *
+- * PA = f.f.f.f: has parent f.f.f.f and input port 0.
+- *
+- * Return: 0 if the PA is valid, -EINVAL if not.
+- */
+-int cec_phys_addr_validate(u16 phys_addr, u16 *parent, u16 *port);
+-
+ #else
+ 
+ static inline int cec_register_adapter(struct cec_adapter *adap,
+@@ -428,25 +367,6 @@ static inline u16 cec_get_edid_phys_addr(const u8 *edid, unsigned int size,
+ 	return CEC_PHYS_ADDR_INVALID;
+ }
+ 
+-static inline void cec_set_edid_phys_addr(u8 *edid, unsigned int size,
+-					  u16 phys_addr)
+-{
+-}
+-
+-static inline u16 cec_phys_addr_for_input(u16 phys_addr, u8 input)
+-{
+-	return CEC_PHYS_ADDR_INVALID;
+-}
+-
+-static inline int cec_phys_addr_validate(u16 phys_addr, u16 *parent, u16 *port)
+-{
+-	if (parent)
+-		*parent = phys_addr;
+-	if (port)
+-		*port = 0;
+-	return 0;
+-}
+-
+ #endif
+ 
+ /**
+diff --git a/include/media/v4l2-dv-timings.h b/include/media/v4l2-dv-timings.h
+index 17cb27df1b81..4e7732d3908c 100644
+--- a/include/media/v4l2-dv-timings.h
++++ b/include/media/v4l2-dv-timings.h
+@@ -234,4 +234,10 @@ v4l2_hdmi_rx_colorimetry(const struct hdmi_avi_infoframe *avi,
+ 			 const struct hdmi_vendor_infoframe *hdmi,
+ 			 unsigned int height);
+ 
++u16 v4l2_get_edid_phys_addr(const u8 *edid, unsigned int size,
++			    unsigned int *offset);
++void v4l2_set_edid_phys_addr(u8 *edid, unsigned int size, u16 phys_addr);
++u16 v4l2_phys_addr_for_input(u16 phys_addr, u8 input);
++int v4l2_phys_addr_validate(u16 phys_addr, u16 *parent, u16 *port);
++
+ #endif
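
For reference, the input-port computation documented in the kerneldoc
removed above (now performed by v4l2_phys_addr_for_input()) can be sketched
as follows; this matches the documented examples but is not the verbatim
implementation:

    static u16 pa_for_input(u16 phys_addr, u8 input)
    {
            int shift;

            /* find the most significant zero nibble */
            for (shift = 12; shift >= 0; shift -= 4)
                    if (((phys_addr >> shift) & 0xf) == 0)
                            break;
            if (shift < 0)
                    return 0xffff;  /* depth exhausted: invalid PA */
            return phys_addr | ((u16)input << shift);
    }
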
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index 67e0a990144a..468deae5d603 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -6562,6 +6562,21 @@ int cfg80211_external_auth_request(struct net_device *netdev,
+ 				   struct cfg80211_external_auth_params *params,
+ 				   gfp_t gfp);
+ 
++/**
++ * cfg80211_iftype_allowed - check whether the interface can be allowed
++ * @wiphy: the wiphy
++ * @iftype: interface type
++ * @is_4addr: use_4addr flag, must be '0' when check_swif is '1'
++ * @check_swif: check iftype against software interfaces
++ *
++ * Check whether the interface is allowed to operate; additionally, this API
++ * can be used to check iftype against the software interfaces when
++ * check_swif is '1'.
++ */
++bool cfg80211_iftype_allowed(struct wiphy *wiphy, enum nl80211_iftype iftype,
++			     bool is_4addr, u8 check_swif);
++
++
+ /* Logging, debugging and troubleshooting/diagnostic helpers. */
+ 
+ /* wiphy_printk helpers, similar to dev_printk */
+diff --git a/include/uapi/linux/keyctl.h b/include/uapi/linux/keyctl.h
+index 7b8c9e19bad1..0f3cb13db8e9 100644
+--- a/include/uapi/linux/keyctl.h
++++ b/include/uapi/linux/keyctl.h
+@@ -65,7 +65,12 @@
+ 
+ /* keyctl structures */
+ struct keyctl_dh_params {
+-	__s32 private;
++	union {
++#ifndef __cplusplus
++		__s32 private;
++#endif
++		__s32 priv;
++	};
+ 	__s32 prime;
+ 	__s32 base;
+ };
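
The keyctl change above is a common UAPI technique: "private" is a C++
keyword, so the field is wrapped in an anonymous union with a C++-visible
alias at the same offset, preserving both the ABI and C source
compatibility. In general form (sketch):

    struct example_uapi {
            union {
    #ifndef __cplusplus
                    __s32 old_name; /* a keyword in C++, hidden there */
    #endif
                    __s32 new_name; /* same offset, same size */
            };
    };
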
+diff --git a/kernel/module.c b/kernel/module.c
+index 3fda10c549a2..0d86fc73d63d 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -76,14 +76,9 @@
+ 
+ /*
+  * Modules' sections will be aligned on page boundaries
+- * to ensure complete separation of code and data, but
+- * only when CONFIG_STRICT_MODULE_RWX=y
++ * to ensure complete separation of code and data
+  */
+-#ifdef CONFIG_STRICT_MODULE_RWX
+ # define debug_align(X) ALIGN(X, PAGE_SIZE)
+-#else
+-# define debug_align(X) (X)
+-#endif
+ 
+ /* If this is set, the section belongs in the init part of the module */
+ #define INIT_OFFSET_MASK (1UL << (BITS_PER_LONG-1))
+@@ -1699,6 +1694,8 @@ static int add_usage_links(struct module *mod)
+ 	return ret;
+ }
+ 
++static void module_remove_modinfo_attrs(struct module *mod, int end);
++
+ static int module_add_modinfo_attrs(struct module *mod)
+ {
+ 	struct module_attribute *attr;
+@@ -1713,24 +1710,34 @@ static int module_add_modinfo_attrs(struct module *mod)
+ 		return -ENOMEM;
+ 
+ 	temp_attr = mod->modinfo_attrs;
+-	for (i = 0; (attr = modinfo_attrs[i]) && !error; i++) {
++	for (i = 0; (attr = modinfo_attrs[i]); i++) {
+ 		if (!attr->test || attr->test(mod)) {
+ 			memcpy(temp_attr, attr, sizeof(*temp_attr));
+ 			sysfs_attr_init(&temp_attr->attr);
+ 			error = sysfs_create_file(&mod->mkobj.kobj,
+ 					&temp_attr->attr);
++			if (error)
++				goto error_out;
+ 			++temp_attr;
+ 		}
+ 	}
++
++	return 0;
++
++error_out:
++	if (i > 0)
++		module_remove_modinfo_attrs(mod, --i);
+ 	return error;
+ }
+ 
+-static void module_remove_modinfo_attrs(struct module *mod)
++static void module_remove_modinfo_attrs(struct module *mod, int end)
+ {
+ 	struct module_attribute *attr;
+ 	int i;
+ 
+ 	for (i = 0; (attr = &mod->modinfo_attrs[i]); i++) {
++		if (end >= 0 && i > end)
++			break;
+ 		/* pick a field to test for end of list */
+ 		if (!attr->attr.name)
+ 			break;
+@@ -1818,7 +1825,7 @@ static int mod_sysfs_setup(struct module *mod,
+ 	return 0;
+ 
+ out_unreg_modinfo_attrs:
+-	module_remove_modinfo_attrs(mod);
++	module_remove_modinfo_attrs(mod, -1);
+ out_unreg_param:
+ 	module_param_sysfs_remove(mod);
+ out_unreg_holders:
+@@ -1854,7 +1861,7 @@ static void mod_sysfs_fini(struct module *mod)
+ {
+ }
+ 
+-static void module_remove_modinfo_attrs(struct module *mod)
++static void module_remove_modinfo_attrs(struct module *mod, int end)
+ {
+ }
+ 
+@@ -1870,7 +1877,7 @@ static void init_param_lock(struct module *mod)
+ static void mod_sysfs_teardown(struct module *mod)
+ {
+ 	del_usage_links(mod);
+-	module_remove_modinfo_attrs(mod);
++	module_remove_modinfo_attrs(mod, -1);
+ 	module_param_sysfs_remove(mod);
+ 	kobject_put(mod->mkobj.drivers_dir);
+ 	kobject_put(mod->holders_dir);
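
module_add_modinfo_attrs() above now unwinds the attributes it already
created when sysfs_create_file() fails mid-loop. The general
create-all-or-roll-back shape (sketch; create_item()/destroy_item() are
placeholders):

    for (i = 0; i < n; i++) {
            err = create_item(i);
            if (err)
                    goto undo;
    }
    return 0;

    undo:
            while (--i >= 0)
                    destroy_item(i);        /* remove what was created */
            return err;
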
+diff --git a/kernel/resource.c b/kernel/resource.c
+index 30e1bc68503b..bce773cc5e41 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -318,24 +318,27 @@ int release_resource(struct resource *old)
+ 
+ EXPORT_SYMBOL(release_resource);
+ 
+-/*
+- * Finds the lowest iomem resource existing within [res->start.res->end).
+- * The caller must specify res->start, res->end, res->flags, and optionally
+- * desc.  If found, returns 0, res is overwritten, if not found, returns -1.
+- * This function walks the whole tree and not just first level children until
+- * and unless first_level_children_only is true.
++/**
++ * Finds the lowest iomem resource that covers part of [start..end].  The
++ * caller must specify start, end, flags, and desc (which may be
++ * IORES_DESC_NONE).
++ *
++ * If a resource is found, returns 0 and *res is overwritten with the part
++ * of the resource that's within [start..end]; if none is found, returns
++ * -ENODEV.  Returns -EINVAL for invalid parameters.
++ *
++ * This function walks the whole tree and not just first level children
++ * unless @first_level_children_only is true.
+  */
+-static int find_next_iomem_res(struct resource *res, unsigned long desc,
+-			       bool first_level_children_only)
++static int find_next_iomem_res(resource_size_t start, resource_size_t end,
++			       unsigned long flags, unsigned long desc,
++			       bool first_level_children_only,
++			       struct resource *res)
+ {
+-	resource_size_t start, end;
+ 	struct resource *p;
+ 	bool sibling_only = false;
+ 
+ 	BUG_ON(!res);
+-
+-	start = res->start;
+-	end = res->end;
+ 	BUG_ON(start >= end);
+ 
+ 	if (first_level_children_only)
+@@ -344,7 +347,7 @@ static int find_next_iomem_res(struct resource *res, unsigned long desc,
+ 	read_lock(&resource_lock);
+ 
+ 	for (p = iomem_resource.child; p; p = next_resource(p, sibling_only)) {
+-		if ((p->flags & res->flags) != res->flags)
++		if ((p->flags & flags) != flags)
+ 			continue;
+ 		if ((desc != IORES_DESC_NONE) && (desc != p->desc))
+ 			continue;
+@@ -352,39 +355,38 @@ static int find_next_iomem_res(struct resource *res, unsigned long desc,
+ 			p = NULL;
+ 			break;
+ 		}
+-		if ((p->end >= start) && (p->start < end))
++		if ((p->end >= start) && (p->start <= end))
+ 			break;
+ 	}
+ 
++	if (p) {
++		/* copy data */
++		res->start = max(start, p->start);
++		res->end = min(end, p->end);
++		res->flags = p->flags;
++		res->desc = p->desc;
++	}
++
+ 	read_unlock(&resource_lock);
+-	if (!p)
+-		return -1;
+-	/* copy data */
+-	if (res->start < p->start)
+-		res->start = p->start;
+-	if (res->end > p->end)
+-		res->end = p->end;
+-	res->flags = p->flags;
+-	res->desc = p->desc;
+-	return 0;
++	return p ? 0 : -ENODEV;
+ }
+ 
+-static int __walk_iomem_res_desc(struct resource *res, unsigned long desc,
+-				 bool first_level_children_only,
+-				 void *arg,
++static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
++				 unsigned long flags, unsigned long desc,
++				 bool first_level_children_only, void *arg,
+ 				 int (*func)(struct resource *, void *))
+ {
+-	u64 orig_end = res->end;
++	struct resource res;
+ 	int ret = -1;
+ 
+-	while ((res->start < res->end) &&
+-	       !find_next_iomem_res(res, desc, first_level_children_only)) {
+-		ret = (*func)(res, arg);
++	while (start < end &&
++	       !find_next_iomem_res(start, end, flags, desc,
++				    first_level_children_only, &res)) {
++		ret = (*func)(&res, arg);
+ 		if (ret)
+ 			break;
+ 
+-		res->start = res->end + 1;
+-		res->end = orig_end;
++		start = res.end + 1;
+ 	}
+ 
+ 	return ret;
+@@ -407,13 +409,7 @@ static int __walk_iomem_res_desc(struct resource *res, unsigned long desc,
+ int walk_iomem_res_desc(unsigned long desc, unsigned long flags, u64 start,
+ 		u64 end, void *arg, int (*func)(struct resource *, void *))
+ {
+-	struct resource res;
+-
+-	res.start = start;
+-	res.end = end;
+-	res.flags = flags;
+-
+-	return __walk_iomem_res_desc(&res, desc, false, arg, func);
++	return __walk_iomem_res_desc(start, end, flags, desc, false, arg, func);
+ }
+ EXPORT_SYMBOL_GPL(walk_iomem_res_desc);
+ 
+@@ -427,13 +423,9 @@ EXPORT_SYMBOL_GPL(walk_iomem_res_desc);
+ int walk_system_ram_res(u64 start, u64 end, void *arg,
+ 				int (*func)(struct resource *, void *))
+ {
+-	struct resource res;
++	unsigned long flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
+ 
+-	res.start = start;
+-	res.end = end;
+-	res.flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
+-
+-	return __walk_iomem_res_desc(&res, IORES_DESC_NONE, true,
++	return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, true,
+ 				     arg, func);
+ }
+ 
+@@ -444,13 +436,9 @@ int walk_system_ram_res(u64 start, u64 end, void *arg,
+ int walk_mem_res(u64 start, u64 end, void *arg,
+ 		 int (*func)(struct resource *, void *))
+ {
+-	struct resource res;
++	unsigned long flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+ 
+-	res.start = start;
+-	res.end = end;
+-	res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+-
+-	return __walk_iomem_res_desc(&res, IORES_DESC_NONE, true,
++	return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, true,
+ 				     arg, func);
+ }
+ 
+@@ -464,25 +452,25 @@ int walk_mem_res(u64 start, u64 end, void *arg,
+ int walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages,
+ 		void *arg, int (*func)(unsigned long, unsigned long, void *))
+ {
++	resource_size_t start, end;
++	unsigned long flags;
+ 	struct resource res;
+ 	unsigned long pfn, end_pfn;
+-	u64 orig_end;
+ 	int ret = -1;
+ 
+-	res.start = (u64) start_pfn << PAGE_SHIFT;
+-	res.end = ((u64)(start_pfn + nr_pages) << PAGE_SHIFT) - 1;
+-	res.flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
+-	orig_end = res.end;
+-	while ((res.start < res.end) &&
+-		(find_next_iomem_res(&res, IORES_DESC_NONE, true) >= 0)) {
++	start = (u64) start_pfn << PAGE_SHIFT;
++	end = ((u64)(start_pfn + nr_pages) << PAGE_SHIFT) - 1;
++	flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
++	while (start < end &&
++	       !find_next_iomem_res(start, end, flags, IORES_DESC_NONE,
++				    true, &res)) {
+ 		pfn = (res.start + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ 		end_pfn = (res.end + 1) >> PAGE_SHIFT;
+ 		if (end_pfn > pfn)
+ 			ret = (*func)(pfn, end_pfn - pfn, arg);
+ 		if (ret)
+ 			break;
+-		res.start = res.end + 1;
+-		res.end = orig_end;
++		start = res.end + 1;
+ 	}
+ 	return ret;
+ }
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 75f322603d44..49ed38914669 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4420,6 +4420,8 @@ static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec)
+ 	if (likely(cfs_rq->runtime_remaining > 0))
+ 		return;
+ 
++	if (cfs_rq->throttled)
++		return;
+ 	/*
+ 	 * if we're unable to extend our runtime we resched so that the active
+ 	 * hierarchy can be throttled
+@@ -4615,6 +4617,9 @@ static u64 distribute_cfs_runtime(struct cfs_bandwidth *cfs_b,
+ 		if (!cfs_rq_throttled(cfs_rq))
+ 			goto next;
+ 
++		/* By the above check, this should never be true */
++		SCHED_WARN_ON(cfs_rq->runtime_remaining > 0);
++
+ 		runtime = -cfs_rq->runtime_remaining + 1;
+ 		if (runtime > remaining)
+ 			runtime = remaining;
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index 443edcddac8a..c2708e1f0c69 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -823,7 +823,7 @@ ktime_t ktime_get_coarse_with_offset(enum tk_offsets offs)
+ 
+ 	} while (read_seqcount_retry(&tk_core.seq, seq));
+ 
+-	return base + nsecs;
++	return ktime_add_ns(base, nsecs);
+ }
+ EXPORT_SYMBOL_GPL(ktime_get_coarse_with_offset);
+ 
+diff --git a/mm/migrate.c b/mm/migrate.c
+index b2ea7d1e6f24..0c48191a9036 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -2328,16 +2328,13 @@ next:
+  */
+ static void migrate_vma_collect(struct migrate_vma *migrate)
+ {
+-	struct mm_walk mm_walk;
+-
+-	mm_walk.pmd_entry = migrate_vma_collect_pmd;
+-	mm_walk.pte_entry = NULL;
+-	mm_walk.pte_hole = migrate_vma_collect_hole;
+-	mm_walk.hugetlb_entry = NULL;
+-	mm_walk.test_walk = NULL;
+-	mm_walk.vma = migrate->vma;
+-	mm_walk.mm = migrate->vma->vm_mm;
+-	mm_walk.private = migrate;
++	struct mm_walk mm_walk = {
++		.pmd_entry = migrate_vma_collect_pmd,
++		.pte_hole = migrate_vma_collect_hole,
++		.vma = migrate->vma,
++		.mm = migrate->vma->vm_mm,
++		.private = migrate,
++	};
+ 
+ 	mmu_notifier_invalidate_range_start(mm_walk.mm,
+ 					    migrate->start,
+diff --git a/net/batman-adv/bat_iv_ogm.c b/net/batman-adv/bat_iv_ogm.c
+index 0b7b36fa0d5c..36f244125d24 100644
+--- a/net/batman-adv/bat_iv_ogm.c
++++ b/net/batman-adv/bat_iv_ogm.c
+@@ -463,17 +463,23 @@ static u8 batadv_hop_penalty(u8 tq, const struct batadv_priv *bat_priv)
+  * batadv_iv_ogm_aggr_packet() - checks if there is another OGM attached
+  * @buff_pos: current position in the skb
+  * @packet_len: total length of the skb
+- * @tvlv_len: tvlv length of the previously considered OGM
++ * @ogm_packet: potential OGM in buffer
+  *
+  * Return: true if there is enough space for another OGM, false otherwise.
+  */
+-static bool batadv_iv_ogm_aggr_packet(int buff_pos, int packet_len,
+-				      __be16 tvlv_len)
++static bool
++batadv_iv_ogm_aggr_packet(int buff_pos, int packet_len,
++			  const struct batadv_ogm_packet *ogm_packet)
+ {
+ 	int next_buff_pos = 0;
+ 
+-	next_buff_pos += buff_pos + BATADV_OGM_HLEN;
+-	next_buff_pos += ntohs(tvlv_len);
++	/* check if there is enough space for the header */
++	next_buff_pos += buff_pos + sizeof(*ogm_packet);
++	if (next_buff_pos > packet_len)
++		return false;
++
++	/* check if there is enough space for the optional TVLV */
++	next_buff_pos += ntohs(ogm_packet->tvlv_len);
+ 
+ 	return (next_buff_pos <= packet_len) &&
+ 	       (next_buff_pos <= BATADV_MAX_AGGREGATION_BYTES);
+@@ -501,7 +507,7 @@ static void batadv_iv_ogm_send_to_if(struct batadv_forw_packet *forw_packet,
+ 
+ 	/* adjust all flags and log packets */
+ 	while (batadv_iv_ogm_aggr_packet(buff_pos, forw_packet->packet_len,
+-					 batadv_ogm_packet->tvlv_len)) {
++					 batadv_ogm_packet)) {
+ 		/* we might have aggregated direct link packets with an
+ 		 * ordinary base packet
+ 		 */
+@@ -1852,7 +1858,7 @@ static int batadv_iv_ogm_receive(struct sk_buff *skb,
+ 
+ 	/* unpack the aggregated packets and process them one by one */
+ 	while (batadv_iv_ogm_aggr_packet(ogm_offset, skb_headlen(skb),
+-					 ogm_packet->tvlv_len)) {
++					 ogm_packet)) {
+ 		batadv_iv_ogm_process(skb, ogm_offset, if_incoming);
+ 
+ 		ogm_offset += BATADV_OGM_HLEN;
+diff --git a/net/batman-adv/netlink.c b/net/batman-adv/netlink.c
+index 0d9459b69bdb..c32820963b8e 100644
+--- a/net/batman-adv/netlink.c
++++ b/net/batman-adv/netlink.c
+@@ -118,7 +118,7 @@ batadv_netlink_get_ifindex(const struct nlmsghdr *nlh, int attrtype)
+ {
+ 	struct nlattr *attr = nlmsg_find_attr(nlh, GENL_HDRLEN, attrtype);
+ 
+-	return attr ? nla_get_u32(attr) : 0;
++	return (attr && nla_len(attr) == sizeof(u32)) ? nla_get_u32(attr) : 0;
+ }
+ 
+ /**
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index c59638574cf8..f101a6460b44 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -3527,9 +3527,7 @@ int ieee80211_check_combinations(struct ieee80211_sub_if_data *sdata,
+ 	}
+ 
+ 	/* Always allow software iftypes */
+-	if (local->hw.wiphy->software_iftypes & BIT(iftype) ||
+-	    (iftype == NL80211_IFTYPE_AP_VLAN &&
+-	     local->hw.wiphy->flags & WIPHY_FLAG_4ADDR_AP)) {
++	if (cfg80211_iftype_allowed(local->hw.wiphy, iftype, 0, 1)) {
+ 		if (radar_detect)
+ 			return -EINVAL;
+ 		return 0;
+@@ -3564,7 +3562,8 @@ int ieee80211_check_combinations(struct ieee80211_sub_if_data *sdata,
+ 
+ 		if (sdata_iter == sdata ||
+ 		    !ieee80211_sdata_running(sdata_iter) ||
+-		    local->hw.wiphy->software_iftypes & BIT(wdev_iter->iftype))
++		    cfg80211_iftype_allowed(local->hw.wiphy,
++					    wdev_iter->iftype, 0, 1))
+ 			continue;
+ 
+ 		params.iftype_num[wdev_iter->iftype]++;
+diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c
+index 9c7da811d130..98f193fd5315 100644
+--- a/net/vmw_vsock/hyperv_transport.c
++++ b/net/vmw_vsock/hyperv_transport.c
+@@ -320,6 +320,11 @@ static void hvs_close_connection(struct vmbus_channel *chan)
+ 	lock_sock(sk);
+ 	hvs_do_close_lock_held(vsock_sk(sk), true);
+ 	release_sock(sk);
++
++	/* Release the refcnt for the channel that's opened in
++	 * hvs_open_connection().
++	 */
++	sock_put(sk);
+ }
+ 
+ static void hvs_open_connection(struct vmbus_channel *chan)
+@@ -388,6 +393,9 @@ static void hvs_open_connection(struct vmbus_channel *chan)
+ 	}
+ 
+ 	set_per_channel_state(chan, conn_from_host ? new : sk);
++
++	/* This reference will be dropped by hvs_close_connection(). */
++	sock_hold(conn_from_host ? new : sk);
+ 	vmbus_set_chn_rescind_callback(chan, hvs_close_connection);
+ 
+ 	/* Set the pending send size to max packet size to always get
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index 2a46ec3cb72c..68660781aa51 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -1335,10 +1335,8 @@ static int cfg80211_netdev_notifier_call(struct notifier_block *nb,
+ 		}
+ 		break;
+ 	case NETDEV_PRE_UP:
+-		if (!(wdev->wiphy->interface_modes & BIT(wdev->iftype)) &&
+-		    !(wdev->iftype == NL80211_IFTYPE_AP_VLAN &&
+-		      rdev->wiphy.flags & WIPHY_FLAG_4ADDR_AP &&
+-		      wdev->use_4addr))
++		if (!cfg80211_iftype_allowed(wdev->wiphy, wdev->iftype,
++					     wdev->use_4addr, 0))
+ 			return notifier_from_errno(-EOPNOTSUPP);
+ 
+ 		if (rfkill_blocked(rdev->rfkill))
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 8e2f03ab4cc9..2a85bff6a8f3 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -3210,9 +3210,7 @@ static int nl80211_new_interface(struct sk_buff *skb, struct genl_info *info)
+ 			return err;
+ 	}
+ 
+-	if (!(rdev->wiphy.interface_modes & (1 << type)) &&
+-	    !(type == NL80211_IFTYPE_AP_VLAN && params.use_4addr &&
+-	      rdev->wiphy.flags & WIPHY_FLAG_4ADDR_AP))
++	if (!cfg80211_iftype_allowed(&rdev->wiphy, type, params.use_4addr, 0))
+ 		return -EOPNOTSUPP;
+ 
+ 	err = nl80211_parse_mon_options(rdev, type, info, &params);
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index d57e2f679a3e..c14e8f6e5e19 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -1670,7 +1670,7 @@ int cfg80211_iter_combinations(struct wiphy *wiphy,
+ 	for (iftype = 0; iftype < NUM_NL80211_IFTYPES; iftype++) {
+ 		num_interfaces += params->iftype_num[iftype];
+ 		if (params->iftype_num[iftype] > 0 &&
+-		    !(wiphy->software_iftypes & BIT(iftype)))
++		    !cfg80211_iftype_allowed(wiphy, iftype, 0, 1))
+ 			used_iftypes |= BIT(iftype);
+ 	}
+ 
+@@ -1692,7 +1692,7 @@ int cfg80211_iter_combinations(struct wiphy *wiphy,
+ 			return -ENOMEM;
+ 
+ 		for (iftype = 0; iftype < NUM_NL80211_IFTYPES; iftype++) {
+-			if (wiphy->software_iftypes & BIT(iftype))
++			if (cfg80211_iftype_allowed(wiphy, iftype, 0, 1))
+ 				continue;
+ 			for (j = 0; j < c->n_limits; j++) {
+ 				all_iftypes |= limits[j].types;
+@@ -1895,3 +1895,26 @@ EXPORT_SYMBOL(rfc1042_header);
+ const unsigned char bridge_tunnel_header[] __aligned(2) =
+ 	{ 0xaa, 0xaa, 0x03, 0x00, 0x00, 0xf8 };
+ EXPORT_SYMBOL(bridge_tunnel_header);
++
++bool cfg80211_iftype_allowed(struct wiphy *wiphy, enum nl80211_iftype iftype,
++			     bool is_4addr, u8 check_swif)
++
++{
++	bool is_vlan = iftype == NL80211_IFTYPE_AP_VLAN;
++
++	switch (check_swif) {
++	case 0:
++		if (is_vlan && is_4addr)
++			return wiphy->flags & WIPHY_FLAG_4ADDR_AP;
++		return wiphy->interface_modes & BIT(iftype);
++	case 1:
++		if (!(wiphy->software_iftypes & BIT(iftype)) && is_vlan)
++			return wiphy->flags & WIPHY_FLAG_4ADDR_AP;
++		return wiphy->software_iftypes & BIT(iftype);
++	default:
++		break;
++	}
++
++	return false;
++}
++EXPORT_SYMBOL(cfg80211_iftype_allowed);
+diff --git a/scripts/decode_stacktrace.sh b/scripts/decode_stacktrace.sh
+index c4a9ddb174bc..5aa75a0a1ced 100755
+--- a/scripts/decode_stacktrace.sh
++++ b/scripts/decode_stacktrace.sh
+@@ -78,7 +78,7 @@ parse_symbol() {
+ 	fi
+ 
+ 	# Strip out the base of the path
+-	code=${code//^$basepath/""}
++	code=${code#$basepath/}
+ 
+ 	# In the case of inlines, move everything to same line
+ 	code=${code//$'\n'/' '}
+diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
+index 088ea2ac8570..612f737cee83 100644
+--- a/security/apparmor/policy_unpack.c
++++ b/security/apparmor/policy_unpack.c
+@@ -223,16 +223,21 @@ static void *kvmemdup(const void *src, size_t len)
+ static size_t unpack_u16_chunk(struct aa_ext *e, char **chunk)
+ {
+ 	size_t size = 0;
++	void *pos = e->pos;
+ 
+ 	if (!inbounds(e, sizeof(u16)))
+-		return 0;
++		goto fail;
+ 	size = le16_to_cpu(get_unaligned((__le16 *) e->pos));
+ 	e->pos += sizeof(__le16);
+ 	if (!inbounds(e, size))
+-		return 0;
++		goto fail;
+ 	*chunk = e->pos;
+ 	e->pos += size;
+ 	return size;
++
++fail:
++	e->pos = pos;
++	return 0;
+ }
+ 
+ /* unpack control byte */
+@@ -294,49 +299,66 @@ fail:
+ 
+ static bool unpack_u32(struct aa_ext *e, u32 *data, const char *name)
+ {
++	void *pos = e->pos;
++
+ 	if (unpack_nameX(e, AA_U32, name)) {
+ 		if (!inbounds(e, sizeof(u32)))
+-			return 0;
++			goto fail;
+ 		if (data)
+ 			*data = le32_to_cpu(get_unaligned((__le32 *) e->pos));
+ 		e->pos += sizeof(u32);
+ 		return 1;
+ 	}
++
++fail:
++	e->pos = pos;
+ 	return 0;
+ }
+ 
+ static bool unpack_u64(struct aa_ext *e, u64 *data, const char *name)
+ {
++	void *pos = e->pos;
++
+ 	if (unpack_nameX(e, AA_U64, name)) {
+ 		if (!inbounds(e, sizeof(u64)))
+-			return 0;
++			goto fail;
+ 		if (data)
+ 			*data = le64_to_cpu(get_unaligned((__le64 *) e->pos));
+ 		e->pos += sizeof(u64);
+ 		return 1;
+ 	}
++
++fail:
++	e->pos = pos;
+ 	return 0;
+ }
+ 
+ static size_t unpack_array(struct aa_ext *e, const char *name)
+ {
++	void *pos = e->pos;
++
+ 	if (unpack_nameX(e, AA_ARRAY, name)) {
+ 		int size;
+ 		if (!inbounds(e, sizeof(u16)))
+-			return 0;
++			goto fail;
+ 		size = (int)le16_to_cpu(get_unaligned((__le16 *) e->pos));
+ 		e->pos += sizeof(u16);
+ 		return size;
+ 	}
++
++fail:
++	e->pos = pos;
+ 	return 0;
+ }
+ 
+ static size_t unpack_blob(struct aa_ext *e, char **blob, const char *name)
+ {
++	void *pos = e->pos;
++
+ 	if (unpack_nameX(e, AA_BLOB, name)) {
+ 		u32 size;
+ 		if (!inbounds(e, sizeof(u32)))
+-			return 0;
++			goto fail;
+ 		size = le32_to_cpu(get_unaligned((__le32 *) e->pos));
+ 		e->pos += sizeof(u32);
+ 		if (inbounds(e, (size_t) size)) {
+@@ -345,6 +367,9 @@ static size_t unpack_blob(struct aa_ext *e, char **blob, const char *name)
+ 			return size;
+ 		}
+ 	}
++
++fail:
++	e->pos = pos;
+ 	return 0;
+ }
+ 
+@@ -361,9 +386,10 @@ static int unpack_str(struct aa_ext *e, const char **string, const char *name)
+ 			if (src_str[size - 1] != 0)
+ 				goto fail;
+ 			*string = src_str;
++
++			return size;
+ 		}
+ 	}
+-	return size;
+ 
+ fail:
+ 	e->pos = pos;
+diff --git a/sound/pci/hda/hda_auto_parser.c b/sound/pci/hda/hda_auto_parser.c
+index b9a6b66aeb0e..d8ba3a6d5042 100644
+--- a/sound/pci/hda/hda_auto_parser.c
++++ b/sound/pci/hda/hda_auto_parser.c
+@@ -828,6 +828,8 @@ static void apply_fixup(struct hda_codec *codec, int id, int action, int depth)
+ 	while (id >= 0) {
+ 		const struct hda_fixup *fix = codec->fixup_list + id;
+ 
++		if (++depth > 10)
++			break;
+ 		if (fix->chained_before)
+ 			apply_fixup(codec, fix->chain_id, action, depth + 1);
+ 
+@@ -867,8 +869,6 @@ static void apply_fixup(struct hda_codec *codec, int id, int action, int depth)
+ 		}
+ 		if (!fix->chained || fix->chained_before)
+ 			break;
+-		if (++depth > 10)
+-			break;
+ 		id = fix->chain_id;
+ 	}
+ }
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index a6233775e779..82b0dc9f528f 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -2947,15 +2947,19 @@ static int hda_codec_runtime_resume(struct device *dev)
+ #ifdef CONFIG_PM_SLEEP
+ static int hda_codec_force_resume(struct device *dev)
+ {
++	struct hda_codec *codec = dev_to_hda_codec(dev);
++	bool forced_resume = !codec->relaxed_resume;
+ 	int ret;
+ 
+ 	/* The get/put pair below enforces the runtime resume even if the
+ 	 * device hasn't been used at suspend time.  This trick is needed to
+ 	 * update the jack state change during the sleep.
+ 	 */
+-	pm_runtime_get_noresume(dev);
++	if (forced_resume)
++		pm_runtime_get_noresume(dev);
+ 	ret = pm_runtime_force_resume(dev);
+-	pm_runtime_put(dev);
++	if (forced_resume)
++		pm_runtime_put(dev);
+ 	return ret;
+ }
+ 
+diff --git a/sound/pci/hda/hda_codec.h b/sound/pci/hda/hda_codec.h
+index acacc1900265..2003403ce1c8 100644
+--- a/sound/pci/hda/hda_codec.h
++++ b/sound/pci/hda/hda_codec.h
+@@ -261,6 +261,8 @@ struct hda_codec {
+ 	unsigned int auto_runtime_pm:1; /* enable automatic codec runtime pm */
+ 	unsigned int force_pin_prefix:1; /* Add location prefix */
+ 	unsigned int link_down_at_suspend:1; /* link down at runtime suspend */
++	unsigned int relaxed_resume:1;	/* don't resume forcibly for jack */
++
+ #ifdef CONFIG_PM
+ 	unsigned long power_on_acct;
+ 	unsigned long power_off_acct;
+diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
+index bb2bd33b00ec..2609161707a4 100644
+--- a/sound/pci/hda/hda_generic.c
++++ b/sound/pci/hda/hda_generic.c
+@@ -5991,7 +5991,8 @@ int snd_hda_gen_init(struct hda_codec *codec)
+ 	if (spec->init_hook)
+ 		spec->init_hook(codec);
+ 
+-	snd_hda_apply_verbs(codec);
++	if (!spec->skip_verbs)
++		snd_hda_apply_verbs(codec);
+ 
+ 	init_multi_out(codec);
+ 	init_extra_out(codec);
+diff --git a/sound/pci/hda/hda_generic.h b/sound/pci/hda/hda_generic.h
+index ce9c293717b9..8933c0f64cc4 100644
+--- a/sound/pci/hda/hda_generic.h
++++ b/sound/pci/hda/hda_generic.h
+@@ -247,6 +247,7 @@ struct hda_gen_spec {
+ 	unsigned int indep_hp_enabled:1; /* independent HP enabled */
+ 	unsigned int have_aamix_ctl:1;
+ 	unsigned int hp_mic_jack_modes:1;
++	unsigned int skip_verbs:1; /* don't apply verbs at snd_hda_gen_init() */
+ 
+ 	/* additional mute flags (only effective with auto_mute_via_amp=1) */
+ 	u64 mute_bits;
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 7a3e34b120b3..c3e3d80ff720 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -329,13 +329,11 @@ enum {
+ 
+ #define AZX_DCAPS_INTEL_SKYLAKE \
+ 	(AZX_DCAPS_INTEL_PCH_BASE | AZX_DCAPS_PM_RUNTIME |\
++	 AZX_DCAPS_SYNC_WRITE |\
+ 	 AZX_DCAPS_SEPARATE_STREAM_TAG | AZX_DCAPS_I915_COMPONENT |\
+ 	 AZX_DCAPS_I915_POWERWELL)
+ 
+-#define AZX_DCAPS_INTEL_BROXTON \
+-	(AZX_DCAPS_INTEL_PCH_BASE | AZX_DCAPS_PM_RUNTIME |\
+-	 AZX_DCAPS_SEPARATE_STREAM_TAG | AZX_DCAPS_I915_COMPONENT |\
+-	 AZX_DCAPS_I915_POWERWELL)
++#define AZX_DCAPS_INTEL_BROXTON			AZX_DCAPS_INTEL_SKYLAKE
+ 
+ /* quirks for ATI SB / AMD Hudson */
+ #define AZX_DCAPS_PRESET_ATI_SB \
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 35931a18418f..e4fbfb5557ab 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -2293,8 +2293,10 @@ static void generic_hdmi_free(struct hda_codec *codec)
+ 	struct hdmi_spec *spec = codec->spec;
+ 	int pin_idx, pcm_idx;
+ 
+-	if (codec_has_acomp(codec))
++	if (codec_has_acomp(codec)) {
+ 		snd_hdac_acomp_register_notifier(&codec->bus->core, NULL);
++		codec->relaxed_resume = 0;
++	}
+ 
+ 	for (pin_idx = 0; pin_idx < spec->num_pins; pin_idx++) {
+ 		struct hdmi_spec_per_pin *per_pin = get_pin(spec, pin_idx);
+@@ -2550,6 +2552,8 @@ static void register_i915_notifier(struct hda_codec *codec)
+ 	spec->drm_audio_ops.pin_eld_notify = intel_pin_eld_notify;
+ 	snd_hdac_acomp_register_notifier(&codec->bus->core,
+ 					&spec->drm_audio_ops);
++	/* no need for forcible resume for jack check thanks to notifier */
++	codec->relaxed_resume = 1;
+ }
+ 
+ /* setup_stream ops override for HSW+ */
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 9b5caf099bfb..7f74ebee8c2d 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -836,9 +836,11 @@ static int alc_init(struct hda_codec *codec)
+ 	if (spec->init_hook)
+ 		spec->init_hook(codec);
+ 
++	spec->gen.skip_verbs = 1; /* applied in below */
+ 	snd_hda_gen_init(codec);
+ 	alc_fix_pll(codec);
+ 	alc_auto_init_amp(codec, spec->init_amp);
++	snd_hda_apply_verbs(codec); /* apply verbs here after own init */
+ 
+ 	snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_INIT);
+ 
+@@ -5673,6 +5675,7 @@ enum {
+ 	ALC286_FIXUP_ACER_AIO_HEADSET_MIC,
+ 	ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
+ 	ALC299_FIXUP_PREDATOR_SPK,
++	ALC294_FIXUP_ASUS_INTSPK_HEADSET_MIC,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -6701,6 +6704,16 @@ static const struct hda_fixup alc269_fixups[] = {
+ 			{ }
+ 		}
+ 	},
++	[ALC294_FIXUP_ASUS_INTSPK_HEADSET_MIC] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x14, 0x411111f0 }, /* disable confusing internal speaker */
++			{ 0x19, 0x04a11150 }, /* use as headset mic, without its own jack detect */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -6843,6 +6856,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x82c0, "HP G3 mini premium", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
++	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ 	SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -6859,6 +6873,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+ 	SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
+ 	SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
++	SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_INTSPK_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
+ 	SND_PCI_QUIRK(0x1043, 0x1a30, "ASUS X705UD", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC),
+@@ -6936,6 +6951,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x312a, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ 	SND_PCI_QUIRK(0x17aa, 0x312f, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ 	SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
++	SND_PCI_QUIRK(0x17aa, 0x3151, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+@@ -8798,6 +8814,7 @@ static int patch_alc680(struct hda_codec *codec)
+ static const struct hda_device_id snd_hda_id_realtek[] = {
+ 	HDA_CODEC_ENTRY(0x10ec0215, "ALC215", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0221, "ALC221", patch_alc269),
++	HDA_CODEC_ENTRY(0x10ec0222, "ALC222", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0225, "ALC225", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0231, "ALC231", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0233, "ALC233", patch_alc269),
+diff --git a/tools/testing/selftests/net/fib_rule_tests.sh b/tools/testing/selftests/net/fib_rule_tests.sh
+index 1ba069967fa2..ba2d9fab28d0 100755
+--- a/tools/testing/selftests/net/fib_rule_tests.sh
++++ b/tools/testing/selftests/net/fib_rule_tests.sh
+@@ -15,6 +15,7 @@ GW_IP6=2001:db8:1::2
+ SRC_IP6=2001:db8:1::3
+ 
+ DEV_ADDR=192.51.100.1
++DEV_ADDR6=2001:db8:1::1
+ DEV=dummy0
+ 
+ log_test()
+@@ -55,8 +56,8 @@ setup()
+ 
+ 	$IP link add dummy0 type dummy
+ 	$IP link set dev dummy0 up
+-	$IP address add 192.51.100.1/24 dev dummy0
+-	$IP -6 address add 2001:db8:1::1/64 dev dummy0
++	$IP address add $DEV_ADDR/24 dev dummy0
++	$IP -6 address add $DEV_ADDR6/64 dev dummy0
+ 
+ 	set +e
+ }
+diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
+index b20b751286fc..757a17f5ebde 100644
+--- a/virt/kvm/eventfd.c
++++ b/virt/kvm/eventfd.c
+@@ -44,6 +44,12 @@
+ 
+ static struct workqueue_struct *irqfd_cleanup_wq;
+ 
++bool __attribute__((weak))
++kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args)
++{
++	return true;
++}
++
+ static void
+ irqfd_inject(struct work_struct *work)
+ {
+@@ -297,6 +303,9 @@ kvm_irqfd_assign(struct kvm *kvm, struct kvm_irqfd *args)
+ 	if (!kvm_arch_intc_initialized(kvm))
+ 		return -EAGAIN;
+ 
++	if (!kvm_arch_irqfd_allowed(kvm, args))
++		return -EINVAL;
++
+ 	irqfd = kzalloc(sizeof(*irqfd), GFP_KERNEL);
+ 	if (!irqfd)
+ 		return -ENOMEM;

