From: "Alice Ferrazzi" <alicef@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:5.8 commit in: /
Date: Fri, 21 Aug 2020 11:41:31 +0000 (UTC)
Message-ID: <1598010077.bf645074ab68cde774b9613eb74462942e461c33.alicef@gentoo>

commit:     bf645074ab68cde774b9613eb74462942e461c33
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 21 11:41:11 2020 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Aug 21 11:41:17 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bf645074

Linux patch 5.8.3

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README            |    4 +
 1002_linux-5.8.3.patch | 8143 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 8147 insertions(+)

diff --git a/0000_README b/0000_README
index 6e28c94..bacfc9f 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-5.8.2.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.8.2
 
+Patch:  1002_linux-5.8.3.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.8.3
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1002_linux-5.8.3.patch b/1002_linux-5.8.3.patch
new file mode 100644
index 0000000..f212dd8
--- /dev/null
+++ b/1002_linux-5.8.3.patch
@@ -0,0 +1,8143 @@
+diff --git a/Documentation/admin-guide/hw-vuln/multihit.rst b/Documentation/admin-guide/hw-vuln/multihit.rst
+index ba9988d8bce50..140e4cec38c33 100644
+--- a/Documentation/admin-guide/hw-vuln/multihit.rst
++++ b/Documentation/admin-guide/hw-vuln/multihit.rst
+@@ -80,6 +80,10 @@ The possible values in this file are:
+        - The processor is not vulnerable.
+      * - KVM: Mitigation: Split huge pages
+        - Software changes mitigate this issue.
++     * - KVM: Mitigation: VMX unsupported
++       - KVM is not vulnerable because Virtual Machine Extensions (VMX) is not supported.
++     * - KVM: Mitigation: VMX disabled
++       - KVM is not vulnerable because Virtual Machine Extensions (VMX) is disabled.
+      * - KVM: Vulnerable
+        - The processor is vulnerable, but no mitigation enabled
+ 
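
For reference, the new status strings above surface through sysfs and can be read directly; a tiny user-space sketch (the vulnerabilities path is the standard sysfs location, not something added by this patch):

#include <stdio.h>

/* Print the itlb_multihit status string documented above. */
int main(void)
{
	FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/itlb_multihit", "r");
	char line[128];

	if (!f)
		return 1;
	if (fgets(line, sizeof(line), f))
		fputs(line, stdout);	/* e.g. "KVM: Mitigation: VMX disabled" */
	fclose(f);
	return 0;
}
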
+diff --git a/Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt b/Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt
+index c82794002595f..89647d7143879 100644
+--- a/Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt
++++ b/Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt
+@@ -21,7 +21,7 @@ controller state. The mux controller state is described in
+ 
+ Example:
+ 	mux: mux-controller {
+-		compatible = "mux-gpio";
++		compatible = "gpio-mux";
+ 		#mux-control-cells = <0>;
+ 
+ 		mux-gpios = <&pioA 0 GPIO_ACTIVE_HIGH>,
+diff --git a/Makefile b/Makefile
+index 6940f82a15cc1..6001ed2b14c3a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-idp.dts b/arch/arm64/boot/dts/qcom/sc7180-idp.dts
+index 4e9149d82d09e..17624d6440df5 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-idp.dts
++++ b/arch/arm64/boot/dts/qcom/sc7180-idp.dts
+@@ -312,7 +312,7 @@
+ &remoteproc_mpss {
+ 	status = "okay";
+ 	compatible = "qcom,sc7180-mss-pil";
+-	iommus = <&apps_smmu 0x460 0x1>, <&apps_smmu 0x444 0x3>;
++	iommus = <&apps_smmu 0x461 0x0>, <&apps_smmu 0x444 0x3>;
+ 	memory-region = <&mba_mem &mpss_mem>;
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-cheza.dtsi b/arch/arm64/boot/dts/qcom/sdm845-cheza.dtsi
+index 70466cc4b4055..64fc1bfd66fad 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-cheza.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845-cheza.dtsi
+@@ -634,7 +634,7 @@ ap_ts_i2c: &i2c14 {
+ };
+ 
+ &mss_pil {
+-	iommus = <&apps_smmu 0x780 0x1>,
++	iommus = <&apps_smmu 0x781 0x0>,
+ 		 <&apps_smmu 0x724 0x3>;
+ };
+ 
+diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
+index 4d7879484cecc..581602413a130 100644
+--- a/arch/arm64/kernel/perf_event.c
++++ b/arch/arm64/kernel/perf_event.c
+@@ -155,7 +155,7 @@ armv8pmu_events_sysfs_show(struct device *dev,
+ 
+ 	pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr);
+ 
+-	return sprintf(page, "event=0x%03llx\n", pmu_attr->id);
++	return sprintf(page, "event=0x%04llx\n", pmu_attr->id);
+ }
+ 
+ #define ARMV8_EVENT_ATTR(name, config)						\
+@@ -244,10 +244,13 @@ armv8pmu_event_attr_is_visible(struct kobject *kobj,
+ 	    test_bit(pmu_attr->id, cpu_pmu->pmceid_bitmap))
+ 		return attr->mode;
+ 
+-	pmu_attr->id -= ARMV8_PMUV3_EXT_COMMON_EVENT_BASE;
+-	if (pmu_attr->id < ARMV8_PMUV3_MAX_COMMON_EVENTS &&
+-	    test_bit(pmu_attr->id, cpu_pmu->pmceid_ext_bitmap))
+-		return attr->mode;
++	if (pmu_attr->id >= ARMV8_PMUV3_EXT_COMMON_EVENT_BASE) {
++		u64 id = pmu_attr->id - ARMV8_PMUV3_EXT_COMMON_EVENT_BASE;
++
++		if (id < ARMV8_PMUV3_MAX_COMMON_EVENTS &&
++		    test_bit(id, cpu_pmu->pmceid_ext_bitmap))
++			return attr->mode;
++	}
+ 
+ 	return 0;
+ }
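
The hunk above fixes two problems at once: the old code destructively modified pmu_attr->id, and for ids below ARMV8_PMUV3_EXT_COMMON_EVENT_BASE the unconditional subtraction wrapped around, feeding a huge index to test_bit(). A standalone sketch of the wraparound hazard (the base value here is assumed for illustration):

#include <stdint.h>
#include <stdio.h>

#define EXT_COMMON_EVENT_BASE 0x4000ULL	/* assumed value, for illustration */

int main(void)
{
	uint64_t id = 0x11;	/* an event id well below the base */

	/* Unconditional subtraction wraps on unsigned types... */
	printf("wrapped index: %#llx\n",
	       (unsigned long long)(id - EXT_COMMON_EVENT_BASE));

	/* ...so range-check first, as the fixed code does, and keep the
	 * original id intact by working on a local copy. */
	if (id >= EXT_COMMON_EVENT_BASE)
		printf("index: %#llx\n",
		       (unsigned long long)(id - EXT_COMMON_EVENT_BASE));
	else
		printf("id below base, not an extended common event\n");
	return 0;
}
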
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 6fee1a133e9d6..a7e40bb1e5bc6 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -678,6 +678,7 @@ config SGI_IP27
+ 	select SYS_SUPPORTS_NUMA
+ 	select SYS_SUPPORTS_SMP
+ 	select MIPS_L1_CACHE_SHIFT_7
++	select NUMA
+ 	help
+ 	  These are the SGI Origin 200, Origin 2000 and Onyx 2 Graphics
+ 	  workstations.  To compile a Linux kernel that runs on these, say Y
+diff --git a/arch/mips/boot/dts/ingenic/qi_lb60.dts b/arch/mips/boot/dts/ingenic/qi_lb60.dts
+index 7a371d9c5a33f..eda37fb516f0e 100644
+--- a/arch/mips/boot/dts/ingenic/qi_lb60.dts
++++ b/arch/mips/boot/dts/ingenic/qi_lb60.dts
+@@ -69,7 +69,7 @@
+ 			"Speaker", "OUTL",
+ 			"Speaker", "OUTR",
+ 			"INL", "LOUT",
+-			"INL", "ROUT";
++			"INR", "ROUT";
+ 
+ 		simple-audio-card,aux-devs = <&amp>;
+ 
+diff --git a/arch/mips/kernel/topology.c b/arch/mips/kernel/topology.c
+index cd3e1f82e1a5d..08ad6371fbe08 100644
+--- a/arch/mips/kernel/topology.c
++++ b/arch/mips/kernel/topology.c
+@@ -20,7 +20,7 @@ static int __init topology_init(void)
+ 	for_each_present_cpu(i) {
+ 		struct cpu *c = &per_cpu(cpu_devices, i);
+ 
+-		c->hotpluggable = 1;
++		c->hotpluggable = !!i;
+ 		ret = register_cpu(c, i);
+ 		if (ret)
+ 			printk(KERN_WARNING "topology_init: register_cpu %d "
+diff --git a/arch/openrisc/kernel/stacktrace.c b/arch/openrisc/kernel/stacktrace.c
+index 43f140a28bc72..54d38809e22cb 100644
+--- a/arch/openrisc/kernel/stacktrace.c
++++ b/arch/openrisc/kernel/stacktrace.c
+@@ -13,6 +13,7 @@
+ #include <linux/export.h>
+ #include <linux/sched.h>
+ #include <linux/sched/debug.h>
++#include <linux/sched/task_stack.h>
+ #include <linux/stacktrace.h>
+ 
+ #include <asm/processor.h>
+@@ -68,12 +69,25 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
+ {
+ 	unsigned long *sp = NULL;
+ 
++	if (!try_get_task_stack(tsk))
++		return;
++
+ 	if (tsk == current)
+ 		sp = (unsigned long *) &sp;
+-	else
+-		sp = (unsigned long *) KSTK_ESP(tsk);
++	else {
++		unsigned long ksp;
++
++		/* Locate stack from kernel context */
++		ksp = task_thread_info(tsk)->ksp;
++		ksp += STACK_FRAME_OVERHEAD;	/* redzone */
++		ksp += sizeof(struct pt_regs);
++
++		sp = (unsigned long *) ksp;
++	}
+ 
+ 	unwind_stack(trace, sp, save_stack_address_nosched);
++
++	put_task_stack(tsk);
+ }
+ EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
+ 
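
Beyond locating the saved stack pointer, the openrisc change above pins the task stack while unwinding. The pattern generalizes; a sketch of the pairing, assuming only the standard task-stack helpers:

#include <linux/sched.h>
#include <linux/sched/task_stack.h>

/* Sketch of the pin/unpin pattern: the stack of an exiting task can be
 * freed at any time, so grab a reference before touching it. */
static void walk_task_stack_safely(struct task_struct *tsk)
{
	if (!try_get_task_stack(tsk))
		return;			/* stack already freed; nothing to do */

	/* ... safe to read tsk's stack and thread_info here ... */

	put_task_stack(tsk);		/* drop the pin taken above */
}
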
+diff --git a/arch/powerpc/include/asm/percpu.h b/arch/powerpc/include/asm/percpu.h
+index dce863a7635cd..8e5b7d0b851c6 100644
+--- a/arch/powerpc/include/asm/percpu.h
++++ b/arch/powerpc/include/asm/percpu.h
+@@ -10,8 +10,6 @@
+ 
+ #ifdef CONFIG_SMP
+ 
+-#include <asm/paca.h>
+-
+ #define __my_cpu_offset local_paca->data_offset
+ 
+ #endif /* CONFIG_SMP */
+@@ -19,4 +17,6 @@
+ 
+ #include <asm-generic/percpu.h>
+ 
++#include <asm/paca.h>
++
+ #endif /* _ASM_POWERPC_PERCPU_H_ */
+diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
+index 641fc5f3d7dd9..3ebb1792e6367 100644
+--- a/arch/powerpc/mm/fault.c
++++ b/arch/powerpc/mm/fault.c
+@@ -267,6 +267,9 @@ static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code,
+ 	return false;
+ }
+ 
++// This comes from 64-bit struct rt_sigframe + __SIGNAL_FRAMESIZE
++#define SIGFRAME_MAX_SIZE	(4096 + 128)
++
+ static bool bad_stack_expansion(struct pt_regs *regs, unsigned long address,
+ 				struct vm_area_struct *vma, unsigned int flags,
+ 				bool *must_retry)
+@@ -274,7 +277,7 @@ static bool bad_stack_expansion(struct pt_regs *regs, unsigned long address,
+ 	/*
+ 	 * N.B. The POWER/Open ABI allows programs to access up to
+ 	 * 288 bytes below the stack pointer.
+-	 * The kernel signal delivery code writes up to about 1.5kB
++	 * The kernel signal delivery code writes a bit over 4KB
+ 	 * below the stack pointer (r1) before decrementing it.
+ 	 * The exec code can write slightly over 640kB to the stack
+ 	 * before setting the user r1.  Thus we allow the stack to
+@@ -299,7 +302,7 @@ static bool bad_stack_expansion(struct pt_regs *regs, unsigned long address,
+ 		 * between the last mapped region and the stack will
+ 		 * expand the stack rather than segfaulting.
+ 		 */
+-		if (address + 2048 >= uregs->gpr[1])
++		if (address + SIGFRAME_MAX_SIZE >= uregs->gpr[1])
+ 			return false;
+ 
+ 		if ((flags & FAULT_FLAG_WRITE) && (flags & FAULT_FLAG_USER) &&
+diff --git a/arch/powerpc/mm/ptdump/hashpagetable.c b/arch/powerpc/mm/ptdump/hashpagetable.c
+index a2c33efc7ce8d..5b8bd34cd3a16 100644
+--- a/arch/powerpc/mm/ptdump/hashpagetable.c
++++ b/arch/powerpc/mm/ptdump/hashpagetable.c
+@@ -258,7 +258,7 @@ static int pseries_find(unsigned long ea, int psize, bool primary, u64 *v, u64 *
+ 	for (i = 0; i < HPTES_PER_GROUP; i += 4, hpte_group += 4) {
+ 		lpar_rc = plpar_pte_read_4(0, hpte_group, (void *)ptes);
+ 
+-		if (lpar_rc != H_SUCCESS)
++		if (lpar_rc)
+ 			continue;
+ 		for (j = 0; j < 4; j++) {
+ 			if (HPTE_V_COMPARE(ptes[j].v, want_v) &&
+diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
+index 5ace2f9a277e9..8b748690dac22 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
++++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
+@@ -27,7 +27,7 @@ static bool rtas_hp_event;
+ unsigned long pseries_memory_block_size(void)
+ {
+ 	struct device_node *np;
+-	unsigned int memblock_size = MIN_MEMORY_BLOCK_SIZE;
++	u64 memblock_size = MIN_MEMORY_BLOCK_SIZE;
+ 	struct resource r;
+ 
+ 	np = of_find_node_by_path("/ibm,dynamic-reconfiguration-memory");
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index c7d7ede6300c5..4907a5149a8a3 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -769,6 +769,7 @@ config VFIO_AP
+ 	def_tristate n
+ 	prompt "VFIO support for AP devices"
+ 	depends on S390_AP_IOMMU && VFIO_MDEV_DEVICE && KVM
++	depends on ZCRYPT
+ 	help
+ 		This driver grants access to Adjunct Processor (AP) devices
+ 		via the VFIO mediated device interface.
+diff --git a/arch/s390/lib/test_unwind.c b/arch/s390/lib/test_unwind.c
+index 32b7a30b2485d..b0b12b46bc572 100644
+--- a/arch/s390/lib/test_unwind.c
++++ b/arch/s390/lib/test_unwind.c
+@@ -63,6 +63,7 @@ static noinline int test_unwind(struct task_struct *task, struct pt_regs *regs,
+ 			break;
+ 		if (state.reliable && !addr) {
+ 			pr_err("unwind state reliable but addr is 0\n");
++			kfree(bt);
+ 			return -EINVAL;
+ 		}
+ 		sprint_symbol(sym, addr);
+diff --git a/arch/sh/boards/mach-landisk/setup.c b/arch/sh/boards/mach-landisk/setup.c
+index 16b4d8b0bb850..2c44b94f82fb2 100644
+--- a/arch/sh/boards/mach-landisk/setup.c
++++ b/arch/sh/boards/mach-landisk/setup.c
+@@ -82,6 +82,9 @@ device_initcall(landisk_devices_setup);
+ 
+ static void __init landisk_setup(char **cmdline_p)
+ {
++	/* I/O port identity mapping */
++	__set_io_port_base(0);
++
+ 	/* LED ON */
+ 	__raw_writeb(__raw_readb(PA_LED) | 0x03, PA_LED);
+ 
+diff --git a/arch/sh/mm/fault.c b/arch/sh/mm/fault.c
+index fbe1f2fe9a8c8..acd1c75994983 100644
+--- a/arch/sh/mm/fault.c
++++ b/arch/sh/mm/fault.c
+@@ -208,7 +208,6 @@ show_fault_oops(struct pt_regs *regs, unsigned long address)
+ 	if (!oops_may_print())
+ 		return;
+ 
+-	printk(KERN_ALERT "PC:");
+ 	pr_alert("BUG: unable to handle kernel %s at %08lx\n",
+ 		 address < PAGE_SIZE ? "NULL pointer dereference"
+ 				     : "paging request",
+diff --git a/arch/x86/events/rapl.c b/arch/x86/events/rapl.c
+index 0f2bf59f43541..51ff9a3618c95 100644
+--- a/arch/x86/events/rapl.c
++++ b/arch/x86/events/rapl.c
+@@ -665,7 +665,7 @@ static const struct attribute_group *rapl_attr_update[] = {
+ 	&rapl_events_pkg_group,
+ 	&rapl_events_ram_group,
+ 	&rapl_events_gpu_group,
+-	&rapl_events_gpu_group,
++	&rapl_events_psys_group,
+ 	NULL,
+ };
+ 
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index 7649da2478d8a..dae32d948bf25 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -560,6 +560,10 @@ static int x86_vector_alloc_irqs(struct irq_domain *domain, unsigned int virq,
+ 		 * as that can corrupt the affinity move state.
+ 		 */
+ 		irqd_set_handle_enforce_irqctx(irqd);
++
++		/* Don't invoke affinity setter on deactivated interrupts */
++		irqd_set_affinity_on_activate(irqd);
++
+ 		/*
+ 		 * Legacy vectors are already assigned when the IOAPIC
+ 		 * takes them over. They stay on the same vector. This is
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 0b71970d2d3d2..b0802d45abd30 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -31,6 +31,7 @@
+ #include <asm/intel-family.h>
+ #include <asm/e820/api.h>
+ #include <asm/hypervisor.h>
++#include <asm/tlbflush.h>
+ 
+ #include "cpu.h"
+ 
+@@ -1556,7 +1557,12 @@ static ssize_t l1tf_show_state(char *buf)
+ 
+ static ssize_t itlb_multihit_show_state(char *buf)
+ {
+-	if (itlb_multihit_kvm_mitigation)
++	if (!boot_cpu_has(X86_FEATURE_MSR_IA32_FEAT_CTL) ||
++	    !boot_cpu_has(X86_FEATURE_VMX))
++		return sprintf(buf, "KVM: Mitigation: VMX unsupported\n");
++	else if (!(cr4_read_shadow() & X86_CR4_VMXE))
++		return sprintf(buf, "KVM: Mitigation: VMX disabled\n");
++	else if (itlb_multihit_kvm_mitigation)
+ 		return sprintf(buf, "KVM: Mitigation: Split huge pages\n");
+ 	else
+ 		return sprintf(buf, "KVM: Vulnerable\n");
+diff --git a/arch/x86/kernel/tsc_msr.c b/arch/x86/kernel/tsc_msr.c
+index 4fec6f3a1858b..a654a9b4b77c0 100644
+--- a/arch/x86/kernel/tsc_msr.c
++++ b/arch/x86/kernel/tsc_msr.c
+@@ -133,10 +133,15 @@ static const struct freq_desc freq_desc_ann = {
+ 	.mask = 0x0f,
+ };
+ 
+-/* 24 MHz crystal? : 24 * 13 / 4 = 78 MHz */
++/*
++ * 24 MHz crystal? : 24 * 13 / 4 = 78 MHz
++ * Frequency step for Lightning Mountain SoC is fixed to 78 MHz,
++ * so all the frequency entries are 78000.
++ */
+ static const struct freq_desc freq_desc_lgm = {
+ 	.use_msr_plat = true,
+-	.freqs = { 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000 },
++	.freqs = { 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000,
++		   78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000 },
+ 	.mask = 0x0f,
+ };
+ 
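
The table is widened because the index into freqs[] comes from masking an MSR value: with .mask = 0x0f the index ranges over 0..15, so the old 8-entry array left half the range reading out of bounds. A simplified stand-alone model of the invariant:

#include <assert.h>
#include <stddef.h>

/* Simplified model of the MSR frequency lookup: the masked MSR value
 * indexes the table, so the table must have mask + 1 entries. */
static const unsigned int freqs[16] = {
	78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000,
	78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000,
};

static unsigned int lookup_freq(unsigned int msr, unsigned int mask)
{
	assert((size_t)mask + 1 <= sizeof(freqs) / sizeof(freqs[0]));
	return freqs[msr & mask];
}
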
+diff --git a/arch/xtensa/include/asm/thread_info.h b/arch/xtensa/include/asm/thread_info.h
+index f092cc3f4e66d..956d4d47c6cd1 100644
+--- a/arch/xtensa/include/asm/thread_info.h
++++ b/arch/xtensa/include/asm/thread_info.h
+@@ -55,6 +55,10 @@ struct thread_info {
+ 	mm_segment_t		addr_limit;	/* thread address space */
+ 
+ 	unsigned long		cpenable;
++#if XCHAL_HAVE_EXCLUSIVE
++	/* result of the most recent exclusive store */
++	unsigned long		atomctl8;
++#endif
+ 
+ 	/* Allocate storage for extra user states and coprocessor states. */
+ #if XTENSA_HAVE_COPROCESSORS
+diff --git a/arch/xtensa/kernel/asm-offsets.c b/arch/xtensa/kernel/asm-offsets.c
+index 33a257b33723a..dc5c83cad9be8 100644
+--- a/arch/xtensa/kernel/asm-offsets.c
++++ b/arch/xtensa/kernel/asm-offsets.c
+@@ -93,6 +93,9 @@ int main(void)
+ 	DEFINE(THREAD_RA, offsetof (struct task_struct, thread.ra));
+ 	DEFINE(THREAD_SP, offsetof (struct task_struct, thread.sp));
+ 	DEFINE(THREAD_CPENABLE, offsetof (struct thread_info, cpenable));
++#if XCHAL_HAVE_EXCLUSIVE
++	DEFINE(THREAD_ATOMCTL8, offsetof (struct thread_info, atomctl8));
++#endif
+ #if XTENSA_HAVE_COPROCESSORS
+ 	DEFINE(THREAD_XTREGS_CP0, offsetof(struct thread_info, xtregs_cp.cp0));
+ 	DEFINE(THREAD_XTREGS_CP1, offsetof(struct thread_info, xtregs_cp.cp1));
+diff --git a/arch/xtensa/kernel/entry.S b/arch/xtensa/kernel/entry.S
+index 98515c24d9b28..703cf6205efec 100644
+--- a/arch/xtensa/kernel/entry.S
++++ b/arch/xtensa/kernel/entry.S
+@@ -374,6 +374,11 @@ common_exception:
+ 	s32i	a2, a1, PT_LCOUNT
+ #endif
+ 
++#if XCHAL_HAVE_EXCLUSIVE
++	/* Clear exclusive access monitor set by interrupted code */
++	clrex
++#endif
++
+ 	/* It is now safe to restore the EXC_TABLE_FIXUP variable. */
+ 
+ 	rsr	a2, exccause
+@@ -2020,6 +2025,12 @@ ENTRY(_switch_to)
+ 	s32i	a3, a4, THREAD_CPENABLE
+ #endif
+ 
++#if XCHAL_HAVE_EXCLUSIVE
++	l32i	a3, a5, THREAD_ATOMCTL8
++	getex	a3
++	s32i	a3, a4, THREAD_ATOMCTL8
++#endif
++
+ 	/* Flush register file. */
+ 
+ 	spill_registers_kernel
+diff --git a/arch/xtensa/kernel/perf_event.c b/arch/xtensa/kernel/perf_event.c
+index 99fcd63ce597f..a0d05c8598d0f 100644
+--- a/arch/xtensa/kernel/perf_event.c
++++ b/arch/xtensa/kernel/perf_event.c
+@@ -399,7 +399,7 @@ static struct pmu xtensa_pmu = {
+ 	.read = xtensa_pmu_read,
+ };
+ 
+-static int xtensa_pmu_setup(int cpu)
++static int xtensa_pmu_setup(unsigned int cpu)
+ {
+ 	unsigned i;
+ 
+diff --git a/crypto/af_alg.c b/crypto/af_alg.c
+index 28fc323e3fe30..5882ed46f1adb 100644
+--- a/crypto/af_alg.c
++++ b/crypto/af_alg.c
+@@ -635,6 +635,7 @@ void af_alg_pull_tsgl(struct sock *sk, size_t used, struct scatterlist *dst,
+ 
+ 	if (!ctx->used)
+ 		ctx->merge = 0;
++	ctx->init = ctx->more;
+ }
+ EXPORT_SYMBOL_GPL(af_alg_pull_tsgl);
+ 
+@@ -734,9 +735,10 @@ EXPORT_SYMBOL_GPL(af_alg_wmem_wakeup);
+  *
+  * @sk socket of connection to user space
+  * @flags If MSG_DONTWAIT is set, then only report if function would sleep
++ * @min Set to minimum request size if partial requests are allowed.
+  * @return 0 when writable memory is available, < 0 upon error
+  */
+-int af_alg_wait_for_data(struct sock *sk, unsigned flags)
++int af_alg_wait_for_data(struct sock *sk, unsigned flags, unsigned min)
+ {
+ 	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ 	struct alg_sock *ask = alg_sk(sk);
+@@ -754,7 +756,9 @@ int af_alg_wait_for_data(struct sock *sk, unsigned flags)
+ 		if (signal_pending(current))
+ 			break;
+ 		timeout = MAX_SCHEDULE_TIMEOUT;
+-		if (sk_wait_event(sk, &timeout, (ctx->used || !ctx->more),
++		if (sk_wait_event(sk, &timeout,
++				  ctx->init && (!ctx->more ||
++						(min && ctx->used >= min)),
+ 				  &wait)) {
+ 			err = 0;
+ 			break;
+@@ -843,10 +847,11 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 	}
+ 
+ 	lock_sock(sk);
+-	if (!ctx->more && ctx->used) {
++	if (ctx->init && (init || !ctx->more)) {
+ 		err = -EINVAL;
+ 		goto unlock;
+ 	}
++	ctx->init = true;
+ 
+ 	if (init) {
+ 		ctx->enc = enc;
+diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
+index 0ae000a61c7f5..43c6aa784858b 100644
+--- a/crypto/algif_aead.c
++++ b/crypto/algif_aead.c
+@@ -106,8 +106,8 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
+ 	size_t usedpages = 0;		/* [in]  RX bufs to be used from user */
+ 	size_t processed = 0;		/* [in]  TX bufs to be consumed */
+ 
+-	if (!ctx->used) {
+-		err = af_alg_wait_for_data(sk, flags);
++	if (!ctx->init || ctx->more) {
++		err = af_alg_wait_for_data(sk, flags, 0);
+ 		if (err)
+ 			return err;
+ 	}
+@@ -558,12 +558,6 @@ static int aead_accept_parent_nokey(void *private, struct sock *sk)
+ 
+ 	INIT_LIST_HEAD(&ctx->tsgl_list);
+ 	ctx->len = len;
+-	ctx->used = 0;
+-	atomic_set(&ctx->rcvused, 0);
+-	ctx->more = 0;
+-	ctx->merge = 0;
+-	ctx->enc = 0;
+-	ctx->aead_assoclen = 0;
+ 	crypto_init_wait(&ctx->wait);
+ 
+ 	ask->private = ctx;
+diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
+index ec5567c87a6df..81c4022285a7c 100644
+--- a/crypto/algif_skcipher.c
++++ b/crypto/algif_skcipher.c
+@@ -61,8 +61,8 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
+ 	int err = 0;
+ 	size_t len = 0;
+ 
+-	if (!ctx->used) {
+-		err = af_alg_wait_for_data(sk, flags);
++	if (!ctx->init || (ctx->more && ctx->used < bs)) {
++		err = af_alg_wait_for_data(sk, flags, bs);
+ 		if (err)
+ 			return err;
+ 	}
+@@ -333,6 +333,7 @@ static int skcipher_accept_parent_nokey(void *private, struct sock *sk)
+ 	ctx = sock_kmalloc(sk, len, GFP_KERNEL);
+ 	if (!ctx)
+ 		return -ENOMEM;
++	memset(ctx, 0, len);
+ 
+ 	ctx->iv = sock_kmalloc(sk, crypto_skcipher_ivsize(tfm),
+ 			       GFP_KERNEL);
+@@ -340,16 +341,10 @@ static int skcipher_accept_parent_nokey(void *private, struct sock *sk)
+ 		sock_kfree_s(sk, ctx, len);
+ 		return -ENOMEM;
+ 	}
+-
+ 	memset(ctx->iv, 0, crypto_skcipher_ivsize(tfm));
+ 
+ 	INIT_LIST_HEAD(&ctx->tsgl_list);
+ 	ctx->len = len;
+-	ctx->used = 0;
+-	atomic_set(&ctx->rcvused, 0);
+-	ctx->more = 0;
+-	ctx->merge = 0;
+-	ctx->enc = 0;
+ 	crypto_init_wait(&ctx->wait);
+ 
+ 	ask->private = ctx;
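
For context, these algif hunks change the user-visible AF_ALG blocking behavior: recvmsg now waits until the context is initialized and at least one cipher block is buffered (ctx->used >= bs) rather than testing the ambiguous !ctx->used. A minimal user-space sketch of the interface being fixed (zero IV for brevity; real CBC use should also send ALG_SET_IV):

#include <linux/if_alg.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef SOL_ALG
#define SOL_ALG 279
#endif

/* Minimal AF_ALG skcipher round trip (error handling elided). */
int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "skcipher",
		.salg_name   = "cbc(aes)",
	};
	unsigned char key[16] = { 0 };
	unsigned char buf[16] = "0123456789abcde";	/* exactly one block */
	char cbuf[CMSG_SPACE(sizeof(int))] = { 0 };
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	struct msghdr msg = { 0 };
	struct cmsghdr *cmsg;
	int tfm, op;

	tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(tfm, (struct sockaddr *)&sa, sizeof(sa));
	setsockopt(tfm, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
	op = accept(tfm, NULL, NULL);

	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = cbuf;
	msg.msg_controllen = sizeof(cbuf);
	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_ALG;
	cmsg->cmsg_type = ALG_SET_OP;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	*(int *)CMSG_DATA(cmsg) = ALG_OP_ENCRYPT;

	sendmsg(op, &msg, 0);		/* feed one full block */
	read(op, buf, sizeof(buf));	/* returns the ciphertext */

	close(op);
	close(tfm);
	return 0;
}
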
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 7c138a4edc03e..1f72ce1a782b5 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -1823,6 +1823,7 @@ static void populate_shutdown_status(struct nfit_mem *nfit_mem)
+ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
+ 		struct nfit_mem *nfit_mem, u32 device_handle)
+ {
++	struct nvdimm_bus_descriptor *nd_desc = &acpi_desc->nd_desc;
+ 	struct acpi_device *adev, *adev_dimm;
+ 	struct device *dev = acpi_desc->dev;
+ 	unsigned long dsm_mask, label_mask;
+@@ -1834,6 +1835,7 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
+ 	/* nfit test assumes 1:1 relationship between commands and dsms */
+ 	nfit_mem->dsm_mask = acpi_desc->dimm_cmd_force_en;
+ 	nfit_mem->family = NVDIMM_FAMILY_INTEL;
++	set_bit(NVDIMM_FAMILY_INTEL, &nd_desc->dimm_family_mask);
+ 
+ 	if (dcr->valid_fields & ACPI_NFIT_CONTROL_MFG_INFO_VALID)
+ 		sprintf(nfit_mem->id, "%04x-%02x-%04x-%08x",
+@@ -1886,10 +1888,13 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
+ 	 * Note, that checking for function0 (bit0) tells us if any commands
+ 	 * are reachable through this GUID.
+ 	 */
++	clear_bit(NVDIMM_FAMILY_INTEL, &nd_desc->dimm_family_mask);
+ 	for (i = 0; i <= NVDIMM_FAMILY_MAX; i++)
+-		if (acpi_check_dsm(adev_dimm->handle, to_nfit_uuid(i), 1, 1))
++		if (acpi_check_dsm(adev_dimm->handle, to_nfit_uuid(i), 1, 1)) {
++			set_bit(i, &nd_desc->dimm_family_mask);
+ 			if (family < 0 || i == default_dsm_family)
+ 				family = i;
++		}
+ 
+ 	/* limit the supported commands to those that are publicly documented */
+ 	nfit_mem->family = family;
+@@ -2153,6 +2158,9 @@ static void acpi_nfit_init_dsms(struct acpi_nfit_desc *acpi_desc)
+ 
+ 	nd_desc->cmd_mask = acpi_desc->bus_cmd_force_en;
+ 	nd_desc->bus_dsm_mask = acpi_desc->bus_nfit_cmd_force_en;
++	set_bit(ND_CMD_CALL, &nd_desc->cmd_mask);
++	set_bit(NVDIMM_BUS_FAMILY_NFIT, &nd_desc->bus_family_mask);
++
+ 	adev = to_acpi_dev(acpi_desc);
+ 	if (!adev)
+ 		return;
+@@ -2160,7 +2168,6 @@ static void acpi_nfit_init_dsms(struct acpi_nfit_desc *acpi_desc)
+ 	for (i = ND_CMD_ARS_CAP; i <= ND_CMD_CLEAR_ERROR; i++)
+ 		if (acpi_check_dsm(adev->handle, guid, 1, 1ULL << i))
+ 			set_bit(i, &nd_desc->cmd_mask);
+-	set_bit(ND_CMD_CALL, &nd_desc->cmd_mask);
+ 
+ 	dsm_mask =
+ 		(1 << ND_CMD_ARS_CAP) |
+diff --git a/drivers/acpi/nfit/nfit.h b/drivers/acpi/nfit/nfit.h
+index f5525f8bb7708..5c5e7ebba8dc6 100644
+--- a/drivers/acpi/nfit/nfit.h
++++ b/drivers/acpi/nfit/nfit.h
+@@ -33,7 +33,6 @@
+ 		| ACPI_NFIT_MEM_RESTORE_FAILED | ACPI_NFIT_MEM_FLUSH_FAILED \
+ 		| ACPI_NFIT_MEM_NOT_ARMED | ACPI_NFIT_MEM_MAP_FAILED)
+ 
+-#define NVDIMM_FAMILY_MAX NVDIMM_FAMILY_HYPERV
+ #define NVDIMM_CMD_MAX 31
+ 
+ #define NVDIMM_STANDARD_CMDMASK \
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index e8628716ea345..18e81d65d32c4 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -844,7 +844,9 @@ static int __device_attach(struct device *dev, bool allow_async)
+ 	int ret = 0;
+ 
+ 	device_lock(dev);
+-	if (dev->driver) {
++	if (dev->p->dead) {
++		goto out_unlock;
++	} else if (dev->driver) {
+ 		if (device_is_bound(dev)) {
+ 			ret = 1;
+ 			goto out_unlock;
+diff --git a/drivers/clk/Kconfig b/drivers/clk/Kconfig
+index 326f91b2dda9f..5f952e111ab5a 100644
+--- a/drivers/clk/Kconfig
++++ b/drivers/clk/Kconfig
+@@ -50,7 +50,7 @@ source "drivers/clk/versatile/Kconfig"
+ config CLK_HSDK
+ 	bool "PLL Driver for HSDK platform"
+ 	depends on OF || COMPILE_TEST
+-	depends on IOMEM
++	depends on HAS_IOMEM
+ 	help
+ 	  This driver supports the HSDK core, system, ddr, tunnel and hdmi PLLs
+ 	  control.
+diff --git a/drivers/clk/actions/owl-s500.c b/drivers/clk/actions/owl-s500.c
+index e2007ac4d235d..0eb83a0b70bcc 100644
+--- a/drivers/clk/actions/owl-s500.c
++++ b/drivers/clk/actions/owl-s500.c
+@@ -183,7 +183,7 @@ static OWL_GATE(timer_clk, "timer_clk", "hosc", CMU_DEVCLKEN1, 27, 0, 0);
+ static OWL_GATE(hdmi_clk, "hdmi_clk", "hosc", CMU_DEVCLKEN1, 3, 0, 0);
+ 
+ /* divider clocks */
+-static OWL_DIVIDER(h_clk, "h_clk", "ahbprevdiv_clk", CMU_BUSCLK1, 12, 2, NULL, 0, 0);
++static OWL_DIVIDER(h_clk, "h_clk", "ahbprediv_clk", CMU_BUSCLK1, 12, 2, NULL, 0, 0);
+ static OWL_DIVIDER(rmii_ref_clk, "rmii_ref_clk", "ethernet_pll_clk", CMU_ETHERNETPLL, 1, 1, rmii_ref_div_table, 0, 0);
+ 
+ /* factor clocks */
+diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
+index 6bb7efa12037b..011802f1a6df9 100644
+--- a/drivers/clk/bcm/clk-bcm2835.c
++++ b/drivers/clk/bcm/clk-bcm2835.c
+@@ -314,6 +314,7 @@ struct bcm2835_cprman {
+ 	struct device *dev;
+ 	void __iomem *regs;
+ 	spinlock_t regs_lock; /* spinlock for all clocks */
++	unsigned int soc;
+ 
+ 	/*
+ 	 * Real names of cprman clock parents looked up through
+@@ -525,6 +526,20 @@ static int bcm2835_pll_is_on(struct clk_hw *hw)
+ 		A2W_PLL_CTRL_PRST_DISABLE;
+ }
+ 
++static u32 bcm2835_pll_get_prediv_mask(struct bcm2835_cprman *cprman,
++				       const struct bcm2835_pll_data *data)
++{
++	/*
++	 * On BCM2711 there isn't a pre-divisor available in the PLL feedback
++	 * as VCO RANGE bits.
++	 * for to for VCO RANGE bits.
++	 */
++	if (cprman->soc & SOC_BCM2711)
++		return 0;
++
++	return data->ana->fb_prediv_mask;
++}
++
+ static void bcm2835_pll_choose_ndiv_and_fdiv(unsigned long rate,
+ 					     unsigned long parent_rate,
+ 					     u32 *ndiv, u32 *fdiv)
+@@ -582,7 +597,7 @@ static unsigned long bcm2835_pll_get_rate(struct clk_hw *hw,
+ 	ndiv = (a2wctrl & A2W_PLL_CTRL_NDIV_MASK) >> A2W_PLL_CTRL_NDIV_SHIFT;
+ 	pdiv = (a2wctrl & A2W_PLL_CTRL_PDIV_MASK) >> A2W_PLL_CTRL_PDIV_SHIFT;
+ 	using_prediv = cprman_read(cprman, data->ana_reg_base + 4) &
+-		data->ana->fb_prediv_mask;
++		       bcm2835_pll_get_prediv_mask(cprman, data);
+ 
+ 	if (using_prediv) {
+ 		ndiv *= 2;
+@@ -665,6 +680,7 @@ static int bcm2835_pll_set_rate(struct clk_hw *hw,
+ 	struct bcm2835_pll *pll = container_of(hw, struct bcm2835_pll, hw);
+ 	struct bcm2835_cprman *cprman = pll->cprman;
+ 	const struct bcm2835_pll_data *data = pll->data;
++	u32 prediv_mask = bcm2835_pll_get_prediv_mask(cprman, data);
+ 	bool was_using_prediv, use_fb_prediv, do_ana_setup_first;
+ 	u32 ndiv, fdiv, a2w_ctl;
+ 	u32 ana[4];
+@@ -682,7 +698,7 @@ static int bcm2835_pll_set_rate(struct clk_hw *hw,
+ 	for (i = 3; i >= 0; i--)
+ 		ana[i] = cprman_read(cprman, data->ana_reg_base + i * 4);
+ 
+-	was_using_prediv = ana[1] & data->ana->fb_prediv_mask;
++	was_using_prediv = ana[1] & prediv_mask;
+ 
+ 	ana[0] &= ~data->ana->mask0;
+ 	ana[0] |= data->ana->set0;
+@@ -692,10 +708,10 @@ static int bcm2835_pll_set_rate(struct clk_hw *hw,
+ 	ana[3] |= data->ana->set3;
+ 
+ 	if (was_using_prediv && !use_fb_prediv) {
+-		ana[1] &= ~data->ana->fb_prediv_mask;
++		ana[1] &= ~prediv_mask;
+ 		do_ana_setup_first = true;
+ 	} else if (!was_using_prediv && use_fb_prediv) {
+-		ana[1] |= data->ana->fb_prediv_mask;
++		ana[1] |= prediv_mask;
+ 		do_ana_setup_first = false;
+ 	} else {
+ 		do_ana_setup_first = true;
+@@ -2238,6 +2254,7 @@ static int bcm2835_clk_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, cprman);
+ 
+ 	cprman->onecell.num = asize;
++	cprman->soc = pdata->soc;
+ 	hws = cprman->onecell.hws;
+ 
+ 	for (i = 0; i < asize; i++) {
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index 9b2dfa08acb2a..1325139173c95 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -56,7 +56,6 @@
+ #define PLL_STATUS(p)		((p)->offset + (p)->regs[PLL_OFF_STATUS])
+ #define PLL_OPMODE(p)		((p)->offset + (p)->regs[PLL_OFF_OPMODE])
+ #define PLL_FRAC(p)		((p)->offset + (p)->regs[PLL_OFF_FRAC])
+-#define PLL_CAL_VAL(p)		((p)->offset + (p)->regs[PLL_OFF_CAL_VAL])
+ 
+ const u8 clk_alpha_pll_regs[][PLL_OFF_MAX_REGS] = {
+ 	[CLK_ALPHA_PLL_TYPE_DEFAULT] =  {
+@@ -115,7 +114,6 @@ const u8 clk_alpha_pll_regs[][PLL_OFF_MAX_REGS] = {
+ 		[PLL_OFF_STATUS] = 0x30,
+ 		[PLL_OFF_OPMODE] = 0x38,
+ 		[PLL_OFF_ALPHA_VAL] = 0x40,
+-		[PLL_OFF_CAL_VAL] = 0x44,
+ 	},
+ 	[CLK_ALPHA_PLL_TYPE_LUCID] =  {
+ 		[PLL_OFF_L_VAL] = 0x04,
+diff --git a/drivers/clk/qcom/gcc-sdm660.c b/drivers/clk/qcom/gcc-sdm660.c
+index bf5730832ef3d..c6fb57cd576f5 100644
+--- a/drivers/clk/qcom/gcc-sdm660.c
++++ b/drivers/clk/qcom/gcc-sdm660.c
+@@ -1715,6 +1715,9 @@ static struct clk_branch gcc_mss_cfg_ahb_clk = {
+ 
+ static struct clk_branch gcc_mss_mnoc_bimc_axi_clk = {
+ 	.halt_reg = 0x8a004,
++	.halt_check = BRANCH_HALT,
++	.hwcg_reg = 0x8a004,
++	.hwcg_bit = 1,
+ 	.clkr = {
+ 		.enable_reg = 0x8a004,
+ 		.enable_mask = BIT(0),
+diff --git a/drivers/clk/qcom/gcc-sm8150.c b/drivers/clk/qcom/gcc-sm8150.c
+index 72524cf110487..55e9d6d75a0cd 100644
+--- a/drivers/clk/qcom/gcc-sm8150.c
++++ b/drivers/clk/qcom/gcc-sm8150.c
+@@ -1617,6 +1617,7 @@ static struct clk_branch gcc_gpu_cfg_ahb_clk = {
+ };
+ 
+ static struct clk_branch gcc_gpu_gpll0_clk_src = {
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x52004,
+ 		.enable_mask = BIT(15),
+@@ -1632,13 +1633,14 @@ static struct clk_branch gcc_gpu_gpll0_clk_src = {
+ };
+ 
+ static struct clk_branch gcc_gpu_gpll0_div_clk_src = {
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x52004,
+ 		.enable_mask = BIT(16),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_gpu_gpll0_div_clk_src",
+ 			.parent_hws = (const struct clk_hw *[]){
+-				&gcc_gpu_gpll0_clk_src.clkr.hw },
++				&gpll0_out_even.clkr.hw },
+ 			.num_parents = 1,
+ 			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+@@ -1729,6 +1731,7 @@ static struct clk_branch gcc_npu_cfg_ahb_clk = {
+ };
+ 
+ static struct clk_branch gcc_npu_gpll0_clk_src = {
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x52004,
+ 		.enable_mask = BIT(18),
+@@ -1744,13 +1747,14 @@ static struct clk_branch gcc_npu_gpll0_clk_src = {
+ };
+ 
+ static struct clk_branch gcc_npu_gpll0_div_clk_src = {
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x52004,
+ 		.enable_mask = BIT(19),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_npu_gpll0_div_clk_src",
+ 			.parent_hws = (const struct clk_hw *[]){
+-				&gcc_npu_gpll0_clk_src.clkr.hw },
++				&gpll0_out_even.clkr.hw },
+ 			.num_parents = 1,
+ 			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+diff --git a/drivers/clk/sirf/clk-atlas6.c b/drivers/clk/sirf/clk-atlas6.c
+index c84d5bab7ac28..b95483bb6a5ec 100644
+--- a/drivers/clk/sirf/clk-atlas6.c
++++ b/drivers/clk/sirf/clk-atlas6.c
+@@ -135,7 +135,7 @@ static void __init atlas6_clk_init(struct device_node *np)
+ 
+ 	for (i = pll1; i < maxclk; i++) {
+ 		atlas6_clks[i] = clk_register(NULL, atlas6_clk_hw_array[i]);
+-		BUG_ON(!atlas6_clks[i]);
++		BUG_ON(IS_ERR(atlas6_clks[i]));
+ 	}
+ 	clk_register_clkdev(atlas6_clks[cpu], NULL, "cpu");
+ 	clk_register_clkdev(atlas6_clks[io],  NULL, "io");
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index bf90a4fcabd1f..8149ac4d6ef22 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -810,12 +810,6 @@ static int ctr_skcipher_setkey(struct crypto_skcipher *skcipher,
+ 	return skcipher_setkey(skcipher, key, keylen, ctx1_iv_off);
+ }
+ 
+-static int arc4_skcipher_setkey(struct crypto_skcipher *skcipher,
+-				const u8 *key, unsigned int keylen)
+-{
+-	return skcipher_setkey(skcipher, key, keylen, 0);
+-}
+-
+ static int des_skcipher_setkey(struct crypto_skcipher *skcipher,
+ 			       const u8 *key, unsigned int keylen)
+ {
+@@ -1967,21 +1961,6 @@ static struct caam_skcipher_alg driver_algs[] = {
+ 		},
+ 		.caam.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_ECB,
+ 	},
+-	{
+-		.skcipher = {
+-			.base = {
+-				.cra_name = "ecb(arc4)",
+-				.cra_driver_name = "ecb-arc4-caam",
+-				.cra_blocksize = ARC4_BLOCK_SIZE,
+-			},
+-			.setkey = arc4_skcipher_setkey,
+-			.encrypt = skcipher_encrypt,
+-			.decrypt = skcipher_decrypt,
+-			.min_keysize = ARC4_MIN_KEY_SIZE,
+-			.max_keysize = ARC4_MAX_KEY_SIZE,
+-		},
+-		.caam.class1_alg_type = OP_ALG_ALGSEL_ARC4 | OP_ALG_AAI_ECB,
+-	},
+ };
+ 
+ static struct caam_aead_alg driver_aeads[] = {
+@@ -3457,7 +3436,6 @@ int caam_algapi_init(struct device *ctrldev)
+ 	struct caam_drv_private *priv = dev_get_drvdata(ctrldev);
+ 	int i = 0, err = 0;
+ 	u32 aes_vid, aes_inst, des_inst, md_vid, md_inst, ccha_inst, ptha_inst;
+-	u32 arc4_inst;
+ 	unsigned int md_limit = SHA512_DIGEST_SIZE;
+ 	bool registered = false, gcm_support;
+ 
+@@ -3477,8 +3455,6 @@ int caam_algapi_init(struct device *ctrldev)
+ 			   CHA_ID_LS_DES_SHIFT;
+ 		aes_inst = cha_inst & CHA_ID_LS_AES_MASK;
+ 		md_inst = (cha_inst & CHA_ID_LS_MD_MASK) >> CHA_ID_LS_MD_SHIFT;
+-		arc4_inst = (cha_inst & CHA_ID_LS_ARC4_MASK) >>
+-			    CHA_ID_LS_ARC4_SHIFT;
+ 		ccha_inst = 0;
+ 		ptha_inst = 0;
+ 
+@@ -3499,7 +3475,6 @@ int caam_algapi_init(struct device *ctrldev)
+ 		md_inst = mdha & CHA_VER_NUM_MASK;
+ 		ccha_inst = rd_reg32(&priv->ctrl->vreg.ccha) & CHA_VER_NUM_MASK;
+ 		ptha_inst = rd_reg32(&priv->ctrl->vreg.ptha) & CHA_VER_NUM_MASK;
+-		arc4_inst = rd_reg32(&priv->ctrl->vreg.afha) & CHA_VER_NUM_MASK;
+ 
+ 		gcm_support = aesa & CHA_VER_MISC_AES_GCM;
+ 	}
+@@ -3522,10 +3497,6 @@ int caam_algapi_init(struct device *ctrldev)
+ 		if (!aes_inst && (alg_sel == OP_ALG_ALGSEL_AES))
+ 				continue;
+ 
+-		/* Skip ARC4 algorithms if not supported by device */
+-		if (!arc4_inst && alg_sel == OP_ALG_ALGSEL_ARC4)
+-			continue;
+-
+ 		/*
+ 		 * Check support for AES modes not available
+ 		 * on LP devices.
+diff --git a/drivers/crypto/caam/compat.h b/drivers/crypto/caam/compat.h
+index 60e2a54c19f11..c3c22a8de4c00 100644
+--- a/drivers/crypto/caam/compat.h
++++ b/drivers/crypto/caam/compat.h
+@@ -43,7 +43,6 @@
+ #include <crypto/akcipher.h>
+ #include <crypto/scatterwalk.h>
+ #include <crypto/skcipher.h>
+-#include <crypto/arc4.h>
+ #include <crypto/internal/skcipher.h>
+ #include <crypto/internal/hash.h>
+ #include <crypto/internal/rsa.h>
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+index f87b225437fc3..bd5061fbe031e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+@@ -973,7 +973,7 @@ static ssize_t amdgpu_debugfs_gpr_read(struct file *f, char __user *buf,
+ 
+ 	r = pm_runtime_get_sync(adev->ddev->dev);
+ 	if (r < 0)
+-		return r;
++		goto err;
+ 
+ 	r = amdgpu_virt_enable_access_debugfs(adev);
+ 	if (r < 0)
+@@ -1003,7 +1003,7 @@ static ssize_t amdgpu_debugfs_gpr_read(struct file *f, char __user *buf,
+ 		value = data[result >> 2];
+ 		r = put_user(value, (uint32_t *)buf);
+ 		if (r) {
+-			result = r;
++			amdgpu_virt_disable_access_debugfs(adev);
+ 			goto err;
+ 		}
+ 
+@@ -1012,11 +1012,14 @@ static ssize_t amdgpu_debugfs_gpr_read(struct file *f, char __user *buf,
+ 		size -= 4;
+ 	}
+ 
+-err:
+-	pm_runtime_put_autosuspend(adev->ddev->dev);
+ 	kfree(data);
+ 	amdgpu_virt_disable_access_debugfs(adev);
+ 	return result;
++
++err:
++	pm_runtime_put_autosuspend(adev->ddev->dev);
++	kfree(data);
++	return r;
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 710edc70e37ec..195d621145ba5 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -8686,6 +8686,29 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ 		if (ret)
+ 			goto fail;
+ 
++	/* Check connector changes */
++	for_each_oldnew_connector_in_state(state, connector, old_con_state, new_con_state, i) {
++		struct dm_connector_state *dm_old_con_state = to_dm_connector_state(old_con_state);
++		struct dm_connector_state *dm_new_con_state = to_dm_connector_state(new_con_state);
++
++		/* Skip connectors that are disabled or part of modeset already. */
++		if (!old_con_state->crtc && !new_con_state->crtc)
++			continue;
++
++		if (!new_con_state->crtc)
++			continue;
++
++		new_crtc_state = drm_atomic_get_crtc_state(state, new_con_state->crtc);
++		if (IS_ERR(new_crtc_state)) {
++			ret = PTR_ERR(new_crtc_state);
++			goto fail;
++		}
++
++		if (dm_old_con_state->abm_level !=
++		    dm_new_con_state->abm_level)
++			new_crtc_state->connectors_changed = true;
++	}
++
+ #if defined(CONFIG_DRM_AMD_DC_DCN)
+ 		if (!compute_mst_dsc_configs_for_state(state, dm_state->context))
+ 			goto fail;
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
+index 3fab9296918ab..e133edc587d31 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
+@@ -85,12 +85,77 @@ static int rv1_determine_dppclk_threshold(struct clk_mgr_internal *clk_mgr, stru
+ 	return disp_clk_threshold;
+ }
+ 
+-static void ramp_up_dispclk_with_dpp(struct clk_mgr_internal *clk_mgr, struct dc *dc, struct dc_clocks *new_clocks)
++static void ramp_up_dispclk_with_dpp(
++		struct clk_mgr_internal *clk_mgr,
++		struct dc *dc,
++		struct dc_clocks *new_clocks,
++		bool safe_to_lower)
+ {
+ 	int i;
+ 	int dispclk_to_dpp_threshold = rv1_determine_dppclk_threshold(clk_mgr, new_clocks);
+ 	bool request_dpp_div = new_clocks->dispclk_khz > new_clocks->dppclk_khz;
+ 
++	/* This function changes dispclk, dppclk and dprefclk according to the
++	 * bandwidth requirement. Its call stack is rv1_update_clocks -->
++	 * update_clocks --> dcn10_prepare_bandwidth / dcn10_optimize_bandwidth
++	 * --> prepare_bandwidth / optimize_bandwidth. Before changing the DCN
++	 * hardware, prepare_bandwidth is called first to provide enough clock
++	 * and watermark for the change; after the DCN hardware change is done,
++	 * optimize_bandwidth is executed to lower the clocks and save power
++	 * with the new DCN hardware settings.
++	 *
++	 * Below is the sequence of commit_planes_for_stream:
++	 *
++	 * step 1: prepare_bandwidth - raise clocks to have enough bandwidth
++	 * step 2: lock_doublebuffer_enable
++	 * step 3: pipe_control_lock(true) - so that dchubp register changes
++	 * do not take effect right away
++	 * step 4: apply_ctx_for_surface - program dchubp
++	 * step 5: pipe_control_lock(false) - dchubp register changes take effect
++	 * step 6: optimize_bandwidth --> dc_post_update_surfaces_to_stream
++	 * for a full update, optimize clocks to save power
++	 *
++	 * At the end of step 1 the dcn clocks (dprefclk, dispclk, dppclk) may
++	 * have been changed for the new dchubp configuration, but the real dcn
++	 * hub dchubps are still running with the old configuration until the
++	 * end of step 5. Therefore the clock settings at step 1 must not be
++	 * lower than those before step 1. This is checked by two conditions:
++	 * 1. if (should_set_clock(safe_to_lower
++	 * , new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz) ||
++	 * new_clocks->dispclk_khz == clk_mgr_base->clks.dispclk_khz)
++	 * 2. request_dpp_div = new_clocks->dispclk_khz > new_clocks->dppclk_khz
++	 *
++	 * The second condition is based on the new dchubp configuration; dppclk
++	 * for the new dchubp may differ from dppclk before step 1.
++	 * For example, before step 1 the dchubps are as below:
++	 * pipe 0: recout=(0,40,1920,980) viewport=(0,0,1920,979)
++	 * pipe 1: recout=(0,0,1920,1080) viewport=(0,0,1920,1080)
++	 * and pipe 0 needs dppclk = dispclk.
++	 *
++	 * New dchubp pipe-split configuration:
++	 * pipe 0: recout=(0,0,960,1080) viewport=(0,0,960,1080)
++	 * pipe 1: recout=(960,0,960,1080) viewport=(960,0,960,1080)
++	 * Here dppclk only needs dppclk = dispclk / 2.
++	 *
++	 * dispclk and dppclk are not locked by the otg master lock; they take
++	 * effect after step 1. During this transition dispclk stays the same,
++	 * but dppclk is changed to half of the previous clock for the old
++	 * dchubp configuration between step 1 and step 6. This may cause a
++	 * p-state warning intermittently.
++	 *
++	 * For new_clocks->dispclk_khz == clk_mgr_base->clks.dispclk_khz, we
++	 * need to make sure dppclk is not lowered between steps 1 and 6.
++	 * For new_clocks->dispclk_khz > clk_mgr_base->clks.dispclk_khz, the
++	 * new display clock is raised, but we do not know the ratio of
++	 * new_clocks->dispclk_khz to clk_mgr_base->clks.dispclk_khz, so
++	 * new_clocks->dispclk_khz / 2 does not guarantee equal or higher than
++	 * the old dppclk. We can ignore the power-saving difference between
++	 * dppclk = dispclk and dppclk = dispclk / 2 between step 1 and step 6,
++	 * so as long as safe_to_lower = false, set dppclk = dispclk to simplify
++	 * the condition check.
++	 * todo: review this change for other asic.
++	 **/
++	if (!safe_to_lower)
++		request_dpp_div = false;
++
+ 	/* set disp clk to dpp clk threshold */
+ 
+ 	clk_mgr->funcs->set_dispclk(clk_mgr, dispclk_to_dpp_threshold);
+@@ -209,7 +274,7 @@ static void rv1_update_clocks(struct clk_mgr *clk_mgr_base,
+ 	/* program dispclk on = as a w/a for sleep resume clock ramping issues */
+ 	if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)
+ 			|| new_clocks->dispclk_khz == clk_mgr_base->clks.dispclk_khz) {
+-		ramp_up_dispclk_with_dpp(clk_mgr, dc, new_clocks);
++		ramp_up_dispclk_with_dpp(clk_mgr, dc, new_clocks, safe_to_lower);
+ 		clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz;
+ 		send_request_to_lower = true;
+ 	}
+diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
+index 56923a96b4502..ad54f4500af1f 100644
+--- a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
++++ b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
+@@ -2725,7 +2725,10 @@ static int ci_initialize_mc_reg_table(struct pp_hwmgr *hwmgr)
+ 
+ static bool ci_is_dpm_running(struct pp_hwmgr *hwmgr)
+ {
+-	return ci_is_smc_ram_running(hwmgr);
++	return (1 == PHM_READ_INDIRECT_FIELD(hwmgr->device,
++					     CGS_IND_REG__SMC, FEATURE_STATUS,
++					     VOLTAGE_CONTROLLER_ON))
++		? true : false;
+ }
+ 
+ static int ci_smu_init(struct pp_hwmgr *hwmgr)
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 1e26b89628f98..ffbd754a53825 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -88,8 +88,8 @@ static int drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr,
+ static bool drm_dp_validate_guid(struct drm_dp_mst_topology_mgr *mgr,
+ 				 u8 *guid);
+ 
+-static int drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux);
+-static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_aux *aux);
++static int drm_dp_mst_register_i2c_bus(struct drm_dp_mst_port *port);
++static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_mst_port *port);
+ static void drm_dp_mst_kick_tx(struct drm_dp_mst_topology_mgr *mgr);
+ 
+ #define DBG_PREFIX "[dp_mst]"
+@@ -1197,7 +1197,8 @@ static int drm_dp_mst_wait_tx_reply(struct drm_dp_mst_branch *mstb,
+ 
+ 		/* remove from q */
+ 		if (txmsg->state == DRM_DP_SIDEBAND_TX_QUEUED ||
+-		    txmsg->state == DRM_DP_SIDEBAND_TX_START_SEND)
++		    txmsg->state == DRM_DP_SIDEBAND_TX_START_SEND ||
++		    txmsg->state == DRM_DP_SIDEBAND_TX_SENT)
+ 			list_del(&txmsg->next);
+ 	}
+ out:
+@@ -1966,7 +1967,7 @@ drm_dp_port_set_pdt(struct drm_dp_mst_port *port, u8 new_pdt,
+ 			}
+ 
+ 			/* remove i2c over sideband */
+-			drm_dp_mst_unregister_i2c_bus(&port->aux);
++			drm_dp_mst_unregister_i2c_bus(port);
+ 		} else {
+ 			mutex_lock(&mgr->lock);
+ 			drm_dp_mst_topology_put_mstb(port->mstb);
+@@ -1981,7 +1982,7 @@ drm_dp_port_set_pdt(struct drm_dp_mst_port *port, u8 new_pdt,
+ 	if (port->pdt != DP_PEER_DEVICE_NONE) {
+ 		if (drm_dp_mst_is_end_device(port->pdt, port->mcs)) {
+ 			/* add i2c over sideband */
+-			ret = drm_dp_mst_register_i2c_bus(&port->aux);
++			ret = drm_dp_mst_register_i2c_bus(port);
+ 		} else {
+ 			lct = drm_dp_calculate_rad(port, rad);
+ 			mstb = drm_dp_add_mst_branch_device(lct, rad);
+@@ -4261,11 +4262,11 @@ bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
+ {
+ 	int ret;
+ 
+-	port = drm_dp_mst_topology_get_port_validated(mgr, port);
+-	if (!port)
++	if (slots < 0)
+ 		return false;
+ 
+-	if (slots < 0)
++	port = drm_dp_mst_topology_get_port_validated(mgr, port);
++	if (!port)
+ 		return false;
+ 
+ 	if (port->vcpi.vcpi > 0) {
+@@ -4281,6 +4282,7 @@ bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
+ 	if (ret) {
+ 		DRM_DEBUG_KMS("failed to init vcpi slots=%d max=63 ret=%d\n",
+ 			      DIV_ROUND_UP(pbn, mgr->pbn_div), ret);
++		drm_dp_mst_topology_put_port(port);
+ 		goto out;
+ 	}
+ 	DRM_DEBUG_KMS("initing vcpi for pbn=%d slots=%d\n",
+@@ -4641,12 +4643,13 @@ static void drm_dp_tx_work(struct work_struct *work)
+ static inline void
+ drm_dp_delayed_destroy_port(struct drm_dp_mst_port *port)
+ {
++	drm_dp_port_set_pdt(port, DP_PEER_DEVICE_NONE, port->mcs);
++
+ 	if (port->connector) {
+ 		drm_connector_unregister(port->connector);
+ 		drm_connector_put(port->connector);
+ 	}
+ 
+-	drm_dp_port_set_pdt(port, DP_PEER_DEVICE_NONE, port->mcs);
+ 	drm_dp_mst_put_port_malloc(port);
+ }
+ 
+@@ -5346,22 +5349,26 @@ static const struct i2c_algorithm drm_dp_mst_i2c_algo = {
+ 
+ /**
+  * drm_dp_mst_register_i2c_bus() - register an I2C adapter for I2C-over-AUX
+- * @aux: DisplayPort AUX channel
++ * @port: The port to add the I2C bus on
+  *
+  * Returns 0 on success or a negative error code on failure.
+  */
+-static int drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux)
++static int drm_dp_mst_register_i2c_bus(struct drm_dp_mst_port *port)
+ {
++	struct drm_dp_aux *aux = &port->aux;
++	struct device *parent_dev = port->mgr->dev->dev;
++
+ 	aux->ddc.algo = &drm_dp_mst_i2c_algo;
+ 	aux->ddc.algo_data = aux;
+ 	aux->ddc.retries = 3;
+ 
+ 	aux->ddc.class = I2C_CLASS_DDC;
+ 	aux->ddc.owner = THIS_MODULE;
+-	aux->ddc.dev.parent = aux->dev;
+-	aux->ddc.dev.of_node = aux->dev->of_node;
++	/* FIXME: set the kdev of the port's connector as parent */
++	aux->ddc.dev.parent = parent_dev;
++	aux->ddc.dev.of_node = parent_dev->of_node;
+ 
+-	strlcpy(aux->ddc.name, aux->name ? aux->name : dev_name(aux->dev),
++	strlcpy(aux->ddc.name, aux->name ? aux->name : dev_name(parent_dev),
+ 		sizeof(aux->ddc.name));
+ 
+ 	return i2c_add_adapter(&aux->ddc);
+@@ -5369,11 +5376,11 @@ static int drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux)
+ 
+ /**
+  * drm_dp_mst_unregister_i2c_bus() - unregister an I2C-over-AUX adapter
+- * @aux: DisplayPort AUX channel
++ * @port: The port to remove the I2C bus from
+  */
+-static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_aux *aux)
++static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_mst_port *port)
+ {
+-	i2c_del_adapter(&aux->ddc);
++	i2c_del_adapter(&port->aux.ddc);
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index d00ea384dcbfe..58f5dc2f6dd52 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -121,6 +121,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T101HA"),
+ 		},
+ 		.driver_data = (void *)&lcd800x1280_rightside_up,
++	}, {	/* Asus T103HAF */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T103HAF"),
++		},
++		.driver_data = (void *)&lcd800x1280_rightside_up,
+ 	}, {	/* GPD MicroPC (generic strings, also match on bios date) */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Default string"),
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
+index f069551e412f3..ebc29b6ee86cb 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt.c
++++ b/drivers/gpu/drm/i915/gt/intel_gt.c
+@@ -616,6 +616,11 @@ void intel_gt_driver_unregister(struct intel_gt *gt)
+ void intel_gt_driver_release(struct intel_gt *gt)
+ {
+ 	struct i915_address_space *vm;
++	intel_wakeref_t wakeref;
++
++	/* Scrub all HW state upon release */
++	with_intel_runtime_pm(gt->uncore->rpm, wakeref)
++		__intel_gt_reset(gt, ALL_ENGINES);
+ 
+ 	vm = fetch_and_zero(&gt->vm);
+ 	if (vm) /* FIXME being called twice on error paths :( */
+diff --git a/drivers/gpu/drm/imx/imx-ldb.c b/drivers/gpu/drm/imx/imx-ldb.c
+index 1823af9936c98..447a110787a6f 100644
+--- a/drivers/gpu/drm/imx/imx-ldb.c
++++ b/drivers/gpu/drm/imx/imx-ldb.c
+@@ -304,18 +304,19 @@ static void imx_ldb_encoder_disable(struct drm_encoder *encoder)
+ {
+ 	struct imx_ldb_channel *imx_ldb_ch = enc_to_imx_ldb_ch(encoder);
+ 	struct imx_ldb *ldb = imx_ldb_ch->ldb;
++	int dual = ldb->ldb_ctrl & LDB_SPLIT_MODE_EN;
+ 	int mux, ret;
+ 
+ 	drm_panel_disable(imx_ldb_ch->panel);
+ 
+-	if (imx_ldb_ch == &ldb->channel[0])
++	if (imx_ldb_ch == &ldb->channel[0] || dual)
+ 		ldb->ldb_ctrl &= ~LDB_CH0_MODE_EN_MASK;
+-	else if (imx_ldb_ch == &ldb->channel[1])
++	if (imx_ldb_ch == &ldb->channel[1] || dual)
+ 		ldb->ldb_ctrl &= ~LDB_CH1_MODE_EN_MASK;
+ 
+ 	regmap_write(ldb->regmap, IOMUXC_GPR2, ldb->ldb_ctrl);
+ 
+-	if (ldb->ldb_ctrl & LDB_SPLIT_MODE_EN) {
++	if (dual) {
+ 		clk_disable_unprepare(ldb->clk[0]);
+ 		clk_disable_unprepare(ldb->clk[1]);
+ 	}
+diff --git a/drivers/gpu/drm/ingenic/ingenic-drm.c b/drivers/gpu/drm/ingenic/ingenic-drm.c
+index 55b49a31729bf..9764c99ebddf4 100644
+--- a/drivers/gpu/drm/ingenic/ingenic-drm.c
++++ b/drivers/gpu/drm/ingenic/ingenic-drm.c
+@@ -386,7 +386,7 @@ static void ingenic_drm_plane_atomic_update(struct drm_plane *plane,
+ 		addr = drm_fb_cma_get_gem_addr(state->fb, state, 0);
+ 		width = state->src_w >> 16;
+ 		height = state->src_h >> 16;
+-		cpp = state->fb->format->cpp[plane->index];
++		cpp = state->fb->format->cpp[0];
+ 
+ 		priv->dma_hwdesc->addr = addr;
+ 		priv->dma_hwdesc->cmd = width * height * cpp / 4;
+diff --git a/drivers/gpu/drm/omapdrm/dss/dispc.c b/drivers/gpu/drm/omapdrm/dss/dispc.c
+index 6639ee9b05d3d..48593932bddf5 100644
+--- a/drivers/gpu/drm/omapdrm/dss/dispc.c
++++ b/drivers/gpu/drm/omapdrm/dss/dispc.c
+@@ -4915,6 +4915,7 @@ static int dispc_runtime_resume(struct device *dev)
+ static const struct dev_pm_ops dispc_pm_ops = {
+ 	.runtime_suspend = dispc_runtime_suspend,
+ 	.runtime_resume = dispc_runtime_resume,
++	SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+ };
+ 
+ struct platform_driver omap_dispchw_driver = {
+diff --git a/drivers/gpu/drm/omapdrm/dss/dsi.c b/drivers/gpu/drm/omapdrm/dss/dsi.c
+index 79ddfbfd1b588..eeccf40bae416 100644
+--- a/drivers/gpu/drm/omapdrm/dss/dsi.c
++++ b/drivers/gpu/drm/omapdrm/dss/dsi.c
+@@ -5467,6 +5467,7 @@ static int dsi_runtime_resume(struct device *dev)
+ static const struct dev_pm_ops dsi_pm_ops = {
+ 	.runtime_suspend = dsi_runtime_suspend,
+ 	.runtime_resume = dsi_runtime_resume,
++	SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+ };
+ 
+ struct platform_driver omap_dsihw_driver = {
+diff --git a/drivers/gpu/drm/omapdrm/dss/dss.c b/drivers/gpu/drm/omapdrm/dss/dss.c
+index 4d5739fa4a5d8..6ccbc29c4ce4b 100644
+--- a/drivers/gpu/drm/omapdrm/dss/dss.c
++++ b/drivers/gpu/drm/omapdrm/dss/dss.c
+@@ -1614,6 +1614,7 @@ static int dss_runtime_resume(struct device *dev)
+ static const struct dev_pm_ops dss_pm_ops = {
+ 	.runtime_suspend = dss_runtime_suspend,
+ 	.runtime_resume = dss_runtime_resume,
++	SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+ };
+ 
+ struct platform_driver omap_dsshw_driver = {
+diff --git a/drivers/gpu/drm/omapdrm/dss/venc.c b/drivers/gpu/drm/omapdrm/dss/venc.c
+index 9701843ccf09d..01ee6c50b6631 100644
+--- a/drivers/gpu/drm/omapdrm/dss/venc.c
++++ b/drivers/gpu/drm/omapdrm/dss/venc.c
+@@ -902,6 +902,7 @@ static int venc_runtime_resume(struct device *dev)
+ static const struct dev_pm_ops venc_pm_ops = {
+ 	.runtime_suspend = venc_runtime_suspend,
+ 	.runtime_resume = venc_runtime_resume,
++	SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+ };
+ 
+ static const struct of_device_id venc_of_match[] = {
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
+index 17b654e1eb942..556181ea4a073 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
+@@ -46,7 +46,7 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
+ 				sg_free_table(&bo->sgts[i]);
+ 			}
+ 		}
+-		kfree(bo->sgts);
++		kvfree(bo->sgts);
+ 	}
+ 
+ 	drm_gem_shmem_free_object(obj);
+diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+index ed28aeba6d59a..3c8ae7411c800 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+@@ -486,7 +486,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
+ 		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
+ 				       sizeof(struct page *), GFP_KERNEL | __GFP_ZERO);
+ 		if (!pages) {
+-			kfree(bo->sgts);
++			kvfree(bo->sgts);
+ 			bo->sgts = NULL;
+ 			mutex_unlock(&bo->base.pages_lock);
+ 			ret = -ENOMEM;
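
Both panfrost hunks pair kvfree() with memory that comes from kvmalloc_array(): kvmalloc() may fall back to vmalloc() for large allocations, and releasing a vmalloc'd pointer with kfree() is invalid. A small sketch of the pairing rule (illustrative, not from the patch):

#include <linux/mm.h>
#include <linux/slab.h>

/* kvmalloc_array() may hand back kmalloc or vmalloc memory depending on
 * size, so the only safe release is kvfree(), which handles both. */
static struct page **alloc_page_array(size_t nr_pages)
{
	return kvmalloc_array(nr_pages, sizeof(struct page *),
			      GFP_KERNEL | __GFP_ZERO);
}

static void free_page_array(struct page **pages)
{
	kvfree(pages);	/* kfree() here would be wrong for vmalloc'd memory */
}
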
+diff --git a/drivers/gpu/drm/tidss/tidss_kms.c b/drivers/gpu/drm/tidss/tidss_kms.c
+index 4b99e9fa84a5b..c0240f7e0b198 100644
+--- a/drivers/gpu/drm/tidss/tidss_kms.c
++++ b/drivers/gpu/drm/tidss/tidss_kms.c
+@@ -154,7 +154,7 @@ static int tidss_dispc_modeset_init(struct tidss_device *tidss)
+ 				break;
+ 			case DISPC_VP_DPI:
+ 				enc_type = DRM_MODE_ENCODER_DPI;
+-				conn_type = DRM_MODE_CONNECTOR_LVDS;
++				conn_type = DRM_MODE_CONNECTOR_DPI;
+ 				break;
+ 			default:
+ 				WARN_ON(1);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index 04d66592f6050..b7a9cee69ea72 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -2578,7 +2578,7 @@ int vmw_kms_fbdev_init_data(struct vmw_private *dev_priv,
+ 		++i;
+ 	}
+ 
+-	if (i != unit) {
++	if (&con->head == &dev_priv->dev->mode_config.connector_list) {
+ 		DRM_ERROR("Could not find initial display unit.\n");
+ 		ret = -EINVAL;
+ 		goto out_unlock;
+@@ -2602,13 +2602,13 @@ int vmw_kms_fbdev_init_data(struct vmw_private *dev_priv,
+ 			break;
+ 	}
+ 
+-	if (mode->type & DRM_MODE_TYPE_PREFERRED)
+-		*p_mode = mode;
+-	else {
++	if (&mode->head == &con->modes) {
+ 		WARN_ONCE(true, "Could not find initial preferred mode.\n");
+ 		*p_mode = list_first_entry(&con->modes,
+ 					   struct drm_display_mode,
+ 					   head);
++	} else {
++		*p_mode = mode;
+ 	}
+ 
+  out_unlock:
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
+index 16dafff5cab19..009f1742bed51 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
+@@ -81,7 +81,7 @@ static int vmw_ldu_commit_list(struct vmw_private *dev_priv)
+ 	struct vmw_legacy_display_unit *entry;
+ 	struct drm_framebuffer *fb = NULL;
+ 	struct drm_crtc *crtc = NULL;
+-	int i = 0;
++	int i;
+ 
+ 	/* If there is no display topology the host just assumes
+ 	 * that the guest will set the same layout as the host.
+@@ -92,12 +92,11 @@ static int vmw_ldu_commit_list(struct vmw_private *dev_priv)
+ 			crtc = &entry->base.crtc;
+ 			w = max(w, crtc->x + crtc->mode.hdisplay);
+ 			h = max(h, crtc->y + crtc->mode.vdisplay);
+-			i++;
+ 		}
+ 
+ 		if (crtc == NULL)
+ 			return 0;
+-		fb = entry->base.crtc.primary->state->fb;
++		fb = crtc->primary->state->fb;
+ 
+ 		return vmw_kms_write_svga(dev_priv, w, h, fb->pitches[0],
+ 					  fb->format->cpp[0] * 8,
+diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c b/drivers/gpu/ipu-v3/ipu-image-convert.c
+index eeca50d9a1ee4..aa1d4b6d278f7 100644
+--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
++++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
+@@ -137,6 +137,17 @@ struct ipu_image_convert_ctx;
+ struct ipu_image_convert_chan;
+ struct ipu_image_convert_priv;
+ 
++enum eof_irq_mask {
++	EOF_IRQ_IN      = BIT(0),
++	EOF_IRQ_ROT_IN  = BIT(1),
++	EOF_IRQ_OUT     = BIT(2),
++	EOF_IRQ_ROT_OUT = BIT(3),
++};
++
++#define EOF_IRQ_COMPLETE (EOF_IRQ_IN | EOF_IRQ_OUT)
++#define EOF_IRQ_ROT_COMPLETE (EOF_IRQ_IN | EOF_IRQ_OUT |	\
++			      EOF_IRQ_ROT_IN | EOF_IRQ_ROT_OUT)
++
+ struct ipu_image_convert_ctx {
+ 	struct ipu_image_convert_chan *chan;
+ 
+@@ -173,6 +184,9 @@ struct ipu_image_convert_ctx {
+ 	/* where to place converted tile in dest image */
+ 	unsigned int out_tile_map[MAX_TILES];
+ 
++	/* mask of completed EOF irqs at every tile conversion */
++	enum eof_irq_mask eof_mask;
++
+ 	struct list_head list;
+ };
+ 
+@@ -189,6 +203,8 @@ struct ipu_image_convert_chan {
+ 	struct ipuv3_channel *rotation_out_chan;
+ 
+ 	/* the IPU end-of-frame irqs */
++	int in_eof_irq;
++	int rot_in_eof_irq;
+ 	int out_eof_irq;
+ 	int rot_out_eof_irq;
+ 
+@@ -1380,6 +1396,9 @@ static int convert_start(struct ipu_image_convert_run *run, unsigned int tile)
+ 	dev_dbg(priv->ipu->dev, "%s: task %u: starting ctx %p run %p tile %u -> %u\n",
+ 		__func__, chan->ic_task, ctx, run, tile, dst_tile);
+ 
++	/* clear EOF irq mask */
++	ctx->eof_mask = 0;
++
+ 	if (ipu_rot_mode_is_irt(ctx->rot_mode)) {
+ 		/* swap width/height for resizer */
+ 		dest_width = d_image->tile[dst_tile].height;
+@@ -1615,7 +1634,7 @@ static bool ic_settings_changed(struct ipu_image_convert_ctx *ctx)
+ }
+ 
+ /* hold irqlock when calling */
+-static irqreturn_t do_irq(struct ipu_image_convert_run *run)
++static irqreturn_t do_tile_complete(struct ipu_image_convert_run *run)
+ {
+ 	struct ipu_image_convert_ctx *ctx = run->ctx;
+ 	struct ipu_image_convert_chan *chan = ctx->chan;
+@@ -1700,6 +1719,7 @@ static irqreturn_t do_irq(struct ipu_image_convert_run *run)
+ 		ctx->cur_buf_num ^= 1;
+ 	}
+ 
++	ctx->eof_mask = 0; /* clear EOF irq mask for next tile */
+ 	ctx->next_tile++;
+ 	return IRQ_HANDLED;
+ done:
+@@ -1709,13 +1729,15 @@ done:
+ 	return IRQ_WAKE_THREAD;
+ }
+ 
+-static irqreturn_t norotate_irq(int irq, void *data)
++static irqreturn_t eof_irq(int irq, void *data)
+ {
+ 	struct ipu_image_convert_chan *chan = data;
++	struct ipu_image_convert_priv *priv = chan->priv;
+ 	struct ipu_image_convert_ctx *ctx;
+ 	struct ipu_image_convert_run *run;
++	irqreturn_t ret = IRQ_HANDLED;
++	bool tile_complete = false;
+ 	unsigned long flags;
+-	irqreturn_t ret;
+ 
+ 	spin_lock_irqsave(&chan->irqlock, flags);
+ 
+@@ -1728,46 +1750,33 @@ static irqreturn_t norotate_irq(int irq, void *data)
+ 
+ 	ctx = run->ctx;
+ 
+-	if (ipu_rot_mode_is_irt(ctx->rot_mode)) {
+-		/* this is a rotation operation, just ignore */
+-		spin_unlock_irqrestore(&chan->irqlock, flags);
+-		return IRQ_HANDLED;
+-	}
+-
+-	ret = do_irq(run);
+-out:
+-	spin_unlock_irqrestore(&chan->irqlock, flags);
+-	return ret;
+-}
+-
+-static irqreturn_t rotate_irq(int irq, void *data)
+-{
+-	struct ipu_image_convert_chan *chan = data;
+-	struct ipu_image_convert_priv *priv = chan->priv;
+-	struct ipu_image_convert_ctx *ctx;
+-	struct ipu_image_convert_run *run;
+-	unsigned long flags;
+-	irqreturn_t ret;
+-
+-	spin_lock_irqsave(&chan->irqlock, flags);
+-
+-	/* get current run and its context */
+-	run = chan->current_run;
+-	if (!run) {
++	if (irq == chan->in_eof_irq) {
++		ctx->eof_mask |= EOF_IRQ_IN;
++	} else if (irq == chan->out_eof_irq) {
++		ctx->eof_mask |= EOF_IRQ_OUT;
++	} else if (irq == chan->rot_in_eof_irq ||
++		   irq == chan->rot_out_eof_irq) {
++		if (!ipu_rot_mode_is_irt(ctx->rot_mode)) {
++			/* this was NOT a rotation op, shouldn't happen */
++			dev_err(priv->ipu->dev,
++				"Unexpected rotation interrupt\n");
++			goto out;
++		}
++		ctx->eof_mask |= (irq == chan->rot_in_eof_irq) ?
++			EOF_IRQ_ROT_IN : EOF_IRQ_ROT_OUT;
++	} else {
++		dev_err(priv->ipu->dev, "Received unknown irq %d\n", irq);
+ 		ret = IRQ_NONE;
+ 		goto out;
+ 	}
+ 
+-	ctx = run->ctx;
+-
+-	if (!ipu_rot_mode_is_irt(ctx->rot_mode)) {
+-		/* this was NOT a rotation operation, shouldn't happen */
+-		dev_err(priv->ipu->dev, "Unexpected rotation interrupt\n");
+-		spin_unlock_irqrestore(&chan->irqlock, flags);
+-		return IRQ_HANDLED;
+-	}
++	if (ipu_rot_mode_is_irt(ctx->rot_mode))
++		tile_complete = (ctx->eof_mask == EOF_IRQ_ROT_COMPLETE);
++	else
++		tile_complete = (ctx->eof_mask == EOF_IRQ_COMPLETE);
+ 
+-	ret = do_irq(run);
++	if (tile_complete)
++		ret = do_tile_complete(run);
+ out:
+ 	spin_unlock_irqrestore(&chan->irqlock, flags);
+ 	return ret;
+@@ -1801,6 +1810,10 @@ static void force_abort(struct ipu_image_convert_ctx *ctx)
+ 
+ static void release_ipu_resources(struct ipu_image_convert_chan *chan)
+ {
++	if (chan->in_eof_irq >= 0)
++		free_irq(chan->in_eof_irq, chan);
++	if (chan->rot_in_eof_irq >= 0)
++		free_irq(chan->rot_in_eof_irq, chan);
+ 	if (chan->out_eof_irq >= 0)
+ 		free_irq(chan->out_eof_irq, chan);
+ 	if (chan->rot_out_eof_irq >= 0)
+@@ -1819,7 +1832,27 @@ static void release_ipu_resources(struct ipu_image_convert_chan *chan)
+ 
+ 	chan->in_chan = chan->out_chan = chan->rotation_in_chan =
+ 		chan->rotation_out_chan = NULL;
+-	chan->out_eof_irq = chan->rot_out_eof_irq = -1;
++	chan->in_eof_irq = -1;
++	chan->rot_in_eof_irq = -1;
++	chan->out_eof_irq = -1;
++	chan->rot_out_eof_irq = -1;
++}
++
++static int get_eof_irq(struct ipu_image_convert_chan *chan,
++		       struct ipuv3_channel *channel)
++{
++	struct ipu_image_convert_priv *priv = chan->priv;
++	int ret, irq;
++
++	irq = ipu_idmac_channel_irq(priv->ipu, channel, IPU_IRQ_EOF);
++
++	ret = request_threaded_irq(irq, eof_irq, do_bh, 0, "ipu-ic", chan);
++	if (ret < 0) {
++		dev_err(priv->ipu->dev, "could not acquire irq %d\n", irq);
++		return ret;
++	}
++
++	return irq;
+ }
+ 
+ static int get_ipu_resources(struct ipu_image_convert_chan *chan)
+@@ -1855,31 +1888,33 @@ static int get_ipu_resources(struct ipu_image_convert_chan *chan)
+ 	}
+ 
+ 	/* acquire the EOF interrupts */
+-	chan->out_eof_irq = ipu_idmac_channel_irq(priv->ipu,
+-						  chan->out_chan,
+-						  IPU_IRQ_EOF);
++	ret = get_eof_irq(chan, chan->in_chan);
++	if (ret < 0) {
++		chan->in_eof_irq = -1;
++		goto err;
++	}
++	chan->in_eof_irq = ret;
+ 
+-	ret = request_threaded_irq(chan->out_eof_irq, norotate_irq, do_bh,
+-				   0, "ipu-ic", chan);
++	ret = get_eof_irq(chan, chan->rotation_in_chan);
+ 	if (ret < 0) {
+-		dev_err(priv->ipu->dev, "could not acquire irq %d\n",
+-			 chan->out_eof_irq);
+-		chan->out_eof_irq = -1;
++		chan->rot_in_eof_irq = -1;
+ 		goto err;
+ 	}
++	chan->rot_in_eof_irq = ret;
+ 
+-	chan->rot_out_eof_irq = ipu_idmac_channel_irq(priv->ipu,
+-						     chan->rotation_out_chan,
+-						     IPU_IRQ_EOF);
++	ret = get_eof_irq(chan, chan->out_chan);
++	if (ret < 0) {
++		chan->out_eof_irq = -1;
++		goto err;
++	}
++	chan->out_eof_irq = ret;
+ 
+-	ret = request_threaded_irq(chan->rot_out_eof_irq, rotate_irq, do_bh,
+-				   0, "ipu-ic", chan);
++	ret = get_eof_irq(chan, chan->rotation_out_chan);
+ 	if (ret < 0) {
+-		dev_err(priv->ipu->dev, "could not acquire irq %d\n",
+-			chan->rot_out_eof_irq);
+ 		chan->rot_out_eof_irq = -1;
+ 		goto err;
+ 	}
++	chan->rot_out_eof_irq = ret;
+ 
+ 	return 0;
+ err:
+@@ -2458,6 +2493,8 @@ int ipu_image_convert_init(struct ipu_soc *ipu, struct device *dev)
+ 		chan->ic_task = i;
+ 		chan->priv = priv;
+ 		chan->dma_ch = &image_convert_dma_chan[i];
++		chan->in_eof_irq = -1;
++		chan->rot_in_eof_irq = -1;
+ 		chan->out_eof_irq = -1;
+ 		chan->rot_out_eof_irq = -1;
+ 
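
[Note, not part of the patch] The ipu-image-convert rework above merges four EOF interrupt handlers into one and declares a tile complete only once every expected EOF bit has accumulated. A hedged, simplified sketch of that accumulation idiom (example_note_eof() and the two-bit mask are hypothetical):

#include <linux/bits.h>
#include <linux/types.h>

enum eof_irq_mask {
	EOF_IRQ_IN  = BIT(0),
	EOF_IRQ_OUT = BIT(1),
};

#define EOF_IRQ_COMPLETE (EOF_IRQ_IN | EOF_IRQ_OUT)

/* Returns true once both EOFs for the current tile have been seen. */
static bool example_note_eof(unsigned int *mask, enum eof_irq_mask bit)
{
	*mask |= bit;
	if (*mask != EOF_IRQ_COMPLETE)
		return false;	/* still waiting for the other channel */

	*mask = 0;		/* re-arm for the next tile */
	return true;
}
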
+diff --git a/drivers/i2c/busses/i2c-bcm-iproc.c b/drivers/i2c/busses/i2c-bcm-iproc.c
+index 8a3c98866fb7e..688e928188214 100644
+--- a/drivers/i2c/busses/i2c-bcm-iproc.c
++++ b/drivers/i2c/busses/i2c-bcm-iproc.c
+@@ -1078,7 +1078,7 @@ static int bcm_iproc_i2c_unreg_slave(struct i2c_client *slave)
+ 	if (!iproc_i2c->slave)
+ 		return -EINVAL;
+ 
+-	iproc_i2c->slave = NULL;
++	disable_irq(iproc_i2c->irq);
+ 
+ 	/* disable all slave interrupts */
+ 	tmp = iproc_i2c_rd_reg(iproc_i2c, IE_OFFSET);
+@@ -1091,6 +1091,17 @@ static int bcm_iproc_i2c_unreg_slave(struct i2c_client *slave)
+ 	tmp &= ~BIT(S_CFG_EN_NIC_SMB_ADDR3_SHIFT);
+ 	iproc_i2c_wr_reg(iproc_i2c, S_CFG_SMBUS_ADDR_OFFSET, tmp);
+ 
++	/* flush TX/RX FIFOs */
++	tmp = (BIT(S_FIFO_RX_FLUSH_SHIFT) | BIT(S_FIFO_TX_FLUSH_SHIFT));
++	iproc_i2c_wr_reg(iproc_i2c, S_FIFO_CTRL_OFFSET, tmp);
++
++	/* clear all pending slave interrupts */
++	iproc_i2c_wr_reg(iproc_i2c, IS_OFFSET, ISR_MASK_SLAVE);
++
++	iproc_i2c->slave = NULL;
++
++	enable_irq(iproc_i2c->irq);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index 2e3e1bb750134..9e883474db8ce 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -583,13 +583,14 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
+ 			rcar_i2c_write(priv, ICSIER, SDR | SSR | SAR);
+ 		}
+ 
+-		rcar_i2c_write(priv, ICSSR, ~SAR & 0xff);
++		/* Clear SSR, too, because of old STOPs to other clients than us */
++		rcar_i2c_write(priv, ICSSR, ~(SAR | SSR) & 0xff);
+ 	}
+ 
+ 	/* master sent stop */
+ 	if (ssr_filtered & SSR) {
+ 		i2c_slave_event(priv->slave, I2C_SLAVE_STOP, &value);
+-		rcar_i2c_write(priv, ICSIER, SAR | SSR);
++		rcar_i2c_write(priv, ICSIER, SAR);
+ 		rcar_i2c_write(priv, ICSSR, ~SSR & 0xff);
+ 	}
+ 
+@@ -853,7 +854,7 @@ static int rcar_reg_slave(struct i2c_client *slave)
+ 	priv->slave = slave;
+ 	rcar_i2c_write(priv, ICSAR, slave->addr);
+ 	rcar_i2c_write(priv, ICSSR, 0);
+-	rcar_i2c_write(priv, ICSIER, SAR | SSR);
++	rcar_i2c_write(priv, ICSIER, SAR);
+ 	rcar_i2c_write(priv, ICSCR, SIE | SDBS);
+ 
+ 	return 0;
+@@ -865,12 +866,14 @@ static int rcar_unreg_slave(struct i2c_client *slave)
+ 
+ 	WARN_ON(!priv->slave);
+ 
+-	/* disable irqs and ensure none is running before clearing ptr */
++	/* ensure no irq is running before clearing ptr */
++	disable_irq(priv->irq);
+ 	rcar_i2c_write(priv, ICSIER, 0);
+-	rcar_i2c_write(priv, ICSCR, 0);
++	rcar_i2c_write(priv, ICSSR, 0);
++	enable_irq(priv->irq);
++	rcar_i2c_write(priv, ICSCR, SDBS);
+ 	rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */
+ 
+-	synchronize_irq(priv->irq);
+ 	priv->slave = NULL;
+ 
+ 	pm_runtime_put(rcar_i2c_priv_to_dev(priv));
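
[Note, not part of the patch] Both I2C hunks above quiesce the interrupt line before tearing down slave state, so the handler can never observe a half-cleared pointer. A minimal sketch, assuming a hypothetical example_dev:

#include <linux/interrupt.h>

struct example_dev {
	int irq;
	void *slave;	/* hypothetical slave handle */
};

static void example_unreg_slave(struct example_dev *dev)
{
	/* masks the line and waits for an in-flight handler to finish */
	disable_irq(dev->irq);
	dev->slave = NULL;
	enable_irq(dev->irq);
}
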
+diff --git a/drivers/iio/dac/ad5592r-base.c b/drivers/iio/dac/ad5592r-base.c
+index 410e90e5f75fb..5226c258856b2 100644
+--- a/drivers/iio/dac/ad5592r-base.c
++++ b/drivers/iio/dac/ad5592r-base.c
+@@ -413,7 +413,7 @@ static int ad5592r_read_raw(struct iio_dev *iio_dev,
+ 			s64 tmp = *val * (3767897513LL / 25LL);
+ 			*val = div_s64_rem(tmp, 1000000000LL, val2);
+ 
+-			ret = IIO_VAL_INT_PLUS_MICRO;
++			return IIO_VAL_INT_PLUS_MICRO;
+ 		} else {
+ 			int mult;
+ 
+@@ -444,7 +444,7 @@ static int ad5592r_read_raw(struct iio_dev *iio_dev,
+ 		ret =  IIO_VAL_INT;
+ 		break;
+ 	default:
+-		ret = -EINVAL;
++		return -EINVAL;
+ 	}
+ 
+ unlock:
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+index b56df409ed0fa..529970195b398 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+@@ -436,8 +436,7 @@ int st_lsm6dsx_update_watermark(struct st_lsm6dsx_sensor *sensor,
+ 				u16 watermark);
+ int st_lsm6dsx_update_fifo(struct st_lsm6dsx_sensor *sensor, bool enable);
+ int st_lsm6dsx_flush_fifo(struct st_lsm6dsx_hw *hw);
+-int st_lsm6dsx_set_fifo_mode(struct st_lsm6dsx_hw *hw,
+-			     enum st_lsm6dsx_fifo_mode fifo_mode);
++int st_lsm6dsx_resume_fifo(struct st_lsm6dsx_hw *hw);
+ int st_lsm6dsx_read_fifo(struct st_lsm6dsx_hw *hw);
+ int st_lsm6dsx_read_tagged_fifo(struct st_lsm6dsx_hw *hw);
+ int st_lsm6dsx_check_odr(struct st_lsm6dsx_sensor *sensor, u32 odr, u8 *val);
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+index afd00daeefb2d..7de10bd636ea0 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+@@ -184,8 +184,8 @@ static int st_lsm6dsx_update_decimators(struct st_lsm6dsx_hw *hw)
+ 	return err;
+ }
+ 
+-int st_lsm6dsx_set_fifo_mode(struct st_lsm6dsx_hw *hw,
+-			     enum st_lsm6dsx_fifo_mode fifo_mode)
++static int st_lsm6dsx_set_fifo_mode(struct st_lsm6dsx_hw *hw,
++				    enum st_lsm6dsx_fifo_mode fifo_mode)
+ {
+ 	unsigned int data;
+ 
+@@ -302,6 +302,18 @@ static int st_lsm6dsx_reset_hw_ts(struct st_lsm6dsx_hw *hw)
+ 	return 0;
+ }
+ 
++int st_lsm6dsx_resume_fifo(struct st_lsm6dsx_hw *hw)
++{
++	int err;
++
++	/* reset hw ts counter */
++	err = st_lsm6dsx_reset_hw_ts(hw);
++	if (err < 0)
++		return err;
++
++	return st_lsm6dsx_set_fifo_mode(hw, ST_LSM6DSX_FIFO_CONT);
++}
++
+ /*
+  * Set max bulk read to ST_LSM6DSX_MAX_WORD_LEN/ST_LSM6DSX_MAX_TAGGED_WORD_LEN
+  * in order to avoid a kmalloc for each bus access
+@@ -675,12 +687,7 @@ int st_lsm6dsx_update_fifo(struct st_lsm6dsx_sensor *sensor, bool enable)
+ 		goto out;
+ 
+ 	if (fifo_mask) {
+-		/* reset hw ts counter */
+-		err = st_lsm6dsx_reset_hw_ts(hw);
+-		if (err < 0)
+-			goto out;
+-
+-		err = st_lsm6dsx_set_fifo_mode(hw, ST_LSM6DSX_FIFO_CONT);
++		err = st_lsm6dsx_resume_fifo(hw);
+ 		if (err < 0)
+ 			goto out;
+ 	}
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+index 0b776cb91928b..b3a08e3e23592 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+@@ -2458,7 +2458,7 @@ static int __maybe_unused st_lsm6dsx_resume(struct device *dev)
+ 	}
+ 
+ 	if (hw->fifo_mask)
+-		err = st_lsm6dsx_set_fifo_mode(hw, ST_LSM6DSX_FIFO_CONT);
++		err = st_lsm6dsx_resume_fifo(hw);
+ 
+ 	return err;
+ }
+diff --git a/drivers/infiniband/core/counters.c b/drivers/infiniband/core/counters.c
+index 738d1faf4bba5..417ebf4d8ba9b 100644
+--- a/drivers/infiniband/core/counters.c
++++ b/drivers/infiniband/core/counters.c
+@@ -288,7 +288,7 @@ int rdma_counter_bind_qp_auto(struct ib_qp *qp, u8 port)
+ 	struct rdma_counter *counter;
+ 	int ret;
+ 
+-	if (!qp->res.valid)
++	if (!qp->res.valid || rdma_is_kernel_res(&qp->res))
+ 		return 0;
+ 
+ 	if (!rdma_is_port_valid(dev, port))
+@@ -483,7 +483,7 @@ int rdma_counter_bind_qpn(struct ib_device *dev, u8 port,
+ 		goto err;
+ 	}
+ 
+-	if (counter->res.task != qp->res.task) {
++	if (rdma_is_kernel_res(&counter->res) != rdma_is_kernel_res(&qp->res)) {
+ 		ret = -EINVAL;
+ 		goto err_task;
+ 	}
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index b48b3f6e632d4..557644dcc9237 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -770,6 +770,7 @@ static int ib_uverbs_reg_mr(struct uverbs_attr_bundle *attrs)
+ 	mr->uobject = uobj;
+ 	atomic_inc(&pd->usecnt);
+ 	mr->res.type = RDMA_RESTRACK_MR;
++	mr->iova = cmd.hca_va;
+ 	rdma_restrack_uadd(&mr->res);
+ 
+ 	uobj->object = mr;
+@@ -861,6 +862,9 @@ static int ib_uverbs_rereg_mr(struct uverbs_attr_bundle *attrs)
+ 		atomic_dec(&old_pd->usecnt);
+ 	}
+ 
++	if (cmd.flags & IB_MR_REREG_TRANS)
++		mr->iova = cmd.hca_va;
++
+ 	memset(&resp, 0, sizeof(resp));
+ 	resp.lkey      = mr->lkey;
+ 	resp.rkey      = mr->rkey;
+diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c
+index 962dc97a8ff2b..1e4f4e5255980 100644
+--- a/drivers/infiniband/hw/cxgb4/mem.c
++++ b/drivers/infiniband/hw/cxgb4/mem.c
+@@ -399,7 +399,6 @@ static int finish_mem_reg(struct c4iw_mr *mhp, u32 stag)
+ 	mmid = stag >> 8;
+ 	mhp->ibmr.rkey = mhp->ibmr.lkey = stag;
+ 	mhp->ibmr.length = mhp->attr.len;
+-	mhp->ibmr.iova = mhp->attr.va_fbo;
+ 	mhp->ibmr.page_size = 1U << (mhp->attr.page_size + 12);
+ 	pr_debug("mmid 0x%x mhp %p\n", mmid, mhp);
+ 	return xa_insert_irq(&mhp->rhp->mrs, mmid, mhp, GFP_KERNEL);
+diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
+index 7e0b205c05eb3..d7c78f841d2f5 100644
+--- a/drivers/infiniband/hw/mlx4/mr.c
++++ b/drivers/infiniband/hw/mlx4/mr.c
+@@ -439,7 +439,6 @@ struct ib_mr *mlx4_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+ 
+ 	mr->ibmr.rkey = mr->ibmr.lkey = mr->mmr.key;
+ 	mr->ibmr.length = length;
+-	mr->ibmr.iova = virt_addr;
+ 	mr->ibmr.page_size = 1U << shift;
+ 
+ 	return &mr->ibmr;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib.h b/drivers/infiniband/ulp/ipoib/ipoib.h
+index 9a3379c49541f..9ce6a36fe48ed 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib.h
++++ b/drivers/infiniband/ulp/ipoib/ipoib.h
+@@ -515,7 +515,7 @@ void ipoib_ib_dev_cleanup(struct net_device *dev);
+ 
+ int ipoib_ib_dev_open_default(struct net_device *dev);
+ int ipoib_ib_dev_open(struct net_device *dev);
+-int ipoib_ib_dev_stop(struct net_device *dev);
++void ipoib_ib_dev_stop(struct net_device *dev);
+ void ipoib_ib_dev_up(struct net_device *dev);
+ void ipoib_ib_dev_down(struct net_device *dev);
+ int ipoib_ib_dev_stop_default(struct net_device *dev);
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+index da3c5315bbb51..494f413dc3c6c 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+@@ -670,13 +670,12 @@ int ipoib_send(struct net_device *dev, struct sk_buff *skb,
+ 	return rc;
+ }
+ 
+-static void __ipoib_reap_ah(struct net_device *dev)
++static void ipoib_reap_dead_ahs(struct ipoib_dev_priv *priv)
+ {
+-	struct ipoib_dev_priv *priv = ipoib_priv(dev);
+ 	struct ipoib_ah *ah, *tah;
+ 	unsigned long flags;
+ 
+-	netif_tx_lock_bh(dev);
++	netif_tx_lock_bh(priv->dev);
+ 	spin_lock_irqsave(&priv->lock, flags);
+ 
+ 	list_for_each_entry_safe(ah, tah, &priv->dead_ahs, list)
+@@ -687,37 +686,37 @@ static void __ipoib_reap_ah(struct net_device *dev)
+ 		}
+ 
+ 	spin_unlock_irqrestore(&priv->lock, flags);
+-	netif_tx_unlock_bh(dev);
++	netif_tx_unlock_bh(priv->dev);
+ }
+ 
+ void ipoib_reap_ah(struct work_struct *work)
+ {
+ 	struct ipoib_dev_priv *priv =
+ 		container_of(work, struct ipoib_dev_priv, ah_reap_task.work);
+-	struct net_device *dev = priv->dev;
+ 
+-	__ipoib_reap_ah(dev);
++	ipoib_reap_dead_ahs(priv);
+ 
+ 	if (!test_bit(IPOIB_STOP_REAPER, &priv->flags))
+ 		queue_delayed_work(priv->wq, &priv->ah_reap_task,
+ 				   round_jiffies_relative(HZ));
+ }
+ 
+-static void ipoib_flush_ah(struct net_device *dev)
++static void ipoib_start_ah_reaper(struct ipoib_dev_priv *priv)
+ {
+-	struct ipoib_dev_priv *priv = ipoib_priv(dev);
+-
+-	cancel_delayed_work(&priv->ah_reap_task);
+-	flush_workqueue(priv->wq);
+-	ipoib_reap_ah(&priv->ah_reap_task.work);
++	clear_bit(IPOIB_STOP_REAPER, &priv->flags);
++	queue_delayed_work(priv->wq, &priv->ah_reap_task,
++			   round_jiffies_relative(HZ));
+ }
+ 
+-static void ipoib_stop_ah(struct net_device *dev)
++static void ipoib_stop_ah_reaper(struct ipoib_dev_priv *priv)
+ {
+-	struct ipoib_dev_priv *priv = ipoib_priv(dev);
+-
+ 	set_bit(IPOIB_STOP_REAPER, &priv->flags);
+-	ipoib_flush_ah(dev);
++	cancel_delayed_work(&priv->ah_reap_task);
++	/*
++	 * After ipoib_stop_ah_reaper() we always go through
++	 * ipoib_reap_dead_ahs() which ensures the work is really stopped and
++	 * does a final flush of the dead_ahs list.
++	 */
+ }
+ 
+ static int recvs_pending(struct net_device *dev)
+@@ -846,18 +845,6 @@ timeout:
+ 	return 0;
+ }
+ 
+-int ipoib_ib_dev_stop(struct net_device *dev)
+-{
+-	struct ipoib_dev_priv *priv = ipoib_priv(dev);
+-
+-	priv->rn_ops->ndo_stop(dev);
+-
+-	clear_bit(IPOIB_FLAG_INITIALIZED, &priv->flags);
+-	ipoib_flush_ah(dev);
+-
+-	return 0;
+-}
+-
+ int ipoib_ib_dev_open_default(struct net_device *dev)
+ {
+ 	struct ipoib_dev_priv *priv = ipoib_priv(dev);
+@@ -901,10 +888,7 @@ int ipoib_ib_dev_open(struct net_device *dev)
+ 		return -1;
+ 	}
+ 
+-	clear_bit(IPOIB_STOP_REAPER, &priv->flags);
+-	queue_delayed_work(priv->wq, &priv->ah_reap_task,
+-			   round_jiffies_relative(HZ));
+-
++	ipoib_start_ah_reaper(priv);
+ 	if (priv->rn_ops->ndo_open(dev)) {
+ 		pr_warn("%s: Failed to open dev\n", dev->name);
+ 		goto dev_stop;
+@@ -915,13 +899,20 @@ int ipoib_ib_dev_open(struct net_device *dev)
+ 	return 0;
+ 
+ dev_stop:
+-	set_bit(IPOIB_STOP_REAPER, &priv->flags);
+-	cancel_delayed_work(&priv->ah_reap_task);
+-	set_bit(IPOIB_FLAG_INITIALIZED, &priv->flags);
+-	ipoib_ib_dev_stop(dev);
++	ipoib_stop_ah_reaper(priv);
+ 	return -1;
+ }
+ 
++void ipoib_ib_dev_stop(struct net_device *dev)
++{
++	struct ipoib_dev_priv *priv = ipoib_priv(dev);
++
++	priv->rn_ops->ndo_stop(dev);
++
++	clear_bit(IPOIB_FLAG_INITIALIZED, &priv->flags);
++	ipoib_stop_ah_reaper(priv);
++}
++
+ void ipoib_pkey_dev_check_presence(struct net_device *dev)
+ {
+ 	struct ipoib_dev_priv *priv = ipoib_priv(dev);
+@@ -1232,7 +1223,7 @@ static void __ipoib_ib_dev_flush(struct ipoib_dev_priv *priv,
+ 		ipoib_mcast_dev_flush(dev);
+ 		if (oper_up)
+ 			set_bit(IPOIB_FLAG_OPER_UP, &priv->flags);
+-		ipoib_flush_ah(dev);
++		ipoib_reap_dead_ahs(priv);
+ 	}
+ 
+ 	if (level >= IPOIB_FLUSH_NORMAL)
+@@ -1307,7 +1298,7 @@ void ipoib_ib_dev_cleanup(struct net_device *dev)
+ 	 * the neighbor garbage collection is stopped and reaped.
+ 	 * That should all be done now, so make a final ah flush.
+ 	 */
+-	ipoib_stop_ah(dev);
++	ipoib_reap_dead_ahs(priv);
+ 
+ 	clear_bit(IPOIB_PKEY_ASSIGNED, &priv->flags);
+ 
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+index 3cfb682b91b0a..ef60e8e4ae67b 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+@@ -1976,6 +1976,8 @@ static void ipoib_ndo_uninit(struct net_device *dev)
+ 
+ 	/* no more works over the priv->wq */
+ 	if (priv->wq) {
++		/* See ipoib_mcast_carrier_on_task() */
++		WARN_ON(test_bit(IPOIB_FLAG_OPER_UP, &priv->flags));
+ 		flush_workqueue(priv->wq);
+ 		destroy_workqueue(priv->wq);
+ 		priv->wq = NULL;
+diff --git a/drivers/input/mouse/sentelic.c b/drivers/input/mouse/sentelic.c
+index e99d9bf1a267d..e78c4c7eda34d 100644
+--- a/drivers/input/mouse/sentelic.c
++++ b/drivers/input/mouse/sentelic.c
+@@ -441,7 +441,7 @@ static ssize_t fsp_attr_set_setreg(struct psmouse *psmouse, void *data,
+ 
+ 	fsp_reg_write_enable(psmouse, false);
+ 
+-	return count;
++	return retval;
+ }
+ 
+ PSMOUSE_DEFINE_WO_ATTR(setreg, S_IWUSR, NULL, fsp_attr_set_setreg);
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index 16f47041f1bf5..ec23a2f0b5f8d 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -1459,9 +1459,26 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
+ 	 * Max Invs Pending (MIP) is set to 0 for now until we have DIT in
+ 	 * ECAP.
+ 	 */
+-	desc.qw1 |= addr & ~mask;
+-	if (size_order)
++	if (addr & GENMASK_ULL(size_order + VTD_PAGE_SHIFT, 0))
++		pr_warn_ratelimited("Invalidate non-aligned address %llx, order %d\n",
++				    addr, size_order);
++
++	/* Take page address */
++	desc.qw1 = QI_DEV_EIOTLB_ADDR(addr);
++
++	if (size_order) {
++		/*
++		 * Existing 0s in address below size_order may be the least
++		 * Any 0 bits in the address below size_order would encode a
++		 * smaller invalidation size than desired, so set them all
++		 * to 1.
++		desc.qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT - 1,
++					VTD_PAGE_SHIFT);
++		/* Clear size_order bit to indicate size */
++		desc.qw1 &= ~mask;
++		/* Set the S bit to indicate flushing more than 1 page */
+ 		desc.qw1 |= QI_DEV_EIOTLB_SIZE;
++	}
+ 
+ 	qi_submit_sync(iommu, &desc, 1, 0);
+ }
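
[Note, not part of the patch] A worked illustration of the mask arithmetic above, assuming 4 KiB pages (VTD_PAGE_SHIFT == 12); example_encode() is a simplified, hypothetical reduction that omits the size-bit clear and the S bit:

#include <linux/bits.h>
#include <linux/types.h>

/* For size_order == 2 (a four-page invalidation):
 *   GENMASK_ULL(2 + 12 - 1, 12) == GENMASK_ULL(13, 12) == 0x3000
 * so both low page-address bits are forced to 1; the size bit itself
 * is then cleared via `mask` in the real code to encode the range.
 */
static u64 example_encode(u64 addr, unsigned int size_order)
{
	u64 qw1 = addr & ~GENMASK_ULL(11, 0);	/* keep the page address */

	if (size_order)
		qw1 |= GENMASK_ULL(size_order + 12 - 1, 12);
	return qw1;
}
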
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index a459eac967545..04e82f1756010 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -2565,7 +2565,7 @@ static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu,
+ 			}
+ 
+ 			if (info->ats_supported && ecap_prs(iommu->ecap) &&
+-			    pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI))
++			    pci_pri_supported(pdev))
+ 				info->pri_supported = 1;
+ 		}
+ 	}
+@@ -5452,13 +5452,12 @@ intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
+ 
+ 		switch (BIT(cache_type)) {
+ 		case IOMMU_CACHE_INV_TYPE_IOTLB:
++			/* HW will ignore low-order address bits per the address mask */
+ 			if (inv_info->granularity == IOMMU_INV_GRANU_ADDR &&
+ 			    size &&
+ 			    (inv_info->addr_info.addr & ((BIT(VTD_PAGE_SHIFT + size)) - 1))) {
+-				pr_err_ratelimited("Address out of range, 0x%llx, size order %llu\n",
++				pr_err_ratelimited("User address not aligned, 0x%llx, size order %llu\n",
+ 						   inv_info->addr_info.addr, size);
+-				ret = -ERANGE;
+-				goto out_unlock;
+ 			}
+ 
+ 			/*
+diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
+index 6c87c807a0abb..d386853121a26 100644
+--- a/drivers/iommu/intel/svm.c
++++ b/drivers/iommu/intel/svm.c
+@@ -277,20 +277,16 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev,
+ 			goto out;
+ 		}
+ 
++		/*
++		 * Do not allow multiple bindings of the same device-PASID since
++		 * there is only one set of SL page tables per PASID. We may
++		 * revisit this once sharing a PGD across domains is supported.
++		 */
+ 		for_each_svm_dev(sdev, svm, dev) {
+-			/*
+-			 * For devices with aux domains, we should allow
+-			 * multiple bind calls with the same PASID and pdev.
+-			 */
+-			if (iommu_dev_feature_enabled(dev,
+-						      IOMMU_DEV_FEAT_AUX)) {
+-				sdev->users++;
+-			} else {
+-				dev_warn_ratelimited(dev,
+-						     "Already bound with PASID %u\n",
+-						     svm->pasid);
+-				ret = -EBUSY;
+-			}
++			dev_warn_ratelimited(dev,
++					     "Already bound with PASID %u\n",
++					     svm->pasid);
++			ret = -EBUSY;
+ 			goto out;
+ 		}
+ 	} else {
+diff --git a/drivers/iommu/omap-iommu-debug.c b/drivers/iommu/omap-iommu-debug.c
+index 8e19bfa94121e..a99afb5d9011c 100644
+--- a/drivers/iommu/omap-iommu-debug.c
++++ b/drivers/iommu/omap-iommu-debug.c
+@@ -98,8 +98,11 @@ static ssize_t debug_read_regs(struct file *file, char __user *userbuf,
+ 	mutex_lock(&iommu_debug_lock);
+ 
+ 	bytes = omap_iommu_dump_ctx(obj, p, count);
++	if (bytes < 0)
++		goto err;
+ 	bytes = simple_read_from_buffer(userbuf, count, ppos, buf, bytes);
+ 
++err:
+ 	mutex_unlock(&iommu_debug_lock);
+ 	kfree(buf);
+ 
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index da44bfa48bc25..95f097448f971 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -3523,6 +3523,7 @@ static int its_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
+ 	msi_alloc_info_t *info = args;
+ 	struct its_device *its_dev = info->scratchpad[0].ptr;
+ 	struct its_node *its = its_dev->its;
++	struct irq_data *irqd;
+ 	irq_hw_number_t hwirq;
+ 	int err;
+ 	int i;
+@@ -3542,7 +3543,9 @@ static int its_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
+ 
+ 		irq_domain_set_hwirq_and_chip(domain, virq + i,
+ 					      hwirq + i, &its_irq_chip, its_dev);
+-		irqd_set_single_target(irq_desc_get_irq_data(irq_to_desc(virq + i)));
++		irqd = irq_get_irq_data(virq + i);
++		irqd_set_single_target(irqd);
++		irqd_set_affinity_on_activate(irqd);
+ 		pr_debug("ID:%d pID:%d vID:%d\n",
+ 			 (int)(hwirq + i - its_dev->event_map.lpi_base),
+ 			 (int)(hwirq + i), virq + i);
+@@ -4087,18 +4090,22 @@ static void its_vpe_4_1_deschedule(struct its_vpe *vpe,
+ static void its_vpe_4_1_invall(struct its_vpe *vpe)
+ {
+ 	void __iomem *rdbase;
++	unsigned long flags;
+ 	u64 val;
++	int cpu;
+ 
+ 	val  = GICR_INVALLR_V;
+ 	val |= FIELD_PREP(GICR_INVALLR_VPEID, vpe->vpe_id);
+ 
+ 	/* Target the redistributor this vPE is currently known on */
+-	raw_spin_lock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock);
+-	rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
++	cpu = vpe_to_cpuid_lock(vpe, &flags);
++	raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock);
++	rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
+ 	gic_write_lpir(val, rdbase + GICR_INVALLR);
+ 
+ 	wait_for_syncr(rdbase);
+-	raw_spin_unlock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock);
++	raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);
++	vpe_to_cpuid_unlock(vpe, flags);
+ }
+ 
+ static int its_vpe_4_1_set_vcpu_affinity(struct irq_data *d, void *vcpu_info)
+diff --git a/drivers/irqchip/irq-loongson-liointc.c b/drivers/irqchip/irq-loongson-liointc.c
+index 6ef86a334c62d..9ed1bc4736634 100644
+--- a/drivers/irqchip/irq-loongson-liointc.c
++++ b/drivers/irqchip/irq-loongson-liointc.c
+@@ -60,7 +60,7 @@ static void liointc_chained_handle_irq(struct irq_desc *desc)
+ 	if (!pending) {
+ 		/* Always blame LPC IRQ if we have that bug */
+ 		if (handler->priv->has_lpc_irq_errata &&
+-			(handler->parent_int_map & ~gc->mask_cache &
++			(handler->parent_int_map & gc->mask_cache &
+ 			BIT(LIOINTC_ERRATA_IRQ)))
+ 			pending = BIT(LIOINTC_ERRATA_IRQ);
+ 		else
+@@ -132,11 +132,11 @@ static void liointc_resume(struct irq_chip_generic *gc)
+ 	irq_gc_lock_irqsave(gc, flags);
+ 	/* Disable all at first */
+ 	writel(0xffffffff, gc->reg_base + LIOINTC_REG_INTC_DISABLE);
+-	/* Revert map cache */
++	/* Restore map cache */
+ 	for (i = 0; i < LIOINTC_CHIP_IRQ; i++)
+ 		writeb(priv->map_cache[i], gc->reg_base + i);
+-	/* Revert mask cache */
+-	writel(~gc->mask_cache, gc->reg_base + LIOINTC_REG_INTC_ENABLE);
++	/* Restore mask cache */
++	writel(gc->mask_cache, gc->reg_base + LIOINTC_REG_INTC_ENABLE);
+ 	irq_gc_unlock_irqrestore(gc, flags);
+ }
+ 
+@@ -244,7 +244,7 @@ int __init liointc_of_init(struct device_node *node,
+ 	ct->chip.irq_mask_ack = irq_gc_mask_disable_reg;
+ 	ct->chip.irq_set_type = liointc_set_type;
+ 
+-	gc->mask_cache = 0xffffffff;
++	gc->mask_cache = 0;
+ 	priv->gc = gc;
+ 
+ 	for (i = 0; i < LIOINTC_NUM_PARENT; i++) {
+diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
+index 221e0191b6870..80e3c4813fb05 100644
+--- a/drivers/md/bcache/bcache.h
++++ b/drivers/md/bcache/bcache.h
+@@ -264,7 +264,7 @@ struct bcache_device {
+ #define BCACHE_DEV_UNLINK_DONE		2
+ #define BCACHE_DEV_WB_RUNNING		3
+ #define BCACHE_DEV_RATE_DW_RUNNING	4
+-	unsigned int		nr_stripes;
++	int			nr_stripes;
+ 	unsigned int		stripe_size;
+ 	atomic_t		*stripe_sectors_dirty;
+ 	unsigned long		*full_dirty_stripes;
+diff --git a/drivers/md/bcache/bset.c b/drivers/md/bcache/bset.c
+index 4995fcaefe297..67a2c47f4201a 100644
+--- a/drivers/md/bcache/bset.c
++++ b/drivers/md/bcache/bset.c
+@@ -322,7 +322,7 @@ int bch_btree_keys_alloc(struct btree_keys *b,
+ 
+ 	b->page_order = page_order;
+ 
+-	t->data = (void *) __get_free_pages(gfp, b->page_order);
++	t->data = (void *) __get_free_pages(__GFP_COMP|gfp, b->page_order);
+ 	if (!t->data)
+ 		goto err;
+ 
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index 6548a601edf0e..dd116c83de80c 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -785,7 +785,7 @@ int bch_btree_cache_alloc(struct cache_set *c)
+ 	mutex_init(&c->verify_lock);
+ 
+ 	c->verify_ondisk = (void *)
+-		__get_free_pages(GFP_KERNEL, ilog2(bucket_pages(c)));
++		__get_free_pages(GFP_KERNEL|__GFP_COMP, ilog2(bucket_pages(c)));
+ 
+ 	c->verify_data = mca_bucket_alloc(c, &ZERO_KEY, GFP_KERNEL);
+ 
+diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
+index 90aac4e2333f5..d8586b6ccb76a 100644
+--- a/drivers/md/bcache/journal.c
++++ b/drivers/md/bcache/journal.c
+@@ -999,8 +999,8 @@ int bch_journal_alloc(struct cache_set *c)
+ 	j->w[1].c = c;
+ 
+ 	if (!(init_fifo(&j->pin, JOURNAL_PIN, GFP_KERNEL)) ||
+-	    !(j->w[0].data = (void *) __get_free_pages(GFP_KERNEL, JSET_BITS)) ||
+-	    !(j->w[1].data = (void *) __get_free_pages(GFP_KERNEL, JSET_BITS)))
++	    !(j->w[0].data = (void *) __get_free_pages(GFP_KERNEL|__GFP_COMP, JSET_BITS)) ||
++	    !(j->w[1].data = (void *) __get_free_pages(GFP_KERNEL|__GFP_COMP, JSET_BITS)))
+ 		return -ENOMEM;
+ 
+ 	return 0;
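
[Note, not part of the patch] Several bcache hunks in this patch add __GFP_COMP to multi-order page allocations. A hedged note on why, with a hypothetical helper:

#include <linux/gfp.h>

/* __GFP_COMP turns an order-N allocation into a compound page, so
 * get_page()/put_page() through any constituent page resolves to the
 * head page and stays balanced (the block layer does this when such
 * buffers are fed into bios).
 */
static void *example_alloc_buf(unsigned int order)
{
	return (void *)__get_free_pages(GFP_KERNEL | __GFP_COMP, order);
}
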
+diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
+index 7acf024e99f35..9cc044293acd9 100644
+--- a/drivers/md/bcache/request.c
++++ b/drivers/md/bcache/request.c
+@@ -668,7 +668,9 @@ static void backing_request_endio(struct bio *bio)
+ static void bio_complete(struct search *s)
+ {
+ 	if (s->orig_bio) {
+-		bio_end_io_acct(s->orig_bio, s->start_time);
++		/* Count on the bcache device */
++		disk_end_io_acct(s->d->disk, bio_op(s->orig_bio), s->start_time);
++
+ 		trace_bcache_request_end(s->d, s->orig_bio);
+ 		s->orig_bio->bi_status = s->iop.status;
+ 		bio_endio(s->orig_bio);
+@@ -728,8 +730,8 @@ static inline struct search *search_alloc(struct bio *bio,
+ 	s->recoverable		= 1;
+ 	s->write		= op_is_write(bio_op(bio));
+ 	s->read_dirty_data	= 0;
+-	s->start_time		= bio_start_io_acct(bio);
+-
++	/* Count on the bcache device */
++	s->start_time		= disk_start_io_acct(d->disk, bio_sectors(bio), bio_op(bio));
+ 	s->iop.c		= d->c;
+ 	s->iop.bio		= NULL;
+ 	s->iop.inode		= d->id;
+@@ -1080,7 +1082,8 @@ static void detached_dev_end_io(struct bio *bio)
+ 	bio->bi_end_io = ddip->bi_end_io;
+ 	bio->bi_private = ddip->bi_private;
+ 
+-	bio_end_io_acct(bio, ddip->start_time);
++	/* Count on the bcache device */
++	disk_end_io_acct(ddip->d->disk, bio_op(bio), ddip->start_time);
+ 
+ 	if (bio->bi_status) {
+ 		struct cached_dev *dc = container_of(ddip->d,
+@@ -1105,7 +1108,8 @@ static void detached_dev_do_request(struct bcache_device *d, struct bio *bio)
+ 	 */
+ 	ddip = kzalloc(sizeof(struct detached_dev_io_private), GFP_NOIO);
+ 	ddip->d = d;
+-	ddip->start_time = bio_start_io_acct(bio);
++	/* Count on the bcache device */
++	ddip->start_time = disk_start_io_acct(d->disk, bio_sectors(bio), bio_op(bio));
+ 	ddip->bi_end_io = bio->bi_end_io;
+ 	ddip->bi_private = bio->bi_private;
+ 	bio->bi_end_io = detached_dev_end_io;
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 445bb84ee27f8..e15d078230311 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -826,19 +826,19 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
+ 	struct request_queue *q;
+ 	const size_t max_stripes = min_t(size_t, INT_MAX,
+ 					 SIZE_MAX / sizeof(atomic_t));
+-	size_t n;
++	uint64_t n;
+ 	int idx;
+ 
+ 	if (!d->stripe_size)
+ 		d->stripe_size = 1 << 31;
+ 
+-	d->nr_stripes = DIV_ROUND_UP_ULL(sectors, d->stripe_size);
+-
+-	if (!d->nr_stripes || d->nr_stripes > max_stripes) {
+-		pr_err("nr_stripes too large or invalid: %u (start sector beyond end of disk?)\n",
+-			(unsigned int)d->nr_stripes);
++	n = DIV_ROUND_UP_ULL(sectors, d->stripe_size);
++	if (!n || n > max_stripes) {
++		pr_err("nr_stripes too large or invalid: %llu (start sector beyond end of disk?)\n",
++			n);
+ 		return -ENOMEM;
+ 	}
++	d->nr_stripes = n;
+ 
+ 	n = d->nr_stripes * sizeof(atomic_t);
+ 	d->stripe_sectors_dirty = kvzalloc(n, GFP_KERNEL);
+@@ -1776,7 +1776,7 @@ void bch_cache_set_unregister(struct cache_set *c)
+ }
+ 
+ #define alloc_bucket_pages(gfp, c)			\
+-	((void *) __get_free_pages(__GFP_ZERO|gfp, ilog2(bucket_pages(c))))
++	((void *) __get_free_pages(__GFP_ZERO|__GFP_COMP|gfp, ilog2(bucket_pages(c))))
+ 
+ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
+ {
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index 1cf1e5016cb9d..ab101ad55459a 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -523,15 +523,19 @@ void bcache_dev_sectors_dirty_add(struct cache_set *c, unsigned int inode,
+ 				  uint64_t offset, int nr_sectors)
+ {
+ 	struct bcache_device *d = c->devices[inode];
+-	unsigned int stripe_offset, stripe, sectors_dirty;
++	unsigned int stripe_offset, sectors_dirty;
++	int stripe;
+ 
+ 	if (!d)
+ 		return;
+ 
++	stripe = offset_to_stripe(d, offset);
++	if (stripe < 0)
++		return;
++
+ 	if (UUID_FLASH_ONLY(&c->uuids[inode]))
+ 		atomic_long_add(nr_sectors, &c->flash_dev_dirty_sectors);
+ 
+-	stripe = offset_to_stripe(d, offset);
+ 	stripe_offset = offset & (d->stripe_size - 1);
+ 
+ 	while (nr_sectors) {
+@@ -571,12 +575,12 @@ static bool dirty_pred(struct keybuf *buf, struct bkey *k)
+ static void refill_full_stripes(struct cached_dev *dc)
+ {
+ 	struct keybuf *buf = &dc->writeback_keys;
+-	unsigned int start_stripe, stripe, next_stripe;
++	unsigned int start_stripe, next_stripe;
++	int stripe;
+ 	bool wrapped = false;
+ 
+ 	stripe = offset_to_stripe(&dc->disk, KEY_OFFSET(&buf->last_scanned));
+-
+-	if (stripe >= dc->disk.nr_stripes)
++	if (stripe < 0)
+ 		stripe = 0;
+ 
+ 	start_stripe = stripe;
+diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
+index b029843ce5b6f..3f1230e22de01 100644
+--- a/drivers/md/bcache/writeback.h
++++ b/drivers/md/bcache/writeback.h
+@@ -52,10 +52,22 @@ static inline uint64_t bcache_dev_sectors_dirty(struct bcache_device *d)
+ 	return ret;
+ }
+ 
+-static inline unsigned int offset_to_stripe(struct bcache_device *d,
++static inline int offset_to_stripe(struct bcache_device *d,
+ 					uint64_t offset)
+ {
+ 	do_div(offset, d->stripe_size);
++
++	/* d->nr_stripes is in range [1, INT_MAX] */
++	if (unlikely(offset >= d->nr_stripes)) {
++		pr_err("Invalid stripe %llu (>= nr_stripes %d).\n",
++			offset, d->nr_stripes);
++		return -EINVAL;
++	}
++
++	/*
++	 * Here offset is definitely smaller than INT_MAX,
++	 * so returning it as an int can never overflow.
++	 */
+ 	return offset;
+ }
+ 
+@@ -63,7 +75,10 @@ static inline bool bcache_dev_stripe_dirty(struct cached_dev *dc,
+ 					   uint64_t offset,
+ 					   unsigned int nr_sectors)
+ {
+-	unsigned int stripe = offset_to_stripe(&dc->disk, offset);
++	int stripe = offset_to_stripe(&dc->disk, offset);
++
++	if (stripe < 0)
++		return false;
+ 
+ 	while (1) {
+ 		if (atomic_read(dc->disk.stripe_sectors_dirty + stripe))
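
[Note, not part of the patch] The bcache changes above make stripe indices signed so offset_to_stripe() can report -EINVAL. A sketch of the caller pattern this enables (example_mark_dirty() is hypothetical; the bcache types and helpers are reused from the patch):

static void example_mark_dirty(struct bcache_device *d, uint64_t offset)
{
	int stripe = offset_to_stripe(d, offset);

	if (stripe < 0)		/* offset beyond end of device */
		return;

	atomic_inc(d->stripe_sectors_dirty + stripe);
}
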
+diff --git a/drivers/md/dm-ebs-target.c b/drivers/md/dm-ebs-target.c
+index 44451276f1281..cb85610527c2c 100644
+--- a/drivers/md/dm-ebs-target.c
++++ b/drivers/md/dm-ebs-target.c
+@@ -363,7 +363,7 @@ static int ebs_map(struct dm_target *ti, struct bio *bio)
+ 	bio_set_dev(bio, ec->dev->bdev);
+ 	bio->bi_iter.bi_sector = ec->start + dm_target_offset(ti, bio->bi_iter.bi_sector);
+ 
+-	if (unlikely(bio->bi_opf & REQ_OP_FLUSH))
++	if (unlikely(bio_op(bio) == REQ_OP_FLUSH))
+ 		return DM_MAPIO_REMAPPED;
+ 	/*
+ 	 * Only queue for bufio processing in case of partial or overlapping buffers
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index 85e0daabad49c..20745e2e34b94 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -70,9 +70,6 @@ void dm_start_queue(struct request_queue *q)
+ 
+ void dm_stop_queue(struct request_queue *q)
+ {
+-	if (blk_mq_queue_stopped(q))
+-		return;
+-
+ 	blk_mq_quiesce_queue(q);
+ }
+ 
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 5b9de2f71bb07..88b391ff9bea7 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -504,7 +504,8 @@ static int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
+ 		}
+ 
+ 		args.tgt = tgt;
+-		ret = tgt->type->report_zones(tgt, &args, nr_zones);
++		ret = tgt->type->report_zones(tgt, &args,
++					      nr_zones - args.zone_idx);
+ 		if (ret < 0)
+ 			goto out;
+ 	} while (args.zone_idx < nr_zones &&
+diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
+index 73fd50e779754..d50737ec40394 100644
+--- a/drivers/md/md-cluster.c
++++ b/drivers/md/md-cluster.c
+@@ -1139,6 +1139,7 @@ static int resize_bitmaps(struct mddev *mddev, sector_t newsize, sector_t oldsiz
+ 		bitmap = get_bitmap_from_slot(mddev, i);
+ 		if (IS_ERR(bitmap)) {
+ 			pr_err("can't get bitmap from slot %d\n", i);
++			bitmap = NULL;
+ 			goto out;
+ 		}
+ 		counts = &bitmap->counts;
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index ab8067f9ce8c6..43eedf7adc79c 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -3607,6 +3607,7 @@ static int need_this_block(struct stripe_head *sh, struct stripe_head_state *s,
+ 	 * is missing/faulty, then we need to read everything we can.
+ 	 */
+ 	if (sh->raid_conf->level != 6 &&
++	    sh->raid_conf->rmw_level != PARITY_DISABLE_RMW &&
+ 	    sh->sector < sh->raid_conf->mddev->recovery_cp)
+ 		/* reconstruct-write isn't being forced */
+ 		return 0;
+@@ -4842,7 +4843,7 @@ static void handle_stripe(struct stripe_head *sh)
+ 	 * or to load a block that is being partially written.
+ 	 */
+ 	if (s.to_read || s.non_overwrite
+-	    || (conf->level == 6 && s.to_write && s.failed)
++	    || (s.to_write && s.failed)
+ 	    || (s.syncing && (s.uptodate + s.compute < disks))
+ 	    || s.replacing
+ 	    || s.expanding)
+diff --git a/drivers/media/platform/qcom/venus/pm_helpers.c b/drivers/media/platform/qcom/venus/pm_helpers.c
+index abf93158857b9..531e7a41658f7 100644
+--- a/drivers/media/platform/qcom/venus/pm_helpers.c
++++ b/drivers/media/platform/qcom/venus/pm_helpers.c
+@@ -496,6 +496,10 @@ min_loaded_core(struct venus_inst *inst, u32 *min_coreid, u32 *min_load)
+ 	list_for_each_entry(inst_pos, &core->instances, list) {
+ 		if (inst_pos == inst)
+ 			continue;
++
++		if (inst_pos->state != INST_START)
++			continue;
++
+ 		vpp_freq = inst_pos->clk_data.codec_freq_data->vpp_freq;
+ 		coreid = inst_pos->clk_data.core_id;
+ 
+diff --git a/drivers/media/platform/rockchip/rga/rga-hw.c b/drivers/media/platform/rockchip/rga/rga-hw.c
+index 4be6dcf292fff..aaa96f256356b 100644
+--- a/drivers/media/platform/rockchip/rga/rga-hw.c
++++ b/drivers/media/platform/rockchip/rga/rga-hw.c
+@@ -200,22 +200,25 @@ static void rga_cmd_set_trans_info(struct rga_ctx *ctx)
+ 	dst_info.data.format = ctx->out.fmt->hw_format;
+ 	dst_info.data.swap = ctx->out.fmt->color_swap;
+ 
+-	if (ctx->in.fmt->hw_format >= RGA_COLOR_FMT_YUV422SP) {
+-		if (ctx->out.fmt->hw_format < RGA_COLOR_FMT_YUV422SP) {
+-			switch (ctx->in.colorspace) {
+-			case V4L2_COLORSPACE_REC709:
+-				src_info.data.csc_mode =
+-					RGA_SRC_CSC_MODE_BT709_R0;
+-				break;
+-			default:
+-				src_info.data.csc_mode =
+-					RGA_SRC_CSC_MODE_BT601_R0;
+-				break;
+-			}
++	/*
++	 * CSC mode must only be set when the colorspace families differ between
++	 * input and output. It must remain unset (zeroed) if both are the same.
++	 */
++
++	if (RGA_COLOR_FMT_IS_YUV(ctx->in.fmt->hw_format) &&
++	    RGA_COLOR_FMT_IS_RGB(ctx->out.fmt->hw_format)) {
++		switch (ctx->in.colorspace) {
++		case V4L2_COLORSPACE_REC709:
++			src_info.data.csc_mode = RGA_SRC_CSC_MODE_BT709_R0;
++			break;
++		default:
++			src_info.data.csc_mode = RGA_SRC_CSC_MODE_BT601_R0;
++			break;
+ 		}
+ 	}
+ 
+-	if (ctx->out.fmt->hw_format >= RGA_COLOR_FMT_YUV422SP) {
++	if (RGA_COLOR_FMT_IS_RGB(ctx->in.fmt->hw_format) &&
++	    RGA_COLOR_FMT_IS_YUV(ctx->out.fmt->hw_format)) {
+ 		switch (ctx->out.colorspace) {
+ 		case V4L2_COLORSPACE_REC709:
+ 			dst_info.data.csc_mode = RGA_SRC_CSC_MODE_BT709_R0;
+diff --git a/drivers/media/platform/rockchip/rga/rga-hw.h b/drivers/media/platform/rockchip/rga/rga-hw.h
+index 96cb0314dfa70..e8917e5630a48 100644
+--- a/drivers/media/platform/rockchip/rga/rga-hw.h
++++ b/drivers/media/platform/rockchip/rga/rga-hw.h
+@@ -95,6 +95,11 @@
+ #define RGA_COLOR_FMT_CP_8BPP 15
+ #define RGA_COLOR_FMT_MASK 15
+ 
++#define RGA_COLOR_FMT_IS_YUV(fmt) \
++	(((fmt) >= RGA_COLOR_FMT_YUV422SP) && ((fmt) < RGA_COLOR_FMT_CP_1BPP))
++#define RGA_COLOR_FMT_IS_RGB(fmt) \
++	((fmt) < RGA_COLOR_FMT_YUV422SP)
++
+ #define RGA_COLOR_NONE_SWAP 0
+ #define RGA_COLOR_RB_SWAP 1
+ #define RGA_COLOR_ALPHA_SWAP 2
+diff --git a/drivers/media/platform/vsp1/vsp1_dl.c b/drivers/media/platform/vsp1/vsp1_dl.c
+index d7b43037e500a..e07b135613eb5 100644
+--- a/drivers/media/platform/vsp1/vsp1_dl.c
++++ b/drivers/media/platform/vsp1/vsp1_dl.c
+@@ -431,6 +431,8 @@ vsp1_dl_cmd_pool_create(struct vsp1_device *vsp1, enum vsp1_extcmd_type type,
+ 	if (!pool)
+ 		return NULL;
+ 
++	pool->vsp1 = vsp1;
++
+ 	spin_lock_init(&pool->lock);
+ 	INIT_LIST_HEAD(&pool->free);
+ 
+diff --git a/drivers/mfd/arizona-core.c b/drivers/mfd/arizona-core.c
+index f73cf76d1373d..a5e443110fc3d 100644
+--- a/drivers/mfd/arizona-core.c
++++ b/drivers/mfd/arizona-core.c
+@@ -1426,6 +1426,15 @@ err_irq:
+ 	arizona_irq_exit(arizona);
+ err_pm:
+ 	pm_runtime_disable(arizona->dev);
++
++	switch (arizona->pdata.clk32k_src) {
++	case ARIZONA_32KZ_MCLK1:
++	case ARIZONA_32KZ_MCLK2:
++		arizona_clk32k_disable(arizona);
++		break;
++	default:
++		break;
++	}
+ err_reset:
+ 	arizona_enable_reset(arizona);
+ 	regulator_disable(arizona->dcvdd);
+@@ -1448,6 +1457,15 @@ int arizona_dev_exit(struct arizona *arizona)
+ 	regulator_disable(arizona->dcvdd);
+ 	regulator_put(arizona->dcvdd);
+ 
++	switch (arizona->pdata.clk32k_src) {
++	case ARIZONA_32KZ_MCLK1:
++	case ARIZONA_32KZ_MCLK2:
++		arizona_clk32k_disable(arizona);
++		break;
++	default:
++		break;
++	}
++
+ 	mfd_remove_devices(arizona->dev);
+ 	arizona_free_irq(arizona, ARIZONA_IRQ_UNDERCLOCKED, arizona);
+ 	arizona_free_irq(arizona, ARIZONA_IRQ_OVERCLOCKED, arizona);
+diff --git a/drivers/mfd/dln2.c b/drivers/mfd/dln2.c
+index 39276fa626d2b..83e676a096dc1 100644
+--- a/drivers/mfd/dln2.c
++++ b/drivers/mfd/dln2.c
+@@ -287,7 +287,11 @@ static void dln2_rx(struct urb *urb)
+ 	len = urb->actual_length - sizeof(struct dln2_header);
+ 
+ 	if (handle == DLN2_HANDLE_EVENT) {
++		unsigned long flags;
++
++		spin_lock_irqsave(&dln2->event_cb_lock, flags);
+ 		dln2_run_event_callbacks(dln2, id, echo, data, len);
++		spin_unlock_irqrestore(&dln2->event_cb_lock, flags);
+ 	} else {
+ 		/* URB will be re-submitted in _dln2_transfer (free_rx_slot) */
+ 		if (dln2_transfer_complete(dln2, urb, handle, echo))
+diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+index 47ac53e912411..201b8ed37f2e0 100644
+--- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c
++++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+@@ -229,15 +229,12 @@ static void renesas_sdhi_internal_dmac_issue_tasklet_fn(unsigned long arg)
+ 					    DTRAN_CTRL_DM_START);
+ }
+ 
+-static void renesas_sdhi_internal_dmac_complete_tasklet_fn(unsigned long arg)
++static bool renesas_sdhi_internal_dmac_complete(struct tmio_mmc_host *host)
+ {
+-	struct tmio_mmc_host *host = (struct tmio_mmc_host *)arg;
+ 	enum dma_data_direction dir;
+ 
+-	spin_lock_irq(&host->lock);
+-
+ 	if (!host->data)
+-		goto out;
++		return false;
+ 
+ 	if (host->data->flags & MMC_DATA_READ)
+ 		dir = DMA_FROM_DEVICE;
+@@ -250,6 +247,17 @@ static void renesas_sdhi_internal_dmac_complete_tasklet_fn(unsigned long arg)
+ 	if (dir == DMA_FROM_DEVICE)
+ 		clear_bit(SDHI_INTERNAL_DMAC_RX_IN_USE, &global_flags);
+ 
++	return true;
++}
++
++static void renesas_sdhi_internal_dmac_complete_tasklet_fn(unsigned long arg)
++{
++	struct tmio_mmc_host *host = (struct tmio_mmc_host *)arg;
++
++	spin_lock_irq(&host->lock);
++	if (!renesas_sdhi_internal_dmac_complete(host))
++		goto out;
++
+ 	tmio_mmc_do_data_irq(host);
+ out:
+ 	spin_unlock_irq(&host->lock);
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index ac934a715a194..a4033d32a7103 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -1918,6 +1918,22 @@ static int brcmnand_edu_trans(struct brcmnand_host *host, u64 addr, u32 *buf,
+ 	edu_writel(ctrl, EDU_STOP, 0); /* force stop */
+ 	edu_readl(ctrl, EDU_STOP);
+ 
++	if (!ret && edu_cmd == EDU_CMD_READ) {
++		u64 err_addr = 0;
++
++		/*
++		 * check for ECC errors here, subpage ECC errors are
++		 * retained in ECC error address register
++		 */
++		err_addr = brcmnand_get_uncorrecc_addr(ctrl);
++		if (!err_addr) {
++			err_addr = brcmnand_get_correcc_addr(ctrl);
++			if (err_addr)
++				ret = -EUCLEAN;
++		} else
++			ret = -EBADMSG;
++	}
++
+ 	return ret;
+ }
+ 
+@@ -2124,6 +2140,7 @@ static int brcmnand_read(struct mtd_info *mtd, struct nand_chip *chip,
+ 	u64 err_addr = 0;
+ 	int err;
+ 	bool retry = true;
++	bool edu_err = false;
+ 
+ 	dev_dbg(ctrl->dev, "read %llx -> %p\n", (unsigned long long)addr, buf);
+ 
+@@ -2141,6 +2158,10 @@ try_dmaread:
+ 			else
+ 				return -EIO;
+ 		}
++
++		if (has_edu(ctrl) && err_addr)
++			edu_err = true;
++
+ 	} else {
+ 		if (oob)
+ 			memset(oob, 0x99, mtd->oobsize);
+@@ -2188,6 +2209,11 @@ try_dmaread:
+ 	if (mtd_is_bitflip(err)) {
+ 		unsigned int corrected = brcmnand_count_corrected(ctrl);
+ 
++		/* in case of an EDU correctable error, read again using PIO */
++		if (edu_err)
++			err = brcmnand_read_by_pio(mtd, chip, addr, trans, buf,
++						   oob, &err_addr);
++
+ 		dev_dbg(ctrl->dev, "corrected error at 0x%llx\n",
+ 			(unsigned long long)err_addr);
+ 		mtd->ecc_stats.corrected += corrected;
+diff --git a/drivers/mtd/nand/raw/fsl_upm.c b/drivers/mtd/nand/raw/fsl_upm.c
+index 627deb26db512..76d1032cd35e8 100644
+--- a/drivers/mtd/nand/raw/fsl_upm.c
++++ b/drivers/mtd/nand/raw/fsl_upm.c
+@@ -62,7 +62,6 @@ static int fun_chip_ready(struct nand_chip *chip)
+ static void fun_wait_rnb(struct fsl_upm_nand *fun)
+ {
+ 	if (fun->rnb_gpio[fun->mchip_number] >= 0) {
+-		struct mtd_info *mtd = nand_to_mtd(&fun->chip);
+ 		int cnt = 1000000;
+ 
+ 		while (--cnt && !fun_chip_ready(&fun->chip))
+diff --git a/drivers/mtd/ubi/fastmap-wl.c b/drivers/mtd/ubi/fastmap-wl.c
+index 83afc00e365a5..28f55f9cf7153 100644
+--- a/drivers/mtd/ubi/fastmap-wl.c
++++ b/drivers/mtd/ubi/fastmap-wl.c
+@@ -381,6 +381,11 @@ static void ubi_fastmap_close(struct ubi_device *ubi)
+ 		ubi->fm_anchor = NULL;
+ 	}
+ 
++	if (ubi->fm_next_anchor) {
++		return_unused_peb(ubi, ubi->fm_next_anchor);
++		ubi->fm_next_anchor = NULL;
++	}
++
+ 	if (ubi->fm) {
+ 		for (i = 0; i < ubi->fm->used_blocks; i++)
+ 			kfree(ubi->fm->e[i]);
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index 27636063ed1bb..42cac572f82dc 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -1086,7 +1086,8 @@ static int __erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk)
+ 	if (!err) {
+ 		spin_lock(&ubi->wl_lock);
+ 
+-		if (!ubi->fm_next_anchor && e->pnum < UBI_FM_MAX_START) {
++		if (!ubi->fm_disabled && !ubi->fm_next_anchor &&
++		    e->pnum < UBI_FM_MAX_START) {
+ 			/* Abort anchor production, if needed it will be
+ 			 * enabled again in the wear leveling started below.
+ 			 */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h
+index cd33c2e6ca5fc..f48eb66ed021b 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h
+@@ -43,7 +43,7 @@ struct qmem {
+ 	void            *base;
+ 	dma_addr_t	iova;
+ 	int		alloc_sz;
+-	u8		entry_sz;
++	u16		entry_sz;
+ 	u8		align;
+ 	u32		qsize;
+ };
+diff --git a/drivers/net/ethernet/qualcomm/emac/emac.c b/drivers/net/ethernet/qualcomm/emac/emac.c
+index 20b1b43a0e393..1166b98d8bb2c 100644
+--- a/drivers/net/ethernet/qualcomm/emac/emac.c
++++ b/drivers/net/ethernet/qualcomm/emac/emac.c
+@@ -474,13 +474,24 @@ static int emac_clks_phase1_init(struct platform_device *pdev,
+ 
+ 	ret = clk_prepare_enable(adpt->clk[EMAC_CLK_CFG_AHB]);
+ 	if (ret)
+-		return ret;
++		goto disable_clk_axi;
+ 
+ 	ret = clk_set_rate(adpt->clk[EMAC_CLK_HIGH_SPEED], 19200000);
+ 	if (ret)
+-		return ret;
++		goto disable_clk_cfg_ahb;
++
++	ret = clk_prepare_enable(adpt->clk[EMAC_CLK_HIGH_SPEED]);
++	if (ret)
++		goto disable_clk_cfg_ahb;
+ 
+-	return clk_prepare_enable(adpt->clk[EMAC_CLK_HIGH_SPEED]);
++	return 0;
++
++disable_clk_cfg_ahb:
++	clk_disable_unprepare(adpt->clk[EMAC_CLK_CFG_AHB]);
++disable_clk_axi:
++	clk_disable_unprepare(adpt->clk[EMAC_CLK_AXI]);
++
++	return ret;
+ }
+ 
+ /* Enable clocks; needs emac_clks_phase1_init to be called before */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+index 02102c781a8cf..bf3250e0e59ca 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+@@ -351,6 +351,7 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ 	plat_dat->has_gmac = true;
+ 	plat_dat->bsp_priv = gmac;
+ 	plat_dat->fix_mac_speed = ipq806x_gmac_fix_mac_speed;
++	plat_dat->multicast_filter_bins = 0;
+ 
+ 	err = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+ 	if (err)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
+index efc6ec1b8027c..fc8759f146c7c 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
+@@ -164,6 +164,9 @@ static void dwmac1000_set_filter(struct mac_device_info *hw,
+ 		value = GMAC_FRAME_FILTER_PR | GMAC_FRAME_FILTER_PCF;
+ 	} else if (dev->flags & IFF_ALLMULTI) {
+ 		value = GMAC_FRAME_FILTER_PM;	/* pass all multi */
++	} else if (!netdev_mc_empty(dev) && (mcbitslog2 == 0)) {
++		/* Fall back to all multicast if we've no filter */
++		value = GMAC_FRAME_FILTER_PM;
+ 	} else if (!netdev_mc_empty(dev)) {
+ 		struct netdev_hw_addr *ha;
+ 
+diff --git a/drivers/net/wireless/realtek/rtw88/pci.c b/drivers/net/wireless/realtek/rtw88/pci.c
+index 8228db9a5fc86..3413973bc4750 100644
+--- a/drivers/net/wireless/realtek/rtw88/pci.c
++++ b/drivers/net/wireless/realtek/rtw88/pci.c
+@@ -14,8 +14,11 @@
+ #include "debug.h"
+ 
+ static bool rtw_disable_msi;
++static bool rtw_pci_disable_aspm;
+ module_param_named(disable_msi, rtw_disable_msi, bool, 0644);
++module_param_named(disable_aspm, rtw_pci_disable_aspm, bool, 0644);
+ MODULE_PARM_DESC(disable_msi, "Set Y to disable MSI interrupt support");
++MODULE_PARM_DESC(disable_aspm, "Set Y to disable PCI ASPM support");
+ 
+ static u32 rtw_pci_tx_queue_idx_addr[] = {
+ 	[RTW_TX_QUEUE_BK]	= RTK_PCI_TXBD_IDX_BKQ,
+@@ -1200,6 +1203,9 @@ static void rtw_pci_clkreq_set(struct rtw_dev *rtwdev, bool enable)
+ 	u8 value;
+ 	int ret;
+ 
++	if (rtw_pci_disable_aspm)
++		return;
++
+ 	ret = rtw_dbi_read8(rtwdev, RTK_PCIE_LINK_CFG, &value);
+ 	if (ret) {
+ 		rtw_err(rtwdev, "failed to read CLKREQ_L1, ret=%d", ret);
+@@ -1219,6 +1225,9 @@ static void rtw_pci_aspm_set(struct rtw_dev *rtwdev, bool enable)
+ 	u8 value;
+ 	int ret;
+ 
++	if (rtw_pci_disable_aspm)
++		return;
++
+ 	ret = rtw_dbi_read8(rtwdev, RTK_PCIE_LINK_CFG, &value);
+ 	if (ret) {
+ 		rtw_err(rtwdev, "failed to read ASPM, ret=%d", ret);
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index 09087c38fabdc..955265656b96c 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -1037,9 +1037,25 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm,
+ 		dimm_name = "bus";
+ 	}
+ 
++	/* Validate command family support against bus declared support */
+ 	if (cmd == ND_CMD_CALL) {
++		unsigned long *mask;
++
+ 		if (copy_from_user(&pkg, p, sizeof(pkg)))
+ 			return -EFAULT;
++
++		if (nvdimm) {
++			if (pkg.nd_family > NVDIMM_FAMILY_MAX)
++				return -EINVAL;
++			mask = &nd_desc->dimm_family_mask;
++		} else {
++			if (pkg.nd_family > NVDIMM_BUS_FAMILY_MAX)
++				return -EINVAL;
++			mask = &nd_desc->bus_family_mask;
++		}
++
++		if (!test_bit(pkg.nd_family, mask))
++			return -EINVAL;
+ 	}
+ 
+ 	if (!desc ||
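
[Note, not part of the patch] The ioctl hunk above validates the requested command family against a capability bitmap the bus or dimm declared. A hypothetical condensation of the check:

#include <linux/bitops.h>
#include <linux/types.h>

static bool example_family_ok(unsigned long family,
			      const unsigned long *mask,
			      unsigned long family_max)
{
	/* reject families never advertised by the bus or dimm */
	return family <= family_max && test_bit(family, mask);
}
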
+diff --git a/drivers/nvdimm/security.c b/drivers/nvdimm/security.c
+index 4cef69bd3c1bd..4b80150e4afa7 100644
+--- a/drivers/nvdimm/security.c
++++ b/drivers/nvdimm/security.c
+@@ -450,14 +450,19 @@ void __nvdimm_security_overwrite_query(struct nvdimm *nvdimm)
+ 	else
+ 		dev_dbg(&nvdimm->dev, "overwrite completed\n");
+ 
+-	if (nvdimm->sec.overwrite_state)
+-		sysfs_notify_dirent(nvdimm->sec.overwrite_state);
++	/*
++	 * Mark the overwrite work done and update dimm security flags,
++	 * then send a sysfs event notification to wake up userspace
++	 * poll threads to pick up the changed state.
++	 */
+ 	nvdimm->sec.overwrite_tmo = 0;
+ 	clear_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags);
+ 	clear_bit(NDD_WORK_PENDING, &nvdimm->flags);
+-	put_device(&nvdimm->dev);
+ 	nvdimm->sec.flags = nvdimm_security_flags(nvdimm, NVDIMM_USER);
+-	nvdimm->sec.flags = nvdimm_security_flags(nvdimm, NVDIMM_MASTER);
++	nvdimm->sec.ext_flags = nvdimm_security_flags(nvdimm, NVDIMM_MASTER);
++	if (nvdimm->sec.overwrite_state)
++		sysfs_notify_dirent(nvdimm->sec.overwrite_state);
++	put_device(&nvdimm->dev);
+ }
+ 
+ void nvdimm_security_overwrite_query(struct work_struct *work)
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 4ee2330c603e7..f38548e6d55ec 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -362,6 +362,16 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
+ 			break;
+ 		}
+ 		break;
++	case NVME_CTRL_DELETING_NOIO:
++		switch (old_state) {
++		case NVME_CTRL_DELETING:
++		case NVME_CTRL_DEAD:
++			changed = true;
++			/* FALLTHRU */
++		default:
++			break;
++		}
++		break;
+ 	case NVME_CTRL_DEAD:
+ 		switch (old_state) {
+ 		case NVME_CTRL_DELETING:
+@@ -399,6 +409,7 @@ static bool nvme_state_terminal(struct nvme_ctrl *ctrl)
+ 	case NVME_CTRL_CONNECTING:
+ 		return false;
+ 	case NVME_CTRL_DELETING:
++	case NVME_CTRL_DELETING_NOIO:
+ 	case NVME_CTRL_DEAD:
+ 		return true;
+ 	default:
+@@ -3344,6 +3355,7 @@ static ssize_t nvme_sysfs_show_state(struct device *dev,
+ 		[NVME_CTRL_RESETTING]	= "resetting",
+ 		[NVME_CTRL_CONNECTING]	= "connecting",
+ 		[NVME_CTRL_DELETING]	= "deleting",
++		[NVME_CTRL_DELETING_NOIO]= "deleting (no IO)",
+ 		[NVME_CTRL_DEAD]	= "dead",
+ 	};
+ 
+@@ -3911,6 +3923,9 @@ void nvme_remove_namespaces(struct nvme_ctrl *ctrl)
+ 	if (ctrl->state == NVME_CTRL_DEAD)
+ 		nvme_kill_queues(ctrl);
+ 
++	/* this is a no-op when called from the controller reset handler */
++	nvme_change_ctrl_state(ctrl, NVME_CTRL_DELETING_NOIO);
++
+ 	down_write(&ctrl->namespaces_rwsem);
+ 	list_splice_init(&ctrl->namespaces, &ns_list);
+ 	up_write(&ctrl->namespaces_rwsem);
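
[The state-machine hunk above admits the new DELETING_NOIO state only from DELETING or DEAD. A self-contained sketch of that transition rule; the enum values mirror the patch, the rest is illustrative:]

#include <stdbool.h>

enum ctrl_state { LIVE, RESETTING, CONNECTING, DELETING, DELETING_NOIO, DEAD };

static bool may_enter_deleting_noio(enum ctrl_state old_state)
{
	switch (old_state) {
	case DELETING:
	case DEAD:
		return true;	/* mirrors the new case arm in nvme_change_ctrl_state() */
	default:
		return false;
	}
}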
+diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
+index 2a6c8190eeb76..4ec4829d62334 100644
+--- a/drivers/nvme/host/fabrics.c
++++ b/drivers/nvme/host/fabrics.c
+@@ -547,7 +547,7 @@ static struct nvmf_transport_ops *nvmf_lookup_transport(
+ blk_status_t nvmf_fail_nonready_command(struct nvme_ctrl *ctrl,
+ 		struct request *rq)
+ {
+-	if (ctrl->state != NVME_CTRL_DELETING &&
++	if (ctrl->state != NVME_CTRL_DELETING_NOIO &&
+ 	    ctrl->state != NVME_CTRL_DEAD &&
+ 	    !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
+ 		return BLK_STS_RESOURCE;
+diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
+index a0ec40ab62eeb..a9c1e3b4585ec 100644
+--- a/drivers/nvme/host/fabrics.h
++++ b/drivers/nvme/host/fabrics.h
+@@ -182,7 +182,8 @@ bool nvmf_ip_options_match(struct nvme_ctrl *ctrl,
+ static inline bool nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
+ 		bool queue_live)
+ {
+-	if (likely(ctrl->state == NVME_CTRL_LIVE))
++	if (likely(ctrl->state == NVME_CTRL_LIVE ||
++		   ctrl->state == NVME_CTRL_DELETING))
+ 		return true;
+ 	return __nvmf_check_ready(ctrl, rq, queue_live);
+ }
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index e999a8c4b7e87..549f5b0fb0b4b 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -825,6 +825,7 @@ nvme_fc_ctrl_connectivity_loss(struct nvme_fc_ctrl *ctrl)
+ 		break;
+ 
+ 	case NVME_CTRL_DELETING:
++	case NVME_CTRL_DELETING_NOIO:
+ 	default:
+ 		/* no action to take - let it delete */
+ 		break;
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 57d51148e71b6..2672953233434 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -167,9 +167,18 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl)
+ 
+ static bool nvme_path_is_disabled(struct nvme_ns *ns)
+ {
+-	return ns->ctrl->state != NVME_CTRL_LIVE ||
+-		test_bit(NVME_NS_ANA_PENDING, &ns->flags) ||
+-		test_bit(NVME_NS_REMOVING, &ns->flags);
++	/*
++	 * We don't treat NVME_CTRL_DELETING as a disabled path as I/O should
++	 * still be able to complete assuming that the controller is connected.
++	 * Otherwise it will fail immediately and return to the requeue list.
++	 */
++	if (ns->ctrl->state != NVME_CTRL_LIVE &&
++	    ns->ctrl->state != NVME_CTRL_DELETING)
++		return true;
++	if (test_bit(NVME_NS_ANA_PENDING, &ns->flags) ||
++	    test_bit(NVME_NS_REMOVING, &ns->flags))
++		return true;
++	return false;
+ }
+ 
+ static struct nvme_ns *__nvme_find_path(struct nvme_ns_head *head, int node)
+@@ -574,6 +583,9 @@ static void nvme_ana_work(struct work_struct *work)
+ {
+ 	struct nvme_ctrl *ctrl = container_of(work, struct nvme_ctrl, ana_work);
+ 
++	if (ctrl->state != NVME_CTRL_LIVE)
++		return;
++
+ 	nvme_read_ana_log(ctrl);
+ }
+ 
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 09ffc3246f60e..e268f1d7e1a0f 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -186,6 +186,7 @@ enum nvme_ctrl_state {
+ 	NVME_CTRL_RESETTING,
+ 	NVME_CTRL_CONNECTING,
+ 	NVME_CTRL_DELETING,
++	NVME_CTRL_DELETING_NOIO,
+ 	NVME_CTRL_DEAD,
+ };
+ 
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index af0cfd25ed7a4..876859cd14e86 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1082,11 +1082,12 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
+ 	changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);
+ 	if (!changed) {
+ 		/*
+-		 * state change failure is ok if we're in DELETING state,
++		 * state change failure is ok if we started ctrl delete,
+ 		 * unless we're during creation of a new controller to
+ 		 * avoid races with teardown flow.
+ 		 */
+-		WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING);
++		WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING &&
++			     ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO);
+ 		WARN_ON_ONCE(new);
+ 		ret = -EINVAL;
+ 		goto destroy_io;
+@@ -1139,8 +1140,9 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
+ 	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
+ 
+ 	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
+-		/* state change failure is ok if we're in DELETING state */
+-		WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING);
++		/* state change failure is ok if we started ctrl delete */
++		WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING &&
++			     ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO);
+ 		return;
+ 	}
+ 
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 83bb329d4113a..a6d2e3330a584 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1929,11 +1929,12 @@ static int nvme_tcp_setup_ctrl(struct nvme_ctrl *ctrl, bool new)
+ 
+ 	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE)) {
+ 		/*
+-		 * state change failure is ok if we're in DELETING state,
++		 * state change failure is ok if we started ctrl delete,
+ 		 * unless we're during creation of a new controller to
+ 		 * avoid races with teardown flow.
+ 		 */
+-		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING);
++		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
++			     ctrl->state != NVME_CTRL_DELETING_NOIO);
+ 		WARN_ON_ONCE(new);
+ 		ret = -EINVAL;
+ 		goto destroy_io;
+@@ -1989,8 +1990,9 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work)
+ 	blk_mq_unquiesce_queue(ctrl->admin_q);
+ 
+ 	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
+-		/* state change failure is ok if we're in DELETING state */
+-		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING);
++		/* state change failure is ok if we started ctrl delete */
++		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
++			     ctrl->state != NVME_CTRL_DELETING_NOIO);
+ 		return;
+ 	}
+ 
+@@ -2025,8 +2027,9 @@ static void nvme_reset_ctrl_work(struct work_struct *work)
+ 	nvme_tcp_teardown_ctrl(ctrl, false);
+ 
+ 	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
+-		/* state change failure is ok if we're in DELETING state */
+-		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING);
++		/* state change failure is ok if we started ctrl delete */
++		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
++			     ctrl->state != NVME_CTRL_DELETING_NOIO);
+ 		return;
+ 	}
+ 
+diff --git a/drivers/pci/ats.c b/drivers/pci/ats.c
+index b761c1f72f672..647e097530a89 100644
+--- a/drivers/pci/ats.c
++++ b/drivers/pci/ats.c
+@@ -325,6 +325,21 @@ int pci_prg_resp_pasid_required(struct pci_dev *pdev)
+ 
+ 	return pdev->pasid_required;
+ }
++
++/**
++ * pci_pri_supported - Check if PRI is supported.
++ * @pdev: PCI device structure
++ *
++ * Returns true if PRI capability is present, false otherwise.
++ */
++bool pci_pri_supported(struct pci_dev *pdev)
++{
++	/* VFs share the PF PRI */
++	if (pci_physfn(pdev)->pri_cap)
++		return true;
++	return false;
++}
++EXPORT_SYMBOL_GPL(pci_pri_supported);
+ #endif /* CONFIG_PCI_PRI */
+ 
+ #ifdef CONFIG_PCI_PASID
+diff --git a/drivers/pci/bus.c b/drivers/pci/bus.c
+index 8e40b3e6da77d..3cef835b375fd 100644
+--- a/drivers/pci/bus.c
++++ b/drivers/pci/bus.c
+@@ -322,12 +322,8 @@ void pci_bus_add_device(struct pci_dev *dev)
+ 
+ 	dev->match_driver = true;
+ 	retval = device_attach(&dev->dev);
+-	if (retval < 0 && retval != -EPROBE_DEFER) {
++	if (retval < 0 && retval != -EPROBE_DEFER)
+ 		pci_warn(dev, "device attach failed (%d)\n", retval);
+-		pci_proc_detach_device(dev);
+-		pci_remove_sysfs_dev_files(dev);
+-		return;
+-	}
+ 
+ 	pci_dev_assign_added(dev, true);
+ }
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 138e1a2d21ccd..5dd1740855770 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -45,7 +45,13 @@
+ #define PCIE_CAP_CPL_TIMEOUT_DISABLE		0x10
+ 
+ #define PCIE20_PARF_PHY_CTRL			0x40
++#define PHY_CTRL_PHY_TX0_TERM_OFFSET_MASK	GENMASK(20, 16)
++#define PHY_CTRL_PHY_TX0_TERM_OFFSET(x)		((x) << 16)
++
+ #define PCIE20_PARF_PHY_REFCLK			0x4C
++#define PHY_REFCLK_SSP_EN			BIT(16)
++#define PHY_REFCLK_USE_PAD			BIT(12)
++
+ #define PCIE20_PARF_DBI_BASE_ADDR		0x168
+ #define PCIE20_PARF_SLV_ADDR_SPACE_SIZE		0x16C
+ #define PCIE20_PARF_MHI_CLOCK_RESET_CTRL	0x174
+@@ -77,6 +83,18 @@
+ #define DBI_RO_WR_EN				1
+ 
+ #define PERST_DELAY_US				1000
++/* PARF registers */
++#define PCIE20_PARF_PCS_DEEMPH			0x34
++#define PCS_DEEMPH_TX_DEEMPH_GEN1(x)		((x) << 16)
++#define PCS_DEEMPH_TX_DEEMPH_GEN2_3_5DB(x)	((x) << 8)
++#define PCS_DEEMPH_TX_DEEMPH_GEN2_6DB(x)	((x) << 0)
++
++#define PCIE20_PARF_PCS_SWING			0x38
++#define PCS_SWING_TX_SWING_FULL(x)		((x) << 8)
++#define PCS_SWING_TX_SWING_LOW(x)		((x) << 0)
++
++#define PCIE20_PARF_CONFIG_BITS		0x50
++#define PHY_RX0_EQ(x)				((x) << 24)
+ 
+ #define PCIE20_v3_PARF_SLV_ADDR_SPACE_SIZE	0x358
+ #define SLV_ADDR_SPACE_SZ			0x10000000
+@@ -286,6 +304,7 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
+ 	struct qcom_pcie_resources_2_1_0 *res = &pcie->res.v2_1_0;
+ 	struct dw_pcie *pci = pcie->pci;
+ 	struct device *dev = pci->dev;
++	struct device_node *node = dev->of_node;
+ 	u32 val;
+ 	int ret;
+ 
+@@ -330,9 +349,29 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
+ 	val &= ~BIT(0);
+ 	writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL);
+ 
++	if (of_device_is_compatible(node, "qcom,pcie-ipq8064")) {
++		writel(PCS_DEEMPH_TX_DEEMPH_GEN1(24) |
++			       PCS_DEEMPH_TX_DEEMPH_GEN2_3_5DB(24) |
++			       PCS_DEEMPH_TX_DEEMPH_GEN2_6DB(34),
++		       pcie->parf + PCIE20_PARF_PCS_DEEMPH);
++		writel(PCS_SWING_TX_SWING_FULL(120) |
++			       PCS_SWING_TX_SWING_LOW(120),
++		       pcie->parf + PCIE20_PARF_PCS_SWING);
++		writel(PHY_RX0_EQ(4), pcie->parf + PCIE20_PARF_CONFIG_BITS);
++	}
++
++	if (of_device_is_compatible(node, "qcom,pcie-ipq8064")) {
++		/* set TX termination offset */
++		val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL);
++		val &= ~PHY_CTRL_PHY_TX0_TERM_OFFSET_MASK;
++		val |= PHY_CTRL_PHY_TX0_TERM_OFFSET(7);
++		writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL);
++	}
++
+ 	/* enable external reference clock */
+ 	val = readl(pcie->parf + PCIE20_PARF_PHY_REFCLK);
+-	val |= BIT(16);
++	val &= ~PHY_REFCLK_USE_PAD;
++	val |= PHY_REFCLK_SSP_EN;
+ 	writel(val, pcie->parf + PCIE20_PARF_PHY_REFCLK);
+ 
+ 	ret = reset_control_deassert(res->phy_reset);
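
[The PHY tuning hunks above replace a bare BIT(16) with named bits and add masked field updates. The read-modify-write idiom, shown here as plain C on an in-memory value; the register layout is taken from the patch and the function is a stand-in for the readl()/writel() sequence:]

#include <stdint.h>

#define TX0_TERM_OFFSET_MASK	(0x1fu << 16)	/* GENMASK(20, 16) */
#define TX0_TERM_OFFSET(x)	((uint32_t)(x) << 16)

static uint32_t set_tx0_term_offset(uint32_t reg, unsigned int offset)
{
	reg &= ~TX0_TERM_OFFSET_MASK;	/* clear the old field */
	reg |= TX0_TERM_OFFSET(offset);	/* install the new value (7 in the patch) */
	return reg;
}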
+diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
+index b4c92cee13f8a..3365c93abf0e2 100644
+--- a/drivers/pci/hotplug/acpiphp_glue.c
++++ b/drivers/pci/hotplug/acpiphp_glue.c
+@@ -122,13 +122,21 @@ static struct acpiphp_context *acpiphp_grab_context(struct acpi_device *adev)
+ 	struct acpiphp_context *context;
+ 
+ 	acpi_lock_hp_context();
++
+ 	context = acpiphp_get_context(adev);
+-	if (!context || context->func.parent->is_going_away) {
+-		acpi_unlock_hp_context();
+-		return NULL;
++	if (!context)
++		goto unlock;
++
++	if (context->func.parent->is_going_away) {
++		acpiphp_put_context(context);
++		context = NULL;
++		goto unlock;
+ 	}
++
+ 	get_bridge(context->func.parent);
+ 	acpiphp_put_context(context);
++
++unlock:
+ 	acpi_unlock_hp_context();
+ 	return context;
+ }
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index d442219cd2708..cc6e1a382118e 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -5207,7 +5207,8 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0422, quirk_no_ext_tags);
+  */
+ static void quirk_amd_harvest_no_ats(struct pci_dev *pdev)
+ {
+-	if (pdev->device == 0x7340 && pdev->revision != 0xc5)
++	if ((pdev->device == 0x7312 && pdev->revision != 0x00) ||
++	    (pdev->device == 0x7340 && pdev->revision != 0xc5))
+ 		return;
+ 
+ 	pci_info(pdev, "disabling ATS\n");
+@@ -5218,6 +5219,8 @@ static void quirk_amd_harvest_no_ats(struct pci_dev *pdev)
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x98e4, quirk_amd_harvest_no_ats);
+ /* AMD Iceland dGPU */
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x6900, quirk_amd_harvest_no_ats);
++/* AMD Navi10 dGPU */
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7312, quirk_amd_harvest_no_ats);
+ /* AMD Navi14 dGPU */
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7340, quirk_amd_harvest_no_ats);
+ #endif /* CONFIG_PCI_ATS */
+diff --git a/drivers/pinctrl/pinctrl-ingenic.c b/drivers/pinctrl/pinctrl-ingenic.c
+index 6a8d44504f940..367211998ab00 100644
+--- a/drivers/pinctrl/pinctrl-ingenic.c
++++ b/drivers/pinctrl/pinctrl-ingenic.c
+@@ -1810,9 +1810,9 @@ static void ingenic_gpio_irq_ack(struct irq_data *irqd)
+ 		 */
+ 		high = ingenic_gpio_get_value(jzgc, irq);
+ 		if (high)
+-			irq_set_type(jzgc, irq, IRQ_TYPE_EDGE_FALLING);
++			irq_set_type(jzgc, irq, IRQ_TYPE_LEVEL_LOW);
+ 		else
+-			irq_set_type(jzgc, irq, IRQ_TYPE_EDGE_RISING);
++			irq_set_type(jzgc, irq, IRQ_TYPE_LEVEL_HIGH);
+ 	}
+ 
+ 	if (jzgc->jzpc->info->version >= ID_JZ4760)
+@@ -1848,7 +1848,7 @@ static int ingenic_gpio_irq_set_type(struct irq_data *irqd, unsigned int type)
+ 		 */
+ 		bool high = ingenic_gpio_get_value(jzgc, irqd->hwirq);
+ 
+-		type = high ? IRQ_TYPE_EDGE_FALLING : IRQ_TYPE_EDGE_RISING;
++		type = high ? IRQ_TYPE_LEVEL_LOW : IRQ_TYPE_LEVEL_HIGH;
+ 	}
+ 
+ 	irq_set_type(jzgc, irqd->hwirq, type);
+@@ -1955,7 +1955,8 @@ static int ingenic_gpio_get_direction(struct gpio_chip *gc, unsigned int offset)
+ 	unsigned int pin = gc->base + offset;
+ 
+ 	if (jzpc->info->version >= ID_JZ4760) {
+-		if (ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_PAT1))
++		if (ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_INT) ||
++		    ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_PAT1))
+ 			return GPIO_LINE_DIRECTION_IN;
+ 		return GPIO_LINE_DIRECTION_OUT;
+ 	}
+diff --git a/drivers/platform/chrome/cros_ec_ishtp.c b/drivers/platform/chrome/cros_ec_ishtp.c
+index ed794a7ddba9b..81364029af367 100644
+--- a/drivers/platform/chrome/cros_ec_ishtp.c
++++ b/drivers/platform/chrome/cros_ec_ishtp.c
+@@ -681,8 +681,10 @@ static int cros_ec_ishtp_probe(struct ishtp_cl_device *cl_device)
+ 
+ 	/* Register cros_ec_dev mfd */
+ 	rv = cros_ec_dev_init(client_data);
+-	if (rv)
++	if (rv) {
++		down_write(&init_lock);
+ 		goto end_cros_ec_dev_init_error;
++	}
+ 
+ 	return 0;
+ 
+diff --git a/drivers/pwm/pwm-bcm-iproc.c b/drivers/pwm/pwm-bcm-iproc.c
+index 1f829edd8ee70..d392a828fc493 100644
+--- a/drivers/pwm/pwm-bcm-iproc.c
++++ b/drivers/pwm/pwm-bcm-iproc.c
+@@ -85,8 +85,6 @@ static void iproc_pwmc_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	u64 tmp, multi, rate;
+ 	u32 value, prescale;
+ 
+-	rate = clk_get_rate(ip->clk);
+-
+ 	value = readl(ip->base + IPROC_PWM_CTRL_OFFSET);
+ 
+ 	if (value & BIT(IPROC_PWM_CTRL_EN_SHIFT(pwm->hwpwm)))
+@@ -99,6 +97,13 @@ static void iproc_pwmc_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	else
+ 		state->polarity = PWM_POLARITY_INVERSED;
+ 
++	rate = clk_get_rate(ip->clk);
++	if (rate == 0) {
++		state->period = 0;
++		state->duty_cycle = 0;
++		return;
++	}
++
+ 	value = readl(ip->base + IPROC_PWM_PRESCALE_OFFSET);
+ 	prescale = value >> IPROC_PWM_PRESCALE_SHIFT(pwm->hwpwm);
+ 	prescale &= IPROC_PWM_PRESCALE_MAX;
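
[The pwm-bcm-iproc hunk above defers clk_get_rate() until after the enable check and bails out on a zero rate, since the period computation later divides by it. A hedged sketch of the guard, with an illustrative struct:]

#include <stdint.h>

struct pwm_state_sketch { uint64_t period, duty_cycle; };

static void get_state(uint64_t rate, struct pwm_state_sketch *state)
{
	if (rate == 0) {		/* clock gated or unknown: avoid dividing by zero */
		state->period = 0;
		state->duty_cycle = 0;
		return;
	}
	/* ... period = counter * prescale * NSEC_PER_SEC / rate ... */
}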
+diff --git a/drivers/remoteproc/qcom_q6v5.c b/drivers/remoteproc/qcom_q6v5.c
+index 111a442c993c4..fd6fd36268d93 100644
+--- a/drivers/remoteproc/qcom_q6v5.c
++++ b/drivers/remoteproc/qcom_q6v5.c
+@@ -153,6 +153,8 @@ int qcom_q6v5_request_stop(struct qcom_q6v5 *q6v5)
+ {
+ 	int ret;
+ 
++	q6v5->running = false;
++
+ 	qcom_smem_state_update_bits(q6v5->state,
+ 				    BIT(q6v5->stop_bit), BIT(q6v5->stop_bit));
+ 
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index feb70283b6a21..a6770e5e32daa 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -407,6 +407,12 @@ static int q6v5_load(struct rproc *rproc, const struct firmware *fw)
+ {
+ 	struct q6v5 *qproc = rproc->priv;
+ 
++	/* MBA is restricted to a maximum size of 1M */
++	if (fw->size > qproc->mba_size || fw->size > SZ_1M) {
++		dev_err(qproc->dev, "MBA firmware load failed\n");
++		return -EINVAL;
++	}
++
+ 	memcpy(qproc->mba_region, fw->data, fw->size);
+ 
+ 	return 0;
+@@ -1138,15 +1144,14 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ 		} else if (phdr->p_filesz) {
+ 			/* Replace "xxx.xxx" with "xxx.bxx" */
+ 			sprintf(fw_name + fw_name_len - 3, "b%02d", i);
+-			ret = request_firmware(&seg_fw, fw_name, qproc->dev);
++			ret = request_firmware_into_buf(&seg_fw, fw_name, qproc->dev,
++							ptr, phdr->p_filesz);
+ 			if (ret) {
+ 				dev_err(qproc->dev, "failed to load %s\n", fw_name);
+ 				iounmap(ptr);
+ 				goto release_firmware;
+ 			}
+ 
+-			memcpy(ptr, seg_fw->data, seg_fw->size);
+-
+ 			release_firmware(seg_fw);
+ 		}
+ 
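
[Two related fixes above: q6v5_load() now rejects MBA images larger than the region (or 1 MiB), and the segment loader switches to request_firmware_into_buf() so the firmware core enforces the buffer bound instead of an unchecked memcpy(). The size check, as a standalone sketch:]

#include <errno.h>
#include <stddef.h>

#define SZ_1M	(1024UL * 1024)

static int validate_mba_size(size_t fw_size, size_t mba_region_size)
{
	if (fw_size > mba_region_size || fw_size > SZ_1M)
		return -EINVAL;	/* would overflow the fixed MBA region */
	return 0;
}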
+diff --git a/drivers/rtc/rtc-cpcap.c b/drivers/rtc/rtc-cpcap.c
+index a603f1f211250..800667d73a6fb 100644
+--- a/drivers/rtc/rtc-cpcap.c
++++ b/drivers/rtc/rtc-cpcap.c
+@@ -261,7 +261,7 @@ static int cpcap_rtc_probe(struct platform_device *pdev)
+ 		return PTR_ERR(rtc->rtc_dev);
+ 
+ 	rtc->rtc_dev->ops = &cpcap_rtc_ops;
+-	rtc->rtc_dev->range_max = (1 << 14) * SECS_PER_DAY - 1;
++	rtc->rtc_dev->range_max = (timeu64_t) (DAY_MASK + 1) * SECS_PER_DAY - 1;
+ 
+ 	err = cpcap_get_vendor(dev, rtc->regmap, &rtc->vendor);
+ 	if (err)
+diff --git a/drivers/rtc/rtc-pl031.c b/drivers/rtc/rtc-pl031.c
+index 40d7450a1ce49..c6b89273feba8 100644
+--- a/drivers/rtc/rtc-pl031.c
++++ b/drivers/rtc/rtc-pl031.c
+@@ -275,6 +275,7 @@ static int pl031_set_alarm(struct device *dev, struct rtc_wkalrm *alarm)
+ 	struct pl031_local *ldata = dev_get_drvdata(dev);
+ 
+ 	writel(rtc_tm_to_time64(&alarm->time), ldata->base + RTC_MR);
++	pl031_alarm_irq_enable(dev, alarm->enabled);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
+index 88760416a8cbd..fcd9d4c2f1ee0 100644
+--- a/drivers/scsi/lpfc/lpfc_nvmet.c
++++ b/drivers/scsi/lpfc/lpfc_nvmet.c
+@@ -2112,7 +2112,7 @@ lpfc_nvmet_destroy_targetport(struct lpfc_hba *phba)
+ 		}
+ 		tgtp->tport_unreg_cmp = &tport_unreg_cmp;
+ 		nvmet_fc_unregister_targetport(phba->targetport);
+-		if (!wait_for_completion_timeout(tgtp->tport_unreg_cmp,
++		if (!wait_for_completion_timeout(&tport_unreg_cmp,
+ 					msecs_to_jiffies(LPFC_NVMET_WAIT_TMO)))
+ 			lpfc_printf_log(phba, KERN_ERR, LOG_NVME,
+ 					"6179 Unreg targetport x%px timeout "
+diff --git a/drivers/staging/media/rkisp1/rkisp1-common.h b/drivers/staging/media/rkisp1/rkisp1-common.h
+index 0c4fe503adc90..12bd9d05050db 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-common.h
++++ b/drivers/staging/media/rkisp1/rkisp1-common.h
+@@ -22,6 +22,9 @@
+ #include "rkisp1-regs.h"
+ #include "uapi/rkisp1-config.h"
+ 
++#define RKISP1_ISP_SD_SRC BIT(0)
++#define RKISP1_ISP_SD_SINK BIT(1)
++
+ #define RKISP1_ISP_MAX_WIDTH		4032
+ #define RKISP1_ISP_MAX_HEIGHT		3024
+ #define RKISP1_ISP_MIN_WIDTH		32
+diff --git a/drivers/staging/media/rkisp1/rkisp1-isp.c b/drivers/staging/media/rkisp1/rkisp1-isp.c
+index dc2b59a0160a8..b21a67aea433c 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-isp.c
++++ b/drivers/staging/media/rkisp1/rkisp1-isp.c
+@@ -23,10 +23,6 @@
+ 
+ #define RKISP1_ISP_DEV_NAME	RKISP1_DRIVER_NAME "_isp"
+ 
+-#define RKISP1_DIR_SRC BIT(0)
+-#define RKISP1_DIR_SINK BIT(1)
+-#define RKISP1_DIR_SINK_SRC (RKISP1_DIR_SINK | RKISP1_DIR_SRC)
+-
+ /*
+  * NOTE: MIPI controller and input MUX are also configured in this file.
+  * This is because ISP Subdev describes not only ISP submodule (input size,
+@@ -62,119 +58,119 @@ static const struct rkisp1_isp_mbus_info rkisp1_isp_formats[] = {
+ 	{
+ 		.mbus_code	= MEDIA_BUS_FMT_YUYV8_2X8,
+ 		.pixel_enc	= V4L2_PIXEL_ENC_YUV,
+-		.direction	= RKISP1_DIR_SRC,
++		.direction	= RKISP1_ISP_SD_SRC,
+ 	}, {
+ 		.mbus_code	= MEDIA_BUS_FMT_SRGGB10_1X10,
+ 		.pixel_enc	= V4L2_PIXEL_ENC_BAYER,
+ 		.mipi_dt	= RKISP1_CIF_CSI2_DT_RAW10,
+ 		.bayer_pat	= RKISP1_RAW_RGGB,
+ 		.bus_width	= 10,
+-		.direction	= RKISP1_DIR_SINK_SRC,
++		.direction	= RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ 	}, {
+ 		.mbus_code	= MEDIA_BUS_FMT_SBGGR10_1X10,
+ 		.pixel_enc	= V4L2_PIXEL_ENC_BAYER,
+ 		.mipi_dt	= RKISP1_CIF_CSI2_DT_RAW10,
+ 		.bayer_pat	= RKISP1_RAW_BGGR,
+ 		.bus_width	= 10,
+-		.direction	= RKISP1_DIR_SINK_SRC,
++		.direction	= RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ 	}, {
+ 		.mbus_code	= MEDIA_BUS_FMT_SGBRG10_1X10,
+ 		.pixel_enc	= V4L2_PIXEL_ENC_BAYER,
+ 		.mipi_dt	= RKISP1_CIF_CSI2_DT_RAW10,
+ 		.bayer_pat	= RKISP1_RAW_GBRG,
+ 		.bus_width	= 10,
+-		.direction	= RKISP1_DIR_SINK_SRC,
++		.direction	= RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ 	}, {
+ 		.mbus_code	= MEDIA_BUS_FMT_SGRBG10_1X10,
+ 		.pixel_enc	= V4L2_PIXEL_ENC_BAYER,
+ 		.mipi_dt	= RKISP1_CIF_CSI2_DT_RAW10,
+ 		.bayer_pat	= RKISP1_RAW_GRBG,
+ 		.bus_width	= 10,
+-		.direction	= RKISP1_DIR_SINK_SRC,
++		.direction	= RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ 	}, {
+ 		.mbus_code	= MEDIA_BUS_FMT_SRGGB12_1X12,
+ 		.pixel_enc	= V4L2_PIXEL_ENC_BAYER,
+ 		.mipi_dt	= RKISP1_CIF_CSI2_DT_RAW12,
+ 		.bayer_pat	= RKISP1_RAW_RGGB,
+ 		.bus_width	= 12,
+-		.direction	= RKISP1_DIR_SINK_SRC,
++		.direction	= RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ 	}, {
+ 		.mbus_code	= MEDIA_BUS_FMT_SBGGR12_1X12,
+ 		.pixel_enc	= V4L2_PIXEL_ENC_BAYER,
+ 		.mipi_dt	= RKISP1_CIF_CSI2_DT_RAW12,
+ 		.bayer_pat	= RKISP1_RAW_BGGR,
+ 		.bus_width	= 12,
+-		.direction	= RKISP1_DIR_SINK_SRC,
++		.direction	= RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ 	}, {
+ 		.mbus_code	= MEDIA_BUS_FMT_SGBRG12_1X12,
+ 		.pixel_enc	= V4L2_PIXEL_ENC_BAYER,
+ 		.mipi_dt	= RKISP1_CIF_CSI2_DT_RAW12,
+ 		.bayer_pat	= RKISP1_RAW_GBRG,
+ 		.bus_width	= 12,
+-		.direction	= RKISP1_DIR_SINK_SRC,
++		.direction	= RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ 	}, {
+ 		.mbus_code	= MEDIA_BUS_FMT_SGRBG12_1X12,
+ 		.pixel_enc	= V4L2_PIXEL_ENC_BAYER,
+ 		.mipi_dt	= RKISP1_CIF_CSI2_DT_RAW12,
+ 		.bayer_pat	= RKISP1_RAW_GRBG,
+ 		.bus_width	= 12,
+-		.direction	= RKISP1_DIR_SINK_SRC,
++		.direction	= RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ 	}, {
+ 		.mbus_code	= MEDIA_BUS_FMT_SRGGB8_1X8,
+ 		.pixel_enc	= V4L2_PIXEL_ENC_BAYER,
+ 		.mipi_dt	= RKISP1_CIF_CSI2_DT_RAW8,
+ 		.bayer_pat	= RKISP1_RAW_RGGB,
+ 		.bus_width	= 8,
+-		.direction	= RKISP1_DIR_SINK_SRC,
++		.direction	= RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ 	}, {
+ 		.mbus_code	= MEDIA_BUS_FMT_SBGGR8_1X8,
+ 		.pixel_enc	= V4L2_PIXEL_ENC_BAYER,
+ 		.mipi_dt	= RKISP1_CIF_CSI2_DT_RAW8,
+ 		.bayer_pat	= RKISP1_RAW_BGGR,
+ 		.bus_width	= 8,
+-		.direction	= RKISP1_DIR_SINK_SRC,
++		.direction	= RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ 	}, {
+ 		.mbus_code	= MEDIA_BUS_FMT_SGBRG8_1X8,
+ 		.pixel_enc	= V4L2_PIXEL_ENC_BAYER,
+ 		.mipi_dt	= RKISP1_CIF_CSI2_DT_RAW8,
+ 		.bayer_pat	= RKISP1_RAW_GBRG,
+ 		.bus_width	= 8,
+-		.direction	= RKISP1_DIR_SINK_SRC,
++		.direction	= RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ 	}, {
+ 		.mbus_code	= MEDIA_BUS_FMT_SGRBG8_1X8,
+ 		.pixel_enc	= V4L2_PIXEL_ENC_BAYER,
+ 		.mipi_dt	= RKISP1_CIF_CSI2_DT_RAW8,
+ 		.bayer_pat	= RKISP1_RAW_GRBG,
+ 		.bus_width	= 8,
+-		.direction	= RKISP1_DIR_SINK_SRC,
++		.direction	= RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ 	}, {
+ 		.mbus_code	= MEDIA_BUS_FMT_YUYV8_1X16,
+ 		.pixel_enc	= V4L2_PIXEL_ENC_YUV,
+ 		.mipi_dt	= RKISP1_CIF_CSI2_DT_YUV422_8b,
+ 		.yuv_seq	= RKISP1_CIF_ISP_ACQ_PROP_YCBYCR,
+ 		.bus_width	= 16,
+-		.direction	= RKISP1_DIR_SINK,
++		.direction	= RKISP1_ISP_SD_SINK,
+ 	}, {
+ 		.mbus_code	= MEDIA_BUS_FMT_YVYU8_1X16,
+ 		.pixel_enc	= V4L2_PIXEL_ENC_YUV,
+ 		.mipi_dt	= RKISP1_CIF_CSI2_DT_YUV422_8b,
+ 		.yuv_seq	= RKISP1_CIF_ISP_ACQ_PROP_YCRYCB,
+ 		.bus_width	= 16,
+-		.direction	= RKISP1_DIR_SINK,
++		.direction	= RKISP1_ISP_SD_SINK,
+ 	}, {
+ 		.mbus_code	= MEDIA_BUS_FMT_UYVY8_1X16,
+ 		.pixel_enc	= V4L2_PIXEL_ENC_YUV,
+ 		.mipi_dt	= RKISP1_CIF_CSI2_DT_YUV422_8b,
+ 		.yuv_seq	= RKISP1_CIF_ISP_ACQ_PROP_CBYCRY,
+ 		.bus_width	= 16,
+-		.direction	= RKISP1_DIR_SINK,
++		.direction	= RKISP1_ISP_SD_SINK,
+ 	}, {
+ 		.mbus_code	= MEDIA_BUS_FMT_VYUY8_1X16,
+ 		.pixel_enc	= V4L2_PIXEL_ENC_YUV,
+ 		.mipi_dt	= RKISP1_CIF_CSI2_DT_YUV422_8b,
+ 		.yuv_seq	= RKISP1_CIF_ISP_ACQ_PROP_CRYCBY,
+ 		.bus_width	= 16,
+-		.direction	= RKISP1_DIR_SINK,
++		.direction	= RKISP1_ISP_SD_SINK,
+ 	},
+ };
+ 
+@@ -574,9 +570,9 @@ static int rkisp1_isp_enum_mbus_code(struct v4l2_subdev *sd,
+ 	int pos = 0;
+ 
+ 	if (code->pad == RKISP1_ISP_PAD_SINK_VIDEO) {
+-		dir = RKISP1_DIR_SINK;
++		dir = RKISP1_ISP_SD_SINK;
+ 	} else if (code->pad == RKISP1_ISP_PAD_SOURCE_VIDEO) {
+-		dir = RKISP1_DIR_SRC;
++		dir = RKISP1_ISP_SD_SRC;
+ 	} else {
+ 		if (code->index > 0)
+ 			return -EINVAL;
+@@ -661,7 +657,7 @@ static void rkisp1_isp_set_src_fmt(struct rkisp1_isp *isp,
+ 
+ 	src_fmt->code = format->code;
+ 	mbus_info = rkisp1_isp_mbus_info_get(src_fmt->code);
+-	if (!mbus_info || !(mbus_info->direction & RKISP1_DIR_SRC)) {
++	if (!mbus_info || !(mbus_info->direction & RKISP1_ISP_SD_SRC)) {
+ 		src_fmt->code = RKISP1_DEF_SRC_PAD_FMT;
+ 		mbus_info = rkisp1_isp_mbus_info_get(src_fmt->code);
+ 	}
+@@ -745,7 +741,7 @@ static void rkisp1_isp_set_sink_fmt(struct rkisp1_isp *isp,
+ 					  which);
+ 	sink_fmt->code = format->code;
+ 	mbus_info = rkisp1_isp_mbus_info_get(sink_fmt->code);
+-	if (!mbus_info || !(mbus_info->direction & RKISP1_DIR_SINK)) {
++	if (!mbus_info || !(mbus_info->direction & RKISP1_ISP_SD_SINK)) {
+ 		sink_fmt->code = RKISP1_DEF_SINK_PAD_FMT;
+ 		mbus_info = rkisp1_isp_mbus_info_get(sink_fmt->code);
+ 	}
+diff --git a/drivers/staging/media/rkisp1/rkisp1-resizer.c b/drivers/staging/media/rkisp1/rkisp1-resizer.c
+index e188944941b58..a2b35961bc8b7 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-resizer.c
++++ b/drivers/staging/media/rkisp1/rkisp1-resizer.c
+@@ -542,7 +542,7 @@ static void rkisp1_rsz_set_sink_fmt(struct rkisp1_resizer *rsz,
+ 					    which);
+ 	sink_fmt->code = format->code;
+ 	mbus_info = rkisp1_isp_mbus_info_get(sink_fmt->code);
+-	if (!mbus_info) {
++	if (!mbus_info || !(mbus_info->direction & RKISP1_ISP_SD_SRC)) {
+ 		sink_fmt->code = RKISP1_DEF_FMT;
+ 		mbus_info = rkisp1_isp_mbus_info_get(sink_fmt->code);
+ 	}
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 9ad44a96dfe3a..33f1cca7eaa61 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -2480,12 +2480,11 @@ static int ftdi_prepare_write_buffer(struct usb_serial_port *port,
+ #define FTDI_RS_ERR_MASK (FTDI_RS_BI | FTDI_RS_PE | FTDI_RS_FE | FTDI_RS_OE)
+ 
+ static int ftdi_process_packet(struct usb_serial_port *port,
+-		struct ftdi_private *priv, char *packet, int len)
++		struct ftdi_private *priv, unsigned char *buf, int len)
+ {
++	unsigned char status;
+ 	int i;
+-	char status;
+ 	char flag;
+-	char *ch;
+ 
+ 	if (len < 2) {
+ 		dev_dbg(&port->dev, "malformed packet\n");
+@@ -2495,7 +2494,7 @@ static int ftdi_process_packet(struct usb_serial_port *port,
+	/* Compare new line status to the old one, signal if different.
+ 	   N.B. packet may be processed more than once, but differences
+ 	   are only processed once.  */
+-	status = packet[0] & FTDI_STATUS_B0_MASK;
++	status = buf[0] & FTDI_STATUS_B0_MASK;
+ 	if (status != priv->prev_status) {
+ 		char diff_status = status ^ priv->prev_status;
+ 
+@@ -2521,13 +2520,12 @@ static int ftdi_process_packet(struct usb_serial_port *port,
+ 	}
+ 
+ 	/* save if the transmitter is empty or not */
+-	if (packet[1] & FTDI_RS_TEMT)
++	if (buf[1] & FTDI_RS_TEMT)
+ 		priv->transmit_empty = 1;
+ 	else
+ 		priv->transmit_empty = 0;
+ 
+-	len -= 2;
+-	if (!len)
++	if (len == 2)
+ 		return 0;	/* status only */
+ 
+ 	/*
+@@ -2535,40 +2533,41 @@ static int ftdi_process_packet(struct usb_serial_port *port,
+ 	 * data payload to avoid over-reporting.
+ 	 */
+ 	flag = TTY_NORMAL;
+-	if (packet[1] & FTDI_RS_ERR_MASK) {
++	if (buf[1] & FTDI_RS_ERR_MASK) {
+ 		/* Break takes precedence over parity, which takes precedence
+ 		 * over framing errors */
+-		if (packet[1] & FTDI_RS_BI) {
++		if (buf[1] & FTDI_RS_BI) {
+ 			flag = TTY_BREAK;
+ 			port->icount.brk++;
+ 			usb_serial_handle_break(port);
+-		} else if (packet[1] & FTDI_RS_PE) {
++		} else if (buf[1] & FTDI_RS_PE) {
+ 			flag = TTY_PARITY;
+ 			port->icount.parity++;
+-		} else if (packet[1] & FTDI_RS_FE) {
++		} else if (buf[1] & FTDI_RS_FE) {
+ 			flag = TTY_FRAME;
+ 			port->icount.frame++;
+ 		}
+ 		/* Overrun is special, not associated with a char */
+-		if (packet[1] & FTDI_RS_OE) {
++		if (buf[1] & FTDI_RS_OE) {
+ 			port->icount.overrun++;
+ 			tty_insert_flip_char(&port->port, 0, TTY_OVERRUN);
+ 		}
+ 	}
+ 
+-	port->icount.rx += len;
+-	ch = packet + 2;
++	port->icount.rx += len - 2;
+ 
+ 	if (port->port.console && port->sysrq) {
+-		for (i = 0; i < len; i++, ch++) {
+-			if (!usb_serial_handle_sysrq_char(port, *ch))
+-				tty_insert_flip_char(&port->port, *ch, flag);
++		for (i = 2; i < len; i++) {
++			if (usb_serial_handle_sysrq_char(port, buf[i]))
++				continue;
++			tty_insert_flip_char(&port->port, buf[i], flag);
+ 		}
+ 	} else {
+-		tty_insert_flip_string_fixed_flag(&port->port, ch, flag, len);
++		tty_insert_flip_string_fixed_flag(&port->port, buf + 2, flag,
++				len - 2);
+ 	}
+ 
+-	return len;
++	return len - 2;
+ }
+ 
+ static void ftdi_process_read_urb(struct urb *urb)
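
[The rework above switches ftdi_process_packet() to an unsigned buffer, indexes the payload from offset 2 instead of carrying a separate pointer, and returns the payload length. A simplified standalone version of that packet walk, with status and flag handling elided:]

#include <stdio.h>

static int process_packet(const unsigned char *buf, int len)
{
	int i;

	if (len < 2)
		return 0;		/* malformed: 2-byte status header missing */
	/* buf[0] and buf[1] carry modem and line status */
	if (len == 2)
		return 0;		/* status only, no payload */
	for (i = 2; i < len; i++)	/* payload starts after the header */
		putchar(buf[i]);
	return len - 2;			/* characters handed on */
}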
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+index 8ac6f341dcc16..67956db75013f 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+@@ -331,6 +331,7 @@ static struct vdpasim *vdpasim_create(void)
+ 
+ 	INIT_WORK(&vdpasim->work, vdpasim_work);
+ 	spin_lock_init(&vdpasim->lock);
++	spin_lock_init(&vdpasim->iommu_lock);
+ 
+ 	dev = &vdpasim->vdpa.dev;
+ 	dev->coherent_dma_mask = DMA_BIT_MASK(64);
+@@ -521,7 +522,7 @@ static void vdpasim_get_config(struct vdpa_device *vdpa, unsigned int offset,
+ 	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+ 
+ 	if (offset + len < sizeof(struct virtio_net_config))
+-		memcpy(buf, &vdpasim->config + offset, len);
++		memcpy(buf, (u8 *)&vdpasim->config + offset, len);
+ }
+ 
+ static void vdpasim_set_config(struct vdpa_device *vdpa, unsigned int offset,
+diff --git a/drivers/watchdog/f71808e_wdt.c b/drivers/watchdog/f71808e_wdt.c
+index a3c44d75d80eb..26bf366aebc23 100644
+--- a/drivers/watchdog/f71808e_wdt.c
++++ b/drivers/watchdog/f71808e_wdt.c
+@@ -690,9 +690,9 @@ static int __init watchdog_init(int sioaddr)
+ 	 * into the module have been registered yet.
+ 	 */
+ 	watchdog.sioaddr = sioaddr;
+-	watchdog.ident.options = WDIOC_SETTIMEOUT
+-				| WDIOF_MAGICCLOSE
+-				| WDIOF_KEEPALIVEPING;
++	watchdog.ident.options = WDIOF_MAGICCLOSE
++				| WDIOF_KEEPALIVEPING
++				| WDIOF_CARDRESET;
+ 
+ 	snprintf(watchdog.ident.identity,
+ 		sizeof(watchdog.ident.identity), "%s watchdog",
+@@ -706,6 +706,13 @@ static int __init watchdog_init(int sioaddr)
+ 	wdt_conf = superio_inb(sioaddr, F71808FG_REG_WDT_CONF);
+ 	watchdog.caused_reboot = wdt_conf & BIT(F71808FG_FLAG_WDTMOUT_STS);
+ 
++	/*
++	 * We don't want WDTMOUT_STS to stick around till regular reboot.
++	 * Write 1 to the bit to clear it to zero.
++	 */
++	superio_outb(sioaddr, F71808FG_REG_WDT_CONF,
++		     wdt_conf | BIT(F71808FG_FLAG_WDTMOUT_STS));
++
+ 	superio_exit(sioaddr);
+ 
+ 	err = watchdog_set_timeout(timeout);
+diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c
+index d456dd72d99a0..c904496fff65e 100644
+--- a/drivers/watchdog/rti_wdt.c
++++ b/drivers/watchdog/rti_wdt.c
+@@ -211,6 +211,7 @@ static int rti_wdt_probe(struct platform_device *pdev)
+ 
+ err_iomap:
+ 	pm_runtime_put_sync(&pdev->dev);
++	pm_runtime_disable(&pdev->dev);
+ 
+ 	return ret;
+ }
+@@ -221,6 +222,7 @@ static int rti_wdt_remove(struct platform_device *pdev)
+ 
+ 	watchdog_unregister_device(&wdt->wdd);
+ 	pm_runtime_put(&pdev->dev);
++	pm_runtime_disable(&pdev->dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
+index 7e4cd34a8c20e..b535f5fa279b9 100644
+--- a/drivers/watchdog/watchdog_dev.c
++++ b/drivers/watchdog/watchdog_dev.c
+@@ -994,6 +994,15 @@ static int watchdog_cdev_register(struct watchdog_device *wdd)
+ 	if (IS_ERR_OR_NULL(watchdog_kworker))
+ 		return -ENODEV;
+ 
++	device_initialize(&wd_data->dev);
++	wd_data->dev.devt = MKDEV(MAJOR(watchdog_devt), wdd->id);
++	wd_data->dev.class = &watchdog_class;
++	wd_data->dev.parent = wdd->parent;
++	wd_data->dev.groups = wdd->groups;
++	wd_data->dev.release = watchdog_core_data_release;
++	dev_set_drvdata(&wd_data->dev, wdd);
++	dev_set_name(&wd_data->dev, "watchdog%d", wdd->id);
++
+ 	kthread_init_work(&wd_data->work, watchdog_ping_work);
+ 	hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
+ 	wd_data->timer.function = watchdog_timer_expired;
+@@ -1014,15 +1023,6 @@ static int watchdog_cdev_register(struct watchdog_device *wdd)
+ 		}
+ 	}
+ 
+-	device_initialize(&wd_data->dev);
+-	wd_data->dev.devt = MKDEV(MAJOR(watchdog_devt), wdd->id);
+-	wd_data->dev.class = &watchdog_class;
+-	wd_data->dev.parent = wdd->parent;
+-	wd_data->dev.groups = wdd->groups;
+-	wd_data->dev.release = watchdog_core_data_release;
+-	dev_set_drvdata(&wd_data->dev, wdd);
+-	dev_set_name(&wd_data->dev, "watchdog%d", wdd->id);
+-
+ 	/* Fill in the data structures */
+ 	cdev_init(&wd_data->cdev, &watchdog_fops);
+ 
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index ea10f7bc99abf..ea1c28ccb44ff 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -2303,7 +2303,7 @@ struct btrfs_backref_iter *btrfs_backref_iter_alloc(
+ 		return NULL;
+ 
+ 	ret->path = btrfs_alloc_path();
+-	if (!ret) {
++	if (!ret->path) {
+ 		kfree(ret);
+ 		return NULL;
+ 	}
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 7c8efa0c3ee65..6fdb3392a06d5 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -1059,8 +1059,10 @@ struct btrfs_root {
+ 	wait_queue_head_t log_writer_wait;
+ 	wait_queue_head_t log_commit_wait[2];
+ 	struct list_head log_ctxs[2];
++	/* Used only for log trees of subvolumes, not for the log root tree */
+ 	atomic_t log_writers;
+ 	atomic_t log_commit[2];
++	/* Used only for log trees of subvolumes, not for the log root tree */
+ 	atomic_t log_batch;
+ 	int log_transid;
+	/* No matter whether the commit succeeds or not */
+@@ -3196,7 +3198,7 @@ do {								\
+ 	/* Report first abort since mount */			\
+ 	if (!test_and_set_bit(BTRFS_FS_STATE_TRANS_ABORTED,	\
+ 			&((trans)->fs_info->fs_state))) {	\
+-		if ((errno) != -EIO) {				\
++		if ((errno) != -EIO && (errno) != -EROFS) {		\
+ 			WARN(1, KERN_DEBUG				\
+ 			"BTRFS: Transaction aborted (error %d)\n",	\
+ 			(errno));					\
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index b1a148058773e..66618a1794ea7 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1395,7 +1395,12 @@ alloc_fail:
+ 	goto out;
+ }
+ 
+-static int btrfs_init_fs_root(struct btrfs_root *root)
++/*
++ * Initialize subvolume root in-memory structure
++ *
++ * @anon_dev:	anonymous device to attach to the root; if zero, allocate a new one
++ */
++static int btrfs_init_fs_root(struct btrfs_root *root, dev_t anon_dev)
+ {
+ 	int ret;
+ 	unsigned int nofs_flag;
+@@ -1428,9 +1433,20 @@ static int btrfs_init_fs_root(struct btrfs_root *root)
+ 	spin_lock_init(&root->ino_cache_lock);
+ 	init_waitqueue_head(&root->ino_cache_wait);
+ 
+-	ret = get_anon_bdev(&root->anon_dev);
+-	if (ret)
+-		goto fail;
++	/*
++	 * Don't assign anonymous block device to roots that are not exposed to
++	 * userspace; the id pool is limited to 1M.
++	 */
++	if (is_fstree(root->root_key.objectid) &&
++	    btrfs_root_refs(&root->root_item) > 0) {
++		if (!anon_dev) {
++			ret = get_anon_bdev(&root->anon_dev);
++			if (ret)
++				goto fail;
++		} else {
++			root->anon_dev = anon_dev;
++		}
++	}
+ 
+ 	mutex_lock(&root->objectid_mutex);
+ 	ret = btrfs_find_highest_objectid(root,
+@@ -1534,8 +1550,27 @@ void btrfs_free_fs_info(struct btrfs_fs_info *fs_info)
+ }
+ 
+ 
+-struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info,
+-				     u64 objectid, bool check_ref)
++/*
++ * Get an in-memory reference of a root structure.
++ *
++ * For essential trees like root/extent tree, we grab it from fs_info directly.
++ * For subvolume trees, we check the cached filesystem roots first. If not
++ * found, then read it from disk and add it to cached fs roots.
++ *
++ * Caller should release the root by calling btrfs_put_root() after the usage.
++ *
++ * NOTE: Reloc and log trees can't be read by this function as they share the
++ *	 same root objectid.
++ *
++ * @objectid:	root id
++ * @anon_dev:	preallocated anonymous block device number for new roots,
++ * 		pass 0 for new allocation.
++ * @check_ref:	whether to check root item references; if true, return -ENOENT
++ *		for orphan roots
++ */
++static struct btrfs_root *btrfs_get_root_ref(struct btrfs_fs_info *fs_info,
++					     u64 objectid, dev_t anon_dev,
++					     bool check_ref)
+ {
+ 	struct btrfs_root *root;
+ 	struct btrfs_path *path;
+@@ -1564,6 +1599,8 @@ struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info,
+ again:
+ 	root = btrfs_lookup_fs_root(fs_info, objectid);
+ 	if (root) {
++		/* Shouldn't get preallocated anon_dev for cached roots */
++		ASSERT(!anon_dev);
+ 		if (check_ref && btrfs_root_refs(&root->root_item) == 0) {
+ 			btrfs_put_root(root);
+ 			return ERR_PTR(-ENOENT);
+@@ -1583,7 +1620,7 @@ again:
+ 		goto fail;
+ 	}
+ 
+-	ret = btrfs_init_fs_root(root);
++	ret = btrfs_init_fs_root(root, anon_dev);
+ 	if (ret)
+ 		goto fail;
+ 
+@@ -1616,6 +1653,33 @@ fail:
+ 	return ERR_PTR(ret);
+ }
+ 
++/*
++ * Get in-memory reference of a root structure
++ *
++ * @objectid:	tree objectid
++ * @check_ref:	if set, verify that the tree exists and the item has at least
++ *		one reference
++ */
++struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info,
++				     u64 objectid, bool check_ref)
++{
++	return btrfs_get_root_ref(fs_info, objectid, 0, check_ref);
++}
++
++/*
++ * Get an in-memory reference of a root structure created as new, optionally
++ * passing the anonymous block device id
++ *
++ * @anon_dev:	if zero, allocate a new anonymous block device; otherwise use
++ *		the parameter value
++ *		parameter value
++ */
++struct btrfs_root *btrfs_get_new_fs_root(struct btrfs_fs_info *fs_info,
++					 u64 objectid, dev_t anon_dev)
++{
++	return btrfs_get_root_ref(fs_info, objectid, anon_dev, true);
++}
++
+ static int btrfs_congested_fn(void *congested_data, int bdi_bits)
+ {
+ 	struct btrfs_fs_info *info = (struct btrfs_fs_info *)congested_data;
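
[The disk-io.c changes above adopt a preallocate-then-transfer convention for the anonymous device number: the caller allocates, passes it down, and drops its local copy once ownership moves, so error paths free it exactly once. A compact sketch of the convention, with malloc()/free() standing in for get_anon_bdev()/free_anon_bdev():]

#include <errno.h>
#include <stdlib.h>

struct root_sketch { int *anon_dev; };

static int init_root(struct root_sketch *root, int *anon_dev)
{
	root->anon_dev = anon_dev;	/* root now owns the id */
	return 0;
}

static int create_root(struct root_sketch *root)
{
	int *anon_dev = malloc(sizeof(*anon_dev));	/* preallocate, like get_anon_bdev() */
	int ret;

	if (!anon_dev)
		return -ENOMEM;
	ret = init_root(root, anon_dev);
	if (ret) {
		free(anon_dev);		/* root never took ownership */
		return ret;
	}
	anon_dev = NULL;	/* transferred: freed later through the root, not here */
	return 0;
}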
+diff --git a/fs/btrfs/disk-io.h b/fs/btrfs/disk-io.h
+index bf43245406c4d..00dc39d47ed34 100644
+--- a/fs/btrfs/disk-io.h
++++ b/fs/btrfs/disk-io.h
+@@ -67,6 +67,8 @@ void btrfs_free_fs_roots(struct btrfs_fs_info *fs_info);
+ 
+ struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info,
+ 				     u64 objectid, bool check_ref);
++struct btrfs_root *btrfs_get_new_fs_root(struct btrfs_fs_info *fs_info,
++					 u64 objectid, dev_t anon_dev);
+ 
+ void btrfs_free_fs_info(struct btrfs_fs_info *fs_info);
+ int btrfs_cleanup_fs_roots(struct btrfs_fs_info *fs_info);
+diff --git a/fs/btrfs/extent-io-tree.h b/fs/btrfs/extent-io-tree.h
+index b6561455b3c42..8bbb734f3f514 100644
+--- a/fs/btrfs/extent-io-tree.h
++++ b/fs/btrfs/extent-io-tree.h
+@@ -34,6 +34,8 @@ struct io_failure_record;
+  */
+ #define CHUNK_ALLOCATED				EXTENT_DIRTY
+ #define CHUNK_TRIMMED				EXTENT_DEFRAG
++#define CHUNK_STATE_MASK			(CHUNK_ALLOCATED |		\
++						 CHUNK_TRIMMED)
+ 
+ enum {
+ 	IO_TREE_FS_PINNED_EXTENTS,
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 96223813b6186..de6fe176fdfb3 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -33,6 +33,7 @@
+ #include "delalloc-space.h"
+ #include "block-group.h"
+ #include "discard.h"
++#include "rcu-string.h"
+ 
+ #undef SCRAMBLE_DELAYED_REFS
+ 
+@@ -5298,7 +5299,14 @@ int btrfs_drop_snapshot(struct btrfs_root *root, int update_ref, int for_reloc)
+ 		goto out;
+ 	}
+ 
+-	trans = btrfs_start_transaction(tree_root, 0);
++	/*
++	 * Use join to avoid potential EINTR from transaction start. See
++	 * wait_reserve_ticket and the whole reservation callchain.
++	 */
++	if (for_reloc)
++		trans = btrfs_join_transaction(tree_root);
++	else
++		trans = btrfs_start_transaction(tree_root, 0);
+ 	if (IS_ERR(trans)) {
+ 		err = PTR_ERR(trans);
+ 		goto out_free;
+@@ -5661,6 +5669,19 @@ static int btrfs_trim_free_extents(struct btrfs_device *device, u64 *trimmed)
+ 					    &start, &end,
+ 					    CHUNK_TRIMMED | CHUNK_ALLOCATED);
+ 
++		/* Check if there are any CHUNK_* bits left */
++		if (start > device->total_bytes) {
++			WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
++			btrfs_warn_in_rcu(fs_info,
++"ignoring attempt to trim beyond device size: offset %llu length %llu device %s device size %llu",
++					  start, end - start + 1,
++					  rcu_str_deref(device->name),
++					  device->total_bytes);
++			mutex_unlock(&fs_info->chunk_mutex);
++			ret = 0;
++			break;
++		}
++
+ 		/* Ensure we skip the reserved area in the first 1M */
+ 		start = max_t(u64, start, SZ_1M);
+ 
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index eeaee346f5a95..8ba8788461ae5 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -4127,7 +4127,7 @@ retry:
+ 	if (!test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) {
+ 		ret = flush_write_bio(&epd);
+ 	} else {
+-		ret = -EUCLEAN;
++		ret = -EROFS;
+ 		end_write_bio(&epd, ret);
+ 	}
+ 	return ret;
+@@ -4502,15 +4502,25 @@ int try_release_extent_mapping(struct page *page, gfp_t mask)
+ 				free_extent_map(em);
+ 				break;
+ 			}
+-			if (!test_range_bit(tree, em->start,
+-					    extent_map_end(em) - 1,
+-					    EXTENT_LOCKED, 0, NULL)) {
++			if (test_range_bit(tree, em->start,
++					   extent_map_end(em) - 1,
++					   EXTENT_LOCKED, 0, NULL))
++				goto next;
++			/*
++			 * If it's not in the list of modified extents, used
++			 * by a fast fsync, we can remove it. If it's being
++			 * logged we can safely remove it since fsync took an
++			 * extra reference on the em.
++			 */
++			if (list_empty(&em->list) ||
++			    test_bit(EXTENT_FLAG_LOGGING, &em->flags)) {
+ 				set_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
+ 					&btrfs_inode->runtime_flags);
+ 				remove_extent_mapping(map, em);
+ 				/* once for the rb tree */
+ 				free_extent_map(em);
+ 			}
++next:
+ 			start = extent_map_end(em);
+ 			write_unlock(&map->lock);
+ 
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index 55955bd424d70..6f7b6bca6dc5b 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -2281,7 +2281,7 @@ out:
+ static bool try_merge_free_space(struct btrfs_free_space_ctl *ctl,
+ 			  struct btrfs_free_space *info, bool update_stat)
+ {
+-	struct btrfs_free_space *left_info;
++	struct btrfs_free_space *left_info = NULL;
+ 	struct btrfs_free_space *right_info;
+ 	bool merged = false;
+ 	u64 offset = info->offset;
+@@ -2297,7 +2297,7 @@ static bool try_merge_free_space(struct btrfs_free_space_ctl *ctl,
+ 	if (right_info && rb_prev(&right_info->offset_index))
+ 		left_info = rb_entry(rb_prev(&right_info->offset_index),
+ 				     struct btrfs_free_space, offset_index);
+-	else
++	else if (!right_info)
+ 		left_info = tree_search_offset(ctl, offset - 1, 0, 0);
+ 
+ 	/* See try_merge_free_space() comment. */
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 3f77ec5de8ec7..7ba1218b1630e 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -650,12 +650,18 @@ cont:
+ 						     page_error_op |
+ 						     PAGE_END_WRITEBACK);
+ 
+-			for (i = 0; i < nr_pages; i++) {
+-				WARN_ON(pages[i]->mapping);
+-				put_page(pages[i]);
++			/*
++			 * Ensure we only free the compressed pages if we have
++			 * them allocated, as we can still reach here with
++			 * inode_need_compress() == false.
++			 */
++			if (pages) {
++				for (i = 0; i < nr_pages; i++) {
++					WARN_ON(pages[i]->mapping);
++					put_page(pages[i]);
++				}
++				kfree(pages);
+ 			}
+-			kfree(pages);
+-
+ 			return 0;
+ 		}
+ 	}
+@@ -4041,6 +4047,8 @@ int btrfs_delete_subvolume(struct inode *dir, struct dentry *dentry)
+ 		}
+ 	}
+ 
++	free_anon_bdev(dest->anon_dev);
++	dest->anon_dev = 0;
+ out_end_trans:
+ 	trans->block_rsv = NULL;
+ 	trans->bytes_reserved = 0;
+@@ -6625,7 +6633,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode,
+ 	    extent_type == BTRFS_FILE_EXTENT_PREALLOC) {
+ 		/* Only regular file could have regular/prealloc extent */
+ 		if (!S_ISREG(inode->vfs_inode.i_mode)) {
+-			ret = -EUCLEAN;
++			err = -EUCLEAN;
+ 			btrfs_crit(fs_info,
+ 		"regular/prealloc extent found for non-regular inode %llu",
+ 				   btrfs_ino(inode));
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index e8f7c5f008944..1448bc43561c2 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -164,8 +164,11 @@ static int btrfs_ioctl_getflags(struct file *file, void __user *arg)
+ 	return 0;
+ }
+ 
+-/* Check if @flags are a supported and valid set of FS_*_FL flags */
+-static int check_fsflags(unsigned int flags)
++/*
++ * Check if @flags are a supported and valid set of FS_*_FL flags and that
++ * the old and new flags are not conflicting
++ */
++static int check_fsflags(unsigned int old_flags, unsigned int flags)
+ {
+ 	if (flags & ~(FS_IMMUTABLE_FL | FS_APPEND_FL | \
+ 		      FS_NOATIME_FL | FS_NODUMP_FL | \
+@@ -174,9 +177,19 @@ static int check_fsflags(unsigned int flags)
+ 		      FS_NOCOW_FL))
+ 		return -EOPNOTSUPP;
+ 
++	/* COMPR and NOCOMP on new/old are valid */
+ 	if ((flags & FS_NOCOMP_FL) && (flags & FS_COMPR_FL))
+ 		return -EINVAL;
+ 
++	if ((flags & FS_COMPR_FL) && (flags & FS_NOCOW_FL))
++		return -EINVAL;
++
++	/* NOCOW and compression options are mutually exclusive */
++	if ((old_flags & FS_NOCOW_FL) && (flags & (FS_COMPR_FL | FS_NOCOMP_FL)))
++		return -EINVAL;
++	if ((flags & FS_NOCOW_FL) && (old_flags & (FS_COMPR_FL | FS_NOCOMP_FL)))
++		return -EINVAL;
++
+ 	return 0;
+ }
+ 
+@@ -190,7 +203,7 @@ static int btrfs_ioctl_setflags(struct file *file, void __user *arg)
+ 	unsigned int fsflags, old_fsflags;
+ 	int ret;
+ 	const char *comp = NULL;
+-	u32 binode_flags = binode->flags;
++	u32 binode_flags;
+ 
+ 	if (!inode_owner_or_capable(inode))
+ 		return -EPERM;
+@@ -201,22 +214,23 @@ static int btrfs_ioctl_setflags(struct file *file, void __user *arg)
+ 	if (copy_from_user(&fsflags, arg, sizeof(fsflags)))
+ 		return -EFAULT;
+ 
+-	ret = check_fsflags(fsflags);
+-	if (ret)
+-		return ret;
+-
+ 	ret = mnt_want_write_file(file);
+ 	if (ret)
+ 		return ret;
+ 
+ 	inode_lock(inode);
+-
+ 	fsflags = btrfs_mask_fsflags_for_type(inode, fsflags);
+ 	old_fsflags = btrfs_inode_flags_to_fsflags(binode->flags);
++
+ 	ret = vfs_ioc_setflags_prepare(inode, old_fsflags, fsflags);
+ 	if (ret)
+ 		goto out_unlock;
+ 
++	ret = check_fsflags(old_fsflags, fsflags);
++	if (ret)
++		goto out_unlock;
++
++	binode_flags = binode->flags;
+ 	if (fsflags & FS_SYNC_FL)
+ 		binode_flags |= BTRFS_INODE_SYNC;
+ 	else
+@@ -566,6 +580,7 @@ static noinline int create_subvol(struct inode *dir,
+ 	struct inode *inode;
+ 	int ret;
+ 	int err;
++	dev_t anon_dev = 0;
+ 	u64 objectid;
+ 	u64 new_dirid = BTRFS_FIRST_FREE_OBJECTID;
+ 	u64 index = 0;
+@@ -578,6 +593,10 @@ static noinline int create_subvol(struct inode *dir,
+ 	if (ret)
+ 		goto fail_free;
+ 
++	ret = get_anon_bdev(&anon_dev);
++	if (ret < 0)
++		goto fail_free;
++
+ 	/*
+ 	 * Don't create subvolume whose level is not zero. Or qgroup will be
+ 	 * screwed up since it assumes subvolume qgroup's level to be 0.
+@@ -660,12 +679,15 @@ static noinline int create_subvol(struct inode *dir,
+ 		goto fail;
+ 
+ 	key.offset = (u64)-1;
+-	new_root = btrfs_get_fs_root(fs_info, objectid, true);
++	new_root = btrfs_get_new_fs_root(fs_info, objectid, anon_dev);
+ 	if (IS_ERR(new_root)) {
++		free_anon_bdev(anon_dev);
+ 		ret = PTR_ERR(new_root);
+ 		btrfs_abort_transaction(trans, ret);
+ 		goto fail;
+ 	}
++	/* Freeing will be done in btrfs_put_root() of new_root */
++	anon_dev = 0;
+ 
+ 	btrfs_record_root_in_trans(trans, new_root);
+ 
+@@ -735,6 +757,8 @@ fail:
+ 	return ret;
+ 
+ fail_free:
++	if (anon_dev)
++		free_anon_bdev(anon_dev);
+ 	kfree(root_item);
+ 	return ret;
+ }
+@@ -762,6 +786,9 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ 	if (!pending_snapshot)
+ 		return -ENOMEM;
+ 
++	ret = get_anon_bdev(&pending_snapshot->anon_dev);
++	if (ret < 0)
++		goto free_pending;
+ 	pending_snapshot->root_item = kzalloc(sizeof(struct btrfs_root_item),
+ 			GFP_KERNEL);
+ 	pending_snapshot->path = btrfs_alloc_path();
+@@ -823,10 +850,16 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ 
+ 	d_instantiate(dentry, inode);
+ 	ret = 0;
++	pending_snapshot->anon_dev = 0;
+ fail:
++	/* Prevent double freeing of anon_dev */
++	if (ret && pending_snapshot->snap)
++		pending_snapshot->snap->anon_dev = 0;
+ 	btrfs_put_root(pending_snapshot->snap);
+ 	btrfs_subvolume_release_metadata(fs_info, &pending_snapshot->block_rsv);
+ free_pending:
++	if (pending_snapshot->anon_dev)
++		free_anon_bdev(pending_snapshot->anon_dev);
+ 	kfree(pending_snapshot->root_item);
+ 	btrfs_free_path(pending_snapshot->path);
+ 	kfree(pending_snapshot);
+@@ -3198,11 +3231,15 @@ static long btrfs_ioctl_fs_info(struct btrfs_fs_info *fs_info,
+ 	struct btrfs_ioctl_fs_info_args *fi_args;
+ 	struct btrfs_device *device;
+ 	struct btrfs_fs_devices *fs_devices = fs_info->fs_devices;
++	u64 flags_in;
+ 	int ret = 0;
+ 
+-	fi_args = kzalloc(sizeof(*fi_args), GFP_KERNEL);
+-	if (!fi_args)
+-		return -ENOMEM;
++	fi_args = memdup_user(arg, sizeof(*fi_args));
++	if (IS_ERR(fi_args))
++		return PTR_ERR(fi_args);
++
++	flags_in = fi_args->flags;
++	memset(fi_args, 0, sizeof(*fi_args));
+ 
+ 	rcu_read_lock();
+ 	fi_args->num_devices = fs_devices->num_devices;
+@@ -3218,6 +3255,12 @@ static long btrfs_ioctl_fs_info(struct btrfs_fs_info *fs_info,
+ 	fi_args->sectorsize = fs_info->sectorsize;
+ 	fi_args->clone_alignment = fs_info->sectorsize;
+ 
++	if (flags_in & BTRFS_FS_INFO_FLAG_CSUM_INFO) {
++		fi_args->csum_type = btrfs_super_csum_type(fs_info->super_copy);
++		fi_args->csum_size = btrfs_super_csum_size(fs_info->super_copy);
++		fi_args->flags |= BTRFS_FS_INFO_FLAG_CSUM_INFO;
++	}
++
+ 	if (copy_to_user(arg, fi_args, sizeof(*fi_args)))
+ 		ret = -EFAULT;
+ 
+diff --git a/fs/btrfs/ref-verify.c b/fs/btrfs/ref-verify.c
+index af92525dbb168..7f03dbe5b609d 100644
+--- a/fs/btrfs/ref-verify.c
++++ b/fs/btrfs/ref-verify.c
+@@ -286,6 +286,8 @@ static struct block_entry *add_block_entry(struct btrfs_fs_info *fs_info,
+ 			exist_re = insert_root_entry(&exist->roots, re);
+ 			if (exist_re)
+ 				kfree(re);
++		} else {
++			kfree(re);
+ 		}
+ 		kfree(be);
+ 		return exist;
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 3bbae80c752fc..5740ed51a1e8e 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -1686,12 +1686,20 @@ static noinline_for_stack int merge_reloc_root(struct reloc_control *rc,
+ 		btrfs_unlock_up_safe(path, 0);
+ 	}
+ 
+-	min_reserved = fs_info->nodesize * (BTRFS_MAX_LEVEL - 1) * 2;
++	/*
++	 * In merge_reloc_root(), we modify the upper level pointer to swap the
++	 * tree blocks between reloc tree and subvolume tree.  Thus for tree
++	 * block COW, we COW at most from level 1 to root level for each tree.
++	 *
++	 * Thus the needed metadata size is at most root_level * nodesize,
++	 * doubled since we have two trees to COW.
++	 */
++	min_reserved = fs_info->nodesize * btrfs_root_level(root_item) * 2;
+ 	memset(&next_key, 0, sizeof(next_key));
+ 
+ 	while (1) {
+ 		ret = btrfs_block_rsv_refill(root, rc->block_rsv, min_reserved,
+-					     BTRFS_RESERVE_FLUSH_ALL);
++					     BTRFS_RESERVE_FLUSH_LIMIT);
+ 		if (ret) {
+ 			err = ret;
+ 			goto out;
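
[The comment added above tightens the reservation from a worst case of (BTRFS_MAX_LEVEL - 1) * 2 nodes to root_level * 2. With the common 16 KiB nodesize and BTRFS_MAX_LEVEL of 8, a level-3 tree drops from 224 KiB reserved to 96 KiB; a quick check:]

#include <stdio.h>

int main(void)
{
	unsigned long nodesize = 16384;		/* common btrfs node size */
	int root_level = 3;			/* example tree height */

	printf("old bound: %lu bytes\n", nodesize * (8 - 1) * 2);	/* 229376 */
	printf("new bound: %lu bytes\n", nodesize * root_level * 2);	/* 98304 */
	return 0;
}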
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 016a025e36c74..5f5b21e389dbc 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -3758,7 +3758,7 @@ static noinline_for_stack int scrub_supers(struct scrub_ctx *sctx,
+ 	struct btrfs_fs_info *fs_info = sctx->fs_info;
+ 
+ 	if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state))
+-		return -EIO;
++		return -EROFS;
+ 
+	/* Seed devices of a new filesystem have their own generation. */
+ 	if (scrub_dev->fs_devices != fs_info->fs_devices)
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index c3826ae883f0e..56cd2cf571588 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -449,6 +449,7 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ 	char *compress_type;
+ 	bool compress_force = false;
+ 	enum btrfs_compression_type saved_compress_type;
++	int saved_compress_level;
+ 	bool saved_compress_force;
+ 	int no_compress = 0;
+ 
+@@ -531,6 +532,7 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ 				info->compress_type : BTRFS_COMPRESS_NONE;
+ 			saved_compress_force =
+ 				btrfs_test_opt(info, FORCE_COMPRESS);
++			saved_compress_level = info->compress_level;
+ 			if (token == Opt_compress ||
+ 			    token == Opt_compress_force ||
+ 			    strncmp(args[0].from, "zlib", 4) == 0) {
+@@ -575,6 +577,8 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ 				no_compress = 0;
+ 			} else if (strncmp(args[0].from, "no", 2) == 0) {
+ 				compress_type = "no";
++				info->compress_level = 0;
++				info->compress_type = 0;
+ 				btrfs_clear_opt(info->mount_opt, COMPRESS);
+ 				btrfs_clear_opt(info->mount_opt, FORCE_COMPRESS);
+ 				compress_force = false;
+@@ -595,11 +599,11 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ 				 */
+ 				btrfs_clear_opt(info->mount_opt, FORCE_COMPRESS);
+ 			}
+-			if ((btrfs_test_opt(info, COMPRESS) &&
+-			     (info->compress_type != saved_compress_type ||
+-			      compress_force != saved_compress_force)) ||
+-			    (!btrfs_test_opt(info, COMPRESS) &&
+-			     no_compress == 1)) {
++			if (no_compress == 1) {
++				btrfs_info(info, "use no compression");
++			} else if ((info->compress_type != saved_compress_type) ||
++				   (compress_force != saved_compress_force) ||
++				   (info->compress_level != saved_compress_level)) {
+ 				btrfs_info(info, "%s %s compression, level %d",
+ 					   (compress_force) ? "force" : "use",
+ 					   compress_type, info->compress_level);
+@@ -1312,6 +1316,7 @@ static int btrfs_show_options(struct seq_file *seq, struct dentry *dentry)
+ {
+ 	struct btrfs_fs_info *info = btrfs_sb(dentry->d_sb);
+ 	const char *compress_type;
++	const char *subvol_name;
+ 
+ 	if (btrfs_test_opt(info, DEGRADED))
+ 		seq_puts(seq, ",degraded");
+@@ -1398,8 +1403,13 @@ static int btrfs_show_options(struct seq_file *seq, struct dentry *dentry)
+ 		seq_puts(seq, ",ref_verify");
+ 	seq_printf(seq, ",subvolid=%llu",
+ 		  BTRFS_I(d_inode(dentry))->root->root_key.objectid);
+-	seq_puts(seq, ",subvol=");
+-	seq_dentry(seq, dentry, " \t\n\\");
++	subvol_name = btrfs_get_subvol_name_from_objectid(info,
++			BTRFS_I(d_inode(dentry))->root->root_key.objectid);
++	if (!IS_ERR(subvol_name)) {
++		seq_puts(seq, ",subvol=");
++		seq_escape(seq, subvol_name, " \t\n\\");
++		kfree(subvol_name);
++	}
+ 	return 0;
+ }
+ 
+@@ -1887,6 +1897,12 @@ static int btrfs_remount(struct super_block *sb, int *flags, char *data)
+ 		set_bit(BTRFS_FS_OPEN, &fs_info->flags);
+ 	}
+ out:
++	/*
++	 * We need to set SB_I_VERSION here, otherwise it'll get cleared by VFS,
++	 * since the absence of the flag means it can be toggled off by remount.
++	 */
++	*flags |= SB_I_VERSION;
++
+ 	wake_up_process(fs_info->transaction_kthread);
+ 	btrfs_remount_cleanup(fs_info, old_opts);
+ 	return 0;
+@@ -2296,9 +2312,7 @@ static int btrfs_unfreeze(struct super_block *sb)
+ static int btrfs_show_devname(struct seq_file *m, struct dentry *root)
+ {
+ 	struct btrfs_fs_info *fs_info = btrfs_sb(root->d_sb);
+-	struct btrfs_fs_devices *cur_devices;
+ 	struct btrfs_device *dev, *first_dev = NULL;
+-	struct list_head *head;
+ 
+ 	/*
+ 	 * Lightweight locking of the devices. We should not need
+@@ -2308,18 +2322,13 @@ static int btrfs_show_devname(struct seq_file *m, struct dentry *root)
+ 	 * least until the rcu_read_unlock.
+ 	 */
+ 	rcu_read_lock();
+-	cur_devices = fs_info->fs_devices;
+-	while (cur_devices) {
+-		head = &cur_devices->devices;
+-		list_for_each_entry_rcu(dev, head, dev_list) {
+-			if (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state))
+-				continue;
+-			if (!dev->name)
+-				continue;
+-			if (!first_dev || dev->devid < first_dev->devid)
+-				first_dev = dev;
+-		}
+-		cur_devices = cur_devices->seed;
++	list_for_each_entry_rcu(dev, &fs_info->fs_devices->devices, dev_list) {
++		if (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state))
++			continue;
++		if (!dev->name)
++			continue;
++		if (!first_dev || dev->devid < first_dev->devid)
++			first_dev = dev;
+ 	}
+ 
+ 	if (first_dev)
+diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
+index a39bff64ff24e..abc4a8fd6df65 100644
+--- a/fs/btrfs/sysfs.c
++++ b/fs/btrfs/sysfs.c
+@@ -1273,7 +1273,9 @@ int btrfs_sysfs_add_devices_dir(struct btrfs_fs_devices *fs_devices,
+ {
+ 	int error = 0;
+ 	struct btrfs_device *dev;
++	unsigned int nofs_flag;
+ 
++	nofs_flag = memalloc_nofs_save();
+ 	list_for_each_entry(dev, &fs_devices->devices, dev_list) {
+ 
+ 		if (one_device && one_device != dev)
+@@ -1301,6 +1303,7 @@ int btrfs_sysfs_add_devices_dir(struct btrfs_fs_devices *fs_devices,
+ 			break;
+ 		}
+ 	}
++	memalloc_nofs_restore(nofs_flag);
+ 
+ 	return error;
+ }
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index b359d4b17658b..2710f8ddb95fb 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -937,7 +937,10 @@ static int __btrfs_end_transaction(struct btrfs_trans_handle *trans,
+ 	if (TRANS_ABORTED(trans) ||
+ 	    test_bit(BTRFS_FS_STATE_ERROR, &info->fs_state)) {
+ 		wake_up_process(info->transaction_kthread);
+-		err = -EIO;
++		if (TRANS_ABORTED(trans))
++			err = trans->aborted;
++		else
++			err = -EROFS;
+ 	}
+ 
+ 	kmem_cache_free(btrfs_trans_handle_cachep, trans);
+@@ -1630,7 +1633,7 @@ static noinline int create_pending_snapshot(struct btrfs_trans_handle *trans,
+ 	}
+ 
+ 	key.offset = (u64)-1;
+-	pending->snap = btrfs_get_fs_root(fs_info, objectid, true);
++	pending->snap = btrfs_get_new_fs_root(fs_info, objectid, pending->anon_dev);
+ 	if (IS_ERR(pending->snap)) {
+ 		ret = PTR_ERR(pending->snap);
+ 		btrfs_abort_transaction(trans, ret);
+diff --git a/fs/btrfs/transaction.h b/fs/btrfs/transaction.h
+index bf102e64bfb25..a122a712f5cc0 100644
+--- a/fs/btrfs/transaction.h
++++ b/fs/btrfs/transaction.h
+@@ -151,6 +151,8 @@ struct btrfs_pending_snapshot {
+ 	struct btrfs_block_rsv block_rsv;
+ 	/* extra metadata reservation for relocation */
+ 	int error;
++	/* Preallocated anonymous block device number */
++	dev_t anon_dev;
+ 	bool readonly;
+ 	struct list_head list;
+ };
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index cd5348f352ddc..d22ff1e0963c6 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3116,29 +3116,17 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ 	btrfs_init_log_ctx(&root_log_ctx, NULL);
+ 
+ 	mutex_lock(&log_root_tree->log_mutex);
+-	atomic_inc(&log_root_tree->log_batch);
+-	atomic_inc(&log_root_tree->log_writers);
+ 
+ 	index2 = log_root_tree->log_transid % 2;
+ 	list_add_tail(&root_log_ctx.list, &log_root_tree->log_ctxs[index2]);
+ 	root_log_ctx.log_transid = log_root_tree->log_transid;
+ 
+-	mutex_unlock(&log_root_tree->log_mutex);
+-
+-	mutex_lock(&log_root_tree->log_mutex);
+-
+ 	/*
+ 	 * Now we are safe to update the log_root_tree because we're under the
+ 	 * log_mutex, and we're a current writer so we're holding the commit
+ 	 * open until we drop the log_mutex.
+ 	 */
+ 	ret = update_log_root(trans, log, &new_root_item);
+-
+-	if (atomic_dec_and_test(&log_root_tree->log_writers)) {
+-		/* atomic_dec_and_test implies a barrier */
+-		cond_wake_up_nomb(&log_root_tree->log_writer_wait);
+-	}
+-
+ 	if (ret) {
+ 		if (!list_empty(&root_log_ctx.list))
+ 			list_del_init(&root_log_ctx.list);
+@@ -3184,8 +3172,6 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ 				root_log_ctx.log_transid - 1);
+ 	}
+ 
+-	wait_for_writer(log_root_tree);
+-
+ 	/*
+ 	 * now that we've moved on to the tree of log tree roots,
+ 	 * check the full commit flag again
+@@ -4041,11 +4027,8 @@ static noinline int copy_items(struct btrfs_trans_handle *trans,
+ 						fs_info->csum_root,
+ 						ds + cs, ds + cs + cl - 1,
+ 						&ordered_sums, 0);
+-				if (ret) {
+-					btrfs_release_path(dst_path);
+-					kfree(ins_data);
+-					return ret;
+-				}
++				if (ret)
++					break;
+ 			}
+ 		}
+ 	}
+@@ -4058,7 +4041,6 @@ static noinline int copy_items(struct btrfs_trans_handle *trans,
+ 	 * we have to do this after the loop above to avoid changing the
+ 	 * log tree while trying to change the log tree.
+ 	 */
+-	ret = 0;
+ 	while (!list_empty(&ordered_sums)) {
+ 		struct btrfs_ordered_sum *sums = list_entry(ordered_sums.next,
+ 						   struct btrfs_ordered_sum,
+@@ -5123,14 +5105,13 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
+ 			   const loff_t end,
+ 			   struct btrfs_log_ctx *ctx)
+ {
+-	struct btrfs_fs_info *fs_info = root->fs_info;
+ 	struct btrfs_path *path;
+ 	struct btrfs_path *dst_path;
+ 	struct btrfs_key min_key;
+ 	struct btrfs_key max_key;
+ 	struct btrfs_root *log = root->log_root;
+ 	int err = 0;
+-	int ret;
++	int ret = 0;
+ 	bool fast_search = false;
+ 	u64 ino = btrfs_ino(inode);
+ 	struct extent_map_tree *em_tree = &inode->extent_tree;
+@@ -5166,15 +5147,19 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
+ 	max_key.offset = (u64)-1;
+ 
+ 	/*
+-	 * Only run delayed items if we are a dir or a new file.
+-	 * Otherwise commit the delayed inode only, which is needed in
+-	 * order for the log replay code to mark inodes for link count
+-	 * fixup (create temporary BTRFS_TREE_LOG_FIXUP_OBJECTID items).
++	 * Only run delayed items if we are a directory. We want to make sure
++	 * all directory indexes hit the fs/subvolume tree so we can find them
++	 * and figure out which index ranges have to be logged.
++	 *
++	 * Otherwise commit the delayed inode only if the full sync flag is set,
++	 * as we want to make sure an up to date version is in the subvolume
++	 * tree so copy_inode_items_to_log() / copy_items() can find it and copy
++	 * it to the log tree. For a non-full sync, we always log the inode item
++	 * based on the in-memory struct btrfs_inode which is always up to date.
+ 	 */
+-	if (S_ISDIR(inode->vfs_inode.i_mode) ||
+-	    inode->generation > fs_info->last_trans_committed)
++	if (S_ISDIR(inode->vfs_inode.i_mode))
+ 		ret = btrfs_commit_inode_delayed_items(trans, inode);
+-	else
++	else if (test_bit(BTRFS_INODE_NEEDS_FULL_SYNC, &inode->runtime_flags))
+ 		ret = btrfs_commit_inode_delayed_inode(inode);
+ 
+ 	if (ret) {
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index f403fb1e6d379..0fecf1e4d8f66 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -245,7 +245,9 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
+  *
+  * global::fs_devs - add, remove, updates to the global list
+  *
+- * does not protect: manipulation of the fs_devices::devices list!
++ * does not protect: manipulation of the fs_devices::devices list in general
++ * but in mount context it could be used to exclude list modifications by
++ * e.g. the scan ioctl
+  *
+  * btrfs_device::name - renames (write side), read is RCU
+  *
+@@ -258,6 +260,9 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
+  * may be used to exclude some operations from running concurrently without any
+  * modifications to the list (see write_all_supers)
+  *
++ * It is not required at mount and close times, because our device list is
++ * protected by the uuid_mutex at that point.
++ *
+  * balance_mutex
+  * -------------
+  * protects balance structures (status, state) and context accessed from
+@@ -602,6 +607,11 @@ static int btrfs_free_stale_devices(const char *path,
+ 	return ret;
+ }
+ 
++/*
++ * This is only used on mount, and we are protected from competing things
++ * messing with our fs_devices by the uuid_mutex, thus we do not need the
++ * fs_devices->device_list_mutex here.
++ */
+ static int btrfs_open_one_device(struct btrfs_fs_devices *fs_devices,
+ 			struct btrfs_device *device, fmode_t flags,
+ 			void *holder)
+@@ -1229,8 +1239,14 @@ int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
+ 	int ret;
+ 
+ 	lockdep_assert_held(&uuid_mutex);
++	/*
++	 * The device_list_mutex cannot be taken here in case opening the
++	 * underlying device takes further locks like bd_mutex.
++	 *
++	 * We also don't need the lock here as this is called during mount and
++	 * exclusion is provided by uuid_mutex
++	 */
+ 
+-	mutex_lock(&fs_devices->device_list_mutex);
+ 	if (fs_devices->opened) {
+ 		fs_devices->opened++;
+ 		ret = 0;
+@@ -1238,7 +1254,6 @@ int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
+ 		list_sort(NULL, &fs_devices->devices, devid_cmp);
+ 		ret = open_fs_devices(fs_devices, flags, holder);
+ 	}
+-	mutex_unlock(&fs_devices->device_list_mutex);
+ 
+ 	return ret;
+ }
+@@ -3231,7 +3246,7 @@ static int del_balance_item(struct btrfs_fs_info *fs_info)
+ 	if (!path)
+ 		return -ENOMEM;
+ 
+-	trans = btrfs_start_transaction(root, 0);
++	trans = btrfs_start_transaction_fallback_global_rsv(root, 0);
+ 	if (IS_ERR(trans)) {
+ 		btrfs_free_path(path);
+ 		return PTR_ERR(trans);
+@@ -4135,7 +4150,22 @@ int btrfs_balance(struct btrfs_fs_info *fs_info,
+ 	mutex_lock(&fs_info->balance_mutex);
+ 	if (ret == -ECANCELED && atomic_read(&fs_info->balance_pause_req))
+ 		btrfs_info(fs_info, "balance: paused");
+-	else if (ret == -ECANCELED && atomic_read(&fs_info->balance_cancel_req))
++	/*
++	 * Balance can be canceled by:
++	 *
++	 * - Regular cancel request
++	 *   Then ret == -ECANCELED and balance_cancel_req > 0
++	 *
++	 * - Fatal signal to "btrfs" process
++	 *   Either the signal is caught by wait_reserve_ticket() and the
++	 *   callers get -EINTR, or it is caught by
++	 *   btrfs_should_cancel_balance() and they get -ECANCELED.
++	 *   Either way, in this case balance_cancel_req = 0, and
++	 *   ret == -EINTR or ret == -ECANCELED.
++	 *
++	 * So here we only check the return value to catch canceled balance.
++	 */
++	else if (ret == -ECANCELED || ret == -EINTR)
+ 		btrfs_info(fs_info, "balance: canceled");
+ 	else
+ 		btrfs_info(fs_info, "balance: ended with status: %d", ret);
+@@ -4690,6 +4720,10 @@ again:
+ 	}
+ 
+ 	mutex_lock(&fs_info->chunk_mutex);
++	/* Clear all state bits beyond the shrunk device size */
++	clear_extent_bits(&device->alloc_state, new_size, (u64)-1,
++			  CHUNK_STATE_MASK);
++
+ 	btrfs_device_set_disk_total_bytes(device, new_size);
+ 	if (list_empty(&device->post_commit_list))
+ 		list_add_tail(&device->post_commit_list,
+@@ -7049,7 +7083,6 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info)
+ 	 * otherwise we don't need it.
+ 	 */
+ 	mutex_lock(&uuid_mutex);
+-	mutex_lock(&fs_info->chunk_mutex);
+ 
+ 	/*
+ 	 * It is possible for mount and umount to race in such a way that
+@@ -7094,7 +7127,9 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info)
+ 		} else if (found_key.type == BTRFS_CHUNK_ITEM_KEY) {
+ 			struct btrfs_chunk *chunk;
+ 			chunk = btrfs_item_ptr(leaf, slot, struct btrfs_chunk);
++			mutex_lock(&fs_info->chunk_mutex);
+ 			ret = read_one_chunk(&found_key, leaf, chunk);
++			mutex_unlock(&fs_info->chunk_mutex);
+ 			if (ret)
+ 				goto error;
+ 		}
+@@ -7124,7 +7159,6 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info)
+ 	}
+ 	ret = 0;
+ error:
+-	mutex_unlock(&fs_info->chunk_mutex);
+ 	mutex_unlock(&uuid_mutex);
+ 
+ 	btrfs_free_path(path);
+diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c
+index 39f5311404b08..060bdcc5ce32c 100644
+--- a/fs/ceph/dir.c
++++ b/fs/ceph/dir.c
+@@ -930,6 +930,10 @@ static int ceph_symlink(struct inode *dir, struct dentry *dentry,
+ 	req->r_num_caps = 2;
+ 	req->r_dentry_drop = CEPH_CAP_FILE_SHARED | CEPH_CAP_AUTH_EXCL;
+ 	req->r_dentry_unless = CEPH_CAP_FILE_EXCL;
++	if (as_ctx.pagelist) {
++		req->r_pagelist = as_ctx.pagelist;
++		as_ctx.pagelist = NULL;
++	}
+ 	err = ceph_mdsc_do_request(mdsc, dir, req);
+ 	if (!err && !req->r_reply_info.head->is_dentry)
+ 		err = ceph_handle_notrace_create(dir, dentry);
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index a50497142e598..dea971f9d89ee 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -3279,8 +3279,10 @@ static void handle_session(struct ceph_mds_session *session,
+ 			goto bad;
+ 		/* version >= 3, feature bits */
+ 		ceph_decode_32_safe(&p, end, len, bad);
+-		ceph_decode_64_safe(&p, end, features, bad);
+-		p += len - sizeof(features);
++		if (len) {
++			ceph_decode_64_safe(&p, end, features, bad);
++			p += len - sizeof(features);
++		}
+ 	}
+ 
+ 	mutex_lock(&mdsc->mutex);
+diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
+index b9db73687eaaf..eba01d0908dd9 100644
+--- a/fs/cifs/smb2inode.c
++++ b/fs/cifs/smb2inode.c
+@@ -115,6 +115,7 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 	vars->oparms.fid = &fid;
+ 	vars->oparms.reconnect = false;
+ 	vars->oparms.mode = mode;
++	vars->oparms.cifs_sb = cifs_sb;
+ 
+ 	rqst[num_rqst].rq_iov = &vars->open_iov[0];
+ 	rqst[num_rqst].rq_nvec = SMB2_CREATE_IOV_SIZE;
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index 157992864ce7e..d88e2683626e7 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -508,15 +508,31 @@ cifs_ses_oplock_break(struct work_struct *work)
+ 	kfree(lw);
+ }
+ 
++static void
++smb2_queue_pending_open_break(struct tcon_link *tlink, __u8 *lease_key,
++			      __le32 new_lease_state)
++{
++	struct smb2_lease_break_work *lw;
++
++	lw = kmalloc(sizeof(struct smb2_lease_break_work), GFP_KERNEL);
++	if (!lw) {
++		cifs_put_tlink(tlink);
++		return;
++	}
++
++	INIT_WORK(&lw->lease_break, cifs_ses_oplock_break);
++	lw->tlink = tlink;
++	lw->lease_state = new_lease_state;
++	memcpy(lw->lease_key, lease_key, SMB2_LEASE_KEY_SIZE);
++	queue_work(cifsiod_wq, &lw->lease_break);
++}
++
+ static bool
+-smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp,
+-		    struct smb2_lease_break_work *lw)
++smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp)
+ {
+-	bool found;
+ 	__u8 lease_state;
+ 	struct list_head *tmp;
+ 	struct cifsFileInfo *cfile;
+-	struct cifs_pending_open *open;
+ 	struct cifsInodeInfo *cinode;
+ 	int ack_req = le32_to_cpu(rsp->Flags &
+ 				  SMB2_NOTIFY_BREAK_LEASE_FLAG_ACK_REQUIRED);
+@@ -546,22 +562,29 @@ smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp,
+ 		cfile->oplock_level = lease_state;
+ 
+ 		cifs_queue_oplock_break(cfile);
+-		kfree(lw);
+ 		return true;
+ 	}
+ 
+-	found = false;
++	return false;
++}
++
++static struct cifs_pending_open *
++smb2_tcon_find_pending_open_lease(struct cifs_tcon *tcon,
++				  struct smb2_lease_break *rsp)
++{
++	__u8 lease_state = le32_to_cpu(rsp->NewLeaseState);
++	int ack_req = le32_to_cpu(rsp->Flags &
++				  SMB2_NOTIFY_BREAK_LEASE_FLAG_ACK_REQUIRED);
++	struct cifs_pending_open *open;
++	struct cifs_pending_open *found = NULL;
++
+ 	list_for_each_entry(open, &tcon->pending_opens, olist) {
+ 		if (memcmp(open->lease_key, rsp->LeaseKey,
+ 			   SMB2_LEASE_KEY_SIZE))
+ 			continue;
+ 
+ 		if (!found && ack_req) {
+-			found = true;
+-			memcpy(lw->lease_key, open->lease_key,
+-			       SMB2_LEASE_KEY_SIZE);
+-			lw->tlink = cifs_get_tlink(open->tlink);
+-			queue_work(cifsiod_wq, &lw->lease_break);
++			found = open;
+ 		}
+ 
+ 		cifs_dbg(FYI, "found in the pending open list\n");
+@@ -582,14 +605,7 @@ smb2_is_valid_lease_break(char *buffer)
+ 	struct TCP_Server_Info *server;
+ 	struct cifs_ses *ses;
+ 	struct cifs_tcon *tcon;
+-	struct smb2_lease_break_work *lw;
+-
+-	lw = kmalloc(sizeof(struct smb2_lease_break_work), GFP_KERNEL);
+-	if (!lw)
+-		return false;
+-
+-	INIT_WORK(&lw->lease_break, cifs_ses_oplock_break);
+-	lw->lease_state = rsp->NewLeaseState;
++	struct cifs_pending_open *open;
+ 
+ 	cifs_dbg(FYI, "Checking for lease break\n");
+ 
+@@ -607,11 +623,27 @@ smb2_is_valid_lease_break(char *buffer)
+ 				spin_lock(&tcon->open_file_lock);
+ 				cifs_stats_inc(
+ 				    &tcon->stats.cifs_stats.num_oplock_brks);
+-				if (smb2_tcon_has_lease(tcon, rsp, lw)) {
++				if (smb2_tcon_has_lease(tcon, rsp)) {
+ 					spin_unlock(&tcon->open_file_lock);
+ 					spin_unlock(&cifs_tcp_ses_lock);
+ 					return true;
+ 				}
++				open = smb2_tcon_find_pending_open_lease(tcon,
++									 rsp);
++				if (open) {
++					__u8 lease_key[SMB2_LEASE_KEY_SIZE];
++					struct tcon_link *tlink;
++
++					tlink = cifs_get_tlink(open->tlink);
++					memcpy(lease_key, open->lease_key,
++					       SMB2_LEASE_KEY_SIZE);
++					spin_unlock(&tcon->open_file_lock);
++					spin_unlock(&cifs_tcp_ses_lock);
++					smb2_queue_pending_open_break(tlink,
++								      lease_key,
++								      rsp->NewLeaseState);
++					return true;
++				}
+ 				spin_unlock(&tcon->open_file_lock);
+ 
+ 				if (tcon->crfid.is_valid &&
+@@ -629,7 +661,6 @@ smb2_is_valid_lease_break(char *buffer)
+ 		}
+ 	}
+ 	spin_unlock(&cifs_tcp_ses_lock);
+-	kfree(lw);
+ 	cifs_dbg(FYI, "Can not process lease break - no lease matched\n");
+ 	return false;
+ }
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 2f4cdd290c464..4926887640048 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -1387,6 +1387,8 @@ SMB2_auth_kerberos(struct SMB2_sess_data *sess_data)
+ 	spnego_key = cifs_get_spnego_key(ses);
+ 	if (IS_ERR(spnego_key)) {
+ 		rc = PTR_ERR(spnego_key);
++		if (rc == -ENOKEY)
++			cifs_dbg(VFS, "Verify user has a krb5 ticket and keyutils is installed\n");
+ 		spnego_key = NULL;
+ 		goto out;
+ 	}
+diff --git a/fs/ext2/ialloc.c b/fs/ext2/ialloc.c
+index fda7d3f5b4be5..432c3febea6df 100644
+--- a/fs/ext2/ialloc.c
++++ b/fs/ext2/ialloc.c
+@@ -80,6 +80,7 @@ static void ext2_release_inode(struct super_block *sb, int group, int dir)
+ 	if (dir)
+ 		le16_add_cpu(&desc->bg_used_dirs_count, -1);
+ 	spin_unlock(sb_bgl_lock(EXT2_SB(sb), group));
++	percpu_counter_inc(&EXT2_SB(sb)->s_freeinodes_counter);
+ 	if (dir)
+ 		percpu_counter_dec(&EXT2_SB(sb)->s_dirs_counter);
+ 	mark_buffer_dirty(bh);
+@@ -528,7 +529,7 @@ got:
+ 		goto fail;
+ 	}
+ 
+-	percpu_counter_add(&sbi->s_freeinodes_counter, -1);
++	percpu_counter_dec(&sbi->s_freeinodes_counter);
+ 	if (S_ISDIR(mode))
+ 		percpu_counter_inc(&sbi->s_dirs_counter);
+ 
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 1e02a8c106b0a..f6fbe61b1251e 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -1353,6 +1353,8 @@ int f2fs_write_multi_pages(struct compress_ctx *cc,
+ 		err = f2fs_write_compressed_pages(cc, submitted,
+ 							wbc, io_type);
+ 		cops->destroy_compress_ctx(cc);
++		kfree(cc->cpages);
++		cc->cpages = NULL;
+ 		if (!err)
+ 			return 0;
+ 		f2fs_bug_on(F2FS_I_SB(cc->inode), err != -EAGAIN);
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 326c63879ddc8..6e9017e6a8197 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -3432,6 +3432,10 @@ static int f2fs_write_end(struct file *file,
+ 	if (f2fs_compressed_file(inode) && fsdata) {
+ 		f2fs_compress_write_end(inode, fsdata, page->index, copied);
+ 		f2fs_update_time(F2FS_I_SB(inode), REQ_TIME);
++
++		if (pos + copied > i_size_read(inode) &&
++				!f2fs_verity_in_progress(inode))
++			f2fs_i_size_write(inode, pos + copied);
+ 		return copied;
+ 	}
+ #endif
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index 6306eaae378b2..6d2ea788d0a17 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -1351,9 +1351,15 @@ int gfs2_extent_map(struct inode *inode, u64 lblock, int *new, u64 *dblock, unsi
+ 	return ret;
+ }
+ 
++/*
++ * NOTE: Never call gfs2_block_zero_range with an open transaction because it
++ * uses iomap write to perform its actions, which begin their own transactions
++ * (iomap_begin, page_prepare, etc.)
++ */
+ static int gfs2_block_zero_range(struct inode *inode, loff_t from,
+ 				 unsigned int length)
+ {
++	BUG_ON(current->journal_info);
+ 	return iomap_zero_range(inode, from, length, NULL, &gfs2_iomap_ops);
+ }
+ 
+@@ -1414,6 +1420,16 @@ static int trunc_start(struct inode *inode, u64 newsize)
+ 	u64 oldsize = inode->i_size;
+ 	int error;
+ 
++	if (!gfs2_is_stuffed(ip)) {
++		unsigned int blocksize = i_blocksize(inode);
++		unsigned int offs = newsize & (blocksize - 1);
++		if (offs) {
++			error = gfs2_block_zero_range(inode, newsize,
++						      blocksize - offs);
++			if (error)
++				return error;
++		}
++	}
+ 	if (journaled)
+ 		error = gfs2_trans_begin(sdp, RES_DINODE + RES_JDATA, GFS2_JTRUNC_REVOKES);
+ 	else
+@@ -1427,19 +1443,10 @@ static int trunc_start(struct inode *inode, u64 newsize)
+ 
+ 	gfs2_trans_add_meta(ip->i_gl, dibh);
+ 
+-	if (gfs2_is_stuffed(ip)) {
++	if (gfs2_is_stuffed(ip))
+ 		gfs2_buffer_clear_tail(dibh, sizeof(struct gfs2_dinode) + newsize);
+-	} else {
+-		unsigned int blocksize = i_blocksize(inode);
+-		unsigned int offs = newsize & (blocksize - 1);
+-		if (offs) {
+-			error = gfs2_block_zero_range(inode, newsize,
+-						      blocksize - offs);
+-			if (error)
+-				goto out;
+-		}
++	else
+ 		ip->i_diskflags |= GFS2_DIF_TRUNC_IN_PROG;
+-	}
+ 
+ 	i_size_write(inode, newsize);
+ 	ip->i_inode.i_mtime = ip->i_inode.i_ctime = current_time(&ip->i_inode);
+@@ -2448,25 +2455,7 @@ int __gfs2_punch_hole(struct file *file, loff_t offset, loff_t length)
+ 	loff_t start, end;
+ 	int error;
+ 
+-	start = round_down(offset, blocksize);
+-	end = round_up(offset + length, blocksize) - 1;
+-	error = filemap_write_and_wait_range(inode->i_mapping, start, end);
+-	if (error)
+-		return error;
+-
+-	if (gfs2_is_jdata(ip))
+-		error = gfs2_trans_begin(sdp, RES_DINODE + 2 * RES_JDATA,
+-					 GFS2_JTRUNC_REVOKES);
+-	else
+-		error = gfs2_trans_begin(sdp, RES_DINODE, 0);
+-	if (error)
+-		return error;
+-
+-	if (gfs2_is_stuffed(ip)) {
+-		error = stuffed_zero_range(inode, offset, length);
+-		if (error)
+-			goto out;
+-	} else {
++	if (!gfs2_is_stuffed(ip)) {
+ 		unsigned int start_off, end_len;
+ 
+ 		start_off = offset & (blocksize - 1);
+@@ -2489,6 +2478,26 @@ int __gfs2_punch_hole(struct file *file, loff_t offset, loff_t length)
+ 		}
+ 	}
+ 
++	start = round_down(offset, blocksize);
++	end = round_up(offset + length, blocksize) - 1;
++	error = filemap_write_and_wait_range(inode->i_mapping, start, end);
++	if (error)
++		return error;
++
++	if (gfs2_is_jdata(ip))
++		error = gfs2_trans_begin(sdp, RES_DINODE + 2 * RES_JDATA,
++					 GFS2_JTRUNC_REVOKES);
++	else
++		error = gfs2_trans_begin(sdp, RES_DINODE, 0);
++	if (error)
++		return error;
++
++	if (gfs2_is_stuffed(ip)) {
++		error = stuffed_zero_range(inode, offset, length);
++		if (error)
++			goto out;
++	}
++
+ 	if (gfs2_is_jdata(ip)) {
+ 		BUG_ON(!current->journal_info);
+ 		gfs2_journaled_truncate_range(inode, offset, length);
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index 8545024a1401f..f92876f4f37a1 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -790,9 +790,11 @@ static void gfs2_glock_poke(struct gfs2_glock *gl)
+ 	struct gfs2_holder gh;
+ 	int error;
+ 
+-	error = gfs2_glock_nq_init(gl, LM_ST_SHARED, flags, &gh);
++	gfs2_holder_init(gl, LM_ST_SHARED, flags, &gh);
++	error = gfs2_glock_nq(&gh);
+ 	if (!error)
+ 		gfs2_glock_dq(&gh);
++	gfs2_holder_uninit(&gh);
+ }
+ 
+ static bool gfs2_try_evict(struct gfs2_glock *gl)
+diff --git a/fs/minix/inode.c b/fs/minix/inode.c
+index 0dd929346f3f3..7b09a9158e401 100644
+--- a/fs/minix/inode.c
++++ b/fs/minix/inode.c
+@@ -150,8 +150,10 @@ static int minix_remount (struct super_block * sb, int * flags, char * data)
+ 	return 0;
+ }
+ 
+-static bool minix_check_superblock(struct minix_sb_info *sbi)
++static bool minix_check_superblock(struct super_block *sb)
+ {
++	struct minix_sb_info *sbi = minix_sb(sb);
++
+ 	if (sbi->s_imap_blocks == 0 || sbi->s_zmap_blocks == 0)
+ 		return false;
+ 
+@@ -161,7 +163,7 @@ static bool minix_check_superblock(struct minix_sb_info *sbi)
+ 	 * of indirect blocks which places the limit well above U32_MAX.
+ 	 */
+ 	if (sbi->s_version == MINIX_V1 &&
+-	    sbi->s_max_size > (7 + 512 + 512*512) * BLOCK_SIZE)
++	    sb->s_maxbytes > (7 + 512 + 512*512) * BLOCK_SIZE)
+ 		return false;
+ 
+ 	return true;
+@@ -202,7 +204,7 @@ static int minix_fill_super(struct super_block *s, void *data, int silent)
+ 	sbi->s_zmap_blocks = ms->s_zmap_blocks;
+ 	sbi->s_firstdatazone = ms->s_firstdatazone;
+ 	sbi->s_log_zone_size = ms->s_log_zone_size;
+-	sbi->s_max_size = ms->s_max_size;
++	s->s_maxbytes = ms->s_max_size;
+ 	s->s_magic = ms->s_magic;
+ 	if (s->s_magic == MINIX_SUPER_MAGIC) {
+ 		sbi->s_version = MINIX_V1;
+@@ -233,7 +235,7 @@ static int minix_fill_super(struct super_block *s, void *data, int silent)
+ 		sbi->s_zmap_blocks = m3s->s_zmap_blocks;
+ 		sbi->s_firstdatazone = m3s->s_firstdatazone;
+ 		sbi->s_log_zone_size = m3s->s_log_zone_size;
+-		sbi->s_max_size = m3s->s_max_size;
++		s->s_maxbytes = m3s->s_max_size;
+ 		sbi->s_ninodes = m3s->s_ninodes;
+ 		sbi->s_nzones = m3s->s_zones;
+ 		sbi->s_dirsize = 64;
+@@ -245,7 +247,7 @@ static int minix_fill_super(struct super_block *s, void *data, int silent)
+ 	} else
+ 		goto out_no_fs;
+ 
+-	if (!minix_check_superblock(sbi))
++	if (!minix_check_superblock(s))
+ 		goto out_illegal_sb;
+ 
+ 	/*
+diff --git a/fs/minix/itree_v1.c b/fs/minix/itree_v1.c
+index 046cc96ee7adb..1fed906042aa8 100644
+--- a/fs/minix/itree_v1.c
++++ b/fs/minix/itree_v1.c
+@@ -29,12 +29,12 @@ static int block_to_path(struct inode * inode, long block, int offsets[DEPTH])
+ 	if (block < 0) {
+ 		printk("MINIX-fs: block_to_path: block %ld < 0 on dev %pg\n",
+ 			block, inode->i_sb->s_bdev);
+-	} else if (block >= (minix_sb(inode->i_sb)->s_max_size/BLOCK_SIZE)) {
+-		if (printk_ratelimit())
+-			printk("MINIX-fs: block_to_path: "
+-			       "block %ld too big on dev %pg\n",
+-				block, inode->i_sb->s_bdev);
+-	} else if (block < 7) {
++		return 0;
++	}
++	if ((u64)block * BLOCK_SIZE >= inode->i_sb->s_maxbytes)
++		return 0;
++
++	if (block < 7) {
+ 		offsets[n++] = block;
+ 	} else if ((block -= 7) < 512) {
+ 		offsets[n++] = 7;
+diff --git a/fs/minix/itree_v2.c b/fs/minix/itree_v2.c
+index f7fc7eccccccd..9d00f31a2d9d1 100644
+--- a/fs/minix/itree_v2.c
++++ b/fs/minix/itree_v2.c
+@@ -32,13 +32,12 @@ static int block_to_path(struct inode * inode, long block, int offsets[DEPTH])
+ 	if (block < 0) {
+ 		printk("MINIX-fs: block_to_path: block %ld < 0 on dev %pg\n",
+ 			block, sb->s_bdev);
+-	} else if ((u64)block * (u64)sb->s_blocksize >=
+-			minix_sb(sb)->s_max_size) {
+-		if (printk_ratelimit())
+-			printk("MINIX-fs: block_to_path: "
+-			       "block %ld too big on dev %pg\n",
+-				block, sb->s_bdev);
+-	} else if (block < DIRCOUNT) {
++		return 0;
++	}
++	if ((u64)block * (u64)sb->s_blocksize >= sb->s_maxbytes)
++		return 0;
++
++	if (block < DIRCOUNT) {
+ 		offsets[n++] = block;
+ 	} else if ((block -= DIRCOUNT) < INDIRCOUNT(sb)) {
+ 		offsets[n++] = DIRCOUNT;
+diff --git a/fs/minix/minix.h b/fs/minix/minix.h
+index df081e8afcc3c..168d45d3de73e 100644
+--- a/fs/minix/minix.h
++++ b/fs/minix/minix.h
+@@ -32,7 +32,6 @@ struct minix_sb_info {
+ 	unsigned long s_zmap_blocks;
+ 	unsigned long s_firstdatazone;
+ 	unsigned long s_log_zone_size;
+-	unsigned long s_max_size;
+ 	int s_dirsize;
+ 	int s_namelen;
+ 	struct buffer_head ** s_imap;
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index f96367a2463e3..63940a7a70be1 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -140,6 +140,7 @@ static int
+ nfs_file_flush(struct file *file, fl_owner_t id)
+ {
+ 	struct inode	*inode = file_inode(file);
++	errseq_t since;
+ 
+ 	dprintk("NFS: flush(%pD2)\n", file);
+ 
+@@ -148,7 +149,9 @@ nfs_file_flush(struct file *file, fl_owner_t id)
+ 		return 0;
+ 
+ 	/* Flush writes to the server and return any errors */
+-	return nfs_wb_all(inode);
++	since = filemap_sample_wb_err(file->f_mapping);
++	nfs_wb_all(inode);
++	return filemap_check_wb_err(file->f_mapping, since);
+ }
+ 
+ ssize_t
+@@ -587,12 +590,14 @@ static const struct vm_operations_struct nfs_file_vm_ops = {
+ 	.page_mkwrite = nfs_vm_page_mkwrite,
+ };
+ 
+-static int nfs_need_check_write(struct file *filp, struct inode *inode)
++static int nfs_need_check_write(struct file *filp, struct inode *inode,
++				int error)
+ {
+ 	struct nfs_open_context *ctx;
+ 
+ 	ctx = nfs_file_open_context(filp);
+-	if (nfs_ctx_key_to_expire(ctx, inode))
++	if (nfs_error_is_fatal_on_server(error) ||
++	    nfs_ctx_key_to_expire(ctx, inode))
+ 		return 1;
+ 	return 0;
+ }
+@@ -603,6 +608,8 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ 	struct inode *inode = file_inode(file);
+ 	unsigned long written = 0;
+ 	ssize_t result;
++	errseq_t since;
++	int error;
+ 
+ 	result = nfs_key_timeout_notify(file, inode);
+ 	if (result)
+@@ -627,6 +634,7 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ 	if (iocb->ki_pos > i_size_read(inode))
+ 		nfs_revalidate_mapping(inode, file->f_mapping);
+ 
++	since = filemap_sample_wb_err(file->f_mapping);
+ 	nfs_start_io_write(inode);
+ 	result = generic_write_checks(iocb, from);
+ 	if (result > 0) {
+@@ -645,7 +653,8 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ 		goto out;
+ 
+ 	/* Return error values */
+-	if (nfs_need_check_write(file, inode)) {
++	error = filemap_check_wb_err(file->f_mapping, since);
++	if (nfs_need_check_write(file, inode, error)) {
+ 		int err = nfs_wb_all(inode);
+ 		if (err < 0)
+ 			result = err;
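
Both flush paths above switch from returning nfs_wb_all() directly to sampling the mapping's writeback error before flushing and checking it afterwards, so each caller only sees errors raised since its own sample. A rough userspace sketch of that sample/check idea (struct wb_err and its helpers are hypothetical simplifications, not the kernel's errseq_t implementation):

#include <stdio.h>

/* Toy error-sequence: an error code plus a generation counter. */
struct wb_err {
	int err;		/* last error recorded */
	unsigned long seq;	/* bumped on every new error */
};

static unsigned long wb_err_sample(const struct wb_err *e)
{
	return e->seq;
}

static void wb_err_record(struct wb_err *e, int err)
{
	e->err = err;
	e->seq++;
}

/* Report an error only if it was recorded after 'since' was sampled. */
static int wb_err_check(const struct wb_err *e, unsigned long since)
{
	return e->seq != since ? e->err : 0;
}

int main(void)
{
	struct wb_err map = { 0, 0 };

	unsigned long since = wb_err_sample(&map);
	wb_err_record(&map, -5);	/* writeback fails (-EIO) */
	printf("flush sees %d\n", wb_err_check(&map, since));

	since = wb_err_sample(&map);	/* re-sampling skips the old error */
	printf("next flush sees %d\n", wb_err_check(&map, since));
	return 0;
}
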
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index de03e440b7eef..048272d60a165 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -790,6 +790,19 @@ ff_layout_choose_best_ds_for_read(struct pnfs_layout_segment *lseg,
+ 	return ff_layout_choose_any_ds_for_read(lseg, start_idx, best_idx);
+ }
+ 
++static struct nfs4_pnfs_ds *
++ff_layout_get_ds_for_read(struct nfs_pageio_descriptor *pgio, int *best_idx)
++{
++	struct pnfs_layout_segment *lseg = pgio->pg_lseg;
++	struct nfs4_pnfs_ds *ds;
++
++	ds = ff_layout_choose_best_ds_for_read(lseg, pgio->pg_mirror_idx,
++					       best_idx);
++	if (ds || !pgio->pg_mirror_idx)
++		return ds;
++	return ff_layout_choose_best_ds_for_read(lseg, 0, best_idx);
++}
++
+ static void
+ ff_layout_pg_get_read(struct nfs_pageio_descriptor *pgio,
+ 		      struct nfs_page *req,
+@@ -840,7 +853,7 @@ retry:
+ 			goto out_nolseg;
+ 	}
+ 
+-	ds = ff_layout_choose_best_ds_for_read(pgio->pg_lseg, 0, &ds_idx);
++	ds = ff_layout_get_ds_for_read(pgio, &ds_idx);
+ 	if (!ds) {
+ 		if (!ff_layout_no_fallback_to_mds(pgio->pg_lseg))
+ 			goto out_mds;
+@@ -1028,11 +1041,24 @@ static void ff_layout_reset_write(struct nfs_pgio_header *hdr, bool retry_pnfs)
+ 	}
+ }
+ 
++static void ff_layout_resend_pnfs_read(struct nfs_pgio_header *hdr)
++{
++	u32 idx = hdr->pgio_mirror_idx + 1;
++	int new_idx = 0;
++
++	if (ff_layout_choose_any_ds_for_read(hdr->lseg, idx + 1, &new_idx))
++		ff_layout_send_layouterror(hdr->lseg);
++	else
++		pnfs_error_mark_layout_for_return(hdr->inode, hdr->lseg);
++	pnfs_read_resend_pnfs(hdr, new_idx);
++}
++
+ static void ff_layout_reset_read(struct nfs_pgio_header *hdr)
+ {
+ 	struct rpc_task *task = &hdr->task;
+ 
+ 	pnfs_layoutcommit_inode(hdr->inode, false);
++	pnfs_error_mark_layout_for_return(hdr->inode, hdr->lseg);
+ 
+ 	if (!test_and_set_bit(NFS_IOHDR_REDO, &hdr->flags)) {
+ 		dprintk("%s Reset task %5u for i/o through MDS "
+@@ -1234,6 +1260,12 @@ static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg,
+ 		break;
+ 	case NFS4ERR_NXIO:
+ 		ff_layout_mark_ds_unreachable(lseg, idx);
++		/*
++		 * Don't return the layout if this is a read and we still
++		 * have layouts to try
++		 */
++		if (opnum == OP_READ)
++			break;
+ 		/* Fallthrough */
+ 	default:
+ 		pnfs_error_mark_layout_for_return(lseg->pls_layout->plh_inode,
+@@ -1247,7 +1279,6 @@ static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg,
+ static int ff_layout_read_done_cb(struct rpc_task *task,
+ 				struct nfs_pgio_header *hdr)
+ {
+-	int new_idx = hdr->pgio_mirror_idx;
+ 	int err;
+ 
+ 	if (task->tk_status < 0) {
+@@ -1267,10 +1298,6 @@ static int ff_layout_read_done_cb(struct rpc_task *task,
+ 	clear_bit(NFS_IOHDR_RESEND_MDS, &hdr->flags);
+ 	switch (err) {
+ 	case -NFS4ERR_RESET_TO_PNFS:
+-		if (ff_layout_choose_best_ds_for_read(hdr->lseg,
+-					hdr->pgio_mirror_idx + 1,
+-					&new_idx))
+-			goto out_layouterror;
+ 		set_bit(NFS_IOHDR_RESEND_PNFS, &hdr->flags);
+ 		return task->tk_status;
+ 	case -NFS4ERR_RESET_TO_MDS:
+@@ -1281,10 +1308,6 @@ static int ff_layout_read_done_cb(struct rpc_task *task,
+ 	}
+ 
+ 	return 0;
+-out_layouterror:
+-	ff_layout_read_record_layoutstats_done(task, hdr);
+-	ff_layout_send_layouterror(hdr->lseg);
+-	hdr->pgio_mirror_idx = new_idx;
+ out_eagain:
+ 	rpc_restart_call_prepare(task);
+ 	return -EAGAIN;
+@@ -1411,10 +1434,9 @@ static void ff_layout_read_release(void *data)
+ 	struct nfs_pgio_header *hdr = data;
+ 
+ 	ff_layout_read_record_layoutstats_done(&hdr->task, hdr);
+-	if (test_bit(NFS_IOHDR_RESEND_PNFS, &hdr->flags)) {
+-		ff_layout_send_layouterror(hdr->lseg);
+-		pnfs_read_resend_pnfs(hdr);
+-	} else if (test_bit(NFS_IOHDR_RESEND_MDS, &hdr->flags))
++	if (test_bit(NFS_IOHDR_RESEND_PNFS, &hdr->flags))
++		ff_layout_resend_pnfs_read(hdr);
++	else if (test_bit(NFS_IOHDR_RESEND_MDS, &hdr->flags))
+ 		ff_layout_reset_read(hdr);
+ 	pnfs_generic_rw_release(data);
+ }
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index 8e5d6223ddd35..a339707654673 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -110,6 +110,7 @@ static int
+ nfs4_file_flush(struct file *file, fl_owner_t id)
+ {
+ 	struct inode	*inode = file_inode(file);
++	errseq_t since;
+ 
+ 	dprintk("NFS: flush(%pD2)\n", file);
+ 
+@@ -125,7 +126,9 @@ nfs4_file_flush(struct file *file, fl_owner_t id)
+ 		return filemap_fdatawrite(file->f_mapping);
+ 
+ 	/* Flush writes to the server and return any errors */
+-	return nfs_wb_all(inode);
++	since = filemap_sample_wb_err(file->f_mapping);
++	nfs_wb_all(inode);
++	return filemap_check_wb_err(file->f_mapping, since);
+ }
+ 
+ #ifdef CONFIG_NFS_V4_2
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 2e2dac29a9e91..45e0585e0667c 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -5845,8 +5845,6 @@ static int _nfs4_get_security_label(struct inode *inode, void *buf,
+ 		return ret;
+ 	if (!(fattr.valid & NFS_ATTR_FATTR_V4_SECURITY_LABEL))
+ 		return -ENOENT;
+-	if (buflen < label.len)
+-		return -ERANGE;
+ 	return 0;
+ }
+ 
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index 47817ef0aadb1..4e0d8a3b89b67 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -4166,7 +4166,11 @@ static int decode_attr_security_label(struct xdr_stream *xdr, uint32_t *bitmap,
+ 			return -EIO;
+ 		if (len < NFS4_MAXLABELLEN) {
+ 			if (label) {
+-				memcpy(label->label, p, len);
++				if (label->len) {
++					if (label->len < len)
++						return -ERANGE;
++					memcpy(label->label, p, len);
++				}
+ 				label->len = len;
+ 				label->pi = pi;
+ 				label->lfs = lfs;
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index d61dac48dff50..75e988caf3cd7 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -2939,7 +2939,8 @@ pnfs_try_to_read_data(struct nfs_pgio_header *hdr,
+ }
+ 
+ /* Resend all requests through pnfs. */
+-void pnfs_read_resend_pnfs(struct nfs_pgio_header *hdr)
++void pnfs_read_resend_pnfs(struct nfs_pgio_header *hdr,
++			   unsigned int mirror_idx)
+ {
+ 	struct nfs_pageio_descriptor pgio;
+ 
+@@ -2950,6 +2951,7 @@ void pnfs_read_resend_pnfs(struct nfs_pgio_header *hdr)
+ 
+ 		nfs_pageio_init_read(&pgio, hdr->inode, false,
+ 					hdr->completion_ops);
++		pgio.pg_mirror_idx = mirror_idx;
+ 		hdr->task.tk_status = nfs_pageio_resend(&pgio, hdr);
+ 	}
+ }
+diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
+index 8e0ada581b92e..2661c44c62db4 100644
+--- a/fs/nfs/pnfs.h
++++ b/fs/nfs/pnfs.h
+@@ -311,7 +311,7 @@ int _pnfs_return_layout(struct inode *);
+ int pnfs_commit_and_return_layout(struct inode *);
+ void pnfs_ld_write_done(struct nfs_pgio_header *);
+ void pnfs_ld_read_done(struct nfs_pgio_header *);
+-void pnfs_read_resend_pnfs(struct nfs_pgio_header *);
++void pnfs_read_resend_pnfs(struct nfs_pgio_header *, unsigned int mirror_idx);
+ struct pnfs_layout_segment *pnfs_update_layout(struct inode *ino,
+ 					       struct nfs_open_context *ctx,
+ 					       loff_t pos,
+diff --git a/fs/ocfs2/ocfs2.h b/fs/ocfs2/ocfs2.h
+index 2dd71d626196d..7993d527edae9 100644
+--- a/fs/ocfs2/ocfs2.h
++++ b/fs/ocfs2/ocfs2.h
+@@ -327,8 +327,8 @@ struct ocfs2_super
+ 	spinlock_t osb_lock;
+ 	u32 s_next_generation;
+ 	unsigned long osb_flags;
+-	s16 s_inode_steal_slot;
+-	s16 s_meta_steal_slot;
++	u16 s_inode_steal_slot;
++	u16 s_meta_steal_slot;
+ 	atomic_t s_num_inodes_stolen;
+ 	atomic_t s_num_meta_stolen;
+ 
+diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c
+index 45745cc3408a5..8c8cf7f4eb34e 100644
+--- a/fs/ocfs2/suballoc.c
++++ b/fs/ocfs2/suballoc.c
+@@ -879,9 +879,9 @@ static void __ocfs2_set_steal_slot(struct ocfs2_super *osb, int slot, int type)
+ {
+ 	spin_lock(&osb->osb_lock);
+ 	if (type == INODE_ALLOC_SYSTEM_INODE)
+-		osb->s_inode_steal_slot = slot;
++		osb->s_inode_steal_slot = (u16)slot;
+ 	else if (type == EXTENT_ALLOC_SYSTEM_INODE)
+-		osb->s_meta_steal_slot = slot;
++		osb->s_meta_steal_slot = (u16)slot;
+ 	spin_unlock(&osb->osb_lock);
+ }
+ 
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index 71ea9ce71a6b8..1d91dd1e8711c 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -78,7 +78,7 @@ struct mount_options
+ 	unsigned long	commit_interval;
+ 	unsigned long	mount_opt;
+ 	unsigned int	atime_quantum;
+-	signed short	slot;
++	unsigned short	slot;
+ 	int		localalloc_opt;
+ 	unsigned int	resv_level;
+ 	int		dir_resv_level;
+@@ -1349,7 +1349,7 @@ static int ocfs2_parse_options(struct super_block *sb,
+ 				goto bail;
+ 			}
+ 			if (option)
+-				mopt->slot = (s16)option;
++				mopt->slot = (u16)option;
+ 			break;
+ 		case Opt_commit:
+ 			if (match_int(&args[0], &option)) {
+diff --git a/fs/ubifs/journal.c b/fs/ubifs/journal.c
+index e5ec1afe1c668..2cf05f87565c2 100644
+--- a/fs/ubifs/journal.c
++++ b/fs/ubifs/journal.c
+@@ -539,7 +539,7 @@ int ubifs_jnl_update(struct ubifs_info *c, const struct inode *dir,
+ 		     const struct fscrypt_name *nm, const struct inode *inode,
+ 		     int deletion, int xent)
+ {
+-	int err, dlen, ilen, len, lnum, ino_offs, dent_offs;
++	int err, dlen, ilen, len, lnum, ino_offs, dent_offs, orphan_added = 0;
+ 	int aligned_dlen, aligned_ilen, sync = IS_DIRSYNC(dir);
+ 	int last_reference = !!(deletion && inode->i_nlink == 0);
+ 	struct ubifs_inode *ui = ubifs_inode(inode);
+@@ -630,6 +630,7 @@ int ubifs_jnl_update(struct ubifs_info *c, const struct inode *dir,
+ 			goto out_finish;
+ 		}
+ 		ui->del_cmtno = c->cmt_no;
++		orphan_added = 1;
+ 	}
+ 
+ 	err = write_head(c, BASEHD, dent, len, &lnum, &dent_offs, sync);
+@@ -702,7 +703,7 @@ out_release:
+ 	kfree(dent);
+ out_ro:
+ 	ubifs_ro_mode(c, err);
+-	if (last_reference)
++	if (orphan_added)
+ 		ubifs_delete_orphan(c, inode->i_ino);
+ 	finish_reservation(c);
+ 	return err;
+@@ -1218,7 +1219,7 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ 	void *p;
+ 	union ubifs_key key;
+ 	struct ubifs_dent_node *dent, *dent2;
+-	int err, dlen1, dlen2, ilen, lnum, offs, len;
++	int err, dlen1, dlen2, ilen, lnum, offs, len, orphan_added = 0;
+ 	int aligned_dlen1, aligned_dlen2, plen = UBIFS_INO_NODE_SZ;
+ 	int last_reference = !!(new_inode && new_inode->i_nlink == 0);
+ 	int move = (old_dir != new_dir);
+@@ -1334,6 +1335,7 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ 			goto out_finish;
+ 		}
+ 		new_ui->del_cmtno = c->cmt_no;
++		orphan_added = 1;
+ 	}
+ 
+ 	err = write_head(c, BASEHD, dent, len, &lnum, &offs, sync);
+@@ -1415,7 +1417,7 @@ out_release:
+ 	release_head(c, BASEHD);
+ out_ro:
+ 	ubifs_ro_mode(c, err);
+-	if (last_reference)
++	if (orphan_added)
+ 		ubifs_delete_orphan(c, new_inode->i_ino);
+ out_finish:
+ 	finish_reservation(c);
+diff --git a/fs/ufs/super.c b/fs/ufs/super.c
+index 1da0be667409b..e3b69fb280e8c 100644
+--- a/fs/ufs/super.c
++++ b/fs/ufs/super.c
+@@ -101,7 +101,7 @@ static struct inode *ufs_nfs_get_inode(struct super_block *sb, u64 ino, u32 gene
+ 	struct ufs_sb_private_info *uspi = UFS_SB(sb)->s_uspi;
+ 	struct inode *inode;
+ 
+-	if (ino < UFS_ROOTINO || ino > uspi->s_ncg * uspi->s_ipg)
++	if (ino < UFS_ROOTINO || ino > (u64)uspi->s_ncg * uspi->s_ipg)
+ 		return ERR_PTR(-ESTALE);
+ 
+ 	inode = ufs_iget(sb, ino);
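
The cast in the check above forces the multiplication itself to happen in 64 bits; without it, s_ncg * s_ipg is evaluated in 32-bit arithmetic and can wrap before being widened. A small demonstration (the values are arbitrary, chosen only to overflow 32 bits):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t ncg = 100000, ipg = 100000;	/* product needs 34 bits */

	/* 32-bit multiply wraps, then the truncated result is widened. */
	uint64_t wrong = ncg * ipg;
	/* Cast one operand first: the multiply is done in 64 bits. */
	uint64_t right = (uint64_t)ncg * ipg;

	printf("wrong=%llu right=%llu\n",
	       (unsigned long long)wrong, (unsigned long long)right);
	return 0;
}
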
+diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
+index 088c1ded27148..ee6412314f8f3 100644
+--- a/include/crypto/if_alg.h
++++ b/include/crypto/if_alg.h
+@@ -135,6 +135,7 @@ struct af_alg_async_req {
+  *			SG?
+  * @enc:		Cryptographic operation to be performed when
+  *			recvmsg is invoked.
++ * @init:		True if metadata has been sent.
+  * @len:		Length of memory allocated for this data structure.
+  */
+ struct af_alg_ctx {
+@@ -151,6 +152,7 @@ struct af_alg_ctx {
+ 	bool more;
+ 	bool merge;
+ 	bool enc;
++	bool init;
+ 
+ 	unsigned int len;
+ };
+@@ -226,7 +228,7 @@ unsigned int af_alg_count_tsgl(struct sock *sk, size_t bytes, size_t offset);
+ void af_alg_pull_tsgl(struct sock *sk, size_t used, struct scatterlist *dst,
+ 		      size_t dst_offset);
+ void af_alg_wmem_wakeup(struct sock *sk);
+-int af_alg_wait_for_data(struct sock *sk, unsigned flags);
++int af_alg_wait_for_data(struct sock *sk, unsigned flags, unsigned min);
+ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 		   unsigned int ivsize);
+ ssize_t af_alg_sendpage(struct socket *sock, struct page *page,
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index f5abba86107d8..2dab217c6047f 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -549,6 +549,16 @@ static inline void i_mmap_unlock_read(struct address_space *mapping)
+ 	up_read(&mapping->i_mmap_rwsem);
+ }
+ 
++static inline void i_mmap_assert_locked(struct address_space *mapping)
++{
++	lockdep_assert_held(&mapping->i_mmap_rwsem);
++}
++
++static inline void i_mmap_assert_write_locked(struct address_space *mapping)
++{
++	lockdep_assert_held_write(&mapping->i_mmap_rwsem);
++}
++
+ /*
+  * Might pages of this file be mapped into userspace?
+  */
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 50650d0d01b9e..a520bf26e5d8e 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -164,7 +164,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
+ 			unsigned long addr, unsigned long sz);
+ pte_t *huge_pte_offset(struct mm_struct *mm,
+ 		       unsigned long addr, unsigned long sz);
+-int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep);
++int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
++				unsigned long *addr, pte_t *ptep);
+ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+ 				unsigned long *start, unsigned long *end);
+ struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address,
+@@ -203,8 +204,9 @@ static inline struct address_space *hugetlb_page_mapping_lock_write(
+ 	return NULL;
+ }
+ 
+-static inline int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr,
+-					pte_t *ptep)
++static inline int huge_pmd_unshare(struct mm_struct *mm,
++					struct vm_area_struct *vma,
++					unsigned long *addr, pte_t *ptep)
+ {
+ 	return 0;
+ }
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index 04bd9279c3fb3..711bdca975be3 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -381,8 +381,8 @@ enum {
+ 
+ #define QI_DEV_EIOTLB_ADDR(a)	((u64)(a) & VTD_PAGE_MASK)
+ #define QI_DEV_EIOTLB_SIZE	(((u64)1) << 11)
+-#define QI_DEV_EIOTLB_GLOB(g)	((u64)g)
+-#define QI_DEV_EIOTLB_PASID(p)	(((u64)p) << 32)
++#define QI_DEV_EIOTLB_GLOB(g)	((u64)(g) & 0x1)
++#define QI_DEV_EIOTLB_PASID(p)	((u64)((p) & 0xfffff) << 32)
+ #define QI_DEV_EIOTLB_SID(sid)	((u64)((sid) & 0xffff) << 16)
+ #define QI_DEV_EIOTLB_QDEP(qd)	((u64)((qd) & 0x1f) << 4)
+ #define QI_DEV_EIOTLB_PFSID(pfsid) (((u64)(pfsid & 0xf) << 12) | \
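
Both macros now mask their argument to the field width before shifting, so a caller passing an over-wide value cannot corrupt neighbouring bits in the descriptor. A compact illustration of the difference (PASID_UNMASKED/PASID_MASKED are made-up names for this sketch):

#include <stdio.h>
#include <stdint.h>

/* Pack a 20-bit PASID into bits 51:32 of a 64-bit descriptor word. */
#define PASID_UNMASKED(p)	((uint64_t)(p) << 32)
#define PASID_MASKED(p)		((uint64_t)((p) & 0xfffff) << 32)

int main(void)
{
	uint32_t bogus = 0x1fffff;	/* 21 bits: one more than the field */

	/* The unmasked form lets bit 20 spill into the adjacent field. */
	printf("unmasked: %#llx\n", (unsigned long long)PASID_UNMASKED(bogus));
	printf("masked:   %#llx\n", (unsigned long long)PASID_MASKED(bogus));
	return 0;
}
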
+diff --git a/include/linux/irq.h b/include/linux/irq.h
+index 8d5bc2c237d74..1b7f4dfee35b3 100644
+--- a/include/linux/irq.h
++++ b/include/linux/irq.h
+@@ -213,6 +213,8 @@ struct irq_data {
+  *				  required
+  * IRQD_HANDLE_ENFORCE_IRQCTX	- Enforce that handle_irq_*() is only invoked
+  *				  from actual interrupt context.
++ * IRQD_AFFINITY_ON_ACTIVATE	- Affinity is set on activation. Don't call
++ *				  irq_chip::irq_set_affinity() when deactivated.
+  */
+ enum {
+ 	IRQD_TRIGGER_MASK		= 0xf,
+@@ -237,6 +239,7 @@ enum {
+ 	IRQD_CAN_RESERVE		= (1 << 26),
+ 	IRQD_MSI_NOMASK_QUIRK		= (1 << 27),
+ 	IRQD_HANDLE_ENFORCE_IRQCTX	= (1 << 28),
++	IRQD_AFFINITY_ON_ACTIVATE	= (1 << 29),
+ };
+ 
+ #define __irqd_to_state(d) ACCESS_PRIVATE((d)->common, state_use_accessors)
+@@ -421,6 +424,16 @@ static inline bool irqd_msi_nomask_quirk(struct irq_data *d)
+ 	return __irqd_to_state(d) & IRQD_MSI_NOMASK_QUIRK;
+ }
+ 
++static inline void irqd_set_affinity_on_activate(struct irq_data *d)
++{
++	__irqd_to_state(d) |= IRQD_AFFINITY_ON_ACTIVATE;
++}
++
++static inline bool irqd_affinity_on_activate(struct irq_data *d)
++{
++	return __irqd_to_state(d) & IRQD_AFFINITY_ON_ACTIVATE;
++}
++
+ #undef __irqd_to_state
+ 
+ static inline irq_hw_number_t irqd_to_hwirq(struct irq_data *d)
+diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
+index 18da4059be09a..bd39a2cf7972b 100644
+--- a/include/linux/libnvdimm.h
++++ b/include/linux/libnvdimm.h
+@@ -78,6 +78,8 @@ struct nvdimm_bus_descriptor {
+ 	const struct attribute_group **attr_groups;
+ 	unsigned long bus_dsm_mask;
+ 	unsigned long cmd_mask;
++	unsigned long dimm_family_mask;
++	unsigned long bus_family_mask;
+ 	struct module *module;
+ 	char *provider_name;
+ 	struct device_node *of_node;
+diff --git a/include/linux/pci-ats.h b/include/linux/pci-ats.h
+index f75c307f346de..df54cd5b15db0 100644
+--- a/include/linux/pci-ats.h
++++ b/include/linux/pci-ats.h
+@@ -28,6 +28,10 @@ int pci_enable_pri(struct pci_dev *pdev, u32 reqs);
+ void pci_disable_pri(struct pci_dev *pdev);
+ int pci_reset_pri(struct pci_dev *pdev);
+ int pci_prg_resp_pasid_required(struct pci_dev *pdev);
++bool pci_pri_supported(struct pci_dev *pdev);
++#else
++static inline bool pci_pri_supported(struct pci_dev *pdev)
++{ return false; }
+ #endif /* CONFIG_PCI_PRI */
+ 
+ #ifdef CONFIG_PCI_PASID
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 1183507df95bf..d05a2c3ed3a6b 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -891,6 +891,8 @@ static inline int sk_memalloc_socks(void)
+ {
+ 	return static_branch_unlikely(&memalloc_socks_key);
+ }
++
++void __receive_sock(struct file *file);
+ #else
+ 
+ static inline int sk_memalloc_socks(void)
+@@ -898,6 +900,8 @@ static inline int sk_memalloc_socks(void)
+ 	return 0;
+ }
+ 
++static inline void __receive_sock(struct file *file)
++{ }
+ #endif
+ 
+ static inline gfp_t sk_gfp_mask(const struct sock *sk, gfp_t gfp_mask)
+diff --git a/include/uapi/linux/btrfs.h b/include/uapi/linux/btrfs.h
+index e6b6cb0f8bc6a..24f6848ad78ec 100644
+--- a/include/uapi/linux/btrfs.h
++++ b/include/uapi/linux/btrfs.h
+@@ -243,6 +243,13 @@ struct btrfs_ioctl_dev_info_args {
+ 	__u8 path[BTRFS_DEVICE_PATH_NAME_MAX];	/* out */
+ };
+ 
++/*
++ * Retrieve information about the filesystem
++ */
++
++/* Request information about checksum type and size */
++#define BTRFS_FS_INFO_FLAG_CSUM_INFO			(1 << 0)
++
+ struct btrfs_ioctl_fs_info_args {
+ 	__u64 max_id;				/* out */
+ 	__u64 num_devices;			/* out */
+@@ -250,8 +257,11 @@ struct btrfs_ioctl_fs_info_args {
+ 	__u32 nodesize;				/* out */
+ 	__u32 sectorsize;			/* out */
+ 	__u32 clone_alignment;			/* out */
+-	__u32 reserved32;
+-	__u64 reserved[122];			/* pad to 1k */
++	/* See BTRFS_FS_INFO_FLAG_* */
++	__u16 csum_type;			/* out */
++	__u16 csum_size;			/* out */
++	__u64 flags;				/* in/out */
++	__u8 reserved[968];			/* pad to 1k */
+ };
+ 
+ /*
+diff --git a/include/uapi/linux/ndctl.h b/include/uapi/linux/ndctl.h
+index 0e09dc5cec192..e9468b9332bd5 100644
+--- a/include/uapi/linux/ndctl.h
++++ b/include/uapi/linux/ndctl.h
+@@ -245,6 +245,10 @@ struct nd_cmd_pkg {
+ #define NVDIMM_FAMILY_MSFT 3
+ #define NVDIMM_FAMILY_HYPERV 4
+ #define NVDIMM_FAMILY_PAPR 5
++#define NVDIMM_FAMILY_MAX NVDIMM_FAMILY_PAPR
++
++#define NVDIMM_BUS_FAMILY_NFIT 0
++#define NVDIMM_BUS_FAMILY_MAX NVDIMM_BUS_FAMILY_NFIT
+ 
+ #define ND_IOCTL_CALL			_IOWR(ND_IOCTL, ND_CMD_CALL,\
+ 					struct nd_cmd_pkg)
+diff --git a/init/main.c b/init/main.c
+index 0ead83e86b5aa..883ded3638e59 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -387,8 +387,6 @@ static int __init bootconfig_params(char *param, char *val,
+ {
+ 	if (strcmp(param, "bootconfig") == 0) {
+ 		bootconfig_found = true;
+-	} else if (strcmp(param, "--") == 0) {
+-		initargs_found = true;
+ 	}
+ 	return 0;
+ }
+@@ -399,19 +397,23 @@ static void __init setup_boot_config(const char *cmdline)
+ 	const char *msg;
+ 	int pos;
+ 	u32 size, csum;
+-	char *data, *copy;
++	char *data, *copy, *err;
+ 	int ret;
+ 
+ 	/* Cut out the bootconfig data even if we have no bootconfig option */
+ 	data = get_boot_config_from_initrd(&size, &csum);
+ 
+ 	strlcpy(tmp_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+-	parse_args("bootconfig", tmp_cmdline, NULL, 0, 0, 0, NULL,
+-		   bootconfig_params);
++	err = parse_args("bootconfig", tmp_cmdline, NULL, 0, 0, 0, NULL,
++			 bootconfig_params);
+ 
+-	if (!bootconfig_found)
++	if (IS_ERR(err) || !bootconfig_found)
+ 		return;
+ 
++	/* parse_args() stops at '--' and returns an address */
++	if (err)
++		initargs_found = true;
++
+ 	if (!data) {
+ 		pr_err("'bootconfig' found on command line, but no bootconfig found\n");
+ 		return;
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 2a9fec53e1591..e68a8f9931065 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -320,12 +320,16 @@ static bool irq_set_affinity_deactivated(struct irq_data *data,
+ 	struct irq_desc *desc = irq_data_to_desc(data);
+ 
+ 	/*
++	 * Correctly handle irq chips that can manage affinity only in the
++	 * activated state
++	 *
+ 	 * If the interrupt is not yet activated, just store the affinity
+ 	 * mask and do not call the chip driver at all. On activation the
+ 	 * driver has to make sure anyway that the interrupt is in a
+ 	 * useable state so startup works.
+ 	 */
+-	if (!IS_ENABLED(CONFIG_IRQ_DOMAIN_HIERARCHY) || irqd_is_activated(data))
++	if (!IS_ENABLED(CONFIG_IRQ_DOMAIN_HIERARCHY) ||
++	    irqd_is_activated(data) || !irqd_affinity_on_activate(data))
+ 		return false;
+ 
+ 	cpumask_copy(desc->irq_common_data.affinity, mask);
+@@ -2731,8 +2735,10 @@ int irq_set_irqchip_state(unsigned int irq, enum irqchip_irq_state which,
+ 
+ 	do {
+ 		chip = irq_data_get_irq_chip(data);
+-		if (WARN_ON_ONCE(!chip))
+-			return -ENODEV;
++		if (WARN_ON_ONCE(!chip)) {
++			err = -ENODEV;
++			goto out_unlock;
++		}
+ 		if (chip->irq_set_irqchip_state)
+ 			break;
+ #ifdef CONFIG_IRQ_DOMAIN_HIERARCHY
+@@ -2745,6 +2751,7 @@ int irq_set_irqchip_state(unsigned int irq, enum irqchip_irq_state which,
+ 	if (data)
+ 		err = chip->irq_set_irqchip_state(data, which, val);
+ 
++out_unlock:
+ 	irq_put_desc_busunlock(desc, flags);
+ 	return err;
+ }
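
The irq_set_irqchip_state() change above replaces a direct return with a jump to a common unlock label, so the bus lock taken by irq_get_desc_buslock() is released on the error path too. A generic userspace sketch of that error-path pattern (do_op() is an invented example, with a pthread mutex standing in for the irq descriptor lock):

#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Error paths jump to the unlock label instead of returning directly,
 * so the mutex can never be leaked on failure. */
static int do_op(int fail)
{
	int err = 0;

	pthread_mutex_lock(&lock);
	if (fail) {
		err = -19;	/* -ENODEV */
		goto out_unlock;
	}
	/* ... the real work would go here ... */
out_unlock:
	pthread_mutex_unlock(&lock);
	return err;
}

int main(void)
{
	printf("%d %d\n", do_op(1), do_op(0));
	return 0;
}
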
+diff --git a/kernel/irq/pm.c b/kernel/irq/pm.c
+index 8f557fa1f4fe4..c6c7e187ae748 100644
+--- a/kernel/irq/pm.c
++++ b/kernel/irq/pm.c
+@@ -185,14 +185,18 @@ void rearm_wake_irq(unsigned int irq)
+ 	unsigned long flags;
+ 	struct irq_desc *desc = irq_get_desc_buslock(irq, &flags, IRQ_GET_DESC_CHECK_GLOBAL);
+ 
+-	if (!desc || !(desc->istate & IRQS_SUSPENDED) ||
+-	    !irqd_is_wakeup_set(&desc->irq_data))
++	if (!desc)
+ 		return;
+ 
++	if (!(desc->istate & IRQS_SUSPENDED) ||
++	    !irqd_is_wakeup_set(&desc->irq_data))
++		goto unlock;
++
+ 	desc->istate &= ~IRQS_SUSPENDED;
+ 	irqd_set(&desc->irq_data, IRQD_WAKEUP_ARMED);
+ 	__enable_irq(desc);
+ 
++unlock:
+ 	irq_put_desc_busunlock(desc, flags);
+ }
+ 
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 2e97febeef77d..72af5d37e9ff1 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1079,9 +1079,20 @@ static int disarm_kprobe_ftrace(struct kprobe *p)
+ 		ipmodify ? &kprobe_ipmodify_enabled : &kprobe_ftrace_enabled);
+ }
+ #else	/* !CONFIG_KPROBES_ON_FTRACE */
+-#define prepare_kprobe(p)	arch_prepare_kprobe(p)
+-#define arm_kprobe_ftrace(p)	(-ENODEV)
+-#define disarm_kprobe_ftrace(p)	(-ENODEV)
++static inline int prepare_kprobe(struct kprobe *p)
++{
++	return arch_prepare_kprobe(p);
++}
++
++static inline int arm_kprobe_ftrace(struct kprobe *p)
++{
++	return -ENODEV;
++}
++
++static inline int disarm_kprobe_ftrace(struct kprobe *p)
++{
++	return -ENODEV;
++}
+ #endif
+ 
+ /* Arm a kprobe with text_mutex */
+@@ -2113,6 +2124,13 @@ static void kill_kprobe(struct kprobe *p)
+ 	 * the original probed function (which will be freed soon) any more.
+ 	 */
+ 	arch_remove_kprobe(p);
++
++	/*
++	 * The module is going away. We should disarm the kprobe which
++	 * is using ftrace.
++	 */
++	if (kprobe_ftrace(p))
++		disarm_kprobe_ftrace(p);
+ }
+ 
+ /* Disable one kprobe */
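
Replacing the stub macros with static inline functions above means the argument is still evaluated and type-checked even when CONFIG_KPROBES_ON_FTRACE is off, which avoids "variable set but not used" warnings in callers. A minimal sketch of the difference (the stub names below are invented for illustration):

#include <stdio.h>

struct kprobe { int dummy; };

/* Macro stub: the argument vanishes in preprocessing, so a caller whose
 * only use of a variable is this call looks like it never uses it. */
#define arm_stub_macro(p)	(-19)	/* -ENODEV */

/* static inline stub: 'p' is a real, type-checked parameter and counts
 * as a use at the call site. */
static inline int arm_stub_inline(struct kprobe *p)
{
	(void)p;
	return -19;	/* -ENODEV */
}

int main(void)
{
	struct kprobe kp = { 0 };

	/* If only the macro were called, 'kp' would look unused to -Wall. */
	printf("%d %d\n", arm_stub_macro(&kp), arm_stub_inline(&kp));
	return 0;
}
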
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index 132f84a5fde3f..f481ab35de2f9 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -1239,13 +1239,16 @@ void kthread_use_mm(struct mm_struct *mm)
+ 	WARN_ON_ONCE(tsk->mm);
+ 
+ 	task_lock(tsk);
++	/* Hold off tlb flush IPIs while switching mm's */
++	local_irq_disable();
+ 	active_mm = tsk->active_mm;
+ 	if (active_mm != mm) {
+ 		mmgrab(mm);
+ 		tsk->active_mm = mm;
+ 	}
+ 	tsk->mm = mm;
+-	switch_mm(active_mm, mm, tsk);
++	switch_mm_irqs_off(active_mm, mm, tsk);
++	local_irq_enable();
+ 	task_unlock(tsk);
+ #ifdef finish_arch_post_lock_switch
+ 	finish_arch_post_lock_switch();
+@@ -1274,9 +1277,11 @@ void kthread_unuse_mm(struct mm_struct *mm)
+ 
+ 	task_lock(tsk);
+ 	sync_mm_rss(mm);
++	local_irq_disable();
+ 	tsk->mm = NULL;
+ 	/* active_mm is still 'mm' */
+ 	enter_lazy_tlb(mm, tsk);
++	local_irq_enable();
+ 	task_unlock(tsk);
+ }
+ EXPORT_SYMBOL_GPL(kthread_unuse_mm);
+diff --git a/kernel/module.c b/kernel/module.c
+index aa183c9ac0a25..08c46084d8cca 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -1520,18 +1520,34 @@ struct module_sect_attrs {
+ 	struct module_sect_attr attrs[];
+ };
+ 
++#define MODULE_SECT_READ_SIZE (3 /* "0x", "\n" */ + (BITS_PER_LONG / 4))
+ static ssize_t module_sect_read(struct file *file, struct kobject *kobj,
+ 				struct bin_attribute *battr,
+ 				char *buf, loff_t pos, size_t count)
+ {
+ 	struct module_sect_attr *sattr =
+ 		container_of(battr, struct module_sect_attr, battr);
++	char bounce[MODULE_SECT_READ_SIZE + 1];
++	size_t wrote;
+ 
+ 	if (pos != 0)
+ 		return -EINVAL;
+ 
+-	return sprintf(buf, "0x%px\n",
+-		       kallsyms_show_value(file->f_cred) ? (void *)sattr->address : NULL);
++	/*
++	 * Since we're a binary read handler, we must account for the
++	 * trailing NUL byte that sprintf will write: if "buf" is
++	 * too small to hold the NUL, or the NUL is exactly the last
++	 * byte, the read will look like it got truncated by one byte.
++	 * Since there is no way to ask sprintf nicely to not write
++	 * the NUL, we have to use a bounce buffer.
++	 */
++	wrote = scnprintf(bounce, sizeof(bounce), "0x%px\n",
++			 kallsyms_show_value(file->f_cred)
++				? (void *)sattr->address : NULL);
++	count = min(count, wrote);
++	memcpy(buf, bounce, count);
++
++	return count;
+ }
+ 
+ static void free_sect_attrs(struct module_sect_attrs *sect_attrs)
+@@ -1580,7 +1596,7 @@ static void add_sect_attrs(struct module *mod, const struct load_info *info)
+ 			goto out;
+ 		sect_attrs->nsections++;
+ 		sattr->battr.read = module_sect_read;
+-		sattr->battr.size = 3 /* "0x", "\n" */ + (BITS_PER_LONG / 4);
++		sattr->battr.size = MODULE_SECT_READ_SIZE;
+ 		sattr->battr.attr.mode = 0400;
+ 		*(gattr++) = &(sattr++)->battr;
+ 	}
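
module_sect_read() above formats into a stack bounce buffer with scnprintf() and then copies at most 'count' payload bytes out, so sysfs readers never lose the last byte to sprintf's trailing NUL. A userspace analogue of the same trick (read_hex_line() is an invented helper; snprintf stands in for the kernel's scnprintf):

#include <stdio.h>
#include <string.h>

/* Fill 'buf' (size 'count') with a hex line, without relying on the
 * formatter's trailing NUL fitting in the caller's buffer. */
static size_t read_hex_line(char *buf, size_t count, unsigned long addr)
{
	/* "0x", hex digits, '\n', plus room for the NUL. */
	char bounce[2 + 2 * sizeof(unsigned long) + 1 + 1];
	size_t wrote = snprintf(bounce, sizeof(bounce), "0x%lx\n", addr);

	if (wrote > count)
		wrote = count;
	memcpy(buf, bounce, wrote);	/* copy only the payload bytes */
	return wrote;
}

int main(void)
{
	/* Exactly the maximum payload size on 64-bit (2 + 16 + 1): a
	 * plain sprintf into this buffer could not also fit its NUL. */
	char out[19];
	size_t n = read_hex_line(out, sizeof(out), 0xdeadbeefUL);

	fwrite(out, 1, n, stdout);
	return 0;
}
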
+diff --git a/kernel/pid.c b/kernel/pid.c
+index f1496b7571621..ee58530d1acad 100644
+--- a/kernel/pid.c
++++ b/kernel/pid.c
+@@ -42,6 +42,7 @@
+ #include <linux/sched/signal.h>
+ #include <linux/sched/task.h>
+ #include <linux/idr.h>
++#include <net/sock.h>
+ 
+ struct pid init_struct_pid = {
+ 	.count		= REFCOUNT_INIT(1),
+@@ -642,10 +643,12 @@ static int pidfd_getfd(struct pid *pid, int fd)
+ 	}
+ 
+ 	ret = get_unused_fd_flags(O_CLOEXEC);
+-	if (ret < 0)
++	if (ret < 0) {
+ 		fput(file);
+-	else
++	} else {
++		__receive_sock(file);
+ 		fd_install(ret, file);
++	}
+ 
+ 	return ret;
+ }
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index c3cbdc436e2e4..f788cd61df212 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -794,6 +794,26 @@ unsigned int sysctl_sched_uclamp_util_max = SCHED_CAPACITY_SCALE;
+ /* All clamps are required to be less or equal than these values */
+ static struct uclamp_se uclamp_default[UCLAMP_CNT];
+ 
++/*
++ * This static key is used to reduce the uclamp overhead in the fast path. It
++ * primarily disables the call to uclamp_rq_{inc, dec}() in
++ * enqueue/dequeue_task().
++ *
++ * This allows users to continue to enable uclamp in their kernel config with
++ * minimum uclamp overhead in the fast path.
++ *
++ * As soon as userspace modifies any of the uclamp knobs, the static key is
++ * enabled, since we have actual users that make use of uclamp
++ * functionality.
++ *
++ * The knobs that would enable this static key are:
++ *
++ *   * A task modifying its uclamp value with sched_setattr().
++ *   * An admin modifying the sysctl_sched_uclamp_{min, max} via procfs.
++ *   * An admin modifying the cgroup cpu.uclamp.{min, max}.
++ */
++DEFINE_STATIC_KEY_FALSE(sched_uclamp_used);
++
+ /* Integer rounded range for each bucket */
+ #define UCLAMP_BUCKET_DELTA DIV_ROUND_CLOSEST(SCHED_CAPACITY_SCALE, UCLAMP_BUCKETS)
+ 
+@@ -990,10 +1010,38 @@ static inline void uclamp_rq_dec_id(struct rq *rq, struct task_struct *p,
+ 
+ 	lockdep_assert_held(&rq->lock);
+ 
++	/*
++	 * If sched_uclamp_used was enabled after task @p was enqueued,
++	 * we could end up with unbalanced call to uclamp_rq_dec_id().
++	 *
++	 * In this case the uc_se->active flag should be false since no uclamp
++	 * accounting was performed at enqueue time and we can just return
++	 * here.
++	 *
++	 * Need to be careful of the following enqueue/dequeue ordering
++	 * problem too
++	 *
++	 *	enqueue(taskA)
++	 *	// sched_uclamp_used gets enabled
++	 *	enqueue(taskB)
++	 *	dequeue(taskA)
++	 *	// Must not decrement bucket->tasks here
++	 *	dequeue(taskB)
++	 *
++	 * where we could end up with stale data in uc_se and
++	 * bucket[uc_se->bucket_id].
++	 *
++	 * The following check here eliminates the possibility of such a race.
++	 */
++	if (unlikely(!uc_se->active))
++		return;
++
+ 	bucket = &uc_rq->bucket[uc_se->bucket_id];
++
+ 	SCHED_WARN_ON(!bucket->tasks);
+ 	if (likely(bucket->tasks))
+ 		bucket->tasks--;
++
+ 	uc_se->active = false;
+ 
+ 	/*
+@@ -1021,6 +1069,15 @@ static inline void uclamp_rq_inc(struct rq *rq, struct task_struct *p)
+ {
+ 	enum uclamp_id clamp_id;
+ 
++	/*
++	 * Avoid any overhead until uclamp is actually used by userspace.
++	 *
++	 * The condition is constructed such that a NOP is generated when
++	 * sched_uclamp_used is disabled.
++	 */
++	if (!static_branch_unlikely(&sched_uclamp_used))
++		return;
++
+ 	if (unlikely(!p->sched_class->uclamp_enabled))
+ 		return;
+ 
+@@ -1036,6 +1093,15 @@ static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p)
+ {
+ 	enum uclamp_id clamp_id;
+ 
++	/*
++	 * Avoid any overhead until uclamp is actually used by userspace.
++	 *
++	 * The condition is constructed such that a NOP is generated when
++	 * sched_uclamp_used is disabled.
++	 */
++	if (!static_branch_unlikely(&sched_uclamp_used))
++		return;
++
+ 	if (unlikely(!p->sched_class->uclamp_enabled))
+ 		return;
+ 
+@@ -1144,8 +1210,10 @@ int sysctl_sched_uclamp_handler(struct ctl_table *table, int write,
+ 		update_root_tg = true;
+ 	}
+ 
+-	if (update_root_tg)
++	if (update_root_tg) {
++		static_branch_enable(&sched_uclamp_used);
+ 		uclamp_update_root_tg();
++	}
+ 
+ 	/*
+ 	 * We update all RUNNABLE tasks only when task groups are in use.
+@@ -1180,6 +1248,15 @@ static int uclamp_validate(struct task_struct *p,
+ 	if (upper_bound > SCHED_CAPACITY_SCALE)
+ 		return -EINVAL;
+ 
++	/*
++	 * We have valid uclamp attributes; make sure uclamp is enabled.
++	 *
++	 * We need to do that here, because enabling static branches is a
++	 * blocking operation which obviously cannot be done while holding
++	 * scheduler locks.
++	 */
++	static_branch_enable(&sched_uclamp_used);
++
+ 	return 0;
+ }
+ 
+@@ -7442,6 +7519,8 @@ static ssize_t cpu_uclamp_write(struct kernfs_open_file *of, char *buf,
+ 	if (req.ret)
+ 		return req.ret;
+ 
++	static_branch_enable(&sched_uclamp_used);
++
+ 	mutex_lock(&uclamp_mutex);
+ 	rcu_read_lock();
+ 
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 7fbaee24c824f..dc6835bc64907 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -210,7 +210,7 @@ unsigned long schedutil_cpu_util(int cpu, unsigned long util_cfs,
+ 	unsigned long dl_util, util, irq;
+ 	struct rq *rq = cpu_rq(cpu);
+ 
+-	if (!IS_BUILTIN(CONFIG_UCLAMP_TASK) &&
++	if (!uclamp_is_used() &&
+ 	    type == FREQUENCY_UTIL && rt_rq_is_runnable(&rq->rt)) {
+ 		return max;
+ 	}
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 877fb08eb1b04..c82857e2e288a 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -862,6 +862,8 @@ struct uclamp_rq {
+ 	unsigned int value;
+ 	struct uclamp_bucket bucket[UCLAMP_BUCKETS];
+ };
++
++DECLARE_STATIC_KEY_FALSE(sched_uclamp_used);
+ #endif /* CONFIG_UCLAMP_TASK */
+ 
+ /*
+@@ -2349,12 +2351,35 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
+ #ifdef CONFIG_UCLAMP_TASK
+ unsigned long uclamp_eff_value(struct task_struct *p, enum uclamp_id clamp_id);
+ 
++/**
++ * uclamp_rq_util_with - clamp @util with @rq and @p effective uclamp values.
++ * @rq:		The rq to clamp against. Must not be NULL.
++ * @util:	The util value to clamp.
++ * @p:		The task to clamp against. Can be NULL if you want to clamp
++ *		against @rq only.
++ *
++ * Clamps the passed @util to the max(@rq, @p) effective uclamp values.
++ *
++ * If sched_uclamp_used static key is disabled, then just return the util
++ * without any clamping since uclamp aggregation at the rq level in the fast
++ * path is disabled, rendering this operation a NOP.
++ *
++ * Use uclamp_eff_value() if you don't care about uclamp values at rq level. It
++ * will return the correct effective uclamp value of the task even if the
++ * static key is disabled.
++ */
+ static __always_inline
+ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+ 				  struct task_struct *p)
+ {
+-	unsigned long min_util = READ_ONCE(rq->uclamp[UCLAMP_MIN].value);
+-	unsigned long max_util = READ_ONCE(rq->uclamp[UCLAMP_MAX].value);
++	unsigned long min_util;
++	unsigned long max_util;
++
++	if (!static_branch_likely(&sched_uclamp_used))
++		return util;
++
++	min_util = READ_ONCE(rq->uclamp[UCLAMP_MIN].value);
++	max_util = READ_ONCE(rq->uclamp[UCLAMP_MAX].value);
+ 
+ 	if (p) {
+ 		min_util = max(min_util, uclamp_eff_value(p, UCLAMP_MIN));
+@@ -2371,6 +2396,19 @@ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+ 
+ 	return clamp(util, min_util, max_util);
+ }
++
++/*
++ * When uclamp is compiled in, the aggregation at rq level is 'turned off'
++ * by default in the fast path and only gets turned on once userspace performs
++ * an operation that requires it.
++ *
++ * Returns true if userspace opted in to use uclamp, in which case aggregation
++ * at the rq level is active.
++ */
++static inline bool uclamp_is_used(void)
++{
++	return static_branch_likely(&sched_uclamp_used);
++}
+ #else /* CONFIG_UCLAMP_TASK */
+ static inline
+ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+@@ -2378,6 +2416,11 @@ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+ {
+ 	return util;
+ }
++
++static inline bool uclamp_is_used(void)
++{
++	return false;
++}
+ #endif /* CONFIG_UCLAMP_TASK */
+ 
+ #ifdef arch_scale_freq_capacity
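
As a worked example of the clamping documented above, with illustrative numbers: rq clamps {min=512, max=800} and task clamps {min=300, max=1024} combine as max() per bound, then the utilization is clamped into the result:

    #include <linux/kernel.h>	/* max(), clamp() in 5.8 */

    static unsigned long uclamp_example(unsigned long util)
    {
    	unsigned long min_util = max(512UL, 300UL);	/* -> 512  */
    	unsigned long max_util = max(800UL, 1024UL);	/* -> 1024 */

    	/* util = 200 -> 512 (boosted); 700 -> 700; 1200 -> 1024 (capped) */
    	return clamp(util, min_util, max_util);
    }
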
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 7d879fae3777f..b5cb5be3ca6f6 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -6187,8 +6187,11 @@ static int referenced_filters(struct dyn_ftrace *rec)
+ 	int cnt = 0;
+ 
+ 	for (ops = ftrace_ops_list; ops != &ftrace_list_end; ops = ops->next) {
+-		if (ops_references_rec(ops, rec))
+-		    cnt++;
++		if (ops_references_rec(ops, rec)) {
++			cnt++;
++			if (ops->flags & FTRACE_OPS_FL_SAVE_REGS)
++				rec->flags |= FTRACE_FL_REGS;
++		}
+ 	}
+ 
+ 	return cnt;
+@@ -6367,8 +6370,8 @@ void ftrace_module_enable(struct module *mod)
+ 		if (ftrace_start_up)
+ 			cnt += referenced_filters(rec);
+ 
+-		/* This clears FTRACE_FL_DISABLED */
+-		rec->flags = cnt;
++		rec->flags &= ~FTRACE_FL_DISABLED;
++		rec->flags += cnt;
+ 
+ 		if (ftrace_start_up && cnt) {
+ 			int failed = __ftrace_replace_code(rec, 1);
+@@ -6966,12 +6969,12 @@ void ftrace_pid_follow_fork(struct trace_array *tr, bool enable)
+ 	if (enable) {
+ 		register_trace_sched_process_fork(ftrace_pid_follow_sched_process_fork,
+ 						  tr);
+-		register_trace_sched_process_exit(ftrace_pid_follow_sched_process_exit,
++		register_trace_sched_process_free(ftrace_pid_follow_sched_process_exit,
+ 						  tr);
+ 	} else {
+ 		unregister_trace_sched_process_fork(ftrace_pid_follow_sched_process_fork,
+ 						    tr);
+-		unregister_trace_sched_process_exit(ftrace_pid_follow_sched_process_exit,
++		unregister_trace_sched_process_free(ftrace_pid_follow_sched_process_exit,
+ 						    tr);
+ 	}
+ }
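
The two-line change in ftrace_module_enable() matters because rec->flags packs a reference count in its low bits alongside flag bits such as the FTRACE_FL_REGS that referenced_filters() now sets. A small sketch of why plain assignment was wrong (values illustrative):

    #include <linux/ftrace.h>

    static void flags_example(void)
    {
    	unsigned long flags = FTRACE_FL_REGS | FTRACE_FL_DISABLED;
    	unsigned long cnt = 2;	/* two ops reference this record */

    	/* Old code: assignment drops FTRACE_FL_REGS along with DISABLED. */
    	unsigned long old_way = cnt;

    	/* New code: clear only DISABLED, keep REGS, add the refcount. */
    	unsigned long new_way = (flags & ~FTRACE_FL_DISABLED) + cnt;

    	(void)old_way;
    	(void)new_way;
    }
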
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index f6f55682d3e2d..a85effb2373bf 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -538,12 +538,12 @@ void trace_event_follow_fork(struct trace_array *tr, bool enable)
+ 	if (enable) {
+ 		register_trace_prio_sched_process_fork(event_filter_pid_sched_process_fork,
+ 						       tr, INT_MIN);
+-		register_trace_prio_sched_process_exit(event_filter_pid_sched_process_exit,
++		register_trace_prio_sched_process_free(event_filter_pid_sched_process_exit,
+ 						       tr, INT_MAX);
+ 	} else {
+ 		unregister_trace_sched_process_fork(event_filter_pid_sched_process_fork,
+ 						    tr);
+-		unregister_trace_sched_process_exit(event_filter_pid_sched_process_exit,
++		unregister_trace_sched_process_free(event_filter_pid_sched_process_exit,
+ 						    tr);
+ 	}
+ }
+diff --git a/kernel/trace/trace_hwlat.c b/kernel/trace/trace_hwlat.c
+index e2be7bb7ef7e2..17e1e49e5b936 100644
+--- a/kernel/trace/trace_hwlat.c
++++ b/kernel/trace/trace_hwlat.c
+@@ -283,6 +283,7 @@ static bool disable_migrate;
+ static void move_to_next_cpu(void)
+ {
+ 	struct cpumask *current_mask = &save_cpumask;
++	struct trace_array *tr = hwlat_trace;
+ 	int next_cpu;
+ 
+ 	if (disable_migrate)
+@@ -296,7 +297,7 @@ static void move_to_next_cpu(void)
+ 		goto disable;
+ 
+ 	get_online_cpus();
+-	cpumask_and(current_mask, cpu_online_mask, tracing_buffer_mask);
++	cpumask_and(current_mask, cpu_online_mask, tr->tracing_cpumask);
+ 	next_cpu = cpumask_next(smp_processor_id(), current_mask);
+ 	put_online_cpus();
+ 
+@@ -373,7 +374,7 @@ static int start_kthread(struct trace_array *tr)
+ 	/* Just pick the first CPU on first iteration */
+ 	current_mask = &save_cpumask;
+ 	get_online_cpus();
+-	cpumask_and(current_mask, cpu_online_mask, tracing_buffer_mask);
++	cpumask_and(current_mask, cpu_online_mask, tr->tracing_cpumask);
+ 	put_online_cpus();
+ 	next_cpu = cpumask_first(current_mask);
+ 
+diff --git a/lib/devres.c b/lib/devres.c
+index 6ef51f159c54b..ca0d28727ccef 100644
+--- a/lib/devres.c
++++ b/lib/devres.c
+@@ -119,6 +119,7 @@ __devm_ioremap_resource(struct device *dev, const struct resource *res,
+ {
+ 	resource_size_t size;
+ 	void __iomem *dest_ptr;
++	char *pretty_name;
+ 
+ 	BUG_ON(!dev);
+ 
+@@ -129,7 +130,15 @@ __devm_ioremap_resource(struct device *dev, const struct resource *res,
+ 
+ 	size = resource_size(res);
+ 
+-	if (!devm_request_mem_region(dev, res->start, size, dev_name(dev))) {
++	if (res->name)
++		pretty_name = devm_kasprintf(dev, GFP_KERNEL, "%s %s",
++					     dev_name(dev), res->name);
++	else
++		pretty_name = devm_kstrdup(dev, dev_name(dev), GFP_KERNEL);
++	if (!pretty_name)
++		return IOMEM_ERR_PTR(-ENOMEM);
++
++	if (!devm_request_mem_region(dev, res->start, size, pretty_name)) {
+ 		dev_err(dev, "can't request region for resource %pR\n", res);
+ 		return IOMEM_ERR_PTR(-EBUSY);
+ 	}
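
The effect of the devres change is visible in /proc/iomem: requested regions now carry the resource name next to the device name. A hedged driver-side sketch (device, driver, and resource names are illustrative):

    #include <linux/platform_device.h>
    #include <linux/io.h>

    static int foo_probe(struct platform_device *pdev)
    {
    	struct resource *res;
    	void __iomem *base;

    	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctrl");
    	base = devm_ioremap_resource(&pdev->dev, res);
    	if (IS_ERR(base))
    		return PTR_ERR(base);
    	/*
    	 * /proc/iomem before: "10000000.foo"
    	 * /proc/iomem after:  "10000000.foo ctrl"
    	 * (addresses and names illustrative)
    	 */
    	return 0;
    }
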
+diff --git a/lib/test_kmod.c b/lib/test_kmod.c
+index e651c37d56dbd..eab52770070d6 100644
+--- a/lib/test_kmod.c
++++ b/lib/test_kmod.c
+@@ -745,7 +745,7 @@ static int trigger_config_run_type(struct kmod_test_device *test_dev,
+ 		break;
+ 	case TEST_KMOD_FS_TYPE:
+ 		kfree_const(config->test_fs);
+-		config->test_driver = NULL;
++		config->test_fs = NULL;
+ 		copied = config_copy_test_fs(config, test_str,
+ 					     strlen(test_str));
+ 		break;
+diff --git a/lib/test_lockup.c b/lib/test_lockup.c
+index bd7c7ff39f6be..e7202763a1688 100644
+--- a/lib/test_lockup.c
++++ b/lib/test_lockup.c
+@@ -512,8 +512,8 @@ static int __init test_lockup_init(void)
+ 	if (test_file_path[0]) {
+ 		test_file = filp_open(test_file_path, O_RDONLY, 0);
+ 		if (IS_ERR(test_file)) {
+-			pr_err("cannot find file_path\n");
+-			return -EINVAL;
++			pr_err("failed to open %s: %ld\n", test_file_path, PTR_ERR(test_file));
++			return PTR_ERR(test_file);
+ 		}
+ 		test_inode = file_inode(test_file);
+ 	} else if (test_lock_inode ||
+diff --git a/mm/cma.c b/mm/cma.c
+index 26ecff8188817..0963c0f9c5022 100644
+--- a/mm/cma.c
++++ b/mm/cma.c
+@@ -93,17 +93,15 @@ static void cma_clear_bitmap(struct cma *cma, unsigned long pfn,
+ 	mutex_unlock(&cma->lock);
+ }
+ 
+-static int __init cma_activate_area(struct cma *cma)
++static void __init cma_activate_area(struct cma *cma)
+ {
+ 	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
+ 	unsigned i = cma->count >> pageblock_order;
+ 	struct zone *zone;
+ 
+ 	cma->bitmap = bitmap_zalloc(cma_bitmap_maxno(cma), GFP_KERNEL);
+-	if (!cma->bitmap) {
+-		cma->count = 0;
+-		return -ENOMEM;
+-	}
++	if (!cma->bitmap)
++		goto out_error;
+ 
+ 	WARN_ON_ONCE(!pfn_valid(pfn));
+ 	zone = page_zone(pfn_to_page(pfn));
+@@ -133,25 +131,22 @@ static int __init cma_activate_area(struct cma *cma)
+ 	spin_lock_init(&cma->mem_head_lock);
+ #endif
+ 
+-	return 0;
++	return;
+ 
+ not_in_zone:
+-	pr_err("CMA area %s could not be activated\n", cma->name);
+ 	bitmap_free(cma->bitmap);
++out_error:
+ 	cma->count = 0;
+-	return -EINVAL;
++	pr_err("CMA area %s could not be activated\n", cma->name);
++	return;
+ }
+ 
+ static int __init cma_init_reserved_areas(void)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < cma_area_count; i++) {
+-		int ret = cma_activate_area(&cma_areas[i]);
+-
+-		if (ret)
+-			return ret;
+-	}
++	for (i = 0; i < cma_area_count; i++)
++		cma_activate_area(&cma_areas[i]);
+ 
+ 	return 0;
+ }
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 590111ea6975d..7952c6cb6f08c 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3952,7 +3952,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ 			continue;
+ 
+ 		ptl = huge_pte_lock(h, mm, ptep);
+-		if (huge_pmd_unshare(mm, &address, ptep)) {
++		if (huge_pmd_unshare(mm, vma, &address, ptep)) {
+ 			spin_unlock(ptl);
+ 			/*
+ 			 * We just unmapped a page of PMDs by clearing a PUD.
+@@ -4539,10 +4539,6 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+ 		} else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))
+ 			return VM_FAULT_HWPOISON_LARGE |
+ 				VM_FAULT_SET_HINDEX(hstate_index(h));
+-	} else {
+-		ptep = huge_pte_alloc(mm, haddr, huge_page_size(h));
+-		if (!ptep)
+-			return VM_FAULT_OOM;
+ 	}
+ 
+ 	/*
+@@ -5019,7 +5015,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
+ 		if (!ptep)
+ 			continue;
+ 		ptl = huge_pte_lock(h, mm, ptep);
+-		if (huge_pmd_unshare(mm, &address, ptep)) {
++		if (huge_pmd_unshare(mm, vma, &address, ptep)) {
+ 			pages++;
+ 			spin_unlock(ptl);
+ 			shared_pmd = true;
+@@ -5313,25 +5309,21 @@ static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr)
+ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+ 				unsigned long *start, unsigned long *end)
+ {
+-	unsigned long check_addr;
++	unsigned long a_start, a_end;
+ 
+ 	if (!(vma->vm_flags & VM_MAYSHARE))
+ 		return;
+ 
+-	for (check_addr = *start; check_addr < *end; check_addr += PUD_SIZE) {
+-		unsigned long a_start = check_addr & PUD_MASK;
+-		unsigned long a_end = a_start + PUD_SIZE;
++	/* Extend the range to be PUD aligned for a worst case scenario */
++	a_start = ALIGN_DOWN(*start, PUD_SIZE);
++	a_end = ALIGN(*end, PUD_SIZE);
+ 
+-		/*
+-		 * If sharing is possible, adjust start/end if necessary.
+-		 */
+-		if (range_in_vma(vma, a_start, a_end)) {
+-			if (a_start < *start)
+-				*start = a_start;
+-			if (a_end > *end)
+-				*end = a_end;
+-		}
+-	}
++	/*
++	 * Intersect the range with the vma range, since pmd sharing won't
++	 * cross vma boundaries after all.
++	 */
++	*start = max(vma->vm_start, a_start);
++	*end = min(vma->vm_end, a_end);
+ }
+ 
+ /*
+@@ -5404,12 +5396,14 @@ out:
+  * returns: 1 successfully unmapped a shared pte page
+  *	    0 the underlying pte page is not shared, or it is the last user
+  */
+-int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
++int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
++					unsigned long *addr, pte_t *ptep)
+ {
+ 	pgd_t *pgd = pgd_offset(mm, *addr);
+ 	p4d_t *p4d = p4d_offset(pgd, *addr);
+ 	pud_t *pud = pud_offset(p4d, *addr);
+ 
++	i_mmap_assert_write_locked(vma->vm_file->f_mapping);
+ 	BUG_ON(page_count(virt_to_page(ptep)) == 0);
+ 	if (page_count(virt_to_page(ptep)) == 1)
+ 		return 0;
+@@ -5427,7 +5421,8 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
+ 	return NULL;
+ }
+ 
+-int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
++int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
++				unsigned long *addr, pte_t *ptep)
+ {
+ 	return 0;
+ }
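
A quick numeric check of the new adjust_range_if_pmd_sharing_possible() logic, assuming x86-64's 1 GiB PUD_SIZE; all addresses are illustrative:

    #include <linux/kernel.h>	/* ALIGN, ALIGN_DOWN, min, max */
    #include <linux/pgtable.h>	/* PUD_SIZE */

    static void range_example(void)
    {
    	unsigned long start = 0x7f1240200000UL;
    	unsigned long end   = 0x7f1240600000UL;
    	unsigned long vma_start = 0x7f1230000000UL;
    	unsigned long vma_end   = 0x7f1250000000UL;

    	unsigned long a_start = ALIGN_DOWN(start, PUD_SIZE); /* 0x7f1240000000 */
    	unsigned long a_end   = ALIGN(end, PUD_SIZE);        /* 0x7f1280000000 */

    	/* Intersect with the VMA so the range can never leave it: */
    	start = max(vma_start, a_start);	/* 0x7f1240000000 */
    	end   = min(vma_end, a_end);		/* 0x7f1250000000 */
    }
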
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 700f5160f3e4d..ac04b332a373a 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -1412,7 +1412,7 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+ {
+ 	unsigned long haddr = addr & HPAGE_PMD_MASK;
+ 	struct vm_area_struct *vma = find_vma(mm, haddr);
+-	struct page *hpage = NULL;
++	struct page *hpage;
+ 	pte_t *start_pte, *pte;
+ 	pmd_t *pmd, _pmd;
+ 	spinlock_t *ptl;
+@@ -1432,9 +1432,17 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+ 	if (!hugepage_vma_check(vma, vma->vm_flags | VM_HUGEPAGE))
+ 		return;
+ 
++	hpage = find_lock_page(vma->vm_file->f_mapping,
++			       linear_page_index(vma, haddr));
++	if (!hpage)
++		return;
++
++	if (!PageHead(hpage))
++		goto drop_hpage;
++
+ 	pmd = mm_find_pmd(mm, haddr);
+ 	if (!pmd)
+-		return;
++		goto drop_hpage;
+ 
+ 	start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
+ 
+@@ -1453,30 +1461,11 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+ 
+ 		page = vm_normal_page(vma, addr, *pte);
+ 
+-		if (!page || !PageCompound(page))
+-			goto abort;
+-
+-		if (!hpage) {
+-			hpage = compound_head(page);
+-			/*
+-			 * The mapping of the THP should not change.
+-			 *
+-			 * Note that uprobe, debugger, or MAP_PRIVATE may
+-			 * change the page table, but the new page will
+-			 * not pass PageCompound() check.
+-			 */
+-			if (WARN_ON(hpage->mapping != vma->vm_file->f_mapping))
+-				goto abort;
+-		}
+-
+ 		/*
+-		 * Confirm the page maps to the correct subpage.
+-		 *
+-		 * Note that uprobe, debugger, or MAP_PRIVATE may change
+-		 * the page table, but the new page will not pass
+-		 * PageCompound() check.
++		 * Note that uprobe, debugger, or MAP_PRIVATE may change the
++		 * page table, but the new page will not be a subpage of hpage.
+ 		 */
+-		if (WARN_ON(hpage + i != page))
++		if (hpage + i != page)
+ 			goto abort;
+ 		count++;
+ 	}
+@@ -1495,21 +1484,26 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+ 	pte_unmap_unlock(start_pte, ptl);
+ 
+ 	/* step 3: set proper refcount and mm_counters. */
+-	if (hpage) {
++	if (count) {
+ 		page_ref_sub(hpage, count);
+ 		add_mm_counter(vma->vm_mm, mm_counter_file(hpage), -count);
+ 	}
+ 
+ 	/* step 4: collapse pmd */
+ 	ptl = pmd_lock(vma->vm_mm, pmd);
+-	_pmd = pmdp_collapse_flush(vma, addr, pmd);
++	_pmd = pmdp_collapse_flush(vma, haddr, pmd);
+ 	spin_unlock(ptl);
+ 	mm_dec_nr_ptes(mm);
+ 	pte_free(mm, pmd_pgtable(_pmd));
++
++drop_hpage:
++	unlock_page(hpage);
++	put_page(hpage);
+ 	return;
+ 
+ abort:
+ 	pte_unmap_unlock(start_pte, ptl);
++	goto drop_hpage;
+ }
+ 
+ static int khugepaged_collapse_pte_mapped_thps(struct mm_slot *mm_slot)
+@@ -1538,6 +1532,7 @@ out:
+ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
+ {
+ 	struct vm_area_struct *vma;
++	struct mm_struct *mm;
+ 	unsigned long addr;
+ 	pmd_t *pmd, _pmd;
+ 
+@@ -1566,7 +1561,8 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
+ 			continue;
+ 		if (vma->vm_end < addr + HPAGE_PMD_SIZE)
+ 			continue;
+-		pmd = mm_find_pmd(vma->vm_mm, addr);
++		mm = vma->vm_mm;
++		pmd = mm_find_pmd(mm, addr);
+ 		if (!pmd)
+ 			continue;
+ 		/*
+@@ -1576,17 +1572,19 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
+ 		 * mmap_lock while holding page lock. Fault path does it in
+ 		 * reverse order. Trylock is a way to avoid deadlock.
+ 		 */
+-		if (mmap_write_trylock(vma->vm_mm)) {
+-			spinlock_t *ptl = pmd_lock(vma->vm_mm, pmd);
+-			/* assume page table is clear */
+-			_pmd = pmdp_collapse_flush(vma, addr, pmd);
+-			spin_unlock(ptl);
+-			mmap_write_unlock(vma->vm_mm);
+-			mm_dec_nr_ptes(vma->vm_mm);
+-			pte_free(vma->vm_mm, pmd_pgtable(_pmd));
++		if (mmap_write_trylock(mm)) {
++			if (!khugepaged_test_exit(mm)) {
++				spinlock_t *ptl = pmd_lock(mm, pmd);
++				/* assume page table is clear */
++				_pmd = pmdp_collapse_flush(vma, addr, pmd);
++				spin_unlock(ptl);
++				mm_dec_nr_ptes(mm);
++				pte_free(mm, pmd_pgtable(_pmd));
++			}
++			mmap_write_unlock(mm);
+ 		} else {
+ 			/* Try again later */
+-			khugepaged_add_pte_mapped_thp(vma->vm_mm, addr);
++			khugepaged_add_pte_mapped_thp(mm, addr);
+ 		}
+ 	}
+ 	i_mmap_unlock_write(mapping);
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index da374cd3d45b3..76c75a599da3f 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1742,7 +1742,7 @@ static int __ref try_remove_memory(int nid, u64 start, u64 size)
+ 	 */
+ 	rc = walk_memory_blocks(start, size, NULL, check_memblock_offlined_cb);
+ 	if (rc)
+-		goto done;
++		return rc;
+ 
+ 	/* remove memmap entry */
+ 	firmware_map_remove(start, start + size, "System RAM");
+@@ -1766,9 +1766,8 @@ static int __ref try_remove_memory(int nid, u64 start, u64 size)
+ 
+ 	try_offline_node(nid);
+ 
+-done:
+ 	mem_hotplug_done();
+-	return rc;
++	return 0;
+ }
+ 
+ /**
+diff --git a/mm/page_counter.c b/mm/page_counter.c
+index c56db2d5e1592..b4663844c9b37 100644
+--- a/mm/page_counter.c
++++ b/mm/page_counter.c
+@@ -72,7 +72,7 @@ void page_counter_charge(struct page_counter *counter, unsigned long nr_pages)
+ 		long new;
+ 
+ 		new = atomic_long_add_return(nr_pages, &c->usage);
+-		propagate_protected_usage(counter, new);
++		propagate_protected_usage(c, new);
+ 		/*
+ 		 * This is indeed racy, but we can live with some
+ 		 * inaccuracy in the watermark.
+@@ -116,7 +116,7 @@ bool page_counter_try_charge(struct page_counter *counter,
+ 		new = atomic_long_add_return(nr_pages, &c->usage);
+ 		if (new > c->max) {
+ 			atomic_long_sub(nr_pages, &c->usage);
+-			propagate_protected_usage(counter, new);
++			propagate_protected_usage(c, new);
+ 			/*
+ 			 * This is racy, but we can live with some
+ 			 * inaccuracy in the failcnt.
+@@ -125,7 +125,7 @@ bool page_counter_try_charge(struct page_counter *counter,
+ 			*fail = c;
+ 			goto failed;
+ 		}
+-		propagate_protected_usage(counter, new);
++		propagate_protected_usage(c, new);
+ 		/*
+ 		 * Just like with failcnt, we can live with some
+ 		 * inaccuracy in the watermark.
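
The three one-character fixes above matter because page_counter_charge() and friends walk up the counter hierarchy: each iteration must propagate the usage of the ancestor being charged (c), not of the leaf (counter). The simplified shape of the loop:

    /* Simplified shape of page_counter_charge(); error paths elided. */
    static void charge_sketch(struct page_counter *counter,
    			  unsigned long nr_pages)
    {
    	struct page_counter *c;

    	for (c = counter; c; c = c->parent) {
    		long new = atomic_long_add_return(nr_pages, &c->usage);

    		/* Must pass the level being charged, not the leaf. */
    		propagate_protected_usage(c, new);
    	}
    }
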
+diff --git a/mm/rmap.c b/mm/rmap.c
+index 5fe2dedce1fc1..6cce9ef06753b 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -1469,7 +1469,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+ 			 * do this outside rmap routines.
+ 			 */
+ 			VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
+-			if (huge_pmd_unshare(mm, &address, pvmw.pte)) {
++			if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) {
+ 				/*
+ 				 * huge_pmd_unshare unmapped an entire PMD
+ 				 * page.  There is no way of knowing exactly
+diff --git a/mm/shuffle.c b/mm/shuffle.c
+index 44406d9977c77..dd13ab851b3ee 100644
+--- a/mm/shuffle.c
++++ b/mm/shuffle.c
+@@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
+  * For two pages to be swapped in the shuffle, they must be free (on a
+  * 'free_area' lru), have the same order, and have the same migratetype.
+  */
+-static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
++static struct page * __meminit shuffle_valid_page(struct zone *zone,
++						  unsigned long pfn, int order)
+ {
+-	struct page *page;
++	struct page *page = pfn_to_online_page(pfn);
+ 
+ 	/*
+ 	 * Given we're dealing with randomly selected pfns in a zone we
+ 	 * need to ask questions like...
+ 	 */
+ 
+-	/* ...is the pfn even in the memmap? */
+-	if (!pfn_valid_within(pfn))
++	/* ... is the page managed by the buddy? */
++	if (!page)
+ 		return NULL;
+ 
+-	/* ...is the pfn in a present section or a hole? */
+-	if (!pfn_in_present_section(pfn))
++	/* ... is the page assigned to the same zone? */
++	if (page_zone(page) != zone)
+ 		return NULL;
+ 
+ 	/* ...is the page free and currently on a free_area list? */
+-	page = pfn_to_page(pfn);
+ 	if (!PageBuddy(page))
+ 		return NULL;
+ 
+@@ -123,7 +123,7 @@ void __meminit __shuffle_zone(struct zone *z)
+ 		 * page_j randomly selected in the span @zone_start_pfn to
+ 		 * @spanned_pages.
+ 		 */
+-		page_i = shuffle_valid_page(i, order);
++		page_i = shuffle_valid_page(z, i, order);
+ 		if (!page_i)
+ 			continue;
+ 
+@@ -137,7 +137,7 @@ void __meminit __shuffle_zone(struct zone *z)
+ 			j = z->zone_start_pfn +
+ 				ALIGN_DOWN(get_random_long() % z->spanned_pages,
+ 						order_pages);
+-			page_j = shuffle_valid_page(j, order);
++			page_j = shuffle_valid_page(z, j, order);
+ 			if (page_j && page_j != page_i)
+ 				break;
+ 		}
+diff --git a/net/appletalk/atalk_proc.c b/net/appletalk/atalk_proc.c
+index 550c6ca007cc2..9c1241292d1d2 100644
+--- a/net/appletalk/atalk_proc.c
++++ b/net/appletalk/atalk_proc.c
+@@ -229,6 +229,8 @@ int __init atalk_proc_init(void)
+ 				     sizeof(struct aarp_iter_state), NULL))
+ 		goto out;
+ 
++	return 0;
++
+ out:
+ 	remove_proc_subtree("atalk", init_net.proc_net);
+ 	return -ENOMEM;
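
The atalk_proc_init() fix is the classic missing-return-before-the-error-label bug: without the added `return 0;`, a fully successful setup fell straight through into the cleanup path and reported -ENOMEM. Reduced to a minimal sketch with hypothetical helpers:

    static bool alloc_a(void);
    static bool alloc_b(void);
    static void cleanup(void);

    static int init_sketch(void)
    {
    	if (!alloc_a())
    		goto out;
    	if (!alloc_b())
    		goto out;

    	return 0;	/* without this, success falls into the error path */

    out:
    	cleanup();
    	return -ENOMEM;
    }
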
+diff --git a/net/compat.c b/net/compat.c
+index 434838bef5f80..7dc670c8eac50 100644
+--- a/net/compat.c
++++ b/net/compat.c
+@@ -309,6 +309,7 @@ void scm_detach_fds_compat(struct msghdr *kmsg, struct scm_cookie *scm)
+ 			break;
+ 		}
+ 		/* Bump the usage count and install the file. */
++		__receive_sock(fp[i]);
+ 		fd_install(new_fd, get_file(fp[i]));
+ 	}
+ 
+diff --git a/net/core/sock.c b/net/core/sock.c
+index a14a8cb6ccca6..78f8736be9c50 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2842,6 +2842,27 @@ int sock_no_mmap(struct file *file, struct socket *sock, struct vm_area_struct *
+ }
+ EXPORT_SYMBOL(sock_no_mmap);
+ 
++/*
++ * When a file is received (via SCM_RIGHTS, etc), we must bump the
++ * various sock-based usage counts.
++ */
++void __receive_sock(struct file *file)
++{
++	struct socket *sock;
++	int error;
++
++	/*
++	 * The resulting value of "error" is ignored here since we only
++	 * need to take action when the file is a socket and testing
++	 * "sock" for NULL is sufficient.
++	 */
++	sock = sock_from_file(file, &error);
++	if (sock) {
++		sock_update_netprioidx(&sock->sk->sk_cgrp_data);
++		sock_update_classid(&sock->sk->sk_cgrp_data);
++	}
++}
++
+ ssize_t sock_no_sendpage(struct socket *sock, struct page *page, int offset, size_t size, int flags)
+ {
+ 	ssize_t res;
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index af4cc5fb678ed..05e966f1609e2 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -1050,7 +1050,7 @@ static void __sta_info_destroy_part2(struct sta_info *sta)
+ 	might_sleep();
+ 	lockdep_assert_held(&local->sta_mtx);
+ 
+-	while (sta->sta_state == IEEE80211_STA_AUTHORIZED) {
++	if (sta->sta_state == IEEE80211_STA_AUTHORIZED) {
+ 		ret = sta_info_move_state(sta, IEEE80211_STA_ASSOC);
+ 		WARN_ON_ONCE(ret);
+ 	}
+diff --git a/scripts/recordmcount.c b/scripts/recordmcount.c
+index e59022b3f1254..b9c2ee7ab43fa 100644
+--- a/scripts/recordmcount.c
++++ b/scripts/recordmcount.c
+@@ -42,6 +42,8 @@
+ #define R_ARM_THM_CALL		10
+ #define R_ARM_CALL		28
+ 
++#define R_AARCH64_CALL26	283
++
+ static int fd_map;	/* File descriptor for file being modified. */
+ static int mmap_failed; /* Boolean flag. */
+ static char gpfx;	/* prefix for global symbol name (sometimes '_') */
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index 3e3e568c81309..a59bf2f5b2d4f 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -1035,6 +1035,11 @@ static bool ima_validate_rule(struct ima_rule_entry *entry)
+ 		return false;
+ 	}
+ 
++	/* Ensure that combinations of flags are compatible with each other */
++	if (entry->flags & IMA_CHECK_BLACKLIST &&
++	    !(entry->flags & IMA_MODSIG_ALLOWED))
++		return false;
++
+ 	return true;
+ }
+ 
+@@ -1371,9 +1376,17 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
+ 				result = -EINVAL;
+ 			break;
+ 		case Opt_appraise_flag:
++			if (entry->action != APPRAISE) {
++				result = -EINVAL;
++				break;
++			}
++
+ 			ima_log_string(ab, "appraise_flag", args[0].from);
+-			if (strstr(args[0].from, "blacklist"))
++			if (IS_ENABLED(CONFIG_IMA_APPRAISE_MODSIG) &&
++			    strstr(args[0].from, "blacklist"))
+ 				entry->flags |= IMA_CHECK_BLACKLIST;
++			else
++				result = -EINVAL;
+ 			break;
+ 		case Opt_permit_directio:
+ 			entry->flags |= IMA_PERMIT_DIRECTIO;
+diff --git a/sound/pci/echoaudio/echoaudio.c b/sound/pci/echoaudio/echoaudio.c
+index 0941a7a17623a..456219a665a79 100644
+--- a/sound/pci/echoaudio/echoaudio.c
++++ b/sound/pci/echoaudio/echoaudio.c
+@@ -2158,7 +2158,6 @@ static int snd_echo_resume(struct device *dev)
+ 	if (err < 0) {
+ 		kfree(commpage_bak);
+ 		dev_err(dev, "resume init_hw err=%d\n", err);
+-		snd_echo_free(chip);
+ 		return err;
+ 	}
+ 
+@@ -2185,7 +2184,6 @@ static int snd_echo_resume(struct device *dev)
+ 	if (request_irq(pci->irq, snd_echo_interrupt, IRQF_SHARED,
+ 			KBUILD_MODNAME, chip)) {
+ 		dev_err(chip->card->dev, "cannot grab irq\n");
+-		snd_echo_free(chip);
+ 		return -EBUSY;
+ 	}
+ 	chip->irq = pci->irq;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 00d155b98c1d1..8626e59f1e6a9 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4171,8 +4171,6 @@ static void alc269_fixup_hp_gpio_led(struct hda_codec *codec,
+ static void alc285_fixup_hp_gpio_led(struct hda_codec *codec,
+ 				const struct hda_fixup *fix, int action)
+ {
+-	struct alc_spec *spec = codec->spec;
+-
+ 	alc_fixup_hp_gpio_led(codec, action, 0x04, 0x01);
+ }
+ 
+diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature
+index cb152370fdefd..e7818b44b48ee 100644
+--- a/tools/build/Makefile.feature
++++ b/tools/build/Makefile.feature
+@@ -8,7 +8,7 @@ endif
+ 
+ feature_check = $(eval $(feature_check_code))
+ define feature_check_code
+-  feature-$(1) := $(shell $(MAKE) OUTPUT=$(OUTPUT_FEATURES) CFLAGS="$(EXTRA_CFLAGS) $(FEATURE_CHECK_CFLAGS-$(1))" CXXFLAGS="$(EXTRA_CXXFLAGS) $(FEATURE_CHECK_CXXFLAGS-$(1))" LDFLAGS="$(LDFLAGS) $(FEATURE_CHECK_LDFLAGS-$(1))" -C $(feature_dir) $(OUTPUT_FEATURES)test-$1.bin >/dev/null 2>/dev/null && echo 1 || echo 0)
++  feature-$(1) := $(shell $(MAKE) OUTPUT=$(OUTPUT_FEATURES) CC="$(CC)" CXX="$(CXX)" CFLAGS="$(EXTRA_CFLAGS) $(FEATURE_CHECK_CFLAGS-$(1))" CXXFLAGS="$(EXTRA_CXXFLAGS) $(FEATURE_CHECK_CXXFLAGS-$(1))" LDFLAGS="$(LDFLAGS) $(FEATURE_CHECK_LDFLAGS-$(1))" -C $(feature_dir) $(OUTPUT_FEATURES)test-$1.bin >/dev/null 2>/dev/null && echo 1 || echo 0)
+ endef
+ 
+ feature_set = $(eval $(feature_set_code))
+diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile
+index b1f0321180f5c..93b590d81209c 100644
+--- a/tools/build/feature/Makefile
++++ b/tools/build/feature/Makefile
+@@ -74,8 +74,6 @@ FILES=                                          \
+ 
+ FILES := $(addprefix $(OUTPUT),$(FILES))
+ 
+-CC ?= $(CROSS_COMPILE)gcc
+-CXX ?= $(CROSS_COMPILE)g++
+ PKG_CONFIG ?= $(CROSS_COMPILE)pkg-config
+ LLVM_CONFIG ?= llvm-config
+ CLANG ?= clang
+diff --git a/tools/perf/bench/mem-functions.c b/tools/perf/bench/mem-functions.c
+index 9235b76501be8..19d45c377ac18 100644
+--- a/tools/perf/bench/mem-functions.c
++++ b/tools/perf/bench/mem-functions.c
+@@ -223,12 +223,8 @@ static int bench_mem_common(int argc, const char **argv, struct bench_mem_info *
+ 	return 0;
+ }
+ 
+-static u64 do_memcpy_cycles(const struct function *r, size_t size, void *src, void *dst)
++static void memcpy_prefault(memcpy_t fn, size_t size, void *src, void *dst)
+ {
+-	u64 cycle_start = 0ULL, cycle_end = 0ULL;
+-	memcpy_t fn = r->fn.memcpy;
+-	int i;
+-
+ 	/* Make sure to always prefault zero pages even if MMAP_THRESH is crossed: */
+ 	memset(src, 0, size);
+ 
+@@ -237,6 +233,15 @@ static u64 do_memcpy_cycles(const struct function *r, size_t size, void *src, vo
+ 	 * to not measure page fault overhead:
+ 	 */
+ 	fn(dst, src, size);
++}
++
++static u64 do_memcpy_cycles(const struct function *r, size_t size, void *src, void *dst)
++{
++	u64 cycle_start = 0ULL, cycle_end = 0ULL;
++	memcpy_t fn = r->fn.memcpy;
++	int i;
++
++	memcpy_prefault(fn, size, src, dst);
+ 
+ 	cycle_start = get_cycles();
+ 	for (i = 0; i < nr_loops; ++i)
+@@ -252,11 +257,7 @@ static double do_memcpy_gettimeofday(const struct function *r, size_t size, void
+ 	memcpy_t fn = r->fn.memcpy;
+ 	int i;
+ 
+-	/*
+-	 * We prefault the freshly allocated memory range here,
+-	 * to not measure page fault overhead:
+-	 */
+-	fn(dst, src, size);
++	memcpy_prefault(fn, size, src, dst);
+ 
+ 	BUG_ON(gettimeofday(&tv_start, NULL));
+ 	for (i = 0; i < nr_loops; ++i)
+diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
+index a37e7910e9e90..23ea934f30b34 100644
+--- a/tools/perf/builtin-record.c
++++ b/tools/perf/builtin-record.c
+@@ -1489,7 +1489,7 @@ static int record__setup_sb_evlist(struct record *rec)
+ 		evlist__set_cb(rec->sb_evlist, record__process_signal_event, rec);
+ 		rec->thread_id = pthread_self();
+ 	}
+-
++#ifdef HAVE_LIBBPF_SUPPORT
+ 	if (!opts->no_bpf_event) {
+ 		if (rec->sb_evlist == NULL) {
+ 			rec->sb_evlist = evlist__new();
+@@ -1505,7 +1505,7 @@ static int record__setup_sb_evlist(struct record *rec)
+ 			return -1;
+ 		}
+ 	}
+-
++#endif
+ 	if (perf_evlist__start_sb_thread(rec->sb_evlist, &rec->opts.target)) {
+ 		pr_debug("Couldn't start the BPF side band thread:\nBPF programs starting from now on won't be annotatable\n");
+ 		opts->no_bpf_event = true;
+diff --git a/tools/perf/tests/parse-events.c b/tools/perf/tests/parse-events.c
+index 895188b63f963..6a2ec6ec0d0ef 100644
+--- a/tools/perf/tests/parse-events.c
++++ b/tools/perf/tests/parse-events.c
+@@ -631,6 +631,34 @@ static int test__checkterms_simple(struct list_head *terms)
+ 	TEST_ASSERT_VAL("wrong val", term->val.num == 1);
+ 	TEST_ASSERT_VAL("wrong config", !strcmp(term->config, "umask"));
+ 
++	/*
++	 * read
++	 *
++	 * The perf_pmu__test_parse_init injects a 'read' term into
++	 * perf_pmu_events_list, so 'read' is evaluated as a term
++	 * and not as a raw event with hex value 'ead'.
++	 */
++	term = list_entry(term->list.next, struct parse_events_term, list);
++	TEST_ASSERT_VAL("wrong type term",
++			term->type_term == PARSE_EVENTS__TERM_TYPE_USER);
++	TEST_ASSERT_VAL("wrong type val",
++			term->type_val == PARSE_EVENTS__TERM_TYPE_NUM);
++	TEST_ASSERT_VAL("wrong val", term->val.num == 1);
++	TEST_ASSERT_VAL("wrong config", !strcmp(term->config, "read"));
++
++	/*
++	 * r0xead
++	 *
++	 * To still be able to pass the 'ead' value with the 'r' syntax,
++	 * we added support for parsing the 'r0xHEX' event.
++	 */
++	term = list_entry(term->list.next, struct parse_events_term, list);
++	TEST_ASSERT_VAL("wrong type term",
++			term->type_term == PARSE_EVENTS__TERM_TYPE_CONFIG);
++	TEST_ASSERT_VAL("wrong type val",
++			term->type_val == PARSE_EVENTS__TERM_TYPE_NUM);
++	TEST_ASSERT_VAL("wrong val", term->val.num == 0xead);
++	TEST_ASSERT_VAL("wrong config", !term->config);
+ 	return 0;
+ }
+ 
+@@ -1776,7 +1804,7 @@ struct terms_test {
+ 
+ static struct terms_test test__terms[] = {
+ 	[0] = {
+-		.str   = "config=10,config1,config2=3,umask=1",
++		.str   = "config=10,config1,config2=3,umask=1,read,r0xead",
+ 		.check = test__checkterms_simple,
+ 	},
+ };
+@@ -1836,6 +1864,13 @@ static int test_term(struct terms_test *t)
+ 
+ 	INIT_LIST_HEAD(&terms);
+ 
++	/*
++	 * The perf_pmu__test_parse_init prepares perf_pmu_events_list
++	 * which gets freed in parse_events_terms.
++	 */
++	if (perf_pmu__test_parse_init())
++		return -1;
++
+ 	ret = parse_events_terms(&terms, t->str);
+ 	if (ret) {
+ 		pr_debug("failed to parse terms '%s', err %d\n",
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index ef802f6d40c17..6a79cfdf96cb6 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -1014,12 +1014,14 @@ void evsel__config(struct evsel *evsel, struct record_opts *opts,
+ 	if (callchain && callchain->enabled && !evsel->no_aux_samples)
+ 		evsel__config_callchain(evsel, opts, callchain);
+ 
+-	if (opts->sample_intr_regs && !evsel->no_aux_samples) {
++	if (opts->sample_intr_regs && !evsel->no_aux_samples &&
++	    !evsel__is_dummy_event(evsel)) {
+ 		attr->sample_regs_intr = opts->sample_intr_regs;
+ 		evsel__set_sample_bit(evsel, REGS_INTR);
+ 	}
+ 
+-	if (opts->sample_user_regs && !evsel->no_aux_samples) {
++	if (opts->sample_user_regs && !evsel->no_aux_samples &&
++	    !evsel__is_dummy_event(evsel)) {
+ 		attr->sample_regs_user |= opts->sample_user_regs;
+ 		evsel__set_sample_bit(evsel, REGS_USER);
+ 	}
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+index f8ccfd6be0eee..7ffcbd6fcd1ae 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -1164,6 +1164,7 @@ static int intel_pt_walk_fup(struct intel_pt_decoder *decoder)
+ 			return 0;
+ 		if (err == -EAGAIN ||
+ 		    intel_pt_fup_with_nlip(decoder, &intel_pt_insn, ip, err)) {
++			decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
+ 			if (intel_pt_fup_event(decoder))
+ 				return 0;
+ 			return -EAGAIN;
+@@ -1942,17 +1943,13 @@ next:
+ 			}
+ 			if (decoder->set_fup_mwait)
+ 				no_tip = true;
++			if (no_tip)
++				decoder->pkt_state = INTEL_PT_STATE_FUP_NO_TIP;
++			else
++				decoder->pkt_state = INTEL_PT_STATE_FUP;
+ 			err = intel_pt_walk_fup(decoder);
+-			if (err != -EAGAIN) {
+-				if (err)
+-					return err;
+-				if (no_tip)
+-					decoder->pkt_state =
+-						INTEL_PT_STATE_FUP_NO_TIP;
+-				else
+-					decoder->pkt_state = INTEL_PT_STATE_FUP;
+-				return 0;
+-			}
++			if (err != -EAGAIN)
++				return err;
+ 			if (no_tip) {
+ 				no_tip = false;
+ 				break;
+@@ -1980,8 +1977,10 @@ next:
+ 			 * possibility of another CBR change that gets caught up
+ 			 * in the PSB+.
+ 			 */
+-			if (decoder->cbr != decoder->cbr_seen)
++			if (decoder->cbr != decoder->cbr_seen) {
++				decoder->state.type = 0;
+ 				return 0;
++			}
+ 			break;
+ 
+ 		case INTEL_PT_PIP:
+@@ -2022,8 +2021,10 @@ next:
+ 
+ 		case INTEL_PT_CBR:
+ 			intel_pt_calc_cbr(decoder);
+-			if (decoder->cbr != decoder->cbr_seen)
++			if (decoder->cbr != decoder->cbr_seen) {
++				decoder->state.type = 0;
+ 				return 0;
++			}
+ 			break;
+ 
+ 		case INTEL_PT_MODE_EXEC:
+@@ -2599,15 +2600,11 @@ const struct intel_pt_state *intel_pt_decode(struct intel_pt_decoder *decoder)
+ 			err = intel_pt_walk_tip(decoder);
+ 			break;
+ 		case INTEL_PT_STATE_FUP:
+-			decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
+ 			err = intel_pt_walk_fup(decoder);
+ 			if (err == -EAGAIN)
+ 				err = intel_pt_walk_fup_tip(decoder);
+-			else if (!err)
+-				decoder->pkt_state = INTEL_PT_STATE_FUP;
+ 			break;
+ 		case INTEL_PT_STATE_FUP_NO_TIP:
+-			decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
+ 			err = intel_pt_walk_fup(decoder);
+ 			if (err == -EAGAIN)
+ 				err = intel_pt_walk_trace(decoder);
+diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
+index 3decbb203846a..4476de0e678aa 100644
+--- a/tools/perf/util/parse-events.c
++++ b/tools/perf/util/parse-events.c
+@@ -2017,6 +2017,32 @@ err:
+ 	perf_pmu__parse_cleanup();
+ }
+ 
++/*
++ * This function injects a special term into
++ * perf_pmu_events_list so the test code
++ * can check this functionality.
++ */
++int perf_pmu__test_parse_init(void)
++{
++	struct perf_pmu_event_symbol *list;
++
++	list = malloc(sizeof(*list) * 1);
++	if (!list)
++		return -ENOMEM;
++
++	list->type   = PMU_EVENT_SYMBOL;
++	list->symbol = strdup("read");
++
++	if (!list->symbol) {
++		free(list);
++		return -ENOMEM;
++	}
++
++	perf_pmu_events_list = list;
++	perf_pmu_events_list_num = 1;
++	return 0;
++}
++
+ enum perf_pmu_event_symbol_type
+ perf_pmu__parse_check(const char *name)
+ {
+@@ -2078,6 +2104,8 @@ int parse_events_terms(struct list_head *terms, const char *str)
+ 	int ret;
+ 
+ 	ret = parse_events__scanner(str, &parse_state);
++	perf_pmu__parse_cleanup();
++
+ 	if (!ret) {
+ 		list_splice(parse_state.terms, terms);
+ 		zfree(&parse_state.terms);
+diff --git a/tools/perf/util/parse-events.h b/tools/perf/util/parse-events.h
+index 1fe23a2f9b36e..0b8cdb7270f04 100644
+--- a/tools/perf/util/parse-events.h
++++ b/tools/perf/util/parse-events.h
+@@ -253,4 +253,6 @@ static inline bool is_sdt_event(char *str __maybe_unused)
+ }
+ #endif /* HAVE_LIBELF_SUPPORT */
+ 
++int perf_pmu__test_parse_init(void);
++
+ #endif /* __PERF_PARSE_EVENTS_H */
+diff --git a/tools/perf/util/parse-events.l b/tools/perf/util/parse-events.l
+index 002802e17059e..7332d16cb4fc7 100644
+--- a/tools/perf/util/parse-events.l
++++ b/tools/perf/util/parse-events.l
+@@ -41,14 +41,6 @@ static int value(yyscan_t scanner, int base)
+ 	return __value(yylval, text, base, PE_VALUE);
+ }
+ 
+-static int raw(yyscan_t scanner)
+-{
+-	YYSTYPE *yylval = parse_events_get_lval(scanner);
+-	char *text = parse_events_get_text(scanner);
+-
+-	return __value(yylval, text + 1, 16, PE_RAW);
+-}
+-
+ static int str(yyscan_t scanner, int token)
+ {
+ 	YYSTYPE *yylval = parse_events_get_lval(scanner);
+@@ -72,6 +64,17 @@ static int str(yyscan_t scanner, int token)
+ 	return token;
+ }
+ 
++static int raw(yyscan_t scanner)
++{
++	YYSTYPE *yylval = parse_events_get_lval(scanner);
++	char *text = parse_events_get_text(scanner);
++
++	if (perf_pmu__parse_check(text) == PMU_EVENT_SYMBOL)
++		return str(scanner, PE_NAME);
++
++	return __value(yylval, text + 1, 16, PE_RAW);
++}
++
+ static bool isbpf_suffix(char *text)
+ {
+ 	int len = strlen(text);
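
The reordered raw() resolves a lexing ambiguity: "read" used to be parsed as 'r' plus hex "ead", breaking PMUs that export an event literally named "read"; the r0xHEX spelling keeps hex values reachable, so `perf stat -e r0xead` still selects raw config 0xead. The conceptual shape of the check, with hypothetical handler names:

    /* Conceptual disambiguation mirroring the new raw() rule. */
    static int raw_sketch(const char *text)
    {
    	/* A PMU-exported name like "read" wins over the raw syntax... */
    	if (perf_pmu__parse_check(text) == PMU_EVENT_SYMBOL)
    		return handle_as_name(text);	/* hypothetical */

    	/* ...otherwise "rNNN"/"r0xNNN" is a raw event: "r0xead" -> 0xead. */
    	return handle_as_raw_hex(text + 1);	/* hypothetical */
    }
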
+diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c
+index 55924255c5355..659024342e9ac 100644
+--- a/tools/perf/util/probe-finder.c
++++ b/tools/perf/util/probe-finder.c
+@@ -1408,6 +1408,9 @@ static int fill_empty_trace_arg(struct perf_probe_event *pev,
+ 	char *type;
+ 	int i, j, ret;
+ 
++	if (!ntevs)
++		return -ENOENT;
++
+ 	for (i = 0; i < pev->nargs; i++) {
+ 		type = NULL;
+ 		for (j = 0; j < ntevs; j++) {
+@@ -1464,7 +1467,7 @@ int debuginfo__find_trace_events(struct debuginfo *dbg,
+ 	if (ret >= 0 && tf.pf.skip_empty_arg)
+ 		ret = fill_empty_trace_arg(pev, tf.tevs, tf.ntevs);
+ 
+-	if (ret < 0) {
++	if (ret < 0 || tf.ntevs == 0) {
+ 		for (i = 0; i < tf.ntevs; i++)
+ 			clear_probe_trace_event(&tf.tevs[i]);
+ 		zfree(tevs);
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 22aaec74ea0ab..4f322d5388757 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -102,7 +102,7 @@ endif
+ OVERRIDE_TARGETS := 1
+ override define CLEAN
+ 	$(call msg,CLEAN)
+-	$(RM) -r $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES) $(EXTRA_CLEAN)
++	$(Q)$(RM) -r $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES) $(EXTRA_CLEAN)
+ endef
+ 
+ include ../lib.mk
+@@ -122,17 +122,21 @@ $(notdir $(TEST_GEN_PROGS)						\
+ 	 $(TEST_GEN_PROGS_EXTENDED)					\
+ 	 $(TEST_CUSTOM_PROGS)): %: $(OUTPUT)/% ;
+ 
++$(OUTPUT)/%.o: %.c
++	$(call msg,CC,,$@)
++	$(Q)$(CC) $(CFLAGS) -c $(filter %.c,$^) $(LDLIBS) -o $@
++
+ $(OUTPUT)/%:%.c
+ 	$(call msg,BINARY,,$@)
+-	$(LINK.c) $^ $(LDLIBS) -o $@
++	$(Q)$(LINK.c) $^ $(LDLIBS) -o $@
+ 
+ $(OUTPUT)/urandom_read: urandom_read.c
+ 	$(call msg,BINARY,,$@)
+-	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS) -Wl,--build-id
++	$(Q)$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS) -Wl,--build-id
+ 
+ $(OUTPUT)/test_stub.o: test_stub.c $(BPFOBJ)
+ 	$(call msg,CC,,$@)
+-	$(CC) -c $(CFLAGS) -o $@ $<
++	$(Q)$(CC) -c $(CFLAGS) -o $@ $<
+ 
+ VMLINUX_BTF_PATHS := $(if $(O),$(O)/vmlinux)				\
+ 		     $(if $(KBUILD_OUTPUT),$(KBUILD_OUTPUT)/vmlinux)	\
+@@ -141,7 +145,9 @@ VMLINUX_BTF_PATHS := $(if $(O),$(O)/vmlinux)				\
+ 		     /boot/vmlinux-$(shell uname -r)
+ VMLINUX_BTF := $(abspath $(firstword $(wildcard $(VMLINUX_BTF_PATHS))))
+ 
+-$(OUTPUT)/runqslower: $(BPFOBJ)
++DEFAULT_BPFTOOL := $(SCRATCH_DIR)/sbin/bpftool
++
++$(OUTPUT)/runqslower: $(BPFOBJ) | $(DEFAULT_BPFTOOL)
+ 	$(Q)$(MAKE) $(submake_extras) -C $(TOOLSDIR)/bpf/runqslower	\
+ 		    OUTPUT=$(SCRATCH_DIR)/ VMLINUX_BTF=$(VMLINUX_BTF)   \
+ 		    BPFOBJ=$(BPFOBJ) BPF_INCLUDE=$(INCLUDE_DIR) &&	\
+@@ -163,7 +169,6 @@ $(OUTPUT)/test_netcnt: cgroup_helpers.c
+ $(OUTPUT)/test_sock_fields: cgroup_helpers.c
+ $(OUTPUT)/test_sysctl: cgroup_helpers.c
+ 
+-DEFAULT_BPFTOOL := $(SCRATCH_DIR)/sbin/bpftool
+ BPFTOOL ?= $(DEFAULT_BPFTOOL)
+ $(DEFAULT_BPFTOOL): $(wildcard $(BPFTOOLDIR)/*.[ch] $(BPFTOOLDIR)/Makefile)    \
+ 		    $(BPFOBJ) | $(BUILD_DIR)/bpftool
+@@ -179,11 +184,11 @@ $(BPFOBJ): $(wildcard $(BPFDIR)/*.[ch] $(BPFDIR)/Makefile)		       \
+ 
+ $(BUILD_DIR)/libbpf $(BUILD_DIR)/bpftool $(INCLUDE_DIR):
+ 	$(call msg,MKDIR,,$@)
+-	mkdir -p $@
++	$(Q)mkdir -p $@
+ 
+ $(INCLUDE_DIR)/vmlinux.h: $(VMLINUX_BTF) | $(BPFTOOL) $(INCLUDE_DIR)
+ 	$(call msg,GEN,,$@)
+-	$(BPFTOOL) btf dump file $(VMLINUX_BTF) format c > $@
++	$(Q)$(BPFTOOL) btf dump file $(VMLINUX_BTF) format c > $@
+ 
+ # Get Clang's default includes on this system, as opposed to those seen by
+ # '-target bpf'. This fixes "missing" files on some architectures/distros,
+@@ -221,28 +226,28 @@ $(OUTPUT)/flow_dissector_load.o: flow_dissector_load.h
+ # $4 - LDFLAGS
+ define CLANG_BPF_BUILD_RULE
+ 	$(call msg,CLNG-LLC,$(TRUNNER_BINARY),$2)
+-	($(CLANG) $3 -O2 -target bpf -emit-llvm				\
++	$(Q)($(CLANG) $3 -O2 -target bpf -emit-llvm			\
+ 		-c $1 -o - || echo "BPF obj compilation failed") | 	\
+ 	$(LLC) -mattr=dwarfris -march=bpf -mcpu=v3 $4 -filetype=obj -o $2
+ endef
+ # Similar to CLANG_BPF_BUILD_RULE, but with disabled alu32
+ define CLANG_NOALU32_BPF_BUILD_RULE
+ 	$(call msg,CLNG-LLC,$(TRUNNER_BINARY),$2)
+-	($(CLANG) $3 -O2 -target bpf -emit-llvm				\
++	$(Q)($(CLANG) $3 -O2 -target bpf -emit-llvm			\
+ 		-c $1 -o - || echo "BPF obj compilation failed") | 	\
+ 	$(LLC) -march=bpf -mcpu=v2 $4 -filetype=obj -o $2
+ endef
+ # Similar to CLANG_BPF_BUILD_RULE, but using native Clang and bpf LLC
+ define CLANG_NATIVE_BPF_BUILD_RULE
+ 	$(call msg,CLNG-BPF,$(TRUNNER_BINARY),$2)
+-	($(CLANG) $3 -O2 -emit-llvm					\
++	$(Q)($(CLANG) $3 -O2 -emit-llvm					\
+ 		-c $1 -o - || echo "BPF obj compilation failed") | 	\
+ 	$(LLC) -march=bpf -mcpu=v3 $4 -filetype=obj -o $2
+ endef
+ # Build BPF object using GCC
+ define GCC_BPF_BUILD_RULE
+ 	$(call msg,GCC-BPF,$(TRUNNER_BINARY),$2)
+-	$(BPF_GCC) $3 $4 -O2 -c $1 -o $2
++	$(Q)$(BPF_GCC) $3 $4 -O2 -c $1 -o $2
+ endef
+ 
+ SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c
+@@ -284,7 +289,7 @@ ifeq ($($(TRUNNER_OUTPUT)-dir),)
+ $(TRUNNER_OUTPUT)-dir := y
+ $(TRUNNER_OUTPUT):
+ 	$$(call msg,MKDIR,,$$@)
+-	mkdir -p $$@
++	$(Q)mkdir -p $$@
+ endif
+ 
+ # ensure we set up BPF objects generation rule just once for a given
+@@ -304,7 +309,7 @@ $(TRUNNER_BPF_SKELS): $(TRUNNER_OUTPUT)/%.skel.h:			\
+ 		      $(TRUNNER_OUTPUT)/%.o				\
+ 		      | $(BPFTOOL) $(TRUNNER_OUTPUT)
+ 	$$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@)
+-	$$(BPFTOOL) gen skeleton $$< > $$@
++	$(Q)$$(BPFTOOL) gen skeleton $$< > $$@
+ endif
+ 
+ # ensure we set up tests.h header generation rule just once
+@@ -328,7 +333,7 @@ $(TRUNNER_TEST_OBJS): $(TRUNNER_OUTPUT)/%.test.o:			\
+ 		      $(TRUNNER_BPF_SKELS)				\
+ 		      $$(BPFOBJ) | $(TRUNNER_OUTPUT)
+ 	$$(call msg,TEST-OBJ,$(TRUNNER_BINARY),$$@)
+-	cd $$(@D) && $$(CC) -I. $$(CFLAGS) -c $(CURDIR)/$$< $$(LDLIBS) -o $$(@F)
++	$(Q)cd $$(@D) && $$(CC) -I. $$(CFLAGS) -c $(CURDIR)/$$< $$(LDLIBS) -o $$(@F)
+ 
+ $(TRUNNER_EXTRA_OBJS): $(TRUNNER_OUTPUT)/%.o:				\
+ 		       %.c						\
+@@ -336,20 +341,20 @@ $(TRUNNER_EXTRA_OBJS): $(TRUNNER_OUTPUT)/%.o:				\
+ 		       $(TRUNNER_TESTS_HDR)				\
+ 		       $$(BPFOBJ) | $(TRUNNER_OUTPUT)
+ 	$$(call msg,EXT-OBJ,$(TRUNNER_BINARY),$$@)
+-	$$(CC) $$(CFLAGS) -c $$< $$(LDLIBS) -o $$@
++	$(Q)$$(CC) $$(CFLAGS) -c $$< $$(LDLIBS) -o $$@
+ 
+ # only copy extra resources if in flavored build
+ $(TRUNNER_BINARY)-extras: $(TRUNNER_EXTRA_FILES) | $(TRUNNER_OUTPUT)
+ ifneq ($2,)
+ 	$$(call msg,EXT-COPY,$(TRUNNER_BINARY),$(TRUNNER_EXTRA_FILES))
+-	cp -a $$^ $(TRUNNER_OUTPUT)/
++	$(Q)cp -a $$^ $(TRUNNER_OUTPUT)/
+ endif
+ 
+ $(OUTPUT)/$(TRUNNER_BINARY): $(TRUNNER_TEST_OBJS)			\
+ 			     $(TRUNNER_EXTRA_OBJS) $$(BPFOBJ)		\
+ 			     | $(TRUNNER_BINARY)-extras
+ 	$$(call msg,BINARY,,$$@)
+-	$$(CC) $$(CFLAGS) $$(filter %.a %.o,$$^) $$(LDLIBS) -o $$@
++	$(Q)$$(CC) $$(CFLAGS) $$(filter %.a %.o,$$^) $$(LDLIBS) -o $$@
+ 
+ endef
+ 
+@@ -402,17 +407,17 @@ verifier/tests.h: verifier/*.c
+ 		) > verifier/tests.h)
+ $(OUTPUT)/test_verifier: test_verifier.c verifier/tests.h $(BPFOBJ) | $(OUTPUT)
+ 	$(call msg,BINARY,,$@)
+-	$(CC) $(CFLAGS) $(filter %.a %.o %.c,$^) $(LDLIBS) -o $@
++	$(Q)$(CC) $(CFLAGS) $(filter %.a %.o %.c,$^) $(LDLIBS) -o $@
+ 
+ # Make sure we are able to include and link libbpf against c++.
+ $(OUTPUT)/test_cpp: test_cpp.cpp $(OUTPUT)/test_core_extern.skel.h $(BPFOBJ)
+ 	$(call msg,CXX,,$@)
+-	$(CXX) $(CFLAGS) $^ $(LDLIBS) -o $@
++	$(Q)$(CXX) $(CFLAGS) $^ $(LDLIBS) -o $@
+ 
+ # Benchmark runner
+ $(OUTPUT)/bench_%.o: benchs/bench_%.c bench.h
+ 	$(call msg,CC,,$@)
+-	$(CC) $(CFLAGS) -c $(filter %.c,$^) $(LDLIBS) -o $@
++	$(Q)$(CC) $(CFLAGS) -c $(filter %.c,$^) $(LDLIBS) -o $@
+ $(OUTPUT)/bench_rename.o: $(OUTPUT)/test_overhead.skel.h
+ $(OUTPUT)/bench_trigger.o: $(OUTPUT)/trigger_bench.skel.h
+ $(OUTPUT)/bench_ringbufs.o: $(OUTPUT)/ringbuf_bench.skel.h \
+@@ -425,7 +430,7 @@ $(OUTPUT)/bench: $(OUTPUT)/bench.o $(OUTPUT)/testing_helpers.o \
+ 		 $(OUTPUT)/bench_trigger.o \
+ 		 $(OUTPUT)/bench_ringbufs.o
+ 	$(call msg,BINARY,,$@)
+-	$(CC) $(LDFLAGS) -o $@ $(filter %.a %.o,$^) $(LDLIBS)
++	$(Q)$(CC) $(LDFLAGS) -o $@ $(filter %.a %.o,$^) $(LDLIBS)
+ 
+ EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(SCRATCH_DIR)			\
+ 	prog_tests/tests.h map_tests/tests.h verifier/tests.h		\
+diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
+index 54fa5fa688ce9..d498b6aa63a42 100644
+--- a/tools/testing/selftests/bpf/test_progs.c
++++ b/tools/testing/selftests/bpf/test_progs.c
+@@ -12,6 +12,9 @@
+ #include <string.h>
+ #include <execinfo.h> /* backtrace */
+ 
++#define EXIT_NO_TEST		2
++#define EXIT_ERR_SETUP_INFRA	3
++
+ /* defined in test_progs.h */
+ struct test_env env = {};
+ 
+@@ -111,13 +114,31 @@ static void reset_affinity() {
+ 	if (err < 0) {
+ 		stdio_restore();
+ 		fprintf(stderr, "Failed to reset process affinity: %d!\n", err);
+-		exit(-1);
++		exit(EXIT_ERR_SETUP_INFRA);
+ 	}
+ 	err = pthread_setaffinity_np(pthread_self(), sizeof(cpuset), &cpuset);
+ 	if (err < 0) {
+ 		stdio_restore();
+ 		fprintf(stderr, "Failed to reset thread affinity: %d!\n", err);
+-		exit(-1);
++		exit(EXIT_ERR_SETUP_INFRA);
++	}
++}
++
++static void save_netns(void)
++{
++	env.saved_netns_fd = open("/proc/self/ns/net", O_RDONLY);
++	if (env.saved_netns_fd == -1) {
++		perror("open(/proc/self/ns/net)");
++		exit(EXIT_ERR_SETUP_INFRA);
++	}
++}
++
++static void restore_netns(void)
++{
++	if (setns(env.saved_netns_fd, CLONE_NEWNET) == -1) {
++		stdio_restore();
++		perror("setns(CLONE_NEWNET)");
++		exit(EXIT_ERR_SETUP_INFRA);
+ 	}
+ }
+ 
+@@ -138,8 +159,6 @@ void test__end_subtest()
+ 	       test->test_num, test->subtest_num,
+ 	       test->subtest_name, sub_error_cnt ? "FAIL" : "OK");
+ 
+-	reset_affinity();
+-
+ 	free(test->subtest_name);
+ 	test->subtest_name = NULL;
+ }
+@@ -643,6 +662,7 @@ int main(int argc, char **argv)
+ 		return -1;
+ 	}
+ 
++	save_netns();
+ 	stdio_hijack();
+ 	for (i = 0; i < prog_test_cnt; i++) {
+ 		struct prog_test_def *test = &prog_test_defs[i];
+@@ -673,6 +693,7 @@ int main(int argc, char **argv)
+ 			test->error_cnt ? "FAIL" : "OK");
+ 
+ 		reset_affinity();
++		restore_netns();
+ 		if (test->need_cgroup_cleanup)
+ 			cleanup_cgroup_environment();
+ 	}
+@@ -686,6 +707,10 @@ int main(int argc, char **argv)
+ 	free_str_set(&env.subtest_selector.blacklist);
+ 	free_str_set(&env.subtest_selector.whitelist);
+ 	free(env.subtest_selector.num_set);
++	close(env.saved_netns_fd);
++
++	if (env.succ_cnt + env.fail_cnt + env.skip_cnt == 0)
++		return EXIT_NO_TEST;
+ 
+ 	return env.fail_cnt ? EXIT_FAILURE : EXIT_SUCCESS;
+ }
+diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
+index f4503c926acad..b809246039181 100644
+--- a/tools/testing/selftests/bpf/test_progs.h
++++ b/tools/testing/selftests/bpf/test_progs.h
+@@ -78,6 +78,8 @@ struct test_env {
+ 	int sub_succ_cnt; /* successful sub-tests */
+ 	int fail_cnt; /* total failed tests + sub-tests */
+ 	int skip_cnt; /* skipped tests */
++
++	int saved_netns_fd;
+ };
+ 
+ extern struct test_env env;
+diff --git a/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c b/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
+index bdbbbe8431e03..3694613f418f6 100644
+--- a/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
++++ b/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
+@@ -44,7 +44,7 @@ struct shared_info {
+ 	unsigned long amr2;
+ 
+ 	/* AMR value that ptrace should refuse to write to the child. */
+-	unsigned long amr3;
++	unsigned long invalid_amr;
+ 
+ 	/* IAMR value the parent expects to read from the child. */
+ 	unsigned long expected_iamr;
+@@ -57,8 +57,8 @@ struct shared_info {
+ 	 * (even though they're valid ones) because userspace doesn't have
+ 	 * access to those registers.
+ 	 */
+-	unsigned long new_iamr;
+-	unsigned long new_uamor;
++	unsigned long invalid_iamr;
++	unsigned long invalid_uamor;
+ };
+ 
+ static int sys_pkey_alloc(unsigned long flags, unsigned long init_access_rights)
+@@ -66,11 +66,6 @@ static int sys_pkey_alloc(unsigned long flags, unsigned long init_access_rights)
+ 	return syscall(__NR_pkey_alloc, flags, init_access_rights);
+ }
+ 
+-static int sys_pkey_free(int pkey)
+-{
+-	return syscall(__NR_pkey_free, pkey);
+-}
+-
+ static int child(struct shared_info *info)
+ {
+ 	unsigned long reg;
+@@ -100,28 +95,32 @@ static int child(struct shared_info *info)
+ 
+ 	info->amr1 |= 3ul << pkeyshift(pkey1);
+ 	info->amr2 |= 3ul << pkeyshift(pkey2);
+-	info->amr3 |= info->amr2 | 3ul << pkeyshift(pkey3);
++	/*
++	 * Invalid AMR value where we try to force write
++	 * things which are denied by the UAMOR setting.
++	 */
++	info->invalid_amr = info->amr2 | (~0x0UL & ~info->expected_uamor);
+ 
++	/*
++	 * If PKEY_DISABLE_EXECUTE succeeded, we should update the expected_iamr.
++	 */
+ 	if (disable_execute)
+ 		info->expected_iamr |= 1ul << pkeyshift(pkey1);
+ 	else
+ 		info->expected_iamr &= ~(1ul << pkeyshift(pkey1));
+ 
+-	info->expected_iamr &= ~(1ul << pkeyshift(pkey2) | 1ul << pkeyshift(pkey3));
+-
+-	info->expected_uamor |= 3ul << pkeyshift(pkey1) |
+-				3ul << pkeyshift(pkey2);
+-	info->new_iamr |= 1ul << pkeyshift(pkey1) | 1ul << pkeyshift(pkey2);
+-	info->new_uamor |= 3ul << pkeyshift(pkey1);
++	/*
++	 * We allocated pkey2 and pkey3 above. Clear the IAMR bits.
++	 */
++	info->expected_iamr &= ~(1ul << pkeyshift(pkey2));
++	info->expected_iamr &= ~(1ul << pkeyshift(pkey3));
+ 
+ 	/*
+-	 * We won't use pkey3. We just want a plausible but invalid key to test
+-	 * whether ptrace will let us write to AMR bits we are not supposed to.
+-	 *
+-	 * This also tests whether the kernel restores the UAMOR permissions
+-	 * after a key is freed.
++	 * Create an IAMR value different from the expected value.
++	 * The kernel will reject any IAMR and UAMOR change.
+ 	 */
+-	sys_pkey_free(pkey3);
++	info->invalid_iamr = info->expected_iamr | (1ul << pkeyshift(pkey1) | 1ul << pkeyshift(pkey2));
++	info->invalid_uamor = info->expected_uamor & ~(0x3ul << pkeyshift(pkey1));
+ 
+ 	printf("%-30s AMR: %016lx pkey1: %d pkey2: %d pkey3: %d\n",
+ 	       user_write, info->amr1, pkey1, pkey2, pkey3);
+@@ -196,9 +195,9 @@ static int parent(struct shared_info *info, pid_t pid)
+ 	PARENT_SKIP_IF_UNSUPPORTED(ret, &info->child_sync);
+ 	PARENT_FAIL_IF(ret, &info->child_sync);
+ 
+-	info->amr1 = info->amr2 = info->amr3 = regs[0];
+-	info->expected_iamr = info->new_iamr = regs[1];
+-	info->expected_uamor = info->new_uamor = regs[2];
++	info->amr1 = info->amr2 = regs[0];
++	info->expected_iamr = regs[1];
++	info->expected_uamor = regs[2];
+ 
+ 	/* Wake up child so that it can set itself up. */
+ 	ret = prod_child(&info->child_sync);
+@@ -234,10 +233,10 @@ static int parent(struct shared_info *info, pid_t pid)
+ 		return ret;
+ 
+ 	/* Write invalid AMR value in child. */
+-	ret = ptrace_write_regs(pid, NT_PPC_PKEY, &info->amr3, 1);
++	ret = ptrace_write_regs(pid, NT_PPC_PKEY, &info->invalid_amr, 1);
+ 	PARENT_FAIL_IF(ret, &info->child_sync);
+ 
+-	printf("%-30s AMR: %016lx\n", ptrace_write_running, info->amr3);
++	printf("%-30s AMR: %016lx\n", ptrace_write_running, info->invalid_amr);
+ 
+ 	/* Wake up child so that it can verify it didn't change. */
+ 	ret = prod_child(&info->child_sync);
+@@ -249,7 +248,7 @@ static int parent(struct shared_info *info, pid_t pid)
+ 
+ 	/* Try to write to IAMR. */
+ 	regs[0] = info->amr1;
+-	regs[1] = info->new_iamr;
++	regs[1] = info->invalid_iamr;
+ 	ret = ptrace_write_regs(pid, NT_PPC_PKEY, regs, 2);
+ 	PARENT_FAIL_IF(!ret, &info->child_sync);
+ 
+@@ -257,7 +256,7 @@ static int parent(struct shared_info *info, pid_t pid)
+ 	       ptrace_write_running, regs[0], regs[1]);
+ 
+ 	/* Try to write to IAMR and UAMOR. */
+-	regs[2] = info->new_uamor;
++	regs[2] = info->invalid_uamor;
+ 	ret = ptrace_write_regs(pid, NT_PPC_PKEY, regs, 3);
+ 	PARENT_FAIL_IF(!ret, &info->child_sync);
+ 
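
The rewritten test builds invalid_amr by forcing on every AMR bit that UAMOR marks off-limits to userspace, so ptrace must refuse the write. A standalone sketch of that bit arithmetic; the pkeyshift() layout (two AMR bits per key, numbered from the most-significant end) is the conventional powerpc encoding and is assumed here rather than copied from the selftest headers:

#include <stdio.h>

#define NR_PKEYS	32

static unsigned long pkeyshift(int pkey)
{
	/* Each key owns two adjacent bits; key 0 sits in the top bits. */
	return (NR_PKEYS - pkey - 1) * 2;
}

int main(void)
{
	/* Example values standing in for what the real test reads back
	 * from the kernel via ptrace.
	 */
	unsigned long amr2 = 3ul << pkeyshift(2);
	unsigned long expected_uamor = 3ul << pkeyshift(1) |
				       3ul << pkeyshift(2);

	/* Force on every AMR bit that UAMOR says userspace may not
	 * touch: ptrace should refuse to write this into the child.
	 */
	unsigned long invalid_amr = amr2 | (~0x0UL & ~expected_uamor);

	printf("invalid_amr = %016lx\n", invalid_amr);
	return 0;
}
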
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index ccf276e138829..592fd1c3d1abb 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -3258,6 +3258,11 @@ TEST(user_notification_with_tsync)
+ 	int ret;
+ 	unsigned int flags;
+ 
++	ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
++	ASSERT_EQ(0, ret) {
++		TH_LOG("Kernel does not support PR_SET_NO_NEW_PRIVS!");
++	}
++
+ 	/* these were exclusive */
+ 	flags = SECCOMP_FILTER_FLAG_NEW_LISTENER |
+ 		SECCOMP_FILTER_FLAG_TSYNC;
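
The seccomp hunk exists because an unprivileged process can only install a filter with SECCOMP_FILTER_FLAG_TSYNC once no_new_privs is set; without the added prctl, the test needs CAP_SYS_ADMIN. A minimal standalone illustration of that ordering, with error handling trimmed; this is a sketch of the requirement, not the selftest itself:

#include <linux/filter.h>
#include <linux/seccomp.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	/* Trivial allow-everything filter, just to exercise loading. */
	struct sock_filter allow = BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW);
	struct sock_fprog prog = { .len = 1, .filter = &allow };

	/* Without this, the seccomp() call below fails for an
	 * unprivileged caller.
	 */
	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
		perror("prctl(PR_SET_NO_NEW_PRIVS)");

	if (syscall(__NR_seccomp, SECCOMP_SET_MODE_FILTER,
		    SECCOMP_FILTER_FLAG_TSYNC, &prog) < 0)
		perror("seccomp(SECCOMP_SET_MODE_FILTER)");

	return 0;
}
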

