From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:6.4 commit in: /
Date: Thu, 27 Jul 2023 11:41:42 +0000 (UTC)
Message-ID: <1690458091.ee51f4eb54bb7924e582f9391b383ca51a5efb09.mpagano@gentoo>

commit:     ee51f4eb54bb7924e582f9391b383ca51a5efb09
Author:     Mike Pagano <mpagano@gentoo.org>
AuthorDate: Thu Jul 27 11:41:31 2023 +0000
Commit:     Mike Pagano <mpagano@gentoo.org>
CommitDate: Thu Jul 27 11:41:31 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ee51f4eb

Linux patch 6.4.7

Signed-off-by: Mike Pagano <mpagano@gentoo.org>

 1006_linux-6.4.7.patch | 9670 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 9670 insertions(+)

diff --git a/1006_linux-6.4.7.patch b/1006_linux-6.4.7.patch
new file mode 100644
index 00000000..3b9df10a
--- /dev/null
+++ b/1006_linux-6.4.7.patch
@@ -0,0 +1,9670 @@
+diff --git a/Makefile b/Makefile
+index 23ddaa3f30343..b3dc3b5f14cae 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 4
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
+index e73af709cb7ad..88d8dfeed0db6 100644
+--- a/arch/arm64/include/asm/exception.h
++++ b/arch/arm64/include/asm/exception.h
+@@ -8,16 +8,11 @@
+ #define __ASM_EXCEPTION_H
+ 
+ #include <asm/esr.h>
+-#include <asm/kprobes.h>
+ #include <asm/ptrace.h>
+ 
+ #include <linux/interrupt.h>
+ 
+-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ #define __exception_irq_entry	__irq_entry
+-#else
+-#define __exception_irq_entry	__kprobes
+-#endif
+ 
+ static inline unsigned long disr_to_esr(u64 disr)
+ {
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 9787503ff43fd..36d72d0300db9 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -701,6 +701,8 @@ struct kvm_vcpu_arch {
+ #define DBG_SS_ACTIVE_PENDING	__vcpu_single_flag(sflags, BIT(5))
+ /* PMUSERENR for the guest EL0 is on physical CPU */
+ #define PMUSERENR_ON_CPU	__vcpu_single_flag(sflags, BIT(6))
++/* WFI instruction trapped */
++#define IN_WFI			__vcpu_single_flag(sflags, BIT(7))
+ 
+ 
+ /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
+diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
+index 93bd0975b15f5..f5e5730c2c1cd 100644
+--- a/arch/arm64/include/asm/kvm_pgtable.h
++++ b/arch/arm64/include/asm/kvm_pgtable.h
+@@ -556,22 +556,26 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
+ kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr);
+ 
+ /**
+- * kvm_pgtable_stage2_mkold() - Clear the access flag in a page-table entry.
++ * kvm_pgtable_stage2_test_clear_young() - Test and optionally clear the access
++ *					   flag in a page-table entry.
+  * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
+  * @addr:	Intermediate physical address to identify the page-table entry.
++ * @size:	Size of the address range to visit.
++ * @mkold:	True if the access flag should be cleared.
+  *
+  * The offset of @addr within a page is ignored.
+  *
+- * If there is a valid, leaf page-table entry used to translate @addr, then
+- * clear the access flag in that entry.
++ * Tests and conditionally clears the access flag for every valid, leaf
++ * page-table entry used to translate the range [@addr, @addr + @size).
+  *
+  * Note that it is the caller's responsibility to invalidate the TLB after
+  * calling this function to ensure that the updated permissions are visible
+  * to the CPUs.
+  *
+- * Return: The old page-table entry prior to clearing the flag, 0 on failure.
++ * Return: True if any of the visited PTEs had the access flag set.
+  */
+-kvm_pte_t kvm_pgtable_stage2_mkold(struct kvm_pgtable *pgt, u64 addr);
++bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
++					 u64 size, bool mkold);
+ 
+ /**
+  * kvm_pgtable_stage2_relax_perms() - Relax the permissions enforced by a
+@@ -593,18 +597,6 @@ kvm_pte_t kvm_pgtable_stage2_mkold(struct kvm_pgtable *pgt, u64 addr);
+ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
+ 				   enum kvm_pgtable_prot prot);
+ 
+-/**
+- * kvm_pgtable_stage2_is_young() - Test whether a page-table entry has the
+- *				   access flag set.
+- * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
+- * @addr:	Intermediate physical address to identify the page-table entry.
+- *
+- * The offset of @addr within a page is ignored.
+- *
+- * Return: True if the page-table entry has the access flag set, false otherwise.
+- */
+-bool kvm_pgtable_stage2_is_young(struct kvm_pgtable *pgt, u64 addr);
+-
+ /**
+  * kvm_pgtable_stage2_flush_range() - Clean and invalidate data cache to Point
+  * 				      of Coherency for guest stage-2 address
+diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
+index 2fbafa5cc7ac1..9d7d10d60bfdc 100644
+--- a/arch/arm64/kernel/fpsimd.c
++++ b/arch/arm64/kernel/fpsimd.c
+@@ -847,6 +847,8 @@ void sve_sync_from_fpsimd_zeropad(struct task_struct *task)
+ int vec_set_vector_length(struct task_struct *task, enum vec_type type,
+ 			  unsigned long vl, unsigned long flags)
+ {
++	bool free_sme = false;
++
+ 	if (flags & ~(unsigned long)(PR_SVE_VL_INHERIT |
+ 				     PR_SVE_SET_VL_ONEXEC))
+ 		return -EINVAL;
+@@ -897,21 +899,36 @@ int vec_set_vector_length(struct task_struct *task, enum vec_type type,
+ 		task->thread.fp_type = FP_STATE_FPSIMD;
+ 	}
+ 
+-	if (system_supports_sme() && type == ARM64_VEC_SME) {
+-		task->thread.svcr &= ~(SVCR_SM_MASK |
+-				       SVCR_ZA_MASK);
+-		clear_thread_flag(TIF_SME);
++	if (system_supports_sme()) {
++		if (type == ARM64_VEC_SME ||
++		    !(task->thread.svcr & (SVCR_SM_MASK | SVCR_ZA_MASK))) {
++			/*
++			 * We are changing the SME VL or weren't using
++			 * SME anyway, discard the state and force a
++			 * reallocation.
++			 */
++			task->thread.svcr &= ~(SVCR_SM_MASK |
++					       SVCR_ZA_MASK);
++			clear_thread_flag(TIF_SME);
++			free_sme = true;
++		}
+ 	}
+ 
+ 	if (task == current)
+ 		put_cpu_fpsimd_context();
+ 
+ 	/*
+-	 * Force reallocation of task SVE and SME state to the correct
+-	 * size on next use:
++	 * Free the changed states if they are not in use, SME will be
++	 * reallocated to the correct size on next use and we just
++	 * allocate SVE now in case it is needed for use in streaming
++	 * mode.
+ 	 */
+-	sve_free(task);
+-	if (system_supports_sme() && type == ARM64_VEC_SME)
++	if (system_supports_sve()) {
++		sve_free(task);
++		sve_alloc(task, true);
++	}
++
++	if (free_sme)
+ 		sme_free(task);
+ 
+ 	task_set_vl(task, type, vl);
+diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
+index 05b022be885b6..644a687100aa3 100644
+--- a/arch/arm64/kvm/arch_timer.c
++++ b/arch/arm64/kvm/arch_timer.c
+@@ -827,8 +827,8 @@ static void timer_set_traps(struct kvm_vcpu *vcpu, struct timer_map *map)
+ 	assign_clear_set_bit(tpt, CNTHCTL_EL1PCEN << 10, set, clr);
+ 	assign_clear_set_bit(tpc, CNTHCTL_EL1PCTEN << 10, set, clr);
+ 
+-	/* This only happens on VHE, so use the CNTKCTL_EL1 accessor */
+-	sysreg_clear_set(cntkctl_el1, clr, set);
++	/* This only happens on VHE, so use the CNTHCTL_EL2 accessor. */
++	sysreg_clear_set(cnthctl_el2, clr, set);
+ }
+ 
+ void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu)
+@@ -1559,7 +1559,7 @@ no_vgic:
+ void kvm_timer_init_vhe(void)
+ {
+ 	if (cpus_have_final_cap(ARM64_HAS_ECV_CNTPOFF))
+-		sysreg_clear_set(cntkctl_el1, 0, CNTHCTL_ECV);
++		sysreg_clear_set(cnthctl_el2, 0, CNTHCTL_ECV);
+ }
+ 
+ int kvm_arm_timer_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 14391826241c8..7d8c3dd8b7ca9 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -704,13 +704,15 @@ void kvm_vcpu_wfi(struct kvm_vcpu *vcpu)
+ 	 */
+ 	preempt_disable();
+ 	kvm_vgic_vmcr_sync(vcpu);
+-	vgic_v4_put(vcpu, true);
++	vcpu_set_flag(vcpu, IN_WFI);
++	vgic_v4_put(vcpu);
+ 	preempt_enable();
+ 
+ 	kvm_vcpu_halt(vcpu);
+ 	vcpu_clear_flag(vcpu, IN_WFIT);
+ 
+ 	preempt_disable();
++	vcpu_clear_flag(vcpu, IN_WFI);
+ 	vgic_v4_load(vcpu);
+ 	preempt_enable();
+ }
+@@ -778,7 +780,7 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
+ 		if (kvm_check_request(KVM_REQ_RELOAD_GICv4, vcpu)) {
+ 			/* The distributor enable bits were changed */
+ 			preempt_disable();
+-			vgic_v4_put(vcpu, false);
++			vgic_v4_put(vcpu);
+ 			vgic_v4_load(vcpu);
+ 			preempt_enable();
+ 		}
+@@ -1793,8 +1795,17 @@ static void _kvm_arch_hardware_enable(void *discard)
+ 
+ int kvm_arch_hardware_enable(void)
+ {
+-	int was_enabled = __this_cpu_read(kvm_arm_hardware_enabled);
++	int was_enabled;
+ 
++	/*
++	 * Most calls to this function are made with migration
++	 * disabled, but not with preemption disabled. The former is
++	 * enough to ensure correctness, but most of the helpers
++	 * expect the later and will throw a tantrum otherwise.
++	 */
++	preempt_disable();
++
++	was_enabled = __this_cpu_read(kvm_arm_hardware_enabled);
+ 	_kvm_arch_hardware_enable(NULL);
+ 
+ 	if (!was_enabled) {
+@@ -1802,6 +1813,8 @@ int kvm_arch_hardware_enable(void)
+ 		kvm_timer_cpu_up();
+ 	}
+ 
++	preempt_enable();
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
+index 37bd64e912ca7..14fd3c581a3be 100644
+--- a/arch/arm64/kvm/hyp/pgtable.c
++++ b/arch/arm64/kvm/hyp/pgtable.c
+@@ -1173,25 +1173,54 @@ kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr)
+ 	return pte;
+ }
+ 
+-kvm_pte_t kvm_pgtable_stage2_mkold(struct kvm_pgtable *pgt, u64 addr)
++struct stage2_age_data {
++	bool	mkold;
++	bool	young;
++};
++
++static int stage2_age_walker(const struct kvm_pgtable_visit_ctx *ctx,
++			     enum kvm_pgtable_walk_flags visit)
+ {
+-	kvm_pte_t pte = 0;
+-	stage2_update_leaf_attrs(pgt, addr, 1, 0, KVM_PTE_LEAF_ATTR_LO_S2_AF,
+-				 &pte, NULL, 0);
++	kvm_pte_t new = ctx->old & ~KVM_PTE_LEAF_ATTR_LO_S2_AF;
++	struct stage2_age_data *data = ctx->arg;
++
++	if (!kvm_pte_valid(ctx->old) || new == ctx->old)
++		return 0;
++
++	data->young = true;
++
++	/*
++	 * stage2_age_walker() is always called while holding the MMU lock for
++	 * write, so this will always succeed. Nonetheless, this deliberately
++	 * follows the race detection pattern of the other stage-2 walkers in
++	 * case the locking mechanics of the MMU notifiers is ever changed.
++	 */
++	if (data->mkold && !stage2_try_set_pte(ctx, new))
++		return -EAGAIN;
++
+ 	/*
+ 	 * "But where's the TLBI?!", you scream.
+ 	 * "Over in the core code", I sigh.
+ 	 *
+ 	 * See the '->clear_flush_young()' callback on the KVM mmu notifier.
+ 	 */
+-	return pte;
++	return 0;
+ }
+ 
+-bool kvm_pgtable_stage2_is_young(struct kvm_pgtable *pgt, u64 addr)
++bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
++					 u64 size, bool mkold)
+ {
+-	kvm_pte_t pte = 0;
+-	stage2_update_leaf_attrs(pgt, addr, 1, 0, 0, &pte, NULL, 0);
+-	return pte & KVM_PTE_LEAF_ATTR_LO_S2_AF;
++	struct stage2_age_data data = {
++		.mkold		= mkold,
++	};
++	struct kvm_pgtable_walker walker = {
++		.cb		= stage2_age_walker,
++		.arg		= &data,
++		.flags		= KVM_PGTABLE_WALK_LEAF,
++	};
++
++	WARN_ON(kvm_pgtable_walk(pgt, addr, size, &walker));
++	return data.young;
+ }
+ 
+ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
+diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
+index 3b9d4d24c361a..8a7e9381710ed 100644
+--- a/arch/arm64/kvm/mmu.c
++++ b/arch/arm64/kvm/mmu.c
+@@ -1639,27 +1639,25 @@ bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+ {
+ 	u64 size = (range->end - range->start) << PAGE_SHIFT;
+-	kvm_pte_t kpte;
+-	pte_t pte;
+ 
+ 	if (!kvm->arch.mmu.pgt)
+ 		return false;
+ 
+-	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
+-
+-	kpte = kvm_pgtable_stage2_mkold(kvm->arch.mmu.pgt,
+-					range->start << PAGE_SHIFT);
+-	pte = __pte(kpte);
+-	return pte_valid(pte) && pte_young(pte);
++	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
++						   range->start << PAGE_SHIFT,
++						   size, true);
+ }
+ 
+ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+ {
++	u64 size = (range->end - range->start) << PAGE_SHIFT;
++
+ 	if (!kvm->arch.mmu.pgt)
+ 		return false;
+ 
+-	return kvm_pgtable_stage2_is_young(kvm->arch.mmu.pgt,
+-					   range->start << PAGE_SHIFT);
++	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
++						   range->start << PAGE_SHIFT,
++						   size, false);
+ }
+ 
+ phys_addr_t kvm_mmu_get_httbr(void)
+diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
+index c3b8e132d5992..3dfc8b84e03e6 100644
+--- a/arch/arm64/kvm/vgic/vgic-v3.c
++++ b/arch/arm64/kvm/vgic/vgic-v3.c
+@@ -749,7 +749,7 @@ void vgic_v3_put(struct kvm_vcpu *vcpu)
+ {
+ 	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+ 
+-	WARN_ON(vgic_v4_put(vcpu, false));
++	WARN_ON(vgic_v4_put(vcpu));
+ 
+ 	vgic_v3_vmcr_sync(vcpu);
+ 
+diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
+index c1c28fe680ba3..339a55194b2c6 100644
+--- a/arch/arm64/kvm/vgic/vgic-v4.c
++++ b/arch/arm64/kvm/vgic/vgic-v4.c
+@@ -336,14 +336,14 @@ void vgic_v4_teardown(struct kvm *kvm)
+ 	its_vm->vpes = NULL;
+ }
+ 
+-int vgic_v4_put(struct kvm_vcpu *vcpu, bool need_db)
++int vgic_v4_put(struct kvm_vcpu *vcpu)
+ {
+ 	struct its_vpe *vpe = &vcpu->arch.vgic_cpu.vgic_v3.its_vpe;
+ 
+ 	if (!vgic_supports_direct_msis(vcpu->kvm) || !vpe->resident)
+ 		return 0;
+ 
+-	return its_make_vpe_non_resident(vpe, need_db);
++	return its_make_vpe_non_resident(vpe, !!vcpu_get_flag(vcpu, IN_WFI));
+ }
+ 
+ int vgic_v4_load(struct kvm_vcpu *vcpu)
+@@ -354,6 +354,9 @@ int vgic_v4_load(struct kvm_vcpu *vcpu)
+ 	if (!vgic_supports_direct_msis(vcpu->kvm) || vpe->resident)
+ 		return 0;
+ 
++	if (vcpu_get_flag(vcpu, IN_WFI))
++		return 0;
++
+ 	/*
+ 	 * Before making the VPE resident, make sure the redistributor
+ 	 * corresponding to our current CPU expects us here. See the
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index af6bc8403ee46..72b3c21820b96 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -451,7 +451,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
+ void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
+ 				   phys_addr_t size, pgprot_t prot)
+ {
+-	if ((virt >= PAGE_END) && (virt < VMALLOC_START)) {
++	if (virt < PAGE_OFFSET) {
+ 		pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
+ 			&phys, virt);
+ 		return;
+@@ -478,7 +478,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
+ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
+ 				phys_addr_t size, pgprot_t prot)
+ {
+-	if ((virt >= PAGE_END) && (virt < VMALLOC_START)) {
++	if (virt < PAGE_OFFSET) {
+ 		pr_warn("BUG: not updating mapping for %pa at 0x%016lx - outside kernel range\n",
+ 			&phys, virt);
+ 		return;
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index b26da8efa616e..0ce5f13eabb1b 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -322,7 +322,13 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf)
+ 	 *
+ 	 */
+ 
+-	emit_bti(A64_BTI_C, ctx);
++	/* bpf function may be invoked by 3 instruction types:
++	 * 1. bl, attached via freplace to bpf prog via short jump
++	 * 2. br, attached via freplace to bpf prog via long jump
++	 * 3. blr, working as a function pointer, used by emit_call.
++	 * So BTI_JC should used here to support both br and blr.
++	 */
++	emit_bti(A64_BTI_JC, ctx);
+ 
+ 	emit(A64_MOV(1, A64_R(9), A64_LR), ctx);
+ 	emit(A64_NOP, ctx);
+diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
+index c9a0d1fa32090..930c8cc0812fc 100644
+--- a/arch/arm64/tools/sysreg
++++ b/arch/arm64/tools/sysreg
+@@ -1890,7 +1890,7 @@ Field	0	SM
+ EndSysreg
+ 
+ SysregFields	HFGxTR_EL2
+-Field	63	nAMIAIR2_EL1
++Field	63	nAMAIR2_EL1
+ Field	62	nMAIR2_EL1
+ Field	61	nS2POR_EL1
+ Field	60	nPOR_EL1
+@@ -1905,9 +1905,9 @@ Field	52	nGCS_EL0
+ Res0	51
+ Field	50	nACCDATA_EL1
+ Field	49	ERXADDR_EL1
+-Field	48	EXRPFGCDN_EL1
+-Field	47	EXPFGCTL_EL1
+-Field	46	EXPFGF_EL1
++Field	48	ERXPFGCDN_EL1
++Field	47	ERXPFGCTL_EL1
++Field	46	ERXPFGF_EL1
+ Field	45	ERXMISCn_EL1
+ Field	44	ERXSTATUS_EL1
+ Field	43	ERXCTLR_EL1
+@@ -1922,8 +1922,8 @@ Field	35	TPIDR_EL0
+ Field	34	TPIDRRO_EL0
+ Field	33	TPIDR_EL1
+ Field	32	TCR_EL1
+-Field	31	SCTXNUM_EL0
+-Field	30	SCTXNUM_EL1
++Field	31	SCXTNUM_EL0
++Field	30	SCXTNUM_EL1
+ Field	29	SCTLR_EL1
+ Field	28	REVIDR_EL1
+ Field	27	PAR_EL1
+diff --git a/arch/ia64/kernel/sys_ia64.c b/arch/ia64/kernel/sys_ia64.c
+index 6e948d015332a..eb561cc93632f 100644
+--- a/arch/ia64/kernel/sys_ia64.c
++++ b/arch/ia64/kernel/sys_ia64.c
+@@ -63,7 +63,7 @@ arch_get_unmapped_area (struct file *filp, unsigned long addr, unsigned long len
+ 	info.low_limit = addr;
+ 	info.high_limit = TASK_SIZE;
+ 	info.align_mask = align_mask;
+-	info.align_offset = 0;
++	info.align_offset = pgoff << PAGE_SHIFT;
+ 	return vm_unmapped_area(&info);
+ }
+ 
+diff --git a/arch/mips/include/asm/dec/prom.h b/arch/mips/include/asm/dec/prom.h
+index 1e1247add1cf8..908e96e3a3117 100644
+--- a/arch/mips/include/asm/dec/prom.h
++++ b/arch/mips/include/asm/dec/prom.h
+@@ -70,7 +70,7 @@ static inline bool prom_is_rex(u32 magic)
+  */
+ typedef struct {
+ 	int pagesize;
+-	unsigned char bitmap[0];
++	unsigned char bitmap[];
+ } memmap;
+ 
+ 
+diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
+index 39acccabf2ede..465b7cb9d44f4 100644
+--- a/arch/parisc/kernel/sys_parisc.c
++++ b/arch/parisc/kernel/sys_parisc.c
+@@ -26,12 +26,17 @@
+ #include <linux/compat.h>
+ 
+ /*
+- * Construct an artificial page offset for the mapping based on the physical
++ * Construct an artificial page offset for the mapping based on the virtual
+  * address of the kernel file mapping variable.
++ * If filp is zero the calculated pgoff value aliases the memory of the given
++ * address. This is useful for io_uring where the mapping shall alias a kernel
++ * address and a userspace address where both the kernel and the userspace
++ * access the same memory region.
+  */
+-#define GET_FILP_PGOFF(filp)		\
+-	(filp ? (((unsigned long) filp->f_mapping) >> 8)	\
+-		 & ((SHM_COLOUR-1) >> PAGE_SHIFT) : 0UL)
++#define GET_FILP_PGOFF(filp, addr)		\
++	((filp ? (((unsigned long) filp->f_mapping) >> 8)	\
++		 & ((SHM_COLOUR-1) >> PAGE_SHIFT) : 0UL)	\
++	  + (addr >> PAGE_SHIFT))
+ 
+ static unsigned long shared_align_offset(unsigned long filp_pgoff,
+ 					 unsigned long pgoff)
+@@ -111,7 +116,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
+ 	do_color_align = 0;
+ 	if (filp || (flags & MAP_SHARED))
+ 		do_color_align = 1;
+-	filp_pgoff = GET_FILP_PGOFF(filp);
++	filp_pgoff = GET_FILP_PGOFF(filp, addr);
+ 
+ 	if (flags & MAP_FIXED) {
+ 		/* Even MAP_FIXED mappings must reside within TASK_SIZE */
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index b9f4546139894..73ed8ccb09ce8 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -4617,9 +4617,6 @@ static bool blk_mq_elv_switch_none(struct list_head *head,
+ {
+ 	struct blk_mq_qe_pair *qe;
+ 
+-	if (!q->elevator)
+-		return true;
+-
+ 	qe = kmalloc(sizeof(*qe), GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY);
+ 	if (!qe)
+ 		return false;
+@@ -4627,6 +4624,12 @@ static bool blk_mq_elv_switch_none(struct list_head *head,
+ 	/* q->elevator needs protection from ->sysfs_lock */
+ 	mutex_lock(&q->sysfs_lock);
+ 
++	/* the check has to be done with holding sysfs_lock */
++	if (!q->elevator) {
++		kfree(qe);
++		goto unlock;
++	}
++
+ 	INIT_LIST_HEAD(&qe->node);
+ 	qe->q = q;
+ 	qe->type = q->elevator->type;
+@@ -4634,6 +4637,7 @@ static bool blk_mq_elv_switch_none(struct list_head *head,
+ 	__elevator_get(qe->type);
+ 	list_add(&qe->node, head);
+ 	elevator_disable(q);
++unlock:
+ 	mutex_unlock(&q->sysfs_lock);
+ 
+ 	return true;
+diff --git a/drivers/accel/qaic/qaic_control.c b/drivers/accel/qaic/qaic_control.c
+index 5c57f7b4494e4..cfbc92da426fa 100644
+--- a/drivers/accel/qaic/qaic_control.c
++++ b/drivers/accel/qaic/qaic_control.c
+@@ -14,6 +14,7 @@
+ #include <linux/mm.h>
+ #include <linux/moduleparam.h>
+ #include <linux/mutex.h>
++#include <linux/overflow.h>
+ #include <linux/pci.h>
+ #include <linux/scatterlist.h>
+ #include <linux/types.h>
+@@ -366,7 +367,7 @@ static int encode_passthrough(struct qaic_device *qdev, void *trans, struct wrap
+ 	if (in_trans->hdr.len % 8 != 0)
+ 		return -EINVAL;
+ 
+-	if (msg_hdr_len + in_trans->hdr.len > QAIC_MANAGE_EXT_MSG_LENGTH)
++	if (size_add(msg_hdr_len, in_trans->hdr.len) > QAIC_MANAGE_EXT_MSG_LENGTH)
+ 		return -ENOSPC;
+ 
+ 	trans_wrapper = add_wrapper(wrappers,
+@@ -418,9 +419,12 @@ static int find_and_map_user_pages(struct qaic_device *qdev,
+ 	}
+ 
+ 	ret = get_user_pages_fast(xfer_start_addr, nr_pages, 0, page_list);
+-	if (ret < 0 || ret != nr_pages) {
+-		ret = -EFAULT;
++	if (ret < 0)
+ 		goto free_page_list;
++	if (ret != nr_pages) {
++		nr_pages = ret;
++		ret = -EFAULT;
++		goto put_pages;
+ 	}
+ 
+ 	sgt = kmalloc(sizeof(*sgt), GFP_KERNEL);
+@@ -557,11 +561,8 @@ static int encode_dma(struct qaic_device *qdev, void *trans, struct wrapper_list
+ 	msg = &wrapper->msg;
+ 	msg_hdr_len = le32_to_cpu(msg->hdr.len);
+ 
+-	if (msg_hdr_len > (UINT_MAX - QAIC_MANAGE_EXT_MSG_LENGTH))
+-		return -EINVAL;
+-
+ 	/* There should be enough space to hold at least one ASP entry. */
+-	if (msg_hdr_len + sizeof(*out_trans) + sizeof(struct wire_addr_size_pair) >
++	if (size_add(msg_hdr_len, sizeof(*out_trans) + sizeof(struct wire_addr_size_pair)) >
+ 	    QAIC_MANAGE_EXT_MSG_LENGTH)
+ 		return -ENOMEM;
+ 
+@@ -634,7 +635,7 @@ static int encode_activate(struct qaic_device *qdev, void *trans, struct wrapper
+ 	msg = &wrapper->msg;
+ 	msg_hdr_len = le32_to_cpu(msg->hdr.len);
+ 
+-	if (msg_hdr_len + sizeof(*out_trans) > QAIC_MANAGE_MAX_MSG_LENGTH)
++	if (size_add(msg_hdr_len, sizeof(*out_trans)) > QAIC_MANAGE_MAX_MSG_LENGTH)
+ 		return -ENOSPC;
+ 
+ 	if (!in_trans->queue_size)
+@@ -718,7 +719,7 @@ static int encode_status(struct qaic_device *qdev, void *trans, struct wrapper_l
+ 	msg = &wrapper->msg;
+ 	msg_hdr_len = le32_to_cpu(msg->hdr.len);
+ 
+-	if (msg_hdr_len + in_trans->hdr.len > QAIC_MANAGE_MAX_MSG_LENGTH)
++	if (size_add(msg_hdr_len, in_trans->hdr.len) > QAIC_MANAGE_MAX_MSG_LENGTH)
+ 		return -ENOSPC;
+ 
+ 	trans_wrapper = add_wrapper(wrappers, sizeof(*trans_wrapper));
+@@ -748,7 +749,8 @@ static int encode_message(struct qaic_device *qdev, struct manage_msg *user_msg,
+ 	int ret;
+ 	int i;
+ 
+-	if (!user_msg->count) {
++	if (!user_msg->count ||
++	    user_msg->len < sizeof(*trans_hdr)) {
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+@@ -765,12 +767,13 @@ static int encode_message(struct qaic_device *qdev, struct manage_msg *user_msg,
+ 	}
+ 
+ 	for (i = 0; i < user_msg->count; ++i) {
+-		if (user_len >= user_msg->len) {
++		if (user_len > user_msg->len - sizeof(*trans_hdr)) {
+ 			ret = -EINVAL;
+ 			break;
+ 		}
+ 		trans_hdr = (struct qaic_manage_trans_hdr *)(user_msg->data + user_len);
+-		if (user_len + trans_hdr->len > user_msg->len) {
++		if (trans_hdr->len < sizeof(trans_hdr) ||
++		    size_add(user_len, trans_hdr->len) > user_msg->len) {
+ 			ret = -EINVAL;
+ 			break;
+ 		}
+@@ -953,15 +956,23 @@ static int decode_message(struct qaic_device *qdev, struct manage_msg *user_msg,
+ 	int ret;
+ 	int i;
+ 
+-	if (msg_hdr_len > QAIC_MANAGE_MAX_MSG_LENGTH)
++	if (msg_hdr_len < sizeof(*trans_hdr) ||
++	    msg_hdr_len > QAIC_MANAGE_MAX_MSG_LENGTH)
+ 		return -EINVAL;
+ 
+ 	user_msg->len = 0;
+ 	user_msg->count = le32_to_cpu(msg->hdr.count);
+ 
+ 	for (i = 0; i < user_msg->count; ++i) {
++		u32 hdr_len;
++
++		if (msg_len > msg_hdr_len - sizeof(*trans_hdr))
++			return -EINVAL;
++
+ 		trans_hdr = (struct wire_trans_hdr *)(msg->data + msg_len);
+-		if (msg_len + le32_to_cpu(trans_hdr->len) > msg_hdr_len)
++		hdr_len = le32_to_cpu(trans_hdr->len);
++		if (hdr_len < sizeof(*trans_hdr) ||
++		    size_add(msg_len, hdr_len) > msg_hdr_len)
+ 			return -EINVAL;
+ 
+ 		switch (le32_to_cpu(trans_hdr->type)) {
+diff --git a/drivers/acpi/button.c b/drivers/acpi/button.c
+index 475e1eddfa3b4..ef77c14c72a92 100644
+--- a/drivers/acpi/button.c
++++ b/drivers/acpi/button.c
+@@ -77,6 +77,15 @@ static const struct dmi_system_id dmi_lid_quirks[] = {
+ 		},
+ 		.driver_data = (void *)(long)ACPI_BUTTON_LID_INIT_DISABLED,
+ 	},
++	{
++		/* Nextbook Ares 8A tablet, _LID device always reports lid closed */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "CherryTrail"),
++			DMI_MATCH(DMI_BIOS_VERSION, "M882"),
++		},
++		.driver_data = (void *)(long)ACPI_BUTTON_LID_INIT_DISABLED,
++	},
+ 	{
+ 		/*
+ 		 * Lenovo Yoga 9 14ITL5, initial notification of the LID device
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 0800a9d775580..1dd8d5aebf678 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -470,52 +470,6 @@ static const struct dmi_system_id asus_laptop[] = {
+ 	{ }
+ };
+ 
+-static const struct dmi_system_id lenovo_laptop[] = {
+-	{
+-		.ident = "LENOVO IdeaPad Flex 5 14ALC7",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "82R9"),
+-		},
+-	},
+-	{
+-		.ident = "LENOVO IdeaPad Flex 5 16ALC7",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "82RA"),
+-		},
+-	},
+-	{ }
+-};
+-
+-static const struct dmi_system_id tongfang_gm_rg[] = {
+-	{
+-		.ident = "TongFang GMxRGxx/XMG CORE 15 (M22)/TUXEDO Stellaris 15 Gen4 AMD",
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "GMxRGxx"),
+-		},
+-	},
+-	{ }
+-};
+-
+-static const struct dmi_system_id maingear_laptop[] = {
+-	{
+-		.ident = "MAINGEAR Vector Pro 2 15",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Micro Electronics Inc"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "MG-VCP2-15A3070T"),
+-		}
+-	},
+-	{
+-		.ident = "MAINGEAR Vector Pro 2 17",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Micro Electronics Inc"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "MG-VCP2-17A3070T"),
+-		},
+-	},
+-	{ }
+-};
+-
+ static const struct dmi_system_id lg_laptop[] = {
+ 	{
+ 		.ident = "LG Electronics 17U70P",
+@@ -539,10 +493,6 @@ struct irq_override_cmp {
+ static const struct irq_override_cmp override_table[] = {
+ 	{ medion_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false },
+ 	{ asus_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false },
+-	{ lenovo_laptop, 6, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true },
+-	{ lenovo_laptop, 10, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true },
+-	{ tongfang_gm_rg, 1, ACPI_EDGE_SENSITIVE, ACPI_ACTIVE_LOW, 1, true },
+-	{ maingear_laptop, 1, ACPI_EDGE_SENSITIVE, ACPI_ACTIVE_LOW, 1, true },
+ 	{ lg_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false },
+ };
+ 
+@@ -562,16 +512,6 @@ static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
+ 			return entry->override;
+ 	}
+ 
+-#ifdef CONFIG_X86
+-	/*
+-	 * IRQ override isn't needed on modern AMD Zen systems and
+-	 * this override breaks active low IRQs on AMD Ryzen 6000 and
+-	 * newer systems. Skip it.
+-	 */
+-	if (boot_cpu_has(X86_FEATURE_ZEN))
+-		return false;
+-#endif
+-
+ 	return true;
+ }
+ 
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index bcc25d457581d..e7d04ab864a16 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -470,6 +470,22 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "82BK"),
+ 		},
+ 	},
++	{
++	 .callback = video_detect_force_native,
++	 /* Lenovo ThinkPad X131e (3371 AMD version) */
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		DMI_MATCH(DMI_PRODUCT_NAME, "3371"),
++		},
++	},
++	{
++	 .callback = video_detect_force_native,
++	 /* Apple iMac11,3 */
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "iMac11,3"),
++		},
++	},
+ 	{
+ 	 /* https://bugzilla.redhat.com/show_bug.cgi?id=1217249 */
+ 	 .callback = video_detect_force_native,
+@@ -512,6 +528,14 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "Precision 7510"),
+ 		},
+ 	},
++	{
++	 .callback = video_detect_force_native,
++	 /* Dell Studio 1569 */
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "Studio 1569"),
++		},
++	},
+ 	{
+ 	 .callback = video_detect_force_native,
+ 	 /* Acer Aspire 3830TG */
+diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
+index 9c2d6f35f88a0..c2b925f8cd4e4 100644
+--- a/drivers/acpi/x86/utils.c
++++ b/drivers/acpi/x86/utils.c
+@@ -259,10 +259,11 @@ bool force_storage_d3(void)
+  * drivers/platform/x86/x86-android-tablets.c kernel module.
+  */
+ #define ACPI_QUIRK_SKIP_I2C_CLIENTS				BIT(0)
+-#define ACPI_QUIRK_UART1_TTY_UART2_SKIP				BIT(1)
+-#define ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY			BIT(2)
+-#define ACPI_QUIRK_USE_ACPI_AC_AND_BATTERY			BIT(3)
+-#define ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS			BIT(4)
++#define ACPI_QUIRK_UART1_SKIP					BIT(1)
++#define ACPI_QUIRK_UART1_TTY_UART2_SKIP				BIT(2)
++#define ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY			BIT(3)
++#define ACPI_QUIRK_USE_ACPI_AC_AND_BATTERY			BIT(4)
++#define ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS			BIT(5)
+ 
+ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ 	/*
+@@ -319,6 +320,7 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ 			DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "YETI-11"),
+ 		},
+ 		.driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
++					ACPI_QUIRK_UART1_SKIP |
+ 					ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY |
+ 					ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS),
+ 	},
+@@ -365,7 +367,7 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ 					ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
+ 	},
+ 	{
+-		/* Nextbook Ares 8 */
++		/* Nextbook Ares 8 (BYT version)*/
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "M890BAP"),
+@@ -374,6 +376,16 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ 					ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY |
+ 					ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS),
+ 	},
++	{
++		/* Nextbook Ares 8A (CHT version)*/
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "CherryTrail"),
++			DMI_MATCH(DMI_BIOS_VERSION, "M882"),
++		},
++		.driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
++					ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
++	},
+ 	{
+ 		/* Whitelabel (sold as various brands) TM800A550L */
+ 		.matches = {
+@@ -392,6 +404,7 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ #if IS_ENABLED(CONFIG_X86_ANDROID_TABLETS)
+ static const struct acpi_device_id i2c_acpi_known_good_ids[] = {
+ 	{ "10EC5640", 0 }, /* RealTek ALC5640 audio codec */
++	{ "10EC5651", 0 }, /* RealTek ALC5651 audio codec */
+ 	{ "INT33F4", 0 },  /* X-Powers AXP288 PMIC */
+ 	{ "INT33FD", 0 },  /* Intel Crystal Cove PMIC */
+ 	{ "INT34D3", 0 },  /* Intel Whiskey Cove PMIC */
+@@ -438,6 +451,9 @@ int acpi_quirk_skip_serdev_enumeration(struct device *controller_parent, bool *s
+ 	if (dmi_id)
+ 		quirks = (unsigned long)dmi_id->driver_data;
+ 
++	if ((quirks & ACPI_QUIRK_UART1_SKIP) && uid == 1)
++		*skip = true;
++
+ 	if (quirks & ACPI_QUIRK_UART1_TTY_UART2_SKIP) {
+ 		if (uid == 1)
+ 			return -ENODEV; /* Create tty cdev instead of serdev */
+diff --git a/drivers/base/regmap/regmap-i2c.c b/drivers/base/regmap/regmap-i2c.c
+index 980e5ce6a3a35..3ec611dc0c09f 100644
+--- a/drivers/base/regmap/regmap-i2c.c
++++ b/drivers/base/regmap/regmap-i2c.c
+@@ -242,8 +242,8 @@ static int regmap_i2c_smbus_i2c_read(void *context, const void *reg,
+ static const struct regmap_bus regmap_i2c_smbus_i2c_block = {
+ 	.write = regmap_i2c_smbus_i2c_write,
+ 	.read = regmap_i2c_smbus_i2c_read,
+-	.max_raw_read = I2C_SMBUS_BLOCK_MAX,
+-	.max_raw_write = I2C_SMBUS_BLOCK_MAX,
++	.max_raw_read = I2C_SMBUS_BLOCK_MAX - 1,
++	.max_raw_write = I2C_SMBUS_BLOCK_MAX - 1,
+ };
+ 
+ static int regmap_i2c_smbus_i2c_write_reg16(void *context, const void *data,
+@@ -299,8 +299,8 @@ static int regmap_i2c_smbus_i2c_read_reg16(void *context, const void *reg,
+ static const struct regmap_bus regmap_i2c_smbus_i2c_block_reg16 = {
+ 	.write = regmap_i2c_smbus_i2c_write_reg16,
+ 	.read = regmap_i2c_smbus_i2c_read_reg16,
+-	.max_raw_read = I2C_SMBUS_BLOCK_MAX,
+-	.max_raw_write = I2C_SMBUS_BLOCK_MAX,
++	.max_raw_read = I2C_SMBUS_BLOCK_MAX - 2,
++	.max_raw_write = I2C_SMBUS_BLOCK_MAX - 2,
+ };
+ 
+ static const struct regmap_bus *regmap_get_i2c_bus(struct i2c_client *i2c,
+diff --git a/drivers/base/regmap/regmap-spi-avmm.c b/drivers/base/regmap/regmap-spi-avmm.c
+index 6af692844c196..4c2b94b3e30be 100644
+--- a/drivers/base/regmap/regmap-spi-avmm.c
++++ b/drivers/base/regmap/regmap-spi-avmm.c
+@@ -660,7 +660,7 @@ static const struct regmap_bus regmap_spi_avmm_bus = {
+ 	.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
+ 	.val_format_endian_default = REGMAP_ENDIAN_NATIVE,
+ 	.max_raw_read = SPI_AVMM_VAL_SIZE * MAX_READ_CNT,
+-	.max_raw_write = SPI_AVMM_REG_SIZE + SPI_AVMM_VAL_SIZE * MAX_WRITE_CNT,
++	.max_raw_write = SPI_AVMM_VAL_SIZE * MAX_WRITE_CNT,
+ 	.free_context = spi_avmm_bridge_ctx_free,
+ };
+ 
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index fa2d3fba6ac9d..db7851f0e3b8c 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -2082,8 +2082,6 @@ int _regmap_raw_write(struct regmap *map, unsigned int reg,
+ 	size_t val_count = val_len / val_bytes;
+ 	size_t chunk_count, chunk_bytes;
+ 	size_t chunk_regs = val_count;
+-	size_t max_data = map->max_raw_write - map->format.reg_bytes -
+-			map->format.pad_bytes;
+ 	int ret, i;
+ 
+ 	if (!val_count)
+@@ -2091,8 +2089,8 @@ int _regmap_raw_write(struct regmap *map, unsigned int reg,
+ 
+ 	if (map->use_single_write)
+ 		chunk_regs = 1;
+-	else if (map->max_raw_write && val_len > max_data)
+-		chunk_regs = max_data / val_bytes;
++	else if (map->max_raw_write && val_len > map->max_raw_write)
++		chunk_regs = map->max_raw_write / val_bytes;
+ 
+ 	chunk_count = val_count / chunk_regs;
+ 	chunk_bytes = chunk_regs * val_bytes;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 2a8e2bb038f58..50e23762ec5e9 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -4099,6 +4099,7 @@ static int btusb_probe(struct usb_interface *intf,
+ 	BT_DBG("intf %p id %p", intf, id);
+ 
+ 	if ((id->driver_info & BTUSB_IFNUM_2) &&
++	    (intf->cur_altsetting->desc.bInterfaceNumber != 0) &&
+ 	    (intf->cur_altsetting->desc.bInterfaceNumber != 2))
+ 		return -ENODEV;
+ 
+diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
+index 2a594b754af14..b630f2acc105e 100644
+--- a/drivers/dma-buf/dma-resv.c
++++ b/drivers/dma-buf/dma-resv.c
+@@ -571,6 +571,7 @@ int dma_resv_get_fences(struct dma_resv *obj, enum dma_resv_usage usage,
+ 	dma_resv_for_each_fence_unlocked(&cursor, fence) {
+ 
+ 		if (dma_resv_iter_is_restarted(&cursor)) {
++			struct dma_fence **new_fences;
+ 			unsigned int count;
+ 
+ 			while (*num_fences)
+@@ -579,13 +580,17 @@ int dma_resv_get_fences(struct dma_resv *obj, enum dma_resv_usage usage,
+ 			count = cursor.num_fences + 1;
+ 
+ 			/* Eventually re-allocate the array */
+-			*fences = krealloc_array(*fences, count,
+-						 sizeof(void *),
+-						 GFP_KERNEL);
+-			if (count && !*fences) {
++			new_fences = krealloc_array(*fences, count,
++						    sizeof(void *),
++						    GFP_KERNEL);
++			if (count && !new_fences) {
++				kfree(*fences);
++				*fences = NULL;
++				*num_fences = 0;
+ 				dma_resv_iter_end(&cursor);
+ 				return -ENOMEM;
+ 			}
++			*fences = new_fences;
+ 		}
+ 
+ 		(*fences)[(*num_fences)++] = dma_fence_get(fence);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
+index 53ff91fc6cf6b..d0748bcfad16b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
+@@ -55,8 +55,9 @@ static enum hrtimer_restart amdgpu_vkms_vblank_simulate(struct hrtimer *timer)
+ 		DRM_WARN("%s: vblank timer overrun\n", __func__);
+ 
+ 	ret = drm_crtc_handle_vblank(crtc);
++	/* Don't queue timer again when vblank is disabled. */
+ 	if (!ret)
+-		DRM_ERROR("amdgpu_vkms failure on handling vblank");
++		return HRTIMER_NORESTART;
+ 
+ 	return HRTIMER_RESTART;
+ }
+@@ -81,7 +82,7 @@ static void amdgpu_vkms_disable_vblank(struct drm_crtc *crtc)
+ {
+ 	struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
+ 
+-	hrtimer_cancel(&amdgpu_crtc->vblank_timer);
++	hrtimer_try_to_cancel(&amdgpu_crtc->vblank_timer);
+ }
+ 
+ static bool amdgpu_vkms_get_vblank_timestamp(struct drm_crtc *crtc,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 44f4c74419740..812d7dd4c04b4 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -416,12 +416,12 @@ static void dm_pflip_high_irq(void *interrupt_params)
+ 
+ 	spin_lock_irqsave(&adev_to_drm(adev)->event_lock, flags);
+ 
+-	if (amdgpu_crtc->pflip_status != AMDGPU_FLIP_SUBMITTED){
+-		DC_LOG_PFLIP("amdgpu_crtc->pflip_status = %d !=AMDGPU_FLIP_SUBMITTED(%d) on crtc:%d[%p] \n",
+-						 amdgpu_crtc->pflip_status,
+-						 AMDGPU_FLIP_SUBMITTED,
+-						 amdgpu_crtc->crtc_id,
+-						 amdgpu_crtc);
++	if (amdgpu_crtc->pflip_status != AMDGPU_FLIP_SUBMITTED) {
++		DC_LOG_PFLIP("amdgpu_crtc->pflip_status = %d !=AMDGPU_FLIP_SUBMITTED(%d) on crtc:%d[%p]\n",
++			     amdgpu_crtc->pflip_status,
++			     AMDGPU_FLIP_SUBMITTED,
++			     amdgpu_crtc->crtc_id,
++			     amdgpu_crtc);
+ 		spin_unlock_irqrestore(&adev_to_drm(adev)->event_lock, flags);
+ 		return;
+ 	}
+@@ -875,7 +875,7 @@ static int dm_set_powergating_state(void *handle,
+ }
+ 
+ /* Prototypes of private functions */
+-static int dm_early_init(void* handle);
++static int dm_early_init(void *handle);
+ 
+ /* Allocate memory for FBC compressed data  */
+ static void amdgpu_dm_fbc_init(struct drm_connector *connector)
+@@ -1274,7 +1274,7 @@ static void mmhub_read_system_context(struct amdgpu_device *adev, struct dc_phy_
+ 	pa_config->system_aperture.start_addr = (uint64_t)logical_addr_low << 18;
+ 	pa_config->system_aperture.end_addr = (uint64_t)logical_addr_high << 18;
+ 
+-	pa_config->system_aperture.agp_base = (uint64_t)agp_base << 24 ;
++	pa_config->system_aperture.agp_base = (uint64_t)agp_base << 24;
+ 	pa_config->system_aperture.agp_bot = (uint64_t)agp_bot << 24;
+ 	pa_config->system_aperture.agp_top = (uint64_t)agp_top << 24;
+ 
+@@ -1339,6 +1339,15 @@ static void dm_handle_hpd_rx_offload_work(struct work_struct *work)
+ 	if (amdgpu_in_reset(adev))
+ 		goto skip;
+ 
++	if (offload_work->data.bytes.device_service_irq.bits.UP_REQ_MSG_RDY ||
++		offload_work->data.bytes.device_service_irq.bits.DOWN_REP_MSG_RDY) {
++		dm_handle_mst_sideband_msg_ready_event(&aconnector->mst_mgr, DOWN_OR_UP_MSG_RDY_EVENT);
++		spin_lock_irqsave(&offload_work->offload_wq->offload_lock, flags);
++		offload_work->offload_wq->is_handling_mst_msg_rdy_event = false;
++		spin_unlock_irqrestore(&offload_work->offload_wq->offload_lock, flags);
++		goto skip;
++	}
++
+ 	mutex_lock(&adev->dm.dc_lock);
+ 	if (offload_work->data.bytes.device_service_irq.bits.AUTOMATED_TEST) {
+ 		dc_link_dp_handle_automated_test(dc_link);
+@@ -1357,8 +1366,7 @@ static void dm_handle_hpd_rx_offload_work(struct work_struct *work)
+ 		DP_TEST_RESPONSE,
+ 		&test_response.raw,
+ 		sizeof(test_response));
+-	}
+-	else if ((dc_link->connector_signal != SIGNAL_TYPE_EDP) &&
++	} else if ((dc_link->connector_signal != SIGNAL_TYPE_EDP) &&
+ 			dc_link_check_link_loss_status(dc_link, &offload_work->data) &&
+ 			dc_link_dp_allow_hpd_rx_irq(dc_link)) {
+ 		/* offload_work->data is from handle_hpd_rx_irq->
+@@ -1546,7 +1554,7 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ 	mutex_init(&adev->dm.dc_lock);
+ 	mutex_init(&adev->dm.audio_lock);
+ 
+-	if(amdgpu_dm_irq_init(adev)) {
++	if (amdgpu_dm_irq_init(adev)) {
+ 		DRM_ERROR("amdgpu: failed to initialize DM IRQ support.\n");
+ 		goto error;
+ 	}
+@@ -1691,9 +1699,8 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ 	if (amdgpu_dc_debug_mask & DC_DISABLE_STUTTER)
+ 		adev->dm.dc->debug.disable_stutter = true;
+ 
+-	if (amdgpu_dc_debug_mask & DC_DISABLE_DSC) {
++	if (amdgpu_dc_debug_mask & DC_DISABLE_DSC)
+ 		adev->dm.dc->debug.disable_dsc = true;
+-	}
+ 
+ 	if (amdgpu_dc_debug_mask & DC_DISABLE_CLOCK_GATING)
+ 		adev->dm.dc->debug.disable_clock_gate = true;
+@@ -1937,8 +1944,6 @@ static void amdgpu_dm_fini(struct amdgpu_device *adev)
+ 	mutex_destroy(&adev->dm.audio_lock);
+ 	mutex_destroy(&adev->dm.dc_lock);
+ 	mutex_destroy(&adev->dm.dpia_aux_lock);
+-
+-	return;
+ }
+ 
+ static int load_dmcu_fw(struct amdgpu_device *adev)
+@@ -1947,7 +1952,7 @@ static int load_dmcu_fw(struct amdgpu_device *adev)
+ 	int r;
+ 	const struct dmcu_firmware_header_v1_0 *hdr;
+ 
+-	switch(adev->asic_type) {
++	switch (adev->asic_type) {
+ #if defined(CONFIG_DRM_AMD_DC_SI)
+ 	case CHIP_TAHITI:
+ 	case CHIP_PITCAIRN:
+@@ -2704,7 +2709,7 @@ static void dm_gpureset_commit_state(struct dc_state *dc_state,
+ 		struct dc_scaling_info scaling_infos[MAX_SURFACES];
+ 		struct dc_flip_addrs flip_addrs[MAX_SURFACES];
+ 		struct dc_stream_update stream_update;
+-	} * bundle;
++	} *bundle;
+ 	int k, m;
+ 
+ 	bundle = kzalloc(sizeof(*bundle), GFP_KERNEL);
+@@ -2734,8 +2739,6 @@ static void dm_gpureset_commit_state(struct dc_state *dc_state,
+ 
+ cleanup:
+ 	kfree(bundle);
+-
+-	return;
+ }
+ 
+ static int dm_resume(void *handle)
+@@ -2949,8 +2952,7 @@ static const struct amd_ip_funcs amdgpu_dm_funcs = {
+ 	.set_powergating_state = dm_set_powergating_state,
+ };
+ 
+-const struct amdgpu_ip_block_version dm_ip_block =
+-{
++const struct amdgpu_ip_block_version dm_ip_block = {
+ 	.type = AMD_IP_BLOCK_TYPE_DCE,
+ 	.major = 1,
+ 	.minor = 0,
+@@ -2995,9 +2997,12 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
+ 	caps->ext_caps = &aconnector->dc_link->dpcd_sink_ext_caps;
+ 	caps->aux_support = false;
+ 
+-	if (caps->ext_caps->bits.oled == 1 /*||
+-	    caps->ext_caps->bits.sdr_aux_backlight_control == 1 ||
+-	    caps->ext_caps->bits.hdr_aux_backlight_control == 1*/)
++	if (caps->ext_caps->bits.oled == 1
++	    /*
++	     * ||
++	     * caps->ext_caps->bits.sdr_aux_backlight_control == 1 ||
++	     * caps->ext_caps->bits.hdr_aux_backlight_control == 1
++	     */)
+ 		caps->aux_support = true;
+ 
+ 	if (amdgpu_backlight == 0)
+@@ -3231,86 +3236,6 @@ static void handle_hpd_irq(void *param)
+ 
+ }
+ 
+-static void dm_handle_mst_sideband_msg(struct amdgpu_dm_connector *aconnector)
+-{
+-	u8 esi[DP_PSR_ERROR_STATUS - DP_SINK_COUNT_ESI] = { 0 };
+-	u8 dret;
+-	bool new_irq_handled = false;
+-	int dpcd_addr;
+-	int dpcd_bytes_to_read;
+-
+-	const int max_process_count = 30;
+-	int process_count = 0;
+-
+-	const struct dc_link_status *link_status = dc_link_get_status(aconnector->dc_link);
+-
+-	if (link_status->dpcd_caps->dpcd_rev.raw < 0x12) {
+-		dpcd_bytes_to_read = DP_LANE0_1_STATUS - DP_SINK_COUNT;
+-		/* DPCD 0x200 - 0x201 for downstream IRQ */
+-		dpcd_addr = DP_SINK_COUNT;
+-	} else {
+-		dpcd_bytes_to_read = DP_PSR_ERROR_STATUS - DP_SINK_COUNT_ESI;
+-		/* DPCD 0x2002 - 0x2005 for downstream IRQ */
+-		dpcd_addr = DP_SINK_COUNT_ESI;
+-	}
+-
+-	dret = drm_dp_dpcd_read(
+-		&aconnector->dm_dp_aux.aux,
+-		dpcd_addr,
+-		esi,
+-		dpcd_bytes_to_read);
+-
+-	while (dret == dpcd_bytes_to_read &&
+-		process_count < max_process_count) {
+-		u8 ack[DP_PSR_ERROR_STATUS - DP_SINK_COUNT_ESI] = {};
+-		u8 retry;
+-		dret = 0;
+-
+-		process_count++;
+-
+-		DRM_DEBUG_DRIVER("ESI %02x %02x %02x\n", esi[0], esi[1], esi[2]);
+-		/* handle HPD short pulse irq */
+-		if (aconnector->mst_mgr.mst_state)
+-			drm_dp_mst_hpd_irq_handle_event(&aconnector->mst_mgr,
+-							esi,
+-							ack,
+-							&new_irq_handled);
+-
+-		if (new_irq_handled) {
+-			/* ACK at DPCD to notify down stream */
+-			for (retry = 0; retry < 3; retry++) {
+-				ssize_t wret;
+-
+-				wret = drm_dp_dpcd_writeb(&aconnector->dm_dp_aux.aux,
+-							  dpcd_addr + 1,
+-							  ack[1]);
+-				if (wret == 1)
+-					break;
+-			}
+-
+-			if (retry == 3) {
+-				DRM_ERROR("Failed to ack MST event.\n");
+-				return;
+-			}
+-
+-			drm_dp_mst_hpd_irq_send_new_request(&aconnector->mst_mgr);
+-			/* check if there is new irq to be handled */
+-			dret = drm_dp_dpcd_read(
+-				&aconnector->dm_dp_aux.aux,
+-				dpcd_addr,
+-				esi,
+-				dpcd_bytes_to_read);
+-
+-			new_irq_handled = false;
+-		} else {
+-			break;
+-		}
+-	}
+-
+-	if (process_count == max_process_count)
+-		DRM_DEBUG_DRIVER("Loop exceeded max iterations\n");
+-}
+-
+ static void schedule_hpd_rx_offload_work(struct hpd_rx_irq_offload_work_queue *offload_wq,
+ 							union hpd_irq_data hpd_irq_data)
+ {
+@@ -3372,7 +3297,23 @@ static void handle_hpd_rx_irq(void *param)
+ 	if (dc_link_dp_allow_hpd_rx_irq(dc_link)) {
+ 		if (hpd_irq_data.bytes.device_service_irq.bits.UP_REQ_MSG_RDY ||
+ 			hpd_irq_data.bytes.device_service_irq.bits.DOWN_REP_MSG_RDY) {
+-			dm_handle_mst_sideband_msg(aconnector);
++			bool skip = false;
++
++			/*
++			 * DOWN_REP_MSG_RDY is also handled by polling method
++			 * mgr->cbs->poll_hpd_irq()
++			 */
++			spin_lock(&offload_wq->offload_lock);
++			skip = offload_wq->is_handling_mst_msg_rdy_event;
++
++			if (!skip)
++				offload_wq->is_handling_mst_msg_rdy_event = true;
++
++			spin_unlock(&offload_wq->offload_lock);
++
++			if (!skip)
++				schedule_hpd_rx_offload_work(offload_wq, hpd_irq_data);
++
+ 			goto out;
+ 		}
+ 
+@@ -3463,7 +3404,7 @@ static void register_hpd_handlers(struct amdgpu_device *adev)
+ 		aconnector = to_amdgpu_dm_connector(connector);
+ 		dc_link = aconnector->dc_link;
+ 
+-		if (DC_IRQ_SOURCE_INVALID != dc_link->irq_source_hpd) {
++		if (dc_link->irq_source_hpd != DC_IRQ_SOURCE_INVALID) {
+ 			int_params.int_context = INTERRUPT_LOW_IRQ_CONTEXT;
+ 			int_params.irq_source = dc_link->irq_source_hpd;
+ 
+@@ -3472,7 +3413,7 @@ static void register_hpd_handlers(struct amdgpu_device *adev)
+ 					(void *) aconnector);
+ 		}
+ 
+-		if (DC_IRQ_SOURCE_INVALID != dc_link->irq_source_hpd_rx) {
++		if (dc_link->irq_source_hpd_rx != DC_IRQ_SOURCE_INVALID) {
+ 
+ 			/* Also register for DP short pulse (hpd_rx). */
+ 			int_params.int_context = INTERRUPT_LOW_IRQ_CONTEXT;
+@@ -3481,11 +3422,11 @@ static void register_hpd_handlers(struct amdgpu_device *adev)
+ 			amdgpu_dm_irq_register_interrupt(adev, &int_params,
+ 					handle_hpd_rx_irq,
+ 					(void *) aconnector);
+-
+-			if (adev->dm.hpd_rx_offload_wq)
+-				adev->dm.hpd_rx_offload_wq[dc_link->link_index].aconnector =
+-					aconnector;
+ 		}
++
++		if (adev->dm.hpd_rx_offload_wq)
++			adev->dm.hpd_rx_offload_wq[connector->index].aconnector =
++				aconnector;
+ 	}
+ }
+ 
+@@ -3498,7 +3439,7 @@ static int dce60_register_irq_handlers(struct amdgpu_device *adev)
+ 	struct dc_interrupt_params int_params = {0};
+ 	int r;
+ 	int i;
+-	unsigned client_id = AMDGPU_IRQ_CLIENTID_LEGACY;
++	unsigned int client_id = AMDGPU_IRQ_CLIENTID_LEGACY;
+ 
+ 	int_params.requested_polarity = INTERRUPT_POLARITY_DEFAULT;
+ 	int_params.current_polarity = INTERRUPT_POLARITY_DEFAULT;
+@@ -3512,11 +3453,12 @@ static int dce60_register_irq_handlers(struct amdgpu_device *adev)
+ 	 *    Base driver will call amdgpu_dm_irq_handler() for ALL interrupts
+ 	 *    coming from DC hardware.
+ 	 *    amdgpu_dm_irq_handler() will re-direct the interrupt to DC
+-	 *    for acknowledging and handling. */
++	 *    for acknowledging and handling.
++	 */
+ 
+ 	/* Use VBLANK interrupt */
+ 	for (i = 0; i < adev->mode_info.num_crtc; i++) {
+-		r = amdgpu_irq_add_id(adev, client_id, i+1 , &adev->crtc_irq);
++		r = amdgpu_irq_add_id(adev, client_id, i + 1, &adev->crtc_irq);
+ 		if (r) {
+ 			DRM_ERROR("Failed to add crtc irq id!\n");
+ 			return r;
+@@ -3524,7 +3466,7 @@ static int dce60_register_irq_handlers(struct amdgpu_device *adev)
+ 
+ 		int_params.int_context = INTERRUPT_HIGH_IRQ_CONTEXT;
+ 		int_params.irq_source =
+-			dc_interrupt_to_irq_source(dc, i+1 , 0);
++			dc_interrupt_to_irq_source(dc, i + 1, 0);
+ 
+ 		c_irq_params = &adev->dm.vblank_params[int_params.irq_source - DC_IRQ_SOURCE_VBLANK1];
+ 
+@@ -3580,7 +3522,7 @@ static int dce110_register_irq_handlers(struct amdgpu_device *adev)
+ 	struct dc_interrupt_params int_params = {0};
+ 	int r;
+ 	int i;
+-	unsigned client_id = AMDGPU_IRQ_CLIENTID_LEGACY;
++	unsigned int client_id = AMDGPU_IRQ_CLIENTID_LEGACY;
+ 
+ 	if (adev->family >= AMDGPU_FAMILY_AI)
+ 		client_id = SOC15_IH_CLIENTID_DCE;
+@@ -3597,7 +3539,8 @@ static int dce110_register_irq_handlers(struct amdgpu_device *adev)
+ 	 *    Base driver will call amdgpu_dm_irq_handler() for ALL interrupts
+ 	 *    coming from DC hardware.
+ 	 *    amdgpu_dm_irq_handler() will re-direct the interrupt to DC
+-	 *    for acknowledging and handling. */
++	 *    for acknowledging and handling.
++	 */
+ 
+ 	/* Use VBLANK interrupt */
+ 	for (i = VISLANDS30_IV_SRCID_D1_VERTICAL_INTERRUPT0; i <= VISLANDS30_IV_SRCID_D6_VERTICAL_INTERRUPT0; i++) {
+@@ -4044,7 +3987,7 @@ static void amdgpu_dm_update_backlight_caps(struct amdgpu_display_manager *dm,
+ }
+ 
+ static int get_brightness_range(const struct amdgpu_dm_backlight_caps *caps,
+-				unsigned *min, unsigned *max)
++				unsigned int *min, unsigned int *max)
+ {
+ 	if (!caps)
+ 		return 0;
+@@ -4064,7 +4007,7 @@ static int get_brightness_range(const struct amdgpu_dm_backlight_caps *caps,
+ static u32 convert_brightness_from_user(const struct amdgpu_dm_backlight_caps *caps,
+ 					uint32_t brightness)
+ {
+-	unsigned min, max;
++	unsigned int min, max;
+ 
+ 	if (!get_brightness_range(caps, &min, &max))
+ 		return brightness;
+@@ -4077,7 +4020,7 @@ static u32 convert_brightness_from_user(const struct amdgpu_dm_backlight_caps *c
+ static u32 convert_brightness_to_user(const struct amdgpu_dm_backlight_caps *caps,
+ 				      uint32_t brightness)
+ {
+-	unsigned min, max;
++	unsigned int min, max;
+ 
+ 	if (!get_brightness_range(caps, &min, &max))
+ 		return brightness;
+@@ -4557,7 +4500,6 @@ fail:
+ static void amdgpu_dm_destroy_drm_device(struct amdgpu_display_manager *dm)
+ {
+ 	drm_atomic_private_obj_fini(&dm->atomic_obj);
+-	return;
+ }
+ 
+ /******************************************************************************
+@@ -5375,6 +5317,7 @@ static bool adjust_colour_depth_from_display_info(
+ {
+ 	enum dc_color_depth depth = timing_out->display_color_depth;
+ 	int normalized_clk;
++
+ 	do {
+ 		normalized_clk = timing_out->pix_clk_100hz / 10;
+ 		/* YCbCr 4:2:0 requires additional adjustment of 1/2 */
+@@ -5590,6 +5533,7 @@ create_fake_sink(struct amdgpu_dm_connector *aconnector)
+ {
+ 	struct dc_sink_init_data sink_init_data = { 0 };
+ 	struct dc_sink *sink = NULL;
++
+ 	sink_init_data.link = aconnector->dc_link;
+ 	sink_init_data.sink_signal = aconnector->dc_link->connector_signal;
+ 
+@@ -5713,7 +5657,7 @@ get_highest_refresh_rate_mode(struct amdgpu_dm_connector *aconnector,
+ 		return &aconnector->freesync_vid_base;
+ 
+ 	/* Find the preferred mode */
+-	list_for_each_entry (m, list_head, head) {
++	list_for_each_entry(m, list_head, head) {
+ 		if (m->type & DRM_MODE_TYPE_PREFERRED) {
+ 			m_pref = m;
+ 			break;
+@@ -5737,7 +5681,7 @@ get_highest_refresh_rate_mode(struct amdgpu_dm_connector *aconnector,
+ 	 * For some monitors, preferred mode is not the mode with highest
+ 	 * supported refresh rate.
+ 	 */
+-	list_for_each_entry (m, list_head, head) {
++	list_for_each_entry(m, list_head, head) {
+ 		current_refresh  = drm_mode_vrefresh(m);
+ 
+ 		if (m->hdisplay == m_pref->hdisplay &&
+@@ -6010,7 +5954,7 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ 		 * This may not be an error, the use case is when we have no
+ 		 * usermode calls to reset and set mode upon hotplug. In this
+ 		 * case, we call set mode ourselves to restore the previous mode
+-		 * and the modelist may not be filled in in time.
++		 * and the modelist may not be filled in time.
+ 		 */
+ 		DRM_DEBUG_DRIVER("No preferred mode found\n");
+ 	} else {
+@@ -6034,9 +5978,9 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ 		drm_mode_set_crtcinfo(&mode, 0);
+ 
+ 	/*
+-	* If scaling is enabled and refresh rate didn't change
+-	* we copy the vic and polarities of the old timings
+-	*/
++	 * If scaling is enabled and refresh rate didn't change
++	 * we copy the vic and polarities of the old timings
++	 */
+ 	if (!scale || mode_refresh != preferred_refresh)
+ 		fill_stream_properties_from_drm_display_mode(
+ 			stream, &mode, &aconnector->base, con_state, NULL,
+@@ -6756,6 +6700,7 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
+ 
+ 	if (!state->duplicated) {
+ 		int max_bpc = conn_state->max_requested_bpc;
++
+ 		is_y420 = drm_mode_is_420_also(&connector->display_info, adjusted_mode) &&
+ 			  aconnector->force_yuv420_output;
+ 		color_depth = convert_color_depth_from_display_info(connector,
+@@ -7074,7 +7019,7 @@ static bool is_duplicate_mode(struct amdgpu_dm_connector *aconnector,
+ {
+ 	struct drm_display_mode *m;
+ 
+-	list_for_each_entry (m, &aconnector->base.probed_modes, head) {
++	list_for_each_entry(m, &aconnector->base.probed_modes, head) {
+ 		if (drm_mode_equal(m, mode))
+ 			return true;
+ 	}
+@@ -7192,13 +7137,7 @@ static int amdgpu_dm_connector_get_modes(struct drm_connector *connector)
+ 				drm_add_modes_noedid(connector, 1920, 1080);
+ 	} else {
+ 		amdgpu_dm_connector_ddc_get_modes(connector, edid);
+-		/* most eDP supports only timings from its edid,
+-		 * usually only detailed timings are available
+-		 * from eDP edid. timings which are not from edid
+-		 * may damage eDP
+-		 */
+-		if (connector->connector_type != DRM_MODE_CONNECTOR_eDP)
+-			amdgpu_dm_connector_add_common_modes(encoder, connector);
++		amdgpu_dm_connector_add_common_modes(encoder, connector);
+ 		amdgpu_dm_connector_add_freesync_modes(connector, edid);
+ 	}
+ 	amdgpu_dm_fbc_init(connector);
+@@ -7234,6 +7173,7 @@ void amdgpu_dm_connector_init_helper(struct amdgpu_display_manager *dm,
+ 	aconnector->as_type = ADAPTIVE_SYNC_TYPE_NONE;
+ 	memset(&aconnector->vsdb_info, 0, sizeof(aconnector->vsdb_info));
+ 	mutex_init(&aconnector->hpd_lock);
++	mutex_init(&aconnector->handle_mst_msg_ready);
+ 
+ 	/*
+ 	 * configure support HPD hot plug connector_>polled default value is 0
+@@ -7384,7 +7324,6 @@ static int amdgpu_dm_connector_init(struct amdgpu_display_manager *dm,
+ 
+ 	link->priv = aconnector;
+ 
+-	DRM_DEBUG_DRIVER("%s()\n", __func__);
+ 
+ 	i2c = create_i2c(link->ddc, link->link_index, &res);
+ 	if (!i2c) {
+@@ -8055,7 +7994,15 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ 		 * Only allow immediate flips for fast updates that don't
+ 		 * change memory domain, FB pitch, DCC state, rotation or
+ 		 * mirroring.
++		 *
++		 * dm_crtc_helper_atomic_check() only accepts async flips with
++		 * fast updates.
+ 		 */
++		if (crtc->state->async_flip &&
++		    acrtc_state->update_type != UPDATE_TYPE_FAST)
++			drm_warn_once(state->dev,
++				      "[PLANE:%d:%s] async flip with non-fast update\n",
++				      plane->base.id, plane->name);
+ 		bundle->flip_addrs[planes_count].flip_immediate =
+ 			crtc->state->async_flip &&
+ 			acrtc_state->update_type == UPDATE_TYPE_FAST &&
+@@ -8098,8 +8045,7 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ 			 * DRI3/Present extension with defined target_msc.
+ 			 */
+ 			last_flip_vblank = amdgpu_get_vblank_counter_kms(pcrtc);
+-		}
+-		else {
++		} else {
+ 			/* For variable refresh rate mode only:
+ 			 * Get vblank of last completed flip to avoid > 1 vrr
+ 			 * flips per video frame by use of throttling, but allow
+@@ -8432,8 +8378,8 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ 		dc_resource_state_copy_construct_current(dm->dc, dc_state);
+ 	}
+ 
+-	for_each_oldnew_crtc_in_state (state, crtc, old_crtc_state,
+-				       new_crtc_state, i) {
++	for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state,
++				      new_crtc_state, i) {
+ 		struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc);
+ 
+ 		dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
+@@ -8456,9 +8402,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ 		dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
+ 
+ 		drm_dbg_state(state->dev,
+-			"amdgpu_crtc id:%d crtc_state_flags: enable:%d, active:%d, "
+-			"planes_changed:%d, mode_changed:%d,active_changed:%d,"
+-			"connectors_changed:%d\n",
++			"amdgpu_crtc id:%d crtc_state_flags: enable:%d, active:%d, planes_changed:%d, mode_changed:%d,active_changed:%d,connectors_changed:%d\n",
+ 			acrtc->crtc_id,
+ 			new_crtc_state->enable,
+ 			new_crtc_state->active,
+@@ -9027,8 +8971,8 @@ static int do_aquire_global_lock(struct drm_device *dev,
+ 					&commit->flip_done, 10*HZ);
+ 
+ 		if (ret == 0)
+-			DRM_ERROR("[CRTC:%d:%s] hw_done or flip_done "
+-				  "timed out\n", crtc->base.id, crtc->name);
++			DRM_ERROR("[CRTC:%d:%s] hw_done or flip_done timed out\n",
++				  crtc->base.id, crtc->name);
+ 
+ 		drm_crtc_commit_put(commit);
+ 	}
+@@ -9113,7 +9057,8 @@ is_timing_unchanged_for_freesync(struct drm_crtc_state *old_crtc_state,
+ 	return false;
+ }
+ 
+-static void set_freesync_fixed_config(struct dm_crtc_state *dm_new_crtc_state) {
++static void set_freesync_fixed_config(struct dm_crtc_state *dm_new_crtc_state)
++{
+ 	u64 num, den, res;
+ 	struct drm_crtc_state *new_crtc_state = &dm_new_crtc_state->base;
+ 
+@@ -9236,9 +9181,7 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
+ 		goto skip_modeset;
+ 
+ 	drm_dbg_state(state->dev,
+-		"amdgpu_crtc id:%d crtc_state_flags: enable:%d, active:%d, "
+-		"planes_changed:%d, mode_changed:%d,active_changed:%d,"
+-		"connectors_changed:%d\n",
++		"amdgpu_crtc id:%d crtc_state_flags: enable:%d, active:%d, planes_changed:%d, mode_changed:%d,active_changed:%d,connectors_changed:%d\n",
+ 		acrtc->crtc_id,
+ 		new_crtc_state->enable,
+ 		new_crtc_state->active,
+@@ -9267,8 +9210,7 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
+ 						     old_crtc_state)) {
+ 			new_crtc_state->mode_changed = false;
+ 			DRM_DEBUG_DRIVER(
+-				"Mode change not required for front porch change, "
+-				"setting mode_changed to %d",
++				"Mode change not required for front porch change, setting mode_changed to %d",
+ 				new_crtc_state->mode_changed);
+ 
+ 			set_freesync_fixed_config(dm_new_crtc_state);
+@@ -9280,9 +9222,8 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
+ 			struct drm_display_mode *high_mode;
+ 
+ 			high_mode = get_highest_refresh_rate_mode(aconnector, false);
+-			if (!drm_mode_equal(&new_crtc_state->mode, high_mode)) {
++			if (!drm_mode_equal(&new_crtc_state->mode, high_mode))
+ 				set_freesync_fixed_config(dm_new_crtc_state);
+-			}
+ 		}
+ 
+ 		ret = dm_atomic_get_state(state, &dm_state);
+@@ -9450,6 +9391,7 @@ static bool should_reset_plane(struct drm_atomic_state *state,
+ 	 */
+ 	for_each_oldnew_plane_in_state(state, other, old_other_state, new_other_state, i) {
+ 		struct amdgpu_framebuffer *old_afb, *new_afb;
++
+ 		if (other->type == DRM_PLANE_TYPE_CURSOR)
+ 			continue;
+ 
+@@ -9548,11 +9490,12 @@ static int dm_check_cursor_fb(struct amdgpu_crtc *new_acrtc,
+ 	}
+ 
+ 	/* Core DRM takes care of checking FB modifiers, so we only need to
+-	 * check tiling flags when the FB doesn't have a modifier. */
++	 * check tiling flags when the FB doesn't have a modifier.
++	 */
+ 	if (!(fb->flags & DRM_MODE_FB_MODIFIERS)) {
+ 		if (adev->family < AMDGPU_FAMILY_AI) {
+ 			linear = AMDGPU_TILING_GET(afb->tiling_flags, ARRAY_MODE) != DC_ARRAY_2D_TILED_THIN1 &&
+-			         AMDGPU_TILING_GET(afb->tiling_flags, ARRAY_MODE) != DC_ARRAY_1D_TILED_THIN1 &&
++				 AMDGPU_TILING_GET(afb->tiling_flags, ARRAY_MODE) != DC_ARRAY_1D_TILED_THIN1 &&
+ 				 AMDGPU_TILING_GET(afb->tiling_flags, MICRO_TILE_MODE) == 0;
+ 		} else {
+ 			linear = AMDGPU_TILING_GET(afb->tiling_flags, SWIZZLE_MODE) == 0;
+@@ -9774,12 +9717,12 @@ static int dm_check_crtc_cursor(struct drm_atomic_state *state,
+ 	/* On DCE and DCN there is no dedicated hardware cursor plane. We get a
+ 	 * cursor per pipe but it's going to inherit the scaling and
+ 	 * positioning from the underlying pipe. Check the cursor plane's
+-	 * blending properties match the underlying planes'. */
++	 * blending properties match the underlying planes'.
++	 */
+ 
+ 	new_cursor_state = drm_atomic_get_new_plane_state(state, cursor);
+-	if (!new_cursor_state || !new_cursor_state->fb) {
++	if (!new_cursor_state || !new_cursor_state->fb)
+ 		return 0;
+-	}
+ 
+ 	dm_get_oriented_plane_size(new_cursor_state, &cursor_src_w, &cursor_src_h);
+ 	cursor_scale_w = new_cursor_state->crtc_w * 1000 / cursor_src_w;
+@@ -9824,6 +9767,7 @@ static int add_affected_mst_dsc_crtcs(struct drm_atomic_state *state, struct drm
+ 	struct drm_connector_state *conn_state, *old_conn_state;
+ 	struct amdgpu_dm_connector *aconnector = NULL;
+ 	int i;
++
+ 	for_each_oldnew_connector_in_state(state, connector, old_conn_state, conn_state, i) {
+ 		if (!conn_state->crtc)
+ 			conn_state = old_conn_state;
+@@ -10258,7 +10202,7 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ 	}
+ 
+ 	/* Store the overall update type for use later in atomic check. */
+-	for_each_new_crtc_in_state (state, crtc, new_crtc_state, i) {
++	for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
+ 		struct dm_crtc_state *dm_new_crtc_state =
+ 			to_dm_crtc_state(new_crtc_state);
+ 
+@@ -10280,7 +10224,7 @@ fail:
+ 	else if (ret == -EINTR || ret == -EAGAIN || ret == -ERESTARTSYS)
+ 		DRM_DEBUG_DRIVER("Atomic check stopped due to signal.\n");
+ 	else
+-		DRM_DEBUG_DRIVER("Atomic check failed with err: %d \n", ret);
++		DRM_DEBUG_DRIVER("Atomic check failed with err: %d\n", ret);
+ 
+ 	trace_amdgpu_dm_atomic_check_finish(state, ret);
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index 2e2413fd73a4f..b91b902ab3c81 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -194,6 +194,11 @@ struct hpd_rx_irq_offload_work_queue {
+ 	 * we're handling link loss
+ 	 */
+ 	bool is_handling_link_loss;
++	/**
++	 * @is_handling_mst_msg_rdy_event: Used to prevent inserting mst message
++	 * ready event when we're already handling mst message ready event
++	 */
++	bool is_handling_mst_msg_rdy_event;
+ 	/**
+ 	 * @aconnector: The aconnector that this work queue is attached to
+ 	 */
+@@ -638,6 +643,8 @@ struct amdgpu_dm_connector {
+ 	struct drm_dp_mst_port *mst_output_port;
+ 	struct amdgpu_dm_connector *mst_root;
+ 	struct drm_dp_aux *dsc_aux;
++	struct mutex handle_mst_msg_ready;
++
+ 	/* TODO see if we can merge with ddc_bus or make a dm_connector */
+ 	struct amdgpu_i2c_adapter *i2c;
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+index 440fc0869a34b..30d4c6fd95f53 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+@@ -398,6 +398,18 @@ static int dm_crtc_helper_atomic_check(struct drm_crtc *crtc,
+ 		return -EINVAL;
+ 	}
+ 
++	/*
++	 * Only allow async flips for fast updates that don't change the FB
++	 * pitch, the DCC state, rotation, etc.
++	 */
++	if (crtc_state->async_flip &&
++	    dm_crtc_state->update_type != UPDATE_TYPE_FAST) {
++		drm_dbg_atomic(crtc->dev,
++			       "[CRTC:%d:%s] async flips are only supported for fast updates\n",
++			       crtc->base.id, crtc->name);
++		return -EINVAL;
++	}
++
+ 	/* In some use cases, like reset, no stream is attached */
+ 	if (!dm_crtc_state->stream)
+ 		return 0;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 46d0a8f57e552..888e80f498e97 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -619,8 +619,118 @@ dm_dp_add_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
+ 	return connector;
+ }
+ 
++void dm_handle_mst_sideband_msg_ready_event(
++	struct drm_dp_mst_topology_mgr *mgr,
++	enum mst_msg_ready_type msg_rdy_type)
++{
++	uint8_t esi[DP_PSR_ERROR_STATUS - DP_SINK_COUNT_ESI] = { 0 };
++	uint8_t dret;
++	bool new_irq_handled = false;
++	int dpcd_addr;
++	uint8_t dpcd_bytes_to_read;
++	const uint8_t max_process_count = 30;
++	uint8_t process_count = 0;
++	u8 retry;
++	struct amdgpu_dm_connector *aconnector =
++			container_of(mgr, struct amdgpu_dm_connector, mst_mgr);
++
++
++	const struct dc_link_status *link_status = dc_link_get_status(aconnector->dc_link);
++
++	if (link_status->dpcd_caps->dpcd_rev.raw < 0x12) {
++		dpcd_bytes_to_read = DP_LANE0_1_STATUS - DP_SINK_COUNT;
++		/* DPCD 0x200 - 0x201 for downstream IRQ */
++		dpcd_addr = DP_SINK_COUNT;
++	} else {
++		dpcd_bytes_to_read = DP_PSR_ERROR_STATUS - DP_SINK_COUNT_ESI;
++		/* DPCD 0x2002 - 0x2005 for downstream IRQ */
++		dpcd_addr = DP_SINK_COUNT_ESI;
++	}
++
++	mutex_lock(&aconnector->handle_mst_msg_ready);
++
++	while (process_count < max_process_count) {
++		u8 ack[DP_PSR_ERROR_STATUS - DP_SINK_COUNT_ESI] = {};
++
++		process_count++;
++
++		dret = drm_dp_dpcd_read(
++			&aconnector->dm_dp_aux.aux,
++			dpcd_addr,
++			esi,
++			dpcd_bytes_to_read);
++
++		if (dret != dpcd_bytes_to_read) {
++			DRM_DEBUG_KMS("DPCD read and acked number is not as expected!");
++			break;
++		}
++
++		DRM_DEBUG_DRIVER("ESI %02x %02x %02x\n", esi[0], esi[1], esi[2]);
++
++		switch (msg_rdy_type) {
++		case DOWN_REP_MSG_RDY_EVENT:
++			/* Only handle DOWN_REP_MSG_RDY case */
++			esi[1] &= DP_DOWN_REP_MSG_RDY;
++			break;
++		case UP_REQ_MSG_RDY_EVENT:
++			/* Only handle UP_REQ_MSG_RDY case */
++			esi[1] &= DP_UP_REQ_MSG_RDY;
++			break;
++		default:
++			/* Handle both cases */
++			esi[1] &= (DP_DOWN_REP_MSG_RDY | DP_UP_REQ_MSG_RDY);
++			break;
++		}
++
++		if (!esi[1])
++			break;
++
++		/* handle MST irq */
++		if (aconnector->mst_mgr.mst_state)
++			drm_dp_mst_hpd_irq_handle_event(&aconnector->mst_mgr,
++						 esi,
++						 ack,
++						 &new_irq_handled);
++
++		if (new_irq_handled) {
++			/* ACK at DPCD to notify down stream */
++			for (retry = 0; retry < 3; retry++) {
++				ssize_t wret;
++
++				wret = drm_dp_dpcd_writeb(&aconnector->dm_dp_aux.aux,
++							  dpcd_addr + 1,
++							  ack[1]);
++				if (wret == 1)
++					break;
++			}
++
++			if (retry == 3) {
++				DRM_ERROR("Failed to ack MST event.\n");
++				return;
++			}
++
++			drm_dp_mst_hpd_irq_send_new_request(&aconnector->mst_mgr);
++
++			new_irq_handled = false;
++		} else {
++			break;
++		}
++	}
++
++	mutex_unlock(&aconnector->handle_mst_msg_ready);
++
++	if (process_count == max_process_count)
++		DRM_DEBUG_DRIVER("Loop exceeded max iterations\n");
++}
++
++static void dm_handle_mst_down_rep_msg_ready(struct drm_dp_mst_topology_mgr *mgr)
++{
++	dm_handle_mst_sideband_msg_ready_event(mgr, DOWN_REP_MSG_RDY_EVENT);
++}
++
+ static const struct drm_dp_mst_topology_cbs dm_mst_cbs = {
+ 	.add_connector = dm_dp_add_mst_connector,
++	.poll_hpd_irq = dm_handle_mst_down_rep_msg_ready,
+ };
+ 
+ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
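
[Editorial note, not part of the patch: the new dm_handle_mst_sideband_msg_ready_event() above serializes ESI handling under a per-connector mutex and bounds the loop at 30 passes. A minimal userspace sketch of that read -> mask -> ack -> retry shape follows; read_esi()/ack_esi() are hypothetical stand-ins for the DPCD transfers, and only the control flow mirrors the driver.]

#include <stdio.h>

#define DOWN_REP_RDY 0x1
#define UP_REQ_RDY   0x2

static unsigned pending = DOWN_REP_RDY | UP_REQ_RDY; /* fake device state */

static unsigned read_esi(void) { return pending; }
static void ack_esi(unsigned bits) { pending &= ~bits; }

static void handle_msg_ready(unsigned wanted_mask)
{
	const int max_iterations = 30;	/* same bound as the driver */
	int i;

	for (i = 0; i < max_iterations; i++) {
		unsigned esi = read_esi() & wanted_mask;

		if (!esi)
			break;		/* nothing left of interest */

		printf("handling esi 0x%x\n", esi);
		ack_esi(esi);		/* ack only what we handled */
	}
	if (i == max_iterations)
		fprintf(stderr, "loop exceeded max iterations\n");
}

int main(void)
{
	handle_msg_ready(DOWN_REP_RDY);	/* DOWN_REP_MSG_RDY_EVENT case */
	handle_msg_ready(UP_REQ_RDY);	/* UP_REQ_MSG_RDY_EVENT case */
	return 0;
}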
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
+index 1e4ede1e57abd..37c820ab0fdbc 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
+@@ -49,6 +49,13 @@
+ #define PBN_FEC_OVERHEAD_MULTIPLIER_8B_10B	1031
+ #define PBN_FEC_OVERHEAD_MULTIPLIER_128B_132B	1000
+ 
++enum mst_msg_ready_type {
++	NONE_MSG_RDY_EVENT = 0,
++	DOWN_REP_MSG_RDY_EVENT = 1,
++	UP_REQ_MSG_RDY_EVENT = 2,
++	DOWN_OR_UP_MSG_RDY_EVENT = 3
++};
++
+ struct amdgpu_display_manager;
+ struct amdgpu_dm_connector;
+ 
+@@ -61,6 +68,10 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
+ void
+ dm_dp_create_fake_mst_encoders(struct amdgpu_device *adev);
+ 
++void dm_handle_mst_sideband_msg_ready_event(
++	struct drm_dp_mst_topology_mgr *mgr,
++	enum mst_msg_ready_type msg_rdy_type);
++
+ struct dsc_mst_fairness_vars {
+ 	int pbn;
+ 	bool dsc_enabled;
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
+index f9e2e0c3095e7..b686efa43c347 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
+@@ -87,6 +87,11 @@ static int dcn31_get_active_display_cnt_wa(
+ 				stream->signal == SIGNAL_TYPE_DVI_SINGLE_LINK ||
+ 				stream->signal == SIGNAL_TYPE_DVI_DUAL_LINK)
+ 			tmds_present = true;
++
++		/* Check stream/link detection to ensure the PHY is active */
++		if (dc_is_dp_signal(stream->signal) && !stream->dpms_off)
++			display_count++;
++
+ 	}
+ 
+ 	for (i = 0; i < dc->link_count; i++) {
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index 1c3b6f25a7825..6f56a35c08571 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -3309,7 +3309,8 @@ void dcn10_wait_for_mpcc_disconnect(
+ 		if (pipe_ctx->stream_res.opp->mpcc_disconnect_pending[mpcc_inst]) {
+ 			struct hubp *hubp = get_hubp_by_inst(res_pool, mpcc_inst);
+ 
+-			if (pipe_ctx->stream_res.tg->funcs->is_tg_enabled(pipe_ctx->stream_res.tg))
++			if (pipe_ctx->stream_res.tg &&
++				pipe_ctx->stream_res.tg->funcs->is_tg_enabled(pipe_ctx->stream_res.tg))
+ 				res_pool->mpc->funcs->wait_for_idle(res_pool->mpc, mpcc_inst);
+ 			pipe_ctx->stream_res.opp->mpcc_disconnect_pending[mpcc_inst] = false;
+ 			hubp->funcs->set_blank(hubp, true);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c b/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
+index 7f72ef882ca41..21eea8d7bf7f4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
+@@ -65,7 +65,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ 		.timing_trace = false,
+ 		.clock_trace = true,
+ 		.disable_pplib_clock_request = true,
+-		.pipe_split_policy = MPC_SPLIT_DYNAMIC,
++		.pipe_split_policy = MPC_SPLIT_AVOID,
+ 		.force_single_disp_pipe_split = false,
+ 		.disable_dcc = DCC_ENABLE,
+ 		.vsr_support = true,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index 8fe2e1716da44..e22fc563b462f 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -1927,12 +1927,16 @@ static int sienna_cichlid_read_sensor(struct smu_context *smu,
+ 		*size = 4;
+ 		break;
+ 	case AMDGPU_PP_SENSOR_GFX_MCLK:
+-		ret = sienna_cichlid_get_current_clk_freq_by_table(smu, SMU_UCLK, (uint32_t *)data);
++		ret = sienna_cichlid_get_smu_metrics_data(smu,
++							  METRICS_CURR_UCLK,
++							  (uint32_t *)data);
+ 		*(uint32_t *)data *= 100;
+ 		*size = 4;
+ 		break;
+ 	case AMDGPU_PP_SENSOR_GFX_SCLK:
+-		ret = sienna_cichlid_get_current_clk_freq_by_table(smu, SMU_GFXCLK, (uint32_t *)data);
++		ret = sienna_cichlid_get_smu_metrics_data(smu,
++							  METRICS_AVERAGE_GFXCLK,
++							  (uint32_t *)data);
+ 		*(uint32_t *)data *= 100;
+ 		*size = 4;
+ 		break;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index aac72925db34a..f53a09b02c38a 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -940,7 +940,7 @@ static int smu_v13_0_7_read_sensor(struct smu_context *smu,
+ 		break;
+ 	case AMDGPU_PP_SENSOR_GFX_MCLK:
+ 		ret = smu_v13_0_7_get_smu_metrics_data(smu,
+-						       METRICS_AVERAGE_UCLK,
++						       METRICS_CURR_UCLK,
+ 						       (uint32_t *)data);
+ 		*(uint32_t *)data *= 100;
+ 		*size = 4;
+diff --git a/drivers/gpu/drm/drm_client_modeset.c b/drivers/gpu/drm/drm_client_modeset.c
+index 1b12a3c201a3c..871e4e2129d6d 100644
+--- a/drivers/gpu/drm/drm_client_modeset.c
++++ b/drivers/gpu/drm/drm_client_modeset.c
+@@ -311,6 +311,9 @@ static bool drm_client_target_cloned(struct drm_device *dev,
+ 	can_clone = true;
+ 	dmt_mode = drm_mode_find_dmt(dev, 1024, 768, 60, false);
+ 
++	if (!dmt_mode)
++		goto fail;
++
+ 	for (i = 0; i < connector_count; i++) {
+ 		if (!enabled[i])
+ 			continue;
+@@ -326,11 +329,13 @@ static bool drm_client_target_cloned(struct drm_device *dev,
+ 		if (!modes[i])
+ 			can_clone = false;
+ 	}
++	kfree(dmt_mode);
+ 
+ 	if (can_clone) {
+ 		DRM_DEBUG_KMS("can clone using 1024x768\n");
+ 		return true;
+ 	}
++fail:
+ 	DRM_INFO("kms: can't enable cloning when we probably wanted to.\n");
+ 	return false;
+ }
+@@ -862,6 +867,7 @@ int drm_client_modeset_probe(struct drm_client_dev *client, unsigned int width,
+ 				break;
+ 			}
+ 
++			kfree(modeset->mode);
+ 			modeset->mode = drm_mode_duplicate(dev, mode);
+ 			drm_connector_get(connector);
+ 			modeset->connectors[modeset->num_connectors++] = connector;
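
[Editorial note, not part of the patch: the drm_client_modeset hunks plug two leaks. drm_mode_find_dmt() allocates a mode that was never freed, and a NULL result previously fell through to the success path; the second hunk also frees the old modeset->mode before storing a new duplicate. A minimal userspace analogue of the corrected allocate/check/use/free-on-every-path shape, with alloc_mode() as a hypothetical stand-in for drm_mode_find_dmt():]

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct mode { int w, h; };

static struct mode *alloc_mode(int w, int h)
{
	struct mode *m = malloc(sizeof(*m));

	if (m) {
		m->w = w;
		m->h = h;
	}
	return m;
}

static bool try_clone(void)
{
	struct mode *dmt = alloc_mode(1024, 768);

	if (!dmt)
		goto fail;		/* mirrors the new "goto fail" exit */

	/* ... compare every connector's mode list against *dmt ... */

	printf("can clone using %dx%d\n", dmt->w, dmt->h);
	free(dmt);			/* freed on the success path too */
	return true;
fail:
	fprintf(stderr, "can't enable cloning\n");
	return false;
}

int main(void) { return try_clone() ? 0 : 1; }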
+diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
+index 3035cba2c6a29..d7caae281fb92 100644
+--- a/drivers/gpu/drm/i915/i915_perf.c
++++ b/drivers/gpu/drm/i915/i915_perf.c
+@@ -4442,6 +4442,7 @@ static const struct i915_range mtl_oam_b_counters[] = {
+ static const struct i915_range xehp_oa_b_counters[] = {
+ 	{ .start = 0xdc48, .end = 0xdc48 },	/* OAA_ENABLE_REG */
+ 	{ .start = 0xdd00, .end = 0xdd48 },	/* OAG_LCE0_0 - OAA_LENABLE_REG */
++	{}
+ };
+ 
+ static const struct i915_range gen7_oa_mux_regs[] = {
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 42e1665ba11a3..1ecd3d63b1081 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -1873,6 +1873,8 @@ nv50_pior_destroy(struct drm_encoder *encoder)
+ 	nvif_outp_dtor(&nv_encoder->outp);
+ 
+ 	drm_encoder_cleanup(encoder);
++
++	mutex_destroy(&nv_encoder->dp.hpd_irq_lock);
+ 	kfree(encoder);
+ }
+ 
+@@ -1917,6 +1919,8 @@ nv50_pior_create(struct drm_connector *connector, struct dcb_output *dcbe)
+ 	nv_encoder->i2c = ddc;
+ 	nv_encoder->aux = aux;
+ 
++	mutex_init(&nv_encoder->dp.hpd_irq_lock);
++
+ 	encoder = to_drm_encoder(nv_encoder);
+ 	encoder->possible_crtcs = dcbe->heads;
+ 	encoder->possible_clones = 0;
+diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/i2c.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/i2c.h
+index 40a1065ae626e..ef441dfdea09f 100644
+--- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/i2c.h
++++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/i2c.h
+@@ -16,7 +16,7 @@ struct nvkm_i2c_bus {
+ 	const struct nvkm_i2c_bus_func *func;
+ 	struct nvkm_i2c_pad *pad;
+ #define NVKM_I2C_BUS_CCB(n) /* 'n' is ccb index */                           (n)
+-#define NVKM_I2C_BUS_EXT(n) /* 'n' is dcb external encoder type */ ((n) + 0x100)
++#define NVKM_I2C_BUS_EXT(n) /* 'n' is dcb external encoder type */  ((n) + 0x10)
+ #define NVKM_I2C_BUS_PRI /* ccb primary comm. port */                        -1
+ #define NVKM_I2C_BUS_SEC /* ccb secondary comm. port */                      -2
+ 	int id;
+@@ -38,7 +38,7 @@ struct nvkm_i2c_aux {
+ 	const struct nvkm_i2c_aux_func *func;
+ 	struct nvkm_i2c_pad *pad;
+ #define NVKM_I2C_AUX_CCB(n) /* 'n' is ccb index */                           (n)
+-#define NVKM_I2C_AUX_EXT(n) /* 'n' is dcb external encoder type */ ((n) + 0x100)
++#define NVKM_I2C_AUX_EXT(n) /* 'n' is dcb external encoder type */  ((n) + 0x10)
+ 	int id;
+ 
+ 	struct mutex mutex;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/uconn.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/uconn.c
+index dad942be6679c..46b057fe1412e 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/uconn.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/uconn.c
+@@ -81,20 +81,29 @@ nvkm_uconn_uevent(struct nvkm_object *object, void *argv, u32 argc, struct nvkm_
+ 		return -ENOSYS;
+ 
+ 	list_for_each_entry(outp, &conn->disp->outps, head) {
+-		if (outp->info.connector == conn->index && outp->dp.aux) {
+-			if (args->v0.types & NVIF_CONN_EVENT_V0_PLUG  ) bits |= NVKM_I2C_PLUG;
+-			if (args->v0.types & NVIF_CONN_EVENT_V0_UNPLUG) bits |= NVKM_I2C_UNPLUG;
+-			if (args->v0.types & NVIF_CONN_EVENT_V0_IRQ   ) bits |= NVKM_I2C_IRQ;
++		if (outp->info.connector == conn->index)
++			break;
++	}
+ 
+-			return nvkm_uevent_add(uevent, &device->i2c->event, outp->dp.aux->id, bits,
+-					       nvkm_uconn_uevent_aux);
+-		}
++	if (&outp->head == &conn->disp->outps)
++		return -EINVAL;
++
++	if (outp->dp.aux && !outp->info.location) {
++		if (args->v0.types & NVIF_CONN_EVENT_V0_PLUG  ) bits |= NVKM_I2C_PLUG;
++		if (args->v0.types & NVIF_CONN_EVENT_V0_UNPLUG) bits |= NVKM_I2C_UNPLUG;
++		if (args->v0.types & NVIF_CONN_EVENT_V0_IRQ   ) bits |= NVKM_I2C_IRQ;
++
++		return nvkm_uevent_add(uevent, &device->i2c->event, outp->dp.aux->id, bits,
++				       nvkm_uconn_uevent_aux);
+ 	}
+ 
+ 	if (args->v0.types & NVIF_CONN_EVENT_V0_PLUG  ) bits |= NVKM_GPIO_HI;
+ 	if (args->v0.types & NVIF_CONN_EVENT_V0_UNPLUG) bits |= NVKM_GPIO_LO;
+-	if (args->v0.types & NVIF_CONN_EVENT_V0_IRQ)
+-		return -EINVAL;
++	if (args->v0.types & NVIF_CONN_EVENT_V0_IRQ) {
++		/* TODO: support DP IRQ on ANX9805 and remove this hack. */
++		if (!outp->info.location)
++			return -EINVAL;
++	}
+ 
+ 	return nvkm_uevent_add(uevent, &device->gpio->event, conn->info.hpd, bits,
+ 			       nvkm_uconn_uevent_gpio);
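
[Editorial note, not part of the patch: the uconn.c rework hoists the connector lookup out of the event wiring. With list_for_each_entry(), "not found" is detected by comparing the cursor's link member against the list head after the loop, which is exactly the new "&outp->head == &conn->disp->outps" test. A self-contained sketch of that idiom; the tiny intrusive list and container_of() below are local stand-ins for <linux/list.h> so it runs in userspace:]

#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev; n->next = h;
	h->prev->next = n; h->prev = n;
}

struct outp {
	int connector;
	struct list_head head;
};

static struct outp *find_outp(struct list_head *outps, int index)
{
	struct list_head *pos;
	struct outp *outp = NULL;

	for (pos = outps->next; pos != outps; pos = pos->next) {
		outp = container_of(pos, struct outp, head);
		if (outp->connector == index)
			break;
	}
	/* fell off the end: the cursor's link is the head itself */
	if (pos == outps)
		return NULL;	/* mirrors: &outp->head == &disp->outps */
	return outp;
}

int main(void)
{
	struct list_head outps = { &outps, &outps };
	struct outp a = { .connector = 3 };

	list_add_tail(&a.head, &outps);
	printf("found: %p\n", (void *)find_outp(&outps, 3));
	printf("missing: %p\n", (void *)find_outp(&outps, 7));
	return 0;
}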
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c
+index 976539de4220c..731b2f68d3dbf 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c
+@@ -260,10 +260,11 @@ nvkm_i2c_new_(const struct nvkm_i2c_func *func, struct nvkm_device *device,
+ {
+ 	struct nvkm_bios *bios = device->bios;
+ 	struct nvkm_i2c *i2c;
++	struct nvkm_i2c_aux *aux;
+ 	struct dcb_i2c_entry ccbE;
+ 	struct dcb_output dcbE;
+ 	u8 ver, hdr;
+-	int ret, i;
++	int ret, i, ids;
+ 
+ 	if (!(i2c = *pi2c = kzalloc(sizeof(*i2c), GFP_KERNEL)))
+ 		return -ENOMEM;
+@@ -406,5 +407,11 @@ nvkm_i2c_new_(const struct nvkm_i2c_func *func, struct nvkm_device *device,
+ 		}
+ 	}
+ 
+-	return nvkm_event_init(&nvkm_i2c_intr_func, &i2c->subdev, 4, i, &i2c->event);
++	ids = 0;
++	list_for_each_entry(aux, &i2c->aux, head)
++		ids = max(ids, aux->id + 1);
++	if (!ids)
++		return 0;
++
++	return nvkm_event_init(&nvkm_i2c_intr_func, &i2c->subdev, 4, ids, &i2c->event);
+ }
+diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c
+index 46a27ebf4588a..a6700d7278bf3 100644
+--- a/drivers/gpu/drm/radeon/radeon_cs.c
++++ b/drivers/gpu/drm/radeon/radeon_cs.c
+@@ -270,7 +270,8 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data)
+ {
+ 	struct drm_radeon_cs *cs = data;
+ 	uint64_t *chunk_array_ptr;
+-	unsigned size, i;
++	u64 size;
++	unsigned i;
+ 	u32 ring = RADEON_CS_RING_GFX;
+ 	s32 priority = 0;
+ 
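
[Editorial note, not part of the patch: the radeon_cs hunk only widens the declaration of "size" to u64; the byte-size arithmetic it guards lives later in radeon_cs_parser_init(), outside the visible context, where user-supplied chunk counts are multiplied by element sizes. A toy demonstration of the 32-bit wrap/truncation that a widened accumulator avoids; the values are illustrative only:]

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t nchunks = 0x20000001;	/* user-controlled count */
	/* wraps on 32-bit targets, truncates on assignment elsewhere: */
	uint32_t narrow = nchunks * sizeof(uint64_t);
	uint64_t wide = (uint64_t)nchunks * sizeof(uint64_t);

	printf("32-bit: %u bytes, 64-bit: %llu bytes\n",
	       (unsigned)narrow, (unsigned long long)wide);
	return 0;
}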
+diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
+index 7333f7a87a2fb..46ff9c75bb124 100644
+--- a/drivers/gpu/drm/ttm/ttm_resource.c
++++ b/drivers/gpu/drm/ttm/ttm_resource.c
+@@ -86,6 +86,8 @@ static void ttm_lru_bulk_move_pos_tail(struct ttm_lru_bulk_move_pos *pos,
+ 				       struct ttm_resource *res)
+ {
+ 	if (pos->last != res) {
++		if (pos->first == res)
++			pos->first = list_next_entry(res, lru);
+ 		list_move(&res->lru, &pos->last->lru);
+ 		pos->last = res;
+ 	}
+@@ -111,7 +113,8 @@ static void ttm_lru_bulk_move_del(struct ttm_lru_bulk_move *bulk,
+ {
+ 	struct ttm_lru_bulk_move_pos *pos = ttm_lru_bulk_move_pos(bulk, res);
+ 
+-	if (unlikely(pos->first == res && pos->last == res)) {
++	if (unlikely(WARN_ON(!pos->first || !pos->last) ||
++		     (pos->first == res && pos->last == res))) {
+ 		pos->first = NULL;
+ 		pos->last = NULL;
+ 	} else if (pos->first == res) {
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 5d29abac2300e..55a436a6dde98 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -620,6 +620,7 @@
+ #define USB_DEVICE_ID_UGCI_FIGHTING	0x0030
+ 
+ #define USB_VENDOR_ID_HP		0x03f0
++#define USB_PRODUCT_ID_HP_ELITE_PRESENTER_MOUSE_464A		0x464a
+ #define USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0A4A	0x0a4a
+ #define USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0B4A	0x0b4a
+ #define USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE		0x134a
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 804fc03600cc9..3983b4f282f8f 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -96,6 +96,7 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_KEYBOARD_A096), HID_QUIRK_NO_INIT_REPORTS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_KEYBOARD_A293), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0A4A), HID_QUIRK_ALWAYS_POLL },
++	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_ELITE_PRESENTER_MOUSE_464A), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0B4A), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE_094A), HID_QUIRK_ALWAYS_POLL },
+diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
+index 3ebd4b6586b3e..05c0fb2acbc44 100644
+--- a/drivers/iommu/iommu-sva.c
++++ b/drivers/iommu/iommu-sva.c
+@@ -34,8 +34,9 @@ static int iommu_sva_alloc_pasid(struct mm_struct *mm, ioasid_t min, ioasid_t ma
+ 	}
+ 
+ 	ret = ida_alloc_range(&iommu_global_pasid_ida, min, max, GFP_KERNEL);
+-	if (ret < min)
++	if (ret < 0)
+ 		goto out;
++
+ 	mm->pasid = ret;
+ 	ret = 0;
+ out:
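
[Editorial note, not part of the patch: the iommu-sva hunk replaces "ret < min" with "ret < 0". ida_alloc_range() returns a negative errno on failure, but min has the unsigned type ioasid_t, so in "ret < min" the usual arithmetic conversions turn the negative errno into a huge unsigned value and the error check never fires. A minimal demonstration of that signed/unsigned trap:]

#include <stdio.h>

int main(void)
{
	int ret = -12;			/* e.g. -ENOMEM from the allocator */
	unsigned int min = 1;

	if (ret < min)			/* ret is converted to unsigned */
		printf("error caught by ret < min\n");
	else
		printf("error missed: ret compares as %u\n", (unsigned)ret);

	if (ret < 0)			/* the fix: test the sign directly */
		printf("error caught by ret < 0\n");
	return 0;
}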
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 350094f1cb09f..18384251399ab 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -4807,11 +4807,21 @@ action_store(struct mddev *mddev, const char *page, size_t len)
+ 			return -EINVAL;
+ 		err = mddev_lock(mddev);
+ 		if (!err) {
+-			if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
++			if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery)) {
+ 				err =  -EBUSY;
+-			else {
++			} else if (mddev->reshape_position == MaxSector ||
++				   mddev->pers->check_reshape == NULL ||
++				   mddev->pers->check_reshape(mddev)) {
+ 				clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 				err = mddev->pers->start_reshape(mddev);
++			} else {
++				/*
++				 * If reshape is still in progress, and
++				 * md_check_recovery() can continue to reshape,
++				 * don't restart reshape because data can be
++				 * corrupted for raid456.
++				 */
++				clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 			}
+ 			mddev_unlock(mddev);
+ 		}
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 9d23963496194..ee75b058438f3 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -920,6 +920,7 @@ static void flush_pending_writes(struct r10conf *conf)
+ 
+ 			raid1_submit_write(bio);
+ 			bio = next;
++			cond_resched();
+ 		}
+ 		blk_finish_plug(&plug);
+ 	} else
+@@ -1132,6 +1133,7 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule)
+ 
+ 		raid1_submit_write(bio);
+ 		bio = next;
++		cond_resched();
+ 	}
+ 	kfree(plug);
+ }
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+index 68df6d4641b5c..eebf967f4711a 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+@@ -227,6 +227,8 @@ static int
+ __mcp251xfd_chip_set_mode(const struct mcp251xfd_priv *priv,
+ 			  const u8 mode_req, bool nowait)
+ {
++	const struct can_bittiming *bt = &priv->can.bittiming;
++	unsigned long timeout_us = MCP251XFD_POLL_TIMEOUT_US;
+ 	u32 con = 0, con_reqop, osc = 0;
+ 	u8 mode;
+ 	int err;
+@@ -246,12 +248,16 @@ __mcp251xfd_chip_set_mode(const struct mcp251xfd_priv *priv,
+ 	if (mode_req == MCP251XFD_REG_CON_MODE_SLEEP || nowait)
+ 		return 0;
+ 
++	if (bt->bitrate)
++		timeout_us = max_t(unsigned long, timeout_us,
++				   MCP251XFD_FRAME_LEN_MAX_BITS * USEC_PER_SEC /
++				   bt->bitrate);
++
+ 	err = regmap_read_poll_timeout(priv->map_reg, MCP251XFD_REG_CON, con,
+ 				       !mcp251xfd_reg_invalid(con) &&
+ 				       FIELD_GET(MCP251XFD_REG_CON_OPMOD_MASK,
+ 						 con) == mode_req,
+-				       MCP251XFD_POLL_SLEEP_US,
+-				       MCP251XFD_POLL_TIMEOUT_US);
++				       MCP251XFD_POLL_SLEEP_US, timeout_us);
+ 	if (err != -ETIMEDOUT && err != -EBADMSG)
+ 		return err;
+ 
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
+index 7024ff0cc2c0c..24510b3b80203 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
+@@ -387,6 +387,7 @@ static_assert(MCP251XFD_TIMESTAMP_WORK_DELAY_SEC <
+ #define MCP251XFD_OSC_STAB_TIMEOUT_US (10 * MCP251XFD_OSC_STAB_SLEEP_US)
+ #define MCP251XFD_POLL_SLEEP_US (10)
+ #define MCP251XFD_POLL_TIMEOUT_US (USEC_PER_MSEC)
++#define MCP251XFD_FRAME_LEN_MAX_BITS (736)
+ 
+ /* Misc */
+ #define MCP251XFD_NAPI_WEIGHT 32
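
[Editorial note, not part of the patch: the mcp251xfd change scales the mode-change poll budget with the configured bit rate, so it always covers at least one worst-case frame time; 736 bits is the driver's new MCP251XFD_FRAME_LEN_MAX_BITS, and 1000 us was the old fixed budget. The computation from __mcp251xfd_chip_set_mode(), lifted into a plain function:]

#include <stdio.h>

#define POLL_TIMEOUT_US		1000UL
#define FRAME_LEN_MAX_BITS	736UL
#define USEC_PER_SEC		1000000UL

static unsigned long mode_poll_timeout_us(unsigned long bitrate)
{
	unsigned long timeout_us = POLL_TIMEOUT_US;

	if (bitrate) {
		unsigned long frame_us =
			FRAME_LEN_MAX_BITS * USEC_PER_SEC / bitrate;

		if (frame_us > timeout_us)	/* max_t() in the driver */
			timeout_us = frame_us;
	}
	return timeout_us;
}

int main(void)
{
	/* 10 kbit/s: one worst-case frame needs 73.6 ms, not 1 ms */
	printf("10 kbit/s  -> %lu us\n", mode_poll_timeout_us(10000));
	printf("500 kbit/s -> %lu us\n", mode_poll_timeout_us(500000));
	return 0;
}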
+diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
+index d476c28840084..f418066569fcc 100644
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -303,12 +303,6 @@ struct gs_can {
+ 	struct can_bittiming_const bt_const, data_bt_const;
+ 	unsigned int channel;	/* channel number */
+ 
+-	/* time counter for hardware timestamps */
+-	struct cyclecounter cc;
+-	struct timecounter tc;
+-	spinlock_t tc_lock; /* spinlock to guard access tc->cycle_last */
+-	struct delayed_work timestamp;
+-
+ 	u32 feature;
+ 	unsigned int hf_size_tx;
+ 
+@@ -325,6 +319,13 @@ struct gs_usb {
+ 	struct gs_can *canch[GS_MAX_INTF];
+ 	struct usb_anchor rx_submitted;
+ 	struct usb_device *udev;
++
++	/* time counter for hardware timestamps */
++	struct cyclecounter cc;
++	struct timecounter tc;
++	spinlock_t tc_lock; /* spinlock to guard access tc->cycle_last */
++	struct delayed_work timestamp;
++
+ 	unsigned int hf_size_rx;
+ 	u8 active_channels;
+ };
+@@ -388,15 +389,15 @@ static int gs_cmd_reset(struct gs_can *dev)
+ 				    GFP_KERNEL);
+ }
+ 
+-static inline int gs_usb_get_timestamp(const struct gs_can *dev,
++static inline int gs_usb_get_timestamp(const struct gs_usb *parent,
+ 				       u32 *timestamp_p)
+ {
+ 	__le32 timestamp;
+ 	int rc;
+ 
+-	rc = usb_control_msg_recv(dev->udev, 0, GS_USB_BREQ_TIMESTAMP,
++	rc = usb_control_msg_recv(parent->udev, 0, GS_USB_BREQ_TIMESTAMP,
+ 				  USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_INTERFACE,
+-				  dev->channel, 0,
++				  0, 0,
+ 				  &timestamp, sizeof(timestamp),
+ 				  USB_CTRL_GET_TIMEOUT,
+ 				  GFP_KERNEL);
+@@ -410,20 +411,20 @@ static inline int gs_usb_get_timestamp(const struct gs_can *dev,
+ 
+ static u64 gs_usb_timestamp_read(const struct cyclecounter *cc) __must_hold(&dev->tc_lock)
+ {
+-	struct gs_can *dev = container_of(cc, struct gs_can, cc);
++	struct gs_usb *parent = container_of(cc, struct gs_usb, cc);
+ 	u32 timestamp = 0;
+ 	int err;
+ 
+-	lockdep_assert_held(&dev->tc_lock);
++	lockdep_assert_held(&parent->tc_lock);
+ 
+ 	/* drop lock for synchronous USB transfer */
+-	spin_unlock_bh(&dev->tc_lock);
+-	err = gs_usb_get_timestamp(dev, &timestamp);
+-	spin_lock_bh(&dev->tc_lock);
++	spin_unlock_bh(&parent->tc_lock);
++	err = gs_usb_get_timestamp(parent, &timestamp);
++	spin_lock_bh(&parent->tc_lock);
+ 	if (err)
+-		netdev_err(dev->netdev,
+-			   "Error %d while reading timestamp. HW timestamps may be inaccurate.",
+-			   err);
++		dev_err(&parent->udev->dev,
++			"Error %d while reading timestamp. HW timestamps may be inaccurate.",
++			err);
+ 
+ 	return timestamp;
+ }
+@@ -431,14 +432,14 @@ static u64 gs_usb_timestamp_read(const struct cyclecounter *cc) __must_hold(&dev
+ static void gs_usb_timestamp_work(struct work_struct *work)
+ {
+ 	struct delayed_work *delayed_work = to_delayed_work(work);
+-	struct gs_can *dev;
++	struct gs_usb *parent;
+ 
+-	dev = container_of(delayed_work, struct gs_can, timestamp);
+-	spin_lock_bh(&dev->tc_lock);
+-	timecounter_read(&dev->tc);
+-	spin_unlock_bh(&dev->tc_lock);
++	parent = container_of(delayed_work, struct gs_usb, timestamp);
++	spin_lock_bh(&parent->tc_lock);
++	timecounter_read(&parent->tc);
++	spin_unlock_bh(&parent->tc_lock);
+ 
+-	schedule_delayed_work(&dev->timestamp,
++	schedule_delayed_work(&parent->timestamp,
+ 			      GS_USB_TIMESTAMP_WORK_DELAY_SEC * HZ);
+ }
+ 
+@@ -446,37 +447,38 @@ static void gs_usb_skb_set_timestamp(struct gs_can *dev,
+ 				     struct sk_buff *skb, u32 timestamp)
+ {
+ 	struct skb_shared_hwtstamps *hwtstamps = skb_hwtstamps(skb);
++	struct gs_usb *parent = dev->parent;
+ 	u64 ns;
+ 
+-	spin_lock_bh(&dev->tc_lock);
+-	ns = timecounter_cyc2time(&dev->tc, timestamp);
+-	spin_unlock_bh(&dev->tc_lock);
++	spin_lock_bh(&parent->tc_lock);
++	ns = timecounter_cyc2time(&parent->tc, timestamp);
++	spin_unlock_bh(&parent->tc_lock);
+ 
+ 	hwtstamps->hwtstamp = ns_to_ktime(ns);
+ }
+ 
+-static void gs_usb_timestamp_init(struct gs_can *dev)
++static void gs_usb_timestamp_init(struct gs_usb *parent)
+ {
+-	struct cyclecounter *cc = &dev->cc;
++	struct cyclecounter *cc = &parent->cc;
+ 
+ 	cc->read = gs_usb_timestamp_read;
+ 	cc->mask = CYCLECOUNTER_MASK(32);
+ 	cc->shift = 32 - bits_per(NSEC_PER_SEC / GS_USB_TIMESTAMP_TIMER_HZ);
+ 	cc->mult = clocksource_hz2mult(GS_USB_TIMESTAMP_TIMER_HZ, cc->shift);
+ 
+-	spin_lock_init(&dev->tc_lock);
+-	spin_lock_bh(&dev->tc_lock);
+-	timecounter_init(&dev->tc, &dev->cc, ktime_get_real_ns());
+-	spin_unlock_bh(&dev->tc_lock);
++	spin_lock_init(&parent->tc_lock);
++	spin_lock_bh(&parent->tc_lock);
++	timecounter_init(&parent->tc, &parent->cc, ktime_get_real_ns());
++	spin_unlock_bh(&parent->tc_lock);
+ 
+-	INIT_DELAYED_WORK(&dev->timestamp, gs_usb_timestamp_work);
+-	schedule_delayed_work(&dev->timestamp,
++	INIT_DELAYED_WORK(&parent->timestamp, gs_usb_timestamp_work);
++	schedule_delayed_work(&parent->timestamp,
+ 			      GS_USB_TIMESTAMP_WORK_DELAY_SEC * HZ);
+ }
+ 
+-static void gs_usb_timestamp_stop(struct gs_can *dev)
++static void gs_usb_timestamp_stop(struct gs_usb *parent)
+ {
+-	cancel_delayed_work_sync(&dev->timestamp);
++	cancel_delayed_work_sync(&parent->timestamp);
+ }
+ 
+ static void gs_update_state(struct gs_can *dev, struct can_frame *cf)
+@@ -560,6 +562,9 @@ static void gs_usb_receive_bulk_callback(struct urb *urb)
+ 	if (!netif_device_present(netdev))
+ 		return;
+ 
++	if (!netif_running(netdev))
++		goto resubmit_urb;
++
+ 	if (hf->echo_id == -1) { /* normal rx */
+ 		if (hf->flags & GS_CAN_FLAG_FD) {
+ 			skb = alloc_canfd_skb(dev->netdev, &cfd);
+@@ -833,6 +838,7 @@ static int gs_can_open(struct net_device *netdev)
+ 		.mode = cpu_to_le32(GS_CAN_MODE_START),
+ 	};
+ 	struct gs_host_frame *hf;
++	struct urb *urb = NULL;
+ 	u32 ctrlmode;
+ 	u32 flags = 0;
+ 	int rc, i;
+@@ -855,14 +861,18 @@ static int gs_can_open(struct net_device *netdev)
+ 	}
+ 
+ 	if (!parent->active_channels) {
++		if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP)
++			gs_usb_timestamp_init(parent);
++
+ 		for (i = 0; i < GS_MAX_RX_URBS; i++) {
+-			struct urb *urb;
+ 			u8 *buf;
+ 
+ 			/* alloc rx urb */
+ 			urb = usb_alloc_urb(0, GFP_KERNEL);
+-			if (!urb)
+-				return -ENOMEM;
++			if (!urb) {
++				rc = -ENOMEM;
++				goto out_usb_kill_anchored_urbs;
++			}
+ 
+ 			/* alloc rx buffer */
+ 			buf = kmalloc(dev->parent->hf_size_rx,
+@@ -870,8 +880,8 @@ static int gs_can_open(struct net_device *netdev)
+ 			if (!buf) {
+ 				netdev_err(netdev,
+ 					   "No memory left for USB buffer\n");
+-				usb_free_urb(urb);
+-				return -ENOMEM;
++				rc = -ENOMEM;
++				goto out_usb_free_urb;
+ 			}
+ 
+ 			/* fill, anchor, and submit rx urb */
+@@ -894,9 +904,7 @@ static int gs_can_open(struct net_device *netdev)
+ 				netdev_err(netdev,
+ 					   "usb_submit failed (err=%d)\n", rc);
+ 
+-				usb_unanchor_urb(urb);
+-				usb_free_urb(urb);
+-				break;
++				goto out_usb_unanchor_urb;
+ 			}
+ 
+ 			/* Drop reference,
+@@ -926,13 +934,9 @@ static int gs_can_open(struct net_device *netdev)
+ 		flags |= GS_CAN_MODE_FD;
+ 
+ 	/* if hardware supports timestamps, enable it */
+-	if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP) {
++	if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP)
+ 		flags |= GS_CAN_MODE_HW_TIMESTAMP;
+ 
+-		/* start polling timestamp */
+-		gs_usb_timestamp_init(dev);
+-	}
+-
+ 	/* finally start device */
+ 	dev->can.state = CAN_STATE_ERROR_ACTIVE;
+ 	dm.flags = cpu_to_le32(flags);
+@@ -942,10 +946,9 @@ static int gs_can_open(struct net_device *netdev)
+ 				  GFP_KERNEL);
+ 	if (rc) {
+ 		netdev_err(netdev, "Couldn't start device (err=%d)\n", rc);
+-		if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP)
+-			gs_usb_timestamp_stop(dev);
+ 		dev->can.state = CAN_STATE_STOPPED;
+-		return rc;
++
++		goto out_usb_kill_anchored_urbs;
+ 	}
+ 
+ 	parent->active_channels++;
+@@ -953,6 +956,22 @@ static int gs_can_open(struct net_device *netdev)
+ 		netif_start_queue(netdev);
+ 
+ 	return 0;
++
++out_usb_unanchor_urb:
++	usb_unanchor_urb(urb);
++out_usb_free_urb:
++	usb_free_urb(urb);
++out_usb_kill_anchored_urbs:
++	if (!parent->active_channels) {
++		usb_kill_anchored_urbs(&dev->tx_submitted);
++
++		if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP)
++			gs_usb_timestamp_stop(parent);
++	}
++
++	close_candev(netdev);
++
++	return rc;
+ }
+ 
+ static int gs_usb_get_state(const struct net_device *netdev,
+@@ -998,14 +1017,13 @@ static int gs_can_close(struct net_device *netdev)
+ 
+ 	netif_stop_queue(netdev);
+ 
+-	/* stop polling timestamp */
+-	if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP)
+-		gs_usb_timestamp_stop(dev);
+-
+ 	/* Stop polling */
+ 	parent->active_channels--;
+ 	if (!parent->active_channels) {
+ 		usb_kill_anchored_urbs(&parent->rx_submitted);
++
++		if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP)
++			gs_usb_timestamp_stop(parent);
+ 	}
+ 
+ 	/* Stop sending URBs */
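
[Editorial note, not part of the patch: the gs_usb rework moves the cyclecounter/timecounter, its lock, and the refresh work from struct gs_can (per channel) to struct gs_usb (per device), since multi-channel interfaces share one hardware clock; the callbacks now recover the parent with container_of() against the gs_usb member. A userspace sketch of that embedded-member-to-parent recovery; the struct names below are illustrative, not the driver's:]

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct cyclecounter {
	unsigned long long (*read)(const struct cyclecounter *cc);
};

struct usb_parent {
	int shared_clock;		/* one clock for all channels */
	struct cyclecounter cc;		/* embedded, like parent->cc */
};

static unsigned long long parent_read(const struct cyclecounter *cc)
{
	/* recover the device that owns this embedded cyclecounter */
	struct usb_parent *parent =
		container_of(cc, struct usb_parent, cc);

	return (unsigned long long)parent->shared_clock;
}

int main(void)
{
	struct usb_parent dev = { .shared_clock = 42 };

	dev.cc.read = parent_read;
	printf("clock = %llu\n", dev.cc.read(&dev.cc));
	return 0;
}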
+diff --git a/drivers/net/dsa/microchip/ksz8795.c b/drivers/net/dsa/microchip/ksz8795.c
+index f56fca1b1a222..cc5b19a3d0df2 100644
+--- a/drivers/net/dsa/microchip/ksz8795.c
++++ b/drivers/net/dsa/microchip/ksz8795.c
+@@ -506,7 +506,13 @@ static int ksz8_r_sta_mac_table(struct ksz_device *dev, u16 addr,
+ 		(data_hi & masks[STATIC_MAC_TABLE_FWD_PORTS]) >>
+ 			shifts[STATIC_MAC_FWD_PORTS];
+ 	alu->is_override = (data_hi & masks[STATIC_MAC_TABLE_OVERRIDE]) ? 1 : 0;
+-	data_hi >>= 1;
++
++	/* KSZ8795 family switches have STATIC_MAC_TABLE_USE_FID and
++	 * STATIC_MAC_TABLE_FID definitions off by 1 when doing read on the
++	 * static MAC table compared to doing write.
++	 */
++	if (ksz_is_ksz87xx(dev))
++		data_hi >>= 1;
+ 	alu->is_static = true;
+ 	alu->is_use_fid = (data_hi & masks[STATIC_MAC_TABLE_USE_FID]) ? 1 : 0;
+ 	alu->fid = (data_hi & masks[STATIC_MAC_TABLE_FID]) >>
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index a4428be5f483c..a0ba2605bb620 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -331,13 +331,13 @@ static const u32 ksz8795_masks[] = {
+ 	[STATIC_MAC_TABLE_VALID]	= BIT(21),
+ 	[STATIC_MAC_TABLE_USE_FID]	= BIT(23),
+ 	[STATIC_MAC_TABLE_FID]		= GENMASK(30, 24),
+-	[STATIC_MAC_TABLE_OVERRIDE]	= BIT(26),
+-	[STATIC_MAC_TABLE_FWD_PORTS]	= GENMASK(24, 20),
++	[STATIC_MAC_TABLE_OVERRIDE]	= BIT(22),
++	[STATIC_MAC_TABLE_FWD_PORTS]	= GENMASK(20, 16),
+ 	[DYNAMIC_MAC_TABLE_ENTRIES_H]	= GENMASK(6, 0),
+-	[DYNAMIC_MAC_TABLE_MAC_EMPTY]	= BIT(8),
++	[DYNAMIC_MAC_TABLE_MAC_EMPTY]	= BIT(7),
+ 	[DYNAMIC_MAC_TABLE_NOT_READY]	= BIT(7),
+ 	[DYNAMIC_MAC_TABLE_ENTRIES]	= GENMASK(31, 29),
+-	[DYNAMIC_MAC_TABLE_FID]		= GENMASK(26, 20),
++	[DYNAMIC_MAC_TABLE_FID]		= GENMASK(22, 16),
+ 	[DYNAMIC_MAC_TABLE_SRC_PORT]	= GENMASK(26, 24),
+ 	[DYNAMIC_MAC_TABLE_TIMESTAMP]	= GENMASK(28, 27),
+ 	[P_MII_TX_FLOW_CTRL]		= BIT(5),
+diff --git a/drivers/net/dsa/microchip/ksz_common.h b/drivers/net/dsa/microchip/ksz_common.h
+index 8abecaf6089ef..33d9a2f6af27a 100644
+--- a/drivers/net/dsa/microchip/ksz_common.h
++++ b/drivers/net/dsa/microchip/ksz_common.h
+@@ -569,6 +569,13 @@ static inline void ksz_regmap_unlock(void *__mtx)
+ 	mutex_unlock(mtx);
+ }
+ 
++static inline bool ksz_is_ksz87xx(struct ksz_device *dev)
++{
++	return dev->chip_id == KSZ8795_CHIP_ID ||
++	       dev->chip_id == KSZ8794_CHIP_ID ||
++	       dev->chip_id == KSZ8765_CHIP_ID;
++}
++
+ static inline bool ksz_is_ksz88x3(struct ksz_device *dev)
+ {
+ 	return dev->chip_id == KSZ8830_CHIP_ID;
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 08a46ffd53af9..642e93e8623eb 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -109,6 +109,13 @@ int mv88e6xxx_wait_mask(struct mv88e6xxx_chip *chip, int addr, int reg,
+ 			usleep_range(1000, 2000);
+ 	}
+ 
++	err = mv88e6xxx_read(chip, addr, reg, &data);
++	if (err)
++		return err;
++
++	if ((data & mask) == val)
++		return 0;
++
+ 	dev_err(chip->dev, "Timeout while waiting for switch\n");
+ 	return -ETIMEDOUT;
+ }
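
[Editorial note, not part of the patch: the mv88e6xxx_wait_mask() hunk re-reads the register once after the polling loop expires. The loop's final sleep can straddle the moment the condition becomes true, so without a last sample a slow-but-successful transition is misreported as -ETIMEDOUT. The generic shape, with read_reg() faking a register that only goes ready after the loop budget is spent:]

#include <stdio.h>

static int reads;

static unsigned read_reg(void) { return ++reads >= 5 ? 0x1 : 0x0; }

static int wait_mask(unsigned mask, unsigned val, int budget)
{
	for (int i = 0; i < budget; i++) {
		if ((read_reg() & mask) == val)
			return 0;
		/* usleep_range(1000, 2000) would go here */
	}
	/* one last read: the condition may have become true during the
	 * final sleep, after the loop's last sample.
	 */
	if ((read_reg() & mask) == val)
		return 0;

	fprintf(stderr, "timeout while waiting\n");
	return -1;
}

int main(void)
{
	printf("wait_mask -> %d (reads=%d)\n", wait_mask(0x1, 0x1, 4), reads);
	return 0;
}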
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+index d385ffc218766..32bb14303473b 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+@@ -438,19 +438,36 @@ static void hns3_dbg_fill_content(char *content, u16 len,
+ 				  const struct hns3_dbg_item *items,
+ 				  const char **result, u16 size)
+ {
++#define HNS3_DBG_LINE_END_LEN	2
+ 	char *pos = content;
++	u16 item_len;
+ 	u16 i;
+ 
++	if (!len) {
++		return;
++	} else if (len <= HNS3_DBG_LINE_END_LEN) {
++		*pos++ = '\0';
++		return;
++	}
++
+ 	memset(content, ' ', len);
+-	for (i = 0; i < size; i++) {
+-		if (result)
+-			strncpy(pos, result[i], strlen(result[i]));
+-		else
+-			strncpy(pos, items[i].name, strlen(items[i].name));
++	len -= HNS3_DBG_LINE_END_LEN;
+ 
+-		pos += strlen(items[i].name) + items[i].interval;
++	for (i = 0; i < size; i++) {
++		item_len = strlen(items[i].name) + items[i].interval;
++		if (len < item_len)
++			break;
++
++		if (result) {
++			if (item_len < strlen(result[i]))
++				break;
++			strscpy(pos, result[i], strlen(result[i]));
++		} else {
++			strscpy(pos, items[i].name, strlen(items[i].name));
++		}
++		pos += item_len;
++		len -= item_len;
+ 	}
+-
+ 	*pos++ = '\n';
+ 	*pos++ = '\0';
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
+index a0b46e7d863eb..233c132dc513e 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
+@@ -88,16 +88,35 @@ static void hclge_dbg_fill_content(char *content, u16 len,
+ 				   const struct hclge_dbg_item *items,
+ 				   const char **result, u16 size)
+ {
++#define HCLGE_DBG_LINE_END_LEN	2
+ 	char *pos = content;
++	u16 item_len;
+ 	u16 i;
+ 
++	if (!len) {
++		return;
++	} else if (len <= HCLGE_DBG_LINE_END_LEN) {
++		*pos++ = '\0';
++		return;
++	}
++
+ 	memset(content, ' ', len);
++	len -= HCLGE_DBG_LINE_END_LEN;
++
+ 	for (i = 0; i < size; i++) {
+-		if (result)
+-			strncpy(pos, result[i], strlen(result[i]));
+-		else
+-			strncpy(pos, items[i].name, strlen(items[i].name));
+-		pos += strlen(items[i].name) + items[i].interval;
++		item_len = strlen(items[i].name) + items[i].interval;
++		if (len < item_len)
++			break;
++
++		if (result) {
++			if (item_len < strlen(result[i]))
++				break;
++			strscpy(pos, result[i], strlen(result[i]));
++		} else {
++			strscpy(pos, items[i].name, strlen(items[i].name));
++		}
++		pos += item_len;
++		len -= item_len;
+ 	}
+ 	*pos++ = '\n';
+ 	*pos++ = '\0';
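
[Editorial note, not part of the patch: the two debugfs hunks above, hns3_dbg_fill_content() and hclge_dbg_fill_content(), are twins. Both replace unbounded strncpy-with-strlen writes into a fixed buffer with length-tracked copies that stop early and reserve two bytes for the trailing "\n\0". A userspace sketch of that bounded fill; the column names and widths are made up for the example:]

#include <stdio.h>
#include <string.h>

#define LINE_END_LEN 2			/* '\n' + '\0' */

static void fill_content(char *content, size_t len,
			 const char **items, size_t n, size_t width)
{
	char *pos = content;

	if (len <= LINE_END_LEN) {	/* no room for even "\n\0" */
		if (len)
			*pos = '\0';
		return;
	}

	memset(content, ' ', len);
	len -= LINE_END_LEN;		/* keep space for the line end */

	for (size_t i = 0; i < n; i++) {
		size_t item_len = strlen(items[i]);

		if (len < width || item_len > width)
			break;		/* would overrun: stop early */
		memcpy(pos, items[i], item_len);	/* no NUL yet */
		pos += width;
		len -= width;
	}
	*pos++ = '\n';
	*pos = '\0';
}

int main(void)
{
	const char *cols[] = { "QUEUE", "HEAD", "TAIL" };
	char line[32];

	fill_content(line, sizeof(line), cols, 3, 8);
	fputs(line, stdout);
	return 0;
}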
+diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
+index 39d0fe76a38ff..8cbdebc5b6989 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf.h
++++ b/drivers/net/ethernet/intel/iavf/iavf.h
+@@ -255,8 +255,10 @@ struct iavf_adapter {
+ 	struct workqueue_struct *wq;
+ 	struct work_struct reset_task;
+ 	struct work_struct adminq_task;
++	struct work_struct finish_config;
+ 	struct delayed_work client_task;
+ 	wait_queue_head_t down_waitqueue;
++	wait_queue_head_t reset_waitqueue;
+ 	wait_queue_head_t vc_waitqueue;
+ 	struct iavf_q_vector *q_vectors;
+ 	struct list_head vlan_filter_list;
+@@ -518,14 +520,12 @@ int iavf_up(struct iavf_adapter *adapter);
+ void iavf_down(struct iavf_adapter *adapter);
+ int iavf_process_config(struct iavf_adapter *adapter);
+ int iavf_parse_vf_resource_msg(struct iavf_adapter *adapter);
+-void iavf_schedule_reset(struct iavf_adapter *adapter);
++void iavf_schedule_reset(struct iavf_adapter *adapter, u64 flags);
+ void iavf_schedule_request_stats(struct iavf_adapter *adapter);
++void iavf_schedule_finish_config(struct iavf_adapter *adapter);
+ void iavf_reset(struct iavf_adapter *adapter);
+ void iavf_set_ethtool_ops(struct net_device *netdev);
+ void iavf_update_stats(struct iavf_adapter *adapter);
+-void iavf_reset_interrupt_capability(struct iavf_adapter *adapter);
+-int iavf_init_interrupt_scheme(struct iavf_adapter *adapter);
+-void iavf_irq_enable_queues(struct iavf_adapter *adapter);
+ void iavf_free_all_tx_resources(struct iavf_adapter *adapter);
+ void iavf_free_all_rx_resources(struct iavf_adapter *adapter);
+ 
+@@ -579,17 +579,11 @@ void iavf_enable_vlan_stripping_v2(struct iavf_adapter *adapter, u16 tpid);
+ void iavf_disable_vlan_stripping_v2(struct iavf_adapter *adapter, u16 tpid);
+ void iavf_enable_vlan_insertion_v2(struct iavf_adapter *adapter, u16 tpid);
+ void iavf_disable_vlan_insertion_v2(struct iavf_adapter *adapter, u16 tpid);
+-int iavf_replace_primary_mac(struct iavf_adapter *adapter,
+-			     const u8 *new_mac);
+-void
+-iavf_set_vlan_offload_features(struct iavf_adapter *adapter,
+-			       netdev_features_t prev_features,
+-			       netdev_features_t features);
+ void iavf_add_fdir_filter(struct iavf_adapter *adapter);
+ void iavf_del_fdir_filter(struct iavf_adapter *adapter);
+ void iavf_add_adv_rss_cfg(struct iavf_adapter *adapter);
+ void iavf_del_adv_rss_cfg(struct iavf_adapter *adapter);
+ struct iavf_mac_filter *iavf_add_filter(struct iavf_adapter *adapter,
+ 					const u8 *macaddr);
+-int iavf_lock_timeout(struct mutex *lock, unsigned int msecs);
++int iavf_wait_for_reset(struct iavf_adapter *adapter);
+ #endif /* _IAVF_H_ */
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+index 6f171d1d85b75..2f47cfa7f06e2 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+@@ -484,6 +484,7 @@ static int iavf_set_priv_flags(struct net_device *netdev, u32 flags)
+ {
+ 	struct iavf_adapter *adapter = netdev_priv(netdev);
+ 	u32 orig_flags, new_flags, changed_flags;
++	int ret = 0;
+ 	u32 i;
+ 
+ 	orig_flags = READ_ONCE(adapter->flags);
+@@ -531,12 +532,14 @@ static int iavf_set_priv_flags(struct net_device *netdev, u32 flags)
+ 	/* issue a reset to force legacy-rx change to take effect */
+ 	if (changed_flags & IAVF_FLAG_LEGACY_RX) {
+ 		if (netif_running(netdev)) {
+-			adapter->flags |= IAVF_FLAG_RESET_NEEDED;
+-			queue_work(adapter->wq, &adapter->reset_task);
++			iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);
++			ret = iavf_wait_for_reset(adapter);
++			if (ret)
++				netdev_warn(netdev, "Changing private flags timeout or interrupted waiting for reset");
+ 		}
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ /**
+@@ -627,6 +630,7 @@ static int iavf_set_ringparam(struct net_device *netdev,
+ {
+ 	struct iavf_adapter *adapter = netdev_priv(netdev);
+ 	u32 new_rx_count, new_tx_count;
++	int ret = 0;
+ 
+ 	if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
+ 		return -EINVAL;
+@@ -671,11 +675,13 @@ static int iavf_set_ringparam(struct net_device *netdev,
+ 	}
+ 
+ 	if (netif_running(netdev)) {
+-		adapter->flags |= IAVF_FLAG_RESET_NEEDED;
+-		queue_work(adapter->wq, &adapter->reset_task);
++		iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);
++		ret = iavf_wait_for_reset(adapter);
++		if (ret)
++			netdev_warn(netdev, "Changing ring parameters timeout or interrupted waiting for reset");
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ /**
+@@ -1830,7 +1836,7 @@ static int iavf_set_channels(struct net_device *netdev,
+ {
+ 	struct iavf_adapter *adapter = netdev_priv(netdev);
+ 	u32 num_req = ch->combined_count;
+-	int i;
++	int ret = 0;
+ 
+ 	if ((adapter->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_ADQ) &&
+ 	    adapter->num_tc) {
+@@ -1852,22 +1858,13 @@ static int iavf_set_channels(struct net_device *netdev,
+ 
+ 	adapter->num_req_queues = num_req;
+ 	adapter->flags |= IAVF_FLAG_REINIT_ITR_NEEDED;
+-	iavf_schedule_reset(adapter);
++	iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);
+ 
+-	/* wait for the reset is done */
+-	for (i = 0; i < IAVF_RESET_WAIT_COMPLETE_COUNT; i++) {
+-		msleep(IAVF_RESET_WAIT_MS);
+-		if (adapter->flags & IAVF_FLAG_RESET_PENDING)
+-			continue;
+-		break;
+-	}
+-	if (i == IAVF_RESET_WAIT_COMPLETE_COUNT) {
+-		adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
+-		adapter->num_active_queues = num_req;
+-		return -EOPNOTSUPP;
+-	}
++	ret = iavf_wait_for_reset(adapter);
++	if (ret)
++		netdev_warn(netdev, "Changing channel count timeout or interrupted waiting for reset");
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 4a66873882d12..ba96312feb505 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -166,6 +166,45 @@ static struct iavf_adapter *iavf_pdev_to_adapter(struct pci_dev *pdev)
+ 	return netdev_priv(pci_get_drvdata(pdev));
+ }
+ 
++/**
++ * iavf_is_reset_in_progress - Check if a reset is in progress
++ * @adapter: board private structure
++ */
++static bool iavf_is_reset_in_progress(struct iavf_adapter *adapter)
++{
++	if (adapter->state == __IAVF_RESETTING ||
++	    adapter->flags & (IAVF_FLAG_RESET_PENDING |
++			      IAVF_FLAG_RESET_NEEDED))
++		return true;
++
++	return false;
++}
++
++/**
++ * iavf_wait_for_reset - Wait for reset to finish.
++ * @adapter: board private structure
++ *
++ * Returns 0 if reset finished successfully, negative on timeout or interrupt.
++ */
++int iavf_wait_for_reset(struct iavf_adapter *adapter)
++{
++	int ret = wait_event_interruptible_timeout(adapter->reset_waitqueue,
++					!iavf_is_reset_in_progress(adapter),
++					msecs_to_jiffies(5000));
++
++	/* If ret < 0 then it means wait was interrupted.
++	 * If ret == 0 then it means we got a timeout while waiting
++	 * for reset to finish.
++	 * If ret > 0 it means reset has finished.
++	 */
++	if (ret > 0)
++		return 0;
++	else if (ret < 0)
++		return -EINTR;
++	else
++		return -EBUSY;
++}
++
+ /**
+  * iavf_allocate_dma_mem_d - OS specific memory alloc for shared code
+  * @hw:   pointer to the HW structure
+@@ -253,7 +292,7 @@ enum iavf_status iavf_free_virt_mem_d(struct iavf_hw *hw,
+  *
+  * Returns 0 on success, negative on failure
+  **/
+-int iavf_lock_timeout(struct mutex *lock, unsigned int msecs)
++static int iavf_lock_timeout(struct mutex *lock, unsigned int msecs)
+ {
+ 	unsigned int wait, delay = 10;
+ 
+@@ -270,12 +309,14 @@ int iavf_lock_timeout(struct mutex *lock, unsigned int msecs)
+ /**
+  * iavf_schedule_reset - Set the flags and schedule a reset event
+  * @adapter: board private structure
++ * @flags: IAVF_FLAG_RESET_PENDING or IAVF_FLAG_RESET_NEEDED
+  **/
+-void iavf_schedule_reset(struct iavf_adapter *adapter)
++void iavf_schedule_reset(struct iavf_adapter *adapter, u64 flags)
+ {
+-	if (!(adapter->flags &
+-	      (IAVF_FLAG_RESET_PENDING | IAVF_FLAG_RESET_NEEDED))) {
+-		adapter->flags |= IAVF_FLAG_RESET_NEEDED;
++	if (!test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section) &&
++	    !(adapter->flags &
++	    (IAVF_FLAG_RESET_PENDING | IAVF_FLAG_RESET_NEEDED))) {
++		adapter->flags |= flags;
+ 		queue_work(adapter->wq, &adapter->reset_task);
+ 	}
+ }
+@@ -303,7 +344,7 @@ static void iavf_tx_timeout(struct net_device *netdev, unsigned int txqueue)
+ 	struct iavf_adapter *adapter = netdev_priv(netdev);
+ 
+ 	adapter->tx_timeout_count++;
+-	iavf_schedule_reset(adapter);
++	iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);
+ }
+ 
+ /**
+@@ -362,7 +403,7 @@ static void iavf_irq_disable(struct iavf_adapter *adapter)
+  * iavf_irq_enable_queues - Enable interrupt for all queues
+  * @adapter: board private structure
+  **/
+-void iavf_irq_enable_queues(struct iavf_adapter *adapter)
++static void iavf_irq_enable_queues(struct iavf_adapter *adapter)
+ {
+ 	struct iavf_hw *hw = &adapter->hw;
+ 	int i;
+@@ -1003,8 +1044,8 @@ struct iavf_mac_filter *iavf_add_filter(struct iavf_adapter *adapter,
+  *
+  * Do not call this with mac_vlan_list_lock!
+  **/
+-int iavf_replace_primary_mac(struct iavf_adapter *adapter,
+-			     const u8 *new_mac)
++static int iavf_replace_primary_mac(struct iavf_adapter *adapter,
++				    const u8 *new_mac)
+ {
+ 	struct iavf_hw *hw = &adapter->hw;
+ 	struct iavf_mac_filter *f;
+@@ -1663,10 +1704,10 @@ static int iavf_set_interrupt_capability(struct iavf_adapter *adapter)
+ 		adapter->msix_entries[vector].entry = vector;
+ 
+ 	err = iavf_acquire_msix_vectors(adapter, v_budget);
++	if (!err)
++		iavf_schedule_finish_config(adapter);
+ 
+ out:
+-	netif_set_real_num_rx_queues(adapter->netdev, pairs);
+-	netif_set_real_num_tx_queues(adapter->netdev, pairs);
+ 	return err;
+ }
+ 
+@@ -1840,19 +1881,16 @@ static int iavf_alloc_q_vectors(struct iavf_adapter *adapter)
+ static void iavf_free_q_vectors(struct iavf_adapter *adapter)
+ {
+ 	int q_idx, num_q_vectors;
+-	int napi_vectors;
+ 
+ 	if (!adapter->q_vectors)
+ 		return;
+ 
+ 	num_q_vectors = adapter->num_msix_vectors - NONQ_VECS;
+-	napi_vectors = adapter->num_active_queues;
+ 
+ 	for (q_idx = 0; q_idx < num_q_vectors; q_idx++) {
+ 		struct iavf_q_vector *q_vector = &adapter->q_vectors[q_idx];
+ 
+-		if (q_idx < napi_vectors)
+-			netif_napi_del(&q_vector->napi);
++		netif_napi_del(&q_vector->napi);
+ 	}
+ 	kfree(adapter->q_vectors);
+ 	adapter->q_vectors = NULL;
+@@ -1863,7 +1901,7 @@ static void iavf_free_q_vectors(struct iavf_adapter *adapter)
+  * @adapter: board private structure
+  *
+  **/
+-void iavf_reset_interrupt_capability(struct iavf_adapter *adapter)
++static void iavf_reset_interrupt_capability(struct iavf_adapter *adapter)
+ {
+ 	if (!adapter->msix_entries)
+ 		return;
+@@ -1878,7 +1916,7 @@ void iavf_reset_interrupt_capability(struct iavf_adapter *adapter)
+  * @adapter: board private structure to initialize
+  *
+  **/
+-int iavf_init_interrupt_scheme(struct iavf_adapter *adapter)
++static int iavf_init_interrupt_scheme(struct iavf_adapter *adapter)
+ {
+ 	int err;
+ 
+@@ -1889,9 +1927,7 @@ int iavf_init_interrupt_scheme(struct iavf_adapter *adapter)
+ 		goto err_alloc_queues;
+ 	}
+ 
+-	rtnl_lock();
+ 	err = iavf_set_interrupt_capability(adapter);
+-	rtnl_unlock();
+ 	if (err) {
+ 		dev_err(&adapter->pdev->dev,
+ 			"Unable to setup interrupt capabilities\n");
+@@ -1944,15 +1980,16 @@ static void iavf_free_rss(struct iavf_adapter *adapter)
+ /**
+  * iavf_reinit_interrupt_scheme - Reallocate queues and vectors
+  * @adapter: board private structure
++ * @running: true if adapter->state == __IAVF_RUNNING
+  *
+  * Returns 0 on success, negative on failure
+  **/
+-static int iavf_reinit_interrupt_scheme(struct iavf_adapter *adapter)
++static int iavf_reinit_interrupt_scheme(struct iavf_adapter *adapter, bool running)
+ {
+ 	struct net_device *netdev = adapter->netdev;
+ 	int err;
+ 
+-	if (netif_running(netdev))
++	if (running)
+ 		iavf_free_traffic_irqs(adapter);
+ 	iavf_free_misc_irq(adapter);
+ 	iavf_reset_interrupt_capability(adapter);
+@@ -1976,6 +2013,78 @@ err:
+ 	return err;
+ }
+ 
++/**
++ * iavf_finish_config - do all netdev work that needs RTNL
++ * @work: our work_struct
++ *
++ * Do work that needs both RTNL and crit_lock.
++ **/
++static void iavf_finish_config(struct work_struct *work)
++{
++	struct iavf_adapter *adapter;
++	int pairs, err;
++
++	adapter = container_of(work, struct iavf_adapter, finish_config);
++
++	/* Always take RTNL first to prevent circular lock dependency */
++	rtnl_lock();
++	mutex_lock(&adapter->crit_lock);
++
++	if ((adapter->flags & IAVF_FLAG_SETUP_NETDEV_FEATURES) &&
++	    adapter->netdev_registered &&
++	    !test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section)) {
++		netdev_update_features(adapter->netdev);
++		adapter->flags &= ~IAVF_FLAG_SETUP_NETDEV_FEATURES;
++	}
++
++	switch (adapter->state) {
++	case __IAVF_DOWN:
++		if (!adapter->netdev_registered) {
++			err = register_netdevice(adapter->netdev);
++			if (err) {
++				dev_err(&adapter->pdev->dev, "Unable to register netdev (%d)\n",
++					err);
++
++				/* go back and try again. */
++				iavf_free_rss(adapter);
++				iavf_free_misc_irq(adapter);
++				iavf_reset_interrupt_capability(adapter);
++				iavf_change_state(adapter,
++						  __IAVF_INIT_CONFIG_ADAPTER);
++				goto out;
++			}
++			adapter->netdev_registered = true;
++		}
++
++		/* Set the real number of queues when reset occurs while
++		 * state == __IAVF_DOWN
++		 */
++		fallthrough;
++	case __IAVF_RUNNING:
++		pairs = adapter->num_active_queues;
++		netif_set_real_num_rx_queues(adapter->netdev, pairs);
++		netif_set_real_num_tx_queues(adapter->netdev, pairs);
++		break;
++
++	default:
++		break;
++	}
++
++out:
++	mutex_unlock(&adapter->crit_lock);
++	rtnl_unlock();
++}
++
++/**
++ * iavf_schedule_finish_config - Set the flags and schedule a reset event
++ * @adapter: board private structure
++ **/
++void iavf_schedule_finish_config(struct iavf_adapter *adapter)
++{
++	if (!test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section))
++		queue_work(adapter->wq, &adapter->finish_config);
++}
++
+ /**
+  * iavf_process_aq_command - process aq_required flags
+  * and sends aq command
+@@ -2176,7 +2285,7 @@ static int iavf_process_aq_command(struct iavf_adapter *adapter)
+  * the watchdog if any changes are requested to expedite the request via
+  * virtchnl.
+  **/
+-void
++static void
+ iavf_set_vlan_offload_features(struct iavf_adapter *adapter,
+ 			       netdev_features_t prev_features,
+ 			       netdev_features_t features)
+@@ -2383,7 +2492,7 @@ int iavf_parse_vf_resource_msg(struct iavf_adapter *adapter)
+ 			adapter->vsi_res->num_queue_pairs);
+ 		adapter->flags |= IAVF_FLAG_REINIT_MSIX_NEEDED;
+ 		adapter->num_req_queues = adapter->vsi_res->num_queue_pairs;
+-		iavf_schedule_reset(adapter);
++		iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);
+ 
+ 		return -EAGAIN;
+ 	}
+@@ -2613,22 +2722,8 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter)
+ 
+ 	netif_carrier_off(netdev);
+ 	adapter->link_up = false;
+-
+-	/* set the semaphore to prevent any callbacks after device registration
+-	 * up to time when state of driver will be set to __IAVF_DOWN
+-	 */
+-	rtnl_lock();
+-	if (!adapter->netdev_registered) {
+-		err = register_netdevice(netdev);
+-		if (err) {
+-			rtnl_unlock();
+-			goto err_register;
+-		}
+-	}
+-
+-	adapter->netdev_registered = true;
+-
+ 	netif_tx_stop_all_queues(netdev);
++
+ 	if (CLIENT_ALLOWED(adapter)) {
+ 		err = iavf_lan_add_device(adapter);
+ 		if (err)
+@@ -2641,7 +2736,6 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter)
+ 
+ 	iavf_change_state(adapter, __IAVF_DOWN);
+ 	set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
+-	rtnl_unlock();
+ 
+ 	iavf_misc_irq_enable(adapter);
+ 	wake_up(&adapter->down_waitqueue);
+@@ -2661,10 +2755,11 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter)
+ 		/* request initial VLAN offload settings */
+ 		iavf_set_vlan_offload_features(adapter, 0, netdev->features);
+ 
++	iavf_schedule_finish_config(adapter);
+ 	return;
++
+ err_mem:
+ 	iavf_free_rss(adapter);
+-err_register:
+ 	iavf_free_misc_irq(adapter);
+ err_sw_init:
+ 	iavf_reset_interrupt_capability(adapter);
+@@ -2691,26 +2786,9 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		goto restart_watchdog;
+ 	}
+ 
+-	if ((adapter->flags & IAVF_FLAG_SETUP_NETDEV_FEATURES) &&
+-	    adapter->netdev_registered &&
+-	    !test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section) &&
+-	    rtnl_trylock()) {
+-		netdev_update_features(adapter->netdev);
+-		rtnl_unlock();
+-		adapter->flags &= ~IAVF_FLAG_SETUP_NETDEV_FEATURES;
+-	}
+-
+ 	if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)
+ 		iavf_change_state(adapter, __IAVF_COMM_FAILED);
+ 
+-	if (adapter->flags & IAVF_FLAG_RESET_NEEDED) {
+-		adapter->aq_required = 0;
+-		adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+-		mutex_unlock(&adapter->crit_lock);
+-		queue_work(adapter->wq, &adapter->reset_task);
+-		return;
+-	}
+-
+ 	switch (adapter->state) {
+ 	case __IAVF_STARTUP:
+ 		iavf_startup(adapter);
+@@ -2838,11 +2916,10 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 	/* check for hw reset */
+ 	reg_val = rd32(hw, IAVF_VF_ARQLEN1) & IAVF_VF_ARQLEN1_ARQENABLE_MASK;
+ 	if (!reg_val) {
+-		adapter->flags |= IAVF_FLAG_RESET_PENDING;
+ 		adapter->aq_required = 0;
+ 		adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+ 		dev_err(&adapter->pdev->dev, "Hardware reset detected\n");
+-		queue_work(adapter->wq, &adapter->reset_task);
++		iavf_schedule_reset(adapter, IAVF_FLAG_RESET_PENDING);
+ 		mutex_unlock(&adapter->crit_lock);
+ 		queue_delayed_work(adapter->wq,
+ 				   &adapter->watchdog_task, HZ * 2);
+@@ -3068,7 +3145,7 @@ continue_reset:
+ 
+ 	if ((adapter->flags & IAVF_FLAG_REINIT_MSIX_NEEDED) ||
+ 	    (adapter->flags & IAVF_FLAG_REINIT_ITR_NEEDED)) {
+-		err = iavf_reinit_interrupt_scheme(adapter);
++		err = iavf_reinit_interrupt_scheme(adapter, running);
+ 		if (err)
+ 			goto reset_err;
+ 	}
+@@ -3163,6 +3240,7 @@ continue_reset:
+ 
+ 	adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
+ 
++	wake_up(&adapter->reset_waitqueue);
+ 	mutex_unlock(&adapter->client_lock);
+ 	mutex_unlock(&adapter->crit_lock);
+ 
+@@ -3239,9 +3317,7 @@ static void iavf_adminq_task(struct work_struct *work)
+ 	} while (pending);
+ 	mutex_unlock(&adapter->crit_lock);
+ 
+-	if ((adapter->flags &
+-	     (IAVF_FLAG_RESET_PENDING | IAVF_FLAG_RESET_NEEDED)) ||
+-	    adapter->state == __IAVF_RESETTING)
++	if (iavf_is_reset_in_progress(adapter))
+ 		goto freedom;
+ 
+ 	/* check for error indications */
+@@ -4327,6 +4403,7 @@ static int iavf_close(struct net_device *netdev)
+ static int iavf_change_mtu(struct net_device *netdev, int new_mtu)
+ {
+ 	struct iavf_adapter *adapter = netdev_priv(netdev);
++	int ret = 0;
+ 
+ 	netdev_dbg(netdev, "changing MTU from %d to %d\n",
+ 		   netdev->mtu, new_mtu);
+@@ -4337,11 +4414,15 @@ static int iavf_change_mtu(struct net_device *netdev, int new_mtu)
+ 	}
+ 
+ 	if (netif_running(netdev)) {
+-		adapter->flags |= IAVF_FLAG_RESET_NEEDED;
+-		queue_work(adapter->wq, &adapter->reset_task);
++		iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);
++		ret = iavf_wait_for_reset(adapter);
++		if (ret < 0)
++			netdev_warn(netdev, "MTU change interrupted waiting for reset");
++		else if (ret)
++			netdev_warn(netdev, "MTU change timed out waiting for reset");
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ #define NETIF_VLAN_OFFLOAD_FEATURES	(NETIF_F_HW_VLAN_CTAG_RX | \
+@@ -4934,6 +5015,7 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	INIT_WORK(&adapter->reset_task, iavf_reset_task);
+ 	INIT_WORK(&adapter->adminq_task, iavf_adminq_task);
++	INIT_WORK(&adapter->finish_config, iavf_finish_config);
+ 	INIT_DELAYED_WORK(&adapter->watchdog_task, iavf_watchdog_task);
+ 	INIT_DELAYED_WORK(&adapter->client_task, iavf_client_task);
+ 	queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+@@ -4942,6 +5024,9 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Setup the wait queue for indicating transition to down status */
+ 	init_waitqueue_head(&adapter->down_waitqueue);
+ 
++	/* Setup the wait queue for indicating transition to running state */
++	init_waitqueue_head(&adapter->reset_waitqueue);
++
+ 	/* Setup the wait queue for indicating virtchannel events */
+ 	init_waitqueue_head(&adapter->vc_waitqueue);
+ 
+@@ -5073,13 +5158,15 @@ static void iavf_remove(struct pci_dev *pdev)
+ 		usleep_range(500, 1000);
+ 	}
+ 	cancel_delayed_work_sync(&adapter->watchdog_task);
++	cancel_work_sync(&adapter->finish_config);
+ 
++	rtnl_lock();
+ 	if (adapter->netdev_registered) {
+-		rtnl_lock();
+ 		unregister_netdevice(netdev);
+ 		adapter->netdev_registered = false;
+-		rtnl_unlock();
+ 	}
++	rtnl_unlock();
++
+ 	if (CLIENT_ALLOWED(adapter)) {
+ 		err = iavf_lan_del_device(adapter);
+ 		if (err)
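
The iavf changes above move netdev registration, feature updates and queue-count
fixups into a dedicated finish_config work item so that RTNL is always taken
before the driver's crit_lock. A minimal sketch of that deferred-work pattern,
assuming hypothetical names (my_adapter, my_finish_config); it is not the
driver's actual structure layout:

#include <linux/container_of.h>
#include <linux/mutex.h>
#include <linux/rtnetlink.h>
#include <linux/workqueue.h>

struct my_adapter {
	struct mutex crit_lock;
	struct work_struct finish_config;
};

static void my_finish_config(struct work_struct *work)
{
	struct my_adapter *ad = container_of(work, struct my_adapter,
					     finish_config);

	rtnl_lock();			/* always RTNL first ... */
	mutex_lock(&ad->crit_lock);	/* ... then the driver lock */

	/* register_netdevice(), netdev_update_features(), queue counts */

	mutex_unlock(&ad->crit_lock);
	rtnl_unlock();
}

/* paths that already hold crit_lock never take RTNL; they only queue: */
static void my_schedule_finish_config(struct my_adapter *ad)
{
	queue_work(system_wq, &ad->finish_config);
}
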
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+index e989feda133c1..8c5f6096b0022 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+@@ -54,7 +54,7 @@ static void iavf_unmap_and_free_tx_resource(struct iavf_ring *ring,
+  * iavf_clean_tx_ring - Free any empty Tx buffers
+  * @tx_ring: ring to be cleaned
+  **/
+-void iavf_clean_tx_ring(struct iavf_ring *tx_ring)
++static void iavf_clean_tx_ring(struct iavf_ring *tx_ring)
+ {
+ 	unsigned long bi_size;
+ 	u16 i;
+@@ -110,7 +110,7 @@ void iavf_free_tx_resources(struct iavf_ring *tx_ring)
+  * Since there is no access to the ring head register
+  * in XL710, we need to use our local copies
+  **/
+-u32 iavf_get_tx_pending(struct iavf_ring *ring, bool in_sw)
++static u32 iavf_get_tx_pending(struct iavf_ring *ring, bool in_sw)
+ {
+ 	u32 head, tail;
+ 
+@@ -127,6 +127,24 @@ u32 iavf_get_tx_pending(struct iavf_ring *ring, bool in_sw)
+ 	return 0;
+ }
+ 
++/**
++ * iavf_force_wb - Issue SW Interrupt so HW does a wb
++ * @vsi: the VSI we care about
++ * @q_vector: the vector on which to force writeback
++ **/
++static void iavf_force_wb(struct iavf_vsi *vsi, struct iavf_q_vector *q_vector)
++{
++	u32 val = IAVF_VFINT_DYN_CTLN1_INTENA_MASK |
++		  IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK | /* set noitr */
++		  IAVF_VFINT_DYN_CTLN1_SWINT_TRIG_MASK |
++		  IAVF_VFINT_DYN_CTLN1_SW_ITR_INDX_ENA_MASK
++		  /* allow 00 to be written to the index */;
++
++	wr32(&vsi->back->hw,
++	     IAVF_VFINT_DYN_CTLN1(q_vector->reg_idx),
++	     val);
++}
++
+ /**
+  * iavf_detect_recover_hung - Function to detect and recover hung_queues
+  * @vsi:  pointer to vsi struct with tx queues
+@@ -352,25 +370,6 @@ static void iavf_enable_wb_on_itr(struct iavf_vsi *vsi,
+ 	q_vector->arm_wb_state = true;
+ }
+ 
+-/**
+- * iavf_force_wb - Issue SW Interrupt so HW does a wb
+- * @vsi: the VSI we care about
+- * @q_vector: the vector  on which to force writeback
+- *
+- **/
+-void iavf_force_wb(struct iavf_vsi *vsi, struct iavf_q_vector *q_vector)
+-{
+-	u32 val = IAVF_VFINT_DYN_CTLN1_INTENA_MASK |
+-		  IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK | /* set noitr */
+-		  IAVF_VFINT_DYN_CTLN1_SWINT_TRIG_MASK |
+-		  IAVF_VFINT_DYN_CTLN1_SW_ITR_INDX_ENA_MASK
+-		  /* allow 00 to be written to the index */;
+-
+-	wr32(&vsi->back->hw,
+-	     IAVF_VFINT_DYN_CTLN1(q_vector->reg_idx),
+-	     val);
+-}
+-
+ static inline bool iavf_container_is_rx(struct iavf_q_vector *q_vector,
+ 					struct iavf_ring_container *rc)
+ {
+@@ -687,7 +686,7 @@ err:
+  * iavf_clean_rx_ring - Free Rx buffers
+  * @rx_ring: ring to be cleaned
+  **/
+-void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
++static void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
+ {
+ 	unsigned long bi_size;
+ 	u16 i;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
+index 2624bf6d009e3..7e6ee32d19b69 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h
++++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
+@@ -442,15 +442,11 @@ static inline unsigned int iavf_rx_pg_order(struct iavf_ring *ring)
+ 
+ bool iavf_alloc_rx_buffers(struct iavf_ring *rxr, u16 cleaned_count);
+ netdev_tx_t iavf_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
+-void iavf_clean_tx_ring(struct iavf_ring *tx_ring);
+-void iavf_clean_rx_ring(struct iavf_ring *rx_ring);
+ int iavf_setup_tx_descriptors(struct iavf_ring *tx_ring);
+ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring);
+ void iavf_free_tx_resources(struct iavf_ring *tx_ring);
+ void iavf_free_rx_resources(struct iavf_ring *rx_ring);
+ int iavf_napi_poll(struct napi_struct *napi, int budget);
+-void iavf_force_wb(struct iavf_vsi *vsi, struct iavf_q_vector *q_vector);
+-u32 iavf_get_tx_pending(struct iavf_ring *ring, bool in_sw);
+ void iavf_detect_recover_hung(struct iavf_vsi *vsi);
+ int __iavf_maybe_stop_tx(struct iavf_ring *tx_ring, int size);
+ bool __iavf_chk_linearize(struct sk_buff *skb);
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+index 7c0578b5457b9..be3c007ce90a9 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+@@ -1961,9 +1961,8 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
+ 		case VIRTCHNL_EVENT_RESET_IMPENDING:
+ 			dev_info(&adapter->pdev->dev, "Reset indication received from the PF\n");
+ 			if (!(adapter->flags & IAVF_FLAG_RESET_PENDING)) {
+-				adapter->flags |= IAVF_FLAG_RESET_PENDING;
+ 				dev_info(&adapter->pdev->dev, "Scheduling reset task\n");
+-				queue_work(adapter->wq, &adapter->reset_task);
++				iavf_schedule_reset(adapter, IAVF_FLAG_RESET_PENDING);
+ 			}
+ 			break;
+ 		default:
+@@ -2237,6 +2236,7 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
+ 
+ 		iavf_process_config(adapter);
+ 		adapter->flags |= IAVF_FLAG_SETUP_NETDEV_FEATURES;
++		iavf_schedule_finish_config(adapter);
+ 
+ 		iavf_set_queue_vlan_tag_loc(adapter);
+ 
+@@ -2285,6 +2285,7 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
+ 	case VIRTCHNL_OP_ENABLE_QUEUES:
+ 		/* enable transmits */
+ 		iavf_irq_enable(adapter, true);
++		wake_up(&adapter->reset_waitqueue);
+ 		adapter->flags &= ~IAVF_FLAG_QUEUES_DISABLED;
+ 		break;
+ 	case VIRTCHNL_OP_DISABLE_QUEUES:
+diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
+index 1911d644dfa8d..619cb07a40691 100644
+--- a/drivers/net/ethernet/intel/ice/ice_base.c
++++ b/drivers/net/ethernet/intel/ice/ice_base.c
+@@ -758,6 +758,8 @@ void ice_vsi_free_q_vectors(struct ice_vsi *vsi)
+ 
+ 	ice_for_each_q_vector(vsi, v_idx)
+ 		ice_free_q_vector(vsi, v_idx);
++
++	vsi->num_q_vectors = 0;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index f86e814354a31..ec4138e684bd2 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -2920,8 +2920,13 @@ ice_get_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring,
+ 
+ 	ring->rx_max_pending = ICE_MAX_NUM_DESC;
+ 	ring->tx_max_pending = ICE_MAX_NUM_DESC;
+-	ring->rx_pending = vsi->rx_rings[0]->count;
+-	ring->tx_pending = vsi->tx_rings[0]->count;
++	if (vsi->tx_rings && vsi->rx_rings) {
++		ring->rx_pending = vsi->rx_rings[0]->count;
++		ring->tx_pending = vsi->tx_rings[0]->count;
++	} else {
++		ring->rx_pending = 0;
++		ring->tx_pending = 0;
++	}
+ 
+ 	/* Rx mini and jumbo rings are not supported */
+ 	ring->rx_mini_max_pending = 0;
+@@ -2955,6 +2960,10 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring,
+ 		return -EINVAL;
+ 	}
+ 
++	/* Return if there are no rings (device is reloading) */
++	if (!vsi->tx_rings || !vsi->rx_rings)
++		return -EBUSY;
++
+ 	new_tx_cnt = ALIGN(ring->tx_pending, ICE_REQ_DESC_MULTIPLE);
+ 	if (new_tx_cnt != ring->tx_pending)
+ 		netdev_info(netdev, "Requested Tx descriptor count rounded up to %d\n",
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 11ae0e41f518a..284a1f0bfdb54 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -3272,39 +3272,12 @@ int ice_vsi_release(struct ice_vsi *vsi)
+ 		return -ENODEV;
+ 	pf = vsi->back;
+ 
+-	/* do not unregister while driver is in the reset recovery pending
+-	 * state. Since reset/rebuild happens through PF service task workqueue,
+-	 * it's not a good idea to unregister netdev that is associated to the
+-	 * PF that is running the work queue items currently. This is done to
+-	 * avoid check_flush_dependency() warning on this wq
+-	 */
+-	if (vsi->netdev && !ice_is_reset_in_progress(pf->state) &&
+-	    (test_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state))) {
+-		unregister_netdev(vsi->netdev);
+-		clear_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state);
+-	}
+-
+-	if (vsi->type == ICE_VSI_PF)
+-		ice_devlink_destroy_pf_port(pf);
+-
+ 	if (test_bit(ICE_FLAG_RSS_ENA, pf->flags))
+ 		ice_rss_clean(vsi);
+ 
+ 	ice_vsi_close(vsi);
+ 	ice_vsi_decfg(vsi);
+ 
+-	if (vsi->netdev) {
+-		if (test_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state)) {
+-			unregister_netdev(vsi->netdev);
+-			clear_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state);
+-		}
+-		if (test_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state)) {
+-			free_netdev(vsi->netdev);
+-			vsi->netdev = NULL;
+-			clear_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state);
+-		}
+-	}
+-
+ 	/* retain SW VSI data structure since it is needed to unregister and
+	 * free VSI netdev when PF is not in reset recovery pending state,
+ 	 * for ex: during rmmod.
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 1277e0a044ee4..fbe70458fda27 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -4655,9 +4655,9 @@ static int ice_start_eth(struct ice_vsi *vsi)
+ 	if (err)
+ 		return err;
+ 
+-	rtnl_lock();
+ 	err = ice_vsi_open(vsi);
+-	rtnl_unlock();
++	if (err)
++		ice_fltr_remove_all(vsi);
+ 
+ 	return err;
+ }
+@@ -5120,6 +5120,7 @@ int ice_load(struct ice_pf *pf)
+ 	params = ice_vsi_to_params(vsi);
+ 	params.flags = ICE_VSI_FLAG_INIT;
+ 
++	rtnl_lock();
+ 	err = ice_vsi_cfg(vsi, &params);
+ 	if (err)
+ 		goto err_vsi_cfg;
+@@ -5127,6 +5128,7 @@ int ice_load(struct ice_pf *pf)
+ 	err = ice_start_eth(ice_get_main_vsi(pf));
+ 	if (err)
+ 		goto err_start_eth;
++	rtnl_unlock();
+ 
+ 	err = ice_init_rdma(pf);
+ 	if (err)
+@@ -5141,9 +5143,11 @@ int ice_load(struct ice_pf *pf)
+ 
+ err_init_rdma:
+ 	ice_vsi_close(ice_get_main_vsi(pf));
++	rtnl_lock();
+ err_start_eth:
+ 	ice_vsi_decfg(ice_get_main_vsi(pf));
+ err_vsi_cfg:
++	rtnl_unlock();
+ 	ice_deinit_dev(pf);
+ 	return err;
+ }
+@@ -5156,8 +5160,10 @@ void ice_unload(struct ice_pf *pf)
+ {
+ 	ice_deinit_features(pf);
+ 	ice_deinit_rdma(pf);
++	rtnl_lock();
+ 	ice_stop_eth(ice_get_main_vsi(pf));
+ 	ice_vsi_decfg(ice_get_main_vsi(pf));
++	rtnl_unlock();
+ 	ice_deinit_dev(pf);
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index bb3db387d49cf..ba5e1d1320f67 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -9585,6 +9585,11 @@ static pci_ers_result_t igb_io_error_detected(struct pci_dev *pdev,
+ 	struct net_device *netdev = pci_get_drvdata(pdev);
+ 	struct igb_adapter *adapter = netdev_priv(netdev);
+ 
++	if (state == pci_channel_io_normal) {
++		dev_warn(&pdev->dev, "Non-correctable non-fatal error reported.\n");
++		return PCI_ERS_RESULT_CAN_RECOVER;
++	}
++
+ 	netif_device_detach(netdev);
+ 
+ 	if (state == pci_channel_io_perm_failure)
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 44aa4342cbbb5..496a4eb687b00 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -2417,6 +2417,8 @@ static int igc_xdp_xmit_back(struct igc_adapter *adapter, struct xdp_buff *xdp)
+ 	nq = txring_txq(ring);
+ 
+ 	__netif_tx_lock(nq, cpu);
++	/* Avoid transmit queue timeout since we share it with the slow path */
++	txq_trans_cond_update(nq);
+ 	res = igc_xdp_init_tx_descriptor(ring, xdpf);
+ 	__netif_tx_unlock(nq);
+ 	return res;
+@@ -2824,15 +2826,18 @@ static void igc_xdp_xmit_zc(struct igc_ring *ring)
+ 	struct netdev_queue *nq = txring_txq(ring);
+ 	union igc_adv_tx_desc *tx_desc = NULL;
+ 	int cpu = smp_processor_id();
+-	u16 ntu = ring->next_to_use;
+ 	struct xdp_desc xdp_desc;
+-	u16 budget;
++	u16 budget, ntu;
+ 
+ 	if (!netif_carrier_ok(ring->netdev))
+ 		return;
+ 
+ 	__netif_tx_lock(nq, cpu);
+ 
++	/* Avoid transmit queue timeout since we share it with the slow path */
++	txq_trans_cond_update(nq);
++
++	ntu = ring->next_to_use;
+ 	budget = igc_desc_unused(ring);
+ 
+ 	while (xsk_tx_peek_desc(pool, &xdp_desc) && budget--) {
+@@ -6385,6 +6390,9 @@ static int igc_xdp_xmit(struct net_device *dev, int num_frames,
+ 
+ 	__netif_tx_lock(nq, cpu);
+ 
++	/* Avoid transmit queue timeout since we share it with the slow path */
++	txq_trans_cond_update(nq);
++
+ 	drops = 0;
+ 	for (i = 0; i < num_frames; i++) {
+ 		int err;
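
The igc hunks above call txq_trans_cond_update() on every XDP transmit path
that shares a queue with the regular stack. That refreshes the queue's
trans_start timestamp so the netdev watchdog does not misread a long run of
XDP-only traffic as a hung queue. The shape of the pattern, as a hedged
sketch (everything except the kernel helpers is illustrative):

#include <linux/netdevice.h>

static void my_xdp_xmit(struct netdev_queue *nq, int cpu)
{
	__netif_tx_lock(nq, cpu);
	/* queue is shared with the slow path: keep the watchdog timestamp
	 * fresh before posting descriptors */
	txq_trans_cond_update(nq);

	/* ... place frames on the hardware ring here ... */

	__netif_tx_unlock(nq);
}
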
+diff --git a/drivers/net/ethernet/litex/litex_liteeth.c b/drivers/net/ethernet/litex/litex_liteeth.c
+index 35f24e0f09349..ffa96059079c6 100644
+--- a/drivers/net/ethernet/litex/litex_liteeth.c
++++ b/drivers/net/ethernet/litex/litex_liteeth.c
+@@ -78,8 +78,7 @@ static int liteeth_rx(struct net_device *netdev)
+ 	memcpy_fromio(data, priv->rx_base + rx_slot * priv->slot_size, len);
+ 	skb->protocol = eth_type_trans(skb, netdev);
+ 
+-	netdev->stats.rx_packets++;
+-	netdev->stats.rx_bytes += len;
++	dev_sw_netstats_rx_add(netdev, len);
+ 
+ 	return netif_rx(skb);
+ 
+@@ -185,8 +184,7 @@ static netdev_tx_t liteeth_start_xmit(struct sk_buff *skb,
+ 	litex_write16(priv->base + LITEETH_READER_LENGTH, skb->len);
+ 	litex_write8(priv->base + LITEETH_READER_START, 1);
+ 
+-	netdev->stats.tx_bytes += skb->len;
+-	netdev->stats.tx_packets++;
++	dev_sw_netstats_tx_add(netdev, 1, skb->len);
+ 
+ 	priv->tx_slot = (priv->tx_slot + 1) % priv->num_tx_slots;
+ 	dev_kfree_skb_any(skb);
+@@ -194,9 +192,17 @@ static netdev_tx_t liteeth_start_xmit(struct sk_buff *skb,
+ 	return NETDEV_TX_OK;
+ }
+ 
++static void
++liteeth_get_stats64(struct net_device *netdev, struct rtnl_link_stats64 *stats)
++{
++	netdev_stats_to_stats64(stats, &netdev->stats);
++	dev_fetch_sw_netstats(stats, netdev->tstats);
++}
++
+ static const struct net_device_ops liteeth_netdev_ops = {
+ 	.ndo_open		= liteeth_open,
+ 	.ndo_stop		= liteeth_stop,
++	.ndo_get_stats64	= liteeth_get_stats64,
+ 	.ndo_start_xmit         = liteeth_start_xmit,
+ };
+ 
+@@ -242,6 +248,11 @@ static int liteeth_probe(struct platform_device *pdev)
+ 	priv->netdev = netdev;
+ 	priv->dev = &pdev->dev;
+ 
++	netdev->tstats = devm_netdev_alloc_pcpu_stats(&pdev->dev,
++						      struct pcpu_sw_netstats);
++	if (!netdev->tstats)
++		return -ENOMEM;
++
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0)
+ 		return irq;
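
liteeth above switches from the shared netdev->stats counters to per-CPU
software stats, the preferred pattern for soft MACs because hot-path updates
stay lock-free and per-CPU. Roughly, with placeholder driver names:

#include <linux/netdevice.h>

static int my_alloc_stats(struct net_device *ndev, struct device *dev)
{
	/* per-CPU counters, freed automatically with the device */
	ndev->tstats = devm_netdev_alloc_pcpu_stats(dev,
						    struct pcpu_sw_netstats);
	return ndev->tstats ? 0 : -ENOMEM;
}

static void my_rx_done(struct net_device *ndev, unsigned int len)
{
	dev_sw_netstats_rx_add(ndev, len);	/* local CPU, no locking */
}

/* .ndo_get_stats64 folds all CPUs together only when userspace asks */
static void my_get_stats64(struct net_device *ndev,
			   struct rtnl_link_stats64 *stats)
{
	netdev_stats_to_stats64(stats, &ndev->stats);
	dev_fetch_sw_netstats(stats, ndev->tstats);
}
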
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index 18284ad751572..384d26bee9b23 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -1452,8 +1452,9 @@ static int otx2_init_hw_resources(struct otx2_nic *pf)
+ 	if (err)
+ 		goto err_free_npa_lf;
+ 
+-	/* Enable backpressure */
+-	otx2_nix_config_bp(pf, true);
++	/* Enable backpressure for CGX mapped PF/VFs */
++	if (!is_otx2_lbkvf(pf->pdev))
++		otx2_nix_config_bp(pf, true);
+ 
+ 	/* Init Auras and pools used by NIX RQ, for free buffer ptrs */
+ 	err = otx2_rq_aura_pool_init(pf);
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 834c644b67db5..2d15342c260ae 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -3846,23 +3846,6 @@ static int mtk_hw_deinit(struct mtk_eth *eth)
+ 	return 0;
+ }
+ 
+-static int __init mtk_init(struct net_device *dev)
+-{
+-	struct mtk_mac *mac = netdev_priv(dev);
+-	struct mtk_eth *eth = mac->hw;
+-	int ret;
+-
+-	ret = of_get_ethdev_address(mac->of_node, dev);
+-	if (ret) {
+-		/* If the mac address is invalid, use random mac address */
+-		eth_hw_addr_random(dev);
+-		dev_err(eth->dev, "generated random MAC address %pM\n",
+-			dev->dev_addr);
+-	}
+-
+-	return 0;
+-}
+-
+ static void mtk_uninit(struct net_device *dev)
+ {
+ 	struct mtk_mac *mac = netdev_priv(dev);
+@@ -4278,7 +4261,6 @@ static const struct ethtool_ops mtk_ethtool_ops = {
+ };
+ 
+ static const struct net_device_ops mtk_netdev_ops = {
+-	.ndo_init		= mtk_init,
+ 	.ndo_uninit		= mtk_uninit,
+ 	.ndo_open		= mtk_open,
+ 	.ndo_stop		= mtk_stop,
+@@ -4340,6 +4322,17 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
+ 	mac->hw = eth;
+ 	mac->of_node = np;
+ 
++	err = of_get_ethdev_address(mac->of_node, eth->netdev[id]);
++	if (err == -EPROBE_DEFER)
++		return err;
++
++	if (err) {
++		/* If the mac address is invalid, use random mac address */
++		eth_hw_addr_random(eth->netdev[id]);
++		dev_err(eth->dev, "generated random MAC address %pM\n",
++			eth->netdev[id]->dev_addr);
++	}
++
+ 	memset(mac->hwlro_ip, 0, sizeof(mac->hwlro_ip));
+ 	mac->hwlro_ip_cnt = 0;
+ 
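
The mtk_eth_soc change reads the MAC address at probe time (mtk_add_mac())
rather than from .ndo_init, because of_get_ethdev_address() may return
-EPROBE_DEFER when the address lives in an NVMEM cell whose provider has not
probed yet, and only probe can be deferred; .ndo_init cannot. Condensed, with
a placeholder function name:

#include <linux/etherdevice.h>
#include <linux/of_net.h>

static int my_set_mac(struct device_node *np, struct net_device *ndev)
{
	int err = of_get_ethdev_address(np, ndev);

	if (err == -EPROBE_DEFER)	/* NVMEM provider not ready: retry */
		return err;
	if (err)			/* missing or invalid: randomize */
		eth_hw_addr_random(ndev);
	return 0;
}
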
+diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_debugfs.c b/drivers/net/ethernet/mediatek/mtk_ppe_debugfs.c
+index 316fe2e70fead..1a97feca77f23 100644
+--- a/drivers/net/ethernet/mediatek/mtk_ppe_debugfs.c
++++ b/drivers/net/ethernet/mediatek/mtk_ppe_debugfs.c
+@@ -98,7 +98,7 @@ mtk_ppe_debugfs_foe_show(struct seq_file *m, void *private, bool bind)
+ 
+ 		acct = mtk_foe_entry_get_mib(ppe, i, NULL);
+ 
+-		type = FIELD_GET(MTK_FOE_IB1_PACKET_TYPE, entry->ib1);
++		type = mtk_get_ib1_pkt_type(ppe->eth, entry->ib1);
+ 		seq_printf(m, "%05x %s %7s", i,
+ 			   mtk_foe_entry_state_str(state),
+ 			   mtk_foe_pkt_type_str(type));
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 4b19803a7dd01..b69122686407d 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -2747,6 +2747,13 @@ static void rtl_hw_aspm_clkreq_enable(struct rtl8169_private *tp, bool enable)
+ 		return;
+ 
+ 	if (enable) {
++		/* On these chip versions ASPM can even harm
++		 * bus communication of other PCI devices.
++		 */
++		if (tp->mac_version == RTL_GIGA_MAC_VER_42 ||
++		    tp->mac_version == RTL_GIGA_MAC_VER_43)
++			return;
++
+ 		rtl_mod_config5(tp, 0, ASPM_en);
+ 		rtl_mod_config2(tp, 0, ClkReqEn);
+ 
+@@ -4514,10 +4521,6 @@ static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance)
+ 	}
+ 
+ 	if (napi_schedule_prep(&tp->napi)) {
+-		rtl_unlock_config_regs(tp);
+-		rtl_hw_aspm_clkreq_enable(tp, false);
+-		rtl_lock_config_regs(tp);
+-
+ 		rtl_irq_disable(tp);
+ 		__napi_schedule(&tp->napi);
+ 	}
+@@ -4577,14 +4580,9 @@ static int rtl8169_poll(struct napi_struct *napi, int budget)
+ 
+ 	work_done = rtl_rx(dev, tp, budget);
+ 
+-	if (work_done < budget && napi_complete_done(napi, work_done)) {
++	if (work_done < budget && napi_complete_done(napi, work_done))
+ 		rtl_irq_enable(tp);
+ 
+-		rtl_unlock_config_regs(tp);
+-		rtl_hw_aspm_clkreq_enable(tp, true);
+-		rtl_lock_config_regs(tp);
+-	}
+-
+ 	return work_done;
+ }
+ 
+diff --git a/drivers/net/ethernet/ti/cpsw_ale.c b/drivers/net/ethernet/ti/cpsw_ale.c
+index 0c5e783e574c4..64bf22cd860c9 100644
+--- a/drivers/net/ethernet/ti/cpsw_ale.c
++++ b/drivers/net/ethernet/ti/cpsw_ale.c
+@@ -106,23 +106,37 @@ struct cpsw_ale_dev_id {
+ 
+ static inline int cpsw_ale_get_field(u32 *ale_entry, u32 start, u32 bits)
+ {
+-	int idx;
++	int idx, idx2;
++	u32 hi_val = 0;
+ 
+ 	idx    = start / 32;
++	idx2 = (start + bits - 1) / 32;
++	/* Check if bits to be fetched exceed a word */
++	if (idx != idx2) {
++		idx2 = 2 - idx2; /* flip */
++		hi_val = ale_entry[idx2] << ((idx2 * 32) - start);
++	}
+ 	start -= idx * 32;
+ 	idx    = 2 - idx; /* flip */
+-	return (ale_entry[idx] >> start) & BITMASK(bits);
++	return (hi_val + (ale_entry[idx] >> start)) & BITMASK(bits);
+ }
+ 
+ static inline void cpsw_ale_set_field(u32 *ale_entry, u32 start, u32 bits,
+ 				      u32 value)
+ {
+-	int idx;
++	int idx, idx2;
+ 
+ 	value &= BITMASK(bits);
+-	idx    = start / 32;
++	idx = start / 32;
++	idx2 = (start + bits - 1) / 32;
++	/* Check if bits to be set exceed a word */
++	if (idx != idx2) {
++		idx2 = 2 - idx2; /* flip */
++		ale_entry[idx2] &= ~(BITMASK(bits + start - (idx2 * 32)));
++		ale_entry[idx2] |= (value >> ((idx2 * 32) - start));
++	}
+ 	start -= idx * 32;
+-	idx    = 2 - idx; /* flip */
++	idx = 2 - idx; /* flip */
+ 	ale_entry[idx] &= ~(BITMASK(bits) << start);
+ 	ale_entry[idx] |=  (value << start);
+ }
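
The cpsw_ale fix above teaches the field accessors to handle fields that
straddle a 32-bit word of the word-flipped ALE entry: the low word supplies
the least-significant bits and the adjacent word the rest, shifted into place.
The arithmetic can be exercised in plain userspace C; the three-word layout
and helper names below are illustrative, not the driver's API:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define BITMASK(bits) ((1u << (bits)) - 1)	/* assumes bits < 32 */

/* 96-bit entry, word 0 most significant (flipped order, as in the ALE) */
static uint32_t get_field(const uint32_t *e, uint32_t start, uint32_t bits)
{
	uint32_t lo = start / 32, hi = (start + bits - 1) / 32;
	uint32_t val = e[2 - lo] >> (start % 32);

	if (hi != lo)	/* field straddles a word boundary */
		val |= e[2 - hi] << (32 - start % 32);
	return val & BITMASK(bits);
}

static void set_field(uint32_t *e, uint32_t start, uint32_t bits, uint32_t v)
{
	uint32_t lo = start / 32, hi = (start + bits - 1) / 32;

	v &= BITMASK(bits);
	e[2 - lo] &= ~(BITMASK(bits) << (start % 32));
	e[2 - lo] |= v << (start % 32);
	if (hi != lo) {
		uint32_t done = 32 - start % 32; /* bits kept in low word */

		e[2 - hi] &= ~BITMASK(bits - done);
		e[2 - hi] |= v >> done;
	}
}

int main(void)
{
	uint32_t entry[3] = { 0 };

	set_field(entry, 30, 4, 0xB);		/* spans entry bits 30..33 */
	assert(get_field(entry, 30, 4) == 0xB);
	printf("%08x %08x %08x\n", entry[0], entry[1], entry[2]);
	return 0;
}
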
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 53598210be6cb..2c4e6de8f4d9f 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -3452,23 +3452,30 @@ static int __init phy_init(void)
+ {
+ 	int rc;
+ 
++	ethtool_set_ethtool_phy_ops(&phy_ethtool_phy_ops);
++
+ 	rc = mdio_bus_init();
+ 	if (rc)
+-		return rc;
++		goto err_ethtool_phy_ops;
+ 
+-	ethtool_set_ethtool_phy_ops(&phy_ethtool_phy_ops);
+ 	features_init();
+ 
+ 	rc = phy_driver_register(&genphy_c45_driver, THIS_MODULE);
+ 	if (rc)
+-		goto err_c45;
++		goto err_mdio_bus;
+ 
+ 	rc = phy_driver_register(&genphy_driver, THIS_MODULE);
+-	if (rc) {
+-		phy_driver_unregister(&genphy_c45_driver);
++	if (rc)
++		goto err_c45;
++
++	return 0;
++
+ err_c45:
+-		mdio_bus_exit();
+-	}
++	phy_driver_unregister(&genphy_c45_driver);
++err_mdio_bus:
++	mdio_bus_exit();
++err_ethtool_phy_ops:
++	ethtool_set_ethtool_phy_ops(NULL);
+ 
+ 	return rc;
+ }
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index bdb3a76a352e4..6043e63b42f97 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -664,7 +664,7 @@ static int vrf_finish_output6(struct net *net, struct sock *sk,
+ 	skb->protocol = htons(ETH_P_IPV6);
+ 	skb->dev = dev;
+ 
+-	rcu_read_lock_bh();
++	rcu_read_lock();
+ 	nexthop = rt6_nexthop((struct rt6_info *)dst, &ipv6_hdr(skb)->daddr);
+ 	neigh = __ipv6_neigh_lookup_noref(dst->dev, nexthop);
+ 	if (unlikely(!neigh))
+@@ -672,10 +672,10 @@ static int vrf_finish_output6(struct net *net, struct sock *sk,
+ 	if (!IS_ERR(neigh)) {
+ 		sock_confirm_neigh(skb, neigh);
+ 		ret = neigh_output(neigh, skb, false);
+-		rcu_read_unlock_bh();
++		rcu_read_unlock();
+ 		return ret;
+ 	}
+-	rcu_read_unlock_bh();
++	rcu_read_unlock();
+ 
+ 	IP6_INC_STATS(dev_net(dst->dev),
+ 		      ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES);
+@@ -889,7 +889,7 @@ static int vrf_finish_output(struct net *net, struct sock *sk, struct sk_buff *s
+ 		}
+ 	}
+ 
+-	rcu_read_lock_bh();
++	rcu_read_lock();
+ 
+ 	neigh = ip_neigh_for_gw(rt, skb, &is_v6gw);
+ 	if (!IS_ERR(neigh)) {
+@@ -898,11 +898,11 @@ static int vrf_finish_output(struct net *net, struct sock *sk, struct sk_buff *s
+ 		sock_confirm_neigh(skb, neigh);
+ 		/* if crossing protocols, can not use the cached header */
+ 		ret = neigh_output(neigh, skb, is_v6gw);
+-		rcu_read_unlock_bh();
++		rcu_read_unlock();
+ 		return ret;
+ 	}
+ 
+-	rcu_read_unlock_bh();
++	rcu_read_unlock();
+ 	vrf_tx_error(skb->dev, skb);
+ 	return -EINVAL;
+ }
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index 9de23c11e18bb..8ab1a62351b98 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -962,7 +962,8 @@ int ath11k_core_check_dt(struct ath11k_base *ab)
+ }
+ 
+ static int __ath11k_core_create_board_name(struct ath11k_base *ab, char *name,
+-					   size_t name_len, bool with_variant)
++					   size_t name_len, bool with_variant,
++					   bool bus_type_mode)
+ {
+ 	/* strlen(',variant=') + strlen(ab->qmi.target.bdf_ext) */
+ 	char variant[9 + ATH11K_QMI_BDF_EXT_STR_LENGTH] = { 0 };
+@@ -973,15 +974,20 @@ static int __ath11k_core_create_board_name(struct ath11k_base *ab, char *name,
+ 
+ 	switch (ab->id.bdf_search) {
+ 	case ATH11K_BDF_SEARCH_BUS_AND_BOARD:
+-		scnprintf(name, name_len,
+-			  "bus=%s,vendor=%04x,device=%04x,subsystem-vendor=%04x,subsystem-device=%04x,qmi-chip-id=%d,qmi-board-id=%d%s",
+-			  ath11k_bus_str(ab->hif.bus),
+-			  ab->id.vendor, ab->id.device,
+-			  ab->id.subsystem_vendor,
+-			  ab->id.subsystem_device,
+-			  ab->qmi.target.chip_id,
+-			  ab->qmi.target.board_id,
+-			  variant);
++		if (bus_type_mode)
++			scnprintf(name, name_len,
++				  "bus=%s",
++				  ath11k_bus_str(ab->hif.bus));
++		else
++			scnprintf(name, name_len,
++				  "bus=%s,vendor=%04x,device=%04x,subsystem-vendor=%04x,subsystem-device=%04x,qmi-chip-id=%d,qmi-board-id=%d%s",
++				  ath11k_bus_str(ab->hif.bus),
++				  ab->id.vendor, ab->id.device,
++				  ab->id.subsystem_vendor,
++				  ab->id.subsystem_device,
++				  ab->qmi.target.chip_id,
++				  ab->qmi.target.board_id,
++				  variant);
+ 		break;
+ 	default:
+ 		scnprintf(name, name_len,
+@@ -1000,13 +1006,19 @@ static int __ath11k_core_create_board_name(struct ath11k_base *ab, char *name,
+ static int ath11k_core_create_board_name(struct ath11k_base *ab, char *name,
+ 					 size_t name_len)
+ {
+-	return __ath11k_core_create_board_name(ab, name, name_len, true);
++	return __ath11k_core_create_board_name(ab, name, name_len, true, false);
+ }
+ 
+ static int ath11k_core_create_fallback_board_name(struct ath11k_base *ab, char *name,
+ 						  size_t name_len)
+ {
+-	return __ath11k_core_create_board_name(ab, name, name_len, false);
++	return __ath11k_core_create_board_name(ab, name, name_len, false, false);
++}
++
++static int ath11k_core_create_bus_type_board_name(struct ath11k_base *ab, char *name,
++						  size_t name_len)
++{
++	return __ath11k_core_create_board_name(ab, name, name_len, false, true);
+ }
+ 
+ const struct firmware *ath11k_core_firmware_request(struct ath11k_base *ab,
+@@ -1310,7 +1322,7 @@ success:
+ 
+ int ath11k_core_fetch_regdb(struct ath11k_base *ab, struct ath11k_board_data *bd)
+ {
+-	char boardname[BOARD_NAME_SIZE];
++	char boardname[BOARD_NAME_SIZE], default_boardname[BOARD_NAME_SIZE];
+ 	int ret;
+ 
+ 	ret = ath11k_core_create_board_name(ab, boardname, BOARD_NAME_SIZE);
+@@ -1327,6 +1339,21 @@ int ath11k_core_fetch_regdb(struct ath11k_base *ab, struct ath11k_board_data *bd
+ 	if (!ret)
+ 		goto exit;
+ 
++	ret = ath11k_core_create_bus_type_board_name(ab, default_boardname,
++						     BOARD_NAME_SIZE);
++	if (ret) {
++		ath11k_dbg(ab, ATH11K_DBG_BOOT,
++			   "failed to create default board name for regdb: %d", ret);
++		goto exit;
++	}
++
++	ret = ath11k_core_fetch_board_data_api_n(ab, bd, default_boardname,
++						 ATH11K_BD_IE_REGDB,
++						 ATH11K_BD_IE_REGDB_NAME,
++						 ATH11K_BD_IE_REGDB_DATA);
++	if (!ret)
++		goto exit;
++
+ 	ret = ath11k_core_fetch_board_data_api_1(ab, bd, ATH11K_REGDB_FILE_NAME);
+ 	if (ret)
+ 		ath11k_dbg(ab, ATH11K_DBG_BOOT, "failed to fetch %s from %s\n",
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 1c93f1afccc57..01ff197b017f7 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -8892,7 +8892,7 @@ static int ath11k_mac_setup_channels_rates(struct ath11k *ar,
+ 	}
+ 
+ 	if (supported_bands & WMI_HOST_WLAN_5G_CAP) {
+-		if (reg_cap->high_5ghz_chan >= ATH11K_MAX_6G_FREQ) {
++		if (reg_cap->high_5ghz_chan >= ATH11K_MIN_6G_FREQ) {
+ 			channels = kmemdup(ath11k_6ghz_channels,
+ 					   sizeof(ath11k_6ghz_channels), GFP_KERNEL);
+ 			if (!channels) {
+@@ -9468,6 +9468,7 @@ void ath11k_mac_destroy(struct ath11k_base *ab)
+ 		if (!ar)
+ 			continue;
+ 
++		ath11k_fw_stats_free(&ar->fw_stats);
+ 		ieee80211_free_hw(ar->hw);
+ 		pdev->ar = NULL;
+ 	}
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index d0b59bc2905a9..42d9b29623a47 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -8103,6 +8103,11 @@ complete:
+ 	rcu_read_unlock();
+ 	spin_unlock_bh(&ar->data_lock);
+ 
++	/* Since the stats' pdev, vdev and beacon lists are spliced and
++	 * reinitialised at this point, there is no need to free the
++	 * individual lists.
++	 */
++	return;
++
+ free:
+ 	ath11k_fw_stats_free(&stats);
+ }
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index ee792822b4113..58acfe8fdf8c0 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -4425,6 +4425,7 @@ static int ath12k_mac_mgmt_tx_wmi(struct ath12k *ar, struct ath12k_vif *arvif,
+ 	int buf_id;
+ 	int ret;
+ 
++	ATH12K_SKB_CB(skb)->ar = ar;
+ 	spin_lock_bh(&ar->txmgmt_idr_lock);
+ 	buf_id = idr_alloc(&ar->txmgmt_idr, skb, 0,
+ 			   ATH12K_TX_MGMT_NUM_PENDING_MAX, GFP_ATOMIC);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c b/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c
+index 8853821b37168..1e659bd07392a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2022 Intel Corporation
++ * Copyright (C) 2022 - 2023 Intel Corporation
+  */
+ #include <linux/kernel.h>
+ #include <net/mac80211.h>
+@@ -179,9 +179,14 @@ int iwl_mvm_sec_key_add(struct iwl_mvm *mvm,
+ 		.u.add.key_flags = cpu_to_le32(key_flags),
+ 		.u.add.tx_seq = cpu_to_le64(atomic64_read(&keyconf->tx_pn)),
+ 	};
++	int max_key_len = sizeof(cmd.u.add.key);
+ 	int ret;
+ 
+-	if (WARN_ON(keyconf->keylen > sizeof(cmd.u.add.key)))
++	if (keyconf->cipher == WLAN_CIPHER_SUITE_WEP40 ||
++	    keyconf->cipher == WLAN_CIPHER_SUITE_WEP104)
++		max_key_len -= IWL_SEC_WEP_KEY_OFFSET;
++
++	if (WARN_ON(keyconf->keylen > max_key_len))
+ 		return -EINVAL;
+ 
+ 	if (WARN_ON(!sta_mask))
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/power.c b/drivers/net/wireless/intel/iwlwifi/mvm/power.c
+index ac1dae52556f8..19839cc44eb3d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/power.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/power.c
+@@ -647,30 +647,32 @@ static void iwl_mvm_power_set_pm(struct iwl_mvm *mvm,
+ 		return;
+ 
+ 	/* enable PM on bss if bss stand alone */
+-	if (vifs->bss_active && !vifs->p2p_active && !vifs->ap_active) {
++	if (bss_mvmvif && vifs->bss_active && !vifs->p2p_active &&
++	    !vifs->ap_active) {
+ 		bss_mvmvif->pm_enabled = true;
+ 		return;
+ 	}
+ 
+ 	/* enable PM on p2p if p2p stand alone */
+-	if (vifs->p2p_active && !vifs->bss_active && !vifs->ap_active) {
++	if (p2p_mvmvif && vifs->p2p_active && !vifs->bss_active &&
++	    !vifs->ap_active) {
+ 		p2p_mvmvif->pm_enabled = true;
+ 		return;
+ 	}
+ 
+-	if (vifs->bss_active && vifs->p2p_active)
++	if (p2p_mvmvif && bss_mvmvif && vifs->bss_active && vifs->p2p_active)
+ 		client_same_channel =
+ 			iwl_mvm_have_links_same_channel(bss_mvmvif, p2p_mvmvif);
+ 
+-	if (vifs->bss_active && vifs->ap_active)
++	if (bss_mvmvif && ap_mvmvif && vifs->bss_active && vifs->ap_active)
+ 		ap_same_channel =
+ 			iwl_mvm_have_links_same_channel(bss_mvmvif, ap_mvmvif);
+ 
+ 	/* clients are not stand alone: enable PM if DCM */
+ 	if (!(client_same_channel || ap_same_channel)) {
+-		if (vifs->bss_active)
++		if (bss_mvmvif && vifs->bss_active)
+ 			bss_mvmvif->pm_enabled = true;
+-		if (vifs->p2p_active)
++		if (p2p_mvmvif && vifs->p2p_active)
+ 			p2p_mvmvif->pm_enabled = true;
+ 		return;
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index b85e363544f8b..7f9a809dd081c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -2884,7 +2884,7 @@ int iwl_mvm_sta_rx_agg(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 	}
+ 
+ 	if (iwl_mvm_has_new_rx_api(mvm) && start) {
+-		u16 reorder_buf_size = buf_size * sizeof(baid_data->entries[0]);
++		u32 reorder_buf_size = buf_size * sizeof(baid_data->entries[0]);
+ 
+ 		/* sparse doesn't like the __align() so don't check */
+ #ifndef __CHECKER__
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 79115eb1c2852..e086664a4eaca 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -495,6 +495,7 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ 	{IWL_PCI_DEVICE(0x7AF0, PCI_ANY_ID, iwl_so_trans_cfg)},
+ 	{IWL_PCI_DEVICE(0x51F0, PCI_ANY_ID, iwl_so_long_latency_trans_cfg)},
+ 	{IWL_PCI_DEVICE(0x51F1, PCI_ANY_ID, iwl_so_long_latency_imr_trans_cfg)},
++	{IWL_PCI_DEVICE(0x51F1, PCI_ANY_ID, iwl_so_long_latency_trans_cfg)},
+ 	{IWL_PCI_DEVICE(0x54F0, PCI_ANY_ID, iwl_so_long_latency_trans_cfg)},
+ 	{IWL_PCI_DEVICE(0x7F70, PCI_ANY_ID, iwl_so_trans_cfg)},
+ 
+@@ -544,6 +545,7 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 	IWL_DEV_INFO(0x51F0, 0x1551, iwl9560_2ac_cfg_soc, iwl9560_killer_1550i_160_name),
+ 	IWL_DEV_INFO(0x51F0, 0x1691, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690s_name),
+ 	IWL_DEV_INFO(0x51F0, 0x1692, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690i_name),
++	IWL_DEV_INFO(0x51F1, 0x1692, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690i_name),
+ 	IWL_DEV_INFO(0x54F0, 0x1691, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690s_name),
+ 	IWL_DEV_INFO(0x54F0, 0x1692, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690i_name),
+ 	IWL_DEV_INFO(0x7A70, 0x1691, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690s_name),
+@@ -682,6 +684,8 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 	IWL_DEV_INFO(0x2726, 0x1672, iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_killer_1675i_name),
+ 	IWL_DEV_INFO(0x51F0, 0x1671, iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_killer_1675s_name),
+ 	IWL_DEV_INFO(0x51F0, 0x1672, iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_killer_1675i_name),
++	IWL_DEV_INFO(0x51F1, 0x1671, iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_killer_1675s_name),
++	IWL_DEV_INFO(0x51F1, 0x1672, iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_killer_1675i_name),
+ 	IWL_DEV_INFO(0x54F0, 0x1671, iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_killer_1675s_name),
+ 	IWL_DEV_INFO(0x54F0, 0x1672, iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_killer_1675i_name),
+ 	IWL_DEV_INFO(0x7A70, 0x1671, iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_killer_1675s_name),
+diff --git a/drivers/net/wireless/realtek/rtw88/sdio.c b/drivers/net/wireless/realtek/rtw88/sdio.c
+index 06fce7c3addaa..2c1fb2dabd40a 100644
+--- a/drivers/net/wireless/realtek/rtw88/sdio.c
++++ b/drivers/net/wireless/realtek/rtw88/sdio.c
+@@ -998,9 +998,9 @@ static void rtw_sdio_rxfifo_recv(struct rtw_dev *rtwdev, u32 rx_len)
+ 
+ static void rtw_sdio_rx_isr(struct rtw_dev *rtwdev)
+ {
+-	u32 rx_len, total_rx_bytes = 0;
++	u32 rx_len, hisr, total_rx_bytes = 0;
+ 
+-	while (total_rx_bytes < SZ_64K) {
++	do {
+ 		if (rtw_chip_wcpu_11n(rtwdev))
+ 			rx_len = rtw_read16(rtwdev, REG_SDIO_RX0_REQ_LEN);
+ 		else
+@@ -1012,7 +1012,25 @@ static void rtw_sdio_rx_isr(struct rtw_dev *rtwdev)
+ 		rtw_sdio_rxfifo_recv(rtwdev, rx_len);
+ 
+ 		total_rx_bytes += rx_len;
+-	}
++
++		if (rtw_chip_wcpu_11n(rtwdev)) {
++			/* Stop if no more RX requests are pending, even if
++			 * rx_len could be greater than zero in the next
++			 * iteration. This is needed because the RX buffer may
++			 * already contain data that HW or FW has not finished
++			 * writing yet; reading the buffer anyway can yield
++			 * packets whose rtw_rx_pkt_stat.pkt_len is zero or
++			 * points beyond the end of the buffer.
++			 */
++			hisr = rtw_read32(rtwdev, REG_SDIO_HISR);
++		} else {
++			/* RTW_WCPU_11AC chips have improved hardware or
++			 * firmware and can use rx_len unconditionally.
++			 */
++			hisr = REG_SDIO_HISR_RX_REQUEST;
++		}
++	} while (total_rx_bytes < SZ_64K && hisr & REG_SDIO_HISR_RX_REQUEST);
+ }
+ 
+ static void rtw_sdio_handle_interrupt(struct sdio_func *sdio_func)
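
The rtw88 SDIO change above turns the RX loop into a do/while that, on 11n
chips, re-reads HISR and continues only while RX_REQUEST is still asserted:
a nonzero length alone does not prove the buffer is finished. Its control
flow, reduced to a self-contained sketch with stub register accessors:

#include <stdint.h>

#define SZ_64K		(64 * 1024)
#define RX_REQUEST	(1u << 0)

/* stubs standing in for the SDIO register reads and the FIFO drain */
static uint32_t read_rx0_req_len(void) { return 0; }
static uint32_t read_hisr(void) { return 0; }
static void recv_from_fifo(uint32_t len) { (void)len; }

static void drain_rx(void)
{
	uint32_t total = 0, hisr;

	do {
		uint32_t len = read_rx0_req_len();

		if (!len)
			break;
		recv_from_fifo(len);
		total += len;
		/* trust the interrupt status, not just a nonzero length:
		 * the buffer may still be in the middle of being filled */
		hisr = read_hisr();
	} while (total < SZ_64K && (hisr & RX_REQUEST));
}

int main(void) { drain_rx(); return 0; }
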
+diff --git a/drivers/net/wireless/virtual/mac80211_hwsim.c b/drivers/net/wireless/virtual/mac80211_hwsim.c
+index 89c7a1420381d..ed5af63025979 100644
+--- a/drivers/net/wireless/virtual/mac80211_hwsim.c
++++ b/drivers/net/wireless/virtual/mac80211_hwsim.c
+@@ -4,7 +4,7 @@
+  * Copyright (c) 2008, Jouni Malinen <j@w1.fi>
+  * Copyright (c) 2011, Javier Lopez <jlopex@gmail.com>
+  * Copyright (c) 2016 - 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018 - 2022 Intel Corporation
++ * Copyright (C) 2018 - 2023 Intel Corporation
+  */
+ 
+ /*
+@@ -1864,7 +1864,7 @@ mac80211_hwsim_select_tx_link(struct mac80211_hwsim_data *data,
+ 
+ 	WARN_ON(is_multicast_ether_addr(hdr->addr1));
+ 
+-	if (WARN_ON_ONCE(!sta->valid_links))
++	if (WARN_ON_ONCE(!sta || !sta->valid_links))
+ 		return &vif->bss_conf;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(vif->link_conf); i++) {
+diff --git a/drivers/of/platform.c b/drivers/of/platform.c
+index 78ae841874490..e46482cef9c7d 100644
+--- a/drivers/of/platform.c
++++ b/drivers/of/platform.c
+@@ -553,7 +553,7 @@ static int __init of_platform_default_populate_init(void)
+ 			if (!of_get_property(node, "linux,opened", NULL) ||
+ 			    !of_get_property(node, "linux,boot-display", NULL))
+ 				continue;
+-			dev = of_platform_device_create(node, "of-display.0", NULL);
++			dev = of_platform_device_create(node, "of-display", NULL);
+ 			of_node_put(node);
+ 			if (WARN_ON(!dev))
+ 				return -ENOMEM;
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzg2l.c b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+index 9511d920565e9..b53d26167da52 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzg2l.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+@@ -249,6 +249,7 @@ static int rzg2l_map_add_config(struct pinctrl_map *map,
+ 
+ static int rzg2l_dt_subnode_to_map(struct pinctrl_dev *pctldev,
+ 				   struct device_node *np,
++				   struct device_node *parent,
+ 				   struct pinctrl_map **map,
+ 				   unsigned int *num_maps,
+ 				   unsigned int *index)
+@@ -266,6 +267,7 @@ static int rzg2l_dt_subnode_to_map(struct pinctrl_dev *pctldev,
+ 	struct property *prop;
+ 	int ret, gsel, fsel;
+ 	const char **pin_fn;
++	const char *name;
+ 	const char *pin;
+ 
+ 	pinmux = of_find_property(np, "pinmux", NULL);
+@@ -349,8 +351,19 @@ static int rzg2l_dt_subnode_to_map(struct pinctrl_dev *pctldev,
+ 		psel_val[i] = MUX_FUNC(value);
+ 	}
+ 
++	if (parent) {
++		name = devm_kasprintf(pctrl->dev, GFP_KERNEL, "%pOFn.%pOFn",
++				      parent, np);
++		if (!name) {
++			ret = -ENOMEM;
++			goto done;
++		}
++	} else {
++		name = np->name;
++	}
++
+ 	/* Register a single pin group listing all the pins we read from DT */
+-	gsel = pinctrl_generic_add_group(pctldev, np->name, pins, num_pinmux, NULL);
++	gsel = pinctrl_generic_add_group(pctldev, name, pins, num_pinmux, NULL);
+ 	if (gsel < 0) {
+ 		ret = gsel;
+ 		goto done;
+@@ -360,17 +373,16 @@ static int rzg2l_dt_subnode_to_map(struct pinctrl_dev *pctldev,
+ 	 * Register a single group function where the 'data' is an array PSEL
+ 	 * register values read from DT.
+ 	 */
+-	pin_fn[0] = np->name;
+-	fsel = pinmux_generic_add_function(pctldev, np->name, pin_fn, 1,
+-					   psel_val);
++	pin_fn[0] = name;
++	fsel = pinmux_generic_add_function(pctldev, name, pin_fn, 1, psel_val);
+ 	if (fsel < 0) {
+ 		ret = fsel;
+ 		goto remove_group;
+ 	}
+ 
+ 	maps[idx].type = PIN_MAP_TYPE_MUX_GROUP;
+-	maps[idx].data.mux.group = np->name;
+-	maps[idx].data.mux.function = np->name;
++	maps[idx].data.mux.group = name;
++	maps[idx].data.mux.function = name;
+ 	idx++;
+ 
+ 	dev_dbg(pctrl->dev, "Parsed %pOF with %d pins\n", np, num_pinmux);
+@@ -417,7 +429,7 @@ static int rzg2l_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	index = 0;
+ 
+ 	for_each_child_of_node(np, child) {
+-		ret = rzg2l_dt_subnode_to_map(pctldev, child, map,
++		ret = rzg2l_dt_subnode_to_map(pctldev, child, np, map,
+ 					      num_maps, &index);
+ 		if (ret < 0) {
+ 			of_node_put(child);
+@@ -426,7 +438,7 @@ static int rzg2l_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	}
+ 
+ 	if (*num_maps == 0) {
+-		ret = rzg2l_dt_subnode_to_map(pctldev, np, map,
++		ret = rzg2l_dt_subnode_to_map(pctldev, np, NULL, map,
+ 					      num_maps, &index);
+ 		if (ret < 0)
+ 			goto done;
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzv2m.c b/drivers/pinctrl/renesas/pinctrl-rzv2m.c
+index e5472293bc7fb..35b23c1a5684d 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzv2m.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzv2m.c
+@@ -209,6 +209,7 @@ static int rzv2m_map_add_config(struct pinctrl_map *map,
+ 
+ static int rzv2m_dt_subnode_to_map(struct pinctrl_dev *pctldev,
+ 				   struct device_node *np,
++				   struct device_node *parent,
+ 				   struct pinctrl_map **map,
+ 				   unsigned int *num_maps,
+ 				   unsigned int *index)
+@@ -226,6 +227,7 @@ static int rzv2m_dt_subnode_to_map(struct pinctrl_dev *pctldev,
+ 	struct property *prop;
+ 	int ret, gsel, fsel;
+ 	const char **pin_fn;
++	const char *name;
+ 	const char *pin;
+ 
+ 	pinmux = of_find_property(np, "pinmux", NULL);
+@@ -309,8 +311,19 @@ static int rzv2m_dt_subnode_to_map(struct pinctrl_dev *pctldev,
+ 		psel_val[i] = MUX_FUNC(value);
+ 	}
+ 
++	if (parent) {
++		name = devm_kasprintf(pctrl->dev, GFP_KERNEL, "%pOFn.%pOFn",
++				      parent, np);
++		if (!name) {
++			ret = -ENOMEM;
++			goto done;
++		}
++	} else {
++		name = np->name;
++	}
++
+ 	/* Register a single pin group listing all the pins we read from DT */
+-	gsel = pinctrl_generic_add_group(pctldev, np->name, pins, num_pinmux, NULL);
++	gsel = pinctrl_generic_add_group(pctldev, name, pins, num_pinmux, NULL);
+ 	if (gsel < 0) {
+ 		ret = gsel;
+ 		goto done;
+@@ -320,17 +333,16 @@ static int rzv2m_dt_subnode_to_map(struct pinctrl_dev *pctldev,
+ 	 * Register a single group function where the 'data' is an array PSEL
+ 	 * register values read from DT.
+ 	 */
+-	pin_fn[0] = np->name;
+-	fsel = pinmux_generic_add_function(pctldev, np->name, pin_fn, 1,
+-					   psel_val);
++	pin_fn[0] = name;
++	fsel = pinmux_generic_add_function(pctldev, name, pin_fn, 1, psel_val);
+ 	if (fsel < 0) {
+ 		ret = fsel;
+ 		goto remove_group;
+ 	}
+ 
+ 	maps[idx].type = PIN_MAP_TYPE_MUX_GROUP;
+-	maps[idx].data.mux.group = np->name;
+-	maps[idx].data.mux.function = np->name;
++	maps[idx].data.mux.group = name;
++	maps[idx].data.mux.function = name;
+ 	idx++;
+ 
+ 	dev_dbg(pctrl->dev, "Parsed %pOF with %d pins\n", np, num_pinmux);
+@@ -377,7 +389,7 @@ static int rzv2m_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	index = 0;
+ 
+ 	for_each_child_of_node(np, child) {
+-		ret = rzv2m_dt_subnode_to_map(pctldev, child, map,
++		ret = rzv2m_dt_subnode_to_map(pctldev, child, np, map,
+ 					      num_maps, &index);
+ 		if (ret < 0) {
+ 			of_node_put(child);
+@@ -386,7 +398,7 @@ static int rzv2m_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	}
+ 
+ 	if (*num_maps == 0) {
+-		ret = rzv2m_dt_subnode_to_map(pctldev, np, map,
++		ret = rzv2m_dt_subnode_to_map(pctldev, np, NULL, map,
+ 					      num_maps, &index);
+ 		if (ret < 0)
+ 			goto done;
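
Both Renesas pinctrl hunks above register each group and function under a
composed "parent.child" name instead of the bare subnode name: two nodes can
each have an identically named child (say, a "pins" subnode), and the generic
pinctrl core needs the names to be unique. The naming step alone, as a sketch:

#include <linux/device.h>
#include <linux/gfp.h>
#include <linux/of.h>

static const char *make_group_name(struct device *dev,
				   struct device_node *parent,
				   struct device_node *np)
{
	if (!parent)
		return np->name;
	/* "%pOFn" prints a device_node's name; may return NULL on OOM */
	return devm_kasprintf(dev, GFP_KERNEL, "%pOFn.%pOFn", parent, np);
}
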
+diff --git a/drivers/regulator/da9063-regulator.c b/drivers/regulator/da9063-regulator.c
+index c5dd77be558b6..dfd5ec9f75c90 100644
+--- a/drivers/regulator/da9063-regulator.c
++++ b/drivers/regulator/da9063-regulator.c
+@@ -778,6 +778,9 @@ static int da9063_check_xvp_constraints(struct regulator_config *config)
+ 	const struct notification_limit *uv_l = &constr->under_voltage_limits;
+ 	const struct notification_limit *ov_l = &constr->over_voltage_limits;
+ 
++	if (!config->init_data) /* No config in DT, pointers will be invalid */
++		return 0;
++
+ 	/* make sure that only one severity is used to clarify if unchanged, enabled or disabled */
+ 	if ((!!uv_l->prot + !!uv_l->err + !!uv_l->warn) > 1) {
+ 		dev_err(config->dev, "%s: at most one voltage monitoring severity allowed!\n",
+diff --git a/drivers/s390/crypto/zcrypt_msgtype6.c b/drivers/s390/crypto/zcrypt_msgtype6.c
+index 9eb5153737007..f7fb6836eaaa1 100644
+--- a/drivers/s390/crypto/zcrypt_msgtype6.c
++++ b/drivers/s390/crypto/zcrypt_msgtype6.c
+@@ -1111,23 +1111,36 @@ static long zcrypt_msgtype6_send_cprb(bool userspace, struct zcrypt_queue *zq,
+ 				      struct ica_xcRB *xcrb,
+ 				      struct ap_message *ap_msg)
+ {
+-	int rc;
+ 	struct response_type *rtype = ap_msg->private;
+ 	struct {
+ 		struct type6_hdr hdr;
+ 		struct CPRBX cprbx;
+ 		/* ... more data blocks ... */
+ 	} __packed * msg = ap_msg->msg;
+-
+-	/*
+-	 * Set the queue's reply buffer length minus 128 byte padding
+-	 * as reply limit for the card firmware.
+-	 */
+-	msg->hdr.fromcardlen1 = min_t(unsigned int, msg->hdr.fromcardlen1,
+-				      zq->reply.bufsize - 128);
+-	if (msg->hdr.fromcardlen2)
+-		msg->hdr.fromcardlen2 =
+-			zq->reply.bufsize - msg->hdr.fromcardlen1 - 128;
++	unsigned int max_payload_size;
++	int rc, delta;
++
++	/* calculate maximum payload for this card and msg type */
++	max_payload_size = zq->reply.bufsize - sizeof(struct type86_fmt2_msg);
++
++	/* limit each of the two from fields to the maximum payload size */
++	msg->hdr.fromcardlen1 = min(msg->hdr.fromcardlen1, max_payload_size);
++	msg->hdr.fromcardlen2 = min(msg->hdr.fromcardlen2, max_payload_size);
++
++	/* calculate delta if the sum of both exceeds max payload size */
++	delta = msg->hdr.fromcardlen1 + msg->hdr.fromcardlen2
++		- max_payload_size;
++	if (delta > 0) {
++		/*
++		 * Sum exceeds maximum payload size, prune fromcardlen1
++		 * (always trust fromcardlen2)
++		 */
++		if (delta > msg->hdr.fromcardlen1) {
++			rc = -EINVAL;
++			goto out;
++		}
++		msg->hdr.fromcardlen1 -= delta;
++	}
+ 
+ 	init_completion(&rtype->work);
+ 	rc = ap_queue_message(zq->queue, ap_msg);
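
The zcrypt change above bounds each of the two reply-length fields to the
usable reply buffer, then prunes fromcardlen1 if the sum still exceeds the
budget (fromcardlen2 is trusted). The arithmetic in isolation, runnable in
userspace with illustrative names:

#include <stdint.h>
#include <stdio.h>

/* Returns -1 if even pruning len1 cannot satisfy the limit (defensive:
 * unreachable once both fields were individually clamped to max). */
static int clamp_reply_lens(uint32_t *len1, uint32_t *len2, uint32_t max)
{
	int64_t delta;

	if (*len1 > max)
		*len1 = max;
	if (*len2 > max)
		*len2 = max;
	delta = (int64_t)*len1 + *len2 - max;
	if (delta > 0) {
		if (delta > *len1)
			return -1;
		*len1 -= (uint32_t)delta;
	}
	return 0;
}

int main(void)
{
	uint32_t a = 4000, b = 3000;

	if (!clamp_reply_lens(&a, &b, 4096))
		printf("len1=%u len2=%u sum=%u\n", a, b, a + b);
	return 0;
}
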
+diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
+index 037f8c98a6d36..3c766a9ade381 100644
+--- a/drivers/scsi/sg.c
++++ b/drivers/scsi/sg.c
+@@ -1496,6 +1496,11 @@ sg_add_device(struct device *cl_dev)
+ 	int error;
+ 	unsigned long iflags;
+ 
++	if (!blk_get_queue(scsidp->request_queue)) {
++		pr_warn("%s: get scsi_device queue failed\n", __func__);
++		return -ENODEV;
++	}
++
+ 	error = -ENOMEM;
+ 	cdev = cdev_alloc();
+ 	if (!cdev) {
+@@ -1553,6 +1558,7 @@ cdev_add_err:
+ out:
+ 	if (cdev)
+ 		cdev_del(cdev);
++	blk_put_queue(scsidp->request_queue);
+ 	return error;
+ }
+ 
+@@ -1560,6 +1566,7 @@ static void
+ sg_device_destroy(struct kref *kref)
+ {
+ 	struct sg_device *sdp = container_of(kref, struct sg_device, d_ref);
++	struct request_queue *q = sdp->device->request_queue;
+ 	unsigned long flags;
+ 
+ 	/* CAUTION!  Note that the device can still be found via idr_find()
+@@ -1567,6 +1574,9 @@ sg_device_destroy(struct kref *kref)
+ 	 * any other cleanup.
+ 	 */
+ 
++	blk_trace_remove(q);
++	blk_put_queue(q);
++
+ 	write_lock_irqsave(&sg_index_lock, flags);
+ 	idr_remove(&sg_index_idr, sdp->index);
+ 	write_unlock_irqrestore(&sg_index_lock, flags);
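
The sg fix above pins the SCSI request queue with blk_get_queue() for the
lifetime of the sg device and drops the reference only in the final kref
release, so the queue cannot be freed while an open sg file descriptor still
points at it. The bare pattern (sg specifics elided, names illustrative):

#include <linux/blkdev.h>

static int my_add(struct request_queue *q)
{
	if (!blk_get_queue(q))		/* queue already dying */
		return -ENODEV;
	/* ... create the char device that references q ... */
	return 0;
}

static void my_destroy(struct request_queue *q)
{
	/* last user in this driver: q may now go away */
	blk_put_queue(q);
}
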
+diff --git a/drivers/spi/spi-bcm63xx.c b/drivers/spi/spi-bcm63xx.c
+index 9aecb77c3d892..07b5b71b23520 100644
+--- a/drivers/spi/spi-bcm63xx.c
++++ b/drivers/spi/spi-bcm63xx.c
+@@ -126,7 +126,7 @@ enum bcm63xx_regs_spi {
+ 	SPI_MSG_DATA_SIZE,
+ };
+ 
+-#define BCM63XX_SPI_MAX_PREPEND		15
++#define BCM63XX_SPI_MAX_PREPEND		7
+ 
+ #define BCM63XX_SPI_MAX_CS		8
+ #define BCM63XX_SPI_BUS_NUM		0
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index 32449bef4415a..abf10f92415dc 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -40,6 +40,7 @@
+ #define CQSPI_SUPPORT_EXTERNAL_DMA	BIT(2)
+ #define CQSPI_NO_SUPPORT_WR_COMPLETION	BIT(3)
+ #define CQSPI_SLOW_SRAM		BIT(4)
++#define CQSPI_NEEDS_APB_AHB_HAZARD_WAR	BIT(5)
+ 
+ /* Capabilities */
+ #define CQSPI_SUPPORTS_OCTAL		BIT(0)
+@@ -90,6 +91,7 @@ struct cqspi_st {
+ 	u32			pd_dev_id;
+ 	bool			wr_completion;
+ 	bool			slow_sram;
++	bool			apb_ahb_hazard;
+ };
+ 
+ struct cqspi_driver_platdata {
+@@ -1027,6 +1029,13 @@ static int cqspi_indirect_write_execute(struct cqspi_flash_pdata *f_pdata,
+ 	if (cqspi->wr_delay)
+ 		ndelay(cqspi->wr_delay);
+ 
++	/*
++	 * If a hazard exists between the APB and AHB interfaces, perform a
++	 * dummy readback from the controller to ensure synchronization.
++	 */
++	if (cqspi->apb_ahb_hazard)
++		readl(reg_base + CQSPI_REG_INDIRECTWR);
++
+ 	while (remaining > 0) {
+ 		size_t write_words, mod_bytes;
+ 
+@@ -1754,6 +1763,8 @@ static int cqspi_probe(struct platform_device *pdev)
+ 			cqspi->wr_completion = false;
+ 		if (ddata->quirks & CQSPI_SLOW_SRAM)
+ 			cqspi->slow_sram = true;
++		if (ddata->quirks & CQSPI_NEEDS_APB_AHB_HAZARD_WAR)
++			cqspi->apb_ahb_hazard = true;
+ 
+ 		if (of_device_is_compatible(pdev->dev.of_node,
+ 					    "xlnx,versal-ospi-1.0")) {
+@@ -1888,6 +1899,10 @@ static const struct cqspi_driver_platdata jh7110_qspi = {
+ 	.quirks = CQSPI_DISABLE_DAC_MODE,
+ };
+ 
++static const struct cqspi_driver_platdata pensando_cdns_qspi = {
++	.quirks = CQSPI_NEEDS_APB_AHB_HAZARD_WAR | CQSPI_DISABLE_DAC_MODE,
++};
++
+ static const struct of_device_id cqspi_dt_ids[] = {
+ 	{
+ 		.compatible = "cdns,qspi-nor",
+@@ -1917,6 +1932,10 @@ static const struct of_device_id cqspi_dt_ids[] = {
+ 		.compatible = "starfive,jh7110-qspi",
+ 		.data = &jh7110_qspi,
+ 	},
++	{
++		.compatible = "amd,pensando-elba-qspi",
++		.data = &pensando_cdns_qspi,
++	},
+ 	{ /* end of table */ }
+ };
+ 
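
The Cadence QSPI quirk above issues a dummy readl() after kicking off an
indirect write: MMIO writes over APB can be posted, and reading any register
back from the same interface forces them to complete before the AHB data
phase starts. In miniature (the 0x70 offset is hypothetical):

#include <linux/io.h>
#include <linux/types.h>

static void start_indirect_write(void __iomem *base, bool apb_ahb_hazard)
{
	writel(0x1, base + 0x70);	  /* hypothetical START register */
	if (apb_ahb_hazard)
		(void)readl(base + 0x70); /* read back: flush posted write */
}
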
+diff --git a/drivers/spi/spi-dw-mmio.c b/drivers/spi/spi-dw-mmio.c
+index 15f5e9cb54ad4..a963bc96c223f 100644
+--- a/drivers/spi/spi-dw-mmio.c
++++ b/drivers/spi/spi-dw-mmio.c
+@@ -236,6 +236,24 @@ static int dw_spi_intel_init(struct platform_device *pdev,
+ 	return 0;
+ }
+ 
++/*
++ * DMA-based mem ops are not configured for this device and are not tested.
++ */
++static int dw_spi_mountevans_imc_init(struct platform_device *pdev,
++				      struct dw_spi_mmio *dwsmmio)
++{
++	/*
++	 * The Intel Mount Evans SoC's Integrated Management Complex DW
++	 * apb_ssi_v4.02a controller has an erratum where a full TX FIFO can
++	 * result in data corruption. The suggested workaround is to never
++	 * completely fill the FIFO. The TX FIFO has a size of 32, so the
++	 * fifo_len is set to 31.
++	 */
++	dwsmmio->dws.fifo_len = 31;
++
++	return 0;
++}
++
+ static int dw_spi_canaan_k210_init(struct platform_device *pdev,
+ 				   struct dw_spi_mmio *dwsmmio)
+ {
+@@ -405,6 +423,10 @@ static const struct of_device_id dw_spi_mmio_of_match[] = {
+ 	{ .compatible = "snps,dwc-ssi-1.01a", .data = dw_spi_hssi_init},
+ 	{ .compatible = "intel,keembay-ssi", .data = dw_spi_intel_init},
+ 	{ .compatible = "intel,thunderbay-ssi", .data = dw_spi_intel_init},
++	{
++		.compatible = "intel,mountevans-imc-ssi",
++		.data = dw_spi_mountevans_imc_init,
++	},
+ 	{ .compatible = "microchip,sparx5-spi", dw_spi_mscc_sparx5_init},
+ 	{ .compatible = "canaan,k210-spi", dw_spi_canaan_k210_init},
+ 	{ .compatible = "amd,pensando-elba-spi", .data = dw_spi_elba_init},
+diff --git a/drivers/spi/spi-s3c64xx.c b/drivers/spi/spi-s3c64xx.c
+index 7ac17f0d18a95..1a8b31e20baf2 100644
+--- a/drivers/spi/spi-s3c64xx.c
++++ b/drivers/spi/spi-s3c64xx.c
+@@ -668,6 +668,8 @@ static int s3c64xx_spi_config(struct s3c64xx_spi_driver_data *sdd)
+ 
+ 	if ((sdd->cur_mode & SPI_LOOP) && sdd->port_conf->has_loopback)
+ 		val |= S3C64XX_SPI_MODE_SELF_LOOPBACK;
++	else
++		val &= ~S3C64XX_SPI_MODE_SELF_LOOPBACK;
+ 
+ 	writel(val, regs + S3C64XX_SPI_MODE_CFG);
+ 
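
The loopback fix is a reminder that a read-modify-write configuration path must clear every optional bit it can set, or a bit left over from a previous transfer leaks into the next one. A small sketch of the pattern (the bit position is a placeholder):

    #include <stdint.h>

    #define MODE_SELF_LOOPBACK  (1u << 3)   /* placeholder bit position */

    /* Either set or explicitly clear the optional bit on every call,
     * so stale state from an earlier configuration cannot survive. */
    static uint32_t config_mode(uint32_t val, int want_loopback)
    {
        if (want_loopback)
            val |= MODE_SELF_LOOPBACK;
        else
            val &= ~MODE_SELF_LOOPBACK;
        return val;
    }
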
+diff --git a/drivers/video/fbdev/au1200fb.c b/drivers/video/fbdev/au1200fb.c
+index aed88ce45bf09..d8f085d4ede30 100644
+--- a/drivers/video/fbdev/au1200fb.c
++++ b/drivers/video/fbdev/au1200fb.c
+@@ -1732,6 +1732,9 @@ static int au1200fb_drv_probe(struct platform_device *dev)
+ 
+ 	/* Now hook interrupt too */
+ 	irq = platform_get_irq(dev, 0);
++	if (irq < 0)
++		return irq;
++
+ 	ret = request_irq(irq, au1200fb_handle_irq,
+ 			  IRQF_SHARED, "lcd", (void *)dev);
+ 	if (ret) {
+diff --git a/drivers/video/fbdev/imxfb.c b/drivers/video/fbdev/imxfb.c
+index adf36690c342b..c8b1c73412d36 100644
+--- a/drivers/video/fbdev/imxfb.c
++++ b/drivers/video/fbdev/imxfb.c
+@@ -613,10 +613,10 @@ static int imxfb_activate_var(struct fb_var_screeninfo *var, struct fb_info *inf
+ 	if (var->hsync_len < 1    || var->hsync_len > 64)
+ 		printk(KERN_ERR "%s: invalid hsync_len %d\n",
+ 			info->fix.id, var->hsync_len);
+-	if (var->left_margin > 255)
++	if (var->left_margin < 3  || var->left_margin > 255)
+ 		printk(KERN_ERR "%s: invalid left_margin %d\n",
+ 			info->fix.id, var->left_margin);
+-	if (var->right_margin > 255)
++	if (var->right_margin < 1 || var->right_margin > 255)
+ 		printk(KERN_ERR "%s: invalid right_margin %d\n",
+ 			info->fix.id, var->right_margin);
+ 	if (var->yres < 1 || var->yres > ymax_mask)
+@@ -1043,7 +1043,6 @@ failed_cmap:
+ failed_map:
+ failed_ioremap:
+ failed_getclock:
+-	release_mem_region(res->start, resource_size(res));
+ failed_of_parse:
+ 	kfree(info->pseudo_palette);
+ failed_init:
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index e2b3448476490..152b3ec911599 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -2084,6 +2084,7 @@ static int exclude_super_stripes(struct btrfs_block_group *cache)
+ 
+ 		/* Shouldn't have super stripes in sequential zones */
+ 		if (zoned && nr) {
++			kfree(logical);
+ 			btrfs_err(fs_info,
+ 			"zoned: block group %llu must not contain super block",
+ 				  cache->start);
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 4912d624ca3d3..886e661a218fc 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -417,9 +417,13 @@ static noinline int update_ref_for_cow(struct btrfs_trans_handle *trans,
+ 					       &refs, &flags);
+ 		if (ret)
+ 			return ret;
+-		if (refs == 0) {
+-			ret = -EROFS;
+-			btrfs_handle_fs_error(fs_info, ret, NULL);
++		if (unlikely(refs == 0)) {
++			btrfs_crit(fs_info,
++		"found 0 references for tree block at bytenr %llu level %d root %llu",
++				   buf->start, btrfs_header_level(buf),
++				   btrfs_root_id(root));
++			ret = -EUCLEAN;
++			btrfs_abort_transaction(trans, ret);
+ 			return ret;
+ 		}
+ 	} else {
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index fc59eb4024438..795b30913c542 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -2265,6 +2265,9 @@ static int btrfs_init_csum_hash(struct btrfs_fs_info *fs_info, u16 csum_type)
+ 		if (!strstr(crypto_shash_driver_name(csum_shash), "generic"))
+ 			set_bit(BTRFS_FS_CSUM_IMPL_FAST, &fs_info->flags);
+ 		break;
++	case BTRFS_CSUM_TYPE_XXHASH:
++		set_bit(BTRFS_FS_CSUM_IMPL_FAST, &fs_info->flags);
++		break;
+ 	default:
+ 		break;
+ 	}
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index e3ae55d8bae14..a37a6587efaf0 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -1592,38 +1592,7 @@ done:
+ 		set_page_writeback(page);
+ 		end_page_writeback(page);
+ 	}
+-	/*
+-	 * Here we used to have a check for PageError() and then set @ret and
+-	 * call end_extent_writepage().
+-	 *
+-	 * But in fact setting @ret here will cause different error paths
+-	 * between subpage and regular sectorsize.
+-	 *
+-	 * For regular page size, we never submit current page, but only add
+-	 * current page to current bio.
+-	 * The bio submission can only happen in next page.
+-	 * Thus if we hit the PageError() branch, @ret is already set to
+-	 * non-zero value and will not get updated for regular sectorsize.
+-	 *
+-	 * But for subpage case, it's possible we submit part of current page,
+-	 * thus can get PageError() set by submitted bio of the same page,
+-	 * while our @ret is still 0.
+-	 *
+-	 * So here we unify the behavior and don't set @ret.
+-	 * Error can still be properly passed to higher layer as page will
+-	 * be set error, here we just don't handle the IO failure.
+-	 *
+-	 * NOTE: This is just a hotfix for subpage.
+-	 * The root fix will be properly ending ordered extent when we hit
+-	 * an error during writeback.
+-	 *
+-	 * But that needs a bigger refactoring, as we not only need to grab the
+-	 * submitted OE, but also need to know exactly at which bytenr we hit
+-	 * the error.
+-	 * Currently the full page based __extent_writepage_io() is not
+-	 * capable of that.
+-	 */
+-	if (PageError(page))
++	if (ret)
+ 		end_extent_writepage(page, ret, page_start, page_end);
+ 	unlock_page(page);
+ 	ASSERT(ret <= 0);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 2e6eed4b1b3cc..c89071186388b 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -3546,11 +3546,14 @@ int btrfs_orphan_cleanup(struct btrfs_root *root)
+ 		found_key.type = BTRFS_INODE_ITEM_KEY;
+ 		found_key.offset = 0;
+ 		inode = btrfs_iget(fs_info->sb, last_objectid, root);
+-		ret = PTR_ERR_OR_ZERO(inode);
+-		if (ret && ret != -ENOENT)
+-			goto out;
++		if (IS_ERR(inode)) {
++			ret = PTR_ERR(inode);
++			inode = NULL;
++			if (ret != -ENOENT)
++				goto out;
++		}
+ 
+-		if (ret == -ENOENT && root == fs_info->tree_root) {
++		if (!inode && root == fs_info->tree_root) {
+ 			struct btrfs_root *dead_root;
+ 			int is_dead_root = 0;
+ 
+@@ -3611,17 +3614,17 @@ int btrfs_orphan_cleanup(struct btrfs_root *root)
+ 		 * deleted but wasn't. The inode number may have been reused,
+ 		 * but either way, we can delete the orphan item.
+ 		 */
+-		if (ret == -ENOENT || inode->i_nlink) {
+-			if (!ret) {
++		if (!inode || inode->i_nlink) {
++			if (inode) {
+ 				ret = btrfs_drop_verity_items(BTRFS_I(inode));
+ 				iput(inode);
++				inode = NULL;
+ 				if (ret)
+ 					goto out;
+ 			}
+ 			trans = btrfs_start_transaction(root, 1);
+ 			if (IS_ERR(trans)) {
+ 				ret = PTR_ERR(trans);
+-				iput(inode);
+ 				goto out;
+ 			}
+ 			btrfs_debug(fs_info, "auto deleting %Lu",
+@@ -3629,10 +3632,8 @@ int btrfs_orphan_cleanup(struct btrfs_root *root)
+ 			ret = btrfs_del_orphan_item(trans, root,
+ 						    found_key.objectid);
+ 			btrfs_end_transaction(trans);
+-			if (ret) {
+-				iput(inode);
++			if (ret)
+ 				goto out;
+-			}
+ 			continue;
+ 		}
+ 
+@@ -4734,9 +4735,6 @@ again:
+ 		ret = -ENOMEM;
+ 		goto out;
+ 	}
+-	ret = set_page_extent_mapped(page);
+-	if (ret < 0)
+-		goto out_unlock;
+ 
+ 	if (!PageUptodate(page)) {
+ 		ret = btrfs_read_folio(NULL, page_folio(page));
+@@ -4751,6 +4749,17 @@ again:
+ 			goto out_unlock;
+ 		}
+ 	}
++
++	/*
++	 * We unlock the page after the io is completed and then re-lock it
++	 * above.  release_folio() could have come in between that and cleared
++	 * PagePrivate(), but left the page in the mapping.  Set the page mapped
++	 * here to make sure it's properly set for the subpage stuff.
++	 */
++	ret = set_page_extent_mapped(page);
++	if (ret < 0)
++		goto out_unlock;
++
+ 	wait_on_page_writeback(page);
+ 
+ 	lock_extent(io_tree, block_start, block_end, &cached_state);
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index f8735b31da16f..360bf2522a871 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -4433,4 +4433,5 @@ void btrfs_qgroup_destroy_extent_records(struct btrfs_transaction *trans)
+ 		ulist_free(entry->old_roots);
+ 		kfree(entry);
+ 	}
++	*root = RB_ROOT;
+ }
+diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
+index 2fab37f062def..fd2cd30aeb0d8 100644
+--- a/fs/btrfs/raid56.c
++++ b/fs/btrfs/raid56.c
+@@ -71,7 +71,7 @@ static void rmw_rbio_work_locked(struct work_struct *work);
+ static void index_rbio_pages(struct btrfs_raid_bio *rbio);
+ static int alloc_rbio_pages(struct btrfs_raid_bio *rbio);
+ 
+-static int finish_parity_scrub(struct btrfs_raid_bio *rbio, int need_check);
++static int finish_parity_scrub(struct btrfs_raid_bio *rbio);
+ static void scrub_rbio_work_locked(struct work_struct *work);
+ 
+ static void free_raid_bio_pointers(struct btrfs_raid_bio *rbio)
+@@ -2404,7 +2404,7 @@ static int alloc_rbio_essential_pages(struct btrfs_raid_bio *rbio)
+ 	return 0;
+ }
+ 
+-static int finish_parity_scrub(struct btrfs_raid_bio *rbio, int need_check)
++static int finish_parity_scrub(struct btrfs_raid_bio *rbio)
+ {
+ 	struct btrfs_io_context *bioc = rbio->bioc;
+ 	const u32 sectorsize = bioc->fs_info->sectorsize;
+@@ -2445,9 +2445,6 @@ static int finish_parity_scrub(struct btrfs_raid_bio *rbio, int need_check)
+ 	 */
+ 	clear_bit(RBIO_CACHE_READY_BIT, &rbio->flags);
+ 
+-	if (!need_check)
+-		goto writeback;
+-
+ 	p_sector.page = alloc_page(GFP_NOFS);
+ 	if (!p_sector.page)
+ 		return -ENOMEM;
+@@ -2516,7 +2513,6 @@ static int finish_parity_scrub(struct btrfs_raid_bio *rbio, int need_check)
+ 		q_sector.page = NULL;
+ 	}
+ 
+-writeback:
+ 	/*
+ 	 * time to start writing.  Make bios for everything from the
+ 	 * higher layers (the bio_list in our rbio) and our p/q.  Ignore
+@@ -2699,7 +2695,6 @@ static int scrub_assemble_read_bios(struct btrfs_raid_bio *rbio)
+ 
+ static void scrub_rbio(struct btrfs_raid_bio *rbio)
+ {
+-	bool need_check = false;
+ 	int sector_nr;
+ 	int ret;
+ 
+@@ -2722,7 +2717,7 @@ static void scrub_rbio(struct btrfs_raid_bio *rbio)
+ 	 * We have every sector properly prepared. Can finish the scrub
+ 	 * and writeback the good content.
+ 	 */
+-	ret = finish_parity_scrub(rbio, need_check);
++	ret = finish_parity_scrub(rbio);
+ 	wait_event(rbio->io_wait, atomic_read(&rbio->stripes_pending) == 0);
+ 	for (sector_nr = 0; sector_nr < rbio->stripe_nsectors; sector_nr++) {
+ 		int found_errors;
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 72a838c975345..436e15e3759da 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -4071,14 +4071,6 @@ static int alloc_profile_is_valid(u64 flags, int extended)
+ 	return has_single_bit_set(flags);
+ }
+ 
+-static inline int balance_need_close(struct btrfs_fs_info *fs_info)
+-{
+-	/* cancel requested || normal exit path */
+-	return atomic_read(&fs_info->balance_cancel_req) ||
+-		(atomic_read(&fs_info->balance_pause_req) == 0 &&
+-		 atomic_read(&fs_info->balance_cancel_req) == 0);
+-}
+-
+ /*
+  * Validate target profile against allowed profiles and return true if it's OK.
+  * Otherwise print the error message and return false.
+@@ -4268,6 +4260,7 @@ int btrfs_balance(struct btrfs_fs_info *fs_info,
+ 	u64 num_devices;
+ 	unsigned seq;
+ 	bool reducing_redundancy;
++	bool paused = false;
+ 	int i;
+ 
+ 	if (btrfs_fs_closing(fs_info) ||
+@@ -4398,6 +4391,7 @@ int btrfs_balance(struct btrfs_fs_info *fs_info,
+ 	if (ret == -ECANCELED && atomic_read(&fs_info->balance_pause_req)) {
+ 		btrfs_info(fs_info, "balance: paused");
+ 		btrfs_exclop_balance(fs_info, BTRFS_EXCLOP_BALANCE_PAUSED);
++		paused = true;
+ 	}
+ 	/*
+ 	 * Balance can be canceled by:
+@@ -4426,8 +4420,8 @@ int btrfs_balance(struct btrfs_fs_info *fs_info,
+ 		btrfs_update_ioctl_balance_args(fs_info, bargs);
+ 	}
+ 
+-	if ((ret && ret != -ECANCELED && ret != -ENOSPC) ||
+-	    balance_need_close(fs_info)) {
++	/* We didn't pause, we can clean everything up. */
++	if (!paused) {
+ 		reset_balance_state(fs_info);
+ 		btrfs_exclop_finish(fs_info);
+ 	}
+@@ -6405,7 +6399,8 @@ int __btrfs_map_block(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
+ 	    (!need_full_stripe(op) || !dev_replace_is_ongoing ||
+ 	     !dev_replace->tgtdev)) {
+ 		set_io_stripe(smap, map, stripe_index, stripe_offset, stripe_nr);
+-		*mirror_num_ret = mirror_num;
++		if (mirror_num_ret)
++			*mirror_num_ret = mirror_num;
+ 		*bioc_ret = NULL;
+ 		ret = 0;
+ 		goto out;
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index 997ca4b32e87f..4a1c238600c52 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -1411,7 +1411,7 @@ static void z_erofs_decompress_kickoff(struct z_erofs_decompressqueue *io,
+ 	if (atomic_add_return(bios, &io->pending_bios))
+ 		return;
+ 	/* Use (kthread_)work and sync decompression for atomic contexts only */
+-	if (in_atomic() || irqs_disabled()) {
++	if (!in_task() || irqs_disabled() || rcu_read_lock_any_held()) {
+ #ifdef CONFIG_EROFS_FS_PCPU_KTHREAD
+ 		struct kthread_worker *worker;
+ 
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 321e3a888c20b..05151d61b00b3 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1782,6 +1782,20 @@ static int ext4_xattr_set_entry(struct ext4_xattr_info *i,
+ 		memmove(here, (void *)here + size,
+ 			(void *)last - (void *)here + sizeof(__u32));
+ 		memset(last, 0, size);
++
++		/*
++		 * Update i_inline_off - moved ibody region might contain
++		 * system.data attribute.  Handling a failure here won't
++		 * cause other complications for setting an xattr.
++		 */
++		if (!is_block && ext4_has_inline_data(inode)) {
++			ret = ext4_find_inline_data_nolock(inode);
++			if (ret) {
++				ext4_warning_inode(inode,
++					"unable to update i_inline_off");
++				goto out;
++			}
++		}
+ 	} else if (s->not_found) {
+ 		/* Insert new name. */
+ 		size_t size = EXT4_XATTR_LEN(name_len);
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 35bc174f9ba21..49c3e96207260 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -258,7 +258,7 @@ static int fuse_dentry_revalidate(struct dentry *entry, unsigned int flags)
+ 			spin_unlock(&fi->lock);
+ 		}
+ 		kfree(forget);
+-		if (ret == -ENOMEM)
++		if (ret == -ENOMEM || ret == -EINTR)
+ 			goto out;
+ 		if (ret || fuse_invalid_attr(&outarg.attr) ||
+ 		    fuse_stale_inode(inode, outarg.generation, &outarg.attr))
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index d66070af145d0..f19d748890f08 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -1134,7 +1134,10 @@ static void process_init_reply(struct fuse_mount *fm, struct fuse_args *args,
+ 		process_init_limits(fc, arg);
+ 
+ 		if (arg->minor >= 6) {
+-			u64 flags = arg->flags | (u64) arg->flags2 << 32;
++			u64 flags = arg->flags;
++
++			if (flags & FUSE_INIT_EXT)
++				flags |= (u64) arg->flags2 << 32;
+ 
+ 			ra_pages = arg->max_readahead / PAGE_SIZE;
+ 			if (flags & FUSE_ASYNC_READ)
+@@ -1254,7 +1257,8 @@ void fuse_send_init(struct fuse_mount *fm)
+ 		FUSE_ABORT_ERROR | FUSE_MAX_PAGES | FUSE_CACHE_SYMLINKS |
+ 		FUSE_NO_OPENDIR_SUPPORT | FUSE_EXPLICIT_INVAL_DATA |
+ 		FUSE_HANDLE_KILLPRIV_V2 | FUSE_SETXATTR_EXT | FUSE_INIT_EXT |
+-		FUSE_SECURITY_CTX | FUSE_CREATE_SUPP_GROUP;
++		FUSE_SECURITY_CTX | FUSE_CREATE_SUPP_GROUP |
++		FUSE_HAS_EXPIRE_ONLY;
+ #ifdef CONFIG_FUSE_DAX
+ 	if (fm->fc->dax)
+ 		flags |= FUSE_MAP_ALIGNMENT;
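
The flags2 change above only folds the upper 32 flag bits in when the server actually negotiated the extension. A sketch of that gating, with the FUSE_INIT_EXT bit value assumed from the uapi header:

    #include <stdint.h>

    #define FUSE_INIT_EXT   (1u << 30)  /* value assumed from uapi fuse.h */

    /* Build the 64-bit flag word the way process_init_reply() now does:
     * the upper half is only meaningful when FUSE_INIT_EXT was set, so
     * ignore flags2 from servers that never advertised the extension. */
    static uint64_t fuse_init_flags(uint32_t flags, uint32_t flags2)
    {
        uint64_t out = flags;

        if (out & FUSE_INIT_EXT)
            out |= (uint64_t)flags2 << 32;
        return out;
    }
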
+diff --git a/fs/fuse/ioctl.c b/fs/fuse/ioctl.c
+index 8e01bfdfc4303..726640fa439e0 100644
+--- a/fs/fuse/ioctl.c
++++ b/fs/fuse/ioctl.c
+@@ -9,14 +9,23 @@
+ #include <linux/compat.h>
+ #include <linux/fileattr.h>
+ 
+-static ssize_t fuse_send_ioctl(struct fuse_mount *fm, struct fuse_args *args)
++static ssize_t fuse_send_ioctl(struct fuse_mount *fm, struct fuse_args *args,
++			       struct fuse_ioctl_out *outarg)
+ {
+-	ssize_t ret = fuse_simple_request(fm, args);
++	ssize_t ret;
++
++	args->out_args[0].size = sizeof(*outarg);
++	args->out_args[0].value = outarg;
++
++	ret = fuse_simple_request(fm, args);
+ 
+ 	/* Translate ENOSYS, which shouldn't be returned from fs */
+ 	if (ret == -ENOSYS)
+ 		ret = -ENOTTY;
+ 
++	if (ret >= 0 && outarg->result == -ENOSYS)
++		outarg->result = -ENOTTY;
++
+ 	return ret;
+ }
+ 
+@@ -264,13 +273,11 @@ long fuse_do_ioctl(struct file *file, unsigned int cmd, unsigned long arg,
+ 	}
+ 
+ 	ap.args.out_numargs = 2;
+-	ap.args.out_args[0].size = sizeof(outarg);
+-	ap.args.out_args[0].value = &outarg;
+ 	ap.args.out_args[1].size = out_size;
+ 	ap.args.out_pages = true;
+ 	ap.args.out_argvar = true;
+ 
+-	transferred = fuse_send_ioctl(fm, &ap.args);
++	transferred = fuse_send_ioctl(fm, &ap.args, &outarg);
+ 	err = transferred;
+ 	if (transferred < 0)
+ 		goto out;
+@@ -399,12 +406,10 @@ static int fuse_priv_ioctl(struct inode *inode, struct fuse_file *ff,
+ 	args.in_args[1].size = inarg.in_size;
+ 	args.in_args[1].value = ptr;
+ 	args.out_numargs = 2;
+-	args.out_args[0].size = sizeof(outarg);
+-	args.out_args[0].value = &outarg;
+ 	args.out_args[1].size = inarg.out_size;
+ 	args.out_args[1].value = ptr;
+ 
+-	err = fuse_send_ioctl(fm, &args);
++	err = fuse_send_ioctl(fm, &args, &outarg);
+ 	if (!err) {
+ 		if (outarg.result < 0)
+ 			err = outarg.result;
+diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c
+index 51bd38da21cdd..25e3c20eb19f6 100644
+--- a/fs/jbd2/checkpoint.c
++++ b/fs/jbd2/checkpoint.c
+@@ -57,28 +57,6 @@ static inline void __buffer_unlink(struct journal_head *jh)
+ 	}
+ }
+ 
+-/*
+- * Move a buffer from the checkpoint list to the checkpoint io list
+- *
+- * Called with j_list_lock held
+- */
+-static inline void __buffer_relink_io(struct journal_head *jh)
+-{
+-	transaction_t *transaction = jh->b_cp_transaction;
+-
+-	__buffer_unlink_first(jh);
+-
+-	if (!transaction->t_checkpoint_io_list) {
+-		jh->b_cpnext = jh->b_cpprev = jh;
+-	} else {
+-		jh->b_cpnext = transaction->t_checkpoint_io_list;
+-		jh->b_cpprev = transaction->t_checkpoint_io_list->b_cpprev;
+-		jh->b_cpprev->b_cpnext = jh;
+-		jh->b_cpnext->b_cpprev = jh;
+-	}
+-	transaction->t_checkpoint_io_list = jh;
+-}
+-
+ /*
+  * Check whether a checkpoint buffer could be released or not.
+  *
+@@ -183,6 +161,7 @@ __flush_batch(journal_t *journal, int *batch_count)
+ 		struct buffer_head *bh = journal->j_chkpt_bhs[i];
+ 		BUFFER_TRACE(bh, "brelse");
+ 		__brelse(bh);
++		journal->j_chkpt_bhs[i] = NULL;
+ 	}
+ 	*batch_count = 0;
+ }
+@@ -242,6 +221,11 @@ restart:
+ 		jh = transaction->t_checkpoint_list;
+ 		bh = jh2bh(jh);
+ 
++		/*
++		 * The buffer may be under writeback, may have been flushed out
++		 * in the last couple of cycles, or may be being re-added to a
++		 * new transaction; check it again until it is unlocked.
++		 */
+ 		if (buffer_locked(bh)) {
+ 			get_bh(bh);
+ 			spin_unlock(&journal->j_list_lock);
+@@ -287,28 +271,32 @@ restart:
+ 		}
+ 		if (!buffer_dirty(bh)) {
+ 			BUFFER_TRACE(bh, "remove from checkpoint");
+-			if (__jbd2_journal_remove_checkpoint(jh))
+-				/* The transaction was released; we're done */
++			/*
++			 * If the transaction was released or the checkpoint
++			 * list was empty, we're done.
++			 */
++			if (__jbd2_journal_remove_checkpoint(jh) ||
++			    !transaction->t_checkpoint_list)
+ 				goto out;
+-			continue;
++		} else {
++			/*
++			 * We are about to write the buffer; it could be
++			 * raced by some other transaction shrink or buffer
++			 * re-log logic once we release the j_list_lock.
++			 * Leave it on the checkpoint list and check its
++			 * status again to make sure it's clean.
++			 */
++			BUFFER_TRACE(bh, "queue");
++			get_bh(bh);
++			J_ASSERT_BH(bh, !buffer_jwrite(bh));
++			journal->j_chkpt_bhs[batch_count++] = bh;
++			transaction->t_chp_stats.cs_written++;
++			transaction->t_checkpoint_list = jh->b_cpnext;
+ 		}
+-		/*
+-		 * Important: we are about to write the buffer, and
+-		 * possibly block, while still holding the journal
+-		 * lock.  We cannot afford to let the transaction
+-		 * logic start messing around with this buffer before
+-		 * we write it to disk, as that would break
+-		 * recoverability.
+-		 */
+-		BUFFER_TRACE(bh, "queue");
+-		get_bh(bh);
+-		J_ASSERT_BH(bh, !buffer_jwrite(bh));
+-		journal->j_chkpt_bhs[batch_count++] = bh;
+-		__buffer_relink_io(jh);
+-		transaction->t_chp_stats.cs_written++;
++
+ 		if ((batch_count == JBD2_NR_BATCH) ||
+-		    need_resched() ||
+-		    spin_needbreak(&journal->j_list_lock))
++		    need_resched() || spin_needbreak(&journal->j_list_lock) ||
++		    jh2bh(transaction->t_checkpoint_list) == journal->j_chkpt_bhs[0])
+ 			goto unlock_and_flush;
+ 	}
+ 
+@@ -322,38 +310,6 @@ restart:
+ 			goto restart;
+ 	}
+ 
+-	/*
+-	 * Now we issued all of the transaction's buffers, let's deal
+-	 * with the buffers that are out for I/O.
+-	 */
+-restart2:
+-	/* Did somebody clean up the transaction in the meanwhile? */
+-	if (journal->j_checkpoint_transactions != transaction ||
+-	    transaction->t_tid != this_tid)
+-		goto out;
+-
+-	while (transaction->t_checkpoint_io_list) {
+-		jh = transaction->t_checkpoint_io_list;
+-		bh = jh2bh(jh);
+-		if (buffer_locked(bh)) {
+-			get_bh(bh);
+-			spin_unlock(&journal->j_list_lock);
+-			wait_on_buffer(bh);
+-			/* the journal_head may have gone by now */
+-			BUFFER_TRACE(bh, "brelse");
+-			__brelse(bh);
+-			spin_lock(&journal->j_list_lock);
+-			goto restart2;
+-		}
+-
+-		/*
+-		 * Now in whatever state the buffer currently is, we
+-		 * know that it has been written out and so we can
+-		 * drop it from the list
+-		 */
+-		if (__jbd2_journal_remove_checkpoint(jh))
+-			break;
+-	}
+ out:
+ 	spin_unlock(&journal->j_list_lock);
+ 	result = jbd2_cleanup_journal_tail(journal);
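
The reworked checkpoint loop batches buffers for submission and, crucially, clears each batch slot after the flush so a stale pointer can never be submitted twice. A toy version of the batch-and-flush pattern, all names hypothetical:

    #include <stddef.h>
    #include <stdio.h>

    #define BATCH   8

    struct item {
        int id;
    };

    static struct item *batch[BATCH];

    /* Submit everything queued so far and clear the slots, mirroring
     * the journal->j_chkpt_bhs[i] = NULL added to __flush_batch(). */
    static void flush_batch(size_t *count)
    {
        for (size_t i = 0; i < *count; i++) {
            printf("write item %d\n", batch[i]->id);
            batch[i] = NULL;
        }
        *count = 0;
    }

    static void queue_item(struct item *it, size_t *count)
    {
        batch[(*count)++] = it;
        if (*count == BATCH)
            flush_batch(count);
    }
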
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index da6a2bc6bf022..bd4ef43b02033 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -1959,6 +1959,9 @@ dbAllocDmapLev(struct bmap * bmp,
+ 	if (dbFindLeaf((dmtree_t *) & dp->tree, l2nb, &leafidx))
+ 		return -ENOSPC;
+ 
++	if (leafidx < 0)
++		return -EIO;
++
+ 	/* determine the block number within the file system corresponding
+ 	 * to the leaf at which free space was found.
+ 	 */
+diff --git a/fs/jfs/jfs_txnmgr.c b/fs/jfs/jfs_txnmgr.c
+index ffd4feece0785..ce4b4760fcb1d 100644
+--- a/fs/jfs/jfs_txnmgr.c
++++ b/fs/jfs/jfs_txnmgr.c
+@@ -354,6 +354,11 @@ tid_t txBegin(struct super_block *sb, int flag)
+ 	jfs_info("txBegin: flag = 0x%x", flag);
+ 	log = JFS_SBI(sb)->log;
+ 
++	if (!log) {
++		jfs_error(sb, "read-only filesystem\n");
++		return 0;
++	}
++
+ 	TXN_LOCK();
+ 
+ 	INCREMENT(TxStat.txBegin);
+diff --git a/fs/jfs/namei.c b/fs/jfs/namei.c
+index b29d68b5eec53..f370c76051205 100644
+--- a/fs/jfs/namei.c
++++ b/fs/jfs/namei.c
+@@ -799,6 +799,11 @@ static int jfs_link(struct dentry *old_dentry,
+ 	if (rc)
+ 		goto out;
+ 
++	if (isReadOnly(ip)) {
++		jfs_error(ip->i_sb, "read-only filesystem\n");
++		return -EROFS;
++	}
++
+ 	tid = txBegin(ip->i_sb, 0);
+ 
+ 	mutex_lock_nested(&JFS_IP(dir)->commit_mutex, COMMIT_MUTEX_PARENT);
+diff --git a/fs/overlayfs/ovl_entry.h b/fs/overlayfs/ovl_entry.h
+index fd11fe6d6d45f..6b9f7917fc1bb 100644
+--- a/fs/overlayfs/ovl_entry.h
++++ b/fs/overlayfs/ovl_entry.h
+@@ -32,6 +32,7 @@ struct ovl_sb {
+ };
+ 
+ struct ovl_layer {
++	/* ovl_free_fs() relies on @mnt being the first member! */
+ 	struct vfsmount *mnt;
+ 	/* Trap in ovl inode cache */
+ 	struct inode *trap;
+@@ -42,6 +43,14 @@ struct ovl_layer {
+ 	int fsid;
+ };
+ 
++/*
++ * ovl_free_fs() relies on @mnt being the first member when unmounting
++ * the private mounts created for each layer. Let's check both the
++ * offset and type.
++ */
++static_assert(offsetof(struct ovl_layer, mnt) == 0);
++static_assert(__same_type(typeof_member(struct ovl_layer, mnt), struct vfsmount *));
++
+ struct ovl_path {
+ 	const struct ovl_layer *layer;
+ 	struct dentry *dentry;
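
The static_assert pair above is cheap insurance for code that casts a struct pointer to its first member. The same compile-time check works in plain C11; the struct here is a simplified stand-in:

    #include <assert.h>
    #include <stddef.h>

    struct layer {
        void *mnt;  /* must stay the first member */
        int fsid;
    };

    /* Fails the build, not the runtime, if someone reorders the
     * struct and breaks the (struct layer *) <-> mnt cast trick. */
    static_assert(offsetof(struct layer, mnt) == 0,
                  "mnt must remain the first member of struct layer");
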
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index ffd40dc3e4e99..e3e4f40476579 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -555,7 +555,7 @@ restart:
+ 			continue;
+ 		/* Wait for dquot users */
+ 		if (atomic_read(&dquot->dq_count)) {
+-			dqgrab(dquot);
++			atomic_inc(&dquot->dq_count);
+ 			spin_unlock(&dq_list_lock);
+ 			/*
+ 			 * Once dqput() wakes us up, we know it's time to free
+@@ -2420,7 +2420,8 @@ int dquot_load_quota_sb(struct super_block *sb, int type, int format_id,
+ 
+ 	error = add_dquot_ref(sb, type);
+ 	if (error)
+-		dquot_disable(sb, type, flags);
++		dquot_disable(sb, type,
++			      DQUOT_USAGE_ENABLED | DQUOT_LIMITS_ENABLED);
+ 
+ 	return error;
+ out_fmt:
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index d9f0b3b94f007..853209268f507 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -60,7 +60,7 @@ extern bool disable_legacy_dialects;
+ #define TLINK_IDLE_EXPIRE	(600 * HZ)
+ 
+ /* Drop the connection to not overload the server */
+-#define NUM_STATUS_IO_TIMEOUT   5
++#define MAX_STATUS_IO_TIMEOUT   5
+ 
+ static int ip_connect(struct TCP_Server_Info *server);
+ static int generic_ip_connect(struct TCP_Server_Info *server);
+@@ -1117,6 +1117,7 @@ cifs_demultiplex_thread(void *p)
+ 	struct mid_q_entry *mids[MAX_COMPOUND];
+ 	char *bufs[MAX_COMPOUND];
+ 	unsigned int noreclaim_flag, num_io_timeout = 0;
++	bool pending_reconnect = false;
+ 
+ 	noreclaim_flag = memalloc_noreclaim_save();
+ 	cifs_dbg(FYI, "Demultiplex PID: %d\n", task_pid_nr(current));
+@@ -1156,6 +1157,8 @@ cifs_demultiplex_thread(void *p)
+ 		cifs_dbg(FYI, "RFC1002 header 0x%x\n", pdu_length);
+ 		if (!is_smb_response(server, buf[0]))
+ 			continue;
++
++		pending_reconnect = false;
+ next_pdu:
+ 		server->pdu_size = pdu_length;
+ 
+@@ -1213,10 +1216,13 @@ next_pdu:
+ 		if (server->ops->is_status_io_timeout &&
+ 		    server->ops->is_status_io_timeout(buf)) {
+ 			num_io_timeout++;
+-			if (num_io_timeout > NUM_STATUS_IO_TIMEOUT) {
+-				cifs_reconnect(server, false);
++			if (num_io_timeout > MAX_STATUS_IO_TIMEOUT) {
++				cifs_server_dbg(VFS,
++						"Number of request timeouts exceeded %d. Reconnecting",
++						MAX_STATUS_IO_TIMEOUT);
++
++				pending_reconnect = true;
+ 				num_io_timeout = 0;
+-				continue;
+ 			}
+ 		}
+ 
+@@ -1263,6 +1269,11 @@ next_pdu:
+ 			buf = server->smallbuf;
+ 			goto next_pdu;
+ 		}
++
++		/* do this reconnect at the very end after processing all MIDs */
++		if (pending_reconnect)
++			cifs_reconnect(server, true);
++
+ 	} /* end while !EXITING */
+ 
+ 	/* buffer usually freed in free_mid - need to free it here on exit */
+diff --git a/fs/smb/client/dfs.c b/fs/smb/client/dfs.c
+index 26d14dd0482ef..cf83617236d8b 100644
+--- a/fs/smb/client/dfs.c
++++ b/fs/smb/client/dfs.c
+@@ -66,6 +66,12 @@ static int get_session(struct cifs_mount_ctx *mnt_ctx, const char *full_path)
+ 	return rc;
+ }
+ 
++/*
++ * Track the individual DFS referral servers used by a new DFS mount.
++ *
++ * On success, their lifetime will be shared by the final tcon (dfs_ses_list).
++ * Otherwise, they will be put by dfs_put_root_smb_sessions() in cifs_mount().
++ */
+ static int add_root_smb_session(struct cifs_mount_ctx *mnt_ctx)
+ {
+ 	struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;
+@@ -80,11 +86,12 @@ static int add_root_smb_session(struct cifs_mount_ctx *mnt_ctx)
+ 		INIT_LIST_HEAD(&root_ses->list);
+ 
+ 		spin_lock(&cifs_tcp_ses_lock);
+-		ses->ses_count++;
++		cifs_smb_ses_inc_refcount(ses);
+ 		spin_unlock(&cifs_tcp_ses_lock);
+ 		root_ses->ses = ses;
+ 		list_add_tail(&root_ses->list, &mnt_ctx->dfs_ses_list);
+ 	}
++	/* Select new DFS referral server so that new referrals go through it */
+ 	ctx->dfs_root_ses = ses;
+ 	return 0;
+ }
+@@ -244,7 +251,6 @@ out:
+ int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs)
+ {
+ 	struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;
+-	struct cifs_ses *ses;
+ 	bool nodfs = ctx->nodfs;
+ 	int rc;
+ 
+@@ -278,20 +284,8 @@ int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs)
+ 	}
+ 
+ 	*isdfs = true;
+-	/*
+-	 * Prevent DFS root session of being put in the first call to
+-	 * cifs_mount_put_conns().  If another DFS root server was not found
+-	 * while chasing the referrals (@ctx->dfs_root_ses == @ses), then we
+-	 * can safely put extra refcount of @ses.
+-	 */
+-	ses = mnt_ctx->ses;
+-	mnt_ctx->ses = NULL;
+-	mnt_ctx->server = NULL;
+-	rc = __dfs_mount_share(mnt_ctx);
+-	if (ses == ctx->dfs_root_ses)
+-		cifs_put_smb_ses(ses);
+-
+-	return rc;
++	add_root_smb_session(mnt_ctx);
++	return __dfs_mount_share(mnt_ctx);
+ }
+ 
+ /* Update dfs referral path of superblock */
+diff --git a/fs/smb/client/smb2transport.c b/fs/smb/client/smb2transport.c
+index 22954a9c7a6c7..355e8700530fc 100644
+--- a/fs/smb/client/smb2transport.c
++++ b/fs/smb/client/smb2transport.c
+@@ -159,7 +159,7 @@ smb2_find_smb_ses_unlocked(struct TCP_Server_Info *server, __u64 ses_id)
+ 			spin_unlock(&ses->ses_lock);
+ 			continue;
+ 		}
+-		++ses->ses_count;
++		cifs_smb_ses_inc_refcount(ses);
+ 		spin_unlock(&ses->ses_lock);
+ 		return ses;
+ 	}
+diff --git a/fs/udf/unicode.c b/fs/udf/unicode.c
+index 622569007b530..2142cbd1dde24 100644
+--- a/fs/udf/unicode.c
++++ b/fs/udf/unicode.c
+@@ -247,7 +247,7 @@ static int udf_name_from_CS0(struct super_block *sb,
+ 	}
+ 
+ 	if (translate) {
+-		if (str_o_len <= 2 && str_o[0] == '.' &&
++		if (str_o_len > 0 && str_o_len <= 2 && str_o[0] == '.' &&
+ 		    (str_o_len == 1 || str_o[1] == '.'))
+ 			needsCRC = 1;
+ 		if (needsCRC) {
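
The udf change adds the length guard that makes the dot test safe: checking str_o[0] is only valid once the translated name is known to be non-empty. The predicate, extracted as a standalone helper:

    #include <stdbool.h>
    #include <stddef.h>

    /* True for "." and ".." only; the len > 0 check keeps the
     * name[0] access in bounds for an empty translation. */
    static bool is_dot_or_dotdot(const char *name, size_t len)
    {
        return len > 0 && len <= 2 && name[0] == '.' &&
               (len == 1 || name[1] == '.');
    }
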
+diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
+index 402b545959af7..5b27f94d4fad6 100644
+--- a/include/kvm/arm_vgic.h
++++ b/include/kvm/arm_vgic.h
+@@ -431,7 +431,7 @@ int kvm_vgic_v4_unset_forwarding(struct kvm *kvm, int irq,
+ 
+ int vgic_v4_load(struct kvm_vcpu *vcpu);
+ void vgic_v4_commit(struct kvm_vcpu *vcpu);
+-int vgic_v4_put(struct kvm_vcpu *vcpu, bool need_db);
++int vgic_v4_put(struct kvm_vcpu *vcpu);
+ 
+ /* CPU HP callbacks */
+ void kvm_vgic_cpu_up(void);
+diff --git a/include/linux/psi.h b/include/linux/psi.h
+index ab26200c28033..e0745873e3f26 100644
+--- a/include/linux/psi.h
++++ b/include/linux/psi.h
+@@ -23,8 +23,9 @@ void psi_memstall_enter(unsigned long *flags);
+ void psi_memstall_leave(unsigned long *flags);
+ 
+ int psi_show(struct seq_file *s, struct psi_group *group, enum psi_res res);
+-struct psi_trigger *psi_trigger_create(struct psi_group *group,
+-			char *buf, enum psi_res res, struct file *file);
++struct psi_trigger *psi_trigger_create(struct psi_group *group, char *buf,
++				       enum psi_res res, struct file *file,
++				       struct kernfs_open_file *of);
+ void psi_trigger_destroy(struct psi_trigger *t);
+ 
+ __poll_t psi_trigger_poll(void **trigger_ptr, struct file *file,
+diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
+index 040c089581c6c..f1fd3a8044e0e 100644
+--- a/include/linux/psi_types.h
++++ b/include/linux/psi_types.h
+@@ -137,6 +137,9 @@ struct psi_trigger {
+ 	/* Wait queue for polling */
+ 	wait_queue_head_t event_wait;
+ 
++	/* Kernfs file for cgroup triggers */
++	struct kernfs_open_file *of;
++
+ 	/* Pending event flag */
+ 	int event;
+ 
+diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
+index 20099268fa257..669e8cff40c74 100644
+--- a/include/linux/sched/signal.h
++++ b/include/linux/sched/signal.h
+@@ -135,7 +135,7 @@ struct signal_struct {
+ #ifdef CONFIG_POSIX_TIMERS
+ 
+ 	/* POSIX.1b Interval Timers */
+-	int			posix_timer_id;
++	unsigned int		next_posix_timer_id;
+ 	struct list_head	posix_timers;
+ 
+ 	/* ITIMER_REAL timer for the process */
+diff --git a/include/linux/tcp.h b/include/linux/tcp.h
+index b4c08ac869835..91a37c99ba665 100644
+--- a/include/linux/tcp.h
++++ b/include/linux/tcp.h
+@@ -513,7 +513,7 @@ static inline void fastopen_queue_tune(struct sock *sk, int backlog)
+ 	struct request_sock_queue *queue = &inet_csk(sk)->icsk_accept_queue;
+ 	int somaxconn = READ_ONCE(sock_net(sk)->core.sysctl_somaxconn);
+ 
+-	queue->fastopenq.max_qlen = min_t(unsigned int, backlog, somaxconn);
++	WRITE_ONCE(queue->fastopenq.max_qlen, min_t(unsigned int, backlog, somaxconn));
+ }
+ 
+ static inline void tcp_move_syn(struct tcp_sock *tp,
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 9654567cfae37..870b6d3c5146b 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -822,6 +822,7 @@ struct hci_conn_params {
+ 
+ 	struct hci_conn *conn;
+ 	bool explicit_connect;
++	/* Accessed without hdev->lock: */
+ 	hci_conn_flags_t flags;
+ 	u8  privacy_mode;
+ };
+@@ -1573,7 +1574,11 @@ struct hci_conn_params *hci_conn_params_add(struct hci_dev *hdev,
+ 					    bdaddr_t *addr, u8 addr_type);
+ void hci_conn_params_del(struct hci_dev *hdev, bdaddr_t *addr, u8 addr_type);
+ void hci_conn_params_clear_disabled(struct hci_dev *hdev);
++void hci_conn_params_free(struct hci_conn_params *param);
+ 
++void hci_pend_le_list_del_init(struct hci_conn_params *param);
++void hci_pend_le_list_add(struct hci_conn_params *param,
++			  struct list_head *list);
+ struct hci_conn_params *hci_pend_le_action_lookup(struct list_head *list,
+ 						  bdaddr_t *addr,
+ 						  u8 addr_type);
+diff --git a/include/net/ip.h b/include/net/ip.h
+index acec504c469a0..83a1a9bc3ceb1 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -282,7 +282,7 @@ void ip_send_unicast_reply(struct sock *sk, struct sk_buff *skb,
+ 			   const struct ip_options *sopt,
+ 			   __be32 daddr, __be32 saddr,
+ 			   const struct ip_reply_arg *arg,
+-			   unsigned int len, u64 transmit_time);
++			   unsigned int len, u64 transmit_time, u32 txhash);
+ 
+ #define IP_INC_STATS(net, field)	SNMP_INC_STATS64((net)->mib.ip_statistics, field)
+ #define __IP_INC_STATS(net, field)	__SNMP_INC_STATS64((net)->mib.ip_statistics, field)
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 5066e4586cf09..182337a8cf94a 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -1514,25 +1514,38 @@ void tcp_leave_memory_pressure(struct sock *sk);
+ static inline int keepalive_intvl_when(const struct tcp_sock *tp)
+ {
+ 	struct net *net = sock_net((struct sock *)tp);
++	int val;
+ 
+-	return tp->keepalive_intvl ? :
+-		READ_ONCE(net->ipv4.sysctl_tcp_keepalive_intvl);
++	/* Paired with WRITE_ONCE() in tcp_sock_set_keepintvl()
++	 * and do_tcp_setsockopt().
++	 */
++	val = READ_ONCE(tp->keepalive_intvl);
++
++	return val ? : READ_ONCE(net->ipv4.sysctl_tcp_keepalive_intvl);
+ }
+ 
+ static inline int keepalive_time_when(const struct tcp_sock *tp)
+ {
+ 	struct net *net = sock_net((struct sock *)tp);
++	int val;
+ 
+-	return tp->keepalive_time ? :
+-		READ_ONCE(net->ipv4.sysctl_tcp_keepalive_time);
++	/* Paired with WRITE_ONCE() in tcp_sock_set_keepidle_locked() */
++	val = READ_ONCE(tp->keepalive_time);
++
++	return val ? : READ_ONCE(net->ipv4.sysctl_tcp_keepalive_time);
+ }
+ 
+ static inline int keepalive_probes(const struct tcp_sock *tp)
+ {
+ 	struct net *net = sock_net((struct sock *)tp);
++	int val;
+ 
+-	return tp->keepalive_probes ? :
+-		READ_ONCE(net->ipv4.sysctl_tcp_keepalive_probes);
++	/* Paired with WRITE_ONCE() in tcp_sock_set_keepcnt()
++	 * and do_tcp_setsockopt().
++	 */
++	val = READ_ONCE(tp->keepalive_probes);
++
++	return val ? : READ_ONCE(net->ipv4.sysctl_tcp_keepalive_probes);
+ }
+ 
+ static inline u32 keepalive_time_elapsed(const struct tcp_sock *tp)
+@@ -2053,7 +2066,11 @@ void __tcp_v4_send_check(struct sk_buff *skb, __be32 saddr, __be32 daddr);
+ static inline u32 tcp_notsent_lowat(const struct tcp_sock *tp)
+ {
+ 	struct net *net = sock_net((struct sock *)tp);
+-	return tp->notsent_lowat ?: READ_ONCE(net->ipv4.sysctl_tcp_notsent_lowat);
++	u32 val;
++
++	val = READ_ONCE(tp->notsent_lowat);
++
++	return val ?: READ_ONCE(net->ipv4.sysctl_tcp_notsent_lowat);
+ }
+ 
+ bool tcp_stream_memory_free(const struct sock *sk, int wake);
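
The keepalive helpers above pair READ_ONCE() with WRITE_ONCE() so a reader racing an unlocked setter never sees a torn value. The same pattern in portable C11, with relaxed atomics standing in for the kernel macros and the names purely illustrative:

    #include <stdatomic.h>

    static int sysctl_keepalive_intvl = 75; /* system-wide default */

    /* Per-socket setting; zero means "unset, use the sysctl". */
    struct ksettings {
        _Atomic int keepalive_intvl;
    };

    /* Paired like WRITE_ONCE()/READ_ONCE(): no lock is held, but the
     * relaxed atomic store/load rule out torn or invented values. */
    static void set_keepintvl(struct ksettings *s, int val)
    {
        atomic_store_explicit(&s->keepalive_intvl, val,
                              memory_order_relaxed);
    }

    static int keepalive_intvl_when(struct ksettings *s)
    {
        int val = atomic_load_explicit(&s->keepalive_intvl,
                                       memory_order_relaxed);

        return val ? val : sysctl_keepalive_intvl;
    }

The fallback-on-zero convention is what makes the lockless read safe to combine with a default: any value the reader observes, old or new, is a coherent whole.
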
+diff --git a/include/uapi/linux/fuse.h b/include/uapi/linux/fuse.h
+index 1b9d0dfae72df..b3fcab13fcd3d 100644
+--- a/include/uapi/linux/fuse.h
++++ b/include/uapi/linux/fuse.h
+@@ -206,6 +206,7 @@
+  *  - add extension header
+  *  - add FUSE_EXT_GROUPS
+  *  - add FUSE_CREATE_SUPP_GROUP
++ *  - add FUSE_HAS_EXPIRE_ONLY
+  */
+ 
+ #ifndef _LINUX_FUSE_H
+@@ -369,6 +370,7 @@ struct fuse_file_lock {
+  * FUSE_HAS_INODE_DAX:  use per inode DAX
+  * FUSE_CREATE_SUPP_GROUP: add supplementary group info to create, mkdir,
+  *			symlink and mknod (single group that matches parent)
++ * FUSE_HAS_EXPIRE_ONLY: kernel supports expiry-only entry invalidation
+  */
+ #define FUSE_ASYNC_READ		(1 << 0)
+ #define FUSE_POSIX_LOCKS	(1 << 1)
+@@ -406,6 +408,7 @@ struct fuse_file_lock {
+ #define FUSE_SECURITY_CTX	(1ULL << 32)
+ #define FUSE_HAS_INODE_DAX	(1ULL << 33)
+ #define FUSE_CREATE_SUPP_GROUP	(1ULL << 34)
++#define FUSE_HAS_EXPIRE_ONLY	(1ULL << 35)
+ 
+ /**
+  * CUSE INIT request/reply flags
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index f1b79959d1c1d..d6667b435dd39 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -2032,6 +2032,14 @@ fail:
+ 		ret = io_issue_sqe(req, issue_flags);
+ 		if (ret != -EAGAIN)
+ 			break;
++
++		/*
++		 * If REQ_F_NOWAIT is set, then don't wait or retry with
++		 * poll. -EAGAIN is final for that case.
++		 */
++		if (req->flags & REQ_F_NOWAIT)
++			break;
++
+ 		/*
+ 		 * We can get EAGAIN for iopolled IO even though we're
+ 		 * forcing a sync submission from here, since we can't
+@@ -3425,8 +3433,6 @@ static unsigned long io_uring_mmu_get_unmapped_area(struct file *filp,
+ 			unsigned long addr, unsigned long len,
+ 			unsigned long pgoff, unsigned long flags)
+ {
+-	const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
+-	struct vm_unmapped_area_info info;
+ 	void *ptr;
+ 
+ 	/*
+@@ -3441,32 +3447,26 @@ static unsigned long io_uring_mmu_get_unmapped_area(struct file *filp,
+ 	if (IS_ERR(ptr))
+ 		return -ENOMEM;
+ 
+-	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+-	info.length = len;
+-	info.low_limit = max(PAGE_SIZE, mmap_min_addr);
+-	info.high_limit = arch_get_mmap_base(addr, current->mm->mmap_base);
++	/*
++	 * Some architectures have strong cache aliasing requirements.
++	 * For such architectures we need a coherent mapping which aliases
++	 * kernel memory *and* userspace memory. To achieve that:
++	 * - use a NULL file pointer to reference physical memory, and
++	 * - use the kernel virtual address of the shared io_uring context
++	 *   (instead of the userspace-provided address, which has to be 0UL
++	 *   anyway).
++	 * For architectures without such aliasing requirements, the
++	 * architecture will return any suitable mapping because addr is 0.
++	 */
++	filp = NULL;
++	flags |= MAP_SHARED;
++	pgoff = 0;	/* has been translated to ptr above */
+ #ifdef SHM_COLOUR
+-	info.align_mask = PAGE_MASK & (SHM_COLOUR - 1UL);
++	addr = (uintptr_t) ptr;
+ #else
+-	info.align_mask = PAGE_MASK & (SHMLBA - 1UL);
++	addr = 0UL;
+ #endif
+-	info.align_offset = (unsigned long) ptr;
+-
+-	/*
+-	 * A failed mmap() very likely causes application failure,
+-	 * so fall back to the bottom-up function here. This scenario
+-	 * can happen with large stack limits and large mmap()
+-	 * allocations.
+-	 */
+-	addr = vm_unmapped_area(&info);
+-	if (offset_in_page(addr)) {
+-		info.flags = 0;
+-		info.low_limit = TASK_UNMAPPED_BASE;
+-		info.high_limit = mmap_end;
+-		addr = vm_unmapped_area(&info);
+-	}
+-
+-	return addr;
++	return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);
+ }
+ 
+ #else /* !CONFIG_MMU */
+diff --git a/kernel/bpf/bpf_lru_list.c b/kernel/bpf/bpf_lru_list.c
+index d99e89f113c43..3dabdd137d102 100644
+--- a/kernel/bpf/bpf_lru_list.c
++++ b/kernel/bpf/bpf_lru_list.c
+@@ -41,7 +41,12 @@ static struct list_head *local_pending_list(struct bpf_lru_locallist *loc_l)
+ /* bpf_lru_node helpers */
+ static bool bpf_lru_node_is_ref(const struct bpf_lru_node *node)
+ {
+-	return node->ref;
++	return READ_ONCE(node->ref);
++}
++
++static void bpf_lru_node_clear_ref(struct bpf_lru_node *node)
++{
++	WRITE_ONCE(node->ref, 0);
+ }
+ 
+ static void bpf_lru_list_count_inc(struct bpf_lru_list *l,
+@@ -89,7 +94,7 @@ static void __bpf_lru_node_move_in(struct bpf_lru_list *l,
+ 
+ 	bpf_lru_list_count_inc(l, tgt_type);
+ 	node->type = tgt_type;
+-	node->ref = 0;
++	bpf_lru_node_clear_ref(node);
+ 	list_move(&node->list, &l->lists[tgt_type]);
+ }
+ 
+@@ -110,7 +115,7 @@ static void __bpf_lru_node_move(struct bpf_lru_list *l,
+ 		bpf_lru_list_count_inc(l, tgt_type);
+ 		node->type = tgt_type;
+ 	}
+-	node->ref = 0;
++	bpf_lru_node_clear_ref(node);
+ 
+ 	/* If the moving node is the next_inactive_rotation candidate,
+ 	 * move the next_inactive_rotation pointer also.
+@@ -353,7 +358,7 @@ static void __local_list_add_pending(struct bpf_lru *lru,
+ 	*(u32 *)((void *)node + lru->hash_offset) = hash;
+ 	node->cpu = cpu;
+ 	node->type = BPF_LRU_LOCAL_LIST_T_PENDING;
+-	node->ref = 0;
++	bpf_lru_node_clear_ref(node);
+ 	list_add(&node->list, local_pending_list(loc_l));
+ }
+ 
+@@ -419,7 +424,7 @@ static struct bpf_lru_node *bpf_percpu_lru_pop_free(struct bpf_lru *lru,
+ 	if (!list_empty(free_list)) {
+ 		node = list_first_entry(free_list, struct bpf_lru_node, list);
+ 		*(u32 *)((void *)node + lru->hash_offset) = hash;
+-		node->ref = 0;
++		bpf_lru_node_clear_ref(node);
+ 		__bpf_lru_node_move(l, node, BPF_LRU_LIST_T_INACTIVE);
+ 	}
+ 
+@@ -522,7 +527,7 @@ static void bpf_common_lru_push_free(struct bpf_lru *lru,
+ 		}
+ 
+ 		node->type = BPF_LRU_LOCAL_LIST_T_FREE;
+-		node->ref = 0;
++		bpf_lru_node_clear_ref(node);
+ 		list_move(&node->list, local_free_list(loc_l));
+ 
+ 		raw_spin_unlock_irqrestore(&loc_l->lock, flags);
+@@ -568,7 +573,7 @@ static void bpf_common_lru_populate(struct bpf_lru *lru, void *buf,
+ 
+ 		node = (struct bpf_lru_node *)(buf + node_offset);
+ 		node->type = BPF_LRU_LIST_T_FREE;
+-		node->ref = 0;
++		bpf_lru_node_clear_ref(node);
+ 		list_add(&node->list, &l->lists[BPF_LRU_LIST_T_FREE]);
+ 		buf += elem_size;
+ 	}
+@@ -594,7 +599,7 @@ again:
+ 		node = (struct bpf_lru_node *)(buf + node_offset);
+ 		node->cpu = cpu;
+ 		node->type = BPF_LRU_LIST_T_FREE;
+-		node->ref = 0;
++		bpf_lru_node_clear_ref(node);
+ 		list_add(&node->list, &l->lists[BPF_LRU_LIST_T_FREE]);
+ 		i++;
+ 		buf += elem_size;
+diff --git a/kernel/bpf/bpf_lru_list.h b/kernel/bpf/bpf_lru_list.h
+index 4ea227c9c1ade..8f3c8b2b4490e 100644
+--- a/kernel/bpf/bpf_lru_list.h
++++ b/kernel/bpf/bpf_lru_list.h
+@@ -64,11 +64,8 @@ struct bpf_lru {
+ 
+ static inline void bpf_lru_node_set_ref(struct bpf_lru_node *node)
+ {
+-	/* ref is an approximation on access frequency.  It does not
+-	 * have to be very accurate.  Hence, no protection is used.
+-	 */
+-	if (!node->ref)
+-		node->ref = 1;
++	if (!READ_ONCE(node->ref))
++		WRITE_ONCE(node->ref, 1);
+ }
+ 
+ int bpf_lru_init(struct bpf_lru *lru, bool percpu, u32 hash_offset,
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 25ca17a8e1964..8b4e92439d1d6 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -485,25 +485,26 @@ static bool btf_type_is_fwd(const struct btf_type *t)
+ 	return BTF_INFO_KIND(t->info) == BTF_KIND_FWD;
+ }
+ 
+-static bool btf_type_nosize(const struct btf_type *t)
++static bool btf_type_is_datasec(const struct btf_type *t)
+ {
+-	return btf_type_is_void(t) || btf_type_is_fwd(t) ||
+-	       btf_type_is_func(t) || btf_type_is_func_proto(t);
++	return BTF_INFO_KIND(t->info) == BTF_KIND_DATASEC;
+ }
+ 
+-static bool btf_type_nosize_or_null(const struct btf_type *t)
++static bool btf_type_is_decl_tag(const struct btf_type *t)
+ {
+-	return !t || btf_type_nosize(t);
++	return BTF_INFO_KIND(t->info) == BTF_KIND_DECL_TAG;
+ }
+ 
+-static bool btf_type_is_datasec(const struct btf_type *t)
++static bool btf_type_nosize(const struct btf_type *t)
+ {
+-	return BTF_INFO_KIND(t->info) == BTF_KIND_DATASEC;
++	return btf_type_is_void(t) || btf_type_is_fwd(t) ||
++	       btf_type_is_func(t) || btf_type_is_func_proto(t) ||
++	       btf_type_is_decl_tag(t);
+ }
+ 
+-static bool btf_type_is_decl_tag(const struct btf_type *t)
++static bool btf_type_nosize_or_null(const struct btf_type *t)
+ {
+-	return BTF_INFO_KIND(t->info) == BTF_KIND_DECL_TAG;
++	return !t || btf_type_nosize(t);
+ }
+ 
+ static bool btf_type_is_decl_tag_target(const struct btf_type *t)
+diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
+index 046ddff37a76d..850494423530e 100644
+--- a/kernel/bpf/log.c
++++ b/kernel/bpf/log.c
+@@ -62,9 +62,6 @@ void bpf_verifier_vlog(struct bpf_verifier_log *log, const char *fmt,
+ 
+ 	n = vscnprintf(log->kbuf, BPF_VERIFIER_TMP_LOG_SIZE, fmt, args);
+ 
+-	WARN_ONCE(n >= BPF_VERIFIER_TMP_LOG_SIZE - 1,
+-		  "verifier log line truncated - local buffer too short\n");
+-
+ 	if (log->level == BPF_LOG_KERNEL) {
+ 		bool newline = n > 0 && log->kbuf[n - 1] == '\n';
+ 
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index f1c8733f76b83..5524fcf6fb2a4 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -5394,7 +5394,8 @@ static int bpf_unpriv_handler(struct ctl_table *table, int write,
+ 		*(int *)table->data = unpriv_enable;
+ 	}
+ 
+-	unpriv_ebpf_notify(unpriv_enable);
++	if (write)
++		unpriv_ebpf_notify(unpriv_enable);
+ 
+ 	return ret;
+ }
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index aac31e33323bb..4fbfe1d086467 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5381,16 +5381,17 @@ static int update_stack_depth(struct bpf_verifier_env *env,
+  * Since recursion is prevented by check_cfg() this algorithm
+  * only needs a local stack of MAX_CALL_FRAMES to remember callsites
+  */
+-static int check_max_stack_depth(struct bpf_verifier_env *env)
++static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx)
+ {
+-	int depth = 0, frame = 0, idx = 0, i = 0, subprog_end;
+ 	struct bpf_subprog_info *subprog = env->subprog_info;
+ 	struct bpf_insn *insn = env->prog->insnsi;
++	int depth = 0, frame = 0, i, subprog_end;
+ 	bool tail_call_reachable = false;
+ 	int ret_insn[MAX_CALL_FRAMES];
+ 	int ret_prog[MAX_CALL_FRAMES];
+ 	int j;
+ 
++	i = subprog[idx].start;
+ process_func:
+ 	/* protect against potential stack overflow that might happen when
+ 	 * bpf2bpf calls get combined with tailcalls. Limit the caller's stack
+@@ -5429,7 +5430,7 @@ process_func:
+ continue_func:
+ 	subprog_end = subprog[idx + 1].start;
+ 	for (; i < subprog_end; i++) {
+-		int next_insn;
++		int next_insn, sidx;
+ 
+ 		if (!bpf_pseudo_call(insn + i) && !bpf_pseudo_func(insn + i))
+ 			continue;
+@@ -5439,14 +5440,14 @@ continue_func:
+ 
+ 		/* find the callee */
+ 		next_insn = i + insn[i].imm + 1;
+-		idx = find_subprog(env, next_insn);
+-		if (idx < 0) {
++		sidx = find_subprog(env, next_insn);
++		if (sidx < 0) {
+ 			WARN_ONCE(1, "verifier bug. No program starts at insn %d\n",
+ 				  next_insn);
+ 			return -EFAULT;
+ 		}
+-		if (subprog[idx].is_async_cb) {
+-			if (subprog[idx].has_tail_call) {
++		if (subprog[sidx].is_async_cb) {
++			if (subprog[sidx].has_tail_call) {
+ 				verbose(env, "verifier bug. subprog has tail_call and async cb\n");
+ 				return -EFAULT;
+ 			}
+@@ -5455,6 +5456,7 @@ continue_func:
+ 				continue;
+ 		}
+ 		i = next_insn;
++		idx = sidx;
+ 
+ 		if (subprog[idx].has_tail_call)
+ 			tail_call_reachable = true;
+@@ -5490,6 +5492,22 @@ continue_func:
+ 	goto continue_func;
+ }
+ 
++static int check_max_stack_depth(struct bpf_verifier_env *env)
++{
++	struct bpf_subprog_info *si = env->subprog_info;
++	int ret;
++
++	for (int i = 0; i < env->subprog_cnt; i++) {
++		if (!i || si[i].is_async_cb) {
++			ret = check_max_stack_depth_subprog(env, i);
++			if (ret < 0)
++				return ret;
++		}
++		continue;
++	}
++	return 0;
++}
++
+ #ifndef CONFIG_BPF_JIT_ALWAYS_ON
+ static int get_callee_stack_depth(struct bpf_verifier_env *env,
+ 				  const struct bpf_insn *insn, int idx)
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 4d42f0cbc11ea..3299ec69ce0d1 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -3785,7 +3785,7 @@ static ssize_t pressure_write(struct kernfs_open_file *of, char *buf,
+ 	}
+ 
+ 	psi = cgroup_psi(cgrp);
+-	new = psi_trigger_create(psi, buf, res, of->file);
++	new = psi_trigger_create(psi, buf, res, of->file, of);
+ 	if (IS_ERR(new)) {
+ 		cgroup_put(cgrp);
+ 		return PTR_ERR(new);
+diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c
+index 77747391f49b6..4874508bb950e 100644
+--- a/kernel/kallsyms.c
++++ b/kernel/kallsyms.c
+@@ -174,11 +174,10 @@ static bool cleanup_symbol_name(char *s)
+ 	 * LLVM appends various suffixes for local functions and variables that
+ 	 * must be promoted to global scope as part of LTO.  This can break
+ 	 * hooking of static functions with kprobes. '.' is not a valid
+-	 * character in an identifier in C. Suffixes observed:
+-	 * character in an identifier in C. Suffixes observed:
++	 * character in an identifier in C. Suffixes only in LLVM LTO observed:
+ 	 * - foo.llvm.[0-9a-f]+
+-	 * - foo.[0-9a-f]+
+ 	 */
+-	res = strchr(s, '.');
++	res = strstr(s, ".llvm.");
+ 	if (res) {
+ 		*res = '\0';
+ 		return true;
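
The kallsyms fix narrows suffix stripping from the first '.' to the specific ".llvm." marker, so symbol names with legitimate dots survive. The same logic as a standalone program:

    #include <stdio.h>
    #include <string.h>

    /* Cut the name only at the ".llvm." marker LLVM LTO appends,
     * instead of at the first '.' as the old strchr() version did. */
    static int strip_llvm_suffix(char *s)
    {
        char *res = strstr(s, ".llvm.");

        if (res) {
            *res = '\0';
            return 1;
        }
        return 0;
    }

    int main(void)
    {
        char sym[] = "foo.llvm.1234abcd";

        strip_llvm_suffix(sym);
        printf("%s\n", sym);    /* prints "foo" */
        return 0;
    }
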
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index 8f08c087142b0..9b9ce09f8f358 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -241,7 +241,6 @@ static void cblist_init_generic(struct rcu_tasks *rtp)
+ 	if (rcu_task_enqueue_lim < 0) {
+ 		rcu_task_enqueue_lim = 1;
+ 		rcu_task_cb_adjust = true;
+-		pr_info("%s: Setting adjustable number of callback queues.\n", __func__);
+ 	} else if (rcu_task_enqueue_lim == 0) {
+ 		rcu_task_enqueue_lim = 1;
+ 	}
+@@ -272,6 +271,10 @@ static void cblist_init_generic(struct rcu_tasks *rtp)
+ 		raw_spin_unlock_rcu_node(rtpcp); // irqs remain disabled.
+ 	}
+ 	raw_spin_unlock_irqrestore(&rtp->cbs_gbl_lock, flags);
++
++	if (rcu_task_cb_adjust)
++		pr_info("%s: Setting adjustable number of callback queues.\n", __func__);
++
+ 	pr_info("%s: Setting shift to %d and lim to %d.\n", __func__, data_race(rtp->percpu_enqueue_shift), data_race(rtp->percpu_enqueue_lim));
+ }
+ 
+diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
+index 3b7abb58157df..8239b39d945bd 100644
+--- a/kernel/rcu/tree_exp.h
++++ b/kernel/rcu/tree_exp.h
+@@ -643,7 +643,7 @@ static void synchronize_rcu_expedited_wait(void)
+ 					"O."[!!cpu_online(cpu)],
+ 					"o."[!!(rdp->grpmask & rnp->expmaskinit)],
+ 					"N."[!!(rdp->grpmask & rnp->expmaskinitnext)],
+-					"D."[!!(rdp->cpu_no_qs.b.exp)]);
++					"D."[!!data_race(rdp->cpu_no_qs.b.exp)]);
+ 			}
+ 		}
+ 		pr_cont(" } %lu jiffies s: %lu root: %#lx/%c\n",
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index 7b0fe741a0886..41021080ad258 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -257,6 +257,8 @@ static void rcu_preempt_ctxt_queue(struct rcu_node *rnp, struct rcu_data *rdp)
+ 	 * GP should not be able to end until we report, so there should be
+ 	 * no need to check for a subsequent expedited GP.  (Though we are
+ 	 * still in a quiescent state in any case.)
++	 *
++	 * Interrupts are disabled, so ->cpu_no_qs.b.exp cannot change.
+ 	 */
+ 	if (blkd_state & RCU_EXP_BLKD && rdp->cpu_no_qs.b.exp)
+ 		rcu_report_exp_rdp(rdp);
+@@ -941,7 +943,7 @@ notrace void rcu_preempt_deferred_qs(struct task_struct *t)
+ {
+ 	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
+ 
+-	if (rdp->cpu_no_qs.b.exp)
++	if (READ_ONCE(rdp->cpu_no_qs.b.exp))
+ 		rcu_report_exp_rdp(rdp);
+ }
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 4da5f35417626..dacb56d7e9147 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -7174,7 +7174,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
+ 	    recent_used_cpu != target &&
+ 	    cpus_share_cache(recent_used_cpu, target) &&
+ 	    (available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
+-	    cpumask_test_cpu(p->recent_used_cpu, p->cpus_ptr) &&
++	    cpumask_test_cpu(recent_used_cpu, p->cpus_ptr) &&
+ 	    asym_fits_cpu(task_util, util_min, util_max, recent_used_cpu)) {
+ 		return recent_used_cpu;
+ 	}
+@@ -10762,7 +10762,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
+ 		.sd		= sd,
+ 		.dst_cpu	= this_cpu,
+ 		.dst_rq		= this_rq,
+-		.dst_grpmask    = sched_group_span(sd->groups),
++		.dst_grpmask    = group_balance_mask(sd->groups),
+ 		.idle		= idle,
+ 		.loop_break	= SCHED_NR_MIGRATE_BREAK,
+ 		.cpus		= cpus,
+diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
+index e072f6b31bf30..80d8c10e93638 100644
+--- a/kernel/sched/psi.c
++++ b/kernel/sched/psi.c
+@@ -494,8 +494,12 @@ static u64 update_triggers(struct psi_group *group, u64 now, bool *update_total,
+ 			continue;
+ 
+ 		/* Generate an event */
+-		if (cmpxchg(&t->event, 0, 1) == 0)
+-			wake_up_interruptible(&t->event_wait);
++		if (cmpxchg(&t->event, 0, 1) == 0) {
++			if (t->of)
++				kernfs_notify(t->of->kn);
++			else
++				wake_up_interruptible(&t->event_wait);
++		}
+ 		t->last_event_time = now;
+ 		/* Reset threshold breach flag once event got generated */
+ 		t->pending_event = false;
+@@ -1272,8 +1276,9 @@ int psi_show(struct seq_file *m, struct psi_group *group, enum psi_res res)
+ 	return 0;
+ }
+ 
+-struct psi_trigger *psi_trigger_create(struct psi_group *group,
+-			char *buf, enum psi_res res, struct file *file)
++struct psi_trigger *psi_trigger_create(struct psi_group *group, char *buf,
++				       enum psi_res res, struct file *file,
++				       struct kernfs_open_file *of)
+ {
+ 	struct psi_trigger *t;
+ 	enum psi_states state;
+@@ -1333,7 +1338,9 @@ struct psi_trigger *psi_trigger_create(struct psi_group *group,
+ 
+ 	t->event = 0;
+ 	t->last_event_time = 0;
+-	init_waitqueue_head(&t->event_wait);
++	t->of = of;
++	if (!of)
++		init_waitqueue_head(&t->event_wait);
+ 	t->pending_event = false;
+ 	t->aggregator = privileged ? PSI_POLL : PSI_AVGS;
+ 
+@@ -1390,7 +1397,10 @@ void psi_trigger_destroy(struct psi_trigger *t)
+ 	 * being accessed later. Can happen if cgroup is deleted from under a
+ 	 * polling process.
+ 	 */
+-	wake_up_pollfree(&t->event_wait);
++	if (t->of)
++		kernfs_notify(t->of->kn);
++	else
++		wake_up_interruptible(&t->event_wait);
+ 
+ 	if (t->aggregator == PSI_AVGS) {
+ 		mutex_lock(&group->avgs_lock);
+@@ -1462,7 +1472,10 @@ __poll_t psi_trigger_poll(void **trigger_ptr,
+ 	if (!t)
+ 		return DEFAULT_POLLMASK | EPOLLERR | EPOLLPRI;
+ 
+-	poll_wait(file, &t->event_wait, wait);
++	if (t->of)
++		kernfs_generic_poll(t->of, wait);
++	else
++		poll_wait(file, &t->event_wait, wait);
+ 
+ 	if (cmpxchg(&t->event, 1, 0) == 1)
+ 		ret |= EPOLLPRI;
+@@ -1532,7 +1545,7 @@ static ssize_t psi_write(struct file *file, const char __user *user_buf,
+ 		return -EBUSY;
+ 	}
+ 
+-	new = psi_trigger_create(&psi_system, buf, res, file);
++	new = psi_trigger_create(&psi_system, buf, res, file, NULL);
+ 	if (IS_ERR(new)) {
+ 		mutex_unlock(&seq->lock);
+ 		return PTR_ERR(new);
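
The cmpxchg(&t->event, 0, 1) in the psi changes is a fire-once flag: only the updater that wins the 0-to-1 transition performs the notification, so a burst of threshold breaches yields a single wakeup. A sketch in C11 terms, with the printf standing in for the kernfs_notify()/wake_up_interruptible() pair:

    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic int event;

    /* Only the caller that flips 0 -> 1 notifies the listener. */
    static void maybe_generate_event(void)
    {
        int expected = 0;

        if (atomic_compare_exchange_strong(&event, &expected, 1))
            printf("wake up the listener\n");
    }

    /* The poller rearms by swapping the flag back to zero, matching
     * the cmpxchg(&t->event, 1, 0) on the read side. */
    static int consume_event(void)
    {
        return atomic_exchange(&event, 0);
    }
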
+diff --git a/kernel/sys.c b/kernel/sys.c
+index 339fee3eff6a2..a36a27ebac33e 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -2529,11 +2529,6 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
+ 			else
+ 				return -EINVAL;
+ 			break;
+-	case PR_GET_AUXV:
+-		if (arg4 || arg5)
+-			return -EINVAL;
+-		error = prctl_get_auxv((void __user *)arg2, arg3);
+-		break;
+ 		default:
+ 			return -EINVAL;
+ 		}
+@@ -2688,6 +2683,11 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
+ 	case PR_SET_VMA:
+ 		error = prctl_set_vma(arg2, arg3, arg4, arg5);
+ 		break;
++	case PR_GET_AUXV:
++		if (arg4 || arg5)
++			return -EINVAL;
++		error = prctl_get_auxv((void __user *)arg2, arg3);
++		break;
+ #ifdef CONFIG_KSM
+ 	case PR_SET_MEMORY_MERGE:
+ 		if (arg3 || arg4 || arg5)
+diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
+index ed3c4a9543982..2d6cf93ca370a 100644
+--- a/kernel/time/posix-timers.c
++++ b/kernel/time/posix-timers.c
+@@ -140,25 +140,30 @@ static struct k_itimer *posix_timer_by_id(timer_t id)
+ static int posix_timer_add(struct k_itimer *timer)
+ {
+ 	struct signal_struct *sig = current->signal;
+-	int first_free_id = sig->posix_timer_id;
+ 	struct hlist_head *head;
+-	int ret = -ENOENT;
++	unsigned int cnt, id;
+ 
+-	do {
++	/*
++	 * FIXME: Replace this by a per signal struct xarray once there is
++	 * a plan to handle the resulting CRIU regression gracefully.
++	 */
++	for (cnt = 0; cnt <= INT_MAX; cnt++) {
+ 		spin_lock(&hash_lock);
+-		head = &posix_timers_hashtable[hash(sig, sig->posix_timer_id)];
+-		if (!__posix_timers_find(head, sig, sig->posix_timer_id)) {
++		id = sig->next_posix_timer_id;
++
++		/* Write the next ID back. Clamp it to the positive space */
++		sig->next_posix_timer_id = (id + 1) & INT_MAX;
++
++		head = &posix_timers_hashtable[hash(sig, id)];
++		if (!__posix_timers_find(head, sig, id)) {
+ 			hlist_add_head_rcu(&timer->t_hash, head);
+-			ret = sig->posix_timer_id;
++			spin_unlock(&hash_lock);
++			return id;
+ 		}
+-		if (++sig->posix_timer_id < 0)
+-			sig->posix_timer_id = 0;
+-		if ((sig->posix_timer_id == first_free_id) && (ret == -ENOENT))
+-			/* Loop over all possible ids completed */
+-			ret = -EAGAIN;
+ 		spin_unlock(&hash_lock);
+-	} while (ret == -ENOENT);
+-	return ret;
++	}
++	/* POSIX return code when no timer ID could be allocated */
++	return -EAGAIN;
+ }
+ 
+ static inline void unlock_timer(struct k_itimer *timr, unsigned long flags)
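
The posix-timers rework above replaces the old first_free_id bookkeeping with a cursor that wraps through the non-negative int space and a scan bounded at INT_MAX + 1 attempts, one per possible id. A runnable userspace sketch of the same allocator shape, where in_use() stands in for the hash-table lookup:

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

static bool in_use(unsigned int id)
{
	return id < 3;		/* pretend ids 0..2 are already taken */
}

static unsigned int next_id;	/* sig->next_posix_timer_id in the kernel */

static int alloc_id(void)
{
	/* at most INT_MAX + 1 probes: one per id in 0..INT_MAX */
	for (unsigned long cnt = 0; cnt <= INT_MAX; cnt++) {
		unsigned int id = next_id;

		/* publish the next candidate, clamped to 0..INT_MAX */
		next_id = (id + 1) & INT_MAX;
		if (!in_use(id))
			return (int)id;
	}
	return -1;		/* id space exhausted (-EAGAIN upstream) */
}

int main(void)
{
	printf("%d %d\n", alloc_id(), alloc_id());	/* prints 3 4 */
	return 0;
}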
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index c8c61381eba48..d06938ae07174 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -6668,7 +6668,8 @@ static int event_hist_trigger_parse(struct event_command *cmd_ops,
+ 		goto out_unreg;
+ 
+ 	if (has_hist_vars(hist_data) || hist_data->n_var_refs) {
+-		if (save_hist_vars(hist_data))
++		ret = save_hist_vars(hist_data);
++		if (ret)
+ 			goto out_unreg;
+ 	}
+ 
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index 960223ed91991..061cc3ed58f5b 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -1795,7 +1795,7 @@ uaccess_end:
+ 	return ret;
+ }
+ 
+-static int copy_iovec_from_user(struct iovec *iov,
++static __noclone int copy_iovec_from_user(struct iovec *iov,
+ 		const struct iovec __user *uiov, unsigned long nr_segs)
+ {
+ 	int ret = -EFAULT;
+diff --git a/lib/maple_tree.c b/lib/maple_tree.c
+index 35264f1936a37..bb28a49d173c0 100644
+--- a/lib/maple_tree.c
++++ b/lib/maple_tree.c
+@@ -3693,7 +3693,8 @@ static inline int mas_root_expand(struct ma_state *mas, void *entry)
+ 	mas->offset = slot;
+ 	pivots[slot] = mas->last;
+ 	if (mas->last != ULONG_MAX)
+-		slot++;
++		pivots[++slot] = ULONG_MAX;
++
+ 	mas->depth = 1;
+ 	mas_set_height(mas);
+ 	ma_set_meta(node, maple_leaf_64, 0, slot);
+diff --git a/mm/mlock.c b/mm/mlock.c
+index 40b43f8740dfb..39e03a37f0a98 100644
+--- a/mm/mlock.c
++++ b/mm/mlock.c
+@@ -471,7 +471,6 @@ static int apply_vma_lock_flags(unsigned long start, size_t len,
+ {
+ 	unsigned long nstart, end, tmp;
+ 	struct vm_area_struct *vma, *prev;
+-	int error;
+ 	VMA_ITERATOR(vmi, current->mm, start);
+ 
+ 	VM_BUG_ON(offset_in_page(start));
+@@ -492,6 +491,7 @@ static int apply_vma_lock_flags(unsigned long start, size_t len,
+ 	nstart = start;
+ 	tmp = vma->vm_start;
+ 	for_each_vma_range(vmi, vma, end) {
++		int error;
+ 		vm_flags_t newflags;
+ 
+ 		if (vma->vm_start != tmp)
+@@ -505,14 +505,15 @@ static int apply_vma_lock_flags(unsigned long start, size_t len,
+ 			tmp = end;
+ 		error = mlock_fixup(&vmi, vma, &prev, nstart, tmp, newflags);
+ 		if (error)
+-			break;
++			return error;
++		tmp = vma_iter_end(&vmi);
+ 		nstart = tmp;
+ 	}
+ 
+-	if (vma_iter_end(&vmi) < end)
++	if (tmp < end)
+ 		return -ENOMEM;
+ 
+-	return error;
++	return 0;
+ }
+ 
+ /*
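
The mlock.c fix changes the loop to fail fast on a per-range error and to track coverage with vma_iter_end(), so the final -ENOMEM check compares how far the walk really got against the requested end. A runnable sketch of that loop shape over plain ranges (names invented):

#include <stddef.h>
#include <stdio.h>

struct range { unsigned long start, end; };

static int apply(const struct range *r, size_t n, unsigned long end)
{
	unsigned long tmp = n ? r[0].start : 0;

	for (size_t i = 0; i < n && r[i].start < end; i++) {
		if (r[i].start != tmp)
			return -1;	/* hole before this range */
		/* per-range work goes here; its error returns directly */
		tmp = r[i].end;		/* how far we have covered */
	}
	return tmp < end ? -1 : 0;	/* short coverage == -ENOMEM */
}

int main(void)
{
	const struct range v[] = { { 0, 4096 }, { 4096, 8192 } };

	printf("%d\n", apply(v, 2, 8192));	/* prints 0: fully covered */
	return 0;
}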
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 2275e0d9f8419..31c115b225e7e 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -118,7 +118,7 @@ static void hci_connect_le_scan_cleanup(struct hci_conn *conn, u8 status)
+ 	 */
+ 	params->explicit_connect = false;
+ 
+-	list_del_init(&params->action);
++	hci_pend_le_list_del_init(params);
+ 
+ 	switch (params->auto_connect) {
+ 	case HCI_AUTO_CONN_EXPLICIT:
+@@ -127,10 +127,10 @@ static void hci_connect_le_scan_cleanup(struct hci_conn *conn, u8 status)
+ 		return;
+ 	case HCI_AUTO_CONN_DIRECT:
+ 	case HCI_AUTO_CONN_ALWAYS:
+-		list_add(&params->action, &hdev->pend_le_conns);
++		hci_pend_le_list_add(params, &hdev->pend_le_conns);
+ 		break;
+ 	case HCI_AUTO_CONN_REPORT:
+-		list_add(&params->action, &hdev->pend_le_reports);
++		hci_pend_le_list_add(params, &hdev->pend_le_reports);
+ 		break;
+ 	default:
+ 		break;
+@@ -1426,8 +1426,8 @@ static int hci_explicit_conn_params_set(struct hci_dev *hdev,
+ 	if (params->auto_connect == HCI_AUTO_CONN_DISABLED ||
+ 	    params->auto_connect == HCI_AUTO_CONN_REPORT ||
+ 	    params->auto_connect == HCI_AUTO_CONN_EXPLICIT) {
+-		list_del_init(&params->action);
+-		list_add(&params->action, &hdev->pend_le_conns);
++		hci_pend_le_list_del_init(params);
++		hci_pend_le_list_add(params, &hdev->pend_le_conns);
+ 	}
+ 
+ 	params->explicit_connect = true;
+@@ -1684,7 +1684,7 @@ struct hci_conn *hci_connect_sco(struct hci_dev *hdev, int type, bdaddr_t *dst,
+ 	if (!link) {
+ 		hci_conn_drop(acl);
+ 		hci_conn_drop(sco);
+-		return NULL;
++		return ERR_PTR(-ENOLINK);
+ 	}
+ 
+ 	sco->setting = setting;
+@@ -2256,7 +2256,7 @@ struct hci_conn *hci_connect_cis(struct hci_dev *hdev, bdaddr_t *dst,
+ 	if (!link) {
+ 		hci_conn_drop(le);
+ 		hci_conn_drop(cis);
+-		return NULL;
++		return ERR_PTR(-ENOLINK);
+ 	}
+ 
+ 	/* If LE is already connected and CIS handle is already set proceed to
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 48917c68358de..1ec83985f1ab0 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1972,6 +1972,7 @@ static int hci_remove_adv_monitor(struct hci_dev *hdev,
+ 				  struct adv_monitor *monitor)
+ {
+ 	int status = 0;
++	int handle;
+ 
+ 	switch (hci_get_adv_monitor_offload_ext(hdev)) {
+ 	case HCI_ADV_MONITOR_EXT_NONE: /* also goes here when powered off */
+@@ -1980,9 +1981,10 @@ static int hci_remove_adv_monitor(struct hci_dev *hdev,
+ 		goto free_monitor;
+ 
+ 	case HCI_ADV_MONITOR_EXT_MSFT:
++		handle = monitor->handle;
+ 		status = msft_remove_monitor(hdev, monitor);
+ 		bt_dev_dbg(hdev, "%s remove monitor %d msft status %d",
+-			   hdev->name, monitor->handle, status);
++			   hdev->name, handle, status);
+ 		break;
+ 	}
+ 
+@@ -2249,21 +2251,45 @@ struct hci_conn_params *hci_conn_params_lookup(struct hci_dev *hdev,
+ 	return NULL;
+ }
+ 
+-/* This function requires the caller holds hdev->lock */
++/* This function requires the caller holds hdev->lock or rcu_read_lock */
+ struct hci_conn_params *hci_pend_le_action_lookup(struct list_head *list,
+ 						  bdaddr_t *addr, u8 addr_type)
+ {
+ 	struct hci_conn_params *param;
+ 
+-	list_for_each_entry(param, list, action) {
++	rcu_read_lock();
++
++	list_for_each_entry_rcu(param, list, action) {
+ 		if (bacmp(&param->addr, addr) == 0 &&
+-		    param->addr_type == addr_type)
++		    param->addr_type == addr_type) {
++			rcu_read_unlock();
+ 			return param;
++		}
+ 	}
+ 
++	rcu_read_unlock();
++
+ 	return NULL;
+ }
+ 
++/* This function requires the caller holds hdev->lock */
++void hci_pend_le_list_del_init(struct hci_conn_params *param)
++{
++	if (list_empty(&param->action))
++		return;
++
++	list_del_rcu(&param->action);
++	synchronize_rcu();
++	INIT_LIST_HEAD(&param->action);
++}
++
++/* This function requires the caller holds hdev->lock */
++void hci_pend_le_list_add(struct hci_conn_params *param,
++			  struct list_head *list)
++{
++	list_add_rcu(&param->action, list);
++}
++
+ /* This function requires the caller holds hdev->lock */
+ struct hci_conn_params *hci_conn_params_add(struct hci_dev *hdev,
+ 					    bdaddr_t *addr, u8 addr_type)
+@@ -2297,14 +2323,15 @@ struct hci_conn_params *hci_conn_params_add(struct hci_dev *hdev,
+ 	return params;
+ }
+ 
+-static void hci_conn_params_free(struct hci_conn_params *params)
++void hci_conn_params_free(struct hci_conn_params *params)
+ {
++	hci_pend_le_list_del_init(params);
++
+ 	if (params->conn) {
+ 		hci_conn_drop(params->conn);
+ 		hci_conn_put(params->conn);
+ 	}
+ 
+-	list_del(&params->action);
+ 	list_del(&params->list);
+ 	kfree(params);
+ }
+@@ -2342,8 +2369,7 @@ void hci_conn_params_clear_disabled(struct hci_dev *hdev)
+ 			continue;
+ 		}
+ 
+-		list_del(&params->list);
+-		kfree(params);
++		hci_conn_params_free(params);
+ 	}
+ 
+ 	BT_DBG("All LE disabled connection parameters were removed");
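
The two helpers added above establish the locking pattern the rest of this series relies on: writers edit the action lists under hdev->lock using the RCU list primitives, and readers may traverse them with no lock at all. Condensed from the hunks above (kernel-style, not buildable stand-alone):

/* writer, caller holds hdev->lock */
static void pend_del(struct hci_conn_params *p)
{
	if (list_empty(&p->action))
		return;
	list_del_rcu(&p->action);
	synchronize_rcu();		/* wait out in-flight readers */
	INIT_LIST_HEAD(&p->action);	/* now safe to reinitialise */
}

/* reader, no hdev->lock required */
static struct hci_conn_params *pend_find(struct list_head *list,
					 bdaddr_t *addr, u8 type)
{
	struct hci_conn_params *p, *found = NULL;

	rcu_read_lock();
	list_for_each_entry_rcu(p, list, action) {
		if (!bacmp(&p->addr, addr) && p->addr_type == type) {
			found = p;
			break;
		}
	}
	rcu_read_unlock();
	return found;
}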
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 21e26d3b286cc..cb0b5fe7a6f8c 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -1564,7 +1564,7 @@ static u8 hci_cc_le_set_privacy_mode(struct hci_dev *hdev, void *data,
+ 
+ 	params = hci_conn_params_lookup(hdev, &cp->bdaddr, cp->bdaddr_type);
+ 	if (params)
+-		params->privacy_mode = cp->mode;
++		WRITE_ONCE(params->privacy_mode, cp->mode);
+ 
+ 	hci_dev_unlock(hdev);
+ 
+@@ -2784,6 +2784,9 @@ static void hci_cs_disconnect(struct hci_dev *hdev, u8 status)
+ 			hci_enable_advertising(hdev);
+ 		}
+ 
++		/* Inform sockets conn is gone before we delete it */
++		hci_disconn_cfm(conn, HCI_ERROR_UNSPECIFIED);
++
+ 		goto done;
+ 	}
+ 
+@@ -2804,8 +2807,8 @@ static void hci_cs_disconnect(struct hci_dev *hdev, u8 status)
+ 
+ 		case HCI_AUTO_CONN_DIRECT:
+ 		case HCI_AUTO_CONN_ALWAYS:
+-			list_del_init(&params->action);
+-			list_add(&params->action, &hdev->pend_le_conns);
++			hci_pend_le_list_del_init(params);
++			hci_pend_le_list_add(params, &hdev->pend_le_conns);
+ 			break;
+ 
+ 		default:
+@@ -3423,8 +3426,8 @@ static void hci_disconn_complete_evt(struct hci_dev *hdev, void *data,
+ 
+ 		case HCI_AUTO_CONN_DIRECT:
+ 		case HCI_AUTO_CONN_ALWAYS:
+-			list_del_init(&params->action);
+-			list_add(&params->action, &hdev->pend_le_conns);
++			hci_pend_le_list_del_init(params);
++			hci_pend_le_list_add(params, &hdev->pend_le_conns);
+ 			hci_update_passive_scan(hdev);
+ 			break;
+ 
+@@ -5961,7 +5964,7 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+ 	params = hci_pend_le_action_lookup(&hdev->pend_le_conns, &conn->dst,
+ 					   conn->dst_type);
+ 	if (params) {
+-		list_del_init(&params->action);
++		hci_pend_le_list_del_init(params);
+ 		if (params->conn) {
+ 			hci_conn_drop(params->conn);
+ 			hci_conn_put(params->conn);
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index b5b1b610df335..1bcb54272dc67 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -2160,15 +2160,23 @@ static int hci_le_del_accept_list_sync(struct hci_dev *hdev,
+ 	return 0;
+ }
+ 
++struct conn_params {
++	bdaddr_t addr;
++	u8 addr_type;
++	hci_conn_flags_t flags;
++	u8 privacy_mode;
++};
++
+ /* Adds connection to resolve list if needed.
+  * Setting params to NULL programs local hdev->irk
+  */
+ static int hci_le_add_resolve_list_sync(struct hci_dev *hdev,
+-					struct hci_conn_params *params)
++					struct conn_params *params)
+ {
+ 	struct hci_cp_le_add_to_resolv_list cp;
+ 	struct smp_irk *irk;
+ 	struct bdaddr_list_with_irk *entry;
++	struct hci_conn_params *p;
+ 
+ 	if (!use_ll_privacy(hdev))
+ 		return 0;
+@@ -2203,6 +2211,16 @@ static int hci_le_add_resolve_list_sync(struct hci_dev *hdev,
+ 	/* Default privacy mode is always Network */
+ 	params->privacy_mode = HCI_NETWORK_PRIVACY;
+ 
++	rcu_read_lock();
++	p = hci_pend_le_action_lookup(&hdev->pend_le_conns,
++				      &params->addr, params->addr_type);
++	if (!p)
++		p = hci_pend_le_action_lookup(&hdev->pend_le_reports,
++					      &params->addr, params->addr_type);
++	if (p)
++		WRITE_ONCE(p->privacy_mode, HCI_NETWORK_PRIVACY);
++	rcu_read_unlock();
++
+ done:
+ 	if (hci_dev_test_flag(hdev, HCI_PRIVACY))
+ 		memcpy(cp.local_irk, hdev->irk, 16);
+@@ -2215,7 +2233,7 @@ done:
+ 
+ /* Set Device Privacy Mode. */
+ static int hci_le_set_privacy_mode_sync(struct hci_dev *hdev,
+-					struct hci_conn_params *params)
++					struct conn_params *params)
+ {
+ 	struct hci_cp_le_set_privacy_mode cp;
+ 	struct smp_irk *irk;
+@@ -2240,6 +2258,8 @@ static int hci_le_set_privacy_mode_sync(struct hci_dev *hdev,
+ 	bacpy(&cp.bdaddr, &irk->bdaddr);
+ 	cp.mode = HCI_DEVICE_PRIVACY;
+ 
++	/* Note: params->privacy_mode is not updated since it is a copy */
++
+ 	return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_PRIVACY_MODE,
+ 				     sizeof(cp), &cp, HCI_CMD_TIMEOUT);
+ }
+@@ -2249,7 +2269,7 @@ static int hci_le_set_privacy_mode_sync(struct hci_dev *hdev,
+  * properly set the privacy mode.
+  */
+ static int hci_le_add_accept_list_sync(struct hci_dev *hdev,
+-				       struct hci_conn_params *params,
++				       struct conn_params *params,
+ 				       u8 *num_entries)
+ {
+ 	struct hci_cp_le_add_to_accept_list cp;
+@@ -2447,6 +2467,52 @@ struct sk_buff *hci_read_local_oob_data_sync(struct hci_dev *hdev,
+ 	return __hci_cmd_sync_sk(hdev, opcode, 0, NULL, 0, HCI_CMD_TIMEOUT, sk);
+ }
+ 
++static struct conn_params *conn_params_copy(struct list_head *list, size_t *n)
++{
++	struct hci_conn_params *params;
++	struct conn_params *p;
++	size_t i;
++
++	rcu_read_lock();
++
++	i = 0;
++	list_for_each_entry_rcu(params, list, action)
++		++i;
++	*n = i;
++
++	rcu_read_unlock();
++
++	p = kvcalloc(*n, sizeof(struct conn_params), GFP_KERNEL);
++	if (!p)
++		return NULL;
++
++	rcu_read_lock();
++
++	i = 0;
++	list_for_each_entry_rcu(params, list, action) {
++		/* Racing adds are handled in next scan update */
++		if (i >= *n)
++			break;
++
++		/* No hdev->lock, but: addr, addr_type are immutable.
++		 * privacy_mode is only written by us or in
++		 * hci_cc_le_set_privacy_mode that we wait for.
++		 * We should be idempotent so MGMT updating flags
++		 * while we are processing is OK.
++		 */
++		bacpy(&p[i].addr, &params->addr);
++		p[i].addr_type = params->addr_type;
++		p[i].flags = READ_ONCE(params->flags);
++		p[i].privacy_mode = READ_ONCE(params->privacy_mode);
++		++i;
++	}
++
++	rcu_read_unlock();
++
++	*n = i;
++	return p;
++}
++
+ /* Device must not be scanning when updating the accept list.
+  *
+  * Update is done using the following sequence:
+@@ -2466,11 +2532,12 @@ struct sk_buff *hci_read_local_oob_data_sync(struct hci_dev *hdev,
+  */
+ static u8 hci_update_accept_list_sync(struct hci_dev *hdev)
+ {
+-	struct hci_conn_params *params;
++	struct conn_params *params;
+ 	struct bdaddr_list *b, *t;
+ 	u8 num_entries = 0;
+ 	bool pend_conn, pend_report;
+ 	u8 filter_policy;
++	size_t i, n;
+ 	int err;
+ 
+ 	/* Pause advertising if resolving list can be used as controllers
+@@ -2504,6 +2571,7 @@ static u8 hci_update_accept_list_sync(struct hci_dev *hdev)
+ 		if (hci_conn_hash_lookup_le(hdev, &b->bdaddr, b->bdaddr_type))
+ 			continue;
+ 
++		/* Pointers not dereferenced, no locks needed */
+ 		pend_conn = hci_pend_le_action_lookup(&hdev->pend_le_conns,
+ 						      &b->bdaddr,
+ 						      b->bdaddr_type);
+@@ -2532,23 +2600,50 @@ static u8 hci_update_accept_list_sync(struct hci_dev *hdev)
+ 	 * available accept list entries in the controller, then
+ 	 * just abort and return filter policy value to not use the
+ 	 * accept list.
++	 *
++	 * The list and params may be mutated while we wait for events,
++	 * so make a copy and iterate it.
+ 	 */
+-	list_for_each_entry(params, &hdev->pend_le_conns, action) {
+-		err = hci_le_add_accept_list_sync(hdev, params, &num_entries);
+-		if (err)
++
++	params = conn_params_copy(&hdev->pend_le_conns, &n);
++	if (!params) {
++		err = -ENOMEM;
++		goto done;
++	}
++
++	for (i = 0; i < n; ++i) {
++		err = hci_le_add_accept_list_sync(hdev, &params[i],
++						  &num_entries);
++		if (err) {
++			kvfree(params);
+ 			goto done;
++		}
+ 	}
+ 
++	kvfree(params);
++
+ 	/* After adding all new pending connections, walk through
+ 	 * the list of pending reports and also add these to the
+ 	 * accept list if there is still space. Abort if space runs out.
+ 	 */
+-	list_for_each_entry(params, &hdev->pend_le_reports, action) {
+-		err = hci_le_add_accept_list_sync(hdev, params, &num_entries);
+-		if (err)
++
++	params = conn_params_copy(&hdev->pend_le_reports, &n);
++	if (!params) {
++		err = -ENOMEM;
++		goto done;
++	}
++
++	for (i = 0; i < n; ++i) {
++		err = hci_le_add_accept_list_sync(hdev, &params[i],
++						  &num_entries);
++		if (err) {
++			kvfree(params);
+ 			goto done;
++		}
+ 	}
+ 
++	kvfree(params);
++
+ 	/* Use the allowlist unless the following conditions are all true:
+ 	 * - We are not currently suspending
+ 	 * - There are 1 or more ADV monitors registered and it's not offloaded
+@@ -4839,12 +4934,12 @@ static void hci_pend_le_actions_clear(struct hci_dev *hdev)
+ 	struct hci_conn_params *p;
+ 
+ 	list_for_each_entry(p, &hdev->le_conn_params, list) {
++		hci_pend_le_list_del_init(p);
+ 		if (p->conn) {
+ 			hci_conn_drop(p->conn);
+ 			hci_conn_put(p->conn);
+ 			p->conn = NULL;
+ 		}
+-		list_del_init(&p->action);
+ 	}
+ 
+ 	BT_DBG("All LE pending actions cleared");
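
conn_params_copy() above is the other half of the RCU conversion: because hci_update_accept_list_sync() sleeps between HCI commands, it cannot hold a lock across the walk, so it snapshots the list into a flat array first. The shape, condensed from the hunk (kernel-style sketch, declarations abbreviated):

struct conn_params *p;
size_t i = 0, n = 0;

/* pass 1: size the snapshot under RCU */
rcu_read_lock();
list_for_each_entry_rcu(params, list, action)
	n++;
rcu_read_unlock();

p = kvcalloc(n, sizeof(*p), GFP_KERNEL);	/* may sleep: no locks held */
if (!p)
	return NULL;

/* pass 2: fill it under RCU, clamped in case the list grew meanwhile */
rcu_read_lock();
list_for_each_entry_rcu(params, list, action) {
	if (i >= n)
		break;			/* racing adds caught by next update */
	bacpy(&p[i].addr, &params->addr);	/* immutable fields */
	p[i].privacy_mode = READ_ONCE(params->privacy_mode);
	i++;
}
rcu_read_unlock();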
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 34d55a85d8f6f..94d5bc104fede 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -123,8 +123,11 @@ static struct iso_conn *iso_conn_add(struct hci_conn *hcon)
+ {
+ 	struct iso_conn *conn = hcon->iso_data;
+ 
+-	if (conn)
++	if (conn) {
++		if (!conn->hcon)
++			conn->hcon = hcon;
+ 		return conn;
++	}
+ 
+ 	conn = kzalloc(sizeof(*conn), GFP_KERNEL);
+ 	if (!conn)
+@@ -300,14 +303,13 @@ static int iso_connect_bis(struct sock *sk)
+ 		goto unlock;
+ 	}
+ 
+-	hci_dev_unlock(hdev);
+-	hci_dev_put(hdev);
++	lock_sock(sk);
+ 
+ 	err = iso_chan_add(conn, sk, NULL);
+-	if (err)
+-		return err;
+-
+-	lock_sock(sk);
++	if (err) {
++		release_sock(sk);
++		goto unlock;
++	}
+ 
+ 	/* Update source addr of the socket */
+ 	bacpy(&iso_pi(sk)->src, &hcon->src);
+@@ -321,7 +323,6 @@ static int iso_connect_bis(struct sock *sk)
+ 	}
+ 
+ 	release_sock(sk);
+-	return err;
+ 
+ unlock:
+ 	hci_dev_unlock(hdev);
+@@ -389,14 +390,13 @@ static int iso_connect_cis(struct sock *sk)
+ 		goto unlock;
+ 	}
+ 
+-	hci_dev_unlock(hdev);
+-	hci_dev_put(hdev);
++	lock_sock(sk);
+ 
+ 	err = iso_chan_add(conn, sk, NULL);
+-	if (err)
+-		return err;
+-
+-	lock_sock(sk);
++	if (err) {
++		release_sock(sk);
++		goto unlock;
++	}
+ 
+ 	/* Update source addr of the socket */
+ 	bacpy(&iso_pi(sk)->src, &hcon->src);
+@@ -413,7 +413,6 @@ static int iso_connect_cis(struct sock *sk)
+ 	}
+ 
+ 	release_sock(sk);
+-	return err;
+ 
+ unlock:
+ 	hci_dev_unlock(hdev);
+@@ -1072,8 +1071,8 @@ static int iso_sock_sendmsg(struct socket *sock, struct msghdr *msg,
+ 			    size_t len)
+ {
+ 	struct sock *sk = sock->sk;
+-	struct iso_conn *conn = iso_pi(sk)->conn;
+ 	struct sk_buff *skb, **frag;
++	size_t mtu;
+ 	int err;
+ 
+ 	BT_DBG("sock %p, sk %p", sock, sk);
+@@ -1085,11 +1084,18 @@ static int iso_sock_sendmsg(struct socket *sock, struct msghdr *msg,
+ 	if (msg->msg_flags & MSG_OOB)
+ 		return -EOPNOTSUPP;
+ 
+-	if (sk->sk_state != BT_CONNECTED)
++	lock_sock(sk);
++
++	if (sk->sk_state != BT_CONNECTED) {
++		release_sock(sk);
+ 		return -ENOTCONN;
++	}
++
++	mtu = iso_pi(sk)->conn->hcon->hdev->iso_mtu;
++
++	release_sock(sk);
+ 
+-	skb = bt_skb_sendmsg(sk, msg, len, conn->hcon->hdev->iso_mtu,
+-			     HCI_ISO_DATA_HDR_SIZE, 0);
++	skb = bt_skb_sendmsg(sk, msg, len, mtu, HCI_ISO_DATA_HDR_SIZE, 0);
+ 	if (IS_ERR(skb))
+ 		return PTR_ERR(skb);
+ 
+@@ -1102,8 +1108,7 @@ static int iso_sock_sendmsg(struct socket *sock, struct msghdr *msg,
+ 	while (len) {
+ 		struct sk_buff *tmp;
+ 
+-		tmp = bt_skb_sendmsg(sk, msg, len, conn->hcon->hdev->iso_mtu,
+-				     0, 0);
++		tmp = bt_skb_sendmsg(sk, msg, len, mtu, 0, 0);
+ 		if (IS_ERR(tmp)) {
+ 			kfree_skb(skb);
+ 			return PTR_ERR(tmp);
+@@ -1158,15 +1163,19 @@ static int iso_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+ 	BT_DBG("sk %p", sk);
+ 
+ 	if (test_and_clear_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) {
++		lock_sock(sk);
+ 		switch (sk->sk_state) {
+ 		case BT_CONNECT2:
+-			lock_sock(sk);
+ 			iso_conn_defer_accept(pi->conn->hcon);
+ 			sk->sk_state = BT_CONFIG;
+ 			release_sock(sk);
+ 			return 0;
+ 		case BT_CONNECT:
++			release_sock(sk);
+ 			return iso_connect_cis(sk);
++		default:
++			release_sock(sk);
++			break;
+ 		}
+ 	}
+ 
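
The iso.c sendmsg/recvmsg changes all follow one rule: take lock_sock() just long enough to validate state and copy out what the slow path needs, then release it before anything that can block. A runnable userspace analogue with pthreads (struct and field names invented):

#include <pthread.h>
#include <stddef.h>

struct conn {
	pthread_mutex_t lock;
	int connected;
	size_t mtu;
};

static int conn_send(struct conn *c)
{
	size_t mtu;

	pthread_mutex_lock(&c->lock);
	if (!c->connected) {
		pthread_mutex_unlock(&c->lock);
		return -1;		/* -ENOTCONN in the kernel code */
	}
	mtu = c->mtu;			/* snapshot while it cannot change */
	pthread_mutex_unlock(&c->lock);

	/* build and send fragments of at most 'mtu' bytes here, without
	 * holding the lock across the blocking work */
	(void)mtu;
	return 0;
}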
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index f7b2d0971f240..1e07d0f289723 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -1297,15 +1297,15 @@ static void restart_le_actions(struct hci_dev *hdev)
+ 		/* Needed for AUTO_OFF case where might not "really"
+ 		 * have been powered off.
+ 		 */
+-		list_del_init(&p->action);
++		hci_pend_le_list_del_init(p);
+ 
+ 		switch (p->auto_connect) {
+ 		case HCI_AUTO_CONN_DIRECT:
+ 		case HCI_AUTO_CONN_ALWAYS:
+-			list_add(&p->action, &hdev->pend_le_conns);
++			hci_pend_le_list_add(p, &hdev->pend_le_conns);
+ 			break;
+ 		case HCI_AUTO_CONN_REPORT:
+-			list_add(&p->action, &hdev->pend_le_reports);
++			hci_pend_le_list_add(p, &hdev->pend_le_reports);
+ 			break;
+ 		default:
+ 			break;
+@@ -5169,7 +5169,7 @@ static int set_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
+ 		goto unlock;
+ 	}
+ 
+-	params->flags = current_flags;
++	WRITE_ONCE(params->flags, current_flags);
+ 	status = MGMT_STATUS_SUCCESS;
+ 
+ 	/* Update passive scan if HCI_CONN_FLAG_DEVICE_PRIVACY
+@@ -7580,7 +7580,7 @@ static int hci_conn_params_set(struct hci_dev *hdev, bdaddr_t *addr,
+ 	if (params->auto_connect == auto_connect)
+ 		return 0;
+ 
+-	list_del_init(&params->action);
++	hci_pend_le_list_del_init(params);
+ 
+ 	switch (auto_connect) {
+ 	case HCI_AUTO_CONN_DISABLED:
+@@ -7589,18 +7589,18 @@ static int hci_conn_params_set(struct hci_dev *hdev, bdaddr_t *addr,
+ 		 * connect to device, keep connecting.
+ 		 */
+ 		if (params->explicit_connect)
+-			list_add(&params->action, &hdev->pend_le_conns);
++			hci_pend_le_list_add(params, &hdev->pend_le_conns);
+ 		break;
+ 	case HCI_AUTO_CONN_REPORT:
+ 		if (params->explicit_connect)
+-			list_add(&params->action, &hdev->pend_le_conns);
++			hci_pend_le_list_add(params, &hdev->pend_le_conns);
+ 		else
+-			list_add(&params->action, &hdev->pend_le_reports);
++			hci_pend_le_list_add(params, &hdev->pend_le_reports);
+ 		break;
+ 	case HCI_AUTO_CONN_DIRECT:
+ 	case HCI_AUTO_CONN_ALWAYS:
+ 		if (!is_connected(hdev, addr, addr_type))
+-			list_add(&params->action, &hdev->pend_le_conns);
++			hci_pend_le_list_add(params, &hdev->pend_le_conns);
+ 		break;
+ 	}
+ 
+@@ -7823,9 +7823,7 @@ static int remove_device(struct sock *sk, struct hci_dev *hdev,
+ 			goto unlock;
+ 		}
+ 
+-		list_del(&params->action);
+-		list_del(&params->list);
+-		kfree(params);
++		hci_conn_params_free(params);
+ 
+ 		device_removed(sk, hdev, &cp->addr.bdaddr, cp->addr.type);
+ 	} else {
+@@ -7856,9 +7854,7 @@ static int remove_device(struct sock *sk, struct hci_dev *hdev,
+ 				p->auto_connect = HCI_AUTO_CONN_EXPLICIT;
+ 				continue;
+ 			}
+-			list_del(&p->action);
+-			list_del(&p->list);
+-			kfree(p);
++			hci_conn_params_free(p);
+ 		}
+ 
+ 		bt_dev_dbg(hdev, "All LE connection parameters were removed");
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index cd1a27ac555d0..7762604ddfc05 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -126,8 +126,11 @@ static struct sco_conn *sco_conn_add(struct hci_conn *hcon)
+ 	struct hci_dev *hdev = hcon->hdev;
+ 	struct sco_conn *conn = hcon->sco_data;
+ 
+-	if (conn)
++	if (conn) {
++		if (!conn->hcon)
++			conn->hcon = hcon;
+ 		return conn;
++	}
+ 
+ 	conn = kzalloc(sizeof(struct sco_conn), GFP_KERNEL);
+ 	if (!conn)
+@@ -268,21 +271,21 @@ static int sco_connect(struct sock *sk)
+ 		goto unlock;
+ 	}
+ 
+-	hci_dev_unlock(hdev);
+-	hci_dev_put(hdev);
+-
+ 	conn = sco_conn_add(hcon);
+ 	if (!conn) {
+ 		hci_conn_drop(hcon);
+-		return -ENOMEM;
++		err = -ENOMEM;
++		goto unlock;
+ 	}
+ 
+-	err = sco_chan_add(conn, sk, NULL);
+-	if (err)
+-		return err;
+-
+ 	lock_sock(sk);
+ 
++	err = sco_chan_add(conn, sk, NULL);
++	if (err) {
++		release_sock(sk);
++		goto unlock;
++	}
++
+ 	/* Update source addr of the socket */
+ 	bacpy(&sco_pi(sk)->src, &hcon->src);
+ 
+@@ -296,8 +299,6 @@ static int sco_connect(struct sock *sk)
+ 
+ 	release_sock(sk);
+ 
+-	return err;
+-
+ unlock:
+ 	hci_dev_unlock(hdev);
+ 	hci_dev_put(hdev);
+diff --git a/net/bridge/br_stp_if.c b/net/bridge/br_stp_if.c
+index 75204d36d7f90..b65962682771f 100644
+--- a/net/bridge/br_stp_if.c
++++ b/net/bridge/br_stp_if.c
+@@ -201,6 +201,9 @@ int br_stp_set_enabled(struct net_bridge *br, unsigned long val,
+ {
+ 	ASSERT_RTNL();
+ 
++	if (!net_eq(dev_net(br->dev), &init_net))
++		NL_SET_ERR_MSG_MOD(extack, "STP does not work in non-root netns");
++
+ 	if (br_mrp_enabled(br)) {
+ 		NL_SET_ERR_MSG_MOD(extack,
+ 				   "STP can't be enabled if MRP is already enabled");
+diff --git a/net/can/bcm.c b/net/can/bcm.c
+index a962ec2b8ba5b..925d48cc50f81 100644
+--- a/net/can/bcm.c
++++ b/net/can/bcm.c
+@@ -1526,6 +1526,12 @@ static int bcm_release(struct socket *sock)
+ 
+ 	lock_sock(sk);
+ 
++#if IS_ENABLED(CONFIG_PROC_FS)
++	/* remove procfs entry */
++	if (net->can.bcmproc_dir && bo->bcm_proc_read)
++		remove_proc_entry(bo->procname, net->can.bcmproc_dir);
++#endif /* CONFIG_PROC_FS */
++
+ 	list_for_each_entry_safe(op, next, &bo->tx_ops, list)
+ 		bcm_remove_op(op);
+ 
+@@ -1561,12 +1567,6 @@ static int bcm_release(struct socket *sock)
+ 	list_for_each_entry_safe(op, next, &bo->rx_ops, list)
+ 		bcm_remove_op(op);
+ 
+-#if IS_ENABLED(CONFIG_PROC_FS)
+-	/* remove procfs entry */
+-	if (net->can.bcmproc_dir && bo->bcm_proc_read)
+-		remove_proc_entry(bo->procname, net->can.bcmproc_dir);
+-#endif /* CONFIG_PROC_FS */
+-
+ 	/* remove device reference */
+ 	if (bo->bound) {
+ 		bo->bound   = 0;
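
The bcm.c hunk is purely about ordering: the procfs entry is the only externally reachable handle onto the socket's operation lists, so it must disappear before those lists are torn down, or a concurrent /proc read can walk freed memory. Schematically (tear_down_op_lists() is an invented placeholder for the list cleanup that follows in the real function):

/* 1. unpublish: once remove_proc_entry() returns, no new reader can
 *    enter and in-flight readers have completed. */
remove_proc_entry(bo->procname, net->can.bcmproc_dir);

/* 2. only now is it safe to empty bo->tx_ops / bo->rx_ops */
tear_down_op_lists(bo);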
+diff --git a/net/devlink/health.c b/net/devlink/health.c
+index 0839706d5741a..194340a8bb863 100644
+--- a/net/devlink/health.c
++++ b/net/devlink/health.c
+@@ -480,7 +480,7 @@ static void devlink_recover_notify(struct devlink_health_reporter *reporter,
+ 	int err;
+ 
+ 	WARN_ON(cmd != DEVLINK_CMD_HEALTH_REPORTER_RECOVER);
+-	WARN_ON(!xa_get_mark(&devlinks, devlink->index, DEVLINK_REGISTERED));
++	ASSERT_DEVLINK_REGISTERED(devlink);
+ 
+ 	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+ 	if (!msg)
+diff --git a/net/devlink/leftover.c b/net/devlink/leftover.c
+index cd02549680767..790e61b2a9404 100644
+--- a/net/devlink/leftover.c
++++ b/net/devlink/leftover.c
+@@ -6772,7 +6772,10 @@ void devlink_notify_unregister(struct devlink *devlink)
+ 
+ static void devlink_port_type_warn(struct work_struct *work)
+ {
+-	WARN(true, "Type was not set for devlink port.");
++	struct devlink_port *port = container_of(to_delayed_work(work),
++						 struct devlink_port,
++						 type_warn_dw);
++	dev_warn(port->devlink->dev, "Type was not set for devlink port.");
+ }
+ 
+ static bool devlink_port_type_should_warn(struct devlink_port *devlink_port)
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index ba06ed42e4284..2be2d49225573 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -1132,7 +1132,7 @@ static int esp_init_authenc(struct xfrm_state *x,
+ 	err = crypto_aead_setkey(aead, key, keylen);
+ 
+ free_key:
+-	kfree(key);
++	kfree_sensitive(key);
+ 
+ error:
+ 	return err;
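
The esp4.c one-liner swaps kfree() for kfree_sensitive() so the temporary authenc key blob is wiped before its memory is reused. A runnable userspace equivalent; explicit_bzero() is a glibc/BSD extension, used here because a plain memset() before free() may be optimised away:

#include <stdlib.h>
#include <string.h>

static void free_sensitive(void *p, size_t len)
{
	if (!p)
		return;
	explicit_bzero(p, len);	/* wipe key material; not elidable */
	free(p);
}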
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 1386787eaf1a5..3105a676eba76 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -1016,7 +1016,7 @@ static void reqsk_timer_handler(struct timer_list *t)
+ 
+ 	icsk = inet_csk(sk_listener);
+ 	net = sock_net(sk_listener);
+-	max_syn_ack_retries = icsk->icsk_syn_retries ? :
++	max_syn_ack_retries = READ_ONCE(icsk->icsk_syn_retries) ? :
+ 		READ_ONCE(net->ipv4.sysctl_tcp_synack_retries);
+ 	/* Normally all the openreqs are young and become mature
+ 	 * (i.e. converted to established socket) for first timeout.
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index e7391bf310a75..0819d6001b9ab 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -650,20 +650,8 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
+ 	spin_lock(lock);
+ 	if (osk) {
+ 		WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
+-		ret = sk_hashed(osk);
+-		if (ret) {
+-			/* Before deleting the node, we insert a new one to make
+-			 * sure that the look-up-sk process would not miss either
+-			 * of them and that at least one node would exist in ehash
+-			 * table all the time. Otherwise there's a tiny chance
+-			 * that lookup process could find nothing in ehash table.
+-			 */
+-			__sk_nulls_add_node_tail_rcu(sk, list);
+-			sk_nulls_del_node_init_rcu(osk);
+-		}
+-		goto unlock;
+-	}
+-	if (found_dup_sk) {
++		ret = sk_nulls_del_node_init_rcu(osk);
++	} else if (found_dup_sk) {
+ 		*found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
+ 		if (*found_dup_sk)
+ 			ret = false;
+@@ -672,7 +660,6 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
+ 	if (ret)
+ 		__sk_nulls_add_node_rcu(sk, list);
+ 
+-unlock:
+ 	spin_unlock(lock);
+ 
+ 	return ret;
+diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
+index 40052414c7c71..2c1b245dba8e8 100644
+--- a/net/ipv4/inet_timewait_sock.c
++++ b/net/ipv4/inet_timewait_sock.c
+@@ -88,10 +88,10 @@ void inet_twsk_put(struct inet_timewait_sock *tw)
+ }
+ EXPORT_SYMBOL_GPL(inet_twsk_put);
+ 
+-static void inet_twsk_add_node_tail_rcu(struct inet_timewait_sock *tw,
+-					struct hlist_nulls_head *list)
++static void inet_twsk_add_node_rcu(struct inet_timewait_sock *tw,
++				   struct hlist_nulls_head *list)
+ {
+-	hlist_nulls_add_tail_rcu(&tw->tw_node, list);
++	hlist_nulls_add_head_rcu(&tw->tw_node, list);
+ }
+ 
+ static void inet_twsk_add_bind_node(struct inet_timewait_sock *tw,
+@@ -144,7 +144,7 @@ void inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
+ 
+ 	spin_lock(lock);
+ 
+-	inet_twsk_add_node_tail_rcu(tw, &ehead->chain);
++	inet_twsk_add_node_rcu(tw, &ehead->chain);
+ 
+ 	/* Step 3: Remove SK from hash chain */
+ 	if (__sk_nulls_del_node_init_rcu(sk))
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 61892268e8a6c..a1bead441026e 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -1692,7 +1692,7 @@ void ip_send_unicast_reply(struct sock *sk, struct sk_buff *skb,
+ 			   const struct ip_options *sopt,
+ 			   __be32 daddr, __be32 saddr,
+ 			   const struct ip_reply_arg *arg,
+-			   unsigned int len, u64 transmit_time)
++			   unsigned int len, u64 transmit_time, u32 txhash)
+ {
+ 	struct ip_options_data replyopts;
+ 	struct ipcm_cookie ipc;
+@@ -1755,6 +1755,8 @@ void ip_send_unicast_reply(struct sock *sk, struct sk_buff *skb,
+ 								arg->csum));
+ 		nskb->ip_summed = CHECKSUM_NONE;
+ 		nskb->mono_delivery_time = !!transmit_time;
++		if (txhash)
++			skb_set_hash(nskb, txhash, PKT_HASH_TYPE_L4);
+ 		ip_push_pending_frames(sk, &fl4);
+ 	}
+ out:
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 8d20d9221238c..79f29e138fc9f 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -3400,7 +3400,7 @@ int tcp_sock_set_syncnt(struct sock *sk, int val)
+ 		return -EINVAL;
+ 
+ 	lock_sock(sk);
+-	inet_csk(sk)->icsk_syn_retries = val;
++	WRITE_ONCE(inet_csk(sk)->icsk_syn_retries, val);
+ 	release_sock(sk);
+ 	return 0;
+ }
+@@ -3409,7 +3409,7 @@ EXPORT_SYMBOL(tcp_sock_set_syncnt);
+ void tcp_sock_set_user_timeout(struct sock *sk, u32 val)
+ {
+ 	lock_sock(sk);
+-	inet_csk(sk)->icsk_user_timeout = val;
++	WRITE_ONCE(inet_csk(sk)->icsk_user_timeout, val);
+ 	release_sock(sk);
+ }
+ EXPORT_SYMBOL(tcp_sock_set_user_timeout);
+@@ -3421,7 +3421,8 @@ int tcp_sock_set_keepidle_locked(struct sock *sk, int val)
+ 	if (val < 1 || val > MAX_TCP_KEEPIDLE)
+ 		return -EINVAL;
+ 
+-	tp->keepalive_time = val * HZ;
++	/* Paired with WRITE_ONCE() in keepalive_time_when() */
++	WRITE_ONCE(tp->keepalive_time, val * HZ);
+ 	if (sock_flag(sk, SOCK_KEEPOPEN) &&
+ 	    !((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN))) {
+ 		u32 elapsed = keepalive_time_elapsed(tp);
+@@ -3453,7 +3454,7 @@ int tcp_sock_set_keepintvl(struct sock *sk, int val)
+ 		return -EINVAL;
+ 
+ 	lock_sock(sk);
+-	tcp_sk(sk)->keepalive_intvl = val * HZ;
++	WRITE_ONCE(tcp_sk(sk)->keepalive_intvl, val * HZ);
+ 	release_sock(sk);
+ 	return 0;
+ }
+@@ -3465,7 +3466,8 @@ int tcp_sock_set_keepcnt(struct sock *sk, int val)
+ 		return -EINVAL;
+ 
+ 	lock_sock(sk);
+-	tcp_sk(sk)->keepalive_probes = val;
++	/* Paired with READ_ONCE() in keepalive_probes() */
++	WRITE_ONCE(tcp_sk(sk)->keepalive_probes, val);
+ 	release_sock(sk);
+ 	return 0;
+ }
+@@ -3667,19 +3669,19 @@ int do_tcp_setsockopt(struct sock *sk, int level, int optname,
+ 		if (val < 1 || val > MAX_TCP_KEEPINTVL)
+ 			err = -EINVAL;
+ 		else
+-			tp->keepalive_intvl = val * HZ;
++			WRITE_ONCE(tp->keepalive_intvl, val * HZ);
+ 		break;
+ 	case TCP_KEEPCNT:
+ 		if (val < 1 || val > MAX_TCP_KEEPCNT)
+ 			err = -EINVAL;
+ 		else
+-			tp->keepalive_probes = val;
++			WRITE_ONCE(tp->keepalive_probes, val);
+ 		break;
+ 	case TCP_SYNCNT:
+ 		if (val < 1 || val > MAX_TCP_SYNCNT)
+ 			err = -EINVAL;
+ 		else
+-			icsk->icsk_syn_retries = val;
++			WRITE_ONCE(icsk->icsk_syn_retries, val);
+ 		break;
+ 
+ 	case TCP_SAVE_SYN:
+@@ -3692,18 +3694,18 @@ int do_tcp_setsockopt(struct sock *sk, int level, int optname,
+ 
+ 	case TCP_LINGER2:
+ 		if (val < 0)
+-			tp->linger2 = -1;
++			WRITE_ONCE(tp->linger2, -1);
+ 		else if (val > TCP_FIN_TIMEOUT_MAX / HZ)
+-			tp->linger2 = TCP_FIN_TIMEOUT_MAX;
++			WRITE_ONCE(tp->linger2, TCP_FIN_TIMEOUT_MAX);
+ 		else
+-			tp->linger2 = val * HZ;
++			WRITE_ONCE(tp->linger2, val * HZ);
+ 		break;
+ 
+ 	case TCP_DEFER_ACCEPT:
+ 		/* Translate value in seconds to number of retransmits */
+-		icsk->icsk_accept_queue.rskq_defer_accept =
+-			secs_to_retrans(val, TCP_TIMEOUT_INIT / HZ,
+-					TCP_RTO_MAX / HZ);
++		WRITE_ONCE(icsk->icsk_accept_queue.rskq_defer_accept,
++			   secs_to_retrans(val, TCP_TIMEOUT_INIT / HZ,
++					   TCP_RTO_MAX / HZ));
+ 		break;
+ 
+ 	case TCP_WINDOW_CLAMP:
+@@ -3727,7 +3729,7 @@ int do_tcp_setsockopt(struct sock *sk, int level, int optname,
+ 		if (val < 0)
+ 			err = -EINVAL;
+ 		else
+-			icsk->icsk_user_timeout = val;
++			WRITE_ONCE(icsk->icsk_user_timeout, val);
+ 		break;
+ 
+ 	case TCP_FASTOPEN:
+@@ -3765,13 +3767,13 @@ int do_tcp_setsockopt(struct sock *sk, int level, int optname,
+ 		if (!tp->repair)
+ 			err = -EPERM;
+ 		else
+-			tp->tsoffset = val - tcp_time_stamp_raw();
++			WRITE_ONCE(tp->tsoffset, val - tcp_time_stamp_raw());
+ 		break;
+ 	case TCP_REPAIR_WINDOW:
+ 		err = tcp_repair_set_window(tp, optval, optlen);
+ 		break;
+ 	case TCP_NOTSENT_LOWAT:
+-		tp->notsent_lowat = val;
++		WRITE_ONCE(tp->notsent_lowat, val);
+ 		sk->sk_write_space(sk);
+ 		break;
+ 	case TCP_INQ:
+@@ -3783,7 +3785,7 @@ int do_tcp_setsockopt(struct sock *sk, int level, int optname,
+ 	case TCP_TX_DELAY:
+ 		if (val)
+ 			tcp_enable_tx_delay();
+-		tp->tcp_tx_delay = val;
++		WRITE_ONCE(tp->tcp_tx_delay, val);
+ 		break;
+ 	default:
+ 		err = -ENOPROTOOPT;
+@@ -4100,17 +4102,18 @@ int do_tcp_getsockopt(struct sock *sk, int level,
+ 		val = keepalive_probes(tp);
+ 		break;
+ 	case TCP_SYNCNT:
+-		val = icsk->icsk_syn_retries ? :
++		val = READ_ONCE(icsk->icsk_syn_retries) ? :
+ 			READ_ONCE(net->ipv4.sysctl_tcp_syn_retries);
+ 		break;
+ 	case TCP_LINGER2:
+-		val = tp->linger2;
++		val = READ_ONCE(tp->linger2);
+ 		if (val >= 0)
+ 			val = (val ? : READ_ONCE(net->ipv4.sysctl_tcp_fin_timeout)) / HZ;
+ 		break;
+ 	case TCP_DEFER_ACCEPT:
+-		val = retrans_to_secs(icsk->icsk_accept_queue.rskq_defer_accept,
+-				      TCP_TIMEOUT_INIT / HZ, TCP_RTO_MAX / HZ);
++		val = READ_ONCE(icsk->icsk_accept_queue.rskq_defer_accept);
++		val = retrans_to_secs(val, TCP_TIMEOUT_INIT / HZ,
++				      TCP_RTO_MAX / HZ);
+ 		break;
+ 	case TCP_WINDOW_CLAMP:
+ 		val = tp->window_clamp;
+@@ -4247,11 +4250,11 @@ int do_tcp_getsockopt(struct sock *sk, int level,
+ 		break;
+ 
+ 	case TCP_USER_TIMEOUT:
+-		val = icsk->icsk_user_timeout;
++		val = READ_ONCE(icsk->icsk_user_timeout);
+ 		break;
+ 
+ 	case TCP_FASTOPEN:
+-		val = icsk->icsk_accept_queue.fastopenq.max_qlen;
++		val = READ_ONCE(icsk->icsk_accept_queue.fastopenq.max_qlen);
+ 		break;
+ 
+ 	case TCP_FASTOPEN_CONNECT:
+@@ -4263,14 +4266,14 @@ int do_tcp_getsockopt(struct sock *sk, int level,
+ 		break;
+ 
+ 	case TCP_TX_DELAY:
+-		val = tp->tcp_tx_delay;
++		val = READ_ONCE(tp->tcp_tx_delay);
+ 		break;
+ 
+ 	case TCP_TIMESTAMP:
+-		val = tcp_time_stamp_raw() + tp->tsoffset;
++		val = tcp_time_stamp_raw() + READ_ONCE(tp->tsoffset);
+ 		break;
+ 	case TCP_NOTSENT_LOWAT:
+-		val = tp->notsent_lowat;
++		val = READ_ONCE(tp->notsent_lowat);
+ 		break;
+ 	case TCP_INQ:
+ 		val = tp->recvmsg_inq;
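
The long run of tcp.c annotations above is one logical change: every field that setsockopt writes under the socket lock but that other paths read locklessly gets a WRITE_ONCE()/READ_ONCE() pair, preventing torn or re-fetched accesses. The C11 analogue is a relaxed atomic load/store, shown in a runnable sketch:

#include <stdatomic.h>
#include <stdio.h>

static _Atomic int keepalive_probes;	/* stand-in for tp->keepalive_probes */

static void set_probes(int val)		/* writer would hold the socket lock */
{
	atomic_store_explicit(&keepalive_probes, val, memory_order_relaxed);
}

static int get_probes(void)		/* lockless reader */
{
	return atomic_load_explicit(&keepalive_probes, memory_order_relaxed);
}

int main(void)
{
	set_probes(9);
	printf("%d\n", get_probes());	/* prints 9 */
	return 0;
}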
+diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
+index 45cc7f1ca2961..85e4953f11821 100644
+--- a/net/ipv4/tcp_fastopen.c
++++ b/net/ipv4/tcp_fastopen.c
+@@ -296,6 +296,7 @@ static struct sock *tcp_fastopen_create_child(struct sock *sk,
+ static bool tcp_fastopen_queue_check(struct sock *sk)
+ {
+ 	struct fastopen_queue *fastopenq;
++	int max_qlen;
+ 
+ 	/* Make sure the listener has enabled fastopen, and we don't
+ 	 * exceed the max # of pending TFO requests allowed before trying
+@@ -308,10 +309,11 @@ static bool tcp_fastopen_queue_check(struct sock *sk)
+ 	 * temporarily vs a server not supporting Fast Open at all.
+ 	 */
+ 	fastopenq = &inet_csk(sk)->icsk_accept_queue.fastopenq;
+-	if (fastopenq->max_qlen == 0)
++	max_qlen = READ_ONCE(fastopenq->max_qlen);
++	if (max_qlen == 0)
+ 		return false;
+ 
+-	if (fastopenq->qlen >= fastopenq->max_qlen) {
++	if (fastopenq->qlen >= max_qlen) {
+ 		struct request_sock *req1;
+ 		spin_lock(&fastopenq->lock);
+ 		req1 = fastopenq->rskq_rst_head;
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 06d2573685ca9..f37d13ee7b4cc 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -307,8 +307,9 @@ int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ 						  inet->inet_daddr,
+ 						  inet->inet_sport,
+ 						  usin->sin_port));
+-		tp->tsoffset = secure_tcp_ts_off(net, inet->inet_saddr,
+-						 inet->inet_daddr);
++		WRITE_ONCE(tp->tsoffset,
++			   secure_tcp_ts_off(net, inet->inet_saddr,
++					     inet->inet_daddr));
+ 	}
+ 
+ 	inet->inet_id = get_random_u16();
+@@ -692,6 +693,7 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
+ 	u64 transmit_time = 0;
+ 	struct sock *ctl_sk;
+ 	struct net *net;
++	u32 txhash = 0;
+ 
+ 	/* Never send a reset in response to a reset. */
+ 	if (th->rst)
+@@ -829,6 +831,8 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
+ 				   inet_twsk(sk)->tw_priority : sk->sk_priority;
+ 		transmit_time = tcp_transmit_time(sk);
+ 		xfrm_sk_clone_policy(ctl_sk, sk);
++		txhash = (sk->sk_state == TCP_TIME_WAIT) ?
++			 inet_twsk(sk)->tw_txhash : sk->sk_txhash;
+ 	} else {
+ 		ctl_sk->sk_mark = 0;
+ 		ctl_sk->sk_priority = 0;
+@@ -837,7 +841,7 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
+ 			      skb, &TCP_SKB_CB(skb)->header.h4.opt,
+ 			      ip_hdr(skb)->saddr, ip_hdr(skb)->daddr,
+ 			      &arg, arg.iov[0].iov_len,
+-			      transmit_time);
++			      transmit_time, txhash);
+ 
+ 	xfrm_sk_free_policy(ctl_sk);
+ 	sock_net_set(ctl_sk, &init_net);
+@@ -859,7 +863,7 @@ static void tcp_v4_send_ack(const struct sock *sk,
+ 			    struct sk_buff *skb, u32 seq, u32 ack,
+ 			    u32 win, u32 tsval, u32 tsecr, int oif,
+ 			    struct tcp_md5sig_key *key,
+-			    int reply_flags, u8 tos)
++			    int reply_flags, u8 tos, u32 txhash)
+ {
+ 	const struct tcphdr *th = tcp_hdr(skb);
+ 	struct {
+@@ -935,7 +939,7 @@ static void tcp_v4_send_ack(const struct sock *sk,
+ 			      skb, &TCP_SKB_CB(skb)->header.h4.opt,
+ 			      ip_hdr(skb)->saddr, ip_hdr(skb)->daddr,
+ 			      &arg, arg.iov[0].iov_len,
+-			      transmit_time);
++			      transmit_time, txhash);
+ 
+ 	sock_net_set(ctl_sk, &init_net);
+ 	__TCP_INC_STATS(net, TCP_MIB_OUTSEGS);
+@@ -955,7 +959,8 @@ static void tcp_v4_timewait_ack(struct sock *sk, struct sk_buff *skb)
+ 			tw->tw_bound_dev_if,
+ 			tcp_twsk_md5_key(tcptw),
+ 			tw->tw_transparent ? IP_REPLY_ARG_NOSRCCHECK : 0,
+-			tw->tw_tos
++			tw->tw_tos,
++			tw->tw_txhash
+ 			);
+ 
+ 	inet_twsk_put(tw);
+@@ -984,11 +989,12 @@ static void tcp_v4_reqsk_send_ack(const struct sock *sk, struct sk_buff *skb,
+ 			tcp_rsk(req)->rcv_nxt,
+ 			req->rsk_rcv_wnd >> inet_rsk(req)->rcv_wscale,
+ 			tcp_time_stamp_raw() + tcp_rsk(req)->ts_off,
+-			req->ts_recent,
++			READ_ONCE(req->ts_recent),
+ 			0,
+ 			tcp_md5_do_lookup(sk, l3index, addr, AF_INET),
+ 			inet_rsk(req)->no_srccheck ? IP_REPLY_ARG_NOSRCCHECK : 0,
+-			ip_hdr(skb)->tos);
++			ip_hdr(skb)->tos,
++			READ_ONCE(tcp_rsk(req)->txhash));
+ }
+ 
+ /*
+@@ -2963,7 +2969,6 @@ static int bpf_iter_tcp_seq_show(struct seq_file *seq, void *v)
+ 	struct bpf_iter_meta meta;
+ 	struct bpf_prog *prog;
+ 	struct sock *sk = v;
+-	bool slow;
+ 	uid_t uid;
+ 	int ret;
+ 
+@@ -2971,7 +2976,7 @@ static int bpf_iter_tcp_seq_show(struct seq_file *seq, void *v)
+ 		return 0;
+ 
+ 	if (sk_fullsock(sk))
+-		slow = lock_sock_fast(sk);
++		lock_sock(sk);
+ 
+ 	if (unlikely(sk_unhashed(sk))) {
+ 		ret = SEQ_SKIP;
+@@ -2995,7 +3000,7 @@ static int bpf_iter_tcp_seq_show(struct seq_file *seq, void *v)
+ 
+ unlock:
+ 	if (sk_fullsock(sk))
+-		unlock_sock_fast(sk, slow);
++		release_sock(sk);
+ 	return ret;
+ 
+ }
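
The txhash plumbing through tcp_ipv4.c exists so that RST/ACK segments generated off a socket inherit that flow's transmit hash and keep taking the same ECMP/multipath route as the flow itself. Condensed from the hunks above (kernel-style, not buildable alone):

static u32 reply_txhash(const struct sock *sk)
{
	if (!sk)
		return 0;	/* unattached replies keep txhash == 0 */
	return (sk->sk_state == TCP_TIME_WAIT) ?
	       inet_twsk(sk)->tw_txhash : sk->sk_txhash;
}

/* later, on the generated reply skb */
static void stamp_reply(struct sk_buff *nskb, u32 txhash)
{
	if (txhash)
		skb_set_hash(nskb, txhash, PKT_HASH_TYPE_L4);
}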
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index dac0d62120e62..62641d42b06b5 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -528,7 +528,7 @@ struct sock *tcp_create_openreq_child(const struct sock *sk,
+ 	newicsk->icsk_ack.lrcvtime = tcp_jiffies32;
+ 
+ 	newtp->lsndtime = tcp_jiffies32;
+-	newsk->sk_txhash = treq->txhash;
++	newsk->sk_txhash = READ_ONCE(treq->txhash);
+ 	newtp->total_retrans = req->num_retrans;
+ 
+ 	tcp_init_xmit_timers(newsk);
+@@ -555,7 +555,7 @@ struct sock *tcp_create_openreq_child(const struct sock *sk,
+ 	newtp->max_window = newtp->snd_wnd;
+ 
+ 	if (newtp->rx_opt.tstamp_ok) {
+-		newtp->rx_opt.ts_recent = req->ts_recent;
++		newtp->rx_opt.ts_recent = READ_ONCE(req->ts_recent);
+ 		newtp->rx_opt.ts_recent_stamp = ktime_get_seconds();
+ 		newtp->tcp_header_len = sizeof(struct tcphdr) + TCPOLEN_TSTAMP_ALIGNED;
+ 	} else {
+@@ -619,7 +619,7 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
+ 		tcp_parse_options(sock_net(sk), skb, &tmp_opt, 0, NULL);
+ 
+ 		if (tmp_opt.saw_tstamp) {
+-			tmp_opt.ts_recent = req->ts_recent;
++			tmp_opt.ts_recent = READ_ONCE(req->ts_recent);
+ 			if (tmp_opt.rcv_tsecr)
+ 				tmp_opt.rcv_tsecr -= tcp_rsk(req)->ts_off;
+ 			/* We do not store true stamp, but it is not required,
+@@ -758,8 +758,11 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
+ 
+ 	/* In sequence, PAWS is OK. */
+ 
++	/* TODO: We probably should defer ts_recent change once
++	 * we take ownership of @req.
++	 */
+ 	if (tmp_opt.saw_tstamp && !after(TCP_SKB_CB(skb)->seq, tcp_rsk(req)->rcv_nxt))
+-		req->ts_recent = tmp_opt.rcv_tsval;
++		WRITE_ONCE(req->ts_recent, tmp_opt.rcv_tsval);
+ 
+ 	if (TCP_SKB_CB(skb)->seq == tcp_rsk(req)->rcv_isn) {
+ 		/* Truncate SYN, it is out of window starting
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index cfe128b81a010..518cb4abc8b4f 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -876,7 +876,7 @@ static unsigned int tcp_synack_options(const struct sock *sk,
+ 	if (likely(ireq->tstamp_ok)) {
+ 		opts->options |= OPTION_TS;
+ 		opts->tsval = tcp_skb_timestamp(skb) + tcp_rsk(req)->ts_off;
+-		opts->tsecr = req->ts_recent;
++		opts->tsecr = READ_ONCE(req->ts_recent);
+ 		remaining -= TCPOLEN_TSTAMP_ALIGNED;
+ 	}
+ 	if (likely(ireq->sack_ok)) {
+@@ -3578,7 +3578,7 @@ struct sk_buff *tcp_make_synack(const struct sock *sk, struct dst_entry *dst,
+ 	rcu_read_lock();
+ 	md5 = tcp_rsk(req)->af_specific->req_md5_lookup(sk, req_to_sk(req));
+ #endif
+-	skb_set_hash(skb, tcp_rsk(req)->txhash, PKT_HASH_TYPE_L4);
++	skb_set_hash(skb, READ_ONCE(tcp_rsk(req)->txhash), PKT_HASH_TYPE_L4);
+ 	/* bpf program will be interested in the tcp_flags */
+ 	TCP_SKB_CB(skb)->tcp_flags = TCPHDR_SYN | TCPHDR_ACK;
+ 	tcp_header_size = tcp_synack_options(sk, req, mss, skb, &opts, md5,
+@@ -4121,7 +4121,7 @@ int tcp_rtx_synack(const struct sock *sk, struct request_sock *req)
+ 
+ 	/* Paired with WRITE_ONCE() in sock_setsockopt() */
+ 	if (READ_ONCE(sk->sk_txrehash) == SOCK_TXREHASH_ENABLED)
+-		tcp_rsk(req)->txhash = net_tx_rndhash();
++		WRITE_ONCE(tcp_rsk(req)->txhash, net_tx_rndhash());
+ 	res = af_ops->send_synack(sk, NULL, &fl, req, NULL, TCP_SYNACK_NORMAL,
+ 				  NULL);
+ 	if (!res) {
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index 1f01e15ca24fd..4a61832e7f69b 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -273,13 +273,20 @@ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+ 	__sum16 check;
+ 	__be16 newlen;
+ 
+-	if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST)
+-		return __udp_gso_segment_list(gso_skb, features, is_ipv6);
+-
+ 	mss = skb_shinfo(gso_skb)->gso_size;
+ 	if (gso_skb->len <= sizeof(*uh) + mss)
+ 		return ERR_PTR(-EINVAL);
+ 
++	if (skb_gso_ok(gso_skb, features | NETIF_F_GSO_ROBUST)) {
++		/* Packet is from an untrusted source, reset gso_segs. */
++		skb_shinfo(gso_skb)->gso_segs = DIV_ROUND_UP(gso_skb->len - sizeof(*uh),
++							     mss);
++		return NULL;
++	}
++
++	if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST)
++		return __udp_gso_segment_list(gso_skb, features, is_ipv6);
++
+ 	skb_pull(gso_skb, sizeof(*uh));
+ 
+ 	/* clear destructor to avoid skb_segment assigning it to tail */
+@@ -387,8 +394,7 @@ static struct sk_buff *udp4_ufo_fragment(struct sk_buff *skb,
+ 	if (!pskb_may_pull(skb, sizeof(struct udphdr)))
+ 		goto out;
+ 
+-	if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 &&
+-	    !skb_gso_ok(skb, features | NETIF_F_GSO_ROBUST))
++	if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4)
+ 		return __udp_gso_segment(skb, features, false);
+ 
+ 	mss = skb_shinfo(skb)->gso_size;
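
The udp_offload.c reorder lets the NETIF_F_GSO_ROBUST short-cut run for fraglist packets too; for those, the stack only recomputes gso_segs instead of segmenting. The count is the UDP payload divided by the MSS, rounded up; a runnable check of that arithmetic:

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int udphdr_len = 8;		/* sizeof(struct udphdr) */
	unsigned int len = udphdr_len + 2900;	/* header + payload */
	unsigned int mss = 1400;

	/* skb_shinfo(gso_skb)->gso_segs in the hunk above: prints 3 */
	printf("gso_segs = %u\n", DIV_ROUND_UP(len - udphdr_len, mss));
	return 0;
}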
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index da80974ad23ae..070d87abf7c02 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -955,7 +955,8 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ 		goto tx_err;
+ 
+ 	if (skb->len > dev->mtu + dev->hard_header_len) {
+-		pskb_trim(skb, dev->mtu + dev->hard_header_len);
++		if (pskb_trim(skb, dev->mtu + dev->hard_header_len))
++			goto tx_err;
+ 		truncate = true;
+ 	}
+ 
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 7132eb213a7a2..f7c248a7f8d1d 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1130,10 +1130,10 @@ static void tcp_v6_reqsk_send_ack(const struct sock *sk, struct sk_buff *skb,
+ 			tcp_rsk(req)->rcv_nxt,
+ 			req->rsk_rcv_wnd >> inet_rsk(req)->rcv_wscale,
+ 			tcp_time_stamp_raw() + tcp_rsk(req)->ts_off,
+-			req->ts_recent, sk->sk_bound_dev_if,
++			READ_ONCE(req->ts_recent), sk->sk_bound_dev_if,
+ 			tcp_v6_md5_do_lookup(sk, &ipv6_hdr(skb)->saddr, l3index),
+ 			ipv6_get_dsfield(ipv6_hdr(skb)), 0, sk->sk_priority,
+-			tcp_rsk(req)->txhash);
++			READ_ONCE(tcp_rsk(req)->txhash));
+ }
+ 
+ 
+diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c
+index c39c1e32f9804..e0e10f6bcdc18 100644
+--- a/net/ipv6/udp_offload.c
++++ b/net/ipv6/udp_offload.c
+@@ -42,8 +42,7 @@ static struct sk_buff *udp6_ufo_fragment(struct sk_buff *skb,
+ 		if (!pskb_may_pull(skb, sizeof(struct udphdr)))
+ 			goto out;
+ 
+-		if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 &&
+-		    !skb_gso_ok(skb, features | NETIF_F_GSO_ROBUST))
++		if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4)
+ 			return __udp_gso_segment(skb, features, true);
+ 
+ 		mss = skb_shinfo(skb)->gso_size;
+diff --git a/net/llc/llc_input.c b/net/llc/llc_input.c
+index c309b72a58779..7cac441862e21 100644
+--- a/net/llc/llc_input.c
++++ b/net/llc/llc_input.c
+@@ -163,9 +163,6 @@ int llc_rcv(struct sk_buff *skb, struct net_device *dev,
+ 	void (*sta_handler)(struct sk_buff *skb);
+ 	void (*sap_handler)(struct llc_sap *sap, struct sk_buff *skb);
+ 
+-	if (!net_eq(dev_net(dev), &init_net))
+-		goto drop;
+-
+ 	/*
+ 	 * When the interface is in promisc. mode, drop all the crap that it
+ 	 * receives, do not try to analyse it.
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 18546f9b2a63a..ccf0b3d80fd97 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -3684,8 +3684,6 @@ int nft_chain_validate(const struct nft_ctx *ctx, const struct nft_chain *chain)
+ 			if (err < 0)
+ 				return err;
+ 		}
+-
+-		cond_resched();
+ 	}
+ 
+ 	return 0;
+@@ -3709,6 +3707,8 @@ static int nft_table_validate(struct net *net, const struct nft_table *table)
+ 		err = nft_chain_validate(&ctx, chain);
+ 		if (err < 0)
+ 			return err;
++
++		cond_resched();
+ 	}
+ 
+ 	return 0;
+@@ -4086,6 +4086,8 @@ static int nf_tables_delrule(struct sk_buff *skb, const struct nfnl_info *info,
+ 		list_for_each_entry(chain, &table->chains, list) {
+ 			if (!nft_is_active_next(net, chain))
+ 				continue;
++			if (nft_chain_is_bound(chain))
++				continue;
+ 
+ 			ctx.chain = chain;
+ 			err = nft_delrule_by_chain(&ctx);
+@@ -10482,6 +10484,9 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ 
+ 	if (!tb[NFTA_VERDICT_CODE])
+ 		return -EINVAL;
++
++	/* zero padding hole for memcmp */
++	memset(data, 0, sizeof(*data));
+ 	data->verdict.code = ntohl(nla_get_be32(tb[NFTA_VERDICT_CODE]));
+ 
+ 	switch (data->verdict.code) {
+@@ -10764,6 +10769,9 @@ static void __nft_release_table(struct net *net, struct nft_table *table)
+ 	ctx.family = table->family;
+ 	ctx.table = table;
+ 	list_for_each_entry(chain, &table->chains, list) {
++		if (nft_chain_is_bound(chain))
++			continue;
++
+ 		ctx.chain = chain;
+ 		list_for_each_entry_safe(rule, nr, &chain->rules, list) {
+ 			list_del(&rule->list);
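
The memset() added to nft_verdict_init() is there because struct nft_data is later compared with memcmp(), and C leaves padding bytes indeterminate: two structs can be equal field-by-field yet differ in their padding. A runnable demonstration with an invented struct:

#include <stdio.h>
#include <string.h>

struct verdict {
	char code;	/* padding bytes follow before 'chain' */
	long chain;
};

int main(void)
{
	struct verdict a, b;

	memset(&a, 0x00, sizeof(a));	/* clean padding */
	memset(&b, 0xff, sizeof(b));	/* dirty padding */
	a.code = 1; a.chain = 0;
	b.code = 1; b.chain = 0;

	/* equal fields, unequal bytes: prints 0, which is exactly the
	 * false mismatch the kernel hunk prevents by zeroing first */
	printf("%d\n", memcmp(&a, &b, sizeof(a)) == 0);
	return 0;
}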
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 0452ee586c1cc..a81829c10feab 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -1930,7 +1930,11 @@ static void nft_pipapo_remove(const struct net *net, const struct nft_set *set,
+ 		int i, start, rules_fx;
+ 
+ 		match_start = data;
+-		match_end = (const u8 *)nft_set_ext_key_end(&e->ext)->data;
++
++		if (nft_set_ext_exists(&e->ext, NFT_SET_EXT_KEY_END))
++			match_end = (const u8 *)nft_set_ext_key_end(&e->ext)->data;
++		else
++			match_end = data;
+ 
+ 		start = first_rule;
+ 		rules_fx = rules_f0;
+diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c
+index 466c26df853a0..382c7a71f81f2 100644
+--- a/net/sched/cls_bpf.c
++++ b/net/sched/cls_bpf.c
+@@ -406,56 +406,6 @@ static int cls_bpf_prog_from_efd(struct nlattr **tb, struct cls_bpf_prog *prog,
+ 	return 0;
+ }
+ 
+-static int cls_bpf_set_parms(struct net *net, struct tcf_proto *tp,
+-			     struct cls_bpf_prog *prog, unsigned long base,
+-			     struct nlattr **tb, struct nlattr *est, u32 flags,
+-			     struct netlink_ext_ack *extack)
+-{
+-	bool is_bpf, is_ebpf, have_exts = false;
+-	u32 gen_flags = 0;
+-	int ret;
+-
+-	is_bpf = tb[TCA_BPF_OPS_LEN] && tb[TCA_BPF_OPS];
+-	is_ebpf = tb[TCA_BPF_FD];
+-	if ((!is_bpf && !is_ebpf) || (is_bpf && is_ebpf))
+-		return -EINVAL;
+-
+-	ret = tcf_exts_validate(net, tp, tb, est, &prog->exts, flags,
+-				extack);
+-	if (ret < 0)
+-		return ret;
+-
+-	if (tb[TCA_BPF_FLAGS]) {
+-		u32 bpf_flags = nla_get_u32(tb[TCA_BPF_FLAGS]);
+-
+-		if (bpf_flags & ~TCA_BPF_FLAG_ACT_DIRECT)
+-			return -EINVAL;
+-
+-		have_exts = bpf_flags & TCA_BPF_FLAG_ACT_DIRECT;
+-	}
+-	if (tb[TCA_BPF_FLAGS_GEN]) {
+-		gen_flags = nla_get_u32(tb[TCA_BPF_FLAGS_GEN]);
+-		if (gen_flags & ~CLS_BPF_SUPPORTED_GEN_FLAGS ||
+-		    !tc_flags_valid(gen_flags))
+-			return -EINVAL;
+-	}
+-
+-	prog->exts_integrated = have_exts;
+-	prog->gen_flags = gen_flags;
+-
+-	ret = is_bpf ? cls_bpf_prog_from_ops(tb, prog) :
+-		       cls_bpf_prog_from_efd(tb, prog, gen_flags, tp);
+-	if (ret < 0)
+-		return ret;
+-
+-	if (tb[TCA_BPF_CLASSID]) {
+-		prog->res.classid = nla_get_u32(tb[TCA_BPF_CLASSID]);
+-		tcf_bind_filter(tp, &prog->res, base);
+-	}
+-
+-	return 0;
+-}
+-
+ static int cls_bpf_change(struct net *net, struct sk_buff *in_skb,
+ 			  struct tcf_proto *tp, unsigned long base,
+ 			  u32 handle, struct nlattr **tca,
+@@ -463,9 +413,12 @@ static int cls_bpf_change(struct net *net, struct sk_buff *in_skb,
+ 			  struct netlink_ext_ack *extack)
+ {
+ 	struct cls_bpf_head *head = rtnl_dereference(tp->root);
++	bool is_bpf, is_ebpf, have_exts = false;
+ 	struct cls_bpf_prog *oldprog = *arg;
+ 	struct nlattr *tb[TCA_BPF_MAX + 1];
++	bool bound_to_filter = false;
+ 	struct cls_bpf_prog *prog;
++	u32 gen_flags = 0;
+ 	int ret;
+ 
+ 	if (tca[TCA_OPTIONS] == NULL)
+@@ -504,11 +457,51 @@ static int cls_bpf_change(struct net *net, struct sk_buff *in_skb,
+ 		goto errout;
+ 	prog->handle = handle;
+ 
+-	ret = cls_bpf_set_parms(net, tp, prog, base, tb, tca[TCA_RATE], flags,
+-				extack);
++	is_bpf = tb[TCA_BPF_OPS_LEN] && tb[TCA_BPF_OPS];
++	is_ebpf = tb[TCA_BPF_FD];
++	if ((!is_bpf && !is_ebpf) || (is_bpf && is_ebpf)) {
++		ret = -EINVAL;
++		goto errout_idr;
++	}
++
++	ret = tcf_exts_validate(net, tp, tb, tca[TCA_RATE], &prog->exts,
++				flags, extack);
++	if (ret < 0)
++		goto errout_idr;
++
++	if (tb[TCA_BPF_FLAGS]) {
++		u32 bpf_flags = nla_get_u32(tb[TCA_BPF_FLAGS]);
++
++		if (bpf_flags & ~TCA_BPF_FLAG_ACT_DIRECT) {
++			ret = -EINVAL;
++			goto errout_idr;
++		}
++
++		have_exts = bpf_flags & TCA_BPF_FLAG_ACT_DIRECT;
++	}
++	if (tb[TCA_BPF_FLAGS_GEN]) {
++		gen_flags = nla_get_u32(tb[TCA_BPF_FLAGS_GEN]);
++		if (gen_flags & ~CLS_BPF_SUPPORTED_GEN_FLAGS ||
++		    !tc_flags_valid(gen_flags)) {
++			ret = -EINVAL;
++			goto errout_idr;
++		}
++	}
++
++	prog->exts_integrated = have_exts;
++	prog->gen_flags = gen_flags;
++
++	ret = is_bpf ? cls_bpf_prog_from_ops(tb, prog) :
++		cls_bpf_prog_from_efd(tb, prog, gen_flags, tp);
+ 	if (ret < 0)
+ 		goto errout_idr;
+ 
++	if (tb[TCA_BPF_CLASSID]) {
++		prog->res.classid = nla_get_u32(tb[TCA_BPF_CLASSID]);
++		tcf_bind_filter(tp, &prog->res, base);
++		bound_to_filter = true;
++	}
++
+ 	ret = cls_bpf_offload(tp, prog, oldprog, extack);
+ 	if (ret)
+ 		goto errout_parms;
+@@ -530,6 +523,8 @@ static int cls_bpf_change(struct net *net, struct sk_buff *in_skb,
+ 	return 0;
+ 
+ errout_parms:
++	if (bound_to_filter)
++		tcf_unbind_filter(tp, &prog->res);
+ 	cls_bpf_free_parms(prog);
+ errout_idr:
+ 	if (!oldprog)
+diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c
+index fa3bbd187eb97..c4ed11df62548 100644
+--- a/net/sched/cls_matchall.c
++++ b/net/sched/cls_matchall.c
+@@ -159,26 +159,6 @@ static const struct nla_policy mall_policy[TCA_MATCHALL_MAX + 1] = {
+ 	[TCA_MATCHALL_FLAGS]		= { .type = NLA_U32 },
+ };
+ 
+-static int mall_set_parms(struct net *net, struct tcf_proto *tp,
+-			  struct cls_mall_head *head,
+-			  unsigned long base, struct nlattr **tb,
+-			  struct nlattr *est, u32 flags, u32 fl_flags,
+-			  struct netlink_ext_ack *extack)
+-{
+-	int err;
+-
+-	err = tcf_exts_validate_ex(net, tp, tb, est, &head->exts, flags,
+-				   fl_flags, extack);
+-	if (err < 0)
+-		return err;
+-
+-	if (tb[TCA_MATCHALL_CLASSID]) {
+-		head->res.classid = nla_get_u32(tb[TCA_MATCHALL_CLASSID]);
+-		tcf_bind_filter(tp, &head->res, base);
+-	}
+-	return 0;
+-}
+-
+ static int mall_change(struct net *net, struct sk_buff *in_skb,
+ 		       struct tcf_proto *tp, unsigned long base,
+ 		       u32 handle, struct nlattr **tca,
+@@ -187,6 +167,7 @@ static int mall_change(struct net *net, struct sk_buff *in_skb,
+ {
+ 	struct cls_mall_head *head = rtnl_dereference(tp->root);
+ 	struct nlattr *tb[TCA_MATCHALL_MAX + 1];
++	bool bound_to_filter = false;
+ 	struct cls_mall_head *new;
+ 	u32 userflags = 0;
+ 	int err;
+@@ -226,11 +207,17 @@ static int mall_change(struct net *net, struct sk_buff *in_skb,
+ 		goto err_alloc_percpu;
+ 	}
+ 
+-	err = mall_set_parms(net, tp, new, base, tb, tca[TCA_RATE],
+-			     flags, new->flags, extack);
+-	if (err)
++	err = tcf_exts_validate_ex(net, tp, tb, tca[TCA_RATE],
++				   &new->exts, flags, new->flags, extack);
++	if (err < 0)
+ 		goto err_set_parms;
+ 
++	if (tb[TCA_MATCHALL_CLASSID]) {
++		new->res.classid = nla_get_u32(tb[TCA_MATCHALL_CLASSID]);
++		tcf_bind_filter(tp, &new->res, base);
++		bound_to_filter = true;
++	}
++
+ 	if (!tc_skip_hw(new->flags)) {
+ 		err = mall_replace_hw_filter(tp, new, (unsigned long)new,
+ 					     extack);
+@@ -246,6 +233,8 @@ static int mall_change(struct net *net, struct sk_buff *in_skb,
+ 	return 0;
+ 
+ err_replace_hw_filter:
++	if (bound_to_filter)
++		tcf_unbind_filter(tp, &new->res);
+ err_set_parms:
+ 	free_percpu(new->pf);
+ err_alloc_percpu:
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index d15d50de79802..5abf31e432caf 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -712,8 +712,23 @@ static const struct nla_policy u32_policy[TCA_U32_MAX + 1] = {
+ 	[TCA_U32_FLAGS]		= { .type = NLA_U32 },
+ };
+ 
++static void u32_unbind_filter(struct tcf_proto *tp, struct tc_u_knode *n,
++			      struct nlattr **tb)
++{
++	if (tb[TCA_U32_CLASSID])
++		tcf_unbind_filter(tp, &n->res);
++}
++
++static void u32_bind_filter(struct tcf_proto *tp, struct tc_u_knode *n,
++			    unsigned long base, struct nlattr **tb)
++{
++	if (tb[TCA_U32_CLASSID]) {
++		n->res.classid = nla_get_u32(tb[TCA_U32_CLASSID]);
++		tcf_bind_filter(tp, &n->res, base);
++	}
++}
++
+ static int u32_set_parms(struct net *net, struct tcf_proto *tp,
+-			 unsigned long base,
+ 			 struct tc_u_knode *n, struct nlattr **tb,
+ 			 struct nlattr *est, u32 flags, u32 fl_flags,
+ 			 struct netlink_ext_ack *extack)
+@@ -760,10 +775,6 @@ static int u32_set_parms(struct net *net, struct tcf_proto *tp,
+ 		if (ht_old)
+ 			ht_old->refcnt--;
+ 	}
+-	if (tb[TCA_U32_CLASSID]) {
+-		n->res.classid = nla_get_u32(tb[TCA_U32_CLASSID]);
+-		tcf_bind_filter(tp, &n->res, base);
+-	}
+ 
+ 	if (ifindex >= 0)
+ 		n->ifindex = ifindex;
+@@ -903,17 +914,27 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ 		if (!new)
+ 			return -ENOMEM;
+ 
+-		err = u32_set_parms(net, tp, base, new, tb,
+-				    tca[TCA_RATE], flags, new->flags,
+-				    extack);
++		err = u32_set_parms(net, tp, new, tb, tca[TCA_RATE],
++				    flags, new->flags, extack);
+ 
+ 		if (err) {
+ 			__u32_destroy_key(new);
+ 			return err;
+ 		}
+ 
++		u32_bind_filter(tp, new, base, tb);
++
+ 		err = u32_replace_hw_knode(tp, new, flags, extack);
+ 		if (err) {
++			u32_unbind_filter(tp, new, tb);
++
++			if (tb[TCA_U32_LINK]) {
++				struct tc_u_hnode *ht_old;
++
++				ht_old = rtnl_dereference(n->ht_down);
++				if (ht_old)
++					ht_old->refcnt++;
++			}
+ 			__u32_destroy_key(new);
+ 			return err;
+ 		}
+@@ -1074,15 +1095,18 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ 	}
+ #endif
+ 
+-	err = u32_set_parms(net, tp, base, n, tb, tca[TCA_RATE],
++	err = u32_set_parms(net, tp, n, tb, tca[TCA_RATE],
+ 			    flags, n->flags, extack);
++
++	u32_bind_filter(tp, n, base, tb);
++
+ 	if (err == 0) {
+ 		struct tc_u_knode __rcu **ins;
+ 		struct tc_u_knode *pins;
+ 
+ 		err = u32_replace_hw_knode(tp, n, flags, extack);
+ 		if (err)
+-			goto errhw;
++			goto errunbind;
+ 
+ 		if (!tc_in_hw(n->flags))
+ 			n->flags |= TCA_CLS_FLAGS_NOT_IN_HW;
+@@ -1100,7 +1124,9 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ 		return 0;
+ 	}
+ 
+-errhw:
++errunbind:
++	u32_unbind_filter(tp, n, tb);
++
+ #ifdef CONFIG_CLS_U32_MARK
+ 	free_percpu(n->pcpu_success);
+ #endif
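
One subtlety specific to the u32 version above, beyond the bind/unbind shape
sketched after the cls_bpf hunk: u32_set_parms() may already have dropped the
reference on the old ht_down table, so when the hardware replace fails the
error path also has to re-take that reference. A toy version of that
"re-acquire what you released" unwind (names invented):

    struct ht { int refcnt; };

    static int hw_replace(void) { return -1; } /* stands in for u32_replace_hw_knode() */

    static int replace_knode(struct ht *ht_old, int had_link)
    {
        if (had_link && ht_old)
            ht_old->refcnt--;        /* set_parms switched away from the old table */

        if (hw_replace()) {
            if (had_link && ht_old)
                ht_old->refcnt++;    /* error: put the reference back */
            return -1;
        }
        return 0;
    }
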
+diff --git a/net/wireless/wext-core.c b/net/wireless/wext-core.c
+index a125fd1fa1342..a161c64d1765e 100644
+--- a/net/wireless/wext-core.c
++++ b/net/wireless/wext-core.c
+@@ -815,6 +815,12 @@ static int ioctl_standard_iw_point(struct iw_point *iwp, unsigned int cmd,
+ 		}
+ 	}
+ 
++	/* Sanity-check to ensure we never end up _allocating_ zero
++	 * bytes of data for extra.
++	 */
++	if (extra_size <= 0)
++		return -EFAULT;
++
+ 	/* kzalloc() ensures NULL-termination for essid_compat. */
+ 	extra = kzalloc(extra_size, GFP_KERNEL);
+ 	if (!extra)
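
Context for the wext check above: in the kernel a zero-byte kzalloc() returns
ZERO_SIZE_PTR, which is non-NULL, so the existing `if (!extra)` test would not
catch a zero extra_size and a later access could fault. (That is my reading of
the motivation; the hunk itself only adds the range check.) Userspace
analogue:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        long extra_size = 0;    /* derived from iwp->length in the real code */

        if (extra_size <= 0) {  /* the new guard; the kernel returns -EFAULT */
            fprintf(stderr, "refusing zero-byte buffer\n");
            return 1;
        }

        char *extra = calloc(1, (size_t)extra_size);
        if (!extra)
            return 1;
        free(extra);
        return 0;
    }
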
+diff --git a/scripts/Makefile.build b/scripts/Makefile.build
+index 9f94fc83f0865..5f4a0228543ae 100644
+--- a/scripts/Makefile.build
++++ b/scripts/Makefile.build
+@@ -279,6 +279,9 @@ $(obj)/%.lst: $(src)/%.c FORCE
+ 
+ rust_allowed_features := core_ffi_c,explicit_generic_args_with_impl_trait,new_uninit,pin_macro
+ 
++# `--out-dir` is required to avoid temporaries being created by `rustc` in the
++# current working directory, which may not be accessible in the out-of-tree
++# modules case.
+ rust_common_cmd = \
+ 	RUST_MODFILE=$(modfile) $(RUSTC_OR_CLIPPY) $(rust_flags) \
+ 	-Zallow-features=$(rust_allowed_features) \
+@@ -287,7 +290,7 @@ rust_common_cmd = \
+ 	--extern alloc --extern kernel \
+ 	--crate-type rlib -L $(objtree)/rust/ \
+ 	--crate-name $(basename $(notdir $@)) \
+-	--emit=dep-info=$(depfile)
++	--out-dir $(dir $@) --emit=dep-info=$(depfile)
+ 
+ # `--emit=obj`, `--emit=asm` and `--emit=llvm-ir` imply a single codegen unit
+ # will be used. We explicitly request `-Ccodegen-units=1` in any case, and
+diff --git a/scripts/Makefile.host b/scripts/Makefile.host
+index 7aea9005e4970..8f7f842b54f9e 100644
+--- a/scripts/Makefile.host
++++ b/scripts/Makefile.host
+@@ -86,7 +86,11 @@ hostc_flags    = -Wp,-MMD,$(depfile) \
+ hostcxx_flags  = -Wp,-MMD,$(depfile) \
+                  $(KBUILD_HOSTCXXFLAGS) $(HOST_EXTRACXXFLAGS) \
+                  $(HOSTCXXFLAGS_$(target-stem).o)
+-hostrust_flags = --emit=dep-info=$(depfile) \
++
++# `--out-dir` is required to avoid temporaries being created by `rustc` in the
++# current working directory, which may not be accessible in the out-of-tree
++# modules case.
++hostrust_flags = --out-dir $(dir $@) --emit=dep-info=$(depfile) \
+                  $(KBUILD_HOSTRUSTFLAGS) $(HOST_EXTRARUSTFLAGS) \
+                  $(HOSTRUSTFLAGS_$(target-stem))
+ 
+diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c
+index 0d2db41177b23..13af6d0ff845d 100644
+--- a/scripts/kallsyms.c
++++ b/scripts/kallsyms.c
+@@ -346,10 +346,10 @@ static void cleanup_symbol_name(char *s)
+ 	 * ASCII[_]   = 5f
+ 	 * ASCII[a-z] = 61,7a
+ 	 *
+-	 * As above, replacing '.' with '\0' does not affect the main sorting,
+-	 * but it helps us with subsorting.
++	 * As above, replacing the first '.' in ".llvm." with '\0' does not
++	 * affect the main sorting, but it helps us with subsorting.
+ 	 */
+-	p = strchr(s, '.');
++	p = strstr(s, ".llvm.");
+ 	if (p)
+ 		*p = '\0';
+ }
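
The kallsyms change narrows the truncation from the first '.' anywhere in the
symbol to the first ".llvm." suffix, so legitimately dotted symbols (for
example GCC's ".cold" clones) are no longer mangled while LLVM's appended
hashes still are. Quick standalone demonstration:

    #include <stdio.h>
    #include <string.h>

    static void cleanup_old(char *s) { char *p = strchr(s, '.');      if (p) *p = '\0'; }
    static void cleanup_new(char *s) { char *p = strstr(s, ".llvm."); if (p) *p = '\0'; }

    int main(void)
    {
        char a[] = "vfs_read.cold";   /* real suffix: should survive cleanup */
        char b[] = "foo.llvm.123456"; /* LLVM-appended hash: should be dropped */

        cleanup_old(a);               /* old behaviour: "vfs_read" -- too aggressive */
        cleanup_new(b);               /* new behaviour: "foo" */
        printf("%s %s\n", a, b);
        return 0;
    }
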
+diff --git a/security/keys/request_key.c b/security/keys/request_key.c
+index 07a0ef2baacd8..a7673ad86d18d 100644
+--- a/security/keys/request_key.c
++++ b/security/keys/request_key.c
+@@ -401,17 +401,21 @@ static int construct_alloc_key(struct keyring_search_context *ctx,
+ 	set_bit(KEY_FLAG_USER_CONSTRUCT, &key->flags);
+ 
+ 	if (dest_keyring) {
+-		ret = __key_link_lock(dest_keyring, &ctx->index_key);
++		ret = __key_link_lock(dest_keyring, &key->index_key);
+ 		if (ret < 0)
+ 			goto link_lock_failed;
+-		ret = __key_link_begin(dest_keyring, &ctx->index_key, &edit);
+-		if (ret < 0)
+-			goto link_prealloc_failed;
+ 	}
+ 
+-	/* attach the key to the destination keyring under lock, but we do need
++	/*
++	 * Attach the key to the destination keyring under lock, but we do need
+ 	 * to do another check just in case someone beat us to it whilst we
+-	 * waited for locks */
++	 * waited for locks.
++	 *
++	 * The caller might specify a comparison function which looks for keys
++	 * that do not exactly match but are still equivalent from the caller's
++	 * perspective. The __key_link_begin() operation must be done only after
++	 * an actual key is determined.
++	 */
+ 	mutex_lock(&key_construction_mutex);
+ 
+ 	rcu_read_lock();
+@@ -420,12 +424,16 @@ static int construct_alloc_key(struct keyring_search_context *ctx,
+ 	if (!IS_ERR(key_ref))
+ 		goto key_already_present;
+ 
+-	if (dest_keyring)
++	if (dest_keyring) {
++		ret = __key_link_begin(dest_keyring, &key->index_key, &edit);
++		if (ret < 0)
++			goto link_alloc_failed;
+ 		__key_link(dest_keyring, key, &edit);
++	}
+ 
+ 	mutex_unlock(&key_construction_mutex);
+ 	if (dest_keyring)
+-		__key_link_end(dest_keyring, &ctx->index_key, edit);
++		__key_link_end(dest_keyring, &key->index_key, edit);
+ 	mutex_unlock(&user->cons_lock);
+ 	*_key = key;
+ 	kleave(" = 0 [%d]", key_serial(key));
+@@ -438,10 +446,13 @@ key_already_present:
+ 	mutex_unlock(&key_construction_mutex);
+ 	key = key_ref_to_ptr(key_ref);
+ 	if (dest_keyring) {
++		ret = __key_link_begin(dest_keyring, &key->index_key, &edit);
++		if (ret < 0)
++			goto link_alloc_failed_unlocked;
+ 		ret = __key_link_check_live_key(dest_keyring, key);
+ 		if (ret == 0)
+ 			__key_link(dest_keyring, key, &edit);
+-		__key_link_end(dest_keyring, &ctx->index_key, edit);
++		__key_link_end(dest_keyring, &key->index_key, edit);
+ 		if (ret < 0)
+ 			goto link_check_failed;
+ 	}
+@@ -456,8 +467,10 @@ link_check_failed:
+ 	kleave(" = %d [linkcheck]", ret);
+ 	return ret;
+ 
+-link_prealloc_failed:
+-	__key_link_end(dest_keyring, &ctx->index_key, edit);
++link_alloc_failed:
++	mutex_unlock(&key_construction_mutex);
++link_alloc_failed_unlocked:
++	__key_link_end(dest_keyring, &key->index_key, edit);
+ link_lock_failed:
+ 	mutex_unlock(&user->cons_lock);
+ 	key_put(key);
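
The request_key reordering above matters when the caller supplies a loose
comparison function: the key found under key_construction_mutex may be
equivalent to, but distinct from, the one we allocated, so __key_link_begin()
must run against the key actually chosen (key->index_key) rather than the
search context. The shape of the fix in miniature (names invented):

    #include <stddef.h>

    struct key { int serial; };

    static struct key *search_equivalent(void) { return NULL; } /* may return a different key */
    static int  link_begin(struct key *k)      { (void)k; return 0; }
    static void link_commit(struct key *k)     { (void)k; }

    static int attach(struct key *ours)
    {
        struct key *found = search_equivalent();
        struct key *k = found ? found : ours;   /* decide *first* which key to link */

        int err = link_begin(k);                /* then prepare the link for that key */
        if (err)
            return err;
        link_commit(k);
        return 0;
    }
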
+diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c
+index 2b2c8eb258d5b..bc700f85f80be 100644
+--- a/security/keys/trusted-keys/trusted_tpm2.c
++++ b/security/keys/trusted-keys/trusted_tpm2.c
+@@ -186,7 +186,7 @@ int tpm2_key_priv(void *context, size_t hdrlen,
+ }
+ 
+ /**
+- * tpm_buf_append_auth() - append TPMS_AUTH_COMMAND to the buffer.
++ * tpm2_buf_append_auth() - append TPMS_AUTH_COMMAND to the buffer.
+  *
+  * @buf: an allocated tpm_buf instance
+  * @session_handle: session handle
+diff --git a/sound/pci/emu10k1/emufx.c b/sound/pci/emu10k1/emufx.c
+index 3f64ccab0e632..fba19a854f27a 100644
+--- a/sound/pci/emu10k1/emufx.c
++++ b/sound/pci/emu10k1/emufx.c
+@@ -1559,14 +1559,8 @@ A_OP(icode, &ptr, iMAC0, A_GPR(var), A_GPR(var), A_GPR(vol), A_EXTIN(input))
+ 	gpr += 2;
+ 
+ 	/* Master volume (will be renamed later) */
+-	A_OP(icode, &ptr, iMAC0, A_GPR(playback+0+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+0+SND_EMU10K1_PLAYBACK_CHANNELS));
+-	A_OP(icode, &ptr, iMAC0, A_GPR(playback+1+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+1+SND_EMU10K1_PLAYBACK_CHANNELS));
+-	A_OP(icode, &ptr, iMAC0, A_GPR(playback+2+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+2+SND_EMU10K1_PLAYBACK_CHANNELS));
+-	A_OP(icode, &ptr, iMAC0, A_GPR(playback+3+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+3+SND_EMU10K1_PLAYBACK_CHANNELS));
+-	A_OP(icode, &ptr, iMAC0, A_GPR(playback+4+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+4+SND_EMU10K1_PLAYBACK_CHANNELS));
+-	A_OP(icode, &ptr, iMAC0, A_GPR(playback+5+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+5+SND_EMU10K1_PLAYBACK_CHANNELS));
+-	A_OP(icode, &ptr, iMAC0, A_GPR(playback+6+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+6+SND_EMU10K1_PLAYBACK_CHANNELS));
+-	A_OP(icode, &ptr, iMAC0, A_GPR(playback+7+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+7+SND_EMU10K1_PLAYBACK_CHANNELS));
++	for (z = 0; z < 8; z++)
++		A_OP(icode, &ptr, iMAC0, A_GPR(playback+z+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+z+SND_EMU10K1_PLAYBACK_CHANNELS));
+ 	snd_emu10k1_init_mono_control(&controls[nctl++], "Wave Master Playback Volume", gpr, 0);
+ 	gpr += 2;
+ 
+@@ -1653,102 +1647,14 @@ A_OP(icode, &ptr, iMAC0, A_GPR(var), A_GPR(var), A_GPR(vol), A_EXTIN(input))
+ 			dev_dbg(emu->card->dev, "emufx.c: gpr=0x%x, tmp=0x%x\n",
+ 			       gpr, tmp);
+ 			*/
+-			/* For the EMU1010: How to get 32bit values from the DSP. High 16bits into L, low 16bits into R. */
+-			/* A_P16VIN(0) is delayed by one sample,
+-			 * so all other A_P16VIN channels will need to also be delayed
+-			 */
+-			/* Left ADC in. 1 of 2 */
+ 			snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_P16VIN(0x0), A_FXBUS2(0) );
+-			/* Right ADC in 1 of 2 */
+-			gpr_map[gpr++] = 0x00000000;
+-			/* Delaying by one sample: instead of copying the input
+-			 * value A_P16VIN to output A_FXBUS2 as in the first channel,
+-			 * we use an auxiliary register, delaying the value by one
+-			 * sample
+-			 */
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_GPR(gpr - 1), A_FXBUS2(2) );
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x1), A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_GPR(gpr - 1), A_FXBUS2(4) );
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x2), A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_GPR(gpr - 1), A_FXBUS2(6) );
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x3), A_C_00000000, A_C_00000000);
+-			/* For 96kHz mode */
+-			/* Left ADC in. 2 of 2 */
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_GPR(gpr - 1), A_FXBUS2(0x8) );
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x4), A_C_00000000, A_C_00000000);
+-			/* Right ADC in 2 of 2 */
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_GPR(gpr - 1), A_FXBUS2(0xa) );
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x5), A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_GPR(gpr - 1), A_FXBUS2(0xc) );
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x6), A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_GPR(gpr - 1), A_FXBUS2(0xe) );
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x7), A_C_00000000, A_C_00000000);
+-			/* Pavel Hofman - we still have voices, A_FXBUS2s, and
+-			 * A_P16VINs available -
+-			 * let's add 8 more capture channels - total of 16
+-			 */
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16(icode, &ptr, tmp,
+-								  bit_shifter16,
+-								  A_GPR(gpr - 1),
+-								  A_FXBUS2(0x10));
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x8),
+-			     A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16(icode, &ptr, tmp,
+-								  bit_shifter16,
+-								  A_GPR(gpr - 1),
+-								  A_FXBUS2(0x12));
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x9),
+-			     A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16(icode, &ptr, tmp,
+-								  bit_shifter16,
+-								  A_GPR(gpr - 1),
+-								  A_FXBUS2(0x14));
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0xa),
+-			     A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16(icode, &ptr, tmp,
+-								  bit_shifter16,
+-								  A_GPR(gpr - 1),
+-								  A_FXBUS2(0x16));
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0xb),
+-			     A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16(icode, &ptr, tmp,
+-								  bit_shifter16,
+-								  A_GPR(gpr - 1),
+-								  A_FXBUS2(0x18));
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0xc),
+-			     A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16(icode, &ptr, tmp,
+-								  bit_shifter16,
+-								  A_GPR(gpr - 1),
+-								  A_FXBUS2(0x1a));
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0xd),
+-			     A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16(icode, &ptr, tmp,
+-								  bit_shifter16,
+-								  A_GPR(gpr - 1),
+-								  A_FXBUS2(0x1c));
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0xe),
+-			     A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16(icode, &ptr, tmp,
+-								  bit_shifter16,
+-								  A_GPR(gpr - 1),
+-								  A_FXBUS2(0x1e));
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0xf),
+-			     A_C_00000000, A_C_00000000);
++			/* A_P16VIN(0) is delayed by one sample, so all other A_P16VIN channels
++			 * will need to also be delayed; we use an auxiliary register for that. */
++			for (z = 1; z < 0x10; z++) {
++				snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_GPR(gpr), A_FXBUS2(z * 2) );
++				A_OP(icode, &ptr, iACC3, A_GPR(gpr), A_P16VIN(z), A_C_00000000, A_C_00000000);
++				gpr_map[gpr++] = 0x00000000;
++			}
+ 		}
+ 
+ #if 0
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f1b934a502169..169572c8ed40f 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -122,6 +122,7 @@ struct alc_spec {
+ 	unsigned int ultra_low_power:1;
+ 	unsigned int has_hs_key:1;
+ 	unsigned int no_internal_mic_pin:1;
++	unsigned int en_3kpull_low:1;
+ 
+ 	/* for PLL fix */
+ 	hda_nid_t pll_nid;
+@@ -3622,6 +3623,7 @@ static void alc256_shutup(struct hda_codec *codec)
+ 	if (!hp_pin)
+ 		hp_pin = 0x21;
+ 
++	alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+ 	hp_pin_sense = snd_hda_jack_detect(codec, hp_pin);
+ 
+ 	if (hp_pin_sense)
+@@ -3638,8 +3640,7 @@ static void alc256_shutup(struct hda_codec *codec)
+ 	/* If disable 3k pulldown control for alc257, the Mic detection will not work correctly
+ 	 * when booting with headset plugged. So skip setting it for the codec alc257
+ 	 */
+-	if (codec->core.vendor_id != 0x10ec0236 &&
+-	    codec->core.vendor_id != 0x10ec0257)
++	if (spec->en_3kpull_low)
+ 		alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+ 
+ 	if (!spec->no_shutup_pins)
+@@ -4623,6 +4624,21 @@ static void alc236_fixup_hp_mute_led_coefbit(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc236_fixup_hp_mute_led_coefbit2(struct hda_codec *codec,
++					  const struct hda_fixup *fix, int action)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++		spec->mute_led_polarity = 0;
++		spec->mute_led_coef.idx = 0x07;
++		spec->mute_led_coef.mask = 1;
++		spec->mute_led_coef.on = 1;
++		spec->mute_led_coef.off = 0;
++		snd_hda_gen_add_mute_led_cdev(codec, coef_mute_led_set);
++	}
++}
++
+ /* turn on/off mic-mute LED per capture hook by coef bit */
+ static int coef_micmute_led_set(struct led_classdev *led_cdev,
+ 				enum led_brightness brightness)
+@@ -7120,6 +7136,10 @@ enum {
+ 	ALC294_FIXUP_ASUS_DUAL_SPK,
+ 	ALC285_FIXUP_THINKPAD_X1_GEN7,
+ 	ALC285_FIXUP_THINKPAD_HEADSET_JACK,
++	ALC294_FIXUP_ASUS_ALLY,
++	ALC294_FIXUP_ASUS_ALLY_PINS,
++	ALC294_FIXUP_ASUS_ALLY_VERBS,
++	ALC294_FIXUP_ASUS_ALLY_SPEAKER,
+ 	ALC294_FIXUP_ASUS_HPE,
+ 	ALC294_FIXUP_ASUS_COEF_1B,
+ 	ALC294_FIXUP_ASUS_GX502_HP,
+@@ -7133,6 +7153,7 @@ enum {
+ 	ALC285_FIXUP_HP_GPIO_LED,
+ 	ALC285_FIXUP_HP_MUTE_LED,
+ 	ALC285_FIXUP_HP_SPECTRE_X360_MUTE_LED,
++	ALC236_FIXUP_HP_MUTE_LED_COEFBIT2,
+ 	ALC236_FIXUP_HP_GPIO_LED,
+ 	ALC236_FIXUP_HP_MUTE_LED,
+ 	ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
+@@ -7203,6 +7224,7 @@ enum {
+ 	ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN,
+ 	ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS,
+ 	ALC236_FIXUP_DELL_DUAL_CODECS,
++	ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI,
+ };
+ 
+ /* A special fixup for Lenovo C940 and Yoga Duet 7;
+@@ -8432,6 +8454,47 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC294_FIXUP_SPK2_TO_DAC1
+ 	},
++	[ALC294_FIXUP_ASUS_ALLY] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = cs35l41_fixup_i2c_two,
++		.chained = true,
++		.chain_id = ALC294_FIXUP_ASUS_ALLY_PINS
++	},
++	[ALC294_FIXUP_ASUS_ALLY_PINS] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x03a11050 },
++			{ 0x1a, 0x03a11c30 },
++			{ 0x21, 0x03211420 },
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC294_FIXUP_ASUS_ALLY_VERBS
++	},
++	[ALC294_FIXUP_ASUS_ALLY_VERBS] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x45 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x5089 },
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x46 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x0004 },
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x47 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0xa47a },
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x49 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x0049},
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x4a },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x201b },
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x6b },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x4278},
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC294_FIXUP_ASUS_ALLY_SPEAKER
++	},
++	[ALC294_FIXUP_ASUS_ALLY_SPEAKER] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc285_fixup_speaker2_to_dac1,
++	},
+ 	[ALC285_FIXUP_THINKPAD_X1_GEN7] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc285_fixup_thinkpad_x1_gen7,
+@@ -8556,6 +8619,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc285_fixup_hp_spectre_x360_mute_led,
+ 	},
++	[ALC236_FIXUP_HP_MUTE_LED_COEFBIT2] = {
++	    .type = HDA_FIXUP_FUNC,
++	    .v.func = alc236_fixup_hp_mute_led_coefbit2,
++	},
+ 	[ALC236_FIXUP_HP_GPIO_LED] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc236_fixup_hp_gpio_led,
+@@ -9069,8 +9136,6 @@ static const struct hda_fixup alc269_fixups[] = {
+ 	[ALC287_FIXUP_CS35L41_I2C_2] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = cs35l41_fixup_i2c_two,
+-		.chained = true,
+-		.chain_id = ALC269_FIXUP_THINKPAD_ACPI,
+ 	},
+ 	[ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED] = {
+ 		.type = HDA_FIXUP_FUNC,
+@@ -9207,6 +9272,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 	},
++	[ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = cs35l41_fixup_i2c_two,
++		.chained = true,
++		.chain_id = ALC269_FIXUP_THINKPAD_ACPI,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -9440,6 +9511,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x886d, "HP ZBook Fury 17.3 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8870, "HP ZBook Fury 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
++	SND_PCI_QUIRK(0x103c, 0x887a, "HP Laptop 15s-eq2xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x888d, "HP ZBook Power 15.6 inch G8 Mobile Workstation PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8895, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_SPEAKERS_MICMUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED),
+@@ -9535,6 +9607,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1740, "ASUS UX430UA", ALC295_FIXUP_ASUS_DACS),
+ 	SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK),
++	SND_PCI_QUIRK(0x1043, 0x17f3, "ROG Ally RC71L_RC71L", ALC294_FIXUP_ASUS_ALLY),
+ 	SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x18f1, "Asus FX505DT", ALC256_FIXUP_ASUS_HEADSET_MIC),
+@@ -9646,6 +9719,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x5157, "Clevo W517GU1", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x51a1, "Clevo NS50MU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x51b1, "Clevo NS50AU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x51b3, "Clevo NS70AU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x5630, "Clevo NP50RNJS", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x70a1, "Clevo NB70T[HJK]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x70b3, "Clevo NK70SB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -9729,14 +9803,14 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x22be, "Thinkpad X1 Carbon 8th", ALC285_FIXUP_THINKPAD_HEADSET_JACK),
+ 	SND_PCI_QUIRK(0x17aa, 0x22c1, "Thinkpad P1 Gen 3", ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK),
+ 	SND_PCI_QUIRK(0x17aa, 0x22c2, "Thinkpad X1 Extreme Gen 3", ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK),
+-	SND_PCI_QUIRK(0x17aa, 0x22f1, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2),
+-	SND_PCI_QUIRK(0x17aa, 0x22f2, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2),
+-	SND_PCI_QUIRK(0x17aa, 0x22f3, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2),
+-	SND_PCI_QUIRK(0x17aa, 0x2316, "Thinkpad P1 Gen 6", ALC287_FIXUP_CS35L41_I2C_2),
+-	SND_PCI_QUIRK(0x17aa, 0x2317, "Thinkpad P1 Gen 6", ALC287_FIXUP_CS35L41_I2C_2),
+-	SND_PCI_QUIRK(0x17aa, 0x2318, "Thinkpad Z13 Gen2", ALC287_FIXUP_CS35L41_I2C_2),
+-	SND_PCI_QUIRK(0x17aa, 0x2319, "Thinkpad Z16 Gen2", ALC287_FIXUP_CS35L41_I2C_2),
+-	SND_PCI_QUIRK(0x17aa, 0x231a, "Thinkpad Z16 Gen2", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x17aa, 0x22f1, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI),
++	SND_PCI_QUIRK(0x17aa, 0x22f2, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI),
++	SND_PCI_QUIRK(0x17aa, 0x22f3, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI),
++	SND_PCI_QUIRK(0x17aa, 0x2316, "Thinkpad P1 Gen 6", ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI),
++	SND_PCI_QUIRK(0x17aa, 0x2317, "Thinkpad P1 Gen 6", ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI),
++	SND_PCI_QUIRK(0x17aa, 0x2318, "Thinkpad Z13 Gen2", ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI),
++	SND_PCI_QUIRK(0x17aa, 0x2319, "Thinkpad Z16 Gen2", ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI),
++	SND_PCI_QUIRK(0x17aa, 0x231a, "Thinkpad Z16 Gen2", ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ 	SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ 	SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+@@ -10601,6 +10675,8 @@ static int patch_alc269(struct hda_codec *codec)
+ 		spec->shutup = alc256_shutup;
+ 		spec->init_hook = alc256_init;
+ 		spec->gen.mixer_nid = 0; /* ALC256 does not have any loopback mixer path */
++		if (codec->bus->pci->vendor == PCI_VENDOR_ID_AMD)
++			spec->en_3kpull_low = true;
+ 		break;
+ 	case 0x10ec0257:
+ 		spec->codec_variant = ALC269_TYPE_ALC257;
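
The Realtek change converts an inline codec-ID blacklist into a capability
flag: en_3kpull_low is computed once at probe time (here, only on AMD
platforms) and the shutdown path simply tests it. Minimal shape of the
pattern (names invented):

    #include <stdbool.h>

    #define VENDOR_AMD 0x1022

    struct spec { bool en_3kpull_low; };

    static void update_coef(void) { } /* stands in for alc_update_coef_idx() */

    static void probe(struct spec *s, unsigned short pci_vendor)
    {
        s->en_3kpull_low = (pci_vendor == VENDOR_AMD); /* decide once */
    }

    static void shutup(struct spec *s)
    {
        if (s->en_3kpull_low)   /* gate the 3k pulldown write on the flag */
            update_coef();
    }
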
+diff --git a/sound/soc/amd/acp/amd.h b/sound/soc/amd/acp/amd.h
+index 5f2119f422715..12a176a50fd6e 100644
+--- a/sound/soc/amd/acp/amd.h
++++ b/sound/soc/amd/acp/amd.h
+@@ -173,7 +173,7 @@ int snd_amd_acp_find_config(struct pci_dev *pci);
+ 
+ static inline u64 acp_get_byte_count(struct acp_dev_data *adata, int dai_id, int direction)
+ {
+-	u64 byte_count, low = 0, high = 0;
++	u64 byte_count = 0, low = 0, high = 0;
+ 
+ 	if (direction == SNDRV_PCM_STREAM_PLAYBACK) {
+ 		switch (dai_id) {
+@@ -191,7 +191,7 @@ static inline u64 acp_get_byte_count(struct acp_dev_data *adata, int dai_id, int
+ 			break;
+ 		default:
+ 			dev_err(adata->dev, "Invalid dai id %x\n", dai_id);
+-			return -EINVAL;
++			goto POINTER_RETURN_BYTES;
+ 		}
+ 	} else {
+ 		switch (dai_id) {
+@@ -213,12 +213,13 @@ static inline u64 acp_get_byte_count(struct acp_dev_data *adata, int dai_id, int
+ 			break;
+ 		default:
+ 			dev_err(adata->dev, "Invalid dai id %x\n", dai_id);
+-			return -EINVAL;
++			goto POINTER_RETURN_BYTES;
+ 		}
+ 	}
+ 	/* Get 64 bit value from two 32 bit registers */
+ 	byte_count = (high << 32) | low;
+ 
++POINTER_RETURN_BYTES:
+ 	return byte_count;
+ }
+ 
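
The amd.h fix addresses a signedness bug: acp_get_byte_count() returns u64,
so `return -EINVAL` handed callers a huge positive byte count instead of an
error. Falling through with 0 via the new label is the safe out-of-band value
here. Standalone demonstration:

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t bad_count(int dai_id)
    {
        if (dai_id < 0)
            return -22;          /* -EINVAL wraps to 0xffffffffffffffea */
        return 4096;
    }

    static uint64_t good_count(int dai_id)
    {
        uint64_t byte_count = 0; /* the fix: fall through with 0 */

        if (dai_id < 0)
            goto out;
        byte_count = 4096;
    out:
        return byte_count;
    }

    int main(void)
    {
        printf("%llu vs %llu\n", (unsigned long long)bad_count(-1),
               (unsigned long long)good_count(-1));
        return 0;
    }
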
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index 8020097d4e4c8..1b50b2d66beb2 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -701,6 +701,7 @@ config SND_SOC_CS35L41_I2C
+ 
+ config SND_SOC_CS35L45
+ 	tristate
++	select REGMAP_IRQ
+ 
+ config SND_SOC_CS35L45_SPI
+ 	tristate "Cirrus Logic CS35L45 CODEC (SPI)"
+diff --git a/sound/soc/codecs/cs42l51-i2c.c b/sound/soc/codecs/cs42l51-i2c.c
+index 85238339fbcab..b2085ff4b3226 100644
+--- a/sound/soc/codecs/cs42l51-i2c.c
++++ b/sound/soc/codecs/cs42l51-i2c.c
+@@ -19,6 +19,12 @@ static struct i2c_device_id cs42l51_i2c_id[] = {
+ };
+ MODULE_DEVICE_TABLE(i2c, cs42l51_i2c_id);
+ 
++const struct of_device_id cs42l51_of_match[] = {
++	{ .compatible = "cirrus,cs42l51", },
++	{ }
++};
++MODULE_DEVICE_TABLE(of, cs42l51_of_match);
++
+ static int cs42l51_i2c_probe(struct i2c_client *i2c)
+ {
+ 	struct regmap_config config;
+diff --git a/sound/soc/codecs/cs42l51.c b/sound/soc/codecs/cs42l51.c
+index e88d9ff95cdfc..4b832d52f643f 100644
+--- a/sound/soc/codecs/cs42l51.c
++++ b/sound/soc/codecs/cs42l51.c
+@@ -826,13 +826,6 @@ int __maybe_unused cs42l51_resume(struct device *dev)
+ }
+ EXPORT_SYMBOL_GPL(cs42l51_resume);
+ 
+-const struct of_device_id cs42l51_of_match[] = {
+-	{ .compatible = "cirrus,cs42l51", },
+-	{ }
+-};
+-MODULE_DEVICE_TABLE(of, cs42l51_of_match);
+-EXPORT_SYMBOL_GPL(cs42l51_of_match);
+-
+ MODULE_AUTHOR("Arnaud Patard <arnaud.patard@rtp-net.org>");
+ MODULE_DESCRIPTION("Cirrus Logic CS42L51 ALSA SoC Codec Driver");
+ MODULE_LICENSE("GPL");
+diff --git a/sound/soc/codecs/cs42l51.h b/sound/soc/codecs/cs42l51.h
+index a79343e8a54ea..125703ede1133 100644
+--- a/sound/soc/codecs/cs42l51.h
++++ b/sound/soc/codecs/cs42l51.h
+@@ -16,7 +16,6 @@ int cs42l51_probe(struct device *dev, struct regmap *regmap);
+ void cs42l51_remove(struct device *dev);
+ int __maybe_unused cs42l51_suspend(struct device *dev);
+ int __maybe_unused cs42l51_resume(struct device *dev);
+-extern const struct of_device_id cs42l51_of_match[];
+ 
+ #define CS42L51_CHIP_ID			0x1B
+ #define CS42L51_CHIP_REV_A		0x00
+diff --git a/sound/soc/codecs/rt5640.c b/sound/soc/codecs/rt5640.c
+index 1392570555070..31578ea712a99 100644
+--- a/sound/soc/codecs/rt5640.c
++++ b/sound/soc/codecs/rt5640.c
+@@ -2567,9 +2567,10 @@ static void rt5640_enable_jack_detect(struct snd_soc_component *component,
+ 	if (jack_data && jack_data->use_platform_clock)
+ 		rt5640->use_platform_clock = jack_data->use_platform_clock;
+ 
+-	ret = request_irq(rt5640->irq, rt5640_irq,
+-			  IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+-			  "rt5640", rt5640);
++	ret = devm_request_threaded_irq(component->dev, rt5640->irq,
++					NULL, rt5640_irq,
++					IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
++					"rt5640", rt5640);
+ 	if (ret) {
+ 		dev_warn(component->dev, "Failed to reguest IRQ %d: %d\n", rt5640->irq, ret);
+ 		rt5640_disable_jack_detect(component);
+@@ -2622,8 +2623,9 @@ static void rt5640_enable_hda_jack_detect(
+ 
+ 	rt5640->jack = jack;
+ 
+-	ret = request_irq(rt5640->irq, rt5640_irq,
+-			  IRQF_TRIGGER_RISING | IRQF_ONESHOT, "rt5640", rt5640);
++	ret = devm_request_threaded_irq(component->dev, rt5640->irq,
++					NULL, rt5640_irq, IRQF_TRIGGER_RISING | IRQF_ONESHOT,
++					"rt5640", rt5640);
+ 	if (ret) {
+ 		dev_warn(component->dev, "Failed to reguest IRQ %d: %d\n", rt5640->irq, ret);
+ 		rt5640->irq = -ENXIO;
+diff --git a/sound/soc/codecs/wcd-mbhc-v2.c b/sound/soc/codecs/wcd-mbhc-v2.c
+index 1911750f7445c..5da1934527f34 100644
+--- a/sound/soc/codecs/wcd-mbhc-v2.c
++++ b/sound/soc/codecs/wcd-mbhc-v2.c
+@@ -1454,7 +1454,7 @@ struct wcd_mbhc *wcd_mbhc_init(struct snd_soc_component *component,
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
+-	mbhc = devm_kzalloc(dev, sizeof(*mbhc), GFP_KERNEL);
++	mbhc = kzalloc(sizeof(*mbhc), GFP_KERNEL);
+ 	if (!mbhc)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -1474,61 +1474,76 @@ struct wcd_mbhc *wcd_mbhc_init(struct snd_soc_component *component,
+ 
+ 	INIT_WORK(&mbhc->correct_plug_swch, wcd_correct_swch_plug);
+ 
+-	ret = devm_request_threaded_irq(dev, mbhc->intr_ids->mbhc_sw_intr, NULL,
++	ret = request_threaded_irq(mbhc->intr_ids->mbhc_sw_intr, NULL,
+ 					wcd_mbhc_mech_plug_detect_irq,
+ 					IRQF_ONESHOT | IRQF_TRIGGER_RISING,
+ 					"mbhc sw intr", mbhc);
+ 	if (ret)
+-		goto err;
++		goto err_free_mbhc;
+ 
+-	ret = devm_request_threaded_irq(dev, mbhc->intr_ids->mbhc_btn_press_intr, NULL,
++	ret = request_threaded_irq(mbhc->intr_ids->mbhc_btn_press_intr, NULL,
+ 					wcd_mbhc_btn_press_handler,
+ 					IRQF_ONESHOT | IRQF_TRIGGER_RISING,
+ 					"Button Press detect", mbhc);
+ 	if (ret)
+-		goto err;
++		goto err_free_sw_intr;
+ 
+-	ret = devm_request_threaded_irq(dev, mbhc->intr_ids->mbhc_btn_release_intr, NULL,
++	ret = request_threaded_irq(mbhc->intr_ids->mbhc_btn_release_intr, NULL,
+ 					wcd_mbhc_btn_release_handler,
+ 					IRQF_ONESHOT | IRQF_TRIGGER_RISING,
+ 					"Button Release detect", mbhc);
+ 	if (ret)
+-		goto err;
++		goto err_free_btn_press_intr;
+ 
+-	ret = devm_request_threaded_irq(dev, mbhc->intr_ids->mbhc_hs_ins_intr, NULL,
++	ret = request_threaded_irq(mbhc->intr_ids->mbhc_hs_ins_intr, NULL,
+ 					wcd_mbhc_adc_hs_ins_irq,
+ 					IRQF_ONESHOT | IRQF_TRIGGER_RISING,
+ 					"Elect Insert", mbhc);
+ 	if (ret)
+-		goto err;
++		goto err_free_btn_release_intr;
+ 
+ 	disable_irq_nosync(mbhc->intr_ids->mbhc_hs_ins_intr);
+ 
+-	ret = devm_request_threaded_irq(dev, mbhc->intr_ids->mbhc_hs_rem_intr, NULL,
++	ret = request_threaded_irq(mbhc->intr_ids->mbhc_hs_rem_intr, NULL,
+ 					wcd_mbhc_adc_hs_rem_irq,
+ 					IRQF_ONESHOT | IRQF_TRIGGER_RISING,
+ 					"Elect Remove", mbhc);
+ 	if (ret)
+-		goto err;
++		goto err_free_hs_ins_intr;
+ 
+ 	disable_irq_nosync(mbhc->intr_ids->mbhc_hs_rem_intr);
+ 
+-	ret = devm_request_threaded_irq(dev, mbhc->intr_ids->hph_left_ocp, NULL,
++	ret = request_threaded_irq(mbhc->intr_ids->hph_left_ocp, NULL,
+ 					wcd_mbhc_hphl_ocp_irq,
+ 					IRQF_ONESHOT | IRQF_TRIGGER_RISING,
+ 					"HPH_L OCP detect", mbhc);
+ 	if (ret)
+-		goto err;
++		goto err_free_hs_rem_intr;
+ 
+-	ret = devm_request_threaded_irq(dev, mbhc->intr_ids->hph_right_ocp, NULL,
++	ret = request_threaded_irq(mbhc->intr_ids->hph_right_ocp, NULL,
+ 					wcd_mbhc_hphr_ocp_irq,
+ 					IRQF_ONESHOT | IRQF_TRIGGER_RISING,
+ 					"HPH_R OCP detect", mbhc);
+ 	if (ret)
+-		goto err;
++		goto err_free_hph_left_ocp;
+ 
+ 	return mbhc;
+-err:
++
++err_free_hph_left_ocp:
++	free_irq(mbhc->intr_ids->hph_left_ocp, mbhc);
++err_free_hs_rem_intr:
++	free_irq(mbhc->intr_ids->mbhc_hs_rem_intr, mbhc);
++err_free_hs_ins_intr:
++	free_irq(mbhc->intr_ids->mbhc_hs_ins_intr, mbhc);
++err_free_btn_release_intr:
++	free_irq(mbhc->intr_ids->mbhc_btn_release_intr, mbhc);
++err_free_btn_press_intr:
++	free_irq(mbhc->intr_ids->mbhc_btn_press_intr, mbhc);
++err_free_sw_intr:
++	free_irq(mbhc->intr_ids->mbhc_sw_intr, mbhc);
++err_free_mbhc:
++	kfree(mbhc);
++
+ 	dev_err(dev, "Failed to request mbhc interrupts %d\n", ret);
+ 
+ 	return ERR_PTR(ret);
+@@ -1537,9 +1552,19 @@ EXPORT_SYMBOL(wcd_mbhc_init);
+ 
+ void wcd_mbhc_deinit(struct wcd_mbhc *mbhc)
+ {
++	free_irq(mbhc->intr_ids->hph_right_ocp, mbhc);
++	free_irq(mbhc->intr_ids->hph_left_ocp, mbhc);
++	free_irq(mbhc->intr_ids->mbhc_hs_rem_intr, mbhc);
++	free_irq(mbhc->intr_ids->mbhc_hs_ins_intr, mbhc);
++	free_irq(mbhc->intr_ids->mbhc_btn_release_intr, mbhc);
++	free_irq(mbhc->intr_ids->mbhc_btn_press_intr, mbhc);
++	free_irq(mbhc->intr_ids->mbhc_sw_intr, mbhc);
++
+ 	mutex_lock(&mbhc->lock);
+ 	wcd_cancel_hs_detect_plug(mbhc,	&mbhc->correct_plug_swch);
+ 	mutex_unlock(&mbhc->lock);
++
++	kfree(mbhc);
+ }
+ EXPORT_SYMBOL(wcd_mbhc_deinit);
+ 
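
The wcd-mbhc conversion from devm_* to manual request/free presumably
reflects that the mbhc object is now allocated with kzalloc() and torn down
by wcd_mbhc_deinit(), so device-managed IRQs could dangle on an object freed
earlier. The resulting error path is the classic goto ladder: each label
releases exactly what was acquired before the failing step, in reverse order.
Reduced to its essence:

    #include <stdlib.h>

    static int  get_irq(int n) { (void)n; return 0; }
    static void put_irq(int n) { (void)n; }

    static void *init(void)
    {
        void *mbhc = malloc(16);
        if (!mbhc)
            return NULL;

        if (get_irq(0)) goto err_free_mbhc;
        if (get_irq(1)) goto err_free_irq0;
        if (get_irq(2)) goto err_free_irq1;
        return mbhc;

    err_free_irq1:
        put_irq(1);
    err_free_irq0:
        put_irq(0);
    err_free_mbhc:
        free(mbhc);
        return NULL;
    }
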
+diff --git a/sound/soc/codecs/wcd934x.c b/sound/soc/codecs/wcd934x.c
+index c0d1fa36d8411..e467cbe12d8a9 100644
+--- a/sound/soc/codecs/wcd934x.c
++++ b/sound/soc/codecs/wcd934x.c
+@@ -3044,6 +3044,17 @@ static int wcd934x_mbhc_init(struct snd_soc_component *component)
+ 
+ 	return 0;
+ }
++
++static void wcd934x_mbhc_deinit(struct snd_soc_component *component)
++{
++	struct wcd934x_codec *wcd = snd_soc_component_get_drvdata(component);
++
++	if (!wcd->mbhc)
++		return;
++
++	wcd_mbhc_deinit(wcd->mbhc);
++}
++
+ static int wcd934x_comp_probe(struct snd_soc_component *component)
+ {
+ 	struct wcd934x_codec *wcd = dev_get_drvdata(component->dev);
+@@ -3077,6 +3088,7 @@ static void wcd934x_comp_remove(struct snd_soc_component *comp)
+ {
+ 	struct wcd934x_codec *wcd = dev_get_drvdata(comp->dev);
+ 
++	wcd934x_mbhc_deinit(comp);
+ 	wcd_clsh_ctrl_free(wcd->clsh_ctrl);
+ }
+ 
+diff --git a/sound/soc/codecs/wcd938x.c b/sound/soc/codecs/wcd938x.c
+index e7d6a02cdec0d..4a0b990f56e12 100644
+--- a/sound/soc/codecs/wcd938x.c
++++ b/sound/soc/codecs/wcd938x.c
+@@ -210,7 +210,7 @@ struct wcd938x_priv {
+ };
+ 
+ static const SNDRV_CTL_TLVD_DECLARE_DB_MINMAX(ear_pa_gain, 600, -1800);
+-static const SNDRV_CTL_TLVD_DECLARE_DB_MINMAX(line_gain, 600, -3000);
++static const DECLARE_TLV_DB_SCALE(line_gain, -3000, 150, -3000);
+ static const SNDRV_CTL_TLVD_DECLARE_DB_MINMAX(analog_gain, 0, 3000);
+ 
+ struct wcd938x_mbhc_zdet_param {
+@@ -2165,8 +2165,8 @@ static inline void wcd938x_mbhc_get_result_params(struct wcd938x_priv *wcd938x,
+ 	else if (x1 < minCode_param[noff])
+ 		*zdet = WCD938X_ZDET_FLOATING_IMPEDANCE;
+ 
+-	pr_err("%s: d1=%d, c1=%d, x1=0x%x, z_val=%d(milliOhm)\n",
+-		__func__, d1, c1, x1, *zdet);
++	pr_debug("%s: d1=%d, c1=%d, x1=0x%x, z_val=%d (milliohm)\n",
++		 __func__, d1, c1, x1, *zdet);
+ ramp_down:
+ 	i = 0;
+ 	while (x1) {
+@@ -2625,6 +2625,8 @@ static int wcd938x_mbhc_init(struct snd_soc_component *component)
+ 						     WCD938X_IRQ_HPHR_OCP_INT);
+ 
+ 	wcd938x->wcd_mbhc = wcd_mbhc_init(component, &mbhc_cb, intr_ids, wcd_mbhc_fields, true);
++	if (IS_ERR(wcd938x->wcd_mbhc))
++		return PTR_ERR(wcd938x->wcd_mbhc);
+ 
+ 	snd_soc_add_component_controls(component, impedance_detect_controls,
+ 				       ARRAY_SIZE(impedance_detect_controls));
+@@ -2633,6 +2635,14 @@ static int wcd938x_mbhc_init(struct snd_soc_component *component)
+ 
+ 	return 0;
+ }
++
++static void wcd938x_mbhc_deinit(struct snd_soc_component *component)
++{
++	struct wcd938x_priv *wcd938x = snd_soc_component_get_drvdata(component);
++
++	wcd_mbhc_deinit(wcd938x->wcd_mbhc);
++}
++
+ /* END MBHC */
+ 
+ static const struct snd_kcontrol_new wcd938x_snd_controls[] = {
+@@ -2652,8 +2662,8 @@ static const struct snd_kcontrol_new wcd938x_snd_controls[] = {
+ 		       wcd938x_get_swr_port, wcd938x_set_swr_port),
+ 	SOC_SINGLE_EXT("DSD_R Switch", WCD938X_DSD_R, 0, 1, 0,
+ 		       wcd938x_get_swr_port, wcd938x_set_swr_port),
+-	SOC_SINGLE_TLV("HPHL Volume", WCD938X_HPH_L_EN, 0, 0x18, 0, line_gain),
+-	SOC_SINGLE_TLV("HPHR Volume", WCD938X_HPH_R_EN, 0, 0x18, 0, line_gain),
++	SOC_SINGLE_TLV("HPHL Volume", WCD938X_HPH_L_EN, 0, 0x18, 1, line_gain),
++	SOC_SINGLE_TLV("HPHR Volume", WCD938X_HPH_R_EN, 0, 0x18, 1, line_gain),
+ 	WCD938X_EAR_PA_GAIN_TLV("EAR_PA Volume", WCD938X_ANA_EAR_COMPANDER_CTL,
+ 				2, 0x10, 0, ear_pa_gain),
+ 	SOC_SINGLE_EXT("ADC1 Switch", WCD938X_ADC1, 1, 1, 0,
+@@ -3080,16 +3090,33 @@ static int wcd938x_irq_init(struct wcd938x_priv *wcd, struct device *dev)
+ static int wcd938x_soc_codec_probe(struct snd_soc_component *component)
+ {
+ 	struct wcd938x_priv *wcd938x = snd_soc_component_get_drvdata(component);
++	struct sdw_slave *tx_sdw_dev = wcd938x->tx_sdw_dev;
+ 	struct device *dev = component->dev;
++	unsigned long time_left;
+ 	int ret, i;
+ 
++	time_left = wait_for_completion_timeout(&tx_sdw_dev->initialization_complete,
++						msecs_to_jiffies(2000));
++	if (!time_left) {
++		dev_err(dev, "soundwire device init timeout\n");
++		return -ETIMEDOUT;
++	}
++
+ 	snd_soc_component_init_regmap(component, wcd938x->regmap);
+ 
++	ret = pm_runtime_resume_and_get(dev);
++	if (ret < 0)
++		return ret;
++
+ 	wcd938x->variant = snd_soc_component_read_field(component,
+ 						 WCD938X_DIGITAL_EFUSE_REG_0,
+ 						 WCD938X_ID_MASK);
+ 
+ 	wcd938x->clsh_info = wcd_clsh_ctrl_alloc(component, WCD938X);
++	if (IS_ERR(wcd938x->clsh_info)) {
++		pm_runtime_put(dev);
++		return PTR_ERR(wcd938x->clsh_info);
++	}
+ 
+ 	wcd938x_io_init(wcd938x);
+ 	/* Set all interrupts as edge triggered */
+@@ -3098,6 +3125,8 @@ static int wcd938x_soc_codec_probe(struct snd_soc_component *component)
+ 			     (WCD938X_DIGITAL_INTR_LEVEL_0 + i), 0);
+ 	}
+ 
++	pm_runtime_put(dev);
++
+ 	wcd938x->hphr_pdm_wd_int = regmap_irq_get_virq(wcd938x->irq_chip,
+ 						       WCD938X_IRQ_HPHR_PDM_WD_INT);
+ 	wcd938x->hphl_pdm_wd_int = regmap_irq_get_virq(wcd938x->irq_chip,
+@@ -3109,20 +3138,26 @@ static int wcd938x_soc_codec_probe(struct snd_soc_component *component)
+ 	ret = request_threaded_irq(wcd938x->hphr_pdm_wd_int, NULL, wcd938x_wd_handle_irq,
+ 				   IRQF_ONESHOT | IRQF_TRIGGER_RISING,
+ 				   "HPHR PDM WD INT", wcd938x);
+-	if (ret)
++	if (ret) {
+ 		dev_err(dev, "Failed to request HPHR WD interrupt (%d)\n", ret);
++		goto err_free_clsh_ctrl;
++	}
+ 
+ 	ret = request_threaded_irq(wcd938x->hphl_pdm_wd_int, NULL, wcd938x_wd_handle_irq,
+ 				   IRQF_ONESHOT | IRQF_TRIGGER_RISING,
+ 				   "HPHL PDM WD INT", wcd938x);
+-	if (ret)
++	if (ret) {
+ 		dev_err(dev, "Failed to request HPHL WD interrupt (%d)\n", ret);
++		goto err_free_hphr_pdm_wd_int;
++	}
+ 
+ 	ret = request_threaded_irq(wcd938x->aux_pdm_wd_int, NULL, wcd938x_wd_handle_irq,
+ 				   IRQF_ONESHOT | IRQF_TRIGGER_RISING,
+ 				   "AUX PDM WD INT", wcd938x);
+-	if (ret)
++	if (ret) {
+ 		dev_err(dev, "Failed to request Aux WD interrupt (%d)\n", ret);
++		goto err_free_hphl_pdm_wd_int;
++	}
+ 
+ 	/* Disable watchdog interrupt for HPH and AUX */
+ 	disable_irq_nosync(wcd938x->hphr_pdm_wd_int);
+@@ -3137,7 +3172,7 @@ static int wcd938x_soc_codec_probe(struct snd_soc_component *component)
+ 			dev_err(component->dev,
+ 				"%s: Failed to add snd ctrls for variant: %d\n",
+ 				__func__, wcd938x->variant);
+-			goto err;
++			goto err_free_aux_pdm_wd_int;
+ 		}
+ 		break;
+ 	case WCD9385:
+@@ -3147,7 +3182,7 @@ static int wcd938x_soc_codec_probe(struct snd_soc_component *component)
+ 			dev_err(component->dev,
+ 				"%s: Failed to add snd ctrls for variant: %d\n",
+ 				__func__, wcd938x->variant);
+-			goto err;
++			goto err_free_aux_pdm_wd_int;
+ 		}
+ 		break;
+ 	default:
+@@ -3155,12 +3190,38 @@ static int wcd938x_soc_codec_probe(struct snd_soc_component *component)
+ 	}
+ 
+ 	ret = wcd938x_mbhc_init(component);
+-	if (ret)
++	if (ret) {
+ 		dev_err(component->dev,  "mbhc initialization failed\n");
+-err:
++		goto err_free_aux_pdm_wd_int;
++	}
++
++	return 0;
++
++err_free_aux_pdm_wd_int:
++	free_irq(wcd938x->aux_pdm_wd_int, wcd938x);
++err_free_hphl_pdm_wd_int:
++	free_irq(wcd938x->hphl_pdm_wd_int, wcd938x);
++err_free_hphr_pdm_wd_int:
++	free_irq(wcd938x->hphr_pdm_wd_int, wcd938x);
++err_free_clsh_ctrl:
++	wcd_clsh_ctrl_free(wcd938x->clsh_info);
++
+ 	return ret;
+ }
+ 
++static void wcd938x_soc_codec_remove(struct snd_soc_component *component)
++{
++	struct wcd938x_priv *wcd938x = snd_soc_component_get_drvdata(component);
++
++	wcd938x_mbhc_deinit(component);
++
++	free_irq(wcd938x->aux_pdm_wd_int, wcd938x);
++	free_irq(wcd938x->hphl_pdm_wd_int, wcd938x);
++	free_irq(wcd938x->hphr_pdm_wd_int, wcd938x);
++
++	wcd_clsh_ctrl_free(wcd938x->clsh_info);
++}
++
+ static int wcd938x_codec_set_jack(struct snd_soc_component *comp,
+ 				  struct snd_soc_jack *jack, void *data)
+ {
+@@ -3177,6 +3238,7 @@ static int wcd938x_codec_set_jack(struct snd_soc_component *comp,
+ static const struct snd_soc_component_driver soc_codec_dev_wcd938x = {
+ 	.name = "wcd938x_codec",
+ 	.probe = wcd938x_soc_codec_probe,
++	.remove = wcd938x_soc_codec_remove,
+ 	.controls = wcd938x_snd_controls,
+ 	.num_controls = ARRAY_SIZE(wcd938x_snd_controls),
+ 	.dapm_widgets = wcd938x_dapm_widgets,
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index e3105d48fb651..e9f1398ca9330 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -507,12 +507,6 @@ static int fsl_sai_set_bclk(struct snd_soc_dai *dai, bool tx, u32 freq)
+ 				   savediv / 2 - 1);
+ 	}
+ 
+-	if (sai->soc_data->max_register >= FSL_SAI_MCTL) {
+-		/* SAI is in master mode at this point, so enable MCLK */
+-		regmap_update_bits(sai->regmap, FSL_SAI_MCTL,
+-				   FSL_SAI_MCTL_MCLK_EN, FSL_SAI_MCTL_MCLK_EN);
+-	}
+-
+ 	return 0;
+ }
+ 
+@@ -719,7 +713,7 @@ static void fsl_sai_config_disable(struct fsl_sai *sai, int dir)
+ 	u32 xcsr, count = 100;
+ 
+ 	regmap_update_bits(sai->regmap, FSL_SAI_xCSR(tx, ofs),
+-			   FSL_SAI_CSR_TERE, 0);
++			   FSL_SAI_CSR_TERE | FSL_SAI_CSR_BCE, 0);
+ 
+ 	/* TERE will remain set till the end of current frame */
+ 	do {
+diff --git a/sound/soc/fsl/fsl_sai.h b/sound/soc/fsl/fsl_sai.h
+index a53c4f0e25faf..db8aabb156c7d 100644
+--- a/sound/soc/fsl/fsl_sai.h
++++ b/sound/soc/fsl/fsl_sai.h
+@@ -91,6 +91,7 @@
+ /* SAI Transmit/Receive Control Register */
+ #define FSL_SAI_CSR_TERE	BIT(31)
+ #define FSL_SAI_CSR_SE		BIT(30)
++#define FSL_SAI_CSR_BCE		BIT(28)
+ #define FSL_SAI_CSR_FR		BIT(25)
+ #define FSL_SAI_CSR_SR		BIT(24)
+ #define FSL_SAI_CSR_xF_SHIFT	16
+diff --git a/sound/soc/qcom/qdsp6/q6apm.c b/sound/soc/qcom/qdsp6/q6apm.c
+index a7a3f973eb6d5..cdebf209c8a55 100644
+--- a/sound/soc/qcom/qdsp6/q6apm.c
++++ b/sound/soc/qcom/qdsp6/q6apm.c
+@@ -446,6 +446,8 @@ static int graph_callback(struct gpr_resp_pkt *data, void *priv, int op)
+ 
+ 	switch (hdr->opcode) {
+ 	case DATA_CMD_RSP_WR_SH_MEM_EP_DATA_BUFFER_DONE_V2:
++		if (!graph->ar_graph)
++			break;
+ 		client_event = APM_CLIENT_EVENT_DATA_WRITE_DONE;
+ 		mutex_lock(&graph->lock);
+ 		token = hdr->token & APM_WRITE_TOKEN_MASK;
+@@ -479,6 +481,8 @@ static int graph_callback(struct gpr_resp_pkt *data, void *priv, int op)
+ 		wake_up(&graph->cmd_wait);
+ 		break;
+ 	case DATA_CMD_RSP_RD_SH_MEM_EP_DATA_BUFFER_V2:
++		if (!graph->ar_graph)
++			break;
+ 		client_event = APM_CLIENT_EVENT_DATA_READ_DONE;
+ 		mutex_lock(&graph->lock);
+ 		rd_done = data->payload;
+@@ -581,8 +585,9 @@ int q6apm_graph_close(struct q6apm_graph *graph)
+ {
+ 	struct audioreach_graph *ar_graph = graph->ar_graph;
+ 
+-	gpr_free_port(graph->port);
++	graph->ar_graph = NULL;
+ 	kref_put(&ar_graph->refcount, q6apm_put_audioreach_graph);
++	gpr_free_port(graph->port);
+ 	kfree(graph);
+ 
+ 	return 0;
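
The q6apm close path is an ordering fix: graph_callback() now bails out when
graph->ar_graph is NULL, so close must clear that pointer before
gpr_free_port() tears down the transport, ensuring any callback racing with
close sees the guard rather than a dying graph. In outline (names invented):

    struct graph { void *ar_graph; void *port; };

    static void callback(struct graph *g)
    {
        if (!g->ar_graph)   /* new early-out in graph_callback() */
            return;
        /* ... touch state owned by ar_graph ... */
    }

    static void close_graph(struct graph *g)
    {
        g->ar_graph = NULL; /* 1. disarm late callbacks              */
                            /* 2. kref_put() the graph reference     */
        g->port = NULL;     /* 3. only now free the port (was first) */
    }
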
+diff --git a/sound/soc/qcom/qdsp6/topology.c b/sound/soc/qcom/qdsp6/topology.c
+index cccc59b570b9a..130b22a34fb3b 100644
+--- a/sound/soc/qcom/qdsp6/topology.c
++++ b/sound/soc/qcom/qdsp6/topology.c
+@@ -1277,8 +1277,8 @@ int audioreach_tplg_init(struct snd_soc_component *component)
+ 
+ 	ret = snd_soc_tplg_component_load(component, &audioreach_tplg_ops, fw);
+ 	if (ret < 0) {
+-		dev_err(dev, "tplg component load failed%d\n", ret);
+-		ret = -EINVAL;
++		if (ret != -EPROBE_DEFER)
++			dev_err(dev, "tplg component load failed: %d\n", ret);
+ 	}
+ 
+ 	release_firmware(fw);
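
Two things happen in the topology hunk: the error is no longer clobbered to
-EINVAL, so callers can see and honour -EPROBE_DEFER, and deferral is no
longer logged as a failure, since it is routine and will be retried. The
idiom in isolation:

    #include <stdio.h>

    #define EPROBE_DEFER 517

    static int report(int ret)
    {
        if (ret < 0 && ret != -EPROBE_DEFER)  /* only log real failures */
            fprintf(stderr, "tplg component load failed: %d\n", ret);
        return ret;                           /* pass ret through untouched */
    }
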
+diff --git a/sound/soc/sof/ipc3-dtrace.c b/sound/soc/sof/ipc3-dtrace.c
+index 1d3bca2d28dd6..35da85a45a9ae 100644
+--- a/sound/soc/sof/ipc3-dtrace.c
++++ b/sound/soc/sof/ipc3-dtrace.c
+@@ -186,7 +186,6 @@ static ssize_t dfsentry_trace_filter_write(struct file *file, const char __user
+ 	struct snd_sof_dfsentry *dfse = file->private_data;
+ 	struct sof_ipc_trace_filter_elem *elems = NULL;
+ 	struct snd_sof_dev *sdev = dfse->sdev;
+-	loff_t pos = 0;
+ 	int num_elems;
+ 	char *string;
+ 	int ret;
+@@ -201,11 +200,11 @@ static ssize_t dfsentry_trace_filter_write(struct file *file, const char __user
+ 	if (!string)
+ 		return -ENOMEM;
+ 
+-	/* assert null termination */
+-	string[count] = 0;
+-	ret = simple_write_to_buffer(string, count, &pos, from, count);
+-	if (ret < 0)
++	if (copy_from_user(string, from, count)) {
++		ret = -EFAULT;
+ 		goto error;
++	}
++	string[count] = '\0';
+ 
+ 	ret = trace_filter_parse(sdev, string, &num_elems, &elems);
+ 	if (ret < 0)
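
For the SOF trace-filter write, copy_from_user() gives an all-or-nothing
copy, whereas simple_write_to_buffer() can legitimately copy fewer than
`count` bytes and the code would then parse a partially written buffer;
terminating after the copy also keeps the logic obvious. (That is my reading;
the hunk does not state its motivation.) Userspace analogue:

    #include <string.h>

    /* memcpy() stands in for copy_from_user(); dst must hold count + 1 bytes. */
    static void read_filter_string(char *dst, const char *src, size_t count)
    {
        memcpy(dst, src, count); /* all-or-nothing in the kernel version */
        dst[count] = '\0';       /* terminate only after the full copy */
    }
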
+diff --git a/sound/soc/tegra/tegra210_adx.c b/sound/soc/tegra/tegra210_adx.c
+index 41117c1d61fb3..c9eba3566aeed 100644
+--- a/sound/soc/tegra/tegra210_adx.c
++++ b/sound/soc/tegra/tegra210_adx.c
+@@ -2,7 +2,7 @@
+ //
+ // tegra210_adx.c - Tegra210 ADX driver
+ //
+-// Copyright (c) 2021 NVIDIA CORPORATION.  All rights reserved.
++// Copyright (c) 2021-2023 NVIDIA CORPORATION.  All rights reserved.
+ 
+ #include <linux/clk.h>
+ #include <linux/device.h>
+@@ -175,10 +175,20 @@ static int tegra210_adx_get_byte_map(struct snd_kcontrol *kcontrol,
+ 	mc = (struct soc_mixer_control *)kcontrol->private_value;
+ 	enabled = adx->byte_mask[mc->reg / 32] & (1 << (mc->reg % 32));
+ 
++	/*
++	 * TODO: Simplify this logic to just return from bytes_map[]
++	 *
++	 * Presently below is required since bytes_map[] is
++	 * tightly packed and cannot store the control value of 256.
++	 * Byte mask state is used to know if 256 needs to be returned.
++	 * Note that for control value of 256, the put() call stores 0
++	 * in the bytes_map[] and disables the corresponding bit in
++	 * byte_mask[].
++	 */
+ 	if (enabled)
+ 		ucontrol->value.integer.value[0] = bytes_map[mc->reg];
+ 	else
+-		ucontrol->value.integer.value[0] = 0;
++		ucontrol->value.integer.value[0] = 256;
+ 
+ 	return 0;
+ }
+@@ -192,19 +202,19 @@ static int tegra210_adx_put_byte_map(struct snd_kcontrol *kcontrol,
+ 	int value = ucontrol->value.integer.value[0];
+ 	struct soc_mixer_control *mc =
+ 		(struct soc_mixer_control *)kcontrol->private_value;
++	unsigned int mask_val = adx->byte_mask[mc->reg / 32];
+ 
+-	if (value == bytes_map[mc->reg])
++	if (value >= 0 && value <= 255)
++		mask_val |= (1 << (mc->reg % 32));
++	else
++		mask_val &= ~(1 << (mc->reg % 32));
++
++	if (mask_val == adx->byte_mask[mc->reg / 32])
+ 		return 0;
+ 
+-	if (value >= 0 && value <= 255) {
+-		/* update byte map and enable slot */
+-		bytes_map[mc->reg] = value;
+-		adx->byte_mask[mc->reg / 32] |= (1 << (mc->reg % 32));
+-	} else {
+-		/* reset byte map and disable slot */
+-		bytes_map[mc->reg] = 0;
+-		adx->byte_mask[mc->reg / 32] &= ~(1 << (mc->reg % 32));
+-	}
++	/* Update byte map and slot */
++	bytes_map[mc->reg] = value % 256;
++	adx->byte_mask[mc->reg / 32] = mask_val;
+ 
+ 	return 1;
+ }
+diff --git a/sound/soc/tegra/tegra210_amx.c b/sound/soc/tegra/tegra210_amx.c
+index 782a141b65c0c..179876949b308 100644
+--- a/sound/soc/tegra/tegra210_amx.c
++++ b/sound/soc/tegra/tegra210_amx.c
+@@ -2,7 +2,7 @@
+ //
+ // tegra210_amx.c - Tegra210 AMX driver
+ //
+-// Copyright (c) 2021 NVIDIA CORPORATION.  All rights reserved.
++// Copyright (c) 2021-2023 NVIDIA CORPORATION.  All rights reserved.
+ 
+ #include <linux/clk.h>
+ #include <linux/device.h>
+@@ -203,10 +203,20 @@ static int tegra210_amx_get_byte_map(struct snd_kcontrol *kcontrol,
+ 	else
+ 		enabled = amx->byte_mask[0] & (1 << reg);
+ 
++	/*
++	 * TODO: Simplify this logic to just return from bytes_map[]
++	 *
++	 * Presently below is required since bytes_map[] is
++	 * tightly packed and cannot store the control value of 256.
++	 * Byte mask state is used to know if 256 needs to be returned.
++	 * Note that for control value of 256, the put() call stores 0
++	 * in the bytes_map[] and disables the corresponding bit in
++	 * byte_mask[].
++	 */
+ 	if (enabled)
+ 		ucontrol->value.integer.value[0] = bytes_map[reg];
+ 	else
+-		ucontrol->value.integer.value[0] = 0;
++		ucontrol->value.integer.value[0] = 256;
+ 
+ 	return 0;
+ }
+@@ -221,25 +231,19 @@ static int tegra210_amx_put_byte_map(struct snd_kcontrol *kcontrol,
+ 	unsigned char *bytes_map = (unsigned char *)&amx->map;
+ 	int reg = mc->reg;
+ 	int value = ucontrol->value.integer.value[0];
++	unsigned int mask_val = amx->byte_mask[reg / 32];
+ 
+-	if (value == bytes_map[reg])
++	if (value >= 0 && value <= 255)
++		mask_val |= (1 << (reg % 32));
++	else
++		mask_val &= ~(1 << (reg % 32));
++
++	if (mask_val == amx->byte_mask[reg / 32])
+ 		return 0;
+ 
+-	if (value >= 0 && value <= 255) {
+-		/* Update byte map and enable slot */
+-		bytes_map[reg] = value;
+-		if (reg > 31)
+-			amx->byte_mask[1] |= (1 << (reg - 32));
+-		else
+-			amx->byte_mask[0] |= (1 << reg);
+-	} else {
+-		/* Reset byte map and disable slot */
+-		bytes_map[reg] = 0;
+-		if (reg > 31)
+-			amx->byte_mask[1] &= ~(1 << (reg - 32));
+-		else
+-			amx->byte_mask[0] &= ~(1 << reg);
+-	}
++	/* Update byte map and slot */
++	bytes_map[reg] = value % 256;
++	amx->byte_mask[reg / 32] = mask_val;
+ 
+ 	return 1;
+ }
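
Both Tegra byte-map hunks fix the same representational problem: 0 was doing
double duty as "slot disabled" and "slot mapped to byte 0". The control now
reports 256 for a disabled slot and treats any out-of-range write as
"disable". Reduced to its essence (a simplified model, not the exact ALSA
kcontrol semantics):

    #include <stdbool.h>

    struct slot { unsigned char byte; bool enabled; };

    static int slot_get(const struct slot *s)
    {
        return s->enabled ? s->byte : 256; /* 256 = unambiguous "disabled" */
    }

    static void slot_put(struct slot *s, int value)
    {
        if (value >= 0 && value <= 255) {  /* in range: map the byte */
            s->byte = (unsigned char)value;
            s->enabled = true;
        } else {                           /* out of range: disable the slot */
            s->byte = 0;
            s->enabled = false;
        }
    }
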
+diff --git a/tools/include/nolibc/stackprotector.h b/tools/include/nolibc/stackprotector.h
+index d119cbbbc256f..9890e86c26172 100644
+--- a/tools/include/nolibc/stackprotector.h
++++ b/tools/include/nolibc/stackprotector.h
+@@ -45,8 +45,9 @@ __attribute__((weak,no_stack_protector,section(".text.nolibc_stack_chk")))
+ void __stack_chk_init(void)
+ {
+ 	my_syscall3(__NR_getrandom, &__stack_chk_guard, sizeof(__stack_chk_guard), 0);
+-	/* a bit more randomness in case getrandom() fails */
+-	__stack_chk_guard ^= (uintptr_t) &__stack_chk_guard;
++	/* a bit more randomness in case getrandom() fails, ensure the guard is never 0 */
++	if (__stack_chk_guard != (uintptr_t) &__stack_chk_guard)
++		__stack_chk_guard ^= (uintptr_t) &__stack_chk_guard;
+ }
+ #endif // defined(NOLIBC_STACKPROTECTOR)
+ 
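
The nolibc guard logic deserves a close read: if getrandom() failed, the
guard is still 0 and the XOR seeds it with the variable's address; but if
getrandom() happened to return exactly that address, the old unconditional
XOR would have zeroed the guard. Skipping the XOR in that one case means the
result can never be 0 either way:

    #include <stdint.h>

    static uintptr_t guard; /* stands in for __stack_chk_guard */

    static void stack_chk_init(void)
    {
        /* getrandom(&guard, sizeof(guard), 0); -- may silently fail */
        if (guard != (uintptr_t)&guard)   /* XOR would yield 0 iff equal */
            guard ^= (uintptr_t)&guard;
    }
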
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index a794d9eca93d8..72f068682c9a2 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -155,9 +155,9 @@ FEATURE_CHECK_LDFLAGS-libcrypto = -lcrypto
+ ifdef CSINCLUDES
+   LIBOPENCSD_CFLAGS := -I$(CSINCLUDES)
+ endif
+-OPENCSDLIBS := -lopencsd_c_api
++OPENCSDLIBS := -lopencsd_c_api -lopencsd
+ ifeq ($(findstring -static,${LDFLAGS}),-static)
+-  OPENCSDLIBS += -lopencsd -lstdc++
++  OPENCSDLIBS += -lstdc++
+ endif
+ ifdef CSLIBS
+   LIBOPENCSD_LDFLAGS := -L$(CSLIBS)
+diff --git a/tools/perf/tests/shell/test_uprobe_from_different_cu.sh b/tools/perf/tests/shell/test_uprobe_from_different_cu.sh
+new file mode 100644
+index 0000000000000..00d2e0e2e0c28
+--- /dev/null
++++ b/tools/perf/tests/shell/test_uprobe_from_different_cu.sh
+@@ -0,0 +1,77 @@
++#!/bin/bash
++# test perf probe of function from different CU
++# SPDX-License-Identifier: GPL-2.0
++
++set -e
++
++temp_dir=$(mktemp -d /tmp/perf-uprobe-different-cu-sh.XXXXXXXXXX)
++
++cleanup()
++{
++	trap - EXIT TERM INT
++	if [[ "${temp_dir}" =~ ^/tmp/perf-uprobe-different-cu-sh.*$ ]]; then
++		echo "--- Cleaning up ---"
++		perf probe -x ${temp_dir}/testfile -d foo
++		rm -f "${temp_dir}/"*
++		rmdir "${temp_dir}"
++	fi
++}
++
++trap_cleanup()
++{
++        cleanup
++        exit 1
++}
++
++trap trap_cleanup EXIT TERM INT
++
++cat > ${temp_dir}/testfile-foo.h << EOF
++struct t
++{
++  int *p;
++  int c;
++};
++
++extern int foo (int i, struct t *t);
++EOF
++
++cat > ${temp_dir}/testfile-foo.c << EOF
++#include "testfile-foo.h"
++
++int
++foo (int i, struct t *t)
++{
++  int j, res = 0;
++  for (j = 0; j < i && j < t->c; j++)
++    res += t->p[j];
++
++  return res;
++}
++EOF
++
++cat > ${temp_dir}/testfile-main.c << EOF
++#include "testfile-foo.h"
++
++static struct t g;
++
++int
++main (int argc, char **argv)
++{
++  int i;
++  int j[argc];
++  g.c = argc;
++  g.p = j;
++  for (i = 0; i < argc; i++)
++    j[i] = (int) argv[i][0];
++  return foo (3, &g);
++}
++EOF
++
++gcc -g -Og -flto -c ${temp_dir}/testfile-foo.c -o ${temp_dir}/testfile-foo.o
++gcc -g -Og -c ${temp_dir}/testfile-main.c -o ${temp_dir}/testfile-main.o
++gcc -g -Og -o ${temp_dir}/testfile ${temp_dir}/testfile-foo.o ${temp_dir}/testfile-main.o
++
++perf probe -x ${temp_dir}/testfile --funcs foo
++perf probe -x ${temp_dir}/testfile foo
++
++cleanup
+diff --git a/tools/perf/util/dwarf-aux.c b/tools/perf/util/dwarf-aux.c
+index 3bff678745635..3597ca288c9c6 100644
+--- a/tools/perf/util/dwarf-aux.c
++++ b/tools/perf/util/dwarf-aux.c
+@@ -478,8 +478,10 @@ static const char *die_get_file_name(Dwarf_Die *dw_die, int idx)
+ {
+ 	Dwarf_Die cu_die;
+ 	Dwarf_Files *files;
++	Dwarf_Attribute attr_mem;
+ 
+-	if (idx < 0 || !dwarf_diecu(dw_die, &cu_die, NULL, NULL) ||
++	if (idx < 0 || !dwarf_attr_integrate(dw_die, DW_AT_decl_file, &attr_mem) ||
++	    !dwarf_cu_die(attr_mem.cu, &cu_die, NULL, NULL, NULL, NULL, NULL, NULL) ||
+ 	    dwarf_getsrcfiles(&cu_die, &files, NULL) != 0)
+ 		return NULL;
+ 
+diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
+index 9286d3baa12d6..adc5392df4009 100644
+--- a/tools/testing/radix-tree/maple.c
++++ b/tools/testing/radix-tree/maple.c
+@@ -206,9 +206,9 @@ static noinline void check_new_node(struct maple_tree *mt)
+ 				e = i - 1;
+ 		} else {
+ 			if (i >= 4)
+-				e = i - 4;
+-			else if (i == 3)
+-				e = i - 2;
++				e = i - 3;
++			else if (i >= 1)
++				e = i - 1;
+ 			else
+ 				e = 0;
+ 		}
+diff --git a/tools/testing/selftests/mm/mkdirty.c b/tools/testing/selftests/mm/mkdirty.c
+index 6d71d972997b2..301abb99e027e 100644
+--- a/tools/testing/selftests/mm/mkdirty.c
++++ b/tools/testing/selftests/mm/mkdirty.c
+@@ -321,8 +321,8 @@ close_uffd:
+ munmap:
+ 	munmap(dst, pagesize);
+ 	free(src);
+-#endif /* __NR_userfaultfd */
+ }
++#endif /* __NR_userfaultfd */
+ 
+ int main(void)
+ {
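
The mkdirty fix is preprocessor hygiene: the #endif previously sat before the
function's closing brace, so builds without __NR_userfaultfd could be left
with a stray '}'. The guard has to enclose the whole definition, as in this
generic sketch:

    #ifdef HAVE_FEATURE
    static void test_feature(void)
    {
        /* body compiled only when the feature exists */
    }
    #endif /* HAVE_FEATURE -- the closing brace stays inside the guard */
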
+diff --git a/tools/testing/selftests/tc-testing/config b/tools/testing/selftests/tc-testing/config
+index 6e73b09c20c81..71706197ba0f8 100644
+--- a/tools/testing/selftests/tc-testing/config
++++ b/tools/testing/selftests/tc-testing/config
+@@ -5,6 +5,8 @@ CONFIG_NF_CONNTRACK=m
+ CONFIG_NF_CONNTRACK_MARK=y
+ CONFIG_NF_CONNTRACK_ZONES=y
+ CONFIG_NF_CONNTRACK_LABELS=y
++CONFIG_NF_CONNTRACK_PROCFS=y
++CONFIG_NF_FLOW_TABLE=m
+ CONFIG_NF_NAT=m
+ CONFIG_NETFILTER_XT_TARGET_LOG=m
+ 
+diff --git a/tools/testing/selftests/tc-testing/settings b/tools/testing/selftests/tc-testing/settings
+new file mode 100644
+index 0000000000000..e2206265f67c7
+--- /dev/null
++++ b/tools/testing/selftests/tc-testing/settings
+@@ -0,0 +1 @@
++timeout=900


Thread overview: 29+ messages
2023-07-27 11:41 Mike Pagano [this message]
  -- strict thread matches above, loose matches on Subject: below --
2023-09-13 11:04 [gentoo-commits] proj/linux-patches:6.4 commit in: / Mike Pagano
2023-09-06 22:15 Mike Pagano
2023-09-02  9:55 Mike Pagano
2023-08-30 13:48 Mike Pagano
2023-08-30 13:45 Mike Pagano
2023-08-23 15:57 Mike Pagano
2023-08-16 20:20 Mike Pagano
2023-08-16 17:28 Mike Pagano
2023-08-11 11:53 Mike Pagano
2023-08-08 18:39 Mike Pagano
2023-08-03 11:47 Mike Pagano
2023-08-02 10:35 Mike Pagano
2023-07-29 18:37 Mike Pagano
2023-07-27 11:46 Mike Pagano
2023-07-24 20:26 Mike Pagano
2023-07-23 15:15 Mike Pagano
2023-07-19 17:20 Mike Pagano
2023-07-19 17:04 Mike Pagano
2023-07-11 11:45 Mike Pagano
2023-07-05 20:40 Mike Pagano
2023-07-05 20:26 Mike Pagano
2023-07-04 12:57 Mike Pagano
2023-07-04 12:57 Mike Pagano
2023-07-03 16:59 Mike Pagano
2023-07-01 18:48 Mike Pagano
2023-07-01 18:19 Mike Pagano
2023-05-09 12:38 Mike Pagano
2023-05-09 12:36 Mike Pagano
