From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" <mpagano@gentoo.org>
Message-ID: <1721304941.4910d4b94b7c2762c4408285599268c9899da415.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:6.1 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1099_linux-6.1.100.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 4910d4b94b7c2762c4408285599268c9899da415
X-VCS-Branch: 6.1
Date: Thu, 18 Jul 2024 12:15:52 +0000 (UTC)

commit:     4910d4b94b7c2762c4408285599268c9899da415
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jul 18 12:15:41 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jul 18 12:15:41 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4910d4b9

Linux patch 6.1.100

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1099_linux-6.1.100.patch | 3692 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3696 insertions(+)
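
As the diffstat shows, this commit adds one 0000_README entry plus the incremental 1099_linux-6.1.100.patch, whose Makefile hunk bumps SUBLEVEL from 99 to 100. A minimal sketch of applying it by hand to a vanilla 6.1.99 tree (paths below are hypothetical; Gentoo's kernel ebuilds apply the patch series automatically):

    # assumes an unpacked linux-6.1.99 source tree (hypothetical path)
    cd /usr/src/linux-6.1.99
    patch -p1 < /path/to/1099_linux-6.1.100.patch
    make kernelversion    # should now print 6.1.100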

diff --git a/0000_README b/0000_README
index 6e536b9a..2650897a 100644
--- a/0000_README
+++ b/0000_README
@@ -439,6 +439,10 @@ Patch:  1098_linux-6.1.99.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.1.99
 
+Patch:  1099_linux-6.1.100.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.1.100
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1099_linux-6.1.100.patch b/1099_linux-6.1.100.patch
new file mode 100644
index 00000000..594ab69b
--- /dev/null
+++ b/1099_linux-6.1.100.patch
@@ -0,0 +1,3692 @@
+diff --git a/Documentation/admin-guide/cifs/usage.rst b/Documentation/admin-guide/cifs/usage.rst
+index 3766bf8a1c20e..a50047cf95ca2 100644
+--- a/Documentation/admin-guide/cifs/usage.rst
++++ b/Documentation/admin-guide/cifs/usage.rst
+@@ -722,40 +722,26 @@ Configuration pseudo-files:
+ ======================= =======================================================
+ SecurityFlags		Flags which control security negotiation and
+ 			also packet signing. Authentication (may/must)
+-			flags (e.g. for NTLM and/or NTLMv2) may be combined with
++			flags (e.g. for NTLMv2) may be combined with
+ 			the signing flags.  Specifying two different password
+ 			hashing mechanisms (as "must use") on the other hand
+ 			does not make much sense. Default flags are::
+ 
+-				0x07007
+-
+-			(NTLM, NTLMv2 and packet signing allowed).  The maximum
+-			allowable flags if you want to allow mounts to servers
+-			using weaker password hashes is 0x37037 (lanman,
+-			plaintext, ntlm, ntlmv2, signing allowed).  Some
+-			SecurityFlags require the corresponding menuconfig
+-			options to be enabled.  Enabling plaintext
+-			authentication currently requires also enabling
+-			lanman authentication in the security flags
+-			because the cifs module only supports sending
+-			laintext passwords using the older lanman dialect
+-			form of the session setup SMB.  (e.g. for authentication
+-			using plain text passwords, set the SecurityFlags
+-			to 0x30030)::
++				0x00C5
++
++			(NTLMv2 and packet signing allowed).  Some SecurityFlags
++			may require enabling a corresponding menuconfig option.
+ 
+ 			  may use packet signing			0x00001
+ 			  must use packet signing			0x01001
+-			  may use NTLM (most common password hash)	0x00002
+-			  must use NTLM					0x02002
+ 			  may use NTLMv2				0x00004
+ 			  must use NTLMv2				0x04004
+-			  may use Kerberos security			0x00008
+-			  must use Kerberos				0x08008
+-			  may use lanman (weak) password hash		0x00010
+-			  must use lanman password hash			0x10010
+-			  may use plaintext passwords			0x00020
+-			  must use plaintext passwords			0x20020
+-			  (reserved for future packet encryption)	0x00040
++			  may use Kerberos security (krb5)		0x00008
++			  must use Kerberos                             0x08008
++			  may use NTLMSSP               		0x00080
++			  must use NTLMSSP           			0x80080
++			  seal (packet encryption)			0x00040
++			  must seal (not implemented yet)               0x40040
+ 
+ cifsFYI			If set to non-zero value, additional debug information
+ 			will be logged to the system error log.  This field
+diff --git a/Makefile b/Makefile
+index c12da8fcb089b..54099eefe18ca 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 1
+-SUBLEVEL = 99
++SUBLEVEL = 100
+ EXTRAVERSION =
+ NAME = Curry Ramen
+ 
+diff --git a/arch/arm/mach-davinci/pm.c b/arch/arm/mach-davinci/pm.c
+index 8aa39db095d76..2c5155bd376ba 100644
+--- a/arch/arm/mach-davinci/pm.c
++++ b/arch/arm/mach-davinci/pm.c
+@@ -61,7 +61,7 @@ static void davinci_pm_suspend(void)
+ 
+ 	/* Configure sleep count in deep sleep register */
+ 	val = __raw_readl(pm_config.deepsleep_reg);
+-	val &= ~DEEPSLEEP_SLEEPCOUNT_MASK,
++	val &= ~DEEPSLEEP_SLEEPCOUNT_MASK;
+ 	val |= pm_config.sleepcount;
+ 	__raw_writel(val, pm_config.deepsleep_reg);
+ 
+diff --git a/arch/s390/include/asm/processor.h b/arch/s390/include/asm/processor.h
+index c907f747d2a04..26861b09293f1 100644
+--- a/arch/s390/include/asm/processor.h
++++ b/arch/s390/include/asm/processor.h
+@@ -250,8 +250,8 @@ static inline void __load_psw(psw_t psw)
+  */
+ static __always_inline void __load_psw_mask(unsigned long mask)
+ {
++	psw_t psw __uninitialized;
+ 	unsigned long addr;
+-	psw_t psw;
+ 
+ 	psw.mask = mask;
+ 
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 6624806e6904b..a114338380a6f 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -167,22 +167,9 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
+ 	jne	swapgs_restore_regs_and_return_to_usermode
+ 
+ 	/*
+-	 * SYSCALL clears RF when it saves RFLAGS in R11 and SYSRET cannot
+-	 * restore RF properly. If the slowpath sets it for whatever reason, we
+-	 * need to restore it correctly.
+-	 *
+-	 * SYSRET can restore TF, but unlike IRET, restoring TF results in a
+-	 * trap from userspace immediately after SYSRET.  This would cause an
+-	 * infinite loop whenever #DB happens with register state that satisfies
+-	 * the opportunistic SYSRET conditions.  For example, single-stepping
+-	 * this user code:
+-	 *
+-	 *           movq	$stuck_here, %rcx
+-	 *           pushfq
+-	 *           popq %r11
+-	 *   stuck_here:
+-	 *
+-	 * would never get past 'stuck_here'.
++	 * SYSRET cannot restore RF.  It can restore TF, but unlike IRET,
++	 * restoring TF results in a trap from userspace immediately after
++	 * SYSRET.
+ 	 */
+ 	testq	$(X86_EFLAGS_RF|X86_EFLAGS_TF), %r11
+ 	jnz	swapgs_restore_regs_and_return_to_usermode
+diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
+index b14b8cd85eb23..74a2f418e6745 100644
+--- a/arch/x86/entry/entry_64_compat.S
++++ b/arch/x86/entry/entry_64_compat.S
+@@ -90,10 +90,6 @@ SYM_INNER_LABEL(entry_SYSENTER_compat_after_hwframe, SYM_L_GLOBAL)
+ 
+ 	cld
+ 
+-	IBRS_ENTER
+-	UNTRAIN_RET
+-	CLEAR_BRANCH_HISTORY
+-
+ 	/*
+ 	 * SYSENTER doesn't filter flags, so we need to clear NT and AC
+ 	 * ourselves.  To save a few cycles, we can check whether
+@@ -117,6 +113,16 @@ SYM_INNER_LABEL(entry_SYSENTER_compat_after_hwframe, SYM_L_GLOBAL)
+ 	jnz	.Lsysenter_fix_flags
+ .Lsysenter_flags_fixed:
+ 
++	/*
++	 * CPU bugs mitigations mechanisms can call other functions. They
++	 * should be invoked after making sure TF is cleared because
++	 * single-step is ignored only for instructions inside the
++	 * entry_SYSENTER_compat function.
++	 */
++	IBRS_ENTER
++	UNTRAIN_RET
++	CLEAR_BRANCH_HISTORY
++
+ 	movq	%rsp, %rdi
+ 	call	do_SYSENTER_32
+ 	/* XEN PV guests always use IRET path */
+diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
+index 055955c9bfcb7..7880e2a7ec6ad 100644
+--- a/arch/x86/lib/retpoline.S
++++ b/arch/x86/lib/retpoline.S
+@@ -107,6 +107,7 @@ __EXPORT_THUNK(srso_alias_untrain_ret)
+ /* dummy definition for alternatives */
+ SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+ 	ANNOTATE_UNRET_SAFE
++	ANNOTATE_NOENDBR
+ 	ret
+ 	int3
+ SYM_FUNC_END(srso_alias_untrain_ret)
+@@ -261,7 +262,6 @@ SYM_CODE_START(__x86_return_thunk)
+ 	UNWIND_HINT_FUNC
+ 	ANNOTATE_NOENDBR
+ 	ANNOTATE_UNRET_SAFE
+-	ANNOTATE_NOENDBR
+ 	ret
+ 	int3
+ SYM_CODE_END(__x86_return_thunk)
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index 6f613eef28879..18f4334a96919 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -16,7 +16,6 @@
+ #include <linux/acpi.h>
+ #include <linux/dmi.h>
+ #include <linux/sched.h>       /* need_resched() */
+-#include <linux/sort.h>
+ #include <linux/tick.h>
+ #include <linux/cpuidle.h>
+ #include <linux/cpu.h>
+@@ -388,25 +387,24 @@ static void acpi_processor_power_verify_c3(struct acpi_processor *pr,
+ 	return;
+ }
+ 
+-static int acpi_cst_latency_cmp(const void *a, const void *b)
++static void acpi_cst_latency_sort(struct acpi_processor_cx *states, size_t length)
+ {
+-	const struct acpi_processor_cx *x = a, *y = b;
++	int i, j, k;
+ 
+-	if (!(x->valid && y->valid))
+-		return 0;
+-	if (x->latency > y->latency)
+-		return 1;
+-	if (x->latency < y->latency)
+-		return -1;
+-	return 0;
+-}
+-static void acpi_cst_latency_swap(void *a, void *b, int n)
+-{
+-	struct acpi_processor_cx *x = a, *y = b;
++	for (i = 1; i < length; i++) {
++		if (!states[i].valid)
++			continue;
+ 
+-	if (!(x->valid && y->valid))
+-		return;
+-	swap(x->latency, y->latency);
++		for (j = i - 1, k = i; j >= 0; j--) {
++			if (!states[j].valid)
++				continue;
++
++			if (states[j].latency > states[k].latency)
++				swap(states[j].latency, states[k].latency);
++
++			k = j;
++		}
++	}
+ }
+ 
+ static int acpi_processor_power_verify(struct acpi_processor *pr)
+@@ -451,10 +449,7 @@ static int acpi_processor_power_verify(struct acpi_processor *pr)
+ 
+ 	if (buggy_latency) {
+ 		pr_notice("FW issue: working around C-state latencies out of order\n");
+-		sort(&pr->power.states[1], max_cstate,
+-		     sizeof(struct acpi_processor_cx),
+-		     acpi_cst_latency_cmp,
+-		     acpi_cst_latency_swap);
++		acpi_cst_latency_sort(&pr->power.states[1], max_cstate);
+ 	}
+ 
+ 	lapic_timer_propagate_broadcast(pr);
+diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c
+index ee71376f174b7..3bc1d9243dbd0 100644
+--- a/drivers/char/hpet.c
++++ b/drivers/char/hpet.c
+@@ -289,8 +289,13 @@ hpet_read(struct file *file, char __user *buf, size_t count, loff_t * ppos)
+ 	if (!devp->hd_ireqfreq)
+ 		return -EIO;
+ 
+-	if (count < sizeof(unsigned long))
+-		return -EINVAL;
++	if (in_compat_syscall()) {
++		if (count < sizeof(compat_ulong_t))
++			return -EINVAL;
++	} else {
++		if (count < sizeof(unsigned long))
++			return -EINVAL;
++	}
+ 
+ 	add_wait_queue(&devp->hd_waitqueue, &wait);
+ 
+@@ -314,9 +319,16 @@ hpet_read(struct file *file, char __user *buf, size_t count, loff_t * ppos)
+ 		schedule();
+ 	}
+ 
+-	retval = put_user(data, (unsigned long __user *)buf);
+-	if (!retval)
+-		retval = sizeof(unsigned long);
++	if (in_compat_syscall()) {
++		retval = put_user(data, (compat_ulong_t __user *)buf);
++		if (!retval)
++			retval = sizeof(compat_ulong_t);
++	} else {
++		retval = put_user(data, (unsigned long __user *)buf);
++		if (!retval)
++			retval = sizeof(unsigned long);
++	}
++
+ out:
+ 	__set_current_state(TASK_RUNNING);
+ 	remove_wait_queue(&devp->hd_waitqueue, &wait);
+@@ -671,12 +683,24 @@ struct compat_hpet_info {
+ 	unsigned short hi_timer;
+ };
+ 
++/* 32-bit types would lead to different command codes which should be
++ * translated into 64-bit ones before passed to hpet_ioctl_common
++ */
++#define COMPAT_HPET_INFO       _IOR('h', 0x03, struct compat_hpet_info)
++#define COMPAT_HPET_IRQFREQ    _IOW('h', 0x6, compat_ulong_t)
++
+ static long
+ hpet_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ {
+ 	struct hpet_info info;
+ 	int err;
+ 
++	if (cmd == COMPAT_HPET_INFO)
++		cmd = HPET_INFO;
++
++	if (cmd == COMPAT_HPET_IRQFREQ)
++		cmd = HPET_IRQFREQ;
++
+ 	mutex_lock(&hpet_mutex);
+ 	err = hpet_ioctl_common(file->private_data, cmd, arg, &info);
+ 	mutex_unlock(&hpet_mutex);
+diff --git a/drivers/firmware/cirrus/cs_dsp.c b/drivers/firmware/cirrus/cs_dsp.c
+index 64ed9d3f5d5d8..ee4c32669607f 100644
+--- a/drivers/firmware/cirrus/cs_dsp.c
++++ b/drivers/firmware/cirrus/cs_dsp.c
+@@ -1014,9 +1014,16 @@ struct cs_dsp_coeff_parsed_coeff {
+ 	int len;
+ };
+ 
+-static int cs_dsp_coeff_parse_string(int bytes, const u8 **pos, const u8 **str)
++static int cs_dsp_coeff_parse_string(int bytes, const u8 **pos, unsigned int avail,
++				     const u8 **str)
+ {
+-	int length;
++	int length, total_field_len;
++
++	/* String fields are at least one __le32 */
++	if (sizeof(__le32) > avail) {
++		*pos = NULL;
++		return 0;
++	}
+ 
+ 	switch (bytes) {
+ 	case 1:
+@@ -1029,10 +1036,16 @@ static int cs_dsp_coeff_parse_string(int bytes, const u8 **pos, const u8 **str)
+ 		return 0;
+ 	}
+ 
++	total_field_len = ((length + bytes) + 3) & ~0x03;
++	if ((unsigned int)total_field_len > avail) {
++		*pos = NULL;
++		return 0;
++	}
++
+ 	if (str)
+ 		*str = *pos + bytes;
+ 
+-	*pos += ((length + bytes) + 3) & ~0x03;
++	*pos += total_field_len;
+ 
+ 	return length;
+ }
+@@ -1057,71 +1070,134 @@ static int cs_dsp_coeff_parse_int(int bytes, const u8 **pos)
+ 	return val;
+ }
+ 
+-static inline void cs_dsp_coeff_parse_alg(struct cs_dsp *dsp, const u8 **data,
+-					  struct cs_dsp_coeff_parsed_alg *blk)
++static int cs_dsp_coeff_parse_alg(struct cs_dsp *dsp,
++				  const struct wmfw_region *region,
++				  struct cs_dsp_coeff_parsed_alg *blk)
+ {
+ 	const struct wmfw_adsp_alg_data *raw;
++	unsigned int data_len = le32_to_cpu(region->len);
++	unsigned int pos;
++	const u8 *tmp;
++
++	raw = (const struct wmfw_adsp_alg_data *)region->data;
+ 
+ 	switch (dsp->fw_ver) {
+ 	case 0:
+ 	case 1:
+-		raw = (const struct wmfw_adsp_alg_data *)*data;
+-		*data = raw->data;
++		if (sizeof(*raw) > data_len)
++			return -EOVERFLOW;
+ 
+ 		blk->id = le32_to_cpu(raw->id);
+ 		blk->name = raw->name;
+-		blk->name_len = strlen(raw->name);
++		blk->name_len = strnlen(raw->name, ARRAY_SIZE(raw->name));
+ 		blk->ncoeff = le32_to_cpu(raw->ncoeff);
++
++		pos = sizeof(*raw);
+ 		break;
+ 	default:
+-		blk->id = cs_dsp_coeff_parse_int(sizeof(raw->id), data);
+-		blk->name_len = cs_dsp_coeff_parse_string(sizeof(u8), data,
++		if (sizeof(raw->id) > data_len)
++			return -EOVERFLOW;
++
++		tmp = region->data;
++		blk->id = cs_dsp_coeff_parse_int(sizeof(raw->id), &tmp);
++		pos = tmp - region->data;
++
++		tmp = &region->data[pos];
++		blk->name_len = cs_dsp_coeff_parse_string(sizeof(u8), &tmp, data_len - pos,
+ 							  &blk->name);
+-		cs_dsp_coeff_parse_string(sizeof(u16), data, NULL);
+-		blk->ncoeff = cs_dsp_coeff_parse_int(sizeof(raw->ncoeff), data);
++		if (!tmp)
++			return -EOVERFLOW;
++
++		pos = tmp - region->data;
++		cs_dsp_coeff_parse_string(sizeof(u16), &tmp, data_len - pos, NULL);
++		if (!tmp)
++			return -EOVERFLOW;
++
++		pos = tmp - region->data;
++		if (sizeof(raw->ncoeff) > (data_len - pos))
++			return -EOVERFLOW;
++
++		blk->ncoeff = cs_dsp_coeff_parse_int(sizeof(raw->ncoeff), &tmp);
++		pos += sizeof(raw->ncoeff);
+ 		break;
+ 	}
+ 
++	if ((int)blk->ncoeff < 0)
++		return -EOVERFLOW;
++
+ 	cs_dsp_dbg(dsp, "Algorithm ID: %#x\n", blk->id);
+ 	cs_dsp_dbg(dsp, "Algorithm name: %.*s\n", blk->name_len, blk->name);
+ 	cs_dsp_dbg(dsp, "# of coefficient descriptors: %#x\n", blk->ncoeff);
++
++	return pos;
+ }
+ 
+-static inline void cs_dsp_coeff_parse_coeff(struct cs_dsp *dsp, const u8 **data,
+-					    struct cs_dsp_coeff_parsed_coeff *blk)
++static int cs_dsp_coeff_parse_coeff(struct cs_dsp *dsp,
++				    const struct wmfw_region *region,
++				    unsigned int pos,
++				    struct cs_dsp_coeff_parsed_coeff *blk)
+ {
+ 	const struct wmfw_adsp_coeff_data *raw;
++	unsigned int data_len = le32_to_cpu(region->len);
++	unsigned int blk_len, blk_end_pos;
+ 	const u8 *tmp;
+-	int length;
++
++	raw = (const struct wmfw_adsp_coeff_data *)&region->data[pos];
++	if (sizeof(raw->hdr) > (data_len - pos))
++		return -EOVERFLOW;
++
++	blk_len = le32_to_cpu(raw->hdr.size);
++	if (blk_len > S32_MAX)
++		return -EOVERFLOW;
++
++	if (blk_len > (data_len - pos - sizeof(raw->hdr)))
++		return -EOVERFLOW;
++
++	blk_end_pos = pos + sizeof(raw->hdr) + blk_len;
++
++	blk->offset = le16_to_cpu(raw->hdr.offset);
++	blk->mem_type = le16_to_cpu(raw->hdr.type);
+ 
+ 	switch (dsp->fw_ver) {
+ 	case 0:
+ 	case 1:
+-		raw = (const struct wmfw_adsp_coeff_data *)*data;
+-		*data = *data + sizeof(raw->hdr) + le32_to_cpu(raw->hdr.size);
++		if (sizeof(*raw) > (data_len - pos))
++			return -EOVERFLOW;
+ 
+-		blk->offset = le16_to_cpu(raw->hdr.offset);
+-		blk->mem_type = le16_to_cpu(raw->hdr.type);
+ 		blk->name = raw->name;
+-		blk->name_len = strlen(raw->name);
++		blk->name_len = strnlen(raw->name, ARRAY_SIZE(raw->name));
+ 		blk->ctl_type = le16_to_cpu(raw->ctl_type);
+ 		blk->flags = le16_to_cpu(raw->flags);
+ 		blk->len = le32_to_cpu(raw->len);
+ 		break;
+ 	default:
+-		tmp = *data;
+-		blk->offset = cs_dsp_coeff_parse_int(sizeof(raw->hdr.offset), &tmp);
+-		blk->mem_type = cs_dsp_coeff_parse_int(sizeof(raw->hdr.type), &tmp);
+-		length = cs_dsp_coeff_parse_int(sizeof(raw->hdr.size), &tmp);
+-		blk->name_len = cs_dsp_coeff_parse_string(sizeof(u8), &tmp,
++		pos += sizeof(raw->hdr);
++		tmp = &region->data[pos];
++		blk->name_len = cs_dsp_coeff_parse_string(sizeof(u8), &tmp, data_len - pos,
+ 							  &blk->name);
+-		cs_dsp_coeff_parse_string(sizeof(u8), &tmp, NULL);
+-		cs_dsp_coeff_parse_string(sizeof(u16), &tmp, NULL);
++		if (!tmp)
++			return -EOVERFLOW;
++
++		pos = tmp - region->data;
++		cs_dsp_coeff_parse_string(sizeof(u8), &tmp, data_len - pos, NULL);
++		if (!tmp)
++			return -EOVERFLOW;
++
++		pos = tmp - region->data;
++		cs_dsp_coeff_parse_string(sizeof(u16), &tmp, data_len - pos, NULL);
++		if (!tmp)
++			return -EOVERFLOW;
++
++		pos = tmp - region->data;
++		if (sizeof(raw->ctl_type) + sizeof(raw->flags) + sizeof(raw->len) >
++		    (data_len - pos))
++			return -EOVERFLOW;
++
+ 		blk->ctl_type = cs_dsp_coeff_parse_int(sizeof(raw->ctl_type), &tmp);
++		pos += sizeof(raw->ctl_type);
+ 		blk->flags = cs_dsp_coeff_parse_int(sizeof(raw->flags), &tmp);
++		pos += sizeof(raw->flags);
+ 		blk->len = cs_dsp_coeff_parse_int(sizeof(raw->len), &tmp);
+-
+-		*data = *data + sizeof(raw->hdr) + length;
+ 		break;
+ 	}
+ 
+@@ -1131,6 +1207,8 @@ static inline void cs_dsp_coeff_parse_coeff(struct cs_dsp *dsp, const u8 **data,
+ 	cs_dsp_dbg(dsp, "\tCoefficient flags: %#x\n", blk->flags);
+ 	cs_dsp_dbg(dsp, "\tALSA control type: %#x\n", blk->ctl_type);
+ 	cs_dsp_dbg(dsp, "\tALSA control len: %#x\n", blk->len);
++
++	return blk_end_pos;
+ }
+ 
+ static int cs_dsp_check_coeff_flags(struct cs_dsp *dsp,
+@@ -1154,12 +1232,16 @@ static int cs_dsp_parse_coeff(struct cs_dsp *dsp,
+ 	struct cs_dsp_alg_region alg_region = {};
+ 	struct cs_dsp_coeff_parsed_alg alg_blk;
+ 	struct cs_dsp_coeff_parsed_coeff coeff_blk;
+-	const u8 *data = region->data;
+-	int i, ret;
++	int i, pos, ret;
++
++	pos = cs_dsp_coeff_parse_alg(dsp, region, &alg_blk);
++	if (pos < 0)
++		return pos;
+ 
+-	cs_dsp_coeff_parse_alg(dsp, &data, &alg_blk);
+ 	for (i = 0; i < alg_blk.ncoeff; i++) {
+-		cs_dsp_coeff_parse_coeff(dsp, &data, &coeff_blk);
++		pos = cs_dsp_coeff_parse_coeff(dsp, region, pos, &coeff_blk);
++		if (pos < 0)
++			return pos;
+ 
+ 		switch (coeff_blk.ctl_type) {
+ 		case WMFW_CTL_TYPE_BYTES:
+@@ -1228,6 +1310,10 @@ static unsigned int cs_dsp_adsp1_parse_sizes(struct cs_dsp *dsp,
+ 	const struct wmfw_adsp1_sizes *adsp1_sizes;
+ 
+ 	adsp1_sizes = (void *)&firmware->data[pos];
++	if (sizeof(*adsp1_sizes) > firmware->size - pos) {
++		cs_dsp_err(dsp, "%s: file truncated\n", file);
++		return 0;
++	}
+ 
+ 	cs_dsp_dbg(dsp, "%s: %d DM, %d PM, %d ZM\n", file,
+ 		   le32_to_cpu(adsp1_sizes->dm), le32_to_cpu(adsp1_sizes->pm),
+@@ -1244,6 +1330,10 @@ static unsigned int cs_dsp_adsp2_parse_sizes(struct cs_dsp *dsp,
+ 	const struct wmfw_adsp2_sizes *adsp2_sizes;
+ 
+ 	adsp2_sizes = (void *)&firmware->data[pos];
++	if (sizeof(*adsp2_sizes) > firmware->size - pos) {
++		cs_dsp_err(dsp, "%s: file truncated\n", file);
++		return 0;
++	}
+ 
+ 	cs_dsp_dbg(dsp, "%s: %d XM, %d YM %d PM, %d ZM\n", file,
+ 		   le32_to_cpu(adsp2_sizes->xm), le32_to_cpu(adsp2_sizes->ym),
+@@ -1283,7 +1373,6 @@ static int cs_dsp_load(struct cs_dsp *dsp, const struct firmware *firmware,
+ 	struct regmap *regmap = dsp->regmap;
+ 	unsigned int pos = 0;
+ 	const struct wmfw_header *header;
+-	const struct wmfw_adsp1_sizes *adsp1_sizes;
+ 	const struct wmfw_footer *footer;
+ 	const struct wmfw_region *region;
+ 	const struct cs_dsp_region *mem;
+@@ -1296,10 +1385,8 @@ static int cs_dsp_load(struct cs_dsp *dsp, const struct firmware *firmware,
+ 
+ 	ret = -EINVAL;
+ 
+-	pos = sizeof(*header) + sizeof(*adsp1_sizes) + sizeof(*footer);
+-	if (pos >= firmware->size) {
+-		cs_dsp_err(dsp, "%s: file too short, %zu bytes\n",
+-			   file, firmware->size);
++	if (sizeof(*header) >= firmware->size) {
++		ret = -EOVERFLOW;
+ 		goto out_fw;
+ 	}
+ 
+@@ -1327,22 +1414,36 @@ static int cs_dsp_load(struct cs_dsp *dsp, const struct firmware *firmware,
+ 
+ 	pos = sizeof(*header);
+ 	pos = dsp->ops->parse_sizes(dsp, file, pos, firmware);
++	if ((pos == 0) || (sizeof(*footer) > firmware->size - pos)) {
++		ret = -EOVERFLOW;
++		goto out_fw;
++	}
+ 
+ 	footer = (void *)&firmware->data[pos];
+ 	pos += sizeof(*footer);
+ 
+ 	if (le32_to_cpu(header->len) != pos) {
+-		cs_dsp_err(dsp, "%s: unexpected header length %d\n",
+-			   file, le32_to_cpu(header->len));
++		ret = -EOVERFLOW;
+ 		goto out_fw;
+ 	}
+ 
+ 	cs_dsp_dbg(dsp, "%s: timestamp %llu\n", file,
+ 		   le64_to_cpu(footer->timestamp));
+ 
+-	while (pos < firmware->size &&
+-	       sizeof(*region) < firmware->size - pos) {
++	while (pos < firmware->size) {
++		/* Is there enough data for a complete block header? */
++		if (sizeof(*region) > firmware->size - pos) {
++			ret = -EOVERFLOW;
++			goto out_fw;
++		}
++
+ 		region = (void *)&(firmware->data[pos]);
++
++		if (le32_to_cpu(region->len) > firmware->size - pos - sizeof(*region)) {
++			ret = -EOVERFLOW;
++			goto out_fw;
++		}
++
+ 		region_name = "Unknown";
+ 		reg = 0;
+ 		text = NULL;
+@@ -1399,16 +1500,6 @@ static int cs_dsp_load(struct cs_dsp *dsp, const struct firmware *firmware,
+ 			   regions, le32_to_cpu(region->len), offset,
+ 			   region_name);
+ 
+-		if (le32_to_cpu(region->len) >
+-		    firmware->size - pos - sizeof(*region)) {
+-			cs_dsp_err(dsp,
+-				   "%s.%d: %s region len %d bytes exceeds file length %zu\n",
+-				   file, regions, region_name,
+-				   le32_to_cpu(region->len), firmware->size);
+-			ret = -EINVAL;
+-			goto out_fw;
+-		}
+-
+ 		if (text) {
+ 			memcpy(text, region->data, le32_to_cpu(region->len));
+ 			cs_dsp_info(dsp, "%s: %s\n", file, text);
+@@ -1459,6 +1550,9 @@ static int cs_dsp_load(struct cs_dsp *dsp, const struct firmware *firmware,
+ 	cs_dsp_buf_free(&buf_list);
+ 	kfree(text);
+ 
++	if (ret == -EOVERFLOW)
++		cs_dsp_err(dsp, "%s: file content overflows file data\n", file);
++
+ 	return ret;
+ }
+ 
+@@ -2026,10 +2120,20 @@ static int cs_dsp_load_coeff(struct cs_dsp *dsp, const struct firmware *firmware
+ 	pos = le32_to_cpu(hdr->len);
+ 
+ 	blocks = 0;
+-	while (pos < firmware->size &&
+-	       sizeof(*blk) < firmware->size - pos) {
++	while (pos < firmware->size) {
++		/* Is there enough data for a complete block header? */
++		if (sizeof(*blk) > firmware->size - pos) {
++			ret = -EOVERFLOW;
++			goto out_fw;
++		}
++
+ 		blk = (void *)(&firmware->data[pos]);
+ 
++		if (le32_to_cpu(blk->len) > firmware->size - pos - sizeof(*blk)) {
++			ret = -EOVERFLOW;
++			goto out_fw;
++		}
++
+ 		type = le16_to_cpu(blk->type);
+ 		offset = le16_to_cpu(blk->offset);
+ 		version = le32_to_cpu(blk->ver) >> 8;
+@@ -2125,17 +2229,6 @@ static int cs_dsp_load_coeff(struct cs_dsp *dsp, const struct firmware *firmware
+ 		}
+ 
+ 		if (reg) {
+-			if (le32_to_cpu(blk->len) >
+-			    firmware->size - pos - sizeof(*blk)) {
+-				cs_dsp_err(dsp,
+-					   "%s.%d: %s region len %d bytes exceeds file length %zu\n",
+-					   file, blocks, region_name,
+-					   le32_to_cpu(blk->len),
+-					   firmware->size);
+-				ret = -EINVAL;
+-				goto out_fw;
+-			}
+-
+ 			buf = cs_dsp_buf_alloc(blk->data,
+ 					       le32_to_cpu(blk->len),
+ 					       &buf_list);
+@@ -2175,6 +2268,10 @@ static int cs_dsp_load_coeff(struct cs_dsp *dsp, const struct firmware *firmware
+ 	regmap_async_complete(regmap);
+ 	cs_dsp_buf_free(&buf_list);
+ 	kfree(text);
++
++	if (ret == -EOVERFLOW)
++		cs_dsp_err(dsp, "%s: file content overflows file data\n", file);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index cef82b205c261..d0098e342ba22 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -114,6 +114,7 @@ enum rcar_i2c_type {
+ 	I2C_RCAR_GEN1,
+ 	I2C_RCAR_GEN2,
+ 	I2C_RCAR_GEN3,
++	I2C_RCAR_GEN4,
+ };
+ 
+ struct rcar_i2c_priv {
+@@ -223,6 +224,14 @@ static void rcar_i2c_init(struct rcar_i2c_priv *priv)
+ 
+ }
+ 
++static void rcar_i2c_reset_slave(struct rcar_i2c_priv *priv)
++{
++	rcar_i2c_write(priv, ICSIER, 0);
++	rcar_i2c_write(priv, ICSSR, 0);
++	rcar_i2c_write(priv, ICSCR, SDBS);
++	rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */
++}
++
+ static int rcar_i2c_bus_barrier(struct rcar_i2c_priv *priv)
+ {
+ 	int ret;
+@@ -386,8 +395,8 @@ static void rcar_i2c_cleanup_dma(struct rcar_i2c_priv *priv, bool terminate)
+ 	dma_unmap_single(chan->device->dev, sg_dma_address(&priv->sg),
+ 			 sg_dma_len(&priv->sg), priv->dma_direction);
+ 
+-	/* Gen3 can only do one RXDMA per transfer and we just completed it */
+-	if (priv->devtype == I2C_RCAR_GEN3 &&
++	/* Gen3+ can only do one RXDMA per transfer and we just completed it */
++	if (priv->devtype >= I2C_RCAR_GEN3 &&
+ 	    priv->dma_direction == DMA_FROM_DEVICE)
+ 		priv->flags |= ID_P_NO_RXDMA;
+ 
+@@ -815,6 +824,10 @@ static int rcar_i2c_do_reset(struct rcar_i2c_priv *priv)
+ {
+ 	int ret;
+ 
++	/* Don't reset if a slave instance is currently running */
++	if (priv->slave)
++		return -EISCONN;
++
+ 	ret = reset_control_reset(priv->rstc);
+ 	if (ret)
+ 		return ret;
+@@ -841,14 +854,12 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap,
+ 	if (ret < 0)
+ 		goto out;
+ 
+-	/* Gen3 needs a reset before allowing RXDMA once */
+-	if (priv->devtype == I2C_RCAR_GEN3) {
+-		priv->flags |= ID_P_NO_RXDMA;
+-		if (!IS_ERR(priv->rstc)) {
+-			ret = rcar_i2c_do_reset(priv);
+-			if (ret == 0)
+-				priv->flags &= ~ID_P_NO_RXDMA;
+-		}
++	/* Gen3+ needs a reset. That also allows RXDMA once */
++	if (priv->devtype >= I2C_RCAR_GEN3) {
++		ret = rcar_i2c_do_reset(priv);
++		if (ret)
++			goto out;
++		priv->flags &= ~ID_P_NO_RXDMA;
+ 	}
+ 
+ 	rcar_i2c_init(priv);
+@@ -975,11 +986,8 @@ static int rcar_unreg_slave(struct i2c_client *slave)
+ 
+ 	/* ensure no irq is running before clearing ptr */
+ 	disable_irq(priv->irq);
+-	rcar_i2c_write(priv, ICSIER, 0);
+-	rcar_i2c_write(priv, ICSSR, 0);
++	rcar_i2c_reset_slave(priv);
+ 	enable_irq(priv->irq);
+-	rcar_i2c_write(priv, ICSCR, SDBS);
+-	rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */
+ 
+ 	priv->slave = NULL;
+ 
+@@ -1032,7 +1040,7 @@ static const struct of_device_id rcar_i2c_dt_ids[] = {
+ 	{ .compatible = "renesas,rcar-gen1-i2c", .data = (void *)I2C_RCAR_GEN1 },
+ 	{ .compatible = "renesas,rcar-gen2-i2c", .data = (void *)I2C_RCAR_GEN2 },
+ 	{ .compatible = "renesas,rcar-gen3-i2c", .data = (void *)I2C_RCAR_GEN3 },
+-	{ .compatible = "renesas,rcar-gen4-i2c", .data = (void *)I2C_RCAR_GEN3 },
++	{ .compatible = "renesas,rcar-gen4-i2c", .data = (void *)I2C_RCAR_GEN4 },
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(of, rcar_i2c_dt_ids);
+@@ -1092,22 +1100,15 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+ 		goto out_pm_disable;
+ 	}
+ 
+-	rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */
++	/* Bring hardware to known state */
++	rcar_i2c_init(priv);
++	rcar_i2c_reset_slave(priv);
+ 
+ 	if (priv->devtype < I2C_RCAR_GEN3) {
+ 		irqflags |= IRQF_NO_THREAD;
+ 		irqhandler = rcar_i2c_gen2_irq;
+ 	}
+ 
+-	if (priv->devtype == I2C_RCAR_GEN3) {
+-		priv->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+-		if (!IS_ERR(priv->rstc)) {
+-			ret = reset_control_status(priv->rstc);
+-			if (ret < 0)
+-				priv->rstc = ERR_PTR(-ENOTSUPP);
+-		}
+-	}
+-
+ 	/* Stay always active when multi-master to keep arbitration working */
+ 	if (of_property_read_bool(dev->of_node, "multi-master"))
+ 		priv->flags |= ID_P_PM_BLOCKED;
+@@ -1117,6 +1118,22 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+ 	if (of_property_read_bool(dev->of_node, "smbus"))
+ 		priv->flags |= ID_P_HOST_NOTIFY;
+ 
++	/* R-Car Gen3+ needs a reset before every transfer */
++	if (priv->devtype >= I2C_RCAR_GEN3) {
++		priv->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
++		if (IS_ERR(priv->rstc)) {
++			ret = PTR_ERR(priv->rstc);
++			goto out_pm_put;
++		}
++
++		ret = reset_control_status(priv->rstc);
++		if (ret < 0)
++			goto out_pm_put;
++
++		/* hard reset disturbs HostNotify local target, so disable it */
++		priv->flags &= ~ID_P_HOST_NOTIFY;
++	}
++
+ 	ret = platform_get_irq(pdev, 0);
+ 	if (ret < 0)
+ 		goto out_pm_put;
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index 8af82f42af30b..d6a879f1542c5 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -1049,6 +1049,7 @@ EXPORT_SYMBOL(i2c_find_device_by_fwnode);
+ 
+ static const struct i2c_device_id dummy_id[] = {
+ 	{ "dummy", 0 },
++	{ "smbus_host_notify", 0 },
+ 	{ },
+ };
+ 
+diff --git a/drivers/i2c/i2c-slave-testunit.c b/drivers/i2c/i2c-slave-testunit.c
+index 54c08f48a8b85..b9967a5a7d255 100644
+--- a/drivers/i2c/i2c-slave-testunit.c
++++ b/drivers/i2c/i2c-slave-testunit.c
+@@ -118,6 +118,13 @@ static int i2c_slave_testunit_slave_cb(struct i2c_client *client,
+ 			queue_delayed_work(system_long_wq, &tu->worker,
+ 					   msecs_to_jiffies(10 * tu->regs[TU_REG_DELAY]));
+ 		}
++
++		/*
++		 * Reset reg_idx to avoid that work gets queued again in case of
++		 * STOP after a following read message. But do not clear TU regs
++		 * here because we still need them in the workqueue!
++		 */
++		tu->reg_idx = 0;
+ 		break;
+ 
+ 	case I2C_SLAVE_WRITE_REQUESTED:
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 69cc24962706c..6c94364019c81 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -1509,16 +1509,20 @@ static int fastrpc_get_info_from_dsp(struct fastrpc_user *fl, uint32_t *dsp_attr
+ {
+ 	struct fastrpc_invoke_args args[2] = { 0 };
+ 
+-	/* Capability filled in userspace */
++	/*
++	 * Capability filled in userspace. This carries the information
++	 * about the remoteproc support which is fetched from the remoteproc
++	 * sysfs node by userspace.
++	 */
+ 	dsp_attr_buf[0] = 0;
++	dsp_attr_buf_len -= 1;
+ 
+ 	args[0].ptr = (u64)(uintptr_t)&dsp_attr_buf_len;
+ 	args[0].length = sizeof(dsp_attr_buf_len);
+ 	args[0].fd = -1;
+ 	args[1].ptr = (u64)(uintptr_t)&dsp_attr_buf[1];
+-	args[1].length = dsp_attr_buf_len;
++	args[1].length = dsp_attr_buf_len * sizeof(u32);
+ 	args[1].fd = -1;
+-	fl->pd = USER_PD;
+ 
+ 	return fastrpc_internal_invoke(fl, true, FASTRPC_DSP_UTILITIES_HANDLE,
+ 				       FASTRPC_SCALARS(0, 1, 1), args);
+@@ -1546,7 +1550,7 @@ static int fastrpc_get_info_from_kernel(struct fastrpc_ioctl_capability *cap,
+ 	if (!dsp_attributes)
+ 		return -ENOMEM;
+ 
+-	err = fastrpc_get_info_from_dsp(fl, dsp_attributes, FASTRPC_MAX_DSP_ATTRIBUTES_LEN);
++	err = fastrpc_get_info_from_dsp(fl, dsp_attributes, FASTRPC_MAX_DSP_ATTRIBUTES);
+ 	if (err == DSP_UNSUPPORTED_API) {
+ 		dev_info(&cctx->rpdev->dev,
+ 			 "Warning: DSP capabilities not supported on domain: %d\n", domain);
+@@ -1599,7 +1603,7 @@ static int fastrpc_get_dsp_info(struct fastrpc_user *fl, char __user *argp)
+ 	if (err)
+ 		return err;
+ 
+-	if (copy_to_user(argp, &cap.capability, sizeof(cap.capability)))
++	if (copy_to_user(argp, &cap, sizeof(cap)))
+ 		return -EFAULT;
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 9efd4b962dce2..1194dcacbd29e 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -13315,6 +13315,10 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi, struct bpf_prog *prog,
+ 	bool need_reset;
+ 	int i;
+ 
++	/* VSI shall be deleted in a moment, block loading new programs */
++	if (prog && test_bit(__I40E_IN_REMOVE, pf->state))
++		return -EINVAL;
++
+ 	/* Don't allow frames that span over multiple buffers */
+ 	if (frame_size > i40e_calculate_vsi_rx_buf_len(vsi)) {
+ 		NL_SET_ERR_MSG_MOD(extack, "MTU too large to enable XDP");
+@@ -13323,14 +13327,9 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi, struct bpf_prog *prog,
+ 
+ 	/* When turning XDP on->off/off->on we reset and rebuild the rings. */
+ 	need_reset = (i40e_enabled_xdp_vsi(vsi) != !!prog);
+-
+ 	if (need_reset)
+ 		i40e_prep_for_reset(pf);
+ 
+-	/* VSI shall be deleted in a moment, just return EINVAL */
+-	if (test_bit(__I40E_IN_REMOVE, pf->state))
+-		return -EINVAL;
+-
+ 	old_prog = xchg(&vsi->xdp_prog, prog);
+ 
+ 	if (need_reset) {
+diff --git a/drivers/net/ethernet/lantiq_etop.c b/drivers/net/ethernet/lantiq_etop.c
+index f5961bdcc4809..61baf1da76eea 100644
+--- a/drivers/net/ethernet/lantiq_etop.c
++++ b/drivers/net/ethernet/lantiq_etop.c
+@@ -217,9 +217,9 @@ ltq_etop_free_channel(struct net_device *dev, struct ltq_etop_chan *ch)
+ 	if (ch->dma.irq)
+ 		free_irq(ch->dma.irq, priv);
+ 	if (IS_RX(ch->idx)) {
+-		int desc;
++		struct ltq_dma_channel *dma = &ch->dma;
+ 
+-		for (desc = 0; desc < LTQ_DESC_NUM; desc++)
++		for (dma->desc = 0; dma->desc < LTQ_DESC_NUM; dma->desc++)
+ 			dev_kfree_skb_any(ch->skb[ch->dma.desc]);
+ 	}
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+index be70269e91684..c288589446935 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+@@ -1084,6 +1084,8 @@ struct nix_vtag_config_rsp {
+ 	 */
+ };
+ 
++#define NIX_FLOW_KEY_TYPE_L3_L4_MASK (~(0xf << 28))
++
+ struct nix_rss_flowkey_cfg {
+ 	struct mbox_msghdr hdr;
+ 	int	mcam_index;  /* MCAM entry index to modify */
+@@ -1109,6 +1111,10 @@ struct nix_rss_flowkey_cfg {
+ #define NIX_FLOW_KEY_TYPE_IPV4_PROTO	BIT(21)
+ #define NIX_FLOW_KEY_TYPE_AH		BIT(22)
+ #define NIX_FLOW_KEY_TYPE_ESP		BIT(23)
++#define NIX_FLOW_KEY_TYPE_L4_DST_ONLY BIT(28)
++#define NIX_FLOW_KEY_TYPE_L4_SRC_ONLY BIT(29)
++#define NIX_FLOW_KEY_TYPE_L3_DST_ONLY BIT(30)
++#define NIX_FLOW_KEY_TYPE_L3_SRC_ONLY BIT(31)
+ 	u32	flowkey_cfg; /* Flowkey types selected */
+ 	u8	group;       /* RSS context or group */
+ };
+@@ -1627,7 +1633,9 @@ struct cpt_lf_alloc_req_msg {
+ 	u16 nix_pf_func;
+ 	u16 sso_pf_func;
+ 	u16 eng_grpmsk;
+-	int blkaddr;
++	u8 blkaddr;
++	u8 ctx_ilen_valid : 1;
++	u8 ctx_ilen : 7;
+ };
+ 
+ #define CPT_INLINE_INBOUND      0
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/npc.h b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
+index aaff91bc7415a..32a9425a2b1ea 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/npc.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
+@@ -63,8 +63,13 @@ enum npc_kpu_lb_ltype {
+ 	NPC_LT_LB_CUSTOM1 = 0xF,
+ };
+ 
++/* Don't modify ltypes up to IP6_EXT, otherwise length and checksum of IP
++ * headers may not be checked correctly. IPv4 ltypes and IPv6 ltypes must
++ * differ only at bit 0 so mask 0xE can be used to detect extended headers.
++ */
+ enum npc_kpu_lc_ltype {
+-	NPC_LT_LC_IP = 1,
++	NPC_LT_LC_PTP = 1,
++	NPC_LT_LC_IP,
+ 	NPC_LT_LC_IP_OPT,
+ 	NPC_LT_LC_IP6,
+ 	NPC_LT_LC_IP6_EXT,
+@@ -72,7 +77,6 @@ enum npc_kpu_lc_ltype {
+ 	NPC_LT_LC_RARP,
+ 	NPC_LT_LC_MPLS,
+ 	NPC_LT_LC_NSH,
+-	NPC_LT_LC_PTP,
+ 	NPC_LT_LC_FCOE,
+ 	NPC_LT_LC_NGIO,
+ 	NPC_LT_LC_CUSTOM0 = 0xE,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index a7034b47ed6c9..c7829265eade9 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -1638,7 +1638,7 @@ static int rvu_check_rsrc_availability(struct rvu *rvu,
+ 		if (req->ssow > block->lf.max) {
+ 			dev_err(&rvu->pdev->dev,
+ 				"Func 0x%x: Invalid SSOW req, %d > max %d\n",
+-				 pcifunc, req->sso, block->lf.max);
++				 pcifunc, req->ssow, block->lf.max);
+ 			return -EINVAL;
+ 		}
+ 		mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->addr);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+index 6fb02b93c1718..b226a4d376aab 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+@@ -17,7 +17,7 @@
+ #define	PCI_DEVID_OTX2_CPT10K_PF 0xA0F2
+ 
+ /* Length of initial context fetch in 128 byte words */
+-#define CPT_CTX_ILEN    2ULL
++#define CPT_CTX_ILEN    1ULL
+ 
+ #define cpt_get_eng_sts(e_min, e_max, rsp, etype)                   \
+ ({                                                                  \
+@@ -429,8 +429,12 @@ int rvu_mbox_handler_cpt_lf_alloc(struct rvu *rvu,
+ 
+ 		/* Set CPT LF group and priority */
+ 		val = (u64)req->eng_grpmsk << 48 | 1;
+-		if (!is_rvu_otx2(rvu))
+-			val |= (CPT_CTX_ILEN << 17);
++		if (!is_rvu_otx2(rvu)) {
++			if (req->ctx_ilen_valid)
++				val |= (req->ctx_ilen << 17);
++			else
++				val |= (CPT_CTX_ILEN << 17);
++		}
+ 
+ 		rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf), val);
+ 
+@@ -692,7 +696,8 @@ int rvu_mbox_handler_cpt_rd_wr_register(struct rvu *rvu,
+ 					struct cpt_rd_wr_reg_msg *req,
+ 					struct cpt_rd_wr_reg_msg *rsp)
+ {
+-	int blkaddr;
++	u64 offset = req->reg_offset;
++	int blkaddr, lf;
+ 
+ 	blkaddr = validate_and_get_cpt_blkaddr(req->blkaddr);
+ 	if (blkaddr < 0)
+@@ -703,17 +708,25 @@ int rvu_mbox_handler_cpt_rd_wr_register(struct rvu *rvu,
+ 	    !is_cpt_vf(rvu, req->hdr.pcifunc))
+ 		return CPT_AF_ERR_ACCESS_DENIED;
+ 
+-	rsp->reg_offset = req->reg_offset;
+-	rsp->ret_val = req->ret_val;
+-	rsp->is_write = req->is_write;
+-
+ 	if (!is_valid_offset(rvu, req))
+ 		return CPT_AF_ERR_ACCESS_DENIED;
+ 
++	/* Translate local LF used by VFs to global CPT LF */
++	lf = rvu_get_lf(rvu, &rvu->hw->block[blkaddr], req->hdr.pcifunc,
++			(offset & 0xFFF) >> 3);
++
++	/* Translate local LF's offset to global CPT LF's offset */
++	offset &= 0xFF000;
++	offset += lf << 3;
++
++	rsp->reg_offset = offset;
++	rsp->ret_val = req->ret_val;
++	rsp->is_write = req->is_write;
++
+ 	if (req->is_write)
+-		rvu_write64(rvu, blkaddr, req->reg_offset, req->val);
++		rvu_write64(rvu, blkaddr, offset, req->val);
+ 	else
+-		rsp->val = rvu_read64(rvu, blkaddr, req->reg_offset);
++		rsp->val = rvu_read64(rvu, blkaddr, offset);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+index 67080d5053e07..ef526408b0bd2 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+@@ -3354,6 +3354,11 @@ static int get_flowkey_alg_idx(struct nix_hw *nix_hw, u32 flow_cfg)
+ 	return -ERANGE;
+ }
+ 
++/* Mask to match ipv6(NPC_LT_LC_IP6) and ipv6 ext(NPC_LT_LC_IP6_EXT) */
++#define NPC_LT_LC_IP6_MATCH_MSK ((~(NPC_LT_LC_IP6 ^ NPC_LT_LC_IP6_EXT)) & 0xf)
++/* Mask to match both ipv4(NPC_LT_LC_IP) and ipv4 ext(NPC_LT_LC_IP_OPT) */
++#define NPC_LT_LC_IP_MATCH_MSK  ((~(NPC_LT_LC_IP ^ NPC_LT_LC_IP_OPT)) & 0xf)
++
+ static int set_flowkey_fields(struct nix_rx_flowkey_alg *alg, u32 flow_cfg)
+ {
+ 	int idx, nr_field, key_off, field_marker, keyoff_marker;
+@@ -3361,6 +3366,7 @@ static int set_flowkey_fields(struct nix_rx_flowkey_alg *alg, u32 flow_cfg)
+ 	struct nix_rx_flowkey_alg *field;
+ 	struct nix_rx_flowkey_alg tmp;
+ 	u32 key_type, valid_key;
++	u32 l3_l4_src_dst;
+ 	int l4_key_offset = 0;
+ 
+ 	if (!alg)
+@@ -3388,6 +3394,15 @@ static int set_flowkey_fields(struct nix_rx_flowkey_alg *alg, u32 flow_cfg)
+ 	 * group_member - Enabled when protocol is part of a group.
+ 	 */
+ 
++	/* Last 4 bits (31:28) are reserved to specify SRC, DST
++	 * selection for L3, L4 i.e IPV[4,6]_SRC, IPV[4,6]_DST,
++	 * [TCP,UDP,SCTP]_SRC, [TCP,UDP,SCTP]_DST
++	 * 31 => L3_SRC, 30 => L3_DST, 29 => L4_SRC, 28 => L4_DST
++	 */
++	l3_l4_src_dst = flow_cfg;
++	/* Reset these 4 bits, so that these won't be part of key */
++	flow_cfg &= NIX_FLOW_KEY_TYPE_L3_L4_MASK;
++
+ 	keyoff_marker = 0; max_key_off = 0; group_member = 0;
+ 	nr_field = 0; key_off = 0; field_marker = 1;
+ 	field = &tmp; max_bit_pos = fls(flow_cfg);
+@@ -3413,7 +3428,7 @@ static int set_flowkey_fields(struct nix_rx_flowkey_alg *alg, u32 flow_cfg)
+ 			field->hdr_offset = 9; /* offset */
+ 			field->bytesm1 = 0; /* 1 byte */
+ 			field->ltype_match = NPC_LT_LC_IP;
+-			field->ltype_mask = 0xF;
++			field->ltype_mask = NPC_LT_LC_IP_MATCH_MSK;
+ 			break;
+ 		case NIX_FLOW_KEY_TYPE_IPV4:
+ 		case NIX_FLOW_KEY_TYPE_INNR_IPV4:
+@@ -3425,7 +3440,22 @@ static int set_flowkey_fields(struct nix_rx_flowkey_alg *alg, u32 flow_cfg)
+ 			}
+ 			field->hdr_offset = 12; /* SIP offset */
+ 			field->bytesm1 = 7; /* SIP + DIP, 8 bytes */
+-			field->ltype_mask = 0xF; /* Match only IPv4 */
++
++			/* Only SIP */
++			if (l3_l4_src_dst & NIX_FLOW_KEY_TYPE_L3_SRC_ONLY)
++				field->bytesm1 = 3; /* SIP, 4 bytes */
++
++			if (l3_l4_src_dst & NIX_FLOW_KEY_TYPE_L3_DST_ONLY) {
++				/* Both SIP + DIP */
++				if (field->bytesm1 == 3) {
++					field->bytesm1 = 7; /* SIP + DIP, 8B */
++				} else {
++					/* Only DIP */
++					field->hdr_offset = 16; /* DIP off */
++					field->bytesm1 = 3; /* DIP, 4 bytes */
++				}
++			}
++			field->ltype_mask = NPC_LT_LC_IP_MATCH_MSK;
+ 			keyoff_marker = false;
+ 			break;
+ 		case NIX_FLOW_KEY_TYPE_IPV6:
+@@ -3438,7 +3468,23 @@ static int set_flowkey_fields(struct nix_rx_flowkey_alg *alg, u32 flow_cfg)
+ 			}
+ 			field->hdr_offset = 8; /* SIP offset */
+ 			field->bytesm1 = 31; /* SIP + DIP, 32 bytes */
+-			field->ltype_mask = 0xF; /* Match only IPv6 */
++
++			/* Only SIP */
++			if (l3_l4_src_dst & NIX_FLOW_KEY_TYPE_L3_SRC_ONLY)
++				field->bytesm1 = 15; /* SIP, 16 bytes */
++
++			if (l3_l4_src_dst & NIX_FLOW_KEY_TYPE_L3_DST_ONLY) {
++				/* Both SIP + DIP */
++				if (field->bytesm1 == 15) {
++					/* SIP + DIP, 32 bytes */
++					field->bytesm1 = 31;
++				} else {
++					/* Only DIP */
++					field->hdr_offset = 24; /* DIP off */
++					field->bytesm1 = 15; /* DIP,16 bytes */
++				}
++			}
++			field->ltype_mask = NPC_LT_LC_IP6_MATCH_MSK;
+ 			break;
+ 		case NIX_FLOW_KEY_TYPE_TCP:
+ 		case NIX_FLOW_KEY_TYPE_UDP:
+@@ -3453,6 +3499,21 @@ static int set_flowkey_fields(struct nix_rx_flowkey_alg *alg, u32 flow_cfg)
+ 				field->lid = NPC_LID_LH;
+ 			field->bytesm1 = 3; /* Sport + Dport, 4 bytes */
+ 
++			if (l3_l4_src_dst & NIX_FLOW_KEY_TYPE_L4_SRC_ONLY)
++				field->bytesm1 = 1; /* SRC, 2 bytes */
++
++			if (l3_l4_src_dst & NIX_FLOW_KEY_TYPE_L4_DST_ONLY) {
++				/* Both SRC + DST */
++				if (field->bytesm1 == 1) {
++					/* SRC + DST, 4 bytes */
++					field->bytesm1 = 3;
++				} else {
++					/* Only DIP */
++					field->hdr_offset = 2; /* DST off */
++					field->bytesm1 = 1; /* DST, 2 bytes */
++				}
++			}
++
+ 			/* Enum values for NPC_LID_LD and NPC_LID_LG are same,
+ 			 * so no need to change the ltype_match, just change
+ 			 * the lid for inner protocols
+diff --git a/drivers/net/ethernet/mediatek/mtk_star_emac.c b/drivers/net/ethernet/mediatek/mtk_star_emac.c
+index 7050351250b7a..ad27749c0931c 100644
+--- a/drivers/net/ethernet/mediatek/mtk_star_emac.c
++++ b/drivers/net/ethernet/mediatek/mtk_star_emac.c
+@@ -1531,6 +1531,7 @@ static int mtk_star_probe(struct platform_device *pdev)
+ {
+ 	struct device_node *of_node;
+ 	struct mtk_star_priv *priv;
++	struct phy_device *phydev;
+ 	struct net_device *ndev;
+ 	struct device *dev;
+ 	void __iomem *base;
+@@ -1656,6 +1657,12 @@ static int mtk_star_probe(struct platform_device *pdev)
+ 	netif_napi_add(ndev, &priv->rx_napi, mtk_star_rx_poll);
+ 	netif_napi_add_tx(ndev, &priv->tx_napi, mtk_star_tx_poll);
+ 
++	phydev = of_phy_find_device(priv->phy_node);
++	if (phydev) {
++		phydev->mac_managed_pm = true;
++		put_device(&phydev->mdio.dev);
++	}
++
+ 	return devm_register_netdev(dev, ndev);
+ }
+ 
+diff --git a/drivers/net/ethernet/micrel/ks8851_common.c b/drivers/net/ethernet/micrel/ks8851_common.c
+index 6453c92f0fa7c..7fa1820db9cce 100644
+--- a/drivers/net/ethernet/micrel/ks8851_common.c
++++ b/drivers/net/ethernet/micrel/ks8851_common.c
+@@ -352,11 +352,11 @@ static irqreturn_t ks8851_irq(int irq, void *_ks)
+ 		netif_dbg(ks, intr, ks->netdev,
+ 			  "%s: txspace %d\n", __func__, tx_space);
+ 
+-		spin_lock(&ks->statelock);
++		spin_lock_bh(&ks->statelock);
+ 		ks->tx_space = tx_space;
+ 		if (netif_queue_stopped(ks->netdev))
+ 			netif_wake_queue(ks->netdev);
+-		spin_unlock(&ks->statelock);
++		spin_unlock_bh(&ks->statelock);
+ 	}
+ 
+ 	if (status & IRQ_SPIBEI) {
+@@ -482,6 +482,7 @@ static int ks8851_net_open(struct net_device *dev)
+ 	ks8851_wrreg16(ks, KS_IER, ks->rc_ier);
+ 
+ 	ks->queued_len = 0;
++	ks->tx_space = ks8851_rdreg16(ks, KS_TXMIR);
+ 	netif_start_queue(ks->netdev);
+ 
+ 	netif_dbg(ks, ifup, ks->netdev, "network device up\n");
+@@ -635,14 +636,14 @@ static void ks8851_set_rx_mode(struct net_device *dev)
+ 
+ 	/* schedule work to do the actual set of the data if needed */
+ 
+-	spin_lock(&ks->statelock);
++	spin_lock_bh(&ks->statelock);
+ 
+ 	if (memcmp(&rxctrl, &ks->rxctrl, sizeof(rxctrl)) != 0) {
+ 		memcpy(&ks->rxctrl, &rxctrl, sizeof(ks->rxctrl));
+ 		schedule_work(&ks->rxctrl_work);
+ 	}
+ 
+-	spin_unlock(&ks->statelock);
++	spin_unlock_bh(&ks->statelock);
+ }
+ 
+ static int ks8851_set_mac_address(struct net_device *dev, void *addr)
+@@ -1101,7 +1102,6 @@ int ks8851_probe_common(struct net_device *netdev, struct device *dev,
+ 	int ret;
+ 
+ 	ks->netdev = netdev;
+-	ks->tx_space = 6144;
+ 
+ 	ks->gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
+ 	ret = PTR_ERR_OR_ZERO(ks->gpio);
+diff --git a/drivers/net/ethernet/micrel/ks8851_spi.c b/drivers/net/ethernet/micrel/ks8851_spi.c
+index 4dcbff789b19d..e33a5e7beb39e 100644
+--- a/drivers/net/ethernet/micrel/ks8851_spi.c
++++ b/drivers/net/ethernet/micrel/ks8851_spi.c
+@@ -340,10 +340,10 @@ static void ks8851_tx_work(struct work_struct *work)
+ 
+ 	tx_space = ks8851_rdreg16_spi(ks, KS_TXMIR);
+ 
+-	spin_lock(&ks->statelock);
++	spin_lock_bh(&ks->statelock);
+ 	ks->queued_len -= dequeued_len;
+ 	ks->tx_space = tx_space;
+-	spin_unlock(&ks->statelock);
++	spin_unlock_bh(&ks->statelock);
+ 
+ 	ks8851_unlock_spi(ks, &flags);
+ }
+diff --git a/drivers/net/phy/microchip_t1.c b/drivers/net/phy/microchip_t1.c
+index 8569a545e0a3f..9517243e3051e 100644
+--- a/drivers/net/phy/microchip_t1.c
++++ b/drivers/net/phy/microchip_t1.c
+@@ -711,7 +711,7 @@ static int lan87xx_cable_test_report(struct phy_device *phydev)
+ 	ethnl_cable_test_result(phydev, ETHTOOL_A_CABLE_PAIR_A,
+ 				lan87xx_cable_test_report_trans(detect));
+ 
+-	return 0;
++	return phy_init_hw(phydev);
+ }
+ 
+ static int lan87xx_cable_test_get_status(struct phy_device *phydev,
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index 1d71f5276241c..5a6fa566e722f 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -70,6 +70,7 @@
+ #define MPHDRLEN_SSN	4	/* ditto with short sequence numbers */
+ 
+ #define PPP_PROTO_LEN	2
++#define PPP_LCP_HDRLEN	4
+ 
+ /*
+  * An instance of /dev/ppp can be associated with either a ppp
+@@ -491,6 +492,15 @@ static ssize_t ppp_read(struct file *file, char __user *buf,
+ 	return ret;
+ }
+ 
++static bool ppp_check_packet(struct sk_buff *skb, size_t count)
++{
++	/* LCP packets must include LCP header which 4 bytes long:
++	 * 1-byte code, 1-byte identifier, and 2-byte length.
++	 */
++	return get_unaligned_be16(skb->data) != PPP_LCP ||
++		count >= PPP_PROTO_LEN + PPP_LCP_HDRLEN;
++}
++
+ static ssize_t ppp_write(struct file *file, const char __user *buf,
+ 			 size_t count, loff_t *ppos)
+ {
+@@ -513,6 +523,11 @@ static ssize_t ppp_write(struct file *file, const char __user *buf,
+ 		kfree_skb(skb);
+ 		goto out;
+ 	}
++	ret = -EINVAL;
++	if (unlikely(!ppp_check_packet(skb, count))) {
++		kfree_skb(skb);
++		goto out;
++	}
+ 
+ 	switch (pf->kind) {
+ 	case INTERFACE:
+diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
+index 0ba714ca5185c..4b8528206cc8a 100644
+--- a/drivers/net/wireguard/allowedips.c
++++ b/drivers/net/wireguard/allowedips.c
+@@ -15,8 +15,8 @@ static void swap_endian(u8 *dst, const u8 *src, u8 bits)
+ 	if (bits == 32) {
+ 		*(u32 *)dst = be32_to_cpu(*(const __be32 *)src);
+ 	} else if (bits == 128) {
+-		((u64 *)dst)[0] = be64_to_cpu(((const __be64 *)src)[0]);
+-		((u64 *)dst)[1] = be64_to_cpu(((const __be64 *)src)[1]);
++		((u64 *)dst)[0] = get_unaligned_be64(src);
++		((u64 *)dst)[1] = get_unaligned_be64(src + 8);
+ 	}
+ }
+ 
+diff --git a/drivers/net/wireguard/queueing.h b/drivers/net/wireguard/queueing.h
+index 1d4f9196bfe17..3ce70db9dd3fe 100644
+--- a/drivers/net/wireguard/queueing.h
++++ b/drivers/net/wireguard/queueing.h
+@@ -124,10 +124,10 @@ static inline int wg_cpumask_choose_online(int *stored_cpu, unsigned int id)
+  */
+ static inline int wg_cpumask_next_online(int *last_cpu)
+ {
+-	int cpu = cpumask_next(*last_cpu, cpu_online_mask);
++	int cpu = cpumask_next(READ_ONCE(*last_cpu), cpu_online_mask);
+ 	if (cpu >= nr_cpu_ids)
+ 		cpu = cpumask_first(cpu_online_mask);
+-	*last_cpu = cpu;
++	WRITE_ONCE(*last_cpu, cpu);
+ 	return cpu;
+ }
+ 
+diff --git a/drivers/net/wireguard/send.c b/drivers/net/wireguard/send.c
+index 0d48e0f4a1ba3..26e09c30d596c 100644
+--- a/drivers/net/wireguard/send.c
++++ b/drivers/net/wireguard/send.c
+@@ -222,7 +222,7 @@ void wg_packet_send_keepalive(struct wg_peer *peer)
+ {
+ 	struct sk_buff *skb;
+ 
+-	if (skb_queue_empty(&peer->staged_packet_queue)) {
++	if (skb_queue_empty_lockless(&peer->staged_packet_queue)) {
+ 		skb = alloc_skb(DATA_PACKET_HEAD_ROOM + MESSAGE_MINIMUM_LENGTH,
+ 				GFP_ATOMIC);
+ 		if (unlikely(!skb))
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 34ee9d36ee7ba..f060583941027 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -367,10 +367,9 @@ static int nvmem_sysfs_setup_compat(struct nvmem_device *nvmem,
+ 	if (!config->base_dev)
+ 		return -EINVAL;
+ 
+-	if (config->type == NVMEM_TYPE_FRAM)
+-		bin_attr_nvmem_eeprom_compat.attr.name = "fram";
+-
+ 	nvmem->eeprom = bin_attr_nvmem_eeprom_compat;
++	if (config->type == NVMEM_TYPE_FRAM)
++		nvmem->eeprom.attr.name = "fram";
+ 	nvmem->eeprom.attr.mode = nvmem_bin_attr_get_umode(nvmem);
+ 	nvmem->eeprom.size = nvmem->size;
+ #ifdef CONFIG_DEBUG_LOCK_ALLOC
+diff --git a/drivers/nvmem/meson-efuse.c b/drivers/nvmem/meson-efuse.c
+index ba2714bef8d0e..cf1b249e67ca2 100644
+--- a/drivers/nvmem/meson-efuse.c
++++ b/drivers/nvmem/meson-efuse.c
+@@ -18,18 +18,24 @@ static int meson_efuse_read(void *context, unsigned int offset,
+ 			    void *val, size_t bytes)
+ {
+ 	struct meson_sm_firmware *fw = context;
++	int ret;
+ 
+-	return meson_sm_call_read(fw, (u8 *)val, bytes, SM_EFUSE_READ, offset,
+-				  bytes, 0, 0, 0);
++	ret = meson_sm_call_read(fw, (u8 *)val, bytes, SM_EFUSE_READ, offset,
++				 bytes, 0, 0, 0);
++
++	return ret < 0 ? ret : 0;
+ }
+ 
+ static int meson_efuse_write(void *context, unsigned int offset,
+ 			     void *val, size_t bytes)
+ {
+ 	struct meson_sm_firmware *fw = context;
++	int ret;
++
++	ret = meson_sm_call_write(fw, (u8 *)val, bytes, SM_EFUSE_WRITE, offset,
++				  bytes, 0, 0, 0);
+ 
+-	return meson_sm_call_write(fw, (u8 *)val, bytes, SM_EFUSE_WRITE, offset,
+-				   bytes, 0, 0, 0);
++	return ret < 0 ? ret : 0;
+ }
+ 
+ static const struct of_device_id meson_efuse_match[] = {
+diff --git a/drivers/nvmem/rmem.c b/drivers/nvmem/rmem.c
+index 752d0bf4445ee..7f907c5a445e7 100644
+--- a/drivers/nvmem/rmem.c
++++ b/drivers/nvmem/rmem.c
+@@ -46,7 +46,10 @@ static int rmem_read(void *context, unsigned int offset,
+ 
+ 	memunmap(addr);
+ 
+-	return count;
++	if (count < 0)
++		return count;
++
++	return count == bytes ? 0 : -EIO;
+ }
+ 
+ static int rmem_probe(struct platform_device *pdev)
+diff --git a/drivers/platform/x86/toshiba_acpi.c b/drivers/platform/x86/toshiba_acpi.c
+index 1a8cb8eb22829..033e28aaeea63 100644
+--- a/drivers/platform/x86/toshiba_acpi.c
++++ b/drivers/platform/x86/toshiba_acpi.c
+@@ -3305,6 +3305,7 @@ static const struct dmi_system_id toshiba_dmi_quirks[] __initconst = {
+ 		},
+ 	 .driver_data = (void *)(QUIRK_TURN_ON_PANEL_ON_RESUME | QUIRK_HCI_HOTKEY_QUICKSTART),
+ 	},
++	{ }
+ };
+ 
+ static int toshiba_acpi_add(struct acpi_device *acpi_dev)
+diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c
+index d396ac8b9cedd..15613b183fbd0 100644
+--- a/drivers/usb/core/config.c
++++ b/drivers/usb/core/config.c
+@@ -291,6 +291,20 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
+ 	if (ifp->desc.bNumEndpoints >= num_ep)
+ 		goto skip_to_next_endpoint_or_interface_descriptor;
+ 
++	/* Save a copy of the descriptor and use it instead of the original */
++	endpoint = &ifp->endpoint[ifp->desc.bNumEndpoints];
++	memcpy(&endpoint->desc, d, n);
++	d = &endpoint->desc;
++
++	/* Clear the reserved bits in bEndpointAddress */
++	i = d->bEndpointAddress &
++			(USB_ENDPOINT_DIR_MASK | USB_ENDPOINT_NUMBER_MASK);
++	if (i != d->bEndpointAddress) {
++		dev_notice(ddev, "config %d interface %d altsetting %d has an endpoint descriptor with address 0x%X, changing to 0x%X\n",
++		    cfgno, inum, asnum, d->bEndpointAddress, i);
++		endpoint->desc.bEndpointAddress = i;
++	}
++
+ 	/* Check for duplicate endpoint addresses */
+ 	if (config_endpoint_is_duplicate(config, inum, asnum, d)) {
+ 		dev_notice(ddev, "config %d interface %d altsetting %d has a duplicate endpoint with address 0x%X, skipping\n",
+@@ -308,10 +322,8 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
+ 		}
+ 	}
+ 
+-	endpoint = &ifp->endpoint[ifp->desc.bNumEndpoints];
++	/* Accept this endpoint */
+ 	++ifp->desc.bNumEndpoints;
+-
+-	memcpy(&endpoint->desc, d, n);
+ 	INIT_LIST_HEAD(&endpoint->urb_list);
+ 
+ 	/*
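The config.c change copies the endpoint descriptor before validating it, so the sanitized copy rather than the raw device-supplied buffer is used from then on. The reserved-bit masking itself is simple; a sketch using the standard mask values from include/uapi/linux/usb/ch9.h:

#include <stdio.h>

#define USB_ENDPOINT_NUMBER_MASK	0x0f	/* endpoint number */
#define USB_ENDPOINT_DIR_MASK		0x80	/* IN/OUT direction bit */

int main(void)
{
	unsigned char addr = 0xd2;	/* reserved bits set by a buggy device */
	unsigned char fixed = addr &
		(USB_ENDPOINT_DIR_MASK | USB_ENDPOINT_NUMBER_MASK);

	if (fixed != addr)
		printf("changing 0x%X to 0x%X\n", addr, fixed);	/* 0xD2 -> 0x82 */
	return 0;
}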
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index b4783574b8e66..13171454f9591 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -506,6 +506,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	{ USB_DEVICE(0x1b1c, 0x1b38), .driver_info = USB_QUIRK_DELAY_INIT |
+ 	  USB_QUIRK_DELAY_CTRL_MSG },
+ 
++	/* START BP-850k Printer */
++	{ USB_DEVICE(0x1bc3, 0x0003), .driver_info = USB_QUIRK_NO_SET_INTF },
++
+ 	/* MIDI keyboard WORLDE MINI */
+ 	{ USB_DEVICE(0x1c75, 0x0204), .driver_info =
+ 			USB_QUIRK_CONFIG_INTF_STRINGS },
+diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
+index b94aec6227c51..5c1c7f36e5442 100644
+--- a/drivers/usb/gadget/configfs.c
++++ b/drivers/usb/gadget/configfs.c
+@@ -105,9 +105,12 @@ static int usb_string_copy(const char *s, char **s_copy)
+ 	int ret;
+ 	char *str;
+ 	char *copy = *s_copy;
++
+ 	ret = strlen(s);
+ 	if (ret > USB_MAX_STRING_LEN)
+ 		return -EOVERFLOW;
++	if (ret < 1)
++		return -EINVAL;
+ 
+ 	if (copy) {
+ 		str = copy;
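usb_string_copy() now rejects empty strings as well as oversized ones, since a zero-length gadget string is never valid. A minimal sketch of the validation, using the standard USB_MAX_STRING_LEN value of 126:

#include <errno.h>
#include <stdio.h>
#include <string.h>

#define USB_MAX_STRING_LEN	126

/* Validate a user-supplied string the way the patched helper does:
 * non-empty and within the USB descriptor limit. */
static int check_usb_string(const char *s)
{
	size_t len = strlen(s);

	if (len > USB_MAX_STRING_LEN)
		return -EOVERFLOW;
	if (len < 1)
		return -EINVAL;
	return 0;
}

int main(void)
{
	printf("%d %d\n", check_usb_string(""), check_usb_string("ACME"));
	/* prints "-22 0" on Linux (EINVAL == 22) */
	return 0;
}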
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 27e01671d3865..505f45429c125 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -1247,10 +1247,20 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ 			xhci_dbg(xhci, "Start the secondary HCD\n");
+ 			retval = xhci_run(xhci->shared_hcd);
+ 		}
+-
++		if (retval)
++			return retval;
++		/*
++		 * Resume roothubs unconditionally as PORTSC change bits are not
++		 * immediately visible after xHC reset
++		 */
+ 		hcd->state = HC_STATE_SUSPENDED;
+-		if (xhci->shared_hcd)
++
++		if (xhci->shared_hcd) {
+ 			xhci->shared_hcd->state = HC_STATE_SUSPENDED;
++			usb_hcd_resume_root_hub(xhci->shared_hcd);
++		}
++		usb_hcd_resume_root_hub(hcd);
++
+ 		goto done;
+ 	}
+ 
+@@ -1274,7 +1284,6 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ 
+ 	xhci_dbc_resume(xhci);
+ 
+- done:
+ 	if (retval == 0) {
+ 		/*
+ 		 * Resume roothubs only if there are pending events.
+@@ -1293,6 +1302,7 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ 			usb_hcd_resume_root_hub(hcd);
+ 		}
+ 	}
++done:
+ 	/*
+ 	 * If system is subject to the Quirk, Compliance Mode Timer needs to
+ 	 * be re-initialized Always after a system resume. Ports are subject
+diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c
+index 6b12bb4648b83..26f287180f8ab 100644
+--- a/drivers/usb/serial/mos7840.c
++++ b/drivers/usb/serial/mos7840.c
+@@ -1736,6 +1736,49 @@ static void mos7840_port_remove(struct usb_serial_port *port)
+ 	kfree(mos7840_port);
+ }
+ 
++static int mos7840_suspend(struct usb_serial *serial, pm_message_t message)
++{
++	struct moschip_port *mos7840_port;
++	struct usb_serial_port *port;
++	int i;
++
++	for (i = 0; i < serial->num_ports; ++i) {
++		port = serial->port[i];
++		if (!tty_port_initialized(&port->port))
++			continue;
++
++		mos7840_port = usb_get_serial_port_data(port);
++
++		usb_kill_urb(mos7840_port->read_urb);
++		mos7840_port->read_urb_busy = false;
++	}
++
++	return 0;
++}
++
++static int mos7840_resume(struct usb_serial *serial)
++{
++	struct moschip_port *mos7840_port;
++	struct usb_serial_port *port;
++	int res;
++	int i;
++
++	for (i = 0; i < serial->num_ports; ++i) {
++		port = serial->port[i];
++		if (!tty_port_initialized(&port->port))
++			continue;
++
++		mos7840_port = usb_get_serial_port_data(port);
++
++		mos7840_port->read_urb_busy = true;
++		res = usb_submit_urb(mos7840_port->read_urb, GFP_NOIO);
++		if (res)
++			mos7840_port->read_urb_busy = false;
++	}
++
++	return 0;
++}
++
+ static struct usb_serial_driver moschip7840_4port_device = {
+ 	.driver = {
+ 		   .owner = THIS_MODULE,
+@@ -1763,6 +1806,8 @@ static struct usb_serial_driver moschip7840_4port_device = {
+ 	.port_probe = mos7840_port_probe,
+ 	.port_remove = mos7840_port_remove,
+ 	.read_bulk_callback = mos7840_bulk_in_callback,
++	.suspend = mos7840_suspend,
++	.resume = mos7840_resume,
+ };
+ 
+ static struct usb_serial_driver * const serial_drivers[] = {
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index b5ee8518fcc78..cb0eb7fd25426 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1425,6 +1425,10 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1901, 0xff),	/* Telit LN940 (MBIM) */
+ 	  .driver_info = NCTRL(0) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x3000, 0xff),	/* Telit FN912 */
++	  .driver_info = RSVD(0) | NCTRL(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x3001, 0xff),	/* Telit FN912 */
++	  .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x7010, 0xff),	/* Telit LE910-S1 (RNDIS) */
+ 	  .driver_info = NCTRL(2) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x7011, 0xff),	/* Telit LE910-S1 (ECM) */
+@@ -1433,6 +1437,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(2) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x701b, 0xff),	/* Telit LE910R1 (ECM) */
+ 	  .driver_info = NCTRL(2) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x9000, 0xff),	/* Telit generic core-dump device */
++	  .driver_info = NCTRL(0) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, 0x9010),				/* Telit SBL FN980 flashing device */
+ 	  .driver_info = NCTRL(0) | ZLP },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, 0x9200),				/* Telit LE910S1 flashing device */
+@@ -2224,6 +2230,10 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_7106_2COM, 0x02, 0x02, 0x01) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_4COM2, 0xff, 0x02, 0x01) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_4COM2, 0xff, 0x00, 0x00) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x7126, 0xff, 0x00, 0x00),
++	  .driver_info = NCTRL(2) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x7127, 0xff, 0x00, 0x00),
++	  .driver_info = NCTRL(2) | NCTRL(3) | NCTRL(4) },
+ 	{ USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MEN200) },
+ 	{ USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MPL200),
+ 	  .driver_info = RSVD(1) | RSVD(4) },
+@@ -2284,6 +2294,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe0f0, 0xff),			/* Foxconn T99W373 MBIM */
+ 	  .driver_info = RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe145, 0xff),			/* Foxconn T99W651 RNDIS */
++	  .driver_info = RSVD(5) | RSVD(6) },
+ 	{ USB_DEVICE(0x1508, 0x1001),						/* Fibocom NL668 (IOT version) */
+ 	  .driver_info = RSVD(4) | RSVD(5) | RSVD(6) },
+ 	{ USB_DEVICE(0x1782, 0x4d10) },						/* Fibocom L610 (AT mode) */
+@@ -2321,6 +2333,32 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x0115, 0xff),			/* Rolling RW135-GL (laptop MBIM) */
+ 	  .driver_info = RSVD(5) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x0802, 0xff),			/* Rolling RW350-GL (laptop MBIM) */
++	  .driver_info = RSVD(5) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0100, 0xff, 0xff, 0x30) },	/* NetPrisma LCUK54-WWD for Global */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0100, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0100, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0101, 0xff, 0xff, 0x30) },	/* NetPrisma LCUK54-WRD for Global SKU */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0101, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0101, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0106, 0xff, 0xff, 0x30) },	/* NetPrisma LCUK54-WRD for China SKU */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0106, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0106, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0111, 0xff, 0xff, 0x30) },	/* NetPrisma LCUK54-WWD for SA */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0111, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0111, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0112, 0xff, 0xff, 0x30) },	/* NetPrisma LCUK54-WWD for EU */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0112, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0112, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0113, 0xff, 0xff, 0x30) },	/* NetPrisma LCUK54-WWD for NA */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0113, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0113, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0115, 0xff, 0xff, 0x30) },	/* NetPrisma LCUK54-WWD for China EDU */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0115, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0115, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0xff, 0x30) },	/* NetPrisma LCUK54-WWD for Global EDU */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0xff, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x40) },
+diff --git a/fs/cachefiles/daemon.c b/fs/cachefiles/daemon.c
+index b9945e4f697be..89b11336a8369 100644
+--- a/fs/cachefiles/daemon.c
++++ b/fs/cachefiles/daemon.c
+@@ -357,14 +357,24 @@ static __poll_t cachefiles_daemon_poll(struct file *file,
+ 					   struct poll_table_struct *poll)
+ {
+ 	struct cachefiles_cache *cache = file->private_data;
++	XA_STATE(xas, &cache->reqs, 0);
++	struct cachefiles_req *req;
+ 	__poll_t mask;
+ 
+ 	poll_wait(file, &cache->daemon_pollwq, poll);
+ 	mask = 0;
+ 
+ 	if (cachefiles_in_ondemand_mode(cache)) {
+-		if (!xa_empty(&cache->reqs))
+-			mask |= EPOLLIN;
++		if (!xa_empty(&cache->reqs)) {
++			xas_lock(&xas);
++			xas_for_each_marked(&xas, req, ULONG_MAX, CACHEFILES_REQ_NEW) {
++				if (!cachefiles_ondemand_is_reopening_read(req)) {
++					mask |= EPOLLIN;
++					break;
++				}
++			}
++			xas_unlock(&xas);
++		}
+ 	} else {
+ 		if (test_bit(CACHEFILES_STATE_CHANGED, &cache->flags))
+ 			mask |= EPOLLIN;
+diff --git a/fs/cachefiles/internal.h b/fs/cachefiles/internal.h
+index 3eea52462fc87..111ad6ecd4baf 100644
+--- a/fs/cachefiles/internal.h
++++ b/fs/cachefiles/internal.h
+@@ -48,6 +48,7 @@ enum cachefiles_object_state {
+ 	CACHEFILES_ONDEMAND_OBJSTATE_CLOSE, /* Anonymous fd closed by daemon or initial state */
+ 	CACHEFILES_ONDEMAND_OBJSTATE_OPEN, /* Anonymous fd associated with object is available */
+ 	CACHEFILES_ONDEMAND_OBJSTATE_REOPENING, /* Object that was closed and is being reopened. */
++	CACHEFILES_ONDEMAND_OBJSTATE_DROPPING, /* Object is being dropped. */
+ };
+ 
+ struct cachefiles_ondemand_info {
+@@ -128,6 +129,7 @@ struct cachefiles_cache {
+ 	unsigned long			req_id_next;
+ 	struct xarray			ondemand_ids;	/* xarray for ondemand_id allocation */
+ 	u32				ondemand_id_next;
++	u32				msg_id_next;
+ };
+ 
+ static inline bool cachefiles_in_ondemand_mode(struct cachefiles_cache *cache)
+@@ -335,6 +337,14 @@ cachefiles_ondemand_set_object_##_state(struct cachefiles_object *object) \
+ CACHEFILES_OBJECT_STATE_FUNCS(open, OPEN);
+ CACHEFILES_OBJECT_STATE_FUNCS(close, CLOSE);
+ CACHEFILES_OBJECT_STATE_FUNCS(reopening, REOPENING);
++CACHEFILES_OBJECT_STATE_FUNCS(dropping, DROPPING);
++
++static inline bool cachefiles_ondemand_is_reopening_read(struct cachefiles_req *req)
++{
++	return cachefiles_ondemand_object_is_reopening(req->object) &&
++			req->msg.opcode == CACHEFILES_OP_READ;
++}
++
+ #else
+ static inline ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
+ 					char __user *_buffer, size_t buflen)
+@@ -365,6 +375,11 @@ static inline int cachefiles_ondemand_init_obj_info(struct cachefiles_object *ob
+ static inline void cachefiles_ondemand_deinit_obj_info(struct cachefiles_object *obj)
+ {
+ }
++
++static inline bool cachefiles_ondemand_is_reopening_read(struct cachefiles_req *req)
++{
++	return false;
++}
+ #endif
+ 
+ /*
+diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
+index 4b39f0422e590..51173ab6dbd84 100644
+--- a/fs/cachefiles/ondemand.c
++++ b/fs/cachefiles/ondemand.c
+@@ -494,7 +494,8 @@ static int cachefiles_ondemand_send_req(struct cachefiles_object *object,
+ 		 */
+ 		xas_lock(&xas);
+ 
+-		if (test_bit(CACHEFILES_DEAD, &cache->flags)) {
++		if (test_bit(CACHEFILES_DEAD, &cache->flags) ||
++		    cachefiles_ondemand_object_is_dropping(object)) {
+ 			xas_unlock(&xas);
+ 			ret = -EIO;
+ 			goto out;
+@@ -504,20 +505,32 @@ static int cachefiles_ondemand_send_req(struct cachefiles_object *object,
+ 		smp_mb();
+ 
+ 		if (opcode == CACHEFILES_OP_CLOSE &&
+-			!cachefiles_ondemand_object_is_open(object)) {
++		    !cachefiles_ondemand_object_is_open(object)) {
+ 			WARN_ON_ONCE(object->ondemand->ondemand_id == 0);
+ 			xas_unlock(&xas);
+ 			ret = -EIO;
+ 			goto out;
+ 		}
+ 
+-		xas.xa_index = 0;
++		/*
++		 * Cyclically find a free xas to avoid msg_id reuse that would
++		 * cause the daemon to successfully copen a stale msg_id.
++		 */
++		xas.xa_index = cache->msg_id_next;
+ 		xas_find_marked(&xas, UINT_MAX, XA_FREE_MARK);
++		if (xas.xa_node == XAS_RESTART) {
++			xas.xa_index = 0;
++			xas_find_marked(&xas, cache->msg_id_next - 1, XA_FREE_MARK);
++		}
+ 		if (xas.xa_node == XAS_RESTART)
+ 			xas_set_err(&xas, -EBUSY);
++
+ 		xas_store(&xas, req);
+-		xas_clear_mark(&xas, XA_FREE_MARK);
+-		xas_set_mark(&xas, CACHEFILES_REQ_NEW);
++		if (xas_valid(&xas)) {
++			cache->msg_id_next = xas.xa_index + 1;
++			xas_clear_mark(&xas, XA_FREE_MARK);
++			xas_set_mark(&xas, CACHEFILES_REQ_NEW);
++		}
+ 		xas_unlock(&xas);
+ 	} while (xas_nomem(&xas, GFP_KERNEL));
+ 
+@@ -535,7 +548,8 @@ static int cachefiles_ondemand_send_req(struct cachefiles_object *object,
+ 	 * If error occurs after creating the anonymous fd,
+ 	 * cachefiles_ondemand_fd_release() will set object to close.
+ 	 */
+-	if (opcode == CACHEFILES_OP_OPEN)
++	if (opcode == CACHEFILES_OP_OPEN &&
++	    !cachefiles_ondemand_object_is_dropping(object))
+ 		cachefiles_ondemand_set_object_close(object);
+ 	kfree(req);
+ 	return ret;
+@@ -634,8 +648,34 @@ int cachefiles_ondemand_init_object(struct cachefiles_object *object)
+ 
+ void cachefiles_ondemand_clean_object(struct cachefiles_object *object)
+ {
++	unsigned long index;
++	struct cachefiles_req *req;
++	struct cachefiles_cache *cache;
++
++	if (!object->ondemand)
++		return;
++
+ 	cachefiles_ondemand_send_req(object, CACHEFILES_OP_CLOSE, 0,
+ 			cachefiles_ondemand_init_close_req, NULL);
++
++	if (!object->ondemand->ondemand_id)
++		return;
++
++	/* Cancel all requests for the object that is being dropped. */
++	cache = object->volume->cache;
++	xa_lock(&cache->reqs);
++	cachefiles_ondemand_set_object_dropping(object);
++	xa_for_each(&cache->reqs, index, req) {
++		if (req->object == object) {
++			req->error = -EIO;
++			complete(&req->done);
++			__xa_erase(&cache->reqs, index);
++		}
++	}
++	xa_unlock(&cache->reqs);
++
++	/* Wait for ondemand_object_worker() to finish to avoid UAF. */
++	cancel_work_sync(&object->ondemand->ondemand_work);
+ }
+ 
+ int cachefiles_ondemand_init_obj_info(struct cachefiles_object *object,
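The key ondemand change is the cyclic msg_id search: allocation resumes from msg_id_next and wraps around, so a just-freed id is not handed out again while the daemon may still reference it via copen. A user-space sketch of the same policy over a plain bitmap (the real code uses an XArray free mark; names here are hypothetical):

#include <stdbool.h>
#include <stdio.h>

#define NSLOTS 8

static bool used[NSLOTS];
static unsigned int next_id;

/* Find a free slot cyclically: [next_id, NSLOTS) first, then wrap to
 * [0, next_id). Returns -1 when the table is full. */
static int alloc_id(void)
{
	for (unsigned int n = 0; n < NSLOTS; n++) {
		unsigned int id = (next_id + n) % NSLOTS;

		if (!used[id]) {
			used[id] = true;
			next_id = id + 1;	/* start after it next time */
			return (int)id;
		}
	}
	return -1;
}

int main(void)
{
	int a = alloc_id(), b = alloc_id();

	used[a] = false;			/* free slot 0 */
	printf("%d %d %d\n", a, b, alloc_id());	/* 0 1 2: 0 not reused yet */
	return 0;
}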
+diff --git a/fs/cachefiles/xattr.c b/fs/cachefiles/xattr.c
+index 00b087c14995a..0ecfc9065047c 100644
+--- a/fs/cachefiles/xattr.c
++++ b/fs/cachefiles/xattr.c
+@@ -110,9 +110,11 @@ int cachefiles_check_auxdata(struct cachefiles_object *object, struct file *file
+ 	if (xlen == 0)
+ 		xlen = vfs_getxattr(&init_user_ns, dentry, cachefiles_xattr_cache, buf, tlen);
+ 	if (xlen != tlen) {
+-		if (xlen < 0)
++		if (xlen < 0) {
++			ret = xlen;
+ 			trace_cachefiles_vfs_error(object, file_inode(file), xlen,
+ 						   cachefiles_trace_getxattr_error);
++		}
+ 		if (xlen == -EIO)
+ 			cachefiles_io_error_obj(
+ 				object,
+@@ -252,6 +254,7 @@ int cachefiles_check_volume_xattr(struct cachefiles_volume *volume)
+ 		xlen = vfs_getxattr(&init_user_ns, dentry, cachefiles_xattr_cache, buf, len);
+ 	if (xlen != len) {
+ 		if (xlen < 0) {
++			ret = xlen;
+ 			trace_cachefiles_vfs_error(NULL, d_inode(dentry), xlen,
+ 						   cachefiles_trace_getxattr_error);
+ 			if (xlen == -EIO)
+diff --git a/fs/dcache.c b/fs/dcache.c
+index b09bc88dbbec7..04f32dc8d1ad8 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -356,7 +356,11 @@ static inline void __d_clear_type_and_inode(struct dentry *dentry)
+ 	flags &= ~(DCACHE_ENTRY_TYPE | DCACHE_FALLTHRU);
+ 	WRITE_ONCE(dentry->d_flags, flags);
+ 	dentry->d_inode = NULL;
+-	if (dentry->d_flags & DCACHE_LRU_LIST)
++	/*
++	 * The negative counter only tracks dentries on the LRU. Don't inc if
++	 * d_lru is on another list.
++	 */
++	if ((flags & (DCACHE_LRU_LIST|DCACHE_SHRINK_LIST)) == DCACHE_LRU_LIST)
+ 		this_cpu_inc(nr_dentry_negative);
+ }
+ 
+@@ -2001,9 +2005,11 @@ static void __d_instantiate(struct dentry *dentry, struct inode *inode)
+ 
+ 	spin_lock(&dentry->d_lock);
+ 	/*
+-	 * Decrement negative dentry count if it was in the LRU list.
++	 * The negative counter only tracks dentries on the LRU. Don't dec if
++	 * d_lru is on another list.
+ 	 */
+-	if (dentry->d_flags & DCACHE_LRU_LIST)
++	if ((dentry->d_flags &
++	     (DCACHE_LRU_LIST|DCACHE_SHRINK_LIST)) == DCACHE_LRU_LIST)
+ 		this_cpu_dec(nr_dentry_negative);
+ 	hlist_add_head(&dentry->d_u.d_alias, &inode->i_dentry);
+ 	raw_write_seqcount_begin(&dentry->d_seq);
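Both dcache.c hunks fix the same miscount: the negative-dentry counter must only move when d_lru is actually on the LRU, meaning DCACHE_LRU_LIST is set and DCACHE_SHRINK_LIST is clear; testing the LRU bit alone also fires for entries an isolator has moved to a private shrink list. A sketch of the combined mask test, assuming the flag values from include/linux/dcache.h at this kernel version:

#include <stdbool.h>
#include <stdio.h>

#define DCACHE_LRU_LIST		0x00080000
#define DCACHE_SHRINK_LIST	0x00100000

/* True only when the dentry sits on the LRU proper: LRU bit set,
 * shrink bit clear. */
static bool on_lru(unsigned int flags)
{
	return (flags & (DCACHE_LRU_LIST | DCACHE_SHRINK_LIST)) ==
		DCACHE_LRU_LIST;
}

int main(void)
{
	printf("%d %d\n", on_lru(DCACHE_LRU_LIST),
	       on_lru(DCACHE_LRU_LIST | DCACHE_SHRINK_LIST));	/* 1 0 */
	return 0;
}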
+diff --git a/fs/locks.c b/fs/locks.c
+index 7d0918b8fe5d6..c23bcfe9b0fdd 100644
+--- a/fs/locks.c
++++ b/fs/locks.c
+@@ -1298,9 +1298,9 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
+ 		locks_wake_up_blocks(left);
+ 	}
+  out:
++	trace_posix_lock_inode(inode, request, error);
+ 	spin_unlock(&ctx->flc_lock);
+ 	percpu_up_read(&file_rwsem);
+-	trace_posix_lock_inode(inode, request, error);
+ 	/*
+ 	 * Free any unused locks.
+ 	 */
+diff --git a/fs/nilfs2/dir.c b/fs/nilfs2/dir.c
+index 4bba1970ad333..36438834a0c73 100644
+--- a/fs/nilfs2/dir.c
++++ b/fs/nilfs2/dir.c
+@@ -396,11 +396,39 @@ nilfs_find_entry(struct inode *dir, const struct qstr *qstr,
+ 
+ struct nilfs_dir_entry *nilfs_dotdot(struct inode *dir, struct page **p)
+ {
+-	struct nilfs_dir_entry *de = nilfs_get_page(dir, 0, p);
++	struct page *page;
++	struct nilfs_dir_entry *de, *next_de;
++	size_t limit;
++	char *msg;
+ 
++	de = nilfs_get_page(dir, 0, &page);
+ 	if (IS_ERR(de))
+ 		return NULL;
+-	return nilfs_next_entry(de);
++
++	limit = nilfs_last_byte(dir, 0);  /* is a multiple of chunk size */
++	if (unlikely(!limit || le64_to_cpu(de->inode) != dir->i_ino ||
++		     !nilfs_match(1, ".", de))) {
++		msg = "missing '.'";
++		goto fail;
++	}
++
++	next_de = nilfs_next_entry(de);
++	/*
++	 * If "next_de" has not reached the end of the chunk, there is
++	 * at least one more record.  Check whether it matches "..".
++	 */
++	if (unlikely((char *)next_de == (char *)de + nilfs_chunk_size(dir) ||
++		     !nilfs_match(2, "..", next_de))) {
++		msg = "missing '..'";
++		goto fail;
++	}
++	*p = page;
++	return next_de;
++
++fail:
++	nilfs_error(dir->i_sb, "directory #%lu %s", dir->i_ino, msg);
++	nilfs_put_page(page);
++	return NULL;
+ }
+ 
+ ino_t nilfs_inode_by_name(struct inode *dir, const struct qstr *qstr)
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index e5a72f9c793ef..1564febd1439f 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -1837,8 +1837,8 @@ require use of the stronger protocol */
+ #define   CIFSSEC_MUST_SEAL	0x40040 /* not supported yet */
+ #define   CIFSSEC_MUST_NTLMSSP	0x80080 /* raw ntlmssp with ntlmv2 */
+ 
+-#define   CIFSSEC_DEF (CIFSSEC_MAY_SIGN | CIFSSEC_MAY_NTLMV2 | CIFSSEC_MAY_NTLMSSP)
+-#define   CIFSSEC_MAX (CIFSSEC_MUST_NTLMV2)
++#define   CIFSSEC_DEF (CIFSSEC_MAY_SIGN | CIFSSEC_MAY_NTLMV2 | CIFSSEC_MAY_NTLMSSP | CIFSSEC_MAY_SEAL)
++#define   CIFSSEC_MAX (CIFSSEC_MAY_SIGN | CIFSSEC_MUST_KRB5 | CIFSSEC_MAY_SEAL)
+ #define   CIFSSEC_AUTH_MASK (CIFSSEC_MAY_NTLMV2 | CIFSSEC_MAY_KRB5 | CIFSSEC_MAY_NTLMSSP)
+ /*
+  *****************************************************************
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 34d88425434ab..6344bc81736c0 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -2062,15 +2062,22 @@ int smb2_tree_connect(struct ksmbd_work *work)
+  * @access:		file access flags
+  * @disposition:	file disposition flags
+  * @may_flags:		set with MAY_ flags
++ * @is_dir:		whether the open flags are being created for a directory
+  *
+  * Return:      file open flags
+  */
+ static int smb2_create_open_flags(bool file_present, __le32 access,
+ 				  __le32 disposition,
+-				  int *may_flags)
++				  int *may_flags,
++				  bool is_dir)
+ {
+ 	int oflags = O_NONBLOCK | O_LARGEFILE;
+ 
++	if (is_dir) {
++		access &= ~FILE_WRITE_DESIRE_ACCESS_LE;
++		ksmbd_debug(SMB, "Discard write access to a directory\n");
++	}
++
+ 	if (access & FILE_READ_DESIRED_ACCESS_LE &&
+ 	    access & FILE_WRITE_DESIRE_ACCESS_LE) {
+ 		oflags |= O_RDWR;
+@@ -2983,7 +2990,9 @@ int smb2_open(struct ksmbd_work *work)
+ 
+ 	open_flags = smb2_create_open_flags(file_present, daccess,
+ 					    req->CreateDisposition,
+-					    &may_flags);
++					    &may_flags,
++		req->CreateOptions & FILE_DIRECTORY_FILE_LE ||
++		(file_present && S_ISDIR(d_inode(path.dentry)->i_mode)));
+ 
+ 	if (!test_tree_conn_flag(tcon, KSMBD_TREE_CONN_FLAG_WRITABLE)) {
+ 		if (open_flags & (O_CREAT | O_TRUNC)) {
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index 154c103eca751..82101a2cf933e 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -1968,7 +1968,7 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx,
+ 		goto out;
+ 	features = uffdio_api.features;
+ 	ret = -EINVAL;
+-	if (uffdio_api.api != UFFD_API || (features & ~UFFD_API_FEATURES))
++	if (uffdio_api.api != UFFD_API)
+ 		goto err_out;
+ 	ret = -EPERM;
+ 	if ((features & UFFD_FEATURE_EVENT_FORK) && !capable(CAP_SYS_PTRACE))
+@@ -1985,6 +1985,11 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx,
+ #ifndef CONFIG_PTE_MARKER_UFFD_WP
+ 	uffdio_api.features &= ~UFFD_FEATURE_WP_HUGETLBFS_SHMEM;
+ #endif
++
++	ret = -EINVAL;
++	if (features & ~uffdio_api.features)
++		goto err_out;
++
+ 	uffdio_api.ioctls = UFFD_API_IOCTLS;
+ 	ret = -EFAULT;
+ 	if (copy_to_user(buf, &uffdio_api, sizeof(uffdio_api)))
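The userfaultfd reordering matters because the supported-feature set is only known after the config-dependent bits have been masked off; checking the request against the compile-time UFFD_API_FEATURES first could grant a feature this build cannot provide. A generic sketch of the corrected negotiation order, with hypothetical feature bits standing in for the UFFD_FEATURE_* flags:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define FEAT_A	0x1
#define FEAT_B	0x2	/* unavailable in this build */

static int negotiate(uint64_t requested, uint64_t *granted)
{
	uint64_t supported = FEAT_A | FEAT_B;

	supported &= ~FEAT_B;	/* config-dependent mask, like the
				 * CONFIG_PTE_MARKER_UFFD_WP case */
	if (requested & ~supported)
		return -EINVAL;	/* refuse, rather than silently drop */
	*granted = supported;
	return 0;
}

int main(void)
{
	uint64_t g;

	printf("%d\n", negotiate(FEAT_B, &g));	/* prints -22 (EINVAL) */
	return 0;
}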
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 1ca1902af23e9..6b18b8da025f9 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1777,6 +1777,8 @@ struct bpf_prog *bpf_prog_get_curr_or_next(u32 *id);
+ void *bpf_map_kmalloc_node(const struct bpf_map *map, size_t size, gfp_t flags,
+ 			   int node);
+ void *bpf_map_kzalloc(const struct bpf_map *map, size_t size, gfp_t flags);
++void *bpf_map_kvcalloc(struct bpf_map *map, size_t n, size_t size,
++		       gfp_t flags);
+ void __percpu *bpf_map_alloc_percpu(const struct bpf_map *map, size_t size,
+ 				    size_t align, gfp_t flags);
+ #else
+@@ -1793,6 +1795,12 @@ bpf_map_kzalloc(const struct bpf_map *map, size_t size, gfp_t flags)
+ 	return kzalloc(size, flags);
+ }
+ 
++static inline void *
++bpf_map_kvcalloc(struct bpf_map *map, size_t n, size_t size, gfp_t flags)
++{
++	return kvcalloc(n, size, flags);
++}
++
+ static inline void __percpu *
+ bpf_map_alloc_percpu(const struct bpf_map *map, size_t size, size_t align,
+ 		     gfp_t flags)
+diff --git a/include/linux/bpf_local_storage.h b/include/linux/bpf_local_storage.h
+index 7ea18d4da84b8..6d37a40cd90e8 100644
+--- a/include/linux/bpf_local_storage.h
++++ b/include/linux/bpf_local_storage.h
+@@ -116,21 +116,22 @@ static struct bpf_local_storage_cache name = {			\
+ 	.idx_lock = __SPIN_LOCK_UNLOCKED(name.idx_lock),	\
+ }
+ 
+-u16 bpf_local_storage_cache_idx_get(struct bpf_local_storage_cache *cache);
+-void bpf_local_storage_cache_idx_free(struct bpf_local_storage_cache *cache,
+-				      u16 idx);
+-
+ /* Helper functions for bpf_local_storage */
+ int bpf_local_storage_map_alloc_check(union bpf_attr *attr);
+ 
+-struct bpf_local_storage_map *bpf_local_storage_map_alloc(union bpf_attr *attr);
++struct bpf_map *
++bpf_local_storage_map_alloc(union bpf_attr *attr,
++			    struct bpf_local_storage_cache *cache);
+ 
+ struct bpf_local_storage_data *
+ bpf_local_storage_lookup(struct bpf_local_storage *local_storage,
+ 			 struct bpf_local_storage_map *smap,
+ 			 bool cacheit_lockit);
+ 
+-void bpf_local_storage_map_free(struct bpf_local_storage_map *smap,
++bool bpf_local_storage_unlink_nolock(struct bpf_local_storage *local_storage);
++
++void bpf_local_storage_map_free(struct bpf_map *map,
++				struct bpf_local_storage_cache *cache,
+ 				int __percpu *busy_counter);
+ 
+ int bpf_local_storage_map_check_btf(const struct bpf_map *map,
+@@ -141,10 +142,6 @@ int bpf_local_storage_map_check_btf(const struct bpf_map *map,
+ void bpf_selem_link_storage_nolock(struct bpf_local_storage *local_storage,
+ 				   struct bpf_local_storage_elem *selem);
+ 
+-bool bpf_selem_unlink_storage_nolock(struct bpf_local_storage *local_storage,
+-				     struct bpf_local_storage_elem *selem,
+-				     bool uncharge_omem, bool use_trace_rcu);
+-
+ void bpf_selem_unlink(struct bpf_local_storage_elem *selem, bool use_trace_rcu);
+ 
+ void bpf_selem_link_map(struct bpf_local_storage_map *smap,
+diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
+index ae4c9579ca5f0..efe5e8067652b 100644
+--- a/include/linux/compiler_attributes.h
++++ b/include/linux/compiler_attributes.h
+@@ -321,6 +321,18 @@
+  */
+ #define __section(section)              __attribute__((__section__(section)))
+ 
++/*
++ * Optional: only supported since gcc >= 12
++ *
++ *   gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#index-uninitialized-variable-attribute
++ * clang: https://clang.llvm.org/docs/AttributeReference.html#uninitialized
++ */
++#if __has_attribute(__uninitialized__)
++# define __uninitialized		__attribute__((__uninitialized__))
++#else
++# define __uninitialized
++#endif
++
+ /*
+  *   gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-unused-function-attribute
+  *   gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Type-Attributes.html#index-unused-type-attribute
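The __has_attribute guard lets the macro degrade gracefully: toolchains without the attribute see an empty expansion. A compilable sketch of the pattern (the extra __has_attribute fallback covers compilers that predate the builtin itself):

#include <stdio.h>

#ifndef __has_attribute
# define __has_attribute(x) 0	/* older compilers */
#endif

#if __has_attribute(__uninitialized__)
# define __uninitialized __attribute__((__uninitialized__))
#else
# define __uninitialized	/* expands to nothing */
#endif

int main(void)
{
	/* Opts this buffer out of automatic variable initialization
	 * (e.g. -ftrivial-auto-var-init) where supported; a no-op
	 * elsewhere. */
	char buf[64] __uninitialized;

	buf[0] = '\0';
	printf("[%s]\n", buf);
	return 0;
}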
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index 93d2003091222..61906244c14d6 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -1814,8 +1814,9 @@ static inline int subsection_map_index(unsigned long pfn)
+ static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
+ {
+ 	int idx = subsection_map_index(pfn);
++	struct mem_section_usage *usage = READ_ONCE(ms->usage);
+ 
+-	return test_bit(idx, READ_ONCE(ms->usage)->subsection_map);
++	return usage ? test_bit(idx, usage->subsection_map) : 0;
+ }
+ #else
+ static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
+diff --git a/kernel/bpf/bpf_inode_storage.c b/kernel/bpf/bpf_inode_storage.c
+index 5f7683b191998..6a1d4d22816a3 100644
+--- a/kernel/bpf/bpf_inode_storage.c
++++ b/kernel/bpf/bpf_inode_storage.c
+@@ -56,11 +56,9 @@ static struct bpf_local_storage_data *inode_storage_lookup(struct inode *inode,
+ 
+ void bpf_inode_storage_free(struct inode *inode)
+ {
+-	struct bpf_local_storage_elem *selem;
+ 	struct bpf_local_storage *local_storage;
+ 	bool free_inode_storage = false;
+ 	struct bpf_storage_blob *bsb;
+-	struct hlist_node *n;
+ 
+ 	bsb = bpf_inode(inode);
+ 	if (!bsb)
+@@ -74,30 +72,11 @@ void bpf_inode_storage_free(struct inode *inode)
+ 		return;
+ 	}
+ 
+-	/* Neither the bpf_prog nor the bpf-map's syscall
+-	 * could be modifying the local_storage->list now.
+-	 * Thus, no elem can be added-to or deleted-from the
+-	 * local_storage->list by the bpf_prog or by the bpf-map's syscall.
+-	 *
+-	 * It is racing with bpf_local_storage_map_free() alone
+-	 * when unlinking elem from the local_storage->list and
+-	 * the map's bucket->list.
+-	 */
+ 	raw_spin_lock_bh(&local_storage->lock);
+-	hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
+-		/* Always unlink from map before unlinking from
+-		 * local_storage.
+-		 */
+-		bpf_selem_unlink_map(selem);
+-		free_inode_storage = bpf_selem_unlink_storage_nolock(
+-			local_storage, selem, false, false);
+-	}
++	free_inode_storage = bpf_local_storage_unlink_nolock(local_storage);
+ 	raw_spin_unlock_bh(&local_storage->lock);
+ 	rcu_read_unlock();
+ 
+-	/* free_inoode_storage should always be true as long as
+-	 * local_storage->list was non-empty.
+-	 */
+ 	if (free_inode_storage)
+ 		kfree_rcu(local_storage, rcu);
+ }
+@@ -226,23 +205,12 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key,
+ 
+ static struct bpf_map *inode_storage_map_alloc(union bpf_attr *attr)
+ {
+-	struct bpf_local_storage_map *smap;
+-
+-	smap = bpf_local_storage_map_alloc(attr);
+-	if (IS_ERR(smap))
+-		return ERR_CAST(smap);
+-
+-	smap->cache_idx = bpf_local_storage_cache_idx_get(&inode_cache);
+-	return &smap->map;
++	return bpf_local_storage_map_alloc(attr, &inode_cache);
+ }
+ 
+ static void inode_storage_map_free(struct bpf_map *map)
+ {
+-	struct bpf_local_storage_map *smap;
+-
+-	smap = (struct bpf_local_storage_map *)map;
+-	bpf_local_storage_cache_idx_free(&inode_cache, smap->cache_idx);
+-	bpf_local_storage_map_free(smap, NULL);
++	bpf_local_storage_map_free(map, &inode_cache, NULL);
+ }
+ 
+ BTF_ID_LIST_SINGLE(inode_storage_map_btf_ids, struct,
+diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
+index d9d88a2cda5e5..51a9f024c1829 100644
+--- a/kernel/bpf/bpf_local_storage.c
++++ b/kernel/bpf/bpf_local_storage.c
+@@ -114,9 +114,9 @@ static void bpf_selem_free_rcu(struct rcu_head *rcu)
+  * The caller must ensure selem->smap is still valid to be
+  * dereferenced for its smap->elem_size and smap->cache_idx.
+  */
+-bool bpf_selem_unlink_storage_nolock(struct bpf_local_storage *local_storage,
+-				     struct bpf_local_storage_elem *selem,
+-				     bool uncharge_mem, bool use_trace_rcu)
++static bool bpf_selem_unlink_storage_nolock(struct bpf_local_storage *local_storage,
++					    struct bpf_local_storage_elem *selem,
++					    bool uncharge_mem, bool use_trace_rcu)
+ {
+ 	struct bpf_local_storage_map *smap;
+ 	bool free_local_storage;
+@@ -501,7 +501,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
+ 	return ERR_PTR(err);
+ }
+ 
+-u16 bpf_local_storage_cache_idx_get(struct bpf_local_storage_cache *cache)
++static u16 bpf_local_storage_cache_idx_get(struct bpf_local_storage_cache *cache)
+ {
+ 	u64 min_usage = U64_MAX;
+ 	u16 i, res = 0;
+@@ -525,21 +525,132 @@ u16 bpf_local_storage_cache_idx_get(struct bpf_local_storage_cache *cache)
+ 	return res;
+ }
+ 
+-void bpf_local_storage_cache_idx_free(struct bpf_local_storage_cache *cache,
+-				      u16 idx)
++static void bpf_local_storage_cache_idx_free(struct bpf_local_storage_cache *cache,
++					     u16 idx)
+ {
+ 	spin_lock(&cache->idx_lock);
+ 	cache->idx_usage_counts[idx]--;
+ 	spin_unlock(&cache->idx_lock);
+ }
+ 
+-void bpf_local_storage_map_free(struct bpf_local_storage_map *smap,
+-				int __percpu *busy_counter)
++int bpf_local_storage_map_alloc_check(union bpf_attr *attr)
++{
++	if (attr->map_flags & ~BPF_LOCAL_STORAGE_CREATE_FLAG_MASK ||
++	    !(attr->map_flags & BPF_F_NO_PREALLOC) ||
++	    attr->max_entries ||
++	    attr->key_size != sizeof(int) || !attr->value_size ||
++	    /* Enforce BTF for userspace sk dumping */
++	    !attr->btf_key_type_id || !attr->btf_value_type_id)
++		return -EINVAL;
++
++	if (!bpf_capable())
++		return -EPERM;
++
++	if (attr->value_size > BPF_LOCAL_STORAGE_MAX_VALUE_SIZE)
++		return -E2BIG;
++
++	return 0;
++}
++
++int bpf_local_storage_map_check_btf(const struct bpf_map *map,
++				    const struct btf *btf,
++				    const struct btf_type *key_type,
++				    const struct btf_type *value_type)
++{
++	u32 int_data;
++
++	if (BTF_INFO_KIND(key_type->info) != BTF_KIND_INT)
++		return -EINVAL;
++
++	int_data = *(u32 *)(key_type + 1);
++	if (BTF_INT_BITS(int_data) != 32 || BTF_INT_OFFSET(int_data))
++		return -EINVAL;
++
++	return 0;
++}
++
++bool bpf_local_storage_unlink_nolock(struct bpf_local_storage *local_storage)
+ {
+ 	struct bpf_local_storage_elem *selem;
++	bool free_storage = false;
++	struct hlist_node *n;
++
++	/* Neither the bpf_prog nor the bpf_map's syscall
++	 * could be modifying the local_storage->list now.
++	 * Thus, no elem can be added to or deleted from the
++	 * local_storage->list by the bpf_prog or by the bpf_map's syscall.
++	 *
++	 * It is racing with bpf_local_storage_map_free() alone
++	 * when unlinking elem from the local_storage->list and
++	 * the map's bucket->list.
++	 */
++	hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
++		/* Always unlink from map before unlinking from
++		 * local_storage.
++		 */
++		bpf_selem_unlink_map(selem);
++		/* If local_storage list has only one element, the
++		 * bpf_selem_unlink_storage_nolock() will return true.
++		 * Otherwise, it will return false. The current loop iteration
++		 * intends to remove all local storage. So the last iteration
++		 * of the loop will set free_storage to true.
++		 */
++		free_storage = bpf_selem_unlink_storage_nolock(
++			local_storage, selem, false, false);
++	}
++
++	return free_storage;
++}
++
++struct bpf_map *
++bpf_local_storage_map_alloc(union bpf_attr *attr,
++			    struct bpf_local_storage_cache *cache)
++{
++	struct bpf_local_storage_map *smap;
++	unsigned int i;
++	u32 nbuckets;
++
++	smap = bpf_map_area_alloc(sizeof(*smap), NUMA_NO_NODE);
++	if (!smap)
++		return ERR_PTR(-ENOMEM);
++	bpf_map_init_from_attr(&smap->map, attr);
++
++	nbuckets = roundup_pow_of_two(num_possible_cpus());
++	/* Use at least 2 buckets, select_bucket() is undefined behavior with 1 bucket */
++	nbuckets = max_t(u32, 2, nbuckets);
++	smap->bucket_log = ilog2(nbuckets);
++
++	smap->buckets = bpf_map_kvcalloc(&smap->map, nbuckets,
++					 sizeof(*smap->buckets), GFP_USER | __GFP_NOWARN);
++	if (!smap->buckets) {
++		bpf_map_area_free(smap);
++		return ERR_PTR(-ENOMEM);
++	}
++
++	for (i = 0; i < nbuckets; i++) {
++		INIT_HLIST_HEAD(&smap->buckets[i].list);
++		raw_spin_lock_init(&smap->buckets[i].lock);
++	}
++
++	smap->elem_size = offsetof(struct bpf_local_storage_elem,
++				   sdata.data[attr->value_size]);
++
++	smap->cache_idx = bpf_local_storage_cache_idx_get(cache);
++	return &smap->map;
++}
++
++void bpf_local_storage_map_free(struct bpf_map *map,
++				struct bpf_local_storage_cache *cache,
++				int __percpu *busy_counter)
++{
+ 	struct bpf_local_storage_map_bucket *b;
++	struct bpf_local_storage_elem *selem;
++	struct bpf_local_storage_map *smap;
+ 	unsigned int i;
+ 
++	smap = (struct bpf_local_storage_map *)map;
++	bpf_local_storage_cache_idx_free(cache, smap->cache_idx);
++
+ 	/* Note that this map might be concurrently cloned from
+ 	 * bpf_sk_storage_clone. Wait for any existing bpf_sk_storage_clone
+ 	 * RCU read section to finish before proceeding. New RCU
+@@ -594,73 +705,3 @@ void bpf_local_storage_map_free(struct bpf_local_storage_map *smap,
+ 	kvfree(smap->buckets);
+ 	bpf_map_area_free(smap);
+ }
+-
+-int bpf_local_storage_map_alloc_check(union bpf_attr *attr)
+-{
+-	if (attr->map_flags & ~BPF_LOCAL_STORAGE_CREATE_FLAG_MASK ||
+-	    !(attr->map_flags & BPF_F_NO_PREALLOC) ||
+-	    attr->max_entries ||
+-	    attr->key_size != sizeof(int) || !attr->value_size ||
+-	    /* Enforce BTF for userspace sk dumping */
+-	    !attr->btf_key_type_id || !attr->btf_value_type_id)
+-		return -EINVAL;
+-
+-	if (!bpf_capable())
+-		return -EPERM;
+-
+-	if (attr->value_size > BPF_LOCAL_STORAGE_MAX_VALUE_SIZE)
+-		return -E2BIG;
+-
+-	return 0;
+-}
+-
+-struct bpf_local_storage_map *bpf_local_storage_map_alloc(union bpf_attr *attr)
+-{
+-	struct bpf_local_storage_map *smap;
+-	unsigned int i;
+-	u32 nbuckets;
+-
+-	smap = bpf_map_area_alloc(sizeof(*smap), NUMA_NO_NODE);
+-	if (!smap)
+-		return ERR_PTR(-ENOMEM);
+-	bpf_map_init_from_attr(&smap->map, attr);
+-
+-	nbuckets = roundup_pow_of_two(num_possible_cpus());
+-	/* Use at least 2 buckets, select_bucket() is undefined behavior with 1 bucket */
+-	nbuckets = max_t(u32, 2, nbuckets);
+-	smap->bucket_log = ilog2(nbuckets);
+-
+-	smap->buckets = kvcalloc(sizeof(*smap->buckets), nbuckets,
+-				 GFP_USER | __GFP_NOWARN | __GFP_ACCOUNT);
+-	if (!smap->buckets) {
+-		bpf_map_area_free(smap);
+-		return ERR_PTR(-ENOMEM);
+-	}
+-
+-	for (i = 0; i < nbuckets; i++) {
+-		INIT_HLIST_HEAD(&smap->buckets[i].list);
+-		raw_spin_lock_init(&smap->buckets[i].lock);
+-	}
+-
+-	smap->elem_size =
+-		sizeof(struct bpf_local_storage_elem) + attr->value_size;
+-
+-	return smap;
+-}
+-
+-int bpf_local_storage_map_check_btf(const struct bpf_map *map,
+-				    const struct btf *btf,
+-				    const struct btf_type *key_type,
+-				    const struct btf_type *value_type)
+-{
+-	u32 int_data;
+-
+-	if (BTF_INFO_KIND(key_type->info) != BTF_KIND_INT)
+-		return -EINVAL;
+-
+-	int_data = *(u32 *)(key_type + 1);
+-	if (BTF_INT_BITS(int_data) != 32 || BTF_INT_OFFSET(int_data))
+-		return -EINVAL;
+-
+-	return 0;
+-}
+diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
+index 6f290623347e0..40a92edd6f539 100644
+--- a/kernel/bpf/bpf_task_storage.c
++++ b/kernel/bpf/bpf_task_storage.c
+@@ -71,10 +71,8 @@ task_storage_lookup(struct task_struct *task, struct bpf_map *map,
+ 
+ void bpf_task_storage_free(struct task_struct *task)
+ {
+-	struct bpf_local_storage_elem *selem;
+ 	struct bpf_local_storage *local_storage;
+ 	bool free_task_storage = false;
+-	struct hlist_node *n;
+ 	unsigned long flags;
+ 
+ 	rcu_read_lock();
+@@ -85,32 +83,13 @@ void bpf_task_storage_free(struct task_struct *task)
+ 		return;
+ 	}
+ 
+-	/* Neither the bpf_prog nor the bpf-map's syscall
+-	 * could be modifying the local_storage->list now.
+-	 * Thus, no elem can be added-to or deleted-from the
+-	 * local_storage->list by the bpf_prog or by the bpf-map's syscall.
+-	 *
+-	 * It is racing with bpf_local_storage_map_free() alone
+-	 * when unlinking elem from the local_storage->list and
+-	 * the map's bucket->list.
+-	 */
+ 	bpf_task_storage_lock();
+ 	raw_spin_lock_irqsave(&local_storage->lock, flags);
+-	hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
+-		/* Always unlink from map before unlinking from
+-		 * local_storage.
+-		 */
+-		bpf_selem_unlink_map(selem);
+-		free_task_storage = bpf_selem_unlink_storage_nolock(
+-			local_storage, selem, false, false);
+-	}
++	free_task_storage = bpf_local_storage_unlink_nolock(local_storage);
+ 	raw_spin_unlock_irqrestore(&local_storage->lock, flags);
+ 	bpf_task_storage_unlock();
+ 	rcu_read_unlock();
+ 
+-	/* free_task_storage should always be true as long as
+-	 * local_storage->list was non-empty.
+-	 */
+ 	if (free_task_storage)
+ 		kfree_rcu(local_storage, rcu);
+ }
+@@ -288,23 +267,12 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key)
+ 
+ static struct bpf_map *task_storage_map_alloc(union bpf_attr *attr)
+ {
+-	struct bpf_local_storage_map *smap;
+-
+-	smap = bpf_local_storage_map_alloc(attr);
+-	if (IS_ERR(smap))
+-		return ERR_CAST(smap);
+-
+-	smap->cache_idx = bpf_local_storage_cache_idx_get(&task_cache);
+-	return &smap->map;
++	return bpf_local_storage_map_alloc(attr, &task_cache);
+ }
+ 
+ static void task_storage_map_free(struct bpf_map *map)
+ {
+-	struct bpf_local_storage_map *smap;
+-
+-	smap = (struct bpf_local_storage_map *)map;
+-	bpf_local_storage_cache_idx_free(&task_cache, smap->cache_idx);
+-	bpf_local_storage_map_free(smap, &bpf_task_storage_busy);
++	bpf_local_storage_map_free(map, &task_cache, &bpf_task_storage_busy);
+ }
+ 
+ BTF_ID_LIST_SINGLE(task_storage_map_btf_ids, struct, bpf_local_storage_map)
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 1e46a84694b8a..d77597daa0022 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -470,6 +470,21 @@ void *bpf_map_kzalloc(const struct bpf_map *map, size_t size, gfp_t flags)
+ 	return ptr;
+ }
+ 
++void *bpf_map_kvcalloc(struct bpf_map *map, size_t n, size_t size,
++		       gfp_t flags)
++{
++	struct mem_cgroup *memcg, *old_memcg;
++	void *ptr;
++
++	memcg = bpf_map_get_memcg(map);
++	old_memcg = set_active_memcg(memcg);
++	ptr = kvcalloc(n, size, flags | __GFP_ACCOUNT);
++	set_active_memcg(old_memcg);
++	mem_cgroup_put(memcg);
++
++	return ptr;
++}
++
+ void __percpu *bpf_map_alloc_percpu(const struct bpf_map *map, size_t size,
+ 				    size_t align, gfp_t flags)
+ {
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 56a5c8beb553d..8973d3c9597ce 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -3599,6 +3599,8 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
+ 						continue;
+ 					if (type == STACK_MISC)
+ 						continue;
++					if (type == STACK_INVALID && env->allow_uninit_stack)
++						continue;
+ 					verbose(env, "invalid read from stack off %d+%d size %d\n",
+ 						off, i, size);
+ 					return -EACCES;
+@@ -3636,6 +3638,8 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
+ 				continue;
+ 			if (type == STACK_ZERO)
+ 				continue;
++			if (type == STACK_INVALID && env->allow_uninit_stack)
++				continue;
+ 			verbose(env, "invalid read from stack off %d+%d size %d\n",
+ 				off, i, size);
+ 			return -EACCES;
+@@ -5426,7 +5430,8 @@ static int check_stack_range_initialized(
+ 		stype = &state->stack[spi].slot_type[slot % BPF_REG_SIZE];
+ 		if (*stype == STACK_MISC)
+ 			goto mark;
+-		if (*stype == STACK_ZERO) {
++		if ((*stype == STACK_ZERO) ||
++		    (*stype == STACK_INVALID && env->allow_uninit_stack)) {
+ 			if (clobber) {
+ 				/* helper can write anything into the stack */
+ 				*stype = STACK_MISC;
+@@ -11967,6 +11972,10 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
+ 		if (old->stack[spi].slot_type[i % BPF_REG_SIZE] == STACK_INVALID)
+ 			continue;
+ 
++		if (env->allow_uninit_stack &&
++		    old->stack[spi].slot_type[i % BPF_REG_SIZE] == STACK_MISC)
++			continue;
++
+ 		/* explored stack has more populated slots than current stack
+ 		 * and these slots were used
+ 		 */
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index d71234729edb4..cac41c49bd2f5 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -701,7 +701,6 @@ static void update_rq_clock_task(struct rq *rq, s64 delta)
+ 
+ 	rq->prev_irq_time += irq_delta;
+ 	delta -= irq_delta;
+-	psi_account_irqtime(rq->curr, irq_delta);
+ #endif
+ #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
+ 	if (static_key_false((&paravirt_steal_rq_enabled))) {
+@@ -5500,7 +5499,7 @@ void scheduler_tick(void)
+ {
+ 	int cpu = smp_processor_id();
+ 	struct rq *rq = cpu_rq(cpu);
+-	struct task_struct *curr = rq->curr;
++	struct task_struct *curr;
+ 	struct rq_flags rf;
+ 	unsigned long thermal_pressure;
+ 	u64 resched_latency;
+@@ -5512,6 +5511,9 @@ void scheduler_tick(void)
+ 
+ 	rq_lock(rq, &rf);
+ 
++	curr = rq->curr;
++	psi_account_irqtime(rq, curr, NULL);
++
+ 	update_rq_clock(rq);
+ 	thermal_pressure = arch_scale_thermal_pressure(cpu_of(rq));
+ 	update_thermal_load_avg(rq_clock_thermal(rq), rq, thermal_pressure);
+@@ -6550,6 +6552,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
+ 		++*switch_count;
+ 
+ 		migrate_disable_switch(rq, prev);
++		psi_account_irqtime(rq, prev, next);
+ 		psi_sched_switch(prev, next, !task_on_rq_queued(prev));
+ 
+ 		trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev, next, prev_state);
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 0de8354d5ad0a..d0851610cf467 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -8479,12 +8479,8 @@ static int detach_tasks(struct lb_env *env)
+ 			break;
+ 
+ 		env->loop++;
+-		/*
+-		 * We've more or less seen every task there is, call it quits
+-		 * unless we haven't found any movable task yet.
+-		 */
+-		if (env->loop > env->loop_max &&
+-		    !(env->flags & LBF_ALL_PINNED))
++		/* We've more or less seen every task there is, call it quits */
++		if (env->loop > env->loop_max)
+ 			break;
+ 
+ 		/* take a breather every nr_migrate tasks */
+@@ -10623,9 +10619,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
+ 
+ 		if (env.flags & LBF_NEED_BREAK) {
+ 			env.flags &= ~LBF_NEED_BREAK;
+-			/* Stop if we tried all running tasks */
+-			if (env.loop < busiest->nr_running)
+-				goto more_balance;
++			goto more_balance;
+ 		}
+ 
+ 		/*
+diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
+index 80d8c10e93638..81dbced92df5f 100644
+--- a/kernel/sched/psi.c
++++ b/kernel/sched/psi.c
+@@ -785,6 +785,7 @@ static void psi_group_change(struct psi_group *group, int cpu,
+ 	enum psi_states s;
+ 	u32 state_mask;
+ 
++	lockdep_assert_rq_held(cpu_rq(cpu));
+ 	groupc = per_cpu_ptr(group->pcpu, cpu);
+ 
+ 	/*
+@@ -1003,19 +1004,29 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ }
+ 
+ #ifdef CONFIG_IRQ_TIME_ACCOUNTING
+-void psi_account_irqtime(struct task_struct *task, u32 delta)
++void psi_account_irqtime(struct rq *rq, struct task_struct *curr, struct task_struct *prev)
+ {
+-	int cpu = task_cpu(task);
++	int cpu = task_cpu(curr);
+ 	struct psi_group *group;
+ 	struct psi_group_cpu *groupc;
+-	u64 now;
++	u64 now, irq;
++	s64 delta;
+ 
+-	if (!task->pid)
++	if (!curr->pid)
++		return;
++
++	lockdep_assert_rq_held(rq);
++	group = task_psi_group(curr);
++	if (prev && task_psi_group(prev) == group)
+ 		return;
+ 
+ 	now = cpu_clock(cpu);
++	irq = irq_time_read(cpu);
++	delta = (s64)(irq - rq->psi_irq_time);
++	if (delta < 0)
++		return;
++	rq->psi_irq_time = irq;
+ 
+-	group = task_psi_group(task);
+ 	do {
+ 		if (!group->enabled)
+ 			continue;
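The reworked psi_account_irqtime() keeps a per-rq snapshot of the cumulative IRQ time and accounts only the increment, discarding a negative difference rather than letting it wrap into a huge unsigned stall. A minimal sketch of that delta computation:

#include <stdint.h>
#include <stdio.h>

static uint64_t psi_irq_time;	/* per-rq snapshot in the real code */

/* Return the new IRQ stall time to account, or 0 to skip the sample. */
static int64_t irq_delta(uint64_t irq_now)
{
	int64_t delta = (int64_t)(irq_now - psi_irq_time);

	if (delta < 0)
		return 0;	/* stale/inconsistent reading: ignore */
	psi_irq_time = irq_now;	/* only advance on a good sample */
	return delta;
}

int main(void)
{
	printf("%lld\n", (long long)irq_delta(100));	/* 100 */
	printf("%lld\n", (long long)irq_delta(90));	/* 0, not a giant u64 */
	printf("%lld\n", (long long)irq_delta(150));	/* 50 */
	return 0;
}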
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index b62d53d7c264f..81d9698f0a1eb 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -1084,6 +1084,7 @@ struct rq {
+ 
+ #ifdef CONFIG_IRQ_TIME_ACCOUNTING
+ 	u64			prev_irq_time;
++	u64			psi_irq_time;
+ #endif
+ #ifdef CONFIG_PARAVIRT
+ 	u64			prev_steal_time;
+diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
+index 84a188913cc9d..b49a96fad1d2f 100644
+--- a/kernel/sched/stats.h
++++ b/kernel/sched/stats.h
+@@ -110,8 +110,12 @@ __schedstats_from_se(struct sched_entity *se)
+ void psi_task_change(struct task_struct *task, int clear, int set);
+ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ 		     bool sleep);
+-void psi_account_irqtime(struct task_struct *task, u32 delta);
+-
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++void psi_account_irqtime(struct rq *rq, struct task_struct *curr, struct task_struct *prev);
++#else
++static inline void psi_account_irqtime(struct rq *rq, struct task_struct *curr,
++				       struct task_struct *prev) {}
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
+ /*
+  * PSI tracks state that persists across sleeps, such as iowaits and
+  * memory stalls. As a result, it has to distinguish between sleeps,
+@@ -206,7 +210,8 @@ static inline void psi_ttwu_dequeue(struct task_struct *p) {}
+ static inline void psi_sched_switch(struct task_struct *prev,
+ 				    struct task_struct *next,
+ 				    bool sleep) {}
+-static inline void psi_account_irqtime(struct task_struct *task, u32 delta) {}
++static inline void psi_account_irqtime(struct rq *rq, struct task_struct *curr,
++				       struct task_struct *prev) {}
+ #endif /* CONFIG_PSI */
+ 
+ #ifdef CONFIG_SCHED_INFO
+diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
+index db60217f911b3..2cf1254fd4522 100644
+--- a/net/ceph/mon_client.c
++++ b/net/ceph/mon_client.c
+@@ -1085,13 +1085,19 @@ static void delayed_work(struct work_struct *work)
+ 	struct ceph_mon_client *monc =
+ 		container_of(work, struct ceph_mon_client, delayed_work.work);
+ 
+-	dout("monc delayed_work\n");
+ 	mutex_lock(&monc->mutex);
++	dout("%s mon%d\n", __func__, monc->cur_mon);
++	if (monc->cur_mon < 0) {
++		goto out;
++	}
++
+ 	if (monc->hunting) {
+ 		dout("%s continuing hunt\n", __func__);
+ 		reopen_session(monc);
+ 	} else {
+ 		int is_auth = ceph_auth_is_authenticated(monc->auth);
++
++		dout("%s is_authed %d\n", __func__, is_auth);
+ 		if (ceph_con_keepalive_expired(&monc->con,
+ 					       CEPH_MONC_PING_TIMEOUT)) {
+ 			dout("monc keepalive timeout\n");
+@@ -1116,6 +1122,8 @@ static void delayed_work(struct work_struct *work)
+ 		}
+ 	}
+ 	__schedule_delayed(monc);
++
++out:
+ 	mutex_unlock(&monc->mutex);
+ }
+ 
+@@ -1232,13 +1240,15 @@ EXPORT_SYMBOL(ceph_monc_init);
+ void ceph_monc_stop(struct ceph_mon_client *monc)
+ {
+ 	dout("stop\n");
+-	cancel_delayed_work_sync(&monc->delayed_work);
+ 
+ 	mutex_lock(&monc->mutex);
+ 	__close_session(monc);
++	monc->hunting = false;
+ 	monc->cur_mon = -1;
+ 	mutex_unlock(&monc->mutex);
+ 
++	cancel_delayed_work_sync(&monc->delayed_work);
++
+ 	/*
+ 	 * flush msgr queue before we destroy ourselves to ensure that:
+ 	 *  - any work that references our embedded con is finished.
+diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
+index ad01b1bea52e4..0124536e8a9db 100644
+--- a/net/core/bpf_sk_storage.c
++++ b/net/core/bpf_sk_storage.c
+@@ -48,10 +48,8 @@ static int bpf_sk_storage_del(struct sock *sk, struct bpf_map *map)
+ /* Called by __sk_destruct() & bpf_sk_storage_clone() */
+ void bpf_sk_storage_free(struct sock *sk)
+ {
+-	struct bpf_local_storage_elem *selem;
+ 	struct bpf_local_storage *sk_storage;
+ 	bool free_sk_storage = false;
+-	struct hlist_node *n;
+ 
+ 	rcu_read_lock();
+ 	sk_storage = rcu_dereference(sk->sk_bpf_storage);
+@@ -60,24 +58,8 @@ void bpf_sk_storage_free(struct sock *sk)
+ 		return;
+ 	}
+ 
+-	/* Netiher the bpf_prog nor the bpf-map's syscall
+-	 * could be modifying the sk_storage->list now.
+-	 * Thus, no elem can be added-to or deleted-from the
+-	 * sk_storage->list by the bpf_prog or by the bpf-map's syscall.
+-	 *
+-	 * It is racing with bpf_local_storage_map_free() alone
+-	 * when unlinking elem from the sk_storage->list and
+-	 * the map's bucket->list.
+-	 */
+ 	raw_spin_lock_bh(&sk_storage->lock);
+-	hlist_for_each_entry_safe(selem, n, &sk_storage->list, snode) {
+-		/* Always unlink from map before unlinking from
+-		 * sk_storage.
+-		 */
+-		bpf_selem_unlink_map(selem);
+-		free_sk_storage = bpf_selem_unlink_storage_nolock(
+-			sk_storage, selem, true, false);
+-	}
++	free_sk_storage = bpf_local_storage_unlink_nolock(sk_storage);
+ 	raw_spin_unlock_bh(&sk_storage->lock);
+ 	rcu_read_unlock();
+ 
+@@ -87,23 +69,12 @@ void bpf_sk_storage_free(struct sock *sk)
+ 
+ static void bpf_sk_storage_map_free(struct bpf_map *map)
+ {
+-	struct bpf_local_storage_map *smap;
+-
+-	smap = (struct bpf_local_storage_map *)map;
+-	bpf_local_storage_cache_idx_free(&sk_cache, smap->cache_idx);
+-	bpf_local_storage_map_free(smap, NULL);
++	bpf_local_storage_map_free(map, &sk_cache, NULL);
+ }
+ 
+ static struct bpf_map *bpf_sk_storage_map_alloc(union bpf_attr *attr)
+ {
+-	struct bpf_local_storage_map *smap;
+-
+-	smap = bpf_local_storage_map_alloc(attr);
+-	if (IS_ERR(smap))
+-		return ERR_CAST(smap);
+-
+-	smap->cache_idx = bpf_local_storage_cache_idx_get(&sk_cache);
+-	return &smap->map;
++	return bpf_local_storage_map_alloc(attr, &sk_cache);
+ }
+ 
+ static int notsupp_get_next_key(struct bpf_map *map, void *key,
+diff --git a/net/core/datagram.c b/net/core/datagram.c
+index cdd65ca3124a4..87c39cc12327f 100644
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -441,11 +441,12 @@ static int __skb_datagram_iter(const struct sk_buff *skb, int offset,
+ 			if (copy > len)
+ 				copy = len;
+ 
++			n = 0;
+ 			skb_frag_foreach_page(frag,
+ 					      skb_frag_off(frag) + offset - start,
+ 					      copy, p, p_off, p_len, copied) {
+ 				vaddr = kmap_local_page(p);
+-				n = INDIRECT_CALL_1(cb, simple_copy_to_iter,
++				n += INDIRECT_CALL_1(cb, simple_copy_to_iter,
+ 					vaddr + p_off, p_len, data, to);
+ 				kunmap_local(vaddr);
+ 			}
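The datagram fix is a classic accumulate-vs-assign bug: skb_frag_foreach_page() can walk one fragment in several page-sized chunks (highmem pages are mapped one at a time), so assigning the per-chunk copy count each pass reports only the last chunk. A standalone illustration of the pattern:

#include <stdio.h>

/* Each fragment may be copied in several chunks; the total must
 * accumulate across them. */
static int copy_fragment(const int *chunks, int nchunks)
{
	int n = 0;		/* the fix: start at 0 and accumulate */

	for (int i = 0; i < nchunks; i++)
		n += chunks[i];	/* was effectively: n = chunks[i]; */
	return n;
}

int main(void)
{
	int chunks[] = { 4096, 4096, 1024 };

	printf("%d\n", copy_fragment(chunks, 3));	/* 9216, not 1024 */
	return 0;
}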
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 8b0459a6b629f..746d950de0e14 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -433,7 +433,8 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
+ 			page = sg_page(sge);
+ 			if (copied + copy > len)
+ 				copy = len - copied;
+-			copy = copy_page_to_iter(page, sge->offset, copy, iter);
++			if (copy)
++				copy = copy_page_to_iter(page, sge->offset, copy, iter);
+ 			if (!copy) {
+ 				copied = copied ? copied : -EFAULT;
+ 				goto out;
+diff --git a/net/ethtool/linkstate.c b/net/ethtool/linkstate.c
+index fb676f349455a..470582a70ccbe 100644
+--- a/net/ethtool/linkstate.c
++++ b/net/ethtool/linkstate.c
+@@ -36,6 +36,8 @@ static int linkstate_get_sqi(struct net_device *dev)
+ 	mutex_lock(&phydev->lock);
+ 	if (!phydev->drv || !phydev->drv->get_sqi)
+ 		ret = -EOPNOTSUPP;
++	else if (!phydev->link)
++		ret = -ENETDOWN;
+ 	else
+ 		ret = phydev->drv->get_sqi(phydev);
+ 	mutex_unlock(&phydev->lock);
+@@ -54,6 +56,8 @@ static int linkstate_get_sqi_max(struct net_device *dev)
+ 	mutex_lock(&phydev->lock);
+ 	if (!phydev->drv || !phydev->drv->get_sqi_max)
+ 		ret = -EOPNOTSUPP;
++	else if (!phydev->link)
++		ret = -ENETDOWN;
+ 	else
+ 		ret = phydev->drv->get_sqi_max(phydev);
+ 	mutex_unlock(&phydev->lock);
+@@ -61,6 +65,17 @@ static int linkstate_get_sqi_max(struct net_device *dev)
+ 	return ret;
+ };
+ 
++static bool linkstate_sqi_critical_error(int sqi)
++{
++	return sqi < 0 && sqi != -EOPNOTSUPP && sqi != -ENETDOWN;
++}
++
++static bool linkstate_sqi_valid(struct linkstate_reply_data *data)
++{
++	return data->sqi >= 0 && data->sqi_max >= 0 &&
++	       data->sqi <= data->sqi_max;
++}
++
+ static int linkstate_get_link_ext_state(struct net_device *dev,
+ 					struct linkstate_reply_data *data)
+ {
+@@ -92,12 +107,12 @@ static int linkstate_prepare_data(const struct ethnl_req_info *req_base,
+ 	data->link = __ethtool_get_link(dev);
+ 
+ 	ret = linkstate_get_sqi(dev);
+-	if (ret < 0 && ret != -EOPNOTSUPP)
++	if (linkstate_sqi_critical_error(ret))
+ 		goto out;
+ 	data->sqi = ret;
+ 
+ 	ret = linkstate_get_sqi_max(dev);
+-	if (ret < 0 && ret != -EOPNOTSUPP)
++	if (linkstate_sqi_critical_error(ret))
+ 		goto out;
+ 	data->sqi_max = ret;
+ 
+@@ -122,11 +137,10 @@ static int linkstate_reply_size(const struct ethnl_req_info *req_base,
+ 	len = nla_total_size(sizeof(u8)) /* LINKSTATE_LINK */
+ 		+ 0;
+ 
+-	if (data->sqi != -EOPNOTSUPP)
+-		len += nla_total_size(sizeof(u32));
+-
+-	if (data->sqi_max != -EOPNOTSUPP)
+-		len += nla_total_size(sizeof(u32));
++	if (linkstate_sqi_valid(data)) {
++		len += nla_total_size(sizeof(u32)); /* LINKSTATE_SQI */
++		len += nla_total_size(sizeof(u32)); /* LINKSTATE_SQI_MAX */
++	}
+ 
+ 	if (data->link_ext_state_provided)
+ 		len += nla_total_size(sizeof(u8)); /* LINKSTATE_EXT_STATE */
+@@ -147,13 +161,14 @@ static int linkstate_fill_reply(struct sk_buff *skb,
+ 	    nla_put_u8(skb, ETHTOOL_A_LINKSTATE_LINK, !!data->link))
+ 		return -EMSGSIZE;
+ 
+-	if (data->sqi != -EOPNOTSUPP &&
+-	    nla_put_u32(skb, ETHTOOL_A_LINKSTATE_SQI, data->sqi))
+-		return -EMSGSIZE;
++	if (linkstate_sqi_valid(data)) {
++		if (nla_put_u32(skb, ETHTOOL_A_LINKSTATE_SQI, data->sqi))
++			return -EMSGSIZE;
+ 
+-	if (data->sqi_max != -EOPNOTSUPP &&
+-	    nla_put_u32(skb, ETHTOOL_A_LINKSTATE_SQI_MAX, data->sqi_max))
+-		return -EMSGSIZE;
++		if (nla_put_u32(skb, ETHTOOL_A_LINKSTATE_SQI_MAX,
++				data->sqi_max))
++			return -EMSGSIZE;
++	}
+ 
+ 	if (data->link_ext_state_provided) {
+ 		if (nla_put_u8(skb, ETHTOOL_A_LINKSTATE_EXT_STATE,
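The linkstate helpers separate "driver genuinely failed" from "SQI merely unavailable" and emit the netlink attributes only when both values form a sane pair. A standalone sketch of the two predicates as the hunk defines them:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* -EOPNOTSUPP (no driver support) and -ENETDOWN (link down) are
 * expected conditions, not reasons to abort the whole reply. */
static bool sqi_critical_error(int sqi)
{
	return sqi < 0 && sqi != -EOPNOTSUPP && sqi != -ENETDOWN;
}

/* Emit SQI attributes only when both values are present and coherent. */
static bool sqi_valid(int sqi, int sqi_max)
{
	return sqi >= 0 && sqi_max >= 0 && sqi <= sqi_max;
}

int main(void)
{
	printf("%d %d\n", sqi_critical_error(-ENETDOWN),
	       sqi_critical_error(-EIO));		/* 0 1 */
	printf("%d %d\n", sqi_valid(3, 7),
	       sqi_valid(-ENETDOWN, 7));		/* 1 0 */
	return 0;
}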
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 317cb90d77102..359ffda9b736b 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -2101,8 +2101,16 @@ void tcp_clear_retrans(struct tcp_sock *tp)
+ static inline void tcp_init_undo(struct tcp_sock *tp)
+ {
+ 	tp->undo_marker = tp->snd_una;
++
+ 	/* Retransmission still in flight may cause DSACKs later. */
+-	tp->undo_retrans = tp->retrans_out ? : -1;
++	/* First, account for regular retransmits in flight: */
++	tp->undo_retrans = tp->retrans_out;
++	/* Next, account for TLP retransmits in flight: */
++	if (tp->tlp_high_seq && tp->tlp_retrans)
++		tp->undo_retrans++;
++	/* Finally, avoid 0, because undo_retrans==0 means "can undo now": */
++	if (!tp->undo_retrans)
++		tp->undo_retrans = -1;
+ }
+ 
+ static bool tcp_is_rack(const struct sock *sk)
+@@ -2181,6 +2189,7 @@ void tcp_enter_loss(struct sock *sk)
+ 
+ 	tcp_set_ca_state(sk, TCP_CA_Loss);
+ 	tp->high_seq = tp->snd_nxt;
++	tp->tlp_high_seq = 0;
+ 	tcp_ecn_queue_cwr(tp);
+ 
+ 	/* F-RTO RFC5682 sec 3.1 step 1: retransmit SND.UNA if no previous
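
The tcp_init_undo() hunk above matters because undo_retrans counts
retransmissions that may still produce DSACKs: a TLP (tail loss probe)
retransmit was previously not counted, so a DSACK for the probe could drive
undo_retrans to zero and trigger a spurious undo of congestion state. The
second hunk moves the tlp_high_seq reset into tcp_enter_loss(); the matching
tcp_timer.c hunk below removes the earlier reset from tcp_retransmit_timer().
A hypothetical stand-alone restatement of the new accounting (parameter names
are illustrative, not the kernel's):

    /* Sketch of the undo_retrans initialization introduced above. */
    static int init_undo_retrans(unsigned int retrans_out,
                                 unsigned int tlp_high_seq,
                                 unsigned int tlp_retrans)
    {
        int undo = retrans_out;          /* regular retransmits in flight */

        if (tlp_high_seq && tlp_retrans) /* plus one TLP retransmit */
            undo++;

        return undo ? undo : -1;         /* 0 means "can undo now"; avoid it */
    }
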
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
+index 44b49f7d1a9e6..016f9eff49b40 100644
+--- a/net/ipv4/tcp_timer.c
++++ b/net/ipv4/tcp_timer.c
+@@ -444,17 +444,34 @@ static void tcp_fastopen_synack_timer(struct sock *sk, struct request_sock *req)
+ static bool tcp_rtx_probe0_timed_out(const struct sock *sk,
+ 				     const struct sk_buff *skb)
+ {
++	const struct inet_connection_sock *icsk = inet_csk(sk);
++	u32 user_timeout = READ_ONCE(icsk->icsk_user_timeout);
+ 	const struct tcp_sock *tp = tcp_sk(sk);
+-	const int timeout = TCP_RTO_MAX * 2;
+-	u32 rcv_delta, rtx_delta;
+-
+-	rcv_delta = inet_csk(sk)->icsk_timeout - tp->rcv_tstamp;
+-	if (rcv_delta <= timeout)
+-		return false;
++	int timeout = TCP_RTO_MAX * 2;
++	u32 rtx_delta;
++	s32 rcv_delta;
+ 
+ 	rtx_delta = (u32)msecs_to_jiffies(tcp_time_stamp(tp) -
+ 			(tp->retrans_stamp ?: tcp_skb_timestamp(skb)));
+ 
++	if (user_timeout) {
++		/* If the user application specified a TCP_USER_TIMEOUT,
++		 * it does not want win 0 packets to 'reset the timer'
++		 * while retransmits are not making progress.
++		 */
++		if (rtx_delta > user_timeout)
++			return true;
++		timeout = min_t(u32, timeout, msecs_to_jiffies(user_timeout));
++	}
++
++	/* Note: timer interrupt might have been delayed by at least one jiffy,
++	 * and tp->rcv_tstamp might very well have been written recently.
++	 * rcv_delta can thus be negative.
++	 */
++	rcv_delta = icsk->icsk_timeout - tp->rcv_tstamp;
++	if (rcv_delta <= timeout)
++		return false;
++
+ 	return rtx_delta > timeout;
+ }
+ 
+@@ -496,8 +513,6 @@ void tcp_retransmit_timer(struct sock *sk)
+ 	if (WARN_ON_ONCE(!skb))
+ 		return;
+ 
+-	tp->tlp_high_seq = 0;
+-
+ 	if (!tp->snd_wnd && !sock_flag(sk, SOCK_DEAD) &&
+ 	    !((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV))) {
+ 		/* Receiver dastardly shrinks window. Our retransmits
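
tcp_rtx_probe0_timed_out() above gains two behaviors: a configured
TCP_USER_TIMEOUT now bounds how long zero-window probing may continue while
retransmits make no progress, and rcv_delta becomes signed because a delayed
timer interrupt can make icsk_timeout - tp->rcv_tstamp negative. A
hypothetical jiffies-level sketch of the new decision order (user limit
first, then the receive-progress escape hatch):

    /* Illustrative only; values are in jiffies, names are made up. */
    static int probe0_timed_out(long icsk_timeout, long rcv_tstamp,
                                long rtx_delta, long user_timeout,
                                long rto_max)
    {
        long timeout = 2 * rto_max;
        long rcv_delta;

        if (user_timeout) {
            if (rtx_delta > user_timeout) /* no progress past user limit */
                return 1;
            if (user_timeout < timeout)
                timeout = user_timeout;
        }

        rcv_delta = icsk_timeout - rcv_tstamp; /* may be negative */
        if (rcv_delta <= timeout)
            return 0;

        return rtx_delta > timeout;
    }
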
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index b8f93c1479ae1..53267566808c1 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -319,6 +319,8 @@ int udp_lib_get_port(struct sock *sk, unsigned short snum,
+ 			goto fail_unlock;
+ 		}
+ 
++		sock_set_flag(sk, SOCK_RCU_FREE);
++
+ 		sk_add_node_rcu(sk, &hslot->head);
+ 		hslot->count++;
+ 		sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
+@@ -335,7 +337,7 @@ int udp_lib_get_port(struct sock *sk, unsigned short snum,
+ 		hslot2->count++;
+ 		spin_unlock(&hslot2->lock);
+ 	}
+-	sock_set_flag(sk, SOCK_RCU_FREE);
++
+ 	error = 0;
+ fail_unlock:
+ 	spin_unlock_bh(&hslot->lock);
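
The udp.c hunks above are an ordering fix: SOCK_RCU_FREE must be set before
sk_add_node_rcu() publishes the socket on the RCU-protected hash chain,
otherwise a concurrent lookup can find the socket while the flag is still
clear and the socket may then be freed without waiting for a grace period. A
generic, single-threaded sketch of the mark-then-publish rule (hypothetical
types, no real RCU involved):

    #include <stdbool.h>

    struct node {
        bool rcu_free;      /* stands in for SOCK_RCU_FREE */
        struct node *next;
    };

    /* Mark first, publish second: any reader that can see the node must
     * also see the flag that defers its freeing.
     */
    static void publish(struct node *n, struct node **head)
    {
        n->rcu_free = true; /* sock_set_flag(sk, SOCK_RCU_FREE) */
        n->next = *head;    /* sk_add_node_rcu(sk, &hslot->head) */
        *head = n;
    }
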
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index cd95a315fde82..44ff7f356ec15 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -1212,6 +1212,14 @@ static int tcf_ct_act(struct sk_buff *skb, const struct tc_action *a,
+ 		 */
+ 		if (nf_conntrack_confirm(skb) != NF_ACCEPT)
+ 			goto drop;
++
++		/* The ct may be dropped if a clash has been resolved,
++		 * so it's necessary to retrieve it from skb again to
++		 * prevent UAF.
++		 */
++		ct = nf_ct_get(skb, &ctinfo);
++		if (!ct)
++			skip_add = true;
+ 	}
+ 
+ 	if (!skip_add)
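
The act_ct.c hunk above fixes a use-after-free: nf_conntrack_confirm() can
resolve a clash by dropping the conntrack entry the caller already holds, so
the entry must be re-read from the skb afterwards instead of reusing the
stale pointer. A minimal sketch of that re-fetch pattern (hypothetical types
and helpers):

    #include <stdbool.h>
    #include <stddef.h>

    struct conn;
    struct pkt { struct conn *ct; };

    static struct conn *conn_get(struct pkt *p) { return p->ct; }

    /* May drop or replace the attached entry, like clash resolution. */
    static bool conn_confirm(struct pkt *p) { p->ct = NULL; return true; }

    static bool handle(struct pkt *p, bool *skip_add)
    {
        struct conn *ct = conn_get(p);

        if (!conn_confirm(p))
            return false;   /* drop */

        ct = conn_get(p);   /* re-fetch: the old 'ct' may be stale */
        if (!ct)
            *skip_add = true;
        return true;
    }
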
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 05aa32696e7c2..02f651f85e739 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -2333,6 +2333,13 @@ static void xs_tcp_setup_socket(struct work_struct *work)
+ 		transport->srcport = 0;
+ 		status = -EAGAIN;
+ 		break;
++	case -EPERM:
++		/* Happens, for instance, if a BPF program is preventing
++		 * the connect. Remap the error so upper layers can better
++		 * deal with it.
++		 */
++		status = -ECONNREFUSED;
++		fallthrough;
+ 	case -EINVAL:
+ 		/* Happens, for instance, if the user specified a link
+ 		 * local IPv6 address without a scope-id.
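
The xprtsock.c hunk above remaps -EPERM from connect() (seen, for instance,
when a cgroup BPF program vetoes the connection) to -ECONNREFUSED, then falls
through so it is handled like the other hard connect failures. A hypothetical
sketch of the remapping:

    #include <errno.h>

    /* Illustrative stand-in for the switch in xs_tcp_setup_socket(). */
    static int map_connect_status(int status)
    {
        switch (status) {
        case -EPERM:
            /* A BPF program blocked the connect; let upper layers
             * treat it as an ordinary refusal.
             */
            status = -ECONNREFUSED;
            /* fall through */
        case -EINVAL:
        case -ECONNREFUSED:
        default:
            return status;
        }
    }
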
+diff --git a/scripts/ld-version.sh b/scripts/ld-version.sh
+index a78b804b680cf..b9513d224476f 100755
+--- a/scripts/ld-version.sh
++++ b/scripts/ld-version.sh
+@@ -57,9 +57,11 @@ else
+ 	fi
+ fi
+ 
+-# Some distributions append a package release number, as in 2.34-4.fc32
+-# Trim the hyphen and any characters that follow.
+-version=${version%-*}
++# There may be something after the version, such as a distribution's package
++# release number (like Fedora's "2.34-4.fc32") or punctuation (like LLD briefly
++# added before the "compatible with GNU linkers" string), so remove everything
++# after just numbers and periods.
++version=${version%%[!0-9.]*}
+ 
+ cversion=$(get_canonical_version $version)
+ min_cversion=$(get_canonical_version $min_version)
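
The new pattern ${version%%[!0-9.]*} strips the longest suffix starting at
the first character that is neither a digit nor a dot, so "2.34-4.fc32"
becomes "2.34" and trailing punctuation after an LLD version is removed as
well; the old ${version%-*} only handled a hyphenated suffix. For comparison,
a hypothetical C equivalent of the same trim:

    #include <stdio.h>
    #include <string.h>

    /* Keep only the leading run of digits and dots, as ${v%%[!0-9.]*} does. */
    static void trim_version(char *v)
    {
        v[strspn(v, "0123456789.")] = '\0';
    }

    int main(void)
    {
        char a[] = "2.34-4.fc32", b[] = "2.40.0,";

        trim_version(a);
        trim_version(b);
        printf("%s %s\n", a, b); /* prints: 2.34 2.40.0 */
        return 0;
    }
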
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 4635dc70a8404..06f00819d1a8a 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9661,6 +9661,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x841c, "HP Pavilion 15-CK0xx", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
++	SND_PCI_QUIRK(0x103c, 0x84a6, "HP 250 G7 Notebook PC", ALC269_FIXUP_HP_LINE1_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x84ae, "HP 15-db0403ng", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x84da, "HP OMEN dc0019-ur", ALC295_FIXUP_HP_OMEN),
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+@@ -9922,6 +9923,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
+ 	SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE),
+ 	SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
++	SND_PCI_QUIRK(0x10ec, 0x11bc, "VAIO VJFE-IL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x10ec, 0x1230, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10ec, 0x124c, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10ec, 0x1252, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+@@ -10015,6 +10017,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0xa600, "Clevo NL50NU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xa650, "Clevo NP[567]0SN[CD]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xa671, "Clevo NP70SN[CDE]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0xa763, "Clevo V54x_6x_TU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xb018, "Clevo NP50D[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xb019, "Clevo NH77D[BE]Q", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xb022, "Clevo NH77D[DC][QW]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -10153,6 +10156,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
++	SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO),
+ 	SND_PCI_QUIRK(0x2782, 0x1707, "Vaio VJFE-ADL", ALC298_FIXUP_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC),
+diff --git a/tools/testing/selftests/bpf/progs/test_global_func10.c b/tools/testing/selftests/bpf/progs/test_global_func10.c
+index 97b7031d0e227..d361eba167f6a 100644
+--- a/tools/testing/selftests/bpf/progs/test_global_func10.c
++++ b/tools/testing/selftests/bpf/progs/test_global_func10.c
+@@ -4,12 +4,12 @@
+ #include <bpf/bpf_helpers.h>
+ 
+ struct Small {
+-	int x;
++	long x;
+ };
+ 
+ struct Big {
+-	int x;
+-	int y;
++	long x;
++	long y;
+ };
+ 
+ __noinline int foo(const struct Big *big)
+@@ -21,7 +21,8 @@ __noinline int foo(const struct Big *big)
+ }
+ 
+ SEC("cgroup_skb/ingress")
+-int test_cls(struct __sk_buff *skb)
++__failure __msg("invalid indirect access to stack")
++int global_func10(struct __sk_buff *skb)
+ {
+ 	const struct Small small = {.x = skb->len };
+ 
+diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c
+index e1a937277b54d..a201d2871bfbd 100644
+--- a/tools/testing/selftests/bpf/verifier/calls.c
++++ b/tools/testing/selftests/bpf/verifier/calls.c
+@@ -2221,19 +2221,22 @@
+ 	 * that fp-8 stack slot was unused in the fall-through
+ 	 * branch and will accept the program incorrectly
+ 	 */
+-	BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 2, 2),
++	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
++	BPF_JMP_IMM(BPF_JGT, BPF_REG_0, 2, 2),
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ 	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ 	BPF_LD_MAP_FD(BPF_REG_1, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.fixup_map_hash_48b = { 6 },
+-	.errstr = "invalid indirect read from stack R2 off -8+0 size 8",
+-	.result = REJECT,
+-	.prog_type = BPF_PROG_TYPE_XDP,
++	.fixup_map_hash_48b = { 7 },
++	.errstr_unpriv = "invalid indirect read from stack R2 off -8+0 size 8",
++	.result_unpriv = REJECT,
++	/* in privileged mode reads from uninitialized stack locations are permitted */
++	.result = ACCEPT,
+ },
+ {
+ 	"calls: ctx read at start of subprog",
+diff --git a/tools/testing/selftests/bpf/verifier/helper_access_var_len.c b/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
+index a6c869a7319cd..9c4885885aba0 100644
+--- a/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
++++ b/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
+@@ -29,19 +29,30 @@
+ {
+ 	"helper access to variable memory: stack, bitwise AND, zero included",
+ 	.insns = {
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 8),
+-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64),
+-	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128),
+-	BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 64),
+-	BPF_MOV64_IMM(BPF_REG_3, 0),
+-	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
++	/* set max stack size */
++	BPF_ST_MEM(BPF_DW, BPF_REG_10, -128, 0),
++	/* set r3 to a random value */
++	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
++	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
++	/* use bitwise AND to limit r3 range to [0, 64] */
++	BPF_ALU64_IMM(BPF_AND, BPF_REG_3, 64),
++	BPF_LD_MAP_FD(BPF_REG_1, 0),
++	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -64),
++	BPF_MOV64_IMM(BPF_REG_4, 0),
++	/* Call bpf_ringbuf_output(), which is one of the few helper functions
++	 * with an ARG_CONST_SIZE_OR_ZERO parameter allowed in unpriv mode.
++	 * For unpriv this should signal an error, because memory at &fp[-64] is
++	 * not initialized.
++	 */
++	BPF_EMIT_CALL(BPF_FUNC_ringbuf_output),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "invalid indirect read from stack R1 off -64+0 size 64",
+-	.result = REJECT,
+-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
++	.fixup_map_ringbuf = { 4 },
++	.errstr_unpriv = "invalid indirect read from stack R2 off -64+0 size 64",
++	.result_unpriv = REJECT,
++	/* in privileged mode reads from uninitialized stack locations are permitted */
++	.result = ACCEPT,
+ },
+ {
+ 	"helper access to variable memory: stack, bitwise AND + JMP, wrong max",
+@@ -183,20 +194,31 @@
+ {
+ 	"helper access to variable memory: stack, JMP, no min check",
+ 	.insns = {
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 8),
+-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64),
+-	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128),
+-	BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 64, 3),
+-	BPF_MOV64_IMM(BPF_REG_3, 0),
+-	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
++	/* set max stack size */
++	BPF_ST_MEM(BPF_DW, BPF_REG_10, -128, 0),
++	/* set r3 to a random value */
++	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
++	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
++	/* use JMP to limit r3 range to [0, 64] */
++	BPF_JMP_IMM(BPF_JGT, BPF_REG_3, 64, 6),
++	BPF_LD_MAP_FD(BPF_REG_1, 0),
++	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -64),
++	BPF_MOV64_IMM(BPF_REG_4, 0),
++	/* Call bpf_ringbuf_output(), which is one of the few helper functions
++	 * with an ARG_CONST_SIZE_OR_ZERO parameter allowed in unpriv mode.
++	 * For unpriv this should signal an error, because memory at &fp[-64] is
++	 * not initialized.
++	 */
++	BPF_EMIT_CALL(BPF_FUNC_ringbuf_output),
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "invalid indirect read from stack R1 off -64+0 size 64",
+-	.result = REJECT,
+-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
++	.fixup_map_ringbuf = { 4 },
++	.errstr_unpriv = "invalid indirect read from stack R2 off -64+0 size 64",
++	.result_unpriv = REJECT,
++	/* in privileged mode reads from uninitialized stack locations are permitted */
++	.result = ACCEPT,
+ },
+ {
+ 	"helper access to variable memory: stack, JMP (signed), no min check",
+@@ -564,29 +586,41 @@
+ {
+ 	"helper access to variable memory: 8 bytes leak",
+ 	.insns = {
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 8),
+-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64),
++	/* set max stack size */
++	BPF_ST_MEM(BPF_DW, BPF_REG_10, -128, 0),
++	/* set r3 to a random value */
++	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
++	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
++	BPF_LD_MAP_FD(BPF_REG_1, 0),
++	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -64),
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -64),
+ 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -56),
+ 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -48),
+ 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -40),
++	/* Note: fp[-32] left uninitialized */
+ 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -24),
+ 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
+ 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
+-	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -128),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -128),
+-	BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 63),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1),
+-	BPF_MOV64_IMM(BPF_REG_3, 0),
+-	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
++	/* Limit r3 range to [1, 64] */
++	BPF_ALU64_IMM(BPF_AND, BPF_REG_3, 63),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 1),
++	BPF_MOV64_IMM(BPF_REG_4, 0),
++	/* Call bpf_ringbuf_output(), which is one of the few helper functions
++	 * with an ARG_CONST_SIZE_OR_ZERO parameter allowed in unpriv mode.
++	 * For unpriv this should signal an error, because memory region [1, 64]
++	 * at &fp[-64] is not fully initialized.
++	 */
++	BPF_EMIT_CALL(BPF_FUNC_ringbuf_output),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "invalid indirect read from stack R1 off -64+32 size 64",
+-	.result = REJECT,
+-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
++	.fixup_map_ringbuf = { 3 },
++	.errstr_unpriv = "invalid indirect read from stack R2 off -64+32 size 64",
++	.result_unpriv = REJECT,
++	/* in privileged mode reads from uninitialized stack locations are permitted */
++	.result = ACCEPT,
+ },
+ {
+ 	"helper access to variable memory: 8 bytes no leak (init memory)",
+diff --git a/tools/testing/selftests/bpf/verifier/int_ptr.c b/tools/testing/selftests/bpf/verifier/int_ptr.c
+index 070893fb29007..02d9e004260b3 100644
+--- a/tools/testing/selftests/bpf/verifier/int_ptr.c
++++ b/tools/testing/selftests/bpf/verifier/int_ptr.c
+@@ -54,12 +54,13 @@
+ 		/* bpf_strtoul() */
+ 		BPF_EMIT_CALL(BPF_FUNC_strtoul),
+ 
+-		BPF_MOV64_IMM(BPF_REG_0, 1),
++		BPF_MOV64_IMM(BPF_REG_0, 0),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.result = REJECT,
+-	.prog_type = BPF_PROG_TYPE_CGROUP_SYSCTL,
+-	.errstr = "invalid indirect read from stack R4 off -16+4 size 8",
++	.result_unpriv = REJECT,
++	.errstr_unpriv = "invalid indirect read from stack R4 off -16+4 size 8",
++	/* in privileged mode reads from uninitialized stack locations are permitted */
++	.result = ACCEPT,
+ },
+ {
+ 	"ARG_PTR_TO_LONG misaligned",
+diff --git a/tools/testing/selftests/bpf/verifier/search_pruning.c b/tools/testing/selftests/bpf/verifier/search_pruning.c
+index d63fd8991b03a..745d6b5842fd4 100644
+--- a/tools/testing/selftests/bpf/verifier/search_pruning.c
++++ b/tools/testing/selftests/bpf/verifier/search_pruning.c
+@@ -128,9 +128,10 @@
+ 		BPF_EXIT_INSN(),
+ 	},
+ 	.fixup_map_hash_8b = { 3 },
+-	.errstr = "invalid read from stack off -16+0 size 8",
+-	.result = REJECT,
+-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
++	.errstr_unpriv = "invalid read from stack off -16+0 size 8",
++	.result_unpriv = REJECT,
++	/* in privileged mode reads from uninitialized stack locations are permitted */
++	.result = ACCEPT,
+ },
+ {
+ 	"precision tracking for u32 spill/fill",
+@@ -258,6 +259,8 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.flags = BPF_F_TEST_STATE_FREQ,
+-	.errstr = "invalid read from stack off -8+1 size 8",
+-	.result = REJECT,
++	.errstr_unpriv = "invalid read from stack off -8+1 size 8",
++	.result_unpriv = REJECT,
++	/* in privileged mode reads from uninitialized stack locations are permitted */
++	.result = ACCEPT,
+ },
+diff --git a/tools/testing/selftests/bpf/verifier/sock.c b/tools/testing/selftests/bpf/verifier/sock.c
+index d11d0b28be416..108dd3ee1edda 100644
+--- a/tools/testing/selftests/bpf/verifier/sock.c
++++ b/tools/testing/selftests/bpf/verifier/sock.c
+@@ -530,33 +530,6 @@
+ 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ 	.result = ACCEPT,
+ },
+-{
+-	"sk_storage_get(map, skb->sk, &stack_value, 1): partially init stack_value",
+-	.insns = {
+-	BPF_MOV64_IMM(BPF_REG_2, 0),
+-	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_2, -8),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
+-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+-	BPF_MOV64_IMM(BPF_REG_0, 0),
+-	BPF_EXIT_INSN(),
+-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
+-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+-	BPF_MOV64_IMM(BPF_REG_0, 0),
+-	BPF_EXIT_INSN(),
+-	BPF_MOV64_IMM(BPF_REG_4, 1),
+-	BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -8),
+-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+-	BPF_LD_MAP_FD(BPF_REG_1, 0),
+-	BPF_EMIT_CALL(BPF_FUNC_sk_storage_get),
+-	BPF_MOV64_IMM(BPF_REG_0, 0),
+-	BPF_EXIT_INSN(),
+-	},
+-	.fixup_sk_storage_map = { 14 },
+-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+-	.result = REJECT,
+-	.errstr = "invalid indirect read from stack",
+-},
+ {
+ 	"bpf_map_lookup_elem(smap, &key)",
+ 	.insns = {
+diff --git a/tools/testing/selftests/bpf/verifier/spill_fill.c b/tools/testing/selftests/bpf/verifier/spill_fill.c
+index e23f07175e1bf..53286a7b49aab 100644
+--- a/tools/testing/selftests/bpf/verifier/spill_fill.c
++++ b/tools/testing/selftests/bpf/verifier/spill_fill.c
+@@ -171,9 +171,10 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.result = REJECT,
+-	.errstr = "invalid read from stack off -4+0 size 4",
+-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
++	.result_unpriv = REJECT,
++	.errstr_unpriv = "invalid read from stack off -4+0 size 4",
++	/* in privileged mode reads from uninitialized stack locations are permitted */
++	.result = ACCEPT,
+ },
+ {
+ 	"Spill a u32 const scalar.  Refill as u16.  Offset to skb->data",
+diff --git a/tools/testing/selftests/bpf/verifier/var_off.c b/tools/testing/selftests/bpf/verifier/var_off.c
+index d37f512fad16e..b183e26c03f10 100644
+--- a/tools/testing/selftests/bpf/verifier/var_off.c
++++ b/tools/testing/selftests/bpf/verifier/var_off.c
+@@ -212,31 +212,6 @@
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_LWT_IN,
+ },
+-{
+-	"indirect variable-offset stack access, max_off+size > max_initialized",
+-	.insns = {
+-	/* Fill only the second from top 8 bytes of the stack. */
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0),
+-	/* Get an unknown value. */
+-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0),
+-	/* Make it small and 4-byte aligned. */
+-	BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 4),
+-	BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 16),
+-	/* Add it to fp.  We now have either fp-12 or fp-16, but we don't know
+-	 * which. fp-12 size 8 is partially uninitialized stack.
+-	 */
+-	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10),
+-	/* Dereference it indirectly. */
+-	BPF_LD_MAP_FD(BPF_REG_1, 0),
+-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+-	BPF_MOV64_IMM(BPF_REG_0, 0),
+-	BPF_EXIT_INSN(),
+-	},
+-	.fixup_map_hash_8b = { 5 },
+-	.errstr = "invalid indirect read from stack R2 var_off",
+-	.result = REJECT,
+-	.prog_type = BPF_PROG_TYPE_LWT_IN,
+-},
+ {
+ 	"indirect variable-offset stack access, min_off < min_initialized",
+ 	.insns = {
+@@ -289,33 +264,6 @@
+ 	.result = ACCEPT,
+ 	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
+ },
+-{
+-	"indirect variable-offset stack access, uninitialized",
+-	.insns = {
+-	BPF_MOV64_IMM(BPF_REG_2, 6),
+-	BPF_MOV64_IMM(BPF_REG_3, 28),
+-	/* Fill the top 16 bytes of the stack. */
+-	BPF_ST_MEM(BPF_W, BPF_REG_10, -16, 0),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+-	/* Get an unknown value. */
+-	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, 0),
+-	/* Make it small and 4-byte aligned. */
+-	BPF_ALU64_IMM(BPF_AND, BPF_REG_4, 4),
+-	BPF_ALU64_IMM(BPF_SUB, BPF_REG_4, 16),
+-	/* Add it to fp.  We now have either fp-12 or fp-16, we don't know
+-	 * which, but either way it points to initialized stack.
+-	 */
+-	BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_10),
+-	BPF_MOV64_IMM(BPF_REG_5, 8),
+-	/* Dereference it indirectly. */
+-	BPF_EMIT_CALL(BPF_FUNC_getsockopt),
+-	BPF_MOV64_IMM(BPF_REG_0, 0),
+-	BPF_EXIT_INSN(),
+-	},
+-	.errstr = "invalid indirect read from stack R4 var_off",
+-	.result = REJECT,
+-	.prog_type = BPF_PROG_TYPE_SOCK_OPS,
+-},
+ {
+ 	"indirect variable-offset stack access, ok",
+ 	.insns = {
+diff --git a/tools/testing/selftests/wireguard/qemu/Makefile b/tools/testing/selftests/wireguard/qemu/Makefile
+index e95bd56b332f7..35856b11c1435 100644
+--- a/tools/testing/selftests/wireguard/qemu/Makefile
++++ b/tools/testing/selftests/wireguard/qemu/Makefile
+@@ -109,9 +109,9 @@ KERNEL_ARCH := x86_64
+ KERNEL_BZIMAGE := $(KERNEL_BUILD_PATH)/arch/x86/boot/bzImage
+ QEMU_VPORT_RESULT := virtio-serial-device
+ ifeq ($(HOST_ARCH),$(ARCH))
+-QEMU_MACHINE := -cpu host -machine microvm,accel=kvm,pit=off,pic=off,rtc=off -no-acpi
++QEMU_MACHINE := -cpu host -machine microvm,accel=kvm,pit=off,pic=off,rtc=off,acpi=off
+ else
+-QEMU_MACHINE := -cpu max -machine microvm -no-acpi
++QEMU_MACHINE := -cpu max -machine microvm,acpi=off
+ endif
+ else ifeq ($(ARCH),i686)
+ CHOST := i686-linux-musl
+@@ -120,9 +120,9 @@ KERNEL_ARCH := x86
+ KERNEL_BZIMAGE := $(KERNEL_BUILD_PATH)/arch/x86/boot/bzImage
+ QEMU_VPORT_RESULT := virtio-serial-device
+ ifeq ($(subst x86_64,i686,$(HOST_ARCH)),$(ARCH))
+-QEMU_MACHINE := -cpu host -machine microvm,accel=kvm,pit=off,pic=off,rtc=off -no-acpi
++QEMU_MACHINE := -cpu host -machine microvm,accel=kvm,pit=off,pic=off,rtc=off,acpi=off
+ else
+-QEMU_MACHINE := -cpu coreduo -machine microvm -no-acpi
++QEMU_MACHINE := -cpu coreduo -machine microvm,acpi=off
+ endif
+ else ifeq ($(ARCH),mips64)
+ CHOST := mips64-linux-musl