public inbox for gentoo-commits@lists.gentoo.org
* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-05-26 10:21 Mike Pagano
  0 siblings, 0 replies; 19+ messages in thread
From: Mike Pagano @ 2025-05-26 10:21 UTC
  To: gentoo-commits

commit:     f4b1118a08a7086d48bfeacb2439357b2bfd8916
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon May 26 10:21:01 2025 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon May 26 10:21:01 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f4b1118a

Remove redundant patch

Removed:
1900_eventpoll-Prevent-hang-in-epoll-wait.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                     |  4 --
 1900_eventpoll-Prevent-hang-in-epoll-wait.patch | 51 -------------------------
 2 files changed, 55 deletions(-)

diff --git a/0000_README b/0000_README
index 2aa6dbca..576a00f8 100644
--- a/0000_README
+++ b/0000_README
@@ -54,10 +54,6 @@ Patch:  1730_parisc-Disable-prctl.patch
 From:   https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
 Desc:   prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
 
-Patch:  1900_eventpoll-Prevent-hang-in-epoll-wait.patch
-From:   https://lore.kernel.org/linux-fsdevel/20250429153419.94723-1-jdamato@fastly.com/T/#u
-Desc:   eventpoll: Prevent hang in epoll_wait
-
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1900_eventpoll-Prevent-hang-in-epoll-wait.patch b/1900_eventpoll-Prevent-hang-in-epoll-wait.patch
deleted file mode 100644
index 7f1e543a..00000000
--- a/1900_eventpoll-Prevent-hang-in-epoll-wait.patch
+++ /dev/null
@@ -1,51 +0,0 @@
-From git@z Thu Jan  1 00:00:00 1970
-Subject: [PATCH] eventpoll: Prevent hang in epoll_wait
-From: Joe Damato <jdamato@fastly.com>
-Date: Tue, 29 Apr 2025 15:34:19 +0000
-Message-Id: <20250429153419.94723-1-jdamato@fastly.com>
-MIME-Version: 1.0
-Content-Type: text/plain; charset="utf-8"
-Content-Transfer-Encoding: 7bit
-
-In commit 0a65bc27bd64 ("eventpoll: Set epoll timeout if it's in the
-future"), a bug was introduced causing the loop in ep_poll to hang under
-certain circumstances.
-
-When the timeout is non-NULL and ep_schedule_timeout returns false, the
-flag timed_out was not set to true. This causes a hang.
-
-Adjust the logic and set timed_out, if needed, fixing the original code.
-
-Reported-by: Christian Brauner <brauner@kernel.org>
-Closes: https://lore.kernel.org/linux-fsdevel/20250426-haben-redeverbot-0b58878ac722@brauner/
-Reported-by: Mike Pagano <mpagano@gentoo.org>
-Closes: https://bugs.gentoo.org/954806
-Reported-by: Carlos Llamas <cmllamas@google.com>
-Closes: https://lore.kernel.org/linux-fsdevel/aBAB_4gQ6O_haAjp@google.com/
-Fixes: 0a65bc27bd64 ("eventpoll: Set epoll timeout if it's in the future")
-Tested-by: Carlos Llamas <cmllamas@google.com>
-Signed-off-by: Joe Damato <jdamato@fastly.com>
----
- fs/eventpoll.c | 4 +++-
- 1 file changed, 3 insertions(+), 1 deletion(-)
-
-diff --git a/fs/eventpoll.c b/fs/eventpoll.c
-index 4bc264b854c4..1a5d1147f082 100644
---- a/fs/eventpoll.c
-+++ b/fs/eventpoll.c
-@@ -2111,7 +2111,9 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
- 
- 		write_unlock_irq(&ep->lock);
- 
--		if (!eavail && ep_schedule_timeout(to))
-+		if (!ep_schedule_timeout(to))
-+			timed_out = 1;
-+		else if (!eavail)
- 			timed_out = !schedule_hrtimeout_range(to, slack,
- 							      HRTIMER_MODE_ABS);
- 		__set_current_state(TASK_RUNNING);
-
-base-commit: f520bed25d17bb31c2d2d72b0a785b593a4e3179
--- 
-2.43.0
-
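
For reference, the removed fix can be paraphrased as the following minimal
sketch (not actual kernel code; the names mirror fs/eventpoll.c). Before the
fix, a false return from ep_schedule_timeout() executed neither branch, so
timed_out stayed 0 and the loop never terminated:

	/* inside the ep_poll() retry loop */
	if (!ep_schedule_timeout(to))
		/* unarmed or already-expired timeout: report a timeout */
		timed_out = 1;
	else if (!eavail)
		/* no events ready yet: sleep until wakeup or timer expiry */
		timed_out = !schedule_hrtimeout_range(to, slack,
						      HRTIMER_MODE_ABS);
	/* else: events are available; fall through and deliver them */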



* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-05-27 19:29 Mike Pagano
  0 siblings, 0 replies; 19+ messages in thread
From: Mike Pagano @ 2025-05-27 19:29 UTC
  To: gentoo-commits

commit:     16118effc681ddda4a7723b77a37636e4ed2853a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue May 27 19:29:18 2025 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue May 27 19:29:18 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=16118eff

BMQ(BitMap Queue) Scheduler v6.15-r0

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                  |     7 +
 5020_BMQ-and-PDS-io-scheduler-v6.15-r0.patch | 11567 +++++++++++++++++++++++++
 5021_BMQ-and-PDS-gentoo-defaults.patch       |    13 +
 3 files changed, 11587 insertions(+)

diff --git a/0000_README b/0000_README
index 576a00f8..057bf40b 100644
--- a/0000_README
+++ b/0000_README
@@ -78,3 +78,10 @@ Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
 
+Patch:  5020_BMQ-and-PDS-io-scheduler-v6.15-r0.patch
+From:   https://gitlab.com/alfredchen/projectc
+Desc:   BMQ (BitMap Queue) Scheduler. A new CPU scheduler developed from PDS (included). Inspired by the scheduler in Zircon.
+
+Patch:  5021_BMQ-and-PDS-gentoo-defaults.patch
+From:   https://gitweb.gentoo.org/proj/linux-patches.git/
+Desc:   Set defaults for BMQ. Architectures are added as people test them; default to N.

diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.15-r0.patch b/5020_BMQ-and-PDS-io-scheduler-v6.15-r0.patch
new file mode 100644
index 00000000..e9c20283
--- /dev/null
+++ b/5020_BMQ-and-PDS-io-scheduler-v6.15-r0.patch
@@ -0,0 +1,11567 @@
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index dd49a89a62d3..4118f8c92125 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -1700,3 +1700,12 @@ is 10 seconds.
+ 
+ The softlockup threshold is (``2 * watchdog_thresh``). Setting this
+ tunable to zero will disable lockup detection altogether.
++
++yield_type:
++===========
++
++BMQ/PDS CPU scheduler only. This determines what type of yield is
++performed when a task calls sched_yield().
++
++  0 - No yield.
++  1 - Requeue task. (default)
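/*
 * Illustration (not part of the patch): a minimal userspace sketch of
 * reading and toggling the tunable documented above.  It assumes the
 * sysctl is exposed as /proc/sys/kernel/yield_type; writing requires root.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/kernel/yield_type", "r+");
	int type;

	if (!f)
		return 1;	/* kernel is not running BMQ/PDS */
	if (fscanf(f, "%d", &type) == 1)
		printf("yield_type = %d\n", type);
	rewind(f);
	fputs("0\n", f);	/* 0: make sched_yield() a no-op */
	return fclose(f) ? 1 : 0;
}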
+diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
+new file mode 100644
+index 000000000000..05c84eec0f31
+--- /dev/null
++++ b/Documentation/scheduler/sched-BMQ.txt
+@@ -0,0 +1,110 @@
++                         BitMap queue CPU Scheduler
++                         --------------------------
++
++CONTENT
++========
++
++ Background
++ Design
++   Overview
++   Task policy
++   Priority management
++   BitMap Queue
++   CPU Assignment and Migration
++
++
++Background
++==========
++
++The BitMap Queue CPU scheduler, referred to as BMQ from here on, is an
++evolution of the earlier Priority and Deadline based Skiplist multiple queue
++scheduler (PDS), and is inspired by the Zircon scheduler. Its goal is to keep
++the scheduler code simple while remaining efficient and scalable for
++interactive workloads such as desktop use, movie playback and gaming.
++
++Design
++======
++
++Overview
++--------
++
++BMQ uses a per-CPU run queue design: each (logical) CPU has its own run
++queue and is responsible for scheduling the tasks placed into that queue.
++
++The run queue is a set of priority queues. In terms of data structure, these
++queues are FIFO queues for non-rt tasks and priority queues for rt tasks; see
++BitMap Queue below for details. BMQ is optimized for non-rt tasks, given the
++fact that most applications are non-rt tasks. Whether a queue is FIFO or
++priority, each queue holds an ordered list of runnable tasks awaiting
++execution, and the data structures are the same. When it is time for a new
++task to run, the scheduler simply finds the lowest numbered queue that
++contains a task and runs the first task from the head of that queue. The
++per-CPU idle task is also kept in the run queue, so the scheduler can always
++find a task to run from its own run queue.
++
++Each task is assigned the same timeslice (default 4 ms) when it is picked to
++start running. A task is reinserted at the end of the appropriate priority
++queue when it uses up its whole timeslice. When the scheduler selects a new
++task from the priority queue, it sets the CPU's preemption timer for the
++remainder of the previous timeslice. When that timer fires, the scheduler
++stops execution of that task, selects another task and starts over again.
++
++If a task blocks waiting for a shared resource then it's taken out of its
++priority queue and is placed in a wait queue for the shared resource. When it
++is unblocked it will be reinserted in the appropriate priority queue of an
++eligible CPU.
++
++Task policy
++-----------
++
++BMQ supports the DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policies,
++like the mainline CFS scheduler. However, BMQ is heavily optimized for non-rt
++tasks, that is, tasks with the NORMAL/BATCH/IDLE policies. Below are the
++implementation details of each policy.
++
++DEADLINE
++	It is squashed as priority 0 FIFO task.
++
++FIFO/RR
++	All RT tasks share one single priority queue in the BMQ run queue design.
++The complexity of the insert operation is O(n). BMQ is not designed for
++systems that run mostly rt policy tasks.
++
++NORMAL/BATCH/IDLE
++	BATCH and IDLE tasks are treated as the same policy. They compete for CPU
++with NORMAL policy tasks, but they simply do not get boosted. To control the
++priority of NORMAL/BATCH/IDLE tasks, simply use the nice level.
++
++ISO
++	The ISO policy is not supported in BMQ. Please use a nice level -20 NORMAL
++policy task instead.
++
++Priority management
++-------------------
++
++RT tasks have priorities from 0 to 99. For non-rt tasks, two different
++factors are used to determine the effective priority of a task; the
++effective priority determines which queue the task will be placed in.
++
++The first factor is simply the task's static priority, which is assigned
++from the task's nice level: [-20, 19] from userland's point of view and
++[0, 39] internally.
++
++The second factor is the priority boost. This is a value bounded to the range
++[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] and used to offset the base priority;
++it is modified in the following cases:
++
++* When a thread has used up its entire timeslice, always deboost it by
++increasing its boost value by one.
++* When a thread gives up CPU control (voluntarily or involuntarily) to
++reschedule, and its switch-in time (the time between its last switch-in and
++run) is below the threshold derived from its priority boost, boost it by
++decreasing its boost value by one; the value is capped at 0 (it won't go
++negative).
++
++The intent in this system is to ensure that interactive threads are serviced
++quickly. These are usually the threads that interact directly with the user
++and cause user-perceivable latency. These threads usually do little work and
++spend most of their time blocked awaiting another user event. So they get the
++priority boost from unblocking while background threads that do most of the
++processing receive the priority penalty for using their entire timeslice.
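/*
 * Illustration (not from the patch): how the two factors above combine
 * for a BMQ task.  The constant follows the prio.h changes later in this
 * patch; the function names here are hypothetical.
 */
#define MAX_PRIORITY_ADJ	12	/* BMQ boost bound: [-12, 12] */

/* nice -20..19 maps to static priority 0..39 internally */
static inline int static_prio_of(int nice)
{
	return nice + 20;
}

/* effective priority = static priority shifted by the boost; a lower
 * value selects a better (lower numbered) queue, so a boost earned by
 * sleeping makes an interactive thread more preferred. */
static inline int effective_prio(int static_prio, int boost_prio)
{
	return static_prio + boost_prio;
}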
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index b0d4e1908b22..ded115576172 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -515,7 +515,7 @@ static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
+ 		seq_puts(m, "0 0 0\n");
+ 	else
+ 		seq_printf(m, "%llu %llu %lu\n",
+-		   (unsigned long long)task->se.sum_exec_runtime,
++		   (unsigned long long)tsk_seruntime(task),
+ 		   (unsigned long long)task->sched_info.run_delay,
+ 		   task->sched_info.pcount);
+ 
+diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
+index 8874f681b056..59eb72bf7d5f 100644
+--- a/include/asm-generic/resource.h
++++ b/include/asm-generic/resource.h
+@@ -23,7 +23,7 @@
+ 	[RLIMIT_LOCKS]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ 	[RLIMIT_SIGPENDING]	= { 		0,	       0 },	\
+ 	[RLIMIT_MSGQUEUE]	= {   MQ_BYTES_MAX,   MQ_BYTES_MAX },	\
+-	[RLIMIT_NICE]		= { 0, 0 },				\
++	[RLIMIT_NICE]		= { 30, 30 },				\
+ 	[RLIMIT_RTPRIO]		= { 0, 0 },				\
+ 	[RLIMIT_RTTIME]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ }
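/*
 * Illustration (not from the patch): why the RLIMIT_NICE default changes
 * from 0 to 30.  The kernel encodes this rlimit as
 * (20 - lowest_permitted_nice), as checked by can_nice() in the
 * scheduler core.  A sketch of that check:
 */
static inline int sketch_can_nice(long rlimit_nice, int requested_nice)
{
	/* rlimit 30  =>  nice levels down to 20 - 30 = -10 are allowed;
	 * the old default of 0 forbade lowering nice at all. */
	return 20 - requested_nice <= rlimit_nice;
}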
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index f96ac1982893..f0bb88b01dfe 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -839,9 +839,13 @@ struct task_struct {
+ 	struct alloc_tag		*alloc_tag;
+ #endif
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
+ 	int				on_cpu;
++#endif
++
++#ifdef CONFIG_SMP
+ 	struct __call_single_node	wake_entry;
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned int			wakee_flips;
+ 	unsigned long			wakee_flip_decay_ts;
+ 	struct task_struct		*last_wakee;
+@@ -855,6 +859,7 @@ struct task_struct {
+ 	 */
+ 	int				recent_used_cpu;
+ 	int				wake_cpu;
++#endif /* !CONFIG_SCHED_ALT */
+ #endif
+ 	int				on_rq;
+ 
+@@ -863,6 +868,19 @@ struct task_struct {
+ 	int				normal_prio;
+ 	unsigned int			rt_priority;
+ 
++#ifdef CONFIG_SCHED_ALT
++	u64				last_ran;
++	s64				time_slice;
++	struct list_head		sq_node;
++#ifdef CONFIG_SCHED_BMQ
++	int				boost_prio;
++#endif /* CONFIG_SCHED_BMQ */
++#ifdef CONFIG_SCHED_PDS
++	u64				deadline;
++#endif /* CONFIG_SCHED_PDS */
++	/* sched_clock time spent running */
++	u64				sched_time;
++#else /* !CONFIG_SCHED_ALT */
+ 	struct sched_entity		se;
+ 	struct sched_rt_entity		rt;
+ 	struct sched_dl_entity		dl;
+@@ -877,6 +895,7 @@ struct task_struct {
+ 	unsigned long			core_cookie;
+ 	unsigned int			core_occupation;
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_CGROUP_SCHED
+ 	struct task_group		*sched_task_group;
+@@ -913,11 +932,15 @@ struct task_struct {
+ 	const cpumask_t			*cpus_ptr;
+ 	cpumask_t			*user_cpus_ptr;
+ 	cpumask_t			cpus_mask;
++#ifndef CONFIG_SCHED_ALT
+ 	void				*migration_pending;
++#endif
+ #ifdef CONFIG_SMP
+ 	unsigned short			migration_disabled;
+ #endif
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned short			migration_flags;
++#endif
+ 
+ #ifdef CONFIG_PREEMPT_RCU
+ 	int				rcu_read_lock_nesting;
+@@ -949,8 +972,10 @@ struct task_struct {
+ 
+ 	struct list_head		tasks;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 	struct plist_node		pushable_tasks;
+ 	struct rb_node			pushable_dl_tasks;
++#endif
+ #endif
+ 
+ 	struct mm_struct		*mm;
+@@ -1663,6 +1688,15 @@ struct task_struct {
+ 	 */
+ };
+ 
++#ifdef CONFIG_SCHED_ALT
++#define tsk_seruntime(t)		((t)->sched_time)
++/* replace the uncertain rt_timeout with 0UL */
++#define tsk_rttimeout(t)		(0UL)
++#else /* CFS */
++#define tsk_seruntime(t)	((t)->se.sum_exec_runtime)
++#define tsk_rttimeout(t)	((t)->rt.timeout)
++#endif /* !CONFIG_SCHED_ALT */
++
+ #define TASK_REPORT_IDLE	(TASK_REPORT + 1)
+ #define TASK_REPORT_MAX		(TASK_REPORT_IDLE << 1)
+ 
+@@ -2203,7 +2237,11 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
+ 
+ static inline bool task_is_runnable(struct task_struct *p)
+ {
++#ifdef CONFIG_SCHED_ALT
++	return p->on_rq;
++#else
+ 	return p->on_rq && !p->se.sched_delayed;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ 
+ extern bool sched_task_on_rq(struct task_struct *p);
+diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
+index f9aabbc9d22e..1f9109a84286 100644
+--- a/include/linux/sched/deadline.h
++++ b/include/linux/sched/deadline.h
+@@ -2,6 +2,25 @@
+ #ifndef _LINUX_SCHED_DEADLINE_H
+ #define _LINUX_SCHED_DEADLINE_H
+ 
++#ifdef CONFIG_SCHED_ALT
++
++static inline int dl_task(struct task_struct *p)
++{
++	return 0;
++}
++
++#ifdef CONFIG_SCHED_BMQ
++#define __tsk_deadline(p)	(0UL)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define __tsk_deadline(p)	((((u64) ((p)->prio))<<56) | (p)->deadline)
++#endif
++
++#else
++
++#define __tsk_deadline(p)	((p)->dl.deadline)
++
+ /*
+  * SCHED_DEADLINE tasks has negative priorities, reflecting
+  * the fact that any of them has higher prio than RT and
+@@ -23,6 +42,7 @@ static inline bool dl_task(struct task_struct *p)
+ {
+ 	return dl_prio(p->prio);
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ static inline bool dl_time_before(u64 a, u64 b)
+ {
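/*
 * Illustration (not from the patch): the PDS __tsk_deadline() key above
 * packs the task's prio into the top 8 bits, assuming deadlines stay
 * below 2^56, so a single u64 comparison orders waiters by priority
 * first and deadline second -- exactly what the CONFIG_SCHED_PDS branch
 * of rt_waiter_node_less() relies on later in this patch.
 */
static inline u64 sketch_pds_key(u64 prio, u64 deadline)
{
	return (prio << 56) | deadline;	/* deadline assumed < 1ULL << 56 */
}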
+diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
+index 6ab43b4f72f9..ef1cff556c5e 100644
+--- a/include/linux/sched/prio.h
++++ b/include/linux/sched/prio.h
+@@ -19,6 +19,28 @@
+ #define MAX_PRIO		(MAX_RT_PRIO + NICE_WIDTH)
+ #define DEFAULT_PRIO		(MAX_RT_PRIO + NICE_WIDTH / 2)
+ 
++#ifdef CONFIG_SCHED_ALT
++
++/* Undefine MAX_PRIO and DEFAULT_PRIO */
++#undef MAX_PRIO
++#undef DEFAULT_PRIO
++
++/* +/- priority levels from the base priority */
++#ifdef CONFIG_SCHED_BMQ
++#define MAX_PRIORITY_ADJ	(12)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define MAX_PRIORITY_ADJ	(0)
++#endif
++
++#define MIN_NORMAL_PRIO		(128)
++#define NORMAL_PRIO_NUM		(64)
++#define MAX_PRIO		(MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
++#define DEFAULT_PRIO		(MAX_PRIO - MAX_PRIORITY_ADJ - NICE_WIDTH / 2)
++
++#endif /* CONFIG_SCHED_ALT */
++
+ /*
+  * Convert user-nice values [ -20 ... 0 ... 19 ]
+  * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
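/*
 * Worked arithmetic (not from the patch): the resulting BMQ layout,
 * derived from the definitions above with NICE_WIDTH = 40:
 *
 *   MAX_PRIO     = MIN_NORMAL_PRIO + NORMAL_PRIO_NUM = 128 + 64 = 192
 *   DEFAULT_PRIO = 192 - MAX_PRIORITY_ADJ - 40/2 = 192 - 12 - 20 = 160
 *
 * so a nice-0, unboosted task sits at 160 and every normal task stays
 * within [128, 192).
 */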
+diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
+index 4e3338103654..6dfef878fe3b 100644
+--- a/include/linux/sched/rt.h
++++ b/include/linux/sched/rt.h
+@@ -45,8 +45,10 @@ static inline bool rt_or_dl_task_policy(struct task_struct *tsk)
+ 
+ 	if (policy == SCHED_FIFO || policy == SCHED_RR)
+ 		return true;
++#ifndef CONFIG_SCHED_ALT
+ 	if (policy == SCHED_DEADLINE)
+ 		return true;
++#endif
+ 	return false;
+ }
+ 
+diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
+index 7b4301b7235f..06d8ffb190af 100644
+--- a/include/linux/sched/topology.h
++++ b/include/linux/sched/topology.h
+@@ -225,7 +225,8 @@ static inline bool cpus_share_resources(int this_cpu, int that_cpu)
+ 
+ #endif	/* !CONFIG_SMP */
+ 
+-#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
++#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
++	!defined(CONFIG_SCHED_ALT)
+ extern void rebuild_sched_domains_energy(void);
+ #else
+ static inline void rebuild_sched_domains_energy(void)
+diff --git a/init/Kconfig b/init/Kconfig
+index bf3a920064be..ec8a8c526ab3 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -670,6 +670,7 @@ config TASK_IO_ACCOUNTING
+ 
+ config PSI
+ 	bool "Pressure stall information tracking"
++	depends on !SCHED_ALT
+ 	select KERNFS
+ 	help
+ 	  Collect metrics that indicate how overcommitted the CPU, memory,
+@@ -881,6 +882,35 @@ config UCLAMP_BUCKETS_COUNT
+ 
+ 	  If in doubt, use the default value.
+ 
++menuconfig SCHED_ALT
++	bool "Alternative CPU Schedulers"
++	default y
++	help
++	  This feature enables the alternative CPU schedulers.
++
++if SCHED_ALT
++
++choice
++	prompt "Alternative CPU Scheduler"
++	default SCHED_BMQ
++
++config SCHED_BMQ
++	bool "BMQ CPU scheduler"
++	help
++	  The BitMap Queue CPU scheduler for excellent interactivity and
++	  responsiveness on the desktop and solid scalability on normal
++	  hardware and commodity servers.
++
++config SCHED_PDS
++	bool "PDS CPU scheduler"
++	help
++	  The Priority and Deadline based Skip list multiple queue CPU
++	  Scheduler.
++
++endchoice
++
++endif
++
+ endmenu
+ 
+ #
+@@ -946,6 +976,7 @@ config NUMA_BALANCING
+ 	depends on ARCH_SUPPORTS_NUMA_BALANCING
+ 	depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
+ 	depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
++	depends on !SCHED_ALT
+ 	help
+ 	  This option adds support for automatic NUMA aware memory/task placement.
+ 	  The mechanism is quite primitive and is based on migrating memory when
+@@ -1364,6 +1395,7 @@ config CHECKPOINT_RESTORE
+ 
+ config SCHED_AUTOGROUP
+ 	bool "Automatic process group scheduling"
++	depends on !SCHED_ALT
+ 	select CGROUPS
+ 	select CGROUP_SCHED
+ 	select FAIR_GROUP_SCHED
+diff --git a/init/init_task.c b/init/init_task.c
+index e557f622bd90..99e59c2082e0 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -72,9 +72,16 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ 	.stack		= init_stack,
+ 	.usage		= REFCOUNT_INIT(2),
+ 	.flags		= PF_KTHREAD,
++#ifdef CONFIG_SCHED_ALT
++	.on_cpu		= 1,
++	.prio		= DEFAULT_PRIO,
++	.static_prio	= DEFAULT_PRIO,
++	.normal_prio	= DEFAULT_PRIO,
++#else
+ 	.prio		= MAX_PRIO - 20,
+ 	.static_prio	= MAX_PRIO - 20,
+ 	.normal_prio	= MAX_PRIO - 20,
++#endif
+ 	.policy		= SCHED_NORMAL,
+ 	.cpus_ptr	= &init_task.cpus_mask,
+ 	.user_cpus_ptr	= NULL,
+@@ -87,6 +94,16 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ 	.restart_block	= {
+ 		.fn = do_no_restart_syscall,
+ 	},
++#ifdef CONFIG_SCHED_ALT
++	.sq_node	= LIST_HEAD_INIT(init_task.sq_node),
++#ifdef CONFIG_SCHED_BMQ
++	.boost_prio	= 0,
++#endif
++#ifdef CONFIG_SCHED_PDS
++	.deadline	= 0,
++#endif
++	.time_slice	= HZ,
++#else
+ 	.se		= {
+ 		.group_node 	= LIST_HEAD_INIT(init_task.se.group_node),
+ 	},
+@@ -94,10 +111,13 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ 		.run_list	= LIST_HEAD_INIT(init_task.rt.run_list),
+ 		.time_slice	= RR_TIMESLICE,
+ 	},
++#endif
+ 	.tasks		= LIST_HEAD_INIT(init_task.tasks),
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 	.pushable_tasks	= PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
+ #endif
++#endif
+ #ifdef CONFIG_CGROUP_SCHED
+ 	.sched_task_group = &root_task_group,
+ #endif
+diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
+index 54ea59ff8fbe..a6d3560cef75 100644
+--- a/kernel/Kconfig.preempt
++++ b/kernel/Kconfig.preempt
+@@ -134,7 +134,7 @@ config PREEMPT_DYNAMIC
+ 
+ config SCHED_CORE
+ 	bool "Core Scheduling for SMT"
+-	depends on SCHED_SMT
++	depends on SCHED_SMT && !SCHED_ALT
+ 	help
+ 	  This option permits Core Scheduling, a means of coordinated task
+ 	  selection across SMT siblings. When enabled -- see
+@@ -152,7 +152,7 @@ config SCHED_CORE
+ 
+ config SCHED_CLASS_EXT
+ 	bool "Extensible Scheduling Class"
+-	depends on BPF_SYSCALL && BPF_JIT && DEBUG_INFO_BTF
++	depends on BPF_SYSCALL && BPF_JIT && DEBUG_INFO_BTF && !SCHED_ALT
+ 	select STACKTRACE if STACKTRACE_SUPPORT
+ 	help
+ 	  This option enables a new scheduler class sched_ext (SCX), which
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 24b70ea3e6ce..0fe2fdb69420 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -648,7 +648,7 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
+ 	return ret;
+ }
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * Helper routine for generate_sched_domains().
+  * Do cpusets a, b have overlapping effective cpus_allowed masks?
+@@ -1061,7 +1061,7 @@ void rebuild_sched_domains_locked(void)
+ 	/* Have scheduler rebuild the domains */
+ 	partition_sched_domains(ndoms, doms, attr);
+ }
+-#else /* !CONFIG_SMP */
++#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
+ void rebuild_sched_domains_locked(void)
+ {
+ }
+@@ -3014,12 +3014,15 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ 				goto out_unlock;
+ 		}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 		if (dl_task(task)) {
+ 			cs->nr_migrate_dl_tasks++;
+ 			cs->sum_migrate_dl_bw += task->dl.dl_bw;
+ 		}
++#endif
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (!cs->nr_migrate_dl_tasks)
+ 		goto out_success;
+ 
+@@ -3040,6 +3043,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ 	}
+ 
+ out_success:
++#endif
+ 	/*
+ 	 * Mark attach is in progress.  This makes validate_change() fail
+ 	 * changes which zero cpus/mems_allowed.
+@@ -3061,12 +3065,14 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
+ 	mutex_lock(&cpuset_mutex);
+ 	dec_attach_in_progress_locked(cs);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (cs->nr_migrate_dl_tasks) {
+ 		int cpu = cpumask_any(cs->effective_cpus);
+ 
+ 		dl_bw_free(cpu, cs->sum_migrate_dl_bw);
+ 		reset_migrate_dl_data(cs);
+ 	}
++#endif
+ 
+ 	mutex_unlock(&cpuset_mutex);
+ }
+diff --git a/kernel/delayacct.c b/kernel/delayacct.c
+index eb63a021ac04..950c053dfecb 100644
+--- a/kernel/delayacct.c
++++ b/kernel/delayacct.c
+@@ -155,7 +155,7 @@ int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
+ 	 */
+ 	t1 = tsk->sched_info.pcount;
+ 	t2 = tsk->sched_info.run_delay;
+-	t3 = tsk->se.sum_exec_runtime;
++	t3 = tsk_seruntime(tsk);
+ 
+ 	d->cpu_count += t1;
+ 
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 1b51dc099f1e..a9edd23ce8d8 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -201,7 +201,7 @@ static void __exit_signal(struct release_task_post *post, struct task_struct *ts
+ 	sig->inblock += task_io_get_inblock(tsk);
+ 	sig->oublock += task_io_get_oublock(tsk);
+ 	task_io_accounting_add(&sig->ioac, &tsk->ioac);
+-	sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
++	sig->sum_sched_runtime += tsk_seruntime(tsk);
+ 	sig->nr_threads--;
+ 	__unhash_process(post, tsk, group_dead);
+ 	write_sequnlock(&sig->stats_lock);
+@@ -284,8 +284,8 @@ void release_task(struct task_struct *p)
+ 	write_unlock_irq(&tasklist_lock);
+ 	proc_flush_pid(thread_pid);
+ 	put_pid(thread_pid);
+-	add_device_randomness(&p->se.sum_exec_runtime,
+-			      sizeof(p->se.sum_exec_runtime));
++	add_device_randomness((const void*) &tsk_seruntime(p),
++			      sizeof(unsigned long long));
+ 	free_pids(post.pids);
+ 	release_thread(p);
+ 	/*
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index c80902eacd79..b1d388145968 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -366,7 +366,7 @@ waiter_update_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+ 	lockdep_assert(RB_EMPTY_NODE(&waiter->tree.entry));
+ 
+ 	waiter->tree.prio = __waiter_prio(task);
+-	waiter->tree.deadline = task->dl.deadline;
++	waiter->tree.deadline = __tsk_deadline(task);
+ }
+ 
+ /*
+@@ -387,16 +387,20 @@ waiter_clone_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+  * Only use with rt_waiter_node_{less,equal}()
+  */
+ #define task_to_waiter_node(p)	\
+-	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline }
++	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = __tsk_deadline(p) }
+ #define task_to_waiter(p)	\
+ 	&(struct rt_mutex_waiter){ .tree = *task_to_waiter_node(p) }
+ 
+ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ 					       struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline < right->deadline);
++#else
+ 	if (left->prio < right->prio)
+ 		return 1;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -405,16 +409,22 @@ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return dl_time_before(left->deadline, right->deadline);
++#endif
+ 
+ 	return 0;
++#endif
+ }
+ 
+ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ 						 struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline == right->deadline);
++#else
+ 	if (left->prio != right->prio)
+ 		return 0;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -423,8 +433,10 @@ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return left->deadline == right->deadline;
++#endif
+ 
+ 	return 1;
++#endif
+ }
+ 
+ static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
+diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
+index 37f025a096c9..45ae7a6fd9ac 100644
+--- a/kernel/locking/ww_mutex.h
++++ b/kernel/locking/ww_mutex.h
+@@ -247,6 +247,7 @@ __ww_ctx_less(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
+ 
+ 		/* equal static prio */
+ 
++#ifndef	CONFIG_SCHED_ALT
+ 		if (dl_prio(a_prio)) {
+ 			if (dl_time_before(b->task->dl.deadline,
+ 					   a->task->dl.deadline))
+@@ -256,6 +257,7 @@ __ww_ctx_less(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
+ 					   b->task->dl.deadline))
+ 				return false;
+ 		}
++#endif
+ 
+ 		/* equal prio */
+ 	}
+diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
+index 8ae86371ddcd..a972ef1e31a7 100644
+--- a/kernel/sched/Makefile
++++ b/kernel/sched/Makefile
+@@ -33,7 +33,12 @@ endif
+ # These compilation units have roughly the same size and complexity - so their
+ # build parallelizes well and finishes roughly at once:
+ #
++ifdef CONFIG_SCHED_ALT
++obj-y += alt_core.o
++obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
++else
+ obj-y += core.o
+ obj-y += fair.o
++endif
+ obj-y += build_policy.o
+ obj-y += build_utility.o
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+new file mode 100644
+index 000000000000..0afd3670e9bb
+--- /dev/null
++++ b/kernel/sched/alt_core.c
+@@ -0,0 +1,7707 @@
++/*
++ *  kernel/sched/alt_core.c
++ *
++ *  Core alternative kernel scheduler code and related syscalls
++ *
++ *  Copyright (C) 1991-2002  Linus Torvalds
++ *
++ *  2009-08-13	Brainfuck deadline scheduling policy by Con Kolivas deletes
++ *		a whole lot of those previous things.
++ *  2017-09-06	Priority and Deadline based Skip list multiple queue kernel
++ *		scheduler by Alfred Chen.
++ *  2019-02-20	BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
++ */
++#include <linux/sched/clock.h>
++#include <linux/sched/cputime.h>
++#include <linux/sched/debug.h>
++#include <linux/sched/hotplug.h>
++#include <linux/sched/init.h>
++#include <linux/sched/isolation.h>
++#include <linux/sched/loadavg.h>
++#include <linux/sched/mm.h>
++#include <linux/sched/nohz.h>
++#include <linux/sched/stat.h>
++#include <linux/sched/wake_q.h>
++
++#include <linux/blkdev.h>
++#include <linux/context_tracking.h>
++#include <linux/cpuset.h>
++#include <linux/delayacct.h>
++#include <linux/init_task.h>
++#include <linux/kcov.h>
++#include <linux/kprobes.h>
++#include <linux/nmi.h>
++#include <linux/rseq.h>
++#include <linux/scs.h>
++
++#include <uapi/linux/sched/types.h>
++
++#include <asm/irq_regs.h>
++#include <asm/switch_to.h>
++
++#define CREATE_TRACE_POINTS
++#include <trace/events/sched.h>
++#include <trace/events/ipi.h>
++#undef CREATE_TRACE_POINTS
++
++#include "sched.h"
++#include "smp.h"
++
++#include "pelt.h"
++
++#include "../../io_uring/io-wq.h"
++#include "../smpboot.h"
++
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpu);
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
++
++/*
++ * Export tracepoints that act as a bare tracehook (ie: have no trace event
++ * associated with them) to allow external modules to probe them.
++ */
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
++
++#define sched_feat(x)	(1)
++/*
++ * Print a warning if need_resched is set for the given duration (if
++ * LATENCY_WARN is enabled).
++ *
++ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
++ * per boot.
++ */
++__read_mostly int sysctl_resched_latency_warn_ms = 100;
++__read_mostly int sysctl_resched_latency_warn_once = 1;
++
++#define ALT_SCHED_VERSION "v6.15-r0"
++
++#define STOP_PRIO		(MAX_RT_PRIO - 1)
++
++/*
++ * Time slice
++ * (default: 4 msec, units: nanoseconds)
++ */
++unsigned int sysctl_sched_base_slice __read_mostly	= (4 << 20);
++
++#include "alt_core.h"
++#include "alt_topology.h"
++
++/* Reschedule if less than this much time is left (in ns, ~100 μs) */
++#define RESCHED_NS		(100 << 10)
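/*
 * Worked values (not from the patch) for the two constants above:
 *   sysctl_sched_base_slice = 4 << 20   = 4194304 ns ~= 4.19 ms (time slice)
 *   RESCHED_NS              = 100 << 10 =  102400 ns ~= 102 us
 * i.e. a task whose remaining slice falls under roughly 0.1 ms is
 * treated as having exhausted it and is rescheduled.
 */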
++
++/**
++ * sched_yield_type - Type of yield that sched_yield() will perform.
++ * 0: No yield.
++ * 1: Requeue task. (default)
++ */
++int sched_yield_type __read_mostly = 1;
++
++#ifdef CONFIG_SMP
++cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DEFINE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_topo_end_mask);
++
++#ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
++EXPORT_SYMBOL_GPL(sched_smt_present);
++
++cpumask_t sched_smt_mask ____cacheline_aligned_in_smp;
++#endif
++
++/*
++ * Keep a unique ID per domain (we use the first CPUs number in the cpumask of
++ * the domain), this allows us to quickly tell if two cpus are in the same cache
++ * domain, see cpus_share_cache().
++ */
++DEFINE_PER_CPU(int, sd_llc_id);
++#endif /* CONFIG_SMP */
++
++DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++static cpumask_t sched_preempt_mask[SCHED_QUEUE_BITS + 2] ____cacheline_aligned_in_smp;
++
++cpumask_t *const sched_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS - 1];
++cpumask_t *const sched_sg_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
++cpumask_t *const sched_pcore_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
++cpumask_t *const sched_ecore_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS + 1];
++
++/* task function */
++static inline const struct cpumask *task_user_cpus(struct task_struct *p)
++{
++	if (!p->user_cpus_ptr)
++		return cpu_possible_mask; /* &init_task.cpus_mask */
++	return p->user_cpus_ptr;
++}
++
++/* sched_queue related functions */
++static inline void sched_queue_init(struct sched_queue *q)
++{
++	int i;
++
++	bitmap_zero(q->bitmap, SCHED_QUEUE_BITS);
++	for(i = 0; i < SCHED_LEVELS; i++)
++		INIT_LIST_HEAD(&q->heads[i]);
++}
++
++/*
++ * Init idle task and put into queue structure of rq
++ * IMPORTANT: may be called multiple times for a single cpu
++ */
++static inline void sched_queue_init_idle(struct sched_queue *q,
++					 struct task_struct *idle)
++{
++	INIT_LIST_HEAD(&q->heads[IDLE_TASK_SCHED_PRIO]);
++	list_add_tail(&idle->sq_node, &q->heads[IDLE_TASK_SCHED_PRIO]);
++	idle->on_rq = TASK_ON_RQ_QUEUED;
++}
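/*
 * Sketch (not the patch's code): the bitmap plus the per-priority list
 * heads above make task selection O(1): find the first set bit, then
 * take the head of that list.  Because the idle task is always queued
 * at IDLE_TASK_SCHED_PRIO, some bit is always set.  (BMQ shown; PDS
 * maps the priority through an index first.)
 */
static inline struct task_struct *sketch_pick_next(struct sched_queue *q)
{
	int prio = find_first_bit(q->bitmap, SCHED_QUEUE_BITS);

	return list_first_entry(&q->heads[prio], struct task_struct, sq_node);
}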
++
++#define CLEAR_CACHED_PREEMPT_MASK(pr, low, high, cpu)		\
++	if (low < pr && pr <= high)				\
++		cpumask_clear_cpu(cpu, sched_preempt_mask + pr);
++
++#define SET_CACHED_PREEMPT_MASK(pr, low, high, cpu)		\
++	if (low < pr && pr <= high)				\
++		cpumask_set_cpu(cpu, sched_preempt_mask + pr);
++
++static atomic_t sched_prio_record = ATOMIC_INIT(0);
++
++/* water mark related functions */
++static inline void update_sched_preempt_mask(struct rq *rq)
++{
++	int prio = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++	int last_prio = rq->prio;
++	int cpu, pr;
++
++	if (prio == last_prio)
++		return;
++
++	rq->prio = prio;
++#ifdef CONFIG_SCHED_PDS
++	rq->prio_idx = sched_prio2idx(rq->prio, rq);
++#endif
++	cpu = cpu_of(rq);
++	pr = atomic_read(&sched_prio_record);
++
++	if (prio < last_prio) {
++		if (IDLE_TASK_SCHED_PRIO == last_prio) {
++			rq->clear_idle_mask_func(cpu, sched_idle_mask);
++			last_prio -= 2;
++		}
++		CLEAR_CACHED_PREEMPT_MASK(pr, prio, last_prio, cpu);
++
++		return;
++	}
++	/* last_prio < prio */
++	if (IDLE_TASK_SCHED_PRIO == prio) {
++		rq->set_idle_mask_func(cpu, sched_idle_mask);
++		prio -= 2;
++	}
++	SET_CACHED_PREEMPT_MASK(pr, last_prio, prio, cpu);
++}
++
++/* need a wrapper since we may need to trace from modules */
++EXPORT_TRACEPOINT_SYMBOL(sched_set_state_tp);
++
++/* Call via the helper macro trace_set_current_state. */
++void __trace_set_current_state(int state_value)
++{
++	trace_sched_set_state_tp(current, state_value);
++}
++EXPORT_SYMBOL(__trace_set_current_state);
++
++/*
++ * Serialization rules:
++ *
++ * Lock order:
++ *
++ *   p->pi_lock
++ *     rq->lock
++ *       hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
++ *
++ *  rq1->lock
++ *    rq2->lock  where: rq1 < rq2
++ *
++ * Regular state:
++ *
++ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
++ * local CPU's rq->lock, it optionally removes the task from the runqueue and
++ * always looks at the local rq data structures to find the most eligible task
++ * to run next.
++ *
++ * Task enqueue is also under rq->lock, possibly taken from another CPU.
++ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
++ * the local CPU to avoid bouncing the runqueue state around [ see
++ * ttwu_queue_wakelist() ]
++ *
++ * Task wakeup, specifically wakeups that involve migration, are horribly
++ * complicated to avoid having to take two rq->locks.
++ *
++ * Special state:
++ *
++ * System-calls and anything external will use task_rq_lock() which acquires
++ * both p->pi_lock and rq->lock. As a consequence the state they change is
++ * stable while holding either lock:
++ *
++ *  - sched_setaffinity()/
++ *    set_cpus_allowed_ptr():	p->cpus_ptr, p->nr_cpus_allowed
++ *  - set_user_nice():		p->se.load, p->*prio
++ *  - __sched_setscheduler():	p->sched_class, p->policy, p->*prio,
++ *				p->se.load, p->rt_priority,
++ *				p->dl.dl_{runtime, deadline, period, flags, bw, density}
++ *  - sched_setnuma():		p->numa_preferred_nid
++ *  - sched_move_task():        p->sched_task_group
++ *  - uclamp_update_active()	p->uclamp*
++ *
++ * p->state <- TASK_*:
++ *
++ *   is changed locklessly using set_current_state(), __set_current_state() or
++ *   set_special_state(), see their respective comments, or by
++ *   try_to_wake_up(). This latter uses p->pi_lock to serialize against
++ *   concurrent self.
++ *
++ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
++ *
++ *   is set by activate_task() and cleared by deactivate_task(), under
++ *   rq->lock. Non-zero indicates the task is runnable, the special
++ *   ON_RQ_MIGRATING state is used for migration without holding both
++ *   rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
++ *
++ *   Additionally it is possible to be ->on_rq but still be considered not
++ *   runnable when p->se.sched_delayed is true. These tasks are on the runqueue
++ *   but will be dequeued as soon as they get picked again. See the
++ *   task_is_runnable() helper.
++ *
++ * p->on_cpu <- { 0, 1 }:
++ *
++ *   is set by prepare_task() and cleared by finish_task() such that it will be
++ *   set before p is scheduled-in and cleared after p is scheduled-out, both
++ *   under rq->lock. Non-zero indicates the task is running on its CPU.
++ *
++ *   [ The astute reader will observe that it is possible for two tasks on one
++ *     CPU to have ->on_cpu = 1 at the same time. ]
++ *
++ * task_cpu(p): is changed by set_task_cpu(), the rules are:
++ *
++ *  - Don't call set_task_cpu() on a blocked task:
++ *
++ *    We don't care what CPU we're not running on, this simplifies hotplug,
++ *    the CPU assignment of blocked tasks isn't required to be valid.
++ *
++ *  - for try_to_wake_up(), called under p->pi_lock:
++ *
++ *    This allows try_to_wake_up() to only take one rq->lock, see its comment.
++ *
++ *  - for migration called under rq->lock:
++ *    [ see task_on_rq_migrating() in task_rq_lock() ]
++ *
++ *    o move_queued_task()
++ *    o detach_task()
++ *
++ *  - for migration called under double_rq_lock():
++ *
++ *    o __migrate_swap_task()
++ *    o push_rt_task() / pull_rt_task()
++ *    o push_dl_task() / pull_dl_task()
++ *    o dl_task_offline_migration()
++ *
++ */
++
++/*
++ * Context: p->pi_lock
++ */
++static inline struct rq *
++task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock, unsigned long *flags)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock_irqsave(&rq->lock, *flags);
++			if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&rq->lock, *flags);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			raw_spin_lock_irqsave(&p->pi_lock, *flags);
++			if (likely(!p->on_cpu && !p->on_rq && rq == task_rq(p))) {
++				*plock = &p->pi_lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
++		}
++	}
++}
++
++static inline void
++task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock, unsigned long *flags)
++{
++	raw_spin_unlock_irqrestore(lock, *flags);
++}
++
++/*
++ * __task_rq_lock - lock the rq @p resides on.
++ */
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	lockdep_assert_held(&p->pi_lock);
++
++	for (;;) {
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
++			return rq;
++		raw_spin_unlock(&rq->lock);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++/*
++ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
++ */
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	for (;;) {
++		raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		/*
++		 *	move_queued_task()		task_rq_lock()
++		 *
++		 *	ACQUIRE (rq->lock)
++		 *	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
++		 *	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
++		 *	[S] ->cpu = new_cpu		[L] task_rq()
++		 *					[L] ->on_rq
++		 *	RELEASE (rq->lock)
++		 *
++		 * If we observe the old CPU in task_rq_lock(), the acquire of
++		 * the old rq->lock will fully serialize against the stores.
++		 *
++		 * If we observe the new CPU in task_rq_lock(), the address
++		 * dependency headed by '[L] rq = task_rq()' and the acquire
++		 * will pair with the WMB to ensure we then also see migrating.
++		 */
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
++			return rq;
++		}
++		raw_spin_unlock(&rq->lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++static inline void rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irqsave(&rq->lock, rf->flags);
++}
++
++static inline void rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
++}
++
++DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
++		    rq_lock_irqsave(_T->lock, &_T->rf),
++		    rq_unlock_irqrestore(_T->lock, &_T->rf),
++		    struct rq_flags rf)
++
++void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
++{
++	raw_spinlock_t *lock;
++
++	/* Matches synchronize_rcu() in __sched_core_enable() */
++	preempt_disable();
++
++	for (;;) {
++		lock = __rq_lockp(rq);
++		raw_spin_lock_nested(lock, subclass);
++		if (likely(lock == __rq_lockp(rq))) {
++			/* preempt_count *MUST* be > 1 */
++			preempt_enable_no_resched();
++			return;
++		}
++		raw_spin_unlock(lock);
++	}
++}
++
++void raw_spin_rq_unlock(struct rq *rq)
++{
++	raw_spin_unlock(rq_lockp(rq));
++}
++
++/*
++ * RQ-clock updating methods:
++ */
++
++static void update_rq_clock_task(struct rq *rq, s64 delta)
++{
++/*
++ * In theory, the compiler should just see 0 here, and optimize out the call
++ * to sched_rt_avg_update. But I don't trust it...
++ */
++	s64 __maybe_unused steal = 0, irq_delta = 0;
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	if (irqtime_enabled()) {
++		irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
++
++		/*
++		 * Since irq_time is only updated on {soft,}irq_exit, we might run into
++		 * this case when a previous update_rq_clock() happened inside a
++		 * {soft,}IRQ region.
++		 *
++		 * When this happens, we stop ->clock_task and only update the
++		 * prev_irq_time stamp to account for the part that fit, so that a next
++		 * update will consume the rest. This ensures ->clock_task is
++		 * monotonic.
++		 *
++		 * It does however cause some slight miss-attribution of {soft,}IRQ
++		 * time, a more accurate solution would be to update the irq_time using
++		 * the current rq->clock timestamp, except that would require using
++		 * atomic ops.
++		 */
++		if (irq_delta > delta)
++			irq_delta = delta;
++
++		rq->prev_irq_time += irq_delta;
++		delta -= irq_delta;
++		delayacct_irq(rq->curr, irq_delta);
++	}
++#endif
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	if (static_key_false((&paravirt_steal_rq_enabled))) {
++		u64 prev_steal;
++
++		steal = prev_steal = paravirt_steal_clock(cpu_of(rq));
++		steal -= rq->prev_steal_time_rq;
++
++		if (unlikely(steal > delta))
++			steal = delta;
++
++		rq->prev_steal_time_rq = prev_steal;
++		delta -= steal;
++	}
++#endif
++
++	rq->clock_task += delta;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	if ((irq_delta + steal))
++		update_irq_load_avg(rq, irq_delta + steal);
++#endif
++}
++
++static inline void update_rq_clock(struct rq *rq)
++{
++	s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
++
++	if (unlikely(delta <= 0))
++		return;
++	rq->clock += delta;
++	sched_update_rq_clock(rq);
++	update_rq_clock_task(rq, delta);
++}
++
++/*
++ * RQ Load update routine
++ */
++#define RQ_LOAD_HISTORY_BITS		(sizeof(s32) * 8ULL)
++#define RQ_UTIL_SHIFT			(8)
++#define RQ_LOAD_HISTORY_TO_UTIL(l)	(((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
++
++#define LOAD_BLOCK(t)		((t) >> 17)
++#define LOAD_HALF_BLOCK(t)	((t) >> 16)
++#define BLOCK_MASK(t)		((t) & ((0x01 << 18) - 1))
++#define LOAD_BLOCK_BIT(b)	(1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
++#define CURRENT_LOAD_BIT	LOAD_BLOCK_BIT(0)
++
++static inline void rq_load_update(struct rq *rq)
++{
++	u64 time = rq->clock;
++	u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp), RQ_LOAD_HISTORY_BITS - 1);
++	u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
++	u64 curr = !!rq->nr_running;
++
++	if (delta) {
++		rq->load_history = rq->load_history >> delta;
++
++		if (delta < RQ_UTIL_SHIFT) {
++			rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
++			if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
++				rq->load_history ^= LOAD_BLOCK_BIT(delta);
++		}
++
++		rq->load_block = BLOCK_MASK(time) * prev;
++	} else {
++		rq->load_block += (time - rq->load_stamp) * prev;
++	}
++	if (prev ^ curr)
++		rq->load_history ^= CURRENT_LOAD_BIT;
++	rq->load_stamp = time;
++}
++
++unsigned long rq_load_util(struct rq *rq, unsigned long max)
++{
++	return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
++}
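/*
 * Rough magnitudes (not from the patch) for the scheme above, assuming
 * rq->clock counts nanoseconds: one history bit covers a 2^17 ns
 * (~131 us) block, so the 32-bit history spans about 4.2 ms, and
 * RQ_LOAD_HISTORY_TO_UTIL() folds 8 recent busy/idle history bits
 * (bits 23..30) into a 0..255 factor that rq_load_util() scales into
 * the 0..max utilization range.
 */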
++
++#ifdef CONFIG_SMP
++unsigned long sched_cpu_util(int cpu)
++{
++	return rq_load_util(cpu_rq(cpu), arch_scale_cpu_capacity(cpu));
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_CPU_FREQ
++/**
++ * cpufreq_update_util - Take a note about CPU utilization changes.
++ * @rq: Runqueue to carry out the update for.
++ * @flags: Update reason flags.
++ *
++ * This function is called by the scheduler on the CPU whose utilization is
++ * being updated.
++ *
++ * It can only be called from RCU-sched read-side critical sections.
++ *
++ * The way cpufreq is currently arranged requires it to evaluate the CPU
++ * performance state (frequency/voltage) on a regular basis to prevent it from
++ * being stuck in a completely inadequate performance level for too long.
++ * That is not guaranteed to happen if the updates are only triggered from CFS
++ * and DL, though, because they may not be coming in if only RT tasks are
++ * active all the time (or there are RT tasks only).
++ *
++ * As a workaround for that issue, this function is called periodically by the
++ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
++ * but that really is a band-aid.  Going forward it should be replaced with
++ * solutions targeted more specifically at RT tasks.
++ */
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++	struct update_util_data *data;
++
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data, cpu_of(rq)));
++	if (data)
++		data->func(data, rq_clock(rq), flags);
++}
++#else
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++}
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++/*
++ * Tick may be needed by tasks in the runqueue depending on their policy and
++ * requirements. If tick is needed, lets send the target an IPI to kick it out
++ * of nohz mode if necessary.
++ */
++static inline void sched_update_tick_dependency(struct rq *rq)
++{
++	int cpu = cpu_of(rq);
++
++	if (!tick_nohz_full_cpu(cpu))
++		return;
++
++	if (rq->nr_running < 2)
++		tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
++	else
++		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
++}
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_update_tick_dependency(struct rq *rq) { }
++#endif
++
++bool sched_task_on_rq(struct task_struct *p)
++{
++	return task_on_rq_queued(p);
++}
++
++unsigned long get_wchan(struct task_struct *p)
++{
++	unsigned long ip = 0;
++	unsigned int state;
++
++	if (!p || p == current)
++		return 0;
++
++	/* Only get wchan if task is blocked and we can keep it that way. */
++	raw_spin_lock_irq(&p->pi_lock);
++	state = READ_ONCE(p->__state);
++	smp_rmb(); /* see try_to_wake_up() */
++	if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
++		ip = __get_wchan(p);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	return ip;
++}
++
++/*
++ * Add/Remove/Requeue task to/from the runqueue routines
++ * Context: rq->lock
++ */
++#define __SCHED_DEQUEUE_TASK(p, rq, flags, func)					\
++	sched_info_dequeue(rq, p);							\
++											\
++	__list_del_entry(&p->sq_node);							\
++	if (p->sq_node.prev == p->sq_node.next) {					\
++		clear_bit(sched_idx2prio(p->sq_node.next - &rq->queue.heads[0], rq),	\
++			  rq->queue.bitmap);						\
++		func;									\
++	}
++
++#define __SCHED_ENQUEUE_TASK(p, rq, flags, func)					\
++	sched_info_enqueue(rq, p);							\
++	{										\
++	int idx, prio;									\
++	TASK_SCHED_PRIO_IDX(p, rq, idx, prio);						\
++	list_add_tail(&p->sq_node, &rq->queue.heads[idx]);				\
++	if (list_is_first(&p->sq_node, &rq->queue.heads[idx])) {			\
++		set_bit(prio, rq->queue.bitmap);					\
++		func;									\
++	}										\
++	}
++
++static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: dequeue task reside on cpu%d from cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++#endif
++
++	__SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++	--rq->nr_running;
++#ifdef CONFIG_SMP
++	if (1 == rq->nr_running)
++		cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: enqueue(%d) %px %d\n", cpu_of(rq), p, p->prio);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: enqueue task reside on cpu%d to cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++#endif
++
++	__SCHED_ENQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++	++rq->nr_running;
++#ifdef CONFIG_SMP
++	if (2 == rq->nr_running)
++		cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++void requeue_task(struct task_struct *p, struct rq *rq)
++{
++	struct list_head *node = &p->sq_node;
++	int deq_idx, idx, prio;
++
++	TASK_SCHED_PRIO_IDX(p, rq, idx, prio);
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++	/*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task reside on cpu%d\n",
++		  cpu_of(rq), task_cpu(p));
++#endif
++	if (list_is_last(node, &rq->queue.heads[idx]))
++		return;
++
++	__list_del_entry(node);
++	if (node->prev == node->next && (deq_idx = node->next - &rq->queue.heads[0]) != idx)
++		clear_bit(sched_idx2prio(deq_idx, rq), rq->queue.bitmap);
++
++	list_add_tail(node, &rq->queue.heads[idx]);
++	if (list_is_first(node, &rq->queue.heads[idx]))
++		set_bit(prio, rq->queue.bitmap);
++	update_sched_preempt_mask(rq);
++}
++
++/*
++ * try_cmpxchg based fetch_or() macro so it works for different integer types:
++ */
++#define fetch_or(ptr, mask)						\
++	({								\
++		typeof(ptr) _ptr = (ptr);				\
++		typeof(mask) _mask = (mask);				\
++		typeof(*_ptr) _val = *_ptr;				\
++									\
++		do {							\
++		} while (!try_cmpxchg(_ptr, &_val, _val | _mask));	\
++	_val;								\
++})
++
++#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
++/*
++ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
++ * this avoids any races wrt polling state changes and thereby avoids
++ * spurious IPIs.
++ */
++static inline bool set_nr_and_not_polling(struct thread_info *ti, int tif)
++{
++	return !(fetch_or(&ti->flags, 1 << tif) & _TIF_POLLING_NRFLAG);
++}
++
++/*
++ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
++ *
++ * If this returns true, then the idle task promises to call
++ * sched_ttwu_pending() and reschedule soon.
++ */
++static bool set_nr_if_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	typeof(ti->flags) val = READ_ONCE(ti->flags);
++
++	do {
++		if (!(val & _TIF_POLLING_NRFLAG))
++			return false;
++		if (val & _TIF_NEED_RESCHED)
++			return true;
++	} while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
++
++	return true;
++}
++
++#else
++static inline bool set_nr_and_not_polling(struct thread_info *ti, int tif)
++{
++	set_ti_thread_flag(ti, tif);
++	return true;
++}
++
++#ifdef CONFIG_SMP
++static inline bool set_nr_if_polling(struct task_struct *p)
++{
++	return false;
++}
++#endif
++#endif
++
++static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	struct wake_q_node *node = &task->wake_q;
++
++	/*
++	 * Atomically grab the task, if ->wake_q is !nil already it means
++	 * it's already queued (either by us or someone else) and will get the
++	 * wakeup due to that.
++	 *
++	 * In order to ensure that a pending wakeup will observe our pending
++	 * state, even in the failed case, an explicit smp_mb() must be used.
++	 */
++	smp_mb__before_atomic();
++	if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
++		return false;
++
++	/*
++	 * The head is context local, there can be no concurrency.
++	 */
++	*head->lastp = node;
++	head->lastp = &node->next;
++	return true;
++}
++
++/**
++ * wake_q_add() - queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ */
++void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	if (__wake_q_add(head, task))
++		get_task_struct(task);
++}
++
++/**
++ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ *
++ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
++ * that already hold reference to @task can call the 'safe' version and trust
++ * wake_q to do the right thing depending whether or not the @task is already
++ * queued for wakeup.
++ */
++void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
++{
++	if (!__wake_q_add(head, task))
++		put_task_struct(task);
++}
++
++void wake_up_q(struct wake_q_head *head)
++{
++	struct wake_q_node *node = head->first;
++
++	while (node != WAKE_Q_TAIL) {
++		struct task_struct *task;
++
++		task = container_of(node, struct task_struct, wake_q);
++		node = node->next;
++		/* pairs with cmpxchg_relaxed() in __wake_q_add() */
++		WRITE_ONCE(task->wake_q.next, NULL);
++		/* Task can safely be re-inserted now. */
++
++		/*
++		 * wake_up_process() executes a full barrier, which pairs with
++		 * the queueing in wake_q_add() so as not to miss wakeups.
++		 */
++		wake_up_process(task);
++		put_task_struct(task);
++	}
++}
++
++/*
++ * resched_curr - mark rq's current task 'to be rescheduled now'.
++ *
++ * On UP this means the setting of the need_resched flag, on SMP it
++ * might also involve a cross-CPU call to trigger the scheduler on
++ * the target CPU.
++ */
++static inline void __resched_curr(struct rq *rq, int tif)
++{
++	struct task_struct *curr = rq->curr;
++	struct thread_info *cti = task_thread_info(curr);
++	int cpu;
++
++	lockdep_assert_held(&rq->lock);
++
++	/*
++	 * Always immediately preempt the idle task; no point in delaying doing
++	 * actual work.
++	 */
++	if (is_idle_task(curr) && tif == TIF_NEED_RESCHED_LAZY)
++		tif = TIF_NEED_RESCHED;
++
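++	/*
++	 * Nothing to do if the requested flag, or the stronger
++	 * TIF_NEED_RESCHED, is already set.
++	 */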
++	if (cti->flags & ((1 << tif) | _TIF_NEED_RESCHED))
++		return;
++
++	cpu = cpu_of(rq);
++	if (cpu == smp_processor_id()) {
++		set_ti_thread_flag(cti, tif);
++		if (tif == TIF_NEED_RESCHED)
++			set_preempt_need_resched();
++		return;
++	}
++
++	if (set_nr_and_not_polling(cti, tif)) {
++		if (tif == TIF_NEED_RESCHED)
++			smp_send_reschedule(cpu);
++	} else {
++		trace_sched_wake_idle_without_ipi(cpu);
++	}
++}
++
++static inline void resched_curr(struct rq *rq)
++{
++	__resched_curr(rq, TIF_NEED_RESCHED);
++}
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_preempt_lazy);
++static __always_inline bool dynamic_preempt_lazy(void)
++{
++	return static_branch_unlikely(&sk_dynamic_preempt_lazy);
++}
++#else
++static __always_inline bool dynamic_preempt_lazy(void)
++{
++	return IS_ENABLED(CONFIG_PREEMPT_LAZY);
++}
++#endif
++
++static __always_inline int get_lazy_tif_bit(void)
++{
++	if (dynamic_preempt_lazy())
++		return TIF_NEED_RESCHED_LAZY;
++
++	return TIF_NEED_RESCHED;
++}
++
++static inline void resched_curr_lazy(struct rq *rq)
++{
++	__resched_curr(rq, get_lazy_tif_bit());
++}
++
++void resched_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (cpu_online(cpu) || cpu == smp_processor_id())
++		resched_curr(cpu_rq(cpu));
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++#ifdef CONFIG_SMP
++#ifdef CONFIG_NO_HZ_COMMON
++/*
++ * This routine will record that the CPU is going idle with tick stopped.
++ * This info will be used in performing idle load balancing in the future.
++ */
++void nohz_balance_enter_idle(int cpu) {}
++
++/*
++ * In the semi idle case, use the nearest busy CPU for migrating timers
++ * from an idle CPU.  This is good for power-savings.
++ *
++ * We don't do a similar optimization for a completely idle system, as
++ * selecting an idle CPU would add more delay to the timers than intended
++ * (as that CPU's timer base may not be up to date wrt jiffies etc).
++ */
++int get_nohz_timer_target(void)
++{
++	int i, cpu = smp_processor_id(), default_cpu = -1;
++	struct cpumask *mask;
++	const struct cpumask *hk_mask;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE)) {
++		if (!idle_cpu(cpu))
++			return cpu;
++		default_cpu = cpu;
++	}
++
++	hk_mask = housekeeping_cpumask(HK_TYPE_KERNEL_NOISE);
++
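++	/*
++	 * Walk this CPU's topology masks (nearest level first) and return
++	 * the first non-idle housekeeping CPU found.
++	 */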
++	for (mask = per_cpu(sched_cpu_topo_masks, cpu);
++	     mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
++		for_each_cpu_and(i, mask, hk_mask)
++			if (!idle_cpu(i))
++				return i;
++
++	if (default_cpu == -1)
++		default_cpu = housekeeping_any_cpu(HK_TYPE_KERNEL_NOISE);
++	cpu = default_cpu;
++
++	return cpu;
++}
++
++/*
++ * When add_timer_on() enqueues a timer into the timer wheel of an
++ * idle CPU then this timer might expire before the next timer event
++ * which is scheduled to wake up that CPU. In case of a completely
++ * idle system the next event might even be infinite time into the
++ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
++ * leaves the inner idle loop so the newly added timer is taken into
++ * account when the CPU goes back to idle and evaluates the timer
++ * wheel for the next timer event.
++ */
++static inline void wake_up_idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (cpu == smp_processor_id())
++		return;
++
++	/*
++	 * Set TIF_NEED_RESCHED and send an IPI if in the non-polling
++	 * part of the idle loop. This forces an exit from the idle loop
++	 * and a round trip to schedule(). Now this could be optimized
++	 * because a simple new idle loop iteration is enough to
++	 * re-evaluate the next tick. Provided some re-ordering of tick
++	 * nohz functions that would need to follow TIF_NR_POLLING
++	 * clearing:
++	 *
++	 * - On most architectures, a simple fetch_or on ti::flags with a
++	 *   "0" value would be enough to know if an IPI needs to be sent.
++	 *
++	 * - x86 needs to perform a last need_resched() check between
++	 *   monitor and mwait which doesn't take timers into account.
++	 *   There a dedicated TIF_TIMER flag would be required to
++	 *   fetch_or here and be checked along with TIF_NEED_RESCHED
++	 *   before mwait().
++	 *
++	 * However, remote timer enqueue is not such a frequent event
++	 * and testing of the above solutions didn't appear to show
++	 * much benefit.
++	 */
++	if (set_nr_and_not_polling(task_thread_info(rq->idle), TIF_NEED_RESCHED))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++static inline bool wake_up_full_nohz_cpu(int cpu)
++{
++	/*
++	 * We just need the target to call irq_exit() and re-evaluate
++	 * the next tick. The nohz full kick at least implies that.
++	 * If needed we can still optimize that later with an
++	 * empty IRQ.
++	 */
++	if (cpu_is_offline(cpu))
++		return true;  /* Don't try to wake offline CPUs. */
++	if (tick_nohz_full_cpu(cpu)) {
++		if (cpu != smp_processor_id() ||
++		    tick_nohz_tick_stopped())
++			tick_nohz_full_kick_cpu(cpu);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_nohz_cpu(int cpu)
++{
++	if (!wake_up_full_nohz_cpu(cpu))
++		wake_up_idle_cpu(cpu);
++}
++
++static void nohz_csd_func(void *info)
++{
++	struct rq *rq = info;
++	int cpu = cpu_of(rq);
++	unsigned int flags;
++
++	/*
++	 * Release the rq::nohz_csd.
++	 */
++	flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
++	WARN_ON(!(flags & NOHZ_KICK_MASK));
++
++	rq->idle_balance = idle_cpu(cpu);
++	if (rq->idle_balance) {
++		rq->nohz_idle_balance = flags;
++		__raise_softirq_irqoff(SCHED_SOFTIRQ);
++	}
++}
++
++#endif /* CONFIG_NO_HZ_COMMON */
++#endif /* CONFIG_SMP */
++
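++/* Reschedule if the current task is no longer the first runnable task. */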
++static inline void wakeup_preempt(struct rq *rq)
++{
++	if (sched_rq_first_task(rq) != rq->curr)
++		resched_curr(rq);
++}
++
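++/*
++ * Return 1 if p->__state matches @state, -1 if only p->saved_state matches
++ * (e.g. the task is blocked on an RT lock or frozen), and 0 on no match.
++ */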
++static __always_inline
++int __task_state_match(struct task_struct *p, unsigned int state)
++{
++	if (READ_ONCE(p->__state) & state)
++		return 1;
++
++	if (READ_ONCE(p->saved_state) & state)
++		return -1;
++
++	return 0;
++}
++
++static __always_inline
++int task_state_match(struct task_struct *p, unsigned int state)
++{
++	/*
++	 * Serialize against current_save_and_set_rtlock_wait_state(),
++	 * current_restore_rtlock_saved_state(), and __refrigerator().
++	 */
++	guard(raw_spinlock_irq)(&p->pi_lock);
++
++	return __task_state_match(p, state);
++}
++
++/*
++ * wait_task_inactive - wait for a thread to unschedule.
++ *
++ * Wait for the thread to block in any of the states set in @match_state.
++ * If it changes, i.e. @p might have woken up, then return zero.  When we
++ * succeed in waiting for @p to be off its CPU, we return a positive number
++ * (its total switch count).  If a second call a short while later returns the
++ * same number, the caller can be sure that @p has remained unscheduled the
++ * whole time.
++ *
++ * The caller must ensure that the task *will* unschedule sometime soon,
++ * else this function might spin for a *long* time. This function can't
++ * be called with interrupts off, or it may introduce deadlock with
++ * smp_call_function() if an IPI is sent by the same process we are
++ * waiting to become inactive.
++ */
++unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
++{
++	unsigned long flags;
++	int running, queued, match;
++	unsigned long ncsw;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	for (;;) {
++		rq = task_rq(p);
++
++		/*
++		 * If the task is actively running on another CPU
++		 * still, just relax and busy-wait without holding
++		 * any locks.
++		 *
++		 * NOTE! Since we don't hold any locks, it's not even
++		 * guaranteed that "rq" stays the right runqueue!
++		 * But we don't care, since this will return false
++		 * if the runqueue has changed and p is actually now
++		 * running somewhere else!
++		 */
++		while (task_on_cpu(p)) {
++			if (!task_state_match(p, match_state))
++				return 0;
++			cpu_relax();
++		}
++
++		/*
++		 * Ok, time to look more closely! We need the rq
++		 * lock now, to be *sure*. If we're wrong, we'll
++		 * just go back and repeat.
++		 */
++		task_access_lock_irqsave(p, &lock, &flags);
++		trace_sched_wait_task(p);
++		running = task_on_cpu(p);
++		queued = p->on_rq;
++		ncsw = 0;
++		if ((match = __task_state_match(p, match_state))) {
++			/*
++			 * When matching on p->saved_state, consider this task
++			 * still queued so it will wait.
++			 */
++			if (match < 0)
++				queued = 1;
++			ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
++		}
++		task_access_unlock_irqrestore(p, lock, &flags);
++
++		/*
++		 * If it changed from the expected state, bail out now.
++		 */
++		if (unlikely(!ncsw))
++			break;
++
++		/*
++		 * Was it really running after all now that we
++		 * checked with the proper locks actually held?
++		 *
++		 * Oops. Go back and try again..
++		 */
++		if (unlikely(running)) {
++			cpu_relax();
++			continue;
++		}
++
++		/*
++		 * It's not enough that it's not actively running,
++		 * it must be off the runqueue _entirely_, and not
++		 * preempted!
++		 *
++		 * So if it was still runnable (but just not actively
++		 * running right now), it's preempted, and we should
++		 * yield - it could be a while.
++		 */
++		if (unlikely(queued)) {
++			ktime_t to = NSEC_PER_SEC / HZ;
++
++			set_current_state(TASK_UNINTERRUPTIBLE);
++			schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
++			continue;
++		}
++
++		/*
++		 * Ahh, all good. It wasn't running, and it wasn't
++		 * runnable, which means that it will never become
++		 * running in the future either. We're all done!
++		 */
++		break;
++	}
++
++	return ncsw;
++}
++
++#ifdef CONFIG_SCHED_HRTICK
++/*
++ * Use HR-timers to deliver accurate preemption points.
++ */
++
++static void hrtick_clear(struct rq *rq)
++{
++	if (hrtimer_active(&rq->hrtick_timer))
++		hrtimer_cancel(&rq->hrtick_timer);
++}
++
++/*
++ * High-resolution timer tick.
++ * Runs from hardirq context with interrupts disabled.
++ */
++static enum hrtimer_restart hrtick(struct hrtimer *timer)
++{
++	struct rq *rq = container_of(timer, struct rq, hrtick_timer);
++
++	WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
++
++	raw_spin_lock(&rq->lock);
++	resched_curr(rq);
++	raw_spin_unlock(&rq->lock);
++
++	return HRTIMER_NORESTART;
++}
++
++/*
++ * Use hrtick when:
++ *  - enabled by features
++ *  - hrtimer is actually high res
++ */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	/**
++	 * Alt schedule FW doesn't support sched_feat yet
++	if (!sched_feat(HRTICK))
++		return 0;
++	*/
++	if (!cpu_active(cpu_of(rq)))
++		return 0;
++	return hrtimer_is_hres_active(&rq->hrtick_timer);
++}
++
++#ifdef CONFIG_SMP
++
++static void __hrtick_restart(struct rq *rq)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	ktime_t time = rq->hrtick_time;
++
++	hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
++}
++
++/*
++ * called from hardirq (IPI) context
++ */
++static void __hrtick_start(void *arg)
++{
++	struct rq *rq = arg;
++
++	raw_spin_lock(&rq->lock);
++	__hrtick_restart(rq);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and IRQs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	s64 delta;
++
++	/*
++	 * Don't schedule slices shorter than 10000ns; that just
++	 * doesn't make sense and can cause a timer DoS.
++	 */
++	delta = max_t(s64, delay, 10000LL);
++
++	rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
++
++	if (rq == this_rq())
++		__hrtick_restart(rq);
++	else
++		smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
++}
++
++#else
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and IRQs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++	/*
++	 * Don't schedule slices shorter than 10000ns; that just
++	 * doesn't make sense. Rely on vruntime for fairness.
++	 */
++	delay = max_t(u64, delay, 10000LL);
++	hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
++		      HRTIMER_MODE_REL_PINNED_HARD);
++}
++#endif /* CONFIG_SMP */
++
++static void hrtick_rq_init(struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
++#endif
++
++	hrtimer_setup(&rq->hrtick_timer, hrtick, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
++}
++#else	/* CONFIG_SCHED_HRTICK */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	return 0;
++}
++
++static inline void hrtick_clear(struct rq *rq)
++{
++}
++
++static inline void hrtick_rq_init(struct rq *rq)
++{
++}
++#endif	/* CONFIG_SCHED_HRTICK */
++
++/*
++ * activate_task - move a task to the runqueue.
++ *
++ * Context: rq->lock
++ */
++static void activate_task(struct task_struct *p, struct rq *rq)
++{
++	enqueue_task(p, rq, ENQUEUE_WAKEUP);
++
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++	ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++
++	/*
++	 * If in_iowait is set, the code below may not trigger any cpufreq
++	 * utilization updates, so do it here explicitly with the IOWAIT flag
++	 * passed.
++	 */
++	cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
++}
++
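++/*
++ * block_task - dequeue a sleeping task and account its uninterruptible /
++ * IO-wait state before publishing p->on_rq = 0.
++ */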
++static void block_task(struct rq *rq, struct task_struct *p)
++{
++	dequeue_task(p, rq, DEQUEUE_SLEEP);
++
++	if (p->sched_contributes_to_load)
++		rq->nr_uninterruptible++;
++
++	if (p->in_iowait) {
++		atomic_inc(&rq->nr_iowait);
++		delayacct_blkio_start();
++	}
++
++	ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++
++	/*
++	 * The moment this write goes through, ttwu() can swoop in and migrate
++	 * this task, rendering our rq->__lock ineffective.
++	 *
++	 * __schedule()				try_to_wake_up()
++	 *   LOCK rq->__lock			  LOCK p->pi_lock
++	 *   pick_next_task()
++	 *     pick_next_task_fair()
++	 *       pick_next_entity()
++	 *         dequeue_entities()
++	 *           __block_task()
++	 *             RELEASE p->on_rq = 0	  if (p->on_rq && ...)
++	 *					    break;
++	 *
++	 *					  ACQUIRE (after ctrl-dep)
++	 *
++	 *					  cpu = select_task_rq();
++	 *					  set_task_cpu(p, cpu);
++	 *					  ttwu_queue()
++	 *					    ttwu_do_activate()
++	 *					      LOCK rq->__lock
++	 *					      activate_task()
++	 *					        STORE p->on_rq = 1
++	 *   UNLOCK rq->__lock
++	 *
++	 * Callers must ensure to not reference @p after this -- we no longer
++	 * own it.
++	 */
++	smp_store_release(&p->on_rq, 0);
++}
++
++static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
++	 * successfully executed on another CPU. We must ensure that updates of
++	 * per-task data have been completed by this moment.
++	 */
++	smp_wmb();
++
++	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/*
++	 * We should never call set_task_cpu() on a blocked task,
++	 * ttwu() will sort out the placement.
++	 */
++	WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
++
++#ifdef CONFIG_LOCKDEP
++	/*
++	 * The caller should hold either p->pi_lock or rq->lock, when changing
++	 * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
++	 *
++	 * sched_move_task() holds both and thus holding either pins the cgroup,
++	 * see task_group().
++	 */
++	WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
++				      lockdep_is_held(&task_rq(p)->lock)));
++#endif
++	/*
++	 * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
++	 */
++	WARN_ON_ONCE(!cpu_online(new_cpu));
++
++	WARN_ON_ONCE(is_migration_disabled(p));
++	trace_sched_migrate_task(p, new_cpu);
++
++	if (task_cpu(p) != new_cpu) {
++		rseq_migrate(p);
++		sched_mm_cid_migrate_from(p);
++		perf_event_task_migrate(p);
++	}
++
++	__set_task_cpu(p, new_cpu);
++}
++
++static void
++__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	/*
++	 * This here violates the locking rules for affinity, since we're only
++	 * supposed to change these variables while holding both rq->lock and
++	 * p->pi_lock.
++	 *
++	 * HOWEVER, it magically works, because ttwu() is the only code that
++	 * accesses these variables under p->pi_lock and only does so after
++	 * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
++	 * before finish_task().
++	 *
++	 * XXX do further audits, this smells like something putrid.
++	 */
++	WARN_ON_ONCE(!p->on_cpu);
++	p->cpus_ptr = new_mask;
++}
++
++void migrate_disable(void)
++{
++	struct task_struct *p = current;
++	int cpu;
++
++	if (p->migration_disabled) {
++#ifdef CONFIG_DEBUG_PREEMPT
++		/*
++		 * Warn about overflow half-way through the range.
++		 */
++		WARN_ON_ONCE((s16)p->migration_disabled < 0);
++#endif
++		p->migration_disabled++;
++		return;
++	}
++
++	guard(preempt)();
++	cpu = smp_processor_id();
++	if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
++		cpu_rq(cpu)->nr_pinned++;
++		p->migration_disabled = 1;
++		/*
++		 * Violates locking rules! see comment in __do_set_cpus_ptr().
++		 */
++		if (p->cpus_ptr == &p->cpus_mask)
++			__do_set_cpus_ptr(p, cpumask_of(cpu));
++	}
++}
++EXPORT_SYMBOL_GPL(migrate_disable);
++
++void migrate_enable(void)
++{
++	struct task_struct *p = current;
++
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Check both overflow from migrate_disable() and superfluous
++	 * migrate_enable().
++	 */
++	if (WARN_ON_ONCE((s16)p->migration_disabled <= 0))
++		return;
++#endif
++
++	if (p->migration_disabled > 1) {
++		p->migration_disabled--;
++		return;
++	}
++
++	/*
++	 * Ensure stop_task runs either before or after this, and that
++	 * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
++	 */
++	guard(preempt)();
++	/*
++	 * Assumption: current should be running on an allowed CPU.
++	 */
++	WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
++	if (p->cpus_ptr != &p->cpus_mask)
++		__do_set_cpus_ptr(p, &p->cpus_mask);
++	/*
++	 * Mustn't clear migration_disabled() until cpus_ptr points back at the
++	 * regular cpus_mask, otherwise things that race (eg.
++	 * select_fallback_rq) get confused.
++	 */
++	barrier();
++	p->migration_disabled = 0;
++	this_rq()->nr_pinned--;
++}
++EXPORT_SYMBOL_GPL(migrate_enable);
++
++static void __migrate_force_enable(struct task_struct *p, struct rq *rq)
++{
++	if (likely(p->cpus_ptr != &p->cpus_mask))
++		__do_set_cpus_ptr(p, &p->cpus_mask);
++	p->migration_disabled = 0;
++	/* When p is migrate_disabled, rq->lock should be held */
++	rq->nr_pinned--;
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return rq->nr_pinned;
++}
++
++/*
++ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
++ * __set_cpus_allowed_ptr() and select_fallback_rq().
++ */
++static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
++{
++	/* When not in the task's cpumask, no point in looking further. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/* migrate_disabled() must be allowed to finish. */
++	if (is_migration_disabled(p))
++		return cpu_online(cpu);
++
++	/* Non-kernel threads are not allowed during either online or offline. */
++	if (!(p->flags & PF_KTHREAD))
++		return cpu_active(cpu) && task_cpu_possible(cpu, p);
++
++	/* KTHREAD_IS_PER_CPU is always allowed. */
++	if (kthread_is_per_cpu(p))
++		return cpu_online(cpu);
++
++	/* Regular kernel threads don't get to stay during offline. */
++	if (cpu_dying(cpu))
++		return false;
++
++	/* But are allowed during online. */
++	return cpu_online(cpu);
++}
++
++/*
++ * This is how migration works:
++ *
++ * 1) we invoke migration_cpu_stop() on the target CPU using
++ *    stop_one_cpu().
++ * 2) stopper starts to run (implicitly forcing the migrated thread
++ *    off the CPU)
++ * 3) it checks whether the migrated task is still in the wrong runqueue.
++ * 4) if it's in the wrong runqueue then the migration thread removes
++ *    it and puts it into the right queue.
++ * 5) stopper completes and stop_one_cpu() returns and the migration
++ *    is done.
++ */
++
++/*
++ * move_queued_task - move a queued task to new rq.
++ *
++ * Returns (locked) new rq. Old rq's lock is released.
++ */
++struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu)
++{
++	lockdep_assert_held(&rq->lock);
++
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
++	dequeue_task(p, rq, 0);
++	set_task_cpu(p, new_cpu);
++	raw_spin_unlock(&rq->lock);
++
++	rq = cpu_rq(new_cpu);
++
++	raw_spin_lock(&rq->lock);
++	WARN_ON_ONCE(task_cpu(p) != new_cpu);
++
++	sched_mm_cid_migrate_to(rq, p);
++
++	sched_task_sanity_check(p, rq);
++	enqueue_task(p, rq, 0);
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++	wakeup_preempt(rq);
++
++	return rq;
++}
++
++struct migration_arg {
++	struct task_struct *task;
++	int dest_cpu;
++};
++
++/*
++ * Move (not current) task off this CPU, onto the destination CPU. We're doing
++ * this because either it can't run here any more (set_cpus_allowed()
++ * away from this CPU, or CPU going down), or because we're
++ * attempting to rebalance this task on exec (sched_exec).
++ *
++ * So we race with normal scheduler movements, but that's OK, as long
++ * as the task is no longer on this CPU.
++ */
++static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int dest_cpu)
++{
++	/* Affinity changed (again). */
++	if (!is_cpu_allowed(p, dest_cpu))
++		return rq;
++
++	return move_queued_task(rq, p, dest_cpu);
++}
++
++/*
++ * migration_cpu_stop - this will be executed by a high-prio stopper thread
++ * and performs thread migration by bumping thread off CPU then
++ * 'pushing' onto another runqueue.
++ */
++static int migration_cpu_stop(void *data)
++{
++	struct migration_arg *arg = data;
++	struct task_struct *p = arg->task;
++	struct rq *rq = this_rq();
++	unsigned long flags;
++
++	/*
++	 * The original target CPU might have gone down and we might
++	 * be on another CPU but it doesn't matter.
++	 */
++	local_irq_save(flags);
++	/*
++	 * We need to explicitly wake pending tasks before running
++	 * __migrate_task() such that we will not miss enforcing cpus_ptr
++	 * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
++	 */
++	flush_smp_call_function_queue();
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++	/*
++	 * If task_rq(p) != rq, it cannot be migrated here, because we're
++	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
++	 * we're holding p->pi_lock.
++	 */
++	if (task_rq(p) == rq && task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		rq = __migrate_task(rq, p, arg->dest_cpu);
++	}
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	return 0;
++}
++
++static inline void
++set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
++{
++	cpumask_copy(&p->cpus_mask, ctx->new_mask);
++	p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
++
++	/*
++	 * Swap in a new user_cpus_ptr if SCA_USER flag set
++	 */
++	if (ctx->flags & SCA_USER)
++		swap(p->user_cpus_ptr, ctx->user_mask);
++}
++
++static void
++__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
++{
++	lockdep_assert_held(&p->pi_lock);
++	set_cpus_allowed_common(p, ctx);
++	mm_set_cpus_allowed(p->mm, ctx->new_mask);
++}
++
++/*
++ * Used for kthread_bind() and select_fallback_rq(); in both cases the user
++ * affinity (if any) should be destroyed too.
++ */
++void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.user_mask = NULL,
++		.flags     = SCA_USER,	/* clear the user requested mask */
++	};
++	union cpumask_rcuhead {
++		cpumask_t cpumask;
++		struct rcu_head rcu;
++	};
++
++	__do_set_cpus_allowed(p, &ac);
++
++	if (is_migration_disabled(p) && !cpumask_test_cpu(task_cpu(p), &p->cpus_mask))
++		__migrate_force_enable(p, task_rq(p));
++
++	/*
++	 * Because this is called with p->pi_lock held, it is not possible
++	 * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
++	 * kfree_rcu().
++	 */
++	kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
++}
++
++int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
++		      int node)
++{
++	cpumask_t *user_mask;
++	unsigned long flags;
++
++	/*
++	 * Always clear dst->user_cpus_ptr first as their user_cpus_ptr's
++	 * may differ by now due to racing.
++	 */
++	dst->user_cpus_ptr = NULL;
++
++	/*
++	 * This check is racy and losing the race is a valid situation.
++	 * It is not worth the extra overhead of taking the pi_lock on
++	 * every fork/clone.
++	 */
++	if (data_race(!src->user_cpus_ptr))
++		return 0;
++
++	user_mask = alloc_user_cpus_ptr(node);
++	if (!user_mask)
++		return -ENOMEM;
++
++	/*
++	 * Use pi_lock to protect content of user_cpus_ptr
++	 *
++	 * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
++	 * do_set_cpus_allowed().
++	 */
++	raw_spin_lock_irqsave(&src->pi_lock, flags);
++	if (src->user_cpus_ptr) {
++		swap(dst->user_cpus_ptr, user_mask);
++		cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
++	}
++	raw_spin_unlock_irqrestore(&src->pi_lock, flags);
++
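++	/*
++	 * If the swap above did not happen (src->user_cpus_ptr was cleared
++	 * concurrently), the allocation went unused; free it.
++	 */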
++	if (unlikely(user_mask))
++		kfree(user_mask);
++
++	return 0;
++}
++
++static inline struct cpumask *clear_user_cpus_ptr(struct task_struct *p)
++{
++	struct cpumask *user_mask = NULL;
++
++	swap(p->user_cpus_ptr, user_mask);
++
++	return user_mask;
++}
++
++void release_user_cpus_ptr(struct task_struct *p)
++{
++	kfree(clear_user_cpus_ptr(p));
++}
++
++#endif
++
++/**
++ * task_curr - is this task currently executing on a CPU?
++ * @p: the task in question.
++ *
++ * Return: 1 if the task is currently executing. 0 otherwise.
++ */
++inline int task_curr(const struct task_struct *p)
++{
++	return cpu_curr(task_cpu(p)) == p;
++}
++
++#ifdef CONFIG_SMP
++/**
++ * kick_process - kick a running thread to enter/exit the kernel
++ * @p: the to-be-kicked thread
++ *
++ * Cause a process which is running on another CPU to enter
++ * kernel-mode, without any delay. (to get signals handled.)
++ *
++ * NOTE: this function doesn't have to take the runqueue lock,
++ * because all it wants to ensure is that the remote task enters
++ * the kernel. If the IPI races and the task has been migrated
++ * to another CPU then no harm is done and the purpose has been
++ * achieved as well.
++ */
++void kick_process(struct task_struct *p)
++{
++	guard(preempt)();
++	int cpu = task_cpu(p);
++
++	if ((cpu != smp_processor_id()) && task_curr(p))
++		smp_send_reschedule(cpu);
++}
++EXPORT_SYMBOL_GPL(kick_process);
++
++/*
++ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
++ *
++ * A few notes on cpu_active vs cpu_online:
++ *
++ *  - cpu_active must be a subset of cpu_online
++ *
++ *  - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
++ *    see __set_cpus_allowed_ptr(). At this point the newly online
++ *    CPU isn't yet part of the sched domains, and balancing will not
++ *    see it.
++ *
++ *  - on cpu-down we clear cpu_active() to mask the sched domains and
++ *    prevent the load balancer from placing new tasks on the to-be-removed
++ *    CPU. Existing tasks will remain running there and will be taken
++ *    off.
++ *
++ * This means that fallback selection must not select !active CPUs.
++ * And can assume that any active CPU must be online. Conversely
++ * select_task_rq() below may allow selection of !active CPUs in order
++ * to satisfy the above rules.
++ */
++static int select_fallback_rq(int cpu, struct task_struct *p)
++{
++	int nid = cpu_to_node(cpu);
++	const struct cpumask *nodemask = NULL;
++	enum { cpuset, possible, fail } state = cpuset;
++	int dest_cpu;
++
++	/*
++	 * If the node that the CPU is on has been offlined, cpu_to_node()
++	 * will return -1. There is no CPU on the node, so we should
++	 * select a CPU on another node.
++	 */
++	if (nid != -1) {
++		nodemask = cpumask_of_node(nid);
++
++		/* Look for allowed, online CPU in same node. */
++		for_each_cpu(dest_cpu, nodemask) {
++			if (is_cpu_allowed(p, dest_cpu))
++				return dest_cpu;
++		}
++	}
++
++	for (;;) {
++		/* Any allowed, online CPU? */
++		for_each_cpu(dest_cpu, p->cpus_ptr) {
++			if (!is_cpu_allowed(p, dest_cpu))
++				continue;
++			goto out;
++		}
++
++		/* No more Mr. Nice Guy. */
++		switch (state) {
++		case cpuset:
++			if (cpuset_cpus_allowed_fallback(p)) {
++				state = possible;
++				break;
++			}
++			fallthrough;
++		case possible:
++			/*
++			 * XXX When called from select_task_rq() we only
++			 * hold p->pi_lock and again violate locking order.
++			 *
++			 * More yuck to audit.
++			 */
++			do_set_cpus_allowed(p, task_cpu_fallback_mask(p));
++			state = fail;
++			break;
++
++		case fail:
++			BUG();
++			break;
++		}
++	}
++
++out:
++	if (state != cpuset) {
++		/*
++		 * Don't tell them about moving exiting tasks or
++		 * kernel threads (both mm NULL), since they never
++		 * leave kernel.
++		 */
++		if (p->mm && printk_ratelimit()) {
++			printk_deferred("process %d (%s) no longer affine to cpu%d\n",
++					task_pid_nr(p), p->comm, cpu);
++		}
++	}
++
++	return dest_cpu;
++}
++
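++/*
++ * Refresh the cached preempt mask for priority level @prio from the mask
++ * last computed for level @ref: a CPU stays set when its run-queue prio is
++ * numerically higher (i.e. weaker) than @prio, so the resulting mask holds
++ * the CPUs that a task at @prio could preempt.
++ */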
++static inline void
++sched_preempt_mask_flush(cpumask_t *mask, int prio, int ref)
++{
++	int cpu;
++
++	cpumask_copy(mask, sched_preempt_mask + ref);
++	if (prio < ref) {
++		for_each_clear_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) {
++			if (prio < cpu_rq(cpu)->prio)
++				cpumask_set_cpu(cpu, mask);
++		}
++	} else {
++		for_each_cpu_andnot(cpu, mask, sched_idle_mask) {
++			if (prio >= cpu_rq(cpu)->prio)
++				cpumask_clear_cpu(cpu, mask);
++		}
++	}
++}
++
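++/*
++ * Intersect @allow_mask with the (lazily refreshed) mask of CPUs that a
++ * task at @prio could preempt; returns non-zero when the result is
++ * non-empty.
++ */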
++static inline int
++preempt_mask_check(cpumask_t *preempt_mask, cpumask_t *allow_mask, int prio)
++{
++	cpumask_t *mask = sched_preempt_mask + prio;
++	int pr = atomic_read(&sched_prio_record);
++
++	if (pr != prio && SCHED_QUEUE_BITS - 1 != prio) {
++		sched_preempt_mask_flush(mask, prio, pr);
++		atomic_set(&sched_prio_record, prio);
++	}
++
++	return cpumask_and(preempt_mask, allow_mask, mask);
++}
++
++__read_mostly idle_select_func_t idle_select_func ____cacheline_aligned_in_smp = cpumask_and;
++
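++/*
++ * Task placement: pick an idle allowed CPU if there is one, otherwise an
++ * allowed CPU running at lower priority than @p; fall back to the best
++ * CPU in the plain allowed mask.
++ */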
++static inline int select_task_rq(struct task_struct *p)
++{
++	cpumask_t allow_mask, mask;
++
++	if (unlikely(!cpumask_and(&allow_mask, p->cpus_ptr, cpu_active_mask)))
++		return select_fallback_rq(task_cpu(p), p);
++
++	if (idle_select_func(&mask, &allow_mask, sched_idle_mask)	||
++	    preempt_mask_check(&mask, &allow_mask, task_sched_prio(p)))
++		return best_mask_cpu(task_cpu(p), &mask);
++
++	return best_mask_cpu(task_cpu(p), &allow_mask);
++}
++
++void sched_set_stop_task(int cpu, struct task_struct *stop)
++{
++	static struct lock_class_key stop_pi_lock;
++	struct sched_param stop_param = { .sched_priority = STOP_PRIO };
++	struct sched_param start_param = { .sched_priority = 0 };
++	struct task_struct *old_stop = cpu_rq(cpu)->stop;
++
++	if (stop) {
++		/*
++		 * Make it appear like a SCHED_FIFO task; it's something
++		 * userspace knows about and won't get confused about.
++		 *
++		 * Also, it will make PI more or less work without too
++		 * much confusion -- but then, stop work should not
++		 * rely on PI working anyway.
++		 */
++		sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
++
++		/*
++		 * The PI code calls rt_mutex_setprio() with ->pi_lock held to
++		 * adjust the effective priority of a task. As a result,
++		 * rt_mutex_setprio() can trigger (RT) balancing operations,
++		 * which can then trigger wakeups of the stop thread to push
++		 * around the current task.
++		 *
++		 * The stop task itself will never be part of the PI-chain, it
++		 * never blocks, therefore that ->pi_lock recursion is safe.
++		 * Tell lockdep about this by placing the stop->pi_lock in its
++		 * own class.
++		 */
++		lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
++	}
++
++	cpu_rq(cpu)->stop = stop;
++
++	if (old_stop) {
++		/*
++		 * Reset it back to a normal scheduling policy so that
++		 * it can die in pieces.
++		 */
++		sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
++	}
++}
++
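++/*
++ * Move @p towards @dest_cpu: hand off to the migration stopper when the
++ * task is running or waking, move it between runqueues directly when it
++ * is only queued. Releases both rq->lock and p->pi_lock on all paths.
++ */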
++static int affine_move_task(struct rq *rq, struct task_struct *p, int dest_cpu,
++			    raw_spinlock_t *lock, unsigned long irq_flags)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	/* Can the task run on the task's current CPU? If so, we're done */
++	if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
++		if (is_migration_disabled(p))
++			__migrate_force_enable(p, rq);
++
++		if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
++			struct migration_arg arg = { p, dest_cpu };
++
++			/* Need help from migration thread: drop lock and wait. */
++			__task_access_unlock(p, lock);
++			raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++			stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++			return 0;
++		}
++		if (task_on_rq_queued(p)) {
++			/*
++			 * OK, since we're going to drop the lock immediately
++			 * afterwards anyway.
++			 */
++			update_rq_clock(rq);
++			rq = move_queued_task(rq, p, dest_cpu);
++			lock = &rq->lock;
++		}
++	}
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++	return 0;
++}
++
++static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
++					 struct affinity_context *ctx,
++					 struct rq *rq,
++					 raw_spinlock_t *lock,
++					 unsigned long irq_flags)
++{
++	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
++	const struct cpumask *cpu_valid_mask = cpu_active_mask;
++	bool kthread = p->flags & PF_KTHREAD;
++	int dest_cpu;
++	int ret = 0;
++
++	if (kthread || is_migration_disabled(p)) {
++		/*
++		 * Kernel threads are allowed on online && !active CPUs,
++		 * however, during cpu-hot-unplug, even these might get pushed
++		 * away if not KTHREAD_IS_PER_CPU.
++		 *
++		 * Specifically, migration_disabled() tasks must not fail the
++		 * cpumask_any_and_distribute() pick below, esp. so on
++		 * SCA_MIGRATE_ENABLE, otherwise we'll not call
++		 * set_cpus_allowed_common() and actually reset p->cpus_ptr.
++		 */
++		cpu_valid_mask = cpu_online_mask;
++	}
++
++	if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	/*
++	 * Must re-check here, to close a race against __kthread_bind();
++	 * sched_setaffinity() is not guaranteed to observe the flag.
++	 */
++	if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
++		goto out;
++
++	dest_cpu = cpumask_any_and(cpu_valid_mask, ctx->new_mask);
++	if (dest_cpu >= nr_cpu_ids) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	__do_set_cpus_allowed(p, ctx);
++
++	return affine_move_task(rq, p, dest_cpu, lock, irq_flags);
++
++out:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++
++	return ret;
++}
++
++/*
++ * Change a given task's CPU affinity. Migrate the thread to a
++ * proper CPU and schedule it away if the CPU it's executing on
++ * is removed from the allowed bitmask.
++ *
++ * NOTE: the caller must have a valid reference to the task, the
++ * task must not exit() & deallocate itself prematurely. The
++ * call is not atomic; no spinlocks may be held.
++ */
++int __set_cpus_allowed_ptr(struct task_struct *p,
++			   struct affinity_context *ctx)
++{
++	unsigned long irq_flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
++	 * flags are set.
++	 */
++	if (p->user_cpus_ptr &&
++	    !(ctx->flags & SCA_USER) &&
++	    cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
++		ctx->new_mask = rq->scratch_mask;
++
++	return __set_cpus_allowed_ptr_locked(p, ctx, rq, lock, irq_flags);
++}
++
++int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.flags     = 0,
++	};
++
++	return __set_cpus_allowed_ptr(p, &ac);
++}
++EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
++
++/*
++ * Change a given task's CPU affinity to the intersection of its current
++ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
++ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
++ * affinity or use cpu_online_mask instead.
++ *
++ * If the resulting mask is empty, leave the affinity unchanged and return
++ * -EINVAL.
++ */
++static int restrict_cpus_allowed_ptr(struct task_struct *p,
++				     struct cpumask *new_mask,
++				     const struct cpumask *subset_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.flags     = 0,
++	};
++	unsigned long irq_flags;
++	raw_spinlock_t *lock;
++	struct rq *rq;
++	int err;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++
++	if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
++		err = -EINVAL;
++		goto err_unlock;
++	}
++
++	return __set_cpus_allowed_ptr_locked(p, &ac, rq, lock, irq_flags);
++
++err_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++	return err;
++}
++
++/*
++ * Restrict the CPU affinity of task @p so that it is a subset of
++ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
++ * old affinity mask. If the resulting mask is empty, we warn and walk
++ * up the cpuset hierarchy until we find a suitable mask.
++ */
++void force_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++	cpumask_var_t new_mask;
++	const struct cpumask *override_mask = task_cpu_possible_mask(p);
++
++	alloc_cpumask_var(&new_mask, GFP_KERNEL);
++
++	/*
++	 * __migrate_task() can fail silently in the face of concurrent
++	 * offlining of the chosen destination CPU, so take the hotplug
++	 * lock to ensure that the migration succeeds.
++	 */
++	cpus_read_lock();
++	if (!cpumask_available(new_mask))
++		goto out_set_mask;
++
++	if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
++		goto out_free_mask;
++
++	/*
++	 * We failed to find a valid subset of the affinity mask for the
++	 * task, so override it based on its cpuset hierarchy.
++	 */
++	cpuset_cpus_allowed(p, new_mask);
++	override_mask = new_mask;
++
++out_set_mask:
++	if (printk_ratelimit()) {
++		printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
++				task_pid_nr(p), p->comm,
++				cpumask_pr_args(override_mask));
++	}
++
++	WARN_ON(set_cpus_allowed_ptr(p, override_mask));
++out_free_mask:
++	cpus_read_unlock();
++	free_cpumask_var(new_mask);
++}
++
++/*
++ * Restore the affinity of a task @p which was previously restricted by a
++ * call to force_compatible_cpus_allowed_ptr().
++ *
++ * It is the caller's responsibility to serialise this with any calls to
++ * force_compatible_cpus_allowed_ptr(@p).
++ */
++void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++	struct affinity_context ac = {
++		.new_mask  = task_user_cpus(p),
++		.flags     = 0,
++	};
++	int ret;
++
++	/*
++	 * Try to restore the old affinity mask with __sched_setaffinity().
++	 * Cpuset masking will be done there too.
++	 */
++	ret = __sched_setaffinity(p, &ac);
++	WARN_ON_ONCE(ret);
++}
++
++#else /* CONFIG_SMP */
++
++static inline int select_task_rq(struct task_struct *p)
++{
++	return 0;
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return false;
++}
++
++#endif /* !CONFIG_SMP */
++
++static void
++ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq;
++
++	if (!schedstat_enabled())
++		return;
++
++	rq = this_rq();
++
++#ifdef CONFIG_SMP
++	if (cpu == rq->cpu) {
++		__schedstat_inc(rq->ttwu_local);
++		__schedstat_inc(p->stats.nr_wakeups_local);
++	} else {
++		/** Alt schedule FW ToDo:
++		 * How to do ttwu_wake_remote
++		 */
++	}
++#endif /* CONFIG_SMP */
++
++	__schedstat_inc(rq->ttwu_count);
++	__schedstat_inc(p->stats.nr_wakeups);
++}
++
++/*
++ * Mark the task runnable.
++ */
++static inline void ttwu_do_wakeup(struct task_struct *p)
++{
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	trace_sched_wakeup(p);
++}
++
++static inline void
++ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++	if (p->sched_contributes_to_load)
++		rq->nr_uninterruptible--;
++
++	if (
++#ifdef CONFIG_SMP
++	    !(wake_flags & WF_MIGRATED) &&
++#endif
++	    p->in_iowait) {
++		delayacct_blkio_end(p);
++		atomic_dec(&task_rq(p)->nr_iowait);
++	}
++
++	activate_task(p, rq);
++	wakeup_preempt(rq);
++
++	ttwu_do_wakeup(p);
++}
++
++/*
++ * Consider @p being inside a wait loop:
++ *
++ *   for (;;) {
++ *      set_current_state(TASK_UNINTERRUPTIBLE);
++ *
++ *      if (CONDITION)
++ *         break;
++ *
++ *      schedule();
++ *   }
++ *   __set_current_state(TASK_RUNNING);
++ *
++ * and the wakeup arriving between set_current_state() and schedule(). In
++ * this case @p is still runnable, so all that needs doing is change p->state
++ * back to TASK_RUNNING in an atomic manner.
++ *
++ * By taking task_rq(p)->lock we serialize against schedule(): if @p->on_rq
++ * then schedule() must still happen and p->state can be changed to
++ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
++ * need to do a full wakeup with enqueue.
++ *
++ * Returns: %true when the wakeup is done,
++ *          %false otherwise.
++ */
++static int ttwu_runnable(struct task_struct *p, int wake_flags)
++{
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	int ret = 0;
++
++	rq = __task_access_lock(p, &lock);
++	if (task_on_rq_queued(p)) {
++		if (!task_on_cpu(p)) {
++			/*
++			 * When on_rq && !on_cpu the task is preempted, see if
++			 * it should preempt the task that is current now.
++			 */
++			update_rq_clock(rq);
++			wakeup_preempt(rq);
++		}
++		ttwu_do_wakeup(p);
++		ret = 1;
++	}
++	__task_access_unlock(p, lock);
++
++	return ret;
++}
++
++#ifdef CONFIG_SMP
++void sched_ttwu_pending(void *arg)
++{
++	struct llist_node *llist = arg;
++	struct rq *rq = this_rq();
++	struct task_struct *p, *t;
++	struct rq_flags rf;
++
++	if (!llist)
++		return;
++
++	rq_lock_irqsave(rq, &rf);
++	update_rq_clock(rq);
++
++	llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
++		if (WARN_ON_ONCE(p->on_cpu))
++			smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++		if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
++			set_task_cpu(p, cpu_of(rq));
++
++		ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
++	}
++
++	/*
++	 * Must be after enqueueing at least one task such that
++	 * idle_cpu() does not observe a false-negative -- if it does,
++	 * it is possible for select_idle_siblings() to stack a number
++	 * of tasks on this CPU during that window.
++	 *
++	 * It is OK to clear ttwu_pending when another task is pending.
++	 * We will receive an IPI after local IRQs are enabled and then enqueue it.
++	 * Since nr_running > 0 now, idle_cpu() will always get the correct result.
++	 */
++	WRITE_ONCE(rq->ttwu_pending, 0);
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Prepare the scene for sending an IPI for a remote smp_call
++ *
++ * Returns true if the caller can proceed with sending the IPI.
++ * Returns false otherwise.
++ */
++bool call_function_single_prep_ipi(int cpu)
++{
++	if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
++		trace_sched_wake_idle_without_ipi(cpu);
++		return false;
++	}
++
++	return true;
++}
++
++/*
++ * Queue a task on the target CPU's wake_list and wake the CPU via IPI if
++ * necessary. The wakee CPU on receipt of the IPI will queue the task
++ * via sched_ttwu_wakeup() for activation so the wakee incurs the cost
++ * of the wakeup instead of the waker.
++ */
++static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
++
++	WRITE_ONCE(rq->ttwu_pending, 1);
++	__smp_call_single_queue(cpu, &p->wake_entry.llist);
++}
++
++static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
++{
++	/*
++	 * Do not complicate things with the async wake_list while the CPU is
++	 * in hotplug state.
++	 */
++	if (!cpu_active(cpu))
++		return false;
++
++	/* Ensure the task will still be allowed to run on the CPU. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/*
++	 * If the CPU does not share cache, then queue the task on the
++	 * remote rq's wakelist to avoid accessing remote data.
++	 */
++	if (!cpus_share_cache(smp_processor_id(), cpu))
++		return true;
++
++	if (cpu == smp_processor_id())
++		return false;
++
++	/*
++	 * If the wakee CPU is idle, or the task is descheduling and is the
++	 * only running task on the CPU, then use the wakelist to offload
++	 * the task activation to the idle (or soon-to-be-idle) CPU as
++	 * the current CPU is likely busy. nr_running is checked to
++	 * avoid unnecessary task stacking.
++	 *
++	 * Note that we can only get here with (wakee) p->on_rq=0,
++	 * p->on_cpu can be whatever; we've done the dequeue, so
++	 * the wakee has been accounted out of ->nr_running.
++	 */
++	if (!cpu_rq(cpu)->nr_running)
++		return true;
++
++	return false;
++}
++
++static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(p, cpu)) {
++		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
++		__ttwu_queue_wakelist(p, cpu, wake_flags);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_if_idle(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	guard(rcu)();
++	if (is_idle_task(rcu_dereference(rq->curr))) {
++		guard(raw_spinlock_irqsave)(&rq->lock);
++		if (is_idle_task(rq->curr))
++			resched_curr(rq);
++	}
++}
++
++extern struct static_key_false sched_asym_cpucapacity;
++
++static __always_inline bool sched_asym_cpucap_active(void)
++{
++	return static_branch_unlikely(&sched_asym_cpucapacity);
++}
++
++bool cpus_equal_capacity(int this_cpu, int that_cpu)
++{
++	if (!sched_asym_cpucap_active())
++		return true;
++
++	if (this_cpu == that_cpu)
++		return true;
++
++	return arch_scale_cpu_capacity(this_cpu) == arch_scale_cpu_capacity(that_cpu);
++}
++
++bool cpus_share_cache(int this_cpu, int that_cpu)
++{
++	if (this_cpu == that_cpu)
++		return true;
++
++	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
++}
++#else /* !CONFIG_SMP */
++
++static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	return false;
++}
++
++#endif /* CONFIG_SMP */
++
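++/*
++ * Last step of a wakeup: either queue @p on the target CPU's wakelist
++ * (IPI path) or enqueue it directly under the target rq->lock.
++ */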
++static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (ttwu_queue_wakelist(p, cpu, wake_flags))
++		return;
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++	ttwu_do_activate(rq, p, wake_flags);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Invoked from try_to_wake_up() to check whether the task can be woken up.
++ *
++ * The caller holds p::pi_lock if p != current or has preemption
++ * disabled when p == current.
++ *
++ * The rules of saved_state:
++ *
++ *   The related locking code always holds p::pi_lock when updating
++ *   p::saved_state, which means the code is fully serialized in both cases.
++ *
++ *  For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
++ *  No other bits set. This allows us to distinguish all wakeup scenarios.
++ *
++ *  For FREEZER, the wakeup happens via TASK_FROZEN. No other bits set. This
++ *  allows us to prevent early wakeup of tasks before they can be run on
++ *  asymmetric ISA architectures (eg ARMv9).
++ */
++static __always_inline
++bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
++{
++	int match;
++
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++		WARN_ON_ONCE((state & TASK_RTLOCK_WAIT) &&
++			     state != TASK_RTLOCK_WAIT);
++	}
++
++	*success = !!(match = __task_state_match(p, state));
++
++	/*
++	 * Saved state preserves the task state across blocking on an RT
++	 * lock (or freezing, for TASK_FREEZABLE tasks).  If the state matches,
++	 * set p::saved_state to TASK_RUNNING, but do not wake the task
++	 * because it waits for a lock wakeup or __thaw_task(). Also
++	 * indicate success because from the regular waker's point of
++	 * view this has succeeded.
++	 *
++	 * After acquiring the lock the task will restore p::__state
++	 * from p::saved_state which ensures that the regular
++	 * wakeup is not lost. The restore will also set
++	 * p::saved_state to TASK_RUNNING so any further tests will
++	 * not result in false positives vs. @success
++	 */
++	if (match < 0)
++		p->saved_state = TASK_RUNNING;
++
++	return match > 0;
++}
++
++/*
++ * Notes on Program-Order guarantees on SMP systems.
++ *
++ *  MIGRATION
++ *
++ * The basic program-order guarantee on SMP systems is that when a task [t]
++ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
++ * execution on its new CPU [c1].
++ *
++ * For migration (of runnable tasks) this is provided by the following means:
++ *
++ *  A) UNLOCK of the rq(c0)->lock scheduling out task t
++ *  B) migration for t is required to synchronize *both* rq(c0)->lock and
++ *     rq(c1)->lock (if not at the same time, then in that order).
++ *  C) LOCK of the rq(c1)->lock scheduling in task
++ *
++ * Transitivity guarantees that B happens after A and C after B.
++ * Note: we only require RCpc transitivity.
++ * Note: the CPU doing B need not be c0 or c1
++ *
++ * Example:
++ *
++ *   CPU0            CPU1            CPU2
++ *
++ *   LOCK rq(0)->lock
++ *   sched-out X
++ *   sched-in Y
++ *   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(0)->lock // orders against CPU0
++ *                                   dequeue X
++ *                                   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(1)->lock
++ *                                   enqueue X
++ *                                   UNLOCK rq(1)->lock
++ *
++ *                   LOCK rq(1)->lock // orders against CPU2
++ *                   sched-out Z
++ *                   sched-in X
++ *                   UNLOCK rq(1)->lock
++ *
++ *
++ *  BLOCKING -- aka. SLEEP + WAKEUP
++ *
++ * For blocking we (obviously) need to provide the same guarantee as for
++ * migration. However the means are completely different as there is no lock
++ * chain to provide order. Instead we do:
++ *
++ *   1) smp_store_release(X->on_cpu, 0)   -- finish_task()
++ *   2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
++ *
++ * Example:
++ *
++ *   CPU0 (schedule)  CPU1 (try_to_wake_up) CPU2 (schedule)
++ *
++ *   LOCK rq(0)->lock LOCK X->pi_lock
++ *   dequeue X
++ *   sched-out X
++ *   smp_store_release(X->on_cpu, 0);
++ *
++ *                    smp_cond_load_acquire(&X->on_cpu, !VAL);
++ *                    X->state = WAKING
++ *                    set_task_cpu(X,2)
++ *
++ *                    LOCK rq(2)->lock
++ *                    enqueue X
++ *                    X->state = RUNNING
++ *                    UNLOCK rq(2)->lock
++ *
++ *                                          LOCK rq(2)->lock // orders against CPU1
++ *                                          sched-out Z
++ *                                          sched-in X
++ *                                          UNLOCK rq(2)->lock
++ *
++ *                    UNLOCK X->pi_lock
++ *   UNLOCK rq(0)->lock
++ *
++ *
++ * However, for wakeups there is a second guarantee we must provide, namely we
++ * must observe the state that led to our wakeup. That is, not only must our
++ * task observe its own prior state, it must also observe the stores prior to
++ * its wakeup.
++ *
++ * This means that any means of doing remote wakeups must order the CPU doing
++ * the wakeup against the CPU the task is going to end up running on. This,
++ * however, is already required for the regular Program-Order guarantee above,
++ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
++ *
++ */
++
++/**
++ * try_to_wake_up - wake up a thread
++ * @p: the thread to be awakened
++ * @state: the mask of task states that can be woken
++ * @wake_flags: wake modifier flags (WF_*)
++ *
++ * Conceptually does:
++ *
++ *   If (@state & @p->state) @p->state = TASK_RUNNING.
++ *
++ * If the task was not queued/runnable, also place it back on a runqueue.
++ *
++ * This function is atomic against schedule() which would dequeue the task.
++ *
++ * It issues a full memory barrier before accessing @p->state, see the comment
++ * with set_current_state().
++ *
++ * Uses p->pi_lock to serialize against concurrent wake-ups.
++ *
++ * Relies on p->pi_lock stabilizing:
++ *  - p->sched_class
++ *  - p->cpus_ptr
++ *  - p->sched_task_group
++ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
++ *
++ * Tries really hard to only take one task_rq(p)->lock for performance.
++ * Takes rq->lock in:
++ *  - ttwu_runnable()    -- old rq, unavoidable, see comment there;
++ *  - ttwu_queue()       -- new rq, for enqueue of the task;
++ *  - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
++ *
++ * As a consequence we race really badly with just about everything. See the
++ * many memory barriers and their comments for details.
++ *
++ * Return: %true if @p->state changes (an actual wakeup was done),
++ *	   %false otherwise.
++ */
++int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
++{
++	guard(preempt)();
++	int cpu, success = 0;
++
++	if (p == current) {
++		/*
++		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
++		 * == smp_processor_id()'. Together this means we can special
++		 * case the whole 'p->on_rq && ttwu_runnable()' case below
++		 * without taking any locks.
++		 *
++		 * In particular:
++		 *  - we rely on Program-Order guarantees for all the ordering,
++		 *  - we're serialized against set_special_state() by virtue of
++		 *    it disabling IRQs (this allows not taking ->pi_lock).
++		 */
++		if (!ttwu_state_match(p, state, &success))
++			goto out;
++
++		trace_sched_waking(p);
++		ttwu_do_wakeup(p);
++		goto out;
++	}
++
++	/*
++	 * If we are going to wake up a thread waiting for CONDITION we
++	 * need to ensure that CONDITION=1 done by the caller cannot be
++	 * reordered with the p->state check below. This pairs with smp_store_mb()
++	 * in set_current_state() that the waiting thread does.
++	 */
++	scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
++		smp_mb__after_spinlock();
++		if (!ttwu_state_match(p, state, &success))
++			break;
++
++		trace_sched_waking(p);
++
++		/*
++		 * Ensure we load p->on_rq _after_ p->state, otherwise it would
++		 * be possible to, falsely, observe p->on_rq == 0 and get stuck
++		 * in smp_cond_load_acquire() below.
++		 *
++		 * sched_ttwu_pending()			try_to_wake_up()
++		 *   STORE p->on_rq = 1			  LOAD p->state
++		 *   UNLOCK rq->lock
++		 *
++		 * __schedule() (switch to task 'p')
++		 *   LOCK rq->lock			  smp_rmb();
++		 *   smp_mb__after_spinlock();
++		 *   UNLOCK rq->lock
++		 *
++		 * [task p]
++		 *   STORE p->state = UNINTERRUPTIBLE	  LOAD p->on_rq
++		 *
++		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++		 * __schedule().  See the comment for smp_mb__after_spinlock().
++		 *
++		 * A similar smp_rmb() lives in __task_needs_rq_lock().
++		 */
++		smp_rmb();
++		if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
++			break;
++
++#ifdef CONFIG_SMP
++		/*
++		 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
++		 * possible to, falsely, observe p->on_cpu == 0.
++		 *
++		 * One must be running (->on_cpu == 1) in order to remove oneself
++		 * from the runqueue.
++		 *
++		 * __schedule() (switch to task 'p')	try_to_wake_up()
++		 *   STORE p->on_cpu = 1		  LOAD p->on_rq
++		 *   UNLOCK rq->lock
++		 *
++		 * __schedule() (put 'p' to sleep)
++		 *   LOCK rq->lock			  smp_rmb();
++		 *   smp_mb__after_spinlock();
++		 *   STORE p->on_rq = 0			  LOAD p->on_cpu
++		 *
++		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++		 * __schedule().  See the comment for smp_mb__after_spinlock().
++		 *
++		 * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
++		 * schedule()'s deactivate_task() has 'happened' and p will no longer
++		 * care about its own p->state. See the comment in __schedule().
++		 */
++		smp_acquire__after_ctrl_dep();
++
++		/*
++		 * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
++		 * == 0), which means we need to do an enqueue. Change p->state to
++		 * TASK_WAKING such that we can unlock p->pi_lock before doing the
++		 * enqueue, such as in ttwu_queue_wakelist().
++		 */
++		WRITE_ONCE(p->__state, TASK_WAKING);
++
++		/*
++		 * If the owning (remote) CPU is still in the middle of schedule() with
++		 * this task as prev, consider queueing p on the remote CPU's wake_list
++		 * which potentially sends an IPI instead of spinning on p->on_cpu to
++		 * let the waker make forward progress. This is safe because IRQs are
++		 * disabled and the IPI will deliver after on_cpu is cleared.
++		 *
++		 * Ensure we load task_cpu(p) after p->on_cpu:
++		 *
++		 * set_task_cpu(p, cpu);
++		 *   STORE p->cpu = @cpu
++		 * __schedule() (switch to task 'p')
++		 *   LOCK rq->lock
++		 *   smp_mb__after_spinlock()           smp_cond_load_acquire(&p->on_cpu)
++		 *   STORE p->on_cpu = 1                LOAD p->cpu
++		 *
++		 * to ensure we observe the correct CPU on which the task is currently
++		 * scheduling.
++		 */
++		if (smp_load_acquire(&p->on_cpu) &&
++		    ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
++			break;
++
++		/*
++		 * If the owning (remote) CPU is still in the middle of schedule() with
++		 * this task as prev, wait until it's done referencing the task.
++		 *
++		 * Pairs with the smp_store_release() in finish_task().
++		 *
++		 * This ensures that tasks getting woken will be fully ordered against
++		 * their previous state and preserve Program Order.
++		 */
++		smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++		sched_task_ttwu(p);
++
++		if ((wake_flags & WF_CURRENT_CPU) &&
++		    cpumask_test_cpu(smp_processor_id(), p->cpus_ptr))
++			cpu = smp_processor_id();
++		else
++			cpu = select_task_rq(p);
++
++		if (cpu != task_cpu(p)) {
++			if (p->in_iowait) {
++				delayacct_blkio_end(p);
++				atomic_dec(&task_rq(p)->nr_iowait);
++			}
++
++			wake_flags |= WF_MIGRATED;
++			set_task_cpu(p, cpu);
++		}
++#else
++		sched_task_ttwu(p);
++
++		cpu = task_cpu(p);
++#endif /* CONFIG_SMP */
++
++		ttwu_queue(p, cpu, wake_flags);
++	}
++out:
++	if (success)
++		ttwu_stat(p, task_cpu(p), wake_flags);
++
++	return success;
++}
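++
++/*
++ * A minimal usage sketch of the pairing described above (illustrative
++ * only; 'cond' and 'waiter' are hypothetical):
++ *
++ *	// waiter				// waker
++ *	for (;;) {				cond = 1;
++ *		set_current_state(TASK_UNINTERRUPTIBLE);
++ *						wake_up_process(waiter);
++ *		if (cond)
++ *			break;
++ *		schedule();
++ *	}
++ *	__set_current_state(TASK_RUNNING);
++ *
++ * The smp_store_mb() in set_current_state() pairs with the
++ * smp_mb__after_spinlock() above: either the waiter observes cond == 1,
++ * or try_to_wake_up() observes the waiter's new state, so the wakeup
++ * cannot be lost.
++ */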
++
++static bool __task_needs_rq_lock(struct task_struct *p)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/*
++	 * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
++	 * the task is blocked. Make sure to check @state since ttwu() can drop
++	 * locks at the end, see ttwu_queue_wakelist().
++	 */
++	if (state == TASK_RUNNING || state == TASK_WAKING)
++		return true;
++
++	/*
++	 * Ensure we load p->on_rq after p->__state, otherwise it would be
++	 * possible to, falsely, observe p->on_rq == 0.
++	 *
++	 * See try_to_wake_up() for a longer comment.
++	 */
++	smp_rmb();
++	if (p->on_rq)
++		return true;
++
++#ifdef CONFIG_SMP
++	/*
++	 * Ensure the task has finished __schedule() and will not be referenced
++	 * anymore. Again, see try_to_wake_up() for a longer comment.
++	 */
++	smp_rmb();
++	smp_cond_load_acquire(&p->on_cpu, !VAL);
++#endif
++
++	return false;
++}
++
++/**
++ * task_call_func - Invoke a function on task in fixed state
++ * @p: Process for which the function is to be invoked, can be @current.
++ * @func: Function to invoke.
++ * @arg: Argument to function.
++ *
++ * Fix the task in its current state by avoiding wakeups and/or rq operations
++ * and call @func(@arg) on it.  This function can use task_is_runnable() and
++ * task_curr() to work out what the state is, if required.  Given that @func
++ * can be invoked with a runqueue lock held, it had better be quite
++ * lightweight.
++ *
++ * Returns:
++ *   Whatever @func returns
++ */
++int task_call_func(struct task_struct *p, task_call_f func, void *arg)
++{
++	struct rq *rq = NULL;
++	struct rq_flags rf;
++	int ret;
++
++	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
++
++	if (__task_needs_rq_lock(p))
++		rq = __task_rq_lock(p, &rf);
++
++	/*
++	 * At this point the task is pinned; either:
++	 *  - blocked and we're holding off wakeups      (pi->lock)
++	 *  - woken, and we're holding off enqueue       (rq->lock)
++	 *  - queued, and we're holding off schedule     (rq->lock)
++	 *  - running, and we're holding off de-schedule (rq->lock)
++	 *
++	 * The called function (@func) can use: task_curr(), p->on_rq and
++	 * p->__state to differentiate between these states.
++	 */
++	ret = func(p, arg);
++
++	if (rq)
++		__task_rq_unlock(rq, &rf);
++
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
++	return ret;
++}
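++
++/*
++ * A minimal sketch of a task_call_func() caller (the callback name is
++ * hypothetical); as documented above, the callback may run with a
++ * runqueue lock held, so it must stay lightweight and must not sleep:
++ *
++ *	static int read_state(struct task_struct *p, void *arg)
++ *	{
++ *		*(unsigned int *)arg = READ_ONCE(p->__state);
++ *		return 0;
++ *	}
++ *
++ *	unsigned int state;
++ *	task_call_func(p, read_state, &state);
++ */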
++
++/**
++ * cpu_curr_snapshot - Return a snapshot of the currently running task
++ * @cpu: The CPU on which to snapshot the task.
++ *
++ * Returns the task_struct pointer of the task "currently" running on
++ * the specified CPU.  If the same task is running on that CPU throughout,
++ * the return value will be a pointer to that task's task_struct structure.
++ * If the CPU did any context switches even vaguely concurrently with the
++ * execution of this function, the return value will be a pointer to the
++ * task_struct structure of a randomly chosen task that was running on
++ * that CPU somewhere around the time that this function was executing.
++ *
++ * If the specified CPU was offline, the return value is whatever it
++ * is, perhaps a pointer to the task_struct structure of that CPU's idle
++ * task, but there is no guarantee.  Callers wishing a useful return
++ * value must take some action to ensure that the specified CPU remains
++ * online throughout.
++ *
++ * This function executes full memory barriers before and after fetching
++ * the pointer, which permits the caller to confine this function's fetch
++ * with respect to the caller's accesses to other shared variables.
++ */
++struct task_struct *cpu_curr_snapshot(int cpu)
++{
++	struct task_struct *t;
++
++	smp_mb(); /* Pairing determined by caller's synchronization design. */
++	t = rcu_dereference(cpu_curr(cpu));
++	smp_mb(); /* Pairing determined by caller's synchronization design. */
++	return t;
++}
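++
++/*
++ * Usage sketch: since the snapshot is taken via rcu_dereference(), the
++ * caller is expected to hold the RCU read lock across the call and any
++ * use of the returned pointer:
++ *
++ *	rcu_read_lock();
++ *	t = cpu_curr_snapshot(cpu);
++ *	... inspect t, e.g. t->comm ...
++ *	rcu_read_unlock();
++ */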
++
++/**
++ * wake_up_process - Wake up a specific process
++ * @p: The process to be woken up.
++ *
++ * Attempt to wake up the nominated process and move it to the set of runnable
++ * processes.
++ *
++ * Return: 1 if the process was woken up, 0 if it was already running.
++ *
++ * This function executes a full memory barrier before accessing the task state.
++ */
++int wake_up_process(struct task_struct *p)
++{
++	return try_to_wake_up(p, TASK_NORMAL, 0);
++}
++EXPORT_SYMBOL(wake_up_process);
++
++int wake_up_state(struct task_struct *p, unsigned int state)
++{
++	return try_to_wake_up(p, state, 0);
++}
++
++/*
++ * Perform scheduler related setup for a newly forked process p.
++ * p is forked by current.
++ *
++ * __sched_fork() is basic setup which is also used by sched_init() to
++ * initialize the boot CPU's idle task.
++ */
++static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	p->on_rq			= 0;
++	p->on_cpu			= 0;
++	p->utime			= 0;
++	p->stime			= 0;
++	p->sched_time			= 0;
++
++#ifdef CONFIG_SCHEDSTATS
++	/* Even if schedstat is disabled, there should not be garbage */
++	memset(&p->stats, 0, sizeof(p->stats));
++#endif
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++	INIT_HLIST_HEAD(&p->preempt_notifiers);
++#endif
++
++#ifdef CONFIG_COMPACTION
++	p->capture_control = NULL;
++#endif
++#ifdef CONFIG_SMP
++	p->wake_entry.u_flags = CSD_TYPE_TTWU;
++#endif
++	init_sched_mm_cid(p);
++}
++
++/*
++ * fork()/clone()-time setup:
++ */
++int sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	__sched_fork(clone_flags, p);
++	/*
++	 * We mark the process as NEW here. This guarantees that
++	 * nobody will actually run it, and a signal or other external
++	 * event cannot wake it up and insert it on the runqueue either.
++	 */
++	p->__state = TASK_NEW;
++
++	/*
++	 * Make sure we do not leak PI boosting priority to the child.
++	 */
++	p->prio = current->normal_prio;
++
++	/*
++	 * Revert to default priority/policy on fork if requested.
++	 */
++	if (unlikely(p->sched_reset_on_fork)) {
++		if (task_has_rt_policy(p)) {
++			p->policy = SCHED_NORMAL;
++			p->static_prio = NICE_TO_PRIO(0);
++			p->rt_priority = 0;
++		} else if (PRIO_TO_NICE(p->static_prio) < 0)
++			p->static_prio = NICE_TO_PRIO(0);
++
++		p->prio = p->normal_prio = p->static_prio;
++
++		/*
++		 * We don't need the reset flag anymore after the fork. It has
++		 * fulfilled its duty:
++		 */
++		p->sched_reset_on_fork = 0;
++	}
++
++#ifdef CONFIG_SCHED_INFO
++	if (unlikely(sched_info_on()))
++		memset(&p->sched_info, 0, sizeof(p->sched_info));
++#endif
++	init_task_preempt_count(p);
++
++	return 0;
++}
++
++int sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	/*
++	 * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
++	 * required yet, but lockdep gets upset if rules are violated.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	/*
++	 * Share the timeslice between parent and child, so that the
++	 * total amount of pending timeslices in the system doesn't change,
++	 * resulting in more scheduling fairness.
++	 */
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	rq->curr->time_slice /= 2;
++	p->time_slice = rq->curr->time_slice;
++#ifdef CONFIG_SCHED_HRTICK
++	hrtick_start(rq, rq->curr->time_slice);
++#endif
++
++	if (p->time_slice < RESCHED_NS) {
++		p->time_slice = sysctl_sched_base_slice;
++		resched_curr(rq);
++	}
++	sched_task_fork(p, rq);
++	raw_spin_unlock(&rq->lock);
++
++	rseq_migrate(p);
++	/*
++	 * We're setting the CPU for the first time, we don't migrate,
++	 * so use __set_task_cpu().
++	 */
++	__set_task_cpu(p, smp_processor_id());
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	return 0;
++}
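++
++/*
++ * Worked example of the split above: a parent with 4ms of time slice
++ * left ends up with 2ms, and the child starts with the same 2ms, so the
++ * total pending time slice in the system is unchanged. Only if the
++ * halved slice falls below RESCHED_NS is the child refilled from
++ * sysctl_sched_base_slice and the current task marked for reschedule.
++ */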
++
++void sched_cancel_fork(struct task_struct *p)
++{
++}
++
++void sched_post_fork(struct task_struct *p)
++{
++}
++
++#ifdef CONFIG_SCHEDSTATS
++
++DEFINE_STATIC_KEY_FALSE(sched_schedstats);
++
++static void set_schedstats(bool enabled)
++{
++	if (enabled)
++		static_branch_enable(&sched_schedstats);
++	else
++		static_branch_disable(&sched_schedstats);
++}
++
++void force_schedstat_enabled(void)
++{
++	if (!schedstat_enabled()) {
++		pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
++		static_branch_enable(&sched_schedstats);
++	}
++}
++
++static int __init setup_schedstats(char *str)
++{
++	int ret = 0;
++	if (!str)
++		goto out;
++
++	if (!strcmp(str, "enable")) {
++		set_schedstats(true);
++		ret = 1;
++	} else if (!strcmp(str, "disable")) {
++		set_schedstats(false);
++		ret = 1;
++	}
++out:
++	if (!ret)
++		pr_warn("Unable to parse schedstats=\n");
++
++	return ret;
++}
++__setup("schedstats=", setup_schedstats);
++
++#ifdef CONFIG_PROC_SYSCTL
++static int sysctl_schedstats(const struct ctl_table *table, int write, void *buffer,
++		size_t *lenp, loff_t *ppos)
++{
++	struct ctl_table t;
++	int err;
++	int state = static_branch_likely(&sched_schedstats);
++
++	if (write && !capable(CAP_SYS_ADMIN))
++		return -EPERM;
++
++	t = *table;
++	t.data = &state;
++	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
++	if (err < 0)
++		return err;
++	if (write)
++		set_schedstats(state);
++	return err;
++}
++#endif /* CONFIG_PROC_SYSCTL */
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_SYSCTL
++static const struct ctl_table sched_core_sysctls[] = {
++#ifdef CONFIG_SCHEDSTATS
++	{
++		.procname       = "sched_schedstats",
++		.data           = NULL,
++		.maxlen         = sizeof(unsigned int),
++		.mode           = 0644,
++		.proc_handler   = sysctl_schedstats,
++		.extra1         = SYSCTL_ZERO,
++		.extra2         = SYSCTL_ONE,
++	},
++#endif /* CONFIG_SCHEDSTATS */
++};
++static int __init sched_core_sysctl_init(void)
++{
++	register_sysctl_init("kernel", sched_core_sysctls);
++	return 0;
++}
++late_initcall(sched_core_sysctl_init);
++#endif /* CONFIG_SYSCTL */
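++
++/*
++ * Runtime usage sketch for the sysctl registered above (writing requires
++ * CAP_SYS_ADMIN; values are clamped to 0/1):
++ *
++ *	# sysctl kernel.sched_schedstats=1
++ *	# cat /proc/sys/kernel/sched_schedstats
++ */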
++
++/*
++ * wake_up_new_task - wake up a newly created task for the first time.
++ *
++ * This function will do some initial scheduler statistics housekeeping
++ * that must be done for every newly created context, then puts the task
++ * on the runqueue and wakes it.
++ */
++void wake_up_new_task(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	rq = cpu_rq(select_task_rq(p));
++#ifdef CONFIG_SMP
++	rseq_migrate(p);
++	/*
++	 * Fork balancing, do it here and not earlier because:
++	 * - cpus_ptr can change in the fork path
++	 * - any previously selected CPU might disappear through hotplug
++	 *
++	 * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
++	 * as we're not fully set-up yet.
++	 */
++	__set_task_cpu(p, cpu_of(rq));
++#endif
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	activate_task(p, rq);
++	trace_sched_wakeup_new(p);
++	wakeup_preempt(rq);
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++
++static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
++
++void preempt_notifier_inc(void)
++{
++	static_branch_inc(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_inc);
++
++void preempt_notifier_dec(void)
++{
++	static_branch_dec(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_dec);
++
++/**
++ * preempt_notifier_register - tell me when current is being preempted & rescheduled
++ * @notifier: notifier struct to register
++ */
++void preempt_notifier_register(struct preempt_notifier *notifier)
++{
++	if (!static_branch_unlikely(&preempt_notifier_key))
++		WARN(1, "registering preempt_notifier while notifiers disabled\n");
++
++	hlist_add_head(&notifier->link, &current->preempt_notifiers);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_register);
++
++/**
++ * preempt_notifier_unregister - no longer interested in preemption notifications
++ * @notifier: notifier struct to unregister
++ *
++ * This is *not* safe to call from within a preemption notifier.
++ */
++void preempt_notifier_unregister(struct preempt_notifier *notifier)
++{
++	hlist_del(&notifier->link);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
++
++static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_in(notifier, raw_smp_processor_id());
++}
++
++static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_in_preempt_notifiers(curr);
++}
++
++static void
++__fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				   struct task_struct *next)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_out(notifier, next);
++}
++
++static __always_inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_out_preempt_notifiers(curr, next);
++}
++
++#else /* !CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++}
++
++static inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++}
++
++#endif /* CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void prepare_task(struct task_struct *next)
++{
++	/*
++	 * Claim the task as running, we do this before switching to it
++	 * such that any running task will have this set.
++	 *
++	 * See the smp_load_acquire(&p->on_cpu) case in ttwu() and
++	 * its ordering comment.
++	 */
++	WRITE_ONCE(next->on_cpu, 1);
++}
++
++static inline void finish_task(struct task_struct *prev)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * This must be the very last reference to @prev from this CPU. After
++	 * p->on_cpu is cleared, the task can be moved to a different CPU. We
++	 * must ensure this doesn't happen until the switch is completely
++	 * finished.
++	 *
++	 * In particular, the load of prev->state in finish_task_switch() must
++	 * happen before this.
++	 *
++	 * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
++	 */
++	smp_store_release(&prev->on_cpu, 0);
++#else
++	prev->on_cpu = 0;
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++	void (*func)(struct rq *rq);
++	struct balance_callback *next;
++
++	lockdep_assert_held(&rq->lock);
++
++	while (head) {
++		func = (void (*)(struct rq *))head->func;
++		next = head->next;
++		head->next = NULL;
++		head = next;
++
++		func(rq);
++	}
++}
++
++static void balance_push(struct rq *rq);
++
++/*
++ * balance_push_callback is a right abuse of the callback interface and plays
++ * by significantly different rules.
++ *
++ * Where the normal balance_callback's purpose is to be run in the same context
++ * that queued it (only later, when it's safe to drop rq->lock again),
++ * balance_push_callback is specifically targeted at __schedule().
++ *
++ * This abuse is tolerated because it places all the unlikely/odd cases behind
++ * a single test, namely: rq->balance_callback == NULL.
++ */
++struct balance_callback balance_push_callback = {
++	.next = NULL,
++	.func = balance_push,
++};
++
++static inline struct balance_callback *
++__splice_balance_callbacks(struct rq *rq, bool split)
++{
++	struct balance_callback *head = rq->balance_callback;
++
++	if (likely(!head))
++		return NULL;
++
++	lockdep_assert_rq_held(rq);
++	/*
++	 * Must not take balance_push_callback off the list when
++	 * splice_balance_callbacks() and balance_callbacks() are not
++	 * in the same rq->lock section.
++	 *
++	 * In that case it would be possible for __schedule() to interleave
++	 * and observe the list empty.
++	 */
++	if (split && head == &balance_push_callback)
++		head = NULL;
++	else
++		rq->balance_callback = NULL;
++
++	return head;
++}
++
++struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++	return __splice_balance_callbacks(rq, true);
++}
++
++static void __balance_callbacks(struct rq *rq)
++{
++	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
++}
++
++void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++	unsigned long flags;
++
++	if (unlikely(head)) {
++		raw_spin_lock_irqsave(&rq->lock, flags);
++		do_balance_callbacks(rq, head);
++		raw_spin_unlock_irqrestore(&rq->lock, flags);
++	}
++}
++
++#else
++
++static inline void __balance_callbacks(struct rq *rq)
++{
++}
++#endif
++
++static inline void
++prepare_lock_switch(struct rq *rq, struct task_struct *next)
++{
++	/*
++	 * The runqueue lock will be released by the next
++	 * task (which is an invalid locking op, but in the case
++	 * of the scheduler it's an obvious special case), so we
++	 * do an early lockdep release here:
++	 */
++	spin_release(&rq->lock.dep_map, _THIS_IP_);
++#ifdef CONFIG_DEBUG_SPINLOCK
++	/* this is a valid case when another task releases the spinlock */
++	rq->lock.owner = next;
++#endif
++}
++
++static inline void finish_lock_switch(struct rq *rq)
++{
++	/*
++	 * If we are tracking spinlock dependencies then we have to
++	 * fix up the runqueue lock - which gets 'carried over' from
++	 * prev into current:
++	 */
++	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
++	__balance_callbacks(rq);
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++/*
++ * NOP if the arch has not defined these:
++ */
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++static inline void kmap_local_sched_out(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_out();
++#endif
++}
++
++static inline void kmap_local_sched_in(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_in();
++#endif
++}
++
++/**
++ * prepare_task_switch - prepare to switch tasks
++ * @rq: the runqueue preparing to switch
++ * @next: the task we are going to switch to.
++ *
++ * This is called with the rq lock held and interrupts off. It must
++ * be paired with a subsequent finish_task_switch after the context
++ * switch.
++ *
++ * prepare_task_switch sets up locking and calls architecture specific
++ * hooks.
++ */
++static inline void
++prepare_task_switch(struct rq *rq, struct task_struct *prev,
++		    struct task_struct *next)
++{
++	kcov_prepare_switch(prev);
++	sched_info_switch(rq, prev, next);
++	perf_event_task_sched_out(prev, next);
++	rseq_preempt(prev);
++	fire_sched_out_preempt_notifiers(prev, next);
++	kmap_local_sched_out();
++	prepare_task(next);
++	prepare_arch_switch(next);
++}
++
++/**
++ * finish_task_switch - clean up after a task-switch
++ * @rq: runqueue associated with task-switch
++ * @prev: the thread we just switched away from.
++ *
++ * finish_task_switch must be called after the context switch, paired
++ * with a prepare_task_switch call before the context switch.
++ * finish_task_switch will reconcile locking set up by prepare_task_switch,
++ * and do any other architecture-specific cleanup actions.
++ *
++ * Note that we may have delayed dropping an mm in context_switch(). If
++ * so, we finish that here outside of the runqueue lock.  (Doing it
++ * with the lock held can cause deadlocks; see schedule() for
++ * details.)
++ *
++ * The context switch has flipped the stack from under us and restored the
++ * local variables which were saved when this task called schedule() in the
++ * past. 'prev == current' is still correct but we need to recalculate this_rq
++ * because prev may have moved to another CPU.
++ */
++static struct rq *finish_task_switch(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	struct rq *rq = this_rq();
++	struct mm_struct *mm = rq->prev_mm;
++	unsigned int prev_state;
++
++	/*
++	 * The previous task will have left us with a preempt_count of 2
++	 * because it left us after:
++	 *
++	 *	schedule()
++	 *	  preempt_disable();			// 1
++	 *	  __schedule()
++	 *	    raw_spin_lock_irq(&rq->lock)	// 2
++	 *
++	 * Also, see FORK_PREEMPT_COUNT.
++	 */
++	if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
++		      "corrupted preempt_count: %s/%d/0x%x\n",
++		      current->comm, current->pid, preempt_count()))
++		preempt_count_set(FORK_PREEMPT_COUNT);
++
++	rq->prev_mm = NULL;
++
++	/*
++	 * A task struct has one reference for the use as "current".
++	 * If a task dies, then it sets TASK_DEAD in tsk->state and calls
++	 * schedule one last time. The schedule call will never return, and
++	 * the scheduled task must drop that reference.
++	 *
++	 * We must observe prev->state before clearing prev->on_cpu (in
++	 * finish_task), otherwise a concurrent wakeup can get prev
++	 * running on another CPU and we could race with its RUNNING -> DEAD
++	 * transition, resulting in a double drop.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	vtime_task_switch(prev);
++	perf_event_task_sched_in(prev, current);
++	finish_task(prev);
++	tick_nohz_task_switch();
++	finish_lock_switch(rq);
++	finish_arch_post_lock_switch();
++	kcov_finish_switch(current);
++	/*
++	 * kmap_local_sched_out() is invoked with rq::lock held and
++	 * interrupts disabled. There is no requirement for that, but the
++	 * sched out code does not have an interrupt enabled section.
++	 * Restoring the maps on sched in does not require interrupts being
++	 * disabled either.
++	 */
++	kmap_local_sched_in();
++
++	fire_sched_in_preempt_notifiers(current);
++	/*
++	 * When switching through a kernel thread, the loop in
++	 * membarrier_{private,global}_expedited() may have observed that
++	 * kernel thread and not issued an IPI. It is therefore possible to
++	 * schedule between user->kernel->user threads without passing through
++	 * switch_mm(). Membarrier requires a barrier after storing to
++	 * rq->curr, before returning to userspace, so provide them here:
++	 *
++	 * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
++	 *   provided by mmdrop_lazy_tlb(),
++	 * - a sync_core for SYNC_CORE.
++	 */
++	if (mm) {
++		membarrier_mm_sync_core_before_usermode(mm);
++		mmdrop_lazy_tlb_sched(mm);
++	}
++	if (unlikely(prev_state == TASK_DEAD)) {
++		/* Task is done with its stack. */
++		put_task_stack(prev);
++
++		put_task_struct_rcu_user(prev);
++	}
++
++	return rq;
++}
++
++/**
++ * schedule_tail - first thing a freshly forked thread must call.
++ * @prev: the thread we just switched away from.
++ */
++asmlinkage __visible void schedule_tail(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	/*
++	 * New tasks start with FORK_PREEMPT_COUNT, see there and
++	 * finish_task_switch() for details.
++	 *
++	 * finish_task_switch() will drop rq->lock and lower preempt_count
++	 * and the preempt_enable() will end up enabling preemption (on
++	 * PREEMPT_COUNT kernels).
++	 */
++
++	finish_task_switch(prev);
++	/*
++	 * This is a special case: the newly created task has just
++	 * switched the context for the first time. It is returning from
++	 * schedule for the first time in this path.
++	 */
++	trace_sched_exit_tp(true, CALLER_ADDR0);
++	preempt_enable();
++
++	if (current->set_child_tid)
++		put_user(task_pid_vnr(current), current->set_child_tid);
++
++	calculate_sigpending();
++}
++
++/*
++ * context_switch - switch to the new MM and the new thread's register state.
++ */
++static __always_inline struct rq *
++context_switch(struct rq *rq, struct task_struct *prev,
++	       struct task_struct *next)
++{
++	prepare_task_switch(rq, prev, next);
++
++	/*
++	 * For paravirt, this is coupled with an exit in switch_to to
++	 * combine the page table reload and the switch backend into
++	 * one hypercall.
++	 */
++	arch_start_context_switch(prev);
++
++	/*
++	 * kernel -> kernel   lazy + transfer active
++	 *   user -> kernel   lazy + mmgrab_lazy_tlb() active
++	 *
++	 * kernel ->   user   switch + mmdrop_lazy_tlb() active
++	 *   user ->   user   switch
++	 *
++	 * switch_mm_cid() needs to be updated if the barriers provided
++	 * by context_switch() are modified.
++	 */
++	if (!next->mm) {                                // to kernel
++		enter_lazy_tlb(prev->active_mm, next);
++
++		next->active_mm = prev->active_mm;
++		if (prev->mm)                           // from user
++			mmgrab_lazy_tlb(prev->active_mm);
++		else
++			prev->active_mm = NULL;
++	} else {                                        // to user
++		membarrier_switch_mm(rq, prev->active_mm, next->mm);
++		/*
++		 * sys_membarrier() requires an smp_mb() between setting
++		 * rq->curr / membarrier_switch_mm() and returning to userspace.
++		 *
++		 * The below provides this either through switch_mm(), or in
++		 * case 'prev->active_mm == next->mm' through
++		 * finish_task_switch()'s mmdrop().
++		 */
++		switch_mm_irqs_off(prev->active_mm, next->mm, next);
++		lru_gen_use_mm(next->mm);
++
++		if (!prev->mm) {                        // from kernel
++			/* will mmdrop_lazy_tlb() in finish_task_switch(). */
++			rq->prev_mm = prev->active_mm;
++			prev->active_mm = NULL;
++		}
++	}
++
++	/* switch_mm_cid() requires the memory barriers above. */
++	switch_mm_cid(rq, prev, next);
++
++	prepare_lock_switch(rq, next);
++
++	/* Here we just switch the register state and the stack. */
++	switch_to(prev, next, prev);
++	barrier();
++
++	return finish_task_switch(prev);
++}
++
++/*
++ * nr_running, nr_uninterruptible and nr_context_switches:
++ *
++ * externally visible scheduler statistics: current number of runnable
++ * threads, total number of context switches performed since bootup.
++ */
++unsigned int nr_running(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_online_cpu(i)
++		sum += cpu_rq(i)->nr_running;
++
++	return sum;
++}
++
++/*
++ * Check if only the current task is running on the CPU.
++ *
++ * Caution: this function does not check that the caller has disabled
++ * preemption, thus the result might have a time-of-check-to-time-of-use
++ * race.  The caller is responsible for using it correctly, for example:
++ *
++ * - from a non-preemptible section (of course)
++ *
++ * - from a thread that is bound to a single CPU
++ *
++ * - in a loop with very short iterations (e.g. a polling loop)
++ */
++bool single_task_running(void)
++{
++	return raw_rq()->nr_running == 1;
++}
++EXPORT_SYMBOL(single_task_running);
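++
++/*
++ * Per the caution above, a usage sketch that avoids the TOCTOU race by
++ * reading from a non-preemptible section:
++ *
++ *	preempt_disable();
++ *	if (single_task_running())
++ *		... result is stable for the current CPU here ...
++ *	preempt_enable();
++ */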
++
++unsigned long long nr_context_switches_cpu(int cpu)
++{
++	return cpu_rq(cpu)->nr_switches;
++}
++
++unsigned long long nr_context_switches(void)
++{
++	int i;
++	unsigned long long sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += cpu_rq(i)->nr_switches;
++
++	return sum;
++}
++
++/*
++ * Consumers of these two interfaces, like for example the cpuidle menu
++ * governor, are using nonsensical data: they prefer shallow idle state
++ * selection for a CPU that has IO-wait pending, even though that CPU might
++ * not even end up running the task when it does become runnable.
++ */
++
++unsigned int nr_iowait_cpu(int cpu)
++{
++	return atomic_read(&cpu_rq(cpu)->nr_iowait);
++}
++
++/*
++ * IO-wait accounting, and how it's mostly bollocks (on SMP).
++ *
++ * The idea behind IO-wait accounting is to account the idle time that we could
++ * have spent running if it were not for IO. That is, if we were to improve the
++ * storage performance, we'd have a proportional reduction in IO-wait time.
++ *
++ * This all works nicely on UP, where, when a task blocks on IO, we account
++ * idle time as IO-wait, because if the storage were faster, it could've been
++ * running and we'd not be idle.
++ *
++ * This has been extended to SMP, by doing the same for each CPU. This however
++ * is broken.
++ *
++ * Imagine for instance the case where two tasks block on one CPU, only the one
++ * CPU will have IO-wait accounted, while the other has regular idle. Even
++ * though, if the storage were faster, both could've run at the same time,
++ * utilising both CPUs.
++ *
++ * This means, that when looking globally, the current IO-wait accounting on
++ * SMP is a lower bound, due to under-accounting.
++ *
++ * Worse, since the numbers are provided per CPU, they are sometimes
++ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
++ * associated with any one particular CPU, it can wake to another CPU than it
++ * blocked on. This means the per CPU IO-wait number is meaningless.
++ *
++ * Task CPU affinities can make all that even more 'interesting'.
++ */
++
++unsigned int nr_iowait(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += nr_iowait_cpu(i);
++
++	return sum;
++}
++
++#ifdef CONFIG_SMP
++
++/*
++ * sched_exec - execve() is a valuable balancing opportunity, because at
++ * this point the task has the smallest effective memory and cache
++ * footprint.
++ */
++void sched_exec(void)
++{
++}
++
++#endif
++
++DEFINE_PER_CPU(struct kernel_stat, kstat);
++DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
++
++EXPORT_PER_CPU_SYMBOL(kstat);
++EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
++
++static inline void update_curr(struct rq *rq, struct task_struct *p)
++{
++	s64 ns = rq->clock_task - p->last_ran;
++
++	p->sched_time += ns;
++	cgroup_account_cputime(p, ns);
++	account_group_exec_runtime(p, ns);
++
++	p->time_slice -= ns;
++	p->last_ran = rq->clock_task;
++}
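++
++/*
++ * Example of the bookkeeping above: if 1.5ms of task clock elapsed since
++ * last_ran, sched_time grows by 1.5ms, time_slice shrinks by 1.5ms, and
++ * last_ran is advanced so the next delta starts from the new clock value.
++ */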
++
++/*
++ * Return accounted runtime for the task.
++ * If the task is currently running, also include its pending runtime
++ * that has not been accounted yet.
++ */
++unsigned long long task_sched_runtime(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	u64 ns;
++
++#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
++	/*
++	 * 64-bit doesn't need locks to atomically read a 64-bit value.
++	 * So we have an optimization chance when the task's delta_exec is 0.
++	 * Reading ->on_cpu is racy, but this is OK.
++	 *
++	 * If we race with it leaving CPU, we'll take a lock. So we're correct.
++	 * If we race with it entering CPU, unaccounted time is 0. This is
++	 * indistinguishable from the read occurring a few cycles earlier.
++	 * If we see ->on_cpu without ->on_rq, the task is leaving, and has
++	 * been accounted, so we're correct here as well.
++	 */
++	if (!p->on_cpu || !task_on_rq_queued(p))
++		return tsk_seruntime(p);
++#endif
++
++	rq = task_access_lock_irqsave(p, &lock, &flags);
++	/*
++	 * Must be ->curr _and_ ->on_rq.  If dequeued, we would
++	 * project cycles that may never be accounted to this
++	 * thread, breaking clock_gettime().
++	 */
++	if (p == rq->curr && task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		update_curr(rq, p);
++	}
++	ns = tsk_seruntime(p);
++	task_access_unlock_irqrestore(p, lock, &flags);
++
++	return ns;
++}
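++
++/*
++ * Among other callers, this is what the POSIX CPU-time clocks build on,
++ * so a thread can read its own accounted runtime from userspace, e.g.:
++ *
++ *	struct timespec ts;
++ *	clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
++ *
++ * which is why the pending runtime of a running task must be folded in
++ * above rather than left until the next update_curr().
++ */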
++
++/* This manages tasks that have run out of time slice during a sched_tick */
++static inline void scheduler_task_tick(struct rq *rq)
++{
++	struct task_struct *p = rq->curr;
++
++	if (is_idle_task(p))
++		return;
++
++	update_curr(rq, p);
++	cpufreq_update_util(rq, 0);
++
++	/*
++	 * Tasks that have less than RESCHED_NS of time slice left will be
++	 * rescheduled.
++	 */
++	if (p->time_slice >= RESCHED_NS)
++		return;
++	set_tsk_need_resched(p);
++	set_preempt_need_resched();
++}
++
++static u64 cpu_resched_latency(struct rq *rq)
++{
++	int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
++	u64 resched_latency, now = rq_clock(rq);
++	static bool warned_once;
++
++	if (sysctl_resched_latency_warn_once && warned_once)
++		return 0;
++
++	if (!need_resched() || !latency_warn_ms)
++		return 0;
++
++	if (system_state == SYSTEM_BOOTING)
++		return 0;
++
++	if (!rq->last_seen_need_resched_ns) {
++		rq->last_seen_need_resched_ns = now;
++		rq->ticks_without_resched = 0;
++		return 0;
++	}
++
++	rq->ticks_without_resched++;
++	resched_latency = now - rq->last_seen_need_resched_ns;
++	if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
++		return 0;
++
++	warned_once = true;
++
++	return resched_latency;
++}
++
++static int __init setup_resched_latency_warn_ms(char *str)
++{
++	long val;
++
++	if ((kstrtol(str, 0, &val))) {
++		pr_warn("Unable to set resched_latency_warn_ms\n");
++		return 1;
++	}
++
++	sysctl_resched_latency_warn_ms = val;
++	return 1;
++}
++__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
++
++/*
++ * This function gets called by the timer code, with HZ frequency.
++ * We call it with interrupts disabled.
++ */
++void sched_tick(void)
++{
++	int cpu __maybe_unused = smp_processor_id();
++	struct rq *rq = cpu_rq(cpu);
++	struct task_struct *curr = rq->curr;
++	u64 resched_latency;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE))
++		arch_scale_freq_tick();
++
++	sched_clock_tick();
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	if (dynamic_preempt_lazy() && tif_test_bit(TIF_NEED_RESCHED_LAZY))
++		resched_curr(rq);
++
++	scheduler_task_tick(rq);
++	if (sched_feat(LATENCY_WARN))
++		resched_latency = cpu_resched_latency(rq);
++	calc_global_load_tick(rq);
++
++	task_tick_mm_cid(rq, rq->curr);
++
++	raw_spin_unlock(&rq->lock);
++
++	if (sched_feat(LATENCY_WARN) && resched_latency)
++		resched_latency_warn(cpu, resched_latency);
++
++	perf_event_task_tick();
++
++	if (curr->flags & PF_WQ_WORKER)
++		wq_worker_tick(curr);
++}
++
++#ifdef CONFIG_NO_HZ_FULL
++
++struct tick_work {
++	int			cpu;
++	atomic_t		state;
++	struct delayed_work	work;
++};
++/* Values for ->state, see diagram below. */
++#define TICK_SCHED_REMOTE_OFFLINE	0
++#define TICK_SCHED_REMOTE_OFFLINING	1
++#define TICK_SCHED_REMOTE_RUNNING	2
++
++/*
++ * State diagram for ->state:
++ *
++ *
++ *          TICK_SCHED_REMOTE_OFFLINE
++ *                    |   ^
++ *                    |   |
++ *                    |   | sched_tick_remote()
++ *                    |   |
++ *                    |   |
++ *                    +--TICK_SCHED_REMOTE_OFFLINING
++ *                    |   ^
++ *                    |   |
++ * sched_tick_start() |   | sched_tick_stop()
++ *                    |   |
++ *                    V   |
++ *          TICK_SCHED_REMOTE_RUNNING
++ *
++ *
++ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
++ * and sched_tick_start() are happy to leave the state in RUNNING.
++ */
++
++static struct tick_work __percpu *tick_work_cpu;
++
++static void sched_tick_remote(struct work_struct *work)
++{
++	struct delayed_work *dwork = to_delayed_work(work);
++	struct tick_work *twork = container_of(dwork, struct tick_work, work);
++	int cpu = twork->cpu;
++	struct rq *rq = cpu_rq(cpu);
++	int os;
++
++	/*
++	 * Handle the tick only if it appears the remote CPU is running in full
++	 * dynticks mode. The check is racy by nature, but missing a tick or
++	 * having one too many is no big deal because the scheduler tick updates
++	 * statistics and checks timeslices in a time-independent way, regardless
++	 * of when exactly it is running.
++	 */
++	if (tick_nohz_tick_stopped_cpu(cpu)) {
++		guard(raw_spinlock_irqsave)(&rq->lock);
++		struct task_struct *curr = rq->curr;
++
++		if (cpu_online(cpu)) {
++			update_rq_clock(rq);
++
++			if (!is_idle_task(curr)) {
++				/*
++				 * Make sure the next tick runs within a
++				 * reasonable amount of time.
++				 */
++				u64 delta = rq_clock_task(rq) - curr->last_ran;
++				WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
++			}
++			scheduler_task_tick(rq);
++
++			calc_load_nohz_remote(rq);
++		}
++	}
++
++	/*
++	 * Run the remote tick once per second (1Hz). This arbitrary
++	 * frequency is low enough to avoid overload but high enough
++	 * to keep scheduler internal stats reasonably up to date.  But
++	 * first update state to reflect hotplug activity if required.
++	 */
++	os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
++	if (os == TICK_SCHED_REMOTE_RUNNING)
++		queue_delayed_work(system_unbound_wq, dwork, HZ);
++}
++
++static void sched_tick_start(int cpu)
++{
++	int os;
++	struct tick_work *twork;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
++	if (os == TICK_SCHED_REMOTE_OFFLINE) {
++		twork->cpu = cpu;
++		INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
++		queue_delayed_work(system_unbound_wq, &twork->work, HZ);
++	}
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++static void sched_tick_stop(int cpu)
++{
++	struct tick_work *twork;
++	int os;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	/* There cannot be competing actions, but don't rely on stop-machine. */
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
++	WARN_ON_ONCE(os != TICK_SCHED_REMOTE_RUNNING);
++	/* Don't cancel, as this would mess up the state machine. */
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++int __init sched_tick_offload_init(void)
++{
++	tick_work_cpu = alloc_percpu(struct tick_work);
++	BUG_ON(!tick_work_cpu);
++	return 0;
++}
++
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_tick_start(int cpu) { }
++static inline void sched_tick_stop(int cpu) { }
++#endif
++
++#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
++				defined(CONFIG_PREEMPT_TRACER))
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just disabled preemption. Start timing the latency.
++ */
++static inline void preempt_latency_start(int val)
++{
++	if (preempt_count() == val) {
++		unsigned long ip = get_lock_parent_ip();
++#ifdef CONFIG_DEBUG_PREEMPT
++		current->preempt_disable_ip = ip;
++#endif
++		trace_preempt_off(CALLER_ADDR0, ip);
++	}
++}
++
++void preempt_count_add(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
++		return;
++#endif
++	__preempt_count_add(val);
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Spinlock count overflowing soon?
++	 */
++	DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
++				PREEMPT_MASK - 10);
++#endif
++	preempt_latency_start(val);
++}
++EXPORT_SYMBOL(preempt_count_add);
++NOKPROBE_SYMBOL(preempt_count_add);
++
++/*
++ * If the value passed in equals the current preempt count
++ * then we just enabled preemption. Stop timing the latency.
++ */
++static inline void preempt_latency_stop(int val)
++{
++	if (preempt_count() == val)
++		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
++}
++
++void preempt_count_sub(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
++		return;
++	/*
++	 * Is the spinlock portion underflowing?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
++			!(preempt_count() & PREEMPT_MASK)))
++		return;
++#endif
++
++	preempt_latency_stop(val);
++	__preempt_count_sub(val);
++}
++EXPORT_SYMBOL(preempt_count_sub);
++NOKPROBE_SYMBOL(preempt_count_sub);
++
++#else
++static inline void preempt_latency_start(int val) { }
++static inline void preempt_latency_stop(int val) { }
++#endif
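++
++/*
++ * Conceptually, on kernels built with the config options above, every
++ * preemption toggle funnels through these helpers:
++ *
++ *	preempt_disable();	// preempt_count_add(1): latency timing starts
++ *	... critical section ...
++ *	preempt_enable();	// preempt_count_sub(1): latency timing stops
++ *
++ * which is what lets the preempt-off latency tracer time each outermost
++ * disable/enable pair.
++ */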
++
++static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	return p->preempt_disable_ip;
++#else
++	return 0;
++#endif
++}
++
++/*
++ * Print scheduling while atomic bug:
++ */
++static noinline void __schedule_bug(struct task_struct *prev)
++{
++	/* Save this before calling printk(), since that will clobber it */
++	unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
++
++	if (oops_in_progress)
++		return;
++
++	printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
++		prev->comm, prev->pid, preempt_count());
++
++	debug_show_held_locks(prev);
++	print_modules();
++	if (irqs_disabled())
++		print_irqtrace_events(prev);
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++		pr_err("Preemption disabled at:");
++		print_ip_sym(KERN_ERR, preempt_disable_ip);
++	}
++	check_panic_on_warn("scheduling while atomic");
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++
++/*
++ * Various schedule()-time debugging checks and statistics:
++ */
++static inline void schedule_debug(struct task_struct *prev, bool preempt)
++{
++#ifdef CONFIG_SCHED_STACK_END_CHECK
++	if (task_stack_end_corrupted(prev))
++		panic("corrupted stack end detected inside scheduler\n");
++
++	if (task_scs_end_corrupted(prev))
++		panic("corrupted shadow stack detected inside scheduler\n");
++#endif
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++	if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
++		printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
++			prev->comm, prev->pid, prev->non_block_count);
++		dump_stack();
++		add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++	}
++#endif
++
++	if (unlikely(in_atomic_preempt_off())) {
++		__schedule_bug(prev);
++		preempt_count_set(PREEMPT_DISABLED);
++	}
++	rcu_sleep_check();
++	WARN_ON_ONCE(ct_state() == CT_STATE_USER);
++
++	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
++
++	schedstat_inc(this_rq()->sched_count);
++}
++
++#ifdef ALT_SCHED_DEBUG
++void alt_sched_debug(void)
++{
++	printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx,"
++	       " ecore_idle: 0x%04lx\n",
++	       sched_rq_pending_mask.bits[0],
++	       sched_idle_mask->bits[0],
++	       sched_pcore_idle_mask->bits[0],
++	       sched_ecore_idle_mask->bits[0]);
++}
++#endif
++
++#ifdef	CONFIG_SMP
++
++#ifdef CONFIG_PREEMPT_RT
++#define SCHED_NR_MIGRATE_BREAK 8
++#else
++#define SCHED_NR_MIGRATE_BREAK 32
++#endif
++
++__read_mostly unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
++
++/*
++ * Migrate pending tasks in @rq to @dest_cpu
++ */
++static inline int
++migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
++{
++	struct task_struct *p, *skip = rq->curr;
++	int nr_migrated = 0;
++	int nr_tries = min(rq->nr_running / 2, sysctl_sched_nr_migrate);
++
++	/* Workaround to check rq->curr is still on rq */
++	if (!task_on_rq_queued(skip))
++		return 0;
++
++	while (skip != rq->idle && nr_tries &&
++	       (p = sched_rq_next_task(skip, rq)) != rq->idle) {
++		skip = sched_rq_next_task(p, rq);
++		if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
++			__SCHED_DEQUEUE_TASK(p, rq, 0, );
++			set_task_cpu(p, dest_cpu);
++			sched_task_sanity_check(p, dest_rq);
++			sched_mm_cid_migrate_to(dest_rq, p);
++			__SCHED_ENQUEUE_TASK(p, dest_rq, 0, );
++			nr_migrated++;
++		}
++		nr_tries--;
++	}
++
++	return nr_migrated;
++}
++
++static inline int take_other_rq_tasks(struct rq *rq, int cpu)
++{
++	cpumask_t *topo_mask, *end_mask, chk;
++
++	if (unlikely(!rq->online))
++		return 0;
++
++	if (cpumask_empty(&sched_rq_pending_mask))
++		return 0;
++
++	topo_mask = per_cpu(sched_cpu_topo_masks, cpu);
++	end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
++	do {
++		int i;
++
++		if (!cpumask_and(&chk, &sched_rq_pending_mask, topo_mask))
++			continue;
++
++		for_each_cpu_wrap(i, &chk, cpu) {
++			int nr_migrated;
++			struct rq *src_rq;
++
++			src_rq = cpu_rq(i);
++			if (!do_raw_spin_trylock(&src_rq->lock))
++				continue;
++			spin_acquire(&src_rq->lock.dep_map,
++				     SINGLE_DEPTH_NESTING, 1, _RET_IP_);
++
++			if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
++				src_rq->nr_running -= nr_migrated;
++				if (src_rq->nr_running < 2)
++					cpumask_clear_cpu(i, &sched_rq_pending_mask);
++
++				spin_release(&src_rq->lock.dep_map, _RET_IP_);
++				do_raw_spin_unlock(&src_rq->lock);
++
++				rq->nr_running += nr_migrated;
++				if (rq->nr_running > 1)
++					cpumask_set_cpu(cpu, &sched_rq_pending_mask);
++
++				update_sched_preempt_mask(rq);
++				cpufreq_update_util(rq, 0);
++
++				return 1;
++			}
++
++			spin_release(&src_rq->lock.dep_map, _RET_IP_);
++			do_raw_spin_unlock(&src_rq->lock);
++		}
++	} while (++topo_mask < end_mask);
++
++	return 0;
++}
++#endif
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sysctl_sched_base_slice;
++
++	sched_task_renew(p, rq);
++
++	if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
++		requeue_task(p, rq);
++}
++
++/*
++ * Timeslices below RESCHED_NS are considered as good as expired, since
++ * there's no point rescheduling when there's so little time left.
++ */
++static inline void check_curr(struct task_struct *p, struct rq *rq)
++{
++	if (unlikely(rq->idle == p))
++		return;
++
++	update_curr(rq, p);
++
++	if (p->time_slice < RESCHED_NS)
++		time_slice_expired(p, rq);
++}
++
++static inline struct task_struct *
++choose_next_task(struct rq *rq, int cpu)
++{
++	struct task_struct *next = sched_rq_first_task(rq);
++
++	if (next == rq->idle) {
++#ifdef	CONFIG_SMP
++		if (!take_other_rq_tasks(rq, cpu)) {
++			if (likely(rq->balance_func && rq->online))
++				rq->balance_func(rq, cpu);
++#endif /* CONFIG_SMP */
++
++			schedstat_inc(rq->sched_goidle);
++			/*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
++			return next;
++#ifdef	CONFIG_SMP
++		}
++		next = sched_rq_first_task(rq);
++#endif
++	}
++#ifdef CONFIG_HIGH_RES_TIMERS
++	hrtick_start(rq, next->time_slice);
++#endif
++	/*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu, next);*/
++	return next;
++}
++
++/*
++ * Constants for the sched_mode argument of __schedule().
++ *
++ * The mode argument allows RT enabled kernels to differentiate a
++ * preemption from blocking on an 'sleeping' spin/rwlock.
++ */
++ #define SM_IDLE		(-1)
++ #define SM_NONE		0
++ #define SM_PREEMPT		1
++ #define SM_RTLOCK_WAIT		2
++
++/*
++ * Helper function for __schedule().
++ *
++ * If the task does not have signals pending, deactivate it.
++ * Otherwise mark the task's __state as RUNNING.
++ */
++static bool try_to_block_task(struct rq *rq, struct task_struct *p,
++			      unsigned long task_state)
++{
++	if (signal_pending_state(task_state, p)) {
++		WRITE_ONCE(p->__state, TASK_RUNNING);
++		return false;
++	}
++	p->sched_contributes_to_load =
++		(task_state & TASK_UNINTERRUPTIBLE) &&
++		!(task_state & TASK_NOLOAD) &&
++		!(task_state & TASK_FROZEN);
++
++	/*
++	 * __schedule()			ttwu()
++	 *   prev_state = prev->state;    if (p->on_rq && ...)
++	 *   if (prev_state)		    goto out;
++	 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
++	 *				  p->state = TASK_WAKING
++	 *
++	 * Where __schedule() and ttwu() have matching control dependencies.
++	 *
++	 * After this, schedule() must not care about p->state any more.
++	 */
++	sched_task_deactivate(p, rq);
++	block_task(rq, p);
++	return true;
++}
++
++/*
++ * schedule() is the main scheduler function.
++ *
++ * The main means of driving the scheduler and thus entering this function are:
++ *
++ *   1. Explicit blocking: mutex, semaphore, waitqueue, etc.
++ *
++ *   2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
++ *      paths. For example, see arch/x86/entry_64.S.
++ *
++ *      To drive preemption between tasks, the scheduler sets the flag in timer
++ *      interrupt handler sched_tick().
++ *
++ *   3. Wakeups don't really cause entry into schedule(). They add a
++ *      task to the run-queue and that's it.
++ *
++ *      Now, if the new task added to the run-queue preempts the current
++ *      task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
++ *      called on the nearest possible occasion:
++ *
++ *       - If the kernel is preemptible (CONFIG_PREEMPTION=y):
++ *
++ *         - in syscall or exception context, at the next outermost
++ *           preempt_enable(). (this might be as soon as the wake_up()'s
++ *           spin_unlock()!)
++ *
++ *         - in IRQ context, return from interrupt-handler to
++ *           preemptible context
++ *
++ *       - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
++ *         then at the next:
++ *
++ *          - cond_resched() call
++ *          - explicit schedule() call
++ *          - return from syscall or exception to user-space
++ *          - return from interrupt-handler to user-space
++ *
++ * WARNING: must be called with preemption disabled!
++ */
++static void __sched notrace __schedule(int sched_mode)
++{
++	struct task_struct *prev, *next;
++	/*
++	 * On PREEMPT_RT kernel, SM_RTLOCK_WAIT is noted
++	 * as a preemption by schedule_debug() and RCU.
++	 */
++	bool preempt = sched_mode > SM_NONE;
++	bool is_switch = false;
++	unsigned long *switch_count;
++	unsigned long prev_state;
++	struct rq *rq;
++	int cpu;
++
++	trace_sched_entry_tp(preempt, CALLER_ADDR0);
++
++	cpu = smp_processor_id();
++	rq = cpu_rq(cpu);
++	prev = rq->curr;
++
++	schedule_debug(prev, preempt);
++
++	/* bypassing the sched_feat(HRTICK) check, which Alt schedule FW doesn't support */
++	hrtick_clear(rq);
++
++	local_irq_disable();
++	rcu_note_context_switch(preempt);
++
++	/*
++	 * Make sure that signal_pending_state()->signal_pending() below
++	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
++	 * done by the caller to avoid the race with signal_wake_up():
++	 *
++	 * __set_current_state(@state)		signal_wake_up()
++	 * schedule()				  set_tsk_thread_flag(p, TIF_SIGPENDING)
++	 *					  wake_up_state(p, state)
++	 *   LOCK rq->lock			    LOCK p->pi_state
++	 *   smp_mb__after_spinlock()		    smp_mb__after_spinlock()
++	 *     if (signal_pending_state())	    if (p->state & @state)
++	 *
++	 * Also, the membarrier system call requires a full memory barrier
++	 * after coming from user-space, before storing to rq->curr; this
++	 * barrier matches a full barrier in the proximity of the membarrier
++	 * system call exit.
++	 */
++	raw_spin_lock(&rq->lock);
++	smp_mb__after_spinlock();
++
++	update_rq_clock(rq);
++
++	switch_count = &prev->nivcsw;
++
++	/* Task state changes only consider SM_PREEMPT as preemption */
++	preempt = sched_mode == SM_PREEMPT;
++
++	/*
++	 * We must load prev->state once (task_struct::state is volatile), such
++	 * that we form a control dependency vs deactivate_task() below.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	if (sched_mode == SM_IDLE) {
++		if (!rq->nr_running) {
++			next = prev;
++			goto picked;
++		}
++	} else if (!preempt && prev_state) {
++		try_to_block_task(rq, prev, prev_state);
++		switch_count = &prev->nvcsw;
++	}
++
++	check_curr(prev, rq);
++
++	next = choose_next_task(rq, cpu);
++picked:
++	clear_tsk_need_resched(prev);
++	clear_preempt_need_resched();
++	rq->last_seen_need_resched_ns = 0;
++
++	is_switch = prev != next;
++	if (likely(is_switch)) {
++		next->last_ran = rq->clock_task;
++
++		/*printk(KERN_INFO "sched: %px -> %px\n", prev, next);*/
++		rq->nr_switches++;
++		/*
++		 * RCU users of rcu_dereference(rq->curr) may not see
++		 * changes to task_struct made by pick_next_task().
++		 */
++		RCU_INIT_POINTER(rq->curr, next);
++		/*
++		 * The membarrier system call requires each architecture
++		 * to have a full memory barrier after updating
++		 * rq->curr, before returning to user-space.
++		 *
++		 * Here are the schemes providing that barrier on the
++		 * various architectures:
++		 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC,
++		 *   RISC-V.  switch_mm() relies on membarrier_arch_switch_mm()
++		 *   on PowerPC and on RISC-V.
++		 * - finish_lock_switch() for weakly-ordered
++		 *   architectures where spin_unlock is a full barrier,
++		 * - switch_to() for arm64 (weakly-ordered, spin_unlock
++		 *   is a RELEASE barrier),
++		 *
++		 * The barrier matches a full barrier in the proximity of
++		 * the membarrier system call entry.
++		 *
++		 * On RISC-V, this barrier pairing is also needed for the
++		 * SYNC_CORE command when switching between processes, cf.
++		 * the inline comments in membarrier_arch_switch_mm().
++		 */
++		++*switch_count;
++
++		trace_sched_switch(preempt, prev, next, prev_state);
++
++		/* Also unlocks the rq: */
++		rq = context_switch(rq, prev, next);
++
++		cpu = cpu_of(rq);
++	} else {
++		__balance_callbacks(rq);
++		raw_spin_unlock_irq(&rq->lock);
++	}
++	trace_sched_exit_tp(is_switch, CALLER_ADDR0);
++}
++
++void __noreturn do_task_dead(void)
++{
++	/* Causes final put_task_struct in finish_task_switch(): */
++	set_special_state(TASK_DEAD);
++
++	/* Tell freezer to ignore us: */
++	current->flags |= PF_NOFREEZE;
++
++	__schedule(SM_NONE);
++	BUG();
++
++	/* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
++	for (;;)
++		cpu_relax();
++}
++
++static inline void sched_submit_work(struct task_struct *tsk)
++{
++	static DEFINE_WAIT_OVERRIDE_MAP(sched_map, LD_WAIT_CONFIG);
++	unsigned int task_flags;
++
++	/*
++	 * Establish LD_WAIT_CONFIG context to ensure none of the code called
++	 * will use a blocking primitive -- which would lead to recursion.
++	 */
++	lock_map_acquire_try(&sched_map);
++
++	task_flags = tsk->flags;
++	/*
++	 * If a worker goes to sleep, notify and ask workqueue whether it
++	 * wants to wake up a task to maintain concurrency.
++	 */
++	if (task_flags & PF_WQ_WORKER)
++		wq_worker_sleeping(tsk);
++	else if (task_flags & PF_IO_WORKER)
++		io_wq_worker_sleeping(tsk);
++
++	/*
++	 * spinlock and rwlock must not flush block requests.  This will
++	 * deadlock if the callback attempts to acquire a lock which is
++	 * already acquired.
++	 */
++	WARN_ON_ONCE(current->__state & TASK_RTLOCK_WAIT);
++
++	/*
++	 * If we are going to sleep and we have plugged IO queued,
++	 * make sure to submit it to avoid deadlocks.
++	 */
++	blk_flush_plug(tsk->plug, true);
++
++	lock_map_release(&sched_map);
++}
++
++static void sched_update_worker(struct task_struct *tsk)
++{
++	if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER | PF_BLOCK_TS)) {
++		if (tsk->flags & PF_BLOCK_TS)
++			blk_plug_invalidate_ts(tsk);
++		if (tsk->flags & PF_WQ_WORKER)
++			wq_worker_running(tsk);
++		else if (tsk->flags & PF_IO_WORKER)
++			io_wq_worker_running(tsk);
++	}
++}
++
++static __always_inline void __schedule_loop(int sched_mode)
++{
++	do {
++		preempt_disable();
++		__schedule(sched_mode);
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++}
++
++asmlinkage __visible void __sched schedule(void)
++{
++	struct task_struct *tsk = current;
++
++#ifdef CONFIG_RT_MUTEXES
++	lockdep_assert(!tsk->sched_rt_mutex);
++#endif
++
++	if (!task_is_running(tsk))
++		sched_submit_work(tsk);
++	__schedule_loop(SM_NONE);
++	sched_update_worker(tsk);
++}
++EXPORT_SYMBOL(schedule);
++
++/*
++ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
++ * state (have scheduled out non-voluntarily) by making sure that all
++ * tasks have either left the run queue or have gone into user space.
++ * As idle tasks do not do either, they must not ever be preempted
++ * (schedule out non-voluntarily).
++ *
++ * schedule_idle() is similar to schedule_preempt_disabled() except that it
++ * never enables preemption because it does not call sched_submit_work().
++ */
++void __sched schedule_idle(void)
++{
++	/*
++	 * As this skips calling sched_submit_work(), which is fine for the
++	 * idle task because that function is a NOP for a task in the
++	 * TASK_RUNNING state, make sure this isn't used someplace that the
++	 * current task can be in any other state. Note, idle is always in the
++	 * TASK_RUNNING state.
++	 */
++	WARN_ON_ONCE(current->__state);
++	do {
++		__schedule(SM_IDLE);
++	} while (need_resched());
++}
++
++#if defined(CONFIG_CONTEXT_TRACKING_USER) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK)
++asmlinkage __visible void __sched schedule_user(void)
++{
++	/*
++	 * If we come here after a random call to set_need_resched(),
++	 * or we have been woken up remotely but the IPI has not yet arrived,
++	 * we haven't yet exited the RCU idle mode. Do it here manually until
++	 * we find a better solution.
++	 *
++	 * NB: There are buggy callers of this function.  Ideally we
++	 * should warn if prev_state != CT_STATE_USER, but that will trigger
++	 * too frequently to make sense yet.
++	 */
++	enum ctx_state prev_state = exception_enter();
++	schedule();
++	exception_exit(prev_state);
++}
++#endif
++
++/**
++ * schedule_preempt_disabled - called with preemption disabled
++ *
++ * Returns with preemption disabled. Note: preempt_count must be 1
++ */
++void __sched schedule_preempt_disabled(void)
++{
++	sched_preempt_enable_no_resched();
++	schedule();
++	preempt_disable();
++}
++
++#ifdef CONFIG_PREEMPT_RT
++void __sched notrace schedule_rtlock(void)
++{
++	__schedule_loop(SM_RTLOCK_WAIT);
++}
++NOKPROBE_SYMBOL(schedule_rtlock);
++#endif
++
++static void __sched notrace preempt_schedule_common(void)
++{
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		__schedule(SM_PREEMPT);
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++
++		/*
++		 * Check again in case we missed a preemption opportunity
++		 * between schedule and now.
++		 */
++	} while (need_resched());
++}
++
++#ifdef CONFIG_PREEMPTION
++/*
++ * This is the entry point to schedule() from in-kernel preemption
++ * off of preempt_enable.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule(void)
++{
++	/*
++	 * If there is a non-zero preempt_count or interrupts are disabled,
++	 * we do not want to preempt the current task. Just return..
++	 */
++	if (likely(!preemptible()))
++		return;
++
++	preempt_schedule_common();
++}
++NOKPROBE_SYMBOL(preempt_schedule);
++EXPORT_SYMBOL(preempt_schedule);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_dynamic_enabled
++#define preempt_schedule_dynamic_enabled	preempt_schedule
++#define preempt_schedule_dynamic_disabled	NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
++void __sched notrace dynamic_preempt_schedule(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
++		return;
++	preempt_schedule();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule);
++EXPORT_SYMBOL(dynamic_preempt_schedule);
++#endif
++#endif
++
++/**
++ * preempt_schedule_notrace - preempt_schedule called by tracing
++ *
++ * The tracing infrastructure uses preempt_enable_notrace to prevent
++ * recursion and tracing preempt enabling caused by the tracing
++ * infrastructure itself. But as tracing can happen in areas coming
++ * from userspace or just about to enter userspace, a preempt enable
++ * can occur before user_exit() is called. This will cause the scheduler
++ * to be called when the system is still in usermode.
++ *
++ * To prevent this, the preempt_enable_notrace will use this function
++ * instead of preempt_schedule() to exit user context if needed before
++ * calling the scheduler.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
++{
++	enum ctx_state prev_ctx;
++
++	if (likely(!preemptible()))
++		return;
++
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		/*
++		 * Needs preempt disabled in case user_exit() is traced
++		 * and the tracer calls preempt_enable_notrace() causing
++		 * an infinite recursion.
++		 */
++		prev_ctx = exception_enter();
++		__schedule(SM_PREEMPT);
++		exception_exit(prev_ctx);
++
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++	} while (need_resched());
++}
++EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_notrace_dynamic_enabled
++#define preempt_schedule_notrace_dynamic_enabled	preempt_schedule_notrace
++#define preempt_schedule_notrace_dynamic_disabled	NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
++void __sched notrace dynamic_preempt_schedule_notrace(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
++		return;
++	preempt_schedule_notrace();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
++EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
++#endif
++#endif
++
++#endif /* CONFIG_PREEMPTION */
++
++/*
++ * This is the entry point to schedule() from kernel preemption
++ * off of IRQ context.
++ * Note that this is called and returns with IRQs disabled. This
++ * protects us against recursive calls from IRQ contexts.
++ */
++asmlinkage __visible void __sched preempt_schedule_irq(void)
++{
++	enum ctx_state prev_state;
++
++	/* Catch callers which need to be fixed */
++	BUG_ON(preempt_count() || !irqs_disabled());
++
++	prev_state = exception_enter();
++
++	do {
++		preempt_disable();
++		local_irq_enable();
++		__schedule(SM_PREEMPT);
++		local_irq_disable();
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++
++	exception_exit(prev_state);
++}
++
++int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
++			  void *key)
++{
++	WARN_ON_ONCE(wake_flags & ~(WF_SYNC|WF_CURRENT_CPU));
++	return try_to_wake_up(curr->private, mode, wake_flags);
++}
++EXPORT_SYMBOL(default_wake_function);
++
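++/*
++ * Requeue @p after a priority change and let wakeup_preempt() decide
++ * whether the runqueue's current task needs to be rescheduled.  Callers
++ * must hold @p's runqueue lock.
++ */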
++void check_task_changed(struct task_struct *p, struct rq *rq)
++{
++	/* Trigger resched if task sched_prio has been modified. */
++	if (task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		requeue_task(p, rq);
++		wakeup_preempt(rq);
++	}
++}
++
++void __setscheduler_prio(struct task_struct *p, int prio)
++{
++	p->prio = prio;
++}
++
++#ifdef CONFIG_RT_MUTEXES
++
++/*
++ * Would be more useful with typeof()/auto_type but they don't mix with
++ * bit-fields. Since it's a local thing, use int. Keep the generic sounding
++ * name such that if someone were to implement this function we get to compare
++ * notes.
++ */
++#define fetch_and_set(x, v) ({ int _x = (x); (x) = (v); _x; })
++
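++/*
++ * rt_mutex blocking brackets __schedule() with explicit submit/update
++ * steps: rt_mutex_pre_schedule() flushes pending work once up front,
++ * rt_mutex_schedule() then blocks without re-submitting, and
++ * rt_mutex_post_schedule() restores the worker state.  The sched_rt_mutex
++ * flag lets lockdep catch misnested brackets and plain schedule() calls
++ * made inside one.
++ */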
++void rt_mutex_pre_schedule(void)
++{
++	lockdep_assert(!fetch_and_set(current->sched_rt_mutex, 1));
++	sched_submit_work(current);
++}
++
++void rt_mutex_schedule(void)
++{
++	lockdep_assert(current->sched_rt_mutex);
++	__schedule_loop(SM_NONE);
++}
++
++void rt_mutex_post_schedule(void)
++{
++	sched_update_worker(current);
++	lockdep_assert(fetch_and_set(current->sched_rt_mutex, 0));
++}
++
++/*
++ * rt_mutex_setprio - set the current priority of a task
++ * @p: task to boost
++ * @pi_task: donor task
++ *
++ * This function changes the 'effective' priority of a task. It does
++ * not touch ->normal_prio like __setscheduler().
++ *
++ * Used by the rt_mutex code to implement priority inheritance
++ * logic. The call site only invokes this if the task's priority changed.
++ */
++void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
++{
++	int prio;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	/* XXX used to be waiter->prio, not waiter->task->prio */
++	prio = __rt_effective_prio(pi_task, p->normal_prio);
++
++	/*
++	 * If nothing changed, bail early.
++	 */
++	if (p->pi_top_task == pi_task && prio == p->prio)
++		return;
++
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Set under pi_lock && rq->lock, such that the value can be used under
++	 * either lock.
++	 *
++	 * Note that it takes loads of trickery to make this pointer cache work
++	 * right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
++	 * ensure a task is de-boosted (pi_task is set to NULL) before the
++	 * task is allowed to run again (and can exit). This ensures the pointer
++	 * points to a blocked task -- which guarantees the task is present.
++	 */
++	p->pi_top_task = pi_task;
++
++	/*
++	 * For FIFO/RR we only need to set prio, if that matches we're done.
++	 */
++	if (prio == p->prio)
++		goto out_unlock;
++
++	/*
++	 * Idle task boosting is a no-no in general. There is one
++	 * exception, when PREEMPT_RT and NOHZ is active:
++	 *
++	 * The idle task calls get_next_timer_interrupt() and holds
++	 * the timer wheel base->lock on the CPU and another CPU wants
++	 * to access the timer (probably to cancel it). We can safely
++	 * ignore the boosting request, as the idle CPU runs this code
++	 * with interrupts disabled and will complete the lock
++	 * protected section without being interrupted. So there is no
++	 * real need to boost.
++	 */
++	if (unlikely(p == rq->idle)) {
++		WARN_ON(p != rq->curr);
++		WARN_ON(p->pi_blocked_on);
++		goto out_unlock;
++	}
++
++	trace_sched_pi_setprio(p, pi_task);
++
++	__setscheduler_prio(p, prio);
++
++	check_task_changed(p, rq);
++out_unlock:
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++
++	if (task_on_rq_queued(p))
++		__balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++
++	preempt_enable();
++}
++#endif
++
++#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
++int __sched __cond_resched(void)
++{
++	if (should_resched(0) && !irqs_disabled()) {
++		preempt_schedule_common();
++		return 1;
++	}
++	/*
++	 * In PREEMPT_RCU kernels, ->rcu_read_lock_nesting tells the tick
++	 * whether the current CPU is in an RCU read-side critical section,
++	 * so the tick can report quiescent states even for CPUs looping
++	 * in kernel context.  In contrast, in non-preemptible kernels,
++	 * RCU readers leave no in-memory hints, which means that CPU-bound
++	 * processes executing in kernel context might never report an
++	 * RCU quiescent state.  Therefore, the following code causes
++	 * cond_resched() to report a quiescent state, but only when RCU
++	 * is in urgent need of one.
++	 * A third case, preemptible, but non-PREEMPT_RCU provides for
++	 * urgently needed quiescent states via rcu_flavor_sched_clock_irq().
++	 */
++#ifndef CONFIG_PREEMPT_RCU
++	rcu_all_qs();
++#endif
++	return 0;
++}
++EXPORT_SYMBOL(__cond_resched);
++#endif
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define cond_resched_dynamic_enabled	__cond_resched
++#define cond_resched_dynamic_disabled	((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(cond_resched);
++
++#define might_resched_dynamic_enabled	__cond_resched
++#define might_resched_dynamic_disabled	((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(might_resched);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
++int __sched dynamic_cond_resched(void)
++{
++	klp_sched_try_switch();
++	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
++		return 0;
++	return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_cond_resched);
++
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
++int __sched dynamic_might_resched(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_might_resched))
++		return 0;
++	return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_might_resched);
++#endif
++#endif
++
++/*
++ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
++ * call schedule, and on return reacquire the lock.
++ *
++ * This works OK both with and without CONFIG_PREEMPTION.  We do strange low-level
++ * operations here to prevent schedule() from being called twice (once via
++ * spin_unlock(), once by hand).
++ */
++int __cond_resched_lock(spinlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held(lock);
++
++	if (spin_needbreak(lock) || resched) {
++		spin_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		spin_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_lock);
++
++int __cond_resched_rwlock_read(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_read(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		read_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		read_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_read);
++
++int __cond_resched_rwlock_write(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_write(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		write_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		write_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_write);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++
++#ifdef CONFIG_GENERIC_ENTRY
++#include <linux/entry-common.h>
++#endif
++
++/*
++ * SC:cond_resched
++ * SC:might_resched
++ * SC:preempt_schedule
++ * SC:preempt_schedule_notrace
++ * SC:irqentry_exit_cond_resched
++ *
++ * NONE:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *   dynamic_preempt_lazy       <- false
++ *
++ * VOLUNTARY:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- __cond_resched
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *   dynamic_preempt_lazy       <- false
++ *
++ * FULL:
++ *   cond_resched               <- RET0
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- preempt_schedule
++ *   preempt_schedule_notrace   <- preempt_schedule_notrace
++ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ *   dynamic_preempt_lazy       <- false
++ *
++ * LAZY:
++ *   cond_resched               <- RET0
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- preempt_schedule
++ *   preempt_schedule_notrace   <- preempt_schedule_notrace
++ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ *   dynamic_preempt_lazy       <- true
++ */
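++/*
++ * For example, booting with "preempt=lazy" selects the LAZY column above
++ * at run time via sched_dynamic_update(); kernels that expose a debugfs
++ * knob for the preemption model switch columns the same way.
++ */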
++
++enum {
++	preempt_dynamic_undefined = -1,
++	preempt_dynamic_none,
++	preempt_dynamic_voluntary,
++	preempt_dynamic_full,
++	preempt_dynamic_lazy,
++};
++
++int preempt_dynamic_mode = preempt_dynamic_undefined;
++
++int sched_dynamic_mode(const char *str)
++{
++#ifndef CONFIG_PREEMPT_RT
++	if (!strcmp(str, "none"))
++		return preempt_dynamic_none;
++
++	if (!strcmp(str, "voluntary"))
++		return preempt_dynamic_voluntary;
++#endif
++
++	if (!strcmp(str, "full"))
++		return preempt_dynamic_full;
++
++#ifdef CONFIG_ARCH_HAS_PREEMPT_LAZY
++	if (!strcmp(str, "lazy"))
++		return preempt_dynamic_lazy;
++#endif
++
++	return -EINVAL;
++}
++
++#define preempt_dynamic_key_enable(f)  static_key_enable(&sk_dynamic_##f.key)
++#define preempt_dynamic_key_disable(f) static_key_disable(&sk_dynamic_##f.key)
++
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define preempt_dynamic_enable(f)	static_call_update(f, f##_dynamic_enabled)
++#define preempt_dynamic_disable(f)	static_call_update(f, f##_dynamic_disabled)
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++#define preempt_dynamic_enable(f)	preempt_dynamic_key_enable(f)
++#define preempt_dynamic_disable(f)	preempt_dynamic_key_disable(f)
++#else
++#error "Unsupported PREEMPT_DYNAMIC mechanism"
++#endif
++
++static DEFINE_MUTEX(sched_dynamic_mutex);
++static bool klp_override;
++
++static void __sched_dynamic_update(int mode)
++{
++	/*
++	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
++	 * the ZERO state, which is invalid.
++	 */
++	if (!klp_override)
++		preempt_dynamic_enable(cond_resched);
++	preempt_dynamic_enable(might_resched);
++	preempt_dynamic_enable(preempt_schedule);
++	preempt_dynamic_enable(preempt_schedule_notrace);
++	preempt_dynamic_enable(irqentry_exit_cond_resched);
++	preempt_dynamic_key_disable(preempt_lazy);
++
++	switch (mode) {
++	case preempt_dynamic_none:
++		if (!klp_override)
++			preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_disable(preempt_schedule);
++		preempt_dynamic_disable(preempt_schedule_notrace);
++		preempt_dynamic_disable(irqentry_exit_cond_resched);
++		preempt_dynamic_key_disable(preempt_lazy);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: none\n");
++		break;
++
++	case preempt_dynamic_voluntary:
++		if (!klp_override)
++			preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_enable(might_resched);
++		preempt_dynamic_disable(preempt_schedule);
++		preempt_dynamic_disable(preempt_schedule_notrace);
++		preempt_dynamic_disable(irqentry_exit_cond_resched);
++		preempt_dynamic_key_disable(preempt_lazy);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: voluntary\n");
++		break;
++
++	case preempt_dynamic_full:
++		if (!klp_override)
++			preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_enable(preempt_schedule);
++		preempt_dynamic_enable(preempt_schedule_notrace);
++		preempt_dynamic_enable(irqentry_exit_cond_resched);
++		preempt_dynamic_key_disable(preempt_lazy);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: full\n");
++		break;
++
++	case preempt_dynamic_lazy:
++		if (!klp_override)
++			preempt_dynamic_disable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_enable(preempt_schedule);
++		preempt_dynamic_enable(preempt_schedule_notrace);
++		preempt_dynamic_enable(irqentry_exit_cond_resched);
++		preempt_dynamic_key_enable(preempt_lazy);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: lazy\n");
++		break;
++	}
++
++	preempt_dynamic_mode = mode;
++}
++
++void sched_dynamic_update(int mode)
++{
++	mutex_lock(&sched_dynamic_mutex);
++	__sched_dynamic_update(mode);
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++#ifdef CONFIG_HAVE_PREEMPT_DYNAMIC_CALL
++
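++/*
++ * While a livepatch transition is pending, sched_dynamic_klp_enable()
++ * points the cond_resched static call here, so every cond_resched() also
++ * offers the calling task to the KLP core before taking the normal slow
++ * path.
++ */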
++static int klp_cond_resched(void)
++{
++	__klp_sched_try_switch();
++	return __cond_resched();
++}
++
++void sched_dynamic_klp_enable(void)
++{
++	mutex_lock(&sched_dynamic_mutex);
++
++	klp_override = true;
++	static_call_update(cond_resched, klp_cond_resched);
++
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++void sched_dynamic_klp_disable(void)
++{
++	mutex_lock(&sched_dynamic_mutex);
++
++	klp_override = false;
++	__sched_dynamic_update(preempt_dynamic_mode);
++
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++#endif /* CONFIG_HAVE_PREEMPT_DYNAMIC_CALL */
++
++
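++/* Parse the "preempt=" boot parameter, e.g. "preempt=full" or "preempt=lazy". */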
++static int __init setup_preempt_mode(char *str)
++{
++	int mode = sched_dynamic_mode(str);
++	if (mode < 0) {
++		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
++		return 0;
++	}
++
++	sched_dynamic_update(mode);
++	return 1;
++}
++__setup("preempt=", setup_preempt_mode);
++
++static void __init preempt_dynamic_init(void)
++{
++	if (preempt_dynamic_mode == preempt_dynamic_undefined) {
++		if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
++			sched_dynamic_update(preempt_dynamic_none);
++		} else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
++			sched_dynamic_update(preempt_dynamic_voluntary);
++		} else if (IS_ENABLED(CONFIG_PREEMPT_LAZY)) {
++			sched_dynamic_update(preempt_dynamic_lazy);
++		} else {
++			/* Default static call setting, nothing to do */
++			WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
++			preempt_dynamic_mode = preempt_dynamic_full;
++			pr_info("Dynamic Preempt: full\n");
++		}
++	}
++}
++
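++/*
++ * Generate preempt_model_none()/voluntary()/full()/lazy() accessors that
++ * report whether the given dynamic preemption model is the active one.
++ */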
++#define PREEMPT_MODEL_ACCESSOR(mode) \
++	bool preempt_model_##mode(void)						 \
++	{									 \
++		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
++		return preempt_dynamic_mode == preempt_dynamic_##mode;		 \
++	}									 \
++	EXPORT_SYMBOL_GPL(preempt_model_##mode)
++
++PREEMPT_MODEL_ACCESSOR(none);
++PREEMPT_MODEL_ACCESSOR(voluntary);
++PREEMPT_MODEL_ACCESSOR(full);
++PREEMPT_MODEL_ACCESSOR(lazy);
++
++#else /* !CONFIG_PREEMPT_DYNAMIC: */
++
++static inline void preempt_dynamic_init(void) { }
++
++#endif /* CONFIG_PREEMPT_DYNAMIC */
++
++const char *preempt_modes[] = {
++	"none", "voluntary", "full", "lazy", NULL,
++};
++
++const char *preempt_model_str(void)
++{
++	bool brace = IS_ENABLED(CONFIG_PREEMPT_RT) &&
++		(IS_ENABLED(CONFIG_PREEMPT_DYNAMIC) ||
++		 IS_ENABLED(CONFIG_PREEMPT_LAZY));
++	static char buf[128];
++
++	if (IS_ENABLED(CONFIG_PREEMPT_BUILD)) {
++		struct seq_buf s;
++
++		seq_buf_init(&s, buf, sizeof(buf));
++		seq_buf_puts(&s, "PREEMPT");
++
++		if (IS_ENABLED(CONFIG_PREEMPT_RT))
++			seq_buf_printf(&s, "%sRT%s",
++				       brace ? "_{" : "_",
++				       brace ? "," : "");
++
++		if (IS_ENABLED(CONFIG_PREEMPT_DYNAMIC)) {
++			seq_buf_printf(&s, "(%s)%s",
++				       preempt_dynamic_mode > 0 ?
++				       preempt_modes[preempt_dynamic_mode] : "undef",
++				       brace ? "}" : "");
++			return seq_buf_str(&s);
++		}
++
++		if (IS_ENABLED(CONFIG_PREEMPT_LAZY)) {
++			seq_buf_printf(&s, "LAZY%s",
++				       brace ? "}" : "");
++			return seq_buf_str(&s);
++		}
++
++		return seq_buf_str(&s);
++	}
++
++	if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY_BUILD))
++		return "VOLUNTARY";
++
++	return "NONE";
++}
++
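++/*
++ * Bracket an I/O sleep: io_schedule_prepare() marks the task as being in
++ * iowait and flushes any plugged block requests, returning the previous
++ * in_iowait value as a token that io_schedule_finish() restores, so
++ * nested sections unwind correctly.
++ */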
++int io_schedule_prepare(void)
++{
++	int old_iowait = current->in_iowait;
++
++	current->in_iowait = 1;
++	blk_flush_plug(current->plug, true);
++	return old_iowait;
++}
++
++void io_schedule_finish(int token)
++{
++	current->in_iowait = token;
++}
++
++/*
++ * This task is about to go to sleep on IO.  Increment rq->nr_iowait so
++ * that process accounting knows that this is a task in IO wait state.
++ *
++ * But don't do that if it is a deliberate, throttling IO wait (this task
++ * has set its backing_dev_info: the queue against which it should throttle)
++ */
++
++long __sched io_schedule_timeout(long timeout)
++{
++	int token;
++	long ret;
++
++	token = io_schedule_prepare();
++	ret = schedule_timeout(timeout);
++	io_schedule_finish(token);
++
++	return ret;
++}
++EXPORT_SYMBOL(io_schedule_timeout);
++
++void __sched io_schedule(void)
++{
++	int token;
++
++	token = io_schedule_prepare();
++	schedule();
++	io_schedule_finish(token);
++}
++EXPORT_SYMBOL(io_schedule);
++
++void sched_show_task(struct task_struct *p)
++{
++	unsigned long free;
++	int ppid;
++
++	if (!try_get_task_stack(p))
++		return;
++
++	pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
++
++	if (task_is_running(p))
++		pr_cont("  running task    ");
++	free = stack_not_used(p);
++	ppid = 0;
++	rcu_read_lock();
++	if (pid_alive(p))
++		ppid = task_pid_nr(rcu_dereference(p->real_parent));
++	rcu_read_unlock();
++	pr_cont(" stack:%-5lu pid:%-5d tgid:%-5d ppid:%-6d task_flags:0x%04x flags:0x%08lx\n",
++		free, task_pid_nr(p), task_tgid_nr(p),
++		ppid, p->flags, read_task_thread_flags(p));
++
++	print_worker_info(KERN_INFO, p);
++	print_stop_info(KERN_INFO, p);
++	show_stack(p, NULL, KERN_INFO);
++	put_task_stack(p);
++}
++EXPORT_SYMBOL_GPL(sched_show_task);
++
++static inline bool
++state_filter_match(unsigned long state_filter, struct task_struct *p)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/* no filter, everything matches */
++	if (!state_filter)
++		return true;
++
++	/* filter, but doesn't match */
++	if (!(state & state_filter))
++		return false;
++
++	/*
++	 * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
++	 * TASK_KILLABLE).
++	 */
++	if (state_filter == TASK_UNINTERRUPTIBLE && (state & TASK_NOLOAD))
++		return false;
++
++	return true;
++}
++
++void show_state_filter(unsigned int state_filter)
++{
++	struct task_struct *g, *p;
++
++	rcu_read_lock();
++	for_each_process_thread(g, p) {
++		/*
++		 * Reset the NMI watchdog timeout; listing all tasks on a slow
++		 * console might take a lot of time.
++		 * Also, reset softlockup watchdogs on all CPUs, because
++		 * another CPU might be blocked waiting for us to process
++		 * an IPI.
++		 */
++		touch_nmi_watchdog();
++		touch_all_softlockup_watchdogs();
++		if (state_filter_match(state_filter, p))
++			sched_show_task(p);
++	}
++
++	/* TODO: Alt schedule FW should support this
++	if (!state_filter)
++		sysrq_sched_debug_show();
++	*/
++	rcu_read_unlock();
++	/*
++	 * Only show locks if all tasks are dumped:
++	 */
++	if (!state_filter)
++		debug_show_all_locks();
++}
++
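++/*
++ * Dump the state of @cpu: prefer the interrupted registers when running
++ * in hardirq context on that very CPU, otherwise try an NMI backtrace,
++ * and finally fall back to dumping the CPU's current task.
++ */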
++void dump_cpu_task(int cpu)
++{
++	if (in_hardirq() && cpu == smp_processor_id()) {
++		struct pt_regs *regs;
++
++		regs = get_irq_regs();
++		if (regs) {
++			show_regs(regs);
++			return;
++		}
++	}
++
++	if (trigger_single_cpu_backtrace(cpu))
++		return;
++
++	pr_info("Task dump for CPU %d:\n", cpu);
++	sched_show_task(cpu_curr(cpu));
++}
++
++/**
++ * init_idle - set up an idle thread for a given CPU
++ * @idle: task in question
++ * @cpu: CPU the idle task belongs to
++ *
++ * NOTE: this function does not set the idle thread's NEED_RESCHED
++ * flag, to make booting more robust.
++ */
++void __init init_idle(struct task_struct *idle, int cpu)
++{
++#ifdef CONFIG_SMP
++	struct affinity_context ac = (struct affinity_context) {
++		.new_mask  = cpumask_of(cpu),
++		.flags     = 0,
++	};
++#endif
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&idle->pi_lock, flags);
++	raw_spin_lock(&rq->lock);
++
++	idle->last_ran = rq->clock_task;
++	idle->__state = TASK_RUNNING;
++	/*
++	 * PF_KTHREAD should already be set at this point; regardless, make it
++	 * look like a proper per-CPU kthread.
++	 */
++	idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY;
++	kthread_set_per_cpu(idle, cpu);
++
++	sched_queue_init_idle(&rq->queue, idle);
++
++#ifdef CONFIG_SMP
++	/*
++	 * No validation and serialization required at boot time and for
++	 * setting up the idle tasks of not yet online CPUs.
++	 */
++	set_cpus_allowed_common(idle, &ac);
++#endif
++
++	/* Silence PROVE_RCU */
++	rcu_read_lock();
++	__set_task_cpu(idle, cpu);
++	rcu_read_unlock();
++
++	rq->idle = idle;
++	rcu_assign_pointer(rq->curr, idle);
++	idle->on_cpu = 1;
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
++
++	/* Set the preempt count _outside_ the spinlocks! */
++	init_idle_preempt_count(idle, cpu);
++
++	ftrace_graph_init_idle_task(idle, cpu);
++	vtime_init_idle(idle, cpu);
++#ifdef CONFIG_SMP
++	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
++			      const struct cpumask __maybe_unused *trial)
++{
++	return 1;
++}
++
++int task_can_attach(struct task_struct *p)
++{
++	int ret = 0;
++
++	/*
++	 * Kthreads which disallow setaffinity shouldn't be moved
++	 * to a new cpuset; we don't want to change their CPU
++	 * affinity and isolating such threads by their set of
++	 * allowed nodes is unnecessary.  Thus, cpusets are not
++	 * applicable for such threads.  This prevents checking for
++	 * success of set_cpus_allowed_ptr() on all attached tasks
++	 * before cpus_mask may be changed.
++	 */
++	if (p->flags & PF_NO_SETAFFINITY)
++		ret = -EINVAL;
++
++	return ret;
++}
++
++bool sched_smp_initialized __read_mostly;
++
++#ifdef CONFIG_HOTPLUG_CPU
++/*
++ * Invoked on the outgoing CPU in context of the CPU hotplug thread
++ * after ensuring that there are no user space tasks left on the CPU.
++ *
++ * If there is a lazy mm in use on the hotplug thread, drop it and
++ * switch to init_mm.
++ *
++ * The reference count on init_mm is dropped in finish_cpu().
++ */
++static void sched_force_init_mm(void)
++{
++	struct mm_struct *mm = current->active_mm;
++
++	if (mm != &init_mm) {
++		mmgrab_lazy_tlb(&init_mm);
++		local_irq_disable();
++		current->active_mm = &init_mm;
++		switch_mm_irqs_off(mm, &init_mm, current);
++		local_irq_enable();
++		finish_arch_post_lock_switch();
++		mmdrop_lazy_tlb(mm);
++	}
++
++	/* finish_cpu(), as run on the BP, will clean up the active_mm state */
++}
++
++static int __balance_push_cpu_stop(void *arg)
++{
++	struct task_struct *p = arg;
++	struct rq *rq = this_rq();
++	struct rq_flags rf;
++	int cpu;
++
++	raw_spin_lock_irq(&p->pi_lock);
++	rq_lock(rq, &rf);
++
++	update_rq_clock(rq);
++
++	if (task_rq(p) == rq && task_on_rq_queued(p)) {
++		cpu = select_fallback_rq(rq->cpu, p);
++		rq = __migrate_task(rq, p, cpu);
++	}
++
++	rq_unlock(rq, &rf);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	put_task_struct(p);
++
++	return 0;
++}
++
++static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
++
++/*
++ * This is enabled below SCHED_AP_ACTIVE, i.e. when !cpu_active(), but it
++ * only takes effect while the hotplug motion is downwards.
++ */
++static void balance_push(struct rq *rq)
++{
++	struct task_struct *push_task = rq->curr;
++
++	lockdep_assert_held(&rq->lock);
++
++	/*
++	 * Ensure the thing is persistent until balance_push_set(.on = false);
++	 */
++	rq->balance_callback = &balance_push_callback;
++
++	/*
++	 * Only active while going offline and when invoked on the outgoing
++	 * CPU.
++	 */
++	if (!cpu_dying(rq->cpu) || rq != this_rq())
++		return;
++
++	/*
++	 * Both the cpu-hotplug and stop task fall into this case and are
++	 * required to complete the hotplug process.
++	 */
++	if (kthread_is_per_cpu(push_task) ||
++	    is_migration_disabled(push_task)) {
++
++		/*
++		 * If this is the idle task on the outgoing CPU try to wake
++		 * up the hotplug control thread which might wait for the
++		 * last task to vanish. The rcuwait_active() check is
++		 * accurate here because the waiter is pinned on this CPU
++		 * and can't obviously be running in parallel.
++		 *
++		 * On RT kernels this also has to check whether there are
++		 * pinned and scheduled out tasks on the runqueue. They
++		 * need to leave the migrate disabled section first.
++		 */
++		if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
++		    rcuwait_active(&rq->hotplug_wait)) {
++			raw_spin_unlock(&rq->lock);
++			rcuwait_wake_up(&rq->hotplug_wait);
++			raw_spin_lock(&rq->lock);
++		}
++		return;
++	}
++
++	get_task_struct(push_task);
++	/*
++	 * Temporarily drop rq->lock such that we can wake-up the stop task.
++	 * Both preemption and IRQs are still disabled.
++	 */
++	preempt_disable();
++	raw_spin_unlock(&rq->lock);
++	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
++			    this_cpu_ptr(&push_work));
++	preempt_enable();
++	/*
++	 * At this point need_resched() is true and we'll take the loop in
++	 * schedule(). The next pick is obviously going to be the stop task,
++	 * which passes kthread_is_per_cpu() and will push this task away.
++	 */
++	raw_spin_lock(&rq->lock);
++}
++
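++/*
++ * Arm or disarm the balance_push callback for @cpu's runqueue; disarming
++ * only clears the callback when it is still the balance_push one.
++ */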
++static void balance_push_set(int cpu, bool on)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct rq_flags rf;
++
++	rq_lock_irqsave(rq, &rf);
++	if (on) {
++		WARN_ON_ONCE(rq->balance_callback);
++		rq->balance_callback = &balance_push_callback;
++	} else if (rq->balance_callback == &balance_push_callback) {
++		rq->balance_callback = NULL;
++	}
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Invoked from a CPU's hotplug control thread after the CPU has been marked
++ * inactive. All tasks which are not per CPU kernel threads are either
++ * pushed off this CPU now via balance_push() or placed on a different CPU
++ * during wakeup. Wait until the CPU is quiescent.
++ */
++static void balance_hotplug_wait(void)
++{
++	struct rq *rq = this_rq();
++
++	rcuwait_wait_event(&rq->hotplug_wait,
++			   rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
++			   TASK_UNINTERRUPTIBLE);
++}
++
++#else
++
++static void balance_push(struct rq *rq)
++{
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++}
++
++static inline void balance_hotplug_wait(void)
++{
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++static void set_rq_offline(struct rq *rq)
++{
++	if (rq->online) {
++		update_rq_clock(rq);
++		rq->online = false;
++	}
++}
++
++static void set_rq_online(struct rq *rq)
++{
++	if (!rq->online)
++		rq->online = true;
++}
++
++static inline void sched_set_rq_online(struct rq *rq, int cpu)
++{
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	set_rq_online(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++static inline void sched_set_rq_offline(struct rq *rq, int cpu)
++{
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	set_rq_offline(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++/*
++ * used to mark begin/end of suspend/resume:
++ */
++static int num_cpus_frozen;
++
++/*
++ * Update cpusets according to cpu_active mask.  If cpusets are
++ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
++ * around partition_sched_domains().
++ *
++ * If we come here as part of a suspend/resume, don't touch cpusets because we
++ * want to restore it back to its original state upon resume anyway.
++ */
++static void cpuset_cpu_active(void)
++{
++	if (cpuhp_tasks_frozen) {
++		/*
++		 * num_cpus_frozen tracks how many CPUs are involved in suspend
++		 * resume sequence. As long as this is not the last online
++		 * operation in the resume sequence, just build a single sched
++		 * domain, ignoring cpusets.
++		 */
++		cpuset_reset_sched_domains();
++		if (--num_cpus_frozen)
++			return;
++		/*
++		 * This is the last CPU online operation. So fall through and
++		 * restore the original sched domains by considering the
++		 * cpuset configurations.
++		 */
++		cpuset_force_rebuild();
++	}
++
++	cpuset_update_active_cpus();
++}
++
++static void cpuset_cpu_inactive(unsigned int cpu)
++{
++	if (!cpuhp_tasks_frozen) {
++		cpuset_update_active_cpus();
++	} else {
++		num_cpus_frozen++;
++		cpuset_reset_sched_domains();
++	}
++}
++
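++/*
++ * sched_smt_present counts cores with more than one SMT sibling online;
++ * the cpumask_weight() == 2 test fires exactly when a core's second
++ * sibling comes online or goes offline.
++ */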
++static inline void sched_smt_present_inc(int cpu)
++{
++#ifdef CONFIG_SCHED_SMT
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++		static_branch_inc_cpuslocked(&sched_smt_present);
++		cpumask_or(&sched_smt_mask, &sched_smt_mask, cpu_smt_mask(cpu));
++	}
++#endif
++}
++
++static inline void sched_smt_present_dec(int cpu)
++{
++#ifdef CONFIG_SCHED_SMT
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++		static_branch_dec_cpuslocked(&sched_smt_present);
++		if (!static_branch_likely(&sched_smt_present))
++			cpumask_clear(sched_pcore_idle_mask);
++		cpumask_andnot(&sched_smt_mask, &sched_smt_mask, cpu_smt_mask(cpu));
++	}
++#endif
++}
++
++int sched_cpu_activate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	/*
++	 * Clear the balance_push callback and prepare to schedule
++	 * regular tasks.
++	 */
++	balance_push_set(cpu, false);
++
++	set_cpu_active(cpu, true);
++
++	if (sched_smp_initialized)
++		cpuset_cpu_active();
++
++	/*
++	 * Put the rq online, if not already. This happens:
++	 *
++	 * 1) In the early boot process, because we build the real domains
++	 *    after all cpus have been brought up.
++	 *
++	 * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
++	 *    domains.
++	 */
++	sched_set_rq_online(rq, cpu);
++
++	/*
++	 * When going up, increment the number of cores with SMT present.
++	 */
++	sched_smt_present_inc(cpu);
++
++	return 0;
++}
++
++int sched_cpu_deactivate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	set_cpu_active(cpu, false);
++
++	/*
++	 * From this point forward, this CPU will refuse to run any task that
++	 * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
++	 * push those tasks away until this gets cleared, see
++	 * sched_cpu_dying().
++	 */
++	balance_push_set(cpu, true);
++
++	/*
++	 * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
++	 * users of this state to go away such that all new such users will
++	 * observe it.
++	 *
++	 * Specifically, we rely on ttwu to no longer target this CPU, see
++	 * ttwu_queue_cond() and is_cpu_allowed().
++	 *
++	 * Synchronize before parking smpboot threads to handle the RCU boost case.
++	 */
++	synchronize_rcu();
++
++	sched_set_rq_offline(rq, cpu);
++
++	/*
++	 * When going down, decrement the number of cores with SMT present.
++	 */
++	sched_smt_present_dec(cpu);
++
++	if (!sched_smp_initialized)
++		return 0;
++
++	cpuset_cpu_inactive(cpu);
++
++	return 0;
++}
++
++static void sched_rq_cpu_starting(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	rq->calc_load_update = calc_load_update;
++}
++
++int sched_cpu_starting(unsigned int cpu)
++{
++	sched_rq_cpu_starting(cpu);
++	sched_tick_start(cpu);
++	return 0;
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++
++/*
++ * Invoked immediately before the stopper thread is invoked to bring the
++ * CPU down completely. At this point all per CPU kthreads except the
++ * hotplug thread (current) and the stopper thread (inactive) have been
++ * either parked or have been unbound from the outgoing CPU. Ensure that
++ * any of those which might be on the way out are gone.
++ *
++ * If after this point a bound task is being woken on this CPU then the
++ * responsible hotplug callback has failed to do its job.
++ * sched_cpu_dying() will catch it with the appropriate fireworks.
++ */
++int sched_cpu_wait_empty(unsigned int cpu)
++{
++	balance_hotplug_wait();
++	sched_force_init_mm();
++	return 0;
++}
++
++/*
++ * Since this CPU is going 'away' for a while, fold any nr_active delta we
++ * might have. Called from the CPU stopper task after ensuring that the
++ * stopper is the last running task on the CPU, so nr_active count is
++ * stable. We need to take the tear-down thread which is calling this into
++ * account, so we hand in adjust = 1 to the load calculation.
++ *
++ * Also see the comment "Global load-average calculations".
++ */
++static void calc_load_migrate(struct rq *rq)
++{
++	long delta = calc_load_fold_active(rq, 1);
++
++	if (delta)
++		atomic_long_add(delta, &calc_load_tasks);
++}
++
++static void dump_rq_tasks(struct rq *rq, const char *loglvl)
++{
++	struct task_struct *g, *p;
++	int cpu = cpu_of(rq);
++
++	lockdep_assert_held(&rq->lock);
++
++	printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
++	for_each_process_thread(g, p) {
++		if (task_cpu(p) != cpu)
++			continue;
++
++		if (!task_on_rq_queued(p))
++			continue;
++
++		printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
++	}
++}
++
++int sched_cpu_dying(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/* Handle pending wakeups and then migrate everything off */
++	sched_tick_stop(cpu);
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
++		WARN(true, "Dying CPU not properly vacated!");
++		dump_rq_tasks(rq, KERN_WARNING);
++	}
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	calc_load_migrate(rq);
++	hrtick_clear(rq);
++	return 0;
++}
++#endif
++
++#ifdef CONFIG_SMP
++static void sched_init_topology_cpumask_early(void)
++{
++	int cpu;
++	cpumask_t *tmp;
++
++	for_each_possible_cpu(cpu) {
++		/* init topo masks */
++		tmp = per_cpu(sched_cpu_topo_masks, cpu);
++
++		cpumask_copy(tmp, cpu_possible_mask);
++		per_cpu(sched_cpu_llc_mask, cpu) = tmp;
++		per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
++	}
++}
++
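++/*
++ * Record @mask as the next per-CPU topology level when it still covers
++ * CPUs beyond the levels recorded so far; unless @last, seed the next
++ * level with the complement of @mask so the levels grow from nearest to
++ * farthest.
++ */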
++#define TOPOLOGY_CPUMASK(name, mask, last)\
++	if (cpumask_and(topo, topo, mask)) {					\
++		cpumask_copy(topo, mask);					\
++		printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name,	\
++		       cpu, (topo++)->bits[0]);					\
++	}									\
++	if (!last)								\
++		bitmap_complement(cpumask_bits(topo), cpumask_bits(mask),	\
++				  nr_cpumask_bits);
++
++static void sched_init_topology_cpumask(void)
++{
++	int cpu;
++	cpumask_t *topo;
++
++	for_each_online_cpu(cpu) {
++		topo = per_cpu(sched_cpu_topo_masks, cpu);
++
++		bitmap_complement(cpumask_bits(topo), cpumask_bits(cpumask_of(cpu)),
++				  nr_cpumask_bits);
++#ifdef CONFIG_SCHED_SMT
++		TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
++#endif
++		TOPOLOGY_CPUMASK(cluster, topology_cluster_cpumask(cpu), false);
++
++		per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
++		per_cpu(sched_cpu_llc_mask, cpu) = topo;
++		TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
++
++		TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
++
++		TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
++
++		per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
++		printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
++		       cpu, per_cpu(sd_llc_id, cpu),
++		       (int) (per_cpu(sched_cpu_llc_mask, cpu) -
++			      per_cpu(sched_cpu_topo_masks, cpu)));
++	}
++}
++#endif
++
++void __init sched_init_smp(void)
++{
++	/* Move init over to a non-isolated CPU */
++	if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
++		BUG();
++	current->flags &= ~PF_NO_SETAFFINITY;
++
++	sched_init_topology();
++	sched_init_topology_cpumask();
++
++	sched_smp_initialized = true;
++}
++
++static int __init migration_init(void)
++{
++	sched_cpu_starting(smp_processor_id());
++	return 0;
++}
++early_initcall(migration_init);
++
++#else
++void __init sched_init_smp(void)
++{
++	cpu_rq(0)->idle->time_slice = sysctl_sched_base_slice;
++}
++#endif /* CONFIG_SMP */
++
++int in_sched_functions(unsigned long addr)
++{
++	return in_lock_functions(addr) ||
++		(addr >= (unsigned long)__sched_text_start
++		&& addr < (unsigned long)__sched_text_end);
++}
++
++#ifdef CONFIG_CGROUP_SCHED
++/*
++ * Default task group.
++ * Every task in the system belongs to this group at bootup.
++ */
++struct task_group root_task_group;
++LIST_HEAD(task_groups);
++
++/* Cacheline aligned slab cache for task_group */
++static struct kmem_cache *task_group_cache __ro_after_init;
++#endif /* CONFIG_CGROUP_SCHED */
++
++void __init sched_init(void)
++{
++	int i;
++	struct rq *rq;
++
++	printk(KERN_INFO "sched/alt: "ALT_SCHED_NAME" CPU Scheduler "ALT_SCHED_VERSION\
++			 " by Alfred Chen.\n");
++
++	wait_bit_init();
++
++#ifdef CONFIG_SMP
++	for (i = 0; i < SCHED_QUEUE_BITS; i++)
++		cpumask_copy(sched_preempt_mask + i, cpu_present_mask);
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++	task_group_cache = KMEM_CACHE(task_group, 0);
++
++	list_add(&root_task_group.list, &task_groups);
++	INIT_LIST_HEAD(&root_task_group.children);
++	INIT_LIST_HEAD(&root_task_group.siblings);
++#endif /* CONFIG_CGROUP_SCHED */
++	for_each_possible_cpu(i) {
++		rq = cpu_rq(i);
++
++		sched_queue_init(&rq->queue);
++		rq->prio = IDLE_TASK_SCHED_PRIO;
++#ifdef CONFIG_SCHED_PDS
++		rq->prio_idx = rq->prio;
++#endif
++
++		raw_spin_lock_init(&rq->lock);
++		rq->nr_running = rq->nr_uninterruptible = 0;
++		rq->calc_load_active = 0;
++		rq->calc_load_update = jiffies + LOAD_FREQ;
++#ifdef CONFIG_SMP
++		rq->online = false;
++		rq->cpu = i;
++
++		rq->clear_idle_mask_func = cpumask_clear_cpu;
++		rq->set_idle_mask_func = cpumask_set_cpu;
++		rq->balance_func = NULL;
++		rq->active_balance_arg.active = 0;
++
++#ifdef CONFIG_NO_HZ_COMMON
++		INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
++#endif
++		rq->balance_callback = &balance_push_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++		rcuwait_init(&rq->hotplug_wait);
++#endif
++#endif /* CONFIG_SMP */
++		rq->nr_switches = 0;
++
++		hrtick_rq_init(rq);
++		atomic_set(&rq->nr_iowait, 0);
++
++		zalloc_cpumask_var_node(&rq->scratch_mask, GFP_KERNEL, cpu_to_node(i));
++	}
++#ifdef CONFIG_SMP
++	/* Set rq->online for cpu 0 */
++	cpu_rq(0)->online = true;
++#endif
++	/*
++	 * The boot idle thread does lazy MMU switching as well:
++	 */
++	mmgrab_lazy_tlb(&init_mm);
++	enter_lazy_tlb(&init_mm, current);
++
++	/*
++	 * The idle task doesn't need the kthread struct to function, but it
++	 * is dressed up as a per-CPU kthread and thus needs to play the part
++	 * if we want to avoid special-casing it in code that deals with per-CPU
++	 * kthreads.
++	 */
++	WARN_ON(!set_kthread_struct(current));
++
++	/*
++	 * Make us the idle thread. Technically, schedule() should not be
++	 * called from this thread, however somewhere below it might be,
++	 * but because we are the idle thread, we just pick up running again
++	 * when this runqueue becomes "idle".
++	 */
++	__sched_fork(0, current);
++	init_idle(current, smp_processor_id());
++
++	calc_load_update = jiffies + LOAD_FREQ;
++
++#ifdef CONFIG_SMP
++	idle_thread_set_boot_cpu();
++	balance_push_set(smp_processor_id(), false);
++
++	sched_init_topology_cpumask_early();
++#endif /* SMP */
++
++	preempt_dynamic_init();
++}
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++
++void __might_sleep(const char *file, int line)
++{
++	unsigned int state = get_current_state();
++	/*
++	 * Blocking primitives will set (and therefore destroy) current->state;
++	 * since we will exit with TASK_RUNNING, make sure we enter with it,
++	 * otherwise we will destroy state.
++	 */
++	WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
++			"do not call blocking ops when !TASK_RUNNING; "
++			"state=%x set at [<%p>] %pS\n", state,
++			(void *)current->task_state_change,
++			(void *)current->task_state_change);
++
++	__might_resched(file, line, 0);
++}
++EXPORT_SYMBOL(__might_sleep);
++
++static void print_preempt_disable_ip(int preempt_offset, unsigned long ip)
++{
++	if (!IS_ENABLED(CONFIG_DEBUG_PREEMPT))
++		return;
++
++	if (preempt_count() == preempt_offset)
++		return;
++
++	pr_err("Preemption disabled at:");
++	print_ip_sym(KERN_ERR, ip);
++}
++
++static inline bool resched_offsets_ok(unsigned int offsets)
++{
++	unsigned int nested = preempt_count();
++
++	nested += rcu_preempt_depth() << MIGHT_RESCHED_RCU_SHIFT;
++
++	return nested == offsets;
++}
++
++void __might_resched(const char *file, int line, unsigned int offsets)
++{
++	/* Ratelimiting timestamp: */
++	static unsigned long prev_jiffy;
++
++	unsigned long preempt_disable_ip;
++
++	/* WARN_ON_ONCE() by default, no rate limit required: */
++	rcu_sleep_check();
++
++	if ((resched_offsets_ok(offsets) && !irqs_disabled() &&
++	     !is_idle_task(current) && !current->non_block_count) ||
++	    system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
++	    oops_in_progress)
++		return;
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	/* Save this before calling printk(), since that will clobber it: */
++	preempt_disable_ip = get_preempt_disable_ip(current);
++
++	pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
++	       file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), current->non_block_count,
++	       current->pid, current->comm);
++	pr_err("preempt_count: %x, expected: %x\n", preempt_count(),
++	       offsets & MIGHT_RESCHED_PREEMPT_MASK);
++
++	if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
++		pr_err("RCU nest depth: %d, expected: %u\n",
++		       rcu_preempt_depth(), offsets >> MIGHT_RESCHED_RCU_SHIFT);
++	}
++
++	if (task_stack_end_corrupted(current))
++		pr_emerg("Thread overran stack, or stack corrupted\n");
++
++	debug_show_held_locks(current);
++	if (irqs_disabled())
++		print_irqtrace_events(current);
++
++	print_preempt_disable_ip(offsets & MIGHT_RESCHED_PREEMPT_MASK,
++				 preempt_disable_ip);
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL(__might_resched);
++
++void __cant_sleep(const char *file, int line, int preempt_offset)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > preempt_offset)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
++	printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
++			in_atomic(), irqs_disabled(),
++			current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_sleep);
++
++#ifdef CONFIG_SMP
++void __cant_migrate(const char *file, int line)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (is_migration_disabled(current))
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > 0)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), is_migration_disabled(current),
++	       current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_migrate);
++#endif
++#endif
++
++#ifdef CONFIG_MAGIC_SYSRQ
++void normalize_rt_tasks(void)
++{
++	struct task_struct *g, *p;
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++	};
++
++	read_lock(&tasklist_lock);
++	for_each_process_thread(g, p) {
++		/*
++		 * Only normalize user tasks:
++		 */
++		if (p->flags & PF_KTHREAD)
++			continue;
++
++		schedstat_set(p->stats.wait_start,  0);
++		schedstat_set(p->stats.sleep_start, 0);
++		schedstat_set(p->stats.block_start, 0);
++
++		if (!rt_or_dl_task(p)) {
++			/*
++			 * Renice negative nice level userspace
++			 * tasks back to 0:
++			 */
++			if (task_nice(p) < 0)
++				set_user_nice(p, 0);
++			continue;
++		}
++
++		__sched_setscheduler(p, &attr, false, false);
++	}
++	read_unlock(&tasklist_lock);
++}
++#endif /* CONFIG_MAGIC_SYSRQ */
++
++#if defined(CONFIG_KGDB_KDB)
++/*
++ * These functions are only useful for KDB.
++ *
++ * They can only be called when the whole system has been
++ * stopped - every CPU needs to be quiescent, and no scheduling
++ * activity can take place. Using them for anything else would
++ * be a serious bug, and as a result, they aren't even visible
++ * under any other configuration.
++ */
++
++/**
++ * curr_task - return the current task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ *
++ * Return: The current task for @cpu.
++ */
++struct task_struct *curr_task(int cpu)
++{
++	return cpu_curr(cpu);
++}
++
++#endif /* defined(CONFIG_KGDB_KDB) */
++
++#ifdef CONFIG_CGROUP_SCHED
++static void sched_free_group(struct task_group *tg)
++{
++	kmem_cache_free(task_group_cache, tg);
++}
++
++static void sched_free_group_rcu(struct rcu_head *rhp)
++{
++	sched_free_group(container_of(rhp, struct task_group, rcu));
++}
++
++static void sched_unregister_group(struct task_group *tg)
++{
++	/*
++	 * We have to wait for yet another RCU grace period to expire, as
++	 * print_cfs_stats() might run concurrently.
++	 */
++	call_rcu(&tg->rcu, sched_free_group_rcu);
++}
++
++/* allocate runqueue etc for a new task group */
++struct task_group *sched_create_group(struct task_group *parent)
++{
++	struct task_group *tg;
++
++	tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
++	if (!tg)
++		return ERR_PTR(-ENOMEM);
++
++	return tg;
++}
++
++void sched_online_group(struct task_group *tg, struct task_group *parent)
++{
++}
++
++/* RCU callback to free various structures associated with a task group */
++static void sched_unregister_group_rcu(struct rcu_head *rhp)
++{
++	/* Now it should be safe to free those cfs_rqs: */
++	sched_unregister_group(container_of(rhp, struct task_group, rcu));
++}
++
++void sched_destroy_group(struct task_group *tg)
++{
++	/* Wait for possible concurrent references to cfs_rqs to complete: */
++	call_rcu(&tg->rcu, sched_unregister_group_rcu);
++}
++
++void sched_release_group(struct task_group *tg)
++{
++}
++
++static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
++{
++	return css ? container_of(css, struct task_group, css) : NULL;
++}
++
++static struct cgroup_subsys_state *
++cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
++{
++	struct task_group *parent = css_tg(parent_css);
++	struct task_group *tg;
++
++	if (!parent) {
++		/* This is early initialization for the top cgroup */
++		return &root_task_group.css;
++	}
++
++	tg = sched_create_group(parent);
++	if (IS_ERR(tg))
++		return ERR_PTR(-ENOMEM);
++	return &tg->css;
++}
++
++/* Expose task group only after completing cgroup initialization */
++static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++	struct task_group *parent = css_tg(css->parent);
++
++	if (parent)
++		sched_online_group(tg, parent);
++	return 0;
++}
++
++static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	sched_release_group(tg);
++}
++
++static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	/*
++	 * Relies on the RCU grace period between css_released() and this.
++	 */
++	sched_unregister_group(tg);
++}
++
++#ifdef CONFIG_RT_GROUP_SCHED
++static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
++{
++	return 0;
++}
++#endif
++
++static void cpu_cgroup_attach(struct cgroup_taskset *tset)
++{
++}
++
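++/*
++ * sched/alt has no CFS bandwidth, shares, RT-group or uclamp group
++ * support; the handlers below are no-op stubs that accept writes and
++ * report zeroes so the cpu cgroup control files remain functional.
++ */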
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
++static int sched_group_set_shares(struct task_group *tg, unsigned long shares)
++{
++	return 0;
++}
++
++static int sched_group_set_idle(struct task_group *tg, long idle)
++{
++	return 0;
++}
++
++static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
++				struct cftype *cftype, u64 shareval)
++{
++	return sched_group_set_shares(css_tg(css), shareval);
++}
++
++static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
++				struct cftype *cft, s64 idle)
++{
++	return sched_group_set_idle(css_tg(css), idle);
++}
++#endif
++
++#ifdef CONFIG_CFS_BANDWIDTH
++static s64 cpu_cfs_quota_read_s64(struct cgroup_subsys_state *css,
++				  struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_quota_write_s64(struct cgroup_subsys_state *css,
++				   struct cftype *cftype, s64 cfs_quota_us)
++{
++	return 0;
++}
++
++static u64 cpu_cfs_period_read_u64(struct cgroup_subsys_state *css,
++				   struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_period_write_u64(struct cgroup_subsys_state *css,
++				    struct cftype *cftype, u64 cfs_period_us)
++{
++	return 0;
++}
++
++static u64 cpu_cfs_burst_read_u64(struct cgroup_subsys_state *css,
++				  struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_burst_write_u64(struct cgroup_subsys_state *css,
++				   struct cftype *cftype, u64 cfs_burst_us)
++{
++	return 0;
++}
++
++static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_cfs_local_stat_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++#endif
++
++#ifdef CONFIG_RT_GROUP_SCHED
++static int cpu_rt_runtime_write(struct cgroup_subsys_state *css,
++				struct cftype *cft, s64 val)
++{
++	return 0;
++}
++
++static s64 cpu_rt_runtime_read(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_rt_period_write_uint(struct cgroup_subsys_state *css,
++				    struct cftype *cftype, u64 rt_period_us)
++{
++	return 0;
++}
++
++static u64 cpu_rt_period_read_uint(struct cgroup_subsys_state *css,
++				   struct cftype *cft)
++{
++	return 0;
++}
++#endif
++
++#ifdef CONFIG_UCLAMP_TASK_GROUP
++static int cpu_uclamp_min_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_uclamp_max_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static ssize_t cpu_uclamp_min_write(struct kernfs_open_file *of,
++				    char *buf, size_t nbytes,
++				    loff_t off)
++{
++	return nbytes;
++}
++
++static ssize_t cpu_uclamp_max_write(struct kernfs_open_file *of,
++				    char *buf, size_t nbytes,
++				    loff_t off)
++{
++	return nbytes;
++}
++#endif
++
++static struct cftype cpu_legacy_files[] = {
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
++	{
++		.name = "shares",
++		.read_u64 = cpu_shares_read_u64,
++		.write_u64 = cpu_shares_write_u64,
++	},
++	{
++		.name = "idle",
++		.read_s64 = cpu_idle_read_s64,
++		.write_s64 = cpu_idle_write_s64,
++	},
++#endif
++#ifdef CONFIG_CFS_BANDWIDTH
++	{
++		.name = "cfs_quota_us",
++		.read_s64 = cpu_cfs_quota_read_s64,
++		.write_s64 = cpu_cfs_quota_write_s64,
++	},
++	{
++		.name = "cfs_period_us",
++		.read_u64 = cpu_cfs_period_read_u64,
++		.write_u64 = cpu_cfs_period_write_u64,
++	},
++	{
++		.name = "cfs_burst_us",
++		.read_u64 = cpu_cfs_burst_read_u64,
++		.write_u64 = cpu_cfs_burst_write_u64,
++	},
++	{
++		.name = "stat",
++		.seq_show = cpu_cfs_stat_show,
++	},
++	{
++		.name = "stat.local",
++		.seq_show = cpu_cfs_local_stat_show,
++	},
++#endif
++#ifdef CONFIG_RT_GROUP_SCHED
++	{
++		.name = "rt_runtime_us",
++		.read_s64 = cpu_rt_runtime_read,
++		.write_s64 = cpu_rt_runtime_write,
++	},
++	{
++		.name = "rt_period_us",
++		.read_u64 = cpu_rt_period_read_uint,
++		.write_u64 = cpu_rt_period_write_uint,
++	},
++#endif
++#ifdef CONFIG_UCLAMP_TASK_GROUP
++	{
++		.name = "uclamp.min",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_min_show,
++		.write = cpu_uclamp_min_write,
++	},
++	{
++		.name = "uclamp.max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_max_show,
++		.write = cpu_uclamp_max_write,
++	},
++#endif
++	{ }	/* Terminate */
++};
++
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
++static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_weight_write_u64(struct cgroup_subsys_state *css,
++				struct cftype *cft, u64 weight)
++{
++	return 0;
++}
++
++static s64 cpu_weight_nice_read_s64(struct cgroup_subsys_state *css,
++				    struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_weight_nice_write_s64(struct cgroup_subsys_state *css,
++				     struct cftype *cft, s64 nice)
++{
++	return 0;
++}
++#endif
++
++#ifdef CONFIG_CFS_BANDWIDTH
++static int cpu_max_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static ssize_t cpu_max_write(struct kernfs_open_file *of,
++			     char *buf, size_t nbytes, loff_t off)
++{
++	return nbytes;
++}
++#endif
++
++static struct cftype cpu_files[] = {
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
++	{
++		.name = "weight",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_u64 = cpu_weight_read_u64,
++		.write_u64 = cpu_weight_write_u64,
++	},
++	{
++		.name = "weight.nice",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_s64 = cpu_weight_nice_read_s64,
++		.write_s64 = cpu_weight_nice_write_s64,
++	},
++	{
++		.name = "idle",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_s64 = cpu_idle_read_s64,
++		.write_s64 = cpu_idle_write_s64,
++	},
++#endif
++#ifdef CONFIG_CFS_BANDWIDTH
++	{
++		.name = "max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_max_show,
++		.write = cpu_max_write,
++	},
++	{
++		.name = "max.burst",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_u64 = cpu_cfs_burst_read_u64,
++		.write_u64 = cpu_cfs_burst_write_u64,
++	},
++#endif
++#ifdef CONFIG_UCLAMP_TASK_GROUP
++	{
++		.name = "uclamp.min",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_min_show,
++		.write = cpu_uclamp_min_write,
++	},
++	{
++		.name = "uclamp.max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_max_show,
++		.write = cpu_uclamp_max_write,
++	},
++#endif
++	{ }	/* terminate */
++};
++
++static int cpu_extra_stat_show(struct seq_file *sf,
++			       struct cgroup_subsys_state *css)
++{
++	return 0;
++}
++
++static int cpu_local_stat_show(struct seq_file *sf,
++			       struct cgroup_subsys_state *css)
++{
++	return 0;
++}
++
++struct cgroup_subsys cpu_cgrp_subsys = {
++	.css_alloc	= cpu_cgroup_css_alloc,
++	.css_online	= cpu_cgroup_css_online,
++	.css_released	= cpu_cgroup_css_released,
++	.css_free	= cpu_cgroup_css_free,
++	.css_extra_stat_show = cpu_extra_stat_show,
++	.css_local_stat_show = cpu_local_stat_show,
++#ifdef CONFIG_RT_GROUP_SCHED
++	.can_attach	= cpu_cgroup_can_attach,
++#endif
++	.attach		= cpu_cgroup_attach,
++	.legacy_cftypes	= cpu_legacy_files,
++	.dfl_cftypes	= cpu_files,
++	.early_init	= true,
++	.threaded	= true,
++};
++#endif	/* CONFIG_CGROUP_SCHED */
++
++#undef CREATE_TRACE_POINTS
++
++#ifdef CONFIG_SCHED_MM_CID
++
++/*
++ * @cid_lock: Guarantee forward-progress of cid allocation.
++ *
++ * Concurrency ID allocation within a bitmap is mostly lock-free. The cid_lock
++ * is only used when contention is detected by the lock-free allocation so
++ * forward progress can be guaranteed.
++ */
++DEFINE_RAW_SPINLOCK(cid_lock);
++
++/*
++ * @use_cid_lock: Select cid allocation behavior: lock-free vs spinlock.
++ *
++ * When @use_cid_lock is 0, the cid allocation is lock-free. When contention is
++ * detected, it is set to 1 to ensure that all newly coming allocations are
++ * serialized by @cid_lock until the allocation which detected contention
++ * completes and sets @use_cid_lock back to 0. This guarantees forward progress
++ * of a cid allocation.
++ */
++int use_cid_lock;
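++
++/*
++ * Illustrative sketch (not part of this patch, wrapped in #if 0 so it is
++ * never compiled) of the forward-progress fallback pattern described
++ * above: allocate lock-free first; on contention, force subsequent
++ * allocations through cid_lock until the contended one completes.
++ * ex_try_alloc_lockfree() is a hypothetical helper.
++ */
++#if 0
++static int ex_alloc(void)
++{
++	int id;
++
++	if (!READ_ONCE(use_cid_lock)) {
++		id = ex_try_alloc_lockfree();	/* hypothetical lock-free path */
++		if (id >= 0)
++			return id;
++	}
++	/* Contention detected: serialize newcomers until we succeed. */
++	guard(raw_spinlock)(&cid_lock);
++	WRITE_ONCE(use_cid_lock, 1);
++	while ((id = ex_try_alloc_lockfree()) < 0)
++		cpu_relax();
++	WRITE_ONCE(use_cid_lock, 0);
++	return id;
++}
++#endif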
++
++/*
++ * mm_cid remote-clear implements a lock-free algorithm to clear per-mm/cpu cid
++ * concurrently with respect to the execution of the source runqueue context
++ * switch.
++ *
++ * There is one basic property we want to guarantee here:
++ *
++ * (1) Remote-clear should _never_ mark a per-cpu cid UNSET when it is actively
++ * used by a task. That would lead to concurrent allocation of the cid and
++ * userspace corruption.
++ *
++ * Provide this guarantee by introducing a Dekker memory ordering to guarantee
++ * that a pair of loads observe at least one of a pair of stores, which can be
++ * shown as:
++ *
++ *      X = Y = 0
++ *
++ *      w[X]=1          w[Y]=1
++ *      MB              MB
++ *      r[Y]=y          r[X]=x
++ *
++ * Which guarantees that x==0 && y==0 is impossible. But rather than using
++ * values 0 and 1, this algorithm cares about specific state transitions of the
++ * runqueue current task (as updated by the scheduler context switch), and the
++ * per-mm/cpu cid value.
++ *
++ * Let's introduce task (Y) which has task->mm == mm and task (N) which has
++ * task->mm != mm for the rest of the discussion. There are two scheduler state
++ * transitions on context switch we care about:
++ *
++ * (TSA) Store to rq->curr with transition from (N) to (Y)
++ *
++ * (TSB) Store to rq->curr with transition from (Y) to (N)
++ *
++ * On the remote-clear side, there is one transition we care about:
++ *
++ * (TMA) cmpxchg to *pcpu_cid to set the LAZY flag
++ *
++ * There is also a transition to UNSET state which can be performed from all
++ * sides (scheduler, remote-clear). It is always performed with a cmpxchg which
++ * guarantees that only a single thread will succeed:
++ *
++ * (TMB) cmpxchg to *pcpu_cid to mark UNSET
++ *
++ * Just to be clear, what we do _not_ want to happen is a transition to UNSET
++ * when a thread is actively using the cid (property (1)).
++ *
++ * Let's look at the relevant combinations of TSA/TSB, and TMA transitions.
++ *
++ * Scenario A) (TSA)+(TMA) (from next task perspective)
++ *
++ * CPU0                                      CPU1
++ *
++ * Context switch CS-1                       Remote-clear
++ *   - store to rq->curr: (N)->(Y) (TSA)     - cmpxchg to *pcpu_cid to LAZY (TMA)
++ *                                             (implied barrier after cmpxchg)
++ *   - switch_mm_cid()
++ *     - memory barrier (see switch_mm_cid()
++ *       comment explaining how this barrier
++ *       is combined with other scheduler
++ *       barriers)
++ *     - mm_cid_get (next)
++ *       - READ_ONCE(*pcpu_cid)              - rcu_dereference(src_rq->curr)
++ *
++ * This Dekker ensures that either task (Y) is observed by the
++ * rcu_dereference() or the LAZY flag is observed by READ_ONCE(), or both are
++ * observed.
++ *
++ * If task (Y) store is observed by rcu_dereference(), it means that there is
++ * still an active task on the cpu. Remote-clear will therefore not transition
++ * to UNSET, which fulfills property (1).
++ *
++ * If task (Y) is not observed, but the lazy flag is observed by READ_ONCE(),
++ * it will move its state to UNSET, which clears the percpu cid perhaps
++ * uselessly (which is not an issue for correctness). Because task (Y) is not
++ * observed, CPU1 can move ahead to set the state to UNSET. Because moving
++ * state to UNSET is done with a cmpxchg expecting that the old state has the
++ * LAZY flag set, only one thread will successfully UNSET.
++ *
++ * If both states (LAZY flag and task (Y)) are observed, the thread on CPU0
++ * will observe the LAZY flag and transition to UNSET (perhaps uselessly), and
++ * CPU1 will observe task (Y) and do nothing more, which is fine.
++ *
++ * What we are effectively preventing with this Dekker is a scenario where
++ * neither LAZY flag nor store (Y) are observed, which would fail property (1)
++ * because this would UNSET a cid which is actively used.
++ */
++
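++/*
++ * Illustrative, self-contained C11 sketch of the Dekker store-buffering
++ * pattern described above (not part of this patch, wrapped in #if 0 so
++ * it is never compiled): a full barrier between each store and the
++ * following load guarantees that at least one of the two loads observes
++ * the other side's store, so x == 0 && y == 0 is impossible.
++ */
++#if 0
++#include <pthread.h>
++#include <stdatomic.h>
++#include <stdio.h>
++
++static _Atomic int X, Y;
++static int x, y;
++
++static void *thread_a(void *arg)
++{
++	atomic_store_explicit(&X, 1, memory_order_relaxed);	/* w[X]=1 */
++	atomic_thread_fence(memory_order_seq_cst);		/* MB */
++	y = atomic_load_explicit(&Y, memory_order_relaxed);	/* r[Y]=y */
++	return NULL;
++}
++
++static void *thread_b(void *arg)
++{
++	atomic_store_explicit(&Y, 1, memory_order_relaxed);	/* w[Y]=1 */
++	atomic_thread_fence(memory_order_seq_cst);		/* MB */
++	x = atomic_load_explicit(&X, memory_order_relaxed);	/* r[X]=x */
++	return NULL;
++}
++
++int main(void)
++{
++	pthread_t a, b;
++
++	pthread_create(&a, NULL, thread_a, NULL);
++	pthread_create(&b, NULL, thread_b, NULL);
++	pthread_join(a, NULL);
++	pthread_join(b, NULL);
++	/* The seq_cst fences forbid the x == 0 && y == 0 outcome. */
++	printf("x=%d y=%d\n", x, y);
++	return 0;
++}
++#endif
++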
++void sched_mm_cid_migrate_from(struct task_struct *t)
++{
++	t->migrate_from_cpu = task_cpu(t);
++}
++
++static
++int __sched_mm_cid_migrate_from_fetch_cid(struct rq *src_rq,
++					  struct task_struct *t,
++					  struct mm_cid *src_pcpu_cid)
++{
++	struct mm_struct *mm = t->mm;
++	struct task_struct *src_task;
++	int src_cid, last_mm_cid;
++
++	if (!mm)
++		return -1;
++
++	last_mm_cid = t->last_mm_cid;
++	/*
++	 * If the migrated task has no last cid, or if the current
++	 * task on src rq uses the cid, it means the source cid does not need
++	 * to be moved to the destination cpu.
++	 */
++	if (last_mm_cid == -1)
++		return -1;
++	src_cid = READ_ONCE(src_pcpu_cid->cid);
++	if (!mm_cid_is_valid(src_cid) || last_mm_cid != src_cid)
++		return -1;
++
++	/*
++	 * If we observe an active task using the mm on this rq, it means we
++	 * are not the last task to be migrated from this cpu for this mm, so
++	 * there is no need to move src_cid to the destination cpu.
++	 */
++	guard(rcu)();
++	src_task = rcu_dereference(src_rq->curr);
++	if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++		t->last_mm_cid = -1;
++		return -1;
++	}
++
++	return src_cid;
++}
++
++static
++int __sched_mm_cid_migrate_from_try_steal_cid(struct rq *src_rq,
++					      struct task_struct *t,
++					      struct mm_cid *src_pcpu_cid,
++					      int src_cid)
++{
++	struct task_struct *src_task;
++	struct mm_struct *mm = t->mm;
++	int lazy_cid;
++
++	if (src_cid == -1)
++		return -1;
++
++	/*
++	 * Attempt to clear the source cpu cid to move it to the destination
++	 * cpu.
++	 */
++	lazy_cid = mm_cid_set_lazy_put(src_cid);
++	if (!try_cmpxchg(&src_pcpu_cid->cid, &src_cid, lazy_cid))
++		return -1;
++
++	/*
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm matches the scheduler barrier in context_switch()
++	 * between store to rq->curr and load of prev and next task's
++	 * per-mm/cpu cid.
++	 *
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm_cid_active matches the barrier in
++	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++	 * load of per-mm/cpu cid.
++	 */
++
++	/*
++	 * If we observe an active task using the mm on this rq after setting
++	 * the lazy-put flag, this task will be responsible for transitioning
++	 * from lazy-put flag set to MM_CID_UNSET.
++	 */
++	scoped_guard (rcu) {
++		src_task = rcu_dereference(src_rq->curr);
++		if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++			/*
++			 * We observed an active task for this mm, there is therefore
++			 * no point in moving this cid to the destination cpu.
++			 */
++			t->last_mm_cid = -1;
++			return -1;
++		}
++	}
++
++	/*
++	 * The src_cid is unused, so it can be unset.
++	 */
++	if (!try_cmpxchg(&src_pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++		return -1;
++	WRITE_ONCE(src_pcpu_cid->recent_cid, MM_CID_UNSET);
++	return src_cid;
++}
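++
++/*
++ * Illustrative sketch (not part of this patch, never compiled) of the
++ * two-phase lazy-put protocol used above, reduced to plain C11 atomics.
++ * The flag encoding and helper names are assumptions made for
++ * illustration only.
++ */
++#if 0
++#include <stdatomic.h>
++#include <stdbool.h>
++
++#define EX_CID_UNSET	(-1)
++#define EX_LAZY_FLAG	(1 << 30)
++
++/* Phase 1: publish the intent to clear; fails if the cid changed. */
++static bool ex_lazy_put_phase1(_Atomic int *pcid, int cid)
++{
++	return atomic_compare_exchange_strong(pcid, &cid, cid | EX_LAZY_FLAG);
++}
++
++/*
++ * Phase 2: move LAZY -> UNSET. Because every UNSET transition is a
++ * cmpxchg expecting the LAZY value, exactly one contender wins and
++ * frees the cid; all others back off.
++ */
++static bool ex_lazy_put_phase2(_Atomic int *pcid, int cid)
++{
++	int lazy = cid | EX_LAZY_FLAG;
++
++	return atomic_compare_exchange_strong(pcid, &lazy, EX_CID_UNSET);
++}
++#endif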
++
++/*
++ * Migration to dst cpu. Called with dst_rq lock held.
++ * Interrupts are disabled, which keeps the window of cid ownership without the
++ * source rq lock held small.
++ */
++void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t)
++{
++	struct mm_cid *src_pcpu_cid, *dst_pcpu_cid;
++	struct mm_struct *mm = t->mm;
++	int src_cid, src_cpu;
++	bool dst_cid_is_set;
++	struct rq *src_rq;
++
++	lockdep_assert_rq_held(dst_rq);
++
++	if (!mm)
++		return;
++	src_cpu = t->migrate_from_cpu;
++	if (src_cpu == -1) {
++		t->last_mm_cid = -1;
++		return;
++	}
++	/*
++	 * Move the src cid if the dst cid is unset. This keeps id
++	 * allocation closest to 0 in cases where few threads migrate around
++	 * many CPUs.
++	 *
++	 * If destination cid or recent cid is already set, we may have
++	 * to just clear the src cid to ensure compactness in frequent
++	 * migrations scenarios.
++	 *
++	 * It is not useful to clear the src cid when the number of threads is
++	 * greater than or equal to the number of allowed CPUs, because user-space
++	 * can expect that the number of allowed cids can reach the number of
++	 * allowed CPUs.
++	 */
++	dst_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(dst_rq));
++	dst_cid_is_set = !mm_cid_is_unset(READ_ONCE(dst_pcpu_cid->cid)) ||
++			 !mm_cid_is_unset(READ_ONCE(dst_pcpu_cid->recent_cid));
++	if (dst_cid_is_set && atomic_read(&mm->mm_users) >= READ_ONCE(mm->nr_cpus_allowed))
++		return;
++	src_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, src_cpu);
++	src_rq = cpu_rq(src_cpu);
++	src_cid = __sched_mm_cid_migrate_from_fetch_cid(src_rq, t, src_pcpu_cid);
++	if (src_cid == -1)
++		return;
++	src_cid = __sched_mm_cid_migrate_from_try_steal_cid(src_rq, t, src_pcpu_cid,
++							    src_cid);
++	if (src_cid == -1)
++		return;
++	if (dst_cid_is_set) {
++		__mm_cid_put(mm, src_cid);
++		return;
++	}
++	/* Move src_cid to dst cpu. */
++	mm_cid_snapshot_time(dst_rq, mm);
++	WRITE_ONCE(dst_pcpu_cid->cid, src_cid);
++	WRITE_ONCE(dst_pcpu_cid->recent_cid, src_cid);
++}
++
++static void sched_mm_cid_remote_clear(struct mm_struct *mm, struct mm_cid *pcpu_cid,
++				      int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct task_struct *t;
++	int cid, lazy_cid;
++
++	cid = READ_ONCE(pcpu_cid->cid);
++	if (!mm_cid_is_valid(cid))
++		return;
++
++	/*
++	 * Clear the cpu cid if it is set to keep cid allocation compact.  If
++	 * there happens to be other tasks left on the source cpu using this
++	 * mm, the next task using this mm will reallocate its cid on context
++	 * switch.
++	 */
++	lazy_cid = mm_cid_set_lazy_put(cid);
++	if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
++		return;
++
++	/*
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm matches the scheduler barrier in context_switch()
++	 * between store to rq->curr and load of prev and next task's
++	 * per-mm/cpu cid.
++	 *
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm_cid_active matches the barrier in
++	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++	 * load of per-mm/cpu cid.
++	 */
++
++	/*
++	 * If we observe an active task using the mm on this rq after setting
++	 * the lazy-put flag, that task will be responsible for transitioning
++	 * from lazy-put flag set to MM_CID_UNSET.
++	 */
++	scoped_guard (rcu) {
++		t = rcu_dereference(rq->curr);
++		if (READ_ONCE(t->mm_cid_active) && t->mm == mm)
++			return;
++	}
++
++	/*
++	 * The cid is unused, so it can be unset.
++	 * Disable interrupts to keep the window of cid ownership without rq
++	 * lock small.
++	 */
++	scoped_guard (irqsave) {
++		if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++			__mm_cid_put(mm, cid);
++	}
++}
++
++static void sched_mm_cid_remote_clear_old(struct mm_struct *mm, int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct mm_cid *pcpu_cid;
++	struct task_struct *curr;
++	u64 rq_clock;
++
++	/*
++	 * rq->clock load is racy on 32-bit but one spurious clear once in a
++	 * while is irrelevant.
++	 */
++	rq_clock = READ_ONCE(rq->clock);
++	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++
++	/*
++	 * In order to take care of infrequently scheduled tasks, bump the time
++	 * snapshot associated with this cid if an active task using the mm is
++	 * observed on this rq.
++	 */
++	scoped_guard (rcu) {
++		curr = rcu_dereference(rq->curr);
++		if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) {
++			WRITE_ONCE(pcpu_cid->time, rq_clock);
++			return;
++		}
++	}
++
++	if (rq_clock < pcpu_cid->time + SCHED_MM_CID_PERIOD_NS)
++		return;
++	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
++					     int weight)
++{
++	struct mm_cid *pcpu_cid;
++	int cid;
++
++	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++	cid = READ_ONCE(pcpu_cid->cid);
++	if (!mm_cid_is_valid(cid) || cid < weight)
++		return;
++	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void task_mm_cid_work(struct callback_head *work)
++{
++	unsigned long now = jiffies, old_scan, next_scan;
++	struct task_struct *t = current;
++	struct cpumask *cidmask;
++	struct mm_struct *mm;
++	int weight, cpu;
++
++	WARN_ON_ONCE(t != container_of(work, struct task_struct, cid_work));
++
++	work->next = work;	/* Prevent double-add */
++	if (t->flags & PF_EXITING)
++		return;
++	mm = t->mm;
++	if (!mm)
++		return;
++	old_scan = READ_ONCE(mm->mm_cid_next_scan);
++	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++	if (!old_scan) {
++		unsigned long res;
++
++		res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
++		if (res != old_scan)
++			old_scan = res;
++		else
++			old_scan = next_scan;
++	}
++	if (time_before(now, old_scan))
++		return;
++	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
++		return;
++	cidmask = mm_cidmask(mm);
++	/* Clear cids that were not recently used. */
++	for_each_possible_cpu(cpu)
++		sched_mm_cid_remote_clear_old(mm, cpu);
++	weight = cpumask_weight(cidmask);
++	/*
++	 * Clear cids that are greater than or equal to the cidmask weight to
++	 * recompact it.
++	 */
++	for_each_possible_cpu(cpu)
++		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
++}
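++
++/*
++ * Why compaction matters (illustrative userspace sketch, not part of
++ * this patch, never compiled): user-space indexes per-concurrency-ID
++ * data with the rseq-provided mm_cid field, so keeping cids packed
++ * near 0 bounds the array by min(nr_threads, nr_allowed_cpus).
++ * EX_MAX_CIDS and the registered rseq area are assumptions; rseq
++ * registration itself is omitted.
++ */
++#if 0
++#include <linux/rseq.h>
++
++#define EX_MAX_CIDS	1024	/* assumed upper bound */
++
++struct ex_counter {
++	long val;
++	char pad[64 - sizeof(long)];	/* avoid false sharing */
++};
++static struct ex_counter ex_counters[EX_MAX_CIDS];
++
++/* Racy without an rseq critical section; shown only for the indexing. */
++static inline void ex_inc(volatile struct rseq *rseq_area)
++{
++	ex_counters[rseq_area->mm_cid].val++;
++}
++#endif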
++
++void init_sched_mm_cid(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	int mm_users = 0;
++
++	if (mm) {
++		mm_users = atomic_read(&mm->mm_users);
++		if (mm_users == 1)
++			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++	}
++	t->cid_work.next = &t->cid_work;	/* Protect against double add */
++	init_task_work(&t->cid_work, task_mm_cid_work);
++}
++
++void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
++{
++	struct callback_head *work = &curr->cid_work;
++	unsigned long now = jiffies;
++
++	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
++	    work->next != work)
++		return;
++	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
++		return;
++
++	/* No page allocation under rq lock */
++	task_work_add(curr, work, TWA_RESUME);
++}
++
++void sched_mm_cid_exit_signals(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	guard(rq_lock_irqsave)(rq);
++	preempt_enable_no_resched();	/* holding spinlock */
++	WRITE_ONCE(t->mm_cid_active, 0);
++	/*
++	 * Store t->mm_cid_active before loading per-mm/cpu cid.
++	 * Matches barrier in sched_mm_cid_remote_clear_old().
++	 */
++	smp_mb();
++	mm_cid_put(mm);
++	t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_before_execve(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	guard(rq_lock_irqsave)(rq);
++	preempt_enable_no_resched();	/* holding spinlock */
++	WRITE_ONCE(t->mm_cid_active, 0);
++	/*
++	 * Store t->mm_cid_active before loading per-mm/cpu cid.
++	 * Matches barrier in sched_mm_cid_remote_clear_old().
++	 */
++	smp_mb();
++	mm_cid_put(mm);
++	t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_after_execve(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	scoped_guard (rq_lock_irqsave, rq) {
++		preempt_enable_no_resched();	/* holding spinlock */
++		WRITE_ONCE(t->mm_cid_active, 1);
++		/*
++		 * Store t->mm_cid_active before loading per-mm/cpu cid.
++		 * Matches barrier in sched_mm_cid_remote_clear_old().
++		 */
++		smp_mb();
++		t->last_mm_cid = t->mm_cid = mm_cid_get(rq, t, mm);
++	}
++	rseq_set_notify_resume(t);
++}
++
++void sched_mm_cid_fork(struct task_struct *t)
++{
++	WARN_ON_ONCE(!t->mm || t->mm_cid != -1);
++	t->mm_cid_active = 1;
++}
++#endif
+diff --git a/kernel/sched/alt_core.h b/kernel/sched/alt_core.h
+new file mode 100644
+index 000000000000..12d76d9d290e
+--- /dev/null
++++ b/kernel/sched/alt_core.h
+@@ -0,0 +1,213 @@
++#ifndef _KERNEL_SCHED_ALT_CORE_H
++#define _KERNEL_SCHED_ALT_CORE_H
++
++/*
++ * Compile time debug macro
++ * #define ALT_SCHED_DEBUG
++ */
++
++/*
++ * Task related inlined functions
++ */
++static inline bool is_migration_disabled(struct task_struct *p)
++{
++#ifdef CONFIG_SMP
++	return p->migration_disabled;
++#else
++	return false;
++#endif
++}
++
++/* rt_prio(prio) defined in include/linux/sched/rt.h */
++#define rt_task(p)		rt_prio((p)->prio)
++#define rt_policy(policy)	((policy) == SCHED_FIFO || (policy) == SCHED_RR)
++#define task_has_rt_policy(p)	(rt_policy((p)->policy))
++
++struct affinity_context {
++	const struct cpumask	*new_mask;
++	struct cpumask		*user_mask;
++	unsigned int		flags;
++};
++
++/* CONFIG_SCHED_CLASS_EXT is not supported */
++#define scx_switched_all()	false
++
++#define SCA_CHECK		0x01
++#define SCA_MIGRATE_DISABLE	0x02
++#define SCA_MIGRATE_ENABLE	0x04
++#define SCA_USER		0x08
++
++#ifdef CONFIG_SMP
++
++extern int __set_cpus_allowed_ptr(struct task_struct *p, struct affinity_context *ctx);
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++	/*
++	 * See do_set_cpus_allowed() for the rcu_head usage.
++	 */
++	int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
++
++	return kmalloc_node(size, GFP_KERNEL, node);
++}
++
++#else /* !CONFIG_SMP: */
++
++static inline int __set_cpus_allowed_ptr(struct task_struct *p,
++					 struct affinity_context *ctx)
++{
++	return set_cpus_allowed_ptr(p, ctx->new_mask);
++}
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++	return NULL;
++}
++
++#endif /* !CONFIG_SMP */
++
++#ifdef CONFIG_RT_MUTEXES
++
++static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
++{
++	if (pi_task)
++		prio = min(prio, pi_task->prio);
++
++	return prio;
++}
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	struct task_struct *pi_task = rt_mutex_get_top_task(p);
++
++	return __rt_effective_prio(pi_task, prio);
++}
++
++#else /* !CONFIG_RT_MUTEXES: */
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	return prio;
++}
++
++#endif /* !CONFIG_RT_MUTEXES */
++
++extern int __sched_setscheduler(struct task_struct *p, const struct sched_attr *attr, bool user, bool pi);
++extern int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
++extern void __setscheduler_prio(struct task_struct *p, int prio);
++
++/*
++ * Context API
++ */
++static inline struct rq *__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock(&rq->lock);
++			if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock(&rq->lock);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			*plock = NULL;
++			return rq;
++		}
++	}
++}
++
++static inline void __task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
++{
++	if (lock)
++		raw_spin_unlock(lock);
++}
++
++void check_task_changed(struct task_struct *p, struct rq *rq);
++
++/*
++ * RQ related inlined functions
++ */
++
++/*
++ * This routine assumes that the idle task is always in the queue
++ */
++static inline struct task_struct *sched_rq_first_task(struct rq *rq)
++{
++	const struct list_head *head = &rq->queue.heads[sched_rq_prio_idx(rq)];
++
++	return list_first_entry(head, struct task_struct, sq_node);
++}
++
++static inline struct task_struct *sched_rq_next_task(struct task_struct *p, struct rq *rq)
++{
++	struct list_head *next = p->sq_node.next;
++
++	if (&rq->queue.heads[0] <= next && next < &rq->queue.heads[SCHED_LEVELS]) {
++		struct list_head *head;
++		unsigned long idx = next - &rq->queue.heads[0];
++
++		idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
++				    sched_idx2prio(idx, rq) + 1);
++		head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++		return list_first_entry(head, struct task_struct, sq_node);
++	}
++
++	return list_next_entry(p, sq_node);
++}
++
++extern void requeue_task(struct task_struct *p, struct rq *rq);
++
++#ifdef ALT_SCHED_DEBUG
++extern void alt_sched_debug(void);
++#else
++static inline void alt_sched_debug(void) {}
++#endif
++
++extern int sched_yield_type;
++
++#ifdef CONFIG_SMP
++extern cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DECLARE_STATIC_KEY_FALSE(sched_smt_present);
++DECLARE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++
++extern cpumask_t sched_smt_mask ____cacheline_aligned_in_smp;
++
++extern cpumask_t *const sched_idle_mask;
++extern cpumask_t *const sched_sg_idle_mask;
++extern cpumask_t *const sched_pcore_idle_mask;
++extern cpumask_t *const sched_ecore_idle_mask;
++
++extern struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu);
++
++typedef bool (*idle_select_func_t)(struct cpumask *dstp, const struct cpumask *src1p,
++				   const struct cpumask *src2p);
++
++extern idle_select_func_t idle_select_func;
++#endif
++
++/* balance callback */
++#ifdef CONFIG_SMP
++extern struct balance_callback *splice_balance_callbacks(struct rq *rq);
++extern void balance_callbacks(struct rq *rq, struct balance_callback *head);
++#else
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++	return NULL;
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++}
++
++#endif
++
++#endif /* _KERNEL_SCHED_ALT_CORE_H */
+diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
+new file mode 100644
+index 000000000000..1dbd7eb6a434
+--- /dev/null
++++ b/kernel/sched/alt_debug.c
+@@ -0,0 +1,32 @@
++/*
++ * kernel/sched/alt_debug.c
++ *
++ * Print the alt scheduler debugging details
++ *
++ * Author: Alfred Chen
++ * Date  : 2020
++ */
++#include "sched.h"
++#include "linux/sched/debug.h"
++
++/*
++ * This allows printing both to /proc/sched_debug and
++ * to the console
++ */
++#define SEQ_printf(m, x...)			\
++ do {						\
++	if (m)					\
++		seq_printf(m, x);		\
++	else					\
++		pr_cont(x);			\
++ } while (0)
++
++void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
++			  struct seq_file *m)
++{
++	SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
++						get_nr_threads(p));
++}
++
++void proc_sched_set_task(struct task_struct *p)
++{}
+diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
+new file mode 100644
+index 000000000000..97577bcef5cf
+--- /dev/null
++++ b/kernel/sched/alt_sched.h
+@@ -0,0 +1,1035 @@
++#ifndef _KERNEL_SCHED_ALT_SCHED_H
++#define _KERNEL_SCHED_ALT_SCHED_H
++
++#include <linux/context_tracking.h>
++#include <linux/profile.h>
++#include <linux/stop_machine.h>
++#include <linux/syscalls.h>
++#include <linux/tick.h>
++
++#include <trace/events/power.h>
++#include <trace/events/sched.h>
++
++#include "../workqueue_internal.h"
++
++#include "cpupri.h"
++
++#ifdef CONFIG_CGROUP_SCHED
++/* task group related information */
++struct task_group {
++	struct cgroup_subsys_state css;
++
++	struct rcu_head rcu;
++	struct list_head list;
++
++	struct task_group *parent;
++	struct list_head siblings;
++	struct list_head children;
++};
++
++extern struct task_group *sched_create_group(struct task_group *parent);
++extern void sched_online_group(struct task_group *tg,
++			       struct task_group *parent);
++extern void sched_destroy_group(struct task_group *tg);
++extern void sched_release_group(struct task_group *tg);
++#endif /* CONFIG_CGROUP_SCHED */
++
++#define MIN_SCHED_NORMAL_PRIO	(32)
++/*
++ * levels: RT(0-24), reserved(25-31), NORMAL(32-63), cpu idle task(64)
++ *
++ * -- BMQ --
++ * NORMAL: (lower boost range 12, NICE_WIDTH 40, higher boost range 12) / 2
++ * -- PDS --
++ * NORMAL: SCHED_EDGE_DELTA + ((NICE_WIDTH 40) / 2)
++ */
++#define SCHED_LEVELS		(64 + 1)
++
++#define IDLE_TASK_SCHED_PRIO	(SCHED_LEVELS - 1)
++
++/*
++ * Increase resolution of nice-level calculations for 64-bit architectures.
++ * The extra resolution improves shares distribution and load balancing of
++ * low-weight task groups (e.g. nice +19 on an autogroup), deeper taskgroup
++ * hierarchies, especially on larger systems. This is not a user-visible change
++ * and does not change the user-interface for setting shares/weights.
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ *
++ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
++ * increase coverage and consistency always enable it on 64-bit platforms.
++ */
++#ifdef CONFIG_64BIT
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
++# define scale_load_down(w) \
++({ \
++	unsigned long __w = (w); \
++	if (__w) \
++		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
++	__w; \
++})
++#else
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		(w)
++# define scale_load_down(w)	(w)
++#endif
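++
++/*
++ * Worked example (illustrative only, never compiled), assuming
++ * SCHED_FIXEDPOINT_SHIFT == 10 as in mainline: on 64-bit, the nice-0
++ * weight 1024 is stored as 1024 << 10 == 1048576, and scale_load_down()
++ * clamps tiny weights to 2 so they cannot round down to zero.
++ */
++#if 0
++#include <assert.h>
++
++#define EX_FP_SHIFT 10	/* stands in for SCHED_FIXEDPOINT_SHIFT */
++#define ex_scale_load(w)	((unsigned long)(w) << EX_FP_SHIFT)
++
++static unsigned long ex_scale_load_down(unsigned long w)
++{
++	if (!w)
++		return 0;
++	w >>= EX_FP_SHIFT;
++	return w > 2 ? w : 2;	/* max(2UL, w >> SHIFT) */
++}
++
++int main(void)
++{
++	assert(ex_scale_load(1024) == 1048576);
++	assert(ex_scale_load_down(1048576) == 1024);
++	assert(ex_scale_load_down(1) == 2);	/* clamped, not 0 */
++	return 0;
++}
++#endif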
++
++/* task_struct::on_rq states: */
++#define TASK_ON_RQ_QUEUED	1
++#define TASK_ON_RQ_MIGRATING	2
++
++static inline int task_on_rq_queued(struct task_struct *p)
++{
++	return READ_ONCE(p->on_rq) == TASK_ON_RQ_QUEUED;
++}
++
++static inline int task_on_rq_migrating(struct task_struct *p)
++{
++	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
++}
++
++/* Wake flags. The first three directly map to some SD flag value */
++#define WF_EXEC         0x02 /* Wakeup after exec; maps to SD_BALANCE_EXEC */
++#define WF_FORK         0x04 /* Wakeup after fork; maps to SD_BALANCE_FORK */
++#define WF_TTWU         0x08 /* Wakeup;            maps to SD_BALANCE_WAKE */
++
++#define WF_SYNC         0x10 /* Waker goes to sleep after wakeup */
++#define WF_MIGRATED     0x20 /* Internal use, task got migrated */
++#define WF_CURRENT_CPU  0x40 /* Prefer to move the wakee to the current CPU. */
++
++#ifdef CONFIG_SMP
++static_assert(WF_EXEC == SD_BALANCE_EXEC);
++static_assert(WF_FORK == SD_BALANCE_FORK);
++static_assert(WF_TTWU == SD_BALANCE_WAKE);
++#endif
++
++#define SCHED_QUEUE_BITS	(SCHED_LEVELS - 1)
++
++struct sched_queue {
++	DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
++	struct list_head heads[SCHED_LEVELS];
++};
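++
++/*
++ * Illustrative sketch (not part of this patch, never compiled) of the
++ * O(1) pick-next this structure enables: the lowest set bit in the
++ * bitmap names the highest-priority non-empty level, and pick-next is
++ * the first entry on that level's list head. ex_first_set() stands in
++ * for the kernel's find_first_bit().
++ */
++#if 0
++#include <limits.h>
++
++#define EX_LBITS	(sizeof(unsigned long) * CHAR_BIT)
++
++static unsigned long ex_first_set(const unsigned long *map,
++				  unsigned long bits)
++{
++	for (unsigned long i = 0; i * EX_LBITS < bits; i++)
++		if (map[i])
++			return i * EX_LBITS + __builtin_ctzl(map[i]);
++	return bits;	/* every level is empty */
++}
++
++/*
++ * With a struct sched_queue *q, pick-next is then:
++ *	idx = ex_first_set(q->bitmap, SCHED_QUEUE_BITS);
++ *	p   = list_first_entry(&q->heads[idx], struct task_struct, sq_node);
++ * Note that an all-zero bitmap yields SCHED_QUEUE_BITS, which indexes
++ * the last level, where the idle task is expected to sit.
++ */
++#endif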
++
++struct rq;
++struct cpuidle_state;
++
++struct balance_callback {
++	struct balance_callback *next;
++	void (*func)(struct rq *rq);
++};
++
++typedef void (*balance_func_t)(struct rq *rq, int cpu);
++typedef void (*set_idle_mask_func_t)(unsigned int cpu, struct cpumask *dstp);
++typedef void (*clear_idle_mask_func_t)(int cpu, struct cpumask *dstp);
++
++struct balance_arg {
++	struct task_struct	*task;
++	int			active;
++	cpumask_t		*cpumask;
++};
++
++/*
++ * This is the main, per-CPU runqueue data structure.
++ * This data should only be modified by the local cpu.
++ */
++struct rq {
++	/* runqueue lock: */
++	raw_spinlock_t			lock;
++
++	struct task_struct __rcu	*curr;
++	struct task_struct		*idle;
++	struct task_struct		*stop;
++	struct mm_struct		*prev_mm;
++
++	struct sched_queue		queue		____cacheline_aligned;
++
++	int				prio;
++#ifdef CONFIG_SCHED_PDS
++	int				prio_idx;
++	u64				time_edge;
++#endif
++
++	/* switch count */
++	u64 nr_switches;
++
++	atomic_t nr_iowait;
++
++	u64 last_seen_need_resched_ns;
++	int ticks_without_resched;
++
++#ifdef CONFIG_MEMBARRIER
++	int membarrier_state;
++#endif
++
++	set_idle_mask_func_t	set_idle_mask_func;
++	clear_idle_mask_func_t	clear_idle_mask_func;
++
++#ifdef CONFIG_SMP
++	int cpu;		/* cpu of this runqueue */
++	bool online;
++
++	unsigned int		ttwu_pending;
++	unsigned char		nohz_idle_balance;
++	unsigned char		idle_balance;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	struct sched_avg	avg_irq;
++#endif
++
++	balance_func_t		balance_func;
++	struct balance_arg	active_balance_arg		____cacheline_aligned;
++	struct cpu_stop_work	active_balance_work;
++
++	struct balance_callback	*balance_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++	struct rcuwait		hotplug_wait;
++#endif
++	unsigned int		nr_pinned;
++
++#endif /* CONFIG_SMP */
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	u64 prev_irq_time;
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++#ifdef CONFIG_PARAVIRT
++	u64 prev_steal_time;
++#endif /* CONFIG_PARAVIRT */
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	u64 prev_steal_time_rq;
++#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
++
++	/* For general cpu load util */
++	s32 load_history;
++	u64 load_block;
++	u64 load_stamp;
++
++	/* calc_load related fields */
++	unsigned long calc_load_update;
++	long calc_load_active;
++
++	/* Ensure that all clocks are in the same cache line */
++	u64			clock ____cacheline_aligned;
++	u64			clock_task;
++
++	unsigned int  nr_running;
++	unsigned long nr_uninterruptible;
++
++#ifdef CONFIG_SCHED_HRTICK
++#ifdef CONFIG_SMP
++	call_single_data_t hrtick_csd;
++#endif
++	struct hrtimer		hrtick_timer;
++	ktime_t			hrtick_time;
++#endif
++
++#ifdef CONFIG_SCHEDSTATS
++
++	/* latency stats */
++	struct sched_info rq_sched_info;
++	unsigned long long rq_cpu_time;
++	/* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
++
++	/* sys_sched_yield() stats */
++	unsigned int yld_count;
++
++	/* schedule() stats */
++	unsigned int sched_switch;
++	unsigned int sched_count;
++	unsigned int sched_goidle;
++
++	/* try_to_wake_up() stats */
++	unsigned int ttwu_count;
++	unsigned int ttwu_local;
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_CPU_IDLE
++	/* Must be inspected within a rcu lock section */
++	struct cpuidle_state *idle_state;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++#ifdef CONFIG_SMP
++	call_single_data_t	nohz_csd;
++#endif
++	atomic_t		nohz_flags;
++#endif /* CONFIG_NO_HZ_COMMON */
++
++	/* Scratch cpumask to be temporarily used under rq_lock */
++	cpumask_var_t		scratch_mask;
++};
++
++extern unsigned int sysctl_sched_base_slice;
++
++extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
++
++extern unsigned long calc_load_update;
++extern atomic_long_t calc_load_tasks;
++
++extern void calc_global_load_tick(struct rq *this_rq);
++extern long calc_load_fold_active(struct rq *this_rq, long adjust);
++
++DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++#define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
++#define this_rq()		this_cpu_ptr(&runqueues)
++#define task_rq(p)		cpu_rq(task_cpu(p))
++#define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
++#define raw_rq()		raw_cpu_ptr(&runqueues)
++
++#ifdef CONFIG_SMP
++#ifdef CONFIG_SYSCTL
++void register_sched_domain_sysctl(void);
++void unregister_sched_domain_sysctl(void);
++#else
++static inline void register_sched_domain_sysctl(void)
++{
++}
++static inline void unregister_sched_domain_sysctl(void)
++{
++}
++#endif
++
++extern bool sched_smp_initialized;
++
++enum {
++#ifdef CONFIG_SCHED_SMT
++	SMT_LEVEL_SPACE_HOLDER,
++#endif
++	COREGROUP_LEVEL_SPACE_HOLDER,
++	CORE_LEVEL_SPACE_HOLDER,
++	OTHER_LEVEL_SPACE_HOLDER,
++	NR_CPU_AFFINITY_LEVELS
++};
++
++DECLARE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++
++static inline int
++__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
++{
++	int cpu;
++
++	while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
++		mask++;
++
++	return cpu;
++}
++
++static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
++{
++	return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
++}
++
++#endif /* CONFIG_SMP */
++
++extern void resched_latency_warn(int cpu, u64 latency);
++
++#ifndef arch_scale_freq_tick
++static __always_inline
++void arch_scale_freq_tick(void)
++{
++}
++#endif
++
++#ifndef arch_scale_freq_capacity
++static __always_inline
++unsigned long arch_scale_freq_capacity(int cpu)
++{
++	return SCHED_CAPACITY_SCALE;
++}
++#endif
++
++static inline u64 __rq_clock_broken(struct rq *rq)
++{
++	return READ_ONCE(rq->clock);
++}
++
++static inline u64 rq_clock(struct rq *rq)
++{
++	/*
++	 * Relax lockdep_assert_held() checking as in VRQ: sched_info_xxxx()
++	 * may be called without holding rq->lock.
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock;
++}
++
++static inline u64 rq_clock_task(struct rq *rq)
++{
++	/*
++	 * Relax lockdep_assert_held() checking as in VRQ: sched_info_xxxx()
++	 * may be called without holding rq->lock.
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock_task;
++}
++
++/*
++ * {de,en}queue flags:
++ *
++ * DEQUEUE_SLEEP  - task is no longer runnable
++ * ENQUEUE_WAKEUP - task just became runnable
++ *
++ */
++
++#define DEQUEUE_SLEEP		0x01
++
++#define ENQUEUE_WAKEUP		0x01
++
++
++/*
++ * Below are scheduler APIs used in other kernel code.
++ * They use the dummy rq_flags.
++ * TODO: BMQ needs to support these APIs for compatibility with mainline
++ * scheduler code.
++ */
++struct rq_flags {
++	unsigned long flags;
++};
++
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock);
++
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock);
++
++static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++}
++
++static inline void
++rq_lock(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock(&rq->lock);
++}
++
++static inline void
++rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++rq_lock_irq(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irq(&rq->lock);
++}
++
++static inline void
++rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++static inline struct rq *
++this_rq_lock_irq(struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	local_irq_disable();
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	return rq;
++}
++
++static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
++{
++	return &rq->lock;
++}
++
++static inline raw_spinlock_t *rq_lockp(struct rq *rq)
++{
++	return __rq_lockp(rq);
++}
++
++static inline void lockdep_assert_rq_held(struct rq *rq)
++{
++	lockdep_assert_held(__rq_lockp(rq));
++}
++
++extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
++extern void raw_spin_rq_unlock(struct rq *rq);
++
++static inline void raw_spin_rq_lock(struct rq *rq)
++{
++	raw_spin_rq_lock_nested(rq, 0);
++}
++
++static inline void raw_spin_rq_lock_irq(struct rq *rq)
++{
++	local_irq_disable();
++	raw_spin_rq_lock(rq);
++}
++
++static inline void raw_spin_rq_unlock_irq(struct rq *rq)
++{
++	raw_spin_rq_unlock(rq);
++	local_irq_enable();
++}
++
++static inline int task_current(struct rq *rq, struct task_struct *p)
++{
++	return rq->curr == p;
++}
++
++static inline bool task_on_cpu(struct task_struct *p)
++{
++	return p->on_cpu;
++}
++
++extern struct static_key_false sched_schedstats;
++
++#ifdef CONFIG_CPU_IDLE
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++	rq->idle_state = idle_state;
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	WARN_ON(!rcu_read_lock_held());
++	return rq->idle_state;
++}
++#else
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	return NULL;
++}
++#endif
++
++static inline int cpu_of(const struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	return rq->cpu;
++#else
++	return 0;
++#endif
++}
++
++extern void resched_cpu(int cpu);
++
++#include "stats.h"
++
++#ifdef CONFIG_NO_HZ_COMMON
++#define NOHZ_BALANCE_KICK_BIT	0
++#define NOHZ_STATS_KICK_BIT	1
++
++#define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
++#define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)
++
++#define NOHZ_KICK_MASK	(NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
++
++#define nohz_flags(cpu)	(&cpu_rq(cpu)->nohz_flags)
++
++/* TODO: needed?
++extern void nohz_balance_exit_idle(struct rq *rq);
++#else
++static inline void nohz_balance_exit_idle(struct rq *rq) { }
++*/
++#endif
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++struct irqtime {
++	u64			total;
++	u64			tick_delta;
++	u64			irq_start_time;
++	struct u64_stats_sync	sync;
++};
++
++DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
++extern int sched_clock_irqtime;
++
++static inline int irqtime_enabled(void)
++{
++	return sched_clock_irqtime;
++}
++
++/*
++ * Returns the irqtime minus the softirq time computed by ksoftirqd.
++ * Otherwise ksoftirqd's sum_exec_runtime would have its own runtime
++ * subtracted and never move forward.
++ */
++static inline u64 irq_time_read(int cpu)
++{
++	struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
++	unsigned int seq;
++	u64 total;
++
++	do {
++		seq = __u64_stats_fetch_begin(&irqtime->sync);
++		total = irqtime->total;
++	} while (__u64_stats_fetch_retry(&irqtime->sync, seq));
++
++	return total;
++}
++#else
++
++static inline int irqtime_enabled(void)
++{
++	return 0;
++}
++
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++
++#ifdef CONFIG_CPU_FREQ
++DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++extern int __init sched_tick_offload_init(void);
++#else
++static inline int sched_tick_offload_init(void) { return 0; }
++#endif
++
++#ifdef arch_scale_freq_capacity
++#ifndef arch_scale_freq_invariant
++#define arch_scale_freq_invariant()	(true)
++#endif
++#else /* arch_scale_freq_capacity */
++#define arch_scale_freq_invariant()	(false)
++#endif
++
++#ifdef CONFIG_SMP
++unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
++				 unsigned long min,
++				 unsigned long max);
++#endif /* CONFIG_SMP */
++
++extern void schedule_idle(void);
++
++#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
++
++/*
++ * !! For sched_setattr_nocheck() (kernel) only !!
++ *
++ * This is actually gross. :(
++ *
++ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
++ * tasks, but still be able to sleep. We need this on platforms that cannot
++ * atomically change clock frequency. Remove once fast switching is
++ * available on such platforms.
++ *
++ * SUGOV stands for SchedUtil GOVernor.
++ */
++#define SCHED_FLAG_SUGOV	0x10000000
++
++#ifdef CONFIG_MEMBARRIER
++/*
++ * The scheduler provides memory barriers required by membarrier between:
++ * - prior user-space memory accesses and store to rq->membarrier_state,
++ * - store to rq->membarrier_state and following user-space memory accesses.
++ * In the same way it provides those guarantees around store to rq->curr.
++ */
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++	int membarrier_state;
++
++	if (prev_mm == next_mm)
++		return;
++
++	membarrier_state = atomic_read(&next_mm->membarrier_state);
++	if (READ_ONCE(rq->membarrier_state) == membarrier_state)
++		return;
++
++	WRITE_ONCE(rq->membarrier_state, membarrier_state);
++}
++#else
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++}
++#endif
++
++#ifdef CONFIG_NUMA
++extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
++#else
++static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return nr_cpu_ids;
++}
++#endif
++
++extern void swake_up_all_locked(struct swait_queue_head *q);
++extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
++
++extern int try_to_wake_up(struct task_struct *tsk, unsigned int state, int wake_flags);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++extern int preempt_dynamic_mode;
++extern int sched_dynamic_mode(const char *str);
++extern void sched_dynamic_update(int mode);
++#endif
++extern const char *preempt_modes[];
++
++static inline void nohz_run_idle_balance(int cpu) { }
++
++static inline unsigned long
++uclamp_eff_value(struct task_struct *p, enum uclamp_id clamp_id)
++{
++	if (clamp_id == UCLAMP_MIN)
++		return 0;
++
++	return SCHED_CAPACITY_SCALE;
++}
++
++static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
++
++static inline bool uclamp_is_used(void)
++{
++	return false;
++}
++
++static inline unsigned long
++uclamp_rq_get(struct rq *rq, enum uclamp_id clamp_id)
++{
++	if (clamp_id == UCLAMP_MIN)
++		return 0;
++
++	return SCHED_CAPACITY_SCALE;
++}
++
++static inline void
++uclamp_rq_set(struct rq *rq, enum uclamp_id clamp_id, unsigned int value)
++{
++}
++
++static inline bool uclamp_rq_is_idle(struct rq *rq)
++{
++	return false;
++}
++
++#ifdef CONFIG_SCHED_MM_CID
++
++#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
++#define MM_CID_SCAN_DELAY	100			/* 100ms */
++
++extern raw_spinlock_t cid_lock;
++extern int use_cid_lock;
++
++extern void sched_mm_cid_migrate_from(struct task_struct *t);
++extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t);
++extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
++extern void init_sched_mm_cid(struct task_struct *t);
++
++static inline void __mm_cid_put(struct mm_struct *mm, int cid)
++{
++	if (cid < 0)
++		return;
++	cpumask_clear_cpu(cid, mm_cidmask(mm));
++}
++
++/*
++ * The per-mm/cpu cid can have the MM_CID_LAZY_PUT flag set or transition to
++ * the MM_CID_UNSET state without holding the rq lock, but the rq lock needs to
++ * be held to transition to other states.
++ *
++ * State transitions synchronized with cmpxchg or try_cmpxchg need to be
++ * consistent across cpus, which prevents use of this_cpu_cmpxchg.
++ */
++static inline void mm_cid_put_lazy(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	int cid;
++
++	lockdep_assert_irqs_disabled();
++	cid = __this_cpu_read(pcpu_cid->cid);
++	if (!mm_cid_is_lazy_put(cid) ||
++	    !try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++		return;
++	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int mm_cid_pcpu_unset(struct mm_struct *mm)
++{
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	int cid, res;
++
++	lockdep_assert_irqs_disabled();
++	cid = __this_cpu_read(pcpu_cid->cid);
++	for (;;) {
++		if (mm_cid_is_unset(cid))
++			return MM_CID_UNSET;
++		/*
++		 * Attempt transition from valid or lazy-put to unset.
++		 */
++		res = cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, cid, MM_CID_UNSET);
++		if (res == cid)
++			break;
++		cid = res;
++	}
++	return cid;
++}
++
++static inline void mm_cid_put(struct mm_struct *mm)
++{
++	int cid;
++
++	lockdep_assert_irqs_disabled();
++	cid = mm_cid_pcpu_unset(mm);
++	if (cid == MM_CID_UNSET)
++		return;
++	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
++{
++	struct cpumask *cidmask = mm_cidmask(mm);
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	int cid, max_nr_cid, allowed_max_nr_cid;
++
++	/*
++	 * After shrinking the number of threads or reducing the number
++	 * of allowed cpus, reduce the value of max_nr_cid so expansion
++	 * of cid allocation will preserve cache locality if the number
++	 * of threads or allowed cpus increases again.
++	 */
++	max_nr_cid = atomic_read(&mm->max_nr_cid);
++	while ((allowed_max_nr_cid = min_t(int, READ_ONCE(mm->nr_cpus_allowed),
++					   atomic_read(&mm->mm_users))),
++	       max_nr_cid > allowed_max_nr_cid) {
++		/* atomic_try_cmpxchg loads previous mm->max_nr_cid into max_nr_cid. */
++		if (atomic_try_cmpxchg(&mm->max_nr_cid, &max_nr_cid, allowed_max_nr_cid)) {
++			max_nr_cid = allowed_max_nr_cid;
++			break;
++		}
++	}
++	/* Try to re-use recent cid. This improves cache locality. */
++	cid = __this_cpu_read(pcpu_cid->recent_cid);
++	if (!mm_cid_is_unset(cid) && cid < max_nr_cid &&
++	    !cpumask_test_and_set_cpu(cid, cidmask))
++		return cid;
++	/*
++	 * Expand cid allocation if the maximum number of concurrency
++	 * IDs allocated (max_nr_cid) is below the number of allowed cpus
++	 * and the number of threads. Expanding cid allocation as much as
++	 * possible improves cache locality.
++	 */
++	cid = max_nr_cid;
++	while (cid < READ_ONCE(mm->nr_cpus_allowed) && cid < atomic_read(&mm->mm_users)) {
++		/* atomic_try_cmpxchg loads previous mm->max_nr_cid into cid. */
++		if (!atomic_try_cmpxchg(&mm->max_nr_cid, &cid, cid + 1))
++			continue;
++		if (!cpumask_test_and_set_cpu(cid, cidmask))
++			return cid;
++	}
++	/*
++	 * Find the first available concurrency id.
++	 * Retry finding first zero bit if the mask is temporarily
++	 * filled. This only happens during concurrent remote-clear
++	 * which owns a cid without holding a rq lock.
++	 */
++	for (;;) {
++		cid = cpumask_first_zero(cidmask);
++		if (cid < READ_ONCE(mm->nr_cpus_allowed))
++			break;
++		cpu_relax();
++	}
++	if (cpumask_test_and_set_cpu(cid, cidmask))
++		return -1;
++
++	return cid;
++}
++
++/*
++ * Save a snapshot of the current runqueue time of this cpu
++ * with the per-cpu cid value, allowing us to estimate how recently it was used.
++ */
++static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm)
++{
++	struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(rq));
++
++	lockdep_assert_rq_held(rq);
++	WRITE_ONCE(pcpu_cid->time, rq->clock);
++}
++
++static inline int __mm_cid_get(struct rq *rq, struct task_struct *t,
++			       struct mm_struct *mm)
++{
++	int cid;
++
++	/*
++	 * All allocations (even those using the cid_lock) are lock-free. If
++	 * use_cid_lock is set, hold the cid_lock to perform cid allocation to
++	 * guarantee forward progress.
++	 */
++	if (!READ_ONCE(use_cid_lock)) {
++		cid = __mm_cid_try_get(t, mm);
++		if (cid >= 0)
++			goto end;
++		raw_spin_lock(&cid_lock);
++	} else {
++		raw_spin_lock(&cid_lock);
++		cid = __mm_cid_try_get(t, mm);
++		if (cid >= 0)
++			goto unlock;
++	}
++
++	/*
++	 * cid concurrently allocated. Retry while forcing following
++	 * allocations to use the cid_lock to ensure forward progress.
++	 */
++	WRITE_ONCE(use_cid_lock, 1);
++	/*
++	 * Set use_cid_lock before allocation. Only care about program order
++	 * because this is only required for forward progress.
++	 */
++	barrier();
++	/*
++	 * Retry until it succeeds. It is guaranteed to eventually succeed once
++	 * all newly coming allocations observe the use_cid_lock flag set.
++	 */
++	do {
++		cid = __mm_cid_try_get(t, mm);
++		cpu_relax();
++	} while (cid < 0);
++	/*
++	 * Allocate before clearing use_cid_lock. Only care about
++	 * program order because this is for forward progress.
++	 */
++	barrier();
++	WRITE_ONCE(use_cid_lock, 0);
++unlock:
++	raw_spin_unlock(&cid_lock);
++end:
++	mm_cid_snapshot_time(rq, mm);
++	return cid;
++}
++
++static inline int mm_cid_get(struct rq *rq, struct task_struct *t,
++			     struct mm_struct *mm)
++{
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	struct cpumask *cpumask;
++	int cid;
++
++	lockdep_assert_rq_held(rq);
++	cpumask = mm_cidmask(mm);
++	cid = __this_cpu_read(pcpu_cid->cid);
++	if (mm_cid_is_valid(cid)) {
++		mm_cid_snapshot_time(rq, mm);
++		return cid;
++	}
++	if (mm_cid_is_lazy_put(cid)) {
++		if (try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++			__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++	}
++	cid = __mm_cid_get(rq, t, mm);
++	__this_cpu_write(pcpu_cid->cid, cid);
++	__this_cpu_write(pcpu_cid->recent_cid, cid);
++
++	return cid;
++}
++
++static inline void switch_mm_cid(struct rq *rq,
++				 struct task_struct *prev,
++				 struct task_struct *next)
++{
++	/*
++	 * Provide a memory barrier between rq->curr store and load of
++	 * {prev,next}->mm->pcpu_cid[cpu] on rq->curr->mm transition.
++	 *
++	 * Should be adapted if context_switch() is modified.
++	 */
++	if (!next->mm) {                                // to kernel
++		/*
++		 * user -> kernel transition does not guarantee a barrier, but
++		 * we can use the fact that it performs an atomic operation in
++		 * mmgrab().
++		 */
++		if (prev->mm)                           // from user
++			smp_mb__after_mmgrab();
++		/*
++		 * kernel -> kernel transition does not change rq->curr->mm
++		 * state. It stays NULL.
++		 */
++	} else {                                        // to user
++		/*
++		 * kernel -> user transition does not provide a barrier
++		 * between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu].
++		 * Provide it here.
++		 */
++		if (!prev->mm)                          // from kernel
++			smp_mb();
++		/*
++		 * user -> user transition guarantees a memory barrier through
++		 * switch_mm() when current->mm changes. If current->mm is
++		 * unchanged, no barrier is needed.
++		 */
++	}
++	if (prev->mm_cid_active) {
++		mm_cid_snapshot_time(rq, prev->mm);
++		mm_cid_put_lazy(prev);
++		prev->mm_cid = -1;
++	}
++	if (next->mm_cid_active)
++		next->last_mm_cid = next->mm_cid = mm_cid_get(rq, next, next->mm);
++}
++
++#else
++static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
++static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
++static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t) { }
++static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
++static inline void init_sched_mm_cid(struct task_struct *t) { }
++#endif
++
++#ifdef CONFIG_SMP
++extern struct balance_callback balance_push_callback;
++
++static inline void
++queue_balance_callback(struct rq *rq,
++		       struct balance_callback *head,
++		       void (*func)(struct rq *rq))
++{
++	lockdep_assert_rq_held(rq);
++
++	/*
++	 * Don't (re)queue an already queued item; nor queue anything when
++	 * balance_push() is active, see the comment with
++	 * balance_push_callback.
++	 */
++	if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
++		return;
++
++	head->func = func;
++	head->next = rq->balance_callback;
++	rq->balance_callback = head;
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_SCHED_BMQ
++#include "bmq.h"
++#endif
++#ifdef CONFIG_SCHED_PDS
++#include "pds.h"
++#endif
++
++#endif /* _KERNEL_SCHED_ALT_SCHED_H */
+diff --git a/kernel/sched/alt_topology.c b/kernel/sched/alt_topology.c
+new file mode 100644
+index 000000000000..2266138ee783
+--- /dev/null
++++ b/kernel/sched/alt_topology.c
+@@ -0,0 +1,350 @@
++#include "alt_core.h"
++#include "alt_topology.h"
++
++#ifdef CONFIG_SMP
++
++static cpumask_t sched_pcore_mask ____cacheline_aligned_in_smp;
++
++static int __init sched_pcore_mask_setup(char *str)
++{
++	if (cpulist_parse(str, &sched_pcore_mask))
++		pr_warn("sched/alt: pcore_cpus= incorrect CPU range\n");
++
++	return 0;
++}
++__setup("pcore_cpus=", sched_pcore_mask_setup);
++
++/*
++ * set/clear idle mask functions
++ */
++#ifdef CONFIG_SCHED_SMT
++static void set_idle_mask_smt(unsigned int cpu, struct cpumask *dstp)
++{
++	cpumask_set_cpu(cpu, dstp);
++	if (cpumask_subset(cpu_smt_mask(cpu), sched_idle_mask))
++		cpumask_or(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
++}
++
++static void clear_idle_mask_smt(int cpu, struct cpumask *dstp)
++{
++	cpumask_clear_cpu(cpu, dstp);
++	cpumask_andnot(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
++}
++#endif
++
++static void set_idle_mask_pcore(unsigned int cpu, struct cpumask *dstp)
++{
++	cpumask_set_cpu(cpu, dstp);
++	cpumask_set_cpu(cpu, sched_pcore_idle_mask);
++}
++
++static void clear_idle_mask_pcore(int cpu, struct cpumask *dstp)
++{
++	cpumask_clear_cpu(cpu, dstp);
++	cpumask_clear_cpu(cpu, sched_pcore_idle_mask);
++}
++
++static void set_idle_mask_ecore(unsigned int cpu, struct cpumask *dstp)
++{
++	cpumask_set_cpu(cpu, dstp);
++	cpumask_set_cpu(cpu, sched_ecore_idle_mask);
++}
++
++static void clear_idle_mask_ecore(int cpu, struct cpumask *dstp)
++{
++	cpumask_clear_cpu(cpu, dstp);
++	cpumask_clear_cpu(cpu, sched_ecore_idle_mask);
++}
++
++/*
++ * Idle cpu/rq selection functions
++ */
++#ifdef CONFIG_SCHED_SMT
++static bool p1_idle_select_func(struct cpumask *dstp, const struct cpumask *src1p,
++				 const struct cpumask *src2p)
++{
++	return cpumask_and(dstp, src1p, src2p + 1)	||
++	       cpumask_and(dstp, src1p, src2p);
++}
++#endif
++
++static bool p1p2_idle_select_func(struct cpumask *dstp, const struct cpumask *src1p,
++					const struct cpumask *src2p)
++{
++	return cpumask_and(dstp, src1p, src2p + 1)	||
++	       cpumask_and(dstp, src1p, src2p + 2)	||
++	       cpumask_and(dstp, src1p, src2p);
++}
++
++/* common balance functions */
++static int active_balance_cpu_stop(void *data)
++{
++	struct balance_arg *arg = data;
++	struct task_struct *p = arg->task;
++	struct rq *rq = this_rq();
++	unsigned long flags;
++	cpumask_t tmp;
++
++	local_irq_save(flags);
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++
++	arg->active = 0;
++
++	if (task_on_rq_queued(p) && task_rq(p) == rq &&
++	    cpumask_and(&tmp, p->cpus_ptr, arg->cpumask) &&
++	    !is_migration_disabled(p)) {
++		int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu_of(rq)));
++		rq = move_queued_task(rq, p, dcpu);
++	}
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	return 0;
++}
++
++/* trigger_active_balance - for @rq */
++static inline int
++trigger_active_balance(struct rq *src_rq, struct rq *rq, cpumask_t *target_mask)
++{
++	struct balance_arg *arg;
++	unsigned long flags;
++	struct task_struct *p;
++	int res;
++
++	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
++		return 0;
++
++	arg = &rq->active_balance_arg;
++	res = (1 == rq->nr_running) &&
++	      !is_migration_disabled((p = sched_rq_first_task(rq))) &&
++	      cpumask_intersects(p->cpus_ptr, target_mask) &&
++	      !arg->active;
++	if (res) {
++		arg->task = p;
++		arg->cpumask = target_mask;
++
++		arg->active = 1;
++	}
++
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	if (res) {
++		preempt_disable();
++		raw_spin_unlock(&src_rq->lock);
++
++		stop_one_cpu_nowait(cpu_of(rq), active_balance_cpu_stop, arg,
++				    &rq->active_balance_work);
++
++		preempt_enable();
++		raw_spin_lock(&src_rq->lock);
++	}
++
++	return res;
++}
++
++static inline int
++ecore_source_balance(struct rq *rq, cpumask_t *single_task_mask, cpumask_t *target_mask)
++{
++	if (cpumask_andnot(single_task_mask, single_task_mask, &sched_pcore_mask)) {
++		int i, cpu = cpu_of(rq);
++
++		for_each_cpu_wrap(i, single_task_mask, cpu)
++			if (trigger_active_balance(rq, cpu_rq(i), target_mask))
++				return 1;
++	}
++
++	return 0;
++}
++
++static DEFINE_PER_CPU(struct balance_callback, active_balance_head);
++
++#ifdef CONFIG_SCHED_SMT
++static inline int
++smt_pcore_source_balance(struct rq *rq, cpumask_t *single_task_mask, cpumask_t *target_mask)
++{
++	cpumask_t smt_single_mask;
++
++	if (cpumask_and(&smt_single_mask, single_task_mask, &sched_smt_mask)) {
++		int i, cpu = cpu_of(rq);
++
++		for_each_cpu_wrap(i, &smt_single_mask, cpu) {
++			if (cpumask_subset(cpu_smt_mask(i), &smt_single_mask) &&
++			    trigger_active_balance(rq, cpu_rq(i), target_mask))
++				return 1;
++		}
++	}
++
++	return 0;
++}
++
++/* smt p core balance functions */
++static inline void smt_pcore_balance(struct rq *rq)
++{
++	cpumask_t single_task_mask;
++
++	if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++	    cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++	    (/* smt core group balance */
++	     (static_key_count(&sched_smt_present.key) > 1 &&
++	      smt_pcore_source_balance(rq, &single_task_mask, sched_sg_idle_mask)
++	     ) ||
++	     /* e core to idle smt core balance */
++	     ecore_source_balance(rq, &single_task_mask, sched_sg_idle_mask)))
++		return;
++}
++
++static void smt_pcore_balance_func(struct rq *rq, const int cpu)
++{
++	if (cpumask_test_cpu(cpu, sched_sg_idle_mask))
++		queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), smt_pcore_balance);
++}
++
++/* smt balance functions */
++static inline void smt_balance(struct rq *rq)
++{
++	cpumask_t single_task_mask;
++
++	if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++	    cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++	    static_key_count(&sched_smt_present.key) > 1 &&
++	    smt_pcore_source_balance(rq, &single_task_mask, sched_sg_idle_mask))
++		return;
++}
++
++static void smt_balance_func(struct rq *rq, const int cpu)
++{
++	if (cpumask_test_cpu(cpu, sched_sg_idle_mask))
++		queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), smt_balance);
++}
++
++/* e core balance functions */
++static inline void ecore_balance(struct rq *rq)
++{
++	cpumask_t single_task_mask;
++
++	if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++	    cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++	    /* smt occupied p core to idle e core balance */
++	    smt_pcore_source_balance(rq, &single_task_mask, sched_ecore_idle_mask))
++		return;
++}
++
++static void ecore_balance_func(struct rq *rq, const int cpu)
++{
++	queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), ecore_balance);
++}
++#endif /* CONFIG_SCHED_SMT */
++
++/* p core balance functions */
++static inline void pcore_balance(struct rq *rq)
++{
++	cpumask_t single_task_mask;
++
++	if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++	    cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++	    /* idle e core to p core balance */
++	    ecore_source_balance(rq, &single_task_mask, sched_pcore_idle_mask))
++		return;
++}
++
++static void pcore_balance_func(struct rq *rq, const int cpu)
++{
++	queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), pcore_balance);
++}
++
++#ifdef ALT_SCHED_DEBUG
++#define SCHED_DEBUG_INFO(...)	printk(KERN_INFO __VA_ARGS__)
++#else
++#define SCHED_DEBUG_INFO(...)	do { } while (0)
++#endif
++
++#define SET_IDLE_SELECT_FUNC(func)						\
++{										\
++	idle_select_func = func;						\
++	printk(KERN_INFO "sched: "#func"\n");					\
++}
++
++#define SET_RQ_BALANCE_FUNC(rq, cpu, func)					\
++{										\
++	rq->balance_func = func;						\
++	SCHED_DEBUG_INFO("sched: cpu#%02d -> "#func"\n", cpu);			\
++}
++
++#define SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_func, clear_func)			\
++{										\
++	rq->set_idle_mask_func		= set_func;				\
++	rq->clear_idle_mask_func	= clear_func;				\
++	SCHED_DEBUG_INFO("sched: cpu#%02d -> "#set_func" "#clear_func"\n", cpu);	\
++}
++
++void sched_init_topology(void)
++{
++	int cpu;
++	struct rq *rq;
++	cpumask_t sched_ecore_mask = { CPU_BITS_NONE };
++	int ecore_present = 0;
++
++#ifdef CONFIG_SCHED_SMT
++	if (!cpumask_empty(&sched_smt_mask))
++		printk(KERN_INFO "sched: smt mask: 0x%08lx\n", sched_smt_mask.bits[0]);
++#endif
++
++	if (!cpumask_empty(&sched_pcore_mask)) {
++		cpumask_andnot(&sched_ecore_mask, cpu_online_mask, &sched_pcore_mask);
++		printk(KERN_INFO "sched: pcore mask: 0x%08lx, ecore mask: 0x%08lx\n",
++		       sched_pcore_mask.bits[0], sched_ecore_mask.bits[0]);
++
++		ecore_present = !cpumask_empty(&sched_ecore_mask);
++	}
++
++#ifdef CONFIG_SCHED_SMT
++	/* idle select function */
++	if (cpumask_equal(&sched_smt_mask, cpu_online_mask)) {
++		SET_IDLE_SELECT_FUNC(p1_idle_select_func);
++	} else
++#endif
++	if (!cpumask_empty(&sched_pcore_mask)) {
++		SET_IDLE_SELECT_FUNC(p1p2_idle_select_func);
++	}
++
++	for_each_online_cpu(cpu) {
++		rq = cpu_rq(cpu);
++		/* take the chance to reset the time slice for idle tasks */
++		rq->idle->time_slice = sysctl_sched_base_slice;
++
++#ifdef CONFIG_SCHED_SMT
++		if (cpumask_weight(cpu_smt_mask(cpu)) > 1) {
++			SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_smt, clear_idle_mask_smt);
++
++			if (cpumask_test_cpu(cpu, &sched_pcore_mask) &&
++			    !cpumask_intersects(&sched_ecore_mask, &sched_smt_mask)) {
++				SET_RQ_BALANCE_FUNC(rq, cpu, smt_pcore_balance_func);
++			} else {
++				SET_RQ_BALANCE_FUNC(rq, cpu, smt_balance_func);
++			}
++
++			continue;
++		}
++#endif
++		/* !SMT or only one cpu in sg */
++		if (cpumask_test_cpu(cpu, &sched_pcore_mask)) {
++			SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_pcore, clear_idle_mask_pcore);
++
++			if (ecore_present)
++				SET_RQ_BALANCE_FUNC(rq, cpu, pcore_balance_func);
++
++			continue;
++		}
++		if (cpumask_test_cpu(cpu, &sched_ecore_mask)) {
++			SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_ecore, clear_idle_mask_ecore);
++#ifdef CONFIG_SCHED_SMT
++			if (cpumask_intersects(&sched_pcore_mask, &sched_smt_mask))
++				SET_RQ_BALANCE_FUNC(rq, cpu, ecore_balance_func);
++#endif
++		}
++	}
++}
++#endif /* CONFIG_SMP */
+diff --git a/kernel/sched/alt_topology.h b/kernel/sched/alt_topology.h
+new file mode 100644
+index 000000000000..076174cd2bc6
+--- /dev/null
++++ b/kernel/sched/alt_topology.h
+@@ -0,0 +1,6 @@
++#ifndef _KERNEL_SCHED_ALT_TOPOLOGY_H
++#define _KERNEL_SCHED_ALT_TOPOLOGY_H
++
++extern void sched_init_topology(void);
++
++#endif /* _KERNEL_SCHED_ALT_TOPOLOGY_H */
+diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
+new file mode 100644
+index 000000000000..5a7835246ec3
+--- /dev/null
++++ b/kernel/sched/bmq.h
+@@ -0,0 +1,103 @@
++#ifndef _KERNEL_SCHED_BMQ_H
++#define _KERNEL_SCHED_BMQ_H
++
++#define ALT_SCHED_NAME "BMQ"
++
++/*
++ * BMQ only routines
++ */
++static inline void boost_task(struct task_struct *p, int n)
++{
++	int limit;
++
++	switch (p->policy) {
++	case SCHED_NORMAL:
++		limit = -MAX_PRIORITY_ADJ;
++		break;
++	case SCHED_BATCH:
++		limit = 0;
++		break;
++	default:
++		return;
++	}
++
++	p->boost_prio = max(limit, p->boost_prio - n);
++}
++
++static inline void deboost_task(struct task_struct *p)
++{
++	if (p->boost_prio < MAX_PRIORITY_ADJ)
++		p->boost_prio++;
++}
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms) {}
++
++/* This API is used in task_prio(); its return value is read by human users */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	return p->prio + p->boost_prio - MIN_NORMAL_PRIO;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
++		MIN_SCHED_NORMAL_PRIO + (p->prio + p->boost_prio - MIN_NORMAL_PRIO) / 2;
++}
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio)	\
++	prio = task_sched_prio(p);		\
++	idx = prio;
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++	return prio;
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++	return idx;
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++	return rq->prio;
++}
++
++static inline int task_running_nice(struct task_struct *p)
++{
++	return (p->prio + p->boost_prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq) {}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++	deboost_task(p);
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
++static inline void sched_task_fork(struct task_struct *p, struct rq *rq) {}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
++static inline void sched_task_ttwu(struct task_struct *p)
++{
++	s64 delta = this_rq()->clock_task - p->last_ran;
++
++	if (likely(delta > 0))
++		boost_task(p, delta >> 22);
++}
++
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
++{
++	boost_task(p, 1);
++}
++
++#endif /* _KERNEL_SCHED_BMQ_H */
+diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
+index 72d97aa8b726..60ce3eecaa7b 100644
+--- a/kernel/sched/build_policy.c
++++ b/kernel/sched/build_policy.c
+@@ -49,15 +49,21 @@
+ 
+ #include "idle.c"
+ 
++#ifndef CONFIG_SCHED_ALT
+ #include "rt.c"
++#endif
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ # include "cpudeadline.c"
++#endif
+ # include "pelt.c"
+ #endif
+ 
+ #include "cputime.c"
++#ifndef CONFIG_SCHED_ALT
+ #include "deadline.c"
++#endif
+ 
+ #ifdef CONFIG_SCHED_CLASS_EXT
+ # include "ext.c"
+diff --git a/kernel/sched/build_utility.c b/kernel/sched/build_utility.c
+index bf9d8db94b70..1c5443b89013 100644
+--- a/kernel/sched/build_utility.c
++++ b/kernel/sched/build_utility.c
+@@ -56,6 +56,10 @@
+ 
+ #include "clock.c"
+ 
++#ifdef CONFIG_SCHED_ALT
++# include "alt_topology.c"
++#endif
++
+ #ifdef CONFIG_CGROUP_CPUACCT
+ # include "cpuacct.c"
+ #endif
+@@ -68,7 +72,7 @@
+ # include "cpufreq_schedutil.c"
+ #endif
+ 
+-#include "debug.c"
++# include "debug.c"
+ 
+ #ifdef CONFIG_SCHEDSTATS
+ # include "stats.c"
+@@ -82,7 +86,9 @@
+ 
+ #ifdef CONFIG_SMP
+ # include "cpupri.c"
++#ifndef CONFIG_SCHED_ALT
+ # include "stop_task.c"
++#endif
+ # include "topology.c"
+ #endif
+ 
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 816f07f9d30f..f88a79bcd2bf 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -223,6 +223,7 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
+ 
+ static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned long min, max, util = scx_cpuperf_target(sg_cpu->cpu);
+ 
+ 	if (!scx_switched_all())
+@@ -231,6 +232,10 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
+ 	util = max(util, boost);
+ 	sg_cpu->bw_min = min;
+ 	sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
++#else /* CONFIG_SCHED_ALT */
++	sg_cpu->bw_min = 0;
++	sg_cpu->util = rq_load_util(cpu_rq(sg_cpu->cpu), arch_scale_cpu_capacity(sg_cpu->cpu));
++#endif /* CONFIG_SCHED_ALT */
+ }
+ 
+ /**
+@@ -390,8 +395,10 @@ static inline bool sugov_hold_freq(struct sugov_cpu *sg_cpu) { return false; }
+  */
+ static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
+-		sg_cpu->sg_policy->need_freq_update = true;
++		sg_cpu->sg_policy->limits_changed = true;
++#endif
+ }
+ 
+ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
+@@ -685,6 +692,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
+ 	}
+ 
+ 	ret = sched_setattr_nocheck(thread, &attr);
++
+ 	if (ret) {
+ 		kthread_stop(thread);
+ 		pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index 6dab4854c6c0..24705643a077 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -124,7 +124,7 @@ void account_user_time(struct task_struct *p, u64 cputime)
+ 	p->utime += cputime;
+ 	account_group_user_time(p, cputime);
+ 
+-	index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
++	index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
+ 
+ 	/* Add user time to cpustat. */
+ 	task_group_account_field(p, index, cputime);
+@@ -148,7 +148,7 @@ void account_guest_time(struct task_struct *p, u64 cputime)
+ 	p->gtime += cputime;
+ 
+ 	/* Add guest time to cpustat. */
+-	if (task_nice(p) > 0) {
++	if (task_running_nice(p)) {
+ 		task_group_account_field(p, CPUTIME_NICE, cputime);
+ 		cpustat[CPUTIME_GUEST_NICE] += cputime;
+ 	} else {
+@@ -286,7 +286,7 @@ static inline u64 account_other_time(u64 max)
+ #ifdef CONFIG_64BIT
+ static inline u64 read_sum_exec_runtime(struct task_struct *t)
+ {
+-	return t->se.sum_exec_runtime;
++	return tsk_seruntime(t);
+ }
+ #else
+ static u64 read_sum_exec_runtime(struct task_struct *t)
+@@ -296,7 +296,7 @@ static u64 read_sum_exec_runtime(struct task_struct *t)
+ 	struct rq *rq;
+ 
+ 	rq = task_rq_lock(t, &rf);
+-	ns = t->se.sum_exec_runtime;
++	ns = tsk_seruntime(t);
+ 	task_rq_unlock(rq, t, &rf);
+ 
+ 	return ns;
+@@ -621,7 +621,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
+ void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
+ {
+ 	struct task_cputime cputime = {
+-		.sum_exec_runtime = p->se.sum_exec_runtime,
++		.sum_exec_runtime = tsk_seruntime(p),
+ 	};
+ 
+ 	if (task_cputime(p, &cputime.utime, &cputime.stime))
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index 56ae54e0ce6a..27464ba49425 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -7,6 +7,7 @@
+  * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
+  */
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * This allows printing both to /sys/kernel/debug/sched/debug and
+  * to the console
+@@ -215,6 +216,7 @@ static const struct file_operations sched_scaling_fops = {
+ };
+ 
+ #endif /* SMP */
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 
+@@ -281,6 +283,7 @@ static const struct file_operations sched_dynamic_fops = {
+ 
+ #endif /* CONFIG_PREEMPT_DYNAMIC */
+ 
++#ifndef CONFIG_SCHED_ALT
+ __read_mostly bool sched_debug_verbose;
+ 
+ #ifdef CONFIG_SMP
+@@ -471,9 +474,11 @@ static const struct file_operations fair_server_period_fops = {
+ 	.llseek		= seq_lseek,
+ 	.release	= single_release,
+ };
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ static struct dentry *debugfs_sched;
+ 
++#ifndef CONFIG_SCHED_ALT
+ static void debugfs_fair_server_init(void)
+ {
+ 	struct dentry *d_fair;
+@@ -494,6 +499,7 @@ static void debugfs_fair_server_init(void)
+ 		debugfs_create_file("period", 0644, d_cpu, (void *) cpu, &fair_server_period_fops);
+ 	}
+ }
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ static __init int sched_init_debug(void)
+ {
+@@ -501,14 +507,17 @@ static __init int sched_init_debug(void)
+ 
+ 	debugfs_sched = debugfs_create_dir("sched", NULL);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
+ 	debugfs_create_file_unsafe("verbose", 0644, debugfs_sched, &sched_debug_verbose, &sched_verbose_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 	debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
+ #endif
+ 
+ 	debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
+ 	debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once);
+ 
+@@ -533,13 +542,17 @@ static __init int sched_init_debug(void)
+ #endif
+ 
+ 	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_fair_server_init();
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	return 0;
+ }
+ late_initcall(sched_init_debug);
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 
+ static cpumask_var_t		sd_sysctl_cpus;
+@@ -1284,6 +1297,11 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
+ 
+ 	sched_show_numa(p, m);
+ }
++#else
++void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
++						  struct seq_file *m)
++{ }
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ void proc_sched_set_task(struct task_struct *p)
+ {
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index 2c85c86b455f..4369a4b123c9 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -423,6 +423,7 @@ void cpu_startup_entry(enum cpuhp_state state)
+ 		do_idle();
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * idle-task scheduling class.
+  */
+@@ -538,3 +539,4 @@ DEFINE_SCHED_CLASS(idle) = {
+ 	.switched_to		= switched_to_idle,
+ 	.update_curr		= update_curr_idle,
+ };
++#endif
+diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
+new file mode 100644
+index 000000000000..fe3099071eb7
+--- /dev/null
++++ b/kernel/sched/pds.h
+@@ -0,0 +1,139 @@
++#ifndef _KERNEL_SCHED_PDS_H
++#define _KERNEL_SCHED_PDS_H
++
++#define ALT_SCHED_NAME "PDS"
++
++static const u64 RT_MASK = ((1ULL << MIN_SCHED_NORMAL_PRIO) - 1);
++
++#define SCHED_NORMAL_PRIO_NUM	(32)
++#define SCHED_EDGE_DELTA	(SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)
++
++/* PDS assumes SCHED_NORMAL_PRIO_NUM is a power of 2 */
++#define SCHED_NORMAL_PRIO_MOD(x)	((x) & (SCHED_NORMAL_PRIO_NUM - 1))
++
++/* default time slice 4ms -> shift 22, 2 time slice slots -> shift 23 */
++static __read_mostly int sched_timeslice_shift = 23;
++
++/*
++ * Common interfaces
++ */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	u64 sched_dl = max(p->deadline, rq->time_edge);
++
++#ifdef ALT_SCHED_DEBUG
++	if (WARN_ONCE(sched_dl - rq->time_edge > NORMAL_PRIO_NUM - 1,
++		      "pds: task_sched_prio_normal() delta %lld\n", sched_dl - rq->time_edge))
++		return SCHED_NORMAL_PRIO_NUM - 1;
++#endif
++
++	return sched_dl - rq->time_edge;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
++		MIN_SCHED_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
++}
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio)							\
++	if (p->prio < MIN_NORMAL_PRIO) {							\
++		prio = p->prio >> 2;								\
++		idx = prio;									\
++	} else {										\
++		u64 sched_dl = max(p->deadline, rq->time_edge);					\
++		prio = MIN_SCHED_NORMAL_PRIO + sched_dl - rq->time_edge;			\
++		idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_dl);			\
++	}
++
++static inline int sched_prio2idx(int sched_prio, struct rq *rq)
++{
++	return (IDLE_TASK_SCHED_PRIO == sched_prio || sched_prio < MIN_SCHED_NORMAL_PRIO) ?
++		sched_prio :
++		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_prio + rq->time_edge);
++}
++
++static inline int sched_idx2prio(int sched_idx, struct rq *rq)
++{
++	return (sched_idx < MIN_SCHED_NORMAL_PRIO) ?
++		sched_idx :
++		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_idx - rq->time_edge);
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++	return rq->prio_idx;
++}
++
++static inline int task_running_nice(struct task_struct *p)
++{
++	return (p->prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq)
++{
++	struct list_head head;
++	u64 old = rq->time_edge;
++	u64 now = rq->clock >> sched_timeslice_shift;
++	u64 prio, delta;
++	DECLARE_BITMAP(normal, SCHED_QUEUE_BITS);
++
++	if (now == old)
++		return;
++
++	rq->time_edge = now;
++	delta = min_t(u64, SCHED_NORMAL_PRIO_NUM, now - old);
++	INIT_LIST_HEAD(&head);
++
++	prio = MIN_SCHED_NORMAL_PRIO;
++	for_each_set_bit_from(prio, rq->queue.bitmap, MIN_SCHED_NORMAL_PRIO + delta)
++		list_splice_tail_init(rq->queue.heads + MIN_SCHED_NORMAL_PRIO +
++				      SCHED_NORMAL_PRIO_MOD(prio + old), &head);
++
++	bitmap_shift_right(normal, rq->queue.bitmap, delta, SCHED_QUEUE_BITS);
++	if (!list_empty(&head)) {
++		u64 idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(now);
++
++		__list_splice(&head, rq->queue.heads + idx, rq->queue.heads[idx].next);
++		set_bit(MIN_SCHED_NORMAL_PRIO, normal);
++	}
++	bitmap_replace(rq->queue.bitmap, normal, rq->queue.bitmap,
++		       (const unsigned long *)&RT_MASK, SCHED_QUEUE_BITS);
++
++	if (rq->prio < MIN_SCHED_NORMAL_PRIO || IDLE_TASK_SCHED_PRIO == rq->prio)
++		return;
++
++	rq->prio = max_t(u64, MIN_SCHED_NORMAL_PRIO, rq->prio - delta);
++	rq->prio_idx = sched_prio2idx(rq->prio, rq);
++}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++	if (p->prio >= MIN_NORMAL_PRIO)
++		p->deadline = rq->time_edge + SCHED_EDGE_DELTA +
++			      (p->static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
++{
++	u64 max_dl = rq->time_edge + SCHED_EDGE_DELTA + NICE_WIDTH / 2 - 1;
++	if (unlikely(p->deadline > max_dl))
++		p->deadline = max_dl;
++}
++
++static inline void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++	sched_task_renew(p, rq);
++}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sysctl_sched_base_slice;
++	sched_task_renew(p, rq);
++}
++
++static inline void sched_task_ttwu(struct task_struct *p) {}
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
++
++#endif /* _KERNEL_SCHED_PDS_H */
+diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
+index 7a8534a2deff..c57eb8f000d1 100644
+--- a/kernel/sched/pelt.c
++++ b/kernel/sched/pelt.c
+@@ -266,6 +266,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
+ 	WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * sched_entity:
+  *
+@@ -383,8 +384,9 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ 
+ 	return 0;
+ }
++#endif
+ 
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * hardware:
+  *
+@@ -468,6 +470,7 @@ int update_irq_load_avg(struct rq *rq, u64 running)
+ }
+ #endif
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * Load avg and utiliztion metrics need to be updated periodically and before
+  * consumption. This function updates the metrics for all subsystems except for
+@@ -487,3 +490,4 @@ bool update_other_load_avgs(struct rq *rq)
+ 		update_hw_load_avg(rq_clock_task(rq), rq, hw_pressure) |
+ 		update_irq_load_avg(rq, 0);
+ }
++#endif /* !CONFIG_SCHED_ALT */
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index f4f6a0875c66..ee780f2b6c17 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -1,14 +1,16 @@
+ #ifdef CONFIG_SMP
+ #include "sched-pelt.h"
+ 
++#ifndef CONFIG_SCHED_ALT
+ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
+ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
+ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
+ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
+ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
+ bool update_other_load_avgs(struct rq *rq);
++#endif
+ 
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ int update_hw_load_avg(u64 now, struct rq *rq, u64 capacity);
+ 
+ static inline u64 hw_load_avg(struct rq *rq)
+@@ -45,6 +47,7 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
+ 	return PELT_MIN_DIVIDER + avg->period_contrib;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+ {
+ 	unsigned int enqueued;
+@@ -181,9 +184,11 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+ 	return rq_clock_pelt(rq_of(cfs_rq));
+ }
+ #endif
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #else
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline int
+ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ {
+@@ -201,6 +206,7 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ {
+ 	return 0;
+ }
++#endif
+ 
+ static inline int
+ update_hw_load_avg(u64 now, struct rq *rq, u64 capacity)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 47972f34ea70..b003004f56a9 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -5,6 +5,10 @@
+ #ifndef _KERNEL_SCHED_SCHED_H
+ #define _KERNEL_SCHED_SCHED_H
+ 
++#ifdef CONFIG_SCHED_ALT
++#include "alt_sched.h"
++#else
++
+ #include <linux/sched/affinity.h>
+ #include <linux/sched/autogroup.h>
+ #include <linux/sched/cpufreq.h>
+@@ -3961,4 +3965,9 @@ void sched_enq_and_set_task(struct sched_enq_and_set_ctx *ctx);
+ 
+ #include "ext.h"
+ 
++static inline int task_running_nice(struct task_struct *p)
++{
++	return (task_nice(p) > 0);
++}
++#endif /* !CONFIG_SCHED_ALT */
+ #endif /* _KERNEL_SCHED_SCHED_H */
+diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
+index 4346fd81c31f..11f05554b538 100644
+--- a/kernel/sched/stats.c
++++ b/kernel/sched/stats.c
+@@ -115,8 +115,10 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 	} else {
+ 		struct rq *rq;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		struct sched_domain *sd;
+ 		int dcount = 0;
++#endif
+ #endif
+ 		cpu = (unsigned long)(v - 2);
+ 		rq = cpu_rq(cpu);
+@@ -133,6 +135,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 		seq_printf(seq, "\n");
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		/* domain-specific stats */
+ 		rcu_read_lock();
+ 		for_each_domain(cpu, sd) {
+@@ -163,6 +166,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 			    sd->ttwu_move_balance);
+ 		}
+ 		rcu_read_unlock();
++#endif
+ #endif
+ 	}
+ 	return 0;
+diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
+index 452826df6ae1..b980bfc4ec95 100644
+--- a/kernel/sched/stats.h
++++ b/kernel/sched/stats.h
+@@ -89,6 +89,7 @@ static inline void rq_sched_info_depart  (struct rq *rq, unsigned long long delt
+ 
+ #endif /* CONFIG_SCHEDSTATS */
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ struct sched_entity_stats {
+ 	struct sched_entity     se;
+@@ -105,6 +106,7 @@ __schedstats_from_se(struct sched_entity *se)
+ #endif
+ 	return &task_of(se)->stats;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PSI
+ void psi_task_change(struct task_struct *task, int clear, int set);
+diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
+index c326de1344fb..4b3eaab8c9b8 100644
+--- a/kernel/sched/syscalls.c
++++ b/kernel/sched/syscalls.c
+@@ -16,6 +16,14 @@
+ #include "sched.h"
+ #include "autogroup.h"
+ 
++#ifdef CONFIG_SCHED_ALT
++#include "alt_core.h"
++
++static inline int __normal_prio(int policy, int rt_prio, int static_prio)
++{
++	return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) : static_prio;
++}
++#else /* !CONFIG_SCHED_ALT */
+ static inline int __normal_prio(int policy, int rt_prio, int nice)
+ {
+ 	int prio;
+@@ -29,6 +37,7 @@ static inline int __normal_prio(int policy, int rt_prio, int nice)
+ 
+ 	return prio;
+ }
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ /*
+  * Calculate the expected normal priority: i.e. priority
+@@ -39,7 +48,11 @@ static inline int __normal_prio(int policy, int rt_prio, int nice)
+  */
+ static inline int normal_prio(struct task_struct *p)
+ {
++#ifdef CONFIG_SCHED_ALT
++	return __normal_prio(p->policy, p->rt_priority, p->static_prio);
++#else /* !CONFIG_SCHED_ALT */
+ 	return __normal_prio(p->policy, p->rt_priority, PRIO_TO_NICE(p->static_prio));
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ 
+ /*
+@@ -64,6 +77,37 @@ static int effective_prio(struct task_struct *p)
+ 
+ void set_user_nice(struct task_struct *p, long nice)
+ {
++#ifdef CONFIG_SCHED_ALT
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
++		return;
++	/*
++	 * We have to be careful, if called from sys_setpriority(),
++	 * the task might be in the middle of scheduling on another CPU.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	rq = __task_access_lock(p, &lock);
++
++	p->static_prio = NICE_TO_PRIO(nice);
++	/*
++	 * The RT priorities are set via sched_setscheduler(), but we still
++	 * allow the 'normal' nice value to be set - but as expected
++	 * it won't have any effect on scheduling while the task is
++	 * not SCHED_NORMAL/SCHED_BATCH:
++	 */
++	if (task_has_rt_policy(p))
++		goto out_unlock;
++
++	p->prio = effective_prio(p);
++
++	check_task_changed(p, rq);
++out_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++#else
+ 	bool queued, running;
+ 	struct rq *rq;
+ 	int old_prio;
+@@ -112,6 +156,7 @@ void set_user_nice(struct task_struct *p, long nice)
+ 	 * lowered its priority, then reschedule its CPU:
+ 	 */
+ 	p->sched_class->prio_changed(rq, p, old_prio);
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ EXPORT_SYMBOL(set_user_nice);
+ 
+@@ -190,7 +235,19 @@ SYSCALL_DEFINE1(nice, int, increment)
+  */
+ int task_prio(const struct task_struct *p)
+ {
++#ifdef CONFIG_SCHED_ALT
++/*
++ * sched policy              return value    kernel prio     user prio/nice
++ *
++ * (BMQ) normal, batch, idle [0 ... 53]      [100 ... 139]   0/[-20 ... 19]/[-7 ... 7]
++ * (PDS) normal, batch, idle [0 ... 39]      100             0/[-20 ... 19]
++ * fifo, rr                  [-1 ... -100]   [99 ... 0]      [0 ... 99]
++ */
++	return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
++		task_sched_prio_normal(p, task_rq(p));
++#else
+ 	return p->prio - MAX_RT_PRIO;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ 
+ /**
+@@ -300,11 +357,16 @@ static void __setscheduler_params(struct task_struct *p,
+ 
+ 	p->policy = policy;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_policy(policy))
+ 		__setparam_dl(p, attr);
+ 	else if (fair_policy(policy))
+ 		__setparam_fair(p, attr);
++#else	/* CONFIG_SCHED_ALT */
++	p->static_prio = NICE_TO_PRIO(attr->sched_nice);
++#endif /* CONFIG_SCHED_ALT */
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	/* rt-policy tasks do not have a timerslack */
+ 	if (rt_or_dl_task_policy(p)) {
+ 		p->timer_slack_ns = 0;
+@@ -312,6 +374,7 @@ static void __setscheduler_params(struct task_struct *p,
+ 		/* when switching back to non-rt policy, restore timerslack */
+ 		p->timer_slack_ns = p->default_timer_slack_ns;
+ 	}
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	/*
+ 	 * __sched_setscheduler() ensures attr->sched_priority == 0 when
+@@ -320,7 +383,9 @@ static void __setscheduler_params(struct task_struct *p,
+ 	 */
+ 	p->rt_priority = attr->sched_priority;
+ 	p->normal_prio = normal_prio(p);
++#ifndef CONFIG_SCHED_ALT
+ 	set_load_weight(p, true);
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ 
+ /*
+@@ -336,6 +401,8 @@ static bool check_same_owner(struct task_struct *p)
+ 		uid_eq(cred->euid, pcred->uid));
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
++
+ #ifdef CONFIG_UCLAMP_TASK
+ 
+ static int uclamp_validate(struct task_struct *p,
+@@ -449,6 +516,7 @@ static inline int uclamp_validate(struct task_struct *p,
+ static void __setscheduler_uclamp(struct task_struct *p,
+ 				  const struct sched_attr *attr) { }
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ /*
+  * Allow unprivileged RT tasks to decrease priority.
+@@ -459,11 +527,13 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ 					 const struct sched_attr *attr,
+ 					 int policy, int reset_on_fork)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	if (fair_policy(policy)) {
+ 		if (attr->sched_nice < task_nice(p) &&
+ 		    !is_nice_reduction(p, attr->sched_nice))
+ 			goto req_priv;
+ 	}
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	if (rt_policy(policy)) {
+ 		unsigned long rlim_rtprio = task_rlimit(p, RLIMIT_RTPRIO);
+@@ -478,6 +548,7 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ 			goto req_priv;
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	/*
+ 	 * Can't set/change SCHED_DEADLINE policy at all for now
+ 	 * (safest behavior); in the future we would like to allow
+@@ -495,6 +566,7 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ 		if (!is_nice_reduction(p, task_nice(p)))
+ 			goto req_priv;
+ 	}
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	/* Can't change other user's priorities: */
+ 	if (!check_same_owner(p))
+@@ -517,6 +589,158 @@ int __sched_setscheduler(struct task_struct *p,
+ 			 const struct sched_attr *attr,
+ 			 bool user, bool pi)
+ {
++#ifdef CONFIG_SCHED_ALT
++	const struct sched_attr dl_squash_attr = {
++		.size		= sizeof(struct sched_attr),
++		.sched_policy	= SCHED_FIFO,
++		.sched_nice	= 0,
++		.sched_priority = 99,
++	};
++	int oldpolicy = -1, policy = attr->sched_policy;
++	int retval, newprio;
++	struct balance_callback *head;
++	unsigned long flags;
++	struct rq *rq;
++	int reset_on_fork;
++	raw_spinlock_t *lock;
++
++	/* The pi code expects interrupts enabled */
++	BUG_ON(pi && in_interrupt());
++
++	/*
++	 * Alt schedule FW supports SCHED_DEADLINE by squashing it into SCHED_FIFO (see dl_squash_attr above)
++	 */
++	if (unlikely(SCHED_DEADLINE == policy)) {
++		attr = &dl_squash_attr;
++		policy = attr->sched_policy;
++	}
++recheck:
++	/* Double check policy once rq lock held */
++	if (policy < 0) {
++		reset_on_fork = p->sched_reset_on_fork;
++		policy = oldpolicy = p->policy;
++	} else {
++		reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
++
++		if (policy > SCHED_IDLE)
++			return -EINVAL;
++	}
++
++	if (attr->sched_flags & ~(SCHED_FLAG_ALL))
++		return -EINVAL;
++
++	/*
++	 * Valid priorities for SCHED_FIFO and SCHED_RR are
++	 * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
++	 * SCHED_BATCH and SCHED_IDLE is 0.
++	 */
++	if (attr->sched_priority < 0 ||
++	    (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
++	    (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
++		return -EINVAL;
++	if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
++	    (attr->sched_priority != 0))
++		return -EINVAL;
++
++	if (user) {
++		retval = user_check_sched_setscheduler(p, attr, policy, reset_on_fork);
++		if (retval)
++			return retval;
++
++		retval = security_task_setscheduler(p);
++		if (retval)
++			return retval;
++	}
++
++	/*
++	 * Make sure no PI-waiters arrive (or leave) while we are
++	 * changing the priority of the task:
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++
++	/*
++	 * To be able to change p->policy safely, task_access_lock()
++	 * must be called.
++	 * If task_access_lock() is used here:
++	 * for a task p which is not running, reading rq->stop is
++	 * racy but acceptable as ->stop doesn't change much.
++	 * An enhancement could be made to read rq->stop safely.
++	 */
++	rq = __task_access_lock(p, &lock);
++
++	/*
++	 * Changing the policy of the stop thread is a very bad idea
++	 */
++	if (p == rq->stop) {
++		retval = -EINVAL;
++		goto unlock;
++	}
++
++	/*
++	 * If not changing anything there's no need to proceed further:
++	 */
++	if (unlikely(policy == p->policy)) {
++		if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
++			goto change;
++		if (!rt_policy(policy) &&
++		    NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
++			goto change;
++
++		p->sched_reset_on_fork = reset_on_fork;
++		retval = 0;
++		goto unlock;
++	}
++change:
++
++	/* Re-check policy now with rq lock held */
++	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
++		policy = oldpolicy = -1;
++		__task_access_unlock(p, lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++		goto recheck;
++	}
++
++	p->sched_reset_on_fork = reset_on_fork;
++
++	newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
++	if (pi) {
++		/*
++		 * Take priority boosted tasks into account. If the new
++		 * effective priority is unchanged, we just store the new
++		 * normal parameters and do not touch the scheduler class and
++		 * the runqueue. This will be done when the task deboost
++		 * itself.
++		 */
++		newprio = rt_effective_prio(p, newprio);
++	}
++
++	if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
++		__setscheduler_params(p, attr);
++		__setscheduler_prio(p, newprio);
++	}
++
++	check_task_changed(p, rq);
++
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++	head = splice_balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	if (pi)
++		rt_mutex_adjust_pi(p);
++
++	/* Run balance callbacks after we've adjusted the PI chain: */
++	balance_callbacks(rq, head);
++	preempt_enable();
++
++	return 0;
++
++unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++	return retval;
++#else /* !CONFIG_SCHED_ALT */
+ 	int oldpolicy = -1, policy = attr->sched_policy;
+ 	int retval, oldprio, newprio, queued, running;
+ 	const struct sched_class *prev_class, *next_class;
+@@ -754,6 +978,7 @@ int __sched_setscheduler(struct task_struct *p,
+ 	if (cpuset_locked)
+ 		cpuset_unlock();
+ 	return retval;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ 
+ static int _sched_setscheduler(struct task_struct *p, int policy,
+@@ -765,8 +990,10 @@ static int _sched_setscheduler(struct task_struct *p, int policy,
+ 		.sched_nice	= PRIO_TO_NICE(p->static_prio),
+ 	};
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (p->se.custom_slice)
+ 		attr.sched_runtime = p->se.slice;
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	/* Fixup the legacy SCHED_RESET_ON_FORK hack. */
+ 	if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
+@@ -934,13 +1161,18 @@ static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *a
+ 
+ static void get_params(struct task_struct *p, struct sched_attr *attr)
+ {
+-	if (task_has_dl_policy(p)) {
++#ifndef CONFIG_SCHED_ALT
++	if (task_has_dl_policy(p))
+ 		__getparam_dl(p, attr);
+-	} else if (task_has_rt_policy(p)) {
++	else
++#endif
++	if (task_has_rt_policy(p)) {
+ 		attr->sched_priority = p->rt_priority;
+ 	} else {
+ 		attr->sched_nice = task_nice(p);
++#ifndef CONFIG_SCHED_ALT
+ 		attr->sched_runtime = p->se.slice;
++#endif
+ 	}
+ }
+ 
+@@ -1122,6 +1354,7 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
+ #ifdef CONFIG_SMP
+ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	/*
+ 	 * If the task isn't a deadline task or admission control is
+ 	 * disabled then we don't care about affinity changes.
+@@ -1145,6 +1378,7 @@ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
+ 	guard(rcu)();
+ 	if (!cpumask_subset(task_rq(p)->rd->span, mask))
+ 		return -EBUSY;
++#endif
+ 
+ 	return 0;
+ }
+@@ -1169,9 +1403,11 @@ int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
+ 	ctx->new_mask = new_mask;
+ 	ctx->flags |= SCA_CHECK;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	retval = dl_task_check_affinity(p, new_mask);
+ 	if (retval)
+ 		goto out_free_new_mask;
++#endif
+ 
+ 	retval = __set_cpus_allowed_ptr(p, ctx);
+ 	if (retval)
+@@ -1351,13 +1587,34 @@ SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
+ 
+ static void do_sched_yield(void)
+ {
+-	struct rq_flags rf;
+ 	struct rq *rq;
++	struct rq_flags rf;
++
++#ifdef CONFIG_SCHED_ALT
++	struct task_struct *p;
++
++	if (!sched_yield_type)
++		return;
+ 
+ 	rq = this_rq_lock_irq(&rf);
+ 
++	schedstat_inc(rq->yld_count);
++
++	p = current;
++	if (rt_task(p)) {
++		if (task_on_rq_queued(p))
++			requeue_task(p, rq);
++	} else if (rq->nr_running > 1) {
++		do_sched_yield_type_1(p, rq);
++		if (task_on_rq_queued(p))
++			requeue_task(p, rq);
++	}
++#else /* !CONFIG_SCHED_ALT */
++	rq = this_rq_lock_irq(&rf);
++
+ 	schedstat_inc(rq->yld_count);
+ 	current->sched_class->yield_task(rq);
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	preempt_disable();
+ 	rq_unlock_irq(rq, &rf);
+@@ -1426,6 +1683,9 @@ EXPORT_SYMBOL(yield);
+  */
+ int __sched yield_to(struct task_struct *p, bool preempt)
+ {
++#ifdef CONFIG_SCHED_ALT
++	return 0;
++#else /* !CONFIG_SCHED_ALT */
+ 	struct task_struct *curr = current;
+ 	struct rq *rq, *p_rq;
+ 	int yielded = 0;
+@@ -1471,6 +1731,7 @@ int __sched yield_to(struct task_struct *p, bool preempt)
+ 		schedule();
+ 
+ 	return yielded;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ EXPORT_SYMBOL_GPL(yield_to);
+ 
+@@ -1491,7 +1752,9 @@ SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
+ 	case SCHED_RR:
+ 		ret = MAX_RT_PRIO-1;
+ 		break;
++#ifndef CONFIG_SCHED_ALT
+ 	case SCHED_DEADLINE:
++#endif
+ 	case SCHED_NORMAL:
+ 	case SCHED_BATCH:
+ 	case SCHED_IDLE:
+@@ -1519,7 +1782,9 @@ SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
+ 	case SCHED_RR:
+ 		ret = 1;
+ 		break;
++#ifndef CONFIG_SCHED_ALT
+ 	case SCHED_DEADLINE:
++#endif
+ 	case SCHED_NORMAL:
+ 	case SCHED_BATCH:
+ 	case SCHED_IDLE:
+@@ -1531,7 +1796,9 @@ SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
+ 
+ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned int time_slice = 0;
++#endif
+ 	int retval;
+ 
+ 	if (pid < 0)
+@@ -1546,6 +1813,7 @@ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ 		if (retval)
+ 			return retval;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 		scoped_guard (task_rq_lock, p) {
+ 			struct rq *rq = scope.rq;
+ 			if (p->sched_class->get_rr_interval)
+@@ -1554,6 +1822,13 @@ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ 	}
+ 
+ 	jiffies_to_timespec64(time_slice, t);
++#else
++	}
++
++	alt_sched_debug();
++
++	*t = ns_to_timespec64(sysctl_sched_base_slice);
++#endif /* !CONFIG_SCHED_ALT */
+ 	return 0;
+ }
+ 
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index f1ebc60d967f..da91c6b7a629 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -3,6 +3,7 @@
+  * Scheduler topology setup/handling methods
+  */
+ 
++#ifndef CONFIG_SCHED_ALT
+ #include <linux/bsearch.h>
+ 
+ DEFINE_MUTEX(sched_domains_mutex);
+@@ -1456,8 +1457,10 @@ static void asym_cpu_capacity_scan(void)
+  */
+ 
+ static int default_relax_domain_level = -1;
++#endif /* CONFIG_SCHED_ALT */
+ int sched_domain_level_max;
+ 
++#ifndef CONFIG_SCHED_ALT
+ static int __init setup_relax_domain_level(char *str)
+ {
+ 	if (kstrtoint(str, 0, &default_relax_domain_level))
+@@ -1690,6 +1693,7 @@ sd_init(struct sched_domain_topology_level *tl,
+ 
+ 	return sd;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ /*
+  * Topology list, bottom-up.
+@@ -1726,6 +1730,7 @@ void __init set_sched_topology(struct sched_domain_topology_level *tl)
+ 	sched_domain_topology_saved = NULL;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_NUMA
+ 
+ static const struct cpumask *sd_numa_mask(int cpu)
+@@ -2778,3 +2783,28 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+ 	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+ 	sched_domains_mutex_unlock();
+ }
++#else /* CONFIG_SCHED_ALT */
++DEFINE_STATIC_KEY_FALSE(sched_asym_cpucapacity);
++
++void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
++			     struct sched_domain_attr *dattr_new)
++{}
++
++#ifdef CONFIG_NUMA
++int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return best_mask_cpu(cpu, cpus);
++}
++
++int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
++{
++	return cpumask_nth(cpu, cpus);
++}
++
++const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
++{
++	return ERR_PTR(-EOPNOTSUPP);
++}
++EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
++#endif /* CONFIG_NUMA */
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 3b7a7308e35b..aebd89ee129e 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -84,6 +84,10 @@ EXPORT_SYMBOL_GPL(sysctl_long_vals);
+ static const int ngroups_max = NGROUPS_MAX;
+ static const int cap_last_cap = CAP_LAST_CAP;
+ 
++#ifdef CONFIG_SCHED_ALT
++extern int sched_yield_type;
++#endif
++
+ #ifdef CONFIG_PROC_SYSCTL
+ 
+ /**
+@@ -1819,6 +1823,17 @@ static const struct ctl_table kern_table[] = {
+ 		.proc_handler	= proc_dointvec,
+ 	},
+ #endif
++#ifdef CONFIG_SCHED_ALT
++	{
++		.procname	= "yield_type",
++		.data		= &sched_yield_type,
++		.maxlen		= sizeof (int),
++		.mode		= 0644,
++		.proc_handler	= &proc_dointvec_minmax,
++		.extra1		= SYSCTL_ZERO,
++		.extra2		= SYSCTL_TWO,
++	},
++#endif
+ #ifdef CONFIG_SYSCTL_ARCH_UNALIGN_NO_WARN
+ 	{
+ 		.procname	= "ignore-unaligned-usertrap",
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 50e8d04ab661..0a761f9cd5e4 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -223,7 +223,7 @@ static void task_sample_cputime(struct task_struct *p, u64 *samples)
+ 	u64 stime, utime;
+ 
+ 	task_cputime(p, &utime, &stime);
+-	store_samples(samples, stime, utime, p->se.sum_exec_runtime);
++	store_samples(samples, stime, utime, tsk_seruntime(p));
+ }
+ 
+ static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
+@@ -835,6 +835,7 @@ static void collect_posix_cputimers(struct posix_cputimers *pct, u64 *samples,
+ 	}
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void check_dl_overrun(struct task_struct *tsk)
+ {
+ 	if (tsk->dl.dl_overrun) {
+@@ -842,6 +843,7 @@ static inline void check_dl_overrun(struct task_struct *tsk)
+ 		send_signal_locked(SIGXCPU, SEND_SIG_PRIV, tsk, PIDTYPE_TGID);
+ 	}
+ }
++#endif
+ 
+ static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
+ {
+@@ -869,8 +871,10 @@ static void check_thread_timers(struct task_struct *tsk,
+ 	u64 samples[CPUCLOCK_MAX];
+ 	unsigned long soft;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk))
+ 		check_dl_overrun(tsk);
++#endif
+ 
+ 	if (expiry_cache_is_inactive(pct))
+ 		return;
+@@ -884,7 +888,7 @@ static void check_thread_timers(struct task_struct *tsk,
+ 	soft = task_rlimit(tsk, RLIMIT_RTTIME);
+ 	if (soft != RLIM_INFINITY) {
+ 		/* Task RT timeout is accounted in jiffies. RTTIME is usec */
+-		unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
++		unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
+ 		unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
+ 
+ 		/* At the hard limit, send SIGKILL. No further action. */
+@@ -1120,8 +1124,10 @@ static inline bool fastpath_timer_check(struct task_struct *tsk)
+ 			return true;
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk) && tsk->dl.dl_overrun)
+ 		return true;
++#endif
+ 
+ 	return false;
+ }
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index 4d915c0a263c..2368d8b75a9f 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -2495,7 +2495,11 @@ static void run_local_timers(void)
+ 		 */
+ 		if (time_after_eq(jiffies, READ_ONCE(base->next_expiry)) ||
+ 		    (i == BASE_DEF && tmigr_requires_handle_remote())) {
++#ifdef CONFIG_SCHED_BMQ
++			__raise_softirq_irqoff(TIMER_SOFTIRQ);
++#else
+ 			raise_timer_softirq(TIMER_SOFTIRQ);
++#endif
+ 			return;
+ 		}
+ 	}
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index e732c9e37e14..141f28c0a304 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -1645,6 +1645,9 @@ static void osnoise_sleep(bool skip_period)
+  */
+ static inline int osnoise_migration_pending(void)
+ {
++#ifdef CONFIG_SCHED_ALT
++	return 0;
++#else
+ 	if (!current->migration_pending)
+ 		return 0;
+ 
+@@ -1666,6 +1669,7 @@ static inline int osnoise_migration_pending(void)
+ 	mutex_unlock(&interface_lock);
+ 
+ 	return 1;
++#endif
+ }
+ 
+ /*
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+index d88c44f1dfa5..4af3cbbdcccb 100644
+--- a/kernel/trace/trace_selftest.c
++++ b/kernel/trace/trace_selftest.c
+@@ -1423,10 +1423,15 @@ static int trace_wakeup_test_thread(void *data)
+ {
+ 	/* Make this a -deadline thread */
+ 	static const struct sched_attr attr = {
++#ifdef CONFIG_SCHED_ALT
++		/* No deadline on BMQ/PDS, use RR */
++		.sched_policy = SCHED_RR,
++#else
+ 		.sched_policy = SCHED_DEADLINE,
+ 		.sched_runtime = 100000ULL,
+ 		.sched_deadline = 10000000ULL,
+ 		.sched_period = 10000000ULL
++#endif
+ 	};
+ 	struct wakeup_test_data *x = data;
+ 
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index cf6203282737..5f276dce1df3 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -1247,6 +1247,7 @@ static bool kick_pool(struct worker_pool *pool)
+ 
+ 	p = worker->task;
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 	/*
+ 	 * Idle @worker is about to execute @work and waking up provides an
+@@ -1276,6 +1277,8 @@ static bool kick_pool(struct worker_pool *pool)
+ 		}
+ 	}
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
++
+ 	wake_up_process(p);
+ 	return true;
+ }
+@@ -1404,7 +1407,11 @@ void wq_worker_running(struct task_struct *task)
+ 	 * CPU intensive auto-detection cares about how long a work item hogged
+ 	 * CPU without sleeping. Reset the starting timestamp on wakeup.
+ 	 */
++#ifdef CONFIG_SCHED_ALT
++	worker->current_at = worker->task->sched_time;
++#else
+ 	worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ 
+ 	WRITE_ONCE(worker->sleeping, 0);
+ }
+@@ -1489,7 +1496,11 @@ void wq_worker_tick(struct task_struct *task)
+ 	 * We probably want to make this prettier in the future.
+ 	 */
+ 	if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
++#ifdef CONFIG_SCHED_ALT
++	    worker->task->sched_time - worker->current_at <
++#else
+ 	    worker->task->se.sum_exec_runtime - worker->current_at <
++#endif
+ 	    wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
+ 		return;
+ 
+@@ -3166,7 +3177,11 @@ __acquires(&pool->lock)
+ 	worker->current_func = work->func;
+ 	worker->current_pwq = pwq;
+ 	if (worker->task)
++#ifdef CONFIG_SCHED_ALT
++		worker->current_at = worker->task->sched_time;
++#else
+ 		worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ 	work_data = *work_data_bits(work);
+ 	worker->current_color = get_work_color(work_data);
+ 
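For orientation when reading the raw patch: the BMQ code in bmq.h above folds a
task's static priority and its wakeup boost into one of a small set of run
queue levels. The following is a minimal standalone sketch of that mapping,
using assumed placeholder values for the alt_sched.h constants that this hunk
does not show; treat it as an illustration, not the kernel's actual numbers.

	/*
	 * Standalone sketch of the BMQ queue-index math from bmq.h.
	 * MAX_PRIORITY_ADJ and MIN_NORMAL_PRIO are assumed placeholders;
	 * the real definitions live in alt_sched.h.
	 */
	#include <stdio.h>

	#define MAX_PRIORITY_ADJ	7	/* assumed placeholder */
	#define MIN_NORMAL_PRIO		128	/* assumed placeholder */
	#define MIN_SCHED_NORMAL_PRIO	(MIN_NORMAL_PRIO >> 2)

	/* mirrors task_sched_prio() above: four RT prios share one queue
	 * level; normal prios fold nice level plus boost at half resolution */
	static int task_sched_prio(int prio, int boost_prio)
	{
		if (prio < MIN_NORMAL_PRIO)
			return prio >> 2;
		return MIN_SCHED_NORMAL_PRIO +
		       (prio + boost_prio - MIN_NORMAL_PRIO) / 2;
	}

	int main(void)
	{
		int prio = MIN_NORMAL_PRIO + 20;	/* a mid-range normal task */

		printf("unboosted queue idx: %d\n", task_sched_prio(prio, 0));
		printf("boosted queue idx:   %d\n",
		       task_sched_prio(prio, -MAX_PRIORITY_ADJ));
		return 0;
	}

A negative boost_prio (earned through wakeups in sched_task_ttwu) lowers the
index, i.e. moves the task to a more favored queue, and deboost_task() walks
it back one step per renewal.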

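Likewise for PDS in pds.h above: a task's virtual deadline advances with the
runqueue's time edge, offset by half its nice distance from the most-favored
normal level, and the deadline modulo the 32 normal levels picks the rotating
queue slot. A hedged sketch follows; MAX_PRIO and NICE_WIDTH mirror mainline
values, while MIN_SCHED_NORMAL_PRIO is an assumed placeholder from alt_sched.h.

	/*
	 * Standalone sketch of the PDS deadline math from pds.h.
	 * MIN_SCHED_NORMAL_PRIO is an assumed placeholder.
	 */
	#include <stdio.h>
	#include <stdint.h>

	#define MAX_PRIO		140
	#define NICE_WIDTH		40
	#define SCHED_NORMAL_PRIO_NUM	32
	#define SCHED_EDGE_DELTA	(SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)
	#define MIN_SCHED_NORMAL_PRIO	25	/* assumed placeholder */
	#define SCHED_NORMAL_PRIO_MOD(x) ((x) & (SCHED_NORMAL_PRIO_NUM - 1))

	/* mirrors sched_task_renew() above */
	static uint64_t renew_deadline(uint64_t time_edge, int static_prio)
	{
		return time_edge + SCHED_EDGE_DELTA +
		       (static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
	}

	int main(void)
	{
		uint64_t edge = 1000;	/* arbitrary rq->time_edge */
		uint64_t dl0  = renew_deadline(edge, 120);	/* nice  0 */
		uint64_t dl19 = renew_deadline(edge, 139);	/* nice 19 */

		/* queue slot: deadline folded into the 32 rotating levels */
		printf("nice 0:  dl=%llu slot=%llu\n", (unsigned long long)dl0,
		       (unsigned long long)(MIN_SCHED_NORMAL_PRIO +
					    SCHED_NORMAL_PRIO_MOD(dl0)));
		printf("nice 19: dl=%llu slot=%llu\n", (unsigned long long)dl19,
		       (unsigned long long)(MIN_SCHED_NORMAL_PRIO +
					    SCHED_NORMAL_PRIO_MOD(dl19)));
		return 0;
	}

With these numbers a nice-0 task lands roughly ten slots ahead of a nice-19
task each renewal, which is how PDS turns nice levels into deadline spread.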
diff --git a/5021_BMQ-and-PDS-gentoo-defaults.patch b/5021_BMQ-and-PDS-gentoo-defaults.patch
new file mode 100644
index 00000000..6dc48eec
--- /dev/null
+++ b/5021_BMQ-and-PDS-gentoo-defaults.patch
@@ -0,0 +1,13 @@
+--- a/init/Kconfig	2023-02-13 08:16:09.534315265 -0500
++++ b/init/Kconfig	2023-02-13 08:17:24.130237204 -0500
+@@ -867,8 +867,9 @@ config UCLAMP_BUCKETS_COUNT
+ 	  If in doubt, use the default value.
+ 
+ menuconfig SCHED_ALT
++	depends on X86_64
+ 	bool "Alternative CPU Schedulers"
+-	default y
++	default n
+ 	help
+	  This feature enables the alternative CPU scheduler
+ 
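The series also adds a kernel.yield_type sysctl (see the kernel/sysctl.c hunk
above), clamped to the range 0-2; with 0, do_sched_yield() returns immediately
without touching the runqueue. A minimal user-space sketch for setting it at
runtime, assuming the usual /proc/sys mapping of the sysctl name:

	/*
	 * Minimal sketch: set kernel.yield_type from user space.
	 * The path is inferred from the sysctl table entry above; valid
	 * values are 0..2 (extra1 = SYSCTL_ZERO, extra2 = SYSCTL_TWO).
	 */
	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/proc/sys/kernel/yield_type", "w");

		if (!f) {
			perror("yield_type (needs CONFIG_SCHED_ALT and root)");
			return 1;
		}
		fprintf(f, "0\n");	/* 0: sched_yield() becomes a no-op */
		fclose(f);
		return 0;
	}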


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-06-01 21:41 Mike Pagano
  0 siblings, 0 replies; 19+ messages in thread
From: Mike Pagano @ 2025-06-01 21:41 UTC (permalink / raw
  To: gentoo-commits

commit:     280089be0bf593e91861414c34b7577465265ebe
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue May 27 20:03:03 2025 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue May 27 20:03:03 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=280089be

Fix RANDOM_KMALLOC_CACHE(S) typo

https://bugs.gentoo.org/956708

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 3016080a..298dc6ec 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -207,7 +207,7 @@
 +	select SECURITY_LANDLOCK
 +	select SCHED_CORE if SCHED_SMT
 +	select BUG_ON_DATA_CORRUPTION
-+	select RANDOM_KMALLOC_CACHE if SLUB_TINY=n
++	select RANDOM_KMALLOC_CACHES if SLUB_TINY=n
 +	select SCHED_STACK_END_CHECK
 +	select SECCOMP if HAVE_ARCH_SECCOMP
 +	select SECCOMP_FILTER if HAVE_ARCH_SECCOMP_FILTER


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-06-04 18:07 Mike Pagano
  0 siblings, 0 replies; 19+ messages in thread
From: Mike Pagano @ 2025-06-04 18:07 UTC (permalink / raw
  To: gentoo-commits

commit:     c45fa4419fce16c5dc081016501128ab834008b6
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun  4 18:07:12 2025 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun  4 18:07:12 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c45fa441

Linux patch 6.15.1

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1000_linux-6.15.1.patch | 2184 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2188 insertions(+)

diff --git a/0000_README b/0000_README
index 057bf40b..61b05f13 100644
--- a/0000_README
+++ b/0000_README
@@ -42,6 +42,10 @@ EXPERIMENTAL
 
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
+Patch:  1000_linux-6.15.1.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.15.1
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1000_linux-6.15.1.patch b/1000_linux-6.15.1.patch
new file mode 100644
index 00000000..2ad8a361
--- /dev/null
+++ b/1000_linux-6.15.1.patch
@@ -0,0 +1,2184 @@
+diff --git a/Makefile b/Makefile
+index c1cd1b5fc269a6..61d69a3fc827cc 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 15
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm64/boot/dts/intel/socfpga_agilex5.dtsi b/arch/arm64/boot/dts/intel/socfpga_agilex5.dtsi
+index 51c6e19e40b843..7d9394a0430272 100644
+--- a/arch/arm64/boot/dts/intel/socfpga_agilex5.dtsi
++++ b/arch/arm64/boot/dts/intel/socfpga_agilex5.dtsi
+@@ -222,9 +222,9 @@ i3c1: i3c@10da1000 {
+ 			status = "disabled";
+ 		};
+ 
+-		gpio0: gpio@ffc03200 {
++		gpio0: gpio@10c03200 {
+ 			compatible = "snps,dw-apb-gpio";
+-			reg = <0xffc03200 0x100>;
++			reg = <0x10c03200 0x100>;
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			resets = <&rst GPIO0_RESET>;
+diff --git a/arch/arm64/boot/dts/qcom/ipq9574.dtsi b/arch/arm64/boot/dts/qcom/ipq9574.dtsi
+index 94229002897257..3c02351fbb156a 100644
+--- a/arch/arm64/boot/dts/qcom/ipq9574.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq9574.dtsi
+@@ -378,6 +378,8 @@ cryptobam: dma-controller@704000 {
+ 			interrupts = <GIC_SPI 207 IRQ_TYPE_LEVEL_HIGH>;
+ 			#dma-cells = <1>;
+ 			qcom,ee = <1>;
++			qcom,num-ees = <4>;
++			num-channels = <16>;
+ 			qcom,controlled-remotely;
+ 		};
+ 
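Several Qualcomm BAM DMA nodes in this patch (here and in the sa8775p, sm8450, sm8550 and sm8650 hunks below) gain qcom,num-ees and num-channels properties. The assumed rationale: on a qcom,controlled-remotely BAM the local execution environment may not be allowed to read the identification registers that normally report these counts, so the driver has to learn them from the devicetree. A sketch of that DT-side fallback using the standard OF helper (function and variable names are illustrative):

#include <linux/of.h>

static int bam_counts_from_dt(struct device_node *np,
			      u32 *num_ees, u32 *num_channels)
{
	int ret;

	ret = of_property_read_u32(np, "qcom,num-ees", num_ees);
	if (ret)
		return ret;	/* property absent: read hw registers instead */

	return of_property_read_u32(np, "num-channels", num_channels);
}
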
+diff --git a/arch/arm64/boot/dts/qcom/sa8775p.dtsi b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+index 3394ae2d130034..2329460b210381 100644
+--- a/arch/arm64/boot/dts/qcom/sa8775p.dtsi
++++ b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+@@ -2413,6 +2413,8 @@ cryptobam: dma-controller@1dc4000 {
+ 			interrupts = <GIC_SPI 272 IRQ_TYPE_LEVEL_HIGH>;
+ 			#dma-cells = <1>;
+ 			qcom,ee = <0>;
++			qcom,num-ees = <4>;
++			num-channels = <20>;
+ 			qcom,controlled-remotely;
+ 			iommus = <&apps_smmu 0x480 0x00>,
+ 				 <&apps_smmu 0x481 0x00>;
+@@ -4903,15 +4905,7 @@ compute-cb@1 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <1>;
+ 						iommus = <&apps_smmu 0x2141 0x04a0>,
+-							 <&apps_smmu 0x2161 0x04a0>,
+-							 <&apps_smmu 0x2181 0x0400>,
+-							 <&apps_smmu 0x21c1 0x04a0>,
+-							 <&apps_smmu 0x21e1 0x04a0>,
+-							 <&apps_smmu 0x2541 0x04a0>,
+-							 <&apps_smmu 0x2561 0x04a0>,
+-							 <&apps_smmu 0x2581 0x0400>,
+-							 <&apps_smmu 0x25c1 0x04a0>,
+-							 <&apps_smmu 0x25e1 0x04a0>;
++							 <&apps_smmu 0x2181 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -4919,15 +4913,7 @@ compute-cb@2 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <2>;
+ 						iommus = <&apps_smmu 0x2142 0x04a0>,
+-							 <&apps_smmu 0x2162 0x04a0>,
+-							 <&apps_smmu 0x2182 0x0400>,
+-							 <&apps_smmu 0x21c2 0x04a0>,
+-							 <&apps_smmu 0x21e2 0x04a0>,
+-							 <&apps_smmu 0x2542 0x04a0>,
+-							 <&apps_smmu 0x2562 0x04a0>,
+-							 <&apps_smmu 0x2582 0x0400>,
+-							 <&apps_smmu 0x25c2 0x04a0>,
+-							 <&apps_smmu 0x25e2 0x04a0>;
++							 <&apps_smmu 0x2182 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -4935,15 +4921,7 @@ compute-cb@3 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <3>;
+ 						iommus = <&apps_smmu 0x2143 0x04a0>,
+-							 <&apps_smmu 0x2163 0x04a0>,
+-							 <&apps_smmu 0x2183 0x0400>,
+-							 <&apps_smmu 0x21c3 0x04a0>,
+-							 <&apps_smmu 0x21e3 0x04a0>,
+-							 <&apps_smmu 0x2543 0x04a0>,
+-							 <&apps_smmu 0x2563 0x04a0>,
+-							 <&apps_smmu 0x2583 0x0400>,
+-							 <&apps_smmu 0x25c3 0x04a0>,
+-							 <&apps_smmu 0x25e3 0x04a0>;
++							 <&apps_smmu 0x2183 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -4951,15 +4929,7 @@ compute-cb@4 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <4>;
+ 						iommus = <&apps_smmu 0x2144 0x04a0>,
+-							 <&apps_smmu 0x2164 0x04a0>,
+-							 <&apps_smmu 0x2184 0x0400>,
+-							 <&apps_smmu 0x21c4 0x04a0>,
+-							 <&apps_smmu 0x21e4 0x04a0>,
+-							 <&apps_smmu 0x2544 0x04a0>,
+-							 <&apps_smmu 0x2564 0x04a0>,
+-							 <&apps_smmu 0x2584 0x0400>,
+-							 <&apps_smmu 0x25c4 0x04a0>,
+-							 <&apps_smmu 0x25e4 0x04a0>;
++							 <&apps_smmu 0x2184 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -4967,15 +4937,7 @@ compute-cb@5 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <5>;
+ 						iommus = <&apps_smmu 0x2145 0x04a0>,
+-							 <&apps_smmu 0x2165 0x04a0>,
+-							 <&apps_smmu 0x2185 0x0400>,
+-							 <&apps_smmu 0x21c5 0x04a0>,
+-							 <&apps_smmu 0x21e5 0x04a0>,
+-							 <&apps_smmu 0x2545 0x04a0>,
+-							 <&apps_smmu 0x2565 0x04a0>,
+-							 <&apps_smmu 0x2585 0x0400>,
+-							 <&apps_smmu 0x25c5 0x04a0>,
+-							 <&apps_smmu 0x25e5 0x04a0>;
++							 <&apps_smmu 0x2185 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -4983,15 +4945,7 @@ compute-cb@6 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <6>;
+ 						iommus = <&apps_smmu 0x2146 0x04a0>,
+-							 <&apps_smmu 0x2166 0x04a0>,
+-							 <&apps_smmu 0x2186 0x0400>,
+-							 <&apps_smmu 0x21c6 0x04a0>,
+-							 <&apps_smmu 0x21e6 0x04a0>,
+-							 <&apps_smmu 0x2546 0x04a0>,
+-							 <&apps_smmu 0x2566 0x04a0>,
+-							 <&apps_smmu 0x2586 0x0400>,
+-							 <&apps_smmu 0x25c6 0x04a0>,
+-							 <&apps_smmu 0x25e6 0x04a0>;
++							 <&apps_smmu 0x2186 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -4999,15 +4953,7 @@ compute-cb@7 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <7>;
+ 						iommus = <&apps_smmu 0x2147 0x04a0>,
+-							 <&apps_smmu 0x2167 0x04a0>,
+-							 <&apps_smmu 0x2187 0x0400>,
+-							 <&apps_smmu 0x21c7 0x04a0>,
+-							 <&apps_smmu 0x21e7 0x04a0>,
+-							 <&apps_smmu 0x2547 0x04a0>,
+-							 <&apps_smmu 0x2567 0x04a0>,
+-							 <&apps_smmu 0x2587 0x0400>,
+-							 <&apps_smmu 0x25c7 0x04a0>,
+-							 <&apps_smmu 0x25e7 0x04a0>;
++							 <&apps_smmu 0x2187 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -5015,15 +4961,7 @@ compute-cb@8 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <8>;
+ 						iommus = <&apps_smmu 0x2148 0x04a0>,
+-							 <&apps_smmu 0x2168 0x04a0>,
+-							 <&apps_smmu 0x2188 0x0400>,
+-							 <&apps_smmu 0x21c8 0x04a0>,
+-							 <&apps_smmu 0x21e8 0x04a0>,
+-							 <&apps_smmu 0x2548 0x04a0>,
+-							 <&apps_smmu 0x2568 0x04a0>,
+-							 <&apps_smmu 0x2588 0x0400>,
+-							 <&apps_smmu 0x25c8 0x04a0>,
+-							 <&apps_smmu 0x25e8 0x04a0>;
++							 <&apps_smmu 0x2188 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -5031,31 +4969,7 @@ compute-cb@9 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <9>;
+ 						iommus = <&apps_smmu 0x2149 0x04a0>,
+-							 <&apps_smmu 0x2169 0x04a0>,
+-							 <&apps_smmu 0x2189 0x0400>,
+-							 <&apps_smmu 0x21c9 0x04a0>,
+-							 <&apps_smmu 0x21e9 0x04a0>,
+-							 <&apps_smmu 0x2549 0x04a0>,
+-							 <&apps_smmu 0x2569 0x04a0>,
+-							 <&apps_smmu 0x2589 0x0400>,
+-							 <&apps_smmu 0x25c9 0x04a0>,
+-							 <&apps_smmu 0x25e9 0x04a0>;
+-						dma-coherent;
+-					};
+-
+-					compute-cb@10 {
+-						compatible = "qcom,fastrpc-compute-cb";
+-						reg = <10>;
+-						iommus = <&apps_smmu 0x214a 0x04a0>,
+-							 <&apps_smmu 0x216a 0x04a0>,
+-							 <&apps_smmu 0x218a 0x0400>,
+-							 <&apps_smmu 0x21ca 0x04a0>,
+-							 <&apps_smmu 0x21ea 0x04a0>,
+-							 <&apps_smmu 0x254a 0x04a0>,
+-							 <&apps_smmu 0x256a 0x04a0>,
+-							 <&apps_smmu 0x258a 0x0400>,
+-							 <&apps_smmu 0x25ca 0x04a0>,
+-							 <&apps_smmu 0x25ea 0x04a0>;
++							 <&apps_smmu 0x2189 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -5063,15 +4977,7 @@ compute-cb@11 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <11>;
+ 						iommus = <&apps_smmu 0x214b 0x04a0>,
+-							 <&apps_smmu 0x216b 0x04a0>,
+-							 <&apps_smmu 0x218b 0x0400>,
+-							 <&apps_smmu 0x21cb 0x04a0>,
+-							 <&apps_smmu 0x21eb 0x04a0>,
+-							 <&apps_smmu 0x254b 0x04a0>,
+-							 <&apps_smmu 0x256b 0x04a0>,
+-							 <&apps_smmu 0x258b 0x0400>,
+-							 <&apps_smmu 0x25cb 0x04a0>,
+-							 <&apps_smmu 0x25eb 0x04a0>;
++							 <&apps_smmu 0x218b 0x0400>;
+ 						dma-coherent;
+ 					};
+ 				};
+@@ -5131,15 +5037,7 @@ compute-cb@1 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <1>;
+ 						iommus = <&apps_smmu 0x2941 0x04a0>,
+-							 <&apps_smmu 0x2961 0x04a0>,
+-							 <&apps_smmu 0x2981 0x0400>,
+-							 <&apps_smmu 0x29c1 0x04a0>,
+-							 <&apps_smmu 0x29e1 0x04a0>,
+-							 <&apps_smmu 0x2d41 0x04a0>,
+-							 <&apps_smmu 0x2d61 0x04a0>,
+-							 <&apps_smmu 0x2d81 0x0400>,
+-							 <&apps_smmu 0x2dc1 0x04a0>,
+-							 <&apps_smmu 0x2de1 0x04a0>;
++							 <&apps_smmu 0x2981 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -5147,15 +5045,7 @@ compute-cb@2 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <2>;
+ 						iommus = <&apps_smmu 0x2942 0x04a0>,
+-							 <&apps_smmu 0x2962 0x04a0>,
+-							 <&apps_smmu 0x2982 0x0400>,
+-							 <&apps_smmu 0x29c2 0x04a0>,
+-							 <&apps_smmu 0x29e2 0x04a0>,
+-							 <&apps_smmu 0x2d42 0x04a0>,
+-							 <&apps_smmu 0x2d62 0x04a0>,
+-							 <&apps_smmu 0x2d82 0x0400>,
+-							 <&apps_smmu 0x2dc2 0x04a0>,
+-							 <&apps_smmu 0x2de2 0x04a0>;
++							 <&apps_smmu 0x2982 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -5163,15 +5053,7 @@ compute-cb@3 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <3>;
+ 						iommus = <&apps_smmu 0x2943 0x04a0>,
+-							 <&apps_smmu 0x2963 0x04a0>,
+-							 <&apps_smmu 0x2983 0x0400>,
+-							 <&apps_smmu 0x29c3 0x04a0>,
+-							 <&apps_smmu 0x29e3 0x04a0>,
+-							 <&apps_smmu 0x2d43 0x04a0>,
+-							 <&apps_smmu 0x2d63 0x04a0>,
+-							 <&apps_smmu 0x2d83 0x0400>,
+-							 <&apps_smmu 0x2dc3 0x04a0>,
+-							 <&apps_smmu 0x2de3 0x04a0>;
++							 <&apps_smmu 0x2983 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -5179,15 +5061,7 @@ compute-cb@4 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <4>;
+ 						iommus = <&apps_smmu 0x2944 0x04a0>,
+-							 <&apps_smmu 0x2964 0x04a0>,
+-							 <&apps_smmu 0x2984 0x0400>,
+-							 <&apps_smmu 0x29c4 0x04a0>,
+-							 <&apps_smmu 0x29e4 0x04a0>,
+-							 <&apps_smmu 0x2d44 0x04a0>,
+-							 <&apps_smmu 0x2d64 0x04a0>,
+-							 <&apps_smmu 0x2d84 0x0400>,
+-							 <&apps_smmu 0x2dc4 0x04a0>,
+-							 <&apps_smmu 0x2de4 0x04a0>;
++							 <&apps_smmu 0x2984 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -5195,15 +5069,7 @@ compute-cb@5 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <5>;
+ 						iommus = <&apps_smmu 0x2945 0x04a0>,
+-							 <&apps_smmu 0x2965 0x04a0>,
+-							 <&apps_smmu 0x2985 0x0400>,
+-							 <&apps_smmu 0x29c5 0x04a0>,
+-							 <&apps_smmu 0x29e5 0x04a0>,
+-							 <&apps_smmu 0x2d45 0x04a0>,
+-							 <&apps_smmu 0x2d65 0x04a0>,
+-							 <&apps_smmu 0x2d85 0x0400>,
+-							 <&apps_smmu 0x2dc5 0x04a0>,
+-							 <&apps_smmu 0x2de5 0x04a0>;
++							 <&apps_smmu 0x2985 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -5211,15 +5077,7 @@ compute-cb@6 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <6>;
+ 						iommus = <&apps_smmu 0x2946 0x04a0>,
+-							 <&apps_smmu 0x2966 0x04a0>,
+-							 <&apps_smmu 0x2986 0x0400>,
+-							 <&apps_smmu 0x29c6 0x04a0>,
+-							 <&apps_smmu 0x29e6 0x04a0>,
+-							 <&apps_smmu 0x2d46 0x04a0>,
+-							 <&apps_smmu 0x2d66 0x04a0>,
+-							 <&apps_smmu 0x2d86 0x0400>,
+-							 <&apps_smmu 0x2dc6 0x04a0>,
+-							 <&apps_smmu 0x2de6 0x04a0>;
++							 <&apps_smmu 0x2986 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -5227,15 +5085,7 @@ compute-cb@7 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <7>;
+ 						iommus = <&apps_smmu 0x2947 0x04a0>,
+-							 <&apps_smmu 0x2967 0x04a0>,
+-							 <&apps_smmu 0x2987 0x0400>,
+-							 <&apps_smmu 0x29c7 0x04a0>,
+-							 <&apps_smmu 0x29e7 0x04a0>,
+-							 <&apps_smmu 0x2d47 0x04a0>,
+-							 <&apps_smmu 0x2d67 0x04a0>,
+-							 <&apps_smmu 0x2d87 0x0400>,
+-							 <&apps_smmu 0x2dc7 0x04a0>,
+-							 <&apps_smmu 0x2de7 0x04a0>;
++							 <&apps_smmu 0x2987 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -5243,15 +5093,7 @@ compute-cb@8 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <8>;
+ 						iommus = <&apps_smmu 0x2948 0x04a0>,
+-							 <&apps_smmu 0x2968 0x04a0>,
+-							 <&apps_smmu 0x2988 0x0400>,
+-							 <&apps_smmu 0x29c8 0x04a0>,
+-							 <&apps_smmu 0x29e8 0x04a0>,
+-							 <&apps_smmu 0x2d48 0x04a0>,
+-							 <&apps_smmu 0x2d68 0x04a0>,
+-							 <&apps_smmu 0x2d88 0x0400>,
+-							 <&apps_smmu 0x2dc8 0x04a0>,
+-							 <&apps_smmu 0x2de8 0x04a0>;
++							 <&apps_smmu 0x2988 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -5259,15 +5101,7 @@ compute-cb@9 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <9>;
+ 						iommus = <&apps_smmu 0x2949 0x04a0>,
+-							 <&apps_smmu 0x2969 0x04a0>,
+-							 <&apps_smmu 0x2989 0x0400>,
+-							 <&apps_smmu 0x29c9 0x04a0>,
+-							 <&apps_smmu 0x29e9 0x04a0>,
+-							 <&apps_smmu 0x2d49 0x04a0>,
+-							 <&apps_smmu 0x2d69 0x04a0>,
+-							 <&apps_smmu 0x2d89 0x0400>,
+-							 <&apps_smmu 0x2dc9 0x04a0>,
+-							 <&apps_smmu 0x2de9 0x04a0>;
++							 <&apps_smmu 0x2989 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -5275,15 +5109,7 @@ compute-cb@10 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <10>;
+ 						iommus = <&apps_smmu 0x294a 0x04a0>,
+-							 <&apps_smmu 0x296a 0x04a0>,
+-							 <&apps_smmu 0x298a 0x0400>,
+-							 <&apps_smmu 0x29ca 0x04a0>,
+-							 <&apps_smmu 0x29ea 0x04a0>,
+-							 <&apps_smmu 0x2d4a 0x04a0>,
+-							 <&apps_smmu 0x2d6a 0x04a0>,
+-							 <&apps_smmu 0x2d8a 0x0400>,
+-							 <&apps_smmu 0x2dca 0x04a0>,
+-							 <&apps_smmu 0x2dea 0x04a0>;
++							 <&apps_smmu 0x298a 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -5291,15 +5117,7 @@ compute-cb@11 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <11>;
+ 						iommus = <&apps_smmu 0x294b 0x04a0>,
+-							 <&apps_smmu 0x296b 0x04a0>,
+-							 <&apps_smmu 0x298b 0x0400>,
+-							 <&apps_smmu 0x29cb 0x04a0>,
+-							 <&apps_smmu 0x29eb 0x04a0>,
+-							 <&apps_smmu 0x2d4b 0x04a0>,
+-							 <&apps_smmu 0x2d6b 0x04a0>,
+-							 <&apps_smmu 0x2d8b 0x0400>,
+-							 <&apps_smmu 0x2dcb 0x04a0>,
+-							 <&apps_smmu 0x2deb 0x04a0>;
++							 <&apps_smmu 0x298b 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -5307,15 +5125,7 @@ compute-cb@12 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <12>;
+ 						iommus = <&apps_smmu 0x294c 0x04a0>,
+-							 <&apps_smmu 0x296c 0x04a0>,
+-							 <&apps_smmu 0x298c 0x0400>,
+-							 <&apps_smmu 0x29cc 0x04a0>,
+-							 <&apps_smmu 0x29ec 0x04a0>,
+-							 <&apps_smmu 0x2d4c 0x04a0>,
+-							 <&apps_smmu 0x2d6c 0x04a0>,
+-							 <&apps_smmu 0x2d8c 0x0400>,
+-							 <&apps_smmu 0x2dcc 0x04a0>,
+-							 <&apps_smmu 0x2dec 0x04a0>;
++							 <&apps_smmu 0x298c 0x0400>;
+ 						dma-coherent;
+ 					};
+ 
+@@ -5323,15 +5133,7 @@ compute-cb@13 {
+ 						compatible = "qcom,fastrpc-compute-cb";
+ 						reg = <13>;
+ 						iommus = <&apps_smmu 0x294d 0x04a0>,
+-							 <&apps_smmu 0x296d 0x04a0>,
+-							 <&apps_smmu 0x298d 0x0400>,
+-							 <&apps_smmu 0x29Cd 0x04a0>,
+-							 <&apps_smmu 0x29ed 0x04a0>,
+-							 <&apps_smmu 0x2d4d 0x04a0>,
+-							 <&apps_smmu 0x2d6d 0x04a0>,
+-							 <&apps_smmu 0x2d8d 0x0400>,
+-							 <&apps_smmu 0x2dcd 0x04a0>,
+-							 <&apps_smmu 0x2ded 0x04a0>;
++							 <&apps_smmu 0x298d 0x0400>;
+ 						dma-coherent;
+ 					};
+ 				};
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index 69da30f35baaab..f055600d6cfe5b 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -455,7 +455,7 @@ cdsp_secure_heap: memory@80c00000 {
+ 			no-map;
+ 		};
+ 
+-		pil_camera_mem: mmeory@85200000 {
++		pil_camera_mem: memory@85200000 {
+ 			reg = <0x0 0x85200000 0x0 0x500000>;
+ 			no-map;
+ 		};
+diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+index 9c809fc5fa45a9..419df72cd04b0c 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+@@ -5283,6 +5283,8 @@ cryptobam: dma-controller@1dc4000 {
+ 			interrupts = <GIC_SPI 272 IRQ_TYPE_LEVEL_HIGH>;
+ 			#dma-cells = <1>;
+ 			qcom,ee = <0>;
++			qcom,num-ees = <4>;
++			num-channels = <16>;
+ 			qcom,controlled-remotely;
+ 			iommus = <&apps_smmu 0x584 0x11>,
+ 				 <&apps_smmu 0x588 0x0>,
+diff --git a/arch/arm64/boot/dts/qcom/sm8550.dtsi b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+index eac8de4005d82f..ac3e00ad417719 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+@@ -1957,6 +1957,8 @@ cryptobam: dma-controller@1dc4000 {
+ 			interrupts = <GIC_SPI 272 IRQ_TYPE_LEVEL_HIGH>;
+ 			#dma-cells = <1>;
+ 			qcom,ee = <0>;
++			qcom,num-ees = <4>;
++			num-channels = <20>;
+ 			qcom,controlled-remotely;
+ 			iommus = <&apps_smmu 0x480 0x0>,
+ 				 <&apps_smmu 0x481 0x0>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8650.dtsi b/arch/arm64/boot/dts/qcom/sm8650.dtsi
+index 86684cb9a93256..c8a2a76a98f000 100644
+--- a/arch/arm64/boot/dts/qcom/sm8650.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8650.dtsi
+@@ -2533,6 +2533,8 @@ cryptobam: dma-controller@1dc4000 {
+ 				 <&apps_smmu 0x481 0>;
+ 
+ 			qcom,ee = <0>;
++			qcom,num-ees = <4>;
++			num-channels = <20>;
+ 			qcom,controlled-remotely;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/x1e001de-devkit.dts b/arch/arm64/boot/dts/qcom/x1e001de-devkit.dts
+index 5e3970b26e2f95..f5063a0df9fbfa 100644
+--- a/arch/arm64/boot/dts/qcom/x1e001de-devkit.dts
++++ b/arch/arm64/boot/dts/qcom/x1e001de-devkit.dts
+@@ -507,6 +507,7 @@ vreg_l12b_1p2: ldo12 {
+ 			regulator-min-microvolt = <1200000>;
+ 			regulator-max-microvolt = <1200000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
+ 		};
+ 
+ 		vreg_l13b_3p0: ldo13 {
+@@ -528,6 +529,7 @@ vreg_l15b_1p8: ldo15 {
+ 			regulator-min-microvolt = <1800000>;
+ 			regulator-max-microvolt = <1800000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
+ 		};
+ 
+ 		vreg_l16b_2p9: ldo16 {
+@@ -745,8 +747,8 @@ vreg_l1j_0p8: ldo1 {
+ 
+ 		vreg_l2j_1p2: ldo2 {
+ 			regulator-name = "vreg_l2j_1p2";
+-			regulator-min-microvolt = <1200000>;
+-			regulator-max-microvolt = <1200000>;
++			regulator-min-microvolt = <1256000>;
++			regulator-max-microvolt = <1256000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+ 		};
+ 
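Two patterns recur in the regulator changes on this and the following X1E boards: vreg_l2j_1p2 is raised from 1.2 V to 1.256 V, and several LDOs gain regulator-always-on. The always-on flag matters because the regulator core eventually disables supplies that nothing has claimed, so a rail that is needed but has no consumer must either be pinned on in DT, as here, or claimed by a driver through the consumer API. A minimal consumer-side sketch, assuming a hypothetical supply name "vdd":

#include <linux/device.h>
#include <linux/err.h>
#include <linux/regulator/consumer.h>

static int example_claim_supply(struct device *dev)
{
	struct regulator *vdd;

	vdd = devm_regulator_get(dev, "vdd");	/* hypothetical supply */
	if (IS_ERR(vdd))
		return PTR_ERR(vdd);

	/* Holding an enable reference keeps the rail powered without
	 * resorting to regulator-always-on in the devicetree. */
	return regulator_enable(vdd);
}
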
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
+index 53781f9b13af3e..f53067463b7601 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
+@@ -330,8 +330,8 @@ vreg_l1j_0p8: ldo1 {
+ 
+ 		vreg_l2j_1p2: ldo2 {
+ 			regulator-name = "vreg_l2j_1p2";
+-			regulator-min-microvolt = <1200000>;
+-			regulator-max-microvolt = <1200000>;
++			regulator-min-microvolt = <1256000>;
++			regulator-max-microvolt = <1256000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-dell-xps13-9345.dts b/arch/arm64/boot/dts/qcom/x1e80100-dell-xps13-9345.dts
+index 86e87f03b0ec61..90f588ed7d63d7 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-dell-xps13-9345.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-dell-xps13-9345.dts
+@@ -359,6 +359,7 @@ vreg_l12b_1p2: ldo12 {
+ 			regulator-min-microvolt = <1200000>;
+ 			regulator-max-microvolt = <1200000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
+ 		};
+ 
+ 		vreg_l13b_3p0: ldo13 {
+@@ -380,6 +381,7 @@ vreg_l15b_1p8: ldo15 {
+ 			regulator-min-microvolt = <1800000>;
+ 			regulator-max-microvolt = <1800000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
+ 		};
+ 
+ 		vreg_l17b_2p5: ldo17 {
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-hp-omnibook-x14.dts b/arch/arm64/boot/dts/qcom/x1e80100-hp-omnibook-x14.dts
+index cd860a246c450b..929da9ecddc47c 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-hp-omnibook-x14.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-hp-omnibook-x14.dts
+@@ -633,6 +633,7 @@ vreg_l12b_1p2: ldo12 {
+ 			regulator-min-microvolt = <1200000>;
+ 			regulator-max-microvolt = <1200000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
+ 		};
+ 
+ 		vreg_l13b_3p0: ldo13 {
+@@ -654,6 +655,7 @@ vreg_l15b_1p8: ldo15 {
+ 			regulator-min-microvolt = <1800000>;
+ 			regulator-max-microvolt = <1800000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
+ 		};
+ 
+ 		vreg_l16b_2p9: ldo16 {
+@@ -871,8 +873,8 @@ vreg_l1j_0p8: ldo1 {
+ 
+ 		vreg_l2j_1p2: ldo2 {
+ 			regulator-name = "vreg_l2j_1p2";
+-			regulator-min-microvolt = <1200000>;
+-			regulator-max-microvolt = <1200000>;
++			regulator-min-microvolt = <1256000>;
++			regulator-max-microvolt = <1256000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+ 		};
+ 
+@@ -1352,18 +1354,22 @@ &remoteproc_cdsp {
+ 	status = "okay";
+ };
+ 
++&smb2360_0 {
++	status = "okay";
++};
++
+ &smb2360_0_eusb2_repeater {
+ 	vdd18-supply = <&vreg_l3d_1p8>;
+ 	vdd3-supply = <&vreg_l2b_3p0>;
++};
+ 
++&smb2360_1 {
+ 	status = "okay";
+ };
+ 
+ &smb2360_1_eusb2_repeater {
+ 	vdd18-supply = <&vreg_l3d_1p8>;
+ 	vdd3-supply = <&vreg_l14b_3p0>;
+-
+-	status = "okay";
+ };
+ 
+ &swr0 {
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
+index a3d53f2ba2c3d0..744a66ae5bdc84 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
+@@ -290,6 +290,7 @@ vreg_l12b_1p2: ldo12 {
+ 			regulator-min-microvolt = <1200000>;
+ 			regulator-max-microvolt = <1200000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
+ 		};
+ 
+ 		vreg_l14b_3p0: ldo14 {
+@@ -304,8 +305,8 @@ vreg_l15b_1p8: ldo15 {
+ 			regulator-min-microvolt = <1800000>;
+ 			regulator-max-microvolt = <1800000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
+ 		};
+-
+ 	};
+ 
+ 	regulators-1 {
+@@ -508,8 +509,8 @@ vreg_l1j_0p8: ldo1 {
+ 
+ 		vreg_l2j_1p2: ldo2 {
+ 			regulator-name = "vreg_l2j_1p2";
+-			regulator-min-microvolt = <1200000>;
+-			regulator-max-microvolt = <1200000>;
++			regulator-min-microvolt = <1256000>;
++			regulator-max-microvolt = <1256000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+index ec594628304a9a..f06f4547884683 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+@@ -437,6 +437,7 @@ vreg_l12b_1p2: ldo12 {
+ 			regulator-min-microvolt = <1200000>;
+ 			regulator-max-microvolt = <1200000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
+ 		};
+ 
+ 		vreg_l13b_3p0: ldo13 {
+@@ -458,6 +459,7 @@ vreg_l15b_1p8: ldo15 {
+ 			regulator-min-microvolt = <1800000>;
+ 			regulator-max-microvolt = <1800000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
+ 		};
+ 
+ 		vreg_l16b_2p9: ldo16 {
+@@ -675,8 +677,8 @@ vreg_l1j_0p8: ldo1 {
+ 
+ 		vreg_l2j_1p2: ldo2 {
+ 			regulator-name = "vreg_l2j_1p2";
+-			regulator-min-microvolt = <1200000>;
+-			regulator-max-microvolt = <1200000>;
++			regulator-min-microvolt = <1256000>;
++			regulator-max-microvolt = <1256000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index 4936fa5b98ff7a..5aeecf711340d2 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -20,6 +20,7 @@
+ #include <dt-bindings/soc/qcom,gpr.h>
+ #include <dt-bindings/soc/qcom,rpmh-rsc.h>
+ #include <dt-bindings/sound/qcom,q6dsp-lpass-ports.h>
++#include <dt-bindings/thermal/thermal.h>
+ 
+ / {
+ 	interrupt-parent = <&intc>;
+@@ -3125,7 +3126,7 @@ pcie3: pcie@1bd0000 {
+ 			device_type = "pci";
+ 			compatible = "qcom,pcie-x1e80100";
+ 			reg = <0x0 0x01bd0000 0x0 0x3000>,
+-			      <0x0 0x78000000 0x0 0xf1d>,
++			      <0x0 0x78000000 0x0 0xf20>,
+ 			      <0x0 0x78000f40 0x0 0xa8>,
+ 			      <0x0 0x78001000 0x0 0x1000>,
+ 			      <0x0 0x78100000 0x0 0x100000>,
+@@ -8457,8 +8458,8 @@ trip-point0 {
+ 				};
+ 
+ 				aoss0-critical {
+-					temperature = <125000>;
+-					hysteresis = <0>;
++					temperature = <115000>;
++					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -8483,7 +8484,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -8509,7 +8510,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -8535,7 +8536,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -8561,7 +8562,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -8587,7 +8588,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -8613,7 +8614,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -8639,7 +8640,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -8665,7 +8666,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -8683,8 +8684,8 @@ trip-point0 {
+ 				};
+ 
+ 				cpuss2-critical {
+-					temperature = <125000>;
+-					hysteresis = <0>;
++					temperature = <115000>;
++					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -8701,8 +8702,8 @@ trip-point0 {
+ 				};
+ 
+ 				cpuss2-critical {
+-					temperature = <125000>;
+-					hysteresis = <0>;
++					temperature = <115000>;
++					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -8719,7 +8720,7 @@ trip-point0 {
+ 				};
+ 
+ 				mem-critical {
+-					temperature = <125000>;
++					temperature = <115000>;
+ 					hysteresis = <0>;
+ 					type = "critical";
+ 				};
+@@ -8727,15 +8728,19 @@ mem-critical {
+ 		};
+ 
+ 		video-thermal {
+-			polling-delay-passive = <250>;
+-
+ 			thermal-sensors = <&tsens0 12>;
+ 
+ 			trips {
+ 				trip-point0 {
+-					temperature = <125000>;
++					temperature = <90000>;
++					hysteresis = <2000>;
++					type = "hot";
++				};
++
++				video-critical {
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+-					type = "passive";
++					type = "critical";
+ 				};
+ 			};
+ 		};
+@@ -8751,8 +8756,8 @@ trip-point0 {
+ 				};
+ 
+ 				aoss0-critical {
+-					temperature = <125000>;
+-					hysteresis = <0>;
++					temperature = <115000>;
++					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -8777,7 +8782,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -8803,7 +8808,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -8829,7 +8834,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -8855,7 +8860,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -8881,7 +8886,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -8907,7 +8912,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -8933,7 +8938,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -8959,7 +8964,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -8977,8 +8982,8 @@ trip-point0 {
+ 				};
+ 
+ 				cpuss2-critical {
+-					temperature = <125000>;
+-					hysteresis = <0>;
++					temperature = <115000>;
++					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -8995,8 +9000,8 @@ trip-point0 {
+ 				};
+ 
+ 				cpuss2-critical {
+-					temperature = <125000>;
+-					hysteresis = <0>;
++					temperature = <115000>;
++					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -9013,8 +9018,8 @@ trip-point0 {
+ 				};
+ 
+ 				aoss0-critical {
+-					temperature = <125000>;
+-					hysteresis = <0>;
++					temperature = <115000>;
++					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -9039,7 +9044,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -9065,7 +9070,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -9091,7 +9096,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -9117,7 +9122,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -9143,7 +9148,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -9169,7 +9174,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -9195,7 +9200,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -9221,7 +9226,7 @@ trip-point1 {
+ 				};
+ 
+ 				cpu-critical {
+-					temperature = <110000>;
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -9239,8 +9244,8 @@ trip-point0 {
+ 				};
+ 
+ 				cpuss2-critical {
+-					temperature = <125000>;
+-					hysteresis = <0>;
++					temperature = <115000>;
++					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -9257,8 +9262,8 @@ trip-point0 {
+ 				};
+ 
+ 				cpuss2-critical {
+-					temperature = <125000>;
+-					hysteresis = <0>;
++					temperature = <115000>;
++					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -9275,8 +9280,8 @@ trip-point0 {
+ 				};
+ 
+ 				aoss0-critical {
+-					temperature = <125000>;
+-					hysteresis = <0>;
++					temperature = <115000>;
++					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -9293,8 +9298,8 @@ trip-point0 {
+ 				};
+ 
+ 				nsp0-critical {
+-					temperature = <125000>;
+-					hysteresis = <0>;
++					temperature = <115000>;
++					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -9311,8 +9316,8 @@ trip-point0 {
+ 				};
+ 
+ 				nsp1-critical {
+-					temperature = <125000>;
+-					hysteresis = <0>;
++					temperature = <115000>;
++					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -9329,8 +9334,8 @@ trip-point0 {
+ 				};
+ 
+ 				nsp2-critical {
+-					temperature = <125000>;
+-					hysteresis = <0>;
++					temperature = <115000>;
++					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -9347,33 +9352,34 @@ trip-point0 {
+ 				};
+ 
+ 				nsp3-critical {
+-					temperature = <125000>;
+-					hysteresis = <0>;
++					temperature = <115000>;
++					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+ 			};
+ 		};
+ 
+ 		gpuss-0-thermal {
+-			polling-delay-passive = <10>;
++			polling-delay-passive = <200>;
+ 
+ 			thermal-sensors = <&tsens3 5>;
+ 
+-			trips {
+-				trip-point0 {
+-					temperature = <85000>;
+-					hysteresis = <1000>;
+-					type = "passive";
++			cooling-maps {
++				map0 {
++					trip = <&gpuss0_alert0>;
++					cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ 				};
++			};
+ 
+-				trip-point1 {
+-					temperature = <90000>;
++			trips {
++				gpuss0_alert0: trip-point0 {
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+-					type = "hot";
++					type = "passive";
+ 				};
+ 
+-				trip-point2 {
+-					temperature = <125000>;
++				gpu-critical {
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -9381,25 +9387,26 @@ trip-point2 {
+ 		};
+ 
+ 		gpuss-1-thermal {
+-			polling-delay-passive = <10>;
++			polling-delay-passive = <200>;
+ 
+ 			thermal-sensors = <&tsens3 6>;
+ 
+-			trips {
+-				trip-point0 {
+-					temperature = <85000>;
+-					hysteresis = <1000>;
+-					type = "passive";
++			cooling-maps {
++				map0 {
++					trip = <&gpuss1_alert0>;
++					cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ 				};
++			};
+ 
+-				trip-point1 {
+-					temperature = <90000>;
++			trips {
++				gpuss1_alert0: trip-point0 {
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+-					type = "hot";
++					type = "passive";
+ 				};
+ 
+-				trip-point2 {
+-					temperature = <125000>;
++				gpu-critical {
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -9407,25 +9414,26 @@ trip-point2 {
+ 		};
+ 
+ 		gpuss-2-thermal {
+-			polling-delay-passive = <10>;
++			polling-delay-passive = <200>;
+ 
+ 			thermal-sensors = <&tsens3 7>;
+ 
+-			trips {
+-				trip-point0 {
+-					temperature = <85000>;
+-					hysteresis = <1000>;
+-					type = "passive";
++			cooling-maps {
++				map0 {
++					trip = <&gpuss2_alert0>;
++					cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ 				};
++			};
+ 
+-				trip-point1 {
+-					temperature = <90000>;
++			trips {
++				gpuss2_alert0: trip-point0 {
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+-					type = "hot";
++					type = "passive";
+ 				};
+ 
+-				trip-point2 {
+-					temperature = <125000>;
++				gpu-critical {
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -9433,25 +9441,26 @@ trip-point2 {
+ 		};
+ 
+ 		gpuss-3-thermal {
+-			polling-delay-passive = <10>;
++			polling-delay-passive = <200>;
+ 
+ 			thermal-sensors = <&tsens3 8>;
+ 
+-			trips {
+-				trip-point0 {
+-					temperature = <85000>;
+-					hysteresis = <1000>;
+-					type = "passive";
++			cooling-maps {
++				map0 {
++					trip = <&gpuss3_alert0>;
++					cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ 				};
++			};
+ 
+-				trip-point1 {
+-					temperature = <90000>;
++			trips {
++				gpuss3_alert0: trip-point0 {
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+-					type = "hot";
++					type = "passive";
+ 				};
+ 
+-				trip-point2 {
+-					temperature = <125000>;
++				gpu-critical {
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -9459,25 +9468,26 @@ trip-point2 {
+ 		};
+ 
+ 		gpuss-4-thermal {
+-			polling-delay-passive = <10>;
++			polling-delay-passive = <200>;
+ 
+ 			thermal-sensors = <&tsens3 9>;
+ 
+-			trips {
+-				trip-point0 {
+-					temperature = <85000>;
+-					hysteresis = <1000>;
+-					type = "passive";
++			cooling-maps {
++				map0 {
++					trip = <&gpuss4_alert0>;
++					cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ 				};
++			};
+ 
+-				trip-point1 {
+-					temperature = <90000>;
++			trips {
++				gpuss4_alert0: trip-point0 {
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+-					type = "hot";
++					type = "passive";
+ 				};
+ 
+-				trip-point2 {
+-					temperature = <125000>;
++				gpu-critical {
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -9485,25 +9495,26 @@ trip-point2 {
+ 		};
+ 
+ 		gpuss-5-thermal {
+-			polling-delay-passive = <10>;
++			polling-delay-passive = <200>;
+ 
+ 			thermal-sensors = <&tsens3 10>;
+ 
+-			trips {
+-				trip-point0 {
+-					temperature = <85000>;
+-					hysteresis = <1000>;
+-					type = "passive";
++			cooling-maps {
++				map0 {
++					trip = <&gpuss5_alert0>;
++					cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ 				};
++			};
+ 
+-				trip-point1 {
+-					temperature = <90000>;
++			trips {
++				gpuss5_alert0: trip-point0 {
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+-					type = "hot";
++					type = "passive";
+ 				};
+ 
+-				trip-point2 {
+-					temperature = <125000>;
++				gpu-critical {
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -9511,25 +9522,26 @@ trip-point2 {
+ 		};
+ 
+ 		gpuss-6-thermal {
+-			polling-delay-passive = <10>;
++			polling-delay-passive = <200>;
+ 
+ 			thermal-sensors = <&tsens3 11>;
+ 
+-			trips {
+-				trip-point0 {
+-					temperature = <85000>;
+-					hysteresis = <1000>;
+-					type = "passive";
++			cooling-maps {
++				map0 {
++					trip = <&gpuss6_alert0>;
++					cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ 				};
++			};
+ 
+-				trip-point1 {
+-					temperature = <90000>;
++			trips {
++				gpuss6_alert0: trip-point0 {
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+-					type = "hot";
++					type = "passive";
+ 				};
+ 
+-				trip-point2 {
+-					temperature = <125000>;
++				gpu-critical {
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -9537,25 +9549,26 @@ trip-point2 {
+ 		};
+ 
+ 		gpuss-7-thermal {
+-			polling-delay-passive = <10>;
++			polling-delay-passive = <200>;
+ 
+ 			thermal-sensors = <&tsens3 12>;
+ 
+-			trips {
+-				trip-point0 {
+-					temperature = <85000>;
+-					hysteresis = <1000>;
+-					type = "passive";
++			cooling-maps {
++				map0 {
++					trip = <&gpuss7_alert0>;
++					cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ 				};
++			};
+ 
+-				trip-point1 {
+-					temperature = <90000>;
++			trips {
++				gpuss7_alert0: trip-point0 {
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+-					type = "hot";
++					type = "passive";
+ 				};
+ 
+-				trip-point2 {
+-					temperature = <125000>;
++				gpu-critical {
++					temperature = <115000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+@@ -9574,7 +9587,7 @@ trip-point0 {
+ 
+ 				camera0-critical {
+ 					temperature = <115000>;
+-					hysteresis = <0>;
++					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -9592,7 +9605,7 @@ trip-point0 {
+ 
+ 				camera0-critical {
+ 					temperature = <115000>;
+-					hysteresis = <0>;
++					hysteresis = <1000>;
+ 					type = "critical";
+ 				};
+ 			};
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+index e00fbaa8acc168..314d9dfdba5732 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+@@ -60,16 +60,6 @@ vcc3v3_sys: regulator-vcc3v3-sys {
+ 		vin-supply = <&vcc5v0_sys>;
+ 	};
+ 
+-	vcc5v0_host: regulator-vcc5v0-host {
+-		compatible = "regulator-fixed";
+-		gpio = <&gpio4 RK_PA3 GPIO_ACTIVE_LOW>;
+-		pinctrl-names = "default";
+-		pinctrl-0 = <&vcc5v0_host_en>;
+-		regulator-name = "vcc5v0_host";
+-		regulator-always-on;
+-		vin-supply = <&vcc5v0_sys>;
+-	};
+-
+ 	vcc5v0_sys: regulator-vcc5v0-sys {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "vcc5v0_sys";
+@@ -527,10 +517,10 @@ pmic_int_l: pmic-int-l {
+ 		};
+ 	};
+ 
+-	usb2 {
+-		vcc5v0_host_en: vcc5v0-host-en {
++	usb {
++		cy3304_reset: cy3304-reset {
+ 			rockchip,pins =
+-			  <4 RK_PA3 RK_FUNC_GPIO &pcfg_pull_none>;
++			  <4 RK_PA3 RK_FUNC_GPIO &pcfg_output_high>;
+ 		};
+ 	};
+ 
+@@ -597,7 +587,6 @@ u2phy1_otg: otg-port {
+ 	};
+ 
+ 	u2phy1_host: host-port {
+-		phy-supply = <&vcc5v0_host>;
+ 		status = "okay";
+ 	};
+ };
+@@ -609,6 +598,29 @@ &usbdrd3_1 {
+ &usbdrd_dwc3_1 {
+ 	status = "okay";
+ 	dr_mode = "host";
++	pinctrl-names = "default";
++	pinctrl-0 = <&cy3304_reset>;
++	#address-cells = <1>;
++	#size-cells = <0>;
++
++	hub_2_0: hub@1 {
++		compatible = "usb4b4,6502", "usb4b4,6506";
++		reg = <1>;
++		peer-hub = <&hub_3_0>;
++		reset-gpios = <&gpio4 RK_PA3 GPIO_ACTIVE_HIGH>;
++		vdd-supply = <&vcc1v2_phy>;
++		vdd2-supply = <&vcc3v3_sys>;
++
++	};
++
++	hub_3_0: hub@2 {
++		compatible = "usb4b4,6500", "usb4b4,6504";
++		reg = <2>;
++		peer-hub = <&hub_2_0>;
++		reset-gpios = <&gpio4 RK_PA3 GPIO_ACTIVE_HIGH>;
++		vdd-supply = <&vcc1v2_phy>;
++		vdd2-supply = <&vcc3v3_sys>;
++	};
+ };
+ 
+ &usb_host1_ehci {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3576.dtsi b/arch/arm64/boot/dts/rockchip/rk3576.dtsi
+index ebb5fc8bb8b136..3824242f8ae88a 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3576.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3576.dtsi
+@@ -1364,6 +1364,7 @@ sfc1: spi@2a300000 {
+ 			interrupts = <GIC_SPI 255 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cru SCLK_FSPI1_X2>, <&cru HCLK_FSPI1>;
+ 			clock-names = "clk_sfc", "hclk_sfc";
++			power-domains = <&power RK3576_PD_SDGMAC>;
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			status = "disabled";
+@@ -1414,6 +1415,7 @@ sfc0: spi@2a340000 {
+ 			interrupts = <GIC_SPI 254 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cru SCLK_FSPI_X2>, <&cru HCLK_FSPI>;
+ 			clock-names = "clk_sfc", "hclk_sfc";
++			power-domains = <&power RK3576_PD_NVM>;
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			status = "disabled";
+diff --git a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+index 7d355aa73ea211..0c286f600296cd 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+@@ -552,8 +552,6 @@ sdhci0: mmc@fa10000 {
+ 		power-domains = <&k3_pds 57 TI_SCI_PD_EXCLUSIVE>;
+ 		clocks = <&k3_clks 57 5>, <&k3_clks 57 6>;
+ 		clock-names = "clk_ahb", "clk_xin";
+-		assigned-clocks = <&k3_clks 57 6>;
+-		assigned-clock-parents = <&k3_clks 57 8>;
+ 		bus-width = <8>;
+ 		mmc-ddr-1_8v;
+ 		mmc-hs200-1_8v;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
+index a1daba7b1fad5d..455ccc770f16a1 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
+@@ -575,8 +575,6 @@ sdhci0: mmc@fa10000 {
+ 		power-domains = <&k3_pds 57 TI_SCI_PD_EXCLUSIVE>;
+ 		clocks = <&k3_clks 57 5>, <&k3_clks 57 6>;
+ 		clock-names = "clk_ahb", "clk_xin";
+-		assigned-clocks = <&k3_clks 57 6>;
+-		assigned-clock-parents = <&k3_clks 57 8>;
+ 		bus-width = <8>;
+ 		mmc-hs200-1_8v;
+ 		ti,clkbuf-sel = <0x7>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-main.dtsi
+index 6e3beb5c2e010e..f9b5c97518d68f 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-main.dtsi
+@@ -564,8 +564,6 @@ sdhci0: mmc@fa10000 {
+ 		power-domains = <&k3_pds 57 TI_SCI_PD_EXCLUSIVE>;
+ 		clocks = <&k3_clks 57 1>, <&k3_clks 57 2>;
+ 		clock-names = "clk_ahb", "clk_xin";
+-		assigned-clocks = <&k3_clks 57 2>;
+-		assigned-clock-parents = <&k3_clks 57 4>;
+ 		bus-width = <8>;
+ 		mmc-ddr-1_8v;
+ 		mmc-hs200-1_8v;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-imx219.dtso b/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-imx219.dtso
+index 76ca02127f95ff..dd090813a32d61 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-imx219.dtso
++++ b/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-imx219.dtso
+@@ -22,7 +22,7 @@ &main_i2c2 {
+ 	#size-cells = <0>;
+ 	status = "okay";
+ 
+-	i2c-switch@71 {
++	i2c-mux@71 {
+ 		compatible = "nxp,pca9543";
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+@@ -39,7 +39,6 @@ ov5640: camera@10 {
+ 				reg = <0x10>;
+ 
+ 				clocks = <&clk_imx219_fixed>;
+-				clock-names = "xclk";
+ 
+ 				reset-gpios = <&exp1 13 GPIO_ACTIVE_HIGH>;
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-ov5640.dtso b/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-ov5640.dtso
+index ccc7f5e43184fa..7fc7c95f5cd578 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-ov5640.dtso
++++ b/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-ov5640.dtso
+@@ -22,7 +22,7 @@ &main_i2c2 {
+ 	#size-cells = <0>;
+ 	status = "okay";
+ 
+-	i2c-switch@71 {
++	i2c-mux@71 {
+ 		compatible = "nxp,pca9543";
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-tevi-ov5640.dtso b/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-tevi-ov5640.dtso
+index 4eaf9d757dd0ad..b6bfdfbbdd984a 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-tevi-ov5640.dtso
++++ b/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-tevi-ov5640.dtso
+@@ -22,7 +22,7 @@ &main_i2c2 {
+ 	#size-cells = <0>;
+ 	status = "okay";
+ 
+-	i2c-switch@71 {
++	i2c-mux@71 {
+ 		compatible = "nxp,pca9543";
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+index 94a812a1355baf..5ebf7ada6e4851 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+@@ -449,6 +449,8 @@ sdhci0: mmc@4f80000 {
+ 		ti,otap-del-sel-mmc-hs = <0x0>;
+ 		ti,otap-del-sel-ddr52 = <0x5>;
+ 		ti,otap-del-sel-hs200 = <0x5>;
++		ti,itap-del-sel-legacy = <0xa>;
++		ti,itap-del-sel-mmc-hs = <0x1>;
+ 		ti,itap-del-sel-ddr52 = <0x0>;
+ 		dma-coherent;
+ 		status = "disabled";
+diff --git a/arch/arm64/boot/dts/ti/k3-am68-sk-base-board.dts b/arch/arm64/boot/dts/ti/k3-am68-sk-base-board.dts
+index 11522b36e0cece..5fa70a874d7b4d 100644
+--- a/arch/arm64/boot/dts/ti/k3-am68-sk-base-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-am68-sk-base-board.dts
+@@ -44,6 +44,17 @@ vusb_main: regulator-vusb-main5v0 {
+ 		regulator-boot-on;
+ 	};
+ 
++	vsys_5v0: regulator-vsys5v0 {
++		/* Output of LM61460 */
++		compatible = "regulator-fixed";
++		regulator-name = "vsys_5v0";
++		regulator-min-microvolt = <5000000>;
++		regulator-max-microvolt = <5000000>;
++		vin-supply = <&vusb_main>;
++		regulator-always-on;
++		regulator-boot-on;
++	};
++
+ 	vsys_3v3: regulator-vsys3v3 {
+ 		/* Output of LM5141 */
+ 		compatible = "regulator-fixed";
+@@ -76,7 +87,7 @@ vdd_sd_dv: regulator-tlv71033 {
+ 		regulator-min-microvolt = <1800000>;
+ 		regulator-max-microvolt = <3300000>;
+ 		regulator-boot-on;
+-		vin-supply = <&vsys_3v3>;
++		vin-supply = <&vsys_5v0>;
+ 		gpios = <&main_gpio0 49 GPIO_ACTIVE_HIGH>;
+ 		states = <1800000 0x0>,
+ 			 <3300000 0x1>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-sk-csi2-dual-imx219.dtso b/arch/arm64/boot/dts/ti/k3-j721e-sk-csi2-dual-imx219.dtso
+index 47bb5480b5b006..4eb3cffab0321d 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-sk-csi2-dual-imx219.dtso
++++ b/arch/arm64/boot/dts/ti/k3-j721e-sk-csi2-dual-imx219.dtso
+@@ -19,6 +19,33 @@ clk_imx219_fixed: imx219-xclk {
+ 		#clock-cells = <0>;
+ 		clock-frequency = <24000000>;
+ 	};
++
++	reg_2p8v: regulator-2p8v {
++		compatible = "regulator-fixed";
++		regulator-name = "2P8V";
++		regulator-min-microvolt = <2800000>;
++		regulator-max-microvolt = <2800000>;
++		vin-supply = <&vdd_sd_dv>;
++		regulator-always-on;
++	};
++
++	reg_1p8v: regulator-1p8v {
++		compatible = "regulator-fixed";
++		regulator-name = "1P8V";
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <1800000>;
++		vin-supply = <&vdd_sd_dv>;
++		regulator-always-on;
++	};
++
++	reg_1p2v: regulator-1p2v {
++		compatible = "regulator-fixed";
++		regulator-name = "1P2V";
++		regulator-min-microvolt = <1200000>;
++		regulator-max-microvolt = <1200000>;
++		vin-supply = <&vdd_sd_dv>;
++		regulator-always-on;
++	};
+ };
+ 
+ &csi_mux {
+@@ -34,7 +61,9 @@ imx219_0: imx219-0@10 {
+ 		reg = <0x10>;
+ 
+ 		clocks = <&clk_imx219_fixed>;
+-		clock-names = "xclk";
++		VANA-supply = <&reg_2p8v>;
++		VDIG-supply = <&reg_1p8v>;
++		VDDL-supply = <&reg_1p2v>;
+ 
+ 		port {
+ 			csi2_cam0: endpoint {
+@@ -56,7 +85,9 @@ imx219_1: imx219-1@10 {
+ 		reg = <0x10>;
+ 
+ 		clocks = <&clk_imx219_fixed>;
+-		clock-names = "xclk";
++		VANA-supply = <&reg_2p8v>;
++		VDIG-supply = <&reg_1p8v>;
++		VDDL-supply = <&reg_1p2v>;
+ 
+ 		port {
+ 			csi2_cam1: endpoint {
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-sk.dts b/arch/arm64/boot/dts/ti/k3-j721e-sk.dts
+index 440ef57be2943c..ffef3d1cfd5532 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-sk.dts
++++ b/arch/arm64/boot/dts/ti/k3-j721e-sk.dts
+@@ -184,6 +184,17 @@ vsys_3v3: fixedregulator-vsys3v3 {
+ 		regulator-boot-on;
+ 	};
+ 
++	vsys_5v0: fixedregulator-vsys5v0 {
++		/* Output of LM61460 */
++		compatible = "regulator-fixed";
++		regulator-name = "vsys_5v0";
++		regulator-min-microvolt = <5000000>;
++		regulator-max-microvolt = <5000000>;
++		vin-supply = <&vusb_main>;
++		regulator-always-on;
++		regulator-boot-on;
++	};
++
+ 	vdd_mmc1: fixedregulator-sd {
+ 		compatible = "regulator-fixed";
+ 		pinctrl-names = "default";
+@@ -211,6 +222,20 @@ vdd_sd_dv_alt: gpio-regulator-tps659411 {
+ 			 <3300000 0x1>;
+ 	};
+ 
++	vdd_sd_dv: gpio-regulator-TLV71033 {
++		compatible = "regulator-gpio";
++		pinctrl-names = "default";
++		pinctrl-0 = <&vdd_sd_dv_pins_default>;
++		regulator-name = "tlv71033";
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <3300000>;
++		regulator-boot-on;
++		vin-supply = <&vsys_5v0>;
++		gpios = <&main_gpio0 118 GPIO_ACTIVE_HIGH>;
++		states = <1800000 0x0>,
++			 <3300000 0x1>;
++	};
++
+ 	transceiver1: can-phy1 {
+ 		compatible = "ti,tcan1042";
+ 		#phy-cells = <0>;
+@@ -613,6 +638,12 @@ J721E_WKUP_IOPAD(0xd4, PIN_OUTPUT, 7) /* (G26) WKUP_GPIO0_9 */
+ 		>;
+ 	};
+ 
++	vdd_sd_dv_pins_default: vdd-sd-dv-default-pins {
++		pinctrl-single,pins = <
++			J721E_IOPAD(0x1dc, PIN_OUTPUT, 7) /* (Y1) SPI1_CLK.GPIO0_118 */
++		>;
++	};
++
+ 	wkup_uart0_pins_default: wkup-uart0-default-pins {
+ 		pinctrl-single,pins = <
+ 			J721E_WKUP_IOPAD(0xa0, PIN_INPUT, 0) /* (J29) WKUP_UART0_RXD */
+diff --git a/arch/arm64/boot/dts/ti/k3-j722s-evm.dts b/arch/arm64/boot/dts/ti/k3-j722s-evm.dts
+index 2127316f36a34b..0bf2e182166244 100644
+--- a/arch/arm64/boot/dts/ti/k3-j722s-evm.dts
++++ b/arch/arm64/boot/dts/ti/k3-j722s-evm.dts
+@@ -843,6 +843,10 @@ &serdes_ln_ctrl {
+ 		      <J722S_SERDES1_LANE0_PCIE0_LANE0>;
+ };
+ 
++&serdes_wiz0 {
++	status = "okay";
++};
++
+ &serdes0 {
+ 	status = "okay";
+ 	serdes0_usb_link: phy@0 {
+@@ -854,6 +858,10 @@ serdes0_usb_link: phy@0 {
+ 	};
+ };
+ 
++&serdes_wiz1 {
++	status = "okay";
++};
++
+ &serdes1 {
+ 	status = "okay";
+ 	serdes1_pcie_link: phy@0 {
+diff --git a/arch/arm64/boot/dts/ti/k3-j722s-main.dtsi b/arch/arm64/boot/dts/ti/k3-j722s-main.dtsi
+index 6850f50530f12b..beda9e40e931b4 100644
+--- a/arch/arm64/boot/dts/ti/k3-j722s-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j722s-main.dtsi
+@@ -32,6 +32,8 @@ serdes_wiz0: phy@f000000 {
+ 		assigned-clocks = <&k3_clks 279 1>;
+ 		assigned-clock-parents = <&k3_clks 279 5>;
+ 
++		status = "disabled";
++
+ 		serdes0: serdes@f000000 {
+ 			compatible = "ti,j721e-serdes-10g";
+ 			reg = <0x0f000000 0x00010000>;
+@@ -70,6 +72,8 @@ serdes_wiz1: phy@f010000 {
+ 		assigned-clocks = <&k3_clks 280 1>;
+ 		assigned-clock-parents = <&k3_clks 280 5>;
+ 
++		status = "disabled";
++
+ 		serdes1: serdes@f010000 {
+ 			compatible = "ti,j721e-serdes-10g";
+ 			reg = <0x0f010000 0x00010000>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j784s4-j742s2-main-common.dtsi b/arch/arm64/boot/dts/ti/k3-j784s4-j742s2-main-common.dtsi
+index 1944616ab3579a..1fc0a11c5ab4a9 100644
+--- a/arch/arm64/boot/dts/ti/k3-j784s4-j742s2-main-common.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j784s4-j742s2-main-common.dtsi
+@@ -77,7 +77,7 @@ pcie1_ctrl: pcie1-ctrl@4074 {
+ 
+ 		serdes_ln_ctrl: mux-controller@4080 {
+ 			compatible = "reg-mux";
+-			reg = <0x00004080 0x30>;
++			reg = <0x00004080 0x50>;
+ 			#mux-control-cells = <1>;
+ 			mux-reg-masks = <0x0 0x3>, <0x4 0x3>, /* SERDES0 lane0/1 select */
+ 					<0x8 0x3>, <0xc 0x3>, /* SERDES0 lane2/3 select */
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 9d728800a862e6..5bc2fc969494f5 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -277,6 +277,8 @@ int iommu_device_register(struct iommu_device *iommu,
+ 		err = bus_iommu_probe(iommu_buses[i]);
+ 	if (err)
+ 		iommu_device_unregister(iommu);
++	else
++		WRITE_ONCE(iommu->ready, true);
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(iommu_device_register);
+@@ -422,13 +424,15 @@ static int iommu_init_device(struct device *dev)
+ 	 * is buried in the bus dma_configure path. Properly unpicking that is
+ 	 * still a big job, so for now just invoke the whole thing. The device
+ 	 * already having a driver bound means dma_configure has already run and
+-	 * either found no IOMMU to wait for, or we're in its replay call right
+-	 * now, so either way there's no point calling it again.
++	 * found no IOMMU to wait for, so there's no point calling it again.
+ 	 */
+-	if (!dev->driver && dev->bus->dma_configure) {
++	if (!dev->iommu->fwspec && !dev->driver && dev->bus->dma_configure) {
+ 		mutex_unlock(&iommu_probe_device_lock);
+ 		dev->bus->dma_configure(dev);
+ 		mutex_lock(&iommu_probe_device_lock);
++		/* If another instance finished the job for us, skip it */
++		if (!dev->iommu || dev->iommu_group)
++			return -ENODEV;
+ 	}
+ 	/*
+ 	 * At this point, relevant devices either now have a fwspec which will
+@@ -2830,31 +2834,39 @@ bool iommu_default_passthrough(void)
+ }
+ EXPORT_SYMBOL_GPL(iommu_default_passthrough);
+ 
+-const struct iommu_ops *iommu_ops_from_fwnode(const struct fwnode_handle *fwnode)
++static const struct iommu_device *iommu_from_fwnode(const struct fwnode_handle *fwnode)
+ {
+-	const struct iommu_ops *ops = NULL;
+-	struct iommu_device *iommu;
++	const struct iommu_device *iommu, *ret = NULL;
+ 
+ 	spin_lock(&iommu_device_lock);
+ 	list_for_each_entry(iommu, &iommu_device_list, list)
+ 		if (iommu->fwnode == fwnode) {
+-			ops = iommu->ops;
++			ret = iommu;
+ 			break;
+ 		}
+ 	spin_unlock(&iommu_device_lock);
+-	return ops;
++	return ret;
++}
++
++const struct iommu_ops *iommu_ops_from_fwnode(const struct fwnode_handle *fwnode)
++{
++	const struct iommu_device *iommu = iommu_from_fwnode(fwnode);
++
++	return iommu ? iommu->ops : NULL;
+ }
+ 
+ int iommu_fwspec_init(struct device *dev, struct fwnode_handle *iommu_fwnode)
+ {
+-	const struct iommu_ops *ops = iommu_ops_from_fwnode(iommu_fwnode);
++	const struct iommu_device *iommu = iommu_from_fwnode(iommu_fwnode);
+ 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+ 
+-	if (!ops)
++	if (!iommu)
+ 		return driver_deferred_probe_check_state(dev);
++	if (!dev->iommu && !READ_ONCE(iommu->ready))
++		return -EPROBE_DEFER;
+ 
+ 	if (fwspec)
+-		return ops == iommu_fwspec_ops(fwspec) ? 0 : -EINVAL;
++		return iommu->ops == iommu_fwspec_ops(fwspec) ? 0 : -EINVAL;
+ 
+ 	if (!dev_iommu_get(dev))
+ 		return -ENOMEM;
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index d4fe30ff225b6a..403850b1040d3d 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -727,8 +727,8 @@ static umode_t arm_cmn_event_attr_is_visible(struct kobject *kobj,
+ 
+ 		if ((chan == 5 && cmn->rsp_vc_num < 2) ||
+ 		    (chan == 6 && cmn->dat_vc_num < 2) ||
+-		    (chan == 7 && cmn->snp_vc_num < 2) ||
+-		    (chan == 8 && cmn->req_vc_num < 2))
++		    (chan == 7 && cmn->req_vc_num < 2) ||
++		    (chan == 8 && cmn->snp_vc_num < 2))
+ 			return 0;
+ 	}
+ 
+@@ -882,8 +882,8 @@ static umode_t arm_cmn_event_attr_is_visible(struct kobject *kobj,
+ 	_CMN_EVENT_XP(pub_##_name, (_event) | (4 << 5)),	\
+ 	_CMN_EVENT_XP(rsp2_##_name, (_event) | (5 << 5)),	\
+ 	_CMN_EVENT_XP(dat2_##_name, (_event) | (6 << 5)),	\
+-	_CMN_EVENT_XP(snp2_##_name, (_event) | (7 << 5)),	\
+-	_CMN_EVENT_XP(req2_##_name, (_event) | (8 << 5))
++	_CMN_EVENT_XP(req2_##_name, (_event) | (7 << 5)),	\
++	_CMN_EVENT_XP(snp2_##_name, (_event) | (8 << 5))
+ 
+ #define CMN_EVENT_XP_DAT(_name, _event)				\
+ 	_CMN_EVENT_XP_PORT(dat_##_name, (_event) | (3 << 5)),	\
+@@ -2558,6 +2558,7 @@ static int arm_cmn_probe(struct platform_device *pdev)
+ 
+ 	cmn->dev = &pdev->dev;
+ 	cmn->part = (unsigned long)device_get_match_data(cmn->dev);
++	cmn->cpu = cpumask_local_spread(0, dev_to_node(cmn->dev));
+ 	platform_set_drvdata(pdev, cmn);
+ 
+ 	if (cmn->part == PART_CMN600 && has_acpi_companion(cmn->dev)) {
+@@ -2585,7 +2586,6 @@ static int arm_cmn_probe(struct platform_device *pdev)
+ 	if (err)
+ 		return err;
+ 
+-	cmn->cpu = cpumask_local_spread(0, dev_to_node(cmn->dev));
+ 	cmn->pmu = (struct pmu) {
+ 		.module = THIS_MODULE,
+ 		.parent = cmn->dev,
+@@ -2651,6 +2651,7 @@ static const struct acpi_device_id arm_cmn_acpi_match[] = {
+ 	{ "ARMHC600", PART_CMN600 },
+ 	{ "ARMHC650" },
+ 	{ "ARMHC700" },
++	{ "ARMHC003" },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(acpi, arm_cmn_acpi_match);
+diff --git a/fs/coredump.c b/fs/coredump.c
+index c33c177a701b3d..d740a04112663c 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -43,6 +43,8 @@
+ #include <linux/timekeeping.h>
+ #include <linux/sysctl.h>
+ #include <linux/elf.h>
++#include <linux/pidfs.h>
++#include <uapi/linux/pidfd.h>
+ 
+ #include <linux/uaccess.h>
+ #include <asm/mmu_context.h>
+@@ -60,6 +62,12 @@ static void free_vma_snapshot(struct coredump_params *cprm);
+ #define CORE_FILE_NOTE_SIZE_DEFAULT (4*1024*1024)
+ /* Define a reasonable max cap */
+ #define CORE_FILE_NOTE_SIZE_MAX (16*1024*1024)
++/*
++ * File descriptor number for the pidfd for the thread-group leader of
++ * the coredumping task installed into the usermode helper's file
++ * descriptor table.
++ */
++#define COREDUMP_PIDFD_NUMBER 3
+ 
+ static int core_uses_pid;
+ static unsigned int core_pipe_limit;
+@@ -339,6 +347,27 @@ static int format_corename(struct core_name *cn, struct coredump_params *cprm,
+ 			case 'C':
+ 				err = cn_printf(cn, "%d", cprm->cpu);
+ 				break;
++			/* pidfd number */
++			case 'F': {
++				/*
++				 * Installing a pidfd only makes sense if
++				 * we actually spawn a usermode helper.
++				 */
++				if (!ispipe)
++					break;
++
++				/*
++				 * Note that we'll install a pidfd for the
++				 * thread-group leader. We know that task
++				 * linkage hasn't been removed yet and even if
++				 * this @current isn't the actual thread-group
++				 * leader we know that the thread-group leader
++				 * cannot be reaped until @current has exited.
++				 */
++				cprm->pid = task_tgid(current);
++				err = cn_printf(cn, "%d", COREDUMP_PIDFD_NUMBER);
++				break;
++			}
+ 			default:
+ 				break;
+ 			}
+@@ -493,7 +522,7 @@ static void wait_for_dump_helpers(struct file *file)
+ }
+ 
+ /*
+- * umh_pipe_setup
++ * umh_coredump_setup
+  * helper function to customize the process used
+  * to collect the core in userspace.  Specifically
+  * it sets up a pipe and installs it as fd 0 (stdin)
+@@ -503,11 +532,32 @@ static void wait_for_dump_helpers(struct file *file)
+  * is a special value that we use to trap recursive
+  * core dumps
+  */
+-static int umh_pipe_setup(struct subprocess_info *info, struct cred *new)
++static int umh_coredump_setup(struct subprocess_info *info, struct cred *new)
+ {
+ 	struct file *files[2];
+ 	struct coredump_params *cp = (struct coredump_params *)info->data;
+-	int err = create_pipe_files(files, 0);
++	int err;
++
++	if (cp->pid) {
++		struct file *pidfs_file __free(fput) = NULL;
++
++		pidfs_file = pidfs_alloc_file(cp->pid, 0);
++		if (IS_ERR(pidfs_file))
++			return PTR_ERR(pidfs_file);
++
++		/*
++		 * Usermode helpers are children of either
++		 * system_unbound_wq or of kthreadd, so we know that
++		 * we're starting off with a clean file descriptor
++		 * table and should always be able to use
++		 * COREDUMP_PIDFD_NUMBER as our file descriptor value.
++		 */
++		err = replace_fd(COREDUMP_PIDFD_NUMBER, pidfs_file, 0);
++		if (err < 0)
++			return err;
++	}
++
++	err = create_pipe_files(files, 0);
+ 	if (err)
+ 		return err;
+ 
+@@ -515,10 +565,13 @@ static int umh_pipe_setup(struct subprocess_info *info, struct cred *new)
+ 
+ 	err = replace_fd(0, files[0], 0);
+ 	fput(files[0]);
++	if (err < 0)
++		return err;
++
+ 	/* and disallow core files too */
+ 	current->signal->rlim[RLIMIT_CORE] = (struct rlimit){1, 1};
+ 
+-	return err;
++	return 0;
+ }
+ 
+ void do_coredump(const kernel_siginfo_t *siginfo)
+@@ -593,7 +646,7 @@ void do_coredump(const kernel_siginfo_t *siginfo)
+ 		}
+ 
+ 		if (cprm.limit == 1) {
+-			/* See umh_pipe_setup() which sets RLIMIT_CORE = 1.
++			/* See umh_coredump_setup() which sets RLIMIT_CORE = 1.
+ 			 *
+ 			 * Normally core limits are irrelevant to pipes, since
+ 			 * we're not writing to the file system, but we use
+@@ -632,7 +685,7 @@ void do_coredump(const kernel_siginfo_t *siginfo)
+ 		retval = -ENOMEM;
+ 		sub_info = call_usermodehelper_setup(helper_argv[0],
+ 						helper_argv, NULL, GFP_KERNEL,
+-						umh_pipe_setup, NULL, &cprm);
++						umh_coredump_setup, NULL, &cprm);
+ 		if (sub_info)
+ 			retval = call_usermodehelper_exec(sub_info,
+ 							  UMH_WAIT_EXEC);
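
To make the new plumbing concrete, here is a minimal userspace sketch of a
pipe-mode collector consuming the "%F" specifier added above. The helper
path, output path, and core_pattern line are hypothetical; what the patch
guarantees is that the helper receives the core image on stdin and a pidfd
for the dumping task's thread-group leader on fd 3, with "%F" expanding to
that fd number ("3") in the helper's argument list:

/* Hypothetical helper, registered via e.g.:
 *   kernel.core_pattern = "|/usr/local/sbin/core-collector %F"
 * argv[1] arrives as "3", the pidfd installed at COREDUMP_PIDFD_NUMBER. */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	FILE *out = fopen("/var/tmp/core.out", "w");	/* illustrative path */
	char buf[65536];
	ssize_t n;

	if (!out)
		return 1;

	/* Drain the core image from the pipe installed as stdin (fd 0). */
	while ((n = read(0, buf, sizeof(buf))) > 0)
		fwrite(buf, 1, n, out);
	fclose(out);

	/* fd 3 is a pidfd: poll() reports it readable once the task has
	 * exited, so the helper can wait for the crashed task without
	 * racing against pid reuse. */
	struct pollfd pfd = { .fd = 3, .events = POLLIN };
	poll(&pfd, 1, -1);
	return 0;
}

Because the helper holds a pidfd rather than a numeric pid, it can also
signal or query the crashed task without the usual pid-reuse races.
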
+diff --git a/fs/pidfs.c b/fs/pidfs.c
+index d64a4cbeb0dafa..50e69a9e104a60 100644
+--- a/fs/pidfs.c
++++ b/fs/pidfs.c
+@@ -888,6 +888,7 @@ struct file *pidfs_alloc_file(struct pid *pid, unsigned int flags)
+ 		return ERR_PTR(-ESRCH);
+ 
+ 	flags &= ~PIDFD_CLONE;
++	flags |= O_RDWR;
+ 	pidfd_file = dentry_open(&path, flags, current_cred());
+ 	/* Raise PIDFD_THREAD explicitly as do_dentry_open() strips it. */
+ 	if (!IS_ERR(pidfd_file))
+diff --git a/include/linux/coredump.h b/include/linux/coredump.h
+index 77e6e195d1d687..76e41805b92de9 100644
+--- a/include/linux/coredump.h
++++ b/include/linux/coredump.h
+@@ -28,6 +28,7 @@ struct coredump_params {
+ 	int vma_count;
+ 	size_t vma_data_size;
+ 	struct core_vma_metadata *vma_meta;
++	struct pid *pid;
+ };
+ 
+ extern unsigned int core_file_note_size_limit;
+diff --git a/include/linux/iommu.h b/include/linux/iommu.h
+index 3a8d35d41fdad5..4273871845eebb 100644
+--- a/include/linux/iommu.h
++++ b/include/linux/iommu.h
+@@ -750,6 +750,7 @@ struct iommu_domain_ops {
+  * @dev: struct device for sysfs handling
+  * @singleton_group: Used internally for drivers that have only one group
+  * @max_pasids: number of supported PASIDs
++ * @ready: set once iommu_device_register() has completed successfully
+  */
+ struct iommu_device {
+ 	struct list_head list;
+@@ -758,6 +759,7 @@ struct iommu_device {
+ 	struct device *dev;
+ 	struct iommu_group *singleton_group;
+ 	u32 max_pasids;
++	bool ready;
+ };
+ 
+ /**
+diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c
+index 7986145a527cbe..5a7745170e84b1 100644
+--- a/net/sched/sch_hfsc.c
++++ b/net/sched/sch_hfsc.c
+@@ -175,6 +175,11 @@ struct hfsc_sched {
+ 
+ #define	HT_INFINITY	0xffffffffffffffffULL	/* infinite time value */
+ 
++static bool cl_in_el_or_vttree(struct hfsc_class *cl)
++{
++	return ((cl->cl_flags & HFSC_FSC) && cl->cl_nactive) ||
++		((cl->cl_flags & HFSC_RSC) && !RB_EMPTY_NODE(&cl->el_node));
++}
+ 
+ /*
+  * eligible tree holds backlogged classes being sorted by their eligible times.
+@@ -1040,6 +1045,8 @@ hfsc_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ 	if (cl == NULL)
+ 		return -ENOBUFS;
+ 
++	RB_CLEAR_NODE(&cl->el_node);
++
+ 	err = tcf_block_get(&cl->block, &cl->filter_list, sch, extack);
+ 	if (err) {
+ 		kfree(cl);
+@@ -1572,7 +1579,7 @@ hfsc_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
+ 	sch->qstats.backlog += len;
+ 	sch->q.qlen++;
+ 
+-	if (first && !cl->cl_nactive) {
++	if (first && !cl_in_el_or_vttree(cl)) {
+ 		if (cl->cl_flags & HFSC_RSC)
+ 			init_ed(cl, len);
+ 		if (cl->cl_flags & HFSC_FSC)


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-06-05 19:13 Mike Pagano
  0 siblings, 0 replies; 19+ messages in thread
From: Mike Pagano @ 2025-06-05 19:13 UTC (permalink / raw
  To: gentoo-commits

commit:     8e6513be962044e2cb5eb27aa86b6281090e747b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jun  5 19:12:21 2025 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jun  5 19:12:21 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8e6513be

Revert: "drm/amd/display: more liberal vmin/vmax update for freesync"

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                  |  4 +++
 2700_amd-revert-vmin-vmax-for-freesync.patch | 48 ++++++++++++++++++++++++++++
 2 files changed, 52 insertions(+)

diff --git a/0000_README b/0000_README
index 61b05f13..4206c43f 100644
--- a/0000_README
+++ b/0000_README
@@ -62,6 +62,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Patch:  2700_amd-revert-vmin-vmax-for-freesync.patch
+From:   https://github.com/archlinux/linux/commit/30dd9945fd79d33a049da4e52984c9bc07450de2.patch
+Desc:   Revert "drm/amd/display: more liberal vmin/vmax update for freesync"
+
 Patch:  2901_permit-menuconfig-sorting.patch
 From:   https://lore.kernel.org/
 Desc:   menuconfig: Allow sorting the entries alphabetically

diff --git a/2700_amd-revert-vmin-vmax-for-freesync.patch b/2700_amd-revert-vmin-vmax-for-freesync.patch
new file mode 100644
index 00000000..b0b80885
--- /dev/null
+++ b/2700_amd-revert-vmin-vmax-for-freesync.patch
@@ -0,0 +1,48 @@
+From 30dd9945fd79d33a049da4e52984c9bc07450de2 Mon Sep 17 00:00:00 2001
+From: Aurabindo Pillai <aurabindo.pillai@amd.com>
+Date: Wed, 21 May 2025 16:10:57 -0400
+Subject: [PATCH] Revert "drm/amd/display: more liberal vmin/vmax update for
+ freesync"
+
+This reverts commit 219898d29c438d8ec34a5560fac4ea8f6b8d4f20 since it
+causes regressions on certain configs. Revert until the issue can be
+isolated and debugged.
+
+Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/4238
+Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
+Cherry-picked-for: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/issues/139
+---
+ .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c    | 16 +++++-----------
+ 1 file changed, 5 insertions(+), 11 deletions(-)
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 2dbd71fbae28a5..e4f0517f0f2b23 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -668,21 +668,15 @@ static void dm_crtc_high_irq(void *interrupt_params)
+ 	spin_lock_irqsave(&adev_to_drm(adev)->event_lock, flags);
+ 
+ 	if (acrtc->dm_irq_params.stream &&
+-		acrtc->dm_irq_params.vrr_params.supported) {
+-		bool replay_en = acrtc->dm_irq_params.stream->link->replay_settings.replay_feature_enabled;
+-		bool psr_en = acrtc->dm_irq_params.stream->link->psr_settings.psr_feature_enabled;
+-		bool fs_active_var_en = acrtc->dm_irq_params.freesync_config.state == VRR_STATE_ACTIVE_VARIABLE;
+-
++	    acrtc->dm_irq_params.vrr_params.supported &&
++	    acrtc->dm_irq_params.freesync_config.state ==
++		    VRR_STATE_ACTIVE_VARIABLE) {
+ 		mod_freesync_handle_v_update(adev->dm.freesync_module,
+ 					     acrtc->dm_irq_params.stream,
+ 					     &acrtc->dm_irq_params.vrr_params);
+ 
+-		/* update vmin_vmax only if freesync is enabled, or only if PSR and REPLAY are disabled */
+-		if (fs_active_var_en || (!fs_active_var_en && !replay_en && !psr_en)) {
+-			dc_stream_adjust_vmin_vmax(adev->dm.dc,
+-					acrtc->dm_irq_params.stream,
+-					&acrtc->dm_irq_params.vrr_params.adjust);
+-		}
++		dc_stream_adjust_vmin_vmax(adev->dm.dc, acrtc->dm_irq_params.stream,
++					   &acrtc->dm_irq_params.vrr_params.adjust);
+ 	}
+ 
+ 	/*


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-06-10 12:14 Mike Pagano
  0 siblings, 0 replies; 19+ messages in thread
From: Mike Pagano @ 2025-06-10 12:14 UTC (permalink / raw
  To: gentoo-commits

commit:     769ff0a468b182edbbcdbbc1bbd1cf69da4b0e35
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 10 12:13:52 2025 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun 10 12:13:52 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=769ff0a4

Linux patch 6.15.2

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1001_linux-6.15.2.patch | 1328 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1332 insertions(+)

diff --git a/0000_README b/0000_README
index 4206c43f..9ef52bb6 100644
--- a/0000_README
+++ b/0000_README
@@ -46,6 +46,10 @@ Patch:  1000_linux-6.15.1.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.15.1
 
+Patch:  1001_linux-6.15.2.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.15.2
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1001_linux-6.15.2.patch b/1001_linux-6.15.2.patch
new file mode 100644
index 00000000..7fff4f68
--- /dev/null
+++ b/1001_linux-6.15.2.patch
@@ -0,0 +1,1328 @@
+diff --git a/Documentation/devicetree/bindings/phy/fsl,imx8mq-usb-phy.yaml b/Documentation/devicetree/bindings/phy/fsl,imx8mq-usb-phy.yaml
+index daee0c0fc91539..c468207eb95168 100644
+--- a/Documentation/devicetree/bindings/phy/fsl,imx8mq-usb-phy.yaml
++++ b/Documentation/devicetree/bindings/phy/fsl,imx8mq-usb-phy.yaml
+@@ -63,8 +63,7 @@ properties:
+   fsl,phy-tx-vboost-level-microvolt:
+     description:
+       Adjust the boosted transmit launch pk-pk differential amplitude
+-    minimum: 880
+-    maximum: 1120
++    enum: [844, 1008, 1156]
+ 
+   fsl,phy-comp-dis-tune-percent:
+     description:
+diff --git a/Documentation/devicetree/bindings/pwm/adi,axi-pwmgen.yaml b/Documentation/devicetree/bindings/pwm/adi,axi-pwmgen.yaml
+index 45e112d0efb466..5575c58357d6e7 100644
+--- a/Documentation/devicetree/bindings/pwm/adi,axi-pwmgen.yaml
++++ b/Documentation/devicetree/bindings/pwm/adi,axi-pwmgen.yaml
+@@ -30,11 +30,19 @@ properties:
+     const: 3
+ 
+   clocks:
+-    maxItems: 1
++    minItems: 1
++    maxItems: 2
++
++  clock-names:
++    minItems: 1
++    items:
++      - const: axi
++      - const: ext
+ 
+ required:
+   - reg
+   - clocks
++  - clock-names
+ 
+ unevaluatedProperties: false
+ 
+@@ -43,6 +51,7 @@ examples:
+     pwm@44b00000 {
+         compatible = "adi,axi-pwmgen-2.00.a";
+         reg = <0x44b00000 0x1000>;
+-        clocks = <&spi_clk>;
++        clocks = <&fpga_clk>, <&spi_clk>;
++        clock-names = "axi", "ext";
+         #pwm-cells = <3>;
+     };
+diff --git a/Documentation/devicetree/bindings/remoteproc/qcom,sm8150-pas.yaml b/Documentation/devicetree/bindings/remoteproc/qcom,sm8150-pas.yaml
+index 56ff6386534ddf..5dcc2a32c08004 100644
+--- a/Documentation/devicetree/bindings/remoteproc/qcom,sm8150-pas.yaml
++++ b/Documentation/devicetree/bindings/remoteproc/qcom,sm8150-pas.yaml
+@@ -16,6 +16,9 @@ description:
+ properties:
+   compatible:
+     enum:
++      - qcom,sc8180x-adsp-pas
++      - qcom,sc8180x-cdsp-pas
++      - qcom,sc8180x-slpi-pas
+       - qcom,sm8150-adsp-pas
+       - qcom,sm8150-cdsp-pas
+       - qcom,sm8150-mpss-pas
+diff --git a/Documentation/devicetree/bindings/usb/cypress,hx3.yaml b/Documentation/devicetree/bindings/usb/cypress,hx3.yaml
+index 1033b7a4b8f953..d6eac1213228d2 100644
+--- a/Documentation/devicetree/bindings/usb/cypress,hx3.yaml
++++ b/Documentation/devicetree/bindings/usb/cypress,hx3.yaml
+@@ -14,9 +14,22 @@ allOf:
+ 
+ properties:
+   compatible:
+-    enum:
+-      - usb4b4,6504
+-      - usb4b4,6506
++    oneOf:
++      - enum:
++          - usb4b4,6504
++          - usb4b4,6506
++      - items:
++          - enum:
++              - usb4b4,6500
++              - usb4b4,6508
++          - const: usb4b4,6504
++      - items:
++          - enum:
++              - usb4b4,6502
++              - usb4b4,6503
++              - usb4b4,6507
++              - usb4b4,650a
++          - const: usb4b4,6506
+ 
+   reg: true
+ 
+diff --git a/Documentation/firmware-guide/acpi/dsd/data-node-references.rst b/Documentation/firmware-guide/acpi/dsd/data-node-references.rst
+index 8d8b53e96bcfee..ccb4b153e6f2dd 100644
+--- a/Documentation/firmware-guide/acpi/dsd/data-node-references.rst
++++ b/Documentation/firmware-guide/acpi/dsd/data-node-references.rst
+@@ -12,11 +12,14 @@ ACPI in general allows referring to device objects in the tree only.
+ Hierarchical data extension nodes may not be referred to directly, hence this
+ document defines a scheme to implement such references.
+ 
+-A reference consist of the device object name followed by one or more
+-hierarchical data extension [dsd-guide] keys. Specifically, the hierarchical
+-data extension node which is referred to by the key shall lie directly under
+-the parent object i.e. either the device object or another hierarchical data
+-extension node.
++A reference to a _DSD hierarchical data node is a string consisting of a
++device object reference followed by a dot (".") and a relative path to a data
++node object. Do not use non-string references as this will produce a copy of
++the hierarchical data node, not a reference!
++
++The hierarchical data extension node which is referred to shall be located
++directly under its parent object i.e. either the device object or another
++hierarchical data extension node [dsd-guide].
+ 
+ The keys in the hierarchical data nodes shall consist of the name of the node,
+ "@" character and the number of the node in hexadecimal notation (without pre-
+@@ -33,11 +36,9 @@ extension key.
+ Example
+ =======
+ 
+-In the ASL snippet below, the "reference" _DSD property contains a
+-device object reference to DEV0 and under that device object, a
+-hierarchical data extension key "node@1" referring to the NOD1 object
+-and lastly, a hierarchical data extension key "anothernode" referring to
+-the ANOD object which is also the final target node of the reference.
++In the ASL snippet below, the "reference" _DSD property contains a string
++reference to a hierarchical data extension node ANOD under DEV0 under the parent
++of DEV1. ANOD is also the final target node of the reference.
+ ::
+ 
+ 	Device (DEV0)
+@@ -76,10 +77,7 @@ the ANOD object which is also the final target node of the reference.
+ 	    Name (_DSD, Package () {
+ 		ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
+ 		Package () {
+-		    Package () {
+-			"reference", Package () {
+-			    ^DEV0, "node@1", "anothernode"
+-			}
++		    Package () { "reference", "^DEV0.ANOD" }
+ 		    },
+ 		}
+ 	    })
+diff --git a/Documentation/firmware-guide/acpi/dsd/graph.rst b/Documentation/firmware-guide/acpi/dsd/graph.rst
+index b9dbfc73ed25b6..d6ae5ffa748ca4 100644
+--- a/Documentation/firmware-guide/acpi/dsd/graph.rst
++++ b/Documentation/firmware-guide/acpi/dsd/graph.rst
+@@ -66,12 +66,9 @@ of that port shall be zero. Similarly, if a port may only have a single
+ endpoint, the number of that endpoint shall be zero.
+ 
+ The endpoint reference uses property extension with "remote-endpoint" property
+-name followed by a reference in the same package. Such references consist of
+-the remote device reference, the first package entry of the port data extension
+-reference under the device and finally the first package entry of the endpoint
+-data extension reference under the port. Individual references thus appear as::
++name followed by a string reference in the same package. [data-node-ref]::
+ 
+-    Package() { device, "port@X", "endpoint@Y" }
++    "device.datanode"
+ 
+ In the above example, "X" is the number of the port and "Y" is the number of
+ the endpoint.
+@@ -109,7 +106,7 @@ A simple example of this is show below::
+ 		ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
+ 		Package () {
+ 		    Package () { "reg", 0 },
+-		    Package () { "remote-endpoint", Package() { \_SB.PCI0.ISP, "port@4", "endpoint@0" } },
++		    Package () { "remote-endpoint", "\\_SB.PCI0.ISP.EP40" },
+ 		}
+ 	    })
+ 	}
+@@ -141,7 +138,7 @@ A simple example of this is show below::
+ 		ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
+ 		Package () {
+ 		    Package () { "reg", 0 },
+-		    Package () { "remote-endpoint", Package () { \_SB.PCI0.I2C2.CAM0, "port@0", "endpoint@0" } },
++		    Package () { "remote-endpoint", "\\_SB.PCI0.I2C2.CAM0.EP00" },
+ 		}
+ 	    })
+ 	}
+diff --git a/Documentation/firmware-guide/acpi/dsd/leds.rst b/Documentation/firmware-guide/acpi/dsd/leds.rst
+index 93db592c93c712..a97cd07d49be38 100644
+--- a/Documentation/firmware-guide/acpi/dsd/leds.rst
++++ b/Documentation/firmware-guide/acpi/dsd/leds.rst
+@@ -15,11 +15,6 @@ Referring to LEDs in Device tree is documented in [video-interfaces], in
+ "flash-leds" property documentation. In short, LEDs are directly referred to by
+ using phandles.
+ 
+-While Device tree allows referring to any node in the tree [devicetree], in
+-ACPI references are limited to device nodes only [acpi]. For this reason using
+-the same mechanism on ACPI is not possible. A mechanism to refer to non-device
+-ACPI nodes is documented in [data-node-ref].
+-
+ ACPI allows (as does DT) using integer arguments after the reference. A
+ combination of the LED driver device reference and an integer argument,
+ referring to the "reg" property of the relevant LED, is used to identify
+@@ -74,7 +69,7 @@ omitted. ::
+ 			Package () {
+ 				Package () {
+ 					"flash-leds",
+-					Package () { ^LED, "led@0", ^LED, "led@1" },
++					Package () { "^LED.LED0", "^LED.LED1" },
+ 				}
+ 			}
+ 		})
+diff --git a/Makefile b/Makefile
+index 61d69a3fc827cc..7138d1fabfa4ae 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 15
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index d6cf1e23c2a326..96355ab9aed953 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -1238,10 +1238,6 @@ void play_dead_common(void)
+ 	local_irq_disable();
+ }
+ 
+-/*
+- * We need to flush the caches before going to sleep, lest we have
+- * dirty data in our caches when we come back up.
+- */
+ void __noreturn mwait_play_dead(unsigned int eax_hint)
+ {
+ 	struct mwait_cpu_dead *md = this_cpu_ptr(&mwait_cpu_dead);
+@@ -1287,6 +1283,50 @@ void __noreturn mwait_play_dead(unsigned int eax_hint)
+ 	}
+ }
+ 
++/*
++ * We need to flush the caches before going to sleep, lest we have
++ * dirty data in our caches when we come back up.
++ */
++static inline void mwait_play_dead_cpuid_hint(void)
++{
++	unsigned int eax, ebx, ecx, edx;
++	unsigned int highest_cstate = 0;
++	unsigned int highest_subcstate = 0;
++	int i;
++
++	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
++	    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
++		return;
++	if (!this_cpu_has(X86_FEATURE_MWAIT))
++		return;
++	if (!this_cpu_has(X86_FEATURE_CLFLUSH))
++		return;
++
++	eax = CPUID_LEAF_MWAIT;
++	ecx = 0;
++	native_cpuid(&eax, &ebx, &ecx, &edx);
++
++	/*
++	 * eax will be 0 if EDX enumeration is not valid.
++	 * Initialized below to cstate, sub_cstate value when EDX is valid.
++	 */
++	if (!(ecx & CPUID5_ECX_EXTENSIONS_SUPPORTED)) {
++		eax = 0;
++	} else {
++		edx >>= MWAIT_SUBSTATE_SIZE;
++		for (i = 0; i < 7 && edx; i++, edx >>= MWAIT_SUBSTATE_SIZE) {
++			if (edx & MWAIT_SUBSTATE_MASK) {
++				highest_cstate = i;
++				highest_subcstate = edx & MWAIT_SUBSTATE_MASK;
++			}
++		}
++		eax = (highest_cstate << MWAIT_SUBSTATE_SIZE) |
++			(highest_subcstate - 1);
++	}
++
++	mwait_play_dead(eax);
++}
++
+ /*
+  * Kick all "offline" CPUs out of mwait on kexec(). See comment in
+  * mwait_play_dead().
+@@ -1337,9 +1377,9 @@ void native_play_dead(void)
+ 	play_dead_common();
+ 	tboot_shutdown(TB_SHUTDOWN_WFS);
+ 
+-	/* Below returns only on error. */
+-	cpuidle_play_dead();
+-	hlt_play_dead();
++	mwait_play_dead_cpuid_hint();
++	if (cpuidle_play_dead())
++		hlt_play_dead();
+ }
+ 
+ #else /* ... !CONFIG_HOTPLUG_CPU */
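
For readers tracing the hint computation that moved into
mwait_play_dead_cpuid_hint() above, here is a small standalone mirror of
the CPUID.05H sub-state walk; the sample EDX value is made up, and the
kernel only runs this path when CPUID5_ECX_EXTENSIONS_SUPPORTED is set:

#include <stdio.h>

#define MWAIT_SUBSTATE_SIZE 4
#define MWAIT_SUBSTATE_MASK 0xf

/* Pick the deepest C-state advertising at least one MWAIT sub-state,
 * then the deepest sub-state within it, and encode both as the MWAIT
 * EAX hint. Assumes at least one sub-state is enumerated. */
static unsigned int mwait_hint(unsigned int edx)
{
	unsigned int highest_cstate = 0, highest_subcstate = 0;
	int i;

	edx >>= MWAIT_SUBSTATE_SIZE;	/* skip the C0 field */
	for (i = 0; i < 7 && edx; i++, edx >>= MWAIT_SUBSTATE_SIZE) {
		if (edx & MWAIT_SUBSTATE_MASK) {
			highest_cstate = i;
			highest_subcstate = edx & MWAIT_SUBSTATE_MASK;
		}
	}
	return (highest_cstate << MWAIT_SUBSTATE_SIZE) | (highest_subcstate - 1);
}

int main(void)
{
	/* Made-up CPUID.05H:EDX: two sub-states each for C1..C3 -> hint 0x21 */
	printf("hint = 0x%x\n", mwait_hint(0x00002220));
	return 0;
}
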
+diff --git a/drivers/acpi/acpica/acdebug.h b/drivers/acpi/acpica/acdebug.h
+index 911875c5a5f190..58842130ca47bb 100644
+--- a/drivers/acpi/acpica/acdebug.h
++++ b/drivers/acpi/acpica/acdebug.h
+@@ -37,7 +37,7 @@ struct acpi_db_argument_info {
+ struct acpi_db_execute_walk {
+ 	u32 count;
+ 	u32 max_count;
+-	char name_seg[ACPI_NAMESEG_SIZE + 1];
++	char name_seg[ACPI_NAMESEG_SIZE + 1] ACPI_NONSTRING;
+ };
+ 
+ #define PARAM_LIST(pl)                  pl
+diff --git a/drivers/acpi/acpica/aclocal.h b/drivers/acpi/acpica/aclocal.h
+index 6481c48c22bb7e..b40e9a520618e0 100644
+--- a/drivers/acpi/acpica/aclocal.h
++++ b/drivers/acpi/acpica/aclocal.h
+@@ -293,7 +293,7 @@ acpi_status (*acpi_internal_method) (struct acpi_walk_state * walk_state);
+  * expected_return_btypes - Allowed type(s) for the return value
+  */
+ struct acpi_name_info {
+-	char name[ACPI_NAMESEG_SIZE] __nonstring;
++	char name[ACPI_NAMESEG_SIZE] ACPI_NONSTRING;
+ 	u16 argument_list;
+ 	u8 expected_btypes;
+ };
+@@ -370,7 +370,7 @@ typedef acpi_status (*acpi_object_converter) (struct acpi_namespace_node *
+ 					      converted_object);
+ 
+ struct acpi_simple_repair_info {
+-	char name[ACPI_NAMESEG_SIZE] __nonstring;
++	char name[ACPI_NAMESEG_SIZE] ACPI_NONSTRING;
+ 	u32 unexpected_btypes;
+ 	u32 package_index;
+ 	acpi_object_converter object_converter;
+diff --git a/drivers/acpi/acpica/nsnames.c b/drivers/acpi/acpica/nsnames.c
+index d91153f6570053..22aeeeb56cffdb 100644
+--- a/drivers/acpi/acpica/nsnames.c
++++ b/drivers/acpi/acpica/nsnames.c
+@@ -194,7 +194,7 @@ acpi_ns_build_normalized_path(struct acpi_namespace_node *node,
+ 			      char *full_path, u32 path_size, u8 no_trailing)
+ {
+ 	u32 length = 0, i;
+-	char name[ACPI_NAMESEG_SIZE];
++	char name[ACPI_NAMESEG_SIZE] ACPI_NONSTRING;
+ 	u8 do_no_trailing;
+ 	char c, *left, *right;
+ 	struct acpi_namespace_node *next_node;
+diff --git a/drivers/acpi/acpica/nsrepair2.c b/drivers/acpi/acpica/nsrepair2.c
+index 330b5e4711daca..0075fc80d49843 100644
+--- a/drivers/acpi/acpica/nsrepair2.c
++++ b/drivers/acpi/acpica/nsrepair2.c
+@@ -25,7 +25,7 @@ acpi_status (*acpi_repair_function) (struct acpi_evaluate_info * info,
+ 				     return_object_ptr);
+ 
+ typedef struct acpi_repair_info {
+-	char name[ACPI_NAMESEG_SIZE] __nonstring;
++	char name[ACPI_NAMESEG_SIZE] ACPI_NONSTRING;
+ 	acpi_repair_function repair_function;
+ 
+ } acpi_repair_info;
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 5fc2c8ee61b19b..6be0f7ac7213d1 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -79,6 +79,8 @@ static HLIST_HEAD(binder_deferred_list);
+ static DEFINE_MUTEX(binder_deferred_lock);
+ 
+ static HLIST_HEAD(binder_devices);
++static DEFINE_SPINLOCK(binder_devices_lock);
++
+ static HLIST_HEAD(binder_procs);
+ static DEFINE_MUTEX(binder_procs_lock);
+ 
+@@ -5244,6 +5246,7 @@ static void binder_free_proc(struct binder_proc *proc)
+ 			__func__, proc->outstanding_txns);
+ 	device = container_of(proc->context, struct binder_device, context);
+ 	if (refcount_dec_and_test(&device->ref)) {
++		binder_remove_device(device);
+ 		kfree(proc->context->name);
+ 		kfree(device);
+ 	}
+@@ -6929,7 +6932,16 @@ const struct binder_debugfs_entry binder_debugfs_entries[] = {
+ 
+ void binder_add_device(struct binder_device *device)
+ {
++	spin_lock(&binder_devices_lock);
+ 	hlist_add_head(&device->hlist, &binder_devices);
++	spin_unlock(&binder_devices_lock);
++}
++
++void binder_remove_device(struct binder_device *device)
++{
++	spin_lock(&binder_devices_lock);
++	hlist_del_init(&device->hlist);
++	spin_unlock(&binder_devices_lock);
+ }
+ 
+ static int __init init_binder_device(const char *name)
+@@ -6956,7 +6968,7 @@ static int __init init_binder_device(const char *name)
+ 		return ret;
+ 	}
+ 
+-	hlist_add_head(&binder_device->hlist, &binder_devices);
++	binder_add_device(binder_device);
+ 
+ 	return ret;
+ }
+@@ -7018,7 +7030,7 @@ static int __init binder_init(void)
+ err_init_binder_device_failed:
+ 	hlist_for_each_entry_safe(device, tmp, &binder_devices, hlist) {
+ 		misc_deregister(&device->miscdev);
+-		hlist_del(&device->hlist);
++		binder_remove_device(device);
+ 		kfree(device);
+ 	}
+ 
+diff --git a/drivers/android/binder_internal.h b/drivers/android/binder_internal.h
+index 6a66c9769c6cd4..1ba5caf1d88d9d 100644
+--- a/drivers/android/binder_internal.h
++++ b/drivers/android/binder_internal.h
+@@ -583,9 +583,13 @@ struct binder_object {
+ /**
+  * Add a binder device to binder_devices
+  * @device: the new binder device to add to the global list
+- *
+- * Not reentrant as the list is not protected by any locks
+  */
+ void binder_add_device(struct binder_device *device);
+ 
++/**
++ * Remove a binder device from binder_devices
++ * @device: the binder device to remove from the global list
++ */
++void binder_remove_device(struct binder_device *device);
++
+ #endif /* _LINUX_BINDER_INTERNAL_H */
+diff --git a/drivers/android/binderfs.c b/drivers/android/binderfs.c
+index 94c6446604fc95..44d430c4ebefd2 100644
+--- a/drivers/android/binderfs.c
++++ b/drivers/android/binderfs.c
+@@ -274,7 +274,7 @@ static void binderfs_evict_inode(struct inode *inode)
+ 	mutex_unlock(&binderfs_minors_mutex);
+ 
+ 	if (refcount_dec_and_test(&device->ref)) {
+-		hlist_del_init(&device->hlist);
++		binder_remove_device(device);
+ 		kfree(device->context.name);
+ 		kfree(device);
+ 	}
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index e00590ba24fdbb..a2dc39c005f4f8 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -2415,14 +2415,14 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ 
+ 		qcadev->bt_en = devm_gpiod_get_optional(&serdev->dev, "enable",
+ 					       GPIOD_OUT_LOW);
+-		if (IS_ERR(qcadev->bt_en) &&
+-		    (data->soc_type == QCA_WCN6750 ||
+-		     data->soc_type == QCA_WCN6855)) {
+-			dev_err(&serdev->dev, "failed to acquire BT_EN gpio\n");
+-			return PTR_ERR(qcadev->bt_en);
+-		}
++		if (IS_ERR(qcadev->bt_en))
++			return dev_err_probe(&serdev->dev,
++					     PTR_ERR(qcadev->bt_en),
++					     "failed to acquire BT_EN gpio\n");
+ 
+-		if (!qcadev->bt_en)
++		if (!qcadev->bt_en &&
++		    (data->soc_type == QCA_WCN6750 ||
++		     data->soc_type == QCA_WCN6855))
+ 			power_ctrl_enabled = false;
+ 
+ 		qcadev->sw_ctrl = devm_gpiod_get_optional(&serdev->dev, "swctrl",
+diff --git a/drivers/clk/samsung/clk-exynosautov920.c b/drivers/clk/samsung/clk-exynosautov920.c
+index dc8d4240f6defc..b0561faecfeb1b 100644
+--- a/drivers/clk/samsung/clk-exynosautov920.c
++++ b/drivers/clk/samsung/clk-exynosautov920.c
+@@ -1393,7 +1393,7 @@ static const unsigned long hsi1_clk_regs[] __initconst = {
+ /* List of parent clocks for Muxes in CMU_HSI1 */
+ PNAME(mout_hsi1_mmc_card_user_p) = {"oscclk", "dout_clkcmu_hsi1_mmc_card"};
+ PNAME(mout_hsi1_noc_user_p) = { "oscclk", "dout_clkcmu_hsi1_noc" };
+-PNAME(mout_hsi1_usbdrd_user_p) = { "oscclk", "mout_clkcmu_hsi1_usbdrd" };
++PNAME(mout_hsi1_usbdrd_user_p) = { "oscclk", "dout_clkcmu_hsi1_usbdrd" };
+ PNAME(mout_hsi1_usbdrd_p) = { "dout_tcxo_div2", "mout_hsi1_usbdrd_user" };
+ 
+ static const struct samsung_mux_clock hsi1_mux_clks[] __initconst = {
+diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
+index d26b610e4f24f9..76768fe213a978 100644
+--- a/drivers/cpufreq/acpi-cpufreq.c
++++ b/drivers/cpufreq/acpi-cpufreq.c
+@@ -660,7 +660,7 @@ static u64 get_max_boost_ratio(unsigned int cpu, u64 *nominal_freq)
+ 	nominal_perf = perf_caps.nominal_perf;
+ 
+ 	if (nominal_freq)
+-		*nominal_freq = perf_caps.nominal_freq;
++		*nominal_freq = perf_caps.nominal_freq * 1000;
+ 
+ 	if (!highest_perf || !nominal_perf) {
+ 		pr_debug("CPU%d: highest or nominal performance missing\n", cpu);
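
A quick worked example of the unit fix above (figures illustrative): ACPI
CPPC reports nominal_freq in MHz while the cpufreq core keeps frequencies
in kHz, so a nominal 2400 MHz part must be stored as 2400000 kHz. Without
the multiplication the value fed into the boost-ratio frequency scaling
was presumably a factor of 1000 too small.
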
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index a187cdb43e7e1d..0fe8bd19ecd13e 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -675,21 +675,15 @@ static void dm_crtc_high_irq(void *interrupt_params)
+ 	spin_lock_irqsave(&adev_to_drm(adev)->event_lock, flags);
+ 
+ 	if (acrtc->dm_irq_params.stream &&
+-		acrtc->dm_irq_params.vrr_params.supported) {
+-		bool replay_en = acrtc->dm_irq_params.stream->link->replay_settings.replay_feature_enabled;
+-		bool psr_en = acrtc->dm_irq_params.stream->link->psr_settings.psr_feature_enabled;
+-		bool fs_active_var_en = acrtc->dm_irq_params.freesync_config.state == VRR_STATE_ACTIVE_VARIABLE;
+-
++	    acrtc->dm_irq_params.vrr_params.supported &&
++	    acrtc->dm_irq_params.freesync_config.state ==
++		    VRR_STATE_ACTIVE_VARIABLE) {
+ 		mod_freesync_handle_v_update(adev->dm.freesync_module,
+ 					     acrtc->dm_irq_params.stream,
+ 					     &acrtc->dm_irq_params.vrr_params);
+ 
+-		/* update vmin_vmax only if freesync is enabled, or only if PSR and REPLAY are disabled */
+-		if (fs_active_var_en || (!fs_active_var_en && !replay_en && !psr_en)) {
+-			dc_stream_adjust_vmin_vmax(adev->dm.dc,
+-					acrtc->dm_irq_params.stream,
+-					&acrtc->dm_irq_params.vrr_params.adjust);
+-		}
++		dc_stream_adjust_vmin_vmax(adev->dm.dc, acrtc->dm_irq_params.stream,
++					   &acrtc->dm_irq_params.vrr_params.adjust);
+ 	}
+ 
+ 	/*
+diff --git a/drivers/nvmem/Kconfig b/drivers/nvmem/Kconfig
+index 8671b7c974b933..eceb3cdb421ffb 100644
+--- a/drivers/nvmem/Kconfig
++++ b/drivers/nvmem/Kconfig
+@@ -260,6 +260,7 @@ config NVMEM_RCAR_EFUSE
+ config NVMEM_RMEM
+ 	tristate "Reserved Memory Based Driver Support"
+ 	depends on HAS_IOMEM
++	select CRC32
+ 	help
+ 	  This driver maps reserved memory into an nvmem device. It might be
+ 	  useful to expose information left by firmware in memory.
+diff --git a/drivers/pinctrl/mediatek/mtk-eint.c b/drivers/pinctrl/mediatek/mtk-eint.c
+index b4eb2beab691cf..c516c34aaaf603 100644
+--- a/drivers/pinctrl/mediatek/mtk-eint.c
++++ b/drivers/pinctrl/mediatek/mtk-eint.c
+@@ -22,7 +22,6 @@
+ #include <linux/platform_device.h>
+ 
+ #include "mtk-eint.h"
+-#include "pinctrl-mtk-common-v2.h"
+ 
+ #define MTK_EINT_EDGE_SENSITIVE           0
+ #define MTK_EINT_LEVEL_SENSITIVE          1
+@@ -505,10 +504,9 @@ int mtk_eint_find_irq(struct mtk_eint *eint, unsigned long eint_n)
+ }
+ EXPORT_SYMBOL_GPL(mtk_eint_find_irq);
+ 
+-int mtk_eint_do_init(struct mtk_eint *eint)
++int mtk_eint_do_init(struct mtk_eint *eint, struct mtk_eint_pin *eint_pin)
+ {
+ 	unsigned int size, i, port, inst = 0;
+-	struct mtk_pinctrl *hw = (struct mtk_pinctrl *)eint->pctl;
+ 
+ 	/* If clients don't assign a specific regs, let's use generic one */
+ 	if (!eint->regs)
+@@ -519,7 +517,15 @@ int mtk_eint_do_init(struct mtk_eint *eint)
+ 	if (!eint->base_pin_num)
+ 		return -ENOMEM;
+ 
+-	if (eint->nbase == 1) {
++	if (eint_pin) {
++		eint->pins = eint_pin;
++		for (i = 0; i < eint->hw->ap_num; i++) {
++			inst = eint->pins[i].instance;
++			if (inst >= eint->nbase)
++				continue;
++			eint->base_pin_num[inst]++;
++		}
++	} else {
+ 		size = eint->hw->ap_num * sizeof(struct mtk_eint_pin);
+ 		eint->pins = devm_kmalloc(eint->dev, size, GFP_KERNEL);
+ 		if (!eint->pins)
+@@ -533,16 +539,6 @@ int mtk_eint_do_init(struct mtk_eint *eint)
+ 		}
+ 	}
+ 
+-	if (hw && hw->soc && hw->soc->eint_pin) {
+-		eint->pins = hw->soc->eint_pin;
+-		for (i = 0; i < eint->hw->ap_num; i++) {
+-			inst = eint->pins[i].instance;
+-			if (inst >= eint->nbase)
+-				continue;
+-			eint->base_pin_num[inst]++;
+-		}
+-	}
+-
+ 	eint->pin_list = devm_kmalloc(eint->dev, eint->nbase * sizeof(u16 *), GFP_KERNEL);
+ 	if (!eint->pin_list)
+ 		goto err_pin_list;
+@@ -610,7 +606,7 @@ int mtk_eint_do_init(struct mtk_eint *eint)
+ err_wake_mask:
+ 	devm_kfree(eint->dev, eint->pin_list);
+ err_pin_list:
+-	if (eint->nbase == 1)
++	if (!eint_pin)
+ 		devm_kfree(eint->dev, eint->pins);
+ err_pins:
+ 	devm_kfree(eint->dev, eint->base_pin_num);
+diff --git a/drivers/pinctrl/mediatek/mtk-eint.h b/drivers/pinctrl/mediatek/mtk-eint.h
+index f7f58cca0d5e31..23801d4b636f62 100644
+--- a/drivers/pinctrl/mediatek/mtk-eint.h
++++ b/drivers/pinctrl/mediatek/mtk-eint.h
+@@ -88,7 +88,7 @@ struct mtk_eint {
+ };
+ 
+ #if IS_ENABLED(CONFIG_EINT_MTK)
+-int mtk_eint_do_init(struct mtk_eint *eint);
++int mtk_eint_do_init(struct mtk_eint *eint, struct mtk_eint_pin *eint_pin);
+ int mtk_eint_do_suspend(struct mtk_eint *eint);
+ int mtk_eint_do_resume(struct mtk_eint *eint);
+ int mtk_eint_set_debounce(struct mtk_eint *eint, unsigned long eint_n,
+@@ -96,7 +96,8 @@ int mtk_eint_set_debounce(struct mtk_eint *eint, unsigned long eint_n,
+ int mtk_eint_find_irq(struct mtk_eint *eint, unsigned long eint_n);
+ 
+ #else
+-static inline int mtk_eint_do_init(struct mtk_eint *eint)
++static inline int mtk_eint_do_init(struct mtk_eint *eint,
++				   struct mtk_eint_pin *eint_pin)
+ {
+ 	return -EOPNOTSUPP;
+ }
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+index d1556b75d9effd..ba13558bfcd7bb 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+@@ -416,7 +416,7 @@ int mtk_build_eint(struct mtk_pinctrl *hw, struct platform_device *pdev)
+ 	hw->eint->pctl = hw;
+ 	hw->eint->gpio_xlate = &mtk_eint_xt;
+ 
+-	ret = mtk_eint_do_init(hw->eint);
++	ret = mtk_eint_do_init(hw->eint, hw->soc->eint_pin);
+ 	if (ret)
+ 		goto err_free_eint;
+ 
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common.c
+index 8596f3541265e9..7289648eaa0259 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common.c
+@@ -1039,7 +1039,7 @@ static int mtk_eint_init(struct mtk_pinctrl *pctl, struct platform_device *pdev)
+ 	pctl->eint->pctl = pctl;
+ 	pctl->eint->gpio_xlate = &mtk_eint_xt;
+ 
+-	return mtk_eint_do_init(pctl->eint);
++	return mtk_eint_do_init(pctl->eint, NULL);
+ }
+ 
+ /* This is used as a common probe function */
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+index 335744ac831057..79f9c08e5039c3 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+@@ -417,20 +417,22 @@ static int armada_37xx_gpio_direction_output(struct gpio_chip *chip,
+ 					     unsigned int offset, int value)
+ {
+ 	struct armada_37xx_pinctrl *info = gpiochip_get_data(chip);
+-	unsigned int reg = OUTPUT_EN;
++	unsigned int en_offset = offset;
++	unsigned int reg = OUTPUT_VAL;
+ 	unsigned int mask, val, ret;
+ 
+ 	armada_37xx_update_reg(&reg, &offset);
+ 	mask = BIT(offset);
++	val = value ? mask : 0;
+ 
+-	ret = regmap_update_bits(info->regmap, reg, mask, mask);
+-
++	ret = regmap_update_bits(info->regmap, reg, mask, val);
+ 	if (ret)
+ 		return ret;
+ 
+-	reg = OUTPUT_VAL;
+-	val = value ? mask : 0;
+-	regmap_update_bits(info->regmap, reg, mask, val);
++	reg = OUTPUT_EN;
++	armada_37xx_update_reg(&reg, &en_offset);
++
++	regmap_update_bits(info->regmap, reg, mask, mask);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/rtc/class.c b/drivers/rtc/class.c
+index b88cd4fb295bce..b1a2be1f9e3b93 100644
+--- a/drivers/rtc/class.c
++++ b/drivers/rtc/class.c
+@@ -326,7 +326,7 @@ static void rtc_device_get_offset(struct rtc_device *rtc)
+ 	 *
+ 	 * Otherwise the offset seconds should be 0.
+ 	 */
+-	if (rtc->start_secs > rtc->range_max ||
++	if ((rtc->start_secs >= 0 && rtc->start_secs > rtc->range_max) ||
+ 	    rtc->start_secs + range_secs - 1 < rtc->range_min)
+ 		rtc->offset_secs = rtc->start_secs - rtc->range_min;
+ 	else if (rtc->start_secs > rtc->range_min)
+diff --git a/drivers/rtc/lib.c b/drivers/rtc/lib.c
+index fe361652727a3f..13b5b1f2046510 100644
+--- a/drivers/rtc/lib.c
++++ b/drivers/rtc/lib.c
+@@ -46,24 +46,38 @@ EXPORT_SYMBOL(rtc_year_days);
+  * rtc_time64_to_tm - converts time64_t to rtc_time.
+  *
+  * @time:	The number of seconds since 01-01-1970 00:00:00.
+- *		(Must be positive.)
++ *		Works for dates back to at least 1900.
+  * @tm:		Pointer to the struct rtc_time.
+  */
+ void rtc_time64_to_tm(time64_t time, struct rtc_time *tm)
+ {
+-	unsigned int secs;
+-	int days;
++	int days, secs;
+ 
+ 	u64 u64tmp;
+ 	u32 u32tmp, udays, century, day_of_century, year_of_century, year,
+ 		day_of_year, month, day;
+ 	bool is_Jan_or_Feb, is_leap_year;
+ 
+-	/* time must be positive */
++	/*
++	 * Get days and seconds while preserving the sign to
++	 * handle negative time values (dates before 1970-01-01)
++	 */
+ 	days = div_s64_rem(time, 86400, &secs);
+ 
++	/*
++	 * We need 0 <= secs < 86400 which isn't given for negative
++	 * values of time. Fixup accordingly.
++	 */
++	if (secs < 0) {
++		days -= 1;
++		secs += 86400;
++	}
++
+ 	/* day of the week, 1970-01-01 was a Thursday */
+ 	tm->tm_wday = (days + 4) % 7;
++	/* Ensure tm_wday is always positive */
++	if (tm->tm_wday < 0)
++		tm->tm_wday += 7;
+ 
+ 	/*
+ 	 * The following algorithm is, basically, Proposition 6.3 of Neri
+@@ -93,7 +107,7 @@ void rtc_time64_to_tm(time64_t time, struct rtc_time *tm)
+ 	 * thus, is slightly different from [1].
+ 	 */
+ 
+-	udays		= ((u32) days) + 719468;
++	udays		= days + 719468;
+ 
+ 	u32tmp		= 4 * udays + 3;
+ 	century		= u32tmp / 146097;
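
A standalone sketch of the fixup above, showing why the sign handling
matters; this is a userspace mirror of the kernel arithmetic, and C
division truncates toward zero just like div_s64_rem():

#include <stdio.h>

/* Split time into days/secs with 0 <= secs < 86400 even for negative
 * inputs, then derive the weekday; 1970-01-01 was a Thursday (wday 4). */
static void split_time(long long time)
{
	long long days = time / 86400;
	int secs = (int)(time % 86400);

	if (secs < 0) {
		days -= 1;
		secs += 86400;
	}

	int wday = (int)((days + 4) % 7);
	if (wday < 0)
		wday += 7;

	printf("time=%lld -> days=%lld secs=%d wday=%d\n",
	       time, days, secs, wday);
}

int main(void)
{
	split_time(-1);	/* 1969-12-31 23:59:59: days=-1 secs=86399 wday=3 (Wed) */
	split_time(0);	/* 1970-01-01 00:00:00: days=0  secs=0     wday=4 (Thu) */
	return 0;
}
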
+diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
+index cd15e84c47f475..1db2e951b53fac 100644
+--- a/drivers/thunderbolt/ctl.c
++++ b/drivers/thunderbolt/ctl.c
+@@ -151,6 +151,11 @@ static void tb_cfg_request_dequeue(struct tb_cfg_request *req)
+ 	struct tb_ctl *ctl = req->ctl;
+ 
+ 	mutex_lock(&ctl->request_queue_lock);
++	if (!test_bit(TB_CFG_REQUEST_ACTIVE, &req->flags)) {
++		mutex_unlock(&ctl->request_queue_lock);
++		return;
++	}
++
+ 	list_del(&req->list);
+ 	clear_bit(TB_CFG_REQUEST_ACTIVE, &req->flags);
+ 	if (test_bit(TB_CFG_REQUEST_CANCELED, &req->flags))
+diff --git a/drivers/tty/serial/jsm/jsm_tty.c b/drivers/tty/serial/jsm/jsm_tty.c
+index ce0fef7e2c665c..be2f130696b3a0 100644
+--- a/drivers/tty/serial/jsm/jsm_tty.c
++++ b/drivers/tty/serial/jsm/jsm_tty.c
+@@ -451,6 +451,7 @@ int jsm_uart_port_init(struct jsm_board *brd)
+ 		if (!brd->channels[i])
+ 			continue;
+ 
++		brd->channels[i]->uart_port.dev = &brd->pci_dev->dev;
+ 		brd->channels[i]->uart_port.irq = brd->irq;
+ 		brd->channels[i]->uart_port.uartclk = 14745600;
+ 		brd->channels[i]->uart_port.type = PORT_JSM;
+diff --git a/drivers/usb/class/usbtmc.c b/drivers/usb/class/usbtmc.c
+index 740d2d2b19fbe0..66f3d9324ba2f3 100644
+--- a/drivers/usb/class/usbtmc.c
++++ b/drivers/usb/class/usbtmc.c
+@@ -483,6 +483,7 @@ static int usbtmc_get_stb(struct usbtmc_file_data *file_data, __u8 *stb)
+ 	u8 tag;
+ 	int rv;
+ 	long wait_rv;
++	unsigned long expire;
+ 
+ 	dev_dbg(dev, "Enter ioctl_read_stb iin_ep_present: %d\n",
+ 		data->iin_ep_present);
+@@ -512,10 +513,11 @@ static int usbtmc_get_stb(struct usbtmc_file_data *file_data, __u8 *stb)
+ 	}
+ 
+ 	if (data->iin_ep_present) {
++		expire = msecs_to_jiffies(file_data->timeout);
+ 		wait_rv = wait_event_interruptible_timeout(
+ 			data->waitq,
+ 			atomic_read(&data->iin_data_valid) != 0,
+-			file_data->timeout);
++			expire);
+ 		if (wait_rv < 0) {
+ 			dev_dbg(dev, "wait interrupted %ld\n", wait_rv);
+ 			rv = wait_rv;
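
A worked example of why the conversion above matters (HZ figure
illustrative): wait_event_interruptible_timeout() expects a timeout in
jiffies, so passing file_data->timeout (milliseconds) directly made the
wait scale with HZ. At HZ=250 one jiffy is 4 ms, turning a requested
5000 ms timeout into 5000 jiffies = 20 seconds, whereas
msecs_to_jiffies(5000) yields the intended 1250 jiffies = 5 seconds.
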
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 36d3df7d040c63..53d68d20fb62e0 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -372,6 +372,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* SanDisk Corp. SanDisk 3.2Gen1 */
+ 	{ USB_DEVICE(0x0781, 0x55a3), .driver_info = USB_QUIRK_DELAY_INIT },
+ 
++	/* SanDisk Extreme 55AE */
++	{ USB_DEVICE(0x0781, 0x55ae), .driver_info = USB_QUIRK_NO_LPM },
++
+ 	/* Realforce 87U Keyboard */
+ 	{ USB_DEVICE(0x0853, 0x011b), .driver_info = USB_QUIRK_NO_LPM },
+ 
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 010688dd9e49ce..22579d0d8ab8aa 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -458,6 +458,8 @@ static int pl2303_detect_type(struct usb_serial *serial)
+ 		case 0x605:
+ 		case 0x700:	/* GR */
+ 		case 0x705:
++		case 0x905:	/* GT-2AB */
++		case 0x1005:	/* GC-Q20 */
+ 			return TYPE_HXN;
+ 		}
+ 		break;
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index d460d71b425783..1477e31d776327 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -52,6 +52,13 @@ UNUSUAL_DEV(0x059f, 0x1061, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_NO_REPORT_OPCODES | US_FL_NO_SAME),
+ 
++/* Reported-by: Zhihong Zhou <zhouzhihong@greatwall.com.cn> */
++UNUSUAL_DEV(0x0781, 0x55e8, 0x0000, 0x9999,
++		"SanDisk",
++		"",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_IGNORE_UAS),
++
+ /* Reported-by: Hongling Zeng <zenghongling@kylinos.cn> */
+ UNUSUAL_DEV(0x090c, 0x2000, 0x0000, 0x9999,
+ 		"Hiksemi",
+diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h
+index 9c5278a0c5d409..70910232a05d74 100644
+--- a/drivers/usb/typec/ucsi/ucsi.h
++++ b/drivers/usb/typec/ucsi/ucsi.h
+@@ -434,7 +434,7 @@ struct ucsi_debugfs_entry {
+ 		u64 low;
+ 		u64 high;
+ 	} response;
+-	u32 status;
++	int status;
+ 	struct dentry *dentry;
+ };
+ 
+diff --git a/fs/bcachefs/dirent.c b/fs/bcachefs/dirent.c
+index a5119508822789..901230ca4a750e 100644
+--- a/fs/bcachefs/dirent.c
++++ b/fs/bcachefs/dirent.c
+@@ -395,8 +395,8 @@ int bch2_dirent_read_target(struct btree_trans *trans, subvol_inum dir,
+ }
+ 
+ int bch2_dirent_rename(struct btree_trans *trans,
+-		subvol_inum src_dir, struct bch_hash_info *src_hash, u64 *src_dir_i_size,
+-		subvol_inum dst_dir, struct bch_hash_info *dst_hash, u64 *dst_dir_i_size,
++		subvol_inum src_dir, struct bch_hash_info *src_hash,
++		subvol_inum dst_dir, struct bch_hash_info *dst_hash,
+ 		const struct qstr *src_name, subvol_inum *src_inum, u64 *src_offset,
+ 		const struct qstr *dst_name, subvol_inum *dst_inum, u64 *dst_offset,
+ 		enum bch_rename_mode mode)
+@@ -535,14 +535,6 @@ int bch2_dirent_rename(struct btree_trans *trans,
+ 	    new_src->v.d_type == DT_SUBVOL)
+ 		new_src->v.d_parent_subvol = cpu_to_le32(src_dir.subvol);
+ 
+-	if (old_dst.k)
+-		*dst_dir_i_size -= bkey_bytes(old_dst.k);
+-	*src_dir_i_size -= bkey_bytes(old_src.k);
+-
+-	if (mode == BCH_RENAME_EXCHANGE)
+-		*src_dir_i_size += bkey_bytes(&new_src->k);
+-	*dst_dir_i_size += bkey_bytes(&new_dst->k);
+-
+ 	ret = bch2_trans_update(trans, &dst_iter, &new_dst->k_i, 0);
+ 	if (ret)
+ 		goto out;
+diff --git a/fs/bcachefs/dirent.h b/fs/bcachefs/dirent.h
+index d3e7ae669575a6..999b895fa28a86 100644
+--- a/fs/bcachefs/dirent.h
++++ b/fs/bcachefs/dirent.h
+@@ -80,8 +80,8 @@ enum bch_rename_mode {
+ };
+ 
+ int bch2_dirent_rename(struct btree_trans *,
+-		       subvol_inum, struct bch_hash_info *, u64 *,
+-		       subvol_inum, struct bch_hash_info *, u64 *,
++		       subvol_inum, struct bch_hash_info *,
++		       subvol_inum, struct bch_hash_info *,
+ 		       const struct qstr *, subvol_inum *, u64 *,
+ 		       const struct qstr *, subvol_inum *, u64 *,
+ 		       enum bch_rename_mode);
+diff --git a/fs/bcachefs/errcode.h b/fs/bcachefs/errcode.h
+index d9ebffa5b3a282..346766299cb34a 100644
+--- a/fs/bcachefs/errcode.h
++++ b/fs/bcachefs/errcode.h
+@@ -209,6 +209,8 @@
+ 	x(EINVAL,			remove_would_lose_data)			\
+ 	x(EINVAL,			no_resize_with_buckets_nouse)		\
+ 	x(EINVAL,			inode_unpack_error)			\
++	x(EINVAL,			inode_not_unlinked)			\
++	x(EINVAL,			inode_has_child_snapshot)		\
+ 	x(EINVAL,			varint_decode_error)			\
+ 	x(EINVAL,			erasure_coding_found_btree_node)	\
+ 	x(EOPNOTSUPP,			may_not_use_incompat_feature)		\
+diff --git a/fs/bcachefs/fs.c b/fs/bcachefs/fs.c
+index 47f1a64c5c8d83..8a47ce3467e8d2 100644
+--- a/fs/bcachefs/fs.c
++++ b/fs/bcachefs/fs.c
+@@ -2181,7 +2181,13 @@ static void bch2_evict_inode(struct inode *vinode)
+ 				KEY_TYPE_QUOTA_WARN);
+ 		bch2_quota_acct(c, inode->ei_qid, Q_INO, -1,
+ 				KEY_TYPE_QUOTA_WARN);
+-		bch2_inode_rm(c, inode_inum(inode));
++		int ret = bch2_inode_rm(c, inode_inum(inode));
++		if (ret && !bch2_err_matches(ret, EROFS)) {
++			bch_err_msg(c, ret, "VFS incorrectly tried to delete inode %llu:%llu",
++				    inode->ei_inum.subvol,
++				    inode->ei_inum.inum);
++			bch2_sb_error_count(c, BCH_FSCK_ERR_vfs_bad_inode_rm);
++		}
+ 
+ 		/*
+ 		 * If we are deleting, we need it present in the vfs hash table
+diff --git a/fs/bcachefs/fsck.c b/fs/bcachefs/fsck.c
+index aaf18708527644..bf117f2225d8a0 100644
+--- a/fs/bcachefs/fsck.c
++++ b/fs/bcachefs/fsck.c
+@@ -1183,6 +1183,14 @@ static int check_inode(struct btree_trans *trans,
+ 		ret = 0;
+ 	}
+ 
++	if (fsck_err_on(S_ISDIR(u.bi_mode) && u.bi_size,
++			trans, inode_dir_has_nonzero_i_size,
++			"directory %llu:%u with nonzero i_size %lli",
++			u.bi_inum, u.bi_snapshot, u.bi_size)) {
++		u.bi_size = 0;
++		do_update = true;
++	}
++
+ 	ret = bch2_inode_has_child_snapshots(trans, k.k->p);
+ 	if (ret < 0)
+ 		goto err;
+diff --git a/fs/bcachefs/inode.c b/fs/bcachefs/inode.c
+index 490b85841de960..845efd429d13a8 100644
+--- a/fs/bcachefs/inode.c
++++ b/fs/bcachefs/inode.c
+@@ -38,6 +38,7 @@ static const char * const bch2_inode_flag_strs[] = {
+ #undef  x
+ 
+ static int delete_ancestor_snapshot_inodes(struct btree_trans *, struct bpos);
++static int may_delete_deleted_inum(struct btree_trans *, subvol_inum);
+ 
+ static const u8 byte_table[8] = { 1, 2, 3, 4, 6, 8, 10, 13 };
+ 
+@@ -1048,19 +1049,23 @@ int bch2_inode_rm(struct bch_fs *c, subvol_inum inum)
+ 	u32 snapshot;
+ 	int ret;
+ 
++	ret = lockrestart_do(trans, may_delete_deleted_inum(trans, inum));
++	if (ret)
++		goto err2;
++
+ 	/*
+ 	 * If this was a directory, there shouldn't be any real dirents left -
+ 	 * but there could be whiteouts (from hash collisions) that we should
+ 	 * delete:
+ 	 *
+-	 * XXX: the dirent could ideally would delete whiteouts when they're no
++	 * XXX: the dirent code ideally would delete whiteouts when they're no
+ 	 * longer needed
+ 	 */
+ 	ret   = bch2_inode_delete_keys(trans, inum, BTREE_ID_extents) ?:
+ 		bch2_inode_delete_keys(trans, inum, BTREE_ID_xattrs) ?:
+ 		bch2_inode_delete_keys(trans, inum, BTREE_ID_dirents);
+ 	if (ret)
+-		goto err;
++		goto err2;
+ retry:
+ 	bch2_trans_begin(trans);
+ 
+@@ -1342,10 +1347,8 @@ int bch2_inode_rm_snapshot(struct btree_trans *trans, u64 inum, u32 snapshot)
+ 		delete_ancestor_snapshot_inodes(trans, SPOS(0, inum, snapshot));
+ }
+ 
+-static int may_delete_deleted_inode(struct btree_trans *trans,
+-				    struct btree_iter *iter,
+-				    struct bpos pos,
+-				    bool *need_another_pass)
++static int may_delete_deleted_inode(struct btree_trans *trans, struct bpos pos,
++				    bool from_deleted_inodes)
+ {
+ 	struct bch_fs *c = trans->c;
+ 	struct btree_iter inode_iter;
+@@ -1360,11 +1363,13 @@ static int may_delete_deleted_inode(struct btree_trans *trans,
+ 		return ret;
+ 
+ 	ret = bkey_is_inode(k.k) ? 0 : -BCH_ERR_ENOENT_inode;
+-	if (fsck_err_on(!bkey_is_inode(k.k),
++	if (fsck_err_on(from_deleted_inodes && ret,
+ 			trans, deleted_inode_missing,
+ 			"nonexistent inode %llu:%u in deleted_inodes btree",
+ 			pos.offset, pos.snapshot))
+ 		goto delete;
++	if (ret)
++		goto out;
+ 
+ 	ret = bch2_inode_unpack(k, &inode);
+ 	if (ret)
+@@ -1372,7 +1377,8 @@ static int may_delete_deleted_inode(struct btree_trans *trans,
+ 
+ 	if (S_ISDIR(inode.bi_mode)) {
+ 		ret = bch2_empty_dir_snapshot(trans, pos.offset, 0, pos.snapshot);
+-		if (fsck_err_on(bch2_err_matches(ret, ENOTEMPTY),
++		if (fsck_err_on(from_deleted_inodes &&
++				bch2_err_matches(ret, ENOTEMPTY),
+ 				trans, deleted_inode_is_dir,
+ 				"non empty directory %llu:%u in deleted_inodes btree",
+ 				pos.offset, pos.snapshot))
+@@ -1381,17 +1387,25 @@ static int may_delete_deleted_inode(struct btree_trans *trans,
+ 			goto out;
+ 	}
+ 
+-	if (fsck_err_on(!(inode.bi_flags & BCH_INODE_unlinked),
++	ret = inode.bi_flags & BCH_INODE_unlinked ? 0 : -BCH_ERR_inode_not_unlinked;
++	if (fsck_err_on(from_deleted_inodes && ret,
+ 			trans, deleted_inode_not_unlinked,
+ 			"non-deleted inode %llu:%u in deleted_inodes btree",
+ 			pos.offset, pos.snapshot))
+ 		goto delete;
++	if (ret)
++		goto out;
++
++	ret = !(inode.bi_flags & BCH_INODE_has_child_snapshot)
++		? 0 : -BCH_ERR_inode_has_child_snapshot;
+ 
+-	if (fsck_err_on(inode.bi_flags & BCH_INODE_has_child_snapshot,
++	if (fsck_err_on(from_deleted_inodes && ret,
+ 			trans, deleted_inode_has_child_snapshots,
+ 			"inode with child snapshots %llu:%u in deleted_inodes btree",
+ 			pos.offset, pos.snapshot))
+ 		goto delete;
++	if (ret)
++		goto out;
+ 
+ 	ret = bch2_inode_has_child_snapshots(trans, k.k->p);
+ 	if (ret < 0)
+@@ -1408,19 +1422,28 @@ static int may_delete_deleted_inode(struct btree_trans *trans,
+ 			if (ret)
+ 				goto out;
+ 		}
++
++		if (!from_deleted_inodes) {
++			ret =   bch2_trans_commit(trans, NULL, NULL, BCH_TRANS_COMMIT_no_enospc) ?:
++				-BCH_ERR_inode_has_child_snapshot;
++			goto out;
++		}
++
+ 		goto delete;
+ 
+ 	}
+ 
+-	if (test_bit(BCH_FS_clean_recovery, &c->flags) &&
+-	    !fsck_err(trans, deleted_inode_but_clean,
+-		      "filesystem marked as clean but have deleted inode %llu:%u",
+-		      pos.offset, pos.snapshot)) {
+-		ret = 0;
+-		goto out;
+-	}
++	if (from_deleted_inodes) {
++		if (test_bit(BCH_FS_clean_recovery, &c->flags) &&
++		    !fsck_err(trans, deleted_inode_but_clean,
++			      "filesystem marked as clean but have deleted inode %llu:%u",
++			      pos.offset, pos.snapshot)) {
++			ret = 0;
++			goto out;
++		}
+ 
+-	ret = 1;
++		ret = 1;
++	}
+ out:
+ fsck_err:
+ 	bch2_trans_iter_exit(trans, &inode_iter);
+@@ -1431,12 +1454,19 @@ static int may_delete_deleted_inode(struct btree_trans *trans,
+ 	goto out;
+ }
+ 
++static int may_delete_deleted_inum(struct btree_trans *trans, subvol_inum inum)
++{
++	u32 snapshot;
++
++	return bch2_subvolume_get_snapshot(trans, inum.subvol, &snapshot) ?:
++		may_delete_deleted_inode(trans, SPOS(0, inum.inum, snapshot), false);
++}
++
+ int bch2_delete_dead_inodes(struct bch_fs *c)
+ {
+ 	struct btree_trans *trans = bch2_trans_get(c);
+-	bool need_another_pass;
+ 	int ret;
+-again:
++
+ 	/*
+ 	 * if we ran check_inodes() unlinked inodes will have already been
+ 	 * cleaned up but the write buffer will be out of sync; therefore we
+@@ -1446,8 +1476,6 @@ int bch2_delete_dead_inodes(struct bch_fs *c)
+ 	if (ret)
+ 		goto err;
+ 
+-	need_another_pass = false;
+-
+ 	/*
+ 	 * Weird transaction restart handling here because on successful delete,
+ 	 * bch2_inode_rm_snapshot() will return a nested transaction restart,
+@@ -1457,7 +1485,7 @@ int bch2_delete_dead_inodes(struct bch_fs *c)
+ 	ret = for_each_btree_key_commit(trans, iter, BTREE_ID_deleted_inodes, POS_MIN,
+ 					BTREE_ITER_prefetch|BTREE_ITER_all_snapshots, k,
+ 					NULL, NULL, BCH_TRANS_COMMIT_no_enospc, ({
+-		ret = may_delete_deleted_inode(trans, &iter, k.k->p, &need_another_pass);
++		ret = may_delete_deleted_inode(trans, k.k->p, true);
+ 		if (ret > 0) {
+ 			bch_verbose_ratelimited(c, "deleting unlinked inode %llu:%u",
+ 						k.k->p.offset, k.k->p.snapshot);
+@@ -1478,9 +1506,6 @@ int bch2_delete_dead_inodes(struct bch_fs *c)
+ 
+ 		ret;
+ 	}));
+-
+-	if (!ret && need_another_pass)
+-		goto again;
+ err:
+ 	bch2_trans_put(trans);
+ 	return ret;
+diff --git a/fs/bcachefs/namei.c b/fs/bcachefs/namei.c
+index 9136a90977893c..413fb60cff434b 100644
+--- a/fs/bcachefs/namei.c
++++ b/fs/bcachefs/namei.c
+@@ -418,8 +418,8 @@ int bch2_rename_trans(struct btree_trans *trans,
+ 	}
+ 
+ 	ret = bch2_dirent_rename(trans,
+-				 src_dir, &src_hash, &src_dir_u->bi_size,
+-				 dst_dir, &dst_hash, &dst_dir_u->bi_size,
++				 src_dir, &src_hash,
++				 dst_dir, &dst_hash,
+ 				 src_name, &src_inum, &src_offset,
+ 				 dst_name, &dst_inum, &dst_offset,
+ 				 mode);
+diff --git a/fs/bcachefs/sb-errors_format.h b/fs/bcachefs/sb-errors_format.h
+index 4036a20c6adc26..9387f6092fe989 100644
+--- a/fs/bcachefs/sb-errors_format.h
++++ b/fs/bcachefs/sb-errors_format.h
+@@ -232,6 +232,7 @@ enum bch_fsck_flags {
+ 	x(inode_dir_multiple_links,				206,	FSCK_AUTOFIX)	\
+ 	x(inode_dir_missing_backpointer,			284,	FSCK_AUTOFIX)	\
+ 	x(inode_dir_unlinked_but_not_empty,			286,	FSCK_AUTOFIX)	\
++	x(inode_dir_has_nonzero_i_size,				319,	FSCK_AUTOFIX)	\
+ 	x(inode_multiple_links_but_nlink_0,			207,	FSCK_AUTOFIX)	\
+ 	x(inode_wrong_backpointer,				208,	FSCK_AUTOFIX)	\
+ 	x(inode_wrong_nlink,					209,	FSCK_AUTOFIX)	\
+@@ -243,6 +244,7 @@ enum bch_fsck_flags {
+ 	x(inode_parent_has_case_insensitive_not_set,		317,	FSCK_AUTOFIX)	\
+ 	x(vfs_inode_i_blocks_underflow,				311,	FSCK_AUTOFIX)	\
+ 	x(vfs_inode_i_blocks_not_zero_at_truncate,		313,	FSCK_AUTOFIX)	\
++	x(vfs_bad_inode_rm,					320,	0)		\
+ 	x(deleted_inode_but_clean,				211,	FSCK_AUTOFIX)	\
+ 	x(deleted_inode_missing,				212,	FSCK_AUTOFIX)	\
+ 	x(deleted_inode_is_dir,					213,	FSCK_AUTOFIX)	\
+@@ -328,7 +330,7 @@ enum bch_fsck_flags {
+ 	x(dirent_stray_data_after_cf_name,			305,	0)		\
+ 	x(rebalance_work_incorrectly_set,			309,	FSCK_AUTOFIX)	\
+ 	x(rebalance_work_incorrectly_unset,			310,	FSCK_AUTOFIX)	\
+-	x(MAX,							319,	0)
++	x(MAX,							321,	0)
+ 
+ enum bch_sb_error_id {
+ #define x(t, n, ...) BCH_FSCK_ERR_##t = n,
+diff --git a/fs/bcachefs/subvolume.c b/fs/bcachefs/subvolume.c
+index d0209f7658bb87..bc6009a7128443 100644
+--- a/fs/bcachefs/subvolume.c
++++ b/fs/bcachefs/subvolume.c
+@@ -6,6 +6,7 @@
+ #include "errcode.h"
+ #include "error.h"
+ #include "fs.h"
++#include "inode.h"
+ #include "recovery_passes.h"
+ #include "snapshot.h"
+ #include "subvolume.h"
+@@ -113,10 +114,20 @@ static int check_subvol(struct btree_trans *trans,
+ 			     "subvolume %llu points to missing subvolume root %llu:%u",
+ 			     k.k->p.offset, le64_to_cpu(subvol.v->inode),
+ 			     le32_to_cpu(subvol.v->snapshot))) {
+-			ret = bch2_subvolume_delete(trans, iter->pos.offset);
+-			bch_err_msg(c, ret, "deleting subvolume %llu", iter->pos.offset);
+-			ret = ret ?: -BCH_ERR_transaction_restart_nested;
+-			goto err;
++			/*
++			 * Recreate - any contents that are still disconnected
++			 * will then get reattached under lost+found
++			 */
++			bch2_inode_init_early(c, &inode);
++			bch2_inode_init_late(&inode, bch2_current_time(c),
++					     0, 0, S_IFDIR|0700, 0, NULL);
++			inode.bi_inum			= le64_to_cpu(subvol.v->inode);
++			inode.bi_snapshot		= le32_to_cpu(subvol.v->snapshot);
++			inode.bi_subvol			= k.k->p.offset;
++			inode.bi_parent_subvol		= le32_to_cpu(subvol.v->fs_path_parent);
++			ret = __bch2_fsck_write_inode(trans, &inode);
++			if (ret)
++				goto err;
+ 		}
+ 	} else {
+ 		goto err;
+diff --git a/include/acpi/actbl.h b/include/acpi/actbl.h
+index 2fc89704be1797..74cc61e3ab09b1 100644
+--- a/include/acpi/actbl.h
++++ b/include/acpi/actbl.h
+@@ -66,12 +66,12 @@
+  ******************************************************************************/
+ 
+ struct acpi_table_header {
+-	char signature[ACPI_NAMESEG_SIZE] __nonstring;	/* ASCII table signature */
++	char signature[ACPI_NAMESEG_SIZE] ACPI_NONSTRING;	/* ASCII table signature */
+ 	u32 length;		/* Length of table in bytes, including this header */
+ 	u8 revision;		/* ACPI Specification minor version number */
+ 	u8 checksum;		/* To make sum of entire table == 0 */
+-	char oem_id[ACPI_OEM_ID_SIZE];	/* ASCII OEM identification */
+-	char oem_table_id[ACPI_OEM_TABLE_ID_SIZE];	/* ASCII OEM table identification */
++	char oem_id[ACPI_OEM_ID_SIZE] ACPI_NONSTRING;	/* ASCII OEM identification */
++	char oem_table_id[ACPI_OEM_TABLE_ID_SIZE] ACPI_NONSTRING;	/* ASCII OEM table identification */
+ 	u32 oem_revision;	/* OEM revision number */
+ 	char asl_compiler_id[ACPI_NAMESEG_SIZE];	/* ASCII ASL compiler vendor ID */
+ 	u32 asl_compiler_revision;	/* ASL compiler version */
+diff --git a/include/acpi/actypes.h b/include/acpi/actypes.h
+index 80767e8bf3ad43..f7b3c4a4b7e7c3 100644
+--- a/include/acpi/actypes.h
++++ b/include/acpi/actypes.h
+@@ -1327,4 +1327,8 @@ typedef enum {
+ #define ACPI_FLEX_ARRAY(TYPE, NAME)     TYPE NAME[0]
+ #endif
+ 
++#ifndef ACPI_NONSTRING
++#define ACPI_NONSTRING		/* No terminating NUL character */
++#endif
++
+ #endif				/* __ACTYPES_H__ */
+diff --git a/include/acpi/platform/acgcc.h b/include/acpi/platform/acgcc.h
+index 04b4bf62051707..68e9379623e6dc 100644
+--- a/include/acpi/platform/acgcc.h
++++ b/include/acpi/platform/acgcc.h
+@@ -72,4 +72,12 @@
+                 TYPE NAME[];                    \
+         }
+ 
++/*
++ * Explicitly mark strings that lack a terminating NUL character so
++ * that ACPICA can be built with -Wunterminated-string-initialization.
++ */
++#if __has_attribute(__nonstring__)
++#define ACPI_NONSTRING __attribute__((__nonstring__))
++#endif
++
+ #endif				/* __ACGCC_H__ */
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 5b8db27fb6ef37..766cb3cd254e05 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -6824,7 +6824,7 @@ static ssize_t tracing_splice_read_pipe(struct file *filp,
+ 		ret = trace_seq_to_buffer(&iter->seq,
+ 					  page_address(spd.pages[i]),
+ 					  min((size_t)trace_seq_used(&iter->seq),
+-						  PAGE_SIZE));
++						  (size_t)PAGE_SIZE));
+ 		if (ret < 0) {
+ 			__free_page(spd.pages[i]);
+ 			break;
+diff --git a/tools/power/acpi/os_specific/service_layers/oslinuxtbl.c b/tools/power/acpi/os_specific/service_layers/oslinuxtbl.c
+index 9d70d8c945af4e..987a5d32f3b600 100644
+--- a/tools/power/acpi/os_specific/service_layers/oslinuxtbl.c
++++ b/tools/power/acpi/os_specific/service_layers/oslinuxtbl.c
+@@ -19,7 +19,7 @@ ACPI_MODULE_NAME("oslinuxtbl")
+ typedef struct osl_table_info {
+ 	struct osl_table_info *next;
+ 	u32 instance;
+-	char signature[ACPI_NAMESEG_SIZE];
++	char signature[ACPI_NAMESEG_SIZE] ACPI_NONSTRING;
+ 
+ } osl_table_info;
+ 
+diff --git a/tools/power/acpi/tools/acpidump/apfiles.c b/tools/power/acpi/tools/acpidump/apfiles.c
+index 13817f9112c06a..9fc927fcc22a7f 100644
+--- a/tools/power/acpi/tools/acpidump/apfiles.c
++++ b/tools/power/acpi/tools/acpidump/apfiles.c
+@@ -103,7 +103,7 @@ int ap_open_output_file(char *pathname)
+ 
+ int ap_write_to_binary_file(struct acpi_table_header *table, u32 instance)
+ {
+-	char filename[ACPI_NAMESEG_SIZE + 16];
++	char filename[ACPI_NAMESEG_SIZE + 16] ACPI_NONSTRING;
+ 	char instance_str[16];
+ 	ACPI_FILE file;
+ 	acpi_size actual;

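A side note on the bcachefs hunks above: may_delete_deleted_inode() is reworked so each check first computes an error code, then reports through fsck_err_on() only when the inode came from the deleted_inodes scan (from_deleted_inodes), and otherwise hands the error straight back to the caller. Below is a minimal standalone sketch of that compute-then-report pattern, with hypothetical names and error values; it is not the kernel's actual fsck API.

#include <stdbool.h>
#include <stdio.h>

#define ERR_INODE_NOT_UNLINKED	100	/* hypothetical errcode */
#define INODE_UNLINKED		0x1	/* hypothetical flag bit */

/* Stand-in for fsck_err_on(): log and return "please repair". */
static bool fsck_err_on(bool cond, const char *msg)
{
	if (cond)
		fprintf(stderr, "fsck: %s\n", msg);
	return cond;
}

/* Returns 1 to delete, 0 to keep, negative error otherwise. */
static int may_delete(unsigned bi_flags, bool from_deleted_inodes)
{
	int ret = (bi_flags & INODE_UNLINKED) ? 0 : -ERR_INODE_NOT_UNLINKED;

	/* Repair (delete) only when walking the deleted_inodes btree... */
	if (fsck_err_on(from_deleted_inodes && ret, "inode not unlinked"))
		return 1;
	/* ...other callers just get the error code back. */
	return ret;
}

int main(void)
{
	printf("%d\n", may_delete(0, true));	/* scan path: logs, returns 1 */
	printf("%d\n", may_delete(0, false));	/* other callers: -100 */
	return 0;
}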

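Similarly, the ACPI_NONSTRING hunks wrap GCC's __attribute__((__nonstring__)), which marks char arrays that deliberately hold unterminated byte sequences so warnings such as -Wunterminated-string-initialization (new in GCC 15) stay quiet. A small self-contained sketch of the attribute outside ACPICA follows; the struct and field names here are made up.

/* Build with: gcc -Wall -Wunterminated-string-initialization demo.c */
#include <stdio.h>

#ifndef __has_attribute
#define __has_attribute(x) 0
#endif

#if __has_attribute(__nonstring__)
#define NONSTRING __attribute__((__nonstring__))
#else
#define NONSTRING
#endif

struct header {
	char signature[4] NONSTRING;	/* exactly 4 bytes, never NUL-terminated */
};

int main(void)
{
	/*
	 * "APIC" needs 5 bytes with its terminator; a char[4] keeps only the
	 * 4 letters. GCC warns about the dropped NUL unless the field is
	 * marked nonstring.
	 */
	struct header h = { .signature = "APIC" };

	printf("%.4s\n", h.signature);	/* always print with an explicit length */
	return 0;
}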

* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-06-10 12:24 Mike Pagano
From: Mike Pagano @ 2025-06-10 12:24 UTC
  To: gentoo-commits

commit:     de43d4227d9df80eeff6dab94e4a378bd746ebd0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 10 12:24:29 2025 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun 10 12:24:29 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=de43d422

Remove redundant patch

Removed:
2700_amd-revert-vmin-vmax-for-freesync.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                  |  4 ---
 2700_amd-revert-vmin-vmax-for-freesync.patch | 48 ----------------------------
 2 files changed, 52 deletions(-)

diff --git a/0000_README b/0000_README
index 9ef52bb6..1f59639d 100644
--- a/0000_README
+++ b/0000_README
@@ -66,10 +66,6 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
-Patch:  2700_amd-revert-vmin-vmax-for-freesync.patch
-From:   https://github.com/archlinux/linux/commit/30dd9945fd79d33a049da4e52984c9bc07450de2.patch
-Desc:   Revert "drm/amd/display: more liberal vmin/vmax update for freesync"
-
 Patch:  2901_permit-menuconfig-sorting.patch
 From:   https://lore.kernel.org/
 Desc:   menuconfig: Allow sorting the entries alphabetically

diff --git a/2700_amd-revert-vmin-vmax-for-freesync.patch b/2700_amd-revert-vmin-vmax-for-freesync.patch
deleted file mode 100644
index b0b80885..00000000
--- a/2700_amd-revert-vmin-vmax-for-freesync.patch
+++ /dev/null
@@ -1,48 +0,0 @@
-From 30dd9945fd79d33a049da4e52984c9bc07450de2 Mon Sep 17 00:00:00 2001
-From: Aurabindo Pillai <aurabindo.pillai@amd.com>
-Date: Wed, 21 May 2025 16:10:57 -0400
-Subject: [PATCH] Revert "drm/amd/display: more liberal vmin/vmax update for
- freesync"
-
-This reverts commit 219898d29c438d8ec34a5560fac4ea8f6b8d4f20 since it
-causes regressions on certain configs. Revert until the issue can be
-isolated and debugged.
-
-Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/4238
-Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
-Cherry-picked-for: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/issues/139
----
- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c    | 16 +++++-----------
- 1 file changed, 5 insertions(+), 11 deletions(-)
-
-diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
-index 2dbd71fbae28a5..e4f0517f0f2b23 100644
---- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
-+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
-@@ -668,21 +668,15 @@ static void dm_crtc_high_irq(void *interrupt_params)
- 	spin_lock_irqsave(&adev_to_drm(adev)->event_lock, flags);
- 
- 	if (acrtc->dm_irq_params.stream &&
--		acrtc->dm_irq_params.vrr_params.supported) {
--		bool replay_en = acrtc->dm_irq_params.stream->link->replay_settings.replay_feature_enabled;
--		bool psr_en = acrtc->dm_irq_params.stream->link->psr_settings.psr_feature_enabled;
--		bool fs_active_var_en = acrtc->dm_irq_params.freesync_config.state == VRR_STATE_ACTIVE_VARIABLE;
--
-+	    acrtc->dm_irq_params.vrr_params.supported &&
-+	    acrtc->dm_irq_params.freesync_config.state ==
-+		    VRR_STATE_ACTIVE_VARIABLE) {
- 		mod_freesync_handle_v_update(adev->dm.freesync_module,
- 					     acrtc->dm_irq_params.stream,
- 					     &acrtc->dm_irq_params.vrr_params);
- 
--		/* update vmin_vmax only if freesync is enabled, or only if PSR and REPLAY are disabled */
--		if (fs_active_var_en || (!fs_active_var_en && !replay_en && !psr_en)) {
--			dc_stream_adjust_vmin_vmax(adev->dm.dc,
--					acrtc->dm_irq_params.stream,
--					&acrtc->dm_irq_params.vrr_params.adjust);
--		}
-+		dc_stream_adjust_vmin_vmax(adev->dm.dc, acrtc->dm_irq_params.stream,
-+					   &acrtc->dm_irq_params.vrr_params.adjust);
- 	}
- 
- 	/*


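One detail worth spelling out about the guard deleted above: fs_active_var_en || (!fs_active_var_en && !replay_en && !psr_en) has the shape A || (!A && B), which reduces to A || B. The sketch below verifies that reduction exhaustively; the function is illustrative only, not the driver's code.

#include <assert.h>
#include <stdbool.h>

/* Reduced form of the deleted guard: A || (!A && B) == A || B. */
static bool should_adjust_vmin_vmax(bool fs_active_var_en,
				    bool replay_en, bool psr_en)
{
	return fs_active_var_en || (!replay_en && !psr_en);
}

int main(void)
{
	/* Check the reduction against the original guard for all inputs. */
	for (int a = 0; a < 2; a++)
		for (int r = 0; r < 2; r++)
			for (int p = 0; p < 2; p++)
				assert((a || (!a && !r && !p)) ==
				       should_adjust_vmin_vmax(a, r, p));
	return 0;
}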

* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-06-19 14:21 Mike Pagano
From: Mike Pagano @ 2025-06-19 14:21 UTC
  To: gentoo-commits

commit:     17ff67232f48bbe882b1a95d4b9447752f5ac7d0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jun 19 14:21:39 2025 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jun 19 14:21:39 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=17ff6723

Linux patch 6.15.3

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |     4 +
 1002_linux-6.15.3.patch | 37139 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 37143 insertions(+)

diff --git a/0000_README b/0000_README
index 1f59639d..e54206b2 100644
--- a/0000_README
+++ b/0000_README
@@ -50,6 +50,10 @@ Patch:  1001_linux-6.15.2.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.15.2
 
+Patch:  1002_linux-6.15.3.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.15.3
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1002_linux-6.15.3.patch b/1002_linux-6.15.3.patch
new file mode 100644
index 00000000..2b161b0f
--- /dev/null
+++ b/1002_linux-6.15.3.patch
@@ -0,0 +1,37139 @@
+diff --git a/Documentation/devicetree/bindings/regulator/mediatek,mt6357-regulator.yaml b/Documentation/devicetree/bindings/regulator/mediatek,mt6357-regulator.yaml
+index 6327bb2f6ee080..698266c09e2535 100644
+--- a/Documentation/devicetree/bindings/regulator/mediatek,mt6357-regulator.yaml
++++ b/Documentation/devicetree/bindings/regulator/mediatek,mt6357-regulator.yaml
+@@ -33,7 +33,7 @@ patternProperties:
+ 
+   "^ldo-v(camio18|aud28|aux18|io18|io28|rf12|rf18|cn18|cn28|fe28)$":
+     type: object
+-    $ref: fixed-regulator.yaml#
++    $ref: regulator.yaml#
+     unevaluatedProperties: false
+     description:
+       Properties for single fixed LDO regulator.
+@@ -112,7 +112,6 @@ examples:
+           regulator-enable-ramp-delay = <220>;
+         };
+         mt6357_vfe28_reg: ldo-vfe28 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vfe28";
+           regulator-min-microvolt = <2800000>;
+           regulator-max-microvolt = <2800000>;
+@@ -125,14 +124,12 @@ examples:
+           regulator-enable-ramp-delay = <110>;
+         };
+         mt6357_vrf18_reg: ldo-vrf18 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vrf18";
+           regulator-min-microvolt = <1800000>;
+           regulator-max-microvolt = <1800000>;
+           regulator-enable-ramp-delay = <110>;
+         };
+         mt6357_vrf12_reg: ldo-vrf12 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vrf12";
+           regulator-min-microvolt = <1200000>;
+           regulator-max-microvolt = <1200000>;
+@@ -157,14 +154,12 @@ examples:
+           regulator-enable-ramp-delay = <264>;
+         };
+         mt6357_vcn28_reg: ldo-vcn28 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vcn28";
+           regulator-min-microvolt = <2800000>;
+           regulator-max-microvolt = <2800000>;
+           regulator-enable-ramp-delay = <264>;
+         };
+         mt6357_vcn18_reg: ldo-vcn18 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vcn18";
+           regulator-min-microvolt = <1800000>;
+           regulator-max-microvolt = <1800000>;
+@@ -183,7 +178,6 @@ examples:
+           regulator-enable-ramp-delay = <264>;
+         };
+         mt6357_vcamio_reg: ldo-vcamio18 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vcamio";
+           regulator-min-microvolt = <1800000>;
+           regulator-max-microvolt = <1800000>;
+@@ -212,28 +206,24 @@ examples:
+           regulator-always-on;
+         };
+         mt6357_vaux18_reg: ldo-vaux18 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vaux18";
+           regulator-min-microvolt = <1800000>;
+           regulator-max-microvolt = <1800000>;
+           regulator-enable-ramp-delay = <264>;
+         };
+         mt6357_vaud28_reg: ldo-vaud28 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vaud28";
+           regulator-min-microvolt = <2800000>;
+           regulator-max-microvolt = <2800000>;
+           regulator-enable-ramp-delay = <264>;
+         };
+         mt6357_vio28_reg: ldo-vio28 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vio28";
+           regulator-min-microvolt = <2800000>;
+           regulator-max-microvolt = <2800000>;
+           regulator-enable-ramp-delay = <264>;
+         };
+         mt6357_vio18_reg: ldo-vio18 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vio18";
+           regulator-min-microvolt = <1800000>;
+           regulator-max-microvolt = <1800000>;
+diff --git a/Documentation/devicetree/bindings/soc/fsl/fsl,qman-fqd.yaml b/Documentation/devicetree/bindings/soc/fsl/fsl,qman-fqd.yaml
+index de0b4ae740ff23..a975bce599750e 100644
+--- a/Documentation/devicetree/bindings/soc/fsl/fsl,qman-fqd.yaml
++++ b/Documentation/devicetree/bindings/soc/fsl/fsl,qman-fqd.yaml
+@@ -50,7 +50,7 @@ required:
+   - compatible
+ 
+ allOf:
+-  - $ref: reserved-memory.yaml
++  - $ref: /schemas/reserved-memory/reserved-memory.yaml
+ 
+ unevaluatedProperties: false
+ 
+@@ -61,7 +61,7 @@ examples:
+         #size-cells = <2>;
+ 
+         qman-fqd {
+-            compatible = "shared-dma-pool";
++            compatible = "fsl,qman-fqd";
+             size = <0 0x400000>;
+             alignment = <0 0x400000>;
+             no-map;
+diff --git a/Documentation/devicetree/bindings/vendor-prefixes.yaml b/Documentation/devicetree/bindings/vendor-prefixes.yaml
+index 86f6a19b28ae21..190ab40cf23afc 100644
+--- a/Documentation/devicetree/bindings/vendor-prefixes.yaml
++++ b/Documentation/devicetree/bindings/vendor-prefixes.yaml
+@@ -864,6 +864,8 @@ patternProperties:
+     description: Linux-specific binding
+   "^linx,.*":
+     description: Linx Technologies
++  "^liontron,.*":
++    description: Shenzhen Liontron Technology Co., Ltd
+   "^liteon,.*":
+     description: LITE-ON Technology Corp.
+   "^litex,.*":
+diff --git a/Documentation/gpu/xe/index.rst b/Documentation/gpu/xe/index.rst
+index 92cfb25e64d327..b53a0cc7f66a36 100644
+--- a/Documentation/gpu/xe/index.rst
++++ b/Documentation/gpu/xe/index.rst
+@@ -16,6 +16,7 @@ DG2, etc is provided to prototype the driver.
+    xe_migrate
+    xe_cs
+    xe_pm
++   xe_gt_freq
+    xe_pcode
+    xe_gt_mcr
+    xe_wa
+diff --git a/Documentation/gpu/xe/xe_gt_freq.rst b/Documentation/gpu/xe/xe_gt_freq.rst
+new file mode 100644
+index 00000000000000..c0811200e32755
+--- /dev/null
++++ b/Documentation/gpu/xe/xe_gt_freq.rst
+@@ -0,0 +1,14 @@
++.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)
++
++==========================
++Xe GT Frequency Management
++==========================
++
++.. kernel-doc:: drivers/gpu/drm/xe/xe_gt_freq.c
++   :doc: Xe GT Frequency Management
++
++Internal API
++============
++
++.. kernel-doc:: drivers/gpu/drm/xe/xe_gt_freq.c
++   :internal:
+diff --git a/Documentation/misc-devices/lis3lv02d.rst b/Documentation/misc-devices/lis3lv02d.rst
+index 959bd2b822cfa9..6b3b7405ebdf6e 100644
+--- a/Documentation/misc-devices/lis3lv02d.rst
++++ b/Documentation/misc-devices/lis3lv02d.rst
+@@ -22,10 +22,10 @@ sporting the feature officially called "HP Mobile Data Protection System 3D" or
+ models (full list can be found in drivers/platform/x86/hp_accel.c) will have
+ their axis automatically oriented on standard way (eg: you can directly play
+ neverball). The accelerometer data is readable via
+-/sys/devices/platform/lis3lv02d. Reported values are scaled
++/sys/devices/faux/lis3lv02d. Reported values are scaled
+ to mg values (1/1000th of earth gravity).
+ 
+-Sysfs attributes under /sys/devices/platform/lis3lv02d/:
++Sysfs attributes under /sys/devices/faux/lis3lv02d/:
+ 
+ position
+       - 3D position that the accelerometer reports. Format: "(x,y,z)"
+@@ -85,7 +85,7 @@ the accelerometer are converted into a "standard" organisation of the axes
+ If your laptop model is not recognized (cf "dmesg"), you can send an
+ email to the maintainer to add it to the database.  When reporting a new
+ laptop, please include the output of "dmidecode" plus the value of
+-/sys/devices/platform/lis3lv02d/position in these four cases.
++/sys/devices/faux/lis3lv02d/position in these four cases.
+ 
+ Q&A
+ ---
+diff --git a/Documentation/netlink/specs/rt_link.yaml b/Documentation/netlink/specs/rt_link.yaml
+index 6b9d5ee87d93a8..2ac0e9fda1582d 100644
+--- a/Documentation/netlink/specs/rt_link.yaml
++++ b/Documentation/netlink/specs/rt_link.yaml
+@@ -1778,15 +1778,19 @@ attribute-sets:
+       -
+         name: iflags
+         type: u16
++        byte-order: big-endian
+       -
+         name: oflags
+         type: u16
++        byte-order: big-endian
+       -
+         name: ikey
+         type: u32
++        byte-order: big-endian
+       -
+         name: okey
+         type: u32
++        byte-order: big-endian
+       -
+         name: local
+         type: binary
+@@ -1806,10 +1810,11 @@ attribute-sets:
+         type: u8
+       -
+         name: encap-limit
+-        type: u32
++        type: u8
+       -
+         name: flowinfo
+         type: u32
++        byte-order: big-endian
+       -
+         name: flags
+         type: u32
+@@ -1822,9 +1827,11 @@ attribute-sets:
+       -
+         name: encap-sport
+         type: u16
++        byte-order: big-endian
+       -
+         name: encap-dport
+         type: u16
++        byte-order: big-endian
+       -
+         name: collect-metadata
+         type: flag
+@@ -1846,6 +1853,54 @@ attribute-sets:
+       -
+         name: erspan-hwid
+         type: u16
++  -
++    name: linkinfo-gre6-attrs
++    subset-of: linkinfo-gre-attrs
++    attributes:
++      -
++        name: link
++      -
++        name: iflags
++      -
++        name: oflags
++      -
++        name: ikey
++      -
++        name: okey
++      -
++        name: local
++        display-hint: ipv6
++      -
++        name: remote
++        display-hint: ipv6
++      -
++        name: ttl
++      -
++        name: encap-limit
++      -
++        name: flowinfo
++      -
++        name: flags
++      -
++        name: encap-type
++      -
++        name: encap-flags
++      -
++        name: encap-sport
++      -
++        name: encap-dport
++      -
++        name: collect-metadata
++      -
++        name: fwmark
++      -
++        name: erspan-index
++      -
++        name: erspan-ver
++      -
++        name: erspan-dir
++      -
++        name: erspan-hwid
+   -
+     name: linkinfo-vti-attrs
+     name-prefix: ifla-vti-
+@@ -1856,9 +1911,11 @@ attribute-sets:
+       -
+         name: ikey
+         type: u32
++        byte-order: big-endian
+       -
+         name: okey
+         type: u32
++        byte-order: big-endian
+       -
+         name: local
+         type: binary
+@@ -1908,6 +1965,7 @@ attribute-sets:
+       -
+         name: port
+         type: u16
++        byte-order: big-endian
+       -
+         name: collect-metadata
+         type: flag
+@@ -1927,6 +1985,7 @@ attribute-sets:
+       -
+         name: label
+         type: u32
++        byte-order: big-endian
+       -
+         name: ttl-inherit
+         type: u8
+@@ -1967,9 +2026,11 @@ attribute-sets:
+       -
+         name: flowinfo
+         type: u32
++        byte-order: big-endian
+       -
+         name: flags
+         type: u16
++        byte-order: big-endian
+       -
+         name: proto
+         type: u8
+@@ -1999,9 +2060,11 @@ attribute-sets:
+       -
+         name: encap-sport
+         type: u16
++        byte-order: big-endian
+       -
+         name: encap-dport
+         type: u16
++        byte-order: big-endian
+       -
+         name: collect-metadata
+         type: flag
+@@ -2299,6 +2362,9 @@ sub-messages:
+       -
+         value: gretap
+         attribute-set: linkinfo-gre-attrs
++      -
++        value: ip6gre
++        attribute-set: linkinfo-gre6-attrs
+       -
+         value: geneve
+         attribute-set: linkinfo-geneve-attrs
+diff --git a/Documentation/networking/xfrm_device.rst b/Documentation/networking/xfrm_device.rst
+index 7f24c09f269431..122204da0fff69 100644
+--- a/Documentation/networking/xfrm_device.rst
++++ b/Documentation/networking/xfrm_device.rst
+@@ -65,9 +65,13 @@ Callbacks to implement
+   /* from include/linux/netdevice.h */
+   struct xfrmdev_ops {
+         /* Crypto and Packet offload callbacks */
+-	int	(*xdo_dev_state_add) (struct xfrm_state *x, struct netlink_ext_ack *extack);
+-	void	(*xdo_dev_state_delete) (struct xfrm_state *x);
+-	void	(*xdo_dev_state_free) (struct xfrm_state *x);
++	int	(*xdo_dev_state_add)(struct net_device *dev,
++                                     struct xfrm_state *x,
++                                     struct netlink_ext_ack *extack);
++	void	(*xdo_dev_state_delete)(struct net_device *dev,
++                                        struct xfrm_state *x);
++	void	(*xdo_dev_state_free)(struct net_device *dev,
++                                      struct xfrm_state *x);
+ 	bool	(*xdo_dev_offload_ok) (struct sk_buff *skb,
+ 				       struct xfrm_state *x);
+ 	void    (*xdo_dev_state_advance_esn) (struct xfrm_state *x);
+diff --git a/Makefile b/Makefile
+index 7138d1fabfa4ae..01ddb4eb3659f4 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 15
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm/boot/dts/microchip/at91sam9263ek.dts b/arch/arm/boot/dts/microchip/at91sam9263ek.dts
+index 471ea25296aa14..93c5268a0845d0 100644
+--- a/arch/arm/boot/dts/microchip/at91sam9263ek.dts
++++ b/arch/arm/boot/dts/microchip/at91sam9263ek.dts
+@@ -152,7 +152,7 @@ nand_controller: nand-controller {
+ 				nand@3 {
+ 					reg = <0x3 0x0 0x800000>;
+ 					rb-gpios = <&pioA 22 GPIO_ACTIVE_HIGH>;
+-					cs-gpios = <&pioA 15 GPIO_ACTIVE_HIGH>;
++					cs-gpios = <&pioD 15 GPIO_ACTIVE_HIGH>;
+ 					nand-bus-width = <8>;
+ 					nand-ecc-mode = "soft";
+ 					nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/microchip/tny_a9263.dts b/arch/arm/boot/dts/microchip/tny_a9263.dts
+index 3dd48b3e06da57..fd8244b56e0593 100644
+--- a/arch/arm/boot/dts/microchip/tny_a9263.dts
++++ b/arch/arm/boot/dts/microchip/tny_a9263.dts
+@@ -64,7 +64,7 @@ nand_controller: nand-controller {
+ 				nand@3 {
+ 					reg = <0x3 0x0 0x800000>;
+ 					rb-gpios = <&pioA 22 GPIO_ACTIVE_HIGH>;
+-					cs-gpios = <&pioA 15 GPIO_ACTIVE_HIGH>;
++					cs-gpios = <&pioD 15 GPIO_ACTIVE_HIGH>;
+ 					nand-bus-width = <8>;
+ 					nand-ecc-mode = "soft";
+ 					nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/microchip/usb_a9263.dts b/arch/arm/boot/dts/microchip/usb_a9263.dts
+index 60d7936dc56274..8e1a3fb61087ca 100644
+--- a/arch/arm/boot/dts/microchip/usb_a9263.dts
++++ b/arch/arm/boot/dts/microchip/usb_a9263.dts
+@@ -58,7 +58,7 @@ usb1: gadget@fff78000 {
+ 			};
+ 
+ 			spi0: spi@fffa4000 {
+-				cs-gpios = <&pioB 15 GPIO_ACTIVE_HIGH>;
++				cs-gpios = <&pioA 5 GPIO_ACTIVE_LOW>;
+ 				status = "okay";
+ 				flash@0 {
+ 					compatible = "atmel,at45", "atmel,dataflash";
+@@ -84,7 +84,7 @@ nand_controller: nand-controller {
+ 				nand@3 {
+ 					reg = <0x3 0x0 0x800000>;
+ 					rb-gpios = <&pioA 22 GPIO_ACTIVE_HIGH>;
+-					cs-gpios = <&pioA 15 GPIO_ACTIVE_HIGH>;
++					cs-gpios = <&pioD 15 GPIO_ACTIVE_HIGH>;
+ 					nand-bus-width = <8>;
+ 					nand-ecc-mode = "soft";
+ 					nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/qcom/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom/qcom-apq8064.dtsi
+index 5f1a6b4b764492..1dad4e4493926f 100644
+--- a/arch/arm/boot/dts/qcom/qcom-apq8064.dtsi
++++ b/arch/arm/boot/dts/qcom/qcom-apq8064.dtsi
+@@ -213,12 +213,6 @@ sleep_clk: sleep_clk {
+ 		};
+ 	};
+ 
+-	sfpb_mutex: hwmutex {
+-		compatible = "qcom,sfpb-mutex";
+-		syscon = <&sfpb_wrapper_mutex 0x604 0x4>;
+-		#hwlock-cells = <1>;
+-	};
+-
+ 	smem {
+ 		compatible = "qcom,smem";
+ 		memory-region = <&smem_region>;
+@@ -284,6 +278,40 @@ scm {
+ 		};
+ 	};
+ 
++	replicator {
++		compatible = "arm,coresight-static-replicator";
++
++		clocks = <&rpmcc RPM_QDSS_CLK>;
++		clock-names = "apb_pclk";
++
++		in-ports {
++			port {
++				replicator_in: endpoint {
++					remote-endpoint = <&funnel_out>;
++				};
++			};
++		};
++
++		out-ports {
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			port@0 {
++				reg = <0>;
++				replicator_out0: endpoint {
++					remote-endpoint = <&etb_in>;
++				};
++			};
++
++			port@1 {
++				reg = <1>;
++				replicator_out1: endpoint {
++					remote-endpoint = <&tpiu_in>;
++				};
++			};
++		};
++	};
++
+ 	soc: soc {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+@@ -305,9 +333,10 @@ tlmm_pinmux: pinctrl@800000 {
+ 			pinctrl-0 = <&ps_hold_default_state>;
+ 		};
+ 
+-		sfpb_wrapper_mutex: syscon@1200000 {
+-			compatible = "syscon";
+-			reg = <0x01200000 0x8000>;
++		sfpb_mutex: hwmutex@1200600 {
++			compatible = "qcom,sfpb-mutex";
++			reg = <0x01200600 0x100>;
++			#hwlock-cells = <1>;
+ 		};
+ 
+ 		intc: interrupt-controller@2000000 {
+@@ -326,6 +355,8 @@ timer@200a000 {
+ 				     <GIC_PPI 3 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_EDGE_RISING)>;
+ 			reg = <0x0200a000 0x100>;
+ 			clock-frequency = <27000000>;
++			clocks = <&sleep_clk>;
++			clock-names = "sleep";
+ 			cpu-offset = <0x80000>;
+ 		};
+ 
+@@ -1532,39 +1563,6 @@ tpiu_in: endpoint {
+ 			};
+ 		};
+ 
+-		replicator {
+-			compatible = "arm,coresight-static-replicator";
+-
+-			clocks = <&rpmcc RPM_QDSS_CLK>;
+-			clock-names = "apb_pclk";
+-
+-			out-ports {
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+-
+-				port@0 {
+-					reg = <0>;
+-					replicator_out0: endpoint {
+-						remote-endpoint = <&etb_in>;
+-					};
+-				};
+-				port@1 {
+-					reg = <1>;
+-					replicator_out1: endpoint {
+-						remote-endpoint = <&tpiu_in>;
+-					};
+-				};
+-			};
+-
+-			in-ports {
+-				port {
+-					replicator_in: endpoint {
+-						remote-endpoint = <&funnel_out>;
+-					};
+-				};
+-			};
+-		};
+-
+ 		funnel@1a04000 {
+ 			compatible = "arm,coresight-dynamic-funnel", "arm,primecell";
+ 			reg = <0x1a04000 0x1000>;
+diff --git a/arch/arm/mach-aspeed/Kconfig b/arch/arm/mach-aspeed/Kconfig
+index 080019aa6fcd89..fcf287edd0e5e6 100644
+--- a/arch/arm/mach-aspeed/Kconfig
++++ b/arch/arm/mach-aspeed/Kconfig
+@@ -2,7 +2,6 @@
+ menuconfig ARCH_ASPEED
+ 	bool "Aspeed BMC architectures"
+ 	depends on (CPU_LITTLE_ENDIAN && ARCH_MULTI_V5) || ARCH_MULTI_V6 || ARCH_MULTI_V7
+-	select SRAM
+ 	select WATCHDOG
+ 	select ASPEED_WATCHDOG
+ 	select MFD_SYSCON
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index a182295e6f08bf..6527d0d5656a13 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -333,9 +333,9 @@ config ARCH_MMAP_RND_BITS_MAX
+ 	default 24 if ARM64_VA_BITS=39
+ 	default 27 if ARM64_VA_BITS=42
+ 	default 30 if ARM64_VA_BITS=47
+-	default 29 if ARM64_VA_BITS=48 && ARM64_64K_PAGES
+-	default 31 if ARM64_VA_BITS=48 && ARM64_16K_PAGES
+-	default 33 if ARM64_VA_BITS=48
++	default 29 if (ARM64_VA_BITS=48 || ARM64_VA_BITS=52) && ARM64_64K_PAGES
++	default 31 if (ARM64_VA_BITS=48 || ARM64_VA_BITS=52) && ARM64_16K_PAGES
++	default 33 if (ARM64_VA_BITS=48 || ARM64_VA_BITS=52)
+ 	default 14 if ARM64_64K_PAGES
+ 	default 16 if ARM64_16K_PAGES
+ 	default 18
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a100.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a100.dtsi
+index f9f6fea03b7446..bd366389b2389d 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a100.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a100.dtsi
+@@ -252,6 +252,7 @@ mmc0: mmc@4020000 {
+ 			interrupts = <GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&mmc0_pins>;
++			max-frequency = <150000000>;
+ 			status = "disabled";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+@@ -267,6 +268,7 @@ mmc1: mmc@4021000 {
+ 			interrupts = <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&mmc1_pins>;
++			max-frequency = <150000000>;
+ 			status = "disabled";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+@@ -282,6 +284,7 @@ mmc2: mmc@4022000 {
+ 			interrupts = <GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&mmc2_pins>;
++			max-frequency = <150000000>;
+ 			status = "disabled";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-beacon-kit.dts b/arch/arm64/boot/dts/freescale/imx8mm-beacon-kit.dts
+index 97ff1ddd631888..734a75198f06e0 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-beacon-kit.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mm-beacon-kit.dts
+@@ -124,6 +124,7 @@ &sai5 {
+ 	assigned-clock-parents = <&clk IMX8MM_AUDIO_PLL1_OUT>;
+ 	assigned-clock-rates = <24576000>;
+ 	#sound-dai-cells = <0>;
++	fsl,sai-mclk-direction-output;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
+index 62ed64663f4952..9ba0cb89fa24e0 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
+@@ -233,6 +233,7 @@ eeprom@50 {
+ 	rtc: rtc@51 {
+ 		compatible = "nxp,pcf85263";
+ 		reg = <0x51>;
++		quartz-load-femtofarads = <12500>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-beacon-kit.dts b/arch/arm64/boot/dts/freescale/imx8mn-beacon-kit.dts
+index 1df5ceb1138793..37fc5ed98d7f61 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-beacon-kit.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mn-beacon-kit.dts
+@@ -124,6 +124,7 @@ &sai5 {
+ 	assigned-clock-parents = <&clk IMX8MN_AUDIO_PLL1_OUT>;
+ 	assigned-clock-rates = <24576000>;
+ 	#sound-dai-cells = <0>;
++	fsl,sai-mclk-direction-output;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
+index 2a64115eebf1c6..bb11590473a4c7 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
+@@ -242,6 +242,7 @@ eeprom@50 {
+ 	rtc: rtc@51 {
+ 		compatible = "nxp,pcf85263";
+ 		reg = <0x51>;
++		quartz-load-femtofarads = <12500>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-beacon-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-beacon-som.dtsi
+index 15f7ab58db36cc..88561df70d03ac 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-beacon-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp-beacon-som.dtsi
+@@ -257,6 +257,7 @@ eeprom@50 {
+ 	rtc: rtc@51 {
+ 		compatible = "nxp,pcf85263";
+ 		reg = <0x51>;
++		quartz-load-femtofarads = <12500>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/mediatek/mt6357.dtsi b/arch/arm64/boot/dts/mediatek/mt6357.dtsi
+index 5fafa842d312f3..dca4e5c3d8e210 100644
+--- a/arch/arm64/boot/dts/mediatek/mt6357.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt6357.dtsi
+@@ -60,7 +60,6 @@ mt6357_vpa_reg: buck-vpa {
+ 			};
+ 
+ 			mt6357_vfe28_reg: ldo-vfe28 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vfe28";
+ 				regulator-min-microvolt = <2800000>;
+ 				regulator-max-microvolt = <2800000>;
+@@ -75,7 +74,6 @@ mt6357_vxo22_reg: ldo-vxo22 {
+ 			};
+ 
+ 			mt6357_vrf18_reg: ldo-vrf18 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vrf18";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
+@@ -83,7 +81,6 @@ mt6357_vrf18_reg: ldo-vrf18 {
+ 			};
+ 
+ 			mt6357_vrf12_reg: ldo-vrf12 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vrf12";
+ 				regulator-min-microvolt = <1200000>;
+ 				regulator-max-microvolt = <1200000>;
+@@ -112,7 +109,6 @@ mt6357_vcn33_wifi_reg: ldo-vcn33-wifi {
+ 			};
+ 
+ 			mt6357_vcn28_reg: ldo-vcn28 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vcn28";
+ 				regulator-min-microvolt = <2800000>;
+ 				regulator-max-microvolt = <2800000>;
+@@ -120,7 +116,6 @@ mt6357_vcn28_reg: ldo-vcn28 {
+ 			};
+ 
+ 			mt6357_vcn18_reg: ldo-vcn18 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vcn18";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
+@@ -142,7 +137,6 @@ mt6357_vcamd_reg: ldo-vcamd {
+ 			};
+ 
+ 			mt6357_vcamio_reg: ldo-vcamio18 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vcamio";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
+@@ -175,7 +169,6 @@ mt6357_vsram_proc_reg: ldo-vsram-proc {
+ 			};
+ 
+ 			mt6357_vaux18_reg: ldo-vaux18 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vaux18";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
+@@ -183,7 +176,6 @@ mt6357_vaux18_reg: ldo-vaux18 {
+ 			};
+ 
+ 			mt6357_vaud28_reg: ldo-vaud28 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vaud28";
+ 				regulator-min-microvolt = <2800000>;
+ 				regulator-max-microvolt = <2800000>;
+@@ -191,7 +183,6 @@ mt6357_vaud28_reg: ldo-vaud28 {
+ 			};
+ 
+ 			mt6357_vio28_reg: ldo-vio28 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vio28";
+ 				regulator-min-microvolt = <2800000>;
+ 				regulator-max-microvolt = <2800000>;
+@@ -199,7 +190,6 @@ mt6357_vio28_reg: ldo-vio28 {
+ 			};
+ 
+ 			mt6357_vio18_reg: ldo-vio18 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vio18";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt6359.dtsi b/arch/arm64/boot/dts/mediatek/mt6359.dtsi
+index 7b10f9c59819a9..467d8a4c2aa7f1 100644
+--- a/arch/arm64/boot/dts/mediatek/mt6359.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt6359.dtsi
+@@ -20,6 +20,8 @@ mt6359codec: audio-codec {
+ 		};
+ 
+ 		regulators {
++			compatible = "mediatek,mt6359-regulator";
++
+ 			mt6359_vs1_buck_reg: buck_vs1 {
+ 				regulator-name = "vs1";
+ 				regulator-min-microvolt = <800000>;
+@@ -298,7 +300,7 @@ mt6359_vsram_others_sshub_ldo: ldo_vsram_others_sshub {
+ 			};
+ 		};
+ 
+-		mt6359rtc: mt6359rtc {
++		mt6359rtc: rtc {
+ 			compatible = "mediatek,mt6358-rtc";
+ 		};
+ 	};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+index e1495f1900a7b4..f9ca6b3720e915 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+@@ -259,14 +259,10 @@ panel_in: endpoint {
+ 			};
+ 		};
+ 	};
++};
+ 
+-	ports {
+-		port {
+-			dsi_out: endpoint {
+-				remote-endpoint = <&panel_in>;
+-			};
+-		};
+-	};
++&dsi_out {
++	remote-endpoint = <&panel_in>;
+ };
+ 
+ &gic {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+index 0aa34e5bbaaa87..3c1fe80e64b9c5 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+@@ -1836,6 +1836,10 @@ dsi0: dsi@14014000 {
+ 			phys = <&mipi_tx0>;
+ 			phy-names = "dphy";
+ 			status = "disabled";
++
++			port {
++				dsi_out: endpoint { };
++			};
+ 		};
+ 
+ 		dpi0: dpi@14015000 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8188.dtsi b/arch/arm64/boot/dts/mediatek/mt8188.dtsi
+index 69a8423d385890..29d35ca945973c 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8188.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8188.dtsi
+@@ -2579,7 +2579,7 @@ rdma0: rdma@1c002000 {
+ 			reg = <0 0x1c002000 0 0x1000>;
+ 			clocks = <&vdosys0 CLK_VDO0_DISP_RDMA0>;
+ 			interrupts = <GIC_SPI 638 IRQ_TYPE_LEVEL_HIGH 0>;
+-			iommus = <&vdo_iommu M4U_PORT_L1_DISP_RDMA0>;
++			iommus = <&vpp_iommu M4U_PORT_L1_DISP_RDMA0>;
+ 			power-domains = <&spm MT8188_POWER_DOMAIN_VDOSYS0>;
+ 			mediatek,gce-client-reg = <&gce0 SUBSYS_1c00XXXX 0x2000 0x1000>;
+ 
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195.dtsi b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+index 4f2dc0a7556610..1ded4b3f87605f 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+@@ -617,22 +617,6 @@ power-domain@MT8195_POWER_DOMAIN_VPPSYS0 {
+ 					#size-cells = <0>;
+ 					#power-domain-cells = <1>;
+ 
+-					power-domain@MT8195_POWER_DOMAIN_VDEC1 {
+-						reg = <MT8195_POWER_DOMAIN_VDEC1>;
+-						clocks = <&vdecsys CLK_VDEC_LARB1>;
+-						clock-names = "vdec1-0";
+-						mediatek,infracfg = <&infracfg_ao>;
+-						#power-domain-cells = <0>;
+-					};
+-
+-					power-domain@MT8195_POWER_DOMAIN_VENC_CORE1 {
+-						reg = <MT8195_POWER_DOMAIN_VENC_CORE1>;
+-						clocks = <&vencsys_core1 CLK_VENC_CORE1_LARB>;
+-						clock-names = "venc1-larb";
+-						mediatek,infracfg = <&infracfg_ao>;
+-						#power-domain-cells = <0>;
+-					};
+-
+ 					power-domain@MT8195_POWER_DOMAIN_VDOSYS0 {
+ 						reg = <MT8195_POWER_DOMAIN_VDOSYS0>;
+ 						clocks = <&topckgen CLK_TOP_CFG_VDO0>,
+@@ -678,15 +662,25 @@ power-domain@MT8195_POWER_DOMAIN_VDEC0 {
+ 							clocks = <&vdecsys_soc CLK_VDEC_SOC_LARB1>;
+ 							clock-names = "vdec0-0";
+ 							mediatek,infracfg = <&infracfg_ao>;
++							#address-cells = <1>;
++							#size-cells = <0>;
+ 							#power-domain-cells = <0>;
+-						};
+ 
+-						power-domain@MT8195_POWER_DOMAIN_VDEC2 {
+-							reg = <MT8195_POWER_DOMAIN_VDEC2>;
+-							clocks = <&vdecsys_core1 CLK_VDEC_CORE1_LARB1>;
+-							clock-names = "vdec2-0";
+-							mediatek,infracfg = <&infracfg_ao>;
+-							#power-domain-cells = <0>;
++							power-domain@MT8195_POWER_DOMAIN_VDEC1 {
++								reg = <MT8195_POWER_DOMAIN_VDEC1>;
++								clocks = <&vdecsys CLK_VDEC_LARB1>;
++								clock-names = "vdec1-0";
++								mediatek,infracfg = <&infracfg_ao>;
++								#power-domain-cells = <0>;
++							};
++
++							power-domain@MT8195_POWER_DOMAIN_VDEC2 {
++								reg = <MT8195_POWER_DOMAIN_VDEC2>;
++								clocks = <&vdecsys_core1 CLK_VDEC_CORE1_LARB1>;
++								clock-names = "vdec2-0";
++								mediatek,infracfg = <&infracfg_ao>;
++								#power-domain-cells = <0>;
++							};
+ 						};
+ 
+ 						power-domain@MT8195_POWER_DOMAIN_VENC {
+@@ -694,7 +688,17 @@ power-domain@MT8195_POWER_DOMAIN_VENC {
+ 							clocks = <&vencsys CLK_VENC_LARB>;
+ 							clock-names = "venc0-larb";
+ 							mediatek,infracfg = <&infracfg_ao>;
++							#address-cells = <1>;
++							#size-cells = <0>;
+ 							#power-domain-cells = <0>;
++
++							power-domain@MT8195_POWER_DOMAIN_VENC_CORE1 {
++								reg = <MT8195_POWER_DOMAIN_VENC_CORE1>;
++								clocks = <&vencsys_core1 CLK_VENC_CORE1_LARB>;
++								clock-names = "venc1-larb";
++								mediatek,infracfg = <&infracfg_ao>;
++								#power-domain-cells = <0>;
++							};
+ 						};
+ 
+ 						power-domain@MT8195_POWER_DOMAIN_VDOSYS1 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8390-genio-common.dtsi b/arch/arm64/boot/dts/mediatek/mt8390-genio-common.dtsi
+index 60139e6dffd8e0..6a75b230282eda 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8390-genio-common.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8390-genio-common.dtsi
+@@ -1199,8 +1199,18 @@ xhci_ss_ep: endpoint {
+ };
+ 
+ &ssusb2 {
++	/*
++	 * the ssusb2 controller is one but we got two ports : one is routed
++	 * to the M.2 slot, the other is on the RPi header who does support
++	 * full OTG.
++	 * As the controller is shared between them, the role switch default
++	 * mode is set to host to make any peripheral inserted in the M.2
++	 * slot (i.e BT/WIFI module) be detected when the other port is
++	 * unused.
++	 */
+ 	dr_mode = "otg";
+ 	maximum-speed = "high-speed";
++	role-switch-default-mode = "host";
+ 	usb-role-switch;
+ 	vusb33-supply = <&mt6359_vusb_ldo_reg>;
+ 	wakeup-source;
+@@ -1211,7 +1221,7 @@ &ssusb2 {
+ 	connector {
+ 		compatible = "gpio-usb-b-connector", "usb-b-connector";
+ 		type = "micro";
+-		id-gpios = <&pio 89 GPIO_ACTIVE_HIGH>;
++		id-gpios = <&pio 89 GPIO_ACTIVE_LOW>;
+ 		vbus-supply = <&usb_p2_vbus>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/nvidia/tegra186.dtsi b/arch/arm64/boot/dts/nvidia/tegra186.dtsi
+index 2b3bb5d0af17bd..f0b7949df92c05 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra186.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra186.dtsi
+@@ -621,9 +621,7 @@ uartb: serial@3110000 {
+ 		reg-shift = <2>;
+ 		interrupts = <GIC_SPI 113 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&bpmp TEGRA186_CLK_UARTB>;
+-		clock-names = "serial";
+ 		resets = <&bpmp TEGRA186_RESET_UARTB>;
+-		reset-names = "serial";
+ 		status = "disabled";
+ 	};
+ 
+@@ -633,9 +631,7 @@ uartd: serial@3130000 {
+ 		reg-shift = <2>;
+ 		interrupts = <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&bpmp TEGRA186_CLK_UARTD>;
+-		clock-names = "serial";
+ 		resets = <&bpmp TEGRA186_RESET_UARTD>;
+-		reset-names = "serial";
+ 		status = "disabled";
+ 	};
+ 
+@@ -645,9 +641,7 @@ uarte: serial@3140000 {
+ 		reg-shift = <2>;
+ 		interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&bpmp TEGRA186_CLK_UARTE>;
+-		clock-names = "serial";
+ 		resets = <&bpmp TEGRA186_RESET_UARTE>;
+-		reset-names = "serial";
+ 		status = "disabled";
+ 	};
+ 
+@@ -657,9 +651,7 @@ uartf: serial@3150000 {
+ 		reg-shift = <2>;
+ 		interrupts = <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&bpmp TEGRA186_CLK_UARTF>;
+-		clock-names = "serial";
+ 		resets = <&bpmp TEGRA186_RESET_UARTF>;
+-		reset-names = "serial";
+ 		status = "disabled";
+ 	};
+ 
+@@ -1236,9 +1228,7 @@ uartc: serial@c280000 {
+ 		reg-shift = <2>;
+ 		interrupts = <GIC_SPI 114 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&bpmp TEGRA186_CLK_UARTC>;
+-		clock-names = "serial";
+ 		resets = <&bpmp TEGRA186_RESET_UARTC>;
+-		reset-names = "serial";
+ 		status = "disabled";
+ 	};
+ 
+@@ -1248,9 +1238,7 @@ uartg: serial@c290000 {
+ 		reg-shift = <2>;
+ 		interrupts = <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&bpmp TEGRA186_CLK_UARTG>;
+-		clock-names = "serial";
+ 		resets = <&bpmp TEGRA186_RESET_UARTG>;
+-		reset-names = "serial";
+ 		status = "disabled";
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+index 33f92b77cd9d9e..c3695077478514 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+@@ -766,9 +766,7 @@ uartd: serial@3130000 {
+ 			reg-shift = <2>;
+ 			interrupts = <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&bpmp TEGRA194_CLK_UARTD>;
+-			clock-names = "serial";
+ 			resets = <&bpmp TEGRA194_RESET_UARTD>;
+-			reset-names = "serial";
+ 			status = "disabled";
+ 		};
+ 
+@@ -778,9 +776,7 @@ uarte: serial@3140000 {
+ 			reg-shift = <2>;
+ 			interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&bpmp TEGRA194_CLK_UARTE>;
+-			clock-names = "serial";
+ 			resets = <&bpmp TEGRA194_RESET_UARTE>;
+-			reset-names = "serial";
+ 			status = "disabled";
+ 		};
+ 
+@@ -790,9 +786,7 @@ uartf: serial@3150000 {
+ 			reg-shift = <2>;
+ 			interrupts = <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&bpmp TEGRA194_CLK_UARTF>;
+-			clock-names = "serial";
+ 			resets = <&bpmp TEGRA194_RESET_UARTF>;
+-			reset-names = "serial";
+ 			status = "disabled";
+ 		};
+ 
+@@ -817,9 +811,7 @@ uarth: serial@3170000 {
+ 			reg-shift = <2>;
+ 			interrupts = <GIC_SPI 207 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&bpmp TEGRA194_CLK_UARTH>;
+-			clock-names = "serial";
+ 			resets = <&bpmp TEGRA194_RESET_UARTH>;
+-			reset-names = "serial";
+ 			status = "disabled";
+ 		};
+ 
+@@ -1616,9 +1608,7 @@ uartc: serial@c280000 {
+ 			reg-shift = <2>;
+ 			interrupts = <GIC_SPI 114 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&bpmp TEGRA194_CLK_UARTC>;
+-			clock-names = "serial";
+ 			resets = <&bpmp TEGRA194_RESET_UARTC>;
+-			reset-names = "serial";
+ 			status = "disabled";
+ 		};
+ 
+@@ -1628,9 +1618,7 @@ uartg: serial@c290000 {
+ 			reg-shift = <2>;
+ 			interrupts = <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&bpmp TEGRA194_CLK_UARTG>;
+-			clock-names = "serial";
+ 			resets = <&bpmp TEGRA194_RESET_UARTG>;
+-			reset-names = "serial";
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
+index 9b9d1d15b0c7ea..1bb1f9640a800a 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
+@@ -11,6 +11,7 @@ aliases {
+ 		rtc0 = "/i2c@7000d000/pmic@3c";
+ 		rtc1 = "/rtc@7000e000";
+ 		serial0 = &uarta;
++		serial3 = &uartd;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm64/boot/dts/qcom/ipq9574-rdp-common.dtsi b/arch/arm64/boot/dts/qcom/ipq9574-rdp-common.dtsi
+index ae12f069f26fa5..b24b795873d416 100644
+--- a/arch/arm64/boot/dts/qcom/ipq9574-rdp-common.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq9574-rdp-common.dtsi
+@@ -111,6 +111,13 @@ mp5496_l2: l2 {
+ 			regulator-always-on;
+ 			regulator-boot-on;
+ 		};
++
++		mp5496_l5: l5 {
++			regulator-min-microvolt = <1800000>;
++			regulator-max-microvolt = <1800000>;
++			regulator-always-on;
++			regulator-boot-on;
++		};
+ 	};
+ };
+ 
+@@ -146,7 +153,7 @@ &usb_0_dwc3 {
+ };
+ 
+ &usb_0_qmpphy {
+-	vdda-pll-supply = <&mp5496_l2>;
++	vdda-pll-supply = <&mp5496_l5>;
+ 	vdda-phy-supply = <&regulator_fixed_0p925>;
+ 
+ 	status = "okay";
+@@ -154,7 +161,7 @@ &usb_0_qmpphy {
+ 
+ &usb_0_qusbphy {
+ 	vdd-supply = <&regulator_fixed_0p925>;
+-	vdda-pll-supply = <&mp5496_l2>;
++	vdda-pll-supply = <&mp5496_l5>;
+ 	vdda-phy-dpdm-supply = <&regulator_fixed_3p3>;
+ 
+ 	status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/ipq9574.dtsi b/arch/arm64/boot/dts/qcom/ipq9574.dtsi
+index 3c02351fbb156a..b790a6b288abb8 100644
+--- a/arch/arm64/boot/dts/qcom/ipq9574.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq9574.dtsi
+@@ -974,14 +974,14 @@ pcie3: pcie@18000000 {
+ 			ranges = <0x01000000 0x0 0x00000000 0x18200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x18300000 0x18300000 0x0 0x7d00000>;
+ 
+-			interrupts = <GIC_SPI 126 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 128 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 129 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 130 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 137 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 221 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 222 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 225 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 312 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 326 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 415 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 494 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 495 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi0",
+ 					  "msi1",
+ 					  "msi2",
+diff --git a/arch/arm64/boot/dts/qcom/msm8998.dtsi b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+index c2caad85c668df..fa6769320a238c 100644
+--- a/arch/arm64/boot/dts/qcom/msm8998.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+@@ -2,6 +2,7 @@
+ /* Copyright (c) 2016, The Linux Foundation. All rights reserved. */
+ 
+ #include <dt-bindings/interrupt-controller/arm-gic.h>
++#include <dt-bindings/clock/qcom,dsi-phy-28nm.h>
+ #include <dt-bindings/clock/qcom,gcc-msm8998.h>
+ #include <dt-bindings/clock/qcom,gpucc-msm8998.h>
+ #include <dt-bindings/clock/qcom,mmcc-msm8998.h>
+@@ -2790,11 +2791,11 @@ mmcc: clock-controller@c8c0000 {
+ 				      "gpll0_div";
+ 			clocks = <&rpmcc RPM_SMD_XO_CLK_SRC>,
+ 				 <&gcc GCC_MMSS_GPLL0_CLK>,
+-				 <&mdss_dsi0_phy 1>,
+-				 <&mdss_dsi0_phy 0>,
+-				 <&mdss_dsi1_phy 1>,
+-				 <&mdss_dsi1_phy 0>,
+-				 <&mdss_hdmi_phy 0>,
++				 <&mdss_dsi0_phy DSI_PIXEL_PLL_CLK>,
++				 <&mdss_dsi0_phy DSI_BYTE_PLL_CLK>,
++				 <&mdss_dsi1_phy DSI_PIXEL_PLL_CLK>,
++				 <&mdss_dsi1_phy DSI_BYTE_PLL_CLK>,
++				 <&mdss_hdmi_phy>,
+ 				 <0>,
+ 				 <0>,
+ 				 <&gcc GCC_MMSS_GPLL0_DIV_CLK>;
+@@ -2932,8 +2933,8 @@ mdss_dsi0: dsi@c994000 {
+ 					      "bus";
+ 				assigned-clocks = <&mmcc BYTE0_CLK_SRC>,
+ 						  <&mmcc PCLK0_CLK_SRC>;
+-				assigned-clock-parents = <&mdss_dsi0_phy 0>,
+-							 <&mdss_dsi0_phy 1>;
++				assigned-clock-parents = <&mdss_dsi0_phy DSI_BYTE_PLL_CLK>,
++							 <&mdss_dsi0_phy DSI_PIXEL_PLL_CLK>;
+ 
+ 				operating-points-v2 = <&dsi_opp_table>;
+ 				power-domains = <&rpmpd MSM8998_VDDCX>;
+@@ -3008,8 +3009,8 @@ mdss_dsi1: dsi@c996000 {
+ 					      "bus";
+ 				assigned-clocks = <&mmcc BYTE1_CLK_SRC>,
+ 						  <&mmcc PCLK1_CLK_SRC>;
+-				assigned-clock-parents = <&mdss_dsi1_phy 0>,
+-							 <&mdss_dsi1_phy 1>;
++				assigned-clock-parents = <&mdss_dsi1_phy DSI_BYTE_PLL_CLK>,
++							 <&mdss_dsi1_phy DSI_PIXEL_PLL_CLK>;
+ 
+ 				operating-points-v2 = <&dsi_opp_table>;
+ 				power-domains = <&rpmpd MSM8998_VDDCX>;
+diff --git a/arch/arm64/boot/dts/qcom/qcm2290.dtsi b/arch/arm64/boot/dts/qcom/qcm2290.dtsi
+index f0746123e594d5..6e3e57dd02612f 100644
+--- a/arch/arm64/boot/dts/qcom/qcm2290.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcm2290.dtsi
+@@ -1073,7 +1073,7 @@ spi0: spi@4a80000 {
+ 				interconnects = <&qup_virt MASTER_QUP_CORE_0 RPM_ALWAYS_TAG
+ 						 &qup_virt SLAVE_QUP_CORE_0 RPM_ALWAYS_TAG>,
+ 						<&bimc MASTER_APPSS_PROC RPM_ALWAYS_TAG
+-						 &config_noc MASTER_APPSS_PROC RPM_ALWAYS_TAG>;
++						 &config_noc SLAVE_QUP_0 RPM_ALWAYS_TAG>;
+ 				interconnect-names = "qup-core",
+ 						     "qup-config";
+ 				#address-cells = <1>;
+@@ -1092,7 +1092,7 @@ uart0: serial@4a80000 {
+ 				interconnects = <&qup_virt MASTER_QUP_CORE_0 RPM_ALWAYS_TAG
+ 						 &qup_virt SLAVE_QUP_CORE_0 RPM_ALWAYS_TAG>,
+ 						<&bimc MASTER_APPSS_PROC RPM_ALWAYS_TAG
+-						 &config_noc MASTER_APPSS_PROC RPM_ALWAYS_TAG>;
++						 &config_noc SLAVE_QUP_0 RPM_ALWAYS_TAG>;
+ 				interconnect-names = "qup-core",
+ 						     "qup-config";
+ 				status = "disabled";
+@@ -1137,7 +1137,7 @@ spi1: spi@4a84000 {
+ 				interconnects = <&qup_virt MASTER_QUP_CORE_0 RPM_ALWAYS_TAG
+ 						 &qup_virt SLAVE_QUP_CORE_0 RPM_ALWAYS_TAG>,
+ 						<&bimc MASTER_APPSS_PROC RPM_ALWAYS_TAG
+-						 &config_noc MASTER_APPSS_PROC RPM_ALWAYS_TAG>;
++						 &config_noc SLAVE_QUP_0 RPM_ALWAYS_TAG>;
+ 				interconnect-names = "qup-core",
+ 						     "qup-config";
+ 				#address-cells = <1>;
+@@ -1184,7 +1184,7 @@ spi2: spi@4a88000 {
+ 				interconnects = <&qup_virt MASTER_QUP_CORE_0 RPM_ALWAYS_TAG
+ 						 &qup_virt SLAVE_QUP_CORE_0 RPM_ALWAYS_TAG>,
+ 						<&bimc MASTER_APPSS_PROC RPM_ALWAYS_TAG
+-						 &config_noc MASTER_APPSS_PROC RPM_ALWAYS_TAG>;
++						 &config_noc SLAVE_QUP_0 RPM_ALWAYS_TAG>;
+ 				interconnect-names = "qup-core",
+ 						     "qup-config";
+ 				#address-cells = <1>;
+@@ -1231,7 +1231,7 @@ spi3: spi@4a8c000 {
+ 				interconnects = <&qup_virt MASTER_QUP_CORE_0 RPM_ALWAYS_TAG
+ 						 &qup_virt SLAVE_QUP_CORE_0 RPM_ALWAYS_TAG>,
+ 						<&bimc MASTER_APPSS_PROC RPM_ALWAYS_TAG
+-						 &config_noc MASTER_APPSS_PROC RPM_ALWAYS_TAG>;
++						 &config_noc SLAVE_QUP_0 RPM_ALWAYS_TAG>;
+ 				interconnect-names = "qup-core",
+ 						     "qup-config";
+ 				#address-cells = <1>;
+@@ -1278,7 +1278,7 @@ spi4: spi@4a90000 {
+ 				interconnects = <&qup_virt MASTER_QUP_CORE_0 RPM_ALWAYS_TAG
+ 						 &qup_virt SLAVE_QUP_CORE_0 RPM_ALWAYS_TAG>,
+ 						<&bimc MASTER_APPSS_PROC RPM_ALWAYS_TAG
+-						 &config_noc MASTER_APPSS_PROC RPM_ALWAYS_TAG>;
++						 &config_noc SLAVE_QUP_0 RPM_ALWAYS_TAG>;
+ 				interconnect-names = "qup-core",
+ 						     "qup-config";
+ 				#address-cells = <1>;
+@@ -1297,7 +1297,7 @@ uart4: serial@4a90000 {
+ 				interconnects = <&qup_virt MASTER_QUP_CORE_0 RPM_ALWAYS_TAG
+ 						 &qup_virt SLAVE_QUP_CORE_0 RPM_ALWAYS_TAG>,
+ 						<&bimc MASTER_APPSS_PROC RPM_ALWAYS_TAG
+-						 &config_noc MASTER_APPSS_PROC RPM_ALWAYS_TAG>;
++						 &config_noc SLAVE_QUP_0 RPM_ALWAYS_TAG>;
+ 				interconnect-names = "qup-core",
+ 						     "qup-config";
+ 				status = "disabled";
+@@ -1342,7 +1342,7 @@ spi5: spi@4a94000 {
+ 				interconnects = <&qup_virt MASTER_QUP_CORE_0 RPM_ALWAYS_TAG
+ 						 &qup_virt SLAVE_QUP_CORE_0 RPM_ALWAYS_TAG>,
+ 						<&bimc MASTER_APPSS_PROC RPM_ALWAYS_TAG
+-						 &config_noc MASTER_APPSS_PROC RPM_ALWAYS_TAG>;
++						 &config_noc SLAVE_QUP_0 RPM_ALWAYS_TAG>;
+ 				interconnect-names = "qup-core",
+ 						     "qup-config";
+ 				#address-cells = <1>;
+diff --git a/arch/arm64/boot/dts/qcom/qcs615.dtsi b/arch/arm64/boot/dts/qcom/qcs615.dtsi
+index f4abfad474ea62..12065484904380 100644
+--- a/arch/arm64/boot/dts/qcom/qcs615.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs615.dtsi
+@@ -1022,10 +1022,10 @@ ufs_mem_hc: ufshc@1d84000 {
+ 				      "bus_aggr_clk",
+ 				      "iface_clk",
+ 				      "core_clk_unipro",
+-				      "core_clk_ice",
+ 				      "ref_clk",
+ 				      "tx_lane0_sync_clk",
+-				      "rx_lane0_sync_clk";
++				      "rx_lane0_sync_clk",
++				      "ice_core_clk";
+ 
+ 			resets = <&gcc GCC_UFS_PHY_BCR>;
+ 			reset-names = "rst";
+@@ -1060,10 +1060,10 @@ opp-50000000 {
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <37500000>,
+-						 /bits/ 64 <75000000>,
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <0>,
+-						 /bits/ 64 <0>;
++						 /bits/ 64 <0>,
++						 /bits/ 64 <75000000>;
+ 					required-opps = <&rpmhpd_opp_low_svs>;
+ 				};
+ 
+@@ -1072,10 +1072,10 @@ opp-100000000 {
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <75000000>,
+-						 /bits/ 64 <150000000>,
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <0>,
+-						 /bits/ 64 <0>;
++						 /bits/ 64 <0>,
++						 /bits/ 64 <150000000>;
+ 					required-opps = <&rpmhpd_opp_svs>;
+ 				};
+ 
+@@ -1084,10 +1084,10 @@ opp-200000000 {
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <150000000>,
+-						 /bits/ 64 <300000000>,
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <0>,
+-						 /bits/ 64 <0>;
++						 /bits/ 64 <0>,
++						 /bits/ 64 <300000000>;
+ 					required-opps = <&rpmhpd_opp_nom>;
+ 				};
+ 			};
+@@ -3304,7 +3304,6 @@ spmi_bus: spmi@c440000 {
+ 			#interrupt-cells = <4>;
+ 			#address-cells = <2>;
+ 			#size-cells = <0>;
+-			cell-index = <0>;
+ 			qcom,channel = <0>;
+ 			qcom,ee = <0>;
+ 		};
+diff --git a/arch/arm64/boot/dts/qcom/qcs8300.dtsi b/arch/arm64/boot/dts/qcom/qcs8300.dtsi
+index 4a057f7c0d9fae..13b1121cdf175b 100644
+--- a/arch/arm64/boot/dts/qcom/qcs8300.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs8300.dtsi
+@@ -798,18 +798,6 @@ cryptobam: dma-controller@1dc4000 {
+ 				 <&apps_smmu 0x481 0x00>;
+ 		};
+ 
+-		crypto: crypto@1dfa000 {
+-			compatible = "qcom,qcs8300-qce", "qcom,qce";
+-			reg = <0x0 0x01dfa000 0x0 0x6000>;
+-			dmas = <&cryptobam 4>, <&cryptobam 5>;
+-			dma-names = "rx", "tx";
+-			iommus = <&apps_smmu 0x480 0x00>,
+-				 <&apps_smmu 0x481 0x00>;
+-			interconnects = <&aggre2_noc MASTER_CRYPTO_CORE0 QCOM_ICC_TAG_ALWAYS
+-					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+-			interconnect-names = "memory";
+-		};
+-
+ 		ice: crypto@1d88000 {
+ 			compatible = "qcom,qcs8300-inline-crypto-engine",
+ 				     "qcom,inline-crypto-engine";
+diff --git a/arch/arm64/boot/dts/qcom/sa8775p.dtsi b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+index 2329460b210381..2010b7988b6cc4 100644
+--- a/arch/arm64/boot/dts/qcom/sa8775p.dtsi
++++ b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+@@ -2420,17 +2420,6 @@ cryptobam: dma-controller@1dc4000 {
+ 				 <&apps_smmu 0x481 0x00>;
+ 		};
+ 
+-		crypto: crypto@1dfa000 {
+-			compatible = "qcom,sa8775p-qce", "qcom,qce";
+-			reg = <0x0 0x01dfa000 0x0 0x6000>;
+-			dmas = <&cryptobam 4>, <&cryptobam 5>;
+-			dma-names = "rx", "tx";
+-			iommus = <&apps_smmu 0x480 0x00>,
+-				 <&apps_smmu 0x481 0x00>;
+-			interconnects = <&aggre2_noc MASTER_CRYPTO_CORE0 0 &mc_virt SLAVE_EBI1 0>;
+-			interconnect-names = "memory";
+-		};
+-
+ 		stm: stm@4002000 {
+ 			compatible = "arm,coresight-stm", "arm,primecell";
+ 			reg = <0x0 0x4002000 0x0 0x1000>,
+diff --git a/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts b/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
+index f3190f408f4b2c..0f1ebd869ce315 100644
+--- a/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
++++ b/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
+@@ -1202,9 +1202,6 @@ &sound {
+ 		"VA DMIC0", "MIC BIAS1",
+ 		"VA DMIC1", "MIC BIAS1",
+ 		"VA DMIC2", "MIC BIAS3",
+-		"VA DMIC0", "VA MIC BIAS1",
+-		"VA DMIC1", "VA MIC BIAS1",
+-		"VA DMIC2", "VA MIC BIAS3",
+ 		"TX SWR_ADC1", "ADC2_OUTPUT";
+ 
+ 	wcd-playback-dai-link {
+diff --git a/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts b/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts
+index d402f4c85b11d1..ee696317f78cc3 100644
+--- a/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts
++++ b/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts
+@@ -175,6 +175,7 @@ &blsp1_dma {
+ 	 * BAM DMA interconnects support is in place.
+ 	 */
+ 	/delete-property/ clocks;
++	/delete-property/ clock-names;
+ };
+ 
+ &blsp1_uart2 {
+@@ -187,6 +188,7 @@ &blsp2_dma {
+ 	 * BAM DMA interconnects support is in place.
+ 	 */
+ 	/delete-property/ clocks;
++	/delete-property/ clock-names;
+ };
+ 
+ &blsp2_uart1 {
+diff --git a/arch/arm64/boot/dts/qcom/sdm660-xiaomi-lavender.dts b/arch/arm64/boot/dts/qcom/sdm660-xiaomi-lavender.dts
+index 7167f75bced3fd..a9926ad6c6f9f5 100644
+--- a/arch/arm64/boot/dts/qcom/sdm660-xiaomi-lavender.dts
++++ b/arch/arm64/boot/dts/qcom/sdm660-xiaomi-lavender.dts
+@@ -107,6 +107,7 @@ &qusb2phy0 {
+ 	status = "okay";
+ 
+ 	vdd-supply = <&vreg_l1b_0p925>;
++	vdda-pll-supply = <&vreg_l10a_1p8>;
+ 	vdda-phy-dpdm-supply = <&vreg_l7b_3p125>;
+ };
+ 
+@@ -404,6 +405,8 @@ &sdhc_1 {
+ &sdhc_2 {
+ 	status = "okay";
+ 
++	cd-gpios = <&tlmm 54 GPIO_ACTIVE_HIGH>;
++
+ 	vmmc-supply = <&vreg_l5b_2p95>;
+ 	vqmmc-supply = <&vreg_l2b_2p95>;
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-samsung-starqltechn.dts b/arch/arm64/boot/dts/qcom/sdm845-samsung-starqltechn.dts
+index d37a433130b98f..5948b401165ce9 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-samsung-starqltechn.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-samsung-starqltechn.dts
+@@ -135,8 +135,6 @@ vdda_pll_cc_ebi23:
+ 		vdda_sp_sensor:
+ 		vdda_ufs1_core:
+ 		vdda_ufs2_core:
+-		vdda_usb1_ss_core:
+-		vdda_usb2_ss_core:
+ 		vreg_l1a_0p875: ldo1 {
+ 			regulator-min-microvolt = <880000>;
+ 			regulator-max-microvolt = <880000>;
+@@ -157,6 +155,7 @@ vreg_l3a_1p0: ldo3 {
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+ 		};
+ 
++		vdda_usb1_ss_core:
+ 		vdd_wcss_cx:
+ 		vdd_wcss_mx:
+ 		vdda_wcss_pll:
+@@ -383,8 +382,8 @@ &ufs_mem_phy {
+ };
+ 
+ &sdhc_2 {
+-	pinctrl-names = "default";
+ 	pinctrl-0 = <&sdc2_clk_state &sdc2_cmd_state &sdc2_data_state &sd_card_det_n_state>;
++	pinctrl-names = "default";
+ 	cd-gpios = <&tlmm 126 GPIO_ACTIVE_LOW>;
+ 	vmmc-supply = <&vreg_l21a_2p95>;
+ 	vqmmc-supply = <&vddpx_2>;
+@@ -418,16 +417,9 @@ &usb_1_qmpphy {
+ 	status = "okay";
+ };
+ 
+-&wifi {
+-	vdd-0.8-cx-mx-supply = <&vreg_l5a_0p8>;
+-	vdd-1.8-xo-supply = <&vreg_l7a_1p8>;
+-	vdd-1.3-rfa-supply = <&vreg_l17a_1p3>;
+-	vdd-3.3-ch0-supply = <&vreg_l25a_3p3>;
+-	status = "okay";
+-};
+-
+ &tlmm {
+-	gpio-reserved-ranges = <0 4>, <27 4>, <81 4>, <85 4>;
++	gpio-reserved-ranges = <27 4>, /* SPI (eSE - embedded Secure Element) */
++			       <85 4>; /* SPI (fingerprint reader) */
+ 
+ 	sdc2_clk_state: sdc2-clk-state {
+ 		pins = "sdc2_clk";
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index c2937b4d9f1802..68613ea7146c88 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -606,7 +606,7 @@ cpu7_opp8: opp-1632000000 {
+ 		};
+ 
+ 		cpu7_opp9: opp-1747200000 {
+-			opp-hz = /bits/ 64 <1708800000>;
++			opp-hz = /bits/ 64 <1747200000>;
+ 			opp-peak-kBps = <5412000 42393600>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index f055600d6cfe5b..a86d0067634e81 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -1806,11 +1806,11 @@ cryptobam: dma-controller@1dc4000 {
+ 			interrupts = <GIC_SPI 272 IRQ_TYPE_LEVEL_HIGH>;
+ 			#dma-cells = <1>;
+ 			qcom,ee = <0>;
++			qcom,num-ees = <4>;
++			num-channels = <16>;
+ 			qcom,controlled-remotely;
+ 			iommus = <&apps_smmu 0x594 0x0011>,
+ 				 <&apps_smmu 0x596 0x0011>;
+-			/* FIXME: Probing BAM DMA causes some abort and system hang */
+-			status = "fail";
+ 		};
+ 
+ 		crypto: crypto@1dfa000 {
+@@ -1822,8 +1822,6 @@ crypto: crypto@1dfa000 {
+ 				 <&apps_smmu 0x596 0x0011>;
+ 			interconnects = <&aggre2_noc MASTER_CRYPTO 0 &mc_virt SLAVE_EBI1 0>;
+ 			interconnect-names = "memory";
+-			/* FIXME: dependency BAM DMA is disabled */
+-			status = "disabled";
+ 		};
+ 
+ 		ipa: ipa@1e40000 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8550.dtsi b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+index ac3e00ad417719..65ebddd124e2a8 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+@@ -331,7 +331,8 @@ firmware {
+ 		scm: scm {
+ 			compatible = "qcom,scm-sm8550", "qcom,scm";
+ 			qcom,dload-mode = <&tcsr 0x19000>;
+-			interconnects = <&aggre2_noc MASTER_CRYPTO 0 &mc_virt SLAVE_EBI1 0>;
++			interconnects = <&aggre2_noc MASTER_CRYPTO QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 		};
+ 	};
+ 
+@@ -850,9 +851,12 @@ i2c8: i2c@880000 {
+ 				interrupts = <GIC_SPI 373 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 0 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 0 QCOM_GPI_I2C>;
+@@ -868,9 +872,12 @@ spi8: spi@880000 {
+ 				interrupts = <GIC_SPI 373 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi8_data_clk>, <&qup_spi8_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 0 QCOM_GPI_SPI>,
+ 				       <&gpi_dma2 1 0 QCOM_GPI_SPI>;
+@@ -890,9 +897,12 @@ i2c9: i2c@884000 {
+ 				interrupts = <GIC_SPI 583 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 1 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 1 QCOM_GPI_I2C>;
+@@ -908,9 +918,12 @@ spi9: spi@884000 {
+ 				interrupts = <GIC_SPI 583 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi9_data_clk>, <&qup_spi9_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 1 QCOM_GPI_SPI>,
+ 				       <&gpi_dma2 1 1 QCOM_GPI_SPI>;
+@@ -930,9 +943,12 @@ i2c10: i2c@888000 {
+ 				interrupts = <GIC_SPI 584 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 2 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 2 QCOM_GPI_I2C>;
+@@ -948,9 +964,12 @@ spi10: spi@888000 {
+ 				interrupts = <GIC_SPI 584 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi10_data_clk>, <&qup_spi10_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 2 QCOM_GPI_SPI>,
+ 				       <&gpi_dma2 1 2 QCOM_GPI_SPI>;
+@@ -970,9 +989,12 @@ i2c11: i2c@88c000 {
+ 				interrupts = <GIC_SPI 585 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 3 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 3 QCOM_GPI_I2C>;
+@@ -988,9 +1010,12 @@ spi11: spi@88c000 {
+ 				interrupts = <GIC_SPI 585 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi11_data_clk>, <&qup_spi11_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 3 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 3 QCOM_GPI_I2C>;
+@@ -1010,9 +1035,12 @@ i2c12: i2c@890000 {
+ 				interrupts = <GIC_SPI 586 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 4 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 4 QCOM_GPI_I2C>;
+@@ -1028,9 +1056,12 @@ spi12: spi@890000 {
+ 				interrupts = <GIC_SPI 586 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi12_data_clk>, <&qup_spi12_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 4 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 4 QCOM_GPI_I2C>;
+@@ -1050,9 +1081,12 @@ i2c13: i2c@894000 {
+ 				interrupts = <GIC_SPI 587 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt  SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 5 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 5 QCOM_GPI_I2C>;
+@@ -1068,9 +1102,12 @@ spi13: spi@894000 {
+ 				interrupts = <GIC_SPI 587 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi13_data_clk>, <&qup_spi13_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt  SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 5 QCOM_GPI_SPI>,
+ 				       <&gpi_dma2 1 5 QCOM_GPI_SPI>;
+@@ -1088,8 +1125,10 @@ uart14: serial@898000 {
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_uart14_default>, <&qup_uart14_cts_rts>;
+ 				interrupts = <GIC_SPI 461 IRQ_TYPE_LEVEL_HIGH>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1104,9 +1143,12 @@ i2c15: i2c@89c000 {
+ 				interrupts = <GIC_SPI 462 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt  SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 7 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 7 QCOM_GPI_I2C>;
+@@ -1122,9 +1164,12 @@ spi15: spi@89c000 {
+ 				interrupts = <GIC_SPI 462 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi15_data_clk>, <&qup_spi15_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt  SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 7 QCOM_GPI_SPI>,
+ 				       <&gpi_dma2 1 7 QCOM_GPI_SPI>;
+@@ -1156,8 +1201,10 @@ i2c_hub_0: i2c@980000 {
+ 				interrupts = <GIC_SPI 464 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1173,8 +1220,10 @@ i2c_hub_1: i2c@984000 {
+ 				interrupts = <GIC_SPI 465 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1190,8 +1239,10 @@ i2c_hub_2: i2c@988000 {
+ 				interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1207,8 +1258,10 @@ i2c_hub_3: i2c@98c000 {
+ 				interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1224,8 +1277,10 @@ i2c_hub_4: i2c@990000 {
+ 				interrupts = <GIC_SPI 468 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1241,8 +1296,10 @@ i2c_hub_5: i2c@994000 {
+ 				interrupts = <GIC_SPI 469 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1258,8 +1315,10 @@ i2c_hub_6: i2c@998000 {
+ 				interrupts = <GIC_SPI 470 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1275,8 +1334,10 @@ i2c_hub_7: i2c@99c000 {
+ 				interrupts = <GIC_SPI 471 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1292,8 +1353,10 @@ i2c_hub_8: i2c@9a0000 {
+ 				interrupts = <GIC_SPI 472 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1309,8 +1372,10 @@ i2c_hub_9: i2c@9a4000 {
+ 				interrupts = <GIC_SPI 473 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1347,7 +1412,8 @@ qupv3_id_0: geniqup@ac0000 {
+ 			clocks = <&gcc GCC_QUPV3_WRAP_1_M_AHB_CLK>,
+ 				 <&gcc GCC_QUPV3_WRAP_1_S_AHB_CLK>;
+ 			iommus = <&apps_smmu 0xa3 0>;
+-			interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>;
++			interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++					 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>;
+ 			interconnect-names = "qup-core";
+ 			dma-coherent;
+ 			#address-cells = <2>;
+@@ -1364,9 +1430,12 @@ i2c0: i2c@a80000 {
+ 				interrupts = <GIC_SPI 353 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 0 QCOM_GPI_I2C>,
+ 				       <&gpi_dma1 1 0 QCOM_GPI_I2C>;
+@@ -1382,9 +1451,12 @@ spi0: spi@a80000 {
+ 				interrupts = <GIC_SPI 353 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi0_data_clk>, <&qup_spi0_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 0 QCOM_GPI_SPI>,
+ 				       <&gpi_dma1 1 0 QCOM_GPI_SPI>;
+@@ -1404,9 +1476,12 @@ i2c1: i2c@a84000 {
+ 				interrupts = <GIC_SPI 354 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 1 QCOM_GPI_I2C>,
+ 				       <&gpi_dma1 1 1 QCOM_GPI_I2C>;
+@@ -1422,9 +1497,12 @@ spi1: spi@a84000 {
+ 				interrupts = <GIC_SPI 354 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi1_data_clk>, <&qup_spi1_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 1 QCOM_GPI_SPI>,
+ 				       <&gpi_dma1 1 1 QCOM_GPI_SPI>;
+@@ -1444,9 +1522,12 @@ i2c2: i2c@a88000 {
+ 				interrupts = <GIC_SPI 355 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 2 QCOM_GPI_I2C>,
+ 				       <&gpi_dma1 1 2 QCOM_GPI_I2C>;
+@@ -1462,9 +1543,12 @@ spi2: spi@a88000 {
+ 				interrupts = <GIC_SPI 355 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi2_data_clk>, <&qup_spi2_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 2 QCOM_GPI_SPI>,
+ 				       <&gpi_dma1 1 2 QCOM_GPI_SPI>;
+@@ -1484,9 +1568,12 @@ i2c3: i2c@a8c000 {
+ 				interrupts = <GIC_SPI 356 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 3 QCOM_GPI_I2C>,
+ 				       <&gpi_dma1 1 3 QCOM_GPI_I2C>;
+@@ -1502,9 +1589,12 @@ spi3: spi@a8c000 {
+ 				interrupts = <GIC_SPI 356 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi3_data_clk>, <&qup_spi3_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 3 QCOM_GPI_SPI>,
+ 				       <&gpi_dma1 1 3 QCOM_GPI_SPI>;
+@@ -1524,9 +1614,12 @@ i2c4: i2c@a90000 {
+ 				interrupts = <GIC_SPI 357 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 4 QCOM_GPI_I2C>,
+ 				       <&gpi_dma1 1 4 QCOM_GPI_I2C>;
+@@ -1542,9 +1635,12 @@ spi4: spi@a90000 {
+ 				interrupts = <GIC_SPI 357 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi4_data_clk>, <&qup_spi4_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 4 QCOM_GPI_SPI>,
+ 				       <&gpi_dma1 1 4 QCOM_GPI_SPI>;
+@@ -1562,9 +1658,12 @@ i2c5: i2c@a94000 {
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_i2c5_data_clk>;
+ 				interrupts = <GIC_SPI 358 IRQ_TYPE_LEVEL_HIGH>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 5 QCOM_GPI_I2C>,
+ 				       <&gpi_dma1 1 5 QCOM_GPI_I2C>;
+@@ -1582,9 +1681,12 @@ spi5: spi@a94000 {
+ 				interrupts = <GIC_SPI 358 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi5_data_clk>, <&qup_spi5_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 5 QCOM_GPI_SPI>,
+ 				       <&gpi_dma1 1 5 QCOM_GPI_SPI>;
+@@ -1602,9 +1704,12 @@ i2c6: i2c@a98000 {
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_i2c6_data_clk>;
+ 				interrupts = <GIC_SPI 363 IRQ_TYPE_LEVEL_HIGH>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 6 QCOM_GPI_I2C>,
+ 				       <&gpi_dma1 1 6 QCOM_GPI_I2C>;
+@@ -1622,9 +1727,12 @@ spi6: spi@a98000 {
+ 				interrupts = <GIC_SPI 363 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi6_data_clk>, <&qup_spi6_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 6 QCOM_GPI_SPI>,
+ 				       <&gpi_dma1 1 6 QCOM_GPI_SPI>;
+@@ -1643,8 +1751,10 @@ uart7: serial@a9c000 {
+ 				pinctrl-0 = <&qup_uart7_default>;
+ 				interrupts = <GIC_SPI 579 IRQ_TYPE_LEVEL_HIGH>;
+ 				interconnect-names = "qup-core", "qup-config";
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>;
+ 				status = "disabled";
+ 			};
+ 		};
+@@ -1768,8 +1878,10 @@ pcie0: pcie@1c00000 {
+ 				      "ddrss_sf_tbu",
+ 				      "noc_aggr";
+ 
+-			interconnects = <&pcie_noc MASTER_PCIE_0 0 &mc_virt SLAVE_EBI1 0>,
+-					<&gem_noc MASTER_APPSS_PROC 0 &cnoc_main SLAVE_PCIE_0 0>;
++			interconnects = <&pcie_noc MASTER_PCIE_0 QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
++					<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++					 &cnoc_main SLAVE_PCIE_0 QCOM_ICC_TAG_ALWAYS>;
+ 			interconnect-names = "pcie-mem", "cpu-pcie";
+ 
+ 			msi-map = <0x0 &gic_its 0x1400 0x1>,
+@@ -1891,8 +2003,10 @@ pcie1: pcie@1c08000 {
+ 			assigned-clocks = <&gcc GCC_PCIE_1_AUX_CLK>;
+ 			assigned-clock-rates = <19200000>;
+ 
+-			interconnects = <&pcie_noc MASTER_PCIE_1 0 &mc_virt SLAVE_EBI1 0>,
+-					<&gem_noc MASTER_APPSS_PROC 0 &cnoc_main SLAVE_PCIE_1 0>;
++			interconnects = <&pcie_noc MASTER_PCIE_1 QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
++					<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++					 &cnoc_main SLAVE_PCIE_1 QCOM_ICC_TAG_ALWAYS>;
+ 			interconnect-names = "pcie-mem", "cpu-pcie";
+ 
+ 			msi-map = <0x0 &gic_its 0x1480 0x1>,
+@@ -1971,7 +2085,8 @@ crypto: crypto@1dfa000 {
+ 			dma-names = "rx", "tx";
+ 			iommus = <&apps_smmu 0x480 0x0>,
+ 				 <&apps_smmu 0x481 0x0>;
+-			interconnects = <&aggre2_noc MASTER_CRYPTO 0 &mc_virt SLAVE_EBI1 0>;
++			interconnects = <&aggre2_noc MASTER_CRYPTO QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 			interconnect-names = "memory";
+ 		};
+ 
+@@ -2015,8 +2130,10 @@ ufs_mem_hc: ufshc@1d84000 {
+ 			dma-coherent;
+ 
+ 			operating-points-v2 = <&ufs_opp_table>;
+-			interconnects = <&aggre1_noc MASTER_UFS_MEM 0 &mc_virt SLAVE_EBI1 0>,
+-					<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_UFS_MEM_CFG 0>;
++			interconnects = <&aggre1_noc MASTER_UFS_MEM QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
++					<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++					 &config_noc SLAVE_UFS_MEM_CFG QCOM_ICC_TAG_ALWAYS>;
+ 
+ 			interconnect-names = "ufs-ddr", "cpu-ufs";
+ 			clock-names = "core_clk",
+@@ -2316,8 +2433,10 @@ ipa: ipa@3f40000 {
+ 			clocks = <&rpmhcc RPMH_IPA_CLK>;
+ 			clock-names = "core";
+ 
+-			interconnects = <&aggre2_noc MASTER_IPA 0 &mc_virt SLAVE_EBI1 0>,
+-					<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_IPA_CFG 0>;
++			interconnects = <&aggre2_noc MASTER_IPA QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
++					<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++					 &config_noc SLAVE_IPA_CFG QCOM_ICC_TAG_ALWAYS>;
+ 			interconnect-names = "memory",
+ 					     "config";
+ 
+@@ -2351,7 +2470,8 @@ remoteproc_mpss: remoteproc@4080000 {
+ 					<&rpmhpd RPMHPD_MSS>;
+ 			power-domain-names = "cx", "mss";
+ 
+-			interconnects = <&mc_virt MASTER_LLCC 0 &mc_virt SLAVE_EBI1 0>;
++			interconnects = <&mc_virt MASTER_LLCC QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 
+ 			memory-region = <&mpss_mem>, <&q6_mpss_dtb_mem>, <&mpss_dsm_mem>;
+ 
+@@ -2392,7 +2512,8 @@ remoteproc_adsp: remoteproc@6800000 {
+ 					<&rpmhpd RPMHPD_LMX>;
+ 			power-domain-names = "lcx", "lmx";
+ 
+-			interconnects = <&lpass_lpicx_noc MASTER_LPASS_PROC 0 &mc_virt SLAVE_EBI1 0>;
++			interconnects = <&lpass_lpicx_noc MASTER_LPASS_PROC QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 
+ 			memory-region = <&adspslpi_mem>, <&q6_adsp_dtb_mem>;
+ 
+@@ -2850,8 +2971,10 @@ sdhc_2: mmc@8804000 {
+ 			power-domains = <&rpmhpd RPMHPD_CX>;
+ 			operating-points-v2 = <&sdhc2_opp_table>;
+ 
+-			interconnects = <&aggre2_noc MASTER_SDCC_2 0 &mc_virt SLAVE_EBI1 0>,
+-					<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_SDCC_2 0>;
++			interconnects = <&aggre2_noc MASTER_SDCC_2 QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
++					<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++					 &config_noc SLAVE_SDCC_2 QCOM_ICC_TAG_ALWAYS>;
+ 			interconnect-names = "sdhc-ddr", "cpu-sdhc";
+ 			bus-width = <4>;
+ 			dma-coherent;
+@@ -3022,8 +3145,11 @@ mdss: display-subsystem@ae00000 {
+ 
+ 			power-domains = <&dispcc MDSS_GDSC>;
+ 
+-			interconnects = <&mmss_noc MASTER_MDP 0 &mc_virt SLAVE_EBI1 0>;
+-			interconnect-names = "mdp0-mem";
++			interconnects = <&mmss_noc MASTER_MDP QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
++					<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ACTIVE_ONLY
++					 &config_noc SLAVE_DISPLAY_CFG QCOM_ICC_TAG_ACTIVE_ONLY>;
++			interconnect-names = "mdp0-mem", "cpu-cfg";
+ 
+ 			iommus = <&apps_smmu 0x1c00 0x2>;
+ 
+@@ -3495,8 +3621,10 @@ usb_1: usb@a6f8800 {
+ 
+ 			resets = <&gcc GCC_USB30_PRIM_BCR>;
+ 
+-			interconnects = <&aggre1_noc MASTER_USB3_0 0 &mc_virt SLAVE_EBI1 0>,
+-					<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_USB3_0 0>;
++			interconnects = <&aggre1_noc MASTER_USB3_0 QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
++					<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++					 &config_noc SLAVE_USB3_0 QCOM_ICC_TAG_ALWAYS>;
+ 			interconnect-names = "usb-ddr", "apps-usb";
+ 
+ 			status = "disabled";
+@@ -4619,7 +4747,8 @@ pmu@24091000 {
+ 			compatible = "qcom,sm8550-llcc-bwmon", "qcom,sc7280-llcc-bwmon";
+ 			reg = <0 0x24091000 0 0x1000>;
+ 			interrupts = <GIC_SPI 81 IRQ_TYPE_LEVEL_HIGH>;
+-			interconnects = <&mc_virt MASTER_LLCC 3 &mc_virt SLAVE_EBI1 3>;
++			interconnects = <&mc_virt MASTER_LLCC QCOM_ICC_TAG_ACTIVE_ONLY
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ACTIVE_ONLY>;
+ 
+ 			operating-points-v2 = <&llcc_bwmon_opp_table>;
+ 
+@@ -4668,7 +4797,8 @@ pmu@240b6400 {
+ 			compatible = "qcom,sm8550-cpu-bwmon", "qcom,sdm845-bwmon";
+ 			reg = <0 0x240b6400 0 0x600>;
+ 			interrupts = <GIC_SPI 581 IRQ_TYPE_LEVEL_HIGH>;
+-			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &gem_noc SLAVE_LLCC 3>;
++			interconnects = <&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ACTIVE_ONLY
++					 &gem_noc SLAVE_LLCC QCOM_ICC_TAG_ACTIVE_ONLY>;
+ 
+ 			operating-points-v2 = <&cpu_bwmon_opp_table>;
+ 
+@@ -4752,7 +4882,8 @@ remoteproc_cdsp: remoteproc@32300000 {
+ 					<&rpmhpd RPMHPD_NSP>;
+ 			power-domain-names = "cx", "mxc", "nsp";
+ 
+-			interconnects = <&nsp_noc MASTER_CDSP_PROC 0 &mc_virt SLAVE_EBI1 0>;
++			interconnects = <&nsp_noc MASTER_CDSP_PROC QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 
+ 			memory-region = <&cdsp_mem>, <&q6_cdsp_dtb_mem>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8650.dtsi b/arch/arm64/boot/dts/qcom/sm8650.dtsi
+index c8a2a76a98f000..76acce6754986a 100644
+--- a/arch/arm64/boot/dts/qcom/sm8650.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8650.dtsi
+@@ -159,13 +159,20 @@ cpu3: cpu@300 {
+ 			power-domain-names = "psci";
+ 
+ 			enable-method = "psci";
+-			next-level-cache = <&l2_200>;
++			next-level-cache = <&l2_300>;
+ 			capacity-dmips-mhz = <1792>;
+ 			dynamic-power-coefficient = <238>;
+ 
+ 			qcom,freq-domain = <&cpufreq_hw 3>;
+ 
+ 			#cooling-cells = <2>;
++
++			l2_300: l2-cache {
++				compatible = "cache";
++				cache-level = <2>;
++				cache-unified;
++				next-level-cache = <&l3_0>;
++			};
+ 		};
+ 
+ 		cpu4: cpu@400 {
+@@ -460,7 +467,7 @@ cpu_pd1: power-domain-cpu1 {
+ 		cpu_pd2: power-domain-cpu2 {
+ 			#power-domain-cells = <0>;
+ 			power-domains = <&cluster_pd>;
+-			domain-idle-states = <&silver_cpu_sleep_0>;
++			domain-idle-states = <&gold_cpu_sleep_0>;
+ 		};
+ 
+ 		cpu_pd3: power-domain-cpu3 {
+@@ -3658,8 +3665,11 @@ mdss: display-subsystem@ae00000 {
+ 			resets = <&dispcc DISP_CC_MDSS_CORE_BCR>;
+ 
+ 			interconnects = <&mmss_noc MASTER_MDP QCOM_ICC_TAG_ALWAYS
+-					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+-			interconnect-names = "mdp0-mem";
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
++					<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ACTIVE_ONLY
++					 &config_noc SLAVE_DISPLAY_CFG QCOM_ICC_TAG_ACTIVE_ONLY>;
++			interconnect-names = "mdp0-mem",
++					     "cpu-cfg";
+ 
+ 			power-domains = <&dispcc MDSS_GDSC>;
+ 
+@@ -6537,20 +6547,20 @@ map0 {
+ 
+ 			trips {
+ 				gpu0_alert0: trip-point0 {
+-					temperature = <85000>;
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+ 					type = "passive";
+ 				};
+ 
+ 				trip-point1 {
+-					temperature = <90000>;
++					temperature = <110000>;
+ 					hysteresis = <1000>;
+ 					type = "hot";
+ 				};
+ 
+ 				trip-point2 {
+-					temperature = <110000>;
+-					hysteresis = <1000>;
++					temperature = <115000>;
++					hysteresis = <0>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -6570,20 +6580,20 @@ map0 {
+ 
+ 			trips {
+ 				gpu1_alert0: trip-point0 {
+-					temperature = <85000>;
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+ 					type = "passive";
+ 				};
+ 
+ 				trip-point1 {
+-					temperature = <90000>;
++					temperature = <110000>;
+ 					hysteresis = <1000>;
+ 					type = "hot";
+ 				};
+ 
+ 				trip-point2 {
+-					temperature = <110000>;
+-					hysteresis = <1000>;
++					temperature = <115000>;
++					hysteresis = <0>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -6603,20 +6613,20 @@ map0 {
+ 
+ 			trips {
+ 				gpu2_alert0: trip-point0 {
+-					temperature = <85000>;
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+ 					type = "passive";
+ 				};
+ 
+ 				trip-point1 {
+-					temperature = <90000>;
++					temperature = <110000>;
+ 					hysteresis = <1000>;
+ 					type = "hot";
+ 				};
+ 
+ 				trip-point2 {
+-					temperature = <110000>;
+-					hysteresis = <1000>;
++					temperature = <115000>;
++					hysteresis = <0>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -6636,20 +6646,20 @@ map0 {
+ 
+ 			trips {
+ 				gpu3_alert0: trip-point0 {
+-					temperature = <85000>;
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+ 					type = "passive";
+ 				};
+ 
+ 				trip-point1 {
+-					temperature = <90000>;
++					temperature = <110000>;
+ 					hysteresis = <1000>;
+ 					type = "hot";
+ 				};
+ 
+ 				trip-point2 {
+-					temperature = <110000>;
+-					hysteresis = <1000>;
++					temperature = <115000>;
++					hysteresis = <0>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -6669,20 +6679,20 @@ map0 {
+ 
+ 			trips {
+ 				gpu4_alert0: trip-point0 {
+-					temperature = <85000>;
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+ 					type = "passive";
+ 				};
+ 
+ 				trip-point1 {
+-					temperature = <90000>;
++					temperature = <110000>;
+ 					hysteresis = <1000>;
+ 					type = "hot";
+ 				};
+ 
+ 				trip-point2 {
+-					temperature = <110000>;
+-					hysteresis = <1000>;
++					temperature = <115000>;
++					hysteresis = <0>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -6702,20 +6712,20 @@ map0 {
+ 
+ 			trips {
+ 				gpu5_alert0: trip-point0 {
+-					temperature = <85000>;
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+ 					type = "passive";
+ 				};
+ 
+ 				trip-point1 {
+-					temperature = <90000>;
++					temperature = <110000>;
+ 					hysteresis = <1000>;
+ 					type = "hot";
+ 				};
+ 
+ 				trip-point2 {
+-					temperature = <110000>;
+-					hysteresis = <1000>;
++					temperature = <115000>;
++					hysteresis = <0>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -6735,20 +6745,20 @@ map0 {
+ 
+ 			trips {
+ 				gpu6_alert0: trip-point0 {
+-					temperature = <85000>;
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+ 					type = "passive";
+ 				};
+ 
+ 				trip-point1 {
+-					temperature = <90000>;
++					temperature = <110000>;
+ 					hysteresis = <1000>;
+ 					type = "hot";
+ 				};
+ 
+ 				trip-point2 {
+-					temperature = <110000>;
+-					hysteresis = <1000>;
++					temperature = <115000>;
++					hysteresis = <0>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -6768,20 +6778,20 @@ map0 {
+ 
+ 			trips {
+ 				gpu7_alert0: trip-point0 {
+-					temperature = <85000>;
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+ 					type = "passive";
+ 				};
+ 
+ 				trip-point1 {
+-					temperature = <90000>;
++					temperature = <110000>;
+ 					hysteresis = <1000>;
+ 					type = "hot";
+ 				};
+ 
+ 				trip-point2 {
+-					temperature = <110000>;
+-					hysteresis = <1000>;
++					temperature = <115000>;
++					hysteresis = <0>;
+ 					type = "critical";
+ 				};
+ 			};
+diff --git a/arch/arm64/boot/dts/qcom/sm8750.dtsi b/arch/arm64/boot/dts/qcom/sm8750.dtsi
+index 3bbd7d18598ee0..e8bb587a7813f9 100644
+--- a/arch/arm64/boot/dts/qcom/sm8750.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8750.dtsi
+@@ -233,53 +233,59 @@ psci {
+ 
+ 		cpu_pd0: power-domain-cpu0 {
+ 			#power-domain-cells = <0>;
+-			power-domains = <&cluster_pd>;
++			power-domains = <&cluster0_pd>;
+ 			domain-idle-states = <&cluster0_c4>;
+ 		};
+ 
+ 		cpu_pd1: power-domain-cpu1 {
+ 			#power-domain-cells = <0>;
+-			power-domains = <&cluster_pd>;
++			power-domains = <&cluster0_pd>;
+ 			domain-idle-states = <&cluster0_c4>;
+ 		};
+ 
+ 		cpu_pd2: power-domain-cpu2 {
+ 			#power-domain-cells = <0>;
+-			power-domains = <&cluster_pd>;
++			power-domains = <&cluster0_pd>;
+ 			domain-idle-states = <&cluster0_c4>;
+ 		};
+ 
+ 		cpu_pd3: power-domain-cpu3 {
+ 			#power-domain-cells = <0>;
+-			power-domains = <&cluster_pd>;
++			power-domains = <&cluster0_pd>;
+ 			domain-idle-states = <&cluster0_c4>;
+ 		};
+ 
+ 		cpu_pd4: power-domain-cpu4 {
+ 			#power-domain-cells = <0>;
+-			power-domains = <&cluster_pd>;
++			power-domains = <&cluster0_pd>;
+ 			domain-idle-states = <&cluster0_c4>;
+ 		};
+ 
+ 		cpu_pd5: power-domain-cpu5 {
+ 			#power-domain-cells = <0>;
+-			power-domains = <&cluster_pd>;
++			power-domains = <&cluster0_pd>;
+ 			domain-idle-states = <&cluster0_c4>;
+ 		};
+ 
+ 		cpu_pd6: power-domain-cpu6 {
+ 			#power-domain-cells = <0>;
+-			power-domains = <&cluster_pd>;
++			power-domains = <&cluster1_pd>;
+ 			domain-idle-states = <&cluster1_c4>;
+ 		};
+ 
+ 		cpu_pd7: power-domain-cpu7 {
+ 			#power-domain-cells = <0>;
+-			power-domains = <&cluster_pd>;
++			power-domains = <&cluster1_pd>;
+ 			domain-idle-states = <&cluster1_c4>;
+ 		};
+ 
+-		cluster_pd: power-domain-cluster {
++		cluster0_pd: power-domain-cluster0 {
++			#power-domain-cells = <0>;
++			domain-idle-states = <&cluster_cl5>;
++			power-domains = <&system_pd>;
++		};
++
++		cluster1_pd: power-domain-cluster1 {
+ 			#power-domain-cells = <0>;
+ 			domain-idle-states = <&cluster_cl5>;
+ 			power-domains = <&system_pd>;
+@@ -987,7 +993,7 @@ uart14: serial@898000 {
+ 
+ 				interrupts = <GIC_SPI 461 IRQ_TYPE_LEVEL_HIGH>;
+ 
+-				clocks = <&gcc GCC_QUPV3_WRAP2_S5_CLK>;
++				clocks = <&gcc GCC_QUPV3_WRAP2_S6_CLK>;
+ 				clock-names = "se";
+ 
+ 				interconnects =	<&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
+diff --git a/arch/arm64/boot/dts/qcom/x1e001de-devkit.dts b/arch/arm64/boot/dts/qcom/x1e001de-devkit.dts
+index f5063a0df9fbfa..3cfe42ec089141 100644
+--- a/arch/arm64/boot/dts/qcom/x1e001de-devkit.dts
++++ b/arch/arm64/boot/dts/qcom/x1e001de-devkit.dts
+@@ -790,6 +790,9 @@ typec-mux@8 {
+ 
+ 		reset-gpios = <&tlmm 185 GPIO_ACTIVE_HIGH>;
+ 
++		pinctrl-0 = <&rtmr2_default>;
++		pinctrl-names = "default";
++
+ 		orientation-switch;
+ 		retimer-switch;
+ 
+@@ -845,6 +848,9 @@ typec-mux@8 {
+ 
+ 		reset-gpios = <&pm8550_gpios 10 GPIO_ACTIVE_HIGH>;
+ 
++		pinctrl-0 = <&rtmr0_default>;
++		pinctrl-names = "default";
++
+ 		retimer-switch;
+ 		orientation-switch;
+ 
+@@ -900,6 +906,9 @@ typec-mux@8 {
+ 
+ 		reset-gpios = <&tlmm 176 GPIO_ACTIVE_HIGH>;
+ 
++		pinctrl-0 = <&rtmr1_default>;
++		pinctrl-names = "default";
++
+ 		retimer-switch;
+ 		orientation-switch;
+ 
+@@ -1018,9 +1027,22 @@ &pcie6a_phy {
+ };
+ 
+ &pm8550_gpios {
++	rtmr0_default: rtmr0-reset-n-active-state {
++		pins = "gpio10";
++		function = "normal";
++		power-source = <1>; /* 1.8 V */
++		bias-disable;
++		input-disable;
++		output-enable;
++	};
++
+ 	usb0_3p3_reg_en: usb0-3p3-reg-en-state {
+ 		pins = "gpio11";
+ 		function = "normal";
++		power-source = <1>; /* 1.8 V */
++		bias-disable;
++		input-disable;
++		output-enable;
+ 	};
+ };
+ 
+@@ -1028,6 +1050,10 @@ &pmc8380_5_gpios {
+ 	usb0_pwr_1p15_en: usb0-pwr-1p15-en-state {
+ 		pins = "gpio8";
+ 		function = "normal";
++		power-source = <1>; /* 1.8 V */
++		bias-disable;
++		input-disable;
++		output-enable;
+ 	};
+ };
+ 
+@@ -1035,6 +1061,10 @@ &pm8550ve_9_gpios {
+ 	usb0_1p8_reg_en: usb0-1p8-reg-en-state {
+ 		pins = "gpio8";
+ 		function = "normal";
++		power-source = <1>; /* 1.8 V */
++		bias-disable;
++		input-disable;
++		output-enable;
+ 	};
+ };
+ 
+@@ -1205,6 +1235,20 @@ wake-n-pins {
+ 		};
+ 	};
+ 
++	rtmr1_default: rtmr1-reset-n-active-state {
++		pins = "gpio176";
++		function = "gpio";
++		drive-strength = <2>;
++		bias-disable;
++	};
++
++	rtmr2_default: rtmr2-reset-n-active-state {
++		pins = "gpio185";
++		function = "gpio";
++		drive-strength = <2>;
++		bias-disable;
++	};
++
+ 	rtmr1_1p15_reg_en: rtmr1-1p15-reg-en-state {
+ 		pins = "gpio188";
+ 		function = "gpio";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi b/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
+index 5867953c73564c..6a883fafe3c77a 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
+@@ -510,6 +510,7 @@ vreg_l12b: ldo12 {
+ 			regulator-min-microvolt = <1200000>;
+ 			regulator-max-microvolt = <1200000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
+ 		};
+ 
+ 		vreg_l13b: ldo13 {
+@@ -531,6 +532,7 @@ vreg_l15b: ldo15 {
+ 			regulator-min-microvolt = <1800000>;
+ 			regulator-max-microvolt = <1800000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
+ 		};
+ 
+ 		vreg_l16b: ldo16 {
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index 5aeecf711340d2..607d32f68c3406 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -4815,6 +4815,8 @@ usb_2_dwc3: usb@a200000 {
+ 				snps,dis-u1-entry-quirk;
+ 				snps,dis-u2-entry-quirk;
+ 
++				dma-coherent;
++
+ 				ports {
+ 					#address-cells = <1>;
+ 					#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/renesas/white-hawk-ard-audio-da7212.dtso b/arch/arm64/boot/dts/renesas/white-hawk-ard-audio-da7212.dtso
+index c27b9b3d4e5f4a..f2d53e958da116 100644
+--- a/arch/arm64/boot/dts/renesas/white-hawk-ard-audio-da7212.dtso
++++ b/arch/arm64/boot/dts/renesas/white-hawk-ard-audio-da7212.dtso
+@@ -108,7 +108,7 @@ sound_clk_pins: sound-clk {
+ 	};
+ 
+ 	tpu0_pins: tpu0 {
+-		groups = "tpu_to0_a";
++		groups = "tpu_to0_b";
+ 		function = "tpu";
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/renesas/white-hawk-single.dtsi b/arch/arm64/boot/dts/renesas/white-hawk-single.dtsi
+index 20e8232f2f3234..976a3ab44e5a52 100644
+--- a/arch/arm64/boot/dts/renesas/white-hawk-single.dtsi
++++ b/arch/arm64/boot/dts/renesas/white-hawk-single.dtsi
+@@ -11,6 +11,10 @@
+ / {
+ 	model = "Renesas White Hawk Single board";
+ 	compatible = "renesas,white-hawk-single";
++
++	aliases {
++		ethernet3 = &tsn0;
++	};
+ };
+ 
+ &hscif0 {
+@@ -53,7 +57,7 @@ &tsn0 {
+ 	pinctrl-0 = <&tsn0_pins>;
+ 	pinctrl-names = "default";
+ 	phy-mode = "rgmii";
+-	phy-handle = <&phy3>;
++	phy-handle = <&tsn0_phy>;
+ 	status = "okay";
+ 
+ 	mdio {
+@@ -63,7 +67,7 @@ mdio {
+ 		reset-gpios = <&gpio1 23 GPIO_ACTIVE_LOW>;
+ 		reset-post-delay-us = <4000>;
+ 
+-		phy3: ethernet-phy@0 {
++		tsn0_phy: ethernet-phy@0 {
+ 			compatible = "ethernet-phy-id002b.0980",
+ 				     "ethernet-phy-ieee802.3-c22";
+ 			reg = <0>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts b/arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts
+index f2234dabd66411..70979079923c10 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts
+@@ -312,14 +312,6 @@ &uart2 {
+ 	status = "okay";
+ };
+ 
+-&usb_host0_ehci {
+-	status = "okay";
+-};
+-
+-&usb_host0_ohci {
+-	status = "okay";
+-};
+-
+ &vopb {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+index 314d9dfdba5732..587e89d7fc5e42 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+@@ -585,10 +585,6 @@ &u2phy1 {
+ 	u2phy1_otg: otg-port {
+ 		status = "okay";
+ 	};
+-
+-	u2phy1_host: host-port {
+-		status = "okay";
+-	};
+ };
+ 
+ &usbdrd3_1 {
+@@ -622,11 +618,3 @@ hub_3_0: hub@2 {
+ 		vdd2-supply = <&vcc3v3_sys>;
+ 	};
+ };
+-
+-&usb_host1_ehci {
+-	status = "okay";
+-};
+-
+-&usb_host1_ohci {
+-	status = "okay";
+-};
+diff --git a/arch/arm64/boot/dts/rockchip/rk3528.dtsi b/arch/arm64/boot/dts/rockchip/rk3528.dtsi
+index 26c3559d6a6deb..7f1ffd6003f581 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3528.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3528.dtsi
+@@ -404,9 +404,10 @@ uart2: serial@ffa00000 {
+ 
+ 		uart3: serial@ffa08000 {
+ 			compatible = "rockchip,rk3528-uart", "snps,dw-apb-uart";
++			reg = <0x0 0xffa08000 0x0 0x100>;
+ 			clocks = <&cru SCLK_UART3>, <&cru PCLK_UART3>;
+ 			clock-names = "baudclk", "apb_pclk";
+-			reg = <0x0 0xffa08000 0x0 0x100>;
++			interrupts = <GIC_SPI 43 IRQ_TYPE_LEVEL_HIGH>;
+ 			reg-io-width = <4>;
+ 			reg-shift = <2>;
+ 			status = "disabled";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3566-rock-3c.dts b/arch/arm64/boot/dts/rockchip/rk3566-rock-3c.dts
+index 53e71528e4c4c7..6224d72813e593 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3566-rock-3c.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3566-rock-3c.dts
+@@ -636,6 +636,7 @@ flash@0 {
+ 		spi-max-frequency = <104000000>;
+ 		spi-rx-bus-width = <4>;
+ 		spi-tx-bus-width = <1>;
++		vcc-supply = <&vcc_1v8>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-nanopi-r5s.dtsi b/arch/arm64/boot/dts/rockchip/rk3568-nanopi-r5s.dtsi
+index 00c479aa18711a..a28b4af10d13a2 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-nanopi-r5s.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3568-nanopi-r5s.dtsi
+@@ -486,9 +486,12 @@ &saradc {
+ &sdhci {
+ 	bus-width = <8>;
+ 	max-frequency = <200000000>;
++	mmc-hs200-1_8v;
+ 	non-removable;
+ 	pinctrl-names = "default";
+-	pinctrl-0 = <&emmc_bus8 &emmc_clk &emmc_cmd>;
++	pinctrl-0 = <&emmc_bus8 &emmc_clk &emmc_cmd &emmc_datastrobe>;
++	vmmc-supply = <&vcc_3v3>;
++	vqmmc-supply = <&vcc_1v8>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi b/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
+index 1e18ad93ba0ebd..c52af310c7062e 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
+@@ -439,16 +439,15 @@ xin32k: clock-2 {
+ 		#clock-cells = <0>;
+ 	};
+ 
+-	pmu_sram: sram@10f000 {
+-		compatible = "mmio-sram";
+-		reg = <0x0 0x0010f000 0x0 0x100>;
+-		ranges = <0 0x0 0x0010f000 0x100>;
+-		#address-cells = <1>;
+-		#size-cells = <1>;
++	reserved-memory {
++		#address-cells = <2>;
++		#size-cells = <2>;
++		ranges;
+ 
+-		scmi_shmem: sram@0 {
++		scmi_shmem: shmem@10f000 {
+ 			compatible = "arm,scmi-shmem";
+-			reg = <0x0 0x100>;
++			reg = <0x0 0x0010f000 0x0 0x100>;
++			no-map;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
+index 4421852161dd65..da4e0cacd6d72d 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
+@@ -573,6 +573,7 @@ &usb1 {
+ &ospi1 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&mcu_fss0_ospi1_pins_default>;
++	status = "okay";
+ 
+ 	flash@0 {
+ 		compatible = "jedec,spi-nor";
+diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
+index c4ce2c67c0e067..8e5d4dbd74e50d 100644
+--- a/arch/arm64/configs/defconfig
++++ b/arch/arm64/configs/defconfig
+@@ -1587,6 +1587,9 @@ CONFIG_PHY_HISTB_COMBPHY=y
+ CONFIG_PHY_HISI_INNO_USB2=y
+ CONFIG_PHY_MVEBU_CP110_COMPHY=y
+ CONFIG_PHY_MTK_TPHY=y
++CONFIG_PHY_MTK_HDMI=m
++CONFIG_PHY_MTK_MIPI_DSI=m
++CONFIG_PHY_MTK_DP=m
+ CONFIG_PHY_QCOM_EDP=m
+ CONFIG_PHY_QCOM_PCIE2=m
+ CONFIG_PHY_QCOM_QMP=m
+diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
+index e4f77757937e65..71f0cbf7b28872 100644
+--- a/arch/arm64/include/asm/esr.h
++++ b/arch/arm64/include/asm/esr.h
+@@ -378,12 +378,14 @@
+ /*
+  * ISS values for SME traps
+  */
+-
+-#define ESR_ELx_SME_ISS_SME_DISABLED	0
+-#define ESR_ELx_SME_ISS_ILL		1
+-#define ESR_ELx_SME_ISS_SM_DISABLED	2
+-#define ESR_ELx_SME_ISS_ZA_DISABLED	3
+-#define ESR_ELx_SME_ISS_ZT_DISABLED	4
++#define ESR_ELx_SME_ISS_SMTC_MASK		GENMASK(2, 0)
++#define ESR_ELx_SME_ISS_SMTC(esr)		((esr) & ESR_ELx_SME_ISS_SMTC_MASK)
++
++#define ESR_ELx_SME_ISS_SMTC_SME_DISABLED	0
++#define ESR_ELx_SME_ISS_SMTC_ILL		1
++#define ESR_ELx_SME_ISS_SMTC_SM_DISABLED	2
++#define ESR_ELx_SME_ISS_SMTC_ZA_DISABLED	3
++#define ESR_ELx_SME_ISS_SMTC_ZT_DISABLED	4
+ 
+ /* ISS field definitions for MOPS exceptions */
+ #define ESR_ELx_MOPS_ISS_MEM_INST	(UL(1) << 24)
+diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
+index 8370d55f035334..0e649d0e59b06e 100644
+--- a/arch/arm64/kernel/fpsimd.c
++++ b/arch/arm64/kernel/fpsimd.c
+@@ -359,9 +359,6 @@ static void task_fpsimd_load(void)
+ 	WARN_ON(preemptible());
+ 	WARN_ON(test_thread_flag(TIF_KERNEL_FPSTATE));
+ 
+-	if (system_supports_fpmr())
+-		write_sysreg_s(current->thread.uw.fpmr, SYS_FPMR);
+-
+ 	if (system_supports_sve() || system_supports_sme()) {
+ 		switch (current->thread.fp_type) {
+ 		case FP_STATE_FPSIMD:
+@@ -413,6 +410,9 @@ static void task_fpsimd_load(void)
+ 			restore_ffr = system_supports_fa64();
+ 	}
+ 
++	if (system_supports_fpmr())
++		write_sysreg_s(current->thread.uw.fpmr, SYS_FPMR);
++
+ 	if (restore_sve_regs) {
+ 		WARN_ON_ONCE(current->thread.fp_type != FP_STATE_SVE);
+ 		sve_load_state(sve_pffr(&current->thread),
+@@ -651,7 +651,7 @@ static void __fpsimd_to_sve(void *sst, struct user_fpsimd_state const *fst,
+  * task->thread.uw.fpsimd_state must be up to date before calling this
+  * function.
+  */
+-static void fpsimd_to_sve(struct task_struct *task)
++static inline void fpsimd_to_sve(struct task_struct *task)
+ {
+ 	unsigned int vq;
+ 	void *sst = task->thread.sve_state;
+@@ -675,7 +675,7 @@ static void fpsimd_to_sve(struct task_struct *task)
+  * bytes of allocated kernel memory.
+  * task->thread.sve_state must be up to date before calling this function.
+  */
+-static void sve_to_fpsimd(struct task_struct *task)
++static inline void sve_to_fpsimd(struct task_struct *task)
+ {
+ 	unsigned int vq, vl;
+ 	void const *sst = task->thread.sve_state;
+@@ -1436,7 +1436,7 @@ void do_sme_acc(unsigned long esr, struct pt_regs *regs)
+ 	 * If this not a trap due to SME being disabled then something
+ 	 * is being used in the wrong mode, report as SIGILL.
+ 	 */
+-	if (ESR_ELx_ISS(esr) != ESR_ELx_SME_ISS_SME_DISABLED) {
++	if (ESR_ELx_SME_ISS_SMTC(esr) != ESR_ELx_SME_ISS_SMTC_SME_DISABLED) {
+ 		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
+ 		return;
+ 	}
+@@ -1460,6 +1460,8 @@ void do_sme_acc(unsigned long esr, struct pt_regs *regs)
+ 		sme_set_vq(vq_minus_one);
+ 
+ 		fpsimd_bind_task_to_cpu();
++	} else {
++		fpsimd_flush_task_state(current);
+ 	}
+ 
+ 	put_cpu_fpsimd_context();
+@@ -1573,8 +1575,8 @@ void fpsimd_thread_switch(struct task_struct *next)
+ 		fpsimd_save_user_state();
+ 
+ 	if (test_tsk_thread_flag(next, TIF_KERNEL_FPSTATE)) {
+-		fpsimd_load_kernel_state(next);
+ 		fpsimd_flush_cpu_state();
++		fpsimd_load_kernel_state(next);
+ 	} else {
+ 		/*
+ 		 * Fix up TIF_FOREIGN_FPSTATE to correctly describe next's
+@@ -1661,6 +1663,9 @@ void fpsimd_flush_thread(void)
+ 		current->thread.svcr = 0;
+ 	}
+ 
++	if (system_supports_fpmr())
++		current->thread.uw.fpmr = 0;
++
+ 	current->thread.fp_type = FP_STATE_FPSIMD;
+ 
+ 	put_cpu_fpsimd_context();
+@@ -1801,7 +1806,7 @@ void fpsimd_update_current_state(struct user_fpsimd_state const *state)
+ 	get_cpu_fpsimd_context();
+ 
+ 	current->thread.uw.fpsimd_state = *state;
+-	if (test_thread_flag(TIF_SVE))
++	if (current->thread.fp_type == FP_STATE_SVE)
+ 		fpsimd_to_sve(current);
+ 
+ 	task_fpsimd_load();
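
The esr.h and fpsimd.c hunks above stop comparing the whole ISS and instead extract only the 3-bit SMTC field before classifying the SME trap. A minimal sketch of the masked check, using the GENMASK-based definitions the patch introduces:

	u32 smtc = esr & GENMASK(2, 0);	/* == ESR_ELx_SME_ISS_SMTC(esr) */

	if (smtc != ESR_ELx_SME_ISS_SMTC_SME_DISABLED) {
		/* any other SMTC reason means SME was misused: SIGILL */
		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
		return;
	}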
+diff --git a/arch/arm64/xen/hypercall.S b/arch/arm64/xen/hypercall.S
+index 9d01361696a145..ae551b8571374f 100644
+--- a/arch/arm64/xen/hypercall.S
++++ b/arch/arm64/xen/hypercall.S
+@@ -83,7 +83,26 @@ HYPERCALL3(vcpu_op);
+ HYPERCALL1(platform_op_raw);
+ HYPERCALL2(multicall);
+ HYPERCALL2(vm_assist);
+-HYPERCALL3(dm_op);
++
++SYM_FUNC_START(HYPERVISOR_dm_op)
++	mov x16, #__HYPERVISOR_dm_op
++	/*
++	 * dm_op hypercalls are issued by userspace. The kernel needs to
++	 * enable access to TTBR0_EL1 as the hypervisor would issue stage 1
++	 * translations to user memory via AT instructions. Since AT
++	 * instructions are not affected by the PAN bit (ARMv8.1), we only
++	 * need the explicit uaccess_enable/disable if the TTBR0 PAN emulation
++	 * is enabled (which implies that hardware UAO and PAN are disabled).
++	 */
++	uaccess_ttbr0_enable x6, x7, x8
++	hvc XEN_IMM
++
++	/*
++	 * Disable userspace access from the kernel once the hypercall has completed.
++	 */
++	uaccess_ttbr0_disable x6, x7
++	ret
++SYM_FUNC_END(HYPERVISOR_dm_op);
+ 
+ SYM_FUNC_START(privcmd_call)
+ 	mov x16, x0
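
The open-coded HYPERVISOR_dm_op above exists because dm_op passes userspace buffers that the hypervisor translates with AT instructions, so TTBR0 user access must be open across the hvc when software PAN emulation is in use. The C-level shape of the sequence, with arch_hypercall() and the uaccess pair standing in as pseudocode for the assembly macros:

	long ret;

	uaccess_enable();	/* open TTBR0 so stage 1 walks of user
				 * buffers succeed under SW PAN */
	ret = arch_hypercall(__HYPERVISOR_dm_op, a0, a1, a2);
	uaccess_disable();	/* close the window before returning */

	return ret;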
+diff --git a/arch/m68k/mac/config.c b/arch/m68k/mac/config.c
+index e324410ef239c0..d26c7f4f8c360a 100644
+--- a/arch/m68k/mac/config.c
++++ b/arch/m68k/mac/config.c
+@@ -793,7 +793,7 @@ static void __init mac_identify(void)
+ 	}
+ 
+ 	macintosh_config = mac_data_table;
+-	for (m = macintosh_config; m->ident != -1; m++) {
++	for (m = &mac_data_table[1]; m->ident != -1; m++) {
+ 		if (m->ident == model) {
+ 			macintosh_config = m;
+ 			break;
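
The one-line m68k change is subtle: macintosh_config is preset to the first table entry, which appears to serve as the catch-all default, so the match loop now starts at index 1 instead of re-testing that placeholder. The pattern in isolation, as a sketch with illustrative names:

	struct mac_model *m, *found = &table[0];	/* preset default */

	for (m = &table[1]; m->ident != -1; m++) {	/* skip entry 0 */
		if (m->ident == model) {
			found = m;
			break;
		}
	}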
+diff --git a/arch/mips/boot/dts/loongson/loongson64c_4core_ls7a.dts b/arch/mips/boot/dts/loongson/loongson64c_4core_ls7a.dts
+index c7ea4f1c0bb21f..6c277ab83d4b94 100644
+--- a/arch/mips/boot/dts/loongson/loongson64c_4core_ls7a.dts
++++ b/arch/mips/boot/dts/loongson/loongson64c_4core_ls7a.dts
+@@ -29,6 +29,7 @@ msi: msi-controller@2ff00000 {
+ 		compatible = "loongson,pch-msi-1.0";
+ 		reg = <0 0x2ff00000 0 0x8>;
+ 		interrupt-controller;
++		#interrupt-cells = <1>;
+ 		msi-controller;
+ 		loongson,msi-base-vec = <64>;
+ 		loongson,msi-num-vecs = <64>;
+diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
+index 6ac621155ec3c8..9d1ab3971694ae 100644
+--- a/arch/powerpc/kernel/Makefile
++++ b/arch/powerpc/kernel/Makefile
+@@ -160,9 +160,7 @@ endif
+ 
+ obj64-$(CONFIG_PPC_TRANSACTIONAL_MEM)	+= tm.o
+ 
+-ifneq ($(CONFIG_XMON)$(CONFIG_KEXEC_CORE)(CONFIG_PPC_BOOK3S),)
+ obj-y				+= ppc_save_regs.o
+-endif
+ 
+ obj-$(CONFIG_EPAPR_PARAVIRT)	+= epapr_paravirt.o epapr_hcalls.o
+ obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvm_emul.o
+diff --git a/arch/powerpc/kexec/crash.c b/arch/powerpc/kexec/crash.c
+index 9ac3266e496522..a325c1c02f96dc 100644
+--- a/arch/powerpc/kexec/crash.c
++++ b/arch/powerpc/kexec/crash.c
+@@ -359,7 +359,10 @@ void default_machine_crash_shutdown(struct pt_regs *regs)
+ 	if (TRAP(regs) == INTERRUPT_SYSTEM_RESET)
+ 		is_via_system_reset = 1;
+ 
+-	crash_smp_send_stop();
++	if (IS_ENABLED(CONFIG_SMP))
++		crash_smp_send_stop();
++	else
++		crash_kexec_prepare();
+ 
+ 	crash_save_cpu(regs, crashing_cpu);
+ 
+diff --git a/arch/powerpc/platforms/book3s/vas-api.c b/arch/powerpc/platforms/book3s/vas-api.c
+index 0b6365d85d1171..dc6f75d3ac6ef7 100644
+--- a/arch/powerpc/platforms/book3s/vas-api.c
++++ b/arch/powerpc/platforms/book3s/vas-api.c
+@@ -521,6 +521,15 @@ static int coproc_mmap(struct file *fp, struct vm_area_struct *vma)
+ 		return -EINVAL;
+ 	}
+ 
++	/*
++	 * Map the complete page to the paste address, so user
++	 * space should pass 0ULL as the offset parameter.
++	 */
++	if (vma->vm_pgoff) {
++		pr_debug("Page offset unsupported to map paste address\n");
++		return -EINVAL;
++	}
++
+ 	/* Ensure instance has an open send window */
+ 	if (!txwin) {
+ 		pr_err("No send window open?\n");
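
Seen from userspace, the new vm_pgoff check means the paste window must always be mapped at file offset zero. A hypothetical caller, illustrative names only:

	#include <sys/mman.h>
	#include <unistd.h>

	static void *map_paste_window(int fd)
	{
		long psz = sysconf(_SC_PAGESIZE);

		/* offset must be 0; anything else now fails with EINVAL */
		return mmap(NULL, psz, PROT_READ | PROT_WRITE,
			    MAP_SHARED, fd, 0);
	}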
+diff --git a/arch/powerpc/platforms/powernv/memtrace.c b/arch/powerpc/platforms/powernv/memtrace.c
+index 4ac9808e55a44d..2ea30b34335415 100644
+--- a/arch/powerpc/platforms/powernv/memtrace.c
++++ b/arch/powerpc/platforms/powernv/memtrace.c
+@@ -48,11 +48,15 @@ static ssize_t memtrace_read(struct file *filp, char __user *ubuf,
+ static int memtrace_mmap(struct file *filp, struct vm_area_struct *vma)
+ {
+ 	struct memtrace_entry *ent = filp->private_data;
++	unsigned long ent_nrpages = ent->size >> PAGE_SHIFT;
++	unsigned long vma_nrpages = vma_pages(vma);
+ 
+-	if (ent->size < vma->vm_end - vma->vm_start)
++	/* The requested page offset should be within the object's page count */
++	if (vma->vm_pgoff >= ent_nrpages)
+ 		return -EINVAL;
+ 
+-	if (vma->vm_pgoff << PAGE_SHIFT >= ent->size)
++	/* The requested mapping range should remain within the object's bounds */
++	if (vma_nrpages > ent_nrpages - vma->vm_pgoff)
+ 		return -EINVAL;
+ 
+ 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
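
The rewritten memtrace checks follow the standard overflow-safe form: validate the start page first, then keep the subtraction on the side already known to be in range, so no pgoff-plus-length sum can wrap. Generic sketch of the idiom:

	/* page-based mmap bounds check; avoids the wrap that a naive
	 * "pgoff + want > total" comparison could hit with a huge pgoff */
	if (vma->vm_pgoff >= total_pages)
		return -EINVAL;		/* start lies past the object */
	if (vma_pages(vma) > total_pages - vma->vm_pgoff)
		return -EINVAL;		/* range runs off the end */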
+diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
+index d6ebc19fb99c51..eec333dd2e598c 100644
+--- a/arch/powerpc/platforms/pseries/iommu.c
++++ b/arch/powerpc/platforms/pseries/iommu.c
+@@ -197,7 +197,7 @@ static void tce_iommu_userspace_view_free(struct iommu_table *tbl)
+ 
+ static void tce_free_pSeries(struct iommu_table *tbl)
+ {
+-	if (!tbl->it_userspace)
++	if (tbl->it_userspace)
+ 		tce_iommu_userspace_view_free(tbl);
+ }
+ 
+diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
+index 77c788660223b3..56f06a27d45fb1 100644
+--- a/arch/riscv/kernel/traps_misaligned.c
++++ b/arch/riscv/kernel/traps_misaligned.c
+@@ -455,7 +455,7 @@ static int handle_scalar_misaligned_load(struct pt_regs *regs)
+ 
+ 	val.data_u64 = 0;
+ 	if (user_mode(regs)) {
+-		if (copy_from_user(&val, (u8 __user *)addr, len))
++		if (copy_from_user_nofault(&val, (u8 __user *)addr, len))
+ 			return -1;
+ 	} else {
+ 		memcpy(&val, (u8 *)addr, len);
+@@ -556,7 +556,7 @@ static int handle_scalar_misaligned_store(struct pt_regs *regs)
+ 		return -EOPNOTSUPP;
+ 
+ 	if (user_mode(regs)) {
+-		if (copy_to_user((u8 __user *)addr, &val, len))
++		if (copy_to_user_nofault((u8 __user *)addr, &val, len))
+ 			return -1;
+ 	} else {
+ 		memcpy((u8 *)addr, &val, len);
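
The switch to the _nofault variants matters because this handler runs in exception context, where taking (and possibly sleeping on) a page fault is not safe; the nofault helpers return an error instead of faulting pages in. The rule of thumb, sketched:

	/* exception/atomic context: never fault, just report failure */
	if (copy_from_user_nofault(&val, uaddr, len))
		return -1;		/* caller decides how to recover */

	/* ordinary syscall context: faulting (and sleeping) is fine */
	if (copy_from_user(&val, uaddr, len))
		return -EFAULT;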
+diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
+index d1c83a77735e05..0000ecf49b188b 100644
+--- a/arch/riscv/kvm/vcpu_sbi.c
++++ b/arch/riscv/kvm/vcpu_sbi.c
+@@ -143,9 +143,9 @@ void kvm_riscv_vcpu_sbi_system_reset(struct kvm_vcpu *vcpu,
+ 	struct kvm_vcpu *tmp;
+ 
+ 	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
+-		spin_lock(&vcpu->arch.mp_state_lock);
++		spin_lock(&tmp->arch.mp_state_lock);
+ 		WRITE_ONCE(tmp->arch.mp_state.mp_state, KVM_MP_STATE_STOPPED);
+-		spin_unlock(&vcpu->arch.mp_state_lock);
++		spin_unlock(&tmp->arch.mp_state_lock);
+ 	}
+ 	kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP);
+ 
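
The vcpu_sbi fix is a classic iteration bug: every pass took the requesting vcpu's lock while mutating each tmp vcpu's state, leaving the modified objects unprotected. The corrected pattern in generic form, names illustrative:

	/* lock the object being modified, not a fixed outer object */
	list_for_each_entry(obj, &pool->objects, node) {
		spin_lock(&obj->lock);		/* was: &self->lock */
		obj->state = STOPPED;
		spin_unlock(&obj->lock);
	}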
+diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
+index 9a5d5be8acf41e..d278bf0c09d1b3 100644
+--- a/arch/s390/kernel/uv.c
++++ b/arch/s390/kernel/uv.c
+@@ -15,6 +15,7 @@
+ #include <linux/pagemap.h>
+ #include <linux/swap.h>
+ #include <linux/pagewalk.h>
++#include <linux/backing-dev.h>
+ #include <asm/facility.h>
+ #include <asm/sections.h>
+ #include <asm/uv.h>
+@@ -324,32 +325,87 @@ static int make_folio_secure(struct mm_struct *mm, struct folio *folio, struct u
+ }
+ 
+ /**
+- * s390_wiggle_split_folio() - try to drain extra references to a folio and optionally split.
++ * s390_wiggle_split_folio() - try to drain extra references to a folio and
++ *			       split the folio if it is large.
+  * @mm:    the mm containing the folio to work on
+  * @folio: the folio
+- * @split: whether to split a large folio
+  *
+  * Context: Must be called while holding an extra reference to the folio;
+  *          the mm lock should not be held.
+- * Return: 0 if the folio was split successfully;
+- *         -EAGAIN if the folio was not split successfully but another attempt
+- *                 can be made, or if @split was set to false;
+- *         -EINVAL in case of other errors. See split_folio().
++ * Return: 0 if the operation was successful;
++ *	   -EAGAIN if splitting the large folio was not successful,
++ *		   but another attempt can be made;
++ *	   -EINVAL in case of other folio splitting errors. See split_folio().
+  */
+-static int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool split)
++static int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio)
+ {
+-	int rc;
++	int rc, tried_splits;
+ 
+ 	lockdep_assert_not_held(&mm->mmap_lock);
+ 	folio_wait_writeback(folio);
+ 	lru_add_drain_all();
+-	if (split) {
++
++	if (!folio_test_large(folio))
++		return 0;
++
++	for (tried_splits = 0; tried_splits < 2; tried_splits++) {
++		struct address_space *mapping;
++		loff_t lstart, lend;
++		struct inode *inode;
++
+ 		folio_lock(folio);
+ 		rc = split_folio(folio);
++		if (rc != -EBUSY) {
++			folio_unlock(folio);
++			return rc;
++		}
++
++		/*
++		 * Splitting with -EBUSY can fail for various reasons, but we
++		 * have to handle one case explicitly for now: some mappings
++		 * don't allow for splitting dirty folios; writeback will
++		 * mark them clean again, including marking all page table
++		 * entries mapping the folio read-only, to catch future write
++		 * attempts.
++		 *
++		 * While the system should be writing back dirty folios in the
++		 * background, we obtained this folio by looking up a writable
++		 * page table entry. On these problematic mappings, writable
++		 * page table entries imply dirty folios, preventing the
++		 * split in the first place.
++		 *
++		 * To prevent a livelock when triggering writeback manually and
++		 * letting the caller look up the folio again in the page
++		 * table (turning it dirty), immediately try to split again.
++		 *
++		 * This is only a problem for some mappings (e.g., XFS);
++		 * mappings that do not support writeback (e.g., shmem) do not
++		 * apply.
++		 */
++		if (!folio_test_dirty(folio) || folio_test_anon(folio) ||
++		    !folio->mapping || !mapping_can_writeback(folio->mapping)) {
++			folio_unlock(folio);
++			break;
++		}
++
++		/*
++		 * Ideally, we'd only trigger writeback on this exact folio. But
++		 * there is no easy way to do that, so we'll stabilize the
++		 * mapping while we still hold the folio lock, so we can drop
++		 * the folio lock to trigger writeback on the range currently
++		 * covered by the folio instead.
++		 */
++		mapping = folio->mapping;
++		lstart = folio_pos(folio);
++		lend = lstart + folio_size(folio) - 1;
++		inode = igrab(mapping->host);
+ 		folio_unlock(folio);
+ 
+-		if (rc != -EBUSY)
+-			return rc;
++		if (unlikely(!inode))
++			break;
++
++		filemap_write_and_wait_range(mapping, lstart, lend);
++		iput(mapping->host);
+ 	}
+ 	return -EAGAIN;
+ }
+@@ -393,8 +449,11 @@ int make_hva_secure(struct mm_struct *mm, unsigned long hva, struct uv_cb_header
+ 	folio_walk_end(&fw, vma);
+ 	mmap_read_unlock(mm);
+ 
+-	if (rc == -E2BIG || rc == -EBUSY)
+-		rc = s390_wiggle_split_folio(mm, folio, rc == -E2BIG);
++	if (rc == -E2BIG || rc == -EBUSY) {
++		rc = s390_wiggle_split_folio(mm, folio);
++		if (!rc)
++			rc = -EAGAIN;
++	}
+ 	folio_put(folio);
+ 
+ 	return rc;
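
Condensed, the new retry in s390_wiggle_split_folio() reads: if split_folio() reports -EBUSY for a dirty folio in a writeback-capable mapping, write back exactly the folio's byte range and split again, instead of returning to the caller and letting the re-lookup dirty the folio once more. Roughly, with locking and error paths elided:

	loff_t start = folio_pos(folio);
	loff_t end   = start + folio_size(folio) - 1;

	filemap_write_and_wait_range(folio->mapping, start, end);
	/* ...then folio_lock() and split_folio() once more... */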
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index 0776dfde2dba9c..945106b5562db0 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -605,17 +605,15 @@ static void bpf_jit_prologue(struct bpf_jit *jit, struct bpf_prog *fp,
+ 	}
+ 	/* Setup stack and backchain */
+ 	if (is_first_pass(jit) || (jit->seen & SEEN_STACK)) {
+-		if (is_first_pass(jit) || (jit->seen & SEEN_FUNC))
+-			/* lgr %w1,%r15 (backchain) */
+-			EMIT4(0xb9040000, REG_W1, REG_15);
++		/* lgr %w1,%r15 (backchain) */
++		EMIT4(0xb9040000, REG_W1, REG_15);
+ 		/* la %bfp,STK_160_UNUSED(%r15) (BPF frame pointer) */
+ 		EMIT4_DISP(0x41000000, BPF_REG_FP, REG_15, STK_160_UNUSED);
+ 		/* aghi %r15,-STK_OFF */
+ 		EMIT4_IMM(0xa70b0000, REG_15, -(STK_OFF + stack_depth));
+-		if (is_first_pass(jit) || (jit->seen & SEEN_FUNC))
+-			/* stg %w1,152(%r15) (backchain) */
+-			EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0,
+-				      REG_15, 152);
++		/* stg %w1,152(%r15) (backchain) */
++		EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0,
++			      REG_15, 152);
+ 	}
+ }
+ 
+diff --git a/arch/um/os-Linux/sigio.c b/arch/um/os-Linux/sigio.c
+index a05a6ecee75615..6de145f8fe3d93 100644
+--- a/arch/um/os-Linux/sigio.c
++++ b/arch/um/os-Linux/sigio.c
+@@ -12,6 +12,7 @@
+ #include <signal.h>
+ #include <string.h>
+ #include <sys/epoll.h>
++#include <asm/unistd.h>
+ #include <kern_util.h>
+ #include <init.h>
+ #include <os.h>
+@@ -46,7 +47,7 @@ static void *write_sigio_thread(void *unused)
+ 			       __func__, errno);
+ 		}
+ 
+-		CATCH_EINTR(r = tgkill(pid, pid, SIGIO));
++		CATCH_EINTR(r = syscall(__NR_tgkill, pid, pid, SIGIO));
+ 		if (r < 0)
+ 			printk(UM_KERN_ERR "%s: tgkill failed, errno = %d\n",
+ 			       __func__, errno);
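
UML's host side links against the host libc, and a tgkill() wrapper only appeared in relatively recent glibc (2.30, if memory serves), hence the raw syscall. Standalone equivalent:

	#include <signal.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static int tgkill_compat(pid_t tgid, pid_t tid, int sig)
	{
		return syscall(__NR_tgkill, tgid, tid, sig);
	}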
+diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
+index 49c26ce2b11522..a6fa01ef35a10e 100644
+--- a/arch/x86/events/amd/uncore.c
++++ b/arch/x86/events/amd/uncore.c
+@@ -38,7 +38,6 @@ struct amd_uncore_ctx {
+ 	int refcnt;
+ 	int cpu;
+ 	struct perf_event **events;
+-	struct hlist_node node;
+ };
+ 
+ struct amd_uncore_pmu {
+@@ -890,6 +889,39 @@ static void amd_uncore_umc_start(struct perf_event *event, int flags)
+ 	perf_event_update_userpage(event);
+ }
+ 
++static void amd_uncore_umc_read(struct perf_event *event)
++{
++	struct hw_perf_event *hwc = &event->hw;
++	u64 prev, new, shift;
++	s64 delta;
++
++	shift = COUNTER_SHIFT + 1;
++	prev = local64_read(&hwc->prev_count);
++
++	/*
++	 * UMC counters do not have RDPMC assignments. Read counts directly
++	 * from the corresponding PERF_CTR.
++	 */
++	rdmsrl(hwc->event_base, new);
++
++	/*
++	 * Unlike the other uncore counters, UMC counters saturate and set the
++	 * Overflow bit (bit 48) on overflow. Since they do not roll over,
++	 * proactively reset the corresponding PERF_CTR when bit 47 is set so
++	 * that the counter never gets a chance to saturate.
++	 */
++	if (new & BIT_ULL(63 - COUNTER_SHIFT)) {
++		wrmsrl(hwc->event_base, 0);
++		local64_set(&hwc->prev_count, 0);
++	} else {
++		local64_set(&hwc->prev_count, new);
++	}
++
++	delta = (new << shift) - (prev << shift);
++	delta >>= shift;
++	local64_add(delta, &event->count);
++}
++
+ static
+ void amd_uncore_umc_ctx_scan(struct amd_uncore *uncore, unsigned int cpu)
+ {
+@@ -968,7 +1000,7 @@ int amd_uncore_umc_ctx_init(struct amd_uncore *uncore, unsigned int cpu)
+ 				.del		= amd_uncore_del,
+ 				.start		= amd_uncore_umc_start,
+ 				.stop		= amd_uncore_stop,
+-				.read		= amd_uncore_read,
++				.read		= amd_uncore_umc_read,
+ 				.capabilities	= PERF_PMU_CAP_NO_EXCLUDE | PERF_PMU_CAP_NO_INTERRUPT,
+ 				.module		= THIS_MODULE,
+ 			};
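
The shift arithmetic in amd_uncore_umc_read() is the usual idiom for counters narrower than 64 bits: shifting both samples left discards the stale high bits, and the arithmetic shift back down sign-extends, so the delta stays correct even right after the proactive reset above. In isolation, for a counter that is 64 - shift bits wide:

	static inline s64 counter_delta(u64 prev, u64 cur, int shift)
	{
		/* drop the top 'shift' bits, then sign-extend back down;
		 * relies on arithmetic right shift of s64, as the kernel
		 * does throughout */
		return ((s64)(cur << shift) - (s64)(prev << shift)) >> shift;
	}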
+diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
+index ddeb40930bc802..3ca16e1dbbb833 100644
+--- a/arch/x86/hyperv/hv_init.c
++++ b/arch/x86/hyperv/hv_init.c
+@@ -706,3 +706,36 @@ bool hv_is_hyperv_initialized(void)
+ 	return hypercall_msr.enable;
+ }
+ EXPORT_SYMBOL_GPL(hv_is_hyperv_initialized);
++
++int hv_apicid_to_vp_index(u32 apic_id)
++{
++	u64 control;
++	u64 status;
++	unsigned long irq_flags;
++	struct hv_get_vp_from_apic_id_in *input;
++	u32 *output, ret;
++
++	local_irq_save(irq_flags);
++
++	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
++	memset(input, 0, sizeof(*input));
++	input->partition_id = HV_PARTITION_ID_SELF;
++	input->apic_ids[0] = apic_id;
++
++	output = *this_cpu_ptr(hyperv_pcpu_output_arg);
++
++	control = HV_HYPERCALL_REP_COMP_1 | HVCALL_GET_VP_INDEX_FROM_APIC_ID;
++	status = hv_do_hypercall(control, input, output);
++	ret = output[0];
++
++	local_irq_restore(irq_flags);
++
++	if (!hv_result_success(status)) {
++		pr_err("failed to get vp index from apic id %d, status %#llx\n",
++		       apic_id, status);
++		return -EINVAL;
++	}
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(hv_apicid_to_vp_index);
+diff --git a/arch/x86/hyperv/hv_vtl.c b/arch/x86/hyperv/hv_vtl.c
+index 13242ed8ff16fe..5d59e1e05e6491 100644
+--- a/arch/x86/hyperv/hv_vtl.c
++++ b/arch/x86/hyperv/hv_vtl.c
+@@ -206,41 +206,9 @@ static int hv_vtl_bringup_vcpu(u32 target_vp_index, int cpu, u64 eip_ignored)
+ 	return ret;
+ }
+ 
+-static int hv_vtl_apicid_to_vp_id(u32 apic_id)
+-{
+-	u64 control;
+-	u64 status;
+-	unsigned long irq_flags;
+-	struct hv_get_vp_from_apic_id_in *input;
+-	u32 *output, ret;
+-
+-	local_irq_save(irq_flags);
+-
+-	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+-	memset(input, 0, sizeof(*input));
+-	input->partition_id = HV_PARTITION_ID_SELF;
+-	input->apic_ids[0] = apic_id;
+-
+-	output = *this_cpu_ptr(hyperv_pcpu_output_arg);
+-
+-	control = HV_HYPERCALL_REP_COMP_1 | HVCALL_GET_VP_ID_FROM_APIC_ID;
+-	status = hv_do_hypercall(control, input, output);
+-	ret = output[0];
+-
+-	local_irq_restore(irq_flags);
+-
+-	if (!hv_result_success(status)) {
+-		pr_err("failed to get vp id from apic id %d, status %#llx\n",
+-		       apic_id, status);
+-		return -EINVAL;
+-	}
+-
+-	return ret;
+-}
+-
+ static int hv_vtl_wakeup_secondary_cpu(u32 apicid, unsigned long start_eip)
+ {
+-	int vp_id, cpu;
++	int vp_index, cpu;
+ 
+ 	/* Find the logical CPU for the APIC ID */
+ 	for_each_present_cpu(cpu) {
+@@ -251,18 +219,18 @@ static int hv_vtl_wakeup_secondary_cpu(u32 apicid, unsigned long start_eip)
+ 		return -EINVAL;
+ 
+ 	pr_debug("Bringing up CPU with APIC ID %d in VTL2...\n", apicid);
+-	vp_id = hv_vtl_apicid_to_vp_id(apicid);
++	vp_index = hv_apicid_to_vp_index(apicid);
+ 
+-	if (vp_id < 0) {
++	if (vp_index < 0) {
+ 		pr_err("Couldn't find CPU with APIC ID %d\n", apicid);
+ 		return -EINVAL;
+ 	}
+-	if (vp_id > ms_hyperv.max_vp_index) {
+-		pr_err("Invalid CPU id %d for APIC ID %d\n", vp_id, apicid);
++	if (vp_index > ms_hyperv.max_vp_index) {
++		pr_err("Invalid CPU id %d for APIC ID %d\n", vp_index, apicid);
+ 		return -EINVAL;
+ 	}
+ 
+-	return hv_vtl_bringup_vcpu(vp_id, cpu, start_eip);
++	return hv_vtl_bringup_vcpu(vp_index, cpu, start_eip);
+ }
+ 
+ int __init hv_vtl_early_init(void)
+diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
+index 77bf05f06b9efa..0cc239cdb4dad8 100644
+--- a/arch/x86/hyperv/ivm.c
++++ b/arch/x86/hyperv/ivm.c
+@@ -9,6 +9,7 @@
+ #include <linux/bitfield.h>
+ #include <linux/types.h>
+ #include <linux/slab.h>
++#include <linux/cpu.h>
+ #include <asm/svm.h>
+ #include <asm/sev.h>
+ #include <asm/io.h>
+@@ -288,7 +289,7 @@ static void snp_cleanup_vmsa(struct sev_es_save_area *vmsa)
+ 		free_page((unsigned long)vmsa);
+ }
+ 
+-int hv_snp_boot_ap(u32 cpu, unsigned long start_ip)
++int hv_snp_boot_ap(u32 apic_id, unsigned long start_ip)
+ {
+ 	struct sev_es_save_area *vmsa = (struct sev_es_save_area *)
+ 		__get_free_page(GFP_KERNEL | __GFP_ZERO);
+@@ -297,10 +298,27 @@ int hv_snp_boot_ap(u32 cpu, unsigned long start_ip)
+ 	u64 ret, retry = 5;
+ 	struct hv_enable_vp_vtl *start_vp_input;
+ 	unsigned long flags;
++	int cpu, vp_index;
+ 
+ 	if (!vmsa)
+ 		return -ENOMEM;
+ 
++	/* Find the Hyper-V VP index, which might not be the same as the APIC ID */
++	vp_index = hv_apicid_to_vp_index(apic_id);
++	if (vp_index < 0 || vp_index > ms_hyperv.max_vp_index)
++		return -EINVAL;
++
++	/*
++	 * Find the Linux CPU number for addressing the per-CPU data; it
++	 * might not be the same as the APIC ID.
++	 */
++	for_each_present_cpu(cpu) {
++		if (arch_match_cpu_phys_id(cpu, apic_id))
++			break;
++	}
++	if (cpu >= nr_cpu_ids)
++		return -EINVAL;
++
+ 	native_store_gdt(&gdtr);
+ 
+ 	vmsa->gdtr.base = gdtr.address;
+@@ -348,7 +366,7 @@ int hv_snp_boot_ap(u32 cpu, unsigned long start_ip)
+ 	start_vp_input = (struct hv_enable_vp_vtl *)ap_start_input_arg;
+ 	memset(start_vp_input, 0, sizeof(*start_vp_input));
+ 	start_vp_input->partition_id = -1;
+-	start_vp_input->vp_index = cpu;
++	start_vp_input->vp_index = vp_index;
+ 	start_vp_input->target_vtl.target_vtl = ms_hyperv.vtl;
+ 	*(u64 *)&start_vp_input->vp_context = __pa(vmsa) | 1;
+ 
+diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
+index bab5ccfc60a748..0b9a3a307d0655 100644
+--- a/arch/x86/include/asm/mshyperv.h
++++ b/arch/x86/include/asm/mshyperv.h
+@@ -268,11 +268,11 @@ int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry);
+ #ifdef CONFIG_AMD_MEM_ENCRYPT
+ bool hv_ghcb_negotiate_protocol(void);
+ void __noreturn hv_ghcb_terminate(unsigned int set, unsigned int reason);
+-int hv_snp_boot_ap(u32 cpu, unsigned long start_ip);
++int hv_snp_boot_ap(u32 apic_id, unsigned long start_ip);
+ #else
+ static inline bool hv_ghcb_negotiate_protocol(void) { return false; }
+ static inline void hv_ghcb_terminate(unsigned int set, unsigned int reason) {}
+-static inline int hv_snp_boot_ap(u32 cpu, unsigned long start_ip) { return 0; }
++static inline int hv_snp_boot_ap(u32 apic_id, unsigned long start_ip) { return 0; }
+ #endif
+ 
+ #if defined(CONFIG_AMD_MEM_ENCRYPT) || defined(CONFIG_INTEL_TDX_GUEST)
+@@ -306,6 +306,7 @@ static __always_inline u64 hv_raw_get_msr(unsigned int reg)
+ {
+ 	return __rdmsr(reg);
+ }
++int hv_apicid_to_vp_index(u32 apic_id);
+ 
+ #else /* CONFIG_HYPERV */
+ static inline void hyperv_init(void) {}
+@@ -327,6 +328,7 @@ static inline void hv_set_msr(unsigned int reg, u64 value) { }
+ static inline u64 hv_get_msr(unsigned int reg) { return 0; }
+ static inline void hv_set_non_nested_msr(unsigned int reg, u64 value) { }
+ static inline u64 hv_get_non_nested_msr(unsigned int reg) { return 0; }
++static inline int hv_apicid_to_vp_index(u32 apic_id) { return -EINVAL; }
+ #endif /* CONFIG_HYPERV */
+ 
+ 
+diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
+index ce857ef54cf158..54dc313bcdf018 100644
+--- a/arch/x86/include/asm/mwait.h
++++ b/arch/x86/include/asm/mwait.h
+@@ -116,13 +116,10 @@ static __always_inline void __sti_mwait(unsigned long eax, unsigned long ecx)
+ static __always_inline void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
+ {
+ 	if (static_cpu_has_bug(X86_BUG_MONITOR) || !current_set_polling_and_test()) {
+-		if (static_cpu_has_bug(X86_BUG_CLFLUSH_MONITOR)) {
+-			mb();
+-			clflush((void *)&current_thread_info()->flags);
+-			mb();
+-		}
++		const void *addr = &current_thread_info()->flags;
+ 
+-		__monitor((void *)&current_thread_info()->flags, 0, 0);
++		alternative_input("", "clflush (%[addr])", X86_BUG_CLFLUSH_MONITOR, [addr] "a" (addr));
++		__monitor(addr, 0, 0);
+ 
+ 		if (!need_resched()) {
+ 			if (ecx & 1) {
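
This hunk and the matching one in process.c further down replace a runtime bug check plus a barrier-wrapped clflush with a boot-time patched alternative: on CPUs without X86_BUG_CLFLUSH_MONITOR the patched slot stays empty, so the idle entry pays nothing. Semantically it behaves like the following, except the branch is resolved once during alternatives patching rather than per wakeup:

	if (boot_cpu_has_bug(X86_BUG_CLFLUSH_MONITOR))	/* decided at boot */
		clflush(addr);
	__monitor(addr, 0, 0);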
+diff --git a/arch/x86/include/asm/sighandling.h b/arch/x86/include/asm/sighandling.h
+index e770c4fc47f4c5..8727c7e21dd1e6 100644
+--- a/arch/x86/include/asm/sighandling.h
++++ b/arch/x86/include/asm/sighandling.h
+@@ -24,4 +24,26 @@ int ia32_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs);
+ int x64_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs);
+ int x32_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs);
+ 
++/*
++ * To prevent an immediate repeat of the single-step trap on return from the
++ * SIGTRAP handler when the trap flag (TF) is set without an external debugger attached,
++ * clear the software event flag in the augmented SS, ensuring no single-step
++ * trap is pending upon ERETU completion.
++ *
++ * Note, this function should be called in sigreturn() before the original
++ * state is restored to make sure the TF is read from the entry frame.
++ */
++static __always_inline void prevent_single_step_upon_eretu(struct pt_regs *regs)
++{
++	/*
++	 * If the trap flag (TF) is set, i.e., the sigreturn() SYSCALL instruction
++	 * is being single-stepped, do not clear the software event flag in the
++	 * augmented SS, so that a debugger won't skip over the following instruction.
++	 */
++#ifdef CONFIG_X86_FRED
++	if (!(regs->flags & X86_EFLAGS_TF))
++		regs->fred_ss.swevent = 0;
++#endif
++}
++
+ #endif /* _ASM_X86_SIGHANDLING_H */
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 0ff057ff11ce93..5de4a879232a6c 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1005,17 +1005,18 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
+ 		c->x86_capability[CPUID_D_1_EAX] = eax;
+ 	}
+ 
+-	/* AMD-defined flags: level 0x80000001 */
++	/*
++	 * Check if extended CPUID leaves are implemented: Max extended
++	 * CPUID leaf must be in the 0x80000001-0x8000ffff range.
++	 */
+ 	eax = cpuid_eax(0x80000000);
+-	c->extended_cpuid_level = eax;
++	c->extended_cpuid_level = ((eax & 0xffff0000) == 0x80000000) ? eax : 0;
+ 
+-	if ((eax & 0xffff0000) == 0x80000000) {
+-		if (eax >= 0x80000001) {
+-			cpuid(0x80000001, &eax, &ebx, &ecx, &edx);
++	if (c->extended_cpuid_level >= 0x80000001) {
++		cpuid(0x80000001, &eax, &ebx, &ecx, &edx);
+ 
+-			c->x86_capability[CPUID_8000_0001_ECX] = ecx;
+-			c->x86_capability[CPUID_8000_0001_EDX] = edx;
+-		}
++		c->x86_capability[CPUID_8000_0001_ECX] = ecx;
++		c->x86_capability[CPUID_8000_0001_EDX] = edx;
+ 	}
+ 
+ 	if (c->extended_cpuid_level >= 0x80000007) {
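
The 0xffff0000 mask encodes a CPUID convention: processors without extended leaves may echo arbitrary data for leaf 0x80000000, so only an answer in the 0x8000xxxx range proves the extended space exists. As a helper sketch:

	static u32 max_extended_leaf(void)
	{
		u32 eax = cpuid_eax(0x80000000);

		/* trust only answers inside the extended range */
		return ((eax & 0xffff0000) == 0x80000000) ? eax : 0;
	}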
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index 079f046ee26d19..e8021d3e58824a 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -696,6 +696,8 @@ static int load_late_locked(void)
+ 		return load_late_stop_cpus(true);
+ 	case UCODE_NFOUND:
+ 		return -ENOENT;
++	case UCODE_OK:
++		return 0;
+ 	default:
+ 		return -EBADFD;
+ 	}
+diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
+index e2c6b471d2302a..8c18327eb10bbb 100644
+--- a/arch/x86/kernel/cpu/mtrr/generic.c
++++ b/arch/x86/kernel/cpu/mtrr/generic.c
+@@ -593,7 +593,7 @@ static void get_fixed_ranges(mtrr_type *frs)
+ 
+ void mtrr_save_fixed_ranges(void *info)
+ {
+-	if (boot_cpu_has(X86_FEATURE_MTRR))
++	if (mtrr_state.have_fixed)
+ 		get_fixed_ranges(mtrr_state.fixed_ranges);
+ }
+ 
+diff --git a/arch/x86/kernel/ioport.c b/arch/x86/kernel/ioport.c
+index 6290dd120f5e45..ff40f09ad9116c 100644
+--- a/arch/x86/kernel/ioport.c
++++ b/arch/x86/kernel/ioport.c
+@@ -33,8 +33,9 @@ void io_bitmap_share(struct task_struct *tsk)
+ 	set_tsk_thread_flag(tsk, TIF_IO_BITMAP);
+ }
+ 
+-static void task_update_io_bitmap(struct task_struct *tsk)
++static void task_update_io_bitmap(void)
+ {
++	struct task_struct *tsk = current;
+ 	struct thread_struct *t = &tsk->thread;
+ 
+ 	if (t->iopl_emul == 3 || t->io_bitmap) {
+@@ -54,7 +55,12 @@ void io_bitmap_exit(struct task_struct *tsk)
+ 	struct io_bitmap *iobm = tsk->thread.io_bitmap;
+ 
+ 	tsk->thread.io_bitmap = NULL;
+-	task_update_io_bitmap(tsk);
++	/*
++	 * Don't touch the TSS when invoked on a failed fork(). TSS
++	 * reflects the state of @current and not the state of @tsk.
++	 */
++	if (tsk == current)
++		task_update_io_bitmap();
+ 	if (iobm && refcount_dec_and_test(&iobm->refcnt))
+ 		kfree(iobm);
+ }
+@@ -192,8 +198,7 @@ SYSCALL_DEFINE1(iopl, unsigned int, level)
+ 	}
+ 
+ 	t->iopl_emul = level;
+-	task_update_io_bitmap(current);
+-
++	task_update_io_bitmap();
+ 	return 0;
+ }
+ 
+diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
+index 81f9b78e0f7baa..6cd5d2d6c58af6 100644
+--- a/arch/x86/kernel/irq.c
++++ b/arch/x86/kernel/irq.c
+@@ -419,7 +419,7 @@ static __always_inline bool handle_pending_pir(u64 *pir, struct pt_regs *regs)
+ 	bool handled = false;
+ 
+ 	for (i = 0; i < 4; i++)
+-		pir_copy[i] = pir[i];
++		pir_copy[i] = READ_ONCE(pir[i]);
+ 
+ 	for (i = 0; i < 4; i++) {
+ 		if (!pir_copy[i])
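
The PIR words are written concurrently by hardware posting interrupts, so READ_ONCE() is needed to guarantee each word is fetched with a single, untorn load that the compiler cannot re-issue. The snapshot idiom in isolation:

	u64 snapshot[4];
	int i;

	for (i = 0; i < 4; i++)
		snapshot[i] = READ_ONCE(pir[i]);	/* one load per word */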
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 962c3ce39323e7..4940fcd409251c 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -181,6 +181,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
+ 	frame->ret_addr = (unsigned long) ret_from_fork_asm;
+ 	p->thread.sp = (unsigned long) fork_frame;
+ 	p->thread.io_bitmap = NULL;
++	clear_tsk_thread_flag(p, TIF_IO_BITMAP);
+ 	p->thread.iopl_warn = 0;
+ 	memset(p->thread.ptrace_bps, 0, sizeof(p->thread.ptrace_bps));
+ 
+@@ -469,6 +470,11 @@ void native_tss_update_io_bitmap(void)
+ 	} else {
+ 		struct io_bitmap *iobm = t->io_bitmap;
+ 
++		if (WARN_ON_ONCE(!iobm)) {
++			clear_thread_flag(TIF_IO_BITMAP);
++			native_tss_invalidate_io_bitmap();
++		}
++
+ 		/*
+ 		 * Only copy bitmap data when the sequence number differs. The
+ 		 * update time is accounted to the incoming task.
+@@ -907,13 +913,10 @@ static __init bool prefer_mwait_c1_over_halt(void)
+ static __cpuidle void mwait_idle(void)
+ {
+ 	if (!current_set_polling_and_test()) {
+-		if (this_cpu_has(X86_BUG_CLFLUSH_MONITOR)) {
+-			mb(); /* quirk */
+-			clflush((void *)&current_thread_info()->flags);
+-			mb(); /* quirk */
+-		}
++		const void *addr = &current_thread_info()->flags;
+ 
+-		__monitor((void *)&current_thread_info()->flags, 0, 0);
++		alternative_input("", "clflush (%[addr])", X86_BUG_CLFLUSH_MONITOR, [addr] "a" (addr));
++		__monitor(addr, 0, 0);
+ 		if (!need_resched()) {
+ 			__sti_mwait(0, 0);
+ 			raw_local_irq_disable();
+diff --git a/arch/x86/kernel/signal_32.c b/arch/x86/kernel/signal_32.c
+index 98123ff10506c6..42bbc42bd3503c 100644
+--- a/arch/x86/kernel/signal_32.c
++++ b/arch/x86/kernel/signal_32.c
+@@ -152,6 +152,8 @@ SYSCALL32_DEFINE0(sigreturn)
+ 	struct sigframe_ia32 __user *frame = (struct sigframe_ia32 __user *)(regs->sp-8);
+ 	sigset_t set;
+ 
++	prevent_single_step_upon_eretu(regs);
++
+ 	if (!access_ok(frame, sizeof(*frame)))
+ 		goto badframe;
+ 	if (__get_user(set.sig[0], &frame->sc.oldmask)
+@@ -175,6 +177,8 @@ SYSCALL32_DEFINE0(rt_sigreturn)
+ 	struct rt_sigframe_ia32 __user *frame;
+ 	sigset_t set;
+ 
++	prevent_single_step_upon_eretu(regs);
++
+ 	frame = (struct rt_sigframe_ia32 __user *)(regs->sp - 4);
+ 
+ 	if (!access_ok(frame, sizeof(*frame)))
+diff --git a/arch/x86/kernel/signal_64.c b/arch/x86/kernel/signal_64.c
+index ee9453891901b7..d483b585c6c604 100644
+--- a/arch/x86/kernel/signal_64.c
++++ b/arch/x86/kernel/signal_64.c
+@@ -250,6 +250,8 @@ SYSCALL_DEFINE0(rt_sigreturn)
+ 	sigset_t set;
+ 	unsigned long uc_flags;
+ 
++	prevent_single_step_upon_eretu(regs);
++
+ 	frame = (struct rt_sigframe __user *)(regs->sp - sizeof(long));
+ 	if (!access_ok(frame, sizeof(*frame)))
+ 		goto badframe;
+@@ -366,6 +368,8 @@ COMPAT_SYSCALL_DEFINE0(x32_rt_sigreturn)
+ 	sigset_t set;
+ 	unsigned long uc_flags;
+ 
++	prevent_single_step_upon_eretu(regs);
++
+ 	frame = (struct rt_sigframe_x32 __user *)(regs->sp - 8);
+ 
+ 	if (!access_ok(frame, sizeof(*frame)))
+diff --git a/arch/x86/lib/x86-opcode-map.txt b/arch/x86/lib/x86-opcode-map.txt
+index f5dd84eb55dcda..cd3fd5155f6ece 100644
+--- a/arch/x86/lib/x86-opcode-map.txt
++++ b/arch/x86/lib/x86-opcode-map.txt
+@@ -35,7 +35,7 @@
+ #  - (!F3) : the last prefix is not 0xF3 (including non-last prefix case)
+ #  - (66&F2): Both 0x66 and 0xF2 prefixes are specified.
+ #
+-# REX2 Prefix
++# REX2 Prefix Superscripts
+ #  - (!REX2): REX2 is not allowed
+ #  - (REX2): REX2 variant e.g. JMPABS
+ 
+@@ -286,10 +286,10 @@ df: ESC
+ # Note: "forced64" is Intel CPU behavior: they ignore 0x66 prefix
+ # in 64-bit mode. AMD CPUs accept 0x66 prefix, it causes RIP truncation
+ # to 16 bits. In 32-bit mode, 0x66 is accepted by both Intel and AMD.
+-e0: LOOPNE/LOOPNZ Jb (f64) (!REX2)
+-e1: LOOPE/LOOPZ Jb (f64) (!REX2)
+-e2: LOOP Jb (f64) (!REX2)
+-e3: JrCXZ Jb (f64) (!REX2)
++e0: LOOPNE/LOOPNZ Jb (f64),(!REX2)
++e1: LOOPE/LOOPZ Jb (f64),(!REX2)
++e2: LOOP Jb (f64),(!REX2)
++e3: JrCXZ Jb (f64),(!REX2)
+ e4: IN AL,Ib (!REX2)
+ e5: IN eAX,Ib (!REX2)
+ e6: OUT Ib,AL (!REX2)
+@@ -298,10 +298,10 @@ e7: OUT Ib,eAX (!REX2)
+ # in "near" jumps and calls is 16-bit. For CALL,
+ # push of return address is 16-bit wide, RSP is decremented by 2
+ # but is not truncated to 16 bits, unlike RIP.
+-e8: CALL Jz (f64) (!REX2)
+-e9: JMP-near Jz (f64) (!REX2)
+-ea: JMP-far Ap (i64) (!REX2)
+-eb: JMP-short Jb (f64) (!REX2)
++e8: CALL Jz (f64),(!REX2)
++e9: JMP-near Jz (f64),(!REX2)
++ea: JMP-far Ap (i64),(!REX2)
++eb: JMP-short Jb (f64),(!REX2)
+ ec: IN AL,DX (!REX2)
+ ed: IN eAX,DX (!REX2)
+ ee: OUT DX,AL (!REX2)
+@@ -478,22 +478,22 @@ AVXcode: 1
+ 7f: movq Qq,Pq | vmovdqa Wx,Vx (66) | vmovdqa32/64 Wx,Vx (66),(evo) | vmovdqu Wx,Vx (F3) | vmovdqu32/64 Wx,Vx (F3),(evo) | vmovdqu8/16 Wx,Vx (F2),(ev)
+ # 0x0f 0x80-0x8f
+ # Note: "forced64" is Intel CPU behavior (see comment about CALL insn).
+-80: JO Jz (f64) (!REX2)
+-81: JNO Jz (f64) (!REX2)
+-82: JB/JC/JNAE Jz (f64) (!REX2)
+-83: JAE/JNB/JNC Jz (f64) (!REX2)
+-84: JE/JZ Jz (f64) (!REX2)
+-85: JNE/JNZ Jz (f64) (!REX2)
+-86: JBE/JNA Jz (f64) (!REX2)
+-87: JA/JNBE Jz (f64) (!REX2)
+-88: JS Jz (f64) (!REX2)
+-89: JNS Jz (f64) (!REX2)
+-8a: JP/JPE Jz (f64) (!REX2)
+-8b: JNP/JPO Jz (f64) (!REX2)
+-8c: JL/JNGE Jz (f64) (!REX2)
+-8d: JNL/JGE Jz (f64) (!REX2)
+-8e: JLE/JNG Jz (f64) (!REX2)
+-8f: JNLE/JG Jz (f64) (!REX2)
++80: JO Jz (f64),(!REX2)
++81: JNO Jz (f64),(!REX2)
++82: JB/JC/JNAE Jz (f64),(!REX2)
++83: JAE/JNB/JNC Jz (f64),(!REX2)
++84: JE/JZ Jz (f64),(!REX2)
++85: JNE/JNZ Jz (f64),(!REX2)
++86: JBE/JNA Jz (f64),(!REX2)
++87: JA/JNBE Jz (f64),(!REX2)
++88: JS Jz (f64),(!REX2)
++89: JNS Jz (f64),(!REX2)
++8a: JP/JPE Jz (f64),(!REX2)
++8b: JNP/JPO Jz (f64),(!REX2)
++8c: JL/JNGE Jz (f64),(!REX2)
++8d: JNL/JGE Jz (f64),(!REX2)
++8e: JLE/JNG Jz (f64),(!REX2)
++8f: JNLE/JG Jz (f64),(!REX2)
+ # 0x0f 0x90-0x9f
+ 90: SETO Eb | kmovw/q Vk,Wk | kmovb/d Vk,Wk (66)
+ 91: SETNO Eb | kmovw/q Mv,Vk | kmovb/d Mv,Vk (66)
+diff --git a/block/blk-integrity.c b/block/blk-integrity.c
+index a1678f0a9f81f9..e4e2567061f9db 100644
+--- a/block/blk-integrity.c
++++ b/block/blk-integrity.c
+@@ -117,13 +117,8 @@ int blk_rq_integrity_map_user(struct request *rq, void __user *ubuf,
+ {
+ 	int ret;
+ 	struct iov_iter iter;
+-	unsigned int direction;
+ 
+-	if (op_is_write(req_op(rq)))
+-		direction = ITER_DEST;
+-	else
+-		direction = ITER_SOURCE;
+-	iov_iter_ubuf(&iter, direction, ubuf, bytes);
++	iov_iter_ubuf(&iter, rq_data_dir(rq), ubuf, bytes);
+ 	ret = bio_integrity_map_user(rq->bio, &iter);
+ 	if (ret)
+ 		return ret;
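
Why rq_data_dir() gives the correct iter direction, as far as the constants in <linux/uio.h> go (worth double-checking): READ == 0 == ITER_DEST and WRITE == 1 == ITER_SOURCE, so the write case now maps to ITER_SOURCE where the removed op_is_write() branch mapped it to ITER_DEST:

	/* READ  == 0 == ITER_DEST    device -> user buffer
	 * WRITE == 1 == ITER_SOURCE  user buffer -> device */
	iov_iter_ubuf(&iter, rq_data_dir(rq), ubuf, bytes);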
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index d6dd2e04787491..7437de947120ed 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -644,6 +644,18 @@ static void __tg_update_carryover(struct throtl_grp *tg, bool rw,
+ 	u64 bps_limit = tg_bps_limit(tg, rw);
+ 	u32 iops_limit = tg_iops_limit(tg, rw);
+ 
++	/*
++	 * If the queue is empty, carryover handling is not needed. In such cases,
++	 * tg->[bytes/io]_disp should be reset to 0 to avoid impacting the dispatch
++	 * of subsequent bios. The same handling applies when the previous BPS/IOPS
++	 * limit was set to max.
++	 */
++	if (tg->service_queue.nr_queued[rw] == 0) {
++		tg->bytes_disp[rw] = 0;
++		tg->io_disp[rw] = 0;
++		return;
++	}
++
+ 	/*
+ 	 * If config is updated while bios are still throttled, calculate and
+ 	 * accumulate how many bytes/ios are waited across changes. And
+@@ -656,8 +668,8 @@ static void __tg_update_carryover(struct throtl_grp *tg, bool rw,
+ 	if (iops_limit != UINT_MAX)
+ 		*ios = calculate_io_allowed(iops_limit, jiffy_elapsed) -
+ 			tg->io_disp[rw];
+-	tg->bytes_disp[rw] -= *bytes;
+-	tg->io_disp[rw] -= *ios;
++	tg->bytes_disp[rw] = -*bytes;
++	tg->io_disp[rw] = -*ios;
+ }
+ 
+ static void tg_update_carryover(struct throtl_grp *tg)
+@@ -665,10 +677,8 @@ static void tg_update_carryover(struct throtl_grp *tg)
+ 	long long bytes[2] = {0};
+ 	int ios[2] = {0};
+ 
+-	if (tg->service_queue.nr_queued[READ])
+-		__tg_update_carryover(tg, READ, &bytes[READ], &ios[READ]);
+-	if (tg->service_queue.nr_queued[WRITE])
+-		__tg_update_carryover(tg, WRITE, &bytes[WRITE], &ios[WRITE]);
++	__tg_update_carryover(tg, READ, &bytes[READ], &ios[READ]);
++	__tg_update_carryover(tg, WRITE, &bytes[WRITE], &ios[WRITE]);
+ 
+ 	/* see comments in struct throtl_grp for meaning of these fields. */
+ 	throtl_log(&tg->service_queue, "%s: %lld %lld %d %d\n", __func__,
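
A worked example of why the assignment replaces the subtraction: suppose 100 bytes of budget were already waited when the limit changes. The carryover should make bytes_disp exactly -100, once. With the old "-=", a second limit change while bios were still queued would stack to -200, -300, and so on, granting ever more spurious credit:

	tg->bytes_disp[rw] = -*bytes;	/* absolute carryover */
	tg->io_disp[rw]    = -*ios;	/* not cumulative */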
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 8f15d1aa6eb89a..45c91016cef38a 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -1306,7 +1306,6 @@ static void blk_zone_wplug_bio_work(struct work_struct *work)
+ 	spin_unlock_irqrestore(&zwplug->lock, flags);
+ 
+ 	bdev = bio->bi_bdev;
+-	submit_bio_noacct_nocheck(bio);
+ 
+ 	/*
+ 	 * blk-mq devices will reuse the extra reference on the request queue
+@@ -1314,8 +1313,12 @@ static void blk_zone_wplug_bio_work(struct work_struct *work)
+ 	 * path for BIO-based devices will not do that. So drop this extra
+ 	 * reference here.
+ 	 */
+-	if (bdev_test_flag(bdev, BD_HAS_SUBMIT_BIO))
++	if (bdev_test_flag(bdev, BD_HAS_SUBMIT_BIO)) {
++		bdev->bd_disk->fops->submit_bio(bio);
+ 		blk_queue_exit(bdev->bd_disk->queue);
++	} else {
++		blk_mq_submit_bio(bio);
++	}
+ 
+ put_zwplug:
+ 	/* Drop the reference we took in disk_zone_wplug_schedule_bio_work(). */
+diff --git a/block/elevator.c b/block/elevator.c
+index b4d08026b02cef..dc4cadef728e55 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -744,7 +744,6 @@ ssize_t elv_iosched_store(struct gendisk *disk, const char *buf,
+ ssize_t elv_iosched_show(struct gendisk *disk, char *name)
+ {
+ 	struct request_queue *q = disk->queue;
+-	struct elevator_queue *eq = q->elevator;
+ 	struct elevator_type *cur = NULL, *e;
+ 	int len = 0;
+ 
+@@ -753,7 +752,7 @@ ssize_t elv_iosched_show(struct gendisk *disk, char *name)
+ 		len += sprintf(name+len, "[none] ");
+ 	} else {
+ 		len += sprintf(name+len, "none ");
+-		cur = eq->type;
++		cur = q->elevator->type;
+ 	}
+ 
+ 	spin_lock(&elv_list_lock);
+diff --git a/crypto/api.c b/crypto/api.c
+index 3416e98128a059..8592d3dccc64e6 100644
+--- a/crypto/api.c
++++ b/crypto/api.c
+@@ -220,10 +220,19 @@ static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg,
+ 		if (crypto_is_test_larval(larval))
+ 			crypto_larval_kill(larval);
+ 		alg = ERR_PTR(-ETIMEDOUT);
+-	} else if (!alg) {
++	} else if (!alg || PTR_ERR(alg) == -EEXIST) {
++		int err = alg ? -EEXIST : -EAGAIN;
++
++		/*
++		 * EEXIST is expected because two probes can be scheduled
++		 * at the same time with one using alg_name and the other
++		 * using driver_name.  Do a re-lookup but do not retry in
++		 * case we hit a quirk like gcm_base(ctr(aes),...) which
++		 * will never match.
++		 */
+ 		alg = &larval->alg;
+ 		alg = crypto_alg_lookup(alg->cra_name, type, mask) ?:
+-		      ERR_PTR(-EAGAIN);
++		      ERR_PTR(err);
+ 	} else if (IS_ERR(alg))
+ 		;
+ 	else if (crypto_is_test_larval(larval) &&
+diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
+index bf165d321440d5..89dc887d2c5c7e 100644
+--- a/crypto/asymmetric_keys/public_key.c
++++ b/crypto/asymmetric_keys/public_key.c
+@@ -188,6 +188,8 @@ static int software_key_query(const struct kernel_pkey_params *params,
+ 	ptr = pkey_pack_u32(ptr, pkey->paramlen);
+ 	memcpy(ptr, pkey->params, pkey->paramlen);
+ 
++	memset(info, 0, sizeof(*info));
++
+ 	if (issig) {
+ 		sig = crypto_alloc_sig(alg_name, 0, 0);
+ 		if (IS_ERR(sig)) {
+@@ -203,6 +205,7 @@ static int software_key_query(const struct kernel_pkey_params *params,
+ 			goto error_free_tfm;
+ 
+ 		len = crypto_sig_keysize(sig);
++		info->key_size = len;
+ 		info->max_sig_size = crypto_sig_maxsize(sig);
+ 		info->max_data_size = crypto_sig_digestsize(sig);
+ 
+@@ -211,6 +214,9 @@ static int software_key_query(const struct kernel_pkey_params *params,
+ 			info->supported_ops |= KEYCTL_SUPPORTS_SIGN;
+ 
+ 		if (strcmp(params->encoding, "pkcs1") == 0) {
++			info->max_enc_size = len / BITS_PER_BYTE;
++			info->max_dec_size = len / BITS_PER_BYTE;
++
+ 			info->supported_ops |= KEYCTL_SUPPORTS_ENCRYPT;
+ 			if (pkey->key_is_private)
+ 				info->supported_ops |= KEYCTL_SUPPORTS_DECRYPT;
+@@ -230,18 +236,17 @@ static int software_key_query(const struct kernel_pkey_params *params,
+ 			goto error_free_tfm;
+ 
+ 		len = crypto_akcipher_maxsize(tfm);
++		info->key_size = len * BITS_PER_BYTE;
+ 		info->max_sig_size = len;
+ 		info->max_data_size = len;
++		info->max_enc_size = len;
++		info->max_dec_size = len;
+ 
+ 		info->supported_ops = KEYCTL_SUPPORTS_ENCRYPT;
+ 		if (pkey->key_is_private)
+ 			info->supported_ops |= KEYCTL_SUPPORTS_DECRYPT;
+ 	}
+ 
+-	info->key_size = len * 8;
+-	info->max_enc_size = len;
+-	info->max_dec_size = len;
+-
+ 	ret = 0;
+ 
+ error_free_tfm:
+diff --git a/crypto/ecdsa-p1363.c b/crypto/ecdsa-p1363.c
+index 4454f1f8f33f58..e0c55c64711c83 100644
+--- a/crypto/ecdsa-p1363.c
++++ b/crypto/ecdsa-p1363.c
+@@ -21,7 +21,8 @@ static int ecdsa_p1363_verify(struct crypto_sig *tfm,
+ 			      const void *digest, unsigned int dlen)
+ {
+ 	struct ecdsa_p1363_ctx *ctx = crypto_sig_ctx(tfm);
+-	unsigned int keylen = crypto_sig_keysize(ctx->child);
++	unsigned int keylen = DIV_ROUND_UP_POW2(crypto_sig_keysize(ctx->child),
++						BITS_PER_BYTE);
+ 	unsigned int ndigits = DIV_ROUND_UP_POW2(keylen, sizeof(u64));
+ 	struct ecdsa_raw_sig sig;
+ 
+@@ -45,7 +46,8 @@ static unsigned int ecdsa_p1363_max_size(struct crypto_sig *tfm)
+ {
+ 	struct ecdsa_p1363_ctx *ctx = crypto_sig_ctx(tfm);
+ 
+-	return 2 * crypto_sig_keysize(ctx->child);
++	return 2 * DIV_ROUND_UP_POW2(crypto_sig_keysize(ctx->child),
++				     BITS_PER_BYTE);
+ }
+ 
+ static unsigned int ecdsa_p1363_digest_size(struct crypto_sig *tfm)
+diff --git a/crypto/ecdsa-x962.c b/crypto/ecdsa-x962.c
+index 90a04f4b9a2f55..ee71594d10a069 100644
+--- a/crypto/ecdsa-x962.c
++++ b/crypto/ecdsa-x962.c
+@@ -82,7 +82,7 @@ static int ecdsa_x962_verify(struct crypto_sig *tfm,
+ 	int err;
+ 
+ 	sig_ctx.ndigits = DIV_ROUND_UP_POW2(crypto_sig_keysize(ctx->child),
+-					    sizeof(u64));
++					    sizeof(u64) * BITS_PER_BYTE);
+ 
+ 	err = asn1_ber_decoder(&ecdsasignature_decoder, &sig_ctx, src, slen);
+ 	if (err < 0)
+@@ -103,7 +103,8 @@ static unsigned int ecdsa_x962_max_size(struct crypto_sig *tfm)
+ {
+ 	struct ecdsa_x962_ctx *ctx = crypto_sig_ctx(tfm);
+ 	struct sig_alg *alg = crypto_sig_alg(ctx->child);
+-	int slen = crypto_sig_keysize(ctx->child);
++	int slen = DIV_ROUND_UP_POW2(crypto_sig_keysize(ctx->child),
++				     BITS_PER_BYTE);
+ 
+ 	/*
+ 	 * Verify takes ECDSA-Sig-Value (described in RFC 5480) as input,
+diff --git a/crypto/ecdsa.c b/crypto/ecdsa.c
+index 117526d15ddebf..a70b60a90a3c76 100644
+--- a/crypto/ecdsa.c
++++ b/crypto/ecdsa.c
+@@ -167,7 +167,7 @@ static unsigned int ecdsa_key_size(struct crypto_sig *tfm)
+ {
+ 	struct ecc_ctx *ctx = crypto_sig_ctx(tfm);
+ 
+-	return DIV_ROUND_UP(ctx->curve->nbits, 8);
++	return ctx->curve->nbits;
+ }
+ 
+ static unsigned int ecdsa_digest_size(struct crypto_sig *tfm)
+diff --git a/crypto/ecrdsa.c b/crypto/ecrdsa.c
+index b3dd8a3ddeb796..2c0602f0cd406f 100644
+--- a/crypto/ecrdsa.c
++++ b/crypto/ecrdsa.c
+@@ -249,7 +249,7 @@ static unsigned int ecrdsa_key_size(struct crypto_sig *tfm)
+ 	 * Verify doesn't need any output, so it's just informational
+ 	 * for keyctl to determine the key bit size.
+ 	 */
+-	return ctx->pub_key.ndigits * sizeof(u64);
++	return ctx->pub_key.ndigits * sizeof(u64) * BITS_PER_BYTE;
+ }
+ 
+ static unsigned int ecrdsa_max_size(struct crypto_sig *tfm)
+diff --git a/crypto/krb5/rfc3961_simplified.c b/crypto/krb5/rfc3961_simplified.c
+index 79180d28baa9fb..e49cbdec7c404d 100644
+--- a/crypto/krb5/rfc3961_simplified.c
++++ b/crypto/krb5/rfc3961_simplified.c
+@@ -89,6 +89,7 @@ int crypto_shash_update_sg(struct shash_desc *desc, struct scatterlist *sg,
+ 
+ 	sg_miter_start(&miter, sg, sg_nents(sg),
+ 		       SG_MITER_FROM_SG | SG_MITER_LOCAL);
++	sg_miter_skip(&miter, offset);
+ 	for (i = 0; i < len; i += n) {
+ 		sg_miter_next(&miter);
+ 		n = min(miter.length, len - i);
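
Before the fix, the miter walk always began at the head of the scatterlist, silently ignoring the caller's offset; sg_miter_skip() advances the iterator before any bytes are hashed:

	sg_miter_start(&miter, sg, sg_nents(sg),
		       SG_MITER_FROM_SG | SG_MITER_LOCAL);
	sg_miter_skip(&miter, offset);	/* honour the requested offset */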
+diff --git a/crypto/lrw.c b/crypto/lrw.c
+index 391ae0f7641ff9..15f579a768614d 100644
+--- a/crypto/lrw.c
++++ b/crypto/lrw.c
+@@ -322,7 +322,7 @@ static int lrw_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 
+ 	err = crypto_grab_skcipher(spawn, skcipher_crypto_instance(inst),
+ 				   cipher_name, 0, mask);
+-	if (err == -ENOENT) {
++	if (err == -ENOENT && memcmp(cipher_name, "ecb(", 4)) {
+ 		err = -ENAMETOOLONG;
+ 		if (snprintf(ecb_name, CRYPTO_MAX_ALG_NAME, "ecb(%s)",
+ 			     cipher_name) >= CRYPTO_MAX_ALG_NAME)
+@@ -356,7 +356,7 @@ static int lrw_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	/* Alas we screwed up the naming so we have to mangle the
+ 	 * cipher name.
+ 	 */
+-	if (!strncmp(cipher_name, "ecb(", 4)) {
++	if (!memcmp(cipher_name, "ecb(", 4)) {
+ 		int len;
+ 
+ 		len = strscpy(ecb_name, cipher_name + 4, sizeof(ecb_name));
+diff --git a/crypto/rsassa-pkcs1.c b/crypto/rsassa-pkcs1.c
+index d01ac75635e008..94fa5e9600e79d 100644
+--- a/crypto/rsassa-pkcs1.c
++++ b/crypto/rsassa-pkcs1.c
+@@ -301,7 +301,7 @@ static unsigned int rsassa_pkcs1_key_size(struct crypto_sig *tfm)
+ {
+ 	struct rsassa_pkcs1_ctx *ctx = crypto_sig_ctx(tfm);
+ 
+-	return ctx->key_size;
++	return ctx->key_size * BITS_PER_BYTE;
+ }
+ 
+ static int rsassa_pkcs1_set_pub_key(struct crypto_sig *tfm,
+diff --git a/crypto/sig.c b/crypto/sig.c
+index dfc7cae9080282..53a3dd6fbe3fe6 100644
+--- a/crypto/sig.c
++++ b/crypto/sig.c
+@@ -102,6 +102,11 @@ static int sig_default_set_key(struct crypto_sig *tfm,
+ 	return -ENOSYS;
+ }
+ 
++static unsigned int sig_default_size(struct crypto_sig *tfm)
++{
++	return DIV_ROUND_UP_POW2(crypto_sig_keysize(tfm), BITS_PER_BYTE);
++}
++
+ static int sig_prepare_alg(struct sig_alg *alg)
+ {
+ 	struct crypto_alg *base = &alg->base;
+@@ -117,9 +122,9 @@ static int sig_prepare_alg(struct sig_alg *alg)
+ 	if (!alg->key_size)
+ 		return -EINVAL;
+ 	if (!alg->max_size)
+-		alg->max_size = alg->key_size;
++		alg->max_size = sig_default_size;
+ 	if (!alg->digest_size)
+-		alg->digest_size = alg->key_size;
++		alg->digest_size = sig_default_size;
+ 
+ 	base->cra_type = &crypto_sig_type;
+ 	base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
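
The crypto/sig.c change above is the pivot for the earlier ecdsa, ecrdsa and
rsassa-pkcs1 hunks: crypto_sig_keysize() now reports the key size in bits, and
the new sig_default_size() converts back to bytes for max_size/digest_size. A
minimal standalone sketch of that conversion, assuming DIV_ROUND_UP_POW2() is
ordinary round-up division by a power of two:

#include <stdio.h>

#define BITS_PER_BYTE 8
/* Mirrors the rounding semantics of the kernel's DIV_ROUND_UP_POW2()
 * (assumed here: plain round-up division by a power-of-two divisor). */
#define DIV_ROUND_UP_POW2(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	/* crypto_sig_keysize() now reports bits, e.g. 521 for NIST P-521. */
	unsigned int bits[] = { 256, 384, 521 };

	for (unsigned int i = 0; i < 3; i++)
		printf("%u bits -> %u bytes\n", bits[i],
		       DIV_ROUND_UP_POW2(bits[i], BITS_PER_BYTE));
	/* 256 -> 32, 384 -> 48, 521 -> 66: the bit count keeps the P-521
	 * precision that a byte count would round away. */
	return 0;
}
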
+diff --git a/crypto/xts.c b/crypto/xts.c
+index 31529c9ef08f8f..46b7c70ea54bbf 100644
+--- a/crypto/xts.c
++++ b/crypto/xts.c
+@@ -363,7 +363,7 @@ static int xts_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 
+ 	err = crypto_grab_skcipher(&ctx->spawn, skcipher_crypto_instance(inst),
+ 				   cipher_name, 0, mask);
+-	if (err == -ENOENT) {
++	if (err == -ENOENT && memcmp(cipher_name, "ecb(", 4)) {
+ 		err = -ENAMETOOLONG;
+ 		if (snprintf(name, CRYPTO_MAX_ALG_NAME, "ecb(%s)",
+ 			     cipher_name) >= CRYPTO_MAX_ALG_NAME)
+@@ -397,7 +397,7 @@ static int xts_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	/* Alas we screwed up the naming so we have to mangle the
+ 	 * cipher name.
+ 	 */
+-	if (!strncmp(cipher_name, "ecb(", 4)) {
++	if (!memcmp(cipher_name, "ecb(", 4)) {
+ 		int len;
+ 
+ 		len = strscpy(name, cipher_name + 4, sizeof(name));
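
The xts.c hunks, like the lrw.c ones earlier, only fall back to wrapping the
requested cipher in "ecb(...)" when the name does not already carry that
prefix, and switch the prefix test from strncmp() to a fixed 4-byte memcmp().
A small sketch of the check, assuming the names live in NUL-padded,
CRYPTO_MAX_ALG_NAME-sized buffers so the 4-byte read is always in bounds:

#include <stdio.h>
#include <string.h>

/* Fixed 4-byte prefix compare. The array parameter stands in for a
 * NUL-padded name buffer of known minimum size (an assumption made
 * explicit here; a bare C string shorter than 4 bytes would not
 * permit a fixed-length memcmp). */
static int is_ecb_wrapped(const char name[128])
{
	return !memcmp(name, "ecb(", 4);
}

int main(void)
{
	char a[128] = "ecb(aes)", b[128] = "aes";

	printf("%d %d\n", is_ecb_wrapped(a), is_ecb_wrapped(b)); /* 1 0 */
	return 0;
}
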
+diff --git a/drivers/accel/amdxdna/aie2_message.c b/drivers/accel/amdxdna/aie2_message.c
+index bf4219e32cc19d..82412eec9a4b8b 100644
+--- a/drivers/accel/amdxdna/aie2_message.c
++++ b/drivers/accel/amdxdna/aie2_message.c
+@@ -525,7 +525,7 @@ aie2_cmdlist_fill_one_slot_cf(void *cmd_buf, u32 offset,
+ 	if (!payload)
+ 		return -EINVAL;
+ 
+-	if (!slot_cf_has_space(offset, payload_len))
++	if (!slot_has_space(*buf, offset, payload_len))
+ 		return -ENOSPC;
+ 
+ 	buf->cu_idx = cu_idx;
+@@ -558,7 +558,7 @@ aie2_cmdlist_fill_one_slot_dpu(void *cmd_buf, u32 offset,
+ 	if (payload_len < sizeof(*sn) || arg_sz > MAX_DPU_ARGS_SIZE)
+ 		return -EINVAL;
+ 
+-	if (!slot_dpu_has_space(offset, arg_sz))
++	if (!slot_has_space(*buf, offset, arg_sz))
+ 		return -ENOSPC;
+ 
+ 	buf->inst_buf_addr = sn->buffer;
+@@ -569,7 +569,7 @@ aie2_cmdlist_fill_one_slot_dpu(void *cmd_buf, u32 offset,
+ 	memcpy(buf->args, sn->prop_args, arg_sz);
+ 
+ 	/* Accurate buf size to hint firmware to do necessary copy */
+-	*size += sizeof(*buf) + arg_sz;
++	*size = sizeof(*buf) + arg_sz;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/accel/amdxdna/aie2_msg_priv.h b/drivers/accel/amdxdna/aie2_msg_priv.h
+index 4e02e744b470eb..6df9065b13f685 100644
+--- a/drivers/accel/amdxdna/aie2_msg_priv.h
++++ b/drivers/accel/amdxdna/aie2_msg_priv.h
+@@ -319,18 +319,16 @@ struct async_event_msg_resp {
+ } __packed;
+ 
+ #define MAX_CHAIN_CMDBUF_SIZE SZ_4K
+-#define slot_cf_has_space(offset, payload_size) \
+-	(MAX_CHAIN_CMDBUF_SIZE - ((offset) + (payload_size)) > \
+-	 offsetof(struct cmd_chain_slot_execbuf_cf, args[0]))
++#define slot_has_space(slot, offset, payload_size)		\
++	(MAX_CHAIN_CMDBUF_SIZE >= (offset) + (payload_size) +	\
++	 sizeof(typeof(slot)))
++
+ struct cmd_chain_slot_execbuf_cf {
+ 	__u32 cu_idx;
+ 	__u32 arg_cnt;
+ 	__u32 args[] __counted_by(arg_cnt);
+ };
+ 
+-#define slot_dpu_has_space(offset, payload_size) \
+-	(MAX_CHAIN_CMDBUF_SIZE - ((offset) + (payload_size)) > \
+-	 offsetof(struct cmd_chain_slot_dpu, args[0]))
+ struct cmd_chain_slot_dpu {
+ 	__u64 inst_buf_addr;
+ 	__u32 inst_size;
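
The rewritten slot_has_space() keeps the budget comparison additive and
charges the whole slot header via sizeof(typeof(slot)). The apparent hazard
in the old macros: with unsigned arithmetic, an (offset + payload) just past
MAX_CHAIN_CMDBUF_SIZE wraps the subtraction to a huge value, and the check
passes. A standalone sketch of that failure mode:

#include <stdio.h>
#include <stddef.h>

#define BUDGET 4096u

/* Old shape: BUDGET - (off + len) > hdr. When off + len exceeds BUDGET,
 * the unsigned subtraction wraps and the bogus check succeeds. */
static int old_check(unsigned int off, unsigned int len, size_t hdr)
{
	return BUDGET - (off + len) > hdr;
}

/* New shape keeps everything on the additive side of the comparison. */
static int new_check(unsigned int off, unsigned int len, size_t hdr)
{
	return BUDGET >= off + len + hdr;
}

int main(void)
{
	printf("old: %d, new: %d\n",
	       old_check(4000, 200, 8), new_check(4000, 200, 8));
	/* old: 1 (pass via wraparound), new: 0 (correctly rejected) */
	return 0;
}
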
+diff --git a/drivers/accel/amdxdna/aie2_psp.c b/drivers/accel/amdxdna/aie2_psp.c
+index dc3a072ce3b6df..f28a060a88109b 100644
+--- a/drivers/accel/amdxdna/aie2_psp.c
++++ b/drivers/accel/amdxdna/aie2_psp.c
+@@ -126,8 +126,8 @@ struct psp_device *aie2m_psp_create(struct drm_device *ddev, struct psp_config *
+ 	psp->ddev = ddev;
+ 	memcpy(psp->psp_regs, conf->psp_regs, sizeof(psp->psp_regs));
+ 
+-	psp->fw_buf_sz = ALIGN(conf->fw_size, PSP_FW_ALIGN) + PSP_FW_ALIGN;
+-	psp->fw_buffer = drmm_kmalloc(ddev, psp->fw_buf_sz, GFP_KERNEL);
++	psp->fw_buf_sz = ALIGN(conf->fw_size, PSP_FW_ALIGN);
++	psp->fw_buffer = drmm_kmalloc(ddev, psp->fw_buf_sz + PSP_FW_ALIGN, GFP_KERNEL);
+ 	if (!psp->fw_buffer) {
+ 		drm_err(ddev, "no memory for fw buffer");
+ 		return NULL;
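
The aie2_psp.c hunk separates the size the driver accounts for (fw_buf_sz,
the aligned image size) from the allocation itself, which is padded by one
extra PSP_FW_ALIGN so a start pointer inside the buffer can be aligned up
without overrunning. The same pattern in standalone form; ALIGN_UP below
stands in for the kernel's ALIGN()/PTR_ALIGN():

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((uintptr_t)(a) - 1))

int main(void)
{
	size_t fw_size = 1000, align = 256;
	size_t buf_sz = ALIGN_UP(fw_size, align);  /* what the driver records */
	void *raw = malloc(buf_sz + align);        /* over-allocate the slack */
	void *fw  = (void *)ALIGN_UP((uintptr_t)raw, align);

	/* fw .. fw + buf_sz is guaranteed to sit inside raw's allocation. */
	printf("raw=%p fw=%p buf_sz=%zu\n", raw, fw, buf_sz);
	free(raw);
	return 0;
}
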
+diff --git a/drivers/accel/ivpu/ivpu_job.c b/drivers/accel/ivpu/ivpu_job.c
+index b28da35c30b675..1c8e283ad98542 100644
+--- a/drivers/accel/ivpu/ivpu_job.c
++++ b/drivers/accel/ivpu/ivpu_job.c
+@@ -247,6 +247,10 @@ static int ivpu_cmdq_unregister(struct ivpu_file_priv *file_priv, struct ivpu_cm
+ 	if (!cmdq->db_id)
+ 		return 0;
+ 
++	ret = ivpu_jsm_unregister_db(vdev, cmdq->db_id);
++	if (!ret)
++		ivpu_dbg(vdev, JOB, "DB %d unregistered\n", cmdq->db_id);
++
+ 	if (vdev->fw->sched_mode == VPU_SCHEDULING_MODE_HW) {
+ 		ret = ivpu_jsm_hws_destroy_cmdq(vdev, file_priv->ctx.id, cmdq->id);
+ 		if (!ret)
+@@ -254,10 +258,6 @@ static int ivpu_cmdq_unregister(struct ivpu_file_priv *file_priv, struct ivpu_cm
+ 				 cmdq->id, file_priv->ctx.id);
+ 	}
+ 
+-	ret = ivpu_jsm_unregister_db(vdev, cmdq->db_id);
+-	if (!ret)
+-		ivpu_dbg(vdev, JOB, "DB %d unregistered\n", cmdq->db_id);
+-
+ 	xa_erase(&file_priv->vdev->db_xa, cmdq->db_id);
+ 	cmdq->db_id = 0;
+ 
+diff --git a/drivers/acpi/acpica/exserial.c b/drivers/acpi/acpica/exserial.c
+index 5241f4c01c7655..89a4ac447a2bea 100644
+--- a/drivers/acpi/acpica/exserial.c
++++ b/drivers/acpi/acpica/exserial.c
+@@ -201,6 +201,12 @@ acpi_ex_read_serial_bus(union acpi_operand_object *obj_desc,
+ 		function = ACPI_READ;
+ 		break;
+ 
++	case ACPI_ADR_SPACE_FIXED_HARDWARE:
++
++		buffer_length = ACPI_FFH_INPUT_BUFFER_SIZE;
++		function = ACPI_READ;
++		break;
++
+ 	default:
+ 		return_ACPI_STATUS(AE_AML_INVALID_SPACE_ID);
+ 	}
+diff --git a/drivers/acpi/apei/Kconfig b/drivers/acpi/apei/Kconfig
+index 3cfe7e7475f2fd..070c07d68dfb2f 100644
+--- a/drivers/acpi/apei/Kconfig
++++ b/drivers/acpi/apei/Kconfig
+@@ -23,6 +23,7 @@ config ACPI_APEI_GHES
+ 	select ACPI_HED
+ 	select IRQ_WORK
+ 	select GENERIC_ALLOCATOR
++	select ARM_SDE_INTERFACE if ARM64
+ 	help
+ 	  Generic Hardware Error Source provides a way to report
+ 	  platform hardware errors (such as that from chipset). It
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index 289e365f84b249..0f3c663c1b0a33 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -1715,7 +1715,7 @@ void __init acpi_ghes_init(void)
+ {
+ 	int rc;
+ 
+-	sdei_init();
++	acpi_sdei_init();
+ 
+ 	if (acpi_disabled)
+ 		return;
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index f193e713825ac2..ff23b6edb2df37 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -463,7 +463,7 @@ bool cppc_allow_fast_switch(void)
+ 	struct cpc_desc *cpc_ptr;
+ 	int cpu;
+ 
+-	for_each_possible_cpu(cpu) {
++	for_each_present_cpu(cpu) {
+ 		cpc_ptr = per_cpu(cpc_desc_ptr, cpu);
+ 		desired_reg = &cpc_ptr->cpc_regs[DESIRED_PERF];
+ 		if (!CPC_IN_SYSTEM_MEMORY(desired_reg) &&
+diff --git a/drivers/acpi/osi.c b/drivers/acpi/osi.c
+index df9328c850bd33..f2c943b934be0a 100644
+--- a/drivers/acpi/osi.c
++++ b/drivers/acpi/osi.c
+@@ -42,7 +42,6 @@ static struct acpi_osi_entry
+ osi_setup_entries[OSI_STRING_ENTRIES_MAX] __initdata = {
+ 	{"Module Device", true},
+ 	{"Processor Device", true},
+-	{"3.0 _SCP Extensions", true},
+ 	{"Processor Aggregator Device", true},
+ };
+ 
+diff --git a/drivers/acpi/platform_profile.c b/drivers/acpi/platform_profile.c
+index ffbfd32f4cf1ba..b43f4459a4f61e 100644
+--- a/drivers/acpi/platform_profile.c
++++ b/drivers/acpi/platform_profile.c
+@@ -688,6 +688,9 @@ static int __init platform_profile_init(void)
+ {
+ 	int err;
+ 
++	if (acpi_disabled)
++		return -EOPNOTSUPP;
++
+ 	err = class_register(&platform_profile_class);
+ 	if (err)
+ 		return err;
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 14c7bac4100b46..7d59c6c9185fc1 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -534,7 +534,7 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+  */
+ static const struct dmi_system_id irq1_edge_low_force_override[] = {
+ 	{
+-		/* MECHREV Jiaolong17KS Series GM7XG0M */
++		/* MECHREVO Jiaolong17KS Series GM7XG0M */
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "GM7XG0M"),
+ 		},
+diff --git a/drivers/acpi/thermal.c b/drivers/acpi/thermal.c
+index 0c874186f8aed4..5c2defe55898f1 100644
+--- a/drivers/acpi/thermal.c
++++ b/drivers/acpi/thermal.c
+@@ -803,6 +803,12 @@ static int acpi_thermal_add(struct acpi_device *device)
+ 
+ 	acpi_thermal_aml_dependency_fix(tz);
+ 
++	/*
++	 * Set the cooling mode [_SCP] to active cooling. This needs to happen before
++	 * we retrieve the trip point values.
++	 */
++	acpi_execute_simple_method(tz->device->handle, "_SCP", ACPI_THERMAL_MODE_ACTIVE);
++
+ 	/* Get trip points [_ACi, _PSV, etc.] (required). */
+ 	acpi_thermal_get_trip_points(tz);
+ 
+@@ -814,10 +820,6 @@ static int acpi_thermal_add(struct acpi_device *device)
+ 	if (result)
+ 		goto free_memory;
+ 
+-	/* Set the cooling mode [_SCP] to active cooling. */
+-	acpi_execute_simple_method(tz->device->handle, "_SCP",
+-				   ACPI_THERMAL_MODE_ACTIVE);
+-
+ 	/* Determine the default polling frequency [_TZP]. */
+ 	if (tzp)
+ 		tz->polling_frequency = tzp;
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index c8b0a9e29ed843..1926454c7a7e8c 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -941,6 +941,8 @@ static void device_resume(struct device *dev, pm_message_t state, bool async)
+ 	if (!dev->power.is_suspended)
+ 		goto Complete;
+ 
++	dev->power.is_suspended = false;
++
+ 	if (dev->power.direct_complete) {
+ 		/*
+ 		 * Allow new children to be added under the device after this
+@@ -1003,7 +1005,6 @@ static void device_resume(struct device *dev, pm_message_t state, bool async)
+ 
+  End:
+ 	error = dpm_run_callback(callback, dev, state, info);
+-	dev->power.is_suspended = false;
+ 
+ 	device_unlock(dev);
+ 	dpm_watchdog_clear(&wd);
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index 0e127b0329c00c..205a4f8828b0ac 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -1568,6 +1568,32 @@ void pm_runtime_enable(struct device *dev)
+ }
+ EXPORT_SYMBOL_GPL(pm_runtime_enable);
+ 
++static void pm_runtime_set_suspended_action(void *data)
++{
++	pm_runtime_set_suspended(data);
++}
++
++/**
++ * devm_pm_runtime_set_active_enabled - set_active version of devm_pm_runtime_enable.
++ *
++ * @dev: Device to handle.
++ */
++int devm_pm_runtime_set_active_enabled(struct device *dev)
++{
++	int err;
++
++	err = pm_runtime_set_active(dev);
++	if (err)
++		return err;
++
++	err = devm_add_action_or_reset(dev, pm_runtime_set_suspended_action, dev);
++	if (err)
++		return err;
++
++	return devm_pm_runtime_enable(dev);
++}
++EXPORT_SYMBOL_GPL(devm_pm_runtime_set_active_enabled);
++
+ static void pm_runtime_disable_action(void *data)
+ {
+ 	pm_runtime_dont_use_autosuspend(data);
+@@ -1590,6 +1616,24 @@ int devm_pm_runtime_enable(struct device *dev)
+ }
+ EXPORT_SYMBOL_GPL(devm_pm_runtime_enable);
+ 
++static void pm_runtime_put_noidle_action(void *data)
++{
++	pm_runtime_put_noidle(data);
++}
++
++/**
++ * devm_pm_runtime_get_noresume - devres-enabled version of pm_runtime_get_noresume.
++ *
++ * @dev: Device to handle.
++ */
++int devm_pm_runtime_get_noresume(struct device *dev)
++{
++	pm_runtime_get_noresume(dev);
++
++	return devm_add_action_or_reset(dev, pm_runtime_put_noidle_action, dev);
++}
++EXPORT_SYMBOL_GPL(devm_pm_runtime_get_noresume);
++
+ /**
+  * pm_runtime_forbid - Block runtime PM of a device.
+  * @dev: Device to handle.
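
devm_pm_runtime_set_active_enabled() and devm_pm_runtime_get_noresume() both
follow the usual devres shape: perform the forward operation now, register
the undo with devm_add_action_or_reset(), and let driver detach unwind
everything in reverse with no manual rollback in error paths. A userspace toy
of that contract; all names below are illustrative, not kernel API:

#include <stdio.h>

typedef void (*action_fn)(void *);
static struct { action_fn fn; void *data; } actions[8];
static int n_actions;

/* Like devm_add_action_or_reset(): on registration failure, run the
 * undo immediately so the caller never needs a partial-cleanup path. */
static int add_action(action_fn fn, void *data)
{
	if (n_actions == 8) {
		fn(data);
		return -1;
	}
	actions[n_actions].fn = fn;
	actions[n_actions].data = data;
	n_actions++;
	return 0;
}

static void detach(void)
{
	while (n_actions--)
		actions[n_actions].fn(actions[n_actions].data);
}

static void undo(void *name) { printf("undo: %s\n", (char *)name); }

int main(void)
{
	add_action(undo, "pm_runtime_set_suspended");
	add_action(undo, "pm_runtime_put_noidle");
	detach();   /* undos run in reverse registration order */
	return 0;
}
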
+diff --git a/drivers/block/brd.c b/drivers/block/brd.c
+index 292f127cae0abe..02fa8106ef549f 100644
+--- a/drivers/block/brd.c
++++ b/drivers/block/brd.c
+@@ -224,19 +224,22 @@ static int brd_do_bvec(struct brd_device *brd, struct page *page,
+ 
+ static void brd_do_discard(struct brd_device *brd, sector_t sector, u32 size)
+ {
+-	sector_t aligned_sector = (sector + PAGE_SECTORS) & ~PAGE_SECTORS;
++	sector_t aligned_sector = round_up(sector, PAGE_SECTORS);
++	sector_t aligned_end = round_down(
++			sector + (size >> SECTOR_SHIFT), PAGE_SECTORS);
+ 	struct page *page;
+ 
+-	size -= (aligned_sector - sector) * SECTOR_SIZE;
++	if (aligned_end <= aligned_sector)
++		return;
++
+ 	xa_lock(&brd->brd_pages);
+-	while (size >= PAGE_SIZE && aligned_sector < rd_size * 2) {
++	while (aligned_sector < aligned_end && aligned_sector < rd_size * 2) {
+ 		page = __xa_erase(&brd->brd_pages, aligned_sector >> PAGE_SECTORS_SHIFT);
+ 		if (page) {
+ 			__free_page(page);
+ 			brd->brd_nr_pages--;
+ 		}
+ 		aligned_sector += PAGE_SECTORS;
+-		size -= PAGE_SIZE;
+ 	}
+ 	xa_unlock(&brd->brd_pages);
+ }
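
brd_do_discard() now clips the request to whole pages from both ends: the
start sector rounds up, the end rounds down, and the new early return covers
requests too small to span any full page, where the rounded window would
otherwise be empty or inverted. The arithmetic in isolation:

#include <stdio.h>

#define PAGE_SECTORS	8u	/* 4 KiB page / 512 B sectors */
#define SECTOR_SHIFT	9

static unsigned int round_up_(unsigned int x, unsigned int a)   { return (x + a - 1) / a * a; }
static unsigned int round_down_(unsigned int x, unsigned int a) { return x / a * a; }

int main(void)
{
	unsigned int sector = 5, size = 6 * 4096;	/* starts mid-page */
	unsigned int start = round_up_(sector, PAGE_SECTORS);
	unsigned int end = round_down_(sector + (size >> SECTOR_SHIFT),
				       PAGE_SECTORS);

	if (end <= start)
		puts("nothing fully covered");
	else
		printf("drop pages covering sectors [%u, %u)\n", start, end);
	/* [8, 48): the partial first and last pages are kept. */
	return 0;
}
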
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index e2b1f377f58563..f8d136684109aa 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -308,11 +308,14 @@ static void lo_complete_rq(struct request *rq)
+ static void lo_rw_aio_do_completion(struct loop_cmd *cmd)
+ {
+ 	struct request *rq = blk_mq_rq_from_pdu(cmd);
++	struct loop_device *lo = rq->q->queuedata;
+ 
+ 	if (!atomic_dec_and_test(&cmd->ref))
+ 		return;
+ 	kfree(cmd->bvec);
+ 	cmd->bvec = NULL;
++	if (req_op(rq) == REQ_OP_WRITE)
++		file_end_write(lo->lo_backing_file);
+ 	if (likely(!blk_should_fake_timeout(rq->q)))
+ 		blk_mq_complete_request(rq);
+ }
+@@ -387,9 +390,10 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
+ 		cmd->iocb.ki_flags = 0;
+ 	}
+ 
+-	if (rw == ITER_SOURCE)
++	if (rw == ITER_SOURCE) {
++		file_start_write(lo->lo_backing_file);
+ 		ret = file->f_op->write_iter(&cmd->iocb, &iter);
+-	else
++	} else
+ 		ret = file->f_op->read_iter(&cmd->iocb, &iter);
+ 
+ 	lo_rw_aio_do_completion(cmd);
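
The loop.c pair of hunks brackets async writes with file_start_write() and
file_end_write(), moving the end call into lo_rw_aio_do_completion() so it
fires only when the last reference on the command drops, whichever of the
submit path or the completion callback gets there last. A toy of that
ref-counted run-once-at-the-end shape; single-threaded here, where the real
code uses atomic_dec_and_test():

#include <stdio.h>

static int ref = 2;	/* one for the submitter, one for the async path */

static void put_ref(const char *who)
{
	if (--ref == 0)
		printf("%s: last ref, file_end_write() here\n", who);
	else
		printf("%s: still in flight\n", who);
}

int main(void)
{
	put_ref("submit path");		/* lo_rw_aio() returning */
	put_ref("irq completion");	/* ->ki_complete() */
	return 0;
}
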
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index 48e2f400957bc9..46d9bbd8e411b3 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -2719,7 +2719,7 @@ static int btintel_uefi_get_dsbr(u32 *dsbr_var)
+ 	} __packed data;
+ 
+ 	efi_status_t status;
+-	unsigned long data_size = 0;
++	unsigned long data_size = sizeof(data);
+ 	efi_guid_t guid = EFI_GUID(0xe65d8884, 0xd4af, 0x4b20, 0x8d, 0x03,
+ 				   0x77, 0x2e, 0xcc, 0x3d, 0xa5, 0x31);
+ 
+@@ -2729,16 +2729,10 @@ static int btintel_uefi_get_dsbr(u32 *dsbr_var)
+ 	if (!efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE))
+ 		return -EOPNOTSUPP;
+ 
+-	status = efi.get_variable(BTINTEL_EFI_DSBR, &guid, NULL, &data_size,
+-				  NULL);
+-
+-	if (status != EFI_BUFFER_TOO_SMALL || !data_size)
+-		return -EIO;
+-
+ 	status = efi.get_variable(BTINTEL_EFI_DSBR, &guid, NULL, &data_size,
+ 				  &data);
+ 
+-	if (status != EFI_SUCCESS)
++	if (status != EFI_SUCCESS || data_size != sizeof(data))
+ 		return -ENXIO;
+ 
+ 	*dsbr_var = data.dsbr;
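
btintel_uefi_get_dsbr() drops the two-step size probe and reads straight into
a fixed-size struct, rejecting any result whose reported size differs from
sizeof(data). A userspace stand-in for that pattern; get_variable() below
fakes the EFI runtime service's size-reporting contract:

#include <stdio.h>
#include <string.h>

/* Copies a stored blob and reports its true size, like the EFI
 * GetVariable runtime service does. */
static int get_variable(unsigned long *data_size, void *data)
{
	static const unsigned char stored[5] = { 1, 2, 3, 4, 5 };

	if (*data_size < sizeof(stored))
		return -1;		/* EFI_BUFFER_TOO_SMALL */
	memcpy(data, stored, sizeof(stored));
	*data_size = sizeof(stored);
	return 0;			/* EFI_SUCCESS */
}

int main(void)
{
	struct { unsigned char hdr; unsigned int dsbr; }
		__attribute__((packed)) out;
	unsigned long size = sizeof(out);

	/* One call, then insist on an exact size match: both a short and
	 * an over-long variable get rejected. */
	if (get_variable(&size, &out) || size != sizeof(out))
		puts("reject: unexpected variable size");
	else
		puts("ok");
	return 0;
}
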
+diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
+index 0a759ea26fd38f..385e29367dd1df 100644
+--- a/drivers/bluetooth/btintel_pcie.c
++++ b/drivers/bluetooth/btintel_pcie.c
+@@ -303,8 +303,13 @@ static int btintel_pcie_submit_rx(struct btintel_pcie_data *data)
+ static int btintel_pcie_start_rx(struct btintel_pcie_data *data)
+ {
+ 	int i, ret;
++	struct rxq *rxq = &data->rxq;
++
++	/* Post (BTINTEL_PCIE_RX_DESCS_COUNT - 3) buffers to work around a
++	 * hardware issue that leads to a race condition in the firmware.
++	 */

+ 
+-	for (i = 0; i < BTINTEL_PCIE_RX_MAX_QUEUE; i++) {
++	for (i = 0; i < rxq->count - 3; i++) {
+ 		ret = btintel_pcie_submit_rx(data);
+ 		if (ret)
+ 			return ret;
+@@ -1664,8 +1669,8 @@ static int btintel_pcie_alloc(struct btintel_pcie_data *data)
+ 	 *  + size of index * Number of queues(2) * type of index array(4)
+ 	 *  + size of context information
+ 	 */
+-	total = (sizeof(struct tfd) + sizeof(struct urbd0) + sizeof(struct frbd)
+-		+ sizeof(struct urbd1)) * BTINTEL_DESCS_COUNT;
++	total = (sizeof(struct tfd) + sizeof(struct urbd0)) * BTINTEL_PCIE_TX_DESCS_COUNT;
++	total += (sizeof(struct frbd) + sizeof(struct urbd1)) * BTINTEL_PCIE_RX_DESCS_COUNT;
+ 
+ 	/* Add the sum of size of index array and size of ci struct */
+ 	total += (sizeof(u16) * BTINTEL_PCIE_NUM_QUEUES * 4) + sizeof(struct ctx_info);
+@@ -1690,36 +1695,36 @@ static int btintel_pcie_alloc(struct btintel_pcie_data *data)
+ 	data->dma_v_addr = v_addr;
+ 
+ 	/* Setup descriptor count */
+-	data->txq.count = BTINTEL_DESCS_COUNT;
+-	data->rxq.count = BTINTEL_DESCS_COUNT;
++	data->txq.count = BTINTEL_PCIE_TX_DESCS_COUNT;
++	data->rxq.count = BTINTEL_PCIE_RX_DESCS_COUNT;
+ 
+ 	/* Setup tfds */
+ 	data->txq.tfds_p_addr = p_addr;
+ 	data->txq.tfds = v_addr;
+ 
+-	p_addr += (sizeof(struct tfd) * BTINTEL_DESCS_COUNT);
+-	v_addr += (sizeof(struct tfd) * BTINTEL_DESCS_COUNT);
++	p_addr += (sizeof(struct tfd) * BTINTEL_PCIE_TX_DESCS_COUNT);
++	v_addr += (sizeof(struct tfd) * BTINTEL_PCIE_TX_DESCS_COUNT);
+ 
+ 	/* Setup urbd0 */
+ 	data->txq.urbd0s_p_addr = p_addr;
+ 	data->txq.urbd0s = v_addr;
+ 
+-	p_addr += (sizeof(struct urbd0) * BTINTEL_DESCS_COUNT);
+-	v_addr += (sizeof(struct urbd0) * BTINTEL_DESCS_COUNT);
++	p_addr += (sizeof(struct urbd0) * BTINTEL_PCIE_TX_DESCS_COUNT);
++	v_addr += (sizeof(struct urbd0) * BTINTEL_PCIE_TX_DESCS_COUNT);
+ 
+ 	/* Setup FRBD*/
+ 	data->rxq.frbds_p_addr = p_addr;
+ 	data->rxq.frbds = v_addr;
+ 
+-	p_addr += (sizeof(struct frbd) * BTINTEL_DESCS_COUNT);
+-	v_addr += (sizeof(struct frbd) * BTINTEL_DESCS_COUNT);
++	p_addr += (sizeof(struct frbd) * BTINTEL_PCIE_RX_DESCS_COUNT);
++	v_addr += (sizeof(struct frbd) * BTINTEL_PCIE_RX_DESCS_COUNT);
+ 
+ 	/* Setup urbd1 */
+ 	data->rxq.urbd1s_p_addr = p_addr;
+ 	data->rxq.urbd1s = v_addr;
+ 
+-	p_addr += (sizeof(struct urbd1) * BTINTEL_DESCS_COUNT);
+-	v_addr += (sizeof(struct urbd1) * BTINTEL_DESCS_COUNT);
++	p_addr += (sizeof(struct urbd1) * BTINTEL_PCIE_RX_DESCS_COUNT);
++	v_addr += (sizeof(struct urbd1) * BTINTEL_PCIE_RX_DESCS_COUNT);
+ 
+ 	/* Setup data buffers for txq */
+ 	err = btintel_pcie_setup_txq_bufs(data, &data->txq);
+diff --git a/drivers/bluetooth/btintel_pcie.h b/drivers/bluetooth/btintel_pcie.h
+index 873178019cad09..a94910ccd5d3c2 100644
+--- a/drivers/bluetooth/btintel_pcie.h
++++ b/drivers/bluetooth/btintel_pcie.h
+@@ -135,8 +135,11 @@ enum btintel_pcie_tlv_type {
+ /* Default interrupt timeout in msec */
+ #define BTINTEL_DEFAULT_INTR_TIMEOUT_MS	3000
+ 
+-/* The number of descriptors in TX/RX queues */
+-#define BTINTEL_DESCS_COUNT	16
++/* The number of descriptors in TX queues */
++#define BTINTEL_PCIE_TX_DESCS_COUNT	32
++
++/* The number of descriptors in RX queues */
++#define BTINTEL_PCIE_RX_DESCS_COUNT	64
+ 
+ /* Number of Queue for TX and RX
+  * It indicates the index of the IA(Index Array)
+@@ -158,9 +161,6 @@ enum {
+ /* Doorbell vector for TFD */
+ #define BTINTEL_PCIE_TX_DB_VEC	0
+ 
+-/* Number of pending RX requests for downlink */
+-#define BTINTEL_PCIE_RX_MAX_QUEUE	6
+-
+ /* Doorbell vector for FRBD */
+ #define BTINTEL_PCIE_RX_DB_VEC	513
+ 
+diff --git a/drivers/bus/fsl-mc/fsl-mc-bus.c b/drivers/bus/fsl-mc/fsl-mc-bus.c
+index a8be8cf246fb6f..7671bd15854551 100644
+--- a/drivers/bus/fsl-mc/fsl-mc-bus.c
++++ b/drivers/bus/fsl-mc/fsl-mc-bus.c
+@@ -139,9 +139,9 @@ static int fsl_mc_bus_uevent(const struct device *dev, struct kobj_uevent_env *e
+ 
+ static int fsl_mc_dma_configure(struct device *dev)
+ {
++	const struct device_driver *drv = READ_ONCE(dev->driver);
+ 	struct device *dma_dev = dev;
+ 	struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev);
+-	struct fsl_mc_driver *mc_drv = to_fsl_mc_driver(dev->driver);
+ 	u32 input_id = mc_dev->icid;
+ 	int ret;
+ 
+@@ -153,8 +153,8 @@ static int fsl_mc_dma_configure(struct device *dev)
+ 	else
+ 		ret = acpi_dma_configure_id(dev, DEV_DMA_COHERENT, &input_id);
+ 
+-	/* @mc_drv may not be valid when we're called from the IOMMU layer */
+-	if (!ret && dev->driver && !mc_drv->driver_managed_dma) {
++	/* @drv may not be valid when we're called from the IOMMU layer */
++	if (!ret && drv && !to_fsl_mc_driver(drv)->driver_managed_dma) {
+ 		ret = iommu_device_use_default_domain(dev);
+ 		if (ret)
+ 			arch_teardown_dma_ops(dev);
+@@ -906,8 +906,10 @@ int fsl_mc_device_add(struct fsl_mc_obj_desc *obj_desc,
+ 
+ error_cleanup_dev:
+ 	kfree(mc_dev->regions);
+-	kfree(mc_bus);
+-	kfree(mc_dev);
++	if (mc_bus)
++		kfree(mc_bus);
++	else
++		kfree(mc_dev);
+ 
+ 	return error;
+ }
+diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
+index 8fb33c90482f79..ae61967605563c 100644
+--- a/drivers/char/Kconfig
++++ b/drivers/char/Kconfig
+@@ -404,7 +404,7 @@ config TELCLOCK
+ 	  configuration of the telecom clock configuration settings.  This
+ 	  device is used for hardware synchronization across the ATCA backplane
+ 	  fabric.  Upon loading, the driver exports a sysfs directory,
+-	  /sys/devices/platform/telco_clock, with a number of files for
++	  /sys/devices/faux/telco_clock, with a number of files for
+ 	  controlling the behavior of this hardware.
+ 
+ source "drivers/s390/char/Kconfig"
+diff --git a/drivers/clk/bcm/clk-raspberrypi.c b/drivers/clk/bcm/clk-raspberrypi.c
+index 0e1fe3759530a4..720acc10f8aa45 100644
+--- a/drivers/clk/bcm/clk-raspberrypi.c
++++ b/drivers/clk/bcm/clk-raspberrypi.c
+@@ -286,6 +286,8 @@ static struct clk_hw *raspberrypi_clk_register(struct raspberrypi_clk *rpi,
+ 	init.name = devm_kasprintf(rpi->dev, GFP_KERNEL,
+ 				   "fw-clk-%s",
+ 				   rpi_firmware_clk_names[id]);
++	if (!init.name)
++		return ERR_PTR(-ENOMEM);
+ 	init.ops = &raspberrypi_firmware_clk_ops;
+ 	init.flags = CLK_GET_RATE_NOCACHE;
+ 
+diff --git a/drivers/clk/qcom/camcc-sm6350.c b/drivers/clk/qcom/camcc-sm6350.c
+index 1871970fb046d7..8aac97d29ce3ff 100644
+--- a/drivers/clk/qcom/camcc-sm6350.c
++++ b/drivers/clk/qcom/camcc-sm6350.c
+@@ -1695,6 +1695,9 @@ static struct clk_branch camcc_sys_tmr_clk = {
+ 
+ static struct gdsc bps_gdsc = {
+ 	.gdscr = 0x6004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "bps_gdsc",
+ 	},
+@@ -1704,6 +1707,9 @@ static struct gdsc bps_gdsc = {
+ 
+ static struct gdsc ipe_0_gdsc = {
+ 	.gdscr = 0x7004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "ipe_0_gdsc",
+ 	},
+@@ -1713,6 +1719,9 @@ static struct gdsc ipe_0_gdsc = {
+ 
+ static struct gdsc ife_0_gdsc = {
+ 	.gdscr = 0x9004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "ife_0_gdsc",
+ 	},
+@@ -1721,6 +1730,9 @@ static struct gdsc ife_0_gdsc = {
+ 
+ static struct gdsc ife_1_gdsc = {
+ 	.gdscr = 0xa004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "ife_1_gdsc",
+ 	},
+@@ -1729,6 +1741,9 @@ static struct gdsc ife_1_gdsc = {
+ 
+ static struct gdsc ife_2_gdsc = {
+ 	.gdscr = 0xb004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "ife_2_gdsc",
+ 	},
+@@ -1737,6 +1752,9 @@ static struct gdsc ife_2_gdsc = {
+ 
+ static struct gdsc titan_top_gdsc = {
+ 	.gdscr = 0x14004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "titan_top_gdsc",
+ 	},
+diff --git a/drivers/clk/qcom/dispcc-sm6350.c b/drivers/clk/qcom/dispcc-sm6350.c
+index e703ecf00e4404..b0bd163a449ccd 100644
+--- a/drivers/clk/qcom/dispcc-sm6350.c
++++ b/drivers/clk/qcom/dispcc-sm6350.c
+@@ -681,6 +681,9 @@ static struct clk_branch disp_cc_xo_clk = {
+ 
+ static struct gdsc mdss_gdsc = {
+ 	.gdscr = 0x1004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "mdss_gdsc",
+ 	},
+diff --git a/drivers/clk/qcom/gcc-msm8939.c b/drivers/clk/qcom/gcc-msm8939.c
+index 7431c9a65044f8..45193b3d714bab 100644
+--- a/drivers/clk/qcom/gcc-msm8939.c
++++ b/drivers/clk/qcom/gcc-msm8939.c
+@@ -432,7 +432,7 @@ static const struct parent_map gcc_xo_gpll0_gpll1a_gpll6_sleep_map[] = {
+ 	{ P_XO, 0 },
+ 	{ P_GPLL0, 1 },
+ 	{ P_GPLL1_AUX, 2 },
+-	{ P_GPLL6, 2 },
++	{ P_GPLL6, 3 },
+ 	{ P_SLEEP_CLK, 6 },
+ };
+ 
+@@ -1113,7 +1113,7 @@ static struct clk_rcg2 jpeg0_clk_src = {
+ };
+ 
+ static const struct freq_tbl ftbl_gcc_camss_mclk0_1_clk[] = {
+-	F(24000000, P_GPLL0, 1, 1, 45),
++	F(24000000, P_GPLL6, 1, 1, 45),
+ 	F(66670000, P_GPLL0, 12, 0, 0),
+ 	{ }
+ };
+diff --git a/drivers/clk/qcom/gcc-sm6350.c b/drivers/clk/qcom/gcc-sm6350.c
+index 74346dc026068a..a4d6dff9d0f7f1 100644
+--- a/drivers/clk/qcom/gcc-sm6350.c
++++ b/drivers/clk/qcom/gcc-sm6350.c
+@@ -2320,6 +2320,9 @@ static struct clk_branch gcc_video_xo_clk = {
+ 
+ static struct gdsc usb30_prim_gdsc = {
+ 	.gdscr = 0x1a004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "usb30_prim_gdsc",
+ 	},
+@@ -2328,6 +2331,9 @@ static struct gdsc usb30_prim_gdsc = {
+ 
+ static struct gdsc ufs_phy_gdsc = {
+ 	.gdscr = 0x3a004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "ufs_phy_gdsc",
+ 	},
+diff --git a/drivers/clk/qcom/gpucc-sm6350.c b/drivers/clk/qcom/gpucc-sm6350.c
+index 35ed0500bc5931..ee89c42413f885 100644
+--- a/drivers/clk/qcom/gpucc-sm6350.c
++++ b/drivers/clk/qcom/gpucc-sm6350.c
+@@ -413,6 +413,9 @@ static struct clk_branch gpu_cc_gx_vsense_clk = {
+ static struct gdsc gpu_cx_gdsc = {
+ 	.gdscr = 0x106c,
+ 	.gds_hw_ctrl = 0x1540,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0x8,
+ 	.pd = {
+ 		.name = "gpu_cx_gdsc",
+ 	},
+@@ -423,6 +426,9 @@ static struct gdsc gpu_cx_gdsc = {
+ static struct gdsc gpu_gx_gdsc = {
+ 	.gdscr = 0x100c,
+ 	.clamp_io_ctrl = 0x1508,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0x2,
+ 	.pd = {
+ 		.name = "gpu_gx_gdsc",
+ 		.power_on = gdsc_gx_do_nothing_enable,
+diff --git a/drivers/counter/interrupt-cnt.c b/drivers/counter/interrupt-cnt.c
+index 949598d51575a1..d83848d0fe2af5 100644
+--- a/drivers/counter/interrupt-cnt.c
++++ b/drivers/counter/interrupt-cnt.c
+@@ -3,12 +3,14 @@
+  * Copyright (c) 2021 Pengutronix, Oleksij Rempel <kernel@pengutronix.de>
+  */
+ 
++#include <linux/cleanup.h>
+ #include <linux/counter.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/interrupt.h>
+ #include <linux/irq.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/module.h>
++#include <linux/mutex.h>
+ #include <linux/platform_device.h>
+ #include <linux/types.h>
+ 
+@@ -19,6 +21,7 @@ struct interrupt_cnt_priv {
+ 	struct gpio_desc *gpio;
+ 	int irq;
+ 	bool enabled;
++	struct mutex lock;
+ 	struct counter_signal signals;
+ 	struct counter_synapse synapses;
+ 	struct counter_count cnts;
+@@ -41,6 +44,8 @@ static int interrupt_cnt_enable_read(struct counter_device *counter,
+ {
+ 	struct interrupt_cnt_priv *priv = counter_priv(counter);
+ 
++	guard(mutex)(&priv->lock);
++
+ 	*enable = priv->enabled;
+ 
+ 	return 0;
+@@ -51,6 +56,8 @@ static int interrupt_cnt_enable_write(struct counter_device *counter,
+ {
+ 	struct interrupt_cnt_priv *priv = counter_priv(counter);
+ 
++	guard(mutex)(&priv->lock);
++
+ 	if (priv->enabled == enable)
+ 		return 0;
+ 
+@@ -227,6 +234,8 @@ static int interrupt_cnt_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
++	mutex_init(&priv->lock);
++
+ 	ret = devm_counter_add(dev, counter);
+ 	if (ret < 0)
+ 		return dev_err_probe(dev, ret, "Failed to add counter\n");
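
interrupt-cnt's new mutex is taken with guard(mutex)(...) from
<linux/cleanup.h>, which acquires the lock and releases it automatically at
every return, so neither early-return path needs an explicit unlock. A
userspace sketch built on the same compiler cleanup attribute the kernel's
guard() machinery relies on:

#include <pthread.h>
#include <stdio.h>

static void unlock_cleanup(pthread_mutex_t **m) { pthread_mutex_unlock(*m); }
/* Lock now; the cleanup attribute unlocks when the variable goes out
 * of scope, on every exit path. */
#define guard_mutex(m) \
	pthread_mutex_t *_g __attribute__((cleanup(unlock_cleanup))) = \
		(pthread_mutex_lock(m), (m))

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int enabled;

static int enable_write(int enable)
{
	guard_mutex(&lock);

	if (enabled == enable)
		return 0;	/* unlock runs automatically here */
	enabled = enable;
	return 0;		/* ...and here */
}

int main(void)
{
	printf("%d %d\n", enable_write(1), enable_write(1));
	return 0;
}
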
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+index 19b7fb4a93e86c..05f67661553c9a 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+@@ -275,13 +275,16 @@ static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req
+ 	} else {
+ 		if (nr_sgs > 0)
+ 			dma_unmap_sg(ce->dev, areq->src, ns, DMA_TO_DEVICE);
+-		dma_unmap_sg(ce->dev, areq->dst, nd, DMA_FROM_DEVICE);
++
++		if (nr_sgd > 0)
++			dma_unmap_sg(ce->dev, areq->dst, nd, DMA_FROM_DEVICE);
+ 	}
+ 
+ theend_iv:
+ 	if (areq->iv && ivsize > 0) {
+-		if (rctx->addr_iv)
++		if (!dma_mapping_error(ce->dev, rctx->addr_iv))
+ 			dma_unmap_single(ce->dev, rctx->addr_iv, rctx->ivlen, DMA_TO_DEVICE);
++
+ 		offset = areq->cryptlen - ivsize;
+ 		if (rctx->op_dir & CE_DECRYPTION) {
+ 			memcpy(areq->iv, chan->backup_iv, ivsize);
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c
+index ec1ffda9ea32e0..658f520cee0caa 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c
+@@ -832,13 +832,12 @@ static int sun8i_ce_pm_init(struct sun8i_ce_dev *ce)
+ 	err = pm_runtime_set_suspended(ce->dev);
+ 	if (err)
+ 		return err;
+-	pm_runtime_enable(ce->dev);
+-	return err;
+-}
+ 
+-static void sun8i_ce_pm_exit(struct sun8i_ce_dev *ce)
+-{
+-	pm_runtime_disable(ce->dev);
++	err = devm_pm_runtime_enable(ce->dev);
++	if (err)
++		return err;
++
++	return 0;
+ }
+ 
+ static int sun8i_ce_get_clks(struct sun8i_ce_dev *ce)
+@@ -1041,7 +1040,7 @@ static int sun8i_ce_probe(struct platform_device *pdev)
+ 			       "sun8i-ce-ns", ce);
+ 	if (err) {
+ 		dev_err(ce->dev, "Cannot request CryptoEngine Non-secure IRQ (err=%d)\n", err);
+-		goto error_irq;
++		goto error_pm;
+ 	}
+ 
+ 	err = sun8i_ce_register_algs(ce);
+@@ -1082,8 +1081,6 @@ static int sun8i_ce_probe(struct platform_device *pdev)
+ 	return 0;
+ error_alg:
+ 	sun8i_ce_unregister_algs(ce);
+-error_irq:
+-	sun8i_ce_pm_exit(ce);
+ error_pm:
+ 	sun8i_ce_free_chanlist(ce, MAXFLOW - 1);
+ 	return err;
+@@ -1104,8 +1101,6 @@ static void sun8i_ce_remove(struct platform_device *pdev)
+ #endif
+ 
+ 	sun8i_ce_free_chanlist(ce, MAXFLOW - 1);
+-
+-	sun8i_ce_pm_exit(ce);
+ }
+ 
+ static const struct of_device_id sun8i_ce_crypto_of_match_table[] = {
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
+index 6072dd9f390b40..3f9d79ea01aaa6 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
+@@ -343,9 +343,8 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	u32 common;
+ 	u64 byte_count;
+ 	__le32 *bf;
+-	void *buf = NULL;
++	void *buf, *result;
+ 	int j, i, todo;
+-	void *result = NULL;
+ 	u64 bs;
+ 	int digestsize;
+ 	dma_addr_t addr_res, addr_pad;
+@@ -365,14 +364,14 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	buf = kcalloc(2, bs, GFP_KERNEL | GFP_DMA);
+ 	if (!buf) {
+ 		err = -ENOMEM;
+-		goto theend;
++		goto err_out;
+ 	}
+ 	bf = (__le32 *)buf;
+ 
+ 	result = kzalloc(digestsize, GFP_KERNEL | GFP_DMA);
+ 	if (!result) {
+ 		err = -ENOMEM;
+-		goto theend;
++		goto err_free_buf;
+ 	}
+ 
+ 	flow = rctx->flow;
+@@ -398,7 +397,7 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	if (nr_sgs <= 0 || nr_sgs > MAX_SG) {
+ 		dev_err(ce->dev, "Invalid sg number %d\n", nr_sgs);
+ 		err = -EINVAL;
+-		goto theend;
++		goto err_free_result;
+ 	}
+ 
+ 	len = areq->nbytes;
+@@ -411,7 +410,7 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	if (len > 0) {
+ 		dev_err(ce->dev, "remaining len %d\n", len);
+ 		err = -EINVAL;
+-		goto theend;
++		goto err_unmap_src;
+ 	}
+ 	addr_res = dma_map_single(ce->dev, result, digestsize, DMA_FROM_DEVICE);
+ 	cet->t_dst[0].addr = desc_addr_val_le32(ce, addr_res);
+@@ -419,7 +418,7 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	if (dma_mapping_error(ce->dev, addr_res)) {
+ 		dev_err(ce->dev, "DMA map dest\n");
+ 		err = -EINVAL;
+-		goto theend;
++		goto err_unmap_src;
+ 	}
+ 
+ 	byte_count = areq->nbytes;
+@@ -441,7 +440,7 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	}
+ 	if (!j) {
+ 		err = -EINVAL;
+-		goto theend;
++		goto err_unmap_result;
+ 	}
+ 
+ 	addr_pad = dma_map_single(ce->dev, buf, j * 4, DMA_TO_DEVICE);
+@@ -450,7 +449,7 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	if (dma_mapping_error(ce->dev, addr_pad)) {
+ 		dev_err(ce->dev, "DMA error on padding SG\n");
+ 		err = -EINVAL;
+-		goto theend;
++		goto err_unmap_result;
+ 	}
+ 
+ 	if (ce->variant->hash_t_dlen_in_bits)
+@@ -463,16 +462,25 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	err = sun8i_ce_run_task(ce, flow, crypto_ahash_alg_name(tfm));
+ 
+ 	dma_unmap_single(ce->dev, addr_pad, j * 4, DMA_TO_DEVICE);
+-	dma_unmap_sg(ce->dev, areq->src, ns, DMA_TO_DEVICE);
++
++err_unmap_result:
+ 	dma_unmap_single(ce->dev, addr_res, digestsize, DMA_FROM_DEVICE);
++	if (!err)
++		memcpy(areq->result, result, algt->alg.hash.base.halg.digestsize);
+ 
++err_unmap_src:
++	dma_unmap_sg(ce->dev, areq->src, ns, DMA_TO_DEVICE);
+ 
+-	memcpy(areq->result, result, algt->alg.hash.base.halg.digestsize);
+-theend:
+-	kfree(buf);
++err_free_result:
+ 	kfree(result);
++
++err_free_buf:
++	kfree(buf);
++
++err_out:
+ 	local_bh_disable();
+ 	crypto_finalize_hash_request(engine, breq, err);
+ 	local_bh_enable();
++
+ 	return 0;
+ }
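
The hash-run rework replaces a single catch-all theend: label with a layered
unwind ladder, so each failure point releases exactly what was acquired
before it, and the success path falls through the same ladder since the
mappings must be released either way. The idiom, reduced to its shape:

#include <stdio.h>
#include <stdlib.h>

static int do_work(void) { return 0; }	/* stand-in for submit/wait */

/* Acquire in order, release in reverse; each label undoes only what is
 * already held at that point. */
static int run(void)
{
	int err = -1;
	void *buf, *result;

	buf = malloc(16);
	if (!buf)
		goto err_out;

	result = malloc(16);
	if (!result)
		goto err_free_buf;

	err = do_work();
	if (!err)
		puts("success: copy the digest out here");

	free(result);
err_free_buf:
	free(buf);
err_out:
	return err;
}

int main(void)
{
	return run() ? 1 : 0;
}
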
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h
+index 3b5c2af013d0da..83df4d71905318 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h
+@@ -308,8 +308,8 @@ struct sun8i_ce_hash_tfm_ctx {
+  * @flow:	the flow to use for this request
+  */
+ struct sun8i_ce_hash_reqctx {
+-	struct ahash_request fallback_req;
+ 	int flow;
++	struct ahash_request fallback_req; // keep at the end
+ };
+ 
+ /*
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+index 9b9605ce8ee629..8831bcb230c2d4 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+@@ -141,7 +141,7 @@ static int sun8i_ss_setup_ivs(struct skcipher_request *areq)
+ 
+ 	/* we need to copy all IVs from source in case DMA is bi-directionnal */
+ 	while (sg && len) {
+-		if (sg_dma_len(sg) == 0) {
++		if (sg->length == 0) {
+ 			sg = sg_next(sg);
+ 			continue;
+ 		}
+diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
+index 09d9589f2d681d..33a285981dfd45 100644
+--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
++++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
+@@ -1187,8 +1187,7 @@ static int iaa_compress(struct crypto_tfm *tfm,	struct acomp_req *req,
+ 			" src_addr %llx, dst_addr %llx\n", __func__,
+ 			active_compression_mode->name,
+ 			src_addr, dst_addr);
+-	} else if (ctx->async_mode)
+-		req->base.data = idxd_desc;
++	}
+ 
+ 	dev_dbg(dev, "%s: compression mode %s,"
+ 		" desc->src1_addr %llx, desc->src1_size %d,"
+@@ -1425,8 +1424,7 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
+ 			" src_addr %llx, dst_addr %llx\n", __func__,
+ 			active_compression_mode->name,
+ 			src_addr, dst_addr);
+-	} else if (ctx->async_mode && !disable_async)
+-		req->base.data = idxd_desc;
++	}
+ 
+ 	dev_dbg(dev, "%s: decompression mode %s,"
+ 		" desc->src1_addr %llx, desc->src1_size %d,"
+diff --git a/drivers/crypto/marvell/cesa/cipher.c b/drivers/crypto/marvell/cesa/cipher.c
+index cf62db50f95858..48c5c8ea8c43ec 100644
+--- a/drivers/crypto/marvell/cesa/cipher.c
++++ b/drivers/crypto/marvell/cesa/cipher.c
+@@ -459,6 +459,9 @@ static int mv_cesa_skcipher_queue_req(struct skcipher_request *req,
+ 	struct mv_cesa_skcipher_req *creq = skcipher_request_ctx(req);
+ 	struct mv_cesa_engine *engine;
+ 
++	if (!req->cryptlen)
++		return 0;
++
+ 	ret = mv_cesa_skcipher_req_init(req, tmpl);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/crypto/marvell/cesa/hash.c b/drivers/crypto/marvell/cesa/hash.c
+index f150861ceaf695..6815eddc906812 100644
+--- a/drivers/crypto/marvell/cesa/hash.c
++++ b/drivers/crypto/marvell/cesa/hash.c
+@@ -663,7 +663,7 @@ static int mv_cesa_ahash_dma_req_init(struct ahash_request *req)
+ 	if (ret)
+ 		goto err_free_tdma;
+ 
+-	if (iter.src.sg) {
++	if (iter.base.len > iter.src.op_offset) {
+ 		/*
+ 		 * Add all the new data, inserting an operation block and
+ 		 * launch command between each full SRAM block-worth of
+diff --git a/drivers/crypto/xilinx/zynqmp-sha.c b/drivers/crypto/xilinx/zynqmp-sha.c
+index 580649f9bff81f..0edf8eb264b55f 100644
+--- a/drivers/crypto/xilinx/zynqmp-sha.c
++++ b/drivers/crypto/xilinx/zynqmp-sha.c
+@@ -3,18 +3,19 @@
+  * Xilinx ZynqMP SHA Driver.
+  * Copyright (c) 2022 Xilinx Inc.
+  */
+-#include <linux/cacheflush.h>
+ #include <crypto/hash.h>
+ #include <crypto/internal/hash.h>
+ #include <crypto/sha3.h>
+-#include <linux/crypto.h>
++#include <linux/cacheflush.h>
++#include <linux/cleanup.h>
+ #include <linux/device.h>
+ #include <linux/dma-mapping.h>
++#include <linux/err.h>
+ #include <linux/firmware/xlnx-zynqmp.h>
+-#include <linux/init.h>
+ #include <linux/io.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
++#include <linux/spinlock.h>
+ #include <linux/platform_device.h>
+ 
+ #define ZYNQMP_DMA_BIT_MASK		32U
+@@ -43,6 +44,8 @@ struct zynqmp_sha_desc_ctx {
+ static dma_addr_t update_dma_addr, final_dma_addr;
+ static char *ubuf, *fbuf;
+ 
++static DEFINE_SPINLOCK(zynqmp_sha_lock);
++
+ static int zynqmp_sha_init_tfm(struct crypto_shash *hash)
+ {
+ 	const char *fallback_driver_name = crypto_shash_alg_name(hash);
+@@ -124,7 +127,8 @@ static int zynqmp_sha_export(struct shash_desc *desc, void *out)
+ 	return crypto_shash_export(&dctx->fbk_req, out);
+ }
+ 
+-static int zynqmp_sha_digest(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out)
++static int __zynqmp_sha_digest(struct shash_desc *desc, const u8 *data,
++			       unsigned int len, u8 *out)
+ {
+ 	unsigned int remaining_len = len;
+ 	int update_size;
+@@ -159,6 +163,12 @@ static int zynqmp_sha_digest(struct shash_desc *desc, const u8 *data, unsigned i
+ 	return ret;
+ }
+ 
++static int zynqmp_sha_digest(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out)
++{
++	scoped_guard(spinlock_bh, &zynqmp_sha_lock)
++		return __zynqmp_sha_digest(desc, data, len, out);
++}
++
+ static struct zynqmp_sha_drv_ctx sha3_drv_ctx = {
+ 	.sha3_384 = {
+ 		.init = zynqmp_sha_init,
+diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
+index b6255c0601bb29..aa2dc762140f6e 100644
+--- a/drivers/dma/ti/k3-udma.c
++++ b/drivers/dma/ti/k3-udma.c
+@@ -5624,7 +5624,8 @@ static int udma_probe(struct platform_device *pdev)
+ 		uc->config.dir = DMA_MEM_TO_MEM;
+ 		uc->name = devm_kasprintf(dev, GFP_KERNEL, "%s chan%d",
+ 					  dev_name(dev), i);
+-
++		if (!uc->name)
++			return -ENOMEM;
+ 		vchan_init(&uc->vc, &ud->ddev);
+ 		/* Use custom vchan completion handling */
+ 		tasklet_setup(&uc->vc.task, udma_vchan_complete);
+diff --git a/drivers/edac/bluefield_edac.c b/drivers/edac/bluefield_edac.c
+index 4942a240c30f25..ae3bb7afa103eb 100644
+--- a/drivers/edac/bluefield_edac.c
++++ b/drivers/edac/bluefield_edac.c
+@@ -199,8 +199,10 @@ static void bluefield_gather_report_ecc(struct mem_ctl_info *mci,
+ 	 * error without the detailed information.
+ 	 */
+ 	err = bluefield_edac_readl(priv, MLXBF_SYNDROM, &dram_syndrom);
+-	if (err)
++	if (err) {
+ 		dev_err(priv->dev, "DRAM syndrom read failed.\n");
++		return;
++	}
+ 
+ 	serr = FIELD_GET(MLXBF_SYNDROM__SERR, dram_syndrom);
+ 	derr = FIELD_GET(MLXBF_SYNDROM__DERR, dram_syndrom);
+@@ -213,20 +215,26 @@ static void bluefield_gather_report_ecc(struct mem_ctl_info *mci,
+ 	}
+ 
+ 	err = bluefield_edac_readl(priv, MLXBF_ADD_INFO, &dram_additional_info);
+-	if (err)
++	if (err) {
+ 		dev_err(priv->dev, "DRAM additional info read failed.\n");
++		return;
++	}
+ 
+ 	err_prank = FIELD_GET(MLXBF_ADD_INFO__ERR_PRANK, dram_additional_info);
+ 
+ 	ecc_dimm = (err_prank >= 2 && priv->dimm_ranks[0] <= 2) ? 1 : 0;
+ 
+ 	err = bluefield_edac_readl(priv, MLXBF_ERR_ADDR_0, &edea0);
+-	if (err)
++	if (err) {
+ 		dev_err(priv->dev, "Error addr 0 read failed.\n");
++		return;
++	}
+ 
+ 	err = bluefield_edac_readl(priv, MLXBF_ERR_ADDR_1, &edea1);
+-	if (err)
++	if (err) {
+ 		dev_err(priv->dev, "Error addr 1 read failed.\n");
++		return;
++	}
+ 
+ 	ecc_dimm_addr = ((u64)edea1 << 32) | edea0;
+ 
+@@ -250,8 +258,10 @@ static void bluefield_edac_check(struct mem_ctl_info *mci)
+ 		return;
+ 
+ 	err = bluefield_edac_readl(priv, MLXBF_ECC_CNT, &ecc_count);
+-	if (err)
++	if (err) {
+ 		dev_err(priv->dev, "ECC count read failed.\n");
++		return;
++	}
+ 
+ 	single_error_count = FIELD_GET(MLXBF_ECC_CNT__SERR_CNT, ecc_count);
+ 	double_error_count = FIELD_GET(MLXBF_ECC_CNT__DERR_CNT, ecc_count);
+diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
+index 355a977019e944..355b527d839e78 100644
+--- a/drivers/edac/i10nm_base.c
++++ b/drivers/edac/i10nm_base.c
+@@ -95,7 +95,7 @@ static u32 offsets_demand2_spr[] = {0x22c70, 0x22d80, 0x22f18, 0x22d58, 0x22c64,
+ static u32 offsets_demand_spr_hbm0[] = {0x2a54, 0x2a60, 0x2b10, 0x2a58, 0x2a5c, 0x0ee0};
+ static u32 offsets_demand_spr_hbm1[] = {0x2e54, 0x2e60, 0x2f10, 0x2e58, 0x2e5c, 0x0fb0};
+ 
+-static void __enable_retry_rd_err_log(struct skx_imc *imc, int chan, bool enable,
++static void __enable_retry_rd_err_log(struct skx_imc *imc, int chan, bool enable, u32 *rrl_ctl,
+ 				      u32 *offsets_scrub, u32 *offsets_demand,
+ 				      u32 *offsets_demand2)
+ {
+@@ -108,10 +108,10 @@ static void __enable_retry_rd_err_log(struct skx_imc *imc, int chan, bool enable
+ 
+ 	if (enable) {
+ 		/* Save default configurations */
+-		imc->chan[chan].retry_rd_err_log_s = s;
+-		imc->chan[chan].retry_rd_err_log_d = d;
++		rrl_ctl[0] = s;
++		rrl_ctl[1] = d;
+ 		if (offsets_demand2)
+-			imc->chan[chan].retry_rd_err_log_d2 = d2;
++			rrl_ctl[2] = d2;
+ 
+ 		s &= ~RETRY_RD_ERR_LOG_NOOVER_UC;
+ 		s |=  RETRY_RD_ERR_LOG_EN;
+@@ -125,25 +125,25 @@ static void __enable_retry_rd_err_log(struct skx_imc *imc, int chan, bool enable
+ 		}
+ 	} else {
+ 		/* Restore default configurations */
+-		if (imc->chan[chan].retry_rd_err_log_s & RETRY_RD_ERR_LOG_UC)
++		if (rrl_ctl[0] & RETRY_RD_ERR_LOG_UC)
+ 			s |=  RETRY_RD_ERR_LOG_UC;
+-		if (imc->chan[chan].retry_rd_err_log_s & RETRY_RD_ERR_LOG_NOOVER)
++		if (rrl_ctl[0] & RETRY_RD_ERR_LOG_NOOVER)
+ 			s |=  RETRY_RD_ERR_LOG_NOOVER;
+-		if (!(imc->chan[chan].retry_rd_err_log_s & RETRY_RD_ERR_LOG_EN))
++		if (!(rrl_ctl[0] & RETRY_RD_ERR_LOG_EN))
+ 			s &= ~RETRY_RD_ERR_LOG_EN;
+-		if (imc->chan[chan].retry_rd_err_log_d & RETRY_RD_ERR_LOG_UC)
++		if (rrl_ctl[1] & RETRY_RD_ERR_LOG_UC)
+ 			d |=  RETRY_RD_ERR_LOG_UC;
+-		if (imc->chan[chan].retry_rd_err_log_d & RETRY_RD_ERR_LOG_NOOVER)
++		if (rrl_ctl[1] & RETRY_RD_ERR_LOG_NOOVER)
+ 			d |=  RETRY_RD_ERR_LOG_NOOVER;
+-		if (!(imc->chan[chan].retry_rd_err_log_d & RETRY_RD_ERR_LOG_EN))
++		if (!(rrl_ctl[1] & RETRY_RD_ERR_LOG_EN))
+ 			d &= ~RETRY_RD_ERR_LOG_EN;
+ 
+ 		if (offsets_demand2) {
+-			if (imc->chan[chan].retry_rd_err_log_d2 & RETRY_RD_ERR_LOG_UC)
++			if (rrl_ctl[2] & RETRY_RD_ERR_LOG_UC)
+ 				d2 |=  RETRY_RD_ERR_LOG_UC;
+-			if (!(imc->chan[chan].retry_rd_err_log_d2 & RETRY_RD_ERR_LOG_NOOVER))
++			if (!(rrl_ctl[2] & RETRY_RD_ERR_LOG_NOOVER))
+ 				d2 &=  ~RETRY_RD_ERR_LOG_NOOVER;
+-			if (!(imc->chan[chan].retry_rd_err_log_d2 & RETRY_RD_ERR_LOG_EN))
++			if (!(rrl_ctl[2] & RETRY_RD_ERR_LOG_EN))
+ 				d2 &= ~RETRY_RD_ERR_LOG_EN;
+ 		}
+ 	}
+@@ -157,6 +157,7 @@ static void __enable_retry_rd_err_log(struct skx_imc *imc, int chan, bool enable
+ static void enable_retry_rd_err_log(bool enable)
+ {
+ 	int i, j, imc_num, chan_num;
++	struct skx_channel *chan;
+ 	struct skx_imc *imc;
+ 	struct skx_dev *d;
+ 
+@@ -171,8 +172,9 @@ static void enable_retry_rd_err_log(bool enable)
+ 			if (!imc->mbase)
+ 				continue;
+ 
++			chan = d->imc[i].chan;
+ 			for (j = 0; j < chan_num; j++)
+-				__enable_retry_rd_err_log(imc, j, enable,
++				__enable_retry_rd_err_log(imc, j, enable, chan[j].rrl_ctl[0],
+ 							  res_cfg->offsets_scrub,
+ 							  res_cfg->offsets_demand,
+ 							  res_cfg->offsets_demand2);
+@@ -186,12 +188,13 @@ static void enable_retry_rd_err_log(bool enable)
+ 			if (!imc->mbase || !imc->hbm_mc)
+ 				continue;
+ 
++			chan = d->imc[i].chan;
+ 			for (j = 0; j < chan_num; j++) {
+-				__enable_retry_rd_err_log(imc, j, enable,
++				__enable_retry_rd_err_log(imc, j, enable, chan[j].rrl_ctl[0],
+ 							  res_cfg->offsets_scrub_hbm0,
+ 							  res_cfg->offsets_demand_hbm0,
+ 							  NULL);
+-				__enable_retry_rd_err_log(imc, j, enable,
++				__enable_retry_rd_err_log(imc, j, enable, chan[j].rrl_ctl[1],
+ 							  res_cfg->offsets_scrub_hbm1,
+ 							  res_cfg->offsets_demand_hbm1,
+ 							  NULL);
+diff --git a/drivers/edac/skx_common.c b/drivers/edac/skx_common.c
+index fa5b442b184499..c9ade45c1a99f3 100644
+--- a/drivers/edac/skx_common.c
++++ b/drivers/edac/skx_common.c
+@@ -116,6 +116,7 @@ EXPORT_SYMBOL_GPL(skx_adxl_get);
+ 
+ void skx_adxl_put(void)
+ {
++	adxl_component_count = 0;
+ 	kfree(adxl_values);
+ 	kfree(adxl_msg);
+ }
+diff --git a/drivers/edac/skx_common.h b/drivers/edac/skx_common.h
+index ca5408803f8787..5afd425f3b4ff1 100644
+--- a/drivers/edac/skx_common.h
++++ b/drivers/edac/skx_common.h
+@@ -79,6 +79,9 @@
+  */
+ #define MCACOD_EXT_MEM_ERR	0x280
+ 
++/* Max RRL register sets per {,sub-,pseudo-}channel. */
++#define NUM_RRL_SET		3
++
+ /*
+  * Each cpu socket contains some pci devices that provide global
+  * information, and also some that are local to each of the two
+@@ -117,9 +120,11 @@ struct skx_dev {
+ 		struct skx_channel {
+ 			struct pci_dev	*cdev;
+ 			struct pci_dev	*edev;
+-			u32 retry_rd_err_log_s;
+-			u32 retry_rd_err_log_d;
+-			u32 retry_rd_err_log_d2;
++			/*
++			 * Two groups of RRL control registers per channel to save default RRL
++			 * settings of two {sub-,pseudo-}channels in Linux RRL control mode.
++			 */
++			u32 rrl_ctl[2][NUM_RRL_SET];
+ 			struct skx_dimm {
+ 				u8 close_pg;
+ 				u8 bank_xor_enable;
+diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
+index aadc395ee16813..7df19d82aa689e 100644
+--- a/drivers/firmware/Kconfig
++++ b/drivers/firmware/Kconfig
+@@ -31,7 +31,6 @@ config ARM_SCPI_PROTOCOL
+ config ARM_SDE_INTERFACE
+ 	bool "ARM Software Delegated Exception Interface (SDEI)"
+ 	depends on ARM64
+-	depends on ACPI_APEI_GHES
+ 	help
+ 	  The Software Delegated Exception Interface (SDEI) is an ARM
+ 	  standard for registering callbacks from the platform firmware
+diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c
+index 3e8051fe829657..71e2a9a89f6ada 100644
+--- a/drivers/firmware/arm_sdei.c
++++ b/drivers/firmware/arm_sdei.c
+@@ -1062,13 +1062,12 @@ static bool __init sdei_present_acpi(void)
+ 	return true;
+ }
+ 
+-void __init sdei_init(void)
++void __init acpi_sdei_init(void)
+ {
+ 	struct platform_device *pdev;
+ 	int ret;
+ 
+-	ret = platform_driver_register(&sdei_driver);
+-	if (ret || !sdei_present_acpi())
++	if (!sdei_present_acpi())
+ 		return;
+ 
+ 	pdev = platform_device_register_simple(sdei_driver.driver.name,
+@@ -1081,6 +1080,12 @@ void __init sdei_init(void)
+ 	}
+ }
+ 
++static int __init sdei_init(void)
++{
++	return platform_driver_register(&sdei_driver);
++}
++arch_initcall(sdei_init);
++
+ int sdei_event_handler(struct pt_regs *regs,
+ 		       struct sdei_registered_event *arg)
+ {
+diff --git a/drivers/firmware/efi/libstub/efi-stub-helper.c b/drivers/firmware/efi/libstub/efi-stub-helper.c
+index fd6dc790c5a89d..7aa2f9ad293562 100644
+--- a/drivers/firmware/efi/libstub/efi-stub-helper.c
++++ b/drivers/firmware/efi/libstub/efi-stub-helper.c
+@@ -601,6 +601,7 @@ efi_status_t efi_load_initrd_cmdline(efi_loaded_image_t *image,
+  * @image:	EFI loaded image protocol
+  * @soft_limit:	preferred address for loading the initrd
+  * @hard_limit:	upper limit address for loading the initrd
++ * @out:	pointer to store the address of the initrd table
+  *
+  * Return:	status code
+  */
+diff --git a/drivers/firmware/psci/psci.c b/drivers/firmware/psci/psci.c
+index a1ebbe9b73b136..38ca190d4a22d6 100644
+--- a/drivers/firmware/psci/psci.c
++++ b/drivers/firmware/psci/psci.c
+@@ -804,8 +804,10 @@ int __init psci_dt_init(void)
+ 
+ 	np = of_find_matching_node_and_match(NULL, psci_of_match, &matched_np);
+ 
+-	if (!np || !of_device_is_available(np))
++	if (!np || !of_device_is_available(np)) {
++		of_node_put(np);
+ 		return -ENODEV;
++	}
+ 
+ 	init_fn = (psci_initcall_t)matched_np->data;
+ 	ret = init_fn(np);
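
psci_dt_init() leaked the node reference whenever
of_find_matching_node_and_match() found a node that then failed
of_device_is_available(); the fix puts the reference on that bail-out path
too, of_node_put(NULL) being a harmless no-op. The get/put contract as a
runnable toy:

#include <stdio.h>

struct node { int refcount; int available; };

static struct node *node_get(struct node *n) { if (n) n->refcount++; return n; }
static void node_put(struct node *n) { if (n) n->refcount--; }

/* Returns a counted reference, like of_find_matching_node_and_match(). */
static struct node *find_node(struct node *candidate)
{
	return node_get(candidate);
}

static int dt_init(struct node *candidate)
{
	struct node *np = find_node(candidate);

	if (!np || !np->available) {
		node_put(np);	/* the fix: drop the ref on the bail-out path */
		return -1;
	}
	/* ... use np ... */
	node_put(np);
	return 0;
}

int main(void)
{
	struct node disabled = { .refcount = 0, .available = 0 };

	dt_init(&disabled);
	printf("leaked refs: %d\n", disabled.refcount);	/* 0 after the fix */
	return 0;
}
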
+diff --git a/drivers/firmware/samsung/exynos-acpm-pmic.c b/drivers/firmware/samsung/exynos-acpm-pmic.c
+index 85e90d236da21e..39b33a356ebd24 100644
+--- a/drivers/firmware/samsung/exynos-acpm-pmic.c
++++ b/drivers/firmware/samsung/exynos-acpm-pmic.c
+@@ -43,13 +43,13 @@ static inline u32 acpm_pmic_get_bulk(u32 data, unsigned int i)
+ 	return (data >> (ACPM_PMIC_BULK_SHIFT * i)) & ACPM_PMIC_BULK_MASK;
+ }
+ 
+-static void acpm_pmic_set_xfer(struct acpm_xfer *xfer, u32 *cmd,
++static void acpm_pmic_set_xfer(struct acpm_xfer *xfer, u32 *cmd, size_t cmdlen,
+ 			       unsigned int acpm_chan_id)
+ {
+ 	xfer->txd = cmd;
+ 	xfer->rxd = cmd;
+-	xfer->txlen = sizeof(cmd);
+-	xfer->rxlen = sizeof(cmd);
++	xfer->txlen = cmdlen;
++	xfer->rxlen = cmdlen;
+ 	xfer->acpm_chan_id = acpm_chan_id;
+ }
+ 
+@@ -71,7 +71,7 @@ int acpm_pmic_read_reg(const struct acpm_handle *handle,
+ 	int ret;
+ 
+ 	acpm_pmic_init_read_cmd(cmd, type, reg, chan);
+-	acpm_pmic_set_xfer(&xfer, cmd, acpm_chan_id);
++	acpm_pmic_set_xfer(&xfer, cmd, sizeof(cmd), acpm_chan_id);
+ 
+ 	ret = acpm_do_xfer(handle, &xfer);
+ 	if (ret)
+@@ -104,7 +104,7 @@ int acpm_pmic_bulk_read(const struct acpm_handle *handle,
+ 		return -EINVAL;
+ 
+ 	acpm_pmic_init_bulk_read_cmd(cmd, type, reg, chan, count);
+-	acpm_pmic_set_xfer(&xfer, cmd, acpm_chan_id);
++	acpm_pmic_set_xfer(&xfer, cmd, sizeof(cmd), acpm_chan_id);
+ 
+ 	ret = acpm_do_xfer(handle, &xfer);
+ 	if (ret)
+@@ -144,7 +144,7 @@ int acpm_pmic_write_reg(const struct acpm_handle *handle,
+ 	int ret;
+ 
+ 	acpm_pmic_init_write_cmd(cmd, type, reg, chan, value);
+-	acpm_pmic_set_xfer(&xfer, cmd, acpm_chan_id);
++	acpm_pmic_set_xfer(&xfer, cmd, sizeof(cmd), acpm_chan_id);
+ 
+ 	ret = acpm_do_xfer(handle, &xfer);
+ 	if (ret)
+@@ -184,7 +184,7 @@ int acpm_pmic_bulk_write(const struct acpm_handle *handle,
+ 		return -EINVAL;
+ 
+ 	acpm_pmic_init_bulk_write_cmd(cmd, type, reg, chan, count, buf);
+-	acpm_pmic_set_xfer(&xfer, cmd, acpm_chan_id);
++	acpm_pmic_set_xfer(&xfer, cmd, sizeof(cmd), acpm_chan_id);
+ 
+ 	ret = acpm_do_xfer(handle, &xfer);
+ 	if (ret)
+@@ -214,7 +214,7 @@ int acpm_pmic_update_reg(const struct acpm_handle *handle,
+ 	int ret;
+ 
+ 	acpm_pmic_init_update_cmd(cmd, type, reg, chan, value, mask);
+-	acpm_pmic_set_xfer(&xfer, cmd, acpm_chan_id);
++	acpm_pmic_set_xfer(&xfer, cmd, sizeof(cmd), acpm_chan_id);
+ 
+ 	ret = acpm_do_xfer(handle, &xfer);
+ 	if (ret)
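
The exynos-acpm-pmic fix is the textbook array-decay bug: inside
acpm_pmic_set_xfer(), sizeof(cmd) measured the u32 * parameter (8 bytes on a
64-bit kernel), not the caller's command array, so every transfer length was
wrong. The length is now taken with sizeof where the array type is still
visible and passed down. In isolation:

#include <stdio.h>

/* An array parameter has decayed to a pointer inside the callee, so
 * sizeof() measures the pointer, not the array. */
static void set_xfer_buggy(unsigned int *cmd)
{
	printf("buggy len: %zu\n", sizeof(cmd));	/* pointer size */
}

static void set_xfer_fixed(unsigned int *cmd, size_t cmdlen)
{
	(void)cmd;
	printf("fixed len: %zu\n", cmdlen);		/* real payload size */
}

int main(void)
{
	unsigned int cmd[4] = { 0 };

	set_xfer_buggy(cmd);
	set_xfer_fixed(cmd, sizeof(cmd));	/* sizeof at array scope */
	return 0;
}
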
+diff --git a/drivers/firmware/samsung/exynos-acpm.c b/drivers/firmware/samsung/exynos-acpm.c
+index 15e991b99f5a38..e80cb7a8da8f23 100644
+--- a/drivers/firmware/samsung/exynos-acpm.c
++++ b/drivers/firmware/samsung/exynos-acpm.c
+@@ -696,24 +696,17 @@ static const struct acpm_handle *acpm_get_by_phandle(struct device *dev,
+ 		return ERR_PTR(-ENODEV);
+ 
+ 	pdev = of_find_device_by_node(acpm_np);
+-	if (!pdev) {
+-		dev_err(dev, "Cannot find device node %s\n", acpm_np->name);
+-		of_node_put(acpm_np);
+-		return ERR_PTR(-EPROBE_DEFER);
+-	}
+-
+ 	of_node_put(acpm_np);
++	if (!pdev)
++		return ERR_PTR(-EPROBE_DEFER);
+ 
+ 	acpm = platform_get_drvdata(pdev);
+ 	if (!acpm) {
+-		dev_err(dev, "Cannot get drvdata from %s\n",
+-			dev_name(&pdev->dev));
+ 		platform_device_put(pdev);
+ 		return ERR_PTR(-EPROBE_DEFER);
+ 	}
+ 
+ 	if (!try_module_get(pdev->dev.driver->owner)) {
+-		dev_err(dev, "Cannot get module reference.\n");
+ 		platform_device_put(pdev);
+ 		return ERR_PTR(-EPROBE_DEFER);
+ 	}
+diff --git a/drivers/fpga/tests/fpga-mgr-test.c b/drivers/fpga/tests/fpga-mgr-test.c
+index 8748babb050458..62975a39ee14e4 100644
+--- a/drivers/fpga/tests/fpga-mgr-test.c
++++ b/drivers/fpga/tests/fpga-mgr-test.c
+@@ -263,6 +263,7 @@ static void fpga_mgr_test_img_load_sgt(struct kunit *test)
+ 	img_buf = init_test_buffer(test, IMAGE_SIZE);
+ 
+ 	sgt = kunit_kzalloc(test, sizeof(*sgt), GFP_KERNEL);
++	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, sgt);
+ 	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 	sg_init_one(sgt->sgl, img_buf, IMAGE_SIZE);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 23e6a05359c24b..c68c2e2f4d61aa 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -4800,7 +4800,7 @@ static int gfx_v10_0_sw_init(struct amdgpu_ip_block *ip_block)
+ 		adev->gfx.cleaner_shader_size = sizeof(gfx_10_1_10_cleaner_shader_hex);
+ 		if (adev->gfx.me_fw_version >= 101 &&
+ 		    adev->gfx.pfp_fw_version  >= 158 &&
+-		    adev->gfx.mec_fw_version >= 152) {
++		    adev->gfx.mec_fw_version >= 151) {
+ 			adev->gfx.enable_cleaner_shader = true;
+ 			r = amdgpu_gfx_cleaner_shader_sw_init(adev, adev->gfx.cleaner_shader_size);
+ 			if (r) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0_cleaner_shader.h b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0_cleaner_shader.h
+index 5255378af53c0a..f67569ccf9f609 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0_cleaner_shader.h
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0_cleaner_shader.h
+@@ -43,9 +43,9 @@ static const u32 gfx_10_1_10_cleaner_shader_hex[] = {
+ 	0xd70f6a01, 0x000202ff,
+ 	0x00000400, 0x80828102,
+ 	0xbf84fff7, 0xbefc03ff,
+-	0x00000068, 0xbe803080,
+-	0xbe813080, 0xbe823080,
+-	0xbe833080, 0x80fc847c,
++	0x00000068, 0xbe803000,
++	0xbe813000, 0xbe823000,
++	0xbe833000, 0x80fc847c,
+ 	0xbf84fffa, 0xbeea0480,
+ 	0xbeec0480, 0xbeee0480,
+ 	0xbef00480, 0xbef20480,
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_1_10_cleaner_shader.asm b/drivers/gpu/drm/amd/amdgpu/gfx_v10_1_10_cleaner_shader.asm
+index 9ba3359253c95d..54f7ed9e2801c5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_1_10_cleaner_shader.asm
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_1_10_cleaner_shader.asm
+@@ -40,7 +40,6 @@ shader main
+   type(CS)
+   wave_size(32)
+ // Note: original source code from SQ team
+-
+ //
+ // Create 32 waves in a threadgroup (CS waves)
+ // Each allocates 64 VGPRs
+@@ -71,8 +70,8 @@ label_0005:
+   s_sub_u32     s2, s2, 8
+   s_cbranch_scc0  label_0005
+   //
+-  s_mov_b32     s2, 0x80000000                     // Bit31 is first_wave
+-  s_and_b32     s2, s2, s0                                  // sgpr0 has tg_size (first_wave) term as in ucode only COMPUTE_PGM_RSRC2.tg_size_en is set
++  s_mov_b32     s2, 0x80000000                       // Bit31 is first_wave
++  s_and_b32     s2, s2, s1                           // sgpr0 has tg_size (first_wave) term as in ucode only COMPUTE_PGM_RSRC2.tg_size_en is set
+   s_cbranch_scc0  label_0023                         // Clean LDS if its first wave of ThreadGroup/WorkGroup
+   // CLEAR LDS
+   //
+@@ -99,10 +98,10 @@ label_001F:
+ label_0023:
+   s_mov_b32     m0, 0x00000068  // Loop 108/4=27 times  (loop unrolled for performance)
+ label_sgpr_loop:
+-  s_movreld_b32     s0, 0
+-  s_movreld_b32     s1, 0
+-  s_movreld_b32     s2, 0
+-  s_movreld_b32     s3, 0
++  s_movreld_b32     s0, s0
++  s_movreld_b32     s1, s0
++  s_movreld_b32     s2, s0
++  s_movreld_b32     s3, s0
+   s_sub_u32         m0, m0, 4
+   s_cbranch_scc0  label_sgpr_loop
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/basics/fixpt31_32.c b/drivers/gpu/drm/amd/display/dc/basics/fixpt31_32.c
+index 88d3f9d7dd556a..452206b5095eb0 100644
+--- a/drivers/gpu/drm/amd/display/dc/basics/fixpt31_32.c
++++ b/drivers/gpu/drm/amd/display/dc/basics/fixpt31_32.c
+@@ -51,8 +51,6 @@ static inline unsigned long long complete_integer_division_u64(
+ {
+ 	unsigned long long result;
+ 
+-	ASSERT(divisor);
+-
+ 	result = div64_u64_rem(dividend, divisor, remainder);
+ 
+ 	return result;
+@@ -213,9 +211,6 @@ struct fixed31_32 dc_fixpt_recip(struct fixed31_32 arg)
+ 	 * @note
+ 	 * Good idea to use Newton's method
+ 	 */
+-
+-	ASSERT(arg.value);
+-
+ 	return dc_fixpt_from_fraction(
+ 		dc_fixpt_one.value,
+ 		arg.value);
+diff --git a/drivers/gpu/drm/amd/display/dc/sspl/spl_fixpt31_32.c b/drivers/gpu/drm/amd/display/dc/sspl/spl_fixpt31_32.c
+index 52d97918a3bd21..ebf0287417e0eb 100644
+--- a/drivers/gpu/drm/amd/display/dc/sspl/spl_fixpt31_32.c
++++ b/drivers/gpu/drm/amd/display/dc/sspl/spl_fixpt31_32.c
+@@ -29,8 +29,6 @@ static inline unsigned long long spl_complete_integer_division_u64(
+ {
+ 	unsigned long long result;
+ 
+-	SPL_ASSERT(divisor);
+-
+ 	result = spl_div64_u64_rem(dividend, divisor, remainder);
+ 
+ 	return result;
+@@ -196,8 +194,6 @@ struct spl_fixed31_32 spl_fixpt_recip(struct spl_fixed31_32 arg)
+ 	 * Good idea to use Newton's method
+ 	 */
+ 
+-	SPL_ASSERT(arg.value);
+-
+ 	return spl_fixpt_from_fraction(
+ 		spl_fixpt_one.value,
+ 		arg.value);
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
+index 4bd92fd782be6a..8d40ed0f0e8383 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
+@@ -143,6 +143,10 @@ int atomctrl_initialize_mc_reg_table(
+ 	vram_info = (ATOM_VRAM_INFO_HEADER_V2_1 *)
+ 		smu_atom_get_data_table(hwmgr->adev,
+ 				GetIndexIntoMasterTable(DATA, VRAM_Info), &size, &frev, &crev);
++	if (!vram_info) {
++		pr_err("Could not retrieve the VramInfo table!");
++		return -EINVAL;
++	}
+ 
+ 	if (module_index >= vram_info->ucNumOfVRAMModule) {
+ 		pr_err("Invalid VramInfo table.");
+@@ -180,6 +184,10 @@ int atomctrl_initialize_mc_reg_table_v2_2(
+ 	vram_info = (ATOM_VRAM_INFO_HEADER_V2_2 *)
+ 		smu_atom_get_data_table(hwmgr->adev,
+ 				GetIndexIntoMasterTable(DATA, VRAM_Info), &size, &frev, &crev);
++	if (!vram_info) {
++		pr_err("Could not retrieve the VramInfo table!");
++		return -EINVAL;
++	}
+ 
+ 	if (module_index >= vram_info->ucNumOfVRAMModule) {
+ 		pr_err("Invalid VramInfo table.");
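The two ppatomctrl.c hunks above guard the return of smu_atom_get_data_table() before the code dereferences vram_info->ucNumOfVRAMModule. A rough standalone sketch of the same check-before-use pattern; the get_table() helper and struct layout below are illustrative stand-ins, not the driver's real API:

#include <stdio.h>
#include <errno.h>

struct vram_info {
	unsigned char num_modules;
};

/* Hypothetical stand-in for smu_atom_get_data_table(); may return NULL. */
static struct vram_info *get_table(int present)
{
	static struct vram_info t = { .num_modules = 2 };
	return present ? &t : NULL;
}

static int init_mc_reg_table(int module_index, int table_present)
{
	struct vram_info *vram_info = get_table(table_present);

	/* Bail out before the first dereference, as the patch does. */
	if (!vram_info) {
		fprintf(stderr, "Could not retrieve the VramInfo table!\n");
		return -EINVAL;
	}

	if (module_index >= vram_info->num_modules) {
		fprintf(stderr, "Invalid VramInfo table.\n");
		return -EINVAL;
	}
	return 0;
}

int main(void)
{
	printf("missing table: %d\n", init_mc_reg_table(0, 0)); /* -EINVAL */
	printf("valid lookup:  %d\n", init_mc_reg_table(0, 1)); /* 0 */
	return 0;
}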
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+index 071168aa0c3bda..5222b1e9f533d0 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+@@ -1597,10 +1597,8 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 	}
+ 
+ 	dp->reg_base = devm_platform_ioremap_resource(pdev, 0);
+-	if (IS_ERR(dp->reg_base)) {
+-		ret = PTR_ERR(dp->reg_base);
+-		goto err_disable_clk;
+-	}
++	if (IS_ERR(dp->reg_base))
++		return ERR_CAST(dp->reg_base);
+ 
+ 	dp->force_hpd = of_property_read_bool(dev->of_node, "force-hpd");
+ 
+@@ -1612,8 +1610,7 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 	if (IS_ERR(dp->hpd_gpiod)) {
+ 		dev_err(dev, "error getting HDP GPIO: %ld\n",
+ 			PTR_ERR(dp->hpd_gpiod));
+-		ret = PTR_ERR(dp->hpd_gpiod);
+-		goto err_disable_clk;
++		return ERR_CAST(dp->hpd_gpiod);
+ 	}
+ 
+ 	if (dp->hpd_gpiod) {
+@@ -1633,8 +1630,7 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 
+ 	if (dp->irq == -ENXIO) {
+ 		dev_err(&pdev->dev, "failed to get irq\n");
+-		ret = -ENODEV;
+-		goto err_disable_clk;
++		return ERR_PTR(-ENODEV);
+ 	}
+ 
+ 	ret = devm_request_threaded_irq(&pdev->dev, dp->irq,
+@@ -1643,15 +1639,22 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 					irq_flags, "analogix-dp", dp);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to request irq\n");
+-		goto err_disable_clk;
++		return ERR_PTR(ret);
+ 	}
+ 	disable_irq(dp->irq);
+ 
+-	return dp;
++	dp->aux.name = "DP-AUX";
++	dp->aux.transfer = analogix_dpaux_transfer;
++	dp->aux.dev = dp->dev;
++	drm_dp_aux_init(&dp->aux);
+ 
+-err_disable_clk:
+-	clk_disable_unprepare(dp->clock);
+-	return ERR_PTR(ret);
++	pm_runtime_use_autosuspend(dp->dev);
++	pm_runtime_set_autosuspend_delay(dp->dev, 100);
++	ret = devm_pm_runtime_enable(dp->dev);
++	if (ret)
++		return ERR_PTR(ret);
++
++	return dp;
+ }
+ EXPORT_SYMBOL_GPL(analogix_dp_probe);
+ 
+@@ -1696,25 +1699,12 @@ int analogix_dp_bind(struct analogix_dp_device *dp, struct drm_device *drm_dev)
+ 	dp->drm_dev = drm_dev;
+ 	dp->encoder = dp->plat_data->encoder;
+ 
+-	if (IS_ENABLED(CONFIG_PM)) {
+-		pm_runtime_use_autosuspend(dp->dev);
+-		pm_runtime_set_autosuspend_delay(dp->dev, 100);
+-		pm_runtime_enable(dp->dev);
+-	} else {
+-		ret = analogix_dp_resume(dp);
+-		if (ret)
+-			return ret;
+-	}
+-
+-	dp->aux.name = "DP-AUX";
+-	dp->aux.transfer = analogix_dpaux_transfer;
+-	dp->aux.dev = dp->dev;
+ 	dp->aux.drm_dev = drm_dev;
+ 
+ 	ret = drm_dp_aux_register(&dp->aux);
+ 	if (ret) {
+ 		DRM_ERROR("failed to register AUX (%d)\n", ret);
+-		goto err_disable_pm_runtime;
++		return ret;
+ 	}
+ 
+ 	ret = analogix_dp_create_bridge(drm_dev, dp);
+@@ -1727,13 +1717,6 @@ int analogix_dp_bind(struct analogix_dp_device *dp, struct drm_device *drm_dev)
+ 
+ err_unregister_aux:
+ 	drm_dp_aux_unregister(&dp->aux);
+-err_disable_pm_runtime:
+-	if (IS_ENABLED(CONFIG_PM)) {
+-		pm_runtime_dont_use_autosuspend(dp->dev);
+-		pm_runtime_disable(dp->dev);
+-	} else {
+-		analogix_dp_suspend(dp);
+-	}
+ 
+ 	return ret;
+ }
+@@ -1750,13 +1733,6 @@ void analogix_dp_unbind(struct analogix_dp_device *dp)
+ 	}
+ 
+ 	drm_dp_aux_unregister(&dp->aux);
+-
+-	if (IS_ENABLED(CONFIG_PM)) {
+-		pm_runtime_dont_use_autosuspend(dp->dev);
+-		pm_runtime_disable(dp->dev);
+-	} else {
+-		analogix_dp_suspend(dp);
+-	}
+ }
+ EXPORT_SYMBOL_GPL(analogix_dp_unbind);
+ 
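The analogix_dp rework above drops the manual err_disable_clk / pm_runtime_disable unwinding and leans on device-managed registration (devm_pm_runtime_enable and friends), so each failure path can simply return. A toy model of why devm makes that safe, with the action list reduced to a fixed array; everything here is a simplified sketch, not the kernel's devres implementation:

#include <stdio.h>

/* Toy model of devm: cleanups registered here run automatically when the
 * device goes away, so probe() error paths can just return. */
typedef void (*cleanup_fn)(void);
static cleanup_fn cleanups[8];
static int n_cleanups;

static int devm_add(cleanup_fn fn)
{
	if (n_cleanups >= 8)
		return -1;
	cleanups[n_cleanups++] = fn;
	return 0;
}

static void device_release(void)
{
	while (n_cleanups > 0)
		cleanups[--n_cleanups]();	/* reverse order, like devm */
}

static void disable_runtime_pm(void) { puts("runtime PM disabled"); }

static int probe(int fail_irq)
{
	if (devm_add(disable_runtime_pm))
		return -1;
	if (fail_irq)
		return -1;	/* no goto needed: devm unwinds for us */
	puts("probe ok");
	return 0;
}

int main(void)
{
	probe(1);		/* error path returns directly */
	device_release();	/* devm still runs the cleanup */
	return 0;
}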
+diff --git a/drivers/gpu/drm/bridge/lontium-lt9611uxc.c b/drivers/gpu/drm/bridge/lontium-lt9611uxc.c
+index f4c3ff1fdc6923..f6e714feeea54c 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt9611uxc.c
++++ b/drivers/gpu/drm/bridge/lontium-lt9611uxc.c
+@@ -880,7 +880,11 @@ static int lt9611uxc_probe(struct i2c_client *client)
+ 		}
+ 	}
+ 
+-	return lt9611uxc_audio_init(dev, lt9611uxc);
++	ret = lt9611uxc_audio_init(dev, lt9611uxc);
++	if (ret)
++		goto err_remove_bridge;
++
++	return 0;
+ 
+ err_remove_bridge:
+ 	free_irq(client->irq, lt9611uxc);
+diff --git a/drivers/gpu/drm/ci/gitlab-ci.yml b/drivers/gpu/drm/ci/gitlab-ci.yml
+index f04aabe8327c6b..b06b9e7d3d09bf 100644
+--- a/drivers/gpu/drm/ci/gitlab-ci.yml
++++ b/drivers/gpu/drm/ci/gitlab-ci.yml
+@@ -143,11 +143,11 @@ stages:
+     # Pre-merge pipeline
+     - if: &is-pre-merge $CI_PIPELINE_SOURCE == "merge_request_event"
+     # Push to a branch on a fork
+-    - if: &is-fork-push $CI_PROJECT_NAMESPACE != "mesa" && $CI_PIPELINE_SOURCE == "push"
++    - if: &is-fork-push $CI_PIPELINE_SOURCE == "push"
+     # nightly pipeline
+     - if: &is-scheduled-pipeline $CI_PIPELINE_SOURCE == "schedule"
+     # pipeline for direct pushes that bypassed the CI
+-    - if: &is-direct-push $CI_PROJECT_NAMESPACE == "mesa" && $CI_PIPELINE_SOURCE == "push" && $GITLAB_USER_LOGIN != "marge-bot"
++    - if: &is-direct-push $CI_PIPELINE_SOURCE == "push" && $GITLAB_USER_LOGIN != "marge-bot"
+ 
+ 
+ # Rules applied to every job in the pipeline
+@@ -170,26 +170,15 @@ stages:
+     - !reference [.disable-farm-mr-rules, rules]
+     # Never run immediately after merging, as we just ran everything
+     - !reference [.never-post-merge-rules, rules]
+-    # Build everything in merge pipelines, if any files affecting the pipeline
+-    # were changed
++    # Build everything in merge pipelines
+     - if: *is-merge-attempt
+-      changes: &all_paths
+-      - drivers/gpu/drm/ci/**/*
+       when: on_success
+     # Same as above, but for pre-merge pipelines
+     - if: *is-pre-merge
+-      changes:
+-        *all_paths
+       when: manual
+-    # Skip everything for pre-merge and merge pipelines which don't change
+-    # anything in the build
+-    - if: *is-merge-attempt
+-      when: never
+-    - if: *is-pre-merge
+-      when: never
+     # Build everything after someone bypassed the CI
+     - if: *is-direct-push
+-      when: on_success
++      when: manual
+     # Build everything in scheduled pipelines
+     - if: *is-scheduled-pipeline
+       when: on_success
+diff --git a/drivers/gpu/drm/display/drm_hdmi_audio_helper.c b/drivers/gpu/drm/display/drm_hdmi_audio_helper.c
+index 05afc9f0bdd6b6..ae8a0cf595fc6f 100644
+--- a/drivers/gpu/drm/display/drm_hdmi_audio_helper.c
++++ b/drivers/gpu/drm/display/drm_hdmi_audio_helper.c
+@@ -103,7 +103,8 @@ static int drm_connector_hdmi_audio_hook_plugged_cb(struct device *dev,
+ 	connector->hdmi_audio.plugged_cb = fn;
+ 	connector->hdmi_audio.plugged_cb_dev = codec_dev;
+ 
+-	fn(codec_dev, connector->hdmi_audio.last_state);
++	if (fn)
++		fn(codec_dev, connector->hdmi_audio.last_state);
+ 
+ 	mutex_unlock(&connector->hdmi_audio.lock);
+ 
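With this change the hook helper can be called with fn == NULL (to unregister the plugged callback), so the immediate invocation has to be guarded. A minimal sketch of the guarded-callback shape, with hypothetical names in place of the connector internals:

#include <stdio.h>
#include <stdbool.h>

typedef void (*plugged_cb)(void *dev, bool state);

static plugged_cb cb;
static void *cb_dev;
static bool last_state = true;

static void codec_plugged(void *dev, bool state)
{
	(void)dev;
	printf("codec notified: %s\n", state ? "plugged" : "unplugged");
}

/* Passing fn == NULL unregisters the callback; only call it if set. */
static void hook_plugged_cb(void *dev, plugged_cb fn)
{
	cb = fn;
	cb_dev = dev;
	if (fn)
		fn(cb_dev, last_state);
}

int main(void)
{
	hook_plugged_cb(NULL, codec_plugged);	/* registers and fires once */
	hook_plugged_cb(NULL, NULL);		/* unregisters; must not call */
	return 0;
}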
+diff --git a/drivers/gpu/drm/drm_panic_qr.rs b/drivers/gpu/drm/drm_panic_qr.rs
+index f2a99681b99858..de2ddf5dbbd3f1 100644
+--- a/drivers/gpu/drm/drm_panic_qr.rs
++++ b/drivers/gpu/drm/drm_panic_qr.rs
+@@ -366,8 +366,48 @@ fn iter(&self) -> SegmentIterator<'_> {
+         SegmentIterator {
+             segment: self,
+             offset: 0,
+-            carry: 0,
+-            carry_len: 0,
++            decfifo: Default::default(),
++        }
++    }
++}
++
++/// Max fifo size is 17 (max push) + 2 (max remaining)
++const MAX_FIFO_SIZE: usize = 19;
++
++/// A simple Decimal digit FIFO
++#[derive(Default)]
++struct DecFifo {
++    decimals: [u8; MAX_FIFO_SIZE],
++    len: usize,
++}
++
++impl DecFifo {
++    fn push(&mut self, data: u64, len: usize) {
++        let mut chunk = data;
++        for i in (0..self.len).rev() {
++            self.decimals[i + len] = self.decimals[i];
++        }
++        for i in 0..len {
++            self.decimals[i] = (chunk % 10) as u8;
++            chunk /= 10;
++        }
++        self.len += len;
++    }
++
++    /// Pop 3 decimal digits from the FIFO
++    fn pop3(&mut self) -> Option<(u16, usize)> {
++        if self.len == 0 {
++            None
++        } else {
++            let poplen = 3.min(self.len);
++            self.len -= poplen;
++            let mut out = 0;
++            let mut exp = 1;
++            for i in 0..poplen {
++                out += self.decimals[self.len + i] as u16 * exp;
++                exp *= 10;
++            }
++            Some((out, NUM_CHARS_BITS[poplen]))
+         }
+     }
+ }
+@@ -375,8 +415,7 @@ fn iter(&self) -> SegmentIterator<'_> {
+ struct SegmentIterator<'a> {
+     segment: &'a Segment<'a>,
+     offset: usize,
+-    carry: u64,
+-    carry_len: usize,
++    decfifo: DecFifo,
+ }
+ 
+ impl Iterator for SegmentIterator<'_> {
+@@ -394,31 +433,17 @@ fn next(&mut self) -> Option<Self::Item> {
+                 }
+             }
+             Segment::Numeric(data) => {
+-                if self.carry_len < 3 && self.offset < data.len() {
+-                    // If there are less than 3 decimal digits in the carry,
+-                    // take the next 7 bytes of input, and add them to the carry.
++                if self.decfifo.len < 3 && self.offset < data.len() {
++                    // If there are less than 3 decimal digits in the fifo,
++                    // take the next 7 bytes of input, and push them to the fifo.
+                     let mut buf = [0u8; 8];
+                     let len = 7.min(data.len() - self.offset);
+                     buf[..len].copy_from_slice(&data[self.offset..self.offset + len]);
+                     let chunk = u64::from_le_bytes(buf);
+-                    let pow = u64::pow(10, BYTES_TO_DIGITS[len] as u32);
+-                    self.carry = chunk + self.carry * pow;
++                    self.decfifo.push(chunk, BYTES_TO_DIGITS[len]);
+                     self.offset += len;
+-                    self.carry_len += BYTES_TO_DIGITS[len];
+-                }
+-                match self.carry_len {
+-                    0 => None,
+-                    len => {
+-                        // take the next 3 decimal digits of the carry
+-                        // and return 10bits of numeric data.
+-                        let out_len = 3.min(len);
+-                        self.carry_len -= out_len;
+-                        let pow = u64::pow(10, self.carry_len as u32);
+-                        let out = (self.carry / pow) as u16;
+-                        self.carry = self.carry % pow;
+-                        Some((out, NUM_CHARS_BITS[out_len]))
+-                    }
+                 }
++                self.decfifo.pop3()
+             }
+         }
+     }
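The drm_panic_qr.rs rework replaces the single "carry" integer with an explicit decimal-digit FIFO: the old code multiplied the carry by a power of ten before mixing in the next 7-byte chunk, which can overflow u64, whereas the FIFO keeps each digit separately. For readers following along in C rather than Rust, a rough equivalent of push/pop3 (digit order and buffer size mirror the Rust version above; the surrounding names are illustrative):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define MAX_FIFO_SIZE 19	/* 17 (max push) + 2 (max remaining) */

struct dec_fifo {
	uint8_t digits[MAX_FIFO_SIZE];	/* least significant digit first */
	size_t len;
};

/* Push `len` decimal digits of `data`, keeping older digits on top. */
static void dec_fifo_push(struct dec_fifo *f, uint64_t data, size_t len)
{
	for (size_t i = f->len; i-- > 0;)
		f->digits[i + len] = f->digits[i];
	for (size_t i = 0; i < len; i++) {
		f->digits[i] = data % 10;
		data /= 10;
	}
	f->len += len;
}

/* Pop up to 3 of the oldest (most significant) digits; returns the count. */
static size_t dec_fifo_pop3(struct dec_fifo *f, uint16_t *out)
{
	if (f->len == 0)
		return 0;
	size_t poplen = f->len < 3 ? f->len : 3;
	f->len -= poplen;
	uint16_t v = 0, exp = 1;
	for (size_t i = 0; i < poplen; i++) {
		v += f->digits[f->len + i] * exp;
		exp *= 10;
	}
	*out = v;
	return poplen;
}

int main(void)
{
	struct dec_fifo f = { 0 };
	uint16_t v;

	dec_fifo_push(&f, 1234567, 7);
	while (dec_fifo_pop3(&f, &v))
		printf("%u ", (unsigned)v);	/* prints: 123 456 7 */
	printf("\n");
	return 0;
}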
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 392c3653d0d738..cd8f728d5fddc4 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -2523,6 +2523,7 @@ intel_dp_dsc_compute_pipe_bpp_limits(struct intel_dp *intel_dp,
+ 
+ bool
+ intel_dp_compute_config_limits(struct intel_dp *intel_dp,
++			       struct intel_connector *connector,
+ 			       struct intel_crtc_state *crtc_state,
+ 			       bool respect_downstream_limits,
+ 			       bool dsc,
+@@ -2576,7 +2577,7 @@ intel_dp_compute_config_limits(struct intel_dp *intel_dp,
+ 	intel_dp_test_compute_config(intel_dp, crtc_state, limits);
+ 
+ 	return intel_dp_compute_config_link_bpp_limits(intel_dp,
+-						       intel_dp->attached_connector,
++						       connector,
+ 						       crtc_state,
+ 						       dsc,
+ 						       limits);
+@@ -2637,7 +2638,7 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
+ 	joiner_needs_dsc = intel_dp_joiner_needs_dsc(display, num_joined_pipes);
+ 
+ 	dsc_needed = joiner_needs_dsc || intel_dp->force_dsc_en ||
+-		     !intel_dp_compute_config_limits(intel_dp, pipe_config,
++		     !intel_dp_compute_config_limits(intel_dp, connector, pipe_config,
+ 						     respect_downstream_limits,
+ 						     false,
+ 						     &limits);
+@@ -2671,7 +2672,7 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
+ 			    str_yes_no(ret), str_yes_no(joiner_needs_dsc),
+ 			    str_yes_no(intel_dp->force_dsc_en));
+ 
+-		if (!intel_dp_compute_config_limits(intel_dp, pipe_config,
++		if (!intel_dp_compute_config_limits(intel_dp, connector, pipe_config,
+ 						    respect_downstream_limits,
+ 						    true,
+ 						    &limits))
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
+index 9189db4c25946a..98f90955fdb1db 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.h
++++ b/drivers/gpu/drm/i915/display/intel_dp.h
+@@ -194,6 +194,7 @@ void intel_dp_wait_source_oui(struct intel_dp *intel_dp);
+ int intel_dp_output_bpp(enum intel_output_format output_format, int bpp);
+ 
+ bool intel_dp_compute_config_limits(struct intel_dp *intel_dp,
++				    struct intel_connector *connector,
+ 				    struct intel_crtc_state *crtc_state,
+ 				    bool respect_downstream_limits,
+ 				    bool dsc,
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+index 6dc2d31ccb5a53..fe685f098ba9a2 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+@@ -590,12 +590,13 @@ adjust_limits_for_dsc_hblank_expansion_quirk(struct intel_dp *intel_dp,
+ 
+ static bool
+ mst_stream_compute_config_limits(struct intel_dp *intel_dp,
+-				 const struct intel_connector *connector,
++				 struct intel_connector *connector,
+ 				 struct intel_crtc_state *crtc_state,
+ 				 bool dsc,
+ 				 struct link_config_limits *limits)
+ {
+-	if (!intel_dp_compute_config_limits(intel_dp, crtc_state, false, dsc,
++	if (!intel_dp_compute_config_limits(intel_dp, connector,
++					    crtc_state, false, dsc,
+ 					    limits))
+ 		return false;
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_psr_regs.h b/drivers/gpu/drm/i915/display/intel_psr_regs.h
+index 795e6b9cc575c8..248136456048e3 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr_regs.h
++++ b/drivers/gpu/drm/i915/display/intel_psr_regs.h
+@@ -325,8 +325,8 @@
+ #define  PORT_ALPM_LFPS_CTL_LFPS_HALF_CYCLE_DURATION_MASK	REG_GENMASK(20, 16)
+ #define  PORT_ALPM_LFPS_CTL_LFPS_HALF_CYCLE_DURATION(val)	REG_FIELD_PREP(PORT_ALPM_LFPS_CTL_LFPS_HALF_CYCLE_DURATION_MASK, val)
+ #define  PORT_ALPM_LFPS_CTL_FIRST_LFPS_HALF_CYCLE_DURATION_MASK	REG_GENMASK(12, 8)
+-#define  PORT_ALPM_LFPS_CTL_FIRST_LFPS_HALF_CYCLE_DURATION(val)	REG_FIELD_PREP(PORT_ALPM_LFPS_CTL_LFPS_HALF_CYCLE_DURATION_MASK, val)
++#define  PORT_ALPM_LFPS_CTL_FIRST_LFPS_HALF_CYCLE_DURATION(val)	REG_FIELD_PREP(PORT_ALPM_LFPS_CTL_FIRST_LFPS_HALF_CYCLE_DURATION_MASK, val)
+ #define  PORT_ALPM_LFPS_CTL_LAST_LFPS_HALF_CYCLE_DURATION_MASK	REG_GENMASK(4, 0)
+-#define  PORT_ALPM_LFPS_CTL_LAST_LFPS_HALF_CYCLE_DURATION(val)	REG_FIELD_PREP(PORT_ALPM_LFPS_CTL_LFPS_HALF_CYCLE_DURATION_MASK, val)
++#define  PORT_ALPM_LFPS_CTL_LAST_LFPS_HALF_CYCLE_DURATION(val)	REG_FIELD_PREP(PORT_ALPM_LFPS_CTL_LAST_LFPS_HALF_CYCLE_DURATION_MASK, val)
+ 
+ #endif /* __INTEL_PSR_REGS_H__ */
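The PSR register fix above is a classic copy-paste bug: the FIRST/LAST duration macros shifted their values into the generic half-cycle field because REG_FIELD_PREP() was given the wrong mask. A self-contained illustration of why the mask argument matters, with GENMASK()/FIELD_PREP() re-implemented locally so the snippet builds anywhere (uses the GCC/Clang __builtin_ctz; the kernel macros differ in detail):

#include <stdio.h>
#include <stdint.h>

/* Local stand-ins for the kernel's REG_GENMASK()/REG_FIELD_PREP(). */
#define GENMASK(h, l)	(((~0u) >> (31 - (h))) & ~((1u << (l)) - 1u))
#define FIELD_PREP(mask, val) \
	(((uint32_t)(val) << __builtin_ctz(mask)) & (mask))

#define HALF_CYCLE_MASK		GENMASK(20, 16)
#define FIRST_HALF_CYCLE_MASK	GENMASK(12, 8)

int main(void)
{
	/* Buggy: a value meant for bits 12:8 lands in bits 20:16. */
	printf("wrong mask: 0x%08x\n", FIELD_PREP(HALF_CYCLE_MASK, 5));
	/* Fixed: same value, correct field. */
	printf("right mask: 0x%08x\n", FIELD_PREP(FIRST_HALF_CYCLE_MASK, 5));
	return 0;
}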
+diff --git a/drivers/gpu/drm/i915/display/intel_snps_hdmi_pll.c b/drivers/gpu/drm/i915/display/intel_snps_hdmi_pll.c
+index c6321dafef4f3c..74bb3bedf30f5d 100644
+--- a/drivers/gpu/drm/i915/display/intel_snps_hdmi_pll.c
++++ b/drivers/gpu/drm/i915/display/intel_snps_hdmi_pll.c
+@@ -41,12 +41,12 @@ static s64 interp(s64 x, s64 x1, s64 x2, s64 y1, s64 y2)
+ {
+ 	s64 dydx;
+ 
+-	dydx = DIV_ROUND_UP_ULL((y2 - y1) * 100000, (x2 - x1));
++	dydx = DIV64_U64_ROUND_UP((y2 - y1) * 100000, (x2 - x1));
+ 
+-	return (y1 + DIV_ROUND_UP_ULL(dydx * (x - x1), 100000));
++	return (y1 + DIV64_U64_ROUND_UP(dydx * (x - x1), 100000));
+ }
+ 
+-static void get_ana_cp_int_prop(u32 vco_clk,
++static void get_ana_cp_int_prop(u64 vco_clk,
+ 				u32 refclk_postscalar,
+ 				int mpll_ana_v2i,
+ 				int c, int a,
+@@ -115,16 +115,16 @@ static void get_ana_cp_int_prop(u32 vco_clk,
+ 								      CURVE0_MULTIPLIER));
+ 
+ 	scaled_interpolated_sqrt =
+-			int_sqrt(DIV_ROUND_UP_ULL(interpolated_product, vco_div_refclk_float) *
++			int_sqrt(DIV64_U64_ROUND_UP(interpolated_product, vco_div_refclk_float) *
+ 			DIV_ROUND_DOWN_ULL(1000000000000ULL, 55));
+ 
+ 	/* Scale vco_div_refclk for ana_cp_int */
+ 	scaled_vco_div_refclk2 = DIV_ROUND_UP_ULL(vco_div_refclk_float, 1000000);
+-	adjusted_vco_clk2 = 1460281 * DIV_ROUND_UP_ULL(scaled_interpolated_sqrt *
++	adjusted_vco_clk2 = 1460281 * DIV64_U64_ROUND_UP(scaled_interpolated_sqrt *
+ 						       scaled_vco_div_refclk2,
+ 						       curve_1_interpolated);
+ 
+-	*ana_cp_prop = DIV_ROUND_UP_ULL(adjusted_vco_clk2, curve_2_scaled2);
++	*ana_cp_prop = DIV64_U64_ROUND_UP(adjusted_vco_clk2, curve_2_scaled2);
+ 	*ana_cp_prop = max(1, min(*ana_cp_prop, 127));
+ }
+ 
+@@ -165,10 +165,10 @@ static void compute_hdmi_tmds_pll(u64 pixel_clock, u32 refclk,
+ 	/* Select appropriate v2i point */
+ 	if (datarate <= INTEL_SNPS_PHY_HDMI_9999MHZ) {
+ 		mpll_ana_v2i = 2;
+-		tx_clk_div = ilog2(DIV_ROUND_DOWN_ULL(INTEL_SNPS_PHY_HDMI_9999MHZ, datarate));
++		tx_clk_div = ilog2(div64_u64(INTEL_SNPS_PHY_HDMI_9999MHZ, datarate));
+ 	} else {
+ 		mpll_ana_v2i = 3;
+-		tx_clk_div = ilog2(DIV_ROUND_DOWN_ULL(INTEL_SNPS_PHY_HDMI_16GHZ, datarate));
++		tx_clk_div = ilog2(div64_u64(INTEL_SNPS_PHY_HDMI_16GHZ, datarate));
+ 	}
+ 	vco_clk = (datarate << tx_clk_div) >> 1;
+ 
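The switch from DIV_ROUND_UP_ULL()/DIV_ROUND_DOWN_ULL() to DIV64_U64_ROUND_UP()/div64_u64() in the SNPS PLL code matters because the _ULL helpers take only a 32-bit divisor; once vco_clk and the interpolation terms are full u64 values, a 64-by-64 helper is needed or the divisor gets truncated. A hedged userspace sketch of the failure mode (native division stands in for the kernel helpers):

#include <stdio.h>
#include <stdint.h>

/* Rough models of the kernel helpers: the _ULL variant's u32 divisor
 * parameter is the bug class the patch avoids. */
static uint64_t div_round_up_ull(uint64_t n, uint32_t d)
{
	return (n + d - 1) / d;
}

static uint64_t div64_u64_round_up(uint64_t n, uint64_t d)
{
	return (n + d - 1) / d;
}

int main(void)
{
	uint64_t n = 10ULL * 1000 * 1000 * 1000;	/* 10 GHz-scale value */
	uint64_t d = 5ULL * 1000 * 1000 * 1000;		/* divisor > 32 bits */

	/* Passing d through a u32 parameter silently truncates it. */
	printf("truncated divisor: %llu\n",
	       (unsigned long long)div_round_up_ull(n, (uint32_t)d));
	printf("full 64-bit path:  %llu\n",		/* expect 2 */
	       (unsigned long long)div64_u64_round_up(n, d));
	return 0;
}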
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+index f8cb7c630d5b83..127316d2c8aa99 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+@@ -633,7 +633,7 @@ static int guc_submission_send_busy_loop(struct intel_guc *guc,
+ 		atomic_inc(&guc->outstanding_submission_g2h);
+ 
+ 	ret = intel_guc_send_busy_loop(guc, action, len, g2h_len_dw, loop);
+-	if (ret)
++	if (ret && g2h_len_dw)
+ 		atomic_dec(&guc->outstanding_submission_g2h);
+ 
+ 	return ret;
+@@ -3443,18 +3443,29 @@ static inline int guc_lrc_desc_unpin(struct intel_context *ce)
+ 	 * GuC is active, lets destroy this context, but at this point we can still be racing
+ 	 * with suspend, so we undo everything if the H2G fails in deregister_context so
+ 	 * that GuC reset will find this context during clean up.
++	 *
++	 * There is a race condition where the reset code could have altered
++	 * this context's state and done a wakeref put before we try to
++	 * deregister it here. So check if the context is still set to be
++	 * destroyed before undoing earlier changes, to avoid two wakeref puts
++	 * on the same context.
+ 	 */
+ 	ret = deregister_context(ce, ce->guc_id.id);
+ 	if (ret) {
++		bool pending_destroyed;
+ 		spin_lock_irqsave(&ce->guc_state.lock, flags);
+-		set_context_registered(ce);
+-		clr_context_destroyed(ce);
++		pending_destroyed = context_destroyed(ce);
++		if (pending_destroyed) {
++			set_context_registered(ce);
++			clr_context_destroyed(ce);
++		}
+ 		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+ 		/*
+ 		 * As gt-pm is awake at function entry, intel_wakeref_put_async merely decrements
+ 		 * the wakeref immediately but per function spec usage call this after unlock.
+ 		 */
+-		intel_wakeref_put_async(&gt->wakeref);
++		if (pending_destroyed)
++			intel_wakeref_put_async(&gt->wakeref);
+ 	}
+ 
+ 	return ret;
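The GuC fix above re-reads context_destroyed() under guc_state.lock before undoing the deregistration, so that if the reset path already flipped the state (and dropped its wakeref), this path does not issue a second wakeref put on the same context. The shape of that recheck-under-lock pattern, with pthreads standing in for the kernel spinlock and all names illustrative:

#include <stdio.h>
#include <stdbool.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool destroyed = true;	/* context state, written by two paths */
static int wakerefs = 1;

static void wakeref_put(void) { wakerefs--; }

/* Error path: only undo (and drop the wakeref) if *we* still own the
 * destroyed state; a concurrent reset may have cleared it already. */
static void deregister_failed(void)
{
	bool pending;

	pthread_mutex_lock(&lock);
	pending = destroyed;
	if (pending)
		destroyed = false;	/* undo our earlier change */
	pthread_mutex_unlock(&lock);

	if (pending)
		wakeref_put();		/* at most one put per owner */
}

int main(void)
{
	deregister_failed();
	deregister_failed();	/* second caller sees no pending state */
	printf("wakerefs = %d (never goes negative)\n", wakerefs);
	return 0;
}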
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index 74158b9d65035b..7c0c12dde48859 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -470,7 +470,7 @@ static int mtk_drm_kms_init(struct drm_device *drm)
+ 
+ 	ret = drmm_mode_config_init(drm);
+ 	if (ret)
+-		goto put_mutex_dev;
++		return ret;
+ 
+ 	drm->mode_config.min_width = 64;
+ 	drm->mode_config.min_height = 64;
+@@ -488,8 +488,11 @@ static int mtk_drm_kms_init(struct drm_device *drm)
+ 	for (i = 0; i < private->data->mmsys_dev_num; i++) {
+ 		drm->dev_private = private->all_drm_private[i];
+ 		ret = component_bind_all(private->all_drm_private[i]->dev, drm);
+-		if (ret)
+-			goto put_mutex_dev;
++		if (ret) {
++			while (--i >= 0)
++				component_unbind_all(private->all_drm_private[i]->dev, drm);
++			return ret;
++		}
+ 	}
+ 
+ 	/*
+@@ -582,9 +585,6 @@ static int mtk_drm_kms_init(struct drm_device *drm)
+ err_component_unbind:
+ 	for (i = 0; i < private->data->mmsys_dev_num; i++)
+ 		component_unbind_all(private->all_drm_private[i]->dev, drm);
+-put_mutex_dev:
+-	for (i = 0; i < private->data->mmsys_dev_num; i++)
+-		put_device(private->all_drm_private[i]->mutex_dev);
+ 
+ 	return ret;
+ }
+@@ -655,8 +655,10 @@ static int mtk_drm_bind(struct device *dev)
+ 		return 0;
+ 
+ 	drm = drm_dev_alloc(&mtk_drm_driver, dev);
+-	if (IS_ERR(drm))
+-		return PTR_ERR(drm);
++	if (IS_ERR(drm)) {
++		ret = PTR_ERR(drm);
++		goto err_put_dev;
++	}
+ 
+ 	private->drm_master = true;
+ 	drm->dev_private = private;
+@@ -682,18 +684,31 @@ static int mtk_drm_bind(struct device *dev)
+ 	drm_dev_put(drm);
+ 	for (i = 0; i < private->data->mmsys_dev_num; i++)
+ 		private->all_drm_private[i]->drm = NULL;
++err_put_dev:
++	for (i = 0; i < private->data->mmsys_dev_num; i++) {
++		/* For device_find_child in mtk_drm_get_all_priv() */
++		put_device(private->all_drm_private[i]->dev);
++	}
++	put_device(private->mutex_dev);
+ 	return ret;
+ }
+ 
+ static void mtk_drm_unbind(struct device *dev)
+ {
+ 	struct mtk_drm_private *private = dev_get_drvdata(dev);
++	int i;
+ 
+ 	/* for multi mmsys dev, unregister drm dev in mmsys master */
+ 	if (private->drm_master) {
+ 		drm_dev_unregister(private->drm);
+ 		mtk_drm_kms_deinit(private->drm);
+ 		drm_dev_put(private->drm);
++
++		for (i = 0; i < private->data->mmsys_dev_num; i++) {
++			/* For device_find_child in mtk_drm_get_all_priv() */
++			put_device(private->all_drm_private[i]->dev);
++		}
++		put_device(private->mutex_dev);
+ 	}
+ 	private->mtk_drm_bound = false;
+ 	private->drm_master = false;
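Two cleanup idioms show up in the mtk_drm hunks: when component_bind_all() fails partway through the loop, only the instances that were already bound are unwound with a `while (--i >= 0)` reverse walk, and each new put_device() mirrors an earlier reference taken in mtk_drm_get_all_priv(). A minimal sketch of the partial-unwind loop (bind()/unbind() are hypothetical placeholders):

#include <stdio.h>

#define N 4

static int bind(int i)    { printf("bind   %d\n", i); return i == 2 ? -1 : 0; }
static void unbind(int i) { printf("unbind %d\n", i); }

static int bind_all(void)
{
	int i;

	for (i = 0; i < N; i++) {
		if (bind(i)) {
			/* Unwind only the entries bound so far, in reverse. */
			while (--i >= 0)
				unbind(i);
			return -1;
		}
	}
	return 0;
}

int main(void)
{
	return bind_all() ? 1 : 0;	/* binds 0,1; fails at 2; unbinds 1,0 */
}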
+diff --git a/drivers/gpu/drm/meson/meson_encoder_hdmi.c b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+index c08fa93e50a30e..2bccda1e52a17d 100644
+--- a/drivers/gpu/drm/meson/meson_encoder_hdmi.c
++++ b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+@@ -108,7 +108,7 @@ static void meson_encoder_hdmi_set_vclk(struct meson_encoder_hdmi *encoder_hdmi,
+ 		venc_freq /= 2;
+ 
+ 	dev_dbg(priv->dev,
+-		"vclk:%lluHz phy=%lluHz venc=%lluHz hdmi=%lluHz enci=%d\n",
++		"phy:%lluHz vclk=%lluHz venc=%lluHz hdmi=%lluHz enci=%d\n",
+ 		phy_freq, vclk_freq, venc_freq, hdmi_freq,
+ 		priv->venc.hdmi_use_enci);
+ 
+diff --git a/drivers/gpu/drm/meson/meson_vclk.c b/drivers/gpu/drm/meson/meson_vclk.c
+index 3325580d885d0a..dfe0c28a0f054c 100644
+--- a/drivers/gpu/drm/meson/meson_vclk.c
++++ b/drivers/gpu/drm/meson/meson_vclk.c
+@@ -110,10 +110,7 @@
+ #define HDMI_PLL_LOCK		BIT(31)
+ #define HDMI_PLL_LOCK_G12A	(3 << 30)
+ 
+-#define PIXEL_FREQ_1000_1001(_freq)	\
+-	DIV_ROUND_CLOSEST_ULL((_freq) * 1000ULL, 1001ULL)
+-#define PHY_FREQ_1000_1001(_freq)	\
+-	(PIXEL_FREQ_1000_1001(DIV_ROUND_DOWN_ULL(_freq, 10ULL)) * 10)
++#define FREQ_1000_1001(_freq)	DIV_ROUND_CLOSEST_ULL((_freq) * 1000ULL, 1001ULL)
+ 
+ /* VID PLL Dividers */
+ enum {
+@@ -772,6 +769,36 @@ static void meson_hdmi_pll_generic_set(struct meson_drm *priv,
+ 		  pll_freq);
+ }
+ 
++static bool meson_vclk_freqs_are_matching_param(unsigned int idx,
++						unsigned long long phy_freq,
++						unsigned long long vclk_freq)
++{
++	DRM_DEBUG_DRIVER("i = %d vclk_freq = %lluHz alt = %lluHz\n",
++			 idx, params[idx].vclk_freq,
++			 FREQ_1000_1001(params[idx].vclk_freq));
++	DRM_DEBUG_DRIVER("i = %d phy_freq = %lluHz alt = %lluHz\n",
++			 idx, params[idx].phy_freq,
++			 FREQ_1000_1001(params[idx].phy_freq));
++
++	/* Match strict frequency */
++	if (phy_freq == params[idx].phy_freq &&
++	    vclk_freq == params[idx].vclk_freq)
++		return true;
++
++	/* Match 1000/1001 variant: vclk deviation has to be less than 1kHz
++	 * (drm EDID is defined in 1kHz steps, so everything smaller must be
++	 * rounding error) and the PHY freq deviation has to be less than
++	 * 10kHz (as the TMDS clock is 10 times the pixel clock, so anything
++	 * smaller must be rounding error as well).
++	 */
++	if (abs(vclk_freq - FREQ_1000_1001(params[idx].vclk_freq)) < 1000 &&
++	    abs(phy_freq - FREQ_1000_1001(params[idx].phy_freq)) < 10000)
++		return true;
++
++	/* no match */
++	return false;
++}
++
+ enum drm_mode_status
+ meson_vclk_vic_supported_freq(struct meson_drm *priv,
+ 			      unsigned long long phy_freq,
+@@ -790,19 +817,7 @@ meson_vclk_vic_supported_freq(struct meson_drm *priv,
+ 	}
+ 
+ 	for (i = 0 ; params[i].pixel_freq ; ++i) {
+-		DRM_DEBUG_DRIVER("i = %d pixel_freq = %lluHz alt = %lluHz\n",
+-				 i, params[i].pixel_freq,
+-				 PIXEL_FREQ_1000_1001(params[i].pixel_freq));
+-		DRM_DEBUG_DRIVER("i = %d phy_freq = %lluHz alt = %lluHz\n",
+-				 i, params[i].phy_freq,
+-				 PHY_FREQ_1000_1001(params[i].phy_freq));
+-		/* Match strict frequency */
+-		if (phy_freq == params[i].phy_freq &&
+-		    vclk_freq == params[i].vclk_freq)
+-			return MODE_OK;
+-		/* Match 1000/1001 variant */
+-		if (phy_freq == PHY_FREQ_1000_1001(params[i].phy_freq) &&
+-		    vclk_freq == PIXEL_FREQ_1000_1001(params[i].vclk_freq))
++		if (meson_vclk_freqs_are_matching_param(i, phy_freq, vclk_freq))
+ 			return MODE_OK;
+ 	}
+ 
+@@ -1075,10 +1090,8 @@ void meson_vclk_setup(struct meson_drm *priv, unsigned int target,
+ 	}
+ 
+ 	for (freq = 0 ; params[freq].pixel_freq ; ++freq) {
+-		if ((phy_freq == params[freq].phy_freq ||
+-		     phy_freq == PHY_FREQ_1000_1001(params[freq].phy_freq)) &&
+-		    (vclk_freq == params[freq].vclk_freq ||
+-		     vclk_freq == PIXEL_FREQ_1000_1001(params[freq].vclk_freq))) {
++		if (meson_vclk_freqs_are_matching_param(freq, phy_freq,
++							vclk_freq)) {
+ 			if (vclk_freq != params[freq].vclk_freq)
+ 				vic_alternate_clock = true;
+ 			else
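Instead of demanding an exact hit on the 1000/1001 "NTSC" clock variant, the new meson helper accepts anything within 1 kHz of the pixel clock (EDID granularity) and 10 kHz of the PHY clock (10x the pixel clock). A small standalone check of that arithmetic; the 4K p59.94 figures below are illustrative inputs, not values from the driver's tables:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Same rounding as the driver's FREQ_1000_1001() macro. */
static uint64_t freq_1000_1001(uint64_t freq)
{
	return (freq * 1000 + 500) / 1001;
}

static int freqs_match(uint64_t phy, uint64_t vclk,
		       uint64_t param_phy, uint64_t param_vclk)
{
	if (phy == param_phy && vclk == param_vclk)
		return 1;	/* strict match */
	/* 1000/1001 variant, within EDID/TMDS rounding error. */
	return llabs((long long)(vclk - freq_1000_1001(param_vclk))) < 1000 &&
	       llabs((long long)(phy - freq_1000_1001(param_phy))) < 10000;
}

int main(void)
{
	/* ~4K p59.94: 1000/1001 variant of a 594 MHz vclk / 5.94 GHz phy. */
	uint64_t vclk = 593407000, phy = 5934066000ULL;

	printf("match = %d\n",
	       freqs_match(phy, vclk, 5940000000ULL, 594000000ULL));
	return 0;
}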
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 242d02d48c0cd0..90991ba5a4ae10 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -655,7 +655,6 @@ static void a6xx_calc_ubwc_config(struct adreno_gpu *gpu)
+ 	if (adreno_is_7c3(gpu)) {
+ 		gpu->ubwc_config.highest_bank_bit = 14;
+ 		gpu->ubwc_config.amsbc = 1;
+-		gpu->ubwc_config.rgb565_predicator = 1;
+ 		gpu->ubwc_config.uavflagprd_inv = 2;
+ 		gpu->ubwc_config.macrotile_mode = 1;
+ 	}
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_14_msm8937.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_14_msm8937.h
+index ad60089f18ea6c..39027a21c6feec 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_14_msm8937.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_14_msm8937.h
+@@ -100,14 +100,12 @@ static const struct dpu_pingpong_cfg msm8937_pp[] = {
+ 	{
+ 		.name = "pingpong_0", .id = PINGPONG_0,
+ 		.base = 0x70000, .len = 0xd4,
+-		.features = PINGPONG_MSM8996_MASK,
+ 		.sblk = &msm8996_pp_sblk,
+ 		.intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
+ 		.intr_rdptr = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12),
+ 	}, {
+ 		.name = "pingpong_1", .id = PINGPONG_1,
+ 		.base = 0x70800, .len = 0xd4,
+-		.features = PINGPONG_MSM8996_MASK,
+ 		.sblk = &msm8996_pp_sblk,
+ 		.intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
+ 		.intr_rdptr = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13),
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_15_msm8917.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_15_msm8917.h
+index a1cf89a0a42d5f..8d1b43ea1663cf 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_15_msm8917.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_15_msm8917.h
+@@ -93,7 +93,6 @@ static const struct dpu_pingpong_cfg msm8917_pp[] = {
+ 	{
+ 		.name = "pingpong_0", .id = PINGPONG_0,
+ 		.base = 0x70000, .len = 0xd4,
+-		.features = PINGPONG_MSM8996_MASK,
+ 		.sblk = &msm8996_pp_sblk,
+ 		.intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
+ 		.intr_rdptr = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12),
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_16_msm8953.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_16_msm8953.h
+index eea9b80e2287a8..16c12499b24bb4 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_16_msm8953.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_16_msm8953.h
+@@ -100,14 +100,12 @@ static const struct dpu_pingpong_cfg msm8953_pp[] = {
+ 	{
+ 		.name = "pingpong_0", .id = PINGPONG_0,
+ 		.base = 0x70000, .len = 0xd4,
+-		.features = PINGPONG_MSM8996_MASK,
+ 		.sblk = &msm8996_pp_sblk,
+ 		.intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
+ 		.intr_rdptr = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12),
+ 	}, {
+ 		.name = "pingpong_1", .id = PINGPONG_1,
+ 		.base = 0x70800, .len = 0xd4,
+-		.features = PINGPONG_MSM8996_MASK,
+ 		.sblk = &msm8996_pp_sblk,
+ 		.intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
+ 		.intr_rdptr = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13),
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+index 979527d98fbcb1..8e23dbfeef3543 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+@@ -76,7 +76,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 	{
+ 		.name = "sspp_0", .id = SSPP_VIG0,
+ 		.base = 0x4000, .len = 0x1f0,
+-		.features = VIG_SDM845_MASK,
++		.features = VIG_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_vig_sblk_qseed3_1_4,
+ 		.xin_id = 0,
+ 		.type = SSPP_TYPE_VIG,
+@@ -84,7 +84,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 	}, {
+ 		.name = "sspp_1", .id = SSPP_VIG1,
+ 		.base = 0x6000, .len = 0x1f0,
+-		.features = VIG_SDM845_MASK,
++		.features = VIG_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_vig_sblk_qseed3_1_4,
+ 		.xin_id = 4,
+ 		.type = SSPP_TYPE_VIG,
+@@ -92,7 +92,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 	}, {
+ 		.name = "sspp_2", .id = SSPP_VIG2,
+ 		.base = 0x8000, .len = 0x1f0,
+-		.features = VIG_SDM845_MASK,
++		.features = VIG_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_vig_sblk_qseed3_1_4,
+ 		.xin_id = 8,
+ 		.type = SSPP_TYPE_VIG,
+@@ -100,7 +100,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 	}, {
+ 		.name = "sspp_3", .id = SSPP_VIG3,
+ 		.base = 0xa000, .len = 0x1f0,
+-		.features = VIG_SDM845_MASK,
++		.features = VIG_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_vig_sblk_qseed3_1_4,
+ 		.xin_id = 12,
+ 		.type = SSPP_TYPE_VIG,
+@@ -108,7 +108,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 	}, {
+ 		.name = "sspp_8", .id = SSPP_DMA0,
+ 		.base = 0x24000, .len = 0x1f0,
+-		.features = DMA_SDM845_MASK,
++		.features = DMA_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_dma_sblk,
+ 		.xin_id = 1,
+ 		.type = SSPP_TYPE_DMA,
+@@ -116,7 +116,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 	}, {
+ 		.name = "sspp_9", .id = SSPP_DMA1,
+ 		.base = 0x26000, .len = 0x1f0,
+-		.features = DMA_SDM845_MASK,
++		.features = DMA_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_dma_sblk,
+ 		.xin_id = 5,
+ 		.type = SSPP_TYPE_DMA,
+@@ -124,7 +124,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 	}, {
+ 		.name = "sspp_10", .id = SSPP_DMA2,
+ 		.base = 0x28000, .len = 0x1f0,
+-		.features = DMA_CURSOR_SDM845_MASK,
++		.features = DMA_CURSOR_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_dma_sblk,
+ 		.xin_id = 9,
+ 		.type = SSPP_TYPE_DMA,
+@@ -132,7 +132,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 	}, {
+ 		.name = "sspp_11", .id = SSPP_DMA3,
+ 		.base = 0x2a000, .len = 0x1f0,
+-		.features = DMA_CURSOR_SDM845_MASK,
++		.features = DMA_CURSOR_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_dma_sblk,
+ 		.xin_id = 13,
+ 		.type = SSPP_TYPE_DMA,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+index d76b8992a6c18c..e736eb73a7e615 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+@@ -75,7 +75,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 	{
+ 		.name = "sspp_0", .id = SSPP_VIG0,
+ 		.base = 0x4000, .len = 0x1f0,
+-		.features = VIG_SDM845_MASK,
++		.features = VIG_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_vig_sblk_qseed3_1_4,
+ 		.xin_id = 0,
+ 		.type = SSPP_TYPE_VIG,
+@@ -83,7 +83,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 	}, {
+ 		.name = "sspp_1", .id = SSPP_VIG1,
+ 		.base = 0x6000, .len = 0x1f0,
+-		.features = VIG_SDM845_MASK,
++		.features = VIG_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_vig_sblk_qseed3_1_4,
+ 		.xin_id = 4,
+ 		.type = SSPP_TYPE_VIG,
+@@ -91,7 +91,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 	}, {
+ 		.name = "sspp_2", .id = SSPP_VIG2,
+ 		.base = 0x8000, .len = 0x1f0,
+-		.features = VIG_SDM845_MASK,
++		.features = VIG_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_vig_sblk_qseed3_1_4,
+ 		.xin_id = 8,
+ 		.type = SSPP_TYPE_VIG,
+@@ -99,7 +99,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 	}, {
+ 		.name = "sspp_3", .id = SSPP_VIG3,
+ 		.base = 0xa000, .len = 0x1f0,
+-		.features = VIG_SDM845_MASK,
++		.features = VIG_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_vig_sblk_qseed3_1_4,
+ 		.xin_id = 12,
+ 		.type = SSPP_TYPE_VIG,
+@@ -107,7 +107,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 	}, {
+ 		.name = "sspp_8", .id = SSPP_DMA0,
+ 		.base = 0x24000, .len = 0x1f0,
+-		.features = DMA_SDM845_MASK,
++		.features = DMA_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_dma_sblk,
+ 		.xin_id = 1,
+ 		.type = SSPP_TYPE_DMA,
+@@ -115,7 +115,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 	}, {
+ 		.name = "sspp_9", .id = SSPP_DMA1,
+ 		.base = 0x26000, .len = 0x1f0,
+-		.features = DMA_SDM845_MASK,
++		.features = DMA_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_dma_sblk,
+ 		.xin_id = 5,
+ 		.type = SSPP_TYPE_DMA,
+@@ -123,7 +123,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 	}, {
+ 		.name = "sspp_10", .id = SSPP_DMA2,
+ 		.base = 0x28000, .len = 0x1f0,
+-		.features = DMA_CURSOR_SDM845_MASK,
++		.features = DMA_CURSOR_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_dma_sblk,
+ 		.xin_id = 9,
+ 		.type = SSPP_TYPE_DMA,
+@@ -131,7 +131,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 	}, {
+ 		.name = "sspp_11", .id = SSPP_DMA3,
+ 		.base = 0x2a000, .len = 0x1f0,
+-		.features = DMA_CURSOR_SDM845_MASK,
++		.features = DMA_CURSOR_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_dma_sblk,
+ 		.xin_id = 13,
+ 		.type = SSPP_TYPE_DMA,
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index bbc47d86ae9e67..ab8c1f19dcb42d 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -367,17 +367,21 @@ static int msm_dp_display_send_hpd_notification(struct msm_dp_display_private *d
+ 	return 0;
+ }
+ 
+-static void msm_dp_display_lttpr_init(struct msm_dp_display_private *dp)
++static int msm_dp_display_lttpr_init(struct msm_dp_display_private *dp, u8 *dpcd)
+ {
+-	u8 lttpr_caps[DP_LTTPR_COMMON_CAP_SIZE];
+-	int rc;
++	int rc, lttpr_count;
+ 
+-	if (drm_dp_read_lttpr_common_caps(dp->aux, dp->panel->dpcd, lttpr_caps))
+-		return;
++	if (drm_dp_read_lttpr_common_caps(dp->aux, dpcd, dp->link->lttpr_common_caps))
++		return 0;
+ 
+-	rc = drm_dp_lttpr_init(dp->aux, drm_dp_lttpr_count(lttpr_caps));
+-	if (rc)
++	lttpr_count = drm_dp_lttpr_count(dp->link->lttpr_common_caps);
++	rc = drm_dp_lttpr_init(dp->aux, lttpr_count);
++	if (rc) {
+ 		DRM_ERROR("failed to set LTTPRs transparency mode, rc=%d\n", rc);
++		return 0;
++	}
++
++	return lttpr_count;
+ }
+ 
+ static int msm_dp_display_process_hpd_high(struct msm_dp_display_private *dp)
+@@ -385,12 +389,17 @@ static int msm_dp_display_process_hpd_high(struct msm_dp_display_private *dp)
+ 	struct drm_connector *connector = dp->msm_dp_display.connector;
+ 	const struct drm_display_info *info = &connector->display_info;
+ 	int rc = 0;
++	u8 dpcd[DP_RECEIVER_CAP_SIZE];
+ 
+-	rc = msm_dp_panel_read_sink_caps(dp->panel, connector);
++	rc = drm_dp_read_dpcd_caps(dp->aux, dpcd);
+ 	if (rc)
+ 		goto end;
+ 
+-	msm_dp_display_lttpr_init(dp);
++	dp->link->lttpr_count = msm_dp_display_lttpr_init(dp, dpcd);
++
++	rc = msm_dp_panel_read_sink_caps(dp->panel, connector);
++	if (rc)
++		goto end;
+ 
+ 	msm_dp_link_process_request(dp->link);
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_link.h b/drivers/gpu/drm/msm/dp/dp_link.h
+index 8db5d5698a97cf..ba47c6d19fbfac 100644
+--- a/drivers/gpu/drm/msm/dp/dp_link.h
++++ b/drivers/gpu/drm/msm/dp/dp_link.h
+@@ -7,6 +7,7 @@
+ #define _DP_LINK_H_
+ 
+ #include "dp_aux.h"
++#include <drm/display/drm_dp_helper.h>
+ 
+ #define DS_PORT_STATUS_CHANGED 0x200
+ #define DP_TEST_BIT_DEPTH_UNKNOWN 0xFFFFFFFF
+@@ -60,6 +61,9 @@ struct msm_dp_link_phy_params {
+ };
+ 
+ struct msm_dp_link {
++	u8 lttpr_common_caps[DP_LTTPR_COMMON_CAP_SIZE];
++	int lttpr_count;
++
+ 	u32 sink_request;
+ 	u32 test_response;
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
+index 92415bf8aa1665..4e8ab75c771b1e 100644
+--- a/drivers/gpu/drm/msm/dp/dp_panel.c
++++ b/drivers/gpu/drm/msm/dp/dp_panel.c
+@@ -47,7 +47,7 @@ static void msm_dp_panel_read_psr_cap(struct msm_dp_panel_private *panel)
+ 
+ static int msm_dp_panel_read_dpcd(struct msm_dp_panel *msm_dp_panel)
+ {
+-	int rc;
++	int rc, max_lttpr_lanes, max_lttpr_rate;
+ 	struct msm_dp_panel_private *panel;
+ 	struct msm_dp_link_info *link_info;
+ 	u8 *dpcd, major, minor;
+@@ -75,6 +75,16 @@ static int msm_dp_panel_read_dpcd(struct msm_dp_panel *msm_dp_panel)
+ 	if (link_info->rate > msm_dp_panel->max_dp_link_rate)
+ 		link_info->rate = msm_dp_panel->max_dp_link_rate;
+ 
++	/* Limit data lanes from LTTPR capabilities, if any */
++	max_lttpr_lanes = drm_dp_lttpr_max_lane_count(panel->link->lttpr_common_caps);
++	if (max_lttpr_lanes && max_lttpr_lanes < link_info->num_lanes)
++		link_info->num_lanes = max_lttpr_lanes;
++
++	/* Limit link rate from LTTPR capabilities, if any */
++	max_lttpr_rate = drm_dp_lttpr_max_link_rate(panel->link->lttpr_common_caps);
++	if (max_lttpr_rate && max_lttpr_rate < link_info->rate)
++		link_info->rate = max_lttpr_rate;
++
+ 	drm_dbg_dp(panel->drm_dev, "version: %d.%d\n", major, minor);
+ 	drm_dbg_dp(panel->drm_dev, "link_rate=%d\n", link_info->rate);
+ 	drm_dbg_dp(panel->drm_dev, "lane_count=%d\n", link_info->num_lanes);
+diff --git a/drivers/gpu/drm/panel/panel-samsung-sofef00.c b/drivers/gpu/drm/panel/panel-samsung-sofef00.c
+index 04ce925b3d9dbd..49cfa84b34f0ca 100644
+--- a/drivers/gpu/drm/panel/panel-samsung-sofef00.c
++++ b/drivers/gpu/drm/panel/panel-samsung-sofef00.c
+@@ -22,7 +22,6 @@ struct sofef00_panel {
+ 	struct mipi_dsi_device *dsi;
+ 	struct regulator *supply;
+ 	struct gpio_desc *reset_gpio;
+-	const struct drm_display_mode *mode;
+ };
+ 
+ static inline
+@@ -159,26 +158,11 @@ static const struct drm_display_mode enchilada_panel_mode = {
+ 	.height_mm = 145,
+ };
+ 
+-static const struct drm_display_mode fajita_panel_mode = {
+-	.clock = (1080 + 72 + 16 + 36) * (2340 + 32 + 4 + 18) * 60 / 1000,
+-	.hdisplay = 1080,
+-	.hsync_start = 1080 + 72,
+-	.hsync_end = 1080 + 72 + 16,
+-	.htotal = 1080 + 72 + 16 + 36,
+-	.vdisplay = 2340,
+-	.vsync_start = 2340 + 32,
+-	.vsync_end = 2340 + 32 + 4,
+-	.vtotal = 2340 + 32 + 4 + 18,
+-	.width_mm = 68,
+-	.height_mm = 145,
+-};
+-
+ static int sofef00_panel_get_modes(struct drm_panel *panel, struct drm_connector *connector)
+ {
+ 	struct drm_display_mode *mode;
+-	struct sofef00_panel *ctx = to_sofef00_panel(panel);
+ 
+-	mode = drm_mode_duplicate(connector->dev, ctx->mode);
++	mode = drm_mode_duplicate(connector->dev, &enchilada_panel_mode);
+ 	if (!mode)
+ 		return -ENOMEM;
+ 
+@@ -239,13 +223,6 @@ static int sofef00_panel_probe(struct mipi_dsi_device *dsi)
+ 	if (!ctx)
+ 		return -ENOMEM;
+ 
+-	ctx->mode = of_device_get_match_data(dev);
+-
+-	if (!ctx->mode) {
+-		dev_err(dev, "Missing device mode\n");
+-		return -ENODEV;
+-	}
+-
+ 	ctx->supply = devm_regulator_get(dev, "vddio");
+ 	if (IS_ERR(ctx->supply))
+ 		return dev_err_probe(dev, PTR_ERR(ctx->supply),
+@@ -295,14 +272,7 @@ static void sofef00_panel_remove(struct mipi_dsi_device *dsi)
+ }
+ 
+ static const struct of_device_id sofef00_panel_of_match[] = {
+-	{ // OnePlus 6 / enchilada
+-		.compatible = "samsung,sofef00",
+-		.data = &enchilada_panel_mode,
+-	},
+-	{ // OnePlus 6T / fajita
+-		.compatible = "samsung,s6e3fc2x01",
+-		.data = &fajita_panel_mode,
+-	},
++	{ .compatible = "samsung,sofef00" },
+ 	{ /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, sofef00_panel_of_match);
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 33a37539de574e..3aaac96c0bfbf5 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -2199,13 +2199,14 @@ static const struct display_timing evervision_vgg644804_timing = {
+ static const struct panel_desc evervision_vgg644804 = {
+ 	.timings = &evervision_vgg644804_timing,
+ 	.num_timings = 1,
+-	.bpc = 8,
++	.bpc = 6,
+ 	.size = {
+ 		.width = 115,
+ 		.height = 86,
+ 	},
+ 	.bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG,
+-	.bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_SAMPLE_NEGEDGE,
++	.bus_flags = DRM_BUS_FLAG_DE_HIGH,
++	.connector_type = DRM_MODE_CONNECTOR_LVDS,
+ };
+ 
+ static const struct display_timing evervision_vgg804821_timing = {
+diff --git a/drivers/gpu/drm/panthor/panthor_device.c b/drivers/gpu/drm/panthor/panthor_device.c
+index a9da1d1eeb7071..1e8811c6716dfa 100644
+--- a/drivers/gpu/drm/panthor/panthor_device.c
++++ b/drivers/gpu/drm/panthor/panthor_device.c
+@@ -171,10 +171,6 @@ int panthor_device_init(struct panthor_device *ptdev)
+ 	struct page *p;
+ 	int ret;
+ 
+-	ret = panthor_gpu_coherency_init(ptdev);
+-	if (ret)
+-		return ret;
+-
+ 	init_completion(&ptdev->unplug.done);
+ 	ret = drmm_mutex_init(&ptdev->base, &ptdev->unplug.lock);
+ 	if (ret)
+@@ -247,6 +243,10 @@ int panthor_device_init(struct panthor_device *ptdev)
+ 	if (ret)
+ 		goto err_rpm_put;
+ 
++	ret = panthor_gpu_coherency_init(ptdev);
++	if (ret)
++		goto err_unplug_gpu;
++
+ 	ret = panthor_mmu_init(ptdev);
+ 	if (ret)
+ 		goto err_unplug_gpu;
+diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
+index 12a02e28f50fd8..7cca97d298ea10 100644
+--- a/drivers/gpu/drm/panthor/panthor_mmu.c
++++ b/drivers/gpu/drm/panthor/panthor_mmu.c
+@@ -781,6 +781,7 @@ int panthor_vm_active(struct panthor_vm *vm)
+ 	if (ptdev->mmu->as.faulty_mask & panthor_mmu_as_fault_mask(ptdev, as)) {
+ 		gpu_write(ptdev, MMU_INT_CLEAR, panthor_mmu_as_fault_mask(ptdev, as));
+ 		ptdev->mmu->as.faulty_mask &= ~panthor_mmu_as_fault_mask(ptdev, as);
++		ptdev->mmu->irq.mask |= panthor_mmu_as_fault_mask(ptdev, as);
+ 		gpu_write(ptdev, MMU_INT_MASK, ~ptdev->mmu->as.faulty_mask);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/panthor/panthor_regs.h b/drivers/gpu/drm/panthor/panthor_regs.h
+index b7b3b3add16627..a7a323dc5cf92a 100644
+--- a/drivers/gpu/drm/panthor/panthor_regs.h
++++ b/drivers/gpu/drm/panthor/panthor_regs.h
+@@ -133,8 +133,8 @@
+ #define GPU_COHERENCY_PROT_BIT(name)			BIT(GPU_COHERENCY_  ## name)
+ 
+ #define GPU_COHERENCY_PROTOCOL				0x304
+-#define   GPU_COHERENCY_ACE				0
+-#define   GPU_COHERENCY_ACE_LITE			1
++#define   GPU_COHERENCY_ACE_LITE			0
++#define   GPU_COHERENCY_ACE				1
+ #define   GPU_COHERENCY_NONE				31
+ 
+ #define MCU_CONTROL					0x700
+diff --git a/drivers/gpu/drm/renesas/rcar-du/rcar_du_kms.c b/drivers/gpu/drm/renesas/rcar-du/rcar_du_kms.c
+index 70d8ad065bfa1d..4c8fe83dd6101b 100644
+--- a/drivers/gpu/drm/renesas/rcar-du/rcar_du_kms.c
++++ b/drivers/gpu/drm/renesas/rcar-du/rcar_du_kms.c
+@@ -705,7 +705,7 @@ static int rcar_du_vsps_init(struct rcar_du_device *rcdu)
+ 		ret = of_parse_phandle_with_fixed_args(np, vsps_prop_name,
+ 						       cells, i, &args);
+ 		if (ret < 0)
+-			goto error;
++			goto done;
+ 
+ 		/*
+ 		 * Add the VSP to the list or update the corresponding existing
+@@ -743,13 +743,11 @@ static int rcar_du_vsps_init(struct rcar_du_device *rcdu)
+ 		vsp->dev = rcdu;
+ 
+ 		ret = rcar_du_vsp_init(vsp, vsps[i].np, vsps[i].crtcs_mask);
+-		if (ret < 0)
+-			goto error;
++		if (ret)
++			goto done;
+ 	}
+ 
+-	return 0;
+-
+-error:
++done:
+ 	for (i = 0; i < ARRAY_SIZE(vsps); ++i)
+ 		of_node_put(vsps[i].np);
+ 
+diff --git a/drivers/gpu/drm/tegra/rgb.c b/drivers/gpu/drm/tegra/rgb.c
+index 1e8ec50b759e46..ff5a749710db3a 100644
+--- a/drivers/gpu/drm/tegra/rgb.c
++++ b/drivers/gpu/drm/tegra/rgb.c
+@@ -200,6 +200,11 @@ static const struct drm_encoder_helper_funcs tegra_rgb_encoder_helper_funcs = {
+ 	.atomic_check = tegra_rgb_encoder_atomic_check,
+ };
+ 
++static void tegra_dc_of_node_put(void *data)
++{
++	of_node_put(data);
++}
++
+ int tegra_dc_rgb_probe(struct tegra_dc *dc)
+ {
+ 	struct device_node *np;
+@@ -207,7 +212,14 @@ int tegra_dc_rgb_probe(struct tegra_dc *dc)
+ 	int err;
+ 
+ 	np = of_get_child_by_name(dc->dev->of_node, "rgb");
+-	if (!np || !of_device_is_available(np))
++	if (!np)
++		return -ENODEV;
++
++	err = devm_add_action_or_reset(dc->dev, tegra_dc_of_node_put, np);
++	if (err < 0)
++		return err;
++
++	if (!of_device_is_available(np))
+ 		return -ENODEV;
+ 
+ 	rgb = devm_kzalloc(dc->dev, sizeof(*rgb), GFP_KERNEL);
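Tying the of_node_put() to the device with devm_add_action_or_reset(), as the tegra hunk does, means the reference is dropped on every exit path, including the new early -ENODEV returns, without sprinkling puts before each return. A toy model of the action semantics (the fixed-size action list is a deliberate simplification of devres):

#include <stdio.h>

/* Toy devm action list: actions run when the device is torn down, or
 * immediately (the "_or_reset" part) if registration itself fails. */
typedef void (*action_fn)(void *);
static action_fn actions[4];
static void *action_data[4];
static int n_actions;

static int devm_add_action_or_reset(action_fn fn, void *data)
{
	if (n_actions >= 4) {
		fn(data);	/* couldn't register: run it right away */
		return -1;
	}
	actions[n_actions] = fn;
	action_data[n_actions++] = data;
	return 0;
}

static void device_teardown(void)
{
	while (n_actions > 0) {
		n_actions--;
		actions[n_actions](action_data[n_actions]);
	}
}

static void node_put(void *name) { printf("of_node_put(%s)\n", (char *)name); }

static int probe(int available)
{
	if (devm_add_action_or_reset(node_put, "rgb"))
		return -1;
	if (!available)
		return -1;	/* early return: the put still happens */
	puts("probe ok");
	return 0;
}

int main(void)
{
	probe(0);
	device_teardown();	/* prints of_node_put(rgb) exactly once */
	return 0;
}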
+diff --git a/drivers/gpu/drm/v3d/v3d_debugfs.c b/drivers/gpu/drm/v3d/v3d_debugfs.c
+index 76816f2551c100..7e789e181af0ac 100644
+--- a/drivers/gpu/drm/v3d/v3d_debugfs.c
++++ b/drivers/gpu/drm/v3d/v3d_debugfs.c
+@@ -21,74 +21,74 @@ struct v3d_reg_def {
+ };
+ 
+ static const struct v3d_reg_def v3d_hub_reg_defs[] = {
+-	REGDEF(33, 42, V3D_HUB_AXICFG),
+-	REGDEF(33, 71, V3D_HUB_UIFCFG),
+-	REGDEF(33, 71, V3D_HUB_IDENT0),
+-	REGDEF(33, 71, V3D_HUB_IDENT1),
+-	REGDEF(33, 71, V3D_HUB_IDENT2),
+-	REGDEF(33, 71, V3D_HUB_IDENT3),
+-	REGDEF(33, 71, V3D_HUB_INT_STS),
+-	REGDEF(33, 71, V3D_HUB_INT_MSK_STS),
+-
+-	REGDEF(33, 71, V3D_MMU_CTL),
+-	REGDEF(33, 71, V3D_MMU_VIO_ADDR),
+-	REGDEF(33, 71, V3D_MMU_VIO_ID),
+-	REGDEF(33, 71, V3D_MMU_DEBUG_INFO),
+-
+-	REGDEF(71, 71, V3D_GMP_STATUS(71)),
+-	REGDEF(71, 71, V3D_GMP_CFG(71)),
+-	REGDEF(71, 71, V3D_GMP_VIO_ADDR(71)),
++	REGDEF(V3D_GEN_33, V3D_GEN_42, V3D_HUB_AXICFG),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_HUB_UIFCFG),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_HUB_IDENT0),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_HUB_IDENT1),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_HUB_IDENT2),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_HUB_IDENT3),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_HUB_INT_STS),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_HUB_INT_MSK_STS),
++
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_MMU_CTL),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_MMU_VIO_ADDR),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_MMU_VIO_ID),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_MMU_DEBUG_INFO),
++
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_GMP_STATUS(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_GMP_CFG(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_GMP_VIO_ADDR(71)),
+ };
+ 
+ static const struct v3d_reg_def v3d_gca_reg_defs[] = {
+-	REGDEF(33, 33, V3D_GCA_SAFE_SHUTDOWN),
+-	REGDEF(33, 33, V3D_GCA_SAFE_SHUTDOWN_ACK),
++	REGDEF(V3D_GEN_33, V3D_GEN_33, V3D_GCA_SAFE_SHUTDOWN),
++	REGDEF(V3D_GEN_33, V3D_GEN_33, V3D_GCA_SAFE_SHUTDOWN_ACK),
+ };
+ 
+ static const struct v3d_reg_def v3d_core_reg_defs[] = {
+-	REGDEF(33, 71, V3D_CTL_IDENT0),
+-	REGDEF(33, 71, V3D_CTL_IDENT1),
+-	REGDEF(33, 71, V3D_CTL_IDENT2),
+-	REGDEF(33, 71, V3D_CTL_MISCCFG),
+-	REGDEF(33, 71, V3D_CTL_INT_STS),
+-	REGDEF(33, 71, V3D_CTL_INT_MSK_STS),
+-	REGDEF(33, 71, V3D_CLE_CT0CS),
+-	REGDEF(33, 71, V3D_CLE_CT0CA),
+-	REGDEF(33, 71, V3D_CLE_CT0EA),
+-	REGDEF(33, 71, V3D_CLE_CT1CS),
+-	REGDEF(33, 71, V3D_CLE_CT1CA),
+-	REGDEF(33, 71, V3D_CLE_CT1EA),
+-
+-	REGDEF(33, 71, V3D_PTB_BPCA),
+-	REGDEF(33, 71, V3D_PTB_BPCS),
+-
+-	REGDEF(33, 42, V3D_GMP_STATUS(33)),
+-	REGDEF(33, 42, V3D_GMP_CFG(33)),
+-	REGDEF(33, 42, V3D_GMP_VIO_ADDR(33)),
+-
+-	REGDEF(33, 71, V3D_ERR_FDBGO),
+-	REGDEF(33, 71, V3D_ERR_FDBGB),
+-	REGDEF(33, 71, V3D_ERR_FDBGS),
+-	REGDEF(33, 71, V3D_ERR_STAT),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CTL_IDENT0),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CTL_IDENT1),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CTL_IDENT2),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CTL_MISCCFG),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CTL_INT_STS),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CTL_INT_MSK_STS),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CLE_CT0CS),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CLE_CT0CA),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CLE_CT0EA),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CLE_CT1CS),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CLE_CT1CA),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CLE_CT1EA),
++
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_PTB_BPCA),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_PTB_BPCS),
++
++	REGDEF(V3D_GEN_33, V3D_GEN_42, V3D_GMP_STATUS(33)),
++	REGDEF(V3D_GEN_33, V3D_GEN_42, V3D_GMP_CFG(33)),
++	REGDEF(V3D_GEN_33, V3D_GEN_42, V3D_GMP_VIO_ADDR(33)),
++
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_ERR_FDBGO),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_ERR_FDBGB),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_ERR_FDBGS),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_ERR_STAT),
+ };
+ 
+ static const struct v3d_reg_def v3d_csd_reg_defs[] = {
+-	REGDEF(41, 71, V3D_CSD_STATUS),
+-	REGDEF(41, 42, V3D_CSD_CURRENT_CFG0(41)),
+-	REGDEF(41, 42, V3D_CSD_CURRENT_CFG1(41)),
+-	REGDEF(41, 42, V3D_CSD_CURRENT_CFG2(41)),
+-	REGDEF(41, 42, V3D_CSD_CURRENT_CFG3(41)),
+-	REGDEF(41, 42, V3D_CSD_CURRENT_CFG4(41)),
+-	REGDEF(41, 42, V3D_CSD_CURRENT_CFG5(41)),
+-	REGDEF(41, 42, V3D_CSD_CURRENT_CFG6(41)),
+-	REGDEF(71, 71, V3D_CSD_CURRENT_CFG0(71)),
+-	REGDEF(71, 71, V3D_CSD_CURRENT_CFG1(71)),
+-	REGDEF(71, 71, V3D_CSD_CURRENT_CFG2(71)),
+-	REGDEF(71, 71, V3D_CSD_CURRENT_CFG3(71)),
+-	REGDEF(71, 71, V3D_CSD_CURRENT_CFG4(71)),
+-	REGDEF(71, 71, V3D_CSD_CURRENT_CFG5(71)),
+-	REGDEF(71, 71, V3D_CSD_CURRENT_CFG6(71)),
+-	REGDEF(71, 71, V3D_V7_CSD_CURRENT_CFG7),
++	REGDEF(V3D_GEN_41, V3D_GEN_71, V3D_CSD_STATUS),
++	REGDEF(V3D_GEN_41, V3D_GEN_42, V3D_CSD_CURRENT_CFG0(41)),
++	REGDEF(V3D_GEN_41, V3D_GEN_42, V3D_CSD_CURRENT_CFG1(41)),
++	REGDEF(V3D_GEN_41, V3D_GEN_42, V3D_CSD_CURRENT_CFG2(41)),
++	REGDEF(V3D_GEN_41, V3D_GEN_42, V3D_CSD_CURRENT_CFG3(41)),
++	REGDEF(V3D_GEN_41, V3D_GEN_42, V3D_CSD_CURRENT_CFG4(41)),
++	REGDEF(V3D_GEN_41, V3D_GEN_42, V3D_CSD_CURRENT_CFG5(41)),
++	REGDEF(V3D_GEN_41, V3D_GEN_42, V3D_CSD_CURRENT_CFG6(41)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_CSD_CURRENT_CFG0(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_CSD_CURRENT_CFG1(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_CSD_CURRENT_CFG2(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_CSD_CURRENT_CFG3(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_CSD_CURRENT_CFG4(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_CSD_CURRENT_CFG5(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_CSD_CURRENT_CFG6(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_V7_CSD_CURRENT_CFG7),
+ };
+ 
+ static int v3d_v3d_debugfs_regs(struct seq_file *m, void *unused)
+@@ -164,7 +164,7 @@ static int v3d_v3d_debugfs_ident(struct seq_file *m, void *unused)
+ 		   str_yes_no(ident2 & V3D_HUB_IDENT2_WITH_MMU));
+ 	seq_printf(m, "TFU:        %s\n",
+ 		   str_yes_no(ident1 & V3D_HUB_IDENT1_WITH_TFU));
+-	if (v3d->ver <= 42) {
++	if (v3d->ver <= V3D_GEN_42) {
+ 		seq_printf(m, "TSY:        %s\n",
+ 			   str_yes_no(ident1 & V3D_HUB_IDENT1_WITH_TSY));
+ 	}
+@@ -196,11 +196,11 @@ static int v3d_v3d_debugfs_ident(struct seq_file *m, void *unused)
+ 		seq_printf(m, "  QPUs:         %d\n", nslc * qups);
+ 		seq_printf(m, "  Semaphores:   %d\n",
+ 			   V3D_GET_FIELD(ident1, V3D_IDENT1_NSEM));
+-		if (v3d->ver <= 42) {
++		if (v3d->ver <= V3D_GEN_42) {
+ 			seq_printf(m, "  BCG int:      %d\n",
+ 				   (ident2 & V3D_IDENT2_BCG_INT) != 0);
+ 		}
+-		if (v3d->ver < 40) {
++		if (v3d->ver < V3D_GEN_41) {
+ 			seq_printf(m, "  Override TMU: %d\n",
+ 				   (misccfg & V3D_MISCCFG_OVRTMUOUT) != 0);
+ 		}
+@@ -234,7 +234,7 @@ static int v3d_measure_clock(struct seq_file *m, void *unused)
+ 	int core = 0;
+ 	int measure_ms = 1000;
+ 
+-	if (v3d->ver >= 40) {
++	if (v3d->ver >= V3D_GEN_41) {
+ 		int cycle_count_reg = V3D_PCTR_CYCLE_COUNT(v3d->ver);
+ 		V3D_CORE_WRITE(core, V3D_V4_PCTR_0_SRC_0_3,
+ 			       V3D_SET_FIELD_VER(cycle_count_reg,
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.c b/drivers/gpu/drm/v3d/v3d_drv.c
+index 852015214e971c..aa68be8fe86b71 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.c
++++ b/drivers/gpu/drm/v3d/v3d_drv.c
+@@ -17,6 +17,7 @@
+ #include <linux/dma-mapping.h>
+ #include <linux/io.h>
+ #include <linux/module.h>
++#include <linux/of.h>
+ #include <linux/of_platform.h>
+ #include <linux/platform_device.h>
+ #include <linux/sched/clock.h>
+@@ -92,7 +93,7 @@ static int v3d_get_param_ioctl(struct drm_device *dev, void *data,
+ 		args->value = 1;
+ 		return 0;
+ 	case DRM_V3D_PARAM_SUPPORTS_PERFMON:
+-		args->value = (v3d->ver >= 40);
++		args->value = (v3d->ver >= V3D_GEN_41);
+ 		return 0;
+ 	case DRM_V3D_PARAM_SUPPORTS_MULTISYNC_EXT:
+ 		args->value = 1;
+@@ -254,10 +255,10 @@ static const struct drm_driver v3d_drm_driver = {
+ };
+ 
+ static const struct of_device_id v3d_of_match[] = {
+-	{ .compatible = "brcm,2711-v3d" },
+-	{ .compatible = "brcm,2712-v3d" },
+-	{ .compatible = "brcm,7268-v3d" },
+-	{ .compatible = "brcm,7278-v3d" },
++	{ .compatible = "brcm,2711-v3d", .data = (void *)V3D_GEN_42 },
++	{ .compatible = "brcm,2712-v3d", .data = (void *)V3D_GEN_71 },
++	{ .compatible = "brcm,7268-v3d", .data = (void *)V3D_GEN_33 },
++	{ .compatible = "brcm,7278-v3d", .data = (void *)V3D_GEN_41 },
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(of, v3d_of_match);
+@@ -274,6 +275,7 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
+ 	struct device *dev = &pdev->dev;
+ 	struct drm_device *drm;
+ 	struct v3d_dev *v3d;
++	enum v3d_gen gen;
+ 	int ret;
+ 	u32 mmu_debug;
+ 	u32 ident1, ident3;
+@@ -287,6 +289,9 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, drm);
+ 
++	gen = (uintptr_t)of_device_get_match_data(dev);
++	v3d->ver = gen;
++
+ 	ret = map_regs(v3d, &v3d->hub_regs, "hub");
+ 	if (ret)
+ 		return ret;
+@@ -316,6 +321,11 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
+ 	ident1 = V3D_READ(V3D_HUB_IDENT1);
+ 	v3d->ver = (V3D_GET_FIELD(ident1, V3D_HUB_IDENT1_TVER) * 10 +
+ 		    V3D_GET_FIELD(ident1, V3D_HUB_IDENT1_REV));
++	/* Make sure that the V3D tech version retrieved from the HW is equal
++	 * to the one advertised by the device tree.
++	 */
++	WARN_ON(v3d->ver != gen);
++
+ 	v3d->cores = V3D_GET_FIELD(ident1, V3D_HUB_IDENT1_NCORES);
+ 	WARN_ON(v3d->cores > 1); /* multicore not yet implemented */
+ 
+@@ -340,7 +350,7 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	if (v3d->ver < 41) {
++	if (v3d->ver < V3D_GEN_41) {
+ 		ret = map_regs(v3d, &v3d->gca_regs, "gca");
+ 		if (ret)
+ 			goto clk_disable;
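The v3d probe change makes the GPU generation come from the .data field of the matched of_device_id, stored as a plain integer cast through uintptr_t, so v3d->ver is valid before the IDENT registers are even mapped; the hardware read-back is then only sanity-checked with WARN_ON(). A sketch of the pointer-as-integer round trip, with a trivial string match standing in for the OF matching machinery:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

enum v3d_gen { V3D_GEN_33 = 33, V3D_GEN_41 = 41, V3D_GEN_42 = 42, V3D_GEN_71 = 71 };

struct of_device_id {
	const char *compatible;
	const void *data;	/* integer smuggled through a pointer */
};

static const struct of_device_id v3d_of_match[] = {
	{ "brcm,2711-v3d", (void *)(uintptr_t)V3D_GEN_42 },
	{ "brcm,2712-v3d", (void *)(uintptr_t)V3D_GEN_71 },
	{ "brcm,7268-v3d", (void *)(uintptr_t)V3D_GEN_33 },
	{ "brcm,7278-v3d", (void *)(uintptr_t)V3D_GEN_41 },
	{ 0 },
};

static enum v3d_gen match_data(const char *compatible)
{
	for (const struct of_device_id *id = v3d_of_match; id->compatible; id++)
		if (!strcmp(id->compatible, compatible))
			return (enum v3d_gen)(uintptr_t)id->data;
	return 0;
}

int main(void)
{
	printf("brcm,2712-v3d -> gen %d\n", match_data("brcm,2712-v3d"));
	return 0;
}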
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
+index 9deaefa0f95b71..de4a9e18f6a903 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.h
++++ b/drivers/gpu/drm/v3d/v3d_drv.h
+@@ -94,11 +94,18 @@ struct v3d_perfmon {
+ 	u64 values[] __counted_by(ncounters);
+ };
+ 
++enum v3d_gen {
++	V3D_GEN_33 = 33,
++	V3D_GEN_41 = 41,
++	V3D_GEN_42 = 42,
++	V3D_GEN_71 = 71,
++};
++
+ struct v3d_dev {
+ 	struct drm_device drm;
+ 
+ 	/* Short representation (e.g. 33, 41) of the V3D tech version */
+-	int ver;
++	enum v3d_gen ver;
+ 
+ 	/* Short representation (e.g. 5, 6) of the V3D tech revision */
+ 	int rev;
+@@ -199,7 +206,7 @@ to_v3d_dev(struct drm_device *dev)
+ static inline bool
+ v3d_has_csd(struct v3d_dev *v3d)
+ {
+-	return v3d->ver >= 41;
++	return v3d->ver >= V3D_GEN_41;
+ }
+ 
+ #define v3d_to_pdev(v3d) to_platform_device((v3d)->drm.dev)
+diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
+index b1e681630ded09..1ea6d3832c2212 100644
+--- a/drivers/gpu/drm/v3d/v3d_gem.c
++++ b/drivers/gpu/drm/v3d/v3d_gem.c
+@@ -25,7 +25,7 @@ v3d_init_core(struct v3d_dev *v3d, int core)
+ 	 * type.  If you want the default behavior, you can still put
+ 	 * "2" in the indirect texture state's output_type field.
+ 	 */
+-	if (v3d->ver < 40)
++	if (v3d->ver < V3D_GEN_41)
+ 		V3D_CORE_WRITE(core, V3D_CTL_MISCCFG, V3D_MISCCFG_OVRTMUOUT);
+ 
+ 	/* Whenever we flush the L2T cache, we always want to flush
+@@ -58,7 +58,7 @@ v3d_idle_axi(struct v3d_dev *v3d, int core)
+ static void
+ v3d_idle_gca(struct v3d_dev *v3d)
+ {
+-	if (v3d->ver >= 41)
++	if (v3d->ver >= V3D_GEN_41)
+ 		return;
+ 
+ 	V3D_GCA_WRITE(V3D_GCA_SAFE_SHUTDOWN, V3D_GCA_SAFE_SHUTDOWN_EN);
+@@ -132,13 +132,13 @@ v3d_reset(struct v3d_dev *v3d)
+ static void
+ v3d_flush_l3(struct v3d_dev *v3d)
+ {
+-	if (v3d->ver < 41) {
++	if (v3d->ver < V3D_GEN_41) {
+ 		u32 gca_ctrl = V3D_GCA_READ(V3D_GCA_CACHE_CTRL);
+ 
+ 		V3D_GCA_WRITE(V3D_GCA_CACHE_CTRL,
+ 			      gca_ctrl | V3D_GCA_CACHE_CTRL_FLUSH);
+ 
+-		if (v3d->ver < 33) {
++		if (v3d->ver < V3D_GEN_33) {
+ 			V3D_GCA_WRITE(V3D_GCA_CACHE_CTRL,
+ 				      gca_ctrl & ~V3D_GCA_CACHE_CTRL_FLUSH);
+ 		}
+@@ -151,7 +151,7 @@ v3d_flush_l3(struct v3d_dev *v3d)
+ static void
+ v3d_invalidate_l2c(struct v3d_dev *v3d, int core)
+ {
+-	if (v3d->ver > 32)
++	if (v3d->ver >= V3D_GEN_33)
+ 		return;
+ 
+ 	V3D_CORE_WRITE(core, V3D_CTL_L2CACTL,
+diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c
+index 72b6a119412fa7..2cca5d3a26a22c 100644
+--- a/drivers/gpu/drm/v3d/v3d_irq.c
++++ b/drivers/gpu/drm/v3d/v3d_irq.c
+@@ -143,7 +143,7 @@ v3d_irq(int irq, void *arg)
+ 	/* We shouldn't be triggering these if we have GMP in
+ 	 * always-allowed mode.
+ 	 */
+-	if (v3d->ver < 71 && (intsts & V3D_INT_GMPV))
++	if (v3d->ver < V3D_GEN_71 && (intsts & V3D_INT_GMPV))
+ 		dev_err(v3d->drm.dev, "GMP violation\n");
+ 
+ 	/* V3D 4.2 wires the hub and core IRQs together, so if we &
+@@ -186,27 +186,59 @@ v3d_hub_irq(int irq, void *arg)
+ 		u32 axi_id = V3D_READ(V3D_MMU_VIO_ID);
+ 		u64 vio_addr = ((u64)V3D_READ(V3D_MMU_VIO_ADDR) <<
+ 				(v3d->va_width - 32));
+-		static const char *const v3d41_axi_ids[] = {
+-			"L2T",
+-			"PTB",
+-			"PSE",
+-			"TLB",
+-			"CLE",
+-			"TFU",
+-			"MMU",
+-			"GMP",
++		static const struct {
++			u32 begin;
++			u32 end;
++			const char *client;
++		} v3d41_axi_ids[] = {
++			{0x00, 0x20, "L2T"},
++			{0x20, 0x21, "PTB"},
++			{0x40, 0x41, "PSE"},
++			{0x60, 0x80, "TLB"},
++			{0x80, 0x88, "CLE"},
++			{0xA0, 0xA1, "TFU"},
++			{0xC0, 0xE0, "MMU"},
++			{0xE0, 0xE1, "GMP"},
++		}, v3d71_axi_ids[] = {
++			{0x00, 0x30, "L2T"},
++			{0x30, 0x38, "CLE"},
++			{0x38, 0x39, "PTB"},
++			{0x39, 0x3A, "PSE"},
++			{0x3A, 0x3B, "CSD"},
++			{0x40, 0x60, "TLB"},
++			{0x60, 0x70, "MMU"},
++			{0x7C, 0x7E, "TFU"},
++			{0x7F, 0x80, "GMP"},
+ 		};
+ 		const char *client = "?";
+ 
+ 		V3D_WRITE(V3D_MMU_CTL, V3D_READ(V3D_MMU_CTL));
+ 
+-		if (v3d->ver >= 41) {
+-			axi_id = axi_id >> 5;
+-			if (axi_id < ARRAY_SIZE(v3d41_axi_ids))
+-				client = v3d41_axi_ids[axi_id];
++		if (v3d->ver >= V3D_GEN_71) {
++			size_t i;
++
++			axi_id = axi_id & 0x7F;
++			for (i = 0; i < ARRAY_SIZE(v3d71_axi_ids); i++) {
++				if (axi_id >= v3d71_axi_ids[i].begin &&
++				    axi_id < v3d71_axi_ids[i].end) {
++					client = v3d71_axi_ids[i].client;
++					break;
++				}
++			}
++		} else if (v3d->ver >= V3D_GEN_41) {
++			size_t i;
++
++			axi_id = axi_id & 0xFF;
++			for (i = 0; i < ARRAY_SIZE(v3d41_axi_ids); i++) {
++				if (axi_id >= v3d41_axi_ids[i].begin &&
++				    axi_id < v3d41_axi_ids[i].end) {
++					client = v3d41_axi_ids[i].client;
++					break;
++				}
++			}
+ 		}
+ 
+-		dev_err(v3d->drm.dev, "MMU error from client %s (%d) at 0x%llx%s%s%s\n",
++		dev_err(v3d->drm.dev, "MMU error from client %s (0x%x) at 0x%llx%s%s%s\n",
+ 			client, axi_id, (long long)vio_addr,
+ 			((intsts & V3D_HUB_INT_MMU_WRV) ?
+ 			 ", write violation" : ""),
+@@ -217,7 +249,7 @@ v3d_hub_irq(int irq, void *arg)
+ 		status = IRQ_HANDLED;
+ 	}
+ 
+-	if (v3d->ver >= 71 && (intsts & V3D_V7_HUB_INT_GMPV)) {
++	if (v3d->ver >= V3D_GEN_71 && (intsts & V3D_V7_HUB_INT_GMPV)) {
+ 		dev_err(v3d->drm.dev, "GMP Violation\n");
+ 		status = IRQ_HANDLED;
+ 	}
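
The rewritten decode gives each client a half-open [begin, end) ID range instead of assuming the IDs are dense and shift-indexable, which is what the sparse V3D 7.1 map needs. A standalone sketch of the same lookup, with the table truncated for brevity:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct axi_range { uint32_t begin, end; const char *client; };

/* Subset of the v3d71_axi_ids table above; same half-open [begin, end) convention. */
static const struct axi_range v3d71_axi_ids[] = {
	{0x00, 0x30, "L2T"}, {0x30, 0x38, "CLE"}, {0x40, 0x60, "TLB"},
};

static const char *axi_client(uint32_t axi_id)
{
	for (size_t i = 0; i < sizeof(v3d71_axi_ids) / sizeof(v3d71_axi_ids[0]); i++)
		if (axi_id >= v3d71_axi_ids[i].begin && axi_id < v3d71_axi_ids[i].end)
			return v3d71_axi_ids[i].client;
	return "?";	/* unmapped IDs stay unknown, as in the driver */
}

int main(void)
{
	printf("%s\n", axi_client(0x35));	/* prints CLE */
	return 0;
}
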
+diff --git a/drivers/gpu/drm/v3d/v3d_perfmon.c b/drivers/gpu/drm/v3d/v3d_perfmon.c
+index 3ebda2fa46fc47..9a3fe52558746e 100644
+--- a/drivers/gpu/drm/v3d/v3d_perfmon.c
++++ b/drivers/gpu/drm/v3d/v3d_perfmon.c
+@@ -200,10 +200,10 @@ void v3d_perfmon_init(struct v3d_dev *v3d)
+ 	const struct v3d_perf_counter_desc *counters = NULL;
+ 	unsigned int max = 0;
+ 
+-	if (v3d->ver >= 71) {
++	if (v3d->ver >= V3D_GEN_71) {
+ 		counters = v3d_v71_performance_counters;
+ 		max = ARRAY_SIZE(v3d_v71_performance_counters);
+-	} else if (v3d->ver >= 42) {
++	} else if (v3d->ver >= V3D_GEN_42) {
+ 		counters = v3d_v42_performance_counters;
+ 		max = ARRAY_SIZE(v3d_v42_performance_counters);
+ 	}
+diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
+index eb35482f6fb577..35f131a46d0701 100644
+--- a/drivers/gpu/drm/v3d/v3d_sched.c
++++ b/drivers/gpu/drm/v3d/v3d_sched.c
+@@ -357,11 +357,11 @@ v3d_tfu_job_run(struct drm_sched_job *sched_job)
+ 	V3D_WRITE(V3D_TFU_ICA(v3d->ver), job->args.ica);
+ 	V3D_WRITE(V3D_TFU_IUA(v3d->ver), job->args.iua);
+ 	V3D_WRITE(V3D_TFU_IOA(v3d->ver), job->args.ioa);
+-	if (v3d->ver >= 71)
++	if (v3d->ver >= V3D_GEN_71)
+ 		V3D_WRITE(V3D_V7_TFU_IOC, job->args.v71.ioc);
+ 	V3D_WRITE(V3D_TFU_IOS(v3d->ver), job->args.ios);
+ 	V3D_WRITE(V3D_TFU_COEF0(v3d->ver), job->args.coef[0]);
+-	if (v3d->ver >= 71 || (job->args.coef[0] & V3D_TFU_COEF0_USECOEF)) {
++	if (v3d->ver >= V3D_GEN_71 || (job->args.coef[0] & V3D_TFU_COEF0_USECOEF)) {
+ 		V3D_WRITE(V3D_TFU_COEF1(v3d->ver), job->args.coef[1]);
+ 		V3D_WRITE(V3D_TFU_COEF2(v3d->ver), job->args.coef[2]);
+ 		V3D_WRITE(V3D_TFU_COEF3(v3d->ver), job->args.coef[3]);
+@@ -412,7 +412,7 @@ v3d_csd_job_run(struct drm_sched_job *sched_job)
+ 	 *
+ 	 * XXX: Set the CFG7 register
+ 	 */
+-	if (v3d->ver >= 71)
++	if (v3d->ver >= V3D_GEN_71)
+ 		V3D_CORE_WRITE(0, V3D_V7_CSD_QUEUED_CFG7, 0);
+ 
+ 	/* CFG0 write kicks off the job. */
+diff --git a/drivers/gpu/drm/vc4/tests/vc4_mock_output.c b/drivers/gpu/drm/vc4/tests/vc4_mock_output.c
+index e70d7c3076acf1..f0ddc223c1f839 100644
+--- a/drivers/gpu/drm/vc4/tests/vc4_mock_output.c
++++ b/drivers/gpu/drm/vc4/tests/vc4_mock_output.c
+@@ -75,24 +75,30 @@ int vc4_mock_atomic_add_output(struct kunit *test,
+ 	int ret;
+ 
+ 	encoder = vc4_find_encoder_by_type(drm, type);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, encoder);
++	if (!encoder)
++		return -ENODEV;
+ 
+ 	crtc = vc4_find_crtc_for_encoder(test, drm, encoder);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, crtc);
++	if (!crtc)
++		return -ENODEV;
+ 
+ 	output = encoder_to_vc4_dummy_output(encoder);
+ 	conn = &output->connector;
+ 	conn_state = drm_atomic_get_connector_state(state, conn);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, conn_state);
++	if (IS_ERR(conn_state))
++		return PTR_ERR(conn_state);
+ 
+ 	ret = drm_atomic_set_crtc_for_connector(conn_state, crtc);
+-	KUNIT_EXPECT_EQ(test, ret, 0);
++	if (ret)
++		return ret;
+ 
+ 	crtc_state = drm_atomic_get_crtc_state(state, crtc);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, crtc_state);
++	if (IS_ERR(crtc_state))
++		return PTR_ERR(crtc_state);
+ 
+ 	ret = drm_atomic_set_mode_for_crtc(crtc_state, &default_mode);
+-	KUNIT_EXPECT_EQ(test, ret, 0);
++	if (ret)
++		return ret;
+ 
+ 	crtc_state->active = true;
+ 
+@@ -113,26 +119,32 @@ int vc4_mock_atomic_del_output(struct kunit *test,
+ 	int ret;
+ 
+ 	encoder = vc4_find_encoder_by_type(drm, type);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, encoder);
++	if (!encoder)
++		return -ENODEV;
+ 
+ 	crtc = vc4_find_crtc_for_encoder(test, drm, encoder);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, crtc);
++	if (!crtc)
++		return -ENODEV;
+ 
+ 	crtc_state = drm_atomic_get_crtc_state(state, crtc);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, crtc_state);
++	if (IS_ERR(crtc_state))
++		return PTR_ERR(crtc_state);
+ 
+ 	crtc_state->active = false;
+ 
+ 	ret = drm_atomic_set_mode_for_crtc(crtc_state, NULL);
+-	KUNIT_ASSERT_EQ(test, ret, 0);
++	if (ret)
++		return ret;
+ 
+ 	output = encoder_to_vc4_dummy_output(encoder);
+ 	conn = &output->connector;
+ 	conn_state = drm_atomic_get_connector_state(state, conn);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, conn_state);
++	if (IS_ERR(conn_state))
++		return PTR_ERR(conn_state);
+ 
+ 	ret = drm_atomic_set_crtc_for_connector(conn_state, NULL);
+-	KUNIT_ASSERT_EQ(test, ret, 0);
++	if (ret)
++		return ret;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/vc4/tests/vc4_test_pv_muxing.c b/drivers/gpu/drm/vc4/tests/vc4_test_pv_muxing.c
+index 992e8f5c5c6ea8..d1f694029169ad 100644
+--- a/drivers/gpu/drm/vc4/tests/vc4_test_pv_muxing.c
++++ b/drivers/gpu/drm/vc4/tests/vc4_test_pv_muxing.c
+@@ -20,7 +20,6 @@
+ 
+ struct pv_muxing_priv {
+ 	struct vc4_dev *vc4;
+-	struct drm_atomic_state *state;
+ };
+ 
+ static bool check_fifo_conflict(struct kunit *test,
+@@ -677,18 +676,41 @@ static void drm_vc4_test_pv_muxing(struct kunit *test)
+ {
+ 	const struct pv_muxing_param *params = test->param_value;
+ 	const struct pv_muxing_priv *priv = test->priv;
+-	struct drm_atomic_state *state = priv->state;
++	struct drm_modeset_acquire_ctx ctx;
++	struct drm_atomic_state *state;
++	struct drm_device *drm;
++	struct vc4_dev *vc4;
+ 	unsigned int i;
+ 	int ret;
+ 
++	drm_modeset_acquire_init(&ctx, 0);
++
++	vc4 = priv->vc4;
++	drm = &vc4->base;
++
++retry:
++	state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
++	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
+ 	for (i = 0; i < params->nencoders; i++) {
+ 		enum vc4_encoder_type enc_type = params->encoders[i];
+ 
+ 		ret = vc4_mock_atomic_add_output(test, state, enc_type);
++		if (ret == -EDEADLK) {
++			drm_atomic_state_clear(state);
++			ret = drm_modeset_backoff(&ctx);
++			if (!ret)
++				goto retry;
++		}
+ 		KUNIT_ASSERT_EQ(test, ret, 0);
+ 	}
+ 
+ 	ret = drm_atomic_check_only(state);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry;
++	}
+ 	KUNIT_EXPECT_EQ(test, ret, 0);
+ 
+ 	KUNIT_EXPECT_TRUE(test,
+@@ -700,33 +722,61 @@ static void drm_vc4_test_pv_muxing(struct kunit *test)
+ 		KUNIT_EXPECT_TRUE(test, check_channel_for_encoder(test, state, enc_type,
+ 								  params->check_fn));
+ 	}
++
++	drm_modeset_drop_locks(&ctx);
++	drm_modeset_acquire_fini(&ctx);
+ }
+ 
+ static void drm_vc4_test_pv_muxing_invalid(struct kunit *test)
+ {
+ 	const struct pv_muxing_param *params = test->param_value;
+ 	const struct pv_muxing_priv *priv = test->priv;
+-	struct drm_atomic_state *state = priv->state;
++	struct drm_modeset_acquire_ctx ctx;
++	struct drm_atomic_state *state;
++	struct drm_device *drm;
++	struct vc4_dev *vc4;
+ 	unsigned int i;
+ 	int ret;
+ 
++	drm_modeset_acquire_init(&ctx, 0);
++
++	vc4 = priv->vc4;
++	drm = &vc4->base;
++
++retry:
++	state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
++	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
++
+ 	for (i = 0; i < params->nencoders; i++) {
+ 		enum vc4_encoder_type enc_type = params->encoders[i];
+ 
+ 		ret = vc4_mock_atomic_add_output(test, state, enc_type);
++		if (ret == -EDEADLK) {
++			drm_atomic_state_clear(state);
++			ret = drm_modeset_backoff(&ctx);
++			if (!ret)
++				goto retry;
++		}
+ 		KUNIT_ASSERT_EQ(test, ret, 0);
+ 	}
+ 
+ 	ret = drm_atomic_check_only(state);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry;
++	}
+ 	KUNIT_EXPECT_LT(test, ret, 0);
++
++	drm_modeset_drop_locks(&ctx);
++	drm_modeset_acquire_fini(&ctx);
+ }
+ 
+ static int vc4_pv_muxing_test_init(struct kunit *test)
+ {
+ 	const struct pv_muxing_param *params = test->param_value;
+-	struct drm_modeset_acquire_ctx ctx;
+ 	struct pv_muxing_priv *priv;
+-	struct drm_device *drm;
+ 	struct vc4_dev *vc4;
+ 
+ 	priv = kunit_kzalloc(test, sizeof(*priv), GFP_KERNEL);
+@@ -737,15 +787,6 @@ static int vc4_pv_muxing_test_init(struct kunit *test)
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vc4);
+ 	priv->vc4 = vc4;
+ 
+-	drm_modeset_acquire_init(&ctx, 0);
+-
+-	drm = &vc4->base;
+-	priv->state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, priv->state);
+-
+-	drm_modeset_drop_locks(&ctx);
+-	drm_modeset_acquire_fini(&ctx);
+-
+ 	return 0;
+ }
+ 
+@@ -800,13 +841,26 @@ static void drm_test_vc5_pv_muxing_bugs_subsequent_crtc_enable(struct kunit *tes
+ 	drm_modeset_acquire_init(&ctx, 0);
+ 
+ 	drm = &vc4->base;
++retry_first:
+ 	state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
+ 
+ 	ret = vc4_mock_atomic_add_output(test, state, VC4_ENCODER_TYPE_HDMI0);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_first;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	ret = drm_atomic_check_only(state);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_first;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	new_hvs_state = vc4_hvs_get_new_global_state(state);
+@@ -823,13 +877,26 @@ static void drm_test_vc5_pv_muxing_bugs_subsequent_crtc_enable(struct kunit *tes
+ 	ret = drm_atomic_helper_swap_state(state, false);
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
++retry_second:
+ 	state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
+ 
+ 	ret = vc4_mock_atomic_add_output(test, state, VC4_ENCODER_TYPE_HDMI1);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_second;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	ret = drm_atomic_check_only(state);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_second;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	new_hvs_state = vc4_hvs_get_new_global_state(state);
+@@ -874,16 +941,35 @@ static void drm_test_vc5_pv_muxing_bugs_stable_fifo(struct kunit *test)
+ 	drm_modeset_acquire_init(&ctx, 0);
+ 
+ 	drm = &vc4->base;
++retry_first:
+ 	state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
+ 
+ 	ret = vc4_mock_atomic_add_output(test, state, VC4_ENCODER_TYPE_HDMI0);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_first;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	ret = vc4_mock_atomic_add_output(test, state, VC4_ENCODER_TYPE_HDMI1);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_first;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	ret = drm_atomic_check_only(state);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_first;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	new_hvs_state = vc4_hvs_get_new_global_state(state);
+@@ -908,13 +994,26 @@ static void drm_test_vc5_pv_muxing_bugs_stable_fifo(struct kunit *test)
+ 	ret = drm_atomic_helper_swap_state(state, false);
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
++retry_second:
+ 	state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
+ 
+ 	ret = vc4_mock_atomic_del_output(test, state, VC4_ENCODER_TYPE_HDMI0);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_second;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	ret = drm_atomic_check_only(state);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_second;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	new_hvs_state = vc4_hvs_get_new_global_state(state);
+@@ -968,25 +1067,50 @@ drm_test_vc5_pv_muxing_bugs_subsequent_crtc_enable_too_many_crtc_state(struct ku
+ 	drm_modeset_acquire_init(&ctx, 0);
+ 
+ 	drm = &vc4->base;
++retry_first:
+ 	state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
+ 
+ 	ret = vc4_mock_atomic_add_output(test, state, VC4_ENCODER_TYPE_HDMI0);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_first;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	ret = drm_atomic_check_only(state);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_first;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+-
+ 	ret = drm_atomic_helper_swap_state(state, false);
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
++retry_second:
+ 	state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
+ 
+ 	ret = vc4_mock_atomic_add_output(test, state, VC4_ENCODER_TYPE_HDMI1);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_second;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	ret = drm_atomic_check_only(state);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_second;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	new_vc4_crtc_state = get_vc4_crtc_state_for_encoder(test, state,
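
Each test now allocates its own atomic state under a local acquire context and replays the sequence whenever a step returns -EDEADLK; this is the standard drm_modeset_lock backoff idiom. Reduced to its skeleton (a sketch of the pattern, not a drop-in helper):

/* Sketch of the drm_modeset_lock retry pattern used throughout the tests. */
struct drm_modeset_acquire_ctx ctx;
struct drm_atomic_state *state;
int ret;

drm_modeset_acquire_init(&ctx, 0);
retry:
state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
/* ... build the state; on lock contention any step may return -EDEADLK ... */
ret = drm_atomic_check_only(state);
if (ret == -EDEADLK) {
	drm_atomic_state_clear(state);		/* drop stale per-object state */
	ret = drm_modeset_backoff(&ctx);	/* wait for the other locker */
	if (!ret)
		goto retry;			/* then replay from the allocation */
}
drm_modeset_drop_locks(&ctx);
drm_modeset_acquire_fini(&ctx);
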
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 37238a12baa58a..176aba27b03d36 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -372,13 +372,13 @@ static void vc4_hdmi_handle_hotplug(struct vc4_hdmi *vc4_hdmi,
+ 	 * the lock for now.
+ 	 */
+ 
++	drm_atomic_helper_connector_hdmi_hotplug(connector, status);
++
+ 	if (status == connector_status_disconnected) {
+ 		cec_phys_addr_invalidate(vc4_hdmi->cec_adap);
+ 		return;
+ 	}
+ 
+-	drm_atomic_helper_connector_hdmi_hotplug(connector, status);
+-
+ 	cec_s_phys_addr(vc4_hdmi->cec_adap,
+ 			connector->display_info.source_physical_address, false);
+ 
+@@ -559,12 +559,6 @@ static int vc4_hdmi_connector_init(struct drm_device *dev,
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = drm_connector_hdmi_audio_init(connector, dev->dev,
+-					    &vc4_hdmi_audio_funcs,
+-					    8, false, -1);
+-	if (ret)
+-		return ret;
+-
+ 	drm_connector_helper_add(connector, &vc4_hdmi_connector_helper_funcs);
+ 
+ 	/*
+@@ -2274,6 +2268,12 @@ static int vc4_hdmi_audio_init(struct vc4_hdmi *vc4_hdmi)
+ 		return ret;
+ 	}
+ 
++	ret = drm_connector_hdmi_audio_init(&vc4_hdmi->connector, dev,
++					    &vc4_hdmi_audio_funcs, 8, false,
++					    -1);
++	if (ret)
++		return ret;
++
+ 	dai_link->cpus		= &vc4_hdmi->audio.cpu;
+ 	dai_link->codecs	= &vc4_hdmi->audio.codec;
+ 	dai_link->platforms	= &vc4_hdmi->audio.platform;
+diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c
+index 12034ec1202990..8c9898b9055d4c 100644
+--- a/drivers/gpu/drm/vkms/vkms_crtc.c
++++ b/drivers/gpu/drm/vkms/vkms_crtc.c
+@@ -194,7 +194,7 @@ static int vkms_crtc_atomic_check(struct drm_crtc *crtc,
+ 		i++;
+ 	}
+ 
+-	vkms_state->active_planes = kcalloc(i, sizeof(plane), GFP_KERNEL);
++	vkms_state->active_planes = kcalloc(i, sizeof(*vkms_state->active_planes), GFP_KERNEL);
+ 	if (!vkms_state->active_planes)
+ 		return -ENOMEM;
+ 	vkms_state->num_active_planes = i;
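
The vkms fix is defensive rather than functional: plane is itself a pointer, so sizeof(plane) happened to match the element size, but sizing the allocation from the destination expression stays correct if the array's element type ever changes. The idiom in plain C, with hypothetical stand-in types:

#include <stdlib.h>

struct plane;	/* opaque, stands in for struct drm_plane */

struct state {
	struct plane **active_planes;
	int num_active_planes;
};

static int alloc_planes(struct state *s, int n)
{
	/* sizeof(*s->active_planes) follows the element type automatically */
	s->active_planes = calloc(n, sizeof(*s->active_planes));
	if (!s->active_planes)
		return -1;
	s->num_active_planes = n;
	return 0;
}

int main(void)
{
	struct state s;
	return alloc_planes(&s, 4) ? 1 : 0;
}
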
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+index 9b5b8c1f063bb7..f30df3dc871fd1 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+@@ -51,11 +51,13 @@ static void vmw_bo_release(struct vmw_bo *vbo)
+ 			mutex_lock(&res->dev_priv->cmdbuf_mutex);
+ 			(void)vmw_resource_reserve(res, false, true);
+ 			vmw_resource_mob_detach(res);
++			if (res->dirty)
++				res->func->dirty_free(res);
+ 			if (res->coherent)
+ 				vmw_bo_dirty_release(res->guest_memory_bo);
+ 			res->guest_memory_bo = NULL;
+ 			res->guest_memory_offset = 0;
+-			vmw_resource_unreserve(res, false, false, false, NULL,
++			vmw_resource_unreserve(res, true, false, false, NULL,
+ 					       0);
+ 			mutex_unlock(&res->dev_priv->cmdbuf_mutex);
+ 		}
+@@ -73,9 +75,9 @@ static void vmw_bo_free(struct ttm_buffer_object *bo)
+ {
+ 	struct vmw_bo *vbo = to_vmw_bo(&bo->base);
+ 
+-	WARN_ON(vbo->dirty);
+ 	WARN_ON(!RB_EMPTY_ROOT(&vbo->res_tree));
+ 	vmw_bo_release(vbo);
++	WARN_ON(vbo->dirty);
+ 	kfree(vbo);
+ }
+ 
+@@ -848,9 +850,9 @@ void vmw_bo_placement_set_default_accelerated(struct vmw_bo *bo)
+ 	vmw_bo_placement_set(bo, domain, domain);
+ }
+ 
+-void vmw_bo_add_detached_resource(struct vmw_bo *vbo, struct vmw_resource *res)
++int vmw_bo_add_detached_resource(struct vmw_bo *vbo, struct vmw_resource *res)
+ {
+-	xa_store(&vbo->detached_resources, (unsigned long)res, res, GFP_KERNEL);
++	return xa_err(xa_store(&vbo->detached_resources, (unsigned long)res, res, GFP_KERNEL));
+ }
+ 
+ void vmw_bo_del_detached_resource(struct vmw_bo *vbo, struct vmw_resource *res)
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
+index 11e330c7c7f52b..51790a11fe6494 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
+@@ -141,7 +141,7 @@ void vmw_bo_move_notify(struct ttm_buffer_object *bo,
+ 			struct ttm_resource *mem);
+ void vmw_bo_swap_notify(struct ttm_buffer_object *bo);
+ 
+-void vmw_bo_add_detached_resource(struct vmw_bo *vbo, struct vmw_resource *res);
++int vmw_bo_add_detached_resource(struct vmw_bo *vbo, struct vmw_resource *res);
+ void vmw_bo_del_detached_resource(struct vmw_bo *vbo, struct vmw_resource *res);
+ struct vmw_surface *vmw_bo_surface(struct vmw_bo *vbo);
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+index 2e52d73eba4840..ea741bc4ac3fc7 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+@@ -4086,6 +4086,23 @@ static int vmw_execbuf_tie_context(struct vmw_private *dev_priv,
+ 	return 0;
+ }
+ 
++/*
++ * DMA fence callback to remove a seqno_waiter
++ */
++struct seqno_waiter_rm_context {
++	struct dma_fence_cb base;
++	struct vmw_private *dev_priv;
++};
++
++static void seqno_waiter_rm_cb(struct dma_fence *f, struct dma_fence_cb *cb)
++{
++	struct seqno_waiter_rm_context *ctx =
++		container_of(cb, struct seqno_waiter_rm_context, base);
++
++	vmw_seqno_waiter_remove(ctx->dev_priv);
++	kfree(ctx);
++}
++
+ int vmw_execbuf_process(struct drm_file *file_priv,
+ 			struct vmw_private *dev_priv,
+ 			void __user *user_commands, void *kernel_commands,
+@@ -4266,6 +4283,15 @@ int vmw_execbuf_process(struct drm_file *file_priv,
+ 		} else {
+ 			/* Link the fence with the FD created earlier */
+ 			fd_install(out_fence_fd, sync_file->file);
++			struct seqno_waiter_rm_context *ctx =
++				kmalloc(sizeof(*ctx), GFP_KERNEL);
++			ctx->dev_priv = dev_priv;
++			vmw_seqno_waiter_add(dev_priv);
++			if (dma_fence_add_callback(&fence->base, &ctx->base,
++						   seqno_waiter_rm_cb) < 0) {
++				vmw_seqno_waiter_remove(dev_priv);
++				kfree(ctx);
++			}
+ 		}
+ 	}
+ 
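
The execbuf hunk keeps a seqno waiter registered until the exported fence signals, using a heap-allocated callback context that the callback itself frees. Note that the hunk stores into ctx before checking the kmalloc() result; the sketch below adds that check and otherwise just restates the ownership pattern:

/* Sketch of the fence-callback ownership pattern (not the driver's exact code). */
static void arm_seqno_waiter(struct vmw_private *dev_priv, struct dma_fence *fence)
{
	struct seqno_waiter_rm_context *ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);

	if (!ctx)	/* added here; the hunk above omits this check */
		return;
	ctx->dev_priv = dev_priv;
	vmw_seqno_waiter_add(dev_priv);
	/* A negative return means the fence already signaled and the callback
	 * will never run, so undo both steps ourselves; otherwise the callback
	 * owns ctx and frees it after removing the waiter. */
	if (dma_fence_add_callback(fence, &ctx->base, seqno_waiter_rm_cb) < 0) {
		vmw_seqno_waiter_remove(dev_priv);
		kfree(ctx);
	}
}
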
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+index a73af8a355fbf5..c4d5fe5f330f98 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+@@ -273,7 +273,7 @@ int vmw_user_resource_lookup_handle(struct vmw_private *dev_priv,
+ 		goto out_bad_resource;
+ 
+ 	res = converter->base_obj_to_res(base);
+-	kref_get(&res->kref);
++	vmw_resource_reference(res);
+ 
+ 	*p_res = res;
+ 	ret = 0;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+index 5721c74da3e0b9..d7a8070330ba54 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+@@ -658,7 +658,7 @@ static void vmw_user_surface_free(struct vmw_resource *res)
+ 	struct vmw_user_surface *user_srf =
+ 	    container_of(srf, struct vmw_user_surface, srf);
+ 
+-	WARN_ON_ONCE(res->dirty);
++	WARN_ON(res->dirty);
+ 	if (user_srf->master)
+ 		drm_master_put(&user_srf->master);
+ 	kfree(srf->offsets);
+@@ -689,8 +689,7 @@ static void vmw_user_surface_base_release(struct ttm_base_object **p_base)
+ 	 * Dumb buffers own the resource and they'll unref the
+ 	 * resource themselves
+ 	 */
+-	if (res && res->guest_memory_bo && res->guest_memory_bo->is_dumb)
+-		return;
++	WARN_ON(res && res->guest_memory_bo && res->guest_memory_bo->is_dumb);
+ 
+ 	vmw_resource_unreference(&res);
+ }
+@@ -871,7 +870,12 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
+ 			vmw_resource_unreference(&res);
+ 			goto out_unlock;
+ 		}
+-		vmw_bo_add_detached_resource(res->guest_memory_bo, res);
++
++		ret = vmw_bo_add_detached_resource(res->guest_memory_bo, res);
++		if (unlikely(ret != 0)) {
++			vmw_resource_unreference(&res);
++			goto out_unlock;
++		}
+ 	}
+ 
+ 	tmp = vmw_resource_reference(&srf->res);
+@@ -1670,6 +1674,14 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
+ 
+ 	}
+ 
++	if (res->guest_memory_bo) {
++		ret = vmw_bo_add_detached_resource(res->guest_memory_bo, res);
++		if (unlikely(ret != 0)) {
++			vmw_resource_unreference(&res);
++			goto out_unlock;
++		}
++	}
++
+ 	tmp = vmw_resource_reference(res);
+ 	ret = ttm_prime_object_init(tfile, res->guest_memory_size, &user_srf->prime,
+ 				    VMW_RES_SURFACE,
+@@ -1684,7 +1696,6 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
+ 	rep->handle      = user_srf->prime.base.handle;
+ 	rep->backup_size = res->guest_memory_size;
+ 	if (res->guest_memory_bo) {
+-		vmw_bo_add_detached_resource(res->guest_memory_bo, res);
+ 		rep->buffer_map_handle =
+ 			drm_vma_node_offset_addr(&res->guest_memory_bo->tbo.base.vma_node);
+ 		rep->buffer_size = res->guest_memory_bo->tbo.base.size;
+@@ -2358,12 +2369,19 @@ int vmw_dumb_create(struct drm_file *file_priv,
+ 	vbo = res->guest_memory_bo;
+ 	vbo->is_dumb = true;
+ 	vbo->dumb_surface = vmw_res_to_srf(res);
+-
++	drm_gem_object_put(&vbo->tbo.base);
++	/*
++	 * Unset the user surface dtor since this is not actually exposed
++	 * to userspace. The surface is owned via the dumb buffer's GEM handle.
++	 */
++	struct vmw_user_surface *usurf = container_of(vbo->dumb_surface,
++						struct vmw_user_surface, srf);
++	usurf->prime.base.refcount_release = NULL;
+ err:
+ 	if (res)
+ 		vmw_resource_unreference(&res);
+-	if (ret)
+-		ttm_ref_object_base_unref(tfile, arg.rep.handle);
++
++	ttm_ref_object_base_unref(tfile, arg.rep.handle);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/xe/Kconfig b/drivers/gpu/drm/xe/Kconfig
+index 5c2f459a2925a4..9a46aafcb33bae 100644
+--- a/drivers/gpu/drm/xe/Kconfig
++++ b/drivers/gpu/drm/xe/Kconfig
+@@ -2,6 +2,8 @@
+ config DRM_XE
+ 	tristate "Intel Xe Graphics"
+ 	depends on DRM && PCI && MMU && (m || (y && KUNIT=y))
++	depends on INTEL_VSEC || !INTEL_VSEC
++	depends on X86_PLATFORM_DEVICES || !(X86 && ACPI)
+ 	select INTERVAL_TREE
+ 	# we need shmfs for the swappable backing store, and in particular
+ 	# the shmem_readpage() which depends upon tmpfs
+@@ -27,7 +29,6 @@ config DRM_XE
+ 	select BACKLIGHT_CLASS_DEVICE if ACPI
+ 	select INPUT if ACPI
+ 	select ACPI_VIDEO if X86 && ACPI
+-	select X86_PLATFORM_DEVICES if X86 && ACPI
+ 	select ACPI_WMI if X86 && ACPI
+ 	select SYNC_FILE
+ 	select IOSF_MBI
+diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
+index 64f9c936eea063..5922302c3e00cc 100644
+--- a/drivers/gpu/drm/xe/xe_bo.c
++++ b/drivers/gpu/drm/xe/xe_bo.c
+@@ -816,21 +816,6 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
+ 		goto out;
+ 	}
+ 
+-	/* Reject BO eviction if BO is bound to current VM. */
+-	if (evict && ctx->resv) {
+-		struct drm_gpuvm_bo *vm_bo;
+-
+-		drm_gem_for_each_gpuvm_bo(vm_bo, &bo->ttm.base) {
+-			struct xe_vm *vm = gpuvm_to_vm(vm_bo->vm);
+-
+-			if (xe_vm_resv(vm) == ctx->resv &&
+-			    xe_vm_in_preempt_fence_mode(vm)) {
+-				ret = -EBUSY;
+-				goto out;
+-			}
+-		}
+-	}
+-
+ 	/*
+ 	 * Failed multi-hop where the old_mem is still marked as
+ 	 * TTM_PL_FLAG_TEMPORARY, should just be a dummy move.
+@@ -1023,6 +1008,25 @@ static long xe_bo_shrink_purge(struct ttm_operation_ctx *ctx,
+ 	return lret;
+ }
+ 
++static bool
++xe_bo_eviction_valuable(struct ttm_buffer_object *bo, const struct ttm_place *place)
++{
++	struct drm_gpuvm_bo *vm_bo;
++
++	if (!ttm_bo_eviction_valuable(bo, place))
++		return false;
++
++	if (!xe_bo_is_xe_bo(bo))
++		return true;
++
++	drm_gem_for_each_gpuvm_bo(vm_bo, &bo->base) {
++		if (xe_vm_is_validating(gpuvm_to_vm(vm_bo->vm)))
++			return false;
++	}
++
++	return true;
++}
++
+ /**
+  * xe_bo_shrink() - Try to shrink an xe bo.
+  * @ctx: The struct ttm_operation_ctx used for shrinking.
+@@ -1057,7 +1061,7 @@ long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
+ 	    (flags.purge && !xe_tt->purgeable))
+ 		return -EBUSY;
+ 
+-	if (!ttm_bo_eviction_valuable(bo, &place))
++	if (!xe_bo_eviction_valuable(bo, &place))
+ 		return -EBUSY;
+ 
+ 	if (!xe_bo_is_xe_bo(bo) || !xe_bo_get_unless_zero(xe_bo))
+@@ -1418,7 +1422,7 @@ const struct ttm_device_funcs xe_ttm_funcs = {
+ 	.io_mem_pfn = xe_ttm_io_mem_pfn,
+ 	.access_memory = xe_ttm_access_memory,
+ 	.release_notify = xe_ttm_bo_release_notify,
+-	.eviction_valuable = ttm_bo_eviction_valuable,
++	.eviction_valuable = xe_bo_eviction_valuable,
+ 	.delete_mem_notify = xe_ttm_bo_delete_mem_notify,
+ 	.swap_notify = xe_ttm_bo_swap_notify,
+ };
+@@ -2260,6 +2264,8 @@ int xe_bo_validate(struct xe_bo *bo, struct xe_vm *vm, bool allow_res_evict)
+ 		.no_wait_gpu = false,
+ 		.gfp_retry_mayfail = true,
+ 	};
++	struct pin_cookie cookie;
++	int ret;
+ 
+ 	if (vm) {
+ 		lockdep_assert_held(&vm->lock);
+@@ -2269,8 +2275,12 @@ int xe_bo_validate(struct xe_bo *bo, struct xe_vm *vm, bool allow_res_evict)
+ 		ctx.resv = xe_vm_resv(vm);
+ 	}
+ 
++	cookie = xe_vm_set_validating(vm, allow_res_evict);
+ 	trace_xe_bo_validate(bo);
+-	return ttm_bo_validate(&bo->ttm, &bo->placement, &ctx);
++	ret = ttm_bo_validate(&bo->ttm, &bo->placement, &ctx);
++	xe_vm_clear_validating(vm, allow_res_evict, cookie);
++
++	return ret;
+ }
+ 
+ bool xe_bo_is_xe_bo(struct ttm_buffer_object *bo)
+@@ -2386,7 +2396,7 @@ typedef int (*xe_gem_create_set_property_fn)(struct xe_device *xe,
+ 					     u64 value);
+ 
+ static const xe_gem_create_set_property_fn gem_create_set_property_funcs[] = {
+-	[DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY] = gem_create_set_pxp_type,
++	[DRM_XE_GEM_CREATE_SET_PROPERTY_PXP_TYPE] = gem_create_set_pxp_type,
+ };
+ 
+ static int gem_create_user_ext_set_property(struct xe_device *xe,
+diff --git a/drivers/gpu/drm/xe/xe_gt_freq.c b/drivers/gpu/drm/xe/xe_gt_freq.c
+index 604bdc7c817360..552ac92496a408 100644
+--- a/drivers/gpu/drm/xe/xe_gt_freq.c
++++ b/drivers/gpu/drm/xe/xe_gt_freq.c
+@@ -32,13 +32,18 @@
+  * Xe's Freq provides a sysfs API for frequency management:
+  *
+  * device/tile#/gt#/freq0/<item>_freq *read-only* files:
++ *
+  * - act_freq: The actual resolved frequency decided by PCODE.
+  * - cur_freq: The current one requested by GuC PC to the PCODE.
+  * - rpn_freq: The Render Performance (RP) N level, which is the minimal one.
++ * - rpa_freq: The Render Performance (RP) A level, which is the achievable one.
++ *   Calculated by PCODE at runtime based on multiple running conditions.
+  * - rpe_freq: The Render Performance (RP) E level, which is the efficient one.
++ *   Calculated by PCODE at runtime based on multiple running conditions.
+  * - rp0_freq: The Render Performance (RP) 0 level, which is the maximum one.
+  *
+  * device/tile#/gt#/freq0/<item>_freq *read-write* files:
++ *
+  * - min_freq: Min frequency request.
+  * - max_freq: Max frequency request.
+  *             If max <= min, then freq_min becomes a fixed frequency request.
+diff --git a/drivers/gpu/drm/xe/xe_guc_debugfs.c b/drivers/gpu/drm/xe/xe_guc_debugfs.c
+index c569ff456e7416..0b102ab46c4df2 100644
+--- a/drivers/gpu/drm/xe/xe_guc_debugfs.c
++++ b/drivers/gpu/drm/xe/xe_guc_debugfs.c
+@@ -17,101 +17,130 @@
+ #include "xe_macros.h"
+ #include "xe_pm.h"
+ 
+-static struct xe_guc *node_to_guc(struct drm_info_node *node)
+-{
+-	return node->info_ent->data;
+-}
+-
+-static int guc_info(struct seq_file *m, void *data)
++/*
++ * guc_debugfs_show - A show callback for struct drm_info_list
++ * @m: the &seq_file
++ * @data: data used by the drm debugfs helpers
++ *
++ * This callback can be used in struct drm_info_list to describe debugfs
++ * files that are &xe_guc specific, in a similar way to how &xe_gt specific
++ * files are handled using &xe_gt_debugfs_simple_show.
++ *
++ * It is assumed that those debugfs files will be created on a directory entry
++ * whose grandparent struct dentry's d_inode->i_private points to &xe_gt.
++ *
++ *      /sys/kernel/debug/dri/0/
++ *      ├── gt0			# dent->d_parent->d_parent (d_inode->i_private == gt)
++ *      │   ├── uc		# dent->d_parent
++ *      │   │   ├── guc_info	# dent
++ *      │   │   ├── guc_...
++ *
++ * This function assumes that &m->private will be set to the &struct
++ * drm_info_node corresponding to the instance of the info on a given &struct
++ * drm_minor (see struct drm_info_list.show for details).
++ *
++ * This function also assumes that struct drm_info_list.data will point to the
++ * function that will actually print the file content::
++ *
++ *    int (*print)(struct xe_guc *, struct drm_printer *)
++ *
++ * Example::
++ *
++ *    int foo(struct xe_guc *guc, struct drm_printer *p)
++ *    {
++ *        drm_printf(p, "enabled %d\n", guc->submission_state.enabled);
++ *        return 0;
++ *    }
++ *
++ *    static const struct drm_info_list bar[] = {
++ *        { .name = "foo", .show = guc_debugfs_show, .data = foo },
++ *    };
++ *
++ *    parent = debugfs_create_dir("uc", gtdir);
++ *    drm_debugfs_create_files(bar, ARRAY_SIZE(bar), parent, minor);
++ *
++ * Return: 0 on success or a negative error code on failure.
++ */
++static int guc_debugfs_show(struct seq_file *m, void *data)
+ {
+-	struct xe_guc *guc = node_to_guc(m->private);
+-	struct xe_device *xe = guc_to_xe(guc);
+ 	struct drm_printer p = drm_seq_file_printer(m);
++	struct drm_info_node *node = m->private;
++	struct dentry *parent = node->dent->d_parent;
++	struct dentry *grandparent = parent->d_parent;
++	struct xe_gt *gt = grandparent->d_inode->i_private;
++	struct xe_device *xe = gt_to_xe(gt);
++	int (*print)(struct xe_guc *, struct drm_printer *) = node->info_ent->data;
++	int ret;
+ 
+ 	xe_pm_runtime_get(xe);
+-	xe_guc_print_info(guc, &p);
++	ret = print(&gt->uc.guc, &p);
+ 	xe_pm_runtime_put(xe);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+-static int guc_log(struct seq_file *m, void *data)
++static int guc_log(struct xe_guc *guc, struct drm_printer *p)
+ {
+-	struct xe_guc *guc = node_to_guc(m->private);
+-	struct xe_device *xe = guc_to_xe(guc);
+-	struct drm_printer p = drm_seq_file_printer(m);
+-
+-	xe_pm_runtime_get(xe);
+-	xe_guc_log_print(&guc->log, &p);
+-	xe_pm_runtime_put(xe);
+-
++	xe_guc_log_print(&guc->log, p);
+ 	return 0;
+ }
+ 
+-static int guc_log_dmesg(struct seq_file *m, void *data)
++static int guc_log_dmesg(struct xe_guc *guc, struct drm_printer *p)
+ {
+-	struct xe_guc *guc = node_to_guc(m->private);
+-	struct xe_device *xe = guc_to_xe(guc);
+-
+-	xe_pm_runtime_get(xe);
+ 	xe_guc_log_print_dmesg(&guc->log);
+-	xe_pm_runtime_put(xe);
+-
+ 	return 0;
+ }
+ 
+-static int guc_ctb(struct seq_file *m, void *data)
++static int guc_ctb(struct xe_guc *guc, struct drm_printer *p)
+ {
+-	struct xe_guc *guc = node_to_guc(m->private);
+-	struct xe_device *xe = guc_to_xe(guc);
+-	struct drm_printer p = drm_seq_file_printer(m);
+-
+-	xe_pm_runtime_get(xe);
+-	xe_guc_ct_print(&guc->ct, &p, true);
+-	xe_pm_runtime_put(xe);
+-
++	xe_guc_ct_print(&guc->ct, p, true);
+ 	return 0;
+ }
+ 
+-static int guc_pc(struct seq_file *m, void *data)
++static int guc_pc(struct xe_guc *guc, struct drm_printer *p)
+ {
+-	struct xe_guc *guc = node_to_guc(m->private);
+-	struct xe_device *xe = guc_to_xe(guc);
+-	struct drm_printer p = drm_seq_file_printer(m);
+-
+-	xe_pm_runtime_get(xe);
+-	xe_guc_pc_print(&guc->pc, &p);
+-	xe_pm_runtime_put(xe);
+-
++	xe_guc_pc_print(&guc->pc, p);
+ 	return 0;
+ }
+ 
+-static const struct drm_info_list debugfs_list[] = {
+-	{"guc_info", guc_info, 0},
+-	{"guc_log", guc_log, 0},
+-	{"guc_log_dmesg", guc_log_dmesg, 0},
+-	{"guc_ctb", guc_ctb, 0},
+-	{"guc_pc", guc_pc, 0},
++/*
++ * only for GuC debugfs files which can be safely used on the VF as well:
++ * - without access to the GuC privileged registers
++ * - without access to the PF specific GuC objects
++ */
++static const struct drm_info_list vf_safe_debugfs_list[] = {
++	{ "guc_info", .show = guc_debugfs_show, .data = xe_guc_print_info },
++	{ "guc_ctb", .show = guc_debugfs_show, .data = guc_ctb },
++};
++
++/* For GuC debugfs files that require the SLPC support */
++static const struct drm_info_list slpc_debugfs_list[] = {
++	{ "guc_pc", .show = guc_debugfs_show, .data = guc_pc },
++};
++
++/* everything else should be added here */
++static const struct drm_info_list pf_only_debugfs_list[] = {
++	{ "guc_log", .show = guc_debugfs_show, .data = guc_log },
++	{ "guc_log_dmesg", .show = guc_debugfs_show, .data = guc_log_dmesg },
+ };
+ 
+ void xe_guc_debugfs_register(struct xe_guc *guc, struct dentry *parent)
+ {
+-	struct drm_minor *minor = guc_to_xe(guc)->drm.primary;
+-	struct drm_info_list *local;
+-	int i;
+-
+-#define DEBUGFS_SIZE	(ARRAY_SIZE(debugfs_list) * sizeof(struct drm_info_list))
+-	local = drmm_kmalloc(&guc_to_xe(guc)->drm, DEBUGFS_SIZE, GFP_KERNEL);
+-	if (!local)
+-		return;
++	struct xe_device *xe = guc_to_xe(guc);
++	struct drm_minor *minor = xe->drm.primary;
+ 
+-	memcpy(local, debugfs_list, DEBUGFS_SIZE);
+-#undef DEBUGFS_SIZE
++	drm_debugfs_create_files(vf_safe_debugfs_list,
++				 ARRAY_SIZE(vf_safe_debugfs_list),
++				 parent, minor);
+ 
+-	for (i = 0; i < ARRAY_SIZE(debugfs_list); ++i)
+-		local[i].data = guc;
++	if (!IS_SRIOV_VF(xe)) {
++		drm_debugfs_create_files(pf_only_debugfs_list,
++					 ARRAY_SIZE(pf_only_debugfs_list),
++					 parent, minor);
+ 
+-	drm_debugfs_create_files(local,
+-				 ARRAY_SIZE(debugfs_list),
+-				 parent, minor);
++		if (!xe->info.skip_guc_pc)
++			drm_debugfs_create_files(slpc_debugfs_list,
++						 ARRAY_SIZE(slpc_debugfs_list),
++						 parent, minor);
++	}
+ }
+diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
+index 03bfba696b3789..16e20b5ad325f9 100644
+--- a/drivers/gpu/drm/xe/xe_lrc.c
++++ b/drivers/gpu/drm/xe/xe_lrc.c
+@@ -941,11 +941,18 @@ static void xe_lrc_finish(struct xe_lrc *lrc)
+  * store it in the PPHSWP.
+  */
+ #define CONTEXT_ACTIVE 1ULL
+-static void xe_lrc_setup_utilization(struct xe_lrc *lrc)
++static int xe_lrc_setup_utilization(struct xe_lrc *lrc)
+ {
+-	u32 *cmd;
++	u32 *cmd, *buf = NULL;
+ 
+-	cmd = lrc->bb_per_ctx_bo->vmap.vaddr;
++	if (lrc->bb_per_ctx_bo->vmap.is_iomem) {
++		buf = kmalloc(lrc->bb_per_ctx_bo->size, GFP_KERNEL);
++		if (!buf)
++			return -ENOMEM;
++		cmd = buf;
++	} else {
++		cmd = lrc->bb_per_ctx_bo->vmap.vaddr;
++	}
+ 
+ 	*cmd++ = MI_STORE_REGISTER_MEM | MI_SRM_USE_GGTT | MI_SRM_ADD_CS_OFFSET;
+ 	*cmd++ = ENGINE_ID(0).addr;
+@@ -966,9 +973,16 @@ static void xe_lrc_setup_utilization(struct xe_lrc *lrc)
+ 
+ 	*cmd++ = MI_BATCH_BUFFER_END;
+ 
++	if (buf) {
++		xe_map_memcpy_to(gt_to_xe(lrc->gt), &lrc->bb_per_ctx_bo->vmap, 0,
++				 buf, (cmd - buf) * sizeof(*cmd));
++		kfree(buf);
++	}
++
+ 	xe_lrc_write_ctx_reg(lrc, CTX_BB_PER_CTX_PTR,
+ 			     xe_bo_ggtt_addr(lrc->bb_per_ctx_bo) | 1);
+ 
++	return 0;
+ }
+ 
+ #define PVC_CTX_ASID		(0x2e + 1)
+@@ -1123,7 +1137,9 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
+ 	map = __xe_lrc_start_seqno_map(lrc);
+ 	xe_map_write32(lrc_to_xe(lrc), &map, lrc->fence_ctx.next_seqno - 1);
+ 
+-	xe_lrc_setup_utilization(lrc);
++	err = xe_lrc_setup_utilization(lrc);
++	if (err)
++		goto err_lrc_finish;
+ 
+ 	return 0;
+ 
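
xe_lrc_setup_utilization() used to write the batch through vmap.vaddr unconditionally, which is only valid for system-memory objects; an iomem mapping has to be filled via a staging buffer and xe_map_memcpy_to(). The bounce-buffer shape of that fix, as a sketch:

/* Sketch: stage writes for iomem-backed maps, write directly otherwise. */
u32 *cmd, *buf = NULL;

if (bo->vmap.is_iomem) {
	buf = kmalloc(bo->size, GFP_KERNEL);	/* CPU-side staging buffer */
	if (!buf)
		return -ENOMEM;
	cmd = buf;
} else {
	cmd = bo->vmap.vaddr;	/* plain kernel pointer, write in place */
}

/* ... emit the batch words through cmd++ ... */

if (buf) {
	/* one copy into the io mapping, then drop the staging copy */
	xe_map_memcpy_to(xe, &bo->vmap, 0, buf, (cmd - buf) * sizeof(*cmd));
	kfree(buf);
}
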
+diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
+index f4d108dc49b1b1..30f7ce06c89690 100644
+--- a/drivers/gpu/drm/xe/xe_pci.c
++++ b/drivers/gpu/drm/xe/xe_pci.c
+@@ -922,6 +922,7 @@ static int xe_pci_suspend(struct device *dev)
+ 
+ 	pci_save_state(pdev);
+ 	pci_disable_device(pdev);
++	pci_set_power_state(pdev, PCI_D3cold);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
+index 454ea7dc08ac83..b5bc15f436fa2d 100644
+--- a/drivers/gpu/drm/xe/xe_pxp.c
++++ b/drivers/gpu/drm/xe/xe_pxp.c
+@@ -541,10 +541,14 @@ int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q)
+ 	 */
+ 	xe_pm_runtime_get(pxp->xe);
+ 
+-	if (!pxp_prerequisites_done(pxp)) {
+-		ret = -EBUSY;
++	/* get_readiness_status() returns 0 for in-progress and 1 for done */
++	ret = xe_pxp_get_readiness_status(pxp);
++	if (ret <= 0) {
++		if (!ret)
++			ret = -EBUSY;
+ 		goto out;
+ 	}
++	ret = 0;
+ 
+ wait_for_idle:
+ 	/*
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index 367c84b90e9ef7..737172013a8f9e 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -1681,10 +1681,16 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
+ 	if (flags & XE_VM_FLAG_LR_MODE)
+ 		xe_pm_runtime_get_noresume(xe);
+ 
++	if (flags & XE_VM_FLAG_FAULT_MODE) {
++		err = xe_svm_init(vm);
++		if (err)
++			goto err_no_resv;
++	}
++
+ 	vm_resv_obj = drm_gpuvm_resv_object_alloc(&xe->drm);
+ 	if (!vm_resv_obj) {
+ 		err = -ENOMEM;
+-		goto err_no_resv;
++		goto err_svm_fini;
+ 	}
+ 
+ 	drm_gpuvm_init(&vm->gpuvm, "Xe VM", DRM_GPUVM_RESV_PROTECTED, &xe->drm,
+@@ -1757,12 +1763,6 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
+ 		}
+ 	}
+ 
+-	if (flags & XE_VM_FLAG_FAULT_MODE) {
+-		err = xe_svm_init(vm);
+-		if (err)
+-			goto err_close;
+-	}
+-
+ 	if (number_tiles > 1)
+ 		vm->composite_fence_ctx = dma_fence_context_alloc(1);
+ 
+@@ -1776,6 +1776,11 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
+ 	xe_vm_close_and_put(vm);
+ 	return ERR_PTR(err);
+ 
++err_svm_fini:
++	if (flags & XE_VM_FLAG_FAULT_MODE) {
++		vm->size = 0; /* close the vm */
++		xe_svm_fini(vm);
++	}
+ err_no_resv:
+ 	mutex_destroy(&vm->snap_mutex);
+ 	for_each_tile(tile, xe, id)
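
Moving xe_svm_init() ahead of the reservation-object allocation forces a matching change in the unwind order: each failure label must undo exactly the steps that succeeded before it, in reverse. A generic standalone sketch of that goto-ladder convention, with illustrative names and malloc() standing in for the real allocators:

#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

struct ctx { void *svm; void *resv; };

static int svm_init(struct ctx *c) { c->svm = malloc(1); return c->svm ? 0 : -ENOMEM; }
static void svm_fini(struct ctx *c) { free(c->svm); }

static int setup(struct ctx *c, bool fault_mode)
{
	int err;

	if (fault_mode) {
		err = svm_init(c);
		if (err)
			goto err_none;	/* nothing to undo yet */
	}

	c->resv = malloc(1);	/* stands in for drm_gpuvm_resv_object_alloc() */
	if (!c->resv) {
		err = -ENOMEM;
		goto err_svm_fini;	/* undo only what already succeeded */
	}
	return 0;

err_svm_fini:
	if (fault_mode)
		svm_fini(c);
err_none:
	return err;
}

int main(void)
{
	struct ctx c = { 0 };
	return setup(&c, true) ? 1 : 0;
}
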
+diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
+index 0ef811fc2bdeee..494af6bdc646b4 100644
+--- a/drivers/gpu/drm/xe/xe_vm.h
++++ b/drivers/gpu/drm/xe/xe_vm.h
+@@ -301,6 +301,75 @@ void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap);
+ void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p);
+ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap);
+ 
++/**
++ * xe_vm_set_validating() - Register this task as currently making bos resident
++ * @vm: Pointer to the vm or NULL.
++ * @allow_res_evict: Allow eviction of buffer objects bound to @vm when
++ * validating.
++ *
++ * Register this task as currently making bos resident for the vm. Intended
++ * to avoid eviction by the same task of shared bos bound to the vm.
++ * Call with the vm's resv lock held.
++ *
++ * Return: A pin cookie that should be used for xe_vm_clear_validating().
++ */
++static inline struct pin_cookie xe_vm_set_validating(struct xe_vm *vm,
++						     bool allow_res_evict)
++{
++	struct pin_cookie cookie = {};
++
++	if (vm && !allow_res_evict) {
++		xe_vm_assert_held(vm);
++		cookie = lockdep_pin_lock(&xe_vm_resv(vm)->lock.base);
++		/* Pairs with READ_ONCE in xe_vm_is_validating() */
++		WRITE_ONCE(vm->validating, current);
++	}
++
++	return cookie;
++}
++
++/**
++ * xe_vm_clear_validating() - Unregister this task as currently making bos resident
++ * @vm: Pointer to the vm or NULL
++ * @allow_res_evict: Eviction from @vm was allowed. Must be set to the same
++ * value as for xe_vm_set_validating().
++ * @cookie: Cookie obtained from xe_vm_set_validating().
++ *
++ * Unregister this task as currently making bos resident for the vm. Intended
++ * to avoid eviction by the same task of shared bos bound to the vm.
++ * Call with the vm's resv lock held.
++ */
++static inline void xe_vm_clear_validating(struct xe_vm *vm, bool allow_res_evict,
++					  struct pin_cookie cookie)
++{
++	if (vm && !allow_res_evict) {
++		lockdep_unpin_lock(&xe_vm_resv(vm)->lock.base, cookie);
++		/* Pairs with READ_ONCE in xe_vm_is_validating() */
++		WRITE_ONCE(vm->validating, NULL);
++	}
++}
++
++/**
++ * xe_vm_is_validating() - Whether bos bound to the vm are currently being made resident
++ * by the current task.
++ * @vm: Pointer to the vm.
++ *
++ * If this function returns %true, we should be in a vm resv locked region, since
++ * the current process is the same task that called xe_vm_set_validating().
++ * The function asserts that that's indeed the case.
++ *
++ * Return: %true if the task is currently making bos resident, %false otherwise.
++ */
++static inline bool xe_vm_is_validating(struct xe_vm *vm)
++{
++	/* Pairs with WRITE_ONCE in xe_vm_set_validating() and xe_vm_clear_validating() */
++	if (READ_ONCE(vm->validating) == current) {
++		xe_vm_assert_held(vm);
++		return true;
++	}
++	return false;
++}
++
+ #if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
+ void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma);
+ #else
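
Together with the xe_bo.c hunks earlier, these helpers form a small protocol: the validating task publishes itself in vm->validating under the vm resv lock (pinned via a lockdep cookie so unbalanced unlocks are caught), and xe_bo_eviction_valuable() refuses to evict bos of a vm the current task is mid-validation on. Both sides, condensed into a sketch:

/* Validate side (mirrors xe_bo_validate() in the hunk above). */
struct pin_cookie cookie;
int ret;

cookie = xe_vm_set_validating(vm, allow_res_evict);	/* publish current task */
ret = ttm_bo_validate(&bo->ttm, &bo->placement, &ctx);
xe_vm_clear_validating(vm, allow_res_evict, cookie);	/* always unpublish */

/* Eviction side (mirrors xe_bo_eviction_valuable()): skip bos whose vm the
 * current task is validating, instead of failing the validate with -EBUSY. */
drm_gem_for_each_gpuvm_bo(vm_bo, &bo->base)
	if (xe_vm_is_validating(gpuvm_to_vm(vm_bo->vm)))
		return false;
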
+diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
+index 84fa41b9fa20f3..0882674ce1cbab 100644
+--- a/drivers/gpu/drm/xe/xe_vm_types.h
++++ b/drivers/gpu/drm/xe/xe_vm_types.h
+@@ -310,6 +310,14 @@ struct xe_vm {
+ 	 * protected by the vm resv.
+ 	 */
+ 	u64 tlb_flush_seqno;
++	/**
++	 * @validating: The task that is currently making bos resident for this vm.
++	 * Protected by the VM's resv for writing. Opportunistic reading can be done
++	 * using READ_ONCE. Note: This is a workaround for the
++	 * TTM eviction_valuable() callback not being passed a struct
++	 * ttm_operation_ctx. Future work might want to address this.
++	 */
++	struct task_struct *validating;
+ 	/** @batch_invalidate_tlb: Always invalidate TLB before batch start */
+ 	bool batch_invalidate_tlb;
+ 	/** @xef: XE file handle for tracking this VM's drm client */
+diff --git a/drivers/gpu/drm/xlnx/Kconfig b/drivers/gpu/drm/xlnx/Kconfig
+index dbecca9bdd544f..cfabf5e2a0bb0a 100644
+--- a/drivers/gpu/drm/xlnx/Kconfig
++++ b/drivers/gpu/drm/xlnx/Kconfig
+@@ -22,6 +22,7 @@ config DRM_ZYNQMP_DPSUB_AUDIO
+ 	bool "ZynqMP DisplayPort Audio Support"
+ 	depends on DRM_ZYNQMP_DPSUB
+ 	depends on SND && SND_SOC
++	depends on SND_SOC=y || DRM_ZYNQMP_DPSUB=m
+ 	select SND_SOC_GENERIC_DMAENGINE_PCM
+ 	help
+ 	  Choose this option to enable DisplayPort audio support in the ZynqMP
+diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig
+index a503252702b7b4..43859fc757470c 100644
+--- a/drivers/hid/Kconfig
++++ b/drivers/hid/Kconfig
+@@ -151,6 +151,7 @@ config HID_APPLEIR
+ config HID_APPLETB_BL
+ 	tristate "Apple Touch Bar Backlight"
+ 	depends on BACKLIGHT_CLASS_DEVICE
++	depends on X86 || COMPILE_TEST
+ 	help
+ 	  Say Y here if you want support for the backlight of Touch Bars on x86
+ 	  MacBook Pros.
+@@ -163,6 +164,7 @@ config HID_APPLETB_KBD
+ 	depends on USB_HID
+ 	depends on BACKLIGHT_CLASS_DEVICE
+ 	depends on INPUT
++	depends on X86 || COMPILE_TEST
+ 	select INPUT_SPARSEKMAP
+ 	select HID_APPLETB_BL
+ 	help
+diff --git a/drivers/hid/hid-hyperv.c b/drivers/hid/hid-hyperv.c
+index 0fb210e40a4127..9eafff0b6ea4c3 100644
+--- a/drivers/hid/hid-hyperv.c
++++ b/drivers/hid/hid-hyperv.c
+@@ -192,7 +192,7 @@ static void mousevsc_on_receive_device_info(struct mousevsc_dev *input_device,
+ 		goto cleanup;
+ 
+ 	input_device->report_desc_size = le16_to_cpu(
+-					desc->desc[0].wDescriptorLength);
++					desc->rpt_desc.wDescriptorLength);
+ 	if (input_device->report_desc_size == 0) {
+ 		input_device->dev_info_status = -EINVAL;
+ 		goto cleanup;
+@@ -210,7 +210,7 @@ static void mousevsc_on_receive_device_info(struct mousevsc_dev *input_device,
+ 
+ 	memcpy(input_device->report_desc,
+ 	       ((unsigned char *)desc) + desc->bLength,
+-	       le16_to_cpu(desc->desc[0].wDescriptorLength));
++	       le16_to_cpu(desc->rpt_desc.wDescriptorLength));
+ 
+ 	/* Send the ack */
+ 	memset(&ack, 0, sizeof(struct mousevsc_prt_msg));
+diff --git a/drivers/hid/intel-thc-hid/intel-quicki2c/pci-quicki2c.c b/drivers/hid/intel-thc-hid/intel-quicki2c/pci-quicki2c.c
+index fa51155ebe3937..8a8c4a46f92700 100644
+--- a/drivers/hid/intel-thc-hid/intel-quicki2c/pci-quicki2c.c
++++ b/drivers/hid/intel-thc-hid/intel-quicki2c/pci-quicki2c.c
+@@ -82,15 +82,10 @@ static int quicki2c_acpi_get_dsd_property(struct acpi_device *adev, acpi_string
+ {
+ 	acpi_handle handle = acpi_device_handle(adev);
+ 	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
+-	union acpi_object obj = { .type = type };
+-	struct acpi_object_list arg_list = {
+-		.count = 1,
+-		.pointer = &obj,
+-	};
+ 	union acpi_object *ret_obj;
+ 	acpi_status status;
+ 
+-	status = acpi_evaluate_object(handle, dsd_method_name, &arg_list, &buffer);
++	status = acpi_evaluate_object(handle, dsd_method_name, NULL, &buffer);
+ 	if (ACPI_FAILURE(status)) {
+ 		acpi_handle_err(handle,
+ 				"Can't evaluate %s method: %d\n", dsd_method_name, status);
+diff --git a/drivers/hid/usbhid/hid-core.c b/drivers/hid/usbhid/hid-core.c
+index 7d9297fad90ea7..d4cbecc668ec02 100644
+--- a/drivers/hid/usbhid/hid-core.c
++++ b/drivers/hid/usbhid/hid-core.c
+@@ -984,12 +984,11 @@ static int usbhid_parse(struct hid_device *hid)
+ 	struct usb_host_interface *interface = intf->cur_altsetting;
+ 	struct usb_device *dev = interface_to_usbdev (intf);
+ 	struct hid_descriptor *hdesc;
++	struct hid_class_descriptor *hcdesc;
+ 	u32 quirks = 0;
+ 	unsigned int rsize = 0;
+ 	char *rdesc;
+-	int ret, n;
+-	int num_descriptors;
+-	size_t offset = offsetof(struct hid_descriptor, desc);
++	int ret;
+ 
+ 	quirks = hid_lookup_quirk(hid);
+ 
+@@ -1011,20 +1010,19 @@ static int usbhid_parse(struct hid_device *hid)
+ 		return -ENODEV;
+ 	}
+ 
+-	if (hdesc->bLength < sizeof(struct hid_descriptor)) {
+-		dbg_hid("hid descriptor is too short\n");
++	if (!hdesc->bNumDescriptors ||
++	    hdesc->bLength != sizeof(*hdesc) +
++			      (hdesc->bNumDescriptors - 1) * sizeof(*hcdesc)) {
++		dbg_hid("hid descriptor invalid, bLen=%hhu bNum=%hhu\n",
++			hdesc->bLength, hdesc->bNumDescriptors);
+ 		return -EINVAL;
+ 	}
+ 
+ 	hid->version = le16_to_cpu(hdesc->bcdHID);
+ 	hid->country = hdesc->bCountryCode;
+ 
+-	num_descriptors = min_t(int, hdesc->bNumDescriptors,
+-	       (hdesc->bLength - offset) / sizeof(struct hid_class_descriptor));
+-
+-	for (n = 0; n < num_descriptors; n++)
+-		if (hdesc->desc[n].bDescriptorType == HID_DT_REPORT)
+-			rsize = le16_to_cpu(hdesc->desc[n].wDescriptorLength);
++	if (hdesc->rpt_desc.bDescriptorType == HID_DT_REPORT)
++		rsize = le16_to_cpu(hdesc->rpt_desc.wDescriptorLength);
+ 
+ 	if (!rsize || rsize > HID_MAX_DESCRIPTOR_SIZE) {
+ 		dbg_hid("weird size of report descriptor (%u)\n", rsize);
+@@ -1052,6 +1050,11 @@ static int usbhid_parse(struct hid_device *hid)
+ 		goto err;
+ 	}
+ 
++	if (hdesc->bNumDescriptors > 1)
++		hid_warn(intf,
++			"%u unsupported optional hid class descriptors\n",
++			(unsigned int)(hdesc->bNumDescriptors - 1));
++
+ 	hid->quirks |= quirks;
+ 
+ 	return 0;
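
The usbhid change turns a permissive minimum-length check into an exact-size invariant: bLength must equal the fixed header plus one class descriptor per bNumDescriptors, with the report descriptor always first. The arithmetic in a standalone sketch; the packed structs here are simplified stand-ins, not the kernel's <linux/hid.h> definitions:

#include <stdint.h>
#include <stdio.h>

#pragma pack(push, 1)
struct hid_class_descriptor { uint8_t bDescriptorType; uint16_t wDescriptorLength; };
struct hid_descriptor {
	uint8_t  bLength;
	uint16_t bcdHID;
	uint8_t  bCountryCode;
	uint8_t  bNumDescriptors;
	struct hid_class_descriptor rpt_desc;	/* descriptor 0, always the report desc */
};
#pragma pack(pop)

/* Valid iff bLength covers the header plus exactly (bNumDescriptors - 1)
 * optional class descriptors following rpt_desc. */
static int hid_desc_len_ok(const struct hid_descriptor *h)
{
	return h->bNumDescriptors &&
	       h->bLength == sizeof(*h) +
			     (h->bNumDescriptors - 1) * sizeof(struct hid_class_descriptor);
}

int main(void)
{
	struct hid_descriptor h = { .bLength = sizeof(h), .bNumDescriptors = 1 };
	printf("%d\n", hid_desc_len_ok(&h));	/* 1: one descriptor, exact size */
	h.bNumDescriptors = 2;			/* claims an extra descriptor... */
	printf("%d\n", hid_desc_len_ok(&h));	/* 0: ...that bLength doesn't cover */
	return 0;
}
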
+diff --git a/drivers/hwmon/asus-ec-sensors.c b/drivers/hwmon/asus-ec-sensors.c
+index 006ced5ab6e6ad..c7c02a1f55d459 100644
+--- a/drivers/hwmon/asus-ec-sensors.c
++++ b/drivers/hwmon/asus-ec-sensors.c
+@@ -933,6 +933,10 @@ static int asus_ec_hwmon_read_string(struct device *dev,
+ {
+ 	struct ec_sensors_data *state = dev_get_drvdata(dev);
+ 	int sensor_index = find_ec_sensor_index(state, type, channel);
++
++	if (sensor_index < 0)
++		return sensor_index;
++
+ 	*str = get_sensor_info(state, sensor_index)->label;
+ 
+ 	return 0;
+diff --git a/drivers/hwtracing/coresight/coresight-catu.c b/drivers/hwtracing/coresight/coresight-catu.c
+index fa170c966bc3be..d4e2e175e07700 100644
+--- a/drivers/hwtracing/coresight/coresight-catu.c
++++ b/drivers/hwtracing/coresight/coresight-catu.c
+@@ -458,12 +458,17 @@ static int catu_enable_hw(struct catu_drvdata *drvdata, enum cs_mode cs_mode,
+ static int catu_enable(struct coresight_device *csdev, enum cs_mode mode,
+ 		       void *data)
+ {
+-	int rc;
++	int rc = 0;
+ 	struct catu_drvdata *catu_drvdata = csdev_to_catu_drvdata(csdev);
+ 
+-	CS_UNLOCK(catu_drvdata->base);
+-	rc = catu_enable_hw(catu_drvdata, mode, data);
+-	CS_LOCK(catu_drvdata->base);
++	guard(raw_spinlock_irqsave)(&catu_drvdata->spinlock);
++	if (csdev->refcnt == 0) {
++		CS_UNLOCK(catu_drvdata->base);
++		rc = catu_enable_hw(catu_drvdata, mode, data);
++		CS_LOCK(catu_drvdata->base);
++	}
++	if (!rc)
++		csdev->refcnt++;
+ 	return rc;
+ }
+ 
+@@ -486,12 +491,15 @@ static int catu_disable_hw(struct catu_drvdata *drvdata)
+ 
+ static int catu_disable(struct coresight_device *csdev, void *__unused)
+ {
+-	int rc;
++	int rc = 0;
+ 	struct catu_drvdata *catu_drvdata = csdev_to_catu_drvdata(csdev);
+ 
+-	CS_UNLOCK(catu_drvdata->base);
+-	rc = catu_disable_hw(catu_drvdata);
+-	CS_LOCK(catu_drvdata->base);
++	guard(raw_spinlock_irqsave)(&catu_drvdata->spinlock);
++	if (--csdev->refcnt == 0) {
++		CS_UNLOCK(catu_drvdata->base);
++		rc = catu_disable_hw(catu_drvdata);
++		CS_LOCK(catu_drvdata->base);
++	}
+ 	return rc;
+ }
+ 
+@@ -550,6 +558,7 @@ static int __catu_probe(struct device *dev, struct resource *res)
+ 	dev->platform_data = pdata;
+ 
+ 	drvdata->base = base;
++	raw_spin_lock_init(&drvdata->spinlock);
+ 	catu_desc.access = CSDEV_ACCESS_IOMEM(base);
+ 	catu_desc.pdata = pdata;
+ 	catu_desc.dev = dev;
+@@ -702,7 +711,7 @@ static int __init catu_init(void)
+ {
+ 	int ret;
+ 
+-	ret = coresight_init_driver("catu", &catu_driver, &catu_platform_driver);
++	ret = coresight_init_driver("catu", &catu_driver, &catu_platform_driver, THIS_MODULE);
+ 	tmc_etr_set_catu_ops(&etr_catu_buf_ops);
+ 	return ret;
+ }
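
CATU enable/disable becomes refcounted so that only the first enable and the last disable touch the hardware, serialized by the new raw spinlock through the cleanup.h guard() helper, which releases the lock automatically at every return. The shape of the pattern, with hypothetical my_* names:

/* Sketch of the refcounted enable/disable pattern used above. */
static int my_enable(struct my_drvdata *drvdata, struct coresight_device *csdev)
{
	int rc = 0;

	guard(raw_spinlock_irqsave)(&drvdata->spinlock);	/* dropped at return */
	if (csdev->refcnt == 0)		/* only the first user programs the HW */
		rc = my_enable_hw(drvdata);
	if (!rc)
		csdev->refcnt++;	/* count only successful enables */
	return rc;
}

static int my_disable(struct my_drvdata *drvdata, struct coresight_device *csdev)
{
	int rc = 0;

	guard(raw_spinlock_irqsave)(&drvdata->spinlock);
	if (--csdev->refcnt == 0)	/* only the last user shuts the HW down */
		rc = my_disable_hw(drvdata);
	return rc;
}
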
+diff --git a/drivers/hwtracing/coresight/coresight-catu.h b/drivers/hwtracing/coresight/coresight-catu.h
+index 141feac1c14b08..755776cd19c5bb 100644
+--- a/drivers/hwtracing/coresight/coresight-catu.h
++++ b/drivers/hwtracing/coresight/coresight-catu.h
+@@ -65,6 +65,7 @@ struct catu_drvdata {
+ 	void __iomem *base;
+ 	struct coresight_device *csdev;
+ 	int irq;
++	raw_spinlock_t spinlock;
+ };
+ 
+ #define CATU_REG32(name, offset)					\
+diff --git a/drivers/hwtracing/coresight/coresight-config.h b/drivers/hwtracing/coresight/coresight-config.h
+index b9ebc9fcfb7f20..90fd937d3bd837 100644
+--- a/drivers/hwtracing/coresight/coresight-config.h
++++ b/drivers/hwtracing/coresight/coresight-config.h
+@@ -228,7 +228,7 @@ struct cscfg_feature_csdev {
+  * @feats_csdev:references to the device features to enable.
+  */
+ struct cscfg_config_csdev {
+-	const struct cscfg_config_desc *config_desc;
++	struct cscfg_config_desc *config_desc;
+ 	struct coresight_device *csdev;
+ 	bool enabled;
+ 	struct list_head node;
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index fb43ef6a3b1f0d..d3523f0262af82 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -465,7 +465,7 @@ int coresight_enable_path(struct coresight_path *path, enum cs_mode mode,
+ 		/* Enable all helpers adjacent to the path first */
+ 		ret = coresight_enable_helpers(csdev, mode, path);
+ 		if (ret)
+-			goto err;
++			goto err_disable_path;
+ 		/*
+ 		 * ETF devices are tricky... They can be a link or a sink,
+ 		 * depending on how they are configured.  If an ETF has been
+@@ -486,8 +486,10 @@ int coresight_enable_path(struct coresight_path *path, enum cs_mode mode,
+ 			 * that need disabling. Disabling the path here
+ 			 * would mean we could disrupt an existing session.
+ 			 */
+-			if (ret)
++			if (ret) {
++				coresight_disable_helpers(csdev, path);
+ 				goto out;
++			}
+ 			break;
+ 		case CORESIGHT_DEV_TYPE_SOURCE:
+ 			/* sources are enabled from either sysFS or Perf */
+@@ -497,16 +499,19 @@ int coresight_enable_path(struct coresight_path *path, enum cs_mode mode,
+ 			child = list_next_entry(nd, link)->csdev;
+ 			ret = coresight_enable_link(csdev, parent, child, source);
+ 			if (ret)
+-				goto err;
++				goto err_disable_helpers;
+ 			break;
+ 		default:
+-			goto err;
++			ret = -EINVAL;
++			goto err_disable_helpers;
+ 		}
+ 	}
+ 
+ out:
+ 	return ret;
+-err:
++err_disable_helpers:
++	coresight_disable_helpers(csdev, path);
++err_disable_path:
+ 	coresight_disable_path_from(path, nd);
+ 	goto out;
+ }
+@@ -1585,17 +1590,17 @@ module_init(coresight_init);
+ module_exit(coresight_exit);
+ 
+ int coresight_init_driver(const char *drv, struct amba_driver *amba_drv,
+-			  struct platform_driver *pdev_drv)
++			  struct platform_driver *pdev_drv, struct module *owner)
+ {
+ 	int ret;
+ 
+-	ret = amba_driver_register(amba_drv);
++	ret = __amba_driver_register(amba_drv, owner);
+ 	if (ret) {
+ 		pr_err("%s: error registering AMBA driver\n", drv);
+ 		return ret;
+ 	}
+ 
+-	ret = platform_driver_register(pdev_drv);
++	ret = __platform_driver_register(pdev_drv, owner);
+ 	if (!ret)
+ 		return 0;
+ 
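
The reworked error handling in coresight_enable_path() follows the
usual layered-goto pattern: each label undoes only the steps that had
already succeeded when the failure occurred, so helpers enabled earlier
in the iteration are no longer leaked. The shape, with hypothetical
setup_a()/setup_b() stand-ins for the helper and link stages:

	ret = setup_a();		/* e.g. enable helpers */
	if (ret)
		goto err_undo_prior;	/* a never came up */

	ret = setup_b();		/* e.g. enable the link */
	if (ret)
		goto err_undo_a;	/* a did come up, undo it first */

	return 0;

err_undo_a:
	undo_a();
err_undo_prior:
	undo_prior_steps();		/* e.g. disable the path so far */
	return ret;
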
+diff --git a/drivers/hwtracing/coresight/coresight-cpu-debug.c b/drivers/hwtracing/coresight/coresight-cpu-debug.c
+index 342c3aaf414dd8..a871d997330b09 100644
+--- a/drivers/hwtracing/coresight/coresight-cpu-debug.c
++++ b/drivers/hwtracing/coresight/coresight-cpu-debug.c
+@@ -774,7 +774,8 @@ static struct platform_driver debug_platform_driver = {
+ 
+ static int __init debug_init(void)
+ {
+-	return coresight_init_driver("debug", &debug_driver, &debug_platform_driver);
++	return coresight_init_driver("debug", &debug_driver, &debug_platform_driver,
++				     THIS_MODULE);
+ }
+ 
+ static void __exit debug_exit(void)
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+index 2b8f1046384020..88ef381ee6dd9b 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+@@ -1020,6 +1020,9 @@ static void etm4_disable_sysfs(struct coresight_device *csdev)
+ 	smp_call_function_single(drvdata->cpu, etm4_disable_hw, drvdata, 1);
+ 
+ 	raw_spin_unlock(&drvdata->spinlock);
++
++	cscfg_csdev_disable_active_config(csdev);
++
+ 	cpus_read_unlock();
+ 
+ 	/*
+@@ -1176,7 +1179,7 @@ static void cpu_detect_trace_filtering(struct etmv4_drvdata *drvdata)
+ 	 * tracing at the kernel EL and EL0, forcing to use the
+ 	 * virtual time as the timestamp.
+ 	 */
+-	trfcr = (TRFCR_EL1_TS_VIRTUAL |
++	trfcr = (FIELD_PREP(TRFCR_EL1_TS_MASK, TRFCR_EL1_TS_VIRTUAL) |
+ 		 TRFCR_EL1_ExTRE |
+ 		 TRFCR_EL1_E0TRE);
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+index fdd0956fecb36d..c3ca904de584d7 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+@@ -2320,11 +2320,11 @@ static ssize_t ts_source_show(struct device *dev,
+ 		goto out;
+ 	}
+ 
+-	switch (drvdata->trfcr & TRFCR_EL1_TS_MASK) {
++	val = FIELD_GET(TRFCR_EL1_TS_MASK, drvdata->trfcr);
++	switch (val) {
+ 	case TRFCR_EL1_TS_VIRTUAL:
+ 	case TRFCR_EL1_TS_GUEST_PHYSICAL:
+ 	case TRFCR_EL1_TS_PHYSICAL:
+-		val = FIELD_GET(TRFCR_EL1_TS_MASK, drvdata->trfcr);
+ 		break;
+ 	default:
+ 		val = -1;
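
Both ETM4x hunks are about treating the TRFCR_EL1_TS_* constants as
field values rather than pre-shifted masks: FIELD_PREP() shifts a value
into a GENMASK()-defined field on write, and FIELD_GET() extracts and
right-shifts it on read, so comparisons against the unshifted constants
work. A self-contained illustration (EX_TS_MASK is made up, not the
architectural field):

#include <linux/bitfield.h>

#define EX_TS_MASK	GENMASK(6, 5)	/* illustrative field position */

	u64 reg = FIELD_PREP(EX_TS_MASK, 1);	/* puts 1 into bits 6:5 */
	u64 ts  = FIELD_GET(EX_TS_MASK, reg);	/* reads back 1, not BIT(5) */
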
+diff --git a/drivers/hwtracing/coresight/coresight-funnel.c b/drivers/hwtracing/coresight/coresight-funnel.c
+index 0541712b2bcb69..124fc2e26cfb1a 100644
+--- a/drivers/hwtracing/coresight/coresight-funnel.c
++++ b/drivers/hwtracing/coresight/coresight-funnel.c
+@@ -433,7 +433,8 @@ static struct amba_driver dynamic_funnel_driver = {
+ 
+ static int __init funnel_init(void)
+ {
+-	return coresight_init_driver("funnel", &dynamic_funnel_driver, &funnel_driver);
++	return coresight_init_driver("funnel", &dynamic_funnel_driver, &funnel_driver,
++				     THIS_MODULE);
+ }
+ 
+ static void __exit funnel_exit(void)
+diff --git a/drivers/hwtracing/coresight/coresight-replicator.c b/drivers/hwtracing/coresight/coresight-replicator.c
+index ee7ee79f6cf775..572dcd2bac16d9 100644
+--- a/drivers/hwtracing/coresight/coresight-replicator.c
++++ b/drivers/hwtracing/coresight/coresight-replicator.c
+@@ -438,7 +438,8 @@ static struct amba_driver dynamic_replicator_driver = {
+ 
+ static int __init replicator_init(void)
+ {
+-	return coresight_init_driver("replicator", &dynamic_replicator_driver, &replicator_driver);
++	return coresight_init_driver("replicator", &dynamic_replicator_driver, &replicator_driver,
++				     THIS_MODULE);
+ }
+ 
+ static void __exit replicator_exit(void)
+diff --git a/drivers/hwtracing/coresight/coresight-stm.c b/drivers/hwtracing/coresight/coresight-stm.c
+index 26f9339f38b938..527347e4d16c5d 100644
+--- a/drivers/hwtracing/coresight/coresight-stm.c
++++ b/drivers/hwtracing/coresight/coresight-stm.c
+@@ -1058,7 +1058,7 @@ static struct platform_driver stm_platform_driver = {
+ 
+ static int __init stm_init(void)
+ {
+-	return coresight_init_driver("stm", &stm_driver, &stm_platform_driver);
++	return coresight_init_driver("stm", &stm_driver, &stm_platform_driver, THIS_MODULE);
+ }
+ 
+ static void __exit stm_exit(void)
+diff --git a/drivers/hwtracing/coresight/coresight-syscfg.c b/drivers/hwtracing/coresight/coresight-syscfg.c
+index a70c1454b4106c..83dad24e0116d4 100644
+--- a/drivers/hwtracing/coresight/coresight-syscfg.c
++++ b/drivers/hwtracing/coresight/coresight-syscfg.c
+@@ -395,6 +395,8 @@ static void cscfg_remove_owned_csdev_configs(struct coresight_device *csdev, voi
+ 	if (list_empty(&csdev->config_csdev_list))
+ 		return;
+ 
++	guard(raw_spinlock_irqsave)(&csdev->cscfg_csdev_lock);
++
+ 	list_for_each_entry_safe(config_csdev, tmp, &csdev->config_csdev_list, node) {
+ 		if (config_csdev->config_desc->load_owner == load_owner)
+ 			list_del(&config_csdev->node);
+@@ -867,6 +869,25 @@ void cscfg_csdev_reset_feats(struct coresight_device *csdev)
+ }
+ EXPORT_SYMBOL_GPL(cscfg_csdev_reset_feats);
+ 
++static bool cscfg_config_desc_get(struct cscfg_config_desc *config_desc)
++{
++	if (!atomic_fetch_inc(&config_desc->active_cnt)) {
++		/* must ensure that config cannot be unloaded in use */
++		if (unlikely(cscfg_owner_get(config_desc->load_owner))) {
++			atomic_dec(&config_desc->active_cnt);
++			return false;
++		}
++	}
++
++	return true;
++}
++
++static void cscfg_config_desc_put(struct cscfg_config_desc *config_desc)
++{
++	if (!atomic_dec_return(&config_desc->active_cnt))
++		cscfg_owner_put(config_desc->load_owner);
++}
++
+ /*
+  * This activates a configuration for either perf or sysfs. Perf can have
+  * multiple active configs, selected per event; sysfs is limited to one.
+@@ -890,22 +911,17 @@ static int _cscfg_activate_config(unsigned long cfg_hash)
+ 			if (config_desc->available == false)
+ 				return -EBUSY;
+ 
+-			/* must ensure that config cannot be unloaded in use */
+-			err = cscfg_owner_get(config_desc->load_owner);
+-			if (err)
++			if (!cscfg_config_desc_get(config_desc)) {
++				err = -EINVAL;
+ 				break;
++			}
++
+ 			/*
+ 			 * increment the global active count - control changes to
+ 			 * active configurations
+ 			 */
+ 			atomic_inc(&cscfg_mgr->sys_active_cnt);
+ 
+-			/*
+-			 * mark the descriptor as active so enable config on a
+-			 * device instance will use it
+-			 */
+-			atomic_inc(&config_desc->active_cnt);
+-
+ 			err = 0;
+ 			dev_dbg(cscfg_device(), "Activate config %s.\n", config_desc->name);
+ 			break;
+@@ -920,9 +936,8 @@ static void _cscfg_deactivate_config(unsigned long cfg_hash)
+ 
+ 	list_for_each_entry(config_desc, &cscfg_mgr->config_desc_list, item) {
+ 		if ((unsigned long)config_desc->event_ea->var == cfg_hash) {
+-			atomic_dec(&config_desc->active_cnt);
+ 			atomic_dec(&cscfg_mgr->sys_active_cnt);
+-			cscfg_owner_put(config_desc->load_owner);
++			cscfg_config_desc_put(config_desc);
+ 			dev_dbg(cscfg_device(), "Deactivate config %s.\n", config_desc->name);
+ 			break;
+ 		}
+@@ -1047,7 +1062,7 @@ int cscfg_csdev_enable_active_config(struct coresight_device *csdev,
+ 				     unsigned long cfg_hash, int preset)
+ {
+ 	struct cscfg_config_csdev *config_csdev_active = NULL, *config_csdev_item;
+-	const struct cscfg_config_desc *config_desc;
++	struct cscfg_config_desc *config_desc;
+ 	unsigned long flags;
+ 	int err = 0;
+ 
+@@ -1062,8 +1077,8 @@ int cscfg_csdev_enable_active_config(struct coresight_device *csdev,
+ 	raw_spin_lock_irqsave(&csdev->cscfg_csdev_lock, flags);
+ 	list_for_each_entry(config_csdev_item, &csdev->config_csdev_list, node) {
+ 		config_desc = config_csdev_item->config_desc;
+-		if ((atomic_read(&config_desc->active_cnt)) &&
+-		    ((unsigned long)config_desc->event_ea->var == cfg_hash)) {
++		if (((unsigned long)config_desc->event_ea->var == cfg_hash) &&
++				cscfg_config_desc_get(config_desc)) {
+ 			config_csdev_active = config_csdev_item;
+ 			csdev->active_cscfg_ctxt = (void *)config_csdev_active;
+ 			break;
+@@ -1097,7 +1112,11 @@ int cscfg_csdev_enable_active_config(struct coresight_device *csdev,
+ 				err = -EBUSY;
+ 			raw_spin_unlock_irqrestore(&csdev->cscfg_csdev_lock, flags);
+ 		}
++
++		if (err)
++			cscfg_config_desc_put(config_desc);
+ 	}
++
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(cscfg_csdev_enable_active_config);
+@@ -1136,8 +1155,10 @@ void cscfg_csdev_disable_active_config(struct coresight_device *csdev)
+ 	raw_spin_unlock_irqrestore(&csdev->cscfg_csdev_lock, flags);
+ 
+ 	/* true if there was an enabled active config */
+-	if (config_csdev)
++	if (config_csdev) {
+ 		cscfg_csdev_disable_config(config_csdev);
++		cscfg_config_desc_put(config_csdev->config_desc);
++	}
+ }
+ EXPORT_SYMBOL_GPL(cscfg_csdev_disable_active_config);
+ 
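
The two new syscfg helpers centralize a classic refcount pattern: only
the 0->1 transition pins the owning module and only the 1->0 transition
releases it, so every successful cscfg_config_desc_get() is balanced by
exactly one cscfg_config_desc_put(), including on the enable-path error
exit that the old open-coded version missed. Roughly, assuming
hypothetical try_get_owner()/put_owner() primitives:

static bool example_get(struct example_desc *d)
{
	if (!atomic_fetch_inc(&d->active_cnt)) {	/* count was zero */
		if (try_get_owner(d->owner)) {		/* pin the module */
			atomic_dec(&d->active_cnt);	/* roll back */
			return false;
		}
	}
	return true;
}

static void example_put(struct example_desc *d)
{
	if (!atomic_dec_return(&d->active_cnt))		/* count hit zero */
		put_owner(d->owner);
}
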
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-core.c b/drivers/hwtracing/coresight/coresight-tmc-core.c
+index a7814e8e657b21..455b1c9b15682c 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-core.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-core.c
+@@ -1060,7 +1060,7 @@ static struct platform_driver tmc_platform_driver = {
+ 
+ static int __init tmc_init(void)
+ {
+-	return coresight_init_driver("tmc", &tmc_driver, &tmc_platform_driver);
++	return coresight_init_driver("tmc", &tmc_driver, &tmc_platform_driver, THIS_MODULE);
+ }
+ 
+ static void __exit tmc_exit(void)
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+index d858740001c27d..a922e3b709638d 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etf.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+@@ -747,7 +747,6 @@ int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata)
+ 	char *buf = NULL;
+ 	enum tmc_mode mode;
+ 	unsigned long flags;
+-	int rc = 0;
+ 
+ 	/* config types are set at boot time and never change */
+ 	if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETB &&
+@@ -773,11 +772,11 @@ int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata)
+ 		 * can't be NULL.
+ 		 */
+ 		memset(drvdata->buf, 0, drvdata->size);
+-		rc = __tmc_etb_enable_hw(drvdata);
+-		if (rc) {
+-			raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
+-			return rc;
+-		}
++		/*
++		 * Ignore failures to enable the TMC, so that we don't
++		 * leave the TMC in a "reading" state.
++		 */
++		__tmc_etb_enable_hw(drvdata);
+ 	} else {
+ 		/*
+ 		 * The ETB/ETF is not tracing and the buffer was just read.
+diff --git a/drivers/hwtracing/coresight/coresight-tpiu.c b/drivers/hwtracing/coresight/coresight-tpiu.c
+index 97ef36f03ec207..3e015928842808 100644
+--- a/drivers/hwtracing/coresight/coresight-tpiu.c
++++ b/drivers/hwtracing/coresight/coresight-tpiu.c
+@@ -318,7 +318,7 @@ static struct platform_driver tpiu_platform_driver = {
+ 
+ static int __init tpiu_init(void)
+ {
+-	return coresight_init_driver("tpiu", &tpiu_driver, &tpiu_platform_driver);
++	return coresight_init_driver("tpiu", &tpiu_driver, &tpiu_platform_driver, THIS_MODULE);
+ }
+ 
+ static void __exit tpiu_exit(void)
+diff --git a/drivers/iio/adc/ad4851.c b/drivers/iio/adc/ad4851.c
+index 98ebc853db7962..f1d2e2896f2a2d 100644
+--- a/drivers/iio/adc/ad4851.c
++++ b/drivers/iio/adc/ad4851.c
+@@ -1034,7 +1034,7 @@ static int ad4858_parse_channels(struct iio_dev *indio_dev)
+ 	struct device *dev = &st->spi->dev;
+ 	struct iio_chan_spec *ad4851_channels;
+ 	const struct iio_chan_spec ad4851_chan = AD4858_IIO_CHANNEL;
+-	int ret;
++	int ret, i = 0;
+ 
+ 	ret = ad4851_parse_channels_common(indio_dev, &ad4851_channels,
+ 					   ad4851_chan);
+@@ -1042,15 +1042,15 @@ static int ad4858_parse_channels(struct iio_dev *indio_dev)
+ 		return ret;
+ 
+ 	device_for_each_child_node_scoped(dev, child) {
+-		ad4851_channels->has_ext_scan_type = 1;
++		ad4851_channels[i].has_ext_scan_type = 1;
+ 		if (fwnode_property_read_bool(child, "bipolar")) {
+-			ad4851_channels->ext_scan_type = ad4851_scan_type_20_b;
+-			ad4851_channels->num_ext_scan_type = ARRAY_SIZE(ad4851_scan_type_20_b);
++			ad4851_channels[i].ext_scan_type = ad4851_scan_type_20_b;
++			ad4851_channels[i].num_ext_scan_type = ARRAY_SIZE(ad4851_scan_type_20_b);
+ 		} else {
+-			ad4851_channels->ext_scan_type = ad4851_scan_type_20_u;
+-			ad4851_channels->num_ext_scan_type = ARRAY_SIZE(ad4851_scan_type_20_u);
++			ad4851_channels[i].ext_scan_type = ad4851_scan_type_20_u;
++			ad4851_channels[i].num_ext_scan_type = ARRAY_SIZE(ad4851_scan_type_20_u);
+ 		}
+-		ad4851_channels++;
++		i++;
+ 	}
+ 
+ 	indio_dev->channels = ad4851_channels;
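
The ad4851 fix deserves spelling out: the old loop advanced the
ad4851_channels pointer itself, so the final assignment to
indio_dev->channels pointed one element past the configured array.
Indexing leaves the base pointer intact. Reduced to its essence, with
hypothetical struct names:

	struct chan *p = chans;

	for (i = 0; i < n; i++)
		(p++)->flag = 1;	/* p now equals chans + n */
	indio->channels = p;		/* BUG: past the last element */

	for (i = 0; i < n; i++)
		chans[i].flag = 1;	/* base pointer untouched */
	indio->channels = chans;	/* correct */
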
+diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c
+index 3ea81a98e45534..7d5d84a07cae1d 100644
+--- a/drivers/iio/adc/ad7124.c
++++ b/drivers/iio/adc/ad7124.c
+@@ -301,9 +301,9 @@ static int ad7124_get_3db_filter_freq(struct ad7124_state *st,
+ 
+ 	switch (st->channels[channel].cfg.filter_type) {
+ 	case AD7124_SINC3_FILTER:
+-		return DIV_ROUND_CLOSEST(fadc * 230, 1000);
++		return DIV_ROUND_CLOSEST(fadc * 272, 1000);
+ 	case AD7124_SINC4_FILTER:
+-		return DIV_ROUND_CLOSEST(fadc * 262, 1000);
++		return DIV_ROUND_CLOSEST(fadc * 230, 1000);
+ 	default:
+ 		return -EINVAL;
+ 	}
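
For sincN decimation filters the -3 dB point is a fixed fraction of the
output data rate; per this fix, sinc3 uses roughly 0.272 x fADC and
sinc4 roughly 0.230 x fADC, where the old code had paired 0.230 with
sinc3 and 0.262 with sinc4. The scaled-integer form avoids floating
point in the kernel:

	/* f3dB ~= 0.272 * fadc, computed as fadc * 272 / 1000, rounded */
	f3db = DIV_ROUND_CLOSEST(fadc * 272, 1000);
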
+diff --git a/drivers/iio/adc/mcp3911.c b/drivers/iio/adc/mcp3911.c
+index 6748b44d568db6..60a19c35807ab7 100644
+--- a/drivers/iio/adc/mcp3911.c
++++ b/drivers/iio/adc/mcp3911.c
+@@ -6,7 +6,7 @@
+  * Copyright (C) 2018 Kent Gustavsson <kent@minoris.se>
+  */
+ #include <linux/bitfield.h>
+-#include <linux/bits.h>
++#include <linux/bitops.h>
+ #include <linux/cleanup.h>
+ #include <linux/clk.h>
+ #include <linux/delay.h>
+@@ -79,6 +79,8 @@
+ #define MCP3910_CONFIG1_CLKEXT		BIT(6)
+ #define MCP3910_CONFIG1_VREFEXT		BIT(7)
+ 
++#define MCP3910_CHANNEL(ch)		(MCP3911_REG_CHANNEL0 + (ch))
++
+ #define MCP3910_REG_OFFCAL_CH0		0x0f
+ #define MCP3910_OFFCAL(ch)		(MCP3910_REG_OFFCAL_CH0 + (ch) * 6)
+ 
+@@ -110,6 +112,7 @@ struct mcp3911_chip_info {
+ 	int (*get_offset)(struct mcp3911 *adc, int channel, int *val);
+ 	int (*set_offset)(struct mcp3911 *adc, int channel, int val);
+ 	int (*set_scale)(struct mcp3911 *adc, int channel, u32 val);
++	int (*get_raw)(struct mcp3911 *adc, int channel, int *val);
+ };
+ 
+ struct mcp3911 {
+@@ -170,6 +173,18 @@ static int mcp3911_update(struct mcp3911 *adc, u8 reg, u32 mask, u32 val, u8 len
+ 	return mcp3911_write(adc, reg, val, len);
+ }
+ 
++static int mcp3911_read_s24(struct mcp3911 *const adc, u8 const reg, s32 *const val)
++{
++	u32 uval;
++	int const ret = mcp3911_read(adc, reg, &uval, 3);
++
++	if (ret)
++		return ret;
++
++	*val = sign_extend32(uval, 23);
++	return ret;
++}
++
+ static int mcp3910_enable_offset(struct mcp3911 *adc, bool enable)
+ {
+ 	unsigned int mask = MCP3910_CONFIG0_EN_OFFCAL;
+@@ -194,6 +209,11 @@ static int mcp3910_set_offset(struct mcp3911 *adc, int channel, int val)
+ 	return adc->chip->enable_offset(adc, 1);
+ }
+ 
++static int mcp3910_get_raw(struct mcp3911 *adc, int channel, s32 *val)
++{
++	return mcp3911_read_s24(adc, MCP3910_CHANNEL(channel), val);
++}
++
+ static int mcp3911_enable_offset(struct mcp3911 *adc, bool enable)
+ {
+ 	unsigned int mask = MCP3911_STATUSCOM_EN_OFFCAL;
+@@ -218,6 +238,11 @@ static int mcp3911_set_offset(struct mcp3911 *adc, int channel, int val)
+ 	return adc->chip->enable_offset(adc, 1);
+ }
+ 
++static int mcp3911_get_raw(struct mcp3911 *adc, int channel, s32 *val)
++{
++	return mcp3911_read_s24(adc, MCP3911_CHANNEL(channel), val);
++}
++
+ static int mcp3910_get_osr(struct mcp3911 *adc, u32 *val)
+ {
+ 	int ret;
+@@ -321,12 +346,9 @@ static int mcp3911_read_raw(struct iio_dev *indio_dev,
+ 	guard(mutex)(&adc->lock);
+ 	switch (mask) {
+ 	case IIO_CHAN_INFO_RAW:
+-		ret = mcp3911_read(adc,
+-				   MCP3911_CHANNEL(channel->channel), val, 3);
++		ret = adc->chip->get_raw(adc, channel->channel, val);
+ 		if (ret)
+ 			return ret;
+-
+-		*val = sign_extend32(*val, 23);
+ 		return IIO_VAL_INT;
+ 	case IIO_CHAN_INFO_OFFSET:
+ 		ret = adc->chip->get_offset(adc, channel->channel, val);
+@@ -799,6 +821,7 @@ static const struct mcp3911_chip_info mcp3911_chip_info[] = {
+ 		.get_offset = mcp3910_get_offset,
+ 		.set_offset = mcp3910_set_offset,
+ 		.set_scale = mcp3910_set_scale,
++		.get_raw = mcp3910_get_raw,
+ 	},
+ 	[MCP3911] = {
+ 		.channels = mcp3911_channels,
+@@ -810,6 +833,7 @@ static const struct mcp3911_chip_info mcp3911_chip_info[] = {
+ 		.get_offset = mcp3911_get_offset,
+ 		.set_offset = mcp3911_set_offset,
+ 		.set_scale = mcp3911_set_scale,
++		.get_raw = mcp3911_get_raw,
+ 	},
+ 	[MCP3912] = {
+ 		.channels = mcp3912_channels,
+@@ -821,6 +845,7 @@ static const struct mcp3911_chip_info mcp3911_chip_info[] = {
+ 		.get_offset = mcp3910_get_offset,
+ 		.set_offset = mcp3910_set_offset,
+ 		.set_scale = mcp3910_set_scale,
++		.get_raw = mcp3910_get_raw,
+ 	},
+ 	[MCP3913] = {
+ 		.channels = mcp3913_channels,
+@@ -832,6 +857,7 @@ static const struct mcp3911_chip_info mcp3911_chip_info[] = {
+ 		.get_offset = mcp3910_get_offset,
+ 		.set_offset = mcp3910_set_offset,
+ 		.set_scale = mcp3910_set_scale,
++		.get_raw = mcp3910_get_raw,
+ 	},
+ 	[MCP3914] = {
+ 		.channels = mcp3914_channels,
+@@ -843,6 +869,7 @@ static const struct mcp3911_chip_info mcp3911_chip_info[] = {
+ 		.get_offset = mcp3910_get_offset,
+ 		.set_offset = mcp3910_set_offset,
+ 		.set_scale = mcp3910_set_scale,
++		.get_raw = mcp3910_get_raw,
+ 	},
+ 	[MCP3918] = {
+ 		.channels = mcp3918_channels,
+@@ -854,6 +881,7 @@ static const struct mcp3911_chip_info mcp3911_chip_info[] = {
+ 		.get_offset = mcp3910_get_offset,
+ 		.set_offset = mcp3910_set_offset,
+ 		.set_scale = mcp3910_set_scale,
++		.get_raw = mcp3910_get_raw,
+ 	},
+ 	[MCP3919] = {
+ 		.channels = mcp3919_channels,
+@@ -865,6 +893,7 @@ static const struct mcp3911_chip_info mcp3911_chip_info[] = {
+ 		.get_offset = mcp3910_get_offset,
+ 		.set_offset = mcp3910_set_offset,
+ 		.set_scale = mcp3910_set_scale,
++		.get_raw = mcp3910_get_raw,
+ 	},
+ };
+ static const struct of_device_id mcp3911_dt_ids[] = {
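
The mcp3911 rework routes all raw reads through a helper built on
sign_extend32(), which treats the given bit index as the sign bit of a
narrower two's-complement value, here the chips' 24-bit samples. Its
semantics in isolation:

	u32 raw = 0x800000;		/* most negative 24-bit code */
	s32 v = sign_extend32(raw, 23);	/* bit 23 is the sign: v == -8388608 */
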
+diff --git a/drivers/iio/adc/pac1934.c b/drivers/iio/adc/pac1934.c
+index 20802b7f49ea84..09fe88eb3fb045 100644
+--- a/drivers/iio/adc/pac1934.c
++++ b/drivers/iio/adc/pac1934.c
+@@ -1081,7 +1081,7 @@ static int pac1934_chip_identify(struct pac1934_chip_info *info)
+ 
+ /*
+  * documentation related to the ACPI device definition
+- * https://ww1.microchip.com/downloads/aemDocuments/documents/OTH/ApplicationNotes/ApplicationNotes/PAC1934-Integration-Notes-for-Microsoft-Windows-10-and-Windows-11-Driver-Support-DS00002534.pdf
++ * https://ww1.microchip.com/downloads/aemDocuments/documents/OTH/ApplicationNotes/ApplicationNotes/PAC193X-Integration-Notes-for-Microsoft-Windows-10-and-Windows-11-Driver-Support-DS00002534.pdf
+  */
+ static int pac1934_acpi_parse_channel_config(struct i2c_client *client,
+ 					     struct pac1934_chip_info *info)
+diff --git a/drivers/iio/dac/adi-axi-dac.c b/drivers/iio/dac/adi-axi-dac.c
+index 892d770aec69c4..05b374e137d35d 100644
+--- a/drivers/iio/dac/adi-axi-dac.c
++++ b/drivers/iio/dac/adi-axi-dac.c
+@@ -707,6 +707,7 @@ static int axi_dac_bus_reg_read(struct iio_backend *back, u32 reg, u32 *val,
+ {
+ 	struct axi_dac_state *st = iio_backend_get_priv(back);
+ 	int ret;
++	u32 ival;
+ 
+ 	guard(mutex)(&st->lock);
+ 
+@@ -719,6 +720,13 @@ static int axi_dac_bus_reg_read(struct iio_backend *back, u32 reg, u32 *val,
+ 	if (ret)
+ 		return ret;
+ 
++	ret = regmap_read_poll_timeout(st->regmap,
++				AXI_DAC_UI_STATUS_REG, ival,
++				FIELD_GET(AXI_DAC_UI_STATUS_IF_BUSY, ival) == 0,
++				10, 100 * KILO);
++	if (ret)
++		return ret;
++
+ 	return regmap_read(st->regmap, AXI_DAC_CUSTOM_RD_REG, val);
+ }
+ 
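
The AXI DAC read path gains a poll-until-idle step:
regmap_read_poll_timeout() re-reads the register into its value
argument until the condition expression becomes true, sleeping between
reads, and returns -ETIMEDOUT if the timeout expires first. The shape,
with made-up register and bit names:

	u32 sts;
	int ret;

	/* poll every 10 us, give up after 100 ms */
	ret = regmap_read_poll_timeout(map, EX_STATUS_REG, sts,
				       !(sts & EX_BUSY_BIT), 10, 100 * 1000);
	if (ret)
		return ret;	/* interface never went idle */
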
+diff --git a/drivers/iio/filter/admv8818.c b/drivers/iio/filter/admv8818.c
+index d85b7d3de86604..cc8ce0fe74e7c6 100644
+--- a/drivers/iio/filter/admv8818.c
++++ b/drivers/iio/filter/admv8818.c
+@@ -14,6 +14,7 @@
+ #include <linux/mod_devicetable.h>
+ #include <linux/mutex.h>
+ #include <linux/notifier.h>
++#include <linux/property.h>
+ #include <linux/regmap.h>
+ #include <linux/spi/spi.h>
+ #include <linux/units.h>
+@@ -70,6 +71,16 @@
+ #define ADMV8818_HPF_WR0_MSK			GENMASK(7, 4)
+ #define ADMV8818_LPF_WR0_MSK			GENMASK(3, 0)
+ 
++#define ADMV8818_BAND_BYPASS       0
++#define ADMV8818_BAND_MIN          1
++#define ADMV8818_BAND_MAX          4
++#define ADMV8818_BAND_CORNER_LOW   0
++#define ADMV8818_BAND_CORNER_HIGH  1
++
++#define ADMV8818_STATE_MIN   0
++#define ADMV8818_STATE_MAX   15
++#define ADMV8818_NUM_STATES  16
++
+ enum {
+ 	ADMV8818_BW_FREQ,
+ 	ADMV8818_CENTER_FREQ
+@@ -90,20 +101,24 @@ struct admv8818_state {
+ 	struct mutex		lock;
+ 	unsigned int		filter_mode;
+ 	u64			cf_hz;
++	u64			lpf_margin_hz;
++	u64			hpf_margin_hz;
+ };
+ 
+-static const unsigned long long freq_range_hpf[4][2] = {
++static const unsigned long long freq_range_hpf[5][2] = {
++	{0ULL, 0ULL}, /* bypass */
+ 	{1750000000ULL, 3550000000ULL},
+ 	{3400000000ULL, 7250000000ULL},
+ 	{6600000000, 12000000000},
+ 	{12500000000, 19900000000}
+ };
+ 
+-static const unsigned long long freq_range_lpf[4][2] = {
++static const unsigned long long freq_range_lpf[5][2] = {
++	{U64_MAX, U64_MAX}, /* bypass */
+ 	{2050000000ULL, 3850000000ULL},
+ 	{3350000000ULL, 7250000000ULL},
+ 	{7000000000, 13000000000},
+-	{12550000000, 18500000000}
++	{12550000000, 18850000000}
+ };
+ 
+ static const struct regmap_config admv8818_regmap_config = {
+@@ -121,44 +136,59 @@ static const char * const admv8818_modes[] = {
+ 
+ static int __admv8818_hpf_select(struct admv8818_state *st, u64 freq)
+ {
+-	unsigned int hpf_step = 0, hpf_band = 0, i, j;
+-	u64 freq_step;
+-	int ret;
++	int band, state, ret;
++	unsigned int hpf_state = ADMV8818_STATE_MIN, hpf_band = ADMV8818_BAND_BYPASS;
++	u64 freq_error, min_freq_error, freq_corner, freq_step;
+ 
+-	if (freq < freq_range_hpf[0][0])
++	if (freq < freq_range_hpf[ADMV8818_BAND_MIN][ADMV8818_BAND_CORNER_LOW])
+ 		goto hpf_write;
+ 
+-	if (freq > freq_range_hpf[3][1]) {
+-		hpf_step = 15;
+-		hpf_band = 4;
+-
++	if (freq >= freq_range_hpf[ADMV8818_BAND_MAX][ADMV8818_BAND_CORNER_HIGH]) {
++		hpf_state = ADMV8818_STATE_MAX;
++		hpf_band = ADMV8818_BAND_MAX;
+ 		goto hpf_write;
+ 	}
+ 
+-	for (i = 0; i < 4; i++) {
+-		freq_step = div_u64((freq_range_hpf[i][1] -
+-			freq_range_hpf[i][0]), 15);
++	/* Close HPF frequency gap between 12 and 12.5 GHz */
++	if (freq >= 12000ULL * HZ_PER_MHZ && freq < 12500ULL * HZ_PER_MHZ) {
++		hpf_state = ADMV8818_STATE_MAX;
++		hpf_band = 3;
++		goto hpf_write;
++	}
+ 
+-		if (freq > freq_range_hpf[i][0] &&
+-		    (freq < freq_range_hpf[i][1] + freq_step)) {
+-			hpf_band = i + 1;
++	min_freq_error = U64_MAX;
++	for (band = ADMV8818_BAND_MIN; band <= ADMV8818_BAND_MAX; band++) {
++		/*
++		 * This band (and therefore all later ones) has a corner
++		 * frequency above the target frequency.
++		 */
++		if (freq_range_hpf[band][ADMV8818_BAND_CORNER_LOW] > freq)
++			break;
+ 
+-			for (j = 1; j <= 16; j++) {
+-				if (freq < (freq_range_hpf[i][0] + (freq_step * j))) {
+-					hpf_step = j - 1;
+-					break;
+-				}
++		freq_step = freq_range_hpf[band][ADMV8818_BAND_CORNER_HIGH] -
++			    freq_range_hpf[band][ADMV8818_BAND_CORNER_LOW];
++		freq_step = div_u64(freq_step, ADMV8818_NUM_STATES - 1);
++
++		for (state = ADMV8818_STATE_MIN; state <= ADMV8818_STATE_MAX; state++) {
++			freq_corner = freq_range_hpf[band][ADMV8818_BAND_CORNER_LOW] +
++				      freq_step * state;
++
++			/*
++			 * This state (and therefore all later ones) has a corner
++			 * frequency above the target frequency.
++			 */
++			if (freq_corner > freq)
++				break;
++
++			freq_error = freq - freq_corner;
++			if (freq_error < min_freq_error) {
++				min_freq_error = freq_error;
++				hpf_state = state;
++				hpf_band = band;
+ 			}
+-			break;
+ 		}
+ 	}
+ 
+-	/* Close HPF frequency gap between 12 and 12.5 GHz */
+-	if (freq >= 12000 * HZ_PER_MHZ && freq <= 12500 * HZ_PER_MHZ) {
+-		hpf_band = 3;
+-		hpf_step = 15;
+-	}
+-
+ hpf_write:
+ 	ret = regmap_update_bits(st->regmap, ADMV8818_REG_WR0_SW,
+ 				 ADMV8818_SW_IN_SET_WR0_MSK |
+@@ -170,7 +200,7 @@ static int __admv8818_hpf_select(struct admv8818_state *st, u64 freq)
+ 
+ 	return regmap_update_bits(st->regmap, ADMV8818_REG_WR0_FILTER,
+ 				  ADMV8818_HPF_WR0_MSK,
+-				  FIELD_PREP(ADMV8818_HPF_WR0_MSK, hpf_step));
++				  FIELD_PREP(ADMV8818_HPF_WR0_MSK, hpf_state));
+ }
+ 
+ static int admv8818_hpf_select(struct admv8818_state *st, u64 freq)
+@@ -186,31 +216,52 @@ static int admv8818_hpf_select(struct admv8818_state *st, u64 freq)
+ 
+ static int __admv8818_lpf_select(struct admv8818_state *st, u64 freq)
+ {
+-	unsigned int lpf_step = 0, lpf_band = 0, i, j;
+-	u64 freq_step;
+-	int ret;
++	int band, state, ret;
++	unsigned int lpf_state = ADMV8818_STATE_MIN, lpf_band = ADMV8818_BAND_BYPASS;
++	u64 freq_error, min_freq_error, freq_corner, freq_step;
+ 
+-	if (freq > freq_range_lpf[3][1])
++	if (freq > freq_range_lpf[ADMV8818_BAND_MAX][ADMV8818_BAND_CORNER_HIGH])
+ 		goto lpf_write;
+ 
+-	if (freq < freq_range_lpf[0][0]) {
+-		lpf_band = 1;
+-
++	if (freq < freq_range_lpf[ADMV8818_BAND_MIN][ADMV8818_BAND_CORNER_LOW]) {
++		lpf_state = ADMV8818_STATE_MIN;
++		lpf_band = ADMV8818_BAND_MIN;
+ 		goto lpf_write;
+ 	}
+ 
+-	for (i = 0; i < 4; i++) {
+-		if (freq > freq_range_lpf[i][0] && freq < freq_range_lpf[i][1]) {
+-			lpf_band = i + 1;
+-			freq_step = div_u64((freq_range_lpf[i][1] - freq_range_lpf[i][0]), 15);
++	min_freq_error = U64_MAX;
++	for (band = ADMV8818_BAND_MAX; band >= ADMV8818_BAND_MIN; --band) {
++		/*
++		 * At this point the highest corner frequency of
++		 * all remaining ranges is below the target.
++		 * LPF corner should be >= the target.
++		 */
++		if (freq > freq_range_lpf[band][ADMV8818_BAND_CORNER_HIGH])
++			break;
++
++		freq_step = freq_range_lpf[band][ADMV8818_BAND_CORNER_HIGH] -
++			    freq_range_lpf[band][ADMV8818_BAND_CORNER_LOW];
++		freq_step = div_u64(freq_step, ADMV8818_NUM_STATES - 1);
++
++		for (state = ADMV8818_STATE_MAX; state >= ADMV8818_STATE_MIN; --state) {
++
++			freq_corner = freq_range_lpf[band][ADMV8818_BAND_CORNER_LOW] +
++				      state * freq_step;
+ 
+-			for (j = 0; j <= 15; j++) {
+-				if (freq < (freq_range_lpf[i][0] + (freq_step * j))) {
+-					lpf_step = j;
+-					break;
+-				}
++			/*
++			 * At this point all remaining states in this band
++			 * place the corner frequency below the target.
++			 * The LPF corner should be >= the target.
++			 */
++			if (freq > freq_corner)
++				break;
++
++			freq_error = freq_corner - freq;
++			if (freq_error < min_freq_error) {
++				min_freq_error = freq_error;
++				lpf_state = state;
++				lpf_band = band;
+ 			}
+-			break;
+ 		}
+ 	}
+ 
+@@ -225,7 +276,7 @@ static int __admv8818_lpf_select(struct admv8818_state *st, u64 freq)
+ 
+ 	return regmap_update_bits(st->regmap, ADMV8818_REG_WR0_FILTER,
+ 				  ADMV8818_LPF_WR0_MSK,
+-				  FIELD_PREP(ADMV8818_LPF_WR0_MSK, lpf_step));
++				  FIELD_PREP(ADMV8818_LPF_WR0_MSK, lpf_state));
+ }
+ 
+ static int admv8818_lpf_select(struct admv8818_state *st, u64 freq)
+@@ -242,16 +293,28 @@ static int admv8818_lpf_select(struct admv8818_state *st, u64 freq)
+ static int admv8818_rfin_band_select(struct admv8818_state *st)
+ {
+ 	int ret;
++	u64 hpf_corner_target, lpf_corner_target;
+ 
+ 	st->cf_hz = clk_get_rate(st->clkin);
+ 
++	/* Check for underflow */
++	if (st->cf_hz > st->hpf_margin_hz)
++		hpf_corner_target = st->cf_hz - st->hpf_margin_hz;
++	else
++		hpf_corner_target = 0;
++
++	/* Check for overflow */
++	lpf_corner_target = st->cf_hz + st->lpf_margin_hz;
++	if (lpf_corner_target < st->cf_hz)
++		lpf_corner_target = U64_MAX;
++
+ 	mutex_lock(&st->lock);
+ 
+-	ret = __admv8818_hpf_select(st, st->cf_hz);
++	ret = __admv8818_hpf_select(st, hpf_corner_target);
+ 	if (ret)
+ 		goto exit;
+ 
+-	ret = __admv8818_lpf_select(st, st->cf_hz);
++	ret = __admv8818_lpf_select(st, lpf_corner_target);
+ exit:
+ 	mutex_unlock(&st->lock);
+ 	return ret;
+@@ -278,8 +341,11 @@ static int __admv8818_read_hpf_freq(struct admv8818_state *st, u64 *hpf_freq)
+ 
+ 	hpf_state = FIELD_GET(ADMV8818_HPF_WR0_MSK, data);
+ 
+-	*hpf_freq = div_u64(freq_range_hpf[hpf_band - 1][1] - freq_range_hpf[hpf_band - 1][0], 15);
+-	*hpf_freq = freq_range_hpf[hpf_band - 1][0] + (*hpf_freq * hpf_state);
++	*hpf_freq = freq_range_hpf[hpf_band][ADMV8818_BAND_CORNER_HIGH] -
++		    freq_range_hpf[hpf_band][ADMV8818_BAND_CORNER_LOW];
++	*hpf_freq = div_u64(*hpf_freq, ADMV8818_NUM_STATES - 1);
++	*hpf_freq = freq_range_hpf[hpf_band][ADMV8818_BAND_CORNER_LOW] +
++		    (*hpf_freq * hpf_state);
+ 
+ 	return ret;
+ }
+@@ -316,8 +382,11 @@ static int __admv8818_read_lpf_freq(struct admv8818_state *st, u64 *lpf_freq)
+ 
+ 	lpf_state = FIELD_GET(ADMV8818_LPF_WR0_MSK, data);
+ 
+-	*lpf_freq = div_u64(freq_range_lpf[lpf_band - 1][1] - freq_range_lpf[lpf_band - 1][0], 15);
+-	*lpf_freq = freq_range_lpf[lpf_band - 1][0] + (*lpf_freq * lpf_state);
++	*lpf_freq = freq_range_lpf[lpf_band][ADMV8818_BAND_CORNER_HIGH] -
++		    freq_range_lpf[lpf_band][ADMV8818_BAND_CORNER_LOW];
++	*lpf_freq = div_u64(*lpf_freq, ADMV8818_NUM_STATES - 1);
++	*lpf_freq = freq_range_lpf[lpf_band][ADMV8818_BAND_CORNER_LOW] +
++		    (*lpf_freq * lpf_state);
+ 
+ 	return ret;
+ }
+@@ -333,6 +402,19 @@ static int admv8818_read_lpf_freq(struct admv8818_state *st, u64 *lpf_freq)
+ 	return ret;
+ }
+ 
++static int admv8818_write_raw_get_fmt(struct iio_dev *indio_dev,
++				      struct iio_chan_spec const *chan,
++				      long mask)
++{
++	switch (mask) {
++	case IIO_CHAN_INFO_LOW_PASS_FILTER_3DB_FREQUENCY:
++	case IIO_CHAN_INFO_HIGH_PASS_FILTER_3DB_FREQUENCY:
++		return IIO_VAL_INT_64;
++	default:
++		return -EINVAL;
++	}
++}
++
+ static int admv8818_write_raw(struct iio_dev *indio_dev,
+ 			      struct iio_chan_spec const *chan,
+ 			      int val, int val2, long info)
+@@ -341,6 +423,9 @@ static int admv8818_write_raw(struct iio_dev *indio_dev,
+ 
+ 	u64 freq = ((u64)val2 << 32 | (u32)val);
+ 
++	if ((s64)freq < 0)
++		return -EINVAL;
++
+ 	switch (info) {
+ 	case IIO_CHAN_INFO_LOW_PASS_FILTER_3DB_FREQUENCY:
+ 		return admv8818_lpf_select(st, freq);
+@@ -502,6 +587,7 @@ static int admv8818_set_mode(struct iio_dev *indio_dev,
+ 
+ static const struct iio_info admv8818_info = {
+ 	.write_raw = admv8818_write_raw,
++	.write_raw_get_fmt = admv8818_write_raw_get_fmt,
+ 	.read_raw = admv8818_read_raw,
+ 	.debugfs_reg_access = &admv8818_reg_access,
+ };
+@@ -641,6 +727,32 @@ static int admv8818_clk_setup(struct admv8818_state *st)
+ 	return devm_add_action_or_reset(&spi->dev, admv8818_clk_notifier_unreg, st);
+ }
+ 
++static int admv8818_read_properties(struct admv8818_state *st)
++{
++	struct spi_device *spi = st->spi;
++	u32 mhz;
++	int ret;
++
++	ret = device_property_read_u32(&spi->dev, "adi,lpf-margin-mhz", &mhz);
++	if (ret == 0)
++		st->lpf_margin_hz = (u64)mhz * HZ_PER_MHZ;
++	else if (ret == -EINVAL)
++		st->lpf_margin_hz = 0;
++	else
++		return ret;
++
++	ret = device_property_read_u32(&spi->dev, "adi,hpf-margin-mhz", &mhz);
++	if (ret == 0)
++		st->hpf_margin_hz = (u64)mhz * HZ_PER_MHZ;
++	else if (ret == -EINVAL)
++		st->hpf_margin_hz = 0;
++	else if (ret < 0)
++		return ret;
++
++	return 0;
++}
++
+ static int admv8818_probe(struct spi_device *spi)
+ {
+ 	struct iio_dev *indio_dev;
+@@ -672,6 +784,10 @@ static int admv8818_probe(struct spi_device *spi)
+ 
+ 	mutex_init(&st->lock);
+ 
++	ret = admv8818_read_properties(st);
++	if (ret)
++		return ret;
++
+ 	ret = admv8818_init(st);
+ 	if (ret)
+ 		return ret;
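
The admv8818 rewrite replaces the old step-quantization loops with a
search that minimizes the corner-frequency error across all bands and
states, and the new margin properties shift the HPF/LPF targets away
from the center frequency. Since u64 arithmetic does not saturate, the
targets are clamped by hand, as the band-select hunk above does:

	/* hedged restatement of the clamping logic */
	hpf_target = (cf > hpf_margin) ? cf - hpf_margin : 0;	/* no underflow */

	lpf_target = cf + lpf_margin;
	if (lpf_target < cf)		/* wrapped past U64_MAX */
		lpf_target = U64_MAX;
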
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index 142170473e7536..e64cbd034a2a19 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -167,7 +167,7 @@ struct cm_port {
+ struct cm_device {
+ 	struct kref kref;
+ 	struct list_head list;
+-	spinlock_t mad_agent_lock;
++	rwlock_t mad_agent_lock;
+ 	struct ib_device *ib_device;
+ 	u8 ack_delay;
+ 	int going_down;
+@@ -285,7 +285,7 @@ static struct ib_mad_send_buf *cm_alloc_msg(struct cm_id_private *cm_id_priv)
+ 	if (!cm_id_priv->av.port)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	spin_lock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
++	read_lock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
+ 	mad_agent = cm_id_priv->av.port->mad_agent;
+ 	if (!mad_agent) {
+ 		m = ERR_PTR(-EINVAL);
+@@ -311,7 +311,7 @@ static struct ib_mad_send_buf *cm_alloc_msg(struct cm_id_private *cm_id_priv)
+ 	m->ah = ah;
+ 
+ out:
+-	spin_unlock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
++	read_unlock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
+ 	return m;
+ }
+ 
+@@ -1297,10 +1297,10 @@ static __be64 cm_form_tid(struct cm_id_private *cm_id_priv)
+ 	if (!cm_id_priv->av.port)
+ 		return cpu_to_be64(low_tid);
+ 
+-	spin_lock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
++	read_lock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
+ 	if (cm_id_priv->av.port->mad_agent)
+ 		hi_tid = ((u64)cm_id_priv->av.port->mad_agent->hi_tid) << 32;
+-	spin_unlock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
++	read_unlock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
+ 	return cpu_to_be64(hi_tid | low_tid);
+ }
+ 
+@@ -3786,7 +3786,8 @@ static void cm_process_send_error(struct cm_id_private *cm_id_priv,
+ 	spin_lock_irq(&cm_id_priv->lock);
+ 	if (msg != cm_id_priv->msg) {
+ 		spin_unlock_irq(&cm_id_priv->lock);
+-		cm_free_priv_msg(msg);
++		cm_free_msg(msg);
++		cm_deref_id(cm_id_priv);
+ 		return;
+ 	}
+ 	cm_free_priv_msg(msg);
+@@ -4378,7 +4379,7 @@ static int cm_add_one(struct ib_device *ib_device)
+ 		return -ENOMEM;
+ 
+ 	kref_init(&cm_dev->kref);
+-	spin_lock_init(&cm_dev->mad_agent_lock);
++	rwlock_init(&cm_dev->mad_agent_lock);
+ 	cm_dev->ib_device = ib_device;
+ 	cm_dev->ack_delay = ib_device->attrs.local_ca_ack_delay;
+ 	cm_dev->going_down = 0;
+@@ -4494,9 +4495,9 @@ static void cm_remove_one(struct ib_device *ib_device, void *client_data)
+ 		 * The above ensures no call paths from the work are running,
+ 		 * the remaining paths all take the mad_agent_lock.
+ 		 */
+-		spin_lock(&cm_dev->mad_agent_lock);
++		write_lock(&cm_dev->mad_agent_lock);
+ 		port->mad_agent = NULL;
+-		spin_unlock(&cm_dev->mad_agent_lock);
++		write_unlock(&cm_dev->mad_agent_lock);
+ 		ib_unregister_mad_agent(mad_agent);
+ 		ib_port_unregister_client_groups(ib_device, i,
+ 						 cm_counter_groups);
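
Converting mad_agent_lock from a spinlock to an rwlock lets the hot
paths (message allocation, TID formation) run concurrently as readers,
while the sole writer, device removal, still excludes them all when it
clears the agent pointer. The usage split:

	read_lock(&cm_dev->mad_agent_lock);	/* many readers at once */
	agent = port->mad_agent;		/* may be NULL after removal */
	...
	read_unlock(&cm_dev->mad_agent_lock);

	write_lock(&cm_dev->mad_agent_lock);	/* exclusive */
	port->mad_agent = NULL;
	write_unlock(&cm_dev->mad_agent_lock);
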
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index ab31eefa916b3e..274cfbd5aaba76 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -5245,7 +5245,8 @@ static int cma_netevent_callback(struct notifier_block *self,
+ 			   neigh->ha, ETH_ALEN))
+ 			continue;
+ 		cma_id_get(current_id);
+-		queue_work(cma_wq, &current_id->id.net_work);
++		if (!queue_work(cma_wq, &current_id->id.net_work))
++			cma_id_put(current_id);
+ 	}
+ out:
+ 	spin_unlock_irqrestore(&id_table_lock, flags);
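
The cma fix relies on queue_work() returning false when the work item
was already pending: in that case no new execution is scheduled, so the
reference taken for this enqueue must be dropped on the spot or the ID
leaks. The canonical shape (obj_get/obj_put hypothetical):

	obj_get(obj);			/* reference for the queued work */
	if (!queue_work(wq, &obj->work))
		obj_put(obj);		/* already queued: drop our ref */
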
+diff --git a/drivers/infiniband/hw/bnxt_re/debugfs.c b/drivers/infiniband/hw/bnxt_re/debugfs.c
+index af91d16c3c77f5..e632f1661b9295 100644
+--- a/drivers/infiniband/hw/bnxt_re/debugfs.c
++++ b/drivers/infiniband/hw/bnxt_re/debugfs.c
+@@ -170,6 +170,9 @@ static int map_cc_config_offset_gen0_ext0(u32 offset, struct bnxt_qplib_cc_param
+ 	case CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_TCP_CP:
+ 		*val =  ccparam->tcp_cp;
+ 		break;
++	case CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_INACTIVITY_CP:
++		*val = ccparam->inact_th;
++		break;
+ 	default:
+ 		return -EINVAL;
+ 	}
+@@ -203,7 +206,7 @@ static ssize_t bnxt_re_cc_config_get(struct file *filp, char __user *buffer,
+ 	return simple_read_from_buffer(buffer, usr_buf_len, ppos, (u8 *)(buf), rc);
+ }
+ 
+-static void bnxt_re_fill_gen0_ext0(struct bnxt_qplib_cc_param *ccparam, u32 offset, u32 val)
++static int bnxt_re_fill_gen0_ext0(struct bnxt_qplib_cc_param *ccparam, u32 offset, u32 val)
+ {
+ 	u32 modify_mask;
+ 
+@@ -247,7 +250,9 @@ static void bnxt_re_fill_gen0_ext0(struct bnxt_qplib_cc_param *ccparam, u32 offs
+ 		ccparam->tcp_cp = val;
+ 		break;
+ 	case CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_TX_QUEUE:
++		return -EOPNOTSUPP;
+ 	case CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_INACTIVITY_CP:
++		ccparam->inact_th = val;
+ 		break;
+ 	case CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_TIME_PER_PHASE:
+ 		ccparam->time_pph = val;
+@@ -258,17 +263,20 @@ static void bnxt_re_fill_gen0_ext0(struct bnxt_qplib_cc_param *ccparam, u32 offs
+ 	}
+ 
+ 	ccparam->mask = modify_mask;
++	return 0;
+ }
+ 
+ static int bnxt_re_configure_cc(struct bnxt_re_dev *rdev, u32 gen_ext, u32 offset, u32 val)
+ {
+ 	struct bnxt_qplib_cc_param ccparam = { };
++	int rc;
+ 
+-	/* Supporting only Gen 0 now */
+-	if (gen_ext == CC_CONFIG_GEN0_EXT0)
+-		bnxt_re_fill_gen0_ext0(&ccparam, offset, val);
+-	else
+-		return -EINVAL;
++	if (gen_ext != CC_CONFIG_GEN0_EXT0)
++		return -EOPNOTSUPP;
++
++	rc = bnxt_re_fill_gen0_ext0(&ccparam, offset, val);
++	if (rc)
++		return rc;
+ 
+ 	bnxt_qplib_modify_cc(&rdev->qplib_res, &ccparam);
+ 	return 0;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_ah.c b/drivers/infiniband/hw/hns/hns_roce_ah.c
+index 4fc5b9d5fea87e..307c35888b3003 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_ah.c
++++ b/drivers/infiniband/hw/hns/hns_roce_ah.c
+@@ -33,7 +33,6 @@
+ #include <linux/pci.h>
+ #include <rdma/ib_addr.h>
+ #include <rdma/ib_cache.h>
+-#include "hnae3.h"
+ #include "hns_roce_device.h"
+ #include "hns_roce_hw_v2.h"
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 160e8927d364e1..59352d1b62099f 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -43,7 +43,6 @@
+ #include <rdma/ib_umem.h>
+ #include <rdma/uverbs_ioctl.h>
+ 
+-#include "hnae3.h"
+ #include "hns_roce_common.h"
+ #include "hns_roce_device.h"
+ #include "hns_roce_cmd.h"
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index 91a5665465ffba..bc7466830eaf9d 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -34,6 +34,7 @@
+ #define _HNS_ROCE_HW_V2_H
+ 
+ #include <linux/bitops.h>
++#include "hnae3.h"
+ 
+ #define HNS_ROCE_V2_MAX_RC_INL_INN_SZ		32
+ #define HNS_ROCE_V2_MTT_ENTRY_SZ		64
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index 8d0b63d4b50a6c..e7a497cc125cc3 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -37,7 +37,6 @@
+ #include <rdma/ib_smi.h>
+ #include <rdma/ib_user_verbs.h>
+ #include <rdma/ib_cache.h>
+-#include "hnae3.h"
+ #include "hns_roce_common.h"
+ #include "hns_roce_device.h"
+ #include "hns_roce_hem.h"
+diff --git a/drivers/infiniband/hw/hns/hns_roce_restrack.c b/drivers/infiniband/hw/hns/hns_roce_restrack.c
+index 356d9881694973..f637b73b946e44 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_restrack.c
++++ b/drivers/infiniband/hw/hns/hns_roce_restrack.c
+@@ -4,7 +4,6 @@
+ #include <rdma/rdma_cm.h>
+ #include <rdma/restrack.h>
+ #include <uapi/rdma/rdma_netlink.h>
+-#include "hnae3.h"
+ #include "hns_roce_common.h"
+ #include "hns_roce_device.h"
+ #include "hns_roce_hw_v2.h"
+diff --git a/drivers/infiniband/hw/mlx5/qpc.c b/drivers/infiniband/hw/mlx5/qpc.c
+index d3dcc272200afa..146d03ae40bd9f 100644
+--- a/drivers/infiniband/hw/mlx5/qpc.c
++++ b/drivers/infiniband/hw/mlx5/qpc.c
+@@ -21,8 +21,10 @@ mlx5_get_rsc(struct mlx5_qp_table *table, u32 rsn)
+ 	spin_lock_irqsave(&table->lock, flags);
+ 
+ 	common = radix_tree_lookup(&table->tree, rsn);
+-	if (common)
++	if (common && !common->invalid)
+ 		refcount_inc(&common->refcount);
++	else
++		common = NULL;
+ 
+ 	spin_unlock_irqrestore(&table->lock, flags);
+ 
+@@ -178,6 +180,18 @@ static int create_resource_common(struct mlx5_ib_dev *dev,
+ 	return 0;
+ }
+ 
++static void modify_resource_common_state(struct mlx5_ib_dev *dev,
++					 struct mlx5_core_qp *qp,
++					 bool invalid)
++{
++	struct mlx5_qp_table *table = &dev->qp_table;
++	unsigned long flags;
++
++	spin_lock_irqsave(&table->lock, flags);
++	qp->common.invalid = invalid;
++	spin_unlock_irqrestore(&table->lock, flags);
++}
++
+ static void destroy_resource_common(struct mlx5_ib_dev *dev,
+ 				    struct mlx5_core_qp *qp)
+ {
+@@ -609,8 +623,20 @@ int mlx5_core_create_rq_tracked(struct mlx5_ib_dev *dev, u32 *in, int inlen,
+ int mlx5_core_destroy_rq_tracked(struct mlx5_ib_dev *dev,
+ 				 struct mlx5_core_qp *rq)
+ {
++	int ret;
++
++	/* RQ destruction can be retried if it fails, so mark the common
++	 * resource as invalid first and only tear the rest down once FW
++	 * destruction has completed successfully.
++	 */
++	modify_resource_common_state(dev, rq, true);
++	ret = destroy_rq_tracked(dev, rq->qpn, rq->uid);
++	if (ret) {
++		modify_resource_common_state(dev, rq, false);
++		return ret;
++	}
+ 	destroy_resource_common(dev, rq);
+-	return destroy_rq_tracked(dev, rq->qpn, rq->uid);
++	return 0;
+ }
+ 
+ static void destroy_sq_tracked(struct mlx5_ib_dev *dev, u32 sqn, u16 uid)
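
The mlx5 change makes RQ destruction retry-safe: the resource is first
marked invalid under the table lock, so concurrent mlx5_get_rsc()
lookups stop handing out references, and the mark is rolled back if
firmware destruction fails, leaving the object usable for another
attempt. In outline (mark_invalid/fw_destroy are stand-in names):

	mark_invalid(res, true);	/* lookups now skip it */
	err = fw_destroy(res);
	if (err) {
		mark_invalid(res, false);	/* still alive, may retry */
		return err;
	}
	teardown_common(res);		/* safe: no new refs possible */
	return 0;
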
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index 7975fb0e2782f0..f2af3e0aef35b5 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -811,7 +811,12 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
+ 	spin_unlock_irqrestore(&qp->state_lock, flags);
+ 	qp->qp_timeout_jiffies = 0;
+ 
+-	if (qp_type(qp) == IB_QPT_RC) {
++	/* timer_setup() initializes .function. If .function is NULL,
++	 * timer_setup() was never called and the timers were never
++	 * initialized, so do not delete them.
++	 */
++	if (qp_type(qp) == IB_QPT_RC && qp->retrans_timer.function &&
++		qp->rnr_nak_timer.function) {
+ 		timer_delete_sync(&qp->retrans_timer);
+ 		timer_delete_sync(&qp->rnr_nak_timer);
+ 	}
+diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
+index cd750f512deee2..bad585b45a31df 100644
+--- a/drivers/iommu/Kconfig
++++ b/drivers/iommu/Kconfig
+@@ -199,7 +199,6 @@ source "drivers/iommu/riscv/Kconfig"
+ config IRQ_REMAP
+ 	bool "Support for Interrupt Remapping"
+ 	depends on X86_64 && X86_IO_APIC && PCI_MSI && ACPI
+-	select DMAR_TABLE if INTEL_IOMMU
+ 	help
+ 	  Supports Interrupt remapping for IO-APIC and MSI devices.
+ 	  To use x2apic mode in the CPU's which support x2APIC enhancements or
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index 48d910399a1ba6..be8d0f7db617d0 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -2953,7 +2953,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
+ 	smmu = master->smmu;
+ 
+ 	if (smmu_domain->smmu != smmu)
+-		return ret;
++		return -EINVAL;
+ 
+ 	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+ 		cdptr = arm_smmu_alloc_cd_ptr(master, IOMMU_NO_PASID);
+diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
+index 7632c80edea63a..396c4f6f5a5bd9 100644
+--- a/drivers/iommu/io-pgtable-arm.c
++++ b/drivers/iommu/io-pgtable-arm.c
+@@ -13,6 +13,7 @@
+ #include <linux/bitops.h>
+ #include <linux/io-pgtable.h>
+ #include <linux/kernel.h>
++#include <linux/device/faux.h>
+ #include <linux/sizes.h>
+ #include <linux/slab.h>
+ #include <linux/types.h>
+@@ -1433,15 +1434,17 @@ static int __init arm_lpae_do_selftests(void)
+ 	};
+ 
+ 	int i, j, k, pass = 0, fail = 0;
+-	struct device dev;
++	struct faux_device *dev;
+ 	struct io_pgtable_cfg cfg = {
+ 		.tlb = &dummy_tlb_ops,
+ 		.coherent_walk = true,
+-		.iommu_dev = &dev,
+ 	};
+ 
+-	/* __arm_lpae_alloc_pages() merely needs dev_to_node() to work */
+-	set_dev_node(&dev, NUMA_NO_NODE);
++	dev = faux_device_create("io-pgtable-test", NULL, 0);
++	if (!dev)
++		return -ENOMEM;
++
++	cfg.iommu_dev = &dev->dev;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(pgsize); ++i) {
+ 		for (j = 0; j < ARRAY_SIZE(address_size); ++j) {
+@@ -1461,6 +1464,8 @@ static int __init arm_lpae_do_selftests(void)
+ 	}
+ 
+ 	pr_info("selftest: completed with %d PASS %d FAIL\n", pass, fail);
++	faux_device_destroy(dev);
++
+ 	return fail ? -EFAULT : 0;
+ }
+ subsys_initcall(arm_lpae_do_selftests);
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 5bc2fc969494f5..e4628d96216102 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -2399,6 +2399,7 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
+ 	unsigned int pgsize_idx, pgsize_idx_next;
+ 	unsigned long pgsizes;
+ 	size_t offset, pgsize, pgsize_next;
++	size_t offset_end;
+ 	unsigned long addr_merge = paddr | iova;
+ 
+ 	/* Page sizes supported by the hardware and small enough for @size */
+@@ -2439,7 +2440,8 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
+ 	 * If size is big enough to accommodate the larger page, reduce
+ 	 * the number of smaller pages.
+ 	 */
+-	if (offset + pgsize_next <= size)
++	if (!check_add_overflow(offset, pgsize_next, &offset_end) &&
++	    offset_end <= size)
+ 		size = offset;
+ 
+ out_set_count:
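
The iommu_pgsize() fix guards a size_t addition that can wrap on 32-bit
systems: check_add_overflow() performs the addition into its third
argument and returns true if the result overflowed, so the size
comparison only ever runs on a valid sum. In isolation:

	size_t end;

	if (!check_add_overflow(offset, pgsize_next, &end) && end <= size)
		size = offset;	/* larger page fits: drop the small ones */
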
+diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
+index e424b279a8cd9b..90341b24a81155 100644
+--- a/drivers/iommu/ipmmu-vmsa.c
++++ b/drivers/iommu/ipmmu-vmsa.c
+@@ -1090,7 +1090,8 @@ static int ipmmu_probe(struct platform_device *pdev)
+ 	if (mmu->features->has_cache_leaf_nodes && ipmmu_is_root(mmu))
+ 		return 0;
+ 
+-	ret = iommu_device_sysfs_add(&mmu->iommu, &pdev->dev, NULL, dev_name(&pdev->dev));
++	ret = iommu_device_sysfs_add(&mmu->iommu, &pdev->dev, NULL, "%s",
++				     dev_name(&pdev->dev));
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/mailbox/Kconfig b/drivers/mailbox/Kconfig
+index ed52db272f4d05..e8445cda7c6182 100644
+--- a/drivers/mailbox/Kconfig
++++ b/drivers/mailbox/Kconfig
+@@ -191,8 +191,8 @@ config POLARFIRE_SOC_MAILBOX
+ 
+ config MCHP_SBI_IPC_MBOX
+ 	tristate "Microchip Inter-processor Communication (IPC) SBI driver"
+-	depends on RISCV_SBI || COMPILE_TEST
+-	depends on ARCH_MICROCHIP
++	depends on RISCV_SBI
++	depends on ARCH_MICROCHIP || COMPILE_TEST
+ 	help
+ 	  Mailbox implementation for Microchip devices with an
+ 	  Inter-process communication (IPC) controller.
+diff --git a/drivers/mailbox/imx-mailbox.c b/drivers/mailbox/imx-mailbox.c
+index 6ef8338add0d61..6778afc64a048c 100644
+--- a/drivers/mailbox/imx-mailbox.c
++++ b/drivers/mailbox/imx-mailbox.c
+@@ -226,7 +226,7 @@ static int imx_mu_generic_tx(struct imx_mu_priv *priv,
+ {
+ 	u32 *arg = data;
+ 	u32 val;
+-	int ret;
++	int ret, count;
+ 
+ 	switch (cp->type) {
+ 	case IMX_MU_TYPE_TX:
+@@ -240,11 +240,20 @@ static int imx_mu_generic_tx(struct imx_mu_priv *priv,
+ 	case IMX_MU_TYPE_TXDB_V2:
+ 		imx_mu_write(priv, IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx),
+ 			     priv->dcfg->xCR[IMX_MU_GCR]);
+-		ret = readl_poll_timeout(priv->base + priv->dcfg->xCR[IMX_MU_GCR], val,
+-					 !(val & IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx)),
+-					 0, 1000);
+-		if (ret)
+-			dev_warn_ratelimited(priv->dev, "channel type: %d failure\n", cp->type);
++		ret = -ETIMEDOUT;
++		count = 0;
++		while (ret && (count < 10)) {
++			ret = readl_poll_timeout(priv->base + priv->dcfg->xCR[IMX_MU_GCR],
++						 val,
++						 !(val & IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx)),
++						 0, 10000);
++
++			if (ret) {
++				dev_warn_ratelimited(priv->dev,
++						     "channel type: %d timeout, %d times, retry\n",
++						     cp->type, ++count);
++			}
++		}
+ 		break;
+ 	default:
+ 		dev_warn_ratelimited(priv->dev, "Send data on wrong channel type: %d\n", cp->type);
+diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
+index d186865b8dce64..ab4e8d1954a16e 100644
+--- a/drivers/mailbox/mtk-cmdq-mailbox.c
++++ b/drivers/mailbox/mtk-cmdq-mailbox.c
+@@ -92,18 +92,6 @@ struct gce_plat {
+ 	u32 gce_num;
+ };
+ 
+-static void cmdq_sw_ddr_enable(struct cmdq *cmdq, bool enable)
+-{
+-	WARN_ON(clk_bulk_enable(cmdq->pdata->gce_num, cmdq->clocks));
+-
+-	if (enable)
+-		writel(GCE_DDR_EN | GCE_CTRL_BY_SW, cmdq->base + GCE_GCTL_VALUE);
+-	else
+-		writel(GCE_CTRL_BY_SW, cmdq->base + GCE_GCTL_VALUE);
+-
+-	clk_bulk_disable(cmdq->pdata->gce_num, cmdq->clocks);
+-}
+-
+ u8 cmdq_get_shift_pa(struct mbox_chan *chan)
+ {
+ 	struct cmdq *cmdq = container_of(chan->mbox, struct cmdq, mbox);
+@@ -112,6 +100,19 @@ u8 cmdq_get_shift_pa(struct mbox_chan *chan)
+ }
+ EXPORT_SYMBOL(cmdq_get_shift_pa);
+ 
++static void cmdq_gctl_value_toggle(struct cmdq *cmdq, bool ddr_enable)
++{
++	u32 val = cmdq->pdata->control_by_sw ? GCE_CTRL_BY_SW : 0;
++
++	if (!cmdq->pdata->control_by_sw && !cmdq->pdata->sw_ddr_en)
++		return;
++
++	if (cmdq->pdata->sw_ddr_en && ddr_enable)
++		val |= GCE_DDR_EN;
++
++	writel(val, cmdq->base + GCE_GCTL_VALUE);
++}
++
+ static int cmdq_thread_suspend(struct cmdq *cmdq, struct cmdq_thread *thread)
+ {
+ 	u32 status;
+@@ -140,16 +141,10 @@ static void cmdq_thread_resume(struct cmdq_thread *thread)
+ static void cmdq_init(struct cmdq *cmdq)
+ {
+ 	int i;
+-	u32 gctl_regval = 0;
+ 
+ 	WARN_ON(clk_bulk_enable(cmdq->pdata->gce_num, cmdq->clocks));
+-	if (cmdq->pdata->control_by_sw)
+-		gctl_regval = GCE_CTRL_BY_SW;
+-	if (cmdq->pdata->sw_ddr_en)
+-		gctl_regval |= GCE_DDR_EN;
+ 
+-	if (gctl_regval)
+-		writel(gctl_regval, cmdq->base + GCE_GCTL_VALUE);
++	cmdq_gctl_value_toggle(cmdq, true);
+ 
+ 	writel(CMDQ_THR_ACTIVE_SLOT_CYCLES, cmdq->base + CMDQ_THR_SLOT_CYCLES);
+ 	for (i = 0; i <= CMDQ_MAX_EVENT; i++)
+@@ -315,14 +310,21 @@ static irqreturn_t cmdq_irq_handler(int irq, void *dev)
+ static int cmdq_runtime_resume(struct device *dev)
+ {
+ 	struct cmdq *cmdq = dev_get_drvdata(dev);
++	int ret;
+ 
+-	return clk_bulk_enable(cmdq->pdata->gce_num, cmdq->clocks);
++	ret = clk_bulk_enable(cmdq->pdata->gce_num, cmdq->clocks);
++	if (ret)
++		return ret;
++
++	cmdq_gctl_value_toggle(cmdq, true);
++	return 0;
+ }
+ 
+ static int cmdq_runtime_suspend(struct device *dev)
+ {
+ 	struct cmdq *cmdq = dev_get_drvdata(dev);
+ 
++	cmdq_gctl_value_toggle(cmdq, false);
+ 	clk_bulk_disable(cmdq->pdata->gce_num, cmdq->clocks);
+ 	return 0;
+ }
+@@ -347,9 +349,6 @@ static int cmdq_suspend(struct device *dev)
+ 	if (task_running)
+ 		dev_warn(dev, "exist running task(s) in suspend\n");
+ 
+-	if (cmdq->pdata->sw_ddr_en)
+-		cmdq_sw_ddr_enable(cmdq, false);
+-
+ 	return pm_runtime_force_suspend(dev);
+ }
+ 
+@@ -360,9 +359,6 @@ static int cmdq_resume(struct device *dev)
+ 	WARN_ON(pm_runtime_force_resume(dev));
+ 	cmdq->suspended = false;
+ 
+-	if (cmdq->pdata->sw_ddr_en)
+-		cmdq_sw_ddr_enable(cmdq, true);
+-
+ 	return 0;
+ }
+ 
+@@ -370,9 +366,6 @@ static void cmdq_remove(struct platform_device *pdev)
+ {
+ 	struct cmdq *cmdq = platform_get_drvdata(pdev);
+ 
+-	if (cmdq->pdata->sw_ddr_en)
+-		cmdq_sw_ddr_enable(cmdq, false);
+-
+ 	if (!IS_ENABLED(CONFIG_PM))
+ 		cmdq_runtime_suspend(&pdev->dev);
+ 
+diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
+index 3637761f35853c..f3a3f2ef632261 100644
+--- a/drivers/md/dm-core.h
++++ b/drivers/md/dm-core.h
+@@ -141,6 +141,7 @@ struct mapped_device {
+ #ifdef CONFIG_BLK_DEV_ZONED
+ 	unsigned int nr_zones;
+ 	void *zone_revalidate_map;
++	struct task_struct *revalidate_map_task;
+ #endif
+ 
+ #ifdef CONFIG_IMA
+diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
+index b690905ab89ffb..347881f323d5bc 100644
+--- a/drivers/md/dm-flakey.c
++++ b/drivers/md/dm-flakey.c
+@@ -47,14 +47,15 @@ enum feature_flag_bits {
+ };
+ 
+ struct per_bio_data {
+-	bool bio_submitted;
++	bool bio_can_corrupt;
++	struct bvec_iter saved_iter;
+ };
+ 
+ static int parse_features(struct dm_arg_set *as, struct flakey_c *fc,
+ 			  struct dm_target *ti)
+ {
+-	int r;
+-	unsigned int argc;
++	int r = 0;
++	unsigned int argc = 0;
+ 	const char *arg_name;
+ 
+ 	static const struct dm_arg _args[] = {
+@@ -65,14 +66,13 @@ static int parse_features(struct dm_arg_set *as, struct flakey_c *fc,
+ 		{0, PROBABILITY_BASE, "Invalid random corrupt argument"},
+ 	};
+ 
+-	/* No feature arguments supplied. */
+-	if (!as->argc)
+-		return 0;
+-
+-	r = dm_read_arg_group(_args, as, &argc, &ti->error);
+-	if (r)
++	if (as->argc && (r = dm_read_arg_group(_args, as, &argc, &ti->error)))
+ 		return r;
+ 
++	/* No feature arguments supplied. */
++	if (!argc)
++		goto error_all_io;
++
+ 	while (argc) {
+ 		arg_name = dm_shift_arg(as);
+ 		argc--;
+@@ -217,6 +217,7 @@ static int parse_features(struct dm_arg_set *as, struct flakey_c *fc,
+ 	if (!fc->corrupt_bio_byte && !test_bit(ERROR_READS, &fc->flags) &&
+ 	    !test_bit(DROP_WRITES, &fc->flags) && !test_bit(ERROR_WRITES, &fc->flags) &&
+ 	    !fc->random_read_corrupt && !fc->random_write_corrupt) {
++error_all_io:
+ 		set_bit(ERROR_WRITES, &fc->flags);
+ 		set_bit(ERROR_READS, &fc->flags);
+ 	}
+@@ -339,7 +340,8 @@ static void flakey_map_bio(struct dm_target *ti, struct bio *bio)
+ }
+ 
+ static void corrupt_bio_common(struct bio *bio, unsigned int corrupt_bio_byte,
+-			       unsigned char corrupt_bio_value)
++			       unsigned char corrupt_bio_value,
++			       struct bvec_iter start)
+ {
+ 	struct bvec_iter iter;
+ 	struct bio_vec bvec;
+@@ -348,7 +350,7 @@ static void corrupt_bio_common(struct bio *bio, unsigned int corrupt_bio_byte,
+ 	 * Overwrite the Nth byte of the bio's data, on whichever page
+ 	 * it falls.
+ 	 */
+-	bio_for_each_segment(bvec, bio, iter) {
++	__bio_for_each_segment(bvec, bio, iter, start) {
+ 		if (bio_iter_len(bio, iter) > corrupt_bio_byte) {
+ 			unsigned char *segment = bvec_kmap_local(&bvec);
+ 			segment[corrupt_bio_byte] = corrupt_bio_value;
+@@ -357,36 +359,31 @@ static void corrupt_bio_common(struct bio *bio, unsigned int corrupt_bio_byte,
+ 				"(rw=%c bi_opf=%u bi_sector=%llu size=%u)\n",
+ 				bio, corrupt_bio_value, corrupt_bio_byte,
+ 				(bio_data_dir(bio) == WRITE) ? 'w' : 'r', bio->bi_opf,
+-				(unsigned long long)bio->bi_iter.bi_sector,
+-				bio->bi_iter.bi_size);
++				(unsigned long long)start.bi_sector,
++				start.bi_size);
+ 			break;
+ 		}
+ 		corrupt_bio_byte -= bio_iter_len(bio, iter);
+ 	}
+ }
+ 
+-static void corrupt_bio_data(struct bio *bio, struct flakey_c *fc)
++static void corrupt_bio_data(struct bio *bio, struct flakey_c *fc,
++			     struct bvec_iter start)
+ {
+ 	unsigned int corrupt_bio_byte = fc->corrupt_bio_byte - 1;
+ 
+-	if (!bio_has_data(bio))
+-		return;
+-
+-	corrupt_bio_common(bio, corrupt_bio_byte, fc->corrupt_bio_value);
++	corrupt_bio_common(bio, corrupt_bio_byte, fc->corrupt_bio_value, start);
+ }
+ 
+-static void corrupt_bio_random(struct bio *bio)
++static void corrupt_bio_random(struct bio *bio, struct bvec_iter start)
+ {
+ 	unsigned int corrupt_byte;
+ 	unsigned char corrupt_value;
+ 
+-	if (!bio_has_data(bio))
+-		return;
+-
+-	corrupt_byte = get_random_u32() % bio->bi_iter.bi_size;
++	corrupt_byte = get_random_u32() % start.bi_size;
+ 	corrupt_value = get_random_u8();
+ 
+-	corrupt_bio_common(bio, corrupt_byte, corrupt_value);
++	corrupt_bio_common(bio, corrupt_byte, corrupt_value, start);
+ }
+ 
+ static void clone_free(struct bio *clone)
+@@ -481,7 +478,7 @@ static int flakey_map(struct dm_target *ti, struct bio *bio)
+ 	unsigned int elapsed;
+ 	struct per_bio_data *pb = dm_per_bio_data(bio, sizeof(struct per_bio_data));
+ 
+-	pb->bio_submitted = false;
++	pb->bio_can_corrupt = false;
+ 
+ 	if (op_is_zone_mgmt(bio_op(bio)))
+ 		goto map_bio;
+@@ -490,10 +487,11 @@ static int flakey_map(struct dm_target *ti, struct bio *bio)
+ 	elapsed = (jiffies - fc->start_time) / HZ;
+ 	if (elapsed % (fc->up_interval + fc->down_interval) >= fc->up_interval) {
+ 		bool corrupt_fixed, corrupt_random;
+-		/*
+-		 * Flag this bio as submitted while down.
+-		 */
+-		pb->bio_submitted = true;
++
++		if (bio_has_data(bio)) {
++			pb->bio_can_corrupt = true;
++			pb->saved_iter = bio->bi_iter;
++		}
+ 
+ 		/*
+ 		 * Error reads if neither corrupt_bio_byte or drop_writes or error_writes are set.
+@@ -516,6 +514,8 @@ static int flakey_map(struct dm_target *ti, struct bio *bio)
+ 			return DM_MAPIO_SUBMITTED;
+ 		}
+ 
++		if (!pb->bio_can_corrupt)
++			goto map_bio;
+ 		/*
+ 		 * Corrupt matching writes.
+ 		 */
+@@ -535,9 +535,11 @@ static int flakey_map(struct dm_target *ti, struct bio *bio)
+ 			struct bio *clone = clone_bio(ti, fc, bio);
+ 			if (clone) {
+ 				if (corrupt_fixed)
+-					corrupt_bio_data(clone, fc);
++					corrupt_bio_data(clone, fc,
++							 clone->bi_iter);
+ 				if (corrupt_random)
+-					corrupt_bio_random(clone);
++					corrupt_bio_random(clone,
++							   clone->bi_iter);
+ 				submit_bio(clone);
+ 				return DM_MAPIO_SUBMITTED;
+ 			}
+@@ -559,21 +561,21 @@ static int flakey_end_io(struct dm_target *ti, struct bio *bio,
+ 	if (op_is_zone_mgmt(bio_op(bio)))
+ 		return DM_ENDIO_DONE;
+ 
+-	if (!*error && pb->bio_submitted && (bio_data_dir(bio) == READ)) {
++	if (!*error && pb->bio_can_corrupt && (bio_data_dir(bio) == READ)) {
+ 		if (fc->corrupt_bio_byte) {
+ 			if ((fc->corrupt_bio_rw == READ) &&
+ 			    all_corrupt_bio_flags_match(bio, fc)) {
+ 				/*
+ 				 * Corrupt successful matching READs while in down state.
+ 				 */
+-				corrupt_bio_data(bio, fc);
++				corrupt_bio_data(bio, fc, pb->saved_iter);
+ 			}
+ 		}
+ 		if (fc->random_read_corrupt) {
+ 			u64 rnd = get_random_u64();
+ 			u32 rem = do_div(rnd, PROBABILITY_BASE);
+ 			if (rem < fc->random_read_corrupt)
+-				corrupt_bio_random(bio);
++				corrupt_bio_random(bio, pb->saved_iter);
+ 		}
+ 		if (test_bit(ERROR_READS, &fc->flags)) {
+ 			/*
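
The dm-flakey hunks above thread a caller-supplied bvec_iter through corrupt_bio_common() because, by the time the read completion path runs, the bio's own bi_iter has already been advanced past the data; only a snapshot taken at submission still describes the payload. The walk itself is unchanged: skip whole segments until the Nth byte falls inside one, then overwrite it. A minimal standalone sketch of that walk, with byte arrays standing in for bio segments (sizes and values are invented for illustration):

    #include <stdio.h>

    /* One "segment" of payload, standing in for a bio_vec. */
    struct seg { unsigned char *data; unsigned int len; };

    /* Overwrite the Nth byte across a segment list, the way
     * corrupt_bio_common() walks bio segments: skip whole segments
     * until the offset lands inside one. */
    static void corrupt_nth_byte(struct seg *segs, int nsegs,
                                 unsigned int nth, unsigned char value)
    {
        for (int i = 0; i < nsegs; i++) {
            if (segs[i].len > nth) {
                segs[i].data[nth] = value;
                return;
            }
            nth -= segs[i].len;
        }
    }

    int main(void)
    {
        unsigned char a[] = { 'A', 'A', 'A', 'A' };
        unsigned char b[] = { 'B', 'B', 'B', 'B' };
        struct seg segs[] = { { a, 4 }, { b, 4 } };

        corrupt_nth_byte(segs, 2, 5, 'X');           /* byte 5 is b[1] */
        printf("%.4s %.4s\n", (char *)a, (char *)b); /* AAAA BXBB */
        return 0;
    }
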
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 6b23e777e10e7f..e009bba52d4c0c 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -1490,6 +1490,18 @@ bool dm_table_has_no_data_devices(struct dm_table *t)
+ 	return true;
+ }
+ 
++bool dm_table_is_wildcard(struct dm_table *t)
++{
++	for (unsigned int i = 0; i < t->num_targets; i++) {
++		struct dm_target *ti = dm_table_get_target(t, i);
++
++		if (!dm_target_is_wildcard(ti->type))
++			return false;
++	}
++
++	return true;
++}
++
+ static int device_not_zoned(struct dm_target *ti, struct dm_dev *dev,
+ 			    sector_t start, sector_t len, void *data)
+ {
+@@ -1830,10 +1842,24 @@ static bool dm_table_supports_atomic_writes(struct dm_table *t)
+ 	return true;
+ }
+ 
++bool dm_table_supports_size_change(struct dm_table *t, sector_t old_size,
++				   sector_t new_size)
++{
++	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) && dm_has_zone_plugs(t->md) &&
++	    old_size != new_size) {
++		DMWARN("%s: device has zone write plug resources. "
++		       "Cannot change size",
++		       dm_device_name(t->md));
++		return false;
++	}
++	return true;
++}
++
+ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+ 			      struct queue_limits *limits)
+ {
+ 	int r;
++	struct queue_limits old_limits;
+ 
+ 	if (!dm_table_supports_nowait(t))
+ 		limits->features &= ~BLK_FEAT_NOWAIT;
+@@ -1860,28 +1886,30 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+ 	if (dm_table_supports_flush(t))
+ 		limits->features |= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA;
+ 
+-	if (dm_table_supports_dax(t, device_not_dax_capable)) {
++	if (dm_table_supports_dax(t, device_not_dax_capable))
+ 		limits->features |= BLK_FEAT_DAX;
+-		if (dm_table_supports_dax(t, device_not_dax_synchronous_capable))
+-			set_dax_synchronous(t->md->dax_dev);
+-	} else
++	else
+ 		limits->features &= ~BLK_FEAT_DAX;
+ 
+-	if (dm_table_any_dev_attr(t, device_dax_write_cache_enabled, NULL))
+-		dax_write_cache(t->md->dax_dev, true);
+-
+ 	/* For a zoned table, setup the zone related queue attributes. */
+-	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
+-	    (limits->features & BLK_FEAT_ZONED)) {
+-		r = dm_set_zones_restrictions(t, q, limits);
+-		if (r)
+-			return r;
++	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED)) {
++		if (limits->features & BLK_FEAT_ZONED) {
++			r = dm_set_zones_restrictions(t, q, limits);
++			if (r)
++				return r;
++		} else if (dm_has_zone_plugs(t->md)) {
++			DMWARN("%s: device has zone write plug resources. "
++			       "Cannot switch to non-zoned table.",
++			       dm_device_name(t->md));
++			return -EINVAL;
++		}
+ 	}
+ 
+ 	if (dm_table_supports_atomic_writes(t))
+ 		limits->features |= BLK_FEAT_ATOMIC_WRITES;
+ 
+-	r = queue_limits_set(q, limits);
++	old_limits = queue_limits_start_update(q);
++	r = queue_limits_commit_update(q, limits);
+ 	if (r)
+ 		return r;
+ 
+@@ -1892,10 +1920,21 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+ 	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
+ 	    (limits->features & BLK_FEAT_ZONED)) {
+ 		r = dm_revalidate_zones(t, q);
+-		if (r)
++		if (r) {
++			queue_limits_set(q, &old_limits);
+ 			return r;
++		}
+ 	}
+ 
++	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED))
++		dm_finalize_zone_settings(t, limits);
++
++	if (dm_table_supports_dax(t, device_not_dax_synchronous_capable))
++		set_dax_synchronous(t->md->dax_dev);
++
++	if (dm_table_any_dev_attr(t, device_dax_write_cache_enabled, NULL))
++		dax_write_cache(t->md->dax_dev, true);
++
+ 	dm_update_crypto_profile(q, t);
+ 	return 0;
+ }
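
dm_table_set_restrictions() now snapshots the old queue limits with queue_limits_start_update() before committing the new ones, so a later dm_revalidate_zones() failure can put the previous limits back with queue_limits_set(). A hedged sketch of that snapshot/commit/rollback idiom, using only the block-layer helpers the hunk itself calls; validation_step() is a placeholder for dm_revalidate_zones():

    /* Sketch only: error handling around surrounding DM state omitted. */
    static int apply_limits_with_rollback(struct request_queue *q,
                                          struct queue_limits *lim)
    {
        struct queue_limits old_limits;
        int r;

        old_limits = queue_limits_start_update(q);  /* lock + snapshot */
        r = queue_limits_commit_update(q, lim);     /* apply + unlock */
        if (r)
            return r;

        r = validation_step(q);                     /* placeholder */
        if (r)
            queue_limits_set(q, &old_limits);       /* restore snapshot */
        return r;
    }
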
+diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c
+index 20edd3fabbabfe..4af78111d0b4dd 100644
+--- a/drivers/md/dm-zone.c
++++ b/drivers/md/dm-zone.c
+@@ -56,24 +56,31 @@ int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
+ {
+ 	struct mapped_device *md = disk->private_data;
+ 	struct dm_table *map;
+-	int srcu_idx, ret;
++	struct dm_table *zone_revalidate_map = md->zone_revalidate_map;
++	int srcu_idx, ret = -EIO;
++	bool put_table = false;
+ 
+-	if (!md->zone_revalidate_map) {
+-		/* Regular user context */
++	if (!zone_revalidate_map || md->revalidate_map_task != current) {
++		/*
++		 * Regular user context, or zone revalidation during
++		 * __bind() is in progress but this call is from a
++		 * different process.
++		 */
+ 		if (dm_suspended_md(md))
+ 			return -EAGAIN;
+ 
+ 		map = dm_get_live_table(md, &srcu_idx);
+-		if (!map)
+-			return -EIO;
++		put_table = true;
+ 	} else {
+ 		/* Zone revalidation during __bind() */
+-		map = md->zone_revalidate_map;
++		map = zone_revalidate_map;
+ 	}
+ 
+-	ret = dm_blk_do_report_zones(md, map, sector, nr_zones, cb, data);
++	if (map)
++		ret = dm_blk_do_report_zones(md, map, sector, nr_zones, cb,
++					     data);
+ 
+-	if (!md->zone_revalidate_map)
++	if (put_table)
+ 		dm_put_live_table(md, srcu_idx);
+ 
+ 	return ret;
+@@ -153,33 +160,36 @@ int dm_revalidate_zones(struct dm_table *t, struct request_queue *q)
+ {
+ 	struct mapped_device *md = t->md;
+ 	struct gendisk *disk = md->disk;
++	unsigned int nr_zones = disk->nr_zones;
+ 	int ret;
+ 
+ 	if (!get_capacity(disk))
+ 		return 0;
+ 
+-	/* Revalidate only if something changed. */
+-	if (!disk->nr_zones || disk->nr_zones != md->nr_zones) {
+-		DMINFO("%s using %s zone append",
+-		       disk->disk_name,
+-		       queue_emulates_zone_append(q) ? "emulated" : "native");
+-		md->nr_zones = 0;
+-	}
+-
+-	if (md->nr_zones)
++	/*
++	 * Do not revalidate if zone write plug resources have already
++	 * been allocated.
++	 */
++	if (dm_has_zone_plugs(md))
+ 		return 0;
+ 
++	DMINFO("%s using %s zone append", disk->disk_name,
++	       queue_emulates_zone_append(q) ? "emulated" : "native");
++
+ 	/*
+ 	 * Our table is not live yet. So the call to dm_get_live_table()
+ 	 * in dm_blk_report_zones() will fail. Set a temporary pointer to
+ 	 * our table for dm_blk_report_zones() to use directly.
+ 	 */
+ 	md->zone_revalidate_map = t;
++	md->revalidate_map_task = current;
+ 	ret = blk_revalidate_disk_zones(disk);
++	md->revalidate_map_task = NULL;
+ 	md->zone_revalidate_map = NULL;
+ 
+ 	if (ret) {
+ 		DMERR("Revalidate zones failed %d", ret);
++		disk->nr_zones = nr_zones;
+ 		return ret;
+ 	}
+ 
+@@ -340,12 +350,8 @@ int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q,
+ 	 * mapped device queue as needing zone append emulation.
+ 	 */
+ 	WARN_ON_ONCE(queue_is_mq(q));
+-	if (dm_table_supports_zone_append(t)) {
+-		clear_bit(DMF_EMULATE_ZONE_APPEND, &md->flags);
+-	} else {
+-		set_bit(DMF_EMULATE_ZONE_APPEND, &md->flags);
++	if (!dm_table_supports_zone_append(t))
+ 		lim->max_hw_zone_append_sectors = 0;
+-	}
+ 
+ 	/*
+ 	 * Determine the max open and max active zone limits for the mapped
+@@ -380,15 +386,28 @@ int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q,
+ 		lim->max_open_zones = 0;
+ 		lim->max_active_zones = 0;
+ 		lim->max_hw_zone_append_sectors = 0;
++		lim->max_zone_append_sectors = 0;
+ 		lim->zone_write_granularity = 0;
+ 		lim->chunk_sectors = 0;
+ 		lim->features &= ~BLK_FEAT_ZONED;
+-		clear_bit(DMF_EMULATE_ZONE_APPEND, &md->flags);
+-		md->nr_zones = 0;
+-		disk->nr_zones = 0;
+ 		return 0;
+ 	}
+ 
++	if (get_capacity(disk) && dm_has_zone_plugs(t->md)) {
++		if (q->limits.chunk_sectors != lim->chunk_sectors) {
++			DMWARN("%s: device has zone write plug resources. "
++			       "Cannot change zone size",
++			       disk->disk_name);
++			return -EINVAL;
++		}
++		if (lim->max_hw_zone_append_sectors != 0 &&
++		    !dm_table_is_wildcard(t)) {
++			DMWARN("%s: device has zone write plug resources. "
++			       "New table must emulate zone append",
++			       disk->disk_name);
++			return -EINVAL;
++		}
++	}
+ 	/*
+ 	 * Warn once (when the capacity is not yet set) if the mapped device is
+ 	 * partially using zone resources of the target devices as that leads to
+@@ -408,6 +427,23 @@ int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q,
+ 	return 0;
+ }
+ 
++void dm_finalize_zone_settings(struct dm_table *t, struct queue_limits *lim)
++{
++	struct mapped_device *md = t->md;
++
++	if (lim->features & BLK_FEAT_ZONED) {
++		if (dm_table_supports_zone_append(t))
++			clear_bit(DMF_EMULATE_ZONE_APPEND, &md->flags);
++		else
++			set_bit(DMF_EMULATE_ZONE_APPEND, &md->flags);
++	} else {
++		clear_bit(DMF_EMULATE_ZONE_APPEND, &md->flags);
++		md->nr_zones = 0;
++		md->disk->nr_zones = 0;
++	}
++}
++
+ /*
+  * IO completion callback called from clone_endio().
+  */
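
dm_blk_report_zones() now also compares md->revalidate_map_task against current, so only the task that installed the temporary table (the one running blk_revalidate_disk_zones() inside __bind()) bypasses the live-table lookup; any other process racing in falls back to dm_get_live_table(). A small userspace analogue of that "is this my own reentrant call?" test, using a thread identity where the kernel uses current (all names invented):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_t revalidate_task;   /* like md->revalidate_map_task */
    static int revalidating;            /* like md->zone_revalidate_map */

    /* Pick a table the way dm_blk_report_zones() does: the temporary
     * one only when we are the task that set it up. */
    static const char *pick_table(void)
    {
        if (revalidating && pthread_equal(pthread_self(), revalidate_task))
            return "revalidate map";
        return "live map";
    }

    static void *other_thread(void *arg)
    {
        printf("other thread uses: %s\n", pick_table());
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        revalidate_task = pthread_self();
        revalidating = 1;
        printf("revalidating thread uses: %s\n", pick_table());
        pthread_create(&t, NULL, other_thread, NULL);
        pthread_join(t, NULL);
        return 0;
    }
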
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 5ab7574c0c76ab..240f6dab8ddafb 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -2421,21 +2421,35 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
+ 			       struct queue_limits *limits)
+ {
+ 	struct dm_table *old_map;
+-	sector_t size;
++	sector_t size, old_size;
+ 	int ret;
+ 
+ 	lockdep_assert_held(&md->suspend_lock);
+ 
+ 	size = dm_table_get_size(t);
+ 
++	old_size = dm_get_size(md);
++
++	if (!dm_table_supports_size_change(t, old_size, size)) {
++		old_map = ERR_PTR(-EINVAL);
++		goto out;
++	}
++
++	set_capacity(md->disk, size);
++
++	ret = dm_table_set_restrictions(t, md->queue, limits);
++	if (ret) {
++		set_capacity(md->disk, old_size);
++		old_map = ERR_PTR(ret);
++		goto out;
++	}
++
+ 	/*
+ 	 * Wipe any geometry if the size of the table changed.
+ 	 */
+-	if (size != dm_get_size(md))
++	if (size != old_size)
+ 		memset(&md->geometry, 0, sizeof(md->geometry));
+ 
+-	set_capacity(md->disk, size);
+-
+ 	dm_table_event_callback(t, event_callback, md);
+ 
+ 	if (dm_table_request_based(t)) {
+@@ -2453,10 +2467,10 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
+ 		 * requests in the queue may refer to bio from the old bioset,
+ 		 * so you must walk through the queue to unprep.
+ 		 */
+-		if (!md->mempools) {
++		if (!md->mempools)
+ 			md->mempools = t->mempools;
+-			t->mempools = NULL;
+-		}
++		else
++			dm_free_md_mempools(t->mempools);
+ 	} else {
+ 		/*
+ 		 * The md may already have mempools that need changing.
+@@ -2465,14 +2479,8 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
+ 		 */
+ 		dm_free_md_mempools(md->mempools);
+ 		md->mempools = t->mempools;
+-		t->mempools = NULL;
+-	}
+-
+-	ret = dm_table_set_restrictions(t, md->queue, limits);
+-	if (ret) {
+-		old_map = ERR_PTR(ret);
+-		goto out;
+ 	}
++	t->mempools = NULL;
+ 
+ 	old_map = rcu_dereference_protected(md->map, lockdep_is_held(&md->suspend_lock));
+ 	rcu_assign_pointer(md->map, (void *)t);
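
The __bind() reordering above applies the size check, the resize, and the queue restrictions before any bookkeeping that is hard to undo, and rolls the capacity back if the restrictions are rejected. A condensed, non-compilable sketch of that error path (helpers as in the hunk, locking and the map swap elided):

    sector_t size = dm_table_get_size(t);
    sector_t old_size = dm_get_size(md);

    if (!dm_table_supports_size_change(t, old_size, size))
        return ERR_PTR(-EINVAL);

    set_capacity(md->disk, size);

    ret = dm_table_set_restrictions(t, md->queue, limits);
    if (ret) {
        set_capacity(md->disk, old_size);   /* undo the resize */
        return ERR_PTR(ret);
    }

    /* only now: wipe geometry, swap in the map, settle mempools */
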
+diff --git a/drivers/md/dm.h b/drivers/md/dm.h
+index a0a8ff11981580..245f52b592154d 100644
+--- a/drivers/md/dm.h
++++ b/drivers/md/dm.h
+@@ -58,6 +58,7 @@ void dm_table_event_callback(struct dm_table *t,
+ 			     void (*fn)(void *), void *context);
+ struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector);
+ bool dm_table_has_no_data_devices(struct dm_table *table);
++bool dm_table_is_wildcard(struct dm_table *t);
+ int dm_calculate_queue_limits(struct dm_table *table,
+ 			      struct queue_limits *limits);
+ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+@@ -72,6 +73,8 @@ struct target_type *dm_table_get_immutable_target_type(struct dm_table *t);
+ struct dm_target *dm_table_get_immutable_target(struct dm_table *t);
+ struct dm_target *dm_table_get_wildcard_target(struct dm_table *t);
+ bool dm_table_request_based(struct dm_table *t);
++bool dm_table_supports_size_change(struct dm_table *t, sector_t old_size,
++				   sector_t new_size);
+ 
+ void dm_lock_md_type(struct mapped_device *md);
+ void dm_unlock_md_type(struct mapped_device *md);
+@@ -102,6 +105,7 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t);
+ int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q,
+ 		struct queue_limits *lim);
+ int dm_revalidate_zones(struct dm_table *t, struct request_queue *q);
++void dm_finalize_zone_settings(struct dm_table *t, struct queue_limits *lim);
+ void dm_zone_endio(struct dm_io *io, struct bio *clone);
+ #ifdef CONFIG_BLK_DEV_ZONED
+ int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
+@@ -110,12 +114,14 @@ bool dm_is_zone_write(struct mapped_device *md, struct bio *bio);
+ int dm_zone_get_reset_bitmap(struct mapped_device *md, struct dm_table *t,
+ 			     sector_t sector, unsigned int nr_zones,
+ 			     unsigned long *need_reset);
++#define dm_has_zone_plugs(md) ((md)->disk->zone_wplugs_hash != NULL)
+ #else
+ #define dm_blk_report_zones	NULL
+ static inline bool dm_is_zone_write(struct mapped_device *md, struct bio *bio)
+ {
+ 	return false;
+ }
++#define dm_has_zone_plugs(md) false
+ #endif
+ 
+ /*
+diff --git a/drivers/md/raid1-10.c b/drivers/md/raid1-10.c
+index c7efd8aab675cc..b8b3a90697012c 100644
+--- a/drivers/md/raid1-10.c
++++ b/drivers/md/raid1-10.c
+@@ -293,3 +293,13 @@ static inline bool raid1_should_read_first(struct mddev *mddev,
+ 
+ 	return false;
+ }
++
++/*
++ * bio with REQ_RAHEAD or REQ_NOWAIT can fail at any time, before such IO is
++ * submitted to the underlying disks, hence don't record badblocks or retry
++ * in this case.
++ */
++static inline bool raid1_should_handle_error(struct bio *bio)
++{
++	return !(bio->bi_opf & (REQ_RAHEAD | REQ_NOWAIT));
++}
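
raid1_should_handle_error() centralizes one policy for both raid1 and raid10: REQ_RAHEAD and REQ_NOWAIT bios may fail before they ever reach a member disk, so their failures must not trigger badblocks records or retries. A standalone illustration of the predicate, with the flag bits mocked up locally (the real values live in include/linux/blk_types.h and differ):

    #include <stdbool.h>
    #include <stdio.h>

    /* Mock flag bits for illustration only. */
    #define REQ_RAHEAD  (1u << 0)
    #define REQ_NOWAIT  (1u << 1)

    struct bio { unsigned int bi_opf; };

    /* Mirror of the new helper: opportunistic bios that can fail before
     * reaching the disks get no badblocks/retry handling. */
    static bool raid1_should_handle_error(struct bio *bio)
    {
        return !(bio->bi_opf & (REQ_RAHEAD | REQ_NOWAIT));
    }

    int main(void)
    {
        struct bio ra = { REQ_RAHEAD }, normal = { 0 };

        printf("readahead: %s\n",
               raid1_should_handle_error(&ra) ? "handle" : "ignore");
        printf("normal:    %s\n",
               raid1_should_handle_error(&normal) ? "handle" : "ignore");
        return 0;
    }
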
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index de9bccbe7337b5..1fe645e6300121 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -373,14 +373,16 @@ static void raid1_end_read_request(struct bio *bio)
+ 	 */
+ 	update_head_pos(r1_bio->read_disk, r1_bio);
+ 
+-	if (uptodate)
++	if (uptodate) {
+ 		set_bit(R1BIO_Uptodate, &r1_bio->state);
+-	else if (test_bit(FailFast, &rdev->flags) &&
+-		 test_bit(R1BIO_FailFast, &r1_bio->state))
++	} else if (test_bit(FailFast, &rdev->flags) &&
++		 test_bit(R1BIO_FailFast, &r1_bio->state)) {
+ 		/* This was a fail-fast read so we definitely
+ 		 * want to retry */
+ 		;
+-	else {
++	} else if (!raid1_should_handle_error(bio)) {
++		uptodate = 1;
++	} else {
+ 		/* If all other devices have failed, we want to return
+ 		 * the error upwards rather than fail the last device.
+ 		 * Here we redefine "uptodate" to mean "Don't want to retry"
+@@ -451,16 +453,15 @@ static void raid1_end_write_request(struct bio *bio)
+ 	struct bio *to_put = NULL;
+ 	int mirror = find_bio_disk(r1_bio, bio);
+ 	struct md_rdev *rdev = conf->mirrors[mirror].rdev;
+-	bool discard_error;
+ 	sector_t lo = r1_bio->sector;
+ 	sector_t hi = r1_bio->sector + r1_bio->sectors;
+-
+-	discard_error = bio->bi_status && bio_op(bio) == REQ_OP_DISCARD;
++	bool ignore_error = !raid1_should_handle_error(bio) ||
++		(bio->bi_status && bio_op(bio) == REQ_OP_DISCARD);
+ 
+ 	/*
+ 	 * 'one mirror IO has finished' event handler:
+ 	 */
+-	if (bio->bi_status && !discard_error) {
++	if (bio->bi_status && !ignore_error) {
+ 		set_bit(WriteErrorSeen,	&rdev->flags);
+ 		if (!test_and_set_bit(WantReplacement, &rdev->flags))
+ 			set_bit(MD_RECOVERY_NEEDED, &
+@@ -511,7 +512,7 @@ static void raid1_end_write_request(struct bio *bio)
+ 
+ 		/* Maybe we can clear some bad blocks. */
+ 		if (rdev_has_badblock(rdev, r1_bio->sector, r1_bio->sectors) &&
+-		    !discard_error) {
++		    !ignore_error) {
+ 			r1_bio->bios[mirror] = IO_MADE_GOOD;
+ 			set_bit(R1BIO_MadeGood, &r1_bio->state);
+ 		}
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index ba32bac975b8d6..54320a887ecc50 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -399,6 +399,8 @@ static void raid10_end_read_request(struct bio *bio)
+ 		 * wait for the 'master' bio.
+ 		 */
+ 		set_bit(R10BIO_Uptodate, &r10_bio->state);
++	} else if (!raid1_should_handle_error(bio)) {
++		uptodate = 1;
+ 	} else {
+ 		/* If all other devices that store this block have
+ 		 * failed, we want to return the error upwards rather
+@@ -456,9 +458,8 @@ static void raid10_end_write_request(struct bio *bio)
+ 	int slot, repl;
+ 	struct md_rdev *rdev = NULL;
+ 	struct bio *to_put = NULL;
+-	bool discard_error;
+-
+-	discard_error = bio->bi_status && bio_op(bio) == REQ_OP_DISCARD;
++	bool ignore_error = !raid1_should_handle_error(bio) ||
++		(bio->bi_status && bio_op(bio) == REQ_OP_DISCARD);
+ 
+ 	dev = find_bio_disk(conf, r10_bio, bio, &slot, &repl);
+ 
+@@ -472,7 +473,7 @@ static void raid10_end_write_request(struct bio *bio)
+ 	/*
+ 	 * this branch is our 'one mirror IO has finished' event handler:
+ 	 */
+-	if (bio->bi_status && !discard_error) {
++	if (bio->bi_status && !ignore_error) {
+ 		if (repl)
+ 			/* Never record new bad blocks to replacement,
+ 			 * just fail it.
+@@ -527,7 +528,7 @@ static void raid10_end_write_request(struct bio *bio)
+ 		/* Maybe we can clear some bad blocks. */
+ 		if (rdev_has_badblock(rdev, r10_bio->devs[slot].addr,
+ 				      r10_bio->sectors) &&
+-		    !discard_error) {
++		    !ignore_error) {
+ 			bio_put(bio);
+ 			if (repl)
+ 				r10_bio->devs[slot].repl_bio = IO_MADE_GOOD;
+diff --git a/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c b/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c
+index 3d2913de9a86c6..7af6765532e332 100644
+--- a/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c
++++ b/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c
+@@ -114,7 +114,7 @@ struct hdmirx_stream {
+ 	spinlock_t vbq_lock; /* to lock video buffer queue */
+ 	bool stopping;
+ 	wait_queue_head_t wq_stopped;
+-	u32 frame_idx;
++	u32 sequence;
+ 	u32 line_flag_int_cnt;
+ 	u32 irq_stat;
+ };
+@@ -1540,7 +1540,7 @@ static int hdmirx_start_streaming(struct vb2_queue *queue, unsigned int count)
+ 	int line_flag;
+ 
+ 	mutex_lock(&hdmirx_dev->stream_lock);
+-	stream->frame_idx = 0;
++	stream->sequence = 0;
+ 	stream->line_flag_int_cnt = 0;
+ 	stream->curr_buf = NULL;
+ 	stream->next_buf = NULL;
+@@ -1948,7 +1948,7 @@ static void dma_idle_int_handler(struct snps_hdmirx_dev *hdmirx_dev,
+ 
+ 			if (vb_done) {
+ 				vb_done->vb2_buf.timestamp = ktime_get_ns();
+-				vb_done->sequence = stream->frame_idx;
++				vb_done->sequence = stream->sequence;
+ 
+ 				if (bt->interlaced)
+ 					vb_done->field = V4L2_FIELD_INTERLACED_TB;
+@@ -1956,10 +1956,6 @@ static void dma_idle_int_handler(struct snps_hdmirx_dev *hdmirx_dev,
+ 					vb_done->field = V4L2_FIELD_NONE;
+ 
+ 				hdmirx_vb_done(stream, vb_done);
+-				stream->frame_idx++;
+-				if (stream->frame_idx == 30)
+-					v4l2_dbg(1, debug, v4l2_dev,
+-						 "rcv frames\n");
+ 			}
+ 
+ 			stream->curr_buf = NULL;
+@@ -1971,6 +1967,10 @@ static void dma_idle_int_handler(struct snps_hdmirx_dev *hdmirx_dev,
+ 			v4l2_dbg(3, debug, v4l2_dev,
+ 				 "%s: next_buf NULL, skip vb_done\n", __func__);
+ 		}
++
++		stream->sequence++;
++		if (stream->sequence == 30)
++			v4l2_dbg(1, debug, v4l2_dev, "rcv frames\n");
+ 	}
+ 
+ DMA_IDLE_OUT:
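
The hdmirx change renames frame_idx to sequence and increments it once per DMA-idle period instead of once per delivered buffer, so v4l2_buffer.sequence now skips a number whenever a frame is dropped and userspace can detect the loss. A toy model of the difference (the drop pattern is invented):

    #include <stdio.h>

    int main(void)
    {
        /* 1 = a buffer completed this frame period, 0 = frame dropped */
        int have_buf[6] = { 1, 1, 0, 1, 0, 1 };
        unsigned int sequence = 0;

        for (int i = 0; i < 6; i++) {
            if (have_buf[i])
                /* vb_done->sequence = stream->sequence */
                printf("deliver buffer, sequence=%u\n", sequence);
            else
                printf("frame dropped\n");
            sequence++; /* count every frame period, delivered or not */
        }
        return 0;   /* delivered sequences: 0 1 3 5, gaps mark drops */
    }
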
+diff --git a/drivers/media/platform/verisilicon/hantro_postproc.c b/drivers/media/platform/verisilicon/hantro_postproc.c
+index c435a393e0cb70..9f559a13d409bb 100644
+--- a/drivers/media/platform/verisilicon/hantro_postproc.c
++++ b/drivers/media/platform/verisilicon/hantro_postproc.c
+@@ -250,8 +250,10 @@ int hantro_postproc_init(struct hantro_ctx *ctx)
+ 
+ 	for (i = 0; i < num_buffers; i++) {
+ 		ret = hantro_postproc_alloc(ctx, i);
+-		if (ret)
++		if (ret) {
++			hantro_postproc_free(ctx);
+ 			return ret;
++		}
+ 	}
+ 
+ 	return 0;
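
hantro_postproc_init() now releases the buffers it already allocated when a later iteration fails, instead of returning with a half-initialized set for the caller to leak. The shape of that all-or-nothing loop as a runnable sketch, with malloc()/free() standing in for hantro_postproc_alloc()/hantro_postproc_free():

    #include <stdlib.h>

    #define NBUF 4

    static void *bufs[NBUF];

    static void free_all(void)
    {
        for (int i = 0; i < NBUF; i++) {
            free(bufs[i]);
            bufs[i] = NULL;
        }
    }

    /* Allocate all buffers or none: on a mid-loop failure, release
     * everything already allocated before reporting the error. */
    static int alloc_all(void)
    {
        for (int i = 0; i < NBUF; i++) {
            bufs[i] = malloc(4096);
            if (!bufs[i]) {
                free_all();
                return -1;
            }
        }
        return 0;
    }

    int main(void)
    {
        int ret = alloc_all();

        if (!ret)
            free_all();
        return ret ? 1 : 0;
    }
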
+diff --git a/drivers/mfd/exynos-lpass.c b/drivers/mfd/exynos-lpass.c
+index 6a585173230b13..44797001a4322b 100644
+--- a/drivers/mfd/exynos-lpass.c
++++ b/drivers/mfd/exynos-lpass.c
+@@ -104,11 +104,22 @@ static const struct regmap_config exynos_lpass_reg_conf = {
+ 	.fast_io	= true,
+ };
+ 
++static void exynos_lpass_disable_lpass(void *data)
++{
++	struct platform_device *pdev = data;
++	struct exynos_lpass *lpass = platform_get_drvdata(pdev);
++
++	pm_runtime_disable(&pdev->dev);
++	if (!pm_runtime_status_suspended(&pdev->dev))
++		exynos_lpass_disable(lpass);
++}
++
+ static int exynos_lpass_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct exynos_lpass *lpass;
+ 	void __iomem *base_top;
++	int ret;
+ 
+ 	lpass = devm_kzalloc(dev, sizeof(*lpass), GFP_KERNEL);
+ 	if (!lpass)
+@@ -122,8 +133,8 @@ static int exynos_lpass_probe(struct platform_device *pdev)
+ 	if (IS_ERR(lpass->sfr0_clk))
+ 		return PTR_ERR(lpass->sfr0_clk);
+ 
+-	lpass->top = regmap_init_mmio(dev, base_top,
+-					&exynos_lpass_reg_conf);
++	lpass->top = devm_regmap_init_mmio(dev, base_top,
++					   &exynos_lpass_reg_conf);
+ 	if (IS_ERR(lpass->top)) {
+ 		dev_err(dev, "LPASS top regmap initialization failed\n");
+ 		return PTR_ERR(lpass->top);
+@@ -134,18 +145,11 @@ static int exynos_lpass_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(dev);
+ 	exynos_lpass_enable(lpass);
+ 
+-	return devm_of_platform_populate(dev);
+-}
+-
+-static void exynos_lpass_remove(struct platform_device *pdev)
+-{
+-	struct exynos_lpass *lpass = platform_get_drvdata(pdev);
++	ret = devm_add_action_or_reset(dev, exynos_lpass_disable_lpass, pdev);
++	if (ret)
++		return ret;
+ 
+-	exynos_lpass_disable(lpass);
+-	pm_runtime_disable(&pdev->dev);
+-	if (!pm_runtime_status_suspended(&pdev->dev))
+-		exynos_lpass_disable(lpass);
+-	regmap_exit(lpass->top);
++	return devm_of_platform_populate(dev);
+ }
+ 
+ static int __maybe_unused exynos_lpass_suspend(struct device *dev)
+@@ -185,7 +189,6 @@ static struct platform_driver exynos_lpass_driver = {
+ 		.of_match_table	= exynos_lpass_of_match,
+ 	},
+ 	.probe	= exynos_lpass_probe,
+-	.remove	= exynos_lpass_remove,
+ };
+ module_platform_driver(exynos_lpass_driver);
+ 
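
Dropping exynos_lpass_remove() works because a devm action registered at the end of probe runs on unbind before the devm-managed regmap and the devm_of_platform_populate() children are torn down, preserving the old teardown order. A hedged sketch of the pattern for a generic driver (the my_* names are hypothetical):

    /* Sketch: replace a .remove callback with a devm action. */
    static void my_disable(void *data)
    {
        struct my_ctx *ctx = data;          /* hypothetical context */

        pm_runtime_disable(ctx->dev);
        if (!pm_runtime_status_suspended(ctx->dev))
            my_hw_disable(ctx);             /* hypothetical */
    }

    static int my_probe(struct platform_device *pdev)
    {
        struct my_ctx *ctx;

        ctx = devm_kzalloc(&pdev->dev, sizeof(*ctx), GFP_KERNEL);
        if (!ctx)
            return -ENOMEM;
        ctx->dev = &pdev->dev;

        pm_runtime_enable(&pdev->dev);
        my_hw_enable(ctx);                  /* hypothetical */

        /* Runs on unbind, or immediately if registration fails. */
        return devm_add_action_or_reset(&pdev->dev, my_disable, ctx);
    }
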
+diff --git a/drivers/mfd/stmpe-spi.c b/drivers/mfd/stmpe-spi.c
+index 792236f56399af..b9cc85ea2c4019 100644
+--- a/drivers/mfd/stmpe-spi.c
++++ b/drivers/mfd/stmpe-spi.c
+@@ -129,7 +129,7 @@ static const struct spi_device_id stmpe_spi_id[] = {
+ 	{ "stmpe2403", STMPE2403 },
+ 	{ }
+ };
+-MODULE_DEVICE_TABLE(spi, stmpe_id);
++MODULE_DEVICE_TABLE(spi, stmpe_spi_id);
+ 
+ static struct spi_driver stmpe_spi_driver = {
+ 	.driver = {
+diff --git a/drivers/misc/lis3lv02d/Kconfig b/drivers/misc/lis3lv02d/Kconfig
+index bb2fec4b5880bf..56005243a230d5 100644
+--- a/drivers/misc/lis3lv02d/Kconfig
++++ b/drivers/misc/lis3lv02d/Kconfig
+@@ -10,7 +10,7 @@ config SENSORS_LIS3_SPI
+ 	help
+ 	  This driver provides support for the LIS3LV02Dx accelerometer connected
+ 	  via SPI. The accelerometer data is readable via
+-	  /sys/devices/platform/lis3lv02d.
++	  /sys/devices/faux/lis3lv02d.
+ 
+ 	  This driver also provides an absolute input class device, allowing
+ 	  the laptop to act as a pinball machine-esque joystick.
+@@ -26,7 +26,7 @@ config SENSORS_LIS3_I2C
+ 	help
+ 	  This driver provides support for the LIS3LV02Dx accelerometer connected
+ 	  via I2C. The accelerometer data is readable via
+-	  /sys/devices/platform/lis3lv02d.
++	  /sys/devices/faux/lis3lv02d.
+ 
+ 	  This driver also provides an absolute input class device, allowing
+ 	  the device to act as a pinball machine-esque joystick.
+diff --git a/drivers/misc/mei/vsc-tp.c b/drivers/misc/mei/vsc-tp.c
+index da26a080916c54..267d0de5fade83 100644
+--- a/drivers/misc/mei/vsc-tp.c
++++ b/drivers/misc/mei/vsc-tp.c
+@@ -324,7 +324,7 @@ int vsc_tp_rom_xfer(struct vsc_tp *tp, const void *obuf, void *ibuf, size_t len)
+ 	guard(mutex)(&tp->mutex);
+ 
+ 	/* rom xfer is big endian */
+-	cpu_to_be32_array((u32 *)tp->tx_buf, obuf, words);
++	cpu_to_be32_array((__be32 *)tp->tx_buf, obuf, words);
+ 
+ 	ret = read_poll_timeout(gpiod_get_value_cansleep, ret,
+ 				!ret, VSC_TP_ROM_XFER_POLL_DELAY_US,
+@@ -340,7 +340,7 @@ int vsc_tp_rom_xfer(struct vsc_tp *tp, const void *obuf, void *ibuf, size_t len)
+ 		return ret;
+ 
+ 	if (ibuf)
+-		be32_to_cpu_array(ibuf, (u32 *)tp->rx_buf, words);
++		be32_to_cpu_array(ibuf, (__be32 *)tp->rx_buf, words);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/misc/vmw_vmci/vmci_host.c b/drivers/misc/vmw_vmci/vmci_host.c
+index abe79f6fd2a79b..b64944367ac533 100644
+--- a/drivers/misc/vmw_vmci/vmci_host.c
++++ b/drivers/misc/vmw_vmci/vmci_host.c
+@@ -227,6 +227,7 @@ static int drv_cp_harray_to_user(void __user *user_buf_uva,
+ static int vmci_host_setup_notify(struct vmci_ctx *context,
+ 				  unsigned long uva)
+ {
++	struct page *page;
+ 	int retval;
+ 
+ 	if (context->notify_page) {
+@@ -243,13 +244,11 @@ static int vmci_host_setup_notify(struct vmci_ctx *context,
+ 	/*
+ 	 * Lock physical page backing a given user VA.
+ 	 */
+-	retval = get_user_pages_fast(uva, 1, FOLL_WRITE, &context->notify_page);
+-	if (retval != 1) {
+-		context->notify_page = NULL;
++	retval = get_user_pages_fast(uva, 1, FOLL_WRITE, &page);
++	if (retval != 1)
+ 		return VMCI_ERROR_GENERIC;
+-	}
+-	if (context->notify_page == NULL)
+-		return VMCI_ERROR_UNAVAILABLE;
++
++	context->notify_page = page;
+ 
+ 	/*
+ 	 * Map the locked page and set up notify pointer.
+diff --git a/drivers/mtd/nand/ecc-mxic.c b/drivers/mtd/nand/ecc-mxic.c
+index 56b56f726b9983..1bf9a5a64b87a4 100644
+--- a/drivers/mtd/nand/ecc-mxic.c
++++ b/drivers/mtd/nand/ecc-mxic.c
+@@ -614,7 +614,7 @@ static int mxic_ecc_finish_io_req_external(struct nand_device *nand,
+ {
+ 	struct mxic_ecc_engine *mxic = nand_to_mxic(nand);
+ 	struct mxic_ecc_ctx *ctx = nand_to_ecc_ctx(nand);
+-	int nents, step, ret;
++	int nents, step, ret = 0;
+ 
+ 	if (req->mode == MTD_OPS_RAW)
+ 		return 0;
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 8ea183da8d5398..17ae4b819a5977 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -453,13 +453,14 @@ static struct net_device *bond_ipsec_dev(struct xfrm_state *xs)
+ 
+ /**
+  * bond_ipsec_add_sa - program device with a security association
++ * @bond_dev: pointer to the bond net device
+  * @xs: pointer to transformer state struct
+  * @extack: extack point to fill failure reason
+  **/
+-static int bond_ipsec_add_sa(struct xfrm_state *xs,
++static int bond_ipsec_add_sa(struct net_device *bond_dev,
++			     struct xfrm_state *xs,
+ 			     struct netlink_ext_ack *extack)
+ {
+-	struct net_device *bond_dev = xs->xso.dev;
+ 	struct net_device *real_dev;
+ 	netdevice_tracker tracker;
+ 	struct bond_ipsec *ipsec;
+@@ -495,9 +496,9 @@ static int bond_ipsec_add_sa(struct xfrm_state *xs,
+ 		goto out;
+ 	}
+ 
+-	xs->xso.real_dev = real_dev;
+-	err = real_dev->xfrmdev_ops->xdo_dev_state_add(xs, extack);
++	err = real_dev->xfrmdev_ops->xdo_dev_state_add(real_dev, xs, extack);
+ 	if (!err) {
++		xs->xso.real_dev = real_dev;
+ 		ipsec->xs = xs;
+ 		INIT_LIST_HEAD(&ipsec->list);
+ 		mutex_lock(&bond->ipsec_lock);
+@@ -539,11 +540,25 @@ static void bond_ipsec_add_sa_all(struct bonding *bond)
+ 		if (ipsec->xs->xso.real_dev == real_dev)
+ 			continue;
+ 
+-		ipsec->xs->xso.real_dev = real_dev;
+-		if (real_dev->xfrmdev_ops->xdo_dev_state_add(ipsec->xs, NULL)) {
++		if (real_dev->xfrmdev_ops->xdo_dev_state_add(real_dev,
++							     ipsec->xs, NULL)) {
+ 			slave_warn(bond_dev, real_dev, "%s: failed to add SA\n", __func__);
+-			ipsec->xs->xso.real_dev = NULL;
++			continue;
+ 		}
++
++		spin_lock_bh(&ipsec->xs->lock);
++		/* xs might have been killed by the user during the migration
++		 * to the new dev, but bond_ipsec_del_sa() should have done
++		 * nothing, as xso.real_dev is NULL.
++		 * Delete it from the device we just added it to. The pending
++		 * bond_ipsec_free_sa() call will do the rest of the cleanup.
++		 */
++		if (ipsec->xs->km.state == XFRM_STATE_DEAD &&
++		    real_dev->xfrmdev_ops->xdo_dev_state_delete)
++			real_dev->xfrmdev_ops->xdo_dev_state_delete(real_dev,
++								    ipsec->xs);
++		ipsec->xs->xso.real_dev = real_dev;
++		spin_unlock_bh(&ipsec->xs->lock);
+ 	}
+ out:
+ 	mutex_unlock(&bond->ipsec_lock);
+@@ -551,54 +566,27 @@ static void bond_ipsec_add_sa_all(struct bonding *bond)
+ 
+ /**
+  * bond_ipsec_del_sa - clear out this specific SA
++ * @bond_dev: pointer to the bond net device
+  * @xs: pointer to transformer state struct
+  **/
+-static void bond_ipsec_del_sa(struct xfrm_state *xs)
++static void bond_ipsec_del_sa(struct net_device *bond_dev,
++			      struct xfrm_state *xs)
+ {
+-	struct net_device *bond_dev = xs->xso.dev;
+ 	struct net_device *real_dev;
+-	netdevice_tracker tracker;
+-	struct bond_ipsec *ipsec;
+-	struct bonding *bond;
+-	struct slave *slave;
+ 
+-	if (!bond_dev)
++	if (!bond_dev || !xs->xso.real_dev)
+ 		return;
+ 
+-	rcu_read_lock();
+-	bond = netdev_priv(bond_dev);
+-	slave = rcu_dereference(bond->curr_active_slave);
+-	real_dev = slave ? slave->dev : NULL;
+-	netdev_hold(real_dev, &tracker, GFP_ATOMIC);
+-	rcu_read_unlock();
+-
+-	if (!slave)
+-		goto out;
+-
+-	if (!xs->xso.real_dev)
+-		goto out;
+-
+-	WARN_ON(xs->xso.real_dev != real_dev);
++	real_dev = xs->xso.real_dev;
+ 
+ 	if (!real_dev->xfrmdev_ops ||
+ 	    !real_dev->xfrmdev_ops->xdo_dev_state_delete ||
+ 	    netif_is_bond_master(real_dev)) {
+ 		slave_warn(bond_dev, real_dev, "%s: no slave xdo_dev_state_delete\n", __func__);
+-		goto out;
++		return;
+ 	}
+ 
+-	real_dev->xfrmdev_ops->xdo_dev_state_delete(xs);
+-out:
+-	netdev_put(real_dev, &tracker);
+-	mutex_lock(&bond->ipsec_lock);
+-	list_for_each_entry(ipsec, &bond->ipsec_list, list) {
+-		if (ipsec->xs == xs) {
+-			list_del(&ipsec->list);
+-			kfree(ipsec);
+-			break;
+-		}
+-	}
+-	mutex_unlock(&bond->ipsec_lock);
++	real_dev->xfrmdev_ops->xdo_dev_state_delete(real_dev, xs);
+ }
+ 
+ static void bond_ipsec_del_sa_all(struct bonding *bond)
+@@ -624,46 +612,55 @@ static void bond_ipsec_del_sa_all(struct bonding *bond)
+ 			slave_warn(bond_dev, real_dev,
+ 				   "%s: no slave xdo_dev_state_delete\n",
+ 				   __func__);
+-		} else {
+-			real_dev->xfrmdev_ops->xdo_dev_state_delete(ipsec->xs);
+-			if (real_dev->xfrmdev_ops->xdo_dev_state_free)
+-				real_dev->xfrmdev_ops->xdo_dev_state_free(ipsec->xs);
++			continue;
+ 		}
++
++		spin_lock_bh(&ipsec->xs->lock);
++		ipsec->xs->xso.real_dev = NULL;
++		/* Don't double delete states killed by the user. */
++		if (ipsec->xs->km.state != XFRM_STATE_DEAD)
++			real_dev->xfrmdev_ops->xdo_dev_state_delete(real_dev,
++								    ipsec->xs);
++		spin_unlock_bh(&ipsec->xs->lock);
++
++		if (real_dev->xfrmdev_ops->xdo_dev_state_free)
++			real_dev->xfrmdev_ops->xdo_dev_state_free(real_dev,
++								  ipsec->xs);
+ 	}
+ 	mutex_unlock(&bond->ipsec_lock);
+ }
+ 
+-static void bond_ipsec_free_sa(struct xfrm_state *xs)
++static void bond_ipsec_free_sa(struct net_device *bond_dev,
++			       struct xfrm_state *xs)
+ {
+-	struct net_device *bond_dev = xs->xso.dev;
+ 	struct net_device *real_dev;
+-	netdevice_tracker tracker;
++	struct bond_ipsec *ipsec;
+ 	struct bonding *bond;
+-	struct slave *slave;
+ 
+ 	if (!bond_dev)
+ 		return;
+ 
+-	rcu_read_lock();
+ 	bond = netdev_priv(bond_dev);
+-	slave = rcu_dereference(bond->curr_active_slave);
+-	real_dev = slave ? slave->dev : NULL;
+-	netdev_hold(real_dev, &tracker, GFP_ATOMIC);
+-	rcu_read_unlock();
+-
+-	if (!slave)
+-		goto out;
+ 
++	mutex_lock(&bond->ipsec_lock);
+ 	if (!xs->xso.real_dev)
+ 		goto out;
+ 
+-	WARN_ON(xs->xso.real_dev != real_dev);
++	real_dev = xs->xso.real_dev;
+ 
+-	if (real_dev && real_dev->xfrmdev_ops &&
++	xs->xso.real_dev = NULL;
++	if (real_dev->xfrmdev_ops &&
+ 	    real_dev->xfrmdev_ops->xdo_dev_state_free)
+-		real_dev->xfrmdev_ops->xdo_dev_state_free(xs);
++		real_dev->xfrmdev_ops->xdo_dev_state_free(real_dev, xs);
+ out:
+-	netdev_put(real_dev, &tracker);
++	list_for_each_entry(ipsec, &bond->ipsec_list, list) {
++		if (ipsec->xs == xs) {
++			list_del(&ipsec->list);
++			kfree(ipsec);
++			break;
++		}
++	}
++	mutex_unlock(&bond->ipsec_lock);
+ }
+ 
+ /**
+@@ -2118,15 +2115,26 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
+ 		 * set the master's mac address to that of the first slave
+ 		 */
+ 		memcpy(ss.__data, bond_dev->dev_addr, bond_dev->addr_len);
+-		ss.ss_family = slave_dev->type;
+-		res = dev_set_mac_address(slave_dev, (struct sockaddr *)&ss,
+-					  extack);
+-		if (res) {
+-			slave_err(bond_dev, slave_dev, "Error %d calling set_mac_address\n", res);
+-			goto err_restore_mtu;
+-		}
++	} else if (bond->params.fail_over_mac == BOND_FOM_FOLLOW &&
++		   BOND_MODE(bond) == BOND_MODE_ACTIVEBACKUP &&
++		   memcmp(slave_dev->dev_addr, bond_dev->dev_addr, bond_dev->addr_len) == 0) {
++		/* Set slave to random address to avoid duplicate mac
++		 * address in later fail over.
++		 */
++		eth_random_addr(ss.__data);
++	} else {
++		goto skip_mac_set;
+ 	}
+ 
++	ss.ss_family = slave_dev->type;
++	res = dev_set_mac_address(slave_dev, (struct sockaddr *)&ss, extack);
++	if (res) {
++		slave_err(bond_dev, slave_dev, "Error %d calling set_mac_address\n", res);
++		goto err_restore_mtu;
++	}
++
++skip_mac_set:
++
+ 	/* set no_addrconf flag before open to prevent IPv6 addrconf */
+ 	slave_dev->priv_flags |= IFF_NO_ADDRCONF;
+ 
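
The common thread in the bonding IPsec hunks is an invariant on xs->xso.real_dev: it is published only after xdo_dev_state_add() succeeds on the slave, cleared before (or instead of) a device-side delete, and touched under xs->lock during slave migration, so the del/free paths can trust it rather than re-deriving the active slave under RCU. The add-side ordering, condensed to a non-compilable sketch:

    err = real_dev->xfrmdev_ops->xdo_dev_state_add(real_dev, xs, extack);
    if (!err) {
        /* Only now may del/free target real_dev through xso.real_dev. */
        xs->xso.real_dev = real_dev;
        /* ... track xs on bond->ipsec_list under bond->ipsec_lock ... */
    }
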
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index 7216eb8f949367..dc2f4adac9bc96 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -21,6 +21,8 @@
+ #include <linux/export.h>
+ #include <linux/gpio.h>
+ #include <linux/kernel.h>
++#include <linux/math.h>
++#include <linux/minmax.h>
+ #include <linux/module.h>
+ #include <linux/platform_data/b53.h>
+ #include <linux/phy.h>
+@@ -1202,6 +1204,10 @@ static int b53_setup(struct dsa_switch *ds)
+ 	 */
+ 	ds->untag_vlan_aware_bridge_pvid = true;
+ 
++	/* Ageing time is set in seconds */
++	ds->ageing_time_min = 1 * 1000;
++	ds->ageing_time_max = AGE_TIME_MAX * 1000;
++
+ 	ret = b53_reset_switch(dev);
+ 	if (ret) {
+ 		dev_err(ds->dev, "failed to reset switch\n");
+@@ -1317,41 +1323,17 @@ static void b53_adjust_63xx_rgmii(struct dsa_switch *ds, int port,
+ 				  phy_interface_t interface)
+ {
+ 	struct b53_device *dev = ds->priv;
+-	u8 rgmii_ctrl = 0, off;
+-
+-	if (port == dev->imp_port)
+-		off = B53_RGMII_CTRL_IMP;
+-	else
+-		off = B53_RGMII_CTRL_P(port);
+-
+-	b53_read8(dev, B53_CTRL_PAGE, off, &rgmii_ctrl);
++	u8 rgmii_ctrl = 0;
+ 
+-	switch (interface) {
+-	case PHY_INTERFACE_MODE_RGMII_ID:
+-		rgmii_ctrl |= (RGMII_CTRL_DLL_RXC | RGMII_CTRL_DLL_TXC);
+-		break;
+-	case PHY_INTERFACE_MODE_RGMII_RXID:
+-		rgmii_ctrl &= ~(RGMII_CTRL_DLL_TXC);
+-		rgmii_ctrl |= RGMII_CTRL_DLL_RXC;
+-		break;
+-	case PHY_INTERFACE_MODE_RGMII_TXID:
+-		rgmii_ctrl &= ~(RGMII_CTRL_DLL_RXC);
+-		rgmii_ctrl |= RGMII_CTRL_DLL_TXC;
+-		break;
+-	case PHY_INTERFACE_MODE_RGMII:
+-	default:
+-		rgmii_ctrl &= ~(RGMII_CTRL_DLL_RXC | RGMII_CTRL_DLL_TXC);
+-		break;
+-	}
++	b53_read8(dev, B53_CTRL_PAGE, B53_RGMII_CTRL_P(port), &rgmii_ctrl);
++	rgmii_ctrl &= ~(RGMII_CTRL_DLL_RXC | RGMII_CTRL_DLL_TXC);
+ 
+-	if (port != dev->imp_port) {
+-		if (is63268(dev))
+-			rgmii_ctrl |= RGMII_CTRL_MII_OVERRIDE;
++	if (is63268(dev))
++		rgmii_ctrl |= RGMII_CTRL_MII_OVERRIDE;
+ 
+-		rgmii_ctrl |= RGMII_CTRL_ENABLE_GMII;
+-	}
++	rgmii_ctrl |= RGMII_CTRL_ENABLE_GMII;
+ 
+-	b53_write8(dev, B53_CTRL_PAGE, off, rgmii_ctrl);
++	b53_write8(dev, B53_CTRL_PAGE, B53_RGMII_CTRL_P(port), rgmii_ctrl);
+ 
+ 	dev_dbg(ds->dev, "Configured port %d for %s\n", port,
+ 		phy_modes(interface));
+@@ -1372,8 +1354,7 @@ static void b53_adjust_531x5_rgmii(struct dsa_switch *ds, int port,
+ 	 * tx_clk aligned timing (restoring to reset defaults)
+ 	 */
+ 	b53_read8(dev, B53_CTRL_PAGE, off, &rgmii_ctrl);
+-	rgmii_ctrl &= ~(RGMII_CTRL_DLL_RXC | RGMII_CTRL_DLL_TXC |
+-			RGMII_CTRL_TIMING_SEL);
++	rgmii_ctrl &= ~(RGMII_CTRL_DLL_RXC | RGMII_CTRL_DLL_TXC);
+ 
+ 	/* PHY_INTERFACE_MODE_RGMII_TXID means TX internal delay, make
+ 	 * sure that we enable the port TX clock internal delay to
+@@ -1393,7 +1374,10 @@ static void b53_adjust_531x5_rgmii(struct dsa_switch *ds, int port,
+ 		rgmii_ctrl |= RGMII_CTRL_DLL_TXC;
+ 	if (interface == PHY_INTERFACE_MODE_RGMII)
+ 		rgmii_ctrl |= RGMII_CTRL_DLL_TXC | RGMII_CTRL_DLL_RXC;
+-	rgmii_ctrl |= RGMII_CTRL_TIMING_SEL;
++
++	if (dev->chip_id != BCM53115_DEVICE_ID)
++		rgmii_ctrl |= RGMII_CTRL_TIMING_SEL;
++
+ 	b53_write8(dev, B53_CTRL_PAGE, off, rgmii_ctrl);
+ 
+ 	dev_info(ds->dev, "Configured port %d for %s\n", port,
+@@ -1457,6 +1441,10 @@ static void b53_phylink_get_caps(struct dsa_switch *ds, int port,
+ 	__set_bit(PHY_INTERFACE_MODE_MII, config->supported_interfaces);
+ 	__set_bit(PHY_INTERFACE_MODE_REVMII, config->supported_interfaces);
+ 
++	/* BCM63xx RGMII ports support RGMII */
++	if (is63xx(dev) && in_range(port, B53_63XX_RGMII0, 4))
++		phy_interface_set_rgmii(config->supported_interfaces);
++
+ 	config->mac_capabilities = MAC_ASYM_PAUSE | MAC_SYM_PAUSE |
+ 		MAC_10 | MAC_100;
+ 
+@@ -1496,7 +1484,7 @@ static void b53_phylink_mac_config(struct phylink_config *config,
+ 	struct b53_device *dev = ds->priv;
+ 	int port = dp->index;
+ 
+-	if (is63xx(dev) && port >= B53_63XX_RGMII0)
++	if (is63xx(dev) && in_range(port, B53_63XX_RGMII0, 4))
+ 		b53_adjust_63xx_rgmii(ds, port, interface);
+ 
+ 	if (mode == MLO_AN_FIXED) {
+@@ -2046,9 +2034,6 @@ int b53_br_join(struct dsa_switch *ds, int port, struct dsa_bridge bridge,
+ 
+ 		b53_get_vlan_entry(dev, pvid, vl);
+ 		vl->members &= ~BIT(port);
+-		if (vl->members == BIT(cpu_port))
+-			vl->members &= ~BIT(cpu_port);
+-		vl->untag = vl->members;
+ 		b53_set_vlan_entry(dev, pvid, vl);
+ 	}
+ 
+@@ -2127,8 +2112,7 @@ void b53_br_leave(struct dsa_switch *ds, int port, struct dsa_bridge bridge)
+ 		}
+ 
+ 		b53_get_vlan_entry(dev, pvid, vl);
+-		vl->members |= BIT(port) | BIT(cpu_port);
+-		vl->untag |= BIT(port) | BIT(cpu_port);
++		vl->members |= BIT(port);
+ 		b53_set_vlan_entry(dev, pvid, vl);
+ 	}
+ }
+@@ -2348,6 +2332,9 @@ int b53_eee_init(struct dsa_switch *ds, int port, struct phy_device *phy)
+ {
+ 	int ret;
+ 
++	if (!b53_support_eee(ds, port))
++		return 0;
++
+ 	ret = phy_init_eee(phy, false);
+ 	if (ret)
+ 		return 0;
+@@ -2362,7 +2349,7 @@ bool b53_support_eee(struct dsa_switch *ds, int port)
+ {
+ 	struct b53_device *dev = ds->priv;
+ 
+-	return !is5325(dev) && !is5365(dev);
++	return !is5325(dev) && !is5365(dev) && !is63xx(dev);
+ }
+ EXPORT_SYMBOL(b53_support_eee);
+ 
+@@ -2406,6 +2393,28 @@ static int b53_get_max_mtu(struct dsa_switch *ds, int port)
+ 	return B53_MAX_MTU;
+ }
+ 
++int b53_set_ageing_time(struct dsa_switch *ds, unsigned int msecs)
++{
++	struct b53_device *dev = ds->priv;
++	u32 atc;
++	int reg;
++
++	if (is63xx(dev))
++		reg = B53_AGING_TIME_CONTROL_63XX;
++	else
++		reg = B53_AGING_TIME_CONTROL;
++
++	atc = DIV_ROUND_CLOSEST(msecs, 1000);
++
++	if (!is5325(dev) && !is5365(dev))
++		atc |= AGE_CHANGE;
++
++	b53_write32(dev, B53_MGMT_PAGE, reg, atc);
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(b53_set_ageing_time);
++
+ static const struct phylink_mac_ops b53_phylink_mac_ops = {
+ 	.mac_select_pcs	= b53_phylink_mac_select_pcs,
+ 	.mac_config	= b53_phylink_mac_config,
+@@ -2429,6 +2438,7 @@ static const struct dsa_switch_ops b53_switch_ops = {
+ 	.port_disable		= b53_disable_port,
+ 	.support_eee		= b53_support_eee,
+ 	.set_mac_eee		= b53_set_mac_eee,
++	.set_ageing_time	= b53_set_ageing_time,
+ 	.port_bridge_join	= b53_br_join,
+ 	.port_bridge_leave	= b53_br_leave,
+ 	.port_pre_bridge_flags	= b53_br_flags_pre,
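
b53_set_ageing_time() converts the bridge FDB ageing time from milliseconds to the switch's one-second register granularity and, on chips other than 5325/5365, sets AGE_CHANGE to latch the new value; b53_setup() bounds what the DSA core will pass in via ageing_time_min/max. A standalone check of the register arithmetic (constants copied from b53_regs.h below):

    #include <stdio.h>

    #define AGE_CHANGE      (1u << 20)      /* as in b53_regs.h */

    #define DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))

    /* Compute the AGING_TIME_CONTROL value the way b53_set_ageing_time()
     * does on chips that have the AGE_CHANGE latch bit. */
    static unsigned int age_reg(unsigned int msecs)
    {
        return DIV_ROUND_CLOSEST(msecs, 1000) | AGE_CHANGE;
    }

    int main(void)
    {
        printf("300s -> 0x%x\n", age_reg(300000));  /* 0x10012c */
        printf("1.4s -> 0x%x\n", age_reg(1400));    /* rounds to 1s */
        return 0;
    }
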
+diff --git a/drivers/net/dsa/b53/b53_priv.h b/drivers/net/dsa/b53/b53_priv.h
+index 2cf3e6a81e3785..a5ef7071ba07b1 100644
+--- a/drivers/net/dsa/b53/b53_priv.h
++++ b/drivers/net/dsa/b53/b53_priv.h
+@@ -343,6 +343,7 @@ void b53_get_strings(struct dsa_switch *ds, int port, u32 stringset,
+ void b53_get_ethtool_stats(struct dsa_switch *ds, int port, uint64_t *data);
+ int b53_get_sset_count(struct dsa_switch *ds, int port, int sset);
+ void b53_get_ethtool_phy_stats(struct dsa_switch *ds, int port, uint64_t *data);
++int b53_set_ageing_time(struct dsa_switch *ds, unsigned int msecs);
+ int b53_br_join(struct dsa_switch *ds, int port, struct dsa_bridge bridge,
+ 		bool *tx_fwd_offload, struct netlink_ext_ack *extack);
+ void b53_br_leave(struct dsa_switch *ds, int port, struct dsa_bridge bridge);
+diff --git a/drivers/net/dsa/b53/b53_regs.h b/drivers/net/dsa/b53/b53_regs.h
+index 5f7a0e5c5709d3..1fbc5a204bc721 100644
+--- a/drivers/net/dsa/b53/b53_regs.h
++++ b/drivers/net/dsa/b53/b53_regs.h
+@@ -220,6 +220,13 @@
+ #define   BRCM_HDR_P5_EN		BIT(1) /* Enable tagging on port 5 */
+ #define   BRCM_HDR_P7_EN		BIT(2) /* Enable tagging on port 7 */
+ 
++/* Aging Time control register (32 bit) */
++#define B53_AGING_TIME_CONTROL		0x06
++#define B53_AGING_TIME_CONTROL_63XX	0x08
++#define  AGE_CHANGE			BIT(20)
++#define  AGE_TIME_MASK			0x7ffff
++#define  AGE_TIME_MAX			1048575
++
+ /* Mirror capture control register (16 bit) */
+ #define B53_MIR_CAP_CTL			0x10
+ #define  CAP_PORT_MASK			0xf
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index 454a8c7fd7eea5..960685596093b6 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -1235,6 +1235,7 @@ static const struct dsa_switch_ops bcm_sf2_ops = {
+ 	.port_disable		= bcm_sf2_port_disable,
+ 	.support_eee		= b53_support_eee,
+ 	.set_mac_eee		= b53_set_mac_eee,
++	.set_ageing_time	= b53_set_ageing_time,
+ 	.port_bridge_join	= b53_br_join,
+ 	.port_bridge_leave	= b53_br_leave,
+ 	.port_pre_bridge_flags	= b53_br_flags_pre,
+diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c
+index 1e9ab65218ff14..af28a9300a15c7 100644
+--- a/drivers/net/ethernet/airoha/airoha_eth.c
++++ b/drivers/net/ethernet/airoha/airoha_eth.c
+@@ -67,15 +67,6 @@ static void airoha_qdma_irq_disable(struct airoha_qdma *qdma, int index,
+ 	airoha_qdma_set_irqmask(qdma, index, mask, 0);
+ }
+ 
+-static bool airhoa_is_lan_gdm_port(struct airoha_gdm_port *port)
+-{
+-	/* GDM1 port on EN7581 SoC is connected to the lan dsa switch.
+-	 * GDM{2,3,4} can be used as wan port connected to an external
+-	 * phy module.
+-	 */
+-	return port->id == 1;
+-}
+-
+ static void airoha_set_macaddr(struct airoha_gdm_port *port, const u8 *addr)
+ {
+ 	struct airoha_eth *eth = port->qdma->eth;
+@@ -89,6 +80,8 @@ static void airoha_set_macaddr(struct airoha_gdm_port *port, const u8 *addr)
+ 	val = (addr[3] << 16) | (addr[4] << 8) | addr[5];
+ 	airoha_fe_wr(eth, REG_FE_MAC_LMIN(reg), val);
+ 	airoha_fe_wr(eth, REG_FE_MAC_LMAX(reg), val);
++
++	airoha_ppe_init_upd_mem(port);
+ }
+ 
+ static void airoha_set_gdm_port_fwd_cfg(struct airoha_eth *eth, u32 addr,
+@@ -1068,7 +1061,7 @@ static int airoha_qdma_init_hfwd_queues(struct airoha_qdma *qdma)
+ 			LMGR_INIT_START | LMGR_SRAM_MODE_MASK |
+ 			HW_FWD_DESC_NUM_MASK,
+ 			FIELD_PREP(HW_FWD_DESC_NUM_MASK, HW_DSCP_NUM) |
+-			LMGR_INIT_START);
++			LMGR_INIT_START | LMGR_SRAM_MODE_MASK);
+ 
+ 	return read_poll_timeout(airoha_qdma_rr, status,
+ 				 !(status & LMGR_INIT_START), USEC_PER_MSEC,
+@@ -2541,7 +2534,15 @@ static int airoha_alloc_gdm_port(struct airoha_eth *eth,
+ 	if (err)
+ 		return err;
+ 
+-	return register_netdev(dev);
++	err = register_netdev(dev);
++	if (err)
++		goto free_metadata_dst;
++
++	return 0;
++
++free_metadata_dst:
++	airoha_metadata_dst_free(port);
++	return err;
+ }
+ 
+ static int airoha_probe(struct platform_device *pdev)
+diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h
+index ec8908f904c619..2bf6b1a2dd9b03 100644
+--- a/drivers/net/ethernet/airoha/airoha_eth.h
++++ b/drivers/net/ethernet/airoha/airoha_eth.h
+@@ -532,6 +532,15 @@ u32 airoha_rmw(void __iomem *base, u32 offset, u32 mask, u32 val);
+ #define airoha_qdma_clear(qdma, offset, val)			\
+ 	airoha_rmw((qdma)->regs, (offset), (val), 0)
+ 
++static inline bool airhoa_is_lan_gdm_port(struct airoha_gdm_port *port)
++{
++	/* GDM1 port on EN7581 SoC is connected to the lan dsa switch.
++	 * GDM{2,3,4} can be used as wan port connected to an external
++	 * GDM{2,3,4} can be used as a wan port connected to an external
++	 */
++	return port->id == 1;
++}
++
+ bool airoha_is_valid_gdm_port(struct airoha_eth *eth,
+ 			      struct airoha_gdm_port *port);
+ 
+@@ -540,6 +549,7 @@ int airoha_ppe_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+ 				 void *cb_priv);
+ int airoha_ppe_init(struct airoha_eth *eth);
+ void airoha_ppe_deinit(struct airoha_eth *eth);
++void airoha_ppe_init_upd_mem(struct airoha_gdm_port *port);
+ struct airoha_foe_entry *airoha_ppe_foe_get_entry(struct airoha_ppe *ppe,
+ 						  u32 hash);
+ 
+diff --git a/drivers/net/ethernet/airoha/airoha_ppe.c b/drivers/net/ethernet/airoha/airoha_ppe.c
+index f10dab935cab6f..1b8f21f808890e 100644
+--- a/drivers/net/ethernet/airoha/airoha_ppe.c
++++ b/drivers/net/ethernet/airoha/airoha_ppe.c
+@@ -206,6 +206,7 @@ static int airoha_ppe_foe_entry_prepare(struct airoha_eth *eth,
+ 	int dsa_port = airoha_get_dsa_port(&dev);
+ 	struct airoha_foe_mac_info_common *l2;
+ 	u32 qdata, ports_pad, val;
++	u8 smac_id = 0xf;
+ 
+ 	memset(hwe, 0, sizeof(*hwe));
+ 
+@@ -234,6 +235,14 @@ static int airoha_ppe_foe_entry_prepare(struct airoha_eth *eth,
+ 		else
+ 			pse_port = 2; /* uplink relies on GDM2 loopback */
+ 		val |= FIELD_PREP(AIROHA_FOE_IB2_PSE_PORT, pse_port);
++
++		/* For downlink traffic, consume SRAM memory for the hw
++		 * forwarding descriptor queue.
++		 */
++		if (airhoa_is_lan_gdm_port(port))
++			val |= AIROHA_FOE_IB2_FAST_PATH;
++
++		smac_id = port->id;
+ 	}
+ 
+ 	if (is_multicast_ether_addr(data->eth.h_dest))
+@@ -274,7 +283,7 @@ static int airoha_ppe_foe_entry_prepare(struct airoha_eth *eth,
+ 		hwe->ipv4.l2.src_mac_lo =
+ 			get_unaligned_be16(data->eth.h_source + 4);
+ 	} else {
+-		l2->src_mac_hi = FIELD_PREP(AIROHA_FOE_MAC_SMAC_ID, 0xf);
++		l2->src_mac_hi = FIELD_PREP(AIROHA_FOE_MAC_SMAC_ID, smac_id);
+ 	}
+ 
+ 	if (data->vlan.num) {
+@@ -862,6 +871,27 @@ void airoha_ppe_check_skb(struct airoha_ppe *ppe, u16 hash)
+ 	airoha_ppe_foe_insert_entry(ppe, hash);
+ }
+ 
++void airoha_ppe_init_upd_mem(struct airoha_gdm_port *port)
++{
++	struct airoha_eth *eth = port->qdma->eth;
++	struct net_device *dev = port->dev;
++	const u8 *addr = dev->dev_addr;
++	u32 val;
++
++	val = (addr[2] << 24) | (addr[3] << 16) | (addr[4] << 8) | addr[5];
++	airoha_fe_wr(eth, REG_UPDMEM_DATA(0), val);
++	airoha_fe_wr(eth, REG_UPDMEM_CTRL(0),
++		     FIELD_PREP(PPE_UPDMEM_ADDR_MASK, port->id) |
++		     PPE_UPDMEM_WR_MASK | PPE_UPDMEM_REQ_MASK);
++
++	val = (addr[0] << 8) | addr[1];
++	airoha_fe_wr(eth, REG_UPDMEM_DATA(0), val);
++	airoha_fe_wr(eth, REG_UPDMEM_CTRL(0),
++		     FIELD_PREP(PPE_UPDMEM_ADDR_MASK, port->id) |
++		     FIELD_PREP(PPE_UPDMEM_OFFSET_MASK, 1) |
++		     PPE_UPDMEM_WR_MASK | PPE_UPDMEM_REQ_MASK);
++}
++
+ int airoha_ppe_init(struct airoha_eth *eth)
+ {
+ 	struct airoha_ppe *ppe;
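
airoha_ppe_init_upd_mem() programs the port MAC into PPE update memory as two register writes: bytes 2-5 packed into the first 32-bit data word, bytes 0-1 into a second word written at offset 1. A standalone check of that packing (the MAC value is invented):

    #include <stdio.h>

    int main(void)
    {
        const unsigned char addr[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
        unsigned int lo, hi;

        /* first REG_UPDMEM_DATA write: addr[2..5] */
        lo = ((unsigned int)addr[2] << 24) | (addr[3] << 16) |
             (addr[4] << 8) | addr[5];
        /* second write at UPDMEM offset 1: addr[0..1] */
        hi = (addr[0] << 8) | addr[1];

        printf("lo=0x%08x hi=0x%04x\n", lo, hi); /* lo=0x22334455 hi=0x0011 */
        return 0;
    }
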
+diff --git a/drivers/net/ethernet/airoha/airoha_regs.h b/drivers/net/ethernet/airoha/airoha_regs.h
+index 8146cde4e8ba37..57bff8d2de276b 100644
+--- a/drivers/net/ethernet/airoha/airoha_regs.h
++++ b/drivers/net/ethernet/airoha/airoha_regs.h
+@@ -312,6 +312,16 @@
+ #define REG_PPE_RAM_BASE(_n)			(((_n) ? PPE2_BASE : PPE1_BASE) + 0x320)
+ #define REG_PPE_RAM_ENTRY(_m, _n)		(REG_PPE_RAM_BASE(_m) + ((_n) << 2))
+ 
++#define REG_UPDMEM_CTRL(_n)			(((_n) ? PPE2_BASE : PPE1_BASE) + 0x370)
++#define PPE_UPDMEM_ACK_MASK			BIT(31)
++#define PPE_UPDMEM_ADDR_MASK			GENMASK(11, 8)
++#define PPE_UPDMEM_OFFSET_MASK			GENMASK(7, 4)
++#define PPE_UPDMEM_SEL_MASK			GENMASK(3, 2)
++#define PPE_UPDMEM_WR_MASK			BIT(1)
++#define PPE_UPDMEM_REQ_MASK			BIT(0)
++
++#define REG_UPDMEM_DATA(_n)			(((_n) ? PPE2_BASE : PPE1_BASE) + 0x374)
++
+ #define REG_FE_GDM_TX_OK_PKT_CNT_H(_n)		(GDM_BASE(_n) + 0x280)
+ #define REG_FE_GDM_TX_OK_BYTE_CNT_H(_n)		(GDM_BASE(_n) + 0x284)
+ #define REG_FE_GDM_TX_ETH_PKT_CNT_H(_n)		(GDM_BASE(_n) + 0x288)
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 551c279dc14bed..51395c96b2e994 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -6480,10 +6480,11 @@ static const struct tlsdev_ops cxgb4_ktls_ops = {
+ 
+ #if IS_ENABLED(CONFIG_CHELSIO_IPSEC_INLINE)
+ 
+-static int cxgb4_xfrm_add_state(struct xfrm_state *x,
++static int cxgb4_xfrm_add_state(struct net_device *dev,
++				struct xfrm_state *x,
+ 				struct netlink_ext_ack *extack)
+ {
+-	struct adapter *adap = netdev2adap(x->xso.dev);
++	struct adapter *adap = netdev2adap(dev);
+ 	int ret;
+ 
+ 	if (!mutex_trylock(&uld_mutex)) {
+@@ -6494,7 +6495,8 @@ static int cxgb4_xfrm_add_state(struct xfrm_state *x,
+ 	if (ret)
+ 		goto out_unlock;
+ 
+-	ret = adap->uld[CXGB4_ULD_IPSEC].xfrmdev_ops->xdo_dev_state_add(x, extack);
++	ret = adap->uld[CXGB4_ULD_IPSEC].xfrmdev_ops->xdo_dev_state_add(dev, x,
++									extack);
+ 
+ out_unlock:
+ 	mutex_unlock(&uld_mutex);
+@@ -6502,9 +6504,9 @@ static int cxgb4_xfrm_add_state(struct xfrm_state *x,
+ 	return ret;
+ }
+ 
+-static void cxgb4_xfrm_del_state(struct xfrm_state *x)
++static void cxgb4_xfrm_del_state(struct net_device *dev, struct xfrm_state *x)
+ {
+-	struct adapter *adap = netdev2adap(x->xso.dev);
++	struct adapter *adap = netdev2adap(dev);
+ 
+ 	if (!mutex_trylock(&uld_mutex)) {
+ 		dev_dbg(adap->pdev_dev,
+@@ -6514,15 +6516,15 @@ static void cxgb4_xfrm_del_state(struct xfrm_state *x)
+ 	if (chcr_offload_state(adap, CXGB4_XFRMDEV_OPS))
+ 		goto out_unlock;
+ 
+-	adap->uld[CXGB4_ULD_IPSEC].xfrmdev_ops->xdo_dev_state_delete(x);
++	adap->uld[CXGB4_ULD_IPSEC].xfrmdev_ops->xdo_dev_state_delete(dev, x);
+ 
+ out_unlock:
+ 	mutex_unlock(&uld_mutex);
+ }
+ 
+-static void cxgb4_xfrm_free_state(struct xfrm_state *x)
++static void cxgb4_xfrm_free_state(struct net_device *dev, struct xfrm_state *x)
+ {
+-	struct adapter *adap = netdev2adap(x->xso.dev);
++	struct adapter *adap = netdev2adap(dev);
+ 
+ 	if (!mutex_trylock(&uld_mutex)) {
+ 		dev_dbg(adap->pdev_dev,
+@@ -6532,7 +6534,7 @@ static void cxgb4_xfrm_free_state(struct xfrm_state *x)
+ 	if (chcr_offload_state(adap, CXGB4_XFRMDEV_OPS))
+ 		goto out_unlock;
+ 
+-	adap->uld[CXGB4_ULD_IPSEC].xfrmdev_ops->xdo_dev_state_free(x);
++	adap->uld[CXGB4_ULD_IPSEC].xfrmdev_ops->xdo_dev_state_free(dev, x);
+ 
+ out_unlock:
+ 	mutex_unlock(&uld_mutex);
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
+index baba96883f48b5..ecd9a0bd5e1822 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
+@@ -75,9 +75,12 @@ static int ch_ipsec_uld_state_change(void *handle, enum cxgb4_state new_state);
+ static int ch_ipsec_xmit(struct sk_buff *skb, struct net_device *dev);
+ static void *ch_ipsec_uld_add(const struct cxgb4_lld_info *infop);
+ static void ch_ipsec_advance_esn_state(struct xfrm_state *x);
+-static void ch_ipsec_xfrm_free_state(struct xfrm_state *x);
+-static void ch_ipsec_xfrm_del_state(struct xfrm_state *x);
+-static int ch_ipsec_xfrm_add_state(struct xfrm_state *x,
++static void ch_ipsec_xfrm_free_state(struct net_device *dev,
++				     struct xfrm_state *x);
++static void ch_ipsec_xfrm_del_state(struct net_device *dev,
++				    struct xfrm_state *x);
++static int ch_ipsec_xfrm_add_state(struct net_device *dev,
++				   struct xfrm_state *x,
+ 				   struct netlink_ext_ack *extack);
+ 
+ static const struct xfrmdev_ops ch_ipsec_xfrmdev_ops = {
+@@ -223,7 +226,8 @@ static int ch_ipsec_setkey(struct xfrm_state *x,
+  * returns 0 on success, negative error if failed to send message to FPGA
+  * positive error if FPGA returned a bad response
+  */
+-static int ch_ipsec_xfrm_add_state(struct xfrm_state *x,
++static int ch_ipsec_xfrm_add_state(struct net_device *dev,
++				   struct xfrm_state *x,
+ 				   struct netlink_ext_ack *extack)
+ {
+ 	struct ipsec_sa_entry *sa_entry;
+@@ -302,14 +306,16 @@ static int ch_ipsec_xfrm_add_state(struct xfrm_state *x,
+ 	return res;
+ }
+ 
+-static void ch_ipsec_xfrm_del_state(struct xfrm_state *x)
++static void ch_ipsec_xfrm_del_state(struct net_device *dev,
++				    struct xfrm_state *x)
+ {
+ 	/* do nothing */
+ 	if (!x->xso.offload_handle)
+ 		return;
+ }
+ 
+-static void ch_ipsec_xfrm_free_state(struct xfrm_state *x)
++static void ch_ipsec_xfrm_free_state(struct net_device *dev,
++				     struct xfrm_state *x)
+ {
+ 	struct ipsec_sa_entry *sa_entry;
+ 
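
As with the bonding and cxgb4 hunks earlier, all three Chelsio callbacks gain an explicit struct net_device * first argument so drivers stop reading the device out of xs->xso.dev. A driver following the updated signatures would wire up its ops like this (sketch, handler bodies elided):

    /* Updated xfrmdev_ops callback signatures per this patch. */
    static int my_xfrm_add_state(struct net_device *dev, struct xfrm_state *x,
                                 struct netlink_ext_ack *extack);
    static void my_xfrm_del_state(struct net_device *dev, struct xfrm_state *x);
    static void my_xfrm_free_state(struct net_device *dev, struct xfrm_state *x);

    static const struct xfrmdev_ops my_xfrmdev_ops = {
        .xdo_dev_state_add      = my_xfrm_add_state,
        .xdo_dev_state_delete   = my_xfrm_del_state,
        .xdo_dev_state_free     = my_xfrm_free_state,
    };
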
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index c3791cf23c876c..d561d45021a581 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -2153,7 +2153,7 @@ void gve_handle_report_stats(struct gve_priv *priv)
+ 			};
+ 			stats[stats_idx++] = (struct stats) {
+ 				.stat_name = cpu_to_be32(RX_BUFFERS_POSTED),
+-				.value = cpu_to_be64(priv->rx[0].fill_cnt),
++				.value = cpu_to_be64(priv->rx[idx].fill_cnt),
+ 				.queue_id = cpu_to_be32(idx),
+ 			};
+ 		}
+diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+index 2eba868d80370a..f7da7de23d6726 100644
+--- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c
++++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+@@ -763,6 +763,9 @@ static int gve_tx_add_skb_dqo(struct gve_tx_ring *tx,
+ 	s16 completion_tag;
+ 
+ 	pkt = gve_alloc_pending_packet(tx);
++	if (!pkt)
++		return -ENOMEM;
++
+ 	pkt->skb = skb;
+ 	completion_tag = pkt - tx->dqo.pending_packets;
+ 
+diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
+index 3f089c3d47b23b..d8595e84326dbc 100644
+--- a/drivers/net/ethernet/intel/e1000/e1000_main.c
++++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
+@@ -477,10 +477,6 @@ static void e1000_down_and_stop(struct e1000_adapter *adapter)
+ 
+ 	cancel_delayed_work_sync(&adapter->phy_info_task);
+ 	cancel_delayed_work_sync(&adapter->fifo_stall_task);
+-
+-	/* Only kill reset task if adapter is not resetting */
+-	if (!test_bit(__E1000_RESETTING, &adapter->flags))
+-		cancel_work_sync(&adapter->reset_task);
+ }
+ 
+ void e1000_down(struct e1000_adapter *adapter)
+@@ -1266,6 +1262,10 @@ static void e1000_remove(struct pci_dev *pdev)
+ 
+ 	unregister_netdev(netdev);
+ 
++	/* Only kill reset task if adapter is not resetting */
++	if (!test_bit(__E1000_RESETTING, &adapter->flags))
++		cancel_work_sync(&adapter->reset_task);
++
+ 	e1000_phy_hw_reset(hw);
+ 
+ 	kfree(adapter->tx_ring);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 1120f8e4bb6703..88e6bef69342c2 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -1546,8 +1546,8 @@ static void i40e_cleanup_reset_vf(struct i40e_vf *vf)
+  * @vf: pointer to the VF structure
+  * @flr: VFLR was issued or not
+  *
+- * Returns true if the VF is in reset, resets successfully, or resets
+- * are disabled and false otherwise.
++ * Return: True if reset was performed successfully or if resets are disabled.
++ * False if reset is already in progress.
+  **/
+ bool i40e_reset_vf(struct i40e_vf *vf, bool flr)
+ {
+@@ -1566,7 +1566,7 @@ bool i40e_reset_vf(struct i40e_vf *vf, bool flr)
+ 
+ 	/* If VF is being reset already we don't need to continue. */
+ 	if (test_and_set_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
+-		return true;
++		return false;
+ 
+ 	i40e_trigger_vf_reset(vf, flr);
+ 
+@@ -4328,7 +4328,10 @@ int i40e_vc_process_vflr_event(struct i40e_pf *pf)
+ 		reg = rd32(hw, I40E_GLGEN_VFLRSTAT(reg_idx));
+ 		if (reg & BIT(bit_idx))
+ 			/* i40e_reset_vf will clear the bit in GLGEN_VFLRSTAT */
+-			i40e_reset_vf(vf, true);
++			if (!i40e_reset_vf(vf, true)) {
++				/* At least one VF did not finish resetting, retry next time */
++				set_bit(__I40E_VFLR_EVENT_PENDING, pf->state);
++			}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
+index 9de3e0ba37316c..f7a98ff43a57fb 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf.h
++++ b/drivers/net/ethernet/intel/iavf/iavf.h
+@@ -268,7 +268,6 @@ struct iavf_adapter {
+ 	struct list_head vlan_filter_list;
+ 	int num_vlan_filters;
+ 	struct list_head mac_filter_list;
+-	struct mutex crit_lock;
+ 	/* Lock to protect accesses to MAC and VLAN lists */
+ 	spinlock_t mac_vlan_list_lock;
+ 	char misc_vector_name[IFNAMSIZ + 9];
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+index 288bb5b2e72ef7..2b2b315205b5e0 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+@@ -4,6 +4,8 @@
+ #include <linux/bitfield.h>
+ #include <linux/uaccess.h>
+ 
++#include <net/netdev_lock.h>
++
+ /* ethtool support for iavf */
+ #include "iavf.h"
+ 
+@@ -1256,9 +1258,10 @@ static int iavf_add_fdir_ethtool(struct iavf_adapter *adapter, struct ethtool_rx
+ {
+ 	struct ethtool_rx_flow_spec *fsp = &cmd->fs;
+ 	struct iavf_fdir_fltr *fltr;
+-	int count = 50;
+ 	int err;
+ 
++	netdev_assert_locked(adapter->netdev);
++
+ 	if (!(adapter->flags & IAVF_FLAG_FDIR_ENABLED))
+ 		return -EOPNOTSUPP;
+ 
+@@ -1277,14 +1280,6 @@ static int iavf_add_fdir_ethtool(struct iavf_adapter *adapter, struct ethtool_rx
+ 	if (!fltr)
+ 		return -ENOMEM;
+ 
+-	while (!mutex_trylock(&adapter->crit_lock)) {
+-		if (--count == 0) {
+-			kfree(fltr);
+-			return -EINVAL;
+-		}
+-		udelay(1);
+-	}
+-
+ 	err = iavf_add_fdir_fltr_info(adapter, fsp, fltr);
+ 	if (!err)
+ 		err = iavf_fdir_add_fltr(adapter, fltr);
+@@ -1292,7 +1287,6 @@ static int iavf_add_fdir_ethtool(struct iavf_adapter *adapter, struct ethtool_rx
+ 	if (err)
+ 		kfree(fltr);
+ 
+-	mutex_unlock(&adapter->crit_lock);
+ 	return err;
+ }
+ 
+@@ -1435,11 +1429,13 @@ iavf_set_adv_rss_hash_opt(struct iavf_adapter *adapter,
+ {
+ 	struct iavf_adv_rss *rss_old, *rss_new;
+ 	bool rss_new_add = false;
+-	int count = 50, err = 0;
+ 	bool symm = false;
+ 	u64 hash_flds;
++	int err = 0;
+ 	u32 hdrs;
+ 
++	netdev_assert_locked(adapter->netdev);
++
+ 	if (!ADV_RSS_SUPPORT(adapter))
+ 		return -EOPNOTSUPP;
+ 
+@@ -1463,15 +1459,6 @@ iavf_set_adv_rss_hash_opt(struct iavf_adapter *adapter,
+ 		return -EINVAL;
+ 	}
+ 
+-	while (!mutex_trylock(&adapter->crit_lock)) {
+-		if (--count == 0) {
+-			kfree(rss_new);
+-			return -EINVAL;
+-		}
+-
+-		udelay(1);
+-	}
+-
+ 	spin_lock_bh(&adapter->adv_rss_lock);
+ 	rss_old = iavf_find_adv_rss_cfg_by_hdrs(adapter, hdrs);
+ 	if (rss_old) {
+@@ -1500,8 +1487,6 @@ iavf_set_adv_rss_hash_opt(struct iavf_adapter *adapter,
+ 	if (!err)
+ 		iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_ADD_ADV_RSS_CFG);
+ 
+-	mutex_unlock(&adapter->crit_lock);
+-
+ 	if (!rss_new_add)
+ 		kfree(rss_new);
+ 
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 6d7ba4d67a1933..81d7249d1149c8 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -1287,11 +1287,11 @@ static void iavf_configure(struct iavf_adapter *adapter)
+ /**
+  * iavf_up_complete - Finish the last steps of bringing up a connection
+  * @adapter: board private structure
+- *
+- * Expects to be called while holding crit_lock.
+- **/
++ */
+ static void iavf_up_complete(struct iavf_adapter *adapter)
+ {
++	netdev_assert_locked(adapter->netdev);
++
+ 	iavf_change_state(adapter, __IAVF_RUNNING);
+ 	clear_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
+ 
+@@ -1410,13 +1410,13 @@ static void iavf_clear_adv_rss_conf(struct iavf_adapter *adapter)
+ /**
+  * iavf_down - Shutdown the connection processing
+  * @adapter: board private structure
+- *
+- * Expects to be called while holding crit_lock.
+- **/
++ */
+ void iavf_down(struct iavf_adapter *adapter)
+ {
+ 	struct net_device *netdev = adapter->netdev;
+ 
++	netdev_assert_locked(netdev);
++
+ 	if (adapter->state <= __IAVF_DOWN_PENDING)
+ 		return;
+ 
+@@ -2025,22 +2025,21 @@ static int iavf_reinit_interrupt_scheme(struct iavf_adapter *adapter, bool runni
+  * iavf_finish_config - do all netdev work that needs RTNL
+  * @work: our work_struct
+  *
+- * Do work that needs both RTNL and crit_lock.
+- **/
++ * Do work that needs RTNL.
++ */
+ static void iavf_finish_config(struct work_struct *work)
+ {
+ 	struct iavf_adapter *adapter;
+-	bool locks_released = false;
++	bool netdev_released = false;
+ 	int pairs, err;
+ 
+ 	adapter = container_of(work, struct iavf_adapter, finish_config);
+ 
+ 	/* Always take RTNL first to prevent circular lock dependency;
+-	 * The dev->lock is needed to update the queue number
++	 * the dev->lock (== netdev lock) is needed to update the queue number.
+ 	 */
+ 	rtnl_lock();
+ 	netdev_lock(adapter->netdev);
+-	mutex_lock(&adapter->crit_lock);
+ 
+ 	if ((adapter->flags & IAVF_FLAG_SETUP_NETDEV_FEATURES) &&
+ 	    adapter->netdev->reg_state == NETREG_REGISTERED &&
+@@ -2059,22 +2058,21 @@ static void iavf_finish_config(struct work_struct *work)
+ 		netif_set_real_num_tx_queues(adapter->netdev, pairs);
+ 
+ 		if (adapter->netdev->reg_state != NETREG_REGISTERED) {
+-			mutex_unlock(&adapter->crit_lock);
+ 			netdev_unlock(adapter->netdev);
+-			locks_released = true;
++			netdev_released = true;
+ 			err = register_netdevice(adapter->netdev);
+ 			if (err) {
+ 				dev_err(&adapter->pdev->dev, "Unable to register netdev (%d)\n",
+ 					err);
+ 
+ 				/* go back and try again.*/
+-				mutex_lock(&adapter->crit_lock);
++				netdev_lock(adapter->netdev);
+ 				iavf_free_rss(adapter);
+ 				iavf_free_misc_irq(adapter);
+ 				iavf_reset_interrupt_capability(adapter);
+ 				iavf_change_state(adapter,
+ 						  __IAVF_INIT_CONFIG_ADAPTER);
+-				mutex_unlock(&adapter->crit_lock);
++				netdev_unlock(adapter->netdev);
+ 				goto out;
+ 			}
+ 		}
+@@ -2090,10 +2088,8 @@ static void iavf_finish_config(struct work_struct *work)
+ 	}
+ 
+ out:
+-	if (!locks_released) {
+-		mutex_unlock(&adapter->crit_lock);
++	if (!netdev_released)
+ 		netdev_unlock(adapter->netdev);
+-	}
+ 	rtnl_unlock();
+ }
+ 
+@@ -2911,28 +2907,15 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter)
+ 	iavf_change_state(adapter, __IAVF_INIT_FAILED);
+ }
+ 
+-/**
+- * iavf_watchdog_task - Periodic call-back task
+- * @work: pointer to work_struct
+- **/
+-static void iavf_watchdog_task(struct work_struct *work)
++static const int IAVF_NO_RESCHED = -1;
++
++/* return: msec delay for requeueing itself */
++static int iavf_watchdog_step(struct iavf_adapter *adapter)
+ {
+-	struct iavf_adapter *adapter = container_of(work,
+-						    struct iavf_adapter,
+-						    watchdog_task.work);
+-	struct net_device *netdev = adapter->netdev;
+ 	struct iavf_hw *hw = &adapter->hw;
+ 	u32 reg_val;
+ 
+-	netdev_lock(netdev);
+-	if (!mutex_trylock(&adapter->crit_lock)) {
+-		if (adapter->state == __IAVF_REMOVE) {
+-			netdev_unlock(netdev);
+-			return;
+-		}
+-
+-		goto restart_watchdog;
+-	}
++	netdev_assert_locked(adapter->netdev);
+ 
+ 	if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)
+ 		iavf_change_state(adapter, __IAVF_COMM_FAILED);
+@@ -2940,39 +2923,19 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 	switch (adapter->state) {
+ 	case __IAVF_STARTUP:
+ 		iavf_startup(adapter);
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+-				   msecs_to_jiffies(30));
+-		return;
++		return 30;
+ 	case __IAVF_INIT_VERSION_CHECK:
+ 		iavf_init_version_check(adapter);
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+-				   msecs_to_jiffies(30));
+-		return;
++		return 30;
+ 	case __IAVF_INIT_GET_RESOURCES:
+ 		iavf_init_get_resources(adapter);
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+-				   msecs_to_jiffies(1));
+-		return;
++		return 1;
+ 	case __IAVF_INIT_EXTENDED_CAPS:
+ 		iavf_init_process_extended_caps(adapter);
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+-				   msecs_to_jiffies(1));
+-		return;
++		return 1;
+ 	case __IAVF_INIT_CONFIG_ADAPTER:
+ 		iavf_init_config_adapter(adapter);
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+-				   msecs_to_jiffies(1));
+-		return;
++		return 1;
+ 	case __IAVF_INIT_FAILED:
+ 		if (test_bit(__IAVF_IN_REMOVE_TASK,
+ 			     &adapter->crit_section)) {
+@@ -2980,27 +2943,18 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 			 * watchdog task, iavf_remove should handle this state
+ 			 * as it can loop forever
+ 			 */
+-			mutex_unlock(&adapter->crit_lock);
+-			netdev_unlock(netdev);
+-			return;
++			return IAVF_NO_RESCHED;
+ 		}
+ 		if (++adapter->aq_wait_count > IAVF_AQ_MAX_ERR) {
+ 			dev_err(&adapter->pdev->dev,
+ 				"Failed to communicate with PF; waiting before retry\n");
+ 			adapter->flags |= IAVF_FLAG_PF_COMMS_FAILED;
+ 			iavf_shutdown_adminq(hw);
+-			mutex_unlock(&adapter->crit_lock);
+-			netdev_unlock(netdev);
+-			queue_delayed_work(adapter->wq,
+-					   &adapter->watchdog_task, (5 * HZ));
+-			return;
++			return 5000;
+ 		}
+ 		/* Try again from failed step*/
+ 		iavf_change_state(adapter, adapter->last_state);
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq, &adapter->watchdog_task, HZ);
+-		return;
++		return 1000;
+ 	case __IAVF_COMM_FAILED:
+ 		if (test_bit(__IAVF_IN_REMOVE_TASK,
+ 			     &adapter->crit_section)) {
+@@ -3010,9 +2964,7 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 			 */
+ 			iavf_change_state(adapter, __IAVF_INIT_FAILED);
+ 			adapter->flags &= ~IAVF_FLAG_PF_COMMS_FAILED;
+-			mutex_unlock(&adapter->crit_lock);
+-			netdev_unlock(netdev);
+-			return;
++			return IAVF_NO_RESCHED;
+ 		}
+ 		reg_val = rd32(hw, IAVF_VFGEN_RSTAT) &
+ 			  IAVF_VFGEN_RSTAT_VFR_STATE_MASK;
+@@ -3030,18 +2982,9 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		}
+ 		adapter->aq_required = 0;
+ 		adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq,
+-				   &adapter->watchdog_task,
+-				   msecs_to_jiffies(10));
+-		return;
++		return 10;
+ 	case __IAVF_RESETTING:
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+-				   HZ * 2);
+-		return;
++		return 2000;
+ 	case __IAVF_DOWN:
+ 	case __IAVF_DOWN_PENDING:
+ 	case __IAVF_TESTING:
+@@ -3068,9 +3011,7 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		break;
+ 	case __IAVF_REMOVE:
+ 	default:
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		return;
++		return IAVF_NO_RESCHED;
+ 	}
+ 
+ 	/* check for hw reset */
+@@ -3080,24 +3021,29 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+ 		dev_err(&adapter->pdev->dev, "Hardware reset detected\n");
+ 		iavf_schedule_reset(adapter, IAVF_FLAG_RESET_PENDING);
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq,
+-				   &adapter->watchdog_task, HZ * 2);
+-		return;
+ 	}
+ 
+-	mutex_unlock(&adapter->crit_lock);
+-restart_watchdog:
+-	netdev_unlock(netdev);
++	return adapter->aq_required ? 20 : 2000;
++}
++
++static void iavf_watchdog_task(struct work_struct *work)
++{
++	struct iavf_adapter *adapter = container_of(work,
++						    struct iavf_adapter,
++						    watchdog_task.work);
++	struct net_device *netdev = adapter->netdev;
++	int msec_delay;
++
++	netdev_lock(netdev);
++	msec_delay = iavf_watchdog_step(adapter);
++	/* note that we schedule a different task */
+ 	if (adapter->state >= __IAVF_DOWN)
+ 		queue_work(adapter->wq, &adapter->adminq_task);
+-	if (adapter->aq_required)
+-		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+-				   msecs_to_jiffies(20));
+-	else
++
++	if (msec_delay != IAVF_NO_RESCHED)
+ 		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+-				   HZ * 2);
++				   msecs_to_jiffies(msec_delay));
++	netdev_unlock(netdev);
+ }
+ 
+ /**
+@@ -3105,14 +3051,15 @@ static void iavf_watchdog_task(struct work_struct *work)
+  * @adapter: board private structure
+  *
+  * Set communication failed flag and free all resources.
+- * NOTE: This function is expected to be called with crit_lock being held.
+- **/
++ */
+ static void iavf_disable_vf(struct iavf_adapter *adapter)
+ {
+ 	struct iavf_mac_filter *f, *ftmp;
+ 	struct iavf_vlan_filter *fv, *fvtmp;
+ 	struct iavf_cloud_filter *cf, *cftmp;
+ 
++	netdev_assert_locked(adapter->netdev);
++
+ 	adapter->flags |= IAVF_FLAG_PF_COMMS_FAILED;
+ 
+ 	/* We don't use netif_running() because it may be true prior to
+@@ -3212,17 +3159,7 @@ static void iavf_reset_task(struct work_struct *work)
+ 	int i = 0, err;
+ 	bool running;
+ 
+-	/* When device is being removed it doesn't make sense to run the reset
+-	 * task, just return in such a case.
+-	 */
+ 	netdev_lock(netdev);
+-	if (!mutex_trylock(&adapter->crit_lock)) {
+-		if (adapter->state != __IAVF_REMOVE)
+-			queue_work(adapter->wq, &adapter->reset_task);
+-
+-		netdev_unlock(netdev);
+-		return;
+-	}
+ 
+ 	iavf_misc_irq_disable(adapter);
+ 	if (adapter->flags & IAVF_FLAG_RESET_NEEDED) {
+@@ -3267,12 +3204,22 @@ static void iavf_reset_task(struct work_struct *work)
+ 		dev_err(&adapter->pdev->dev, "Reset never finished (%x)\n",
+ 			reg_val);
+ 		iavf_disable_vf(adapter);
+-		mutex_unlock(&adapter->crit_lock);
+ 		netdev_unlock(netdev);
+ 		return; /* Do not attempt to reinit. It's dead, Jim. */
+ 	}
+ 
+ continue_reset:
++	/* If we are still early in the state machine, just restart. */
++	if (adapter->state <= __IAVF_INIT_FAILED) {
++		iavf_shutdown_adminq(hw);
++		iavf_change_state(adapter, __IAVF_STARTUP);
++		iavf_startup(adapter);
++		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
++				   msecs_to_jiffies(30));
++		netdev_unlock(netdev);
++		return;
++	}
++
+ 	/* We don't use netif_running() because it may be true prior to
+ 	 * ndo_open() returning, so we can't assume it means all our open
+ 	 * tasks have finished, since we're not holding the rtnl_lock here.
+@@ -3411,7 +3358,6 @@ static void iavf_reset_task(struct work_struct *work)
+ 	adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
+ 
+ 	wake_up(&adapter->reset_waitqueue);
+-	mutex_unlock(&adapter->crit_lock);
+ 	netdev_unlock(netdev);
+ 
+ 	return;
+@@ -3422,7 +3368,6 @@ static void iavf_reset_task(struct work_struct *work)
+ 	}
+ 	iavf_disable_vf(adapter);
+ 
+-	mutex_unlock(&adapter->crit_lock);
+ 	netdev_unlock(netdev);
+ 	dev_err(&adapter->pdev->dev, "failed to allocate resources during reinit\n");
+ }
+@@ -3435,6 +3380,7 @@ static void iavf_adminq_task(struct work_struct *work)
+ {
+ 	struct iavf_adapter *adapter =
+ 		container_of(work, struct iavf_adapter, adminq_task);
++	struct net_device *netdev = adapter->netdev;
+ 	struct iavf_hw *hw = &adapter->hw;
+ 	struct iavf_arq_event_info event;
+ 	enum virtchnl_ops v_op;
+@@ -3442,13 +3388,7 @@ static void iavf_adminq_task(struct work_struct *work)
+ 	u32 val, oldval;
+ 	u16 pending;
+ 
+-	if (!mutex_trylock(&adapter->crit_lock)) {
+-		if (adapter->state == __IAVF_REMOVE)
+-			return;
+-
+-		queue_work(adapter->wq, &adapter->adminq_task);
+-		goto out;
+-	}
++	netdev_lock(netdev);
+ 
+ 	if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)
+ 		goto unlock;
+@@ -3515,8 +3455,7 @@ static void iavf_adminq_task(struct work_struct *work)
+ freedom:
+ 	kfree(event.msg_buf);
+ unlock:
+-	mutex_unlock(&adapter->crit_lock);
+-out:
++	netdev_unlock(netdev);
+ 	/* re-enable Admin queue interrupt cause */
+ 	iavf_misc_irq_enable(adapter);
+ }
+@@ -4209,8 +4148,8 @@ static int iavf_configure_clsflower(struct iavf_adapter *adapter,
+ 				    struct flow_cls_offload *cls_flower)
+ {
+ 	int tc = tc_classid_to_hwtc(adapter->netdev, cls_flower->classid);
+-	struct iavf_cloud_filter *filter = NULL;
+-	int err = -EINVAL, count = 50;
++	struct iavf_cloud_filter *filter;
++	int err;
+ 
+ 	if (tc < 0) {
+ 		dev_err(&adapter->pdev->dev, "Invalid traffic class\n");
+@@ -4220,17 +4159,10 @@ static int iavf_configure_clsflower(struct iavf_adapter *adapter,
+ 	filter = kzalloc(sizeof(*filter), GFP_KERNEL);
+ 	if (!filter)
+ 		return -ENOMEM;
+-
+-	while (!mutex_trylock(&adapter->crit_lock)) {
+-		if (--count == 0) {
+-			kfree(filter);
+-			return err;
+-		}
+-		udelay(1);
+-	}
+-
+ 	filter->cookie = cls_flower->cookie;
+ 
++	netdev_lock(adapter->netdev);
++
+ 	/* bail out here if filter already exists */
+ 	spin_lock_bh(&adapter->cloud_filter_list_lock);
+ 	if (iavf_find_cf(adapter, &cls_flower->cookie)) {
+@@ -4264,7 +4196,7 @@ static int iavf_configure_clsflower(struct iavf_adapter *adapter,
+ 	if (err)
+ 		kfree(filter);
+ 
+-	mutex_unlock(&adapter->crit_lock);
++	netdev_unlock(adapter->netdev);
+ 	return err;
+ }
+ 
+@@ -4568,28 +4500,13 @@ static int iavf_open(struct net_device *netdev)
+ 		return -EIO;
+ 	}
+ 
+-	while (!mutex_trylock(&adapter->crit_lock)) {
+-		/* If we are in __IAVF_INIT_CONFIG_ADAPTER state the crit_lock
+-		 * is already taken and iavf_open is called from an upper
+-		 * device's notifier reacting on NETDEV_REGISTER event.
+-		 * We have to leave here to avoid dead lock.
+-		 */
+-		if (adapter->state == __IAVF_INIT_CONFIG_ADAPTER)
+-			return -EBUSY;
+-
+-		usleep_range(500, 1000);
+-	}
+-
+-	if (adapter->state != __IAVF_DOWN) {
+-		err = -EBUSY;
+-		goto err_unlock;
+-	}
++	if (adapter->state != __IAVF_DOWN)
++		return -EBUSY;
+ 
+ 	if (adapter->state == __IAVF_RUNNING &&
+ 	    !test_bit(__IAVF_VSI_DOWN, adapter->vsi.state)) {
+ 		dev_dbg(&adapter->pdev->dev, "VF is already open.\n");
+-		err = 0;
+-		goto err_unlock;
++		return 0;
+ 	}
+ 
+ 	/* allocate transmit descriptors */
+@@ -4608,9 +4525,7 @@ static int iavf_open(struct net_device *netdev)
+ 		goto err_req_irq;
+ 
+ 	spin_lock_bh(&adapter->mac_vlan_list_lock);
+-
+ 	iavf_add_filter(adapter, adapter->hw.mac.addr);
+-
+ 	spin_unlock_bh(&adapter->mac_vlan_list_lock);
+ 
+ 	/* Restore filters that were removed with IFF_DOWN */
+@@ -4623,8 +4538,6 @@ static int iavf_open(struct net_device *netdev)
+ 
+ 	iavf_irq_enable(adapter, true);
+ 
+-	mutex_unlock(&adapter->crit_lock);
+-
+ 	return 0;
+ 
+ err_req_irq:
+@@ -4634,8 +4547,6 @@ static int iavf_open(struct net_device *netdev)
+ 	iavf_free_all_rx_resources(adapter);
+ err_setup_tx:
+ 	iavf_free_all_tx_resources(adapter);
+-err_unlock:
+-	mutex_unlock(&adapter->crit_lock);
+ 
+ 	return err;
+ }
+@@ -4659,12 +4570,8 @@ static int iavf_close(struct net_device *netdev)
+ 
+ 	netdev_assert_locked(netdev);
+ 
+-	mutex_lock(&adapter->crit_lock);
+-
+-	if (adapter->state <= __IAVF_DOWN_PENDING) {
+-		mutex_unlock(&adapter->crit_lock);
++	if (adapter->state <= __IAVF_DOWN_PENDING)
+ 		return 0;
+-	}
+ 
+ 	set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
+ 	/* We cannot send IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS before
+@@ -4695,7 +4602,6 @@ static int iavf_close(struct net_device *netdev)
+ 	iavf_change_state(adapter, __IAVF_DOWN_PENDING);
+ 	iavf_free_traffic_irqs(adapter);
+ 
+-	mutex_unlock(&adapter->crit_lock);
+ 	netdev_unlock(netdev);
+ 
+ 	/* We explicitly don't free resources here because the hardware is
+@@ -4714,11 +4620,10 @@ static int iavf_close(struct net_device *netdev)
+ 				    msecs_to_jiffies(500));
+ 	if (!status)
+ 		netdev_warn(netdev, "Device resources not yet released\n");
+-
+ 	netdev_lock(netdev);
+-	mutex_lock(&adapter->crit_lock);
++
+ 	adapter->aq_required |= aq_to_restore;
+-	mutex_unlock(&adapter->crit_lock);
++
+ 	return 0;
+ }
+ 
+@@ -5227,15 +5132,16 @@ iavf_shaper_set(struct net_shaper_binding *binding,
+ 	struct iavf_adapter *adapter = netdev_priv(binding->netdev);
+ 	const struct net_shaper_handle *handle = &shaper->handle;
+ 	struct iavf_ring *tx_ring;
+-	int ret = 0;
++	int ret;
++
++	netdev_assert_locked(adapter->netdev);
+ 
+-	mutex_lock(&adapter->crit_lock);
+ 	if (handle->id >= adapter->num_active_queues)
+-		goto unlock;
++		return 0;
+ 
+ 	ret = iavf_verify_shaper(binding, shaper, extack);
+ 	if (ret)
+-		goto unlock;
++		return ret;
+ 
+ 	tx_ring = &adapter->tx_rings[handle->id];
+ 
+@@ -5245,9 +5151,7 @@ iavf_shaper_set(struct net_shaper_binding *binding,
+ 
+ 	adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW;
+ 
+-unlock:
+-	mutex_unlock(&adapter->crit_lock);
+-	return ret;
++	return 0;
+ }
+ 
+ static int iavf_shaper_del(struct net_shaper_binding *binding,
+@@ -5257,9 +5161,10 @@ static int iavf_shaper_del(struct net_shaper_binding *binding,
+ 	struct iavf_adapter *adapter = netdev_priv(binding->netdev);
+ 	struct iavf_ring *tx_ring;
+ 
+-	mutex_lock(&adapter->crit_lock);
++	netdev_assert_locked(adapter->netdev);
++
+ 	if (handle->id >= adapter->num_active_queues)
+-		goto unlock;
++		return 0;
+ 
+ 	tx_ring = &adapter->tx_rings[handle->id];
+ 	tx_ring->q_shaper.bw_min = 0;
+@@ -5268,8 +5173,6 @@ static int iavf_shaper_del(struct net_shaper_binding *binding,
+ 
+ 	adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW;
+ 
+-unlock:
+-	mutex_unlock(&adapter->crit_lock);
+ 	return 0;
+ }
+ 
+@@ -5530,10 +5433,6 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		goto err_alloc_qos_cap;
+ 	}
+ 
+-	/* set up the locks for the AQ, do this only once in probe
+-	 * and destroy them only once in remove
+-	 */
+-	mutex_init(&adapter->crit_lock);
+ 	mutex_init(&hw->aq.asq_mutex);
+ 	mutex_init(&hw->aq.arq_mutex);
+ 
+@@ -5596,22 +5495,24 @@ static int iavf_suspend(struct device *dev_d)
+ {
+ 	struct net_device *netdev = dev_get_drvdata(dev_d);
+ 	struct iavf_adapter *adapter = netdev_priv(netdev);
++	bool running;
+ 
+ 	netif_device_detach(netdev);
+ 
++	running = netif_running(netdev);
++	if (running)
++		rtnl_lock();
+ 	netdev_lock(netdev);
+-	mutex_lock(&adapter->crit_lock);
+ 
+-	if (netif_running(netdev)) {
+-		rtnl_lock();
++	if (running)
+ 		iavf_down(adapter);
+-		rtnl_unlock();
+-	}
++
+ 	iavf_free_misc_irq(adapter);
+ 	iavf_reset_interrupt_capability(adapter);
+ 
+-	mutex_unlock(&adapter->crit_lock);
+ 	netdev_unlock(netdev);
++	if (running)
++		rtnl_unlock();
+ 
+ 	return 0;
+ }
+@@ -5688,20 +5589,20 @@ static void iavf_remove(struct pci_dev *pdev)
+ 	 * There are flows where register/unregister netdev may race.
+ 	 */
+ 	while (1) {
+-		mutex_lock(&adapter->crit_lock);
++		netdev_lock(netdev);
+ 		if (adapter->state == __IAVF_RUNNING ||
+ 		    adapter->state == __IAVF_DOWN ||
+ 		    adapter->state == __IAVF_INIT_FAILED) {
+-			mutex_unlock(&adapter->crit_lock);
++			netdev_unlock(netdev);
+ 			break;
+ 		}
+ 		/* Simply return if we already went through iavf_shutdown */
+ 		if (adapter->state == __IAVF_REMOVE) {
+-			mutex_unlock(&adapter->crit_lock);
++			netdev_unlock(netdev);
+ 			return;
+ 		}
+ 
+-		mutex_unlock(&adapter->crit_lock);
++		netdev_unlock(netdev);
+ 		usleep_range(500, 1000);
+ 	}
+ 	cancel_delayed_work_sync(&adapter->watchdog_task);
+@@ -5711,7 +5612,6 @@ static void iavf_remove(struct pci_dev *pdev)
+ 		unregister_netdev(netdev);
+ 
+ 	netdev_lock(netdev);
+-	mutex_lock(&adapter->crit_lock);
+ 	dev_info(&adapter->pdev->dev, "Removing device\n");
+ 	iavf_change_state(adapter, __IAVF_REMOVE);
+ 
+@@ -5727,9 +5627,11 @@ static void iavf_remove(struct pci_dev *pdev)
+ 
+ 	iavf_misc_irq_disable(adapter);
+ 	/* Shut down all the garbage mashers on the detention level */
++	netdev_unlock(netdev);
+ 	cancel_work_sync(&adapter->reset_task);
+ 	cancel_delayed_work_sync(&adapter->watchdog_task);
+ 	cancel_work_sync(&adapter->adminq_task);
++	netdev_lock(netdev);
+ 
+ 	adapter->aq_required = 0;
+ 	adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
+@@ -5747,8 +5649,6 @@ static void iavf_remove(struct pci_dev *pdev)
+ 	/* destroy the locks only once, here */
+ 	mutex_destroy(&hw->aq.arq_mutex);
+ 	mutex_destroy(&hw->aq.asq_mutex);
+-	mutex_unlock(&adapter->crit_lock);
+-	mutex_destroy(&adapter->crit_lock);
+ 	netdev_unlock(netdev);
+ 
+ 	iounmap(hw->hw_addr);
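The large iavf_main.c rework replaces the driver-private crit_lock with the netdev lock and, in the watchdog, splits the locked work into iavf_watchdog_step(), which only computes the next requeue delay (or a no-reschedule sentinel); locking and requeueing then live in one small wrapper. A stand-alone sketch of that step-function shape (states and delays are illustrative, not the driver's):

	#include <stdio.h>

	enum state { STARTUP, RUNNING, REMOVE };

	static const int NO_RESCHED = -1;

	/* One pass of the state machine; returns msecs until the next pass. */
	static int watchdog_step(enum state *s)
	{
		switch (*s) {
		case STARTUP:
			*s = RUNNING;
			return 30;         /* poll quickly while initializing */
		case RUNNING:
			return 2000;       /* steady-state cadence */
		case REMOVE:
		default:
			return NO_RESCHED; /* caller must not requeue */
		}
	}

	int main(void)
	{
		enum state s = STARTUP;
		int delay;

		while ((delay = watchdog_step(&s)) != NO_RESCHED) {
			printf("requeue in %d ms\n", delay);
			if (s == RUNNING)
				s = REMOVE; /* end the demo after one full pass */
		}
		return 0;
	}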
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+index a6f0e5990be250..07f0d0a0f1e28a 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+@@ -79,6 +79,23 @@ iavf_poll_virtchnl_msg(struct iavf_hw *hw, struct iavf_arq_event_info *event,
+ 			return iavf_status_to_errno(status);
+ 		received_op =
+ 		    (enum virtchnl_ops)le32_to_cpu(event->desc.cookie_high);
++
++		if (received_op == VIRTCHNL_OP_EVENT) {
++			struct iavf_adapter *adapter = hw->back;
++			struct virtchnl_pf_event *vpe =
++				(struct virtchnl_pf_event *)event->msg_buf;
++
++			if (vpe->event != VIRTCHNL_EVENT_RESET_IMPENDING)
++				continue;
++
++			dev_info(&adapter->pdev->dev, "Reset indication received from the PF\n");
++			if (!(adapter->flags & IAVF_FLAG_RESET_PENDING))
++				iavf_schedule_reset(adapter,
++						    IAVF_FLAG_RESET_PENDING);
++
++			return -EIO;
++		}
++
+ 		if (op_to_poll == received_op)
+ 			break;
+ 	}
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index d390157b59fe18..82d472f1d781a7 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2740,6 +2740,27 @@ void ice_map_xdp_rings(struct ice_vsi *vsi)
+ 	}
+ }
+ 
++/**
++ * ice_unmap_xdp_rings - Unmap XDP rings from interrupt vectors
++ * @vsi: the VSI with XDP rings being unmapped
++ */
++static void ice_unmap_xdp_rings(struct ice_vsi *vsi)
++{
++	int v_idx;
++
++	ice_for_each_q_vector(vsi, v_idx) {
++		struct ice_q_vector *q_vector = vsi->q_vectors[v_idx];
++		struct ice_tx_ring *ring;
++
++		ice_for_each_tx_ring(ring, q_vector->tx)
++			if (!ring->tx_buf || !ice_ring_is_xdp(ring))
++				break;
++
++		/* restore the value of last node prior to XDP setup */
++		q_vector->tx.tx_ring = ring;
++	}
++}
++
+ /**
+  * ice_prepare_xdp_rings - Allocate, configure and setup Tx rings for XDP
+  * @vsi: VSI to bring up Tx rings used by XDP
+@@ -2803,7 +2824,7 @@ int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog,
+ 	if (status) {
+ 		dev_err(dev, "Failed VSI LAN queue config for XDP, error: %d\n",
+ 			status);
+-		goto clear_xdp_rings;
++		goto unmap_xdp_rings;
+ 	}
+ 
+ 	/* assign the prog only when it's not already present on VSI;
+@@ -2819,6 +2840,8 @@ int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog,
+ 		ice_vsi_assign_bpf_prog(vsi, prog);
+ 
+ 	return 0;
++unmap_xdp_rings:
++	ice_unmap_xdp_rings(vsi);
+ clear_xdp_rings:
+ 	ice_for_each_xdp_txq(vsi, i)
+ 		if (vsi->xdp_rings[i]) {
+@@ -2835,6 +2858,8 @@ int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog,
+ 	mutex_unlock(&pf->avail_q_mutex);
+ 
+ 	devm_kfree(dev, vsi->xdp_rings);
++	vsi->xdp_rings = NULL;
++
+ 	return -ENOMEM;
+ }
+ 
+@@ -2850,7 +2875,7 @@ int ice_destroy_xdp_rings(struct ice_vsi *vsi, enum ice_xdp_cfg cfg_type)
+ {
+ 	u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
+ 	struct ice_pf *pf = vsi->back;
+-	int i, v_idx;
++	int i;
+ 
+ 	/* q_vectors are freed in reset path so there's no point in detaching
+ 	 * rings
+@@ -2858,17 +2883,7 @@ int ice_destroy_xdp_rings(struct ice_vsi *vsi, enum ice_xdp_cfg cfg_type)
+ 	if (cfg_type == ICE_XDP_CFG_PART)
+ 		goto free_qmap;
+ 
+-	ice_for_each_q_vector(vsi, v_idx) {
+-		struct ice_q_vector *q_vector = vsi->q_vectors[v_idx];
+-		struct ice_tx_ring *ring;
+-
+-		ice_for_each_tx_ring(ring, q_vector->tx)
+-			if (!ring->tx_buf || !ice_ring_is_xdp(ring))
+-				break;
+-
+-		/* restore the value of last node prior to XDP setup */
+-		q_vector->tx.tx_ring = ring;
+-	}
++	ice_unmap_xdp_rings(vsi);
+ 
+ free_qmap:
+ 	mutex_lock(&pf->avail_q_mutex);
+@@ -3013,11 +3028,14 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
+ 		xdp_ring_err = ice_vsi_determine_xdp_res(vsi);
+ 		if (xdp_ring_err) {
+ 			NL_SET_ERR_MSG_MOD(extack, "Not enough Tx resources for XDP");
++			goto resume_if;
+ 		} else {
+ 			xdp_ring_err = ice_prepare_xdp_rings(vsi, prog,
+ 							     ICE_XDP_CFG_FULL);
+-			if (xdp_ring_err)
++			if (xdp_ring_err) {
+ 				NL_SET_ERR_MSG_MOD(extack, "Setting up XDP Tx resources failed");
++				goto resume_if;
++			}
+ 		}
+ 		xdp_features_set_redirect_target(vsi->netdev, true);
+ 		/* reallocate Rx queues that are used for zero-copy */
+@@ -3035,6 +3053,7 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
+ 			NL_SET_ERR_MSG_MOD(extack, "Freeing XDP Rx resources failed");
+ 	}
+ 
++resume_if:
+ 	if (if_running)
+ 		ret = ice_up(vsi);
+ 
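The ice_main.c changes are classic error-unwind hygiene: the failure path gains an unmap step so teardown mirrors setup, and vsi->xdp_rings is set to NULL after being freed. A generic goto-ladder sketch of that mirrored unwind, with invented stage names and a deliberately failing third stage:

	#include <errno.h>
	#include <stdlib.h>

	static int map_rings(void)     { return 0; }    /* stage 2 */
	static int config_queues(void) { return -EIO; } /* stage 3, fails here */
	static void unmap_rings_hw(void) { }

	static int setup_all(void **rings)
	{
		int err;

		*rings = malloc(64); /* stage 1 */
		if (!*rings)
			return -ENOMEM;

		err = map_rings();
		if (err)
			goto free_rings;

		err = config_queues();
		if (err)
			goto unmap_rings; /* undo stage 2 before stage 1 */

		return 0;

	unmap_rings:
		unmap_rings_hw();
	free_rings:
		free(*rings);
		*rings = NULL; /* avoid a dangling pointer, as the patch does */
		return err;
	}

	int main(void)
	{
		void *rings;

		return setup_all(&rings) ? 1 : 0;
	}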
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
+index 1fd1ae03eb9096..11ed48a62b5360 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
+@@ -2307,6 +2307,7 @@ static int ice_capture_crosststamp(ktime_t *device,
+ 	ts = ((u64)ts_hi << 32) | ts_lo;
+ 	system->cycles = ts;
+ 	system->cs_id = CSID_X86_ART;
++	system->use_nsecs = true;
+ 
+ 	/* Read Device source clock time */
+ 	ts_lo = rd32(hw, cfg->dev_time_l[tmr_idx]);
+diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
+index 6ca13c5dcb14e7..d9d09296d1d481 100644
+--- a/drivers/net/ethernet/intel/ice/ice_sched.c
++++ b/drivers/net/ethernet/intel/ice/ice_sched.c
+@@ -84,6 +84,27 @@ ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid)
+ 	return NULL;
+ }
+ 
++/**
++ * ice_sched_find_next_vsi_node - find the next node for a given VSI
++ * @vsi_node: VSI support node to start search with
++ *
++ * Return: Next VSI support node, or NULL.
++ *
++ * The function returns a pointer to the next node from the VSI layer
++ * assigned to the given VSI, or NULL if there is no such node.
++ */
++static struct ice_sched_node *
++ice_sched_find_next_vsi_node(struct ice_sched_node *vsi_node)
++{
++	unsigned int vsi_handle = vsi_node->vsi_handle;
++
++	while ((vsi_node = vsi_node->sibling) != NULL)
++		if (vsi_node->vsi_handle == vsi_handle)
++			break;
++
++	return vsi_node;
++}
++
+ /**
+  * ice_aqc_send_sched_elem_cmd - send scheduling elements cmd
+  * @hw: pointer to the HW struct
+@@ -1084,8 +1105,10 @@ ice_sched_add_nodes_to_layer(struct ice_port_info *pi,
+ 		if (parent->num_children < max_child_nodes) {
+ 			new_num_nodes = max_child_nodes - parent->num_children;
+ 		} else {
+-			/* This parent is full, try the next sibling */
+-			parent = parent->sibling;
++			/* This parent is full,
++			 * try the next available sibling.
++			 */
++			parent = ice_sched_find_next_vsi_node(parent);
+ 			/* Don't modify the first node TEID memory if the
+ 			 * first node was added already in the above call.
+ 			 * Instead send some temp memory for all other
+@@ -1528,12 +1551,23 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ 	/* get the first queue group node from VSI sub-tree */
+ 	qgrp_node = ice_sched_get_first_node(pi, vsi_node, qgrp_layer);
+ 	while (qgrp_node) {
++		struct ice_sched_node *next_vsi_node;
++
+ 		/* make sure the qgroup node is part of the VSI subtree */
+ 		if (ice_sched_find_node_in_subtree(pi->hw, vsi_node, qgrp_node))
+ 			if (qgrp_node->num_children < max_children &&
+ 			    qgrp_node->owner == owner)
+ 				break;
+ 		qgrp_node = qgrp_node->sibling;
++		if (qgrp_node)
++			continue;
++
++		next_vsi_node = ice_sched_find_next_vsi_node(vsi_node);
++		if (!next_vsi_node)
++			break;
++
++		vsi_node = next_vsi_node;
++		qgrp_node = ice_sched_get_first_node(pi, vsi_node, qgrp_layer);
+ 	}
+ 
+ 	/* Select the best queue group */
+@@ -1604,16 +1638,16 @@ ice_sched_get_agg_node(struct ice_port_info *pi, struct ice_sched_node *tc_node,
+ /**
+  * ice_sched_calc_vsi_child_nodes - calculate number of VSI child nodes
+  * @hw: pointer to the HW struct
+- * @num_qs: number of queues
++ * @num_new_qs: number of new queues that will be added to the tree
+  * @num_nodes: num nodes array
+  *
+  * This function calculates the number of VSI child nodes based on the
+  * number of queues.
+  */
+ static void
+-ice_sched_calc_vsi_child_nodes(struct ice_hw *hw, u16 num_qs, u16 *num_nodes)
++ice_sched_calc_vsi_child_nodes(struct ice_hw *hw, u16 num_new_qs, u16 *num_nodes)
+ {
+-	u16 num = num_qs;
++	u16 num = num_new_qs;
+ 	u8 i, qgl, vsil;
+ 
+ 	qgl = ice_sched_get_qgrp_layer(hw);
+@@ -1779,7 +1813,11 @@ ice_sched_add_vsi_support_nodes(struct ice_port_info *pi, u16 vsi_handle,
+ 		if (!parent)
+ 			return -EIO;
+ 
+-		if (i == vsil)
++		/* Do not modify the VSI handle for already existing VSI nodes,
++		 * (if no new VSI node was added to the tree).
++		 * Assign the VSI handle only to newly added VSI nodes.
++		 */
++		if (i == vsil && num_added)
+ 			parent->vsi_handle = vsi_handle;
+ 	}
+ 
+@@ -1812,6 +1850,41 @@ ice_sched_add_vsi_to_topo(struct ice_port_info *pi, u16 vsi_handle, u8 tc)
+ 					       num_nodes);
+ }
+ 
++/**
++ * ice_sched_recalc_vsi_support_nodes - recalculate VSI support nodes count
++ * @hw: pointer to the HW struct
++ * @vsi_node: pointer to the leftmost VSI node that needs to be extended
++ * @new_numqs: new number of queues that has to be handled by the VSI
++ * @new_num_nodes: pointer to nodes count table to modify the VSI layer entry
++ *
++ * This function recalculates the number of VSI support nodes that need to
++ * be added after adding more Tx queues for a given VSI.
++ * The number of new VSI support nodes that shall be added will be saved
++ * to the @new_num_nodes table for the VSI layer.
++ */
++static void
++ice_sched_recalc_vsi_support_nodes(struct ice_hw *hw,
++				   struct ice_sched_node *vsi_node,
++				   unsigned int new_numqs, u16 *new_num_nodes)
++{
++	u32 vsi_nodes_cnt = 1;
++	u32 max_queue_cnt = 1;
++	u32 qgl, vsil;
++
++	qgl = ice_sched_get_qgrp_layer(hw);
++	vsil = ice_sched_get_vsi_layer(hw);
++
++	for (u32 i = vsil; i <= qgl; i++)
++		max_queue_cnt *= hw->max_children[i];
++
++	while ((vsi_node = ice_sched_find_next_vsi_node(vsi_node)) != NULL)
++		vsi_nodes_cnt++;
++
++	if (new_numqs > (max_queue_cnt * vsi_nodes_cnt))
++		new_num_nodes[vsil] = DIV_ROUND_UP(new_numqs, max_queue_cnt) -
++				      vsi_nodes_cnt;
++}
++
+ /**
+  * ice_sched_update_vsi_child_nodes - update VSI child nodes
+  * @pi: port information structure
+@@ -1863,15 +1936,25 @@ ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
+ 			return status;
+ 	}
+ 
+-	if (new_numqs)
+-		ice_sched_calc_vsi_child_nodes(hw, new_numqs, new_num_nodes);
+-	/* Keep the max number of queue configuration all the time. Update the
+-	 * tree only if number of queues > previous number of queues. This may
++	ice_sched_recalc_vsi_support_nodes(hw, vsi_node,
++					   new_numqs, new_num_nodes);
++	ice_sched_calc_vsi_child_nodes(hw, new_numqs - prev_numqs,
++				       new_num_nodes);
++
++	/* Never decrease the number of queues in the tree. Update the tree
++	 * only if number of queues > previous number of queues. This may
+ 	 * leave some extra nodes in the tree if number of queues < previous
+ 	 * number but that wouldn't harm anything. Removing those extra nodes
+ 	 * may complicate the code if those nodes are part of SRL or
+ 	 * individually rate limited.
++	 * Also, add the required VSI support nodes if the existing ones cannot
++	 * handle the requested new number of queues.
+ 	 */
++	status = ice_sched_add_vsi_support_nodes(pi, vsi_handle, tc_node,
++						 new_num_nodes);
++	if (status)
++		return status;
++
+ 	status = ice_sched_add_vsi_child_nodes(pi, vsi_handle, tc_node,
+ 					       new_num_nodes, owner);
+ 	if (status)
+@@ -2012,6 +2095,58 @@ static bool ice_sched_is_leaf_node_present(struct ice_sched_node *node)
+ 	return (node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF);
+ }
+ 
++/**
++ * ice_sched_rm_vsi_subtree - remove all nodes assigned to a given VSI
++ * @pi: port information structure
++ * @vsi_node: pointer to the leftmost node of the VSI to be removed
++ * @owner: LAN or RDMA
++ * @tc: TC number
++ *
++ * Return: Zero in case of success, or -EBUSY if the VSI has leaf nodes in TC.
++ *
++ * This function removes all the VSI support nodes associated with a given VSI
++ * and its LAN or RDMA children nodes from the scheduler tree.
++ */
++static int
++ice_sched_rm_vsi_subtree(struct ice_port_info *pi,
++			 struct ice_sched_node *vsi_node, u8 owner, u8 tc)
++{
++	u16 vsi_handle = vsi_node->vsi_handle;
++	bool all_vsi_nodes_removed = true;
++	int j = 0;
++
++	while (vsi_node) {
++		struct ice_sched_node *next_vsi_node;
++
++		if (ice_sched_is_leaf_node_present(vsi_node)) {
++			ice_debug(pi->hw, ICE_DBG_SCHED, "VSI has leaf nodes in TC %d\n", tc);
++			return -EBUSY;
++		}
++		while (j < vsi_node->num_children) {
++			if (vsi_node->children[j]->owner == owner)
++				ice_free_sched_node(pi, vsi_node->children[j]);
++			else
++				j++;
++		}
++
++		next_vsi_node = ice_sched_find_next_vsi_node(vsi_node);
++
++		/* remove the VSI if it has no children */
++		if (!vsi_node->num_children)
++			ice_free_sched_node(pi, vsi_node);
++		else
++			all_vsi_nodes_removed = false;
++
++		vsi_node = next_vsi_node;
++	}
++
++	/* clean up aggregator related VSI info if any */
++	if (all_vsi_nodes_removed)
++		ice_sched_rm_agg_vsi_info(pi, vsi_handle);
++
++	return 0;
++}
++
+ /**
+  * ice_sched_rm_vsi_cfg - remove the VSI and its children nodes
+  * @pi: port information structure
+@@ -2038,7 +2173,6 @@ ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
+ 
+ 	ice_for_each_traffic_class(i) {
+ 		struct ice_sched_node *vsi_node, *tc_node;
+-		u8 j = 0;
+ 
+ 		tc_node = ice_sched_get_tc_node(pi, i);
+ 		if (!tc_node)
+@@ -2048,31 +2182,12 @@ ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
+ 		if (!vsi_node)
+ 			continue;
+ 
+-		if (ice_sched_is_leaf_node_present(vsi_node)) {
+-			ice_debug(pi->hw, ICE_DBG_SCHED, "VSI has leaf nodes in TC %d\n", i);
+-			status = -EBUSY;
++		status = ice_sched_rm_vsi_subtree(pi, vsi_node, owner, i);
++		if (status)
+ 			goto exit_sched_rm_vsi_cfg;
+-		}
+-		while (j < vsi_node->num_children) {
+-			if (vsi_node->children[j]->owner == owner) {
+-				ice_free_sched_node(pi, vsi_node->children[j]);
+ 
+-				/* reset the counter again since the num
+-				 * children will be updated after node removal
+-				 */
+-				j = 0;
+-			} else {
+-				j++;
+-			}
+-		}
+-		/* remove the VSI if it has no children */
+-		if (!vsi_node->num_children) {
+-			ice_free_sched_node(pi, vsi_node);
+-			vsi_ctx->sched.vsi_node[i] = NULL;
++		vsi_ctx->sched.vsi_node[i] = NULL;
+ 
+-			/* clean up aggregator related VSI info if any */
+-			ice_sched_rm_agg_vsi_info(pi, vsi_handle);
+-		}
+ 		if (owner == ICE_SCHED_NODE_OWNER_LAN)
+ 			vsi_ctx->sched.max_lanq[i] = 0;
+ 		else
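Several of the ice_sched.c hunks above hinge on the new ice_sched_find_next_vsi_node() helper, which walks the sibling list for the next support node carrying the same VSI handle. The walk itself is simple enough to show in isolation (struct fields invented for the sketch):

	#include <stddef.h>

	struct node {
		struct node *sibling;
		unsigned int vsi_handle;
	};

	/* Next sibling owned by the same VSI, or NULL if none remains. */
	static struct node *find_next_vsi_node(struct node *n)
	{
		unsigned int handle = n->vsi_handle;

		while ((n = n->sibling) != NULL)
			if (n->vsi_handle == handle)
				break;

		return n;
	}

	int main(void)
	{
		struct node c = { NULL, 7 }, b = { &c, 3 }, a = { &b, 7 };

		/* skips b (handle 3) and lands on c (handle 7) */
		return find_next_vsi_node(&a) == &c ? 0 : 1;
	}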
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+index 3a033ce19cda23..2ed801398971cc 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+@@ -1816,11 +1816,19 @@ void idpf_vc_event_task(struct work_struct *work)
+ 	if (test_bit(IDPF_REMOVE_IN_PROG, adapter->flags))
+ 		return;
+ 
+-	if (test_bit(IDPF_HR_FUNC_RESET, adapter->flags) ||
+-	    test_bit(IDPF_HR_DRV_LOAD, adapter->flags)) {
+-		set_bit(IDPF_HR_RESET_IN_PROG, adapter->flags);
+-		idpf_init_hard_reset(adapter);
+-	}
++	if (test_bit(IDPF_HR_FUNC_RESET, adapter->flags))
++		goto func_reset;
++
++	if (test_bit(IDPF_HR_DRV_LOAD, adapter->flags))
++		goto drv_load;
++
++	return;
++
++func_reset:
++	idpf_vc_xn_shutdown(adapter->vcxn_mngr);
++drv_load:
++	set_bit(IDPF_HR_RESET_IN_PROG, adapter->flags);
++	idpf_init_hard_reset(adapter);
+ }
+ 
+ /**
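The idpf_lib.c hunk rewrites the nested tests as labels that fall through: a function reset first shuts down the virtchnl transaction manager, then shares the hard-reset tail with the driver-load path. A small sketch of that fallthrough-label style (names illustrative):

	#include <stdbool.h>
	#include <stdio.h>

	static void handle_event(bool func_reset, bool drv_load)
	{
		if (func_reset)
			goto do_func_reset;
		if (drv_load)
			goto do_drv_load;
		return;

	do_func_reset:
		printf("shutdown virtchnl transactions\n"); /* extra step */
		/* fall through: both paths share the hard-reset tail */
	do_drv_load:
		printf("hard reset\n");
	}

	int main(void)
	{
		handle_event(true, false);  /* prints both lines */
		handle_event(false, true);  /* hard reset only */
		return 0;
	}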
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+index eae1b6f474e624..6ade54e213259c 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+@@ -362,17 +362,18 @@ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
+ {
+ 	struct idpf_tx_offload_params offload = { };
+ 	struct idpf_tx_buf *first;
++	int csum, tso, needed;
+ 	unsigned int count;
+ 	__be16 protocol;
+-	int csum, tso;
+ 
+ 	count = idpf_tx_desc_count_required(tx_q, skb);
+ 	if (unlikely(!count))
+ 		return idpf_tx_drop_skb(tx_q, skb);
+ 
+-	if (idpf_tx_maybe_stop_common(tx_q,
+-				      count + IDPF_TX_DESCS_PER_CACHE_LINE +
+-				      IDPF_TX_DESCS_FOR_CTX)) {
++	needed = count + IDPF_TX_DESCS_PER_CACHE_LINE + IDPF_TX_DESCS_FOR_CTX;
++	if (!netif_subqueue_maybe_stop(tx_q->netdev, tx_q->idx,
++				       IDPF_DESC_UNUSED(tx_q),
++				       needed, needed)) {
+ 		idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
+ 
+ 		u64_stats_update_begin(&tx_q->stats_sync);
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+index 2d5f5c9f91ce1e..aa16e4c1edbb8b 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+@@ -2132,6 +2132,19 @@ void idpf_tx_splitq_build_flow_desc(union idpf_tx_flex_desc *desc,
+ 	desc->flow.qw1.compl_tag = cpu_to_le16(params->compl_tag);
+ }
+ 
++/* Global conditions to tell whether the txq (and related resources)
++ * has room to allow the use of "size" descriptors.
++ */
++static int idpf_txq_has_room(struct idpf_tx_queue *tx_q, u32 size)
++{
++	if (IDPF_DESC_UNUSED(tx_q) < size ||
++	    IDPF_TX_COMPLQ_PENDING(tx_q->txq_grp) >
++		IDPF_TX_COMPLQ_OVERFLOW_THRESH(tx_q->txq_grp->complq) ||
++	    IDPF_TX_BUF_RSV_LOW(tx_q))
++		return 0;
++	return 1;
++}
++
+ /**
+  * idpf_tx_maybe_stop_splitq - 1st level check for Tx splitq stop conditions
+  * @tx_q: the queue to be checked
+@@ -2142,29 +2155,11 @@ void idpf_tx_splitq_build_flow_desc(union idpf_tx_flex_desc *desc,
+ static int idpf_tx_maybe_stop_splitq(struct idpf_tx_queue *tx_q,
+ 				     unsigned int descs_needed)
+ {
+-	if (idpf_tx_maybe_stop_common(tx_q, descs_needed))
+-		goto out;
+-
+-	/* If there are too many outstanding completions expected on the
+-	 * completion queue, stop the TX queue to give the device some time to
+-	 * catch up
+-	 */
+-	if (unlikely(IDPF_TX_COMPLQ_PENDING(tx_q->txq_grp) >
+-		     IDPF_TX_COMPLQ_OVERFLOW_THRESH(tx_q->txq_grp->complq)))
+-		goto splitq_stop;
+-
+-	/* Also check for available book keeping buffers; if we are low, stop
+-	 * the queue to wait for more completions
+-	 */
+-	if (unlikely(IDPF_TX_BUF_RSV_LOW(tx_q)))
+-		goto splitq_stop;
+-
+-	return 0;
+-
+-splitq_stop:
+-	netif_stop_subqueue(tx_q->netdev, tx_q->idx);
++	if (netif_subqueue_maybe_stop(tx_q->netdev, tx_q->idx,
++				      idpf_txq_has_room(tx_q, descs_needed),
++				      1, 1))
++		return 0;
+ 
+-out:
+ 	u64_stats_update_begin(&tx_q->stats_sync);
+ 	u64_stats_inc(&tx_q->q_stats.q_busy);
+ 	u64_stats_update_end(&tx_q->stats_sync);
+@@ -2190,12 +2185,6 @@ void idpf_tx_buf_hw_update(struct idpf_tx_queue *tx_q, u32 val,
+ 	nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
+ 	tx_q->next_to_use = val;
+ 
+-	if (idpf_tx_maybe_stop_common(tx_q, IDPF_TX_DESC_NEEDED)) {
+-		u64_stats_update_begin(&tx_q->stats_sync);
+-		u64_stats_inc(&tx_q->q_stats.q_busy);
+-		u64_stats_update_end(&tx_q->stats_sync);
+-	}
+-
+ 	/* Force memory writes to complete before letting h/w
+ 	 * know there are new descriptors to fetch.  (Only
+ 	 * applicable for weak-ordered memory model archs,
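The idpf hunks fold three independent stop conditions into one idpf_txq_has_room() predicate and delegate the stop/wake bookkeeping to netif_subqueue_maybe_stop(). The underlying idiom is stop-then-recheck: stop the queue when room runs out, then test once more so a completion racing with the stop can win. A simplified single-threaded sketch (the kernel helper adds the SMP memory barriers this omits):

	#include <stdbool.h>
	#include <stdio.h>

	static int descs_free = 4;
	static bool queue_stopped;

	/* One predicate for all the "room" conditions. */
	static bool txq_has_room(int needed)
	{
		return descs_free >= needed;
	}

	/* Returns true if transmission may proceed. */
	static bool maybe_stop(int needed)
	{
		if (txq_has_room(needed))
			return true;

		queue_stopped = true;
		/* recheck: in the real driver a racing completion may have
		 * freed space between the first test and the stop
		 */
		if (txq_has_room(needed)) {
			queue_stopped = false;
			return true;
		}
		return false;
	}

	int main(void)
	{
		printf("room for 3: %d\n", maybe_stop(3)); /* 1 */
		printf("room for 9: %d\n", maybe_stop(9)); /* 0 */
		printf("queue stopped: %d\n", queue_stopped);
		return 0;
	}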
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+index b029f566e57cd6..c192a6c547dd32 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+@@ -1037,12 +1037,4 @@ bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_rx_queue *rxq,
+ 				      u16 cleaned_count);
+ int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off);
+ 
+-static inline bool idpf_tx_maybe_stop_common(struct idpf_tx_queue *tx_q,
+-					     u32 needed)
+-{
+-	return !netif_subqueue_maybe_stop(tx_q->netdev, tx_q->idx,
+-					  IDPF_DESC_UNUSED(tx_q),
+-					  needed, needed);
+-}
+-
+ #endif /* !_IDPF_TXRX_H_ */
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+index 3d2413b8684fca..5d2ca007f6828e 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+@@ -376,7 +376,7 @@ static void idpf_vc_xn_init(struct idpf_vc_xn_manager *vcxn_mngr)
+  * All waiting threads will be woken-up and their transaction aborted. Further
+  * operations on that object will fail.
+  */
+-static void idpf_vc_xn_shutdown(struct idpf_vc_xn_manager *vcxn_mngr)
++void idpf_vc_xn_shutdown(struct idpf_vc_xn_manager *vcxn_mngr)
+ {
+ 	int i;
+ 
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
+index 83da5d8da56bf2..23271cf0a21605 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
++++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
+@@ -66,5 +66,6 @@ int idpf_send_get_stats_msg(struct idpf_vport *vport);
+ int idpf_send_set_sriov_vfs_msg(struct idpf_adapter *adapter, u16 num_vfs);
+ int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, bool get);
+ int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, bool get);
++void idpf_vc_xn_shutdown(struct idpf_vc_xn_manager *vcxn_mngr);
+ 
+ #endif /* _IDPF_VIRTCHNL_H_ */
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+index 07ea1954a276ed..796e90d741f022 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+@@ -9,7 +9,7 @@
+ #define IXGBE_IPSEC_KEY_BITS  160
+ static const char aes_gcm_name[] = "rfc4106(gcm(aes))";
+ 
+-static void ixgbe_ipsec_del_sa(struct xfrm_state *xs);
++static void ixgbe_ipsec_del_sa(struct net_device *dev, struct xfrm_state *xs);
+ 
+ /**
+  * ixgbe_ipsec_set_tx_sa - set the Tx SA registers
+@@ -321,7 +321,7 @@ void ixgbe_ipsec_restore(struct ixgbe_adapter *adapter)
+ 
+ 		if (r->used) {
+ 			if (r->mode & IXGBE_RXTXMOD_VF)
+-				ixgbe_ipsec_del_sa(r->xs);
++				ixgbe_ipsec_del_sa(adapter->netdev, r->xs);
+ 			else
+ 				ixgbe_ipsec_set_rx_sa(hw, i, r->xs->id.spi,
+ 						      r->key, r->salt,
+@@ -330,7 +330,7 @@ void ixgbe_ipsec_restore(struct ixgbe_adapter *adapter)
+ 
+ 		if (t->used) {
+ 			if (t->mode & IXGBE_RXTXMOD_VF)
+-				ixgbe_ipsec_del_sa(t->xs);
++				ixgbe_ipsec_del_sa(adapter->netdev, t->xs);
+ 			else
+ 				ixgbe_ipsec_set_tx_sa(hw, i, t->key, t->salt);
+ 		}
+@@ -417,6 +417,7 @@ static struct xfrm_state *ixgbe_ipsec_find_rx_state(struct ixgbe_ipsec *ipsec,
+ 
+ /**
+  * ixgbe_ipsec_parse_proto_keys - find the key and salt based on the protocol
++ * @dev: pointer to net device
+  * @xs: pointer to xfrm_state struct
+  * @mykey: pointer to key array to populate
+  * @mysalt: pointer to salt value to populate
+@@ -424,10 +425,10 @@ static struct xfrm_state *ixgbe_ipsec_find_rx_state(struct ixgbe_ipsec *ipsec,
+  * This copies the protocol keys and salt to our own data tables.  The
+  * 82599 family only supports the one algorithm.
+  **/
+-static int ixgbe_ipsec_parse_proto_keys(struct xfrm_state *xs,
++static int ixgbe_ipsec_parse_proto_keys(struct net_device *dev,
++					struct xfrm_state *xs,
+ 					u32 *mykey, u32 *mysalt)
+ {
+-	struct net_device *dev = xs->xso.real_dev;
+ 	unsigned char *key_data;
+ 	char *alg_name = NULL;
+ 	int key_len;
+@@ -473,11 +474,12 @@ static int ixgbe_ipsec_parse_proto_keys(struct xfrm_state *xs,
+ 
+ /**
+  * ixgbe_ipsec_check_mgmt_ip - make sure there is no clash with mgmt IP filters
++ * @dev: pointer to net device
+  * @xs: pointer to transformer state struct
+  **/
+-static int ixgbe_ipsec_check_mgmt_ip(struct xfrm_state *xs)
++static int ixgbe_ipsec_check_mgmt_ip(struct net_device *dev,
++				     struct xfrm_state *xs)
+ {
+-	struct net_device *dev = xs->xso.real_dev;
+ 	struct ixgbe_adapter *adapter = netdev_priv(dev);
+ 	struct ixgbe_hw *hw = &adapter->hw;
+ 	u32 mfval, manc, reg;
+@@ -556,13 +558,14 @@ static int ixgbe_ipsec_check_mgmt_ip(struct xfrm_state *xs)
+ 
+ /**
+  * ixgbe_ipsec_add_sa - program device with a security association
++ * @dev: pointer to device to program
+  * @xs: pointer to transformer state struct
+  * @extack: extack point to fill failure reason
+  **/
+-static int ixgbe_ipsec_add_sa(struct xfrm_state *xs,
++static int ixgbe_ipsec_add_sa(struct net_device *dev,
++			      struct xfrm_state *xs,
+ 			      struct netlink_ext_ack *extack)
+ {
+-	struct net_device *dev = xs->xso.real_dev;
+ 	struct ixgbe_adapter *adapter = netdev_priv(dev);
+ 	struct ixgbe_ipsec *ipsec = adapter->ipsec;
+ 	struct ixgbe_hw *hw = &adapter->hw;
+@@ -581,7 +584,7 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs,
+ 		return -EINVAL;
+ 	}
+ 
+-	if (ixgbe_ipsec_check_mgmt_ip(xs)) {
++	if (ixgbe_ipsec_check_mgmt_ip(dev, xs)) {
+ 		NL_SET_ERR_MSG_MOD(extack, "IPsec IP addr clash with mgmt filters");
+ 		return -EINVAL;
+ 	}
+@@ -615,7 +618,7 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs,
+ 			rsa.decrypt = xs->ealg || xs->aead;
+ 
+ 		/* get the key and salt */
+-		ret = ixgbe_ipsec_parse_proto_keys(xs, rsa.key, &rsa.salt);
++		ret = ixgbe_ipsec_parse_proto_keys(dev, xs, rsa.key, &rsa.salt);
+ 		if (ret) {
+ 			NL_SET_ERR_MSG_MOD(extack, "Failed to get key data for Rx SA table");
+ 			return ret;
+@@ -724,7 +727,7 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs,
+ 		if (xs->id.proto & IPPROTO_ESP)
+ 			tsa.encrypt = xs->ealg || xs->aead;
+ 
+-		ret = ixgbe_ipsec_parse_proto_keys(xs, tsa.key, &tsa.salt);
++		ret = ixgbe_ipsec_parse_proto_keys(dev, xs, tsa.key, &tsa.salt);
+ 		if (ret) {
+ 			NL_SET_ERR_MSG_MOD(extack, "Failed to get key data for Tx SA table");
+ 			memset(&tsa, 0, sizeof(tsa));
+@@ -752,11 +755,11 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs,
+ 
+ /**
+  * ixgbe_ipsec_del_sa - clear out this specific SA
++ * @dev: pointer to device to program
+  * @xs: pointer to transformer state struct
+  **/
+-static void ixgbe_ipsec_del_sa(struct xfrm_state *xs)
++static void ixgbe_ipsec_del_sa(struct net_device *dev, struct xfrm_state *xs)
+ {
+-	struct net_device *dev = xs->xso.real_dev;
+ 	struct ixgbe_adapter *adapter = netdev_priv(dev);
+ 	struct ixgbe_ipsec *ipsec = adapter->ipsec;
+ 	struct ixgbe_hw *hw = &adapter->hw;
+@@ -841,7 +844,8 @@ void ixgbe_ipsec_vf_clear(struct ixgbe_adapter *adapter, u32 vf)
+ 			continue;
+ 		if (ipsec->rx_tbl[i].mode & IXGBE_RXTXMOD_VF &&
+ 		    ipsec->rx_tbl[i].vf == vf)
+-			ixgbe_ipsec_del_sa(ipsec->rx_tbl[i].xs);
++			ixgbe_ipsec_del_sa(adapter->netdev,
++					   ipsec->rx_tbl[i].xs);
+ 	}
+ 
+ 	/* search tx sa table */
+@@ -850,7 +854,8 @@ void ixgbe_ipsec_vf_clear(struct ixgbe_adapter *adapter, u32 vf)
+ 			continue;
+ 		if (ipsec->tx_tbl[i].mode & IXGBE_RXTXMOD_VF &&
+ 		    ipsec->tx_tbl[i].vf == vf)
+-			ixgbe_ipsec_del_sa(ipsec->tx_tbl[i].xs);
++			ixgbe_ipsec_del_sa(adapter->netdev,
++					   ipsec->tx_tbl[i].xs);
+ 	}
+ }
+ 
+@@ -930,7 +935,7 @@ int ixgbe_ipsec_vf_add_sa(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
+ 	memcpy(xs->aead->alg_name, aes_gcm_name, sizeof(aes_gcm_name));
+ 
+ 	/* set up the HW offload */
+-	err = ixgbe_ipsec_add_sa(xs, NULL);
++	err = ixgbe_ipsec_add_sa(adapter->netdev, xs, NULL);
+ 	if (err)
+ 		goto err_aead;
+ 
+@@ -1034,7 +1039,7 @@ int ixgbe_ipsec_vf_del_sa(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
+ 		xs = ipsec->tx_tbl[sa_idx].xs;
+ 	}
+ 
+-	ixgbe_ipsec_del_sa(xs);
++	ixgbe_ipsec_del_sa(adapter->netdev, xs);
+ 
+ 	/* remove the xs that was made-up in the add request */
+ 	kfree_sensitive(xs);
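The ixgbe (and, below, ixgbevf) changes all make one mechanical move: the net_device is passed into each xfrm offload callback instead of being rederived from xs->xso.real_dev inside it, which lets callers such as the VF-cleanup paths name the device they already hold. A tiny sketch of that API reshaping (types invented for illustration):

	#include <stdio.h>

	struct device { const char *name; };
	struct xfrm_state_stub { int id; };

	/* New-style callback: the owning device is an explicit parameter. */
	static void del_sa(struct device *dev, struct xfrm_state_stub *xs)
	{
		printf("%s: deleting SA %d\n", dev->name, xs->id);
	}

	int main(void)
	{
		struct device d = { "eth0" };
		struct xfrm_state_stub s = { 42 };

		del_sa(&d, &s); /* caller supplies the device it already holds */
		return 0;
	}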
+diff --git a/drivers/net/ethernet/intel/ixgbevf/ipsec.c b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+index 8ba037e3d9c270..65580b9cb06f21 100644
+--- a/drivers/net/ethernet/intel/ixgbevf/ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+@@ -201,6 +201,7 @@ struct xfrm_state *ixgbevf_ipsec_find_rx_state(struct ixgbevf_ipsec *ipsec,
+ 
+ /**
+  * ixgbevf_ipsec_parse_proto_keys - find the key and salt based on the protocol
++ * @dev: pointer to net device to program
+  * @xs: pointer to xfrm_state struct
+  * @mykey: pointer to key array to populate
+  * @mysalt: pointer to salt value to populate
+@@ -208,10 +209,10 @@ struct xfrm_state *ixgbevf_ipsec_find_rx_state(struct ixgbevf_ipsec *ipsec,
+  * This copies the protocol keys and salt to our own data tables.  The
+  * 82599 family only supports the one algorithm.
+  **/
+-static int ixgbevf_ipsec_parse_proto_keys(struct xfrm_state *xs,
++static int ixgbevf_ipsec_parse_proto_keys(struct net_device *dev,
++					  struct xfrm_state *xs,
+ 					  u32 *mykey, u32 *mysalt)
+ {
+-	struct net_device *dev = xs->xso.real_dev;
+ 	unsigned char *key_data;
+ 	char *alg_name = NULL;
+ 	int key_len;
+@@ -256,13 +257,14 @@ static int ixgbevf_ipsec_parse_proto_keys(struct xfrm_state *xs,
+ 
+ /**
+  * ixgbevf_ipsec_add_sa - program device with a security association
++ * @dev: pointer to net device to program
+  * @xs: pointer to transformer state struct
+  * @extack: extack point to fill failure reason
+  **/
+-static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs,
++static int ixgbevf_ipsec_add_sa(struct net_device *dev,
++				struct xfrm_state *xs,
+ 				struct netlink_ext_ack *extack)
+ {
+-	struct net_device *dev = xs->xso.real_dev;
+ 	struct ixgbevf_adapter *adapter;
+ 	struct ixgbevf_ipsec *ipsec;
+ 	u16 sa_idx;
+@@ -310,7 +312,8 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs,
+ 			rsa.decrypt = xs->ealg || xs->aead;
+ 
+ 		/* get the key and salt */
+-		ret = ixgbevf_ipsec_parse_proto_keys(xs, rsa.key, &rsa.salt);
++		ret = ixgbevf_ipsec_parse_proto_keys(dev, xs, rsa.key,
++						     &rsa.salt);
+ 		if (ret) {
+ 			NL_SET_ERR_MSG_MOD(extack, "Failed to get key data for Rx SA table");
+ 			return ret;
+@@ -363,7 +366,8 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs,
+ 		if (xs->id.proto & IPPROTO_ESP)
+ 			tsa.encrypt = xs->ealg || xs->aead;
+ 
+-		ret = ixgbevf_ipsec_parse_proto_keys(xs, tsa.key, &tsa.salt);
++		ret = ixgbevf_ipsec_parse_proto_keys(dev, xs, tsa.key,
++						     &tsa.salt);
+ 		if (ret) {
+ 			NL_SET_ERR_MSG_MOD(extack, "Failed to get key data for Tx SA table");
+ 			memset(&tsa, 0, sizeof(tsa));
+@@ -388,11 +392,12 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs,
+ 
+ /**
+  * ixgbevf_ipsec_del_sa - clear out this specific SA
++ * @dev: pointer to net device to program
+  * @xs: pointer to transformer state struct
+  **/
+-static void ixgbevf_ipsec_del_sa(struct xfrm_state *xs)
++static void ixgbevf_ipsec_del_sa(struct net_device *dev,
++				 struct xfrm_state *xs)
+ {
+-	struct net_device *dev = xs->xso.real_dev;
+ 	struct ixgbevf_adapter *adapter;
+ 	struct ixgbevf_ipsec *ipsec;
+ 	u16 sa_idx;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+index 655dd4726d36ef..0277d226293e9c 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+@@ -143,6 +143,8 @@ static int mcs_notify_pfvf(struct mcs_intr_event *event, struct rvu *rvu)
+ 
+ 	otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pf);
+ 
++	otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, pf);
++
+ 	mutex_unlock(&rvu->mbox_lock);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+index 992fa0b82e8d2d..ebb56eb0d18cfd 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+@@ -272,6 +272,8 @@ static void cgx_notify_pfs(struct cgx_link_event *event, struct rvu *rvu)
+ 
+ 		otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pfid);
+ 
++		otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, pfid);
++
+ 		mutex_unlock(&rvu->mbox_lock);
+ 	} while (pfmap);
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
+index 052ae5923e3a85..32953cca108c80 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
+@@ -60,6 +60,8 @@ static int rvu_rep_up_notify(struct rvu *rvu, struct rep_event *event)
+ 
+ 	otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pf);
+ 
++	otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, pf);
++
+ 	mutex_unlock(&rvu->mbox_lock);
+ 	return 0;
+ }
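The three octeontx2 AF hunks apply one identical fix: after sending an up-direction mailbox message, wait for the peer's response before releasing the mailbox lock, so the next sender cannot clobber an in-flight exchange. A condensed single-threaded sketch of that send-and-wait handshake (names illustrative; the real otx2_mbox_wait_for_rsp() blocks on a completion with a timeout):

	#include <stdbool.h>
	#include <stdio.h>

	static bool rsp_ready;

	static void mbox_msg_send_up(int pf)
	{
		printf("sent up-message to PF %d\n", pf);
		rsp_ready = true; /* simulate the peer answering promptly */
	}

	static int mbox_wait_for_rsp(int pf)
	{
		/* kernel version: wait_event_timeout() on the response */
		if (!rsp_ready) {
			printf("PF %d: timed out\n", pf);
			return -1;
		}
		rsp_ready = false;
		return 0;
	}

	int main(void)
	{
		mbox_msg_send_up(0);
		return mbox_wait_for_rsp(0); /* reply arrives before unlock */
	}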
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+index fc59e50bafce66..a6500e3673f248 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+@@ -663,10 +663,10 @@ static int cn10k_ipsec_inb_add_state(struct xfrm_state *x,
+ 	return -EOPNOTSUPP;
+ }
+ 
+-static int cn10k_ipsec_outb_add_state(struct xfrm_state *x,
++static int cn10k_ipsec_outb_add_state(struct net_device *dev,
++				      struct xfrm_state *x,
+ 				      struct netlink_ext_ack *extack)
+ {
+-	struct net_device *netdev = x->xso.dev;
+ 	struct cn10k_tx_sa_s *sa_entry;
+ 	struct qmem *sa_info;
+ 	struct otx2_nic *pf;
+@@ -676,7 +676,7 @@ static int cn10k_ipsec_outb_add_state(struct xfrm_state *x,
+ 	if (err)
+ 		return err;
+ 
+-	pf = netdev_priv(netdev);
++	pf = netdev_priv(dev);
+ 
+ 	err = qmem_alloc(pf->dev, &sa_info, pf->ipsec.sa_size, OTX2_ALIGN);
+ 	if (err)
+@@ -700,18 +700,18 @@ static int cn10k_ipsec_outb_add_state(struct xfrm_state *x,
+ 	return 0;
+ }
+ 
+-static int cn10k_ipsec_add_state(struct xfrm_state *x,
++static int cn10k_ipsec_add_state(struct net_device *dev,
++				 struct xfrm_state *x,
+ 				 struct netlink_ext_ack *extack)
+ {
+ 	if (x->xso.dir == XFRM_DEV_OFFLOAD_IN)
+ 		return cn10k_ipsec_inb_add_state(x, extack);
+ 	else
+-		return cn10k_ipsec_outb_add_state(x, extack);
++		return cn10k_ipsec_outb_add_state(dev, x, extack);
+ }
+ 
+-static void cn10k_ipsec_del_state(struct xfrm_state *x)
++static void cn10k_ipsec_del_state(struct net_device *dev, struct xfrm_state *x)
+ {
+-	struct net_device *netdev = x->xso.dev;
+ 	struct cn10k_tx_sa_s *sa_entry;
+ 	struct qmem *sa_info;
+ 	struct otx2_nic *pf;
+@@ -720,7 +720,7 @@ static void cn10k_ipsec_del_state(struct xfrm_state *x)
+ 	if (x->xso.dir == XFRM_DEV_OFFLOAD_IN)
+ 		return;
+ 
+-	pf = netdev_priv(netdev);
++	pf = netdev_priv(dev);
+ 
+ 	sa_info = (struct qmem *)x->xso.offload_handle;
+ 	sa_entry = (struct cn10k_tx_sa_s *)sa_info->base;
+@@ -732,7 +732,7 @@ static void cn10k_ipsec_del_state(struct xfrm_state *x)
+ 
+ 	err = cn10k_outb_write_sa(pf, sa_info);
+ 	if (err)
+-		netdev_err(netdev, "Error (%d) deleting SA\n", err);
++		netdev_err(dev, "Error (%d) deleting SA\n", err);
+ 
+ 	x->xso.offload_handle = 0;
+ 	qmem_free(pf->dev, sa_info);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
+index 35acc07bd96489..5765bac119f0e7 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
+@@ -1638,6 +1638,7 @@ static int otx2_qos_leaf_del_last(struct otx2_nic *pfvf, u16 classid, bool force
+ 	if (!node->is_static)
+ 		dwrr_del_node = true;
+ 
++	WRITE_ONCE(node->qid, OTX2_QOS_QID_INNER);
+ 	/* destroy the leaf node */
+ 	otx2_qos_disable_sq(pfvf, qid);
+ 	otx2_qos_destroy_node(pfvf, node);
+@@ -1682,9 +1683,6 @@ static int otx2_qos_leaf_del_last(struct otx2_nic *pfvf, u16 classid, bool force
+ 	}
+ 	kfree(new_cfg);
+ 
+-	/* update tx_real_queues */
+-	otx2_qos_update_tx_netdev_queues(pfvf);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
+index c5dbae0e513b64..58d572ce08eff5 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
+@@ -256,6 +256,26 @@ int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx)
+ 	return err;
+ }
+ 
++static int otx2_qos_nix_npa_ndc_sync(struct otx2_nic *pfvf)
++{
++	struct ndc_sync_op *req;
++	int rc;
++
++	mutex_lock(&pfvf->mbox.lock);
++
++	req = otx2_mbox_alloc_msg_ndc_sync_op(&pfvf->mbox);
++	if (!req) {
++		mutex_unlock(&pfvf->mbox.lock);
++		return -ENOMEM;
++	}
++
++	req->nix_lf_tx_sync = true;
++	req->npa_lf_sync = true;
++	rc = otx2_sync_mbox_msg(&pfvf->mbox);
++	mutex_unlock(&pfvf->mbox.lock);
++	return rc;
++}
++
+ void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx)
+ {
+ 	struct otx2_qset *qset = &pfvf->qset;
+@@ -285,6 +305,8 @@ void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx)
+ 
+ 	otx2_qos_sqb_flush(pfvf, sq_idx);
+ 	otx2_smq_flush(pfvf, otx2_get_smq_idx(pfvf, sq_idx));
++	/* NIX/NPA NDC sync */
++	otx2_qos_nix_npa_ndc_sync(pfvf);
+ 	otx2_cleanup_tx_cqes(pfvf, cq);
+ 
+ 	mutex_lock(&pfvf->mbox.lock);
+diff --git a/drivers/net/ethernet/mediatek/mtk_star_emac.c b/drivers/net/ethernet/mediatek/mtk_star_emac.c
+index b175119a6a7da5..b83886a4112105 100644
+--- a/drivers/net/ethernet/mediatek/mtk_star_emac.c
++++ b/drivers/net/ethernet/mediatek/mtk_star_emac.c
+@@ -1463,6 +1463,8 @@ static __maybe_unused int mtk_star_suspend(struct device *dev)
+ 	if (netif_running(ndev))
+ 		mtk_star_disable(ndev);
+ 
++	netif_device_detach(ndev);
++
+ 	clk_bulk_disable_unprepare(MTK_STAR_NCLKS, priv->clks);
+ 
+ 	return 0;
+@@ -1487,6 +1489,8 @@ static __maybe_unused int mtk_star_resume(struct device *dev)
+ 			clk_bulk_disable_unprepare(MTK_STAR_NCLKS, priv->clks);
+ 	}
+ 
++	netif_device_attach(ndev);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_clock.c b/drivers/net/ethernet/mellanox/mlx4/en_clock.c
+index cd754cd76bde1b..d73a2044dc2662 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_clock.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_clock.c
+@@ -249,7 +249,7 @@ static const struct ptp_clock_info mlx4_en_ptp_clock_info = {
+ static u32 freq_to_shift(u16 freq)
+ {
+ 	u32 freq_khz = freq * 1000;
+-	u64 max_val_cycles = freq_khz * 1000 * MLX4_EN_WRAP_AROUND_SEC;
++	u64 max_val_cycles = freq_khz * 1000ULL * MLX4_EN_WRAP_AROUND_SEC;
+ 	u64 max_val_cycles_rounded = 1ULL << fls64(max_val_cycles - 1);
+ 	/* calculate max possible multiplier in order to fit in 64bit */
+ 	u64 max_mul = div64_u64(ULLONG_MAX, max_val_cycles_rounded);
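
The one-character 1000ULL fix above matters because freq_khz is a u32: without the suffix the product freq_khz * 1000 is computed modulo 2^32 before being widened for the u64 assignment. A runnable sketch, using a hypothetical 5 GHz rate chosen to expose the wrap:

    /* Illustrates the 32-bit overflow fixed in freq_to_shift(): with u32
     * operands the multiply wraps modulo 2^32. The 5 GHz value is
     * hypothetical, picked so the wrap is visible. */
    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        uint16_t freq = 5000;                    /* hypothetical 5 GHz, in MHz */
        uint32_t freq_khz = (uint32_t)freq * 1000;
        uint64_t wrapped  = freq_khz * 1000;     /* 32-bit multiply, wraps */
        uint64_t correct  = freq_khz * 1000ULL;  /* promoted to 64-bit first */

        printf("wrapped: %" PRIu64 "\n", wrapped);  /* 705032704 */
        printf("correct: %" PRIu64 "\n", correct);  /* 5000000000 */
        return 0;
    }
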
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+index f803e1c9359006..5ce1b463b7a8dd 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+@@ -707,8 +707,8 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
+ 				xdpi = mlx5e_xdpi_fifo_pop(xdpi_fifo);
+ 				page = xdpi.page.page;
+ 
+-				/* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE)
+-				 * as we know this is a page_pool page.
++				/* No need to check page_pool_page_is_pp() as we
++				 * know this is a page_pool page.
+ 				 */
+ 				page_pool_recycle_direct(page->pp, page);
+ 			} while (++n < num);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+index 2dd842aac6fc47..77f61cd28a7993 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+@@ -259,8 +259,7 @@ static void mlx5e_ipsec_init_macs(struct mlx5e_ipsec_sa_entry *sa_entry,
+ 				  struct mlx5_accel_esp_xfrm_attrs *attrs)
+ {
+ 	struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry);
+-	struct xfrm_state *x = sa_entry->x;
+-	struct net_device *netdev;
++	struct net_device *netdev = sa_entry->dev;
+ 	struct neighbour *n;
+ 	u8 addr[ETH_ALEN];
+ 	const void *pkey;
+@@ -270,8 +269,6 @@ static void mlx5e_ipsec_init_macs(struct mlx5e_ipsec_sa_entry *sa_entry,
+ 	    attrs->type != XFRM_DEV_OFFLOAD_PACKET)
+ 		return;
+ 
+-	netdev = x->xso.real_dev;
+-
+ 	mlx5_query_mac_address(mdev, addr);
+ 	switch (attrs->dir) {
+ 	case XFRM_DEV_OFFLOAD_IN:
+@@ -692,17 +689,17 @@ static int mlx5e_ipsec_create_dwork(struct mlx5e_ipsec_sa_entry *sa_entry)
+ 	return 0;
+ }
+ 
+-static int mlx5e_xfrm_add_state(struct xfrm_state *x,
++static int mlx5e_xfrm_add_state(struct net_device *dev,
++				struct xfrm_state *x,
+ 				struct netlink_ext_ack *extack)
+ {
+ 	struct mlx5e_ipsec_sa_entry *sa_entry = NULL;
+-	struct net_device *netdev = x->xso.real_dev;
+ 	struct mlx5e_ipsec *ipsec;
+ 	struct mlx5e_priv *priv;
+ 	gfp_t gfp;
+ 	int err;
+ 
+-	priv = netdev_priv(netdev);
++	priv = netdev_priv(dev);
+ 	if (!priv->ipsec)
+ 		return -EOPNOTSUPP;
+ 
+@@ -713,6 +710,7 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x,
+ 		return -ENOMEM;
+ 
+ 	sa_entry->x = x;
++	sa_entry->dev = dev;
+ 	sa_entry->ipsec = ipsec;
+ 	/* Check if this SA is originated from acquire flow temporary SA */
+ 	if (x->xso.flags & XFRM_DEV_OFFLOAD_FLAG_ACQ)
+@@ -809,7 +807,7 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x,
+ 	return err;
+ }
+ 
+-static void mlx5e_xfrm_del_state(struct xfrm_state *x)
++static void mlx5e_xfrm_del_state(struct net_device *dev, struct xfrm_state *x)
+ {
+ 	struct mlx5e_ipsec_sa_entry *sa_entry = to_ipsec_sa_entry(x);
+ 	struct mlx5e_ipsec *ipsec = sa_entry->ipsec;
+@@ -822,7 +820,7 @@ static void mlx5e_xfrm_del_state(struct xfrm_state *x)
+ 	WARN_ON(old != sa_entry);
+ }
+ 
+-static void mlx5e_xfrm_free_state(struct xfrm_state *x)
++static void mlx5e_xfrm_free_state(struct net_device *dev, struct xfrm_state *x)
+ {
+ 	struct mlx5e_ipsec_sa_entry *sa_entry = to_ipsec_sa_entry(x);
+ 	struct mlx5e_ipsec *ipsec = sa_entry->ipsec;
+@@ -855,8 +853,6 @@ static int mlx5e_ipsec_netevent_event(struct notifier_block *nb,
+ 	struct mlx5e_ipsec_sa_entry *sa_entry;
+ 	struct mlx5e_ipsec *ipsec;
+ 	struct neighbour *n = ptr;
+-	struct net_device *netdev;
+-	struct xfrm_state *x;
+ 	unsigned long idx;
+ 
+ 	if (event != NETEVENT_NEIGH_UPDATE || !(n->nud_state & NUD_VALID))
+@@ -876,11 +872,9 @@ static int mlx5e_ipsec_netevent_event(struct notifier_block *nb,
+ 				continue;
+ 		}
+ 
+-		x = sa_entry->x;
+-		netdev = x->xso.real_dev;
+ 		data = sa_entry->work->data;
+ 
+-		neigh_ha_snapshot(data->addr, n, netdev);
++		neigh_ha_snapshot(data->addr, n, sa_entry->dev);
+ 		queue_work(ipsec->wq, &sa_entry->work->work);
+ 	}
+ 
+@@ -996,8 +990,8 @@ static void mlx5e_xfrm_update_stats(struct xfrm_state *x)
+ 	size_t headers;
+ 
+ 	lockdep_assert(lockdep_is_held(&x->lock) ||
+-		       lockdep_is_held(&dev_net(x->xso.real_dev)->xfrm.xfrm_cfg_mutex) ||
+-		       lockdep_is_held(&dev_net(x->xso.real_dev)->xfrm.xfrm_state_lock));
++		       lockdep_is_held(&net->xfrm.xfrm_cfg_mutex) ||
++		       lockdep_is_held(&net->xfrm.xfrm_state_lock));
+ 
+ 	if (x->xso.flags & XFRM_DEV_OFFLOAD_FLAG_ACQ)
+ 		return;
+@@ -1170,7 +1164,7 @@ mlx5e_ipsec_build_accel_pol_attrs(struct mlx5e_ipsec_pol_entry *pol_entry,
+ static int mlx5e_xfrm_add_policy(struct xfrm_policy *x,
+ 				 struct netlink_ext_ack *extack)
+ {
+-	struct net_device *netdev = x->xdo.real_dev;
++	struct net_device *netdev = x->xdo.dev;
+ 	struct mlx5e_ipsec_pol_entry *pol_entry;
+ 	struct mlx5e_priv *priv;
+ 	int err;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
+index a63c2289f8af92..ffcd0cdeb77544 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
+@@ -274,6 +274,7 @@ struct mlx5e_ipsec_limits {
+ struct mlx5e_ipsec_sa_entry {
+ 	struct mlx5e_ipsec_esn_state esn_state;
+ 	struct xfrm_state *x;
++	struct net_device *dev;
+ 	struct mlx5e_ipsec *ipsec;
+ 	struct mlx5_accel_esp_xfrm_attrs attrs;
+ 	void (*set_iv_op)(struct sk_buff *skb, struct xfrm_state *x,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index fdf9e9bb99ace6..6253ea4e99a44f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -43,7 +43,6 @@
+ #include "en/fs_ethtool.h"
+ 
+ #define LANES_UNKNOWN		 0
+-#define MAX_LANES		 8
+ 
+ void mlx5e_ethtool_get_drvinfo(struct mlx5e_priv *priv,
+ 			       struct ethtool_drvinfo *drvinfo)
+@@ -1098,10 +1097,8 @@ static void get_link_properties(struct net_device *netdev,
+ 		speed = info->speed;
+ 		lanes = info->lanes;
+ 		duplex = DUPLEX_FULL;
+-	} else if (data_rate_oper) {
++	} else if (data_rate_oper)
+ 		speed = 100 * data_rate_oper;
+-		lanes = MAX_LANES;
+-	}
+ 
+ out:
+ 	link_ksettings->base.duplex = duplex;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index f1d908f611349f..fef418e1ed1a08 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -2028,9 +2028,8 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv,
+ 	return err;
+ }
+ 
+-static bool mlx5_flow_has_geneve_opt(struct mlx5e_tc_flow *flow)
++static bool mlx5_flow_has_geneve_opt(struct mlx5_flow_spec *spec)
+ {
+-	struct mlx5_flow_spec *spec = &flow->attr->parse_attr->spec;
+ 	void *headers_v = MLX5_ADDR_OF(fte_match_param,
+ 				       spec->match_value,
+ 				       misc_parameters_3);
+@@ -2069,7 +2068,7 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
+ 	}
+ 	complete_all(&flow->del_hw_done);
+ 
+-	if (mlx5_flow_has_geneve_opt(flow))
++	if (mlx5_flow_has_geneve_opt(&attr->parse_attr->spec))
+ 		mlx5_geneve_tlv_option_del(priv->mdev->geneve);
+ 
+ 	if (flow->decap_route)
+@@ -2574,12 +2573,13 @@ static int parse_tunnel_attr(struct mlx5e_priv *priv,
+ 
+ 		err = mlx5e_tc_tun_parse(filter_dev, priv, tmp_spec, f, match_level);
+ 		if (err) {
+-			kvfree(tmp_spec);
+ 			NL_SET_ERR_MSG_MOD(extack, "Failed to parse tunnel attributes");
+ 			netdev_warn(priv->netdev, "Failed to parse tunnel attributes");
+-			return err;
++		} else {
++			err = mlx5e_tc_set_attr_rx_tun(flow, tmp_spec);
+ 		}
+-		err = mlx5e_tc_set_attr_rx_tun(flow, tmp_spec);
++		if (mlx5_flow_has_geneve_opt(tmp_spec))
++			mlx5_geneve_tlv_option_del(priv->mdev->geneve);
+ 		kvfree(tmp_spec);
+ 		if (err)
+ 			return err;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 7fb8a3381f849e..4917d185d0c352 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -1295,12 +1295,15 @@ mlx5_eswitch_enable_pf_vf_vports(struct mlx5_eswitch *esw,
+ 		ret = mlx5_eswitch_load_pf_vf_vport(esw, MLX5_VPORT_ECPF, enabled_events);
+ 		if (ret)
+ 			goto ecpf_err;
+-		if (mlx5_core_ec_sriov_enabled(esw->dev)) {
+-			ret = mlx5_eswitch_load_ec_vf_vports(esw, esw->esw_funcs.num_ec_vfs,
+-							     enabled_events);
+-			if (ret)
+-				goto ec_vf_err;
+-		}
++	}
++
++	/* Enable ECVF vports */
++	if (mlx5_core_ec_sriov_enabled(esw->dev)) {
++		ret = mlx5_eswitch_load_ec_vf_vports(esw,
++						     esw->esw_funcs.num_ec_vfs,
++						     enabled_events);
++		if (ret)
++			goto ec_vf_err;
+ 	}
+ 
+ 	/* Enable VF vports */
+@@ -1331,9 +1334,11 @@ void mlx5_eswitch_disable_pf_vf_vports(struct mlx5_eswitch *esw)
+ {
+ 	mlx5_eswitch_unload_vf_vports(esw, esw->esw_funcs.num_vfs);
+ 
++	if (mlx5_core_ec_sriov_enabled(esw->dev))
++		mlx5_eswitch_unload_ec_vf_vports(esw,
++						 esw->esw_funcs.num_ec_vfs);
++
+ 	if (mlx5_ecpf_vport_exists(esw->dev)) {
+-		if (mlx5_core_ec_sriov_enabled(esw->dev))
+-			mlx5_eswitch_unload_ec_vf_vports(esw, esw->esw_funcs.num_vfs);
+ 		mlx5_eswitch_unload_pf_vf_vport(esw, MLX5_VPORT_ECPF);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 6163bc98d94a94..445301ea70426d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -2207,6 +2207,7 @@ try_add_to_existing_fg(struct mlx5_flow_table *ft,
+ 	struct mlx5_flow_handle *rule;
+ 	struct match_list *iter;
+ 	bool take_write = false;
++	bool try_again = false;
+ 	struct fs_fte *fte;
+ 	u64  version = 0;
+ 	int err;
+@@ -2271,6 +2272,7 @@ try_add_to_existing_fg(struct mlx5_flow_table *ft,
+ 		nested_down_write_ref_node(&g->node, FS_LOCK_PARENT);
+ 
+ 		if (!g->node.active) {
++			try_again = true;
+ 			up_write_ref_node(&g->node, false);
+ 			continue;
+ 		}
+@@ -2292,7 +2294,8 @@ try_add_to_existing_fg(struct mlx5_flow_table *ft,
+ 			tree_put_node(&fte->node, false);
+ 		return rule;
+ 	}
+-	rule = ERR_PTR(-ENOENT);
++	err = try_again ? -EAGAIN : -ENOENT;
++	rule = ERR_PTR(err);
+ out:
+ 	kmem_cache_free(steering->ftes_cache, fte);
+ 	return rule;
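
The new try_again flag lets callers distinguish a transient race (a candidate group existed but was concurrently deactivated, -EAGAIN) from a genuine miss (-ENOENT). A runnable userspace model of the implied retry contract; try_lookup() is a hypothetical stub, not the driver's code:

    /* Model of the -EAGAIN vs -ENOENT contract introduced above:
     * -EAGAIN means "a candidate existed but was being torn down, retry";
     * -ENOENT stays a hard miss. */
    #include <stdio.h>
    #include <errno.h>

    static int attempts;

    static int try_lookup(void)
    {
        /* first attempt races with a teardown, second succeeds */
        return (++attempts == 1) ? -EAGAIN : 0;
    }

    int main(void)
    {
        int err;

        do {
            err = try_lookup();
        } while (err == -EAGAIN);

        printf("resolved after %d attempt(s), err=%d\n", attempts, err);
        return 0;
    }
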
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+index 972e8e9df585ba..9bc9bd83c2324c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+@@ -291,7 +291,7 @@ static void free_4k(struct mlx5_core_dev *dev, u64 addr, u32 function)
+ static int alloc_system_page(struct mlx5_core_dev *dev, u32 function)
+ {
+ 	struct device *device = mlx5_core_dma_dev(dev);
+-	int nid = dev_to_node(device);
++	int nid = dev->priv.numa_node;
+ 	struct page *page;
+ 	u64 zero_addr = 1;
+ 	u64 addr;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
+index b5332c54d4fb0f..17b8a3beb11732 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
+@@ -1361,8 +1361,8 @@ mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
+ 	struct mlx5hws_cmd_set_fte_attr fte_attr = {0};
+ 	struct mlx5hws_cmd_forward_tbl *fw_island;
+ 	struct mlx5hws_action *action;
+-	u32 i /*, packet_reformat_id*/;
+-	int ret;
++	int ret, last_dest_idx = -1;
++	u32 i;
+ 
+ 	if (num_dest <= 1) {
+ 		mlx5hws_err(ctx, "Action must have multiple dests\n");
+@@ -1392,11 +1392,8 @@ mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
+ 			dest_list[i].destination_id = dests[i].dest->dest_obj.obj_id;
+ 			fte_attr.action_flags |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+ 			fte_attr.ignore_flow_level = ignore_flow_level;
+-			/* ToDo: In SW steering we have a handling of 'go to WIRE'
+-			 * destination here by upper layer setting 'is_wire_ft' flag
+-			 * if the destination is wire.
+-			 * This is because uplink should be last dest in the list.
+-			 */
++			if (dests[i].is_wire_ft)
++				last_dest_idx = i;
+ 			break;
+ 		case MLX5HWS_ACTION_TYP_VPORT:
+ 			dest_list[i].destination_type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
+@@ -1420,6 +1417,9 @@ mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
+ 		}
+ 	}
+ 
++	if (last_dest_idx != -1)
++		swap(dest_list[last_dest_idx], dest_list[num_dest - 1]);
++
+ 	fte_attr.dests_num = num_dest;
+ 	fte_attr.dests = dest_list;
+ 
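
This resolves the removed ToDo: a wire (uplink) flow-table destination must be the last entry in the destination list, so the loop remembers its index and swaps it to the end. A runnable sketch of the reorder; the struct and the swap() macro are simplified stand-ins for the kernel's:

    /* Model of the "wire destination goes last" reorder above: remember
     * the index while filling the array, then swap with the final slot. */
    #include <stdio.h>

    struct dest { int id; int is_wire; };

    #define swap(a, b) do { struct dest t = (a); (a) = (b); (b) = t; } while (0)

    int main(void)
    {
        struct dest dests[] = { {1, 0}, {2, 1}, {3, 0} }; /* wire in the middle */
        int num = 3, last_wire = -1, i;

        for (i = 0; i < num; i++)
            if (dests[i].is_wire)
                last_wire = i;

        if (last_wire != -1)
            swap(dests[last_wire], dests[num - 1]);

        for (i = 0; i < num; i++)
            printf("dest[%d] = %d%s\n", i, dests[i].id,
                   dests[i].is_wire ? " (wire)" : "");
        return 0;
    }
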
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c
+index 19dce1ba512d42..32de8bfc7644f5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c
+@@ -90,13 +90,19 @@ int mlx5hws_bwc_matcher_create_simple(struct mlx5hws_bwc_matcher *bwc_matcher,
+ 	bwc_matcher->priority = priority;
+ 	bwc_matcher->size_log = MLX5HWS_BWC_MATCHER_INIT_SIZE_LOG;
+ 
++	bwc_matcher->size_of_at_array = MLX5HWS_BWC_MATCHER_ATTACH_AT_NUM;
++	bwc_matcher->at = kcalloc(bwc_matcher->size_of_at_array,
++				  sizeof(*bwc_matcher->at), GFP_KERNEL);
++	if (!bwc_matcher->at)
++		goto free_bwc_matcher_rules;
++
+ 	/* create dummy action template */
+ 	bwc_matcher->at[0] =
+ 		mlx5hws_action_template_create(action_types ?
+ 					       action_types : init_action_types);
+ 	if (!bwc_matcher->at[0]) {
+ 		mlx5hws_err(table->ctx, "BWC matcher: failed creating action template\n");
+-		goto free_bwc_matcher_rules;
++		goto free_bwc_matcher_at_array;
+ 	}
+ 
+ 	bwc_matcher->num_of_at = 1;
+@@ -126,6 +132,8 @@ int mlx5hws_bwc_matcher_create_simple(struct mlx5hws_bwc_matcher *bwc_matcher,
+ 	mlx5hws_match_template_destroy(bwc_matcher->mt);
+ free_at:
+ 	mlx5hws_action_template_destroy(bwc_matcher->at[0]);
++free_bwc_matcher_at_array:
++	kfree(bwc_matcher->at);
+ free_bwc_matcher_rules:
+ 	kfree(bwc_matcher->rules);
+ err:
+@@ -192,6 +200,7 @@ int mlx5hws_bwc_matcher_destroy_simple(struct mlx5hws_bwc_matcher *bwc_matcher)
+ 
+ 	for (i = 0; i < bwc_matcher->num_of_at; i++)
+ 		mlx5hws_action_template_destroy(bwc_matcher->at[i]);
++	kfree(bwc_matcher->at);
+ 
+ 	mlx5hws_match_template_destroy(bwc_matcher->mt);
+ 	kfree(bwc_matcher->rules);
+@@ -520,6 +529,23 @@ hws_bwc_matcher_extend_at(struct mlx5hws_bwc_matcher *bwc_matcher,
+ 			  struct mlx5hws_rule_action rule_actions[])
+ {
+ 	enum mlx5hws_action_type action_types[MLX5HWS_BWC_MAX_ACTS];
++	void *p;
++
++	if (unlikely(bwc_matcher->num_of_at >= bwc_matcher->size_of_at_array)) {
++		if (bwc_matcher->size_of_at_array >= MLX5HWS_MATCHER_MAX_AT)
++			return -ENOMEM;
++		bwc_matcher->size_of_at_array *= 2;
++		p = krealloc(bwc_matcher->at,
++			     bwc_matcher->size_of_at_array *
++				     sizeof(*bwc_matcher->at),
++			     __GFP_ZERO | GFP_KERNEL);
++		if (!p) {
++			bwc_matcher->size_of_at_array /= 2;
++			return -ENOMEM;
++		}
++
++		bwc_matcher->at = p;
++	}
+ 
+ 	hws_bwc_rule_actions_to_action_types(rule_actions, action_types);
+ 
+@@ -777,6 +803,7 @@ int mlx5hws_bwc_rule_create_simple(struct mlx5hws_bwc_rule *bwc_rule,
+ 	struct mlx5hws_rule_attr rule_attr;
+ 	struct mutex *queue_lock; /* Protect the queue */
+ 	u32 num_of_rules;
++	bool need_rehash;
+ 	int ret = 0;
+ 	int at_idx;
+ 
+@@ -803,10 +830,14 @@ int mlx5hws_bwc_rule_create_simple(struct mlx5hws_bwc_rule *bwc_rule,
+ 		at_idx = bwc_matcher->num_of_at - 1;
+ 
+ 		ret = mlx5hws_matcher_attach_at(bwc_matcher->matcher,
+-						bwc_matcher->at[at_idx]);
++						bwc_matcher->at[at_idx],
++						&need_rehash);
+ 		if (unlikely(ret)) {
+-			/* Action template attach failed, possibly due to
+-			 * requiring more action STEs.
++			hws_bwc_unlock_all_queues(ctx);
++			return ret;
++		}
++		if (unlikely(need_rehash)) {
++			/* The new action template requires more action STEs.
+ 			 * Need to attempt creating new matcher with all
+ 			 * the action templates, including the new one.
+ 			 */
+@@ -942,6 +973,7 @@ hws_bwc_rule_action_update(struct mlx5hws_bwc_rule *bwc_rule,
+ 	struct mlx5hws_context *ctx = bwc_matcher->matcher->tbl->ctx;
+ 	struct mlx5hws_rule_attr rule_attr;
+ 	struct mutex *queue_lock; /* Protect the queue */
++	bool need_rehash;
+ 	int at_idx, ret;
+ 	u16 idx;
+ 
+@@ -973,12 +1005,17 @@ hws_bwc_rule_action_update(struct mlx5hws_bwc_rule *bwc_rule,
+ 			at_idx = bwc_matcher->num_of_at - 1;
+ 
+ 			ret = mlx5hws_matcher_attach_at(bwc_matcher->matcher,
+-							bwc_matcher->at[at_idx]);
++							bwc_matcher->at[at_idx],
++							&need_rehash);
+ 			if (unlikely(ret)) {
+-				/* Action template attach failed, possibly due to
+-				 * requiring more action STEs.
+-				 * Need to attempt creating new matcher with all
+-				 * the action templates, including the new one.
++				hws_bwc_unlock_all_queues(ctx);
++				return ret;
++			}
++			if (unlikely(need_rehash)) {
++				/* The new action template requires more action
++				 * STEs. Need to attempt creating new matcher
++				 * with all the action templates, including the
++				 * new one.
+ 				 */
+ 				ret = hws_bwc_matcher_rehash_at(bwc_matcher);
+ 				if (unlikely(ret)) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.h
+index 47f7ed1415535f..bb0cf4b922ceba 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.h
+@@ -10,9 +10,7 @@
+ #define MLX5HWS_BWC_MATCHER_REHASH_BURST_TH 32
+ 
+ /* Max number of AT attach operations for the same matcher.
+- * When the limit is reached, next attempt to attach new AT
+- * will result in creation of a new matcher and moving all
+- * the rules to this matcher.
++ * When the limit is reached, a larger buffer is allocated for the ATs.
+  */
+ #define MLX5HWS_BWC_MATCHER_ATTACH_AT_NUM 8
+ 
+@@ -23,10 +21,11 @@
+ struct mlx5hws_bwc_matcher {
+ 	struct mlx5hws_matcher *matcher;
+ 	struct mlx5hws_match_template *mt;
+-	struct mlx5hws_action_template *at[MLX5HWS_BWC_MATCHER_ATTACH_AT_NUM];
+-	u32 priority;
++	struct mlx5hws_action_template **at;
+ 	u8 num_of_at;
++	u8 size_of_at_array;
+ 	u8 size_log;
++	u32 priority;
+ 	atomic_t num_of_rules;
+ 	struct list_head *rules;
+ };
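
The fixed at[MLX5HWS_BWC_MATCHER_ATTACH_AT_NUM] array becomes a pointer plus size_of_at_array, grown by doubling when full. A runnable userspace sketch of the pattern; MAX_AT mirrors MLX5HWS_MATCHER_MAX_AT, realloc() stands in for krealloc(), and the tail is zeroed by hand where the kernel passes __GFP_ZERO:

    /* Doubling-growth pattern used for the action-template array above:
     * double the capacity on demand, capped at MAX_AT. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <errno.h>

    #define MAX_AT 128

    struct at_array {
        void **at;
        size_t size;   /* allocated capacity */
        size_t num;    /* entries in use */
    };

    static int at_array_grow(struct at_array *a)
    {
        size_t new_size = a->size * 2;
        void **p;

        if (a->size >= MAX_AT)
            return -ENOMEM;

        p = realloc(a->at, new_size * sizeof(*p));
        if (!p)
            return -ENOMEM;  /* old buffer is still valid */

        memset(p + a->size, 0, (new_size - a->size) * sizeof(*p));
        a->at = p;
        a->size = new_size;
        return 0;
    }

    int main(void)
    {
        struct at_array a = { calloc(8, sizeof(void *)), 8, 8 };

        if (at_array_grow(&a) == 0)
            printf("grown to %zu slots\n", a.size);  /* 16 */
        free(a.at);
        return 0;
    }

Unlike the kernel helper, which bumps the size first and halves it back on failure, the sketch commits the new size only on success; either way the old buffer stays valid when the allocation fails.
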
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/definer.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/definer.c
+index c8cc0c8115f537..293459458cc5f9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/definer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/definer.c
+@@ -559,6 +559,9 @@ hws_definer_conv_outer(struct mlx5hws_definer_conv_data *cd,
+ 	HWS_SET_HDR(fc, match_param, IP_PROTOCOL_O,
+ 		    outer_headers.ip_protocol,
+ 		    eth_l3_outer.protocol_next_header);
++	HWS_SET_HDR(fc, match_param, IP_VERSION_O,
++		    outer_headers.ip_version,
++		    eth_l3_outer.ip_version);
+ 	HWS_SET_HDR(fc, match_param, IP_TTL_O,
+ 		    outer_headers.ttl_hoplimit,
+ 		    eth_l3_outer.time_to_live_hop_limit);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+index 1b787cd66e6fd3..29c5e00af1aa06 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+@@ -966,6 +966,9 @@ static int mlx5_fs_fte_get_hws_actions(struct mlx5_flow_root_namespace *ns,
+ 			switch (attr->type) {
+ 			case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE:
+ 				dest_action = mlx5_fs_get_dest_action_ft(fs_ctx, dst);
++				if (dst->dest_attr.ft->flags &
++				    MLX5_FLOW_TABLE_UPLINK_VPORT)
++					dest_actions[num_dest_actions].is_wire_ft = true;
+ 				break;
+ 			case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE_NUM:
+ 				dest_action = mlx5_fs_get_dest_action_table_num(fs_ctx,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c
+index b61864b320536d..37a4497048a6fa 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c
+@@ -905,18 +905,48 @@ static int hws_matcher_uninit(struct mlx5hws_matcher *matcher)
+ 	return 0;
+ }
+ 
++static int hws_matcher_grow_at_array(struct mlx5hws_matcher *matcher)
++{
++	void *p;
++
++	if (matcher->size_of_at_array >= MLX5HWS_MATCHER_MAX_AT)
++		return -ENOMEM;
++
++	matcher->size_of_at_array *= 2;
++	p = krealloc(matcher->at,
++		     matcher->size_of_at_array * sizeof(*matcher->at),
++		     __GFP_ZERO | GFP_KERNEL);
++	if (!p) {
++		matcher->size_of_at_array /= 2;
++		return -ENOMEM;
++	}
++
++	matcher->at = p;
++
++	return 0;
++}
++
+ int mlx5hws_matcher_attach_at(struct mlx5hws_matcher *matcher,
+-			      struct mlx5hws_action_template *at)
++			      struct mlx5hws_action_template *at,
++			      bool *need_rehash)
+ {
+ 	bool is_jumbo = mlx5hws_matcher_mt_is_jumbo(matcher->mt);
+ 	struct mlx5hws_context *ctx = matcher->tbl->ctx;
+ 	u32 required_stes;
+ 	int ret;
+ 
+-	if (!matcher->attr.max_num_of_at_attach) {
+-		mlx5hws_dbg(ctx, "Num of current at (%d) exceed allowed value\n",
+-			    matcher->num_of_at);
+-		return -EOPNOTSUPP;
++	*need_rehash = false;
++
++	if (unlikely(matcher->num_of_at >= matcher->size_of_at_array)) {
++		ret = hws_matcher_grow_at_array(matcher);
++		if (ret)
++			return ret;
++
++		if (matcher->col_matcher) {
++			ret = hws_matcher_grow_at_array(matcher->col_matcher);
++			if (ret)
++				return ret;
++		}
+ 	}
+ 
+ 	ret = hws_matcher_check_and_process_at(matcher, at);
+@@ -927,12 +957,11 @@ int mlx5hws_matcher_attach_at(struct mlx5hws_matcher *matcher,
+ 	if (matcher->action_ste.max_stes < required_stes) {
+ 		mlx5hws_dbg(ctx, "Required STEs [%d] exceeds initial action template STE [%d]\n",
+ 			    required_stes, matcher->action_ste.max_stes);
+-		return -ENOMEM;
++		*need_rehash = true;
+ 	}
+ 
+ 	matcher->at[matcher->num_of_at] = *at;
+ 	matcher->num_of_at += 1;
+-	matcher->attr.max_num_of_at_attach -= 1;
+ 
+ 	if (matcher->col_matcher)
+ 		matcher->col_matcher->num_of_at = matcher->num_of_at;
+@@ -960,8 +989,9 @@ hws_matcher_set_templates(struct mlx5hws_matcher *matcher,
+ 	if (!matcher->mt)
+ 		return -ENOMEM;
+ 
+-	matcher->at = kvcalloc(num_of_at + matcher->attr.max_num_of_at_attach,
+-			       sizeof(*matcher->at),
++	matcher->size_of_at_array =
++		num_of_at + matcher->attr.max_num_of_at_attach;
++	matcher->at = kvcalloc(matcher->size_of_at_array, sizeof(*matcher->at),
+ 			       GFP_KERNEL);
+ 	if (!matcher->at) {
+ 		mlx5hws_err(ctx, "Failed to allocate action template array\n");
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h
+index 020de70270c501..20b32012c418be 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h
+@@ -23,6 +23,9 @@
+  */
+ #define MLX5HWS_MATCHER_ACTION_RTC_UPDATE_MULT 1
+ 
++/* Maximum number of action templates that can be attached to a matcher. */
++#define MLX5HWS_MATCHER_MAX_AT 128
++
+ enum mlx5hws_matcher_offset {
+ 	MLX5HWS_MATCHER_OFFSET_TAG_DW1 = 12,
+ 	MLX5HWS_MATCHER_OFFSET_TAG_DW0 = 13,
+@@ -72,6 +75,7 @@ struct mlx5hws_matcher {
+ 	struct mlx5hws_match_template *mt;
+ 	struct mlx5hws_action_template *at;
+ 	u8 num_of_at;
++	u8 size_of_at_array;
+ 	u8 num_of_mt;
+ 	/* enum mlx5hws_matcher_flags */
+ 	u8 flags;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
+index 5121951f2778a8..173f7ed1c17c3d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
+@@ -211,6 +211,7 @@ struct mlx5hws_action_dest_attr {
+ 	struct mlx5hws_action *dest;
+ 	/* Optional reformat action */
+ 	struct mlx5hws_action *reformat;
++	bool is_wire_ft;
+ };
+ 
+ /**
+@@ -399,11 +400,14 @@ int mlx5hws_matcher_destroy(struct mlx5hws_matcher *matcher);
+  *
+  * @matcher: Matcher to attach the action template to.
+  * @at: Action template to be attached to the matcher.
++ * @need_rehash: Output parameter that tells callers if the matcher needs to be
++ * rehashed.
+  *
+  * Return: Zero on success, non-zero otherwise.
+  */
+ int mlx5hws_matcher_attach_at(struct mlx5hws_matcher *matcher,
+-			      struct mlx5hws_action_template *at);
++			      struct mlx5hws_action_template *at,
++			      bool *need_rehash);
+ 
+ /**
+  * mlx5hws_matcher_resize_set_target - Link two matchers and enable moving rules.
+diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
+index a70b88037a208b..7f36443832ada3 100644
+--- a/drivers/net/ethernet/microchip/lan743x_main.c
++++ b/drivers/net/ethernet/microchip/lan743x_main.c
+@@ -1330,7 +1330,7 @@ static int lan743x_mac_set_mtu(struct lan743x_adapter *adapter, int new_mtu)
+ }
+ 
+ /* PHY */
+-static int lan743x_phy_reset(struct lan743x_adapter *adapter)
++static int lan743x_hw_reset_phy(struct lan743x_adapter *adapter)
+ {
+ 	u32 data;
+ 
+@@ -1346,11 +1346,6 @@ static int lan743x_phy_reset(struct lan743x_adapter *adapter)
+ 				  50000, 1000000);
+ }
+ 
+-static int lan743x_phy_init(struct lan743x_adapter *adapter)
+-{
+-	return lan743x_phy_reset(adapter);
+-}
+-
+ static void lan743x_phy_interface_select(struct lan743x_adapter *adapter)
+ {
+ 	u32 id_rev;
+@@ -3534,10 +3529,6 @@ static int lan743x_hardware_init(struct lan743x_adapter *adapter,
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = lan743x_phy_init(adapter);
+-	if (ret)
+-		return ret;
+-
+ 	ret = lan743x_ptp_init(adapter);
+ 	if (ret)
+ 		return ret;
+@@ -3674,6 +3665,10 @@ static int lan743x_pcidev_probe(struct pci_dev *pdev,
+ 	if (ret)
+ 		goto cleanup_pci;
+ 
++	ret = lan743x_hw_reset_phy(adapter);
++	if (ret)
++		goto cleanup_pci;
++
+ 	ret = lan743x_hardware_init(adapter, pdev);
+ 	if (ret)
+ 		goto cleanup_pci;
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
+index 0af143ec0f8694..7001584f1b7a62 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
+@@ -353,6 +353,11 @@ static void lan966x_ifh_set_rew_op(void *ifh, u64 rew_op)
+ 	lan966x_ifh_set(ifh, rew_op, IFH_POS_REW_CMD, IFH_WID_REW_CMD);
+ }
+ 
++static void lan966x_ifh_set_oam_type(void *ifh, u64 oam_type)
++{
++	lan966x_ifh_set(ifh, oam_type, IFH_POS_PDU_TYPE, IFH_WID_PDU_TYPE);
++}
++
+ static void lan966x_ifh_set_timestamp(void *ifh, u64 timestamp)
+ {
+ 	lan966x_ifh_set(ifh, timestamp, IFH_POS_TIMESTAMP, IFH_WID_TIMESTAMP);
+@@ -380,6 +385,7 @@ static netdev_tx_t lan966x_port_xmit(struct sk_buff *skb,
+ 			return err;
+ 
+ 		lan966x_ifh_set_rew_op(ifh, LAN966X_SKB_CB(skb)->rew_op);
++		lan966x_ifh_set_oam_type(ifh, LAN966X_SKB_CB(skb)->pdu_type);
+ 		lan966x_ifh_set_timestamp(ifh, LAN966X_SKB_CB(skb)->ts_id);
+ 	}
+ 
+@@ -873,6 +879,7 @@ static int lan966x_probe_port(struct lan966x *lan966x, u32 p,
+ 	lan966x_vlan_port_set_vlan_aware(port, 0);
+ 	lan966x_vlan_port_set_vid(port, HOST_PVID, false, false);
+ 	lan966x_vlan_port_apply(port);
++	lan966x_vlan_port_rew_host(port);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_main.h b/drivers/net/ethernet/microchip/lan966x/lan966x_main.h
+index 1efa584e710777..4f75f068836933 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_main.h
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_main.h
+@@ -75,6 +75,10 @@
+ #define IFH_REW_OP_ONE_STEP_PTP		0x3
+ #define IFH_REW_OP_TWO_STEP_PTP		0x4
+ 
++#define IFH_PDU_TYPE_NONE		0
++#define IFH_PDU_TYPE_IPV4		7
++#define IFH_PDU_TYPE_IPV6		8
++
+ #define FDMA_RX_DCB_MAX_DBS		1
+ #define FDMA_TX_DCB_MAX_DBS		1
+ 
+@@ -254,6 +258,7 @@ struct lan966x_phc {
+ 
+ struct lan966x_skb_cb {
+ 	u8 rew_op;
++	u8 pdu_type;
+ 	u16 ts_id;
+ 	unsigned long jiffies;
+ };
+@@ -492,6 +497,7 @@ void lan966x_vlan_port_apply(struct lan966x_port *port);
+ bool lan966x_vlan_cpu_member_cpu_vlan_mask(struct lan966x *lan966x, u16 vid);
+ void lan966x_vlan_port_set_vlan_aware(struct lan966x_port *port,
+ 				      bool vlan_aware);
++void lan966x_vlan_port_rew_host(struct lan966x_port *port);
+ int lan966x_vlan_port_set_vid(struct lan966x_port *port,
+ 			      u16 vid,
+ 			      bool pvid,
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c b/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c
+index 63905bb5a63a83..87e5e81d40dc68 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c
+@@ -322,34 +322,55 @@ void lan966x_ptp_hwtstamp_get(struct lan966x_port *port,
+ 	*cfg = phc->hwtstamp_config;
+ }
+ 
+-static int lan966x_ptp_classify(struct lan966x_port *port, struct sk_buff *skb)
++static void lan966x_ptp_classify(struct lan966x_port *port, struct sk_buff *skb,
++				 u8 *rew_op, u8 *pdu_type)
+ {
+ 	struct ptp_header *header;
+ 	u8 msgtype;
+ 	int type;
+ 
+-	if (port->ptp_tx_cmd == IFH_REW_OP_NOOP)
+-		return IFH_REW_OP_NOOP;
++	if (port->ptp_tx_cmd == IFH_REW_OP_NOOP) {
++		*rew_op = IFH_REW_OP_NOOP;
++		*pdu_type = IFH_PDU_TYPE_NONE;
++		return;
++	}
+ 
+ 	type = ptp_classify_raw(skb);
+-	if (type == PTP_CLASS_NONE)
+-		return IFH_REW_OP_NOOP;
++	if (type == PTP_CLASS_NONE) {
++		*rew_op = IFH_REW_OP_NOOP;
++		*pdu_type = IFH_PDU_TYPE_NONE;
++		return;
++	}
+ 
+ 	header = ptp_parse_header(skb, type);
+-	if (!header)
+-		return IFH_REW_OP_NOOP;
++	if (!header) {
++		*rew_op = IFH_REW_OP_NOOP;
++		*pdu_type = IFH_PDU_TYPE_NONE;
++		return;
++	}
+ 
+-	if (port->ptp_tx_cmd == IFH_REW_OP_TWO_STEP_PTP)
+-		return IFH_REW_OP_TWO_STEP_PTP;
++	if (type & PTP_CLASS_L2)
++		*pdu_type = IFH_PDU_TYPE_NONE;
++	if (type & PTP_CLASS_IPV4)
++		*pdu_type = IFH_PDU_TYPE_IPV4;
++	if (type & PTP_CLASS_IPV6)
++		*pdu_type = IFH_PDU_TYPE_IPV6;
++
++	if (port->ptp_tx_cmd == IFH_REW_OP_TWO_STEP_PTP) {
++		*rew_op = IFH_REW_OP_TWO_STEP_PTP;
++		return;
++	}
+ 
+ 	/* If it is sync and run 1 step then set the correct operation,
+ 	 * otherwise run as 2 step
+ 	 */
+ 	msgtype = ptp_get_msgtype(header, type);
+-	if ((msgtype & 0xf) == 0)
+-		return IFH_REW_OP_ONE_STEP_PTP;
++	if ((msgtype & 0xf) == 0) {
++		*rew_op = IFH_REW_OP_ONE_STEP_PTP;
++		return;
++	}
+ 
+-	return IFH_REW_OP_TWO_STEP_PTP;
++	*rew_op = IFH_REW_OP_TWO_STEP_PTP;
+ }
+ 
+ static void lan966x_ptp_txtstamp_old_release(struct lan966x_port *port)
+@@ -374,10 +395,12 @@ int lan966x_ptp_txtstamp_request(struct lan966x_port *port,
+ {
+ 	struct lan966x *lan966x = port->lan966x;
+ 	unsigned long flags;
++	u8 pdu_type;
+ 	u8 rew_op;
+ 
+-	rew_op = lan966x_ptp_classify(port, skb);
++	lan966x_ptp_classify(port, skb, &rew_op, &pdu_type);
+ 	LAN966X_SKB_CB(skb)->rew_op = rew_op;
++	LAN966X_SKB_CB(skb)->pdu_type = pdu_type;
+ 
+ 	if (rew_op != IFH_REW_OP_TWO_STEP_PTP)
+ 		return 0;
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_switchdev.c b/drivers/net/ethernet/microchip/lan966x/lan966x_switchdev.c
+index 1c88120eb291a2..bcb4db76b75cd5 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_switchdev.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_switchdev.c
+@@ -297,6 +297,7 @@ static void lan966x_port_bridge_leave(struct lan966x_port *port,
+ 	lan966x_vlan_port_set_vlan_aware(port, false);
+ 	lan966x_vlan_port_set_vid(port, HOST_PVID, false, false);
+ 	lan966x_vlan_port_apply(port);
++	lan966x_vlan_port_rew_host(port);
+ }
+ 
+ int lan966x_port_changeupper(struct net_device *dev,
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_vlan.c b/drivers/net/ethernet/microchip/lan966x/lan966x_vlan.c
+index fa34a739c748e1..7da22520724ce2 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_vlan.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_vlan.c
+@@ -149,6 +149,27 @@ void lan966x_vlan_port_set_vlan_aware(struct lan966x_port *port,
+ 	port->vlan_aware = vlan_aware;
+ }
+ 
++/* When the interface is in host mode, it should not be VLAN aware, but it
++ * should still insert any tag it gets from the network stack. The tag is not
++ * in the frame data but in the skb, and the IFH is already configured to pick
++ * it up. So update the rewriter to insert the VLAN tag for all frames whose
++ * VLAN tag is different from 0.
++ */
++void lan966x_vlan_port_rew_host(struct lan966x_port *port)
++{
++	struct lan966x *lan966x = port->lan966x;
++	u32 val;
++
++	/* Tag all frames except when VID=0 */
++	val = REW_TAG_CFG_TAG_CFG_SET(2);
++
++	/* Update only some bits in the register */
++	lan_rmw(val,
++		REW_TAG_CFG_TAG_CFG,
++		lan966x, REW_TAG_CFG(port->chip_port));
++}
++
+ void lan966x_vlan_port_apply(struct lan966x_port *port)
+ {
+ 	struct lan966x *lan966x = port->lan966x;
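
lan_rmw() above is the driver's read-modify-write helper: only the bits selected by the mask change, so TAG_CFG can be updated without disturbing the rest of REW_TAG_CFG. A runnable sketch of the generic pattern; the register is simulated and the field layout is hypothetical:

    /* Read-modify-write as used by lan_rmw() above: clear the masked
     * field, then OR in the new value. */
    #include <stdio.h>
    #include <stdint.h>

    static uint32_t rmw(uint32_t reg, uint32_t val, uint32_t mask)
    {
        return (reg & ~mask) | (val & mask);
    }

    int main(void)
    {
        uint32_t tag_cfg_mask = 0x3u << 7;       /* hypothetical field position */
        uint32_t reg = 0xffffffff;               /* pretend current contents */

        reg = rmw(reg, 2u << 7, tag_cfg_mask);   /* TAG_CFG = 2: tag all but VID 0 */
        printf("reg = 0x%08x\n", reg);
        return 0;
    }
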
+diff --git a/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c b/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c
+index 671af5d4c5d25c..9e7c285eaa6bca 100644
+--- a/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c
++++ b/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c
+@@ -266,17 +266,17 @@ static void set_sha2_512hmac(struct nfp_ipsec_cfg_add_sa *cfg, int *trunc_len)
+ 	}
+ }
+ 
+-static int nfp_net_xfrm_add_state(struct xfrm_state *x,
++static int nfp_net_xfrm_add_state(struct net_device *dev,
++				  struct xfrm_state *x,
+ 				  struct netlink_ext_ack *extack)
+ {
+-	struct net_device *netdev = x->xso.real_dev;
+ 	struct nfp_ipsec_cfg_mssg msg = {};
+ 	int i, key_len, trunc_len, err = 0;
+ 	struct nfp_ipsec_cfg_add_sa *cfg;
+ 	struct nfp_net *nn;
+ 	unsigned int saidx;
+ 
+-	nn = netdev_priv(netdev);
++	nn = netdev_priv(dev);
+ 	cfg = &msg.cfg_add_sa;
+ 
+ 	/* General */
+@@ -546,17 +546,16 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x,
+ 	return 0;
+ }
+ 
+-static void nfp_net_xfrm_del_state(struct xfrm_state *x)
++static void nfp_net_xfrm_del_state(struct net_device *dev, struct xfrm_state *x)
+ {
+ 	struct nfp_ipsec_cfg_mssg msg = {
+ 		.cmd = NFP_IPSEC_CFG_MSSG_INV_SA,
+ 		.sa_idx = x->xso.offload_handle - 1,
+ 	};
+-	struct net_device *netdev = x->xso.real_dev;
+ 	struct nfp_net *nn;
+ 	int err;
+ 
+-	nn = netdev_priv(netdev);
++	nn = netdev_priv(dev);
+ 	err = nfp_net_sched_mbox_amsg_work(nn, NFP_NET_CFG_MBOX_CMD_IPSEC, &msg,
+ 					   sizeof(msg), nfp_net_ipsec_cfg);
+ 	if (err)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_est.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_est.c
+index c9693f77e1f61f..ac6f2e3a3fcd2f 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_est.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_est.c
+@@ -32,6 +32,11 @@ static int est_configure(struct stmmac_priv *priv, struct stmmac_est *cfg,
+ 	int i, ret = 0;
+ 	u32 ctrl;
+ 
++	if (!ptp_rate) {
++		netdev_warn(priv->dev, "Invalid PTP rate");
++		return -EINVAL;
++	}
++
+ 	ret |= est_write(est_addr, EST_BTR_LOW, cfg->btr[0], false);
+ 	ret |= est_write(est_addr, EST_BTR_HIGH, cfg->btr[1], false);
+ 	ret |= est_write(est_addr, EST_TER, cfg->ter, false);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 59d07d0d3369db..3a049a158ea111 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -803,6 +803,11 @@ int stmmac_init_tstamp_counter(struct stmmac_priv *priv, u32 systime_flags)
+ 	if (!(priv->dma_cap.time_stamp || priv->dma_cap.atime_stamp))
+ 		return -EOPNOTSUPP;
+ 
++	if (!priv->plat->clk_ptp_rate) {
++		netdev_err(priv->dev, "Invalid PTP clock rate");
++		return -EINVAL;
++	}
++
+ 	stmmac_config_hw_tstamping(priv, priv->ptpaddr, systime_flags);
+ 	priv->systime_flags = systime_flags;
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index c73eff6a56b87a..15205a47cafc27 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -430,6 +430,7 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+ 	struct device_node *np = pdev->dev.of_node;
+ 	struct plat_stmmacenet_data *plat;
+ 	struct stmmac_dma_cfg *dma_cfg;
++	static int bus_id = -ENODEV;
+ 	int phy_mode;
+ 	void *ret;
+ 	int rc;
+@@ -465,8 +466,14 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+ 	of_property_read_u32(np, "max-speed", &plat->max_speed);
+ 
+ 	plat->bus_id = of_alias_get_id(np, "ethernet");
+-	if (plat->bus_id < 0)
+-		plat->bus_id = 0;
++	if (plat->bus_id < 0) {
++		if (bus_id < 0)
++			bus_id = of_alias_get_highest_id("ethernet");
++		/* No ethernet alias found, init at -1 so first bus_id is 0 */
++		if (bus_id < 0)
++			bus_id = -1;
++		plat->bus_id = ++bus_id;
++	}
+ 
+ 	/* Default to phy auto-detection */
+ 	plat->phy_addr = -1;
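
The fallback keeps bus_ids unique across probes when the device tree has no "ethernet" alias: a function-local static is seeded once from the highest existing alias id, or from -1 when there is none, and then incremented per probe. A runnable model with the two OF helpers stubbed to the no-alias case:

    /* Model of the bus_id fallback above; the OF helpers are stubbed to
     * return -ENODEV ("no alias"), the case the fallback exists for. */
    #include <stdio.h>
    #include <errno.h>

    static int of_alias_get_id(void) { return -ENODEV; }          /* stub */
    static int of_alias_get_highest_id(void) { return -ENODEV; }  /* stub */

    static int probe_bus_id(void)
    {
        static int bus_id = -ENODEV;
        int id = of_alias_get_id();

        if (id < 0) {
            if (bus_id < 0)
                bus_id = of_alias_get_highest_id();
            if (bus_id < 0)
                bus_id = -1;   /* so the first assigned id is 0 */
            id = ++bus_id;
        }
        return id;
    }

    int main(void)
    {
        int a = probe_bus_id();
        int b = probe_bus_id();
        int c = probe_bus_id();

        printf("%d %d %d\n", a, b, c);  /* 0 1 2 */
        return 0;
    }
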
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
+index 429b2d357813c8..3767ba495e78d2 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
+@@ -317,7 +317,7 @@ void stmmac_ptp_register(struct stmmac_priv *priv)
+ 
+ 	/* Calculate the clock domain crossing (CDC) error if necessary */
+ 	priv->plat->cdc_error_adj = 0;
+-	if (priv->plat->has_gmac4 && priv->plat->clk_ptp_rate)
++	if (priv->plat->has_gmac4)
+ 		priv->plat->cdc_error_adj = (2 * NSEC_PER_SEC) / priv->plat->clk_ptp_rate;
+ 
+ 	/* Update the ptp clock parameters based on feature discovery, when
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_stats.c b/drivers/net/ethernet/ti/icssg/icssg_stats.c
+index 6f0edae38ea242..172ae38381b453 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_stats.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_stats.c
+@@ -29,6 +29,14 @@ void emac_update_hardware_stats(struct prueth_emac *emac)
+ 	spin_lock(&prueth->stats_lock);
+ 
+ 	for (i = 0; i < ARRAY_SIZE(icssg_all_miig_stats); i++) {
++		/* In MII mode TX lines are swapped inside ICSSG, so read Tx stats
++		 * from slice1 for port0 and slice0 for port1 to get accurate Tx
++		 * stats for a given port
++		 */
++		if (emac->phy_if == PHY_INTERFACE_MODE_MII &&
++		    icssg_all_miig_stats[i].offset >= ICSSG_TX_PACKET_OFFSET &&
++		    icssg_all_miig_stats[i].offset <= ICSSG_TX_BYTE_OFFSET)
++			base = stats_base[slice ^ 1];
+ 		regmap_read(prueth->miig_rt,
+ 			    base + icssg_all_miig_stats[i].offset,
+ 			    &val);
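
The slice ^ 1 XOR in the hunk above flips between the two ICSSG slices, so in MII mode the TX counters for port 0 are read from slice 1 and vice versa. A runnable model of the selection; the offset window constants are hypothetical stand-ins for the driver's:

    /* TX-stats slice swap from the hunk above: in MII mode the TX lines
     * are crossed inside ICSSG, so TX counters for port N live in slice
     * N ^ 1. Offsets are stand-ins. */
    #include <stdio.h>

    #define TX_PACKET_OFFSET 0x40  /* hypothetical start of TX counters */
    #define TX_BYTE_OFFSET   0x70  /* hypothetical end of TX counters */

    static int stats_slice(int slice, int offset, int is_mii)
    {
        if (is_mii && offset >= TX_PACKET_OFFSET && offset <= TX_BYTE_OFFSET)
            return slice ^ 1;   /* 0 <-> 1 */
        return slice;
    }

    int main(void)
    {
        printf("port0 TX counter read from slice %d\n", stats_slice(0, 0x50, 1));
        printf("port0 RX counter read from slice %d\n", stats_slice(0, 0x10, 1));
        return 0;
    }
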
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 054abf283ab33e..5f912b27bfd7fc 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -880,7 +880,7 @@ static void axienet_dma_tx_cb(void *data, const struct dmaengine_result *result)
+ 	dev_consume_skb_any(skbuf_dma->skb);
+ 	netif_txq_completed_wake(txq, 1, len,
+ 				 CIRC_SPACE(lp->tx_ring_head, lp->tx_ring_tail, TX_BD_NUM_MAX),
+-				 2 * MAX_SKB_FRAGS);
++				 2);
+ }
+ 
+ /**
+@@ -914,7 +914,7 @@ axienet_start_xmit_dmaengine(struct sk_buff *skb, struct net_device *ndev)
+ 
+ 	dma_dev = lp->tx_chan->device;
+ 	sg_len = skb_shinfo(skb)->nr_frags + 1;
+-	if (CIRC_SPACE(lp->tx_ring_head, lp->tx_ring_tail, TX_BD_NUM_MAX) <= sg_len) {
++	if (CIRC_SPACE(lp->tx_ring_head, lp->tx_ring_tail, TX_BD_NUM_MAX) <= 1) {
+ 		netif_stop_queue(ndev);
+ 		if (net_ratelimit())
+ 			netdev_warn(ndev, "TX ring unexpectedly full\n");
+@@ -964,7 +964,7 @@ axienet_start_xmit_dmaengine(struct sk_buff *skb, struct net_device *ndev)
+ 	txq = skb_get_tx_queue(lp->ndev, skb);
+ 	netdev_tx_sent_queue(txq, skb->len);
+ 	netif_txq_maybe_stop(txq, CIRC_SPACE(lp->tx_ring_head, lp->tx_ring_tail, TX_BD_NUM_MAX),
+-			     MAX_SKB_FRAGS + 1, 2 * MAX_SKB_FRAGS);
++			     1, 2);
+ 
+ 	dmaengine_submit(dma_tx_desc);
+ 	dma_async_issue_pending(lp->tx_chan);
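
In the dmaengine path each packet occupies exactly one ring slot regardless of its fragment count, which is why the stop/wake thresholds drop from MAX_SKB_FRAGS-based values to 1 and 2. CIRC_SPACE() is the free-slot count of a power-of-two ring; a runnable sketch with the macros copied from linux/circ_buf.h:

    /* Ring accounting from the hunk above: one slot per packet, stop the
     * queue when fewer than 2 slots remain. */
    #include <stdio.h>

    #define CIRC_CNT(head, tail, size) (((head) - (tail)) & ((size) - 1))
    #define CIRC_SPACE(head, tail, size) CIRC_CNT((tail), ((head) + 1), (size))

    int main(void)
    {
        int size = 8, head = 0, tail = 0, sent = 0;

        while (CIRC_SPACE(head, tail, size) > 1) {   /* mirrors the "<= 1" stop */
            head = (head + 1) & (size - 1);          /* one slot per packet */
            sent++;
        }
        printf("queued %d packets, space left %d\n",
               sent, CIRC_SPACE(head, tail, size));  /* 6 packets, 1 slot */
        return 0;
    }
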
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 3d315e30ee4725..7edbe76b5455a8 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -247,15 +247,39 @@ static sci_t make_sci(const u8 *addr, __be16 port)
+ 	return sci;
+ }
+ 
+-static sci_t macsec_frame_sci(struct macsec_eth_header *hdr, bool sci_present)
++static sci_t macsec_active_sci(struct macsec_secy *secy)
+ {
+-	sci_t sci;
++	struct macsec_rx_sc *rx_sc = rcu_dereference_bh(secy->rx_sc);
++
++	/* Case single RX SC */
++	if (rx_sc && !rcu_dereference_bh(rx_sc->next))
++		return (rx_sc->active) ? rx_sc->sci : 0;
++	/* Case no RX SC or multiple */
++	else
++		return 0;
++}
++
++static sci_t macsec_frame_sci(struct macsec_eth_header *hdr, bool sci_present,
++			      struct macsec_rxh_data *rxd)
++{
++	struct macsec_dev *macsec;
++	sci_t sci = 0;
+ 
+-	if (sci_present)
++	/* SC = 1 */
++	if (sci_present) {
+ 		memcpy(&sci, hdr->secure_channel_id,
+ 		       sizeof(hdr->secure_channel_id));
+-	else
++	/* SC = 0; ES = 0 */
++	} else if ((!(hdr->tci_an & (MACSEC_TCI_ES | MACSEC_TCI_SC))) &&
++		   (list_is_singular(&rxd->secys))) {
++		/* Only one SECY should exist in this scenario */
++		macsec = list_first_or_null_rcu(&rxd->secys, struct macsec_dev,
++						secys);
++		if (macsec)
++			return macsec_active_sci(&macsec->secy);
++	} else {
+ 		sci = make_sci(hdr->eth.h_source, MACSEC_PORT_ES);
++	}
+ 
+ 	return sci;
+ }
+@@ -1109,7 +1133,7 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
+ 	struct macsec_rxh_data *rxd;
+ 	struct macsec_dev *macsec;
+ 	unsigned int len;
+-	sci_t sci;
++	sci_t sci = 0;
+ 	u32 hdr_pn;
+ 	bool cbit;
+ 	struct pcpu_rx_sc_stats *rxsc_stats;
+@@ -1156,11 +1180,14 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
+ 
+ 	macsec_skb_cb(skb)->has_sci = !!(hdr->tci_an & MACSEC_TCI_SC);
+ 	macsec_skb_cb(skb)->assoc_num = hdr->tci_an & MACSEC_AN_MASK;
+-	sci = macsec_frame_sci(hdr, macsec_skb_cb(skb)->has_sci);
+ 
+ 	rcu_read_lock();
+ 	rxd = macsec_data_rcu(skb->dev);
+ 
++	sci = macsec_frame_sci(hdr, macsec_skb_cb(skb)->has_sci, rxd);
++	if (!sci)
++		goto drop_nosc;
++
+ 	list_for_each_entry_rcu(macsec, &rxd->secys, secys) {
+ 		struct macsec_rx_sc *sc = find_rx_sc(&macsec->secy, sci);
+ 
+@@ -1283,6 +1310,7 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
+ 	macsec_rxsa_put(rx_sa);
+ drop_nosa:
+ 	macsec_rxsc_put(rx_sc);
++drop_nosc:
+ 	rcu_read_unlock();
+ drop_direct:
+ 	kfree_skb(skb);
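
The reworked macsec_frame_sci() derives the SCI in three steps: take it from the header when the SC bit is set; when both SC and ES are clear and exactly one SecY exists, use the SCI of its single active RX SC; otherwise fall back to source MAC plus the end-station port. The fallback is the make_sci() encoding, sketched runnably below (0x0100 is htons(1), i.e. MACSEC_PORT_ES, on a little-endian host):

    /* make_sci() as used in the fallback above: the 64-bit SCI is the
     * 6-byte source MAC followed by a 2-byte network-order port. */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    typedef uint64_t sci_t;

    static sci_t make_sci(const uint8_t *addr, uint16_t port_be)
    {
        sci_t sci;

        memcpy(&sci, addr, 6);
        memcpy(((uint8_t *)&sci) + 6, &port_be, 2);
        return sci;
    }

    int main(void)
    {
        const uint8_t mac[6] = { 0x02, 0x00, 0x00, 0xaa, 0xbb, 0xcc };

        printf("sci = %016llx\n",
               (unsigned long long)make_sci(mac, 0x0100 /* htons(1), LE host */));
        return 0;
    }
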
+diff --git a/drivers/net/mctp/mctp-usb.c b/drivers/net/mctp/mctp-usb.c
+index e8d4b01c3f3458..775a386d0aca12 100644
+--- a/drivers/net/mctp/mctp-usb.c
++++ b/drivers/net/mctp/mctp-usb.c
+@@ -257,6 +257,8 @@ static int mctp_usb_open(struct net_device *dev)
+ 
+ 	WRITE_ONCE(mctp_usb->stopped, false);
+ 
++	netif_start_queue(dev);
++
+ 	return mctp_usb_rx_queue(mctp_usb, GFP_KERNEL);
+ }
+ 
+diff --git a/drivers/net/netconsole.c b/drivers/net/netconsole.c
+index 4289ccd3e41bff..176935a8645ff1 100644
+--- a/drivers/net/netconsole.c
++++ b/drivers/net/netconsole.c
+@@ -1252,7 +1252,6 @@ static int sysdata_append_release(struct netconsole_target *nt, int offset)
+  */
+ static int prepare_extradata(struct netconsole_target *nt)
+ {
+-	u32 fields = SYSDATA_CPU_NR | SYSDATA_TASKNAME;
+ 	int extradata_len;
+ 
+ 	/* userdata was appended when configfs write helper was called
+@@ -1260,7 +1259,7 @@ static int prepare_extradata(struct netconsole_target *nt)
+ 	 */
+ 	extradata_len = nt->userdata_length;
+ 
+-	if (!(nt->sysdata_fields & fields))
++	if (!nt->sysdata_fields)
+ 		goto out;
+ 
+ 	if (nt->sysdata_fields & SYSDATA_CPU_NR)
+diff --git a/drivers/net/netdevsim/ipsec.c b/drivers/net/netdevsim/ipsec.c
+index d88bdb9a17176a..47cdee5577d461 100644
+--- a/drivers/net/netdevsim/ipsec.c
++++ b/drivers/net/netdevsim/ipsec.c
+@@ -85,11 +85,11 @@ static int nsim_ipsec_find_empty_idx(struct nsim_ipsec *ipsec)
+ 	return -ENOSPC;
+ }
+ 
+-static int nsim_ipsec_parse_proto_keys(struct xfrm_state *xs,
++static int nsim_ipsec_parse_proto_keys(struct net_device *dev,
++				       struct xfrm_state *xs,
+ 				       u32 *mykey, u32 *mysalt)
+ {
+ 	const char aes_gcm_name[] = "rfc4106(gcm(aes))";
+-	struct net_device *dev = xs->xso.real_dev;
+ 	unsigned char *key_data;
+ 	char *alg_name = NULL;
+ 	int key_len;
+@@ -129,17 +129,16 @@ static int nsim_ipsec_parse_proto_keys(struct xfrm_state *xs,
+ 	return 0;
+ }
+ 
+-static int nsim_ipsec_add_sa(struct xfrm_state *xs,
++static int nsim_ipsec_add_sa(struct net_device *dev,
++			     struct xfrm_state *xs,
+ 			     struct netlink_ext_ack *extack)
+ {
+ 	struct nsim_ipsec *ipsec;
+-	struct net_device *dev;
+ 	struct netdevsim *ns;
+ 	struct nsim_sa sa;
+ 	u16 sa_idx;
+ 	int ret;
+ 
+-	dev = xs->xso.real_dev;
+ 	ns = netdev_priv(dev);
+ 	ipsec = &ns->ipsec;
+ 
+@@ -174,7 +173,7 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs,
+ 		sa.crypt = xs->ealg || xs->aead;
+ 
+ 	/* get the key and salt */
+-	ret = nsim_ipsec_parse_proto_keys(xs, sa.key, &sa.salt);
++	ret = nsim_ipsec_parse_proto_keys(dev, xs, sa.key, &sa.salt);
+ 	if (ret) {
+ 		NL_SET_ERR_MSG_MOD(extack, "Failed to get key data for SA table");
+ 		return ret;
+@@ -200,9 +199,9 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs,
+ 	return 0;
+ }
+ 
+-static void nsim_ipsec_del_sa(struct xfrm_state *xs)
++static void nsim_ipsec_del_sa(struct net_device *dev, struct xfrm_state *xs)
+ {
+-	struct netdevsim *ns = netdev_priv(xs->xso.real_dev);
++	struct netdevsim *ns = netdev_priv(dev);
+ 	struct nsim_ipsec *ipsec = &ns->ipsec;
+ 	u16 sa_idx;
+ 
+diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
+index 0e0321a7ddd710..31a06e71be25bb 100644
+--- a/drivers/net/netdevsim/netdev.c
++++ b/drivers/net/netdevsim/netdev.c
+@@ -369,7 +369,8 @@ static int nsim_poll(struct napi_struct *napi, int budget)
+ 	int done;
+ 
+ 	done = nsim_rcv(rq, budget);
+-	napi_complete(napi);
++	if (done < budget)
++		napi_complete_done(napi, done);
+ 
+ 	return done;
+ }
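
napi_complete() was being called even when the handler consumed its whole budget; the NAPI contract is to complete, via napi_complete_done(), only when done < budget, since returning the full budget asks the core to poll again. A runnable model of that contract; the backlog variable stands in for a receive queue:

    /* NAPI polling contract behind the fix above: complete (re-arm
     * interrupts) only when work done < budget. */
    #include <stdio.h>

    static int backlog = 170;

    static int poll(int budget)
    {
        int done = backlog < budget ? backlog : budget;

        backlog -= done;
        if (done < budget)
            printf("napi_complete_done(%d)\n", done);
        return done;
    }

    int main(void)
    {
        while (poll(64) == 64)  /* core keeps polling at full budget */
            ;
        return 0;               /* completes on the final 42-packet pass */
    }
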
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index ede596c1a69d1b..909b4d53fdacdc 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -903,6 +903,9 @@ int __mdiobus_read(struct mii_bus *bus, int addr, u32 regnum)
+ 
+ 	lockdep_assert_held_once(&bus->mdio_lock);
+ 
++	if (addr >= PHY_MAX_ADDR)
++		return -ENXIO;
++
+ 	if (bus->read)
+ 		retval = bus->read(bus, addr, regnum);
+ 	else
+@@ -932,6 +935,9 @@ int __mdiobus_write(struct mii_bus *bus, int addr, u32 regnum, u16 val)
+ 
+ 	lockdep_assert_held_once(&bus->mdio_lock);
+ 
++	if (addr >= PHY_MAX_ADDR)
++		return -ENXIO;
++
+ 	if (bus->write)
+ 		err = bus->write(bus, addr, regnum, val);
+ 	else
+@@ -993,6 +999,9 @@ int __mdiobus_c45_read(struct mii_bus *bus, int addr, int devad, u32 regnum)
+ 
+ 	lockdep_assert_held_once(&bus->mdio_lock);
+ 
++	if (addr >= PHY_MAX_ADDR)
++		return -ENXIO;
++
+ 	if (bus->read_c45)
+ 		retval = bus->read_c45(bus, addr, devad, regnum);
+ 	else
+@@ -1024,6 +1033,9 @@ int __mdiobus_c45_write(struct mii_bus *bus, int addr, int devad, u32 regnum,
+ 
+ 	lockdep_assert_held_once(&bus->mdio_lock);
+ 
++	if (addr >= PHY_MAX_ADDR)
++		return -ENXIO;
++
+ 	if (bus->write_c45)
+ 		err = bus->write_c45(bus, addr, devad, regnum, val);
+ 	else
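
All four mdiobus accessors gain the same guard. PHY_MAX_ADDR is 32 in the kernel, so valid addresses are 0..31 and anything larger now fails fast with -ENXIO instead of reaching the bus driver. A runnable sketch of the guard around a dummy read:

    /* Address guard added to the mdiobus accessors above. */
    #include <stdio.h>
    #include <errno.h>

    #define PHY_MAX_ADDR 32

    static int mdio_read(int addr, unsigned int regnum)
    {
        if (addr >= PHY_MAX_ADDR)
            return -ENXIO;
        (void)regnum;
        return 0x1234;            /* pretend register value */
    }

    int main(void)
    {
        printf("addr 5:  %d\n", mdio_read(5, 2));
        printf("addr 40: %d\n", mdio_read(40, 2));   /* -ENXIO */
        return 0;
    }
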
+diff --git a/drivers/net/phy/mediatek/Kconfig b/drivers/net/phy/mediatek/Kconfig
+index 2a8ac5aed0f893..6a4c2b328c4183 100644
+--- a/drivers/net/phy/mediatek/Kconfig
++++ b/drivers/net/phy/mediatek/Kconfig
+@@ -15,8 +15,7 @@ config MEDIATEK_GE_PHY
+ 
+ config MEDIATEK_GE_SOC_PHY
+ 	tristate "MediaTek SoC Ethernet PHYs"
+-	depends on (ARM64 && ARCH_MEDIATEK) || COMPILE_TEST
+-	depends on NVMEM_MTK_EFUSE
++	depends on (ARM64 && ARCH_MEDIATEK && NVMEM_MTK_EFUSE) || COMPILE_TEST
+ 	select MTK_NET_PHYLIB
+ 	help
+ 	  Supports MediaTek SoC built-in Gigabit Ethernet PHYs.
+diff --git a/drivers/net/phy/mscc/mscc_ptp.c b/drivers/net/phy/mscc/mscc_ptp.c
+index ed8fb14a7f215e..6b800081eed52f 100644
+--- a/drivers/net/phy/mscc/mscc_ptp.c
++++ b/drivers/net/phy/mscc/mscc_ptp.c
+@@ -946,7 +946,9 @@ static int vsc85xx_ip1_conf(struct phy_device *phydev, enum ts_blk blk,
+ 	/* UDP checksum offset in IPv4 packet
+ 	 * according to: https://tools.ietf.org/html/rfc768
+ 	 */
+-	val |= IP1_NXT_PROT_UDP_CHKSUM_OFF(26) | IP1_NXT_PROT_UDP_CHKSUM_CLEAR;
++	val |= IP1_NXT_PROT_UDP_CHKSUM_OFF(26);
++	if (enable)
++		val |= IP1_NXT_PROT_UDP_CHKSUM_CLEAR;
+ 	vsc85xx_ts_write_csr(phydev, blk, MSCC_ANA_IP1_NXT_PROT_UDP_CHKSUM,
+ 			     val);
+ 
+@@ -1166,18 +1168,24 @@ static void vsc85xx_txtstamp(struct mii_timestamper *mii_ts,
+ 		container_of(mii_ts, struct vsc8531_private, mii_ts);
+ 
+ 	if (!vsc8531->ptp->configured)
+-		return;
++		goto out;
+ 
+-	if (vsc8531->ptp->tx_type == HWTSTAMP_TX_OFF) {
+-		kfree_skb(skb);
+-		return;
+-	}
++	if (vsc8531->ptp->tx_type == HWTSTAMP_TX_OFF)
++		goto out;
++
++	if (vsc8531->ptp->tx_type == HWTSTAMP_TX_ONESTEP_SYNC)
++		if (ptp_msg_is_sync(skb, type))
++			goto out;
+ 
+ 	skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ 
+ 	mutex_lock(&vsc8531->ts_lock);
+ 	__skb_queue_tail(&vsc8531->ptp->tx_queue, skb);
+ 	mutex_unlock(&vsc8531->ts_lock);
++	return;
++
++out:
++	kfree_skb(skb);
+ }
+ 
+ static bool vsc85xx_rxtstamp(struct mii_timestamper *mii_ts,
+diff --git a/drivers/net/phy/phy_caps.c b/drivers/net/phy/phy_caps.c
+index 7033216897264e..38417e2886118c 100644
+--- a/drivers/net/phy/phy_caps.c
++++ b/drivers/net/phy/phy_caps.c
+@@ -188,6 +188,9 @@ phy_caps_lookup_by_linkmode_rev(const unsigned long *linkmodes, bool fdx_only)
+  * When @exact is not set, we return either an exact match, or matching capabilities
+  * at lower speed, or the lowest matching speed, or NULL.
+  *
++ * Non-exact matches will try to return an exact speed and duplex match, but may
++ * return matching capabilities with the same speed but a different duplex.
++ *
+  * Returns: a matched link_capabilities according to the above process, NULL
+  *	    otherwise.
+  */
+@@ -195,7 +198,7 @@ const struct link_capabilities *
+ phy_caps_lookup(int speed, unsigned int duplex, const unsigned long *supported,
+ 		bool exact)
+ {
+-	const struct link_capabilities *lcap, *last = NULL;
++	const struct link_capabilities *lcap, *match = NULL, *last = NULL;
+ 
+ 	for_each_link_caps_desc_speed(lcap) {
+ 		if (linkmode_intersects(lcap->linkmodes, supported)) {
+@@ -204,16 +207,19 @@ phy_caps_lookup(int speed, unsigned int duplex, const unsigned long *supported,
+ 			if (lcap->speed == speed && lcap->duplex == duplex) {
+ 				return lcap;
+ 			} else if (!exact) {
+-				if (lcap->speed <= speed)
+-					return lcap;
++				if (!match && lcap->speed <= speed)
++					match = lcap;
++
++				if (lcap->speed < speed)
++					break;
+ 			}
+ 		}
+ 	}
+ 
+-	if (!exact)
+-		return last;
++	if (!match && !exact)
++		match = last;
+ 
+-	return NULL;
++	return match;
+ }
+ EXPORT_SYMBOL_GPL(phy_caps_lookup);
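
The rework fixes non-exact lookups: the capability table is ordered by descending speed, so the scan prefers an exact speed-and-duplex hit, otherwise remembers the first supported entry at or below the requested speed, stops once speeds fall below the request, and only falls back to the lowest supported entry when nothing matched. A runnable model over a toy table (the entries and supported flags are illustrative):

    /* Model of the non-exact lookup fixed above; entries are sorted by
     * descending speed, as in the kernel's capability table. */
    #include <stdio.h>

    struct cap { int speed; int duplex; int supported; };

    static const struct cap caps[] = {
        { 1000, 1, 1 }, { 1000, 0, 0 },
        {  100, 1, 0 }, {  100, 0, 1 },
        {   10, 1, 1 }, {   10, 0, 1 },
    };

    static const struct cap *lookup(int speed, int duplex)
    {
        const struct cap *match = NULL, *last = NULL;
        unsigned int i;

        for (i = 0; i < sizeof(caps) / sizeof(caps[0]); i++) {
            const struct cap *c = &caps[i];

            if (!c->supported)
                continue;
            last = c;
            if (c->speed == speed && c->duplex == duplex)
                return c;
            if (!match && c->speed <= speed)
                match = c;
            if (c->speed < speed)
                break;
        }
        return match ? match : last;
    }

    int main(void)
    {
        const struct cap *c = lookup(100, 1);  /* 100/full is unsupported */

        printf("got %d/%s\n", c->speed, c->duplex ? "full" : "half");
        return 0;                              /* 100/half, not 10/full */
    }
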
+ 
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index cc1bfd22fb8120..7d5e76a3db0e94 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1727,8 +1727,10 @@ void phy_detach(struct phy_device *phydev)
+ 	struct module *ndev_owner = NULL;
+ 	struct mii_bus *bus;
+ 
+-	if (phydev->devlink)
++	if (phydev->devlink) {
+ 		device_link_del(phydev->devlink);
++		phydev->devlink = NULL;
++	}
+ 
+ 	if (phydev->sysfs_links) {
+ 		if (dev)
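Clearing phydev->devlink after device_link_del() is the free-and-NULL idiom: the detach path can be entered again across attach/detach cycles, and the second pass must not delete an already-released device link. A generic sketch:

	#include <stdlib.h>

	struct obj { void *link; };

	static void detach(struct obj *o)
	{
		if (o->link) {
			free(o->link);	/* stands in for device_link_del() */
			o->link = NULL;	/* repeated detach() is now a no-op */
		}
	}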
+diff --git a/drivers/net/usb/aqc111.c b/drivers/net/usb/aqc111.c
+index ff5be2cbf17b90..9201ee10a13f78 100644
+--- a/drivers/net/usb/aqc111.c
++++ b/drivers/net/usb/aqc111.c
+@@ -30,11 +30,14 @@ static int aqc111_read_cmd_nopm(struct usbnet *dev, u8 cmd, u16 value,
+ 	ret = usbnet_read_cmd_nopm(dev, cmd, USB_DIR_IN | USB_TYPE_VENDOR |
+ 				   USB_RECIP_DEVICE, value, index, data, size);
+ 
+-	if (unlikely(ret < 0))
++	if (unlikely(ret < size)) {
+ 		netdev_warn(dev->net,
+ 			    "Failed to read(0x%x) reg index 0x%04x: %d\n",
+ 			    cmd, index, ret);
+ 
++		ret = ret < 0 ? ret : -ENODATA;
++	}
++
+ 	return ret;
+ }
+ 
+@@ -46,11 +49,14 @@ static int aqc111_read_cmd(struct usbnet *dev, u8 cmd, u16 value,
+ 	ret = usbnet_read_cmd(dev, cmd, USB_DIR_IN | USB_TYPE_VENDOR |
+ 			      USB_RECIP_DEVICE, value, index, data, size);
+ 
+-	if (unlikely(ret < 0))
++	if (unlikely(ret < size)) {
+ 		netdev_warn(dev->net,
+ 			    "Failed to read(0x%x) reg index 0x%04x: %d\n",
+ 			    cmd, index, ret);
+ 
++		ret = ret < 0 ? ret : -ENODATA;
++	}
++
+ 	return ret;
+ }
+ 
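Both read helpers now treat a short USB control transfer as a failure: the read can legitimately return fewer bytes than requested, and callers that only checked for negative values would then consume a partially filled buffer. A condensed sketch of the normalization (read_reg() is a hypothetical stand-in for the usbnet call):

	#include <errno.h>

	/* Callers can now rely on "ret == size or ret < 0". */
	static int read_checked(int (*read_reg)(void *buf, int size),
				void *buf, int size)
	{
		int ret = read_reg(buf, size);

		if (ret < size)
			ret = ret < 0 ? ret : -ENODATA;	/* short read -> error */

		return ret;
	}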
+diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
+index c676979c7ab940..287b7c20c0d6c6 100644
+--- a/drivers/net/vmxnet3/vmxnet3_drv.c
++++ b/drivers/net/vmxnet3/vmxnet3_drv.c
+@@ -1568,6 +1568,30 @@ vmxnet3_get_hdr_len(struct vmxnet3_adapter *adapter, struct sk_buff *skb,
+ 	return (hlen + (hdr.tcp->doff << 2));
+ }
+ 
++static void
++vmxnet3_lro_tunnel(struct sk_buff *skb, __be16 ip_proto)
++{
++	struct udphdr *uh = NULL;
++
++	if (ip_proto == htons(ETH_P_IP)) {
++		struct iphdr *iph = (struct iphdr *)skb->data;
++
++		if (iph->protocol == IPPROTO_UDP)
++			uh = (struct udphdr *)(iph + 1);
++	} else {
++		struct ipv6hdr *iph = (struct ipv6hdr *)skb->data;
++
++		if (iph->nexthdr == IPPROTO_UDP)
++			uh = (struct udphdr *)(iph + 1);
++	}
++	if (uh) {
++		if (uh->check)
++			skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_TUNNEL_CSUM;
++		else
++			skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_TUNNEL;
++	}
++}
++
+ static int
+ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
+ 		       struct vmxnet3_adapter *adapter, int quota)
+@@ -1881,6 +1905,8 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
+ 			if (segCnt != 0 && mss != 0) {
+ 				skb_shinfo(skb)->gso_type = rcd->v4 ?
+ 					SKB_GSO_TCPV4 : SKB_GSO_TCPV6;
++				if (encap_lro)
++					vmxnet3_lro_tunnel(skb, skb->protocol);
+ 				skb_shinfo(skb)->gso_size = mss;
+ 				skb_shinfo(skb)->gso_segs = segCnt;
+ 			} else if ((segCnt != 0 || skb->len > mtu) && !encap_lro) {
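For encapsulated LRO, the new vmxnet3_lro_tunnel() helper inspects the outer UDP header of the aggregated packet and picks the GSO tunnel type from its checksum field, since resegmentation must reproduce the wire format: a zero UDP checksum means a plain UDP tunnel, a non-zero one means the tunnel checksum must be regenerated per segment. The core decision, condensed (constants illustrative, not the kernel's SKB_GSO_* values):

	enum gso_tunnel { GSO_UDP_TUNNEL = 1, GSO_UDP_TUNNEL_CSUM = 2 };

	static enum gso_tunnel tunnel_gso_type(unsigned short udp_check_field)
	{
		return udp_check_field ? GSO_UDP_TUNNEL_CSUM : GSO_UDP_TUNNEL;
	}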
+diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c
+index 3ffeeba5dccf40..4a529f1f9beab6 100644
+--- a/drivers/net/wireguard/device.c
++++ b/drivers/net/wireguard/device.c
+@@ -366,6 +366,7 @@ static int wg_newlink(struct net_device *dev,
+ 	if (ret < 0)
+ 		goto err_free_handshake_queue;
+ 
++	dev_set_threaded(dev, true);
+ 	ret = register_netdevice(dev);
+ 	if (ret < 0)
+ 		goto err_uninit_ratelimiter;
+diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
+index 866bad2db33487..65673b1aba55d2 100644
+--- a/drivers/net/wireless/ath/ath10k/snoc.c
++++ b/drivers/net/wireless/ath/ath10k/snoc.c
+@@ -937,7 +937,9 @@ static int ath10k_snoc_hif_start(struct ath10k *ar)
+ 
+ 	dev_set_threaded(ar->napi_dev, true);
+ 	ath10k_core_napi_enable(ar);
+-	ath10k_snoc_irq_enable(ar);
++	/* IRQs are left enabled when we restart due to a firmware crash */
++	if (!test_bit(ATH10K_SNOC_FLAG_RECOVERY, &ar_snoc->flags))
++		ath10k_snoc_irq_enable(ar);
+ 	ath10k_snoc_rx_post(ar);
+ 
+ 	clear_bit(ATH10K_SNOC_FLAG_RECOVERY, &ar_snoc->flags);
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index 3d39ff85ba94ad..22eb1b0377ffed 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -951,6 +951,7 @@ void ath11k_fw_stats_init(struct ath11k *ar)
+ 	INIT_LIST_HEAD(&ar->fw_stats.bcn);
+ 
+ 	init_completion(&ar->fw_stats_complete);
++	init_completion(&ar->fw_stats_done);
+ }
+ 
+ void ath11k_fw_stats_free(struct ath11k_fw_stats *stats)
+@@ -1946,6 +1947,20 @@ int ath11k_core_qmi_firmware_ready(struct ath11k_base *ab)
+ {
+ 	int ret;
+ 
++	switch (ath11k_crypto_mode) {
++	case ATH11K_CRYPT_MODE_SW:
++		set_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);
++		set_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
++		break;
++	case ATH11K_CRYPT_MODE_HW:
++		clear_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);
++		clear_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
++		break;
++	default:
++		ath11k_info(ab, "invalid crypto_mode: %d\n", ath11k_crypto_mode);
++		return -EINVAL;
++	}
++
+ 	ret = ath11k_core_start_firmware(ab, ab->fw_mode);
+ 	if (ret) {
+ 		ath11k_err(ab, "failed to start firmware: %d\n", ret);
+@@ -1964,20 +1979,6 @@ int ath11k_core_qmi_firmware_ready(struct ath11k_base *ab)
+ 		goto err_firmware_stop;
+ 	}
+ 
+-	switch (ath11k_crypto_mode) {
+-	case ATH11K_CRYPT_MODE_SW:
+-		set_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);
+-		set_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
+-		break;
+-	case ATH11K_CRYPT_MODE_HW:
+-		clear_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);
+-		clear_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
+-		break;
+-	default:
+-		ath11k_info(ab, "invalid crypto_mode: %d\n", ath11k_crypto_mode);
+-		return -EINVAL;
+-	}
+-
+ 	if (ath11k_frame_mode == ATH11K_HW_TXRX_RAW)
+ 		set_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
+ 
+@@ -2050,6 +2051,7 @@ static int ath11k_core_reconfigure_on_crash(struct ath11k_base *ab)
+ void ath11k_core_halt(struct ath11k *ar)
+ {
+ 	struct ath11k_base *ab = ar->ab;
++	struct list_head *pos, *n;
+ 
+ 	lockdep_assert_held(&ar->conf_mutex);
+ 
+@@ -2065,7 +2067,12 @@ void ath11k_core_halt(struct ath11k *ar)
+ 
+ 	rcu_assign_pointer(ab->pdevs_active[ar->pdev_idx], NULL);
+ 	synchronize_rcu();
+-	INIT_LIST_HEAD(&ar->arvifs);
++
++	spin_lock_bh(&ar->data_lock);
++	list_for_each_safe(pos, n, &ar->arvifs)
++		list_del_init(pos);
++	spin_unlock_bh(&ar->data_lock);
++
+ 	idr_init(&ar->txmgmt_idr);
+ }
+ 
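Replacing INIT_LIST_HEAD(&ar->arvifs) with a locked walk that list_del_init()s each entry fixes a subtle corruption: re-initializing only the head leaves every queued arvif still linked into the old list, so a later list_del() on one of them writes through stale pointers. A standalone sketch of the safe drain, with open-coded list links:

	struct node { struct node *next, *prev; };

	/* Re-initialize each entry's own links (the effect list_del_init()
	 * has on the entry) and finally the head. Every former member ends
	 * up in a valid self-linked state, so deleting it again later is
	 * harmless.
	 */
	static void drain(struct node *head)
	{
		struct node *pos = head->next, *n;

		while (pos != head) {
			n = pos->next;
			pos->next = pos->prev = pos;	/* entry now empty-listed */
			pos = n;
		}
		head->next = head->prev = head;		/* head now empty too */
	}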
+diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
+index 1a3d0de4afde83..529aca4f40621e 100644
+--- a/drivers/net/wireless/ath/ath11k/core.h
++++ b/drivers/net/wireless/ath/ath11k/core.h
+@@ -599,6 +599,8 @@ struct ath11k_fw_stats {
+ 	struct list_head pdevs;
+ 	struct list_head vdevs;
+ 	struct list_head bcn;
++	u32 num_vdev_recvd;
++	u32 num_bcn_recvd;
+ };
+ 
+ struct ath11k_dbg_htt_stats {
+@@ -783,7 +785,7 @@ struct ath11k {
+ 	u8 alpha2[REG_ALPHA2_LEN + 1];
+ 	struct ath11k_fw_stats fw_stats;
+ 	struct completion fw_stats_complete;
+-	bool fw_stats_done;
++	struct completion fw_stats_done;
+ 
+ 	/* protected by conf_mutex */
+ 	bool ps_state_enable;
+diff --git a/drivers/net/wireless/ath/ath11k/debugfs.c b/drivers/net/wireless/ath/ath11k/debugfs.c
+index bf192529e3fe26..5d46f8e4c231fb 100644
+--- a/drivers/net/wireless/ath/ath11k/debugfs.c
++++ b/drivers/net/wireless/ath/ath11k/debugfs.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2020 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include <linux/vmalloc.h>
+@@ -93,57 +93,14 @@ void ath11k_debugfs_add_dbring_entry(struct ath11k *ar,
+ 	spin_unlock_bh(&dbr_data->lock);
+ }
+ 
+-static void ath11k_debugfs_fw_stats_reset(struct ath11k *ar)
+-{
+-	spin_lock_bh(&ar->data_lock);
+-	ar->fw_stats_done = false;
+-	ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs);
+-	ath11k_fw_stats_vdevs_free(&ar->fw_stats.vdevs);
+-	spin_unlock_bh(&ar->data_lock);
+-}
+-
+ void ath11k_debugfs_fw_stats_process(struct ath11k *ar, struct ath11k_fw_stats *stats)
+ {
+ 	struct ath11k_base *ab = ar->ab;
+-	struct ath11k_pdev *pdev;
+-	bool is_end;
+-	static unsigned int num_vdev, num_bcn;
+-	size_t total_vdevs_started = 0;
+-	int i;
+-
+-	/* WMI_REQUEST_PDEV_STAT request has been already processed */
+-
+-	if (stats->stats_id == WMI_REQUEST_RSSI_PER_CHAIN_STAT) {
+-		ar->fw_stats_done = true;
+-		return;
+-	}
+-
+-	if (stats->stats_id == WMI_REQUEST_VDEV_STAT) {
+-		if (list_empty(&stats->vdevs)) {
+-			ath11k_warn(ab, "empty vdev stats");
+-			return;
+-		}
+-		/* FW sends all the active VDEV stats irrespective of PDEV,
+-		 * hence limit until the count of all VDEVs started
+-		 */
+-		for (i = 0; i < ab->num_radios; i++) {
+-			pdev = rcu_dereference(ab->pdevs_active[i]);
+-			if (pdev && pdev->ar)
+-				total_vdevs_started += ar->num_started_vdevs;
+-		}
+-
+-		is_end = ((++num_vdev) == total_vdevs_started);
+-
+-		list_splice_tail_init(&stats->vdevs,
+-				      &ar->fw_stats.vdevs);
+-
+-		if (is_end) {
+-			ar->fw_stats_done = true;
+-			num_vdev = 0;
+-		}
+-		return;
+-	}
++	bool is_end = true;
+ 
++	/* WMI_REQUEST_PDEV_STAT, WMI_REQUEST_RSSI_PER_CHAIN_STAT and
++	 * WMI_REQUEST_VDEV_STAT requests have been already processed.
++	 */
+ 	if (stats->stats_id == WMI_REQUEST_BCN_STAT) {
+ 		if (list_empty(&stats->bcn)) {
+ 			ath11k_warn(ab, "empty bcn stats");
+@@ -152,97 +109,18 @@ void ath11k_debugfs_fw_stats_process(struct ath11k *ar, struct ath11k_fw_stats *
+ 		/* Mark end until we reached the count of all started VDEVs
+ 		 * within the PDEV
+ 		 */
+-		is_end = ((++num_bcn) == ar->num_started_vdevs);
++		if (ar->num_started_vdevs)
++			is_end = ((++ar->fw_stats.num_bcn_recvd) ==
++				  ar->num_started_vdevs);
+ 
+ 		list_splice_tail_init(&stats->bcn,
+ 				      &ar->fw_stats.bcn);
+ 
+-		if (is_end) {
+-			ar->fw_stats_done = true;
+-			num_bcn = 0;
+-		}
++		if (is_end)
++			complete(&ar->fw_stats_done);
+ 	}
+ }
+ 
+-static int ath11k_debugfs_fw_stats_request(struct ath11k *ar,
+-					   struct stats_request_params *req_param)
+-{
+-	struct ath11k_base *ab = ar->ab;
+-	unsigned long timeout, time_left;
+-	int ret;
+-
+-	lockdep_assert_held(&ar->conf_mutex);
+-
+-	/* FW stats can get split when exceeding the stats data buffer limit.
+-	 * In that case, since there is no end marking for the back-to-back
+-	 * received 'update stats' event, we keep a 3 seconds timeout in case,
+-	 * fw_stats_done is not marked yet
+-	 */
+-	timeout = jiffies + secs_to_jiffies(3);
+-
+-	ath11k_debugfs_fw_stats_reset(ar);
+-
+-	reinit_completion(&ar->fw_stats_complete);
+-
+-	ret = ath11k_wmi_send_stats_request_cmd(ar, req_param);
+-
+-	if (ret) {
+-		ath11k_warn(ab, "could not request fw stats (%d)\n",
+-			    ret);
+-		return ret;
+-	}
+-
+-	time_left = wait_for_completion_timeout(&ar->fw_stats_complete, 1 * HZ);
+-
+-	if (!time_left)
+-		return -ETIMEDOUT;
+-
+-	for (;;) {
+-		if (time_after(jiffies, timeout))
+-			break;
+-
+-		spin_lock_bh(&ar->data_lock);
+-		if (ar->fw_stats_done) {
+-			spin_unlock_bh(&ar->data_lock);
+-			break;
+-		}
+-		spin_unlock_bh(&ar->data_lock);
+-	}
+-	return 0;
+-}
+-
+-int ath11k_debugfs_get_fw_stats(struct ath11k *ar, u32 pdev_id,
+-				u32 vdev_id, u32 stats_id)
+-{
+-	struct ath11k_base *ab = ar->ab;
+-	struct stats_request_params req_param;
+-	int ret;
+-
+-	mutex_lock(&ar->conf_mutex);
+-
+-	if (ar->state != ATH11K_STATE_ON) {
+-		ret = -ENETDOWN;
+-		goto err_unlock;
+-	}
+-
+-	req_param.pdev_id = pdev_id;
+-	req_param.vdev_id = vdev_id;
+-	req_param.stats_id = stats_id;
+-
+-	ret = ath11k_debugfs_fw_stats_request(ar, &req_param);
+-	if (ret)
+-		ath11k_warn(ab, "failed to request fw stats: %d\n", ret);
+-
+-	ath11k_dbg(ab, ATH11K_DBG_WMI,
+-		   "debug get fw stat pdev id %d vdev id %d stats id 0x%x\n",
+-		   pdev_id, vdev_id, stats_id);
+-
+-err_unlock:
+-	mutex_unlock(&ar->conf_mutex);
+-
+-	return ret;
+-}
+-
+ static int ath11k_open_pdev_stats(struct inode *inode, struct file *file)
+ {
+ 	struct ath11k *ar = inode->i_private;
+@@ -268,7 +146,7 @@ static int ath11k_open_pdev_stats(struct inode *inode, struct file *file)
+ 	req_param.vdev_id = 0;
+ 	req_param.stats_id = WMI_REQUEST_PDEV_STAT;
+ 
+-	ret = ath11k_debugfs_fw_stats_request(ar, &req_param);
++	ret = ath11k_mac_fw_stats_request(ar, &req_param);
+ 	if (ret) {
+ 		ath11k_warn(ab, "failed to request fw pdev stats: %d\n", ret);
+ 		goto err_free;
+@@ -339,7 +217,7 @@ static int ath11k_open_vdev_stats(struct inode *inode, struct file *file)
+ 	req_param.vdev_id = 0;
+ 	req_param.stats_id = WMI_REQUEST_VDEV_STAT;
+ 
+-	ret = ath11k_debugfs_fw_stats_request(ar, &req_param);
++	ret = ath11k_mac_fw_stats_request(ar, &req_param);
+ 	if (ret) {
+ 		ath11k_warn(ar->ab, "failed to request fw vdev stats: %d\n", ret);
+ 		goto err_free;
+@@ -415,7 +293,7 @@ static int ath11k_open_bcn_stats(struct inode *inode, struct file *file)
+ 			continue;
+ 
+ 		req_param.vdev_id = arvif->vdev_id;
+-		ret = ath11k_debugfs_fw_stats_request(ar, &req_param);
++		ret = ath11k_mac_fw_stats_request(ar, &req_param);
+ 		if (ret) {
+ 			ath11k_warn(ar->ab, "failed to request fw bcn stats: %d\n", ret);
+ 			goto err_free;
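The debugfs path now blocks on two completions (fw_stats_complete for the first event, fw_stats_done for the final chunk) instead of spinning on a flag under ar->data_lock. A userspace analogue of the struct completion pattern, built on a condition variable:

	#include <pthread.h>
	#include <stdbool.h>

	struct done_event {
		pthread_mutex_t lock;
		pthread_cond_t cond;
		bool done;
	};

	#define DONE_EVENT_INIT \
		{ PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, false }

	static void event_complete(struct done_event *ev)	/* ~complete() */
	{
		pthread_mutex_lock(&ev->lock);
		ev->done = true;
		pthread_cond_signal(&ev->cond);
		pthread_mutex_unlock(&ev->lock);
	}

	static void event_wait(struct done_event *ev)	/* ~wait_for_completion() */
	{
		pthread_mutex_lock(&ev->lock);
		while (!ev->done)
			pthread_cond_wait(&ev->cond, &ev->lock);
		pthread_mutex_unlock(&ev->lock);
	}

The kernel version additionally takes a timeout (wait_for_completion_timeout), which the driver uses with 1 s for the first event and 3 s for the trailing chunks.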
+diff --git a/drivers/net/wireless/ath/ath11k/debugfs.h b/drivers/net/wireless/ath/ath11k/debugfs.h
+index a39e458637b013..ed7fec177588f6 100644
+--- a/drivers/net/wireless/ath/ath11k/debugfs.h
++++ b/drivers/net/wireless/ath/ath11k/debugfs.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2022, 2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef _ATH11K_DEBUGFS_H_
+@@ -273,8 +273,6 @@ void ath11k_debugfs_unregister(struct ath11k *ar);
+ void ath11k_debugfs_fw_stats_process(struct ath11k *ar, struct ath11k_fw_stats *stats);
+ 
+ void ath11k_debugfs_fw_stats_init(struct ath11k *ar);
+-int ath11k_debugfs_get_fw_stats(struct ath11k *ar, u32 pdev_id,
+-				u32 vdev_id, u32 stats_id);
+ 
+ static inline bool ath11k_debugfs_is_pktlog_lite_mode_enabled(struct ath11k *ar)
+ {
+@@ -381,12 +379,6 @@ static inline int ath11k_debugfs_rx_filter(struct ath11k *ar)
+ 	return 0;
+ }
+ 
+-static inline int ath11k_debugfs_get_fw_stats(struct ath11k *ar,
+-					      u32 pdev_id, u32 vdev_id, u32 stats_id)
+-{
+-	return 0;
+-}
+-
+ static inline void
+ ath11k_debugfs_add_dbring_entry(struct ath11k *ar,
+ 				enum wmi_direct_buffer_module id,
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 97816916abac96..4763b271309aa2 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -8991,6 +8991,86 @@ static void ath11k_mac_put_chain_rssi(struct station_info *sinfo,
+ 	}
+ }
+ 
++static void ath11k_mac_fw_stats_reset(struct ath11k *ar)
++{
++	spin_lock_bh(&ar->data_lock);
++	ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs);
++	ath11k_fw_stats_vdevs_free(&ar->fw_stats.vdevs);
++	ar->fw_stats.num_vdev_recvd = 0;
++	ar->fw_stats.num_bcn_recvd = 0;
++	spin_unlock_bh(&ar->data_lock);
++}
++
++int ath11k_mac_fw_stats_request(struct ath11k *ar,
++				struct stats_request_params *req_param)
++{
++	struct ath11k_base *ab = ar->ab;
++	unsigned long time_left;
++	int ret;
++
++	lockdep_assert_held(&ar->conf_mutex);
++
++	ath11k_mac_fw_stats_reset(ar);
++
++	reinit_completion(&ar->fw_stats_complete);
++	reinit_completion(&ar->fw_stats_done);
++
++	ret = ath11k_wmi_send_stats_request_cmd(ar, req_param);
++
++	if (ret) {
++		ath11k_warn(ab, "could not request fw stats (%d)\n",
++			    ret);
++		return ret;
++	}
++
++	time_left = wait_for_completion_timeout(&ar->fw_stats_complete, 1 * HZ);
++	if (!time_left)
++		return -ETIMEDOUT;
++
++	/* FW stats can get split when exceeding the stats data buffer limit.
++	 * In that case, since there is no end marking for the back-to-back
++	 * received 'update stats' event, we keep a 3 seconds timeout in case,
++	 * fw_stats_done is not marked yet
++	 */
++	time_left = wait_for_completion_timeout(&ar->fw_stats_done, 3 * HZ);
++	if (!time_left)
++		return -ETIMEDOUT;
++
++	return 0;
++}
++
++static int ath11k_mac_get_fw_stats(struct ath11k *ar, u32 pdev_id,
++				   u32 vdev_id, u32 stats_id)
++{
++	struct ath11k_base *ab = ar->ab;
++	struct stats_request_params req_param;
++	int ret;
++
++	mutex_lock(&ar->conf_mutex);
++
++	if (ar->state != ATH11K_STATE_ON) {
++		ret = -ENETDOWN;
++		goto err_unlock;
++	}
++
++	req_param.pdev_id = pdev_id;
++	req_param.vdev_id = vdev_id;
++	req_param.stats_id = stats_id;
++
++	ret = ath11k_mac_fw_stats_request(ar, &req_param);
++	if (ret)
++		ath11k_warn(ab, "failed to request fw stats: %d\n", ret);
++
++	ath11k_dbg(ab, ATH11K_DBG_WMI,
++		   "debug get fw stat pdev id %d vdev id %d stats id 0x%x\n",
++		   pdev_id, vdev_id, stats_id);
++
++err_unlock:
++	mutex_unlock(&ar->conf_mutex);
++
++	return ret;
++}
++
+ static void ath11k_mac_op_sta_statistics(struct ieee80211_hw *hw,
+ 					 struct ieee80211_vif *vif,
+ 					 struct ieee80211_sta *sta,
+@@ -9028,8 +9108,8 @@ static void ath11k_mac_op_sta_statistics(struct ieee80211_hw *hw,
+ 	if (!(sinfo->filled & BIT_ULL(NL80211_STA_INFO_CHAIN_SIGNAL)) &&
+ 	    arsta->arvif->vdev_type == WMI_VDEV_TYPE_STA &&
+ 	    ar->ab->hw_params.supports_rssi_stats &&
+-	    !ath11k_debugfs_get_fw_stats(ar, ar->pdev->pdev_id, 0,
+-					 WMI_REQUEST_RSSI_PER_CHAIN_STAT)) {
++	    !ath11k_mac_get_fw_stats(ar, ar->pdev->pdev_id, 0,
++				     WMI_REQUEST_RSSI_PER_CHAIN_STAT)) {
+ 		ath11k_mac_put_chain_rssi(sinfo, arsta, "fw stats", true);
+ 	}
+ 
+@@ -9037,8 +9117,8 @@ static void ath11k_mac_op_sta_statistics(struct ieee80211_hw *hw,
+ 	if (!signal &&
+ 	    arsta->arvif->vdev_type == WMI_VDEV_TYPE_STA &&
+ 	    ar->ab->hw_params.supports_rssi_stats &&
+-	    !(ath11k_debugfs_get_fw_stats(ar, ar->pdev->pdev_id, 0,
+-					WMI_REQUEST_VDEV_STAT)))
++	    !(ath11k_mac_get_fw_stats(ar, ar->pdev->pdev_id, 0,
++				      WMI_REQUEST_VDEV_STAT)))
+ 		signal = arsta->rssi_beacon;
+ 
+ 	ath11k_dbg(ar->ab, ATH11K_DBG_MAC,
+@@ -9384,11 +9464,13 @@ static int ath11k_fw_stats_request(struct ath11k *ar,
+ 	lockdep_assert_held(&ar->conf_mutex);
+ 
+ 	spin_lock_bh(&ar->data_lock);
+-	ar->fw_stats_done = false;
+ 	ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs);
++	ar->fw_stats.num_vdev_recvd = 0;
++	ar->fw_stats.num_bcn_recvd = 0;
+ 	spin_unlock_bh(&ar->data_lock);
+ 
+ 	reinit_completion(&ar->fw_stats_complete);
++	reinit_completion(&ar->fw_stats_done);
+ 
+ 	ret = ath11k_wmi_send_stats_request_cmd(ar, req_param);
+ 	if (ret) {
+diff --git a/drivers/net/wireless/ath/ath11k/mac.h b/drivers/net/wireless/ath/ath11k/mac.h
+index f5800fbecff89e..5e61eea1bb0378 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.h
++++ b/drivers/net/wireless/ath/ath11k/mac.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023, 2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH11K_MAC_H
+@@ -179,4 +179,6 @@ int ath11k_mac_vif_set_keepalive(struct ath11k_vif *arvif,
+ void ath11k_mac_fill_reg_tpc_info(struct ath11k *ar,
+ 				  struct ieee80211_vif *vif,
+ 				  struct ieee80211_chanctx_conf *ctx);
++int ath11k_mac_fw_stats_request(struct ath11k *ar,
++				struct stats_request_params *req_param);
+ #endif
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index d7f852bebf4aa2..98811726d33bf1 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -8158,6 +8158,11 @@ static void ath11k_peer_assoc_conf_event(struct ath11k_base *ab, struct sk_buff
+ static void ath11k_update_stats_event(struct ath11k_base *ab, struct sk_buff *skb)
+ {
+ 	struct ath11k_fw_stats stats = {};
++	size_t total_vdevs_started = 0;
++	struct ath11k_pdev *pdev;
++	bool is_end = true;
++	int i;
++
+ 	struct ath11k *ar;
+ 	int ret;
+ 
+@@ -8184,18 +8189,50 @@ static void ath11k_update_stats_event(struct ath11k_base *ab, struct sk_buff *sk
+ 
+ 	spin_lock_bh(&ar->data_lock);
+ 
+-	/* WMI_REQUEST_PDEV_STAT can be requested via .get_txpower mac ops or via
++	/* WMI_REQUEST_PDEV_STAT, WMI_REQUEST_VDEV_STAT and
++	 * WMI_REQUEST_RSSI_PER_CHAIN_STAT can be requested via mac ops or via
+ 	 * debugfs fw stats. Therefore, processing it separately.
+ 	 */
+ 	if (stats.stats_id == WMI_REQUEST_PDEV_STAT) {
+ 		list_splice_tail_init(&stats.pdevs, &ar->fw_stats.pdevs);
+-		ar->fw_stats_done = true;
++		complete(&ar->fw_stats_done);
++		goto complete;
++	}
++
++	if (stats.stats_id == WMI_REQUEST_RSSI_PER_CHAIN_STAT) {
++		complete(&ar->fw_stats_done);
++		goto complete;
++	}
++
++	if (stats.stats_id == WMI_REQUEST_VDEV_STAT) {
++		if (list_empty(&stats.vdevs)) {
++			ath11k_warn(ab, "empty vdev stats");
++			goto complete;
++		}
++		/* FW sends all the active VDEV stats irrespective of PDEV,
++		 * hence limit until the count of all VDEVs started
++		 */
++		for (i = 0; i < ab->num_radios; i++) {
++			pdev = rcu_dereference(ab->pdevs_active[i]);
++			if (pdev && pdev->ar)
++				total_vdevs_started += ar->num_started_vdevs;
++		}
++
++		if (total_vdevs_started)
++			is_end = ((++ar->fw_stats.num_vdev_recvd) ==
++				  total_vdevs_started);
++
++		list_splice_tail_init(&stats.vdevs,
++				      &ar->fw_stats.vdevs);
++
++		if (is_end)
++			complete(&ar->fw_stats_done);
++
+ 		goto complete;
+ 	}
+ 
+-	/* WMI_REQUEST_VDEV_STAT, WMI_REQUEST_BCN_STAT and WMI_REQUEST_RSSI_PER_CHAIN_STAT
+-	 * are currently requested only via debugfs fw stats. Hence, processing these
+-	 * in debugfs context
++	/* WMI_REQUEST_BCN_STAT is currently requested only via debugfs fw stats.
++	 * Hence, processing it in debugfs context
+ 	 */
+ 	ath11k_debugfs_fw_stats_process(ar, &stats);
+ 
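Moving num_vdev/num_bcn from function-static locals into per-device state is the key correctness change here: a static local is one counter shared by every radio and every request, so interleaved stats events could never reach their end condition consistently. A condensed sketch of the per-device form:

	struct radio { unsigned int num_vdev_recvd; };

	/* Returns nonzero once all expected vdev chunks for this radio have
	 * arrived. The counter lives in the per-radio object, so two radios
	 * receiving interleaved events cannot corrupt each other's progress.
	 */
	static int vdev_stats_complete(struct radio *r, unsigned int total_started)
	{
		return total_started && ++r->num_vdev_recvd == total_started;
	}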
+diff --git a/drivers/net/wireless/ath/ath12k/core.c b/drivers/net/wireless/ath/ath12k/core.c
+index 0b2dec081c6ee8..261f52b327e89c 100644
+--- a/drivers/net/wireless/ath/ath12k/core.c
++++ b/drivers/net/wireless/ath/ath12k/core.c
+@@ -891,6 +891,9 @@ static void ath12k_core_hw_group_stop(struct ath12k_hw_group *ag)
+ 		ab = ag->ab[i];
+ 		if (!ab)
+ 			continue;
++
++		clear_bit(ATH12K_FLAG_REGISTERED, &ab->dev_flags);
++
+ 		ath12k_core_device_cleanup(ab);
+ 	}
+ 
+@@ -1026,6 +1029,8 @@ static int ath12k_core_hw_group_start(struct ath12k_hw_group *ag)
+ 
+ 		mutex_lock(&ab->core_lock);
+ 
++		set_bit(ATH12K_FLAG_REGISTERED, &ab->dev_flags);
++
+ 		ret = ath12k_core_pdev_create(ab);
+ 		if (ret) {
+ 			ath12k_err(ab, "failed to create pdev core %d\n", ret);
+@@ -1246,6 +1251,7 @@ static void ath12k_rfkill_work(struct work_struct *work)
+ 
+ void ath12k_core_halt(struct ath12k *ar)
+ {
++	struct list_head *pos, *n;
+ 	struct ath12k_base *ab = ar->ab;
+ 
+ 	lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy);
+@@ -1261,7 +1267,12 @@ void ath12k_core_halt(struct ath12k *ar)
+ 
+ 	rcu_assign_pointer(ab->pdevs_active[ar->pdev_idx], NULL);
+ 	synchronize_rcu();
+-	INIT_LIST_HEAD(&ar->arvifs);
++
++	spin_lock_bh(&ar->data_lock);
++	list_for_each_safe(pos, n, &ar->arvifs)
++		list_del_init(pos);
++	spin_unlock_bh(&ar->data_lock);
++
+ 	idr_init(&ar->txmgmt_idr);
+ }
+ 
+@@ -1774,7 +1785,7 @@ static void ath12k_core_hw_group_destroy(struct ath12k_hw_group *ag)
+ 	}
+ }
+ 
+-static void ath12k_core_hw_group_cleanup(struct ath12k_hw_group *ag)
++void ath12k_core_hw_group_cleanup(struct ath12k_hw_group *ag)
+ {
+ 	struct ath12k_base *ab;
+ 	int i;
+@@ -1891,7 +1902,8 @@ int ath12k_core_init(struct ath12k_base *ab)
+ 	if (!ag) {
+ 		mutex_unlock(&ath12k_hw_group_mutex);
+ 		ath12k_warn(ab, "unable to get hw group\n");
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto err_unregister_notifier;
+ 	}
+ 
+ 	mutex_unlock(&ath12k_hw_group_mutex);
+@@ -1906,7 +1918,7 @@ int ath12k_core_init(struct ath12k_base *ab)
+ 		if (ret) {
+ 			mutex_unlock(&ag->mutex);
+ 			ath12k_warn(ab, "unable to create hw group\n");
+-			goto err;
++			goto err_destroy_hw_group;
+ 		}
+ 	}
+ 
+@@ -1914,18 +1926,20 @@ int ath12k_core_init(struct ath12k_base *ab)
+ 
+ 	return 0;
+ 
+-err:
++err_destroy_hw_group:
+ 	ath12k_core_hw_group_destroy(ab->ag);
+ 	ath12k_core_hw_group_unassign(ab);
++err_unregister_notifier:
++	ath12k_core_panic_notifier_unregister(ab);
++
+ 	return ret;
+ }
+ 
+ void ath12k_core_deinit(struct ath12k_base *ab)
+ {
+-	ath12k_core_panic_notifier_unregister(ab);
+-	ath12k_core_hw_group_cleanup(ab->ag);
+ 	ath12k_core_hw_group_destroy(ab->ag);
+ 	ath12k_core_hw_group_unassign(ab);
++	ath12k_core_panic_notifier_unregister(ab);
+ }
+ 
+ void ath12k_core_free(struct ath12k_base *ab)
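The relabelled error paths restore init/teardown symmetry: each failure point now unwinds exactly the steps that succeeded before it, in reverse order, and ath12k_core_deinit() mirrors the same order (the panic notifier, registered first, is unregistered last). A generic sketch of the goto ladder, with stubbed-out steps standing in for the real calls:

	static int register_notifier(void) { return 0; }	/* stub steps */
	static void unregister_notifier(void) { }
	static int assign_hw_group(void) { return 0; }
	static void unassign_hw_group(void) { }
	static int create_hw_group(void) { return 0; }

	static int init_all(void)
	{
		int ret;

		ret = register_notifier();
		if (ret)
			return ret;

		ret = assign_hw_group();
		if (ret)
			goto err_unregister_notifier;

		ret = create_hw_group();
		if (ret)
			goto err_unassign_hw_group;

		return 0;

	err_unassign_hw_group:
		unassign_hw_group();
	err_unregister_notifier:
		unregister_notifier();
		return ret;
	}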
+diff --git a/drivers/net/wireless/ath/ath12k/core.h b/drivers/net/wireless/ath/ath12k/core.h
+index 3fac4f00d3832c..f5f1ec796f7c55 100644
+--- a/drivers/net/wireless/ath/ath12k/core.h
++++ b/drivers/net/wireless/ath/ath12k/core.h
+@@ -533,11 +533,21 @@ struct ath12k_sta {
+ 	enum ieee80211_sta_state state;
+ };
+ 
+-#define ATH12K_MIN_5G_FREQ 4150
+-#define ATH12K_MIN_6G_FREQ 5925
+-#define ATH12K_MAX_6G_FREQ 7115
++#define ATH12K_HALF_20MHZ_BW	10
++#define ATH12K_2GHZ_MIN_CENTER	2412
++#define ATH12K_2GHZ_MAX_CENTER	2484
++#define ATH12K_5GHZ_MIN_CENTER	4900
++#define ATH12K_5GHZ_MAX_CENTER	5920
++#define ATH12K_6GHZ_MIN_CENTER	5935
++#define ATH12K_6GHZ_MAX_CENTER	7115
++#define ATH12K_MIN_2GHZ_FREQ	(ATH12K_2GHZ_MIN_CENTER - ATH12K_HALF_20MHZ_BW - 1)
++#define ATH12K_MAX_2GHZ_FREQ	(ATH12K_2GHZ_MAX_CENTER + ATH12K_HALF_20MHZ_BW + 1)
++#define ATH12K_MIN_5GHZ_FREQ	(ATH12K_5GHZ_MIN_CENTER - ATH12K_HALF_20MHZ_BW)
++#define ATH12K_MAX_5GHZ_FREQ	(ATH12K_5GHZ_MAX_CENTER + ATH12K_HALF_20MHZ_BW)
++#define ATH12K_MIN_6GHZ_FREQ	(ATH12K_6GHZ_MIN_CENTER - ATH12K_HALF_20MHZ_BW)
++#define ATH12K_MAX_6GHZ_FREQ	(ATH12K_6GHZ_MAX_CENTER + ATH12K_HALF_20MHZ_BW)
+ #define ATH12K_NUM_CHANS 101
+-#define ATH12K_MAX_5G_CHAN 173
++#define ATH12K_MAX_5GHZ_CHAN 173
+ 
+ enum ath12k_hw_state {
+ 	ATH12K_HW_STATE_OFF,
+@@ -1185,6 +1195,7 @@ struct ath12k_fw_stats_pdev {
+ };
+ 
+ int ath12k_core_qmi_firmware_ready(struct ath12k_base *ab);
++void ath12k_core_hw_group_cleanup(struct ath12k_hw_group *ag);
+ int ath12k_core_pre_init(struct ath12k_base *ab);
+ int ath12k_core_init(struct ath12k_base *ath12k);
+ void ath12k_core_deinit(struct ath12k_base *ath12k);
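The derived band edges work out as follows: 2 GHz spans 2401-2495 MHz (outermost centers 2412 and 2484, half of a 20 MHz channel on each side plus a 1 MHz guard), 5 GHz spans 4890-5930 MHz, and 6 GHz spans 5925-7125 MHz. Deriving the limits from channel centers and ATH12K_HALF_20MHZ_BW replaces the old magic constants (e.g. ATH12K_MIN_5G_FREQ at 4150) with values that state their own provenance.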
+diff --git a/drivers/net/wireless/ath/ath12k/debugfs.c b/drivers/net/wireless/ath/ath12k/debugfs.c
+index 57002215ddf168..5efe30cf77470a 100644
+--- a/drivers/net/wireless/ath/ath12k/debugfs.c
++++ b/drivers/net/wireless/ath/ath12k/debugfs.c
+@@ -88,8 +88,8 @@ static int ath12k_get_tpc_ctl_mode_idx(struct wmi_tpc_stats_arg *tpc_stats,
+ 	u32 chan_freq = le32_to_cpu(tpc_stats->tpc_config.chan_freq);
+ 	u8 band;
+ 
+-	band = ((chan_freq > ATH12K_MIN_6G_FREQ) ? NL80211_BAND_6GHZ :
+-		((chan_freq > ATH12K_MIN_5G_FREQ) ? NL80211_BAND_5GHZ :
++	band = ((chan_freq > ATH12K_MIN_6GHZ_FREQ) ? NL80211_BAND_6GHZ :
++		((chan_freq > ATH12K_MIN_5GHZ_FREQ) ? NL80211_BAND_5GHZ :
+ 		NL80211_BAND_2GHZ));
+ 
+ 	if (band == NL80211_BAND_5GHZ || band == NL80211_BAND_6GHZ) {
+diff --git a/drivers/net/wireless/ath/ath12k/debugfs_htt_stats.c b/drivers/net/wireless/ath/ath12k/debugfs_htt_stats.c
+index 1c0d5fa39a8dcb..aeaf970339d4dc 100644
+--- a/drivers/net/wireless/ath/ath12k/debugfs_htt_stats.c
++++ b/drivers/net/wireless/ath/ath12k/debugfs_htt_stats.c
+@@ -5377,6 +5377,9 @@ static ssize_t ath12k_write_htt_stats_type(struct file *file,
+ 	const int size = 32;
+ 	int num_args;
+ 
++	if (count > size)
++		return -EINVAL;
++
+ 	char *buf __free(kfree) = kzalloc(size, GFP_KERNEL);
+ 	if (!buf)
+ 		return -ENOMEM;
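The added `count > size` check rejects oversized writes before the fixed 32-byte parse buffer is populated, rather than relying on later string handling to truncate. The shape of the guard, as a plain C sketch (copy_from_user() replaced by memcpy for illustration):

	#include <errno.h>
	#include <string.h>

	static int parse_write(char *dst, size_t dst_size,
			       const char *user_buf, size_t count)
	{
		if (count > dst_size)
			return -EINVAL;		/* refuse oversized input */

		memcpy(dst, user_buf, count);	/* stands in for copy_from_user() */
		return 0;
	}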
+diff --git a/drivers/net/wireless/ath/ath12k/dp.h b/drivers/net/wireless/ath/ath12k/dp.h
+index 75435a931548c9..427a87b63dec3b 100644
+--- a/drivers/net/wireless/ath/ath12k/dp.h
++++ b/drivers/net/wireless/ath/ath12k/dp.h
+@@ -106,6 +106,8 @@ struct dp_mon_mpdu {
+ 	struct list_head list;
+ 	struct sk_buff *head;
+ 	struct sk_buff *tail;
++	u32 err_bitmap;
++	u8 decap_format;
+ };
+ 
+ #define DP_MON_MAX_STATUS_BUF 32
+diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c
+index d22800e894850d..600d97169f241a 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_mon.c
++++ b/drivers/net/wireless/ath/ath12k/dp_mon.c
+@@ -1647,7 +1647,7 @@ ath12k_dp_mon_rx_parse_status_tlv(struct ath12k *ar,
+ 				u32_get_bits(info[0], HAL_RX_MPDU_START_INFO0_PPDU_ID);
+ 		}
+ 
+-		break;
++		return HAL_RX_MON_STATUS_MPDU_START;
+ 	}
+ 	case HAL_RX_MSDU_START:
+ 		/* TODO: add msdu start parsing logic */
+@@ -1700,33 +1700,159 @@ static void ath12k_dp_mon_rx_msdus_set_payload(struct ath12k *ar,
+ 	skb_pull(head_msdu, rx_pkt_offset + l2_hdr_offset);
+ }
+ 
++static void
++ath12k_dp_mon_fill_rx_stats_info(struct ath12k *ar,
++				 struct hal_rx_mon_ppdu_info *ppdu_info,
++				 struct ieee80211_rx_status *rx_status)
++{
++	u32 center_freq = ppdu_info->freq;
++
++	rx_status->freq = center_freq;
++	rx_status->bw = ath12k_mac_bw_to_mac80211_bw(ppdu_info->bw);
++	rx_status->nss = ppdu_info->nss;
++	rx_status->rate_idx = 0;
++	rx_status->encoding = RX_ENC_LEGACY;
++	rx_status->flag |= RX_FLAG_NO_SIGNAL_VAL;
++
++	if (center_freq >= ATH12K_MIN_6GHZ_FREQ &&
++	    center_freq <= ATH12K_MAX_6GHZ_FREQ) {
++		rx_status->band = NL80211_BAND_6GHZ;
++	} else if (center_freq >= ATH12K_MIN_2GHZ_FREQ &&
++		   center_freq <= ATH12K_MAX_2GHZ_FREQ) {
++		rx_status->band = NL80211_BAND_2GHZ;
++	} else if (center_freq >= ATH12K_MIN_5GHZ_FREQ &&
++		   center_freq <= ATH12K_MAX_5GHZ_FREQ) {
++		rx_status->band = NL80211_BAND_5GHZ;
++	} else {
++		rx_status->band = NUM_NL80211_BANDS;
++	}
++}
++
++static void
++ath12k_dp_mon_fill_rx_rate(struct ath12k *ar,
++			   struct hal_rx_mon_ppdu_info *ppdu_info,
++			   struct ieee80211_rx_status *rx_status)
++{
++	struct ieee80211_supported_band *sband;
++	enum rx_msdu_start_pkt_type pkt_type;
++	u8 rate_mcs, nss, sgi;
++	bool is_cck;
++
++	pkt_type = ppdu_info->preamble_type;
++	rate_mcs = ppdu_info->rate;
++	nss = ppdu_info->nss;
++	sgi = ppdu_info->gi;
++
++	switch (pkt_type) {
++	case RX_MSDU_START_PKT_TYPE_11A:
++	case RX_MSDU_START_PKT_TYPE_11B:
++		is_cck = (pkt_type == RX_MSDU_START_PKT_TYPE_11B);
++		if (rx_status->band < NUM_NL80211_BANDS) {
++			sband = &ar->mac.sbands[rx_status->band];
++			rx_status->rate_idx = ath12k_mac_hw_rate_to_idx(sband, rate_mcs,
++									is_cck);
++		}
++		break;
++	case RX_MSDU_START_PKT_TYPE_11N:
++		rx_status->encoding = RX_ENC_HT;
++		if (rate_mcs > ATH12K_HT_MCS_MAX) {
++			ath12k_warn(ar->ab,
++				    "Received with invalid mcs in HT mode %d\n",
++				     rate_mcs);
++			break;
++		}
++		rx_status->rate_idx = rate_mcs + (8 * (nss - 1));
++		if (sgi)
++			rx_status->enc_flags |= RX_ENC_FLAG_SHORT_GI;
++		break;
++	case RX_MSDU_START_PKT_TYPE_11AC:
++		rx_status->encoding = RX_ENC_VHT;
++		rx_status->rate_idx = rate_mcs;
++		if (rate_mcs > ATH12K_VHT_MCS_MAX) {
++			ath12k_warn(ar->ab,
++				    "Received with invalid mcs in VHT mode %d\n",
++				     rate_mcs);
++			break;
++		}
++		if (sgi)
++			rx_status->enc_flags |= RX_ENC_FLAG_SHORT_GI;
++		break;
++	case RX_MSDU_START_PKT_TYPE_11AX:
++		rx_status->rate_idx = rate_mcs;
++		if (rate_mcs > ATH12K_HE_MCS_MAX) {
++			ath12k_warn(ar->ab,
++				    "Received with invalid mcs in HE mode %d\n",
++				    rate_mcs);
++			break;
++		}
++		rx_status->encoding = RX_ENC_HE;
++		rx_status->he_gi = ath12k_he_gi_to_nl80211_he_gi(sgi);
++		break;
++	case RX_MSDU_START_PKT_TYPE_11BE:
++		rx_status->rate_idx = rate_mcs;
++		if (rate_mcs > ATH12K_EHT_MCS_MAX) {
++			ath12k_warn(ar->ab,
++				    "Received with invalid mcs in EHT mode %d\n",
++				    rate_mcs);
++			break;
++		}
++		rx_status->encoding = RX_ENC_EHT;
++		rx_status->he_gi = ath12k_he_gi_to_nl80211_he_gi(sgi);
++		break;
++	default:
++		ath12k_dbg(ar->ab, ATH12K_DBG_DATA,
++			   "monitor receives invalid preamble type %d",
++			    pkt_type);
++		break;
++	}
++}
++
+ static struct sk_buff *
+ ath12k_dp_mon_rx_merg_msdus(struct ath12k *ar,
+-			    struct sk_buff *head_msdu, struct sk_buff *tail_msdu,
+-			    struct ieee80211_rx_status *rxs, bool *fcs_err)
++			    struct dp_mon_mpdu *mon_mpdu,
++			    struct hal_rx_mon_ppdu_info *ppdu_info,
++			    struct ieee80211_rx_status *rxs)
+ {
+ 	struct ath12k_base *ab = ar->ab;
+ 	struct sk_buff *msdu, *mpdu_buf, *prev_buf, *head_frag_list;
+-	struct hal_rx_desc *rx_desc, *tail_rx_desc;
+-	u8 *hdr_desc, *dest, decap_format;
++	struct sk_buff *head_msdu, *tail_msdu;
++	struct hal_rx_desc *rx_desc;
++	u8 *hdr_desc, *dest, decap_format = mon_mpdu->decap_format;
+ 	struct ieee80211_hdr_3addr *wh;
+-	u32 err_bitmap, frag_list_sum_len = 0;
++	struct ieee80211_channel *channel;
++	u32 frag_list_sum_len = 0;
++	u8 channel_num = ppdu_info->chan_num;
+ 
+ 	mpdu_buf = NULL;
++	head_msdu = mon_mpdu->head;
++	tail_msdu = mon_mpdu->tail;
+ 
+ 	if (!head_msdu)
+ 		goto err_merge_fail;
+ 
+-	rx_desc = (struct hal_rx_desc *)head_msdu->data;
+-	tail_rx_desc = (struct hal_rx_desc *)tail_msdu->data;
++	ath12k_dp_mon_fill_rx_stats_info(ar, ppdu_info, rxs);
+ 
+-	err_bitmap = ath12k_dp_rx_h_mpdu_err(ab, tail_rx_desc);
+-	if (err_bitmap & HAL_RX_MPDU_ERR_FCS)
+-		*fcs_err = true;
++	if (unlikely(rxs->band == NUM_NL80211_BANDS ||
++		     !ath12k_ar_to_hw(ar)->wiphy->bands[rxs->band])) {
++		ath12k_dbg(ar->ab, ATH12K_DBG_DATA,
++			   "sband is NULL for status band %d channel_num %d center_freq %d pdev_id %d\n",
++			   rxs->band, channel_num, ppdu_info->freq, ar->pdev_idx);
+ 
+-	decap_format = ath12k_dp_rx_h_decap_type(ab, tail_rx_desc);
++		spin_lock_bh(&ar->data_lock);
++		channel = ar->rx_channel;
++		if (channel) {
++			rxs->band = channel->band;
++			channel_num =
++				ieee80211_frequency_to_channel(channel->center_freq);
++		}
++		spin_unlock_bh(&ar->data_lock);
++	}
++
++	if (rxs->band < NUM_NL80211_BANDS)
++		rxs->freq = ieee80211_channel_to_frequency(channel_num,
++							   rxs->band);
+ 
+-	ath12k_dp_rx_h_ppdu(ar, tail_rx_desc, rxs);
++	ath12k_dp_mon_fill_rx_rate(ar, ppdu_info, rxs);
+ 
+ 	if (decap_format == DP_RX_DECAP_TYPE_RAW) {
+ 		ath12k_dp_mon_rx_msdus_set_payload(ar, head_msdu, tail_msdu);
+@@ -1954,7 +2080,8 @@ static void ath12k_dp_mon_update_radiotap(struct ath12k *ar,
+ 
+ static void ath12k_dp_mon_rx_deliver_msdu(struct ath12k *ar, struct napi_struct *napi,
+ 					  struct sk_buff *msdu,
+-					  struct ieee80211_rx_status *status)
++					  struct ieee80211_rx_status *status,
++					  u8 decap)
+ {
+ 	static const struct ieee80211_radiotap_he known = {
+ 		.data1 = cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA1_DATA_MCS_KNOWN |
+@@ -1966,7 +2093,7 @@ static void ath12k_dp_mon_rx_deliver_msdu(struct ath12k *ar, struct napi_struct
+ 	struct ieee80211_sta *pubsta = NULL;
+ 	struct ath12k_peer *peer;
+ 	struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+-	u8 decap = DP_RX_DECAP_TYPE_RAW;
++	struct ath12k_dp_rx_info rx_info;
+ 	bool is_mcbc = rxcb->is_mcbc;
+ 	bool is_eapol_tkip = rxcb->is_eapol;
+ 
+@@ -1977,10 +2104,9 @@ static void ath12k_dp_mon_rx_deliver_msdu(struct ath12k *ar, struct napi_struct
+ 		status->flag |= RX_FLAG_RADIOTAP_HE;
+ 	}
+ 
+-	if (!(status->flag & RX_FLAG_ONLY_MONITOR))
+-		decap = ath12k_dp_rx_h_decap_type(ar->ab, rxcb->rx_desc);
+ 	spin_lock_bh(&ar->ab->base_lock);
+-	peer = ath12k_dp_rx_h_find_peer(ar->ab, msdu);
++	rx_info.addr2_present = false;
++	peer = ath12k_dp_rx_h_find_peer(ar->ab, msdu, &rx_info);
+ 	if (peer && peer->sta) {
+ 		pubsta = peer->sta;
+ 		if (pubsta->valid_links) {
+@@ -2035,25 +2161,23 @@ static void ath12k_dp_mon_rx_deliver_msdu(struct ath12k *ar, struct napi_struct
+ }
+ 
+ static int ath12k_dp_mon_rx_deliver(struct ath12k *ar,
+-				    struct sk_buff *head_msdu, struct sk_buff *tail_msdu,
++				    struct dp_mon_mpdu *mon_mpdu,
+ 				    struct hal_rx_mon_ppdu_info *ppduinfo,
+ 				    struct napi_struct *napi)
+ {
+ 	struct ath12k_pdev_dp *dp = &ar->dp;
+ 	struct sk_buff *mon_skb, *skb_next, *header;
+ 	struct ieee80211_rx_status *rxs = &dp->rx_status;
+-	bool fcs_err = false;
++	u8 decap = DP_RX_DECAP_TYPE_RAW;
+ 
+-	mon_skb = ath12k_dp_mon_rx_merg_msdus(ar,
+-					      head_msdu, tail_msdu,
+-					      rxs, &fcs_err);
++	mon_skb = ath12k_dp_mon_rx_merg_msdus(ar, mon_mpdu, ppduinfo, rxs);
+ 	if (!mon_skb)
+ 		goto mon_deliver_fail;
+ 
+ 	header = mon_skb;
+ 	rxs->flag = 0;
+ 
+-	if (fcs_err)
++	if (mon_mpdu->err_bitmap & HAL_RX_MPDU_ERR_FCS)
+ 		rxs->flag = RX_FLAG_FAILED_FCS_CRC;
+ 
+ 	do {
+@@ -2070,8 +2194,12 @@ static int ath12k_dp_mon_rx_deliver(struct ath12k *ar,
+ 			rxs->flag |= RX_FLAG_ALLOW_SAME_PN;
+ 		}
+ 		rxs->flag |= RX_FLAG_ONLY_MONITOR;
++
++		if (!(rxs->flag & RX_FLAG_ONLY_MONITOR))
++			decap = mon_mpdu->decap_format;
++
+ 		ath12k_dp_mon_update_radiotap(ar, ppduinfo, mon_skb, rxs);
+-		ath12k_dp_mon_rx_deliver_msdu(ar, napi, mon_skb, rxs);
++		ath12k_dp_mon_rx_deliver_msdu(ar, napi, mon_skb, rxs, decap);
+ 		mon_skb = skb_next;
+ 	} while (mon_skb);
+ 	rxs->flag = 0;
+@@ -2079,7 +2207,7 @@ static int ath12k_dp_mon_rx_deliver(struct ath12k *ar,
+ 	return 0;
+ 
+ mon_deliver_fail:
+-	mon_skb = head_msdu;
++	mon_skb = mon_mpdu->head;
+ 	while (mon_skb) {
+ 		skb_next = mon_skb->next;
+ 		dev_kfree_skb_any(mon_skb);
+@@ -2088,6 +2216,144 @@ static int ath12k_dp_mon_rx_deliver(struct ath12k *ar,
+ 	return -EINVAL;
+ }
+ 
++static int ath12k_dp_pkt_set_pktlen(struct sk_buff *skb, u32 len)
++{
++	if (skb->len > len) {
++		skb_trim(skb, len);
++	} else {
++		if (skb_tailroom(skb) < len - skb->len) {
++			if ((pskb_expand_head(skb, 0,
++					      len - skb->len - skb_tailroom(skb),
++					      GFP_ATOMIC))) {
++				return -ENOMEM;
++			}
++		}
++		skb_put(skb, (len - skb->len));
++	}
++
++	return 0;
++}
++
++static void ath12k_dp_mon_parse_rx_msdu_end_err(u32 info, u32 *errmap)
++{
++	if (info & RX_MSDU_END_INFO13_FCS_ERR)
++		*errmap |= HAL_RX_MPDU_ERR_FCS;
++
++	if (info & RX_MSDU_END_INFO13_DECRYPT_ERR)
++		*errmap |= HAL_RX_MPDU_ERR_DECRYPT;
++
++	if (info & RX_MSDU_END_INFO13_TKIP_MIC_ERR)
++		*errmap |= HAL_RX_MPDU_ERR_TKIP_MIC;
++
++	if (info & RX_MSDU_END_INFO13_A_MSDU_ERROR)
++		*errmap |= HAL_RX_MPDU_ERR_AMSDU_ERR;
++
++	if (info & RX_MSDU_END_INFO13_OVERFLOW_ERR)
++		*errmap |= HAL_RX_MPDU_ERR_OVERFLOW;
++
++	if (info & RX_MSDU_END_INFO13_MSDU_LEN_ERR)
++		*errmap |= HAL_RX_MPDU_ERR_MSDU_LEN;
++
++	if (info & RX_MSDU_END_INFO13_MPDU_LEN_ERR)
++		*errmap |= HAL_RX_MPDU_ERR_MPDU_LEN;
++}
++
++static int
++ath12k_dp_mon_parse_status_msdu_end(struct ath12k_mon_data *pmon,
++				    const struct hal_rx_msdu_end *msdu_end)
++{
++	struct dp_mon_mpdu *mon_mpdu = pmon->mon_mpdu;
++
++	ath12k_dp_mon_parse_rx_msdu_end_err(__le32_to_cpu(msdu_end->info2),
++					    &mon_mpdu->err_bitmap);
++
++	mon_mpdu->decap_format = le32_get_bits(msdu_end->info1,
++					       RX_MSDU_END_INFO11_DECAP_FORMAT);
++
++	return 0;
++}
++
++static int
++ath12k_dp_mon_parse_status_buf(struct ath12k *ar,
++			       struct ath12k_mon_data *pmon,
++			       const struct dp_mon_packet_info *packet_info)
++{
++	struct ath12k_base *ab = ar->ab;
++	struct dp_rxdma_mon_ring *buf_ring = &ab->dp.rxdma_mon_buf_ring;
++	struct sk_buff *msdu;
++	int buf_id;
++	u32 offset;
++
++	buf_id = u32_get_bits(packet_info->cookie, DP_RXDMA_BUF_COOKIE_BUF_ID);
++
++	spin_lock_bh(&buf_ring->idr_lock);
++	msdu = idr_remove(&buf_ring->bufs_idr, buf_id);
++	spin_unlock_bh(&buf_ring->idr_lock);
++
++	if (unlikely(!msdu)) {
++		ath12k_warn(ab, "mon dest desc with inval buf_id %d\n", buf_id);
++		return 0;
++	}
++
++	dma_unmap_single(ab->dev, ATH12K_SKB_RXCB(msdu)->paddr,
++			 msdu->len + skb_tailroom(msdu),
++			 DMA_FROM_DEVICE);
++
++	offset = packet_info->dma_length + ATH12K_MON_RX_DOT11_OFFSET;
++	if (ath12k_dp_pkt_set_pktlen(msdu, offset)) {
++		dev_kfree_skb_any(msdu);
++		goto dest_replenish;
++	}
++
++	if (!pmon->mon_mpdu->head)
++		pmon->mon_mpdu->head = msdu;
++	else
++		pmon->mon_mpdu->tail->next = msdu;
++
++	pmon->mon_mpdu->tail = msdu;
++
++dest_replenish:
++	ath12k_dp_mon_buf_replenish(ab, buf_ring, 1);
++
++	return 0;
++}
++
++static int
++ath12k_dp_mon_parse_rx_dest_tlv(struct ath12k *ar,
++				struct ath12k_mon_data *pmon,
++				enum hal_rx_mon_status hal_status,
++				const void *tlv_data)
++{
++	switch (hal_status) {
++	case HAL_RX_MON_STATUS_MPDU_START:
++		if (WARN_ON_ONCE(pmon->mon_mpdu))
++			break;
++
++		pmon->mon_mpdu = kzalloc(sizeof(*pmon->mon_mpdu), GFP_ATOMIC);
++		if (!pmon->mon_mpdu)
++			return -ENOMEM;
++		break;
++	case HAL_RX_MON_STATUS_BUF_ADDR:
++		return ath12k_dp_mon_parse_status_buf(ar, pmon, tlv_data);
++	case HAL_RX_MON_STATUS_MPDU_END:
++		/* If no MSDU then free empty MPDU */
++		if (pmon->mon_mpdu->tail) {
++			pmon->mon_mpdu->tail->next = NULL;
++			list_add_tail(&pmon->mon_mpdu->list, &pmon->dp_rx_mon_mpdu_list);
++		} else {
++			kfree(pmon->mon_mpdu);
++		}
++		pmon->mon_mpdu = NULL;
++		break;
++	case HAL_RX_MON_STATUS_MSDU_END:
++		return ath12k_dp_mon_parse_status_msdu_end(pmon, tlv_data);
++	default:
++		break;
++	}
++
++	return 0;
++}
++
+ static enum hal_rx_mon_status
+ ath12k_dp_mon_parse_rx_dest(struct ath12k *ar, struct ath12k_mon_data *pmon,
+ 			    struct sk_buff *skb)
+@@ -2114,14 +2380,20 @@ ath12k_dp_mon_parse_rx_dest(struct ath12k *ar, struct ath12k_mon_data *pmon,
+ 			tlv_len = le64_get_bits(tlv->tl, HAL_TLV_64_HDR_LEN);
+ 
+ 		hal_status = ath12k_dp_mon_rx_parse_status_tlv(ar, pmon, tlv);
++
++		if (ar->monitor_started &&
++		    ath12k_dp_mon_parse_rx_dest_tlv(ar, pmon, hal_status, tlv->value))
++			return HAL_RX_MON_STATUS_PPDU_DONE;
++
+ 		ptr += sizeof(*tlv) + tlv_len;
+ 		ptr = PTR_ALIGN(ptr, HAL_TLV_64_ALIGN);
+ 
+-		if ((ptr - skb->data) >= DP_RX_BUFFER_SIZE)
++		if ((ptr - skb->data) > skb->len)
+ 			break;
+ 
+ 	} while ((hal_status == HAL_RX_MON_STATUS_PPDU_NOT_DONE) ||
+ 		 (hal_status == HAL_RX_MON_STATUS_BUF_ADDR) ||
++		 (hal_status == HAL_RX_MON_STATUS_MPDU_START) ||
+ 		 (hal_status == HAL_RX_MON_STATUS_MPDU_END) ||
+ 		 (hal_status == HAL_RX_MON_STATUS_MSDU_END));
+ 
+@@ -2141,23 +2413,21 @@ ath12k_dp_mon_rx_parse_mon_status(struct ath12k *ar,
+ 	struct hal_rx_mon_ppdu_info *ppdu_info = &pmon->mon_ppdu_info;
+ 	struct dp_mon_mpdu *tmp;
+ 	struct dp_mon_mpdu *mon_mpdu = pmon->mon_mpdu;
+-	struct sk_buff *head_msdu, *tail_msdu;
+-	enum hal_rx_mon_status hal_status = HAL_RX_MON_STATUS_BUF_DONE;
++	enum hal_rx_mon_status hal_status;
+ 
+-	ath12k_dp_mon_parse_rx_dest(ar, pmon, skb);
++	hal_status = ath12k_dp_mon_parse_rx_dest(ar, pmon, skb);
++	if (hal_status != HAL_RX_MON_STATUS_PPDU_DONE)
++		return hal_status;
+ 
+ 	list_for_each_entry_safe(mon_mpdu, tmp, &pmon->dp_rx_mon_mpdu_list, list) {
+ 		list_del(&mon_mpdu->list);
+-		head_msdu = mon_mpdu->head;
+-		tail_msdu = mon_mpdu->tail;
+ 
+-		if (head_msdu && tail_msdu) {
+-			ath12k_dp_mon_rx_deliver(ar, head_msdu,
+-						 tail_msdu, ppdu_info, napi);
+-		}
++		if (mon_mpdu->head && mon_mpdu->tail)
++			ath12k_dp_mon_rx_deliver(ar, mon_mpdu, ppdu_info, napi);
+ 
+ 		kfree(mon_mpdu);
+ 	}
++
+ 	return hal_status;
+ }
+ 
+@@ -2838,16 +3108,13 @@ ath12k_dp_mon_tx_process_ppdu_info(struct ath12k *ar,
+ 				   struct dp_mon_tx_ppdu_info *tx_ppdu_info)
+ {
+ 	struct dp_mon_mpdu *tmp, *mon_mpdu;
+-	struct sk_buff *head_msdu, *tail_msdu;
+ 
+ 	list_for_each_entry_safe(mon_mpdu, tmp,
+ 				 &tx_ppdu_info->dp_tx_mon_mpdu_list, list) {
+ 		list_del(&mon_mpdu->list);
+-		head_msdu = mon_mpdu->head;
+-		tail_msdu = mon_mpdu->tail;
+ 
+-		if (head_msdu)
+-			ath12k_dp_mon_rx_deliver(ar, head_msdu, tail_msdu,
++		if (mon_mpdu->head)
++			ath12k_dp_mon_rx_deliver(ar, mon_mpdu,
+ 						 &tx_ppdu_info->rx_status, napi);
+ 
+ 		kfree(mon_mpdu);
+@@ -3346,7 +3613,7 @@ int ath12k_dp_mon_srng_process(struct ath12k *ar, int *budget,
+ 		ath12k_dp_mon_rx_memset_ppdu_info(ppdu_info);
+ 
+ 	while ((skb = __skb_dequeue(&skb_list))) {
+-		hal_status = ath12k_dp_mon_parse_rx_dest(ar, pmon, skb);
++		hal_status = ath12k_dp_mon_rx_parse_mon_status(ar, pmon, skb, napi);
+ 		if (hal_status != HAL_RX_MON_STATUS_PPDU_DONE) {
+ 			ppdu_info->ppdu_continuation = true;
+ 			dev_kfree_skb_any(skb);
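Two of the monitor-path changes above are bounds fixes: the TLV walk in ath12k_dp_mon_parse_rx_dest() now terminates against the actual skb length instead of the fixed DP_RX_BUFFER_SIZE, and ath12k_dp_pkt_set_pktlen() grows or trims the skb to the exact DMA length before use. A standalone sketch of a length-bounded TLV walk (header layout hypothetical):

	#include <stddef.h>
	#include <stdint.h>

	struct tlv { uint16_t tag; uint16_t len; /* value bytes follow */ };

	static void walk(const uint8_t *buf, size_t buf_len,
			 void (*cb)(const struct tlv *))
	{
		const uint8_t *ptr = buf;

		/* stop as soon as the next header would cross the buffer end */
		while ((size_t)(ptr - buf) + sizeof(struct tlv) <= buf_len) {
			const struct tlv *t = (const struct tlv *)(const void *)ptr;

			if ((size_t)(ptr - buf) + sizeof(*t) + t->len > buf_len)
				break;	/* truncated value: don't over-read */
			cb(t);
			ptr += sizeof(*t) + t->len;
		}
	}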
+diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.h b/drivers/net/wireless/ath/ath12k/dp_mon.h
+index e4368eb42aca83..b039f6b9277c69 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_mon.h
++++ b/drivers/net/wireless/ath/ath12k/dp_mon.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH12K_DP_MON_H
+@@ -9,6 +9,8 @@
+ 
+ #include "core.h"
+ 
++#define ATH12K_MON_RX_DOT11_OFFSET	5
++
+ enum dp_monitor_mode {
+ 	ATH12K_DP_TX_MONITOR_MODE,
+ 	ATH12K_DP_RX_MONITOR_MODE
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
+index 75bf4211ad4227..7fadd366ec13de 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
+@@ -228,12 +228,6 @@ static void ath12k_dp_rx_desc_get_crypto_header(struct ath12k_base *ab,
+ 	ab->hal_rx_ops->rx_desc_get_crypto_header(desc, crypto_hdr, enctype);
+ }
+ 
+-static u16 ath12k_dp_rxdesc_get_mpdu_frame_ctrl(struct ath12k_base *ab,
+-						struct hal_rx_desc *desc)
+-{
+-	return ab->hal_rx_ops->rx_desc_get_mpdu_frame_ctl(desc);
+-}
+-
+ static inline u8 ath12k_dp_rx_get_msdu_src_link(struct ath12k_base *ab,
+ 						struct hal_rx_desc *desc)
+ {
+@@ -1823,6 +1817,7 @@ static int ath12k_dp_rx_msdu_coalesce(struct ath12k *ar,
+ 	struct hal_rx_desc *ldesc;
+ 	int space_extra, rem_len, buf_len;
+ 	u32 hal_rx_desc_sz = ar->ab->hal.hal_desc_sz;
++	bool is_continuation;
+ 
+ 	/* As the msdu is spread across multiple rx buffers,
+ 	 * find the offset to the start of msdu for computing
+@@ -1871,7 +1866,8 @@ static int ath12k_dp_rx_msdu_coalesce(struct ath12k *ar,
+ 	rem_len = msdu_len - buf_first_len;
+ 	while ((skb = __skb_dequeue(msdu_list)) != NULL && rem_len > 0) {
+ 		rxcb = ATH12K_SKB_RXCB(skb);
+-		if (rxcb->is_continuation)
++		is_continuation = rxcb->is_continuation;
++		if (is_continuation)
+ 			buf_len = DP_RX_BUFFER_SIZE - hal_rx_desc_sz;
+ 		else
+ 			buf_len = rem_len;
+@@ -1889,7 +1885,7 @@ static int ath12k_dp_rx_msdu_coalesce(struct ath12k *ar,
+ 		dev_kfree_skb_any(skb);
+ 
+ 		rem_len -= buf_len;
+-		if (!rxcb->is_continuation)
++		if (!is_continuation)
+ 			break;
+ 	}
+ 
+@@ -1914,21 +1910,14 @@ static struct sk_buff *ath12k_dp_rx_get_msdu_last_buf(struct sk_buff_head *msdu_
+ 	return NULL;
+ }
+ 
+-static void ath12k_dp_rx_h_csum_offload(struct ath12k *ar, struct sk_buff *msdu)
++static void ath12k_dp_rx_h_csum_offload(struct sk_buff *msdu,
++					struct ath12k_dp_rx_info *rx_info)
+ {
+-	struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+-	struct ath12k_base *ab = ar->ab;
+-	bool ip_csum_fail, l4_csum_fail;
+-
+-	ip_csum_fail = ath12k_dp_rx_h_ip_cksum_fail(ab, rxcb->rx_desc);
+-	l4_csum_fail = ath12k_dp_rx_h_l4_cksum_fail(ab, rxcb->rx_desc);
+-
+-	msdu->ip_summed = (ip_csum_fail || l4_csum_fail) ?
+-			  CHECKSUM_NONE : CHECKSUM_UNNECESSARY;
++	msdu->ip_summed = (rx_info->ip_csum_fail || rx_info->l4_csum_fail) ?
++			   CHECKSUM_NONE : CHECKSUM_UNNECESSARY;
+ }
+ 
+-static int ath12k_dp_rx_crypto_mic_len(struct ath12k *ar,
+-				       enum hal_encrypt_type enctype)
++int ath12k_dp_rx_crypto_mic_len(struct ath12k *ar, enum hal_encrypt_type enctype)
+ {
+ 	switch (enctype) {
+ 	case HAL_ENCRYPT_TYPE_OPEN:
+@@ -2122,10 +2111,13 @@ static void ath12k_get_dot11_hdr_from_rx_desc(struct ath12k *ar,
+ 	struct hal_rx_desc *rx_desc = rxcb->rx_desc;
+ 	struct ath12k_base *ab = ar->ab;
+ 	size_t hdr_len, crypto_len;
+-	struct ieee80211_hdr *hdr;
+-	u16 qos_ctl;
+-	__le16 fc;
+-	u8 *crypto_hdr;
++	struct ieee80211_hdr hdr;
++	__le16 qos_ctl;
++	u8 *crypto_hdr, mesh_ctrl;
++
++	ath12k_dp_rx_desc_get_dot11_hdr(ab, rx_desc, &hdr);
++	hdr_len = ieee80211_hdrlen(hdr.frame_control);
++	mesh_ctrl = ath12k_dp_rx_h_mesh_ctl_present(ab, rx_desc);
+ 
+ 	if (!(status->flag & RX_FLAG_IV_STRIPPED)) {
+ 		crypto_len = ath12k_dp_rx_crypto_param_len(ar, enctype);
+@@ -2133,27 +2125,21 @@ static void ath12k_get_dot11_hdr_from_rx_desc(struct ath12k *ar,
+ 		ath12k_dp_rx_desc_get_crypto_header(ab, rx_desc, crypto_hdr, enctype);
+ 	}
+ 
+-	fc = cpu_to_le16(ath12k_dp_rxdesc_get_mpdu_frame_ctrl(ab, rx_desc));
+-	hdr_len = ieee80211_hdrlen(fc);
+ 	skb_push(msdu, hdr_len);
+-	hdr = (struct ieee80211_hdr *)msdu->data;
+-	hdr->frame_control = fc;
+-
+-	/* Get wifi header from rx_desc */
+-	ath12k_dp_rx_desc_get_dot11_hdr(ab, rx_desc, hdr);
++	memcpy(msdu->data, &hdr, min(hdr_len, sizeof(hdr)));
+ 
+ 	if (rxcb->is_mcbc)
+ 		status->flag &= ~RX_FLAG_PN_VALIDATED;
+ 
+ 	/* Add QOS header */
+-	if (ieee80211_is_data_qos(hdr->frame_control)) {
+-		qos_ctl = rxcb->tid;
+-		if (ath12k_dp_rx_h_mesh_ctl_present(ab, rx_desc))
+-			qos_ctl |= IEEE80211_QOS_CTL_MESH_CONTROL_PRESENT;
++	if (ieee80211_is_data_qos(hdr.frame_control)) {
++		struct ieee80211_hdr *qos_ptr = (struct ieee80211_hdr *)msdu->data;
+ 
+-		/* TODO: Add other QoS ctl fields when required */
+-		memcpy(msdu->data + (hdr_len - IEEE80211_QOS_CTL_LEN),
+-		       &qos_ctl, IEEE80211_QOS_CTL_LEN);
++		qos_ctl = cpu_to_le16(rxcb->tid & IEEE80211_QOS_CTL_TID_MASK);
++		if (mesh_ctrl)
++			qos_ctl |= cpu_to_le16(IEEE80211_QOS_CTL_MESH_CONTROL_PRESENT);
++
++		memcpy(ieee80211_get_qos_ctl(qos_ptr), &qos_ctl, IEEE80211_QOS_CTL_LEN);
+ 	}
+ }
+ 
+@@ -2229,10 +2215,10 @@ static void ath12k_dp_rx_h_undecap(struct ath12k *ar, struct sk_buff *msdu,
+ }
+ 
+ struct ath12k_peer *
+-ath12k_dp_rx_h_find_peer(struct ath12k_base *ab, struct sk_buff *msdu)
++ath12k_dp_rx_h_find_peer(struct ath12k_base *ab, struct sk_buff *msdu,
++			 struct ath12k_dp_rx_info *rx_info)
+ {
+ 	struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+-	struct hal_rx_desc *rx_desc = rxcb->rx_desc;
+ 	struct ath12k_peer *peer = NULL;
+ 
+ 	lockdep_assert_held(&ab->base_lock);
+@@ -2243,39 +2229,35 @@ ath12k_dp_rx_h_find_peer(struct ath12k_base *ab, struct sk_buff *msdu)
+ 	if (peer)
+ 		return peer;
+ 
+-	if (!rx_desc || !(ath12k_dp_rxdesc_mac_addr2_valid(ab, rx_desc)))
+-		return NULL;
++	if (rx_info->addr2_present)
++		peer = ath12k_peer_find_by_addr(ab, rx_info->addr2);
+ 
+-	peer = ath12k_peer_find_by_addr(ab,
+-					ath12k_dp_rxdesc_get_mpdu_start_addr2(ab,
+-									      rx_desc));
+ 	return peer;
+ }
+ 
+ static void ath12k_dp_rx_h_mpdu(struct ath12k *ar,
+ 				struct sk_buff *msdu,
+ 				struct hal_rx_desc *rx_desc,
+-				struct ieee80211_rx_status *rx_status)
++				struct ath12k_dp_rx_info *rx_info)
+ {
+-	bool  fill_crypto_hdr;
+ 	struct ath12k_base *ab = ar->ab;
+ 	struct ath12k_skb_rxcb *rxcb;
+ 	enum hal_encrypt_type enctype;
+ 	bool is_decrypted = false;
+ 	struct ieee80211_hdr *hdr;
+ 	struct ath12k_peer *peer;
++	struct ieee80211_rx_status *rx_status = rx_info->rx_status;
+ 	u32 err_bitmap;
+ 
+ 	/* PN for multicast packets will be checked in mac80211 */
+ 	rxcb = ATH12K_SKB_RXCB(msdu);
+-	fill_crypto_hdr = ath12k_dp_rx_h_is_da_mcbc(ar->ab, rx_desc);
+-	rxcb->is_mcbc = fill_crypto_hdr;
++	rxcb->is_mcbc = rx_info->is_mcbc;
+ 
+ 	if (rxcb->is_mcbc)
+-		rxcb->peer_id = ath12k_dp_rx_h_peer_id(ar->ab, rx_desc);
++		rxcb->peer_id = rx_info->peer_id;
+ 
+ 	spin_lock_bh(&ar->ab->base_lock);
+-	peer = ath12k_dp_rx_h_find_peer(ar->ab, msdu);
++	peer = ath12k_dp_rx_h_find_peer(ar->ab, msdu, rx_info);
+ 	if (peer) {
+ 		if (rxcb->is_mcbc)
+ 			enctype = peer->sec_type_grp;
+@@ -2305,7 +2287,7 @@ static void ath12k_dp_rx_h_mpdu(struct ath12k *ar,
+ 	if (is_decrypted) {
+ 		rx_status->flag |= RX_FLAG_DECRYPTED | RX_FLAG_MMIC_STRIPPED;
+ 
+-		if (fill_crypto_hdr)
++		if (rx_info->is_mcbc)
+ 			rx_status->flag |= RX_FLAG_MIC_STRIPPED |
+ 					RX_FLAG_ICV_STRIPPED;
+ 		else
+@@ -2313,37 +2295,28 @@ static void ath12k_dp_rx_h_mpdu(struct ath12k *ar,
+ 					   RX_FLAG_PN_VALIDATED;
+ 	}
+ 
+-	ath12k_dp_rx_h_csum_offload(ar, msdu);
++	ath12k_dp_rx_h_csum_offload(msdu, rx_info);
+ 	ath12k_dp_rx_h_undecap(ar, msdu, rx_desc,
+ 			       enctype, rx_status, is_decrypted);
+ 
+-	if (!is_decrypted || fill_crypto_hdr)
++	if (!is_decrypted || rx_info->is_mcbc)
+ 		return;
+ 
+-	if (ath12k_dp_rx_h_decap_type(ar->ab, rx_desc) !=
+-	    DP_RX_DECAP_TYPE_ETHERNET2_DIX) {
++	if (rx_info->decap_type != DP_RX_DECAP_TYPE_ETHERNET2_DIX) {
+ 		hdr = (void *)msdu->data;
+ 		hdr->frame_control &= ~__cpu_to_le16(IEEE80211_FCTL_PROTECTED);
+ 	}
+ }
+ 
+-static void ath12k_dp_rx_h_rate(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+-				struct ieee80211_rx_status *rx_status)
++static void ath12k_dp_rx_h_rate(struct ath12k *ar, struct ath12k_dp_rx_info *rx_info)
+ {
+-	struct ath12k_base *ab = ar->ab;
+ 	struct ieee80211_supported_band *sband;
+-	enum rx_msdu_start_pkt_type pkt_type;
+-	u8 bw;
+-	u8 rate_mcs, nss;
+-	u8 sgi;
++	struct ieee80211_rx_status *rx_status = rx_info->rx_status;
++	enum rx_msdu_start_pkt_type pkt_type = rx_info->pkt_type;
++	u8 bw = rx_info->bw, sgi = rx_info->sgi;
++	u8 rate_mcs = rx_info->rate_mcs, nss = rx_info->nss;
+ 	bool is_cck;
+ 
+-	pkt_type = ath12k_dp_rx_h_pkt_type(ab, rx_desc);
+-	bw = ath12k_dp_rx_h_rx_bw(ab, rx_desc);
+-	rate_mcs = ath12k_dp_rx_h_rate_mcs(ab, rx_desc);
+-	nss = ath12k_dp_rx_h_nss(ab, rx_desc);
+-	sgi = ath12k_dp_rx_h_sgi(ab, rx_desc);
+-
+ 	switch (pkt_type) {
+ 	case RX_MSDU_START_PKT_TYPE_11A:
+ 	case RX_MSDU_START_PKT_TYPE_11B:
+@@ -2412,10 +2385,35 @@ static void ath12k_dp_rx_h_rate(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+ 	}
+ }
+ 
+-void ath12k_dp_rx_h_ppdu(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+-			 struct ieee80211_rx_status *rx_status)
++void ath12k_dp_rx_h_fetch_info(struct ath12k_base *ab, struct hal_rx_desc *rx_desc,
++			       struct ath12k_dp_rx_info *rx_info)
+ {
+-	struct ath12k_base *ab = ar->ab;
++	rx_info->ip_csum_fail = ath12k_dp_rx_h_ip_cksum_fail(ab, rx_desc);
++	rx_info->l4_csum_fail = ath12k_dp_rx_h_l4_cksum_fail(ab, rx_desc);
++	rx_info->is_mcbc = ath12k_dp_rx_h_is_da_mcbc(ab, rx_desc);
++	rx_info->decap_type = ath12k_dp_rx_h_decap_type(ab, rx_desc);
++	rx_info->pkt_type = ath12k_dp_rx_h_pkt_type(ab, rx_desc);
++	rx_info->sgi = ath12k_dp_rx_h_sgi(ab, rx_desc);
++	rx_info->rate_mcs = ath12k_dp_rx_h_rate_mcs(ab, rx_desc);
++	rx_info->bw = ath12k_dp_rx_h_rx_bw(ab, rx_desc);
++	rx_info->nss = ath12k_dp_rx_h_nss(ab, rx_desc);
++	rx_info->tid = ath12k_dp_rx_h_tid(ab, rx_desc);
++	rx_info->peer_id = ath12k_dp_rx_h_peer_id(ab, rx_desc);
++	rx_info->phy_meta_data = ath12k_dp_rx_h_freq(ab, rx_desc);
++
++	if (ath12k_dp_rxdesc_mac_addr2_valid(ab, rx_desc)) {
++		ether_addr_copy(rx_info->addr2,
++				ath12k_dp_rxdesc_get_mpdu_start_addr2(ab, rx_desc));
++		rx_info->addr2_present = true;
++	}
++
++	ath12k_dbg_dump(ab, ATH12K_DBG_DATA, NULL, "rx_desc: ",
++			rx_desc, sizeof(*rx_desc));
++}
++
++void ath12k_dp_rx_h_ppdu(struct ath12k *ar, struct ath12k_dp_rx_info *rx_info)
++{
++	struct ieee80211_rx_status *rx_status = rx_info->rx_status;
+ 	u8 channel_num;
+ 	u32 center_freq, meta_data;
+ 	struct ieee80211_channel *channel;
+@@ -2429,12 +2427,12 @@ void ath12k_dp_rx_h_ppdu(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+ 
+ 	rx_status->flag |= RX_FLAG_NO_SIGNAL_VAL;
+ 
+-	meta_data = ath12k_dp_rx_h_freq(ab, rx_desc);
++	meta_data = rx_info->phy_meta_data;
+ 	channel_num = meta_data;
+ 	center_freq = meta_data >> 16;
+ 
+-	if (center_freq >= ATH12K_MIN_6G_FREQ &&
+-	    center_freq <= ATH12K_MAX_6G_FREQ) {
++	if (center_freq >= ATH12K_MIN_6GHZ_FREQ &&
++	    center_freq <= ATH12K_MAX_6GHZ_FREQ) {
+ 		rx_status->band = NL80211_BAND_6GHZ;
+ 		rx_status->freq = center_freq;
+ 	} else if (channel_num >= 1 && channel_num <= 14) {
+@@ -2450,20 +2448,18 @@ void ath12k_dp_rx_h_ppdu(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+ 				ieee80211_frequency_to_channel(channel->center_freq);
+ 		}
+ 		spin_unlock_bh(&ar->data_lock);
+-		ath12k_dbg_dump(ar->ab, ATH12K_DBG_DATA, NULL, "rx_desc: ",
+-				rx_desc, sizeof(*rx_desc));
+ 	}
+ 
+ 	if (rx_status->band != NL80211_BAND_6GHZ)
+ 		rx_status->freq = ieee80211_channel_to_frequency(channel_num,
+ 								 rx_status->band);
+ 
+-	ath12k_dp_rx_h_rate(ar, rx_desc, rx_status);
++	ath12k_dp_rx_h_rate(ar, rx_info);
+ }
+ 
+ static void ath12k_dp_rx_deliver_msdu(struct ath12k *ar, struct napi_struct *napi,
+ 				      struct sk_buff *msdu,
+-				      struct ieee80211_rx_status *status)
++				      struct ath12k_dp_rx_info *rx_info)
+ {
+ 	struct ath12k_base *ab = ar->ab;
+ 	static const struct ieee80211_radiotap_he known = {
+@@ -2476,6 +2472,7 @@ static void ath12k_dp_rx_deliver_msdu(struct ath12k *ar, struct napi_struct *nap
+ 	struct ieee80211_sta *pubsta;
+ 	struct ath12k_peer *peer;
+ 	struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
++	struct ieee80211_rx_status *status = rx_info->rx_status;
+ 	u8 decap = DP_RX_DECAP_TYPE_RAW;
+ 	bool is_mcbc = rxcb->is_mcbc;
+ 	bool is_eapol = rxcb->is_eapol;
+@@ -2488,10 +2485,10 @@ static void ath12k_dp_rx_deliver_msdu(struct ath12k *ar, struct napi_struct *nap
+ 	}
+ 
+ 	if (!(status->flag & RX_FLAG_ONLY_MONITOR))
+-		decap = ath12k_dp_rx_h_decap_type(ab, rxcb->rx_desc);
++		decap = rx_info->decap_type;
+ 
+ 	spin_lock_bh(&ab->base_lock);
+-	peer = ath12k_dp_rx_h_find_peer(ab, msdu);
++	peer = ath12k_dp_rx_h_find_peer(ab, msdu, rx_info);
+ 
+ 	pubsta = peer ? peer->sta : NULL;
+ 
+@@ -2574,7 +2571,7 @@ static bool ath12k_dp_rx_check_nwifi_hdr_len_valid(struct ath12k_base *ab,
+ static int ath12k_dp_rx_process_msdu(struct ath12k *ar,
+ 				     struct sk_buff *msdu,
+ 				     struct sk_buff_head *msdu_list,
+-				     struct ieee80211_rx_status *rx_status)
++				     struct ath12k_dp_rx_info *rx_info)
+ {
+ 	struct ath12k_base *ab = ar->ab;
+ 	struct hal_rx_desc *rx_desc, *lrx_desc;
+@@ -2634,10 +2631,11 @@ static int ath12k_dp_rx_process_msdu(struct ath12k *ar,
+ 		goto free_out;
+ 	}
+ 
+-	ath12k_dp_rx_h_ppdu(ar, rx_desc, rx_status);
+-	ath12k_dp_rx_h_mpdu(ar, msdu, rx_desc, rx_status);
++	ath12k_dp_rx_h_fetch_info(ab, rx_desc, rx_info);
++	ath12k_dp_rx_h_ppdu(ar, rx_info);
++	ath12k_dp_rx_h_mpdu(ar, msdu, rx_desc, rx_info);
+ 
+-	rx_status->flag |= RX_FLAG_SKIP_MONITOR | RX_FLAG_DUP_VALIDATED;
++	rx_info->rx_status->flag |= RX_FLAG_SKIP_MONITOR | RX_FLAG_DUP_VALIDATED;
+ 
+ 	return 0;
+ 
+@@ -2657,12 +2655,16 @@ static void ath12k_dp_rx_process_received_packets(struct ath12k_base *ab,
+ 	struct ath12k *ar;
+ 	struct ath12k_hw_link *hw_links = ag->hw_links;
+ 	struct ath12k_base *partner_ab;
++	struct ath12k_dp_rx_info rx_info;
+ 	u8 hw_link_id, pdev_id;
+ 	int ret;
+ 
+ 	if (skb_queue_empty(msdu_list))
+ 		return;
+ 
++	rx_info.addr2_present = false;
++	rx_info.rx_status = &rx_status;
++
+ 	rcu_read_lock();
+ 
+ 	while ((msdu = __skb_dequeue(msdu_list))) {
+@@ -2683,7 +2685,7 @@ static void ath12k_dp_rx_process_received_packets(struct ath12k_base *ab,
+ 			continue;
+ 		}
+ 
+-		ret = ath12k_dp_rx_process_msdu(ar, msdu, msdu_list, &rx_status);
++		ret = ath12k_dp_rx_process_msdu(ar, msdu, msdu_list, &rx_info);
+ 		if (ret) {
+ 			ath12k_dbg(ab, ATH12K_DBG_DATA,
+ 				   "Unable to process msdu %d", ret);
+@@ -2691,7 +2693,7 @@ static void ath12k_dp_rx_process_received_packets(struct ath12k_base *ab,
+ 			continue;
+ 		}
+ 
+-		ath12k_dp_rx_deliver_msdu(ar, napi, msdu, &rx_status);
++		ath12k_dp_rx_deliver_msdu(ar, napi, msdu, &rx_info);
+ 	}
+ 
+ 	rcu_read_unlock();
+@@ -2984,6 +2986,7 @@ static int ath12k_dp_rx_h_verify_tkip_mic(struct ath12k *ar, struct ath12k_peer
+ 	struct ieee80211_rx_status *rxs = IEEE80211_SKB_RXCB(msdu);
+ 	struct ieee80211_key_conf *key_conf;
+ 	struct ieee80211_hdr *hdr;
++	struct ath12k_dp_rx_info rx_info;
+ 	u8 mic[IEEE80211_CCMP_MIC_LEN];
+ 	int head_len, tail_len, ret;
+ 	size_t data_len;
+@@ -2994,6 +2997,9 @@ static int ath12k_dp_rx_h_verify_tkip_mic(struct ath12k *ar, struct ath12k_peer
+ 	if (ath12k_dp_rx_h_enctype(ab, rx_desc) != HAL_ENCRYPT_TYPE_TKIP_MIC)
+ 		return 0;
+ 
++	rx_info.addr2_present = false;
++	rx_info.rx_status = rxs;
++
+ 	hdr = (struct ieee80211_hdr *)(msdu->data + hal_rx_desc_sz);
+ 	hdr_len = ieee80211_hdrlen(hdr->frame_control);
+ 	head_len = hdr_len + hal_rx_desc_sz + IEEE80211_TKIP_IV_LEN;
+@@ -3020,6 +3026,8 @@ static int ath12k_dp_rx_h_verify_tkip_mic(struct ath12k *ar, struct ath12k_peer
+ 	(ATH12K_SKB_RXCB(msdu))->is_first_msdu = true;
+ 	(ATH12K_SKB_RXCB(msdu))->is_last_msdu = true;
+ 
++	ath12k_dp_rx_h_fetch_info(ab, rx_desc, &rx_info);
++
+ 	rxs->flag |= RX_FLAG_MMIC_ERROR | RX_FLAG_MMIC_STRIPPED |
+ 		    RX_FLAG_IV_STRIPPED | RX_FLAG_DECRYPTED;
+ 	skb_pull(msdu, hal_rx_desc_sz);
+@@ -3027,7 +3035,7 @@ static int ath12k_dp_rx_h_verify_tkip_mic(struct ath12k *ar, struct ath12k_peer
+ 	if (unlikely(!ath12k_dp_rx_check_nwifi_hdr_len_valid(ab, rx_desc, msdu)))
+ 		return -EINVAL;
+ 
+-	ath12k_dp_rx_h_ppdu(ar, rx_desc, rxs);
++	ath12k_dp_rx_h_ppdu(ar, &rx_info);
+ 	ath12k_dp_rx_h_undecap(ar, msdu, rx_desc,
+ 			       HAL_ENCRYPT_TYPE_TKIP_MIC, rxs, true);
+ 	ieee80211_rx(ath12k_ar_to_hw(ar), msdu);
+@@ -3716,7 +3724,7 @@ static void ath12k_dp_rx_null_q_desc_sg_drop(struct ath12k *ar,
+ }
+ 
+ static int ath12k_dp_rx_h_null_q_desc(struct ath12k *ar, struct sk_buff *msdu,
+-				      struct ieee80211_rx_status *status,
++				      struct ath12k_dp_rx_info *rx_info,
+ 				      struct sk_buff_head *msdu_list)
+ {
+ 	struct ath12k_base *ab = ar->ab;
+@@ -3772,11 +3780,11 @@ static int ath12k_dp_rx_h_null_q_desc(struct ath12k *ar, struct sk_buff *msdu,
+ 	if (unlikely(!ath12k_dp_rx_check_nwifi_hdr_len_valid(ab, desc, msdu)))
+ 		return -EINVAL;
+ 
+-	ath12k_dp_rx_h_ppdu(ar, desc, status);
+-
+-	ath12k_dp_rx_h_mpdu(ar, msdu, desc, status);
++	ath12k_dp_rx_h_fetch_info(ab, desc, rx_info);
++	ath12k_dp_rx_h_ppdu(ar, rx_info);
++	ath12k_dp_rx_h_mpdu(ar, msdu, desc, rx_info);
+ 
+-	rxcb->tid = ath12k_dp_rx_h_tid(ab, desc);
++	rxcb->tid = rx_info->tid;
+ 
+ 	/* Please note that the caller will have access to the msdu and will
+ 	 * complete rx with mac80211. No need to worry about cleaning up amsdu_list.
+@@ -3786,7 +3794,7 @@ static int ath12k_dp_rx_h_null_q_desc(struct ath12k *ar, struct sk_buff *msdu,
+ }
+ 
+ static bool ath12k_dp_rx_h_reo_err(struct ath12k *ar, struct sk_buff *msdu,
+-				   struct ieee80211_rx_status *status,
++				   struct ath12k_dp_rx_info *rx_info,
+ 				   struct sk_buff_head *msdu_list)
+ {
+ 	struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+@@ -3796,7 +3804,7 @@ static bool ath12k_dp_rx_h_reo_err(struct ath12k *ar, struct sk_buff *msdu,
+ 
+ 	switch (rxcb->err_code) {
+ 	case HAL_REO_DEST_RING_ERROR_CODE_DESC_ADDR_ZERO:
+-		if (ath12k_dp_rx_h_null_q_desc(ar, msdu, status, msdu_list))
++		if (ath12k_dp_rx_h_null_q_desc(ar, msdu, rx_info, msdu_list))
+ 			drop = true;
+ 		break;
+ 	case HAL_REO_DEST_RING_ERROR_CODE_PN_CHECK_FAILED:
+@@ -3817,7 +3825,7 @@ static bool ath12k_dp_rx_h_reo_err(struct ath12k *ar, struct sk_buff *msdu,
+ }
+ 
+ static bool ath12k_dp_rx_h_tkip_mic_err(struct ath12k *ar, struct sk_buff *msdu,
+-					struct ieee80211_rx_status *status)
++					struct ath12k_dp_rx_info *rx_info)
+ {
+ 	struct ath12k_base *ab = ar->ab;
+ 	u16 msdu_len;
+@@ -3831,24 +3839,33 @@ static bool ath12k_dp_rx_h_tkip_mic_err(struct ath12k *ar, struct sk_buff *msdu,
+ 
+ 	l3pad_bytes = ath12k_dp_rx_h_l3pad(ab, desc);
+ 	msdu_len = ath12k_dp_rx_h_msdu_len(ab, desc);
++
++	if ((hal_rx_desc_sz + l3pad_bytes + msdu_len) > DP_RX_BUFFER_SIZE) {
++		ath12k_dbg(ab, ATH12K_DBG_DATA,
++			   "invalid msdu len in tkip mic err %u\n", msdu_len);
++		ath12k_dbg_dump(ab, ATH12K_DBG_DATA, NULL, "", desc,
++				sizeof(*desc));
++		return true;
++	}
++
+ 	skb_put(msdu, hal_rx_desc_sz + l3pad_bytes + msdu_len);
+ 	skb_pull(msdu, hal_rx_desc_sz + l3pad_bytes);
+ 
+ 	if (unlikely(!ath12k_dp_rx_check_nwifi_hdr_len_valid(ab, desc, msdu)))
+ 		return true;
+ 
+-	ath12k_dp_rx_h_ppdu(ar, desc, status);
++	ath12k_dp_rx_h_ppdu(ar, rx_info);
+ 
+-	status->flag |= (RX_FLAG_MMIC_STRIPPED | RX_FLAG_MMIC_ERROR |
+-			 RX_FLAG_DECRYPTED);
++	rx_info->rx_status->flag |= (RX_FLAG_MMIC_STRIPPED | RX_FLAG_MMIC_ERROR |
++				     RX_FLAG_DECRYPTED);
+ 
+ 	ath12k_dp_rx_h_undecap(ar, msdu, desc,
+-			       HAL_ENCRYPT_TYPE_TKIP_MIC, status, false);
++			       HAL_ENCRYPT_TYPE_TKIP_MIC, rx_info->rx_status, false);
+ 	return false;
+ }
+ 
+ static bool ath12k_dp_rx_h_rxdma_err(struct ath12k *ar,  struct sk_buff *msdu,
+-				     struct ieee80211_rx_status *status)
++				     struct ath12k_dp_rx_info *rx_info)
+ {
+ 	struct ath12k_base *ab = ar->ab;
+ 	struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+@@ -3863,7 +3880,8 @@ static bool ath12k_dp_rx_h_rxdma_err(struct ath12k *ar,  struct sk_buff *msdu,
+ 	case HAL_REO_ENTR_RING_RXDMA_ECODE_TKIP_MIC_ERR:
+ 		err_bitmap = ath12k_dp_rx_h_mpdu_err(ab, rx_desc);
+ 		if (err_bitmap & HAL_RX_MPDU_ERR_TKIP_MIC) {
+-			drop = ath12k_dp_rx_h_tkip_mic_err(ar, msdu, status);
++			ath12k_dp_rx_h_fetch_info(ab, rx_desc, rx_info);
++			drop = ath12k_dp_rx_h_tkip_mic_err(ar, msdu, rx_info);
+ 			break;
+ 		}
+ 		fallthrough;
+@@ -3885,14 +3903,18 @@ static void ath12k_dp_rx_wbm_err(struct ath12k *ar,
+ {
+ 	struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+ 	struct ieee80211_rx_status rxs = {0};
++	struct ath12k_dp_rx_info rx_info;
+ 	bool drop = true;
+ 
++	rx_info.addr2_present = false;
++	rx_info.rx_status = &rxs;
++
+ 	switch (rxcb->err_rel_src) {
+ 	case HAL_WBM_REL_SRC_MODULE_REO:
+-		drop = ath12k_dp_rx_h_reo_err(ar, msdu, &rxs, msdu_list);
++		drop = ath12k_dp_rx_h_reo_err(ar, msdu, &rx_info, msdu_list);
+ 		break;
+ 	case HAL_WBM_REL_SRC_MODULE_RXDMA:
+-		drop = ath12k_dp_rx_h_rxdma_err(ar, msdu, &rxs);
++		drop = ath12k_dp_rx_h_rxdma_err(ar, msdu, &rx_info);
+ 		break;
+ 	default:
+ 		/* msdu will get freed */
+@@ -3904,7 +3926,7 @@ static void ath12k_dp_rx_wbm_err(struct ath12k *ar,
+ 		return;
+ 	}
+ 
+-	ath12k_dp_rx_deliver_msdu(ar, napi, msdu, &rxs);
++	ath12k_dp_rx_deliver_msdu(ar, napi, msdu, &rx_info);
+ }
+ 
+ int ath12k_dp_rx_process_wbm_err(struct ath12k_base *ab,
+@@ -4480,6 +4502,8 @@ int ath12k_dp_rx_pdev_mon_attach(struct ath12k *ar)
+ 
+ 	pmon->mon_last_linkdesc_paddr = 0;
+ 	pmon->mon_last_buf_cookie = DP_RX_DESC_COOKIE_MAX + 1;
++	INIT_LIST_HEAD(&pmon->dp_rx_mon_mpdu_list);
++	pmon->mon_mpdu = NULL;
+ 	spin_lock_init(&pmon->mon_lock);
+ 
+ 	return 0;
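
The dp_rx.c changes above converge on one pattern: the per-field helper calls
against the hardware descriptor (pkt_type, bw, sgi, mcs, nss, tid, peer_id,
and so on) are replaced by a single ath12k_dp_rx_h_fetch_info() pass that
caches the decoded values in struct ath12k_dp_rx_info, which the ppdu/mpdu
stages then read. A minimal stand-alone sketch of the idea follows; the field
layout and names are made up for illustration, not the driver's real
descriptor:

	#include <stdint.h>
	#include <stdio.h>

	/* Toy descriptor and info cache; bit layout is illustrative only. */
	struct demo_rx_desc { uint32_t w0; };

	struct demo_rx_info {
		uint8_t pkt_type;
		uint8_t bw, sgi, rate_mcs, nss;
	};

	/* Decode the DMA descriptor exactly once per MSDU... */
	static void demo_fetch_info(const struct demo_rx_desc *d,
				    struct demo_rx_info *info)
	{
		info->pkt_type = d->w0 & 0xf;
		info->bw       = (d->w0 >> 4) & 0x7;
		info->sgi      = (d->w0 >> 7) & 0x3;
		info->rate_mcs = (d->w0 >> 9) & 0xf;
		info->nss      = (d->w0 >> 13) & 0x7;
	}

	/* ...so later handlers only touch the cached copy. */
	static void demo_h_rate(const struct demo_rx_info *info)
	{
		printf("mcs %u nss %u bw %u\n",
		       info->rate_mcs, info->nss, info->bw);
	}

	int main(void)
	{
		struct demo_rx_desc desc = { .w0 = 0x2f27 };
		struct demo_rx_info info;

		demo_fetch_info(&desc, &info);
		demo_h_rate(&info);
		return 0;
	}
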
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.h b/drivers/net/wireless/ath/ath12k/dp_rx.h
+index 88e42365a9d8bc..a4e179c6f2664f 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.h
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.h
+@@ -65,6 +65,24 @@ struct ath12k_dp_rx_rfc1042_hdr {
+ 	__be16 snap_type;
+ } __packed;
+ 
++struct ath12k_dp_rx_info {
++	struct ieee80211_rx_status *rx_status;
++	u32 phy_meta_data;
++	u16 peer_id;
++	u8 decap_type;
++	u8 pkt_type;
++	u8 sgi;
++	u8 rate_mcs;
++	u8 bw;
++	u8 nss;
++	u8 addr2[ETH_ALEN];
++	u8 tid;
++	bool ip_csum_fail;
++	bool l4_csum_fail;
++	bool is_mcbc;
++	bool addr2_present;
++};
++
+ static inline u32 ath12k_he_gi_to_nl80211_he_gi(u8 sgi)
+ {
+ 	u32 ret = 0;
+@@ -131,13 +149,13 @@ int ath12k_dp_rx_peer_frag_setup(struct ath12k *ar, const u8 *peer_mac, int vdev
+ u8 ath12k_dp_rx_h_l3pad(struct ath12k_base *ab,
+ 			struct hal_rx_desc *desc);
+ struct ath12k_peer *
+-ath12k_dp_rx_h_find_peer(struct ath12k_base *ab, struct sk_buff *msdu);
++ath12k_dp_rx_h_find_peer(struct ath12k_base *ab, struct sk_buff *msdu,
++			 struct ath12k_dp_rx_info *rx_info);
+ u8 ath12k_dp_rx_h_decap_type(struct ath12k_base *ab,
+ 			     struct hal_rx_desc *desc);
+ u32 ath12k_dp_rx_h_mpdu_err(struct ath12k_base *ab,
+ 			    struct hal_rx_desc *desc);
+-void ath12k_dp_rx_h_ppdu(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+-			 struct ieee80211_rx_status *rx_status);
++void ath12k_dp_rx_h_ppdu(struct ath12k *ar, struct ath12k_dp_rx_info *rx_info);
+ int ath12k_dp_rxdma_ring_sel_config_qcn9274(struct ath12k_base *ab);
+ int ath12k_dp_rxdma_ring_sel_config_wcn7850(struct ath12k_base *ab);
+ 
+@@ -145,4 +163,9 @@ int ath12k_dp_htt_tlv_iter(struct ath12k_base *ab, const void *ptr, size_t len,
+ 			   int (*iter)(struct ath12k_base *ar, u16 tag, u16 len,
+ 				       const void *ptr, void *data),
+ 			   void *data);
++void ath12k_dp_rx_h_fetch_info(struct ath12k_base *ab,  struct hal_rx_desc *rx_desc,
++			       struct ath12k_dp_rx_info *rx_info);
++
++int ath12k_dp_rx_crypto_mic_len(struct ath12k *ar, enum hal_encrypt_type enctype);
++
+ #endif /* ATH12K_DP_RX_H */
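
The call sites in dp_rx.c initialize exactly two fields of this struct before
ath12k_dp_rx_h_fetch_info() runs, addr2_present and rx_status, and leave the
rest undefined until the fetch. A hedged sketch of that contract, with
stand-in types rather than the driver's:

	#include <stdbool.h>

	struct demo_status { unsigned int flag; };

	struct demo_info {
		struct demo_status *rx_status;
		bool addr2_present;
		unsigned char tid;	/* meaningful only after the fetch step */
	};

	static void demo_prepare(struct demo_info *info, struct demo_status *rxs)
	{
		info->addr2_present = false;	/* no addr2 captured yet */
		info->rx_status = rxs;		/* where rx status gets written */
		/* info->tid and the rest stay unset until fetch_info runs */
	}

	int main(void)
	{
		struct demo_status rxs = { 0 };
		struct demo_info info;

		demo_prepare(&info, &rxs);
		return info.addr2_present ? 1 : 0;
	}
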
+diff --git a/drivers/net/wireless/ath/ath12k/dp_tx.c b/drivers/net/wireless/ath/ath12k/dp_tx.c
+index ced232bf4aed01..f82d2c58eff3f6 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_tx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_tx.c
+@@ -229,7 +229,7 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_link_vif *arvif,
+ 	struct ath12k_skb_cb *skb_cb = ATH12K_SKB_CB(skb);
+ 	struct hal_tcl_data_cmd *hal_tcl_desc;
+ 	struct hal_tx_msdu_ext_desc *msg;
+-	struct sk_buff *skb_ext_desc;
++	struct sk_buff *skb_ext_desc = NULL;
+ 	struct hal_srng *tcl_ring;
+ 	struct ieee80211_hdr *hdr = (void *)skb->data;
+ 	struct ath12k_vif *ahvif = arvif->ahvif;
+@@ -415,18 +415,15 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_link_vif *arvif,
+ 			if (ret < 0) {
+ 				ath12k_dbg(ab, ATH12K_DBG_DP_TX,
+ 					   "Failed to add HTT meta data, dropping packet\n");
+-				kfree_skb(skb_ext_desc);
+-				goto fail_unmap_dma;
++				goto fail_free_ext_skb;
+ 			}
+ 		}
+ 
+ 		ti.paddr = dma_map_single(ab->dev, skb_ext_desc->data,
+ 					  skb_ext_desc->len, DMA_TO_DEVICE);
+ 		ret = dma_mapping_error(ab->dev, ti.paddr);
+-		if (ret) {
+-			kfree_skb(skb_ext_desc);
+-			goto fail_unmap_dma;
+-		}
++		if (ret)
++			goto fail_free_ext_skb;
+ 
+ 		ti.data_len = skb_ext_desc->len;
+ 		ti.type = HAL_TCL_DESC_TYPE_EXT_DESC;
+@@ -462,7 +459,7 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_link_vif *arvif,
+ 			ring_selector++;
+ 		}
+ 
+-		goto fail_unmap_dma;
++		goto fail_unmap_dma_ext;
+ 	}
+ 
+ 	ath12k_hal_tx_cmd_desc_setup(ab, hal_tcl_desc, &ti);
+@@ -478,13 +475,16 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_link_vif *arvif,
+ 
+ 	return 0;
+ 
+-fail_unmap_dma:
+-	dma_unmap_single(ab->dev, ti.paddr, ti.data_len, DMA_TO_DEVICE);
+-
++fail_unmap_dma_ext:
+ 	if (skb_cb->paddr_ext_desc)
+ 		dma_unmap_single(ab->dev, skb_cb->paddr_ext_desc,
+ 				 sizeof(struct hal_tx_msdu_ext_desc),
+ 				 DMA_TO_DEVICE);
++fail_free_ext_skb:
++	kfree_skb(skb_ext_desc);
++
++fail_unmap_dma:
++	dma_unmap_single(ab->dev, ti.paddr, ti.data_len, DMA_TO_DEVICE);
+ 
+ fail_remove_tx_buf:
+ 	ath12k_dp_tx_release_txbuf(dp, tx_desc, pool_id);
+@@ -585,6 +585,7 @@ ath12k_dp_tx_process_htt_tx_complete(struct ath12k_base *ab,
+ 	case HAL_WBM_REL_HTT_TX_COMP_STATUS_TTL:
+ 	case HAL_WBM_REL_HTT_TX_COMP_STATUS_REINJ:
+ 	case HAL_WBM_REL_HTT_TX_COMP_STATUS_INSPECT:
++	case HAL_WBM_REL_HTT_TX_COMP_STATUS_VDEVID_MISMATCH:
+ 		ath12k_dp_tx_free_txbuf(ab, msdu, mac_id, tx_ring);
+ 		break;
+ 	case HAL_WBM_REL_HTT_TX_COMP_STATUS_MEC_NOTIFY:
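
The reworked error path in ath12k_dp_tx() above relies on two conventions:
labels unwind in reverse order of acquisition, and skb_ext_desc starts as
NULL so the shared kfree_skb() label is safe from every goto (kfree_skb(NULL)
is a no-op). A generic, self-contained reproduction, where free()/malloc()
stand in for the skb and DMA-mapping calls and all names are hypothetical:

	#include <stdlib.h>

	static int demo_prepare_ext(void *ext) { (void)ext; return 0; }

	static int demo_tx(int ring_full)
	{
		void *ext = NULL;		/* mirrors skb_ext_desc = NULL */
		void *msdu_map = malloc(16);	/* stands in for the msdu DMA map */

		if (!msdu_map)
			return -1;

		ext = malloc(32);		/* optional extension descriptor */
		if (!ext)
			goto fail_unmap;

		if (demo_prepare_ext(ext) < 0)	/* e.g. adding HTT metadata fails */
			goto fail_free_ext;

		if (ring_full)			/* e.g. no room in the TCL ring */
			goto fail_unmap_ext;

		free(ext);
		free(msdu_map);
		return 0;

	fail_unmap_ext:
		/* would undo the ext-desc DMA mapping, then fall through */
	fail_free_ext:
		free(ext);		/* free(NULL) is a no-op, like kfree_skb */
	fail_unmap:
		free(msdu_map);
		return -1;
	}

	int main(void)
	{
		return demo_tx(1) == -1 ? 0 : 1;
	}
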
+diff --git a/drivers/net/wireless/ath/ath12k/hal.c b/drivers/net/wireless/ath/ath12k/hal.c
+index cd59ff8e6c7b0c..d00869a33fea06 100644
+--- a/drivers/net/wireless/ath/ath12k/hal.c
++++ b/drivers/net/wireless/ath/ath12k/hal.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ #include <linux/dma-mapping.h>
+ #include "hal_tx.h"
+@@ -511,11 +511,6 @@ static void ath12k_hw_qcn9274_rx_desc_get_crypto_hdr(struct hal_rx_desc *desc,
+ 	crypto_hdr[7] = HAL_RX_MPDU_INFO_PN_GET_BYTE2(desc->u.qcn9274.mpdu_start.pn[1]);
+ }
+ 
+-static u16 ath12k_hw_qcn9274_rx_desc_get_mpdu_frame_ctl(struct hal_rx_desc *desc)
+-{
+-	return __le16_to_cpu(desc->u.qcn9274.mpdu_start.frame_ctrl);
+-}
+-
+ static int ath12k_hal_srng_create_config_qcn9274(struct ath12k_base *ab)
+ {
+ 	struct ath12k_hal *hal = &ab->hal;
+@@ -552,9 +547,9 @@ static int ath12k_hal_srng_create_config_qcn9274(struct ath12k_base *ab)
+ 	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_STATUS_HP;
+ 
+ 	s = &hal->srng_config[HAL_TCL_DATA];
+-	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_BASE_LSB;
++	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_BASE_LSB(ab);
+ 	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_HP;
+-	s->reg_size[0] = HAL_TCL2_RING_BASE_LSB - HAL_TCL1_RING_BASE_LSB;
++	s->reg_size[0] = HAL_TCL2_RING_BASE_LSB(ab) - HAL_TCL1_RING_BASE_LSB(ab);
+ 	s->reg_size[1] = HAL_TCL2_RING_HP - HAL_TCL1_RING_HP;
+ 
+ 	s = &hal->srng_config[HAL_TCL_CMD];
+@@ -566,29 +561,29 @@ static int ath12k_hal_srng_create_config_qcn9274(struct ath12k_base *ab)
+ 	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL_STATUS_RING_HP;
+ 
+ 	s = &hal->srng_config[HAL_CE_SRC];
+-	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG + HAL_CE_DST_RING_BASE_LSB;
+-	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG + HAL_CE_DST_RING_HP;
+-	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_SRC_REG;
+-	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_SRC_REG;
++	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) + HAL_CE_DST_RING_BASE_LSB;
++	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) + HAL_CE_DST_RING_HP;
++	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab);
++	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab);
+ 
+ 	s = &hal->srng_config[HAL_CE_DST];
+-	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_RING_BASE_LSB;
+-	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_RING_HP;
+-	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
+-	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
++	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_RING_BASE_LSB;
++	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_RING_HP;
++	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
++	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
+ 
+ 	s = &hal->srng_config[HAL_CE_DST_STATUS];
+-	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG +
++	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) +
+ 		HAL_CE_DST_STATUS_RING_BASE_LSB;
+-	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_STATUS_RING_HP;
+-	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
+-	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
++	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_STATUS_RING_HP;
++	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
++	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
+ 
+ 	s = &hal->srng_config[HAL_WBM_IDLE_LINK];
+ 	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_WBM_REG + HAL_WBM_IDLE_LINK_RING_BASE_LSB(ab);
+@@ -736,7 +731,6 @@ const struct hal_rx_ops hal_rx_qcn9274_ops = {
+ 	.rx_desc_is_da_mcbc = ath12k_hw_qcn9274_rx_desc_is_da_mcbc,
+ 	.rx_desc_get_dot11_hdr = ath12k_hw_qcn9274_rx_desc_get_dot11_hdr,
+ 	.rx_desc_get_crypto_header = ath12k_hw_qcn9274_rx_desc_get_crypto_hdr,
+-	.rx_desc_get_mpdu_frame_ctl = ath12k_hw_qcn9274_rx_desc_get_mpdu_frame_ctl,
+ 	.dp_rx_h_msdu_done = ath12k_hw_qcn9274_dp_rx_h_msdu_done,
+ 	.dp_rx_h_l4_cksum_fail = ath12k_hw_qcn9274_dp_rx_h_l4_cksum_fail,
+ 	.dp_rx_h_ip_cksum_fail = ath12k_hw_qcn9274_dp_rx_h_ip_cksum_fail,
+@@ -975,11 +969,6 @@ ath12k_hw_qcn9274_compact_rx_desc_get_crypto_hdr(struct hal_rx_desc *desc,
+ 		HAL_RX_MPDU_INFO_PN_GET_BYTE2(desc->u.qcn9274_compact.mpdu_start.pn[1]);
+ }
+ 
+-static u16 ath12k_hw_qcn9274_compact_rx_desc_get_mpdu_frame_ctl(struct hal_rx_desc *desc)
+-{
+-	return __le16_to_cpu(desc->u.qcn9274_compact.mpdu_start.frame_ctrl);
+-}
+-
+ static bool ath12k_hw_qcn9274_compact_dp_rx_h_msdu_done(struct hal_rx_desc *desc)
+ {
+ 	return !!le32_get_bits(desc->u.qcn9274_compact.msdu_end.info14,
+@@ -1080,8 +1069,6 @@ const struct hal_rx_ops hal_rx_qcn9274_compact_ops = {
+ 	.rx_desc_is_da_mcbc = ath12k_hw_qcn9274_compact_rx_desc_is_da_mcbc,
+ 	.rx_desc_get_dot11_hdr = ath12k_hw_qcn9274_compact_rx_desc_get_dot11_hdr,
+ 	.rx_desc_get_crypto_header = ath12k_hw_qcn9274_compact_rx_desc_get_crypto_hdr,
+-	.rx_desc_get_mpdu_frame_ctl =
+-		ath12k_hw_qcn9274_compact_rx_desc_get_mpdu_frame_ctl,
+ 	.dp_rx_h_msdu_done = ath12k_hw_qcn9274_compact_dp_rx_h_msdu_done,
+ 	.dp_rx_h_l4_cksum_fail = ath12k_hw_qcn9274_compact_dp_rx_h_l4_cksum_fail,
+ 	.dp_rx_h_ip_cksum_fail = ath12k_hw_qcn9274_compact_dp_rx_h_ip_cksum_fail,
+@@ -1330,11 +1317,6 @@ static void ath12k_hw_wcn7850_rx_desc_get_crypto_hdr(struct hal_rx_desc *desc,
+ 	crypto_hdr[7] = HAL_RX_MPDU_INFO_PN_GET_BYTE2(desc->u.wcn7850.mpdu_start.pn[1]);
+ }
+ 
+-static u16 ath12k_hw_wcn7850_rx_desc_get_mpdu_frame_ctl(struct hal_rx_desc *desc)
+-{
+-	return __le16_to_cpu(desc->u.wcn7850.mpdu_start.frame_ctrl);
+-}
+-
+ static int ath12k_hal_srng_create_config_wcn7850(struct ath12k_base *ab)
+ {
+ 	struct ath12k_hal *hal = &ab->hal;
+@@ -1371,9 +1353,9 @@ static int ath12k_hal_srng_create_config_wcn7850(struct ath12k_base *ab)
+ 
+ 	s = &hal->srng_config[HAL_TCL_DATA];
+ 	s->max_rings = 5;
+-	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_BASE_LSB;
++	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_BASE_LSB(ab);
+ 	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_HP;
+-	s->reg_size[0] = HAL_TCL2_RING_BASE_LSB - HAL_TCL1_RING_BASE_LSB;
++	s->reg_size[0] = HAL_TCL2_RING_BASE_LSB(ab) - HAL_TCL1_RING_BASE_LSB(ab);
+ 	s->reg_size[1] = HAL_TCL2_RING_HP - HAL_TCL1_RING_HP;
+ 
+ 	s = &hal->srng_config[HAL_TCL_CMD];
+@@ -1386,31 +1368,31 @@ static int ath12k_hal_srng_create_config_wcn7850(struct ath12k_base *ab)
+ 
+ 	s = &hal->srng_config[HAL_CE_SRC];
+ 	s->max_rings = 12;
+-	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG + HAL_CE_DST_RING_BASE_LSB;
+-	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG + HAL_CE_DST_RING_HP;
+-	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_SRC_REG;
+-	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_SRC_REG;
++	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) + HAL_CE_DST_RING_BASE_LSB;
++	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) + HAL_CE_DST_RING_HP;
++	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab);
++	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab);
+ 
+ 	s = &hal->srng_config[HAL_CE_DST];
+ 	s->max_rings = 12;
+-	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_RING_BASE_LSB;
+-	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_RING_HP;
+-	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
+-	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
++	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_RING_BASE_LSB;
++	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_RING_HP;
++	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
++	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
+ 
+ 	s = &hal->srng_config[HAL_CE_DST_STATUS];
+ 	s->max_rings = 12;
+-	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG +
++	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) +
+ 		HAL_CE_DST_STATUS_RING_BASE_LSB;
+-	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_STATUS_RING_HP;
+-	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
+-	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
++	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_STATUS_RING_HP;
++	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
++	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
+ 
+ 	s = &hal->srng_config[HAL_WBM_IDLE_LINK];
+ 	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_WBM_REG + HAL_WBM_IDLE_LINK_RING_BASE_LSB(ab);
+@@ -1555,7 +1537,6 @@ const struct hal_rx_ops hal_rx_wcn7850_ops = {
+ 	.rx_desc_is_da_mcbc = ath12k_hw_wcn7850_rx_desc_is_da_mcbc,
+ 	.rx_desc_get_dot11_hdr = ath12k_hw_wcn7850_rx_desc_get_dot11_hdr,
+ 	.rx_desc_get_crypto_header = ath12k_hw_wcn7850_rx_desc_get_crypto_hdr,
+-	.rx_desc_get_mpdu_frame_ctl = ath12k_hw_wcn7850_rx_desc_get_mpdu_frame_ctl,
+ 	.dp_rx_h_msdu_done = ath12k_hw_wcn7850_dp_rx_h_msdu_done,
+ 	.dp_rx_h_l4_cksum_fail = ath12k_hw_wcn7850_dp_rx_h_l4_cksum_fail,
+ 	.dp_rx_h_ip_cksum_fail = ath12k_hw_wcn7850_dp_rx_h_ip_cksum_fail,
+@@ -1756,7 +1737,7 @@ static void ath12k_hal_srng_src_hw_init(struct ath12k_base *ab,
+ 			      HAL_TCL1_RING_BASE_MSB_RING_BASE_ADDR_MSB) |
+ 	      u32_encode_bits((srng->entry_size * srng->num_entries),
+ 			      HAL_TCL1_RING_BASE_MSB_RING_SIZE);
+-	ath12k_hif_write32(ab, reg_base + HAL_TCL1_RING_BASE_MSB_OFFSET, val);
++	ath12k_hif_write32(ab, reg_base + HAL_TCL1_RING_BASE_MSB_OFFSET(ab), val);
+ 
+ 	val = u32_encode_bits(srng->entry_size, HAL_REO1_RING_ID_ENTRY_SIZE);
+ 	ath12k_hif_write32(ab, reg_base + HAL_TCL1_RING_ID_OFFSET(ab), val);
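
The hal.c hunks above replace compile-time CE register constants with values
read from hw_params, so the same HAL setup code serves parts whose CE block
sits at a different address. A toy version of the lookup; the four addresses
are the qcn9274 values added by the patch, while the struct and macro names
are stand-ins:

	#include <stdint.h>
	#include <stdio.h>

	struct demo_regs { uint32_t ce0_src, ce0_dst, ce1_src, ce1_dst; };
	struct demo_ab { const struct demo_regs *regs; };

	#define DEMO_CE0_SRC(ab) ((ab)->regs->ce0_src)
	#define DEMO_CE1_SRC(ab) ((ab)->regs->ce1_src)

	int main(void)
	{
		static const struct demo_regs qcn9274 = {
			0x01b80000, 0x01b81000, 0x01b82000, 0x01b83000,
		};
		struct demo_ab ab = { .regs = &qcn9274 };

		/* reg_size, as in the srng config, is the stride between
		 * consecutive CE blocks: here 0x2000.
		 */
		printf("stride = 0x%x\n",
		       (unsigned int)(DEMO_CE1_SRC(&ab) - DEMO_CE0_SRC(&ab)));
		return 0;
	}
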
+diff --git a/drivers/net/wireless/ath/ath12k/hal.h b/drivers/net/wireless/ath/ath12k/hal.h
+index 94e2e873595831..c8205672cd3dd5 100644
+--- a/drivers/net/wireless/ath/ath12k/hal.h
++++ b/drivers/net/wireless/ath/ath12k/hal.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH12K_HAL_H
+@@ -44,10 +44,14 @@ struct ath12k_base;
+ #define HAL_SEQ_WCSS_UMAC_OFFSET		0x00a00000
+ #define HAL_SEQ_WCSS_UMAC_REO_REG		0x00a38000
+ #define HAL_SEQ_WCSS_UMAC_TCL_REG		0x00a44000
+-#define HAL_SEQ_WCSS_UMAC_CE0_SRC_REG		0x01b80000
+-#define HAL_SEQ_WCSS_UMAC_CE0_DST_REG		0x01b81000
+-#define HAL_SEQ_WCSS_UMAC_CE1_SRC_REG		0x01b82000
+-#define HAL_SEQ_WCSS_UMAC_CE1_DST_REG		0x01b83000
++#define HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) \
++	((ab)->hw_params->regs->hal_umac_ce0_src_reg_base)
++#define HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) \
++	((ab)->hw_params->regs->hal_umac_ce0_dest_reg_base)
++#define HAL_SEQ_WCSS_UMAC_CE1_SRC_REG(ab) \
++	((ab)->hw_params->regs->hal_umac_ce1_src_reg_base)
++#define HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) \
++	((ab)->hw_params->regs->hal_umac_ce1_dest_reg_base)
+ #define HAL_SEQ_WCSS_UMAC_WBM_REG		0x00a34000
+ 
+ #define HAL_CE_WFSS_CE_REG_BASE			0x01b80000
+@@ -57,8 +61,10 @@ struct ath12k_base;
+ /* SW2TCL(x) R0 ring configuration address */
+ #define HAL_TCL1_RING_CMN_CTRL_REG		0x00000020
+ #define HAL_TCL1_RING_DSCP_TID_MAP		0x00000240
+-#define HAL_TCL1_RING_BASE_LSB			0x00000900
+-#define HAL_TCL1_RING_BASE_MSB			0x00000904
++#define HAL_TCL1_RING_BASE_LSB(ab) \
++	((ab)->hw_params->regs->hal_tcl1_ring_base_lsb)
++#define HAL_TCL1_RING_BASE_MSB(ab) \
++	((ab)->hw_params->regs->hal_tcl1_ring_base_msb)
+ #define HAL_TCL1_RING_ID(ab)			((ab)->hw_params->regs->hal_tcl1_ring_id)
+ #define HAL_TCL1_RING_MISC(ab) \
+ 	((ab)->hw_params->regs->hal_tcl1_ring_misc)
+@@ -76,30 +82,31 @@ struct ath12k_base;
+ 	((ab)->hw_params->regs->hal_tcl1_ring_msi1_base_msb)
+ #define HAL_TCL1_RING_MSI1_DATA(ab) \
+ 	((ab)->hw_params->regs->hal_tcl1_ring_msi1_data)
+-#define HAL_TCL2_RING_BASE_LSB			0x00000978
++#define HAL_TCL2_RING_BASE_LSB(ab) \
++	((ab)->hw_params->regs->hal_tcl2_ring_base_lsb)
+ #define HAL_TCL_RING_BASE_LSB(ab) \
+ 	((ab)->hw_params->regs->hal_tcl_ring_base_lsb)
+ 
+-#define HAL_TCL1_RING_MSI1_BASE_LSB_OFFSET(ab)				\
+-	(HAL_TCL1_RING_MSI1_BASE_LSB(ab) - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_MSI1_BASE_MSB_OFFSET(ab)				\
+-	(HAL_TCL1_RING_MSI1_BASE_MSB(ab) - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_MSI1_DATA_OFFSET(ab)				\
+-	(HAL_TCL1_RING_MSI1_DATA(ab) - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_BASE_MSB_OFFSET				\
+-	(HAL_TCL1_RING_BASE_MSB - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_ID_OFFSET(ab)				\
+-	(HAL_TCL1_RING_ID(ab) - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_CONSR_INT_SETUP_IX0_OFFSET(ab)			\
+-	(HAL_TCL1_RING_CONSUMER_INT_SETUP_IX0(ab) - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_CONSR_INT_SETUP_IX1_OFFSET(ab) \
+-		(HAL_TCL1_RING_CONSUMER_INT_SETUP_IX1(ab) - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_TP_ADDR_LSB_OFFSET(ab) \
+-		(HAL_TCL1_RING_TP_ADDR_LSB(ab) - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_TP_ADDR_MSB_OFFSET(ab) \
+-		(HAL_TCL1_RING_TP_ADDR_MSB(ab) - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_MISC_OFFSET(ab) \
+-		(HAL_TCL1_RING_MISC(ab) - HAL_TCL1_RING_BASE_LSB)
++#define HAL_TCL1_RING_MSI1_BASE_LSB_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_MSI1_BASE_LSB(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_MSI1_BASE_MSB_OFFSET(ab)	({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_MSI1_BASE_MSB(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_MSI1_DATA_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_MSI1_DATA(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_BASE_MSB_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_BASE_MSB(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_ID_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_ID(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_CONSR_INT_SETUP_IX0_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_CONSUMER_INT_SETUP_IX0(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_CONSR_INT_SETUP_IX1_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_CONSUMER_INT_SETUP_IX1(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_TP_ADDR_LSB_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_TP_ADDR_LSB(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_TP_ADDR_MSB_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_TP_ADDR_MSB(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_MISC_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_MISC(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
+ 
+ /* SW2TCL(x) R2 ring pointers (head/tail) address */
+ #define HAL_TCL1_RING_HP			0x00002000
+@@ -1068,7 +1075,6 @@ struct hal_rx_ops {
+ 	bool (*rx_desc_is_da_mcbc)(struct hal_rx_desc *desc);
+ 	void (*rx_desc_get_dot11_hdr)(struct hal_rx_desc *desc,
+ 				      struct ieee80211_hdr *hdr);
+-	u16 (*rx_desc_get_mpdu_frame_ctl)(struct hal_rx_desc *desc);
+ 	void (*rx_desc_get_crypto_header)(struct hal_rx_desc *desc,
+ 					  u8 *crypto_hdr,
+ 					  enum hal_encrypt_type enctype);
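
The rewritten OFFSET macros above use a GNU statement expression with typeof
so that the ab argument is expanded once even though the body needs it twice,
the usual kernel guard against double evaluation of macro arguments with side
effects. A minimal reproduction of the shape (GNU C, as in the kernel; the
demo names and offsets are made up):

	#include <stdint.h>
	#include <stdio.h>

	struct demo_ab { uint32_t msi_base, ring_base; };

	static uint32_t demo_msi_base(const struct demo_ab *ab)
	{
		return ab->msi_base;
	}

	static uint32_t demo_ring_base(const struct demo_ab *ab)
	{
		return ab->ring_base;
	}

	/* `ab` is evaluated once into _ab, then used twice safely. */
	#define DEMO_MSI_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
		(demo_msi_base(_ab) - demo_ring_base(_ab)); })

	int main(void)
	{
		struct demo_ab ab = { .msi_base = 0x948, .ring_base = 0x900 };

		printf("offset = 0x%x\n",
		       (unsigned int)DEMO_MSI_OFFSET(&ab)); /* 0x48 */
		return 0;
	}
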
+diff --git a/drivers/net/wireless/ath/ath12k/hal_desc.h b/drivers/net/wireless/ath/ath12k/hal_desc.h
+index 3e8983b85de863..63d279fab32249 100644
+--- a/drivers/net/wireless/ath/ath12k/hal_desc.h
++++ b/drivers/net/wireless/ath/ath12k/hal_desc.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022, 2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2022, 2024-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ #include "core.h"
+ 
+@@ -1298,6 +1298,7 @@ enum hal_wbm_htt_tx_comp_status {
+ 	HAL_WBM_REL_HTT_TX_COMP_STATUS_REINJ,
+ 	HAL_WBM_REL_HTT_TX_COMP_STATUS_INSPECT,
+ 	HAL_WBM_REL_HTT_TX_COMP_STATUS_MEC_NOTIFY,
++	HAL_WBM_REL_HTT_TX_COMP_STATUS_VDEVID_MISMATCH,
+ 	HAL_WBM_REL_HTT_TX_COMP_STATUS_MAX,
+ };
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/hal_rx.h b/drivers/net/wireless/ath/ath12k/hal_rx.h
+index 6bdcd0867d86e3..c753eb2a03ad24 100644
+--- a/drivers/net/wireless/ath/ath12k/hal_rx.h
++++ b/drivers/net/wireless/ath/ath12k/hal_rx.h
+@@ -108,11 +108,12 @@ enum hal_rx_mon_status {
+ 	HAL_RX_MON_STATUS_PPDU_DONE,
+ 	HAL_RX_MON_STATUS_BUF_DONE,
+ 	HAL_RX_MON_STATUS_BUF_ADDR,
++	HAL_RX_MON_STATUS_MPDU_START,
+ 	HAL_RX_MON_STATUS_MPDU_END,
+ 	HAL_RX_MON_STATUS_MSDU_END,
+ };
+ 
+-#define HAL_RX_MAX_MPDU		256
++#define HAL_RX_MAX_MPDU				1024
+ #define HAL_RX_NUM_WORDS_PER_PPDU_BITMAP	(HAL_RX_MAX_MPDU >> 5)
+ 
+ struct hal_rx_user_status {
+@@ -506,6 +507,18 @@ struct hal_rx_mpdu_start {
+ 	__le32 rsvd2[16];
+ } __packed;
+ 
++struct hal_rx_msdu_end {
++	__le32 info0;
++	__le32 rsvd0[9];
++	__le16 info00;
++	__le16 info01;
++	__le32 rsvd00[8];
++	__le32 info1;
++	__le32 rsvd1[10];
++	__le32 info2;
++	__le32 rsvd2;
++} __packed;
++
+ #define HAL_RX_PPDU_END_DURATION	GENMASK(23, 0)
+ struct hal_rx_ppdu_end_duration {
+ 	__le32 rsvd0[9];
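
Raising HAL_RX_MAX_MPDU from 256 to 1024 quadruples the per-PPDU MPDU bitmap,
since the word count is derived as MAX_MPDU >> 5 (one bit per MPDU, 32 bits
per u32): 1024 >> 5 gives 32 words where the old limit needed 8. The
arithmetic, checkable in isolation:

	#include <stdio.h>

	#define DEMO_MAX_MPDU		1024
	#define DEMO_WORDS_PER_BITMAP	(DEMO_MAX_MPDU >> 5) /* 32 bits/word */

	int main(void)
	{
		/* 256 >> 5 == 8 before the patch; 1024 >> 5 == 32 after. */
		printf("%d u32 words, %zu bytes\n",
		       DEMO_WORDS_PER_BITMAP,
		       sizeof(unsigned int[DEMO_WORDS_PER_BITMAP]));
		return 0;
	}
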
+diff --git a/drivers/net/wireless/ath/ath12k/hw.c b/drivers/net/wireless/ath/ath12k/hw.c
+index a106ebed7870de..1bfb11bae7add3 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.c
++++ b/drivers/net/wireless/ath/ath12k/hw.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include <linux/types.h>
+@@ -619,6 +619,9 @@ static const struct ath12k_hw_regs qcn9274_v1_regs = {
+ 	.hal_tcl1_ring_msi1_base_msb = 0x0000094c,
+ 	.hal_tcl1_ring_msi1_data = 0x00000950,
+ 	.hal_tcl_ring_base_lsb = 0x00000b58,
++	.hal_tcl1_ring_base_lsb = 0x00000900,
++	.hal_tcl1_ring_base_msb = 0x00000904,
++	.hal_tcl2_ring_base_lsb = 0x00000978,
+ 
+ 	/* TCL STATUS ring address */
+ 	.hal_tcl_status_ring_base_lsb = 0x00000d38,
+@@ -681,6 +684,14 @@ static const struct ath12k_hw_regs qcn9274_v1_regs = {
+ 
+ 	/* REO status ring address */
+ 	.hal_reo_status_ring_base = 0x00000a84,
++
++	/* CE base address */
++	.hal_umac_ce0_src_reg_base = 0x01b80000,
++	.hal_umac_ce0_dest_reg_base = 0x01b81000,
++	.hal_umac_ce1_src_reg_base = 0x01b82000,
++	.hal_umac_ce1_dest_reg_base = 0x01b83000,
++
++	.gcc_gcc_pcie_hot_rst = 0x1e38338,
+ };
+ 
+ static const struct ath12k_hw_regs qcn9274_v2_regs = {
+@@ -695,6 +706,9 @@ static const struct ath12k_hw_regs qcn9274_v2_regs = {
+ 	.hal_tcl1_ring_msi1_base_msb = 0x0000094c,
+ 	.hal_tcl1_ring_msi1_data = 0x00000950,
+ 	.hal_tcl_ring_base_lsb = 0x00000b58,
++	.hal_tcl1_ring_base_lsb = 0x00000900,
++	.hal_tcl1_ring_base_msb = 0x00000904,
++	.hal_tcl2_ring_base_lsb = 0x00000978,
+ 
+ 	/* TCL STATUS ring address */
+ 	.hal_tcl_status_ring_base_lsb = 0x00000d38,
+@@ -761,6 +775,14 @@ static const struct ath12k_hw_regs qcn9274_v2_regs = {
+ 
+ 	/* REO status ring address */
+ 	.hal_reo_status_ring_base = 0x00000aa0,
++
++	/* CE base address */
++	.hal_umac_ce0_src_reg_base = 0x01b80000,
++	.hal_umac_ce0_dest_reg_base = 0x01b81000,
++	.hal_umac_ce1_src_reg_base = 0x01b82000,
++	.hal_umac_ce1_dest_reg_base = 0x01b83000,
++
++	.gcc_gcc_pcie_hot_rst = 0x1e38338,
+ };
+ 
+ static const struct ath12k_hw_regs wcn7850_regs = {
+@@ -775,6 +797,9 @@ static const struct ath12k_hw_regs wcn7850_regs = {
+ 	.hal_tcl1_ring_msi1_base_msb = 0x0000094c,
+ 	.hal_tcl1_ring_msi1_data = 0x00000950,
+ 	.hal_tcl_ring_base_lsb = 0x00000b58,
++	.hal_tcl1_ring_base_lsb = 0x00000900,
++	.hal_tcl1_ring_base_msb = 0x00000904,
++	.hal_tcl2_ring_base_lsb = 0x00000978,
+ 
+ 	/* TCL STATUS ring address */
+ 	.hal_tcl_status_ring_base_lsb = 0x00000d38,
+@@ -837,6 +862,14 @@ static const struct ath12k_hw_regs wcn7850_regs = {
+ 
+ 	/* REO status ring address */
+ 	.hal_reo_status_ring_base = 0x00000a84,
++
++	/* CE base address */
++	.hal_umac_ce0_src_reg_base = 0x01b80000,
++	.hal_umac_ce0_dest_reg_base = 0x01b81000,
++	.hal_umac_ce1_src_reg_base = 0x01b82000,
++	.hal_umac_ce1_dest_reg_base = 0x01b83000,
++
++	.gcc_gcc_pcie_hot_rst = 0x1e40304,
+ };
+ 
+ static const struct ath12k_hw_hal_params ath12k_hw_hal_params_qcn9274 = {
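
The CE bases are duplicated into each chip's register table above even though
the values currently match across qcn9274 and wcn7850, because the tables
already diverge elsewhere: the new gcc_gcc_pcie_hot_rst field is 0x1e38338 on
qcn9274 but 0x1e40304 on wcn7850, which is exactly why the former PCI-header
constant had to move into hw_params. A toy per-chip selection, with
hypothetical names:

	#include <stdint.h>
	#include <stdio.h>

	struct demo_regs { uint32_t pcie_hot_rst; };

	static const struct demo_regs demo_qcn9274 = { .pcie_hot_rst = 0x1e38338 };
	static const struct demo_regs demo_wcn7850 = { .pcie_hot_rst = 0x1e40304 };

	int main(void)
	{
		/* chosen at probe time from the detected chip's hw_params */
		const struct demo_regs *regs = &demo_wcn7850;

		printf("hot reset reg at 0x%x\n",
		       (unsigned int)regs->pcie_hot_rst);
		return 0;
	}
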
+diff --git a/drivers/net/wireless/ath/ath12k/hw.h b/drivers/net/wireless/ath/ath12k/hw.h
+index 8d52182e28aef4..862b11325a9021 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.h
++++ b/drivers/net/wireless/ath/ath12k/hw.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH12K_HW_H
+@@ -293,6 +293,9 @@ struct ath12k_hw_regs {
+ 	u32 hal_tcl1_ring_msi1_base_msb;
+ 	u32 hal_tcl1_ring_msi1_data;
+ 	u32 hal_tcl_ring_base_lsb;
++	u32 hal_tcl1_ring_base_lsb;
++	u32 hal_tcl1_ring_base_msb;
++	u32 hal_tcl2_ring_base_lsb;
+ 
+ 	u32 hal_tcl_status_ring_base_lsb;
+ 
+@@ -316,6 +319,11 @@ struct ath12k_hw_regs {
+ 	u32 pcie_qserdes_sysclk_en_sel;
+ 	u32 pcie_pcs_osc_dtct_config_base;
+ 
++	u32 hal_umac_ce0_src_reg_base;
++	u32 hal_umac_ce0_dest_reg_base;
++	u32 hal_umac_ce1_src_reg_base;
++	u32 hal_umac_ce1_dest_reg_base;
++
+ 	u32 hal_ppe_rel_ring_base;
+ 
+ 	u32 hal_reo2_ring_base;
+@@ -347,6 +355,8 @@ struct ath12k_hw_regs {
+ 	u32 hal_reo_cmd_ring_base;
+ 
+ 	u32 hal_reo_status_ring_base;
++
++	u32 gcc_gcc_pcie_hot_rst;
+ };
+ 
+ static inline const char *ath12k_bd_ie_type_str(enum ath12k_bd_ie_type type)
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index dfa05f0ee6c9f7..331bcf5e6c4cce 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -229,7 +229,8 @@ ath12k_phymodes[NUM_NL80211_BANDS][ATH12K_CHAN_WIDTH_NUM] = {
+ const struct htt_rx_ring_tlv_filter ath12k_mac_mon_status_filter_default = {
+ 	.rx_filter = HTT_RX_FILTER_TLV_FLAGS_MPDU_START |
+ 		     HTT_RX_FILTER_TLV_FLAGS_PPDU_END |
+-		     HTT_RX_FILTER_TLV_FLAGS_PPDU_END_STATUS_DONE,
++		     HTT_RX_FILTER_TLV_FLAGS_PPDU_END_STATUS_DONE |
++		     HTT_RX_FILTER_TLV_FLAGS_PPDU_START_USER_INFO,
+ 	.pkt_filter_flags0 = HTT_RX_FP_MGMT_FILTER_FLAGS0,
+ 	.pkt_filter_flags1 = HTT_RX_FP_MGMT_FILTER_FLAGS1,
+ 	.pkt_filter_flags2 = HTT_RX_FP_CTRL_FILTER_FLASG2,
+@@ -874,12 +875,12 @@ static bool ath12k_mac_band_match(enum nl80211_band band1, enum WMI_HOST_WLAN_BA
+ {
+ 	switch (band1) {
+ 	case NL80211_BAND_2GHZ:
+-		if (band2 & WMI_HOST_WLAN_2G_CAP)
++		if (band2 & WMI_HOST_WLAN_2GHZ_CAP)
+ 			return true;
+ 		break;
+ 	case NL80211_BAND_5GHZ:
+ 	case NL80211_BAND_6GHZ:
+-		if (band2 & WMI_HOST_WLAN_5G_CAP)
++		if (band2 & WMI_HOST_WLAN_5GHZ_CAP)
+ 			return true;
+ 		break;
+ 	default:
+@@ -980,7 +981,7 @@ static int ath12k_mac_txpower_recalc(struct ath12k *ar)
+ 	ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "txpower to set in hw %d\n",
+ 		   txpower / 2);
+ 
+-	if ((pdev->cap.supported_bands & WMI_HOST_WLAN_2G_CAP) &&
++	if ((pdev->cap.supported_bands & WMI_HOST_WLAN_2GHZ_CAP) &&
+ 	    ar->txpower_limit_2g != txpower) {
+ 		param = WMI_PDEV_PARAM_TXPOWER_LIMIT2G;
+ 		ret = ath12k_wmi_pdev_set_param(ar, param,
+@@ -990,7 +991,7 @@ static int ath12k_mac_txpower_recalc(struct ath12k *ar)
+ 		ar->txpower_limit_2g = txpower;
+ 	}
+ 
+-	if ((pdev->cap.supported_bands & WMI_HOST_WLAN_5G_CAP) &&
++	if ((pdev->cap.supported_bands & WMI_HOST_WLAN_5GHZ_CAP) &&
+ 	    ar->txpower_limit_5g != txpower) {
+ 		param = WMI_PDEV_PARAM_TXPOWER_LIMIT5G;
+ 		ret = ath12k_wmi_pdev_set_param(ar, param,
+@@ -1272,12 +1273,12 @@ static int ath12k_mac_monitor_vdev_create(struct ath12k *ar)
+ 	arg.pdev_id = pdev->pdev_id;
+ 	arg.if_stats_id = ATH12K_INVAL_VDEV_STATS_ID;
+ 
+-	if (pdev->cap.supported_bands & WMI_HOST_WLAN_2G_CAP) {
++	if (pdev->cap.supported_bands & WMI_HOST_WLAN_2GHZ_CAP) {
+ 		arg.chains[NL80211_BAND_2GHZ].tx = ar->num_tx_chains;
+ 		arg.chains[NL80211_BAND_2GHZ].rx = ar->num_rx_chains;
+ 	}
+ 
+-	if (pdev->cap.supported_bands & WMI_HOST_WLAN_5G_CAP) {
++	if (pdev->cap.supported_bands & WMI_HOST_WLAN_5GHZ_CAP) {
+ 		arg.chains[NL80211_BAND_5GHZ].tx = ar->num_tx_chains;
+ 		arg.chains[NL80211_BAND_5GHZ].rx = ar->num_rx_chains;
+ 	}
+@@ -3988,7 +3989,7 @@ static void ath12k_mac_bss_info_changed(struct ath12k *ar,
+ 		else
+ 			rateidx = ffs(info->basic_rates) - 1;
+ 
+-		if (ar->pdev->cap.supported_bands & WMI_HOST_WLAN_5G_CAP)
++		if (ar->pdev->cap.supported_bands & WMI_HOST_WLAN_5GHZ_CAP)
+ 			rateidx += ATH12K_MAC_FIRST_OFDM_RATE_IDX;
+ 
+ 		bitrate = ath12k_legacy_rates[rateidx].bitrate;
+@@ -4162,9 +4163,9 @@ ath12k_mac_select_scan_device(struct ieee80211_hw *hw,
+ 	 * split the hw request and perform multiple scans
+ 	 */
+ 
+-	if (center_freq < ATH12K_MIN_5G_FREQ)
++	if (center_freq < ATH12K_MIN_5GHZ_FREQ)
+ 		band = NL80211_BAND_2GHZ;
+-	else if (center_freq < ATH12K_MIN_6G_FREQ)
++	else if (center_freq < ATH12K_MIN_6GHZ_FREQ)
+ 		band = NL80211_BAND_5GHZ;
+ 	else
+ 		band = NL80211_BAND_6GHZ;
+@@ -4605,7 +4606,6 @@ static int ath12k_install_key(struct ath12k_link_vif *arvif,
+ 		.macaddr = macaddr,
+ 	};
+ 	struct ath12k_vif *ahvif = arvif->ahvif;
+-	struct ieee80211_vif *vif = ath12k_ahvif_to_vif(ahvif);
+ 
+ 	lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy);
+ 
+@@ -4624,8 +4624,8 @@ static int ath12k_install_key(struct ath12k_link_vif *arvif,
+ 
+ 	switch (key->cipher) {
+ 	case WLAN_CIPHER_SUITE_CCMP:
++	case WLAN_CIPHER_SUITE_CCMP_256:
+ 		arg.key_cipher = WMI_CIPHER_AES_CCM;
+-		/* TODO: Re-check if flag is valid */
+ 		key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV_MGMT;
+ 		break;
+ 	case WLAN_CIPHER_SUITE_TKIP:
+@@ -4633,12 +4633,10 @@ static int ath12k_install_key(struct ath12k_link_vif *arvif,
+ 		arg.key_txmic_len = 8;
+ 		arg.key_rxmic_len = 8;
+ 		break;
+-	case WLAN_CIPHER_SUITE_CCMP_256:
+-		arg.key_cipher = WMI_CIPHER_AES_CCM;
+-		break;
+ 	case WLAN_CIPHER_SUITE_GCMP:
+ 	case WLAN_CIPHER_SUITE_GCMP_256:
+ 		arg.key_cipher = WMI_CIPHER_AES_GCM;
++		key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV_MGMT;
+ 		break;
+ 	default:
+ 		ath12k_warn(ar->ab, "cipher %d is not supported\n", key->cipher);
+@@ -4658,7 +4656,7 @@ static int ath12k_install_key(struct ath12k_link_vif *arvif,
+ 	if (!wait_for_completion_timeout(&ar->install_key_done, 1 * HZ))
+ 		return -ETIMEDOUT;
+ 
+-	if (ether_addr_equal(macaddr, vif->addr))
++	if (ether_addr_equal(macaddr, arvif->bssid))
+ 		ahvif->key_cipher = key->cipher;
+ 
+ 	return ar->install_key_status ? -EINVAL : 0;
+@@ -6475,7 +6473,7 @@ static void ath12k_mac_setup_ht_vht_cap(struct ath12k *ar,
+ 	rate_cap_tx_chainmask = ar->cfg_tx_chainmask >> cap->tx_chain_mask_shift;
+ 	rate_cap_rx_chainmask = ar->cfg_rx_chainmask >> cap->rx_chain_mask_shift;
+ 
+-	if (cap->supported_bands & WMI_HOST_WLAN_2G_CAP) {
++	if (cap->supported_bands & WMI_HOST_WLAN_2GHZ_CAP) {
+ 		band = &ar->mac.sbands[NL80211_BAND_2GHZ];
+ 		ht_cap = cap->band[NL80211_BAND_2GHZ].ht_cap_info;
+ 		if (ht_cap_info)
+@@ -6484,7 +6482,7 @@ static void ath12k_mac_setup_ht_vht_cap(struct ath12k *ar,
+ 						    rate_cap_rx_chainmask);
+ 	}
+ 
+-	if (cap->supported_bands & WMI_HOST_WLAN_5G_CAP &&
++	if (cap->supported_bands & WMI_HOST_WLAN_5GHZ_CAP &&
+ 	    (ar->ab->hw_params->single_pdev_only ||
+ 	     !ar->supports_6ghz)) {
+ 		band = &ar->mac.sbands[NL80211_BAND_5GHZ];
+@@ -6893,7 +6891,7 @@ static void ath12k_mac_setup_sband_iftype_data(struct ath12k *ar,
+ 	enum nl80211_band band;
+ 	int count;
+ 
+-	if (cap->supported_bands & WMI_HOST_WLAN_2G_CAP) {
++	if (cap->supported_bands & WMI_HOST_WLAN_2GHZ_CAP) {
+ 		band = NL80211_BAND_2GHZ;
+ 		count = ath12k_mac_copy_sband_iftype_data(ar, cap,
+ 							  ar->mac.iftype[band],
+@@ -6903,7 +6901,7 @@ static void ath12k_mac_setup_sband_iftype_data(struct ath12k *ar,
+ 						 count);
+ 	}
+ 
+-	if (cap->supported_bands & WMI_HOST_WLAN_5G_CAP) {
++	if (cap->supported_bands & WMI_HOST_WLAN_5GHZ_CAP) {
+ 		band = NL80211_BAND_5GHZ;
+ 		count = ath12k_mac_copy_sband_iftype_data(ar, cap,
+ 							  ar->mac.iftype[band],
+@@ -6913,7 +6911,7 @@ static void ath12k_mac_setup_sband_iftype_data(struct ath12k *ar,
+ 						 count);
+ 	}
+ 
+-	if (cap->supported_bands & WMI_HOST_WLAN_5G_CAP &&
++	if (cap->supported_bands & WMI_HOST_WLAN_5GHZ_CAP &&
+ 	    ar->supports_6ghz) {
+ 		band = NL80211_BAND_6GHZ;
+ 		count = ath12k_mac_copy_sband_iftype_data(ar, cap,
+@@ -7042,6 +7040,8 @@ static int ath12k_mac_mgmt_tx_wmi(struct ath12k *ar, struct ath12k_link_vif *arv
+ 	struct ath12k_base *ab = ar->ab;
+ 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ 	struct ieee80211_tx_info *info;
++	enum hal_encrypt_type enctype;
++	unsigned int mic_len;
+ 	dma_addr_t paddr;
+ 	int buf_id;
+ 	int ret;
+@@ -7057,12 +7057,16 @@ static int ath12k_mac_mgmt_tx_wmi(struct ath12k *ar, struct ath12k_link_vif *arv
+ 		return -ENOSPC;
+ 
+ 	info = IEEE80211_SKB_CB(skb);
+-	if (!(info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP)) {
++	if ((ATH12K_SKB_CB(skb)->flags & ATH12K_SKB_CIPHER_SET) &&
++	    !(info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP)) {
+ 		if ((ieee80211_is_action(hdr->frame_control) ||
+ 		     ieee80211_is_deauth(hdr->frame_control) ||
+ 		     ieee80211_is_disassoc(hdr->frame_control)) &&
+ 		     ieee80211_has_protected(hdr->frame_control)) {
+-			skb_put(skb, IEEE80211_CCMP_MIC_LEN);
++			enctype =
++			    ath12k_dp_tx_get_encrypt_type(ATH12K_SKB_CB(skb)->cipher);
++			mic_len = ath12k_dp_rx_crypto_mic_len(ar, enctype);
++			skb_put(skb, mic_len);
+ 		}
+ 	}
+ 
+@@ -7429,7 +7433,6 @@ static void ath12k_mac_op_tx(struct ieee80211_hw *hw,
+ 								info_flags);
+ 
+ 			skb_cb = ATH12K_SKB_CB(msdu_copied);
+-			info = IEEE80211_SKB_CB(msdu_copied);
+ 			skb_cb->link_id = link_id;
+ 
+ 			/* For open mode, skip peer find logic */
+@@ -7452,7 +7455,6 @@ static void ath12k_mac_op_tx(struct ieee80211_hw *hw,
+ 			if (key) {
+ 				skb_cb->cipher = key->cipher;
+ 				skb_cb->flags |= ATH12K_SKB_CIPHER_SET;
+-				info->control.hw_key = key;
+ 
+ 				hdr = (struct ieee80211_hdr *)msdu_copied->data;
+ 				if (!ieee80211_has_protected(hdr->frame_control))
+@@ -7903,15 +7905,15 @@ static int ath12k_mac_setup_vdev_create_arg(struct ath12k_link_vif *arvif,
+ 			return ret;
+ 	}
+ 
+-	if (pdev->cap.supported_bands & WMI_HOST_WLAN_2G_CAP) {
++	if (pdev->cap.supported_bands & WMI_HOST_WLAN_2GHZ_CAP) {
+ 		arg->chains[NL80211_BAND_2GHZ].tx = ar->num_tx_chains;
+ 		arg->chains[NL80211_BAND_2GHZ].rx = ar->num_rx_chains;
+ 	}
+-	if (pdev->cap.supported_bands & WMI_HOST_WLAN_5G_CAP) {
++	if (pdev->cap.supported_bands & WMI_HOST_WLAN_5GHZ_CAP) {
+ 		arg->chains[NL80211_BAND_5GHZ].tx = ar->num_tx_chains;
+ 		arg->chains[NL80211_BAND_5GHZ].rx = ar->num_rx_chains;
+ 	}
+-	if (pdev->cap.supported_bands & WMI_HOST_WLAN_5G_CAP &&
++	if (pdev->cap.supported_bands & WMI_HOST_WLAN_5GHZ_CAP &&
+ 	    ar->supports_6ghz) {
+ 		arg->chains[NL80211_BAND_6GHZ].tx = ar->num_tx_chains;
+ 		arg->chains[NL80211_BAND_6GHZ].rx = ar->num_rx_chains;
+@@ -7940,7 +7942,7 @@ ath12k_mac_prepare_he_mode(struct ath12k_pdev *pdev, u32 viftype)
+ 	u32 *hecap_phy_ptr = NULL;
+ 	u32 hemode;
+ 
+-	if (pdev->cap.supported_bands & WMI_HOST_WLAN_2G_CAP)
++	if (pdev->cap.supported_bands & WMI_HOST_WLAN_2GHZ_CAP)
+ 		cap_band = &pdev_cap->band[NL80211_BAND_2GHZ];
+ 	else
+ 		cap_band = &pdev_cap->band[NL80211_BAND_5GHZ];
+@@ -9462,8 +9464,8 @@ ath12k_mac_op_assign_vif_chanctx(struct ieee80211_hw *hw,
+ 
+ 	ar = ath12k_mac_assign_vif_to_vdev(hw, arvif, ctx);
+ 	if (!ar) {
+-		ath12k_warn(arvif->ar->ab, "failed to assign chanctx for vif %pM link id %u link vif is already started",
+-			    vif->addr, link_id);
++		ath12k_hw_warn(ah, "failed to assign chanctx for vif %pM link id %u link vif is already started",
++			       vif->addr, link_id);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -10640,10 +10642,10 @@ static u32 ath12k_get_phy_id(struct ath12k *ar, u32 band)
+ 	struct ath12k_pdev *pdev = ar->pdev;
+ 	struct ath12k_pdev_cap *pdev_cap = &pdev->cap;
+ 
+-	if (band == WMI_HOST_WLAN_2G_CAP)
++	if (band == WMI_HOST_WLAN_2GHZ_CAP)
+ 		return pdev_cap->band[NL80211_BAND_2GHZ].phy_id;
+ 
+-	if (band == WMI_HOST_WLAN_5G_CAP)
++	if (band == WMI_HOST_WLAN_5GHZ_CAP)
+ 		return pdev_cap->band[NL80211_BAND_5GHZ].phy_id;
+ 
+ 	ath12k_warn(ar->ab, "unsupported phy cap:%d\n", band);
+@@ -10668,7 +10670,7 @@ static int ath12k_mac_setup_channels_rates(struct ath12k *ar,
+ 
+ 	reg_cap = &ar->ab->hal_reg_cap[ar->pdev_idx];
+ 
+-	if (supported_bands & WMI_HOST_WLAN_2G_CAP) {
++	if (supported_bands & WMI_HOST_WLAN_2GHZ_CAP) {
+ 		channels = kmemdup(ath12k_2ghz_channels,
+ 				   sizeof(ath12k_2ghz_channels),
+ 				   GFP_KERNEL);
+@@ -10684,7 +10686,7 @@ static int ath12k_mac_setup_channels_rates(struct ath12k *ar,
+ 		bands[NL80211_BAND_2GHZ] = band;
+ 
+ 		if (ar->ab->hw_params->single_pdev_only) {
+-			phy_id = ath12k_get_phy_id(ar, WMI_HOST_WLAN_2G_CAP);
++			phy_id = ath12k_get_phy_id(ar, WMI_HOST_WLAN_2GHZ_CAP);
+ 			reg_cap = &ar->ab->hal_reg_cap[phy_id];
+ 		}
+ 		ath12k_mac_update_ch_list(ar, band,
+@@ -10692,8 +10694,8 @@ static int ath12k_mac_setup_channels_rates(struct ath12k *ar,
+ 					  reg_cap->high_2ghz_chan);
+ 	}
+ 
+-	if (supported_bands & WMI_HOST_WLAN_5G_CAP) {
+-		if (reg_cap->high_5ghz_chan >= ATH12K_MIN_6G_FREQ) {
++	if (supported_bands & WMI_HOST_WLAN_5GHZ_CAP) {
++		if (reg_cap->high_5ghz_chan >= ATH12K_MIN_6GHZ_FREQ) {
+ 			channels = kmemdup(ath12k_6ghz_channels,
+ 					   sizeof(ath12k_6ghz_channels), GFP_KERNEL);
+ 			if (!channels) {
+@@ -10715,7 +10717,7 @@ static int ath12k_mac_setup_channels_rates(struct ath12k *ar,
+ 			ah->use_6ghz_regd = true;
+ 		}
+ 
+-		if (reg_cap->low_5ghz_chan < ATH12K_MIN_6G_FREQ) {
++		if (reg_cap->low_5ghz_chan < ATH12K_MIN_6GHZ_FREQ) {
+ 			channels = kmemdup(ath12k_5ghz_channels,
+ 					   sizeof(ath12k_5ghz_channels),
+ 					   GFP_KERNEL);
+@@ -10734,7 +10736,7 @@ static int ath12k_mac_setup_channels_rates(struct ath12k *ar,
+ 			bands[NL80211_BAND_5GHZ] = band;
+ 
+ 			if (ar->ab->hw_params->single_pdev_only) {
+-				phy_id = ath12k_get_phy_id(ar, WMI_HOST_WLAN_5G_CAP);
++				phy_id = ath12k_get_phy_id(ar, WMI_HOST_WLAN_5GHZ_CAP);
+ 				reg_cap = &ar->ab->hal_reg_cap[phy_id];
+ 			}
+ 
+@@ -11572,7 +11574,6 @@ void ath12k_mac_mlo_teardown(struct ath12k_hw_group *ag)
+ 
+ int ath12k_mac_register(struct ath12k_hw_group *ag)
+ {
+-	struct ath12k_base *ab = ag->ab[0];
+ 	struct ath12k_hw *ah;
+ 	int i;
+ 	int ret;
+@@ -11585,8 +11586,6 @@ int ath12k_mac_register(struct ath12k_hw_group *ag)
+ 			goto err;
+ 	}
+ 
+-	set_bit(ATH12K_FLAG_REGISTERED, &ab->dev_flags);
+-
+ 	return 0;
+ 
+ err:
+@@ -11603,12 +11602,9 @@ int ath12k_mac_register(struct ath12k_hw_group *ag)
+ 
+ void ath12k_mac_unregister(struct ath12k_hw_group *ag)
+ {
+-	struct ath12k_base *ab = ag->ab[0];
+ 	struct ath12k_hw *ah;
+ 	int i;
+ 
+-	clear_bit(ATH12K_FLAG_REGISTERED, &ab->dev_flags);
+-
+ 	for (i = ag->num_hw - 1; i >= 0; i--) {
+ 		ah = ath12k_ag_to_ah(ag, i);
+ 		if (!ah)
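
The mgmt TX hunk above stops hardcoding IEEE80211_CCMP_MIC_LEN when reserving
tailroom for protected action/deauth/disassoc frames and instead derives the
MIC length from the cipher stored in the skb control block, via
ath12k_dp_tx_get_encrypt_type() and ath12k_dp_rx_crypto_mic_len(). The point
is that the AES ciphers carry different MIC sizes; the table below is an
assumption based on the 802.11 cipher definitions, not the driver's actual
helper:

	#include <stdio.h>

	enum demo_enc {
		DEMO_CCMP_128, DEMO_CCMP_256, DEMO_GCMP_128, DEMO_GCMP_256,
	};

	static unsigned int demo_mic_len(enum demo_enc e)
	{
		switch (e) {
		case DEMO_CCMP_128:
			return 8;	/* the old hardcoded assumption */
		case DEMO_CCMP_256:
		case DEMO_GCMP_128:
		case DEMO_GCMP_256:
			return 16;
		}
		return 8;
	}

	int main(void)
	{
		printf("ccmp-128 mic %u, gcmp-256 mic %u\n",
		       demo_mic_len(DEMO_CCMP_128),
		       demo_mic_len(DEMO_GCMP_256));
		return 0;
	}
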
+diff --git a/drivers/net/wireless/ath/ath12k/mhi.c b/drivers/net/wireless/ath/ath12k/mhi.c
+index 2f6d14382ed70c..4d40c4ec4b8110 100644
+--- a/drivers/net/wireless/ath/ath12k/mhi.c
++++ b/drivers/net/wireless/ath/ath12k/mhi.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2020-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include <linux/msi.h>
+@@ -285,8 +285,11 @@ static void ath12k_mhi_op_status_cb(struct mhi_controller *mhi_cntrl,
+ 			break;
+ 		}
+ 
+-		if (!(test_bit(ATH12K_FLAG_UNREGISTERING, &ab->dev_flags)))
++		if (!(test_bit(ATH12K_FLAG_UNREGISTERING, &ab->dev_flags))) {
++			set_bit(ATH12K_FLAG_CRASH_FLUSH, &ab->dev_flags);
++			set_bit(ATH12K_FLAG_RECOVERY, &ab->dev_flags);
+ 			queue_work(ab->workqueue_aux, &ab->reset_work);
++		}
+ 		break;
+ 	default:
+ 		break;
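
The MHI status callback above now marks the device as crashed
(ATH12K_FLAG_CRASH_FLUSH, ATH12K_FLAG_RECOVERY) before queueing reset_work,
so code running between the fatal event and the worker sees a consistent
state. A toy version of the set-flags-then-queue ordering, with hypothetical
names in place of the dev_flags bit ops and workqueue call:

	#include <stdbool.h>
	#include <stdio.h>

	struct demo_dev { bool unregistering, crash_flush, recovery; };

	static void demo_queue_reset(struct demo_dev *d)
	{
		/* stands in for queue_work(ab->workqueue_aux, &ab->reset_work) */
		printf("reset queued (crash_flush=%d recovery=%d)\n",
		       d->crash_flush, d->recovery);
	}

	static void demo_fatal_cb(struct demo_dev *d)
	{
		if (d->unregistering)
			return;
		d->crash_flush = true;	/* set before the worker can observe */
		d->recovery = true;
		demo_queue_reset(d);
	}

	int main(void)
	{
		struct demo_dev dev = { 0 };

		demo_fatal_cb(&dev);
		return 0;
	}
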
+diff --git a/drivers/net/wireless/ath/ath12k/pci.c b/drivers/net/wireless/ath/ath12k/pci.c
+index b474696ac6d8c9..2e7d302ace679d 100644
+--- a/drivers/net/wireless/ath/ath12k/pci.c
++++ b/drivers/net/wireless/ath/ath12k/pci.c
+@@ -292,10 +292,10 @@ static void ath12k_pci_enable_ltssm(struct ath12k_base *ab)
+ 
+ 	ath12k_dbg(ab, ATH12K_DBG_PCI, "pci ltssm 0x%x\n", val);
+ 
+-	val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST);
++	val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST(ab));
+ 	val |= GCC_GCC_PCIE_HOT_RST_VAL;
+-	ath12k_pci_write32(ab, GCC_GCC_PCIE_HOT_RST, val);
+-	val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST);
++	ath12k_pci_write32(ab, GCC_GCC_PCIE_HOT_RST(ab), val);
++	val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST(ab));
+ 
+ 	ath12k_dbg(ab, ATH12K_DBG_PCI, "pci pcie_hot_rst 0x%x\n", val);
+ 
+@@ -1710,12 +1710,12 @@ static int ath12k_pci_probe(struct pci_dev *pdev,
+ err_mhi_unregister:
+ 	ath12k_mhi_unregister(ab_pci);
+ 
+-err_pci_msi_free:
+-	ath12k_pci_msi_free(ab_pci);
+-
+ err_irq_affinity_cleanup:
+ 	ath12k_pci_set_irq_affinity_hint(ab_pci, NULL);
+ 
++err_pci_msi_free:
++	ath12k_pci_msi_free(ab_pci);
++
+ err_pci_free_region:
+ 	ath12k_pci_free_region(ab_pci);
+ 
+@@ -1734,8 +1734,6 @@ static void ath12k_pci_remove(struct pci_dev *pdev)
+ 
+ 	if (test_bit(ATH12K_FLAG_QMI_FAIL, &ab->dev_flags)) {
+ 		ath12k_pci_power_down(ab, false);
+-		ath12k_qmi_deinit_service(ab);
+-		ath12k_core_hw_group_unassign(ab);
+ 		goto qmi_fail;
+ 	}
+ 
+@@ -1743,9 +1741,10 @@ static void ath12k_pci_remove(struct pci_dev *pdev)
+ 
+ 	cancel_work_sync(&ab->reset_work);
+ 	cancel_work_sync(&ab->dump_work);
+-	ath12k_core_deinit(ab);
++	ath12k_core_hw_group_cleanup(ab->ag);
+ 
+ qmi_fail:
++	ath12k_core_deinit(ab);
+ 	ath12k_fw_unmap(ab);
+ 	ath12k_mhi_unregister(ab_pci);
+ 
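
The probe error labels in pci.c were swapped above so cleanup runs strictly
in reverse order of setup: the IRQ affinity hint, applied after MSI
allocation, is now cleared before the MSI vectors are freed. The generic
shape of the rule, with hypothetical steps standing in for the real setup
calls:

	#include <stdio.h>

	static int alloc_msi(void)       { return 0; }
	static int set_affinity(void)    { return 0; }
	static int register_mhi(void)    { return -1; } /* force the error path */
	static void clear_affinity(void) { }
	static void free_msi(void)       { }

	static int demo_probe(void)
	{
		if (alloc_msi() < 0)
			return -1;
		if (set_affinity() < 0)		/* depends on the MSI allocation */
			goto err_msi_free;
		if (register_mhi() < 0)
			goto err_affinity;
		return 0;

	err_affinity:
		clear_affinity();		/* undo the later step first... */
	err_msi_free:
		free_msi();			/* ...then the earlier one */
		return -1;
	}

	int main(void)
	{
		printf("probe: %d\n", demo_probe());
		return 0;
	}
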
+diff --git a/drivers/net/wireless/ath/ath12k/pci.h b/drivers/net/wireless/ath/ath12k/pci.h
+index 31584a7ad80eb9..9321674eef8b8f 100644
+--- a/drivers/net/wireless/ath/ath12k/pci.h
++++ b/drivers/net/wireless/ath/ath12k/pci.h
+@@ -28,7 +28,9 @@
+ #define PCIE_PCIE_PARF_LTSSM			0x1e081b0
+ #define PARM_LTSSM_VALUE			0x111
+ 
+-#define GCC_GCC_PCIE_HOT_RST			0x1e38338
++#define GCC_GCC_PCIE_HOT_RST(ab) \
++	((ab)->hw_params->regs->gcc_gcc_pcie_hot_rst)
++
+ #define GCC_GCC_PCIE_HOT_RST_VAL		0x10
+ 
+ #define PCIE_PCIE_INT_ALL_CLEAR			0x1e08228
+diff --git a/drivers/net/wireless/ath/ath12k/reg.c b/drivers/net/wireless/ath/ath12k/reg.c
+index 439d61f284d892..7fa7cd301b7579 100644
+--- a/drivers/net/wireless/ath/ath12k/reg.c
++++ b/drivers/net/wireless/ath/ath12k/reg.c
+@@ -777,8 +777,12 @@ void ath12k_reg_free(struct ath12k_base *ab)
+ {
+ 	int i;
+ 
++	mutex_lock(&ab->core_lock);
+ 	for (i = 0; i < ab->hw_params->max_radios; i++) {
+ 		kfree(ab->default_regd[i]);
+ 		kfree(ab->new_regd[i]);
++		ab->default_regd[i] = NULL;
++		ab->new_regd[i] = NULL;
+ 	}
++	mutex_unlock(&ab->core_lock);
+ }
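
ath12k_reg_free() above now takes core_lock around the loop and NULLs each
pointer after freeing it, so a racing reader sees either a valid table or
NULL, and a repeated free becomes harmless. A minimal userspace stand-in for
the pattern (the locking is elided to comments; free() replaces kfree):

	#include <stdlib.h>

	struct demo_tbl { int rules; };

	static struct demo_tbl *demo_regd[2];

	static void demo_reg_free(void)
	{
		/* the patch holds ab->core_lock across this loop */
		for (int i = 0; i < 2; i++) {
			free(demo_regd[i]);
			demo_regd[i] = NULL; /* a second call is now harmless */
		}
	}

	int main(void)
	{
		demo_regd[0] = malloc(sizeof(*demo_regd[0]));
		demo_reg_free();
		demo_reg_free(); /* no double free: pointers were cleared */
		return 0;
	}
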
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index 6d1ea5f3a791b0..fe50c3d3cb8201 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -520,10 +520,10 @@ ath12k_pull_mac_phy_cap_svc_ready_ext(struct ath12k_wmi_pdev *wmi_handle,
+ 	 * band to band for a single radio, need to see how this should be
+ 	 * handled.
+ 	 */
+-	if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_2G_CAP) {
++	if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_2GHZ_CAP) {
+ 		pdev_cap->tx_chain_mask = le32_to_cpu(mac_caps->tx_chain_mask_2g);
+ 		pdev_cap->rx_chain_mask = le32_to_cpu(mac_caps->rx_chain_mask_2g);
+-	} else if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_5G_CAP) {
++	} else if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_5GHZ_CAP) {
+ 		pdev_cap->vht_cap = le32_to_cpu(mac_caps->vht_cap_info_5g);
+ 		pdev_cap->vht_mcs = le32_to_cpu(mac_caps->vht_supp_mcs_5g);
+ 		pdev_cap->he_mcs = le32_to_cpu(mac_caps->he_supp_mcs_5g);
+@@ -546,7 +546,7 @@ ath12k_pull_mac_phy_cap_svc_ready_ext(struct ath12k_wmi_pdev *wmi_handle,
+ 	pdev_cap->rx_chain_mask_shift =
+ 			find_first_bit((unsigned long *)&pdev_cap->rx_chain_mask, 32);
+ 
+-	if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_2G_CAP) {
++	if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_2GHZ_CAP) {
+ 		cap_band = &pdev_cap->band[NL80211_BAND_2GHZ];
+ 		cap_band->phy_id = le32_to_cpu(mac_caps->phy_id);
+ 		cap_band->max_bw_supported = le32_to_cpu(mac_caps->max_bw_supported_2g);
+@@ -566,7 +566,7 @@ ath12k_pull_mac_phy_cap_svc_ready_ext(struct ath12k_wmi_pdev *wmi_handle,
+ 				le32_to_cpu(mac_caps->he_ppet2g.ppet16_ppet8_ru3_ru0[i]);
+ 	}
+ 
+-	if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_5G_CAP) {
++	if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_5GHZ_CAP) {
+ 		cap_band = &pdev_cap->band[NL80211_BAND_5GHZ];
+ 		cap_band->phy_id = le32_to_cpu(mac_caps->phy_id);
+ 		cap_band->max_bw_supported =
+@@ -2351,7 +2351,7 @@ int ath12k_wmi_send_peer_assoc_cmd(struct ath12k *ar,
+ 
+ 	for (i = 0; i < arg->peer_eht_mcs_count; i++) {
+ 		eht_mcs = ptr;
+-		eht_mcs->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_HE_RATE_SET,
++		eht_mcs->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_EHT_RATE_SET,
+ 							     sizeof(*eht_mcs));
+ 
+ 		eht_mcs->rx_mcs_set = cpu_to_le32(arg->peer_eht_rx_mcs_set[i]);
+@@ -3646,15 +3646,15 @@ ath12k_fill_band_to_mac_param(struct ath12k_base  *soc,
+ 		arg[i].pdev_id = pdev->pdev_id;
+ 
+ 		switch (pdev->cap.supported_bands) {
+-		case WMI_HOST_WLAN_2G_5G_CAP:
++		case WMI_HOST_WLAN_2GHZ_5GHZ_CAP:
+ 			arg[i].start_freq = hal_reg_cap->low_2ghz_chan;
+ 			arg[i].end_freq = hal_reg_cap->high_5ghz_chan;
+ 			break;
+-		case WMI_HOST_WLAN_2G_CAP:
++		case WMI_HOST_WLAN_2GHZ_CAP:
+ 			arg[i].start_freq = hal_reg_cap->low_2ghz_chan;
+ 			arg[i].end_freq = hal_reg_cap->high_2ghz_chan;
+ 			break;
+-		case WMI_HOST_WLAN_5G_CAP:
++		case WMI_HOST_WLAN_5GHZ_CAP:
+ 			arg[i].start_freq = hal_reg_cap->low_5ghz_chan;
+ 			arg[i].end_freq = hal_reg_cap->high_5ghz_chan;
+ 			break;
+@@ -4601,6 +4601,7 @@ static int ath12k_service_ready_ext_event(struct ath12k_base *ab,
+ 	return 0;
+ 
+ err:
++	kfree(svc_rdy_ext.mac_phy_caps);
+ 	ath12k_wmi_free_dbring_caps(ab);
+ 	return ret;
+ }
+@@ -4699,7 +4700,7 @@ ath12k_wmi_tlv_mac_phy_caps_ext_parse(struct ath12k_base *ab,
+ 		bands = pdev->cap.supported_bands;
+ 	}
+ 
+-	if (bands & WMI_HOST_WLAN_2G_CAP) {
++	if (bands & WMI_HOST_WLAN_2GHZ_CAP) {
+ 		ath12k_wmi_eht_caps_parse(pdev, NL80211_BAND_2GHZ,
+ 					  caps->eht_cap_mac_info_2ghz,
+ 					  caps->eht_cap_phy_info_2ghz,
+@@ -4708,7 +4709,7 @@ ath12k_wmi_tlv_mac_phy_caps_ext_parse(struct ath12k_base *ab,
+ 					  caps->eht_cap_info_internal);
+ 	}
+ 
+-	if (bands & WMI_HOST_WLAN_5G_CAP) {
++	if (bands & WMI_HOST_WLAN_5GHZ_CAP) {
+ 		ath12k_wmi_eht_caps_parse(pdev, NL80211_BAND_5GHZ,
+ 					  caps->eht_cap_mac_info_5ghz,
+ 					  caps->eht_cap_phy_info_5ghz,
+@@ -4922,7 +4923,7 @@ static u8 ath12k_wmi_ignore_num_extra_rules(struct ath12k_wmi_reg_rule_ext_param
+ 	for (count = 0; count < num_reg_rules; count++) {
+ 		start_freq = le32_get_bits(rule[count].freq_info, REG_RULE_START_FREQ);
+ 
+-		if (start_freq >= ATH12K_MIN_6G_FREQ)
++		if (start_freq >= ATH12K_MIN_6GHZ_FREQ)
+ 			num_invalid_5ghz_rules++;
+ 	}
+ 
+@@ -4992,9 +4993,9 @@ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
+ 	for (i = 0; i < WMI_REG_CURRENT_MAX_AP_TYPE; i++) {
+ 		num_6g_reg_rules_ap[i] = reg_info->num_6g_reg_rules_ap[i];
+ 
+-		if (num_6g_reg_rules_ap[i] > MAX_6G_REG_RULES) {
++		if (num_6g_reg_rules_ap[i] > MAX_6GHZ_REG_RULES) {
+ 			ath12k_warn(ab, "Num 6G reg rules for AP mode(%d) exceeds max limit (num_6g_reg_rules_ap: %d, max_rules: %d)\n",
+-				    i, num_6g_reg_rules_ap[i], MAX_6G_REG_RULES);
++				    i, num_6g_reg_rules_ap[i], MAX_6GHZ_REG_RULES);
+ 			kfree(tb);
+ 			return -EINVAL;
+ 		}
+@@ -5015,9 +5016,9 @@ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
+ 				reg_info->num_6g_reg_rules_cl[WMI_REG_VLP_AP][i];
+ 		total_reg_rules += num_6g_reg_rules_cl[WMI_REG_VLP_AP][i];
+ 
+-		if (num_6g_reg_rules_cl[WMI_REG_INDOOR_AP][i] > MAX_6G_REG_RULES ||
+-		    num_6g_reg_rules_cl[WMI_REG_STD_POWER_AP][i] > MAX_6G_REG_RULES ||
+-		    num_6g_reg_rules_cl[WMI_REG_VLP_AP][i] >  MAX_6G_REG_RULES) {
++		if (num_6g_reg_rules_cl[WMI_REG_INDOOR_AP][i] > MAX_6GHZ_REG_RULES ||
++		    num_6g_reg_rules_cl[WMI_REG_STD_POWER_AP][i] > MAX_6GHZ_REG_RULES ||
++		    num_6g_reg_rules_cl[WMI_REG_VLP_AP][i] >  MAX_6GHZ_REG_RULES) {
+ 			ath12k_warn(ab, "Num 6g client reg rules exceeds max limit, for client(type: %d)\n",
+ 				    i);
+ 			kfree(tb);
+@@ -6317,13 +6318,13 @@ static void ath12k_mgmt_rx_event(struct ath12k_base *ab, struct sk_buff *skb)
+ 	if (rx_ev.status & WMI_RX_STATUS_ERR_MIC)
+ 		status->flag |= RX_FLAG_MMIC_ERROR;
+ 
+-	if (rx_ev.chan_freq >= ATH12K_MIN_6G_FREQ &&
+-	    rx_ev.chan_freq <= ATH12K_MAX_6G_FREQ) {
++	if (rx_ev.chan_freq >= ATH12K_MIN_6GHZ_FREQ &&
++	    rx_ev.chan_freq <= ATH12K_MAX_6GHZ_FREQ) {
+ 		status->band = NL80211_BAND_6GHZ;
+ 		status->freq = rx_ev.chan_freq;
+ 	} else if (rx_ev.channel >= 1 && rx_ev.channel <= 14) {
+ 		status->band = NL80211_BAND_2GHZ;
+-	} else if (rx_ev.channel >= 36 && rx_ev.channel <= ATH12K_MAX_5G_CHAN) {
++	} else if (rx_ev.channel >= 36 && rx_ev.channel <= ATH12K_MAX_5GHZ_CHAN) {
+ 		status->band = NL80211_BAND_5GHZ;
+ 	} else {
+ 		/* Shouldn't happen unless list of advertised channels to
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.h b/drivers/net/wireless/ath/ath12k/wmi.h
+index 1ba33e30ddd279..be4ac91dd34f50 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.h
++++ b/drivers/net/wireless/ath/ath12k/wmi.h
+@@ -216,9 +216,9 @@ enum wmi_host_hw_mode_priority {
+ };
+ 
+ enum WMI_HOST_WLAN_BAND {
+-	WMI_HOST_WLAN_2G_CAP	= 1,
+-	WMI_HOST_WLAN_5G_CAP	= 2,
+-	WMI_HOST_WLAN_2G_5G_CAP	= 3,
++	WMI_HOST_WLAN_2GHZ_CAP		= 1,
++	WMI_HOST_WLAN_5GHZ_CAP		= 2,
++	WMI_HOST_WLAN_2GHZ_5GHZ_CAP	= 3,
+ };
+ 
+ enum wmi_cmd_group {
+@@ -2690,8 +2690,8 @@ enum wmi_channel_width {
+  * 2 - index for 160 MHz, first 3 bytes valid
+  * 3 - index for 320 MHz, first 3 bytes valid
+  */
+-#define WMI_MAX_EHT_SUPP_MCS_2G_SIZE  2
+-#define WMI_MAX_EHT_SUPP_MCS_5G_SIZE  4
++#define WMI_MAX_EHT_SUPP_MCS_2GHZ_SIZE  2
++#define WMI_MAX_EHT_SUPP_MCS_5GHZ_SIZE  4
+ 
+ #define WMI_EHTCAP_TXRX_MCS_NSS_IDX_80    0
+ #define WMI_EHTCAP_TXRX_MCS_NSS_IDX_160   1
+@@ -2730,8 +2730,8 @@ struct ath12k_wmi_caps_ext_params {
+ 	struct ath12k_wmi_ppe_threshold_params eht_ppet_2ghz;
+ 	struct ath12k_wmi_ppe_threshold_params eht_ppet_5ghz;
+ 	__le32 eht_cap_info_internal;
+-	__le32 eht_supp_mcs_ext_2ghz[WMI_MAX_EHT_SUPP_MCS_2G_SIZE];
+-	__le32 eht_supp_mcs_ext_5ghz[WMI_MAX_EHT_SUPP_MCS_5G_SIZE];
++	__le32 eht_supp_mcs_ext_2ghz[WMI_MAX_EHT_SUPP_MCS_2GHZ_SIZE];
++	__le32 eht_supp_mcs_ext_5ghz[WMI_MAX_EHT_SUPP_MCS_5GHZ_SIZE];
+ 	__le32 eml_capability;
+ 	__le32 mld_capability;
+ } __packed;
+@@ -4108,7 +4108,7 @@ struct ath12k_wmi_eht_rate_set_params {
+ 
+ #define MAX_REG_RULES 10
+ #define REG_ALPHA2_LEN 2
+-#define MAX_6G_REG_RULES 5
++#define MAX_6GHZ_REG_RULES 5
+ 
+ enum wmi_start_event_param {
+ 	WMI_VDEV_START_RESP_EVENT = 0,
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_beacon.c b/drivers/net/wireless/ath/ath9k/htc_drv_beacon.c
+index 547634f82183d6..81fa7cbad89213 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_beacon.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_beacon.c
+@@ -290,6 +290,9 @@ void ath9k_htc_swba(struct ath9k_htc_priv *priv,
+ 	struct ath_common *common = ath9k_hw_common(priv->ah);
+ 	int slot;
+ 
++	if (!priv->cur_beacon_conf.enable_beacon)
++		return;
++
+ 	if (swba->beacon_pending != 0) {
+ 		priv->beacon.bmisscnt++;
+ 		if (priv->beacon.bmisscnt > BSTUCK_THRESHOLD) {
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+index ce4954b0d52462..44a249a753ecf6 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+@@ -328,6 +328,7 @@ iwl_trans_get_rb_size_order(enum iwl_amsdu_size rb_size)
+ 	case IWL_AMSDU_4K:
+ 		return get_order(4 * 1024);
+ 	case IWL_AMSDU_8K:
++		return get_order(8 * 1024);
+ 	case IWL_AMSDU_12K:
+ 		return get_order(16 * 1024);
+ 	default:
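
/* Illustrative sketch, not part of the patch above: before the fix, the
 * IWL_AMSDU_8K case had no return statement and fell through to the 12K
 * case, so an 8 KiB A-MSDU buffer was sized with get_order(16 * 1024).
 * Self-contained stand-in below; get_order() here is a userspace
 * re-implementation assuming 4 KiB pages. */
#include <stdio.h>

static int get_order(unsigned long size)   /* stand-in, 4 KiB pages */
{
	unsigned long pages = (size + 4095) / 4096;
	int order = 0;

	while ((1UL << order) < pages)
		order++;
	return order;
}

int main(void)
{
	/* fixed: 8K now maps to order 1 instead of order 2 via fallthrough */
	printf("8K -> order %d, 16K -> order %d\n",
	       get_order(8 * 1024), get_order(16 * 1024));
	return 0;
}
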
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/mld.c b/drivers/net/wireless/intel/iwlwifi/mld/mld.c
+index 73d2166a4c2570..7a098942dc8021 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/mld.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/mld.c
+@@ -638,7 +638,8 @@ iwl_mld_nic_error(struct iwl_op_mode *op_mode,
+ 	 * It might not actually be true that we'll restart, but the
+ 	 * setting doesn't matter if we're going to be unbound either.
+ 	 */
+-	if (type != IWL_ERR_TYPE_RESET_HS_TIMEOUT)
++	if (type != IWL_ERR_TYPE_RESET_HS_TIMEOUT &&
++	    mld->fw_status.running)
+ 		mld->fw_status.in_hw_restart = true;
+ }
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+index 068c58e9c1eb4e..c2729dab8e79e5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+@@ -2,6 +2,7 @@
+ /******************************************************************************
+  *
+  * Copyright(c) 2005 - 2014, 2018 - 2023 Intel Corporation. All rights reserved.
++ * Copyright(c) 2025 Intel Corporation
+  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
+  *****************************************************************************/
+@@ -2709,6 +2710,7 @@ static void rs_drv_get_rate(void *mvm_r, struct ieee80211_sta *sta,
+ 							  optimal_rate);
+ 		iwl_mvm_hwrate_to_tx_rate_v1(last_ucode_rate, info->band,
+ 					     &txrc->reported_rate);
++		txrc->reported_rate.count = 1;
+ 	}
+ 	spin_unlock_bh(&lq_sta->pers.lock);
+ }
+diff --git a/drivers/net/wireless/marvell/mwifiex/11n.c b/drivers/net/wireless/marvell/mwifiex/11n.c
+index 738bafc3749b0a..66f0f5377ac181 100644
+--- a/drivers/net/wireless/marvell/mwifiex/11n.c
++++ b/drivers/net/wireless/marvell/mwifiex/11n.c
+@@ -403,14 +403,12 @@ mwifiex_cmd_append_11n_tlv(struct mwifiex_private *priv,
+ 
+ 		if (sband->ht_cap.cap & IEEE80211_HT_CAP_SUP_WIDTH_20_40 &&
+ 		    bss_desc->bcn_ht_oper->ht_param &
+-		    IEEE80211_HT_PARAM_CHAN_WIDTH_ANY) {
+-			chan_list->chan_scan_param[0].radio_type |=
+-				CHAN_BW_40MHZ << 2;
++		    IEEE80211_HT_PARAM_CHAN_WIDTH_ANY)
+ 			SET_SECONDARYCHAN(chan_list->chan_scan_param[0].
+ 					  radio_type,
+ 					  (bss_desc->bcn_ht_oper->ht_param &
+ 					  IEEE80211_HT_PARAM_CHA_SEC_OFFSET));
+-		}
++
+ 		*buffer += struct_size(chan_list, chan_scan_param, 1);
+ 		ret_len += struct_size(chan_list, chan_scan_param, 1);
+ 	}
+diff --git a/drivers/net/wireless/mediatek/mt76/channel.c b/drivers/net/wireless/mediatek/mt76/channel.c
+index e7b839e7429034..cc2d888e3f17a5 100644
+--- a/drivers/net/wireless/mediatek/mt76/channel.c
++++ b/drivers/net/wireless/mediatek/mt76/channel.c
+@@ -302,11 +302,13 @@ void mt76_put_vif_phy_link(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 			   struct mt76_vif_link *mlink)
+ {
+ 	struct mt76_dev *dev = phy->dev;
+-	struct mt76_vif_data *mvif = mlink->mvif;
++	struct mt76_vif_data *mvif;
+ 
+ 	if (IS_ERR_OR_NULL(mlink) || !mlink->offchannel)
+ 		return;
+ 
++	mvif = mlink->mvif;
++
+ 	rcu_assign_pointer(mvif->offchannel_link, NULL);
+ 	dev->drv->vif_link_remove(phy, vif, &vif->bss_conf, mlink);
+ 	kfree(mlink);
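
/* Illustrative sketch, not part of the patch above: the mt76 fix moves the
 * mlink->mvif load below the IS_ERR_OR_NULL(mlink) check; the original
 * initializer dereferenced a pointer that could be NULL or an ERR_PTR
 * before it was validated. Minimal hypothetical types below. */
#include <stddef.h>

struct vif_data { int refs; };
struct vif_link { struct vif_data *mvif; };

static void put_link(struct vif_link *mlink)
{
	struct vif_data *mvif;

	if (!mlink)             /* IS_ERR_OR_NULL() upstream; check first */
		return;

	mvif = mlink->mvif;     /* safe: only reached for a valid mlink */
	mvif->refs--;
}

int main(void)
{
	put_link(NULL);         /* no crash: checked before any dereference */
	return 0;
}
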
+diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
+index b88d7e10742ee6..e9605dc222910f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
++++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
+@@ -449,8 +449,10 @@ mt76_phy_init(struct mt76_phy *phy, struct ieee80211_hw *hw)
+ 	wiphy_ext_feature_set(wiphy, NL80211_EXT_FEATURE_AIRTIME_FAIRNESS);
+ 	wiphy_ext_feature_set(wiphy, NL80211_EXT_FEATURE_AQL);
+ 
+-	wiphy->available_antennas_tx = phy->antenna_mask;
+-	wiphy->available_antennas_rx = phy->antenna_mask;
++	if (!wiphy->available_antennas_tx)
++		wiphy->available_antennas_tx = phy->antenna_mask;
++	if (!wiphy->available_antennas_rx)
++		wiphy->available_antennas_rx = phy->antenna_mask;
+ 
+ 	wiphy->sar_capa = &mt76_sar_capa;
+ 	phy->frp = devm_kcalloc(dev->dev, wiphy->sar_capa->num_freq_ranges,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
+index 876f0692850a2e..9c4d5cea0c42e9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
+@@ -651,6 +651,9 @@ int mt7915_mmio_wed_init(struct mt7915_dev *dev, void *pdev_ptr,
+ 		wed->wlan.base = devm_ioremap(dev->mt76.dev,
+ 					      pci_resource_start(pci_dev, 0),
+ 					      pci_resource_len(pci_dev, 0));
++		if (!wed->wlan.base)
++			return -ENOMEM;
++
+ 		wed->wlan.phy_base = pci_resource_start(pci_dev, 0);
+ 		wed->wlan.wpdma_int = pci_resource_start(pci_dev, 0) +
+ 				      MT_INT_WED_SOURCE_CSR;
+@@ -678,6 +681,9 @@ int mt7915_mmio_wed_init(struct mt7915_dev *dev, void *pdev_ptr,
+ 		wed->wlan.bus_type = MTK_WED_BUS_AXI;
+ 		wed->wlan.base = devm_ioremap(dev->mt76.dev, res->start,
+ 					      resource_size(res));
++		if (!wed->wlan.base)
++			return -ENOMEM;
++
+ 		wed->wlan.phy_base = res->start;
+ 		wed->wlan.wpdma_int = res->start + MT_INT_SOURCE_CSR;
+ 		wed->wlan.wpdma_mask = res->start + MT_INT_MASK_CSR;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/init.c b/drivers/net/wireless/mediatek/mt76/mt7925/init.c
+index 63cb08f4d87cc4..79639be0d29aca 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/init.c
+@@ -89,7 +89,7 @@ void mt7925_regd_be_ctrl(struct mt792x_dev *dev, u8 *alpha2)
+ 		}
+ 
+ 		/* Check the last one */
+-		if (rule->flag && BIT(0))
++		if (rule->flag & BIT(0))
+ 			break;
+ 
+ 		pos += sizeof(*rule);
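
/* Illustrative sketch, not part of the patch above: `rule->flag && BIT(0)`
 * is a logical AND that evaluates true for any non-zero flag, so the loop
 * stopped at the first flagged rule rather than the rule with bit 0 (the
 * "last rule" marker) set. The bitwise form tests bit 0 only. */
#include <stdio.h>
#define BIT(n) (1U << (n))

int main(void)
{
	unsigned int flag = BIT(3);                    /* bit 0 is clear */

	printf("logical &&: %d\n", flag && BIT(0));    /* 1 - spuriously true */
	printf("bitwise &:  %d\n", !!(flag & BIT(0))); /* 0 - correct */
	return 0;
}
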
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+index 14b1f603fb6224..dea5b9bcb3fdfb 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+@@ -783,7 +783,7 @@ int mt7925_mcu_fw_log_2_host(struct mt792x_dev *dev, u8 ctrl)
+ 	int ret;
+ 
+ 	ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_UNI_CMD(WSYS_CONFIG),
+-					&req, sizeof(req), false, NULL);
++					&req, sizeof(req), true, NULL);
+ 	return ret;
+ }
+ 
+@@ -1424,7 +1424,7 @@ int mt7925_mcu_set_eeprom(struct mt792x_dev *dev)
+ 	};
+ 
+ 	return mt76_mcu_send_and_get_msg(&dev->mt76, MCU_UNI_CMD(EFUSE_CTRL),
+-					 &req, sizeof(req), false, NULL);
++					 &req, sizeof(req), true, NULL);
+ }
+ EXPORT_SYMBOL_GPL(mt7925_mcu_set_eeprom);
+ 
+@@ -2046,8 +2046,6 @@ int mt7925_mcu_set_sniffer(struct mt792x_dev *dev, struct ieee80211_vif *vif,
+ 		},
+ 	};
+ 
+-	mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(SNIFFER), &req, sizeof(req), true);
+-
+ 	return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(SNIFFER), &req, sizeof(req),
+ 				 true);
+ }
+@@ -2764,7 +2762,7 @@ int mt7925_mcu_set_dbdc(struct mt76_phy *phy, bool enable)
+ 	conf->band = 0; /* unused */
+ 
+ 	err = mt76_mcu_skb_send_msg(mdev, skb, MCU_UNI_CMD(SET_DBDC_PARMS),
+-				    false);
++				    true);
+ 
+ 	return err;
+ }
+@@ -2790,6 +2788,9 @@ int mt7925_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 	struct tlv *tlv;
+ 	int max_len;
+ 
++	if (test_bit(MT76_HW_SCANNING, &phy->state))
++		return -EBUSY;
++
+ 	max_len = sizeof(*hdr) + sizeof(*req) + sizeof(*ssid) +
+ 				sizeof(*bssid) + sizeof(*chan_info) +
+ 				sizeof(*misc) + sizeof(*ie);
+@@ -2869,7 +2870,7 @@ int mt7925_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 	}
+ 
+ 	err = mt76_mcu_skb_send_msg(mdev, skb, MCU_UNI_CMD(SCAN_REQ),
+-				    false);
++				    true);
+ 	if (err < 0)
+ 		clear_bit(MT76_HW_SCANNING, &phy->state);
+ 
+@@ -2975,7 +2976,7 @@ int mt7925_mcu_sched_scan_req(struct mt76_phy *phy,
+ 	}
+ 
+ 	return mt76_mcu_skb_send_msg(mdev, skb, MCU_UNI_CMD(SCAN_REQ),
+-				     false);
++				     true);
+ }
+ EXPORT_SYMBOL_GPL(mt7925_mcu_sched_scan_req);
+ 
+@@ -3011,7 +3012,7 @@ mt7925_mcu_sched_scan_enable(struct mt76_phy *phy,
+ 		clear_bit(MT76_HW_SCHED_SCANNING, &phy->state);
+ 
+ 	return mt76_mcu_skb_send_msg(mdev, skb, MCU_UNI_CMD(SCAN_REQ),
+-				     false);
++				     true);
+ }
+ 
+ int mt7925_mcu_cancel_hw_scan(struct mt76_phy *phy,
+@@ -3050,7 +3051,7 @@ int mt7925_mcu_cancel_hw_scan(struct mt76_phy *phy,
+ 	}
+ 
+ 	return mt76_mcu_send_msg(phy->dev, MCU_UNI_CMD(SCAN_REQ),
+-				 &req, sizeof(req), false);
++				 &req, sizeof(req), true);
+ }
+ EXPORT_SYMBOL_GPL(mt7925_mcu_cancel_hw_scan);
+ 
+@@ -3155,7 +3156,7 @@ int mt7925_mcu_set_channel_domain(struct mt76_phy *phy)
+ 	memcpy(__skb_push(skb, sizeof(req)), &req, sizeof(req));
+ 
+ 	return mt76_mcu_skb_send_msg(dev, skb, MCU_UNI_CMD(SET_DOMAIN_INFO),
+-				     false);
++				     true);
+ }
+ EXPORT_SYMBOL_GPL(mt7925_mcu_set_channel_domain);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/dma.c b/drivers/net/wireless/mediatek/mt76/mt7996/dma.c
+index 69a7d9b2e38bd7..4b68d2fc5e0949 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/dma.c
+@@ -493,7 +493,7 @@ int mt7996_dma_init(struct mt7996_dev *dev)
+ 	ret = mt76_queue_alloc(dev, &dev->mt76.q_rx[MT_RXQ_MCU],
+ 			       MT_RXQ_ID(MT_RXQ_MCU),
+ 			       MT7996_RX_MCU_RING_SIZE,
+-			       MT_RX_BUF_SIZE,
++			       MT7996_RX_MCU_BUF_SIZE,
+ 			       MT_RXQ_RING_BASE(MT_RXQ_MCU));
+ 	if (ret)
+ 		return ret;
+@@ -502,7 +502,7 @@ int mt7996_dma_init(struct mt7996_dev *dev)
+ 	ret = mt76_queue_alloc(dev, &dev->mt76.q_rx[MT_RXQ_MCU_WA],
+ 			       MT_RXQ_ID(MT_RXQ_MCU_WA),
+ 			       MT7996_RX_MCU_RING_SIZE_WA,
+-			       MT_RX_BUF_SIZE,
++			       MT7996_RX_MCU_BUF_SIZE,
+ 			       MT_RXQ_RING_BASE(MT_RXQ_MCU_WA));
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.c
+index 53dfac02f8af0b..f0c76aac175dff 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.c
+@@ -304,6 +304,7 @@ int mt7996_eeprom_parse_hw_cap(struct mt7996_dev *dev, struct mt7996_phy *phy)
+ 		phy->has_aux_rx = true;
+ 
+ 	mphy->antenna_mask = BIT(nss) - 1;
++	phy->orig_antenna_mask = mphy->antenna_mask;
+ 	mphy->chainmask = (BIT(path) - 1) << dev->chainshift[band_idx];
+ 	phy->orig_chainmask = mphy->chainmask;
+ 	dev->chainmask |= mphy->chainmask;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/init.c b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
+index 6b660424aedc31..4906b0ecc73e02 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
+@@ -217,6 +217,9 @@ static int mt7996_thermal_init(struct mt7996_phy *phy)
+ 
+ 	name = devm_kasprintf(&wiphy->dev, GFP_KERNEL, "mt7996_%s.%d",
+ 			      wiphy_name(wiphy), phy->mt76->band_idx);
++	if (!name)
++		return -ENOMEM;
++
+ 	snprintf(cname, sizeof(cname), "cooling_device%d", phy->mt76->band_idx);
+ 
+ 	cdev = thermal_cooling_device_register(name, phy, &mt7996_thermal_ops);
+@@ -1113,12 +1116,12 @@ mt7996_set_stream_he_txbf_caps(struct mt7996_phy *phy,
+ 
+ 	c = IEEE80211_HE_PHY_CAP4_SU_BEAMFORMEE;
+ 
+-	if (is_mt7996(phy->mt76->dev))
+-		c |= IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_UNDER_80MHZ_4 |
+-		     (IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_ABOVE_80MHZ_4 * non_2g);
+-	else
++	if (is_mt7992(phy->mt76->dev))
+ 		c |= IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_UNDER_80MHZ_5 |
+ 		     (IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_ABOVE_80MHZ_5 * non_2g);
++	else
++		c |= IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_UNDER_80MHZ_4 |
++		     (IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_ABOVE_80MHZ_4 * non_2g);
+ 
+ 	elem->phy_cap_info[4] |= c;
+ 
+@@ -1318,6 +1321,9 @@ mt7996_init_eht_caps(struct mt7996_phy *phy, enum nl80211_band band,
+ 		u8_encode_bits(IEEE80211_EHT_MAC_CAP0_MAX_MPDU_LEN_11454,
+ 			       IEEE80211_EHT_MAC_CAP0_MAX_MPDU_LEN_MASK);
+ 
++	eht_cap_elem->mac_cap_info[1] |=
++		IEEE80211_EHT_MAC_CAP1_MAX_AMPDU_LEN_MASK;
++
+ 	eht_cap_elem->phy_cap_info[0] =
+ 		IEEE80211_EHT_PHY_CAP0_NDP_4_EHT_LFT_32_GI |
+ 		IEEE80211_EHT_PHY_CAP0_SU_BEAMFORMER |
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/main.c b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+index 91c64e3a0860ff..a3295b22523a61 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+@@ -68,11 +68,13 @@ static int mt7996_start(struct ieee80211_hw *hw)
+ 
+ static void mt7996_stop_phy(struct mt7996_phy *phy)
+ {
+-	struct mt7996_dev *dev = phy->dev;
++	struct mt7996_dev *dev;
+ 
+ 	if (!phy || !test_bit(MT76_STATE_RUNNING, &phy->mt76->state))
+ 		return;
+ 
++	dev = phy->dev;
++
+ 	cancel_delayed_work_sync(&phy->mt76->mac_work);
+ 
+ 	mutex_lock(&dev->mt76.mutex);
+@@ -414,11 +416,13 @@ static void mt7996_phy_set_rxfilter(struct mt7996_phy *phy)
+ 
+ static void mt7996_set_monitor(struct mt7996_phy *phy, bool enabled)
+ {
+-	struct mt7996_dev *dev = phy->dev;
++	struct mt7996_dev *dev;
+ 
+ 	if (!phy)
+ 		return;
+ 
++	dev = phy->dev;
++
+ 	if (enabled == !(phy->rxfilter & MT_WF_RFCR_DROP_OTHER_UC))
+ 		return;
+ 
+@@ -998,16 +1002,22 @@ mt7996_mac_sta_add_links(struct mt7996_dev *dev, struct ieee80211_vif *vif,
+ 			continue;
+ 
+ 		link_conf = link_conf_dereference_protected(vif, link_id);
+-		if (!link_conf)
++		if (!link_conf) {
++			err = -EINVAL;
+ 			goto error_unlink;
++		}
+ 
+ 		link = mt7996_vif_link(dev, vif, link_id);
+-		if (!link)
++		if (!link) {
++			err = -EINVAL;
+ 			goto error_unlink;
++		}
+ 
+ 		link_sta = link_sta_dereference_protected(sta, link_id);
+-		if (!link_sta)
++		if (!link_sta) {
++			err = -EINVAL;
+ 			goto error_unlink;
++		}
+ 
+ 		err = mt7996_mac_sta_init_link(dev, link_conf, link_sta, link,
+ 					       link_id);
+@@ -1518,7 +1528,8 @@ mt7996_set_antenna(struct ieee80211_hw *hw, u32 tx_ant, u32 rx_ant)
+ 		u8 shift = dev->chainshift[band_idx];
+ 
+ 		phy->mt76->chainmask = tx_ant & phy->orig_chainmask;
+-		phy->mt76->antenna_mask = phy->mt76->chainmask >> shift;
++		phy->mt76->antenna_mask = (phy->mt76->chainmask >> shift) &
++					  phy->orig_antenna_mask;
+ 
+ 		mt76_set_stream_caps(phy->mt76, true);
+ 		mt7996_set_stream_vht_txbf_caps(phy);
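
/* Illustrative sketch, not part of the patch above: the
 * mt7996_mac_sta_add_links() fix assigns err = -EINVAL before each
 * `goto error_unlink`, so a failed lookup can no longer return the stale
 * err value (possibly 0) left over from an earlier loop iteration.
 * Self-contained stand-in; lookup() is hypothetical. */
#include <errno.h>
#include <stdio.h>

static int lookup(int i)            /* pretend the lookup fails for link 2 */
{
	return i == 2 ? -1 : 0;
}

static int add_links(void)
{
	int err = 0, i;

	for (i = 0; i < 4; i++) {
		if (lookup(i) < 0) {
			err = -EINVAL;  /* set explicitly before unwinding */
			goto error_unlink;
		}
	}
	return 0;

error_unlink:
	/* unwind links initialized in earlier iterations here */
	return err;
}

int main(void)
{
	printf("add_links() = %d\n", add_links());
	return 0;
}
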
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
+index 13b188e281bdb9..af9169030bad99 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
+@@ -323,6 +323,9 @@ int mt7996_mmio_wed_init(struct mt7996_dev *dev, void *pdev_ptr,
+ 	wed->wlan.base = devm_ioremap(dev->mt76.dev,
+ 				      pci_resource_start(pci_dev, 0),
+ 				      pci_resource_len(pci_dev, 0));
++	if (!wed->wlan.base)
++		return -ENOMEM;
++
+ 	wed->wlan.phy_base = pci_resource_start(pci_dev, 0);
+ 
+ 	if (hif2) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
+index 43e646ed6094cb..77605403b39661 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
+@@ -29,6 +29,9 @@
+ #define MT7996_RX_RING_SIZE		1536
+ #define MT7996_RX_MCU_RING_SIZE		512
+ #define MT7996_RX_MCU_RING_SIZE_WA	1024
++/* scatter-gather of mcu event is not supported in connac3 */
++#define MT7996_RX_MCU_BUF_SIZE		(2048 + \
++					 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
+ 
+ #define MT7996_FIRMWARE_WA		"mediatek/mt7996/mt7996_wa.bin"
+ #define MT7996_FIRMWARE_WM		"mediatek/mt7996/mt7996_wm.bin"
+@@ -293,6 +296,7 @@ struct mt7996_phy {
+ 	struct mt76_channel_state state_ts;
+ 
+ 	u16 orig_chainmask;
++	u16 orig_antenna_mask;
+ 
+ 	bool has_aux_rx;
+ 	bool counter_reset;
+diff --git a/drivers/net/wireless/realtek/rtw88/coex.c b/drivers/net/wireless/realtek/rtw88/coex.c
+index c929db1e53ca63..64904278ddad7d 100644
+--- a/drivers/net/wireless/realtek/rtw88/coex.c
++++ b/drivers/net/wireless/realtek/rtw88/coex.c
+@@ -309,7 +309,7 @@ static void rtw_coex_tdma_timer_base(struct rtw_dev *rtwdev, u8 type)
+ {
+ 	struct rtw_coex *coex = &rtwdev->coex;
+ 	struct rtw_coex_stat *coex_stat = &coex->stat;
+-	u8 para[2] = {0};
++	u8 para[6] = {};
+ 	u8 times;
+ 	u16 tbtt_interval = coex_stat->wl_beacon_interval;
+ 
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+index 5e53e0db177efe..8937a7b656edb1 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+@@ -3951,7 +3951,8 @@ static void rtw8822c_dpk_cal_coef1(struct rtw_dev *rtwdev)
+ 	rtw_write32(rtwdev, REG_NCTL0, 0x00001148);
+ 	rtw_write32(rtwdev, REG_NCTL0, 0x00001149);
+ 
+-	check_hw_ready(rtwdev, 0x2d9c, MASKBYTE0, 0x55);
++	if (!check_hw_ready(rtwdev, 0x2d9c, MASKBYTE0, 0x55))
++		rtw_warn(rtwdev, "DPK stuck, performance may be suboptimal");
+ 
+ 	rtw_write8(rtwdev, 0x1b10, 0x0);
+ 	rtw_write32_mask(rtwdev, REG_NCTL0, BIT_SUBPAGE, 0x0000000c);
+diff --git a/drivers/net/wireless/realtek/rtw88/sdio.c b/drivers/net/wireless/realtek/rtw88/sdio.c
+index 6209a49312f176..410f637b1add58 100644
+--- a/drivers/net/wireless/realtek/rtw88/sdio.c
++++ b/drivers/net/wireless/realtek/rtw88/sdio.c
+@@ -718,10 +718,7 @@ static u8 rtw_sdio_get_tx_qsel(struct rtw_dev *rtwdev, struct sk_buff *skb,
+ 	case RTW_TX_QUEUE_H2C:
+ 		return TX_DESC_QSEL_H2C;
+ 	case RTW_TX_QUEUE_MGMT:
+-		if (rtw_chip_wcpu_11n(rtwdev))
+-			return TX_DESC_QSEL_HIGH;
+-		else
+-			return TX_DESC_QSEL_MGMT;
++		return TX_DESC_QSEL_MGMT;
+ 	case RTW_TX_QUEUE_HI0:
+ 		return TX_DESC_QSEL_HIGH;
+ 	default:
+@@ -1227,10 +1224,7 @@ static void rtw_sdio_process_tx_queue(struct rtw_dev *rtwdev,
+ 		return;
+ 	}
+ 
+-	if (queue <= RTW_TX_QUEUE_VO)
+-		rtw_sdio_indicate_tx_status(rtwdev, skb);
+-	else
+-		dev_kfree_skb_any(skb);
++	rtw_sdio_indicate_tx_status(rtwdev, skb);
+ }
+ 
+ static void rtw_sdio_tx_handler(struct work_struct *work)
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index 8643b17866f897..6c52b0425f2ea9 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -5477,7 +5477,7 @@ int rtw89_fw_h2c_scan_list_offload_be(struct rtw89_dev *rtwdev, int ch_num,
+ 	return 0;
+ }
+ 
+-#define RTW89_SCAN_DELAY_TSF_UNIT 104800
++#define RTW89_SCAN_DELAY_TSF_UNIT 1000000
+ int rtw89_fw_h2c_scan_offload_ax(struct rtw89_dev *rtwdev,
+ 				 struct rtw89_scan_option *option,
+ 				 struct rtw89_vif_link *rtwvif_link,
+diff --git a/drivers/net/wireless/realtek/rtw89/pci.c b/drivers/net/wireless/realtek/rtw89/pci.c
+index c2fe5a898dc717..064f6a94010731 100644
+--- a/drivers/net/wireless/realtek/rtw89/pci.c
++++ b/drivers/net/wireless/realtek/rtw89/pci.c
+@@ -228,7 +228,7 @@ int rtw89_pci_sync_skb_for_device_and_validate_rx_info(struct rtw89_dev *rtwdev,
+ 						       struct sk_buff *skb)
+ {
+ 	struct rtw89_pci_rx_info *rx_info = RTW89_PCI_RX_SKB_CB(skb);
+-	int rx_tag_retry = 100;
++	int rx_tag_retry = 1000;
+ 	int ret;
+ 
+ 	do {
+@@ -3105,17 +3105,26 @@ static bool rtw89_pci_is_dac_compatible_bridge(struct rtw89_dev *rtwdev)
+ 	return false;
+ }
+ 
+-static void rtw89_pci_cfg_dac(struct rtw89_dev *rtwdev)
++static int rtw89_pci_cfg_dac(struct rtw89_dev *rtwdev, bool force)
+ {
+ 	struct rtw89_pci *rtwpci = (struct rtw89_pci *)rtwdev->priv;
++	struct pci_dev *pdev = rtwpci->pdev;
++	int ret;
++	u8 val;
+ 
+-	if (!rtwpci->enable_dac)
+-		return;
++	if (!rtwpci->enable_dac && !force)
++		return 0;
+ 
+ 	if (!rtw89_pci_chip_is_manual_dac(rtwdev))
+-		return;
++		return 0;
+ 
+-	rtw89_pci_config_byte_set(rtwdev, RTW89_PCIE_L1_CTRL, RTW89_PCIE_BIT_EN_64BITS);
++	/* Configure DAC only via PCI config API, not DBI interfaces */
++	ret = pci_read_config_byte(pdev, RTW89_PCIE_L1_CTRL, &val);
++	if (ret)
++		return ret;
++
++	val |= RTW89_PCIE_BIT_EN_64BITS;
++	return pci_write_config_byte(pdev, RTW89_PCIE_L1_CTRL, val);
+ }
+ 
+ static int rtw89_pci_setup_mapping(struct rtw89_dev *rtwdev,
+@@ -3133,13 +3142,16 @@ static int rtw89_pci_setup_mapping(struct rtw89_dev *rtwdev,
+ 	}
+ 
+ 	if (!rtw89_pci_is_dac_compatible_bridge(rtwdev))
+-		goto no_dac;
++		goto try_dac_done;
+ 
+ 	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(36));
+ 	if (!ret) {
+-		rtwpci->enable_dac = true;
+-		rtw89_pci_cfg_dac(rtwdev);
+-	} else {
++		ret = rtw89_pci_cfg_dac(rtwdev, true);
++		if (!ret) {
++			rtwpci->enable_dac = true;
++			goto try_dac_done;
++		}
++
+ 		ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+ 		if (ret) {
+ 			rtw89_err(rtwdev,
+@@ -3147,7 +3159,7 @@ static int rtw89_pci_setup_mapping(struct rtw89_dev *rtwdev,
+ 			goto err_release_regions;
+ 		}
+ 	}
+-no_dac:
++try_dac_done:
+ 
+ 	resource_len = pci_resource_len(pdev, bar_id);
+ 	rtwpci->mmap = pci_iomap(pdev, bar_id, resource_len);
+@@ -4302,7 +4314,7 @@ static void rtw89_pci_l2_hci_ldo(struct rtw89_dev *rtwdev)
+ void rtw89_pci_basic_cfg(struct rtw89_dev *rtwdev, bool resume)
+ {
+ 	if (resume)
+-		rtw89_pci_cfg_dac(rtwdev);
++		rtw89_pci_cfg_dac(rtwdev, false);
+ 
+ 	rtw89_pci_disable_eq(rtwdev);
+ 	rtw89_pci_filter_out(rtwdev);
+diff --git a/drivers/net/wwan/mhi_wwan_mbim.c b/drivers/net/wwan/mhi_wwan_mbim.c
+index 8755c5e6a65b30..c814fbd756a1e7 100644
+--- a/drivers/net/wwan/mhi_wwan_mbim.c
++++ b/drivers/net/wwan/mhi_wwan_mbim.c
+@@ -550,8 +550,8 @@ static int mhi_mbim_newlink(void *ctxt, struct net_device *ndev, u32 if_id,
+ 	struct mhi_mbim_link *link = wwan_netdev_drvpriv(ndev);
+ 	struct mhi_mbim_context *mbim = ctxt;
+ 
+-	link->session = if_id;
+ 	link->mbim = mbim;
++	link->session = mhi_mbim_get_link_mux_id(link->mbim->mdev->mhi_cntrl) + if_id;
+ 	link->ndev = ndev;
+ 	u64_stats_init(&link->rx_syncp);
+ 	u64_stats_init(&link->tx_syncp);
+@@ -607,7 +607,7 @@ static int mhi_mbim_probe(struct mhi_device *mhi_dev, const struct mhi_device_id
+ {
+ 	struct mhi_controller *cntrl = mhi_dev->mhi_cntrl;
+ 	struct mhi_mbim_context *mbim;
+-	int err, link_id;
++	int err;
+ 
+ 	mbim = devm_kzalloc(&mhi_dev->dev, sizeof(*mbim), GFP_KERNEL);
+ 	if (!mbim)
+@@ -628,11 +628,8 @@ static int mhi_mbim_probe(struct mhi_device *mhi_dev, const struct mhi_device_id
+ 	/* Number of transfer descriptors determines size of the queue */
+ 	mbim->rx_queue_sz = mhi_get_free_desc_count(mhi_dev, DMA_FROM_DEVICE);
+ 
+-	/* Get the corresponding mux_id from mhi */
+-	link_id = mhi_mbim_get_link_mux_id(cntrl);
+-
+ 	/* Register wwan link ops with MHI controller representing WWAN instance */
+-	return wwan_register_ops(&cntrl->mhi_dev->dev, &mhi_mbim_wwan_ops, mbim, link_id);
++	return wwan_register_ops(&cntrl->mhi_dev->dev, &mhi_mbim_wwan_ops, mbim, 0);
+ }
+ 
+ static void mhi_mbim_remove(struct mhi_device *mhi_dev)
+diff --git a/drivers/net/wwan/t7xx/t7xx_netdev.c b/drivers/net/wwan/t7xx/t7xx_netdev.c
+index 91fa082e9cab80..fc0a7cb181df2c 100644
+--- a/drivers/net/wwan/t7xx/t7xx_netdev.c
++++ b/drivers/net/wwan/t7xx/t7xx_netdev.c
+@@ -302,7 +302,7 @@ static int t7xx_ccmni_wwan_newlink(void *ctxt, struct net_device *dev, u32 if_id
+ 	ccmni->ctlb = ctlb;
+ 	ccmni->dev = dev;
+ 	atomic_set(&ccmni->usage, 0);
+-	ctlb->ccmni_inst[if_id] = ccmni;
++	WRITE_ONCE(ctlb->ccmni_inst[if_id], ccmni);
+ 
+ 	ret = register_netdevice(dev);
+ 	if (ret)
+@@ -324,6 +324,7 @@ static void t7xx_ccmni_wwan_dellink(void *ctxt, struct net_device *dev, struct l
+ 	if (WARN_ON(ctlb->ccmni_inst[if_id] != ccmni))
+ 		return;
+ 
++	WRITE_ONCE(ctlb->ccmni_inst[if_id], NULL);
+ 	unregister_netdevice(dev);
+ }
+ 
+@@ -419,7 +420,7 @@ static void t7xx_ccmni_recv_skb(struct t7xx_ccmni_ctrl *ccmni_ctlb, struct sk_bu
+ 
+ 	skb_cb = T7XX_SKB_CB(skb);
+ 	netif_id = skb_cb->netif_idx;
+-	ccmni = ccmni_ctlb->ccmni_inst[netif_id];
++	ccmni = READ_ONCE(ccmni_ctlb->ccmni_inst[netif_id]);
+ 	if (!ccmni) {
+ 		dev_kfree_skb(skb);
+ 		return;
+@@ -441,7 +442,7 @@ static void t7xx_ccmni_recv_skb(struct t7xx_ccmni_ctrl *ccmni_ctlb, struct sk_bu
+ 
+ static void t7xx_ccmni_queue_tx_irq_notify(struct t7xx_ccmni_ctrl *ctlb, int qno)
+ {
+-	struct t7xx_ccmni *ccmni = ctlb->ccmni_inst[0];
++	struct t7xx_ccmni *ccmni = READ_ONCE(ctlb->ccmni_inst[0]);
+ 	struct netdev_queue *net_queue;
+ 
+ 	if (netif_running(ccmni->dev) && atomic_read(&ccmni->usage) > 0) {
+@@ -453,7 +454,7 @@ static void t7xx_ccmni_queue_tx_irq_notify(struct t7xx_ccmni_ctrl *ctlb, int qno
+ 
+ static void t7xx_ccmni_queue_tx_full_notify(struct t7xx_ccmni_ctrl *ctlb, int qno)
+ {
+-	struct t7xx_ccmni *ccmni = ctlb->ccmni_inst[0];
++	struct t7xx_ccmni *ccmni = READ_ONCE(ctlb->ccmni_inst[0]);
+ 	struct netdev_queue *net_queue;
+ 
+ 	if (atomic_read(&ccmni->usage) > 0) {
+@@ -471,7 +472,7 @@ static void t7xx_ccmni_queue_state_notify(struct t7xx_pci_dev *t7xx_dev,
+ 	if (ctlb->md_sta != MD_STATE_READY)
+ 		return;
+ 
+-	if (!ctlb->ccmni_inst[0]) {
++	if (!READ_ONCE(ctlb->ccmni_inst[0])) {
+ 		dev_warn(&t7xx_dev->pdev->dev, "No netdev registered yet\n");
+ 		return;
+ 	}
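
/* Illustrative sketch, not part of the patch above: the t7xx change pairs
 * WRITE_ONCE() in newlink/dellink with READ_ONCE() in the rx/notify paths
 * so the compiler cannot tear, cache, or re-load the ccmni_inst[] pointer
 * while links are created or destroyed. Minimal stand-in macros below
 * (GNU C __typeof__); the kernel's versions behave the same way. */
#include <stdio.h>

#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

static int *slot;                        /* stands in for ccmni_inst[id] */

int main(void)
{
	int netdev = 42;
	int *seen;

	WRITE_ONCE(slot, &netdev);       /* publish (newlink) */
	seen = READ_ONCE(slot);          /* single load in the rx path */
	if (seen)                        /* test the local copy, not slot */
		printf("rx on %d\n", *seen);
	WRITE_ONCE(slot, NULL);          /* retract before unregister (dellink) */
	return 0;
}
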
+diff --git a/drivers/nvme/host/constants.c b/drivers/nvme/host/constants.c
+index 2b9e6cfaf2a80a..1a0058be582104 100644
+--- a/drivers/nvme/host/constants.c
++++ b/drivers/nvme/host/constants.c
+@@ -145,7 +145,7 @@ static const char * const nvme_statuses[] = {
+ 	[NVME_SC_BAD_ATTRIBUTES] = "Conflicting Attributes",
+ 	[NVME_SC_INVALID_PI] = "Invalid Protection Information",
+ 	[NVME_SC_READ_ONLY] = "Attempted Write to Read Only Range",
+-	[NVME_SC_ONCS_NOT_SUPPORTED] = "ONCS Not Supported",
++	[NVME_SC_CMD_SIZE_LIM_EXCEEDED] = "Command Size Limits Exceeded",
+ 	[NVME_SC_ZONE_BOUNDARY_ERROR] = "Zoned Boundary Error",
+ 	[NVME_SC_ZONE_FULL] = "Zone Is Full",
+ 	[NVME_SC_ZONE_READ_ONLY] = "Zone Is Read Only",
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 6b04473c0ab73c..93a8119ad5ca66 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -286,7 +286,6 @@ static blk_status_t nvme_error_status(u16 status)
+ 	case NVME_SC_NS_NOT_READY:
+ 		return BLK_STS_TARGET;
+ 	case NVME_SC_BAD_ATTRIBUTES:
+-	case NVME_SC_ONCS_NOT_SUPPORTED:
+ 	case NVME_SC_INVALID_OPCODE:
+ 	case NVME_SC_INVALID_FIELD:
+ 	case NVME_SC_INVALID_NS:
+diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
+index ca86d3bf7ea49d..f29107d95ff26d 100644
+--- a/drivers/nvme/host/ioctl.c
++++ b/drivers/nvme/host/ioctl.c
+@@ -521,7 +521,7 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+ 	if (d.data_len) {
+ 		ret = nvme_map_user_request(req, d.addr, d.data_len,
+ 			nvme_to_user_ptr(d.metadata), d.metadata_len,
+-			map_iter, vec);
++			map_iter, vec ? NVME_IOCTL_VEC : 0);
+ 		if (ret)
+ 			goto out_free_req;
+ 	}
+diff --git a/drivers/nvme/host/pr.c b/drivers/nvme/host/pr.c
+index cf2d2c5039ddbf..ca6a74607b1397 100644
+--- a/drivers/nvme/host/pr.c
++++ b/drivers/nvme/host/pr.c
+@@ -82,8 +82,6 @@ static int nvme_status_to_pr_err(int status)
+ 		return PR_STS_SUCCESS;
+ 	case NVME_SC_RESERVATION_CONFLICT:
+ 		return PR_STS_RESERVATION_CONFLICT;
+-	case NVME_SC_ONCS_NOT_SUPPORTED:
+-		return -EOPNOTSUPP;
+ 	case NVME_SC_BAD_ATTRIBUTES:
+ 	case NVME_SC_INVALID_OPCODE:
+ 	case NVME_SC_INVALID_FIELD:
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 245475c43127fb..69b1ddff6731fc 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -62,14 +62,7 @@ inline u16 errno_to_nvme_status(struct nvmet_req *req, int errno)
+ 		return  NVME_SC_LBA_RANGE | NVME_STATUS_DNR;
+ 	case -EOPNOTSUPP:
+ 		req->error_loc = offsetof(struct nvme_common_command, opcode);
+-		switch (req->cmd->common.opcode) {
+-		case nvme_cmd_dsm:
+-		case nvme_cmd_write_zeroes:
+-			return NVME_SC_ONCS_NOT_SUPPORTED | NVME_STATUS_DNR;
+-		default:
+-			return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+-		}
+-		break;
++		return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+ 	case -ENODATA:
+ 		req->error_loc = offsetof(struct nvme_rw_command, nsid);
+ 		return NVME_SC_ACCESS_DENIED;
+diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
+index 641201e62c1baf..20becea1ad9683 100644
+--- a/drivers/nvme/target/fcloop.c
++++ b/drivers/nvme/target/fcloop.c
+@@ -618,12 +618,13 @@ fcloop_fcp_recv_work(struct work_struct *work)
+ {
+ 	struct fcloop_fcpreq *tfcp_req =
+ 		container_of(work, struct fcloop_fcpreq, fcp_rcv_work);
+-	struct nvmefc_fcp_req *fcpreq = tfcp_req->fcpreq;
++	struct nvmefc_fcp_req *fcpreq;
+ 	unsigned long flags;
+ 	int ret = 0;
+ 	bool aborted = false;
+ 
+ 	spin_lock_irqsave(&tfcp_req->reqlock, flags);
++	fcpreq = tfcp_req->fcpreq;
+ 	switch (tfcp_req->inistate) {
+ 	case INI_IO_START:
+ 		tfcp_req->inistate = INI_IO_ACTIVE;
+@@ -638,16 +639,19 @@ fcloop_fcp_recv_work(struct work_struct *work)
+ 	}
+ 	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+-	if (unlikely(aborted))
+-		ret = -ECANCELED;
+-	else {
+-		if (likely(!check_for_drop(tfcp_req)))
+-			ret = nvmet_fc_rcv_fcp_req(tfcp_req->tport->targetport,
+-				&tfcp_req->tgt_fcp_req,
+-				fcpreq->cmdaddr, fcpreq->cmdlen);
+-		else
+-			pr_info("%s: dropped command ********\n", __func__);
++	if (unlikely(aborted)) {
++		/* the abort handler will call fcloop_call_host_done */
++		return;
++	}
++
++	if (unlikely(check_for_drop(tfcp_req))) {
++		pr_info("%s: dropped command ********\n", __func__);
++		return;
+ 	}
++
++	ret = nvmet_fc_rcv_fcp_req(tfcp_req->tport->targetport,
++				   &tfcp_req->tgt_fcp_req,
++				   fcpreq->cmdaddr, fcpreq->cmdlen);
+ 	if (ret)
+ 		fcloop_call_host_done(fcpreq, tfcp_req, ret);
+ }
+@@ -662,9 +666,10 @@ fcloop_fcp_abort_recv_work(struct work_struct *work)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+-	fcpreq = tfcp_req->fcpreq;
+ 	switch (tfcp_req->inistate) {
+ 	case INI_IO_ABORTED:
++		fcpreq = tfcp_req->fcpreq;
++		tfcp_req->fcpreq = NULL;
+ 		break;
+ 	case INI_IO_COMPLETED:
+ 		completed = true;
+@@ -686,10 +691,6 @@ fcloop_fcp_abort_recv_work(struct work_struct *work)
+ 		nvmet_fc_rcv_fcp_abort(tfcp_req->tport->targetport,
+ 					&tfcp_req->tgt_fcp_req);
+ 
+-	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+-	tfcp_req->fcpreq = NULL;
+-	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+-
+ 	fcloop_call_host_done(fcpreq, tfcp_req, -ECANCELED);
+ 	/* call_host_done releases reference for abort downcall */
+ }
+diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
+index 83be0657e6df4e..1cfa13d029bfa2 100644
+--- a/drivers/nvme/target/io-cmd-bdev.c
++++ b/drivers/nvme/target/io-cmd-bdev.c
+@@ -145,15 +145,8 @@ u16 blk_to_nvme_status(struct nvmet_req *req, blk_status_t blk_sts)
+ 		req->error_loc = offsetof(struct nvme_rw_command, slba);
+ 		break;
+ 	case BLK_STS_NOTSUPP:
++		status = NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+ 		req->error_loc = offsetof(struct nvme_common_command, opcode);
+-		switch (req->cmd->common.opcode) {
+-		case nvme_cmd_dsm:
+-		case nvme_cmd_write_zeroes:
+-			status = NVME_SC_ONCS_NOT_SUPPORTED | NVME_STATUS_DNR;
+-			break;
+-		default:
+-			status = NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+-		}
+ 		break;
+ 	case BLK_STS_MEDIUM:
+ 		status = NVME_SC_ACCESS_DENIED;
+diff --git a/drivers/nvmem/zynqmp_nvmem.c b/drivers/nvmem/zynqmp_nvmem.c
+index 8682adaacd692d..7da717d6c7faf3 100644
+--- a/drivers/nvmem/zynqmp_nvmem.c
++++ b/drivers/nvmem/zynqmp_nvmem.c
+@@ -213,6 +213,7 @@ static int zynqmp_nvmem_probe(struct platform_device *pdev)
+ 	econfig.word_size = 1;
+ 	econfig.size = ZYNQMP_NVMEM_SIZE;
+ 	econfig.dev = dev;
++	econfig.priv = dev;
+ 	econfig.add_legacy_fixed_of_cells = true;
+ 	econfig.reg_read = zynqmp_nvmem_read;
+ 	econfig.reg_write = zynqmp_nvmem_write;
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index 64d301893af7b8..eeb370e0f50777 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -2029,15 +2029,16 @@ static int __init unittest_data_add(void)
+ 	rc = of_resolve_phandles(unittest_data_node);
+ 	if (rc) {
+ 		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
+-		of_overlay_mutex_unlock();
+-		return -EINVAL;
++		rc = -EINVAL;
++		goto unlock;
+ 	}
+ 
+ 	/* attach the sub-tree to live tree */
+ 	if (!of_root) {
+ 		pr_warn("%s: no live tree to attach sub-tree\n", __func__);
+ 		kfree(unittest_data);
+-		return -ENODEV;
++		rc = -ENODEV;
++		goto unlock;
+ 	}
+ 
+ 	EXPECT_BEGIN(KERN_INFO,
+@@ -2056,9 +2057,10 @@ static int __init unittest_data_add(void)
+ 	EXPECT_END(KERN_INFO,
+ 		   "Duplicate name in testcase-data, renamed to \"duplicate-name#1\"");
+ 
++unlock:
+ 	of_overlay_mutex_unlock();
+ 
+-	return 0;
++	return rc;
+ }
+ 
+ #ifdef CONFIG_OF_OVERLAY
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
+index 8af95e9da7cec6..741e10a575ec75 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
+@@ -570,14 +570,5 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
+ 	if (!bridge->ops)
+ 		bridge->ops = &cdns_pcie_host_ops;
+ 
+-	ret = pci_host_probe(bridge);
+-	if (ret < 0)
+-		goto err_init;
+-
+-	return 0;
+-
+- err_init:
+-	pm_runtime_put_sync(dev);
+-
+-	return ret;
++	return pci_host_probe(bridge);
+ }
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index 5f267dd261b51e..ea5c06371171ff 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -129,6 +129,11 @@ struct imx_pcie_drvdata {
+ 	const struct dw_pcie_host_ops *ops;
+ };
+ 
++struct imx_lut_data {
++	u32 data1;
++	u32 data2;
++};
++
+ struct imx_pcie {
+ 	struct dw_pcie		*pci;
+ 	struct gpio_desc	*reset_gpiod;
+@@ -148,6 +153,8 @@ struct imx_pcie {
+ 	struct regulator	*vph;
+ 	void __iomem		*phy_base;
+ 
++	/* LUT data for pcie */
++	struct imx_lut_data	luts[IMX95_MAX_LUT];
+ 	/* power domain for pcie */
+ 	struct device		*pd_pcie;
+ 	/* power domain for pcie phy */
+@@ -1386,6 +1393,42 @@ static void imx_pcie_msi_save_restore(struct imx_pcie *imx_pcie, bool save)
+ 	}
+ }
+ 
++static void imx_pcie_lut_save(struct imx_pcie *imx_pcie)
++{
++	u32 data1, data2;
++	int i;
++
++	for (i = 0; i < IMX95_MAX_LUT; i++) {
++		regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_ACSCTRL,
++			     IMX95_PEO_LUT_RWA | i);
++		regmap_read(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA1, &data1);
++		regmap_read(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA2, &data2);
++		if (data1 & IMX95_PE0_LUT_VLD) {
++			imx_pcie->luts[i].data1 = data1;
++			imx_pcie->luts[i].data2 = data2;
++		} else {
++			imx_pcie->luts[i].data1 = 0;
++			imx_pcie->luts[i].data2 = 0;
++		}
++	}
++}
++
++static void imx_pcie_lut_restore(struct imx_pcie *imx_pcie)
++{
++	int i;
++
++	for (i = 0; i < IMX95_MAX_LUT; i++) {
++		if ((imx_pcie->luts[i].data1 & IMX95_PE0_LUT_VLD) == 0)
++			continue;
++
++		regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA1,
++			     imx_pcie->luts[i].data1);
++		regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA2,
++			     imx_pcie->luts[i].data2);
++		regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_ACSCTRL, i);
++	}
++}
++
+ static int imx_pcie_suspend_noirq(struct device *dev)
+ {
+ 	struct imx_pcie *imx_pcie = dev_get_drvdata(dev);
+@@ -1394,6 +1437,8 @@ static int imx_pcie_suspend_noirq(struct device *dev)
+ 		return 0;
+ 
+ 	imx_pcie_msi_save_restore(imx_pcie, true);
++	if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_HAS_LUT))
++		imx_pcie_lut_save(imx_pcie);
+ 	if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_BROKEN_SUSPEND)) {
+ 		/*
+ 		 * The minimum for a workaround would be to set PERST# and to
+@@ -1438,6 +1483,8 @@ static int imx_pcie_resume_noirq(struct device *dev)
+ 		if (ret)
+ 			return ret;
+ 	}
++	if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_HAS_LUT))
++		imx_pcie_lut_restore(imx_pcie);
+ 	imx_pcie_msi_save_restore(imx_pcie, false);
+ 
+ 	return 0;
+diff --git a/drivers/pci/controller/dwc/pcie-rcar-gen4.c b/drivers/pci/controller/dwc/pcie-rcar-gen4.c
+index fc872dd35029c0..02638ec442e701 100644
+--- a/drivers/pci/controller/dwc/pcie-rcar-gen4.c
++++ b/drivers/pci/controller/dwc/pcie-rcar-gen4.c
+@@ -403,6 +403,7 @@ static const struct pci_epc_features rcar_gen4_pcie_epc_features = {
+ 	.msix_capable = false,
+ 	.bar[BAR_1] = { .type = BAR_RESERVED, },
+ 	.bar[BAR_3] = { .type = BAR_RESERVED, },
++	.bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = 256 },
+ 	.bar[BAR_5] = { .type = BAR_RESERVED, },
+ 	.align = SZ_1M,
+ };
+diff --git a/drivers/pci/controller/pcie-apple.c b/drivers/pci/controller/pcie-apple.c
+index 18e11b9a7f4647..3d778d8b018756 100644
+--- a/drivers/pci/controller/pcie-apple.c
++++ b/drivers/pci/controller/pcie-apple.c
+@@ -540,7 +540,7 @@ static int apple_pcie_setup_port(struct apple_pcie *pcie,
+ 	rmw_set(PORT_APPCLK_EN, port->base + PORT_APPCLK);
+ 
+ 	/* Assert PERST# before setting up the clock */
+-	gpiod_set_value(reset, 1);
++	gpiod_set_value_cansleep(reset, 1);
+ 
+ 	ret = apple_pcie_setup_refclk(pcie, port);
+ 	if (ret < 0)
+@@ -551,7 +551,7 @@ static int apple_pcie_setup_port(struct apple_pcie *pcie,
+ 
+ 	/* Deassert PERST# */
+ 	rmw_set(PORT_PERST_OFF, port->base + PORT_PERST);
+-	gpiod_set_value(reset, 0);
++	gpiod_set_value_cansleep(reset, 0);
+ 
+ 	/* Wait for 100ms after PERST# deassertion (PCIe r5.0, 6.6.1) */
+ 	msleep(100);
+diff --git a/drivers/pci/controller/pcie-rockchip.h b/drivers/pci/controller/pcie-rockchip.h
+index 14954f43e5e9af..5864a20323f21a 100644
+--- a/drivers/pci/controller/pcie-rockchip.h
++++ b/drivers/pci/controller/pcie-rockchip.h
+@@ -319,11 +319,12 @@ static const char * const rockchip_pci_pm_rsts[] = {
+ 	"aclk",
+ };
+ 
++/* NOTE: Do not reorder the deassert sequence of the following reset pins */
+ static const char * const rockchip_pci_core_rsts[] = {
+-	"mgmt-sticky",
+-	"core",
+-	"mgmt",
+ 	"pipe",
++	"mgmt",
++	"core",
++	"mgmt-sticky",
+ };
+ 
+ struct rockchip_pcie {
+diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c
+index 394395c7f8decf..577a9e490115c9 100644
+--- a/drivers/pci/endpoint/pci-epf-core.c
++++ b/drivers/pci/endpoint/pci-epf-core.c
+@@ -236,12 +236,13 @@ void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar,
+ 	}
+ 
+ 	dev = epc->dev.parent;
+-	dma_free_coherent(dev, epf_bar[bar].size, addr,
++	dma_free_coherent(dev, epf_bar[bar].aligned_size, addr,
+ 			  epf_bar[bar].phys_addr);
+ 
+ 	epf_bar[bar].phys_addr = 0;
+ 	epf_bar[bar].addr = NULL;
+ 	epf_bar[bar].size = 0;
++	epf_bar[bar].aligned_size = 0;
+ 	epf_bar[bar].barno = 0;
+ 	epf_bar[bar].flags = 0;
+ }
+@@ -264,7 +265,7 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
+ 			  enum pci_epc_interface_type type)
+ {
+ 	u64 bar_fixed_size = epc_features->bar[bar].fixed_size;
+-	size_t align = epc_features->align;
++	size_t aligned_size, align = epc_features->align;
+ 	struct pci_epf_bar *epf_bar;
+ 	dma_addr_t phys_addr;
+ 	struct pci_epc *epc;
+@@ -285,12 +286,18 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
+ 			return NULL;
+ 		}
+ 		size = bar_fixed_size;
++	} else {
++		/* BAR size must be power of two */
++		size = roundup_pow_of_two(size);
+ 	}
+ 
+-	if (align)
+-		size = ALIGN(size, align);
+-	else
+-		size = roundup_pow_of_two(size);
++	/*
++	 * Allocate enough memory to accommodate the iATU alignment
++	 * requirement.  In most cases, this will be the same as .size but
++	 * it might be different if, for example, the fixed size of a BAR
++	 * is smaller than align.
++	 */
++	aligned_size = align ? ALIGN(size, align) : size;
+ 
+ 	if (type == PRIMARY_INTERFACE) {
+ 		epc = epf->epc;
+@@ -301,7 +308,7 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
+ 	}
+ 
+ 	dev = epc->dev.parent;
+-	space = dma_alloc_coherent(dev, size, &phys_addr, GFP_KERNEL);
++	space = dma_alloc_coherent(dev, aligned_size, &phys_addr, GFP_KERNEL);
+ 	if (!space) {
+ 		dev_err(dev, "failed to allocate mem space\n");
+ 		return NULL;
+@@ -310,6 +317,7 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
+ 	epf_bar[bar].phys_addr = phys_addr;
+ 	epf_bar[bar].addr = space;
+ 	epf_bar[bar].size = size;
++	epf_bar[bar].aligned_size = aligned_size;
+ 	epf_bar[bar].barno = bar;
+ 	if (upper_32_bits(size) || epc_features->bar[bar].only_64bit)
+ 		epf_bar[bar].flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
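
/* Illustrative sketch, not part of the patch above: BAR sizes must be
 * powers of two, while the controller's iATU alignment can exceed a fixed
 * BAR size, so the patch keeps the power-of-two `size` for the BAR and
 * allocates `aligned_size` for the backing memory. Userspace stand-ins
 * for ALIGN()/roundup_pow_of_two(); the 256-byte/1 MiB figures mirror the
 * rcar-gen4 BAR_4 fixed size added earlier in this patch. */
#include <stdio.h>

#define ALIGN(x, a) (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

static unsigned long roundup_pow_of_two(unsigned long x)
{
	unsigned long r = 1;

	while (r < x)
		r <<= 1;
	return r;
}

int main(void)
{
	unsigned long fixed = 256, align = 1UL << 20;

	/* fixed-size BAR stays 256 bytes, but the allocation is 1 MiB */
	printf("size=%lu aligned_size=%lu\n", fixed, ALIGN(fixed, align));

	/* a requested 300-byte BAR rounds up to the next power of two */
	printf("size=%lu\n", roundup_pow_of_two(300));
	return 0;
}
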
+diff --git a/drivers/pci/hotplug/pci_hotplug_core.c b/drivers/pci/hotplug/pci_hotplug_core.c
+index d30f1316c98e2c..d7fc3bc039643c 100644
+--- a/drivers/pci/hotplug/pci_hotplug_core.c
++++ b/drivers/pci/hotplug/pci_hotplug_core.c
+@@ -492,6 +492,75 @@ void pci_hp_destroy(struct hotplug_slot *slot)
+ }
+ EXPORT_SYMBOL_GPL(pci_hp_destroy);
+ 
++static DECLARE_WAIT_QUEUE_HEAD(pci_hp_link_change_wq);
++
++/**
++ * pci_hp_ignore_link_change - begin code section causing spurious link changes
++ * @pdev: PCI hotplug bridge
++ *
++ * Mark the beginning of a code section causing spurious link changes on the
++ * Secondary Bus of @pdev, e.g. as a side effect of a Secondary Bus Reset,
++ * D3cold transition, firmware update or FPGA reconfiguration.
++ *
++ * Hotplug drivers can thus check whether such a code section is executing
++ * concurrently, await it with pci_hp_spurious_link_change() and ignore the
++ * resulting link change events.
++ *
++ * Must be paired with pci_hp_unignore_link_change().  May be called both
++ * from the PCI core and from Endpoint drivers.  May be called for bridges
++ * which are not hotplug-capable, in which case it has no effect because
++ * no hotplug driver is bound to the bridge.
++ */
++void pci_hp_ignore_link_change(struct pci_dev *pdev)
++{
++	set_bit(PCI_LINK_CHANGING, &pdev->priv_flags);
++	smp_mb__after_atomic(); /* pairs with implied barrier of wait_event() */
++}
++
++/**
++ * pci_hp_unignore_link_change - end code section causing spurious link changes
++ * @pdev: PCI hotplug bridge
++ *
++ * Mark the end of a code section causing spurious link changes on the
++ * Secondary Bus of @pdev.  Must be paired with pci_hp_ignore_link_change().
++ */
++void pci_hp_unignore_link_change(struct pci_dev *pdev)
++{
++	set_bit(PCI_LINK_CHANGED, &pdev->priv_flags);
++	mb(); /* ensure pci_hp_spurious_link_change() sees either bit set */
++	clear_bit(PCI_LINK_CHANGING, &pdev->priv_flags);
++	wake_up_all(&pci_hp_link_change_wq);
++}
++
++/**
++ * pci_hp_spurious_link_change - check for spurious link changes
++ * @pdev: PCI hotplug bridge
++ *
++ * Check whether a code section is executing concurrently which is causing
++ * spurious link changes on the Secondary Bus of @pdev.  Await the end of the
++ * code section if so.
++ *
++ * May be called by hotplug drivers to check whether a link change is spurious
++ * and can be ignored.
++ *
++ * Because a genuine link change may have occurred in-between a spurious link
++ * change and the invocation of this function, hotplug drivers should perform
++ * sanity checks such as retrieving the current link state and bringing down
++ * the slot if the link is down.
++ *
++ * Return: %true if such a code section has been executing concurrently,
++ * otherwise %false.  Also return %true if such a code section has not been
++ * executing concurrently, but at least once since the last invocation of this
++ * function.
++ */
++bool pci_hp_spurious_link_change(struct pci_dev *pdev)
++{
++	wait_event(pci_hp_link_change_wq,
++		   !test_bit(PCI_LINK_CHANGING, &pdev->priv_flags));
++
++	return test_and_clear_bit(PCI_LINK_CHANGED, &pdev->priv_flags);
++}
++
+ static int __init pci_hotplug_init(void)
+ {
+ 	int result;
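
/* Usage sketch, not part of the patch and not standalone (assumes kernel
 * context with linux/pci.h): callers bracket a section known to flap the
 * link with the ignore/unignore pair introduced above; pciehp_reset_slot()
 * further down in this patch uses exactly this pattern. */
static int quiet_secondary_bus_reset(struct pci_dev *bridge)
{
	int rc;

	pci_hp_ignore_link_change(bridge);           /* open ignore window */
	rc = pci_bridge_secondary_bus_reset(bridge); /* flaps the link */
	pci_hp_unignore_link_change(bridge);         /* close it, wake IRQ thread */

	return rc;
}
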
+diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h
+index 273dd8c66f4eff..debc79b0adfb2c 100644
+--- a/drivers/pci/hotplug/pciehp.h
++++ b/drivers/pci/hotplug/pciehp.h
+@@ -187,6 +187,7 @@ int pciehp_card_present(struct controller *ctrl);
+ int pciehp_card_present_or_link_active(struct controller *ctrl);
+ int pciehp_check_link_status(struct controller *ctrl);
+ int pciehp_check_link_active(struct controller *ctrl);
++bool pciehp_device_replaced(struct controller *ctrl);
+ void pciehp_release_ctrl(struct controller *ctrl);
+ 
+ int pciehp_sysfs_enable_slot(struct hotplug_slot *hotplug_slot);
+diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c
+index 997841c6989359..f59baa91297099 100644
+--- a/drivers/pci/hotplug/pciehp_core.c
++++ b/drivers/pci/hotplug/pciehp_core.c
+@@ -284,35 +284,6 @@ static int pciehp_suspend(struct pcie_device *dev)
+ 	return 0;
+ }
+ 
+-static bool pciehp_device_replaced(struct controller *ctrl)
+-{
+-	struct pci_dev *pdev __free(pci_dev_put) = NULL;
+-	u32 reg;
+-
+-	if (pci_dev_is_disconnected(ctrl->pcie->port))
+-		return false;
+-
+-	pdev = pci_get_slot(ctrl->pcie->port->subordinate, PCI_DEVFN(0, 0));
+-	if (!pdev)
+-		return true;
+-
+-	if (pci_read_config_dword(pdev, PCI_VENDOR_ID, &reg) ||
+-	    reg != (pdev->vendor | (pdev->device << 16)) ||
+-	    pci_read_config_dword(pdev, PCI_CLASS_REVISION, &reg) ||
+-	    reg != (pdev->revision | (pdev->class << 8)))
+-		return true;
+-
+-	if (pdev->hdr_type == PCI_HEADER_TYPE_NORMAL &&
+-	    (pci_read_config_dword(pdev, PCI_SUBSYSTEM_VENDOR_ID, &reg) ||
+-	     reg != (pdev->subsystem_vendor | (pdev->subsystem_device << 16))))
+-		return true;
+-
+-	if (pci_get_dsn(pdev) != ctrl->dsn)
+-		return true;
+-
+-	return false;
+-}
+-
+ static int pciehp_resume_noirq(struct pcie_device *dev)
+ {
+ 	struct controller *ctrl = get_service_data(dev);
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 8a09fb6083e276..ebd342bda235d4 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -563,20 +563,50 @@ void pciehp_power_off_slot(struct controller *ctrl)
+ 		 PCI_EXP_SLTCTL_PWR_OFF);
+ }
+ 
+-static void pciehp_ignore_dpc_link_change(struct controller *ctrl,
+-					  struct pci_dev *pdev, int irq)
++bool pciehp_device_replaced(struct controller *ctrl)
++{
++	struct pci_dev *pdev __free(pci_dev_put) = NULL;
++	u32 reg;
++
++	if (pci_dev_is_disconnected(ctrl->pcie->port))
++		return false;
++
++	pdev = pci_get_slot(ctrl->pcie->port->subordinate, PCI_DEVFN(0, 0));
++	if (!pdev)
++		return true;
++
++	if (pci_read_config_dword(pdev, PCI_VENDOR_ID, &reg) ||
++	    reg != (pdev->vendor | (pdev->device << 16)) ||
++	    pci_read_config_dword(pdev, PCI_CLASS_REVISION, &reg) ||
++	    reg != (pdev->revision | (pdev->class << 8)))
++		return true;
++
++	if (pdev->hdr_type == PCI_HEADER_TYPE_NORMAL &&
++	    (pci_read_config_dword(pdev, PCI_SUBSYSTEM_VENDOR_ID, &reg) ||
++	     reg != (pdev->subsystem_vendor | (pdev->subsystem_device << 16))))
++		return true;
++
++	if (pci_get_dsn(pdev) != ctrl->dsn)
++		return true;
++
++	return false;
++}
++
++static void pciehp_ignore_link_change(struct controller *ctrl,
++				      struct pci_dev *pdev, int irq,
++				      u16 ignored_events)
+ {
+ 	/*
+ 	 * Ignore link changes which occurred while waiting for DPC recovery.
+ 	 * Could be several if DPC triggered multiple times consecutively.
++	 * Also ignore link changes caused by Secondary Bus Reset, etc.
+ 	 */
+ 	synchronize_hardirq(irq);
+-	atomic_and(~PCI_EXP_SLTSTA_DLLSC, &ctrl->pending_events);
++	atomic_and(~ignored_events, &ctrl->pending_events);
+ 	if (pciehp_poll_mode)
+ 		pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
+-					   PCI_EXP_SLTSTA_DLLSC);
+-	ctrl_info(ctrl, "Slot(%s): Link Down/Up ignored (recovered by DPC)\n",
+-		  slot_name(ctrl));
++					   ignored_events);
++	ctrl_info(ctrl, "Slot(%s): Link Down/Up ignored\n", slot_name(ctrl));
+ 
+ 	/*
+ 	 * If the link is unexpectedly down after successful recovery,
+@@ -584,8 +614,8 @@ static void pciehp_ignore_dpc_link_change(struct controller *ctrl,
+ 	 * Synthesize it to ensure that it is acted on.
+ 	 */
+ 	down_read_nested(&ctrl->reset_lock, ctrl->depth);
+-	if (!pciehp_check_link_active(ctrl))
+-		pciehp_request(ctrl, PCI_EXP_SLTSTA_DLLSC);
++	if (!pciehp_check_link_active(ctrl) || pciehp_device_replaced(ctrl))
++		pciehp_request(ctrl, ignored_events);
+ 	up_read(&ctrl->reset_lock);
+ }
+ 
+@@ -732,12 +762,19 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
+ 
+ 	/*
+ 	 * Ignore Link Down/Up events caused by Downstream Port Containment
+-	 * if recovery from the error succeeded.
++	 * if recovery succeeded, or caused by Secondary Bus Reset,
++	 * suspend to D3cold, firmware update, FPGA reconfiguration, etc.
+ 	 */
+-	if ((events & PCI_EXP_SLTSTA_DLLSC) && pci_dpc_recovered(pdev) &&
++	if ((events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC)) &&
++	    (pci_dpc_recovered(pdev) || pci_hp_spurious_link_change(pdev)) &&
+ 	    ctrl->state == ON_STATE) {
+-		events &= ~PCI_EXP_SLTSTA_DLLSC;
+-		pciehp_ignore_dpc_link_change(ctrl, pdev, irq);
++		u16 ignored_events = PCI_EXP_SLTSTA_DLLSC;
++
++		if (!ctrl->inband_presence_disabled)
++			ignored_events |= events & PCI_EXP_SLTSTA_PDC;
++
++		events &= ~ignored_events;
++		pciehp_ignore_link_change(ctrl, pdev, irq, ignored_events);
+ 	}
+ 
+ 	/*
+@@ -902,7 +939,6 @@ int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, bool probe)
+ {
+ 	struct controller *ctrl = to_ctrl(hotplug_slot);
+ 	struct pci_dev *pdev = ctrl_dev(ctrl);
+-	u16 stat_mask = 0, ctrl_mask = 0;
+ 	int rc;
+ 
+ 	if (probe)
+@@ -910,23 +946,11 @@ int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, bool probe)
+ 
+ 	down_write_nested(&ctrl->reset_lock, ctrl->depth);
+ 
+-	if (!ATTN_BUTTN(ctrl)) {
+-		ctrl_mask |= PCI_EXP_SLTCTL_PDCE;
+-		stat_mask |= PCI_EXP_SLTSTA_PDC;
+-	}
+-	ctrl_mask |= PCI_EXP_SLTCTL_DLLSCE;
+-	stat_mask |= PCI_EXP_SLTSTA_DLLSC;
+-
+-	pcie_write_cmd(ctrl, 0, ctrl_mask);
+-	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+-		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 0);
++	pci_hp_ignore_link_change(pdev);
+ 
+ 	rc = pci_bridge_secondary_bus_reset(ctrl->pcie->port);
+ 
+-	pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, stat_mask);
+-	pcie_write_cmd_nowait(ctrl, ctrl_mask, ctrl_mask);
+-	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+-		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, ctrl_mask);
++	pci_hp_unignore_link_change(pdev);
+ 
+ 	up_write(&ctrl->reset_lock);
+ 	return rc;
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index af370628e58393..b78e0e41732445 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -1676,24 +1676,19 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
+ 		return NULL;
+ 
+ 	root_ops = kzalloc(sizeof(*root_ops), GFP_KERNEL);
+-	if (!root_ops) {
+-		kfree(ri);
+-		return NULL;
+-	}
++	if (!root_ops)
++		goto free_ri;
+ 
+ 	ri->cfg = pci_acpi_setup_ecam_mapping(root);
+-	if (!ri->cfg) {
+-		kfree(ri);
+-		kfree(root_ops);
+-		return NULL;
+-	}
++	if (!ri->cfg)
++		goto free_root_ops;
+ 
+ 	root_ops->release_info = pci_acpi_generic_release_info;
+ 	root_ops->prepare_resources = pci_acpi_root_prepare_resources;
+ 	root_ops->pci_ops = (struct pci_ops *)&ri->cfg->ops->pci_ops;
+ 	bus = acpi_pci_root_create(root, root_ops, &ri->common, ri->cfg);
+ 	if (!bus)
+-		return NULL;
++		goto free_cfg;
+ 
+ 	/* If we must preserve the resource configuration, claim now */
+ 	host = pci_find_host_bridge(bus);
+@@ -1710,6 +1705,14 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
+ 		pcie_bus_configure_settings(child);
+ 
+ 	return bus;
++
++free_cfg:
++	pci_ecam_free(ri->cfg);
++free_root_ops:
++	kfree(root_ops);
++free_ri:
++	kfree(ri);
++	return NULL;
+ }
+ 
+ void pcibios_add_bus(struct pci_bus *bus)
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index e77d5b53c0cec9..4d84ed41248442 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -4954,7 +4954,7 @@ int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type)
+ 		delay);
+ 	if (!pcie_wait_for_link_delay(dev, true, delay)) {
+ 		/* Did not train, no need to wait any further */
+-		pci_info(dev, "Data Link Layer Link Active not set in 1000 msec\n");
++		pci_info(dev, "Data Link Layer Link Active not set in %d msec\n", delay);
+ 		return -ENOTTY;
+ 	}
+ 
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index b81e99cd4b62a3..7db798bdcaaae6 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -227,6 +227,7 @@ static inline int pci_proc_detach_bus(struct pci_bus *bus) { return 0; }
+ 
+ /* Functions for PCI Hotplug drivers to use */
+ int pci_hp_add_bridge(struct pci_dev *dev);
++bool pci_hp_spurious_link_change(struct pci_dev *pdev);
+ 
+ #if defined(CONFIG_SYSFS) && defined(HAVE_PCI_LEGACY)
+ void pci_create_legacy_files(struct pci_bus *bus);
+@@ -557,6 +558,8 @@ static inline int pci_dev_set_disconnected(struct pci_dev *dev, void *unused)
+ #define PCI_DPC_RECOVERED 1
+ #define PCI_DPC_RECOVERING 2
+ #define PCI_DEV_REMOVED 3
++#define PCI_LINK_CHANGED 4
++#define PCI_LINK_CHANGING 5
+ 
+ static inline void pci_dev_assign_added(struct pci_dev *dev)
+ {
+diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
+index df42f15c98295f..9d85f1b3b76112 100644
+--- a/drivers/pci/pcie/dpc.c
++++ b/drivers/pci/pcie/dpc.c
+@@ -258,40 +258,48 @@ static int dpc_get_aer_uncorrect_severity(struct pci_dev *dev,
+ void dpc_process_error(struct pci_dev *pdev)
+ {
+ 	u16 cap = pdev->dpc_cap, status, source, reason, ext_reason;
+-	struct aer_err_info info;
++	struct aer_err_info info = {};
+ 
+ 	pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
+-	pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID, &source);
+-
+-	pci_info(pdev, "containment event, status:%#06x source:%#06x\n",
+-		 status, source);
+ 
+ 	reason = status & PCI_EXP_DPC_STATUS_TRIGGER_RSN;
+-	ext_reason = status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT;
+-	pci_warn(pdev, "%s detected\n",
+-		 (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR) ?
+-		 "unmasked uncorrectable error" :
+-		 (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE) ?
+-		 "ERR_NONFATAL" :
+-		 (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE) ?
+-		 "ERR_FATAL" :
+-		 (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO) ?
+-		 "RP PIO error" :
+-		 (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_SW_TRIGGER) ?
+-		 "software trigger" :
+-		 "reserved error");
+-
+-	/* show RP PIO error detail information */
+-	if (pdev->dpc_rp_extensions &&
+-	    reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_IN_EXT &&
+-	    ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO)
+-		dpc_process_rp_pio_error(pdev);
+-	else if (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR &&
+-		 dpc_get_aer_uncorrect_severity(pdev, &info) &&
+-		 aer_get_device_error_info(pdev, &info)) {
+-		aer_print_error(pdev, &info);
+-		pci_aer_clear_nonfatal_status(pdev);
+-		pci_aer_clear_fatal_status(pdev);
++
++	switch (reason) {
++	case PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR:
++		pci_warn(pdev, "containment event, status:%#06x: unmasked uncorrectable error detected\n",
++			 status);
++		if (dpc_get_aer_uncorrect_severity(pdev, &info) &&
++		    aer_get_device_error_info(pdev, &info)) {
++			aer_print_error(pdev, &info);
++			pci_aer_clear_nonfatal_status(pdev);
++			pci_aer_clear_fatal_status(pdev);
++		}
++		break;
++	case PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE:
++	case PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE:
++		pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID,
++				     &source);
++		pci_warn(pdev, "containment event, status:%#06x, %s received from %04x:%02x:%02x.%d\n",
++			 status,
++			 (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE) ?
++				"ERR_FATAL" : "ERR_NONFATAL",
++			 pci_domain_nr(pdev->bus), PCI_BUS_NUM(source),
++			 PCI_SLOT(source), PCI_FUNC(source));
++		break;
++	case PCI_EXP_DPC_STATUS_TRIGGER_RSN_IN_EXT:
++		ext_reason = status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT;
++		pci_warn(pdev, "containment event, status:%#06x: %s detected\n",
++			 status,
++			 (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO) ?
++			 "RP PIO error" :
++			 (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_SW_TRIGGER) ?
++			 "software trigger" :
++			 "reserved error");
++		/* show RP PIO error detail information */
++		if (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO &&
++		    pdev->dpc_rp_extensions)
++			dpc_process_rp_pio_error(pdev);
++		break;
+ 	}
+ }
+ 
+diff --git a/drivers/pci/pwrctrl/core.c b/drivers/pci/pwrctrl/core.c
+index 9cc7e2b7f2b560..6bdbfed584d6d7 100644
+--- a/drivers/pci/pwrctrl/core.c
++++ b/drivers/pci/pwrctrl/core.c
+@@ -101,6 +101,8 @@ EXPORT_SYMBOL_GPL(pci_pwrctrl_device_set_ready);
+  */
+ void pci_pwrctrl_device_unset_ready(struct pci_pwrctrl *pwrctrl)
+ {
++	cancel_work_sync(&pwrctrl->work);
++
+ 	/*
+ 	 * We don't have to delete the link here. Typically, this function
+ 	 * is only called when the power control device is being detached. If
+diff --git a/drivers/perf/amlogic/meson_ddr_pmu_core.c b/drivers/perf/amlogic/meson_ddr_pmu_core.c
+index 07446d784a1a64..c1e755c356a333 100644
+--- a/drivers/perf/amlogic/meson_ddr_pmu_core.c
++++ b/drivers/perf/amlogic/meson_ddr_pmu_core.c
+@@ -511,7 +511,7 @@ int meson_ddr_pmu_create(struct platform_device *pdev)
+ 
+ 	fmt_attr_fill(pmu->info.hw_info->fmt_attr);
+ 
+-	pmu->cpu = smp_processor_id();
++	pmu->cpu = raw_smp_processor_id();
+ 
+ 	name = devm_kasprintf(&pdev->dev, GFP_KERNEL, DDR_PERF_DEV_NAME);
+ 	if (!name)
+diff --git a/drivers/perf/arm-ni.c b/drivers/perf/arm-ni.c
+index fd7a5e60e96302..de7b6cce4d68a8 100644
+--- a/drivers/perf/arm-ni.c
++++ b/drivers/perf/arm-ni.c
+@@ -575,6 +575,23 @@ static int arm_ni_init_cd(struct arm_ni *ni, struct arm_ni_node *node, u64 res_s
+ 	return err;
+ }
+ 
++static void arm_ni_remove(struct platform_device *pdev)
++{
++	struct arm_ni *ni = platform_get_drvdata(pdev);
++
++	for (int i = 0; i < ni->num_cds; i++) {
++		struct arm_ni_cd *cd = ni->cds + i;
++
++		if (!cd->pmu_base)
++			continue;
++
++		writel_relaxed(0, cd->pmu_base + NI_PMCR);
++		writel_relaxed(U32_MAX, cd->pmu_base + NI_PMINTENCLR);
++		perf_pmu_unregister(&cd->pmu);
++		cpuhp_state_remove_instance_nocalls(arm_ni_hp_state, &cd->cpuhp_node);
++	}
++}
++
+ static void arm_ni_probe_domain(void __iomem *base, struct arm_ni_node *node)
+ {
+ 	u32 reg = readl_relaxed(base + NI_NODE_TYPE);
+@@ -643,6 +660,7 @@ static int arm_ni_probe(struct platform_device *pdev)
+ 	ni->num_cds = num_cds;
+ 	ni->part = part;
+ 	ni->id = atomic_fetch_inc(&id);
++	platform_set_drvdata(pdev, ni);
+ 
+ 	for (int v = 0; v < cfg.num_components; v++) {
+ 		reg = readl_relaxed(cfg.base + NI_CHILD_PTR(v));
+@@ -656,8 +674,11 @@ static int arm_ni_probe(struct platform_device *pdev)
+ 				reg = readl_relaxed(pd.base + NI_CHILD_PTR(c));
+ 				arm_ni_probe_domain(base + reg, &cd);
+ 				ret = arm_ni_init_cd(ni, &cd, res->start);
+-				if (ret)
++				if (ret) {
++					ni->cds[cd.id].pmu_base = NULL;
++					arm_ni_remove(pdev);
+ 					return ret;
++				}
+ 			}
+ 		}
+ 	}
+@@ -665,23 +686,6 @@ static int arm_ni_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static void arm_ni_remove(struct platform_device *pdev)
+-{
+-	struct arm_ni *ni = platform_get_drvdata(pdev);
+-
+-	for (int i = 0; i < ni->num_cds; i++) {
+-		struct arm_ni_cd *cd = ni->cds + i;
+-
+-		if (!cd->pmu_base)
+-			continue;
+-
+-		writel_relaxed(0, cd->pmu_base + NI_PMCR);
+-		writel_relaxed(U32_MAX, cd->pmu_base + NI_PMINTENCLR);
+-		perf_pmu_unregister(&cd->pmu);
+-		cpuhp_state_remove_instance_nocalls(arm_ni_hp_state, &cd->cpuhp_node);
+-	}
+-}
+-
+ #ifdef CONFIG_OF
+ static const struct of_device_id arm_ni_of_match[] = {
+ 	{ .compatible = "arm,ni-700" },
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-usb.c b/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
+index 78772157045752..ed646a7e705ba3 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
+@@ -2106,12 +2106,16 @@ static void __iomem *qmp_usb_iomap(struct device *dev, struct device_node *np,
+ 					int index, bool exclusive)
+ {
+ 	struct resource res;
++	void __iomem *mem;
+ 
+ 	if (!exclusive) {
+ 		if (of_address_to_resource(np, index, &res))
+ 			return IOMEM_ERR_PTR(-EINVAL);
+ 
+-		return devm_ioremap(dev, res.start, resource_size(&res));
++		mem = devm_ioremap(dev, res.start, resource_size(&res));
++		if (!mem)
++			return IOMEM_ERR_PTR(-ENOMEM);
++		return mem;
+ 	}
+ 
+ 	return devm_of_iomap(dev, np, index, NULL);
+diff --git a/drivers/phy/qualcomm/phy-qcom-qusb2.c b/drivers/phy/qualcomm/phy-qcom-qusb2.c
+index 1f5f7df14d5a2f..49c37c53b38e70 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qusb2.c
++++ b/drivers/phy/qualcomm/phy-qcom-qusb2.c
+@@ -151,21 +151,6 @@ static const struct qusb2_phy_init_tbl ipq6018_init_tbl[] = {
+ 	QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_AUTOPGM_CTL1, 0x9F),
+ };
+ 
+-static const struct qusb2_phy_init_tbl ipq5424_init_tbl[] = {
+-	QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL, 0x14),
+-	QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE1, 0x00),
+-	QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE2, 0x53),
+-	QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE4, 0xc3),
+-	QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_TUNE, 0x30),
+-	QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_USER_CTL1, 0x79),
+-	QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_USER_CTL2, 0x21),
+-	QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE5, 0x00),
+-	QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_PWR_CTRL, 0x00),
+-	QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TEST2, 0x14),
+-	QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_TEST, 0x80),
+-	QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_AUTOPGM_CTL1, 0x9f),
+-};
+-
+ static const struct qusb2_phy_init_tbl qcs615_init_tbl[] = {
+ 	QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE1, 0xc8),
+ 	QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE2, 0xb3),
+@@ -359,16 +344,6 @@ static const struct qusb2_phy_cfg ipq6018_phy_cfg = {
+ 	.autoresume_en   = BIT(0),
+ };
+ 
+-static const struct qusb2_phy_cfg ipq5424_phy_cfg = {
+-	.tbl            = ipq5424_init_tbl,
+-	.tbl_num        = ARRAY_SIZE(ipq5424_init_tbl),
+-	.regs           = ipq6018_regs_layout,
+-
+-	.disable_ctrl   = POWER_DOWN,
+-	.mask_core_ready = PLL_LOCKED,
+-	.autoresume_en   = BIT(0),
+-};
+-
+ static const struct qusb2_phy_cfg qcs615_phy_cfg = {
+ 	.tbl            = qcs615_init_tbl,
+ 	.tbl_num        = ARRAY_SIZE(qcs615_init_tbl),
+@@ -955,7 +930,7 @@ static const struct phy_ops qusb2_phy_gen_ops = {
+ static const struct of_device_id qusb2_phy_of_match_table[] = {
+ 	{
+ 		.compatible	= "qcom,ipq5424-qusb2-phy",
+-		.data		= &ipq5424_phy_cfg,
++		.data		= &ipq6018_phy_cfg,
+ 	}, {
+ 		.compatible	= "qcom,ipq6018-qusb2-phy",
+ 		.data		= &ipq6018_phy_cfg,
+diff --git a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+index 77236f012a1f75..61db514ce5cfb5 100644
+--- a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
++++ b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+@@ -320,6 +320,7 @@
+ #define LN3_TX_SER_RATE_SEL_HBR2_MASK	BIT(3)
+ #define LN3_TX_SER_RATE_SEL_HBR3_MASK	BIT(2)
+ 
++#define HDMI14_MAX_RATE			340000000
+ #define HDMI20_MAX_RATE			600000000
+ 
+ enum dp_link_rate {
+@@ -1007,9 +1008,7 @@ static int rk_hdptx_ropll_tmds_cmn_config(struct rk_hdptx_phy *hdptx,
+ {
+ 	const struct ropll_config *cfg = NULL;
+ 	struct ropll_config rc = {0};
+-	int i;
+-
+-	hdptx->rate = rate * 100;
++	int ret, i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(ropll_tmds_cfg); i++)
+ 		if (rate == ropll_tmds_cfg[i].bit_rate) {
+@@ -1064,7 +1063,11 @@ static int rk_hdptx_ropll_tmds_cmn_config(struct rk_hdptx_phy *hdptx,
+ 	regmap_update_bits(hdptx->regmap, CMN_REG(0086), PLL_PCG_CLK_EN_MASK,
+ 			   FIELD_PREP(PLL_PCG_CLK_EN_MASK, 0x1));
+ 
+-	return rk_hdptx_post_enable_pll(hdptx);
++	ret = rk_hdptx_post_enable_pll(hdptx);
++	if (!ret)
++		hdptx->rate = rate * 100;
++
++	return ret;
+ }
+ 
+ static int rk_hdptx_ropll_tmds_mode_config(struct rk_hdptx_phy *hdptx,
+@@ -1074,7 +1077,7 @@ static int rk_hdptx_ropll_tmds_mode_config(struct rk_hdptx_phy *hdptx,
+ 
+ 	regmap_write(hdptx->regmap, LNTOP_REG(0200), 0x06);
+ 
+-	if (rate >= 3400000) {
++	if (rate > HDMI14_MAX_RATE / 100) {
+ 		/* For 1/40 bitrate clk */
+ 		rk_hdptx_multi_reg_write(hdptx, rk_hdtpx_tmds_lntop_highbr_seq);
+ 	} else {
+diff --git a/drivers/pinctrl/mediatek/mtk-eint.c b/drivers/pinctrl/mediatek/mtk-eint.c
+index c516c34aaaf603..e235a98ae7ee58 100644
+--- a/drivers/pinctrl/mediatek/mtk-eint.c
++++ b/drivers/pinctrl/mediatek/mtk-eint.c
+@@ -506,7 +506,7 @@ EXPORT_SYMBOL_GPL(mtk_eint_find_irq);
+ 
+ int mtk_eint_do_init(struct mtk_eint *eint, struct mtk_eint_pin *eint_pin)
+ {
+-	unsigned int size, i, port, inst = 0;
++	unsigned int size, i, port, virq, inst = 0;
+ 
+ 	/* If clients don't assign a specific regs, let's use generic one */
+ 	if (!eint->regs)
+@@ -580,7 +580,7 @@ int mtk_eint_do_init(struct mtk_eint *eint, struct mtk_eint_pin *eint_pin)
+ 		if (inst >= eint->nbase)
+ 			continue;
+ 		eint->pin_list[inst][eint->pins[i].index] = i;
+-		int virq = irq_create_mapping(eint->domain, i);
++		virq = irq_create_mapping(eint->domain, i);
+ 		irq_set_chip_and_handler(virq, &mtk_eint_irq_chip,
+ 					 handle_level_irq);
+ 		irq_set_chip_data(virq, eint);
+diff --git a/drivers/pinctrl/mediatek/mtk-eint.h b/drivers/pinctrl/mediatek/mtk-eint.h
+index 23801d4b636f62..fc31a4c0c77bf2 100644
+--- a/drivers/pinctrl/mediatek/mtk-eint.h
++++ b/drivers/pinctrl/mediatek/mtk-eint.h
+@@ -66,7 +66,7 @@ struct mtk_eint_xt {
+ struct mtk_eint {
+ 	struct device *dev;
+ 	void __iomem **base;
+-	u8 nbase;
++	int nbase;
+ 	u16 *base_pin_num;
+ 	struct irq_domain *domain;
+ 	int irq;
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+index ba13558bfcd7bb..4918d38abfc29d 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+@@ -381,10 +381,13 @@ int mtk_build_eint(struct mtk_pinctrl *hw, struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	count_reg_names = of_property_count_strings(np, "reg-names");
+-	if (count_reg_names < hw->soc->nbase_names)
++	if (count_reg_names < 0)
++		return -EINVAL;
++
++	hw->eint->nbase = count_reg_names - (int)hw->soc->nbase_names;
++	if (hw->eint->nbase <= 0)
+ 		return -EINVAL;
+ 
+-	hw->eint->nbase = count_reg_names - hw->soc->nbase_names;
+ 	hw->eint->base = devm_kmalloc_array(&pdev->dev, hw->eint->nbase,
+ 					    sizeof(*hw->eint->base), GFP_KERNEL | __GFP_ZERO);
+ 	if (!hw->eint->base) {
+diff --git a/drivers/pinctrl/pinctrl-at91.c b/drivers/pinctrl/pinctrl-at91.c
+index 93ab277d9943cf..fbe74e4ef320c1 100644
+--- a/drivers/pinctrl/pinctrl-at91.c
++++ b/drivers/pinctrl/pinctrl-at91.c
+@@ -1819,12 +1819,16 @@ static int at91_gpio_probe(struct platform_device *pdev)
+ 	struct at91_gpio_chip *at91_chip = NULL;
+ 	struct gpio_chip *chip;
+ 	struct pinctrl_gpio_range *range;
++	int alias_idx;
+ 	int ret = 0;
+ 	int irq, i;
+-	int alias_idx = of_alias_get_id(np, "gpio");
+ 	uint32_t ngpio;
+ 	char **names;
+ 
++	alias_idx = of_alias_get_id(np, "gpio");
++	if (alias_idx < 0)
++		return alias_idx;
++
+ 	BUG_ON(alias_idx >= ARRAY_SIZE(gpio_chips));
+ 	if (gpio_chips[alias_idx])
+ 		return dev_err_probe(dev, -EBUSY, "%d slot is occupied.\n", alias_idx);
+diff --git a/drivers/pinctrl/qcom/pinctrl-qcm2290.c b/drivers/pinctrl/qcom/pinctrl-qcm2290.c
+index ba699eac9ee8b2..20e9bccda4cd6d 100644
+--- a/drivers/pinctrl/qcom/pinctrl-qcm2290.c
++++ b/drivers/pinctrl/qcom/pinctrl-qcm2290.c
+@@ -165,6 +165,10 @@ static const struct pinctrl_pin_desc qcm2290_pins[] = {
+ 	PINCTRL_PIN(62, "GPIO_62"),
+ 	PINCTRL_PIN(63, "GPIO_63"),
+ 	PINCTRL_PIN(64, "GPIO_64"),
++	PINCTRL_PIN(65, "GPIO_65"),
++	PINCTRL_PIN(66, "GPIO_66"),
++	PINCTRL_PIN(67, "GPIO_67"),
++	PINCTRL_PIN(68, "GPIO_68"),
+ 	PINCTRL_PIN(69, "GPIO_69"),
+ 	PINCTRL_PIN(70, "GPIO_70"),
+ 	PINCTRL_PIN(71, "GPIO_71"),
+@@ -179,12 +183,17 @@ static const struct pinctrl_pin_desc qcm2290_pins[] = {
+ 	PINCTRL_PIN(80, "GPIO_80"),
+ 	PINCTRL_PIN(81, "GPIO_81"),
+ 	PINCTRL_PIN(82, "GPIO_82"),
++	PINCTRL_PIN(83, "GPIO_83"),
++	PINCTRL_PIN(84, "GPIO_84"),
++	PINCTRL_PIN(85, "GPIO_85"),
+ 	PINCTRL_PIN(86, "GPIO_86"),
+ 	PINCTRL_PIN(87, "GPIO_87"),
+ 	PINCTRL_PIN(88, "GPIO_88"),
+ 	PINCTRL_PIN(89, "GPIO_89"),
+ 	PINCTRL_PIN(90, "GPIO_90"),
+ 	PINCTRL_PIN(91, "GPIO_91"),
++	PINCTRL_PIN(92, "GPIO_92"),
++	PINCTRL_PIN(93, "GPIO_93"),
+ 	PINCTRL_PIN(94, "GPIO_94"),
+ 	PINCTRL_PIN(95, "GPIO_95"),
+ 	PINCTRL_PIN(96, "GPIO_96"),
+diff --git a/drivers/pinctrl/qcom/pinctrl-qcs615.c b/drivers/pinctrl/qcom/pinctrl-qcs615.c
+index 23015b055f6a92..17ca743c2210fc 100644
+--- a/drivers/pinctrl/qcom/pinctrl-qcs615.c
++++ b/drivers/pinctrl/qcom/pinctrl-qcs615.c
+@@ -1062,7 +1062,7 @@ static const struct msm_pinctrl_soc_data qcs615_tlmm = {
+ 	.nfunctions = ARRAY_SIZE(qcs615_functions),
+ 	.groups = qcs615_groups,
+ 	.ngroups = ARRAY_SIZE(qcs615_groups),
+-	.ngpios = 123,
++	.ngpios = 124,
+ 	.tiles = qcs615_tiles,
+ 	.ntiles = ARRAY_SIZE(qcs615_tiles),
+ 	.wakeirq_map = qcs615_pdc_map,
+diff --git a/drivers/pinctrl/qcom/pinctrl-qcs8300.c b/drivers/pinctrl/qcom/pinctrl-qcs8300.c
+index ba6de944a859a0..5f5f7c4ac644c4 100644
+--- a/drivers/pinctrl/qcom/pinctrl-qcs8300.c
++++ b/drivers/pinctrl/qcom/pinctrl-qcs8300.c
+@@ -1204,7 +1204,7 @@ static const struct msm_pinctrl_soc_data qcs8300_pinctrl = {
+ 	.nfunctions = ARRAY_SIZE(qcs8300_functions),
+ 	.groups = qcs8300_groups,
+ 	.ngroups = ARRAY_SIZE(qcs8300_groups),
+-	.ngpios = 133,
++	.ngpios = 134,
+ 	.wakeirq_map = qcs8300_pdc_map,
+ 	.nwakeirq_map = ARRAY_SIZE(qcs8300_pdc_map),
+ 	.egpio_func = 11,
+diff --git a/drivers/pinctrl/qcom/tlmm-test.c b/drivers/pinctrl/qcom/tlmm-test.c
+index fd02bf3a76cbcc..7b99e89e0f6703 100644
+--- a/drivers/pinctrl/qcom/tlmm-test.c
++++ b/drivers/pinctrl/qcom/tlmm-test.c
+@@ -547,6 +547,7 @@ static int tlmm_test_init(struct kunit *test)
+ 	struct tlmm_test_priv *priv;
+ 
+ 	priv = kunit_kzalloc(test, sizeof(*priv), GFP_KERNEL);
++	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, priv);
+ 
+ 	atomic_set(&priv->intr_count, 0);
+ 	atomic_set(&priv->thread_count, 0);
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c b/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
+index dd07720e32cc09..9fd894729a7b87 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
+@@ -1419,8 +1419,8 @@ static const struct samsung_pin_ctrl exynosautov920_pin_ctrl[] = {
+ 		.pin_banks	= exynosautov920_pin_banks0,
+ 		.nr_banks	= ARRAY_SIZE(exynosautov920_pin_banks0),
+ 		.eint_wkup_init	= exynos_eint_wkup_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= exynosautov920_pinctrl_suspend,
++		.resume		= exynosautov920_pinctrl_resume,
+ 		.retention_data	= &exynosautov920_retention_data,
+ 	}, {
+ 		/* pin-controller instance 1 AUD data */
+@@ -1431,43 +1431,43 @@ static const struct samsung_pin_ctrl exynosautov920_pin_ctrl[] = {
+ 		.pin_banks	= exynosautov920_pin_banks2,
+ 		.nr_banks	= ARRAY_SIZE(exynosautov920_pin_banks2),
+ 		.eint_gpio_init	= exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= exynosautov920_pinctrl_suspend,
++		.resume		= exynosautov920_pinctrl_resume,
+ 	}, {
+ 		/* pin-controller instance 3 HSI1 data */
+ 		.pin_banks	= exynosautov920_pin_banks3,
+ 		.nr_banks	= ARRAY_SIZE(exynosautov920_pin_banks3),
+ 		.eint_gpio_init	= exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= exynosautov920_pinctrl_suspend,
++		.resume		= exynosautov920_pinctrl_resume,
+ 	}, {
+ 		/* pin-controller instance 4 HSI2 data */
+ 		.pin_banks	= exynosautov920_pin_banks4,
+ 		.nr_banks	= ARRAY_SIZE(exynosautov920_pin_banks4),
+ 		.eint_gpio_init	= exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= exynosautov920_pinctrl_suspend,
++		.resume		= exynosautov920_pinctrl_resume,
+ 	}, {
+ 		/* pin-controller instance 5 HSI2UFS data */
+ 		.pin_banks	= exynosautov920_pin_banks5,
+ 		.nr_banks	= ARRAY_SIZE(exynosautov920_pin_banks5),
+ 		.eint_gpio_init	= exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= exynosautov920_pinctrl_suspend,
++		.resume		= exynosautov920_pinctrl_resume,
+ 	}, {
+ 		/* pin-controller instance 6 PERIC0 data */
+ 		.pin_banks	= exynosautov920_pin_banks6,
+ 		.nr_banks	= ARRAY_SIZE(exynosautov920_pin_banks6),
+ 		.eint_gpio_init	= exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= exynosautov920_pinctrl_suspend,
++		.resume		= exynosautov920_pinctrl_resume,
+ 	}, {
+ 		/* pin-controller instance 7 PERIC1 data */
+ 		.pin_banks	= exynosautov920_pin_banks7,
+ 		.nr_banks	= ARRAY_SIZE(exynosautov920_pin_banks7),
+ 		.eint_gpio_init	= exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= exynosautov920_pinctrl_suspend,
++		.resume		= exynosautov920_pinctrl_resume,
+ 	},
+ };
+ 
+@@ -1762,15 +1762,15 @@ static const struct samsung_pin_ctrl gs101_pin_ctrl[] __initconst = {
+ 		.pin_banks	= gs101_pin_alive,
+ 		.nr_banks	= ARRAY_SIZE(gs101_pin_alive),
+ 		.eint_wkup_init = exynos_eint_wkup_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= gs101_pinctrl_suspend,
++		.resume		= gs101_pinctrl_resume,
+ 	}, {
+ 		/* pin banks of gs101 pin-controller (FAR_ALIVE) */
+ 		.pin_banks	= gs101_pin_far_alive,
+ 		.nr_banks	= ARRAY_SIZE(gs101_pin_far_alive),
+ 		.eint_wkup_init = exynos_eint_wkup_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= gs101_pinctrl_suspend,
++		.resume		= gs101_pinctrl_resume,
+ 	}, {
+ 		/* pin banks of gs101 pin-controller (GSACORE) */
+ 		.pin_banks	= gs101_pin_gsacore,
+@@ -1784,29 +1784,29 @@ static const struct samsung_pin_ctrl gs101_pin_ctrl[] __initconst = {
+ 		.pin_banks	= gs101_pin_peric0,
+ 		.nr_banks	= ARRAY_SIZE(gs101_pin_peric0),
+ 		.eint_gpio_init = exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= gs101_pinctrl_suspend,
++		.resume		= gs101_pinctrl_resume,
+ 	}, {
+ 		/* pin banks of gs101 pin-controller (PERIC1) */
+ 		.pin_banks	= gs101_pin_peric1,
+ 		.nr_banks	= ARRAY_SIZE(gs101_pin_peric1),
+ 		.eint_gpio_init = exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume	= exynos_pinctrl_resume,
++		.suspend	= gs101_pinctrl_suspend,
++		.resume		= gs101_pinctrl_resume,
+ 	}, {
+ 		/* pin banks of gs101 pin-controller (HSI1) */
+ 		.pin_banks	= gs101_pin_hsi1,
+ 		.nr_banks	= ARRAY_SIZE(gs101_pin_hsi1),
+ 		.eint_gpio_init = exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= gs101_pinctrl_suspend,
++		.resume		= gs101_pinctrl_resume,
+ 	}, {
+ 		/* pin banks of gs101 pin-controller (HSI2) */
+ 		.pin_banks	= gs101_pin_hsi2,
+ 		.nr_banks	= ARRAY_SIZE(gs101_pin_hsi2),
+ 		.eint_gpio_init = exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= gs101_pinctrl_suspend,
++		.resume		= gs101_pinctrl_resume,
+ 	},
+ };
+ 
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos.c b/drivers/pinctrl/samsung/pinctrl-exynos.c
+index 42093bae8bb793..0879684338c772 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos.c
+@@ -762,153 +762,187 @@ __init int exynos_eint_wkup_init(struct samsung_pinctrl_drv_data *d)
+ 	return 0;
+ }
+ 
+-static void exynos_pinctrl_suspend_bank(
+-				struct samsung_pinctrl_drv_data *drvdata,
+-				struct samsung_pin_bank *bank)
++static void exynos_set_wakeup(struct samsung_pin_bank *bank)
+ {
+-	struct exynos_eint_gpio_save *save = bank->soc_priv;
+-	const void __iomem *regs = bank->eint_base;
++	struct exynos_irq_chip *irq_chip;
+ 
+-	if (clk_enable(bank->drvdata->pclk)) {
+-		dev_err(bank->gpio_chip.parent,
+-			"unable to enable clock for saving state\n");
+-		return;
++	if (bank->irq_chip) {
++		irq_chip = bank->irq_chip;
++		irq_chip->set_eint_wakeup_mask(bank->drvdata, irq_chip);
+ 	}
+-
+-	save->eint_con = readl(regs + EXYNOS_GPIO_ECON_OFFSET
+-						+ bank->eint_offset);
+-	save->eint_fltcon0 = readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
+-						+ 2 * bank->eint_offset);
+-	save->eint_fltcon1 = readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
+-						+ 2 * bank->eint_offset + 4);
+-	save->eint_mask = readl(regs + bank->irq_chip->eint_mask
+-						+ bank->eint_offset);
+-
+-	clk_disable(bank->drvdata->pclk);
+-
+-	pr_debug("%s: save     con %#010x\n", bank->name, save->eint_con);
+-	pr_debug("%s: save fltcon0 %#010x\n", bank->name, save->eint_fltcon0);
+-	pr_debug("%s: save fltcon1 %#010x\n", bank->name, save->eint_fltcon1);
+-	pr_debug("%s: save    mask %#010x\n", bank->name, save->eint_mask);
+ }
+ 
+-static void exynosauto_pinctrl_suspend_bank(struct samsung_pinctrl_drv_data *drvdata,
+-					    struct samsung_pin_bank *bank)
++void exynos_pinctrl_suspend(struct samsung_pin_bank *bank)
+ {
+ 	struct exynos_eint_gpio_save *save = bank->soc_priv;
+ 	const void __iomem *regs = bank->eint_base;
+ 
+-	if (clk_enable(bank->drvdata->pclk)) {
+-		dev_err(bank->gpio_chip.parent,
+-			"unable to enable clock for saving state\n");
+-		return;
++	if (bank->eint_type == EINT_TYPE_GPIO) {
++		save->eint_con = readl(regs + EXYNOS_GPIO_ECON_OFFSET
++				       + bank->eint_offset);
++		save->eint_fltcon0 = readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
++					   + 2 * bank->eint_offset);
++		save->eint_fltcon1 = readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
++					   + 2 * bank->eint_offset + 4);
++		save->eint_mask = readl(regs + bank->irq_chip->eint_mask
++					+ bank->eint_offset);
++
++		pr_debug("%s: save     con %#010x\n",
++			 bank->name, save->eint_con);
++		pr_debug("%s: save fltcon0 %#010x\n",
++			 bank->name, save->eint_fltcon0);
++		pr_debug("%s: save fltcon1 %#010x\n",
++			 bank->name, save->eint_fltcon1);
++		pr_debug("%s: save    mask %#010x\n",
++			 bank->name, save->eint_mask);
++	} else if (bank->eint_type == EINT_TYPE_WKUP) {
++		exynos_set_wakeup(bank);
+ 	}
+-
+-	save->eint_con = readl(regs + bank->pctl_offset + bank->eint_con_offset);
+-	save->eint_mask = readl(regs + bank->pctl_offset + bank->eint_mask_offset);
+-
+-	clk_disable(bank->drvdata->pclk);
+-
+-	pr_debug("%s: save     con %#010x\n", bank->name, save->eint_con);
+-	pr_debug("%s: save    mask %#010x\n", bank->name, save->eint_mask);
+ }
+ 
+-void exynos_pinctrl_suspend(struct samsung_pinctrl_drv_data *drvdata)
++void gs101_pinctrl_suspend(struct samsung_pin_bank *bank)
+ {
+-	struct samsung_pin_bank *bank = drvdata->pin_banks;
+-	struct exynos_irq_chip *irq_chip = NULL;
+-	int i;
++	struct exynos_eint_gpio_save *save = bank->soc_priv;
++	const void __iomem *regs = bank->eint_base;
+ 
+-	for (i = 0; i < drvdata->nr_banks; ++i, ++bank) {
+-		if (bank->eint_type == EINT_TYPE_GPIO) {
+-			if (bank->eint_con_offset)
+-				exynosauto_pinctrl_suspend_bank(drvdata, bank);
+-			else
+-				exynos_pinctrl_suspend_bank(drvdata, bank);
+-		}
+-		else if (bank->eint_type == EINT_TYPE_WKUP) {
+-			if (!irq_chip) {
+-				irq_chip = bank->irq_chip;
+-				irq_chip->set_eint_wakeup_mask(drvdata,
+-							       irq_chip);
+-			}
+-		}
++	if (bank->eint_type == EINT_TYPE_GPIO) {
++		save->eint_con = readl(regs + EXYNOS_GPIO_ECON_OFFSET
++				       + bank->eint_offset);
++
++		save->eint_fltcon0 = readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
++					   + bank->eint_fltcon_offset);
++
++		/* fltcon1 register only exists for pins 4-7 */
++		if (bank->nr_pins > 4)
++			save->eint_fltcon1 = readl(regs +
++						EXYNOS_GPIO_EFLTCON_OFFSET
++						+ bank->eint_fltcon_offset + 4);
++
++		save->eint_mask = readl(regs + bank->irq_chip->eint_mask
++					+ bank->eint_offset);
++
++		pr_debug("%s: save     con %#010x\n",
++			 bank->name, save->eint_con);
++		pr_debug("%s: save fltcon0 %#010x\n",
++			 bank->name, save->eint_fltcon0);
++		if (bank->nr_pins > 4)
++			pr_debug("%s: save fltcon1 %#010x\n",
++				 bank->name, save->eint_fltcon1);
++		pr_debug("%s: save    mask %#010x\n",
++			 bank->name, save->eint_mask);
++	} else if (bank->eint_type == EINT_TYPE_WKUP) {
++		exynos_set_wakeup(bank);
+ 	}
+ }
+ 
+-static void exynos_pinctrl_resume_bank(
+-				struct samsung_pinctrl_drv_data *drvdata,
+-				struct samsung_pin_bank *bank)
++void exynosautov920_pinctrl_suspend(struct samsung_pin_bank *bank)
+ {
+ 	struct exynos_eint_gpio_save *save = bank->soc_priv;
+-	void __iomem *regs = bank->eint_base;
++	const void __iomem *regs = bank->eint_base;
+ 
+-	if (clk_enable(bank->drvdata->pclk)) {
+-		dev_err(bank->gpio_chip.parent,
+-			"unable to enable clock for restoring state\n");
+-		return;
++	if (bank->eint_type == EINT_TYPE_GPIO) {
++		save->eint_con = readl(regs + bank->pctl_offset +
++				       bank->eint_con_offset);
++		save->eint_mask = readl(regs + bank->pctl_offset +
++					bank->eint_mask_offset);
++		pr_debug("%s: save     con %#010x\n",
++			 bank->name, save->eint_con);
++		pr_debug("%s: save    mask %#010x\n",
++			 bank->name, save->eint_mask);
++	} else if (bank->eint_type == EINT_TYPE_WKUP) {
++		exynos_set_wakeup(bank);
+ 	}
++}
+ 
+-	pr_debug("%s:     con %#010x => %#010x\n", bank->name,
+-			readl(regs + EXYNOS_GPIO_ECON_OFFSET
+-			+ bank->eint_offset), save->eint_con);
+-	pr_debug("%s: fltcon0 %#010x => %#010x\n", bank->name,
+-			readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
+-			+ 2 * bank->eint_offset), save->eint_fltcon0);
+-	pr_debug("%s: fltcon1 %#010x => %#010x\n", bank->name,
+-			readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
+-			+ 2 * bank->eint_offset + 4), save->eint_fltcon1);
+-	pr_debug("%s:    mask %#010x => %#010x\n", bank->name,
+-			readl(regs + bank->irq_chip->eint_mask
+-			+ bank->eint_offset), save->eint_mask);
+-
+-	writel(save->eint_con, regs + EXYNOS_GPIO_ECON_OFFSET
+-						+ bank->eint_offset);
+-	writel(save->eint_fltcon0, regs + EXYNOS_GPIO_EFLTCON_OFFSET
+-						+ 2 * bank->eint_offset);
+-	writel(save->eint_fltcon1, regs + EXYNOS_GPIO_EFLTCON_OFFSET
+-						+ 2 * bank->eint_offset + 4);
+-	writel(save->eint_mask, regs + bank->irq_chip->eint_mask
+-						+ bank->eint_offset);
++void gs101_pinctrl_resume(struct samsung_pin_bank *bank)
++{
++	struct exynos_eint_gpio_save *save = bank->soc_priv;
+ 
+-	clk_disable(bank->drvdata->pclk);
++	void __iomem *regs = bank->eint_base;
++	void __iomem *eint_fltcfg0 = regs + EXYNOS_GPIO_EFLTCON_OFFSET
++		     + bank->eint_fltcon_offset;
++
++	if (bank->eint_type == EINT_TYPE_GPIO) {
++		pr_debug("%s:     con %#010x => %#010x\n", bank->name,
++			 readl(regs + EXYNOS_GPIO_ECON_OFFSET
++			       + bank->eint_offset), save->eint_con);
++
++		pr_debug("%s: fltcon0 %#010x => %#010x\n", bank->name,
++			 readl(eint_fltcfg0), save->eint_fltcon0);
++
++		/* fltcon1 register only exists for pins 4-7 */
++		if (bank->nr_pins > 4)
++			pr_debug("%s: fltcon1 %#010x => %#010x\n", bank->name,
++				 readl(eint_fltcfg0 + 4), save->eint_fltcon1);
++
++		pr_debug("%s:    mask %#010x => %#010x\n", bank->name,
++			 readl(regs + bank->irq_chip->eint_mask
++			       + bank->eint_offset), save->eint_mask);
++
++		writel(save->eint_con, regs + EXYNOS_GPIO_ECON_OFFSET
++		       + bank->eint_offset);
++		writel(save->eint_fltcon0, eint_fltcfg0);
++
++		if (bank->nr_pins > 4)
++			writel(save->eint_fltcon1, eint_fltcfg0 + 4);
++		writel(save->eint_mask, regs + bank->irq_chip->eint_mask
++		       + bank->eint_offset);
++	}
+ }
+ 
+-static void exynosauto_pinctrl_resume_bank(struct samsung_pinctrl_drv_data *drvdata,
+-					   struct samsung_pin_bank *bank)
++void exynos_pinctrl_resume(struct samsung_pin_bank *bank)
+ {
+ 	struct exynos_eint_gpio_save *save = bank->soc_priv;
+ 	void __iomem *regs = bank->eint_base;
+ 
+-	if (clk_enable(bank->drvdata->pclk)) {
+-		dev_err(bank->gpio_chip.parent,
+-			"unable to enable clock for restoring state\n");
+-		return;
++	if (bank->eint_type == EINT_TYPE_GPIO) {
++		pr_debug("%s:     con %#010x => %#010x\n", bank->name,
++			 readl(regs + EXYNOS_GPIO_ECON_OFFSET
++			       + bank->eint_offset), save->eint_con);
++		pr_debug("%s: fltcon0 %#010x => %#010x\n", bank->name,
++			 readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
++			       + 2 * bank->eint_offset), save->eint_fltcon0);
++		pr_debug("%s: fltcon1 %#010x => %#010x\n", bank->name,
++			 readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
++			       + 2 * bank->eint_offset + 4),
++			 save->eint_fltcon1);
++		pr_debug("%s:    mask %#010x => %#010x\n", bank->name,
++			 readl(regs + bank->irq_chip->eint_mask
++			       + bank->eint_offset), save->eint_mask);
++
++		writel(save->eint_con, regs + EXYNOS_GPIO_ECON_OFFSET
++		       + bank->eint_offset);
++		writel(save->eint_fltcon0, regs + EXYNOS_GPIO_EFLTCON_OFFSET
++		       + 2 * bank->eint_offset);
++		writel(save->eint_fltcon1, regs + EXYNOS_GPIO_EFLTCON_OFFSET
++		       + 2 * bank->eint_offset + 4);
++		writel(save->eint_mask, regs + bank->irq_chip->eint_mask
++		       + bank->eint_offset);
+ 	}
+-
+-	pr_debug("%s:     con %#010x => %#010x\n", bank->name,
+-		 readl(regs + bank->pctl_offset + bank->eint_con_offset), save->eint_con);
+-	pr_debug("%s:    mask %#010x => %#010x\n", bank->name,
+-		 readl(regs + bank->pctl_offset + bank->eint_mask_offset), save->eint_mask);
+-
+-	writel(save->eint_con, regs + bank->pctl_offset + bank->eint_con_offset);
+-	writel(save->eint_mask, regs + bank->pctl_offset + bank->eint_mask_offset);
+-
+-	clk_disable(bank->drvdata->pclk);
+ }
+ 
+-void exynos_pinctrl_resume(struct samsung_pinctrl_drv_data *drvdata)
++void exynosautov920_pinctrl_resume(struct samsung_pin_bank *bank)
+ {
+-	struct samsung_pin_bank *bank = drvdata->pin_banks;
+-	int i;
++	struct exynos_eint_gpio_save *save = bank->soc_priv;
++	void __iomem *regs = bank->eint_base;
+ 
+-	for (i = 0; i < drvdata->nr_banks; ++i, ++bank)
+-		if (bank->eint_type == EINT_TYPE_GPIO) {
+-			if (bank->eint_con_offset)
+-				exynosauto_pinctrl_resume_bank(drvdata, bank);
+-			else
+-				exynos_pinctrl_resume_bank(drvdata, bank);
+-		}
++	if (bank->eint_type == EINT_TYPE_GPIO) {
++		/* exynosautov920 has eint_con_offset for all but one bank */
++		if (!bank->eint_con_offset)
++			exynos_pinctrl_resume(bank);
++
++		pr_debug("%s:     con %#010x => %#010x\n", bank->name,
++			 readl(regs + bank->pctl_offset + bank->eint_con_offset),
++			 save->eint_con);
++		pr_debug("%s:    mask %#010x => %#010x\n", bank->name,
++			 readl(regs + bank->pctl_offset +
++			       bank->eint_mask_offset), save->eint_mask);
++
++		writel(save->eint_con,
++		       regs + bank->pctl_offset + bank->eint_con_offset);
++		writel(save->eint_mask,
++		       regs + bank->pctl_offset + bank->eint_mask_offset);
++	}
+ }
+ 
+ static void exynos_retention_enable(struct samsung_pinctrl_drv_data *drvdata)
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos.h b/drivers/pinctrl/samsung/pinctrl-exynos.h
+index b483270ddc53c0..2bee52b61b9317 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos.h
++++ b/drivers/pinctrl/samsung/pinctrl-exynos.h
+@@ -240,8 +240,12 @@ struct exynos_muxed_weint_data {
+ 
+ int exynos_eint_gpio_init(struct samsung_pinctrl_drv_data *d);
+ int exynos_eint_wkup_init(struct samsung_pinctrl_drv_data *d);
+-void exynos_pinctrl_suspend(struct samsung_pinctrl_drv_data *drvdata);
+-void exynos_pinctrl_resume(struct samsung_pinctrl_drv_data *drvdata);
++void exynosautov920_pinctrl_resume(struct samsung_pin_bank *bank);
++void exynosautov920_pinctrl_suspend(struct samsung_pin_bank *bank);
++void exynos_pinctrl_suspend(struct samsung_pin_bank *bank);
++void exynos_pinctrl_resume(struct samsung_pin_bank *bank);
++void gs101_pinctrl_suspend(struct samsung_pin_bank *bank);
++void gs101_pinctrl_resume(struct samsung_pin_bank *bank);
+ struct samsung_retention_ctrl *
+ exynos_retention_init(struct samsung_pinctrl_drv_data *drvdata,
+ 		      const struct samsung_retention_data *data);
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.c b/drivers/pinctrl/samsung/pinctrl-samsung.c
+index 2896eb2de2c098..ef557217e173af 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.c
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.c
+@@ -1333,6 +1333,7 @@ static int samsung_pinctrl_probe(struct platform_device *pdev)
+ static int __maybe_unused samsung_pinctrl_suspend(struct device *dev)
+ {
+ 	struct samsung_pinctrl_drv_data *drvdata = dev_get_drvdata(dev);
++	struct samsung_pin_bank *bank;
+ 	int i;
+ 
+ 	i = clk_enable(drvdata->pclk);
+@@ -1343,7 +1344,7 @@ static int __maybe_unused samsung_pinctrl_suspend(struct device *dev)
+ 	}
+ 
+ 	for (i = 0; i < drvdata->nr_banks; i++) {
+-		struct samsung_pin_bank *bank = &drvdata->pin_banks[i];
++		bank = &drvdata->pin_banks[i];
+ 		const void __iomem *reg = bank->pctl_base + bank->pctl_offset;
+ 		const u8 *offs = bank->type->reg_offset;
+ 		const u8 *widths = bank->type->fld_width;
+@@ -1371,10 +1372,14 @@ static int __maybe_unused samsung_pinctrl_suspend(struct device *dev)
+ 		}
+ 	}
+ 
++	for (i = 0; i < drvdata->nr_banks; i++) {
++		bank = &drvdata->pin_banks[i];
++		if (drvdata->suspend)
++			drvdata->suspend(bank);
++	}
++
+ 	clk_disable(drvdata->pclk);
+ 
+-	if (drvdata->suspend)
+-		drvdata->suspend(drvdata);
+ 	if (drvdata->retention_ctrl && drvdata->retention_ctrl->enable)
+ 		drvdata->retention_ctrl->enable(drvdata);
+ 
+@@ -1392,6 +1397,7 @@ static int __maybe_unused samsung_pinctrl_suspend(struct device *dev)
+ static int __maybe_unused samsung_pinctrl_resume(struct device *dev)
+ {
+ 	struct samsung_pinctrl_drv_data *drvdata = dev_get_drvdata(dev);
++	struct samsung_pin_bank *bank;
+ 	int ret;
+ 	int i;
+ 
+@@ -1406,11 +1412,14 @@ static int __maybe_unused samsung_pinctrl_resume(struct device *dev)
+ 		return ret;
+ 	}
+ 
+-	if (drvdata->resume)
+-		drvdata->resume(drvdata);
++	for (i = 0; i < drvdata->nr_banks; i++) {
++		bank = &drvdata->pin_banks[i];
++		if (drvdata->resume)
++			drvdata->resume(bank);
++	}
+ 
+ 	for (i = 0; i < drvdata->nr_banks; i++) {
+-		struct samsung_pin_bank *bank = &drvdata->pin_banks[i];
++		bank = &drvdata->pin_banks[i];
+ 		void __iomem *reg = bank->pctl_base + bank->pctl_offset;
+ 		const u8 *offs = bank->type->reg_offset;
+ 		const u8 *widths = bank->type->fld_width;
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.h b/drivers/pinctrl/samsung/pinctrl-samsung.h
+index 3cf758df7d6912..fcc57c244d167d 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.h
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.h
+@@ -285,8 +285,8 @@ struct samsung_pin_ctrl {
+ 	int		(*eint_gpio_init)(struct samsung_pinctrl_drv_data *);
+ 	int		(*eint_wkup_init)(struct samsung_pinctrl_drv_data *);
+ 	void		(*pud_value_init)(struct samsung_pinctrl_drv_data *drvdata);
+-	void		(*suspend)(struct samsung_pinctrl_drv_data *);
+-	void		(*resume)(struct samsung_pinctrl_drv_data *);
++	void		(*suspend)(struct samsung_pin_bank *bank);
++	void		(*resume)(struct samsung_pin_bank *bank);
+ };
+ 
+ /**
+@@ -335,8 +335,8 @@ struct samsung_pinctrl_drv_data {
+ 
+ 	struct samsung_retention_ctrl	*retention_ctrl;
+ 
+-	void (*suspend)(struct samsung_pinctrl_drv_data *);
+-	void (*resume)(struct samsung_pinctrl_drv_data *);
++	void (*suspend)(struct samsung_pin_bank *bank);
++	void (*resume)(struct samsung_pin_bank *bank);
+ };
+ 
+ /**
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi-dt.c b/drivers/pinctrl/sunxi/pinctrl-sunxi-dt.c
+index 1833078f68776c..4e34b0cd3b73aa 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sunxi-dt.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi-dt.c
+@@ -143,7 +143,7 @@ static struct sunxi_desc_pin *init_pins_table(struct device *dev,
+  */
+ static int prepare_function_table(struct device *dev, struct device_node *pnode,
+ 				  struct sunxi_desc_pin *pins, int npins,
+-				  const u8 *irq_bank_muxes)
++				  unsigned pin_base, const u8 *irq_bank_muxes)
+ {
+ 	struct device_node *node;
+ 	struct property *prop;
+@@ -166,7 +166,7 @@ static int prepare_function_table(struct device *dev, struct device_node *pnode,
+ 	 */
+ 	for (i = 0; i < npins; i++) {
+ 		struct sunxi_desc_pin *pin = &pins[i];
+-		int bank = pin->pin.number / PINS_PER_BANK;
++		int bank = (pin->pin.number - pin_base) / PINS_PER_BANK;
+ 
+ 		if (irq_bank_muxes[bank]) {
+ 			pin->variant++;
+@@ -211,7 +211,7 @@ static int prepare_function_table(struct device *dev, struct device_node *pnode,
+ 	last_bank = 0;
+ 	for (i = 0; i < npins; i++) {
+ 		struct sunxi_desc_pin *pin = &pins[i];
+-		int bank = pin->pin.number / PINS_PER_BANK;
++		int bank = (pin->pin.number - pin_base) / PINS_PER_BANK;
+ 		int lastfunc = pin->variant + 1;
+ 		int irq_mux = irq_bank_muxes[bank];
+ 
+@@ -353,7 +353,7 @@ int sunxi_pinctrl_dt_table_init(struct platform_device *pdev,
+ 		return PTR_ERR(pins);
+ 
+ 	ret = prepare_function_table(&pdev->dev, pnode, pins, desc->npins,
+-				     irq_bank_muxes);
++				     desc->pin_base, irq_bank_muxes);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c
+index d2228720991ffe..7678e3d05fd36f 100644
+--- a/drivers/platform/chrome/cros_ec_typec.c
++++ b/drivers/platform/chrome/cros_ec_typec.c
+@@ -22,8 +22,10 @@
+ 
+ #define DRV_NAME "cros-ec-typec"
+ 
+-#define DP_PORT_VDO	(DP_CONF_SET_PIN_ASSIGN(BIT(DP_PIN_ASSIGN_C) | BIT(DP_PIN_ASSIGN_D)) | \
+-				DP_CAP_DFP_D | DP_CAP_RECEPTACLE)
++#define DP_PORT_VDO	(DP_CAP_DFP_D | DP_CAP_RECEPTACLE | \
++			 DP_CONF_SET_PIN_ASSIGN(BIT(DP_PIN_ASSIGN_C) | \
++						BIT(DP_PIN_ASSIGN_D) | \
++						BIT(DP_PIN_ASSIGN_E)))
+ 
+ static void cros_typec_role_switch_quirk(struct fwnode_handle *fwnode)
+ {
+diff --git a/drivers/power/reset/at91-reset.c b/drivers/power/reset/at91-reset.c
+index 036b18a1f90f80..511f5a8f8961ce 100644
+--- a/drivers/power/reset/at91-reset.c
++++ b/drivers/power/reset/at91-reset.c
+@@ -129,12 +129,11 @@ static int at91_reset(struct notifier_block *this, unsigned long mode,
+ 		"	str	%4, [%0, %6]\n\t"
+ 		/* Disable SDRAM1 accesses */
+ 		"1:	tst	%1, #0\n\t"
+-		"	beq	2f\n\t"
+ 		"	strne	%3, [%1, #" __stringify(AT91_DDRSDRC_RTR) "]\n\t"
+ 		/* Power down SDRAM1 */
+ 		"	strne	%4, [%1, %6]\n\t"
+ 		/* Reset CPU */
+-		"2:	str	%5, [%2, #" __stringify(AT91_RSTC_CR) "]\n\t"
++		"	str	%5, [%2, #" __stringify(AT91_RSTC_CR) "]\n\t"
+ 
+ 		"	b	.\n\t"
+ 		:
+@@ -145,7 +144,7 @@ static int at91_reset(struct notifier_block *this, unsigned long mode,
+ 		  "r" cpu_to_le32(AT91_DDRSDRC_LPCB_POWER_DOWN),
+ 		  "r" (reset->data->reset_args),
+ 		  "r" (reset->ramc_lpr)
+-		: "r4");
++	);
+ 
+ 	return NOTIFY_DONE;
+ }
+diff --git a/drivers/power/supply/max77705_charger.c b/drivers/power/supply/max77705_charger.c
+index eec5e9ef795efd..329b430d0e5065 100644
+--- a/drivers/power/supply/max77705_charger.c
++++ b/drivers/power/supply/max77705_charger.c
+@@ -545,20 +545,28 @@ static int max77705_charger_probe(struct i2c_client *i2c)
+ 		return dev_err_probe(dev, ret, "failed to add irq chip\n");
+ 
+ 	chg->wqueue = create_singlethread_workqueue(dev_name(dev));
+-	if (IS_ERR(chg->wqueue))
+-		return dev_err_probe(dev, PTR_ERR(chg->wqueue), "failed to create workqueue\n");
++	if (!chg->wqueue)
++		return dev_err_probe(dev, -ENOMEM, "failed to create workqueue\n");
+ 
+ 	ret = devm_work_autocancel(dev, &chg->chgin_work, max77705_chgin_isr_work);
+-	if (ret)
+-		return dev_err_probe(dev, ret, "failed to initialize interrupt work\n");
++	if (ret) {
++		dev_err_probe(dev, ret, "failed to initialize interrupt work\n");
++		goto destroy_wq;
++	}
+ 
+ 	max77705_charger_initialize(chg);
+ 
+ 	ret = max77705_charger_enable(chg);
+-	if (ret)
+-		return dev_err_probe(dev, ret, "failed to enable charge\n");
++	if (ret) {
++		dev_err_probe(dev, ret, "failed to enable charge\n");
++		goto destroy_wq;
++	}
+ 
+ 	return devm_add_action_or_reset(dev, max77705_charger_disable, chg);
++
++destroy_wq:
++	destroy_workqueue(chg->wqueue);
++	return ret;
+ }
+ 
+ static const struct of_device_id max77705_charger_of_match[] = {
+diff --git a/drivers/ptp/ptp_private.h b/drivers/ptp/ptp_private.h
+index 18934e28469ee6..528d86a33f37de 100644
+--- a/drivers/ptp/ptp_private.h
++++ b/drivers/ptp/ptp_private.h
+@@ -98,17 +98,7 @@ static inline int queue_cnt(const struct timestamp_event_queue *q)
+ /* Check if ptp virtual clock is in use */
+ static inline bool ptp_vclock_in_use(struct ptp_clock *ptp)
+ {
+-	bool in_use = false;
+-
+-	if (mutex_lock_interruptible(&ptp->n_vclocks_mux))
+-		return true;
+-
+-	if (!ptp->is_virtual_clock && ptp->n_vclocks)
+-		in_use = true;
+-
+-	mutex_unlock(&ptp->n_vclocks_mux);
+-
+-	return in_use;
++	return !ptp->is_virtual_clock;
+ }
+ 
+ /* Check if ptp clock shall be free running */
+diff --git a/drivers/regulator/max20086-regulator.c b/drivers/regulator/max20086-regulator.c
+index 198d45f8e88493..3d333b61fb18c8 100644
+--- a/drivers/regulator/max20086-regulator.c
++++ b/drivers/regulator/max20086-regulator.c
+@@ -5,6 +5,7 @@
+ // Copyright (C) 2022 Laurent Pinchart <laurent.pinchart@idesonboard.com>
+ // Copyright (C) 2018 Avnet, Inc.
+ 
++#include <linux/cleanup.h>
+ #include <linux/err.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/i2c.h>
+@@ -133,11 +134,11 @@ static int max20086_regulators_register(struct max20086 *chip)
+ static int max20086_parse_regulators_dt(struct max20086 *chip, bool *boot_on)
+ {
+ 	struct of_regulator_match *matches;
+-	struct device_node *node;
+ 	unsigned int i;
+ 	int ret;
+ 
+-	node = of_get_child_by_name(chip->dev->of_node, "regulators");
++	struct device_node *node __free(device_node) =
++		of_get_child_by_name(chip->dev->of_node, "regulators");
+ 	if (!node) {
+ 		dev_err(chip->dev, "regulators node not found\n");
+ 		return -ENODEV;
+@@ -153,7 +154,6 @@ static int max20086_parse_regulators_dt(struct max20086 *chip, bool *boot_on)
+ 
+ 	ret = of_regulator_match(chip->dev, node, matches,
+ 				 chip->info->num_outputs);
+-	of_node_put(node);
+ 	if (ret < 0) {
+ 		dev_err(chip->dev, "Failed to match regulators\n");
+ 		return -EINVAL;
+diff --git a/drivers/remoteproc/qcom_wcnss_iris.c b/drivers/remoteproc/qcom_wcnss_iris.c
+index b989718776bdb5..2b52b403eb3f76 100644
+--- a/drivers/remoteproc/qcom_wcnss_iris.c
++++ b/drivers/remoteproc/qcom_wcnss_iris.c
+@@ -196,6 +196,7 @@ struct qcom_iris *qcom_iris_probe(struct device *parent, bool *use_48mhz_xo)
+ 
+ err_device_del:
+ 	device_del(&iris->dev);
++	put_device(&iris->dev);
+ 
+ 	return ERR_PTR(ret);
+ }
+@@ -203,4 +204,5 @@ struct qcom_iris *qcom_iris_probe(struct device *parent, bool *use_48mhz_xo)
+ void qcom_iris_remove(struct qcom_iris *iris)
+ {
+ 	device_del(&iris->dev);
++	put_device(&iris->dev);
+ }
+diff --git a/drivers/remoteproc/ti_k3_dsp_remoteproc.c b/drivers/remoteproc/ti_k3_dsp_remoteproc.c
+index a695890254ff76..35e8c3cc313c36 100644
+--- a/drivers/remoteproc/ti_k3_dsp_remoteproc.c
++++ b/drivers/remoteproc/ti_k3_dsp_remoteproc.c
+@@ -115,10 +115,6 @@ static void k3_dsp_rproc_mbox_callback(struct mbox_client *client, void *data)
+ 	const char *name = kproc->rproc->name;
+ 	u32 msg = omap_mbox_message(data);
+ 
+-	/* Do not forward messages from a detached core */
+-	if (kproc->rproc->state == RPROC_DETACHED)
+-		return;
+-
+ 	dev_dbg(dev, "mbox msg: 0x%x\n", msg);
+ 
+ 	switch (msg) {
+@@ -159,10 +155,6 @@ static void k3_dsp_rproc_kick(struct rproc *rproc, int vqid)
+ 	mbox_msg_t msg = (mbox_msg_t)vqid;
+ 	int ret;
+ 
+-	/* Do not forward messages to a detached core */
+-	if (kproc->rproc->state == RPROC_DETACHED)
+-		return;
+-
+ 	/* send the index of the triggered virtqueue in the mailbox payload */
+ 	ret = mbox_send_message(kproc->mbox, (void *)msg);
+ 	if (ret < 0)
+diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+index dbc513c5569cbf..ba082ca13e7508 100644
+--- a/drivers/remoteproc/ti_k3_r5_remoteproc.c
++++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+@@ -194,10 +194,6 @@ static void k3_r5_rproc_mbox_callback(struct mbox_client *client, void *data)
+ 	const char *name = kproc->rproc->name;
+ 	u32 msg = omap_mbox_message(data);
+ 
+-	/* Do not forward message from a detached core */
+-	if (kproc->rproc->state == RPROC_DETACHED)
+-		return;
+-
+ 	dev_dbg(dev, "mbox msg: 0x%x\n", msg);
+ 
+ 	switch (msg) {
+@@ -233,10 +229,6 @@ static void k3_r5_rproc_kick(struct rproc *rproc, int vqid)
+ 	mbox_msg_t msg = (mbox_msg_t)vqid;
+ 	int ret;
+ 
+-	/* Do not forward message to a detached core */
+-	if (kproc->rproc->state == RPROC_DETACHED)
+-		return;
+-
+ 	/* send the index of the triggered virtqueue in the mailbox payload */
+ 	ret = mbox_send_message(kproc->mbox, (void *)msg);
+ 	if (ret < 0)
+@@ -448,13 +440,36 @@ static int k3_r5_rproc_prepare(struct rproc *rproc)
+ {
+ 	struct k3_r5_rproc *kproc = rproc->priv;
+ 	struct k3_r5_cluster *cluster = kproc->cluster;
+-	struct k3_r5_core *core = kproc->core;
++	struct k3_r5_core *core = kproc->core, *core0, *core1;
+ 	struct device *dev = kproc->dev;
+ 	u32 ctrl = 0, cfg = 0, stat = 0;
+ 	u64 boot_vec = 0;
+ 	bool mem_init_dis;
+ 	int ret;
+ 
++	/*
++	 * R5 cores require to be powered on sequentially, core0 should be in
++	 * higher power state than core1 in a cluster. So, wait for core0 to
++	 * power up before proceeding to core1 and put timeout of 2sec. This
++	 * waiting mechanism is necessary because rproc_auto_boot_callback() for
++	 * core1 can be called before core0 due to thread execution order.
++	 *
++	 * By placing the wait mechanism here in .prepare() ops, this condition
++	 * is enforced for rproc boot requests from sysfs as well.
++	 */
++	core0 = list_first_entry(&cluster->cores, struct k3_r5_core, elem);
++	core1 = list_last_entry(&cluster->cores, struct k3_r5_core, elem);
++	if (cluster->mode == CLUSTER_MODE_SPLIT && core == core1 &&
++	    !core0->released_from_reset) {
++		ret = wait_event_interruptible_timeout(cluster->core_transition,
++						       core0->released_from_reset,
++						       msecs_to_jiffies(2000));
++		if (ret <= 0) {
++			dev_err(dev, "can not power up core1 before core0");
++			return -EPERM;
++		}
++	}
++
+ 	ret = ti_sci_proc_get_status(core->tsp, &boot_vec, &cfg, &ctrl, &stat);
+ 	if (ret < 0)
+ 		return ret;
+@@ -470,6 +485,14 @@ static int k3_r5_rproc_prepare(struct rproc *rproc)
+ 		return ret;
+ 	}
+ 
++	/*
++	 * Notify all threads in the wait queue when core0 state has changed so
++	 * that threads waiting for this condition can be executed.
++	 */
++	core->released_from_reset = true;
++	if (core == core0)
++		wake_up_interruptible(&cluster->core_transition);
++
+ 	/*
+ 	 * Newer IP revisions like on J7200 SoCs support h/w auto-initialization
+ 	 * of TCMs, so there is no need to perform the s/w memzero. This bit is
+@@ -515,10 +538,30 @@ static int k3_r5_rproc_unprepare(struct rproc *rproc)
+ {
+ 	struct k3_r5_rproc *kproc = rproc->priv;
+ 	struct k3_r5_cluster *cluster = kproc->cluster;
+-	struct k3_r5_core *core = kproc->core;
++	struct k3_r5_core *core = kproc->core, *core0, *core1;
+ 	struct device *dev = kproc->dev;
+ 	int ret;
+ 
++	/*
++	 * Ensure power-down of cores is sequential in split mode. Core1 must
++	 * power down before Core0 to maintain the expected state. By placing
++	 * the wait mechanism here in .unprepare() ops, this condition is
++	 * enforced for rproc stop or shutdown requests from sysfs and device
++	 * removal as well.
++	 */
++	core0 = list_first_entry(&cluster->cores, struct k3_r5_core, elem);
++	core1 = list_last_entry(&cluster->cores, struct k3_r5_core, elem);
++	if (cluster->mode == CLUSTER_MODE_SPLIT && core == core0 &&
++	    core1->released_from_reset) {
++		ret = wait_event_interruptible_timeout(cluster->core_transition,
++						       !core1->released_from_reset,
++						       msecs_to_jiffies(2000));
++		if (ret <= 0) {
++			dev_err(dev, "can not power down core0 before core1");
++			return -EPERM;
++		}
++	}
++
+ 	/* Re-use LockStep-mode reset logic for Single-CPU mode */
+ 	ret = (cluster->mode == CLUSTER_MODE_LOCKSTEP ||
+ 	       cluster->mode == CLUSTER_MODE_SINGLECPU) ?
+@@ -526,6 +569,14 @@ static int k3_r5_rproc_unprepare(struct rproc *rproc)
+ 	if (ret)
+ 		dev_err(dev, "unable to disable cores, ret = %d\n", ret);
+ 
++	/*
++	 * Notify all threads in the wait queue when core1 state has changed so
++	 * that threads waiting for this condition can be executed.
++	 */
++	core->released_from_reset = false;
++	if (core == core1)
++		wake_up_interruptible(&cluster->core_transition);
++
+ 	return ret;
+ }
+ 
+@@ -551,7 +602,7 @@ static int k3_r5_rproc_start(struct rproc *rproc)
+ 	struct k3_r5_rproc *kproc = rproc->priv;
+ 	struct k3_r5_cluster *cluster = kproc->cluster;
+ 	struct device *dev = kproc->dev;
+-	struct k3_r5_core *core0, *core;
++	struct k3_r5_core *core;
+ 	u32 boot_addr;
+ 	int ret;
+ 
+@@ -573,21 +624,9 @@ static int k3_r5_rproc_start(struct rproc *rproc)
+ 				goto unroll_core_run;
+ 		}
+ 	} else {
+-		/* do not allow core 1 to start before core 0 */
+-		core0 = list_first_entry(&cluster->cores, struct k3_r5_core,
+-					 elem);
+-		if (core != core0 && core0->rproc->state == RPROC_OFFLINE) {
+-			dev_err(dev, "%s: can not start core 1 before core 0\n",
+-				__func__);
+-			return -EPERM;
+-		}
+-
+ 		ret = k3_r5_core_run(core);
+ 		if (ret)
+ 			return ret;
+-
+-		core->released_from_reset = true;
+-		wake_up_interruptible(&cluster->core_transition);
+ 	}
+ 
+ 	return 0;
+@@ -628,8 +667,7 @@ static int k3_r5_rproc_stop(struct rproc *rproc)
+ {
+ 	struct k3_r5_rproc *kproc = rproc->priv;
+ 	struct k3_r5_cluster *cluster = kproc->cluster;
+-	struct device *dev = kproc->dev;
+-	struct k3_r5_core *core1, *core = kproc->core;
++	struct k3_r5_core *core = kproc->core;
+ 	int ret;
+ 
+ 	/* halt all applicable cores */
+@@ -642,16 +680,6 @@ static int k3_r5_rproc_stop(struct rproc *rproc)
+ 			}
+ 		}
+ 	} else {
+-		/* do not allow core 0 to stop before core 1 */
+-		core1 = list_last_entry(&cluster->cores, struct k3_r5_core,
+-					elem);
+-		if (core != core1 && core1->rproc->state != RPROC_OFFLINE) {
+-			dev_err(dev, "%s: can not stop core 0 before core 1\n",
+-				__func__);
+-			ret = -EPERM;
+-			goto out;
+-		}
+-
+ 		ret = k3_r5_core_halt(core);
+ 		if (ret)
+ 			goto out;
+@@ -1279,26 +1307,6 @@ static int k3_r5_cluster_rproc_init(struct platform_device *pdev)
+ 		    cluster->mode == CLUSTER_MODE_SINGLECPU ||
+ 		    cluster->mode == CLUSTER_MODE_SINGLECORE)
+ 			break;
+-
+-		/*
+-		 * R5 cores require to be powered on sequentially, core0
+-		 * should be in higher power state than core1 in a cluster
+-		 * So, wait for current core to power up before proceeding
+-		 * to next core and put timeout of 2sec for each core.
+-		 *
+-		 * This waiting mechanism is necessary because
+-		 * rproc_auto_boot_callback() for core1 can be called before
+-		 * core0 due to thread execution order.
+-		 */
+-		ret = wait_event_interruptible_timeout(cluster->core_transition,
+-						       core->released_from_reset,
+-						       msecs_to_jiffies(2000));
+-		if (ret <= 0) {
+-			dev_err(dev,
+-				"Timed out waiting for %s core to power up!\n",
+-				rproc->name);
+-			goto out;
+-		}
+ 	}
+ 
+ 	return 0;
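
The wait-queue ordering used by this k3-r5 change boils down to one thread
blocking on a condition with a timeout while the other thread updates the
condition and wakes the queue. A minimal sketch of that pattern, not tied to
the driver (all names here are illustrative):

#include <linux/jiffies.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(core_transition);
static bool core1_released;

/* Split mode: core0 may power down only after core1 is back in reset. */
static int wait_for_core1_reset(void)
{
	long ret;

	ret = wait_event_interruptible_timeout(core_transition,
					       !core1_released,
					       msecs_to_jiffies(2000));
	if (ret <= 0)	/* 0 = timeout, negative = interrupted */
		return -EPERM;
	return 0;
}

/* Runs when core1 is put back into reset. */
static void core1_unprepared(void)
{
	core1_released = false;
	wake_up_interruptible(&core_transition);
}
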
+diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c
+index 40d386809d6b78..bb161def317533 100644
+--- a/drivers/rpmsg/qcom_smd.c
++++ b/drivers/rpmsg/qcom_smd.c
+@@ -746,7 +746,7 @@ static int __qcom_smd_send(struct qcom_smd_channel *channel, const void *data,
+ 	__le32 hdr[5] = { cpu_to_le32(len), };
+ 	int tlen = sizeof(hdr) + len;
+ 	unsigned long flags;
+-	int ret;
++	int ret = 0;
+ 
+ 	/* Word aligned channels only accept word size aligned data */
+ 	if (channel->info_word && len % 4)
+diff --git a/drivers/rtc/rtc-loongson.c b/drivers/rtc/rtc-loongson.c
+index 97e5625c064ceb..2ca7ffd5d7a92a 100644
+--- a/drivers/rtc/rtc-loongson.c
++++ b/drivers/rtc/rtc-loongson.c
+@@ -129,6 +129,14 @@ static u32 loongson_rtc_handler(void *id)
+ {
+ 	struct loongson_rtc_priv *priv = (struct loongson_rtc_priv *)id;
+ 
++	rtc_update_irq(priv->rtcdev, 1, RTC_AF | RTC_IRQF);
++
++	/*
++	 * The TOY_MATCH0_REG should be cleared to 0 here,
++	 * otherwise the interrupt cannot be cleared.
++	 */
++	regmap_write(priv->regmap, TOY_MATCH0_REG, 0);
++
+ 	spin_lock(&priv->lock);
+ 	/* Disable RTC alarm wakeup and interrupt */
+ 	writel(readl(priv->pm_base + PM1_EN_REG) & ~RTC_EN,
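
The rtc-loongson fix makes the handler report the alarm to the RTC core and
then clear the match register so the interrupt line can deassert. A generic
handler of that shape, sketched with an illustrative register offset:

#include <linux/interrupt.h>
#include <linux/regmap.h>
#include <linux/rtc.h>

#define ALARM_MATCH_REG	0x2c	/* illustrative offset, not the real one */

struct example_rtc {
	struct rtc_device *rtcdev;
	struct regmap *regmap;
};

static irqreturn_t example_rtc_handler(int irq, void *id)
{
	struct example_rtc *priv = id;

	/* Report one alarm interrupt (AF) to the RTC core. */
	rtc_update_irq(priv->rtcdev, 1, RTC_AF | RTC_IRQF);

	/* Clear the match register, otherwise the line stays asserted. */
	regmap_write(priv->regmap, ALARM_MATCH_REG, 0);

	return IRQ_HANDLED;
}
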
+diff --git a/drivers/rtc/rtc-sh.c b/drivers/rtc/rtc-sh.c
+index 9ea40f40188f3e..3409f576422485 100644
+--- a/drivers/rtc/rtc-sh.c
++++ b/drivers/rtc/rtc-sh.c
+@@ -485,9 +485,15 @@ static int __init sh_rtc_probe(struct platform_device *pdev)
+ 		return -ENOENT;
+ 	}
+ 
+-	rtc->periodic_irq = ret;
+-	rtc->carry_irq = platform_get_irq(pdev, 1);
+-	rtc->alarm_irq = platform_get_irq(pdev, 2);
++	if (!pdev->dev.of_node) {
++		rtc->periodic_irq = ret;
++		rtc->carry_irq = platform_get_irq(pdev, 1);
++		rtc->alarm_irq = platform_get_irq(pdev, 2);
++	} else {
++		rtc->alarm_irq = ret;
++		rtc->periodic_irq = platform_get_irq(pdev, 1);
++		rtc->carry_irq = platform_get_irq(pdev, 2);
++	}
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_IO, 0);
+ 	if (!res)
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index 944cf2fb05617d..d3981b6779316b 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -1885,33 +1885,14 @@ static int hisi_sas_I_T_nexus_reset(struct domain_device *device)
+ 	}
+ 	hisi_sas_dereg_device(hisi_hba, device);
+ 
+-	rc = hisi_sas_debug_I_T_nexus_reset(device);
+-	if (rc == TMF_RESP_FUNC_COMPLETE && dev_is_sata(device)) {
+-		struct sas_phy *local_phy;
+-
++	if (dev_is_sata(device)) {
+ 		rc = hisi_sas_softreset_ata_disk(device);
+-		switch (rc) {
+-		case -ECOMM:
+-			rc = -ENODEV;
+-			break;
+-		case TMF_RESP_FUNC_FAILED:
+-		case -EMSGSIZE:
+-		case -EIO:
+-			local_phy = sas_get_local_phy(device);
+-			rc = sas_phy_enable(local_phy, 0);
+-			if (!rc) {
+-				local_phy->enabled = 0;
+-				dev_err(dev, "Disabled local phy of ATA disk %016llx due to softreset fail (%d)\n",
+-					SAS_ADDR(device->sas_addr), rc);
+-				rc = -ENODEV;
+-			}
+-			sas_put_local_phy(local_phy);
+-			break;
+-		default:
+-			break;
+-		}
++		if (rc == TMF_RESP_FUNC_FAILED)
++			dev_err(dev, "ata disk %016llx reset (%d)\n",
++				SAS_ADDR(device->sas_addr), rc);
+ 	}
+ 
++	rc = hisi_sas_debug_I_T_nexus_reset(device);
+ 	if ((rc == TMF_RESP_FUNC_COMPLETE) || (rc == -ENODEV))
+ 		hisi_sas_release_task(hisi_hba, device);
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index 179be6c5a43e07..c2ec4db6728697 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -161,7 +161,7 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ 	struct lpfc_hba   *phba;
+ 	struct lpfc_work_evt *evtp;
+ 	unsigned long iflags;
+-	bool nvme_reg = false;
++	bool drop_initial_node_ref = false;
+ 
+ 	ndlp = ((struct lpfc_rport_data *)rport->dd_data)->pnode;
+ 	if (!ndlp)
+@@ -188,8 +188,13 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ 		spin_lock_irqsave(&ndlp->lock, iflags);
+ 		ndlp->rport = NULL;
+ 
+-		if (ndlp->fc4_xpt_flags & NVME_XPT_REGD)
+-			nvme_reg = true;
++		/* Only 1 thread can drop the initial node reference.
++		 * If not registered for NVME and the NLP_DROPPED flag
++		 * is clear, remove the initial reference.
++		 */
++		if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD))
++			if (!test_and_set_bit(NLP_DROPPED, &ndlp->nlp_flag))
++				drop_initial_node_ref = true;
+ 
+ 		/* The scsi_transport is done with the rport so lpfc cannot
+ 		 * call to unregister.
+@@ -200,13 +205,16 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ 			/* If NLP_XPT_REGD was cleared in lpfc_nlp_unreg_node,
+ 			 * unregister calls were made to the scsi and nvme
+ 			 * transports and refcnt was already decremented. Clear
+-			 * the NLP_XPT_REGD flag only if the NVME Rport is
++			 * the NLP_XPT_REGD flag only if the NVME nrport is
+ 			 * confirmed unregistered.
+ 			 */
+-			if (!nvme_reg && ndlp->fc4_xpt_flags & NLP_XPT_REGD) {
+-				ndlp->fc4_xpt_flags &= ~NLP_XPT_REGD;
++			if (ndlp->fc4_xpt_flags & NLP_XPT_REGD) {
++				if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD))
++					ndlp->fc4_xpt_flags &= ~NLP_XPT_REGD;
+ 				spin_unlock_irqrestore(&ndlp->lock, iflags);
+-				lpfc_nlp_put(ndlp); /* may free ndlp */
++
++				/* Release scsi transport reference */
++				lpfc_nlp_put(ndlp);
+ 			} else {
+ 				spin_unlock_irqrestore(&ndlp->lock, iflags);
+ 			}
+@@ -214,14 +222,8 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ 			spin_unlock_irqrestore(&ndlp->lock, iflags);
+ 		}
+ 
+-		/* Only 1 thread can drop the initial node reference.  If
+-		 * another thread has set NLP_DROPPED, this thread is done.
+-		 */
+-		if (nvme_reg || test_bit(NLP_DROPPED, &ndlp->nlp_flag))
+-			return;
+-
+-		set_bit(NLP_DROPPED, &ndlp->nlp_flag);
+-		lpfc_nlp_put(ndlp);
++		if (drop_initial_node_ref)
++			lpfc_nlp_put(ndlp);
+ 		return;
+ 	}
+ 
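
The key idiom in this lpfc rework is test_and_set_bit(): because the read and
the set happen atomically, exactly one of any number of racing threads sees
the bit as previously clear and is thereby elected to drop the initial
reference. Reduced to its core (the flag bit and types are illustrative):

#include <linux/bitops.h>
#include <linux/kref.h>
#include <linux/slab.h>

#define DROPPED_BIT	0	/* illustrative flag bit */

struct node {
	unsigned long flags;
	struct kref ref;
};

static void node_release(struct kref *ref)
{
	kfree(container_of(ref, struct node, ref));
}

static void drop_initial_ref(struct node *n)
{
	/* Atomic test-and-set: only the winner sees 0 returned here. */
	if (!test_and_set_bit(DROPPED_BIT, &n->flags))
		kref_put(&n->ref, node_release);
}
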
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_ctl.c b/drivers/scsi/mpt3sas/mpt3sas_ctl.c
+index 063b10dd82514e..02fc204b9bf7b2 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_ctl.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_ctl.c
+@@ -2869,8 +2869,9 @@ _ctl_get_mpt_mctp_passthru_adapter(int dev_index)
+ 		if (ioc->facts.IOCCapabilities & MPI26_IOCFACTS_CAPABILITY_MCTP_PASSTHRU) {
+ 			if (count == dev_index) {
+ 				spin_unlock(&gioc_lock);
+-				return 0;
++				return ioc;
+ 			}
++			count++;
+ 		}
+ 	}
+ 	spin_unlock(&gioc_lock);
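
The mpt3sas hunk fixes two bugs in one loop: the function returned 0 instead
of the adapter it had just matched, and it never incremented count, so no
index other than 0 could ever match. A corrected find-the-Nth-entry pattern,
with illustrative types:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct adapter {
	struct list_head list;
	bool has_capability;
};

static LIST_HEAD(adapter_list);
static DEFINE_SPINLOCK(adapter_lock);

/* Return the dev_index'th capable adapter, or NULL if none. */
static struct adapter *find_capable_adapter(int dev_index)
{
	struct adapter *a;
	int count = 0;

	spin_lock(&adapter_lock);
	list_for_each_entry(a, &adapter_list, list) {
		if (!a->has_capability)
			continue;
		if (count == dev_index) {
			spin_unlock(&adapter_lock);
			return a;	/* the match itself, not 0 */
		}
		count++;	/* advance past every capable adapter */
	}
	spin_unlock(&adapter_lock);

	return NULL;
}
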
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 436bd29d5ebae6..6b1ebab36fa35b 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -699,7 +699,7 @@ static u32 qedf_get_login_failures(void *cookie)
+ }
+ 
+ static struct qed_fcoe_cb_ops qedf_cb_ops = {
+-	{
++	.common = {
+ 		.link_update = qedf_link_update,
+ 		.bw_update = qedf_bw_update,
+ 		.schedule_recovery_handler = qedf_schedule_recovery_handler,
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 0b8c91bf793fcb..c75a806496d674 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -3499,7 +3499,7 @@ static int iscsi_new_flashnode(struct iscsi_transport *transport,
+ 		pr_err("%s could not find host no %u\n",
+ 		       __func__, ev->u.new_flashnode.host_no);
+ 		err = -ENODEV;
+-		goto put_host;
++		goto exit_new_fnode;
+ 	}
+ 
+ 	index = transport->new_flashnode(shost, data, len);
+@@ -3509,7 +3509,6 @@ static int iscsi_new_flashnode(struct iscsi_transport *transport,
+ 	else
+ 		err = -EIO;
+ 
+-put_host:
+ 	scsi_host_put(shost);
+ 
+ exit_new_fnode:
+@@ -3534,7 +3533,7 @@ static int iscsi_del_flashnode(struct iscsi_transport *transport,
+ 		pr_err("%s could not find host no %u\n",
+ 		       __func__, ev->u.del_flashnode.host_no);
+ 		err = -ENODEV;
+-		goto put_host;
++		goto exit_del_fnode;
+ 	}
+ 
+ 	idx = ev->u.del_flashnode.flashnode_idx;
+@@ -3576,7 +3575,7 @@ static int iscsi_login_flashnode(struct iscsi_transport *transport,
+ 		pr_err("%s could not find host no %u\n",
+ 		       __func__, ev->u.login_flashnode.host_no);
+ 		err = -ENODEV;
+-		goto put_host;
++		goto exit_login_fnode;
+ 	}
+ 
+ 	idx = ev->u.login_flashnode.flashnode_idx;
+@@ -3628,7 +3627,7 @@ static int iscsi_logout_flashnode(struct iscsi_transport *transport,
+ 		pr_err("%s could not find host no %u\n",
+ 		       __func__, ev->u.logout_flashnode.host_no);
+ 		err = -ENODEV;
+-		goto put_host;
++		goto exit_logout_fnode;
+ 	}
+ 
+ 	idx = ev->u.logout_flashnode.flashnode_idx;
+@@ -3678,7 +3677,7 @@ static int iscsi_logout_flashnode_sid(struct iscsi_transport *transport,
+ 		pr_err("%s could not find host no %u\n",
+ 		       __func__, ev->u.logout_flashnode.host_no);
+ 		err = -ENODEV;
+-		goto put_host;
++		goto exit_logout_sid;
+ 	}
+ 
+ 	session = iscsi_session_lookup(ev->u.logout_flashnode_sid.sid);
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index 8a26eca4fdc9b8..6c9dec7e3128fd 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -5990,7 +5990,7 @@ static bool pqi_is_parity_write_stream(struct pqi_ctrl_info *ctrl_info,
+ 			pqi_stream_data->next_lba = rmd.first_block +
+ 				rmd.block_cnt;
+ 			pqi_stream_data->last_accessed = jiffies;
+-			per_cpu_ptr(device->raid_io_stats, smp_processor_id())->write_stream_cnt++;
++			per_cpu_ptr(device->raid_io_stats, raw_smp_processor_id())->write_stream_cnt++;
+ 			return true;
+ 		}
+ 
+@@ -6069,7 +6069,7 @@ static int pqi_scsi_queue_command(struct Scsi_Host *shost, struct scsi_cmnd *scm
+ 			rc = pqi_raid_bypass_submit_scsi_cmd(ctrl_info, device, scmd, queue_group);
+ 			if (rc == 0 || rc == SCSI_MLQUEUE_HOST_BUSY) {
+ 				raid_bypassed = true;
+-				per_cpu_ptr(device->raid_io_stats, smp_processor_id())->raid_bypass_cnt++;
++				per_cpu_ptr(device->raid_io_stats, raw_smp_processor_id())->raid_bypass_cnt++;
+ 			}
+ 		}
+ 		if (!raid_bypassed)
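
Both smartpqi hunks swap smp_processor_id() for raw_smp_processor_id(). The
former warns when used in preemptible context; here the per-CPU split is only
a contention-avoidance trick for statistics, so any valid CPU slot will do
and a rare lost update after a migration is acceptable. The pattern in
isolation:

#include <linux/percpu.h>
#include <linux/smp.h>
#include <linux/types.h>

struct io_stats {
	u64 bypass_cnt;
};

static void count_bypass(struct io_stats __percpu *stats)
{
	/*
	 * raw_smp_processor_id() skips the preemption check: if we
	 * migrate mid-update the counter may lose one increment,
	 * which is fine for statistics that readers sum over all CPUs.
	 */
	per_cpu_ptr(stats, raw_smp_processor_id())->bypass_cnt++;
}
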
+diff --git a/drivers/soc/aspeed/aspeed-lpc-snoop.c b/drivers/soc/aspeed/aspeed-lpc-snoop.c
+index 9ab5ba9cf1d61c..ef8f355589a584 100644
+--- a/drivers/soc/aspeed/aspeed-lpc-snoop.c
++++ b/drivers/soc/aspeed/aspeed-lpc-snoop.c
+@@ -166,7 +166,7 @@ static int aspeed_lpc_snoop_config_irq(struct aspeed_lpc_snoop *lpc_snoop,
+ 	int rc;
+ 
+ 	lpc_snoop->irq = platform_get_irq(pdev, 0);
+-	if (!lpc_snoop->irq)
++	if (lpc_snoop->irq < 0)
+ 		return -ENODEV;
+ 
+ 	rc = devm_request_irq(dev, lpc_snoop->irq,
+@@ -200,11 +200,15 @@ static int aspeed_lpc_enable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+ 	lpc_snoop->chan[channel].miscdev.minor = MISC_DYNAMIC_MINOR;
+ 	lpc_snoop->chan[channel].miscdev.name =
+ 		devm_kasprintf(dev, GFP_KERNEL, "%s%d", DEVICE_NAME, channel);
++	if (!lpc_snoop->chan[channel].miscdev.name) {
++		rc = -ENOMEM;
++		goto err_free_fifo;
++	}
+ 	lpc_snoop->chan[channel].miscdev.fops = &snoop_fops;
+ 	lpc_snoop->chan[channel].miscdev.parent = dev;
+ 	rc = misc_register(&lpc_snoop->chan[channel].miscdev);
+ 	if (rc)
+-		return rc;
++		goto err_free_fifo;
+ 
+ 	/* Enable LPC snoop channel at requested port */
+ 	switch (channel) {
+@@ -221,7 +225,8 @@ static int aspeed_lpc_enable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+ 		hicrb_en = HICRB_ENSNP1D;
+ 		break;
+ 	default:
+-		return -EINVAL;
++		rc = -EINVAL;
++		goto err_misc_deregister;
+ 	}
+ 
+ 	regmap_update_bits(lpc_snoop->regmap, HICR5, hicr5_en, hicr5_en);
+@@ -231,6 +236,12 @@ static int aspeed_lpc_enable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+ 		regmap_update_bits(lpc_snoop->regmap, HICRB,
+ 				hicrb_en, hicrb_en);
+ 
++	return 0;
++
++err_misc_deregister:
++	misc_deregister(&lpc_snoop->chan[channel].miscdev);
++err_free_fifo:
++	kfifo_free(&lpc_snoop->chan[channel].fifo);
+ 	return rc;
+ }
+ 
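
The aspeed-lpc-snoop change is a textbook probe-path unwinding fix: once the
kfifo is allocated and the miscdevice registered, every later failure must
undo those steps in reverse order. The goto-ladder shape, with stand-in
helpers:

#include <linux/kfifo.h>
#include <linux/miscdevice.h>

struct channel {
	struct kfifo fifo;
	struct miscdevice miscdev;
};

static int hw_enable(struct channel *ch)
{
	return 0;	/* stand-in for the real register writes */
}

static int enable_channel(struct channel *ch)
{
	int rc;

	rc = kfifo_alloc(&ch->fifo, 2048, GFP_KERNEL);
	if (rc)
		return rc;

	rc = misc_register(&ch->miscdev);
	if (rc)
		goto err_free_fifo;

	rc = hw_enable(ch);
	if (rc)
		goto err_misc_deregister;

	return 0;

err_misc_deregister:
	misc_deregister(&ch->miscdev);
err_free_fifo:
	kfifo_free(&ch->fifo);
	return rc;
}
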
+diff --git a/drivers/soc/qcom/smp2p.c b/drivers/soc/qcom/smp2p.c
+index a3e88ced328a91..c9199d6fbe26ec 100644
+--- a/drivers/soc/qcom/smp2p.c
++++ b/drivers/soc/qcom/smp2p.c
+@@ -575,7 +575,7 @@ static int qcom_smp2p_probe(struct platform_device *pdev)
+ 	smp2p->mbox_client.knows_txdone = true;
+ 	smp2p->mbox_chan = mbox_request_channel(&smp2p->mbox_client, 0);
+ 	if (IS_ERR(smp2p->mbox_chan)) {
+-		if (PTR_ERR(smp2p->mbox_chan) != -ENODEV)
++		if (PTR_ERR(smp2p->mbox_chan) != -ENOENT)
+ 			return PTR_ERR(smp2p->mbox_chan);
+ 
+ 		smp2p->mbox_chan = NULL;
+diff --git a/drivers/soundwire/generic_bandwidth_allocation.c b/drivers/soundwire/generic_bandwidth_allocation.c
+index 1cfaccf43eac50..c18f0c16f92973 100644
+--- a/drivers/soundwire/generic_bandwidth_allocation.c
++++ b/drivers/soundwire/generic_bandwidth_allocation.c
+@@ -204,6 +204,13 @@ static void _sdw_compute_port_params(struct sdw_bus *bus,
+ 			port_bo = 1;
+ 
+ 			list_for_each_entry(m_rt, &bus->m_rt_list, bus_node) {
++				/*
++				 * Only runtimes with CONFIGURED, PREPARED, ENABLED, and DISABLED
++				 * states should be included in the bandwidth calculation.
++				 */
++				if (m_rt->stream->state > SDW_STREAM_DISABLED ||
++				    m_rt->stream->state < SDW_STREAM_CONFIGURED)
++					continue;
+ 				sdw_compute_master_ports(m_rt, &params[i], &port_bo, hstop);
+ 			}
+ 
+diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
+index 244ac010686298..e7b61dc4ce6766 100644
+--- a/drivers/spi/atmel-quadspi.c
++++ b/drivers/spi/atmel-quadspi.c
+@@ -1436,22 +1436,17 @@ static int atmel_qspi_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_set_autosuspend_delay(&pdev->dev, 500);
+ 	pm_runtime_use_autosuspend(&pdev->dev);
+-	pm_runtime_set_active(&pdev->dev);
+-	pm_runtime_enable(&pdev->dev);
+-	pm_runtime_get_noresume(&pdev->dev);
++	devm_pm_runtime_set_active_enabled(&pdev->dev);
++	devm_pm_runtime_get_noresume(&pdev->dev);
+ 
+ 	err = atmel_qspi_init(aq);
+ 	if (err)
+ 		goto dma_release;
+ 
+ 	err = spi_register_controller(ctrl);
+-	if (err) {
+-		pm_runtime_put_noidle(&pdev->dev);
+-		pm_runtime_disable(&pdev->dev);
+-		pm_runtime_set_suspended(&pdev->dev);
+-		pm_runtime_dont_use_autosuspend(&pdev->dev);
++	if (err)
+ 		goto dma_release;
+-	}
++
+ 	pm_runtime_mark_last_busy(&pdev->dev);
+ 	pm_runtime_put_autosuspend(&pdev->dev);
+ 
+@@ -1530,10 +1525,6 @@ static void atmel_qspi_remove(struct platform_device *pdev)
+ 		 */
+ 		dev_warn(&pdev->dev, "Failed to resume device on remove\n");
+ 	}
+-
+-	pm_runtime_disable(&pdev->dev);
+-	pm_runtime_dont_use_autosuspend(&pdev->dev);
+-	pm_runtime_put_noidle(&pdev->dev);
+ }
+ 
+ static int __maybe_unused atmel_qspi_suspend(struct device *dev)
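
The atmel-quadspi conversion replaces open-coded pm_runtime enable/get pairs
with device-managed helpers, so neither the probe error path nor remove needs
manual unwinding any more. A sketch of the resulting probe shape, assuming
the devm_pm_runtime_* helpers used above return an error code:

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int example_probe(struct platform_device *pdev)
{
	int err;

	pm_runtime_set_autosuspend_delay(&pdev->dev, 500);
	pm_runtime_use_autosuspend(&pdev->dev);

	/* Both undone automatically on probe failure and on unbind. */
	err = devm_pm_runtime_set_active_enabled(&pdev->dev);
	if (err)
		return err;
	err = devm_pm_runtime_get_noresume(&pdev->dev);
	if (err)
		return err;

	/* ... hardware init and controller registration ... */

	pm_runtime_mark_last_busy(&pdev->dev);
	pm_runtime_put_autosuspend(&pdev->dev);
	return 0;
}
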
+diff --git a/drivers/spi/spi-bcm63xx-hsspi.c b/drivers/spi/spi-bcm63xx-hsspi.c
+index 644b44d2aef24e..18261cbd413b49 100644
+--- a/drivers/spi/spi-bcm63xx-hsspi.c
++++ b/drivers/spi/spi-bcm63xx-hsspi.c
+@@ -745,7 +745,7 @@ static int bcm63xx_hsspi_probe(struct platform_device *pdev)
+ 	if (IS_ERR(clk))
+ 		return PTR_ERR(clk);
+ 
+-	reset = devm_reset_control_get_optional_exclusive(dev, NULL);
++	reset = devm_reset_control_get_optional_shared(dev, NULL);
+ 	if (IS_ERR(reset))
+ 		return PTR_ERR(reset);
+ 
+diff --git a/drivers/spi/spi-bcm63xx.c b/drivers/spi/spi-bcm63xx.c
+index c8f64ec69344af..b56210734caafc 100644
+--- a/drivers/spi/spi-bcm63xx.c
++++ b/drivers/spi/spi-bcm63xx.c
+@@ -523,7 +523,7 @@ static int bcm63xx_spi_probe(struct platform_device *pdev)
+ 		return PTR_ERR(clk);
+ 	}
+ 
+-	reset = devm_reset_control_get_optional_exclusive(dev, NULL);
++	reset = devm_reset_control_get_optional_shared(dev, NULL);
+ 	if (IS_ERR(reset))
+ 		return PTR_ERR(reset);
+ 
+diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c
+index 29c616e2c408cf..70bb74b3bd9c32 100644
+--- a/drivers/spi/spi-omap2-mcspi.c
++++ b/drivers/spi/spi-omap2-mcspi.c
+@@ -134,6 +134,7 @@ struct omap2_mcspi {
+ 	size_t			max_xfer_len;
+ 	u32			ref_clk_hz;
+ 	bool			use_multi_mode;
++	bool			last_msg_kept_cs;
+ };
+ 
+ struct omap2_mcspi_cs {
+@@ -1269,6 +1270,10 @@ static int omap2_mcspi_prepare_message(struct spi_controller *ctlr,
+ 	 * multi-mode is applicable.
+ 	 */
+ 	mcspi->use_multi_mode = true;
++
++	if (mcspi->last_msg_kept_cs)
++		mcspi->use_multi_mode = false;
++
+ 	list_for_each_entry(tr, &msg->transfers, transfer_list) {
+ 		if (!tr->bits_per_word)
+ 			bits_per_word = msg->spi->bits_per_word;
+@@ -1287,18 +1292,19 @@ static int omap2_mcspi_prepare_message(struct spi_controller *ctlr,
+ 			mcspi->use_multi_mode = false;
+ 		}
+ 
+-		/* Check if transfer asks to change the CS status after the transfer */
+-		if (!tr->cs_change)
+-			mcspi->use_multi_mode = false;
+-
+-		/*
+-		 * If at least one message is not compatible, switch back to single mode
+-		 *
+-		 * The bits_per_word of certain transfer can be different, but it will have no
+-		 * impact on the signal itself.
+-		 */
+-		if (!mcspi->use_multi_mode)
+-			break;
++		if (list_is_last(&tr->transfer_list, &msg->transfers)) {
++			/* Check if transfer asks to keep the CS status after the whole message */
++			if (tr->cs_change) {
++				mcspi->use_multi_mode = false;
++				mcspi->last_msg_kept_cs = true;
++			} else {
++				mcspi->last_msg_kept_cs = false;
++			}
++		} else {
++			/* Check if transfer asks to change the CS status after the transfer */
++			if (!tr->cs_change)
++				mcspi->use_multi_mode = false;
++		}
+ 	}
+ 
+ 	omap2_mcspi_set_mode(ctlr);
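
The omap2-mcspi logic hinges on cs_change meaning two different things: on
the last transfer of a message it asks to keep CS asserted after the message,
while on any earlier transfer it asks to toggle CS between transfers, and
only the former must also force single mode for the following message.
list_is_last() separates the two cases; reduced:

#include <linux/list.h>
#include <linux/spi/spi.h>

static void scan_transfers(struct spi_message *msg, bool *multi_mode,
			   bool *kept_cs)
{
	struct spi_transfer *t;

	list_for_each_entry(t, &msg->transfers, transfer_list) {
		if (list_is_last(&t->transfer_list, &msg->transfers)) {
			/* Last transfer: cs_change keeps CS asserted. */
			*kept_cs = t->cs_change;
			if (t->cs_change)
				*multi_mode = false;
		} else if (!t->cs_change) {
			/* CS held across a transfer boundary. */
			*multi_mode = false;
		}
	}
}
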
+diff --git a/drivers/spi/spi-qpic-snand.c b/drivers/spi/spi-qpic-snand.c
+index 94948c8781e83f..44a8f58e46fe12 100644
+--- a/drivers/spi/spi-qpic-snand.c
++++ b/drivers/spi/spi-qpic-snand.c
+@@ -250,9 +250,11 @@ static const struct mtd_ooblayout_ops qcom_spi_ooblayout = {
+ static int qcom_spi_ecc_init_ctx_pipelined(struct nand_device *nand)
+ {
+ 	struct qcom_nand_controller *snandc = nand_to_qcom_snand(nand);
++	struct nand_ecc_props *reqs = &nand->ecc.requirements;
++	struct nand_ecc_props *user = &nand->ecc.user_conf;
+ 	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
+ 	struct mtd_info *mtd = nanddev_to_mtd(nand);
+-	int cwperpage, bad_block_byte;
++	int cwperpage, bad_block_byte, ret;
+ 	struct qpic_ecc *ecc_cfg;
+ 
+ 	cwperpage = mtd->writesize / NANDC_STEP_SIZE;
+@@ -261,11 +263,39 @@ static int qcom_spi_ecc_init_ctx_pipelined(struct nand_device *nand)
+ 	ecc_cfg = kzalloc(sizeof(*ecc_cfg), GFP_KERNEL);
+ 	if (!ecc_cfg)
+ 		return -ENOMEM;
+-	snandc->qspi->oob_buf = kzalloc(mtd->writesize + mtd->oobsize,
++
++	if (user->step_size && user->strength) {
++		ecc_cfg->step_size = user->step_size;
++		ecc_cfg->strength = user->strength;
++	} else if (reqs->step_size && reqs->strength) {
++		ecc_cfg->step_size = reqs->step_size;
++		ecc_cfg->strength = reqs->strength;
++	} else {
++		/* use defaults */
++		ecc_cfg->step_size = NANDC_STEP_SIZE;
++		ecc_cfg->strength = 4;
++	}
++
++	if (ecc_cfg->step_size != NANDC_STEP_SIZE) {
++		dev_err(snandc->dev,
++			"only %u bytes ECC step size is supported\n",
++			NANDC_STEP_SIZE);
++		ret = -EOPNOTSUPP;
++		goto err_free_ecc_cfg;
++	}
++
++	if (ecc_cfg->strength != 4) {
++		dev_err(snandc->dev,
++			"only 4 bits ECC strength is supported\n");
++		ret = -EOPNOTSUPP;
++		goto err_free_ecc_cfg;
++	}
++
++	snandc->qspi->oob_buf = kmalloc(mtd->writesize + mtd->oobsize,
+ 					GFP_KERNEL);
+ 	if (!snandc->qspi->oob_buf) {
+-		kfree(ecc_cfg);
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto err_free_ecc_cfg;
+ 	}
+ 
+ 	memset(snandc->qspi->oob_buf, 0xff, mtd->writesize + mtd->oobsize);
+@@ -280,8 +310,6 @@ static int qcom_spi_ecc_init_ctx_pipelined(struct nand_device *nand)
+ 	ecc_cfg->bytes = ecc_cfg->ecc_bytes_hw + ecc_cfg->spare_bytes + ecc_cfg->bbm_size;
+ 
+ 	ecc_cfg->steps = 4;
+-	ecc_cfg->strength = 4;
+-	ecc_cfg->step_size = 512;
+ 	ecc_cfg->cw_data = 516;
+ 	ecc_cfg->cw_size = ecc_cfg->cw_data + ecc_cfg->bytes;
+ 	bad_block_byte = mtd->writesize - ecc_cfg->cw_size * (cwperpage - 1) + 1;
+@@ -339,6 +367,10 @@ static int qcom_spi_ecc_init_ctx_pipelined(struct nand_device *nand)
+ 		ecc_cfg->strength, ecc_cfg->step_size);
+ 
+ 	return 0;
++
++err_free_ecc_cfg:
++	kfree(ecc_cfg);
++	return ret;
+ }
+ 
+ static void qcom_spi_ecc_cleanup_ctx_pipelined(struct nand_device *nand)
+diff --git a/drivers/spi/spi-sh-msiof.c b/drivers/spi/spi-sh-msiof.c
+index 8a98c313548e37..7d8a7998f8ae73 100644
+--- a/drivers/spi/spi-sh-msiof.c
++++ b/drivers/spi/spi-sh-msiof.c
+@@ -918,6 +918,7 @@ static int sh_msiof_transfer_one(struct spi_controller *ctlr,
+ 	void *rx_buf = t->rx_buf;
+ 	unsigned int len = t->len;
+ 	unsigned int bits = t->bits_per_word;
++	unsigned int max_wdlen = 256;
+ 	unsigned int bytes_per_word;
+ 	unsigned int words;
+ 	int n;
+@@ -931,17 +932,17 @@ static int sh_msiof_transfer_one(struct spi_controller *ctlr,
+ 	if (!spi_controller_is_target(p->ctlr))
+ 		sh_msiof_spi_set_clk_regs(p, t);
+ 
++	if (tx_buf)
++		max_wdlen = min(max_wdlen, p->tx_fifo_size);
++	if (rx_buf)
++		max_wdlen = min(max_wdlen, p->rx_fifo_size);
++
+ 	while (ctlr->dma_tx && len > 15) {
+ 		/*
+ 		 *  DMA supports 32-bit words only, hence pack 8-bit and 16-bit
+ 		 *  words, with byte resp. word swapping.
+ 		 */
+-		unsigned int l = 0;
+-
+-		if (tx_buf)
+-			l = min(round_down(len, 4), p->tx_fifo_size * 4);
+-		if (rx_buf)
+-			l = min(round_down(len, 4), p->rx_fifo_size * 4);
++		unsigned int l = min(round_down(len, 4), max_wdlen * 4);
+ 
+ 		if (bits <= 8) {
+ 			copy32 = copy_bswap32;
+diff --git a/drivers/spi/spi-tegra210-quad.c b/drivers/spi/spi-tegra210-quad.c
+index 64e1b2f8a0001c..665c06e1473beb 100644
+--- a/drivers/spi/spi-tegra210-quad.c
++++ b/drivers/spi/spi-tegra210-quad.c
+@@ -134,7 +134,7 @@
+ #define QSPI_COMMAND_VALUE_SET(X)		(((x) & 0xFF) << 0)
+ 
+ #define QSPI_CMB_SEQ_CMD_CFG			0x1a0
+-#define QSPI_COMMAND_X1_X2_X4(x)		(((x) & 0x3) << 13)
++#define QSPI_COMMAND_X1_X2_X4(x)		((((x) >> 1) & 0x3) << 13)
+ #define QSPI_COMMAND_X1_X2_X4_MASK		(0x03 << 13)
+ #define QSPI_COMMAND_SDR_DDR			BIT(12)
+ #define QSPI_COMMAND_SIZE_SET(x)		(((x) & 0xFF) << 0)
+@@ -147,7 +147,7 @@
+ #define QSPI_ADDRESS_VALUE_SET(X)		(((x) & 0xFFFF) << 0)
+ 
+ #define QSPI_CMB_SEQ_ADDR_CFG			0x1ac
+-#define QSPI_ADDRESS_X1_X2_X4(x)		(((x) & 0x3) << 13)
++#define QSPI_ADDRESS_X1_X2_X4(x)		((((x) >> 1) & 0x3) << 13)
+ #define QSPI_ADDRESS_X1_X2_X4_MASK		(0x03 << 13)
+ #define QSPI_ADDRESS_SDR_DDR			BIT(12)
+ #define QSPI_ADDRESS_SIZE_SET(x)		(((x) & 0xFF) << 0)
+@@ -1036,10 +1036,6 @@ static u32 tegra_qspi_addr_config(bool is_ddr, u8 bus_width, u8 len)
+ {
+ 	u32 addr_config = 0;
+ 
+-	/* Extract Address configuration and value */
+-	is_ddr = 0; //Only SDR mode supported
+-	bus_width = 0; //X1 mode
+-
+ 	if (is_ddr)
+ 		addr_config |= QSPI_ADDRESS_SDR_DDR;
+ 	else
+@@ -1079,13 +1075,13 @@ static int tegra_qspi_combined_seq_xfer(struct tegra_qspi *tqspi,
+ 		switch (transfer_phase) {
+ 		case CMD_TRANSFER:
+ 			/* X1 SDR mode */
+-			cmd_config = tegra_qspi_cmd_config(false, 0,
++			cmd_config = tegra_qspi_cmd_config(false, xfer->tx_nbits,
+ 							   xfer->len);
+ 			cmd_value = *((const u8 *)(xfer->tx_buf));
+ 			break;
+ 		case ADDR_TRANSFER:
+ 			/* X1 SDR mode */
+-			addr_config = tegra_qspi_addr_config(false, 0,
++			addr_config = tegra_qspi_addr_config(false, xfer->tx_nbits,
+ 							     xfer->len);
+ 			address_value = *((const u32 *)(xfer->tx_buf));
+ 			break;
+@@ -1163,26 +1159,22 @@ static int tegra_qspi_combined_seq_xfer(struct tegra_qspi *tqspi,
+ 				ret = -EIO;
+ 				goto exit;
+ 			}
+-			if (!xfer->cs_change) {
+-				tegra_qspi_transfer_end(spi);
+-				spi_transfer_delay_exec(xfer);
+-			}
+ 			break;
+ 		default:
+ 			ret = -EINVAL;
+ 			goto exit;
+ 		}
+ 		msg->actual_length += xfer->len;
++		if (!xfer->cs_change && transfer_phase == DATA_TRANSFER) {
++			tegra_qspi_transfer_end(spi);
++			spi_transfer_delay_exec(xfer);
++		}
+ 		transfer_phase++;
+ 	}
+ 	ret = 0;
+ 
+ exit:
+ 	msg->status = ret;
+-	if (ret < 0) {
+-		tegra_qspi_transfer_end(spi);
+-		spi_transfer_delay_exec(xfer);
+-	}
+ 
+ 	return ret;
+ }
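
The tegra210-quad macro fix is a width-encoding correction: the 2-bit
register field expects 0/1/2 for X1/X2/X4, while spi_transfer.tx_nbits
carries the lane count 1/2/4, so the value has to be shifted right once
before masking. A helper making the mapping explicit:

#include <linux/types.h>

/* Map an SPI lane count (1, 2 or 4) to the 2-bit X1/X2/X4 field. */
static inline u32 lanes_to_field(u8 nbits)
{
	/* 1 >> 1 == 0 (X1), 2 >> 1 == 1 (X2), 4 >> 1 == 2 (X4) */
	return ((u32)nbits >> 1) & 0x3;
}
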
+diff --git a/drivers/staging/gpib/ines/ines_gpib.c b/drivers/staging/gpib/ines/ines_gpib.c
+index d93eb05dab9038..8e2375d8ddac24 100644
+--- a/drivers/staging/gpib/ines/ines_gpib.c
++++ b/drivers/staging/gpib/ines/ines_gpib.c
+@@ -1484,7 +1484,7 @@ static void __exit ines_exit_module(void)
+ 	gpib_unregister_driver(&ines_pci_unaccel_interface);
+ 	gpib_unregister_driver(&ines_pci_accel_interface);
+ 	gpib_unregister_driver(&ines_isa_interface);
+-#ifdef GPIB__PCMCIA
++#ifdef CONFIG_GPIB_PCMCIA
+ 	gpib_unregister_driver(&ines_pcmcia_interface);
+ 	gpib_unregister_driver(&ines_pcmcia_unaccel_interface);
+ 	gpib_unregister_driver(&ines_pcmcia_accel_interface);
+diff --git a/drivers/staging/gpib/uapi/gpib_user.h b/drivers/staging/gpib/uapi/gpib_user.h
+index 5ff4588686fde3..0fd32fb9e7a64d 100644
+--- a/drivers/staging/gpib/uapi/gpib_user.h
++++ b/drivers/staging/gpib/uapi/gpib_user.h
+@@ -178,7 +178,7 @@ static inline uint8_t MTA(unsigned int addr)
+ 
+ static inline uint8_t MSA(unsigned int addr)
+ {
+-	return gpib_address_restrict(addr) | SAD;
++	return (addr & 0x1f) | SAD;
+ }
+ 
+ static inline uint8_t PPE_byte(unsigned int dio_line, int sense)
+diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
+index f9bef5173bf25c..a9bfd5305410c2 100644
+--- a/drivers/staging/media/rkvdec/rkvdec.c
++++ b/drivers/staging/media/rkvdec/rkvdec.c
+@@ -213,8 +213,14 @@ static int rkvdec_enum_framesizes(struct file *file, void *priv,
+ 	if (!fmt)
+ 		return -EINVAL;
+ 
+-	fsize->type = V4L2_FRMSIZE_TYPE_STEPWISE;
+-	fsize->stepwise = fmt->frmsize;
++	fsize->type = V4L2_FRMSIZE_TYPE_CONTINUOUS;
++	fsize->stepwise.min_width = 1;
++	fsize->stepwise.max_width = fmt->frmsize.max_width;
++	fsize->stepwise.step_width = 1;
++	fsize->stepwise.min_height = 1;
++	fsize->stepwise.max_height = fmt->frmsize.max_height;
++	fsize->stepwise.step_height = 1;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/thermal/mediatek/lvts_thermal.c b/drivers/thermal/mediatek/lvts_thermal.c
+index 088481d91e6e29..985925147ac068 100644
+--- a/drivers/thermal/mediatek/lvts_thermal.c
++++ b/drivers/thermal/mediatek/lvts_thermal.c
+@@ -213,6 +213,13 @@ static const struct debugfs_reg32 lvts_regs[] = {
+ 	LVTS_DEBUG_FS_REGS(LVTS_CLKEN),
+ };
+ 
++static void lvts_debugfs_exit(void *data)
++{
++	struct lvts_domain *lvts_td = data;
++
++	debugfs_remove_recursive(lvts_td->dom_dentry);
++}
++
+ static int lvts_debugfs_init(struct device *dev, struct lvts_domain *lvts_td)
+ {
+ 	struct debugfs_regset32 *regset;
+@@ -245,12 +252,7 @@ static int lvts_debugfs_init(struct device *dev, struct lvts_domain *lvts_td)
+ 		debugfs_create_regset32("registers", 0400, dentry, regset);
+ 	}
+ 
+-	return 0;
+-}
+-
+-static void lvts_debugfs_exit(struct lvts_domain *lvts_td)
+-{
+-	debugfs_remove_recursive(lvts_td->dom_dentry);
++	return devm_add_action_or_reset(dev, lvts_debugfs_exit, lvts_td);
+ }
+ 
+ #else
+@@ -261,8 +263,6 @@ static inline int lvts_debugfs_init(struct device *dev,
+ 	return 0;
+ }
+ 
+-static void lvts_debugfs_exit(struct lvts_domain *lvts_td) { }
+-
+ #endif
+ 
+ static int lvts_raw_to_temp(u32 raw_temp, int temp_factor)
+@@ -1374,8 +1374,6 @@ static void lvts_remove(struct platform_device *pdev)
+ 
+ 	for (i = 0; i < lvts_td->num_lvts_ctrl; i++)
+ 		lvts_ctrl_set_enable(&lvts_td->lvts_ctrl[i], false);
+-
+-	lvts_debugfs_exit(lvts_td);
+ }
+ 
+ static const struct lvts_ctrl_data mt7988_lvts_ap_data_ctrl[] = {
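
The lvts_thermal change ties debugfs teardown to the device with
devm_add_action_or_reset(), which runs the callback both on driver detach and
immediately if registering the action fails, so the explicit call in
.remove() can go away. The pattern in brief:

#include <linux/debugfs.h>
#include <linux/device.h>

static void example_debugfs_exit(void *data)
{
	debugfs_remove_recursive(data);
}

static int example_debugfs_init(struct device *dev)
{
	struct dentry *root = debugfs_create_dir("example", NULL);

	/* Runs example_debugfs_exit() on unbind, or right away on error. */
	return devm_add_action_or_reset(dev, example_debugfs_exit, root);
}
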
+diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
+index e51d01671d8e7c..3e96f1afd4268e 100644
+--- a/drivers/thunderbolt/usb4.c
++++ b/drivers/thunderbolt/usb4.c
+@@ -440,10 +440,10 @@ int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags)
+ 			bool configured = val & PORT_CS_19_PC;
+ 			usb4 = port->usb4;
+ 
+-			if (((flags & TB_WAKE_ON_CONNECT) |
++			if (((flags & TB_WAKE_ON_CONNECT) &&
+ 			      device_may_wakeup(&usb4->dev)) && !configured)
+ 				val |= PORT_CS_19_WOC;
+-			if (((flags & TB_WAKE_ON_DISCONNECT) |
++			if (((flags & TB_WAKE_ON_DISCONNECT) &&
+ 			      device_may_wakeup(&usb4->dev)) && configured)
+ 				val |= PORT_CS_19_WOD;
+ 			if ((flags & TB_WAKE_ON_USB4) && configured)
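
The usb4.c fix swaps a bitwise OR for a logical AND. With '|', the wake bit
was set whenever either the flag was requested or the device may wake; the
intent is to require both. The difference in miniature (names illustrative,
not the driver's):

/* Buggy: '|' makes the condition true if either operand is nonzero. */
if ((flags & WAKE_ON_CONNECT) | device_may_wakeup(dev))
	val |= WAKE_BIT;

/* Fixed: '&&' requires the flag to be set AND wakeup to be enabled. */
if ((flags & WAKE_ON_CONNECT) && device_may_wakeup(dev))
	val |= WAKE_BIT;
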
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 2a0ce11f405d2d..72ae08d6204ff4 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -1173,16 +1173,6 @@ static int omap_8250_tx_dma(struct uart_8250_port *p)
+ 		return 0;
+ 	}
+ 
+-	sg_init_table(&sg, 1);
+-	ret = kfifo_dma_out_prepare_mapped(&tport->xmit_fifo, &sg, 1,
+-					   UART_XMIT_SIZE, dma->tx_addr);
+-	if (ret != 1) {
+-		serial8250_clear_THRI(p);
+-		return 0;
+-	}
+-
+-	dma->tx_size = sg_dma_len(&sg);
+-
+ 	if (priv->habit & OMAP_DMA_TX_KICK) {
+ 		unsigned char c;
+ 		u8 tx_lvl;
+@@ -1207,18 +1197,22 @@ static int omap_8250_tx_dma(struct uart_8250_port *p)
+ 			ret = -EBUSY;
+ 			goto err;
+ 		}
+-		if (dma->tx_size < 4) {
++		if (kfifo_len(&tport->xmit_fifo) < 4) {
+ 			ret = -EINVAL;
+ 			goto err;
+ 		}
+-		if (!kfifo_get(&tport->xmit_fifo, &c)) {
++		if (!uart_fifo_out(&p->port, &c, 1)) {
+ 			ret = -EINVAL;
+ 			goto err;
+ 		}
+ 		skip_byte = c;
+-		/* now we need to recompute due to kfifo_get */
+-		kfifo_dma_out_prepare_mapped(&tport->xmit_fifo, &sg, 1,
+-				UART_XMIT_SIZE, dma->tx_addr);
++	}
++
++	sg_init_table(&sg, 1);
++	ret = kfifo_dma_out_prepare_mapped(&tport->xmit_fifo, &sg, 1, UART_XMIT_SIZE, dma->tx_addr);
++	if (ret != 1) {
++		ret = -EINVAL;
++		goto err;
+ 	}
+ 
+ 	desc = dmaengine_prep_slave_sg(dma->txchan, &sg, 1, DMA_MEM_TO_DEV,
+@@ -1228,6 +1222,7 @@ static int omap_8250_tx_dma(struct uart_8250_port *p)
+ 		goto err;
+ 	}
+ 
++	dma->tx_size = sg_dma_len(&sg);
+ 	dma->tx_running = 1;
+ 
+ 	desc->callback = omap_8250_dma_tx_complete;
+diff --git a/drivers/tty/serial/milbeaut_usio.c b/drivers/tty/serial/milbeaut_usio.c
+index 059bea18dbab56..4e47dca2c4ed9c 100644
+--- a/drivers/tty/serial/milbeaut_usio.c
++++ b/drivers/tty/serial/milbeaut_usio.c
+@@ -523,7 +523,10 @@ static int mlb_usio_probe(struct platform_device *pdev)
+ 	}
+ 	port->membase = devm_ioremap(&pdev->dev, res->start,
+ 				resource_size(res));
+-
++	if (!port->membase) {
++		ret = -ENOMEM;
++		goto failed;
++	}
+ 	ret = platform_get_irq_byname(pdev, "rx");
+ 	mlb_usio_irq[index][RX] = ret;
+ 
+diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
+index 4b91072f3a4e91..1f2bdd2e1cc593 100644
+--- a/drivers/tty/vt/vt_ioctl.c
++++ b/drivers/tty/vt/vt_ioctl.c
+@@ -1103,8 +1103,6 @@ long vt_compat_ioctl(struct tty_struct *tty,
+ 	case VT_WAITACTIVE:
+ 	case VT_RELDISP:
+ 	case VT_DISALLOCATE:
+-	case VT_RESIZE:
+-	case VT_RESIZEX:
+ 		return vt_ioctl(tty, cmd, arg);
+ 
+ 	/*
+diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c
+index f1294c29f48491..1e50675772febb 100644
+--- a/drivers/ufs/core/ufs-mcq.c
++++ b/drivers/ufs/core/ufs-mcq.c
+@@ -674,7 +674,6 @@ int ufshcd_mcq_abort(struct scsi_cmnd *cmd)
+ 	int tag = scsi_cmd_to_rq(cmd)->tag;
+ 	struct ufshcd_lrb *lrbp = &hba->lrb[tag];
+ 	struct ufs_hw_queue *hwq;
+-	unsigned long flags;
+ 	int err;
+ 
+ 	/* Skip task abort in case previous aborts failed and report failure */
+@@ -713,10 +712,5 @@ int ufshcd_mcq_abort(struct scsi_cmnd *cmd)
+ 		return FAILED;
+ 	}
+ 
+-	spin_lock_irqsave(&hwq->cq_lock, flags);
+-	if (ufshcd_cmd_inflight(lrbp->cmd))
+-		ufshcd_release_scsi_cmd(hba, lrbp);
+-	spin_unlock_irqrestore(&hwq->cq_lock, flags);
+-
+ 	return SUCCESS;
+ }
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 7735421e399182..04f769d907a446 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -6587,9 +6587,14 @@ static void ufshcd_err_handler(struct work_struct *work)
+ 		up(&hba->host_sem);
+ 		return;
+ 	}
+-	ufshcd_set_eh_in_progress(hba);
+ 	spin_unlock_irqrestore(hba->host->host_lock, flags);
++
+ 	ufshcd_err_handling_prepare(hba);
++
++	spin_lock_irqsave(hba->host->host_lock, flags);
++	ufshcd_set_eh_in_progress(hba);
++	spin_unlock_irqrestore(hba->host->host_lock, flags);
++
+ 	/* Complete requests that have door-bell cleared by h/w */
+ 	ufshcd_complete_requests(hba, false);
+ 	spin_lock_irqsave(hba->host->host_lock, flags);
+diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
+index c0761ccc1381e3..31649f908dd466 100644
+--- a/drivers/ufs/host/ufs-qcom.c
++++ b/drivers/ufs/host/ufs-qcom.c
+@@ -103,7 +103,9 @@ static const struct __ufs_qcom_bw_table {
+ };
+ 
+ static void ufs_qcom_get_default_testbus_cfg(struct ufs_qcom_host *host);
+-static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, unsigned long freq);
++static unsigned long ufs_qcom_opp_freq_to_clk_freq(struct ufs_hba *hba,
++						   unsigned long freq, char *name);
++static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, bool is_scale_up, unsigned long freq);
+ 
+ static struct ufs_qcom_host *rcdev_to_ufs_host(struct reset_controller_dev *rcd)
+ {
+@@ -452,10 +454,9 @@ static int ufs_qcom_power_up_sequence(struct ufs_hba *hba)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (phy->power_count) {
++	if (phy->power_count)
+ 		phy_power_off(phy);
+-		phy_exit(phy);
+-	}
++
+ 
+ 	/* phy initialization - calibrate the phy */
+ 	ret = phy_init(phy);
+@@ -602,7 +603,7 @@ static int ufs_qcom_link_startup_notify(struct ufs_hba *hba,
+ 			return -EINVAL;
+ 		}
+ 
+-		err = ufs_qcom_set_core_clk_ctrl(hba, ULONG_MAX);
++		err = ufs_qcom_set_core_clk_ctrl(hba, true, ULONG_MAX);
+ 		if (err)
+ 			dev_err(hba->dev, "cfg core clk ctrl failed\n");
+ 		/*
+@@ -1360,29 +1361,46 @@ static int ufs_qcom_set_clk_40ns_cycles(struct ufs_hba *hba,
+ 	return ufshcd_dme_set(hba, UIC_ARG_MIB(PA_VS_CORE_CLK_40NS_CYCLES), reg);
+ }
+ 
+-static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, unsigned long freq)
++static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, bool is_scale_up, unsigned long freq)
+ {
+ 	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+ 	struct list_head *head = &hba->clk_list_head;
+ 	struct ufs_clk_info *clki;
+ 	u32 cycles_in_1us = 0;
+ 	u32 core_clk_ctrl_reg;
++	unsigned long clk_freq;
+ 	int err;
+ 
++	if (hba->use_pm_opp && freq != ULONG_MAX) {
++		clk_freq = ufs_qcom_opp_freq_to_clk_freq(hba, freq, "core_clk_unipro");
++		if (clk_freq) {
++			cycles_in_1us = ceil(clk_freq, HZ_PER_MHZ);
++			goto set_core_clk_ctrl;
++		}
++	}
++
+ 	list_for_each_entry(clki, head, list) {
+ 		if (!IS_ERR_OR_NULL(clki->clk) &&
+ 		    !strcmp(clki->name, "core_clk_unipro")) {
+-			if (!clki->max_freq)
++			if (!clki->max_freq) {
+ 				cycles_in_1us = 150; /* default for backwards compatibility */
+-			else if (freq == ULONG_MAX)
++				break;
++			}
++
++			if (freq == ULONG_MAX) {
+ 				cycles_in_1us = ceil(clki->max_freq, HZ_PER_MHZ);
+-			else
+-				cycles_in_1us = ceil(freq, HZ_PER_MHZ);
++				break;
++			}
+ 
++			if (is_scale_up)
++				cycles_in_1us = ceil(clki->max_freq, HZ_PER_MHZ);
++			else
++				cycles_in_1us = ceil(clk_get_rate(clki->clk), HZ_PER_MHZ);
+ 			break;
+ 		}
+ 	}
+ 
++set_core_clk_ctrl:
+ 	err = ufshcd_dme_get(hba,
+ 			    UIC_ARG_MIB(DME_VS_CORE_CLK_CTRL),
+ 			    &core_clk_ctrl_reg);
+@@ -1425,7 +1443,7 @@ static int ufs_qcom_clk_scale_up_pre_change(struct ufs_hba *hba, unsigned long f
+ 		return ret;
+ 	}
+ 	/* set unipro core clock attributes and clear clock divider */
+-	return ufs_qcom_set_core_clk_ctrl(hba, freq);
++	return ufs_qcom_set_core_clk_ctrl(hba, true, freq);
+ }
+ 
+ static int ufs_qcom_clk_scale_up_post_change(struct ufs_hba *hba)
+@@ -1457,7 +1475,7 @@ static int ufs_qcom_clk_scale_down_pre_change(struct ufs_hba *hba)
+ static int ufs_qcom_clk_scale_down_post_change(struct ufs_hba *hba, unsigned long freq)
+ {
+ 	/* set unipro core clock attributes and clear clock divider */
+-	return ufs_qcom_set_core_clk_ctrl(hba, freq);
++	return ufs_qcom_set_core_clk_ctrl(hba, false, freq);
+ }
+ 
+ static int ufs_qcom_clk_scale_notify(struct ufs_hba *hba, bool scale_up,
+@@ -1922,11 +1940,53 @@ static int ufs_qcom_config_esi(struct ufs_hba *hba)
+ 	return ret;
+ }
+ 
++static unsigned long ufs_qcom_opp_freq_to_clk_freq(struct ufs_hba *hba,
++						   unsigned long freq, char *name)
++{
++	struct ufs_clk_info *clki;
++	struct dev_pm_opp *opp;
++	unsigned long clk_freq;
++	int idx = 0;
++	bool found = false;
++
++	opp = dev_pm_opp_find_freq_exact_indexed(hba->dev, freq, 0, true);
++	if (IS_ERR(opp)) {
++		dev_err(hba->dev, "Failed to find OPP for exact frequency %lu\n", freq);
++		return 0;
++	}
++
++	list_for_each_entry(clki, &hba->clk_list_head, list) {
++		if (!strcmp(clki->name, name)) {
++			found = true;
++			break;
++		}
++
++		idx++;
++	}
++
++	if (!found) {
++		dev_err(hba->dev, "Failed to find clock '%s' in clk list\n", name);
++		dev_pm_opp_put(opp);
++		return 0;
++	}
++
++	clk_freq = dev_pm_opp_get_freq_indexed(opp, idx);
++
++	dev_pm_opp_put(opp);
++
++	return clk_freq;
++}
++
+ static u32 ufs_qcom_freq_to_gear_speed(struct ufs_hba *hba, unsigned long freq)
+ {
+-	u32 gear = 0;
++	u32 gear = UFS_HS_DONT_CHANGE;
++	unsigned long unipro_freq;
+ 
+-	switch (freq) {
++	if (!hba->use_pm_opp)
++		return gear;
++
++	unipro_freq = ufs_qcom_opp_freq_to_clk_freq(hba, freq, "core_clk_unipro");
++	switch (unipro_freq) {
+ 	case 403000000:
+ 		gear = UFS_HS_G5;
+ 		break;
+@@ -1946,10 +2006,10 @@ static u32 ufs_qcom_freq_to_gear_speed(struct ufs_hba *hba, unsigned long freq)
+ 		break;
+ 	default:
+ 		dev_err(hba->dev, "%s: Unsupported clock freq : %lu\n", __func__, freq);
+-		break;
++		return UFS_HS_DONT_CHANGE;
+ 	}
+ 
+-	return gear;
++	return min_t(u32, gear, hba->max_pwr_info.info.gear_rx);
+ }
+ 
+ /*
+diff --git a/drivers/usb/cdns3/cdnsp-gadget.c b/drivers/usb/cdns3/cdnsp-gadget.c
+index 4824a10df07e7c..55f95f41b3b4dd 100644
+--- a/drivers/usb/cdns3/cdnsp-gadget.c
++++ b/drivers/usb/cdns3/cdnsp-gadget.c
+@@ -29,7 +29,8 @@
+ unsigned int cdnsp_port_speed(unsigned int port_status)
+ {
+ 	/*Detect gadget speed based on PORTSC register*/
+-	if (DEV_SUPERSPEEDPLUS(port_status))
++	if (DEV_SUPERSPEEDPLUS(port_status) ||
++	    DEV_SSP_GEN1x2(port_status) || DEV_SSP_GEN2x2(port_status))
+ 		return USB_SPEED_SUPER_PLUS;
+ 	else if (DEV_SUPERSPEED(port_status))
+ 		return USB_SPEED_SUPER;
+@@ -547,6 +548,7 @@ int cdnsp_wait_for_cmd_compl(struct cdnsp_device *pdev)
+ 	dma_addr_t cmd_deq_dma;
+ 	union cdnsp_trb *event;
+ 	u32 cycle_state;
++	u32 retry = 10;
+ 	int ret, val;
+ 	u64 cmd_dma;
+ 	u32  flags;
+@@ -578,8 +580,23 @@ int cdnsp_wait_for_cmd_compl(struct cdnsp_device *pdev)
+ 		flags = le32_to_cpu(event->event_cmd.flags);
+ 
+ 		/* Check the owner of the TRB. */
+-		if ((flags & TRB_CYCLE) != cycle_state)
++		if ((flags & TRB_CYCLE) != cycle_state) {
++			/*
++			 * Give the controller some extra time to finish the
++			 * command before returning an error code. Checking
++			 * CMD_RING_BUSY is not sufficient, because that bit
++			 * is cleared to '0' when the Command Descriptor has
++			 * been executed by the controller, not when the
++			 * command completion event has been added to the
++			 * event ring.
++			 */
++			if (retry--) {
++				udelay(20);
++				continue;
++			}
++
+ 			return -EINVAL;
++		}
+ 
+ 		cmd_dma = le64_to_cpu(event->event_cmd.cmd_trb);
+ 
+diff --git a/drivers/usb/cdns3/cdnsp-gadget.h b/drivers/usb/cdns3/cdnsp-gadget.h
+index 12534be52f39df..2afa3e558f85ca 100644
+--- a/drivers/usb/cdns3/cdnsp-gadget.h
++++ b/drivers/usb/cdns3/cdnsp-gadget.h
+@@ -285,11 +285,15 @@ struct cdnsp_port_regs {
+ #define XDEV_HS			(0x3 << 10)
+ #define XDEV_SS			(0x4 << 10)
+ #define XDEV_SSP		(0x5 << 10)
++#define XDEV_SSP1x2		(0x6 << 10)
++#define XDEV_SSP2x2		(0x7 << 10)
+ #define DEV_UNDEFSPEED(p)	(((p) & DEV_SPEED_MASK) == (0x0 << 10))
+ #define DEV_FULLSPEED(p)	(((p) & DEV_SPEED_MASK) == XDEV_FS)
+ #define DEV_HIGHSPEED(p)	(((p) & DEV_SPEED_MASK) == XDEV_HS)
+ #define DEV_SUPERSPEED(p)	(((p) & DEV_SPEED_MASK) == XDEV_SS)
+ #define DEV_SUPERSPEEDPLUS(p)	(((p) & DEV_SPEED_MASK) == XDEV_SSP)
++#define DEV_SSP_GEN1x2(p)	(((p) & DEV_SPEED_MASK) == XDEV_SSP1x2)
++#define DEV_SSP_GEN2x2(p)	(((p) & DEV_SPEED_MASK) == XDEV_SSP2x2)
+ #define DEV_SUPERSPEED_ANY(p)	(((p) & DEV_SPEED_MASK) >= XDEV_SS)
+ #define DEV_PORT_SPEED(p)	(((p) >> 10) & 0x0f)
+ /* Port Link State Write Strobe - set this when changing link state */
+diff --git a/drivers/usb/class/usbtmc.c b/drivers/usb/class/usbtmc.c
+index 66f3d9324ba2f3..75de29725a450c 100644
+--- a/drivers/usb/class/usbtmc.c
++++ b/drivers/usb/class/usbtmc.c
+@@ -565,14 +565,15 @@ static int usbtmc488_ioctl_read_stb(struct usbtmc_file_data *file_data,
+ 
+ 	rv = usbtmc_get_stb(file_data, &stb);
+ 
+-	if (rv > 0) {
+-		srq_asserted = atomic_xchg(&file_data->srq_asserted,
+-					srq_asserted);
+-		if (srq_asserted)
+-			stb |= 0x40; /* Set RQS bit */
++	if (rv < 0)
++		return rv;
++
++	srq_asserted = atomic_xchg(&file_data->srq_asserted, srq_asserted);
++	if (srq_asserted)
++		stb |= 0x40; /* Set RQS bit */
++
++	rv = put_user(stb, (__u8 __user *)arg);
+ 
+-		rv = put_user(stb, (__u8 __user *)arg);
+-	}
+ 	return rv;
+ 
+ }
+@@ -2201,7 +2202,7 @@ static long usbtmc_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 
+ 	case USBTMC_IOCTL_GET_STB:
+ 		retval = usbtmc_get_stb(file_data, &tmp_byte);
+-		if (retval > 0)
++		if (!retval)
+ 			retval = put_user(tmp_byte, (__u8 __user *)arg);
+ 		break;
+ 
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 0e1dd6ef60a719..9f19fc7494e022 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -6133,6 +6133,7 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ 	struct usb_hub			*parent_hub;
+ 	struct usb_hcd			*hcd = bus_to_hcd(udev->bus);
+ 	struct usb_device_descriptor	descriptor;
++	struct usb_interface		*intf;
+ 	struct usb_host_bos		*bos;
+ 	int				i, j, ret = 0;
+ 	int				port1 = udev->portnum;
+@@ -6190,6 +6191,18 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ 	if (!udev->actconfig)
+ 		goto done;
+ 
++	/*
++	 * Some devices can't handle setting default altsetting 0 with a
++	 * Set-Interface request. Disable host-side endpoints of those
++	 * interfaces here. Enable and reset them back after the host has set
++	 * up its internal endpoint structures during usb_hcd_alloc_bandwidth().
++	 */
++	for (i = 0; i < udev->actconfig->desc.bNumInterfaces; i++) {
++		intf = udev->actconfig->interface[i];
++		if (intf->cur_altsetting->desc.bAlternateSetting == 0)
++			usb_disable_interface(udev, intf, true);
++	}
++
+ 	mutex_lock(hcd->bandwidth_mutex);
+ 	ret = usb_hcd_alloc_bandwidth(udev, udev->actconfig, NULL, NULL);
+ 	if (ret < 0) {
+@@ -6221,12 +6234,11 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ 	 */
+ 	for (i = 0; i < udev->actconfig->desc.bNumInterfaces; i++) {
+ 		struct usb_host_config *config = udev->actconfig;
+-		struct usb_interface *intf = config->interface[i];
+ 		struct usb_interface_descriptor *desc;
+ 
++		intf = config->interface[i];
+ 		desc = &intf->cur_altsetting->desc;
+ 		if (desc->bAlternateSetting == 0) {
+-			usb_disable_interface(udev, intf, true);
+ 			usb_enable_interface(udev, intf, true);
+ 			ret = 0;
+ 		} else {
+diff --git a/drivers/usb/core/usb-acpi.c b/drivers/usb/core/usb-acpi.c
+index 935c0efea0b640..ea1ce8beb0cbb2 100644
+--- a/drivers/usb/core/usb-acpi.c
++++ b/drivers/usb/core/usb-acpi.c
+@@ -165,6 +165,8 @@ static int usb_acpi_add_usb4_devlink(struct usb_device *udev)
+ 		return 0;
+ 
+ 	hub = usb_hub_to_struct_hub(udev->parent);
++	if (!hub)
++		return 0;
+ 	port_dev = hub->ports[udev->portnum - 1];
+ 
+ 	struct fwnode_handle *nhi_fwnode __free(fwnode_handle) =
+diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
+index 740311c4fa2496..c7a05f842745bc 100644
+--- a/drivers/usb/gadget/function/f_hid.c
++++ b/drivers/usb/gadget/function/f_hid.c
+@@ -144,8 +144,8 @@ static struct hid_descriptor hidg_desc = {
+ 	.bcdHID				= cpu_to_le16(0x0101),
+ 	.bCountryCode			= 0x00,
+ 	.bNumDescriptors		= 0x1,
+-	/*.desc[0].bDescriptorType	= DYNAMIC */
+-	/*.desc[0].wDescriptorLenght	= DYNAMIC */
++	/*.rpt_desc.bDescriptorType	= DYNAMIC */
++	/*.rpt_desc.wDescriptorLength	= DYNAMIC */
+ };
+ 
+ /* Super-Speed Support */
+@@ -939,8 +939,8 @@ static int hidg_setup(struct usb_function *f,
+ 			struct hid_descriptor hidg_desc_copy = hidg_desc;
+ 
+ 			VDBG(cdev, "USB_REQ_GET_DESCRIPTOR: HID\n");
+-			hidg_desc_copy.desc[0].bDescriptorType = HID_DT_REPORT;
+-			hidg_desc_copy.desc[0].wDescriptorLength =
++			hidg_desc_copy.rpt_desc.bDescriptorType = HID_DT_REPORT;
++			hidg_desc_copy.rpt_desc.wDescriptorLength =
+ 				cpu_to_le16(hidg->report_desc_length);
+ 
+ 			length = min_t(unsigned short, length,
+@@ -1210,8 +1210,8 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
+ 	 * We can use the hidg_desc struct here, but we should not rely on
+ 	 * its content not changing after returning from this function.
+ 	 */
+-	hidg_desc.desc[0].bDescriptorType = HID_DT_REPORT;
+-	hidg_desc.desc[0].wDescriptorLength =
++	hidg_desc.rpt_desc.bDescriptorType = HID_DT_REPORT;
++	hidg_desc.rpt_desc.wDescriptorLength =
+ 		cpu_to_le16(hidg->report_desc_length);
+ 
+ 	hidg_hs_in_ep_desc.bEndpointAddress =
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 4b3d5075621aa0..d709e24c1fd422 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -1570,7 +1570,7 @@ static int gadget_match_driver(struct device *dev, const struct device_driver *d
+ {
+ 	struct usb_gadget *gadget = dev_to_usb_gadget(dev);
+ 	struct usb_udc *udc = gadget->udc;
+-	struct usb_gadget_driver *driver = container_of(drv,
++	const struct usb_gadget_driver *driver = container_of(drv,
+ 			struct usb_gadget_driver, driver);
+ 
+ 	/* If the driver specifies a udc_name, it must match the UDC's name */
+diff --git a/drivers/usb/misc/onboard_usb_dev.c b/drivers/usb/misc/onboard_usb_dev.c
+index f5372dfa241a9c..86f25bcb64253c 100644
+--- a/drivers/usb/misc/onboard_usb_dev.c
++++ b/drivers/usb/misc/onboard_usb_dev.c
+@@ -36,9 +36,10 @@
+ #define USB5744_CMD_CREG_ACCESS			0x99
+ #define USB5744_CMD_CREG_ACCESS_LSB		0x37
+ #define USB5744_CREG_MEM_ADDR			0x00
++#define USB5744_CREG_MEM_RD_ADDR		0x04
+ #define USB5744_CREG_WRITE			0x00
+-#define USB5744_CREG_RUNTIMEFLAGS2		0x41
+-#define USB5744_CREG_RUNTIMEFLAGS2_LSB		0x1D
++#define USB5744_CREG_READ			0x01
++#define USB5744_CREG_RUNTIMEFLAGS2		0x411D
+ #define USB5744_CREG_BYPASS_UDC_SUSPEND		BIT(3)
+ 
+ static void onboard_dev_attach_usb_driver(struct work_struct *work);
+@@ -309,11 +310,88 @@ static void onboard_dev_attach_usb_driver(struct work_struct *work)
+ 		pr_err("Failed to attach USB driver: %pe\n", ERR_PTR(err));
+ }
+ 
++#if IS_ENABLED(CONFIG_USB_ONBOARD_DEV_USB5744)
++static int onboard_dev_5744_i2c_read_byte(struct i2c_client *client, u16 addr, u8 *data)
++{
++	struct i2c_msg msg[2];
++	u8 rd_buf[3];
++	int ret;
++
++	u8 wr_buf[7] = {0, USB5744_CREG_MEM_ADDR, 4,
++			USB5744_CREG_READ, 1,
++			addr >> 8 & 0xff,
++			addr & 0xff};
++	msg[0].addr = client->addr;
++	msg[0].flags = 0;
++	msg[0].len = sizeof(wr_buf);
++	msg[0].buf = wr_buf;
++
++	ret = i2c_transfer(client->adapter, msg, 1);
++	if (ret < 0)
++		return ret;
++
++	wr_buf[0] = USB5744_CMD_CREG_ACCESS;
++	wr_buf[1] = USB5744_CMD_CREG_ACCESS_LSB;
++	wr_buf[2] = 0;
++	msg[0].len = 3;
++
++	ret = i2c_transfer(client->adapter, msg, 1);
++	if (ret < 0)
++		return ret;
++
++	wr_buf[0] = 0;
++	wr_buf[1] = USB5744_CREG_MEM_RD_ADDR;
++	msg[0].len = 2;
++
++	msg[1].addr = client->addr;
++	msg[1].flags = I2C_M_RD;
++	msg[1].len = 2;
++	msg[1].buf = rd_buf;
++
++	ret = i2c_transfer(client->adapter, msg, 2);
++	if (ret < 0)
++		return ret;
++	*data = rd_buf[1];
++
++	return 0;
++}
++
++static int onboard_dev_5744_i2c_write_byte(struct i2c_client *client, u16 addr, u8 data)
++{
++	struct i2c_msg msg[2];
++	int ret;
++
++	u8 wr_buf[8] = {0, USB5744_CREG_MEM_ADDR, 5,
++			USB5744_CREG_WRITE, 1,
++			addr >> 8 & 0xff,
++			addr & 0xff,
++			data};
++	msg[0].addr = client->addr;
++	msg[0].flags = 0;
++	msg[0].len = sizeof(wr_buf);
++	msg[0].buf = wr_buf;
++
++	ret = i2c_transfer(client->adapter, msg, 1);
++	if (ret < 0)
++		return ret;
++
++	msg[0].len = 3;
++	wr_buf[0] = USB5744_CMD_CREG_ACCESS;
++	wr_buf[1] = USB5744_CMD_CREG_ACCESS_LSB;
++	wr_buf[2] = 0;
++
++	ret = i2c_transfer(client->adapter, msg, 1);
++	if (ret < 0)
++		return ret;
++
++	return 0;
++}
++
+ static int onboard_dev_5744_i2c_init(struct i2c_client *client)
+ {
+-#if IS_ENABLED(CONFIG_USB_ONBOARD_DEV_USB5744)
+ 	struct device *dev = &client->dev;
+ 	int ret;
++	u8 reg;
+ 
+ 	/*
+ 	 * Set BYPASS_UDC_SUSPEND bit to ensure MCU is always enabled
+@@ -321,20 +399,16 @@ static int onboard_dev_5744_i2c_init(struct i2c_client *client)
+ 	 * The command writes 5 bytes to memory and single data byte in
+ 	 * configuration register.
+ 	 */
+-	char wr_buf[7] = {USB5744_CREG_MEM_ADDR, 5,
+-			  USB5744_CREG_WRITE, 1,
+-			  USB5744_CREG_RUNTIMEFLAGS2,
+-			  USB5744_CREG_RUNTIMEFLAGS2_LSB,
+-			  USB5744_CREG_BYPASS_UDC_SUSPEND};
+-
+-	ret = i2c_smbus_write_block_data(client, 0, sizeof(wr_buf), wr_buf);
++	ret = onboard_dev_5744_i2c_read_byte(client,
++					     USB5744_CREG_RUNTIMEFLAGS2, &reg);
+ 	if (ret)
+-		return dev_err_probe(dev, ret, "BYPASS_UDC_SUSPEND bit configuration failed\n");
++		return dev_err_probe(dev, ret, "CREG_RUNTIMEFLAGS2 read failed\n");
+ 
+-	ret = i2c_smbus_write_word_data(client, USB5744_CMD_CREG_ACCESS,
+-					USB5744_CMD_CREG_ACCESS_LSB);
++	reg |= USB5744_CREG_BYPASS_UDC_SUSPEND;
++	ret = onboard_dev_5744_i2c_write_byte(client,
++					      USB5744_CREG_RUNTIMEFLAGS2, reg);
+ 	if (ret)
+-		return dev_err_probe(dev, ret, "Configuration Register Access Command failed\n");
++		return dev_err_probe(dev, ret, "BYPASS_UDC_SUSPEND bit configuration failed\n");
+ 
+ 	/* Send SMBus command to boot hub. */
+ 	ret = i2c_smbus_write_word_data(client, USB5744_CMD_ATTACH,
+@@ -343,10 +417,13 @@ static int onboard_dev_5744_i2c_init(struct i2c_client *client)
+ 		return dev_err_probe(dev, ret, "USB Attach with SMBus command failed\n");
+ 
+ 	return ret;
++}
+ #else
++static int onboard_dev_5744_i2c_init(struct i2c_client *client)
++{
+ 	return -ENODEV;
+-#endif
+ }
++#endif
+ 
+ static int onboard_dev_probe(struct platform_device *pdev)
+ {
+diff --git a/drivers/usb/renesas_usbhs/common.c b/drivers/usb/renesas_usbhs/common.c
+index 4b35ef216125c7..16692e72b73650 100644
+--- a/drivers/usb/renesas_usbhs/common.c
++++ b/drivers/usb/renesas_usbhs/common.c
+@@ -685,10 +685,29 @@ static int usbhs_probe(struct platform_device *pdev)
+ 	INIT_DELAYED_WORK(&priv->notify_hotplug_work, usbhsc_notify_hotplug);
+ 	spin_lock_init(usbhs_priv_to_lock(priv));
+ 
++	/*
++	 * Acquire clocks and enable power management (PM) early in the
++	 * probe process, as the driver accesses registers during
++	 * initialization. Ensure the device is active before proceeding.
++	 */
++	pm_runtime_enable(dev);
++
++	ret = usbhsc_clk_get(dev, priv);
++	if (ret)
++		goto probe_pm_disable;
++
++	ret = pm_runtime_resume_and_get(dev);
++	if (ret)
++		goto probe_clk_put;
++
++	ret = usbhsc_clk_prepare_enable(priv);
++	if (ret)
++		goto probe_pm_put;
++
+ 	/* call pipe and module init */
+ 	ret = usbhs_pipe_probe(priv);
+ 	if (ret < 0)
+-		return ret;
++		goto probe_clk_dis_unprepare;
+ 
+ 	ret = usbhs_fifo_probe(priv);
+ 	if (ret < 0)
+@@ -705,10 +724,6 @@ static int usbhs_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto probe_fail_rst;
+ 
+-	ret = usbhsc_clk_get(dev, priv);
+-	if (ret)
+-		goto probe_fail_clks;
+-
+ 	/*
+ 	 * device reset here because
+ 	 * USB device might be used in boot loader.
+@@ -721,7 +736,7 @@ static int usbhs_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_warn(dev, "USB function not selected (GPIO)\n");
+ 			ret = -ENOTSUPP;
+-			goto probe_end_mod_exit;
++			goto probe_assert_rest;
+ 		}
+ 	}
+ 
+@@ -735,14 +750,19 @@ static int usbhs_probe(struct platform_device *pdev)
+ 	ret = usbhs_platform_call(priv, hardware_init, pdev);
+ 	if (ret < 0) {
+ 		dev_err(dev, "platform init failed.\n");
+-		goto probe_end_mod_exit;
++		goto probe_assert_rest;
+ 	}
+ 
+ 	/* reset phy for connection */
+ 	usbhs_platform_call(priv, phy_reset, pdev);
+ 
+-	/* power control */
+-	pm_runtime_enable(dev);
++	/*
++	 * Disable the clocks that were enabled earlier in the probe path,
++	 * and let the driver handle the clocks beyond this point.
++	 */
++	usbhsc_clk_disable_unprepare(priv);
++	pm_runtime_put(dev);
++
+ 	if (!usbhs_get_dparam(priv, runtime_pwctrl)) {
+ 		usbhsc_power_ctrl(priv, 1);
+ 		usbhs_mod_autonomy_mode(priv);
+@@ -759,9 +779,7 @@ static int usbhs_probe(struct platform_device *pdev)
+ 
+ 	return ret;
+ 
+-probe_end_mod_exit:
+-	usbhsc_clk_put(priv);
+-probe_fail_clks:
++probe_assert_rest:
+ 	reset_control_assert(priv->rsts);
+ probe_fail_rst:
+ 	usbhs_mod_remove(priv);
+@@ -769,6 +787,14 @@ static int usbhs_probe(struct platform_device *pdev)
+ 	usbhs_fifo_remove(priv);
+ probe_end_pipe_exit:
+ 	usbhs_pipe_remove(priv);
++probe_clk_dis_unprepare:
++	usbhsc_clk_disable_unprepare(priv);
++probe_pm_put:
++	pm_runtime_put(dev);
++probe_clk_put:
++	usbhsc_clk_put(priv);
++probe_pm_disable:
++	pm_runtime_disable(dev);
+ 
+ 	dev_info(dev, "probe failed (%d)\n", ret);
+ 
+diff --git a/drivers/usb/typec/bus.c b/drivers/usb/typec/bus.c
+index ae90688d23e400..a884cec9ab7e88 100644
+--- a/drivers/usb/typec/bus.c
++++ b/drivers/usb/typec/bus.c
+@@ -449,7 +449,7 @@ ATTRIBUTE_GROUPS(typec);
+ 
+ static int typec_match(struct device *dev, const struct device_driver *driver)
+ {
+-	struct typec_altmode_driver *drv = to_altmode_driver(driver);
++	const struct typec_altmode_driver *drv = to_altmode_driver(driver);
+ 	struct typec_altmode *altmode = to_typec_altmode(dev);
+ 	const struct typec_device_id *id;
+ 
+diff --git a/drivers/usb/typec/tcpm/tcpci_maxim_core.c b/drivers/usb/typec/tcpm/tcpci_maxim_core.c
+index fd1b8059336764..648311f5e3cf13 100644
+--- a/drivers/usb/typec/tcpm/tcpci_maxim_core.c
++++ b/drivers/usb/typec/tcpm/tcpci_maxim_core.c
+@@ -166,7 +166,8 @@ static void process_rx(struct max_tcpci_chip *chip, u16 status)
+ 		return;
+ 	}
+ 
+-	if (count > sizeof(struct pd_message) || count + 1 > TCPC_RECEIVE_BUFFER_LEN) {
++	if (count > sizeof(struct pd_message) + 1 ||
++	    count + 1 > TCPC_RECEIVE_BUFFER_LEN) {
+ 		dev_err(chip->dev, "Invalid TCPC_RX_BYTE_CNT %d\n", count);
+ 		return;
+ 	}
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 8adf6f95463304..214d45f8e55c21 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -596,6 +596,15 @@ struct pd_rx_event {
+ 	enum tcpm_transmit_type rx_sop_type;
+ };
+ 
++struct altmode_vdm_event {
++	struct kthread_work work;
++	struct tcpm_port *port;
++	u32 header;
++	u32 *data;
++	int cnt;
++	enum tcpm_transmit_type tx_sop_type;
++};
++
+ static const char * const pd_rev[] = {
+ 	[PD_REV10]		= "rev1",
+ 	[PD_REV20]		= "rev2",
+@@ -1608,18 +1617,68 @@ static void tcpm_queue_vdm(struct tcpm_port *port, const u32 header,
+ 	mod_vdm_delayed_work(port, 0);
+ }
+ 
+-static void tcpm_queue_vdm_unlocked(struct tcpm_port *port, const u32 header,
+-				    const u32 *data, int cnt, enum tcpm_transmit_type tx_sop_type)
++static void tcpm_queue_vdm_work(struct kthread_work *work)
+ {
+-	if (port->state != SRC_READY && port->state != SNK_READY &&
+-	    port->state != SRC_VDM_IDENTITY_REQUEST)
+-		return;
++	struct altmode_vdm_event *event = container_of(work,
++						       struct altmode_vdm_event,
++						       work);
++	struct tcpm_port *port = event->port;
+ 
+ 	mutex_lock(&port->lock);
+-	tcpm_queue_vdm(port, header, data, cnt, tx_sop_type);
++	if (port->state != SRC_READY && port->state != SNK_READY &&
++	    port->state != SRC_VDM_IDENTITY_REQUEST) {
++		tcpm_log_force(port, "dropping altmode_vdm_event");
++		goto port_unlock;
++	}
++
++	tcpm_queue_vdm(port, event->header, event->data, event->cnt, event->tx_sop_type);
++
++port_unlock:
++	kfree(event->data);
++	kfree(event);
+ 	mutex_unlock(&port->lock);
+ }
+ 
++static int tcpm_queue_vdm_unlocked(struct tcpm_port *port, const u32 header,
++				   const u32 *data, int cnt, enum tcpm_transmit_type tx_sop_type)
++{
++	struct altmode_vdm_event *event;
++	u32 *data_cpy;
++	int ret = -ENOMEM;
++
++	event = kzalloc(sizeof(*event), GFP_KERNEL);
++	if (!event)
++		goto err_event;
++
++	data_cpy = kcalloc(cnt, sizeof(u32), GFP_KERNEL);
++	if (!data_cpy)
++		goto err_data;
++
++	kthread_init_work(&event->work, tcpm_queue_vdm_work);
++	event->port = port;
++	event->header = header;
++	memcpy(data_cpy, data, sizeof(u32) * cnt);
++	event->data = data_cpy;
++	event->cnt = cnt;
++	event->tx_sop_type = tx_sop_type;
++
++	ret = kthread_queue_work(port->wq, &event->work);
++	if (!ret) {
++		ret = -EBUSY;
++		goto err_queue;
++	}
++
++	return 0;
++
++err_queue:
++	kfree(data_cpy);
++err_data:
++	kfree(event);
++err_event:
++	tcpm_log_force(port, "failed to queue altmode vdm, err:%d", ret);
++	return ret;
++}
++
+ static void svdm_consume_identity(struct tcpm_port *port, const u32 *p, int cnt)
+ {
+ 	u32 vdo = p[VDO_INDEX_IDH];
+@@ -2830,8 +2889,7 @@ static int tcpm_altmode_enter(struct typec_altmode *altmode, u32 *vdo)
+ 	header = VDO(altmode->svid, vdo ? 2 : 1, svdm_version, CMD_ENTER_MODE);
+ 	header |= VDO_OPOS(altmode->mode);
+ 
+-	tcpm_queue_vdm_unlocked(port, header, vdo, vdo ? 1 : 0, TCPC_TX_SOP);
+-	return 0;
++	return tcpm_queue_vdm_unlocked(port, header, vdo, vdo ? 1 : 0, TCPC_TX_SOP);
+ }
+ 
+ static int tcpm_altmode_exit(struct typec_altmode *altmode)
+@@ -2847,8 +2905,7 @@ static int tcpm_altmode_exit(struct typec_altmode *altmode)
+ 	header = VDO(altmode->svid, 1, svdm_version, CMD_EXIT_MODE);
+ 	header |= VDO_OPOS(altmode->mode);
+ 
+-	tcpm_queue_vdm_unlocked(port, header, NULL, 0, TCPC_TX_SOP);
+-	return 0;
++	return tcpm_queue_vdm_unlocked(port, header, NULL, 0, TCPC_TX_SOP);
+ }
+ 
+ static int tcpm_altmode_vdm(struct typec_altmode *altmode,
+@@ -2856,9 +2913,7 @@ static int tcpm_altmode_vdm(struct typec_altmode *altmode,
+ {
+ 	struct tcpm_port *port = typec_altmode_get_drvdata(altmode);
+ 
+-	tcpm_queue_vdm_unlocked(port, header, data, count - 1, TCPC_TX_SOP);
+-
+-	return 0;
++	return tcpm_queue_vdm_unlocked(port, header, data, count - 1, TCPC_TX_SOP);
+ }
+ 
+ static const struct typec_altmode_ops tcpm_altmode_ops = {
+@@ -2882,8 +2937,7 @@ static int tcpm_cable_altmode_enter(struct typec_altmode *altmode, enum typec_pl
+ 	header = VDO(altmode->svid, vdo ? 2 : 1, svdm_version, CMD_ENTER_MODE);
+ 	header |= VDO_OPOS(altmode->mode);
+ 
+-	tcpm_queue_vdm_unlocked(port, header, vdo, vdo ? 1 : 0, TCPC_TX_SOP_PRIME);
+-	return 0;
++	return tcpm_queue_vdm_unlocked(port, header, vdo, vdo ? 1 : 0, TCPC_TX_SOP_PRIME);
+ }
+ 
+ static int tcpm_cable_altmode_exit(struct typec_altmode *altmode, enum typec_plug_index sop)
+@@ -2899,8 +2953,7 @@ static int tcpm_cable_altmode_exit(struct typec_altmode *altmode, enum typec_plu
+ 	header = VDO(altmode->svid, 1, svdm_version, CMD_EXIT_MODE);
+ 	header |= VDO_OPOS(altmode->mode);
+ 
+-	tcpm_queue_vdm_unlocked(port, header, NULL, 0, TCPC_TX_SOP_PRIME);
+-	return 0;
++	return tcpm_queue_vdm_unlocked(port, header, NULL, 0, TCPC_TX_SOP_PRIME);
+ }
+ 
+ static int tcpm_cable_altmode_vdm(struct typec_altmode *altmode, enum typec_plug_index sop,
+@@ -2908,9 +2961,7 @@ static int tcpm_cable_altmode_vdm(struct typec_altmode *altmode, enum typec_plug
+ {
+ 	struct tcpm_port *port = typec_altmode_get_drvdata(altmode);
+ 
+-	tcpm_queue_vdm_unlocked(port, header, data, count - 1, TCPC_TX_SOP_PRIME);
+-
+-	return 0;
++	return tcpm_queue_vdm_unlocked(port, header, data, count - 1, TCPC_TX_SOP_PRIME);
+ }
+ 
+ static const struct typec_cable_ops tcpm_cable_ops = {
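
The tcpm rework above is a defer-to-worker pattern: tcpm_queue_vdm_unlocked() no longer takes port->lock in the caller's context, but snapshots the VDM into a heap-allocated altmode_vdm_event and queues it on the port's kthread worker, which revalidates the port state under the lock before queueing the VDM and then frees the copy. A rough userspace analogue of the shape (types simplified; the worker is invoked inline here where the driver uses kthread_queue_work()):

#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

enum port_state { PORT_READY, PORT_BUSY };

struct port {
	pthread_mutex_t lock;
	enum port_state state;
};

struct vdm_event {
	struct port *port;
	uint32_t header;
	uint32_t *data;		/* heap copy owned by the event */
	int cnt;
};

/* Worker context: the only place the lock is taken. */
static void vdm_work(struct vdm_event *ev)
{
	pthread_mutex_lock(&ev->port->lock);
	if (ev->port->state == PORT_READY) {
		/* ... hand ev->header / ev->data to the state machine ... */
	}			/* otherwise drop: state changed meanwhile */
	pthread_mutex_unlock(&ev->port->lock);
	free(ev->data);
	free(ev);
}

/* Caller context: never touches the lock; snapshot and defer. */
static int queue_vdm_unlocked(struct port *port, uint32_t header,
			      const uint32_t *data, int cnt)
{
	struct vdm_event *ev = calloc(1, sizeof(*ev));

	if (!ev)
		return -1;
	ev->data = calloc(cnt ? cnt : 1, sizeof(uint32_t));
	if (!ev->data) {
		free(ev);
		return -1;
	}
	if (cnt)
		memcpy(ev->data, data, sizeof(uint32_t) * cnt);
	ev->port = port;
	ev->header = header;
	ev->cnt = cnt;
	vdm_work(ev);		/* stands in for kthread_queue_work() */
	return 0;
}

Making the helper return an error also lets the altmode and cable hooks above propagate allocation or queueing failures instead of unconditionally returning 0.
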
+diff --git a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
+index 451c639299eb3b..d12a350440d3ca 100644
+--- a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
++++ b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
+@@ -350,6 +350,32 @@ static int vf_qm_func_stop(struct hisi_qm *qm)
+ 	return hisi_qm_mb(qm, QM_MB_CMD_PAUSE_QM, 0, 0, 0);
+ }
+ 
++static int vf_qm_version_check(struct acc_vf_data *vf_data, struct device *dev)
++{
++	switch (vf_data->acc_magic) {
++	case ACC_DEV_MAGIC_V2:
++		if (vf_data->major_ver != ACC_DRV_MAJOR_VER) {
++			dev_info(dev, "migration driver version <%u.%u> does not match!\n",
++				 vf_data->major_ver, vf_data->minor_ver);
++			return -EINVAL;
++		}
++		break;
++	case ACC_DEV_MAGIC_V1:
++		/* Correct dma address */
++		vf_data->eqe_dma = vf_data->qm_eqc_dw[QM_XQC_ADDR_HIGH];
++		vf_data->eqe_dma <<= QM_XQC_ADDR_OFFSET;
++		vf_data->eqe_dma |= vf_data->qm_eqc_dw[QM_XQC_ADDR_LOW];
++		vf_data->aeqe_dma = vf_data->qm_aeqc_dw[QM_XQC_ADDR_HIGH];
++		vf_data->aeqe_dma <<= QM_XQC_ADDR_OFFSET;
++		vf_data->aeqe_dma |= vf_data->qm_aeqc_dw[QM_XQC_ADDR_LOW];
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ static int vf_qm_check_match(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ 			     struct hisi_acc_vf_migration_file *migf)
+ {
+@@ -363,7 +389,8 @@ static int vf_qm_check_match(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ 	if (migf->total_length < QM_MATCH_SIZE || hisi_acc_vdev->match_done)
+ 		return 0;
+ 
+-	if (vf_data->acc_magic != ACC_DEV_MAGIC) {
++	ret = vf_qm_version_check(vf_data, dev);
++	if (ret) {
+ 		dev_err(dev, "failed to match ACC_DEV_MAGIC\n");
+ 		return -EINVAL;
+ 	}
+@@ -399,13 +426,6 @@ static int vf_qm_check_match(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ 		return -EINVAL;
+ 	}
+ 
+-	ret = qm_write_regs(vf_qm, QM_VF_STATE, &vf_data->vf_qm_state, 1);
+-	if (ret) {
+-		dev_err(dev, "failed to write QM_VF_STATE\n");
+-		return ret;
+-	}
+-
+-	hisi_acc_vdev->vf_qm_state = vf_data->vf_qm_state;
+ 	hisi_acc_vdev->match_done = true;
+ 	return 0;
+ }
+@@ -418,7 +438,9 @@ static int vf_qm_get_match_data(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ 	int vf_id = hisi_acc_vdev->vf_id;
+ 	int ret;
+ 
+-	vf_data->acc_magic = ACC_DEV_MAGIC;
++	vf_data->acc_magic = ACC_DEV_MAGIC_V2;
++	vf_data->major_ver = ACC_DRV_MAJOR_VER;
++	vf_data->minor_ver = ACC_DRV_MINOR_VER;
+ 	/* Save device id */
+ 	vf_data->dev_id = hisi_acc_vdev->vf_dev->device;
+ 
+@@ -441,6 +463,19 @@ static int vf_qm_get_match_data(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ 	return 0;
+ }
+ 
++static void vf_qm_xeqc_save(struct hisi_qm *qm,
++			    struct hisi_acc_vf_migration_file *migf)
++{
++	struct acc_vf_data *vf_data = &migf->vf_data;
++	u16 eq_head, aeq_head;
++
++	eq_head = vf_data->qm_eqc_dw[0] & 0xFFFF;
++	qm_db(qm, 0, QM_DOORBELL_CMD_EQ, eq_head, 0);
++
++	aeq_head = vf_data->qm_aeqc_dw[0] & 0xFFFF;
++	qm_db(qm, 0, QM_DOORBELL_CMD_AEQ, aeq_head, 0);
++}
++
+ static int vf_qm_load_data(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ 			   struct hisi_acc_vf_migration_file *migf)
+ {
+@@ -456,6 +491,20 @@ static int vf_qm_load_data(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ 	if (migf->total_length < sizeof(struct acc_vf_data))
+ 		return -EINVAL;
+ 
++	if (!vf_data->eqe_dma || !vf_data->aeqe_dma ||
++	    !vf_data->sqc_dma || !vf_data->cqc_dma) {
++		dev_info(dev, "resume dma addr is NULL!\n");
++		hisi_acc_vdev->vf_qm_state = QM_NOT_READY;
++		return 0;
++	}
++
++	ret = qm_write_regs(qm, QM_VF_STATE, &vf_data->vf_qm_state, 1);
++	if (ret) {
++		dev_err(dev, "failed to write QM_VF_STATE\n");
++		return -EINVAL;
++	}
++	hisi_acc_vdev->vf_qm_state = vf_data->vf_qm_state;
++
+ 	qm->eqe_dma = vf_data->eqe_dma;
+ 	qm->aeqe_dma = vf_data->aeqe_dma;
+ 	qm->sqc_dma = vf_data->sqc_dma;
+@@ -496,12 +545,12 @@ static int vf_qm_read_data(struct hisi_qm *vf_qm, struct acc_vf_data *vf_data)
+ 		return -EINVAL;
+ 
+ 	/* Every reg is 32 bit, the dma address is 64 bit. */
+-	vf_data->eqe_dma = vf_data->qm_eqc_dw[1];
++	vf_data->eqe_dma = vf_data->qm_eqc_dw[QM_XQC_ADDR_HIGH];
+ 	vf_data->eqe_dma <<= QM_XQC_ADDR_OFFSET;
+-	vf_data->eqe_dma |= vf_data->qm_eqc_dw[0];
+-	vf_data->aeqe_dma = vf_data->qm_aeqc_dw[1];
++	vf_data->eqe_dma |= vf_data->qm_eqc_dw[QM_XQC_ADDR_LOW];
++	vf_data->aeqe_dma = vf_data->qm_aeqc_dw[QM_XQC_ADDR_HIGH];
+ 	vf_data->aeqe_dma <<= QM_XQC_ADDR_OFFSET;
+-	vf_data->aeqe_dma |= vf_data->qm_aeqc_dw[0];
++	vf_data->aeqe_dma |= vf_data->qm_aeqc_dw[QM_XQC_ADDR_LOW];
+ 
+ 	/* Through SQC_BT/CQC_BT to get sqc and cqc address */
+ 	ret = qm_get_sqc(vf_qm, &vf_data->sqc_dma);
+@@ -524,7 +573,6 @@ static int vf_qm_state_save(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ {
+ 	struct acc_vf_data *vf_data = &migf->vf_data;
+ 	struct hisi_qm *vf_qm = &hisi_acc_vdev->vf_qm;
+-	struct device *dev = &vf_qm->pdev->dev;
+ 	int ret;
+ 
+ 	if (unlikely(qm_wait_dev_not_ready(vf_qm))) {
+@@ -538,17 +586,14 @@ static int vf_qm_state_save(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ 	vf_data->vf_qm_state = QM_READY;
+ 	hisi_acc_vdev->vf_qm_state = vf_data->vf_qm_state;
+ 
+-	ret = vf_qm_cache_wb(vf_qm);
+-	if (ret) {
+-		dev_err(dev, "failed to writeback QM Cache!\n");
+-		return ret;
+-	}
+-
+ 	ret = vf_qm_read_data(vf_qm, vf_data);
+ 	if (ret)
+ 		return -EINVAL;
+ 
+ 	migf->total_length = sizeof(struct acc_vf_data);
++	/* Save eqc and aeqc interrupt information */
++	vf_qm_xeqc_save(vf_qm, migf);
++
+ 	return 0;
+ }
+ 
+@@ -967,6 +1012,13 @@ static int hisi_acc_vf_stop_device(struct hisi_acc_vf_core_device *hisi_acc_vdev
+ 		dev_err(dev, "failed to check QM INT state!\n");
+ 		return ret;
+ 	}
++
++	ret = vf_qm_cache_wb(vf_qm);
++	if (ret) {
++		dev_err(dev, "failed to writeback QM cache!\n");
++		return ret;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1463,6 +1515,7 @@ static void hisi_acc_vfio_pci_close_device(struct vfio_device *core_vdev)
+ 	struct hisi_acc_vf_core_device *hisi_acc_vdev = hisi_acc_get_vf_dev(core_vdev);
+ 	struct hisi_qm *vf_qm = &hisi_acc_vdev->vf_qm;
+ 
++	hisi_acc_vf_disable_fds(hisi_acc_vdev);
+ 	mutex_lock(&hisi_acc_vdev->open_mutex);
+ 	hisi_acc_vdev->dev_opened = false;
+ 	iounmap(vf_qm->io_base);
+@@ -1485,6 +1538,7 @@ static int hisi_acc_vfio_pci_migrn_init_dev(struct vfio_device *core_vdev)
+ 	hisi_acc_vdev->vf_id = pci_iov_vf_id(pdev) + 1;
+ 	hisi_acc_vdev->pf_qm = pf_qm;
+ 	hisi_acc_vdev->vf_dev = pdev;
++	hisi_acc_vdev->vf_qm_state = QM_NOT_READY;
+ 	mutex_init(&hisi_acc_vdev->state_mutex);
+ 	mutex_init(&hisi_acc_vdev->open_mutex);
+ 
+diff --git a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.h b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.h
+index 245d7537b2bcd4..91002ceeebc18a 100644
+--- a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.h
++++ b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.h
+@@ -39,6 +39,9 @@
+ #define QM_REG_ADDR_OFFSET	0x0004
+ 
+ #define QM_XQC_ADDR_OFFSET	32U
++#define QM_XQC_ADDR_LOW	0x1
++#define QM_XQC_ADDR_HIGH	0x2
++
+ #define QM_VF_AEQ_INT_MASK	0x0004
+ #define QM_VF_EQ_INT_MASK	0x000c
+ #define QM_IFC_INT_SOURCE_V	0x0020
+@@ -50,10 +53,15 @@
+ #define QM_EQC_DW0		0X8000
+ #define QM_AEQC_DW0		0X8020
+ 
++#define ACC_DRV_MAJOR_VER 1
++#define ACC_DRV_MINOR_VER 0
++
++#define ACC_DEV_MAGIC_V1	0XCDCDCDCDFEEDAACC
++#define ACC_DEV_MAGIC_V2	0xAACCFEEDDECADEDE
++
+ struct acc_vf_data {
+ #define QM_MATCH_SIZE offsetofend(struct acc_vf_data, qm_rsv_state)
+ 	/* QM match information */
+-#define ACC_DEV_MAGIC	0XCDCDCDCDFEEDAACC
+ 	u64 acc_magic;
+ 	u32 qp_num;
+ 	u32 dev_id;
+@@ -61,7 +69,9 @@ struct acc_vf_data {
+ 	u32 qp_base;
+ 	u32 vf_qm_state;
+ 	/* QM reserved match information */
+-	u32 qm_rsv_state[3];
++	u16 major_ver;
++	u16 minor_ver;
++	u32 qm_rsv_state[2];
+ 
+ 	/* QM RW regs */
+ 	u32 aeq_int_mask;
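
vf_qm_version_check() above replaces the single ACC_DEV_MAGIC comparison with a two-tier rule: a V2 stream must carry the same major version as the driver, while a legacy V1 stream is still accepted after its EQE/AEQE DMA addresses are reassembled from the saved queue-context words. The acceptance rule reduces to roughly this (constants copied from the header hunk below; illustrative only):

#include <stdbool.h>
#include <stdint.h>

#define MAGIC_V1 0xCDCDCDCDFEEDAACCULL
#define MAGIC_V2 0xAACCFEEDDECADEDEULL
#define MAJOR_VER 1

static bool stream_compatible(uint64_t magic, uint16_t major)
{
	switch (magic) {
	case MAGIC_V2:
		return major == MAJOR_VER;	/* same major required */
	case MAGIC_V1:
		return true;	/* legacy layout, fixed up on load */
	default:
		return false;	/* unknown producer */
	}
}
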
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index 0ac56072af9f23..ba5d91e576af16 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -293,7 +293,7 @@ static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu, size_t pgsize)
+ 			struct rb_node *p;
+ 
+ 			for (p = rb_prev(n); p; p = rb_prev(p)) {
+-				struct vfio_dma *dma = rb_entry(n,
++				struct vfio_dma *dma = rb_entry(p,
+ 							struct vfio_dma, node);
+ 
+ 				vfio_dma_bitmap_free(dma);
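
The vfio_iommu_type1 change is a one-character iterator fix: the backward rb-tree walk advanced p = rb_prev(p) but kept converting the fixed starting node n, so the same vfio_dma was freed on every iteration. The corrected idiom, in a generic backward-walk sketch:

#include <stdio.h>

struct node {
	int val;
	struct node *prev;
};

static void release_predecessors(struct node *n)
{
	for (struct node *p = n->prev; p; p = p->prev) {
		/* derive the entry from the cursor p, never from n */
		printf("releasing %d\n", p->val);
	}
}
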
+diff --git a/drivers/video/backlight/qcom-wled.c b/drivers/video/backlight/qcom-wled.c
+index 9afe701b2a1b64..a63bb42c8f8b03 100644
+--- a/drivers/video/backlight/qcom-wled.c
++++ b/drivers/video/backlight/qcom-wled.c
+@@ -1406,9 +1406,11 @@ static int wled_configure(struct wled *wled)
+ 	wled->ctrl_addr = be32_to_cpu(*prop_addr);
+ 
+ 	rc = of_property_read_string(dev->of_node, "label", &wled->name);
+-	if (rc)
++	if (rc) {
+ 		wled->name = devm_kasprintf(dev, GFP_KERNEL, "%pOFn", dev->of_node);
+-
++		if (!wled->name)
++			return -ENOMEM;
++	}
+ 	switch (wled->version) {
+ 	case 3:
+ 		u32_opts = wled3_opts;
+diff --git a/drivers/video/fbdev/core/fbcvt.c b/drivers/video/fbdev/core/fbcvt.c
+index 64843464c66135..cd3821bd82e566 100644
+--- a/drivers/video/fbdev/core/fbcvt.c
++++ b/drivers/video/fbdev/core/fbcvt.c
+@@ -312,7 +312,7 @@ int fb_find_mode_cvt(struct fb_videomode *mode, int margins, int rb)
+ 	cvt.f_refresh = cvt.refresh;
+ 	cvt.interlace = 1;
+ 
+-	if (!cvt.xres || !cvt.yres || !cvt.refresh) {
++	if (!cvt.xres || !cvt.yres || !cvt.refresh || cvt.f_refresh > INT_MAX) {
+ 		printk(KERN_INFO "fbcvt: Invalid input parameters\n");
+ 		return 1;
+ 	}
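
The fbcvt guard above additionally rejects an f_refresh that no longer fits in a signed int, presumably to head off overflow in the signed arithmetic that follows. As a predicate:

#include <limits.h>
#include <stdbool.h>

static bool cvt_params_valid(unsigned int xres, unsigned int yres,
			     unsigned int refresh, unsigned long f_refresh)
{
	return xres && yres && refresh && f_refresh <= INT_MAX;
}
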
+diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
+index d50fe030d82534..7182f43ed05515 100644
+--- a/drivers/virtio/virtio_pci_modern.c
++++ b/drivers/virtio/virtio_pci_modern.c
+@@ -48,6 +48,7 @@ void vp_modern_avq_done(struct virtqueue *vq)
+ {
+ 	struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
+ 	struct virtio_pci_admin_vq *admin_vq = &vp_dev->admin_vq;
++	unsigned int status_size = sizeof(struct virtio_admin_cmd_status);
+ 	struct virtio_admin_cmd *cmd;
+ 	unsigned long flags;
+ 	unsigned int len;
+@@ -56,7 +57,17 @@ void vp_modern_avq_done(struct virtqueue *vq)
+ 	do {
+ 		virtqueue_disable_cb(vq);
+ 		while ((cmd = virtqueue_get_buf(vq, &len))) {
+-			cmd->result_sg_size = len;
++			/* If the number of bytes written by the device is less
++			 * than the size of struct virtio_admin_cmd_status, the
++			 * remaining status bytes will remain zero-initialized,
++			 * since the buffer was zeroed during allocation.
++			 * In this case, set the size of command_specific_result
++			 * to 0.
++			 */
++			if (len < status_size)
++				cmd->result_sg_size = 0;
++			else
++				cmd->result_sg_size = len - status_size;
+ 			complete(&cmd->completion);
+ 		}
+ 	} while (!virtqueue_enable_cb(vq));
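
The virtio change stops reporting the raw used length: result_sg_size now counts only command-specific result bytes, clamping to zero when the device wrote less than the status structure (the untouched remainder is already zeroed by the allocation). The accounting in isolation:

#include <stddef.h>

/* status_size is sizeof(struct virtio_admin_cmd_status) in the driver */
static size_t result_bytes(size_t len, size_t status_size)
{
	return len < status_size ? 0 : len - status_size;
}
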
+diff --git a/drivers/watchdog/exar_wdt.c b/drivers/watchdog/exar_wdt.c
+index 7c61ff34327116..c2e3bb08df899a 100644
+--- a/drivers/watchdog/exar_wdt.c
++++ b/drivers/watchdog/exar_wdt.c
+@@ -221,7 +221,7 @@ static const struct watchdog_info exar_wdt_info = {
+ 	.options	= WDIOF_KEEPALIVEPING |
+ 			  WDIOF_SETTIMEOUT |
+ 			  WDIOF_MAGICCLOSE,
+-	.identity	= "Exar/MaxLinear XR28V38x Watchdog",
++	.identity	= "Exar XR28V38x Watchdog",
+ };
+ 
+ static const struct watchdog_ops exar_wdt_ops = {
+diff --git a/drivers/watchdog/lenovo_se30_wdt.c b/drivers/watchdog/lenovo_se30_wdt.c
+index 024b842499b368..1c73bb7eeeeed1 100644
+--- a/drivers/watchdog/lenovo_se30_wdt.c
++++ b/drivers/watchdog/lenovo_se30_wdt.c
+@@ -271,6 +271,8 @@ static int lenovo_se30_wdt_probe(struct platform_device *pdev)
+ 		return -EBUSY;
+ 
+ 	priv->shm_base_addr = devm_ioremap(dev, base_phys, SHM_WIN_SIZE);
++	if (!priv->shm_base_addr)
++		return -ENOMEM;
+ 
+ 	priv->wdt_cfg.mod = WDT_MODULE;
+ 	priv->wdt_cfg.idx = WDT_CFG_INDEX;
+diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
+index 8c852807ba1c10..2de37dcd75566f 100644
+--- a/drivers/xen/balloon.c
++++ b/drivers/xen/balloon.c
+@@ -704,15 +704,18 @@ static int __init balloon_add_regions(void)
+ 
+ 		/*
+ 		 * Extra regions are accounted for in the physmap, but need
+-		 * decreasing from current_pages to balloon down the initial
+-		 * allocation, because they are already accounted for in
+-		 * total_pages.
++		 * decreasing from current_pages and target_pages to balloon
++		 * down the initial allocation, because they are already
++		 * accounted for in total_pages.
+ 		 */
+-		if (extra_pfn_end - start_pfn >= balloon_stats.current_pages) {
++		pages = extra_pfn_end - start_pfn;
++		if (pages >= balloon_stats.current_pages ||
++		    pages >= balloon_stats.target_pages) {
+ 			WARN(1, "Extra pages underflow current target");
+ 			return -ERANGE;
+ 		}
+-		balloon_stats.current_pages -= extra_pfn_end - start_pfn;
++		balloon_stats.current_pages -= pages;
++		balloon_stats.target_pages -= pages;
+ 	}
+ 
+ 	return 0;
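
In the Xen balloon hunk, the extra-region pages are now subtracted from target_pages as well as current_pages, and the underflow guard checks both counters before either is touched. Sketched:

#include <stdbool.h>
#include <stdint.h>

static bool account_extra(uint64_t pages,
			  uint64_t *current_pages, uint64_t *target_pages)
{
	if (pages >= *current_pages || pages >= *target_pages)
		return false;	/* would underflow; bail out untouched */
	*current_pages -= pages;
	*target_pages -= pages;
	return true;
}
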
+diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
+index 32619d146cbc19..862164181baca1 100644
+--- a/fs/9p/vfs_addr.c
++++ b/fs/9p/vfs_addr.c
+@@ -59,7 +59,7 @@ static void v9fs_issue_write(struct netfs_io_subrequest *subreq)
+ 	len = p9_client_write(fid, subreq->start, &subreq->io_iter, &err);
+ 	if (len > 0)
+ 		__set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
+-	netfs_write_subrequest_terminated(subreq, len ?: err, false);
++	netfs_write_subrequest_terminated(subreq, len ?: err);
+ }
+ 
+ /**
+@@ -77,7 +77,8 @@ static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
+ 
+ 	/* if we just extended the file size, any portion not in
+ 	 * cache won't be on the server and reads back as zeroes */
+-	if (subreq->rreq->origin != NETFS_DIO_READ)
++	if (subreq->rreq->origin != NETFS_UNBUFFERED_READ &&
++	    subreq->rreq->origin != NETFS_DIO_READ)
+ 		__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
+ 	if (pos + total >= i_size_read(rreq->inode))
+ 		__set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags);
+@@ -164,4 +165,5 @@ const struct address_space_operations v9fs_addr_operations = {
+ 	.invalidate_folio	= netfs_invalidate_folio,
+ 	.direct_IO		= noop_direct_IO,
+ 	.writepages		= netfs_writepages,
++	.migrate_folio		= filemap_migrate_folio,
+ };
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index 18b0a9f1615e44..2e7526ea883ae2 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -120,17 +120,17 @@ static void afs_issue_write_worker(struct work_struct *work)
+ 
+ #if 0 // Error injection
+ 	if (subreq->debug_index == 3)
+-		return netfs_write_subrequest_terminated(subreq, -ENOANO, false);
++		return netfs_write_subrequest_terminated(subreq, -ENOANO);
+ 
+ 	if (!subreq->retry_count) {
+ 		set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
+-		return netfs_write_subrequest_terminated(subreq, -EAGAIN, false);
++		return netfs_write_subrequest_terminated(subreq, -EAGAIN);
+ 	}
+ #endif
+ 
+ 	op = afs_alloc_operation(wreq->netfs_priv, vnode->volume);
+ 	if (IS_ERR(op))
+-		return netfs_write_subrequest_terminated(subreq, -EAGAIN, false);
++		return netfs_write_subrequest_terminated(subreq, -EAGAIN);
+ 
+ 	afs_op_set_vnode(op, 0, vnode);
+ 	op->file[0].dv_delta	= 1;
+@@ -166,7 +166,7 @@ static void afs_issue_write_worker(struct work_struct *work)
+ 		break;
+ 	}
+ 
+-	netfs_write_subrequest_terminated(subreq, ret < 0 ? ret : subreq->len, false);
++	netfs_write_subrequest_terminated(subreq, ret < 0 ? ret : subreq->len);
+ }
+ 
+ void afs_issue_write(struct netfs_io_subrequest *subreq)
+@@ -202,6 +202,7 @@ void afs_retry_request(struct netfs_io_request *wreq, struct netfs_io_stream *st
+ 	case NETFS_READ_GAPS:
+ 	case NETFS_READ_SINGLE:
+ 	case NETFS_READ_FOR_WRITE:
++	case NETFS_UNBUFFERED_READ:
+ 	case NETFS_DIO_READ:
+ 		return;
+ 	default:
+diff --git a/fs/btrfs/extent-io-tree.c b/fs/btrfs/extent-io-tree.c
+index 13de6af279e526..b5b44ea91f9996 100644
+--- a/fs/btrfs/extent-io-tree.c
++++ b/fs/btrfs/extent-io-tree.c
+@@ -1252,8 +1252,11 @@ static int __set_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
+ 		if (!prealloc)
+ 			goto search_again;
+ 		ret = split_state(tree, state, prealloc, end + 1);
+-		if (ret)
++		if (ret) {
+ 			extent_io_tree_panic(tree, state, "split", ret);
++			prealloc = NULL;
++			goto out;
++		}
+ 
+ 		set_state_bits(tree, prealloc, bits, changeset);
+ 		cache_state(prealloc, cached_state);
+@@ -1456,6 +1459,7 @@ int convert_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
+ 		if (IS_ERR(inserted_state)) {
+ 			ret = PTR_ERR(inserted_state);
+ 			extent_io_tree_panic(tree, prealloc, "insert", ret);
++			goto out;
+ 		}
+ 		cache_state(inserted_state, cached_state);
+ 		if (inserted_state == prealloc)
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 71b8a825c4479a..22455fbcb29eba 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1862,7 +1862,7 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
+ 		if (reserved_space < fsize) {
+ 			end = page_start + reserved_space - 1;
+ 			btrfs_delalloc_release_space(BTRFS_I(inode),
+-					data_reserved, page_start,
++					data_reserved, end + 1,
+ 					fsize - reserved_space, true);
+ 		}
+ 	}
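
The btrfs_page_mkwrite() fix changes where the release of the unreserved tail begins: at end + 1 (the first byte past the shrunken reservation) rather than at page_start, so the still-reserved head of the folio is left alone. The arithmetic, checked in a tiny standalone program with made-up numbers:

#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint64_t page_start = 65536, fsize = 65536, reserved_space = 4096;
	uint64_t end = page_start + reserved_space - 1;

	/* the freed range begins exactly where the reservation ends... */
	assert(end + 1 == page_start + reserved_space);
	/* ...and covers exactly the unreserved tail */
	assert((page_start + fsize) - (end + 1) == fsize - reserved_space);
	return 0;
}
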
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 90f5da3c520ac3..8a3f44302788cd 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4849,8 +4849,11 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len,
+ 	folio = __filemap_get_folio(mapping, index,
+ 				    FGP_LOCK | FGP_ACCESSED | FGP_CREAT, mask);
+ 	if (IS_ERR(folio)) {
+-		btrfs_delalloc_release_space(inode, data_reserved, block_start,
+-					     blocksize, true);
++		if (only_release_metadata)
++			btrfs_delalloc_release_metadata(inode, blocksize, true);
++		else
++			btrfs_delalloc_release_space(inode, data_reserved,
++						     block_start, blocksize, true);
+ 		btrfs_delalloc_release_extents(inode, blocksize);
+ 		ret = -ENOMEM;
+ 		goto out;
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index c3b2e29e3e019d..4c525a0408125d 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -153,12 +153,14 @@ struct scrub_stripe {
+ 	unsigned int init_nr_io_errors;
+ 	unsigned int init_nr_csum_errors;
+ 	unsigned int init_nr_meta_errors;
++	unsigned int init_nr_meta_gen_errors;
+ 
+ 	/*
+ 	 * The following error bitmaps are all for the current status.
+ 	 * Every time we submit a new read, these bitmaps may be updated.
+ 	 *
+-	 * error_bitmap = io_error_bitmap | csum_error_bitmap | meta_error_bitmap;
++	 * error_bitmap = io_error_bitmap | csum_error_bitmap |
++	 *		  meta_error_bitmap | meta_generation_bitmap;
+ 	 *
+ 	 * IO and csum errors can happen for both metadata and data.
+ 	 */
+@@ -166,6 +168,7 @@ struct scrub_stripe {
+ 	unsigned long io_error_bitmap;
+ 	unsigned long csum_error_bitmap;
+ 	unsigned long meta_error_bitmap;
++	unsigned long meta_gen_error_bitmap;
+ 
+ 	/* For writeback (repair or replace) error reporting. */
+ 	unsigned long write_error_bitmap;
+@@ -616,7 +619,7 @@ static void scrub_verify_one_metadata(struct scrub_stripe *stripe, int sector_nr
+ 	memcpy(on_disk_csum, header->csum, fs_info->csum_size);
+ 
+ 	if (logical != btrfs_stack_header_bytenr(header)) {
+-		bitmap_set(&stripe->csum_error_bitmap, sector_nr, sectors_per_tree);
++		bitmap_set(&stripe->meta_error_bitmap, sector_nr, sectors_per_tree);
+ 		bitmap_set(&stripe->error_bitmap, sector_nr, sectors_per_tree);
+ 		btrfs_warn_rl(fs_info,
+ 		"tree block %llu mirror %u has bad bytenr, has %llu want %llu",
+@@ -672,7 +675,7 @@ static void scrub_verify_one_metadata(struct scrub_stripe *stripe, int sector_nr
+ 	}
+ 	if (stripe->sectors[sector_nr].generation !=
+ 	    btrfs_stack_header_generation(header)) {
+-		bitmap_set(&stripe->meta_error_bitmap, sector_nr, sectors_per_tree);
++		bitmap_set(&stripe->meta_gen_error_bitmap, sector_nr, sectors_per_tree);
+ 		bitmap_set(&stripe->error_bitmap, sector_nr, sectors_per_tree);
+ 		btrfs_warn_rl(fs_info,
+ 		"tree block %llu mirror %u has bad generation, has %llu want %llu",
+@@ -684,6 +687,7 @@ static void scrub_verify_one_metadata(struct scrub_stripe *stripe, int sector_nr
+ 	bitmap_clear(&stripe->error_bitmap, sector_nr, sectors_per_tree);
+ 	bitmap_clear(&stripe->csum_error_bitmap, sector_nr, sectors_per_tree);
+ 	bitmap_clear(&stripe->meta_error_bitmap, sector_nr, sectors_per_tree);
++	bitmap_clear(&stripe->meta_gen_error_bitmap, sector_nr, sectors_per_tree);
+ }
+ 
+ static void scrub_verify_one_sector(struct scrub_stripe *stripe, int sector_nr)
+@@ -972,8 +976,22 @@ static void scrub_stripe_report_errors(struct scrub_ctx *sctx,
+ 			if (__ratelimit(&rs) && dev)
+ 				scrub_print_common_warning("header error", dev, false,
+ 						     stripe->logical, physical);
++		if (test_bit(sector_nr, &stripe->meta_gen_error_bitmap))
++			if (__ratelimit(&rs) && dev)
++				scrub_print_common_warning("generation error", dev, false,
++						     stripe->logical, physical);
+ 	}
+ 
++	/* Update the device stats. */
++	for (int i = 0; i < stripe->init_nr_io_errors; i++)
++		btrfs_dev_stat_inc_and_print(stripe->dev, BTRFS_DEV_STAT_READ_ERRS);
++	for (int i = 0; i < stripe->init_nr_csum_errors; i++)
++		btrfs_dev_stat_inc_and_print(stripe->dev, BTRFS_DEV_STAT_CORRUPTION_ERRS);
++	/* Generation mismatch errors are counted once per metadata (tree) block, not once per sector. */
++	for (int i = 0; i < stripe->init_nr_meta_gen_errors;
++	     i += (fs_info->nodesize >> fs_info->sectorsize_bits))
++		btrfs_dev_stat_inc_and_print(stripe->dev, BTRFS_DEV_STAT_GENERATION_ERRS);
++
+ 	spin_lock(&sctx->stat_lock);
+ 	sctx->stat.data_extents_scrubbed += stripe->nr_data_extents;
+ 	sctx->stat.tree_extents_scrubbed += stripe->nr_meta_extents;
+@@ -982,7 +1000,8 @@ static void scrub_stripe_report_errors(struct scrub_ctx *sctx,
+ 	sctx->stat.no_csum += nr_nodatacsum_sectors;
+ 	sctx->stat.read_errors += stripe->init_nr_io_errors;
+ 	sctx->stat.csum_errors += stripe->init_nr_csum_errors;
+-	sctx->stat.verify_errors += stripe->init_nr_meta_errors;
++	sctx->stat.verify_errors += stripe->init_nr_meta_errors +
++				    stripe->init_nr_meta_gen_errors;
+ 	sctx->stat.uncorrectable_errors +=
+ 		bitmap_weight(&stripe->error_bitmap, stripe->nr_sectors);
+ 	sctx->stat.corrected_errors += nr_repaired_sectors;
+@@ -1028,6 +1047,8 @@ static void scrub_stripe_read_repair_worker(struct work_struct *work)
+ 						    stripe->nr_sectors);
+ 	stripe->init_nr_meta_errors = bitmap_weight(&stripe->meta_error_bitmap,
+ 						    stripe->nr_sectors);
++	stripe->init_nr_meta_gen_errors = bitmap_weight(&stripe->meta_gen_error_bitmap,
++							stripe->nr_sectors);
+ 
+ 	if (bitmap_empty(&stripe->init_error_bitmap, stripe->nr_sectors))
+ 		goto out;
+@@ -1142,6 +1163,9 @@ static void scrub_write_endio(struct btrfs_bio *bbio)
+ 		bitmap_set(&stripe->write_error_bitmap, sector_nr,
+ 			   bio_size >> fs_info->sectorsize_bits);
+ 		spin_unlock_irqrestore(&stripe->write_error_lock, flags);
++		for (int i = 0; i < (bio_size >> fs_info->sectorsize_bits); i++)
++			btrfs_dev_stat_inc_and_print(stripe->dev,
++						     BTRFS_DEV_STAT_WRITE_ERRS);
+ 	}
+ 	bio_put(&bbio->bio);
+ 
+@@ -1508,10 +1532,12 @@ static void scrub_stripe_reset_bitmaps(struct scrub_stripe *stripe)
+ 	stripe->init_nr_io_errors = 0;
+ 	stripe->init_nr_csum_errors = 0;
+ 	stripe->init_nr_meta_errors = 0;
++	stripe->init_nr_meta_gen_errors = 0;
+ 	stripe->error_bitmap = 0;
+ 	stripe->io_error_bitmap = 0;
+ 	stripe->csum_error_bitmap = 0;
+ 	stripe->meta_error_bitmap = 0;
++	stripe->meta_gen_error_bitmap = 0;
+ }
+ 
+ /*
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 90dc094cfa5e5a..f5af11565b8760 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -6583,6 +6583,19 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
+ 		btrfs_log_get_delayed_items(inode, &delayed_ins_list,
+ 					    &delayed_del_list);
+ 
++	/*
++	 * If we are fsyncing a file with 0 hard links, then commit the delayed
++	 * inode because the last inode ref (or extref) item may still be in the
++	 * subvolume tree and if we log it the file will still exist after a log
++	 * replay. So commit the delayed inode to delete that last ref and we
++	 * skip logging it.
++	 */
++	if (inode->vfs_inode.i_nlink == 0) {
++		ret = btrfs_commit_inode_delayed_inode(inode);
++		if (ret)
++			goto out_unlock;
++	}
++
+ 	ret = copy_inode_items_to_log(trans, inode, &min_key, &max_key,
+ 				      path, dst_path, logged_isize,
+ 				      inode_only, ctx,
+@@ -7051,14 +7064,9 @@ static int btrfs_log_inode_parent(struct btrfs_trans_handle *trans,
+ 	if (btrfs_root_generation(&root->root_item) == trans->transid)
+ 		return BTRFS_LOG_FORCE_COMMIT;
+ 
+-	/*
+-	 * Skip already logged inodes or inodes corresponding to tmpfiles
+-	 * (since logging them is pointless, a link count of 0 means they
+-	 * will never be accessible).
+-	 */
+-	if ((btrfs_inode_in_log(inode, trans->transid) &&
+-	     list_empty(&ctx->ordered_extents)) ||
+-	    inode->vfs_inode.i_nlink == 0)
++	/* Skip inodes that are already logged and have no new ordered extents. */
++	if (btrfs_inode_in_log(inode, trans->transid) &&
++	    list_empty(&ctx->ordered_extents))
+ 		return BTRFS_NO_LOG_SYNC;
+ 
+ 	ret = start_log_trans(trans, root, ctx);
+diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
+index 92058ae4348826..c08e4a66ac07a7 100644
+--- a/fs/cachefiles/io.c
++++ b/fs/cachefiles/io.c
+@@ -63,7 +63,7 @@ static void cachefiles_read_complete(struct kiocb *iocb, long ret)
+ 				ret = -ESTALE;
+ 		}
+ 
+-		ki->term_func(ki->term_func_priv, ret, ki->was_async);
++		ki->term_func(ki->term_func_priv, ret);
+ 	}
+ 
+ 	cachefiles_put_kiocb(ki);
+@@ -188,7 +188,7 @@ static int cachefiles_read(struct netfs_cache_resources *cres,
+ 
+ presubmission_error:
+ 	if (term_func)
+-		term_func(term_func_priv, ret < 0 ? ret : skipped, false);
++		term_func(term_func_priv, ret < 0 ? ret : skipped);
+ 	return ret;
+ }
+ 
+@@ -271,7 +271,7 @@ static void cachefiles_write_complete(struct kiocb *iocb, long ret)
+ 	atomic_long_sub(ki->b_writing, &object->volume->cache->b_writing);
+ 	set_bit(FSCACHE_COOKIE_HAVE_DATA, &object->cookie->flags);
+ 	if (ki->term_func)
+-		ki->term_func(ki->term_func_priv, ret, ki->was_async);
++		ki->term_func(ki->term_func_priv, ret);
+ 	cachefiles_put_kiocb(ki);
+ }
+ 
+@@ -301,7 +301,7 @@ int __cachefiles_write(struct cachefiles_object *object,
+ 	ki = kzalloc(sizeof(struct cachefiles_kiocb), GFP_KERNEL);
+ 	if (!ki) {
+ 		if (term_func)
+-			term_func(term_func_priv, -ENOMEM, false);
++			term_func(term_func_priv, -ENOMEM);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -366,7 +366,7 @@ static int cachefiles_write(struct netfs_cache_resources *cres,
+ {
+ 	if (!fscache_wait_for_operation(cres, FSCACHE_WANT_WRITE)) {
+ 		if (term_func)
+-			term_func(term_func_priv, -ENOBUFS, false);
++			term_func(term_func_priv, -ENOBUFS);
+ 		trace_netfs_sreq(term_func_priv, netfs_sreq_trace_cache_nowrite);
+ 		return -ENOBUFS;
+ 	}
+@@ -665,7 +665,7 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
+ 		pre = CACHEFILES_DIO_BLOCK_SIZE - off;
+ 		if (pre >= len) {
+ 			fscache_count_dio_misfit();
+-			netfs_write_subrequest_terminated(subreq, len, false);
++			netfs_write_subrequest_terminated(subreq, len);
+ 			return;
+ 		}
+ 		subreq->transferred += pre;
+@@ -691,7 +691,7 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
+ 		len -= post;
+ 		if (len == 0) {
+ 			fscache_count_dio_misfit();
+-			netfs_write_subrequest_terminated(subreq, post, false);
++			netfs_write_subrequest_terminated(subreq, post);
+ 			return;
+ 		}
+ 		iov_iter_truncate(&subreq->io_iter, len);
+@@ -703,7 +703,7 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
+ 					 &start, &len, len, true);
+ 	cachefiles_end_secure(cache, saved_cred);
+ 	if (ret < 0) {
+-		netfs_write_subrequest_terminated(subreq, ret, false);
++		netfs_write_subrequest_terminated(subreq, ret);
+ 		return;
+ 	}
+ 
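
The cachefiles hunks here belong to the same API change visible in the 9p, afs, ceph and erofs hunks of this patch: netfs termination callbacks lose their trailing was_async flag and keep just the opaque context plus a byte-count-or-error. The post-change callback shape, as far as these call sites show (the typedef name below is illustrative):

#include <sys/types.h>	/* ssize_t */

typedef void (*io_terminated_t)(void *priv, ssize_t transferred_or_error);

static void on_io_done(void *priv, ssize_t transferred_or_error)
{
	/* negative: errno-style failure; otherwise bytes transferred */
	(void)priv;
	(void)transferred_or_error;
}
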
+diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
+index 29be367905a16f..b95c4cb21c13f0 100644
+--- a/fs/ceph/addr.c
++++ b/fs/ceph/addr.c
+@@ -238,6 +238,7 @@ static void finish_netfs_read(struct ceph_osd_request *req)
+ 		if (sparse && err > 0)
+ 			err = ceph_sparse_ext_map_end(op);
+ 		if (err < subreq->len &&
++		    subreq->rreq->origin != NETFS_UNBUFFERED_READ &&
+ 		    subreq->rreq->origin != NETFS_DIO_READ)
+ 			__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
+ 		if (IS_ENCRYPTED(inode) && err > 0) {
+@@ -281,7 +282,8 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
+ 	size_t len;
+ 	int mode;
+ 
+-	if (rreq->origin != NETFS_DIO_READ)
++	if (rreq->origin != NETFS_UNBUFFERED_READ &&
++	    rreq->origin != NETFS_DIO_READ)
+ 		__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
+ 	__clear_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
+ 
+@@ -539,7 +541,7 @@ static void ceph_set_page_fscache(struct page *page)
+ 	folio_start_private_2(page_folio(page)); /* [DEPRECATED] */
+ }
+ 
+-static void ceph_fscache_write_terminated(void *priv, ssize_t error, bool was_async)
++static void ceph_fscache_write_terminated(void *priv, ssize_t error)
+ {
+ 	struct inode *inode = priv;
+ 
+diff --git a/fs/dax.c b/fs/dax.c
+index 676303419e9e8a..f8d8b1afd23244 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -257,7 +257,7 @@ static void *wait_entry_unlocked_exclusive(struct xa_state *xas, void *entry)
+ 		wq = dax_entry_waitqueue(xas, entry, &ewait.key);
+ 		prepare_to_wait_exclusive(wq, &ewait.wait,
+ 					TASK_UNINTERRUPTIBLE);
+-		xas_pause(xas);
++		xas_reset(xas);
+ 		xas_unlock_irq(xas);
+ 		schedule();
+ 		finish_wait(wq, &ewait.wait);
+diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
+index 9c9129bca3460b..34517ca9df9157 100644
+--- a/fs/erofs/fscache.c
++++ b/fs/erofs/fscache.c
+@@ -102,8 +102,7 @@ static void erofs_fscache_req_io_put(struct erofs_fscache_io *io)
+ 		erofs_fscache_req_put(req);
+ }
+ 
+-static void erofs_fscache_req_end_io(void *priv,
+-		ssize_t transferred_or_error, bool was_async)
++static void erofs_fscache_req_end_io(void *priv, ssize_t transferred_or_error)
+ {
+ 	struct erofs_fscache_io *io = priv;
+ 	struct erofs_fscache_rq *req = io->private;
+@@ -180,8 +179,7 @@ struct erofs_fscache_bio {
+ 	struct bio_vec bvecs[BIO_MAX_VECS];
+ };
+ 
+-static void erofs_fscache_bio_endio(void *priv,
+-		ssize_t transferred_or_error, bool was_async)
++static void erofs_fscache_bio_endio(void *priv, ssize_t transferred_or_error)
+ {
+ 	struct erofs_fscache_bio *io = priv;
+ 
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index da6ee7c39290d5..6e57b9cc6ed2e0 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -165,8 +165,11 @@ static int erofs_init_device(struct erofs_buf *buf, struct super_block *sb,
+ 				filp_open(dif->path, O_RDONLY | O_LARGEFILE, 0) :
+ 				bdev_file_open_by_path(dif->path,
+ 						BLK_OPEN_READ, sb->s_type, NULL);
+-		if (IS_ERR(file))
++		if (IS_ERR(file)) {
++			if (file == ERR_PTR(-ENOTBLK))
++				return -EINVAL;
+ 			return PTR_ERR(file);
++		}
+ 
+ 		if (!erofs_is_fileio_mode(sbi)) {
+ 			dif->dax_dev = fs_dax_get_by_bdev(file_bdev(file),
+@@ -510,24 +513,52 @@ static int erofs_fc_parse_param(struct fs_context *fc,
+ 	return 0;
+ }
+ 
+-static struct inode *erofs_nfs_get_inode(struct super_block *sb,
+-					 u64 ino, u32 generation)
++static int erofs_encode_fh(struct inode *inode, u32 *fh, int *max_len,
++			   struct inode *parent)
+ {
+-	return erofs_iget(sb, ino);
++	erofs_nid_t nid = EROFS_I(inode)->nid;
++	int len = parent ? 6 : 3;
++
++	if (*max_len < len) {
++		*max_len = len;
++		return FILEID_INVALID;
++	}
++
++	fh[0] = (u32)(nid >> 32);
++	fh[1] = (u32)(nid & 0xffffffff);
++	fh[2] = inode->i_generation;
++
++	if (parent) {
++		nid = EROFS_I(parent)->nid;
++
++		fh[3] = (u32)(nid >> 32);
++		fh[4] = (u32)(nid & 0xffffffff);
++		fh[5] = parent->i_generation;
++	}
++
++	*max_len = len;
++	return parent ? FILEID_INO64_GEN_PARENT : FILEID_INO64_GEN;
+ }
+ 
+ static struct dentry *erofs_fh_to_dentry(struct super_block *sb,
+ 		struct fid *fid, int fh_len, int fh_type)
+ {
+-	return generic_fh_to_dentry(sb, fid, fh_len, fh_type,
+-				    erofs_nfs_get_inode);
++	if ((fh_type != FILEID_INO64_GEN &&
++	     fh_type != FILEID_INO64_GEN_PARENT) || fh_len < 3)
++		return NULL;
++
++	return d_obtain_alias(erofs_iget(sb,
++		((u64)fid->raw[0] << 32) | fid->raw[1]));
+ }
+ 
+ static struct dentry *erofs_fh_to_parent(struct super_block *sb,
+ 		struct fid *fid, int fh_len, int fh_type)
+ {
+-	return generic_fh_to_parent(sb, fid, fh_len, fh_type,
+-				    erofs_nfs_get_inode);
++	if (fh_type != FILEID_INO64_GEN_PARENT || fh_len < 6)
++		return NULL;
++
++	return d_obtain_alias(erofs_iget(sb,
++		((u64)fid->raw[3] << 32) | fid->raw[4]));
+ }
+ 
+ static struct dentry *erofs_get_parent(struct dentry *child)
+@@ -543,7 +574,7 @@ static struct dentry *erofs_get_parent(struct dentry *child)
+ }
+ 
+ static const struct export_operations erofs_export_ops = {
+-	.encode_fh = generic_encode_ino32_fh,
++	.encode_fh = erofs_encode_fh,
+ 	.fh_to_dentry = erofs_fh_to_dentry,
+ 	.fh_to_parent = erofs_fh_to_parent,
+ 	.get_parent = erofs_get_parent,
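
The erofs export rework exists because NIDs are 64-bit, so the generic 32-bit-ino encoder could alias distinct inodes; the new handlers split the nid across two u32 file-handle words plus a generation word, and the decoders reverse it. A standalone round-trip of that packing:

#include <assert.h>
#include <stdint.h>

static void pack_nid(uint64_t nid, uint32_t fh[2])
{
	fh[0] = (uint32_t)(nid >> 32);
	fh[1] = (uint32_t)(nid & 0xffffffff);
}

static uint64_t unpack_nid(const uint32_t fh[2])
{
	return ((uint64_t)fh[0] << 32) | fh[1];
}

int main(void)
{
	uint32_t fh[2];

	pack_nid(0x123456789abcdef0ULL, fh);
	assert(unpack_nid(fh) == 0x123456789abcdef0ULL);
	return 0;
}
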
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 54f89f0ee69b18..b0b8748ae287f4 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -53,8 +53,8 @@ bool f2fs_is_cp_guaranteed(struct page *page)
+ 	struct inode *inode;
+ 	struct f2fs_sb_info *sbi;
+ 
+-	if (!mapping)
+-		return false;
++	if (fscrypt_is_bounce_page(page))
++		return page_private_gcing(fscrypt_pagecache_page(page));
+ 
+ 	inode = mapping->host;
+ 	sbi = F2FS_I_SB(inode);
+@@ -3966,7 +3966,7 @@ static int check_swap_activate(struct swap_info_struct *sis,
+ 
+ 		if ((pblock - SM_I(sbi)->main_blkaddr) % blks_per_sec ||
+ 				nr_pblocks % blks_per_sec ||
+-				!f2fs_valid_pinned_area(sbi, pblock)) {
++				f2fs_is_sequential_zone_area(sbi, pblock)) {
+ 			bool last_extent = false;
+ 
+ 			not_aligned++;
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index f1576dc6ec6797..4f34a7d9760a10 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1780,7 +1780,7 @@ struct f2fs_sb_info {
+ 	unsigned int dirty_device;		/* for checkpoint data flush */
+ 	spinlock_t dev_lock;			/* protect dirty_device */
+ 	bool aligned_blksize;			/* all devices have the same logical blksize */
+-	unsigned int first_zoned_segno;		/* first zoned segno */
++	unsigned int first_seq_zone_segno;	/* first segno in sequential zone */
+ 
+ 	/* For write statistics */
+ 	u64 sectors_written_start;
+@@ -2518,8 +2518,14 @@ static inline void dec_valid_block_count(struct f2fs_sb_info *sbi,
+ 	blkcnt_t sectors = count << F2FS_LOG_SECTORS_PER_BLOCK;
+ 
+ 	spin_lock(&sbi->stat_lock);
+-	f2fs_bug_on(sbi, sbi->total_valid_block_count < (block_t) count);
+-	sbi->total_valid_block_count -= (block_t)count;
++	if (unlikely(sbi->total_valid_block_count < count)) {
++		f2fs_warn(sbi, "Inconsistent total_valid_block_count:%u, ino:%lu, count:%u",
++			  sbi->total_valid_block_count, inode->i_ino, count);
++		sbi->total_valid_block_count = 0;
++		set_sbi_flag(sbi, SBI_NEED_FSCK);
++	} else {
++		sbi->total_valid_block_count -= count;
++	}
+ 	if (sbi->reserved_blocks &&
+ 		sbi->current_reserved_blocks < sbi->reserved_blocks)
+ 		sbi->current_reserved_blocks = min(sbi->reserved_blocks,
+@@ -4622,12 +4628,16 @@ F2FS_FEATURE_FUNCS(readonly, RO);
+ F2FS_FEATURE_FUNCS(device_alias, DEVICE_ALIAS);
+ 
+ #ifdef CONFIG_BLK_DEV_ZONED
+-static inline bool f2fs_blkz_is_seq(struct f2fs_sb_info *sbi, int devi,
+-				    block_t blkaddr)
++static inline bool f2fs_zone_is_seq(struct f2fs_sb_info *sbi, int devi,
++							unsigned int zone)
+ {
+-	unsigned int zno = blkaddr / sbi->blocks_per_blkz;
++	return test_bit(zone, FDEV(devi).blkz_seq);
++}
+ 
+-	return test_bit(zno, FDEV(devi).blkz_seq);
++static inline bool f2fs_blkz_is_seq(struct f2fs_sb_info *sbi, int devi,
++								block_t blkaddr)
++{
++	return f2fs_zone_is_seq(sbi, devi, blkaddr / sbi->blocks_per_blkz);
+ }
+ #endif
+ 
+@@ -4699,15 +4709,31 @@ static inline bool f2fs_lfs_mode(struct f2fs_sb_info *sbi)
+ 	return F2FS_OPTION(sbi).fs_mode == FS_MODE_LFS;
+ }
+ 
+-static inline bool f2fs_valid_pinned_area(struct f2fs_sb_info *sbi,
++static inline bool f2fs_is_sequential_zone_area(struct f2fs_sb_info *sbi,
+ 					  block_t blkaddr)
+ {
+ 	if (f2fs_sb_has_blkzoned(sbi)) {
++#ifdef CONFIG_BLK_DEV_ZONED
+ 		int devi = f2fs_target_device_index(sbi, blkaddr);
+ 
+-		return !bdev_is_zoned(FDEV(devi).bdev);
++		if (!bdev_is_zoned(FDEV(devi).bdev))
++			return false;
++
++		if (f2fs_is_multi_device(sbi)) {
++			if (blkaddr < FDEV(devi).start_blk ||
++				blkaddr > FDEV(devi).end_blk) {
++				f2fs_err(sbi, "Invalid block %x", blkaddr);
++				return false;
++			}
++			blkaddr -= FDEV(devi).start_blk;
++		}
++
++		return f2fs_blkz_is_seq(sbi, devi, blkaddr);
++#else
++		return false;
++#endif
+ 	}
+-	return true;
++	return false;
+ }
+ 
+ static inline bool f2fs_low_mem_mode(struct f2fs_sb_info *sbi)
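
dec_valid_block_count() above trades an f2fs_bug_on() for a saturating decrement: on inconsistency it clamps the counter to zero, warns, and flags the filesystem for fsck instead of crashing. The same recover-and-flag shape, generically:

#include <stdbool.h>
#include <stdint.h>

static bool dec_valid_blocks(uint32_t *total, uint32_t count,
			     bool *need_fsck)
{
	if (*total < count) {
		*total = 0;		/* clamp instead of underflowing */
		*need_fsck = true;	/* defer repair to fsck */
		return false;
	}
	*total -= count;
	return true;
}
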
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 2b8f9239bede7c..8b5a55b72264dd 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -2066,6 +2066,9 @@ int f2fs_gc_range(struct f2fs_sb_info *sbi,
+ 			.iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
+ 		};
+ 
++		if (IS_CURSEC(sbi, GET_SEC_FROM_SEG(sbi, segno)))
++			continue;
++
+ 		do_garbage_collect(sbi, segno, &gc_list, FG_GC, true, false);
+ 		put_gc_inode(&gc_list);
+ 
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 8f8b9b843bdf4b..28137d499f8f65 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -414,7 +414,7 @@ static int f2fs_link(struct dentry *old_dentry, struct inode *dir,
+ 
+ 	if (is_inode_flag_set(dir, FI_PROJ_INHERIT) &&
+ 			(!projid_eq(F2FS_I(dir)->i_projid,
+-			F2FS_I(old_dentry->d_inode)->i_projid)))
++			F2FS_I(inode)->i_projid)))
+ 		return -EXDEV;
+ 
+ 	err = f2fs_dquot_initialize(dir);
+@@ -914,7 +914,7 @@ static int f2fs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ 
+ 	if (is_inode_flag_set(new_dir, FI_PROJ_INHERIT) &&
+ 			(!projid_eq(F2FS_I(new_dir)->i_projid,
+-			F2FS_I(old_dentry->d_inode)->i_projid)))
++			F2FS_I(old_inode)->i_projid)))
+ 		return -EXDEV;
+ 
+ 	/*
+@@ -1107,10 +1107,10 @@ static int f2fs_cross_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 
+ 	if ((is_inode_flag_set(new_dir, FI_PROJ_INHERIT) &&
+ 			!projid_eq(F2FS_I(new_dir)->i_projid,
+-			F2FS_I(old_dentry->d_inode)->i_projid)) ||
+-	    (is_inode_flag_set(new_dir, FI_PROJ_INHERIT) &&
++			F2FS_I(old_inode)->i_projid)) ||
++	    (is_inode_flag_set(old_dir, FI_PROJ_INHERIT) &&
+ 			!projid_eq(F2FS_I(old_dir)->i_projid,
+-			F2FS_I(new_dentry->d_inode)->i_projid)))
++			F2FS_I(new_inode)->i_projid)))
+ 		return -EXDEV;
+ 
+ 	err = f2fs_dquot_initialize(old_dir);
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 396ef71f41e359..41ca73622c8d46 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -424,7 +424,7 @@ void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need)
+ 	if (need && excess_cached_nats(sbi))
+ 		f2fs_balance_fs_bg(sbi, false);
+ 
+-	if (!f2fs_is_checkpoint_ready(sbi))
++	if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED)))
+ 		return;
+ 
+ 	/*
+@@ -2777,7 +2777,7 @@ static int get_new_segment(struct f2fs_sb_info *sbi,
+ 		if (sbi->blkzone_alloc_policy == BLKZONE_ALLOC_PRIOR_CONV || pinning)
+ 			segno = 0;
+ 		else
+-			segno = max(sbi->first_zoned_segno, *newseg);
++			segno = max(sbi->first_seq_zone_segno, *newseg);
+ 		hint = GET_SEC_FROM_SEG(sbi, segno);
+ 	}
+ #endif
+@@ -2789,7 +2789,7 @@ static int get_new_segment(struct f2fs_sb_info *sbi,
+ 	if (secno >= MAIN_SECS(sbi) && f2fs_sb_has_blkzoned(sbi)) {
+ 		/* Write only to sequential zones */
+ 		if (sbi->blkzone_alloc_policy == BLKZONE_ALLOC_ONLY_SEQ) {
+-			hint = GET_SEC_FROM_SEG(sbi, sbi->first_zoned_segno);
++			hint = GET_SEC_FROM_SEG(sbi, sbi->first_seq_zone_segno);
+ 			secno = find_next_zero_bit(free_i->free_secmap, MAIN_SECS(sbi), hint);
+ 		} else
+ 			secno = find_first_zero_bit(free_i->free_secmap,
+@@ -2838,9 +2838,9 @@ static int get_new_segment(struct f2fs_sb_info *sbi,
+ 	/* set it as dirty segment in free segmap */
+ 	f2fs_bug_on(sbi, test_bit(segno, free_i->free_segmap));
+ 
+-	/* no free section in conventional zone */
++	/* no free section in conventional device or conventional zone */
+ 	if (new_sec && pinning &&
+-		!f2fs_valid_pinned_area(sbi, START_BLOCK(sbi, segno))) {
++		f2fs_is_sequential_zone_area(sbi, START_BLOCK(sbi, segno))) {
+ 		ret = -EAGAIN;
+ 		goto out_unlock;
+ 	}
+@@ -3311,7 +3311,7 @@ int f2fs_allocate_pinning_section(struct f2fs_sb_info *sbi)
+ 
+ 	if (f2fs_sb_has_blkzoned(sbi) && err == -EAGAIN && gc_required) {
+ 		f2fs_down_write(&sbi->gc_lock);
+-		err = f2fs_gc_range(sbi, 0, GET_SEGNO(sbi, FDEV(0).end_blk),
++		err = f2fs_gc_range(sbi, 0, sbi->first_seq_zone_segno - 1,
+ 				true, ZONED_PIN_SEC_REQUIRED_COUNT);
+ 		f2fs_up_write(&sbi->gc_lock);
+ 
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 0465dc00b349d2..503f6df690bf2b 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -429,7 +429,6 @@ static inline void __set_free(struct f2fs_sb_info *sbi, unsigned int segno)
+ 	unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
+ 	unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
+ 	unsigned int next;
+-	unsigned int usable_segs = f2fs_usable_segs_in_sec(sbi);
+ 
+ 	spin_lock(&free_i->segmap_lock);
+ 	clear_bit(segno, free_i->free_segmap);
+@@ -437,7 +436,7 @@ static inline void __set_free(struct f2fs_sb_info *sbi, unsigned int segno)
+ 
+ 	next = find_next_bit(free_i->free_segmap,
+ 			start_segno + SEGS_PER_SEC(sbi), start_segno);
+-	if (next >= start_segno + usable_segs) {
++	if (next >= start_segno + f2fs_usable_segs_in_sec(sbi)) {
+ 		clear_bit(secno, free_i->free_secmap);
+ 		free_i->free_sections++;
+ 	}
+@@ -463,22 +462,36 @@ static inline void __set_test_and_free(struct f2fs_sb_info *sbi,
+ 	unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
+ 	unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
+ 	unsigned int next;
+-	unsigned int usable_segs = f2fs_usable_segs_in_sec(sbi);
++	bool ret;
+ 
+ 	spin_lock(&free_i->segmap_lock);
+-	if (test_and_clear_bit(segno, free_i->free_segmap)) {
+-		free_i->free_segments++;
+-
+-		if (!inmem && IS_CURSEC(sbi, secno))
+-			goto skip_free;
+-		next = find_next_bit(free_i->free_segmap,
+-				start_segno + SEGS_PER_SEC(sbi), start_segno);
+-		if (next >= start_segno + usable_segs) {
+-			if (test_and_clear_bit(secno, free_i->free_secmap))
+-				free_i->free_sections++;
+-		}
+-	}
+-skip_free:
++	ret = test_and_clear_bit(segno, free_i->free_segmap);
++	if (!ret)
++		goto unlock_out;
++
++	free_i->free_segments++;
++
++	if (!inmem && IS_CURSEC(sbi, secno))
++		goto unlock_out;
++
++	/* check large section */
++	next = find_next_bit(free_i->free_segmap,
++			     start_segno + SEGS_PER_SEC(sbi), start_segno);
++	if (next < start_segno + f2fs_usable_segs_in_sec(sbi))
++		goto unlock_out;
++
++	ret = test_and_clear_bit(secno, free_i->free_secmap);
++	if (!ret)
++		goto unlock_out;
++
++	free_i->free_sections++;
++
++	if (GET_SEC_FROM_SEG(sbi, sbi->next_victim_seg[BG_GC]) == secno)
++		sbi->next_victim_seg[BG_GC] = NULL_SEGNO;
++	if (GET_SEC_FROM_SEG(sbi, sbi->next_victim_seg[FG_GC]) == secno)
++		sbi->next_victim_seg[FG_GC] = NULL_SEGNO;
++
++unlock_out:
+ 	spin_unlock(&free_i->segmap_lock);
+ }
+ 
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index f087b2b71c8987..386326f7a440eb 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1882,9 +1882,9 @@ static int f2fs_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 	buf->f_fsid    = u64_to_fsid(id);
+ 
+ #ifdef CONFIG_QUOTA
+-	if (is_inode_flag_set(dentry->d_inode, FI_PROJ_INHERIT) &&
++	if (is_inode_flag_set(d_inode(dentry), FI_PROJ_INHERIT) &&
+ 			sb_has_quota_limits_enabled(sb, PRJQUOTA)) {
+-		f2fs_statfs_project(sb, F2FS_I(dentry->d_inode)->i_projid, buf);
++		f2fs_statfs_project(sb, F2FS_I(d_inode(dentry))->i_projid, buf);
+ 	}
+ #endif
+ 	return 0;
+@@ -4304,14 +4304,35 @@ static void f2fs_record_error_work(struct work_struct *work)
+ 	f2fs_record_stop_reason(sbi);
+ }
+ 
+-static inline unsigned int get_first_zoned_segno(struct f2fs_sb_info *sbi)
++static inline unsigned int get_first_seq_zone_segno(struct f2fs_sb_info *sbi)
+ {
++#ifdef CONFIG_BLK_DEV_ZONED
++	unsigned int zoneno, total_zones;
+ 	int devi;
+ 
+-	for (devi = 0; devi < sbi->s_ndevs; devi++)
+-		if (bdev_is_zoned(FDEV(devi).bdev))
+-			return GET_SEGNO(sbi, FDEV(devi).start_blk);
+-	return 0;
++	if (!f2fs_sb_has_blkzoned(sbi))
++		return NULL_SEGNO;
++
++	for (devi = 0; devi < sbi->s_ndevs; devi++) {
++		if (!bdev_is_zoned(FDEV(devi).bdev))
++			continue;
++
++		total_zones = GET_ZONE_FROM_SEG(sbi, FDEV(devi).total_segments);
++
++		for (zoneno = 0; zoneno < total_zones; zoneno++) {
++			unsigned int segs, blks;
++
++			if (!f2fs_zone_is_seq(sbi, devi, zoneno))
++				continue;
++
++			segs = GET_SEG_FROM_SEC(sbi,
++					zoneno * sbi->secs_per_zone);
++			blks = SEGS_TO_BLKS(sbi, segs);
++			return GET_SEGNO(sbi, FDEV(devi).start_blk + blks);
++		}
++	}
++#endif
++	return NULL_SEGNO;
+ }
+ 
+ static int f2fs_scan_devices(struct f2fs_sb_info *sbi)
+@@ -4348,6 +4369,14 @@ static int f2fs_scan_devices(struct f2fs_sb_info *sbi)
+ #endif
+ 
+ 	for (i = 0; i < max_devices; i++) {
++		if (max_devices == 1) {
++			FDEV(i).total_segments =
++				le32_to_cpu(raw_super->segment_count_main);
++			FDEV(i).start_blk = 0;
++			FDEV(i).end_blk = FDEV(i).total_segments *
++						BLKS_PER_SEG(sbi);
++		}
++
+ 		if (i == 0)
+ 			FDEV(0).bdev_file = sbi->sb->s_bdev_file;
+ 		else if (!RDEV(i).path[0])
+@@ -4718,7 +4747,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
+ 	sbi->sectors_written_start = f2fs_get_sectors_written(sbi);
+ 
+ 	/* get segno of first zoned block device */
+-	sbi->first_zoned_segno = get_first_zoned_segno(sbi);
++	sbi->first_seq_zone_segno = get_first_seq_zone_segno(sbi);
+ 
+ 	/* Read accumulated write IO statistics if exists */
+ 	seg_i = CURSEG_I(sbi, CURSEG_HOT_NODE);
+diff --git a/fs/filesystems.c b/fs/filesystems.c
+index 58b9067b2391ce..95e5256821a534 100644
+--- a/fs/filesystems.c
++++ b/fs/filesystems.c
+@@ -156,15 +156,19 @@ static int fs_index(const char __user * __name)
+ static int fs_name(unsigned int index, char __user * buf)
+ {
+ 	struct file_system_type * tmp;
+-	int len, res;
++	int len, res = -EINVAL;
+ 
+ 	read_lock(&file_systems_lock);
+-	for (tmp = file_systems; tmp; tmp = tmp->next, index--)
+-		if (index <= 0 && try_module_get(tmp->owner))
++	for (tmp = file_systems; tmp; tmp = tmp->next, index--) {
++		if (index == 0) {
++			if (try_module_get(tmp->owner))
++				res = 0;
+ 			break;
++		}
++	}
+ 	read_unlock(&file_systems_lock);
+-	if (!tmp)
+-		return -EINVAL;
++	if (res)
++		return res;
+ 
+ 	/* OK, we got the reference, so we can safely block */
+ 	len = strlen(tmp->name) + 1;
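
The fs_name() rewrite makes the lookup exact-match: res starts out as -EINVAL, the walk only succeeds when index reaches zero and try_module_get() succeeds on that very entry, and anything else falls through to failure (where the old `index <= 0` test could overshoot). The indexed-walk part, reduced to a sketch:

#include <stddef.h>

struct fstype {
	const char *name;
	struct fstype *next;
};

/* Return the index'th entry or NULL when the list is shorter. */
static struct fstype *nth_fstype(struct fstype *head, unsigned int index)
{
	for (struct fstype *t = head; t; t = t->next, index--)
		if (index == 0)
			return t;
	return NULL;
}
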
+diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
+index 68fc8af14700d3..eb4270e82ef8ee 100644
+--- a/fs/gfs2/aops.c
++++ b/fs/gfs2/aops.c
+@@ -37,27 +37,6 @@
+ #include "aops.h"
+ 
+ 
+-void gfs2_trans_add_databufs(struct gfs2_inode *ip, struct folio *folio,
+-			     size_t from, size_t len)
+-{
+-	struct buffer_head *head = folio_buffers(folio);
+-	unsigned int bsize = head->b_size;
+-	struct buffer_head *bh;
+-	size_t to = from + len;
+-	size_t start, end;
+-
+-	for (bh = head, start = 0; bh != head || !start;
+-	     bh = bh->b_this_page, start = end) {
+-		end = start + bsize;
+-		if (end <= from)
+-			continue;
+-		if (start >= to)
+-			break;
+-		set_buffer_uptodate(bh);
+-		gfs2_trans_add_data(ip->i_gl, bh);
+-	}
+-}
+-
+ /**
+  * gfs2_get_block_noalloc - Fills in a buffer head with details about a block
+  * @inode: The inode
+@@ -133,11 +112,42 @@ static int __gfs2_jdata_write_folio(struct folio *folio,
+ 					inode->i_sb->s_blocksize,
+ 					BIT(BH_Dirty)|BIT(BH_Uptodate));
+ 		}
+-		gfs2_trans_add_databufs(ip, folio, 0, folio_size(folio));
++		gfs2_trans_add_databufs(ip->i_gl, folio, 0, folio_size(folio));
+ 	}
+ 	return gfs2_write_jdata_folio(folio, wbc);
+ }
+ 
++/**
++ * gfs2_jdata_writeback - Write jdata folios to the log
++ * @mapping: The mapping to write
++ * @wbc: The writeback control
++ *
++ * Returns: errno
++ */
++int gfs2_jdata_writeback(struct address_space *mapping, struct writeback_control *wbc)
++{
++	struct inode *inode = mapping->host;
++	struct gfs2_inode *ip = GFS2_I(inode);
++	struct gfs2_sbd *sdp = GFS2_SB(mapping->host);
++	struct folio *folio = NULL;
++	int error;
++
++	BUG_ON(current->journal_info);
++	if (gfs2_assert_withdraw(sdp, ip->i_gl->gl_state == LM_ST_EXCLUSIVE))
++		return 0;
++
++	while ((folio = writeback_iter(mapping, wbc, folio, &error))) {
++		if (folio_test_checked(folio)) {
++			folio_redirty_for_writepage(wbc, folio);
++			folio_unlock(folio);
++			continue;
++		}
++		error = __gfs2_jdata_write_folio(folio, wbc);
++	}
++
++	return error;
++}
++
+ /**
+  * gfs2_writepages - Write a bunch of dirty pages back to disk
+  * @mapping: The mapping to write
+diff --git a/fs/gfs2/aops.h b/fs/gfs2/aops.h
+index a10c4334d24893..bf002522a78220 100644
+--- a/fs/gfs2/aops.h
++++ b/fs/gfs2/aops.h
+@@ -9,7 +9,6 @@
+ #include "incore.h"
+ 
+ void adjust_fs_space(struct inode *inode);
+-void gfs2_trans_add_databufs(struct gfs2_inode *ip, struct folio *folio,
+-			     size_t from, size_t len);
++int gfs2_jdata_writeback(struct address_space *mapping, struct writeback_control *wbc);
+ 
+ #endif /* __AOPS_DOT_H__ */
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index 366516b98b3f31..b81984def58ec3 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -988,7 +988,8 @@ static void gfs2_iomap_put_folio(struct inode *inode, loff_t pos,
+ 	struct gfs2_sbd *sdp = GFS2_SB(inode);
+ 
+ 	if (!gfs2_is_stuffed(ip))
+-		gfs2_trans_add_databufs(ip, folio, offset_in_folio(folio, pos),
++		gfs2_trans_add_databufs(ip->i_gl, folio,
++					offset_in_folio(folio, pos),
+ 					copied);
+ 
+ 	folio_unlock(folio);
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index d7220a6fe8f55e..ba25b884169e50 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -1166,7 +1166,6 @@ int gfs2_glock_get(struct gfs2_sbd *sdp, u64 number,
+ 		   const struct gfs2_glock_operations *glops, int create,
+ 		   struct gfs2_glock **glp)
+ {
+-	struct super_block *s = sdp->sd_vfs;
+ 	struct lm_lockname name = { .ln_number = number,
+ 				    .ln_type = glops->go_type,
+ 				    .ln_sbd = sdp };
+@@ -1229,7 +1228,7 @@ int gfs2_glock_get(struct gfs2_sbd *sdp, u64 number,
+ 	mapping = gfs2_glock2aspace(gl);
+ 	if (mapping) {
+                 mapping->a_ops = &gfs2_meta_aops;
+-		mapping->host = s->s_bdev->bd_mapping->host;
++		mapping->host = sdp->sd_inode;
+ 		mapping->flags = 0;
+ 		mapping_set_gfp_mask(mapping, GFP_NOFS);
+ 		mapping->i_private_data = NULL;
+diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
+index eb4714f299efb6..116efe335c3212 100644
+--- a/fs/gfs2/glops.c
++++ b/fs/gfs2/glops.c
+@@ -168,7 +168,7 @@ void gfs2_ail_flush(struct gfs2_glock *gl, bool fsync)
+ static int gfs2_rgrp_metasync(struct gfs2_glock *gl)
+ {
+ 	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
+-	struct address_space *metamapping = &sdp->sd_aspace;
++	struct address_space *metamapping = gfs2_aspace(sdp);
+ 	struct gfs2_rgrpd *rgd = gfs2_glock2rgrp(gl);
+ 	const unsigned bsize = sdp->sd_sb.sb_bsize;
+ 	loff_t start = (rgd->rd_addr * bsize) & PAGE_MASK;
+@@ -225,7 +225,7 @@ static int rgrp_go_sync(struct gfs2_glock *gl)
+ static void rgrp_go_inval(struct gfs2_glock *gl, int flags)
+ {
+ 	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
+-	struct address_space *mapping = &sdp->sd_aspace;
++	struct address_space *mapping = gfs2_aspace(sdp);
+ 	struct gfs2_rgrpd *rgd = gfs2_glock2rgrp(gl);
+ 	const unsigned bsize = sdp->sd_sb.sb_bsize;
+ 	loff_t start, end;
+diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
+index 74abbd4970f80b..0a41c4e76b3267 100644
+--- a/fs/gfs2/incore.h
++++ b/fs/gfs2/incore.h
+@@ -795,7 +795,7 @@ struct gfs2_sbd {
+ 
+ 	/* Log stuff */
+ 
+-	struct address_space sd_aspace;
++	struct inode *sd_inode;
+ 
+ 	spinlock_t sd_log_lock;
+ 
+@@ -851,6 +851,13 @@ struct gfs2_sbd {
+ 	unsigned long sd_glock_dqs_held;
+ };
+ 
++#define GFS2_BAD_INO 1
++
++static inline struct address_space *gfs2_aspace(struct gfs2_sbd *sdp)
++{
++	return sdp->sd_inode->i_mapping;
++}
++
+ static inline void gfs2_glstats_inc(struct gfs2_glock *gl, int which)
+ {
+ 	gl->gl_stats.stats[which]++;
+diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c
+index 198a8cbaf5e5ad..8fd81444ffea00 100644
+--- a/fs/gfs2/inode.c
++++ b/fs/gfs2/inode.c
+@@ -439,6 +439,74 @@ static int alloc_dinode(struct gfs2_inode *ip, u32 flags, unsigned *dblocks)
+ 	return error;
+ }
+ 
++static void gfs2_final_release_pages(struct gfs2_inode *ip)
++{
++	struct inode *inode = &ip->i_inode;
++	struct gfs2_glock *gl = ip->i_gl;
++
++	if (unlikely(!gl)) {
++		/* This can only happen during incomplete inode creation. */
++		BUG_ON(!test_bit(GIF_ALLOC_FAILED, &ip->i_flags));
++		return;
++	}
++
++	truncate_inode_pages(gfs2_glock2aspace(gl), 0);
++	truncate_inode_pages(&inode->i_data, 0);
++
++	if (atomic_read(&gl->gl_revokes) == 0) {
++		clear_bit(GLF_LFLUSH, &gl->gl_flags);
++		clear_bit(GLF_DIRTY, &gl->gl_flags);
++	}
++}
++
++int gfs2_dinode_dealloc(struct gfs2_inode *ip)
++{
++	struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
++	struct gfs2_rgrpd *rgd;
++	struct gfs2_holder gh;
++	int error;
++
++	if (gfs2_get_inode_blocks(&ip->i_inode) != 1) {
++		gfs2_consist_inode(ip);
++		return -EIO;
++	}
++
++	gfs2_rindex_update(sdp);
++
++	error = gfs2_quota_hold(ip, NO_UID_QUOTA_CHANGE, NO_GID_QUOTA_CHANGE);
++	if (error)
++		return error;
++
++	rgd = gfs2_blk2rgrpd(sdp, ip->i_no_addr, 1);
++	if (!rgd) {
++		gfs2_consist_inode(ip);
++		error = -EIO;
++		goto out_qs;
++	}
++
++	error = gfs2_glock_nq_init(rgd->rd_gl, LM_ST_EXCLUSIVE,
++				   LM_FLAG_NODE_SCOPE, &gh);
++	if (error)
++		goto out_qs;
++
++	error = gfs2_trans_begin(sdp, RES_RG_BIT + RES_STATFS + RES_QUOTA,
++				 sdp->sd_jdesc->jd_blocks);
++	if (error)
++		goto out_rg_gunlock;
++
++	gfs2_free_di(rgd, ip);
++
++	gfs2_final_release_pages(ip);
++
++	gfs2_trans_end(sdp);
++
++out_rg_gunlock:
++	gfs2_glock_dq_uninit(&gh);
++out_qs:
++	gfs2_quota_unhold(ip);
++	return error;
++}
++
+ static void gfs2_init_dir(struct buffer_head *dibh,
+ 			  const struct gfs2_inode *parent)
+ {
+@@ -629,10 +697,11 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry,
+ 	struct gfs2_inode *dip = GFS2_I(dir), *ip;
+ 	struct gfs2_sbd *sdp = GFS2_SB(&dip->i_inode);
+ 	struct gfs2_glock *io_gl;
+-	int error;
++	int error, dealloc_error;
+ 	u32 aflags = 0;
+ 	unsigned blocks = 1;
+ 	struct gfs2_diradd da = { .bh = NULL, .save_loc = 1, };
++	bool xattr_initialized = false;
+ 
+ 	if (!name->len || name->len > GFS2_FNAMESIZE)
+ 		return -ENAMETOOLONG;
+@@ -659,7 +728,8 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry,
+ 	if (!IS_ERR(inode)) {
+ 		if (S_ISDIR(inode->i_mode)) {
+ 			iput(inode);
+-			inode = ERR_PTR(-EISDIR);
++			inode = NULL;
++			error = -EISDIR;
+ 			goto fail_gunlock;
+ 		}
+ 		d_instantiate(dentry, inode);
+@@ -744,11 +814,11 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry,
+ 
+ 	error = gfs2_glock_get(sdp, ip->i_no_addr, &gfs2_inode_glops, CREATE, &ip->i_gl);
+ 	if (error)
+-		goto fail_free_inode;
++		goto fail_dealloc_inode;
+ 
+ 	error = gfs2_glock_get(sdp, ip->i_no_addr, &gfs2_iopen_glops, CREATE, &io_gl);
+ 	if (error)
+-		goto fail_free_inode;
++		goto fail_dealloc_inode;
+ 	gfs2_cancel_delete_work(io_gl);
+ 	io_gl->gl_no_formal_ino = ip->i_no_formal_ino;
+ 
+@@ -772,8 +842,10 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry,
+ 	if (error)
+ 		goto fail_gunlock3;
+ 
+-	if (blocks > 1)
++	if (blocks > 1) {
+ 		gfs2_init_xattr(ip);
++		xattr_initialized = true;
++	}
+ 	init_dinode(dip, ip, symname);
+ 	gfs2_trans_end(sdp);
+ 
+@@ -828,6 +900,18 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry,
+ 	gfs2_glock_dq_uninit(&ip->i_iopen_gh);
+ fail_gunlock2:
+ 	gfs2_glock_put(io_gl);
++fail_dealloc_inode:
++	set_bit(GIF_ALLOC_FAILED, &ip->i_flags);
++	dealloc_error = 0;
++	if (ip->i_eattr)
++		dealloc_error = gfs2_ea_dealloc(ip, xattr_initialized);
++	clear_nlink(inode);
++	mark_inode_dirty(inode);
++	if (!dealloc_error)
++		dealloc_error = gfs2_dinode_dealloc(ip);
++	if (dealloc_error)
++		fs_warn(sdp, "%s: %d\n", __func__, dealloc_error);
++	ip->i_no_addr = 0;
+ fail_free_inode:
+ 	if (ip->i_gl) {
+ 		gfs2_glock_put(ip->i_gl);
+@@ -842,10 +926,6 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry,
+ 	gfs2_dir_no_add(&da);
+ 	gfs2_glock_dq_uninit(&d_gh);
+ 	if (!IS_ERR_OR_NULL(inode)) {
+-		set_bit(GIF_ALLOC_FAILED, &ip->i_flags);
+-		clear_nlink(inode);
+-		if (ip->i_no_addr)
+-			mark_inode_dirty(inode);
+ 		if (inode->i_state & I_NEW)
+ 			iget_failed(inode);
+ 		else
+diff --git a/fs/gfs2/inode.h b/fs/gfs2/inode.h
+index 9e5e1622d50a60..eafe123617e698 100644
+--- a/fs/gfs2/inode.h
++++ b/fs/gfs2/inode.h
+@@ -92,6 +92,7 @@ struct inode *gfs2_inode_lookup(struct super_block *sb, unsigned type,
+ struct inode *gfs2_lookup_by_inum(struct gfs2_sbd *sdp, u64 no_addr,
+ 				  u64 no_formal_ino,
+ 				  unsigned int blktype);
++int gfs2_dinode_dealloc(struct gfs2_inode *ip);
+ 
+ struct inode *gfs2_lookupi(struct inode *dir, const struct qstr *name,
+ 			   int is_root);
+diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
+index f9c5089783d24c..115c4ac457e90a 100644
+--- a/fs/gfs2/log.c
++++ b/fs/gfs2/log.c
+@@ -31,6 +31,7 @@
+ #include "dir.h"
+ #include "trace_gfs2.h"
+ #include "trans.h"
++#include "aops.h"
+ 
+ static void gfs2_log_shutdown(struct gfs2_sbd *sdp);
+ 
+@@ -131,7 +132,11 @@ __acquires(&sdp->sd_ail_lock)
+ 		if (!mapping)
+ 			continue;
+ 		spin_unlock(&sdp->sd_ail_lock);
+-		ret = mapping->a_ops->writepages(mapping, wbc);
++		BUG_ON(GFS2_SB(mapping->host) != sdp);
++		if (gfs2_is_jdata(GFS2_I(mapping->host)))
++			ret = gfs2_jdata_writeback(mapping, wbc);
++		else
++			ret = mapping->a_ops->writepages(mapping, wbc);
+ 		if (need_resched()) {
+ 			blk_finish_plug(plug);
+ 			cond_resched();
+diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c
+index 198cc705663755..9dc8885c95d072 100644
+--- a/fs/gfs2/meta_io.c
++++ b/fs/gfs2/meta_io.c
+@@ -132,7 +132,7 @@ struct buffer_head *gfs2_getbuf(struct gfs2_glock *gl, u64 blkno, int create)
+ 	unsigned int bufnum;
+ 
+ 	if (mapping == NULL)
+-		mapping = &sdp->sd_aspace;
++		mapping = gfs2_aspace(sdp);
+ 
+ 	shift = PAGE_SHIFT - sdp->sd_sb.sb_bsize_shift;
+ 	index = blkno >> shift;             /* convert block to page */
+diff --git a/fs/gfs2/meta_io.h b/fs/gfs2/meta_io.h
+index 831d988c2ceb74..b7c8a6684d0249 100644
+--- a/fs/gfs2/meta_io.h
++++ b/fs/gfs2/meta_io.h
+@@ -44,9 +44,7 @@ static inline struct gfs2_sbd *gfs2_mapping2sbd(struct address_space *mapping)
+ 		struct gfs2_glock_aspace *gla =
+ 			container_of(mapping, struct gfs2_glock_aspace, mapping);
+ 		return gla->glock.gl_name.ln_sbd;
+-	} else if (mapping->a_ops == &gfs2_rgrp_aops)
+-		return container_of(mapping, struct gfs2_sbd, sd_aspace);
+-	else
++	} else
+ 		return inode->i_sb->s_fs_info;
+ }
+ 
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index e83d293c361423..4a0f7de41b2b2f 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -64,15 +64,17 @@ static void gfs2_tune_init(struct gfs2_tune *gt)
+ 
+ void free_sbd(struct gfs2_sbd *sdp)
+ {
++	struct super_block *sb = sdp->sd_vfs;
++
+ 	if (sdp->sd_lkstats)
+ 		free_percpu(sdp->sd_lkstats);
++	sb->s_fs_info = NULL;
+ 	kfree(sdp);
+ }
+ 
+ static struct gfs2_sbd *init_sbd(struct super_block *sb)
+ {
+ 	struct gfs2_sbd *sdp;
+-	struct address_space *mapping;
+ 
+ 	sdp = kzalloc(sizeof(struct gfs2_sbd), GFP_KERNEL);
+ 	if (!sdp)
+@@ -109,16 +111,6 @@ static struct gfs2_sbd *init_sbd(struct super_block *sb)
+ 
+ 	INIT_LIST_HEAD(&sdp->sd_sc_inodes_list);
+ 
+-	mapping = &sdp->sd_aspace;
+-
+-	address_space_init_once(mapping);
+-	mapping->a_ops = &gfs2_rgrp_aops;
+-	mapping->host = sb->s_bdev->bd_mapping->host;
+-	mapping->flags = 0;
+-	mapping_set_gfp_mask(mapping, GFP_NOFS);
+-	mapping->i_private_data = NULL;
+-	mapping->writeback_index = 0;
+-
+ 	spin_lock_init(&sdp->sd_log_lock);
+ 	atomic_set(&sdp->sd_log_pinned, 0);
+ 	INIT_LIST_HEAD(&sdp->sd_log_revokes);
+@@ -1135,6 +1127,7 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	int silent = fc->sb_flags & SB_SILENT;
+ 	struct gfs2_sbd *sdp;
+ 	struct gfs2_holder mount_gh;
++	struct address_space *mapping;
+ 	int error;
+ 
+ 	sdp = init_sbd(sb);
+@@ -1156,6 +1149,7 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	sb->s_flags |= SB_NOSEC;
+ 	sb->s_magic = GFS2_MAGIC;
+ 	sb->s_op = &gfs2_super_ops;
++
+ 	sb->s_d_op = &gfs2_dops;
+ 	sb->s_export_op = &gfs2_export_ops;
+ 	sb->s_qcop = &gfs2_quotactl_ops;
+@@ -1181,9 +1175,21 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ 		sdp->sd_tune.gt_statfs_quantum = 30;
+ 	}
+ 
++	/* Set up an address space for metadata writes */
++	sdp->sd_inode = new_inode(sb);
++	error = -ENOMEM;
++	if (!sdp->sd_inode)
++		goto fail_free;
++	sdp->sd_inode->i_ino = GFS2_BAD_INO;
++	sdp->sd_inode->i_size = OFFSET_MAX;
++
++	mapping = gfs2_aspace(sdp);
++	mapping->a_ops = &gfs2_rgrp_aops;
++	mapping_set_gfp_mask(mapping, GFP_NOFS);
++
+ 	error = init_names(sdp, silent);
+ 	if (error)
+-		goto fail_free;
++		goto fail_iput;
+ 
+ 	snprintf(sdp->sd_fsname, sizeof(sdp->sd_fsname), "%s", sdp->sd_table_name);
+ 
+@@ -1192,7 +1198,7 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ 			WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_FREEZABLE, 0,
+ 			sdp->sd_fsname);
+ 	if (!sdp->sd_glock_wq)
+-		goto fail_free;
++		goto fail_iput;
+ 
+ 	sdp->sd_delete_wq = alloc_workqueue("gfs2-delete/%s",
+ 			WQ_MEM_RECLAIM | WQ_FREEZABLE, 0, sdp->sd_fsname);
+@@ -1309,9 +1315,10 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ fail_glock_wq:
+ 	if (sdp->sd_glock_wq)
+ 		destroy_workqueue(sdp->sd_glock_wq);
++fail_iput:
++	iput(sdp->sd_inode);
+ fail_free:
+ 	free_sbd(sdp);
+-	sb->s_fs_info = NULL;
+ 	return error;
+ }
+ 
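
[Editor's note: the ops_fstype.c hunks above thread a new resource, sdp->sd_inode, into gfs2_fill_super()'s unwind ladder by adding the fail_iput label ahead of fail_free. As a reading aid, here is a small self-contained C sketch of that goto-unwind idiom; the mallocs and names are illustrative stand-ins, not the kernel calls.]

#include <stdlib.h>

static int fill_super(void)
{
	int error = -1;
	void *sdp = NULL, *inode = NULL, *wq = NULL;

	sdp = malloc(16);
	if (!sdp)
		goto fail;
	inode = malloc(16);	/* like: sdp->sd_inode = new_inode(sb) */
	if (!inode)
		goto fail_free;
	wq = malloc(16);	/* like: alloc_workqueue(...) */
	if (!wq)
		goto fail_iput;	/* new label: drop the inode first */

	free(wq); free(inode); free(sdp);
	return 0;

fail_iput:
	free(inode);		/* like: iput(sdp->sd_inode) */
fail_free:
	free(sdp);		/* like: free_sbd(sdp) */
fail:
	return error;
}

int main(void) { return fill_super() ? 1 : 0; }

Each label releases exactly the resources acquired before its goto sites, in reverse order of acquisition; note the patch also moves the s_fs_info reset into free_sbd() so every unwind path clears it.
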
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 44e5658b896c88..0bd7827e6371e2 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -648,7 +648,7 @@ static void gfs2_put_super(struct super_block *sb)
+ 	gfs2_jindex_free(sdp);
+ 	/*  Take apart glock structures and buffer lists  */
+ 	gfs2_gl_hash_clear(sdp);
+-	truncate_inode_pages_final(&sdp->sd_aspace);
++	iput(sdp->sd_inode);
+ 	gfs2_delete_debugfs_file(sdp);
+ 
+ 	gfs2_sys_fs_del(sdp);
+@@ -674,7 +674,7 @@ static int gfs2_sync_fs(struct super_block *sb, int wait)
+ 	return sdp->sd_log_error;
+ }
+ 
+-static int gfs2_do_thaw(struct gfs2_sbd *sdp)
++static int gfs2_do_thaw(struct gfs2_sbd *sdp, enum freeze_holder who)
+ {
+ 	struct super_block *sb = sdp->sd_vfs;
+ 	int error;
+@@ -682,7 +682,7 @@ static int gfs2_do_thaw(struct gfs2_sbd *sdp)
+ 	error = gfs2_freeze_lock_shared(sdp);
+ 	if (error)
+ 		goto fail;
+-	error = thaw_super(sb, FREEZE_HOLDER_USERSPACE);
++	error = thaw_super(sb, who);
+ 	if (!error)
+ 		return 0;
+ 
+@@ -710,7 +710,7 @@ void gfs2_freeze_func(struct work_struct *work)
+ 	gfs2_freeze_unlock(sdp);
+ 	set_bit(SDF_FROZEN, &sdp->sd_flags);
+ 
+-	error = gfs2_do_thaw(sdp);
++	error = gfs2_do_thaw(sdp, FREEZE_HOLDER_USERSPACE);
+ 	if (error)
+ 		goto out;
+ 
+@@ -728,6 +728,7 @@ void gfs2_freeze_func(struct work_struct *work)
+ /**
+  * gfs2_freeze_super - prevent further writes to the filesystem
+  * @sb: the VFS structure for the filesystem
++ * @who: freeze flags
+  *
+  */
+ 
+@@ -744,7 +745,7 @@ static int gfs2_freeze_super(struct super_block *sb, enum freeze_holder who)
+ 	}
+ 
+ 	for (;;) {
+-		error = freeze_super(sb, FREEZE_HOLDER_USERSPACE);
++		error = freeze_super(sb, who);
+ 		if (error) {
+ 			fs_info(sdp, "GFS2: couldn't freeze filesystem: %d\n",
+ 				error);
+@@ -758,7 +759,7 @@ static int gfs2_freeze_super(struct super_block *sb, enum freeze_holder who)
+ 			break;
+ 		}
+ 
+-		error = gfs2_do_thaw(sdp);
++		error = gfs2_do_thaw(sdp, who);
+ 		if (error)
+ 			goto out;
+ 
+@@ -796,6 +797,7 @@ static int gfs2_freeze_fs(struct super_block *sb)
+ /**
+  * gfs2_thaw_super - reallow writes to the filesystem
+  * @sb: the VFS structure for the filesystem
++ * @who: freeze flags
+  *
+  */
+ 
+@@ -814,7 +816,7 @@ static int gfs2_thaw_super(struct super_block *sb, enum freeze_holder who)
+ 	atomic_inc(&sb->s_active);
+ 	gfs2_freeze_unlock(sdp);
+ 
+-	error = gfs2_do_thaw(sdp);
++	error = gfs2_do_thaw(sdp, who);
+ 
+ 	if (!error) {
+ 		clear_bit(SDF_FREEZE_INITIATOR, &sdp->sd_flags);
+@@ -1173,74 +1175,6 @@ static int gfs2_show_options(struct seq_file *s, struct dentry *root)
+ 	return 0;
+ }
+ 
+-static void gfs2_final_release_pages(struct gfs2_inode *ip)
+-{
+-	struct inode *inode = &ip->i_inode;
+-	struct gfs2_glock *gl = ip->i_gl;
+-
+-	if (unlikely(!gl)) {
+-		/* This can only happen during incomplete inode creation. */
+-		BUG_ON(!test_bit(GIF_ALLOC_FAILED, &ip->i_flags));
+-		return;
+-	}
+-
+-	truncate_inode_pages(gfs2_glock2aspace(gl), 0);
+-	truncate_inode_pages(&inode->i_data, 0);
+-
+-	if (atomic_read(&gl->gl_revokes) == 0) {
+-		clear_bit(GLF_LFLUSH, &gl->gl_flags);
+-		clear_bit(GLF_DIRTY, &gl->gl_flags);
+-	}
+-}
+-
+-static int gfs2_dinode_dealloc(struct gfs2_inode *ip)
+-{
+-	struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+-	struct gfs2_rgrpd *rgd;
+-	struct gfs2_holder gh;
+-	int error;
+-
+-	if (gfs2_get_inode_blocks(&ip->i_inode) != 1) {
+-		gfs2_consist_inode(ip);
+-		return -EIO;
+-	}
+-
+-	gfs2_rindex_update(sdp);
+-
+-	error = gfs2_quota_hold(ip, NO_UID_QUOTA_CHANGE, NO_GID_QUOTA_CHANGE);
+-	if (error)
+-		return error;
+-
+-	rgd = gfs2_blk2rgrpd(sdp, ip->i_no_addr, 1);
+-	if (!rgd) {
+-		gfs2_consist_inode(ip);
+-		error = -EIO;
+-		goto out_qs;
+-	}
+-
+-	error = gfs2_glock_nq_init(rgd->rd_gl, LM_ST_EXCLUSIVE,
+-				   LM_FLAG_NODE_SCOPE, &gh);
+-	if (error)
+-		goto out_qs;
+-
+-	error = gfs2_trans_begin(sdp, RES_RG_BIT + RES_STATFS + RES_QUOTA,
+-				 sdp->sd_jdesc->jd_blocks);
+-	if (error)
+-		goto out_rg_gunlock;
+-
+-	gfs2_free_di(rgd, ip);
+-
+-	gfs2_final_release_pages(ip);
+-
+-	gfs2_trans_end(sdp);
+-
+-out_rg_gunlock:
+-	gfs2_glock_dq_uninit(&gh);
+-out_qs:
+-	gfs2_quota_unhold(ip);
+-	return error;
+-}
+-
+ /**
+  * gfs2_glock_put_eventually
+  * @gl:	The glock to put
+@@ -1326,9 +1260,6 @@ static enum evict_behavior evict_should_delete(struct inode *inode,
+ 	struct gfs2_sbd *sdp = sb->s_fs_info;
+ 	int ret;
+ 
+-	if (unlikely(test_bit(GIF_ALLOC_FAILED, &ip->i_flags)))
+-		goto should_delete;
+-
+ 	if (gfs2_holder_initialized(&ip->i_iopen_gh) &&
+ 	    test_bit(GLF_DEFER_DELETE, &ip->i_iopen_gh.gh_gl->gl_flags))
+ 		return EVICT_SHOULD_DEFER_DELETE;
+@@ -1358,7 +1289,6 @@ static enum evict_behavior evict_should_delete(struct inode *inode,
+ 	if (inode->i_nlink)
+ 		return EVICT_SHOULD_SKIP_DELETE;
+ 
+-should_delete:
+ 	if (gfs2_holder_initialized(&ip->i_iopen_gh) &&
+ 	    test_bit(HIF_HOLDER, &ip->i_iopen_gh.gh_iflags))
+ 		return gfs2_upgrade_iopen_glock(inode);
+@@ -1382,7 +1312,7 @@ static int evict_unlinked_inode(struct inode *inode)
+ 	}
+ 
+ 	if (ip->i_eattr) {
+-		ret = gfs2_ea_dealloc(ip);
++		ret = gfs2_ea_dealloc(ip, true);
+ 		if (ret)
+ 			goto out;
+ 	}
+diff --git a/fs/gfs2/sys.c b/fs/gfs2/sys.c
+index ecc699f8d9fcaa..6286183021022a 100644
+--- a/fs/gfs2/sys.c
++++ b/fs/gfs2/sys.c
+@@ -764,7 +764,6 @@ int gfs2_sys_fs_add(struct gfs2_sbd *sdp)
+ 	fs_err(sdp, "error %d adding sysfs files\n", error);
+ 	kobject_put(&sdp->sd_kobj);
+ 	wait_for_completion(&sdp->sd_kobj_unregister);
+-	sb->s_fs_info = NULL;
+ 	return error;
+ }
+ 
+diff --git a/fs/gfs2/trans.c b/fs/gfs2/trans.c
+index f8ae2c666fd609..075f7e9abe47ca 100644
+--- a/fs/gfs2/trans.c
++++ b/fs/gfs2/trans.c
+@@ -226,6 +226,27 @@ void gfs2_trans_add_data(struct gfs2_glock *gl, struct buffer_head *bh)
+ 	unlock_buffer(bh);
+ }
+ 
++void gfs2_trans_add_databufs(struct gfs2_glock *gl, struct folio *folio,
++			     size_t from, size_t len)
++{
++	struct buffer_head *head = folio_buffers(folio);
++	unsigned int bsize = head->b_size;
++	struct buffer_head *bh;
++	size_t to = from + len;
++	size_t start, end;
++
++	for (bh = head, start = 0; bh != head || !start;
++	     bh = bh->b_this_page, start = end) {
++		end = start + bsize;
++		if (end <= from)
++			continue;
++		if (start >= to)
++			break;
++		set_buffer_uptodate(bh);
++		gfs2_trans_add_data(gl, bh);
++	}
++}
++
+ void gfs2_trans_add_meta(struct gfs2_glock *gl, struct buffer_head *bh)
+ {
+ 
+diff --git a/fs/gfs2/trans.h b/fs/gfs2/trans.h
+index f8ce5302280d31..790c55f59e6121 100644
+--- a/fs/gfs2/trans.h
++++ b/fs/gfs2/trans.h
+@@ -42,6 +42,8 @@ int gfs2_trans_begin(struct gfs2_sbd *sdp, unsigned int blocks,
+ 
+ void gfs2_trans_end(struct gfs2_sbd *sdp);
+ void gfs2_trans_add_data(struct gfs2_glock *gl, struct buffer_head *bh);
++void gfs2_trans_add_databufs(struct gfs2_glock *gl, struct folio *folio,
++			     size_t from, size_t len);
+ void gfs2_trans_add_meta(struct gfs2_glock *gl, struct buffer_head *bh);
+ void gfs2_trans_add_revoke(struct gfs2_sbd *sdp, struct gfs2_bufdata *bd);
+ void gfs2_trans_remove_revoke(struct gfs2_sbd *sdp, u64 blkno, unsigned int len);
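
[Editor's note: the new gfs2_trans_add_databufs() above walks the buffer heads backing a folio and journals only those overlapping [from, from + len). Below is a userspace sketch of the same range walk, with printf standing in for set_buffer_uptodate()/gfs2_trans_add_data() and a plain size loop standing in for the circular b_this_page ring.]

#include <stdio.h>
#include <stddef.h>

static void add_databufs(size_t folio_size, size_t bsize,
			 size_t from, size_t len)
{
	size_t to = from + len;

	for (size_t start = 0; start < folio_size; start += bsize) {
		size_t end = start + bsize;

		if (end <= from)
			continue;	/* buffer entirely before the range */
		if (start >= to)
			break;		/* buffer entirely after the range */
		printf("journal buffer [%zu, %zu)\n", start, end);
	}
}

int main(void)
{
	/* expect buffers [512,1024), [1024,1536), [1536,2048) */
	add_databufs(4096, 512, 600, 1000);
	return 0;
}
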
+diff --git a/fs/gfs2/xattr.c b/fs/gfs2/xattr.c
+index 17ae5070a90e67..df9c93de94c793 100644
+--- a/fs/gfs2/xattr.c
++++ b/fs/gfs2/xattr.c
+@@ -1383,7 +1383,7 @@ static int ea_dealloc_indirect(struct gfs2_inode *ip)
+ 	return error;
+ }
+ 
+-static int ea_dealloc_block(struct gfs2_inode *ip)
++static int ea_dealloc_block(struct gfs2_inode *ip, bool initialized)
+ {
+ 	struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+ 	struct gfs2_rgrpd *rgd;
+@@ -1416,7 +1416,7 @@ static int ea_dealloc_block(struct gfs2_inode *ip)
+ 	ip->i_eattr = 0;
+ 	gfs2_add_inode_blocks(&ip->i_inode, -1);
+ 
+-	if (likely(!test_bit(GIF_ALLOC_FAILED, &ip->i_flags))) {
++	if (initialized) {
+ 		error = gfs2_meta_inode_buffer(ip, &dibh);
+ 		if (!error) {
+ 			gfs2_trans_add_meta(ip->i_gl, dibh);
+@@ -1435,11 +1435,12 @@ static int ea_dealloc_block(struct gfs2_inode *ip)
+ /**
+  * gfs2_ea_dealloc - deallocate the extended attribute fork
+  * @ip: the inode
++ * @initialized: xattrs have been initialized
+  *
+  * Returns: errno
+  */
+ 
+-int gfs2_ea_dealloc(struct gfs2_inode *ip)
++int gfs2_ea_dealloc(struct gfs2_inode *ip, bool initialized)
+ {
+ 	int error;
+ 
+@@ -1451,7 +1452,7 @@ int gfs2_ea_dealloc(struct gfs2_inode *ip)
+ 	if (error)
+ 		return error;
+ 
+-	if (likely(!test_bit(GIF_ALLOC_FAILED, &ip->i_flags))) {
++	if (initialized) {
+ 		error = ea_foreach(ip, ea_dealloc_unstuffed, NULL);
+ 		if (error)
+ 			goto out_quota;
+@@ -1463,7 +1464,7 @@ int gfs2_ea_dealloc(struct gfs2_inode *ip)
+ 		}
+ 	}
+ 
+-	error = ea_dealloc_block(ip);
++	error = ea_dealloc_block(ip, initialized);
+ 
+ out_quota:
+ 	gfs2_quota_unhold(ip);
+diff --git a/fs/gfs2/xattr.h b/fs/gfs2/xattr.h
+index eb12eb7e37c194..3c9788e0e13750 100644
+--- a/fs/gfs2/xattr.h
++++ b/fs/gfs2/xattr.h
+@@ -54,7 +54,7 @@ int __gfs2_xattr_set(struct inode *inode, const char *name,
+ 		     const void *value, size_t size,
+ 		     int flags, int type);
+ ssize_t gfs2_listxattr(struct dentry *dentry, char *buffer, size_t size);
+-int gfs2_ea_dealloc(struct gfs2_inode *ip);
++int gfs2_ea_dealloc(struct gfs2_inode *ip, bool initialized);
+ 
+ /* Exported to acl.c */
+ 
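
[Editor's note: the xattr.c/xattr.h hunks above replace a test of the GIF_ALLOC_FAILED inode flag deep in the teardown path with an explicit "initialized" argument threaded down from the caller. A tiny illustrative sketch of that refactor follows; the names and bodies are stand-ins, not the gfs2 functions.]

#include <stdbool.h>
#include <stdio.h>

static int ea_dealloc_block(bool initialized)
{
	if (initialized)
		printf("write back the dinode after freeing the block\n");
	return 0;
}

static int ea_dealloc(bool initialized)
{
	if (initialized)
		printf("strip unstuffed and indirect xattr data\n");
	return ea_dealloc_block(initialized);	/* thread the flag down */
}

int main(void)
{
	ea_dealloc(false);	/* failed inode creation: skip uninitialized parts */
	ea_dealloc(true);	/* normal eviction path */
	return 0;
}

Passing the state explicitly lets the new fail_dealloc_inode path in gfs2_create_inode() say precisely whether gfs2_init_xattr() ever ran, instead of inferring it from ambient inode flags.
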
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index 5b08bd417b2872..0ac474888a02e9 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1675,6 +1675,8 @@ static int iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
+ 		ioend_flags |= IOMAP_IOEND_UNWRITTEN;
+ 	if (wpc->iomap.flags & IOMAP_F_SHARED)
+ 		ioend_flags |= IOMAP_IOEND_SHARED;
++	if (folio_test_dropbehind(folio))
++		ioend_flags |= IOMAP_IOEND_DONTCACHE;
+ 	if (pos == wpc->iomap.offset && (wpc->iomap.flags & IOMAP_F_BOUNDARY))
+ 		ioend_flags |= IOMAP_IOEND_BOUNDARY;
+ 
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index fc70d72c3fe805..43487fa83eaea1 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -1580,8 +1580,9 @@ void kernfs_break_active_protection(struct kernfs_node *kn)
+  * invoked before finishing the kernfs operation.  Note that while this
+  * function restores the active reference, it doesn't and can't actually
+  * restore the active protection - @kn may already or be in the process of
+- * being removed.  Once kernfs_break_active_protection() is invoked, that
+- * protection is irreversibly gone for the kernfs operation instance.
++ * being drained and removed.  Once kernfs_break_active_protection() is
++ * invoked, that protection is irreversibly gone for the kernfs operation
++ * instance.
+  *
+  * While this function may be called at any point after
+  * kernfs_break_active_protection() is invoked, its most useful location
+diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
+index 66fe8fe41f0605..a6c692cac61659 100644
+--- a/fs/kernfs/file.c
++++ b/fs/kernfs/file.c
+@@ -778,8 +778,9 @@ bool kernfs_should_drain_open_files(struct kernfs_node *kn)
+ 	/*
+ 	 * @kn being deactivated guarantees that @kn->attr.open can't change
+ 	 * beneath us making the lockless test below safe.
++	 * Callers that have passed through kernfs_unbreak_active_protection()
++	 * may be counted in kn->active by now; do not WARN_ON because of them.
+ 	 */
+-	WARN_ON_ONCE(atomic_read(&kn->active) != KN_DEACTIVATED_BIAS);
+ 
+ 	rcu_read_lock();
+ 	on = rcu_dereference(kn->attr.open);
+diff --git a/fs/mount.h b/fs/mount.h
+index 7aecf2a6047232..ad7173037924a8 100644
+--- a/fs/mount.h
++++ b/fs/mount.h
+@@ -7,10 +7,6 @@
+ 
+ extern struct list_head notify_list;
+ 
+-typedef __u32 __bitwise mntns_flags_t;
+-
+-#define MNTNS_PROPAGATING	((__force mntns_flags_t)(1 << 0))
+-
+ struct mnt_namespace {
+ 	struct ns_common	ns;
+ 	struct mount *	root;
+@@ -37,7 +33,6 @@ struct mnt_namespace {
+ 	struct rb_node		mnt_ns_tree_node; /* node in the mnt_ns_tree */
+ 	struct list_head	mnt_ns_list; /* entry in the sequential list of mounts namespace */
+ 	refcount_t		passive; /* number references not pinning @mounts */
+-	mntns_flags_t		mntns_flags;
+ } __randomize_layout;
+ 
+ struct mnt_pcp {
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 1b466c54a357d1..d6ac7e533b0212 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -2424,7 +2424,7 @@ void drop_collected_mounts(struct vfsmount *mnt)
+ 	namespace_unlock();
+ }
+ 
+-bool has_locked_children(struct mount *mnt, struct dentry *dentry)
++static bool __has_locked_children(struct mount *mnt, struct dentry *dentry)
+ {
+ 	struct mount *child;
+ 
+@@ -2438,6 +2438,16 @@ bool has_locked_children(struct mount *mnt, struct dentry *dentry)
+ 	return false;
+ }
+ 
++bool has_locked_children(struct mount *mnt, struct dentry *dentry)
++{
++	bool res;
++
++	read_seqlock_excl(&mount_lock);
++	res = __has_locked_children(mnt, dentry);
++	read_sequnlock_excl(&mount_lock);
++	return res;
++}
++
+ /*
+  * Check that there aren't references to earlier/same mount namespaces in the
+  * specified subtree.  Such references can act as pins for mount namespaces
+@@ -2482,23 +2492,27 @@ struct vfsmount *clone_private_mount(const struct path *path)
+ 	if (IS_MNT_UNBINDABLE(old_mnt))
+ 		return ERR_PTR(-EINVAL);
+ 
+-	if (mnt_has_parent(old_mnt)) {
+-		if (!check_mnt(old_mnt))
+-			return ERR_PTR(-EINVAL);
+-	} else {
+-		if (!is_mounted(&old_mnt->mnt))
+-			return ERR_PTR(-EINVAL);
+-
+-		/* Make sure this isn't something purely kernel internal. */
+-		if (!is_anon_ns(old_mnt->mnt_ns))
++	/*
++	 * Make sure the source mount is acceptable.
++	 * Anything mounted in our mount namespace is allowed.
++	 * Otherwise, it must be the root of an anonymous mount
++	 * namespace, and we need to make sure no namespace
++	 * loops get created.
++	 */
++	if (!check_mnt(old_mnt)) {
++		if (!is_mounted(&old_mnt->mnt) ||
++			!is_anon_ns(old_mnt->mnt_ns) ||
++			mnt_has_parent(old_mnt))
+ 			return ERR_PTR(-EINVAL);
+ 
+-		/* Make sure we don't create mount namespace loops. */
+ 		if (!check_for_nsfs_mounts(old_mnt))
+ 			return ERR_PTR(-EINVAL);
+ 	}
+ 
+-	if (has_locked_children(old_mnt, path->dentry))
++	if (!ns_capable(old_mnt->mnt_ns->user_ns, CAP_SYS_ADMIN))
++		return ERR_PTR(-EPERM);
++
++	if (__has_locked_children(old_mnt, path->dentry))
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	new_mnt = clone_mnt(old_mnt, path->dentry, CL_PRIVATE);
+@@ -2944,6 +2958,10 @@ static int do_change_type(struct path *path, int ms_flags)
+ 		return -EINVAL;
+ 
+ 	namespace_lock();
++	if (!check_mnt(mnt)) {
++		err = -EINVAL;
++		goto out_unlock;
++	}
+ 	if (type == MS_SHARED) {
+ 		err = invent_group_ids(mnt, recurse);
+ 		if (err)
+@@ -3035,7 +3053,7 @@ static struct mount *__do_loopback(struct path *old_path, int recurse)
+ 	if (!may_copy_tree(old_path))
+ 		return mnt;
+ 
+-	if (!recurse && has_locked_children(old, old_path->dentry))
++	if (!recurse && __has_locked_children(old, old_path->dentry))
+ 		return mnt;
+ 
+ 	if (recurse)
+@@ -3428,7 +3446,7 @@ static int do_set_group(struct path *from_path, struct path *to_path)
+ 		goto out;
+ 
+ 	/* From mount should not have locked children in place of To's root */
+-	if (has_locked_children(from, to->mnt.mnt_root))
++	if (__has_locked_children(from, to->mnt.mnt_root))
+ 		goto out;
+ 
+ 	/* Setting sharing groups is only allowed on private mounts */
+@@ -3442,7 +3460,7 @@ static int do_set_group(struct path *from_path, struct path *to_path)
+ 	if (IS_MNT_SLAVE(from)) {
+ 		struct mount *m = from->mnt_master;
+ 
+-		list_add(&to->mnt_slave, &m->mnt_slave_list);
++		list_add(&to->mnt_slave, &from->mnt_slave);
+ 		to->mnt_master = m;
+ 	}
+ 
+@@ -3467,18 +3485,25 @@ static int do_set_group(struct path *from_path, struct path *to_path)
+  * Check if path is overmounted, i.e., if there's a mount on top of
+  * @path->mnt with @path->dentry as mountpoint.
+  *
+- * Context: This function expects namespace_lock() to be held.
++ * Context: namespace_sem must be held at least shared.
++ * MUST NOT be called under lock_mount_hash() (there one should just
++ * call __lookup_mnt() and check if it returns NULL).
+  * Return: If path is overmounted true is returned, false if not.
+  */
+ static inline bool path_overmounted(const struct path *path)
+ {
++	unsigned seq = read_seqbegin(&mount_lock);
++	bool no_child;
++
+ 	rcu_read_lock();
+-	if (unlikely(__lookup_mnt(path->mnt, path->dentry))) {
+-		rcu_read_unlock();
+-		return true;
+-	}
++	no_child = !__lookup_mnt(path->mnt, path->dentry);
+ 	rcu_read_unlock();
+-	return false;
++	if (need_seqretry(&mount_lock, seq)) {
++		read_seqlock_excl(&mount_lock);
++		no_child = !__lookup_mnt(path->mnt, path->dentry);
++		read_sequnlock_excl(&mount_lock);
++	}
++	return unlikely(!no_child);
+ }
+ 
+ /**
+@@ -3637,46 +3662,41 @@ static int do_move_mount(struct path *old_path,
+ 	ns = old->mnt_ns;
+ 
+ 	err = -EINVAL;
+-	if (!may_use_mount(p))
+-		goto out;
+-
+ 	/* The thing moved must be mounted... */
+ 	if (!is_mounted(&old->mnt))
+ 		goto out;
+ 
+-	/* ... and either ours or the root of anon namespace */
+-	if (!(attached ? check_mnt(old) : is_anon_ns(ns)))
+-		goto out;
+-
+-	if (is_anon_ns(ns)) {
++	if (check_mnt(old)) {
++		/* if the source is in our namespace... */
++		/* ... it should be detachable from parent */
++		if (!mnt_has_parent(old) || IS_MNT_LOCKED(old))
++			goto out;
++		/* ... and the target should be in our namespace */
++		if (!check_mnt(p))
++			goto out;
++	} else {
+ 		/*
+-		 * Ending up with two files referring to the root of the
+-		 * same anonymous mount namespace would cause an error
+-		 * as this would mean trying to move the same mount
+-		 * twice into the mount tree which would be rejected
+-		 * later. But be explicit about it right here.
++		 * otherwise the source must be the root of some anon namespace.
++		 * AV: check for mount being root of an anon namespace is worth
++		 * an inlined predicate...
+ 		 */
+-		if ((is_anon_ns(p->mnt_ns) && ns == p->mnt_ns))
++		if (!is_anon_ns(ns) || mnt_has_parent(old))
+ 			goto out;
+-
+ 		/*
+-		 * If this is an anonymous mount tree ensure that mount
+-		 * propagation can detect mounts that were just
+-		 * propagated to the target mount tree so we don't
+-		 * propagate onto them.
++		 * Bail out early if the target is within the same namespace -
++		 * subsequent checks would've rejected that, but they lose
++		 * some corner cases if we check it early.
+ 		 */
+-		ns->mntns_flags |= MNTNS_PROPAGATING;
+-	} else if (is_anon_ns(p->mnt_ns)) {
++		if (ns == p->mnt_ns)
++			goto out;
+ 		/*
+-		 * Don't allow moving an attached mount tree to an
+-		 * anonymous mount tree.
++		 * Target should be either in our namespace or in an acceptable
++		 * anon namespace, sensu check_anonymous_mnt().
+ 		 */
+-		goto out;
++		if (!may_use_mount(p))
++			goto out;
+ 	}
+ 
+-	if (old->mnt.mnt_flags & MNT_LOCKED)
+-		goto out;
+-
+ 	if (!path_mounted(old_path))
+ 		goto out;
+ 
+@@ -3722,8 +3742,6 @@ static int do_move_mount(struct path *old_path,
+ 	if (attached)
+ 		put_mountpoint(old_mp);
+ out:
+-	if (is_anon_ns(ns))
+-		ns->mntns_flags &= ~MNTNS_PROPAGATING;
+ 	unlock_mount(mp);
+ 	if (!err) {
+ 		if (attached) {
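
[Editor's note: the reworked path_overmounted() above does a lockless __lookup_mnt() under a mount_lock sequence read and falls back to the exclusive lock only when a writer raced. Here is a userspace model of that control flow, using C11 atomics and a pthread mutex as stand-ins for the kernel seqlock; a real seqlock handles the writer window and memory ordering more carefully than this sketch.]

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t mount_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_uint mount_seq;	/* even = quiescent, odd = writer active */
static bool child_present;	/* stands in for the __lookup_mnt() result */

static bool lookup_child(void) { return child_present; }

static bool path_overmounted(void)
{
	unsigned seq = atomic_load(&mount_seq);
	bool no_child = !lookup_child();	/* optimistic, lockless */

	if ((seq & 1) || atomic_load(&mount_seq) != seq) {
		/* Raced with a writer: redo under the exclusive lock. */
		pthread_mutex_lock(&mount_lock);
		no_child = !lookup_child();
		pthread_mutex_unlock(&mount_lock);
	}
	return !no_child;
}

int main(void)
{
	child_present = true;
	printf("overmounted: %d\n", path_overmounted());
	return 0;
}
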
+diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
+index 0d1b6d35ff3b80..fd4619275801be 100644
+--- a/fs/netfs/buffered_read.c
++++ b/fs/netfs/buffered_read.c
+@@ -262,9 +262,9 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
+ 				if (ret < 0) {
+ 					subreq->error = ret;
+ 					/* Not queued - release both refs. */
+-					netfs_put_subrequest(subreq, false,
++					netfs_put_subrequest(subreq,
+ 							     netfs_sreq_trace_put_cancel);
+-					netfs_put_subrequest(subreq, false,
++					netfs_put_subrequest(subreq,
+ 							     netfs_sreq_trace_put_cancel);
+ 					break;
+ 				}
+@@ -297,8 +297,8 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
+ 			subreq->error = ret;
+ 			trace_netfs_sreq(subreq, netfs_sreq_trace_cancel);
+ 			/* Not queued - release both refs. */
+-			netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
+-			netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
++			netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
++			netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
+ 			break;
+ 		}
+ 		size -= slice;
+@@ -312,7 +312,7 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
+ 	if (unlikely(size > 0)) {
+ 		smp_wmb(); /* Write lists before ALL_QUEUED. */
+ 		set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
+-		netfs_wake_read_collector(rreq);
++		netfs_wake_collector(rreq);
+ 	}
+ 
+ 	/* Defer error return as we may need to wait for outstanding I/O. */
+@@ -365,12 +365,10 @@ void netfs_readahead(struct readahead_control *ractl)
+ 		goto cleanup_free;
+ 	netfs_read_to_pagecache(rreq);
+ 
+-	netfs_put_request(rreq, true, netfs_rreq_trace_put_return);
+-	return;
++	return netfs_put_request(rreq, netfs_rreq_trace_put_return);
+ 
+ cleanup_free:
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_failed);
+-	return;
++	return netfs_put_request(rreq, netfs_rreq_trace_put_failed);
+ }
+ EXPORT_SYMBOL(netfs_readahead);
+ 
+@@ -470,11 +468,11 @@ static int netfs_read_gaps(struct file *file, struct folio *folio)
+ 		folio_mark_uptodate(folio);
+ 	}
+ 	folio_unlock(folio);
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(rreq, netfs_rreq_trace_put_return);
+ 	return ret < 0 ? ret : 0;
+ 
+ discard:
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_discard);
++	netfs_put_request(rreq, netfs_rreq_trace_put_discard);
+ alloc_error:
+ 	folio_unlock(folio);
+ 	return ret;
+@@ -530,11 +528,11 @@ int netfs_read_folio(struct file *file, struct folio *folio)
+ 
+ 	netfs_read_to_pagecache(rreq);
+ 	ret = netfs_wait_for_read(rreq);
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(rreq, netfs_rreq_trace_put_return);
+ 	return ret < 0 ? ret : 0;
+ 
+ discard:
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_discard);
++	netfs_put_request(rreq, netfs_rreq_trace_put_discard);
+ alloc_error:
+ 	folio_unlock(folio);
+ 	return ret;
+@@ -689,7 +687,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
+ 	ret = netfs_wait_for_read(rreq);
+ 	if (ret < 0)
+ 		goto error;
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(rreq, netfs_rreq_trace_put_return);
+ 
+ have_folio:
+ 	ret = folio_wait_private_2_killable(folio);
+@@ -701,7 +699,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
+ 	return 0;
+ 
+ error_put:
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_failed);
++	netfs_put_request(rreq, netfs_rreq_trace_put_failed);
+ error:
+ 	if (folio) {
+ 		folio_unlock(folio);
+@@ -752,11 +750,11 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
+ 
+ 	netfs_read_to_pagecache(rreq);
+ 	ret = netfs_wait_for_read(rreq);
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(rreq, netfs_rreq_trace_put_return);
+ 	return ret < 0 ? ret : 0;
+ 
+ error_put:
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_discard);
++	netfs_put_request(rreq, netfs_rreq_trace_put_discard);
+ error:
+ 	_leave(" = %d", ret);
+ 	return ret;
+diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
+index b4826360a41112..dbb544e183d13d 100644
+--- a/fs/netfs/buffered_write.c
++++ b/fs/netfs/buffered_write.c
+@@ -386,7 +386,7 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
+ 		wbc_detach_inode(&wbc);
+ 		if (ret2 == -EIOCBQUEUED)
+ 			return ret2;
+-		if (ret == 0)
++		if (ret == 0 && ret2 < 0)
+ 			ret = ret2;
+ 	}
+ 
+diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
+index 5e3f0aeb51f31f..9902766195d7b2 100644
+--- a/fs/netfs/direct_read.c
++++ b/fs/netfs/direct_read.c
+@@ -85,7 +85,7 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
+ 		if (rreq->netfs_ops->prepare_read) {
+ 			ret = rreq->netfs_ops->prepare_read(subreq);
+ 			if (ret < 0) {
+-				netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
++				netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
+ 				break;
+ 			}
+ 		}
+@@ -103,7 +103,7 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
+ 		rreq->netfs_ops->issue_read(subreq);
+ 
+ 		if (test_bit(NETFS_RREQ_PAUSE, &rreq->flags))
+-			netfs_wait_for_pause(rreq);
++			netfs_wait_for_paused_read(rreq);
+ 		if (test_bit(NETFS_RREQ_FAILED, &rreq->flags))
+ 			break;
+ 		if (test_bit(NETFS_RREQ_BLOCKED, &rreq->flags) &&
+@@ -115,7 +115,7 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
+ 	if (unlikely(size > 0)) {
+ 		smp_wmb(); /* Write lists before ALL_QUEUED. */
+ 		set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
+-		netfs_wake_read_collector(rreq);
++		netfs_wake_collector(rreq);
+ 	}
+ 
+ 	return ret;
+@@ -144,7 +144,7 @@ static ssize_t netfs_unbuffered_read(struct netfs_io_request *rreq, bool sync)
+ 	ret = netfs_dispatch_unbuffered_reads(rreq);
+ 
+ 	if (!rreq->submitted) {
+-		netfs_put_request(rreq, false, netfs_rreq_trace_put_no_submit);
++		netfs_put_request(rreq, netfs_rreq_trace_put_no_submit);
+ 		inode_dio_end(rreq->inode);
+ 		ret = 0;
+ 		goto out;
+@@ -188,7 +188,8 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i
+ 
+ 	rreq = netfs_alloc_request(iocb->ki_filp->f_mapping, iocb->ki_filp,
+ 				   iocb->ki_pos, orig_count,
+-				   NETFS_DIO_READ);
++				   iocb->ki_flags & IOCB_DIRECT ?
++				   NETFS_DIO_READ : NETFS_UNBUFFERED_READ);
+ 	if (IS_ERR(rreq))
+ 		return PTR_ERR(rreq);
+ 
+@@ -236,7 +237,7 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i
+ 	}
+ 
+ out:
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(rreq, netfs_rreq_trace_put_return);
+ 	if (ret > 0)
+ 		orig_count -= ret;
+ 	return ret;
+diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
+index 42ce53cc216e9d..fa9a5bf3c6d512 100644
+--- a/fs/netfs/direct_write.c
++++ b/fs/netfs/direct_write.c
+@@ -87,6 +87,8 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
+ 	}
+ 
+ 	__set_bit(NETFS_RREQ_USE_IO_ITER, &wreq->flags);
++	if (async)
++		__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);
+ 
+ 	/* Copy the data into the bounce buffer and encrypt it. */
+ 	// TODO
+@@ -105,19 +107,15 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
+ 
+ 	if (!async) {
+ 		trace_netfs_rreq(wreq, netfs_rreq_trace_wait_ip);
+-		wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS,
+-			    TASK_UNINTERRUPTIBLE);
+-		ret = wreq->error;
+-		if (ret == 0) {
+-			ret = wreq->transferred;
++		ret = netfs_wait_for_write(wreq);
++		if (ret > 0)
+ 			iocb->ki_pos += ret;
+-		}
+ 	} else {
+ 		ret = -EIOCBQUEUED;
+ 	}
+ 
+ out:
+-	netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(wreq, netfs_rreq_trace_put_return);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(netfs_unbuffered_write_iter_locked);
+diff --git a/fs/netfs/fscache_io.c b/fs/netfs/fscache_io.c
+index b1722a82c03d3d..e4308457633ca3 100644
+--- a/fs/netfs/fscache_io.c
++++ b/fs/netfs/fscache_io.c
+@@ -192,8 +192,7 @@ EXPORT_SYMBOL(__fscache_clear_page_bits);
+ /*
+  * Deal with the completion of writing the data to the cache.
+  */
+-static void fscache_wreq_done(void *priv, ssize_t transferred_or_error,
+-			      bool was_async)
++static void fscache_wreq_done(void *priv, ssize_t transferred_or_error)
+ {
+ 	struct fscache_write_request *wreq = priv;
+ 
+@@ -202,8 +201,7 @@ static void fscache_wreq_done(void *priv, ssize_t transferred_or_error,
+ 					wreq->set_bits);
+ 
+ 	if (wreq->term_func)
+-		wreq->term_func(wreq->term_func_priv, transferred_or_error,
+-				was_async);
++		wreq->term_func(wreq->term_func_priv, transferred_or_error);
+ 	fscache_end_operation(&wreq->cache_resources);
+ 	kfree(wreq);
+ }
+@@ -255,14 +253,14 @@ void __fscache_write_to_cache(struct fscache_cookie *cookie,
+ 	return;
+ 
+ abandon_end:
+-	return fscache_wreq_done(wreq, ret, false);
++	return fscache_wreq_done(wreq, ret);
+ abandon_free:
+ 	kfree(wreq);
+ abandon:
+ 	if (using_pgpriv2)
+ 		fscache_clear_page_bits(mapping, start, len, cond);
+ 	if (term_func)
+-		term_func(term_func_priv, ret, false);
++		term_func(term_func_priv, ret);
+ }
+ EXPORT_SYMBOL(__fscache_write_to_cache);
+ 
+diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
+index 1c4f953c3d683b..e2ee9183392b93 100644
+--- a/fs/netfs/internal.h
++++ b/fs/netfs/internal.h
+@@ -23,7 +23,7 @@
+ /*
+  * buffered_read.c
+  */
+-void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error, bool was_async);
++void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error);
+ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
+ 			     size_t offset, size_t len);
+ 
+@@ -62,6 +62,14 @@ static inline void netfs_proc_del_rreq(struct netfs_io_request *rreq) {}
+ struct folio_queue *netfs_buffer_make_space(struct netfs_io_request *rreq,
+ 					    enum netfs_folioq_trace trace);
+ void netfs_reset_iter(struct netfs_io_subrequest *subreq);
++void netfs_wake_collector(struct netfs_io_request *rreq);
++void netfs_subreq_clear_in_progress(struct netfs_io_subrequest *subreq);
++void netfs_wait_for_in_progress_stream(struct netfs_io_request *rreq,
++				       struct netfs_io_stream *stream);
++ssize_t netfs_wait_for_read(struct netfs_io_request *rreq);
++ssize_t netfs_wait_for_write(struct netfs_io_request *rreq);
++void netfs_wait_for_paused_read(struct netfs_io_request *rreq);
++void netfs_wait_for_paused_write(struct netfs_io_request *rreq);
+ 
+ /*
+  * objects.c
+@@ -71,9 +79,8 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
+ 					     loff_t start, size_t len,
+ 					     enum netfs_io_origin origin);
+ void netfs_get_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace what);
+-void netfs_clear_subrequests(struct netfs_io_request *rreq, bool was_async);
+-void netfs_put_request(struct netfs_io_request *rreq, bool was_async,
+-		       enum netfs_rreq_ref_trace what);
++void netfs_clear_subrequests(struct netfs_io_request *rreq);
++void netfs_put_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace what);
+ struct netfs_io_subrequest *netfs_alloc_subrequest(struct netfs_io_request *rreq);
+ 
+ static inline void netfs_see_request(struct netfs_io_request *rreq,
+@@ -92,11 +99,9 @@ static inline void netfs_see_subrequest(struct netfs_io_subrequest *subreq,
+ /*
+  * read_collect.c
+  */
++bool netfs_read_collection(struct netfs_io_request *rreq);
+ void netfs_read_collection_worker(struct work_struct *work);
+-void netfs_wake_read_collector(struct netfs_io_request *rreq);
+-void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error, bool was_async);
+-ssize_t netfs_wait_for_read(struct netfs_io_request *rreq);
+-void netfs_wait_for_pause(struct netfs_io_request *rreq);
++void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error);
+ 
+ /*
+  * read_pgpriv2.c
+@@ -176,8 +181,8 @@ static inline void netfs_stat_d(atomic_t *stat)
+  * write_collect.c
+  */
+ int netfs_folio_written_back(struct folio *folio);
++bool netfs_write_collection(struct netfs_io_request *wreq);
+ void netfs_write_collection_worker(struct work_struct *work);
+-void netfs_wake_write_collector(struct netfs_io_request *wreq, bool was_async);
+ 
+ /*
+  * write_issue.c
+@@ -198,8 +203,8 @@ struct netfs_io_request *netfs_begin_writethrough(struct kiocb *iocb, size_t len
+ int netfs_advance_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc,
+ 			       struct folio *folio, size_t copied, bool to_page_end,
+ 			       struct folio **writethrough_cache);
+-int netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc,
+-			   struct folio *writethrough_cache);
++ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc,
++			       struct folio *writethrough_cache);
+ int netfs_unbuffered_write(struct netfs_io_request *wreq, bool may_wait, size_t len);
+ 
+ /*
+@@ -254,6 +259,21 @@ static inline void netfs_put_group_many(struct netfs_group *netfs_group, int nr)
+ 		netfs_group->free(netfs_group);
+ }
+ 
++/*
++ * Clear and wake up a NETFS_RREQ_* flag bit on a request.
++ */
++static inline void netfs_wake_rreq_flag(struct netfs_io_request *rreq,
++					unsigned int rreq_flag,
++					enum netfs_rreq_trace trace)
++{
++	if (test_bit(rreq_flag, &rreq->flags)) {
++		trace_netfs_rreq(rreq, trace);
++		clear_bit_unlock(rreq_flag, &rreq->flags);
++		smp_mb__after_atomic(); /* Set flag before task state */
++		wake_up(&rreq->waitq);
++	}
++}
++
+ /*
+  * fscache-cache.c
+  */
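
[Editor's note: the new netfs_wake_rreq_flag() helper above pairs clear_bit_unlock() with smp_mb__after_atomic() so the flag clear is ordered before the waiter's task-state check. Below is a userspace analogue using a condition variable, which gets the equivalent ordering through the mutex; the struct is a stand-in, not the kernel's netfs_io_request.]

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct request {
	pthread_mutex_t lock;
	pthread_cond_t waitq;
	bool in_progress;
};

static void wake_flag(struct request *rreq)
{
	pthread_mutex_lock(&rreq->lock);
	if (rreq->in_progress) {
		rreq->in_progress = false;		/* like clear_bit_unlock() */
		pthread_cond_broadcast(&rreq->waitq);	/* like wake_up(&rreq->waitq) */
	}
	pthread_mutex_unlock(&rreq->lock);
}

int main(void)
{
	struct request rreq = {
		PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, true };
	wake_flag(&rreq);
	printf("in_progress=%d\n", rreq.in_progress);
	return 0;
}
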
+diff --git a/fs/netfs/main.c b/fs/netfs/main.c
+index 70ecc8f5f21034..3db401d269e7b3 100644
+--- a/fs/netfs/main.c
++++ b/fs/netfs/main.c
+@@ -39,6 +39,7 @@ static const char *netfs_origins[nr__netfs_io_origin] = {
+ 	[NETFS_READ_GAPS]		= "RG",
+ 	[NETFS_READ_SINGLE]		= "R1",
+ 	[NETFS_READ_FOR_WRITE]		= "RW",
++	[NETFS_UNBUFFERED_READ]		= "UR",
+ 	[NETFS_DIO_READ]		= "DR",
+ 	[NETFS_WRITEBACK]		= "WB",
+ 	[NETFS_WRITEBACK_SINGLE]	= "W1",
+diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
+index 7099aa07737ac0..43b67a28a8fa07 100644
+--- a/fs/netfs/misc.c
++++ b/fs/netfs/misc.c
+@@ -313,3 +313,222 @@ bool netfs_release_folio(struct folio *folio, gfp_t gfp)
+ 	return true;
+ }
+ EXPORT_SYMBOL(netfs_release_folio);
++
++/*
++ * Wake the collection work item.
++ */
++void netfs_wake_collector(struct netfs_io_request *rreq)
++{
++	if (test_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &rreq->flags) &&
++	    !test_bit(NETFS_RREQ_RETRYING, &rreq->flags)) {
++		queue_work(system_unbound_wq, &rreq->work);
++	} else {
++		trace_netfs_rreq(rreq, netfs_rreq_trace_wake_queue);
++		wake_up(&rreq->waitq);
++	}
++}
++
++/*
++ * Mark a subrequest as no longer being in progress and, if need be, wake the
++ * collector.
++ */
++void netfs_subreq_clear_in_progress(struct netfs_io_subrequest *subreq)
++{
++	struct netfs_io_request *rreq = subreq->rreq;
++	struct netfs_io_stream *stream = &rreq->io_streams[subreq->stream_nr];
++
++	clear_bit_unlock(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
++	smp_mb__after_atomic(); /* Clear IN_PROGRESS before task state */
++
++	/* If we are at the head of the queue, wake up the collector. */
++	if (list_is_first(&subreq->rreq_link, &stream->subrequests) ||
++	    test_bit(NETFS_RREQ_RETRYING, &rreq->flags))
++		netfs_wake_collector(rreq);
++}
++
++/*
++ * Wait for all outstanding I/O in a stream to quiesce.
++ */
++void netfs_wait_for_in_progress_stream(struct netfs_io_request *rreq,
++				       struct netfs_io_stream *stream)
++{
++	struct netfs_io_subrequest *subreq;
++	DEFINE_WAIT(myself);
++
++	list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
++		if (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags))
++			continue;
++
++		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_queue);
++		for (;;) {
++			prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE);
++
++			if (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags))
++				break;
++
++			trace_netfs_sreq(subreq, netfs_sreq_trace_wait_for);
++			schedule();
++			trace_netfs_rreq(rreq, netfs_rreq_trace_woke_queue);
++		}
++	}
++
++	finish_wait(&rreq->waitq, &myself);
++}
++
++/*
++ * Perform collection in app thread if not offloaded to workqueue.
++ */
++static int netfs_collect_in_app(struct netfs_io_request *rreq,
++				bool (*collector)(struct netfs_io_request *rreq))
++{
++	bool need_collect = false, inactive = true;
++
++	for (int i = 0; i < NR_IO_STREAMS; i++) {
++		struct netfs_io_subrequest *subreq;
++		struct netfs_io_stream *stream = &rreq->io_streams[i];
++
++		if (!stream->active)
++			continue;
++		inactive = false;
++		trace_netfs_collect_stream(rreq, stream);
++		subreq = list_first_entry_or_null(&stream->subrequests,
++						  struct netfs_io_subrequest,
++						  rreq_link);
++		if (subreq &&
++		    (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags) ||
++		     test_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags))) {
++			need_collect = true;
++			break;
++		}
++	}
++
++	if (!need_collect && !inactive)
++		return 0; /* Sleep */
++
++	__set_current_state(TASK_RUNNING);
++	if (collector(rreq)) {
++		/* Drop the ref from the NETFS_RREQ_IN_PROGRESS flag. */
++		netfs_put_request(rreq, netfs_rreq_trace_put_work_ip);
++		return 1; /* Done */
++	}
++
++	if (inactive) {
++		WARN(true, "Failed to collect inactive req R=%08x\n",
++		     rreq->debug_id);
++		cond_resched();
++	}
++	return 2; /* Again */
++}
++
++/*
++ * Wait for a request to complete, successfully or otherwise.
++ */
++static ssize_t netfs_wait_for_request(struct netfs_io_request *rreq,
++				      bool (*collector)(struct netfs_io_request *rreq))
++{
++	DEFINE_WAIT(myself);
++	ssize_t ret;
++
++	for (;;) {
++		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_queue);
++		prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE);
++
++		if (!test_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &rreq->flags)) {
++			switch (netfs_collect_in_app(rreq, collector)) {
++			case 0:
++				break;
++			case 1:
++				goto all_collected;
++			case 2:
++				continue;
++			}
++		}
++
++		if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags))
++			break;
++
++		schedule();
++		trace_netfs_rreq(rreq, netfs_rreq_trace_woke_queue);
++	}
++
++all_collected:
++	finish_wait(&rreq->waitq, &myself);
++
++	ret = rreq->error;
++	if (ret == 0) {
++		ret = rreq->transferred;
++		switch (rreq->origin) {
++		case NETFS_DIO_READ:
++		case NETFS_DIO_WRITE:
++		case NETFS_READ_SINGLE:
++		case NETFS_UNBUFFERED_READ:
++		case NETFS_UNBUFFERED_WRITE:
++			break;
++		default:
++			if (rreq->submitted < rreq->len) {
++				trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read);
++				ret = -EIO;
++			}
++			break;
++		}
++	}
++
++	return ret;
++}
++
++ssize_t netfs_wait_for_read(struct netfs_io_request *rreq)
++{
++	return netfs_wait_for_request(rreq, netfs_read_collection);
++}
++
++ssize_t netfs_wait_for_write(struct netfs_io_request *rreq)
++{
++	return netfs_wait_for_request(rreq, netfs_write_collection);
++}
++
++/*
++ * Wait for a paused operation to unpause or complete in some manner.
++ */
++static void netfs_wait_for_pause(struct netfs_io_request *rreq,
++				 bool (*collector)(struct netfs_io_request *rreq))
++{
++	DEFINE_WAIT(myself);
++
++	trace_netfs_rreq(rreq, netfs_rreq_trace_wait_pause);
++
++	for (;;) {
++		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_queue);
++		prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE);
++
++		if (!test_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &rreq->flags)) {
++			switch (netfs_collect_in_app(rreq, collector)) {
++			case 0:
++				break;
++			case 1:
++				goto all_collected;
++			case 2:
++				continue;
++			}
++		}
++
++		if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags) ||
++		    !test_bit(NETFS_RREQ_PAUSE, &rreq->flags))
++			break;
++
++		schedule();
++		trace_netfs_rreq(rreq, netfs_rreq_trace_woke_queue);
++	}
++
++all_collected:
++	finish_wait(&rreq->waitq, &myself);
++}
++
++void netfs_wait_for_paused_read(struct netfs_io_request *rreq)
++{
++	return netfs_wait_for_pause(rreq, netfs_read_collection);
++}
++
++void netfs_wait_for_paused_write(struct netfs_io_request *rreq)
++{
++	return netfs_wait_for_pause(rreq, netfs_write_collection);
++}
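
[Editor's note: netfs_wait_for_request() above loops, running result collection in the waiting thread when collection has not been offloaded to a workqueue, and sleeping otherwise. A condensed userspace model of that loop follows, with a trivially completing collector standing in for netfs_read_collection()/netfs_write_collection(); the struct and fields are illustrative stand-ins.]

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct request {
	pthread_mutex_t lock;
	pthread_cond_t waitq;
	bool offload_collection;
	bool in_progress;
	long transferred;
};

static bool collect(struct request *rreq)
{
	rreq->transferred = 4096;	/* pretend all I/O completed */
	rreq->in_progress = false;
	return true;			/* done */
}

static long wait_for_request(struct request *rreq)
{
	pthread_mutex_lock(&rreq->lock);
	for (;;) {
		if (!rreq->offload_collection && collect(rreq))
			break;		/* collected in the app thread */
		if (!rreq->in_progress)
			break;		/* worker finished for us */
		pthread_cond_wait(&rreq->waitq, &rreq->lock);
	}
	pthread_mutex_unlock(&rreq->lock);
	return rreq->transferred;
}

int main(void)
{
	struct request rreq = {
		PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER,
		false, true, 0 };
	printf("transferred=%ld\n", wait_for_request(&rreq));
	return 0;
}
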
+diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
+index dc6b41ef18b097..31fa0c81e2a43e 100644
+--- a/fs/netfs/objects.c
++++ b/fs/netfs/objects.c
+@@ -10,6 +10,8 @@
+ #include <linux/delay.h>
+ #include "internal.h"
+ 
++static void netfs_free_request(struct work_struct *work);
++
+ /*
+  * Allocate an I/O request and initialise it.
+  */
+@@ -34,6 +36,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
+ 	}
+ 
+ 	memset(rreq, 0, kmem_cache_size(cache));
++	INIT_WORK(&rreq->cleanup_work, netfs_free_request);
+ 	rreq->start	= start;
+ 	rreq->len	= len;
+ 	rreq->origin	= origin;
+@@ -49,13 +52,14 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
+ 	INIT_LIST_HEAD(&rreq->io_streams[0].subrequests);
+ 	INIT_LIST_HEAD(&rreq->io_streams[1].subrequests);
+ 	init_waitqueue_head(&rreq->waitq);
+-	refcount_set(&rreq->ref, 1);
++	refcount_set(&rreq->ref, 2);
+ 
+ 	if (origin == NETFS_READAHEAD ||
+ 	    origin == NETFS_READPAGE ||
+ 	    origin == NETFS_READ_GAPS ||
+ 	    origin == NETFS_READ_SINGLE ||
+ 	    origin == NETFS_READ_FOR_WRITE ||
++	    origin == NETFS_UNBUFFERED_READ ||
+ 	    origin == NETFS_DIO_READ) {
+ 		INIT_WORK(&rreq->work, netfs_read_collection_worker);
+ 		rreq->io_streams[0].avail = true;
+@@ -63,7 +67,9 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
+ 		INIT_WORK(&rreq->work, netfs_write_collection_worker);
+ 	}
+ 
++	/* The IN_PROGRESS flag comes with a ref. */
+ 	__set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
++
+ 	if (file && file->f_flags & O_NONBLOCK)
+ 		__set_bit(NETFS_RREQ_NONBLOCK, &rreq->flags);
+ 	if (rreq->netfs_ops->init_request) {
+@@ -75,7 +81,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
+ 	}
+ 
+ 	atomic_inc(&ctx->io_count);
+-	trace_netfs_rreq_ref(rreq->debug_id, 1, netfs_rreq_trace_new);
++	trace_netfs_rreq_ref(rreq->debug_id, refcount_read(&rreq->ref), netfs_rreq_trace_new);
+ 	netfs_proc_add_rreq(rreq);
+ 	netfs_stat(&netfs_n_rh_rreq);
+ 	return rreq;
+@@ -89,7 +95,7 @@ void netfs_get_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace
+ 	trace_netfs_rreq_ref(rreq->debug_id, r + 1, what);
+ }
+ 
+-void netfs_clear_subrequests(struct netfs_io_request *rreq, bool was_async)
++void netfs_clear_subrequests(struct netfs_io_request *rreq)
+ {
+ 	struct netfs_io_subrequest *subreq;
+ 	struct netfs_io_stream *stream;
+@@ -101,8 +107,7 @@ void netfs_clear_subrequests(struct netfs_io_request *rreq, bool was_async)
+ 			subreq = list_first_entry(&stream->subrequests,
+ 						  struct netfs_io_subrequest, rreq_link);
+ 			list_del(&subreq->rreq_link);
+-			netfs_put_subrequest(subreq, was_async,
+-					     netfs_sreq_trace_put_clear);
++			netfs_put_subrequest(subreq, netfs_sreq_trace_put_clear);
+ 		}
+ 	}
+ }
+@@ -118,13 +123,19 @@ static void netfs_free_request_rcu(struct rcu_head *rcu)
+ static void netfs_free_request(struct work_struct *work)
+ {
+ 	struct netfs_io_request *rreq =
+-		container_of(work, struct netfs_io_request, work);
++		container_of(work, struct netfs_io_request, cleanup_work);
+ 	struct netfs_inode *ictx = netfs_inode(rreq->inode);
+ 	unsigned int i;
+ 
+ 	trace_netfs_rreq(rreq, netfs_rreq_trace_free);
++
++	/* Cancel/flush the result collection worker.  That does not carry a
++	 * ref of its own, so we must wait for it somewhere.
++	 */
++	cancel_work_sync(&rreq->work);
++
+ 	netfs_proc_del_rreq(rreq);
+-	netfs_clear_subrequests(rreq, false);
++	netfs_clear_subrequests(rreq);
+ 	if (rreq->netfs_ops->free_request)
+ 		rreq->netfs_ops->free_request(rreq);
+ 	if (rreq->cache_resources.ops)
+@@ -145,8 +156,7 @@ static void netfs_free_request(struct work_struct *work)
+ 	call_rcu(&rreq->rcu, netfs_free_request_rcu);
+ }
+ 
+-void netfs_put_request(struct netfs_io_request *rreq, bool was_async,
+-		       enum netfs_rreq_ref_trace what)
++void netfs_put_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace what)
+ {
+ 	unsigned int debug_id;
+ 	bool dead;
+@@ -156,15 +166,8 @@ void netfs_put_request(struct netfs_io_request *rreq, bool was_async,
+ 		debug_id = rreq->debug_id;
+ 		dead = __refcount_dec_and_test(&rreq->ref, &r);
+ 		trace_netfs_rreq_ref(debug_id, r - 1, what);
+-		if (dead) {
+-			if (was_async) {
+-				rreq->work.func = netfs_free_request;
+-				if (!queue_work(system_unbound_wq, &rreq->work))
+-					WARN_ON(1);
+-			} else {
+-				netfs_free_request(&rreq->work);
+-			}
+-		}
++		if (dead)
++			WARN_ON(!queue_work(system_unbound_wq, &rreq->cleanup_work));
+ 	}
+ }
+ 
+@@ -206,8 +209,7 @@ void netfs_get_subrequest(struct netfs_io_subrequest *subreq,
+ 			     what);
+ }
+ 
+-static void netfs_free_subrequest(struct netfs_io_subrequest *subreq,
+-				  bool was_async)
++static void netfs_free_subrequest(struct netfs_io_subrequest *subreq)
+ {
+ 	struct netfs_io_request *rreq = subreq->rreq;
+ 
+@@ -216,10 +218,10 @@ static void netfs_free_subrequest(struct netfs_io_subrequest *subreq,
+ 		rreq->netfs_ops->free_subrequest(subreq);
+ 	mempool_free(subreq, rreq->netfs_ops->subrequest_pool ?: &netfs_subrequest_pool);
+ 	netfs_stat_d(&netfs_n_rh_sreq);
+-	netfs_put_request(rreq, was_async, netfs_rreq_trace_put_subreq);
++	netfs_put_request(rreq, netfs_rreq_trace_put_subreq);
+ }
+ 
+-void netfs_put_subrequest(struct netfs_io_subrequest *subreq, bool was_async,
++void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
+ 			  enum netfs_sreq_ref_trace what)
+ {
+ 	unsigned int debug_index = subreq->debug_index;
+@@ -230,5 +232,5 @@ void netfs_put_subrequest(struct netfs_io_subrequest *subreq, bool was_async,
+ 	dead = __refcount_dec_and_test(&subreq->ref, &r);
+ 	trace_netfs_sreq_ref(debug_id, debug_index, r - 1, what);
+ 	if (dead)
+-		netfs_free_subrequest(subreq, was_async);
++		netfs_free_subrequest(subreq);
+ }
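
[Editor's note: the objects.c changes above adopt a lifetime rule where a request is born with two references and the NETFS_RREQ_IN_PROGRESS flag owns one of them: whoever clears the flag inherits that reference and must drop it. A small C sketch of the same rule with a plain atomic counter follows; the kernel uses refcount_t and frees through the new cleanup_work instead of an inline free.]

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct request {
	atomic_int ref;
	atomic_bool in_progress;
};

static struct request *alloc_request(void)
{
	struct request *rreq = malloc(sizeof(*rreq));
	atomic_init(&rreq->ref, 2);		/* caller ref + IN_PROGRESS ref */
	atomic_init(&rreq->in_progress, true);
	return rreq;
}

static void put_request(struct request *rreq)
{
	if (atomic_fetch_sub(&rreq->ref, 1) == 1) {
		printf("freeing request\n");
		free(rreq);
	}
}

static void complete_request(struct request *rreq)
{
	/* Clearing IN_PROGRESS transfers its reference to us. */
	if (atomic_exchange(&rreq->in_progress, false))
		put_request(rreq);
}

int main(void)
{
	struct request *rreq = alloc_request();
	complete_request(rreq);	/* drops the flag's ref */
	put_request(rreq);	/* drops the caller's ref; frees */
	return 0;
}

This is also why the was_async parameter disappears throughout: freeing always bounces through a workqueue, so callers no longer need to say whether they are in an atomic context.
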
+diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
+index 23c75755ad4ed9..bad677e58a4237 100644
+--- a/fs/netfs/read_collect.c
++++ b/fs/netfs/read_collect.c
+@@ -280,9 +280,13 @@ static void netfs_collect_read_results(struct netfs_io_request *rreq)
+ 			stream->need_retry = true;
+ 			notes |= NEED_RETRY | MADE_PROGRESS;
+ 			break;
++		} else if (test_bit(NETFS_RREQ_SHORT_TRANSFER, &rreq->flags)) {
++			notes |= MADE_PROGRESS;
+ 		} else {
+ 			if (!stream->failed)
+-				stream->transferred = stream->collected_to - rreq->start;
++				stream->transferred += transferred;
++			if (front->transferred < front->len)
++				set_bit(NETFS_RREQ_SHORT_TRANSFER, &rreq->flags);
+ 			notes |= MADE_PROGRESS;
+ 		}
+ 
+@@ -297,7 +301,7 @@ static void netfs_collect_read_results(struct netfs_io_request *rreq)
+ 						 struct netfs_io_subrequest, rreq_link);
+ 		stream->front = front;
+ 		spin_unlock(&rreq->lock);
+-		netfs_put_subrequest(remove, false,
++		netfs_put_subrequest(remove,
+ 				     notes & ABANDON_SREQ ?
+ 				     netfs_sreq_trace_put_abandon :
+ 				     netfs_sreq_trace_put_done);
+@@ -311,14 +315,8 @@ static void netfs_collect_read_results(struct netfs_io_request *rreq)
+ 
+ 	if (notes & NEED_RETRY)
+ 		goto need_retry;
+-	if ((notes & MADE_PROGRESS) && test_bit(NETFS_RREQ_PAUSE, &rreq->flags)) {
+-		trace_netfs_rreq(rreq, netfs_rreq_trace_unpause);
+-		clear_bit_unlock(NETFS_RREQ_PAUSE, &rreq->flags);
+-		smp_mb__after_atomic(); /* Set PAUSE before task state */
+-		wake_up(&rreq->waitq);
+-	}
+-
+ 	if (notes & MADE_PROGRESS) {
++		netfs_wake_rreq_flag(rreq, NETFS_RREQ_PAUSE, netfs_rreq_trace_unpause);
+ 		//cond_resched();
+ 		goto reassess;
+ 	}
+@@ -342,24 +340,10 @@ static void netfs_collect_read_results(struct netfs_io_request *rreq)
+  */
+ static void netfs_rreq_assess_dio(struct netfs_io_request *rreq)
+ {
+-	struct netfs_io_subrequest *subreq;
+-	struct netfs_io_stream *stream = &rreq->io_streams[0];
+ 	unsigned int i;
+ 
+-	/* Collect unbuffered reads and direct reads, adding up the transfer
+-	 * sizes until we find the first short or failed subrequest.
+-	 */
+-	list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
+-		rreq->transferred += subreq->transferred;
+-
+-		if (subreq->transferred < subreq->len ||
+-		    test_bit(NETFS_SREQ_FAILED, &subreq->flags)) {
+-			rreq->error = subreq->error;
+-			break;
+-		}
+-	}
+-
+-	if (rreq->origin == NETFS_DIO_READ) {
++	if (rreq->origin == NETFS_UNBUFFERED_READ ||
++	    rreq->origin == NETFS_DIO_READ) {
+ 		for (i = 0; i < rreq->direct_bv_count; i++) {
+ 			flush_dcache_page(rreq->direct_bv[i].bv_page);
+ 			// TODO: cifs marks pages in the destination buffer
+@@ -377,7 +361,8 @@ static void netfs_rreq_assess_dio(struct netfs_io_request *rreq)
+ 	}
+ 	if (rreq->netfs_ops->done)
+ 		rreq->netfs_ops->done(rreq);
+-	if (rreq->origin == NETFS_DIO_READ)
++	if (rreq->origin == NETFS_UNBUFFERED_READ ||
++	    rreq->origin == NETFS_DIO_READ)
+ 		inode_dio_end(rreq->inode);
+ }
+ 
+@@ -410,7 +395,7 @@ static void netfs_rreq_assess_single(struct netfs_io_request *rreq)
+  * Note that we're in normal kernel thread context at this point, possibly
+  * running on a workqueue.
+  */
+-static void netfs_read_collection(struct netfs_io_request *rreq)
++bool netfs_read_collection(struct netfs_io_request *rreq)
+ {
+ 	struct netfs_io_stream *stream = &rreq->io_streams[0];
+ 
+@@ -420,11 +405,11 @@ static void netfs_read_collection(struct netfs_io_request *rreq)
+ 	 * queue is empty.
+ 	 */
+ 	if (!test_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags))
+-		return;
++		return false;
+ 	smp_rmb(); /* Read ALL_QUEUED before subreq lists. */
+ 
+ 	if (!list_empty(&stream->subrequests))
+-		return;
++		return false;
+ 
+ 	/* Okay, declare that all I/O is complete. */
+ 	rreq->transferred = stream->transferred;
+@@ -433,6 +418,7 @@ static void netfs_read_collection(struct netfs_io_request *rreq)
+ 	//netfs_rreq_is_still_valid(rreq);
+ 
+ 	switch (rreq->origin) {
++	case NETFS_UNBUFFERED_READ:
+ 	case NETFS_DIO_READ:
+ 	case NETFS_READ_GAPS:
+ 		netfs_rreq_assess_dio(rreq);
+@@ -445,14 +431,15 @@ static void netfs_read_collection(struct netfs_io_request *rreq)
+ 	}
+ 	task_io_account_read(rreq->transferred);
+ 
+-	trace_netfs_rreq(rreq, netfs_rreq_trace_wake_ip);
+-	clear_and_wake_up_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
++	netfs_wake_rreq_flag(rreq, NETFS_RREQ_IN_PROGRESS, netfs_rreq_trace_wake_ip);
++	/* As we cleared NETFS_RREQ_IN_PROGRESS, we acquired its ref. */
+ 
+ 	trace_netfs_rreq(rreq, netfs_rreq_trace_done);
+-	netfs_clear_subrequests(rreq, false);
++	netfs_clear_subrequests(rreq);
+ 	netfs_unlock_abandoned_read_pages(rreq);
+ 	if (unlikely(rreq->copy_to_cache))
+ 		netfs_pgpriv2_end_copy_to_cache(rreq);
++	return true;
+ }
+ 
+ void netfs_read_collection_worker(struct work_struct *work)
+@@ -460,26 +447,12 @@ void netfs_read_collection_worker(struct work_struct *work)
+ 	struct netfs_io_request *rreq = container_of(work, struct netfs_io_request, work);
+ 
+ 	netfs_see_request(rreq, netfs_rreq_trace_see_work);
+-	if (test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags))
+-		netfs_read_collection(rreq);
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_work);
+-}
+-
+-/*
+- * Wake the collection work item.
+- */
+-void netfs_wake_read_collector(struct netfs_io_request *rreq)
+-{
+-	if (test_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &rreq->flags) &&
+-	    !test_bit(NETFS_RREQ_RETRYING, &rreq->flags)) {
+-		if (!work_pending(&rreq->work)) {
+-			netfs_get_request(rreq, netfs_rreq_trace_get_work);
+-			if (!queue_work(system_unbound_wq, &rreq->work))
+-				netfs_put_request(rreq, true, netfs_rreq_trace_put_work_nq);
+-		}
+-	} else {
+-		trace_netfs_rreq(rreq, netfs_rreq_trace_wake_queue);
+-		wake_up(&rreq->waitq);
++	if (test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags)) {
++		if (netfs_read_collection(rreq))
++			/* Drop the ref from the IN_PROGRESS flag. */
++			netfs_put_request(rreq, netfs_rreq_trace_put_work_ip);
++		else
++			netfs_see_request(rreq, netfs_rreq_trace_see_work_complete);
+ 	}
+ }
+ 
+@@ -511,7 +484,7 @@ void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq)
+ 	    list_is_first(&subreq->rreq_link, &stream->subrequests)
+ 	    ) {
+ 		__set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
+-		netfs_wake_read_collector(rreq);
++		netfs_wake_collector(rreq);
+ 	}
+ }
+ EXPORT_SYMBOL(netfs_read_subreq_progress);
+@@ -535,7 +508,6 @@ EXPORT_SYMBOL(netfs_read_subreq_progress);
+ void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq)
+ {
+ 	struct netfs_io_request *rreq = subreq->rreq;
+-	struct netfs_io_stream *stream = &rreq->io_streams[0];
+ 
+ 	switch (subreq->source) {
+ 	case NETFS_READ_FROM_CACHE:
+@@ -582,23 +554,15 @@ void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq)
+ 	}
+ 
+ 	trace_netfs_sreq(subreq, netfs_sreq_trace_terminated);
+-
+-	clear_bit_unlock(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
+-	smp_mb__after_atomic(); /* Clear IN_PROGRESS before task state */
+-
+-	/* If we are at the head of the queue, wake up the collector. */
+-	if (list_is_first(&subreq->rreq_link, &stream->subrequests) ||
+-	    test_bit(NETFS_RREQ_RETRYING, &rreq->flags))
+-		netfs_wake_read_collector(rreq);
+-
+-	netfs_put_subrequest(subreq, true, netfs_sreq_trace_put_terminated);
++	netfs_subreq_clear_in_progress(subreq);
++	netfs_put_subrequest(subreq, netfs_sreq_trace_put_terminated);
+ }
+ EXPORT_SYMBOL(netfs_read_subreq_terminated);
+ 
+ /*
+  * Handle termination of a read from the cache.
+  */
+-void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error, bool was_async)
++void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error)
+ {
+ 	struct netfs_io_subrequest *subreq = priv;
+ 
+@@ -613,94 +577,3 @@ void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error, bool
+ 	}
+ 	netfs_read_subreq_terminated(subreq);
+ }
+-
+-/*
+- * Wait for the read operation to complete, successfully or otherwise.
+- */
+-ssize_t netfs_wait_for_read(struct netfs_io_request *rreq)
+-{
+-	struct netfs_io_subrequest *subreq;
+-	struct netfs_io_stream *stream = &rreq->io_streams[0];
+-	DEFINE_WAIT(myself);
+-	ssize_t ret;
+-
+-	for (;;) {
+-		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_queue);
+-		prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE);
+-
+-		subreq = list_first_entry_or_null(&stream->subrequests,
+-						  struct netfs_io_subrequest, rreq_link);
+-		if (subreq &&
+-		    (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags) ||
+-		     test_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags))) {
+-			__set_current_state(TASK_RUNNING);
+-			netfs_read_collection(rreq);
+-			continue;
+-		}
+-
+-		if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags))
+-			break;
+-
+-		schedule();
+-		trace_netfs_rreq(rreq, netfs_rreq_trace_woke_queue);
+-	}
+-
+-	finish_wait(&rreq->waitq, &myself);
+-
+-	ret = rreq->error;
+-	if (ret == 0) {
+-		ret = rreq->transferred;
+-		switch (rreq->origin) {
+-		case NETFS_DIO_READ:
+-		case NETFS_READ_SINGLE:
+-			ret = rreq->transferred;
+-			break;
+-		default:
+-			if (rreq->submitted < rreq->len) {
+-				trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read);
+-				ret = -EIO;
+-			}
+-			break;
+-		}
+-	}
+-
+-	return ret;
+-}
+-
+-/*
+- * Wait for a paused read operation to unpause or complete in some manner.
+- */
+-void netfs_wait_for_pause(struct netfs_io_request *rreq)
+-{
+-	struct netfs_io_subrequest *subreq;
+-	struct netfs_io_stream *stream = &rreq->io_streams[0];
+-	DEFINE_WAIT(myself);
+-
+-	trace_netfs_rreq(rreq, netfs_rreq_trace_wait_pause);
+-
+-	for (;;) {
+-		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_queue);
+-		prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE);
+-
+-		if (!test_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &rreq->flags)) {
+-			subreq = list_first_entry_or_null(&stream->subrequests,
+-							  struct netfs_io_subrequest, rreq_link);
+-			if (subreq &&
+-			    (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags) ||
+-			     test_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags))) {
+-				__set_current_state(TASK_RUNNING);
+-				netfs_read_collection(rreq);
+-				continue;
+-			}
+-		}
+-
+-		if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags) ||
+-		    !test_bit(NETFS_RREQ_PAUSE, &rreq->flags))
+-			break;
+-
+-		schedule();
+-		trace_netfs_rreq(rreq, netfs_rreq_trace_woke_queue);
+-	}
+-
+-	finish_wait(&rreq->waitq, &myself);
+-}
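
The read-collection rework above leans on one invariant, called out in the new comment: the NETFS_RREQ_IN_PROGRESS flag pins a reference, and whichever thread atomically clears the flag inherits that reference and must drop it. A minimal userspace sketch of the hand-off, using C11 atomics (all names here are illustrative, not the netfs API):

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define REQ_IN_PROGRESS (1u << 0)

    struct req {
            atomic_uint flags;
            atomic_uint refcount;   /* one ref is owned by IN_PROGRESS */
    };

    static void req_put(struct req *r)
    {
            if (atomic_fetch_sub(&r->refcount, 1) == 1) {
                    printf("freeing request\n");
                    free(r);
            }
    }

    /* True if we cleared the bit and therefore now own its reference. */
    static int req_complete(struct req *r)
    {
            unsigned int old = atomic_fetch_and(&r->flags, ~REQ_IN_PROGRESS);

            return old & REQ_IN_PROGRESS;
    }

    int main(void)
    {
            struct req *r = malloc(sizeof(*r));

            if (!r)
                    return 1;
            atomic_init(&r->flags, REQ_IN_PROGRESS);
            atomic_init(&r->refcount, 1);   /* the ref the flag pins */

            if (req_complete(r))            /* we cleared it: drop its ref */
                    req_put(r);
            return 0;
    }

Because fetch-and is atomic, at most one caller can observe the bit set in the old value, so the final put runs exactly once.
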
+diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c
+index cf7727060215ad..5bbe906a551d57 100644
+--- a/fs/netfs/read_pgpriv2.c
++++ b/fs/netfs/read_pgpriv2.c
+@@ -116,7 +116,7 @@ static struct netfs_io_request *netfs_pgpriv2_begin_copy_to_cache(
+ 	return creq;
+ 
+ cancel_put:
+-	netfs_put_request(creq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(creq, netfs_rreq_trace_put_return);
+ cancel:
+ 	rreq->copy_to_cache = ERR_PTR(-ENOBUFS);
+ 	clear_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags);
+@@ -155,7 +155,7 @@ void netfs_pgpriv2_end_copy_to_cache(struct netfs_io_request *rreq)
+ 	smp_wmb(); /* Write lists before ALL_QUEUED. */
+ 	set_bit(NETFS_RREQ_ALL_QUEUED, &creq->flags);
+ 
+-	netfs_put_request(creq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(creq, netfs_rreq_trace_put_return);
+ 	creq->copy_to_cache = NULL;
+ }
+ 
+diff --git a/fs/netfs/read_retry.c b/fs/netfs/read_retry.c
+index 0f294b26e08c96..b99e84a8170af2 100644
+--- a/fs/netfs/read_retry.c
++++ b/fs/netfs/read_retry.c
+@@ -173,7 +173,7 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
+ 						      &stream->subrequests, rreq_link) {
+ 				trace_netfs_sreq(subreq, netfs_sreq_trace_superfluous);
+ 				list_del(&subreq->rreq_link);
+-				netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_done);
++				netfs_put_subrequest(subreq, netfs_sreq_trace_put_done);
+ 				if (subreq == to)
+ 					break;
+ 			}
+@@ -257,35 +257,15 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
+  */
+ void netfs_retry_reads(struct netfs_io_request *rreq)
+ {
+-	struct netfs_io_subrequest *subreq;
+ 	struct netfs_io_stream *stream = &rreq->io_streams[0];
+-	DEFINE_WAIT(myself);
+ 
+ 	netfs_stat(&netfs_n_rh_retry_read_req);
+ 
+-	set_bit(NETFS_RREQ_RETRYING, &rreq->flags);
+-
+ 	/* Wait for all outstanding I/O to quiesce before performing retries as
+ 	 * we may need to renegotiate the I/O sizes.
+ 	 */
+-	list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
+-		if (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags))
+-			continue;
+-
+-		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_queue);
+-		for (;;) {
+-			prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE);
+-
+-			if (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags))
+-				break;
+-
+-			trace_netfs_sreq(subreq, netfs_sreq_trace_wait_for);
+-			schedule();
+-			trace_netfs_rreq(rreq, netfs_rreq_trace_woke_queue);
+-		}
+-
+-		finish_wait(&rreq->waitq, &myself);
+-	}
++	set_bit(NETFS_RREQ_RETRYING, &rreq->flags);
++	netfs_wait_for_in_progress_stream(rreq, stream);
+ 	clear_bit(NETFS_RREQ_RETRYING, &rreq->flags);
+ 
+ 	trace_netfs_rreq(rreq, netfs_rreq_trace_resubmit);
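
netfs_retry_reads() now brackets a single helper, netfs_wait_for_in_progress_stream(), with the RETRYING flag instead of open-coding a prepare_to_wait() loop per subrequest. The shape of such a quiesce, sketched as a hypothetical pthreads analogue (the kernel helper's internals are not shown in this patch):

    #include <pthread.h>

    struct stream {
            pthread_mutex_t lock;
            pthread_cond_t idle;
            int in_progress;        /* number of in-flight subrequests */
    };

    /* Sleep until every subrequest in the stream has completed. */
    void stream_quiesce(struct stream *s)
    {
            pthread_mutex_lock(&s->lock);
            while (s->in_progress > 0)
                    pthread_cond_wait(&s->idle, &s->lock);
            pthread_mutex_unlock(&s->lock);
    }

    /* Called as each subrequest terminates. */
    void subreq_finish(struct stream *s)
    {
            pthread_mutex_lock(&s->lock);
            if (--s->in_progress == 0)
                    pthread_cond_broadcast(&s->idle);
            pthread_mutex_unlock(&s->lock);
    }
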
+diff --git a/fs/netfs/read_single.c b/fs/netfs/read_single.c
+index fea0ecdecc5397..fa622a6cd56da3 100644
+--- a/fs/netfs/read_single.c
++++ b/fs/netfs/read_single.c
+@@ -142,7 +142,7 @@ static int netfs_single_dispatch_read(struct netfs_io_request *rreq)
+ 	set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
+ 	return ret;
+ cancel:
+-	netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
++	netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
+ 	return ret;
+ }
+ 
+@@ -185,11 +185,11 @@ ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_ite
+ 	netfs_single_dispatch_read(rreq);
+ 
+ 	ret = netfs_wait_for_read(rreq);
+-	netfs_put_request(rreq, true, netfs_rreq_trace_put_return);
++	netfs_put_request(rreq, netfs_rreq_trace_put_return);
+ 	return ret;
+ 
+ cleanup_free:
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_failed);
++	netfs_put_request(rreq, netfs_rreq_trace_put_failed);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(netfs_read_single);
+diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
+index 3fca59e6475d1c..0ce7b53e7fe83f 100644
+--- a/fs/netfs/write_collect.c
++++ b/fs/netfs/write_collect.c
+@@ -280,7 +280,7 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
+ 							 struct netfs_io_subrequest, rreq_link);
+ 			stream->front = front;
+ 			spin_unlock(&wreq->lock);
+-			netfs_put_subrequest(remove, false,
++			netfs_put_subrequest(remove,
+ 					     notes & SAW_FAILURE ?
+ 					     netfs_sreq_trace_put_cancel :
+ 					     netfs_sreq_trace_put_done);
+@@ -321,18 +321,14 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
+ 
+ 	if (notes & NEED_RETRY)
+ 		goto need_retry;
+-	if ((notes & MADE_PROGRESS) && test_bit(NETFS_RREQ_PAUSE, &wreq->flags)) {
+-		trace_netfs_rreq(wreq, netfs_rreq_trace_unpause);
+-		clear_bit_unlock(NETFS_RREQ_PAUSE, &wreq->flags);
+-		smp_mb__after_atomic(); /* Set PAUSE before task state */
+-		wake_up(&wreq->waitq);
+-	}
+ 
+-	if (notes & NEED_REASSESS) {
++	if (notes & MADE_PROGRESS) {
++		netfs_wake_rreq_flag(wreq, NETFS_RREQ_PAUSE, netfs_rreq_trace_unpause);
+ 		//cond_resched();
+ 		goto reassess_streams;
+ 	}
+-	if (notes & MADE_PROGRESS) {
++
++	if (notes & NEED_REASSESS) {
+ 		//cond_resched();
+ 		goto reassess_streams;
+ 	}
+@@ -356,30 +352,21 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
+ /*
+  * Perform the collection of subrequests, folios and encryption buffers.
+  */
+-void netfs_write_collection_worker(struct work_struct *work)
++bool netfs_write_collection(struct netfs_io_request *wreq)
+ {
+-	struct netfs_io_request *wreq = container_of(work, struct netfs_io_request, work);
+ 	struct netfs_inode *ictx = netfs_inode(wreq->inode);
+ 	size_t transferred;
+ 	int s;
+ 
+ 	_enter("R=%x", wreq->debug_id);
+ 
+-	netfs_see_request(wreq, netfs_rreq_trace_see_work);
+-	if (!test_bit(NETFS_RREQ_IN_PROGRESS, &wreq->flags)) {
+-		netfs_put_request(wreq, false, netfs_rreq_trace_put_work);
+-		return;
+-	}
+-
+ 	netfs_collect_write_results(wreq);
+ 
+ 	/* We're done when the app thread has finished posting subreqs and all
+ 	 * the queues in all the streams are empty.
+ 	 */
+-	if (!test_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags)) {
+-		netfs_put_request(wreq, false, netfs_rreq_trace_put_work);
+-		return;
+-	}
++	if (!test_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags))
++		return false;
+ 	smp_rmb(); /* Read ALL_QUEUED before lists. */
+ 
+ 	transferred = LONG_MAX;
+@@ -387,10 +374,8 @@ void netfs_write_collection_worker(struct work_struct *work)
+ 		struct netfs_io_stream *stream = &wreq->io_streams[s];
+ 		if (!stream->active)
+ 			continue;
+-		if (!list_empty(&stream->subrequests)) {
+-			netfs_put_request(wreq, false, netfs_rreq_trace_put_work);
+-			return;
+-		}
++		if (!list_empty(&stream->subrequests))
++			return false;
+ 		if (stream->transferred < transferred)
+ 			transferred = stream->transferred;
+ 	}
+@@ -428,8 +413,8 @@ void netfs_write_collection_worker(struct work_struct *work)
+ 		inode_dio_end(wreq->inode);
+ 
+ 	_debug("finished");
+-	trace_netfs_rreq(wreq, netfs_rreq_trace_wake_ip);
+-	clear_and_wake_up_bit(NETFS_RREQ_IN_PROGRESS, &wreq->flags);
++	netfs_wake_rreq_flag(wreq, NETFS_RREQ_IN_PROGRESS, netfs_rreq_trace_wake_ip);
++	/* As we cleared NETFS_RREQ_IN_PROGRESS, we acquired its ref. */
+ 
+ 	if (wreq->iocb) {
+ 		size_t written = min(wreq->transferred, wreq->len);
+@@ -440,19 +425,21 @@ void netfs_write_collection_worker(struct work_struct *work)
+ 		wreq->iocb = VFS_PTR_POISON;
+ 	}
+ 
+-	netfs_clear_subrequests(wreq, false);
+-	netfs_put_request(wreq, false, netfs_rreq_trace_put_work_complete);
++	netfs_clear_subrequests(wreq);
++	return true;
+ }
+ 
+-/*
+- * Wake the collection work item.
+- */
+-void netfs_wake_write_collector(struct netfs_io_request *wreq, bool was_async)
++void netfs_write_collection_worker(struct work_struct *work)
+ {
+-	if (!work_pending(&wreq->work)) {
+-		netfs_get_request(wreq, netfs_rreq_trace_get_work);
+-		if (!queue_work(system_unbound_wq, &wreq->work))
+-			netfs_put_request(wreq, was_async, netfs_rreq_trace_put_work_nq);
++	struct netfs_io_request *rreq = container_of(work, struct netfs_io_request, work);
++
++	netfs_see_request(rreq, netfs_rreq_trace_see_work);
++	if (test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags)) {
++		if (netfs_write_collection(rreq))
++			/* Drop the ref from the IN_PROGRESS flag. */
++			netfs_put_request(rreq, netfs_rreq_trace_put_work_ip);
++		else
++			netfs_see_request(rreq, netfs_rreq_trace_see_work_complete);
+ 	}
+ }
+ 
+@@ -460,7 +447,6 @@ void netfs_wake_write_collector(struct netfs_io_request *wreq, bool was_async)
+  * netfs_write_subrequest_terminated - Note the termination of a write operation.
+  * @_op: The I/O request that has terminated.
+  * @transferred_or_error: The amount of data transferred or an error code.
+- * @was_async: The termination was asynchronous
+  *
+  * This tells the library that a contributory write I/O operation has
+  * terminated, one way or another, and that it should collect the results.
+@@ -470,21 +456,16 @@ void netfs_wake_write_collector(struct netfs_io_request *wreq, bool was_async)
+  * negative error code.  The library will look after reissuing I/O operations
+  * as appropriate and writing downloaded data to the cache.
+  *
+- * If @was_async is true, the caller might be running in softirq or interrupt
+- * context and we can't sleep.
+- *
+  * When this is called, ownership of the subrequest is transferred back to the
+  * library, along with a ref.
+  *
+  * Note that %_op is a void* so that the function can be passed to
+  * kiocb::term_func without the need for a casting wrapper.
+  */
+-void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error,
+-				       bool was_async)
++void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error)
+ {
+ 	struct netfs_io_subrequest *subreq = _op;
+ 	struct netfs_io_request *wreq = subreq->rreq;
+-	struct netfs_io_stream *stream = &wreq->io_streams[subreq->stream_nr];
+ 
+ 	_enter("%x[%x] %zd", wreq->debug_id, subreq->debug_index, transferred_or_error);
+ 
+@@ -536,15 +517,7 @@ void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error,
+ 	}
+ 
+ 	trace_netfs_sreq(subreq, netfs_sreq_trace_terminated);
+-
+-	clear_and_wake_up_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
+-
+-	/* If we are at the head of the queue, wake up the collector,
+-	 * transferring a ref to it if we were the ones to do so.
+-	 */
+-	if (list_is_first(&subreq->rreq_link, &stream->subrequests))
+-		netfs_wake_write_collector(wreq, was_async);
+-
+-	netfs_put_subrequest(subreq, was_async, netfs_sreq_trace_put_terminated);
++	netfs_subreq_clear_in_progress(subreq);
++	netfs_put_subrequest(subreq, netfs_sreq_trace_put_terminated);
+ }
+ EXPORT_SYMBOL(netfs_write_subrequest_terminated);
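
A theme running through these hunks is the removal of the was_async argument from the put and termination paths. The flag existed so callers completing I/O from atomic context could request deferred cleanup; once every completion is guaranteed to run in process context, the dual path collapses. A toy illustration of the old shape (hypothetical code, not cifs or netfs):

    #include <stdbool.h>
    #include <stdio.h>

    struct object { int id; };

    static void free_now(struct object *obj)
    {
            printf("freeing %d inline\n", obj->id);
    }

    static void defer_to_worker(struct object *obj)
    {
            printf("queueing %d for a worker\n", obj->id);
    }

    /* Old shape: every put had to know whether it might be running in
     * atomic context.  With all completions moved to process context,
     * the flag and the second path disappear. */
    static void object_put(struct object *obj, bool was_async)
    {
            if (was_async)
                    defer_to_worker(obj);   /* atomic context: must not sleep */
            else
                    free_now(obj);          /* process context: free inline */
    }

    int main(void)
    {
            struct object o = { 1 };

            object_put(&o, false);
            object_put(&o, true);
            return 0;
    }
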
+diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
+index 77279fc5b5a7cb..50bee2c4130d1e 100644
+--- a/fs/netfs/write_issue.c
++++ b/fs/netfs/write_issue.c
+@@ -134,7 +134,7 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
+ 	return wreq;
+ nomem:
+ 	wreq->error = -ENOMEM;
+-	netfs_put_request(wreq, false, netfs_rreq_trace_put_failed);
++	netfs_put_request(wreq, netfs_rreq_trace_put_failed);
+ 	return ERR_PTR(-ENOMEM);
+ }
+ 
+@@ -233,7 +233,7 @@ static void netfs_do_issue_write(struct netfs_io_stream *stream,
+ 	_enter("R=%x[%x],%zx", wreq->debug_id, subreq->debug_index, subreq->len);
+ 
+ 	if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
+-		return netfs_write_subrequest_terminated(subreq, subreq->error, false);
++		return netfs_write_subrequest_terminated(subreq, subreq->error);
+ 
+ 	trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+ 	stream->issue_write(subreq);
+@@ -542,7 +542,7 @@ static void netfs_end_issue_write(struct netfs_io_request *wreq)
+ 	}
+ 
+ 	if (needs_poke)
+-		netfs_wake_write_collector(wreq, false);
++		netfs_wake_collector(wreq);
+ }
+ 
+ /*
+@@ -576,6 +576,7 @@ int netfs_writepages(struct address_space *mapping,
+ 		goto couldnt_start;
+ 	}
+ 
++	__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);
+ 	trace_netfs_write(wreq, netfs_write_trace_writeback);
+ 	netfs_stat(&netfs_n_wh_writepages);
+ 
+@@ -599,8 +600,9 @@ int netfs_writepages(struct address_space *mapping,
+ 	netfs_end_issue_write(wreq);
+ 
+ 	mutex_unlock(&ictx->wb_lock);
++	netfs_wake_collector(wreq);
+ 
+-	netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(wreq, netfs_rreq_trace_put_return);
+ 	_leave(" = %d", error);
+ 	return error;
+ 
+@@ -673,11 +675,11 @@ int netfs_advance_writethrough(struct netfs_io_request *wreq, struct writeback_c
+ /*
+  * End a write operation used when writing through the pagecache.
+  */
+-int netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc,
+-			   struct folio *writethrough_cache)
++ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc,
++			       struct folio *writethrough_cache)
+ {
+ 	struct netfs_inode *ictx = netfs_inode(wreq->inode);
+-	int ret;
++	ssize_t ret;
+ 
+ 	_enter("R=%x", wreq->debug_id);
+ 
+@@ -688,13 +690,11 @@ int netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_contr
+ 
+ 	mutex_unlock(&ictx->wb_lock);
+ 
+-	if (wreq->iocb) {
++	if (wreq->iocb)
+ 		ret = -EIOCBQUEUED;
+-	} else {
+-		wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS, TASK_UNINTERRUPTIBLE);
+-		ret = wreq->error;
+-	}
+-	netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
++	else
++		ret = netfs_wait_for_write(wreq);
++	netfs_put_request(wreq, netfs_rreq_trace_put_return);
+ 	return ret;
+ }
+ 
+@@ -722,10 +722,8 @@ int netfs_unbuffered_write(struct netfs_io_request *wreq, bool may_wait, size_t
+ 		start += part;
+ 		len -= part;
+ 		rolling_buffer_advance(&wreq->buffer, part);
+-		if (test_bit(NETFS_RREQ_PAUSE, &wreq->flags)) {
+-			trace_netfs_rreq(wreq, netfs_rreq_trace_wait_pause);
+-			wait_event(wreq->waitq, !test_bit(NETFS_RREQ_PAUSE, &wreq->flags));
+-		}
++		if (test_bit(NETFS_RREQ_PAUSE, &wreq->flags))
++			netfs_wait_for_paused_write(wreq);
+ 		if (test_bit(NETFS_RREQ_FAILED, &wreq->flags))
+ 			break;
+ 	}
+@@ -885,7 +883,8 @@ int netfs_writeback_single(struct address_space *mapping,
+ 		goto couldnt_start;
+ 	}
+ 
+-	trace_netfs_write(wreq, netfs_write_trace_writeback);
++	__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);
++	trace_netfs_write(wreq, netfs_write_trace_writeback_single);
+ 	netfs_stat(&netfs_n_wh_writepages);
+ 
+ 	if (__test_and_set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))
+@@ -914,8 +913,9 @@ int netfs_writeback_single(struct address_space *mapping,
+ 	set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
+ 
+ 	mutex_unlock(&ictx->wb_lock);
++	netfs_wake_collector(wreq);
+ 
+-	netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(wreq, netfs_rreq_trace_put_return);
+ 	_leave(" = %d", ret);
+ 	return ret;
+ 
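
The writeback paths above now mark requests with NETFS_RREQ_OFFLOAD_COLLECTION and poke netfs_wake_collector() once issuing is done. The removed netfs_wake_write_collector() earlier in this patch shows the underlying pattern: take a reference before queuing the work item, and give it back if the item was already queued. A userspace sketch under that assumption (names are invented):

    #include <stdatomic.h>
    #include <stdbool.h>

    struct request {
            atomic_uint refcount;
            atomic_bool queued;
    };

    void request_get(struct request *r) { atomic_fetch_add(&r->refcount, 1); }
    void request_put(struct request *r) { atomic_fetch_sub(&r->refcount, 1); }

    /* Stand-in for queue_work(): false if the item was already queued. */
    bool queue_collector(struct request *r)
    {
            bool expected = false;

            return atomic_compare_exchange_strong(&r->queued, &expected, true);
    }

    void wake_collector(struct request *r)
    {
            request_get(r);                 /* ref travels with the work item */
            if (!queue_collector(r))
                    request_put(r);         /* already queued: give it back */
    }
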
+diff --git a/fs/netfs/write_retry.c b/fs/netfs/write_retry.c
+index 545d33079a77d0..9d1d8a8bab7261 100644
+--- a/fs/netfs/write_retry.c
++++ b/fs/netfs/write_retry.c
+@@ -39,9 +39,10 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
+ 			if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
+ 				break;
+ 			if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) {
+-				struct iov_iter source = subreq->io_iter;
++				struct iov_iter source;
+ 
+-				iov_iter_revert(&source, subreq->len - source.count);
++				netfs_reset_iter(subreq);
++				source = subreq->io_iter;
+ 				netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
+ 				netfs_reissue_write(stream, subreq, &source);
+ 			}
+@@ -131,7 +132,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
+ 						      &stream->subrequests, rreq_link) {
+ 				trace_netfs_sreq(subreq, netfs_sreq_trace_discard);
+ 				list_del(&subreq->rreq_link);
+-				netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_done);
++				netfs_put_subrequest(subreq, netfs_sreq_trace_put_done);
+ 				if (subreq == to)
+ 					break;
+ 			}
+@@ -199,7 +200,6 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
+  */
+ void netfs_retry_writes(struct netfs_io_request *wreq)
+ {
+-	struct netfs_io_subrequest *subreq;
+ 	struct netfs_io_stream *stream;
+ 	int s;
+ 
+@@ -208,16 +208,13 @@ void netfs_retry_writes(struct netfs_io_request *wreq)
+ 	/* Wait for all outstanding I/O to quiesce before performing retries as
+ 	 * we may need to renegotiate the I/O sizes.
+ 	 */
++	set_bit(NETFS_RREQ_RETRYING, &wreq->flags);
+ 	for (s = 0; s < NR_IO_STREAMS; s++) {
+ 		stream = &wreq->io_streams[s];
+-		if (!stream->active)
+-			continue;
+-
+-		list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
+-			wait_on_bit(&subreq->flags, NETFS_SREQ_IN_PROGRESS,
+-				    TASK_UNINTERRUPTIBLE);
+-		}
++		if (stream->active)
++			netfs_wait_for_in_progress_stream(wreq, stream);
+ 	}
++	clear_bit(NETFS_RREQ_RETRYING, &wreq->flags);
+ 
+ 	// TODO: Enc: Fetch changed partial pages
+ 	// TODO: Enc: Reencrypt content if needed.
+diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
+index e278a1ad1ca3e8..8b07851787312b 100644
+--- a/fs/nfs/fscache.c
++++ b/fs/nfs/fscache.c
+@@ -367,6 +367,7 @@ void nfs_netfs_read_completion(struct nfs_pgio_header *hdr)
+ 
+ 	sreq = netfs->sreq;
+ 	if (test_bit(NFS_IOHDR_EOF, &hdr->flags) &&
++	    sreq->rreq->origin != NETFS_UNBUFFERED_READ &&
+ 	    sreq->rreq->origin != NETFS_DIO_READ)
+ 		__set_bit(NETFS_SREQ_CLEAR_TAIL, &sreq->flags);
+ 
+diff --git a/fs/nfs/localio.c b/fs/nfs/localio.c
+index 4ec952f9f47dde..e6d36b3d3fc059 100644
+--- a/fs/nfs/localio.c
++++ b/fs/nfs/localio.c
+@@ -207,14 +207,16 @@ void nfs_local_probe_async(struct nfs_client *clp)
+ }
+ EXPORT_SYMBOL_GPL(nfs_local_probe_async);
+ 
+-static inline struct nfsd_file *nfs_local_file_get(struct nfsd_file *nf)
++static inline void nfs_local_file_put(struct nfsd_file *localio)
+ {
+-	return nfs_to->nfsd_file_get(nf);
+-}
++	/* nfs_to_nfsd_file_put_local() expects an __rcu pointer
++	 * but we have a __kernel pointer.  It is always safe
++	 * to cast a __kernel pointer to an __rcu pointer
++	 * because the cast only weakens what is known about the pointer.
++	 */
++	struct nfsd_file __rcu *nf = (struct nfsd_file __rcu*) localio;
+ 
+-static inline void nfs_local_file_put(struct nfsd_file *nf)
+-{
+-	nfs_to->nfsd_file_put(nf);
++	nfs_to_nfsd_file_put_local(&nf);
+ }
+ 
+ /*
+@@ -226,12 +228,13 @@ static inline void nfs_local_file_put(struct nfsd_file *nf)
+ static struct nfsd_file *
+ __nfs_local_open_fh(struct nfs_client *clp, const struct cred *cred,
+ 		    struct nfs_fh *fh, struct nfs_file_localio *nfl,
++		    struct nfsd_file __rcu **pnf,
+ 		    const fmode_t mode)
+ {
+ 	struct nfsd_file *localio;
+ 
+ 	localio = nfs_open_local_fh(&clp->cl_uuid, clp->cl_rpcclient,
+-				    cred, fh, nfl, mode);
++				    cred, fh, nfl, pnf, mode);
+ 	if (IS_ERR(localio)) {
+ 		int status = PTR_ERR(localio);
+ 		trace_nfs_local_open_fh(fh, mode, status);
+@@ -258,7 +261,7 @@ nfs_local_open_fh(struct nfs_client *clp, const struct cred *cred,
+ 		  struct nfs_fh *fh, struct nfs_file_localio *nfl,
+ 		  const fmode_t mode)
+ {
+-	struct nfsd_file *nf, *new, __rcu **pnf;
++	struct nfsd_file *nf, __rcu **pnf;
+ 
+ 	if (!nfs_server_is_local(clp))
+ 		return NULL;
+@@ -270,29 +273,9 @@ nfs_local_open_fh(struct nfs_client *clp, const struct cred *cred,
+ 	else
+ 		pnf = &nfl->ro_file;
+ 
+-	new = NULL;
+-	rcu_read_lock();
+-	nf = rcu_dereference(*pnf);
+-	if (!nf) {
+-		rcu_read_unlock();
+-		new = __nfs_local_open_fh(clp, cred, fh, nfl, mode);
+-		if (IS_ERR(new))
+-			return NULL;
+-		rcu_read_lock();
+-		/* try to swap in the pointer */
+-		spin_lock(&clp->cl_uuid.lock);
+-		nf = rcu_dereference_protected(*pnf, 1);
+-		if (!nf) {
+-			nf = new;
+-			new = NULL;
+-			rcu_assign_pointer(*pnf, nf);
+-		}
+-		spin_unlock(&clp->cl_uuid.lock);
+-	}
+-	nf = nfs_local_file_get(nf);
+-	rcu_read_unlock();
+-	if (new)
+-		nfs_to_nfsd_file_put_local(new);
++	nf = __nfs_local_open_fh(clp, cred, fh, nfl, pnf, mode);
++	if (IS_ERR(nf))
++		return NULL;
+ 	return nf;
+ }
+ EXPORT_SYMBOL_GPL(nfs_local_open_fh);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index b1d2122bd5a749..4b123bca65e12d 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -5164,13 +5164,15 @@ static int nfs4_do_create(struct inode *dir, struct dentry *dentry, struct nfs4_
+ }
+ 
+ static struct dentry *nfs4_do_mkdir(struct inode *dir, struct dentry *dentry,
+-				    struct nfs4_createdata *data)
++				    struct nfs4_createdata *data, int *statusp)
+ {
+-	int status = nfs4_call_sync(NFS_SERVER(dir)->client, NFS_SERVER(dir), &data->msg,
++	struct dentry *ret;
++
++	*statusp = nfs4_call_sync(NFS_SERVER(dir)->client, NFS_SERVER(dir), &data->msg,
+ 				    &data->arg.seq_args, &data->res.seq_res, 1);
+ 
+-	if (status)
+-		return ERR_PTR(status);
++	if (*statusp)
++		return NULL;
+ 
+ 	spin_lock(&dir->i_lock);
+ 	/* Creating a directory bumps nlink in the parent */
+@@ -5179,7 +5181,11 @@ static struct dentry *nfs4_do_mkdir(struct inode *dir, struct dentry *dentry,
+ 				      data->res.fattr->time_start,
+ 				      NFS_INO_INVALID_DATA);
+ 	spin_unlock(&dir->i_lock);
+-	return nfs_add_or_obtain(dentry, data->res.fh, data->res.fattr);
++	ret = nfs_add_or_obtain(dentry, data->res.fh, data->res.fattr);
++	if (!IS_ERR(ret))
++		return ret;
++	*statusp = PTR_ERR(ret);
++	return NULL;
+ }
+ 
+ static void nfs4_free_createdata(struct nfs4_createdata *data)
+@@ -5240,17 +5246,18 @@ static int nfs4_proc_symlink(struct inode *dir, struct dentry *dentry,
+ 
+ static struct dentry *_nfs4_proc_mkdir(struct inode *dir, struct dentry *dentry,
+ 				       struct iattr *sattr,
+-				       struct nfs4_label *label)
++				       struct nfs4_label *label, int *statusp)
+ {
+ 	struct nfs4_createdata *data;
+-	struct dentry *ret = ERR_PTR(-ENOMEM);
++	struct dentry *ret = NULL;
+ 
++	*statusp = -ENOMEM;
+ 	data = nfs4_alloc_createdata(dir, &dentry->d_name, sattr, NF4DIR);
+ 	if (data == NULL)
+ 		goto out;
+ 
+ 	data->arg.label = label;
+-	ret = nfs4_do_mkdir(dir, dentry, data);
++	ret = nfs4_do_mkdir(dir, dentry, data, statusp);
+ 
+ 	nfs4_free_createdata(data);
+ out:
+@@ -5273,11 +5280,12 @@ static struct dentry *nfs4_proc_mkdir(struct inode *dir, struct dentry *dentry,
+ 	if (!(server->attr_bitmask[2] & FATTR4_WORD2_MODE_UMASK))
+ 		sattr->ia_mode &= ~current_umask();
+ 	do {
+-		alias = _nfs4_proc_mkdir(dir, dentry, sattr, label);
+-		err = PTR_ERR_OR_ZERO(alias);
++		alias = _nfs4_proc_mkdir(dir, dentry, sattr, label, &err);
+ 		trace_nfs4_mkdir(dir, &dentry->d_name, err);
+-		err = nfs4_handle_exception(NFS_SERVER(dir), err,
+-				&exception);
++		if (err)
++			alias = ERR_PTR(nfs4_handle_exception(NFS_SERVER(dir),
++							      err,
++							      &exception));
+ 	} while (exception.retry);
+ 	nfs4_label_release_security(label);
+ 
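
The mkdir changes above replace ERR_PTR()-encoded returns with a dentry-or-NULL plus an explicit *statusp out-parameter. The kernel packs small negative errnos into pointer values, which is compact but makes "no dentry" and "error" easy to conflate; a userspace re-implementation of the encoding shows why:

    #include <errno.h>
    #include <stdio.h>

    #define MAX_ERRNO 4095

    static inline void *ERR_PTR(long error) { return (void *)error; }
    static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
    static inline int IS_ERR(const void *ptr)
    {
            return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
    }

    int main(void)
    {
            void *p = ERR_PTR(-ENOMEM);

            if (IS_ERR(p))                          /* true: top 4095 values */
                    printf("error %ld\n", PTR_ERR(p));  /* recovers -ENOMEM */
            if (!p)
                    printf("never reached: an error pointer is not NULL\n");
            return 0;
    }

With the out-parameter, NULL unambiguously means "no dentry, consult *statusp".
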
+diff --git a/fs/nfs/super.c b/fs/nfs/super.c
+index 9eea9e62afc9c3..91b5503b6f74d7 100644
+--- a/fs/nfs/super.c
++++ b/fs/nfs/super.c
+@@ -1051,6 +1051,16 @@ int nfs_reconfigure(struct fs_context *fc)
+ 
+ 	sync_filesystem(sb);
+ 
++	/*
++	 * The SB_RDONLY flag has been removed from the superblock during
++	 * mounts to prevent interference between different filesystems.
++	 * Similarly, it is also necessary to ignore the SB_RDONLY flag
++	 * during reconfiguration; otherwise, it may also result in the
++	 * creation of redundant superblocks when mounting a directory with
++	 * different rw and ro flags multiple times.
++	 */
++	fc->sb_flags_mask &= ~SB_RDONLY;
++
+ 	/*
+ 	 * Userspace mount programs that send binary options generally send
+ 	 * them populated with default values. We have no way to know which
+@@ -1308,8 +1318,17 @@ int nfs_get_tree_common(struct fs_context *fc)
+ 	if (IS_ERR(server))
+ 		return PTR_ERR(server);
+ 
++	/*
++	 * When NFS_MOUNT_UNSHARED is not set, NFS forces the sharing of a
++	 * superblock among each filesystem that mounts sub-directories
++	 * belonging to a single exported root path.
++	 * To prevent interference between different filesystems, the
++	 * SB_RDONLY flag should be removed from the superblock.
++	 */
+ 	if (server->flags & NFS_MOUNT_UNSHARED)
+ 		compare_super = NULL;
++	else
++		fc->sb_flags &= ~SB_RDONLY;
+ 
+ 	/* -o noac implies -o sync */
+ 	if (server->flags & NFS_MOUNT_NOAC)
+diff --git a/fs/nfs_common/nfslocalio.c b/fs/nfs_common/nfslocalio.c
+index 6a0bdea6d6449f..05c7c16e37ab4c 100644
+--- a/fs/nfs_common/nfslocalio.c
++++ b/fs/nfs_common/nfslocalio.c
+@@ -151,8 +151,7 @@ EXPORT_SYMBOL_GPL(nfs_localio_enable_client);
+  */
+ static bool nfs_uuid_put(nfs_uuid_t *nfs_uuid)
+ {
+-	LIST_HEAD(local_files);
+-	struct nfs_file_localio *nfl, *tmp;
++	struct nfs_file_localio *nfl;
+ 
+ 	spin_lock(&nfs_uuid->lock);
+ 	if (unlikely(!rcu_access_pointer(nfs_uuid->net))) {
+@@ -166,17 +165,42 @@ static bool nfs_uuid_put(nfs_uuid_t *nfs_uuid)
+ 		nfs_uuid->dom = NULL;
+ 	}
+ 
+-	list_splice_init(&nfs_uuid->files, &local_files);
+-	spin_unlock(&nfs_uuid->lock);
+-
+ 	/* Walk list of files and ensure their last references dropped */
+-	list_for_each_entry_safe(nfl, tmp, &local_files, list) {
+-		nfs_close_local_fh(nfl);
++
++	while ((nfl = list_first_entry_or_null(&nfs_uuid->files,
++					       struct nfs_file_localio,
++					       list)) != NULL) {
++		/* If nfs_uuid is already NULL, nfs_close_local_fh is
++		 * closing and we must wait, else we unlink and close.
++		 */
++		if (rcu_access_pointer(nfl->nfs_uuid) == NULL) {
++			/* nfs_close_local_fh() is doing the close
++			 * and we must wait until it unlinks the entry.
++			 */
++			wait_var_event_spinlock(nfl,
++						list_first_entry_or_null(
++							&nfs_uuid->files,
++							struct nfs_file_localio,
++							list) != nfl,
++						&nfs_uuid->lock);
++			continue;
++		}
++
++		/* Remove nfl from nfs_uuid->files list */
++		list_del_init(&nfl->list);
++		spin_unlock(&nfs_uuid->lock);
++
++		nfs_to_nfsd_file_put_local(&nfl->ro_file);
++		nfs_to_nfsd_file_put_local(&nfl->rw_file);
+ 		cond_resched();
+-	}
+ 
+-	spin_lock(&nfs_uuid->lock);
+-	BUG_ON(!list_empty(&nfs_uuid->files));
++		spin_lock(&nfs_uuid->lock);
++		/* Now we can allow racing nfs_close_local_fh() to
++		 * skip the locking.
++		 */
++		RCU_INIT_POINTER(nfl->nfs_uuid, NULL);
++		wake_up_var_locked(&nfl->nfs_uuid, &nfs_uuid->lock);
++	}
+ 
+ 	/* Remove client from nn->local_clients */
+ 	if (nfs_uuid->list_lock) {
+@@ -237,6 +261,7 @@ static void nfs_uuid_add_file(nfs_uuid_t *nfs_uuid, struct nfs_file_localio *nfl
+ struct nfsd_file *nfs_open_local_fh(nfs_uuid_t *uuid,
+ 		   struct rpc_clnt *rpc_clnt, const struct cred *cred,
+ 		   const struct nfs_fh *nfs_fh, struct nfs_file_localio *nfl,
++		   struct nfsd_file __rcu **pnf,
+ 		   const fmode_t fmode)
+ {
+ 	struct net *net;
+@@ -261,10 +286,9 @@ struct nfsd_file *nfs_open_local_fh(nfs_uuid_t *uuid,
+ 	rcu_read_unlock();
+ 	/* We have an implied reference to net thanks to nfsd_net_try_get */
+ 	localio = nfs_to->nfsd_open_local_fh(net, uuid->dom, rpc_clnt,
+-					     cred, nfs_fh, fmode);
+-	if (IS_ERR(localio))
+-		nfs_to_nfsd_net_put(net);
+-	else
++					     cred, nfs_fh, pnf, fmode);
++	nfs_to_nfsd_net_put(net);
++	if (!IS_ERR(localio))
+ 		nfs_uuid_add_file(uuid, nfl);
+ 
+ 	return localio;
+@@ -273,8 +297,6 @@ EXPORT_SYMBOL_GPL(nfs_open_local_fh);
+ 
+ void nfs_close_local_fh(struct nfs_file_localio *nfl)
+ {
+-	struct nfsd_file *ro_nf = NULL;
+-	struct nfsd_file *rw_nf = NULL;
+ 	nfs_uuid_t *nfs_uuid;
+ 
+ 	rcu_read_lock();
+@@ -285,28 +307,39 @@ void nfs_close_local_fh(struct nfs_file_localio *nfl)
+ 		return;
+ 	}
+ 
+-	ro_nf = rcu_access_pointer(nfl->ro_file);
+-	rw_nf = rcu_access_pointer(nfl->rw_file);
+-	if (ro_nf || rw_nf) {
+-		spin_lock(&nfs_uuid->lock);
+-		if (ro_nf)
+-			ro_nf = rcu_dereference_protected(xchg(&nfl->ro_file, NULL), 1);
+-		if (rw_nf)
+-			rw_nf = rcu_dereference_protected(xchg(&nfl->rw_file, NULL), 1);
+-
+-		/* Remove nfl from nfs_uuid->files list */
+-		RCU_INIT_POINTER(nfl->nfs_uuid, NULL);
+-		list_del_init(&nfl->list);
++	spin_lock(&nfs_uuid->lock);
++	if (!rcu_access_pointer(nfl->nfs_uuid)) {
++		/* nfs_uuid_put has finished here */
+ 		spin_unlock(&nfs_uuid->lock);
+ 		rcu_read_unlock();
+-
+-		if (ro_nf)
+-			nfs_to_nfsd_file_put_local(ro_nf);
+-		if (rw_nf)
+-			nfs_to_nfsd_file_put_local(rw_nf);
+ 		return;
+ 	}
++	if (list_empty(&nfs_uuid->files)) {
++		/* nfs_uuid_put() has started closing files; wait for it
++		 * to finish.
++		 */
++		spin_unlock(&nfs_uuid->lock);
++		rcu_read_unlock();
++		wait_var_event(&nfl->nfs_uuid,
++			       rcu_access_pointer(nfl->nfs_uuid) == NULL);
++		return;
++	}
++	/* tell nfs_uuid_put() to wait for us */
++	RCU_INIT_POINTER(nfl->nfs_uuid, NULL);
++	spin_unlock(&nfs_uuid->lock);
+ 	rcu_read_unlock();
++
++	nfs_to_nfsd_file_put_local(&nfl->ro_file);
++	nfs_to_nfsd_file_put_local(&nfl->rw_file);
++
++	/* Remove nfl from nfs_uuid->files list and signal nfs_uuid_put()
++	 * that we are done.  The moment we drop the spinlock the
++	 * nfs_uuid could be freed.
++	 */
++	spin_lock(&nfs_uuid->lock);
++	list_del_init(&nfl->list);
++	wake_up_var_locked(&nfl->nfs_uuid, &nfs_uuid->lock);
++	spin_unlock(&nfs_uuid->lock);
+ }
+ EXPORT_SYMBOL_GPL(nfs_close_local_fh);
+ 
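
The rewritten nfs_uuid_put()/nfs_close_local_fh() pair above is a two-sided handshake: each side detects under nfs_uuid->lock that the other is mid-close, marks state, and sleeps until the peer signals completion. A hypothetical condition-variable rendering of that shape (the kernel's wait_var_event/wake_up_var machinery differs in detail):

    #include <pthread.h>
    #include <stdbool.h>

    struct handoff {
            pthread_mutex_t lock;
            pthread_cond_t done;
            bool closing;           /* a peer has claimed the close */
            bool closed;            /* the close has completed */
    };

    /* Side A: the peer got there first, so wait for it to finish. */
    void wait_for_peer_close(struct handoff *h)
    {
            pthread_mutex_lock(&h->lock);
            while (h->closing && !h->closed)
                    pthread_cond_wait(&h->done, &h->lock);
            pthread_mutex_unlock(&h->lock);
    }

    /* Side B: finish the close and wake anyone waiting on it. */
    void finish_close(struct handoff *h)
    {
            pthread_mutex_lock(&h->lock);
            h->closed = true;
            pthread_cond_broadcast(&h->done);
            pthread_mutex_unlock(&h->lock);
    }
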
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index ab85e6a2454f4c..e108b6c705b459 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -378,14 +378,40 @@ nfsd_file_put(struct nfsd_file *nf)
+  * the reference of the nfsd_file.
+  */
+ struct net *
+-nfsd_file_put_local(struct nfsd_file *nf)
++nfsd_file_put_local(struct nfsd_file __rcu **pnf)
+ {
+-	struct net *net = nf->nf_net;
++	struct nfsd_file *nf;
++	struct net *net = NULL;
+ 
+-	nfsd_file_put(nf);
++	nf = unrcu_pointer(xchg(pnf, NULL));
++	if (nf) {
++		net = nf->nf_net;
++		nfsd_file_put(nf);
++	}
+ 	return net;
+ }
+ 
++/**
++ * nfsd_file_get_local - get nfsd_file reference and reference to net
++ * @nf: nfsd_file of which to get a reference
++ *
++ * Get reference to both the nfsd_file and nf->nf_net.
++ */
++struct nfsd_file *
++nfsd_file_get_local(struct nfsd_file *nf)
++{
++	struct net *net = nf->nf_net;
++
++	if (nfsd_net_try_get(net)) {
++		nf = nfsd_file_get(nf);
++		if (!nf)
++			nfsd_net_put(net);
++	} else {
++		nf = NULL;
++	}
++	return nf;
++}
++
+ /**
+  * nfsd_file_file - get the backing file of an nfsd_file
+  * @nf: nfsd_file of which to access the backing file.
+diff --git a/fs/nfsd/filecache.h b/fs/nfsd/filecache.h
+index 5865f9c7271214..722b26c71e454a 100644
+--- a/fs/nfsd/filecache.h
++++ b/fs/nfsd/filecache.h
+@@ -62,7 +62,8 @@ void nfsd_file_cache_shutdown(void);
+ int nfsd_file_cache_start_net(struct net *net);
+ void nfsd_file_cache_shutdown_net(struct net *net);
+ void nfsd_file_put(struct nfsd_file *nf);
+-struct net *nfsd_file_put_local(struct nfsd_file *nf);
++struct net *nfsd_file_put_local(struct nfsd_file __rcu **nf);
++struct nfsd_file *nfsd_file_get_local(struct nfsd_file *nf);
+ struct nfsd_file *nfsd_file_get(struct nfsd_file *nf);
+ struct file *nfsd_file_file(struct nfsd_file *nf);
+ void nfsd_file_close_inode_sync(struct inode *inode);
+diff --git a/fs/nfsd/localio.c b/fs/nfsd/localio.c
+index 238647fa379e32..80d9ff6608a7b9 100644
+--- a/fs/nfsd/localio.c
++++ b/fs/nfsd/localio.c
+@@ -24,21 +24,6 @@
+ #include "filecache.h"
+ #include "cache.h"
+ 
+-static const struct nfsd_localio_operations nfsd_localio_ops = {
+-	.nfsd_net_try_get  = nfsd_net_try_get,
+-	.nfsd_net_put  = nfsd_net_put,
+-	.nfsd_open_local_fh = nfsd_open_local_fh,
+-	.nfsd_file_put_local = nfsd_file_put_local,
+-	.nfsd_file_get = nfsd_file_get,
+-	.nfsd_file_put = nfsd_file_put,
+-	.nfsd_file_file = nfsd_file_file,
+-};
+-
+-void nfsd_localio_ops_init(void)
+-{
+-	nfs_to = &nfsd_localio_ops;
+-}
+-
+ /**
+  * nfsd_open_local_fh - lookup a local filehandle @nfs_fh and map to nfsd_file
+  *
+@@ -47,6 +32,7 @@ void nfsd_localio_ops_init(void)
+  * @rpc_clnt: rpc_clnt that the client established
+  * @cred: cred that the client established
+  * @nfs_fh: filehandle to lookup
++ * @pnf: place to find the nfsd_file, or to store it if *@pnf was NULL
+  * @fmode: fmode_t to use for open
+  *
+  * This function maps a local fh to a path on a local filesystem.
+@@ -57,10 +43,11 @@ void nfsd_localio_ops_init(void)
+  * set. Caller (NFS client) is responsible for calling nfsd_net_put and
+  * nfsd_file_put (via nfs_to_nfsd_file_put_local).
+  */
+-struct nfsd_file *
++static struct nfsd_file *
+ nfsd_open_local_fh(struct net *net, struct auth_domain *dom,
+ 		   struct rpc_clnt *rpc_clnt, const struct cred *cred,
+-		   const struct nfs_fh *nfs_fh, const fmode_t fmode)
++		   const struct nfs_fh *nfs_fh, struct nfsd_file __rcu **pnf,
++		   const fmode_t fmode)
+ {
+ 	int mayflags = NFSD_MAY_LOCALIO;
+ 	struct svc_cred rq_cred;
+@@ -71,6 +58,15 @@ nfsd_open_local_fh(struct net *net, struct auth_domain *dom,
+ 	if (nfs_fh->size > NFS4_FHSIZE)
+ 		return ERR_PTR(-EINVAL);
+ 
++	if (!nfsd_net_try_get(net))
++		return ERR_PTR(-ENXIO);
++
++	rcu_read_lock();
++	localio = nfsd_file_get(rcu_dereference(*pnf));
++	rcu_read_unlock();
++	if (localio)
++		return localio;
++
+ 	/* nfs_fh -> svc_fh */
+ 	fh_init(&fh, NFS4_FHSIZE);
+ 	fh.fh_handle.fh_size = nfs_fh->size;
+@@ -92,9 +88,47 @@ nfsd_open_local_fh(struct net *net, struct auth_domain *dom,
+ 	if (rq_cred.cr_group_info)
+ 		put_group_info(rq_cred.cr_group_info);
+ 
++	if (!IS_ERR(localio)) {
++		struct nfsd_file *new;
++		if (!nfsd_net_try_get(net)) {
++			nfsd_file_put(localio);
++			nfsd_net_put(net);
++			return ERR_PTR(-ENXIO);
++		}
++		nfsd_file_get(localio);
++	again:
++		new = unrcu_pointer(cmpxchg(pnf, NULL, RCU_INITIALIZER(localio)));
++		if (new) {
++			/* Some other thread installed an nfsd_file */
++			if (nfsd_file_get(new) == NULL)
++				goto again;
++			/*
++			 * Drop the ref we were going to install and the
++			 * one we were going to return.
++			 */
++			nfsd_file_put(localio);
++			nfsd_file_put(localio);
++			localio = new;
++		}
++	} else
++		nfsd_net_put(net);
++
+ 	return localio;
+ }
+-EXPORT_SYMBOL_GPL(nfsd_open_local_fh);
++
++static const struct nfsd_localio_operations nfsd_localio_ops = {
++	.nfsd_net_try_get  = nfsd_net_try_get,
++	.nfsd_net_put  = nfsd_net_put,
++	.nfsd_open_local_fh = nfsd_open_local_fh,
++	.nfsd_file_put_local = nfsd_file_put_local,
++	.nfsd_file_get_local = nfsd_file_get_local,
++	.nfsd_file_file = nfsd_file_file,
++};
++
++void nfsd_localio_ops_init(void)
++{
++	nfs_to = &nfsd_localio_ops;
++}
+ 
+ /*
+  * UUID_IS_LOCAL XDR functions
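
The cmpxchg sequence added to nfsd_open_local_fh() above is an install-once cache: publish our freshly opened file in *pnf if the slot is empty, otherwise adopt whatever won the race and drop our copy. A simplified userspace sketch with C11 atomics; unlike the kernel code, this version needs no "again" loop because its ref_get() cannot fail the way nfsd_file_get() can on a dying file (all names invented):

    #include <stdatomic.h>
    #include <stddef.h>

    struct file_ref { atomic_uint count; };

    void ref_get(struct file_ref *f) { atomic_fetch_add(&f->count, 1); }
    void ref_put(struct file_ref *f) { atomic_fetch_sub(&f->count, 1); }

    /*
     * Install @ours in the empty slot, or adopt the object some other
     * thread installed first.  Caller holds one ref on @ours and ends
     * up holding one ref on whatever is returned.
     */
    struct file_ref *install_once(_Atomic(struct file_ref *) *slot,
                                  struct file_ref *ours)
    {
            struct file_ref *expected = NULL;

            ref_get(ours);          /* the ref we try to park in the slot */
            if (atomic_compare_exchange_strong(slot, &expected, ours))
                    return ours;    /* installed; caller keeps its own ref */

            /* Lost the race: keep the winner, drop both refs on ours. */
            ref_get(expected);
            ref_put(ours);
            ref_put(ours);
            return expected;
    }
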
+diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c
+index 0d8f7fb15c2e54..dd0c8e560ef6a2 100644
+--- a/fs/nilfs2/btree.c
++++ b/fs/nilfs2/btree.c
+@@ -2102,11 +2102,13 @@ static int nilfs_btree_propagate(struct nilfs_bmap *btree,
+ 
+ 	ret = nilfs_btree_do_lookup(btree, path, key, NULL, level + 1, 0);
+ 	if (ret < 0) {
+-		if (unlikely(ret == -ENOENT))
++		if (unlikely(ret == -ENOENT)) {
+ 			nilfs_crit(btree->b_inode->i_sb,
+ 				   "writing node/leaf block does not appear in b-tree (ino=%lu) at key=%llu, level=%d",
+ 				   btree->b_inode->i_ino,
+ 				   (unsigned long long)key, level);
++			ret = -EINVAL;
++		}
+ 		goto out;
+ 	}
+ 
+diff --git a/fs/nilfs2/direct.c b/fs/nilfs2/direct.c
+index 893ab36824cc2b..2d8dc6b35b5477 100644
+--- a/fs/nilfs2/direct.c
++++ b/fs/nilfs2/direct.c
+@@ -273,6 +273,9 @@ static int nilfs_direct_propagate(struct nilfs_bmap *bmap,
+ 	dat = nilfs_bmap_get_dat(bmap);
+ 	key = nilfs_bmap_data_get_key(bmap, bh);
+ 	ptr = nilfs_direct_get_ptr(bmap, key);
++	if (ptr == NILFS_BMAP_INVALID_PTR)
++		return -EINVAL;
++
+ 	if (!buffer_nilfs_volatile(bh)) {
+ 		oldreq.pr_entry_nr = ptr;
+ 		newreq.pr_entry_nr = ptr;
+diff --git a/fs/ntfs3/index.c b/fs/ntfs3/index.c
+index 78d20e4baa2c9a..1bf2a6593dec66 100644
+--- a/fs/ntfs3/index.c
++++ b/fs/ntfs3/index.c
+@@ -2182,6 +2182,10 @@ static int indx_get_entry_to_replace(struct ntfs_index *indx,
+ 
+ 		e = hdr_first_de(&n->index->ihdr);
+ 		fnd_push(fnd, n, e);
++		if (!e) {
++			err = -EINVAL;
++			goto out;
++		}
+ 
+ 		if (!de_is_last(e)) {
+ 			/*
+@@ -2203,6 +2207,10 @@ static int indx_get_entry_to_replace(struct ntfs_index *indx,
+ 
+ 	n = fnd->nodes[level];
+ 	te = hdr_first_de(&n->index->ihdr);
++	if (!te) {
++		err = -EINVAL;
++		goto out;
++	}
+ 	/* Copy the candidate entry into the replacement entry buffer. */
+ 	re = kmalloc(le16_to_cpu(te->size) + sizeof(u64), GFP_NOFS);
+ 	if (!re) {
+diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
+index 3e2957a1e3605c..0f0d27d4644a9b 100644
+--- a/fs/ntfs3/inode.c
++++ b/fs/ntfs3/inode.c
+@@ -805,6 +805,10 @@ static ssize_t ntfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ 		ret = 0;
+ 		goto out;
+ 	}
++	if (is_compressed(ni)) {
++		ret = 0;
++		goto out;
++	}
+ 
+ 	ret = blockdev_direct_IO(iocb, inode, iter,
+ 				 wr ? ntfs_get_block_direct_IO_W :
+@@ -2068,5 +2072,6 @@ const struct address_space_operations ntfs_aops_cmpr = {
+ 	.read_folio	= ntfs_read_folio,
+ 	.readahead	= ntfs_readahead,
+ 	.dirty_folio	= block_dirty_folio,
++	.direct_IO	= ntfs_direct_IO,
+ };
+ // clang-format on
+diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
+index e272429da3db34..de7f12858729ac 100644
+--- a/fs/ocfs2/quota_local.c
++++ b/fs/ocfs2/quota_local.c
+@@ -674,7 +674,7 @@ int ocfs2_finish_quota_recovery(struct ocfs2_super *osb,
+ 			break;
+ 	}
+ out:
+-	kfree(rec);
++	ocfs2_free_quota_recovery(rec);
+ 	return status;
+ }
+ 
+diff --git a/fs/pidfs.c b/fs/pidfs.c
+index 50e69a9e104a60..87a53d2ae4bb78 100644
+--- a/fs/pidfs.c
++++ b/fs/pidfs.c
+@@ -336,7 +336,7 @@ static long pidfd_info(struct file *file, unsigned int cmd, unsigned long arg)
+ 	kinfo.pid = task_pid_vnr(task);
+ 	kinfo.mask |= PIDFD_INFO_PID;
+ 
+-	if (kinfo.pid == 0 || kinfo.tgid == 0 || (kinfo.ppid == 0 && kinfo.pid != 1))
++	if (kinfo.pid == 0 || kinfo.tgid == 0)
+ 		return -ESRCH;
+ 
+ copy_out:
+diff --git a/fs/pnode.c b/fs/pnode.c
+index fb77427df39e2e..ffd429b760d5d4 100644
+--- a/fs/pnode.c
++++ b/fs/pnode.c
+@@ -231,8 +231,8 @@ static int propagate_one(struct mount *m, struct mountpoint *dest_mp)
+ 	/* skip if mountpoint isn't visible in m */
+ 	if (!is_subdir(dest_mp->m_dentry, m->mnt.mnt_root))
+ 		return 0;
+-	/* skip if m is in the anon_ns we are emptying */
+-	if (m->mnt_ns->mntns_flags & MNTNS_PROPAGATING)
++	/* skip if m is in the anon_ns */
++	if (is_anon_ns(m->mnt_ns))
+ 		return 0;
+ 
+ 	if (peers(m, last_dest)) {
+diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
+index ecf774a8f1ca01..66093fa78aed7d 100644
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -151,8 +151,7 @@ extern bool is_size_safe_to_change(struct cifsInodeInfo *cifsInode, __u64 eof,
+ 				   bool from_readdir);
+ extern void cifs_update_eof(struct cifsInodeInfo *cifsi, loff_t offset,
+ 			    unsigned int bytes_written);
+-void cifs_write_subrequest_terminated(struct cifs_io_subrequest *wdata, ssize_t result,
+-				      bool was_async);
++void cifs_write_subrequest_terminated(struct cifs_io_subrequest *wdata, ssize_t result);
+ extern struct cifsFileInfo *find_writable_file(struct cifsInodeInfo *, int);
+ extern int cifs_get_writable_file(struct cifsInodeInfo *cifs_inode,
+ 				  int flags,
+diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
+index f55457b4b82e36..a3ba3346ed313f 100644
+--- a/fs/smb/client/cifssmb.c
++++ b/fs/smb/client/cifssmb.c
+@@ -1725,7 +1725,7 @@ cifs_writev_callback(struct mid_q_entry *mid)
+ 			      server->credits, server->in_flight,
+ 			      0, cifs_trace_rw_credits_write_response_clear);
+ 	wdata->credits.value = 0;
+-	cifs_write_subrequest_terminated(wdata, result, true);
++	cifs_write_subrequest_terminated(wdata, result);
+ 	release_mid(mid);
+ 	trace_smb3_rw_credits(credits.rreq_debug_id, credits.rreq_debug_index, 0,
+ 			      server->credits, server->in_flight,
+@@ -1813,7 +1813,7 @@ cifs_async_writev(struct cifs_io_subrequest *wdata)
+ out:
+ 	if (rc) {
+ 		add_credits_and_wake_if(wdata->server, &wdata->credits, 0);
+-		cifs_write_subrequest_terminated(wdata, rc, false);
++		cifs_write_subrequest_terminated(wdata, rc);
+ 	}
+ }
+ 
+@@ -2753,10 +2753,10 @@ int cifs_query_reparse_point(const unsigned int xid,
+ 
+ 	io_req->TotalParameterCount = 0;
+ 	io_req->TotalDataCount = 0;
+-	io_req->MaxParameterCount = cpu_to_le32(2);
++	io_req->MaxParameterCount = cpu_to_le32(0);
+ 	/* BB find exact data count max from sess structure BB */
+ 	io_req->MaxDataCount = cpu_to_le32(CIFSMaxBufSize & 0xFFFFFF00);
+-	io_req->MaxSetupCount = 4;
++	io_req->MaxSetupCount = 1;
+ 	io_req->Reserved = 0;
+ 	io_req->ParameterOffset = 0;
+ 	io_req->DataCount = 0;
+@@ -2783,6 +2783,22 @@ int cifs_query_reparse_point(const unsigned int xid,
+ 		goto error;
+ 	}
+ 
++	/* SetupCount must be 1, otherwise offset to ByteCount is incorrect. */
++	if (io_rsp->SetupCount != 1) {
++		rc = -EIO;
++		goto error;
++	}
++
++	/*
++	 * ReturnedDataLen is output length of executed IOCTL.
++	 * DataCount is output length transferred over network.
++	 * Check that we have full FSCTL_GET_REPARSE_POINT buffer.
++	 */
++	if (data_count != le16_to_cpu(io_rsp->ReturnedDataLen)) {
++		rc = -EIO;
++		goto error;
++	}
++
+ 	end = 2 + get_bcc(&io_rsp->hdr) + (__u8 *)&io_rsp->ByteCount;
+ 	start = (__u8 *)&io_rsp->hdr.Protocol + data_offset;
+ 	if (start >= end) {
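
The new SetupCount and ReturnedDataLen checks above follow the standard rule for parsing server responses: validate every length field against the bytes actually received before using it to compute offsets. A generic sketch over a hypothetical wire format:

    #include <stdint.h>
    #include <string.h>

    struct rsp_hdr {
            uint16_t setup_count;
            uint16_t returned_len;  /* payload length the server claims */
    };

    /* Returns the payload length, or -1 if the response is malformed. */
    int parse_rsp(const uint8_t *buf, size_t received)
    {
            struct rsp_hdr h;

            if (received < sizeof(h))
                    return -1;              /* short read */
            memcpy(&h, buf, sizeof(h));
            if (h.setup_count != 1)
                    return -1;              /* fixed by the protocol */
            if (h.returned_len > received - sizeof(h))
                    return -1;              /* claim exceeds received data */
            return h.returned_len;
    }
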
+diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
+index 950aa4f912f5cd..9835672267d277 100644
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -130,7 +130,7 @@ static void cifs_issue_write(struct netfs_io_subrequest *subreq)
+ 	else
+ 		trace_netfs_sreq(subreq, netfs_sreq_trace_fail);
+ 	add_credits_and_wake_if(wdata->server, &wdata->credits, 0);
+-	cifs_write_subrequest_terminated(wdata, rc, false);
++	cifs_write_subrequest_terminated(wdata, rc);
+ 	goto out;
+ }
+ 
+@@ -219,7 +219,8 @@ static void cifs_issue_read(struct netfs_io_subrequest *subreq)
+ 			goto failed;
+ 	}
+ 
+-	if (subreq->rreq->origin != NETFS_DIO_READ)
++	if (subreq->rreq->origin != NETFS_UNBUFFERED_READ &&
++	    subreq->rreq->origin != NETFS_DIO_READ)
+ 		__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
+ 
+ 	trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+@@ -998,15 +999,18 @@ int cifs_open(struct inode *inode, struct file *file)
+ 		rc = cifs_get_readable_path(tcon, full_path, &cfile);
+ 	}
+ 	if (rc == 0) {
+-		if (file->f_flags == cfile->f_flags) {
++		unsigned int oflags = file->f_flags & ~(O_CREAT|O_EXCL|O_TRUNC);
++		unsigned int cflags = cfile->f_flags & ~(O_CREAT|O_EXCL|O_TRUNC);
++
++		if (cifs_convert_flags(oflags, 0) == cifs_convert_flags(cflags, 0) &&
++		    (oflags & (O_SYNC|O_DIRECT)) == (cflags & (O_SYNC|O_DIRECT))) {
+ 			file->private_data = cfile;
+ 			spin_lock(&CIFS_I(inode)->deferred_lock);
+ 			cifs_del_deferred_close(cfile);
+ 			spin_unlock(&CIFS_I(inode)->deferred_lock);
+ 			goto use_cache;
+-		} else {
+-			_cifsFileInfo_put(cfile, true, false);
+ 		}
++		_cifsFileInfo_put(cfile, true, false);
+ 	} else {
+ 		/* hard link on the deferred close file */
+ 		rc = cifs_get_hardlink_path(tcon, inode, file);
+@@ -2423,8 +2427,7 @@ int cifs_lock(struct file *file, int cmd, struct file_lock *flock)
+ 	return rc;
+ }
+ 
+-void cifs_write_subrequest_terminated(struct cifs_io_subrequest *wdata, ssize_t result,
+-				      bool was_async)
++void cifs_write_subrequest_terminated(struct cifs_io_subrequest *wdata, ssize_t result)
+ {
+ 	struct netfs_io_request *wreq = wdata->rreq;
+ 	struct netfs_inode *ictx = netfs_inode(wreq->inode);
+@@ -2441,7 +2444,7 @@ void cifs_write_subrequest_terminated(struct cifs_io_subrequest *wdata, ssize_t
+ 			netfs_resize_file(ictx, wrend, true);
+ 	}
+ 
+-	netfs_write_subrequest_terminated(&wdata->subreq, result, was_async);
++	netfs_write_subrequest_terminated(&wdata->subreq, result);
+ }
+ 
+ struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *cifs_inode,
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 4e28632b5fd661..399185ca7cacb0 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -4888,7 +4888,7 @@ smb2_writev_callback(struct mid_q_entry *mid)
+ 			      0, cifs_trace_rw_credits_write_response_clear);
+ 	wdata->credits.value = 0;
+ 	trace_netfs_sreq(&wdata->subreq, netfs_sreq_trace_io_progress);
+-	cifs_write_subrequest_terminated(wdata, result ?: written, true);
++	cifs_write_subrequest_terminated(wdata, result ?: written);
+ 	release_mid(mid);
+ 	trace_smb3_rw_credits(rreq_debug_id, subreq_debug_index, 0,
+ 			      server->credits, server->in_flight,
+@@ -5061,7 +5061,7 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
+ 				      -(int)wdata->credits.value,
+ 				      cifs_trace_rw_credits_write_response_clear);
+ 		add_credits_and_wake_if(wdata->server, &wdata->credits, 0);
+-		cifs_write_subrequest_terminated(wdata, rc, true);
++		cifs_write_subrequest_terminated(wdata, rc);
+ 	}
+ }
+ 
+diff --git a/fs/squashfs/super.c b/fs/squashfs/super.c
+index 67c55fe32ce88d..992ea0e372572f 100644
+--- a/fs/squashfs/super.c
++++ b/fs/squashfs/super.c
+@@ -202,6 +202,11 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	msblk->panic_on_errors = (opts->errors == Opt_errors_panic);
+ 
+ 	msblk->devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE);
++	if (!msblk->devblksize) {
++		errorf(fc, "squashfs: unable to set blocksize\n");
++		return -EINVAL;
++	}
++
+ 	msblk->devblksize_log2 = ffz(~msblk->devblksize);
+ 
+ 	mutex_init(&msblk->meta_index_mutex);
+diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
+index 26a04a78348967..63151feb9c3fd5 100644
+--- a/fs/xfs/xfs_aops.c
++++ b/fs/xfs/xfs_aops.c
+@@ -436,6 +436,25 @@ xfs_map_blocks(
+ 	return 0;
+ }
+ 
++static bool
++xfs_ioend_needs_wq_completion(
++	struct iomap_ioend	*ioend)
++{
++	/* Changing inode size requires a transaction. */
++	if (xfs_ioend_is_append(ioend))
++		return true;
++
++	/* Extent manipulation requires a transaction. */
++	if (ioend->io_flags & (IOMAP_IOEND_UNWRITTEN | IOMAP_IOEND_SHARED))
++		return true;
++
++	/* Page cache invalidation cannot be done in irq context. */
++	if (ioend->io_flags & IOMAP_IOEND_DONTCACHE)
++		return true;
++
++	return false;
++}
++
+ static int
+ xfs_submit_ioend(
+ 	struct iomap_writepage_ctx *wpc,
+@@ -460,8 +479,7 @@ xfs_submit_ioend(
+ 	memalloc_nofs_restore(nofs_flag);
+ 
+ 	/* send ioends that might require a transaction to the completion wq */
+-	if (xfs_ioend_is_append(ioend) ||
+-	    (ioend->io_flags & (IOMAP_IOEND_UNWRITTEN | IOMAP_IOEND_SHARED)))
++	if (xfs_ioend_needs_wq_completion(ioend))
+ 		ioend->io_bio.bi_end_io = xfs_end_bio;
+ 
+ 	if (status)
+diff --git a/fs/xfs/xfs_discard.c b/fs/xfs/xfs_discard.c
+index c1a306268ae439..94d0873bcd6289 100644
+--- a/fs/xfs/xfs_discard.c
++++ b/fs/xfs/xfs_discard.c
+@@ -167,6 +167,14 @@ xfs_discard_extents(
+ 	return error;
+ }
+ 
++/*
++ * Care must be taken setting up the trim cursor as the perags may not have been
++ * initialised when the cursor is initialised. e.g. a clean mount which hasn't
++ * read in AGFs and the first operation run on the mounted fs is a trim. This
++ * can result in perag fields that aren't initialised until
++ * xfs_trim_gather_extents() calls xfs_alloc_read_agf() to lock down the AG for
++ * the free space search.
++ */
+ struct xfs_trim_cur {
+ 	xfs_agblock_t	start;
+ 	xfs_extlen_t	count;
+@@ -204,6 +212,14 @@ xfs_trim_gather_extents(
+ 	if (error)
+ 		goto out_trans_cancel;
+ 
++	/*
++	 * First time through tcur->count will not have been initialised as
++	 * pag->pagf_longest is not guaranteed to be valid before we read
++	 * the AGF buffer above.
++	 */
++	if (!tcur->count)
++		tcur->count = pag->pagf_longest;
++
+ 	if (tcur->by_bno) {
+ 		/* sub-AG discard request always starts at tcur->start */
+ 		cur = xfs_bnobt_init_cursor(mp, tp, agbp, pag);
+@@ -350,7 +366,6 @@ xfs_trim_perag_extents(
+ {
+ 	struct xfs_trim_cur	tcur = {
+ 		.start		= start,
+-		.count		= pag->pagf_longest,
+ 		.end		= end,
+ 		.minlen		= minlen,
+ 	};
+diff --git a/include/crypto/sig.h b/include/crypto/sig.h
+index 11024708c06929..fa6dafafab3f0d 100644
+--- a/include/crypto/sig.h
++++ b/include/crypto/sig.h
+@@ -128,7 +128,7 @@ static inline void crypto_free_sig(struct crypto_sig *tfm)
+ /**
+  * crypto_sig_keysize() - Get key size
+  *
+- * Function returns the key size in bytes.
++ * Function returns the key size in bits.
+  * Function assumes that the key is already set in the transformation. If this
+  * function is called without a setkey or with a failed setkey, you may end up
+  * in a NULL dereference.
+diff --git a/include/hyperv/hvgdk_mini.h b/include/hyperv/hvgdk_mini.h
+index abf0bd76e3703a..6f5976aca3e860 100644
+--- a/include/hyperv/hvgdk_mini.h
++++ b/include/hyperv/hvgdk_mini.h
+@@ -475,7 +475,7 @@ union hv_vp_assist_msr_contents {	 /* HV_REGISTER_VP_ASSIST_PAGE */
+ #define HVCALL_CREATE_PORT				0x0095
+ #define HVCALL_CONNECT_PORT				0x0096
+ #define HVCALL_START_VP					0x0099
+-#define HVCALL_GET_VP_ID_FROM_APIC_ID			0x009a
++#define HVCALL_GET_VP_INDEX_FROM_APIC_ID			0x009a
+ #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE	0x00af
+ #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST	0x00b0
+ #define HVCALL_SIGNAL_EVENT_DIRECT			0x00c0
+diff --git a/include/kunit/clk.h b/include/kunit/clk.h
+index 0afae7688157b8..f226044cc78d11 100644
+--- a/include/kunit/clk.h
++++ b/include/kunit/clk.h
+@@ -6,6 +6,7 @@ struct clk;
+ struct clk_hw;
+ struct device;
+ struct device_node;
++struct of_phandle_args;
+ struct kunit;
+ 
+ struct clk *
+diff --git a/include/linux/arm_sdei.h b/include/linux/arm_sdei.h
+index 255701e1251b4a..f652a5028b5907 100644
+--- a/include/linux/arm_sdei.h
++++ b/include/linux/arm_sdei.h
+@@ -46,12 +46,12 @@ int sdei_unregister_ghes(struct ghes *ghes);
+ /* For use by arch code when CPU hotplug notifiers are not appropriate. */
+ int sdei_mask_local_cpu(void);
+ int sdei_unmask_local_cpu(void);
+-void __init sdei_init(void);
++void __init acpi_sdei_init(void);
+ void sdei_handler_abort(void);
+ #else
+ static inline int sdei_mask_local_cpu(void) { return 0; }
+ static inline int sdei_unmask_local_cpu(void) { return 0; }
+-static inline void sdei_init(void) { }
++static inline void acpi_sdei_init(void) { }
+ static inline void sdei_handler_abort(void) { }
+ #endif /* CONFIG_ARM_SDE_INTERFACE */
+ 
+diff --git a/include/linux/bio.h b/include/linux/bio.h
+index b786ec5bcc81d0..b474a47ec7eefe 100644
+--- a/include/linux/bio.h
++++ b/include/linux/bio.h
+@@ -291,7 +291,7 @@ static inline void bio_first_folio(struct folio_iter *fi, struct bio *bio,
+ 
+ 	fi->folio = page_folio(bvec->bv_page);
+ 	fi->offset = bvec->bv_offset +
+-			PAGE_SIZE * (bvec->bv_page - &fi->folio->page);
++			PAGE_SIZE * folio_page_idx(fi->folio, bvec->bv_page);
+ 	fi->_seg_count = bvec->bv_len;
+ 	fi->length = min(folio_size(fi->folio) - fi->offset, fi->_seg_count);
+ 	fi->_next = folio_next(fi->folio);
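
The pointer subtraction being replaced assumes struct pages are virtually
contiguous, which does not hold under SPARSEMEM without VMEMMAP.
folio_page_idx() falls back to PFN arithmetic in that configuration,
roughly (paraphrased from include/linux/mm.h):

	#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
	#define folio_page_idx(folio, p)	(page_to_pfn(p) - folio_pfn(folio))
	#else
	#define folio_page_idx(folio, p)	((p) - &(folio)->page)
	#endif
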
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index 9734544b6957cf..d1f02f8e3e55f4 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -356,7 +356,11 @@ enum {
+ 	INSN_F_SPI_MASK = 0x3f, /* 6 bits */
+ 	INSN_F_SPI_SHIFT = 3, /* shifted 3 bits to the left */
+ 
+-	INSN_F_STACK_ACCESS = BIT(9), /* we need 10 bits total */
++	INSN_F_STACK_ACCESS = BIT(9),
++
++	INSN_F_DST_REG_STACK = BIT(10), /* dst_reg is PTR_TO_STACK */
++	INSN_F_SRC_REG_STACK = BIT(11), /* src_reg is PTR_TO_STACK */
++	/* total 12 bits are used now. */
+ };
+ 
+ static_assert(INSN_F_FRAMENO_MASK + 1 >= MAX_CALL_FRAMES);
+@@ -365,9 +369,9 @@ static_assert(INSN_F_SPI_MASK + 1 >= MAX_BPF_STACK / 8);
+ struct bpf_insn_hist_entry {
+ 	u32 idx;
+ 	/* insn idx can't be bigger than 1 million */
+-	u32 prev_idx : 22;
+-	/* special flags, e.g., whether insn is doing register stack spill/load */
+-	u32 flags : 10;
++	u32 prev_idx : 20;
++	/* special INSN_F_xxx flags */
++	u32 flags : 12;
+ 	/* additional registers that need precision tracking when this
+ 	 * jump is backtracked, vector of six 10-bit records
+ 	 */
+diff --git a/include/linux/bvec.h b/include/linux/bvec.h
+index 204b22a99c4ba6..0a80e1f9aa201c 100644
+--- a/include/linux/bvec.h
++++ b/include/linux/bvec.h
+@@ -57,9 +57,12 @@ static inline void bvec_set_page(struct bio_vec *bv, struct page *page,
+  * @offset:	offset into the folio
+  */
+ static inline void bvec_set_folio(struct bio_vec *bv, struct folio *folio,
+-		unsigned int len, unsigned int offset)
++		size_t len, size_t offset)
+ {
+-	bvec_set_page(bv, &folio->page, len, offset);
++	unsigned long nr = offset / PAGE_SIZE;
++
++	WARN_ON_ONCE(len > UINT_MAX);
++	bvec_set_page(bv, folio_page(folio, nr), len, offset % PAGE_SIZE);
+ }
+ 
+ /**
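
A usage sketch with hypothetical values: offsets past the first page of a
large folio are now folded into the right sub-page, so bv_offset always
stays below PAGE_SIZE.

	struct bio_vec bv;

	/* 16 KiB folio, 512 bytes at byte offset 8192 (4 KiB pages):
	 * resolves to the folio's page 2 with in-page offset 0.
	 */
	bvec_set_folio(&bv, folio, 512, 8192);
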
+diff --git a/include/linux/coresight.h b/include/linux/coresight.h
+index d79a242b271d6e..cfcf6e4707ed94 100644
+--- a/include/linux/coresight.h
++++ b/include/linux/coresight.h
+@@ -723,7 +723,7 @@ coresight_find_output_type(struct coresight_platform_data *pdata,
+ 			   union coresight_dev_subtype subtype);
+ 
+ int coresight_init_driver(const char *drv, struct amba_driver *amba_drv,
+-			  struct platform_driver *pdev_drv);
++			  struct platform_driver *pdev_drv, struct module *owner);
+ 
+ void coresight_remove_driver(struct amba_driver *amba_drv,
+ 			     struct platform_driver *pdev_drv);
+diff --git a/include/linux/exportfs.h b/include/linux/exportfs.h
+index fc93f0abf513cd..25c4a5afbd4432 100644
+--- a/include/linux/exportfs.h
++++ b/include/linux/exportfs.h
+@@ -314,6 +314,9 @@ static inline bool exportfs_can_decode_fh(const struct export_operations *nop)
+ static inline bool exportfs_can_encode_fh(const struct export_operations *nop,
+ 					  int fh_flags)
+ {
++	if (!nop)
++		return false;
++
+ 	/*
+ 	 * If a non-decodeable file handle was requested, we only need to make
+ 	 * sure that filesystem did not opt-out of encoding fid.
+@@ -321,6 +324,13 @@ static inline bool exportfs_can_encode_fh(const struct export_operations *nop,
+ 	if (fh_flags & EXPORT_FH_FID)
+ 		return exportfs_can_encode_fid(nop);
+ 
++	/*
++	 * If a connectable file handle was requested, we need to make sure
++	 * that the filesystem can also decode connected file handles.
++	 */
++	if ((fh_flags & EXPORT_FH_CONNECTABLE) && !nop->fh_to_parent)
++		return false;
++
+ 	/*
+ 	 * If a decodeable file handle was requested, we need to make sure that
+ 	 * filesystem can also decode file handles.
+diff --git a/include/linux/fscache.h b/include/linux/fscache.h
+index 9de27643607fb1..266e6c9e6f83ad 100644
+--- a/include/linux/fscache.h
++++ b/include/linux/fscache.h
+@@ -628,7 +628,7 @@ static inline void fscache_write_to_cache(struct fscache_cookie *cookie,
+ 					 term_func, term_func_priv,
+ 					 using_pgpriv2, caching);
+ 	else if (term_func)
+-		term_func(term_func_priv, -ENOBUFS, false);
++		term_func(term_func_priv, -ENOBUFS);
+ 
+ }
+ 
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index ef9a90ca0fbd6a..daae1d6d11a744 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -740,8 +740,9 @@ struct hid_descriptor {
+ 	__le16 bcdHID;
+ 	__u8  bCountryCode;
+ 	__u8  bNumDescriptors;
++	struct hid_class_descriptor rpt_desc;
+ 
+-	struct hid_class_descriptor desc[1];
++	struct hid_class_descriptor opt_descs[];
+ } __attribute__ ((packed));
+ 
+ #define HID_DEVICE(b, g, ven, prod)					\
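
With the one-element array replaced by a named mandatory member plus a
flexible array, a caller that used desc[0] for the report descriptor
would change along these lines (sketch; hdesc assumed already located in
the interface descriptor buffer):

	unsigned int rsize = le16_to_cpu(hdesc->rpt_desc.wDescriptorLength);
	/* previously: hdesc->desc[0].wDescriptorLength */
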
+diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h
+index 457b4fba88bd00..7edc3fb0641cba 100644
+--- a/include/linux/ieee80211.h
++++ b/include/linux/ieee80211.h
+@@ -111,6 +111,8 @@
+ 
+ /* bits unique to S1G beacon */
+ #define IEEE80211_S1G_BCN_NEXT_TBTT	0x100
++#define IEEE80211_S1G_BCN_CSSID		0x200
++#define IEEE80211_S1G_BCN_ANO		0x400
+ 
+ /* see 802.11ah-2016 9.9 NDP CMAC frames */
+ #define IEEE80211_S1G_1MHZ_NDP_BITS	25
+@@ -153,9 +155,6 @@
+ 
+ #define IEEE80211_ANO_NETTYPE_WILD              15
+ 
+-/* bits unique to S1G beacon */
+-#define IEEE80211_S1G_BCN_NEXT_TBTT    0x100
+-
+ /* control extension - for IEEE80211_FTYPE_CTL | IEEE80211_STYPE_CTL_EXT */
+ #define IEEE80211_CTL_EXT_POLL		0x2000
+ #define IEEE80211_CTL_EXT_SPR		0x3000
+@@ -627,6 +626,42 @@ static inline bool ieee80211_is_s1g_beacon(__le16 fc)
+ 	       cpu_to_le16(IEEE80211_FTYPE_EXT | IEEE80211_STYPE_S1G_BEACON);
+ }
+ 
++/**
++ * ieee80211_s1g_has_next_tbtt - check if IEEE80211_S1G_BCN_NEXT_TBTT is set
++ * @fc: frame control bytes in little-endian byteorder
++ * Return: whether or not the frame contains the variable-length
++ *	next TBTT field
++ */
++static inline bool ieee80211_s1g_has_next_tbtt(__le16 fc)
++{
++	return ieee80211_is_s1g_beacon(fc) &&
++		(fc & cpu_to_le16(IEEE80211_S1G_BCN_NEXT_TBTT));
++}
++
++/**
++ * ieee80211_s1g_has_ano - check if IEEE80211_S1G_BCN_ANO is set
++ * @fc: frame control bytes in little-endian byteorder
++ * Return: whether or not the frame contains the variable-length
++ *	ANO field
++ */
++static inline bool ieee80211_s1g_has_ano(__le16 fc)
++{
++	return ieee80211_is_s1g_beacon(fc) &&
++		(fc & cpu_to_le16(IEEE80211_S1G_BCN_ANO));
++}
++
++/**
++ * ieee80211_s1g_has_cssid - check if IEEE80211_S1G_BCN_CSSID is set
++ * @fc: frame control bytes in little-endian byteorder
++ * Return: whether or not the frame contains the variable-length
++ *	compressed SSID field
++ */
++static inline bool ieee80211_s1g_has_cssid(__le16 fc)
++{
++	return ieee80211_is_s1g_beacon(fc) &&
++		(fc & cpu_to_le16(IEEE80211_S1G_BCN_CSSID));
++}
++
+ /**
+  * ieee80211_is_s1g_short_beacon - check if frame is an S1G short beacon
+  * @fc: frame control bytes in little-endian byteorder
+@@ -1245,16 +1280,40 @@ struct ieee80211_ext {
+ 			u8 change_seq;
+ 			u8 variable[0];
+ 		} __packed s1g_beacon;
+-		struct {
+-			u8 sa[ETH_ALEN];
+-			__le32 timestamp;
+-			u8 change_seq;
+-			u8 next_tbtt[3];
+-			u8 variable[0];
+-		} __packed s1g_short_beacon;
+ 	} u;
+ } __packed __aligned(2);
+ 
++/**
++ * ieee80211_s1g_optional_len - determine length of optional S1G beacon fields
++ * @fc: frame control bytes in little-endian byteorder
++ * Return: total length in bytes of the optional fixed-length fields
++ *
++ * S1G beacons may contain up to three optional fixed-length fields that
++ * precede the variable-length elements. Whether these fields are present
++ * is indicated by flags in the frame control field.
++ *
++ * From IEEE 802.11-2024 section 9.3.4.3:
++ *  - Next TBTT field may be 0 or 3 bytes
++ *  - Short SSID field may be 0 or 4 bytes
++ *  - Access Network Options (ANO) field may be 0 or 1 byte
++ */
++static inline size_t
++ieee80211_s1g_optional_len(__le16 fc)
++{
++	size_t len = 0;
++
++	if (ieee80211_s1g_has_next_tbtt(fc))
++		len += 3;
++
++	if (ieee80211_s1g_has_cssid(fc))
++		len += 4;
++
++	if (ieee80211_s1g_has_ano(fc))
++		len += 1;
++
++	return len;
++}
++
+ #define IEEE80211_TWT_CONTROL_NDP			BIT(0)
+ #define IEEE80211_TWT_CONTROL_RESP_MODE			BIT(1)
+ #define IEEE80211_TWT_CONTROL_NEG_TYPE_BROADCAST	BIT(3)
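
A sketch of the intended use (helper name assumed): a parser finds the
start of the variable-length elements by skipping whichever optional
fixed fields the frame control flags declare.

	static const u8 *s1g_beacon_ies(const struct ieee80211_ext *ext)
	{
		return ext->u.s1g_beacon.variable +
		       ieee80211_s1g_optional_len(ext->frame_control);
	}
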
+diff --git a/include/linux/iomap.h b/include/linux/iomap.h
+index 68416b135151d7..522644d62f30f0 100644
+--- a/include/linux/iomap.h
++++ b/include/linux/iomap.h
+@@ -377,13 +377,16 @@ sector_t iomap_bmap(struct address_space *mapping, sector_t bno,
+ #define IOMAP_IOEND_BOUNDARY		(1U << 2)
+ /* is direct I/O */
+ #define IOMAP_IOEND_DIRECT		(1U << 3)
++/* is DONTCACHE I/O */
++#define IOMAP_IOEND_DONTCACHE		(1U << 4)
+ 
+ /*
+  * Flags that if set on either ioend prevent the merge of two ioends.
+  * (IOMAP_IOEND_BOUNDARY also prevents merges, but only one-way)
+  */
+ #define IOMAP_IOEND_NOMERGE_FLAGS \
+-	(IOMAP_IOEND_SHARED | IOMAP_IOEND_UNWRITTEN | IOMAP_IOEND_DIRECT)
++	(IOMAP_IOEND_SHARED | IOMAP_IOEND_UNWRITTEN | IOMAP_IOEND_DIRECT | \
++	 IOMAP_IOEND_DONTCACHE)
+ 
+ /*
+  * Structure for writeback I/O completions.
+diff --git a/include/linux/mdio.h b/include/linux/mdio.h
+index 3c3deac57894ef..e43ff9f980a46b 100644
+--- a/include/linux/mdio.h
++++ b/include/linux/mdio.h
+@@ -45,10 +45,7 @@ struct mdio_device {
+ 	unsigned int reset_deassert_delay;
+ };
+ 
+-static inline struct mdio_device *to_mdio_device(const struct device *dev)
+-{
+-	return container_of(dev, struct mdio_device, dev);
+-}
++#define to_mdio_device(__dev)	container_of_const(__dev, struct mdio_device, dev)
+ 
+ /* struct mdio_driver_common: Common to all MDIO drivers */
+ struct mdio_driver_common {
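
The macro form matters because container_of_const() preserves the
constness of its argument, where the old inline accepted a const pointer
and silently returned a non-const one; a sketch:

	static const struct mdio_device *example(const struct device *dev)
	{
		return to_mdio_device(dev);	/* const in, const out */
	}
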
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index d1dfbad9a44730..e6ba8f4f4bd1f4 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -398,6 +398,7 @@ struct mlx5_core_rsc_common {
+ 	enum mlx5_res_type	res;
+ 	refcount_t		refcount;
+ 	struct completion	free;
++	bool			invalid;
+ };
+ 
+ struct mlx5_uars_page {
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index fdda6b16263b35..e51dba8398f747 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -4265,4 +4265,62 @@ int arch_lock_shadow_stack_status(struct task_struct *t, unsigned long status);
+ #define VM_SEALED_SYSMAP	VM_NONE
+ #endif
+ 
++/*
++ * DMA mapping IDs for page_pool
++ *
++ * When DMA-mapping a page, page_pool allocates an ID (from an xarray) and
++ * stashes it in the upper bits of page->pp_magic. We always want to be able to
++ * unambiguously identify page pool pages (using page_pool_page_is_pp()). Non-PP
++ * pages can have arbitrary kernel pointers stored in the same field as pp_magic
++ * (since it overlaps with page->lru.next), so we must ensure that we cannot
++ * mistake a valid kernel pointer with any of the values we write into this
++ * field.
++ *
++ * On architectures that set POISON_POINTER_DELTA, this is already ensured,
++ * since this value becomes part of PP_SIGNATURE; meaning we can just use the
++ * space between the PP_SIGNATURE value (without POISON_POINTER_DELTA), and the
++ * lowest bits of POISON_POINTER_DELTA. On arches where POISON_POINTER_DELTA is
++ * 0, we make sure that we leave the two topmost bits empty, as that guarantees
++ * we won't mistake a valid kernel pointer for a value we set, regardless of the
++ * VMSPLIT setting.
++ *
++ * Altogether, this means that the number of bits available is constrained by
++ * the size of an unsigned long (at the upper end, subtracting two bits per the
++ * above), and the definition of PP_SIGNATURE (with or without
++ * POISON_POINTER_DELTA).
++ */
++#define PP_DMA_INDEX_SHIFT (1 + __fls(PP_SIGNATURE - POISON_POINTER_DELTA))
++#if POISON_POINTER_DELTA > 0
++/* PP_SIGNATURE includes POISON_POINTER_DELTA, so limit the size of the DMA
++ * index to not overlap with that if set
++ */
++#define PP_DMA_INDEX_BITS MIN(32, __ffs(POISON_POINTER_DELTA) - PP_DMA_INDEX_SHIFT)
++#else
++/* Always leave out the topmost two; see above. */
++#define PP_DMA_INDEX_BITS MIN(32, BITS_PER_LONG - PP_DMA_INDEX_SHIFT - 2)
++#endif
++
++#define PP_DMA_INDEX_MASK GENMASK(PP_DMA_INDEX_BITS + PP_DMA_INDEX_SHIFT - 1, \
++				  PP_DMA_INDEX_SHIFT)
++
++/* Mask used for checking in page_pool_page_is_pp() below. page->pp_magic is
++ * OR'ed with PP_SIGNATURE after the allocation in order to preserve bit 0 for
++ * the head page of compound page and bit 1 for pfmemalloc page, as well as the
++ * bits used for the DMA index. page_is_pfmemalloc() is checked in
++ * __page_pool_put_page() to avoid recycling the pfmemalloc page.
++ */
++#define PP_MAGIC_MASK ~(PP_DMA_INDEX_MASK | 0x3UL)
++
++#ifdef CONFIG_PAGE_POOL
++static inline bool page_pool_page_is_pp(struct page *page)
++{
++	return (page->pp_magic & PP_MAGIC_MASK) == PP_SIGNATURE;
++}
++#else
++static inline bool page_pool_page_is_pp(struct page *page)
++{
++	return false;
++}
++#endif
++
+ #endif /* _LINUX_MM_H */
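
A worked example, assuming a 64-bit build with
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000 (so POISON_POINTER_DELTA
is that value):

	/*
	 * PP_SIGNATURE       = 0xdead000000000040
	 * PP_DMA_INDEX_SHIFT = 1 + __fls(0x40)            =  7
	 * PP_DMA_INDEX_BITS  = MIN(32, __ffs(delta) - 7)
	 *                    = MIN(32, 48 - 7)            = 32
	 * PP_DMA_INDEX_MASK  = GENMASK(38, 7)
	 * PP_MAGIC_MASK      = ~(GENMASK(38, 7) | 0x3UL)
	 */

That is, the DMA index occupies bits 7..38 of pp_magic, below the poison
bits, and the signature check masks out both the index bits and the two
low flag bits.
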
+diff --git a/include/linux/mount.h b/include/linux/mount.h
+index dcc17ce8a959e0..1a3136e53eaa07 100644
+--- a/include/linux/mount.h
++++ b/include/linux/mount.h
+@@ -22,48 +22,52 @@ struct fs_context;
+ struct file;
+ struct path;
+ 
+-#define MNT_NOSUID	0x01
+-#define MNT_NODEV	0x02
+-#define MNT_NOEXEC	0x04
+-#define MNT_NOATIME	0x08
+-#define MNT_NODIRATIME	0x10
+-#define MNT_RELATIME	0x20
+-#define MNT_READONLY	0x40	/* does the user want this to be r/o? */
+-#define MNT_NOSYMFOLLOW	0x80
+-
+-#define MNT_SHRINKABLE	0x100
+-#define MNT_WRITE_HOLD	0x200
+-
+-#define MNT_SHARED	0x1000	/* if the vfsmount is a shared mount */
+-#define MNT_UNBINDABLE	0x2000	/* if the vfsmount is a unbindable mount */
+-/*
+- * MNT_SHARED_MASK is the set of flags that should be cleared when a
+- * mount becomes shared.  Currently, this is only the flag that says a
+- * mount cannot be bind mounted, since this is how we create a mount
+- * that shares events with another mount.  If you add a new MNT_*
+- * flag, consider how it interacts with shared mounts.
+- */
+-#define MNT_SHARED_MASK	(MNT_UNBINDABLE)
+-#define MNT_USER_SETTABLE_MASK  (MNT_NOSUID | MNT_NODEV | MNT_NOEXEC \
+-				 | MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME \
+-				 | MNT_READONLY | MNT_NOSYMFOLLOW)
+-#define MNT_ATIME_MASK (MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME )
+-
+-#define MNT_INTERNAL_FLAGS (MNT_SHARED | MNT_WRITE_HOLD | MNT_INTERNAL | \
+-			    MNT_DOOMED | MNT_SYNC_UMOUNT | MNT_MARKED)
+-
+-#define MNT_INTERNAL	0x4000
+-
+-#define MNT_LOCK_ATIME		0x040000
+-#define MNT_LOCK_NOEXEC		0x080000
+-#define MNT_LOCK_NOSUID		0x100000
+-#define MNT_LOCK_NODEV		0x200000
+-#define MNT_LOCK_READONLY	0x400000
+-#define MNT_LOCKED		0x800000
+-#define MNT_DOOMED		0x1000000
+-#define MNT_SYNC_UMOUNT		0x2000000
+-#define MNT_MARKED		0x4000000
+-#define MNT_UMOUNT		0x8000000
++enum mount_flags {
++	MNT_NOSUID	= 0x01,
++	MNT_NODEV	= 0x02,
++	MNT_NOEXEC	= 0x04,
++	MNT_NOATIME	= 0x08,
++	MNT_NODIRATIME	= 0x10,
++	MNT_RELATIME	= 0x20,
++	MNT_READONLY	= 0x40, /* does the user want this to be r/o? */
++	MNT_NOSYMFOLLOW	= 0x80,
++
++	MNT_SHRINKABLE	= 0x100,
++	MNT_WRITE_HOLD	= 0x200,
++
++	MNT_SHARED	= 0x1000, /* if the vfsmount is a shared mount */
++	MNT_UNBINDABLE	= 0x2000, /* if the vfsmount is an unbindable mount */
++
++	MNT_INTERNAL	= 0x4000,
++
++	MNT_LOCK_ATIME		= 0x040000,
++	MNT_LOCK_NOEXEC		= 0x080000,
++	MNT_LOCK_NOSUID		= 0x100000,
++	MNT_LOCK_NODEV		= 0x200000,
++	MNT_LOCK_READONLY	= 0x400000,
++	MNT_LOCKED		= 0x800000,
++	MNT_DOOMED		= 0x1000000,
++	MNT_SYNC_UMOUNT		= 0x2000000,
++	MNT_MARKED		= 0x4000000,
++	MNT_UMOUNT		= 0x8000000,
++
++	/*
++	 * MNT_SHARED_MASK is the set of flags that should be cleared when a
++	 * mount becomes shared.  Currently, this is only the flag that says a
++	 * mount cannot be bind mounted, since this is how we create a mount
++	 * that shares events with another mount.  If you add a new MNT_*
++	 * flag, consider how it interacts with shared mounts.
++	 */
++	MNT_SHARED_MASK	= MNT_UNBINDABLE,
++	MNT_USER_SETTABLE_MASK  = MNT_NOSUID | MNT_NODEV | MNT_NOEXEC
++				  | MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME
++				  | MNT_READONLY | MNT_NOSYMFOLLOW,
++	MNT_ATIME_MASK = MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME,
++
++	MNT_INTERNAL_FLAGS = MNT_SHARED | MNT_WRITE_HOLD | MNT_INTERNAL |
++			     MNT_DOOMED | MNT_SYNC_UMOUNT | MNT_MARKED |
++			     MNT_LOCKED,
++};
+ 
+ struct vfsmount {
+ 	struct dentry *mnt_root;	/* root of the mounted tree */
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 7ea022750e4e09..33338a233cc724 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -1012,9 +1012,13 @@ struct netdev_bpf {
+ 
+ #ifdef CONFIG_XFRM_OFFLOAD
+ struct xfrmdev_ops {
+-	int	(*xdo_dev_state_add) (struct xfrm_state *x, struct netlink_ext_ack *extack);
+-	void	(*xdo_dev_state_delete) (struct xfrm_state *x);
+-	void	(*xdo_dev_state_free) (struct xfrm_state *x);
++	int	(*xdo_dev_state_add)(struct net_device *dev,
++				     struct xfrm_state *x,
++				     struct netlink_ext_ack *extack);
++	void	(*xdo_dev_state_delete)(struct net_device *dev,
++					struct xfrm_state *x);
++	void	(*xdo_dev_state_free)(struct net_device *dev,
++				      struct xfrm_state *x);
+ 	bool	(*xdo_dev_offload_ok) (struct sk_buff *skb,
+ 				       struct xfrm_state *x);
+ 	void	(*xdo_dev_state_advance_esn) (struct xfrm_state *x);
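
A sketch of a driver callback after this change (driver name, private
struct and helper are all hypothetical); the offload device is now handed
in explicitly instead of being read from x->xso.dev, which may reference
a bonding master rather than the real NIC:

	struct mydrv_priv;					/* hypothetical */
	int mydrv_install_sa(struct mydrv_priv *priv, struct xfrm_state *x,
			     struct netlink_ext_ack *extack);	/* hypothetical */

	static int mydrv_xdo_dev_state_add(struct net_device *dev,
					   struct xfrm_state *x,
					   struct netlink_ext_ack *extack)
	{
		struct mydrv_priv *priv = netdev_priv(dev);

		return mydrv_install_sa(priv, x, extack);
	}
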
+diff --git a/include/linux/netfs.h b/include/linux/netfs.h
+index c86a11cfc4a36a..1464b3a104989d 100644
+--- a/include/linux/netfs.h
++++ b/include/linux/netfs.h
+@@ -51,8 +51,7 @@ enum netfs_io_source {
+ 	NETFS_INVALID_WRITE,
+ } __mode(byte);
+ 
+-typedef void (*netfs_io_terminated_t)(void *priv, ssize_t transferred_or_error,
+-				      bool was_async);
++typedef void (*netfs_io_terminated_t)(void *priv, ssize_t transferred_or_error);
+ 
+ /*
+  * Per-inode context.  This wraps the VFS inode.
+@@ -207,6 +206,7 @@ enum netfs_io_origin {
+ 	NETFS_READ_GAPS,		/* This read is a synchronous read to fill gaps */
+ 	NETFS_READ_SINGLE,		/* This read should be treated as a single object */
+ 	NETFS_READ_FOR_WRITE,		/* This read is to prepare a write */
++	NETFS_UNBUFFERED_READ,		/* This is an unbuffered read */
+ 	NETFS_DIO_READ,			/* This is a direct I/O read */
+ 	NETFS_WRITEBACK,		/* This write was triggered by writepages */
+ 	NETFS_WRITEBACK_SINGLE,		/* This monolithic write was triggered by writepages */
+@@ -223,9 +223,10 @@ enum netfs_io_origin {
+  */
+ struct netfs_io_request {
+ 	union {
+-		struct work_struct work;
++		struct work_struct cleanup_work; /* Deferred cleanup work */
+ 		struct rcu_head rcu;
+ 	};
++	struct work_struct	work;		/* Result collector work */
+ 	struct inode		*inode;		/* The file being accessed */
+ 	struct address_space	*mapping;	/* The mapping being accessed */
+ 	struct kiocb		*iocb;		/* AIO completion vector */
+@@ -270,7 +271,7 @@ struct netfs_io_request {
+ #define NETFS_RREQ_NO_UNLOCK_FOLIO	2	/* Don't unlock no_unlock_folio on completion */
+ #define NETFS_RREQ_DONT_UNLOCK_FOLIOS	3	/* Don't unlock the folios on completion */
+ #define NETFS_RREQ_FAILED		4	/* The request failed */
+-#define NETFS_RREQ_IN_PROGRESS		5	/* Unlocked when the request completes */
++#define NETFS_RREQ_IN_PROGRESS		5	/* Unlocked when the request completes (has ref) */
+ #define NETFS_RREQ_FOLIO_COPY_TO_CACHE	6	/* Copy current folio to cache from read */
+ #define NETFS_RREQ_UPLOAD_TO_SERVER	8	/* Need to write to the server */
+ #define NETFS_RREQ_NONBLOCK		9	/* Don't block if possible (O_NONBLOCK) */
+@@ -279,6 +280,7 @@ struct netfs_io_request {
+ #define NETFS_RREQ_USE_IO_ITER		12	/* Use ->io_iter rather than ->i_pages */
+ #define NETFS_RREQ_ALL_QUEUED		13	/* All subreqs are now queued */
+ #define NETFS_RREQ_RETRYING		14	/* Set if we're in the retry path */
++#define NETFS_RREQ_SHORT_TRANSFER	15	/* Set if we have a short transfer */
+ #define NETFS_RREQ_USE_PGPRIV2		31	/* [DEPRECATED] Use PG_private_2 to mark
+ 						 * write to cache on read */
+ 	const struct netfs_request_ops *netfs_ops;
+@@ -439,15 +441,14 @@ void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq);
+ void netfs_get_subrequest(struct netfs_io_subrequest *subreq,
+ 			  enum netfs_sreq_ref_trace what);
+ void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
+-			  bool was_async, enum netfs_sreq_ref_trace what);
++			  enum netfs_sreq_ref_trace what);
+ ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
+ 				struct iov_iter *new,
+ 				iov_iter_extraction_t extraction_flags);
+ size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
+ 			size_t max_size, size_t max_segs);
+ void netfs_prepare_write_failed(struct netfs_io_subrequest *subreq);
+-void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error,
+-				       bool was_async);
++void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error);
+ void netfs_queue_write_request(struct netfs_io_subrequest *subreq);
+ 
+ int netfs_start_io_read(struct inode *inode);
+diff --git a/include/linux/nfslocalio.h b/include/linux/nfslocalio.h
+index 9aa8a43843d717..5c7c92659e736f 100644
+--- a/include/linux/nfslocalio.h
++++ b/include/linux/nfslocalio.h
+@@ -50,10 +50,6 @@ void nfs_localio_invalidate_clients(struct list_head *nn_local_clients,
+ 				    spinlock_t *nn_local_clients_lock);
+ 
+ /* localio needs to map filehandle -> struct nfsd_file */
+-extern struct nfsd_file *
+-nfsd_open_local_fh(struct net *, struct auth_domain *, struct rpc_clnt *,
+-		   const struct cred *, const struct nfs_fh *,
+-		   const fmode_t) __must_hold(rcu);
+ void nfs_close_local_fh(struct nfs_file_localio *);
+ 
+ struct nfsd_localio_operations {
+@@ -64,10 +60,10 @@ struct nfsd_localio_operations {
+ 						struct rpc_clnt *,
+ 						const struct cred *,
+ 						const struct nfs_fh *,
++						struct nfsd_file __rcu **pnf,
+ 						const fmode_t);
+-	struct net *(*nfsd_file_put_local)(struct nfsd_file *);
+-	struct nfsd_file *(*nfsd_file_get)(struct nfsd_file *);
+-	void (*nfsd_file_put)(struct nfsd_file *);
++	struct net *(*nfsd_file_put_local)(struct nfsd_file __rcu **);
++	struct nfsd_file *(*nfsd_file_get_local)(struct nfsd_file *);
+ 	struct file *(*nfsd_file_file)(struct nfsd_file *);
+ } ____cacheline_aligned;
+ 
+@@ -77,6 +73,7 @@ extern const struct nfsd_localio_operations *nfs_to;
+ struct nfsd_file *nfs_open_local_fh(nfs_uuid_t *,
+ 		   struct rpc_clnt *, const struct cred *,
+ 		   const struct nfs_fh *, struct nfs_file_localio *,
++		   struct nfsd_file __rcu **pnf,
+ 		   const fmode_t);
+ 
+ static inline void nfs_to_nfsd_net_put(struct net *net)
+@@ -91,16 +88,19 @@ static inline void nfs_to_nfsd_net_put(struct net *net)
+ 	rcu_read_unlock();
+ }
+ 
+-static inline void nfs_to_nfsd_file_put_local(struct nfsd_file *localio)
++static inline void nfs_to_nfsd_file_put_local(struct nfsd_file __rcu **localio)
+ {
+ 	/*
+-	 * Must not hold RCU otherwise nfsd_file_put() can easily trigger:
+-	 * "Voluntary context switch within RCU read-side critical section!"
+-	 * by scheduling deep in underlying filesystem (e.g. XFS).
++	 * Either *localio must be guaranteed to be non-NULL, or the caller
++	 * must prevent nfsd shutdown from completing, as nfs_close_local_fh()
++	 * does, by blocking the nfs_uuid from being finally put.
+ 	 */
+-	struct net *net = nfs_to->nfsd_file_put_local(localio);
++	struct net *net;
+ 
+-	nfs_to_nfsd_net_put(net);
++	net = nfs_to->nfsd_file_put_local(localio);
++
++	if (net)
++		nfs_to_nfsd_net_put(net);
+ }
+ 
+ #else   /* CONFIG_NFS_LOCALIO */
+diff --git a/include/linux/nvme.h b/include/linux/nvme.h
+index 2479ed10f53e37..5d7afb6079f1d8 100644
+--- a/include/linux/nvme.h
++++ b/include/linux/nvme.h
+@@ -2094,7 +2094,7 @@ enum {
+ 	NVME_SC_BAD_ATTRIBUTES		= 0x180,
+ 	NVME_SC_INVALID_PI		= 0x181,
+ 	NVME_SC_READ_ONLY		= 0x182,
+-	NVME_SC_ONCS_NOT_SUPPORTED	= 0x183,
++	NVME_SC_CMD_SIZE_LIM_EXCEEDED	= 0x183,
+ 
+ 	/*
+ 	 * I/O Command Set Specific - Fabrics commands:
+diff --git a/include/linux/overflow.h b/include/linux/overflow.h
+index 0c7e3dcfe8670c..89e9d604988351 100644
+--- a/include/linux/overflow.h
++++ b/include/linux/overflow.h
+@@ -389,24 +389,37 @@ static inline size_t __must_check size_sub(size_t minuend, size_t subtrahend)
+ 	struct_size((type *)NULL, member, count)
+ 
+ /**
+- * _DEFINE_FLEX() - helper macro for DEFINE_FLEX() family.
+- * Enables caller macro to pass (different) initializer.
++ * __DEFINE_FLEX() - helper macro for DEFINE_FLEX() family.
++ * Enables caller macro to pass arbitrary trailing expressions.
+  *
+  * @type: structure type name, including "struct" keyword.
+  * @name: Name for a variable to define.
+  * @member: Name of the array member.
+  * @count: Number of elements in the array; must be compile-time const.
+- * @initializer: initializer expression (could be empty for no init).
++ * @trailer: Trailing expressions for attributes and/or initializers.
+  */
+-#define _DEFINE_FLEX(type, name, member, count, initializer...)			\
++#define __DEFINE_FLEX(type, name, member, count, trailer...)			\
+ 	_Static_assert(__builtin_constant_p(count),				\
+ 		       "onstack flex array members require compile-time const count"); \
+ 	union {									\
+ 		u8 bytes[struct_size_t(type, member, count)];			\
+ 		type obj;							\
+-	} name##_u initializer;							\
++	} name##_u trailer;							\
+ 	type *name = (type *)&name##_u
+ 
++/**
++ * _DEFINE_FLEX() - helper macro for DEFINE_FLEX() family.
++ * Enables caller macro to pass (different) initializer.
++ *
++ * @type: structure type name, including "struct" keyword.
++ * @name: Name for a variable to define.
++ * @member: Name of the array member.
++ * @count: Number of elements in the array; must be compile-time const.
++ * @initializer: Initializer expression (e.g., pass `= { }` at minimum).
++ */
++#define _DEFINE_FLEX(type, name, member, count, initializer...)			\
++	__DEFINE_FLEX(type, name, member, count, = { .obj initializer })
++
+ /**
+  * DEFINE_RAW_FLEX() - Define an on-stack instance of structure with a trailing
+  * flexible array member, when it does not have a __counted_by annotation.
+@@ -421,7 +434,7 @@ static inline size_t __must_check size_sub(size_t minuend, size_t subtrahend)
+  * Use __struct_size(@name) to get compile-time size of it afterwards.
+  */
+ #define DEFINE_RAW_FLEX(type, name, member, count)	\
+-	_DEFINE_FLEX(type, name, member, count, = {})
++	__DEFINE_FLEX(type, name, member, count, = { })
+ 
+ /**
+  * DEFINE_FLEX() - Define an on-stack instance of structure with a trailing
+@@ -438,6 +451,6 @@ static inline size_t __must_check size_sub(size_t minuend, size_t subtrahend)
+  * Use __struct_size(@NAME) to get compile-time size of it afterwards.
+  */
+ #define DEFINE_FLEX(TYPE, NAME, MEMBER, COUNTER, COUNT)	\
+-	_DEFINE_FLEX(TYPE, NAME, MEMBER, COUNT, = { .obj.COUNTER = COUNT, })
++	_DEFINE_FLEX(TYPE, NAME, MEMBER, COUNT, = { .COUNTER = COUNT, })
+ 
+ #endif /* __LINUX_OVERFLOW_H */
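
A usage sketch (struct name assumed): DEFINE_FLEX() initialises the
counter through the union's .obj member, which is exactly the indirection
_DEFINE_FLEX() adds above, while DEFINE_RAW_FLEX() only zeroes the
storage.

	struct frame {
		u8	len;
		u8	data[] __counted_by(len);
	};

	DEFINE_FLEX(struct frame, f, data, len, 16);
	/* f->len == 16; __struct_size(f) covers 16 data bytes. */
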
+diff --git a/include/linux/pci-epf.h b/include/linux/pci-epf.h
+index 879d19cebd4fc6..749cee0bcf2cc0 100644
+--- a/include/linux/pci-epf.h
++++ b/include/linux/pci-epf.h
+@@ -114,6 +114,8 @@ struct pci_epf_driver {
+  * @phys_addr: physical address that should be mapped to the BAR
+  * @addr: virtual address corresponding to the @phys_addr
+  * @size: the size of the address space present in BAR
++ * @aligned_size: the size actually allocated to accommodate the iATU alignment
++ *                requirement
+  * @barno: BAR number
+  * @flags: flags that are set for the BAR
+  */
+@@ -121,6 +123,7 @@ struct pci_epf_bar {
+ 	dma_addr_t	phys_addr;
+ 	void		*addr;
+ 	size_t		size;
++	size_t		aligned_size;
+ 	enum pci_barno	barno;
+ 	int		flags;
+ };
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 51e2bd6405cda5..081e5c0a3ddf4e 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -1850,6 +1850,14 @@ static inline bool pcie_aspm_support_enabled(void) { return false; }
+ static inline bool pcie_aspm_enabled(struct pci_dev *pdev) { return false; }
+ #endif
+ 
++#ifdef CONFIG_HOTPLUG_PCI
++void pci_hp_ignore_link_change(struct pci_dev *pdev);
++void pci_hp_unignore_link_change(struct pci_dev *pdev);
++#else
++static inline void pci_hp_ignore_link_change(struct pci_dev *pdev) { }
++static inline void pci_hp_unignore_link_change(struct pci_dev *pdev) { }
++#endif
++
+ #ifdef CONFIG_PCIEAER
+ bool pci_aer_available(void);
+ #else
+diff --git a/include/linux/phy.h b/include/linux/phy.h
+index a2bfae80c44975..bef68f6af99a92 100644
+--- a/include/linux/phy.h
++++ b/include/linux/phy.h
+@@ -744,10 +744,7 @@ struct phy_device {
+ #define PHY_F_NO_IRQ		0x80000000
+ #define PHY_F_RXC_ALWAYS_ON	0x40000000
+ 
+-static inline struct phy_device *to_phy_device(const struct device *dev)
+-{
+-	return container_of(to_mdio_device(dev), struct phy_device, mdio);
+-}
++#define to_phy_device(__dev)	container_of_const(to_mdio_device(__dev), struct phy_device, mdio)
+ 
+ /**
+  * struct phy_tdr_config - Configuration of a TDR raw test
+diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
+index 7fb5a459847ef3..756b842dcd3091 100644
+--- a/include/linux/pm_runtime.h
++++ b/include/linux/pm_runtime.h
+@@ -96,7 +96,9 @@ extern void pm_runtime_new_link(struct device *dev);
+ extern void pm_runtime_drop_link(struct device_link *link);
+ extern void pm_runtime_release_supplier(struct device_link *link);
+ 
++int devm_pm_runtime_set_active_enabled(struct device *dev);
+ extern int devm_pm_runtime_enable(struct device *dev);
++int devm_pm_runtime_get_noresume(struct device *dev);
+ 
+ /**
+  * pm_suspend_ignore_children - Set runtime PM behavior regarding children.
+@@ -294,7 +296,9 @@ static inline bool pm_runtime_blocked(struct device *dev) { return true; }
+ static inline void pm_runtime_allow(struct device *dev) {}
+ static inline void pm_runtime_forbid(struct device *dev) {}
+ 
++static inline int devm_pm_runtime_set_active_enabled(struct device *dev) { return 0; }
+ static inline int devm_pm_runtime_enable(struct device *dev) { return 0; }
++static inline int devm_pm_runtime_get_noresume(struct device *dev) { return 0; }
+ 
+ static inline void pm_suspend_ignore_children(struct device *dev, bool enable) {}
+ static inline void pm_runtime_get_noresume(struct device *dev) {}
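
A probe-time sketch of the two new devm helpers (driver shape
hypothetical): the first marks the device active and enables runtime PM
with automatic cleanup on unbind, the second takes a usage reference that
is likewise dropped automatically.

	static int mydrv_probe(struct platform_device *pdev)
	{
		struct device *dev = &pdev->dev;
		int ret;

		ret = devm_pm_runtime_set_active_enabled(dev);
		if (ret)
			return ret;

		/* Keep the device resumed for the lifetime of the binding. */
		return devm_pm_runtime_get_noresume(dev);
	}
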
+diff --git a/include/linux/poison.h b/include/linux/poison.h
+index 331a9a996fa874..8ca2235f78d5d9 100644
+--- a/include/linux/poison.h
++++ b/include/linux/poison.h
+@@ -70,6 +70,10 @@
+ #define KEY_DESTROY		0xbd
+ 
+ /********** net/core/page_pool.c **********/
++/*
++ * page_pool uses additional free bits within this value to store data, see the
++ * definition of PP_DMA_INDEX_MASK in mm.h
++ */
+ #define PP_SIGNATURE		(0x40 + POISON_POINTER_DELTA)
+ 
+ /********** net/core/skbuff.c **********/
+diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
+index 0387d64e2c66c6..36fb3edfa403d9 100644
+--- a/include/linux/virtio_vsock.h
++++ b/include/linux/virtio_vsock.h
+@@ -140,6 +140,7 @@ struct virtio_vsock_sock {
+ 	u32 last_fwd_cnt;
+ 	u32 rx_bytes;
+ 	u32 buf_alloc;
++	u32 buf_used;
+ 	struct sk_buff_head rx_queue;
+ 	u32 msg_count;
+ };
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 797992019f9ee5..521a9d0acac692 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -557,7 +557,8 @@ enum {
+ #define ESCO_LINK	0x02
+ /* Low Energy links do not have defined link type. Use invented one */
+ #define LE_LINK		0x80
+-#define ISO_LINK	0x82
++#define CIS_LINK	0x82
++#define BIS_LINK	0x83
+ #define INVALID_LINK	0xff
+ 
+ /* LMP features */
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 54bfeeaa09959b..d15316bffd70bb 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -545,6 +545,7 @@ struct hci_dev {
+ 	struct hci_conn_hash	conn_hash;
+ 
+ 	struct list_head	mesh_pending;
++	struct mutex		mgmt_pending_lock;
+ 	struct list_head	mgmt_pending;
+ 	struct list_head	reject_list;
+ 	struct list_head	accept_list;
+@@ -996,7 +997,8 @@ static inline void hci_conn_hash_add(struct hci_dev *hdev, struct hci_conn *c)
+ 	case ESCO_LINK:
+ 		h->sco_num++;
+ 		break;
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 		h->iso_num++;
+ 		break;
+ 	}
+@@ -1022,7 +1024,8 @@ static inline void hci_conn_hash_del(struct hci_dev *hdev, struct hci_conn *c)
+ 	case ESCO_LINK:
+ 		h->sco_num--;
+ 		break;
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 		h->iso_num--;
+ 		break;
+ 	}
+@@ -1039,7 +1042,8 @@ static inline unsigned int hci_conn_num(struct hci_dev *hdev, __u8 type)
+ 	case SCO_LINK:
+ 	case ESCO_LINK:
+ 		return h->sco_num;
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 		return h->iso_num;
+ 	default:
+ 		return 0;
+@@ -1100,7 +1104,7 @@ static inline struct hci_conn *hci_conn_hash_lookup_bis(struct hci_dev *hdev,
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (bacmp(&c->dst, ba) || c->type != ISO_LINK)
++		if (bacmp(&c->dst, ba) || c->type != BIS_LINK)
+ 			continue;
+ 
+ 		if (c->iso_qos.bcast.bis == bis) {
+@@ -1122,7 +1126,7 @@ hci_conn_hash_lookup_create_pa_sync(struct hci_dev *hdev)
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != ISO_LINK)
++		if (c->type != BIS_LINK)
+ 			continue;
+ 
+ 		if (!test_bit(HCI_CONN_CREATE_PA_SYNC, &c->flags))
+@@ -1148,8 +1152,8 @@ hci_conn_hash_lookup_per_adv_bis(struct hci_dev *hdev,
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (bacmp(&c->dst, ba) || c->type != ISO_LINK ||
+-			!test_bit(HCI_CONN_PER_ADV, &c->flags))
++		if (bacmp(&c->dst, ba) || c->type != BIS_LINK ||
++		    !test_bit(HCI_CONN_PER_ADV, &c->flags))
+ 			continue;
+ 
+ 		if (c->iso_qos.bcast.big == big &&
+@@ -1238,7 +1242,7 @@ static inline struct hci_conn *hci_conn_hash_lookup_cis(struct hci_dev *hdev,
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != ISO_LINK || !bacmp(&c->dst, BDADDR_ANY))
++		if (c->type != CIS_LINK)
+ 			continue;
+ 
+ 		/* Match CIG ID if set */
+@@ -1270,7 +1274,7 @@ static inline struct hci_conn *hci_conn_hash_lookup_cig(struct hci_dev *hdev,
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != ISO_LINK || !bacmp(&c->dst, BDADDR_ANY))
++		if (c->type != CIS_LINK)
+ 			continue;
+ 
+ 		if (handle == c->iso_qos.ucast.cig) {
+@@ -1293,17 +1297,7 @@ static inline struct hci_conn *hci_conn_hash_lookup_big(struct hci_dev *hdev,
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != ISO_LINK)
+-			continue;
+-
+-		/* An ISO_LINK hcon with BDADDR_ANY as destination
+-		 * address is a Broadcast connection. A Broadcast
+-		 * slave connection is associated with a PA train,
+-		 * so the sync_handle can be used to differentiate
+-		 * from unicast.
+-		 */
+-		if (bacmp(&c->dst, BDADDR_ANY) &&
+-		    c->sync_handle == HCI_SYNC_HANDLE_INVALID)
++		if (c->type != BIS_LINK)
+ 			continue;
+ 
+ 		if (handle == c->iso_qos.bcast.big) {
+@@ -1327,7 +1321,7 @@ hci_conn_hash_lookup_big_sync_pend(struct hci_dev *hdev,
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != ISO_LINK)
++		if (c->type != BIS_LINK)
+ 			continue;
+ 
+ 		if (handle == c->iso_qos.bcast.big && num_bis == c->num_bis) {
+@@ -1350,8 +1344,8 @@ hci_conn_hash_lookup_big_state(struct hci_dev *hdev, __u8 handle,  __u16 state)
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (bacmp(&c->dst, BDADDR_ANY) || c->type != ISO_LINK ||
+-			c->state != state)
++		if (c->type != BIS_LINK || bacmp(&c->dst, BDADDR_ANY) ||
++		    c->state != state)
+ 			continue;
+ 
+ 		if (handle == c->iso_qos.bcast.big) {
+@@ -1374,8 +1368,8 @@ hci_conn_hash_lookup_pa_sync_big_handle(struct hci_dev *hdev, __u8 big)
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != ISO_LINK ||
+-			!test_bit(HCI_CONN_PA_SYNC, &c->flags))
++		if (c->type != BIS_LINK ||
++		    !test_bit(HCI_CONN_PA_SYNC, &c->flags))
+ 			continue;
+ 
+ 		if (c->iso_qos.bcast.big == big) {
+@@ -1397,7 +1391,7 @@ hci_conn_hash_lookup_pa_sync_handle(struct hci_dev *hdev, __u16 sync_handle)
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != ISO_LINK)
++		if (c->type != BIS_LINK)
+ 			continue;
+ 
+ 		/* Ignore the listen hcon, we are looking
+@@ -2012,7 +2006,8 @@ static inline int hci_proto_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ 	case ESCO_LINK:
+ 		return sco_connect_ind(hdev, bdaddr, flags);
+ 
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 		return iso_connect_ind(hdev, bdaddr, flags);
+ 
+ 	default:
+@@ -2403,7 +2398,6 @@ void mgmt_advertising_added(struct sock *sk, struct hci_dev *hdev,
+ 			    u8 instance);
+ void mgmt_advertising_removed(struct sock *sk, struct hci_dev *hdev,
+ 			      u8 instance);
+-void mgmt_adv_monitor_removed(struct hci_dev *hdev, u16 handle);
+ int mgmt_phy_configuration_changed(struct hci_dev *hdev, struct sock *skip);
+ void mgmt_adv_monitor_device_lost(struct hci_dev *hdev, u16 handle,
+ 				  bdaddr_t *bdaddr, u8 addr_type);
+diff --git a/include/net/checksum.h b/include/net/checksum.h
+index 243f972267b8d1..be9356d4b67a1e 100644
+--- a/include/net/checksum.h
++++ b/include/net/checksum.h
+@@ -164,7 +164,7 @@ void inet_proto_csum_replace16(__sum16 *sum, struct sk_buff *skb,
+ 			       const __be32 *from, const __be32 *to,
+ 			       bool pseudohdr);
+ void inet_proto_csum_replace_by_diff(__sum16 *sum, struct sk_buff *skb,
+-				     __wsum diff, bool pseudohdr);
++				     __wsum diff, bool pseudohdr, bool ipv6);
+ 
+ static __always_inline
+ void inet_proto_csum_replace2(__sum16 *sum, struct sk_buff *skb,
+diff --git a/include/net/netfilter/nft_fib.h b/include/net/netfilter/nft_fib.h
+index 6e202ed5e63f3c..7370fba844efcf 100644
+--- a/include/net/netfilter/nft_fib.h
++++ b/include/net/netfilter/nft_fib.h
+@@ -2,6 +2,7 @@
+ #ifndef _NFT_FIB_H_
+ #define _NFT_FIB_H_
+ 
++#include <net/l3mdev.h>
+ #include <net/netfilter/nf_tables.h>
+ 
+ struct nft_fib {
+@@ -39,6 +40,14 @@ static inline bool nft_fib_can_skip(const struct nft_pktinfo *pkt)
+ 	return nft_fib_is_loopback(pkt->skb, indev);
+ }
+ 
++static inline int nft_fib_l3mdev_master_ifindex_rcu(const struct nft_pktinfo *pkt,
++						    const struct net_device *iif)
++{
++	const struct net_device *dev = iif ? iif : pkt->skb->dev;
++
++	return l3mdev_master_ifindex_rcu(dev);
++}
++
+ int nft_fib_dump(struct sk_buff *skb, const struct nft_expr *expr, bool reset);
+ int nft_fib_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 		 const struct nlattr * const tb[]);
+diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
+index 36eb57d73abc6c..431b593de70937 100644
+--- a/include/net/page_pool/types.h
++++ b/include/net/page_pool/types.h
+@@ -6,6 +6,7 @@
+ #include <linux/dma-direction.h>
+ #include <linux/ptr_ring.h>
+ #include <linux/types.h>
++#include <linux/xarray.h>
+ #include <net/netmem.h>
+ 
+ #define PP_FLAG_DMA_MAP		BIT(0) /* Should page_pool do the DMA
+@@ -33,6 +34,9 @@
+ #define PP_FLAG_ALL		(PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV | \
+ 				 PP_FLAG_SYSTEM_POOL | PP_FLAG_ALLOW_UNREADABLE_NETMEM)
+ 
++/* Index limit to stay within PP_DMA_INDEX_BITS for DMA indices */
++#define PP_DMA_INDEX_LIMIT XA_LIMIT(1, BIT(PP_DMA_INDEX_BITS) - 1)
++
+ /*
+  * Fast allocation side cache array/stack
+  *
+@@ -221,6 +225,8 @@ struct page_pool {
+ 	void *mp_priv;
+ 	const struct memory_provider_ops *mp_ops;
+ 
++	struct xarray dma_mapped;
++
+ #ifdef CONFIG_PAGE_POOL_STATS
+ 	/* recycle stats are per-cpu to avoid locking */
+ 	struct page_pool_recycle_stats __percpu *recycle_stats;
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 694f954258d437..99470c6d24de8b 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2978,8 +2978,11 @@ int sock_ioctl_inout(struct sock *sk, unsigned int cmd,
+ int sk_ioctl(struct sock *sk, unsigned int cmd, void __user *arg);
+ static inline bool sk_is_readable(struct sock *sk)
+ {
+-	if (sk->sk_prot->sock_is_readable)
+-		return sk->sk_prot->sock_is_readable(sk);
++	const struct proto *prot = READ_ONCE(sk->sk_prot);
++
++	if (prot->sock_is_readable)
++		return prot->sock_is_readable(sk);
++
+ 	return false;
+ }
+ #endif	/* _SOCK_H */
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 06ab2a3d2ebd10..1f1861c57e2ad0 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -147,8 +147,19 @@ enum {
+ };
+ 
+ struct xfrm_dev_offload {
++	/* The device for this offload.
++	 * Device drivers should not use this directly, as that will prevent
++	 * them from working with bonding device. Instead, the device passed
++	 * to the add/delete callbacks should be used.
++	 */
+ 	struct net_device	*dev;
+ 	netdevice_tracker	dev_tracker;
++	/* This is a private pointer used by the bonding driver (and eventually
++	 * should be moved there). Device drivers should not use it.
++	 * Protected by xfrm_state.lock AND bond.ipsec_lock in most cases,
++	 * except in the .xdo_dev_state_del() flow, where only xfrm_state.lock
++	 * is held.
++	 */
+ 	struct net_device	*real_dev;
+ 	unsigned long		offload_handle;
+ 	u8			dir : 2;
+diff --git a/include/sound/hdaudio.h b/include/sound/hdaudio.h
+index b098ceadbe74bf..9a70048adbc069 100644
+--- a/include/sound/hdaudio.h
++++ b/include/sound/hdaudio.h
+@@ -223,7 +223,7 @@ struct hdac_driver {
+ 	struct device_driver driver;
+ 	int type;
+ 	const struct hda_device_id *id_table;
+-	int (*match)(struct hdac_device *dev, struct hdac_driver *drv);
++	int (*match)(struct hdac_device *dev, const struct hdac_driver *drv);
+ 	void (*unsol_event)(struct hdac_device *dev, unsigned int event);
+ 
+ 	/* fields used by ext bus APIs */
+@@ -235,7 +235,7 @@ struct hdac_driver {
+ #define drv_to_hdac_driver(_drv) container_of(_drv, struct hdac_driver, driver)
+ 
+ const struct hda_device_id *
+-hdac_get_device_id(struct hdac_device *hdev, struct hdac_driver *drv);
++hdac_get_device_id(struct hdac_device *hdev, const struct hdac_driver *drv);
+ 
+ /*
+  * Bus verb operators
+diff --git a/include/sound/hdaudio_ext.h b/include/sound/hdaudio_ext.h
+index 4c7a40e149a594..7de390022ac268 100644
+--- a/include/sound/hdaudio_ext.h
++++ b/include/sound/hdaudio_ext.h
+@@ -22,6 +22,7 @@ void snd_hdac_ext_bus_ppcap_enable(struct hdac_bus *chip, bool enable);
+ void snd_hdac_ext_bus_ppcap_int_enable(struct hdac_bus *chip, bool enable);
+ 
+ int snd_hdac_ext_bus_get_ml_capabilities(struct hdac_bus *bus);
++struct hdac_ext_link *snd_hdac_ext_bus_get_hlink_by_id(struct hdac_bus *bus, u32 id);
+ struct hdac_ext_link *snd_hdac_ext_bus_get_hlink_by_addr(struct hdac_bus *bus, int addr);
+ struct hdac_ext_link *snd_hdac_ext_bus_get_hlink_by_name(struct hdac_bus *bus,
+ 							 const char *codec_name);
+@@ -97,12 +98,17 @@ struct hdac_ext_link {
+ 	void __iomem *ml_addr; /* link output stream reg pointer */
+ 	u32 lcaps;   /* link capabilities */
+ 	u16 lsdiid;  /* link sdi identifier */
++	u32 id;
++	u8 slcount;
+ 
+ 	int ref_count;
+ 
+ 	struct list_head list;
+ };
+ 
++#define hdac_ext_link_alt(link)		((link)->lcaps & AZX_ML_HDA_LCAP_ALT)
++#define hdac_ext_link_ofls(link)	((link)->lcaps & AZX_ML_HDA_LCAP_OFLS)
++
+ int snd_hdac_ext_bus_link_power_up(struct hdac_ext_link *hlink);
+ int snd_hdac_ext_bus_link_power_down(struct hdac_ext_link *hlink);
+ int snd_hdac_ext_bus_link_power_up_all(struct hdac_bus *bus);
+diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
+index f880835f7695ed..4175eec40048ad 100644
+--- a/include/trace/events/netfs.h
++++ b/include/trace/events/netfs.h
+@@ -30,6 +30,7 @@
+ 	EM(netfs_write_trace_dio_write,		"DIO-WRITE")	\
+ 	EM(netfs_write_trace_unbuffered_write,	"UNB-WRITE")	\
+ 	EM(netfs_write_trace_writeback,		"WRITEBACK")	\
++	EM(netfs_write_trace_writeback_single,	"WB-SINGLE") \
+ 	E_(netfs_write_trace_writethrough,	"WRITETHRU")
+ 
+ #define netfs_rreq_origins					\
+@@ -38,6 +39,7 @@
+ 	EM(NETFS_READ_GAPS,			"RG")		\
+ 	EM(NETFS_READ_SINGLE,			"R1")		\
+ 	EM(NETFS_READ_FOR_WRITE,		"RW")		\
++	EM(NETFS_UNBUFFERED_READ,		"UR")		\
+ 	EM(NETFS_DIO_READ,			"DR")		\
+ 	EM(NETFS_WRITEBACK,			"WB")		\
+ 	EM(NETFS_WRITEBACK_SINGLE,		"W1")		\
+@@ -128,17 +130,15 @@
+ #define netfs_rreq_ref_traces					\
+ 	EM(netfs_rreq_trace_get_for_outstanding,"GET OUTSTND")	\
+ 	EM(netfs_rreq_trace_get_subreq,		"GET SUBREQ ")	\
+-	EM(netfs_rreq_trace_get_work,		"GET WORK   ")	\
+ 	EM(netfs_rreq_trace_put_complete,	"PUT COMPLT ")	\
+ 	EM(netfs_rreq_trace_put_discard,	"PUT DISCARD")	\
+ 	EM(netfs_rreq_trace_put_failed,		"PUT FAILED ")	\
+ 	EM(netfs_rreq_trace_put_no_submit,	"PUT NO-SUBM")	\
+ 	EM(netfs_rreq_trace_put_return,		"PUT RETURN ")	\
+ 	EM(netfs_rreq_trace_put_subreq,		"PUT SUBREQ ")	\
+-	EM(netfs_rreq_trace_put_work,		"PUT WORK   ")	\
+-	EM(netfs_rreq_trace_put_work_complete,	"PUT WORK CP")	\
+-	EM(netfs_rreq_trace_put_work_nq,	"PUT WORK NQ")	\
++	EM(netfs_rreq_trace_put_work_ip,	"PUT WORK IP ")	\
+ 	EM(netfs_rreq_trace_see_work,		"SEE WORK   ")	\
++	EM(netfs_rreq_trace_see_work_complete,	"SEE WORK CP")	\
+ 	E_(netfs_rreq_trace_new,		"NEW        ")
+ 
+ #define netfs_sreq_ref_traces					\
+diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
+index 616916985e3f30..5e5442d03f8789 100644
+--- a/include/uapi/drm/xe_drm.h
++++ b/include/uapi/drm/xe_drm.h
+@@ -1206,6 +1206,11 @@ struct drm_xe_vm_bind {
+  *    there is no need to explicitly set that. When a queue of type
+  *    %DRM_XE_PXP_TYPE_HWDRM is created, the PXP default HWDRM session
+ *    (%XE_PXP_HWDRM_DEFAULT_SESSION) will be started, if it isn't already running.
++ *    The user is expected to query the PXP status via the query ioctl (see
++ *    %DRM_XE_DEVICE_QUERY_PXP_STATUS) and to wait for PXP to be ready before
++ *    attempting to create a queue with this property. When a queue is created
++ *    before PXP is ready, the ioctl will return -EBUSY if init is still in
++ *    progress or -EIO if init failed.
+  *    Given that going into a power-saving state kills PXP HWDRM sessions,
+  *    runtime PM will be blocked while queues of this type are alive.
+  *    All PXP queues will be killed if a PXP invalidation event occurs.
+diff --git a/include/uapi/linux/bits.h b/include/uapi/linux/bits.h
+index 682b406e10679d..a04afef9efca42 100644
+--- a/include/uapi/linux/bits.h
++++ b/include/uapi/linux/bits.h
+@@ -4,9 +4,9 @@
+ #ifndef _UAPI_LINUX_BITS_H
+ #define _UAPI_LINUX_BITS_H
+ 
+-#define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (BITS_PER_LONG - 1 - (h))))
++#define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (__BITS_PER_LONG - 1 - (h))))
+ 
+-#define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (BITS_PER_LONG_LONG - 1 - (h))))
++#define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (__BITS_PER_LONG_LONG - 1 - (h))))
+ 
+ #define __GENMASK_U128(h, l) \
+ 	((_BIT128((h)) << 1) - (_BIT128(l)))
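
The substitution matters because BITS_PER_LONG is kernel-internal and not
visible to userspace, while the UAPI headers provide __BITS_PER_LONG. The
arithmetic is unchanged; for example, on a 64-bit target:

	/*
	 * __GENMASK(7, 4):
	 *   ~0UL << 4                          = 0xfffffffffffffff0
	 *   ~0UL >> (__BITS_PER_LONG - 1 - 7)  = 0x00000000000000ff
	 *   ANDed                              = 0x00000000000000f0
	 */
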
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index fd404729b1154b..fe5df2a9fe8ee6 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -2051,7 +2051,8 @@ union bpf_attr {
+  * 		untouched (unless **BPF_F_MARK_ENFORCE** is added as well), and
+  * 		for updates resulting in a null checksum the value is set to
+  * 		**CSUM_MANGLED_0** instead. Flag **BPF_F_PSEUDO_HDR** indicates
+- * 		the checksum is to be computed against a pseudo-header.
++ * 		that the modified header field is part of the pseudo-header.
++ * 		Flag **BPF_F_IPV6** should be set for IPv6 packets.
+  *
+  * 		This helper works in combination with **bpf_csum_diff**\ (),
+  * 		which does not update the checksum in-place, but offers more
+@@ -6068,6 +6069,7 @@ enum {
+ 	BPF_F_PSEUDO_HDR		= (1ULL << 4),
+ 	BPF_F_MARK_MANGLED_0		= (1ULL << 5),
+ 	BPF_F_MARK_ENFORCE		= (1ULL << 6),
++	BPF_F_IPV6			= (1ULL << 7),
+ };
+ 
+ /* BPF_FUNC_skb_set_tunnel_key and BPF_FUNC_skb_get_tunnel_key flags. */
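
A caller sketch from BPF program context (offset and value names are
hypothetical); the low four bits of the flags still carry the size of the
stored value:

	/* Rewrite one 32-bit word of an IPv6 address that feeds the L4
	 * pseudo-header checksum.
	 */
	bpf_l4_csum_replace(skb, csum_off, old_word, new_word,
			    BPF_F_PSEUDO_HDR | BPF_F_IPV6 | sizeof(new_word));
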
+diff --git a/io_uring/fdinfo.c b/io_uring/fdinfo.c
+index e0d6a59a89fa1b..f948917f7f7071 100644
+--- a/io_uring/fdinfo.c
++++ b/io_uring/fdinfo.c
+@@ -172,18 +172,26 @@ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
+ 
+ 	if (ctx->flags & IORING_SETUP_SQPOLL) {
+ 		struct io_sq_data *sq = ctx->sq_data;
++		struct task_struct *tsk;
+ 
++		rcu_read_lock();
++		tsk = rcu_dereference(sq->thread);
+ 		/*
+ 		 * sq->thread might be NULL if we raced with the sqpoll
+ 		 * thread termination.
+ 		 */
+-		if (sq->thread) {
++		if (tsk) {
++			get_task_struct(tsk);
++			rcu_read_unlock();
++			getrusage(tsk, RUSAGE_SELF, &sq_usage);
++			put_task_struct(tsk);
+ 			sq_pid = sq->task_pid;
+ 			sq_cpu = sq->sq_cpu;
+-			getrusage(sq->thread, RUSAGE_SELF, &sq_usage);
+ 			sq_total_time = (sq_usage.ru_stime.tv_sec * 1000000
+ 					 + sq_usage.ru_stime.tv_usec);
+ 			sq_work_time = sq->work_time;
++		} else {
++			rcu_read_unlock();
+ 		}
+ 	}
+ 
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index edda31a15c6e65..e5466f65682699 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -537,18 +537,30 @@ void io_req_queue_iowq(struct io_kiocb *req)
+ 	io_req_task_work_add(req);
+ }
+ 
++static bool io_drain_defer_seq(struct io_kiocb *req, u32 seq)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++
++	return seq + READ_ONCE(ctx->cq_extra) != ctx->cached_cq_tail;
++}
++
+ static __cold noinline void io_queue_deferred(struct io_ring_ctx *ctx)
+ {
++	bool drain_seen = false, first = true;
++
+ 	spin_lock(&ctx->completion_lock);
+ 	while (!list_empty(&ctx->defer_list)) {
+ 		struct io_defer_entry *de = list_first_entry(&ctx->defer_list,
+ 						struct io_defer_entry, list);
+ 
+-		if (req_need_defer(de->req, de->seq))
++		drain_seen |= de->req->flags & REQ_F_IO_DRAIN;
++		if ((drain_seen || first) && io_drain_defer_seq(de->req, de->seq))
+ 			break;
++
+ 		list_del_init(&de->list);
+ 		io_req_task_queue(de->req);
+ 		kfree(de);
++		first = false;
+ 	}
+ 	spin_unlock(&ctx->completion_lock);
+ }
+@@ -2901,7 +2913,7 @@ static __cold void io_ring_exit_work(struct work_struct *work)
+ 			struct task_struct *tsk;
+ 
+ 			io_sq_thread_park(sqd);
+-			tsk = sqd->thread;
++			tsk = sqpoll_task_locked(sqd);
+ 			if (tsk && tsk->io_uring && tsk->io_uring->io_wq)
+ 				io_wq_cancel_cb(tsk->io_uring->io_wq,
+ 						io_cancel_ctx_cb, ctx, true);
+@@ -3138,7 +3150,7 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
+ 	s64 inflight;
+ 	DEFINE_WAIT(wait);
+ 
+-	WARN_ON_ONCE(sqd && sqd->thread != current);
++	WARN_ON_ONCE(sqd && sqpoll_task_locked(sqd) != current);
+ 
+ 	if (!current->io_uring)
+ 		return;
+diff --git a/io_uring/register.c b/io_uring/register.c
+index cc23a4c205cd43..a59589249fce7a 100644
+--- a/io_uring/register.c
++++ b/io_uring/register.c
+@@ -273,6 +273,8 @@ static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
+ 	if (ctx->flags & IORING_SETUP_SQPOLL) {
+ 		sqd = ctx->sq_data;
+ 		if (sqd) {
++			struct task_struct *tsk;
++
+ 			/*
+ 			 * Observe the correct sqd->lock -> ctx->uring_lock
+ 			 * ordering. Fine to drop uring_lock here, we hold
+@@ -282,8 +284,9 @@ static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
+ 			mutex_unlock(&ctx->uring_lock);
+ 			mutex_lock(&sqd->lock);
+ 			mutex_lock(&ctx->uring_lock);
+-			if (sqd->thread)
+-				tctx = sqd->thread->io_uring;
++			tsk = sqpoll_task_locked(sqd);
++			if (tsk)
++				tctx = tsk->io_uring;
+ 		}
+ 	} else {
+ 		tctx = current->io_uring;
+diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
+index 03c699493b5ab6..268d2fbe6160c2 100644
+--- a/io_uring/sqpoll.c
++++ b/io_uring/sqpoll.c
+@@ -30,7 +30,7 @@ enum {
+ void io_sq_thread_unpark(struct io_sq_data *sqd)
+ 	__releases(&sqd->lock)
+ {
+-	WARN_ON_ONCE(sqd->thread == current);
++	WARN_ON_ONCE(sqpoll_task_locked(sqd) == current);
+ 
+ 	/*
+ 	 * Do the dance but not conditional clear_bit() because it'd race with
+@@ -46,24 +46,32 @@ void io_sq_thread_unpark(struct io_sq_data *sqd)
+ void io_sq_thread_park(struct io_sq_data *sqd)
+ 	__acquires(&sqd->lock)
+ {
+-	WARN_ON_ONCE(data_race(sqd->thread) == current);
++	struct task_struct *tsk;
+ 
+ 	atomic_inc(&sqd->park_pending);
+ 	set_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
+ 	mutex_lock(&sqd->lock);
+-	if (sqd->thread)
+-		wake_up_process(sqd->thread);
++
++	tsk = sqpoll_task_locked(sqd);
++	if (tsk) {
++		WARN_ON_ONCE(tsk == current);
++		wake_up_process(tsk);
++	}
+ }
+ 
+ void io_sq_thread_stop(struct io_sq_data *sqd)
+ {
+-	WARN_ON_ONCE(sqd->thread == current);
++	struct task_struct *tsk;
++
+ 	WARN_ON_ONCE(test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state));
+ 
+ 	set_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
+ 	mutex_lock(&sqd->lock);
+-	if (sqd->thread)
+-		wake_up_process(sqd->thread);
++	tsk = sqpoll_task_locked(sqd);
++	if (tsk) {
++		WARN_ON_ONCE(tsk == current);
++		wake_up_process(tsk);
++	}
+ 	mutex_unlock(&sqd->lock);
+ 	wait_for_completion(&sqd->exited);
+ }
+@@ -270,7 +278,8 @@ static int io_sq_thread(void *data)
+ 	/* offload context creation failed, just exit */
+ 	if (!current->io_uring) {
+ 		mutex_lock(&sqd->lock);
+-		sqd->thread = NULL;
++		rcu_assign_pointer(sqd->thread, NULL);
++		put_task_struct(current);
+ 		mutex_unlock(&sqd->lock);
+ 		goto err_out;
+ 	}
+@@ -379,7 +388,8 @@ static int io_sq_thread(void *data)
+ 		io_sq_tw(&retry_list, UINT_MAX);
+ 
+ 	io_uring_cancel_generic(true, sqd);
+-	sqd->thread = NULL;
++	rcu_assign_pointer(sqd->thread, NULL);
++	put_task_struct(current);
+ 	list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
+ 		atomic_or(IORING_SQ_NEED_WAKEUP, &ctx->rings->sq_flags);
+ 	io_run_task_work();
+@@ -484,7 +494,10 @@ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
+ 			goto err_sqpoll;
+ 		}
+ 
+-		sqd->thread = tsk;
++		mutex_lock(&sqd->lock);
++		rcu_assign_pointer(sqd->thread, tsk);
++		mutex_unlock(&sqd->lock);
++
+ 		task_to_put = get_task_struct(tsk);
+ 		ret = io_uring_alloc_task_context(tsk, ctx);
+ 		wake_up_new_task(tsk);
+@@ -495,9 +508,6 @@ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
+ 		ret = -EINVAL;
+ 		goto err;
+ 	}
+-
+-	if (task_to_put)
+-		put_task_struct(task_to_put);
+ 	return 0;
+ err_sqpoll:
+ 	complete(&ctx->sq_data->exited);
+@@ -515,10 +525,13 @@ __cold int io_sqpoll_wq_cpu_affinity(struct io_ring_ctx *ctx,
+ 	int ret = -EINVAL;
+ 
+ 	if (sqd) {
++		struct task_struct *tsk;
++
+ 		io_sq_thread_park(sqd);
+ 		/* Don't set affinity for a dying thread */
+-		if (sqd->thread)
+-			ret = io_wq_cpu_affinity(sqd->thread->io_uring, mask);
++		tsk = sqpoll_task_locked(sqd);
++		if (tsk)
++			ret = io_wq_cpu_affinity(tsk->io_uring, mask);
+ 		io_sq_thread_unpark(sqd);
+ 	}
+ 
+diff --git a/io_uring/sqpoll.h b/io_uring/sqpoll.h
+index 4171666b1cf4cc..b83dcdec9765fd 100644
+--- a/io_uring/sqpoll.h
++++ b/io_uring/sqpoll.h
+@@ -8,7 +8,7 @@ struct io_sq_data {
+ 	/* ctx's that are using this sqd */
+ 	struct list_head	ctx_list;
+ 
+-	struct task_struct	*thread;
++	struct task_struct __rcu *thread;
+ 	struct wait_queue_head	wait;
+ 
+ 	unsigned		sq_thread_idle;
+@@ -29,3 +29,9 @@ void io_sq_thread_unpark(struct io_sq_data *sqd);
+ void io_put_sq_data(struct io_sq_data *sqd);
+ void io_sqpoll_wait_sq(struct io_ring_ctx *ctx);
+ int io_sqpoll_wq_cpu_affinity(struct io_ring_ctx *ctx, cpumask_var_t mask);
++
++static inline struct task_struct *sqpoll_task_locked(struct io_sq_data *sqd)
++{
++	return rcu_dereference_protected(sqd->thread,
++					 lockdep_is_held(&sqd->lock));
++}
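
With the thread pointer RCU-annotated there are two access patterns:
under sqd->lock via sqpoll_task_locked() above, or locklessly under RCU
as the fdinfo hunk does. A condensed sketch of the latter (error paths
elided):

	struct task_struct *tsk;
	struct rusage usage;

	rcu_read_lock();
	tsk = rcu_dereference(sqd->thread);
	if (tsk)
		get_task_struct(tsk);	/* pin before leaving the RCU section */
	rcu_read_unlock();

	if (tsk) {
		getrusage(tsk, RUSAGE_SELF, &usage);
		put_task_struct(tsk);
	}
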
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index ba6b6118cf5040..c20babbf998f4e 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -2358,8 +2358,8 @@ static unsigned int __bpf_prog_ret0_warn(const void *ctx,
+ 	return 0;
+ }
+ 
+-bool bpf_prog_map_compatible(struct bpf_map *map,
+-			     const struct bpf_prog *fp)
++static bool __bpf_prog_map_compatible(struct bpf_map *map,
++				      const struct bpf_prog *fp)
+ {
+ 	enum bpf_prog_type prog_type = resolve_prog_type(fp);
+ 	bool ret;
+@@ -2368,14 +2368,6 @@ bool bpf_prog_map_compatible(struct bpf_map *map,
+ 	if (fp->kprobe_override)
+ 		return false;
+ 
+-	/* XDP programs inserted into maps are not guaranteed to run on
+-	 * a particular netdev (and can run outside driver context entirely
+-	 * in the case of devmap and cpumap). Until device checks
+-	 * are implemented, prohibit adding dev-bound programs to program maps.
+-	 */
+-	if (bpf_prog_is_dev_bound(aux))
+-		return false;
+-
+ 	spin_lock(&map->owner.lock);
+ 	if (!map->owner.type) {
+ 		/* There's no owner yet where we could check for
+@@ -2409,6 +2401,19 @@ bool bpf_prog_map_compatible(struct bpf_map *map,
+ 	return ret;
+ }
+ 
++bool bpf_prog_map_compatible(struct bpf_map *map, const struct bpf_prog *fp)
++{
++	/* XDP programs inserted into maps are not guaranteed to run on
++	 * a particular netdev (and can run outside driver context entirely
++	 * in the case of devmap and cpumap). Until device checks
++	 * are implemented, prohibit adding dev-bound programs to program maps.
++	 */
++	if (bpf_prog_is_dev_bound(fp->aux))
++		return false;
++
++	return __bpf_prog_map_compatible(map, fp);
++}
++
+ static int bpf_check_tail_call(const struct bpf_prog *fp)
+ {
+ 	struct bpf_prog_aux *aux = fp->aux;
+@@ -2421,7 +2426,7 @@ static int bpf_check_tail_call(const struct bpf_prog *fp)
+ 		if (!map_type_contains_progs(map))
+ 			continue;
+ 
+-		if (!bpf_prog_map_compatible(map, fp)) {
++		if (!__bpf_prog_map_compatible(map, fp)) {
+ 			ret = -EINVAL;
+ 			goto out;
+ 		}
+@@ -2469,7 +2474,7 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
+ 	/* In case of BPF to BPF calls, verifier did all the prep
+ 	 * work with regards to JITing, etc.
+ 	 */
+-	bool jit_needed = false;
++	bool jit_needed = fp->jit_requested;
+ 
+ 	if (fp->bpf_func)
+ 		goto finalize;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 54c6953a8b84c2..efa70141171290 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -4413,8 +4413,10 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx,
+ 			 * before it would be equally necessary to
+ 			 * propagate it to dreg.
+ 			 */
+-			bt_set_reg(bt, dreg);
+-			bt_set_reg(bt, sreg);
++			if (!hist || !(hist->flags & INSN_F_SRC_REG_STACK))
++				bt_set_reg(bt, sreg);
++			if (!hist || !(hist->flags & INSN_F_DST_REG_STACK))
++				bt_set_reg(bt, dreg);
+ 		} else if (BPF_SRC(insn->code) == BPF_K) {
+ 			 /* dreg <cond> K
+ 			  * Only dreg still needs precision before
+@@ -16377,6 +16379,7 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
+ 	struct bpf_reg_state *eq_branch_regs;
+ 	struct linked_regs linked_regs = {};
+ 	u8 opcode = BPF_OP(insn->code);
++	int insn_flags = 0;
+ 	bool is_jmp32;
+ 	int pred = -1;
+ 	int err;
+@@ -16435,6 +16438,9 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
+ 				insn->src_reg);
+ 			return -EACCES;
+ 		}
++
++		if (src_reg->type == PTR_TO_STACK)
++			insn_flags |= INSN_F_SRC_REG_STACK;
+ 	} else {
+ 		if (insn->src_reg != BPF_REG_0) {
+ 			verbose(env, "BPF_JMP/JMP32 uses reserved fields\n");
+@@ -16446,6 +16452,14 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
+ 		__mark_reg_known(src_reg, insn->imm);
+ 	}
+ 
++	if (dst_reg->type == PTR_TO_STACK)
++		insn_flags |= INSN_F_DST_REG_STACK;
++	if (insn_flags) {
++		err = push_insn_history(env, this_branch, insn_flags, 0);
++		if (err)
++			return err;
++	}
++
+ 	is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32;
+ 	pred = is_branch_taken(dst_reg, src_reg, opcode, is_jmp32);
+ 	if (pred >= 0) {
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 95e703891b24f8..e97bc9220fd1a8 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6239,6 +6239,9 @@ static int perf_event_set_output(struct perf_event *event,
+ static int perf_event_set_filter(struct perf_event *event, void __user *arg);
+ static int perf_copy_attr(struct perf_event_attr __user *uattr,
+ 			  struct perf_event_attr *attr);
++static int __perf_event_set_bpf_prog(struct perf_event *event,
++				     struct bpf_prog *prog,
++				     u64 bpf_cookie);
+ 
+ static long _perf_ioctl(struct perf_event *event, unsigned int cmd, unsigned long arg)
+ {
+@@ -6301,7 +6304,7 @@ static long _perf_ioctl(struct perf_event *event, unsigned int cmd, unsigned lon
+ 		if (IS_ERR(prog))
+ 			return PTR_ERR(prog);
+ 
+-		err = perf_event_set_bpf_prog(event, prog, 0);
++		err = __perf_event_set_bpf_prog(event, prog, 0);
+ 		if (err) {
+ 			bpf_prog_put(prog);
+ 			return err;
+@@ -10029,14 +10032,14 @@ __perf_event_account_interrupt(struct perf_event *event, int throttle)
+ 		hwc->interrupts = 1;
+ 	} else {
+ 		hwc->interrupts++;
+-		if (unlikely(throttle &&
+-			     hwc->interrupts > max_samples_per_tick)) {
+-			__this_cpu_inc(perf_throttled_count);
+-			tick_dep_set_cpu(smp_processor_id(), TICK_DEP_BIT_PERF_EVENTS);
+-			hwc->interrupts = MAX_INTERRUPTS;
+-			perf_log_throttle(event, 0);
+-			ret = 1;
+-		}
++	}
++
++	if (unlikely(throttle && hwc->interrupts >= max_samples_per_tick)) {
++		__this_cpu_inc(perf_throttled_count);
++		tick_dep_set_cpu(smp_processor_id(), TICK_DEP_BIT_PERF_EVENTS);
++		hwc->interrupts = MAX_INTERRUPTS;
++		perf_log_throttle(event, 0);
++		ret = 1;
+ 	}
+ 
+ 	if (event->attr.freq) {
+@@ -11069,8 +11072,9 @@ static inline bool perf_event_is_tracing(struct perf_event *event)
+ 	return false;
+ }
+ 
+-int perf_event_set_bpf_prog(struct perf_event *event, struct bpf_prog *prog,
+-			    u64 bpf_cookie)
++static int __perf_event_set_bpf_prog(struct perf_event *event,
++				     struct bpf_prog *prog,
++				     u64 bpf_cookie)
+ {
+ 	bool is_kprobe, is_uprobe, is_tracepoint, is_syscall_tp;
+ 
+@@ -11108,6 +11112,20 @@ int perf_event_set_bpf_prog(struct perf_event *event, struct bpf_prog *prog,
+ 	return perf_event_attach_bpf_prog(event, prog, bpf_cookie);
+ }
+ 
++int perf_event_set_bpf_prog(struct perf_event *event,
++			    struct bpf_prog *prog,
++			    u64 bpf_cookie)
++{
++	struct perf_event_context *ctx;
++	int ret;
++
++	ctx = perf_event_ctx_lock(event);
++	ret = __perf_event_set_bpf_prog(event, prog, bpf_cookie);
++	perf_event_ctx_unlock(event, ctx);
++
++	return ret;
++}
++
+ void perf_event_free_bpf_prog(struct perf_event *event)
+ {
+ 	if (!event->prog)
+@@ -11130,7 +11148,15 @@ static void perf_event_free_filter(struct perf_event *event)
+ {
+ }
+ 
+-int perf_event_set_bpf_prog(struct perf_event *event, struct bpf_prog *prog,
++static int __perf_event_set_bpf_prog(struct perf_event *event,
++				     struct bpf_prog *prog,
++				     u64 bpf_cookie)
++{
++	return -ENOENT;
++}
++
++int perf_event_set_bpf_prog(struct perf_event *event,
++			    struct bpf_prog *prog,
+ 			    u64 bpf_cookie)
+ {
+ 	return -ENOENT;
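
The perf hunks above use the common locked-helper/wrapper split: the logic moves into __perf_event_set_bpf_prog(), callers that already hold the event context lock (the ioctl path) call it directly, and the exported perf_event_set_bpf_prog() becomes a thin wrapper that takes the lock itself. A minimal sketch of that split, with hypothetical names (struct obj, __obj_set_prop):

    struct obj {
            struct mutex lock;
            int prop;
    };

    /* Internal variant: caller must already hold ->lock. */
    static int __obj_set_prop(struct obj *o, int value)
    {
            lockdep_assert_held(&o->lock);
            o->prop = value;
            return 0;
    }

    /* Public variant: acquires the lock, delegates, releases. */
    int obj_set_prop(struct obj *o, int value)
    {
            int ret;

            mutex_lock(&o->lock);
            ret = __obj_set_prop(o, value);
            mutex_unlock(&o->lock);
            return ret;
    }

The double-underscore prefix is the usual kernel convention for "locking is the caller's problem".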
+diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
+index d9b7e2b38c7a9f..41606247c27763 100644
+--- a/kernel/power/energy_model.c
++++ b/kernel/power/energy_model.c
+@@ -233,6 +233,10 @@ static int em_compute_costs(struct device *dev, struct em_perf_state *table,
+ 	unsigned long prev_cost = ULONG_MAX;
+ 	int i, ret;
+ 
++	/* This is needed only for CPUs and EAS, so skip other devices */
++	if (!_is_cpu_device(dev))
++		return 0;
++
+ 	/* Compute the cost of each performance state. */
+ 	for (i = nr_states - 1; i >= 0; i--) {
+ 		unsigned long power_res, cost;
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index 23c0f4e6cb2ffe..5af9c7ee98cd4a 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -90,6 +90,11 @@ void hibernate_release(void)
+ 	atomic_inc(&hibernate_atomic);
+ }
+ 
++bool hibernation_in_progress(void)
++{
++	return !atomic_read(&hibernate_atomic);
++}
++
+ bool hibernation_available(void)
+ {
+ 	return nohibernate == 0 &&
+diff --git a/kernel/power/main.c b/kernel/power/main.c
+index 6254814d481714..0622e7dacf1720 100644
+--- a/kernel/power/main.c
++++ b/kernel/power/main.c
+@@ -613,7 +613,8 @@ bool pm_debug_messages_on __read_mostly;
+ 
+ bool pm_debug_messages_should_print(void)
+ {
+-	return pm_debug_messages_on && pm_suspend_target_state != PM_SUSPEND_ON;
++	return pm_debug_messages_on && (hibernation_in_progress() ||
++		pm_suspend_target_state != PM_SUSPEND_ON);
+ }
+ EXPORT_SYMBOL_GPL(pm_debug_messages_should_print);
+ 
+diff --git a/kernel/power/power.h b/kernel/power/power.h
+index c352dea2f67b56..f8496f40b54fa5 100644
+--- a/kernel/power/power.h
++++ b/kernel/power/power.h
+@@ -71,10 +71,14 @@ extern void enable_restore_image_protection(void);
+ static inline void enable_restore_image_protection(void) {}
+ #endif /* CONFIG_STRICT_KERNEL_RWX */
+ 
++extern bool hibernation_in_progress(void);
++
+ #else /* !CONFIG_HIBERNATION */
+ 
+ static inline void hibernate_reserved_size_init(void) {}
+ static inline void hibernate_image_size_init(void) {}
++
++static inline bool hibernation_in_progress(void) { return false; }
+ #endif /* !CONFIG_HIBERNATION */
+ 
+ #define power_attr(_name) \
+diff --git a/kernel/power/wakelock.c b/kernel/power/wakelock.c
+index 52571dcad768b9..4e941999a53ba6 100644
+--- a/kernel/power/wakelock.c
++++ b/kernel/power/wakelock.c
+@@ -49,6 +49,9 @@ ssize_t pm_show_wakelocks(char *buf, bool show_active)
+ 			len += sysfs_emit_at(buf, len, "%s ", wl->name);
+ 	}
+ 
++	if (len > 0)
++		--len;
++
+ 	len += sysfs_emit_at(buf, len, "\n");
+ 
+ 	mutex_unlock(&wakelocks_lock);
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 659f83e7104869..80b10893b5038d 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -801,6 +801,10 @@ static int rcu_watching_snap_save(struct rcu_data *rdp)
+ 	return 0;
+ }
+ 
++#ifndef arch_irq_stat_cpu
++#define arch_irq_stat_cpu(cpu) 0
++#endif
++
+ /*
+  * Returns positive if the specified CPU has passed through a quiescent state
+  * by virtue of being in or having passed through a dynticks idle state since
+@@ -936,9 +940,9 @@ static int rcu_watching_snap_recheck(struct rcu_data *rdp)
+ 			rsrp->cputime_irq     = kcpustat_field(kcsp, CPUTIME_IRQ, cpu);
+ 			rsrp->cputime_softirq = kcpustat_field(kcsp, CPUTIME_SOFTIRQ, cpu);
+ 			rsrp->cputime_system  = kcpustat_field(kcsp, CPUTIME_SYSTEM, cpu);
+-			rsrp->nr_hardirqs = kstat_cpu_irqs_sum(rdp->cpu);
+-			rsrp->nr_softirqs = kstat_cpu_softirqs_sum(rdp->cpu);
+-			rsrp->nr_csw = nr_context_switches_cpu(rdp->cpu);
++			rsrp->nr_hardirqs = kstat_cpu_irqs_sum(cpu) + arch_irq_stat_cpu(cpu);
++			rsrp->nr_softirqs = kstat_cpu_softirqs_sum(cpu);
++			rsrp->nr_csw = nr_context_switches_cpu(cpu);
+ 			rsrp->jiffies = jiffies;
+ 			rsrp->gp_seq = rdp->gp_seq;
+ 		}
+diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
+index a9a811d9d7a372..1bba2225e7448b 100644
+--- a/kernel/rcu/tree.h
++++ b/kernel/rcu/tree.h
+@@ -168,7 +168,7 @@ struct rcu_snap_record {
+ 	u64		cputime_irq;	/* Accumulated cputime of hard irqs */
+ 	u64		cputime_softirq;/* Accumulated cputime of soft irqs */
+ 	u64		cputime_system; /* Accumulated cputime of kernel tasks */
+-	unsigned long	nr_hardirqs;	/* Accumulated number of hard irqs */
++	u64		nr_hardirqs;	/* Accumulated number of hard irqs */
+ 	unsigned int	nr_softirqs;	/* Accumulated number of soft irqs */
+ 	unsigned long long nr_csw;	/* Accumulated number of task switches */
+ 	unsigned long   jiffies;	/* Track jiffies value */
+diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h
+index 925fcdad5dea22..56b21219442b65 100644
+--- a/kernel/rcu/tree_stall.h
++++ b/kernel/rcu/tree_stall.h
+@@ -435,8 +435,8 @@ static void print_cpu_stat_info(int cpu)
+ 	rsr.cputime_system  = kcpustat_field(kcsp, CPUTIME_SYSTEM, cpu);
+ 
+ 	pr_err("\t         hardirqs   softirqs   csw/system\n");
+-	pr_err("\t number: %8ld %10d %12lld\n",
+-		kstat_cpu_irqs_sum(cpu) - rsrp->nr_hardirqs,
++	pr_err("\t number: %8lld %10d %12lld\n",
++		kstat_cpu_irqs_sum(cpu) + arch_irq_stat_cpu(cpu) - rsrp->nr_hardirqs,
+ 		kstat_cpu_softirqs_sum(cpu) - rsrp->nr_softirqs,
+ 		nr_context_switches_cpu(cpu) - rsrp->nr_csw);
+ 	pr_err("\tcputime: %8lld %10lld %12lld   ==> %d(ms)\n",
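
The RCU stall hunks widen the saved hardirq counter to u64 and fold in arch_irq_stat_cpu(), defaulting the latter to zero on architectures that do not define it. A minimal sketch of that default-macro-plus-snapshot-delta idiom; base_event_count() and the other names here are hypothetical stand-ins, not kernel APIs:

    /* Architectures may provide an extra counter; default to zero. */
    #ifndef arch_extra_events
    #define arch_extra_events(cpu) 0
    #endif

    struct event_snap {
            u64 nr_events;
    };

    static void snap_save(struct event_snap *s, int cpu)
    {
            s->nr_events = base_event_count(cpu) + arch_extra_events(cpu);
    }

    /* Events since the snapshot; u64 avoids truncating large counts. */
    static u64 events_since(const struct event_snap *s, int cpu)
    {
            return base_event_count(cpu) + arch_extra_events(cpu)
                   - s->nr_events;
    }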
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index c81cf642dba055..d593d6612ba07e 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -2283,6 +2283,12 @@ unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state
+ 		 * just go back and repeat.
+ 		 */
+ 		rq = task_rq_lock(p, &rf);
++		/*
++		 * If task is sched_delayed, force dequeue it, to avoid always
++		 * hitting the tick timeout in the queued case
++		 */
++		if (p->se.sched_delayed)
++			dequeue_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_DELAYED);
+ 		trace_sched_wait_task(p);
+ 		running = task_on_cpu(rq, p);
+ 		queued = task_on_rq_queued(p);
+@@ -6571,12 +6577,14 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+  * Otherwise marks the task's __state as RUNNING
+  */
+ static bool try_to_block_task(struct rq *rq, struct task_struct *p,
+-			      unsigned long task_state)
++			      unsigned long *task_state_p)
+ {
++	unsigned long task_state = *task_state_p;
+ 	int flags = DEQUEUE_NOCLOCK;
+ 
+ 	if (signal_pending_state(task_state, p)) {
+ 		WRITE_ONCE(p->__state, TASK_RUNNING);
++		*task_state_p = TASK_RUNNING;
+ 		return false;
+ 	}
+ 
+@@ -6713,7 +6721,7 @@ static void __sched notrace __schedule(int sched_mode)
+ 			goto picked;
+ 		}
+ 	} else if (!preempt && prev_state) {
+-		try_to_block_task(rq, prev, prev_state);
++		try_to_block_task(rq, prev, &prev_state);
+ 		switch_count = &prev->nvcsw;
+ 	}
+ 
+diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
+index e67a19a071c114..50b9f3af810d93 100644
+--- a/kernel/sched/ext_idle.c
++++ b/kernel/sched/ext_idle.c
+@@ -131,6 +131,7 @@ static s32 pick_idle_cpu_in_node(const struct cpumask *cpus_allowed, int node, u
+ 		goto retry;
+ }
+ 
++#ifdef CONFIG_NUMA
+ /*
+  * Tracks nodes that have not yet been visited when searching for an idle
+  * CPU across all available nodes.
+@@ -179,6 +180,13 @@ static s32 pick_idle_cpu_from_online_nodes(const struct cpumask *cpus_allowed, i
+ 
+ 	return cpu;
+ }
++#else
++static inline s32
++pick_idle_cpu_from_online_nodes(const struct cpumask *cpus_allowed, int node, u64 flags)
++{
++	return -EBUSY;
++}
++#endif
+ 
+ /*
+  * Find an idle CPU in the system, starting from @node.
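
The ext_idle hunk gates the cross-node search behind CONFIG_NUMA and provides an inline stub returning -EBUSY otherwise, so callers stay free of #ifdefs. The same idiom, sketched with hypothetical names as the declaration/stub pair would usually sit in a header:

    #ifdef CONFIG_NUMA
    s32 pick_cpu_across_nodes(const struct cpumask *allowed, int node, u64 flags);
    #else
    static inline s32
    pick_cpu_across_nodes(const struct cpumask *allowed, int node, u64 flags)
    {
            return -EBUSY;  /* no other nodes to search */
    }
    #endif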
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 0fb9bf995a4795..0c04ed41485259 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -7196,6 +7196,11 @@ static bool dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
+ 	return true;
+ }
+ 
++static inline unsigned int cfs_h_nr_delayed(struct rq *rq)
++{
++	return (rq->cfs.h_nr_queued - rq->cfs.h_nr_runnable);
++}
++
+ #ifdef CONFIG_SMP
+ 
+ /* Working cpumask for: sched_balance_rq(), sched_balance_newidle(). */
+@@ -7357,8 +7362,12 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
+ 	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
+ 		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
+ 
+-	if (sync && cpu_rq(this_cpu)->nr_running == 1)
+-		return this_cpu;
++	if (sync) {
++		struct rq *rq = cpu_rq(this_cpu);
++
++		if ((rq->nr_running - cfs_h_nr_delayed(rq)) == 1)
++			return this_cpu;
++	}
+ 
+ 	if (available_idle_cpu(prev_cpu))
+ 		return prev_cpu;
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 50e8d04ab661f4..2e5b89d7d86605 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -1405,6 +1405,15 @@ void run_posix_cpu_timers(void)
+ 
+ 	lockdep_assert_irqs_disabled();
+ 
++	/*
++	 * Ensure that release_task(tsk) can't happen while
++	 * handle_posix_cpu_timers() is running. Otherwise, a concurrent
++	 * posix_cpu_timer_del() may fail to lock_task_sighand(tsk) and
++	 * miss timer->it.cpu.firing != 0.
++	 */
++	if (tsk->exit_state)
++		return;
++
+ 	/*
+ 	 * If the actual expiry is deferred to task work context and the
+ 	 * work is already scheduled there is no point to do anything here.
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 187dc37d61d4a3..090cdab38f0ccd 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -1858,7 +1858,7 @@ static struct pt_regs *get_bpf_raw_tp_regs(void)
+ 	struct bpf_raw_tp_regs *tp_regs = this_cpu_ptr(&bpf_raw_tp_regs);
+ 	int nest_level = this_cpu_inc_return(bpf_raw_tp_nest_level);
+ 
+-	if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(tp_regs->regs))) {
++	if (nest_level > ARRAY_SIZE(tp_regs->regs)) {
+ 		this_cpu_dec(bpf_raw_tp_nest_level);
+ 		return ERR_PTR(-EBUSY);
+ 	}
+@@ -2987,6 +2987,9 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
+ 	if (sizeof(u64) != sizeof(void *))
+ 		return -EOPNOTSUPP;
+ 
++	if (attr->link_create.flags)
++		return -EINVAL;
++
+ 	if (!is_kprobe_multi(prog))
+ 		return -EINVAL;
+ 
+@@ -3376,6 +3379,9 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
+ 	if (sizeof(u64) != sizeof(void *))
+ 		return -EOPNOTSUPP;
+ 
++	if (attr->link_create.flags)
++		return -EINVAL;
++
+ 	if (!is_uprobe_multi(prog))
+ 		return -EINVAL;
+ 
+@@ -3417,7 +3423,9 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
+ 	}
+ 
+ 	if (pid) {
++		rcu_read_lock();
+ 		task = get_pid_task(find_vpid(pid), PIDTYPE_TGID);
++		rcu_read_unlock();
+ 		if (!task) {
+ 			err = -ESRCH;
+ 			goto error_path_put;
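
The bpf_trace hunk above wraps the pid lookup in rcu_read_lock() because find_vpid() returns an RCU-protected struct pid. A minimal sketch of the full idiom; lookup_tgid_task() is a hypothetical helper, not part of the patch:

    /*
     * Resolve a numeric TGID to a referenced task. find_vpid() must run
     * inside an RCU read-side section; get_pid_task() takes its own
     * reference, which outlives the section. The caller is responsible
     * for put_task_struct() on a non-NULL result.
     */
    static struct task_struct *lookup_tgid_task(pid_t nr)
    {
            struct task_struct *task;

            rcu_read_lock();
            task = get_pid_task(find_vpid(nr), PIDTYPE_TGID);
            rcu_read_unlock();

            return task;
    }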
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 3f9bf562beea23..67707ff28fc519 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -2849,6 +2849,12 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ 	if (nr_pages < 2)
+ 		nr_pages = 2;
+ 
++	/*
++	 * Keep CPUs from coming online while resizing to synchronize
++	 * with new per CPU buffers being created.
++	 */
++	guard(cpus_read_lock)();
++
+ 	/* prevent another thread from changing buffer sizes */
+ 	mutex_lock(&buffer->mutex);
+ 	atomic_inc(&buffer->resizing);
+@@ -2893,7 +2899,6 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ 			cond_resched();
+ 		}
+ 
+-		cpus_read_lock();
+ 		/*
+ 		 * Fire off all the required work handlers
+ 		 * We can't schedule on offline CPUs, but it's not necessary
+@@ -2933,7 +2938,6 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ 			cpu_buffer->nr_pages_to_update = 0;
+ 		}
+ 
+-		cpus_read_unlock();
+ 	} else {
+ 		cpu_buffer = buffer->buffers[cpu_id];
+ 
+@@ -2961,8 +2965,6 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ 			goto out_err;
+ 		}
+ 
+-		cpus_read_lock();
+-
+ 		/* Can't run something on an offline CPU. */
+ 		if (!cpu_online(cpu_id))
+ 			rb_update_pages(cpu_buffer);
+@@ -2981,7 +2983,6 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ 		}
+ 
+ 		cpu_buffer->nr_pages_to_update = 0;
+-		cpus_read_unlock();
+ 	}
+ 
+  out:
+@@ -6764,7 +6765,7 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
+ 	old_size = buffer->subbuf_size;
+ 
+ 	/* prevent another thread from changing buffer sizes */
+-	mutex_lock(&buffer->mutex);
++	guard(mutex)(&buffer->mutex);
+ 	atomic_inc(&buffer->record_disabled);
+ 
+ 	/* Make sure all commits have finished */
+@@ -6869,7 +6870,6 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
+ 	}
+ 
+ 	atomic_dec(&buffer->record_disabled);
+-	mutex_unlock(&buffer->mutex);
+ 
+ 	return 0;
+ 
+@@ -6878,7 +6878,6 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
+ 	buffer->subbuf_size = old_size;
+ 
+ 	atomic_dec(&buffer->record_disabled);
+-	mutex_unlock(&buffer->mutex);
+ 
+ 	for_each_buffer_cpu(buffer, cpu) {
+ 		cpu_buffer = buffer->buffers[cpu];
+@@ -7284,8 +7283,8 @@ int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu)
+ 	/* Check if any events were dropped */
+ 	missed_events = cpu_buffer->lost_events;
+ 
+-	if (cpu_buffer->reader_page != cpu_buffer->commit_page) {
+-		if (missed_events) {
++	if (missed_events) {
++		if (cpu_buffer->reader_page != cpu_buffer->commit_page) {
+ 			struct buffer_data_page *bpage = reader->page;
+ 			unsigned int commit;
+ 			/*
+@@ -7306,13 +7305,23 @@ int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu)
+ 				local_add(RB_MISSED_STORED, &bpage->commit);
+ 			}
+ 			local_add(RB_MISSED_EVENTS, &bpage->commit);
++		} else if (!WARN_ONCE(cpu_buffer->reader_page == cpu_buffer->tail_page,
++				      "Reader on commit with %ld missed events",
++				      missed_events)) {
++			/*
++			 * There shouldn't be any missed events if the tail_page
++			 * is on the reader page. But if the tail page is not on the
++			 * reader page and the commit_page is, that would mean that
++			 * there's a commit_overrun (an interrupt preempted an
++			 * addition of an event and then filled the buffer
++			 * with new events). In this case it's not an
++			 * error, but it should still be reported.
++			 *
++			 * TODO: Add missed events to the page for user space to know.
++			 */
++			pr_info("Ring buffer [%d] commit overrun lost %ld events at timestamp:%lld\n",
++				cpu, missed_events, cpu_buffer->reader_page->page->time_stamp);
+ 		}
+-	} else {
+-		/*
+-		 * There really shouldn't be any missed events if the commit
+-		 * is on the reader page.
+-		 */
+-		WARN_ON_ONCE(missed_events);
+ 	}
+ 
+ 	cpu_buffer->lost_events = 0;
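
Several ring-buffer hunks above convert paired lock/unlock calls to the scope-based guards from <linux/cleanup.h>: guard(mutex)(&m) and guard(cpus_read_lock)() release their lock automatically on every exit path, which is why the explicit unlocks disappear from the error paths. A minimal sketch of guard() usage, with hypothetical names:

    #include <linux/cleanup.h>
    #include <linux/mutex.h>

    static DEFINE_MUTEX(resize_lock);

    static int do_resize(unsigned long size)
    {
            guard(mutex)(&resize_lock);   /* dropped on any return below */

            if (!size)
                    return -EINVAL;       /* no explicit unlock needed */

            /* ... resize work, still under the lock ... */
            return 0;
    }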
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 79be1995db44c4..10ee434a9b755f 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -1772,6 +1772,9 @@ extern int event_enable_register_trigger(char *glob,
+ extern void event_enable_unregister_trigger(char *glob,
+ 					    struct event_trigger_data *test,
+ 					    struct trace_event_file *file);
++extern struct event_trigger_data *
++trigger_data_alloc(struct event_command *cmd_ops, char *cmd, char *param,
++		   void *private_data);
+ extern void trigger_data_free(struct event_trigger_data *data);
+ extern int event_trigger_init(struct event_trigger_data *data);
+ extern int trace_event_trigger_enable_disable(struct trace_event_file *file,
+@@ -1798,11 +1801,6 @@ extern bool event_trigger_check_remove(const char *glob);
+ extern bool event_trigger_empty_param(const char *param);
+ extern int event_trigger_separate_filter(char *param_and_filter, char **param,
+ 					 char **filter, bool param_required);
+-extern struct event_trigger_data *
+-event_trigger_alloc(struct event_command *cmd_ops,
+-		    char *cmd,
+-		    char *param,
+-		    void *private_data);
+ extern int event_trigger_parse_num(char *trigger,
+ 				   struct event_trigger_data *trigger_data);
+ extern int event_trigger_set_filter(struct event_command *cmd_ops,
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 1260c23cfa5fc4..86fd06812cdab4 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -5246,17 +5246,94 @@ hist_trigger_actions(struct hist_trigger_data *hist_data,
+ 	}
+ }
+ 
++/*
++ * The hist_pad structure is used to save information to create
++ * a histogram from the histogram trigger. It's too big to store
++ * on the stack, so when the histogram trigger is initialized
++ * a percpu array of 4 hist_pad structures is allocated.
++ * This will cover every context from normal, softirq, irq and NMI
++ * in the very unlikely event that a trigger happens in each of
++ * these contexts and interrupts a currently active trigger.
++ */
++struct hist_pad {
++	unsigned long		entries[HIST_STACKTRACE_DEPTH];
++	u64			var_ref_vals[TRACING_MAP_VARS_MAX];
++	char			compound_key[HIST_KEY_SIZE_MAX];
++};
++
++static struct hist_pad __percpu *hist_pads;
++static DEFINE_PER_CPU(int, hist_pad_cnt);
++static refcount_t hist_pad_ref;
++
++/* One hist_pad for every context (normal, softirq, irq, NMI) */
++#define MAX_HIST_CNT 4
++
++static int alloc_hist_pad(void)
++{
++	lockdep_assert_held(&event_mutex);
++
++	if (refcount_read(&hist_pad_ref)) {
++		refcount_inc(&hist_pad_ref);
++		return 0;
++	}
++
++	hist_pads = __alloc_percpu(sizeof(struct hist_pad) * MAX_HIST_CNT,
++				   __alignof__(struct hist_pad));
++	if (!hist_pads)
++		return -ENOMEM;
++
++	refcount_set(&hist_pad_ref, 1);
++	return 0;
++}
++
++static void free_hist_pad(void)
++{
++	lockdep_assert_held(&event_mutex);
++
++	if (!refcount_dec_and_test(&hist_pad_ref))
++		return;
++
++	free_percpu(hist_pads);
++	hist_pads = NULL;
++}
++
++static struct hist_pad *get_hist_pad(void)
++{
++	struct hist_pad *hist_pad;
++	int cnt;
++
++	if (WARN_ON_ONCE(!hist_pads))
++		return NULL;
++
++	preempt_disable();
++
++	hist_pad = per_cpu_ptr(hist_pads, smp_processor_id());
++
++	if (this_cpu_read(hist_pad_cnt) == MAX_HIST_CNT) {
++		preempt_enable();
++		return NULL;
++	}
++
++	cnt = this_cpu_inc_return(hist_pad_cnt) - 1;
++
++	return &hist_pad[cnt];
++}
++
++static void put_hist_pad(void)
++{
++	this_cpu_dec(hist_pad_cnt);
++	preempt_enable();
++}
++
+ static void event_hist_trigger(struct event_trigger_data *data,
+ 			       struct trace_buffer *buffer, void *rec,
+ 			       struct ring_buffer_event *rbe)
+ {
+ 	struct hist_trigger_data *hist_data = data->private_data;
+ 	bool use_compound_key = (hist_data->n_keys > 1);
+-	unsigned long entries[HIST_STACKTRACE_DEPTH];
+-	u64 var_ref_vals[TRACING_MAP_VARS_MAX];
+-	char compound_key[HIST_KEY_SIZE_MAX];
+ 	struct tracing_map_elt *elt = NULL;
+ 	struct hist_field *key_field;
++	struct hist_pad *hist_pad;
+ 	u64 field_contents;
+ 	void *key = NULL;
+ 	unsigned int i;
+@@ -5264,12 +5341,18 @@ static void event_hist_trigger(struct event_trigger_data *data,
+ 	if (unlikely(!rbe))
+ 		return;
+ 
+-	memset(compound_key, 0, hist_data->key_size);
++	hist_pad = get_hist_pad();
++	if (!hist_pad)
++		return;
++
++	memset(hist_pad->compound_key, 0, hist_data->key_size);
+ 
+ 	for_each_hist_key_field(i, hist_data) {
+ 		key_field = hist_data->fields[i];
+ 
+ 		if (key_field->flags & HIST_FIELD_FL_STACKTRACE) {
++			unsigned long *entries = hist_pad->entries;
++
+ 			memset(entries, 0, HIST_STACKTRACE_SIZE);
+ 			if (key_field->field) {
+ 				unsigned long *stack, n_entries;
+@@ -5293,26 +5376,31 @@ static void event_hist_trigger(struct event_trigger_data *data,
+ 		}
+ 
+ 		if (use_compound_key)
+-			add_to_key(compound_key, key, key_field, rec);
++			add_to_key(hist_pad->compound_key, key, key_field, rec);
+ 	}
+ 
+ 	if (use_compound_key)
+-		key = compound_key;
++		key = hist_pad->compound_key;
+ 
+ 	if (hist_data->n_var_refs &&
+-	    !resolve_var_refs(hist_data, key, var_ref_vals, false))
+-		return;
++	    !resolve_var_refs(hist_data, key, hist_pad->var_ref_vals, false))
++		goto out;
+ 
+ 	elt = tracing_map_insert(hist_data->map, key);
+ 	if (!elt)
+-		return;
++		goto out;
+ 
+-	hist_trigger_elt_update(hist_data, elt, buffer, rec, rbe, var_ref_vals);
++	hist_trigger_elt_update(hist_data, elt, buffer, rec, rbe, hist_pad->var_ref_vals);
+ 
+-	if (resolve_var_refs(hist_data, key, var_ref_vals, true))
+-		hist_trigger_actions(hist_data, elt, buffer, rec, rbe, key, var_ref_vals);
++	if (resolve_var_refs(hist_data, key, hist_pad->var_ref_vals, true)) {
++		hist_trigger_actions(hist_data, elt, buffer, rec, rbe,
++				     key, hist_pad->var_ref_vals);
++	}
+ 
+ 	hist_poll_wakeup();
++
++ out:
++	put_hist_pad();
+ }
+ 
+ static void hist_trigger_stacktrace_print(struct seq_file *m,
+@@ -6157,6 +6245,9 @@ static int event_hist_trigger_init(struct event_trigger_data *data)
+ {
+ 	struct hist_trigger_data *hist_data = data->private_data;
+ 
++	if (alloc_hist_pad() < 0)
++		return -ENOMEM;
++
+ 	if (!data->ref && hist_data->attrs->name)
+ 		save_named_trigger(hist_data->attrs->name, data);
+ 
+@@ -6201,6 +6292,7 @@ static void event_hist_trigger_free(struct event_trigger_data *data)
+ 
+ 		destroy_hist_data(hist_data);
+ 	}
++	free_hist_pad();
+ }
+ 
+ static const struct event_trigger_ops event_hist_trigger_ops = {
+@@ -6216,9 +6308,7 @@ static int event_hist_trigger_named_init(struct event_trigger_data *data)
+ 
+ 	save_named_trigger(data->named_data->name, data);
+ 
+-	event_hist_trigger_init(data->named_data);
+-
+-	return 0;
++	return event_hist_trigger_init(data->named_data);
+ }
+ 
+ static void event_hist_trigger_named_free(struct event_trigger_data *data)
+@@ -6705,7 +6795,7 @@ static int event_hist_trigger_parse(struct event_command *cmd_ops,
+ 		return PTR_ERR(hist_data);
+ 	}
+ 
+-	trigger_data = event_trigger_alloc(cmd_ops, cmd, param, hist_data);
++	trigger_data = trigger_data_alloc(cmd_ops, cmd, param, hist_data);
+ 	if (!trigger_data) {
+ 		ret = -ENOMEM;
+ 		goto out_free;
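
The hist_pad machinery above trades large on-stack buffers for a per-CPU array with one slot per execution context (task, softirq, irq, NMI), indexed by a per-CPU nesting counter while preemption is disabled. A minimal sketch of that nesting-slot idiom, with hypothetical names (struct scratch, get_scratch/put_scratch):

    #define MAX_NEST 4  /* task, softirq, irq, NMI */

    struct scratch {
            char buf[1024];
    };

    static struct scratch __percpu *scratch_pads;  /* MAX_NEST per CPU */
    static DEFINE_PER_CPU(int, scratch_nest);

    static int scratch_init(void)
    {
            scratch_pads = __alloc_percpu(sizeof(struct scratch) * MAX_NEST,
                                          __alignof__(struct scratch));
            return scratch_pads ? 0 : -ENOMEM;
    }

    static struct scratch *get_scratch(void)
    {
            int idx;

            preempt_disable();
            if (this_cpu_read(scratch_nest) == MAX_NEST) {
                    preempt_enable();
                    return NULL;  /* every nesting level already in use */
            }
            idx = this_cpu_inc_return(scratch_nest) - 1;
            return per_cpu_ptr(scratch_pads, smp_processor_id()) + idx;
    }

    static void put_scratch(void)
    {
            this_cpu_dec(scratch_nest);
            preempt_enable();
    }

Each context that interrupts another gets the next slot up, and slots are returned strictly LIFO as the interrupted contexts resume.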
+diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
+index 6e87ae2a1a66bf..c443ed7649a896 100644
+--- a/kernel/trace/trace_events_trigger.c
++++ b/kernel/trace/trace_events_trigger.c
+@@ -804,7 +804,7 @@ int event_trigger_separate_filter(char *param_and_filter, char **param,
+ }
+ 
+ /**
+- * event_trigger_alloc - allocate and init event_trigger_data for a trigger
++ * trigger_data_alloc - allocate and init event_trigger_data for a trigger
+  * @cmd_ops: The event_command operations for the trigger
+  * @cmd: The cmd string
+  * @param: The param string
+@@ -815,14 +815,14 @@ int event_trigger_separate_filter(char *param_and_filter, char **param,
+  * trigger_ops to assign to the event_trigger_data.  @private_data can
+  * also be passed in and associated with the event_trigger_data.
+  *
+- * Use event_trigger_free() to free an event_trigger_data object.
++ * Use trigger_data_free() to free an event_trigger_data object.
+  *
+  * Return: The trigger_data object on success, NULL otherwise
+  */
+-struct event_trigger_data *event_trigger_alloc(struct event_command *cmd_ops,
+-					       char *cmd,
+-					       char *param,
+-					       void *private_data)
++struct event_trigger_data *trigger_data_alloc(struct event_command *cmd_ops,
++					      char *cmd,
++					      char *param,
++					      void *private_data)
+ {
+ 	struct event_trigger_data *trigger_data;
+ 	const struct event_trigger_ops *trigger_ops;
+@@ -989,13 +989,13 @@ event_trigger_parse(struct event_command *cmd_ops,
+ 		return ret;
+ 
+ 	ret = -ENOMEM;
+-	trigger_data = event_trigger_alloc(cmd_ops, cmd, param, file);
++	trigger_data = trigger_data_alloc(cmd_ops, cmd, param, file);
+ 	if (!trigger_data)
+ 		goto out;
+ 
+ 	if (remove) {
+ 		event_trigger_unregister(cmd_ops, file, glob+1, trigger_data);
+-		kfree(trigger_data);
++		trigger_data_free(trigger_data);
+ 		ret = 0;
+ 		goto out;
+ 	}
+@@ -1022,7 +1022,7 @@ event_trigger_parse(struct event_command *cmd_ops,
+ 
+  out_free:
+ 	event_trigger_reset_filter(cmd_ops, trigger_data);
+-	kfree(trigger_data);
++	trigger_data_free(trigger_data);
+ 	goto out;
+ }
+ 
+@@ -1793,7 +1793,7 @@ int event_enable_trigger_parse(struct event_command *cmd_ops,
+ 	enable_data->enable = enable;
+ 	enable_data->file = event_enable_file;
+ 
+-	trigger_data = event_trigger_alloc(cmd_ops, cmd, param, enable_data);
++	trigger_data = trigger_data_alloc(cmd_ops, cmd, param, enable_data);
+ 	if (!trigger_data) {
+ 		kfree(enable_data);
+ 		goto out;
+diff --git a/lib/Kconfig.ubsan b/lib/Kconfig.ubsan
+index f6ea0c5b5da393..96cd896684676d 100644
+--- a/lib/Kconfig.ubsan
++++ b/lib/Kconfig.ubsan
+@@ -118,6 +118,8 @@ config UBSAN_UNREACHABLE
+ 
+ config UBSAN_INTEGER_WRAP
+ 	bool "Perform checking for integer arithmetic wrap-around"
++	# This is very experimental so drop the next line if you really want it
++	depends on BROKEN
+ 	depends on !COMPILE_TEST
+ 	depends on $(cc-option,-fsanitize-undefined-ignore-overflow-pattern=all)
+ 	depends on $(cc-option,-fsanitize=signed-integer-overflow)
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index bc9391e55d57ea..9ce83ab71bacd8 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -820,7 +820,7 @@ static bool iov_iter_aligned_bvec(const struct iov_iter *i, unsigned addr_mask,
+ 	size_t size = i->count;
+ 
+ 	do {
+-		size_t len = bvec->bv_len;
++		size_t len = bvec->bv_len - skip;
+ 
+ 		if (len > size)
+ 			len = size;
+diff --git a/lib/kunit/static_stub.c b/lib/kunit/static_stub.c
+index 92b2cccd5e7633..484fd85251b415 100644
+--- a/lib/kunit/static_stub.c
++++ b/lib/kunit/static_stub.c
+@@ -96,7 +96,7 @@ void __kunit_activate_static_stub(struct kunit *test,
+ 
+ 	/* If the replacement address is NULL, deactivate the stub. */
+ 	if (!replacement_addr) {
+-		kunit_deactivate_static_stub(test, replacement_addr);
++		kunit_deactivate_static_stub(test, real_fn_addr);
+ 		return;
+ 	}
+ 
+diff --git a/lib/tests/usercopy_kunit.c b/lib/tests/usercopy_kunit.c
+index 77fa00a13df775..80f8abe10968c1 100644
+--- a/lib/tests/usercopy_kunit.c
++++ b/lib/tests/usercopy_kunit.c
+@@ -27,6 +27,7 @@
+ 			    !defined(CONFIG_MICROBLAZE) &&	\
+ 			    !defined(CONFIG_NIOS2) &&		\
+ 			    !defined(CONFIG_PPC32) &&		\
++			    !defined(CONFIG_SPARC32) &&		\
+ 			    !defined(CONFIG_SUPERH))
+ # define TEST_U64
+ #endif
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 7b90cbeb4a1adf..6af6d8f2929ce4 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -1589,6 +1589,16 @@ int folio_wait_private_2_killable(struct folio *folio)
+ }
+ EXPORT_SYMBOL(folio_wait_private_2_killable);
+ 
++static void filemap_end_dropbehind(struct folio *folio)
++{
++	struct address_space *mapping = folio->mapping;
++
++	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
++
++	if (mapping && !folio_test_writeback(folio) && !folio_test_dirty(folio))
++		folio_unmap_invalidate(mapping, folio, 0);
++}
++
+ /*
+  * If folio was marked as dropbehind, then pages should be dropped when writeback
+  * completes. Do that now. If we fail, it's likely because of a big folio -
+@@ -1604,8 +1614,7 @@ static void folio_end_dropbehind_write(struct folio *folio)
+ 	 * invalidation in that case.
+ 	 */
+ 	if (in_task() && folio_trylock(folio)) {
+-		if (folio->mapping)
+-			folio_unmap_invalidate(folio->mapping, folio, 0);
++		filemap_end_dropbehind(folio);
+ 		folio_unlock(folio);
+ 	}
+ }
+@@ -2635,8 +2644,7 @@ static inline bool pos_same_folio(loff_t pos1, loff_t pos2, struct folio *folio)
+ 	return (pos1 >> shift == pos2 >> shift);
+ }
+ 
+-static void filemap_end_dropbehind_read(struct address_space *mapping,
+-					struct folio *folio)
++static void filemap_end_dropbehind_read(struct folio *folio)
+ {
+ 	if (!folio_test_dropbehind(folio))
+ 		return;
+@@ -2644,7 +2652,7 @@ static void filemap_end_dropbehind_read(struct address_space *mapping,
+ 		return;
+ 	if (folio_trylock(folio)) {
+ 		if (folio_test_clear_dropbehind(folio))
+-			folio_unmap_invalidate(mapping, folio, 0);
++			filemap_end_dropbehind(folio);
+ 		folio_unlock(folio);
+ 	}
+ }
+@@ -2765,7 +2773,7 @@ ssize_t filemap_read(struct kiocb *iocb, struct iov_iter *iter,
+ 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+ 			struct folio *folio = fbatch.folios[i];
+ 
+-			filemap_end_dropbehind_read(mapping, folio);
++			filemap_end_dropbehind_read(folio);
+ 			folio_put(folio);
+ 		}
+ 		folio_batch_init(&fbatch);
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 47fa713ccb4d89..4f29e393f6af1c 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -898,9 +898,7 @@ static inline bool page_expected_state(struct page *page,
+ #ifdef CONFIG_MEMCG
+ 			page->memcg_data |
+ #endif
+-#ifdef CONFIG_PAGE_POOL
+-			((page->pp_magic & ~0x3UL) == PP_SIGNATURE) |
+-#endif
++			page_pool_page_is_pp(page) |
+ 			(page->flags & check_flags)))
+ 		return false;
+ 
+@@ -927,10 +925,8 @@ static const char *page_bad_reason(struct page *page, unsigned long flags)
+ 	if (unlikely(page->memcg_data))
+ 		bad_reason = "page still charged to cgroup";
+ #endif
+-#ifdef CONFIG_PAGE_POOL
+-	if (unlikely((page->pp_magic & ~0x3UL) == PP_SIGNATURE))
++	if (unlikely(page_pool_page_is_pp(page)))
+ 		bad_reason = "page_pool leak";
+-#endif
+ 	return bad_reason;
+ }
+ 
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 61461b9fa13431..5c1ca57ccd2853 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -1704,7 +1704,7 @@ p9_client_write_subreq(struct netfs_io_subrequest *subreq)
+ 				    start, len, &subreq->io_iter);
+ 	}
+ 	if (IS_ERR(req)) {
+-		netfs_write_subrequest_terminated(subreq, PTR_ERR(req), false);
++		netfs_write_subrequest_terminated(subreq, PTR_ERR(req));
+ 		return;
+ 	}
+ 
+@@ -1712,7 +1712,7 @@ p9_client_write_subreq(struct netfs_io_subrequest *subreq)
+ 	if (err) {
+ 		trace_9p_protocol_dump(clnt, &req->rc);
+ 		p9_req_put(clnt, req);
+-		netfs_write_subrequest_terminated(subreq, err, false);
++		netfs_write_subrequest_terminated(subreq, err);
+ 		return;
+ 	}
+ 
+@@ -1724,7 +1724,7 @@ p9_client_write_subreq(struct netfs_io_subrequest *subreq)
+ 	p9_debug(P9_DEBUG_9P, "<<< RWRITE count %d\n", len);
+ 
+ 	p9_req_put(clnt, req);
+-	netfs_write_subrequest_terminated(subreq, written, false);
++	netfs_write_subrequest_terminated(subreq, written);
+ }
+ EXPORT_SYMBOL(p9_client_write_subreq);
+ 
+diff --git a/net/bluetooth/eir.c b/net/bluetooth/eir.c
+index 1bc51e2b05a347..3f72111ba651f9 100644
+--- a/net/bluetooth/eir.c
++++ b/net/bluetooth/eir.c
+@@ -242,7 +242,7 @@ u8 eir_create_per_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr)
+ 	return ad_len;
+ }
+ 
+-u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr)
++u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr, u8 size)
+ {
+ 	struct adv_info *adv = NULL;
+ 	u8 ad_len = 0, flags = 0;
+@@ -286,7 +286,7 @@ u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr)
+ 		/* If flags would still be empty, then there is no need to
+ 		 * include the "Flags" AD field.
+ 		 */
+-		if (flags) {
++		if (flags && (ad_len + eir_precalc_len(1) <= size)) {
+ 			ptr[0] = 0x02;
+ 			ptr[1] = EIR_FLAGS;
+ 			ptr[2] = flags;
+@@ -316,7 +316,8 @@ u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr)
+ 		}
+ 
+ 		/* Provide Tx Power only if we can provide a valid value for it */
+-		if (adv_tx_power != HCI_TX_POWER_INVALID) {
++		if (adv_tx_power != HCI_TX_POWER_INVALID &&
++		    (ad_len + eir_precalc_len(1) <= size)) {
+ 			ptr[0] = 0x02;
+ 			ptr[1] = EIR_TX_POWER;
+ 			ptr[2] = (u8)adv_tx_power;
+@@ -366,17 +367,19 @@ u8 eir_create_scan_rsp(struct hci_dev *hdev, u8 instance, u8 *ptr)
+ 
+ void *eir_get_service_data(u8 *eir, size_t eir_len, u16 uuid, size_t *len)
+ {
+-	while ((eir = eir_get_data(eir, eir_len, EIR_SERVICE_DATA, len))) {
++	size_t dlen;
++
++	while ((eir = eir_get_data(eir, eir_len, EIR_SERVICE_DATA, &dlen))) {
+ 		u16 value = get_unaligned_le16(eir);
+ 
+ 		if (uuid == value) {
+ 			if (len)
+-				*len -= 2;
++				*len = dlen - 2;
+ 			return &eir[2];
+ 		}
+ 
+-		eir += *len;
+-		eir_len -= *len;
++		eir += dlen;
++		eir_len -= dlen;
+ 	}
+ 
+ 	return NULL;
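
The eir_get_service_data() fix above is cursor bookkeeping: the old loop reused the caller's output parameter as its scan cursor, so the walk advanced by a value that had already been adjusted, while the fix keeps a local dlen. A standalone sketch of a length-prefixed (EIR-style) field walk with a dedicated cursor; find_eir_field() is a hypothetical helper, not the Bluetooth API:

    /* Each field is [len][type][payload...]; len counts type + payload. */
    static const u8 *find_eir_field(const u8 *eir, size_t eir_len,
                                    u8 wanted, size_t *payload_len)
    {
            while (eir_len >= 2) {
                    u8 flen = eir[0];

                    if (!flen || (size_t)flen + 1 > eir_len)
                            break;  /* empty or truncated field */

                    if (eir[1] == wanted) {
                            if (payload_len)
                                    *payload_len = flen - 1;
                            return &eir[2];
                    }

                    eir += flen + 1;
                    eir_len -= flen + 1;
            }
            return NULL;
    }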
+diff --git a/net/bluetooth/eir.h b/net/bluetooth/eir.h
+index 5c89a05e8b2905..9372db83f912fa 100644
+--- a/net/bluetooth/eir.h
++++ b/net/bluetooth/eir.h
+@@ -9,7 +9,7 @@
+ 
+ void eir_create(struct hci_dev *hdev, u8 *data);
+ 
+-u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr);
++u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr, u8 size);
+ u8 eir_create_scan_rsp(struct hci_dev *hdev, u8 instance, u8 *ptr);
+ u8 eir_create_per_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr);
+ 
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 946d2ae551f86c..fccdb864af7264 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -785,7 +785,7 @@ static int hci_le_big_terminate(struct hci_dev *hdev, u8 big, struct hci_conn *c
+ 	d->sync_handle = conn->sync_handle;
+ 
+ 	if (test_and_clear_bit(HCI_CONN_PA_SYNC, &conn->flags)) {
+-		hci_conn_hash_list_flag(hdev, find_bis, ISO_LINK,
++		hci_conn_hash_list_flag(hdev, find_bis, BIS_LINK,
+ 					HCI_CONN_PA_SYNC, d);
+ 
+ 		if (!d->count)
+@@ -795,7 +795,7 @@ static int hci_le_big_terminate(struct hci_dev *hdev, u8 big, struct hci_conn *c
+ 	}
+ 
+ 	if (test_and_clear_bit(HCI_CONN_BIG_SYNC, &conn->flags)) {
+-		hci_conn_hash_list_flag(hdev, find_bis, ISO_LINK,
++		hci_conn_hash_list_flag(hdev, find_bis, BIS_LINK,
+ 					HCI_CONN_BIG_SYNC, d);
+ 
+ 		if (!d->count)
+@@ -885,9 +885,11 @@ static void cis_cleanup(struct hci_conn *conn)
+ 	/* Check if ISO connection is a CIS and remove CIG if there are
+ 	 * no other connections using it.
+ 	 */
+-	hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, BT_BOUND, &d);
+-	hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, BT_CONNECT, &d);
+-	hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, BT_CONNECTED, &d);
++	hci_conn_hash_list_state(hdev, find_cis, CIS_LINK, BT_BOUND, &d);
++	hci_conn_hash_list_state(hdev, find_cis, CIS_LINK, BT_CONNECT,
++				 &d);
++	hci_conn_hash_list_state(hdev, find_cis, CIS_LINK, BT_CONNECTED,
++				 &d);
+ 	if (d.count)
+ 		return;
+ 
+@@ -910,7 +912,8 @@ static struct hci_conn *__hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t
+ 		if (!hdev->acl_mtu)
+ 			return ERR_PTR(-ECONNREFUSED);
+ 		break;
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 		if (hdev->iso_mtu)
+ 			/* Dedicated ISO Buffer exists */
+ 			break;
+@@ -974,7 +977,8 @@ static struct hci_conn *__hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t
+ 		hci_copy_identity_address(hdev, &conn->src, &conn->src_type);
+ 		conn->mtu = hdev->le_mtu ? hdev->le_mtu : hdev->acl_mtu;
+ 		break;
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 		/* conn->src should reflect the local identity address */
+ 		hci_copy_identity_address(hdev, &conn->src, &conn->src_type);
+ 
+@@ -1071,7 +1075,8 @@ static void hci_conn_cleanup_child(struct hci_conn *conn, u8 reason)
+ 		if (HCI_CONN_HANDLE_UNSET(conn->handle))
+ 			hci_conn_failed(conn, reason);
+ 		break;
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 		if ((conn->state != BT_CONNECTED &&
+ 		    !test_bit(HCI_CONN_CREATE_CIS, &conn->flags)) ||
+ 		    test_bit(HCI_CONN_BIG_CREATED, &conn->flags))
+@@ -1146,7 +1151,8 @@ void hci_conn_del(struct hci_conn *conn)
+ 			hdev->acl_cnt += conn->sent;
+ 	} else {
+ 		/* Unacked ISO frames */
+-		if (conn->type == ISO_LINK) {
++		if (conn->type == CIS_LINK ||
++		    conn->type == BIS_LINK) {
+ 			if (hdev->iso_pkts)
+ 				hdev->iso_cnt += conn->sent;
+ 			else if (hdev->le_pkts)
+@@ -1532,7 +1538,7 @@ static struct hci_conn *hci_add_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ 		     memcmp(conn->le_per_adv_data, base, base_len)))
+ 		return ERR_PTR(-EADDRINUSE);
+ 
+-	conn = hci_conn_add_unset(hdev, ISO_LINK, dst, HCI_ROLE_MASTER);
++	conn = hci_conn_add_unset(hdev, BIS_LINK, dst, HCI_ROLE_MASTER);
+ 	if (IS_ERR(conn))
+ 		return conn;
+ 
+@@ -1740,7 +1746,7 @@ static int hci_le_create_big(struct hci_conn *conn, struct bt_iso_qos *qos)
+ 	data.count = 0;
+ 
+ 	/* Create a BIS for each bound connection */
+-	hci_conn_hash_list_state(hdev, bis_list, ISO_LINK,
++	hci_conn_hash_list_state(hdev, bis_list, BIS_LINK,
+ 				 BT_BOUND, &data);
+ 
+ 	cp.handle = qos->bcast.big;
+@@ -1829,12 +1835,12 @@ static bool hci_le_set_cig_params(struct hci_conn *conn, struct bt_iso_qos *qos)
+ 		for (data.cig = 0x00; data.cig < 0xf0; data.cig++) {
+ 			data.count = 0;
+ 
+-			hci_conn_hash_list_state(hdev, find_cis, ISO_LINK,
++			hci_conn_hash_list_state(hdev, find_cis, CIS_LINK,
+ 						 BT_CONNECT, &data);
+ 			if (data.count)
+ 				continue;
+ 
+-			hci_conn_hash_list_state(hdev, find_cis, ISO_LINK,
++			hci_conn_hash_list_state(hdev, find_cis, CIS_LINK,
+ 						 BT_CONNECTED, &data);
+ 			if (!data.count)
+ 				break;
+@@ -1884,7 +1890,8 @@ struct hci_conn *hci_bind_cis(struct hci_dev *hdev, bdaddr_t *dst,
+ 	cis = hci_conn_hash_lookup_cis(hdev, dst, dst_type, qos->ucast.cig,
+ 				       qos->ucast.cis);
+ 	if (!cis) {
+-		cis = hci_conn_add_unset(hdev, ISO_LINK, dst, HCI_ROLE_MASTER);
++		cis = hci_conn_add_unset(hdev, CIS_LINK, dst,
++					 HCI_ROLE_MASTER);
+ 		if (IS_ERR(cis))
+ 			return cis;
+ 		cis->cleanup = cis_cleanup;
+@@ -1976,7 +1983,7 @@ bool hci_iso_setup_path(struct hci_conn *conn)
+ 
+ int hci_conn_check_create_cis(struct hci_conn *conn)
+ {
+-	if (conn->type != ISO_LINK || !bacmp(&conn->dst, BDADDR_ANY))
++	if (conn->type != CIS_LINK)
+ 		return -EINVAL;
+ 
+ 	if (!conn->parent || conn->parent->state != BT_CONNECTED ||
+@@ -2070,7 +2077,9 @@ struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst,
+ {
+ 	struct hci_conn *conn;
+ 
+-	conn = hci_conn_add_unset(hdev, ISO_LINK, dst, HCI_ROLE_SLAVE);
++	bt_dev_dbg(hdev, "dst %pMR type %d sid %d", dst, dst_type, sid);
++
++	conn = hci_conn_add_unset(hdev, BIS_LINK, dst, HCI_ROLE_SLAVE);
+ 	if (IS_ERR(conn))
+ 		return conn;
+ 
+@@ -2219,7 +2228,7 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ 	 * the start periodic advertising and create BIG commands have
+ 	 * been queued
+ 	 */
+-	hci_conn_hash_list_state(hdev, bis_mark_per_adv, ISO_LINK,
++	hci_conn_hash_list_state(hdev, bis_mark_per_adv, BIS_LINK,
+ 				 BT_BOUND, &data);
+ 
+ 	/* Queue start periodic advertising and create BIG */
+@@ -2951,7 +2960,8 @@ void hci_conn_tx_queue(struct hci_conn *conn, struct sk_buff *skb)
+ 	 * TODO: SCO support without flowctl (needs to be done in drivers)
+ 	 */
+ 	switch (conn->type) {
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 	case ACL_LINK:
+ 	case LE_LINK:
+ 		break;
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 5eb0600bbd03cc..af30a420bab75a 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1877,10 +1877,8 @@ void hci_free_adv_monitor(struct hci_dev *hdev, struct adv_monitor *monitor)
+ 	if (monitor->handle)
+ 		idr_remove(&hdev->adv_monitors_idr, monitor->handle);
+ 
+-	if (monitor->state != ADV_MONITOR_STATE_NOT_REGISTERED) {
++	if (monitor->state != ADV_MONITOR_STATE_NOT_REGISTERED)
+ 		hdev->adv_monitors_cnt--;
+-		mgmt_adv_monitor_removed(hdev, monitor->handle);
+-	}
+ 
+ 	kfree(monitor);
+ }
+@@ -2487,6 +2485,7 @@ struct hci_dev *hci_alloc_dev_priv(int sizeof_priv)
+ 
+ 	mutex_init(&hdev->lock);
+ 	mutex_init(&hdev->req_lock);
++	mutex_init(&hdev->mgmt_pending_lock);
+ 
+ 	ida_init(&hdev->unset_handle_ida);
+ 
+@@ -2898,12 +2897,13 @@ int hci_recv_frame(struct hci_dev *hdev, struct sk_buff *skb)
+ 		break;
+ 	case HCI_ACLDATA_PKT:
+ 		/* Detect if ISO packet has been sent as ACL */
+-		if (hci_conn_num(hdev, ISO_LINK)) {
++		if (hci_conn_num(hdev, CIS_LINK) ||
++		    hci_conn_num(hdev, BIS_LINK)) {
+ 			__u16 handle = __le16_to_cpu(hci_acl_hdr(skb)->handle);
+ 			__u8 type;
+ 
+ 			type = hci_conn_lookup_type(hdev, hci_handle(handle));
+-			if (type == ISO_LINK)
++			if (type == CIS_LINK || type == BIS_LINK)
+ 				hci_skb_pkt_type(skb) = HCI_ISODATA_PKT;
+ 		}
+ 		break;
+@@ -3345,7 +3345,8 @@ static inline void hci_quote_sent(struct hci_conn *conn, int num, int *quote)
+ 	case LE_LINK:
+ 		cnt = hdev->le_mtu ? hdev->le_cnt : hdev->acl_cnt;
+ 		break;
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 		cnt = hdev->iso_mtu ? hdev->iso_cnt :
+ 			hdev->le_mtu ? hdev->le_cnt : hdev->acl_cnt;
+ 		break;
+@@ -3359,7 +3360,7 @@ static inline void hci_quote_sent(struct hci_conn *conn, int num, int *quote)
+ }
+ 
+ static struct hci_conn *hci_low_sent(struct hci_dev *hdev, __u8 type,
+-				     int *quote)
++				     __u8 type2, int *quote)
+ {
+ 	struct hci_conn_hash *h = &hdev->conn_hash;
+ 	struct hci_conn *conn = NULL, *c;
+@@ -3371,7 +3372,8 @@ static struct hci_conn *hci_low_sent(struct hci_dev *hdev, __u8 type,
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != type || skb_queue_empty(&c->data_q))
++		if ((c->type != type && c->type != type2) ||
++		    skb_queue_empty(&c->data_q))
+ 			continue;
+ 
+ 		if (c->state != BT_CONNECTED && c->state != BT_CONFIG)
+@@ -3403,23 +3405,18 @@ static void hci_link_tx_to(struct hci_dev *hdev, __u8 type)
+ 
+ 	bt_dev_err(hdev, "link tx timeout");
+ 
+-	rcu_read_lock();
++	hci_dev_lock(hdev);
+ 
+ 	/* Kill stalled connections */
+-	list_for_each_entry_rcu(c, &h->list, list) {
++	list_for_each_entry(c, &h->list, list) {
+ 		if (c->type == type && c->sent) {
+ 			bt_dev_err(hdev, "killing stalled connection %pMR",
+ 				   &c->dst);
+-			/* hci_disconnect might sleep, so, we have to release
+-			 * the RCU read lock before calling it.
+-			 */
+-			rcu_read_unlock();
+ 			hci_disconnect(c, HCI_ERROR_REMOTE_USER_TERM);
+-			rcu_read_lock();
+ 		}
+ 	}
+ 
+-	rcu_read_unlock();
++	hci_dev_unlock(hdev);
+ }
+ 
+ static struct hci_chan *hci_chan_sent(struct hci_dev *hdev, __u8 type,
+@@ -3579,7 +3576,7 @@ static void hci_sched_sco(struct hci_dev *hdev, __u8 type)
+ 	else
+ 		cnt = &hdev->sco_cnt;
+ 
+-	while (*cnt && (conn = hci_low_sent(hdev, type, &quote))) {
++	while (*cnt && (conn = hci_low_sent(hdev, type, type, &quote))) {
+ 		while (quote-- && (skb = skb_dequeue(&conn->data_q))) {
+ 			BT_DBG("skb %p len %d", skb, skb->len);
+ 			hci_send_conn_frame(hdev, conn, skb);
+@@ -3707,12 +3704,14 @@ static void hci_sched_iso(struct hci_dev *hdev)
+ 
+ 	BT_DBG("%s", hdev->name);
+ 
+-	if (!hci_conn_num(hdev, ISO_LINK))
++	if (!hci_conn_num(hdev, CIS_LINK) &&
++	    !hci_conn_num(hdev, BIS_LINK))
+ 		return;
+ 
+ 	cnt = hdev->iso_pkts ? &hdev->iso_cnt :
+ 		hdev->le_pkts ? &hdev->le_cnt : &hdev->acl_cnt;
+-	while (*cnt && (conn = hci_low_sent(hdev, ISO_LINK, &quote))) {
++	while (*cnt && (conn = hci_low_sent(hdev, CIS_LINK, BIS_LINK,
++					    &quote))) {
+ 		while (quote-- && (skb = skb_dequeue(&conn->data_q))) {
+ 			BT_DBG("skb %p len %d", skb, skb->len);
+ 			hci_send_conn_frame(hdev, conn, skb);
+@@ -4057,10 +4056,13 @@ static void hci_send_cmd_sync(struct hci_dev *hdev, struct sk_buff *skb)
+ 		return;
+ 	}
+ 
+-	err = hci_send_frame(hdev, skb);
+-	if (err < 0) {
+-		hci_cmd_sync_cancel_sync(hdev, -err);
+-		return;
++	if (hci_skb_opcode(skb) != HCI_OP_NOP) {
++		err = hci_send_frame(hdev, skb);
++		if (err < 0) {
++			hci_cmd_sync_cancel_sync(hdev, -err);
++			return;
++		}
++		atomic_dec(&hdev->cmd_cnt);
+ 	}
+ 
+ 	if (hdev->req_status == HCI_REQ_PEND &&
+@@ -4068,8 +4070,6 @@ static void hci_send_cmd_sync(struct hci_dev *hdev, struct sk_buff *skb)
+ 		kfree_skb(hdev->req_skb);
+ 		hdev->req_skb = skb_clone(hdev->sent_cmd, GFP_KERNEL);
+ 	}
+-
+-	atomic_dec(&hdev->cmd_cnt);
+ }
+ 
+ static void hci_cmd_work(struct work_struct *work)
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index c38ada69c3d7f2..66052d6aaa1d50 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3804,7 +3804,7 @@ static void hci_unbound_cis_failed(struct hci_dev *hdev, u8 cig, u8 status)
+ 	lockdep_assert_held(&hdev->lock);
+ 
+ 	list_for_each_entry_safe(conn, tmp, &hdev->conn_hash.list, list) {
+-		if (conn->type != ISO_LINK || !bacmp(&conn->dst, BDADDR_ANY) ||
++		if (conn->type != CIS_LINK ||
+ 		    conn->state == BT_OPEN || conn->iso_qos.ucast.cig != cig)
+ 			continue;
+ 
+@@ -4467,7 +4467,8 @@ static void hci_num_comp_pkts_evt(struct hci_dev *hdev, void *data,
+ 
+ 			break;
+ 
+-		case ISO_LINK:
++		case CIS_LINK:
++		case BIS_LINK:
+ 			if (hdev->iso_pkts) {
+ 				hdev->iso_cnt += count;
+ 				if (hdev->iso_cnt > hdev->iso_pkts)
+@@ -6351,6 +6352,17 @@ static void hci_le_ext_adv_report_evt(struct hci_dev *hdev, void *data,
+ 			info->secondary_phy &= 0x1f;
+ 		}
+ 
++		/* Check if PA Sync is pending and if the hci_conn SID has not
++		 * been set, update it.
++		 */
++		if (hci_dev_test_flag(hdev, HCI_PA_SYNC)) {
++			struct hci_conn *conn;
++
++			conn = hci_conn_hash_lookup_create_pa_sync(hdev);
++			if (conn && conn->sid == HCI_SID_INVALID)
++				conn->sid = info->sid;
++		}
++
+ 		if (legacy_evt_type != LE_ADV_INVALID) {
+ 			process_adv_report(hdev, legacy_evt_type, &info->bdaddr,
+ 					   info->bdaddr_type, NULL, 0,
+@@ -6402,7 +6414,8 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+ 	conn->sync_handle = le16_to_cpu(ev->handle);
+ 	conn->sid = HCI_SID_INVALID;
+ 
+-	mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, ISO_LINK, &flags);
++	mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, BIS_LINK,
++				      &flags);
+ 	if (!(mask & HCI_LM_ACCEPT)) {
+ 		hci_le_pa_term_sync(hdev, ev->handle);
+ 		goto unlock;
+@@ -6412,7 +6425,7 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+ 		goto unlock;
+ 
+ 	/* Add connection to indicate PA sync event */
+-	pa_sync = hci_conn_add_unset(hdev, ISO_LINK, BDADDR_ANY,
++	pa_sync = hci_conn_add_unset(hdev, BIS_LINK, BDADDR_ANY,
+ 				     HCI_ROLE_SLAVE);
+ 
+ 	if (IS_ERR(pa_sync))
+@@ -6443,7 +6456,7 @@ static void hci_le_per_adv_report_evt(struct hci_dev *hdev, void *data,
+ 
+ 	hci_dev_lock(hdev);
+ 
+-	mask |= hci_proto_connect_ind(hdev, BDADDR_ANY, ISO_LINK, &flags);
++	mask |= hci_proto_connect_ind(hdev, BDADDR_ANY, BIS_LINK, &flags);
+ 	if (!(mask & HCI_LM_ACCEPT))
+ 		goto unlock;
+ 
+@@ -6727,7 +6740,7 @@ static void hci_le_cis_estabilished_evt(struct hci_dev *hdev, void *data,
+ 		goto unlock;
+ 	}
+ 
+-	if (conn->type != ISO_LINK) {
++	if (conn->type != CIS_LINK) {
+ 		bt_dev_err(hdev,
+ 			   "Invalid connection link type handle 0x%4.4x",
+ 			   handle);
+@@ -6845,7 +6858,7 @@ static void hci_le_cis_req_evt(struct hci_dev *hdev, void *data,
+ 	if (!acl)
+ 		goto unlock;
+ 
+-	mask = hci_proto_connect_ind(hdev, &acl->dst, ISO_LINK, &flags);
++	mask = hci_proto_connect_ind(hdev, &acl->dst, CIS_LINK, &flags);
+ 	if (!(mask & HCI_LM_ACCEPT)) {
+ 		hci_le_reject_cis(hdev, ev->cis_handle);
+ 		goto unlock;
+@@ -6853,8 +6866,8 @@ static void hci_le_cis_req_evt(struct hci_dev *hdev, void *data,
+ 
+ 	cis = hci_conn_hash_lookup_handle(hdev, cis_handle);
+ 	if (!cis) {
+-		cis = hci_conn_add(hdev, ISO_LINK, &acl->dst, HCI_ROLE_SLAVE,
+-				   cis_handle);
++		cis = hci_conn_add(hdev, CIS_LINK, &acl->dst,
++				   HCI_ROLE_SLAVE, cis_handle);
+ 		if (IS_ERR(cis)) {
+ 			hci_le_reject_cis(hdev, ev->cis_handle);
+ 			goto unlock;
+@@ -6969,7 +6982,7 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ 				bt_dev_dbg(hdev, "ignore too large handle %u", handle);
+ 				continue;
+ 			}
+-			bis = hci_conn_add(hdev, ISO_LINK, BDADDR_ANY,
++			bis = hci_conn_add(hdev, BIS_LINK, BDADDR_ANY,
+ 					   HCI_ROLE_SLAVE, handle);
+ 			if (IS_ERR(bis))
+ 				continue;
+@@ -7025,7 +7038,7 @@ static void hci_le_big_info_adv_report_evt(struct hci_dev *hdev, void *data,
+ 
+ 	hci_dev_lock(hdev);
+ 
+-	mask |= hci_proto_connect_ind(hdev, BDADDR_ANY, ISO_LINK, &flags);
++	mask |= hci_proto_connect_ind(hdev, BDADDR_ANY, BIS_LINK, &flags);
+ 	if (!(mask & HCI_LM_ACCEPT))
+ 		goto unlock;
+ 
+@@ -7155,7 +7168,8 @@ static void hci_le_meta_evt(struct hci_dev *hdev, void *data,
+ 
+ 	/* Only match event if command OGF is for LE */
+ 	if (hdev->req_skb &&
+-	    hci_opcode_ogf(hci_skb_opcode(hdev->req_skb)) == 0x08 &&
++	   (hci_opcode_ogf(hci_skb_opcode(hdev->req_skb)) == 0x08 ||
++	    hci_skb_opcode(hdev->req_skb) == HCI_OP_NOP) &&
+ 	    hci_skb_event(hdev->req_skb) == ev->subevent) {
+ 		*opcode = hci_skb_opcode(hdev->req_skb);
+ 		hci_req_cmd_complete(hdev, *opcode, 0x00, req_complete,
+@@ -7511,8 +7525,10 @@ void hci_event_packet(struct hci_dev *hdev, struct sk_buff *skb)
+ 		goto done;
+ 	}
+ 
++	hci_dev_lock(hdev);
+ 	kfree_skb(hdev->recv_event);
+ 	hdev->recv_event = skb_clone(skb, GFP_KERNEL);
++	hci_dev_unlock(hdev);
+ 
+ 	event = hdr->evt;
+ 	if (!event) {
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index e56b1cbedab908..83de3847c8eaf7 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -1559,7 +1559,8 @@ static int hci_enable_per_advertising_sync(struct hci_dev *hdev, u8 instance)
+ static int hci_adv_bcast_annoucement(struct hci_dev *hdev, struct adv_info *adv)
+ {
+ 	u8 bid[3];
+-	u8 ad[4 + 3];
++	u8 ad[HCI_MAX_EXT_AD_LENGTH];
++	u8 len;
+ 
+ 	/* Skip if NULL adv as instance 0x00 is used for general purpose
+ 	 * advertising so it cannot be used for the likes of Broadcast Announcement
+@@ -1585,8 +1586,10 @@ static int hci_adv_bcast_annoucement(struct hci_dev *hdev, struct adv_info *adv)
+ 
+ 	/* Generate Broadcast ID */
+ 	get_random_bytes(bid, sizeof(bid));
+-	eir_append_service_data(ad, 0, 0x1852, bid, sizeof(bid));
+-	hci_set_adv_instance_data(hdev, adv->instance, sizeof(ad), ad, 0, NULL);
++	len = eir_append_service_data(ad, 0, 0x1852, bid, sizeof(bid));
++	memcpy(ad + len, adv->adv_data, adv->adv_data_len);
++	hci_set_adv_instance_data(hdev, adv->instance, len + adv->adv_data_len,
++				  ad, 0, NULL);
+ 
+ 	return hci_update_adv_data_sync(hdev, adv->instance);
+ }
+@@ -1603,8 +1606,15 @@ int hci_start_per_adv_sync(struct hci_dev *hdev, u8 instance, u8 data_len,
+ 
+ 	if (instance) {
+ 		adv = hci_find_adv_instance(hdev, instance);
+-		/* Create an instance if that could not be found */
+-		if (!adv) {
++		if (adv) {
++			/* Turn it into periodic advertising */
++			adv->periodic = true;
++			adv->per_adv_data_len = data_len;
++			if (data)
++				memcpy(adv->per_adv_data, data, data_len);
++			adv->flags = flags;
++		} else if (!adv) {
++			/* Create an instance if that could not be found */
+ 			adv = hci_add_per_instance(hdev, instance, flags,
+ 						   data_len, data,
+ 						   sync_interval,
+@@ -1812,7 +1822,8 @@ static int hci_set_ext_adv_data_sync(struct hci_dev *hdev, u8 instance)
+ 			return 0;
+ 	}
+ 
+-	len = eir_create_adv_data(hdev, instance, pdu->data);
++	len = eir_create_adv_data(hdev, instance, pdu->data,
++				  HCI_MAX_EXT_AD_LENGTH);
+ 
+ 	pdu->length = len;
+ 	pdu->handle = adv ? adv->handle : instance;
+@@ -1843,7 +1854,7 @@ static int hci_set_adv_data_sync(struct hci_dev *hdev, u8 instance)
+ 
+ 	memset(&cp, 0, sizeof(cp));
+ 
+-	len = eir_create_adv_data(hdev, instance, cp.data);
++	len = eir_create_adv_data(hdev, instance, cp.data, sizeof(cp.data));
+ 
+ 	/* There's nothing to do if the data hasn't changed */
+ 	if (hdev->adv_data_len == len &&
+@@ -2860,7 +2871,7 @@ static int hci_le_set_ext_scan_param_sync(struct hci_dev *hdev, u8 type,
+ 		if (sent) {
+ 			struct hci_conn *conn;
+ 
+-			conn = hci_conn_hash_lookup_ba(hdev, ISO_LINK,
++			conn = hci_conn_hash_lookup_ba(hdev, BIS_LINK,
+ 						       &sent->bdaddr);
+ 			if (conn) {
+ 				struct bt_iso_qos *qos = &conn->iso_qos;
+@@ -5477,7 +5488,7 @@ static int hci_connect_cancel_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ 	if (conn->type == LE_LINK)
+ 		return hci_le_connect_cancel_sync(hdev, conn, reason);
+ 
+-	if (conn->type == ISO_LINK) {
++	if (conn->type == CIS_LINK) {
+ 		/* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
+ 		 * page 1857:
+ 		 *
+@@ -5490,9 +5501,10 @@ static int hci_connect_cancel_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ 			return hci_disconnect_sync(hdev, conn, reason);
+ 
+ 		/* CIS with no Create CIS sent have nothing to cancel */
+-		if (bacmp(&conn->dst, BDADDR_ANY))
+-			return HCI_ERROR_LOCAL_HOST_TERM;
++		return HCI_ERROR_LOCAL_HOST_TERM;
++	}
+ 
++	if (conn->type == BIS_LINK) {
+ 		/* There is no way to cancel a BIS without terminating the BIG
+ 		 * which is done later on connection cleanup.
+ 		 */
+@@ -5554,9 +5566,12 @@ static int hci_reject_conn_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ {
+ 	struct hci_cp_reject_conn_req cp;
+ 
+-	if (conn->type == ISO_LINK)
++	if (conn->type == CIS_LINK)
+ 		return hci_le_reject_cis_sync(hdev, conn, reason);
+ 
++	if (conn->type == BIS_LINK)
++		return -EINVAL;
++
+ 	if (conn->type == SCO_LINK || conn->type == ESCO_LINK)
+ 		return hci_reject_sco_sync(hdev, conn, reason);
+ 
+@@ -6898,20 +6913,37 @@ int hci_le_conn_update_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ 
+ static void create_pa_complete(struct hci_dev *hdev, void *data, int err)
+ {
++	struct hci_conn *conn = data;
++	struct hci_conn *pa_sync;
++
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+-	if (!err)
++	if (err == -ECANCELED)
+ 		return;
+ 
++	hci_dev_lock(hdev);
++
+ 	hci_dev_clear_flag(hdev, HCI_PA_SYNC);
+ 
+-	if (err == -ECANCELED)
+-		return;
++	if (!hci_conn_valid(hdev, conn))
++		clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
+ 
+-	hci_dev_lock(hdev);
++	if (!err)
++		goto unlock;
+ 
+-	hci_update_passive_scan_sync(hdev);
++	/* Add connection to indicate PA sync error */
++	pa_sync = hci_conn_add_unset(hdev, BIS_LINK, BDADDR_ANY,
++				     HCI_ROLE_SLAVE);
++
++	if (IS_ERR(pa_sync))
++		goto unlock;
+ 
++	set_bit(HCI_CONN_PA_SYNC_FAILED, &pa_sync->flags);
++
++	/* Notify iso layer */
++	hci_connect_cfm(pa_sync, bt_status(err));
++
++unlock:
+ 	hci_dev_unlock(hdev);
+ }
+ 
+@@ -6925,9 +6957,23 @@ static int hci_le_pa_create_sync(struct hci_dev *hdev, void *data)
+ 	if (!hci_conn_valid(hdev, conn))
+ 		return -ECANCELED;
+ 
++	if (conn->sync_handle != HCI_SYNC_HANDLE_INVALID)
++		return -EINVAL;
++
+ 	if (hci_dev_test_and_set_flag(hdev, HCI_PA_SYNC))
+ 		return -EBUSY;
+ 
++	/* If the SID has not been set and active scanning is enabled, stop
++	 * scanning so that passive scanning is used instead, with the allow
++	 * list programmed to contain only the connection address.
++	 */
++	if (conn->sid == HCI_SID_INVALID &&
++	    hci_dev_test_flag(hdev, HCI_LE_SCAN)) {
++		hci_scan_disable_sync(hdev);
++		hci_dev_set_flag(hdev, HCI_LE_SCAN_INTERRUPTED);
++		hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
++	}
++
+ 	/* Mark HCI_CONN_CREATE_PA_SYNC so hci_update_passive_scan_sync can
+ 	 * program the address in the allow list so PA advertisements can be
+ 	 * received.
+@@ -6936,6 +6982,14 @@ static int hci_le_pa_create_sync(struct hci_dev *hdev, void *data)
+ 
+ 	hci_update_passive_scan_sync(hdev);
+ 
++	/* If the SID has not been set, listen for HCI_EV_LE_EXT_ADV_REPORT
++	 * to update it.
++	 */
++	if (conn->sid == HCI_SID_INVALID)
++		__hci_cmd_sync_status_sk(hdev, HCI_OP_NOP, 0, NULL,
++					 HCI_EV_LE_EXT_ADV_REPORT,
++					 conn->conn_timeout, NULL);
++
+ 	memset(&cp, 0, sizeof(cp));
+ 	cp.options = qos->bcast.options;
+ 	cp.sid = conn->sid;
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 2819cda616bce8..5389af86bdae4f 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -941,7 +941,7 @@ static int iso_sock_bind_bc(struct socket *sock, struct sockaddr *addr,
+ 
+ 	iso_pi(sk)->dst_type = sa->iso_bc->bc_bdaddr_type;
+ 
+-	if (sa->iso_bc->bc_sid > 0x0f)
++	if (sa->iso_bc->bc_sid > 0x0f && sa->iso_bc->bc_sid != HCI_SID_INVALID)
+ 		return -EINVAL;
+ 
+ 	iso_pi(sk)->bc_sid = sa->iso_bc->bc_sid;
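
With HCI_SID_INVALID now accepted at bind time as a wildcard, the SID check above becomes "reject anything over 0x0f unless it is the explicit invalid marker". In isolation (0xff for HCI_SID_INVALID is an assumption here):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SID_INVALID 0xff	/* assumed value of HCI_SID_INVALID */

static bool bc_sid_ok(uint8_t sid)
{
	return sid <= 0x0f || sid == SID_INVALID;
}

int main(void)
{
	printf("0x0f ok: %d, 0x10 ok: %d, wildcard ok: %d\n",
	       bc_sid_ok(0x0f), bc_sid_ok(0x10), bc_sid_ok(SID_INVALID));
	return 0;
}
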
+@@ -2029,6 +2029,9 @@ static bool iso_match_sid(struct sock *sk, void *data)
+ {
+ 	struct hci_ev_le_pa_sync_established *ev = data;
+ 
++	if (iso_pi(sk)->bc_sid == HCI_SID_INVALID)
++		return true;
++
+ 	return ev->sid == iso_pi(sk)->bc_sid;
+ }
+ 
+@@ -2075,8 +2078,10 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+ 	if (ev1) {
+ 		sk = iso_get_sock(&hdev->bdaddr, bdaddr, BT_LISTEN,
+ 				  iso_match_sid, ev1);
+-		if (sk && !ev1->status)
++		if (sk && !ev1->status) {
+ 			iso_pi(sk)->sync_handle = le16_to_cpu(ev1->handle);
++			iso_pi(sk)->bc_sid = ev1->sid;
++		}
+ 
+ 		goto done;
+ 	}
+@@ -2203,7 +2208,7 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+ 
+ static void iso_connect_cfm(struct hci_conn *hcon, __u8 status)
+ {
+-	if (hcon->type != ISO_LINK) {
++	if (hcon->type != CIS_LINK && hcon->type != BIS_LINK) {
+ 		if (hcon->type != LE_LINK)
+ 			return;
+ 
+@@ -2244,7 +2249,7 @@ static void iso_connect_cfm(struct hci_conn *hcon, __u8 status)
+ 
+ static void iso_disconn_cfm(struct hci_conn *hcon, __u8 reason)
+ {
+-	if (hcon->type != ISO_LINK)
++	if (hcon->type != CIS_LINK && hcon->type != BIS_LINK)
+ 		return;
+ 
+ 	BT_DBG("hcon %p reason %d", hcon, reason);
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 042d3ac3b4a38e..a5bde5db58efcb 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -4870,7 +4870,8 @@ static int l2cap_le_connect_req(struct l2cap_conn *conn,
+ 
+ 	if (!smp_sufficient_security(conn->hcon, pchan->sec_level,
+ 				     SMP_ALLOW_STK)) {
+-		result = L2CAP_CR_LE_AUTHENTICATION;
++		result = pchan->sec_level == BT_SECURITY_MEDIUM ?
++			L2CAP_CR_LE_ENCRYPTION : L2CAP_CR_LE_AUTHENTICATION;
+ 		chan = NULL;
+ 		goto response_unlock;
+ 	}
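
The l2cap_core.c change makes the LE connect response distinguish a missing-encryption failure from a missing-authentication one: a channel that only asked for BT_SECURITY_MEDIUM needs encryption, not authentication, so rejecting it now reports L2CAP_CR_LE_ENCRYPTION. The selection in isolation, using the result codes defined by the L2CAP specification:

#include <stdio.h>

enum sec_level { SEC_LOW, SEC_MEDIUM, SEC_HIGH };

#define CR_LE_AUTHENTICATION 0x0005	/* insufficient authentication */
#define CR_LE_ENCRYPTION     0x0008	/* insufficient encryption */

static int le_connect_result(enum sec_level level)
{
	return level == SEC_MEDIUM ? CR_LE_ENCRYPTION : CR_LE_AUTHENTICATION;
}

int main(void)
{
	printf("medium -> %#x, high -> %#x\n",
	       le_connect_result(SEC_MEDIUM), le_connect_result(SEC_HIGH));
	return 0;
}
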
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 46b22708dfbd2d..d540f7b4f75fbf 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -1447,22 +1447,17 @@ static void settings_rsp(struct mgmt_pending_cmd *cmd, void *data)
+ 
+ 	send_settings_rsp(cmd->sk, cmd->opcode, match->hdev);
+ 
+-	list_del(&cmd->list);
+-
+ 	if (match->sk == NULL) {
+ 		match->sk = cmd->sk;
+ 		sock_hold(match->sk);
+ 	}
+-
+-	mgmt_pending_free(cmd);
+ }
+ 
+ static void cmd_status_rsp(struct mgmt_pending_cmd *cmd, void *data)
+ {
+ 	u8 *status = data;
+ 
+-	mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, *status);
+-	mgmt_pending_remove(cmd);
++	mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, *status);
+ }
+ 
+ static void cmd_complete_rsp(struct mgmt_pending_cmd *cmd, void *data)
+@@ -1476,8 +1471,6 @@ static void cmd_complete_rsp(struct mgmt_pending_cmd *cmd, void *data)
+ 
+ 	if (cmd->cmd_complete) {
+ 		cmd->cmd_complete(cmd, match->mgmt_status);
+-		mgmt_pending_remove(cmd);
+-
+ 		return;
+ 	}
+ 
+@@ -1486,13 +1479,13 @@ static void cmd_complete_rsp(struct mgmt_pending_cmd *cmd, void *data)
+ 
+ static int generic_cmd_complete(struct mgmt_pending_cmd *cmd, u8 status)
+ {
+-	return mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, status,
++	return mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, status,
+ 				 cmd->param, cmd->param_len);
+ }
+ 
+ static int addr_cmd_complete(struct mgmt_pending_cmd *cmd, u8 status)
+ {
+-	return mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, status,
++	return mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, status,
+ 				 cmd->param, sizeof(struct mgmt_addr_info));
+ }
+ 
+@@ -1532,7 +1525,7 @@ static void mgmt_set_discoverable_complete(struct hci_dev *hdev, void *data,
+ 
+ 	if (err) {
+ 		u8 mgmt_err = mgmt_status(err);
+-		mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err);
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
+ 		hci_dev_clear_flag(hdev, HCI_LIMITED_DISCOVERABLE);
+ 		goto done;
+ 	}
+@@ -1707,7 +1700,7 @@ static void mgmt_set_connectable_complete(struct hci_dev *hdev, void *data,
+ 
+ 	if (err) {
+ 		u8 mgmt_err = mgmt_status(err);
+-		mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err);
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
+ 		goto done;
+ 	}
+ 
+@@ -1943,8 +1936,8 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ 			new_settings(hdev, NULL);
+ 		}
+ 
+-		mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, cmd_status_rsp,
+-				     &mgmt_err);
++		mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, true,
++				     cmd_status_rsp, &mgmt_err);
+ 		return;
+ 	}
+ 
+@@ -1954,7 +1947,7 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ 		changed = hci_dev_test_and_clear_flag(hdev, HCI_SSP_ENABLED);
+ 	}
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, settings_rsp, &match);
++	mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, true, settings_rsp, &match);
+ 
+ 	if (changed)
+ 		new_settings(hdev, match.sk);
+@@ -2074,12 +2067,12 @@ static void set_le_complete(struct hci_dev *hdev, void *data, int err)
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+ 	if (status) {
+-		mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, cmd_status_rsp,
+-							&status);
++		mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, true, cmd_status_rsp,
++				     &status);
+ 		return;
+ 	}
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, settings_rsp, &match);
++	mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, true, settings_rsp, &match);
+ 
+ 	new_settings(hdev, match.sk);
+ 
+@@ -2138,7 +2131,7 @@ static void set_mesh_complete(struct hci_dev *hdev, void *data, int err)
+ 	struct sock *sk = cmd->sk;
+ 
+ 	if (status) {
+-		mgmt_pending_foreach(MGMT_OP_SET_MESH_RECEIVER, hdev,
++		mgmt_pending_foreach(MGMT_OP_SET_MESH_RECEIVER, hdev, true,
+ 				     cmd_status_rsp, &status);
+ 		return;
+ 	}
+@@ -2566,7 +2559,8 @@ static int mgmt_hci_cmd_sync(struct sock *sk, struct hci_dev *hdev,
+ 	struct mgmt_pending_cmd *cmd;
+ 	int err;
+ 
+-	if (len < sizeof(*cp))
++	if (len != (offsetof(struct mgmt_cp_hci_cmd_sync, params) +
++		    le16_to_cpu(cp->params_len)))
+ 		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_HCI_CMD_SYNC,
+ 				       MGMT_STATUS_INVALID_PARAMS);
+ 
+@@ -2637,7 +2631,7 @@ static void mgmt_class_complete(struct hci_dev *hdev, void *data, int err)
+ 
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+-	mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
++	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 			  mgmt_status(err), hdev->dev_class, 3);
+ 
+ 	mgmt_pending_free(cmd);
+@@ -3221,7 +3215,8 @@ static int disconnect(struct sock *sk, struct hci_dev *hdev, void *data,
+ static u8 link_to_bdaddr(u8 link_type, u8 addr_type)
+ {
+ 	switch (link_type) {
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 	case LE_LINK:
+ 		switch (addr_type) {
+ 		case ADDR_LE_DEV_PUBLIC:
+@@ -3425,7 +3420,7 @@ static int pairing_complete(struct mgmt_pending_cmd *cmd, u8 status)
+ 	bacpy(&rp.addr.bdaddr, &conn->dst);
+ 	rp.addr.type = link_to_bdaddr(conn->type, conn->dst_type);
+ 
+-	err = mgmt_cmd_complete(cmd->sk, cmd->index, MGMT_OP_PAIR_DEVICE,
++	err = mgmt_cmd_complete(cmd->sk, cmd->hdev->id, MGMT_OP_PAIR_DEVICE,
+ 				status, &rp, sizeof(rp));
+ 
+ 	/* So we don't get further callbacks for this connection */
+@@ -5106,24 +5101,14 @@ static void mgmt_adv_monitor_added(struct sock *sk, struct hci_dev *hdev,
+ 	mgmt_event(MGMT_EV_ADV_MONITOR_ADDED, hdev, &ev, sizeof(ev), sk);
+ }
+ 
+-void mgmt_adv_monitor_removed(struct hci_dev *hdev, u16 handle)
++static void mgmt_adv_monitor_removed(struct sock *sk, struct hci_dev *hdev,
++				     __le16 handle)
+ {
+ 	struct mgmt_ev_adv_monitor_removed ev;
+-	struct mgmt_pending_cmd *cmd;
+-	struct sock *sk_skip = NULL;
+-	struct mgmt_cp_remove_adv_monitor *cp;
+-
+-	cmd = pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev);
+-	if (cmd) {
+-		cp = cmd->param;
+-
+-		if (cp->monitor_handle)
+-			sk_skip = cmd->sk;
+-	}
+ 
+-	ev.monitor_handle = cpu_to_le16(handle);
++	ev.monitor_handle = handle;
+ 
+-	mgmt_event(MGMT_EV_ADV_MONITOR_REMOVED, hdev, &ev, sizeof(ev), sk_skip);
++	mgmt_event(MGMT_EV_ADV_MONITOR_REMOVED, hdev, &ev, sizeof(ev), sk);
+ }
+ 
+ static int read_adv_mon_features(struct sock *sk, struct hci_dev *hdev,
+@@ -5194,7 +5179,7 @@ static void mgmt_add_adv_patterns_monitor_complete(struct hci_dev *hdev,
+ 		hci_update_passive_scan(hdev);
+ 	}
+ 
+-	mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
++	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 			  mgmt_status(status), &rp, sizeof(rp));
+ 	mgmt_pending_remove(cmd);
+ 
+@@ -5225,8 +5210,7 @@ static int __add_adv_patterns_monitor(struct sock *sk, struct hci_dev *hdev,
+ 
+ 	if (pending_find(MGMT_OP_SET_LE, hdev) ||
+ 	    pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR, hdev) ||
+-	    pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR_RSSI, hdev) ||
+-	    pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev)) {
++	    pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR_RSSI, hdev)) {
+ 		status = MGMT_STATUS_BUSY;
+ 		goto unlock;
+ 	}
+@@ -5396,8 +5380,7 @@ static void mgmt_remove_adv_monitor_complete(struct hci_dev *hdev,
+ 	struct mgmt_pending_cmd *cmd = data;
+ 	struct mgmt_cp_remove_adv_monitor *cp;
+ 
+-	if (status == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev))
++	if (status == -ECANCELED)
+ 		return;
+ 
+ 	hci_dev_lock(hdev);
+@@ -5406,12 +5389,14 @@ static void mgmt_remove_adv_monitor_complete(struct hci_dev *hdev,
+ 
+ 	rp.monitor_handle = cp->monitor_handle;
+ 
+-	if (!status)
++	if (!status) {
++		mgmt_adv_monitor_removed(cmd->sk, hdev, cp->monitor_handle);
+ 		hci_update_passive_scan(hdev);
++	}
+ 
+-	mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
++	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 			  mgmt_status(status), &rp, sizeof(rp));
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ 
+ 	hci_dev_unlock(hdev);
+ 	bt_dev_dbg(hdev, "remove monitor %d complete, status %d",
+@@ -5421,10 +5406,6 @@ static void mgmt_remove_adv_monitor_complete(struct hci_dev *hdev,
+ static int mgmt_remove_adv_monitor_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-
+-	if (cmd != pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev))
+-		return -ECANCELED;
+-
+ 	struct mgmt_cp_remove_adv_monitor *cp = cmd->param;
+ 	u16 handle = __le16_to_cpu(cp->monitor_handle);
+ 
+@@ -5443,14 +5424,13 @@ static int remove_adv_monitor(struct sock *sk, struct hci_dev *hdev,
+ 	hci_dev_lock(hdev);
+ 
+ 	if (pending_find(MGMT_OP_SET_LE, hdev) ||
+-	    pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev) ||
+ 	    pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR, hdev) ||
+ 	    pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR_RSSI, hdev)) {
+ 		status = MGMT_STATUS_BUSY;
+ 		goto unlock;
+ 	}
+ 
+-	cmd = mgmt_pending_add(sk, MGMT_OP_REMOVE_ADV_MONITOR, hdev, data, len);
++	cmd = mgmt_pending_new(sk, MGMT_OP_REMOVE_ADV_MONITOR, hdev, data, len);
+ 	if (!cmd) {
+ 		status = MGMT_STATUS_NO_RESOURCES;
+ 		goto unlock;
+@@ -5460,7 +5440,7 @@ static int remove_adv_monitor(struct sock *sk, struct hci_dev *hdev,
+ 				  mgmt_remove_adv_monitor_complete);
+ 
+ 	if (err) {
+-		mgmt_pending_remove(cmd);
++		mgmt_pending_free(cmd);
+ 
+ 		if (err == -ENOMEM)
+ 			status = MGMT_STATUS_NO_RESOURCES;
+@@ -5790,7 +5770,7 @@ static void start_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ 	    cmd != pending_find(MGMT_OP_START_SERVICE_DISCOVERY, hdev))
+ 		return;
+ 
+-	mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err),
++	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_status(err),
+ 			  cmd->param, 1);
+ 	mgmt_pending_remove(cmd);
+ 
+@@ -6011,7 +5991,7 @@ static void stop_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ 
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+-	mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err),
++	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_status(err),
+ 			  cmd->param, 1);
+ 	mgmt_pending_remove(cmd);
+ 
+@@ -6236,7 +6216,7 @@ static void set_advertising_complete(struct hci_dev *hdev, void *data, int err)
+ 	u8 status = mgmt_status(err);
+ 
+ 	if (status) {
+-		mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev,
++		mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, true,
+ 				     cmd_status_rsp, &status);
+ 		return;
+ 	}
+@@ -6246,7 +6226,7 @@ static void set_advertising_complete(struct hci_dev *hdev, void *data, int err)
+ 	else
+ 		hci_dev_clear_flag(hdev, HCI_ADVERTISING);
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, settings_rsp,
++	mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, true, settings_rsp,
+ 			     &match);
+ 
+ 	new_settings(hdev, match.sk);
+@@ -6590,7 +6570,7 @@ static void set_bredr_complete(struct hci_dev *hdev, void *data, int err)
+ 		 */
+ 		hci_dev_clear_flag(hdev, HCI_BREDR_ENABLED);
+ 
+-		mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err);
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
+ 	} else {
+ 		send_settings_rsp(cmd->sk, MGMT_OP_SET_BREDR, hdev);
+ 		new_settings(hdev, cmd->sk);
+@@ -6727,7 +6707,7 @@ static void set_secure_conn_complete(struct hci_dev *hdev, void *data, int err)
+ 	if (err) {
+ 		u8 mgmt_err = mgmt_status(err);
+ 
+-		mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err);
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
+ 		goto done;
+ 	}
+ 
+@@ -7174,7 +7154,7 @@ static void get_conn_info_complete(struct hci_dev *hdev, void *data, int err)
+ 		rp.max_tx_power = HCI_TX_POWER_INVALID;
+ 	}
+ 
+-	mgmt_cmd_complete(cmd->sk, cmd->index, MGMT_OP_GET_CONN_INFO, status,
++	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, MGMT_OP_GET_CONN_INFO, status,
+ 			  &rp, sizeof(rp));
+ 
+ 	mgmt_pending_free(cmd);
+@@ -7334,7 +7314,7 @@ static void get_clock_info_complete(struct hci_dev *hdev, void *data, int err)
+ 	}
+ 
+ complete:
+-	mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, status, &rp,
++	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, status, &rp,
+ 			  sizeof(rp));
+ 
+ 	mgmt_pending_free(cmd);
+@@ -8584,10 +8564,10 @@ static void add_advertising_complete(struct hci_dev *hdev, void *data, int err)
+ 	rp.instance = cp->instance;
+ 
+ 	if (err)
+-		mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode,
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 				mgmt_status(err));
+ 	else
+-		mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
++		mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 				  mgmt_status(err), &rp, sizeof(rp));
+ 
+ 	add_adv_complete(hdev, cmd->sk, cp->instance, err);
+@@ -8775,10 +8755,10 @@ static void add_ext_adv_params_complete(struct hci_dev *hdev, void *data,
+ 
+ 		hci_remove_adv_instance(hdev, cp->instance);
+ 
+-		mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode,
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 				mgmt_status(err));
+ 	} else {
+-		mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
++		mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 				  mgmt_status(err), &rp, sizeof(rp));
+ 	}
+ 
+@@ -8925,10 +8905,10 @@ static void add_ext_adv_data_complete(struct hci_dev *hdev, void *data, int err)
+ 	rp.instance = cp->instance;
+ 
+ 	if (err)
+-		mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode,
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 				mgmt_status(err));
+ 	else
+-		mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
++		mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 				  mgmt_status(err), &rp, sizeof(rp));
+ 
+ 	mgmt_pending_free(cmd);
+@@ -9087,10 +9067,10 @@ static void remove_advertising_complete(struct hci_dev *hdev, void *data,
+ 	rp.instance = cp->instance;
+ 
+ 	if (err)
+-		mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode,
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 				mgmt_status(err));
+ 	else
+-		mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
++		mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 				  MGMT_STATUS_SUCCESS, &rp, sizeof(rp));
+ 
+ 	mgmt_pending_free(cmd);
+@@ -9362,7 +9342,7 @@ void mgmt_index_removed(struct hci_dev *hdev)
+ 	if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks))
+ 		return;
+ 
+-	mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match);
++	mgmt_pending_foreach(0, hdev, true, cmd_complete_rsp, &match);
+ 
+ 	if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
+ 		mgmt_index_event(MGMT_EV_UNCONF_INDEX_REMOVED, hdev, NULL, 0,
+@@ -9400,7 +9380,8 @@ void mgmt_power_on(struct hci_dev *hdev, int err)
+ 		hci_update_passive_scan(hdev);
+ 	}
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, settings_rsp, &match);
++	mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, true, settings_rsp,
++			     &match);
+ 
+ 	new_settings(hdev, match.sk);
+ 
+@@ -9415,7 +9396,8 @@ void __mgmt_power_off(struct hci_dev *hdev)
+ 	struct cmd_lookup match = { NULL, hdev };
+ 	u8 zero_cod[] = { 0, 0, 0 };
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, settings_rsp, &match);
++	mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, true, settings_rsp,
++			     &match);
+ 
+ 	/* If the power off is because of hdev unregistration let
+ 	 * use the appropriate INVALID_INDEX status. Otherwise use
+@@ -9429,7 +9411,7 @@ void __mgmt_power_off(struct hci_dev *hdev)
+ 	else
+ 		match.mgmt_status = MGMT_STATUS_NOT_POWERED;
+ 
+-	mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match);
++	mgmt_pending_foreach(0, hdev, true, cmd_complete_rsp, &match);
+ 
+ 	if (memcmp(hdev->dev_class, zero_cod, sizeof(zero_cod)) != 0) {
+ 		mgmt_limited_event(MGMT_EV_CLASS_OF_DEV_CHANGED, hdev,
+@@ -9670,7 +9652,6 @@ static void unpair_device_rsp(struct mgmt_pending_cmd *cmd, void *data)
+ 	device_unpaired(hdev, &cp->addr.bdaddr, cp->addr.type, cmd->sk);
+ 
+ 	cmd->cmd_complete(cmd, 0);
+-	mgmt_pending_remove(cmd);
+ }
+ 
+ bool mgmt_powering_down(struct hci_dev *hdev)
+@@ -9726,8 +9707,8 @@ void mgmt_disconnect_failed(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ 	struct mgmt_cp_disconnect *cp;
+ 	struct mgmt_pending_cmd *cmd;
+ 
+-	mgmt_pending_foreach(MGMT_OP_UNPAIR_DEVICE, hdev, unpair_device_rsp,
+-			     hdev);
++	mgmt_pending_foreach(MGMT_OP_UNPAIR_DEVICE, hdev, true,
++			     unpair_device_rsp, hdev);
+ 
+ 	cmd = pending_find(MGMT_OP_DISCONNECT, hdev);
+ 	if (!cmd)
+@@ -9920,7 +9901,7 @@ void mgmt_auth_enable_complete(struct hci_dev *hdev, u8 status)
+ 
+ 	if (status) {
+ 		u8 mgmt_err = mgmt_status(status);
+-		mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev,
++		mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev, true,
+ 				     cmd_status_rsp, &mgmt_err);
+ 		return;
+ 	}
+@@ -9930,8 +9911,8 @@ void mgmt_auth_enable_complete(struct hci_dev *hdev, u8 status)
+ 	else
+ 		changed = hci_dev_test_and_clear_flag(hdev, HCI_LINK_SECURITY);
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev, settings_rsp,
+-			     &match);
++	mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev, true,
++			     settings_rsp, &match);
+ 
+ 	if (changed)
+ 		new_settings(hdev, match.sk);
+@@ -9955,9 +9936,12 @@ void mgmt_set_class_of_dev_complete(struct hci_dev *hdev, u8 *dev_class,
+ {
+ 	struct cmd_lookup match = { NULL, hdev, mgmt_status(status) };
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_DEV_CLASS, hdev, sk_lookup, &match);
+-	mgmt_pending_foreach(MGMT_OP_ADD_UUID, hdev, sk_lookup, &match);
+-	mgmt_pending_foreach(MGMT_OP_REMOVE_UUID, hdev, sk_lookup, &match);
++	mgmt_pending_foreach(MGMT_OP_SET_DEV_CLASS, hdev, false, sk_lookup,
++			     &match);
++	mgmt_pending_foreach(MGMT_OP_ADD_UUID, hdev, false, sk_lookup,
++			     &match);
++	mgmt_pending_foreach(MGMT_OP_REMOVE_UUID, hdev, false, sk_lookup,
++			     &match);
+ 
+ 	if (!status) {
+ 		mgmt_limited_event(MGMT_EV_CLASS_OF_DEV_CHANGED, hdev, dev_class,
+diff --git a/net/bluetooth/mgmt_util.c b/net/bluetooth/mgmt_util.c
+index e5ff65e424b5b4..a88a07da394734 100644
+--- a/net/bluetooth/mgmt_util.c
++++ b/net/bluetooth/mgmt_util.c
+@@ -217,30 +217,47 @@ int mgmt_cmd_complete(struct sock *sk, u16 index, u16 cmd, u8 status,
+ struct mgmt_pending_cmd *mgmt_pending_find(unsigned short channel, u16 opcode,
+ 					   struct hci_dev *hdev)
+ {
+-	struct mgmt_pending_cmd *cmd;
++	struct mgmt_pending_cmd *cmd, *tmp;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
+ 
+-	list_for_each_entry(cmd, &hdev->mgmt_pending, list) {
++	list_for_each_entry_safe(cmd, tmp, &hdev->mgmt_pending, list) {
+ 		if (hci_sock_get_channel(cmd->sk) != channel)
+ 			continue;
+-		if (cmd->opcode == opcode)
++
++		if (cmd->opcode == opcode) {
++			mutex_unlock(&hdev->mgmt_pending_lock);
+ 			return cmd;
++		}
+ 	}
+ 
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
+ 	return NULL;
+ }
+ 
+-void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev,
++void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev, bool remove,
+ 			  void (*cb)(struct mgmt_pending_cmd *cmd, void *data),
+ 			  void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd, *tmp;
+ 
++	mutex_lock(&hdev->mgmt_pending_lock);
++
+ 	list_for_each_entry_safe(cmd, tmp, &hdev->mgmt_pending, list) {
+ 		if (opcode > 0 && cmd->opcode != opcode)
+ 			continue;
+ 
++		if (remove)
++			list_del(&cmd->list);
++
+ 		cb(cmd, data);
++
++		if (remove)
++			mgmt_pending_free(cmd);
+ 	}
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
+ }
+ 
+ struct mgmt_pending_cmd *mgmt_pending_new(struct sock *sk, u16 opcode,
+@@ -254,7 +271,7 @@ struct mgmt_pending_cmd *mgmt_pending_new(struct sock *sk, u16 opcode,
+ 		return NULL;
+ 
+ 	cmd->opcode = opcode;
+-	cmd->index = hdev->id;
++	cmd->hdev = hdev;
+ 
+ 	cmd->param = kmemdup(data, len, GFP_KERNEL);
+ 	if (!cmd->param) {
+@@ -280,7 +297,9 @@ struct mgmt_pending_cmd *mgmt_pending_add(struct sock *sk, u16 opcode,
+ 	if (!cmd)
+ 		return NULL;
+ 
++	mutex_lock(&hdev->mgmt_pending_lock);
+ 	list_add_tail(&cmd->list, &hdev->mgmt_pending);
++	mutex_unlock(&hdev->mgmt_pending_lock);
+ 
+ 	return cmd;
+ }
+@@ -294,7 +313,10 @@ void mgmt_pending_free(struct mgmt_pending_cmd *cmd)
+ 
+ void mgmt_pending_remove(struct mgmt_pending_cmd *cmd)
+ {
++	mutex_lock(&cmd->hdev->mgmt_pending_lock);
+ 	list_del(&cmd->list);
++	mutex_unlock(&cmd->hdev->mgmt_pending_lock);
++
+ 	mgmt_pending_free(cmd);
+ }
+ 
+@@ -304,7 +326,7 @@ void mgmt_mesh_foreach(struct hci_dev *hdev,
+ {
+ 	struct mgmt_mesh_tx *mesh_tx, *tmp;
+ 
+-	list_for_each_entry_safe(mesh_tx, tmp, &hdev->mgmt_pending, list) {
++	list_for_each_entry_safe(mesh_tx, tmp, &hdev->mesh_pending, list) {
+ 		if (!sk || mesh_tx->sk == sk)
+ 			cb(mesh_tx, data);
+ 	}
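
The mgmt_util.c rework above moves pending-command iteration under the new hdev->mgmt_pending_lock mutex and folds removal into the walk itself: when the remove flag is set, each matched entry is unlinked before its callback runs and freed afterwards, which is why the mgmt.c callbacks earlier in this patch dropped their own list_del() and mgmt_pending_remove() calls. A userspace sketch of that contract (types and list shape are illustrative):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct pending_cmd {
	struct pending_cmd *next;
	int opcode;
};

static pthread_mutex_t pending_lock = PTHREAD_MUTEX_INITIALIZER;
static struct pending_cmd *pending_head;

static void pending_add(int opcode)
{
	struct pending_cmd *cmd = malloc(sizeof(*cmd));

	cmd->opcode = opcode;
	pthread_mutex_lock(&pending_lock);
	cmd->next = pending_head;
	pending_head = cmd;
	pthread_mutex_unlock(&pending_lock);
}

static void pending_foreach(int opcode, int remove,
			    void (*cb)(struct pending_cmd *, void *),
			    void *data)
{
	struct pending_cmd **pp, *cmd;

	pthread_mutex_lock(&pending_lock);
	pp = &pending_head;
	while ((cmd = *pp)) {
		if (opcode > 0 && cmd->opcode != opcode) {
			pp = &cmd->next;
			continue;
		}
		if (remove)
			*pp = cmd->next;	/* unlink before the callback */
		else
			pp = &cmd->next;
		cb(cmd, data);
		if (remove)
			free(cmd);		/* free only after it ran */
	}
	pthread_mutex_unlock(&pending_lock);
}

static void print_cmd(struct pending_cmd *cmd, void *data)
{
	(void)data;
	printf("opcode %d\n", cmd->opcode);
}

int main(void)
{
	pending_add(1);
	pending_add(2);
	pending_add(1);
	pending_foreach(1, 1, print_cmd, NULL);	/* prints and removes both 1s */
	pending_foreach(0, 1, print_cmd, NULL);	/* drains the rest */
	return 0;
}
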
+diff --git a/net/bluetooth/mgmt_util.h b/net/bluetooth/mgmt_util.h
+index f2ba994ab1d847..024e51dd693756 100644
+--- a/net/bluetooth/mgmt_util.h
++++ b/net/bluetooth/mgmt_util.h
+@@ -33,7 +33,7 @@ struct mgmt_mesh_tx {
+ struct mgmt_pending_cmd {
+ 	struct list_head list;
+ 	u16 opcode;
+-	int index;
++	struct hci_dev *hdev;
+ 	void *param;
+ 	size_t param_len;
+ 	struct sock *sk;
+@@ -54,7 +54,7 @@ int mgmt_cmd_complete(struct sock *sk, u16 index, u16 cmd, u8 status,
+ 
+ struct mgmt_pending_cmd *mgmt_pending_find(unsigned short channel, u16 opcode,
+ 					   struct hci_dev *hdev);
+-void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev,
++void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev, bool remove,
+ 			  void (*cb)(struct mgmt_pending_cmd *cmd, void *data),
+ 			  void *data);
+ struct mgmt_pending_cmd *mgmt_pending_add(struct sock *sk, u16 opcode,
+diff --git a/net/bridge/netfilter/nf_conntrack_bridge.c b/net/bridge/netfilter/nf_conntrack_bridge.c
+index 816bb0fde718ed..6482de4d875092 100644
+--- a/net/bridge/netfilter/nf_conntrack_bridge.c
++++ b/net/bridge/netfilter/nf_conntrack_bridge.c
+@@ -60,19 +60,19 @@ static int nf_br_ip_fragment(struct net *net, struct sock *sk,
+ 		struct ip_fraglist_iter iter;
+ 		struct sk_buff *frag;
+ 
+-		if (first_len - hlen > mtu ||
+-		    skb_headroom(skb) < ll_rs)
++		if (first_len - hlen > mtu)
+ 			goto blackhole;
+ 
+-		if (skb_cloned(skb))
++		if (skb_cloned(skb) ||
++		    skb_headroom(skb) < ll_rs)
+ 			goto slow_path;
+ 
+ 		skb_walk_frags(skb, frag) {
+-			if (frag->len > mtu ||
+-			    skb_headroom(frag) < hlen + ll_rs)
++			if (frag->len > mtu)
+ 				goto blackhole;
+ 
+-			if (skb_shared(frag))
++			if (skb_shared(frag) ||
++			    skb_headroom(frag) < hlen + ll_rs)
+ 				goto slow_path;
+ 		}
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 0d891634c69270..2b20aadaf9268d 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -10393,7 +10393,7 @@ static void dev_index_release(struct net *net, int ifindex)
+ static bool from_cleanup_net(void)
+ {
+ #ifdef CONFIG_NET_NS
+-	return current == cleanup_net_task;
++	return current == READ_ONCE(cleanup_net_task);
+ #else
+ 	return false;
+ #endif
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 577a4504e26fa0..357d26b76c22d9 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -1968,10 +1968,11 @@ BPF_CALL_5(bpf_l4_csum_replace, struct sk_buff *, skb, u32, offset,
+ 	bool is_pseudo = flags & BPF_F_PSEUDO_HDR;
+ 	bool is_mmzero = flags & BPF_F_MARK_MANGLED_0;
+ 	bool do_mforce = flags & BPF_F_MARK_ENFORCE;
++	bool is_ipv6   = flags & BPF_F_IPV6;
+ 	__sum16 *ptr;
+ 
+ 	if (unlikely(flags & ~(BPF_F_MARK_MANGLED_0 | BPF_F_MARK_ENFORCE |
+-			       BPF_F_PSEUDO_HDR | BPF_F_HDR_FIELD_MASK)))
++			       BPF_F_PSEUDO_HDR | BPF_F_HDR_FIELD_MASK | BPF_F_IPV6)))
+ 		return -EINVAL;
+ 	if (unlikely(offset > 0xffff || offset & 1))
+ 		return -EFAULT;
+@@ -1987,7 +1988,7 @@ BPF_CALL_5(bpf_l4_csum_replace, struct sk_buff *, skb, u32, offset,
+ 		if (unlikely(from != 0))
+ 			return -EINVAL;
+ 
+-		inet_proto_csum_replace_by_diff(ptr, skb, to, is_pseudo);
++		inet_proto_csum_replace_by_diff(ptr, skb, to, is_pseudo, is_ipv6);
+ 		break;
+ 	case 2:
+ 		inet_proto_csum_replace2(ptr, skb, from, to, is_pseudo);
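
Supporting BPF_F_IPV6 in bpf_l4_csum_replace() requires whitelisting the new bit in the up-front flag mask, otherwise any caller setting it would be rejected with -EINVAL before the helper did anything. The guard pattern in isolation, with bit values as they appear in the BPF UAPI header:

#include <stdint.h>
#include <stdio.h>

#define F_HDR_FIELD_MASK 0xfULL
#define F_PSEUDO_HDR     (1ULL << 4)
#define F_MARK_MANGLED_0 (1ULL << 5)
#define F_MARK_ENFORCE   (1ULL << 6)
#define F_IPV6           (1ULL << 7)

/* Reject any flag bit the helper does not understand. */
static int check_flags(uint64_t flags)
{
	if (flags & ~(F_MARK_MANGLED_0 | F_MARK_ENFORCE |
		      F_PSEUDO_HDR | F_HDR_FIELD_MASK | F_IPV6))
		return -22;	/* -EINVAL */
	return 0;
}

int main(void)
{
	printf("ipv6 flag accepted: %d\n", check_flags(F_IPV6) == 0);
	printf("unknown bit rejected: %d\n", check_flags(1ULL << 20) < 0);
	return 0;
}
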
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index b0dfdf791ece5a..599f6a89ae581e 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -600,7 +600,7 @@ static void cleanup_net(struct work_struct *work)
+ 	LIST_HEAD(net_exit_list);
+ 	LIST_HEAD(dev_kill_list);
+ 
+-	cleanup_net_task = current;
++	WRITE_ONCE(cleanup_net_task, current);
+ 
+ 	/* Atomically snapshot the list of namespaces to cleanup */
+ 	net_kill_list = llist_del_all(&cleanup_list);
+@@ -676,7 +676,7 @@ static void cleanup_net(struct work_struct *work)
+ 		put_user_ns(net->user_ns);
+ 		net_passive_dec(net);
+ 	}
+-	cleanup_net_task = NULL;
++	WRITE_ONCE(cleanup_net_task, NULL);
+ }
+ 
+ /**
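
The dev.c hunk earlier and the net_namespace.c writes above pair READ_ONCE() with WRITE_ONCE() on cleanup_net_task: the annotations forbid the compiler from tearing, caching, or re-reading a plain pointer access that races by design. A rough userspace analogue using C11 relaxed atomics:

#include <stdatomic.h>
#include <stdio.h>

static _Atomic(void *) cleanup_task;	/* stands in for cleanup_net_task */

static void cleanup_begin(void *task)
{
	atomic_store_explicit(&cleanup_task, task, memory_order_relaxed);
}

static int from_cleanup(void *current_task)
{
	return current_task ==
	       atomic_load_explicit(&cleanup_task, memory_order_relaxed);
}

int main(void)
{
	int me;

	cleanup_begin(&me);
	printf("%d %d\n", from_cleanup(&me), from_cleanup(NULL));
	cleanup_begin(NULL);
	return 0;
}
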
+diff --git a/net/core/netmem_priv.h b/net/core/netmem_priv.h
+index 7eadb8393e002f..cd95394399b40c 100644
+--- a/net/core/netmem_priv.h
++++ b/net/core/netmem_priv.h
+@@ -5,7 +5,7 @@
+ 
+ static inline unsigned long netmem_get_pp_magic(netmem_ref netmem)
+ {
+-	return __netmem_clear_lsb(netmem)->pp_magic;
++	return __netmem_clear_lsb(netmem)->pp_magic & ~PP_DMA_INDEX_MASK;
+ }
+ 
+ static inline void netmem_or_pp_magic(netmem_ref netmem, unsigned long pp_magic)
+@@ -15,9 +15,16 @@ static inline void netmem_or_pp_magic(netmem_ref netmem, unsigned long pp_magic)
+ 
+ static inline void netmem_clear_pp_magic(netmem_ref netmem)
+ {
++	WARN_ON_ONCE(__netmem_clear_lsb(netmem)->pp_magic & PP_DMA_INDEX_MASK);
++
+ 	__netmem_clear_lsb(netmem)->pp_magic = 0;
+ }
+ 
++static inline bool netmem_is_pp(netmem_ref netmem)
++{
++	return (netmem_get_pp_magic(netmem) & PP_MAGIC_MASK) == PP_SIGNATURE;
++}
++
+ static inline void netmem_set_pp(netmem_ref netmem, struct page_pool *pool)
+ {
+ 	__netmem_clear_lsb(netmem)->pp = pool;
+@@ -28,4 +35,28 @@ static inline void netmem_set_dma_addr(netmem_ref netmem,
+ {
+ 	__netmem_clear_lsb(netmem)->dma_addr = dma_addr;
+ }
++
++static inline unsigned long netmem_get_dma_index(netmem_ref netmem)
++{
++	unsigned long magic;
++
++	if (WARN_ON_ONCE(netmem_is_net_iov(netmem)))
++		return 0;
++
++	magic = __netmem_clear_lsb(netmem)->pp_magic;
++
++	return (magic & PP_DMA_INDEX_MASK) >> PP_DMA_INDEX_SHIFT;
++}
++
++static inline void netmem_set_dma_index(netmem_ref netmem,
++					unsigned long id)
++{
++	unsigned long magic;
++
++	if (WARN_ON_ONCE(netmem_is_net_iov(netmem)))
++		return;
++
++	magic = netmem_get_pp_magic(netmem) | (id << PP_DMA_INDEX_SHIFT);
++	__netmem_clear_lsb(netmem)->pp_magic = magic;
++}
+ #endif
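
The new netmem_priv.h helpers multiplex a small per-pool allocation id into spare high bits of pp_magic, so the signature check has to mask those bits (and the low compound-head/pfmemalloc flag bits) off before comparing. A standalone sketch of the pack/unpack math; the shift, mask and signature values here are made up, the kernel derives them from PP_SIGNATURE and the xarray limit:

#include <assert.h>
#include <stdio.h>

#define PP_SIGNATURE       0x40UL	/* illustrative */
#define PP_DMA_INDEX_SHIFT 8
#define PP_DMA_INDEX_MASK  (0xffffffUL << PP_DMA_INDEX_SHIFT)
#define PP_MAGIC_MASK      (~(PP_DMA_INDEX_MASK | 0x3UL))

static unsigned long set_dma_index(unsigned long magic, unsigned long id)
{
	return (magic & ~PP_DMA_INDEX_MASK) | (id << PP_DMA_INDEX_SHIFT);
}

static unsigned long get_dma_index(unsigned long magic)
{
	return (magic & PP_DMA_INDEX_MASK) >> PP_DMA_INDEX_SHIFT;
}

static int is_pp(unsigned long magic)
{
	return (magic & PP_MAGIC_MASK) == PP_SIGNATURE;
}

int main(void)
{
	unsigned long magic = PP_SIGNATURE;

	magic = set_dma_index(magic, 42);
	assert(get_dma_index(magic) == 42);
	assert(is_pp(magic));	/* index bits don't break the check */
	printf("magic=%#lx index=%lu\n", magic, get_dma_index(magic));
	return 0;
}
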
+diff --git a/net/core/page_pool.c b/net/core/page_pool.c
+index 7745ad924ae2d8..2d9c51f480fb5f 100644
+--- a/net/core/page_pool.c
++++ b/net/core/page_pool.c
+@@ -153,9 +153,9 @@ u64 *page_pool_ethtool_stats_get(u64 *data, const void *stats)
+ EXPORT_SYMBOL(page_pool_ethtool_stats_get);
+ 
+ #else
+-#define alloc_stat_inc(pool, __stat)
+-#define recycle_stat_inc(pool, __stat)
+-#define recycle_stat_add(pool, __stat, val)
++#define alloc_stat_inc(...)	do { } while (0)
++#define recycle_stat_inc(...)	do { } while (0)
++#define recycle_stat_add(...)	do { } while (0)
+ #endif
+ 
+ static bool page_pool_producer_lock(struct page_pool *pool)
+@@ -276,8 +276,7 @@ static int page_pool_init(struct page_pool *pool,
+ 	/* Driver calling page_pool_create() also call page_pool_destroy() */
+ 	refcount_set(&pool->user_cnt, 1);
+ 
+-	if (pool->dma_map)
+-		get_device(pool->p.dev);
++	xa_init_flags(&pool->dma_mapped, XA_FLAGS_ALLOC1);
+ 
+ 	if (pool->slow.flags & PP_FLAG_ALLOW_UNREADABLE_NETMEM) {
+ 		netdev_assert_locked(pool->slow.netdev);
+@@ -320,9 +319,7 @@ static int page_pool_init(struct page_pool *pool,
+ static void page_pool_uninit(struct page_pool *pool)
+ {
+ 	ptr_ring_cleanup(&pool->ring, NULL);
+-
+-	if (pool->dma_map)
+-		put_device(pool->p.dev);
++	xa_destroy(&pool->dma_mapped);
+ 
+ #ifdef CONFIG_PAGE_POOL_STATS
+ 	if (!pool->system)
+@@ -463,13 +460,21 @@ page_pool_dma_sync_for_device(const struct page_pool *pool,
+ 			      netmem_ref netmem,
+ 			      u32 dma_sync_size)
+ {
+-	if (pool->dma_sync && dma_dev_need_sync(pool->p.dev))
+-		__page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
++	if (pool->dma_sync && dma_dev_need_sync(pool->p.dev)) {
++		rcu_read_lock();
++		/* re-check under rcu_read_lock() to sync with page_pool_scrub() */
++		if (pool->dma_sync)
++			__page_pool_dma_sync_for_device(pool, netmem,
++							dma_sync_size);
++		rcu_read_unlock();
++	}
+ }
+ 
+-static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
++static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem, gfp_t gfp)
+ {
+ 	dma_addr_t dma;
++	int err;
++	u32 id;
+ 
+ 	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
+ 	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
+@@ -483,15 +488,30 @@ static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
+ 	if (dma_mapping_error(pool->p.dev, dma))
+ 		return false;
+ 
+-	if (page_pool_set_dma_addr_netmem(netmem, dma))
++	if (page_pool_set_dma_addr_netmem(netmem, dma)) {
++		WARN_ONCE(1, "unexpected DMA address, please report to netdev@");
+ 		goto unmap_failed;
++	}
++
++	if (in_softirq())
++		err = xa_alloc(&pool->dma_mapped, &id, netmem_to_page(netmem),
++			       PP_DMA_INDEX_LIMIT, gfp);
++	else
++		err = xa_alloc_bh(&pool->dma_mapped, &id, netmem_to_page(netmem),
++				  PP_DMA_INDEX_LIMIT, gfp);
++	if (err) {
++		WARN_ONCE(err != -ENOMEM, "couldn't track DMA mapping, please report to netdev@");
++		goto unset_failed;
++	}
+ 
++	netmem_set_dma_index(netmem, id);
+ 	page_pool_dma_sync_for_device(pool, netmem, pool->p.max_len);
+ 
+ 	return true;
+ 
++unset_failed:
++	page_pool_set_dma_addr_netmem(netmem, 0);
+ unmap_failed:
+-	WARN_ONCE(1, "unexpected DMA address, please report to netdev@");
+ 	dma_unmap_page_attrs(pool->p.dev, dma,
+ 			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
+ 			     DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
+@@ -508,7 +528,7 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
+ 	if (unlikely(!page))
+ 		return NULL;
+ 
+-	if (pool->dma_map && unlikely(!page_pool_dma_map(pool, page_to_netmem(page)))) {
++	if (pool->dma_map && unlikely(!page_pool_dma_map(pool, page_to_netmem(page), gfp))) {
+ 		put_page(page);
+ 		return NULL;
+ 	}
+@@ -554,7 +574,7 @@ static noinline netmem_ref __page_pool_alloc_pages_slow(struct page_pool *pool,
+ 	 */
+ 	for (i = 0; i < nr_pages; i++) {
+ 		netmem = pool->alloc.cache[i];
+-		if (dma_map && unlikely(!page_pool_dma_map(pool, netmem))) {
++		if (dma_map && unlikely(!page_pool_dma_map(pool, netmem, gfp))) {
+ 			put_page(netmem_to_page(netmem));
+ 			continue;
+ 		}
+@@ -656,6 +676,8 @@ void page_pool_clear_pp_info(netmem_ref netmem)
+ static __always_inline void __page_pool_release_page_dma(struct page_pool *pool,
+ 							 netmem_ref netmem)
+ {
++	struct page *old, *page = netmem_to_page(netmem);
++	unsigned long id;
+ 	dma_addr_t dma;
+ 
+ 	if (!pool->dma_map)
+@@ -664,6 +686,17 @@ static __always_inline void __page_pool_release_page_dma(struct page_pool *pool,
+ 		 */
+ 		return;
+ 
++	id = netmem_get_dma_index(netmem);
++	if (!id)
++		return;
++
++	if (in_softirq())
++		old = xa_cmpxchg(&pool->dma_mapped, id, page, NULL, 0);
++	else
++		old = xa_cmpxchg_bh(&pool->dma_mapped, id, page, NULL, 0);
++	if (old != page)
++		return;
++
+ 	dma = page_pool_get_dma_addr_netmem(netmem);
+ 
+ 	/* When page is unmapped, it cannot be returned to our pool */
+@@ -671,6 +704,7 @@ static __always_inline void __page_pool_release_page_dma(struct page_pool *pool,
+ 			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
+ 			     DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
+ 	page_pool_set_dma_addr_netmem(netmem, 0);
++	netmem_set_dma_index(netmem, 0);
+ }
+ 
+ /* Disconnects a page (from a page_pool).  API users can have a need
+@@ -707,19 +741,16 @@ void page_pool_return_page(struct page_pool *pool, netmem_ref netmem)
+ 
+ static bool page_pool_recycle_in_ring(struct page_pool *pool, netmem_ref netmem)
+ {
+-	int ret;
+-	/* BH protection not needed if current is softirq */
+-	if (in_softirq())
+-		ret = ptr_ring_produce(&pool->ring, (__force void *)netmem);
+-	else
+-		ret = ptr_ring_produce_bh(&pool->ring, (__force void *)netmem);
++	bool in_softirq, ret;
+ 
+-	if (!ret) {
++	/* BH protection not needed if current is softirq */
++	in_softirq = page_pool_producer_lock(pool);
++	ret = !__ptr_ring_produce(&pool->ring, (__force void *)netmem);
++	if (ret)
+ 		recycle_stat_inc(pool, ring);
+-		return true;
+-	}
++	page_pool_producer_unlock(pool, in_softirq);
+ 
+-	return false;
++	return ret;
+ }
+ 
+ /* Only allow direct recycling in special circumstances, into the
+@@ -1080,8 +1111,29 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
+ 
+ static void page_pool_scrub(struct page_pool *pool)
+ {
++	unsigned long id;
++	void *ptr;
++
+ 	page_pool_empty_alloc_cache_once(pool);
+-	pool->destroy_cnt++;
++	if (!pool->destroy_cnt++ && pool->dma_map) {
++		if (pool->dma_sync) {
++			/* Disable page_pool_dma_sync_for_device() */
++			pool->dma_sync = false;
++
++			/* Make sure all concurrent returns that may see the old
++			 * value of dma_sync (and thus perform a sync) have
++			 * finished before doing the unmapping below. Skip the
++			 * wait if the device doesn't actually need syncing, or
++			 * if there are no outstanding mapped pages.
++			 */
++			if (dma_dev_need_sync(pool->p.dev) &&
++			    !xa_empty(&pool->dma_mapped))
++				synchronize_net();
++		}
++
++		xa_for_each(&pool->dma_mapped, id, ptr)
++			__page_pool_release_page_dma(pool, page_to_netmem(ptr));
++	}
+ 
+ 	/* No more consumers should exist, but producers could still
+ 	 * be in-flight.
+@@ -1091,10 +1143,14 @@ static void page_pool_scrub(struct page_pool *pool)
+ 
+ static int page_pool_release(struct page_pool *pool)
+ {
++	bool in_softirq;
+ 	int inflight;
+ 
+ 	page_pool_scrub(pool);
+ 	inflight = page_pool_inflight(pool, true);
++	/* Acquire producer lock to make sure producers have exited. */
++	in_softirq = page_pool_producer_lock(pool);
++	page_pool_producer_unlock(pool, in_softirq);
+ 	if (!inflight)
+ 		__page_pool_destroy(pool);
+ 
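On the release side above, __page_pool_release_page_dma() uses xa_cmpxchg() so that whichever path swaps the tracked page back to NULL first owns the unmap; a concurrent release of the same netmem sees the slot already cleared and backs off. The compare-exchange ownership idiom in isolation:

#include <stdatomic.h>
#include <stdio.h>

static _Atomic(void *) mapped_slot;	/* stands in for one xarray slot */

/* Returns 1 if this caller won the right to unmap the page. */
static int claim_unmap(void *page)
{
	void *expected = page;

	return atomic_compare_exchange_strong(&mapped_slot, &expected, NULL);
}

int main(void)
{
	int page;

	atomic_store(&mapped_slot, &page);
	printf("first claim: %d\n", claim_unmap(&page));	/* 1: we unmap */
	printf("second claim: %d\n", claim_unmap(&page));	/* 0: already done */
	return 0;
}
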
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index c5a7f41982a575..fc6815ad78266f 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -3681,7 +3681,7 @@ struct net_device *rtnl_create_link(struct net *net, const char *ifname,
+ 	if (tb[IFLA_LINKMODE])
+ 		dev->link_mode = nla_get_u8(tb[IFLA_LINKMODE]);
+ 	if (tb[IFLA_GROUP])
+-		dev_set_group(dev, nla_get_u32(tb[IFLA_GROUP]));
++		netif_set_group(dev, nla_get_u32(tb[IFLA_GROUP]));
+ 	if (tb[IFLA_GSO_MAX_SIZE])
+ 		netif_set_gso_max_size(dev, nla_get_u32(tb[IFLA_GSO_MAX_SIZE]));
+ 	if (tb[IFLA_GSO_MAX_SEGS])
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 6cbf77bc61fce7..74a2d886a35b51 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -893,11 +893,6 @@ static void skb_clone_fraglist(struct sk_buff *skb)
+ 		skb_get(list);
+ }
+ 
+-static bool is_pp_netmem(netmem_ref netmem)
+-{
+-	return (netmem_get_pp_magic(netmem) & ~0x3UL) == PP_SIGNATURE;
+-}
+-
+ int skb_pp_cow_data(struct page_pool *pool, struct sk_buff **pskb,
+ 		    unsigned int headroom)
+ {
+@@ -995,14 +990,7 @@ bool napi_pp_put_page(netmem_ref netmem)
+ {
+ 	netmem = netmem_compound_head(netmem);
+ 
+-	/* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
+-	 * in order to preserve any existing bits, such as bit 0 for the
+-	 * head page of compound page and bit 1 for pfmemalloc page, so
+-	 * mask those bits for freeing side when doing below checking,
+-	 * and page_is_pfmemalloc() is checked in __page_pool_put_page()
+-	 * to avoid recycling the pfmemalloc page.
+-	 */
+-	if (unlikely(!is_pp_netmem(netmem)))
++	if (unlikely(!netmem_is_pp(netmem)))
+ 		return false;
+ 
+ 	page_pool_put_full_netmem(netmem_get_pp(netmem), netmem, false);
+@@ -1042,7 +1030,7 @@ static int skb_pp_frag_ref(struct sk_buff *skb)
+ 
+ 	for (i = 0; i < shinfo->nr_frags; i++) {
+ 		head_netmem = netmem_compound_head(shinfo->frags[i].netmem);
+-		if (likely(is_pp_netmem(head_netmem)))
++		if (likely(netmem_is_pp(head_netmem)))
+ 			page_pool_ref_netmem(head_netmem);
+ 		else
+ 			page_ref_inc(netmem_to_page(head_netmem));
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 0ddc4c7188332a..6d689918c2b390 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -530,16 +530,22 @@ static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
+ 					u32 off, u32 len,
+ 					struct sk_psock *psock,
+ 					struct sock *sk,
+-					struct sk_msg *msg)
++					struct sk_msg *msg,
++					bool take_ref)
+ {
+ 	int num_sge, copied;
+ 
++	/* skb_to_sgvec will fail when the total number of fragments in
++	 * frag_list and frags exceeds MAX_MSG_FRAGS, which can happen when
++	 * the caller aggregates multiple skbs.
++	 */
+ 	num_sge = skb_to_sgvec(skb, msg->sg.data, off, len);
+ 	if (num_sge < 0) {
+ 		/* skb linearize may fail with ENOMEM, but lets simply try again
+ 		 * later if this happens. Under memory pressure we don't want to
+ 		 * drop the skb. We need to linearize the skb so that the mapping
+ 		 * in skb_to_sgvec can not error.
++		 * Note that skb_linearize requires the skb not to be shared.
+ 		 */
+ 		if (skb_linearize(skb))
+ 			return -EAGAIN;
+@@ -556,7 +562,7 @@ static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
+ 	msg->sg.start = 0;
+ 	msg->sg.size = copied;
+ 	msg->sg.end = num_sge;
+-	msg->skb = skb;
++	msg->skb = take_ref ? skb_get(skb) : skb;
+ 
+ 	sk_psock_queue_msg(psock, msg);
+ 	sk_psock_data_ready(sk, psock);
+@@ -564,7 +570,7 @@ static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
+ }
+ 
+ static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb,
+-				     u32 off, u32 len);
++				     u32 off, u32 len, bool take_ref);
+ 
+ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb,
+ 				u32 off, u32 len)
+@@ -578,7 +584,7 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb,
+ 	 * correctly.
+ 	 */
+ 	if (unlikely(skb->sk == sk))
+-		return sk_psock_skb_ingress_self(psock, skb, off, len);
++		return sk_psock_skb_ingress_self(psock, skb, off, len, true);
+ 	msg = sk_psock_create_ingress_msg(sk, skb);
+ 	if (!msg)
+ 		return -EAGAIN;
+@@ -590,7 +596,7 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb,
+ 	 * into user buffers.
+ 	 */
+ 	skb_set_owner_r(skb, sk);
+-	err = sk_psock_skb_ingress_enqueue(skb, off, len, psock, sk, msg);
++	err = sk_psock_skb_ingress_enqueue(skb, off, len, psock, sk, msg, true);
+ 	if (err < 0)
+ 		kfree(msg);
+ 	return err;
+@@ -601,7 +607,7 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb,
+  * because the skb is already accounted for here.
+  */
+ static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb,
+-				     u32 off, u32 len)
++				     u32 off, u32 len, bool take_ref)
+ {
+ 	struct sk_msg *msg = alloc_sk_msg(GFP_ATOMIC);
+ 	struct sock *sk = psock->sk;
+@@ -610,7 +616,7 @@ static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb
+ 	if (unlikely(!msg))
+ 		return -EAGAIN;
+ 	skb_set_owner_r(skb, sk);
+-	err = sk_psock_skb_ingress_enqueue(skb, off, len, psock, sk, msg);
++	err = sk_psock_skb_ingress_enqueue(skb, off, len, psock, sk, msg, take_ref);
+ 	if (err < 0)
+ 		kfree(msg);
+ 	return err;
+@@ -619,18 +625,13 @@ static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb
+ static int sk_psock_handle_skb(struct sk_psock *psock, struct sk_buff *skb,
+ 			       u32 off, u32 len, bool ingress)
+ {
+-	int err = 0;
+-
+ 	if (!ingress) {
+ 		if (!sock_writeable(psock->sk))
+ 			return -EAGAIN;
+ 		return skb_send_sock(psock->sk, skb, off, len);
+ 	}
+-	skb_get(skb);
+-	err = sk_psock_skb_ingress(psock, skb, off, len);
+-	if (err < 0)
+-		kfree_skb(skb);
+-	return err;
++
++	return sk_psock_skb_ingress(psock, skb, off, len);
+ }
+ 
+ static void sk_psock_skb_state(struct sk_psock *psock,
+@@ -655,12 +656,14 @@ static void sk_psock_backlog(struct work_struct *work)
+ 	bool ingress;
+ 	int ret;
+ 
++	/* Increment the psock refcnt to synchronize with the close(fd) path
++	 * in sock_map_close(), ensuring we wait for backlog thread completion
++	 * before sk_socket is freed. If the refcnt increment fails,
++	 * sock_map_close() has completed and sk_socket may already be freed.
++	 */
++	if (!sk_psock_get(psock->sk))
++		return;
+ 	mutex_lock(&psock->work_mutex);
+-	if (unlikely(state->len)) {
+-		len = state->len;
+-		off = state->off;
+-	}
+-
+ 	while ((skb = skb_peek(&psock->ingress_skb))) {
+ 		len = skb->len;
+ 		off = 0;
+@@ -670,6 +673,13 @@ static void sk_psock_backlog(struct work_struct *work)
+ 			off = stm->offset;
+ 			len = stm->full_len;
+ 		}
++
++		/* Resume processing from previous partial state */
++		if (unlikely(state->len)) {
++			len = state->len;
++			off = state->off;
++		}
++
+ 		ingress = skb_bpf_ingress(skb);
+ 		skb_bpf_redirect_clear(skb);
+ 		do {
+@@ -697,11 +707,14 @@ static void sk_psock_backlog(struct work_struct *work)
+ 			len -= ret;
+ 		} while (len);
+ 
++		/* The entire skb sent, clear state */
++		sk_psock_skb_state(psock, state, 0, 0);
+ 		skb = skb_dequeue(&psock->ingress_skb);
+ 		kfree_skb(skb);
+ 	}
+ end:
+ 	mutex_unlock(&psock->work_mutex);
++	sk_psock_put(psock->sk, psock);
+ }
+ 
+ struct sk_psock *sk_psock_init(struct sock *sk, int node)
+@@ -1014,7 +1027,7 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
+ 				off = stm->offset;
+ 				len = stm->full_len;
+ 			}
+-			err = sk_psock_skb_ingress_self(psock, skb, off, len);
++			err = sk_psock_skb_ingress_self(psock, skb, off, len, false);
+ 		}
+ 		if (err < 0) {
+ 			spin_lock_bh(&psock->ingress_lock);
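
sk_psock_backlog() now brackets its whole run with sk_psock_get()/sk_psock_put(), so sock_map_close() cannot free the socket while the work item is still draining the backlog. The shape of that pattern in miniature, with a get that fails once the last reference is gone:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

struct psock {
	atomic_int refcnt;
};

/* Fails once the last reference is gone, mirroring sk_psock_get(). */
static bool psock_get(struct psock *p)
{
	int old = atomic_load(&p->refcnt);

	while (old > 0)
		if (atomic_compare_exchange_weak(&p->refcnt, &old, old + 1))
			return true;
	return false;
}

static void psock_put(struct psock *p)
{
	if (atomic_fetch_sub(&p->refcnt, 1) == 1)
		free(p);
}

static void backlog_work(struct psock *p)
{
	if (!psock_get(p))
		return;		/* close() already won; socket may be gone */
	/* ... drain queued skbs here ... */
	psock_put(p);
}

int main(void)
{
	struct psock *p = malloc(sizeof(*p));

	atomic_init(&p->refcnt, 1);
	backlog_work(p);	/* takes and drops a temporary reference */
	psock_put(p);		/* drop the initial reference; frees p */
	return 0;
}
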
+diff --git a/net/core/sock.c b/net/core/sock.c
+index e54449c9ab0bad..5034d0fbd4a427 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -3234,16 +3234,16 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
+ {
+ 	struct mem_cgroup *memcg = mem_cgroup_sockets_enabled ? sk->sk_memcg : NULL;
+ 	struct proto *prot = sk->sk_prot;
+-	bool charged = false;
++	bool charged = true;
+ 	long allocated;
+ 
+ 	sk_memory_allocated_add(sk, amt);
+ 	allocated = sk_memory_allocated(sk);
+ 
+ 	if (memcg) {
+-		if (!mem_cgroup_charge_skmem(memcg, amt, gfp_memcg_charge()))
++		charged = mem_cgroup_charge_skmem(memcg, amt, gfp_memcg_charge());
++		if (!charged)
+ 			goto suppress_allocation;
+-		charged = true;
+ 	}
+ 
+ 	/* Under limit. */
+@@ -3328,7 +3328,7 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
+ 
+ 	sk_memory_allocated_sub(sk, amt);
+ 
+-	if (charged)
++	if (memcg && charged)
+ 		mem_cgroup_uncharge_skmem(memcg, amt);
+ 
+ 	return 0;
+diff --git a/net/core/utils.c b/net/core/utils.c
+index 27f4cffaae05d9..b8c21a859e27b1 100644
+--- a/net/core/utils.c
++++ b/net/core/utils.c
+@@ -473,11 +473,11 @@ void inet_proto_csum_replace16(__sum16 *sum, struct sk_buff *skb,
+ EXPORT_SYMBOL(inet_proto_csum_replace16);
+ 
+ void inet_proto_csum_replace_by_diff(__sum16 *sum, struct sk_buff *skb,
+-				     __wsum diff, bool pseudohdr)
++				     __wsum diff, bool pseudohdr, bool ipv6)
+ {
+ 	if (skb->ip_summed != CHECKSUM_PARTIAL) {
+ 		csum_replace_by_diff(sum, diff);
+-		if (skb->ip_summed == CHECKSUM_COMPLETE && pseudohdr)
++		if (skb->ip_summed == CHECKSUM_COMPLETE && pseudohdr && !ipv6)
+ 			skb->csum = ~csum_sub(diff, skb->csum);
+ 	} else if (pseudohdr) {
+ 		*sum = ~csum_fold(csum_add(diff, csum_unfold(*sum)));
+diff --git a/net/core/xdp.c b/net/core/xdp.c
+index f86eedad586a77..0ba73943c6eed8 100644
+--- a/net/core/xdp.c
++++ b/net/core/xdp.c
+@@ -437,8 +437,8 @@ void __xdp_return(netmem_ref netmem, enum xdp_mem_type mem_type,
+ 		netmem = netmem_compound_head(netmem);
+ 		if (napi_direct && xdp_return_frame_no_direct())
+ 			napi_direct = false;
+-		/* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE)
+-		 * as mem->type knows this a page_pool page
++		/* No need to check netmem_is_pp() as mem->type knows this is
++		 * a page_pool page
+ 		 */
+ 		page_pool_put_full_netmem(netmem_get_pp(netmem), netmem,
+ 					  napi_direct);
+diff --git a/net/dsa/tag_brcm.c b/net/dsa/tag_brcm.c
+index 8c3c068728e51c..fe75821623a4fc 100644
+--- a/net/dsa/tag_brcm.c
++++ b/net/dsa/tag_brcm.c
+@@ -257,7 +257,7 @@ static struct sk_buff *brcm_leg_tag_rcv(struct sk_buff *skb,
+ 	int source_port;
+ 	u8 *brcm_tag;
+ 
+-	if (unlikely(!pskb_may_pull(skb, BRCM_LEG_PORT_ID)))
++	if (unlikely(!pskb_may_pull(skb, BRCM_LEG_TAG_LEN + VLAN_HLEN)))
+ 		return NULL;
+ 
+ 	brcm_tag = dsa_etype_header_pos_rx(skb);
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index 8262cc10f98db7..4b1badeebc741c 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -1001,7 +1001,8 @@ static noinline_for_stack int ethtool_set_rxnfc(struct net_device *dev,
+ 		    ethtool_get_flow_spec_ring(info.fs.ring_cookie))
+ 			return -EINVAL;
+ 
+-		if (!xa_load(&dev->ethtool->rss_ctx, info.rss_context))
++		if (info.rss_context &&
++		    !xa_load(&dev->ethtool->rss_ctx, info.rss_context))
+ 			return -EINVAL;
+ 	}
+ 
+diff --git a/net/ipv4/netfilter/nft_fib_ipv4.c b/net/ipv4/netfilter/nft_fib_ipv4.c
+index 9082ca17e845cb..7e7c49535e3f56 100644
+--- a/net/ipv4/netfilter/nft_fib_ipv4.c
++++ b/net/ipv4/netfilter/nft_fib_ipv4.c
+@@ -50,7 +50,12 @@ void nft_fib4_eval_type(const struct nft_expr *expr, struct nft_regs *regs,
+ 	else
+ 		addr = iph->saddr;
+ 
+-	*dst = inet_dev_addr_type(nft_net(pkt), dev, addr);
++	if (priv->flags & (NFTA_FIB_F_IIF | NFTA_FIB_F_OIF)) {
++		*dst = inet_dev_addr_type(nft_net(pkt), dev, addr);
++		return;
++	}
++
++	*dst = inet_addr_type_dev_table(nft_net(pkt), pkt->skb->dev, addr);
+ }
+ EXPORT_SYMBOL_GPL(nft_fib4_eval_type);
+ 
+@@ -65,8 +70,8 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 	struct flowi4 fl4 = {
+ 		.flowi4_scope = RT_SCOPE_UNIVERSE,
+ 		.flowi4_iif = LOOPBACK_IFINDEX,
++		.flowi4_proto = pkt->tprot,
+ 		.flowi4_uid = sock_net_uid(nft_net(pkt), NULL),
+-		.flowi4_l3mdev = l3mdev_master_ifindex_rcu(nft_in(pkt)),
+ 	};
+ 	const struct net_device *oif;
+ 	const struct net_device *found;
+@@ -90,6 +95,8 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 	else
+ 		oif = NULL;
+ 
++	fl4.flowi4_l3mdev = nft_fib_l3mdev_master_ifindex_rcu(pkt, oif);
++
+ 	iph = skb_header_pointer(pkt->skb, noff, sizeof(_iph), &_iph);
+ 	if (!iph) {
+ 		regs->verdict.code = NFT_BREAK;
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index 9a8142ccbabe44..9b295b2878befa 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -332,6 +332,7 @@ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+ 	bool copy_dtor;
+ 	__sum16 check;
+ 	__be16 newlen;
++	int ret = 0;
+ 
+ 	mss = skb_shinfo(gso_skb)->gso_size;
+ 	if (gso_skb->len <= sizeof(*uh) + mss)
+@@ -360,6 +361,10 @@ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+ 		if (skb_pagelen(gso_skb) - sizeof(*uh) == skb_shinfo(gso_skb)->gso_size)
+ 			return __udp_gso_segment_list(gso_skb, features, is_ipv6);
+ 
++		ret = __skb_linearize(gso_skb);
++		if (ret)
++			return ERR_PTR(ret);
++
+ 		 /* Setup csum, as fraglist skips this in udp4_gro_receive. */
+ 		gso_skb->csum_start = skb_transport_header(gso_skb) - gso_skb->head;
+ 		gso_skb->csum_offset = offsetof(struct udphdr, check);
+diff --git a/net/ipv6/ila/ila_common.c b/net/ipv6/ila/ila_common.c
+index 95e9146918cc6f..b8d43ed4689db9 100644
+--- a/net/ipv6/ila/ila_common.c
++++ b/net/ipv6/ila/ila_common.c
+@@ -86,7 +86,7 @@ static void ila_csum_adjust_transport(struct sk_buff *skb,
+ 
+ 			diff = get_csum_diff(ip6h, p);
+ 			inet_proto_csum_replace_by_diff(&th->check, skb,
+-							diff, true);
++							diff, true, true);
+ 		}
+ 		break;
+ 	case NEXTHDR_UDP:
+@@ -97,7 +97,7 @@ static void ila_csum_adjust_transport(struct sk_buff *skb,
+ 			if (uh->check || skb->ip_summed == CHECKSUM_PARTIAL) {
+ 				diff = get_csum_diff(ip6h, p);
+ 				inet_proto_csum_replace_by_diff(&uh->check, skb,
+-								diff, true);
++								diff, true, true);
+ 				if (!uh->check)
+ 					uh->check = CSUM_MANGLED_0;
+ 			}
+@@ -111,7 +111,7 @@ static void ila_csum_adjust_transport(struct sk_buff *skb,
+ 
+ 			diff = get_csum_diff(ip6h, p);
+ 			inet_proto_csum_replace_by_diff(&ih->icmp6_cksum, skb,
+-							diff, true);
++							diff, true, true);
+ 		}
+ 		break;
+ 	}
+diff --git a/net/ipv6/netfilter.c b/net/ipv6/netfilter.c
+index 581ce055bf520f..4541836ee3da20 100644
+--- a/net/ipv6/netfilter.c
++++ b/net/ipv6/netfilter.c
+@@ -164,20 +164,20 @@ int br_ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ 		struct ip6_fraglist_iter iter;
+ 		struct sk_buff *frag2;
+ 
+-		if (first_len - hlen > mtu ||
+-		    skb_headroom(skb) < (hroom + sizeof(struct frag_hdr)))
++		if (first_len - hlen > mtu)
+ 			goto blackhole;
+ 
+-		if (skb_cloned(skb))
++		if (skb_cloned(skb) ||
++		    skb_headroom(skb) < (hroom + sizeof(struct frag_hdr)))
+ 			goto slow_path;
+ 
+ 		skb_walk_frags(skb, frag2) {
+-			if (frag2->len > mtu ||
+-			    skb_headroom(frag2) < (hlen + hroom + sizeof(struct frag_hdr)))
++			if (frag2->len > mtu)
+ 				goto blackhole;
+ 
+ 			/* Partially cloned skb? */
+-			if (skb_shared(frag2))
++			if (skb_shared(frag2) ||
++			    skb_headroom(frag2) < (hlen + hroom + sizeof(struct frag_hdr)))
+ 				goto slow_path;
+ 		}
+ 
+diff --git a/net/ipv6/netfilter/nft_fib_ipv6.c b/net/ipv6/netfilter/nft_fib_ipv6.c
+index 7fd9d7b21cd42d..421036a3605b46 100644
+--- a/net/ipv6/netfilter/nft_fib_ipv6.c
++++ b/net/ipv6/netfilter/nft_fib_ipv6.c
+@@ -50,6 +50,7 @@ static int nft_fib6_flowi_init(struct flowi6 *fl6, const struct nft_fib *priv,
+ 		fl6->flowi6_mark = pkt->skb->mark;
+ 
+ 	fl6->flowlabel = (*(__be32 *)iph) & IPV6_FLOWINFO_MASK;
++	fl6->flowi6_l3mdev = nft_fib_l3mdev_master_ifindex_rcu(pkt, dev);
+ 
+ 	return lookup_flags;
+ }
+@@ -73,8 +74,6 @@ static u32 __nft_fib6_eval_type(const struct nft_fib *priv,
+ 	else if (priv->flags & NFTA_FIB_F_OIF)
+ 		dev = nft_out(pkt);
+ 
+-	fl6.flowi6_l3mdev = l3mdev_master_ifindex_rcu(dev);
+-
+ 	nft_fib6_flowi_init(&fl6, priv, pkt, dev, iph);
+ 
+ 	if (dev && nf_ipv6_chk_addr(nft_net(pkt), &fl6.daddr, dev, true))
+@@ -158,6 +157,7 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ {
+ 	const struct nft_fib *priv = nft_expr_priv(expr);
+ 	int noff = skb_network_offset(pkt->skb);
++	const struct net_device *found = NULL;
+ 	const struct net_device *oif = NULL;
+ 	u32 *dest = &regs->data[priv->dreg];
+ 	struct ipv6hdr *iph, _iph;
+@@ -165,7 +165,6 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 		.flowi6_iif = LOOPBACK_IFINDEX,
+ 		.flowi6_proto = pkt->tprot,
+ 		.flowi6_uid = sock_net_uid(nft_net(pkt), NULL),
+-		.flowi6_l3mdev = l3mdev_master_ifindex_rcu(nft_in(pkt)),
+ 	};
+ 	struct rt6_info *rt;
+ 	int lookup_flags;
+@@ -203,11 +202,15 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 	if (rt->rt6i_flags & (RTF_REJECT | RTF_ANYCAST | RTF_LOCAL))
+ 		goto put_rt_err;
+ 
+-	if (oif && oif != rt->rt6i_idev->dev &&
+-	    l3mdev_master_ifindex_rcu(rt->rt6i_idev->dev) != oif->ifindex)
+-		goto put_rt_err;
++	if (!oif) {
++		found = rt->rt6i_idev->dev;
++	} else {
++		if (oif == rt->rt6i_idev->dev ||
++		    l3mdev_master_ifindex_rcu(rt->rt6i_idev->dev) == oif->ifindex)
++			found = oif;
++	}
+ 
+-	nft_fib_store_result(dest, priv, rt->rt6i_idev->dev);
++	nft_fib_store_result(dest, priv, found);
+  put_rt_err:
+ 	ip6_rt_put(rt);
+ }
+diff --git a/net/ipv6/seg6_local.c b/net/ipv6/seg6_local.c
+index ac1dbd492c22dc..a11a02b4ba95b6 100644
+--- a/net/ipv6/seg6_local.c
++++ b/net/ipv6/seg6_local.c
+@@ -1644,10 +1644,8 @@ static const struct nla_policy seg6_local_policy[SEG6_LOCAL_MAX + 1] = {
+ 	[SEG6_LOCAL_SRH]	= { .type = NLA_BINARY },
+ 	[SEG6_LOCAL_TABLE]	= { .type = NLA_U32 },
+ 	[SEG6_LOCAL_VRFTABLE]	= { .type = NLA_U32 },
+-	[SEG6_LOCAL_NH4]	= { .type = NLA_BINARY,
+-				    .len = sizeof(struct in_addr) },
+-	[SEG6_LOCAL_NH6]	= { .type = NLA_BINARY,
+-				    .len = sizeof(struct in6_addr) },
++	[SEG6_LOCAL_NH4]	= NLA_POLICY_EXACT_LEN(sizeof(struct in_addr)),
++	[SEG6_LOCAL_NH6]	= NLA_POLICY_EXACT_LEN(sizeof(struct in6_addr)),
+ 	[SEG6_LOCAL_IIF]	= { .type = NLA_U32 },
+ 	[SEG6_LOCAL_OIF]	= { .type = NLA_U32 },
+ 	[SEG6_LOCAL_BPF]	= { .type = NLA_NESTED },
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 35eaf0812c5b24..53d5ffad87be87 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -7220,11 +7220,8 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_link_data *link,
+ 	bssid = ieee80211_get_bssid(hdr, len, sdata->vif.type);
+ 	if (ieee80211_is_s1g_beacon(mgmt->frame_control)) {
+ 		struct ieee80211_ext *ext = (void *) mgmt;
+-
+-		if (ieee80211_is_s1g_short_beacon(ext->frame_control))
+-			variable = ext->u.s1g_short_beacon.variable;
+-		else
+-			variable = ext->u.s1g_beacon.variable;
++		variable = ext->u.s1g_beacon.variable +
++			   ieee80211_s1g_optional_len(ext->frame_control);
+ 	}
+ 
+ 	baselen = (u8 *) variable - (u8 *) mgmt;
+diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
+index cb707907188585..5a56487dab69cb 100644
+--- a/net/mac80211/scan.c
++++ b/net/mac80211/scan.c
+@@ -260,6 +260,7 @@ void ieee80211_scan_rx(struct ieee80211_local *local, struct sk_buff *skb)
+ 	struct ieee80211_mgmt *mgmt = (void *)skb->data;
+ 	struct ieee80211_bss *bss;
+ 	struct ieee80211_channel *channel;
++	struct ieee80211_ext *ext;
+ 	size_t min_hdr_len = offsetof(struct ieee80211_mgmt,
+ 				      u.probe_resp.variable);
+ 
+@@ -269,12 +270,10 @@ void ieee80211_scan_rx(struct ieee80211_local *local, struct sk_buff *skb)
+ 		return;
+ 
+ 	if (ieee80211_is_s1g_beacon(mgmt->frame_control)) {
+-		if (ieee80211_is_s1g_short_beacon(mgmt->frame_control))
+-			min_hdr_len = offsetof(struct ieee80211_ext,
+-					       u.s1g_short_beacon.variable);
+-		else
+-			min_hdr_len = offsetof(struct ieee80211_ext,
+-					       u.s1g_beacon);
++		ext = (struct ieee80211_ext *)mgmt;
++		min_hdr_len =
++			offsetof(struct ieee80211_ext, u.s1g_beacon.variable) +
++			ieee80211_s1g_optional_len(ext->frame_control);
+ 	}
+ 
+ 	if (skb->len < min_hdr_len)
+diff --git a/net/ncsi/internal.h b/net/ncsi/internal.h
+index 4e0842df5234ea..2c260f33b55cc5 100644
+--- a/net/ncsi/internal.h
++++ b/net/ncsi/internal.h
+@@ -143,16 +143,15 @@ struct ncsi_channel_vlan_filter {
+ };
+ 
+ struct ncsi_channel_stats {
+-	u32 hnc_cnt_hi;		/* Counter cleared            */
+-	u32 hnc_cnt_lo;		/* Counter cleared            */
+-	u32 hnc_rx_bytes;	/* Rx bytes                   */
+-	u32 hnc_tx_bytes;	/* Tx bytes                   */
+-	u32 hnc_rx_uc_pkts;	/* Rx UC packets              */
+-	u32 hnc_rx_mc_pkts;     /* Rx MC packets              */
+-	u32 hnc_rx_bc_pkts;	/* Rx BC packets              */
+-	u32 hnc_tx_uc_pkts;	/* Tx UC packets              */
+-	u32 hnc_tx_mc_pkts;	/* Tx MC packets              */
+-	u32 hnc_tx_bc_pkts;	/* Tx BC packets              */
++	u64 hnc_cnt;		/* Counter cleared            */
++	u64 hnc_rx_bytes;	/* Rx bytes                   */
++	u64 hnc_tx_bytes;	/* Tx bytes                   */
++	u64 hnc_rx_uc_pkts;	/* Rx UC packets              */
++	u64 hnc_rx_mc_pkts;     /* Rx MC packets              */
++	u64 hnc_rx_bc_pkts;	/* Rx BC packets              */
++	u64 hnc_tx_uc_pkts;	/* Tx UC packets              */
++	u64 hnc_tx_mc_pkts;	/* Tx MC packets              */
++	u64 hnc_tx_bc_pkts;	/* Tx BC packets              */
+ 	u32 hnc_fcs_err;	/* FCS errors                 */
+ 	u32 hnc_align_err;	/* Alignment errors           */
+ 	u32 hnc_false_carrier;	/* False carrier detection    */
+@@ -181,7 +180,7 @@ struct ncsi_channel_stats {
+ 	u32 hnc_tx_1023_frames;	/* Tx 512-1023 bytes frames   */
+ 	u32 hnc_tx_1522_frames;	/* Tx 1024-1522 bytes frames  */
+ 	u32 hnc_tx_9022_frames;	/* Tx 1523-9022 bytes frames  */
+-	u32 hnc_rx_valid_bytes;	/* Rx valid bytes             */
++	u64 hnc_rx_valid_bytes;	/* Rx valid bytes             */
+ 	u32 hnc_rx_runt_pkts;	/* Rx error runt packets      */
+ 	u32 hnc_rx_jabber_pkts;	/* Rx error jabber packets    */
+ 	u32 ncsi_rx_cmds;	/* Rx NCSI commands           */
+diff --git a/net/ncsi/ncsi-pkt.h b/net/ncsi/ncsi-pkt.h
+index f2f3b5c1b94126..24edb273797240 100644
+--- a/net/ncsi/ncsi-pkt.h
++++ b/net/ncsi/ncsi-pkt.h
+@@ -252,16 +252,15 @@ struct ncsi_rsp_gp_pkt {
+ /* Get Controller Packet Statistics */
+ struct ncsi_rsp_gcps_pkt {
+ 	struct ncsi_rsp_pkt_hdr rsp;            /* Response header            */
+-	__be32                  cnt_hi;         /* Counter cleared            */
+-	__be32                  cnt_lo;         /* Counter cleared            */
+-	__be32                  rx_bytes;       /* Rx bytes                   */
+-	__be32                  tx_bytes;       /* Tx bytes                   */
+-	__be32                  rx_uc_pkts;     /* Rx UC packets              */
+-	__be32                  rx_mc_pkts;     /* Rx MC packets              */
+-	__be32                  rx_bc_pkts;     /* Rx BC packets              */
+-	__be32                  tx_uc_pkts;     /* Tx UC packets              */
+-	__be32                  tx_mc_pkts;     /* Tx MC packets              */
+-	__be32                  tx_bc_pkts;     /* Tx BC packets              */
++	__be64                  cnt;            /* Counter cleared            */
++	__be64                  rx_bytes;       /* Rx bytes                   */
++	__be64                  tx_bytes;       /* Tx bytes                   */
++	__be64                  rx_uc_pkts;     /* Rx UC packets              */
++	__be64                  rx_mc_pkts;     /* Rx MC packets              */
++	__be64                  rx_bc_pkts;     /* Rx BC packets              */
++	__be64                  tx_uc_pkts;     /* Tx UC packets              */
++	__be64                  tx_mc_pkts;     /* Tx MC packets              */
++	__be64                  tx_bc_pkts;     /* Tx BC packets              */
+ 	__be32                  fcs_err;        /* FCS errors                 */
+ 	__be32                  align_err;      /* Alignment errors           */
+ 	__be32                  false_carrier;  /* False carrier detection    */
+@@ -290,11 +289,11 @@ struct ncsi_rsp_gcps_pkt {
+ 	__be32                  tx_1023_frames; /* Tx 512-1023 bytes frames   */
+ 	__be32                  tx_1522_frames; /* Tx 1024-1522 bytes frames  */
+ 	__be32                  tx_9022_frames; /* Tx 1523-9022 bytes frames  */
+-	__be32                  rx_valid_bytes; /* Rx valid bytes             */
++	__be64                  rx_valid_bytes; /* Rx valid bytes             */
+ 	__be32                  rx_runt_pkts;   /* Rx error runt packets      */
+ 	__be32                  rx_jabber_pkts; /* Rx error jabber packets    */
+ 	__be32                  checksum;       /* Checksum                   */
+-};
++}  __packed __aligned(4);
+ 
+ /* Get NCSI Statistics */
+ struct ncsi_rsp_gns_pkt {
+diff --git a/net/ncsi/ncsi-rsp.c b/net/ncsi/ncsi-rsp.c
+index 4a8ce2949faeac..8668888c5a2f99 100644
+--- a/net/ncsi/ncsi-rsp.c
++++ b/net/ncsi/ncsi-rsp.c
+@@ -926,16 +926,15 @@ static int ncsi_rsp_handler_gcps(struct ncsi_request *nr)
+ 
+ 	/* Update HNC's statistics */
+ 	ncs = &nc->stats;
+-	ncs->hnc_cnt_hi         = ntohl(rsp->cnt_hi);
+-	ncs->hnc_cnt_lo         = ntohl(rsp->cnt_lo);
+-	ncs->hnc_rx_bytes       = ntohl(rsp->rx_bytes);
+-	ncs->hnc_tx_bytes       = ntohl(rsp->tx_bytes);
+-	ncs->hnc_rx_uc_pkts     = ntohl(rsp->rx_uc_pkts);
+-	ncs->hnc_rx_mc_pkts     = ntohl(rsp->rx_mc_pkts);
+-	ncs->hnc_rx_bc_pkts     = ntohl(rsp->rx_bc_pkts);
+-	ncs->hnc_tx_uc_pkts     = ntohl(rsp->tx_uc_pkts);
+-	ncs->hnc_tx_mc_pkts     = ntohl(rsp->tx_mc_pkts);
+-	ncs->hnc_tx_bc_pkts     = ntohl(rsp->tx_bc_pkts);
++	ncs->hnc_cnt            = be64_to_cpu(rsp->cnt);
++	ncs->hnc_rx_bytes       = be64_to_cpu(rsp->rx_bytes);
++	ncs->hnc_tx_bytes       = be64_to_cpu(rsp->tx_bytes);
++	ncs->hnc_rx_uc_pkts     = be64_to_cpu(rsp->rx_uc_pkts);
++	ncs->hnc_rx_mc_pkts     = be64_to_cpu(rsp->rx_mc_pkts);
++	ncs->hnc_rx_bc_pkts     = be64_to_cpu(rsp->rx_bc_pkts);
++	ncs->hnc_tx_uc_pkts     = be64_to_cpu(rsp->tx_uc_pkts);
++	ncs->hnc_tx_mc_pkts     = be64_to_cpu(rsp->tx_mc_pkts);
++	ncs->hnc_tx_bc_pkts     = be64_to_cpu(rsp->tx_bc_pkts);
+ 	ncs->hnc_fcs_err        = ntohl(rsp->fcs_err);
+ 	ncs->hnc_align_err      = ntohl(rsp->align_err);
+ 	ncs->hnc_false_carrier  = ntohl(rsp->false_carrier);
+@@ -964,7 +963,7 @@ static int ncsi_rsp_handler_gcps(struct ncsi_request *nr)
+ 	ncs->hnc_tx_1023_frames = ntohl(rsp->tx_1023_frames);
+ 	ncs->hnc_tx_1522_frames = ntohl(rsp->tx_1522_frames);
+ 	ncs->hnc_tx_9022_frames = ntohl(rsp->tx_9022_frames);
+-	ncs->hnc_rx_valid_bytes = ntohl(rsp->rx_valid_bytes);
++	ncs->hnc_rx_valid_bytes = be64_to_cpu(rsp->rx_valid_bytes);
+ 	ncs->hnc_rx_runt_pkts   = ntohl(rsp->rx_runt_pkts);
+ 	ncs->hnc_rx_jabber_pkts = ntohl(rsp->rx_jabber_pkts);
+ 
+diff --git a/net/netfilter/nf_nat_core.c b/net/netfilter/nf_nat_core.c
+index aad84aabd7f1d1..f391cd267922b3 100644
+--- a/net/netfilter/nf_nat_core.c
++++ b/net/netfilter/nf_nat_core.c
+@@ -248,7 +248,7 @@ static noinline bool
+ nf_nat_used_tuple_new(const struct nf_conntrack_tuple *tuple,
+ 		      const struct nf_conn *ignored_ct)
+ {
+-	static const unsigned long uses_nat = IPS_NAT_MASK | IPS_SEQ_ADJUST_BIT;
++	static const unsigned long uses_nat = IPS_NAT_MASK | IPS_SEQ_ADJUST;
+ 	const struct nf_conntrack_tuple_hash *thash;
+ 	const struct nf_conntrack_zone *zone;
+ 	struct nf_conn *ct;
+@@ -287,8 +287,14 @@ nf_nat_used_tuple_new(const struct nf_conntrack_tuple *tuple,
+ 	zone = nf_ct_zone(ignored_ct);
+ 
+ 	thash = nf_conntrack_find_get(net, zone, tuple);
+-	if (unlikely(!thash)) /* clashing entry went away */
+-		return false;
++	if (unlikely(!thash)) {
++		struct nf_conntrack_tuple reply;
++
++		nf_ct_invert_tuple(&reply, tuple);
++		thash = nf_conntrack_find_get(net, zone, &reply);
++		if (!thash) /* clashing entry went away */
++			return false;
++	}
+ 
+ 	ct = nf_ct_tuplehash_to_ctrack(thash);
+ 
+diff --git a/net/netfilter/nft_quota.c b/net/netfilter/nft_quota.c
+index 9b2d7463d3d326..df0798da2329b9 100644
+--- a/net/netfilter/nft_quota.c
++++ b/net/netfilter/nft_quota.c
+@@ -19,10 +19,16 @@ struct nft_quota {
+ };
+ 
+ static inline bool nft_overquota(struct nft_quota *priv,
+-				 const struct sk_buff *skb)
++				 const struct sk_buff *skb,
++				 bool *report)
+ {
+-	return atomic64_add_return(skb->len, priv->consumed) >=
+-	       atomic64_read(&priv->quota);
++	u64 consumed = atomic64_add_return(skb->len, priv->consumed);
++	u64 quota = atomic64_read(&priv->quota);
++
++	if (report)
++		*report = consumed >= quota;
++
++	return consumed > quota;
+ }
+ 
+ static inline bool nft_quota_invert(struct nft_quota *priv)
+@@ -34,7 +40,7 @@ static inline void nft_quota_do_eval(struct nft_quota *priv,
+ 				     struct nft_regs *regs,
+ 				     const struct nft_pktinfo *pkt)
+ {
+-	if (nft_overquota(priv, pkt->skb) ^ nft_quota_invert(priv))
++	if (nft_overquota(priv, pkt->skb, NULL) ^ nft_quota_invert(priv))
+ 		regs->verdict.code = NFT_BREAK;
+ }
+ 
+@@ -51,13 +57,13 @@ static void nft_quota_obj_eval(struct nft_object *obj,
+ 			       const struct nft_pktinfo *pkt)
+ {
+ 	struct nft_quota *priv = nft_obj_data(obj);
+-	bool overquota;
++	bool overquota, report;
+ 
+-	overquota = nft_overquota(priv, pkt->skb);
++	overquota = nft_overquota(priv, pkt->skb, &report);
+ 	if (overquota ^ nft_quota_invert(priv))
+ 		regs->verdict.code = NFT_BREAK;
+ 
+-	if (overquota &&
++	if (report &&
+ 	    !test_and_set_bit(NFT_QUOTA_DEPLETED_BIT, &priv->flags))
+ 		nft_obj_notify(nft_net(pkt), obj->key.table, obj, 0, 0,
+ 			       NFT_MSG_NEWOBJ, 0, nft_pf(pkt), 0, GFP_ATOMIC);
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 7be342b495f5f7..0529e4ef752070 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -683,6 +683,30 @@ static int pipapo_realloc_mt(struct nft_pipapo_field *f,
+ 	return 0;
+ }
+ 
++
++/**
++ * lt_calculate_size() - Get storage size for lookup table with overflow check
++ * @groups:	Amount of bit groups
++ * @bb:		Number of bits grouped together in lookup table buckets
++ * @bsize:	Size of each bucket in lookup table, in longs
++ *
++ * Return: allocation size including alignment overhead, negative on overflow
++ */
++static ssize_t lt_calculate_size(unsigned int groups, unsigned int bb,
++				 unsigned int bsize)
++{
++	ssize_t ret = groups * NFT_PIPAPO_BUCKETS(bb) * sizeof(long);
++
++	if (check_mul_overflow(ret, bsize, &ret))
++		return -1;
++	if (check_add_overflow(ret, NFT_PIPAPO_ALIGN_HEADROOM, &ret))
++		return -1;
++	if (ret > INT_MAX)
++		return -1;
++
++	return ret;
++}
++
+ /**
+  * pipapo_resize() - Resize lookup or mapping table, or both
+  * @f:		Field containing lookup and mapping tables
+@@ -701,6 +725,7 @@ static int pipapo_resize(struct nft_pipapo_field *f,
+ 	long *new_lt = NULL, *new_p, *old_lt = f->lt, *old_p;
+ 	unsigned int new_bucket_size, copy;
+ 	int group, bucket, err;
++	ssize_t lt_size;
+ 
+ 	if (rules >= NFT_PIPAPO_RULE0_MAX)
+ 		return -ENOSPC;
+@@ -719,10 +744,11 @@ static int pipapo_resize(struct nft_pipapo_field *f,
+ 	else
+ 		copy = new_bucket_size;
+ 
+-	new_lt = kvzalloc(f->groups * NFT_PIPAPO_BUCKETS(f->bb) *
+-			  new_bucket_size * sizeof(*new_lt) +
+-			  NFT_PIPAPO_ALIGN_HEADROOM,
+-			  GFP_KERNEL);
++	lt_size = lt_calculate_size(f->groups, f->bb, new_bucket_size);
++	if (lt_size < 0)
++		return -ENOMEM;
++
++	new_lt = kvzalloc(lt_size, GFP_KERNEL_ACCOUNT);
+ 	if (!new_lt)
+ 		return -ENOMEM;
+ 
+@@ -907,7 +933,7 @@ static void pipapo_lt_bits_adjust(struct nft_pipapo_field *f)
+ {
+ 	unsigned int groups, bb;
+ 	unsigned long *new_lt;
+-	size_t lt_size;
++	ssize_t lt_size;
+ 
+ 	lt_size = f->groups * NFT_PIPAPO_BUCKETS(f->bb) * f->bsize *
+ 		  sizeof(*f->lt);
+@@ -917,15 +943,17 @@ static void pipapo_lt_bits_adjust(struct nft_pipapo_field *f)
+ 		groups = f->groups * 2;
+ 		bb = NFT_PIPAPO_GROUP_BITS_LARGE_SET;
+ 
+-		lt_size = groups * NFT_PIPAPO_BUCKETS(bb) * f->bsize *
+-			  sizeof(*f->lt);
++		lt_size = lt_calculate_size(groups, bb, f->bsize);
++		if (lt_size < 0)
++			return;
+ 	} else if (f->bb == NFT_PIPAPO_GROUP_BITS_LARGE_SET &&
+ 		   lt_size < NFT_PIPAPO_LT_SIZE_LOW) {
+ 		groups = f->groups / 2;
+ 		bb = NFT_PIPAPO_GROUP_BITS_SMALL_SET;
+ 
+-		lt_size = groups * NFT_PIPAPO_BUCKETS(bb) * f->bsize *
+-			  sizeof(*f->lt);
++		lt_size = lt_calculate_size(groups, bb, f->bsize);
++		if (lt_size < 0)
++			return;
+ 
+ 		/* Don't increase group width if the resulting lookup table size
+ 		 * would exceed the upper size threshold for a "small" set.
+@@ -936,7 +964,7 @@ static void pipapo_lt_bits_adjust(struct nft_pipapo_field *f)
+ 		return;
+ 	}
+ 
+-	new_lt = kvzalloc(lt_size + NFT_PIPAPO_ALIGN_HEADROOM, GFP_KERNEL_ACCOUNT);
++	new_lt = kvzalloc(lt_size, GFP_KERNEL_ACCOUNT);
+ 	if (!new_lt)
+ 		return;
+ 
+@@ -1451,13 +1479,15 @@ static struct nft_pipapo_match *pipapo_clone(struct nft_pipapo_match *old)
+ 
+ 	for (i = 0; i < old->field_count; i++) {
+ 		unsigned long *new_lt;
++		ssize_t lt_size;
+ 
+ 		memcpy(dst, src, offsetof(struct nft_pipapo_field, lt));
+ 
+-		new_lt = kvzalloc(src->groups * NFT_PIPAPO_BUCKETS(src->bb) *
+-				  src->bsize * sizeof(*dst->lt) +
+-				  NFT_PIPAPO_ALIGN_HEADROOM,
+-				  GFP_KERNEL_ACCOUNT);
++		lt_size = lt_calculate_size(src->groups, src->bb, src->bsize);
++		if (lt_size < 0)
++			goto out_lt;
++
++		new_lt = kvzalloc(lt_size, GFP_KERNEL_ACCOUNT);
+ 		if (!new_lt)
+ 			goto out_lt;
+ 
+diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
+index c15db28c5ebc43..be7c16c79f711e 100644
+--- a/net/netfilter/nft_set_pipapo_avx2.c
++++ b/net/netfilter/nft_set_pipapo_avx2.c
+@@ -1113,6 +1113,25 @@ bool nft_pipapo_avx2_estimate(const struct nft_set_desc *desc, u32 features,
+ 	return true;
+ }
+ 
++/**
++ * pipapo_resmap_init_avx2() - Initialise result map before first use
++ * @m:		Matching data, including mapping table
++ * @res_map:	Result map
++ *
++ * Like pipapo_resmap_init() but do not set start map bits covered by the first field.
++ */
++static inline void pipapo_resmap_init_avx2(const struct nft_pipapo_match *m, unsigned long *res_map)
++{
++	const struct nft_pipapo_field *f = m->f;
++	int i;
++
++	/* Starting map doesn't need to be set to all-ones for this implementation,
++	 * but we do need to zero the remaining bits, if any.
++	 */
++	for (i = f->bsize; i < m->bsize_max; i++)
++		res_map[i] = 0ul;
++}
++
+ /**
+  * nft_pipapo_avx2_lookup() - Lookup function for AVX2 implementation
+  * @net:	Network namespace
+@@ -1171,7 +1190,7 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 	res  = scratch->map + (map_index ? m->bsize_max : 0);
+ 	fill = scratch->map + (map_index ? 0 : m->bsize_max);
+ 
+-	/* Starting map doesn't need to be set for this implementation */
++	pipapo_resmap_init_avx2(m, res);
+ 
+ 	nft_pipapo_avx2_prepare();
+ 
+diff --git a/net/netfilter/nft_tunnel.c b/net/netfilter/nft_tunnel.c
+index 0c63d1367cf7a7..a12486ae089d6f 100644
+--- a/net/netfilter/nft_tunnel.c
++++ b/net/netfilter/nft_tunnel.c
+@@ -621,10 +621,10 @@ static int nft_tunnel_opts_dump(struct sk_buff *skb,
+ 		struct geneve_opt *opt;
+ 		int offset = 0;
+ 
+-		inner = nla_nest_start_noflag(skb, NFTA_TUNNEL_KEY_OPTS_GENEVE);
+-		if (!inner)
+-			goto failure;
+ 		while (opts->len > offset) {
++			inner = nla_nest_start_noflag(skb, NFTA_TUNNEL_KEY_OPTS_GENEVE);
++			if (!inner)
++				goto failure;
+ 			opt = (struct geneve_opt *)(opts->u.data + offset);
+ 			if (nla_put_be16(skb, NFTA_TUNNEL_KEY_GENEVE_CLASS,
+ 					 opt->opt_class) ||
+@@ -634,8 +634,8 @@ static int nft_tunnel_opts_dump(struct sk_buff *skb,
+ 				    opt->length * 4, opt->opt_data))
+ 				goto inner_failure;
+ 			offset += sizeof(*opt) + opt->length * 4;
++			nla_nest_end(skb, inner);
+ 		}
+-		nla_nest_end(skb, inner);
+ 	}
+ 	nla_nest_end(skb, nest);
+ 	return 0;
+diff --git a/net/netfilter/xt_TCPOPTSTRIP.c b/net/netfilter/xt_TCPOPTSTRIP.c
+index 30e99464171b7b..93f064306901c0 100644
+--- a/net/netfilter/xt_TCPOPTSTRIP.c
++++ b/net/netfilter/xt_TCPOPTSTRIP.c
+@@ -91,7 +91,7 @@ tcpoptstrip_tg4(struct sk_buff *skb, const struct xt_action_param *par)
+ 	return tcpoptstrip_mangle_packet(skb, par, ip_hdrlen(skb));
+ }
+ 
+-#if IS_ENABLED(CONFIG_IP6_NF_MANGLE)
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
+ static unsigned int
+ tcpoptstrip_tg6(struct sk_buff *skb, const struct xt_action_param *par)
+ {
+@@ -119,7 +119,7 @@ static struct xt_target tcpoptstrip_tg_reg[] __read_mostly = {
+ 		.targetsize = sizeof(struct xt_tcpoptstrip_target_info),
+ 		.me         = THIS_MODULE,
+ 	},
+-#if IS_ENABLED(CONFIG_IP6_NF_MANGLE)
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
+ 	{
+ 		.name       = "TCPOPTSTRIP",
+ 		.family     = NFPROTO_IPV6,
+diff --git a/net/netfilter/xt_mark.c b/net/netfilter/xt_mark.c
+index 65b965ca40ea7e..59b9d04400cac2 100644
+--- a/net/netfilter/xt_mark.c
++++ b/net/netfilter/xt_mark.c
+@@ -48,7 +48,7 @@ static struct xt_target mark_tg_reg[] __read_mostly = {
+ 		.targetsize     = sizeof(struct xt_mark_tginfo2),
+ 		.me             = THIS_MODULE,
+ 	},
+-#if IS_ENABLED(CONFIG_IP_NF_ARPTABLES)
++#if IS_ENABLED(CONFIG_IP_NF_ARPTABLES) || IS_ENABLED(CONFIG_NFT_COMPAT_ARP)
+ 	{
+ 		.name           = "MARK",
+ 		.revision       = 2,
+diff --git a/net/netlabel/netlabel_kapi.c b/net/netlabel/netlabel_kapi.c
+index cd9160bbc91974..33b77084a4e5f3 100644
+--- a/net/netlabel/netlabel_kapi.c
++++ b/net/netlabel/netlabel_kapi.c
+@@ -1165,6 +1165,11 @@ int netlbl_conn_setattr(struct sock *sk,
+ 		break;
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	case AF_INET6:
++		if (sk->sk_family != AF_INET6) {
++			ret_val = -EAFNOSUPPORT;
++			goto conn_setattr_return;
++		}
++
+ 		addr6 = (struct sockaddr_in6 *)addr;
+ 		entry = netlbl_domhsh_getentry_af6(secattr->domain,
+ 						   &addr6->sin6_addr);
+diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c
+index 8a848ce72e2910..b80bd3a9077397 100644
+--- a/net/openvswitch/flow.c
++++ b/net/openvswitch/flow.c
+@@ -788,7 +788,7 @@ static int key_extract_l3l4(struct sk_buff *skb, struct sw_flow_key *key)
+ 			memset(&key->ipv4, 0, sizeof(key->ipv4));
+ 		}
+ 	} else if (eth_p_mpls(key->eth.type)) {
+-		u8 label_count = 1;
++		size_t label_count = 1;
+ 
+ 		memset(&key->mpls, 0, sizeof(key->mpls));
+ 		skb_set_inner_network_header(skb, skb->mac_len);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index d4dba06297c33e..20be2c47cf4191 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3713,15 +3713,15 @@ static int packet_dev_mc(struct net_device *dev, struct packet_mclist *i,
+ }
+ 
+ static void packet_dev_mclist_delete(struct net_device *dev,
+-				     struct packet_mclist **mlp)
++				     struct packet_mclist **mlp,
++				     struct list_head *list)
+ {
+ 	struct packet_mclist *ml;
+ 
+ 	while ((ml = *mlp) != NULL) {
+ 		if (ml->ifindex == dev->ifindex) {
+-			packet_dev_mc(dev, ml, -1);
++			list_add(&ml->remove_list, list);
+ 			*mlp = ml->next;
+-			kfree(ml);
+ 		} else
+ 			mlp = &ml->next;
+ 	}
+@@ -3769,6 +3769,7 @@ static int packet_mc_add(struct sock *sk, struct packet_mreq_max *mreq)
+ 	memcpy(i->addr, mreq->mr_address, i->alen);
+ 	memset(i->addr + i->alen, 0, sizeof(i->addr) - i->alen);
+ 	i->count = 1;
++	INIT_LIST_HEAD(&i->remove_list);
+ 	i->next = po->mclist;
+ 	po->mclist = i;
+ 	err = packet_dev_mc(dev, i, 1);
+@@ -4233,9 +4234,11 @@ static int packet_getsockopt(struct socket *sock, int level, int optname,
+ static int packet_notifier(struct notifier_block *this,
+ 			   unsigned long msg, void *ptr)
+ {
+-	struct sock *sk;
+ 	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+ 	struct net *net = dev_net(dev);
++	struct packet_mclist *ml, *tmp;
++	LIST_HEAD(mclist);
++	struct sock *sk;
+ 
+ 	rcu_read_lock();
+ 	sk_for_each_rcu(sk, &net->packet.sklist) {
+@@ -4244,7 +4247,8 @@ static int packet_notifier(struct notifier_block *this,
+ 		switch (msg) {
+ 		case NETDEV_UNREGISTER:
+ 			if (po->mclist)
+-				packet_dev_mclist_delete(dev, &po->mclist);
++				packet_dev_mclist_delete(dev, &po->mclist,
++							 &mclist);
+ 			fallthrough;
+ 
+ 		case NETDEV_DOWN:
+@@ -4277,6 +4281,13 @@ static int packet_notifier(struct notifier_block *this,
+ 		}
+ 	}
+ 	rcu_read_unlock();
++
++	/* packet_dev_mc might grab instance locks so can't run under rcu */
++	list_for_each_entry_safe(ml, tmp, &mclist, remove_list) {
++		packet_dev_mc(dev, ml, -1);
++		kfree(ml);
++	}
++
+ 	return NOTIFY_DONE;
+ }
+ 
+diff --git a/net/packet/internal.h b/net/packet/internal.h
+index d5d70712007ad3..1e743d0316fdda 100644
+--- a/net/packet/internal.h
++++ b/net/packet/internal.h
+@@ -11,6 +11,7 @@ struct packet_mclist {
+ 	unsigned short		type;
+ 	unsigned short		alen;
+ 	unsigned char		addr[MAX_ADDR_LEN];
++	struct list_head	remove_list;
+ };
+ 
+ /* kbdq - kernel block descriptor queue */
+diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
+index 2c069f0181c62b..037f764822b965 100644
+--- a/net/sched/sch_ets.c
++++ b/net/sched/sch_ets.c
+@@ -661,7 +661,7 @@ static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt,
+ 	for (i = q->nbands; i < oldbands; i++) {
+ 		if (i >= q->nstrict && q->classes[i].qdisc->q.qlen)
+ 			list_del_init(&q->classes[i].alist);
+-		qdisc_tree_flush_backlog(q->classes[i].qdisc);
++		qdisc_purge_queue(q->classes[i].qdisc);
+ 	}
+ 	WRITE_ONCE(q->nstrict, nstrict);
+ 	memcpy(q->prio2band, priomap, sizeof(priomap));
+diff --git a/net/sched/sch_prio.c b/net/sched/sch_prio.c
+index cc30f7a32f1a78..9e2b9a490db23d 100644
+--- a/net/sched/sch_prio.c
++++ b/net/sched/sch_prio.c
+@@ -211,7 +211,7 @@ static int prio_tune(struct Qdisc *sch, struct nlattr *opt,
+ 	memcpy(q->prio2band, qopt->priomap, TC_PRIO_MAX+1);
+ 
+ 	for (i = q->bands; i < oldbands; i++)
+-		qdisc_tree_flush_backlog(q->queues[i]);
++		qdisc_purge_queue(q->queues[i]);
+ 
+ 	for (i = oldbands; i < q->bands; i++) {
+ 		q->queues[i] = queues[i];
+diff --git a/net/sched/sch_red.c b/net/sched/sch_red.c
+index 1ba3e0bba54f0c..4696c893cf553c 100644
+--- a/net/sched/sch_red.c
++++ b/net/sched/sch_red.c
+@@ -285,7 +285,7 @@ static int __red_change(struct Qdisc *sch, struct nlattr **tb,
+ 	q->userbits = userbits;
+ 	q->limit = ctl->limit;
+ 	if (child) {
+-		qdisc_tree_flush_backlog(q->qdisc);
++		qdisc_purge_queue(q->qdisc);
+ 		old_child = q->qdisc;
+ 		q->qdisc = child;
+ 	}
+diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
+index b912ad99aa15d9..77fa02f2bfcd56 100644
+--- a/net/sched/sch_sfq.c
++++ b/net/sched/sch_sfq.c
+@@ -310,7 +310,10 @@ static unsigned int sfq_drop(struct Qdisc *sch, struct sk_buff **to_free)
+ 		/* It is difficult to believe, but ALL THE SLOTS HAVE LENGTH 1. */
+ 		x = q->tail->next;
+ 		slot = &q->slots[x];
+-		q->tail->next = slot->next;
++		if (slot->next == x)
++			q->tail = NULL; /* no more active slots */
++		else
++			q->tail->next = slot->next;
+ 		q->ht[slot->hash] = SFQ_EMPTY_SLOT;
+ 		goto drop;
+ 	}
+diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
+index dc26b22d53c734..4c977f049670a6 100644
+--- a/net/sched/sch_tbf.c
++++ b/net/sched/sch_tbf.c
+@@ -452,7 +452,7 @@ static int tbf_change(struct Qdisc *sch, struct nlattr *opt,
+ 
+ 	sch_tree_lock(sch);
+ 	if (child) {
+-		qdisc_tree_flush_backlog(q->qdisc);
++		qdisc_purge_queue(q->qdisc);
+ 		old = q->qdisc;
+ 		q->qdisc = child;
+ 	}
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+index aca8bdf65d729f..ca6172822b68ae 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+@@ -406,12 +406,12 @@ static void svc_rdma_xprt_done(struct rpcrdma_notification *rn)
+  */
+ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
+ {
++	unsigned int ctxts, rq_depth, maxpayload;
+ 	struct svcxprt_rdma *listen_rdma;
+ 	struct svcxprt_rdma *newxprt = NULL;
+ 	struct rdma_conn_param conn_param;
+ 	struct rpcrdma_connect_private pmsg;
+ 	struct ib_qp_init_attr qp_attr;
+-	unsigned int ctxts, rq_depth;
+ 	struct ib_device *dev;
+ 	int ret = 0;
+ 	RPC_IFDEBUG(struct sockaddr *sap);
+@@ -462,12 +462,14 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
+ 		newxprt->sc_max_bc_requests = 2;
+ 	}
+ 
+-	/* Arbitrarily estimate the number of rw_ctxs needed for
+-	 * this transport. This is enough rw_ctxs to make forward
+-	 * progress even if the client is using one rkey per page
+-	 * in each Read chunk.
++	/* Arbitrary estimate of the needed number of rdma_rw contexts.
+ 	 */
+-	ctxts = 3 * RPCSVC_MAXPAGES;
++	maxpayload = min(xprt->xpt_server->sv_max_payload,
++			 RPCSVC_MAXPAYLOAD_RDMA);
++	ctxts = newxprt->sc_max_requests * 3 *
++		rdma_rw_mr_factor(dev, newxprt->sc_port_num,
++				  maxpayload >> PAGE_SHIFT);
++
+ 	newxprt->sc_sq_depth = rq_depth + ctxts;
+ 	if (newxprt->sc_sq_depth > dev->attrs.max_qp_wr)
+ 		newxprt->sc_sq_depth = dev->attrs.max_qp_wr;
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index 8584893b478510..79f91b6ca8c847 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -818,7 +818,11 @@ static int tipc_aead_encrypt(struct tipc_aead *aead, struct sk_buff *skb,
+ 	}
+ 
+ 	/* Get net to avoid freed tipc_crypto when delete namespace */
+-	get_net(aead->crypto->net);
++	if (!maybe_get_net(aead->crypto->net)) {
++		tipc_bearer_put(b);
++		rc = -ENODEV;
++		goto exit;
++	}
+ 
+ 	/* Now, do encrypt */
+ 	rc = crypto_aead_encrypt(req);
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 914d4e1516a3cd..fc88e34b7f33fe 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -908,6 +908,13 @@ static int bpf_exec_tx_verdict(struct sk_msg *msg, struct sock *sk,
+ 					    &msg_redir, send, flags);
+ 		lock_sock(sk);
+ 		if (err < 0) {
++			/* Regardless of whether the data represented by
++			 * msg_redir is sent successfully, we have already
++			 * uncharged it via sk_msg_return_zero(). The
++			 * msg->sg.size represents the remaining unprocessed
++			 * data, which needs to be uncharged here.
++			 */
++			sk_mem_uncharge(sk, msg->sg.size);
+ 			*copied -= sk_msg_free_nocharge(sk, &msg_redir);
+ 			msg->sg.size = 0;
+ 		}
+@@ -1120,9 +1127,13 @@ static int tls_sw_sendmsg_locked(struct sock *sk, struct msghdr *msg,
+ 					num_async++;
+ 				else if (ret == -ENOMEM)
+ 					goto wait_for_memory;
+-				else if (ctx->open_rec && ret == -ENOSPC)
++				else if (ctx->open_rec && ret == -ENOSPC) {
++					if (msg_pl->cork_bytes) {
++						ret = 0;
++						goto send_end;
++					}
+ 					goto rollback_iter;
+-				else if (ret != -EAGAIN)
++				} else if (ret != -EAGAIN)
+ 					goto send_end;
+ 			}
+ 			continue;
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 7f7de6d8809655..2c9b1011cdcc80 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -441,18 +441,20 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
+ static bool virtio_transport_inc_rx_pkt(struct virtio_vsock_sock *vvs,
+ 					u32 len)
+ {
+-	if (vvs->rx_bytes + len > vvs->buf_alloc)
++	if (vvs->buf_used + len > vvs->buf_alloc)
+ 		return false;
+ 
+ 	vvs->rx_bytes += len;
++	vvs->buf_used += len;
+ 	return true;
+ }
+ 
+ static void virtio_transport_dec_rx_pkt(struct virtio_vsock_sock *vvs,
+-					u32 len)
++					u32 bytes_read, u32 bytes_dequeued)
+ {
+-	vvs->rx_bytes -= len;
+-	vvs->fwd_cnt += len;
++	vvs->rx_bytes -= bytes_read;
++	vvs->buf_used -= bytes_dequeued;
++	vvs->fwd_cnt += bytes_dequeued;
+ }
+ 
+ void virtio_transport_inc_tx_pkt(struct virtio_vsock_sock *vvs, struct sk_buff *skb)
+@@ -581,11 +583,11 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
+ 				   size_t len)
+ {
+ 	struct virtio_vsock_sock *vvs = vsk->trans;
+-	size_t bytes, total = 0;
+ 	struct sk_buff *skb;
+ 	u32 fwd_cnt_delta;
+ 	bool low_rx_bytes;
+ 	int err = -EFAULT;
++	size_t total = 0;
+ 	u32 free_space;
+ 
+ 	spin_lock_bh(&vvs->rx_lock);
+@@ -597,6 +599,8 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
+ 	}
+ 
+ 	while (total < len && !skb_queue_empty(&vvs->rx_queue)) {
++		size_t bytes, dequeued = 0;
++
+ 		skb = skb_peek(&vvs->rx_queue);
+ 
+ 		bytes = min_t(size_t, len - total,
+@@ -620,12 +624,12 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
+ 		VIRTIO_VSOCK_SKB_CB(skb)->offset += bytes;
+ 
+ 		if (skb->len == VIRTIO_VSOCK_SKB_CB(skb)->offset) {
+-			u32 pkt_len = le32_to_cpu(virtio_vsock_hdr(skb)->len);
+-
+-			virtio_transport_dec_rx_pkt(vvs, pkt_len);
++			dequeued = le32_to_cpu(virtio_vsock_hdr(skb)->len);
+ 			__skb_unlink(skb, &vvs->rx_queue);
+ 			consume_skb(skb);
+ 		}
++
++		virtio_transport_dec_rx_pkt(vvs, bytes, dequeued);
+ 	}
+ 
+ 	fwd_cnt_delta = vvs->fwd_cnt - vvs->last_fwd_cnt;
+@@ -781,7 +785,7 @@ static int virtio_transport_seqpacket_do_dequeue(struct vsock_sock *vsk,
+ 				msg->msg_flags |= MSG_EOR;
+ 		}
+ 
+-		virtio_transport_dec_rx_pkt(vvs, pkt_len);
++		virtio_transport_dec_rx_pkt(vvs, pkt_len, pkt_len);
+ 		kfree_skb(skb);
+ 	}
+ 
+@@ -1735,6 +1739,7 @@ int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t recv_acto
+ 	struct sock *sk = sk_vsock(vsk);
+ 	struct virtio_vsock_hdr *hdr;
+ 	struct sk_buff *skb;
++	u32 pkt_len;
+ 	int off = 0;
+ 	int err;
+ 
+@@ -1752,7 +1757,8 @@ int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t recv_acto
+ 	if (le32_to_cpu(hdr->flags) & VIRTIO_VSOCK_SEQ_EOM)
+ 		vvs->msg_count--;
+ 
+-	virtio_transport_dec_rx_pkt(vvs, le32_to_cpu(hdr->len));
++	pkt_len = le32_to_cpu(hdr->len);
++	virtio_transport_dec_rx_pkt(vvs, pkt_len, pkt_len);
+ 	spin_unlock_bh(&vvs->rx_lock);
+ 
+ 	virtio_transport_send_credit_update(vsk);
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index ddd3a97f6609d1..e8a4fe44ec2d80 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -3250,6 +3250,7 @@ cfg80211_inform_bss_frame_data(struct wiphy *wiphy,
+ 	const u8 *ie;
+ 	size_t ielen;
+ 	u64 tsf;
++	size_t s1g_optional_len;
+ 
+ 	if (WARN_ON(!mgmt))
+ 		return NULL;
+@@ -3264,12 +3265,11 @@ cfg80211_inform_bss_frame_data(struct wiphy *wiphy,
+ 
+ 	if (ieee80211_is_s1g_beacon(mgmt->frame_control)) {
+ 		ext = (void *) mgmt;
+-		if (ieee80211_is_s1g_short_beacon(mgmt->frame_control))
+-			min_hdr_len = offsetof(struct ieee80211_ext,
+-					       u.s1g_short_beacon.variable);
+-		else
+-			min_hdr_len = offsetof(struct ieee80211_ext,
+-					       u.s1g_beacon.variable);
++		s1g_optional_len =
++			ieee80211_s1g_optional_len(ext->frame_control);
++		min_hdr_len =
++			offsetof(struct ieee80211_ext, u.s1g_beacon.variable) +
++			s1g_optional_len;
+ 	} else {
+ 		/* same for beacons */
+ 		min_hdr_len = offsetof(struct ieee80211_mgmt,
+@@ -3285,11 +3285,7 @@ cfg80211_inform_bss_frame_data(struct wiphy *wiphy,
+ 		const struct ieee80211_s1g_bcn_compat_ie *compat;
+ 		const struct element *elem;
+ 
+-		if (ieee80211_is_s1g_short_beacon(mgmt->frame_control))
+-			ie = ext->u.s1g_short_beacon.variable;
+-		else
+-			ie = ext->u.s1g_beacon.variable;
+-
++		ie = ext->u.s1g_beacon.variable + s1g_optional_len;
+ 		elem = cfg80211_find_elem(WLAN_EID_S1G_BCN_COMPAT, ie, ielen);
+ 		if (!elem)
+ 			return NULL;
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index d62f76161d83e2..f46a9e5764f014 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -314,7 +314,6 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
+ 
+ 	xso->dev = dev;
+ 	netdev_tracker_alloc(dev, &xso->dev_tracker, GFP_ATOMIC);
+-	xso->real_dev = dev;
+ 
+ 	if (xuo->flags & XFRM_OFFLOAD_INBOUND)
+ 		xso->dir = XFRM_DEV_OFFLOAD_IN;
+@@ -326,11 +325,10 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
+ 	else
+ 		xso->type = XFRM_DEV_OFFLOAD_CRYPTO;
+ 
+-	err = dev->xfrmdev_ops->xdo_dev_state_add(x, extack);
++	err = dev->xfrmdev_ops->xdo_dev_state_add(dev, x, extack);
+ 	if (err) {
+ 		xso->dev = NULL;
+ 		xso->dir = 0;
+-		xso->real_dev = NULL;
+ 		netdev_put(dev, &xso->dev_tracker);
+ 		xso->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
+ 
+@@ -378,7 +376,6 @@ int xfrm_dev_policy_add(struct net *net, struct xfrm_policy *xp,
+ 
+ 	xdo->dev = dev;
+ 	netdev_tracker_alloc(dev, &xdo->dev_tracker, GFP_ATOMIC);
+-	xdo->real_dev = dev;
+ 	xdo->type = XFRM_DEV_OFFLOAD_PACKET;
+ 	switch (dir) {
+ 	case XFRM_POLICY_IN:
+@@ -400,7 +397,6 @@ int xfrm_dev_policy_add(struct net *net, struct xfrm_policy *xp,
+ 	err = dev->xfrmdev_ops->xdo_dev_policy_add(xp, extack);
+ 	if (err) {
+ 		xdo->dev = NULL;
+-		xdo->real_dev = NULL;
+ 		xdo->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
+ 		xdo->dir = 0;
+ 		netdev_put(dev, &xdo->dev_tracker);
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 07fe8e5daa32b0..5ece039846e201 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -767,7 +767,7 @@ void xfrm_dev_state_delete(struct xfrm_state *x)
+ 	struct net_device *dev = READ_ONCE(xso->dev);
+ 
+ 	if (dev) {
+-		dev->xfrmdev_ops->xdo_dev_state_delete(x);
++		dev->xfrmdev_ops->xdo_dev_state_delete(dev, x);
+ 		spin_lock_bh(&xfrm_state_dev_gc_lock);
+ 		hlist_add_head(&x->dev_gclist, &xfrm_state_dev_gc_list);
+ 		spin_unlock_bh(&xfrm_state_dev_gc_lock);
+@@ -789,7 +789,7 @@ void xfrm_dev_state_free(struct xfrm_state *x)
+ 		spin_unlock_bh(&xfrm_state_dev_gc_lock);
+ 
+ 		if (dev->xfrmdev_ops->xdo_dev_state_free)
+-			dev->xfrmdev_ops->xdo_dev_state_free(x);
++			dev->xfrmdev_ops->xdo_dev_state_free(dev, x);
+ 		WRITE_ONCE(xso->dev, NULL);
+ 		xso->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
+ 		netdev_put(dev, &xso->dev_tracker);
+@@ -1548,19 +1548,19 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ 		if (pol->xdo.type == XFRM_DEV_OFFLOAD_PACKET) {
+ 			struct xfrm_dev_offload *xdo = &pol->xdo;
+ 			struct xfrm_dev_offload *xso = &x->xso;
++			struct net_device *dev = xdo->dev;
+ 
+ 			xso->type = XFRM_DEV_OFFLOAD_PACKET;
+ 			xso->dir = xdo->dir;
+-			xso->dev = xdo->dev;
+-			xso->real_dev = xdo->real_dev;
++			xso->dev = dev;
+ 			xso->flags = XFRM_DEV_OFFLOAD_FLAG_ACQ;
+-			netdev_hold(xso->dev, &xso->dev_tracker, GFP_ATOMIC);
+-			error = xso->dev->xfrmdev_ops->xdo_dev_state_add(x, NULL);
++			netdev_hold(dev, &xso->dev_tracker, GFP_ATOMIC);
++			error = dev->xfrmdev_ops->xdo_dev_state_add(dev, x,
++								    NULL);
+ 			if (error) {
+ 				xso->dir = 0;
+-				netdev_put(xso->dev, &xso->dev_tracker);
++				netdev_put(dev, &xso->dev_tracker);
+ 				xso->dev = NULL;
+-				xso->real_dev = NULL;
+ 				xso->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
+ 				x->km.state = XFRM_STATE_DEAD;
+ 				to_put = x;
+diff --git a/rust/Makefile b/rust/Makefile
+index 3aca903a7d08cf..313a200112ce18 100644
+--- a/rust/Makefile
++++ b/rust/Makefile
+@@ -60,6 +60,8 @@ endif
+ core-cfgs = \
+     --cfg no_fp_fmt_parse
+ 
++core-edition := $(if $(call rustc-min-version,108700),2024,2021)
++
+ # `rustc` recognizes `--remap-path-prefix` since 1.26.0, but `rustdoc` only
+ # since Rust 1.81.0. Moreover, `rustdoc` ICEs on out-of-tree builds since Rust
+ # 1.82.0 (https://github.com/rust-lang/rust/issues/138520). Thus workaround both
+@@ -106,8 +108,8 @@ rustdoc-macros: $(src)/macros/lib.rs FORCE
+ 
+ # Starting with Rust 1.82.0, skipping `-Wrustdoc::unescaped_backticks` should
+ # not be needed -- see https://github.com/rust-lang/rust/pull/128307.
+-rustdoc-core: private skip_flags = -Wrustdoc::unescaped_backticks
+-rustdoc-core: private rustc_target_flags = $(core-cfgs)
++rustdoc-core: private skip_flags = --edition=2021 -Wrustdoc::unescaped_backticks
++rustdoc-core: private rustc_target_flags = --edition=$(core-edition) $(core-cfgs)
+ rustdoc-core: $(RUST_LIB_SRC)/core/src/lib.rs FORCE
+ 	+$(call if_changed,rustdoc)
+ 
+@@ -416,7 +418,7 @@ quiet_cmd_rustc_library = $(if $(skip_clippy),RUSTC,$(RUSTC_OR_CLIPPY_QUIET)) L
+       cmd_rustc_library = \
+ 	OBJTREE=$(abspath $(objtree)) \
+ 	$(if $(skip_clippy),$(RUSTC),$(RUSTC_OR_CLIPPY)) \
+-		$(filter-out $(skip_flags),$(rust_flags) $(rustc_target_flags)) \
++		$(filter-out $(skip_flags),$(rust_flags)) $(rustc_target_flags) \
+ 		--emit=dep-info=$(depfile) --emit=obj=$@ \
+ 		--emit=metadata=$(dir $@)$(patsubst %.o,lib%.rmeta,$(notdir $@)) \
+ 		--crate-type rlib -L$(objtree)/$(obj) \
+@@ -427,7 +429,7 @@ quiet_cmd_rustc_library = $(if $(skip_clippy),RUSTC,$(RUSTC_OR_CLIPPY_QUIET)) L
+ 
+ rust-analyzer:
+ 	$(Q)MAKEFLAGS= $(srctree)/scripts/generate_rust_analyzer.py \
+-		--cfgs='core=$(core-cfgs)' \
++		--cfgs='core=$(core-cfgs)' $(core-edition) \
+ 		$(realpath $(srctree)) $(realpath $(objtree)) \
+ 		$(rustc_sysroot) $(RUST_LIB_SRC) $(if $(KBUILD_EXTMOD),$(srcroot)) \
+ 		> rust-project.json
+@@ -483,9 +485,9 @@ $(obj)/helpers/helpers.o: $(src)/helpers/helpers.c $(recordmcount_source) FORCE
+ $(obj)/exports.o: private skip_gendwarfksyms = 1
+ 
+ $(obj)/core.o: private skip_clippy = 1
+-$(obj)/core.o: private skip_flags = -Wunreachable_pub
++$(obj)/core.o: private skip_flags = --edition=2021 -Wunreachable_pub
+ $(obj)/core.o: private rustc_objcopy = $(foreach sym,$(redirect-intrinsics),--redefine-sym $(sym)=__rust$(sym))
+-$(obj)/core.o: private rustc_target_flags = $(core-cfgs)
++$(obj)/core.o: private rustc_target_flags = --edition=$(core-edition) $(core-cfgs)
+ $(obj)/core.o: $(RUST_LIB_SRC)/core/src/lib.rs \
+     $(wildcard $(objtree)/include/config/RUSTC_VERSION_TEXT) FORCE
+ 	+$(call if_changed_rule,rustc_library)
+diff --git a/rust/kernel/alloc/kvec.rs b/rust/kernel/alloc/kvec.rs
+index 87a71fd40c3cad..f62204fe563f58 100644
+--- a/rust/kernel/alloc/kvec.rs
++++ b/rust/kernel/alloc/kvec.rs
+@@ -196,6 +196,9 @@ pub fn len(&self) -> usize {
+     #[inline]
+     pub unsafe fn set_len(&mut self, new_len: usize) {
+         debug_assert!(new_len <= self.capacity());
++
++        // INVARIANT: By the safety requirements of this method `new_len` represents the exact
++        // number of elements stored within `self`.
+         self.len = new_len;
+     }
+ 
+diff --git a/rust/kernel/fs/file.rs b/rust/kernel/fs/file.rs
+index 13a0e44cd1aa81..138693bdeb3fdf 100644
+--- a/rust/kernel/fs/file.rs
++++ b/rust/kernel/fs/file.rs
+@@ -219,6 +219,7 @@ unsafe fn dec_ref(obj: ptr::NonNull<File>) {
+ ///   must be on the same thread as this file.
+ ///
+ /// [`assume_no_fdget_pos`]: LocalFile::assume_no_fdget_pos
++#[repr(transparent)]
+ pub struct LocalFile {
+     inner: Opaque<bindings::file>,
+ }
+diff --git a/rust/kernel/list/arc.rs b/rust/kernel/list/arc.rs
+index 13c50df37b89d1..a88a2dc65aa7cf 100644
+--- a/rust/kernel/list/arc.rs
++++ b/rust/kernel/list/arc.rs
+@@ -96,7 +96,7 @@ unsafe fn on_drop_list_arc(&self) {}
+     } $($rest:tt)*) => {
+         impl$(<$($generics)*>)? $crate::list::ListArcSafe<$num> for $t {
+             unsafe fn on_create_list_arc_from_unique(self: ::core::pin::Pin<&mut Self>) {
+-                $crate::assert_pinned!($t, $field, $fty, inline);
++                ::pin_init::assert_pinned!($t, $field, $fty, inline);
+ 
+                 // SAFETY: This field is structurally pinned as per the above assertion.
+                 let field = unsafe {
+diff --git a/rust/kernel/miscdevice.rs b/rust/kernel/miscdevice.rs
+index fa9ecc42602a47..15d10e5c1db7da 100644
+--- a/rust/kernel/miscdevice.rs
++++ b/rust/kernel/miscdevice.rs
+@@ -121,7 +121,7 @@ fn release(device: Self::Ptr, _file: &File) {
+ 
+     /// Handler for ioctls.
+     ///
+-    /// The `cmd` argument is usually manipulated using the utilties in [`kernel::ioctl`].
++    /// The `cmd` argument is usually manipulated using the utilities in [`kernel::ioctl`].
+     ///
+     /// [`kernel::ioctl`]: mod@crate::ioctl
+     fn ioctl(
+diff --git a/rust/kernel/pci.rs b/rust/kernel/pci.rs
+index c97d6d470b2822..bbc453c6d9ea88 100644
+--- a/rust/kernel/pci.rs
++++ b/rust/kernel/pci.rs
+@@ -118,7 +118,9 @@ macro_rules! module_pci_driver {
+ };
+ }
+ 
+-/// Abstraction for bindings::pci_device_id.
++/// Abstraction for the PCI device ID structure ([`struct pci_device_id`]).
++///
++/// [`struct pci_device_id`]: https://docs.kernel.org/PCI/pci.html#c.pci_device_id
+ #[repr(transparent)]
+ #[derive(Clone, Copy)]
+ pub struct DeviceId(bindings::pci_device_id);
+@@ -173,7 +175,7 @@ fn index(&self) -> usize {
+     }
+ }
+ 
+-/// IdTable type for PCI
++/// `IdTable` type for PCI.
+ pub type IdTable<T> = &'static dyn kernel::device_id::IdTable<DeviceId, T>;
+ 
+ /// Create a PCI `IdTable` with its alias for modpost.
+@@ -224,10 +226,11 @@ macro_rules! pci_device_table {
+ /// `Adapter` documentation for an example.
+ pub trait Driver: Send {
+     /// The type holding information about each device id supported by the driver.
+-    ///
+-    /// TODO: Use associated_type_defaults once stabilized:
+-    ///
+-    /// type IdInfo: 'static = ();
++    // TODO: Use `associated_type_defaults` once stabilized:
++    //
++    // ```
++    // type IdInfo: 'static = ();
++    // ```
+     type IdInfo: 'static;
+ 
+     /// The table of device ids supported by the driver.
+diff --git a/scripts/gcc-plugins/gcc-common.h b/scripts/gcc-plugins/gcc-common.h
+index 3222c1070444fa..ef12c8f929eda3 100644
+--- a/scripts/gcc-plugins/gcc-common.h
++++ b/scripts/gcc-plugins/gcc-common.h
+@@ -123,6 +123,38 @@ static inline tree build_const_char_string(int len, const char *str)
+ 	return cstr;
+ }
+ 
++static inline void __add_type_attr(tree type, const char *attr, tree args)
++{
++	tree oldattr;
++
++	if (type == NULL_TREE)
++		return;
++	oldattr = lookup_attribute(attr, TYPE_ATTRIBUTES(type));
++	if (oldattr != NULL_TREE) {
++		gcc_assert(TREE_VALUE(oldattr) == args || TREE_VALUE(TREE_VALUE(oldattr)) == TREE_VALUE(args));
++		return;
++	}
++
++	TYPE_ATTRIBUTES(type) = copy_list(TYPE_ATTRIBUTES(type));
++	TYPE_ATTRIBUTES(type) = tree_cons(get_identifier(attr), args, TYPE_ATTRIBUTES(type));
++}
++
++static inline void add_type_attr(tree type, const char *attr, tree args)
++{
++	tree main_variant = TYPE_MAIN_VARIANT(type);
++
++	__add_type_attr(TYPE_CANONICAL(type), attr, args);
++	__add_type_attr(TYPE_CANONICAL(main_variant), attr, args);
++	__add_type_attr(main_variant, attr, args);
++
++	for (type = TYPE_NEXT_VARIANT(main_variant); type; type = TYPE_NEXT_VARIANT(type)) {
++		if (!lookup_attribute(attr, TYPE_ATTRIBUTES(type)))
++			TYPE_ATTRIBUTES(type) = TYPE_ATTRIBUTES(main_variant);
++
++		__add_type_attr(TYPE_CANONICAL(type), attr, args);
++	}
++}
++
+ #define PASS_INFO(NAME, REF, ID, POS)		\
+ struct register_pass_info NAME##_pass_info = {	\
+ 	.pass = make_##NAME##_pass(),		\
+diff --git a/scripts/gcc-plugins/randomize_layout_plugin.c b/scripts/gcc-plugins/randomize_layout_plugin.c
+index 5694df3da2e95b..ff65a4f87f240a 100644
+--- a/scripts/gcc-plugins/randomize_layout_plugin.c
++++ b/scripts/gcc-plugins/randomize_layout_plugin.c
+@@ -73,6 +73,9 @@ static tree handle_randomize_layout_attr(tree *node, tree name, tree args, int f
+ 
+ 	if (TYPE_P(*node)) {
+ 		type = *node;
++	} else if (TREE_CODE(*node) == FIELD_DECL) {
++		*no_add_attrs = false;
++		return NULL_TREE;
+ 	} else {
+ 		gcc_assert(TREE_CODE(*node) == TYPE_DECL);
+ 		type = TREE_TYPE(*node);
+@@ -344,35 +347,18 @@ static int relayout_struct(tree type)
+ 
+ 	shuffle(type, (tree *)newtree, shuffle_length);
+ 
+-	/*
+-	 * set up a bogus anonymous struct field designed to error out on unnamed struct initializers
+-	 * as gcc provides no other way to detect such code
+-	 */
+-	list = make_node(FIELD_DECL);
+-	TREE_CHAIN(list) = newtree[0];
+-	TREE_TYPE(list) = void_type_node;
+-	DECL_SIZE(list) = bitsize_zero_node;
+-	DECL_NONADDRESSABLE_P(list) = 1;
+-	DECL_FIELD_BIT_OFFSET(list) = bitsize_zero_node;
+-	DECL_SIZE_UNIT(list) = size_zero_node;
+-	DECL_FIELD_OFFSET(list) = size_zero_node;
+-	DECL_CONTEXT(list) = type;
+-	// to satisfy the constify plugin
+-	TREE_READONLY(list) = 1;
+-
+ 	for (i = 0; i < num_fields - 1; i++)
+ 		TREE_CHAIN(newtree[i]) = newtree[i+1];
+ 	TREE_CHAIN(newtree[num_fields - 1]) = NULL_TREE;
+ 
++	add_type_attr(type, "randomize_performed", NULL_TREE);
++	add_type_attr(type, "designated_init", NULL_TREE);
++	if (has_flexarray)
++		add_type_attr(type, "has_flexarray", NULL_TREE);
++
+ 	main_variant = TYPE_MAIN_VARIANT(type);
+-	for (variant = main_variant; variant; variant = TYPE_NEXT_VARIANT(variant)) {
+-		TYPE_FIELDS(variant) = list;
+-		TYPE_ATTRIBUTES(variant) = copy_list(TYPE_ATTRIBUTES(variant));
+-		TYPE_ATTRIBUTES(variant) = tree_cons(get_identifier("randomize_performed"), NULL_TREE, TYPE_ATTRIBUTES(variant));
+-		TYPE_ATTRIBUTES(variant) = tree_cons(get_identifier("designated_init"), NULL_TREE, TYPE_ATTRIBUTES(variant));
+-		if (has_flexarray)
+-			TYPE_ATTRIBUTES(type) = tree_cons(get_identifier("has_flexarray"), NULL_TREE, TYPE_ATTRIBUTES(type));
+-	}
++	for (variant = main_variant; variant; variant = TYPE_NEXT_VARIANT(variant))
++		TYPE_FIELDS(variant) = newtree[0];
+ 
+ 	/*
+ 	 * force a re-layout of the main variant
+@@ -440,10 +426,8 @@ static void randomize_type(tree type)
+ 	if (lookup_attribute("randomize_layout", TYPE_ATTRIBUTES(TYPE_MAIN_VARIANT(type))) || is_pure_ops_struct(type))
+ 		relayout_struct(type);
+ 
+-	for (variant = TYPE_MAIN_VARIANT(type); variant; variant = TYPE_NEXT_VARIANT(variant)) {
+-		TYPE_ATTRIBUTES(type) = copy_list(TYPE_ATTRIBUTES(type));
+-		TYPE_ATTRIBUTES(type) = tree_cons(get_identifier("randomize_considered"), NULL_TREE, TYPE_ATTRIBUTES(type));
+-	}
++	add_type_attr(type, "randomize_considered", NULL_TREE);
++
+ #ifdef __DEBUG_PLUGIN
+ 	fprintf(stderr, "Marking randomize_considered on struct %s\n", ORIG_TYPE_NAME(type));
+ #ifdef __DEBUG_VERBOSE
+diff --git a/scripts/generate_rust_analyzer.py b/scripts/generate_rust_analyzer.py
+index fe663dd0c43b04..7c3ea2b55041f8 100755
+--- a/scripts/generate_rust_analyzer.py
++++ b/scripts/generate_rust_analyzer.py
+@@ -19,7 +19,7 @@ def args_crates_cfgs(cfgs):
+ 
+     return crates_cfgs
+ 
+-def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
++def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs, core_edition):
+     # Generate the configuration list.
+     cfg = []
+     with open(objtree / "include" / "generated" / "rustc_cfg") as fd:
+@@ -35,7 +35,7 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+     crates_indexes = {}
+     crates_cfgs = args_crates_cfgs(cfgs)
+ 
+-    def append_crate(display_name, root_module, deps, cfg=[], is_workspace_member=True, is_proc_macro=False):
++    def append_crate(display_name, root_module, deps, cfg=[], is_workspace_member=True, is_proc_macro=False, edition="2021"):
+         crate = {
+             "display_name": display_name,
+             "root_module": str(root_module),
+@@ -43,7 +43,7 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+             "is_proc_macro": is_proc_macro,
+             "deps": [{"crate": crates_indexes[dep], "name": dep} for dep in deps],
+             "cfg": cfg,
+-            "edition": "2021",
++            "edition": edition,
+             "env": {
+                 "RUST_MODFILE": "This is only for rust-analyzer"
+             }
+@@ -61,6 +61,7 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+         display_name,
+         deps,
+         cfg=[],
++        edition="2021",
+     ):
+         append_crate(
+             display_name,
+@@ -68,12 +69,13 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+             deps,
+             cfg,
+             is_workspace_member=False,
++            edition=edition,
+         )
+ 
+     # NB: sysroot crates reexport items from one another so setting up our transitive dependencies
+     # here is important for ensuring that rust-analyzer can resolve symbols. The sources of truth
+     # for this dependency graph are `(sysroot_src / crate / "Cargo.toml" for crate in crates)`.
+-    append_sysroot_crate("core", [], cfg=crates_cfgs.get("core", []))
++    append_sysroot_crate("core", [], cfg=crates_cfgs.get("core", []), edition=core_edition)
+     append_sysroot_crate("alloc", ["core"])
+     append_sysroot_crate("std", ["alloc", "core"])
+     append_sysroot_crate("proc_macro", ["core", "std"])
+@@ -177,6 +179,7 @@ def main():
+     parser = argparse.ArgumentParser()
+     parser.add_argument('--verbose', '-v', action='store_true')
+     parser.add_argument('--cfgs', action='append', default=[])
++    parser.add_argument("core_edition")
+     parser.add_argument("srctree", type=pathlib.Path)
+     parser.add_argument("objtree", type=pathlib.Path)
+     parser.add_argument("sysroot", type=pathlib.Path)
+@@ -193,7 +196,7 @@ def main():
+     assert args.sysroot in args.sysroot_src.parents
+ 
+     rust_project = {
+-        "crates": generate_crates(args.srctree, args.objtree, args.sysroot_src, args.exttree, args.cfgs),
++        "crates": generate_crates(args.srctree, args.objtree, args.sysroot_src, args.exttree, args.cfgs, args.core_edition),
+         "sysroot": str(args.sysroot),
+     }
+ 
+diff --git a/scripts/genksyms/genksyms.c b/scripts/genksyms/genksyms.c
+index 8b0d7ac73dbb09..83e48670c2fcfb 100644
+--- a/scripts/genksyms/genksyms.c
++++ b/scripts/genksyms/genksyms.c
+@@ -181,13 +181,9 @@ static int is_unknown_symbol(struct symbol *sym)
+ 			strcmp(defn->string, "{") == 0);
+ }
+ 
+-static struct symbol *__add_symbol(const char *name, enum symbol_type type,
+-			    struct string_list *defn, int is_extern,
+-			    int is_reference)
++static struct string_list *process_enum(const char *name, enum symbol_type type,
++					struct string_list *defn)
+ {
+-	unsigned long h;
+-	struct symbol *sym;
+-	enum symbol_status status = STATUS_UNCHANGED;
+ 	/* The parser adds symbols in the order their declaration completes,
+ 	 * so it is safe to store the value of the previous enum constant in
+ 	 * a static variable.
+@@ -216,7 +212,7 @@ static struct symbol *__add_symbol(const char *name, enum symbol_type type,
+ 				defn = mk_node(buf);
+ 			}
+ 		}
+-	} else if (type == SYM_ENUM) {
++	} else {
+ 		free_list(last_enum_expr, NULL);
+ 		last_enum_expr = NULL;
+ 		enum_counter = 0;
+@@ -225,6 +221,23 @@ static struct symbol *__add_symbol(const char *name, enum symbol_type type,
+ 			return NULL;
+ 	}
+ 
++	return defn;
++}
++
++static struct symbol *__add_symbol(const char *name, enum symbol_type type,
++			    struct string_list *defn, int is_extern,
++			    int is_reference)
++{
++	unsigned long h;
++	struct symbol *sym;
++	enum symbol_status status = STATUS_UNCHANGED;
++
++	if ((type == SYM_ENUM_CONST || type == SYM_ENUM) && !is_reference) {
++		defn = process_enum(name, type, defn);
++		if (defn == NULL)
++			return NULL;
++	}
++
+ 	h = crc32(name);
+ 	hash_for_each_possible(symbol_hashtable, sym, hnode, h) {
+ 		if (map_to_ns(sym->type) != map_to_ns(type) ||
+diff --git a/sound/core/seq_device.c b/sound/core/seq_device.c
+index 4492be5d2317c7..bac9f860373425 100644
+--- a/sound/core/seq_device.c
++++ b/sound/core/seq_device.c
+@@ -43,7 +43,7 @@ MODULE_LICENSE("GPL");
+ static int snd_seq_bus_match(struct device *dev, const struct device_driver *drv)
+ {
+ 	struct snd_seq_device *sdev = to_seq_dev(dev);
+-	struct snd_seq_driver *sdrv = to_seq_drv(drv);
++	const struct snd_seq_driver *sdrv = to_seq_drv(drv);
+ 
+ 	return strcmp(sdrv->id, sdev->id) == 0 &&
+ 		sdrv->argsize == sdev->argsize;
+diff --git a/sound/hda/ext/hdac_ext_controller.c b/sound/hda/ext/hdac_ext_controller.c
+index 6199bb60ccf00f..c84754434d1627 100644
+--- a/sound/hda/ext/hdac_ext_controller.c
++++ b/sound/hda/ext/hdac_ext_controller.c
+@@ -9,6 +9,7 @@
+  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+  */
+ 
++#include <linux/bitfield.h>
+ #include <linux/delay.h>
+ #include <linux/slab.h>
+ #include <sound/hda_register.h>
+@@ -81,6 +82,7 @@ int snd_hdac_ext_bus_get_ml_capabilities(struct hdac_bus *bus)
+ 	int idx;
+ 	u32 link_count;
+ 	struct hdac_ext_link *hlink;
++	u32 leptr;
+ 
+ 	link_count = readl(bus->mlcap + AZX_REG_ML_MLCD) + 1;
+ 
+@@ -96,6 +98,12 @@ int snd_hdac_ext_bus_get_ml_capabilities(struct hdac_bus *bus)
+ 					(AZX_ML_INTERVAL * idx);
+ 		hlink->lcaps  = readl(hlink->ml_addr + AZX_REG_ML_LCAP);
+ 		hlink->lsdiid = readw(hlink->ml_addr + AZX_REG_ML_LSDIID);
++		hlink->slcount = FIELD_GET(AZX_ML_HDA_LCAP_SLCOUNT, hlink->lcaps) + 1;
++
++		if (hdac_ext_link_alt(hlink)) {
++			leptr = readl(hlink->ml_addr + AZX_REG_ML_LEPTR);
++			hlink->id = FIELD_GET(AZX_REG_ML_LEPTR_ID, leptr);
++		}
+ 
+ 		/* since link in On, update the ref */
+ 		hlink->ref_count = 1;
+@@ -125,6 +133,17 @@ void snd_hdac_ext_link_free_all(struct hdac_bus *bus)
+ }
+ EXPORT_SYMBOL_GPL(snd_hdac_ext_link_free_all);
+ 
++struct hdac_ext_link *snd_hdac_ext_bus_get_hlink_by_id(struct hdac_bus *bus, u32 id)
++{
++	struct hdac_ext_link *hlink;
++
++	list_for_each_entry(hlink, &bus->hlink_list, list)
++		if (hdac_ext_link_alt(hlink) && hlink->id == id)
++			return hlink;
++	return NULL;
++}
++EXPORT_SYMBOL_GPL(snd_hdac_ext_bus_get_hlink_by_id);
++
+ /**
+  * snd_hdac_ext_bus_get_hlink_by_addr - get hlink at specified address
+  * @bus: hlink's parent bus device
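Two pieces of per-link state are now captured while the multi-link
capabilities are scanned: slcount, decoded from the LCAP register with
FIELD_GET(), and, for alternate (non-HDA) links, the ID read from LEPTR.
The new snd_hdac_ext_bus_get_hlink_by_id() walks bus->hlink_list to find a
link by that ID rather than by codec address; the AVS hunks further down
use it for the SSP and DMIC links.
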
+diff --git a/sound/hda/hda_bus_type.c b/sound/hda/hda_bus_type.c
+index 7545ace7b0ee4b..eb72a7af2e56e8 100644
+--- a/sound/hda/hda_bus_type.c
++++ b/sound/hda/hda_bus_type.c
+@@ -21,7 +21,7 @@ MODULE_LICENSE("GPL");
+  * driver id_table and returns the matching device id entry.
+  */
+ const struct hda_device_id *
+-hdac_get_device_id(struct hdac_device *hdev, struct hdac_driver *drv)
++hdac_get_device_id(struct hdac_device *hdev, const struct hdac_driver *drv)
+ {
+ 	if (drv->id_table) {
+ 		const struct hda_device_id *id  = drv->id_table;
+@@ -38,7 +38,7 @@ hdac_get_device_id(struct hdac_device *hdev, struct hdac_driver *drv)
+ }
+ EXPORT_SYMBOL_GPL(hdac_get_device_id);
+ 
+-static int hdac_codec_match(struct hdac_device *dev, struct hdac_driver *drv)
++static int hdac_codec_match(struct hdac_device *dev, const struct hdac_driver *drv)
+ {
+ 	if (hdac_get_device_id(dev, drv))
+ 		return 1;
+@@ -49,7 +49,7 @@ static int hdac_codec_match(struct hdac_device *dev, struct hdac_driver *drv)
+ static int hda_bus_match(struct device *dev, const struct device_driver *drv)
+ {
+ 	struct hdac_device *hdev = dev_to_hdac_dev(dev);
+-	struct hdac_driver *hdrv = drv_to_hdac_driver(drv);
++	const struct hdac_driver *hdrv = drv_to_hdac_driver(drv);
+ 
+ 	if (hdev->type != hdrv->type)
+ 		return 0;
+diff --git a/sound/pci/hda/hda_bind.c b/sound/pci/hda/hda_bind.c
+index 9521e5e0e6e6f8..1fef350d821ef0 100644
+--- a/sound/pci/hda/hda_bind.c
++++ b/sound/pci/hda/hda_bind.c
+@@ -18,10 +18,10 @@
+ /*
+  * find a matching codec id
+  */
+-static int hda_codec_match(struct hdac_device *dev, struct hdac_driver *drv)
++static int hda_codec_match(struct hdac_device *dev, const struct hdac_driver *drv)
+ {
+ 	struct hda_codec *codec = container_of(dev, struct hda_codec, core);
+-	struct hda_codec_driver *driver =
++	const struct hda_codec_driver *driver =
+ 		container_of(drv, struct hda_codec_driver, core);
+ 	const struct hda_device_id *list;
+ 	/* check probe_id instead of vendor_id if set */
+diff --git a/sound/soc/apple/mca.c b/sound/soc/apple/mca.c
+index b4f4696809dd23..5dd24ab90d0f05 100644
+--- a/sound/soc/apple/mca.c
++++ b/sound/soc/apple/mca.c
+@@ -464,6 +464,28 @@ static int mca_configure_serdes(struct mca_cluster *cl, int serdes_unit,
+ 	return -EINVAL;
+ }
+ 
++static int mca_fe_startup(struct snd_pcm_substream *substream,
++			  struct snd_soc_dai *dai)
++{
++	struct mca_cluster *cl = mca_dai_to_cluster(dai);
++	unsigned int mask, nchannels;
++
++	if (cl->tdm_slots) {
++		if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
++			mask = cl->tdm_tx_mask;
++		else
++			mask = cl->tdm_rx_mask;
++
++		nchannels = hweight32(mask);
++	} else {
++		nchannels = 2;
++	}
++
++	return snd_pcm_hw_constraint_minmax(substream->runtime,
++					    SNDRV_PCM_HW_PARAM_CHANNELS,
++					    1, nchannels);
++}
++
+ static int mca_fe_set_tdm_slot(struct snd_soc_dai *dai, unsigned int tx_mask,
+ 			       unsigned int rx_mask, int slots, int slot_width)
+ {
+@@ -680,6 +702,7 @@ static int mca_fe_hw_params(struct snd_pcm_substream *substream,
+ }
+ 
+ static const struct snd_soc_dai_ops mca_fe_ops = {
++	.startup = mca_fe_startup,
+ 	.set_fmt = mca_fe_set_fmt,
+ 	.set_bclk_ratio = mca_set_bclk_ratio,
+ 	.set_tdm_slot = mca_fe_set_tdm_slot,
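The new mca_fe_startup() caps the channel count at what the configured TDM
mask allows, falling back to stereo when no TDM slots are set, and feeds
the result into snd_pcm_hw_constraint_minmax(). A rough user-space sketch
of the derivation (names hypothetical, with __builtin_popcount standing in
for the kernel's hweight32()):

#include <stdio.h>

/* Sketch only: derive the channel ceiling from a TDM slot mask,
 * defaulting to stereo when no TDM slots are configured.
 * __builtin_popcount plays the role of the kernel's hweight32(). */
static unsigned int max_channels(unsigned int tdm_slots, unsigned int mask)
{
	return tdm_slots ? (unsigned int)__builtin_popcount(mask) : 2;
}

int main(void)
{
	printf("%u\n", max_channels(8, 0x0f)); /* 4 active slots -> 4 channels */
	printf("%u\n", max_channels(0, 0));    /* no TDM slots -> 2 channels   */
	return 0;
}
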
+diff --git a/sound/soc/codecs/hda.c b/sound/soc/codecs/hda.c
+index ddc00927313cfe..dc7794c9ac44ce 100644
+--- a/sound/soc/codecs/hda.c
++++ b/sound/soc/codecs/hda.c
+@@ -152,7 +152,7 @@ int hda_codec_probe_complete(struct hda_codec *codec)
+ 	ret = snd_hda_codec_build_controls(codec);
+ 	if (ret < 0) {
+ 		dev_err(&hdev->dev, "unable to create controls %d\n", ret);
+-		goto out;
++		return ret;
+ 	}
+ 
+ 	/* Bus suspended codecs as it does not manage their pm */
+@@ -160,7 +160,7 @@ int hda_codec_probe_complete(struct hda_codec *codec)
+ 	/* rpm was forbidden in snd_hda_codec_device_new() */
+ 	snd_hda_codec_set_power_save(codec, 2000);
+ 	snd_hda_codec_register(codec);
+-out:
++
+ 	/* Complement pm_runtime_get_sync(bus) in probe */
+ 	pm_runtime_mark_last_busy(bus->dev);
+ 	pm_runtime_put_autosuspend(bus->dev);
+diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c
+index 08aa7ee3425689..fbfe4d032df7b2 100644
+--- a/sound/soc/codecs/tas2764.c
++++ b/sound/soc/codecs/tas2764.c
+@@ -546,6 +546,8 @@ static uint8_t sn012776_bop_presets[] = {
+ 	0x06, 0x3e, 0x37, 0x30, 0xff, 0xe6
+ };
+ 
++static const struct regmap_config tas2764_i2c_regmap;
++
+ static int tas2764_codec_probe(struct snd_soc_component *component)
+ {
+ 	struct tas2764_priv *tas2764 = snd_soc_component_get_drvdata(component);
+@@ -559,9 +561,10 @@ static int tas2764_codec_probe(struct snd_soc_component *component)
+ 	}
+ 
+ 	tas2764_reset(tas2764);
++	regmap_reinit_cache(tas2764->regmap, &tas2764_i2c_regmap);
+ 
+ 	if (tas2764->irq) {
+-		ret = snd_soc_component_write(tas2764->component, TAS2764_INT_MASK0, 0xff);
++		ret = snd_soc_component_write(tas2764->component, TAS2764_INT_MASK0, 0x00);
+ 		if (ret < 0)
+ 			return ret;
+ 
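Two related tas2764 fixes here: tas2764_reset() reverts the chip to its
hardware defaults, so the regmap cache is re-initialised right after it
(the forward declaration of tas2764_i2c_regmap exists only to make that
call possible); and INT_MASK0 is now written with 0x00 instead of 0xff
when an interrupt line is wired up, since 0xff masked every interrupt
source, which presumably defeated the IRQ handling this branch sets up.
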
+diff --git a/sound/soc/intel/avs/avs.h b/sound/soc/intel/avs/avs.h
+index 585543f872fccc..ec5502f9d5cb1d 100644
+--- a/sound/soc/intel/avs/avs.h
++++ b/sound/soc/intel/avs/avs.h
+@@ -72,6 +72,8 @@ extern const struct avs_dsp_ops avs_tgl_dsp_ops;
+ 
+ #define AVS_PLATATTR_CLDMA		BIT_ULL(0)
+ #define AVS_PLATATTR_IMR		BIT_ULL(1)
++#define AVS_PLATATTR_ACE		BIT_ULL(2)
++#define AVS_PLATATTR_ALTHDA		BIT_ULL(3)
+ 
+ #define avs_platattr_test(adev, attr) \
+ 	((adev)->spec->attributes & AVS_PLATATTR_##attr)
+@@ -79,7 +81,6 @@ extern const struct avs_dsp_ops avs_tgl_dsp_ops;
+ struct avs_sram_spec {
+ 	const u32 base_offset;
+ 	const u32 window_size;
+-	const u32 rom_status_offset;
+ };
+ 
+ struct avs_hipc_spec {
+@@ -91,6 +92,7 @@ struct avs_hipc_spec {
+ 	const u32 rsp_offset;
+ 	const u32 rsp_busy_mask;
+ 	const u32 ctl_offset;
++	const u32 sts_offset;
+ };
+ 
+ /* Platform specific descriptor */
+diff --git a/sound/soc/intel/avs/core.c b/sound/soc/intel/avs/core.c
+index 8fbf33e30dfc3e..cbbc656fcc3f86 100644
+--- a/sound/soc/intel/avs/core.c
++++ b/sound/soc/intel/avs/core.c
+@@ -54,14 +54,17 @@ void avs_hda_power_gating_enable(struct avs_dev *adev, bool enable)
+ {
+ 	u32 value = enable ? 0 : pgctl_mask;
+ 
+-	avs_hda_update_config_dword(&adev->base.core, AZX_PCIREG_PGCTL, pgctl_mask, value);
++	if (!avs_platattr_test(adev, ACE))
++		avs_hda_update_config_dword(&adev->base.core, AZX_PCIREG_PGCTL, pgctl_mask, value);
+ }
+ 
+ static void avs_hdac_clock_gating_enable(struct hdac_bus *bus, bool enable)
+ {
++	struct avs_dev *adev = hdac_to_avs(bus);
+ 	u32 value = enable ? cgctl_mask : 0;
+ 
+-	avs_hda_update_config_dword(bus, AZX_PCIREG_CGCTL, cgctl_mask, value);
++	if (!avs_platattr_test(adev, ACE))
++		avs_hda_update_config_dword(bus, AZX_PCIREG_CGCTL, cgctl_mask, value);
+ }
+ 
+ void avs_hda_clock_gating_enable(struct avs_dev *adev, bool enable)
+@@ -71,6 +74,8 @@ void avs_hda_clock_gating_enable(struct avs_dev *adev, bool enable)
+ 
+ void avs_hda_l1sen_enable(struct avs_dev *adev, bool enable)
+ {
++	if (avs_platattr_test(adev, ACE))
++		return;
+ 	if (enable) {
+ 		if (atomic_inc_and_test(&adev->l1sen_counter))
+ 			snd_hdac_chip_updatel(&adev->base.core, VS_EM2, AZX_VS_EM2_L1SEN,
+@@ -99,6 +104,7 @@ static int avs_hdac_bus_init_streams(struct hdac_bus *bus)
+ 
+ static bool avs_hdac_bus_init_chip(struct hdac_bus *bus, bool full_reset)
+ {
++	struct avs_dev *adev = hdac_to_avs(bus);
+ 	struct hdac_ext_link *hlink;
+ 	bool ret;
+ 
+@@ -114,7 +120,8 @@ static bool avs_hdac_bus_init_chip(struct hdac_bus *bus, bool full_reset)
+ 	/* Set DUM bit to address incorrect position reporting for capture
+ 	 * streams. In order to do so, CTRL needs to be out of reset state
+ 	 */
+-	snd_hdac_chip_updatel(bus, VS_EM2, AZX_VS_EM2_DUM, AZX_VS_EM2_DUM);
++	if (!avs_platattr_test(adev, ACE))
++		snd_hdac_chip_updatel(bus, VS_EM2, AZX_VS_EM2_DUM, AZX_VS_EM2_DUM);
+ 
+ 	return ret;
+ }
+@@ -748,13 +755,11 @@ static const struct dev_pm_ops avs_dev_pm = {
+ static const struct avs_sram_spec skl_sram_spec = {
+ 	.base_offset = SKL_ADSP_SRAM_BASE_OFFSET,
+ 	.window_size = SKL_ADSP_SRAM_WINDOW_SIZE,
+-	.rom_status_offset = SKL_ADSP_SRAM_BASE_OFFSET,
+ };
+ 
+ static const struct avs_sram_spec apl_sram_spec = {
+ 	.base_offset = APL_ADSP_SRAM_BASE_OFFSET,
+ 	.window_size = APL_ADSP_SRAM_WINDOW_SIZE,
+-	.rom_status_offset = APL_ADSP_SRAM_BASE_OFFSET,
+ };
+ 
+ static const struct avs_hipc_spec skl_hipc_spec = {
+@@ -766,6 +771,19 @@ static const struct avs_hipc_spec skl_hipc_spec = {
+ 	.rsp_offset = SKL_ADSP_REG_HIPCT,
+ 	.rsp_busy_mask = SKL_ADSP_HIPCT_BUSY,
+ 	.ctl_offset = SKL_ADSP_REG_HIPCCTL,
++	.sts_offset = SKL_ADSP_SRAM_BASE_OFFSET,
++};
++
++static const struct avs_hipc_spec apl_hipc_spec = {
++	.req_offset = SKL_ADSP_REG_HIPCI,
++	.req_ext_offset = SKL_ADSP_REG_HIPCIE,
++	.req_busy_mask = SKL_ADSP_HIPCI_BUSY,
++	.ack_offset = SKL_ADSP_REG_HIPCIE,
++	.ack_done_mask = SKL_ADSP_HIPCIE_DONE,
++	.rsp_offset = SKL_ADSP_REG_HIPCT,
++	.rsp_busy_mask = SKL_ADSP_HIPCT_BUSY,
++	.ctl_offset = SKL_ADSP_REG_HIPCCTL,
++	.sts_offset = APL_ADSP_SRAM_BASE_OFFSET,
+ };
+ 
+ static const struct avs_hipc_spec cnl_hipc_spec = {
+@@ -777,6 +795,7 @@ static const struct avs_hipc_spec cnl_hipc_spec = {
+ 	.rsp_offset = CNL_ADSP_REG_HIPCTDR,
+ 	.rsp_busy_mask = CNL_ADSP_HIPCTDR_BUSY,
+ 	.ctl_offset = CNL_ADSP_REG_HIPCCTL,
++	.sts_offset = APL_ADSP_SRAM_BASE_OFFSET,
+ };
+ 
+ static const struct avs_spec skl_desc = {
+@@ -796,7 +815,7 @@ static const struct avs_spec apl_desc = {
+ 	.core_init_mask = 3,
+ 	.attributes = AVS_PLATATTR_IMR,
+ 	.sram = &apl_sram_spec,
+-	.hipc = &skl_hipc_spec,
++	.hipc = &apl_hipc_spec,
+ };
+ 
+ static const struct avs_spec cnl_desc = {
+@@ -902,13 +921,13 @@ MODULE_AUTHOR("Cezary Rojewski <cezary.rojewski@intel.com>");
+ MODULE_AUTHOR("Amadeusz Slawinski <amadeuszx.slawinski@linux.intel.com>");
+ MODULE_DESCRIPTION("Intel cAVS sound driver");
+ MODULE_LICENSE("GPL");
+-MODULE_FIRMWARE("intel/skl/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/apl/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/cnl/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/icl/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/jsl/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/lkf/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/tgl/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/ehl/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/adl/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/adl_n/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/skl/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/apl/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/cnl/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/icl/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/jsl/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/lkf/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/tgl/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/ehl/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/adl/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/adl_n/dsp_basefw.bin");
+diff --git a/sound/soc/intel/avs/debugfs.c b/sound/soc/intel/avs/debugfs.c
+index 8c4edda97f757f..0e826ca20619ca 100644
+--- a/sound/soc/intel/avs/debugfs.c
++++ b/sound/soc/intel/avs/debugfs.c
+@@ -373,7 +373,10 @@ static ssize_t trace_control_write(struct file *file, const char __user *from, s
+ 		return ret;
+ 
+ 	num_elems = *array;
+-	resource_mask = array[1];
++	if (!num_elems) {
++		ret = -EINVAL;
++		goto free_array;
++	}
+ 
+ 	/*
+ 	 * Disable if just resource mask is provided - no log priority flags.
+@@ -381,6 +384,7 @@ static ssize_t trace_control_write(struct file *file, const char __user *from, s
+ 	 * Enable input format:   mask, prio1, .., prioN
+ 	 * Where 'N' equals number of bits set in the 'mask'.
+ 	 */
++	resource_mask = array[1];
+ 	if (num_elems == 1) {
+ 		ret = disable_logs(adev, resource_mask);
+ 	} else {
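The reordering in trace_control_write() is a bounds fix: array[1], the
resource mask, was read before anything verified that the user supplied
any elements at all. num_elems is now checked first, and the mask is only
dereferenced once at least one element is known to be present.
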
+diff --git a/sound/soc/intel/avs/ipc.c b/sound/soc/intel/avs/ipc.c
+index 08ed9d96738a05..0314f9d4ea5f40 100644
+--- a/sound/soc/intel/avs/ipc.c
++++ b/sound/soc/intel/avs/ipc.c
+@@ -169,7 +169,9 @@ static void avs_dsp_exception_caught(struct avs_dev *adev, union avs_notify_msg
+ 
+ 	dev_crit(adev->dev, "communication severed, rebooting dsp..\n");
+ 
+-	cancel_delayed_work_sync(&ipc->d0ix_work);
++	/* Avoid deadlock as the exception may be the response to SET_D0IX. */
++	if (current_work() != &ipc->d0ix_work.work)
++		cancel_delayed_work_sync(&ipc->d0ix_work);
+ 	ipc->in_d0ix = false;
+ 	/* Re-enabled on recovery completion. */
+ 	pm_runtime_disable(adev->dev);
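The guard around cancel_delayed_work_sync() avoids a self-deadlock: that
call blocks until the work item finishes, so if the exception notification
arrives as the response to a SET_D0IX request, i.e. from within the d0ix
work itself, the work would end up waiting for its own completion.
current_work() detects that case and skips the synchronous cancel.
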
+diff --git a/sound/soc/intel/avs/loader.c b/sound/soc/intel/avs/loader.c
+index 0b29941feb0ef0..138e4e9de5e309 100644
+--- a/sound/soc/intel/avs/loader.c
++++ b/sound/soc/intel/avs/loader.c
+@@ -310,7 +310,7 @@ avs_hda_init_rom(struct avs_dev *adev, unsigned int dma_id, bool purge)
+ 	}
+ 
+ 	/* await ROM init */
+-	ret = snd_hdac_adsp_readl_poll(adev, spec->sram->rom_status_offset, reg,
++	ret = snd_hdac_adsp_readl_poll(adev, spec->hipc->sts_offset, reg,
+ 				       (reg & 0xF) == AVS_ROM_INIT_DONE ||
+ 				       (reg & 0xF) == APL_ROM_FW_ENTERED,
+ 				       AVS_ROM_INIT_POLLING_US, APL_ROM_INIT_TIMEOUT_US);
+@@ -683,6 +683,7 @@ int avs_dsp_boot_firmware(struct avs_dev *adev, bool purge)
+ 
+ static int avs_dsp_alloc_resources(struct avs_dev *adev)
+ {
++	struct hdac_ext_link *link;
+ 	int ret, i;
+ 
+ 	ret = avs_ipc_get_hw_config(adev, &adev->hw_cfg);
+@@ -693,6 +694,14 @@ static int avs_dsp_alloc_resources(struct avs_dev *adev)
+ 	if (ret)
+ 		return AVS_IPC_RET(ret);
+ 
++	/* If hw allows, read capabilities directly from it. */
++	if (avs_platattr_test(adev, ALTHDA)) {
++		link = snd_hdac_ext_bus_get_hlink_by_id(&adev->base.core,
++							AZX_REG_ML_LEPTR_ID_INTEL_SSP);
++		if (link)
++			adev->hw_cfg.i2s_caps.ctrl_count = link->slcount;
++	}
++
+ 	adev->core_refs = devm_kcalloc(adev->dev, adev->hw_cfg.dsp_cores,
+ 				       sizeof(*adev->core_refs), GFP_KERNEL);
+ 	adev->lib_names = devm_kcalloc(adev->dev, adev->fw_cfg.max_libs_count,
+diff --git a/sound/soc/intel/avs/path.c b/sound/soc/intel/avs/path.c
+index cafb8c6198bedb..43b3d995391072 100644
+--- a/sound/soc/intel/avs/path.c
++++ b/sound/soc/intel/avs/path.c
+@@ -131,9 +131,11 @@ int avs_path_set_constraint(struct avs_dev *adev, struct avs_tplg_path_template
+ 	list_for_each_entry(path_template, &template->path_list, node)
+ 		i++;
+ 
+-	rlist = kcalloc(i, sizeof(rlist), GFP_KERNEL);
+-	clist = kcalloc(i, sizeof(clist), GFP_KERNEL);
+-	slist = kcalloc(i, sizeof(slist), GFP_KERNEL);
++	rlist = kcalloc(i, sizeof(*rlist), GFP_KERNEL);
++	clist = kcalloc(i, sizeof(*clist), GFP_KERNEL);
++	slist = kcalloc(i, sizeof(*slist), GFP_KERNEL);
++	if (!rlist || !clist || !slist)
++		return -ENOMEM;
+ 
+ 	i = 0;
+ 	list_for_each_entry(path_template, &template->path_list, node) {
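The kcalloc() changes fix the classic sizeof-of-the-pointer mistake:
sizeof(rlist) is the size of a pointer, not of the element rlist points
at, so the allocations were sized by accident rather than by intent. The
hunk also adds the missing -ENOMEM check. A minimal user-space
illustration (names hypothetical):

#include <stdio.h>
#include <stdlib.h>

struct item { double a, b, c; };	/* stand-in element type */

int main(void)
{
	struct item *wrong = calloc(4, sizeof(wrong));	/* pointer-sized slots  */
	struct item *right = calloc(4, sizeof(*right));	/* element-sized slots  */

	printf("wrong: %zu bytes, right: %zu bytes\n",
	       4 * sizeof(wrong), 4 * sizeof(*right));
	free(wrong);
	free(right);
	return 0;
}
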
+diff --git a/sound/soc/intel/avs/pcm.c b/sound/soc/intel/avs/pcm.c
+index d83ef504643bbb..5a2330e4e4225d 100644
+--- a/sound/soc/intel/avs/pcm.c
++++ b/sound/soc/intel/avs/pcm.c
+@@ -36,6 +36,7 @@ struct avs_dma_data {
+ 	struct snd_pcm_hw_constraint_list sample_bits_list;
+ 
+ 	struct work_struct period_elapsed_work;
++	struct hdac_ext_link *link;
+ 	struct snd_pcm_substream *substream;
+ };
+ 
+@@ -81,10 +82,8 @@ void avs_period_elapsed(struct snd_pcm_substream *substream)
+ static int hw_rule_param_size(struct snd_pcm_hw_params *params, struct snd_pcm_hw_rule *rule);
+ static int avs_hw_constraints_init(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
+ {
+-	struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+ 	struct snd_pcm_hw_constraint_list *r, *c, *s;
+-	struct avs_tplg_path_template *template;
+ 	struct avs_dma_data *data;
+ 	int ret;
+ 
+@@ -97,8 +96,7 @@ static int avs_hw_constraints_init(struct snd_pcm_substream *substream, struct s
+ 	c = &(data->channels_list);
+ 	s = &(data->sample_bits_list);
+ 
+-	template = avs_dai_find_path_template(dai, !rtd->dai_link->no_pcm, substream->stream);
+-	ret = avs_path_set_constraint(data->adev, template, r, c, s);
++	ret = avs_path_set_constraint(data->adev, data->template, r, c, s);
+ 	if (ret <= 0)
+ 		return ret;
+ 
+@@ -325,32 +323,75 @@ static const struct snd_soc_dai_ops avs_dai_nonhda_be_ops = {
+ 	.trigger = avs_dai_nonhda_be_trigger,
+ };
+ 
+-static int avs_dai_hda_be_startup(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
++static int __avs_dai_hda_be_startup(struct snd_pcm_substream *substream, struct snd_soc_dai *dai,
++				    struct hdac_ext_link *link)
+ {
+-	struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
+ 	struct hdac_ext_stream *link_stream;
+ 	struct avs_dma_data *data;
+-	struct hda_codec *codec;
+ 	int ret;
+ 
+ 	ret = avs_dai_startup(substream, dai);
+ 	if (ret)
+ 		return ret;
+ 
+-	codec = dev_to_hda_codec(snd_soc_rtd_to_codec(rtd, 0)->dev);
+-	link_stream = snd_hdac_ext_stream_assign(&codec->bus->core, substream,
++	data = snd_soc_dai_get_dma_data(dai, substream);
++	link_stream = snd_hdac_ext_stream_assign(&data->adev->base.core, substream,
+ 						 HDAC_EXT_STREAM_TYPE_LINK);
+ 	if (!link_stream) {
+ 		avs_dai_shutdown(substream, dai);
+ 		return -EBUSY;
+ 	}
+ 
+-	data = snd_soc_dai_get_dma_data(dai, substream);
+ 	data->link_stream = link_stream;
+-	substream->runtime->private_data = link_stream;
++	data->link = link;
+ 	return 0;
+ }
+ 
++static int avs_dai_hda_be_startup(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
++{
++	struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
++	struct hdac_ext_link *link;
++	struct avs_dma_data *data;
++	struct hda_codec *codec;
++	int ret;
++
++	codec = dev_to_hda_codec(snd_soc_rtd_to_codec(rtd, 0)->dev);
++
++	link = snd_hdac_ext_bus_get_hlink_by_addr(&codec->bus->core, codec->core.addr);
++	if (!link)
++		return -EINVAL;
++
++	ret = __avs_dai_hda_be_startup(substream, dai, link);
++	if (!ret) {
++		data = snd_soc_dai_get_dma_data(dai, substream);
++		substream->runtime->private_data = data->link_stream;
++	}
++
++	return ret;
++}
++
++static int avs_dai_i2shda_be_startup(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
++{
++	struct avs_dev *adev = to_avs_dev(dai->component->dev);
++	struct hdac_ext_link *link;
++
++	link = snd_hdac_ext_bus_get_hlink_by_id(&adev->base.core, AZX_REG_ML_LEPTR_ID_INTEL_SSP);
++	if (!link)
++		return -EINVAL;
++	return __avs_dai_hda_be_startup(substream, dai, link);
++}
++
++static int avs_dai_dmichda_be_startup(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
++{
++	struct avs_dev *adev = to_avs_dev(dai->component->dev);
++	struct hdac_ext_link *link;
++
++	link = snd_hdac_ext_bus_get_hlink_by_id(&adev->base.core, AZX_REG_ML_LEPTR_ID_INTEL_DMIC);
++	if (!link)
++		return -EINVAL;
++	return __avs_dai_hda_be_startup(substream, dai, link);
++}
++
+ static void avs_dai_hda_be_shutdown(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
+ {
+ 	struct avs_dma_data *data = snd_soc_dai_get_dma_data(dai, substream);
+@@ -360,6 +401,14 @@ static void avs_dai_hda_be_shutdown(struct snd_pcm_substream *substream, struct
+ 	avs_dai_shutdown(substream, dai);
+ }
+ 
++static void avs_dai_althda_be_shutdown(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
++{
++	struct avs_dma_data *data = snd_soc_dai_get_dma_data(dai, substream);
++
++	snd_hdac_ext_stream_release(data->link_stream, HDAC_EXT_STREAM_TYPE_LINK);
++	avs_dai_shutdown(substream, dai);
++}
++
+ static int avs_dai_hda_be_hw_params(struct snd_pcm_substream *substream,
+ 				    struct snd_pcm_hw_params *hw_params, struct snd_soc_dai *dai)
+ {
+@@ -375,13 +424,8 @@ static int avs_dai_hda_be_hw_params(struct snd_pcm_substream *substream,
+ 
+ static int avs_dai_hda_be_hw_free(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
+ {
+-	struct avs_dma_data *data;
+-	struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
+ 	struct hdac_ext_stream *link_stream;
+-	struct hdac_ext_link *link;
+-	struct hda_codec *codec;
+-
+-	dev_dbg(dai->dev, "%s: %s\n", __func__, dai->name);
++	struct avs_dma_data *data;
+ 
+ 	data = snd_soc_dai_get_dma_data(dai, substream);
+ 	if (!data->path)
+@@ -393,54 +437,43 @@ static int avs_dai_hda_be_hw_free(struct snd_pcm_substream *substream, struct sn
+ 	data->path = NULL;
+ 
+ 	/* clear link <-> stream mapping */
+-	codec = dev_to_hda_codec(snd_soc_rtd_to_codec(rtd, 0)->dev);
+-	link = snd_hdac_ext_bus_get_hlink_by_addr(&codec->bus->core, codec->core.addr);
+-	if (!link)
+-		return -EINVAL;
+-
+ 	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+-		snd_hdac_ext_bus_link_clear_stream_id(link, hdac_stream(link_stream)->stream_tag);
++		snd_hdac_ext_bus_link_clear_stream_id(data->link,
++						      hdac_stream(link_stream)->stream_tag);
+ 
+ 	return 0;
+ }
+ 
+ static int avs_dai_hda_be_prepare(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
+ {
+-	struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
+-	struct snd_pcm_runtime *runtime = substream->runtime;
++	struct snd_soc_pcm_runtime *be = snd_soc_substream_to_rtd(substream);
+ 	const struct snd_soc_pcm_stream *stream_info;
+ 	struct hdac_ext_stream *link_stream;
+-	struct hdac_ext_link *link;
++	const struct snd_pcm_hw_params *p;
+ 	struct avs_dma_data *data;
+-	struct hda_codec *codec;
+-	struct hdac_bus *bus;
+ 	unsigned int format_val;
+ 	unsigned int bits;
+ 	int ret;
+ 
+ 	data = snd_soc_dai_get_dma_data(dai, substream);
+ 	link_stream = data->link_stream;
++	p = &be->dpcm[substream->stream].hw_params;
+ 
+ 	if (link_stream->link_prepared)
+ 		return 0;
+ 
+-	codec = dev_to_hda_codec(snd_soc_rtd_to_codec(rtd, 0)->dev);
+-	bus = &codec->bus->core;
+ 	stream_info = snd_soc_dai_get_pcm_stream(dai, substream->stream);
+-	bits = snd_hdac_stream_format_bits(runtime->format, runtime->subformat,
++	bits = snd_hdac_stream_format_bits(params_format(p), params_subformat(p),
+ 					   stream_info->sig_bits);
+-	format_val = snd_hdac_stream_format(runtime->channels, bits, runtime->rate);
++	format_val = snd_hdac_stream_format(params_channels(p), bits, params_rate(p));
+ 
+-	snd_hdac_ext_stream_decouple(bus, link_stream, true);
++	snd_hdac_ext_stream_decouple(&data->adev->base.core, link_stream, true);
+ 	snd_hdac_ext_stream_reset(link_stream);
+ 	snd_hdac_ext_stream_setup(link_stream, format_val);
+ 
+-	link = snd_hdac_ext_bus_get_hlink_by_addr(bus, codec->core.addr);
+-	if (!link)
+-		return -EINVAL;
+-
+ 	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+-		snd_hdac_ext_bus_link_set_stream_id(link, hdac_stream(link_stream)->stream_tag);
++		snd_hdac_ext_bus_link_set_stream_id(data->link,
++						    hdac_stream(link_stream)->stream_tag);
+ 
+ 	ret = avs_dai_prepare(substream, dai);
+ 	if (ret)
+@@ -515,6 +548,26 @@ static const struct snd_soc_dai_ops avs_dai_hda_be_ops = {
+ 	.trigger = avs_dai_hda_be_trigger,
+ };
+ 
++__maybe_unused
++static const struct snd_soc_dai_ops avs_dai_i2shda_be_ops = {
++	.startup = avs_dai_i2shda_be_startup,
++	.shutdown = avs_dai_althda_be_shutdown,
++	.hw_params = avs_dai_hda_be_hw_params,
++	.hw_free = avs_dai_hda_be_hw_free,
++	.prepare = avs_dai_hda_be_prepare,
++	.trigger = avs_dai_hda_be_trigger,
++};
++
++__maybe_unused
++static const struct snd_soc_dai_ops avs_dai_dmichda_be_ops = {
++	.startup = avs_dai_dmichda_be_startup,
++	.shutdown = avs_dai_althda_be_shutdown,
++	.hw_params = avs_dai_hda_be_hw_params,
++	.hw_free = avs_dai_hda_be_hw_free,
++	.prepare = avs_dai_hda_be_prepare,
++	.trigger = avs_dai_hda_be_trigger,
++};
++
+ static int hw_rule_param_size(struct snd_pcm_hw_params *params, struct snd_pcm_hw_rule *rule)
+ {
+ 	struct snd_interval *interval = hw_param_interval(params, rule->var);
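The pcm.c rework threads one idea through the backend ops: resolve the
hdac_ext_link once, in startup, and cache it in avs_dma_data. The plain
HDA variant still finds the link by codec address, while the new I2S and
DMIC alt-HDA variants find it by LEPTR ID; hw_free() and prepare() then
use data->link instead of re-deriving the link, and prepare() reads the
format from the BE's negotiated hw_params rather than from the FE runtime.
The __maybe_unused annotations on the new ops tables suggest the DAI
drivers that reference them arrive in a later patch.
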
+diff --git a/sound/soc/intel/avs/registers.h b/sound/soc/intel/avs/registers.h
+index 368ede05f2cdaa..4db0cdf68ffc7a 100644
+--- a/sound/soc/intel/avs/registers.h
++++ b/sound/soc/intel/avs/registers.h
+@@ -74,7 +74,7 @@
+ #define APL_ADSP_SRAM_WINDOW_SIZE	0x20000
+ 
+ /* Constants used when accessing SRAM, space shared with firmware */
+-#define AVS_FW_REG_BASE(adev)		((adev)->spec->sram->base_offset)
++#define AVS_FW_REG_BASE(adev)		((adev)->spec->hipc->sts_offset)
+ #define AVS_FW_REG_STATUS(adev)		(AVS_FW_REG_BASE(adev) + 0x0)
+ #define AVS_FW_REG_ERROR(adev)		(AVS_FW_REG_BASE(adev) + 0x4)
+ 
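With rom_status_offset gone from avs_sram_spec, the ROM/firmware status
window hangs off the HIPC spec as sts_offset instead. Previously apl
reused skl_hipc_spec and carried the status offset in its SRAM spec; since
the offset now lives in the HIPC spec, apl gets its own apl_hipc_spec,
identical to skl's register layout but with sts_offset at the APL SRAM
base, and AVS_FW_REG_BASE() follows the move.
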
+diff --git a/sound/soc/mediatek/mt8195/mt8195-mt6359.c b/sound/soc/mediatek/mt8195/mt8195-mt6359.c
+index df29a9fa5aee5b..1fa664b56f30fa 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-mt6359.c
++++ b/sound/soc/mediatek/mt8195/mt8195-mt6359.c
+@@ -822,12 +822,12 @@ SND_SOC_DAILINK_DEFS(ETDM1_IN_BE,
+ 
+ SND_SOC_DAILINK_DEFS(ETDM2_IN_BE,
+ 		     DAILINK_COMP_ARRAY(COMP_CPU("ETDM2_IN")),
+-		     DAILINK_COMP_ARRAY(COMP_EMPTY()),
++		     DAILINK_COMP_ARRAY(COMP_DUMMY()),
+ 		     DAILINK_COMP_ARRAY(COMP_EMPTY()));
+ 
+ SND_SOC_DAILINK_DEFS(ETDM1_OUT_BE,
+ 		     DAILINK_COMP_ARRAY(COMP_CPU("ETDM1_OUT")),
+-		     DAILINK_COMP_ARRAY(COMP_EMPTY()),
++		     DAILINK_COMP_ARRAY(COMP_DUMMY()),
+ 		     DAILINK_COMP_ARRAY(COMP_EMPTY()));
+ 
+ SND_SOC_DAILINK_DEFS(ETDM2_OUT_BE,
+diff --git a/sound/soc/sof/amd/pci-acp70.c b/sound/soc/sof/amd/pci-acp70.c
+index 8fa1170a2161e9..9108f1139ff2dc 100644
+--- a/sound/soc/sof/amd/pci-acp70.c
++++ b/sound/soc/sof/amd/pci-acp70.c
+@@ -33,6 +33,7 @@ static const struct sof_amd_acp_desc acp70_chip_info = {
+ 	.ext_intr_cntl = ACP70_EXTERNAL_INTR_CNTL,
+ 	.ext_intr_stat	= ACP70_EXT_INTR_STAT,
+ 	.ext_intr_stat1	= ACP70_EXT_INTR_STAT1,
++	.acp_error_stat = ACP70_ERROR_STATUS,
+ 	.dsp_intr_base	= ACP70_DSP_SW_INTR_BASE,
+ 	.acp_sw0_i2s_err_reason = ACP7X_SW0_I2S_ERROR_REASON,
+ 	.sram_pte_offset = ACP70_SRAM_PTE_OFFSET,
+diff --git a/sound/soc/sof/ipc4-pcm.c b/sound/soc/sof/ipc4-pcm.c
+index c09b424ab863d7..8eee3e1aadf932 100644
+--- a/sound/soc/sof/ipc4-pcm.c
++++ b/sound/soc/sof/ipc4-pcm.c
+@@ -784,7 +784,8 @@ static int sof_ipc4_pcm_setup(struct snd_sof_dev *sdev, struct snd_sof_pcm *spcm
+ 
+ 		/* allocate memory for max number of pipeline IDs */
+ 		pipeline_list->pipelines = kcalloc(ipc4_data->max_num_pipelines,
+-						   sizeof(struct snd_sof_widget *), GFP_KERNEL);
++						   sizeof(*pipeline_list->pipelines),
++						   GFP_KERNEL);
+ 		if (!pipeline_list->pipelines) {
+ 			sof_ipc4_pcm_free(sdev, spcm);
+ 			return -ENOMEM;
+diff --git a/sound/soc/ti/omap-hdmi.c b/sound/soc/ti/omap-hdmi.c
+index cf43ac19c4a6d0..55e7cb96858fca 100644
+--- a/sound/soc/ti/omap-hdmi.c
++++ b/sound/soc/ti/omap-hdmi.c
+@@ -361,17 +361,20 @@ static int omap_hdmi_audio_probe(struct platform_device *pdev)
+ 	if (!card->dai_link)
+ 		return -ENOMEM;
+ 
+-	compnent = devm_kzalloc(dev, sizeof(*compnent), GFP_KERNEL);
++	compnent = devm_kzalloc(dev, 2 * sizeof(*compnent), GFP_KERNEL);
+ 	if (!compnent)
+ 		return -ENOMEM;
+-	card->dai_link->cpus		= compnent;
++	card->dai_link->cpus		= &compnent[0];
+ 	card->dai_link->num_cpus	= 1;
+ 	card->dai_link->codecs		= &snd_soc_dummy_dlc;
+ 	card->dai_link->num_codecs	= 1;
++	card->dai_link->platforms	= &compnent[1];
++	card->dai_link->num_platforms	= 1;
+ 
+ 	card->dai_link->name = card->name;
+ 	card->dai_link->stream_name = card->name;
+ 	card->dai_link->cpus->dai_name = dev_name(ad->dssdev);
++	card->dai_link->platforms->name = dev_name(ad->dssdev);
+ 	card->num_links = 1;
+ 	card->dev = dev;
+ 
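The omap-hdmi fix allocates two DAI-link components from a single
devm_kzalloc(), slot [0] for the CPU side and slot [1] for the platform,
and points both at the DSS device; previously dai_link->platforms was
never populated, which the ASoC core presumably rejects as an incomplete
link.
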
+diff --git a/sound/usb/implicit.c b/sound/usb/implicit.c
+index 4727043fd74580..77f06da93151e8 100644
+--- a/sound/usb/implicit.c
++++ b/sound/usb/implicit.c
+@@ -57,6 +57,7 @@ static const struct snd_usb_implicit_fb_match playback_implicit_fb_quirks[] = {
+ 	IMPLICIT_FB_FIXED_DEV(0x31e9, 0x0002, 0x81, 2), /* Solid State Logic SSL2+ */
+ 	IMPLICIT_FB_FIXED_DEV(0x0499, 0x172f, 0x81, 2), /* Steinberg UR22C */
+ 	IMPLICIT_FB_FIXED_DEV(0x0d9a, 0x00df, 0x81, 2), /* RTX6001 */
++	IMPLICIT_FB_FIXED_DEV(0x19f7, 0x000a, 0x84, 3), /* RODE AI-1 */
+ 	IMPLICIT_FB_FIXED_DEV(0x22f0, 0x0006, 0x81, 3), /* Allen&Heath Qu-16 */
+ 	IMPLICIT_FB_FIXED_DEV(0x1686, 0xf029, 0x82, 2), /* Zoom UAC-2 */
+ 	IMPLICIT_FB_FIXED_DEV(0x2466, 0x8003, 0x86, 2), /* Fractal Audio Axe-Fx II */
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index cfed000f243ab9..c3de2b13743500 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -1530,6 +1530,7 @@ static void snd_usbmidi_free(struct snd_usb_midi *umidi)
+ 			snd_usbmidi_in_endpoint_delete(ep->in);
+ 	}
+ 	mutex_destroy(&umidi->mutex);
++	timer_shutdown_sync(&umidi->error_timer);
+ 	kfree(umidi);
+ }
+ 
+@@ -1553,7 +1554,7 @@ void snd_usbmidi_disconnect(struct list_head *p)
+ 	spin_unlock_irq(&umidi->disc_lock);
+ 	up_write(&umidi->disc_rwsem);
+ 
+-	timer_delete_sync(&umidi->error_timer);
++	timer_shutdown_sync(&umidi->error_timer);
+ 
+ 	for (i = 0; i < MIDI_MAX_ENDPOINTS; ++i) {
+ 		struct snd_usb_midi_endpoint *ep = &umidi->endpoints[i];
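Both call sites now use timer_shutdown_sync() rather than
timer_delete_sync(). The difference is that shutdown also marks the timer
as defunct, so later attempts to re-arm it are ignored; that matters here
because the error timer could otherwise be re-armed between disconnect and
the final kfree(umidi). The extra call in snd_usbmidi_free() covers
teardown paths that never go through snd_usbmidi_disconnect().
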
+diff --git a/tools/arch/x86/kcpuid/kcpuid.c b/tools/arch/x86/kcpuid/kcpuid.c
+index 1b25c0a95d3f9a..40a9e59c2fd568 100644
+--- a/tools/arch/x86/kcpuid/kcpuid.c
++++ b/tools/arch/x86/kcpuid/kcpuid.c
+@@ -1,11 +1,12 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #define _GNU_SOURCE
+ 
+-#include <stdio.h>
++#include <err.h>
++#include <getopt.h>
+ #include <stdbool.h>
++#include <stdio.h>
+ #include <stdlib.h>
+ #include <string.h>
+-#include <getopt.h>
+ 
+ #define ARRAY_SIZE(x)	(sizeof(x) / sizeof((x)[0]))
+ #define min(a, b)	(((a) < (b)) ? (a) : (b))
+@@ -145,14 +146,14 @@ static bool cpuid_store(struct cpuid_range *range, u32 f, int subleaf,
+ 	if (!func->leafs) {
+ 		func->leafs = malloc(sizeof(struct subleaf));
+ 		if (!func->leafs)
+-			perror("malloc func leaf");
++			err(EXIT_FAILURE, NULL);
+ 
+ 		func->nr = 1;
+ 	} else {
+ 		s = func->nr;
+ 		func->leafs = realloc(func->leafs, (s + 1) * sizeof(*leaf));
+ 		if (!func->leafs)
+-			perror("realloc f->leafs");
++			err(EXIT_FAILURE, NULL);
+ 
+ 		func->nr++;
+ 	}
+@@ -211,7 +212,7 @@ struct cpuid_range *setup_cpuid_range(u32 input_eax)
+ 
+ 	range = malloc(sizeof(struct cpuid_range));
+ 	if (!range)
+-		perror("malloc range");
++		err(EXIT_FAILURE, NULL);
+ 
+ 	if (input_eax & 0x80000000)
+ 		range->is_ext = true;
+@@ -220,7 +221,7 @@ struct cpuid_range *setup_cpuid_range(u32 input_eax)
+ 
+ 	range->funcs = malloc(sizeof(struct cpuid_func) * idx_func);
+ 	if (!range->funcs)
+-		perror("malloc range->funcs");
++		err(EXIT_FAILURE, NULL);
+ 
+ 	range->nr = idx_func;
+ 	memset(range->funcs, 0, sizeof(struct cpuid_func) * idx_func);
+@@ -395,8 +396,8 @@ static int parse_line(char *line)
+ 	return 0;
+ 
+ err_exit:
+-	printf("Warning: wrong line format:\n");
+-	printf("\tline[%d]: %s\n", flines, line);
++	warnx("Wrong line format:\n"
++	      "\tline[%d]: %s", flines, line);
+ 	return -1;
+ }
+ 
+@@ -418,10 +419,8 @@ static void parse_text(void)
+ 		file = fopen("./cpuid.csv", "r");
+ 	}
+ 
+-	if (!file) {
+-		printf("Fail to open '%s'\n", filename);
+-		return;
+-	}
++	if (!file)
++		err(EXIT_FAILURE, "%s", filename);
+ 
+ 	while (1) {
+ 		ret = getline(&line, &len, file);
+@@ -530,7 +529,7 @@ static inline struct cpuid_func *index_to_func(u32 index)
+ 	func_idx = index & 0xffff;
+ 
+ 	if ((func_idx + 1) > (u32)range->nr) {
+-		printf("ERR: invalid input index (0x%x)\n", index);
++		warnx("Invalid input index (0x%x)", index);
+ 		return NULL;
+ 	}
+ 	return &range->funcs[func_idx];
+@@ -562,7 +561,7 @@ static void show_info(void)
+ 				return;
+ 			}
+ 
+-			printf("ERR: invalid input subleaf (0x%x)\n", user_sub);
++			warnx("Invalid input subleaf (0x%x)", user_sub);
+ 		}
+ 
+ 		show_func(func);
+@@ -593,15 +592,15 @@ static void setup_platform_cpuid(void)
+ 
+ static void usage(void)
+ {
+-	printf("kcpuid [-abdfhr] [-l leaf] [-s subleaf]\n"
+-		"\t-a|--all             Show both bit flags and complex bit fields info\n"
+-		"\t-b|--bitflags        Show boolean flags only\n"
+-		"\t-d|--detail          Show details of the flag/fields (default)\n"
+-		"\t-f|--flags           Specify the cpuid csv file\n"
+-		"\t-h|--help            Show usage info\n"
+-		"\t-l|--leaf=index      Specify the leaf you want to check\n"
+-		"\t-r|--raw             Show raw cpuid data\n"
+-		"\t-s|--subleaf=sub     Specify the subleaf you want to check\n"
++	warnx("kcpuid [-abdfhr] [-l leaf] [-s subleaf]\n"
++	      "\t-a|--all             Show both bit flags and complex bit fields info\n"
++	      "\t-b|--bitflags        Show boolean flags only\n"
++	      "\t-d|--detail          Show details of the flag/fields (default)\n"
++	      "\t-f|--flags           Specify the CPUID CSV file\n"
++	      "\t-h|--help            Show usage info\n"
++	      "\t-l|--leaf=index      Specify the leaf you want to check\n"
++	      "\t-r|--raw             Show raw CPUID data\n"
++	      "\t-s|--subleaf=sub     Specify the subleaf you want to check"
+ 	);
+ }
+ 
+@@ -652,7 +651,7 @@ static int parse_options(int argc, char *argv[])
+ 			user_sub = strtoul(optarg, NULL, 0);
+ 			break;
+ 		default:
+-			printf("%s: Invalid option '%c'\n", argv[0], optopt);
++			warnx("Invalid option '%c'", optopt);
+ 			return -1;
+ 	}
+ 
+diff --git a/tools/arch/x86/lib/x86-opcode-map.txt b/tools/arch/x86/lib/x86-opcode-map.txt
+index f5dd84eb55dcda..cd3fd5155f6ece 100644
+--- a/tools/arch/x86/lib/x86-opcode-map.txt
++++ b/tools/arch/x86/lib/x86-opcode-map.txt
+@@ -35,7 +35,7 @@
+ #  - (!F3) : the last prefix is not 0xF3 (including non-last prefix case)
+ #  - (66&F2): Both 0x66 and 0xF2 prefixes are specified.
+ #
+-# REX2 Prefix
++# REX2 Prefix Superscripts
+ #  - (!REX2): REX2 is not allowed
+ #  - (REX2): REX2 variant e.g. JMPABS
+ 
+@@ -286,10 +286,10 @@ df: ESC
+ # Note: "forced64" is Intel CPU behavior: they ignore 0x66 prefix
+ # in 64-bit mode. AMD CPUs accept 0x66 prefix, it causes RIP truncation
+ # to 16 bits. In 32-bit mode, 0x66 is accepted by both Intel and AMD.
+-e0: LOOPNE/LOOPNZ Jb (f64) (!REX2)
+-e1: LOOPE/LOOPZ Jb (f64) (!REX2)
+-e2: LOOP Jb (f64) (!REX2)
+-e3: JrCXZ Jb (f64) (!REX2)
++e0: LOOPNE/LOOPNZ Jb (f64),(!REX2)
++e1: LOOPE/LOOPZ Jb (f64),(!REX2)
++e2: LOOP Jb (f64),(!REX2)
++e3: JrCXZ Jb (f64),(!REX2)
+ e4: IN AL,Ib (!REX2)
+ e5: IN eAX,Ib (!REX2)
+ e6: OUT Ib,AL (!REX2)
+@@ -298,10 +298,10 @@ e7: OUT Ib,eAX (!REX2)
+ # in "near" jumps and calls is 16-bit. For CALL,
+ # push of return address is 16-bit wide, RSP is decremented by 2
+ # but is not truncated to 16 bits, unlike RIP.
+-e8: CALL Jz (f64) (!REX2)
+-e9: JMP-near Jz (f64) (!REX2)
+-ea: JMP-far Ap (i64) (!REX2)
+-eb: JMP-short Jb (f64) (!REX2)
++e8: CALL Jz (f64),(!REX2)
++e9: JMP-near Jz (f64),(!REX2)
++ea: JMP-far Ap (i64),(!REX2)
++eb: JMP-short Jb (f64),(!REX2)
+ ec: IN AL,DX (!REX2)
+ ed: IN eAX,DX (!REX2)
+ ee: OUT DX,AL (!REX2)
+@@ -478,22 +478,22 @@ AVXcode: 1
+ 7f: movq Qq,Pq | vmovdqa Wx,Vx (66) | vmovdqa32/64 Wx,Vx (66),(evo) | vmovdqu Wx,Vx (F3) | vmovdqu32/64 Wx,Vx (F3),(evo) | vmovdqu8/16 Wx,Vx (F2),(ev)
+ # 0x0f 0x80-0x8f
+ # Note: "forced64" is Intel CPU behavior (see comment about CALL insn).
+-80: JO Jz (f64) (!REX2)
+-81: JNO Jz (f64) (!REX2)
+-82: JB/JC/JNAE Jz (f64) (!REX2)
+-83: JAE/JNB/JNC Jz (f64) (!REX2)
+-84: JE/JZ Jz (f64) (!REX2)
+-85: JNE/JNZ Jz (f64) (!REX2)
+-86: JBE/JNA Jz (f64) (!REX2)
+-87: JA/JNBE Jz (f64) (!REX2)
+-88: JS Jz (f64) (!REX2)
+-89: JNS Jz (f64) (!REX2)
+-8a: JP/JPE Jz (f64) (!REX2)
+-8b: JNP/JPO Jz (f64) (!REX2)
+-8c: JL/JNGE Jz (f64) (!REX2)
+-8d: JNL/JGE Jz (f64) (!REX2)
+-8e: JLE/JNG Jz (f64) (!REX2)
+-8f: JNLE/JG Jz (f64) (!REX2)
++80: JO Jz (f64),(!REX2)
++81: JNO Jz (f64),(!REX2)
++82: JB/JC/JNAE Jz (f64),(!REX2)
++83: JAE/JNB/JNC Jz (f64),(!REX2)
++84: JE/JZ Jz (f64),(!REX2)
++85: JNE/JNZ Jz (f64),(!REX2)
++86: JBE/JNA Jz (f64),(!REX2)
++87: JA/JNBE Jz (f64),(!REX2)
++88: JS Jz (f64),(!REX2)
++89: JNS Jz (f64),(!REX2)
++8a: JP/JPE Jz (f64),(!REX2)
++8b: JNP/JPO Jz (f64),(!REX2)
++8c: JL/JNGE Jz (f64),(!REX2)
++8d: JNL/JGE Jz (f64),(!REX2)
++8e: JLE/JNG Jz (f64),(!REX2)
++8f: JNLE/JG Jz (f64),(!REX2)
+ # 0x0f 0x90-0x9f
+ 90: SETO Eb | kmovw/q Vk,Wk | kmovb/d Vk,Wk (66)
+ 91: SETNO Eb | kmovw/q Mv,Vk | kmovb/d Mv,Vk (66)
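The opcode-map edits are purely notational: multiple superscripts on one
entry, such as (f64) and (!REX2), are now joined with a comma, e.g.
(f64),(!REX2), matching how other multi-attribute entries in the file are
written, presumably so the awk-based table generator attaches both
attributes to the same instruction variant instead of only the last group.
The comment header is retitled to match.
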
+diff --git a/tools/bpf/bpftool/cgroup.c b/tools/bpf/bpftool/cgroup.c
+index 93b139bfb9880a..3f1d6be512151d 100644
+--- a/tools/bpf/bpftool/cgroup.c
++++ b/tools/bpf/bpftool/cgroup.c
+@@ -221,7 +221,7 @@ static int cgroup_has_attached_progs(int cgroup_fd)
+ 	for (i = 0; i < ARRAY_SIZE(cgroup_attach_types); i++) {
+ 		int count = count_attached_bpf_progs(cgroup_fd, cgroup_attach_types[i]);
+ 
+-		if (count < 0)
++		if (count < 0 && errno != EINVAL)
+ 			return -1;
+ 
+ 		if (count > 0) {
+diff --git a/tools/bpf/resolve_btfids/Makefile b/tools/bpf/resolve_btfids/Makefile
+index afbddea3a39c64..ce1b556dfa90f1 100644
+--- a/tools/bpf/resolve_btfids/Makefile
++++ b/tools/bpf/resolve_btfids/Makefile
+@@ -17,7 +17,7 @@ endif
+ 
+ # Overrides for the prepare step libraries.
+ HOST_OVERRIDES := AR="$(HOSTAR)" CC="$(HOSTCC)" LD="$(HOSTLD)" ARCH="$(HOSTARCH)" \
+-		  CROSS_COMPILE="" EXTRA_CFLAGS="$(HOSTCFLAGS)"
++		  CROSS_COMPILE="" CLANG_CROSS_FLAGS="" EXTRA_CFLAGS="$(HOSTCFLAGS)"
+ 
+ RM      ?= rm
+ HOSTCC  ?= gcc
+diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature
+index 1f44ca677ad3d6..57bd995ce6afa3 100644
+--- a/tools/build/Makefile.feature
++++ b/tools/build/Makefile.feature
+@@ -87,7 +87,6 @@ FEATURE_TESTS_BASIC :=                  \
+         libtracefs                      \
+         libcpupower                     \
+         libcrypto                       \
+-        libunwind                       \
+         pthread-attr-setaffinity-np     \
+         pthread-barrier     		\
+         reallocarray                    \
+@@ -148,15 +147,12 @@ endif
+ FEATURE_DISPLAY ?=              \
+          libdw                  \
+          glibc                  \
+-         libbfd                 \
+-         libbfd-buildid		\
+          libelf                 \
+          libnuma                \
+          numa_num_possible_cpus \
+          libperl                \
+          libpython              \
+          libcrypto              \
+-         libunwind              \
+          libcapstone            \
+          llvm-perf              \
+          zlib                   \
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index fd404729b1154b..fe5df2a9fe8ee6 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -2051,7 +2051,8 @@ union bpf_attr {
+  * 		untouched (unless **BPF_F_MARK_ENFORCE** is added as well), and
+  * 		for updates resulting in a null checksum the value is set to
+  * 		**CSUM_MANGLED_0** instead. Flag **BPF_F_PSEUDO_HDR** indicates
+- * 		the checksum is to be computed against a pseudo-header.
++ * 		that the modified header field is part of the pseudo-header.
++ * 		Flag **BPF_F_IPV6** should be set for IPv6 packets.
+  *
+  * 		This helper works in combination with **bpf_csum_diff**\ (),
+  * 		which does not update the checksum in-place, but offers more
+@@ -6068,6 +6069,7 @@ enum {
+ 	BPF_F_PSEUDO_HDR		= (1ULL << 4),
+ 	BPF_F_MARK_MANGLED_0		= (1ULL << 5),
+ 	BPF_F_MARK_ENFORCE		= (1ULL << 6),
++	BPF_F_IPV6			= (1ULL << 7),
+ };
+ 
+ /* BPF_FUNC_skb_set_tunnel_key and BPF_FUNC_skb_get_tunnel_key flags. */
+diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
+index c0e13cdf966077..b997c68bd94536 100644
+--- a/tools/lib/bpf/bpf_core_read.h
++++ b/tools/lib/bpf/bpf_core_read.h
+@@ -388,7 +388,13 @@ extern void *bpf_rdonly_cast(const void *obj, __u32 btf_id) __ksym __weak;
+ #define ___arrow10(a, b, c, d, e, f, g, h, i, j) a->b->c->d->e->f->g->h->i->j
+ #define ___arrow(...) ___apply(___arrow, ___narg(__VA_ARGS__))(__VA_ARGS__)
+ 
++#if defined(__clang__) && (__clang_major__ >= 19)
++#define ___type(...) __typeof_unqual__(___arrow(__VA_ARGS__))
++#elif defined(__GNUC__) && (__GNUC__ >= 14)
++#define ___type(...) __typeof_unqual__(___arrow(__VA_ARGS__))
++#else
+ #define ___type(...) typeof(___arrow(__VA_ARGS__))
++#endif
+ 
+ #define ___read(read_fn, dst, src_type, src, accessor)			    \
+ 	read_fn((void *)(dst), sizeof(*(dst)), &((src_type)(src))->accessor)
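___type() now uses __typeof_unqual__ where the compiler provides it (the
diff gates on clang 19+ and GCC 14+), falling back to plain typeof
elsewhere. The unqualified variant strips qualifiers such as volatile and
const, so a destination variable declared through the BPF_CORE_READ()
machinery does not inherit qualifiers from the source field.
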
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 6b85060f07b3b4..147964bb64c8f4 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -60,6 +60,8 @@
+ #define BPF_FS_MAGIC		0xcafe4a11
+ #endif
+ 
++#define MAX_EVENT_NAME_LEN	64
++
+ #define BPF_FS_DEFAULT_PATH "/sys/fs/bpf"
+ 
+ #define BPF_INSN_SZ (sizeof(struct bpf_insn))
+@@ -284,7 +286,7 @@ void libbpf_print(enum libbpf_print_level level, const char *format, ...)
+ 	old_errno = errno;
+ 
+ 	va_start(args, format);
+-	__libbpf_pr(level, format, args);
++	print_fn(level, format, args);
+ 	va_end(args);
+ 
+ 	errno = old_errno;
+@@ -896,7 +898,7 @@ bpf_object__add_programs(struct bpf_object *obj, Elf_Data *sec_data,
+ 			return -LIBBPF_ERRNO__FORMAT;
+ 		}
+ 
+-		if (sec_off + prog_sz > sec_sz) {
++		if (sec_off + prog_sz > sec_sz || sec_off + prog_sz < sec_off) {
+ 			pr_warn("sec '%s': program at offset %zu crosses section boundary\n",
+ 				sec_name, sec_off);
+ 			return -LIBBPF_ERRNO__FORMAT;
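The added sec_off + prog_sz < sec_off clause catches unsigned wrap-around:
a malformed ELF can choose values whose sum overflows and slips under
sec_sz even though the program does not fit in the section. The idiom
relies on unsigned arithmetic wrapping by definition; a self-contained
version of the check (names hypothetical):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Does [off, off + len) fit in a buffer of size total?  The first
 * comparison catches unsigned wrap-around in off + len. */
static bool range_fits(uint64_t off, uint64_t len, uint64_t total)
{
	return off + len >= off && off + len <= total;
}

int main(void)
{
	printf("%d\n", range_fits(16, 32, 64));         /* 1: fits          */
	printf("%d\n", range_fits(16, UINT64_MAX, 64)); /* 0: the sum wraps */
	return 0;
}
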
+@@ -1725,15 +1727,6 @@ static Elf64_Sym *find_elf_var_sym(const struct bpf_object *obj, const char *nam
+ 	return ERR_PTR(-ENOENT);
+ }
+ 
+-/* Some versions of Android don't provide memfd_create() in their libc
+- * implementation, so avoid complications and just go straight to Linux
+- * syscall.
+- */
+-static int sys_memfd_create(const char *name, unsigned flags)
+-{
+-	return syscall(__NR_memfd_create, name, flags);
+-}
+-
+ #ifndef MFD_CLOEXEC
+ #define MFD_CLOEXEC 0x0001U
+ #endif
+@@ -11121,16 +11114,16 @@ static const char *tracefs_available_filter_functions_addrs(void)
+ 			     : TRACEFS"/available_filter_functions_addrs";
+ }
+ 
+-static void gen_kprobe_legacy_event_name(char *buf, size_t buf_sz,
+-					 const char *kfunc_name, size_t offset)
++static void gen_probe_legacy_event_name(char *buf, size_t buf_sz,
++					const char *name, size_t offset)
+ {
+ 	static int index = 0;
+ 	int i;
+ 
+-	snprintf(buf, buf_sz, "libbpf_%u_%s_0x%zx_%d", getpid(), kfunc_name, offset,
+-		 __sync_fetch_and_add(&index, 1));
++	snprintf(buf, buf_sz, "libbpf_%u_%d_%s_0x%zx", getpid(),
++		 __sync_fetch_and_add(&index, 1), name, offset);
+ 
+-	/* sanitize binary_path in the probe name */
++	/* sanitize name in the probe name */
+ 	for (i = 0; buf[i]; i++) {
+ 		if (!isalnum(buf[i]))
+ 			buf[i] = '_';
+@@ -11255,9 +11248,9 @@ int probe_kern_syscall_wrapper(int token_fd)
+ 
+ 		return pfd >= 0 ? 1 : 0;
+ 	} else { /* legacy mode */
+-		char probe_name[128];
++		char probe_name[MAX_EVENT_NAME_LEN];
+ 
+-		gen_kprobe_legacy_event_name(probe_name, sizeof(probe_name), syscall_name, 0);
++		gen_probe_legacy_event_name(probe_name, sizeof(probe_name), syscall_name, 0);
+ 		if (add_kprobe_event_legacy(probe_name, false, syscall_name, 0) < 0)
+ 			return 0;
+ 
+@@ -11313,10 +11306,10 @@ bpf_program__attach_kprobe_opts(const struct bpf_program *prog,
+ 					    func_name, offset,
+ 					    -1 /* pid */, 0 /* ref_ctr_off */);
+ 	} else {
+-		char probe_name[256];
++		char probe_name[MAX_EVENT_NAME_LEN];
+ 
+-		gen_kprobe_legacy_event_name(probe_name, sizeof(probe_name),
+-					     func_name, offset);
++		gen_probe_legacy_event_name(probe_name, sizeof(probe_name),
++					    func_name, offset);
+ 
+ 		legacy_probe = strdup(probe_name);
+ 		if (!legacy_probe)
+@@ -11860,20 +11853,6 @@ static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, stru
+ 	return ret;
+ }
+ 
+-static void gen_uprobe_legacy_event_name(char *buf, size_t buf_sz,
+-					 const char *binary_path, uint64_t offset)
+-{
+-	int i;
+-
+-	snprintf(buf, buf_sz, "libbpf_%u_%s_0x%zx", getpid(), binary_path, (size_t)offset);
+-
+-	/* sanitize binary_path in the probe name */
+-	for (i = 0; buf[i]; i++) {
+-		if (!isalnum(buf[i]))
+-			buf[i] = '_';
+-	}
+-}
+-
+ static inline int add_uprobe_event_legacy(const char *probe_name, bool retprobe,
+ 					  const char *binary_path, size_t offset)
+ {
+@@ -12297,13 +12276,14 @@ bpf_program__attach_uprobe_opts(const struct bpf_program *prog, pid_t pid,
+ 		pfd = perf_event_open_probe(true /* uprobe */, retprobe, binary_path,
+ 					    func_offset, pid, ref_ctr_off);
+ 	} else {
+-		char probe_name[PATH_MAX + 64];
++		char probe_name[MAX_EVENT_NAME_LEN];
+ 
+ 		if (ref_ctr_off)
+ 			return libbpf_err_ptr(-EINVAL);
+ 
+-		gen_uprobe_legacy_event_name(probe_name, sizeof(probe_name),
+-					     binary_path, func_offset);
++		gen_probe_legacy_event_name(probe_name, sizeof(probe_name),
++					    strrchr(binary_path, '/') ? : binary_path,
++					    func_offset);
+ 
+ 		legacy_probe = strdup(probe_name);
+ 		if (!legacy_probe)
+@@ -13371,7 +13351,6 @@ struct perf_buffer *perf_buffer__new(int map_fd, size_t page_cnt,
+ 	attr.config = PERF_COUNT_SW_BPF_OUTPUT;
+ 	attr.type = PERF_TYPE_SOFTWARE;
+ 	attr.sample_type = PERF_SAMPLE_RAW;
+-	attr.sample_period = sample_period;
+ 	attr.wakeup_events = sample_period;
+ 
+ 	p.attr = &attr;
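Several smaller libbpf fixes travel together here. libbpf_print() now
calls the print_fn pointer it already loaded instead of re-reading the
global, closing a race with libbpf_set_print(). The legacy kprobe and
uprobe event-name generators are merged into
gen_probe_legacy_event_name(), whose buffer is capped at
MAX_EVENT_NAME_LEN (64, apparently the limit tracefs places on event
names) and which puts the per-process index before the name, so truncating
an overlong function or binary name cannot make two generated events
collide; uprobes also feed only the basename of binary_path into the name.
perf_buffer__new() drops the attr.sample_period assignment, leaving
wakeup_events alone to pace wakeups. Finally, sys_memfd_create() moves
into libbpf_internal.h so that linker.c below can use the raw syscall too,
keeping Android libcs that lack a memfd_create() wrapper working.
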
+diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
+index 76669c73dcd162..477a3b3389a091 100644
+--- a/tools/lib/bpf/libbpf_internal.h
++++ b/tools/lib/bpf/libbpf_internal.h
+@@ -667,6 +667,15 @@ static inline int sys_dup3(int oldfd, int newfd, int flags)
+ 	return syscall(__NR_dup3, oldfd, newfd, flags);
+ }
+ 
++/* Some versions of Android don't provide memfd_create() in their libc
++ * implementation, so avoid complications and just go straight to Linux
++ * syscall.
++ */
++static inline int sys_memfd_create(const char *name, unsigned flags)
++{
++	return syscall(__NR_memfd_create, name, flags);
++}
++
+ /* Point *fixed_fd* to the same file that *tmp_fd* points to.
+  * Regardless of success, *tmp_fd* is closed.
+  * Whatever *fixed_fd* pointed to is closed silently.
+diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
+index 800e0ef09c3787..a469e5d4fee70e 100644
+--- a/tools/lib/bpf/linker.c
++++ b/tools/lib/bpf/linker.c
+@@ -573,7 +573,7 @@ int bpf_linker__add_buf(struct bpf_linker *linker, void *buf, size_t buf_sz,
+ 
+ 	snprintf(filename, sizeof(filename), "mem:%p+%zu", buf, buf_sz);
+ 
+-	fd = memfd_create(filename, 0);
++	fd = sys_memfd_create(filename, 0);
+ 	if (fd < 0) {
+ 		ret = -errno;
+ 		pr_warn("failed to create memfd '%s': %s\n", filename, errstr(ret));
+@@ -1376,7 +1376,7 @@ static int linker_append_sec_data(struct bpf_linker *linker, struct src_obj *obj
+ 		} else {
+ 			if (!secs_match(dst_sec, src_sec)) {
+ 				pr_warn("ELF sections %s are incompatible\n", src_sec->sec_name);
+-				return -1;
++				return -EINVAL;
+ 			}
+ 
+ 			/* "license" and "version" sections are deduped */
+@@ -2223,7 +2223,7 @@ static int linker_append_elf_relos(struct bpf_linker *linker, struct src_obj *ob
+ 			}
+ 		} else if (!secs_match(dst_sec, src_sec)) {
+ 			pr_warn("sections %s are not compatible\n", src_sec->sec_name);
+-			return -1;
++			return -EINVAL;
+ 		}
+ 
+ 		/* shdr->sh_link points to SYMTAB */
+diff --git a/tools/lib/bpf/nlattr.c b/tools/lib/bpf/nlattr.c
+index 975e265eab3bfe..06663f9ea581f9 100644
+--- a/tools/lib/bpf/nlattr.c
++++ b/tools/lib/bpf/nlattr.c
+@@ -63,16 +63,16 @@ static int validate_nla(struct nlattr *nla, int maxtype,
+ 		minlen = nla_attr_minlen[pt->type];
+ 
+ 	if (libbpf_nla_len(nla) < minlen)
+-		return -1;
++		return -EINVAL;
+ 
+ 	if (pt->maxlen && libbpf_nla_len(nla) > pt->maxlen)
+-		return -1;
++		return -EINVAL;
+ 
+ 	if (pt->type == LIBBPF_NLA_STRING) {
+ 		char *data = libbpf_nla_data(nla);
+ 
+ 		if (data[libbpf_nla_len(nla) - 1] != '\0')
+-			return -1;
++			return -EINVAL;
+ 	}
+ 
+ 	return 0;
+@@ -118,19 +118,18 @@ int libbpf_nla_parse(struct nlattr *tb[], int maxtype, struct nlattr *head,
+ 		if (policy) {
+ 			err = validate_nla(nla, maxtype, policy);
+ 			if (err < 0)
+-				goto errout;
++				return err;
+ 		}
+ 
+-		if (tb[type])
++		if (tb[type]) {
+ 			pr_warn("Attribute of type %#x found multiple times in message, "
+ 				"previous attribute is being ignored.\n", type);
++		}
+ 
+ 		tb[type] = nla;
+ 	}
+ 
+-	err = 0;
+-errout:
+-	return err;
++	return 0;
+ }
+ 
+ /**
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index b21b12ec88d960..f23bdda737aaa5 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -230,7 +230,8 @@ static bool is_rust_noreturn(const struct symbol *func)
+ 	       str_ends_with(func->name, "_7___rustc17rust_begin_unwind")				||
+ 	       strstr(func->name, "_4core9panicking13assert_failed")					||
+ 	       strstr(func->name, "_4core9panicking11panic_const24panic_const_")			||
+-	       (strstr(func->name, "_4core5slice5index24slice_") &&
++	       (strstr(func->name, "_4core5slice5index") &&
++		strstr(func->name, "slice_") &&
+ 		str_ends_with(func->name, "_fail"));
+ }
+ 
+diff --git a/tools/perf/MANIFEST b/tools/perf/MANIFEST
+index 364b55b00b4841..34af57b8ec2a9c 100644
+--- a/tools/perf/MANIFEST
++++ b/tools/perf/MANIFEST
+@@ -1,8 +1,10 @@
+ COPYING
+ LICENSES/preferred/GPL-2.0
+ arch/arm64/tools/gen-sysreg.awk
++arch/arm64/tools/syscall_64.tbl
+ arch/arm64/tools/sysreg
+ arch/*/include/uapi/asm/bpf_perf_event.h
++include/uapi/asm-generic/Kbuild
+ tools/perf
+ tools/arch
+ tools/scripts
+@@ -25,6 +27,10 @@ tools/lib/str_error_r.c
+ tools/lib/vsprintf.c
+ tools/lib/zalloc.c
+ scripts/bpf_doc.py
++scripts/Kbuild.include
++scripts/Makefile.asm-headers
++scripts/syscall.tbl
++scripts/syscallhdr.sh
+ tools/bpf/bpftool
+ kernel/bpf/disasm.c
+ kernel/bpf/disasm.h
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index b7769a22fe1afa..d1ea7bf449647e 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -560,6 +560,8 @@ ifndef NO_LIBELF
+     ifeq ($(feature-libdebuginfod), 1)
+       CFLAGS += -DHAVE_DEBUGINFOD_SUPPORT
+       EXTLIBS += -ldebuginfod
++    else
++      $(warning No elfutils/debuginfod.h found, no debuginfo server support, please install libdebuginfod-dev/elfutils-debuginfod-client-devel or equivalent)
+     endif
+   endif
+ 
+@@ -625,6 +627,8 @@ endif
+ ifndef NO_LIBUNWIND
+   have_libunwind :=
+ 
++  $(call feature_check,libunwind)
++
+   $(call feature_check,libunwind-x86)
+   ifeq ($(feature-libunwind-x86), 1)
+     $(call detected,CONFIG_LIBUNWIND_X86)
+@@ -649,7 +653,7 @@ ifndef NO_LIBUNWIND
+   endif
+ 
+   ifneq ($(feature-libunwind), 1)
+-    $(warning No libunwind found. Please install libunwind-dev[el] >= 1.1 and/or set LIBUNWIND_DIR)
++    $(warning No libunwind found. Please install libunwind-dev[el] >= 1.1 and/or set LIBUNWIND_DIR and set LIBUNWIND=1 in the make command line as it is opt-in now)
+     NO_LOCAL_LIBUNWIND := 1
+   else
+     have_libunwind := 1
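Taken together with the Makefile.feature hunk above, this makes libunwind
opt-in for perf: it is dropped from the default feature probes and display
list, Makefile.config now issues its own feature_check calls, and the
warning text tells the builder to pass LIBUNWIND=1 explicitly.
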
+diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
+index 979d4691221a07..a7ae5637dadeeb 100644
+--- a/tools/perf/Makefile.perf
++++ b/tools/perf/Makefile.perf
+@@ -1147,7 +1147,8 @@ install-tests: all install-gtk
+ 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_probe'; \
+ 		$(INSTALL) tests/shell/base_probe/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_probe'; \
+ 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
+-		$(INSTALL) tests/shell/base_probe/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
++		$(INSTALL) tests/shell/base_report/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
++		$(INSTALL) tests/shell/base_report/*.txt '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
+ 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/coresight' ; \
+ 		$(INSTALL) tests/shell/coresight/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/coresight'
+ 	$(Q)$(MAKE) -C tests/shell/coresight install-tests
+diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
+index ba20bf7c011d77..d56273a0e241c7 100644
+--- a/tools/perf/builtin-record.c
++++ b/tools/perf/builtin-record.c
+@@ -3480,7 +3480,7 @@ static struct option __record_options[] = {
+ 		    "sample selected machine registers on interrupt,"
+ 		    " use '-I?' to list register names", parse_intr_regs),
+ 	OPT_CALLBACK_OPTARG(0, "user-regs", &record.opts.sample_user_regs, NULL, "any register",
+-		    "sample selected machine registers on interrupt,"
++		    "sample selected machine registers in user space,"
+ 		    " use '--user-regs=?' to list register names", parse_user_regs),
+ 	OPT_BOOLEAN(0, "running-time", &record.opts.running_time,
+ 		    "Record running/enabled time of read (:S) events"),
+diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
+index 6ac51925ea4249..33cce59bdfbdb5 100644
+--- a/tools/perf/builtin-trace.c
++++ b/tools/perf/builtin-trace.c
+@@ -1352,7 +1352,7 @@ static const struct syscall_fmt syscall_fmts[] = {
+ 	  .arg = { [0] = { .scnprintf = SCA_FDAT, /* olddirfd */ },
+ 		   [2] = { .scnprintf = SCA_FDAT, /* newdirfd */ },
+ 		   [4] = { .scnprintf = SCA_RENAMEAT2_FLAGS, /* flags */ }, }, },
+-	{ .name	    = "rseq",	    .errpid = true,
++	{ .name	    = "rseq",
+ 	  .arg = { [0] = { .from_user = true /* rseq */, }, }, },
+ 	{ .name	    = "rt_sigaction",
+ 	  .arg = { [0] = { .scnprintf = SCA_SIGNUM, /* sig */ }, }, },
+@@ -1376,7 +1376,7 @@ static const struct syscall_fmt syscall_fmts[] = {
+ 	{ .name	    = "sendto",
+ 	  .arg = { [3] = { .scnprintf = SCA_MSG_FLAGS, /* flags */ },
+ 		   [4] = SCA_SOCKADDR_FROM_USER(addr), }, },
+-	{ .name	    = "set_robust_list",	    .errpid = true,
++	{ .name	    = "set_robust_list",
+ 	  .arg = { [0] = { .from_user = true /* head */, }, }, },
+ 	{ .name	    = "set_tid_address", .errpid = true, },
+ 	{ .name	    = "setitimer",
+@@ -2842,7 +2842,7 @@ static int trace__fprintf_sys_enter(struct trace *trace, struct evsel *evsel,
+ 	e_machine = thread__e_machine(thread, trace->host);
+ 	sc = trace__syscall_info(trace, evsel, e_machine, id);
+ 	if (sc == NULL)
+-		return -1;
++		goto out_put;
+ 	ttrace = thread__trace(thread, trace);
+ 	/*
+ 	 * We need to get ttrace just to make sure it is there when syscall__scnprintf_args()
+@@ -3005,8 +3005,8 @@ errno_print: {
+ 	else if (sc->fmt->errpid) {
+ 		struct thread *child = machine__find_thread(trace->host, ret, ret);
+ 
++		fprintf(trace->output, "%ld", ret);
+ 		if (child != NULL) {
+-			fprintf(trace->output, "%ld", ret);
+ 			if (thread__comm_set(child))
+ 				fprintf(trace->output, " (%s)", thread__comm_str(child));
+ 			thread__put(child);
+@@ -4128,10 +4128,13 @@ static int trace__set_filter_loop_pids(struct trace *trace)
+ 		if (!strcmp(thread__comm_str(parent), "sshd") ||
+ 		    strstarts(thread__comm_str(parent), "gnome-terminal")) {
+ 			pids[nr++] = thread__tid(parent);
++			thread__put(parent);
+ 			break;
+ 		}
++		thread__put(thread);
+ 		thread = parent;
+ 	}
++	thread__put(thread);
+ 
+ 	err = evlist__append_tp_filter_pids(trace->evlist, nr, pids);
+ 	if (!err && trace->filter_pids.map)
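The builtin-trace hunks are mostly reference-count hygiene: the parent
walk in trace__set_filter_loop_pids() now thread__put()s every thread it
steps over, and the sc == NULL path in trace__fprintf_sys_enter() goes
through out_put so the looked-up thread is released. Dropping .errpid from
rseq and set_robust_list stops treating their plain 0/-errno return values
as pids, and the return value is now printed whether or not a matching
child thread is found.
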
+diff --git a/tools/perf/scripts/python/exported-sql-viewer.py b/tools/perf/scripts/python/exported-sql-viewer.py
+index 121cf61ba1b345..e0b2e7268ef68c 100755
+--- a/tools/perf/scripts/python/exported-sql-viewer.py
++++ b/tools/perf/scripts/python/exported-sql-viewer.py
+@@ -680,7 +680,10 @@ class CallGraphModelBase(TreeModel):
+ 				s = value.replace("%", "\\%")
+ 				s = s.replace("_", "\\_")
+ 				# Translate * and ? into SQL LIKE pattern characters % and _
+-				trans = string.maketrans("*?", "%_")
++				if sys.version_info[0] == 3:
++					trans = str.maketrans("*?", "%_")
++				else:
++					trans = string.maketrans("*?", "%_")
+ 				match = " LIKE '" + str(s).translate(trans) + "'"
+ 			else:
+ 				match = " GLOB '" + str(value) + "'"
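exported-sql-viewer.py keeps its Python 2 compatibility, so the fix is
version-gated: Python 3 removed string.maketrans(), and the equivalent for
this use is the str.maketrans() classmethod.
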
+diff --git a/tools/perf/tests/shell/lib/stat_output.sh b/tools/perf/tests/shell/lib/stat_output.sh
+index 4d4aac547f0109..c2ec7881ec1de4 100644
+--- a/tools/perf/tests/shell/lib/stat_output.sh
++++ b/tools/perf/tests/shell/lib/stat_output.sh
+@@ -151,6 +151,11 @@ check_per_socket()
+ check_metric_only()
+ {
+ 	echo -n "Checking $1 output: metric only "
++	if [ "$(uname -m)" = "s390x" ] && ! grep '^facilities' /proc/cpuinfo  | grep -qw 67
++	then
++		echo "[Skip] CPU-measurement counter facility not installed"
++		return
++	fi
+ 	perf stat --metric-only $2 -e instructions,cycles true
+ 	commachecker --metric-only
+ 	echo "[Success]"
+diff --git a/tools/perf/tests/shell/stat+json_output.sh b/tools/perf/tests/shell/stat+json_output.sh
+index a4f257ea839e13..98fb65274ac4f7 100755
+--- a/tools/perf/tests/shell/stat+json_output.sh
++++ b/tools/perf/tests/shell/stat+json_output.sh
+@@ -176,6 +176,11 @@ check_per_socket()
+ check_metric_only()
+ {
+ 	echo -n "Checking json output: metric only "
++	if [ "$(uname -m)" = "s390x" ] && ! grep '^facilities' /proc/cpuinfo  | grep -qw 67
++	then
++		echo "[Skip] CPU-measurement counter facility not installed"
++		return
++	fi
+ 	perf stat -j --metric-only -e instructions,cycles -o "${stat_output}" true
+ 	$PYTHON $pythonchecker --metric-only --file "${stat_output}"
+ 	echo "[Success]"
+diff --git a/tools/perf/tests/switch-tracking.c b/tools/perf/tests/switch-tracking.c
+index 8df3f9d9ffd2b2..6b3aac283c371c 100644
+--- a/tools/perf/tests/switch-tracking.c
++++ b/tools/perf/tests/switch-tracking.c
+@@ -264,7 +264,7 @@ static int compar(const void *a, const void *b)
+ 	const struct event_node *nodeb = b;
+ 	s64 cmp = nodea->event_time - nodeb->event_time;
+ 
+-	return cmp;
++	return cmp < 0 ? -1 : (cmp > 0 ? 1 : 0);
+ }
+ 
+ static int process_events(struct evlist *evlist,
+diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
+index 35c10509b797f3..992d1c822a97a8 100644
+--- a/tools/perf/ui/browsers/hists.c
++++ b/tools/perf/ui/browsers/hists.c
+@@ -3274,10 +3274,10 @@ static int evsel__hists_browse(struct evsel *evsel, int nr_events, const char *h
+ 				/*
+ 				 * No need to set actions->dso here since
+ 				 * it's just to remove the current filter.
+-				 * Ditto for thread below.
+ 				 */
+ 				do_zoom_dso(browser, actions);
+ 			} else if (top == &browser->hists->thread_filter) {
++				actions->thread = thread;
+ 				do_zoom_thread(browser, actions);
+ 			} else if (top == &browser->hists->socket_filter) {
+ 				do_zoom_socket(browser, actions);
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index 4e8a9b172fbcc7..9b1011fe482671 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -127,6 +127,7 @@ struct intel_pt {
+ 
+ 	bool single_pebs;
+ 	bool sample_pebs;
++	int pebs_data_src_fmt;
+ 	struct evsel *pebs_evsel;
+ 
+ 	u64 evt_sample_type;
+@@ -175,6 +176,7 @@ enum switch_state {
+ struct intel_pt_pebs_event {
+ 	struct evsel *evsel;
+ 	u64 id;
++	int data_src_fmt;
+ };
+ 
+ struct intel_pt_queue {
+@@ -2272,7 +2274,146 @@ static void intel_pt_add_lbrs(struct branch_stack *br_stack,
+ 	}
+ }
+ 
+-static int intel_pt_do_synth_pebs_sample(struct intel_pt_queue *ptq, struct evsel *evsel, u64 id)
++#define P(a, b) PERF_MEM_S(a, b)
++#define OP_LH (P(OP, LOAD) | P(LVL, HIT))
++#define LEVEL(x) P(LVLNUM, x)
++#define REM P(REMOTE, REMOTE)
++#define SNOOP_NONE_MISS (P(SNOOP, NONE) | P(SNOOP, MISS))
++
++#define PERF_PEBS_DATA_SOURCE_GRT_MAX	0x10
++#define PERF_PEBS_DATA_SOURCE_GRT_MASK	(PERF_PEBS_DATA_SOURCE_GRT_MAX - 1)
++
++/* Based on kernel __intel_pmu_pebs_data_source_grt() and pebs_data_source */
++static const u64 pebs_data_source_grt[PERF_PEBS_DATA_SOURCE_GRT_MAX] = {
++	P(OP, LOAD) | P(LVL, MISS) | LEVEL(L3) | P(SNOOP, NA),         /* L3 miss|SNP N/A */
++	OP_LH | P(LVL, L1)  | LEVEL(L1)  | P(SNOOP, NONE),             /* L1 hit|SNP None */
++	OP_LH | P(LVL, LFB) | LEVEL(LFB) | P(SNOOP, NONE),             /* LFB/MAB hit|SNP None */
++	OP_LH | P(LVL, L2)  | LEVEL(L2)  | P(SNOOP, NONE),             /* L2 hit|SNP None */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOP, NONE),             /* L3 hit|SNP None */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOP, HIT),              /* L3 hit|SNP Hit */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOP, HITM),             /* L3 hit|SNP HitM */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOP, HITM),             /* L3 hit|SNP HitM */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOPX, FWD),             /* L3 hit|SNP Fwd */
++	OP_LH | P(LVL, REM_CCE1) | REM | LEVEL(L3) | P(SNOOP, HITM),   /* Remote L3 hit|SNP HitM */
++	OP_LH | P(LVL, LOC_RAM)  | LEVEL(RAM) | P(SNOOP, HIT),         /* RAM hit|SNP Hit */
++	OP_LH | P(LVL, REM_RAM1) | REM | LEVEL(L3) | P(SNOOP, HIT),    /* Remote L3 hit|SNP Hit */
++	OP_LH | P(LVL, LOC_RAM)  | LEVEL(RAM) | SNOOP_NONE_MISS,       /* RAM hit|SNP None or Miss */
++	OP_LH | P(LVL, REM_RAM1) | LEVEL(RAM) | REM | SNOOP_NONE_MISS, /* Remote RAM hit|SNP None or Miss */
++	OP_LH | P(LVL, IO)  | LEVEL(NA) | P(SNOOP, NONE),              /* I/O hit|SNP None */
++	OP_LH | P(LVL, UNC) | LEVEL(NA) | P(SNOOP, NONE),              /* Uncached hit|SNP None */
++};
++
++/* Based on kernel __intel_pmu_pebs_data_source_cmt() and pebs_data_source */
++static const u64 pebs_data_source_cmt[PERF_PEBS_DATA_SOURCE_GRT_MAX] = {
++	P(OP, LOAD) | P(LVL, MISS) | LEVEL(L3) | P(SNOOP, NA),       /* L3 miss|SNP N/A */
++	OP_LH | P(LVL, L1)  | LEVEL(L1)  | P(SNOOP, NONE),           /* L1 hit|SNP None */
++	OP_LH | P(LVL, LFB) | LEVEL(LFB) | P(SNOOP, NONE),           /* LFB/MAB hit|SNP None */
++	OP_LH | P(LVL, L2)  | LEVEL(L2)  | P(SNOOP, NONE),           /* L2 hit|SNP None */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOP, NONE),           /* L3 hit|SNP None */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOP, MISS),           /* L3 hit|SNP Hit */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOP, HIT),            /* L3 hit|SNP HitM */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOPX, FWD),           /* L3 hit|SNP HitM */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOP, HITM),           /* L3 hit|SNP Fwd */
++	OP_LH | P(LVL, REM_CCE1) | REM | LEVEL(L3) | P(SNOOP, HITM), /* Remote L3 hit|SNP HitM */
++	OP_LH | P(LVL, LOC_RAM)  | LEVEL(RAM) | P(SNOOP, NONE),      /* RAM hit|SNP Hit */
++	OP_LH | LEVEL(RAM) | REM | P(SNOOP, NONE),                   /* Remote L3 hit|SNP Hit */
++	OP_LH | LEVEL(RAM) | REM | P(SNOOPX, FWD),                   /* RAM hit|SNP None or Miss */
++	OP_LH | LEVEL(RAM) | REM | P(SNOOP, HITM),                   /* Remote RAM hit|SNP None or Miss */
++	OP_LH | P(LVL, IO)  | LEVEL(NA) | P(SNOOP, NONE),            /* I/O hit|SNP None */
++	OP_LH | P(LVL, UNC) | LEVEL(NA) | P(SNOOP, NONE),            /* Uncached hit|SNP None */
++};
++
++/* Based on kernel pebs_set_tlb_lock() */
++static inline void pebs_set_tlb_lock(u64 *val, bool tlb, bool lock)
++{
++	/*
++	 * TLB access
++	 * 0 = did not miss 2nd level TLB
++	 * 1 = missed 2nd level TLB
++	 */
++	if (tlb)
++		*val |= P(TLB, MISS) | P(TLB, L2);
++	else
++		*val |= P(TLB, HIT) | P(TLB, L1) | P(TLB, L2);
++
++	/* locked prefix */
++	if (lock)
++		*val |= P(LOCK, LOCKED);
++}
++
++/* Based on kernel __grt_latency_data() */
++static u64 intel_pt_grt_latency_data(u8 dse, bool tlb, bool lock, bool blk,
++				     const u64 *pebs_data_source)
++{
++	u64 val;
++
++	dse &= PERF_PEBS_DATA_SOURCE_GRT_MASK;
++	val = pebs_data_source[dse];
++
++	pebs_set_tlb_lock(&val, tlb, lock);
++
++	if (blk)
++		val |= P(BLK, DATA);
++	else
++		val |= P(BLK, NA);
++
++	return val;
++}
++
++/* Default value for data source */
++#define PERF_MEM_NA (PERF_MEM_S(OP, NA)    |\
++		     PERF_MEM_S(LVL, NA)   |\
++		     PERF_MEM_S(SNOOP, NA) |\
++		     PERF_MEM_S(LOCK, NA)  |\
++		     PERF_MEM_S(TLB, NA)   |\
++		     PERF_MEM_S(LVLNUM, NA))
++
++enum DATA_SRC_FORMAT {
++	DATA_SRC_FORMAT_ERR  = -1,
++	DATA_SRC_FORMAT_NA   =  0,
++	DATA_SRC_FORMAT_GRT  =  1,
++	DATA_SRC_FORMAT_CMT  =  2,
++};
++
++/* Based on kernel grt_latency_data() and cmt_latency_data() */
++static u64 intel_pt_get_data_src(u64 mem_aux_info, int data_src_fmt)
++{
++	switch (data_src_fmt) {
++	case DATA_SRC_FORMAT_GRT: {
++		union {
++			u64 val;
++			struct {
++				unsigned int dse:4;
++				unsigned int locked:1;
++				unsigned int stlb_miss:1;
++				unsigned int fwd_blk:1;
++				unsigned int reserved:25;
++			};
++		} x = {.val = mem_aux_info};
++		return intel_pt_grt_latency_data(x.dse, x.stlb_miss, x.locked, x.fwd_blk,
++						 pebs_data_source_grt);
++	}
++	case DATA_SRC_FORMAT_CMT: {
++		union {
++			u64 val;
++			struct {
++				unsigned int dse:5;
++				unsigned int locked:1;
++				unsigned int stlb_miss:1;
++				unsigned int fwd_blk:1;
++				unsigned int reserved:24;
++			};
++		} x = {.val = mem_aux_info};
++		return intel_pt_grt_latency_data(x.dse, x.stlb_miss, x.locked, x.fwd_blk,
++						 pebs_data_source_cmt);
++	}
++	default:
++		return PERF_MEM_NA;
++	}
++}
++
++static int intel_pt_do_synth_pebs_sample(struct intel_pt_queue *ptq, struct evsel *evsel,
++					 u64 id, int data_src_fmt)
+ {
+ 	const struct intel_pt_blk_items *items = &ptq->state->items;
+ 	struct perf_sample sample;
+@@ -2393,6 +2534,18 @@ static int intel_pt_do_synth_pebs_sample(struct intel_pt_queue *ptq, struct evse
+ 		}
+ 	}
+ 
++	if (sample_type & PERF_SAMPLE_DATA_SRC) {
++		if (items->has_mem_aux_info && data_src_fmt) {
++			if (data_src_fmt < 0) {
++				pr_err("Intel PT missing data_src info\n");
++				return -1;
++			}
++			sample.data_src = intel_pt_get_data_src(items->mem_aux_info, data_src_fmt);
++		} else {
++			sample.data_src = PERF_MEM_NA;
++		}
++	}
++
+ 	if (sample_type & PERF_SAMPLE_TRANSACTION && items->has_tsx_aux_info) {
+ 		u64 ax = items->has_rax ? items->rax : 0;
+ 		/* Refer kernel's intel_hsw_transaction() */
+@@ -2413,9 +2566,10 @@ static int intel_pt_synth_single_pebs_sample(struct intel_pt_queue *ptq)
+ {
+ 	struct intel_pt *pt = ptq->pt;
+ 	struct evsel *evsel = pt->pebs_evsel;
++	int data_src_fmt = pt->pebs_data_src_fmt;
+ 	u64 id = evsel->core.id[0];
+ 
+-	return intel_pt_do_synth_pebs_sample(ptq, evsel, id);
++	return intel_pt_do_synth_pebs_sample(ptq, evsel, id, data_src_fmt);
+ }
+ 
+ static int intel_pt_synth_pebs_sample(struct intel_pt_queue *ptq)
+@@ -2440,7 +2594,7 @@ static int intel_pt_synth_pebs_sample(struct intel_pt_queue *ptq)
+ 				       hw_id);
+ 			return intel_pt_synth_single_pebs_sample(ptq);
+ 		}
+-		err = intel_pt_do_synth_pebs_sample(ptq, pe->evsel, pe->id);
++		err = intel_pt_do_synth_pebs_sample(ptq, pe->evsel, pe->id, pe->data_src_fmt);
+ 		if (err)
+ 			return err;
+ 	}
+@@ -3407,6 +3561,49 @@ static int intel_pt_process_itrace_start(struct intel_pt *pt,
+ 					event->itrace_start.tid);
+ }
+ 
++/*
++ * Events with data_src are identified by L1_Hit_Indication;
++ * refer to https://github.com/intel/perfmon
++ */
++static int intel_pt_data_src_fmt(struct intel_pt *pt, struct evsel *evsel)
++{
++	struct perf_env *env = pt->machine->env;
++	int fmt = DATA_SRC_FORMAT_NA;
++
++	if (!env->cpuid)
++		return DATA_SRC_FORMAT_ERR;
++
++	/*
++	 * PEBS-via-PT is only supported on E-core non-hybrid. Of those only
++	 * Gracemont and Crestmont have data_src. Check for:
++	 *	Alderlake N   (Gracemont)
++	 *	Sierra Forest (Crestmont)
++	 *	Grand Ridge   (Crestmont)
++	 */
++
++	if (!strncmp(env->cpuid, "GenuineIntel,6,190,", 19))
++		fmt = DATA_SRC_FORMAT_GRT;
++
++	if (!strncmp(env->cpuid, "GenuineIntel,6,175,", 19) ||
++	    !strncmp(env->cpuid, "GenuineIntel,6,182,", 19))
++		fmt = DATA_SRC_FORMAT_CMT;
++
++	if (fmt == DATA_SRC_FORMAT_NA)
++		return fmt;
++
++	/*
++	 * The only events with data_src are:
++	 *	mem-loads	event=0xd0,umask=0x5
++	 *	mem-stores	event=0xd0,umask=0x6
++	 */
++	if (evsel->core.attr.type == PERF_TYPE_RAW &&
++	    ((evsel->core.attr.config & 0xffff) == 0x5d0 ||
++	     (evsel->core.attr.config & 0xffff) == 0x6d0))
++		return fmt;
++
++	return DATA_SRC_FORMAT_NA;
++}
++
+ static int intel_pt_process_aux_output_hw_id(struct intel_pt *pt,
+ 					     union perf_event *event,
+ 					     struct perf_sample *sample)
+@@ -3427,6 +3624,7 @@ static int intel_pt_process_aux_output_hw_id(struct intel_pt *pt,
+ 
+ 	ptq->pebs[hw_id].evsel = evsel;
+ 	ptq->pebs[hw_id].id = sample->id;
++	ptq->pebs[hw_id].data_src_fmt = intel_pt_data_src_fmt(pt, evsel);
+ 
+ 	return 0;
+ }
+@@ -3976,6 +4174,7 @@ static void intel_pt_setup_pebs_events(struct intel_pt *pt)
+ 			}
+ 			pt->single_pebs = true;
+ 			pt->sample_pebs = true;
++			pt->pebs_data_src_fmt = intel_pt_data_src_fmt(pt, evsel);
+ 			pt->pebs_evsel = evsel;
+ 		}
+ 	}
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 2531b373f2cf7c..b048165b10c141 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -1976,7 +1976,7 @@ static void ip__resolve_ams(struct thread *thread,
+ 	 * Thus, we have to try consecutively until we find a match
+ 	 * or else, the symbol is unknown
+ 	 */
+-	thread__find_cpumode_addr_location(thread, ip, &al);
++	thread__find_cpumode_addr_location(thread, ip, /*symbols=*/true, &al);
+ 
+ 	ams->addr = ip;
+ 	ams->al_addr = al.addr;
+@@ -2078,7 +2078,7 @@ static int add_callchain_ip(struct thread *thread,
+ 	al.sym = NULL;
+ 	al.srcline = NULL;
+ 	if (!cpumode) {
+-		thread__find_cpumode_addr_location(thread, ip, &al);
++		thread__find_cpumode_addr_location(thread, ip, symbols, &al);
+ 	} else {
+ 		if (ip >= PERF_CONTEXT_MAX) {
+ 			switch (ip) {
+@@ -2106,6 +2106,8 @@ static int add_callchain_ip(struct thread *thread,
+ 		}
+ 		if (symbols)
+ 			thread__find_symbol(thread, *cpumode, ip, &al);
++		else
++			thread__find_map(thread, *cpumode, ip, &al);
+ 	}
+ 
+ 	if (al.sym != NULL) {
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index b7ebac5ab1d112..e2e3969e12d360 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -2052,6 +2052,9 @@ static bool perf_pmu___name_match(const struct perf_pmu *pmu, const char *to_mat
+ 	for (size_t i = 0; i < ARRAY_SIZE(names); i++) {
+ 		const char *name = names[i];
+ 
++		if (!name)
++			continue;
++
+ 		if (wildcard && perf_pmu__match_wildcard_uncore(name, to_match))
+ 			return true;
+ 		if (!wildcard && perf_pmu__match_ignoring_suffix_uncore(name, to_match))
+diff --git a/tools/perf/util/symbol-minimal.c b/tools/perf/util/symbol-minimal.c
+index c6f369b5d893f3..36c1d3090689fc 100644
+--- a/tools/perf/util/symbol-minimal.c
++++ b/tools/perf/util/symbol-minimal.c
+@@ -90,11 +90,23 @@ int filename__read_build_id(const char *filename, struct build_id *bid)
+ {
+ 	FILE *fp;
+ 	int ret = -1;
+-	bool need_swap = false;
++	bool need_swap = false, elf32;
+ 	u8 e_ident[EI_NIDENT];
+-	size_t buf_size;
+-	void *buf;
+ 	int i;
++	union {
++		struct {
++			Elf32_Ehdr ehdr32;
++			Elf32_Phdr *phdr32;
++		};
++		struct {
++			Elf64_Ehdr ehdr64;
++			Elf64_Phdr *phdr64;
++		};
++	} hdrs;
++	void *phdr;
++	size_t phdr_size;
++	void *buf = NULL;
++	size_t buf_size = 0;
+ 
+ 	fp = fopen(filename, "r");
+ 	if (fp == NULL)
+@@ -108,117 +120,79 @@ int filename__read_build_id(const char *filename, struct build_id *bid)
+ 		goto out;
+ 
+ 	need_swap = check_need_swap(e_ident[EI_DATA]);
++	elf32 = e_ident[EI_CLASS] == ELFCLASS32;
+ 
+-	/* for simplicity */
+-	fseek(fp, 0, SEEK_SET);
+-
+-	if (e_ident[EI_CLASS] == ELFCLASS32) {
+-		Elf32_Ehdr ehdr;
+-		Elf32_Phdr *phdr;
+-
+-		if (fread(&ehdr, sizeof(ehdr), 1, fp) != 1)
+-			goto out;
++	if (fread(elf32 ? (void *)&hdrs.ehdr32 : (void *)&hdrs.ehdr64,
++		  elf32 ? sizeof(hdrs.ehdr32) : sizeof(hdrs.ehdr64),
++		  1, fp) != 1)
++		goto out;
+ 
+-		if (need_swap) {
+-			ehdr.e_phoff = bswap_32(ehdr.e_phoff);
+-			ehdr.e_phentsize = bswap_16(ehdr.e_phentsize);
+-			ehdr.e_phnum = bswap_16(ehdr.e_phnum);
++	if (need_swap) {
++		if (elf32) {
++			hdrs.ehdr32.e_phoff = bswap_32(hdrs.ehdr32.e_phoff);
++			hdrs.ehdr32.e_phentsize = bswap_16(hdrs.ehdr32.e_phentsize);
++			hdrs.ehdr32.e_phnum = bswap_16(hdrs.ehdr32.e_phnum);
++		} else {
++			hdrs.ehdr64.e_phoff = bswap_64(hdrs.ehdr64.e_phoff);
++			hdrs.ehdr64.e_phentsize = bswap_16(hdrs.ehdr64.e_phentsize);
++			hdrs.ehdr64.e_phnum = bswap_16(hdrs.ehdr64.e_phnum);
+ 		}
++	}
++	phdr_size = elf32 ? hdrs.ehdr32.e_phentsize * hdrs.ehdr32.e_phnum
++			  : hdrs.ehdr64.e_phentsize * hdrs.ehdr64.e_phnum;
++	phdr = malloc(phdr_size);
++	if (phdr == NULL)
++		goto out;
+ 
+-		buf_size = ehdr.e_phentsize * ehdr.e_phnum;
+-		buf = malloc(buf_size);
+-		if (buf == NULL)
+-			goto out;
+-
+-		fseek(fp, ehdr.e_phoff, SEEK_SET);
+-		if (fread(buf, buf_size, 1, fp) != 1)
+-			goto out_free;
+-
+-		for (i = 0, phdr = buf; i < ehdr.e_phnum; i++, phdr++) {
+-			void *tmp;
+-			long offset;
+-
+-			if (need_swap) {
+-				phdr->p_type = bswap_32(phdr->p_type);
+-				phdr->p_offset = bswap_32(phdr->p_offset);
+-				phdr->p_filesz = bswap_32(phdr->p_filesz);
+-			}
+-
+-			if (phdr->p_type != PT_NOTE)
+-				continue;
+-
+-			buf_size = phdr->p_filesz;
+-			offset = phdr->p_offset;
+-			tmp = realloc(buf, buf_size);
+-			if (tmp == NULL)
+-				goto out_free;
+-
+-			buf = tmp;
+-			fseek(fp, offset, SEEK_SET);
+-			if (fread(buf, buf_size, 1, fp) != 1)
+-				goto out_free;
++	fseek(fp, elf32 ? hdrs.ehdr32.e_phoff : hdrs.ehdr64.e_phoff, SEEK_SET);
++	if (fread(phdr, phdr_size, 1, fp) != 1)
++		goto out_free;
+ 
+-			ret = read_build_id(buf, buf_size, bid, need_swap);
+-			if (ret == 0) {
+-				ret = bid->size;
+-				break;
+-			}
+-		}
+-	} else {
+-		Elf64_Ehdr ehdr;
+-		Elf64_Phdr *phdr;
++	if (elf32)
++		hdrs.phdr32 = phdr;
++	else
++		hdrs.phdr64 = phdr;
+ 
+-		if (fread(&ehdr, sizeof(ehdr), 1, fp) != 1)
+-			goto out;
++	for (i = 0; i < (elf32 ? hdrs.ehdr32.e_phnum : hdrs.ehdr64.e_phnum); i++) {
++		size_t p_filesz;
+ 
+ 		if (need_swap) {
+-			ehdr.e_phoff = bswap_64(ehdr.e_phoff);
+-			ehdr.e_phentsize = bswap_16(ehdr.e_phentsize);
+-			ehdr.e_phnum = bswap_16(ehdr.e_phnum);
++			if (elf32) {
++				hdrs.phdr32[i].p_type = bswap_32(hdrs.phdr32[i].p_type);
++				hdrs.phdr32[i].p_offset = bswap_32(hdrs.phdr32[i].p_offset);
++				hdrs.phdr32[i].p_filesz = bswap_32(hdrs.phdr32[i].p_filesz);
++			} else {
++				hdrs.phdr64[i].p_type = bswap_32(hdrs.phdr64[i].p_type);
++				hdrs.phdr64[i].p_offset = bswap_64(hdrs.phdr64[i].p_offset);
++				hdrs.phdr64[i].p_filesz = bswap_64(hdrs.phdr64[i].p_filesz);
++			}
+ 		}
++		if ((elf32 ? hdrs.phdr32[i].p_type : hdrs.phdr64[i].p_type) != PT_NOTE)
++			continue;
+ 
+-		buf_size = ehdr.e_phentsize * ehdr.e_phnum;
+-		buf = malloc(buf_size);
+-		if (buf == NULL)
+-			goto out;
+-
+-		fseek(fp, ehdr.e_phoff, SEEK_SET);
+-		if (fread(buf, buf_size, 1, fp) != 1)
+-			goto out_free;
+-
+-		for (i = 0, phdr = buf; i < ehdr.e_phnum; i++, phdr++) {
++		p_filesz = elf32 ? hdrs.phdr32[i].p_filesz : hdrs.phdr64[i].p_filesz;
++		if (p_filesz > buf_size) {
+ 			void *tmp;
+-			long offset;
+-
+-			if (need_swap) {
+-				phdr->p_type = bswap_32(phdr->p_type);
+-				phdr->p_offset = bswap_64(phdr->p_offset);
+-				phdr->p_filesz = bswap_64(phdr->p_filesz);
+-			}
+ 
+-			if (phdr->p_type != PT_NOTE)
+-				continue;
+-
+-			buf_size = phdr->p_filesz;
+-			offset = phdr->p_offset;
++			buf_size = p_filesz;
+ 			tmp = realloc(buf, buf_size);
+ 			if (tmp == NULL)
+ 				goto out_free;
+-
+ 			buf = tmp;
+-			fseek(fp, offset, SEEK_SET);
+-			if (fread(buf, buf_size, 1, fp) != 1)
+-				goto out_free;
++		}
++		fseek(fp, elf32 ? hdrs.phdr32[i].p_offset : hdrs.phdr64[i].p_offset, SEEK_SET);
++		if (fread(buf, p_filesz, 1, fp) != 1)
++			goto out_free;
+ 
+-			ret = read_build_id(buf, buf_size, bid, need_swap);
+-			if (ret == 0) {
+-				ret = bid->size;
+-				break;
+-			}
++		ret = read_build_id(buf, p_filesz, bid, need_swap);
++		if (ret == 0) {
++			ret = bid->size;
++			break;
+ 		}
+ 	}
+ out_free:
+ 	free(buf);
++	free(phdr);
+ out:
+ 	fclose(fp);
+ 	return ret;
+diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
+index 89585f53c1d5cc..10a01f8fbd4000 100644
+--- a/tools/perf/util/thread.c
++++ b/tools/perf/util/thread.c
+@@ -410,7 +410,7 @@ int thread__fork(struct thread *thread, struct thread *parent, u64 timestamp, bo
+ }
+ 
+ void thread__find_cpumode_addr_location(struct thread *thread, u64 addr,
+-					struct addr_location *al)
++					bool symbols, struct addr_location *al)
+ {
+ 	size_t i;
+ 	const u8 cpumodes[] = {
+@@ -421,7 +421,11 @@ void thread__find_cpumode_addr_location(struct thread *thread, u64 addr,
+ 	};
+ 
+ 	for (i = 0; i < ARRAY_SIZE(cpumodes); i++) {
+-		thread__find_symbol(thread, cpumodes[i], addr, al);
++		if (symbols)
++			thread__find_symbol(thread, cpumodes[i], addr, al);
++		else
++			thread__find_map(thread, cpumodes[i], addr, al);
++
+ 		if (al->map)
+ 			break;
+ 	}
+diff --git a/tools/perf/util/thread.h b/tools/perf/util/thread.h
+index cd574a896418ac..2b90bbed7a6121 100644
+--- a/tools/perf/util/thread.h
++++ b/tools/perf/util/thread.h
+@@ -126,7 +126,7 @@ struct symbol *thread__find_symbol_fb(struct thread *thread, u8 cpumode,
+ 				      u64 addr, struct addr_location *al);
+ 
+ void thread__find_cpumode_addr_location(struct thread *thread, u64 addr,
+-					struct addr_location *al);
++					bool symbols, struct addr_location *al);
+ 
+ int thread__memcpy(struct thread *thread, struct machine *machine,
+ 		   void *buf, u64 ip, int len, bool *is64bit);
+diff --git a/tools/perf/util/tool_pmu.c b/tools/perf/util/tool_pmu.c
+index 97b327d1ce4a01..727a10e3f99001 100644
+--- a/tools/perf/util/tool_pmu.c
++++ b/tools/perf/util/tool_pmu.c
+@@ -486,8 +486,14 @@ int evsel__tool_pmu_read(struct evsel *evsel, int cpu_map_idx, int thread)
+ 		delta_start *= 1000000000 / ticks_per_sec;
+ 	}
+ 	count->val    = delta_start;
+-	count->ena    = count->run = delta_start;
+ 	count->lost   = 0;
++	/*
++	 * The values of enabled and running must make a ratio of 100%. The
++	 * exact values don't matter as long as they are non-zero to avoid
++	 * issues with evsel__count_has_error.
++	 */
++	count->ena++;
++	count->run++;
+ 	return 0;
+ }
+ 
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index 0170d3cc68194c..ab79854cb296e4 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -4766,6 +4766,38 @@ unsigned long pmt_read_counter(struct pmt_counter *ppmt, unsigned int domain_id)
+ 	return (value & value_mask) >> value_shift;
+ }
+ 
++
++/* Rapl domain enumeration helpers */
++static inline int get_rapl_num_domains(void)
++{
++	int num_packages = topo.max_package_id + 1;
++	int num_cores_per_package;
++	int num_cores;
++
++	if (!platform->has_per_core_rapl)
++		return num_packages;
++
++	num_cores_per_package = topo.max_core_id + 1;
++	num_cores = num_cores_per_package * num_packages;
++
++	return num_cores;
++}
++
++static inline int get_rapl_domain_id(int cpu)
++{
++	int nr_cores_per_package = topo.max_core_id + 1;
++	int rapl_core_id;
++
++	if (!platform->has_per_core_rapl)
++		return cpus[cpu].physical_package_id;
++
++	/* Compute the system-wide unique core-id for @cpu */
++	rapl_core_id = cpus[cpu].physical_core_id;
++	rapl_core_id += cpus[cpu].physical_package_id * nr_cores_per_package;
++
++	return rapl_core_id;
++}
++
+ /*
+  * get_counters(...)
+  * migrate to cpu
+@@ -4821,7 +4853,7 @@ int get_counters(struct thread_data *t, struct core_data *c, struct pkg_data *p)
+ 		goto done;
+ 
+ 	if (platform->has_per_core_rapl) {
+-		status = get_rapl_counters(cpu, c->core_id, c, p);
++		status = get_rapl_counters(cpu, get_rapl_domain_id(cpu), c, p);
+ 		if (status != 0)
+ 			return status;
+ 	}
+@@ -4887,7 +4919,7 @@ int get_counters(struct thread_data *t, struct core_data *c, struct pkg_data *p)
+ 		p->sys_lpi = cpuidle_cur_sys_lpi_us;
+ 
+ 	if (!platform->has_per_core_rapl) {
+-		status = get_rapl_counters(cpu, p->package_id, c, p);
++		status = get_rapl_counters(cpu, get_rapl_domain_id(cpu), c, p);
+ 		if (status != 0)
+ 			return status;
+ 	}
+@@ -7863,7 +7895,7 @@ void linux_perf_init(void)
+ 
+ void rapl_perf_init(void)
+ {
+-	const unsigned int num_domains = (platform->has_per_core_rapl ? topo.max_core_id : topo.max_package_id) + 1;
++	const unsigned int num_domains = get_rapl_num_domains();
+ 	bool *domain_visited = calloc(num_domains, sizeof(bool));
+ 
+ 	rapl_counter_info_perdomain = calloc(num_domains, sizeof(*rapl_counter_info_perdomain));
+@@ -7904,8 +7936,7 @@ void rapl_perf_init(void)
+ 				continue;
+ 
+ 			/* Skip already seen and handled RAPL domains */
+-			next_domain =
+-			    platform->has_per_core_rapl ? cpus[cpu].physical_core_id : cpus[cpu].physical_package_id;
++			next_domain = get_rapl_domain_id(cpu);
+ 
+ 			assert(next_domain < num_domains);
+ 
+diff --git a/tools/testing/kunit/qemu_configs/sparc.py b/tools/testing/kunit/qemu_configs/sparc.py
+index 256d9573b44646..2019550a1b692e 100644
+--- a/tools/testing/kunit/qemu_configs/sparc.py
++++ b/tools/testing/kunit/qemu_configs/sparc.py
+@@ -2,6 +2,8 @@ from ..qemu_config import QemuArchParams
+ 
+ QEMU_ARCH = QemuArchParams(linux_arch='sparc',
+ 			   kconfig='''
++CONFIG_KUNIT_FAULT_TEST=n
++CONFIG_SPARC32=y
+ CONFIG_SERIAL_SUNZILOG=y
+ CONFIG_SERIAL_SUNZILOG_CONSOLE=y
+ ''',
+diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
+index 80fb84fa3cfcbd..9c477321a5b474 100644
+--- a/tools/testing/selftests/Makefile
++++ b/tools/testing/selftests/Makefile
+@@ -202,7 +202,7 @@ export KHDR_INCLUDES
+ 
+ all:
+ 	@ret=1;							\
+-	for TARGET in $(TARGETS); do				\
++	for TARGET in $(TARGETS) $(INSTALL_DEP_TARGETS); do	\
+ 		BUILD_TARGET=$$BUILD/$$TARGET;			\
+ 		mkdir $$BUILD_TARGET  -p;			\
+ 		$(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET	\
+diff --git a/tools/testing/selftests/arm64/fp/fp-ptrace.c b/tools/testing/selftests/arm64/fp/fp-ptrace.c
+index 4930e03a7b9903..762048eb354ffe 100644
+--- a/tools/testing/selftests/arm64/fp/fp-ptrace.c
++++ b/tools/testing/selftests/arm64/fp/fp-ptrace.c
+@@ -891,18 +891,11 @@ static void set_initial_values(struct test_config *config)
+ {
+ 	int vq = __sve_vq_from_vl(vl_in(config));
+ 	int sme_vq = __sve_vq_from_vl(config->sme_vl_in);
+-	bool sm_change;
+ 
+ 	svcr_in = config->svcr_in;
+ 	svcr_expected = config->svcr_expected;
+ 	svcr_out = 0;
+ 
+-	if (sme_supported() &&
+-	    (svcr_in & SVCR_SM) != (svcr_expected & SVCR_SM))
+-		sm_change = true;
+-	else
+-		sm_change = false;
+-
+ 	fill_random(&v_in, sizeof(v_in));
+ 	memcpy(v_expected, v_in, sizeof(v_in));
+ 	memset(v_out, 0, sizeof(v_out));
+@@ -953,12 +946,7 @@ static void set_initial_values(struct test_config *config)
+ 	if (fpmr_supported()) {
+ 		fill_random(&fpmr_in, sizeof(fpmr_in));
+ 		fpmr_in &= FPMR_SAFE_BITS;
+-
+-		/* Entering or exiting streaming mode clears FPMR */
+-		if (sm_change)
+-			fpmr_expected = 0;
+-		else
+-			fpmr_expected = fpmr_in;
++		fpmr_expected = fpmr_in;
+ 	} else {
+ 		fpmr_in = 0;
+ 		fpmr_expected = 0;
+diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_nf.c b/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
+index dbd13f8e42a7aa..dd6512fa652be0 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
++++ b/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
+@@ -63,6 +63,12 @@ static void test_bpf_nf_ct(int mode)
+ 		.repeat = 1,
+ 	);
+ 
++	if (SYS_NOFAIL("iptables-legacy --version")) {
++		fprintf(stdout, "Missing required iptables-legacy tool\n");
++		test__skip();
++		return;
++	}
++
+ 	skel = test_bpf_nf__open_and_load();
+ 	if (!ASSERT_OK_PTR(skel, "test_bpf_nf__open_and_load"))
+ 		return;
+diff --git a/tools/testing/selftests/bpf/prog_tests/kmem_cache_iter.c b/tools/testing/selftests/bpf/prog_tests/kmem_cache_iter.c
+index 8e13a3416a21d2..1de14b111931aa 100644
+--- a/tools/testing/selftests/bpf/prog_tests/kmem_cache_iter.c
++++ b/tools/testing/selftests/bpf/prog_tests/kmem_cache_iter.c
+@@ -104,7 +104,7 @@ void test_kmem_cache_iter(void)
+ 		goto destroy;
+ 
+ 	memset(buf, 0, sizeof(buf));
+-	while (read(iter_fd, buf, sizeof(buf) > 0)) {
++	while (read(iter_fd, buf, sizeof(buf)) > 0) {
+ 		/* Read out all contents */
+ 		printf("%s", buf);
+ 	}
+diff --git a/tools/testing/selftests/bpf/progs/verifier_load_acquire.c b/tools/testing/selftests/bpf/progs/verifier_load_acquire.c
+index 77698d5a19e446..a696ab84bfd662 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_load_acquire.c
++++ b/tools/testing/selftests/bpf/progs/verifier_load_acquire.c
+@@ -10,65 +10,81 @@
+ 
+ SEC("socket")
+ __description("load-acquire, 8-bit")
+-__success __success_unpriv __retval(0x12)
++__success __success_unpriv __retval(0)
+ __naked void load_acquire_8(void)
+ {
+ 	asm volatile (
++	"r0 = 0;"
+ 	"w1 = 0x12;"
+ 	"*(u8 *)(r10 - 1) = w1;"
+-	".8byte %[load_acquire_insn];" // w0 = load_acquire((u8 *)(r10 - 1));
++	".8byte %[load_acquire_insn];" // w2 = load_acquire((u8 *)(r10 - 1));
++	"if r2 == r1 goto 1f;"
++	"r0 = 1;"
++"1:"
+ 	"exit;"
+ 	:
+ 	: __imm_insn(load_acquire_insn,
+-		     BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -1))
++		     BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ, BPF_REG_2, BPF_REG_10, -1))
+ 	: __clobber_all);
+ }
+ 
+ SEC("socket")
+ __description("load-acquire, 16-bit")
+-__success __success_unpriv __retval(0x1234)
++__success __success_unpriv __retval(0)
+ __naked void load_acquire_16(void)
+ {
+ 	asm volatile (
++	"r0 = 0;"
+ 	"w1 = 0x1234;"
+ 	"*(u16 *)(r10 - 2) = w1;"
+-	".8byte %[load_acquire_insn];" // w0 = load_acquire((u16 *)(r10 - 2));
++	".8byte %[load_acquire_insn];" // w2 = load_acquire((u16 *)(r10 - 2));
++	"if r2 == r1 goto 1f;"
++	"r0 = 1;"
++"1:"
+ 	"exit;"
+ 	:
+ 	: __imm_insn(load_acquire_insn,
+-		     BPF_ATOMIC_OP(BPF_H, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -2))
++		     BPF_ATOMIC_OP(BPF_H, BPF_LOAD_ACQ, BPF_REG_2, BPF_REG_10, -2))
+ 	: __clobber_all);
+ }
+ 
+ SEC("socket")
+ __description("load-acquire, 32-bit")
+-__success __success_unpriv __retval(0x12345678)
++__success __success_unpriv __retval(0)
+ __naked void load_acquire_32(void)
+ {
+ 	asm volatile (
++	"r0 = 0;"
+ 	"w1 = 0x12345678;"
+ 	"*(u32 *)(r10 - 4) = w1;"
+-	".8byte %[load_acquire_insn];" // w0 = load_acquire((u32 *)(r10 - 4));
++	".8byte %[load_acquire_insn];" // w2 = load_acquire((u32 *)(r10 - 4));
++	"if r2 == r1 goto 1f;"
++	"r0 = 1;"
++"1:"
+ 	"exit;"
+ 	:
+ 	: __imm_insn(load_acquire_insn,
+-		     BPF_ATOMIC_OP(BPF_W, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -4))
++		     BPF_ATOMIC_OP(BPF_W, BPF_LOAD_ACQ, BPF_REG_2, BPF_REG_10, -4))
+ 	: __clobber_all);
+ }
+ 
+ SEC("socket")
+ __description("load-acquire, 64-bit")
+-__success __success_unpriv __retval(0x1234567890abcdef)
++__success __success_unpriv __retval(0)
+ __naked void load_acquire_64(void)
+ {
+ 	asm volatile (
++	"r0 = 0;"
+ 	"r1 = 0x1234567890abcdef ll;"
+ 	"*(u64 *)(r10 - 8) = r1;"
+-	".8byte %[load_acquire_insn];" // r0 = load_acquire((u64 *)(r10 - 8));
++	".8byte %[load_acquire_insn];" // r2 = load_acquire((u64 *)(r10 - 8));
++	"if r2 == r1 goto 1f;"
++	"r0 = 1;"
++"1:"
+ 	"exit;"
+ 	:
+ 	: __imm_insn(load_acquire_insn,
+-		     BPF_ATOMIC_OP(BPF_DW, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -8))
++		     BPF_ATOMIC_OP(BPF_DW, BPF_LOAD_ACQ, BPF_REG_2, BPF_REG_10, -8))
+ 	: __clobber_all);
+ }
+ 
+diff --git a/tools/testing/selftests/bpf/progs/verifier_store_release.c b/tools/testing/selftests/bpf/progs/verifier_store_release.c
+index c0442d5bb049d8..022d03d9835957 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_store_release.c
++++ b/tools/testing/selftests/bpf/progs/verifier_store_release.c
+@@ -11,13 +11,17 @@
+ 
+ SEC("socket")
+ __description("store-release, 8-bit")
+-__success __success_unpriv __retval(0x12)
++__success __success_unpriv __retval(0)
+ __naked void store_release_8(void)
+ {
+ 	asm volatile (
++	"r0 = 0;"
+ 	"w1 = 0x12;"
+ 	".8byte %[store_release_insn];" // store_release((u8 *)(r10 - 1), w1);
+-	"w0 = *(u8 *)(r10 - 1);"
++	"w2 = *(u8 *)(r10 - 1);"
++	"if r2 == r1 goto 1f;"
++	"r0 = 1;"
++"1:"
+ 	"exit;"
+ 	:
+ 	: __imm_insn(store_release_insn,
+@@ -27,13 +31,17 @@ __naked void store_release_8(void)
+ 
+ SEC("socket")
+ __description("store-release, 16-bit")
+-__success __success_unpriv __retval(0x1234)
++__success __success_unpriv __retval(0)
+ __naked void store_release_16(void)
+ {
+ 	asm volatile (
++	"r0 = 0;"
+ 	"w1 = 0x1234;"
+ 	".8byte %[store_release_insn];" // store_release((u16 *)(r10 - 2), w1);
+-	"w0 = *(u16 *)(r10 - 2);"
++	"w2 = *(u16 *)(r10 - 2);"
++	"if r2 == r1 goto 1f;"
++	"r0 = 1;"
++"1:"
+ 	"exit;"
+ 	:
+ 	: __imm_insn(store_release_insn,
+@@ -43,13 +51,17 @@ __naked void store_release_16(void)
+ 
+ SEC("socket")
+ __description("store-release, 32-bit")
+-__success __success_unpriv __retval(0x12345678)
++__success __success_unpriv __retval(0)
+ __naked void store_release_32(void)
+ {
+ 	asm volatile (
++	"r0 = 0;"
+ 	"w1 = 0x12345678;"
+ 	".8byte %[store_release_insn];" // store_release((u32 *)(r10 - 4), w1);
+-	"w0 = *(u32 *)(r10 - 4);"
++	"w2 = *(u32 *)(r10 - 4);"
++	"if r2 == r1 goto 1f;"
++	"r0 = 1;"
++"1:"
+ 	"exit;"
+ 	:
+ 	: __imm_insn(store_release_insn,
+@@ -59,13 +71,17 @@ __naked void store_release_32(void)
+ 
+ SEC("socket")
+ __description("store-release, 64-bit")
+-__success __success_unpriv __retval(0x1234567890abcdef)
++__success __success_unpriv __retval(0)
+ __naked void store_release_64(void)
+ {
+ 	asm volatile (
++	"r0 = 0;"
+ 	"r1 = 0x1234567890abcdef ll;"
+ 	".8byte %[store_release_insn];" // store_release((u64 *)(r10 - 8), r1);
+-	"r0 = *(u64 *)(r10 - 8);"
++	"r2 = *(u64 *)(r10 - 8);"
++	"if r2 == r1 goto 1f;"
++	"r0 = 1;"
++"1:"
+ 	"exit;"
+ 	:
+ 	: __imm_insn(store_release_insn,
+diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c
+index 49f2fc61061f5d..9551d8d5f8f9f8 100644
+--- a/tools/testing/selftests/bpf/test_loader.c
++++ b/tools/testing/selftests/bpf/test_loader.c
+@@ -1042,6 +1042,14 @@ void run_subtest(struct test_loader *tester,
+ 	emit_verifier_log(tester->log_buf, false /*force*/);
+ 	validate_msgs(tester->log_buf, &subspec->expect_msgs, emit_verifier_log);
+ 
++	/* Restore capabilities because the kernel will silently ignore requests
++	 * for program info (such as xlated program text) if we are not
++	 * bpf-capable. Also, for some reason test_verifier executes programs
++	 * with all capabilities restored. Do the same here.
++	 */
++	if (restore_capabilities(&caps))
++		goto tobj_cleanup;
++
+ 	if (subspec->expect_xlated.cnt) {
+ 		err = get_xlated_program_text(bpf_program__fd(tprog),
+ 					      tester->log_buf, tester->log_buf_sz);
+@@ -1067,12 +1075,6 @@ void run_subtest(struct test_loader *tester,
+ 	}
+ 
+ 	if (should_do_test_run(spec, subspec)) {
+-		/* For some reason test_verifier executes programs
+-		 * with all capabilities restored. Do the same here.
+-		 */
+-		if (restore_capabilities(&caps))
+-			goto tobj_cleanup;
+-
+ 		/* Do bpf_map__attach_struct_ops() for each struct_ops map.
+ 		 * This should trigger bpf_struct_ops->reg callback on kernel side.
+ 		 */
+diff --git a/tools/testing/selftests/coredump/stackdump_test.c b/tools/testing/selftests/coredump/stackdump_test.c
+index 137b2364a08207..fe3c728cd6be5a 100644
+--- a/tools/testing/selftests/coredump/stackdump_test.c
++++ b/tools/testing/selftests/coredump/stackdump_test.c
+@@ -89,14 +89,14 @@ FIXTURE_TEARDOWN(coredump)
+ 	fprintf(stderr, "Failed to cleanup stackdump test: %s\n", reason);
+ }
+ 
+-TEST_F(coredump, stackdump)
++TEST_F_TIMEOUT(coredump, stackdump, 120)
+ {
+ 	struct sigaction action = {};
+ 	unsigned long long stack;
+ 	char *test_dir, *line;
+ 	size_t line_length;
+ 	char buf[PATH_MAX];
+-	int ret, i;
++	int ret, i, status;
+ 	FILE *file;
+ 	pid_t pid;
+ 
+@@ -129,6 +129,10 @@ TEST_F(coredump, stackdump)
+ 	/*
+ 	 * Step 3: Wait for the stackdump script to write the stack pointers to the stackdump file
+ 	 */
++	waitpid(pid, &status, 0);
++	ASSERT_TRUE(WIFSIGNALED(status));
++	ASSERT_TRUE(WCOREDUMP(status));
++
+ 	for (i = 0; i < 10; ++i) {
+ 		file = fopen(STACKDUMP_FILE, "r");
+ 		if (file)
+@@ -138,10 +142,12 @@ TEST_F(coredump, stackdump)
+ 	ASSERT_NE(file, NULL);
+ 
+ 	/* Step 4: Make sure all stack pointer values are non-zero */
++	line = NULL;
+ 	for (i = 0; -1 != getline(&line, &line_length, file); ++i) {
+ 		stack = strtoull(line, NULL, 10);
+ 		ASSERT_NE(stack, 0);
+ 	}
++	free(line);
+ 
+ 	ASSERT_EQ(i, 1 + NUM_THREAD_SPAWN);
+ 
+diff --git a/tools/testing/selftests/cpufreq/cpufreq.sh b/tools/testing/selftests/cpufreq/cpufreq.sh
+index e350c521b46750..3aad9db921b533 100755
+--- a/tools/testing/selftests/cpufreq/cpufreq.sh
++++ b/tools/testing/selftests/cpufreq/cpufreq.sh
+@@ -244,9 +244,10 @@ do_suspend()
+ 					printf "Failed to suspend using RTC wake alarm\n"
+ 					return 1
+ 				fi
++			else
++				echo $filename > $SYSFS/power/state
+ 			fi
+ 
+-			echo $filename > $SYSFS/power/state
+ 			printf "Came out of $1\n"
+ 
+ 			printf "Do basic tests after finishing $1 to verify cpufreq state\n\n"
+diff --git a/tools/testing/selftests/drivers/net/hw/tso.py b/tools/testing/selftests/drivers/net/hw/tso.py
+index e1ecb92f79d9b4..3370827409aa02 100755
+--- a/tools/testing/selftests/drivers/net/hw/tso.py
++++ b/tools/testing/selftests/drivers/net/hw/tso.py
+@@ -39,7 +39,7 @@ def run_one_stream(cfg, ipver, remote_v4, remote_v6, should_lso):
+     port = rand_port()
+     listen_cmd = f"socat -{ipver} -t 2 -u TCP-LISTEN:{port},reuseport /dev/null,ignoreeof"
+ 
+-    with bkg(listen_cmd, host=cfg.remote) as nc:
++    with bkg(listen_cmd, host=cfg.remote, exit_wait=True) as nc:
+         wait_port_listen(port, host=cfg.remote)
+ 
+         if ipver == "4":
+@@ -216,7 +216,7 @@ def main() -> None:
+             ("",            "6", "tx-tcp6-segmentation",          None),
+             ("vxlan",        "", "tx-udp_tnl-segmentation",       ("vxlan",  True,  "id 100 dstport 4789 noudpcsum")),
+             ("vxlan_csum",   "", "tx-udp_tnl-csum-segmentation",  ("vxlan",  False, "id 100 dstport 4789 udpcsum")),
+-            ("gre",         "4", "tx-gre-segmentation",           ("ipgre",  False,  "")),
++            ("gre",         "4", "tx-gre-segmentation",           ("gre",    False,  "")),
+             ("gre",         "6", "tx-gre-segmentation",           ("ip6gre", False,  "")),
+         )
+ 
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index b2f76a52215ad2..61acbd45ffaaf8 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -1629,14 +1629,8 @@ void teardown_trace_fixture(struct __test_metadata *_metadata,
+ {
+ 	if (tracer) {
+ 		int status;
+-		/*
+-		 * Extract the exit code from the other process and
+-		 * adopt it for ourselves in case its asserts failed.
+-		 */
+ 		ASSERT_EQ(0, kill(tracer, SIGUSR1));
+ 		ASSERT_EQ(tracer, waitpid(tracer, &status, 0));
+-		if (WEXITSTATUS(status))
+-			_metadata->exit_code = KSFT_FAIL;
+ 	}
+ }
+ 
+@@ -3166,12 +3160,15 @@ TEST(syscall_restart)
+ 	ret = get_syscall(_metadata, child_pid);
+ #if defined(__arm__)
+ 	/*
+-	 * FIXME:
+ 	 * - native ARM registers do NOT expose true syscall.
+ 	 * - compat ARM registers on ARM64 DO expose true syscall.
++	 * - values of utsbuf.machine include 'armv8l' or 'armv8b'
++	 *   for ARM64 running in compat mode.
+ 	 */
+ 	ASSERT_EQ(0, uname(&utsbuf));
+-	if (strncmp(utsbuf.machine, "arm", 3) == 0) {
++	if ((strncmp(utsbuf.machine, "arm", 3) == 0) &&
++	    (strncmp(utsbuf.machine, "armv8l", 6) != 0) &&
++	    (strncmp(utsbuf.machine, "armv8b", 6) != 0)) {
+ 		EXPECT_EQ(__NR_nanosleep, ret);
+ 	} else
+ #endif
+diff --git a/tools/tracing/rtla/src/timerlat_bpf.c b/tools/tracing/rtla/src/timerlat_bpf.c
+index 5abee884037aeb..0bc44ce5d69bd9 100644
+--- a/tools/tracing/rtla/src/timerlat_bpf.c
++++ b/tools/tracing/rtla/src/timerlat_bpf.c
+@@ -1,5 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #ifdef HAVE_BPF_SKEL
++#define _GNU_SOURCE
+ #include "timerlat.h"
+ #include "timerlat_bpf.h"
+ #include "timerlat.skel.h"
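
The switch-tracking.c comparator fix above matters because qsort()'s
callback must return an int: handing back a 64-bit time difference lets
the truncated low bits flip or erase the ordering. A minimal standalone
sketch of the failure mode (the values are made up for illustration and
are not taken from the patch):

	#include <stdio.h>
	#include <stdint.h>

	/* Buggy: implicit s64 -> int truncation, as before the fix */
	static int compar_buggy(const void *a, const void *b)
	{
		int64_t d = *(const int64_t *)a - *(const int64_t *)b;
		return d;
	}

	/* Fixed: return only the sign, as switch-tracking.c now does */
	static int compar_fixed(const void *a, const void *b)
	{
		int64_t d = *(const int64_t *)a - *(const int64_t *)b;
		return d < 0 ? -1 : (d > 0 ? 1 : 0);
	}

	int main(void)
	{
		int64_t x = 0, y = (int64_t)1 << 32;	/* x - y == -2^32 */

		/* low 32 bits of -2^32 are zero: the buggy version says "equal" */
		printf("buggy: %d, fixed: %d\n",
		       compar_buggy(&x, &y), compar_fixed(&x, &y));
		return 0;
	}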



* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-06-24 17:42 Mike Pagano
  0 siblings, 0 replies; 19+ messages in thread
From: Mike Pagano @ 2025-06-24 17:42 UTC (permalink / raw
  To: gentoo-commits

commit:     2c2f45c6a8efe7de6077d4079ac0dcab67063ba4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 24 17:42:15 2025 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun 24 17:42:15 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2c2f45c6

alt/sched: Fix build error when CONFIG_PREEMPT_DYNAMIC is unset

Bug: https://bugs.gentoo.org/958623

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                     |  4 ++++
 5022_BMQ-CONFIG-PREEMPT-DYNAMIC-unset-fix.patch | 27 +++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/0000_README b/0000_README
index e54206b2..2eed9e1f 100644
--- a/0000_README
+++ b/0000_README
@@ -97,3 +97,7 @@ Desc:   BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incl
 Patch:  5021_BMQ-and-PDS-gentoo-defaults.patch
 From:   https://gitweb.gentoo.org/proj/linux-patches.git/
 Desc:   Set defaults for BMQ. Add archs as people test, default to N
+
+Patch:  5022_BMQ-CONFIG-PREEMPT-DYNAMIC-unset-fix.patch
+From:   https://gitlab.com/alfredchen/linux-prjc/-/commit/9c1e782466888962387717b6dc5dba7248d8fa15.patch
+Desc:   alt/sched: Fix build error when CONFIG_PREEMPT_DYNAMIC is unset

diff --git a/5022_BMQ-CONFIG-PREEMPT-DYNAMIC-unset-fix.patch b/5022_BMQ-CONFIG-PREEMPT-DYNAMIC-unset-fix.patch
new file mode 100644
index 00000000..d7bafa2d
--- /dev/null
+++ b/5022_BMQ-CONFIG-PREEMPT-DYNAMIC-unset-fix.patch
@@ -0,0 +1,27 @@
+From 9c1e782466888962387717b6dc5dba7248d8fa15 Mon Sep 17 00:00:00 2001
+From: Alfred Chen <cchalpha@gmail.com>
+Date: Tue, 27 May 2025 14:12:11 +0800
+Subject: [PATCH] alt/sched: Fix build error when CONFIG_PREEMPT_DYNAMIC is
+ unset
+
+Pick up missing mainline code for !109.
+---
+ kernel/sched/alt_core.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+index 0afd3670e9bb..e4d43e14beb5 100644
+--- a/kernel/sched/alt_core.c
++++ b/kernel/sched/alt_core.c
+@@ -5604,6 +5604,8 @@ PREEMPT_MODEL_ACCESSOR(lazy);
+ 
+ #else /* !CONFIG_PREEMPT_DYNAMIC: */
+ 
++#define preempt_dynamic_mode -1
++
+ static inline void preempt_dynamic_init(void) { }
+ 
+ #endif /* CONFIG_PREEMPT_DYNAMIC */
+-- 
+GitLab
+
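
The fix above works because every user of preempt_dynamic_mode still has
to compile when CONFIG_PREEMPT_DYNAMIC is unset; defining the symbol to a
constant in the !CONFIG_PREEMPT_DYNAMIC branch is cheaper than adding an
#ifdef at each use site. A minimal sketch of the pattern (the two
fallback definitions mirror the patch; the surrounding scaffolding is
illustrative only, not taken from alt_core.c):

	#include <stdio.h>

	/* #define CONFIG_PREEMPT_DYNAMIC 1 */	/* toggle to emulate =y */

	#ifdef CONFIG_PREEMPT_DYNAMIC
	static int preempt_dynamic_mode;	/* selected at runtime */
	static void preempt_dynamic_init(void) { preempt_dynamic_mode = 1; }
	#else
	/* !CONFIG_PREEMPT_DYNAMIC: constant keeps shared code compiling */
	#define preempt_dynamic_mode -1
	static inline void preempt_dynamic_init(void) { }
	#endif

	int main(void)
	{
		preempt_dynamic_init();
		/* shared code can test the mode in either configuration */
		printf("preempt_dynamic_mode = %d\n", preempt_dynamic_mode);
		return 0;
	}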



* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-06-27 11:17 Mike Pagano
  0 siblings, 0 replies; 19+ messages in thread
From: Mike Pagano @ 2025-06-27 11:17 UTC (permalink / raw
  To: gentoo-commits

commit:     36e39a5f6ac91aa1bf8fb9910a76bd73add9feef
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jun 27 11:16:53 2025 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jun 27 11:16:53 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=36e39a5f

Linux patch 6.15.4

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |     4 +
 1003_linux-6.15.4.patch | 24203 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 24207 insertions(+)

diff --git a/0000_README b/0000_README
index 2eed9e1f..703fe152 100644
--- a/0000_README
+++ b/0000_README
@@ -54,6 +54,10 @@ Patch:  1002_linux-6.15.3.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.15.3
 
+Patch:  1003_linux-6.15.4.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.15.4
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1003_linux-6.15.4.patch b/1003_linux-6.15.4.patch
new file mode 100644
index 00000000..996196dd
--- /dev/null
+++ b/1003_linux-6.15.4.patch
@@ -0,0 +1,24203 @@
+diff --git a/Documentation/devicetree/bindings/i2c/nvidia,tegra20-i2c.yaml b/Documentation/devicetree/bindings/i2c/nvidia,tegra20-i2c.yaml
+index b57ae6963e6298..6b6f6762d122f9 100644
+--- a/Documentation/devicetree/bindings/i2c/nvidia,tegra20-i2c.yaml
++++ b/Documentation/devicetree/bindings/i2c/nvidia,tegra20-i2c.yaml
+@@ -97,7 +97,10 @@ properties:
+ 
+   resets:
+     items:
+-      - description: module reset
++      - description:
++          Module reset. This property is optional for controllers in Tegra194,
++          Tegra234, etc., where an internal software reset is available as an
++          alternative.
+ 
+   reset-names:
+     items:
+@@ -116,6 +119,13 @@ properties:
+       - const: rx
+       - const: tx
+ 
++required:
++  - compatible
++  - reg
++  - interrupts
++  - clocks
++  - clock-names
++
+ allOf:
+   - $ref: /schemas/i2c/i2c-controller.yaml
+   - if:
+@@ -169,6 +179,18 @@ allOf:
+       properties:
+         power-domains: false
+ 
++  - if:
++      not:
++        properties:
++          compatible:
++            contains:
++              enum:
++                - nvidia,tegra194-i2c
++    then:
++      required:
++        - resets
++        - reset-names
++
+ unevaluatedProperties: false
+ 
+ examples:
+diff --git a/Documentation/gpu/nouveau.rst b/Documentation/gpu/nouveau.rst
+index 0f34131ccc2788..cab2e81013bc5f 100644
+--- a/Documentation/gpu/nouveau.rst
++++ b/Documentation/gpu/nouveau.rst
+@@ -25,5 +25,8 @@ providing a consistent API to upper layers of the driver stack.
+ GSP Support
+ ------------------------
+ 
+-.. kernel-doc:: drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
++.. kernel-doc:: drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c
+    :doc: GSP message queue element
++
++.. kernel-doc:: drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h
++   :doc: GSP message handling policy
+diff --git a/Makefile b/Makefile
+index 01ddb4eb3659f4..c1bde4eef2bfb2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 15
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm/mach-omap2/clockdomain.h b/arch/arm/mach-omap2/clockdomain.h
+index c36fb27212615a..86a2f9e5d0ef9d 100644
+--- a/arch/arm/mach-omap2/clockdomain.h
++++ b/arch/arm/mach-omap2/clockdomain.h
+@@ -48,6 +48,7 @@
+ #define CLKDM_NO_AUTODEPS			(1 << 4)
+ #define CLKDM_ACTIVE_WITH_MPU			(1 << 5)
+ #define CLKDM_MISSING_IDLE_REPORTING		(1 << 6)
++#define CLKDM_STANDBY_FORCE_WAKEUP		BIT(7)
+ 
+ #define CLKDM_CAN_HWSUP		(CLKDM_CAN_ENABLE_AUTO | CLKDM_CAN_DISABLE_AUTO)
+ #define CLKDM_CAN_SWSUP		(CLKDM_CAN_FORCE_SLEEP | CLKDM_CAN_FORCE_WAKEUP)
+diff --git a/arch/arm/mach-omap2/clockdomains33xx_data.c b/arch/arm/mach-omap2/clockdomains33xx_data.c
+index 87f4e927eb1830..c05a3c07d44863 100644
+--- a/arch/arm/mach-omap2/clockdomains33xx_data.c
++++ b/arch/arm/mach-omap2/clockdomains33xx_data.c
+@@ -19,7 +19,7 @@ static struct clockdomain l4ls_am33xx_clkdm = {
+ 	.pwrdm		= { .name = "per_pwrdm" },
+ 	.cm_inst	= AM33XX_CM_PER_MOD,
+ 	.clkdm_offs	= AM33XX_CM_PER_L4LS_CLKSTCTRL_OFFSET,
+-	.flags		= CLKDM_CAN_SWSUP,
++	.flags		= CLKDM_CAN_SWSUP | CLKDM_STANDBY_FORCE_WAKEUP,
+ };
+ 
+ static struct clockdomain l3s_am33xx_clkdm = {
+diff --git a/arch/arm/mach-omap2/cm33xx.c b/arch/arm/mach-omap2/cm33xx.c
+index acdf72a541c02a..a4dd42abda89b0 100644
+--- a/arch/arm/mach-omap2/cm33xx.c
++++ b/arch/arm/mach-omap2/cm33xx.c
+@@ -20,6 +20,9 @@
+ #include "cm-regbits-34xx.h"
+ #include "cm-regbits-33xx.h"
+ #include "prm33xx.h"
++#if IS_ENABLED(CONFIG_SUSPEND)
++#include <linux/suspend.h>
++#endif
+ 
+ /*
+  * CLKCTRL_IDLEST_*: possible values for the CM_*_CLKCTRL.IDLEST bitfield:
+@@ -328,8 +331,17 @@ static int am33xx_clkdm_clk_disable(struct clockdomain *clkdm)
+ {
+ 	bool hwsup = false;
+ 
++#if IS_ENABLED(CONFIG_SUSPEND)
++	/*
++	 * In case of standby, don't put the l4ls clk domain to sleep.
++	 * Since CM3 PM FW doesn't wake up/enable the l4ls clk domain
++	 * upon wake-up, CM3 PM FW fails to wake up the MPU.
++	 */
++	if (pm_suspend_target_state == PM_SUSPEND_STANDBY &&
++	    (clkdm->flags & CLKDM_STANDBY_FORCE_WAKEUP))
++		return 0;
++#endif
+ 	hwsup = am33xx_cm_is_clkdm_in_hwsup(clkdm->cm_inst, clkdm->clkdm_offs);
+-
+ 	if (!hwsup && (clkdm->flags & CLKDM_CAN_FORCE_SLEEP))
+ 		am33xx_clkdm_sleep(clkdm);
+ 
+diff --git a/arch/arm/mach-omap2/pmic-cpcap.c b/arch/arm/mach-omap2/pmic-cpcap.c
+index 4f31e61c0c90ca..9f9a20274db848 100644
+--- a/arch/arm/mach-omap2/pmic-cpcap.c
++++ b/arch/arm/mach-omap2/pmic-cpcap.c
+@@ -264,7 +264,11 @@ int __init omap4_cpcap_init(void)
+ 
+ static int __init cpcap_late_init(void)
+ {
+-	omap4_vc_set_pmic_signaling(PWRDM_POWER_RET);
++	if (!of_find_compatible_node(NULL, NULL, "motorola,cpcap"))
++		return 0;
++
++	if (soc_is_omap443x() || soc_is_omap446x() || soc_is_omap447x())
++		omap4_vc_set_pmic_signaling(PWRDM_POWER_RET);
+ 
+ 	return 0;
+ }
+diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
+index 748698e91a4b46..27e64f782cb3dc 100644
+--- a/arch/arm/mm/ioremap.c
++++ b/arch/arm/mm/ioremap.c
+@@ -515,7 +515,5 @@ void __init early_ioremap_init(void)
+ bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
+ 				 unsigned long flags)
+ {
+-	unsigned long pfn = PHYS_PFN(offset);
+-
+-	return memblock_is_map_memory(pfn);
++	return memblock_is_map_memory(offset);
+ }
+diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
+index eba1a98657f132..aa9efee17277d4 100644
+--- a/arch/arm64/include/asm/tlbflush.h
++++ b/arch/arm64/include/asm/tlbflush.h
+@@ -323,13 +323,14 @@ static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
+ }
+ 
+ /*
+- * If mprotect/munmap/etc occurs during TLB batched flushing, we need to
+- * synchronise all the TLBI issued with a DSB to avoid the race mentioned in
+- * flush_tlb_batched_pending().
++ * If mprotect/munmap/etc occurs during TLB batched flushing, we need to ensure
++ * all the previously issued TLBIs targeting mm have completed. But since we
++ * can be executing on a remote CPU, a DSB cannot guarantee this like it can
++ * for arch_tlbbatch_flush(). Our only option is to flush the entire mm.
+  */
+ static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
+ {
+-	dsb(ish);
++	flush_tlb_mm(mm);
+ }
+ 
+ /*
+diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
+index f79b0d5f71ac94..f8345c39c70541 100644
+--- a/arch/arm64/kernel/ptrace.c
++++ b/arch/arm64/kernel/ptrace.c
+@@ -141,7 +141,7 @@ unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs, unsigned int n)
+ 
+ 	addr += n;
+ 	if (regs_within_kernel_stack(regs, (unsigned long)addr))
+-		return *addr;
++		return READ_ONCE_NOCHECK(*addr);
+ 	else
+ 		return 0;
+ }
+diff --git a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
+index 502a5b73ee70c2..73881e1dc26794 100644
+--- a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
++++ b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
+@@ -167,6 +167,9 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu)
+ 
+ 	__debug_save_state(guest_dbg, guest_ctxt);
+ 	__debug_restore_state(host_dbg, host_ctxt);
++
++	if (has_vhe())
++		isb();
+ }
+ 
+ #endif /* __ARM64_KVM_HYP_DEBUG_SR_H__ */
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index ea6695d53fb967..5a9bf291c649b4 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -1286,7 +1286,8 @@ int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
+ 	next = addr;
+ 	end = addr + PUD_SIZE;
+ 	do {
+-		pmd_free_pte_page(pmdp, next);
++		if (pmd_present(pmdp_get(pmdp)))
++			pmd_free_pte_page(pmdp, next);
+ 	} while (pmdp++, next += PMD_SIZE, next != end);
+ 
+ 	pud_clear(pudp);
+diff --git a/arch/loongarch/include/asm/irqflags.h b/arch/loongarch/include/asm/irqflags.h
+index 319a8c616f1f5b..003172b8406be7 100644
+--- a/arch/loongarch/include/asm/irqflags.h
++++ b/arch/loongarch/include/asm/irqflags.h
+@@ -14,40 +14,48 @@
+ static inline void arch_local_irq_enable(void)
+ {
+ 	u32 flags = CSR_CRMD_IE;
++	register u32 mask asm("t0") = CSR_CRMD_IE;
++
+ 	__asm__ __volatile__(
+ 		"csrxchg %[val], %[mask], %[reg]\n\t"
+ 		: [val] "+r" (flags)
+-		: [mask] "r" (CSR_CRMD_IE), [reg] "i" (LOONGARCH_CSR_CRMD)
++		: [mask] "r" (mask), [reg] "i" (LOONGARCH_CSR_CRMD)
+ 		: "memory");
+ }
+ 
+ static inline void arch_local_irq_disable(void)
+ {
+ 	u32 flags = 0;
++	register u32 mask asm("t0") = CSR_CRMD_IE;
++
+ 	__asm__ __volatile__(
+ 		"csrxchg %[val], %[mask], %[reg]\n\t"
+ 		: [val] "+r" (flags)
+-		: [mask] "r" (CSR_CRMD_IE), [reg] "i" (LOONGARCH_CSR_CRMD)
++		: [mask] "r" (mask), [reg] "i" (LOONGARCH_CSR_CRMD)
+ 		: "memory");
+ }
+ 
+ static inline unsigned long arch_local_irq_save(void)
+ {
+ 	u32 flags = 0;
++	register u32 mask asm("t0") = CSR_CRMD_IE;
++
+ 	__asm__ __volatile__(
+ 		"csrxchg %[val], %[mask], %[reg]\n\t"
+ 		: [val] "+r" (flags)
+-		: [mask] "r" (CSR_CRMD_IE), [reg] "i" (LOONGARCH_CSR_CRMD)
++		: [mask] "r" (mask), [reg] "i" (LOONGARCH_CSR_CRMD)
+ 		: "memory");
+ 	return flags;
+ }
+ 
+ static inline void arch_local_irq_restore(unsigned long flags)
+ {
++	register u32 mask asm("t0") = CSR_CRMD_IE;
++
+ 	__asm__ __volatile__(
+ 		"csrxchg %[val], %[mask], %[reg]\n\t"
+ 		: [val] "+r" (flags)
+-		: [mask] "r" (CSR_CRMD_IE), [reg] "i" (LOONGARCH_CSR_CRMD)
++		: [mask] "r" (mask), [reg] "i" (LOONGARCH_CSR_CRMD)
+ 		: "memory");
+ }
+ 
+diff --git a/arch/loongarch/include/asm/vdso/getrandom.h b/arch/loongarch/include/asm/vdso/getrandom.h
+index 48c43f55b039b4..a81724b69f291e 100644
+--- a/arch/loongarch/include/asm/vdso/getrandom.h
++++ b/arch/loongarch/include/asm/vdso/getrandom.h
+@@ -20,7 +20,7 @@ static __always_inline ssize_t getrandom_syscall(void *_buffer, size_t _len, uns
+ 
+ 	asm volatile(
+ 	"      syscall 0\n"
+-	: "+r" (ret)
++	: "=r" (ret)
+ 	: "r" (nr), "r" (buffer), "r" (len), "r" (flags)
+ 	: "$t0", "$t1", "$t2", "$t3", "$t4", "$t5", "$t6", "$t7", "$t8",
+ 	  "memory");
+diff --git a/arch/loongarch/include/asm/vdso/gettimeofday.h b/arch/loongarch/include/asm/vdso/gettimeofday.h
+index 88cfcf13311630..f15503e3336ca1 100644
+--- a/arch/loongarch/include/asm/vdso/gettimeofday.h
++++ b/arch/loongarch/include/asm/vdso/gettimeofday.h
+@@ -25,7 +25,7 @@ static __always_inline long gettimeofday_fallback(
+ 
+ 	asm volatile(
+ 	"       syscall 0\n"
+-	: "+r" (ret)
++	: "=r" (ret)
+ 	: "r" (nr), "r" (tv), "r" (tz)
+ 	: "$t0", "$t1", "$t2", "$t3", "$t4", "$t5", "$t6", "$t7",
+ 	  "$t8", "memory");
+@@ -44,7 +44,7 @@ static __always_inline long clock_gettime_fallback(
+ 
+ 	asm volatile(
+ 	"       syscall 0\n"
+-	: "+r" (ret)
++	: "=r" (ret)
+ 	: "r" (nr), "r" (clkid), "r" (ts)
+ 	: "$t0", "$t1", "$t2", "$t3", "$t4", "$t5", "$t6", "$t7",
+ 	  "$t8", "memory");
+@@ -63,7 +63,7 @@ static __always_inline int clock_getres_fallback(
+ 
+ 	asm volatile(
+ 	"       syscall 0\n"
+-	: "+r" (ret)
++	: "=r" (ret)
+ 	: "r" (nr), "r" (clkid), "r" (ts)
+ 	: "$t0", "$t1", "$t2", "$t3", "$t4", "$t5", "$t6", "$t7",
+ 	  "$t8", "memory");
+diff --git a/arch/loongarch/mm/hugetlbpage.c b/arch/loongarch/mm/hugetlbpage.c
+index cea84d7f2b91a1..02dad4624fe329 100644
+--- a/arch/loongarch/mm/hugetlbpage.c
++++ b/arch/loongarch/mm/hugetlbpage.c
+@@ -47,7 +47,8 @@ pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
+ 				pmd = pmd_offset(pud, addr);
+ 		}
+ 	}
+-	return pmd_none(pmdp_get(pmd)) ? NULL : (pte_t *) pmd;
++
++	return (!pmd || pmd_none(pmdp_get(pmd))) ? NULL : (pte_t *) pmd;
+ }
+ 
+ uint64_t pmd_to_entrylo(unsigned long pmd_val)
+diff --git a/arch/mips/vdso/Makefile b/arch/mips/vdso/Makefile
+index fb4c493aaffa90..69d4593f64fee5 100644
+--- a/arch/mips/vdso/Makefile
++++ b/arch/mips/vdso/Makefile
+@@ -27,6 +27,7 @@ endif
+ # offsets.
+ cflags-vdso := $(ccflags-vdso) \
+ 	$(filter -W%,$(filter-out -Wa$(comma)%,$(KBUILD_CFLAGS))) \
++	$(filter -std=%,$(KBUILD_CFLAGS)) \
+ 	-O3 -g -fPIC -fno-strict-aliasing -fno-common -fno-builtin -G 0 \
+ 	-mrelax-pic-calls $(call cc-option, -mexplicit-relocs) \
+ 	-fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
+diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h
+index eab87c6beacb5a..e5d64c84aadf7d 100644
+--- a/arch/nios2/include/asm/pgtable.h
++++ b/arch/nios2/include/asm/pgtable.h
+@@ -291,4 +291,20 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+ #define update_mmu_cache(vma, addr, ptep) \
+ 	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
+ 
++static inline int pte_same(pte_t pte_a, pte_t pte_b);
++
++#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
++static inline int ptep_set_access_flags(struct vm_area_struct *vma,
++					unsigned long address, pte_t *ptep,
++					pte_t entry, int dirty)
++{
++	if (!pte_same(*ptep, entry))
++		set_ptes(vma->vm_mm, address, ptep, entry, 1);
++	/*
++	 * update_mmu_cache will unconditionally execute, handling both
++	 * the case that the PTE changed and the spurious fault case.
++	 */
++	return true;
++}
++
+ #endif /* _ASM_NIOS2_PGTABLE_H */
+diff --git a/arch/parisc/boot/compressed/Makefile b/arch/parisc/boot/compressed/Makefile
+index 92227fa813dc34..17c42d718eb336 100644
+--- a/arch/parisc/boot/compressed/Makefile
++++ b/arch/parisc/boot/compressed/Makefile
+@@ -18,6 +18,7 @@ KBUILD_CFLAGS += -fno-PIE -mno-space-regs -mdisable-fpregs -Os
+ ifndef CONFIG_64BIT
+ KBUILD_CFLAGS += -mfast-indirect-calls
+ endif
++KBUILD_CFLAGS += -std=gnu11
+ 
+ LDFLAGS_vmlinux := -X -e startup --as-needed -T
+ $(obj)/vmlinux: $(obj)/vmlinux.lds $(addprefix $(obj)/, $(OBJECTS)) $(LIBGCC) FORCE
+diff --git a/arch/parisc/kernel/unaligned.c b/arch/parisc/kernel/unaligned.c
+index f4626943633adc..00e97204783eda 100644
+--- a/arch/parisc/kernel/unaligned.c
++++ b/arch/parisc/kernel/unaligned.c
+@@ -25,7 +25,7 @@
+ #define DPRINTF(fmt, args...)
+ #endif
+ 
+-#define RFMT "%#08lx"
++#define RFMT "0x%08lx"
+ 
+ /* 1111 1100 0000 0000 0001 0011 1100 0000 */
+ #define OPCODE1(a,b,c)	((a)<<26|(b)<<12|(c)<<6) 
+diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
+index 02897f4b0dbf81..b891910fce8a69 100644
+--- a/arch/powerpc/include/asm/ppc_asm.h
++++ b/arch/powerpc/include/asm/ppc_asm.h
+@@ -183,7 +183,7 @@
+ /*
+  * Used to name C functions called from asm
+  */
+-#ifdef CONFIG_PPC_KERNEL_PCREL
++#if defined(__powerpc64__) && defined(CONFIG_PPC_KERNEL_PCREL)
+ #define CFUNC(name) name@notoc
+ #else
+ #define CFUNC(name) name
+diff --git a/arch/powerpc/kernel/eeh.c b/arch/powerpc/kernel/eeh.c
+index 83fe99861eb178..ca7f7bb2b47869 100644
+--- a/arch/powerpc/kernel/eeh.c
++++ b/arch/powerpc/kernel/eeh.c
+@@ -1509,6 +1509,8 @@ int eeh_pe_configure(struct eeh_pe *pe)
+ 	/* Invalid PE ? */
+ 	if (!pe)
+ 		return -ENODEV;
++	else
++		ret = eeh_ops->configure_bridge(pe);
+ 
+ 	return ret;
+ }
+diff --git a/arch/powerpc/kernel/trace/ftrace_entry.S b/arch/powerpc/kernel/trace/ftrace_entry.S
+index 2c1b24100eca29..3565c67fc63859 100644
+--- a/arch/powerpc/kernel/trace/ftrace_entry.S
++++ b/arch/powerpc/kernel/trace/ftrace_entry.S
+@@ -212,10 +212,10 @@
+ 	bne-	1f
+ 
+ 	mr	r3, r15
++1:	mtlr	r3
+ 	.if \allregs == 0
+ 	REST_GPR(15, r1)
+ 	.endif
+-1:	mtlr	r3
+ #endif
+ 
+ 	/* Restore gprs */
+diff --git a/arch/powerpc/kernel/vdso/Makefile b/arch/powerpc/kernel/vdso/Makefile
+index e8824f93332610..8834dfe9d72796 100644
+--- a/arch/powerpc/kernel/vdso/Makefile
++++ b/arch/powerpc/kernel/vdso/Makefile
+@@ -53,7 +53,7 @@ ldflags-$(CONFIG_LD_ORPHAN_WARN) += -Wl,--orphan-handling=$(CONFIG_LD_ORPHAN_WAR
+ ldflags-y += $(filter-out $(CC_AUTO_VAR_INIT_ZERO_ENABLER) $(CC_FLAGS_FTRACE) -Wa$(comma)%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
+ 
+ CC32FLAGS := -m32
+-CC32FLAGSREMOVE := -mcmodel=medium -mabi=elfv1 -mabi=elfv2 -mcall-aixdesc
++CC32FLAGSREMOVE := -mcmodel=medium -mabi=elfv1 -mabi=elfv2 -mcall-aixdesc -mpcrel
+ ifdef CONFIG_CC_IS_CLANG
+ # This flag is supported by clang for 64-bit but not 32-bit so it will cause
+ # an unused command line flag warning for this file.
+diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
+index 6beacaec63d303..4c26912c2e3c36 100644
+--- a/arch/powerpc/net/bpf_jit.h
++++ b/arch/powerpc/net/bpf_jit.h
+@@ -51,8 +51,16 @@
+ 		EMIT(PPC_INST_BRANCH_COND | (((cond) & 0x3ff) << 16) | (offset & 0xfffc));					\
+ 	} while (0)
+ 
+-/* Sign-extended 32-bit immediate load */
++/*
++ * Sign-extended 32-bit immediate load
++ *
++ * If this is a dummy pass (!image), account for
++ * maximum possible instructions.
++ */
+ #define PPC_LI32(d, i)		do {					      \
++	if (!image)							      \
++		ctx->idx += 2;						      \
++	else {								      \
+ 		if ((int)(uintptr_t)(i) >= -32768 &&			      \
+ 				(int)(uintptr_t)(i) < 32768)		      \
+ 			EMIT(PPC_RAW_LI(d, i));				      \
+@@ -60,10 +68,15 @@
+ 			EMIT(PPC_RAW_LIS(d, IMM_H(i)));			      \
+ 			if (IMM_L(i))					      \
+ 				EMIT(PPC_RAW_ORI(d, d, IMM_L(i)));	      \
+-		} } while(0)
++		}							      \
++	} } while (0)
+ 
+ #ifdef CONFIG_PPC64
++/* If dummy pass (!image), account for maximum possible instructions */
+ #define PPC_LI64(d, i)		do {					      \
++	if (!image)							      \
++		ctx->idx += 5;						      \
++	else {								      \
+ 		if ((long)(i) >= -2147483648 &&				      \
+ 				(long)(i) < 2147483648)			      \
+ 			PPC_LI32(d, i);					      \
+@@ -84,7 +97,8 @@
+ 			if ((uintptr_t)(i) & 0x000000000000ffffULL)	      \
+ 				EMIT(PPC_RAW_ORI(d, d, (uintptr_t)(i) &       \
+ 							0xffff));             \
+-		} } while (0)
++		}							      \
++	} } while (0)
+ #define PPC_LI_ADDR	PPC_LI64
+ 
+ #ifndef CONFIG_PPC_KERNEL_PCREL
+diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
+index 2991bb171a9bbe..c0684733e9d6ac 100644
+--- a/arch/powerpc/net/bpf_jit_comp.c
++++ b/arch/powerpc/net/bpf_jit_comp.c
+@@ -504,10 +504,11 @@ static int invoke_bpf_prog(u32 *image, u32 *ro_image, struct codegen_context *ct
+ 	EMIT(PPC_RAW_ADDI(_R3, _R1, regs_off));
+ 	if (!p->jited)
+ 		PPC_LI_ADDR(_R4, (unsigned long)p->insnsi);
+-	if (!create_branch(&branch_insn, (u32 *)&ro_image[ctx->idx], (unsigned long)p->bpf_func,
+-			   BRANCH_SET_LINK)) {
+-		if (image)
+-			image[ctx->idx] = ppc_inst_val(branch_insn);
++	/* Account for max possible instructions during dummy pass for size calculation */
++	if (image && !create_branch(&branch_insn, (u32 *)&ro_image[ctx->idx],
++				    (unsigned long)p->bpf_func,
++				    BRANCH_SET_LINK)) {
++		image[ctx->idx] = ppc_inst_val(branch_insn);
+ 		ctx->idx++;
+ 	} else {
+ 		EMIT(PPC_RAW_LL(_R12, _R25, offsetof(struct bpf_prog, bpf_func)));
+@@ -889,7 +890,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
+ 			bpf_trampoline_restore_tail_call_cnt(image, ctx, func_frame_offset, r4_off);
+ 
+ 		/* Reserve space to patch branch instruction to skip fexit progs */
+-		im->ip_after_call = &((u32 *)ro_image)[ctx->idx];
++		if (ro_image) /* image is NULL for dummy pass */
++			im->ip_after_call = &((u32 *)ro_image)[ctx->idx];
+ 		EMIT(PPC_RAW_NOP());
+ 	}
+ 
+@@ -912,7 +914,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
+ 		}
+ 
+ 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
+-		im->ip_epilogue = &((u32 *)ro_image)[ctx->idx];
++		if (ro_image) /* image is NULL for dummy pass */
++			im->ip_epilogue = &((u32 *)ro_image)[ctx->idx];
+ 		PPC_LI_ADDR(_R3, im);
+ 		ret = bpf_jit_emit_func_call_rel(image, ro_image, ctx,
+ 						 (unsigned long)__bpf_tramp_exit);
+@@ -973,25 +976,9 @@ int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
+ 			     struct bpf_tramp_links *tlinks, void *func_addr)
+ {
+ 	struct bpf_tramp_image im;
+-	void *image;
+ 	int ret;
+ 
+-	/*
+-	 * Allocate a temporary buffer for __arch_prepare_bpf_trampoline().
+-	 * This will NOT cause fragmentation in direct map, as we do not
+-	 * call set_memory_*() on this buffer.
+-	 *
+-	 * We cannot use kvmalloc here, because we need image to be in
+-	 * module memory range.
+-	 */
+-	image = bpf_jit_alloc_exec(PAGE_SIZE);
+-	if (!image)
+-		return -ENOMEM;
+-
+-	ret = __arch_prepare_bpf_trampoline(&im, image, image + PAGE_SIZE, image,
+-					    m, flags, tlinks, func_addr);
+-	bpf_jit_free_exec(image);
+-
++	ret = __arch_prepare_bpf_trampoline(&im, NULL, NULL, NULL, m, flags, tlinks, func_addr);
+ 	return ret;
+ }
+ 
+diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
+index c4db278dae3609..0aace304dfe191 100644
+--- a/arch/powerpc/net/bpf_jit_comp32.c
++++ b/arch/powerpc/net/bpf_jit_comp32.c
+@@ -313,7 +313,6 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
+ 		u64 func_addr;
+ 		u32 true_cond;
+ 		u32 tmp_idx;
+-		int j;
+ 
+ 		if (i && (BPF_CLASS(code) == BPF_ALU64 || BPF_CLASS(code) == BPF_ALU) &&
+ 		    (BPF_CLASS(prevcode) == BPF_ALU64 || BPF_CLASS(prevcode) == BPF_ALU) &&
+@@ -1099,13 +1098,8 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
+ 		 * 16 byte instruction that uses two 'struct bpf_insn'
+ 		 */
+ 		case BPF_LD | BPF_IMM | BPF_DW: /* dst = (u64) imm */
+-			tmp_idx = ctx->idx;
+ 			PPC_LI32(dst_reg_h, (u32)insn[i + 1].imm);
+ 			PPC_LI32(dst_reg, (u32)insn[i].imm);
+-			/* padding to allow full 4 instructions for later patching */
+-			if (!image)
+-				for (j = ctx->idx - tmp_idx; j < 4; j++)
+-					EMIT(PPC_RAW_NOP());
+ 			/* Adjust for two bpf instructions */
+ 			addrs[++i] = ctx->idx * 4;
+ 			break;
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index 233703b06d7c97..5daa77aee7f720 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -227,7 +227,14 @@ int bpf_jit_emit_func_call_rel(u32 *image, u32 *fimage, struct codegen_context *
+ #ifdef CONFIG_PPC_KERNEL_PCREL
+ 	reladdr = func_addr - local_paca->kernelbase;
+ 
+-	if (reladdr < (long)SZ_8G && reladdr >= -(long)SZ_8G) {
++	/*
++	 * If fimage is NULL (the initial pass to find image size),
++	 * account for the maximum possible number of instructions.
++	 */
++	if (!fimage) {
++		ctx->idx += 7;
++		return 0;
++	} else if (reladdr < (long)SZ_8G && reladdr >= -(long)SZ_8G) {
+ 		EMIT(PPC_RAW_LD(_R12, _R13, offsetof(struct paca_struct, kernelbase)));
+ 		/* Align for subsequent prefix instruction */
+ 		if (!IS_ALIGNED((unsigned long)fimage + CTX_NIA(ctx), 8))
+@@ -412,7 +419,6 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
+ 		u64 imm64;
+ 		u32 true_cond;
+ 		u32 tmp_idx;
+-		int j;
+ 
+ 		/*
+ 		 * addrs[] maps a BPF bytecode address into a real offset from
+@@ -1046,12 +1052,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
+ 		case BPF_LD | BPF_IMM | BPF_DW: /* dst = (u64) imm */
+ 			imm64 = ((u64)(u32) insn[i].imm) |
+ 				    (((u64)(u32) insn[i+1].imm) << 32);
+-			tmp_idx = ctx->idx;
+ 			PPC_LI64(dst_reg, imm64);
+-			/* padding to allow full 5 instructions for later patching */
+-			if (!image)
+-				for (j = ctx->idx - tmp_idx; j < 5; j++)
+-					EMIT(PPC_RAW_NOP());
+ 			/* Adjust for two bpf instructions */
+ 			addrs[++i] = ctx->idx * 4;
+ 			break;
+diff --git a/arch/powerpc/platforms/pseries/msi.c b/arch/powerpc/platforms/pseries/msi.c
+index f9d80111c322f2..957c0c03d25904 100644
+--- a/arch/powerpc/platforms/pseries/msi.c
++++ b/arch/powerpc/platforms/pseries/msi.c
+@@ -525,7 +525,12 @@ static struct msi_domain_info pseries_msi_domain_info = {
+ 
+ static void pseries_msi_compose_msg(struct irq_data *data, struct msi_msg *msg)
+ {
+-	__pci_read_msi_msg(irq_data_get_msi_desc(data), msg);
++	struct pci_dev *dev = msi_desc_to_pci_dev(irq_data_get_msi_desc(data));
++
++	if (dev->current_state == PCI_D0)
++		__pci_read_msi_msg(irq_data_get_msi_desc(data), msg);
++	else
++		get_cached_msi_msg(data->irq, msg);
+ }
+ 
+ static struct irq_chip pseries_msi_irq_chip = {
+diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c
+index 5fbf3f94f1e855..b17fad091babdc 100644
+--- a/arch/riscv/kvm/vcpu_sbi_replace.c
++++ b/arch/riscv/kvm/vcpu_sbi_replace.c
+@@ -103,7 +103,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
+ 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_SENT);
+ 		break;
+ 	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA:
+-		if (cp->a2 == 0 && cp->a3 == 0)
++		if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
+ 			kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask);
+ 		else
+ 			kvm_riscv_hfence_vvma_gva(vcpu->kvm, hbase, hmask,
+@@ -111,7 +111,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
+ 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_SENT);
+ 		break;
+ 	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID:
+-		if (cp->a2 == 0 && cp->a3 == 0)
++		if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
+ 			kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,
+ 						       hbase, hmask, cp->a4);
+ 		else
+@@ -127,9 +127,9 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
+ 	case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA_ASID:
+ 		/*
+ 		 * Until nested virtualization is implemented, the
+-		 * SBI HFENCE calls should be treated as NOPs
++		 * SBI HFENCE calls should return not supported,
++		 * hence fall through.
+ 		 */
+-		break;
+ 	default:
+ 		retdata->err_val = SBI_ERR_NOT_SUPPORTED;
+ 	}
+diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
+index f6fded15633ad8..4e5654ad1604fd 100644
+--- a/arch/s390/kvm/gaccess.c
++++ b/arch/s390/kvm/gaccess.c
+@@ -318,7 +318,7 @@ enum prot_type {
+ 	PROT_TYPE_DAT  = 3,
+ 	PROT_TYPE_IEP  = 4,
+ 	/* Dummy value for passing an initialized value when code != PGM_PROTECTION */
+-	PROT_NONE,
++	PROT_TYPE_DUMMY,
+ };
+ 
+ static int trans_exc_ending(struct kvm_vcpu *vcpu, int code, unsigned long gva, u8 ar,
+@@ -334,7 +334,7 @@ static int trans_exc_ending(struct kvm_vcpu *vcpu, int code, unsigned long gva,
+ 	switch (code) {
+ 	case PGM_PROTECTION:
+ 		switch (prot) {
+-		case PROT_NONE:
++		case PROT_TYPE_DUMMY:
+ 			/* We should never get here, acts like termination */
+ 			WARN_ON_ONCE(1);
+ 			break;
+@@ -804,7 +804,7 @@ static int guest_range_to_gpas(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
+ 			gpa = kvm_s390_real_to_abs(vcpu, ga);
+ 			if (!kvm_is_gpa_in_memslot(vcpu->kvm, gpa)) {
+ 				rc = PGM_ADDRESSING;
+-				prot = PROT_NONE;
++				prot = PROT_TYPE_DUMMY;
+ 			}
+ 		}
+ 		if (rc)
+@@ -962,7 +962,7 @@ int access_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
+ 		if (rc == PGM_PROTECTION)
+ 			prot = PROT_TYPE_KEYC;
+ 		else
+-			prot = PROT_NONE;
++			prot = PROT_TYPE_DUMMY;
+ 		rc = trans_exc_ending(vcpu, rc, ga, ar, mode, prot, terminate);
+ 	}
+ out_unlock:
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 5bbdc4190b8b82..cd6676c2d6022d 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -45,6 +45,7 @@
+ /* list of all detected zpci devices */
+ static LIST_HEAD(zpci_list);
+ static DEFINE_SPINLOCK(zpci_list_lock);
++static DEFINE_MUTEX(zpci_add_remove_lock);
+ 
+ static DECLARE_BITMAP(zpci_domain, ZPCI_DOMAIN_BITMAP_SIZE);
+ static DEFINE_SPINLOCK(zpci_domain_lock);
+@@ -70,6 +71,15 @@ EXPORT_SYMBOL_GPL(zpci_aipb);
+ struct airq_iv *zpci_aif_sbv;
+ EXPORT_SYMBOL_GPL(zpci_aif_sbv);
+ 
++void zpci_zdev_put(struct zpci_dev *zdev)
++{
++	if (!zdev)
++		return;
++	mutex_lock(&zpci_add_remove_lock);
++	kref_put_lock(&zdev->kref, zpci_release_device, &zpci_list_lock);
++	mutex_unlock(&zpci_add_remove_lock);
++}
++
+ struct zpci_dev *get_zdev_by_fid(u32 fid)
+ {
+ 	struct zpci_dev *tmp, *zdev = NULL;
+@@ -837,6 +847,7 @@ int zpci_add_device(struct zpci_dev *zdev)
+ {
+ 	int rc;
+ 
++	mutex_lock(&zpci_add_remove_lock);
+ 	zpci_dbg(1, "add fid:%x, fh:%x, c:%d\n", zdev->fid, zdev->fh, zdev->state);
+ 	rc = zpci_init_iommu(zdev);
+ 	if (rc)
+@@ -850,12 +861,14 @@ int zpci_add_device(struct zpci_dev *zdev)
+ 	spin_lock(&zpci_list_lock);
+ 	list_add_tail(&zdev->entry, &zpci_list);
+ 	spin_unlock(&zpci_list_lock);
++	mutex_unlock(&zpci_add_remove_lock);
+ 	return 0;
+ 
+ error_destroy_iommu:
+ 	zpci_destroy_iommu(zdev);
+ error:
+ 	zpci_dbg(0, "add fid:%x, rc:%d\n", zdev->fid, rc);
++	mutex_unlock(&zpci_add_remove_lock);
+ 	return rc;
+ }
+ 
+@@ -925,21 +938,20 @@ int zpci_deconfigure_device(struct zpci_dev *zdev)
+  * @zdev: the zpci_dev that was reserved
+  *
+  * Handle the case that a given zPCI function was reserved by another system.
+- * After a call to this function the zpci_dev can not be found via
+- * get_zdev_by_fid() anymore but may still be accessible via existing
+- * references though it will not be functional anymore.
+  */
+ void zpci_device_reserved(struct zpci_dev *zdev)
+ {
+-	/*
+-	 * Remove device from zpci_list as it is going away. This also
+-	 * makes sure we ignore subsequent zPCI events for this device.
+-	 */
+-	spin_lock(&zpci_list_lock);
+-	list_del(&zdev->entry);
+-	spin_unlock(&zpci_list_lock);
++	lockdep_assert_held(&zdev->state_lock);
++	/* We may declare the device reserved multiple times */
++	if (zdev->state == ZPCI_FN_STATE_RESERVED)
++		return;
+ 	zdev->state = ZPCI_FN_STATE_RESERVED;
+ 	zpci_dbg(3, "rsv fid:%x\n", zdev->fid);
++	/*
++	 * The underlying device is gone. Allow the zdev to be freed
++	 * as soon as all other references are gone by accounting for
++	 * the removal as a dropped reference.
++	 */
+ 	zpci_zdev_put(zdev);
+ }
+ 
+@@ -947,13 +959,14 @@ void zpci_release_device(struct kref *kref)
+ {
+ 	struct zpci_dev *zdev = container_of(kref, struct zpci_dev, kref);
+ 
++	lockdep_assert_held(&zpci_add_remove_lock);
+ 	WARN_ON(zdev->state != ZPCI_FN_STATE_RESERVED);
+-
+-	if (zdev->zbus->bus)
+-		zpci_bus_remove_device(zdev, false);
+-
+-	if (zdev_enabled(zdev))
+-		zpci_disable_device(zdev);
++	/*
++	 * We already hold zpci_list_lock thanks to kref_put_lock().
++	 * This makes sure no new reference can be taken from the list.
++	 */
++	list_del(&zdev->entry);
++	spin_unlock(&zpci_list_lock);
+ 
+ 	if (zdev->has_hp_slot)
+ 		zpci_exit_slot(zdev);
+diff --git a/arch/s390/pci/pci_bus.h b/arch/s390/pci/pci_bus.h
+index e86a9419d233f7..ae3d7a9159bde1 100644
+--- a/arch/s390/pci/pci_bus.h
++++ b/arch/s390/pci/pci_bus.h
+@@ -21,11 +21,8 @@ int zpci_bus_scan_device(struct zpci_dev *zdev);
+ void zpci_bus_remove_device(struct zpci_dev *zdev, bool set_error);
+ 
+ void zpci_release_device(struct kref *kref);
+-static inline void zpci_zdev_put(struct zpci_dev *zdev)
+-{
+-	if (zdev)
+-		kref_put(&zdev->kref, zpci_release_device);
+-}
++
++void zpci_zdev_put(struct zpci_dev *zdev);
+ 
+ static inline void zpci_zdev_get(struct zpci_dev *zdev)
+ {
+diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c
+index 7bd7721c1239a2..2fbee3887d13aa 100644
+--- a/arch/s390/pci/pci_event.c
++++ b/arch/s390/pci/pci_event.c
+@@ -335,6 +335,22 @@ static void zpci_event_hard_deconfigured(struct zpci_dev *zdev, u32 fh)
+ 	zdev->state = ZPCI_FN_STATE_STANDBY;
+ }
+ 
++static void zpci_event_reappear(struct zpci_dev *zdev)
++{
++	lockdep_assert_held(&zdev->state_lock);
++	/*
++	 * The zdev is in the reserved state. This means that it was presumed to
++	 * go away but there are still undropped references. Now, the platform
++	 * announced its availability again. Bring back the lingering zdev
++	 * to standby. This is safe because we hold a temporary reference
++	 * now so that it won't go away. Account for the re-appearance of the
++	 * underlying device by incrementing the reference count.
++	 */
++	zdev->state = ZPCI_FN_STATE_STANDBY;
++	zpci_zdev_get(zdev);
++	zpci_dbg(1, "rea fid:%x, fh:%x\n", zdev->fid, zdev->fh);
++}
++
+ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ {
+ 	struct zpci_dev *zdev = get_zdev_by_fid(ccdf->fid);
+@@ -358,8 +374,10 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ 				break;
+ 			}
+ 		} else {
++			if (zdev->state == ZPCI_FN_STATE_RESERVED)
++				zpci_event_reappear(zdev);
+ 			/* the configuration request may be stale */
+-			if (zdev->state != ZPCI_FN_STATE_STANDBY)
++			else if (zdev->state != ZPCI_FN_STATE_STANDBY)
+ 				break;
+ 			zdev->state = ZPCI_FN_STATE_CONFIGURED;
+ 		}
+@@ -375,6 +393,8 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ 				break;
+ 			}
+ 		} else {
++			if (zdev->state == ZPCI_FN_STATE_RESERVED)
++				zpci_event_reappear(zdev);
+ 			zpci_update_fh(zdev, ccdf->fh);
+ 		}
+ 		break;
+diff --git a/arch/s390/pci/pci_mmio.c b/arch/s390/pci/pci_mmio.c
+index 5fcc1a3b04bd0b..91e72b0044bc04 100644
+--- a/arch/s390/pci/pci_mmio.c
++++ b/arch/s390/pci/pci_mmio.c
+@@ -236,7 +236,7 @@ static inline int __pcilg_mio_inuser(
+ 		: [ioaddr_len] "+&d" (ioaddr_len.pair), [exc] "+d" (exception),
+ 		  CC_OUT(cc, cc), [val] "=d" (val),
+ 		  [dst] "+a" (dst), [cnt] "+d" (cnt), [tmp] "=d" (tmp),
+-		  [shift] "+d" (shift)
++		  [shift] "+a" (shift)
+ 		:
+ 		: CC_CLOBBER_LIST("memory"));
+ 
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index e21cca404943e7..47932d5f44990a 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -88,7 +88,7 @@ config X86
+ 	select ARCH_HAS_DMA_OPS			if GART_IOMMU || XEN
+ 	select ARCH_HAS_EARLY_DEBUG		if KGDB
+ 	select ARCH_HAS_ELF_RANDOMIZE
+-	select ARCH_HAS_EXECMEM_ROX		if X86_64
++	select ARCH_HAS_EXECMEM_ROX		if X86_64 && STRICT_MODULE_RWX
+ 	select ARCH_HAS_FAST_MULTIPLIER
+ 	select ARCH_HAS_FORTIFY_SOURCE
+ 	select ARCH_HAS_GCOV_PROFILE_ALL
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index c5f385413392b1..d69af2687ed73b 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -2810,7 +2810,7 @@ static void intel_pmu_read_event(struct perf_event *event)
+ 		 * If the PEBS counters snapshotting is enabled,
+ 		 * the topdown event is available in PEBS records.
+ 		 */
+-		if (is_topdown_event(event) && !is_pebs_counter_event_group(event))
++		if (is_topdown_count(event) && !is_pebs_counter_event_group(event))
+ 			static_call(intel_pmu_update_topdown_event)(event, NULL);
+ 		else
+ 			intel_pmu_drain_pebs_buffer();
+diff --git a/arch/x86/include/asm/module.h b/arch/x86/include/asm/module.h
+index e988bac0a4a1c3..3c2de4ce3b10de 100644
+--- a/arch/x86/include/asm/module.h
++++ b/arch/x86/include/asm/module.h
+@@ -5,12 +5,20 @@
+ #include <asm-generic/module.h>
+ #include <asm/orc_types.h>
+ 
++struct its_array {
++#ifdef CONFIG_MITIGATION_ITS
++	void **pages;
++	int num;
++#endif
++};
++
+ struct mod_arch_specific {
+ #ifdef CONFIG_UNWINDER_ORC
+ 	unsigned int num_orcs;
+ 	int *orc_unwind_ip;
+ 	struct orc_entry *orc_unwind;
+ #endif
++	struct its_array its_pages;
+ };
+ 
+ #endif /* _ASM_X86_MODULE_H */
+diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
+index 4a1922ec80cf76..e70bb1f46e064c 100644
+--- a/arch/x86/include/asm/tdx.h
++++ b/arch/x86/include/asm/tdx.h
+@@ -100,7 +100,7 @@ void tdx_init(void);
+ 
+ typedef u64 (*sc_func_t)(u64 fn, struct tdx_module_args *args);
+ 
+-static inline u64 sc_retry(sc_func_t func, u64 fn,
++static __always_inline u64 sc_retry(sc_func_t func, u64 fn,
+ 			   struct tdx_module_args *args)
+ {
+ 	int retry = RDRAND_RETRY_LOOPS;
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index 45bcff181cbae9..4dc21a34e67d65 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -138,6 +138,24 @@ static struct module *its_mod;
+ #endif
+ static void *its_page;
+ static unsigned int its_offset;
++struct its_array its_pages;
++
++static void *__its_alloc(struct its_array *pages)
++{
++	void *page __free(execmem) = execmem_alloc(EXECMEM_MODULE_TEXT, PAGE_SIZE);
++	if (!page)
++		return NULL;
++
++	void *tmp = krealloc(pages->pages, (pages->num+1) * sizeof(void *),
++			     GFP_KERNEL);
++	if (!tmp)
++		return NULL;
++
++	pages->pages = tmp;
++	pages->pages[pages->num++] = page;
++
++	return no_free_ptr(page);
++}
+ 
+ /* Initialize a thunk with the "jmp *reg; int3" instructions. */
+ static void *its_init_thunk(void *thunk, int reg)
+@@ -173,6 +191,21 @@ static void *its_init_thunk(void *thunk, int reg)
+ 	return thunk + offset;
+ }
+ 
++static void its_pages_protect(struct its_array *pages)
++{
++	for (int i = 0; i < pages->num; i++) {
++		void *page = pages->pages[i];
++		execmem_restore_rox(page, PAGE_SIZE);
++	}
++}
++
++static void its_fini_core(void)
++{
++	if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX))
++		its_pages_protect(&its_pages);
++	kfree(its_pages.pages);
++}
++
+ #ifdef CONFIG_MODULES
+ void its_init_mod(struct module *mod)
+ {
+@@ -195,10 +228,8 @@ void its_fini_mod(struct module *mod)
+ 	its_page = NULL;
+ 	mutex_unlock(&text_mutex);
+ 
+-	for (int i = 0; i < mod->its_num_pages; i++) {
+-		void *page = mod->its_page_array[i];
+-		execmem_restore_rox(page, PAGE_SIZE);
+-	}
++	if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
++		its_pages_protect(&mod->arch.its_pages);
+ }
+ 
+ void its_free_mod(struct module *mod)
+@@ -206,37 +237,33 @@ void its_free_mod(struct module *mod)
+ 	if (!cpu_feature_enabled(X86_FEATURE_INDIRECT_THUNK_ITS))
+ 		return;
+ 
+-	for (int i = 0; i < mod->its_num_pages; i++) {
+-		void *page = mod->its_page_array[i];
++	for (int i = 0; i < mod->arch.its_pages.num; i++) {
++		void *page = mod->arch.its_pages.pages[i];
+ 		execmem_free(page);
+ 	}
+-	kfree(mod->its_page_array);
++	kfree(mod->arch.its_pages.pages);
+ }
+ #endif /* CONFIG_MODULES */
+ 
+ static void *its_alloc(void)
+ {
+-	void *page __free(execmem) = execmem_alloc(EXECMEM_MODULE_TEXT, PAGE_SIZE);
+-
+-	if (!page)
+-		return NULL;
++	struct its_array *pages = &its_pages;
++	void *page;
+ 
+ #ifdef CONFIG_MODULES
+-	if (its_mod) {
+-		void *tmp = krealloc(its_mod->its_page_array,
+-				     (its_mod->its_num_pages+1) * sizeof(void *),
+-				     GFP_KERNEL);
+-		if (!tmp)
+-			return NULL;
++	if (its_mod)
++		pages = &its_mod->arch.its_pages;
++#endif
+ 
+-		its_mod->its_page_array = tmp;
+-		its_mod->its_page_array[its_mod->its_num_pages++] = page;
++	page = __its_alloc(pages);
++	if (!page)
++		return NULL;
+ 
+-		execmem_make_temp_rw(page, PAGE_SIZE);
+-	}
+-#endif /* CONFIG_MODULES */
++	execmem_make_temp_rw(page, PAGE_SIZE);
++	if (pages == &its_pages)
++		set_memory_x((unsigned long)page, 1);
+ 
+-	return no_free_ptr(page);
++	return page;
+ }
+ 
+ static void *its_allocate_thunk(int reg)
+@@ -290,7 +317,9 @@ u8 *its_static_thunk(int reg)
+ 	return thunk;
+ }
+ 
+-#endif
++#else
++static inline void its_fini_core(void) {}
++#endif /* CONFIG_MITIGATION_ITS */
+ 
+ /*
+  * Nomenclature for variable names to simplify and clarify this code and ease
+@@ -2367,6 +2396,8 @@ void __init alternative_instructions(void)
+ 	apply_retpolines(__retpoline_sites, __retpoline_sites_end);
+ 	apply_returns(__return_sites, __return_sites_end);
+ 
++	its_fini_core();
++
+ 	/*
+ 	 * Adjust all CALL instructions to point to func()-10, including
+ 	 * those in .altinstr_replacement.
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 4e06baab40bb3f..a59d6d8fc71f9f 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -29,7 +29,7 @@
+ 
+ #include "cpu.h"
+ 
+-u16 invlpgb_count_max __ro_after_init;
++u16 invlpgb_count_max __ro_after_init = 1;
+ 
+ static inline int rdmsrl_amd_safe(unsigned msr, unsigned long long *p)
+ {
+diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
+index 8ce352fc72ac3d..7c199773705a74 100644
+--- a/arch/x86/kernel/cpu/sgx/main.c
++++ b/arch/x86/kernel/cpu/sgx/main.c
+@@ -719,6 +719,8 @@ int arch_memory_failure(unsigned long pfn, int flags)
+ 		goto out;
+ 	}
+ 
++	sgx_unmark_page_reclaimable(page);
++
+ 	/*
+ 	 * TBD: Add additional plumbing to enable pre-emptive
+ 	 * action for asynchronous poison notification. Until
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index a89c271a1951f4..b567ec94b7fa54 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1488,7 +1488,7 @@ static void svm_clear_current_vmcb(struct vmcb *vmcb)
+ {
+ 	int i;
+ 
+-	for_each_online_cpu(i)
++	for_each_possible_cpu(i)
+ 		cmpxchg(per_cpu_ptr(&svm_data.current_vmcb, i), vmcb, NULL);
+ }
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 5c5766467a61d4..0b66b856d673b2 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -769,8 +769,11 @@ void vmx_emergency_disable_virtualization_cpu(void)
+ 		return;
+ 
+ 	list_for_each_entry(v, &per_cpu(loaded_vmcss_on_cpu, cpu),
+-			    loaded_vmcss_on_cpu_link)
++			    loaded_vmcss_on_cpu_link) {
+ 		vmcs_clear(v->vmcs);
++		if (v->shadow_vmcs)
++			vmcs_clear(v->shadow_vmcs);
++	}
+ 
+ 	kvm_cpu_vmxoff();
+ }
+diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
+index bb8d99e717b9e7..148eba50265a59 100644
+--- a/arch/x86/mm/init_32.c
++++ b/arch/x86/mm/init_32.c
+@@ -30,7 +30,6 @@
+ #include <linux/initrd.h>
+ #include <linux/cpumask.h>
+ #include <linux/gfp.h>
+-#include <linux/execmem.h>
+ 
+ #include <asm/asm.h>
+ #include <asm/bios_ebda.h>
+@@ -756,8 +755,6 @@ void mark_rodata_ro(void)
+ 	pr_info("Write protecting kernel text and read-only data: %luk\n",
+ 		size >> 10);
+ 
+-	execmem_cache_make_ro();
+-
+ 	kernel_set_to_readonly = 1;
+ 
+ #ifdef CONFIG_CPA_DEBUG
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index 949a447f75ec7e..7c4f6f591f2b24 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -34,7 +34,6 @@
+ #include <linux/gfp.h>
+ #include <linux/kcore.h>
+ #include <linux/bootmem_info.h>
+-#include <linux/execmem.h>
+ 
+ #include <asm/processor.h>
+ #include <asm/bios_ebda.h>
+@@ -1392,8 +1391,6 @@ void mark_rodata_ro(void)
+ 	       (end - start) >> 10);
+ 	set_memory_ro(start, (end - start) >> PAGE_SHIFT);
+ 
+-	execmem_cache_make_ro();
+-
+ 	kernel_set_to_readonly = 1;
+ 
+ 	/*
+diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
+index def3d928425422..9292f835cf5a30 100644
+--- a/arch/x86/mm/pat/set_memory.c
++++ b/arch/x86/mm/pat/set_memory.c
+@@ -1257,6 +1257,9 @@ static int collapse_pmd_page(pmd_t *pmd, unsigned long addr,
+ 	pgprot_t pgprot;
+ 	int i = 0;
+ 
++	if (!cpu_feature_enabled(X86_FEATURE_PSE))
++		return 0;
++
+ 	addr &= PMD_MASK;
+ 	pte = pte_offset_kernel(pmd, addr);
+ 	first = *pte;
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 5f0d579932c688..ce2f5a6081bedd 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -98,6 +98,11 @@ void __init pti_check_boottime_disable(void)
+ 		return;
+ 
+ 	setup_force_cpu_cap(X86_FEATURE_PTI);
++
++	if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
++		pr_debug("PTI enabled, disabling INVLPGB\n");
++		setup_clear_cpu_cap(X86_FEATURE_INVLPGB);
++	}
+ }
+ 
+ static int __init pti_parse_cmdline(char *arg)
+diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
+index 7fdb37387886ba..328a18164ac72e 100644
+--- a/arch/x86/virt/vmx/tdx/tdx.c
++++ b/arch/x86/virt/vmx/tdx/tdx.c
+@@ -69,8 +69,9 @@ static inline void seamcall_err_ret(u64 fn, u64 err,
+ 			args->r9, args->r10, args->r11);
+ }
+ 
+-static inline int sc_retry_prerr(sc_func_t func, sc_err_func_t err_func,
+-				 u64 fn, struct tdx_module_args *args)
++static __always_inline int sc_retry_prerr(sc_func_t func,
++					  sc_err_func_t err_func,
++					  u64 fn, struct tdx_module_args *args)
+ {
+ 	u64 sret = sc_retry(func, fn, args);
+ 
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index fdd4efb54c6c70..daa010e93cb3a4 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -1127,20 +1127,20 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
+ 	if (!plug || rq_list_empty(&plug->mq_list))
+ 		return false;
+ 
+-	rq_list_for_each(&plug->mq_list, rq) {
+-		if (rq->q == q) {
+-			if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
+-			    BIO_MERGE_OK)
+-				return true;
+-			break;
+-		}
++	rq = plug->mq_list.tail;
++	if (rq->q == q)
++		return blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
++			BIO_MERGE_OK;
++	else if (!plug->multiple_queues)
++		return false;
+ 
+-		/*
+-		 * Only keep iterating plug list for merges if we have multiple
+-		 * queues
+-		 */
+-		if (!plug->multiple_queues)
+-			break;
++	rq_list_for_each(&plug->mq_list, rq) {
++		if (rq->q != q)
++			continue;
++		if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
++		    BIO_MERGE_OK)
++			return true;
++		break;
+ 	}
+ 	return false;
+ }
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 45c91016cef38a..351d659280e116 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -1225,6 +1225,7 @@ void blk_zone_write_plug_bio_endio(struct bio *bio)
+ 	if (bio_flagged(bio, BIO_EMULATES_ZONE_APPEND)) {
+ 		bio->bi_opf &= ~REQ_OP_MASK;
+ 		bio->bi_opf |= REQ_OP_ZONE_APPEND;
++		bio_clear_flag(bio, BIO_EMULATES_ZONE_APPEND);
+ 	}
+ 
+ 	/*
+diff --git a/drivers/accel/ivpu/ivpu_fw.c b/drivers/accel/ivpu/ivpu_fw.c
+index ccaaf6c100c022..9db741695401e7 100644
+--- a/drivers/accel/ivpu/ivpu_fw.c
++++ b/drivers/accel/ivpu/ivpu_fw.c
+@@ -55,18 +55,18 @@ static struct {
+ 	int gen;
+ 	const char *name;
+ } fw_names[] = {
+-	{ IVPU_HW_IP_37XX, "vpu_37xx.bin" },
++	{ IVPU_HW_IP_37XX, "intel/vpu/vpu_37xx_v1.bin" },
+ 	{ IVPU_HW_IP_37XX, "intel/vpu/vpu_37xx_v0.0.bin" },
+-	{ IVPU_HW_IP_40XX, "vpu_40xx.bin" },
++	{ IVPU_HW_IP_40XX, "intel/vpu/vpu_40xx_v1.bin" },
+ 	{ IVPU_HW_IP_40XX, "intel/vpu/vpu_40xx_v0.0.bin" },
+-	{ IVPU_HW_IP_50XX, "vpu_50xx.bin" },
++	{ IVPU_HW_IP_50XX, "intel/vpu/vpu_50xx_v1.bin" },
+ 	{ IVPU_HW_IP_50XX, "intel/vpu/vpu_50xx_v0.0.bin" },
+ };
+ 
+ /* Production fw_names from the table above */
+-MODULE_FIRMWARE("intel/vpu/vpu_37xx_v0.0.bin");
+-MODULE_FIRMWARE("intel/vpu/vpu_40xx_v0.0.bin");
+-MODULE_FIRMWARE("intel/vpu/vpu_50xx_v0.0.bin");
++MODULE_FIRMWARE("intel/vpu/vpu_37xx_v1.bin");
++MODULE_FIRMWARE("intel/vpu/vpu_40xx_v1.bin");
++MODULE_FIRMWARE("intel/vpu/vpu_50xx_v1.bin");
+ 
+ static int ivpu_fw_request(struct ivpu_device *vdev)
+ {
+diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c
+index 8741c73b92ce0b..248bfebeaa22d2 100644
+--- a/drivers/accel/ivpu/ivpu_gem.c
++++ b/drivers/accel/ivpu/ivpu_gem.c
+@@ -28,11 +28,21 @@ static inline void ivpu_dbg_bo(struct ivpu_device *vdev, struct ivpu_bo *bo, con
+ {
+ 	ivpu_dbg(vdev, BO,
+ 		 "%6s: bo %8p vpu_addr %9llx size %8zu ctx %d has_pages %d dma_mapped %d mmu_mapped %d wc %d imported %d\n",
+-		 action, bo, bo->vpu_addr, ivpu_bo_size(bo), bo->ctx ? bo->ctx->id : 0,
++		 action, bo, bo->vpu_addr, ivpu_bo_size(bo), bo->ctx_id,
+ 		 (bool)bo->base.pages, (bool)bo->base.sgt, bo->mmu_mapped, bo->base.map_wc,
+ 		 (bool)bo->base.base.import_attach);
+ }
+ 
++static inline int ivpu_bo_lock(struct ivpu_bo *bo)
++{
++	return dma_resv_lock(bo->base.base.resv, NULL);
++}
++
++static inline void ivpu_bo_unlock(struct ivpu_bo *bo)
++{
++	dma_resv_unlock(bo->base.base.resv);
++}
++
+ /*
+  * ivpu_bo_pin() - pin the backing physical pages and map them to VPU.
+  *
+@@ -43,22 +53,22 @@ static inline void ivpu_dbg_bo(struct ivpu_device *vdev, struct ivpu_bo *bo, con
+ int __must_check ivpu_bo_pin(struct ivpu_bo *bo)
+ {
+ 	struct ivpu_device *vdev = ivpu_bo_to_vdev(bo);
++	struct sg_table *sgt;
+ 	int ret = 0;
+ 
+-	mutex_lock(&bo->lock);
+-
+ 	ivpu_dbg_bo(vdev, bo, "pin");
+-	drm_WARN_ON(&vdev->drm, !bo->ctx);
+ 
+-	if (!bo->mmu_mapped) {
+-		struct sg_table *sgt = drm_gem_shmem_get_pages_sgt(&bo->base);
++	sgt = drm_gem_shmem_get_pages_sgt(&bo->base);
++	if (IS_ERR(sgt)) {
++		ret = PTR_ERR(sgt);
++		ivpu_err(vdev, "Failed to map BO in IOMMU: %d\n", ret);
++		return ret;
++	}
+ 
+-		if (IS_ERR(sgt)) {
+-			ret = PTR_ERR(sgt);
+-			ivpu_err(vdev, "Failed to map BO in IOMMU: %d\n", ret);
+-			goto unlock;
+-		}
++	ivpu_bo_lock(bo);
+ 
++	if (!bo->mmu_mapped) {
++		drm_WARN_ON(&vdev->drm, !bo->ctx);
+ 		ret = ivpu_mmu_context_map_sgt(vdev, bo->ctx, bo->vpu_addr, sgt,
+ 					       ivpu_bo_is_snooped(bo));
+ 		if (ret) {
+@@ -69,7 +79,7 @@ int __must_check ivpu_bo_pin(struct ivpu_bo *bo)
+ 	}
+ 
+ unlock:
+-	mutex_unlock(&bo->lock);
++	ivpu_bo_unlock(bo);
+ 
+ 	return ret;
+ }
+@@ -84,7 +94,7 @@ ivpu_bo_alloc_vpu_addr(struct ivpu_bo *bo, struct ivpu_mmu_context *ctx,
+ 	if (!drm_dev_enter(&vdev->drm, &idx))
+ 		return -ENODEV;
+ 
+-	mutex_lock(&bo->lock);
++	ivpu_bo_lock(bo);
+ 
+ 	ret = ivpu_mmu_context_insert_node(ctx, range, ivpu_bo_size(bo), &bo->mm_node);
+ 	if (!ret) {
+@@ -94,9 +104,7 @@ ivpu_bo_alloc_vpu_addr(struct ivpu_bo *bo, struct ivpu_mmu_context *ctx,
+ 		ivpu_err(vdev, "Failed to add BO to context %u: %d\n", ctx->id, ret);
+ 	}
+ 
+-	ivpu_dbg_bo(vdev, bo, "alloc");
+-
+-	mutex_unlock(&bo->lock);
++	ivpu_bo_unlock(bo);
+ 
+ 	drm_dev_exit(idx);
+ 
+@@ -107,7 +115,7 @@ static void ivpu_bo_unbind_locked(struct ivpu_bo *bo)
+ {
+ 	struct ivpu_device *vdev = ivpu_bo_to_vdev(bo);
+ 
+-	lockdep_assert(lockdep_is_held(&bo->lock) || !kref_read(&bo->base.base.refcount));
++	lockdep_assert(dma_resv_held(bo->base.base.resv) || !kref_read(&bo->base.base.refcount));
+ 
+ 	if (bo->mmu_mapped) {
+ 		drm_WARN_ON(&vdev->drm, !bo->ctx);
+@@ -125,14 +133,12 @@ static void ivpu_bo_unbind_locked(struct ivpu_bo *bo)
+ 	if (bo->base.base.import_attach)
+ 		return;
+ 
+-	dma_resv_lock(bo->base.base.resv, NULL);
+ 	if (bo->base.sgt) {
+ 		dma_unmap_sgtable(vdev->drm.dev, bo->base.sgt, DMA_BIDIRECTIONAL, 0);
+ 		sg_free_table(bo->base.sgt);
+ 		kfree(bo->base.sgt);
+ 		bo->base.sgt = NULL;
+ 	}
+-	dma_resv_unlock(bo->base.base.resv);
+ }
+ 
+ void ivpu_bo_unbind_all_bos_from_context(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx)
+@@ -144,12 +150,12 @@ void ivpu_bo_unbind_all_bos_from_context(struct ivpu_device *vdev, struct ivpu_m
+ 
+ 	mutex_lock(&vdev->bo_list_lock);
+ 	list_for_each_entry(bo, &vdev->bo_list, bo_list_node) {
+-		mutex_lock(&bo->lock);
++		ivpu_bo_lock(bo);
+ 		if (bo->ctx == ctx) {
+ 			ivpu_dbg_bo(vdev, bo, "unbind");
+ 			ivpu_bo_unbind_locked(bo);
+ 		}
+-		mutex_unlock(&bo->lock);
++		ivpu_bo_unlock(bo);
+ 	}
+ 	mutex_unlock(&vdev->bo_list_lock);
+ }
+@@ -169,7 +175,6 @@ struct drm_gem_object *ivpu_gem_create_object(struct drm_device *dev, size_t siz
+ 	bo->base.pages_mark_dirty_on_put = true; /* VPU can dirty a BO anytime */
+ 
+ 	INIT_LIST_HEAD(&bo->bo_list_node);
+-	mutex_init(&bo->lock);
+ 
+ 	return &bo->base.base;
+ }
+@@ -215,7 +220,7 @@ struct drm_gem_object *ivpu_gem_prime_import(struct drm_device *dev,
+ 	return ERR_PTR(ret);
+ }
+ 
+-static struct ivpu_bo *ivpu_bo_alloc(struct ivpu_device *vdev, u64 size, u32 flags)
++static struct ivpu_bo *ivpu_bo_alloc(struct ivpu_device *vdev, u64 size, u32 flags, u32 ctx_id)
+ {
+ 	struct drm_gem_shmem_object *shmem;
+ 	struct ivpu_bo *bo;
+@@ -233,6 +238,7 @@ static struct ivpu_bo *ivpu_bo_alloc(struct ivpu_device *vdev, u64 size, u32 fla
+ 		return ERR_CAST(shmem);
+ 
+ 	bo = to_ivpu_bo(&shmem->base);
++	bo->ctx_id = ctx_id;
+ 	bo->base.map_wc = flags & DRM_IVPU_BO_WC;
+ 	bo->flags = flags;
+ 
+@@ -240,6 +246,8 @@ static struct ivpu_bo *ivpu_bo_alloc(struct ivpu_device *vdev, u64 size, u32 fla
+ 	list_add_tail(&bo->bo_list_node, &vdev->bo_list);
+ 	mutex_unlock(&vdev->bo_list_lock);
+ 
++	ivpu_dbg_bo(vdev, bo, "alloc");
++
+ 	return bo;
+ }
+ 
+@@ -277,10 +285,14 @@ static void ivpu_gem_bo_free(struct drm_gem_object *obj)
+ 	list_del(&bo->bo_list_node);
+ 	mutex_unlock(&vdev->bo_list_lock);
+ 
+-	drm_WARN_ON(&vdev->drm, !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ));
++	drm_WARN_ON(&vdev->drm, !drm_gem_is_imported(&bo->base.base) &&
++		    !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ));
++	drm_WARN_ON(&vdev->drm, ivpu_bo_size(bo) == 0);
++	drm_WARN_ON(&vdev->drm, bo->base.vaddr);
+ 
+ 	ivpu_bo_unbind_locked(bo);
+-	mutex_destroy(&bo->lock);
++	drm_WARN_ON(&vdev->drm, bo->mmu_mapped);
++	drm_WARN_ON(&vdev->drm, bo->ctx);
+ 
+ 	drm_WARN_ON(obj->dev, bo->base.pages_use_count > 1);
+ 	drm_gem_shmem_free(&bo->base);
+@@ -314,7 +326,7 @@ int ivpu_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *fi
+ 	if (size == 0)
+ 		return -EINVAL;
+ 
+-	bo = ivpu_bo_alloc(vdev, size, args->flags);
++	bo = ivpu_bo_alloc(vdev, size, args->flags, file_priv->ctx.id);
+ 	if (IS_ERR(bo)) {
+ 		ivpu_err(vdev, "Failed to allocate BO: %pe (ctx %u size %llu flags 0x%x)",
+ 			 bo, file_priv->ctx.id, args->size, args->flags);
+@@ -322,7 +334,10 @@ int ivpu_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *fi
+ 	}
+ 
+ 	ret = drm_gem_handle_create(file, &bo->base.base, &args->handle);
+-	if (!ret)
++	if (ret)
++		ivpu_err(vdev, "Failed to create handle for BO: %pe (ctx %u size %llu flags 0x%x)",
++			 bo, file_priv->ctx.id, args->size, args->flags);
++	else
+ 		args->vpu_addr = bo->vpu_addr;
+ 
+ 	drm_gem_object_put(&bo->base.base);
+@@ -345,7 +360,7 @@ ivpu_bo_create(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
+ 	drm_WARN_ON(&vdev->drm, !PAGE_ALIGNED(range->end));
+ 	drm_WARN_ON(&vdev->drm, !PAGE_ALIGNED(size));
+ 
+-	bo = ivpu_bo_alloc(vdev, size, flags);
++	bo = ivpu_bo_alloc(vdev, size, flags, IVPU_GLOBAL_CONTEXT_MMU_SSID);
+ 	if (IS_ERR(bo)) {
+ 		ivpu_err(vdev, "Failed to allocate BO: %pe (vpu_addr 0x%llx size %llu flags 0x%x)",
+ 			 bo, range->start, size, flags);
+@@ -361,9 +376,9 @@ ivpu_bo_create(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
+ 		goto err_put;
+ 
+ 	if (flags & DRM_IVPU_BO_MAPPABLE) {
+-		dma_resv_lock(bo->base.base.resv, NULL);
++		ivpu_bo_lock(bo);
+ 		ret = drm_gem_shmem_vmap(&bo->base, &map);
+-		dma_resv_unlock(bo->base.base.resv);
++		ivpu_bo_unlock(bo);
+ 
+ 		if (ret)
+ 			goto err_put;
+@@ -386,9 +401,9 @@ void ivpu_bo_free(struct ivpu_bo *bo)
+ 	struct iosys_map map = IOSYS_MAP_INIT_VADDR(bo->base.vaddr);
+ 
+ 	if (bo->flags & DRM_IVPU_BO_MAPPABLE) {
+-		dma_resv_lock(bo->base.base.resv, NULL);
++		ivpu_bo_lock(bo);
+ 		drm_gem_shmem_vunmap(&bo->base, &map);
+-		dma_resv_unlock(bo->base.base.resv);
++		ivpu_bo_unlock(bo);
+ 	}
+ 
+ 	drm_gem_object_put(&bo->base.base);
+@@ -407,12 +422,12 @@ int ivpu_bo_info_ioctl(struct drm_device *dev, void *data, struct drm_file *file
+ 
+ 	bo = to_ivpu_bo(obj);
+ 
+-	mutex_lock(&bo->lock);
++	ivpu_bo_lock(bo);
+ 	args->flags = bo->flags;
+ 	args->mmap_offset = drm_vma_node_offset_addr(&obj->vma_node);
+ 	args->vpu_addr = bo->vpu_addr;
+ 	args->size = obj->size;
+-	mutex_unlock(&bo->lock);
++	ivpu_bo_unlock(bo);
+ 
+ 	drm_gem_object_put(obj);
+ 	return ret;
+@@ -449,10 +464,10 @@ int ivpu_bo_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file
+ 
+ static void ivpu_bo_print_info(struct ivpu_bo *bo, struct drm_printer *p)
+ {
+-	mutex_lock(&bo->lock);
++	ivpu_bo_lock(bo);
+ 
+ 	drm_printf(p, "%-9p %-3u 0x%-12llx %-10lu 0x%-8x %-4u",
+-		   bo, bo->ctx ? bo->ctx->id : 0, bo->vpu_addr, bo->base.base.size,
++		   bo, bo->ctx_id, bo->vpu_addr, bo->base.base.size,
+ 		   bo->flags, kref_read(&bo->base.base.refcount));
+ 
+ 	if (bo->base.pages)
+@@ -466,7 +481,7 @@ static void ivpu_bo_print_info(struct ivpu_bo *bo, struct drm_printer *p)
+ 
+ 	drm_printf(p, "\n");
+ 
+-	mutex_unlock(&bo->lock);
++	ivpu_bo_unlock(bo);
+ }
+ 
+ void ivpu_bo_list(struct drm_device *dev, struct drm_printer *p)
+diff --git a/drivers/accel/ivpu/ivpu_gem.h b/drivers/accel/ivpu/ivpu_gem.h
+index a222a9ec9d6113..aa8ff14f7aae1a 100644
+--- a/drivers/accel/ivpu/ivpu_gem.h
++++ b/drivers/accel/ivpu/ivpu_gem.h
+@@ -17,10 +17,10 @@ struct ivpu_bo {
+ 	struct list_head bo_list_node;
+ 	struct drm_mm_node mm_node;
+ 
+-	struct mutex lock; /* Protects: ctx, mmu_mapped, vpu_addr */
+ 	u64 vpu_addr;
+ 	u32 flags;
+ 	u32 job_status; /* Valid only for command buffer */
++	u32 ctx_id;
+ 	bool mmu_mapped;
+ };
+ 
+diff --git a/drivers/accel/ivpu/ivpu_job.c b/drivers/accel/ivpu/ivpu_job.c
+index 1c8e283ad98542..fae8351aa33090 100644
+--- a/drivers/accel/ivpu/ivpu_job.c
++++ b/drivers/accel/ivpu/ivpu_job.c
+@@ -986,7 +986,8 @@ void ivpu_context_abort_work_fn(struct work_struct *work)
+ 		return;
+ 
+ 	if (vdev->fw->sched_mode == VPU_SCHEDULING_MODE_HW)
+-		ivpu_jsm_reset_engine(vdev, 0);
++		if (ivpu_jsm_reset_engine(vdev, 0))
++			return;
+ 
+ 	mutex_lock(&vdev->context_list_lock);
+ 	xa_for_each(&vdev->context_xa, ctx_id, file_priv) {
+@@ -1009,7 +1010,8 @@ void ivpu_context_abort_work_fn(struct work_struct *work)
+ 	if (vdev->fw->sched_mode != VPU_SCHEDULING_MODE_HW)
+ 		goto runtime_put;
+ 
+-	ivpu_jsm_hws_resume_engine(vdev, 0);
++	if (ivpu_jsm_hws_resume_engine(vdev, 0))
++		return;
+ 	/*
+ 	 * In hardware scheduling mode NPU already has stopped processing jobs
+ 	 * and won't send us any further notifications, thus we have to free job related resources
+diff --git a/drivers/accel/ivpu/ivpu_jsm_msg.c b/drivers/accel/ivpu/ivpu_jsm_msg.c
+index 219ab8afefabde..0256b2dfefc10c 100644
+--- a/drivers/accel/ivpu/ivpu_jsm_msg.c
++++ b/drivers/accel/ivpu/ivpu_jsm_msg.c
+@@ -7,6 +7,7 @@
+ #include "ivpu_hw.h"
+ #include "ivpu_ipc.h"
+ #include "ivpu_jsm_msg.h"
++#include "ivpu_pm.h"
+ #include "vpu_jsm_api.h"
+ 
+ const char *ivpu_jsm_msg_type_to_str(enum vpu_ipc_msg_type type)
+@@ -163,8 +164,10 @@ int ivpu_jsm_reset_engine(struct ivpu_device *vdev, u32 engine)
+ 
+ 	ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_ENGINE_RESET_DONE, &resp,
+ 				    VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+-	if (ret)
++	if (ret) {
+ 		ivpu_err_ratelimited(vdev, "Failed to reset engine %d: %d\n", engine, ret);
++		ivpu_pm_trigger_recovery(vdev, "Engine reset failed");
++	}
+ 
+ 	return ret;
+ }
+@@ -354,8 +357,10 @@ int ivpu_jsm_hws_resume_engine(struct ivpu_device *vdev, u32 engine)
+ 
+ 	ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_HWS_RESUME_ENGINE_DONE, &resp,
+ 				    VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+-	if (ret)
++	if (ret) {
+ 		ivpu_err_ratelimited(vdev, "Failed to resume engine %d: %d\n", engine, ret);
++		ivpu_pm_trigger_recovery(vdev, "Engine resume failed");
++	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/acpi/acpica/amlresrc.h b/drivers/acpi/acpica/amlresrc.h
+index 4e88f9fc2a2894..b6588b7fa8986a 100644
+--- a/drivers/acpi/acpica/amlresrc.h
++++ b/drivers/acpi/acpica/amlresrc.h
+@@ -504,10 +504,6 @@ struct aml_resource_pin_group_config {
+ 
+ #define AML_RESOURCE_PIN_GROUP_CONFIG_REVISION    1	/* ACPI 6.2 */
+ 
+-/* restore default alignment */
+-
+-#pragma pack()
+-
+ /* Union of all resource descriptors, so we can allocate the worst case */
+ 
+ union aml_resource {
+@@ -562,6 +558,10 @@ union aml_resource {
+ 	u8 byte_item;
+ };
+ 
++/* restore default alignment */
++
++#pragma pack()
++
+ /* Interfaces used by both the disassembler and compiler */
+ 
+ void
+diff --git a/drivers/acpi/acpica/dsutils.c b/drivers/acpi/acpica/dsutils.c
+index fb9ed5e1da89dc..2bdae8a25e084d 100644
+--- a/drivers/acpi/acpica/dsutils.c
++++ b/drivers/acpi/acpica/dsutils.c
+@@ -668,6 +668,8 @@ acpi_ds_create_operands(struct acpi_walk_state *walk_state,
+ 	union acpi_parse_object *arguments[ACPI_OBJ_NUM_OPERANDS];
+ 	u32 arg_count = 0;
+ 	u32 index = walk_state->num_operands;
++	u32 prev_num_operands = walk_state->num_operands;
++	u32 new_num_operands;
+ 	u32 i;
+ 
+ 	ACPI_FUNCTION_TRACE_PTR(ds_create_operands, first_arg);
+@@ -696,6 +698,7 @@ acpi_ds_create_operands(struct acpi_walk_state *walk_state,
+ 
+ 	/* Create the interpreter arguments, in reverse order */
+ 
++	new_num_operands = index;
+ 	index--;
+ 	for (i = 0; i < arg_count; i++) {
+ 		arg = arguments[index];
+@@ -720,7 +723,11 @@ acpi_ds_create_operands(struct acpi_walk_state *walk_state,
+ 	 * pop everything off of the operand stack and delete those
+ 	 * objects
+ 	 */
+-	acpi_ds_obj_stack_pop_and_delete(arg_count, walk_state);
++	walk_state->num_operands = i;
++	acpi_ds_obj_stack_pop_and_delete(new_num_operands, walk_state);
++
++	/* Restore operand count */
++	walk_state->num_operands = prev_num_operands;
+ 
+ 	ACPI_EXCEPTION((AE_INFO, status, "While creating Arg %u", index));
+ 	return_ACPI_STATUS(status);
+diff --git a/drivers/acpi/acpica/psobject.c b/drivers/acpi/acpica/psobject.c
+index 54471083ba545e..0bce1baaa62b32 100644
+--- a/drivers/acpi/acpica/psobject.c
++++ b/drivers/acpi/acpica/psobject.c
+@@ -636,7 +636,8 @@ acpi_status
+ acpi_ps_complete_final_op(struct acpi_walk_state *walk_state,
+ 			  union acpi_parse_object *op, acpi_status status)
+ {
+-	acpi_status status2;
++	acpi_status return_status = status;
++	u8 ascending = TRUE;
+ 
+ 	ACPI_FUNCTION_TRACE_PTR(ps_complete_final_op, walk_state);
+ 
+@@ -650,7 +651,7 @@ acpi_ps_complete_final_op(struct acpi_walk_state *walk_state,
+ 			  op));
+ 	do {
+ 		if (op) {
+-			if (walk_state->ascending_callback != NULL) {
++			if (ascending && walk_state->ascending_callback != NULL) {
+ 				walk_state->op = op;
+ 				walk_state->op_info =
+ 				    acpi_ps_get_opcode_info(op->common.
+@@ -672,49 +673,26 @@ acpi_ps_complete_final_op(struct acpi_walk_state *walk_state,
+ 				}
+ 
+ 				if (status == AE_CTRL_TERMINATE) {
+-					status = AE_OK;
+-
+-					/* Clean up */
+-					do {
+-						if (op) {
+-							status2 =
+-							    acpi_ps_complete_this_op
+-							    (walk_state, op);
+-							if (ACPI_FAILURE
+-							    (status2)) {
+-								return_ACPI_STATUS
+-								    (status2);
+-							}
+-						}
+-
+-						acpi_ps_pop_scope(&
+-								  (walk_state->
+-								   parser_state),
+-								  &op,
+-								  &walk_state->
+-								  arg_types,
+-								  &walk_state->
+-								  arg_count);
+-
+-					} while (op);
+-
+-					return_ACPI_STATUS(status);
++					ascending = FALSE;
++					return_status = AE_CTRL_TERMINATE;
+ 				}
+ 
+ 				else if (ACPI_FAILURE(status)) {
+ 
+ 					/* First error is most important */
+ 
+-					(void)
+-					    acpi_ps_complete_this_op(walk_state,
+-								     op);
+-					return_ACPI_STATUS(status);
++					ascending = FALSE;
++					return_status = status;
+ 				}
+ 			}
+ 
+-			status2 = acpi_ps_complete_this_op(walk_state, op);
+-			if (ACPI_FAILURE(status2)) {
+-				return_ACPI_STATUS(status2);
++			status = acpi_ps_complete_this_op(walk_state, op);
++			if (ACPI_FAILURE(status)) {
++				ascending = FALSE;
++				if (ACPI_SUCCESS(return_status) ||
++				    return_status == AE_CTRL_TERMINATE) {
++					return_status = status;
++				}
+ 			}
+ 		}
+ 
+@@ -724,5 +702,5 @@ acpi_ps_complete_final_op(struct acpi_walk_state *walk_state,
+ 
+ 	} while (op);
+ 
+-	return_ACPI_STATUS(status);
++	return_ACPI_STATUS(return_status);
+ }
+diff --git a/drivers/acpi/acpica/rsaddr.c b/drivers/acpi/acpica/rsaddr.c
+index 27384ee245f094..f92010e667cda7 100644
+--- a/drivers/acpi/acpica/rsaddr.c
++++ b/drivers/acpi/acpica/rsaddr.c
+@@ -272,18 +272,13 @@ u8
+ acpi_rs_get_address_common(struct acpi_resource *resource,
+ 			   union aml_resource *aml)
+ {
+-	struct aml_resource_address address;
+-
+ 	ACPI_FUNCTION_ENTRY();
+ 
+-	/* Avoid undefined behavior: member access within misaligned address */
+-
+-	memcpy(&address, aml, sizeof(address));
+-
+ 	/* Validate the Resource Type */
+ 
+-	if ((address.resource_type > 2) &&
+-	    (address.resource_type < 0xC0) && (address.resource_type != 0x0A)) {
++	if ((aml->address.resource_type > 2) &&
++	    (aml->address.resource_type < 0xC0) &&
++	    (aml->address.resource_type != 0x0A)) {
+ 		return (FALSE);
+ 	}
+ 
+@@ -304,7 +299,7 @@ acpi_rs_get_address_common(struct acpi_resource *resource,
+ 		/* Generic resource type, just grab the type_specific byte */
+ 
+ 		resource->data.address.info.type_specific =
+-		    address.specific_flags;
++		    aml->address.specific_flags;
+ 	}
+ 
+ 	return (TRUE);
+diff --git a/drivers/acpi/acpica/rscalc.c b/drivers/acpi/acpica/rscalc.c
+index 6e7a152d645953..242daf45e20eff 100644
+--- a/drivers/acpi/acpica/rscalc.c
++++ b/drivers/acpi/acpica/rscalc.c
+@@ -608,18 +608,12 @@ acpi_rs_get_list_length(u8 *aml_buffer,
+ 
+ 		case ACPI_RESOURCE_NAME_SERIAL_BUS:{
+ 
+-				/* Avoid undefined behavior: member access within misaligned address */
+-
+-				struct aml_resource_common_serialbus
+-				    common_serial_bus;
+-				memcpy(&common_serial_bus, aml_resource,
+-				       sizeof(common_serial_bus));
+-
+ 				minimum_aml_resource_length =
+ 				    acpi_gbl_resource_aml_serial_bus_sizes
+-				    [common_serial_bus.type];
++				    [aml_resource->common_serial_bus.type];
+ 				extra_struct_bytes +=
+-				    common_serial_bus.resource_length -
++				    aml_resource->common_serial_bus.
++				    resource_length -
+ 				    minimum_aml_resource_length;
+ 				break;
+ 			}
+@@ -688,16 +682,10 @@ acpi_rs_get_list_length(u8 *aml_buffer,
+ 		 */
+ 		if (acpi_ut_get_resource_type(aml_buffer) ==
+ 		    ACPI_RESOURCE_NAME_SERIAL_BUS) {
+-
+-			/* Avoid undefined behavior: member access within misaligned address */
+-
+-			struct aml_resource_common_serialbus common_serial_bus;
+-			memcpy(&common_serial_bus, aml_resource,
+-			       sizeof(common_serial_bus));
+-
+ 			buffer_size =
+ 			    acpi_gbl_resource_struct_serial_bus_sizes
+-			    [common_serial_bus.type] + extra_struct_bytes;
++			    [aml_resource->common_serial_bus.type] +
++			    extra_struct_bytes;
+ 		} else {
+ 			buffer_size =
+ 			    acpi_gbl_resource_struct_sizes[resource_index] +
+diff --git a/drivers/acpi/acpica/rslist.c b/drivers/acpi/acpica/rslist.c
+index 164c96e063c6e8..e46efaa889cdd7 100644
+--- a/drivers/acpi/acpica/rslist.c
++++ b/drivers/acpi/acpica/rslist.c
+@@ -55,21 +55,15 @@ acpi_rs_convert_aml_to_resources(u8 * aml,
+ 	aml_resource = ACPI_CAST_PTR(union aml_resource, aml);
+ 
+ 	if (acpi_ut_get_resource_type(aml) == ACPI_RESOURCE_NAME_SERIAL_BUS) {
+-
+-		/* Avoid undefined behavior: member access within misaligned address */
+-
+-		struct aml_resource_common_serialbus common_serial_bus;
+-		memcpy(&common_serial_bus, aml_resource,
+-		       sizeof(common_serial_bus));
+-
+-		if (common_serial_bus.type > AML_RESOURCE_MAX_SERIALBUSTYPE) {
++		if (aml_resource->common_serial_bus.type >
++		    AML_RESOURCE_MAX_SERIALBUSTYPE) {
+ 			conversion_table = NULL;
+ 		} else {
+ 			/* This is an I2C, SPI, UART, or CSI2 serial_bus descriptor */
+ 
+ 			conversion_table =
+ 			    acpi_gbl_convert_resource_serial_bus_dispatch
+-			    [common_serial_bus.type];
++			    [aml_resource->common_serial_bus.type];
+ 		}
+ 	} else {
+ 		conversion_table =
+diff --git a/drivers/acpi/acpica/utprint.c b/drivers/acpi/acpica/utprint.c
+index 42b30b9f93128e..7fad03c5252c35 100644
+--- a/drivers/acpi/acpica/utprint.c
++++ b/drivers/acpi/acpica/utprint.c
+@@ -333,11 +333,8 @@ int vsnprintf(char *string, acpi_size size, const char *format, va_list args)
+ 
+ 	pos = string;
+ 
+-	if (size != ACPI_UINT32_MAX) {
+-		end = string + size;
+-	} else {
+-		end = ACPI_CAST_PTR(char, ACPI_UINT32_MAX);
+-	}
++	size = ACPI_MIN(size, ACPI_PTR_DIFF(ACPI_MAX_PTR, string));
++	end = string + size;
+ 
+ 	for (; *format; ++format) {
+ 		if (*format != '%') {
+diff --git a/drivers/acpi/acpica/utresrc.c b/drivers/acpi/acpica/utresrc.c
+index cff7901f7866ec..e1cc3d3487508c 100644
+--- a/drivers/acpi/acpica/utresrc.c
++++ b/drivers/acpi/acpica/utresrc.c
+@@ -361,20 +361,16 @@ acpi_ut_validate_resource(struct acpi_walk_state *walk_state,
+ 	aml_resource = ACPI_CAST_PTR(union aml_resource, aml);
+ 	if (resource_type == ACPI_RESOURCE_NAME_SERIAL_BUS) {
+ 
+-		/* Avoid undefined behavior: member access within misaligned address */
+-
+-		struct aml_resource_common_serialbus common_serial_bus;
+-		memcpy(&common_serial_bus, aml_resource,
+-		       sizeof(common_serial_bus));
+-
+ 		/* Validate the bus_type field */
+ 
+-		if ((common_serial_bus.type == 0) ||
+-		    (common_serial_bus.type > AML_RESOURCE_MAX_SERIALBUSTYPE)) {
++		if ((aml_resource->common_serial_bus.type == 0) ||
++		    (aml_resource->common_serial_bus.type >
++		     AML_RESOURCE_MAX_SERIALBUSTYPE)) {
+ 			if (walk_state) {
+ 				ACPI_ERROR((AE_INFO,
+ 					    "Invalid/unsupported SerialBus resource descriptor: BusType 0x%2.2X",
+-					    common_serial_bus.type));
++					    aml_resource->common_serial_bus.
++					    type));
+ 			}
+ 			return (AE_AML_INVALID_RESOURCE_TYPE);
+ 		}
+diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
+index 6760330a8af55d..93bb1f7d909867 100644
+--- a/drivers/acpi/battery.c
++++ b/drivers/acpi/battery.c
+@@ -243,10 +243,23 @@ static int acpi_battery_get_property(struct power_supply *psy,
+ 		break;
+ 	case POWER_SUPPLY_PROP_CURRENT_NOW:
+ 	case POWER_SUPPLY_PROP_POWER_NOW:
+-		if (battery->rate_now == ACPI_BATTERY_VALUE_UNKNOWN)
++		if (battery->rate_now == ACPI_BATTERY_VALUE_UNKNOWN) {
+ 			ret = -ENODEV;
+-		else
+-			val->intval = battery->rate_now * 1000;
++			break;
++		}
++
++		val->intval = battery->rate_now * 1000;
++		/*
++		 * When discharging, the current should be reported as a
++		 * negative number as per the power supply class interface
++		 * definition.
++		 */
++		if (psp == POWER_SUPPLY_PROP_CURRENT_NOW &&
++		    (battery->state & ACPI_BATTERY_STATE_DISCHARGING) &&
++		    acpi_battery_handle_discharging(battery)
++				== POWER_SUPPLY_STATUS_DISCHARGING)
++			val->intval = -val->intval;
++
+ 		break;
+ 	case POWER_SUPPLY_PROP_CHARGE_FULL_DESIGN:
+ 	case POWER_SUPPLY_PROP_ENERGY_FULL_DESIGN:
+diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
+index 058910af82bca6..c2ab2783303f21 100644
+--- a/drivers/acpi/bus.c
++++ b/drivers/acpi/bus.c
+@@ -1446,8 +1446,10 @@ static int __init acpi_init(void)
+ 	}
+ 
+ 	acpi_kobj = kobject_create_and_add("acpi", firmware_kobj);
+-	if (!acpi_kobj)
+-		pr_debug("%s: kset create error\n", __func__);
++	if (!acpi_kobj) {
++		pr_err("Failed to register kobject\n");
++		return -ENOMEM;
++	}
+ 
+ 	init_prmt();
+ 	acpi_init_pcc();
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 163ac909bd0689..3fc65df9ccb839 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -1410,8 +1410,15 @@ static bool ahci_broken_suspend(struct pci_dev *pdev)
+ 
+ static bool ahci_broken_lpm(struct pci_dev *pdev)
+ {
++	/*
++	 * Platforms with LPM problems.
++	 * If driver_data is NULL, there is no existing BIOS version with
++	 * functioning LPM.
++	 * If driver_data is non-NULL, then driver_data contains the DMI BIOS
++	 * build date of the first BIOS version with functioning LPM (i.e. older
++	 * BIOS versions have broken LPM).
++	 */
+ 	static const struct dmi_system_id sysids[] = {
+-		/* Various Lenovo 50 series have LPM issues with older BIOSen */
+ 		{
+ 			.matches = {
+ 				DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+@@ -1446,6 +1453,29 @@ static bool ahci_broken_lpm(struct pci_dev *pdev)
+ 			 */
+ 			.driver_data = "20180310", /* 2.35 */
+ 		},
++		{
++			.matches = {
++				DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++				DMI_MATCH(DMI_PRODUCT_VERSION, "ASUSPRO D840MB_M840SA"),
++			},
++			/* 320 is broken, there is no known good version. */
++		},
++		{
++			/*
++			 * AMD 500 Series Chipset SATA Controller [1022:43eb]
++			 * on this motherboard timeouts on ports 5 and 6 when
++			 * LPM is enabled, at least with WDC WD20EFAX-68FB5N0
++			 * hard drives. LPM with the same drive works fine on
++			 * all other ports on the same controller.
++			 */
++			.matches = {
++				DMI_MATCH(DMI_BOARD_VENDOR,
++					  "ASUSTeK COMPUTER INC."),
++				DMI_MATCH(DMI_BOARD_NAME,
++					  "ROG STRIX B550-F GAMING (WI-FI)"),
++			},
++			/* 3621 is broken, there is no known good version. */
++		},
+ 		{ }	/* terminate list */
+ 	};
+ 	const struct dmi_system_id *dmi = dmi_first_match(sysids);
+@@ -1455,6 +1485,9 @@ static bool ahci_broken_lpm(struct pci_dev *pdev)
+ 	if (!dmi)
+ 		return false;
+ 
++	if (!dmi->driver_data)
++		return true;
++
+ 	dmi_get_date(DMI_BIOS_DATE, &year, &month, &date);
+ 	snprintf(buf, sizeof(buf), "%04d%02d%02d", year, month, date);
+ 
+diff --git a/drivers/ata/pata_via.c b/drivers/ata/pata_via.c
+index 696b99720dcbda..d82728a01832b5 100644
+--- a/drivers/ata/pata_via.c
++++ b/drivers/ata/pata_via.c
+@@ -368,7 +368,8 @@ static unsigned int via_mode_filter(struct ata_device *dev, unsigned int mask)
+ 	}
+ 
+ 	if (dev->class == ATA_DEV_ATAPI &&
+-	    dmi_check_system(no_atapi_dma_dmi_table)) {
++	    (dmi_check_system(no_atapi_dma_dmi_table) ||
++	     config->id == PCI_DEVICE_ID_VIA_6415)) {
+ 		ata_dev_warn(dev, "controller locks up on ATAPI DMA, forcing PIO\n");
+ 		mask &= ATA_MASK_PIO;
+ 	}
+diff --git a/drivers/atm/atmtcp.c b/drivers/atm/atmtcp.c
+index d4aa0f353b6c80..eeae160c898d38 100644
+--- a/drivers/atm/atmtcp.c
++++ b/drivers/atm/atmtcp.c
+@@ -288,7 +288,9 @@ static int atmtcp_c_send(struct atm_vcc *vcc,struct sk_buff *skb)
+ 	struct sk_buff *new_skb;
+ 	int result = 0;
+ 
+-	if (!skb->len) return 0;
++	if (skb->len < sizeof(struct atmtcp_hdr))
++		goto done;
++
+ 	dev = vcc->dev_data;
+ 	hdr = (struct atmtcp_hdr *) skb->data;
+ 	if (hdr->length == ATMTCP_HDR_MAGIC) {
+diff --git a/drivers/base/platform-msi.c b/drivers/base/platform-msi.c
+index 0e60dd650b5e0d..70db08f3ac6fae 100644
+--- a/drivers/base/platform-msi.c
++++ b/drivers/base/platform-msi.c
+@@ -95,5 +95,6 @@ EXPORT_SYMBOL_GPL(platform_device_msi_init_and_alloc_irqs);
+ void platform_device_msi_free_irqs_all(struct device *dev)
+ {
+ 	msi_domain_free_irqs_all(dev, MSI_DEFAULT_DOMAIN);
++	msi_remove_device_irq_domain(dev, MSI_DEFAULT_DOMAIN);
+ }
+ EXPORT_SYMBOL_GPL(platform_device_msi_free_irqs_all);
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index 205a4f8828b0ac..c55a7c70bc1a88 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -1011,7 +1011,7 @@ static enum hrtimer_restart  pm_suspend_timer_fn(struct hrtimer *timer)
+ 	 * If 'expires' is after the current time, we've been called
+ 	 * too early.
+ 	 */
+-	if (expires > 0 && expires < ktime_get_mono_fast_ns()) {
++	if (expires > 0 && expires <= ktime_get_mono_fast_ns()) {
+ 		dev->power.timer_expires = 0;
+ 		rpm_suspend(dev, dev->power.timer_autosuspends ?
+ 		    (RPM_ASYNC | RPM_AUTO) : RPM_ASYNC);
+diff --git a/drivers/base/swnode.c b/drivers/base/swnode.c
+index 5c78fa6ae77257..deda7f35a05987 100644
+--- a/drivers/base/swnode.c
++++ b/drivers/base/swnode.c
+@@ -529,7 +529,7 @@ software_node_get_reference_args(const struct fwnode_handle *fwnode,
+ 	if (prop->is_inline)
+ 		return -EINVAL;
+ 
+-	if (index * sizeof(*ref) >= prop->length)
++	if ((index + 1) * sizeof(*ref) > prop->length)
+ 		return -ENOENT;
+ 
+ 	ref_array = prop->pointer;
+diff --git a/drivers/block/aoe/aoedev.c b/drivers/block/aoe/aoedev.c
+index 141b2a0e03f2cb..8c18034cb3d69e 100644
+--- a/drivers/block/aoe/aoedev.c
++++ b/drivers/block/aoe/aoedev.c
+@@ -198,6 +198,7 @@ aoedev_downdev(struct aoedev *d)
+ {
+ 	struct aoetgt *t, **tt, **te;
+ 	struct list_head *head, *pos, *nx;
++	struct request *rq, *rqnext;
+ 	int i;
+ 
+ 	d->flags &= ~DEVFL_UP;
+@@ -223,6 +224,13 @@ aoedev_downdev(struct aoedev *d)
+ 	/* clean out the in-process request (if any) */
+ 	aoe_failip(d);
+ 
++	/* clean out any queued block requests */
++	list_for_each_entry_safe(rq, rqnext, &d->rq_list, queuelist) {
++		list_del_init(&rq->queuelist);
++		blk_mq_start_request(rq);
++		blk_mq_end_request(rq, BLK_STS_IOERR);
++	}
++
+ 	/* fast fail all pending I/O */
+ 	if (d->blkq) {
+ 		/* UP is cleared, freeze+quiesce to insure all are errored */
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index dc104c025cd568..8a482853a75ede 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -2710,6 +2710,9 @@ static int ublk_ctrl_add_dev(const struct ublksrv_ctrl_cmd *header)
+ 	if (copy_from_user(&info, argp, sizeof(info)))
+ 		return -EFAULT;
+ 
++	if (info.queue_depth > UBLK_MAX_QUEUE_DEPTH || info.nr_hw_queues > UBLK_MAX_NR_QUEUES)
++		return -EINVAL;
++
+ 	if (capable(CAP_SYS_ADMIN))
+ 		info.flags &= ~UBLK_F_UNPRIVILEGED_DEV;
+ 	else if (!(info.flags & UBLK_F_UNPRIVILEGED_DEV))
+diff --git a/drivers/bluetooth/btmrvl_sdio.c b/drivers/bluetooth/btmrvl_sdio.c
+index 07cd308f7abf6d..93932a0d8625a5 100644
+--- a/drivers/bluetooth/btmrvl_sdio.c
++++ b/drivers/bluetooth/btmrvl_sdio.c
+@@ -100,7 +100,9 @@ static int btmrvl_sdio_probe_of(struct device *dev,
+ 			}
+ 
+ 			/* Configure wakeup (enabled by default) */
+-			device_init_wakeup(dev, true);
++			ret = devm_device_init_wakeup(dev);
++			if (ret)
++				return dev_err_probe(dev, ret, "Failed to init wakeup\n");
+ 		}
+ 	}
+ 
+diff --git a/drivers/bluetooth/btmtksdio.c b/drivers/bluetooth/btmtksdio.c
+index 1d26207b2ba70a..c16a3518b8ffa4 100644
+--- a/drivers/bluetooth/btmtksdio.c
++++ b/drivers/bluetooth/btmtksdio.c
+@@ -1414,7 +1414,7 @@ static int btmtksdio_probe(struct sdio_func *func,
+ 	 */
+ 	pm_runtime_put_noidle(bdev->dev);
+ 
+-	err = device_init_wakeup(bdev->dev, true);
++	err = devm_device_init_wakeup(bdev->dev);
+ 	if (err)
+ 		bt_dev_err(hdev, "failed to initialize device wakeup");
+ 
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 256b451bbe065f..ef9689f877691e 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -513,6 +513,7 @@ static const struct usb_device_id quirks_table[] = {
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 
+ 	/* Realtek 8851BE Bluetooth devices */
++	{ USB_DEVICE(0x0bda, 0xb850), .driver_info = BTUSB_REALTEK },
+ 	{ USB_DEVICE(0x13d3, 0x3600), .driver_info = BTUSB_REALTEK },
+ 
+ 	/* Realtek 8852AE Bluetooth devices */
+@@ -678,6 +679,8 @@ static const struct usb_device_id quirks_table[] = {
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x13d3, 0x3568), .driver_info = BTUSB_MEDIATEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x13d3, 0x3584), .driver_info = BTUSB_MEDIATEK |
++						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x13d3, 0x3605), .driver_info = BTUSB_MEDIATEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x13d3, 0x3607), .driver_info = BTUSB_MEDIATEK |
+@@ -718,6 +721,8 @@ static const struct usb_device_id quirks_table[] = {
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x13d3, 0x3628), .driver_info = BTUSB_MEDIATEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x13d3, 0x3630), .driver_info = BTUSB_MEDIATEK |
++						     BTUSB_WIDEBAND_SPEECH },
+ 
+ 	/* Additional Realtek 8723AE Bluetooth devices */
+ 	{ USB_DEVICE(0x0930, 0x021d), .driver_info = BTUSB_REALTEK },
+diff --git a/drivers/bus/fsl-mc/fsl-mc-uapi.c b/drivers/bus/fsl-mc/fsl-mc-uapi.c
+index 9c4c1395fcdbf2..a376ec66165348 100644
+--- a/drivers/bus/fsl-mc/fsl-mc-uapi.c
++++ b/drivers/bus/fsl-mc/fsl-mc-uapi.c
+@@ -275,13 +275,13 @@ static struct fsl_mc_cmd_desc fsl_mc_accepted_cmds[] = {
+ 		.size = 8,
+ 	},
+ 	[DPSW_GET_TAILDROP] = {
+-		.cmdid_value = 0x0A80,
++		.cmdid_value = 0x0A90,
+ 		.cmdid_mask = 0xFFF0,
+ 		.token = true,
+ 		.size = 14,
+ 	},
+ 	[DPSW_SET_TAILDROP] = {
+-		.cmdid_value = 0x0A90,
++		.cmdid_value = 0x0A80,
+ 		.cmdid_mask = 0xFFF0,
+ 		.token = true,
+ 		.size = 24,
+diff --git a/drivers/bus/fsl-mc/mc-io.c b/drivers/bus/fsl-mc/mc-io.c
+index a0ad7866cbfcba..cd8754763f40a2 100644
+--- a/drivers/bus/fsl-mc/mc-io.c
++++ b/drivers/bus/fsl-mc/mc-io.c
+@@ -214,12 +214,19 @@ int __must_check fsl_mc_portal_allocate(struct fsl_mc_device *mc_dev,
+ 	if (error < 0)
+ 		goto error_cleanup_resource;
+ 
+-	dpmcp_dev->consumer_link = device_link_add(&mc_dev->dev,
+-						   &dpmcp_dev->dev,
+-						   DL_FLAG_AUTOREMOVE_CONSUMER);
+-	if (!dpmcp_dev->consumer_link) {
+-		error = -EINVAL;
+-		goto error_cleanup_mc_io;
++	/* If the DPRC device itself tries to allocate a portal (usually for
++	 * UAPI interaction), don't add a device link between them since the
++	 * DPMCP device is an actual child device of the DPRC and a reverse
++	 * dependency is not allowed.
++	 */
++	if (mc_dev != mc_bus_dev) {
++		dpmcp_dev->consumer_link = device_link_add(&mc_dev->dev,
++							   &dpmcp_dev->dev,
++							   DL_FLAG_AUTOREMOVE_CONSUMER);
++		if (!dpmcp_dev->consumer_link) {
++			error = -EINVAL;
++			goto error_cleanup_mc_io;
++		}
+ 	}
+ 
+ 	*new_mc_io = mc_io;
+diff --git a/drivers/bus/fsl-mc/mc-sys.c b/drivers/bus/fsl-mc/mc-sys.c
+index f2052cd0a05178..b22c59d57c8f0a 100644
+--- a/drivers/bus/fsl-mc/mc-sys.c
++++ b/drivers/bus/fsl-mc/mc-sys.c
+@@ -19,7 +19,7 @@
+ /*
+  * Timeout in milliseconds to wait for the completion of an MC command
+  */
+-#define MC_CMD_COMPLETION_TIMEOUT_MS	500
++#define MC_CMD_COMPLETION_TIMEOUT_MS	15000
+ 
+ /*
+  * usleep_range() min and max values used to throttle down polling
+diff --git a/drivers/bus/mhi/ep/ring.c b/drivers/bus/mhi/ep/ring.c
+index aeb53b2c34a8cd..26357ee68dee98 100644
+--- a/drivers/bus/mhi/ep/ring.c
++++ b/drivers/bus/mhi/ep/ring.c
+@@ -131,19 +131,23 @@ int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ring_element *e
+ 	}
+ 
+ 	old_offset = ring->rd_offset;
+-	mhi_ep_ring_inc_index(ring);
+ 
+ 	dev_dbg(dev, "Adding an element to ring at offset (%zu)\n", ring->rd_offset);
++	buf_info.host_addr = ring->rbase + (old_offset * sizeof(*el));
++	buf_info.dev_addr = el;
++	buf_info.size = sizeof(*el);
++
++	ret = mhi_cntrl->write_sync(mhi_cntrl, &buf_info);
++	if (ret)
++		return ret;
++
++	mhi_ep_ring_inc_index(ring);
+ 
+ 	/* Update rp in ring context */
+ 	rp = cpu_to_le64(ring->rd_offset * sizeof(*el) + ring->rbase);
+ 	memcpy_toio((void __iomem *) &ring->ring_ctx->generic.rp, &rp, sizeof(u64));
+ 
+-	buf_info.host_addr = ring->rbase + (old_offset * sizeof(*el));
+-	buf_info.dev_addr = el;
+-	buf_info.size = sizeof(*el);
+-
+-	return mhi_cntrl->write_sync(mhi_cntrl, &buf_info);
++	return ret;
+ }
+ 
+ void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id)
+diff --git a/drivers/bus/mhi/host/pm.c b/drivers/bus/mhi/host/pm.c
+index 2fb27e6f8f88eb..33d92bf2fc3ed4 100644
+--- a/drivers/bus/mhi/host/pm.c
++++ b/drivers/bus/mhi/host/pm.c
+@@ -602,6 +602,7 @@ static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
+ 	struct mhi_cmd *mhi_cmd;
+ 	struct mhi_event_ctxt *er_ctxt;
+ 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	bool reset_device = false;
+ 	int ret, i;
+ 
+ 	dev_dbg(dev, "Transitioning from PM state: %s to: %s\n",
+@@ -630,8 +631,23 @@ static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
+ 	/* Wake up threads waiting for state transition */
+ 	wake_up_all(&mhi_cntrl->state_event);
+ 
+-	/* Trigger MHI RESET so that the device will not access host memory */
+ 	if (MHI_REG_ACCESS_VALID(prev_state)) {
++		/*
++		 * If the device is in PBL or SBL, it will only respond to
++		 * RESET if the device is in SYSERR state. SYSERR might
++		 * already be cleared at this point.
++		 */
++		enum mhi_state cur_state = mhi_get_mhi_state(mhi_cntrl);
++		enum mhi_ee_type cur_ee = mhi_get_exec_env(mhi_cntrl);
++
++		if (cur_state == MHI_STATE_SYS_ERR)
++			reset_device = true;
++		else if (cur_ee != MHI_EE_PBL && cur_ee != MHI_EE_SBL)
++			reset_device = true;
++	}
++
++	/* Trigger MHI RESET so that the device will not access host memory */
++	if (reset_device) {
+ 		u32 in_reset = -1;
+ 		unsigned long timeout = msecs_to_jiffies(mhi_cntrl->timeout_ms);
+ 
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index f67b927ae4caa8..e5c02e950f2c1b 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -677,51 +677,6 @@ static int sysc_parse_and_check_child_range(struct sysc *ddata)
+ 	return 0;
+ }
+ 
+-/* Interconnect instances to probe before l4_per instances */
+-static struct resource early_bus_ranges[] = {
+-	/* am3/4 l4_wkup */
+-	{ .start = 0x44c00000, .end = 0x44c00000 + 0x300000, },
+-	/* omap4/5 and dra7 l4_cfg */
+-	{ .start = 0x4a000000, .end = 0x4a000000 + 0x300000, },
+-	/* omap4 l4_wkup */
+-	{ .start = 0x4a300000, .end = 0x4a300000 + 0x30000,  },
+-	/* omap5 and dra7 l4_wkup without dra7 dcan segment */
+-	{ .start = 0x4ae00000, .end = 0x4ae00000 + 0x30000,  },
+-};
+-
+-static atomic_t sysc_defer = ATOMIC_INIT(10);
+-
+-/**
+- * sysc_defer_non_critical - defer non_critical interconnect probing
+- * @ddata: device driver data
+- *
+- * We want to probe l4_cfg and l4_wkup interconnect instances before any
+- * l4_per instances as l4_per instances depend on resources on l4_cfg and
+- * l4_wkup interconnects.
+- */
+-static int sysc_defer_non_critical(struct sysc *ddata)
+-{
+-	struct resource *res;
+-	int i;
+-
+-	if (!atomic_read(&sysc_defer))
+-		return 0;
+-
+-	for (i = 0; i < ARRAY_SIZE(early_bus_ranges); i++) {
+-		res = &early_bus_ranges[i];
+-		if (ddata->module_pa >= res->start &&
+-		    ddata->module_pa <= res->end) {
+-			atomic_set(&sysc_defer, 0);
+-
+-			return 0;
+-		}
+-	}
+-
+-	atomic_dec_if_positive(&sysc_defer);
+-
+-	return -EPROBE_DEFER;
+-}
+-
+ static struct device_node *stdout_path;
+ 
+ static void sysc_init_stdout_path(struct sysc *ddata)
+@@ -947,10 +902,6 @@ static int sysc_map_and_check_registers(struct sysc *ddata)
+ 	if (error)
+ 		return error;
+ 
+-	error = sysc_defer_non_critical(ddata);
+-	if (error)
+-		return error;
+-
+ 	sysc_check_children(ddata);
+ 
+ 	if (!of_property_present(np, "reg"))
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index 0b45b07dec22cd..5bf038e620c75f 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -481,8 +481,6 @@ static int ipmi_ssif_thread(void *data)
+ 		/* Wait for something to do */
+ 		result = wait_for_completion_interruptible(
+ 						&ssif_info->wake_thread);
+-		if (ssif_info->stopping)
+-			break;
+ 		if (result == -ERESTARTSYS)
+ 			continue;
+ 		init_completion(&ssif_info->wake_thread);
+@@ -1270,10 +1268,8 @@ static void shutdown_ssif(void *send_info)
+ 	ssif_info->stopping = true;
+ 	timer_delete_sync(&ssif_info->watch_timer);
+ 	timer_delete_sync(&ssif_info->retry_timer);
+-	if (ssif_info->thread) {
+-		complete(&ssif_info->wake_thread);
++	if (ssif_info->thread)
+ 		kthread_stop(ssif_info->thread);
+-	}
+ }
+ 
+ static void ssif_remove(struct i2c_client *client)
+diff --git a/drivers/clk/meson/g12a.c b/drivers/clk/meson/g12a.c
+index ceabebb1863d6e..d9e546e006d7e0 100644
+--- a/drivers/clk/meson/g12a.c
++++ b/drivers/clk/meson/g12a.c
+@@ -4093,6 +4093,7 @@ static const struct clk_parent_data spicc_sclk_parent_data[] = {
+ 	{ .hw = &g12a_clk81.hw },
+ 	{ .hw = &g12a_fclk_div4.hw },
+ 	{ .hw = &g12a_fclk_div3.hw },
++	{ .hw = &g12a_fclk_div2.hw },
+ 	{ .hw = &g12a_fclk_div5.hw },
+ 	{ .hw = &g12a_fclk_div7.hw },
+ };
+diff --git a/drivers/clk/qcom/gcc-sm8650.c b/drivers/clk/qcom/gcc-sm8650.c
+index fa1672c4e7d814..24f98062b9dd50 100644
+--- a/drivers/clk/qcom/gcc-sm8650.c
++++ b/drivers/clk/qcom/gcc-sm8650.c
+@@ -3817,7 +3817,9 @@ static int gcc_sm8650_probe(struct platform_device *pdev)
+ 	qcom_branch_set_clk_en(regmap, 0x32004); /* GCC_VIDEO_AHB_CLK */
+ 	qcom_branch_set_clk_en(regmap, 0x32030); /* GCC_VIDEO_XO_CLK */
+ 
++	/* FORCE_MEM_CORE_ON for ufs phy ice core and gcc ufs phy axi clocks  */
+ 	qcom_branch_set_force_mem_core(regmap, gcc_ufs_phy_ice_core_clk, true);
++	qcom_branch_set_force_mem_core(regmap, gcc_ufs_phy_axi_clk, true);
+ 
+ 	/* Clear GDSC_SLEEP_ENA_VOTE to stop votes being auto-removed in sleep. */
+ 	regmap_write(regmap, 0x52150, 0x0);
+diff --git a/drivers/clk/qcom/gcc-sm8750.c b/drivers/clk/qcom/gcc-sm8750.c
+index b36d7097609583..8092dd6b37b56f 100644
+--- a/drivers/clk/qcom/gcc-sm8750.c
++++ b/drivers/clk/qcom/gcc-sm8750.c
+@@ -3244,8 +3244,9 @@ static int gcc_sm8750_probe(struct platform_device *pdev)
+ 	regmap_update_bits(regmap, 0x52010, BIT(20), BIT(20));
+ 	regmap_update_bits(regmap, 0x52010, BIT(21), BIT(21));
+ 
+-	/* FORCE_MEM_CORE_ON for ufs phy ice core clocks */
++	/* FORCE_MEM_CORE_ON for ufs phy ice core and gcc ufs phy axi clocks  */
+ 	qcom_branch_set_force_mem_core(regmap, gcc_ufs_phy_ice_core_clk, true);
++	qcom_branch_set_force_mem_core(regmap, gcc_ufs_phy_axi_clk, true);
+ 
+ 	return qcom_cc_really_probe(&pdev->dev, &gcc_sm8750_desc, regmap);
+ }
+diff --git a/drivers/clk/qcom/gcc-x1e80100.c b/drivers/clk/qcom/gcc-x1e80100.c
+index 009f39139b6440..3e44757e25d324 100644
+--- a/drivers/clk/qcom/gcc-x1e80100.c
++++ b/drivers/clk/qcom/gcc-x1e80100.c
+@@ -6753,6 +6753,10 @@ static int gcc_x1e80100_probe(struct platform_device *pdev)
+ 	/* Clear GDSC_SLEEP_ENA_VOTE to stop votes being auto-removed in sleep. */
+ 	regmap_write(regmap, 0x52224, 0x0);
+ 
++	/* FORCE_MEM_CORE_ON for ufs phy ice core and gcc ufs phy axi clocks  */
++	qcom_branch_set_force_mem_core(regmap, gcc_ufs_phy_ice_core_clk, true);
++	qcom_branch_set_force_mem_core(regmap, gcc_ufs_phy_axi_clk, true);
++
+ 	return qcom_cc_really_probe(&pdev->dev, &gcc_x1e80100_desc, regmap);
+ }
+ 
+diff --git a/drivers/clk/rockchip/clk-rk3036.c b/drivers/clk/rockchip/clk-rk3036.c
+index d341ce0708aac3..e4af3a92863794 100644
+--- a/drivers/clk/rockchip/clk-rk3036.c
++++ b/drivers/clk/rockchip/clk-rk3036.c
+@@ -431,6 +431,7 @@ static const char *const rk3036_critical_clocks[] __initconst = {
+ 	"hclk_peri",
+ 	"pclk_peri",
+ 	"pclk_ddrupctl",
++	"ddrphy",
+ };
+ 
+ static void __init rk3036_clk_init(struct device_node *np)
+diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
+index 944e899eb1be13..ef078426bfd51a 100644
+--- a/drivers/cpufreq/scmi-cpufreq.c
++++ b/drivers/cpufreq/scmi-cpufreq.c
+@@ -393,6 +393,40 @@ static struct cpufreq_driver scmi_cpufreq_driver = {
+ 	.set_boost	= cpufreq_boost_set_sw,
+ };
+ 
++static bool scmi_dev_used_by_cpus(struct device *scmi_dev)
++{
++	struct device_node *scmi_np = dev_of_node(scmi_dev);
++	struct device_node *cpu_np, *np;
++	struct device *cpu_dev;
++	int cpu, idx;
++
++	if (!scmi_np)
++		return false;
++
++	for_each_possible_cpu(cpu) {
++		cpu_dev = get_cpu_device(cpu);
++		if (!cpu_dev)
++			continue;
++
++		cpu_np = dev_of_node(cpu_dev);
++
++		np = of_parse_phandle(cpu_np, "clocks", 0);
++		of_node_put(np);
++
++		if (np == scmi_np)
++			return true;
++
++		idx = of_property_match_string(cpu_np, "power-domain-names", "perf");
++		np = of_parse_phandle(cpu_np, "power-domains", idx);
++		of_node_put(np);
++
++		if (np == scmi_np)
++			return true;
++	}
++
++	return false;
++}
++
+ static int scmi_cpufreq_probe(struct scmi_device *sdev)
+ {
+ 	int ret;
+@@ -401,7 +435,7 @@ static int scmi_cpufreq_probe(struct scmi_device *sdev)
+ 
+ 	handle = sdev->handle;
+ 
+-	if (!handle)
++	if (!handle || !scmi_dev_used_by_cpus(dev))
+ 		return -ENODEV;
+ 
+ 	scmi_cpufreq_driver.driver_data = sdev;
+diff --git a/drivers/crypto/intel/qat/qat_420xx/adf_drv.c b/drivers/crypto/intel/qat/qat_420xx/adf_drv.c
+index 8084aa0f7f4170..b4731f02deb8c4 100644
+--- a/drivers/crypto/intel/qat/qat_420xx/adf_drv.c
++++ b/drivers/crypto/intel/qat/qat_420xx/adf_drv.c
+@@ -186,11 +186,19 @@ static void adf_remove(struct pci_dev *pdev)
+ 	adf_cleanup_accel(accel_dev);
+ }
+ 
++static void adf_shutdown(struct pci_dev *pdev)
++{
++	struct adf_accel_dev *accel_dev = adf_devmgr_pci_to_accel_dev(pdev);
++
++	adf_dev_down(accel_dev);
++}
++
+ static struct pci_driver adf_driver = {
+ 	.id_table = adf_pci_tbl,
+ 	.name = ADF_420XX_DEVICE_NAME,
+ 	.probe = adf_probe,
+ 	.remove = adf_remove,
++	.shutdown = adf_shutdown,
+ 	.sriov_configure = adf_sriov_configure,
+ 	.err_handler = &adf_err_handler,
+ };
+diff --git a/drivers/crypto/intel/qat/qat_4xxx/adf_drv.c b/drivers/crypto/intel/qat/qat_4xxx/adf_drv.c
+index 5537a9991e4efb..1ac415ef3c3154 100644
+--- a/drivers/crypto/intel/qat/qat_4xxx/adf_drv.c
++++ b/drivers/crypto/intel/qat/qat_4xxx/adf_drv.c
+@@ -188,11 +188,19 @@ static void adf_remove(struct pci_dev *pdev)
+ 	adf_cleanup_accel(accel_dev);
+ }
+ 
++static void adf_shutdown(struct pci_dev *pdev)
++{
++	struct adf_accel_dev *accel_dev = adf_devmgr_pci_to_accel_dev(pdev);
++
++	adf_dev_down(accel_dev);
++}
++
+ static struct pci_driver adf_driver = {
+ 	.id_table = adf_pci_tbl,
+ 	.name = ADF_4XXX_DEVICE_NAME,
+ 	.probe = adf_probe,
+ 	.remove = adf_remove,
++	.shutdown = adf_shutdown,
+ 	.sriov_configure = adf_sriov_configure,
+ 	.err_handler = &adf_err_handler,
+ };
+diff --git a/drivers/crypto/intel/qat/qat_c3xxx/adf_drv.c b/drivers/crypto/intel/qat/qat_c3xxx/adf_drv.c
+index b825b35ab4bfcf..566258e80371fc 100644
+--- a/drivers/crypto/intel/qat/qat_c3xxx/adf_drv.c
++++ b/drivers/crypto/intel/qat/qat_c3xxx/adf_drv.c
+@@ -19,6 +19,13 @@
+ #include <adf_dbgfs.h>
+ #include "adf_c3xxx_hw_data.h"
+ 
++static void adf_shutdown(struct pci_dev *pdev)
++{
++	struct adf_accel_dev *accel_dev = adf_devmgr_pci_to_accel_dev(pdev);
++
++	adf_dev_down(accel_dev);
++}
++
+ static const struct pci_device_id adf_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_QAT_C3XXX), },
+ 	{ }
+@@ -33,6 +40,7 @@ static struct pci_driver adf_driver = {
+ 	.name = ADF_C3XXX_DEVICE_NAME,
+ 	.probe = adf_probe,
+ 	.remove = adf_remove,
++	.shutdown = adf_shutdown,
+ 	.sriov_configure = adf_sriov_configure,
+ 	.err_handler = &adf_err_handler,
+ };
+diff --git a/drivers/crypto/intel/qat/qat_c62x/adf_drv.c b/drivers/crypto/intel/qat/qat_c62x/adf_drv.c
+index 8a7bdec358d61c..ce541a72195261 100644
+--- a/drivers/crypto/intel/qat/qat_c62x/adf_drv.c
++++ b/drivers/crypto/intel/qat/qat_c62x/adf_drv.c
+@@ -19,6 +19,13 @@
+ #include <adf_dbgfs.h>
+ #include "adf_c62x_hw_data.h"
+ 
++static void adf_shutdown(struct pci_dev *pdev)
++{
++	struct adf_accel_dev *accel_dev = adf_devmgr_pci_to_accel_dev(pdev);
++
++	adf_dev_down(accel_dev);
++}
++
+ static const struct pci_device_id adf_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_QAT_C62X), },
+ 	{ }
+@@ -33,6 +40,7 @@ static struct pci_driver adf_driver = {
+ 	.name = ADF_C62X_DEVICE_NAME,
+ 	.probe = adf_probe,
+ 	.remove = adf_remove,
++	.shutdown = adf_shutdown,
+ 	.sriov_configure = adf_sriov_configure,
+ 	.err_handler = &adf_err_handler,
+ };
+diff --git a/drivers/crypto/intel/qat/qat_dh895xcc/adf_drv.c b/drivers/crypto/intel/qat/qat_dh895xcc/adf_drv.c
+index 07e9d7e5286135..f36e625266ed34 100644
+--- a/drivers/crypto/intel/qat/qat_dh895xcc/adf_drv.c
++++ b/drivers/crypto/intel/qat/qat_dh895xcc/adf_drv.c
+@@ -19,6 +19,13 @@
+ #include <adf_dbgfs.h>
+ #include "adf_dh895xcc_hw_data.h"
+ 
++static void adf_shutdown(struct pci_dev *pdev)
++{
++	struct adf_accel_dev *accel_dev = adf_devmgr_pci_to_accel_dev(pdev);
++
++	adf_dev_down(accel_dev);
++}
++
+ static const struct pci_device_id adf_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_QAT_DH895XCC), },
+ 	{ }
+@@ -33,6 +40,7 @@ static struct pci_driver adf_driver = {
+ 	.name = ADF_DH895XCC_DEVICE_NAME,
+ 	.probe = adf_probe,
+ 	.remove = adf_remove,
++	.shutdown = adf_shutdown,
+ 	.sriov_configure = adf_sriov_configure,
+ 	.err_handler = &adf_err_handler,
+ };
+diff --git a/drivers/crypto/marvell/cesa/cesa.c b/drivers/crypto/marvell/cesa/cesa.c
+index fa08f10e6f3f2c..9c21f5d835d2b0 100644
+--- a/drivers/crypto/marvell/cesa/cesa.c
++++ b/drivers/crypto/marvell/cesa/cesa.c
+@@ -94,7 +94,7 @@ static int mv_cesa_std_process(struct mv_cesa_engine *engine, u32 status)
+ 
+ static int mv_cesa_int_process(struct mv_cesa_engine *engine, u32 status)
+ {
+-	if (engine->chain.first && engine->chain.last)
++	if (engine->chain_hw.first && engine->chain_hw.last)
+ 		return mv_cesa_tdma_process(engine, status);
+ 
+ 	return mv_cesa_std_process(engine, status);
+diff --git a/drivers/crypto/marvell/cesa/cesa.h b/drivers/crypto/marvell/cesa/cesa.h
+index d215a6bed6bc7b..50ca1039fdaa7a 100644
+--- a/drivers/crypto/marvell/cesa/cesa.h
++++ b/drivers/crypto/marvell/cesa/cesa.h
+@@ -440,8 +440,10 @@ struct mv_cesa_dev {
+  *			SRAM
+  * @queue:		fifo of the pending crypto requests
+  * @load:		engine load counter, useful for load balancing
+- * @chain:		list of the current tdma descriptors being processed
+- *			by this engine.
++ * @chain_hw:		list of the current tdma descriptors being processed
++ *			by the hardware.
++ * @chain_sw:		list of the current tdma descriptors that will be
++ *			submitted to the hardware.
+  * @complete_queue:	fifo of the processed requests by the engine
+  *
+  * Structure storing CESA engine information.
+@@ -463,7 +465,8 @@ struct mv_cesa_engine {
+ 	struct gen_pool *pool;
+ 	struct crypto_queue queue;
+ 	atomic_t load;
+-	struct mv_cesa_tdma_chain chain;
++	struct mv_cesa_tdma_chain chain_hw;
++	struct mv_cesa_tdma_chain chain_sw;
+ 	struct list_head complete_queue;
+ 	int irq;
+ };
+diff --git a/drivers/crypto/marvell/cesa/tdma.c b/drivers/crypto/marvell/cesa/tdma.c
+index 388a06e180d64a..243305354420c1 100644
+--- a/drivers/crypto/marvell/cesa/tdma.c
++++ b/drivers/crypto/marvell/cesa/tdma.c
+@@ -38,6 +38,15 @@ void mv_cesa_dma_step(struct mv_cesa_req *dreq)
+ {
+ 	struct mv_cesa_engine *engine = dreq->engine;
+ 
++	spin_lock_bh(&engine->lock);
++	if (engine->chain_sw.first == dreq->chain.first) {
++		engine->chain_sw.first = NULL;
++		engine->chain_sw.last = NULL;
++	}
++	engine->chain_hw.first = dreq->chain.first;
++	engine->chain_hw.last = dreq->chain.last;
++	spin_unlock_bh(&engine->lock);
++
+ 	writel_relaxed(0, engine->regs + CESA_SA_CFG);
+ 
+ 	mv_cesa_set_int_mask(engine, CESA_SA_INT_ACC0_IDMA_DONE);
+@@ -96,25 +105,27 @@ void mv_cesa_dma_prepare(struct mv_cesa_req *dreq,
+ void mv_cesa_tdma_chain(struct mv_cesa_engine *engine,
+ 			struct mv_cesa_req *dreq)
+ {
+-	if (engine->chain.first == NULL && engine->chain.last == NULL) {
+-		engine->chain.first = dreq->chain.first;
+-		engine->chain.last  = dreq->chain.last;
+-	} else {
+-		struct mv_cesa_tdma_desc *last;
++	struct mv_cesa_tdma_desc *last = engine->chain_sw.last;
+ 
+-		last = engine->chain.last;
++	/*
++	 * Break the DMA chain if the request being queued needs the IV
++	 * regs to be set before launching the request.
++	 */
++	if (!last || dreq->chain.first->flags & CESA_TDMA_SET_STATE)
++		engine->chain_sw.first = dreq->chain.first;
++	else {
+ 		last->next = dreq->chain.first;
+-		engine->chain.last = dreq->chain.last;
+-
+-		/*
+-		 * Break the DMA chain if the CESA_TDMA_BREAK_CHAIN is set on
+-		 * the last element of the current chain, or if the request
+-		 * being queued needs the IV regs to be set before lauching
+-		 * the request.
+-		 */
+-		if (!(last->flags & CESA_TDMA_BREAK_CHAIN) &&
+-		    !(dreq->chain.first->flags & CESA_TDMA_SET_STATE))
+-			last->next_dma = cpu_to_le32(dreq->chain.first->cur_dma);
++		last->next_dma = cpu_to_le32(dreq->chain.first->cur_dma);
++	}
++	last = dreq->chain.last;
++	engine->chain_sw.last = last;
++	/*
++	 * Break the DMA chain if the CESA_TDMA_BREAK_CHAIN is set on
++	 * the last element of the current chain.
++	 */
++	if (last->flags & CESA_TDMA_BREAK_CHAIN) {
++		engine->chain_sw.first = NULL;
++		engine->chain_sw.last = NULL;
+ 	}
+ }
+ 
+@@ -127,7 +138,7 @@ int mv_cesa_tdma_process(struct mv_cesa_engine *engine, u32 status)
+ 
+ 	tdma_cur = readl(engine->regs + CESA_TDMA_CUR);
+ 
+-	for (tdma = engine->chain.first; tdma; tdma = next) {
++	for (tdma = engine->chain_hw.first; tdma; tdma = next) {
+ 		spin_lock_bh(&engine->lock);
+ 		next = tdma->next;
+ 		spin_unlock_bh(&engine->lock);
+@@ -149,12 +160,12 @@ int mv_cesa_tdma_process(struct mv_cesa_engine *engine, u32 status)
+ 								 &backlog);
+ 
+ 			/* Re-chaining to the next request */
+-			engine->chain.first = tdma->next;
++			engine->chain_hw.first = tdma->next;
+ 			tdma->next = NULL;
+ 
+ 			/* If this is the last request, clear the chain */
+-			if (engine->chain.first == NULL)
+-				engine->chain.last  = NULL;
++			if (engine->chain_hw.first == NULL)
++				engine->chain_hw.last  = NULL;
+ 			spin_unlock_bh(&engine->lock);
+ 
+ 			ctx = crypto_tfm_ctx(req->tfm);
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index e74e36a8ecda21..421d44644bc02a 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -264,8 +264,7 @@ static int begin_cpu_udmabuf(struct dma_buf *buf,
+ 			ubuf->sg = NULL;
+ 		}
+ 	} else {
+-		dma_sync_sg_for_cpu(dev, ubuf->sg->sgl, ubuf->sg->nents,
+-				    direction);
++		dma_sync_sgtable_for_cpu(dev, ubuf->sg, direction);
+ 	}
+ 
+ 	return ret;
+@@ -280,7 +279,7 @@ static int end_cpu_udmabuf(struct dma_buf *buf,
+ 	if (!ubuf->sg)
+ 		return -EINVAL;
+ 
+-	dma_sync_sg_for_device(dev, ubuf->sg->sgl, ubuf->sg->nents, direction);
++	dma_sync_sgtable_for_device(dev, ubuf->sg, direction);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
+index dcd7008fe06b05..c45c92f0537367 100644
+--- a/drivers/edac/altera_edac.c
++++ b/drivers/edac/altera_edac.c
+@@ -1746,9 +1746,9 @@ altr_edac_a10_device_trig(struct file *file, const char __user *user_buf,
+ 
+ 	local_irq_save(flags);
+ 	if (trig_type == ALTR_UE_TRIGGER_CHAR)
+-		writel(priv->ue_set_mask, set_addr);
++		writew(priv->ue_set_mask, set_addr);
+ 	else
+-		writel(priv->ce_set_mask, set_addr);
++		writew(priv->ce_set_mask, set_addr);
+ 
+ 	/* Ensure the interrupt test bits are set */
+ 	wmb();
+@@ -1778,7 +1778,7 @@ altr_edac_a10_device_trig2(struct file *file, const char __user *user_buf,
+ 
+ 	local_irq_save(flags);
+ 	if (trig_type == ALTR_UE_TRIGGER_CHAR) {
+-		writel(priv->ue_set_mask, set_addr);
++		writew(priv->ue_set_mask, set_addr);
+ 	} else {
+ 		/* Setup read/write of 4 bytes */
+ 		writel(ECC_WORD_WRITE, drvdata->base + ECC_BLK_DBYTECTRL_OFST);
+diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
+index 90f0eb7cc5b9bd..390f5756b66ed0 100644
+--- a/drivers/edac/amd64_edac.c
++++ b/drivers/edac/amd64_edac.c
+@@ -3879,6 +3879,7 @@ static int per_family_init(struct amd64_pvt *pvt)
+ 			break;
+ 		case 0x70 ... 0x7f:
+ 			pvt->ctl_name			= "F19h_M70h";
++			pvt->max_mcs			= 4;
+ 			pvt->flags.zn_regs_v2		= 1;
+ 			break;
+ 		case 0x90 ... 0x9f:
+diff --git a/drivers/edac/igen6_edac.c b/drivers/edac/igen6_edac.c
+index 5807517ee32dec..cc50313b3e905f 100644
+--- a/drivers/edac/igen6_edac.c
++++ b/drivers/edac/igen6_edac.c
+@@ -125,8 +125,9 @@
+ #define MEM_SLICE_HASH_MASK(v)		(GET_BITFIELD(v, 6, 19) << 6)
+ #define MEM_SLICE_HASH_LSB_MASK_BIT(v)	GET_BITFIELD(v, 24, 26)
+ 
+-static const struct res_config {
++static struct res_config {
+ 	bool machine_check;
++	/* The number of present memory controllers. */
+ 	int num_imc;
+ 	u32 imc_base;
+ 	u32 cmf_base;
+@@ -472,7 +473,7 @@ static u64 rpl_p_err_addr(u64 ecclog)
+ 	return ECC_ERROR_LOG_ADDR45(ecclog);
+ }
+ 
+-static const struct res_config ehl_cfg = {
++static struct res_config ehl_cfg = {
+ 	.num_imc		= 1,
+ 	.imc_base		= 0x5000,
+ 	.ibecc_base		= 0xdc00,
+@@ -482,7 +483,7 @@ static const struct res_config ehl_cfg = {
+ 	.err_addr_to_imc_addr	= ehl_err_addr_to_imc_addr,
+ };
+ 
+-static const struct res_config icl_cfg = {
++static struct res_config icl_cfg = {
+ 	.num_imc		= 1,
+ 	.imc_base		= 0x5000,
+ 	.ibecc_base		= 0xd800,
+@@ -492,7 +493,7 @@ static const struct res_config icl_cfg = {
+ 	.err_addr_to_imc_addr	= ehl_err_addr_to_imc_addr,
+ };
+ 
+-static const struct res_config tgl_cfg = {
++static struct res_config tgl_cfg = {
+ 	.machine_check		= true,
+ 	.num_imc		= 2,
+ 	.imc_base		= 0x5000,
+@@ -506,7 +507,7 @@ static const struct res_config tgl_cfg = {
+ 	.err_addr_to_imc_addr	= tgl_err_addr_to_imc_addr,
+ };
+ 
+-static const struct res_config adl_cfg = {
++static struct res_config adl_cfg = {
+ 	.machine_check		= true,
+ 	.num_imc		= 2,
+ 	.imc_base		= 0xd800,
+@@ -517,7 +518,7 @@ static const struct res_config adl_cfg = {
+ 	.err_addr_to_imc_addr	= adl_err_addr_to_imc_addr,
+ };
+ 
+-static const struct res_config adl_n_cfg = {
++static struct res_config adl_n_cfg = {
+ 	.machine_check		= true,
+ 	.num_imc		= 1,
+ 	.imc_base		= 0xd800,
+@@ -528,7 +529,7 @@ static const struct res_config adl_n_cfg = {
+ 	.err_addr_to_imc_addr	= adl_err_addr_to_imc_addr,
+ };
+ 
+-static const struct res_config rpl_p_cfg = {
++static struct res_config rpl_p_cfg = {
+ 	.machine_check		= true,
+ 	.num_imc		= 2,
+ 	.imc_base		= 0xd800,
+@@ -540,7 +541,7 @@ static const struct res_config rpl_p_cfg = {
+ 	.err_addr_to_imc_addr	= adl_err_addr_to_imc_addr,
+ };
+ 
+-static const struct res_config mtl_ps_cfg = {
++static struct res_config mtl_ps_cfg = {
+ 	.machine_check		= true,
+ 	.num_imc		= 2,
+ 	.imc_base		= 0xd800,
+@@ -551,7 +552,7 @@ static const struct res_config mtl_ps_cfg = {
+ 	.err_addr_to_imc_addr	= adl_err_addr_to_imc_addr,
+ };
+ 
+-static const struct res_config mtl_p_cfg = {
++static struct res_config mtl_p_cfg = {
+ 	.machine_check		= true,
+ 	.num_imc		= 2,
+ 	.imc_base		= 0xd800,
+@@ -562,7 +563,7 @@ static const struct res_config mtl_p_cfg = {
+ 	.err_addr_to_imc_addr	= adl_err_addr_to_imc_addr,
+ };
+ 
+-static const struct pci_device_id igen6_pci_tbl[] = {
++static struct pci_device_id igen6_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, DID_EHL_SKU5), (kernel_ulong_t)&ehl_cfg },
+ 	{ PCI_VDEVICE(INTEL, DID_EHL_SKU6), (kernel_ulong_t)&ehl_cfg },
+ 	{ PCI_VDEVICE(INTEL, DID_EHL_SKU7), (kernel_ulong_t)&ehl_cfg },
+@@ -1201,23 +1202,21 @@ static void igen6_check(struct mem_ctl_info *mci)
+ 		irq_work_queue(&ecclog_irq_work);
+ }
+ 
+-static int igen6_register_mci(int mc, u64 mchbar, struct pci_dev *pdev)
++/* Check whether the memory controller is absent. */
++static bool igen6_imc_absent(void __iomem *window)
++{
++	return readl(window + MAD_INTER_CHANNEL_OFFSET) == ~0;
++}
++
++static int igen6_register_mci(int mc, void __iomem *window, struct pci_dev *pdev)
+ {
+ 	struct edac_mc_layer layers[2];
+ 	struct mem_ctl_info *mci;
+ 	struct igen6_imc *imc;
+-	void __iomem *window;
+ 	int rc;
+ 
+ 	edac_dbg(2, "\n");
+ 
+-	mchbar += mc * MCHBAR_SIZE;
+-	window = ioremap(mchbar, MCHBAR_SIZE);
+-	if (!window) {
+-		igen6_printk(KERN_ERR, "Failed to ioremap 0x%llx\n", mchbar);
+-		return -ENODEV;
+-	}
+-
+ 	layers[0].type = EDAC_MC_LAYER_CHANNEL;
+ 	layers[0].size = NUM_CHANNELS;
+ 	layers[0].is_virt_csrow = false;
+@@ -1283,7 +1282,6 @@ static int igen6_register_mci(int mc, u64 mchbar, struct pci_dev *pdev)
+ fail2:
+ 	edac_mc_free(mci);
+ fail:
+-	iounmap(window);
+ 	return rc;
+ }
+ 
+@@ -1309,6 +1307,58 @@ static void igen6_unregister_mcis(void)
+ 	}
+ }
+ 
++static int igen6_register_mcis(struct pci_dev *pdev, u64 mchbar)
++{
++	void __iomem *window;
++	int lmc, pmc, rc;
++	u64 base;
++
++	for (lmc = 0, pmc = 0; pmc < NUM_IMC; pmc++) {
++		base   = mchbar + pmc * MCHBAR_SIZE;
++		window = ioremap(base, MCHBAR_SIZE);
++		if (!window) {
++			igen6_printk(KERN_ERR, "Failed to ioremap 0x%llx for mc%d\n", base, pmc);
++			rc = -ENOMEM;
++			goto out_unregister_mcis;
++		}
++
++		if (igen6_imc_absent(window)) {
++			iounmap(window);
++			edac_dbg(2, "Skip absent mc%d\n", pmc);
++			continue;
++		}
++
++		rc = igen6_register_mci(lmc, window, pdev);
++		if (rc)
++			goto out_iounmap;
++
++		/* Done, if all present MCs are detected and registered. */
++		if (++lmc >= res_cfg->num_imc)
++			break;
++	}
++
++	if (!lmc) {
++		igen6_printk(KERN_ERR, "No mc found.\n");
++		return -ENODEV;
++	}
++
++	if (lmc < res_cfg->num_imc) {
++		igen6_printk(KERN_WARNING, "Expected %d mcs, but only %d detected.",
++			     res_cfg->num_imc, lmc);
++		res_cfg->num_imc = lmc;
++	}
++
++	return 0;
++
++out_iounmap:
++	iounmap(window);
++
++out_unregister_mcis:
++	igen6_unregister_mcis();
++
++	return rc;
++}
++
+ static int igen6_mem_slice_setup(u64 mchbar)
+ {
+ 	struct igen6_imc *imc = &igen6_pvt->imc[0];
+@@ -1405,7 +1455,7 @@ static void opstate_set(const struct res_config *cfg, const struct pci_device_id
+ static int igen6_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ {
+ 	u64 mchbar;
+-	int i, rc;
++	int rc;
+ 
+ 	edac_dbg(2, "\n");
+ 
+@@ -1421,11 +1471,9 @@ static int igen6_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	opstate_set(res_cfg, ent);
+ 
+-	for (i = 0; i < res_cfg->num_imc; i++) {
+-		rc = igen6_register_mci(i, mchbar, pdev);
+-		if (rc)
+-			goto fail2;
+-	}
++	rc = igen6_register_mcis(pdev, mchbar);
++	if (rc)
++		goto fail;
+ 
+ 	if (res_cfg->num_imc > 1) {
+ 		rc = igen6_mem_slice_setup(mchbar);
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index 0390d5ff195ec0..06944ac6985d91 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -1737,6 +1737,39 @@ static int scmi_common_get_max_msg_size(const struct scmi_protocol_handle *ph)
+ 	return info->desc->max_msg_size;
+ }
+ 
++/**
++ * scmi_protocol_msg_check  - Check protocol message attributes
++ *
++ * @ph: A reference to the protocol handle.
++ * @message_id: The ID of the message to check.
++ * @attributes: A parameter to optionally return the retrieved message
++ *		attributes, in case of Success.
++ *
++ * An helper to check protocol message attributes for a specific protocol
++ * and message pair.
++ *
++ * Return: 0 on SUCCESS
++ */
++static int scmi_protocol_msg_check(const struct scmi_protocol_handle *ph,
++				   u32 message_id, u32 *attributes)
++{
++	int ret;
++	struct scmi_xfer *t;
++
++	ret = xfer_get_init(ph, PROTOCOL_MESSAGE_ATTRIBUTES,
++			    sizeof(__le32), 0, &t);
++	if (ret)
++		return ret;
++
++	put_unaligned_le32(message_id, t->tx.buf);
++	ret = do_xfer(ph, t);
++	if (!ret && attributes)
++		*attributes = get_unaligned_le32(t->rx.buf);
++	xfer_put(ph, t);
++
++	return ret;
++}
++
+ /**
+  * struct scmi_iterator  - Iterator descriptor
+  * @msg: A reference to the message TX buffer; filled by @prepare_message with
+@@ -1878,6 +1911,7 @@ scmi_common_fastchannel_init(const struct scmi_protocol_handle *ph,
+ 	int ret;
+ 	u32 flags;
+ 	u64 phys_addr;
++	u32 attributes;
+ 	u8 size;
+ 	void __iomem *addr;
+ 	struct scmi_xfer *t;
+@@ -1886,6 +1920,15 @@ scmi_common_fastchannel_init(const struct scmi_protocol_handle *ph,
+ 	struct scmi_msg_resp_desc_fc *resp;
+ 	const struct scmi_protocol_instance *pi = ph_to_pi(ph);
+ 
++	/* Check if the MSG_ID supports fastchannel */
++	ret = scmi_protocol_msg_check(ph, message_id, &attributes);
++	if (ret || !MSG_SUPPORTS_FASTCHANNEL(attributes)) {
++		dev_dbg(ph->dev,
++			"Skip FC init for 0x%02X/%d  domain:%d - ret:%d\n",
++			pi->proto->id, message_id, domain, ret);
++		return;
++	}
++
+ 	if (!p_addr) {
+ 		ret = -EINVAL;
+ 		goto err_out;
+@@ -2003,39 +2046,6 @@ static void scmi_common_fastchannel_db_ring(struct scmi_fc_db_info *db)
+ 		SCMI_PROTO_FC_RING_DB(64);
+ }
+ 
+-/**
+- * scmi_protocol_msg_check  - Check protocol message attributes
+- *
+- * @ph: A reference to the protocol handle.
+- * @message_id: The ID of the message to check.
+- * @attributes: A parameter to optionally return the retrieved message
+- *		attributes, in case of Success.
+- *
+- * An helper to check protocol message attributes for a specific protocol
+- * and message pair.
+- *
+- * Return: 0 on SUCCESS
+- */
+-static int scmi_protocol_msg_check(const struct scmi_protocol_handle *ph,
+-				   u32 message_id, u32 *attributes)
+-{
+-	int ret;
+-	struct scmi_xfer *t;
+-
+-	ret = xfer_get_init(ph, PROTOCOL_MESSAGE_ATTRIBUTES,
+-			    sizeof(__le32), 0, &t);
+-	if (ret)
+-		return ret;
+-
+-	put_unaligned_le32(message_id, t->tx.buf);
+-	ret = do_xfer(ph, t);
+-	if (!ret && attributes)
+-		*attributes = get_unaligned_le32(t->rx.buf);
+-	xfer_put(ph, t);
+-
+-	return ret;
+-}
+-
+ static const struct scmi_proto_helpers_ops helpers_ops = {
+ 	.extended_name_get = scmi_common_extended_name_get,
+ 	.get_max_msg_size = scmi_common_get_max_msg_size,
+diff --git a/drivers/firmware/arm_scmi/protocols.h b/drivers/firmware/arm_scmi/protocols.h
+index aaee57cdcd5589..d62c4469d1fd9f 100644
+--- a/drivers/firmware/arm_scmi/protocols.h
++++ b/drivers/firmware/arm_scmi/protocols.h
+@@ -31,6 +31,8 @@
+ 
+ #define SCMI_PROTOCOL_VENDOR_BASE	0x80
+ 
++#define MSG_SUPPORTS_FASTCHANNEL(x)	((x) & BIT(0))
++
+ enum scmi_common_cmd {
+ 	PROTOCOL_VERSION = 0x0,
+ 	PROTOCOL_ATTRIBUTES = 0x1,
+diff --git a/drivers/firmware/cirrus/test/cs_dsp_mock_bin.c b/drivers/firmware/cirrus/test/cs_dsp_mock_bin.c
+index 49d84f7e59e6a9..a0601b23b1bd32 100644
+--- a/drivers/firmware/cirrus/test/cs_dsp_mock_bin.c
++++ b/drivers/firmware/cirrus/test/cs_dsp_mock_bin.c
+@@ -96,10 +96,11 @@ static void cs_dsp_mock_bin_add_name_or_info(struct cs_dsp_mock_bin_builder *bui
+ 
+ 	if (info_len % 4) {
+ 		/* Create a padded string with length a multiple of 4 */
++		size_t copy_len = info_len;
+ 		info_len = round_up(info_len, 4);
+ 		tmp = kunit_kzalloc(builder->test_priv->test, info_len, GFP_KERNEL);
+ 		KUNIT_ASSERT_NOT_ERR_OR_NULL(builder->test_priv->test, tmp);
+-		memcpy(tmp, info, info_len);
++		memcpy(tmp, info, copy_len);
+ 		info = tmp;
+ 	}
+ 
+diff --git a/drivers/firmware/cirrus/test/cs_dsp_mock_wmfw.c b/drivers/firmware/cirrus/test/cs_dsp_mock_wmfw.c
+index 5a3ac03ac37f07..4fa74550dafd18 100644
+--- a/drivers/firmware/cirrus/test/cs_dsp_mock_wmfw.c
++++ b/drivers/firmware/cirrus/test/cs_dsp_mock_wmfw.c
+@@ -133,10 +133,11 @@ void cs_dsp_mock_wmfw_add_info(struct cs_dsp_mock_wmfw_builder *builder,
+ 
+ 	if (info_len % 4) {
+ 		/* Create a padded string with length a multiple of 4 */
++		size_t copy_len = info_len;
+ 		info_len = round_up(info_len, 4);
+ 		tmp = kunit_kzalloc(builder->test_priv->test, info_len, GFP_KERNEL);
+ 		KUNIT_ASSERT_NOT_ERR_OR_NULL(builder->test_priv->test, tmp);
+-		memcpy(tmp, info, info_len);
++		memcpy(tmp, info, copy_len);
+ 		info = tmp;
+ 	}
+ 
+diff --git a/drivers/firmware/cirrus/test/cs_dsp_test_control_cache.c b/drivers/firmware/cirrus/test/cs_dsp_test_control_cache.c
+index 83386cc978e3f8..ebca3a4ab0f1ad 100644
+--- a/drivers/firmware/cirrus/test/cs_dsp_test_control_cache.c
++++ b/drivers/firmware/cirrus/test/cs_dsp_test_control_cache.c
+@@ -776,7 +776,6 @@ static void cs_dsp_ctl_cache_init_multiple_offsets(struct kunit *test)
+ 					      "dummyalg", NULL);
+ 
+ 	/* Create controls identical except for offset */
+-	def.length_bytes = 8;
+ 	def.offset_dsp_words = 0;
+ 	def.shortname = "CtlA";
+ 	cs_dsp_mock_wmfw_add_coeff_desc(local->wmfw_builder, &def);
+diff --git a/drivers/firmware/sysfb.c b/drivers/firmware/sysfb.c
+index 7c5c03f274b951..889e5b05c739ce 100644
+--- a/drivers/firmware/sysfb.c
++++ b/drivers/firmware/sysfb.c
+@@ -143,6 +143,7 @@ static __init int sysfb_init(void)
+ {
+ 	struct screen_info *si = &screen_info;
+ 	struct device *parent;
++	unsigned int type;
+ 	struct simplefb_platform_data mode;
+ 	const char *name;
+ 	bool compatible;
+@@ -170,17 +171,26 @@ static __init int sysfb_init(void)
+ 			goto put_device;
+ 	}
+ 
++	type = screen_info_video_type(si);
++
+ 	/* if the FB is incompatible, create a legacy framebuffer device */
+-	if (si->orig_video_isVGA == VIDEO_TYPE_EFI)
+-		name = "efi-framebuffer";
+-	else if (si->orig_video_isVGA == VIDEO_TYPE_VLFB)
+-		name = "vesa-framebuffer";
+-	else if (si->orig_video_isVGA == VIDEO_TYPE_VGAC)
+-		name = "vga-framebuffer";
+-	else if (si->orig_video_isVGA == VIDEO_TYPE_EGAC)
++	switch (type) {
++	case VIDEO_TYPE_EGAC:
+ 		name = "ega-framebuffer";
+-	else
++		break;
++	case VIDEO_TYPE_VGAC:
++		name = "vga-framebuffer";
++		break;
++	case VIDEO_TYPE_VLFB:
++		name = "vesa-framebuffer";
++		break;
++	case VIDEO_TYPE_EFI:
++		name = "efi-framebuffer";
++		break;
++	default:
+ 		name = "platform-framebuffer";
++		break;
++	}
+ 
+ 	pd = platform_device_alloc(name, 0);
+ 	if (!pd) {
+diff --git a/drivers/firmware/ti_sci.c b/drivers/firmware/ti_sci.c
+index 806a975fff22ae..ae5fd1936ad322 100644
+--- a/drivers/firmware/ti_sci.c
++++ b/drivers/firmware/ti_sci.c
+@@ -2,7 +2,7 @@
+ /*
+  * Texas Instruments System Control Interface Protocol Driver
+  *
+- * Copyright (C) 2015-2024 Texas Instruments Incorporated - https://www.ti.com/
++ * Copyright (C) 2015-2025 Texas Instruments Incorporated - https://www.ti.com/
+  *	Nishanth Menon
+  */
+ 
+@@ -3670,6 +3670,7 @@ static int __maybe_unused ti_sci_suspend(struct device *dev)
+ 	struct ti_sci_info *info = dev_get_drvdata(dev);
+ 	struct device *cpu_dev, *cpu_dev_max = NULL;
+ 	s32 val, cpu_lat = 0;
++	u16 cpu_lat_ms;
+ 	int i, ret;
+ 
+ 	if (info->fw_caps & MSG_FLAG_CAPS_LPM_DM_MANAGED) {
+@@ -3682,9 +3683,16 @@ static int __maybe_unused ti_sci_suspend(struct device *dev)
+ 			}
+ 		}
+ 		if (cpu_dev_max) {
+-			dev_dbg(cpu_dev_max, "%s: sending max CPU latency=%u\n", __func__, cpu_lat);
++			/*
++			 * PM QoS latency unit is usecs, device manager uses msecs.
++			 * Convert to msecs and round down for device manager.
++			 */
++			cpu_lat_ms = cpu_lat / USEC_PER_MSEC;
++			dev_dbg(cpu_dev_max, "%s: sending max CPU latency=%u ms\n", __func__,
++				cpu_lat_ms);
+ 			ret = ti_sci_cmd_set_latency_constraint(&info->handle,
+-								cpu_lat, TISCI_MSG_CONSTRAINT_SET);
++								cpu_lat_ms,
++								TISCI_MSG_CONSTRAINT_SET);
+ 			if (ret)
+ 				return ret;
+ 		}
+diff --git a/drivers/gpio/gpio-loongson-64bit.c b/drivers/gpio/gpio-loongson-64bit.c
+index a9a93036f08ff0..286a3876ed0c08 100644
+--- a/drivers/gpio/gpio-loongson-64bit.c
++++ b/drivers/gpio/gpio-loongson-64bit.c
+@@ -266,7 +266,7 @@ static const struct loongson_gpio_chip_data loongson_gpio_ls7a2000_data0 = {
+ /* LS7A2000 ACPI GPIO */
+ static const struct loongson_gpio_chip_data loongson_gpio_ls7a2000_data1 = {
+ 	.label = "ls7a2000_gpio",
+-	.mode = BYTE_CTRL_MODE,
++	.mode = BIT_CTRL_MODE,
+ 	.conf_offset = 0x4,
+ 	.in_offset = 0x8,
+ 	.out_offset = 0x0,
+diff --git a/drivers/gpio/gpio-mlxbf3.c b/drivers/gpio/gpio-mlxbf3.c
+index 10ea71273c8915..9875e34bde72a4 100644
+--- a/drivers/gpio/gpio-mlxbf3.c
++++ b/drivers/gpio/gpio-mlxbf3.c
+@@ -190,7 +190,9 @@ static int mlxbf3_gpio_probe(struct platform_device *pdev)
+ 	struct mlxbf3_gpio_context *gs;
+ 	struct gpio_irq_chip *girq;
+ 	struct gpio_chip *gc;
++	char *colon_ptr;
+ 	int ret, irq;
++	long num;
+ 
+ 	gs = devm_kzalloc(dev, sizeof(*gs), GFP_KERNEL);
+ 	if (!gs)
+@@ -227,25 +229,39 @@ static int mlxbf3_gpio_probe(struct platform_device *pdev)
+ 	gc->owner = THIS_MODULE;
+ 	gc->add_pin_ranges = mlxbf3_gpio_add_pin_ranges;
+ 
+-	irq = platform_get_irq(pdev, 0);
+-	if (irq >= 0) {
+-		girq = &gs->gc.irq;
+-		gpio_irq_chip_set_chip(girq, &gpio_mlxbf3_irqchip);
+-		girq->default_type = IRQ_TYPE_NONE;
+-		/* This will let us handle the parent IRQ in the driver */
+-		girq->num_parents = 0;
+-		girq->parents = NULL;
+-		girq->parent_handler = NULL;
+-		girq->handler = handle_bad_irq;
+-
+-		/*
+-		 * Directly request the irq here instead of passing
+-		 * a flow-handler because the irq is shared.
+-		 */
+-		ret = devm_request_irq(dev, irq, mlxbf3_gpio_irq_handler,
+-				       IRQF_SHARED, dev_name(dev), gs);
+-		if (ret)
+-			return dev_err_probe(dev, ret, "failed to request IRQ");
++	colon_ptr = strchr(dev_name(dev), ':');
++	if (!colon_ptr) {
++		dev_err(dev, "invalid device name format\n");
++		return -EINVAL;
++	}
++
++	ret = kstrtol(++colon_ptr, 16, &num);
++	if (ret) {
++		dev_err(dev, "invalid device instance\n");
++		return ret;
++	}
++
++	if (!num) {
++		irq = platform_get_irq(pdev, 0);
++		if (irq >= 0) {
++			girq = &gs->gc.irq;
++			gpio_irq_chip_set_chip(girq, &gpio_mlxbf3_irqchip);
++			girq->default_type = IRQ_TYPE_NONE;
++			/* This will let us handle the parent IRQ in the driver */
++			girq->num_parents = 0;
++			girq->parents = NULL;
++			girq->parent_handler = NULL;
++			girq->handler = handle_bad_irq;
++
++			/*
++			 * Directly request the irq here instead of passing
++			 * a flow-handler because the irq is shared.
++			 */
++			ret = devm_request_irq(dev, irq, mlxbf3_gpio_irq_handler,
++					       IRQF_SHARED, dev_name(dev), gs);
++			if (ret)
++				return dev_err_probe(dev, ret, "failed to request IRQ");
++		}
+ 	}
+ 
+ 	platform_set_drvdata(pdev, gs);
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 13cc120cf11f14..02da81ff1c0f18 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -952,7 +952,7 @@ static int pca953x_irq_setup(struct pca953x_chip *chip, int irq_base)
+ 					IRQF_ONESHOT | IRQF_SHARED, dev_name(dev),
+ 					chip);
+ 	if (ret)
+-		return dev_err_probe(dev, client->irq, "failed to request irq\n");
++		return dev_err_probe(dev, ret, "failed to request irq\n");
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index 65f6a7177b78ef..17802d97492fa6 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -224,6 +224,15 @@ static void of_gpio_try_fixup_polarity(const struct device_node *np,
+ 		 */
+ 		{ "lantiq,pci-xway",	"gpio-reset",	false },
+ #endif
++#if IS_ENABLED(CONFIG_REGULATOR_S5M8767)
++		/*
++		 * According to S5M8767, the DVS and DS pins are
++		 * active-high signals. However, exynos5250-spring.dts uses
++		 * an active-low setting.
++		 */
++		{ "samsung,s5m8767-pmic", "s5m8767,pmic-buck-dvs-gpios", true },
++		{ "samsung,s5m8767-pmic", "s5m8767,pmic-buck-ds-gpios", true },
++#endif
+ #if IS_ENABLED(CONFIG_TOUCHSCREEN_TSC2005)
+ 		/*
+ 		 * DTS for Nokia N900 incorrectly specified "active high"
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index f8b3e04d71eda1..95124a4a0a67cf 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2689,6 +2689,13 @@ static int amdgpu_device_ip_early_init(struct amdgpu_device *adev)
+ 		break;
+ 	}
+ 
++	/* Check for IP version 9.4.3 with A0 hardware */
++	if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 3) &&
++	    !amdgpu_device_get_rev_id(adev)) {
++		dev_err(adev->dev, "Unsupported A0 hardware\n");
++		return -ENODEV;	/* device unsupported - no device error */
++	}
++
+ 	if (amdgpu_has_atpx() &&
+ 	    (amdgpu_is_atpx_hybrid() ||
+ 	     amdgpu_has_atpx_dgpu_power_cntl()) &&
+@@ -2701,7 +2708,6 @@ static int amdgpu_device_ip_early_init(struct amdgpu_device *adev)
+ 		adev->has_pr3 = parent ? pci_pr3_present(parent) : false;
+ 	}
+ 
+-
+ 	adev->pm.pp_feature = amdgpu_pp_feature_mask;
+ 	if (amdgpu_sriov_vf(adev) || sched_policy == KFD_SCHED_POLICY_NO_HWS)
+ 		adev->pm.pp_feature &= ~PP_GFXOFF_MASK;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index cf2df7790077d4..1dc06e4ab49705 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -1351,6 +1351,10 @@ static ssize_t amdgpu_gfx_get_current_compute_partition(struct device *dev,
+ 	struct amdgpu_device *adev = drm_to_adev(ddev);
+ 	int mode;
+ 
++	/* Only minimal precaution taken to reject requests while in reset.*/
++	if (amdgpu_in_reset(adev))
++		return -EPERM;
++
+ 	mode = amdgpu_xcp_query_partition_mode(adev->xcp_mgr,
+ 					       AMDGPU_XCP_FL_NONE);
+ 
+@@ -1394,8 +1398,14 @@ static ssize_t amdgpu_gfx_set_compute_partition(struct device *dev,
+ 		return -EINVAL;
+ 	}
+ 
++	/* Don't allow a switch while under reset */
++	if (!down_read_trylock(&adev->reset_domain->sem))
++		return -EPERM;
++
+ 	ret = amdgpu_xcp_switch_partition_mode(adev->xcp_mgr, mode);
+ 
++	up_read(&adev->reset_domain->sem);
++
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+index ecb74ccf1d9081..6b0fbbb91e5795 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+@@ -1230,6 +1230,10 @@ static ssize_t current_memory_partition_show(
+ 	struct amdgpu_device *adev = drm_to_adev(ddev);
+ 	enum amdgpu_memory_partition mode;
+ 
++	/* Only minimal precaution taken to reject requests while in reset */
++	if (amdgpu_in_reset(adev))
++		return -EPERM;
++
+ 	mode = adev->gmc.gmc_funcs->query_mem_partition_mode(adev);
+ 	if ((mode >= ARRAY_SIZE(nps_desc)) ||
+ 	    (BIT(mode) & AMDGPU_ALL_NPS_MASK) != BIT(mode))
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+index fb212f0a1136a4..5590ad5e8cd76c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+@@ -150,9 +150,6 @@ int amdgpu_mes_init(struct amdgpu_device *adev)
+ 		adev->mes.compute_hqd_mask[i] = 0xc;
+ 	}
+ 
+-	for (i = 0; i < AMDGPU_MES_MAX_GFX_PIPES; i++)
+-		adev->mes.gfx_hqd_mask[i] = i ? 0 : 0xfffffffe;
+-
+ 	for (i = 0; i < AMDGPU_MES_MAX_SDMA_PIPES; i++) {
+ 		if (i >= adev->sdma.num_instances)
+ 			break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
+index da2c9a8cb3e011..52dd54a32fb477 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
+@@ -111,8 +111,8 @@ struct amdgpu_mes {
+ 
+ 	uint32_t                        vmid_mask_gfxhub;
+ 	uint32_t                        vmid_mask_mmhub;
+-	uint32_t                        compute_hqd_mask[AMDGPU_MES_MAX_COMPUTE_PIPES];
+ 	uint32_t                        gfx_hqd_mask[AMDGPU_MES_MAX_GFX_PIPES];
++	uint32_t                        compute_hqd_mask[AMDGPU_MES_MAX_COMPUTE_PIPES];
+ 	uint32_t                        sdma_hqd_mask[AMDGPU_MES_MAX_SDMA_PIPES];
+ 	uint32_t                        aggregated_doorbells[AMDGPU_MES_PRIORITY_NUM_LEVELS];
+ 	uint32_t                        sch_ctx_offs[AMDGPU_MAX_MES_PIPES];
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+index 8d5acc415d386b..dcf5e8e0b9e3ea 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+@@ -107,6 +107,7 @@ enum psp_reg_prog_id {
+ 	PSP_REG_IH_RB_CNTL        = 0,  /* register IH_RB_CNTL */
+ 	PSP_REG_IH_RB_CNTL_RING1  = 1,  /* register IH_RB_CNTL_RING1 */
+ 	PSP_REG_IH_RB_CNTL_RING2  = 2,  /* register IH_RB_CNTL_RING2 */
++	PSP_REG_MMHUB_L1_TLB_CNTL = 25,
+ 	PSP_REG_LAST
+ };
+ 
+@@ -142,6 +143,8 @@ struct psp_funcs {
+ 	bool (*get_ras_capability)(struct psp_context *psp);
+ 	bool (*is_aux_sos_load_required)(struct psp_context *psp);
+ 	bool (*is_reload_needed)(struct psp_context *psp);
++	int (*reg_program_no_ring)(struct psp_context *psp, uint32_t val,
++				   enum psp_reg_prog_id id);
+ };
+ 
+ struct ta_funcs {
+@@ -475,6 +478,10 @@ struct amdgpu_psp_funcs {
+ #define psp_is_aux_sos_load_required(psp) \
+ 	((psp)->funcs->is_aux_sos_load_required ? (psp)->funcs->is_aux_sos_load_required((psp)) : 0)
+ 
++#define psp_reg_program_no_ring(psp, val, id) \
++	((psp)->funcs->reg_program_no_ring ? \
++	(psp)->funcs->reg_program_no_ring((psp), val, id) : -EINVAL)
++
+ extern const struct amd_ip_funcs psp_ip_funcs;
+ 
+ extern const struct amdgpu_ip_block_version psp_v3_1_ip_block;
+@@ -569,5 +576,8 @@ bool amdgpu_psp_get_ras_capability(struct psp_context *psp);
+ int psp_config_sq_perfmon(struct psp_context *psp, uint32_t xcp_id,
+ 	bool core_override_enable, bool reg_override_enable, bool perfmon_override_enable);
+ bool amdgpu_psp_tos_reload_needed(struct amdgpu_device *adev);
++int amdgpu_psp_reg_program_no_ring(struct psp_context *psp, uint32_t val,
++				   enum psp_reg_prog_id id);
++
+ 
+ #endif
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
+index 0ea7cfaf3587df..e979a6086178c7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
+@@ -1392,17 +1392,33 @@ int amdgpu_ras_eeprom_init(struct amdgpu_ras_eeprom_control *control)
+ 
+ 	__decode_table_header_from_buf(hdr, buf);
+ 
+-	if (hdr->version >= RAS_TABLE_VER_V2_1) {
++	switch (hdr->version) {
++	case RAS_TABLE_VER_V2_1:
++	case RAS_TABLE_VER_V3:
+ 		control->ras_num_recs = RAS_NUM_RECS_V2_1(hdr);
+ 		control->ras_record_offset = RAS_RECORD_START_V2_1;
+ 		control->ras_max_record_count = RAS_MAX_RECORD_COUNT_V2_1;
+-	} else {
++		break;
++	case RAS_TABLE_VER_V1:
+ 		control->ras_num_recs = RAS_NUM_RECS(hdr);
+ 		control->ras_record_offset = RAS_RECORD_START;
+ 		control->ras_max_record_count = RAS_MAX_RECORD_COUNT;
++		break;
++	default:
++		dev_err(adev->dev,
++			"RAS header invalid, unsupported version: %u",
++			hdr->version);
++		return -EINVAL;
+ 	}
+-	control->ras_fri = RAS_OFFSET_TO_INDEX(control, hdr->first_rec_offset);
+ 
++	if (control->ras_num_recs > control->ras_max_record_count) {
++		dev_err(adev->dev,
++			"RAS header invalid, records in header: %u max allowed: %u",
++			control->ras_num_recs, control->ras_max_record_count);
++		return -EINVAL;
++	}
++
++	control->ras_fri = RAS_OFFSET_TO_INDEX(control, hdr->first_rec_offset);
+ 	control->ras_num_mca_recs = 0;
+ 	control->ras_num_pa_recs = 0;
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+index df03dba67ab898..b6ec6b7969f0c9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+@@ -146,11 +146,13 @@ enum AMDGIM_FEATURE_FLAG {
+ 
+ enum AMDGIM_REG_ACCESS_FLAG {
+ 	/* Use PSP to program IH_RB_CNTL */
+-	AMDGIM_FEATURE_IH_REG_PSP_EN     = (1 << 0),
++	AMDGIM_FEATURE_IH_REG_PSP_EN      = (1 << 0),
+ 	/* Use RLC to program MMHUB regs */
+-	AMDGIM_FEATURE_MMHUB_REG_RLC_EN  = (1 << 1),
++	AMDGIM_FEATURE_MMHUB_REG_RLC_EN   = (1 << 1),
+ 	/* Use RLC to program GC regs */
+-	AMDGIM_FEATURE_GC_REG_RLC_EN     = (1 << 2),
++	AMDGIM_FEATURE_GC_REG_RLC_EN      = (1 << 2),
++	/* Use PSP to program L1_TLB_CNTL */
++	AMDGIM_FEATURE_L1_TLB_CNTL_PSP_EN = (1 << 3),
+ };
+ 
+ struct amdgim_pf2vf_info_v1 {
+@@ -330,6 +332,10 @@ struct amdgpu_video_codec_info;
+ (amdgpu_sriov_vf((adev)) && \
+ 	((adev)->virt.reg_access & (AMDGIM_FEATURE_GC_REG_RLC_EN)))
+ 
++#define amdgpu_sriov_reg_indirect_l1_tlb_cntl(adev) \
++(amdgpu_sriov_vf((adev)) && \
++	((adev)->virt.reg_access & (AMDGIM_FEATURE_L1_TLB_CNTL_PSP_EN)))
++
+ #define amdgpu_sriov_rlcg_error_report_enabled(adev) \
+         (amdgpu_sriov_reg_indirect_mmhub(adev) || amdgpu_sriov_reg_indirect_gc(adev))
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h b/drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h
+index d6ac2652f0ac29..bea724981309cd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h
+@@ -109,10 +109,11 @@ union amd_sriov_msg_feature_flags {
+ 
+ union amd_sriov_reg_access_flags {
+ 	struct {
+-		uint32_t vf_reg_access_ih	: 1;
+-		uint32_t vf_reg_access_mmhub	: 1;
+-		uint32_t vf_reg_access_gc	: 1;
+-		uint32_t reserved		: 29;
++		uint32_t vf_reg_access_ih		: 1;
++		uint32_t vf_reg_access_mmhub		: 1;
++		uint32_t vf_reg_access_gc		: 1;
++		uint32_t vf_reg_access_l1_tlb_cntl	: 1;
++		uint32_t reserved			: 28;
+ 	} flags;
+ 	uint32_t all;
+ };
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index c68c2e2f4d61aa..2144d124c91084 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -4322,8 +4322,6 @@ static void gfx_v10_0_get_csb_buffer(struct amdgpu_device *adev,
+ 						PACKET3_SET_CONTEXT_REG_START);
+ 				for (i = 0; i < ext->reg_count; i++)
+ 					buffer[count++] = cpu_to_le32(ext->extent[i]);
+-			} else {
+-				return;
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+index 2a5c2a1ae3c74f..914c18f48e8e1a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+@@ -859,8 +859,6 @@ static void gfx_v11_0_get_csb_buffer(struct amdgpu_device *adev,
+ 						PACKET3_SET_CONTEXT_REG_START);
+ 				for (i = 0; i < ext->reg_count; i++)
+ 					buffer[count++] = cpu_to_le32(ext->extent[i]);
+-			} else {
+-				return;
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
+index 13fbee46417af8..cee2cf47112c91 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
+@@ -2874,8 +2874,6 @@ static void gfx_v6_0_get_csb_buffer(struct amdgpu_device *adev,
+ 				buffer[count++] = cpu_to_le32(ext->reg_index - 0xa000);
+ 				for (i = 0; i < ext->reg_count; i++)
+ 					buffer[count++] = cpu_to_le32(ext->extent[i]);
+-			} else {
+-				return;
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
+index 8181bd0e4f189c..0deeee542623a0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
+@@ -3906,8 +3906,6 @@ static void gfx_v7_0_get_csb_buffer(struct amdgpu_device *adev,
+ 				buffer[count++] = cpu_to_le32(ext->reg_index - PACKET3_SET_CONTEXT_REG_START);
+ 				for (i = 0; i < ext->reg_count; i++)
+ 					buffer[count++] = cpu_to_le32(ext->extent[i]);
+-			} else {
+-				return;
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+index bfedd487efc536..fc73be4ab0685b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+@@ -1248,8 +1248,6 @@ static void gfx_v8_0_get_csb_buffer(struct amdgpu_device *adev,
+ 						PACKET3_SET_CONTEXT_REG_START);
+ 				for (i = 0; i < ext->reg_count; i++)
+ 					buffer[count++] = cpu_to_le32(ext->extent[i]);
+-			} else {
+-				return;
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index d7db4cb907ae55..d725e2e230a3dc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -1649,8 +1649,6 @@ static void gfx_v9_0_get_csb_buffer(struct amdgpu_device *adev,
+ 						PACKET3_SET_CONTEXT_REG_START);
+ 				for (i = 0; i < ext->reg_count; i++)
+ 					buffer[count++] = cpu_to_le32(ext->extent[i]);
+-			} else {
+-				return;
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index 5effe8327d29fb..53050176c244da 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -1213,10 +1213,7 @@ static void gmc_v9_0_get_coherence_flags(struct amdgpu_device *adev,
+ 		if (uncached) {
+ 			mtype = MTYPE_UC;
+ 		} else if (ext_coherent) {
+-			if (gc_ip_version == IP_VERSION(9, 5, 0) || adev->rev_id)
+-				mtype = is_local ? MTYPE_CC : MTYPE_UC;
+-			else
+-				mtype = MTYPE_UC;
++			mtype = is_local ? MTYPE_CC : MTYPE_UC;
+ 		} else if (adev->flags & AMD_IS_APU) {
+ 			mtype = is_local ? mtype_local : MTYPE_NC;
+ 		} else {
+@@ -1336,7 +1333,7 @@ static void gmc_v9_0_override_vm_pte_flags(struct amdgpu_device *adev,
+ 				mtype_local = MTYPE_CC;
+ 
+ 			*flags = AMDGPU_PTE_MTYPE_VG10(*flags, mtype_local);
+-		} else if (adev->rev_id) {
++		} else {
+ 			/* MTYPE_UC case */
+ 			*flags = AMDGPU_PTE_MTYPE_VG10(*flags, MTYPE_CC);
+ 		}
+@@ -2411,13 +2408,6 @@ static int gmc_v9_0_hw_init(struct amdgpu_ip_block *ip_block)
+ 	adev->gmc.flush_tlb_needs_extra_type_2 =
+ 		amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 0) &&
+ 		adev->gmc.xgmi.num_physical_nodes;
+-	/*
+-	 * TODO: This workaround is badly documented and had a buggy
+-	 * implementation. We should probably verify what we do here.
+-	 */
+-	adev->gmc.flush_tlb_needs_extra_type_0 =
+-		amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 3) &&
+-		adev->rev_id == 0;
+ 
+ 	/* The sequence of these two function calls matters.*/
+ 	gmc_v9_0_init_golden_registers(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+index ef9538fbbf5371..821c9baf5baa6a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+@@ -480,7 +480,7 @@ static int mes_v11_0_reset_hw_queue(struct amdgpu_mes *mes,
+ 
+ 	return mes_v11_0_submit_pkt_and_poll_completion(mes,
+ 			&mes_reset_queue_pkt, sizeof(mes_reset_queue_pkt),
+-			offsetof(union MESAPI__REMOVE_QUEUE, api_status));
++			offsetof(union MESAPI__RESET, api_status));
+ }
+ 
+ static int mes_v11_0_map_legacy_queue(struct amdgpu_mes *mes,
+@@ -669,6 +669,18 @@ static int mes_v11_0_misc_op(struct amdgpu_mes *mes,
+ 			offsetof(union MESAPI__MISC, api_status));
+ }
+ 
++static void mes_v11_0_set_gfx_hqd_mask(union MESAPI_SET_HW_RESOURCES *pkt)
++{
++	/*
++	 * GFX pipe 0 queue 0 is used by the kernel queue.
++	 * Set GFX pipe 0 queue 1 for MES scheduling:
++	 * mask = 0b10.
++	 * GFX pipe 1 can't be used by MES due to a HW limitation.
++	 */
++	pkt->gfx_hqd_mask[0] = 0x2;
++	pkt->gfx_hqd_mask[1] = 0;
++}
++
+ static int mes_v11_0_set_hw_resources(struct amdgpu_mes *mes)
+ {
+ 	int i;
+@@ -693,8 +705,7 @@ static int mes_v11_0_set_hw_resources(struct amdgpu_mes *mes)
+ 		mes_set_hw_res_pkt.compute_hqd_mask[i] =
+ 			mes->compute_hqd_mask[i];
+ 
+-	for (i = 0; i < MAX_GFX_PIPES; i++)
+-		mes_set_hw_res_pkt.gfx_hqd_mask[i] = mes->gfx_hqd_mask[i];
++	mes_v11_0_set_gfx_hqd_mask(&mes_set_hw_res_pkt);
+ 
+ 	for (i = 0; i < MAX_SDMA_PIPES; i++)
+ 		mes_set_hw_res_pkt.sdma_hqd_mask[i] = mes->sdma_hqd_mask[i];
+diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
+index e6ab617b9a4041..7984ebda5b8bf5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
+@@ -500,7 +500,7 @@ static int mes_v12_0_reset_hw_queue(struct amdgpu_mes *mes,
+ 
+ 	return mes_v12_0_submit_pkt_and_poll_completion(mes, pipe,
+ 			&mes_reset_queue_pkt, sizeof(mes_reset_queue_pkt),
+-			offsetof(union MESAPI__REMOVE_QUEUE, api_status));
++			offsetof(union MESAPI__RESET, api_status));
+ }
+ 
+ static int mes_v12_0_map_legacy_queue(struct amdgpu_mes *mes,
+@@ -694,6 +694,17 @@ static int mes_v12_0_set_hw_resources_1(struct amdgpu_mes *mes, int pipe)
+ 			offsetof(union MESAPI_SET_HW_RESOURCES_1, api_status));
+ }
+ 
++static void mes_v12_0_set_gfx_hqd_mask(union MESAPI_SET_HW_RESOURCES *pkt)
++{
++	/*
++	 * GFX v12 has only one GFX pipe, with 8 queues in it.
++	 * GFX pipe 0 queue 0 is used by the kernel queue.
++	 * Set GFX pipe 0 queues 1-7 for MES scheduling:
++	 * mask = 0b11111110.
++	 */
++	pkt->gfx_hqd_mask[0] = 0xFE;
++}
++
+ static int mes_v12_0_set_hw_resources(struct amdgpu_mes *mes, int pipe)
+ {
+ 	int i;
+@@ -716,9 +727,7 @@ static int mes_v12_0_set_hw_resources(struct amdgpu_mes *mes, int pipe)
+ 			mes_set_hw_res_pkt.compute_hqd_mask[i] =
+ 				mes->compute_hqd_mask[i];
+ 
+-		for (i = 0; i < MAX_GFX_PIPES; i++)
+-			mes_set_hw_res_pkt.gfx_hqd_mask[i] =
+-				mes->gfx_hqd_mask[i];
++		mes_v12_0_set_gfx_hqd_mask(&mes_set_hw_res_pkt);
+ 
+ 		for (i = 0; i < MAX_SDMA_PIPES; i++)
+ 			mes_set_hw_res_pkt.sdma_hqd_mask[i] =
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_8.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_8.c
+index 84cde1239ee458..4a43c9ab95a2b1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_8.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_8.c
+@@ -30,6 +30,7 @@
+ #include "soc15_common.h"
+ #include "soc15.h"
+ #include "amdgpu_ras.h"
++#include "amdgpu_psp.h"
+ 
+ #define regVM_L2_CNTL3_DEFAULT	0x80100007
+ #define regVM_L2_CNTL4_DEFAULT	0x000000c1
+@@ -192,10 +193,8 @@ static void mmhub_v1_8_init_tlb_regs(struct amdgpu_device *adev)
+ 	uint32_t tmp, inst_mask;
+ 	int i;
+ 
+-	/* Setup TLB control */
+-	inst_mask = adev->aid_mask;
+-	for_each_inst(i, inst_mask) {
+-		tmp = RREG32_SOC15(MMHUB, i, regMC_VM_MX_L1_TLB_CNTL);
++	if (amdgpu_sriov_reg_indirect_l1_tlb_cntl(adev)) {
++		tmp = RREG32_SOC15(MMHUB, 0, regMC_VM_MX_L1_TLB_CNTL);
+ 
+ 		tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB,
+ 				    1);
+@@ -209,7 +208,26 @@ static void mmhub_v1_8_init_tlb_regs(struct amdgpu_device *adev)
+ 				    MTYPE, MTYPE_UC);/* XXX for emulation. */
+ 		tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ATC_EN, 1);
+ 
+-		WREG32_SOC15(MMHUB, i, regMC_VM_MX_L1_TLB_CNTL, tmp);
++		psp_reg_program_no_ring(&adev->psp, tmp, PSP_REG_MMHUB_L1_TLB_CNTL);
++	} else {
++		inst_mask = adev->aid_mask;
++		for_each_inst(i, inst_mask) {
++			tmp = RREG32_SOC15(MMHUB, i, regMC_VM_MX_L1_TLB_CNTL);
++
++			tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB,
++					    1);
++			tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
++					    SYSTEM_ACCESS_MODE, 3);
++			tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
++					    ENABLE_ADVANCED_DRIVER_MODEL, 1);
++			tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
++					    SYSTEM_APERTURE_UNMAPPED_ACCESS, 0);
++			tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
++					    MTYPE, MTYPE_UC);/* XXX for emulation. */
++			tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ATC_EN, 1);
++
++			WREG32_SOC15(MMHUB, i, regMC_VM_MX_L1_TLB_CNTL, tmp);
++		}
+ 	}
+ }
+ 
+@@ -454,6 +472,30 @@ static int mmhub_v1_8_gart_enable(struct amdgpu_device *adev)
+ 	return 0;
+ }
+ 
++static void mmhub_v1_8_disable_l1_tlb(struct amdgpu_device *adev)
++{
++	u32 tmp;
++	u32 i, inst_mask;
++
++	if (amdgpu_sriov_reg_indirect_l1_tlb_cntl(adev)) {
++		tmp = RREG32_SOC15(MMHUB, 0, regMC_VM_MX_L1_TLB_CNTL);
++		tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 0);
++		tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
++				    ENABLE_ADVANCED_DRIVER_MODEL, 0);
++		psp_reg_program_no_ring(&adev->psp, tmp, PSP_REG_MMHUB_L1_TLB_CNTL);
++	} else {
++		inst_mask = adev->aid_mask;
++		for_each_inst(i, inst_mask) {
++			tmp = RREG32_SOC15(MMHUB, i, regMC_VM_MX_L1_TLB_CNTL);
++			tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB,
++					    0);
++			tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
++					    ENABLE_ADVANCED_DRIVER_MODEL, 0);
++			WREG32_SOC15(MMHUB, i, regMC_VM_MX_L1_TLB_CNTL, tmp);
++		}
++	}
++}
++
+ static void mmhub_v1_8_gart_disable(struct amdgpu_device *adev)
+ {
+ 	struct amdgpu_vmhub *hub;
+@@ -467,15 +509,6 @@ static void mmhub_v1_8_gart_disable(struct amdgpu_device *adev)
+ 		for (i = 0; i < 16; i++)
+ 			WREG32_SOC15_OFFSET(MMHUB, j, regVM_CONTEXT0_CNTL,
+ 					    i * hub->ctx_distance, 0);
+-
+-		/* Setup TLB control */
+-		tmp = RREG32_SOC15(MMHUB, j, regMC_VM_MX_L1_TLB_CNTL);
+-		tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB,
+-				    0);
+-		tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
+-				    ENABLE_ADVANCED_DRIVER_MODEL, 0);
+-		WREG32_SOC15(MMHUB, j, regMC_VM_MX_L1_TLB_CNTL, tmp);
+-
+ 		if (!amdgpu_sriov_vf(adev)) {
+ 			/* Setup L2 cache */
+ 			tmp = RREG32_SOC15(MMHUB, j, regVM_L2_CNTL);
+@@ -485,6 +518,8 @@ static void mmhub_v1_8_gart_disable(struct amdgpu_device *adev)
+ 			WREG32_SOC15(MMHUB, j, regVM_L2_CNTL3, 0);
+ 		}
+ 	}
++
++	mmhub_v1_8_disable_l1_tlb(adev);
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
+index afdf8ce3b4c59e..f5f616ab20e704 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
+@@ -858,6 +858,25 @@ static bool psp_v13_0_is_reload_needed(struct psp_context *psp)
+ 	return false;
+ }
+ 
++static int psp_v13_0_reg_program_no_ring(struct psp_context *psp, uint32_t val,
++					 enum psp_reg_prog_id id)
++{
++	struct amdgpu_device *adev = psp->adev;
++	int ret = -EOPNOTSUPP;
++
++	/* PSP will broadcast the value to all instances */
++	if (amdgpu_sriov_vf(adev)) {
++		WREG32_SOC15(MP0, 0, regMP0_SMN_C2PMSG_101, GFX_CTRL_CMD_ID_GBR_IH_SET);
++		WREG32_SOC15(MP0, 0, regMP0_SMN_C2PMSG_102, id);
++		WREG32_SOC15(MP0, 0, regMP0_SMN_C2PMSG_103, val);
++
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x80000000, false);
++	}
++
++	return ret;
++}
++
+ static const struct psp_funcs psp_v13_0_funcs = {
+ 	.init_microcode = psp_v13_0_init_microcode,
+ 	.wait_for_bootloader = psp_v13_0_wait_for_bootloader_steady_state,
+@@ -884,6 +903,7 @@ static const struct psp_funcs psp_v13_0_funcs = {
+ 	.get_ras_capability = psp_v13_0_get_ras_capability,
+ 	.is_aux_sos_load_required = psp_v13_0_is_aux_sos_load_required,
+ 	.is_reload_needed = psp_v13_0_is_reload_needed,
++	.reg_program_no_ring = psp_v13_0_reg_program_no_ring,
+ };
+ 
+ void psp_v13_0_set_psp_funcs(struct psp_context *psp)
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index b9c82be6ce134f..bf0854bd55551b 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -352,11 +352,6 @@ struct kfd_dev *kgd2kfd_probe(struct amdgpu_device *adev, bool vf)
+ 			f2g = &aldebaran_kfd2kgd;
+ 			break;
+ 		case IP_VERSION(9, 4, 3):
+-			gfx_target_version = adev->rev_id >= 1 ? 90402
+-					   : adev->flags & AMD_IS_APU ? 90400
+-					   : 90401;
+-			f2g = &gc_9_4_3_kfd2kgd;
+-			break;
+ 		case IP_VERSION(9, 4, 4):
+ 			gfx_target_version = 90402;
+ 			f2g = &gc_9_4_3_kfd2kgd;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+index 80320a6c8854a5..97933d2a380323 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+@@ -495,6 +495,10 @@ static void update_mqd_sdma(struct mqd_manager *mm, void *mqd,
+ 	m->sdma_engine_id = q->sdma_engine_id;
+ 	m->sdma_queue_id = q->sdma_queue_id;
+ 	m->sdmax_rlcx_dummy_reg = SDMA_RLC_DUMMY_DEFAULT;
++	/* Allow context switches inside the IB so a massive command buffer
++	 * of long-running SDMA commands cannot starve other processes
++	 */
++	m->sdmax_rlcx_ib_cntl |= SDMA0_GFX_IB_CNTL__SWITCH_INSIDE_IB_MASK;
+ 
+ 	q->is_active = QUEUE_IS_ACTIVE(*q);
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_queue.c b/drivers/gpu/drm/amd/amdkfd/kfd_queue.c
+index 4afff7094cafcd..a65c67cf56ff37 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_queue.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_queue.c
+@@ -402,7 +402,7 @@ static u32 kfd_get_vgpr_size_per_cu(u32 gfxv)
+ {
+ 	u32 vgpr_size = 0x40000;
+ 
+-	if ((gfxv / 100 * 100) == 90400 ||	/* GFX_VERSION_AQUA_VANJARAM */
++	if (gfxv == 90402 ||			/* GFX_VERSION_AQUA_VANJARAM */
+ 	    gfxv == 90010 ||			/* GFX_VERSION_ALDEBARAN */
+ 	    gfxv == 90008 ||			/* GFX_VERSION_ARCTURUS */
+ 	    gfxv == 90500)
+@@ -462,7 +462,7 @@ void kfd_queue_ctx_save_restore_size(struct kfd_topology_device *dev)
+ 
+ 	if (gfxv == 80002)	/* GFX_VERSION_TONGA */
+ 		props->eop_buffer_size = 0x8000;
+-	else if ((gfxv / 100 * 100) == 90400)	/* GFX_VERSION_AQUA_VANJARAM */
++	else if (gfxv == 90402)	/* GFX_VERSION_AQUA_VANJARAM */
+ 		props->eop_buffer_size = 4096;
+ 	else if (gfxv >= 80000)
+ 		props->eop_buffer_size = 4096;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+index 100717a98ec113..72be6e152e881e 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+@@ -1245,8 +1245,7 @@ svm_range_get_pte_flags(struct kfd_node *node,
+ 	case IP_VERSION(9, 4, 4):
+ 	case IP_VERSION(9, 5, 0):
+ 		if (ext_coherent)
+-			mtype_local = (gc_ip_version < IP_VERSION(9, 5, 0) && !node->adev->rev_id) ?
+-					AMDGPU_VM_MTYPE_UC : AMDGPU_VM_MTYPE_CC;
++			mtype_local = AMDGPU_VM_MTYPE_CC;
+ 		else
+ 			mtype_local = amdgpu_mtype_local == 1 ? AMDGPU_VM_MTYPE_NC :
+ 				amdgpu_mtype_local == 2 ? AMDGPU_VM_MTYPE_CC : AMDGPU_VM_MTYPE_RW;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index 9bbee484d57cc4..d6653e39e1477b 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -510,6 +510,10 @@ static ssize_t node_show(struct kobject *kobj, struct attribute *attr,
+ 			dev->node_props.capability |=
+ 					HSA_CAP_AQL_QUEUE_DOUBLE_MAP;
+ 
++		if (KFD_GC_VERSION(dev->gpu) < IP_VERSION(10, 0, 0) &&
++			(dev->gpu->adev->sdma.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
++				dev->node_props.capability2 |= HSA_CAP2_PER_SDMA_QUEUE_RESET_SUPPORTED;
++
+ 		sysfs_show_32bit_prop(buffer, offs, "max_engine_clk_fcompute",
+ 			dev->node_props.max_engine_clk_fcompute);
+ 
+@@ -2001,8 +2005,6 @@ static void kfd_topology_set_capabilities(struct kfd_topology_device *dev)
+ 		if (!amdgpu_sriov_vf(dev->gpu->adev))
+ 			dev->node_props.capability |= HSA_CAP_PER_QUEUE_RESET_SUPPORTED;
+ 
+-		if (dev->gpu->adev->sdma.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE)
+-			dev->node_props.capability2 |= HSA_CAP2_PER_SDMA_QUEUE_RESET_SUPPORTED;
+ 	} else {
+ 		dev->node_props.debug_prop |= HSA_DBG_WATCH_ADDR_MASK_LO_BIT_GFX10 |
+ 					HSA_DBG_WATCH_ADDR_MASK_HI_BIT;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/Makefile b/drivers/gpu/drm/amd/display/amdgpu_dm/Makefile
+index ab2a97e354da1f..7329b8cc2576ea 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/Makefile
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/Makefile
+@@ -38,6 +38,7 @@ AMDGPUDM = \
+ 	amdgpu_dm_pp_smu.o \
+ 	amdgpu_dm_psr.o \
+ 	amdgpu_dm_replay.o \
++	amdgpu_dm_quirks.o \
+ 	amdgpu_dm_wb.o
+ 
+ ifdef CONFIG_DRM_AMD_DC_FP
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 0fe8bd19ecd13e..96118a0e1ffeb2 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -80,7 +80,6 @@
+ #include <linux/power_supply.h>
+ #include <linux/firmware.h>
+ #include <linux/component.h>
+-#include <linux/dmi.h>
+ #include <linux/sort.h>
+ 
+ #include <drm/display/drm_dp_mst_helper.h>
+@@ -1631,153 +1630,6 @@ static bool dm_should_disable_stutter(struct pci_dev *pdev)
+ 	return false;
+ }
+ 
+-struct amdgpu_dm_quirks {
+-	bool aux_hpd_discon;
+-	bool support_edp0_on_dp1;
+-};
+-
+-static struct amdgpu_dm_quirks quirk_entries = {
+-	.aux_hpd_discon = false,
+-	.support_edp0_on_dp1 = false
+-};
+-
+-static int edp0_on_dp1_callback(const struct dmi_system_id *id)
+-{
+-	quirk_entries.support_edp0_on_dp1 = true;
+-	return 0;
+-}
+-
+-static int aux_hpd_discon_callback(const struct dmi_system_id *id)
+-{
+-	quirk_entries.aux_hpd_discon = true;
+-	return 0;
+-}
+-
+-static const struct dmi_system_id dmi_quirk_table[] = {
+-	{
+-		.callback = aux_hpd_discon_callback,
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Precision 3660"),
+-		},
+-	},
+-	{
+-		.callback = aux_hpd_discon_callback,
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Precision 3260"),
+-		},
+-	},
+-	{
+-		.callback = aux_hpd_discon_callback,
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Precision 3460"),
+-		},
+-	},
+-	{
+-		.callback = aux_hpd_discon_callback,
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex Tower Plus 7010"),
+-		},
+-	},
+-	{
+-		.callback = aux_hpd_discon_callback,
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex Tower 7010"),
+-		},
+-	},
+-	{
+-		.callback = aux_hpd_discon_callback,
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex SFF Plus 7010"),
+-		},
+-	},
+-	{
+-		.callback = aux_hpd_discon_callback,
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex SFF 7010"),
+-		},
+-	},
+-	{
+-		.callback = aux_hpd_discon_callback,
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex Micro Plus 7010"),
+-		},
+-	},
+-	{
+-		.callback = aux_hpd_discon_callback,
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex Micro 7010"),
+-		},
+-	},
+-	{
+-		.callback = edp0_on_dp1_callback,
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HP Elite mt645 G8 Mobile Thin Client"),
+-		},
+-	},
+-	{
+-		.callback = edp0_on_dp1_callback,
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 645 14 inch G11 Notebook PC"),
+-		},
+-	},
+-	{
+-		.callback = edp0_on_dp1_callback,
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 665 16 inch G11 Notebook PC"),
+-		},
+-	},
+-	{
+-		.callback = edp0_on_dp1_callback,
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HP ProBook 445 14 inch G11 Notebook PC"),
+-		},
+-	},
+-	{
+-		.callback = edp0_on_dp1_callback,
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HP ProBook 465 16 inch G11 Notebook PC"),
+-		},
+-	},
+-	{}
+-	/* TODO: refactor this from a fixed table to a dynamic option */
+-};
+-
+-static void retrieve_dmi_info(struct amdgpu_display_manager *dm, struct dc_init_data *init_data)
+-{
+-	int dmi_id;
+-	struct drm_device *dev = dm->ddev;
+-
+-	dm->aux_hpd_discon_quirk = false;
+-	init_data->flags.support_edp0_on_dp1 = false;
+-
+-	dmi_id = dmi_check_system(dmi_quirk_table);
+-
+-	if (!dmi_id)
+-		return;
+-
+-	if (quirk_entries.aux_hpd_discon) {
+-		dm->aux_hpd_discon_quirk = true;
+-		drm_info(dev, "aux_hpd_discon_quirk attached\n");
+-	}
+-	if (quirk_entries.support_edp0_on_dp1) {
+-		init_data->flags.support_edp0_on_dp1 = true;
+-		drm_info(dev, "support_edp0_on_dp1 attached\n");
+-	}
+-}
+ 
+ void*
+ dm_allocate_gpu_mem(
+@@ -2064,7 +1916,9 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ 	if (amdgpu_ip_version(adev, DCE_HWIP, 0) >= IP_VERSION(3, 0, 0))
+ 		init_data.num_virtual_links = 1;
+ 
+-	retrieve_dmi_info(&adev->dm, &init_data);
++	retrieve_dmi_info(&adev->dm);
++	if (adev->dm.edp0_on_dp1_quirk)
++		init_data.flags.support_edp0_on_dp1 = true;
+ 
+ 	if (adev->dm.bb_from_dmub)
+ 		init_data.bb_from_dmub = adev->dm.bb_from_dmub;
+@@ -10510,16 +10364,20 @@ static int dm_force_atomic_commit(struct drm_connector *connector)
+ 	 */
+ 	conn_state = drm_atomic_get_connector_state(state, connector);
+ 
+-	ret = PTR_ERR_OR_ZERO(conn_state);
+-	if (ret)
++	/* Check for error in getting connector state */
++	if (IS_ERR(conn_state)) {
++		ret = PTR_ERR(conn_state);
+ 		goto out;
++	}
+ 
+ 	/* Attach crtc to drm_atomic_state*/
+ 	crtc_state = drm_atomic_get_crtc_state(state, &disconnected_acrtc->base);
+ 
+-	ret = PTR_ERR_OR_ZERO(crtc_state);
+-	if (ret)
++	/* Check for error in getting crtc state */
++	if (IS_ERR(crtc_state)) {
++		ret = PTR_ERR(crtc_state);
+ 		goto out;
++	}
+ 
+ 	/* force a restore */
+ 	crtc_state->mode_changed = true;
+@@ -10527,9 +10385,11 @@ static int dm_force_atomic_commit(struct drm_connector *connector)
+ 	/* Attach plane to drm_atomic_state */
+ 	plane_state = drm_atomic_get_plane_state(state, plane);
+ 
+-	ret = PTR_ERR_OR_ZERO(plane_state);
+-	if (ret)
++	/* Check for error in getting plane state */
++	if (IS_ERR(plane_state)) {
++		ret = PTR_ERR(plane_state);
+ 		goto out;
++	}
+ 
+ 	/* Call commit internally with the state we just constructed */
+ 	ret = drm_atomic_commit(state);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index 385faaca6e26a6..9e8c659c53c496 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -613,6 +613,13 @@ struct amdgpu_display_manager {
+ 	 */
+ 	bool aux_hpd_discon_quirk;
+ 
++	/**
++	 * @edp0_on_dp1_quirk:
++	 *
++	 * Quirk for platforms that put eDP0 on DP1.
++	 */
++	bool edp0_on_dp1_quirk;
++
+ 	/**
+ 	 * @dpia_aux_lock:
+ 	 *
+@@ -1045,4 +1052,6 @@ void hdmi_cec_set_edid(struct amdgpu_dm_connector *aconnector);
+ void hdmi_cec_unset_edid(struct amdgpu_dm_connector *aconnector);
+ int amdgpu_dm_initialize_hdmi_connector(struct amdgpu_dm_connector *aconnector);
+ 
++void retrieve_dmi_info(struct amdgpu_display_manager *dm);
++
+ #endif /* __AMDGPU_DM_H__ */
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 5cdbc86ef8f5a9..25e8befbcc479a 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -1739,16 +1739,17 @@ static bool is_dsc_common_config_possible(struct dc_stream_state *stream,
+ 					  struct dc_dsc_bw_range *bw_range)
+ {
+ 	struct dc_dsc_policy dsc_policy = {0};
++	bool is_dsc_possible;
+ 
+ 	dc_dsc_get_policy_for_timing(&stream->timing, 0, &dsc_policy, dc_link_get_highest_encoding_format(stream->link));
+-	dc_dsc_compute_bandwidth_range(stream->sink->ctx->dc->res_pool->dscs[0],
+-				       stream->sink->ctx->dc->debug.dsc_min_slice_height_override,
+-				       dsc_policy.min_target_bpp * 16,
+-				       dsc_policy.max_target_bpp * 16,
+-				       &stream->sink->dsc_caps.dsc_dec_caps,
+-				       &stream->timing, dc_link_get_highest_encoding_format(stream->link), bw_range);
+-
+-	return bw_range->max_target_bpp_x16 && bw_range->min_target_bpp_x16;
++	is_dsc_possible = dc_dsc_compute_bandwidth_range(stream->sink->ctx->dc->res_pool->dscs[0],
++							 stream->sink->ctx->dc->debug.dsc_min_slice_height_override,
++							 dsc_policy.min_target_bpp * 16,
++							 dsc_policy.max_target_bpp * 16,
++							 &stream->sink->dsc_caps.dsc_dec_caps,
++							 &stream->timing, dc_link_get_highest_encoding_format(stream->link), bw_range);
++
++	return is_dsc_possible;
+ }
+ #endif
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_quirks.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_quirks.c
+new file mode 100644
+index 00000000000000..1da07ebf9217c0
+--- /dev/null
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_quirks.c
+@@ -0,0 +1,178 @@
++// SPDX-License-Identifier: MIT
++/*
++ * Copyright 2025 Advanced Micro Devices, Inc.
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice shall be included in
++ * all copies or substantial portions of the Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
++ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
++ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
++ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
++ * OTHER DEALINGS IN THE SOFTWARE.
++ *
++ * Authors: AMD
++ *
++ */
++
++#include <linux/dmi.h>
++
++#include "amdgpu.h"
++#include "amdgpu_dm.h"
++
++struct amdgpu_dm_quirks {
++	bool aux_hpd_discon;
++	bool support_edp0_on_dp1;
++};
++
++static struct amdgpu_dm_quirks quirk_entries = {
++	.aux_hpd_discon = false,
++	.support_edp0_on_dp1 = false
++};
++
++static int edp0_on_dp1_callback(const struct dmi_system_id *id)
++{
++	quirk_entries.support_edp0_on_dp1 = true;
++	return 0;
++}
++
++static int aux_hpd_discon_callback(const struct dmi_system_id *id)
++{
++	quirk_entries.aux_hpd_discon = true;
++	return 0;
++}
++
++static const struct dmi_system_id dmi_quirk_table[] = {
++	{
++		.callback = aux_hpd_discon_callback,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Precision 3660"),
++		},
++	},
++	{
++		.callback = aux_hpd_discon_callback,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Precision 3260"),
++		},
++	},
++	{
++		.callback = aux_hpd_discon_callback,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Precision 3460"),
++		},
++	},
++	{
++		.callback = aux_hpd_discon_callback,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex Tower Plus 7010"),
++		},
++	},
++	{
++		.callback = aux_hpd_discon_callback,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex Tower 7010"),
++		},
++	},
++	{
++		.callback = aux_hpd_discon_callback,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex SFF Plus 7010"),
++		},
++	},
++	{
++		.callback = aux_hpd_discon_callback,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex SFF 7010"),
++		},
++	},
++	{
++		.callback = aux_hpd_discon_callback,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex Micro Plus 7010"),
++		},
++	},
++	{
++		.callback = aux_hpd_discon_callback,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex Micro 7010"),
++		},
++	},
++	{
++		.callback = edp0_on_dp1_callback,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP Elite mt645 G8 Mobile Thin Client"),
++		},
++	},
++	{
++		.callback = edp0_on_dp1_callback,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 645 14 inch G11 Notebook PC"),
++		},
++	},
++	{
++		.callback = edp0_on_dp1_callback,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 665 16 inch G11 Notebook PC"),
++		},
++	},
++	{
++		.callback = edp0_on_dp1_callback,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP ProBook 445 14 inch G11 Notebook PC"),
++		},
++	},
++	{
++		.callback = edp0_on_dp1_callback,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP ProBook 465 16 inch G11 Notebook PC"),
++		},
++	},
++	{}
++	/* TODO: refactor this from a fixed table to a dynamic option */
++};
++
++void retrieve_dmi_info(struct amdgpu_display_manager *dm)
++{
++	struct drm_device *dev = dm->ddev;
++	int dmi_id;
++
++	dm->aux_hpd_discon_quirk = false;
++	dm->edp0_on_dp1_quirk = false;
++
++	dmi_id = dmi_check_system(dmi_quirk_table);
++
++	if (!dmi_id)
++		return;
++
++	if (quirk_entries.aux_hpd_discon) {
++		dm->aux_hpd_discon_quirk = true;
++		drm_info(dev, "aux_hpd_discon_quirk attached\n");
++	}
++	if (quirk_entries.support_edp0_on_dp1) {
++		dm->edp0_on_dp1_quirk = true;
++		drm_info(dev, "support_edp0_on_dp1 attached\n");
++	}
++}
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn351_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn351_clk_mgr.c
+index 6a6ae618650b6d..4607eff07253c2 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn351_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn351_clk_mgr.c
+@@ -65,6 +65,7 @@
+ #define mmCLK1_CLK5_ALLOW_DS 0x16EB1
+ 
+ #define mmCLK5_spll_field_8 0x1B04B
++#define mmCLK6_spll_field_8 0x1B24B
+ #define mmDENTIST_DISPCLK_CNTL 0x0124
+ #define regDENTIST_DISPCLK_CNTL 0x0064
+ #define regDENTIST_DISPCLK_CNTL_BASE_IDX 1
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+index 142de8938d7c3d..bb1ac12a2b0955 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+@@ -90,6 +90,7 @@
+ #define mmCLK1_CLK5_ALLOW_DS 0x16EB1
+ 
+ #define mmCLK5_spll_field_8 0x1B24B
++#define mmCLK6_spll_field_8 0x1B24B
+ #define mmDENTIST_DISPCLK_CNTL 0x0124
+ #define regDENTIST_DISPCLK_CNTL 0x0064
+ #define regDENTIST_DISPCLK_CNTL_BASE_IDX 1
+@@ -116,6 +117,7 @@
+ #define DENTIST_DISPCLK_CNTL__DENTIST_DPPCLK_WDIVIDER_MASK 0x7F000000L
+ 
+ #define CLK5_spll_field_8__spll_ssc_en_MASK 0x00002000L
++#define CLK6_spll_field_8__spll_ssc_en_MASK 0x00002000L
+ 
+ #define SMU_VER_THRESHOLD 0x5D4A00 //93.74.0
+ #undef FN
+@@ -596,7 +598,11 @@ static bool dcn35_is_spll_ssc_enabled(struct clk_mgr *clk_mgr_base)
+ 
+ 	uint32_t ssc_enable;
+ 
+-	ssc_enable = REG_READ(CLK5_spll_field_8) & CLK5_spll_field_8__spll_ssc_en_MASK;
++	if (clk_mgr_base->ctx->dce_version == DCN_VERSION_3_51) {
++		ssc_enable = REG_READ(CLK6_spll_field_8) & CLK6_spll_field_8__spll_ssc_en_MASK;
++	} else {
++		ssc_enable = REG_READ(CLK5_spll_field_8) & CLK5_spll_field_8__spll_ssc_en_MASK;
++	}
+ 
+ 	return ssc_enable != 0;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c b/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
+index b363f5360818d8..ad910065f463fe 100644
+--- a/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
++++ b/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
+@@ -391,6 +391,7 @@ static void dccg35_set_dppclk_rcg(struct dccg *dccg,
+ 
+ 	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+ 
++
+ 	if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp && enable)
+ 		return;
+ 
+@@ -411,6 +412,8 @@ static void dccg35_set_dppclk_rcg(struct dccg *dccg,
+ 	BREAK_TO_DEBUGGER();
+ 		break;
+ 	}
++	//DC_LOG_DEBUG("%s: inst(%d) DPPCLK rcg_disable: %d\n", __func__, inst, enable ? 0 : 1);
++
+ }
+ 
+ static void dccg35_set_dpstreamclk_rcg(
+@@ -1112,30 +1115,24 @@ static void dcn35_set_dppclk_enable(struct dccg *dccg,
+ {
+ 	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+ 
++
+ 	switch (dpp_inst) {
+ 	case 0:
+ 		REG_UPDATE(DPPCLK_CTRL, DPPCLK0_EN, enable);
+-		if (dccg->ctx->dc->debug.root_clock_optimization.bits.dpp)
+-			REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, enable);
+ 		break;
+ 	case 1:
+ 		REG_UPDATE(DPPCLK_CTRL, DPPCLK1_EN, enable);
+-		if (dccg->ctx->dc->debug.root_clock_optimization.bits.dpp)
+-			REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK1_ROOT_GATE_DISABLE, enable);
+ 		break;
+ 	case 2:
+ 		REG_UPDATE(DPPCLK_CTRL, DPPCLK2_EN, enable);
+-		if (dccg->ctx->dc->debug.root_clock_optimization.bits.dpp)
+-			REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK2_ROOT_GATE_DISABLE, enable);
+ 		break;
+ 	case 3:
+ 		REG_UPDATE(DPPCLK_CTRL, DPPCLK3_EN, enable);
+-		if (dccg->ctx->dc->debug.root_clock_optimization.bits.dpp)
+-			REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK3_ROOT_GATE_DISABLE, enable);
+ 		break;
+ 	default:
+ 		break;
+ 	}
++	//DC_LOG_DEBUG("%s: dpp_inst(%d) DPPCLK_EN = %d\n", __func__, dpp_inst, enable);
+ 
+ }
+ 
+@@ -1163,14 +1160,18 @@ static void dccg35_update_dpp_dto(struct dccg *dccg, int dpp_inst,
+ 			ASSERT(false);
+ 			phase = 0xff;
+ 		}
++		dccg35_set_dppclk_rcg(dccg, dpp_inst, false);
+ 
+ 		REG_SET_2(DPPCLK_DTO_PARAM[dpp_inst], 0,
+ 				DPPCLK0_DTO_PHASE, phase,
+ 				DPPCLK0_DTO_MODULO, modulo);
+ 
+ 		dcn35_set_dppclk_enable(dccg, dpp_inst, true);
+-	} else
++	} else {
+ 		dcn35_set_dppclk_enable(dccg, dpp_inst, false);
++		/* we have this in hwss: disable_plane */
++		//dccg35_set_dppclk_rcg(dccg, dpp_inst, true);
++	}
+ 	dccg->pipe_dppclk_khz[dpp_inst] = req_dppclk;
+ }
+ 
+@@ -1182,6 +1183,7 @@ static void dccg35_set_dppclk_root_clock_gating(struct dccg *dccg,
+ 	if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp)
+ 		return;
+ 
++
+ 	switch (dpp_inst) {
+ 	case 0:
+ 		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, enable);
+@@ -1198,6 +1200,8 @@ static void dccg35_set_dppclk_root_clock_gating(struct dccg *dccg,
+ 	default:
+ 		break;
+ 	}
++	//DC_LOG_DEBUG("%s: dpp_inst(%d) rcg: %d\n", __func__, dpp_inst, enable);
++
+ }
+ 
+ static void dccg35_get_pixel_rate_div(
+@@ -1521,28 +1525,30 @@ static void dccg35_set_physymclk_root_clock_gating(
+ 	switch (phy_inst) {
+ 	case 0:
+ 		REG_UPDATE(DCCG_GATE_DISABLE_CNTL2,
+-				PHYASYMCLK_ROOT_GATE_DISABLE, enable ? 1 : 0);
++				PHYASYMCLK_ROOT_GATE_DISABLE, enable ? 0 : 1);
+ 		break;
+ 	case 1:
+ 		REG_UPDATE(DCCG_GATE_DISABLE_CNTL2,
+-				PHYBSYMCLK_ROOT_GATE_DISABLE, enable ? 1 : 0);
++				PHYBSYMCLK_ROOT_GATE_DISABLE, enable ? 0 : 1);
+ 		break;
+ 	case 2:
+ 		REG_UPDATE(DCCG_GATE_DISABLE_CNTL2,
+-				PHYCSYMCLK_ROOT_GATE_DISABLE, enable ? 1 : 0);
++				PHYCSYMCLK_ROOT_GATE_DISABLE, enable ? 0 : 1);
+ 		break;
+ 	case 3:
+ 		REG_UPDATE(DCCG_GATE_DISABLE_CNTL2,
+-				PHYDSYMCLK_ROOT_GATE_DISABLE, enable ? 1 : 0);
++				PHYDSYMCLK_ROOT_GATE_DISABLE, enable ? 0 : 1);
+ 		break;
+ 	case 4:
+ 		REG_UPDATE(DCCG_GATE_DISABLE_CNTL2,
+-				PHYESYMCLK_ROOT_GATE_DISABLE, enable ? 1 : 0);
++				PHYESYMCLK_ROOT_GATE_DISABLE, enable ? 0 : 1);
+ 		break;
+ 	default:
+ 		BREAK_TO_DEBUGGER();
+ 		return;
+ 	}
++	//DC_LOG_DEBUG("%s: phy_inst(%d) PHYSYMCLK_ROOT_GATE_DISABLE: %d\n", __func__, phy_inst, enable ? 0 : 1);
++
+ }
+ 
+ static void dccg35_set_physymclk(
+@@ -1643,6 +1649,8 @@ static void dccg35_dpp_root_clock_control(
+ 		return;
+ 
+ 	if (clock_on) {
++		dccg35_set_dppclk_rcg(dccg, dpp_inst, false);
++
+ 		/* turn off the DTO and leave phase/modulo at max */
+ 		dcn35_set_dppclk_enable(dccg, dpp_inst, 1);
+ 		REG_SET_2(DPPCLK_DTO_PARAM[dpp_inst], 0,
+@@ -1654,6 +1662,8 @@ static void dccg35_dpp_root_clock_control(
+ 		REG_SET_2(DPPCLK_DTO_PARAM[dpp_inst], 0,
+ 			  DPPCLK0_DTO_PHASE, 0,
+ 			  DPPCLK0_DTO_MODULO, 1);
++		/* we have this in hwss: disable_plane */
++		//dccg35_set_dppclk_rcg(dccg, dpp_inst, true);
+ 	}
+ 
+ 	dccg->dpp_clock_gated[dpp_inst] = !clock_on;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+index f1fe49401bc0ac..8d24763938ea6d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+@@ -1002,6 +1002,7 @@ static bool CalculatePrefetchSchedule(
+ 
+ 	dst_y_prefetch_equ = VStartup - (Tsetup + dml_max(TWait + TCalc, *Tdmdl)) / LineTime
+ 			- (*DSTYAfterScaler + *DSTXAfterScaler / myPipe->HTotal);
++	dst_y_prefetch_equ = dml_min(dst_y_prefetch_equ, 63.75); // limit to the reg limit of U6.2 for DST_Y_PREFETCH
+ 
+ 	Lsw_oto = dml_max(PrefetchSourceLinesY, PrefetchSourceLinesC);
+ 	Tsw_oto = Lsw_oto * LineTime;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c b/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
+index f567a9023682d1..ed59c77bc6f60a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
+@@ -1105,6 +1105,7 @@ static bool CalculatePrefetchSchedule(
+ 	Tr0_oto_lines = dml_ceil(4.0 * Tr0_oto / LineTime, 1) / 4.0;
+ 	dst_y_prefetch_oto = Tvm_oto_lines + 2 * Tr0_oto_lines + Lsw_oto;
+ 	dst_y_prefetch_equ =  VStartup - (*TSetup + dml_max(TWait + TCalc, *Tdmdl)) / LineTime - (*DSTYAfterScaler + *DSTXAfterScaler / myPipe->HTotal);
++	dst_y_prefetch_equ = dml_min(dst_y_prefetch_equ, 63.75); // limit to the reg limit of U6.2 for DST_Y_PREFETCH
+ 	dst_y_prefetch_equ = dml_floor(4.0 * (dst_y_prefetch_equ + 0.125), 1) / 4.0;
+ 	Tpre_rounded = dst_y_prefetch_equ * LineTime;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c b/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c
+index 5865e8fa2d8e8f..9f3938a50240f3 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c
+@@ -1123,6 +1123,7 @@ static bool CalculatePrefetchSchedule(
+ 	Tr0_oto_lines = dml_ceil(4.0 * Tr0_oto / LineTime, 1) / 4.0;
+ 	dst_y_prefetch_oto = Tvm_oto_lines + 2 * Tr0_oto_lines + Lsw_oto;
+ 	dst_y_prefetch_equ =  VStartup - (*TSetup + dml_max(TWait + TCalc, *Tdmdl)) / LineTime - (*DSTYAfterScaler + *DSTXAfterScaler / myPipe->HTotal);
++	dst_y_prefetch_equ = dml_min(dst_y_prefetch_equ, 63.75); // limit to the reg limit of U6.2 for DST_Y_PREFETCH
+ 	dst_y_prefetch_equ = dml_floor(4.0 * (dst_y_prefetch_equ + 0.125), 1) / 4.0;
+ 	Tpre_rounded = dst_y_prefetch_equ * LineTime;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+index ab6baf2698012c..5de775fd8fceef 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+@@ -896,7 +896,7 @@ static void populate_dummy_dml_surface_cfg(struct dml_surface_cfg_st *out, unsig
+ 	out->SurfaceWidthC[location] = in->timing.h_addressable;
+ 	out->SurfaceHeightC[location] = in->timing.v_addressable;
+ 	out->PitchY[location] = ((out->SurfaceWidthY[location] + 127) / 128) * 128;
+-	out->PitchC[location] = 0;
++	out->PitchC[location] = 1;
+ 	out->DCCEnable[location] = false;
+ 	out->DCCMetaPitchY[location] = 0;
+ 	out->DCCMetaPitchC[location] = 0;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
+index e89571874185ee..525b7d04bf84cd 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
+@@ -663,7 +663,10 @@ static bool dml2_validate_and_build_resource(const struct dc *in_dc, struct dc_s
+ 		dml2_copy_clocks_to_dc_state(&out_clks, context);
+ 		dml2_extract_watermark_set(&context->bw_ctx.bw.dcn.watermarks.a, &dml2->v20.dml_core_ctx);
+ 		dml2_extract_watermark_set(&context->bw_ctx.bw.dcn.watermarks.b, &dml2->v20.dml_core_ctx);
+-		memcpy(&context->bw_ctx.bw.dcn.watermarks.c, &dml2->v20.g6_temp_read_watermark_set, sizeof(context->bw_ctx.bw.dcn.watermarks.c));
++		if (context->streams[0]->sink->link->dc->caps.is_apu)
++			dml2_extract_watermark_set(&context->bw_ctx.bw.dcn.watermarks.c, &dml2->v20.dml_core_ctx);
++		else
++			memcpy(&context->bw_ctx.bw.dcn.watermarks.c, &dml2->v20.g6_temp_read_watermark_set, sizeof(context->bw_ctx.bw.dcn.watermarks.c));
+ 		dml2_extract_watermark_set(&context->bw_ctx.bw.dcn.watermarks.d, &dml2->v20.dml_core_ctx);
+ 		dml2_extract_writeback_wm(context, &dml2->v20.dml_core_ctx);
+ 		//copy for deciding zstate use
+diff --git a/drivers/gpu/drm/amd/display/dc/dpp/dcn35/dcn35_dpp.c b/drivers/gpu/drm/amd/display/dc/dpp/dcn35/dcn35_dpp.c
+index 62b7012cda4304..f7a373a3d70a52 100644
+--- a/drivers/gpu/drm/amd/display/dc/dpp/dcn35/dcn35_dpp.c
++++ b/drivers/gpu/drm/amd/display/dc/dpp/dcn35/dcn35_dpp.c
+@@ -138,7 +138,7 @@ bool dpp35_construct(
+ 	dpp->base.funcs = &dcn35_dpp_funcs;
+ 
+ 	// w/a for cursor memory stuck in LS by programming DISPCLK_R_GATE_DISABLE, limit w/a to some ASIC revs
+-	if (dpp->base.ctx->asic_id.hw_internal_rev <= 0x10)
++	if (dpp->base.ctx->asic_id.hw_internal_rev < 0x40)
+ 		dpp->dispclk_r_gate_disable = true;
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
+index be26c925fdfa1a..e68f21fd5f0fb4 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
+@@ -84,6 +84,20 @@ static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+ 		struct dsc_config dsc_cfg;
+ 		struct dsc_optc_config dsc_optc_cfg = {0};
+ 		enum optc_dsc_mode optc_dsc_mode;
++		struct dcn_dsc_state dsc_state = {0};
++
++		if (!dsc) {
++			DC_LOG_DSC("DSC is NULL for tg instance %d", pipe_ctx->stream_res.tg->inst);
++			return;
++		}
++
++		if (dsc->funcs->dsc_read_state) {
++			dsc->funcs->dsc_read_state(dsc, &dsc_state);
++			if (!dsc_state.dsc_fw_en) {
++				DC_LOG_DSC("DSC has been disabled for tg instance %d", pipe_ctx->stream_res.tg->inst);
++				return;
++			}
++		}
+ 
+ 		/* Enable DSC hw block */
+ 		dsc_cfg.pic_width = (stream->timing.h_addressable + stream->timing.h_border_left + stream->timing.h_border_right) / opp_cnt;
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+index 922b8d71cf1aa5..63077c1fad859c 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+@@ -241,11 +241,6 @@ void dcn35_init_hw(struct dc *dc)
+ 			dc->res_pool->hubbub->funcs->allow_self_refresh_control(dc->res_pool->hubbub,
+ 					!dc->res_pool->hubbub->ctx->dc->debug.disable_stutter);
+ 	}
+-	if (res_pool->dccg->funcs->dccg_root_gate_disable_control) {
+-		for (i = 0; i < res_pool->pipe_count; i++)
+-			res_pool->dccg->funcs->dccg_root_gate_disable_control(res_pool->dccg, i, 0);
+-	}
+-
+ 	for (i = 0; i < res_pool->audio_count; i++) {
+ 		struct audio *audio = res_pool->audios[i];
+ 
+@@ -901,12 +896,18 @@ void dcn35_init_pipes(struct dc *dc, struct dc_state *context)
+ void dcn35_enable_plane(struct dc *dc, struct pipe_ctx *pipe_ctx,
+ 			       struct dc_state *context)
+ {
++	struct dpp *dpp = pipe_ctx->plane_res.dpp;
++	struct dccg *dccg = dc->res_pool->dccg;
++
++
+ 	/* enable DCFCLK current DCHUB */
+ 	pipe_ctx->plane_res.hubp->funcs->hubp_clk_cntl(pipe_ctx->plane_res.hubp, true);
+ 
+ 	/* initialize HUBP on power up */
+ 	pipe_ctx->plane_res.hubp->funcs->hubp_init(pipe_ctx->plane_res.hubp);
+-
++	/* make sure DPPCLK is on */
++	dccg->funcs->dccg_root_gate_disable_control(dccg, dpp->inst, true);
++	dpp->funcs->dpp_dppclk_control(dpp, false, true);
+ 	/* make sure OPP_PIPE_CLOCK_EN = 1 */
+ 	pipe_ctx->stream_res.opp->funcs->opp_pipe_clock_control(
+ 			pipe_ctx->stream_res.opp,
+@@ -923,6 +924,7 @@ void dcn35_enable_plane(struct dc *dc, struct pipe_ctx *pipe_ctx,
+ 		// Program system aperture settings
+ 		pipe_ctx->plane_res.hubp->funcs->hubp_set_vm_system_aperture_settings(pipe_ctx->plane_res.hubp, &apt);
+ 	}
++	//DC_LOG_DEBUG("%s: dpp_inst(%d) =\n", __func__, dpp->inst);
+ 
+ 	if (!pipe_ctx->top_pipe
+ 		&& pipe_ctx->plane_state
+@@ -938,6 +940,8 @@ void dcn35_plane_atomic_disable(struct dc *dc, struct pipe_ctx *pipe_ctx)
+ {
+ 	struct hubp *hubp = pipe_ctx->plane_res.hubp;
+ 	struct dpp *dpp = pipe_ctx->plane_res.dpp;
++	struct dccg *dccg = dc->res_pool->dccg;
++
+ 
+ 	dc->hwss.wait_for_mpcc_disconnect(dc, dc->res_pool, pipe_ctx);
+ 
+@@ -955,7 +959,8 @@ void dcn35_plane_atomic_disable(struct dc *dc, struct pipe_ctx *pipe_ctx)
+ 	hubp->funcs->hubp_clk_cntl(hubp, false);
+ 
+ 	dpp->funcs->dpp_dppclk_control(dpp, false, false);
+-/*to do, need to support both case*/
++	dccg->funcs->dccg_root_gate_disable_control(dccg, dpp->inst, false);
++
+ 	hubp->power_gated = true;
+ 
+ 	hubp->funcs->hubp_reset(hubp);
+@@ -967,6 +972,8 @@ void dcn35_plane_atomic_disable(struct dc *dc, struct pipe_ctx *pipe_ctx)
+ 	pipe_ctx->top_pipe = NULL;
+ 	pipe_ctx->bottom_pipe = NULL;
+ 	pipe_ctx->plane_state = NULL;
++	//DC_LOG_DEBUG("%s: dpp_inst(%d)=\n", __func__, dpp->inst);
++
+ }
+ 
+ void dcn35_disable_plane(struct dc *dc, struct dc_state *state, struct pipe_ctx *pipe_ctx)
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
+index 221645c023b502..bac8febad69a54 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
+@@ -199,6 +199,7 @@ enum dentist_divider_range {
+ 	CLK_SR_DCN35(CLK1_CLK4_ALLOW_DS), \
+ 	CLK_SR_DCN35(CLK1_CLK5_ALLOW_DS), \
+ 	CLK_SR_DCN35(CLK5_spll_field_8), \
++	CLK_SR_DCN35(CLK6_spll_field_8), \
+ 	SR(DENTIST_DISPCLK_CNTL), \
+ 
+ #define CLK_COMMON_MASK_SH_LIST_DCN32(mask_sh) \
+@@ -307,7 +308,7 @@ struct clk_mgr_registers {
+ 	uint32_t CLK1_CLK4_ALLOW_DS;
+ 	uint32_t CLK1_CLK5_ALLOW_DS;
+ 	uint32_t CLK5_spll_field_8;
+-
++	uint32_t CLK6_spll_field_8;
+ };
+ 
+ struct clk_mgr_shift {
+diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.c b/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.c
+index f0ac0aeeac512c..f839afacd5a5c0 100644
+--- a/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.c
++++ b/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.c
+@@ -191,6 +191,16 @@ static struct irq_source_info_funcs vline0_irq_info_funcs = {
+ 	.ack = NULL
+ };
+ 
++static struct irq_source_info_funcs vline1_irq_info_funcs = {
++	.set = NULL,
++	.ack = NULL
++};
++
++static struct irq_source_info_funcs vline2_irq_info_funcs = {
++	.set = NULL,
++	.ack = NULL
++};
++
+ #undef BASE_INNER
+ #define BASE_INNER(seg) DCN_BASE__INST0_SEG ## seg
+ 
+@@ -259,6 +269,13 @@ static struct irq_source_info_funcs vline0_irq_info_funcs = {
+ 		.funcs = &pflip_irq_info_funcs\
+ 	}
+ 
++#define vblank_int_entry(reg_num)\
++	[DC_IRQ_SOURCE_VBLANK1 + reg_num] = {\
++		IRQ_REG_ENTRY(OTG, reg_num,\
++			OTG_GLOBAL_SYNC_STATUS, VSTARTUP_INT_EN,\
++			OTG_GLOBAL_SYNC_STATUS, VSTARTUP_EVENT_CLEAR),\
++		.funcs = &vblank_irq_info_funcs\
++	}
+ /* vupdate_no_lock_int_entry maps to DC_IRQ_SOURCE_VUPDATEx, to match semantic
+  * of DCE's DC_IRQ_SOURCE_VUPDATEx.
+  */
+@@ -270,14 +287,6 @@ static struct irq_source_info_funcs vline0_irq_info_funcs = {
+ 		.funcs = &vupdate_no_lock_irq_info_funcs\
+ 	}
+ 
+-#define vblank_int_entry(reg_num)\
+-	[DC_IRQ_SOURCE_VBLANK1 + reg_num] = {\
+-		IRQ_REG_ENTRY(OTG, reg_num,\
+-			OTG_GLOBAL_SYNC_STATUS, VSTARTUP_INT_EN,\
+-			OTG_GLOBAL_SYNC_STATUS, VSTARTUP_EVENT_CLEAR),\
+-		.funcs = &vblank_irq_info_funcs\
+-}
+-
+ #define vline0_int_entry(reg_num)\
+ 	[DC_IRQ_SOURCE_DC1_VLINE0 + reg_num] = {\
+ 		IRQ_REG_ENTRY(OTG, reg_num,\
+@@ -285,6 +294,20 @@ static struct irq_source_info_funcs vline0_irq_info_funcs = {
+ 			OTG_VERTICAL_INTERRUPT0_CONTROL, OTG_VERTICAL_INTERRUPT0_CLEAR),\
+ 		.funcs = &vline0_irq_info_funcs\
+ 	}
++#define vline1_int_entry(reg_num)\
++	[DC_IRQ_SOURCE_DC1_VLINE1 + reg_num] = {\
++		IRQ_REG_ENTRY(OTG, reg_num,\
++			OTG_VERTICAL_INTERRUPT1_CONTROL, OTG_VERTICAL_INTERRUPT1_INT_ENABLE,\
++			OTG_VERTICAL_INTERRUPT1_CONTROL, OTG_VERTICAL_INTERRUPT1_CLEAR),\
++		.funcs = &vline1_irq_info_funcs\
++	}
++#define vline2_int_entry(reg_num)\
++	[DC_IRQ_SOURCE_DC1_VLINE2 + reg_num] = {\
++		IRQ_REG_ENTRY(OTG, reg_num,\
++			OTG_VERTICAL_INTERRUPT2_CONTROL, OTG_VERTICAL_INTERRUPT2_INT_ENABLE,\
++			OTG_VERTICAL_INTERRUPT2_CONTROL, OTG_VERTICAL_INTERRUPT2_CLEAR),\
++		.funcs = &vline2_irq_info_funcs\
++	}
+ #define dmub_outbox_int_entry()\
+ 	[DC_IRQ_SOURCE_DMCUB_OUTBOX] = {\
+ 		IRQ_REG_ENTRY_DMUB(\
+@@ -387,21 +410,29 @@ irq_source_info_dcn32[DAL_IRQ_SOURCES_NUMBER] = {
+ 	dc_underflow_int_entry(6),
+ 	[DC_IRQ_SOURCE_DMCU_SCP] = dummy_irq_entry(),
+ 	[DC_IRQ_SOURCE_VBIOS_SW] = dummy_irq_entry(),
+-	vupdate_no_lock_int_entry(0),
+-	vupdate_no_lock_int_entry(1),
+-	vupdate_no_lock_int_entry(2),
+-	vupdate_no_lock_int_entry(3),
+ 	vblank_int_entry(0),
+ 	vblank_int_entry(1),
+ 	vblank_int_entry(2),
+ 	vblank_int_entry(3),
++	[DC_IRQ_SOURCE_DC5_VLINE1] = dummy_irq_entry(),
++	[DC_IRQ_SOURCE_DC6_VLINE1] = dummy_irq_entry(),
++	dmub_outbox_int_entry(),
++	vupdate_no_lock_int_entry(0),
++	vupdate_no_lock_int_entry(1),
++	vupdate_no_lock_int_entry(2),
++	vupdate_no_lock_int_entry(3),
+ 	vline0_int_entry(0),
+ 	vline0_int_entry(1),
+ 	vline0_int_entry(2),
+ 	vline0_int_entry(3),
+-	[DC_IRQ_SOURCE_DC5_VLINE1] = dummy_irq_entry(),
+-	[DC_IRQ_SOURCE_DC6_VLINE1] = dummy_irq_entry(),
+-	dmub_outbox_int_entry(),
++	vline1_int_entry(0),
++	vline1_int_entry(1),
++	vline1_int_entry(2),
++	vline1_int_entry(3),
++	vline2_int_entry(0),
++	vline2_int_entry(1),
++	vline2_int_entry(2),
++	vline2_int_entry(3)
+ };
+ 
+ static const struct irq_service_funcs irq_service_funcs_dcn32 = {
+diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn401/irq_service_dcn401.c b/drivers/gpu/drm/amd/display/dc/irq/dcn401/irq_service_dcn401.c
+index b43c9524b0de1e..8499e505cf3ef7 100644
+--- a/drivers/gpu/drm/amd/display/dc/irq/dcn401/irq_service_dcn401.c
++++ b/drivers/gpu/drm/amd/display/dc/irq/dcn401/irq_service_dcn401.c
+@@ -171,6 +171,16 @@ static struct irq_source_info_funcs vline0_irq_info_funcs = {
+ 	.ack = NULL
+ };
+ 
++static struct irq_source_info_funcs vline1_irq_info_funcs = {
++	.set = NULL,
++	.ack = NULL
++};
++
++static struct irq_source_info_funcs vline2_irq_info_funcs = {
++	.set = NULL,
++	.ack = NULL
++};
++
+ #undef BASE_INNER
+ #define BASE_INNER(seg) DCN_BASE__INST0_SEG ## seg
+ 
+@@ -239,6 +249,13 @@ static struct irq_source_info_funcs vline0_irq_info_funcs = {
+ 		.funcs = &pflip_irq_info_funcs\
+ 	}
+ 
++#define vblank_int_entry(reg_num)\
++	[DC_IRQ_SOURCE_VBLANK1 + reg_num] = {\
++		IRQ_REG_ENTRY(OTG, reg_num,\
++			OTG_GLOBAL_SYNC_STATUS, VSTARTUP_INT_EN,\
++			OTG_GLOBAL_SYNC_STATUS, VSTARTUP_EVENT_CLEAR),\
++		.funcs = &vblank_irq_info_funcs\
++	}
+ /* vupdate_no_lock_int_entry maps to DC_IRQ_SOURCE_VUPDATEx, to match semantic
+  * of DCE's DC_IRQ_SOURCE_VUPDATEx.
+  */
+@@ -250,13 +267,6 @@ static struct irq_source_info_funcs vline0_irq_info_funcs = {
+ 		.funcs = &vupdate_no_lock_irq_info_funcs\
+ 	}
+ 
+-#define vblank_int_entry(reg_num)\
+-	[DC_IRQ_SOURCE_VBLANK1 + reg_num] = {\
+-		IRQ_REG_ENTRY(OTG, reg_num,\
+-			OTG_GLOBAL_SYNC_STATUS, VSTARTUP_INT_EN,\
+-			OTG_GLOBAL_SYNC_STATUS, VSTARTUP_EVENT_CLEAR),\
+-		.funcs = &vblank_irq_info_funcs\
+-	}
+ #define vline0_int_entry(reg_num)\
+ 	[DC_IRQ_SOURCE_DC1_VLINE0 + reg_num] = {\
+ 		IRQ_REG_ENTRY(OTG, reg_num,\
+@@ -264,6 +274,20 @@ static struct irq_source_info_funcs vline0_irq_info_funcs = {
+ 			OTG_VERTICAL_INTERRUPT0_CONTROL, OTG_VERTICAL_INTERRUPT0_CLEAR),\
+ 		.funcs = &vline0_irq_info_funcs\
+ 	}
++#define vline1_int_entry(reg_num)\
++	[DC_IRQ_SOURCE_DC1_VLINE1 + reg_num] = {\
++		IRQ_REG_ENTRY(OTG, reg_num,\
++			OTG_VERTICAL_INTERRUPT1_CONTROL, OTG_VERTICAL_INTERRUPT1_INT_ENABLE,\
++			OTG_VERTICAL_INTERRUPT1_CONTROL, OTG_VERTICAL_INTERRUPT1_CLEAR),\
++		.funcs = &vline1_irq_info_funcs\
++	}
++#define vline2_int_entry(reg_num)\
++	[DC_IRQ_SOURCE_DC1_VLINE2 + reg_num] = {\
++		IRQ_REG_ENTRY(OTG, reg_num,\
++			OTG_VERTICAL_INTERRUPT2_CONTROL, OTG_VERTICAL_INTERRUPT2_INT_ENABLE,\
++			OTG_VERTICAL_INTERRUPT2_CONTROL, OTG_VERTICAL_INTERRUPT2_CLEAR),\
++		.funcs = &vline2_irq_info_funcs\
++	}
+ #define dmub_outbox_int_entry()\
+ 	[DC_IRQ_SOURCE_DMCUB_OUTBOX] = {\
+ 		IRQ_REG_ENTRY_DMUB(\
+@@ -364,21 +388,29 @@ irq_source_info_dcn401[DAL_IRQ_SOURCES_NUMBER] = {
+ 	dc_underflow_int_entry(6),
+ 	[DC_IRQ_SOURCE_DMCU_SCP] = dummy_irq_entry(),
+ 	[DC_IRQ_SOURCE_VBIOS_SW] = dummy_irq_entry(),
+-	vupdate_no_lock_int_entry(0),
+-	vupdate_no_lock_int_entry(1),
+-	vupdate_no_lock_int_entry(2),
+-	vupdate_no_lock_int_entry(3),
+ 	vblank_int_entry(0),
+ 	vblank_int_entry(1),
+ 	vblank_int_entry(2),
+ 	vblank_int_entry(3),
++	[DC_IRQ_SOURCE_DC5_VLINE1] = dummy_irq_entry(),
++	[DC_IRQ_SOURCE_DC6_VLINE1] = dummy_irq_entry(),
++	dmub_outbox_int_entry(),
++	vupdate_no_lock_int_entry(0),
++	vupdate_no_lock_int_entry(1),
++	vupdate_no_lock_int_entry(2),
++	vupdate_no_lock_int_entry(3),
+ 	vline0_int_entry(0),
+ 	vline0_int_entry(1),
+ 	vline0_int_entry(2),
+ 	vline0_int_entry(3),
+-	[DC_IRQ_SOURCE_DC5_VLINE1] = dummy_irq_entry(),
+-	[DC_IRQ_SOURCE_DC6_VLINE1] = dummy_irq_entry(),
+-	dmub_outbox_int_entry(),
++	vline1_int_entry(0),
++	vline1_int_entry(1),
++	vline1_int_entry(2),
++	vline1_int_entry(3),
++	vline2_int_entry(0),
++	vline2_int_entry(1),
++	vline2_int_entry(2),
++	vline2_int_entry(3),
+ };
+ 
+ static const struct irq_service_funcs irq_service_funcs_dcn401 = {
+diff --git a/drivers/gpu/drm/amd/display/dc/irq_types.h b/drivers/gpu/drm/amd/display/dc/irq_types.h
+index 110f656d43aee0..eadab0a2afebe7 100644
+--- a/drivers/gpu/drm/amd/display/dc/irq_types.h
++++ b/drivers/gpu/drm/amd/display/dc/irq_types.h
+@@ -161,6 +161,13 @@ enum dc_irq_source {
+ 	DC_IRQ_SOURCE_DPCX_TX_PHYE,
+ 	DC_IRQ_SOURCE_DPCX_TX_PHYF,
+ 
++	DC_IRQ_SOURCE_DC1_VLINE2,
++	DC_IRQ_SOURCE_DC2_VLINE2,
++	DC_IRQ_SOURCE_DC3_VLINE2,
++	DC_IRQ_SOURCE_DC4_VLINE2,
++	DC_IRQ_SOURCE_DC5_VLINE2,
++	DC_IRQ_SOURCE_DC6_VLINE2,
++
+ 	DAL_IRQ_SOURCES_NUMBER
+ };
+ 
+@@ -170,6 +177,8 @@ enum irq_type
+ 	IRQ_TYPE_VUPDATE = DC_IRQ_SOURCE_VUPDATE1,
+ 	IRQ_TYPE_VBLANK = DC_IRQ_SOURCE_VBLANK1,
+ 	IRQ_TYPE_VLINE0 = DC_IRQ_SOURCE_DC1_VLINE0,
++	IRQ_TYPE_VLINE1 = DC_IRQ_SOURCE_DC1_VLINE1,
++	IRQ_TYPE_VLINE2 = DC_IRQ_SOURCE_DC1_VLINE2,
+ 	IRQ_TYPE_DCUNDERFLOW = DC_IRQ_SOURCE_DC1UNDERFLOW,
+ };
+ 
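The table macros in the dcn32/dcn401 hunks work because each interrupt type's sources are declared contiguously, one per pipe, in enum dc_irq_source, so an index like [DC_IRQ_SOURCE_DC1_VLINE2 + reg_num] lands on the entry for OTG instance reg_num. A minimal sketch of the pattern, with simplified names that are not the actual driver types:

	enum irq_source {
		IRQ_SRC_DC1_VLINE2,	/* pipe 0 */
		IRQ_SRC_DC2_VLINE2,	/* pipe 1 */
		IRQ_SRC_DC3_VLINE2,	/* pipe 2 */
		IRQ_SRC_DC4_VLINE2,	/* pipe 3 */
		IRQ_SOURCES_NUMBER
	};

	struct irq_info { int otg_inst; };

	/* Designated initializer keyed off the contiguous enum block. */
	#define vline2_entry(n) [IRQ_SRC_DC1_VLINE2 + (n)] = { .otg_inst = (n) }

	static const struct irq_info table[IRQ_SOURCES_NUMBER] = {
		vline2_entry(0), vline2_entry(1),
		vline2_entry(2), vline2_entry(3),
	};

This is also why the new VLINE2 sources are appended as one six-entry block in irq_types.h rather than interleaved with existing values.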
+diff --git a/drivers/gpu/drm/amd/display/dc/mpc/dcn32/dcn32_mpc.c b/drivers/gpu/drm/amd/display/dc/mpc/dcn32/dcn32_mpc.c
+index a0e9e9f0441a41..b4cea2b8cb2a8a 100644
+--- a/drivers/gpu/drm/amd/display/dc/mpc/dcn32/dcn32_mpc.c
++++ b/drivers/gpu/drm/amd/display/dc/mpc/dcn32/dcn32_mpc.c
+@@ -370,275 +370,279 @@ void mpc32_program_shaper_luta_settings(
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_BASE_B, params->corner_points[1].red.custom_float_y);
+ 
+ 	curve = params->arr_curve_points;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_0_1[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+-
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_2_3[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+-
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_4_5[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+-
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_6_7[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+-
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_8_9[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+-
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_10_11[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+-
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_12_13[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+-
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_14_15[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+-
+-
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_16_17[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+-
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_18_19[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+-
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_20_21[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+-
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_22_23[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+-
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_24_25[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+-
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_26_27[mpcc_id], 0,
++	if (curve) {
++		REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_0_1[mpcc_id], 0,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+ 
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_28_29[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+-
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_30_31[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+-
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_32_33[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+-}
+-
+-
+-void mpc32_program_shaper_lutb_settings(
+-		struct mpc *mpc,
+-		const struct pwl_params *params,
+-		uint32_t mpcc_id)
+-{
+-	const struct gamma_curve *curve;
+-	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
+-
+-	REG_SET_2(MPCC_MCM_SHAPER_RAMB_START_CNTL_B[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_B, params->corner_points[0].blue.custom_float_x,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_SEGMENT_B, 0);
+-	REG_SET_2(MPCC_MCM_SHAPER_RAMB_START_CNTL_G[mpcc_id], 0,
+-			MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_B, params->corner_points[0].green.custom_float_x,
+-			MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_SEGMENT_B, 0);
+-	REG_SET_2(MPCC_MCM_SHAPER_RAMB_START_CNTL_R[mpcc_id], 0,
+-			MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_B, params->corner_points[0].red.custom_float_x,
+-			MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_SEGMENT_B, 0);
+-
+-	REG_SET_2(MPCC_MCM_SHAPER_RAMB_END_CNTL_B[mpcc_id], 0,
+-			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_B, params->corner_points[1].blue.custom_float_x,
+-			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_BASE_B, params->corner_points[1].blue.custom_float_y);
+-	REG_SET_2(MPCC_MCM_SHAPER_RAMB_END_CNTL_G[mpcc_id], 0,
+-			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_B, params->corner_points[1].green.custom_float_x,
+-			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_BASE_B, params->corner_points[1].green.custom_float_y);
+-	REG_SET_2(MPCC_MCM_SHAPER_RAMB_END_CNTL_R[mpcc_id], 0,
+-			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_B, params->corner_points[1].red.custom_float_x,
+-			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_BASE_B, params->corner_points[1].red.custom_float_y);
+-
+-	curve = params->arr_curve_points;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_0_1[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+-
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_2_3[mpcc_id], 0,
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_2_3[mpcc_id], 0,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+ 
+-
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_4_5[mpcc_id], 0,
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_4_5[mpcc_id], 0,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+ 
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_6_7[mpcc_id], 0,
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_6_7[mpcc_id], 0,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+ 
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_8_9[mpcc_id], 0,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+-		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_8_9[mpcc_id], 0,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+ 
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_10_11[mpcc_id], 0,
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_10_11[mpcc_id], 0,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+ 
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_12_13[mpcc_id], 0,
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_12_13[mpcc_id], 0,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+ 
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_14_15[mpcc_id], 0,
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_14_15[mpcc_id], 0,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+ 
+ 
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_16_17[mpcc_id], 0,
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_16_17[mpcc_id], 0,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+ 
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_18_19[mpcc_id], 0,
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_18_19[mpcc_id], 0,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+ 
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_20_21[mpcc_id], 0,
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_20_21[mpcc_id], 0,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+ 
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_22_23[mpcc_id], 0,
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_22_23[mpcc_id], 0,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+ 
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_24_25[mpcc_id], 0,
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_24_25[mpcc_id], 0,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+ 
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_26_27[mpcc_id], 0,
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_26_27[mpcc_id], 0,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_28_29[mpcc_id], 0,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+ 
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_28_29[mpcc_id], 0,
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_30_31[mpcc_id], 0,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+ 
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_30_31[mpcc_id], 0,
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_32_33[mpcc_id], 0,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++	}
++}
++
++
++void mpc32_program_shaper_lutb_settings(
++		struct mpc *mpc,
++		const struct pwl_params *params,
++		uint32_t mpcc_id)
++{
++	const struct gamma_curve *curve;
++	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
++
++	REG_SET_2(MPCC_MCM_SHAPER_RAMB_START_CNTL_B[mpcc_id], 0,
++		MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_B, params->corner_points[0].blue.custom_float_x,
++		MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_SEGMENT_B, 0);
++	REG_SET_2(MPCC_MCM_SHAPER_RAMB_START_CNTL_G[mpcc_id], 0,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_B, params->corner_points[0].green.custom_float_x,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_SEGMENT_B, 0);
++	REG_SET_2(MPCC_MCM_SHAPER_RAMB_START_CNTL_R[mpcc_id], 0,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_B, params->corner_points[0].red.custom_float_x,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_SEGMENT_B, 0);
+ 
+-	curve += 2;
+-	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_32_33[mpcc_id], 0,
++	REG_SET_2(MPCC_MCM_SHAPER_RAMB_END_CNTL_B[mpcc_id], 0,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_B, params->corner_points[1].blue.custom_float_x,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_BASE_B, params->corner_points[1].blue.custom_float_y);
++	REG_SET_2(MPCC_MCM_SHAPER_RAMB_END_CNTL_G[mpcc_id], 0,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_B, params->corner_points[1].green.custom_float_x,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_BASE_B, params->corner_points[1].green.custom_float_y);
++	REG_SET_2(MPCC_MCM_SHAPER_RAMB_END_CNTL_R[mpcc_id], 0,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_B, params->corner_points[1].red.custom_float_x,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_BASE_B, params->corner_points[1].red.custom_float_y);
++
++	curve = params->arr_curve_points;
++	if (curve) {
++		REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_0_1[mpcc_id], 0,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+ 			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_2_3[mpcc_id], 0,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++
++
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_4_5[mpcc_id], 0,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_6_7[mpcc_id], 0,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_8_9[mpcc_id], 0,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_10_11[mpcc_id], 0,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_12_13[mpcc_id], 0,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_14_15[mpcc_id], 0,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++
++
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_16_17[mpcc_id], 0,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_18_19[mpcc_id], 0,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_20_21[mpcc_id], 0,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_22_23[mpcc_id], 0,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_24_25[mpcc_id], 0,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_26_27[mpcc_id], 0,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_28_29[mpcc_id], 0,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_30_31[mpcc_id], 0,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++
++		curve += 2;
++		REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_32_33[mpcc_id], 0,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
++				MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
++	}
+ }
+ 
+ 
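Despite its size, the mpc32 hunk is a single functional change: both shaper LUT programming paths now check params->arr_curve_points for NULL before the long run of REG_SET_4() writes, and everything else is re-indentation of the existing writes into the new block. Reduced to its shape, with program_region() as a hypothetical stand-in for the REG_SET_4(MPCC_MCM_SHAPER_*_REGION_x_y, ...) calls:

	curve = params->arr_curve_points;
	if (curve) {			/* previously dereferenced unconditionally */
		program_region(curve);	/* REGION_0_1 */
		curve += 2;
		program_region(curve);	/* REGION_2_3 */
		/* ... and so on through REGION_32_33 ... */
	}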
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
+index ffd2b816cd02c7..8948d44a7a80e3 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
+@@ -1903,7 +1903,7 @@ static bool dcn35_resource_construct(
+ 	dc->caps.max_disp_clock_khz_at_vmin = 650000;
+ 
+ 	/* Sequential ONO is based on ASIC. */
+-	if (dc->ctx->asic_id.hw_internal_rev > 0x10)
++	if (dc->ctx->asic_id.hw_internal_rev >= 0x40)
+ 		dc->caps.sequential_ono = true;
+ 
+ 	/* Use pipe context based otg sync logic */
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn36/dcn36_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn36/dcn36_resource.c
+index b6468573dc33d7..7f19689e976a17 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn36/dcn36_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn36/dcn36_resource.c
+@@ -1876,7 +1876,7 @@ static bool dcn36_resource_construct(
+ 	dc->caps.max_disp_clock_khz_at_vmin = 650000;
+ 
+ 	/* Sequential ONO is based on ASIC. */
+-	if (dc->ctx->asic_id.hw_internal_rev > 0x10)
++	if (dc->ctx->asic_id.hw_internal_rev >= 0x40)
+ 		dc->caps.sequential_ono = true;
+ 
+ 	/* Use pipe context based otg sync logic */
+diff --git a/drivers/gpu/drm/amd/display/dc/sspl/dc_spl.c b/drivers/gpu/drm/amd/display/dc/sspl/dc_spl.c
+index 28348734d900c1..124aaff890d211 100644
+--- a/drivers/gpu/drm/amd/display/dc/sspl/dc_spl.c
++++ b/drivers/gpu/drm/amd/display/dc/sspl/dc_spl.c
+@@ -1297,7 +1297,7 @@ static void spl_set_easf_data(struct spl_scratch *spl_scratch, struct spl_out *s
+ 	if (enable_easf_v) {
+ 		dscl_prog_data->easf_v_en = true;
+ 		dscl_prog_data->easf_v_ring = 0;
+-		dscl_prog_data->easf_v_sharp_factor = 0;
++		dscl_prog_data->easf_v_sharp_factor = 1;
+ 		dscl_prog_data->easf_v_bf1_en = 1;	// 1-bit, BF1 calculation enable, 0=disable, 1=enable
+ 		dscl_prog_data->easf_v_bf2_mode = 0xF;	// 4-bit, BF2 calculation mode
+ 		/* 2-bit, BF3 chroma mode correction calculation mode */
+@@ -1461,7 +1461,7 @@ static void spl_set_easf_data(struct spl_scratch *spl_scratch, struct spl_out *s
+ 	if (enable_easf_h) {
+ 		dscl_prog_data->easf_h_en = true;
+ 		dscl_prog_data->easf_h_ring = 0;
+-		dscl_prog_data->easf_h_sharp_factor = 0;
++		dscl_prog_data->easf_h_sharp_factor = 1;
+ 		dscl_prog_data->easf_h_bf1_en =
+ 			1;	// 1-bit, BF1 calculation enable, 0=disable, 1=enable
+ 		dscl_prog_data->easf_h_bf2_mode =
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
+index cd03caffe31735..21589c4583e6b3 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
+@@ -310,6 +310,7 @@ int smu_v13_0_get_boot_freq_by_index(struct smu_context *smu,
+ 				     uint32_t *value);
+ 
+ void smu_v13_0_interrupt_work(struct smu_context *smu);
++void smu_v13_0_reset_custom_level(struct smu_context *smu);
+ bool smu_v13_0_12_is_dpm_running(struct smu_context *smu);
+ int smu_v13_0_12_get_max_metrics_size(void);
+ int smu_v13_0_12_setup_driver_pptable(struct smu_context *smu);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
+index 83163d7c7f0014..5cb3b9bb60898f 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
+@@ -1270,6 +1270,7 @@ static int aldebaran_set_performance_level(struct smu_context *smu,
+ 	struct smu_13_0_dpm_table *gfx_table =
+ 		&dpm_context->dpm_tables.gfx_table;
+ 	struct smu_umd_pstate_table *pstate_table = &smu->pstate_table;
++	int r;
+ 
+ 	/* Disable determinism if switching to another mode */
+ 	if ((smu_dpm->dpm_level == AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) &&
+@@ -1282,7 +1283,11 @@ static int aldebaran_set_performance_level(struct smu_context *smu,
+ 
+ 	case AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM:
+ 		return 0;
+-
++	case AMD_DPM_FORCED_LEVEL_AUTO:
++		r = smu_v13_0_set_performance_level(smu, level);
++		if (!r)
++			smu_v13_0_reset_custom_level(smu);
++		return r;
+ 	case AMD_DPM_FORCED_LEVEL_HIGH:
+ 	case AMD_DPM_FORCED_LEVEL_LOW:
+ 	case AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD:
+@@ -1423,7 +1428,11 @@ static int aldebaran_usr_edit_dpm_table(struct smu_context *smu, enum PP_OD_DPM_
+ 			min_clk = dpm_context->dpm_tables.gfx_table.min;
+ 			max_clk = dpm_context->dpm_tables.gfx_table.max;
+ 
+-			return aldebaran_set_soft_freq_limited_range(smu, SMU_GFXCLK, min_clk, max_clk, false);
++			ret = aldebaran_set_soft_freq_limited_range(
++				smu, SMU_GFXCLK, min_clk, max_clk, false);
++			if (ret)
++				return ret;
++			smu_v13_0_reset_custom_level(smu);
+ 		}
+ 		break;
+ 	case PP_OD_COMMIT_DPM_TABLE:
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+index ba5a9012dbd5e3..075f381ad311ba 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+@@ -2595,3 +2595,13 @@ int smu_v13_0_set_wbrf_exclusion_ranges(struct smu_context *smu,
+ 
+ 	return ret;
+ }
++
++void smu_v13_0_reset_custom_level(struct smu_context *smu)
++{
++	struct smu_umd_pstate_table *pstate_table = &smu->pstate_table;
++
++	pstate_table->uclk_pstate.custom.min = 0;
++	pstate_table->uclk_pstate.custom.max = 0;
++	pstate_table->gfxclk_pstate.custom.min = 0;
++	pstate_table->gfxclk_pstate.custom.max = 0;
++}
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
+index c478b3be37af1e..b8feabb019cf80 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
+@@ -1927,7 +1927,7 @@ static int smu_v13_0_6_set_performance_level(struct smu_context *smu,
+ 				return ret;
+ 			pstate_table->uclk_pstate.curr.max = uclk_table->max;
+ 		}
+-		pstate_table->uclk_pstate.custom.max = 0;
++		smu_v13_0_reset_custom_level(smu);
+ 
+ 		return 0;
+ 	case AMD_DPM_FORCED_LEVEL_MANUAL:
+@@ -2140,7 +2140,7 @@ static int smu_v13_0_6_usr_edit_dpm_table(struct smu_context *smu,
+ 				smu, SMU_UCLK, min_clk, max_clk, false);
+ 			if (ret)
+ 				return ret;
+-			pstate_table->uclk_pstate.custom.max = 0;
++			smu_v13_0_reset_custom_level(smu);
+ 		}
+ 		break;
+ 	case PP_OD_COMMIT_DPM_TABLE:
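The new helper generalizes what this code used to do by hand: previously only pstate_table->uclk_pstate.custom.max was zeroed, letting custom GFXCLK limits and the custom uclk minimum leak into a later PP_OD_COMMIT_DPM_TABLE. The calling convention, as both the aldebaran and smu_v13_0_6 hunks use it, is reset-only-on-success:

	ret = aldebaran_set_soft_freq_limited_range(smu, SMU_GFXCLK,
						    min_clk, max_clk, false);
	if (ret)
		return ret;
	/* range restored to defaults: drop stale user-provided limits */
	smu_v13_0_reset_custom_level(smu);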
+diff --git a/drivers/gpu/drm/bridge/Kconfig b/drivers/gpu/drm/bridge/Kconfig
+index 09a1be234f7173..b9e0ca85226a60 100644
+--- a/drivers/gpu/drm/bridge/Kconfig
++++ b/drivers/gpu/drm/bridge/Kconfig
+@@ -16,6 +16,7 @@ config DRM_AUX_BRIDGE
+ 	tristate
+ 	depends on DRM_BRIDGE && OF
+ 	select AUXILIARY_BUS
++	select DRM_KMS_HELPER
+ 	select DRM_PANEL_BRIDGE
+ 	help
+ 	  Simple transparent bridge that is used by several non-DRM drivers to
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+index 5222b1e9f533d0..f96952e9ff4ef3 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+@@ -1622,10 +1622,10 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 		 * that we can get the current state of the GPIO.
+ 		 */
+ 		dp->irq = gpiod_to_irq(dp->hpd_gpiod);
+-		irq_flags = IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING;
++		irq_flags = IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_NO_AUTOEN;
+ 	} else {
+ 		dp->irq = platform_get_irq(pdev, 0);
+-		irq_flags = 0;
++		irq_flags = IRQF_NO_AUTOEN;
+ 	}
+ 
+ 	if (dp->irq == -ENXIO) {
+@@ -1641,7 +1641,6 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 		dev_err(&pdev->dev, "failed to request irq\n");
+ 		return ERR_PTR(ret);
+ 	}
+-	disable_irq(dp->irq);
+ 
+ 	dp->aux.name = "DP-AUX";
+ 	dp->aux.transfer = analogix_dpaux_transfer;
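IRQF_NO_AUTOEN tells the IRQ core to register the handler with the line still masked, which closes the window in the old request-then-disable_irq() sequence where the interrupt could fire before the driver was ready for it. In isolation, with a hypothetical handler and flags:

	/* Racy: the IRQ is live between these two calls. */
	ret = devm_request_threaded_irq(dev, irq, NULL, hpd_handler,
					IRQF_TRIGGER_RISING | IRQF_ONESHOT,
					"dp-hpd", dp);
	disable_irq(irq);

	/* Race-free: the line stays masked until an explicit enable_irq(). */
	ret = devm_request_threaded_irq(dev, irq, NULL, hpd_handler,
					IRQF_TRIGGER_RISING | IRQF_ONESHOT |
					IRQF_NO_AUTOEN, "dp-hpd", dp);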
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index 0b97b66de57742..95d5a4e265788a 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -1257,10 +1257,10 @@ static void anx7625_power_on(struct anx7625_data *ctx)
+ 	usleep_range(11000, 12000);
+ 
+ 	/* Power on pin enable */
+-	gpiod_set_value(ctx->pdata.gpio_p_on, 1);
++	gpiod_set_value_cansleep(ctx->pdata.gpio_p_on, 1);
+ 	usleep_range(10000, 11000);
+ 	/* Power reset pin enable */
+-	gpiod_set_value(ctx->pdata.gpio_reset, 1);
++	gpiod_set_value_cansleep(ctx->pdata.gpio_reset, 1);
+ 	usleep_range(10000, 11000);
+ 
+ 	DRM_DEV_DEBUG_DRIVER(dev, "power on !\n");
+@@ -1280,9 +1280,9 @@ static void anx7625_power_standby(struct anx7625_data *ctx)
+ 		return;
+ 	}
+ 
+-	gpiod_set_value(ctx->pdata.gpio_reset, 0);
++	gpiod_set_value_cansleep(ctx->pdata.gpio_reset, 0);
+ 	usleep_range(1000, 1100);
+-	gpiod_set_value(ctx->pdata.gpio_p_on, 0);
++	gpiod_set_value_cansleep(ctx->pdata.gpio_p_on, 0);
+ 	usleep_range(1000, 1100);
+ 
+ 	ret = regulator_bulk_disable(ARRAY_SIZE(ctx->pdata.supplies),
+@@ -2474,6 +2474,22 @@ static const struct drm_edid *anx7625_bridge_edid_read(struct drm_bridge *bridge
+ 	return anx7625_edid_read(ctx);
+ }
+ 
++static void anx7625_bridge_hpd_enable(struct drm_bridge *bridge)
++{
++	struct anx7625_data *ctx = bridge_to_anx7625(bridge);
++	struct device *dev = ctx->dev;
++
++	pm_runtime_get_sync(dev);
++}
++
++static void anx7625_bridge_hpd_disable(struct drm_bridge *bridge)
++{
++	struct anx7625_data *ctx = bridge_to_anx7625(bridge);
++	struct device *dev = ctx->dev;
++
++	pm_runtime_put_sync(dev);
++}
++
+ static const struct drm_bridge_funcs anx7625_bridge_funcs = {
+ 	.attach = anx7625_bridge_attach,
+ 	.detach = anx7625_bridge_detach,
+@@ -2487,6 +2503,8 @@ static const struct drm_bridge_funcs anx7625_bridge_funcs = {
+ 	.atomic_reset = drm_atomic_helper_bridge_reset,
+ 	.detect = anx7625_bridge_detect,
+ 	.edid_read = anx7625_bridge_edid_read,
++	.hpd_enable = anx7625_bridge_hpd_enable,
++	.hpd_disable = anx7625_bridge_hpd_disable,
+ };
+ 
+ static int anx7625_register_i2c_dummy_clients(struct anx7625_data *ctx,
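Two independent fixes in this file: the _cansleep variants are needed because these GPIOs may sit behind a controller that can sleep (an I2C expander, for instance), where plain gpiod_set_value() is not allowed; and the new hpd_enable/hpd_disable hooks keep the bridge runtime-resumed for as long as the DRM core expects hot-plug events, using the usual get/put pairing shown above. The GPIO rule in two lines:

	gpiod_set_value(desc, 1);		/* atomic context only */
	gpiod_set_value_cansleep(desc, 1);	/* process context, chip may sleep */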
+diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
+index dbce1c3f49691f..753d7c3942a149 100644
+--- a/drivers/gpu/drm/display/drm_dp_helper.c
++++ b/drivers/gpu/drm/display/drm_dp_helper.c
+@@ -2081,14 +2081,17 @@ static int drm_dp_i2c_xfer(struct i2c_adapter *adapter, struct i2c_msg *msgs,
+ 
+ 	for (i = 0; i < num; i++) {
+ 		msg.address = msgs[i].addr;
+-		drm_dp_i2c_msg_set_request(&msg, &msgs[i]);
+-		/* Send a bare address packet to start the transaction.
+-		 * Zero sized messages specify an address only (bare
+-		 * address) transaction.
+-		 */
+-		msg.buffer = NULL;
+-		msg.size = 0;
+-		err = drm_dp_i2c_do_msg(aux, &msg);
++
++		if (!aux->no_zero_sized) {
++			drm_dp_i2c_msg_set_request(&msg, &msgs[i]);
++			/* Send a bare address packet to start the transaction.
++			 * Zero sized messages specify an address only (bare
++			 * address) transaction.
++			 */
++			msg.buffer = NULL;
++			msg.size = 0;
++			err = drm_dp_i2c_do_msg(aux, &msg);
++		}
+ 
+ 		/*
+ 		 * Reset msg.request in case in case it got
+@@ -2107,6 +2110,8 @@ static int drm_dp_i2c_xfer(struct i2c_adapter *adapter, struct i2c_msg *msgs,
+ 			msg.buffer = msgs[i].buf + j;
+ 			msg.size = min(transfer_size, msgs[i].len - j);
+ 
++			if (j + msg.size == msgs[i].len && aux->no_zero_sized)
++				msg.request &= ~DP_AUX_I2C_MOT;
+ 			err = drm_dp_i2c_drain_msg(aux, &msg);
+ 
+ 			/*
+@@ -2124,15 +2129,17 @@ static int drm_dp_i2c_xfer(struct i2c_adapter *adapter, struct i2c_msg *msgs,
+ 	}
+ 	if (err >= 0)
+ 		err = num;
+-	/* Send a bare address packet to close out the transaction.
+-	 * Zero sized messages specify an address only (bare
+-	 * address) transaction.
+-	 */
+-	msg.request &= ~DP_AUX_I2C_MOT;
+-	msg.buffer = NULL;
+-	msg.size = 0;
+-	(void)drm_dp_i2c_do_msg(aux, &msg);
+ 
++	if (!aux->no_zero_sized) {
++		/* Send a bare address packet to close out the transaction.
++		 * Zero sized messages specify an address only (bare
++		 * address) transaction.
++		 */
++		msg.request &= ~DP_AUX_I2C_MOT;
++		msg.buffer = NULL;
++		msg.size = 0;
++		(void)drm_dp_i2c_do_msg(aux, &msg);
++	}
+ 	return err;
+ }
+ 
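Background for this hunk: an I2C-over-AUX transaction is normally bracketed by zero-sized "bare address" packets, one to open it and one with DP_AUX_I2C_MOT (middle-of-transaction) cleared to close it. An AUX controller flagged aux->no_zero_sized cannot emit zero-length transfers, so both brackets are skipped and the stop condition is instead folded into the data itself by clearing MOT on the final chunk of a message, which is what the j + msg.size == msgs[i].len test detects:

	/* Inside the per-chunk loop (condensed from the hunk above). */
	if (j + msg.size == msgs[i].len && aux->no_zero_sized)
		msg.request &= ~DP_AUX_I2C_MOT;	/* last chunk doubles as STOP */
	err = drm_dp_i2c_drain_msg(aux, &msg);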
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index c554ad8f246b65..7ac0fd5391feaf 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -517,6 +517,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "LTH17"),
+ 		},
+ 		.driver_data = (void *)&lcd800x1280_rightside_up,
++	}, {	/* ZOTAC Gaming Zone */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ZOTAC"),
++		  DMI_EXACT_MATCH(DMI_BOARD_NAME, "G0A1W"),
++		},
++		.driver_data = (void *)&lcd1080x1920_leftside_up,
+ 	}, {	/* One Mix 2S (generic strings, also match on bios date) */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Default string"),
+diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
+index e5a188ce318578..990bfaba3ce4e8 100644
+--- a/drivers/gpu/drm/i915/i915_pmu.c
++++ b/drivers/gpu/drm/i915/i915_pmu.c
+@@ -112,7 +112,7 @@ static u32 config_mask(const u64 config)
+ {
+ 	unsigned int bit = config_bit(config);
+ 
+-	if (__builtin_constant_p(config))
++	if (__builtin_constant_p(bit))
+ 		BUILD_BUG_ON(bit >
+ 			     BITS_PER_TYPE(typeof_member(struct i915_pmu,
+ 							 enable)) - 1);
+@@ -121,7 +121,7 @@ static u32 config_mask(const u64 config)
+ 			     BITS_PER_TYPE(typeof_member(struct i915_pmu,
+ 							 enable)) - 1);
+ 
+-	return BIT(config_bit(config));
++	return BIT(bit);
+ }
+ 
+ static bool is_engine_event(struct perf_event *event)
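The i915 fix is subtle: config_bit() is not guaranteed to fold to a constant just because config is one, so gating the BUILD_BUG_ON on __builtin_constant_p(config) could attempt a compile-time assertion on a runtime value. Testing bit directly makes the branch self-consistent, and returning BIT(bit) also drops a second config_bit() evaluation. The resulting shape, with MAX_ENABLE_BIT standing in for the BITS_PER_TYPE(...) - 1 bound in the hunk:

	static u32 config_mask(const u64 config)
	{
		unsigned int bit = config_bit(config);

		if (__builtin_constant_p(bit))
			BUILD_BUG_ON(bit > MAX_ENABLE_BIT);	/* compile time */
		else
			WARN_ON_ONCE(bit > MAX_ENABLE_BIT);	/* run time */

		return BIT(bit);	/* config_bit() evaluated once */
	}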
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 90991ba5a4ae10..742132feb19cc9 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -130,6 +130,20 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
+ 		OUT_RING(ring, lower_32_bits(rbmemptr(ring, fence)));
+ 		OUT_RING(ring, upper_32_bits(rbmemptr(ring, fence)));
+ 		OUT_RING(ring, submit->seqno - 1);
++
++		OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
++		OUT_RING(ring, CP_SET_THREAD_BOTH);
++
++		/* Reset state used to synchronize BR and BV */
++		OUT_PKT7(ring, CP_RESET_CONTEXT_STATE, 1);
++		OUT_RING(ring,
++			 CP_RESET_CONTEXT_STATE_0_CLEAR_ON_CHIP_TS |
++			 CP_RESET_CONTEXT_STATE_0_CLEAR_RESOURCE_TABLE |
++			 CP_RESET_CONTEXT_STATE_0_CLEAR_BV_BR_COUNTER |
++			 CP_RESET_CONTEXT_STATE_0_RESET_GLOBAL_LOCAL_TS);
++
++		OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
++		OUT_RING(ring, CP_SET_THREAD_BR);
+ 	}
+ 
+ 	if (!sysprof) {
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_hfi.c b/drivers/gpu/drm/msm/adreno/a6xx_hfi.c
+index 0989aee3dd2cf9..628c19789e9d37 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_hfi.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_hfi.c
+@@ -109,7 +109,7 @@ static int a6xx_hfi_wait_for_ack(struct a6xx_gmu *gmu, u32 id, u32 seqnum,
+ 
+ 	/* Wait for a response */
+ 	ret = gmu_poll_timeout(gmu, REG_A6XX_GMU_GMU2HOST_INTR_INFO, val,
+-		val & A6XX_GMU_GMU2HOST_INTR_INFO_MSGQ, 100, 5000);
++		val & A6XX_GMU_GMU2HOST_INTR_INFO_MSGQ, 100, 1000000);
+ 
+ 	if (ret) {
+ 		DRM_DEV_ERROR(gmu->dev,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
+index abd6600046cb3a..c3c7a0d56c4103 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
+@@ -94,17 +94,21 @@ static void drm_mode_to_intf_timing_params(
+ 		timing->vsync_polarity = 0;
+ 	}
+ 
+-	/* for DP/EDP, Shift timings to align it to bottom right */
+-	if (phys_enc->hw_intf->cap->type == INTF_DP) {
++	timing->wide_bus_en = dpu_encoder_is_widebus_enabled(phys_enc->parent);
++	timing->compression_en = dpu_encoder_is_dsc_enabled(phys_enc->parent);
++
++	/*
++	 *  For DP/EDP, Shift timings to align it to bottom right.
++	 *  wide_bus_en is set for everything excluding SDM845 &
++	 *  porch changes cause DisplayPort failure and HDMI tearing.
++	 */
++	if (phys_enc->hw_intf->cap->type == INTF_DP && timing->wide_bus_en) {
+ 		timing->h_back_porch += timing->h_front_porch;
+ 		timing->h_front_porch = 0;
+ 		timing->v_back_porch += timing->v_front_porch;
+ 		timing->v_front_porch = 0;
+ 	}
+ 
+-	timing->wide_bus_en = dpu_encoder_is_widebus_enabled(phys_enc->parent);
+-	timing->compression_en = dpu_encoder_is_dsc_enabled(phys_enc->parent);
+-
+ 	/*
+ 	 * for DP, divide the horizonal parameters by 2 when
+ 	 * widebus is enabled
+@@ -372,7 +376,8 @@ static void dpu_encoder_phys_vid_underrun_irq(void *arg)
+ static bool dpu_encoder_phys_vid_needs_single_flush(
+ 		struct dpu_encoder_phys *phys_enc)
+ {
+-	return phys_enc->split_role != ENC_ROLE_SOLO;
++	return !(phys_enc->hw_ctl->caps->features & BIT(DPU_CTL_ACTIVE_CFG)) &&
++		phys_enc->split_role != ENC_ROLE_SOLO;
+ }
+ 
+ static void dpu_encoder_phys_vid_atomic_mode_set(
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index ab8c1f19dcb42d..c7503a7a6123f5 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -127,6 +127,11 @@ static const struct msm_dp_desc msm_dp_desc_sa8775p[] = {
+ 	{}
+ };
+ 
++static const struct msm_dp_desc msm_dp_desc_sdm845[] = {
++	{ .io_start = 0x0ae90000, .id = MSM_DP_CONTROLLER_0 },
++	{}
++};
++
+ static const struct msm_dp_desc msm_dp_desc_sc7180[] = {
+ 	{ .io_start = 0x0ae90000, .id = MSM_DP_CONTROLLER_0, .wide_bus_supported = true },
+ 	{}
+@@ -179,7 +184,7 @@ static const struct of_device_id msm_dp_dt_match[] = {
+ 	{ .compatible = "qcom,sc8180x-edp", .data = &msm_dp_desc_sc8180x },
+ 	{ .compatible = "qcom,sc8280xp-dp", .data = &msm_dp_desc_sc8280xp },
+ 	{ .compatible = "qcom,sc8280xp-edp", .data = &msm_dp_desc_sc8280xp },
+-	{ .compatible = "qcom,sdm845-dp", .data = &msm_dp_desc_sc7180 },
++	{ .compatible = "qcom,sdm845-dp", .data = &msm_dp_desc_sdm845 },
+ 	{ .compatible = "qcom,sm8350-dp", .data = &msm_dp_desc_sc7180 },
+ 	{ .compatible = "qcom,sm8650-dp", .data = &msm_dp_desc_sm8650 },
+ 	{ .compatible = "qcom,x1e80100-dp", .data = &msm_dp_desc_x1e80100 },
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+index 9812b4d6919792..af2e30f3f842a0 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+@@ -704,6 +704,13 @@ static int dsi_pll_10nm_init(struct msm_dsi_phy *phy)
+ 	/* TODO: Remove this when we have proper display handover support */
+ 	msm_dsi_phy_pll_save_state(phy);
+ 
++	/*
++	 * Store also proper vco_current_rate, because its value will be used in
++	 * dsi_10nm_pll_restore_state().
++	 */
++	if (!dsi_pll_10nm_vco_recalc_rate(&pll_10nm->clk_hw, VCO_REF_CLK_RATE))
++		pll_10nm->vco_current_rate = pll_10nm->phy->cfg->min_pll_rate;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi_i2c.c b/drivers/gpu/drm/msm/hdmi/hdmi_i2c.c
+index 7aa500d24240ff..ebefea4fb40855 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi_i2c.c
++++ b/drivers/gpu/drm/msm/hdmi/hdmi_i2c.c
+@@ -107,11 +107,15 @@ static int msm_hdmi_i2c_xfer(struct i2c_adapter *i2c,
+ 	if (num == 0)
+ 		return num;
+ 
++	ret = pm_runtime_resume_and_get(&hdmi->pdev->dev);
++	if (ret)
++		return ret;
++
+ 	init_ddc(hdmi_i2c);
+ 
+ 	ret = ddc_clear_irq(hdmi_i2c);
+ 	if (ret)
+-		return ret;
++		goto fail;
+ 
+ 	for (i = 0; i < num; i++) {
+ 		struct i2c_msg *p = &msgs[i];
+@@ -169,7 +173,7 @@ static int msm_hdmi_i2c_xfer(struct i2c_adapter *i2c,
+ 				hdmi_read(hdmi, REG_HDMI_DDC_SW_STATUS),
+ 				hdmi_read(hdmi, REG_HDMI_DDC_HW_STATUS),
+ 				hdmi_read(hdmi, REG_HDMI_DDC_INT_CTRL));
+-		return ret;
++		goto fail;
+ 	}
+ 
+ 	ddc_status = hdmi_read(hdmi, REG_HDMI_DDC_SW_STATUS);
+@@ -202,7 +206,13 @@ static int msm_hdmi_i2c_xfer(struct i2c_adapter *i2c,
+ 		}
+ 	}
+ 
++	pm_runtime_put(&hdmi->pdev->dev);
++
+ 	return i;
++
++fail:
++	pm_runtime_put(&hdmi->pdev->dev);
++	return ret;
+ }
+ 
+ static u32 msm_hdmi_i2c_func(struct i2c_adapter *adapter)
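The pm_runtime pattern here is worth noting: pm_runtime_resume_and_get() returns 0 on success and already drops the usage count on failure, so the early return needs no cleanup, while every later error exit must funnel through one fail: label that does the put. Condensed, with do_ddc_transfer() as a stand-in for the DDC work above:

	ret = pm_runtime_resume_and_get(dev);
	if (ret)
		return ret;		/* helper dropped the ref itself */

	ret = do_ddc_transfer();
	if (ret)
		goto fail;

	pm_runtime_put(dev);
	return num;			/* messages transferred */

	fail:
		pm_runtime_put(dev);
		return ret;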
+diff --git a/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml b/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
+index 5a6ae9fc319451..46271340162280 100644
+--- a/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
++++ b/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
+@@ -2255,7 +2255,8 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 	<reg32 offset="0" name="0">
+ 		<bitfield name="CLEAR_ON_CHIP_TS" pos="0" type="boolean"/>
+ 		<bitfield name="CLEAR_RESOURCE_TABLE" pos="1" type="boolean"/>
+-		<bitfield name="CLEAR_GLOBAL_LOCAL_TS" pos="2" type="boolean"/>
++		<bitfield name="CLEAR_BV_BR_COUNTER" pos="2" type="boolean"/>
++		<bitfield name="RESET_GLOBAL_LOCAL_TS" pos="3" type="boolean"/>
+ 	</reg32>
+ </domain>
+ 
+diff --git a/drivers/gpu/drm/nouveau/Kbuild b/drivers/gpu/drm/nouveau/Kbuild
+index 7b863355c5c676..0759ba15954be8 100644
+--- a/drivers/gpu/drm/nouveau/Kbuild
++++ b/drivers/gpu/drm/nouveau/Kbuild
+@@ -2,6 +2,7 @@
+ ccflags-y += -I $(src)/include
+ ccflags-y += -I $(src)/include/nvkm
+ ccflags-y += -I $(src)/nvkm
++ccflags-y += -I $(src)/nvkm/subdev/gsp
+ ccflags-y += -I $(src)
+ 
+ # NVKM - HW resource manager
+diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h
+index 746e126c3ecf56..b543c31d3d3209 100644
+--- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h
++++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h
+@@ -31,6 +31,29 @@ typedef int (*nvkm_gsp_msg_ntfy_func)(void *priv, u32 fn, void *repv, u32 repc);
+ struct nvkm_gsp_event;
+ typedef void (*nvkm_gsp_event_func)(struct nvkm_gsp_event *, void *repv, u32 repc);
+ 
++/**
++ * DOC: GSP message handling policy
++ *
++ * When sending a GSP RPC command, there can be multiple cases of handling
++ * the GSP RPC messages, which are the reply of GSP RPC commands, according
++ * to the requirement of the callers and the nature of the GSP RPC commands.
++ *
++ * NVKM_GSP_RPC_REPLY_NOWAIT - If specified, immediately return to the
++ * caller after the GSP RPC command is issued.
++ *
++ * NVKM_GSP_RPC_REPLY_RECV - If specified, wait and receive the entire GSP
++ * RPC message after the GSP RPC command is issued.
++ *
++ * NVKM_GSP_RPC_REPLY_POLL - If specified, wait for the specific reply and
++ * discard the reply before returning to the caller.
++ *
++ */
++enum nvkm_gsp_rpc_reply_policy {
++	NVKM_GSP_RPC_REPLY_NOWAIT = 0,
++	NVKM_GSP_RPC_REPLY_RECV,
++	NVKM_GSP_RPC_REPLY_POLL,
++};
++
+ struct nvkm_gsp {
+ 	const struct nvkm_gsp_func *func;
+ 	struct nvkm_subdev subdev;
+@@ -187,9 +210,7 @@ struct nvkm_gsp {
+ 	} gr;
+ 
+ 	const struct nvkm_gsp_rm {
+-		void *(*rpc_get)(struct nvkm_gsp *, u32 fn, u32 argc);
+-		void *(*rpc_push)(struct nvkm_gsp *, void *argv, bool wait, u32 repc);
+-		void (*rpc_done)(struct nvkm_gsp *gsp, void *repv);
++		const struct nvkm_rm_api *api;
+ 
+ 		void *(*rm_ctrl_get)(struct nvkm_gsp_object *, u32 cmd, u32 argc);
+ 		int (*rm_ctrl_push)(struct nvkm_gsp_object *, void **argv, u32 repc);
+@@ -248,16 +269,19 @@ nvkm_gsp_rm(struct nvkm_gsp *gsp)
+ 	return gsp && (gsp->fws.rm || gsp->fw.img);
+ }
+ 
++#include <rm/rm.h>
++
+ static inline void *
+ nvkm_gsp_rpc_get(struct nvkm_gsp *gsp, u32 fn, u32 argc)
+ {
+-	return gsp->rm->rpc_get(gsp, fn, argc);
++	return gsp->rm->api->rpc->get(gsp, fn, argc);
+ }
+ 
+ static inline void *
+-nvkm_gsp_rpc_push(struct nvkm_gsp *gsp, void *argv, bool wait, u32 repc)
++nvkm_gsp_rpc_push(struct nvkm_gsp *gsp, void *argv,
++		  enum nvkm_gsp_rpc_reply_policy policy, u32 repc)
+ {
+-	return gsp->rm->rpc_push(gsp, argv, wait, repc);
++	return gsp->rm->api->rpc->push(gsp, argv, policy, repc);
+ }
+ 
+ static inline void *
+@@ -268,13 +292,14 @@ nvkm_gsp_rpc_rd(struct nvkm_gsp *gsp, u32 fn, u32 argc)
+ 	if (IS_ERR_OR_NULL(argv))
+ 		return argv;
+ 
+-	return nvkm_gsp_rpc_push(gsp, argv, true, argc);
++	return nvkm_gsp_rpc_push(gsp, argv, NVKM_GSP_RPC_REPLY_RECV, argc);
+ }
+ 
+ static inline int
+-nvkm_gsp_rpc_wr(struct nvkm_gsp *gsp, void *argv, bool wait)
++nvkm_gsp_rpc_wr(struct nvkm_gsp *gsp, void *argv,
++		enum nvkm_gsp_rpc_reply_policy policy)
+ {
+-	void *repv = nvkm_gsp_rpc_push(gsp, argv, wait, 0);
++	void *repv = nvkm_gsp_rpc_push(gsp, argv, policy, 0);
+ 
+ 	if (IS_ERR(repv))
+ 		return PTR_ERR(repv);
+@@ -285,7 +310,7 @@ nvkm_gsp_rpc_wr(struct nvkm_gsp *gsp, void *argv, bool wait)
+ static inline void
+ nvkm_gsp_rpc_done(struct nvkm_gsp *gsp, void *repv)
+ {
+-	gsp->rm->rpc_done(gsp, repv);
++	gsp->rm->api->rpc->done(gsp, repv);
+ }
+ 
+ static inline void *
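The API change replaces the old boolean wait with a three-way policy: existing waiting callers map mechanically to NVKM_GSP_RPC_REPLY_RECV (see the bar/r535.c hunk below), NOWAIT returns immediately after the command is issued, and POLL waits for completion but discards the reply rather than handing it back. A hypothetical fire-and-forget caller under the new API (fn and argc are the caller's RPC function and argument size):

	void *rpc = nvkm_gsp_rpc_get(gsp, fn, argc);

	if (IS_ERR_OR_NULL(rpc))
		return rpc ? PTR_ERR(rpc) : -EINVAL;
	/* ... fill in the command payload ... */
	return nvkm_gsp_rpc_wr(gsp, rpc, NVKM_GSP_RPC_REPLY_NOWAIT);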
+diff --git a/drivers/gpu/drm/nouveau/nouveau_backlight.c b/drivers/gpu/drm/nouveau/nouveau_backlight.c
+index d47442125fa183..9aae26eb7d8fba 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_backlight.c
++++ b/drivers/gpu/drm/nouveau/nouveau_backlight.c
+@@ -42,7 +42,7 @@
+ #include "nouveau_acpi.h"
+ 
+ static struct ida bl_ida;
+-#define BL_NAME_SIZE 15 // 12 for name + 2 for digits + 1 for '\0'
++#define BL_NAME_SIZE 24 // 12 for name + 11 for digits + 1 for '\0'
+ 
+ static bool
+ nouveau_get_backlight_name(char backlight_name[BL_NAME_SIZE],
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index e154d08857c55b..c69139701056d7 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -1079,6 +1079,10 @@ nouveau_pmops_freeze(struct device *dev)
+ {
+ 	struct nouveau_drm *drm = dev_get_drvdata(dev);
+ 
++	if (drm->dev->switch_power_state == DRM_SWITCH_POWER_OFF ||
++	    drm->dev->switch_power_state == DRM_SWITCH_POWER_DYNAMIC_OFF)
++		return 0;
++
+ 	return nouveau_do_suspend(drm, false);
+ }
+ 
+@@ -1087,6 +1091,10 @@ nouveau_pmops_thaw(struct device *dev)
+ {
+ 	struct nouveau_drm *drm = dev_get_drvdata(dev);
+ 
++	if (drm->dev->switch_power_state == DRM_SWITCH_POWER_OFF ||
++	    drm->dev->switch_power_state == DRM_SWITCH_POWER_DYNAMIC_OFF)
++		return 0;
++
+ 	return nouveau_do_resume(drm, false);
+ }
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bar/r535.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bar/r535.c
+index 3a30bea30e366f..90186f98065c28 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bar/r535.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bar/r535.c
+@@ -56,7 +56,7 @@ r535_bar_bar2_update_pde(struct nvkm_gsp *gsp, u64 addr)
+ 	rpc->info.entryValue = addr ? ((addr >> 4) | 2) : 0; /* PD3 entry format! */
+ 	rpc->info.entryLevelShift = 47; //XXX: probably fetch this from mmu!
+ 
+-	return nvkm_gsp_rpc_wr(gsp, rpc, true);
++	return nvkm_gsp_rpc_wr(gsp, rpc, NVKM_GSP_RPC_REPLY_RECV);
+ }
+ 
+ static void
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/Kbuild b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/Kbuild
+index 16bf2f1bb78014..af6e55603763d6 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/Kbuild
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/Kbuild
+@@ -10,3 +10,5 @@ nvkm-y += nvkm/subdev/gsp/ga102.o
+ nvkm-y += nvkm/subdev/gsp/ad102.o
+ 
+ nvkm-y += nvkm/subdev/gsp/r535.o
++
++include $(src)/nvkm/subdev/gsp/rm/Kbuild
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+index db2602e880062b..53a4af00103926 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+@@ -19,6 +19,7 @@
+  * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+  * OTHER DEALINGS IN THE SOFTWARE.
+  */
++#include <rm/rpc.h>
+ #include "priv.h"
+ 
+ #include <core/pci.h>
+@@ -60,562 +61,6 @@
+ 
+ extern struct dentry *nouveau_debugfs_root;
+ 
+-#define GSP_MSG_MIN_SIZE GSP_PAGE_SIZE
+-#define GSP_MSG_MAX_SIZE (GSP_MSG_MIN_SIZE * 16)
+-
+-/**
+- * DOC: GSP message queue element
+- *
+- * https://github.com/NVIDIA/open-gpu-kernel-modules/blob/535/src/nvidia/inc/kernel/gpu/gsp/message_queue_priv.h
+- *
+- * The GSP command queue and status queue are message queues for the
+- * communication between software and GSP. The software submits the GSP
+- * RPC via the GSP command queue, GSP writes the status of the submitted
+- * RPC in the status queue.
+- *
+- * A GSP message queue element consists of three parts:
+- *
+- * - message element header (struct r535_gsp_msg), which mostly maintains
+- *   the metadata for queuing the element.
+- *
+- * - RPC message header (struct nvfw_gsp_rpc), which maintains the info
+- *   of the RPC. E.g., the RPC function number.
+- *
+- * - The payload, where the RPC message stays. E.g. the params of a
+- *   specific RPC function. Some RPC functions also have their headers
+- *   in the payload. E.g. rm_alloc, rm_control.
+- *
+- * The memory layout of a GSP message element can be illustrated below::
+- *
+- *    +------------------------+
+- *    | Message Element Header |
+- *    |    (r535_gsp_msg)      |
+- *    |                        |
+- *    | (r535_gsp_msg.data)    |
+- *    |          |             |
+- *    |----------V-------------|
+- *    |    GSP RPC Header      |
+- *    |    (nvfw_gsp_rpc)      |
+- *    |                        |
+- *    | (nvfw_gsp_rpc.data)    |
+- *    |          |             |
+- *    |----------V-------------|
+- *    |       Payload          |
+- *    |                        |
+- *    |   header(optional)     |
+- *    |        params          |
+- *    +------------------------+
+- *
+- * The max size of a message queue element is 16 pages (including the
+- * headers). When a GSP message to be sent is larger than 16 pages, the
+- * message should be split into multiple elements and sent accordingly.
+- *
+- * In the bunch of the split elements, the first element has the expected
+- * function number, while the rest of the elements are sent with the
+- * function number NV_VGPU_MSG_FUNCTION_CONTINUATION_RECORD.
+- *
+- * GSP consumes the elements from the cmdq and always writes the result
+- * back to the msgq. The result is also formed as split elements.
+- *
+- * Terminology:
+- *
+- * - gsp_msg(msg): GSP message element (element header + GSP RPC header +
+- *   payload)
+- * - gsp_rpc(rpc): GSP RPC (RPC header + payload)
+- * - gsp_rpc_buf: buffer for (GSP RPC header + payload)
+- * - gsp_rpc_len: size of (GSP RPC header + payload)
+- * - params_size: size of params in the payload
+- * - payload_size: size of (header if exists + params) in the payload
+- */
+-
+-struct r535_gsp_msg {
+-	u8 auth_tag_buffer[16];
+-	u8 aad_buffer[16];
+-	u32 checksum;
+-	u32 sequence;
+-	u32 elem_count;
+-	u32 pad;
+-	u8  data[];
+-};
+-
+-struct nvfw_gsp_rpc {
+-	u32 header_version;
+-	u32 signature;
+-	u32 length;
+-	u32 function;
+-	u32 rpc_result;
+-	u32 rpc_result_private;
+-	u32 sequence;
+-	union {
+-		u32 spare;
+-		u32 cpuRmGfid;
+-	};
+-	u8  data[];
+-};
+-
+-#define GSP_MSG_HDR_SIZE offsetof(struct r535_gsp_msg, data)
+-
+-#define to_gsp_hdr(p, header) \
+-	container_of((void *)p, typeof(*header), data)
+-
+-#define to_payload_hdr(p, header) \
+-	container_of((void *)p, typeof(*header), params)
+-
+-static int
+-r535_rpc_status_to_errno(uint32_t rpc_status)
+-{
+-	switch (rpc_status) {
+-	case 0x55: /* NV_ERR_NOT_READY */
+-	case 0x66: /* NV_ERR_TIMEOUT_RETRY */
+-		return -EBUSY;
+-	case 0x51: /* NV_ERR_NO_MEMORY */
+-		return -ENOMEM;
+-	default:
+-		return -EINVAL;
+-	}
+-}
+-
+-static int
+-r535_gsp_msgq_wait(struct nvkm_gsp *gsp, u32 gsp_rpc_len, int *ptime)
+-{
+-	u32 size, rptr = *gsp->msgq.rptr;
+-	int used;
+-
+-	size = DIV_ROUND_UP(GSP_MSG_HDR_SIZE + gsp_rpc_len,
+-			    GSP_PAGE_SIZE);
+-	if (WARN_ON(!size || size >= gsp->msgq.cnt))
+-		return -EINVAL;
+-
+-	do {
+-		u32 wptr = *gsp->msgq.wptr;
+-
+-		used = wptr + gsp->msgq.cnt - rptr;
+-		if (used >= gsp->msgq.cnt)
+-			used -= gsp->msgq.cnt;
+-		if (used >= size)
+-			break;
+-
+-		usleep_range(1, 2);
+-	} while (--(*ptime));
+-
+-	if (WARN_ON(!*ptime))
+-		return -ETIMEDOUT;
+-
+-	return used;
+-}
+-
+-static struct r535_gsp_msg *
+-r535_gsp_msgq_get_entry(struct nvkm_gsp *gsp)
+-{
+-	u32 rptr = *gsp->msgq.rptr;
+-
+-	/* Skip the first page, which is the message queue info */
+-	return (void *)((u8 *)gsp->shm.msgq.ptr + GSP_PAGE_SIZE +
+-	       rptr * GSP_PAGE_SIZE);
+-}
+-
+-/**
+- * DOC: Receive a GSP message queue element
+- *
+- * Receiving a GSP message queue element from the message queue consists of
+- * the following steps:
+- *
+- * - Peek the element from the queue: r535_gsp_msgq_peek().
+- *   Peek the first page of the element to determine the total size of the
+- *   message before allocating the proper memory.
+- *
+- * - Allocate memory for the message.
+- *   Once the total size of the message is determined from the GSP message
+- *   queue element, the caller of r535_gsp_msgq_recv() allocates the
+- *   required memory.
+- *
+- * - Receive the message: r535_gsp_msgq_recv().
+- *   Copy the message into the allocated memory. Advance the read pointer.
+- *   If the message is a large GSP message, r535_gsp_msgq_recv() calls
+- *   r535_gsp_msgq_recv_one_elem() repeatedly to receive continuation parts
+- *   until the complete message is received.
+- *   r535_gsp_msgq_recv() assembles the payloads of the continuation parts
+- *   into the returned large GSP message.
+- *
+- * - Free the allocated memory: r535_gsp_msg_done().
+- *   The user is responsible for freeing the memory allocated for the GSP
+- *   message pages after they have been processed.
+- */
+-static void *
+-r535_gsp_msgq_peek(struct nvkm_gsp *gsp, u32 gsp_rpc_len, int *retries)
+-{
+-	struct r535_gsp_msg *mqe;
+-	int ret;
+-
+-	ret = r535_gsp_msgq_wait(gsp, gsp_rpc_len, retries);
+-	if (ret < 0)
+-		return ERR_PTR(ret);
+-
+-	mqe = r535_gsp_msgq_get_entry(gsp);
+-
+-	return mqe->data;
+-}
+-
+-struct r535_gsp_msg_info {
+-	int *retries;
+-	u32 gsp_rpc_len;
+-	void *gsp_rpc_buf;
+-	bool continuation;
+-};
+-
+-static void
+-r535_gsp_msg_dump(struct nvkm_gsp *gsp, struct nvfw_gsp_rpc *msg, int lvl);
+-
+-static void *
+-r535_gsp_msgq_recv_one_elem(struct nvkm_gsp *gsp,
+-			    struct r535_gsp_msg_info *info)
+-{
+-	u8 *buf = info->gsp_rpc_buf;
+-	u32 rptr = *gsp->msgq.rptr;
+-	struct r535_gsp_msg *mqe;
+-	u32 size, expected, len;
+-	int ret;
+-
+-	expected = info->gsp_rpc_len;
+-
+-	ret = r535_gsp_msgq_wait(gsp, expected, info->retries);
+-	if (ret < 0)
+-		return ERR_PTR(ret);
+-
+-	mqe = r535_gsp_msgq_get_entry(gsp);
+-
+-	if (info->continuation) {
+-		struct nvfw_gsp_rpc *rpc = (struct nvfw_gsp_rpc *)mqe->data;
+-
+-		if (rpc->function != NV_VGPU_MSG_FUNCTION_CONTINUATION_RECORD) {
+-			nvkm_error(&gsp->subdev,
+-				   "Not a continuation of a large RPC\n");
+-			r535_gsp_msg_dump(gsp, rpc, NV_DBG_ERROR);
+-			return ERR_PTR(-EIO);
+-		}
+-	}
+-
+-	size = ALIGN(expected + GSP_MSG_HDR_SIZE, GSP_PAGE_SIZE);
+-
+-	len = ((gsp->msgq.cnt - rptr) * GSP_PAGE_SIZE) - sizeof(*mqe);
+-	len = min_t(u32, expected, len);
+-
+-	if (info->continuation)
+-		memcpy(buf, mqe->data + sizeof(struct nvfw_gsp_rpc),
+-		       len - sizeof(struct nvfw_gsp_rpc));
+-	else
+-		memcpy(buf, mqe->data, len);
+-
+-	expected -= len;
+-
+-	if (expected) {
+-		mqe = (void *)((u8 *)gsp->shm.msgq.ptr + 0x1000 + 0 * 0x1000);
+-		memcpy(buf + len, mqe, expected);
+-	}
+-
+-	rptr = (rptr + DIV_ROUND_UP(size, GSP_PAGE_SIZE)) % gsp->msgq.cnt;
+-
+-	mb();
+-	(*gsp->msgq.rptr) = rptr;
+-	return buf;
+-}
+-
+-static void *
+-r535_gsp_msgq_recv(struct nvkm_gsp *gsp, u32 gsp_rpc_len, int *retries)
+-{
+-	struct r535_gsp_msg *mqe;
+-	const u32 max_rpc_size = GSP_MSG_MAX_SIZE - sizeof(*mqe);
+-	struct nvfw_gsp_rpc *rpc;
+-	struct r535_gsp_msg_info info = {0};
+-	u32 expected = gsp_rpc_len;
+-	void *buf;
+-
+-	mqe = r535_gsp_msgq_get_entry(gsp);
+-	rpc = (struct nvfw_gsp_rpc *)mqe->data;
+-
+-	if (WARN_ON(rpc->length > max_rpc_size))
+-		return NULL;
+-
+-	buf = kvmalloc(max_t(u32, rpc->length, expected), GFP_KERNEL);
+-	if (!buf)
+-		return ERR_PTR(-ENOMEM);
+-
+-	info.gsp_rpc_buf = buf;
+-	info.retries = retries;
+-	info.gsp_rpc_len = rpc->length;
+-
+-	buf = r535_gsp_msgq_recv_one_elem(gsp, &info);
+-	if (IS_ERR(buf)) {
+-		kvfree(info.gsp_rpc_buf);
+-		info.gsp_rpc_buf = NULL;
+-		return buf;
+-	}
+-
+-	if (expected <= max_rpc_size)
+-		return buf;
+-
+-	info.gsp_rpc_buf += info.gsp_rpc_len;
+-	expected -= info.gsp_rpc_len;
+-
+-	while (expected) {
+-		u32 size;
+-
+-		rpc = r535_gsp_msgq_peek(gsp, sizeof(*rpc), info.retries);
+-		if (IS_ERR_OR_NULL(rpc)) {
+-			kfree(buf);
+-			return rpc;
+-		}
+-
+-		info.gsp_rpc_len = rpc->length;
+-		info.continuation = true;
+-
+-		rpc = r535_gsp_msgq_recv_one_elem(gsp, &info);
+-		if (IS_ERR_OR_NULL(rpc)) {
+-			kfree(buf);
+-			return rpc;
+-		}
+-
+-		size = info.gsp_rpc_len - sizeof(*rpc);
+-		expected -= size;
+-		info.gsp_rpc_buf += size;
+-	}
+-
+-	rpc = buf;
+-	rpc->length = gsp_rpc_len;
+-	return buf;
+-}
+-
+-static int
+-r535_gsp_cmdq_push(struct nvkm_gsp *gsp, void *rpc)
+-{
+-	struct r535_gsp_msg *msg = to_gsp_hdr(rpc, msg);
+-	struct r535_gsp_msg *cqe;
+-	u32 gsp_rpc_len = msg->checksum;
+-	u64 *ptr = (void *)msg;
+-	u64 *end;
+-	u64 csum = 0;
+-	int free, time = 1000000;
+-	u32 wptr, size, step, len;
+-	u32 off = 0;
+-
+-	len = ALIGN(GSP_MSG_HDR_SIZE + gsp_rpc_len, GSP_PAGE_SIZE);
+-
+-	end = (u64 *)((char *)ptr + len);
+-	msg->pad = 0;
+-	msg->checksum = 0;
+-	msg->sequence = gsp->cmdq.seq++;
+-	msg->elem_count = DIV_ROUND_UP(len, 0x1000);
+-
+-	while (ptr < end)
+-		csum ^= *ptr++;
+-
+-	msg->checksum = upper_32_bits(csum) ^ lower_32_bits(csum);
+-
+-	wptr = *gsp->cmdq.wptr;
+-	do {
+-		do {
+-			free = *gsp->cmdq.rptr + gsp->cmdq.cnt - wptr - 1;
+-			if (free >= gsp->cmdq.cnt)
+-				free -= gsp->cmdq.cnt;
+-			if (free >= 1)
+-				break;
+-
+-			usleep_range(1, 2);
+-		} while (--time);
+-
+-		if (WARN_ON(!time)) {
+-			kvfree(msg);
+-			return -ETIMEDOUT;
+-		}
+-
+-		cqe = (void *)((u8 *)gsp->shm.cmdq.ptr + 0x1000 + wptr * 0x1000);
+-		step = min_t(u32, free, (gsp->cmdq.cnt - wptr));
+-		size = min_t(u32, len, step * GSP_PAGE_SIZE);
+-
+-		memcpy(cqe, (u8 *)msg + off, size);
+-
+-		wptr += DIV_ROUND_UP(size, 0x1000);
+-		if (wptr == gsp->cmdq.cnt)
+-			wptr = 0;
+-
+-		off  += size;
+-		len -= size;
+-	} while (len);
+-
+-	nvkm_trace(&gsp->subdev, "cmdq: wptr %d\n", wptr);
+-	wmb();
+-	(*gsp->cmdq.wptr) = wptr;
+-	mb();
+-
+-	nvkm_falcon_wr32(&gsp->falcon, 0xc00, 0x00000000);
+-
+-	kvfree(msg);
+-	return 0;
+-}
+-
+-static void *
+-r535_gsp_cmdq_get(struct nvkm_gsp *gsp, u32 gsp_rpc_len)
+-{
+-	struct r535_gsp_msg *msg;
+-	u32 size = GSP_MSG_HDR_SIZE + gsp_rpc_len;
+-
+-	size = ALIGN(size, GSP_MSG_MIN_SIZE);
+-	msg = kvzalloc(size, GFP_KERNEL);
+-	if (!msg)
+-		return ERR_PTR(-ENOMEM);
+-
+-	msg->checksum = gsp_rpc_len;
+-	return msg->data;
+-}
+-
+-static void
+-r535_gsp_msg_done(struct nvkm_gsp *gsp, struct nvfw_gsp_rpc *msg)
+-{
+-	kvfree(msg);
+-}
+-
+-static void
+-r535_gsp_msg_dump(struct nvkm_gsp *gsp, struct nvfw_gsp_rpc *msg, int lvl)
+-{
+-	if (gsp->subdev.debug >= lvl) {
+-		nvkm_printk__(&gsp->subdev, lvl, info,
+-			      "msg fn:%d len:0x%x/0x%zx res:0x%x resp:0x%x\n",
+-			      msg->function, msg->length, msg->length - sizeof(*msg),
+-			      msg->rpc_result, msg->rpc_result_private);
+-		print_hex_dump(KERN_INFO, "msg: ", DUMP_PREFIX_OFFSET, 16, 1,
+-			       msg->data, msg->length - sizeof(*msg), true);
+-	}
+-}
+-
+-static struct nvfw_gsp_rpc *
+-r535_gsp_msg_recv(struct nvkm_gsp *gsp, int fn, u32 gsp_rpc_len)
+-{
+-	struct nvkm_subdev *subdev = &gsp->subdev;
+-	struct nvfw_gsp_rpc *rpc;
+-	int retries = 4000000, i;
+-
+-retry:
+-	rpc = r535_gsp_msgq_peek(gsp, sizeof(*rpc), &retries);
+-	if (IS_ERR_OR_NULL(rpc))
+-		return rpc;
+-
+-	rpc = r535_gsp_msgq_recv(gsp, gsp_rpc_len, &retries);
+-	if (IS_ERR_OR_NULL(rpc))
+-		return rpc;
+-
+-	if (rpc->rpc_result) {
+-		r535_gsp_msg_dump(gsp, rpc, NV_DBG_ERROR);
+-		r535_gsp_msg_done(gsp, rpc);
+-		return ERR_PTR(-EINVAL);
+-	}
+-
+-	r535_gsp_msg_dump(gsp, rpc, NV_DBG_TRACE);
+-
+-	if (fn && rpc->function == fn) {
+-		if (gsp_rpc_len) {
+-			if (rpc->length < gsp_rpc_len) {
+-				nvkm_error(subdev, "rpc len %d < %d\n",
+-					   rpc->length, gsp_rpc_len);
+-				r535_gsp_msg_dump(gsp, rpc, NV_DBG_ERROR);
+-				r535_gsp_msg_done(gsp, rpc);
+-				return ERR_PTR(-EIO);
+-			}
+-
+-			return rpc;
+-		}
+-
+-		r535_gsp_msg_done(gsp, rpc);
+-		return NULL;
+-	}
+-
+-	for (i = 0; i < gsp->msgq.ntfy_nr; i++) {
+-		struct nvkm_gsp_msgq_ntfy *ntfy = &gsp->msgq.ntfy[i];
+-
+-		if (ntfy->fn == rpc->function) {
+-			if (ntfy->func)
+-				ntfy->func(ntfy->priv, ntfy->fn, rpc->data,
+-					   rpc->length - sizeof(*rpc));
+-			break;
+-		}
+-	}
+-
+-	if (i == gsp->msgq.ntfy_nr)
+-		r535_gsp_msg_dump(gsp, rpc, NV_DBG_WARN);
+-
+-	r535_gsp_msg_done(gsp, rpc);
+-	if (fn)
+-		goto retry;
+-
+-	if (*gsp->msgq.rptr != *gsp->msgq.wptr)
+-		goto retry;
+-
+-	return NULL;
+-}
+-
+-static int
+-r535_gsp_msg_ntfy_add(struct nvkm_gsp *gsp, u32 fn, nvkm_gsp_msg_ntfy_func func, void *priv)
+-{
+-	int ret = 0;
+-
+-	mutex_lock(&gsp->msgq.mutex);
+-	if (WARN_ON(gsp->msgq.ntfy_nr >= ARRAY_SIZE(gsp->msgq.ntfy))) {
+-		ret = -ENOSPC;
+-	} else {
+-		gsp->msgq.ntfy[gsp->msgq.ntfy_nr].fn = fn;
+-		gsp->msgq.ntfy[gsp->msgq.ntfy_nr].func = func;
+-		gsp->msgq.ntfy[gsp->msgq.ntfy_nr].priv = priv;
+-		gsp->msgq.ntfy_nr++;
+-	}
+-	mutex_unlock(&gsp->msgq.mutex);
+-	return ret;
+-}
+-
+-static int
+-r535_gsp_rpc_poll(struct nvkm_gsp *gsp, u32 fn)
+-{
+-	void *repv;
+-
+-	mutex_lock(&gsp->cmdq.mutex);
+-	repv = r535_gsp_msg_recv(gsp, fn, 0);
+-	mutex_unlock(&gsp->cmdq.mutex);
+-	if (IS_ERR(repv))
+-		return PTR_ERR(repv);
+-
+-	return 0;
+-}
+-
+-static void *
+-r535_gsp_rpc_send(struct nvkm_gsp *gsp, void *payload, bool wait,
+-		  u32 gsp_rpc_len)
+-{
+-	struct nvfw_gsp_rpc *rpc = to_gsp_hdr(payload, rpc);
+-	struct nvfw_gsp_rpc *msg;
+-	u32 fn = rpc->function;
+-	void *repv = NULL;
+-	int ret;
+-
+-	if (gsp->subdev.debug >= NV_DBG_TRACE) {
+-		nvkm_trace(&gsp->subdev, "rpc fn:%d len:0x%x/0x%zx\n", rpc->function,
+-			   rpc->length, rpc->length - sizeof(*rpc));
+-		print_hex_dump(KERN_INFO, "rpc: ", DUMP_PREFIX_OFFSET, 16, 1,
+-			       rpc->data, rpc->length - sizeof(*rpc), true);
+-	}
+-
+-	ret = r535_gsp_cmdq_push(gsp, rpc);
+-	if (ret)
+-		return ERR_PTR(ret);
+-
+-	if (wait) {
+-		msg = r535_gsp_msg_recv(gsp, fn, gsp_rpc_len);
+-		if (!IS_ERR_OR_NULL(msg))
+-			repv = msg->data;
+-		else
+-			repv = msg;
+-	}
+-
+-	return repv;
+-}
+-
+ static void
+ r535_gsp_event_dtor(struct nvkm_gsp_event *event)
+ {
+@@ -797,7 +242,7 @@ r535_gsp_rpc_rm_free(struct nvkm_gsp_object *object)
+ 	rpc->params.hRoot = client->object.handle;
+ 	rpc->params.hObjectParent = 0;
+ 	rpc->params.hObjectOld = object->handle;
+-	return nvkm_gsp_rpc_wr(gsp, rpc, true);
++	return nvkm_gsp_rpc_wr(gsp, rpc, NVKM_GSP_RPC_REPLY_RECV);
+ }
+ 
+ static void
+@@ -815,7 +260,7 @@ r535_gsp_rpc_rm_alloc_push(struct nvkm_gsp_object *object, void *params)
+ 	struct nvkm_gsp *gsp = object->client->gsp;
+ 	void *ret = NULL;
+ 
+-	rpc = nvkm_gsp_rpc_push(gsp, rpc, true, sizeof(*rpc));
++	rpc = nvkm_gsp_rpc_push(gsp, rpc, NVKM_GSP_RPC_REPLY_RECV, sizeof(*rpc));
+ 	if (IS_ERR_OR_NULL(rpc))
+ 		return rpc;
+ 
+@@ -876,7 +321,7 @@ r535_gsp_rpc_rm_ctrl_push(struct nvkm_gsp_object *object, void **params, u32 rep
+ 	struct nvkm_gsp *gsp = object->client->gsp;
+ 	int ret = 0;
+ 
+-	rpc = nvkm_gsp_rpc_push(gsp, rpc, true, repc);
++	rpc = nvkm_gsp_rpc_push(gsp, rpc, NVKM_GSP_RPC_REPLY_RECV, repc);
+ 	if (IS_ERR_OR_NULL(rpc)) {
+ 		*params = NULL;
+ 		return PTR_ERR(rpc);
+@@ -920,109 +365,9 @@ r535_gsp_rpc_rm_ctrl_get(struct nvkm_gsp_object *object, u32 cmd, u32 params_siz
+ 	return rpc->params;
+ }
+ 
+-static void
+-r535_gsp_rpc_done(struct nvkm_gsp *gsp, void *repv)
+-{
+-	struct nvfw_gsp_rpc *rpc = container_of(repv, typeof(*rpc), data);
+-
+-	r535_gsp_msg_done(gsp, rpc);
+-}
+-
+-static void *
+-r535_gsp_rpc_get(struct nvkm_gsp *gsp, u32 fn, u32 payload_size)
+-{
+-	struct nvfw_gsp_rpc *rpc;
+-
+-	rpc = r535_gsp_cmdq_get(gsp, ALIGN(sizeof(*rpc) + payload_size,
+-					   sizeof(u64)));
+-	if (IS_ERR(rpc))
+-		return ERR_CAST(rpc);
+-
+-	rpc->header_version = 0x03000000;
+-	rpc->signature = ('C' << 24) | ('P' << 16) | ('R' << 8) | 'V';
+-	rpc->function = fn;
+-	rpc->rpc_result = 0xffffffff;
+-	rpc->rpc_result_private = 0xffffffff;
+-	rpc->length = sizeof(*rpc) + payload_size;
+-	return rpc->data;
+-}
+-
+-static void *
+-r535_gsp_rpc_push(struct nvkm_gsp *gsp, void *payload, bool wait,
+-		  u32 gsp_rpc_len)
+-{
+-	struct nvfw_gsp_rpc *rpc = to_gsp_hdr(payload, rpc);
+-	struct r535_gsp_msg *msg = to_gsp_hdr(rpc, msg);
+-	const u32 max_rpc_size = GSP_MSG_MAX_SIZE - sizeof(*msg);
+-	const u32 max_payload_size = max_rpc_size - sizeof(*rpc);
+-	u32 payload_size = rpc->length - sizeof(*rpc);
+-	void *repv;
+-
+-	mutex_lock(&gsp->cmdq.mutex);
+-	if (payload_size > max_payload_size) {
+-		const u32 fn = rpc->function;
+-		u32 remain_payload_size = payload_size;
+-
+-		/* Adjust length, and send initial RPC. */
+-		rpc->length = sizeof(*rpc) + max_payload_size;
+-		msg->checksum = rpc->length;
+-
+-		repv = r535_gsp_rpc_send(gsp, payload, false, 0);
+-		if (IS_ERR(repv))
+-			goto done;
+-
+-		payload += max_payload_size;
+-		remain_payload_size -= max_payload_size;
+-
+-		/* Remaining chunks sent as CONTINUATION_RECORD RPCs. */
+-		while (remain_payload_size) {
+-			u32 size = min(remain_payload_size,
+-				       max_payload_size);
+-			void *next;
+-
+-			next = r535_gsp_rpc_get(gsp, NV_VGPU_MSG_FUNCTION_CONTINUATION_RECORD, size);
+-			if (IS_ERR(next)) {
+-				repv = next;
+-				goto done;
+-			}
+-
+-			memcpy(next, payload, size);
+-
+-			repv = r535_gsp_rpc_send(gsp, next, false, 0);
+-			if (IS_ERR(repv))
+-				goto done;
+-
+-			payload += size;
+-			remain_payload_size -= size;
+-		}
+-
+-		/* Wait for reply. */
+-		rpc = r535_gsp_msg_recv(gsp, fn, payload_size +
+-					sizeof(*rpc));
+-		if (!IS_ERR_OR_NULL(rpc)) {
+-			if (wait) {
+-				repv = rpc->data;
+-			} else {
+-				nvkm_gsp_rpc_done(gsp, rpc);
+-				repv = NULL;
+-			}
+-		} else {
+-			repv = wait ? rpc : NULL;
+-		}
+-	} else {
+-		repv = r535_gsp_rpc_send(gsp, payload, wait, gsp_rpc_len);
+-	}
+-
+-done:
+-	mutex_unlock(&gsp->cmdq.mutex);
+-	return repv;
+-}
+-
+ const struct nvkm_gsp_rm
+ r535_gsp_rm = {
+-	.rpc_get = r535_gsp_rpc_get,
+-	.rpc_push = r535_gsp_rpc_push,
+-	.rpc_done = r535_gsp_rpc_done,
++	.api = &r535_rm,
+ 
+ 	.rm_ctrl_get = r535_gsp_rpc_rm_ctrl_get,
+ 	.rm_ctrl_push = r535_gsp_rpc_rm_ctrl_push,
+@@ -1327,7 +672,7 @@ r535_gsp_rpc_unloading_guest_driver(struct nvkm_gsp *gsp, bool suspend)
+ 		rpc->newLevel = NV2080_CTRL_GPU_SET_POWER_STATE_GPU_LEVEL_0;
+ 	}
+ 
+-	return nvkm_gsp_rpc_wr(gsp, rpc, true);
++	return nvkm_gsp_rpc_wr(gsp, rpc, NVKM_GSP_RPC_REPLY_RECV);
+ }
+ 
+ enum registry_type {
+@@ -1684,7 +1029,7 @@ r535_gsp_rpc_set_registry(struct nvkm_gsp *gsp)
+ 
+ 	build_registry(gsp, rpc);
+ 
+-	return nvkm_gsp_rpc_wr(gsp, rpc, false);
++	return nvkm_gsp_rpc_wr(gsp, rpc, NVKM_GSP_RPC_REPLY_NOWAIT);
+ 
+ fail:
+ 	clean_registry(gsp);
+@@ -1893,7 +1238,7 @@ r535_gsp_rpc_set_system_info(struct nvkm_gsp *gsp)
+ 	info->pciConfigMirrorSize = 0x001000;
+ 	r535_gsp_acpi_info(gsp, &info->acpiMethodData);
+ 
+-	return nvkm_gsp_rpc_wr(gsp, info, false);
++	return nvkm_gsp_rpc_wr(gsp, info, NVKM_GSP_RPC_REPLY_NOWAIT);
+ }
+ 
+ static int
+@@ -2838,7 +2183,7 @@ r535_gsp_fini(struct nvkm_gsp *gsp, bool suspend)
+ 		return ret;
+ 
+ 	nvkm_msec(gsp->subdev.device, 2000,
+-		if (nvkm_falcon_rd32(&gsp->falcon, 0x040) & 0x80000000)
++		if (nvkm_falcon_rd32(&gsp->falcon, 0x040) == 0x80000000)
+ 			break;
+ 	);
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/Kbuild b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/Kbuild
+new file mode 100644
+index 00000000000000..1c07740215ec5f
+--- /dev/null
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/Kbuild
+@@ -0,0 +1,5 @@
++# SPDX-License-Identifier: MIT
++#
++# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
++
++include $(src)/nvkm/subdev/gsp/rm/r535/Kbuild
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/Kbuild b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/Kbuild
+new file mode 100644
+index 00000000000000..21c818ec070160
+--- /dev/null
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/Kbuild
+@@ -0,0 +1,6 @@
++# SPDX-License-Identifier: MIT
++#
++# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
++
++nvkm-y += nvkm/subdev/gsp/rm/r535/rm.o
++nvkm-y += nvkm/subdev/gsp/rm/r535/rpc.o
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rm.c
+new file mode 100644
+index 00000000000000..f28b781abc5c7d
+--- /dev/null
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rm.c
+@@ -0,0 +1,10 @@
++/* SPDX-License-Identifier: MIT
++ *
++ * Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
++ */
++#include <rm/rm.h>
++
++const struct nvkm_rm_api
++r535_rm = {
++	.rpc = &r535_rpc,
++};
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c
+new file mode 100644
+index 00000000000000..d558b0f62b010f
+--- /dev/null
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c
+@@ -0,0 +1,699 @@
++/*
++ * Copyright 2023 Red Hat Inc.
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice shall be included in
++ * all copies or substantial portions of the Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
++ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
++ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
++ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
++ * OTHER DEALINGS IN THE SOFTWARE.
++ */
++#include <rm/rpc.h>
++
++#include <nvrm/nvtypes.h>
++#include <nvrm/535.113.01/nvidia/kernel/inc/vgpu/rpc_global_enums.h>
++
++#define GSP_MSG_MIN_SIZE GSP_PAGE_SIZE
++#define GSP_MSG_MAX_SIZE (GSP_MSG_MIN_SIZE * 16)
++
++/**
++ * DOC: GSP message queue element
++ *
++ * https://github.com/NVIDIA/open-gpu-kernel-modules/blob/535/src/nvidia/inc/kernel/gpu/gsp/message_queue_priv.h
++ *
++ * The GSP command queue and status queue are message queues for
++ * communication between software and GSP. The software submits a GSP
++ * RPC via the GSP command queue, and GSP writes the status of the
++ * submitted RPC to the status queue.
++ *
++ * A GSP message queue element consists of three parts:
++ *
++ * - message element header (struct r535_gsp_msg), which mostly maintains
++ *   the metadata for queuing the element.
++ *
++ * - RPC message header (struct nvfw_gsp_rpc), which maintains the info
++ *   of the RPC. E.g., the RPC function number.
++ *
++ * - The payload, which carries the RPC message itself, e.g. the params
++ *   of a specific RPC function. Some RPC functions also have their
++ *   headers in the payload, e.g. rm_alloc and rm_control.
++ *
++ * The memory layout of a GSP message element can be illustrated below::
++ *
++ *    +------------------------+
++ *    | Message Element Header |
++ *    |    (r535_gsp_msg)      |
++ *    |                        |
++ *    | (r535_gsp_msg.data)    |
++ *    |          |             |
++ *    |----------V-------------|
++ *    |    GSP RPC Header      |
++ *    |    (nvfw_gsp_rpc)      |
++ *    |                        |
++ *    | (nvfw_gsp_rpc.data)    |
++ *    |          |             |
++ *    |----------V-------------|
++ *    |       Payload          |
++ *    |                        |
++ *    |   header(optional)     |
++ *    |        params          |
++ *    +------------------------+
++ *
++ * The max size of a message queue element is 16 pages (including the
++ * headers). When a GSP message to be sent is larger than 16 pages, the
++ * message should be split into multiple elements and sent accordingly.
++ *
++ * Among the split elements, the first element has the expected
++ * function number, while the rest of the elements are sent with the
++ * function number NV_VGPU_MSG_FUNCTION_CONTINUATION_RECORD.
++ *
++ * GSP consumes the elements from the cmdq and always writes the result
++ * back to the msgq. The result is also formed as split elements.
++ *
++ * Terminology:
++ *
++ * - gsp_msg(msg): GSP message element (element header + GSP RPC header +
++ *   payload)
++ * - gsp_rpc(rpc): GSP RPC (RPC header + payload)
++ * - gsp_rpc_buf: buffer for (GSP RPC header + payload)
++ * - gsp_rpc_len: size of (GSP RPC header + payload)
++ * - params_size: size of params in the payload
++ * - payload_size: size of (header if exists + params) in the payload
++ */
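++
++/*
++ * A worked size example (illustrative only, using the constants above):
++ * with 4 KiB pages, GSP_MSG_MAX_SIZE is 16 pages, i.e. roughly 64 KiB of
++ * payload per element once the headers are subtracted. A 150 KiB gsp_rpc
++ * is therefore sent as three elements: the first carries the real
++ * function number, the remaining two are
++ * NV_VGPU_MSG_FUNCTION_CONTINUATION_RECORD records.
++ */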
++
++struct r535_gsp_msg {
++	u8 auth_tag_buffer[16];
++	u8 aad_buffer[16];
++	u32 checksum;
++	u32 sequence;
++	u32 elem_count;
++	u32 pad;
++	u8  data[];
++};
++
++struct nvfw_gsp_rpc {
++	u32 header_version;
++	u32 signature;
++	u32 length;
++	u32 function;
++	u32 rpc_result;
++	u32 rpc_result_private;
++	u32 sequence;
++	union {
++		u32 spare;
++		u32 cpuRmGfid;
++	};
++	u8  data[];
++};
++
++#define GSP_MSG_HDR_SIZE offsetof(struct r535_gsp_msg, data)
++
++#define to_gsp_hdr(p, header) \
++	container_of((void *)p, typeof(*header), data)
++
++#define to_payload_hdr(p, header) \
++	container_of((void *)p, typeof(*header), params)
++
++int
++r535_rpc_status_to_errno(uint32_t rpc_status)
++{
++	switch (rpc_status) {
++	case 0x55: /* NV_ERR_NOT_READY */
++	case 0x66: /* NV_ERR_TIMEOUT_RETRY */
++		return -EBUSY;
++	case 0x51: /* NV_ERR_NO_MEMORY */
++		return -ENOMEM;
++	default:
++		return -EINVAL;
++	}
++}
++
++static int
++r535_gsp_msgq_wait(struct nvkm_gsp *gsp, u32 gsp_rpc_len, int *ptime)
++{
++	u32 size, rptr = *gsp->msgq.rptr;
++	int used;
++
++	size = DIV_ROUND_UP(GSP_MSG_HDR_SIZE + gsp_rpc_len,
++			    GSP_PAGE_SIZE);
++	if (WARN_ON(!size || size >= gsp->msgq.cnt))
++		return -EINVAL;
++
++	do {
++		u32 wptr = *gsp->msgq.wptr;
++
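++		/* Ring occupancy: distance from rptr to wptr, modulo cnt. */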
++		used = wptr + gsp->msgq.cnt - rptr;
++		if (used >= gsp->msgq.cnt)
++			used -= gsp->msgq.cnt;
++		if (used >= size)
++			break;
++
++		usleep_range(1, 2);
++	} while (--(*ptime));
++
++	if (WARN_ON(!*ptime))
++		return -ETIMEDOUT;
++
++	return used;
++}
++
++static struct r535_gsp_msg *
++r535_gsp_msgq_get_entry(struct nvkm_gsp *gsp)
++{
++	u32 rptr = *gsp->msgq.rptr;
++
++	/* Skip the first page, which is the message queue info */
++	return (void *)((u8 *)gsp->shm.msgq.ptr + GSP_PAGE_SIZE +
++	       rptr * GSP_PAGE_SIZE);
++}
++
++/**
++ * DOC: Receive a GSP message queue element
++ *
++ * Receiving a GSP message queue element from the message queue consists of
++ * the following steps:
++ *
++ * - Peek the element from the queue: r535_gsp_msgq_peek().
++ *   Peek the first page of the element to determine the total size of the
++ *   message before allocating the proper memory.
++ *
++ * - Allocate memory for the message.
++ *   Once the total size of the message is determined from the GSP message
++ *   queue element, the caller of r535_gsp_msgq_recv() allocates the
++ *   required memory.
++ *
++ * - Receive the message: r535_gsp_msgq_recv().
++ *   Copy the message into the allocated memory. Advance the read pointer.
++ *   If the message is a large GSP message, r535_gsp_msgq_recv() calls
++ *   r535_gsp_msgq_recv_one_elem() repeatedly to receive continuation parts
++ *   until the complete message is received.
++ *   r535_gsp_msgq_recv() assembles the payloads of the continuation parts
++ *   into the returned large GSP message.
++ *
++ * - Free the allocated memory: r535_gsp_msg_done().
++ *   The user is responsible for freeing the memory allocated for the GSP
++ *   message pages after they have been processed.
++ */
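++
++/*
++ * Assumed caller flow, for illustration (this is effectively what
++ * r535_gsp_msg_recv() below does):
++ *
++ *   rpc = r535_gsp_msgq_peek(gsp, sizeof(*rpc), &retries); // probe size
++ *   rpc = r535_gsp_msgq_recv(gsp, gsp_rpc_len, &retries);  // copy out
++ *   ...process the RPC...
++ *   r535_gsp_msg_done(gsp, rpc);                           // kvfree()
++ */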
++static void *
++r535_gsp_msgq_peek(struct nvkm_gsp *gsp, u32 gsp_rpc_len, int *retries)
++{
++	struct r535_gsp_msg *mqe;
++	int ret;
++
++	ret = r535_gsp_msgq_wait(gsp, gsp_rpc_len, retries);
++	if (ret < 0)
++		return ERR_PTR(ret);
++
++	mqe = r535_gsp_msgq_get_entry(gsp);
++
++	return mqe->data;
++}
++
++struct r535_gsp_msg_info {
++	int *retries;
++	u32 gsp_rpc_len;
++	void *gsp_rpc_buf;
++	bool continuation;
++};
++
++static void
++r535_gsp_msg_dump(struct nvkm_gsp *gsp, struct nvfw_gsp_rpc *msg, int lvl);
++
++static void *
++r535_gsp_msgq_recv_one_elem(struct nvkm_gsp *gsp,
++			    struct r535_gsp_msg_info *info)
++{
++	u8 *buf = info->gsp_rpc_buf;
++	u32 rptr = *gsp->msgq.rptr;
++	struct r535_gsp_msg *mqe;
++	u32 size, expected, len;
++	int ret;
++
++	expected = info->gsp_rpc_len;
++
++	ret = r535_gsp_msgq_wait(gsp, expected, info->retries);
++	if (ret < 0)
++		return ERR_PTR(ret);
++
++	mqe = r535_gsp_msgq_get_entry(gsp);
++
++	if (info->continuation) {
++		struct nvfw_gsp_rpc *rpc = (struct nvfw_gsp_rpc *)mqe->data;
++
++		if (rpc->function != NV_VGPU_MSG_FUNCTION_CONTINUATION_RECORD) {
++			nvkm_error(&gsp->subdev,
++				   "Not a continuation of a large RPC\n");
++			r535_gsp_msg_dump(gsp, rpc, NV_DBG_ERROR);
++			return ERR_PTR(-EIO);
++		}
++	}
++
++	size = ALIGN(expected + GSP_MSG_HDR_SIZE, GSP_PAGE_SIZE);
++
++	len = ((gsp->msgq.cnt - rptr) * GSP_PAGE_SIZE) - sizeof(*mqe);
++	len = min_t(u32, expected, len);
++
++	if (info->continuation)
++		memcpy(buf, mqe->data + sizeof(struct nvfw_gsp_rpc),
++		       len - sizeof(struct nvfw_gsp_rpc));
++	else
++		memcpy(buf, mqe->data, len);
++
++	expected -= len;
++
++	if (expected) {
++		mqe = (void *)((u8 *)gsp->shm.msgq.ptr + 0x1000 + 0 * 0x1000);
++		memcpy(buf + len, mqe, expected);
++	}
++
++	rptr = (rptr + DIV_ROUND_UP(size, GSP_PAGE_SIZE)) % gsp->msgq.cnt;
++
++	mb();
++	(*gsp->msgq.rptr) = rptr;
++	return buf;
++}
++
++static void *
++r535_gsp_msgq_recv(struct nvkm_gsp *gsp, u32 gsp_rpc_len, int *retries)
++{
++	struct r535_gsp_msg *mqe;
++	const u32 max_rpc_size = GSP_MSG_MAX_SIZE - sizeof(*mqe);
++	struct nvfw_gsp_rpc *rpc;
++	struct r535_gsp_msg_info info = {0};
++	u32 expected = gsp_rpc_len;
++	void *buf;
++
++	mqe = r535_gsp_msgq_get_entry(gsp);
++	rpc = (struct nvfw_gsp_rpc *)mqe->data;
++
++	if (WARN_ON(rpc->length > max_rpc_size))
++		return NULL;
++
++	buf = kvmalloc(max_t(u32, rpc->length, expected), GFP_KERNEL);
++	if (!buf)
++		return ERR_PTR(-ENOMEM);
++
++	info.gsp_rpc_buf = buf;
++	info.retries = retries;
++	info.gsp_rpc_len = rpc->length;
++
++	buf = r535_gsp_msgq_recv_one_elem(gsp, &info);
++	if (IS_ERR(buf)) {
++		kvfree(info.gsp_rpc_buf);
++		info.gsp_rpc_buf = NULL;
++		return buf;
++	}
++
++	if (expected <= max_rpc_size)
++		return buf;
++
++	info.gsp_rpc_buf += info.gsp_rpc_len;
++	expected -= info.gsp_rpc_len;
++
++	while (expected) {
++		u32 size;
++
++		rpc = r535_gsp_msgq_peek(gsp, sizeof(*rpc), info.retries);
++		if (IS_ERR_OR_NULL(rpc)) {
++			kfree(buf);
++			return rpc;
++		}
++
++		info.gsp_rpc_len = rpc->length;
++		info.continuation = true;
++
++		rpc = r535_gsp_msgq_recv_one_elem(gsp, &info);
++		if (IS_ERR_OR_NULL(rpc)) {
++			kfree(buf);
++			return rpc;
++		}
++
++		size = info.gsp_rpc_len - sizeof(*rpc);
++		expected -= size;
++		info.gsp_rpc_buf += size;
++	}
++
++	rpc = buf;
++	rpc->length = gsp_rpc_len;
++	return buf;
++}
++
++static int
++r535_gsp_cmdq_push(struct nvkm_gsp *gsp, void *rpc)
++{
++	struct r535_gsp_msg *msg = to_gsp_hdr(rpc, msg);
++	struct r535_gsp_msg *cqe;
++	u32 gsp_rpc_len = msg->checksum;
++	u64 *ptr = (void *)msg;
++	u64 *end;
++	u64 csum = 0;
++	int free, time = 1000000;
++	u32 wptr, size, step, len;
++	u32 off = 0;
++
++	len = ALIGN(GSP_MSG_HDR_SIZE + gsp_rpc_len, GSP_PAGE_SIZE);
++
++	end = (u64 *)((char *)ptr + len);
++	msg->pad = 0;
++	msg->checksum = 0;
++	msg->sequence = gsp->cmdq.seq++;
++	msg->elem_count = DIV_ROUND_UP(len, 0x1000);
++
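++	/*
++	 * Fold the whole message into a 32-bit checksum: XOR all 64-bit
++	 * words, then XOR the upper and lower halves of the result.
++	 */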
++	while (ptr < end)
++		csum ^= *ptr++;
++
++	msg->checksum = upper_32_bits(csum) ^ lower_32_bits(csum);
++
++	wptr = *gsp->cmdq.wptr;
++	do {
++		do {
++			free = *gsp->cmdq.rptr + gsp->cmdq.cnt - wptr - 1;
++			if (free >= gsp->cmdq.cnt)
++				free -= gsp->cmdq.cnt;
++			if (free >= 1)
++				break;
++
++			usleep_range(1, 2);
++		} while (--time);
++
++		if (WARN_ON(!time)) {
++			kvfree(msg);
++			return -ETIMEDOUT;
++		}
++
++		cqe = (void *)((u8 *)gsp->shm.cmdq.ptr + 0x1000 + wptr * 0x1000);
++		step = min_t(u32, free, (gsp->cmdq.cnt - wptr));
++		size = min_t(u32, len, step * GSP_PAGE_SIZE);
++
++		memcpy(cqe, (u8 *)msg + off, size);
++
++		wptr += DIV_ROUND_UP(size, 0x1000);
++		if (wptr == gsp->cmdq.cnt)
++			wptr = 0;
++
++		off  += size;
++		len -= size;
++	} while (len);
++
++	nvkm_trace(&gsp->subdev, "cmdq: wptr %d\n", wptr);
++	wmb();
++	(*gsp->cmdq.wptr) = wptr;
++	mb();
++
++	nvkm_falcon_wr32(&gsp->falcon, 0xc00, 0x00000000);
++
++	kvfree(msg);
++	return 0;
++}
++
++static void *
++r535_gsp_cmdq_get(struct nvkm_gsp *gsp, u32 gsp_rpc_len)
++{
++	struct r535_gsp_msg *msg;
++	u32 size = GSP_MSG_HDR_SIZE + gsp_rpc_len;
++
++	size = ALIGN(size, GSP_MSG_MIN_SIZE);
++	msg = kvzalloc(size, GFP_KERNEL);
++	if (!msg)
++		return ERR_PTR(-ENOMEM);
++
++	msg->checksum = gsp_rpc_len;
++	return msg->data;
++}
++
++static void
++r535_gsp_msg_done(struct nvkm_gsp *gsp, struct nvfw_gsp_rpc *msg)
++{
++	kvfree(msg);
++}
++
++static void
++r535_gsp_msg_dump(struct nvkm_gsp *gsp, struct nvfw_gsp_rpc *msg, int lvl)
++{
++	if (gsp->subdev.debug >= lvl) {
++		nvkm_printk__(&gsp->subdev, lvl, info,
++			      "msg fn:%d len:0x%x/0x%zx res:0x%x resp:0x%x\n",
++			      msg->function, msg->length, msg->length - sizeof(*msg),
++			      msg->rpc_result, msg->rpc_result_private);
++		print_hex_dump(KERN_INFO, "msg: ", DUMP_PREFIX_OFFSET, 16, 1,
++			       msg->data, msg->length - sizeof(*msg), true);
++	}
++}
++
++struct nvfw_gsp_rpc *
++r535_gsp_msg_recv(struct nvkm_gsp *gsp, int fn, u32 gsp_rpc_len)
++{
++	struct nvkm_subdev *subdev = &gsp->subdev;
++	struct nvfw_gsp_rpc *rpc;
++	int retries = 4000000, i;
++
++retry:
++	rpc = r535_gsp_msgq_peek(gsp, sizeof(*rpc), &retries);
++	if (IS_ERR_OR_NULL(rpc))
++		return rpc;
++
++	rpc = r535_gsp_msgq_recv(gsp, gsp_rpc_len, &retries);
++	if (IS_ERR_OR_NULL(rpc))
++		return rpc;
++
++	if (rpc->rpc_result) {
++		r535_gsp_msg_dump(gsp, rpc, NV_DBG_ERROR);
++		r535_gsp_msg_done(gsp, rpc);
++		return ERR_PTR(-EINVAL);
++	}
++
++	r535_gsp_msg_dump(gsp, rpc, NV_DBG_TRACE);
++
++	if (fn && rpc->function == fn) {
++		if (gsp_rpc_len) {
++			if (rpc->length < gsp_rpc_len) {
++				nvkm_error(subdev, "rpc len %d < %d\n",
++					   rpc->length, gsp_rpc_len);
++				r535_gsp_msg_dump(gsp, rpc, NV_DBG_ERROR);
++				r535_gsp_msg_done(gsp, rpc);
++				return ERR_PTR(-EIO);
++			}
++
++			return rpc;
++		}
++
++		r535_gsp_msg_done(gsp, rpc);
++		return NULL;
++	}
++
++	for (i = 0; i < gsp->msgq.ntfy_nr; i++) {
++		struct nvkm_gsp_msgq_ntfy *ntfy = &gsp->msgq.ntfy[i];
++
++		if (ntfy->fn == rpc->function) {
++			if (ntfy->func)
++				ntfy->func(ntfy->priv, ntfy->fn, rpc->data,
++					   rpc->length - sizeof(*rpc));
++			break;
++		}
++	}
++
++	if (i == gsp->msgq.ntfy_nr)
++		r535_gsp_msg_dump(gsp, rpc, NV_DBG_WARN);
++
++	r535_gsp_msg_done(gsp, rpc);
++	if (fn)
++		goto retry;
++
++	if (*gsp->msgq.rptr != *gsp->msgq.wptr)
++		goto retry;
++
++	return NULL;
++}
++
++int
++r535_gsp_msg_ntfy_add(struct nvkm_gsp *gsp, u32 fn, nvkm_gsp_msg_ntfy_func func, void *priv)
++{
++	int ret = 0;
++
++	mutex_lock(&gsp->msgq.mutex);
++	if (WARN_ON(gsp->msgq.ntfy_nr >= ARRAY_SIZE(gsp->msgq.ntfy))) {
++		ret = -ENOSPC;
++	} else {
++		gsp->msgq.ntfy[gsp->msgq.ntfy_nr].fn = fn;
++		gsp->msgq.ntfy[gsp->msgq.ntfy_nr].func = func;
++		gsp->msgq.ntfy[gsp->msgq.ntfy_nr].priv = priv;
++		gsp->msgq.ntfy_nr++;
++	}
++	mutex_unlock(&gsp->msgq.mutex);
++	return ret;
++}
++
++int
++r535_gsp_rpc_poll(struct nvkm_gsp *gsp, u32 fn)
++{
++	void *repv;
++
++	mutex_lock(&gsp->cmdq.mutex);
++	repv = r535_gsp_msg_recv(gsp, fn, 0);
++	mutex_unlock(&gsp->cmdq.mutex);
++	if (IS_ERR(repv))
++		return PTR_ERR(repv);
++
++	return 0;
++}
++
++static void *
++r535_gsp_rpc_handle_reply(struct nvkm_gsp *gsp, u32 fn,
++			  enum nvkm_gsp_rpc_reply_policy policy,
++			  u32 gsp_rpc_len)
++{
++	struct nvfw_gsp_rpc *reply;
++	void *repv = NULL;
++
++	switch (policy) {
++	case NVKM_GSP_RPC_REPLY_NOWAIT:
++		break;
++	case NVKM_GSP_RPC_REPLY_RECV:
++		reply = r535_gsp_msg_recv(gsp, fn, gsp_rpc_len);
++		if (!IS_ERR_OR_NULL(reply))
++			repv = reply->data;
++		else
++			repv = reply;
++		break;
++	case NVKM_GSP_RPC_REPLY_POLL:
++		repv = r535_gsp_msg_recv(gsp, fn, 0);
++		break;
++	}
++
++	return repv;
++}
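++
++/*
++ * Reply-policy summary (mirrors the switch above):
++ *   NVKM_GSP_RPC_REPLY_NOWAIT - fire-and-forget; no reply is read back.
++ *   NVKM_GSP_RPC_REPLY_RECV   - wait for the matching reply and hand its
++ *                               payload back to the caller.
++ *   NVKM_GSP_RPC_REPLY_POLL   - wait for the reply, then discard it.
++ */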
++
++static void *
++r535_gsp_rpc_send(struct nvkm_gsp *gsp, void *payload,
++		  enum nvkm_gsp_rpc_reply_policy policy, u32 gsp_rpc_len)
++{
++	struct nvfw_gsp_rpc *rpc = to_gsp_hdr(payload, rpc);
++	u32 fn = rpc->function;
++	int ret;
++
++	if (gsp->subdev.debug >= NV_DBG_TRACE) {
++		nvkm_trace(&gsp->subdev, "rpc fn:%d len:0x%x/0x%zx\n", rpc->function,
++			   rpc->length, rpc->length - sizeof(*rpc));
++		print_hex_dump(KERN_INFO, "rpc: ", DUMP_PREFIX_OFFSET, 16, 1,
++			       rpc->data, rpc->length - sizeof(*rpc), true);
++	}
++
++	ret = r535_gsp_cmdq_push(gsp, rpc);
++	if (ret)
++		return ERR_PTR(ret);
++
++	return r535_gsp_rpc_handle_reply(gsp, fn, policy, gsp_rpc_len);
++}
++
++static void
++r535_gsp_rpc_done(struct nvkm_gsp *gsp, void *repv)
++{
++	struct nvfw_gsp_rpc *rpc = container_of(repv, typeof(*rpc), data);
++
++	r535_gsp_msg_done(gsp, rpc);
++}
++
++static void *
++r535_gsp_rpc_get(struct nvkm_gsp *gsp, u32 fn, u32 payload_size)
++{
++	struct nvfw_gsp_rpc *rpc;
++
++	rpc = r535_gsp_cmdq_get(gsp, ALIGN(sizeof(*rpc) + payload_size,
++					   sizeof(u64)));
++	if (IS_ERR(rpc))
++		return ERR_CAST(rpc);
++
++	rpc->header_version = 0x03000000;
++	rpc->signature = ('C' << 24) | ('P' << 16) | ('R' << 8) | 'V';
++	rpc->function = fn;
++	rpc->rpc_result = 0xffffffff;
++	rpc->rpc_result_private = 0xffffffff;
++	rpc->length = sizeof(*rpc) + payload_size;
++	return rpc->data;
++}
++
++static void *
++r535_gsp_rpc_push(struct nvkm_gsp *gsp, void *payload,
++		  enum nvkm_gsp_rpc_reply_policy policy, u32 gsp_rpc_len)
++{
++	struct nvfw_gsp_rpc *rpc = to_gsp_hdr(payload, rpc);
++	struct r535_gsp_msg *msg = to_gsp_hdr(rpc, msg);
++	const u32 max_rpc_size = GSP_MSG_MAX_SIZE - sizeof(*msg);
++	const u32 max_payload_size = max_rpc_size - sizeof(*rpc);
++	u32 payload_size = rpc->length - sizeof(*rpc);
++	void *repv;
++
++	mutex_lock(&gsp->cmdq.mutex);
++	if (payload_size > max_payload_size) {
++		const u32 fn = rpc->function;
++		u32 remain_payload_size = payload_size;
++		void *next;
++
++		/* Send initial RPC. */
++		next = r535_gsp_rpc_get(gsp, fn, max_payload_size);
++		if (IS_ERR(next)) {
++			repv = next;
++			goto done;
++		}
++
++		memcpy(next, payload, max_payload_size);
++
++		repv = r535_gsp_rpc_send(gsp, next, NVKM_GSP_RPC_REPLY_NOWAIT, 0);
++		if (IS_ERR(repv))
++			goto done;
++
++		payload += max_payload_size;
++		remain_payload_size -= max_payload_size;
++
++		/* Remaining chunks sent as CONTINUATION_RECORD RPCs. */
++		while (remain_payload_size) {
++			u32 size = min(remain_payload_size,
++				       max_payload_size);
++
++			next = r535_gsp_rpc_get(gsp, NV_VGPU_MSG_FUNCTION_CONTINUATION_RECORD, size);
++			if (IS_ERR(next)) {
++				repv = next;
++				goto done;
++			}
++
++			memcpy(next, payload, size);
++
++			repv = r535_gsp_rpc_send(gsp, next, NVKM_GSP_RPC_REPLY_NOWAIT, 0);
++			if (IS_ERR(repv))
++				goto done;
++
++			payload += size;
++			remain_payload_size -= size;
++		}
++
++		/* Wait for reply. */
++		repv = r535_gsp_rpc_handle_reply(gsp, fn, policy, payload_size +
++						 sizeof(*rpc));
++		if (!IS_ERR(repv))
++			kvfree(msg);
++	} else {
++		repv = r535_gsp_rpc_send(gsp, payload, policy, gsp_rpc_len);
++	}
++
++done:
++	mutex_unlock(&gsp->cmdq.mutex);
++	return repv;
++}
++
++const struct nvkm_rm_api_rpc
++r535_rpc = {
++	.get = r535_gsp_rpc_get,
++	.push = r535_gsp_rpc_push,
++	.done = r535_gsp_rpc_done,
++};
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/rm.h b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/rm.h
+new file mode 100644
+index 00000000000000..7a0ece9791671f
+--- /dev/null
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/rm.h
+@@ -0,0 +1,20 @@
++/* SPDX-License-Identifier: MIT
++ *
++ * Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
++ */
++#include <subdev/gsp.h>
++#ifndef __NVKM_RM_H__
++#define __NVKM_RM_H__
++
++struct nvkm_rm_api {
++	const struct nvkm_rm_api_rpc {
++		void *(*get)(struct nvkm_gsp *, u32 fn, u32 argc);
++		void *(*push)(struct nvkm_gsp *gsp, void *argv,
++			      enum nvkm_gsp_rpc_reply_policy policy, u32 repc);
++		void (*done)(struct nvkm_gsp *gsp, void *repv);
++	} *rpc;
++};
++
++extern const struct nvkm_rm_api r535_rm;
++extern const struct nvkm_rm_api_rpc r535_rpc;
++#endif
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/rpc.h b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/rpc.h
+new file mode 100644
+index 00000000000000..4431e33b33049f
+--- /dev/null
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/rpc.h
+@@ -0,0 +1,18 @@
++/* SPDX-License-Identifier: MIT
++ *
++ * Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
++ */
++#ifndef __NVKM_RM_RPC_H__
++#define __NVKM_RM_RPC_H__
++#include "rm.h"
++
++#define to_payload_hdr(p, header) \
++	container_of((void *)p, typeof(*header), params)
++
++int r535_gsp_rpc_poll(struct nvkm_gsp *, u32 fn);
++
++struct nvfw_gsp_rpc *r535_gsp_msg_recv(struct nvkm_gsp *, int fn, u32 gsp_rpc_len);
++int r535_gsp_msg_ntfy_add(struct nvkm_gsp *, u32 fn, nvkm_gsp_msg_ntfy_func, void *priv);
++
++int r535_rpc_status_to_errno(uint32_t rpc_status);
++#endif
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/r535.c b/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/r535.c
+index 5f3c9c02a4c04b..35ba1798ee6e49 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/r535.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/r535.c
+@@ -105,7 +105,7 @@ fbsr_memlist(struct nvkm_gsp_device *device, u32 handle, enum nvkm_memory_target
+ 			rpc->pteDesc.pte_pde[i].pte = (phys >> 12) + i;
+ 	}
+ 
+-	ret = nvkm_gsp_rpc_wr(gsp, rpc, true);
++	ret = nvkm_gsp_rpc_wr(gsp, rpc, NVKM_GSP_RPC_REPLY_POLL);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c b/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
+index 729cbb0d8403ff..36abfa2e65e962 100644
+--- a/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
++++ b/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
+@@ -36,60 +36,49 @@ static inline struct sharp_nt_panel *to_sharp_nt_panel(struct drm_panel *panel)
+ static int sharp_nt_panel_init(struct sharp_nt_panel *sharp_nt)
+ {
+ 	struct mipi_dsi_device *dsi = sharp_nt->dsi;
+-	int ret;
++	struct mipi_dsi_multi_context dsi_ctx = { .dsi = dsi };
+ 
+ 	dsi->mode_flags |= MIPI_DSI_MODE_LPM;
+ 
+-	ret = mipi_dsi_dcs_exit_sleep_mode(dsi);
+-	if (ret < 0)
+-		return ret;
++	mipi_dsi_dcs_exit_sleep_mode_multi(&dsi_ctx);
+ 
+-	msleep(120);
++	mipi_dsi_msleep(&dsi_ctx, 120);
+ 
+ 	/* Novatek two-lane operation */
+-	ret = mipi_dsi_dcs_write(dsi, 0xae, (u8[]){ 0x03 }, 1);
+-	if (ret < 0)
+-		return ret;
++	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xae, 0x03);
+ 
+ 	/* Set both MCU and RGB I/F to 24bpp */
+-	ret = mipi_dsi_dcs_set_pixel_format(dsi, MIPI_DCS_PIXEL_FMT_24BIT |
+-					(MIPI_DCS_PIXEL_FMT_24BIT << 4));
+-	if (ret < 0)
+-		return ret;
++	mipi_dsi_dcs_set_pixel_format_multi(&dsi_ctx,
++					    MIPI_DCS_PIXEL_FMT_24BIT |
++					    (MIPI_DCS_PIXEL_FMT_24BIT << 4));
+ 
+-	return 0;
++	return dsi_ctx.accum_err;
+ }
+ 
+ static int sharp_nt_panel_on(struct sharp_nt_panel *sharp_nt)
+ {
+ 	struct mipi_dsi_device *dsi = sharp_nt->dsi;
+-	int ret;
++	struct mipi_dsi_multi_context dsi_ctx = { .dsi = dsi };
+ 
+ 	dsi->mode_flags |= MIPI_DSI_MODE_LPM;
+ 
+-	ret = mipi_dsi_dcs_set_display_on(dsi);
+-	if (ret < 0)
+-		return ret;
++	mipi_dsi_dcs_set_display_on_multi(&dsi_ctx);
+ 
+-	return 0;
++	return dsi_ctx.accum_err;
+ }
+ 
+ static int sharp_nt_panel_off(struct sharp_nt_panel *sharp_nt)
+ {
+ 	struct mipi_dsi_device *dsi = sharp_nt->dsi;
+-	int ret;
++	struct mipi_dsi_multi_context dsi_ctx = { .dsi = dsi };
+ 
+ 	dsi->mode_flags &= ~MIPI_DSI_MODE_LPM;
+ 
+-	ret = mipi_dsi_dcs_set_display_off(dsi);
+-	if (ret < 0)
+-		return ret;
++	mipi_dsi_dcs_set_display_off_multi(&dsi_ctx);
+ 
+-	ret = mipi_dsi_dcs_enter_sleep_mode(dsi);
+-	if (ret < 0)
+-		return ret;
++	mipi_dsi_dcs_enter_sleep_mode_multi(&dsi_ctx);
+ 
+-	return 0;
++	return dsi_ctx.accum_err;
+ }
+ 
+ static int sharp_nt_panel_unprepare(struct drm_panel *panel)
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 3aaac96c0bfbf5..53211b5eaa09b9 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -3798,6 +3798,32 @@ static const struct panel_desc pda_91_00156_a0  = {
+ 	.bus_format = MEDIA_BUS_FMT_RGB888_1X24,
+ };
+ 
++static const struct drm_display_mode powertip_ph128800t004_zza01_mode = {
++	.clock = 71150,
++	.hdisplay = 1280,
++	.hsync_start = 1280 + 48,
++	.hsync_end = 1280 + 48 + 32,
++	.htotal = 1280 + 48 + 32 + 80,
++	.vdisplay = 800,
++	.vsync_start = 800 + 9,
++	.vsync_end = 800 + 9 + 8,
++	.vtotal = 800 + 9 + 8 + 6,
++	.flags = DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC,
++};
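++
++/* Refresh rate implied by the mode above: 71150 kHz / (1440 * 823) ~= 60 Hz. */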
++
++static const struct panel_desc powertip_ph128800t004_zza01 = {
++	.modes = &powertip_ph128800t004_zza01_mode,
++	.num_modes = 1,
++	.bpc = 8,
++	.size = {
++		.width = 216,
++		.height = 135,
++	},
++	.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
++	.bus_flags = DRM_BUS_FLAG_DE_HIGH,
++	.connector_type = DRM_MODE_CONNECTOR_LVDS,
++};
++
+ static const struct drm_display_mode powertip_ph128800t006_zhc01_mode = {
+ 	.clock = 66500,
+ 	.hdisplay = 1280,
+@@ -5155,6 +5181,9 @@ static const struct of_device_id platform_of_match[] = {
+ 	}, {
+ 		.compatible = "pda,91-00156-a0",
+ 		.data = &pda_91_00156_a0,
++	}, {
++		.compatible = "powertip,ph128800t004-zza01",
++		.data = &powertip_ph128800t004_zza01,
+ 	}, {
+ 		.compatible = "powertip,ph128800t006-zhc01",
+ 		.data = &powertip_ph128800t006_zhc01,
+diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
+index 7cca97d298ea10..cc838bfc82e00e 100644
+--- a/drivers/gpu/drm/panthor/panthor_mmu.c
++++ b/drivers/gpu/drm/panthor/panthor_mmu.c
+@@ -1714,7 +1714,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status)
+ 		 * re-enabled.
+ 		 */
+ 		ptdev->mmu->irq.mask = new_int_mask;
+-		gpu_write(ptdev, MMU_INT_MASK, new_int_mask);
+ 
+ 		if (ptdev->mmu->as.slots[as].vm)
+ 			ptdev->mmu->as.slots[as].vm->unhandled_fault = true;
+diff --git a/drivers/gpu/drm/rockchip/inno_hdmi.c b/drivers/gpu/drm/rockchip/inno_hdmi.c
+index 483ecfeaebb08f..87dfd300015832 100644
+--- a/drivers/gpu/drm/rockchip/inno_hdmi.c
++++ b/drivers/gpu/drm/rockchip/inno_hdmi.c
+@@ -10,10 +10,12 @@
+ #include <linux/delay.h>
+ #include <linux/err.h>
+ #include <linux/hdmi.h>
++#include <linux/mfd/syscon.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/module.h>
+ #include <linux/mutex.h>
+ #include <linux/platform_device.h>
++#include <linux/regmap.h>
+ 
+ #include <drm/drm_atomic.h>
+ #include <drm/drm_atomic_helper.h>
+@@ -29,8 +31,19 @@
+ 
+ #include "inno_hdmi.h"
+ 
++#define HIWORD_UPDATE(val, mask)	((val) | (mask) << 16)
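++/*
++ * Rockchip GRF registers carry a write mask in the upper 16 bits: a bit
++ * is only updated when its mask bit is set. E.g. HIWORD_UPDATE(BIT(4),
++ * BIT(4)) expands to 0x00100010 - write bit 4, leave the rest untouched.
++ */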
++
+ #define INNO_HDMI_MIN_TMDS_CLOCK  25000000U
+ 
++#define RK3036_GRF_SOC_CON2	0x148
++#define RK3036_HDMI_PHSYNC	BIT(4)
++#define RK3036_HDMI_PVSYNC	BIT(5)
++
++enum inno_hdmi_dev_type {
++	RK3036_HDMI,
++	RK3128_HDMI,
++};
++
+ struct inno_hdmi_phy_config {
+ 	unsigned long pixelclock;
+ 	u8 pre_emphasis;
+@@ -38,6 +51,7 @@ struct inno_hdmi_phy_config {
+ };
+ 
+ struct inno_hdmi_variant {
++	enum inno_hdmi_dev_type dev_type;
+ 	struct inno_hdmi_phy_config *phy_configs;
+ 	struct inno_hdmi_phy_config *default_phy_config;
+ };
+@@ -58,6 +72,7 @@ struct inno_hdmi {
+ 	struct clk *pclk;
+ 	struct clk *refclk;
+ 	void __iomem *regs;
++	struct regmap *grf;
+ 
+ 	struct drm_connector	connector;
+ 	struct rockchip_encoder	encoder;
+@@ -374,7 +389,15 @@ static int inno_hdmi_config_video_csc(struct inno_hdmi *hdmi)
+ static int inno_hdmi_config_video_timing(struct inno_hdmi *hdmi,
+ 					 struct drm_display_mode *mode)
+ {
+-	int value;
++	int value, psync;
++
++	if (hdmi->variant->dev_type == RK3036_HDMI) {
++		psync = mode->flags & DRM_MODE_FLAG_PHSYNC ? RK3036_HDMI_PHSYNC : 0;
++		value = HIWORD_UPDATE(psync, RK3036_HDMI_PHSYNC);
++		psync = mode->flags & DRM_MODE_FLAG_PVSYNC ? RK3036_HDMI_PVSYNC : 0;
++		value |= HIWORD_UPDATE(psync, RK3036_HDMI_PVSYNC);
++		regmap_write(hdmi->grf, RK3036_GRF_SOC_CON2, value);
++	}
+ 
+ 	/* Set detail external video timing polarity and interlace mode */
+ 	value = v_EXTERANL_VIDEO(1);
+@@ -911,6 +934,15 @@ static int inno_hdmi_bind(struct device *dev, struct device *master,
+ 		goto err_disable_pclk;
+ 	}
+ 
++	if (hdmi->variant->dev_type == RK3036_HDMI) {
++		hdmi->grf = syscon_regmap_lookup_by_phandle(dev->of_node, "rockchip,grf");
++		if (IS_ERR(hdmi->grf)) {
++			ret = dev_err_probe(dev, PTR_ERR(hdmi->grf),
++					    "Unable to get rockchip,grf\n");
++			goto err_disable_clk;
++		}
++	}
++
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0) {
+ 		ret = irq;
+@@ -995,11 +1027,13 @@ static void inno_hdmi_remove(struct platform_device *pdev)
+ }
+ 
+ static const struct inno_hdmi_variant rk3036_inno_hdmi_variant = {
++	.dev_type = RK3036_HDMI,
+ 	.phy_configs = rk3036_hdmi_phy_configs,
+ 	.default_phy_config = &rk3036_hdmi_phy_configs[1],
+ };
+ 
+ static const struct inno_hdmi_variant rk3128_inno_hdmi_variant = {
++	.dev_type = RK3128_HDMI,
+ 	.phy_configs = rk3128_hdmi_phy_configs,
+ 	.default_phy_config = &rk3128_hdmi_phy_configs[1],
+ };
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h
+index 680bedbb770e6f..fc3ecb9fcd9576 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h
+@@ -710,6 +710,7 @@ enum dst_factor_mode {
+ 
+ #define VOP2_COLOR_KEY_MASK				BIT(31)
+ 
++#define RK3568_OVL_CTRL__LAYERSEL_REGDONE_SEL		GENMASK(31, 30)
+ #define RK3568_OVL_CTRL__LAYERSEL_REGDONE_IMD		BIT(28)
+ #define RK3568_OVL_CTRL__YUV_MODE(vp)			BIT(vp)
+ 
+diff --git a/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c b/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
+index 0a2840cbe8e22d..32c4ed6857395a 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
++++ b/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
+@@ -2070,7 +2070,10 @@ static void rk3568_vop2_setup_layer_mixer(struct vop2_video_port *vp)
+ 	struct rockchip_crtc_state *vcstate = to_rockchip_crtc_state(vp->crtc.state);
+ 
+ 	ovl_ctrl = vop2_readl(vop2, RK3568_OVL_CTRL);
+-	ovl_ctrl |= RK3568_OVL_CTRL__LAYERSEL_REGDONE_IMD;
++	ovl_ctrl &= ~RK3568_OVL_CTRL__LAYERSEL_REGDONE_IMD;
++	ovl_ctrl &= ~RK3568_OVL_CTRL__LAYERSEL_REGDONE_SEL;
++	ovl_ctrl |= FIELD_PREP(RK3568_OVL_CTRL__LAYERSEL_REGDONE_SEL, vp->id);
++
+ 	if (vcstate->yuv_overlay)
+ 		ovl_ctrl |= RK3568_OVL_CTRL__YUV_MODE(vp->id);
+ 	else
+diff --git a/drivers/gpu/drm/solomon/ssd130x.c b/drivers/gpu/drm/solomon/ssd130x.c
+index dd2006d51c7a2f..eec43d1a55951b 100644
+--- a/drivers/gpu/drm/solomon/ssd130x.c
++++ b/drivers/gpu/drm/solomon/ssd130x.c
+@@ -974,7 +974,7 @@ static void ssd130x_clear_screen(struct ssd130x_device *ssd130x, u8 *data_array)
+ 
+ static void ssd132x_clear_screen(struct ssd130x_device *ssd130x, u8 *data_array)
+ {
+-	unsigned int columns = DIV_ROUND_UP(ssd130x->height, SSD132X_SEGMENT_WIDTH);
++	unsigned int columns = DIV_ROUND_UP(ssd130x->width, SSD132X_SEGMENT_WIDTH);
+ 	unsigned int height = ssd130x->height;
+ 
+ 	memset(data_array, 0, columns * height);
+diff --git a/drivers/gpu/drm/tiny/Kconfig b/drivers/gpu/drm/tiny/Kconfig
+index 54c84c9801c192..cbdd2096b17a0b 100644
+--- a/drivers/gpu/drm/tiny/Kconfig
++++ b/drivers/gpu/drm/tiny/Kconfig
+@@ -3,6 +3,7 @@
+ config DRM_APPLETBDRM
+ 	tristate "DRM support for Apple Touch Bars"
+ 	depends on DRM && USB && MMU
++	depends on X86 || COMPILE_TEST
+ 	select DRM_GEM_SHMEM_HELPER
+ 	select DRM_KMS_HELPER
+ 	help
+diff --git a/drivers/gpu/drm/ttm/tests/ttm_bo_test.c b/drivers/gpu/drm/ttm/tests/ttm_bo_test.c
+index f8f20d2f61740f..e08e5a138420e3 100644
+--- a/drivers/gpu/drm/ttm/tests/ttm_bo_test.c
++++ b/drivers/gpu/drm/ttm/tests/ttm_bo_test.c
+@@ -340,7 +340,7 @@ static void ttm_bo_unreserve_bulk(struct kunit *test)
+ 	KUNIT_ASSERT_NOT_NULL(test, ttm_dev);
+ 
+ 	resv = kunit_kzalloc(test, sizeof(*resv), GFP_KERNEL);
+-	KUNIT_ASSERT_NOT_NULL(test, ttm_dev);
++	KUNIT_ASSERT_NOT_NULL(test, resv);
+ 
+ 	err = ttm_device_kunit_init(priv, ttm_dev, false, false);
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
+index 35f131a46d0701..42df9d3567e79e 100644
+--- a/drivers/gpu/drm/v3d/v3d_sched.c
++++ b/drivers/gpu/drm/v3d/v3d_sched.c
+@@ -199,7 +199,6 @@ v3d_job_update_stats(struct v3d_job *job, enum v3d_queue queue)
+ 	struct v3d_dev *v3d = job->v3d;
+ 	struct v3d_file_priv *file = job->file->driver_priv;
+ 	struct v3d_stats *global_stats = &v3d->queue[queue].stats;
+-	struct v3d_stats *local_stats = &file->stats[queue];
+ 	u64 now = local_clock();
+ 	unsigned long flags;
+ 
+@@ -209,7 +208,12 @@ v3d_job_update_stats(struct v3d_job *job, enum v3d_queue queue)
+ 	else
+ 		preempt_disable();
+ 
+-	v3d_stats_update(local_stats, now);
++	/* Don't update the local stats if the file context has already closed */
++	if (file)
++		v3d_stats_update(&file->stats[queue], now);
++	else
++		drm_dbg(&v3d->drm, "The file descriptor was closed before job completion\n");
++
+ 	v3d_stats_update(global_stats, now);
+ 
+ 	if (IS_ENABLED(CONFIG_LOCKDEP))
+diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
+index 5922302c3e00cc..2c9d57cf8d5334 100644
+--- a/drivers/gpu/drm/xe/xe_bo.c
++++ b/drivers/gpu/drm/xe/xe_bo.c
+@@ -2408,7 +2408,7 @@ static int gem_create_user_ext_set_property(struct xe_device *xe,
+ 	int err;
+ 	u32 idx;
+ 
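++	/* copy_from_user(), unlike __copy_from_user(), checks access_ok(). */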
+-	err = __copy_from_user(&ext, address, sizeof(ext));
++	err = copy_from_user(&ext, address, sizeof(ext));
+ 	if (XE_IOCTL_DBG(xe, err))
+ 		return -EFAULT;
+ 
+@@ -2445,7 +2445,7 @@ static int gem_create_user_extensions(struct xe_device *xe, struct xe_bo *bo,
+ 	if (XE_IOCTL_DBG(xe, ext_number >= MAX_USER_EXTENSIONS))
+ 		return -E2BIG;
+ 
+-	err = __copy_from_user(&ext, address, sizeof(ext));
++	err = copy_from_user(&ext, address, sizeof(ext));
+ 	if (XE_IOCTL_DBG(xe, err))
+ 		return -EFAULT;
+ 
+diff --git a/drivers/gpu/drm/xe/xe_eu_stall.c b/drivers/gpu/drm/xe/xe_eu_stall.c
+index e2bb156c71fb08..96732613b4b7df 100644
+--- a/drivers/gpu/drm/xe/xe_eu_stall.c
++++ b/drivers/gpu/drm/xe/xe_eu_stall.c
+@@ -283,7 +283,7 @@ static int xe_eu_stall_user_ext_set_property(struct xe_device *xe, u64 extension
+ 	int err;
+ 	u32 idx;
+ 
+-	err = __copy_from_user(&ext, address, sizeof(ext));
++	err = copy_from_user(&ext, address, sizeof(ext));
+ 	if (XE_IOCTL_DBG(xe, err))
+ 		return -EFAULT;
+ 
+@@ -313,7 +313,7 @@ static int xe_eu_stall_user_extensions(struct xe_device *xe, u64 extension,
+ 	if (XE_IOCTL_DBG(xe, ext_number >= MAX_USER_EXTENSIONS))
+ 		return -E2BIG;
+ 
+-	err = __copy_from_user(&ext, address, sizeof(ext));
++	err = copy_from_user(&ext, address, sizeof(ext));
+ 	if (XE_IOCTL_DBG(xe, err))
+ 		return -EFAULT;
+ 
+diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
+index b75adfc99fb7cd..44364c042ad72d 100644
+--- a/drivers/gpu/drm/xe/xe_exec.c
++++ b/drivers/gpu/drm/xe/xe_exec.c
+@@ -176,8 +176,8 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+ 	}
+ 
+ 	if (xe_exec_queue_is_parallel(q)) {
+-		err = __copy_from_user(addresses, addresses_user, sizeof(u64) *
+-				       q->width);
++		err = copy_from_user(addresses, addresses_user, sizeof(u64) *
++				     q->width);
+ 		if (err) {
+ 			err = -EFAULT;
+ 			goto err_syncs;
+diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
+index cd9b1c32f30f80..ce78cee5dec684 100644
+--- a/drivers/gpu/drm/xe/xe_exec_queue.c
++++ b/drivers/gpu/drm/xe/xe_exec_queue.c
+@@ -479,7 +479,7 @@ static int exec_queue_user_ext_set_property(struct xe_device *xe,
+ 	int err;
+ 	u32 idx;
+ 
+-	err = __copy_from_user(&ext, address, sizeof(ext));
++	err = copy_from_user(&ext, address, sizeof(ext));
+ 	if (XE_IOCTL_DBG(xe, err))
+ 		return -EFAULT;
+ 
+@@ -518,7 +518,7 @@ static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue
+ 	if (XE_IOCTL_DBG(xe, ext_number >= MAX_USER_EXTENSIONS))
+ 		return -E2BIG;
+ 
+-	err = __copy_from_user(&ext, address, sizeof(ext));
++	err = copy_from_user(&ext, address, sizeof(ext));
+ 	if (XE_IOCTL_DBG(xe, err))
+ 		return -EFAULT;
+ 
+@@ -618,9 +618,8 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
+ 	if (XE_IOCTL_DBG(xe, !len || len > XE_HW_ENGINE_MAX_INSTANCE))
+ 		return -EINVAL;
+ 
+-	err = __copy_from_user(eci, user_eci,
+-			       sizeof(struct drm_xe_engine_class_instance) *
+-			       len);
++	err = copy_from_user(eci, user_eci,
++			     sizeof(struct drm_xe_engine_class_instance) * len);
+ 	if (XE_IOCTL_DBG(xe, err))
+ 		return -EFAULT;
+ 
+diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
+index 66198cf2662c57..4bad8894fa12c3 100644
+--- a/drivers/gpu/drm/xe/xe_gt.c
++++ b/drivers/gpu/drm/xe/xe_gt.c
+@@ -116,7 +116,7 @@ static void xe_gt_enable_host_l2_vram(struct xe_gt *gt)
+ 		xe_gt_mcr_multicast_write(gt, XE2_GAMREQSTRM_CTRL, reg);
+ 	}
+ 
+-	xe_gt_mcr_multicast_write(gt, XEHPC_L3CLOS_MASK(3), 0x3);
++	xe_gt_mcr_multicast_write(gt, XEHPC_L3CLOS_MASK(3), 0xF);
+ 	xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ }
+ 
+diff --git a/drivers/gpu/drm/xe/xe_gt_freq.c b/drivers/gpu/drm/xe/xe_gt_freq.c
+index 552ac92496a408..60d9354e7dbf4b 100644
+--- a/drivers/gpu/drm/xe/xe_gt_freq.c
++++ b/drivers/gpu/drm/xe/xe_gt_freq.c
+@@ -61,9 +61,10 @@ dev_to_xe(struct device *dev)
+ 	return gt_to_xe(kobj_to_gt(dev->kobj.parent));
+ }
+ 
+-static ssize_t act_freq_show(struct device *dev,
+-			     struct device_attribute *attr, char *buf)
++static ssize_t act_freq_show(struct kobject *kobj,
++			     struct kobj_attribute *attr, char *buf)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_guc_pc *pc = dev_to_pc(dev);
+ 	u32 freq;
+ 
+@@ -73,11 +74,12 @@ static ssize_t act_freq_show(struct device *dev,
+ 
+ 	return sysfs_emit(buf, "%d\n", freq);
+ }
+-static DEVICE_ATTR_RO(act_freq);
++static struct kobj_attribute attr_act_freq = __ATTR_RO(act_freq);
+ 
+-static ssize_t cur_freq_show(struct device *dev,
+-			     struct device_attribute *attr, char *buf)
++static ssize_t cur_freq_show(struct kobject *kobj,
++			     struct kobj_attribute *attr, char *buf)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_guc_pc *pc = dev_to_pc(dev);
+ 	u32 freq;
+ 	ssize_t ret;
+@@ -90,11 +92,12 @@ static ssize_t cur_freq_show(struct device *dev,
+ 
+ 	return sysfs_emit(buf, "%d\n", freq);
+ }
+-static DEVICE_ATTR_RO(cur_freq);
++static struct kobj_attribute attr_cur_freq = __ATTR_RO(cur_freq);
+ 
+-static ssize_t rp0_freq_show(struct device *dev,
+-			     struct device_attribute *attr, char *buf)
++static ssize_t rp0_freq_show(struct kobject *kobj,
++			     struct kobj_attribute *attr, char *buf)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_guc_pc *pc = dev_to_pc(dev);
+ 	u32 freq;
+ 
+@@ -104,11 +107,12 @@ static ssize_t rp0_freq_show(struct device *dev,
+ 
+ 	return sysfs_emit(buf, "%d\n", freq);
+ }
+-static DEVICE_ATTR_RO(rp0_freq);
++static struct kobj_attribute attr_rp0_freq = __ATTR_RO(rp0_freq);
+ 
+-static ssize_t rpe_freq_show(struct device *dev,
+-			     struct device_attribute *attr, char *buf)
++static ssize_t rpe_freq_show(struct kobject *kobj,
++			     struct kobj_attribute *attr, char *buf)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_guc_pc *pc = dev_to_pc(dev);
+ 	u32 freq;
+ 
+@@ -118,11 +122,12 @@ static ssize_t rpe_freq_show(struct device *dev,
+ 
+ 	return sysfs_emit(buf, "%d\n", freq);
+ }
+-static DEVICE_ATTR_RO(rpe_freq);
++static struct kobj_attribute attr_rpe_freq = __ATTR_RO(rpe_freq);
+ 
+-static ssize_t rpa_freq_show(struct device *dev,
+-			     struct device_attribute *attr, char *buf)
++static ssize_t rpa_freq_show(struct kobject *kobj,
++			     struct kobj_attribute *attr, char *buf)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_guc_pc *pc = dev_to_pc(dev);
+ 	u32 freq;
+ 
+@@ -132,20 +137,22 @@ static ssize_t rpa_freq_show(struct device *dev,
+ 
+ 	return sysfs_emit(buf, "%d\n", freq);
+ }
+-static DEVICE_ATTR_RO(rpa_freq);
++static struct kobj_attribute attr_rpa_freq = __ATTR_RO(rpa_freq);
+ 
+-static ssize_t rpn_freq_show(struct device *dev,
+-			     struct device_attribute *attr, char *buf)
++static ssize_t rpn_freq_show(struct kobject *kobj,
++			     struct kobj_attribute *attr, char *buf)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_guc_pc *pc = dev_to_pc(dev);
+ 
+ 	return sysfs_emit(buf, "%d\n", xe_guc_pc_get_rpn_freq(pc));
+ }
+-static DEVICE_ATTR_RO(rpn_freq);
++static struct kobj_attribute attr_rpn_freq = __ATTR_RO(rpn_freq);
+ 
+-static ssize_t min_freq_show(struct device *dev,
+-			     struct device_attribute *attr, char *buf)
++static ssize_t min_freq_show(struct kobject *kobj,
++			     struct kobj_attribute *attr, char *buf)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_guc_pc *pc = dev_to_pc(dev);
+ 	u32 freq;
+ 	ssize_t ret;
+@@ -159,9 +166,10 @@ static ssize_t min_freq_show(struct device *dev,
+ 	return sysfs_emit(buf, "%d\n", freq);
+ }
+ 
+-static ssize_t min_freq_store(struct device *dev, struct device_attribute *attr,
+-			      const char *buff, size_t count)
++static ssize_t min_freq_store(struct kobject *kobj,
++			      struct kobj_attribute *attr, const char *buff, size_t count)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_guc_pc *pc = dev_to_pc(dev);
+ 	u32 freq;
+ 	ssize_t ret;
+@@ -178,11 +186,12 @@ static ssize_t min_freq_store(struct device *dev, struct device_attribute *attr,
+ 
+ 	return count;
+ }
+-static DEVICE_ATTR_RW(min_freq);
++static struct kobj_attribute attr_min_freq = __ATTR_RW(min_freq);
+ 
+-static ssize_t max_freq_show(struct device *dev,
+-			     struct device_attribute *attr, char *buf)
++static ssize_t max_freq_show(struct kobject *kobj,
++			     struct kobj_attribute *attr, char *buf)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_guc_pc *pc = dev_to_pc(dev);
+ 	u32 freq;
+ 	ssize_t ret;
+@@ -196,9 +205,10 @@ static ssize_t max_freq_show(struct device *dev,
+ 	return sysfs_emit(buf, "%d\n", freq);
+ }
+ 
+-static ssize_t max_freq_store(struct device *dev, struct device_attribute *attr,
+-			      const char *buff, size_t count)
++static ssize_t max_freq_store(struct kobject *kobj,
++			      struct kobj_attribute *attr, const char *buff, size_t count)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_guc_pc *pc = dev_to_pc(dev);
+ 	u32 freq;
+ 	ssize_t ret;
+@@ -215,17 +225,17 @@ static ssize_t max_freq_store(struct device *dev, struct device_attribute *attr,
+ 
+ 	return count;
+ }
+-static DEVICE_ATTR_RW(max_freq);
++static struct kobj_attribute attr_max_freq = __ATTR_RW(max_freq);
+ 
+ static const struct attribute *freq_attrs[] = {
+-	&dev_attr_act_freq.attr,
+-	&dev_attr_cur_freq.attr,
+-	&dev_attr_rp0_freq.attr,
+-	&dev_attr_rpa_freq.attr,
+-	&dev_attr_rpe_freq.attr,
+-	&dev_attr_rpn_freq.attr,
+-	&dev_attr_min_freq.attr,
+-	&dev_attr_max_freq.attr,
++	&attr_act_freq.attr,
++	&attr_cur_freq.attr,
++	&attr_rp0_freq.attr,
++	&attr_rpa_freq.attr,
++	&attr_rpe_freq.attr,
++	&attr_rpn_freq.attr,
++	&attr_min_freq.attr,
++	&attr_max_freq.attr,
+ 	NULL
+ };
+ 
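These attributes live in a sysfs directory backed by a bare kobject rather than by the device core, so sysfs dispatches reads and writes through kobj_sysfs_ops, whose callbacks take a struct kobj_attribute. The old DEVICE_ATTR_RO()/DEVICE_ATTR_RW() definitions supplied device_attribute callbacks and worked only because the two prototypes happen to be layout-compatible; under kernel control-flow integrity checking that mismatch traps, which is presumably what motivated this conversion (an inference from the prototypes, not stated in the hunk). One attribute's worth of the new pattern, with demo_read() as a stand-in:

    static ssize_t demo_show(struct kobject *kobj,
                             struct kobj_attribute *attr, char *buf)
    {
        struct device *dev = kobj_to_dev(kobj);  /* as the xe helpers expect */

        return sysfs_emit(buf, "%d\n", demo_read(dev));
    }
    static struct kobj_attribute attr_demo = __ATTR_RO(demo);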
+diff --git a/drivers/gpu/drm/xe/xe_gt_idle.c b/drivers/gpu/drm/xe/xe_gt_idle.c
+index fbbace7b0b12a9..c11206410a4d4e 100644
+--- a/drivers/gpu/drm/xe/xe_gt_idle.c
++++ b/drivers/gpu/drm/xe/xe_gt_idle.c
+@@ -249,9 +249,10 @@ int xe_gt_idle_pg_print(struct xe_gt *gt, struct drm_printer *p)
+ 	return 0;
+ }
+ 
+-static ssize_t name_show(struct device *dev,
+-			 struct device_attribute *attr, char *buff)
++static ssize_t name_show(struct kobject *kobj,
++			 struct kobj_attribute *attr, char *buff)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_gt_idle *gtidle = dev_to_gtidle(dev);
+ 	struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
+ 	ssize_t ret;
+@@ -262,11 +263,12 @@ static ssize_t name_show(struct device *dev,
+ 
+ 	return ret;
+ }
+-static DEVICE_ATTR_RO(name);
++static struct kobj_attribute name_attr = __ATTR_RO(name);
+ 
+-static ssize_t idle_status_show(struct device *dev,
+-				struct device_attribute *attr, char *buff)
++static ssize_t idle_status_show(struct kobject *kobj,
++				struct kobj_attribute *attr, char *buff)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_gt_idle *gtidle = dev_to_gtidle(dev);
+ 	struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
+ 	enum xe_gt_idle_state state;
+@@ -277,6 +279,7 @@ static ssize_t idle_status_show(struct device *dev,
+ 
+ 	return sysfs_emit(buff, "%s\n", gt_idle_state_to_string(state));
+ }
++static struct kobj_attribute idle_status_attr = __ATTR_RO(idle_status);
+ 
+ u64 xe_gt_idle_residency_msec(struct xe_gt_idle *gtidle)
+ {
+@@ -291,10 +294,11 @@ u64 xe_gt_idle_residency_msec(struct xe_gt_idle *gtidle)
+ 	return residency;
+ }
+ 
+-static DEVICE_ATTR_RO(idle_status);
+-static ssize_t idle_residency_ms_show(struct device *dev,
+-				      struct device_attribute *attr, char *buff)
++
++static ssize_t idle_residency_ms_show(struct kobject *kobj,
++				      struct kobj_attribute *attr, char *buff)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_gt_idle *gtidle = dev_to_gtidle(dev);
+ 	struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
+ 	u64 residency;
+@@ -305,12 +309,12 @@ static ssize_t idle_residency_ms_show(struct device *dev,
+ 
+ 	return sysfs_emit(buff, "%llu\n", residency);
+ }
+-static DEVICE_ATTR_RO(idle_residency_ms);
++static struct kobj_attribute idle_residency_attr = __ATTR_RO(idle_residency_ms);
+ 
+ static const struct attribute *gt_idle_attrs[] = {
+-	&dev_attr_name.attr,
+-	&dev_attr_idle_status.attr,
+-	&dev_attr_idle_residency_ms.attr,
++	&name_attr.attr,
++	&idle_status_attr.attr,
++	&idle_residency_attr.attr,
+ 	NULL,
+ };
+ 
+diff --git a/drivers/gpu/drm/xe/xe_gt_throttle.c b/drivers/gpu/drm/xe/xe_gt_throttle.c
+index 8db78d616b6f29..aa962c783cdf7a 100644
+--- a/drivers/gpu/drm/xe/xe_gt_throttle.c
++++ b/drivers/gpu/drm/xe/xe_gt_throttle.c
+@@ -114,115 +114,115 @@ static u32 read_reason_vr_tdc(struct xe_gt *gt)
+ 	return tdc;
+ }
+ 
+-static ssize_t status_show(struct device *dev,
+-			   struct device_attribute *attr,
+-			   char *buff)
++static ssize_t status_show(struct kobject *kobj,
++			   struct kobj_attribute *attr, char *buff)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_gt *gt = dev_to_gt(dev);
+ 	bool status = !!read_status(gt);
+ 
+ 	return sysfs_emit(buff, "%u\n", status);
+ }
+-static DEVICE_ATTR_RO(status);
++static struct kobj_attribute attr_status = __ATTR_RO(status);
+ 
+-static ssize_t reason_pl1_show(struct device *dev,
+-			       struct device_attribute *attr,
+-			       char *buff)
++static ssize_t reason_pl1_show(struct kobject *kobj,
++			       struct kobj_attribute *attr, char *buff)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_gt *gt = dev_to_gt(dev);
+ 	bool pl1 = !!read_reason_pl1(gt);
+ 
+ 	return sysfs_emit(buff, "%u\n", pl1);
+ }
+-static DEVICE_ATTR_RO(reason_pl1);
++static struct kobj_attribute attr_reason_pl1 = __ATTR_RO(reason_pl1);
+ 
+-static ssize_t reason_pl2_show(struct device *dev,
+-			       struct device_attribute *attr,
+-			       char *buff)
++static ssize_t reason_pl2_show(struct kobject *kobj,
++			       struct kobj_attribute *attr, char *buff)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_gt *gt = dev_to_gt(dev);
+ 	bool pl2 = !!read_reason_pl2(gt);
+ 
+ 	return sysfs_emit(buff, "%u\n", pl2);
+ }
+-static DEVICE_ATTR_RO(reason_pl2);
++static struct kobj_attribute attr_reason_pl2 = __ATTR_RO(reason_pl2);
+ 
+-static ssize_t reason_pl4_show(struct device *dev,
+-			       struct device_attribute *attr,
+-			       char *buff)
++static ssize_t reason_pl4_show(struct kobject *kobj,
++			       struct kobj_attribute *attr, char *buff)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_gt *gt = dev_to_gt(dev);
+ 	bool pl4 = !!read_reason_pl4(gt);
+ 
+ 	return sysfs_emit(buff, "%u\n", pl4);
+ }
+-static DEVICE_ATTR_RO(reason_pl4);
++static struct kobj_attribute attr_reason_pl4 = __ATTR_RO(reason_pl4);
+ 
+-static ssize_t reason_thermal_show(struct device *dev,
+-				   struct device_attribute *attr,
+-				   char *buff)
++static ssize_t reason_thermal_show(struct kobject *kobj,
++				   struct kobj_attribute *attr, char *buff)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_gt *gt = dev_to_gt(dev);
+ 	bool thermal = !!read_reason_thermal(gt);
+ 
+ 	return sysfs_emit(buff, "%u\n", thermal);
+ }
+-static DEVICE_ATTR_RO(reason_thermal);
++static struct kobj_attribute attr_reason_thermal = __ATTR_RO(reason_thermal);
+ 
+-static ssize_t reason_prochot_show(struct device *dev,
+-				   struct device_attribute *attr,
+-				   char *buff)
++static ssize_t reason_prochot_show(struct kobject *kobj,
++				   struct kobj_attribute *attr, char *buff)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_gt *gt = dev_to_gt(dev);
+ 	bool prochot = !!read_reason_prochot(gt);
+ 
+ 	return sysfs_emit(buff, "%u\n", prochot);
+ }
+-static DEVICE_ATTR_RO(reason_prochot);
++static struct kobj_attribute attr_reason_prochot = __ATTR_RO(reason_prochot);
+ 
+-static ssize_t reason_ratl_show(struct device *dev,
+-				struct device_attribute *attr,
+-				char *buff)
++static ssize_t reason_ratl_show(struct kobject *kobj,
++				struct kobj_attribute *attr, char *buff)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_gt *gt = dev_to_gt(dev);
+ 	bool ratl = !!read_reason_ratl(gt);
+ 
+ 	return sysfs_emit(buff, "%u\n", ratl);
+ }
+-static DEVICE_ATTR_RO(reason_ratl);
++static struct kobj_attribute attr_reason_ratl = __ATTR_RO(reason_ratl);
+ 
+-static ssize_t reason_vr_thermalert_show(struct device *dev,
+-					 struct device_attribute *attr,
+-					 char *buff)
++static ssize_t reason_vr_thermalert_show(struct kobject *kobj,
++					 struct kobj_attribute *attr, char *buff)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_gt *gt = dev_to_gt(dev);
+ 	bool thermalert = !!read_reason_vr_thermalert(gt);
+ 
+ 	return sysfs_emit(buff, "%u\n", thermalert);
+ }
+-static DEVICE_ATTR_RO(reason_vr_thermalert);
++static struct kobj_attribute attr_reason_vr_thermalert = __ATTR_RO(reason_vr_thermalert);
+ 
+-static ssize_t reason_vr_tdc_show(struct device *dev,
+-				  struct device_attribute *attr,
+-				  char *buff)
++static ssize_t reason_vr_tdc_show(struct kobject *kobj,
++				  struct kobj_attribute *attr, char *buff)
+ {
++	struct device *dev = kobj_to_dev(kobj);
+ 	struct xe_gt *gt = dev_to_gt(dev);
+ 	bool tdc = !!read_reason_vr_tdc(gt);
+ 
+ 	return sysfs_emit(buff, "%u\n", tdc);
+ }
+-static DEVICE_ATTR_RO(reason_vr_tdc);
++static struct kobj_attribute attr_reason_vr_tdc = __ATTR_RO(reason_vr_tdc);
+ 
+ static struct attribute *throttle_attrs[] = {
+-	&dev_attr_status.attr,
+-	&dev_attr_reason_pl1.attr,
+-	&dev_attr_reason_pl2.attr,
+-	&dev_attr_reason_pl4.attr,
+-	&dev_attr_reason_thermal.attr,
+-	&dev_attr_reason_prochot.attr,
+-	&dev_attr_reason_ratl.attr,
+-	&dev_attr_reason_vr_thermalert.attr,
+-	&dev_attr_reason_vr_tdc.attr,
++	&attr_status.attr,
++	&attr_reason_pl1.attr,
++	&attr_reason_pl2.attr,
++	&attr_reason_pl4.attr,
++	&attr_reason_thermal.attr,
++	&attr_reason_prochot.attr,
++	&attr_reason_ratl.attr,
++	&attr_reason_vr_thermalert.attr,
++	&attr_reason_vr_tdc.attr,
+ 	NULL
+ };
+ 
+diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
+index bc5714a5b36b24..f082be4af4cffe 100644
+--- a/drivers/gpu/drm/xe/xe_guc.c
++++ b/drivers/gpu/drm/xe/xe_guc.c
+@@ -1508,30 +1508,32 @@ void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p)
+ 
+ 	xe_uc_fw_print(&guc->fw, p);
+ 
+-	fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+-	if (!fw_ref)
+-		return;
++	if (!IS_SRIOV_VF(gt_to_xe(gt))) {
++		fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
++		if (!fw_ref)
++			return;
++
++		status = xe_mmio_read32(&gt->mmio, GUC_STATUS);
++
++		drm_printf(p, "\nGuC status 0x%08x:\n", status);
++		drm_printf(p, "\tBootrom status = 0x%x\n",
++			   REG_FIELD_GET(GS_BOOTROM_MASK, status));
++		drm_printf(p, "\tuKernel status = 0x%x\n",
++			   REG_FIELD_GET(GS_UKERNEL_MASK, status));
++		drm_printf(p, "\tMIA Core status = 0x%x\n",
++			   REG_FIELD_GET(GS_MIA_MASK, status));
++		drm_printf(p, "\tLog level = %d\n",
++			   xe_guc_log_get_level(&guc->log));
++
++		drm_puts(p, "\nScratch registers:\n");
++		for (i = 0; i < SOFT_SCRATCH_COUNT; i++) {
++			drm_printf(p, "\t%2d: \t0x%x\n",
++				   i, xe_mmio_read32(&gt->mmio, SOFT_SCRATCH(i)));
++		}
+ 
+-	status = xe_mmio_read32(&gt->mmio, GUC_STATUS);
+-
+-	drm_printf(p, "\nGuC status 0x%08x:\n", status);
+-	drm_printf(p, "\tBootrom status = 0x%x\n",
+-		   REG_FIELD_GET(GS_BOOTROM_MASK, status));
+-	drm_printf(p, "\tuKernel status = 0x%x\n",
+-		   REG_FIELD_GET(GS_UKERNEL_MASK, status));
+-	drm_printf(p, "\tMIA Core status = 0x%x\n",
+-		   REG_FIELD_GET(GS_MIA_MASK, status));
+-	drm_printf(p, "\tLog level = %d\n",
+-		   xe_guc_log_get_level(&guc->log));
+-
+-	drm_puts(p, "\nScratch registers:\n");
+-	for (i = 0; i < SOFT_SCRATCH_COUNT; i++) {
+-		drm_printf(p, "\t%2d: \t0x%x\n",
+-			   i, xe_mmio_read32(&gt->mmio, SOFT_SCRATCH(i)));
++		xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ 	}
+ 
+-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
+-
+ 	drm_puts(p, "\n");
+ 	xe_guc_ct_print(&guc->ct, p, false);
+ 
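This hunk narrows xe_guc_print_info(): the force-wake grab, the GUC_STATUS decode and the scratch-register dump are now wrapped in if (!IS_SRIOV_VF(...)), while the firmware info above and the CT state below are still printed unconditionally. The shape of the guard suggests an SR-IOV virtual function cannot usefully read these GT registers itself; that rationale is an inference, not stated in the hunk.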
+diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
+index 7ffc98f67e6963..777ec6613abda0 100644
+--- a/drivers/gpu/drm/xe/xe_oa.c
++++ b/drivers/gpu/drm/xe/xe_oa.c
+@@ -1301,7 +1301,7 @@ static int xe_oa_user_ext_set_property(struct xe_oa *oa, enum xe_oa_user_extn_fr
+ 	int err;
+ 	u32 idx;
+ 
+-	err = __copy_from_user(&ext, address, sizeof(ext));
++	err = copy_from_user(&ext, address, sizeof(ext));
+ 	if (XE_IOCTL_DBG(oa->xe, err))
+ 		return -EFAULT;
+ 
+@@ -1338,7 +1338,7 @@ static int xe_oa_user_extensions(struct xe_oa *oa, enum xe_oa_user_extn_from fro
+ 	if (XE_IOCTL_DBG(oa->xe, ext_number >= MAX_USER_EXTENSIONS))
+ 		return -E2BIG;
+ 
+-	err = __copy_from_user(&ext, address, sizeof(ext));
++	err = copy_from_user(&ext, address, sizeof(ext));
+ 	if (XE_IOCTL_DBG(oa->xe, err))
+ 		return -EFAULT;
+ 
+@@ -2280,7 +2280,7 @@ int xe_oa_add_config_ioctl(struct drm_device *dev, u64 data, struct drm_file *fi
+ 		return -EACCES;
+ 	}
+ 
+-	err = __copy_from_user(&param, u64_to_user_ptr(data), sizeof(param));
++	err = copy_from_user(&param, u64_to_user_ptr(data), sizeof(param));
+ 	if (XE_IOCTL_DBG(oa->xe, err))
+ 		return -EFAULT;
+ 
+diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
+index 975094c1a58279..496ca4494beee6 100644
+--- a/drivers/gpu/drm/xe/xe_svm.c
++++ b/drivers/gpu/drm/xe/xe_svm.c
+@@ -750,7 +750,7 @@ static bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range,
+ 		return false;
+ 	}
+ 
+-	if (range_size <= SZ_64K && !supports_4K_migration(vm->xe)) {
++	if (range_size < SZ_64K && !supports_4K_migration(vm->xe)) {
+ 		drm_dbg(&vm->xe->drm, "Platform doesn't support SZ_4K range migration\n");
+ 		return false;
+ 	}
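The comparison tightens from <= to <, an off-by-one: a range of exactly SZ_64K was previously refused migration on platforms without 4K support even though it is a full 64K chunk. Boundary behaviour before and after:

    /* range_size   old (<= SZ_64K)   new (< SZ_64K)
     *   SZ_4K        reject            reject
     *   SZ_64K       reject            allow      <- the fix
     *   SZ_2M        allow             allow
     */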
+diff --git a/drivers/gpu/drm/xe/xe_uc_fw.c b/drivers/gpu/drm/xe/xe_uc_fw.c
+index fb0eda3d568290..b553079ae3d647 100644
+--- a/drivers/gpu/drm/xe/xe_uc_fw.c
++++ b/drivers/gpu/drm/xe/xe_uc_fw.c
+@@ -222,8 +222,8 @@ uc_fw_auto_select(struct xe_device *xe, struct xe_uc_fw *uc_fw)
+ 		[XE_UC_FW_TYPE_HUC] = { entries_huc, ARRAY_SIZE(entries_huc) },
+ 		[XE_UC_FW_TYPE_GSC] = { entries_gsc, ARRAY_SIZE(entries_gsc) },
+ 	};
+-	static const struct uc_fw_entry *entries;
+ 	enum xe_platform p = xe->info.platform;
++	const struct uc_fw_entry *entries;
+ 	u32 count;
+ 	int i;
+ 
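A small storage-class fix in uc_fw_auto_select(): the local entries pointer was declared static even though it is reassigned from the lookup table on every call, so the qualifier bought nothing and made a single pointer shared across all devices and callers. It becomes an ordinary automatic variable.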
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index 737172013a8f9e..cc1ae8ba9bb750 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -3087,9 +3087,9 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
+ 		if (!*bind_ops)
+ 			return args->num_binds > 1 ? -ENOBUFS : -ENOMEM;
+ 
+-		err = __copy_from_user(*bind_ops, bind_user,
+-				       sizeof(struct drm_xe_vm_bind_op) *
+-				       args->num_binds);
++		err = copy_from_user(*bind_ops, bind_user,
++				     sizeof(struct drm_xe_vm_bind_op) *
++				     args->num_binds);
+ 		if (XE_IOCTL_DBG(xe, err)) {
+ 			err = -EFAULT;
+ 			goto free_bind_ops;
+diff --git a/drivers/hid/hid-asus.c b/drivers/hid/hid-asus.c
+index 46e3e42f9eb5fb..599c836507ff88 100644
+--- a/drivers/hid/hid-asus.c
++++ b/drivers/hid/hid-asus.c
+@@ -52,6 +52,10 @@ MODULE_DESCRIPTION("Asus HID Keyboard and TouchPad");
+ #define FEATURE_KBD_LED_REPORT_ID1 0x5d
+ #define FEATURE_KBD_LED_REPORT_ID2 0x5e
+ 
++#define ROG_ALLY_REPORT_SIZE 64
++#define ROG_ALLY_X_MIN_MCU 313
++#define ROG_ALLY_MIN_MCU 319
++
+ #define SUPPORT_KBD_BACKLIGHT BIT(0)
+ 
+ #define MAX_TOUCH_MAJOR 8
+@@ -84,6 +88,7 @@ MODULE_DESCRIPTION("Asus HID Keyboard and TouchPad");
+ #define QUIRK_MEDION_E1239T		BIT(10)
+ #define QUIRK_ROG_NKEY_KEYBOARD		BIT(11)
+ #define QUIRK_ROG_CLAYMORE_II_KEYBOARD BIT(12)
++#define QUIRK_ROG_ALLY_XPAD		BIT(13)
+ 
+ #define I2C_KEYBOARD_QUIRKS			(QUIRK_FIX_NOTEBOOK_REPORT | \
+ 						 QUIRK_NO_INIT_REPORTS | \
+@@ -534,9 +539,99 @@ static bool asus_kbd_wmi_led_control_present(struct hid_device *hdev)
+ 	return !!(value & ASUS_WMI_DSTS_PRESENCE_BIT);
+ }
+ 
++/*
++ * We don't care about any other part of the string except the version section.
++ * Example strings: FGA80100.RC72LA.312_T01, FGA80100.RC71LS.318_T01
++ * The bytes "5a 05 03 31 00 1a 13" and possibly more come before the version
++ * string, and there may be additional bytes after the version string such as
++ * "75 00 74 00 65 00" or a postfix such as "_T01"
++ */
++static int mcu_parse_version_string(const u8 *response, size_t response_size)
++{
++	const u8 *end = response + response_size;
++	const u8 *p = response;
++	int dots, err, version;
++	char buf[4];
++
++	dots = 0;
++	while (p < end && dots < 2) {
++		if (*p++ == '.')
++			dots++;
++	}
++
++	if (dots != 2 || p >= end || (p + 3) >= end)
++		return -EINVAL;
++
++	memcpy(buf, p, 3);
++	buf[3] = '\0';
++
++	err = kstrtoint(buf, 10, &version);
++	if (err || version < 0)
++		return -EINVAL;
++
++	return version;
++}
++
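For reference, a minimal userspace mirror of the parser, handy for poking at sample strings outside the kernel. The kernel version uses kstrtoint(), which additionally rejects non-digit characters inside the three-byte window, whereas atoi() below would silently accept them:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int parse_mcu_version(const char *buf, size_t len)
    {
        const char *end = buf + len, *p = buf;
        char digits[4];
        int dots = 0;

        while (p < end && dots < 2)        /* skip past the second '.' */
            if (*p++ == '.')
                dots++;

        if (dots != 2 || p + 3 >= end)     /* need 3 digits plus room after */
            return -1;

        memcpy(digits, p, 3);
        digits[3] = '\0';
        return atoi(digits);
    }

    int main(void)
    {
        const char *s = "FGA80100.RC72LA.312_T01";

        printf("%d\n", parse_mcu_version(s, strlen(s)));  /* prints 312 */
        return 0;
    }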
++static int mcu_request_version(struct hid_device *hdev)
++{
++	u8 *response __free(kfree) = kzalloc(ROG_ALLY_REPORT_SIZE, GFP_KERNEL);
++	const u8 request[] = { 0x5a, 0x05, 0x03, 0x31, 0x00, 0x20 };
++	int ret;
++
++	if (!response)
++		return -ENOMEM;
++
++	ret = asus_kbd_set_report(hdev, request, sizeof(request));
++	if (ret < 0)
++		return ret;
++
++	ret = hid_hw_raw_request(hdev, FEATURE_REPORT_ID, response,
++				ROG_ALLY_REPORT_SIZE, HID_FEATURE_REPORT,
++				HID_REQ_GET_REPORT);
++	if (ret < 0)
++		return ret;
++
++	ret = mcu_parse_version_string(response, ROG_ALLY_REPORT_SIZE);
++	if (ret < 0) {
++		pr_err("Failed to parse MCU version: %d\n", ret);
++		print_hex_dump(KERN_ERR, "MCU: ", DUMP_PREFIX_NONE,
++			      16, 1, response, ROG_ALLY_REPORT_SIZE, false);
++	}
++
++	return ret;
++}
++
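Note the allocation style in mcu_request_version(): declaring response with __free(kfree) (from <linux/cleanup.h>) attaches a cleanup action to the variable, so kfree() runs automatically when it goes out of scope. That is why every early return above can simply return without an explicit free; the one rule is that such a variable must be initialized at its declaration, as done here.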
++static void validate_mcu_fw_version(struct hid_device *hdev, int idProduct)
++{
++	int min_version, version;
++
++	version = mcu_request_version(hdev);
++	if (version < 0)
++		return;
++
++	switch (idProduct) {
++	case USB_DEVICE_ID_ASUSTEK_ROG_NKEY_ALLY:
++		min_version = ROG_ALLY_MIN_MCU;
++		break;
++	case USB_DEVICE_ID_ASUSTEK_ROG_NKEY_ALLY_X:
++		min_version = ROG_ALLY_X_MIN_MCU;
++		break;
++	default:
++		min_version = 0;
++	}
++
++	if (version < min_version) {
++		hid_warn(hdev,
++			"The MCU firmware version must be %d or greater to avoid issues with suspend.\n",
++			min_version);
++	}
++}
++
+ static int asus_kbd_register_leds(struct hid_device *hdev)
+ {
+ 	struct asus_drvdata *drvdata = hid_get_drvdata(hdev);
++	struct usb_interface *intf;
++	struct usb_device *udev;
+ 	unsigned char kbd_func;
+ 	int ret;
+ 
+@@ -560,6 +655,14 @@ static int asus_kbd_register_leds(struct hid_device *hdev)
+ 			if (ret < 0)
+ 				return ret;
+ 		}
++
++		if (drvdata->quirks & QUIRK_ROG_ALLY_XPAD) {
++			intf = to_usb_interface(hdev->dev.parent);
++			udev = interface_to_usbdev(intf);
++			validate_mcu_fw_version(hdev,
++				le16_to_cpu(udev->descriptor.idProduct));
++		}
++
+ 	} else {
+ 		/* Initialize keyboard */
+ 		ret = asus_kbd_init(hdev, FEATURE_KBD_REPORT_ID);
+@@ -1280,10 +1383,10 @@ static const struct hid_device_id asus_devices[] = {
+ 	  QUIRK_USE_KBD_BACKLIGHT | QUIRK_ROG_NKEY_KEYBOARD },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK,
+ 	    USB_DEVICE_ID_ASUSTEK_ROG_NKEY_ALLY),
+-	  QUIRK_USE_KBD_BACKLIGHT | QUIRK_ROG_NKEY_KEYBOARD },
++	  QUIRK_USE_KBD_BACKLIGHT | QUIRK_ROG_NKEY_KEYBOARD | QUIRK_ROG_ALLY_XPAD},
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK,
+ 	    USB_DEVICE_ID_ASUSTEK_ROG_NKEY_ALLY_X),
+-	  QUIRK_USE_KBD_BACKLIGHT | QUIRK_ROG_NKEY_KEYBOARD },
++	  QUIRK_USE_KBD_BACKLIGHT | QUIRK_ROG_NKEY_KEYBOARD | QUIRK_ROG_ALLY_XPAD },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK,
+ 	    USB_DEVICE_ID_ASUSTEK_ROG_CLAYMORE_II_KEYBOARD),
+ 	  QUIRK_ROG_CLAYMORE_II_KEYBOARD },
+diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
+index 8351360bba1617..be490c59878524 100644
+--- a/drivers/hv/connection.c
++++ b/drivers/hv/connection.c
+@@ -206,11 +206,20 @@ int vmbus_connect(void)
+ 	INIT_LIST_HEAD(&vmbus_connection.chn_list);
+ 	mutex_init(&vmbus_connection.channel_mutex);
+ 
++	/*
++	 * The following Hyper-V interrupt and monitor pages can be used by
++	 * UIO for mapping to user-space, so they should always be allocated on
++	 * system page boundaries. The system page size must be >= the Hyper-V
++	 * page size.
++	 */
++	BUILD_BUG_ON(PAGE_SIZE < HV_HYP_PAGE_SIZE);
++
+ 	/*
+ 	 * Setup the vmbus event connection for channel interrupt
+ 	 * abstraction stuff
+ 	 */
+-	vmbus_connection.int_page = hv_alloc_hyperv_zeroed_page();
++	vmbus_connection.int_page =
++		(void *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
+ 	if (vmbus_connection.int_page == NULL) {
+ 		ret = -ENOMEM;
+ 		goto cleanup;
+@@ -225,8 +234,8 @@ int vmbus_connect(void)
+ 	 * Setup the monitor notification facility. The 1st page for
+ 	 * parent->child and the 2nd page for child->parent
+ 	 */
+-	vmbus_connection.monitor_pages[0] = hv_alloc_hyperv_page();
+-	vmbus_connection.monitor_pages[1] = hv_alloc_hyperv_page();
++	vmbus_connection.monitor_pages[0] = (void *)__get_free_page(GFP_KERNEL);
++	vmbus_connection.monitor_pages[1] = (void *)__get_free_page(GFP_KERNEL);
+ 	if ((vmbus_connection.monitor_pages[0] == NULL) ||
+ 	    (vmbus_connection.monitor_pages[1] == NULL)) {
+ 		ret = -ENOMEM;
+@@ -342,21 +351,23 @@ void vmbus_disconnect(void)
+ 		destroy_workqueue(vmbus_connection.work_queue);
+ 
+ 	if (vmbus_connection.int_page) {
+-		hv_free_hyperv_page(vmbus_connection.int_page);
++		free_page((unsigned long)vmbus_connection.int_page);
+ 		vmbus_connection.int_page = NULL;
+ 	}
+ 
+ 	if (vmbus_connection.monitor_pages[0]) {
+ 		if (!set_memory_encrypted(
+ 			(unsigned long)vmbus_connection.monitor_pages[0], 1))
+-			hv_free_hyperv_page(vmbus_connection.monitor_pages[0]);
++			free_page((unsigned long)
++				vmbus_connection.monitor_pages[0]);
+ 		vmbus_connection.monitor_pages[0] = NULL;
+ 	}
+ 
+ 	if (vmbus_connection.monitor_pages[1]) {
+ 		if (!set_memory_encrypted(
+ 			(unsigned long)vmbus_connection.monitor_pages[1], 1))
+-			hv_free_hyperv_page(vmbus_connection.monitor_pages[1]);
++			free_page((unsigned long)
++				vmbus_connection.monitor_pages[1]);
+ 		vmbus_connection.monitor_pages[1] = NULL;
+ 	}
+ }
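The interrupt and monitor pages move from hv_alloc_hyperv_page() to plain __get_free_page(), paired with free_page() on teardown. __get_free_page() always returns one system page, page-aligned, which is exactly what the comment's UIO-mapping requirement calls for; the BUILD_BUG_ON() turns the "system page size >= Hyper-V page size" assumption into a compile-time failure rather than a runtime surprise. The pairing, as a generic sketch:

    unsigned long page = __get_free_page(GFP_KERNEL | __GFP_ZERO);

    if (!page)
        return -ENOMEM;
    /* ... use (void *)page ... */
    free_page(page);    /* pairs with __get_free_page(), not kfree() */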
+diff --git a/drivers/hwmon/ftsteutates.c b/drivers/hwmon/ftsteutates.c
+index a3a07662e49175..8aeec16a7a9054 100644
+--- a/drivers/hwmon/ftsteutates.c
++++ b/drivers/hwmon/ftsteutates.c
+@@ -423,13 +423,16 @@ static int fts_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
+ 		break;
+ 	case hwmon_pwm:
+ 		switch (attr) {
+-		case hwmon_pwm_auto_channels_temp:
+-			if (data->fan_source[channel] == FTS_FAN_SOURCE_INVALID)
++		case hwmon_pwm_auto_channels_temp: {
++			u8 fan_source = data->fan_source[channel];
++
++			if (fan_source == FTS_FAN_SOURCE_INVALID || fan_source >= BITS_PER_LONG)
+ 				*val = 0;
+ 			else
+-				*val = BIT(data->fan_source[channel]);
++				*val = BIT(fan_source);
+ 
+ 			return 0;
++		}
+ 		default:
+ 			break;
+ 		}
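The fix caches the fan source and refuses to form BIT(fan_source) when the value is invalid or >= BITS_PER_LONG. BIT(n) expands to 1UL << n, and shifting by the type's width or more is undefined behaviour in C, so a corrupt register value could previously yield an arbitrary result. A standalone demonstration of the guard:

    #include <limits.h>
    #include <stdio.h>

    /* Shift counts must stay below the bit-width of the shifted type. */
    static unsigned long safe_bit(unsigned int n)
    {
        if (n >= sizeof(unsigned long) * CHAR_BIT)
            return 0;                 /* out of range: report "none" */
        return 1UL << n;
    }

    int main(void)
    {
        printf("%#lx\n", safe_bit(3));    /* 0x8 */
        printf("%#lx\n", safe_bit(200));  /* 0, not undefined behaviour */
        return 0;
    }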
+diff --git a/drivers/hwmon/ltc4282.c b/drivers/hwmon/ltc4282.c
+index 7f38d269623965..f607fe8f793702 100644
+--- a/drivers/hwmon/ltc4282.c
++++ b/drivers/hwmon/ltc4282.c
+@@ -1511,13 +1511,6 @@ static int ltc4282_setup(struct ltc4282_state *st, struct device *dev)
+ 			return ret;
+ 	}
+ 
+-	if (device_property_read_bool(dev, "adi,fault-log-enable")) {
+-		ret = regmap_set_bits(st->map, LTC4282_ADC_CTRL,
+-				      LTC4282_FAULT_LOG_EN_MASK);
+-		if (ret)
+-			return ret;
+-	}
+-
+ 	if (device_property_read_bool(dev, "adi,fault-log-enable")) {
+ 		ret = regmap_set_bits(st->map, LTC4282_ADC_CTRL, LTC4282_FAULT_LOG_EN_MASK);
+ 		if (ret)
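This hunk simply deletes one of two back-to-back, character-identical "adi,fault-log-enable" blocks; setting LTC4282_FAULT_LOG_EN_MASK twice was harmless but redundant, and the duplication looks like a merge or backport artifact.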
+diff --git a/drivers/hwmon/occ/common.c b/drivers/hwmon/occ/common.c
+index 9486db249c64fb..b3694a4209b975 100644
+--- a/drivers/hwmon/occ/common.c
++++ b/drivers/hwmon/occ/common.c
+@@ -459,12 +459,10 @@ static ssize_t occ_show_power_1(struct device *dev,
+ 	return sysfs_emit(buf, "%llu\n", val);
+ }
+ 
+-static u64 occ_get_powr_avg(u64 *accum, u32 *samples)
++static u64 occ_get_powr_avg(u64 accum, u32 samples)
+ {
+-	u64 divisor = get_unaligned_be32(samples);
+-
+-	return (divisor == 0) ? 0 :
+-		div64_u64(get_unaligned_be64(accum) * 1000000ULL, divisor);
++	return (samples == 0) ? 0 :
++		mul_u64_u32_div(accum, 1000000UL, samples);
+ }
+ 
+ static ssize_t occ_show_power_2(struct device *dev,
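occ_get_powr_avg() now takes the already-extracted values (the callers do the get_unaligned_be64()/be32() unpacking) and computes the average with mul_u64_u32_div(). The point is overflow: accum * 1000000ULL wraps u64 once the accumulator exceeds roughly 1.8e13 (2^64 / 10^6), while mul_u64_u32_div(a, mul, div) evaluates a * mul in a wider intermediate before dividing. In kernel terms (math64.h):

    u64 avg = mul_u64_u32_div(accum, 1000000U, samples);  /* exact */
    /* old form; the multiply itself can wrap before the divide: */
    u64 bad = div64_u64(accum * 1000000ULL, samples);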
+@@ -489,8 +487,8 @@ static ssize_t occ_show_power_2(struct device *dev,
+ 				  get_unaligned_be32(&power->sensor_id),
+ 				  power->function_id, power->apss_channel);
+ 	case 1:
+-		val = occ_get_powr_avg(&power->accumulator,
+-				       &power->update_tag);
++		val = occ_get_powr_avg(get_unaligned_be64(&power->accumulator),
++				       get_unaligned_be32(&power->update_tag));
+ 		break;
+ 	case 2:
+ 		val = (u64)get_unaligned_be32(&power->update_tag) *
+@@ -527,8 +525,8 @@ static ssize_t occ_show_power_a0(struct device *dev,
+ 		return sysfs_emit(buf, "%u_system\n",
+ 				  get_unaligned_be32(&power->sensor_id));
+ 	case 1:
+-		val = occ_get_powr_avg(&power->system.accumulator,
+-				       &power->system.update_tag);
++		val = occ_get_powr_avg(get_unaligned_be64(&power->system.accumulator),
++				       get_unaligned_be32(&power->system.update_tag));
+ 		break;
+ 	case 2:
+ 		val = (u64)get_unaligned_be32(&power->system.update_tag) *
+@@ -541,8 +539,8 @@ static ssize_t occ_show_power_a0(struct device *dev,
+ 		return sysfs_emit(buf, "%u_proc\n",
+ 				  get_unaligned_be32(&power->sensor_id));
+ 	case 5:
+-		val = occ_get_powr_avg(&power->proc.accumulator,
+-				       &power->proc.update_tag);
++		val = occ_get_powr_avg(get_unaligned_be64(&power->proc.accumulator),
++				       get_unaligned_be32(&power->proc.update_tag));
+ 		break;
+ 	case 6:
+ 		val = (u64)get_unaligned_be32(&power->proc.update_tag) *
+@@ -555,8 +553,8 @@ static ssize_t occ_show_power_a0(struct device *dev,
+ 		return sysfs_emit(buf, "%u_vdd\n",
+ 				  get_unaligned_be32(&power->sensor_id));
+ 	case 9:
+-		val = occ_get_powr_avg(&power->vdd.accumulator,
+-				       &power->vdd.update_tag);
++		val = occ_get_powr_avg(get_unaligned_be64(&power->vdd.accumulator),
++				       get_unaligned_be32(&power->vdd.update_tag));
+ 		break;
+ 	case 10:
+ 		val = (u64)get_unaligned_be32(&power->vdd.update_tag) *
+@@ -569,8 +567,8 @@ static ssize_t occ_show_power_a0(struct device *dev,
+ 		return sysfs_emit(buf, "%u_vdn\n",
+ 				  get_unaligned_be32(&power->sensor_id));
+ 	case 13:
+-		val = occ_get_powr_avg(&power->vdn.accumulator,
+-				       &power->vdn.update_tag);
++		val = occ_get_powr_avg(get_unaligned_be64(&power->vdn.accumulator),
++				       get_unaligned_be32(&power->vdn.update_tag));
+ 		break;
+ 	case 14:
+ 		val = (u64)get_unaligned_be32(&power->vdn.update_tag) *
+@@ -747,29 +745,30 @@ static ssize_t occ_show_extended(struct device *dev,
+ }
+ 
+ /*
+- * Some helper macros to make it easier to define an occ_attribute. Since these
+- * are dynamically allocated, we shouldn't use the existing kernel macros which
++ * A helper to make it easier to define an occ_attribute. Since these
++ * are dynamically allocated, we cannot use the existing kernel macros which
+  * stringify the name argument.
+  */
+-#define ATTR_OCC(_name, _mode, _show, _store) {				\
+-	.attr	= {							\
+-		.name = _name,						\
+-		.mode = VERIFY_OCTAL_PERMISSIONS(_mode),		\
+-	},								\
+-	.show	= _show,						\
+-	.store	= _store,						\
+-}
+-
+-#define SENSOR_ATTR_OCC(_name, _mode, _show, _store, _nr, _index) {	\
+-	.dev_attr	= ATTR_OCC(_name, _mode, _show, _store),	\
+-	.index		= _index,					\
+-	.nr		= _nr,						\
++static void occ_init_attribute(struct occ_attribute *attr, int mode,
++	ssize_t (*show)(struct device *dev, struct device_attribute *attr, char *buf),
++	ssize_t (*store)(struct device *dev, struct device_attribute *attr,
++				   const char *buf, size_t count),
++	int nr, int index, const char *fmt, ...)
++{
++	va_list args;
++
++	va_start(args, fmt);
++	vsnprintf(attr->name, sizeof(attr->name), fmt, args);
++	va_end(args);
++
++	attr->sensor.dev_attr.attr.name = attr->name;
++	attr->sensor.dev_attr.attr.mode = mode;
++	attr->sensor.dev_attr.show = show;
++	attr->sensor.dev_attr.store = store;
++	attr->sensor.index = index;
++	attr->sensor.nr = nr;
+ }
+ 
+-#define OCC_INIT_ATTR(_name, _mode, _show, _store, _nr, _index)		\
+-	((struct sensor_device_attribute_2)				\
+-		SENSOR_ATTR_OCC(_name, _mode, _show, _store, _nr, _index))
+-
+ /*
+  * Allocate and instantiate sensor_device_attribute_2s. It's most efficient to
+  * use our own instead of the built-in hwmon attribute types.
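occ_init_attribute() folds the name formatting that previously needed a separate snprintf() into the initializer itself via a printf-style varargs tail, which is why each call site in the following hunks collapses from a snprintf-plus-OCC_INIT_ATTR pair into a single call. The essential pattern, reduced to a compilable sketch (struct demo_attr is hypothetical):

    #include <stdarg.h>
    #include <stdio.h>

    struct demo_attr { char name[32]; int nr, index; };

    static void demo_init_attr(struct demo_attr *a, int nr, int index,
                               const char *fmt, ...)
    {
        va_list args;

        va_start(args, fmt);
        vsnprintf(a->name, sizeof(a->name), fmt, args);  /* format the name */
        va_end(args);
        a->nr = nr;
        a->index = index;
    }

    int main(void)
    {
        struct demo_attr a;

        demo_init_attr(&a, 1, 7, "temp%d_input", 8);
        printf("%s nr=%d index=%d\n", a.name, a.nr, a.index);
        return 0;    /* prints: temp8_input nr=1 index=7 */
    }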
+@@ -855,14 +854,15 @@ static int occ_setup_sensor_attrs(struct occ *occ)
+ 		sensors->extended.num_sensors = 0;
+ 	}
+ 
+-	occ->attrs = devm_kzalloc(dev, sizeof(*occ->attrs) * num_attrs,
++	occ->attrs = devm_kcalloc(dev, num_attrs, sizeof(*occ->attrs),
+ 				  GFP_KERNEL);
+ 	if (!occ->attrs)
+ 		return -ENOMEM;
+ 
+ 	/* null-terminated list */
+-	occ->group.attrs = devm_kzalloc(dev, sizeof(*occ->group.attrs) *
+-					num_attrs + 1, GFP_KERNEL);
++	occ->group.attrs = devm_kcalloc(dev, num_attrs + 1,
++					sizeof(*occ->group.attrs),
++					GFP_KERNEL);
+ 	if (!occ->group.attrs)
+ 		return -ENOMEM;
+ 
+@@ -872,43 +872,33 @@ static int occ_setup_sensor_attrs(struct occ *occ)
+ 		s = i + 1;
+ 		temp = ((struct temp_sensor_2 *)sensors->temp.data) + i;
+ 
+-		snprintf(attr->name, sizeof(attr->name), "temp%d_label", s);
+-		attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_temp, NULL,
+-					     0, i);
++		occ_init_attribute(attr, 0444, show_temp, NULL,
++				   0, i, "temp%d_label", s);
+ 		attr++;
+ 
+ 		if (sensors->temp.version == 2 &&
+ 		    temp->fru_type == OCC_FRU_TYPE_VRM) {
+-			snprintf(attr->name, sizeof(attr->name),
+-				 "temp%d_alarm", s);
++			occ_init_attribute(attr, 0444, show_temp, NULL,
++					   1, i, "temp%d_alarm", s);
+ 		} else {
+-			snprintf(attr->name, sizeof(attr->name),
+-				 "temp%d_input", s);
++			occ_init_attribute(attr, 0444, show_temp, NULL,
++					   1, i, "temp%d_input", s);
+ 		}
+ 
+-		attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_temp, NULL,
+-					     1, i);
+ 		attr++;
+ 
+ 		if (sensors->temp.version > 1) {
+-			snprintf(attr->name, sizeof(attr->name),
+-				 "temp%d_fru_type", s);
+-			attr->sensor = OCC_INIT_ATTR(attr->name, 0444,
+-						     show_temp, NULL, 2, i);
++			occ_init_attribute(attr, 0444, show_temp, NULL,
++					   2, i, "temp%d_fru_type", s);
+ 			attr++;
+ 
+-			snprintf(attr->name, sizeof(attr->name),
+-				 "temp%d_fault", s);
+-			attr->sensor = OCC_INIT_ATTR(attr->name, 0444,
+-						     show_temp, NULL, 3, i);
++			occ_init_attribute(attr, 0444, show_temp, NULL,
++					   3, i, "temp%d_fault", s);
+ 			attr++;
+ 
+ 			if (sensors->temp.version == 0x10) {
+-				snprintf(attr->name, sizeof(attr->name),
+-					 "temp%d_max", s);
+-				attr->sensor = OCC_INIT_ATTR(attr->name, 0444,
+-							     show_temp, NULL,
+-							     4, i);
++				occ_init_attribute(attr, 0444, show_temp, NULL,
++						   4, i, "temp%d_max", s);
+ 				attr++;
+ 			}
+ 		}
+@@ -917,14 +907,12 @@ static int occ_setup_sensor_attrs(struct occ *occ)
+ 	for (i = 0; i < sensors->freq.num_sensors; ++i) {
+ 		s = i + 1;
+ 
+-		snprintf(attr->name, sizeof(attr->name), "freq%d_label", s);
+-		attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_freq, NULL,
+-					     0, i);
++		occ_init_attribute(attr, 0444, show_freq, NULL,
++				   0, i, "freq%d_label", s);
+ 		attr++;
+ 
+-		snprintf(attr->name, sizeof(attr->name), "freq%d_input", s);
+-		attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_freq, NULL,
+-					     1, i);
++		occ_init_attribute(attr, 0444, show_freq, NULL,
++				   1, i, "freq%d_input", s);
+ 		attr++;
+ 	}
+ 
+@@ -940,32 +928,24 @@ static int occ_setup_sensor_attrs(struct occ *occ)
+ 			s = (i * 4) + 1;
+ 
+ 			for (j = 0; j < 4; ++j) {
+-				snprintf(attr->name, sizeof(attr->name),
+-					 "power%d_label", s);
+-				attr->sensor = OCC_INIT_ATTR(attr->name, 0444,
+-							     show_power, NULL,
+-							     nr++, i);
++				occ_init_attribute(attr, 0444, show_power,
++						   NULL, nr++, i,
++						   "power%d_label", s);
+ 				attr++;
+ 
+-				snprintf(attr->name, sizeof(attr->name),
+-					 "power%d_average", s);
+-				attr->sensor = OCC_INIT_ATTR(attr->name, 0444,
+-							     show_power, NULL,
+-							     nr++, i);
++				occ_init_attribute(attr, 0444, show_power,
++						   NULL, nr++, i,
++						   "power%d_average", s);
+ 				attr++;
+ 
+-				snprintf(attr->name, sizeof(attr->name),
+-					 "power%d_average_interval", s);
+-				attr->sensor = OCC_INIT_ATTR(attr->name, 0444,
+-							     show_power, NULL,
+-							     nr++, i);
++				occ_init_attribute(attr, 0444, show_power,
++						   NULL, nr++, i,
++						   "power%d_average_interval", s);
+ 				attr++;
+ 
+-				snprintf(attr->name, sizeof(attr->name),
+-					 "power%d_input", s);
+-				attr->sensor = OCC_INIT_ATTR(attr->name, 0444,
+-							     show_power, NULL,
+-							     nr++, i);
++				occ_init_attribute(attr, 0444, show_power,
++						   NULL, nr++, i,
++						   "power%d_input", s);
+ 				attr++;
+ 
+ 				s++;
+@@ -977,28 +957,20 @@ static int occ_setup_sensor_attrs(struct occ *occ)
+ 		for (i = 0; i < sensors->power.num_sensors; ++i) {
+ 			s = i + 1;
+ 
+-			snprintf(attr->name, sizeof(attr->name),
+-				 "power%d_label", s);
+-			attr->sensor = OCC_INIT_ATTR(attr->name, 0444,
+-						     show_power, NULL, 0, i);
++			occ_init_attribute(attr, 0444, show_power, NULL,
++					   0, i, "power%d_label", s);
+ 			attr++;
+ 
+-			snprintf(attr->name, sizeof(attr->name),
+-				 "power%d_average", s);
+-			attr->sensor = OCC_INIT_ATTR(attr->name, 0444,
+-						     show_power, NULL, 1, i);
++			occ_init_attribute(attr, 0444, show_power, NULL,
++					   1, i, "power%d_average", s);
+ 			attr++;
+ 
+-			snprintf(attr->name, sizeof(attr->name),
+-				 "power%d_average_interval", s);
+-			attr->sensor = OCC_INIT_ATTR(attr->name, 0444,
+-						     show_power, NULL, 2, i);
++			occ_init_attribute(attr, 0444, show_power, NULL,
++					   2, i, "power%d_average_interval", s);
+ 			attr++;
+ 
+-			snprintf(attr->name, sizeof(attr->name),
+-				 "power%d_input", s);
+-			attr->sensor = OCC_INIT_ATTR(attr->name, 0444,
+-						     show_power, NULL, 3, i);
++			occ_init_attribute(attr, 0444, show_power, NULL,
++					   3, i, "power%d_input", s);
+ 			attr++;
+ 		}
+ 
+@@ -1006,56 +978,43 @@ static int occ_setup_sensor_attrs(struct occ *occ)
+ 	}
+ 
+ 	if (sensors->caps.num_sensors >= 1) {
+-		snprintf(attr->name, sizeof(attr->name), "power%d_label", s);
+-		attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL,
+-					     0, 0);
++		occ_init_attribute(attr, 0444, show_caps, NULL,
++				   0, 0, "power%d_label", s);
+ 		attr++;
+ 
+-		snprintf(attr->name, sizeof(attr->name), "power%d_cap", s);
+-		attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL,
+-					     1, 0);
++		occ_init_attribute(attr, 0444, show_caps, NULL,
++				   1, 0, "power%d_cap", s);
+ 		attr++;
+ 
+-		snprintf(attr->name, sizeof(attr->name), "power%d_input", s);
+-		attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL,
+-					     2, 0);
++		occ_init_attribute(attr, 0444, show_caps, NULL,
++				   2, 0, "power%d_input", s);
+ 		attr++;
+ 
+-		snprintf(attr->name, sizeof(attr->name),
+-			 "power%d_cap_not_redundant", s);
+-		attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL,
+-					     3, 0);
++		occ_init_attribute(attr, 0444, show_caps, NULL,
++				   3, 0, "power%d_cap_not_redundant", s);
+ 		attr++;
+ 
+-		snprintf(attr->name, sizeof(attr->name), "power%d_cap_max", s);
+-		attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL,
+-					     4, 0);
++		occ_init_attribute(attr, 0444, show_caps, NULL,
++				   4, 0, "power%d_cap_max", s);
+ 		attr++;
+ 
+-		snprintf(attr->name, sizeof(attr->name), "power%d_cap_min", s);
+-		attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL,
+-					     5, 0);
++		occ_init_attribute(attr, 0444, show_caps, NULL,
++				   5, 0, "power%d_cap_min", s);
+ 		attr++;
+ 
+-		snprintf(attr->name, sizeof(attr->name), "power%d_cap_user",
+-			 s);
+-		attr->sensor = OCC_INIT_ATTR(attr->name, 0644, show_caps,
+-					     occ_store_caps_user, 6, 0);
++		occ_init_attribute(attr, 0644, show_caps, occ_store_caps_user,
++				   6, 0, "power%d_cap_user", s);
+ 		attr++;
+ 
+ 		if (sensors->caps.version > 1) {
+-			snprintf(attr->name, sizeof(attr->name),
+-				 "power%d_cap_user_source", s);
+-			attr->sensor = OCC_INIT_ATTR(attr->name, 0444,
+-						     show_caps, NULL, 7, 0);
++			occ_init_attribute(attr, 0444, show_caps, NULL,
++					   7, 0, "power%d_cap_user_source", s);
+ 			attr++;
+ 
+ 			if (sensors->caps.version > 2) {
+-				snprintf(attr->name, sizeof(attr->name),
+-					 "power%d_cap_min_soft", s);
+-				attr->sensor = OCC_INIT_ATTR(attr->name, 0444,
+-							     show_caps, NULL,
+-							     8, 0);
++				occ_init_attribute(attr, 0444, show_caps, NULL,
++						   8, 0,
++						   "power%d_cap_min_soft", s);
+ 				attr++;
+ 			}
+ 		}
+@@ -1064,19 +1023,16 @@ static int occ_setup_sensor_attrs(struct occ *occ)
+ 	for (i = 0; i < sensors->extended.num_sensors; ++i) {
+ 		s = i + 1;
+ 
+-		snprintf(attr->name, sizeof(attr->name), "extn%d_label", s);
+-		attr->sensor = OCC_INIT_ATTR(attr->name, 0444,
+-					     occ_show_extended, NULL, 0, i);
++		occ_init_attribute(attr, 0444, occ_show_extended, NULL,
++				   0, i, "extn%d_label", s);
+ 		attr++;
+ 
+-		snprintf(attr->name, sizeof(attr->name), "extn%d_flags", s);
+-		attr->sensor = OCC_INIT_ATTR(attr->name, 0444,
+-					     occ_show_extended, NULL, 1, i);
++		occ_init_attribute(attr, 0444, occ_show_extended, NULL,
++				   1, i, "extn%d_flags", s);
+ 		attr++;
+ 
+-		snprintf(attr->name, sizeof(attr->name), "extn%d_input", s);
+-		attr->sensor = OCC_INIT_ATTR(attr->name, 0444,
+-					     occ_show_extended, NULL, 2, i);
++		occ_init_attribute(attr, 0444, occ_show_extended, NULL,
++				   2, i, "extn%d_input", s);
+ 		attr++;
+ 	}
+ 
+diff --git a/drivers/i2c/busses/i2c-designware-slave.c b/drivers/i2c/busses/i2c-designware-slave.c
+index 5cd4a5f7a472e4..b936a240db0a93 100644
+--- a/drivers/i2c/busses/i2c-designware-slave.c
++++ b/drivers/i2c/busses/i2c-designware-slave.c
+@@ -96,7 +96,7 @@ static int i2c_dw_unreg_slave(struct i2c_client *slave)
+ 	i2c_dw_disable(dev);
+ 	synchronize_irq(dev->irq);
+ 	dev->slave = NULL;
+-	pm_runtime_put(dev->dev);
++	pm_runtime_put_sync_suspend(dev->dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/i2c/busses/i2c-k1.c b/drivers/i2c/busses/i2c-k1.c
+index 5965b4cf6220e4..b68a21fff0b56b 100644
+--- a/drivers/i2c/busses/i2c-k1.c
++++ b/drivers/i2c/busses/i2c-k1.c
+@@ -477,7 +477,7 @@ static int spacemit_i2c_xfer(struct i2c_adapter *adapt, struct i2c_msg *msgs, in
+ 
+ 	ret = spacemit_i2c_wait_bus_idle(i2c);
+ 	if (!ret)
+-		spacemit_i2c_xfer_msg(i2c);
++		ret = spacemit_i2c_xfer_msg(i2c);
+ 	else if (ret < 0)
+ 		dev_dbg(i2c->dev, "i2c transfer error: %d\n", ret);
+ 	else
+diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c
+index de713b5747fe5c..05a140ec2b64d8 100644
+--- a/drivers/i2c/busses/i2c-npcm7xx.c
++++ b/drivers/i2c/busses/i2c-npcm7xx.c
+@@ -2178,10 +2178,14 @@ static int npcm_i2c_init_module(struct npcm_i2c *bus, enum i2c_mode mode,
+ 
+ 	/* Check HW is OK: SDA and SCL should be high at this point. */
+ 	if ((npcm_i2c_get_SDA(&bus->adap) == 0) || (npcm_i2c_get_SCL(&bus->adap) == 0)) {
+-		dev_err(bus->dev, "I2C%d init fail: lines are low\n", bus->num);
+-		dev_err(bus->dev, "SDA=%d SCL=%d\n", npcm_i2c_get_SDA(&bus->adap),
+-			npcm_i2c_get_SCL(&bus->adap));
+-		return -ENXIO;
++		dev_warn(bus->dev, " I2C%d SDA=%d SCL=%d, attempting to recover\n", bus->num,
++				 npcm_i2c_get_SDA(&bus->adap), npcm_i2c_get_SCL(&bus->adap));
++		if (npcm_i2c_recovery_tgclk(&bus->adap)) {
++			dev_err(bus->dev, "I2C%d init fail: SDA=%d SCL=%d\n",
++				bus->num, npcm_i2c_get_SDA(&bus->adap),
++				npcm_i2c_get_SCL(&bus->adap));
++			return -ENXIO;
++		}
+ 	}
+ 
+ 	npcm_i2c_int_enable(bus, true);
+diff --git a/drivers/i2c/busses/i2c-pasemi-core.c b/drivers/i2c/busses/i2c-pasemi-core.c
+index bd128ab2e2ebb6..27ab09854c9275 100644
+--- a/drivers/i2c/busses/i2c-pasemi-core.c
++++ b/drivers/i2c/busses/i2c-pasemi-core.c
+@@ -71,7 +71,7 @@ static inline int reg_read(struct pasemi_smbus *smbus, int reg)
+ 
+ static void pasemi_reset(struct pasemi_smbus *smbus)
+ {
+-	u32 val = (CTL_MTR | CTL_MRR | (smbus->clk_div & CTL_CLK_M));
++	u32 val = (CTL_MTR | CTL_MRR | CTL_UJM | (smbus->clk_div & CTL_CLK_M));
+ 
+ 	if (smbus->hw_rev >= 6)
+ 		val |= CTL_EN;
+diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
+index 87976e99e6d001..049b4d154c2337 100644
+--- a/drivers/i2c/busses/i2c-tegra.c
++++ b/drivers/i2c/busses/i2c-tegra.c
+@@ -1395,6 +1395,11 @@ static int tegra_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[],
+ 			ret = tegra_i2c_xfer_msg(i2c_dev, &msgs[i], MSG_END_CONTINUE);
+ 			if (ret)
+ 				break;
++
++			/* Validate message length before proceeding */
++			if (msgs[i].buf[0] == 0 || msgs[i].buf[0] > I2C_SMBUS_BLOCK_MAX)
++				break;
++
+ 			/* Set the msg length from first byte */
+ 			msgs[i].len += msgs[i].buf[0];
+ 			dev_dbg(i2c_dev->dev, "reading %d bytes\n", msgs[i].len);
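For SMBus block reads the first received byte is the payload length, and a misbehaving slave controls it entirely. The new check rejects 0 and anything above I2C_SMBUS_BLOCK_MAX (32) before the byte is added to msgs[i].len, so a bogus length can no longer grow the transfer past the caller's buffer:

    u8 len = msgs[i].buf[0];    /* came off the wire: clamp before trusting */

    if (len == 0 || len > I2C_SMBUS_BLOCK_MAX)
        break;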
+diff --git a/drivers/i3c/master/mipi-i3c-hci/core.c b/drivers/i3c/master/mipi-i3c-hci/core.c
+index a71226d7ca5932..5834bf8a3fd9e9 100644
+--- a/drivers/i3c/master/mipi-i3c-hci/core.c
++++ b/drivers/i3c/master/mipi-i3c-hci/core.c
+@@ -594,6 +594,7 @@ static irqreturn_t i3c_hci_irq_handler(int irq, void *dev_id)
+ 
+ 	if (val) {
+ 		reg_write(INTR_STATUS, val);
++		result = IRQ_HANDLED;
+ 	}
+ 
+ 	if (val & INTR_HC_RESET_CANCEL) {
+@@ -605,12 +606,11 @@ static irqreturn_t i3c_hci_irq_handler(int irq, void *dev_id)
+ 		val &= ~INTR_HC_INTERNAL_ERR;
+ 	}
+ 
+-	hci->io->irq_handler(hci);
++	if (hci->io->irq_handler(hci))
++		result = IRQ_HANDLED;
+ 
+ 	if (val)
+ 		dev_err(&hci->master.dev, "unexpected INTR_STATUS %#x\n", val);
+-	else
+-		result = IRQ_HANDLED;
+ 
+ 	return result;
+ }
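The irqreturn accounting here was inverted: the old code returned IRQ_HANDLED only when no status bits were left over, regardless of whether any work was done, and IRQ_NONE when unexpected bits remained even though the handler had just acknowledged them. Now result becomes IRQ_HANDLED exactly when something was consumed, either a nonzero INTR_STATUS that got written back or a sub-handler that reported work, and IRQ_NONE otherwise; the IRQ core's spurious-interrupt detector relies on that distinction to identify a screaming line.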
+diff --git a/drivers/iio/accel/fxls8962af-core.c b/drivers/iio/accel/fxls8962af-core.c
+index bf1d3923a18179..ae965a8f560d3b 100644
+--- a/drivers/iio/accel/fxls8962af-core.c
++++ b/drivers/iio/accel/fxls8962af-core.c
+@@ -23,6 +23,7 @@
+ #include <linux/regulator/consumer.h>
+ #include <linux/regmap.h>
+ #include <linux/types.h>
++#include <linux/units.h>
+ 
+ #include <linux/iio/buffer.h>
+ #include <linux/iio/events.h>
+@@ -439,8 +440,16 @@ static int fxls8962af_read_raw(struct iio_dev *indio_dev,
+ 		*val = FXLS8962AF_TEMP_CENTER_VAL;
+ 		return IIO_VAL_INT;
+ 	case IIO_CHAN_INFO_SCALE:
+-		*val = 0;
+-		return fxls8962af_read_full_scale(data, val2);
++		switch (chan->type) {
++		case IIO_TEMP:
++			*val = MILLIDEGREE_PER_DEGREE;
++			return IIO_VAL_INT;
++		case IIO_ACCEL:
++			*val = 0;
++			return fxls8962af_read_full_scale(data, val2);
++		default:
++			return -EINVAL;
++		}
+ 	case IIO_CHAN_INFO_SAMP_FREQ:
+ 		return fxls8962af_read_samp_freq(data, val, val2);
+ 	default:
+@@ -736,9 +745,11 @@ static const struct iio_event_spec fxls8962af_event[] = {
+ 	.type = IIO_TEMP, \
+ 	.address = FXLS8962AF_TEMP_OUT, \
+ 	.info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | \
++			      BIT(IIO_CHAN_INFO_SCALE) | \
+ 			      BIT(IIO_CHAN_INFO_OFFSET),\
+ 	.scan_index = -1, \
+ 	.scan_type = { \
++		.sign = 's', \
+ 		.realbits = 8, \
+ 		.storagebits = 8, \
+ 	}, \
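Two coordinated fixes for the temperature channel: read_raw() now answers IIO_CHAN_INFO_SCALE per channel type instead of always reporting the accelerometer full scale, and the channel definition gains a scale entry plus a signed 8-bit scan type. In the IIO model user space derives processed = (raw + offset) * scale; the raw register counts whole °C around a 25 °C center and sysfs temperatures are conventionally in m°C, hence a scale of MILLIDEGREE_PER_DEGREE (1000). With a hypothetical raw reading of -3:

    (raw + offset) * scale = (-3 + 25) * 1000 = 22000 m°C = 22.0 °C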
+diff --git a/drivers/iio/adc/Kconfig b/drivers/iio/adc/Kconfig
+index 6529df1a498c2c..b7aac84e5224af 100644
+--- a/drivers/iio/adc/Kconfig
++++ b/drivers/iio/adc/Kconfig
+@@ -129,8 +129,9 @@ config AD7173
+ 	tristate "Analog Devices AD7173 driver"
+ 	depends on SPI_MASTER
+ 	select AD_SIGMA_DELTA
+-	select GPIO_REGMAP if GPIOLIB
+-	select REGMAP_SPI if GPIOLIB
++	select GPIOLIB
++	select GPIO_REGMAP
++	select REGMAP_SPI
+ 	help
+ 	  Say yes here to build support for Analog Devices AD7173 and similar ADC
+ 	  Currently supported models:
+@@ -1546,6 +1547,7 @@ config TI_ADS1298
+ 	tristate "Texas Instruments ADS1298"
+ 	depends on SPI
+ 	select IIO_BUFFER
++	select IIO_KFIFO_BUF
+ 	help
+ 	  If you say yes here you get support for Texas Instruments ADS1298
+ 	  medical ADC chips
+diff --git a/drivers/iio/adc/ad7173.c b/drivers/iio/adc/ad7173.c
+index 69de5886474ce2..b3e6bd2a55d717 100644
+--- a/drivers/iio/adc/ad7173.c
++++ b/drivers/iio/adc/ad7173.c
+@@ -230,10 +230,8 @@ struct ad7173_state {
+ 	unsigned long long *config_cnts;
+ 	struct clk *ext_clk;
+ 	struct clk_hw int_clk_hw;
+-#if IS_ENABLED(CONFIG_GPIOLIB)
+ 	struct regmap *reg_gpiocon_regmap;
+ 	struct gpio_regmap *gpio_regmap;
+-#endif
+ };
+ 
+ static unsigned int ad4115_sinc5_data_rates[] = {
+@@ -288,8 +286,6 @@ static const char *const ad7173_clk_sel[] = {
+ 	"ext-clk", "xtal"
+ };
+ 
+-#if IS_ENABLED(CONFIG_GPIOLIB)
+-
+ static const struct regmap_range ad7173_range_gpio[] = {
+ 	regmap_reg_range(AD7173_REG_GPIO, AD7173_REG_GPIO),
+ };
+@@ -543,12 +539,6 @@ static int ad7173_gpio_init(struct ad7173_state *st)
+ 
+ 	return 0;
+ }
+-#else
+-static int ad7173_gpio_init(struct ad7173_state *st)
+-{
+-	return 0;
+-}
+-#endif /* CONFIG_GPIOLIB */
+ 
+ static struct ad7173_state *ad_sigma_delta_to_ad7173(struct ad_sigma_delta *sd)
+ {
+@@ -1797,10 +1787,7 @@ static int ad7173_probe(struct spi_device *spi)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (IS_ENABLED(CONFIG_GPIOLIB))
+-		return ad7173_gpio_init(st);
+-
+-	return 0;
++	return ad7173_gpio_init(st);
+ }
+ 
+ static const struct of_device_id ad7173_of_match[] = {
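The Kconfig and driver hunks belong together: AD7173 now selects GPIOLIB, GPIO_REGMAP and REGMAP_SPI unconditionally rather than only "if GPIOLIB", so the driver can drop the #if IS_ENABLED(CONFIG_GPIOLIB) scaffolding, the stub ad7173_gpio_init(), and the runtime IS_ENABLED() check in probe; the GPIO support is simply always built.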
+diff --git a/drivers/iio/adc/ad7606.c b/drivers/iio/adc/ad7606.c
+index 703556eb7257ea..8ed65a35b48623 100644
+--- a/drivers/iio/adc/ad7606.c
++++ b/drivers/iio/adc/ad7606.c
+@@ -727,17 +727,16 @@ static int ad7606_scan_direct(struct iio_dev *indio_dev, unsigned int ch,
+ 		goto error_ret;
+ 
+ 	chan = &indio_dev->channels[ch + 1];
+-	if (chan->scan_type.sign == 'u') {
+-		if (realbits > 16)
+-			*val = st->data.buf32[ch];
+-		else
+-			*val = st->data.buf16[ch];
+-	} else {
+-		if (realbits > 16)
+-			*val = sign_extend32(st->data.buf32[ch], realbits - 1);
+-		else
+-			*val = sign_extend32(st->data.buf16[ch], realbits - 1);
+-	}
++
++	if (realbits > 16)
++		*val = st->data.buf32[ch];
++	else
++		*val = st->data.buf16[ch];
++
++	*val &= GENMASK(realbits - 1, 0);
++
++	if (chan->scan_type.sign == 's')
++		*val = sign_extend32(*val, realbits - 1);
+ 
+ error_ret:
+ 	if (!st->gpio_convst) {
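ad7606_scan_direct() previously branched on the channel's sign to decide between returning the raw word and sign-extending it. The rework always masks the sample down to realbits first and only then sign-extends signed channels; the mask is the substantive fix, since for unsigned channels the old path returned the whole storage word, leaking any set bits above realbits into the reported value. The mask-then-extend technique as a standalone program (the kernel's sign_extend32() does the same job as the manual extension below):

    #include <stdint.h>
    #include <stdio.h>

    /* Keep the low 'bits' bits, then optionally sign-extend from bit bits-1. */
    static int32_t extract(uint32_t word, unsigned int bits, int is_signed)
    {
        uint32_t mask = (bits >= 32) ? ~0u : ((1u << bits) - 1);
        uint32_t v = word & mask;

        if (is_signed && (v & (1u << (bits - 1))))
            v |= ~mask;               /* replicate the sign bit upward */
        return (int32_t)v;
    }

    int main(void)
    {
        /* an 18-bit sample with garbage in bits 18..31: */
        printf("%d\n", (int)extract(0xfff2ffffu, 18, 1));   /* -65537 */
        printf("%d\n", (int)extract(0xfff2ffffu, 18, 0));   /* 196607 */
        return 0;
    }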
+diff --git a/drivers/iio/adc/ad7606_spi.c b/drivers/iio/adc/ad7606_spi.c
+index 179115e909888b..b37458ce3c7087 100644
+--- a/drivers/iio/adc/ad7606_spi.c
++++ b/drivers/iio/adc/ad7606_spi.c
+@@ -155,7 +155,7 @@ static int ad7606_spi_reg_write(struct ad7606_state *st,
+ 	struct spi_device *spi = to_spi_device(st->dev);
+ 
+ 	st->d16[0] = cpu_to_be16((st->bops->rd_wr_cmd(addr, 1) << 8) |
+-				  (val & 0x1FF));
++				  (val & 0xFF));
+ 
+ 	return spi_write(spi, &st->d16[0], sizeof(st->d16[0]));
+ }
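Another mask fix: the 16-bit word sent on the wire packs the read/write command into the high byte and the register value into the low byte. Masking the value with 0x1FF kept nine bits, letting bit 8 of an oversized value spill into the command field; 0xFF confines the value to its byte.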
+diff --git a/drivers/iio/adc/ad7944.c b/drivers/iio/adc/ad7944.c
+index 2f949fe5587318..37a137bd83571b 100644
+--- a/drivers/iio/adc/ad7944.c
++++ b/drivers/iio/adc/ad7944.c
+@@ -377,6 +377,8 @@ static int ad7944_single_conversion(struct ad7944_adc *adc,
+ 
+ 	if (chan->scan_type.sign == 's')
+ 		*val = sign_extend32(*val, chan->scan_type.realbits - 1);
++	else
++		*val &= GENMASK(chan->scan_type.realbits - 1, 0);
+ 
+ 	return IIO_VAL_INT;
+ }
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c
+index 213cce1c31110e..91f0f381082bda 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c
+@@ -67,16 +67,18 @@ int inv_icm42600_temp_read_raw(struct iio_dev *indio_dev,
+ 		return IIO_VAL_INT;
+ 	/*
+ 	 * T°C = (temp / 132.48) + 25
+-	 * Tm°C = 1000 * ((temp * 100 / 13248) + 25)
++	 * Tm°C = 1000 * ((temp / 132.48) + 25)
++	 * Tm°C = 7.548309 * temp + 25000
++	 * Tm°C = (temp + 3312) * 7.548309
+ 	 * scale: 100000 / 13248 ~= 7.548309
+-	 * offset: 25000
++	 * offset: 3312
+ 	 */
+ 	case IIO_CHAN_INFO_SCALE:
+ 		*val = 7;
+ 		*val2 = 548309;
+ 		return IIO_VAL_INT_PLUS_MICRO;
+ 	case IIO_CHAN_INFO_OFFSET:
+-		*val = 25000;
++		*val = 3312;
+ 		return IIO_VAL_INT;
+ 	default:
+ 		return -EINVAL;
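The offset change follows from the order in which IIO consumers apply the two values: processed = (raw + offset) * scale, offset first. The old 25000 was only correct for the other order, raw * scale + offset. To land at +25000 m°C after the multiply, the offset must be 25000 / 7.548309, and the arithmetic is exact: the scale is 100000/13248 and 3312 = 13248/4, so 3312 * 100000 / 13248 = 25000.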
+diff --git a/drivers/infiniband/core/iwcm.c b/drivers/infiniband/core/iwcm.c
+index f4486cbd8f45a9..62410578dec37d 100644
+--- a/drivers/infiniband/core/iwcm.c
++++ b/drivers/infiniband/core/iwcm.c
+@@ -368,12 +368,9 @@ EXPORT_SYMBOL(iw_cm_disconnect);
+ /*
+  * CM_ID <-- DESTROYING
+  *
+- * Clean up all resources associated with the connection and release
+- * the initial reference taken by iw_create_cm_id.
+- *
+- * Returns true if and only if the last cm_id_priv reference has been dropped.
++ * Clean up all resources associated with the connection.
+  */
+-static bool destroy_cm_id(struct iw_cm_id *cm_id)
++static void destroy_cm_id(struct iw_cm_id *cm_id)
+ {
+ 	struct iwcm_id_private *cm_id_priv;
+ 	struct ib_qp *qp;
+@@ -442,20 +439,22 @@ static bool destroy_cm_id(struct iw_cm_id *cm_id)
+ 		iwpm_remove_mapinfo(&cm_id->local_addr, &cm_id->m_local_addr);
+ 		iwpm_remove_mapping(&cm_id->local_addr, RDMA_NL_IWCM);
+ 	}
+-
+-	return iwcm_deref_id(cm_id_priv);
+ }
+ 
+ /*
+- * This function is only called by the application thread and cannot
+- * be called by the event thread. The function will wait for all
+- * references to be released on the cm_id and then kfree the cm_id
+- * object.
++ * Destroy cm_id. If the cm_id still has other references, wait for all
++ * references to be released on the cm_id and then release the initial
++ * reference taken by iw_create_cm_id.
+  */
+ void iw_destroy_cm_id(struct iw_cm_id *cm_id)
+ {
+-	if (!destroy_cm_id(cm_id))
++	struct iwcm_id_private *cm_id_priv;
++
++	cm_id_priv = container_of(cm_id, struct iwcm_id_private, id);
++	destroy_cm_id(cm_id);
++	if (refcount_read(&cm_id_priv->refcount) > 1)
+ 		flush_workqueue(iwcm_wq);
++	iwcm_deref_id(cm_id_priv);
+ }
+ EXPORT_SYMBOL(iw_destroy_cm_id);
+ 
+@@ -1035,8 +1034,10 @@ static void cm_work_handler(struct work_struct *_work)
+ 
+ 		if (!test_bit(IWCM_F_DROP_EVENTS, &cm_id_priv->flags)) {
+ 			ret = process_event(cm_id_priv, &levent);
+-			if (ret)
+-				WARN_ON_ONCE(destroy_cm_id(&cm_id_priv->id));
++			if (ret) {
++				destroy_cm_id(&cm_id_priv->id);
++				WARN_ON_ONCE(iwcm_deref_id(cm_id_priv));
++			}
+ 		} else
+ 			pr_debug("dropping event %d\n", levent.event);
+ 		if (iwcm_deref_id(cm_id_priv))
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 59352d1b62099f..bbf6e1983704cf 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -942,7 +942,7 @@ static void fill_wqe_idx(struct hns_roce_srq *srq, unsigned int wqe_idx)
+ static void update_srq_db(struct hns_roce_srq *srq)
+ {
+ 	struct hns_roce_dev *hr_dev = to_hr_dev(srq->ibsrq.device);
+-	struct hns_roce_v2_db db;
++	struct hns_roce_v2_db db = {};
+ 
+ 	hr_reg_write(&db, DB_TAG, srq->srqn);
+ 	hr_reg_write(&db, DB_CMD, HNS_ROCE_V2_SRQ_DB);
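Note the "= {}" on the stack-allocated doorbell: only DB_TAG and DB_CMD are written explicitly, so zero-initialization ensures the remaining fields reach the hardware as zeros rather than stack garbage.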
+diff --git a/drivers/input/keyboard/gpio_keys.c b/drivers/input/keyboard/gpio_keys.c
+index 5c39a217b94c8a..f9db86da0818b2 100644
+--- a/drivers/input/keyboard/gpio_keys.c
++++ b/drivers/input/keyboard/gpio_keys.c
+@@ -449,6 +449,8 @@ static enum hrtimer_restart gpio_keys_irq_timer(struct hrtimer *t)
+ 						      release_timer);
+ 	struct input_dev *input = bdata->input;
+ 
++	guard(spinlock_irqsave)(&bdata->lock);
++
+ 	if (bdata->key_pressed) {
+ 		input_report_key(input, *bdata->code, 0);
+ 		input_sync(input);
+@@ -486,7 +488,7 @@ static irqreturn_t gpio_keys_irq_isr(int irq, void *dev_id)
+ 	if (bdata->release_delay)
+ 		hrtimer_start(&bdata->release_timer,
+ 			      ms_to_ktime(bdata->release_delay),
+-			      HRTIMER_MODE_REL_HARD);
++			      HRTIMER_MODE_REL);
+ out:
+ 	return IRQ_HANDLED;
+ }
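The callback now holds bdata->lock for its whole body via guard(spinlock_irqsave)() from <linux/cleanup.h>, which takes the lock at the declaration and releases it automatically on every exit from the scope, closing a race with the ISR that updates key_pressed under the same lock. The companion switch from HRTIMER_MODE_REL_HARD to HRTIMER_MODE_REL moves the callback out of hard-irq context, presumably so that taking an ordinary (non-raw) spinlock there remains valid on PREEMPT_RT. The guard pattern in isolation:

    static void f(struct foo *x)
    {
        guard(spinlock_irqsave)(&x->lock);
        /* critical section: the lock is dropped at any return from f() */
    }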
+@@ -628,7 +630,7 @@ static int gpio_keys_setup_key(struct platform_device *pdev,
+ 
+ 		bdata->release_delay = button->debounce_interval;
+ 		hrtimer_setup(&bdata->release_timer, gpio_keys_irq_timer,
+-			      CLOCK_REALTIME, HRTIMER_MODE_REL_HARD);
++			      CLOCK_REALTIME, HRTIMER_MODE_REL);
+ 
+ 		isr = gpio_keys_irq_isr;
+ 		irqflags = 0;
+diff --git a/drivers/input/misc/ims-pcu.c b/drivers/input/misc/ims-pcu.c
+index d9ee14b1f45184..4581f1c536442b 100644
+--- a/drivers/input/misc/ims-pcu.c
++++ b/drivers/input/misc/ims-pcu.c
+@@ -844,6 +844,12 @@ static int ims_pcu_flash_firmware(struct ims_pcu *pcu,
+ 		addr = be32_to_cpu(rec->addr) / 2;
+ 		len = be16_to_cpu(rec->len);
+ 
++		if (len > sizeof(pcu->cmd_buf) - 1 - sizeof(*fragment)) {
++			dev_err(pcu->dev,
++				"Invalid record length in firmware: %d\n", len);
++			return -EINVAL;
++		}
++
+ 		fragment = (void *)&pcu->cmd_buf[1];
+ 		put_unaligned_le32(addr, &fragment->addr);
+ 		fragment->len = len;
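The same input-validation pattern as the other fixes in this series: len comes straight from the firmware image, so it is now checked against the space actually available in cmd_buf (the buffer minus the leading report byte and the fragment header) before being used to size the copy that follows.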
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index f34209b08b4c54..31f8d208dedb7a 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -241,7 +241,9 @@ static inline int get_acpihid_device_id(struct device *dev,
+ 					struct acpihid_map_entry **entry)
+ {
+ 	struct acpi_device *adev = ACPI_COMPANION(dev);
+-	struct acpihid_map_entry *p;
++	struct acpihid_map_entry *p, *p1 = NULL;
++	int hid_count = 0;
++	bool fw_bug;
+ 
+ 	if (!adev)
+ 		return -ENODEV;
+@@ -249,12 +251,33 @@ static inline int get_acpihid_device_id(struct device *dev,
+ 	list_for_each_entry(p, &acpihid_map, list) {
+ 		if (acpi_dev_hid_uid_match(adev, p->hid,
+ 					   p->uid[0] ? p->uid : NULL)) {
+-			if (entry)
+-				*entry = p;
+-			return p->devid;
++			p1 = p;
++			fw_bug = false;
++			hid_count = 1;
++			break;
++		}
++
++		/*
++		 * Count HID matches w/o UID, raise FW_BUG but allow exactly one match
++		 */
++		if (acpi_dev_hid_match(adev, p->hid)) {
++			p1 = p;
++			hid_count++;
++			fw_bug = true;
+ 		}
+ 	}
+-	return -EINVAL;
++
++	if (!p1)
++		return -EINVAL;
++	if (fw_bug)
++		dev_err_once(dev, FW_BUG "No ACPI device matched UID, but %d device%s matched HID.\n",
++			     hid_count, hid_count > 1 ? "s" : "");
++	if (hid_count > 1)
++		return -EINVAL;
++	if (entry)
++		*entry = p1;
++
++	return p1->devid;
+ }
+ 
+ static inline int get_device_sbdf_id(struct device *dev)
+@@ -982,6 +1005,14 @@ int amd_iommu_register_ga_log_notifier(int (*notifier)(u32))
+ {
+ 	iommu_ga_log_notifier = notifier;
+ 
++	/*
++	 * Ensure all in-flight IRQ handlers run to completion before returning
++	 * to the caller, e.g. to ensure module code isn't unloaded while it's
++	 * being executed in the IRQ handler.
++	 */
++	if (!notifier)
++		synchronize_rcu();
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(amd_iommu_register_ga_log_notifier);
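The reworked get_acpihid_device_id() accepts an exact HID+UID match immediately, and otherwise tolerates exactly one HID-only match while flagging the firmware. A toy model of that policy — the data layout is invented, only the control flow mirrors the hunk:

#include <stdio.h>
#include <string.h>

struct entry { const char *hid; const char *uid; int devid; };

static int lookup(const struct entry *map, int n,
		  const char *hid, const char *uid)
{
	const struct entry *hit = NULL;
	int hid_count = 0, fw_bug = 0, i;

	for (i = 0; i < n; i++) {
		if (uid && !strcmp(map[i].hid, hid) && !strcmp(map[i].uid, uid)) {
			hit = &map[i];		/* exact match: stop searching */
			fw_bug = 0;
			hid_count = 1;
			break;
		}
		if (!strcmp(map[i].hid, hid)) {	/* HID-only match */
			hit = &map[i];
			hid_count++;
			fw_bug = 1;
		}
	}

	if (!hit)
		return -1;
	if (fw_bug)
		fprintf(stderr, "FW_BUG: no UID match, %d HID match%s\n",
			hid_count, hid_count > 1 ? "es" : "");
	if (hid_count > 1)			/* ambiguous: refuse to guess */
		return -1;
	return hit->devid;
}

int main(void)
{
	static const struct entry map[] = {
		{ "ACPI0001", "1", 10 }, { "ACPI0001", "2", 11 },
	};
	printf("%d\n", lookup(map, 2, "ACPI0001", "2"));	/* 11 */
	printf("%d\n", lookup(map, 2, "ACPI0001", NULL));	/* -1, ambiguous */
	return 0;
}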
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index cb0b993bebb4dd..9d2d34c1b2ff8f 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -1859,6 +1859,7 @@ static int dmar_domain_attach_device(struct dmar_domain *domain,
+ 		return ret;
+ 
+ 	info->domain = domain;
++	info->domain_attached = true;
+ 	spin_lock_irqsave(&domain->lock, flags);
+ 	list_add(&info->link, &domain->devices);
+ 	spin_unlock_irqrestore(&domain->lock, flags);
+@@ -3257,6 +3258,10 @@ void device_block_translation(struct device *dev)
+ 	struct intel_iommu *iommu = info->iommu;
+ 	unsigned long flags;
+ 
++	/* Device in DMA blocking state. Nothing to do. */

++	if (!info->domain_attached)
++		return;
++
+ 	if (info->domain)
+ 		cache_tag_unassign_domain(info->domain, dev, IOMMU_NO_PASID);
+ 
+@@ -3268,6 +3273,9 @@ void device_block_translation(struct device *dev)
+ 			domain_context_clear(info);
+ 	}
+ 
++	/* Device now in DMA blocking state. */
++	info->domain_attached = false;
++
+ 	if (!info->domain)
+ 		return;
+ 
+@@ -4357,6 +4365,9 @@ static int identity_domain_attach_dev(struct iommu_domain *domain, struct device
+ 	else
+ 		ret = device_setup_pass_through(dev);
+ 
++	if (!ret)
++		info->domain_attached = true;
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
+index c4916886da5a08..8661e0864701ce 100644
+--- a/drivers/iommu/intel/iommu.h
++++ b/drivers/iommu/intel/iommu.h
+@@ -773,6 +773,7 @@ struct device_domain_info {
+ 	u8 ats_supported:1;
+ 	u8 ats_enabled:1;
+ 	u8 dtlb_extra_inval:1;	/* Quirk for devices need extra flush */
++	u8 domain_attached:1;	/* Device has domain attached */
+ 	u8 ats_qdep;
+ 	unsigned int iopf_refcount;
+ 	struct device *dev; /* it's NULL for PCIe-to-PCI bridge */
+diff --git a/drivers/iommu/intel/nested.c b/drivers/iommu/intel/nested.c
+index 6ac5c534bef437..1e149169ee77b0 100644
+--- a/drivers/iommu/intel/nested.c
++++ b/drivers/iommu/intel/nested.c
+@@ -27,8 +27,7 @@ static int intel_nested_attach_dev(struct iommu_domain *domain,
+ 	unsigned long flags;
+ 	int ret = 0;
+ 
+-	if (info->domain)
+-		device_block_translation(dev);
++	device_block_translation(dev);
+ 
+ 	if (iommu->agaw < dmar_domain->s2_domain->agaw) {
+ 		dev_err_ratelimited(dev, "Adjusted guest address width not compatible\n");
+@@ -62,6 +61,7 @@ static int intel_nested_attach_dev(struct iommu_domain *domain,
+ 		goto unassign_tag;
+ 
+ 	info->domain = dmar_domain;
++	info->domain_attached = true;
+ 	spin_lock_irqsave(&dmar_domain->lock, flags);
+ 	list_add(&info->link, &dmar_domain->devices);
+ 	spin_unlock_irqrestore(&dmar_domain->lock, flags);
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index e4628d96216102..df871a500b283e 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -2208,6 +2208,19 @@ static void *iommu_make_pasid_array_entry(struct iommu_domain *domain,
+ 	return xa_tag_pointer(domain, IOMMU_PASID_ARRAY_DOMAIN);
+ }
+ 
++static bool domain_iommu_ops_compatible(const struct iommu_ops *ops,
++					struct iommu_domain *domain)
++{
++	if (domain->owner == ops)
++		return true;
++
++	/* For static domains, owner isn't set. */
++	if (domain == ops->blocked_domain || domain == ops->identity_domain)
++		return true;
++
++	return false;
++}
++
+ static int __iommu_attach_group(struct iommu_domain *domain,
+ 				struct iommu_group *group)
+ {
+@@ -2218,7 +2231,8 @@ static int __iommu_attach_group(struct iommu_domain *domain,
+ 		return -EBUSY;
+ 
+ 	dev = iommu_group_first_dev(group);
+-	if (!dev_has_iommu(dev) || dev_iommu_ops(dev) != domain->owner)
++	if (!dev_has_iommu(dev) ||
++	    !domain_iommu_ops_compatible(dev_iommu_ops(dev), domain))
+ 		return -EINVAL;
+ 
+ 	return __iommu_group_set_domain(group, domain);
+@@ -3456,7 +3470,8 @@ int iommu_attach_device_pasid(struct iommu_domain *domain,
+ 	    !ops->blocked_domain->ops->set_dev_pasid)
+ 		return -EOPNOTSUPP;
+ 
+-	if (ops != domain->owner || pasid == IOMMU_NO_PASID)
++	if (!domain_iommu_ops_compatible(ops, domain) ||
++	    pasid == IOMMU_NO_PASID)
+ 		return -EINVAL;
+ 
+ 	mutex_lock(&group->mutex);
+@@ -3538,7 +3553,7 @@ int iommu_replace_device_pasid(struct iommu_domain *domain,
+ 	if (!domain->ops->set_dev_pasid)
+ 		return -EOPNOTSUPP;
+ 
+-	if (dev_iommu_ops(dev) != domain->owner ||
++	if (!domain_iommu_ops_compatible(dev_iommu_ops(dev), domain) ||
+ 	    pasid == IOMMU_NO_PASID || !handle)
+ 		return -EINVAL;
+ 
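domain_iommu_ops_compatible() above exists because the driver-provided blocked and identity domains are static singletons whose ->owner is never initialized, so ownership has to be inferred by comparing the domain pointer itself. A compilable toy illustrating why the identity comparison is needed, with the structures trimmed to the essentials:

#include <stdbool.h>
#include <stdio.h>

struct iommu_ops;
struct iommu_domain { const struct iommu_ops *owner; };
struct iommu_ops {
	struct iommu_domain *blocked_domain;
	struct iommu_domain *identity_domain;
};

static bool ops_compatible(const struct iommu_ops *ops,
			   const struct iommu_domain *domain)
{
	if (domain->owner == ops)
		return true;
	/* static domains: owner is never set, compare pointers instead */
	return domain == ops->blocked_domain || domain == ops->identity_domain;
}

int main(void)
{
	static struct iommu_domain blocked;	/* owner stays NULL */
	static struct iommu_ops ops = { .blocked_domain = &blocked };
	struct iommu_domain dyn = { .owner = &ops };

	printf("%d %d\n", ops_compatible(&ops, &dyn),
	       ops_compatible(&ops, &blocked));	/* 1 1 */
	return 0;
}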
+diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c
+index 9e615b4f1f5e66..785af481658406 100644
+--- a/drivers/md/dm-raid1.c
++++ b/drivers/md/dm-raid1.c
+@@ -133,10 +133,9 @@ static void queue_bio(struct mirror_set *ms, struct bio *bio, int rw)
+ 	spin_lock_irqsave(&ms->lock, flags);
+ 	should_wake = !(bl->head);
+ 	bio_list_add(bl, bio);
+-	spin_unlock_irqrestore(&ms->lock, flags);
+-
+ 	if (should_wake)
+ 		wakeup_mirrord(ms);
++	spin_unlock_irqrestore(&ms->lock, flags);
+ }
+ 
+ static void dispatch_bios(void *context, struct bio_list *bio_list)
+@@ -646,9 +645,9 @@ static void write_callback(unsigned long error, void *context)
+ 	if (!ms->failures.head)
+ 		should_wake = 1;
+ 	bio_list_add(&ms->failures, bio);
+-	spin_unlock_irqrestore(&ms->lock, flags);
+ 	if (should_wake)
+ 		wakeup_mirrord(ms);
++	spin_unlock_irqrestore(&ms->lock, flags);
+ }
+ 
+ static void do_write(struct mirror_set *ms, struct bio *bio)
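Both dm-raid1 hunks move the wakeup inside the spinlock so the "list was empty" test and the wakeup cannot interleave with the daemon draining the list. A userspace analogue using a condition variable, which enforces the same signal-under-the-lock discipline — pthreads stand in for queue_bio()/wakeup_mirrord(), nothing here is the actual dm code:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int queued;

static void queue_bio(void)
{
	pthread_mutex_lock(&lock);
	int should_wake = (queued == 0);	/* test ... */
	queued++;				/* ... add ... */
	if (should_wake)
		pthread_cond_signal(&cond);	/* ... wake, all under the lock */
	pthread_mutex_unlock(&lock);
}

static void *mirrord(void *arg)
{
	pthread_mutex_lock(&lock);
	while (queued == 0)
		pthread_cond_wait(&cond, &lock);
	printf("draining %d bio(s)\n", queued);
	queued = 0;
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, mirrord, NULL);
	queue_bio();
	pthread_join(t, NULL);
	return 0;
}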
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index e009bba52d4c0c..dd074c8ecbbad0 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -431,6 +431,13 @@ static int dm_set_device_limits(struct dm_target *ti, struct dm_dev *dev,
+ 		return 0;
+ 	}
+ 
++	mutex_lock(&q->limits_lock);
++	/*
++	 * BLK_FEAT_ATOMIC_WRITES is not inherited from the bottom device in
++	 * blk_stack_limits(), so do it manually.
++	 */
++	limits->features |= (q->limits.features & BLK_FEAT_ATOMIC_WRITES);
++
+ 	if (blk_stack_limits(limits, &q->limits,
+ 			get_start_sect(bdev) + start) < 0)
+ 		DMWARN("%s: adding target device %pg caused an alignment inconsistency: "
+@@ -448,6 +455,7 @@ static int dm_set_device_limits(struct dm_target *ti, struct dm_dev *dev,
+ 	 */
+ 	if (!dm_target_has_integrity(ti->type))
+ 		queue_limits_stack_integrity_bdev(limits, bdev);
++	mutex_unlock(&q->limits_lock);
+ 	return 0;
+ }
+ 
+@@ -1733,8 +1741,12 @@ static int device_not_write_zeroes_capable(struct dm_target *ti, struct dm_dev *
+ 					   sector_t start, sector_t len, void *data)
+ {
+ 	struct request_queue *q = bdev_get_queue(dev->bdev);
++	int b;
+ 
+-	return !q->limits.max_write_zeroes_sectors;
++	mutex_lock(&q->limits_lock);
++	b = !q->limits.max_write_zeroes_sectors;
++	mutex_unlock(&q->limits_lock);
++	return b;
+ }
+ 
+ static bool dm_table_supports_write_zeroes(struct dm_table *t)
+diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
+index 0c41949db784ba..631a887b487cc4 100644
+--- a/drivers/md/dm-verity-fec.c
++++ b/drivers/md/dm-verity-fec.c
+@@ -593,6 +593,10 @@ int verity_fec_parse_opt_args(struct dm_arg_set *as, struct dm_verity *v,
+ 	(*argc)--;
+ 
+ 	if (!strcasecmp(arg_name, DM_VERITY_OPT_FEC_DEV)) {
++		if (v->fec->dev) {
++			ti->error = "FEC device already specified";
++			return -EINVAL;
++		}
+ 		r = dm_get_device(ti, arg_value, BLK_OPEN_READ, &v->fec->dev);
+ 		if (r) {
+ 			ti->error = "FEC device lookup failed";
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index 3c427f18a04b33..ed49bcbd224fa2 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -1120,6 +1120,9 @@ static int verity_alloc_most_once(struct dm_verity *v)
+ {
+ 	struct dm_target *ti = v->ti;
+ 
++	if (v->validated_blocks)
++		return 0;
++
+ 	/* the bitset can only handle INT_MAX blocks */
+ 	if (v->data_blocks > INT_MAX) {
+ 		ti->error = "device too large to use check_at_most_once";
+@@ -1143,6 +1146,9 @@ static int verity_alloc_zero_digest(struct dm_verity *v)
+ 	struct dm_verity_io *io;
+ 	u8 *zero_data;
+ 
++	if (v->zero_digest)
++		return 0;
++
+ 	v->zero_digest = kmalloc(v->digest_size, GFP_KERNEL);
+ 
+ 	if (!v->zero_digest)
+@@ -1577,7 +1583,7 @@ static int verity_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 			goto bad;
+ 	}
+ 
+-	/* Root hash signature is  a optional parameter*/
++	/* Root hash signature is an optional parameter */
+ 	r = verity_verify_root_hash(root_hash_digest_to_validate,
+ 				    strlen(root_hash_digest_to_validate),
+ 				    verify_args.sig,
+diff --git a/drivers/md/dm-verity-verify-sig.c b/drivers/md/dm-verity-verify-sig.c
+index a9e2c6c0a33c6d..d5261a0e4232e1 100644
+--- a/drivers/md/dm-verity-verify-sig.c
++++ b/drivers/md/dm-verity-verify-sig.c
+@@ -71,9 +71,14 @@ int verity_verify_sig_parse_opt_args(struct dm_arg_set *as,
+ 				     const char *arg_name)
+ {
+ 	struct dm_target *ti = v->ti;
+-	int ret = 0;
++	int ret;
+ 	const char *sig_key = NULL;
+ 
++	if (v->signature_key_desc) {
++		ti->error = DM_VERITY_VERIFY_ERR("root_hash_sig_key_desc already specified");
++		return -EINVAL;
++	}
++
+ 	if (!*argc) {
+ 		ti->error = DM_VERITY_VERIFY_ERR("Signature key not specified");
+ 		return -EINVAL;
+@@ -83,14 +88,18 @@ int verity_verify_sig_parse_opt_args(struct dm_arg_set *as,
+ 	(*argc)--;
+ 
+ 	ret = verity_verify_get_sig_from_key(sig_key, sig_opts);
+-	if (ret < 0)
++	if (ret < 0) {
+ 		ti->error = DM_VERITY_VERIFY_ERR("Invalid key specified");
++		return ret;
++	}
+ 
+ 	v->signature_key_desc = kstrdup(sig_key, GFP_KERNEL);
+-	if (!v->signature_key_desc)
++	if (!v->signature_key_desc) {
++		ti->error = DM_VERITY_VERIFY_ERR("Could not allocate memory for signature key");
+ 		return -ENOMEM;
++	}
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ /*
+diff --git a/drivers/media/cec/usb/extron-da-hd-4k-plus/extron-da-hd-4k-plus.c b/drivers/media/cec/usb/extron-da-hd-4k-plus/extron-da-hd-4k-plus.c
+index cfbfc4c1b2e67f..41d019b01ec09d 100644
+--- a/drivers/media/cec/usb/extron-da-hd-4k-plus/extron-da-hd-4k-plus.c
++++ b/drivers/media/cec/usb/extron-da-hd-4k-plus/extron-da-hd-4k-plus.c
+@@ -1002,8 +1002,8 @@ static int extron_cec_adap_transmit(struct cec_adapter *adap, u8 attempts,
+ 				    u32 signal_free_time, struct cec_msg *msg)
+ {
+ 	struct extron_port *port = cec_get_drvdata(adap);
+-	char buf[CEC_MAX_MSG_SIZE * 3 + 1];
+-	char cmd[CEC_MAX_MSG_SIZE * 3 + 13];
++	char buf[(CEC_MAX_MSG_SIZE - 1) * 3 + 1];
++	char cmd[sizeof(buf) + 14];
+ 	unsigned int i;
+ 
+ 	if (port->disconnected)
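The resized extron buffers follow from the formatting cost: each byte rendered as two hex digits plus a separator takes three characters, so n bytes need 3 * n plus a terminating NUL. The driver budgets for CEC_MAX_MSG_SIZE - 1 payload bytes, presumably because the first message byte travels elsewhere in the command — that part is an assumption here. The arithmetic itself:

#include <stdio.h>

#define MAX_BYTES 15	/* stands in for CEC_MAX_MSG_SIZE - 1 */

int main(void)
{
	unsigned char msg[MAX_BYTES] = { 0x40, 0x04, 0xff };
	char buf[3 * MAX_BYTES + 1];	/* "xx " per byte, plus NUL */
	int i, off = 0;

	for (i = 0; i < MAX_BYTES; i++)
		off += snprintf(buf + off, sizeof(buf) - off, "%02x ", msg[i]);
	printf("%s(used %d of %zu)\n", buf, off, sizeof(buf));
	return 0;
}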
+diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+index c6ddf2357c58c1..b3bf2173c14e1b 100644
+--- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
++++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+@@ -469,7 +469,7 @@ vb2_dma_sg_dmabuf_ops_begin_cpu_access(struct dma_buf *dbuf,
+ 	struct vb2_dma_sg_buf *buf = dbuf->priv;
+ 	struct sg_table *sgt = buf->dma_sgt;
+ 
+-	dma_sync_sg_for_cpu(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir);
++	dma_sync_sgtable_for_cpu(buf->dev, sgt, buf->dma_dir);
+ 	return 0;
+ }
+ 
+@@ -480,7 +480,7 @@ vb2_dma_sg_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
+ 	struct vb2_dma_sg_buf *buf = dbuf->priv;
+ 	struct sg_table *sgt = buf->dma_sgt;
+ 
+-	dma_sync_sg_for_device(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir);
++	dma_sync_sgtable_for_device(buf->dev, sgt, buf->dma_dir);
+ 	return 0;
+ }
+ 
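The dma_sync_sgtable_* conversions fix a count mismatch: the sg-level sync calls expect the CPU-side entry count originally passed to dma_map_sg() (orig_nents), while sgt->nents holds the possibly smaller DMA-mapped count after an IOMMU coalesces segments. A toy model of the pitfall — the real helpers live in <linux/dma-mapping.h>; this only mimics which count gets used:

#include <stdio.h>

struct sg_table_model { int orig_nents; int nents; };

static void sync_sg(int nents) { printf("syncing %d entries\n", nents); }

static void sync_sgtable(const struct sg_table_model *sgt)
{
	sync_sg(sgt->orig_nents);	/* wrapper picks the correct count */
}

int main(void)
{
	/* IOMMU coalesced 4 CPU entries into 2 DMA segments */
	struct sg_table_model sgt = { .orig_nents = 4, .nents = 2 };

	sync_sg(sgt.nents);	/* the old, buggy call pattern: too few */
	sync_sgtable(&sgt);	/* the fixed pattern: all 4 entries */
	return 0;
}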
+diff --git a/drivers/media/i2c/ccs-pll.c b/drivers/media/i2c/ccs-pll.c
+index 34ccda66652458..2051f1f292294a 100644
+--- a/drivers/media/i2c/ccs-pll.c
++++ b/drivers/media/i2c/ccs-pll.c
+@@ -312,6 +312,11 @@ __ccs_pll_calculate_vt_tree(struct device *dev,
+ 	dev_dbg(dev, "more_mul2: %u\n", more_mul);
+ 
+ 	pll_fr->pll_multiplier = mul * more_mul;
++	if (pll_fr->pll_multiplier > lim_fr->max_pll_multiplier) {
++		dev_dbg(dev, "pll multiplier %u too high\n",
++			pll_fr->pll_multiplier);
++		return -EINVAL;
++	}
+ 
+ 	if (pll_fr->pll_multiplier * pll_fr->pll_ip_clk_freq_hz >
+ 	    lim_fr->max_pll_op_clk_freq_hz)
+@@ -397,6 +402,8 @@ static int ccs_pll_calculate_vt_tree(struct device *dev,
+ 	min_pre_pll_clk_div = max_t(u16, min_pre_pll_clk_div,
+ 				    pll->ext_clk_freq_hz /
+ 				    lim_fr->max_pll_ip_clk_freq_hz);
++	if (!(pll->flags & CCS_PLL_FLAG_EXT_IP_PLL_DIVIDER))
++		min_pre_pll_clk_div = clk_div_even(min_pre_pll_clk_div);
+ 
+ 	dev_dbg(dev, "vt min/max_pre_pll_clk_div: %u,%u\n",
+ 		min_pre_pll_clk_div, max_pre_pll_clk_div);
+@@ -435,7 +442,7 @@ static int ccs_pll_calculate_vt_tree(struct device *dev,
+ 	return -EINVAL;
+ }
+ 
+-static void
++static int
+ ccs_pll_calculate_vt(struct device *dev, const struct ccs_pll_limits *lim,
+ 		     const struct ccs_pll_branch_limits_bk *op_lim_bk,
+ 		     struct ccs_pll *pll, struct ccs_pll_branch_fr *pll_fr,
+@@ -558,6 +565,8 @@ ccs_pll_calculate_vt(struct device *dev, const struct ccs_pll_limits *lim,
+ 		if (best_pix_div < SHRT_MAX >> 1)
+ 			break;
+ 	}
++	if (best_pix_div == SHRT_MAX >> 1)
++		return -EINVAL;
+ 
+ 	pll->vt_bk.sys_clk_div = DIV_ROUND_UP(vt_div, best_pix_div);
+ 	pll->vt_bk.pix_clk_div = best_pix_div;
+@@ -570,6 +579,8 @@ ccs_pll_calculate_vt(struct device *dev, const struct ccs_pll_limits *lim,
+ out_calc_pixel_rate:
+ 	pll->pixel_rate_pixel_array =
+ 		pll->vt_bk.pix_clk_freq_hz * pll->vt_lanes;
++
++	return 0;
+ }
+ 
+ /*
+@@ -792,7 +803,7 @@ int ccs_pll_calculate(struct device *dev, const struct ccs_pll_limits *lim,
+ 		op_lim_fr->min_pre_pll_clk_div, op_lim_fr->max_pre_pll_clk_div);
+ 	max_op_pre_pll_clk_div =
+ 		min_t(u16, op_lim_fr->max_pre_pll_clk_div,
+-		      clk_div_even(pll->ext_clk_freq_hz /
++		      DIV_ROUND_UP(pll->ext_clk_freq_hz,
+ 				   op_lim_fr->min_pll_ip_clk_freq_hz));
+ 	min_op_pre_pll_clk_div =
+ 		max_t(u16, op_lim_fr->min_pre_pll_clk_div,
+@@ -815,6 +826,8 @@ int ccs_pll_calculate(struct device *dev, const struct ccs_pll_limits *lim,
+ 			      one_or_more(
+ 				      DIV_ROUND_UP(op_lim_fr->max_pll_op_clk_freq_hz,
+ 						   pll->ext_clk_freq_hz))));
++	if (!(pll->flags & CCS_PLL_FLAG_EXT_IP_PLL_DIVIDER))
++		min_op_pre_pll_clk_div = clk_div_even(min_op_pre_pll_clk_div);
+ 	dev_dbg(dev, "pll_op check: min / max op_pre_pll_clk_div: %u / %u\n",
+ 		min_op_pre_pll_clk_div, max_op_pre_pll_clk_div);
+ 
+@@ -843,8 +856,10 @@ int ccs_pll_calculate(struct device *dev, const struct ccs_pll_limits *lim,
+ 		if (pll->flags & CCS_PLL_FLAG_DUAL_PLL)
+ 			break;
+ 
+-		ccs_pll_calculate_vt(dev, lim, op_lim_bk, pll, op_pll_fr,
+-				     op_pll_bk, cphy, phy_const);
++		rval = ccs_pll_calculate_vt(dev, lim, op_lim_bk, pll, op_pll_fr,
++					    op_pll_bk, cphy, phy_const);
++		if (rval)
++			continue;
+ 
+ 		rval = check_bk_bounds(dev, lim, pll, PLL_VT);
+ 		if (rval)
+diff --git a/drivers/media/i2c/ds90ub913.c b/drivers/media/i2c/ds90ub913.c
+index fd2d2d5272bfb6..1445ebbcc9cabb 100644
+--- a/drivers/media/i2c/ds90ub913.c
++++ b/drivers/media/i2c/ds90ub913.c
+@@ -450,10 +450,10 @@ static int ub913_set_fmt(struct v4l2_subdev *sd,
+ 	if (!fmt)
+ 		return -EINVAL;
+ 
+-	format->format.code = finfo->outcode;
+-
+ 	*fmt = format->format;
+ 
++	fmt->code = finfo->outcode;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/i2c/imx334.c b/drivers/media/i2c/imx334.c
+index a544fc3df39c28..b47cb3b8f36897 100644
+--- a/drivers/media/i2c/imx334.c
++++ b/drivers/media/i2c/imx334.c
+@@ -352,6 +352,12 @@ static const struct imx334_reg mode_3840x2160_regs[] = {
+ 	{0x302d, 0x00},
+ 	{0x302e, 0x00},
+ 	{0x302f, 0x0f},
++	{0x3074, 0xb0},
++	{0x3075, 0x00},
++	{0x308e, 0xb1},
++	{0x308f, 0x00},
++	{0x30d8, 0x20},
++	{0x30d9, 0x12},
+ 	{0x3076, 0x70},
+ 	{0x3077, 0x08},
+ 	{0x3090, 0x70},
+@@ -1391,6 +1397,9 @@ static int imx334_probe(struct i2c_client *client)
+ 		goto error_handler_free;
+ 	}
+ 
++	pm_runtime_set_active(imx334->dev);
++	pm_runtime_enable(imx334->dev);
++
+ 	ret = v4l2_async_register_subdev_sensor(&imx334->sd);
+ 	if (ret < 0) {
+ 		dev_err(imx334->dev,
+@@ -1398,13 +1407,13 @@ static int imx334_probe(struct i2c_client *client)
+ 		goto error_media_entity;
+ 	}
+ 
+-	pm_runtime_set_active(imx334->dev);
+-	pm_runtime_enable(imx334->dev);
+ 	pm_runtime_idle(imx334->dev);
+ 
+ 	return 0;
+ 
+ error_media_entity:
++	pm_runtime_disable(imx334->dev);
++	pm_runtime_set_suspended(imx334->dev);
+ 	media_entity_cleanup(&imx334->sd.entity);
+ error_handler_free:
+ 	v4l2_ctrl_handler_free(imx334->sd.ctrl_handler);
+@@ -1432,7 +1441,10 @@ static void imx334_remove(struct i2c_client *client)
+ 	v4l2_ctrl_handler_free(sd->ctrl_handler);
+ 
+ 	pm_runtime_disable(&client->dev);
+-	pm_runtime_suspended(&client->dev);
++	if (!pm_runtime_status_suspended(&client->dev)) {
++		imx334_power_off(&client->dev);
++		pm_runtime_set_suspended(&client->dev);
++	}
+ 
+ 	mutex_destroy(&imx334->mutex);
+ }
+diff --git a/drivers/media/i2c/imx335.c b/drivers/media/i2c/imx335.c
+index 0beb80b8c45815..9b4db4cd4929ca 100644
+--- a/drivers/media/i2c/imx335.c
++++ b/drivers/media/i2c/imx335.c
+@@ -31,7 +31,7 @@
+ #define IMX335_REG_CPWAIT_TIME		CCI_REG8(0x300d)
+ #define IMX335_REG_WINMODE		CCI_REG8(0x3018)
+ #define IMX335_REG_HTRIMMING_START	CCI_REG16_LE(0x302c)
+-#define IMX335_REG_HNUM			CCI_REG8(0x302e)
++#define IMX335_REG_HNUM			CCI_REG16_LE(0x302e)
+ 
+ /* Lines per frame */
+ #define IMX335_REG_VMAX			CCI_REG24_LE(0x3030)
+@@ -660,7 +660,8 @@ static int imx335_enum_frame_size(struct v4l2_subdev *sd,
+ 	struct imx335 *imx335 = to_imx335(sd);
+ 	u32 code;
+ 
+-	if (fsize->index > ARRAY_SIZE(imx335_mbus_codes))
++	/* Only a single supported_mode available. */
++	if (fsize->index > 0)
+ 		return -EINVAL;
+ 
+ 	code = imx335_get_format_code(imx335, fsize->code);
+diff --git a/drivers/media/i2c/lt6911uxe.c b/drivers/media/i2c/lt6911uxe.c
+index c5b40bb58a373d..24857d683fcfcf 100644
+--- a/drivers/media/i2c/lt6911uxe.c
++++ b/drivers/media/i2c/lt6911uxe.c
+@@ -605,10 +605,10 @@ static int lt6911uxe_probe(struct i2c_client *client)
+ 		return dev_err_probe(dev, PTR_ERR(lt6911uxe->reset_gpio),
+ 				     "failed to get reset gpio\n");
+ 
+-	lt6911uxe->irq_gpio = devm_gpiod_get(dev, "readystat", GPIOD_IN);
++	lt6911uxe->irq_gpio = devm_gpiod_get(dev, "hpd", GPIOD_IN);
+ 	if (IS_ERR(lt6911uxe->irq_gpio))
+ 		return dev_err_probe(dev, PTR_ERR(lt6911uxe->irq_gpio),
+-				     "failed to get ready_stat gpio\n");
++				     "failed to get hpd gpio\n");
+ 
+ 	ret = lt6911uxe_fwnode_parse(lt6911uxe, dev);
+ 	if (ret)
+diff --git a/drivers/media/i2c/ov08x40.c b/drivers/media/i2c/ov08x40.c
+index cf0e41fc30719d..54575eea3c4906 100644
+--- a/drivers/media/i2c/ov08x40.c
++++ b/drivers/media/i2c/ov08x40.c
+@@ -1341,7 +1341,7 @@ static int ov08x40_power_on(struct device *dev)
+ 	}
+ 
+ 	gpiod_set_value_cansleep(ov08x->reset_gpio, 0);
+-	usleep_range(1500, 1800);
++	usleep_range(5000, 5500);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/media/i2c/ov2740.c b/drivers/media/i2c/ov2740.c
+index 80d151e8ae294d..6cf461e3373ceb 100644
+--- a/drivers/media/i2c/ov2740.c
++++ b/drivers/media/i2c/ov2740.c
+@@ -1456,12 +1456,12 @@ static int ov2740_probe(struct i2c_client *client)
+ 	return 0;
+ 
+ probe_error_v4l2_subdev_cleanup:
++	pm_runtime_disable(&client->dev);
++	pm_runtime_set_suspended(&client->dev);
+ 	v4l2_subdev_cleanup(&ov2740->sd);
+ 
+ probe_error_media_entity_cleanup:
+ 	media_entity_cleanup(&ov2740->sd.entity);
+-	pm_runtime_disable(&client->dev);
+-	pm_runtime_set_suspended(&client->dev);
+ 
+ probe_error_v4l2_ctrl_handler_free:
+ 	v4l2_ctrl_handler_free(ov2740->sd.ctrl_handler);
+diff --git a/drivers/media/i2c/ov5675.c b/drivers/media/i2c/ov5675.c
+index c1081deffc2f99..e7aec281e9a49d 100644
+--- a/drivers/media/i2c/ov5675.c
++++ b/drivers/media/i2c/ov5675.c
+@@ -1295,11 +1295,8 @@ static int ov5675_probe(struct i2c_client *client)
+ 		return -ENOMEM;
+ 
+ 	ret = ov5675_get_hwcfg(ov5675, &client->dev);
+-	if (ret) {
+-		dev_err(&client->dev, "failed to get HW configuration: %d",
+-			ret);
++	if (ret)
+ 		return ret;
+-	}
+ 
+ 	v4l2_i2c_subdev_init(&ov5675->sd, client, &ov5675_subdev_ops);
+ 
+diff --git a/drivers/media/i2c/ov8856.c b/drivers/media/i2c/ov8856.c
+index e6704d01824815..4b6874d2a104ad 100644
+--- a/drivers/media/i2c/ov8856.c
++++ b/drivers/media/i2c/ov8856.c
+@@ -2276,8 +2276,8 @@ static int ov8856_get_hwcfg(struct ov8856 *ov8856, struct device *dev)
+ 	if (!is_acpi_node(fwnode)) {
+ 		ov8856->xvclk = devm_clk_get(dev, "xvclk");
+ 		if (IS_ERR(ov8856->xvclk)) {
+-			dev_err(dev, "could not get xvclk clock (%pe)\n",
+-				ov8856->xvclk);
++			dev_err_probe(dev, PTR_ERR(ov8856->xvclk),
++				      "could not get xvclk clock\n");
+ 			return PTR_ERR(ov8856->xvclk);
+ 		}
+ 
+@@ -2382,11 +2382,8 @@ static int ov8856_probe(struct i2c_client *client)
+ 		return -ENOMEM;
+ 
+ 	ret = ov8856_get_hwcfg(ov8856, &client->dev);
+-	if (ret) {
+-		dev_err(&client->dev, "failed to get HW configuration: %d",
+-			ret);
++	if (ret)
+ 		return ret;
+-	}
+ 
+ 	v4l2_i2c_subdev_init(&ov8856->sd, client, &ov8856_subdev_ops);
+ 
+diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
+index 2d5f42f111583b..dcef93e1a3bcdf 100644
+--- a/drivers/media/i2c/tc358743.c
++++ b/drivers/media/i2c/tc358743.c
+@@ -313,6 +313,10 @@ static int tc358743_get_detected_timings(struct v4l2_subdev *sd,
+ 
+ 	memset(timings, 0, sizeof(struct v4l2_dv_timings));
+ 
++	/* if HPD is low, ignore any video */
++	if (!(i2c_rd8(sd, HPD_CTL) & MASK_HPD_OUT0))
++		return -ENOLINK;
++
+ 	if (no_signal(sd)) {
+ 		v4l2_dbg(1, debug, sd, "%s: no valid signal\n", __func__);
+ 		return -ENOLINK;
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-dma.c b/drivers/media/pci/intel/ipu6/ipu6-dma.c
+index 1ca60ca79dba3b..7296373d36b0aa 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-dma.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-dma.c
+@@ -172,7 +172,7 @@ void *ipu6_dma_alloc(struct ipu6_bus_device *sys, size_t size,
+ 	count = PHYS_PFN(size);
+ 
+ 	iova = alloc_iova(&mmu->dmap->iovad, count,
+-			  PHYS_PFN(dma_get_mask(dev)), 0);
++			  PHYS_PFN(mmu->dmap->mmu_info->aperture_end), 0);
+ 	if (!iova)
+ 		goto out_kfree;
+ 
+@@ -398,7 +398,7 @@ int ipu6_dma_map_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
+ 		nents, npages);
+ 
+ 	iova = alloc_iova(&mmu->dmap->iovad, npages,
+-			  PHYS_PFN(dma_get_mask(dev)), 0);
++			  PHYS_PFN(mmu->dmap->mmu_info->aperture_end), 0);
+ 	if (!iova)
+ 		return 0;
+ 
+diff --git a/drivers/media/pci/intel/ipu6/ipu6.c b/drivers/media/pci/intel/ipu6/ipu6.c
+index 277af7cda8eec1..b00d0705fefa86 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6.c
++++ b/drivers/media/pci/intel/ipu6/ipu6.c
+@@ -464,11 +464,6 @@ static int ipu6_pci_config_setup(struct pci_dev *dev, u8 hw_ver)
+ {
+ 	int ret;
+ 
+-	/* disable IPU6 PCI ATS on mtl ES2 */
+-	if (is_ipu6ep_mtl(hw_ver) && boot_cpu_data.x86_stepping == 0x2 &&
+-	    pci_ats_supported(dev))
+-		pci_disable_ats(dev);
+-
+ 	/* No PCI msi capability for IPU6EP */
+ 	if (is_ipu6ep(hw_ver) || is_ipu6ep_mtl(hw_ver)) {
+ 		/* likely do nothing as msi not enabled by default */
+diff --git a/drivers/media/platform/imagination/e5010-jpeg-enc.c b/drivers/media/platform/imagination/e5010-jpeg-enc.c
+index c194f830577f9a..ae868d9f73e13f 100644
+--- a/drivers/media/platform/imagination/e5010-jpeg-enc.c
++++ b/drivers/media/platform/imagination/e5010-jpeg-enc.c
+@@ -1057,8 +1057,11 @@ static int e5010_probe(struct platform_device *pdev)
+ 	e5010->vdev->lock = &e5010->mutex;
+ 
+ 	ret = v4l2_device_register(dev, &e5010->v4l2_dev);
+-	if (ret)
+-		return dev_err_probe(dev, ret, "failed to register v4l2 device\n");
++	if (ret) {
++		dev_err_probe(dev, ret, "failed to register v4l2 device\n");
++		goto fail_after_video_device_alloc;
++	}
++
+ 
+ 	e5010->m2m_dev = v4l2_m2m_init(&e5010_m2m_ops);
+ 	if (IS_ERR(e5010->m2m_dev)) {
+@@ -1118,6 +1121,8 @@ static int e5010_probe(struct platform_device *pdev)
+ 	v4l2_m2m_release(e5010->m2m_dev);
+ fail_after_v4l2_register:
+ 	v4l2_device_unregister(&e5010->v4l2_dev);
++fail_after_video_device_alloc:
++	video_device_release(e5010->vdev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_hevc_req_multi_if.c b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_hevc_req_multi_if.c
+index aa721cc43647c7..2725db882e5b30 100644
+--- a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_hevc_req_multi_if.c
++++ b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_hevc_req_multi_if.c
+@@ -821,7 +821,7 @@ static int vdec_hevc_slice_setup_core_buffer(struct vdec_hevc_slice_inst *inst,
+ 	inst->vsi_core->fb.y.dma_addr = y_fb_dma;
+ 	inst->vsi_core->fb.y.size = ctx->picinfo.fb_sz[0];
+ 	inst->vsi_core->fb.c.dma_addr = c_fb_dma;
+-	inst->vsi_core->fb.y.size = ctx->picinfo.fb_sz[1];
++	inst->vsi_core->fb.c.size = ctx->picinfo.fb_sz[1];
+ 
+ 	inst->vsi_core->dec.vdec_fb_va = (unsigned long)fb;
+ 
+diff --git a/drivers/media/platform/nuvoton/npcm-video.c b/drivers/media/platform/nuvoton/npcm-video.c
+index 7a9d8928ae4019..3022fdcf66ec7a 100644
+--- a/drivers/media/platform/nuvoton/npcm-video.c
++++ b/drivers/media/platform/nuvoton/npcm-video.c
+@@ -863,7 +863,6 @@ static void npcm_video_detect_resolution(struct npcm_video *video)
+ 	struct regmap *gfxi = video->gfx_regmap;
+ 	unsigned int dispst;
+ 
+-	video->v4l2_input_status = V4L2_IN_ST_NO_SIGNAL;
+ 	det->width = npcm_video_hres(video);
+ 	det->height = npcm_video_vres(video);
+ 
+@@ -892,12 +891,16 @@ static void npcm_video_detect_resolution(struct npcm_video *video)
+ 		clear_bit(VIDEO_RES_CHANGING, &video->flags);
+ 	}
+ 
+-	if (det->width && det->height)
++	if (det->width && det->height) {
+ 		video->v4l2_input_status = 0;
+-
+-	dev_dbg(video->dev, "Got resolution[%dx%d] -> [%dx%d], status %d\n",
+-		act->width, act->height, det->width, det->height,
+-		video->v4l2_input_status);
++		dev_dbg(video->dev, "Got resolution[%dx%d] -> [%dx%d], status %d\n",
++			act->width, act->height, det->width, det->height,
++			video->v4l2_input_status);
++	} else {
++		video->v4l2_input_status = V4L2_IN_ST_NO_SIGNAL;
++		dev_err(video->dev, "Got invalid resolution[%dx%d]\n", det->width,
++			det->height);
++	}
+ }
+ 
+ static int npcm_video_set_resolution(struct npcm_video *video,
+diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg-hw.h b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg-hw.h
+index d579c804b04790..adb93e977be91a 100644
+--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg-hw.h
++++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg-hw.h
+@@ -89,6 +89,7 @@
+ /* SLOT_STATUS fields for slots 0..3 */
+ #define SLOT_STATUS_FRMDONE			(0x1 << 3)
+ #define SLOT_STATUS_ENC_CONFIG_ERR		(0x1 << 8)
++#define SLOT_STATUS_ONGOING			(0x1 << 31)
+ 
+ /* SLOT_IRQ_EN fields TBD */
+ 
+diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+index 1221b309a91639..dce5620d29e477 100644
+--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
++++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+@@ -752,6 +752,32 @@ static int mxc_get_free_slot(struct mxc_jpeg_slot_data *slot_data)
+ 	return -1;
+ }
+ 
++static void mxc_jpeg_free_slot_data(struct mxc_jpeg_dev *jpeg)
++{
++	/* free descriptor for decoding/encoding phase */
++	dma_free_coherent(jpeg->dev, sizeof(struct mxc_jpeg_desc),
++			  jpeg->slot_data.desc,
++			  jpeg->slot_data.desc_handle);
++	jpeg->slot_data.desc = NULL;
++	jpeg->slot_data.desc_handle = 0;
++
++	/* free descriptor for encoder configuration phase / decoder DHT */
++	dma_free_coherent(jpeg->dev, sizeof(struct mxc_jpeg_desc),
++			  jpeg->slot_data.cfg_desc,
++			  jpeg->slot_data.cfg_desc_handle);
++	jpeg->slot_data.cfg_desc_handle = 0;
++	jpeg->slot_data.cfg_desc = NULL;
++
++	/* free configuration stream */
++	dma_free_coherent(jpeg->dev, MXC_JPEG_MAX_CFG_STREAM,
++			  jpeg->slot_data.cfg_stream_vaddr,
++			  jpeg->slot_data.cfg_stream_handle);
++	jpeg->slot_data.cfg_stream_vaddr = NULL;
++	jpeg->slot_data.cfg_stream_handle = 0;
++
++	jpeg->slot_data.used = false;
++}
++
+ static bool mxc_jpeg_alloc_slot_data(struct mxc_jpeg_dev *jpeg)
+ {
+ 	struct mxc_jpeg_desc *desc;
+@@ -794,30 +820,11 @@ static bool mxc_jpeg_alloc_slot_data(struct mxc_jpeg_dev *jpeg)
+ 	return true;
+ err:
+ 	dev_err(jpeg->dev, "Could not allocate descriptors for slot %d", jpeg->slot_data.slot);
++	mxc_jpeg_free_slot_data(jpeg);
+ 
+ 	return false;
+ }
+ 
+-static void mxc_jpeg_free_slot_data(struct mxc_jpeg_dev *jpeg)
+-{
+-	/* free descriptor for decoding/encoding phase */
+-	dma_free_coherent(jpeg->dev, sizeof(struct mxc_jpeg_desc),
+-			  jpeg->slot_data.desc,
+-			  jpeg->slot_data.desc_handle);
+-
+-	/* free descriptor for encoder configuration phase / decoder DHT */
+-	dma_free_coherent(jpeg->dev, sizeof(struct mxc_jpeg_desc),
+-			  jpeg->slot_data.cfg_desc,
+-			  jpeg->slot_data.cfg_desc_handle);
+-
+-	/* free configuration stream */
+-	dma_free_coherent(jpeg->dev, MXC_JPEG_MAX_CFG_STREAM,
+-			  jpeg->slot_data.cfg_stream_vaddr,
+-			  jpeg->slot_data.cfg_stream_handle);
+-
+-	jpeg->slot_data.used = false;
+-}
+-
+ static void mxc_jpeg_check_and_set_last_buffer(struct mxc_jpeg_ctx *ctx,
+ 					       struct vb2_v4l2_buffer *src_buf,
+ 					       struct vb2_v4l2_buffer *dst_buf)
+@@ -877,6 +884,34 @@ static u32 mxc_jpeg_get_plane_size(struct mxc_jpeg_q_data *q_data, u32 plane_no)
+ 	return size;
+ }
+ 
++static bool mxc_dec_is_ongoing(struct mxc_jpeg_ctx *ctx)
++{
++	struct mxc_jpeg_dev *jpeg = ctx->mxc_jpeg;
++	u32 curr_desc;
++	u32 slot_status;
++
++	curr_desc = readl(jpeg->base_reg + MXC_SLOT_OFFSET(ctx->slot, SLOT_CUR_DESCPT_PTR));
++	if (curr_desc == jpeg->slot_data.cfg_desc_handle)
++		return true;
++
++	slot_status = readl(jpeg->base_reg + MXC_SLOT_OFFSET(ctx->slot, SLOT_STATUS));
++	if (slot_status & SLOT_STATUS_ONGOING)
++		return true;
++
++	/*
++	 * The curr_desc register is updated when next_descpt_ptr is loaded,
++	 * while the ongoing bit of slot_status is set when the 32-byte descriptor
++	 * is loaded. There is a short window in between, which can cause a false
++	 * negative. Since a register read is slow compared with the IP fetching
++	 * 32 bytes from memory, reading slot_status twice avoids this situation.
++	 */
++	slot_status = readl(jpeg->base_reg + MXC_SLOT_OFFSET(ctx->slot, SLOT_STATUS));
++	if (slot_status & SLOT_STATUS_ONGOING)
++		return true;
++
++	return false;
++}
++
+ static irqreturn_t mxc_jpeg_dec_irq(int irq, void *priv)
+ {
+ 	struct mxc_jpeg_dev *jpeg = priv;
+@@ -946,7 +981,8 @@ static irqreturn_t mxc_jpeg_dec_irq(int irq, void *priv)
+ 		mxc_jpeg_enc_mode_go(dev, reg, mxc_jpeg_is_extended_sequential(q_data->fmt));
+ 		goto job_unlock;
+ 	}
+-	if (jpeg->mode == MXC_JPEG_DECODE && jpeg_src_buf->dht_needed) {
++	if (jpeg->mode == MXC_JPEG_DECODE && jpeg_src_buf->dht_needed &&
++	    mxc_dec_is_ongoing(ctx)) {
+ 		jpeg_src_buf->dht_needed = false;
+ 		dev_dbg(dev, "Decoder DHT cfg finished. Start decoding...\n");
+ 		goto job_unlock;
+@@ -1918,9 +1954,19 @@ static void mxc_jpeg_buf_queue(struct vb2_buffer *vb)
+ 	jpeg_src_buf = vb2_to_mxc_buf(vb);
+ 	jpeg_src_buf->jpeg_parse_error = false;
+ 	ret = mxc_jpeg_parse(ctx, vb);
+-	if (ret)
++	if (ret) {
+ 		jpeg_src_buf->jpeg_parse_error = true;
+ 
++		/*
++		 * If the capture queue is not set up, device_run() won't be
++		 * scheduled; drop the erroneous buffer so that decoding can continue.
++		 */
++		if (!vb2_is_streaming(v4l2_m2m_get_dst_vq(ctx->fh.m2m_ctx))) {
++			v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_ERROR);
++			return;
++		}
++	}
++
+ end:
+ 	v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, vbuf);
+ }
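mxc_dec_is_ongoing() above compensates for the window between curr_desc advancing and the ONGOING bit being set by re-reading the status once. A runnable sketch of the same flow, with the MMIO reads stubbed out so the second read reproduces that window — the stubs are illustrative, not the hardware interface:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SLOT_STATUS_ONGOING	(1u << 31)

/* fake MMIO: status flips to ONGOING only on the second read, modelling
 * the short window the comment in the patch describes */
static int reads;
static uint32_t read_curr_desc(void) { return 0; }
static uint32_t read_slot_status(void)
{
	return ++reads >= 2 ? SLOT_STATUS_ONGOING : 0;
}

static bool dec_is_ongoing(uint32_t cfg_desc_handle)
{
	if (read_curr_desc() == cfg_desc_handle)
		return true;		/* still on the config descriptor */

	if (read_slot_status() & SLOT_STATUS_ONGOING)
		return true;

	/* re-read: the bit may have been set while we were checking */
	return read_slot_status() & SLOT_STATUS_ONGOING;
}

int main(void)
{
	printf("ongoing: %d\n", dec_is_ongoing(0x1000));	/* 1 */
	return 0;
}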
+diff --git a/drivers/media/platform/nxp/imx8-isi/imx8-isi-m2m.c b/drivers/media/platform/nxp/imx8-isi/imx8-isi-m2m.c
+index 794050a6a919b8..22e49d3a128732 100644
+--- a/drivers/media/platform/nxp/imx8-isi/imx8-isi-m2m.c
++++ b/drivers/media/platform/nxp/imx8-isi/imx8-isi-m2m.c
+@@ -43,6 +43,7 @@ struct mxc_isi_m2m_ctx_queue_data {
+ 	struct v4l2_pix_format_mplane format;
+ 	const struct mxc_isi_format_info *info;
+ 	u32 sequence;
++	bool streaming;
+ };
+ 
+ struct mxc_isi_m2m_ctx {
+@@ -484,15 +485,18 @@ static int mxc_isi_m2m_streamon(struct file *file, void *fh,
+ 				enum v4l2_buf_type type)
+ {
+ 	struct mxc_isi_m2m_ctx *ctx = to_isi_m2m_ctx(fh);
++	struct mxc_isi_m2m_ctx_queue_data *q = mxc_isi_m2m_ctx_qdata(ctx, type);
+ 	const struct v4l2_pix_format_mplane *out_pix = &ctx->queues.out.format;
+ 	const struct v4l2_pix_format_mplane *cap_pix = &ctx->queues.cap.format;
+ 	const struct mxc_isi_format_info *cap_info = ctx->queues.cap.info;
+ 	const struct mxc_isi_format_info *out_info = ctx->queues.out.info;
+ 	struct mxc_isi_m2m *m2m = ctx->m2m;
+ 	bool bypass;
+-
+ 	int ret;
+ 
++	if (q->streaming)
++		return 0;
++
+ 	mutex_lock(&m2m->lock);
+ 
+ 	if (m2m->usage_count == INT_MAX) {
+@@ -545,6 +549,8 @@ static int mxc_isi_m2m_streamon(struct file *file, void *fh,
+ 		goto unchain;
+ 	}
+ 
++	q->streaming = true;
++
+ 	return 0;
+ 
+ unchain:
+@@ -567,10 +573,14 @@ static int mxc_isi_m2m_streamoff(struct file *file, void *fh,
+ 				 enum v4l2_buf_type type)
+ {
+ 	struct mxc_isi_m2m_ctx *ctx = to_isi_m2m_ctx(fh);
++	struct mxc_isi_m2m_ctx_queue_data *q = mxc_isi_m2m_ctx_qdata(ctx, type);
+ 	struct mxc_isi_m2m *m2m = ctx->m2m;
+ 
+ 	v4l2_m2m_ioctl_streamoff(file, fh, type);
+ 
++	if (!q->streaming)
++		return 0;
++
+ 	mutex_lock(&m2m->lock);
+ 
+ 	/*
+@@ -596,6 +606,8 @@ static int mxc_isi_m2m_streamoff(struct file *file, void *fh,
+ 
+ 	mutex_unlock(&m2m->lock);
+ 
++	q->streaming = false;
++
+ 	return 0;
+ }
+ 
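The imx8-isi change makes STREAMON/STREAMOFF idempotent per queue so repeated ioctls cannot unbalance the shared usage count. A minimal model of the flag logic, with the driver types reduced to toys:

#include <stdbool.h>
#include <stdio.h>

struct queue { bool streaming; };
static int usage_count;

static void streamon(struct queue *q)
{
	if (q->streaming)
		return;			/* second STREAMON: do nothing */
	usage_count++;
	q->streaming = true;
}

static void streamoff(struct queue *q)
{
	if (!q->streaming)
		return;			/* STREAMOFF without STREAMON */
	usage_count--;
	q->streaming = false;
}

int main(void)
{
	struct queue q = { 0 };

	streamon(&q); streamon(&q);	/* counted once */
	streamoff(&q); streamoff(&q);	/* balanced */
	printf("usage_count=%d\n", usage_count);	/* 0 */
	return 0;
}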
+diff --git a/drivers/media/platform/qcom/camss/camss-csid.c b/drivers/media/platform/qcom/camss/camss-csid.c
+index d08117f46f3b9b..5284b5857368c3 100644
+--- a/drivers/media/platform/qcom/camss/camss-csid.c
++++ b/drivers/media/platform/qcom/camss/camss-csid.c
+@@ -613,8 +613,8 @@ u32 csid_hw_version(struct csid_device *csid)
+ 	hw_gen = (hw_version >> HW_VERSION_GENERATION) & 0xF;
+ 	hw_rev = (hw_version >> HW_VERSION_REVISION) & 0xFFF;
+ 	hw_step = (hw_version >> HW_VERSION_STEPPING) & 0xFFFF;
+-	dev_info(csid->camss->dev, "CSID:%d HW Version = %u.%u.%u\n",
+-		 csid->id, hw_gen, hw_rev, hw_step);
++	dev_dbg(csid->camss->dev, "CSID:%d HW Version = %u.%u.%u\n",
++		csid->id, hw_gen, hw_rev, hw_step);
+ 
+ 	return hw_version;
+ }
+diff --git a/drivers/media/platform/qcom/camss/camss-vfe.c b/drivers/media/platform/qcom/camss/camss-vfe.c
+index cf0e8f5c004a20..91bc0cb7781e44 100644
+--- a/drivers/media/platform/qcom/camss/camss-vfe.c
++++ b/drivers/media/platform/qcom/camss/camss-vfe.c
+@@ -428,8 +428,8 @@ u32 vfe_hw_version(struct vfe_device *vfe)
+ 	u32 rev = (hw_version >> HW_VERSION_REVISION) & 0xFFF;
+ 	u32 step = (hw_version >> HW_VERSION_STEPPING) & 0xFFFF;
+ 
+-	dev_info(vfe->camss->dev, "VFE:%d HW Version = %u.%u.%u\n",
+-		 vfe->id, gen, rev, step);
++	dev_dbg(vfe->camss->dev, "VFE:%d HW Version = %u.%u.%u\n",
++		vfe->id, gen, rev, step);
+ 
+ 	return hw_version;
+ }
+diff --git a/drivers/media/platform/qcom/iris/iris_firmware.c b/drivers/media/platform/qcom/iris/iris_firmware.c
+index 7c493b4a75dbb2..f1b5cd56db3225 100644
+--- a/drivers/media/platform/qcom/iris/iris_firmware.c
++++ b/drivers/media/platform/qcom/iris/iris_firmware.c
+@@ -53,8 +53,10 @@ static int iris_load_fw_to_memory(struct iris_core *core, const char *fw_name)
+ 	}
+ 
+ 	mem_virt = memremap(mem_phys, res_size, MEMREMAP_WC);
+-	if (!mem_virt)
++	if (!mem_virt) {
++		ret = -ENOMEM;
+ 		goto err_release_fw;
++	}
+ 
+ 	ret = qcom_mdt_load(dev, firmware, fw_name,
+ 			    pas_id, mem_virt, mem_phys, res_size, NULL);
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index 77d48578ecd288..d305d74bb152d2 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -438,7 +438,7 @@ static int venus_probe(struct platform_device *pdev)
+ 
+ 	ret = v4l2_device_register(dev, &core->v4l2_dev);
+ 	if (ret)
+-		goto err_core_deinit;
++		goto err_hfi_destroy;
+ 
+ 	platform_set_drvdata(pdev, core);
+ 
+@@ -476,24 +476,24 @@ static int venus_probe(struct platform_device *pdev)
+ 
+ 	ret = venus_enumerate_codecs(core, VIDC_SESSION_TYPE_DEC);
+ 	if (ret)
+-		goto err_venus_shutdown;
++		goto err_core_deinit;
+ 
+ 	ret = venus_enumerate_codecs(core, VIDC_SESSION_TYPE_ENC);
+ 	if (ret)
+-		goto err_venus_shutdown;
++		goto err_core_deinit;
+ 
+ 	ret = pm_runtime_put_sync(dev);
+ 	if (ret) {
+ 		pm_runtime_get_noresume(dev);
+-		goto err_dev_unregister;
++		goto err_core_deinit;
+ 	}
+ 
+ 	venus_dbgfs_init(core);
+ 
+ 	return 0;
+ 
+-err_dev_unregister:
+-	v4l2_device_unregister(&core->v4l2_dev);
++err_core_deinit:
++	hfi_core_deinit(core, false);
+ err_venus_shutdown:
+ 	venus_shutdown(core);
+ err_firmware_deinit:
+@@ -506,9 +506,9 @@ static int venus_probe(struct platform_device *pdev)
+ 	pm_runtime_put_noidle(dev);
+ 	pm_runtime_disable(dev);
+ 	pm_runtime_set_suspended(dev);
++	v4l2_device_unregister(&core->v4l2_dev);
++err_hfi_destroy:
+ 	hfi_destroy(core);
+-err_core_deinit:
+-	hfi_core_deinit(core, false);
+ err_core_put:
+ 	if (core->pm_ops->core_put)
+ 		core->pm_ops->core_put(core);
+diff --git a/drivers/media/platform/qcom/venus/vdec.c b/drivers/media/platform/qcom/venus/vdec.c
+index 9f82882b77bcc5..39d0556d7237d7 100644
+--- a/drivers/media/platform/qcom/venus/vdec.c
++++ b/drivers/media/platform/qcom/venus/vdec.c
+@@ -154,14 +154,14 @@ find_format_by_index(struct venus_inst *inst, unsigned int index, u32 type)
+ 		return NULL;
+ 
+ 	for (i = 0; i < size; i++) {
+-		bool valid;
++		bool valid = false;
+ 
+ 		if (fmt[i].type != type)
+ 			continue;
+ 
+ 		if (V4L2_TYPE_IS_OUTPUT(type)) {
+ 			valid = venus_helper_check_codec(inst, fmt[i].pixfmt);
+-		} else if (V4L2_TYPE_IS_CAPTURE(type)) {
++		} else {
+ 			valid = venus_helper_check_format(inst, fmt[i].pixfmt);
+ 
+ 			if (fmt[i].pixfmt == V4L2_PIX_FMT_QC10C &&
+diff --git a/drivers/media/platform/renesas/rcar-vin/rcar-dma.c b/drivers/media/platform/renesas/rcar-vin/rcar-dma.c
+index 8de8712404409c..3af67c1b303d6e 100644
+--- a/drivers/media/platform/renesas/rcar-vin/rcar-dma.c
++++ b/drivers/media/platform/renesas/rcar-vin/rcar-dma.c
+@@ -679,22 +679,6 @@ void rvin_crop_scale_comp(struct rvin_dev *vin)
+ 
+ 	fmt = rvin_format_from_pixel(vin, vin->format.pixelformat);
+ 	stride = vin->format.bytesperline / fmt->bpp;
+-
+-	/* For RAW8 format bpp is 1, but the hardware process RAW8
+-	 * format in 2 pixel unit hence configure VNIS_REG as stride / 2.
+-	 */
+-	switch (vin->format.pixelformat) {
+-	case V4L2_PIX_FMT_SBGGR8:
+-	case V4L2_PIX_FMT_SGBRG8:
+-	case V4L2_PIX_FMT_SGRBG8:
+-	case V4L2_PIX_FMT_SRGGB8:
+-	case V4L2_PIX_FMT_GREY:
+-		stride /= 2;
+-		break;
+-	default:
+-		break;
+-	}
+-
+ 	rvin_write(vin, stride, VNIS_REG);
+ }
+ 
+@@ -910,7 +894,7 @@ static int rvin_setup(struct rvin_dev *vin)
+ 	case V4L2_PIX_FMT_SGBRG10:
+ 	case V4L2_PIX_FMT_SGRBG10:
+ 	case V4L2_PIX_FMT_SRGGB10:
+-		dmr = VNDMR_RMODE_RAW10 | VNDMR_YC_THR;
++		dmr = VNDMR_RMODE_RAW10;
+ 		break;
+ 	default:
+ 		vin_err(vin, "Invalid pixelformat (0x%x)\n",
+diff --git a/drivers/media/platform/renesas/rcar-vin/rcar-v4l2.c b/drivers/media/platform/renesas/rcar-vin/rcar-v4l2.c
+index 756fdfdbce616c..65da8d513b527d 100644
+--- a/drivers/media/platform/renesas/rcar-vin/rcar-v4l2.c
++++ b/drivers/media/platform/renesas/rcar-vin/rcar-v4l2.c
+@@ -88,19 +88,19 @@ static const struct rvin_video_format rvin_formats[] = {
+ 	},
+ 	{
+ 		.fourcc			= V4L2_PIX_FMT_SBGGR10,
+-		.bpp			= 4,
++		.bpp			= 2,
+ 	},
+ 	{
+ 		.fourcc			= V4L2_PIX_FMT_SGBRG10,
+-		.bpp			= 4,
++		.bpp			= 2,
+ 	},
+ 	{
+ 		.fourcc			= V4L2_PIX_FMT_SGRBG10,
+-		.bpp			= 4,
++		.bpp			= 2,
+ 	},
+ 	{
+ 		.fourcc			= V4L2_PIX_FMT_SRGGB10,
+-		.bpp			= 4,
++		.bpp			= 2,
+ 	},
+ };
+ 
+diff --git a/drivers/media/platform/renesas/vsp1/vsp1_rwpf.c b/drivers/media/platform/renesas/vsp1/vsp1_rwpf.c
+index 9d38203e73d00b..1b4bac7b7cfa1c 100644
+--- a/drivers/media/platform/renesas/vsp1/vsp1_rwpf.c
++++ b/drivers/media/platform/renesas/vsp1/vsp1_rwpf.c
+@@ -76,11 +76,20 @@ static int vsp1_rwpf_set_format(struct v4l2_subdev *subdev,
+ 	format = v4l2_subdev_state_get_format(state, fmt->pad);
+ 
+ 	if (fmt->pad == RWPF_PAD_SOURCE) {
++		const struct v4l2_mbus_framefmt *sink_format =
++			v4l2_subdev_state_get_format(state, RWPF_PAD_SINK);
++
+ 		/*
+ 		 * The RWPF performs format conversion but can't scale, only the
+-		 * format code can be changed on the source pad.
++		 * format code can be changed on the source pad when converting
++		 * between RGB and YUV.
+ 		 */
+-		format->code = fmt->format.code;
++		if (sink_format->code != MEDIA_BUS_FMT_AHSV8888_1X32 &&
++		    fmt->format.code != MEDIA_BUS_FMT_AHSV8888_1X32)
++			format->code = fmt->format.code;
++		else
++			format->code = sink_format->code;
++
+ 		fmt->format = *format;
+ 		goto done;
+ 	}
+diff --git a/drivers/media/platform/samsung/exynos4-is/fimc-is-regs.c b/drivers/media/platform/samsung/exynos4-is/fimc-is-regs.c
+index 366e6393817d21..5f9c44e825a5fa 100644
+--- a/drivers/media/platform/samsung/exynos4-is/fimc-is-regs.c
++++ b/drivers/media/platform/samsung/exynos4-is/fimc-is-regs.c
+@@ -164,6 +164,7 @@ int fimc_is_hw_change_mode(struct fimc_is *is)
+ 	if (WARN_ON(is->config_index >= ARRAY_SIZE(cmd)))
+ 		return -EINVAL;
+ 
++	fimc_is_hw_wait_intmsr0_intmsd0(is);
+ 	mcuctl_write(cmd[is->config_index], is, MCUCTL_REG_ISSR(0));
+ 	mcuctl_write(is->sensor_index, is, MCUCTL_REG_ISSR(1));
+ 	mcuctl_write(is->setfile.sub_index, is, MCUCTL_REG_ISSR(2));
+diff --git a/drivers/media/platform/ti/cal/cal-video.c b/drivers/media/platform/ti/cal/cal-video.c
+index e29743ae61e27e..c16754c136ca04 100644
+--- a/drivers/media/platform/ti/cal/cal-video.c
++++ b/drivers/media/platform/ti/cal/cal-video.c
+@@ -758,7 +758,7 @@ static int cal_start_streaming(struct vb2_queue *vq, unsigned int count)
+ 
+ 	ret = pm_runtime_resume_and_get(ctx->cal->dev);
+ 	if (ret < 0)
+-		goto error_pipeline;
++		goto error_unprepare;
+ 
+ 	cal_ctx_set_dma_addr(ctx, addr);
+ 	cal_ctx_start(ctx);
+@@ -775,8 +775,8 @@ static int cal_start_streaming(struct vb2_queue *vq, unsigned int count)
+ error_stop:
+ 	cal_ctx_stop(ctx);
+ 	pm_runtime_put_sync(ctx->cal->dev);
++error_unprepare:
+ 	cal_ctx_unprepare(ctx);
+-
+ error_pipeline:
+ 	video_device_pipeline_stop(&ctx->vdev);
+ error_release_buffers:
+diff --git a/drivers/media/platform/ti/davinci/vpif.c b/drivers/media/platform/ti/davinci/vpif.c
+index a81719702a22d1..969d623fc842e5 100644
+--- a/drivers/media/platform/ti/davinci/vpif.c
++++ b/drivers/media/platform/ti/davinci/vpif.c
+@@ -504,7 +504,7 @@ static int vpif_probe(struct platform_device *pdev)
+ 	pdev_display = kzalloc(sizeof(*pdev_display), GFP_KERNEL);
+ 	if (!pdev_display) {
+ 		ret = -ENOMEM;
+-		goto err_put_pdev_capture;
++		goto err_del_pdev_capture;
+ 	}
+ 
+ 	pdev_display->name = "vpif_display";
+@@ -527,6 +527,8 @@ static int vpif_probe(struct platform_device *pdev)
+ 
+ err_put_pdev_display:
+ 	platform_device_put(pdev_display);
++err_del_pdev_capture:
++	platform_device_del(pdev_capture);
+ err_put_pdev_capture:
+ 	platform_device_put(pdev_capture);
+ err_put_rpm:
+diff --git a/drivers/media/platform/ti/omap3isp/ispccdc.c b/drivers/media/platform/ti/omap3isp/ispccdc.c
+index dd375c4e180d1b..7d0c723dcd119a 100644
+--- a/drivers/media/platform/ti/omap3isp/ispccdc.c
++++ b/drivers/media/platform/ti/omap3isp/ispccdc.c
+@@ -446,8 +446,8 @@ static int ccdc_lsc_config(struct isp_ccdc_device *ccdc,
+ 		if (ret < 0)
+ 			goto done;
+ 
+-		dma_sync_sg_for_cpu(isp->dev, req->table.sgt.sgl,
+-				    req->table.sgt.nents, DMA_TO_DEVICE);
++		dma_sync_sgtable_for_cpu(isp->dev, &req->table.sgt,
++					 DMA_TO_DEVICE);
+ 
+ 		if (copy_from_user(req->table.addr, config->lsc,
+ 				   req->config.size)) {
+@@ -455,8 +455,8 @@ static int ccdc_lsc_config(struct isp_ccdc_device *ccdc,
+ 			goto done;
+ 		}
+ 
+-		dma_sync_sg_for_device(isp->dev, req->table.sgt.sgl,
+-				       req->table.sgt.nents, DMA_TO_DEVICE);
++		dma_sync_sgtable_for_device(isp->dev, &req->table.sgt,
++					    DMA_TO_DEVICE);
+ 	}
+ 
+ 	spin_lock_irqsave(&ccdc->lsc.req_lock, flags);
+diff --git a/drivers/media/platform/ti/omap3isp/ispstat.c b/drivers/media/platform/ti/omap3isp/ispstat.c
+index 359a846205b0ff..d3da68408ecb16 100644
+--- a/drivers/media/platform/ti/omap3isp/ispstat.c
++++ b/drivers/media/platform/ti/omap3isp/ispstat.c
+@@ -161,8 +161,7 @@ static void isp_stat_buf_sync_for_device(struct ispstat *stat,
+ 	if (ISP_STAT_USES_DMAENGINE(stat))
+ 		return;
+ 
+-	dma_sync_sg_for_device(stat->isp->dev, buf->sgt.sgl,
+-			       buf->sgt.nents, DMA_FROM_DEVICE);
++	dma_sync_sgtable_for_device(stat->isp->dev, &buf->sgt, DMA_FROM_DEVICE);
+ }
+ 
+ static void isp_stat_buf_sync_for_cpu(struct ispstat *stat,
+@@ -171,8 +170,7 @@ static void isp_stat_buf_sync_for_cpu(struct ispstat *stat,
+ 	if (ISP_STAT_USES_DMAENGINE(stat))
+ 		return;
+ 
+-	dma_sync_sg_for_cpu(stat->isp->dev, buf->sgt.sgl,
+-			    buf->sgt.nents, DMA_FROM_DEVICE);
++	dma_sync_sgtable_for_cpu(stat->isp->dev, &buf->sgt, DMA_FROM_DEVICE);
+ }
+ 
+ static void isp_stat_buf_clear(struct ispstat *stat)
+diff --git a/drivers/media/platform/verisilicon/rockchip_vpu_hw.c b/drivers/media/platform/verisilicon/rockchip_vpu_hw.c
+index 964122e7c35593..b64f0658f7f1e7 100644
+--- a/drivers/media/platform/verisilicon/rockchip_vpu_hw.c
++++ b/drivers/media/platform/verisilicon/rockchip_vpu_hw.c
+@@ -85,10 +85,10 @@ static const struct hantro_fmt rockchip_vpu981_postproc_fmts[] = {
+ 		.postprocessed = true,
+ 		.frmsize = {
+ 			.min_width = ROCKCHIP_VPU981_MIN_SIZE,
+-			.max_width = FMT_UHD_WIDTH,
++			.max_width = FMT_4K_WIDTH,
+ 			.step_width = MB_DIM,
+ 			.min_height = ROCKCHIP_VPU981_MIN_SIZE,
+-			.max_height = FMT_UHD_HEIGHT,
++			.max_height = FMT_4K_HEIGHT,
+ 			.step_height = MB_DIM,
+ 		},
+ 	},
+@@ -99,10 +99,10 @@ static const struct hantro_fmt rockchip_vpu981_postproc_fmts[] = {
+ 		.postprocessed = true,
+ 		.frmsize = {
+ 			.min_width = ROCKCHIP_VPU981_MIN_SIZE,
+-			.max_width = FMT_UHD_WIDTH,
++			.max_width = FMT_4K_WIDTH,
+ 			.step_width = MB_DIM,
+ 			.min_height = ROCKCHIP_VPU981_MIN_SIZE,
+-			.max_height = FMT_UHD_HEIGHT,
++			.max_height = FMT_4K_HEIGHT,
+ 			.step_height = MB_DIM,
+ 		},
+ 	},
+@@ -318,10 +318,10 @@ static const struct hantro_fmt rockchip_vpu981_dec_fmts[] = {
+ 		.match_depth = true,
+ 		.frmsize = {
+ 			.min_width = ROCKCHIP_VPU981_MIN_SIZE,
+-			.max_width = FMT_UHD_WIDTH,
++			.max_width = FMT_4K_WIDTH,
+ 			.step_width = MB_DIM,
+ 			.min_height = ROCKCHIP_VPU981_MIN_SIZE,
+-			.max_height = FMT_UHD_HEIGHT,
++			.max_height = FMT_4K_HEIGHT,
+ 			.step_height = MB_DIM,
+ 		},
+ 	},
+@@ -331,10 +331,10 @@ static const struct hantro_fmt rockchip_vpu981_dec_fmts[] = {
+ 		.match_depth = true,
+ 		.frmsize = {
+ 			.min_width = ROCKCHIP_VPU981_MIN_SIZE,
+-			.max_width = FMT_UHD_WIDTH,
++			.max_width = FMT_4K_WIDTH,
+ 			.step_width = MB_DIM,
+ 			.min_height = ROCKCHIP_VPU981_MIN_SIZE,
+-			.max_height = FMT_UHD_HEIGHT,
++			.max_height = FMT_4K_HEIGHT,
+ 			.step_height = MB_DIM,
+ 		},
+ 	},
+@@ -344,10 +344,10 @@ static const struct hantro_fmt rockchip_vpu981_dec_fmts[] = {
+ 		.max_depth = 2,
+ 		.frmsize = {
+ 			.min_width = ROCKCHIP_VPU981_MIN_SIZE,
+-			.max_width = FMT_UHD_WIDTH,
++			.max_width = FMT_4K_WIDTH,
+ 			.step_width = MB_DIM,
+ 			.min_height = ROCKCHIP_VPU981_MIN_SIZE,
+-			.max_height = FMT_UHD_HEIGHT,
++			.max_height = FMT_4K_HEIGHT,
+ 			.step_height = MB_DIM,
+ 		},
+ 	},
+diff --git a/drivers/media/test-drivers/vidtv/vidtv_channel.c b/drivers/media/test-drivers/vidtv/vidtv_channel.c
+index 7838e62727128f..f3023e91b3ebc8 100644
+--- a/drivers/media/test-drivers/vidtv/vidtv_channel.c
++++ b/drivers/media/test-drivers/vidtv/vidtv_channel.c
+@@ -497,7 +497,7 @@ int vidtv_channel_si_init(struct vidtv_mux *m)
+ 	vidtv_psi_sdt_table_destroy(m->si.sdt);
+ free_pat:
+ 	vidtv_psi_pat_table_destroy(m->si.pat);
+-	return 0;
++	return -EINVAL;
+ }
+ 
+ void vidtv_channel_si_destroy(struct vidtv_mux *m)
+diff --git a/drivers/media/test-drivers/vivid/vivid-vid-cap.c b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+index b166d90177c641..df5d1c2a42ef51 100644
+--- a/drivers/media/test-drivers/vivid/vivid-vid-cap.c
++++ b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+@@ -946,8 +946,8 @@ int vivid_vid_cap_s_selection(struct file *file, void *fh, struct v4l2_selection
+ 			if (dev->has_compose_cap) {
+ 				v4l2_rect_set_min_size(compose, &min_rect);
+ 				v4l2_rect_set_max_size(compose, &max_rect);
+-				v4l2_rect_map_inside(compose, &fmt);
+ 			}
++			v4l2_rect_map_inside(compose, &fmt);
+ 			dev->fmt_cap_rect = fmt;
+ 			tpg_s_buf_height(&dev->tpg, fmt.height);
+ 		} else if (dev->has_compose_cap) {
+diff --git a/drivers/media/usb/dvb-usb/cxusb.c b/drivers/media/usb/dvb-usb/cxusb.c
+index f44529b40989b1..d0501c1e81d63e 100644
+--- a/drivers/media/usb/dvb-usb/cxusb.c
++++ b/drivers/media/usb/dvb-usb/cxusb.c
+@@ -119,9 +119,8 @@ static void cxusb_gpio_tuner(struct dvb_usb_device *d, int onoff)
+ 
+ 	o[0] = GPIO_TUNER;
+ 	o[1] = onoff;
+-	cxusb_ctrl_msg(d, CMD_GPIO_WRITE, o, 2, &i, 1);
+ 
+-	if (i != 0x01)
++	if (!cxusb_ctrl_msg(d, CMD_GPIO_WRITE, o, 2, &i, 1) && i != 0x01)
+ 		dev_info(&d->udev->dev, "gpio_write failed.\n");
+ 
+ 	st->gpio_write_state[GPIO_TUNER] = onoff;
+diff --git a/drivers/media/usb/gspca/stv06xx/stv06xx_hdcs.c b/drivers/media/usb/gspca/stv06xx/stv06xx_hdcs.c
+index 5a47dcbf1c8e55..303b055fefea98 100644
+--- a/drivers/media/usb/gspca/stv06xx/stv06xx_hdcs.c
++++ b/drivers/media/usb/gspca/stv06xx/stv06xx_hdcs.c
+@@ -520,12 +520,13 @@ static int hdcs_init(struct sd *sd)
+ static int hdcs_dump(struct sd *sd)
+ {
+ 	u16 reg, val;
++	int err = 0;
+ 
+ 	pr_info("Dumping sensor registers:\n");
+ 
+-	for (reg = HDCS_IDENT; reg <= HDCS_ROWEXPH; reg++) {
+-		stv06xx_read_sensor(sd, reg, &val);
++	for (reg = HDCS_IDENT; reg <= HDCS_ROWEXPH && !err; reg++) {
++		err = stv06xx_read_sensor(sd, reg, &val);
+ 		pr_info("reg 0x%02x = 0x%02x\n", reg, val);
+ 	}
+-	return 0;
++	return (err < 0) ? err : 0;
+ }
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index cbf19aa1d82374..bc7e2005fc6c72 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -1943,7 +1943,9 @@ static bool uvc_ctrl_xctrls_has_control(const struct v4l2_ext_control *xctrls,
+ }
+ 
+ static void uvc_ctrl_send_events(struct uvc_fh *handle,
+-	const struct v4l2_ext_control *xctrls, unsigned int xctrls_count)
++				 struct uvc_entity *entity,
++				 const struct v4l2_ext_control *xctrls,
++				 unsigned int xctrls_count)
+ {
+ 	struct uvc_control_mapping *mapping;
+ 	struct uvc_control *ctrl;
+@@ -1955,6 +1957,9 @@ static void uvc_ctrl_send_events(struct uvc_fh *handle,
+ 		s32 value;
+ 
+ 		ctrl = uvc_find_control(handle->chain, xctrls[i].id, &mapping);
++		if (ctrl->entity != entity)
++			continue;
++
+ 		if (ctrl->info.flags & UVC_CTRL_FLAG_ASYNCHRONOUS)
+ 			/* Notification will be sent from an Interrupt event. */
+ 			continue;
+@@ -2090,12 +2095,17 @@ int uvc_ctrl_begin(struct uvc_video_chain *chain)
+ 	return mutex_lock_interruptible(&chain->ctrl_mutex) ? -ERESTARTSYS : 0;
+ }
+ 
++/*
++ * Returns the number of uvc controls that have been correctly set, or a
++ * negative number if there has been an error.
++ */
+ static int uvc_ctrl_commit_entity(struct uvc_device *dev,
+ 				  struct uvc_fh *handle,
+ 				  struct uvc_entity *entity,
+ 				  int rollback,
+ 				  struct uvc_control **err_ctrl)
+ {
++	unsigned int processed_ctrls = 0;
+ 	struct uvc_control *ctrl;
+ 	unsigned int i;
+ 	int ret;
+@@ -2130,6 +2140,9 @@ static int uvc_ctrl_commit_entity(struct uvc_device *dev,
+ 		else
+ 			ret = 0;
+ 
++		if (!ret)
++			processed_ctrls++;
++
+ 		if (rollback || ret < 0)
+ 			memcpy(uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT),
+ 			       uvc_ctrl_data(ctrl, UVC_CTRL_DATA_BACKUP),
+@@ -2148,7 +2161,7 @@ static int uvc_ctrl_commit_entity(struct uvc_device *dev,
+ 			uvc_ctrl_set_handle(handle, ctrl, handle);
+ 	}
+ 
+-	return 0;
++	return processed_ctrls;
+ }
+ 
+ static int uvc_ctrl_find_ctrl_idx(struct uvc_entity *entity,
+@@ -2190,11 +2203,13 @@ int __uvc_ctrl_commit(struct uvc_fh *handle, int rollback,
+ 					uvc_ctrl_find_ctrl_idx(entity, ctrls,
+ 							       err_ctrl);
+ 			goto done;
++		} else if (ret > 0 && !rollback) {
++			uvc_ctrl_send_events(handle, entity,
++					     ctrls->controls, ctrls->count);
+ 		}
+ 	}
+ 
+-	if (!rollback)
+-		uvc_ctrl_send_events(handle, ctrls->controls, ctrls->count);
++	ret = 0;
+ done:
+ 	mutex_unlock(&chain->ctrl_mutex);
+ 	return ret;
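uvc_ctrl_commit_entity() now reports how many controls it actually applied, letting the caller send events only for entities where at least one control went through, and never on rollback. A compact model of that contract, with the hardware access and error handling reduced to toys:

#include <stdio.h>

static int commit_entity(const int *vals, int n)
{
	int processed = 0, i;

	for (i = 0; i < n; i++) {
		if (vals[i] < 0)
			return -1;	/* hardware error: abort */
		processed++;		/* this control was applied */
	}
	return processed;		/* >= 0: number applied */
}

int main(void)
{
	int vals[] = { 1, 2, 3 };
	int rollback = 0;
	int ret = commit_entity(vals, 3);

	if (ret > 0 && !rollback)
		printf("send events for %d controls\n", ret);
	return 0;
}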
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 107e0fafd80f54..25e9aea81196e0 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -2232,13 +2232,16 @@ static int uvc_probe(struct usb_interface *intf,
+ #endif
+ 
+ 	/* Parse the Video Class control descriptor. */
+-	if (uvc_parse_control(dev) < 0) {
++	ret = uvc_parse_control(dev);
++	if (ret < 0) {
++		ret = -ENODEV;
+ 		uvc_dbg(dev, PROBE, "Unable to parse UVC descriptors\n");
+ 		goto error;
+ 	}
+ 
+ 	/* Parse the associated GPIOs. */
+-	if (uvc_gpio_parse(dev) < 0) {
++	ret = uvc_gpio_parse(dev);
++	if (ret < 0) {
+ 		uvc_dbg(dev, PROBE, "Unable to parse UVC GPIOs\n");
+ 		goto error;
+ 	}
+@@ -2264,24 +2267,32 @@ static int uvc_probe(struct usb_interface *intf,
+ 	}
+ 
+ 	/* Register the V4L2 device. */
+-	if (v4l2_device_register(&intf->dev, &dev->vdev) < 0)
++	ret = v4l2_device_register(&intf->dev, &dev->vdev);
++	if (ret < 0)
+ 		goto error;
+ 
+ 	/* Scan the device for video chains. */
+-	if (uvc_scan_device(dev) < 0)
++	if (uvc_scan_device(dev) < 0) {
++		ret = -ENODEV;
+ 		goto error;
++	}
+ 
+ 	/* Initialize controls. */
+-	if (uvc_ctrl_init_device(dev) < 0)
++	if (uvc_ctrl_init_device(dev) < 0) {
++		ret = -ENODEV;
+ 		goto error;
++	}
+ 
+ 	/* Register video device nodes. */
+-	if (uvc_register_chains(dev) < 0)
++	if (uvc_register_chains(dev) < 0) {
++		ret = -ENODEV;
+ 		goto error;
++	}
+ 
+ #ifdef CONFIG_MEDIA_CONTROLLER
+ 	/* Register the media device node */
+-	if (media_device_register(&dev->mdev) < 0)
++	ret = media_device_register(&dev->mdev);
++	if (ret < 0)
+ 		goto error;
+ #endif
+ 	/* Save our data pointer in the interface data. */
+@@ -2315,7 +2326,7 @@ static int uvc_probe(struct usb_interface *intf,
+ error:
+ 	uvc_unregister_video(dev);
+ 	kref_put(&dev->ref, uvc_delete);
+-	return -ENODEV;
++	return ret;
+ }
+ 
+ static void uvc_disconnect(struct usb_interface *intf)
+diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
+index b40c08ce909d44..c369235113d98a 100644
+--- a/drivers/media/v4l2-core/v4l2-dev.c
++++ b/drivers/media/v4l2-core/v4l2-dev.c
+@@ -1054,25 +1054,25 @@ int __video_register_device(struct video_device *vdev,
+ 	vdev->dev.class = &video_class;
+ 	vdev->dev.devt = MKDEV(VIDEO_MAJOR, vdev->minor);
+ 	vdev->dev.parent = vdev->dev_parent;
++	vdev->dev.release = v4l2_device_release;
+ 	dev_set_name(&vdev->dev, "%s%d", name_base, vdev->num);
++
++	/* Increase v4l2_device refcount */
++	v4l2_device_get(vdev->v4l2_dev);
++
+ 	mutex_lock(&videodev_lock);
+ 	ret = device_register(&vdev->dev);
+ 	if (ret < 0) {
+ 		mutex_unlock(&videodev_lock);
+ 		pr_err("%s: device_register failed\n", __func__);
+-		goto cleanup;
++		put_device(&vdev->dev);
++		return ret;
+ 	}
+-	/* Register the release callback that will be called when the last
+-	   reference to the device goes away. */
+-	vdev->dev.release = v4l2_device_release;
+ 
+ 	if (nr != -1 && nr != vdev->num && warn_if_nr_in_use)
+ 		pr_warn("%s: requested %s%d, got %s\n", __func__,
+ 			name_base, nr, video_device_node_name(vdev));
+ 
+-	/* Increase v4l2_device refcount */
+-	v4l2_device_get(vdev->v4l2_dev);
+-
+ 	/* Part 5: Register the entity. */
+ 	ret = video_register_media_controller(vdev);
+ 
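The v4l2-dev reorder follows the driver-model rule that once device_register() has been attempted, the struct device may only be disposed of with put_device(), which fires ->release — so the release hook and the v4l2_device reference it eventually drops must both be in place before registering. A toy model of the rule, with refcounting simplified and the registration failure forced:

#include <stdio.h>

struct device {
	int refcount;
	void (*release)(struct device *);
};

static void put_device(struct device *dev)
{
	if (--dev->refcount == 0 && dev->release)
		dev->release(dev);	/* never kfree() directly */
}

static void my_release(struct device *dev)
{
	printf("release: dropping refs, freeing\n");
}

static int device_register(struct device *dev)
{
	dev->refcount = 1;
	return -1;			/* simulate registration failure */
}

int main(void)
{
	struct device dev = { 0 };

	dev.release = my_release;	/* set BEFORE registering */
	if (device_register(&dev) < 0) {
		put_device(&dev);	/* runs my_release exactly once */
		return 1;
	}
	return 0;
}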
+diff --git a/drivers/mmc/core/card.h b/drivers/mmc/core/card.h
+index 3205feb1e8ff6a..9cbdd240c3a7d4 100644
+--- a/drivers/mmc/core/card.h
++++ b/drivers/mmc/core/card.h
+@@ -89,6 +89,7 @@ struct mmc_fixup {
+ #define CID_MANFID_MICRON       0x13
+ #define CID_MANFID_SAMSUNG      0x15
+ #define CID_MANFID_APACER       0x27
++#define CID_MANFID_SWISSBIT     0x5D
+ #define CID_MANFID_KINGSTON     0x70
+ #define CID_MANFID_HYNIX	0x90
+ #define CID_MANFID_KINGSTON_SD	0x9F
+@@ -294,4 +295,9 @@ static inline int mmc_card_broken_sd_poweroff_notify(const struct mmc_card *c)
+ 	return c->quirks & MMC_QUIRK_BROKEN_SD_POWEROFF_NOTIFY;
+ }
+ 
++static inline int mmc_card_no_uhs_ddr50_tuning(const struct mmc_card *c)
++{
++	return c->quirks & MMC_QUIRK_NO_UHS_DDR50_TUNING;
++}
++
+ #endif
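
The new mmc_card_no_uhs_ddr50_tuning() accessor follows the file's existing
idiom: one quirk bit in card->quirks, tested through a tiny inline helper.
A standalone sketch of the idiom, with invented flag names and values:

#include <stdio.h>

#define QUIRK_BROKEN_CACHE	(1u << 0)
#define QUIRK_NO_DDR50_TUNING	(1u << 1)

struct card {
	unsigned int quirks;
};

static inline int card_no_ddr50_tuning(const struct card *c)
{
	return c->quirks & QUIRK_NO_DDR50_TUNING;
}

int main(void)
{
	struct card c = { .quirks = QUIRK_NO_DDR50_TUNING };

	printf("%s\n", card_no_ddr50_tuning(&c) ? "skip DDR50 tuning"
						: "tune");
	return 0;
}
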
+diff --git a/drivers/mmc/core/quirks.h b/drivers/mmc/core/quirks.h
+index 89b512905be140..7f893bafaa607d 100644
+--- a/drivers/mmc/core/quirks.h
++++ b/drivers/mmc/core/quirks.h
+@@ -34,6 +34,16 @@ static const struct mmc_fixup __maybe_unused mmc_sd_fixups[] = {
+ 		   MMC_QUIRK_BROKEN_SD_CACHE | MMC_QUIRK_BROKEN_SD_POWEROFF_NOTIFY,
+ 		   EXT_CSD_REV_ANY),
+ 
++	/*
++	 * Swissbit series S46-u cards throw I/O errors during tuning requests
++	 * after the initial tuning request times out as expected. This has
++	 * only been observed on cards manufactured in 01/2019 used with
++	 * Bay Trail host controllers.
++	 */
++	_FIXUP_EXT("0016G", CID_MANFID_SWISSBIT, 0x5342, 2019, 1,
++		   0, -1ull, SDIO_ANY_ID, SDIO_ANY_ID, add_quirk_sd,
++		   MMC_QUIRK_NO_UHS_DDR50_TUNING, EXT_CSD_REV_ANY),
++
+ 	END_FIXUP
+ };
+ 
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index 8eba697d3d8671..6847b3fe8887aa 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -617,6 +617,29 @@ static int sd_set_current_limit(struct mmc_card *card, u8 *status)
+ 	return 0;
+ }
+ 
++/*
++ * Determine if the card should tune or not.
++ */
++static bool mmc_sd_use_tuning(struct mmc_card *card)
++{
++	/*
++	 * SPI mode doesn't define CMD19 and tuning is only valid for SDR50 and
++	 * SDR104 mode SD-cards. Note that tuning is mandatory for SDR104.
++	 */
++	if (mmc_host_is_spi(card->host))
++		return false;
++
++	switch (card->host->ios.timing) {
++	case MMC_TIMING_UHS_SDR50:
++	case MMC_TIMING_UHS_SDR104:
++		return true;
++	case MMC_TIMING_UHS_DDR50:
++		return !mmc_card_no_uhs_ddr50_tuning(card);
++	}
++
++	return false;
++}
++
+ /*
+  * UHS-I specific initialization procedure
+  */
+@@ -660,14 +683,7 @@ static int mmc_sd_init_uhs_card(struct mmc_card *card)
+ 	if (err)
+ 		goto out;
+ 
+-	/*
+-	 * SPI mode doesn't define CMD19 and tuning is only valid for SDR50 and
+-	 * SDR104 mode SD-cards. Note that tuning is mandatory for SDR104.
+-	 */
+-	if (!mmc_host_is_spi(card->host) &&
+-		(card->host->ios.timing == MMC_TIMING_UHS_SDR50 ||
+-		 card->host->ios.timing == MMC_TIMING_UHS_DDR50 ||
+-		 card->host->ios.timing == MMC_TIMING_UHS_SDR104)) {
++	if (mmc_sd_use_tuning(card)) {
+ 		err = mmc_execute_tuning(card);
+ 
+ 		/*
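
mmc_sd_use_tuning() above turns the old three-way compound condition into a
switch, so the Swissbit DDR50 quirk becomes a single extra case rather than
more boolean clutter. A compilable reduction of the same shape (the enum
and flag are illustrative, not the kernel's):

#include <stdbool.h>
#include <stdio.h>

enum timing { SDR25, SDR50, SDR104, DDR50 };

static bool no_ddr50_tuning;	/* would come from a card quirk */

static bool use_tuning(enum timing t)
{
	switch (t) {
	case SDR50:
	case SDR104:		/* tuning is mandatory for SDR104 */
		return true;
	case DDR50:
		return !no_ddr50_tuning;
	default:
		return false;
	}
}

int main(void)
{
	no_ddr50_tuning = true;
	printf("DDR50: %d, SDR104: %d\n",
	       use_tuning(DDR50), use_tuning(SDR104));	/* 0, 1 */
	return 0;
}
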
+diff --git a/drivers/mtd/nand/qpic_common.c b/drivers/mtd/nand/qpic_common.c
+index e0ed25b5afea9b..4dc4d65e7d323e 100644
+--- a/drivers/mtd/nand/qpic_common.c
++++ b/drivers/mtd/nand/qpic_common.c
+@@ -236,21 +236,21 @@ int qcom_prep_bam_dma_desc_cmd(struct qcom_nand_controller *nandc, bool read,
+ 	int i, ret;
+ 	struct bam_cmd_element *bam_ce_buffer;
+ 	struct bam_transaction *bam_txn = nandc->bam_txn;
++	u32 offset;
+ 
+ 	bam_ce_buffer = &bam_txn->bam_ce[bam_txn->bam_ce_pos];
+ 
+ 	/* fill the command desc */
+ 	for (i = 0; i < size; i++) {
++		offset = nandc->props->bam_offset + reg_off + 4 * i;
+ 		if (read)
+ 			bam_prep_ce(&bam_ce_buffer[i],
+-				    nandc_reg_phys(nandc, reg_off + 4 * i),
+-				    BAM_READ_COMMAND,
++				    offset, BAM_READ_COMMAND,
+ 				    reg_buf_dma_addr(nandc,
+ 						     (__le32 *)vaddr + i));
+ 		else
+ 			bam_prep_ce_le32(&bam_ce_buffer[i],
+-					 nandc_reg_phys(nandc, reg_off + 4 * i),
+-					 BAM_WRITE_COMMAND,
++					 offset, BAM_WRITE_COMMAND,
+ 					 *((__le32 *)vaddr + i));
+ 	}
+ 
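
The qpic_common.c hunk derives the BAM-visible register address from a
per-SoC bam_offset carried in the controller properties, computed once per
loop iteration, and the qcom_nandc.c hunks below populate that field
(0x30000) for each variant. A standalone sketch of the address computation,
with the props structure trimmed to the one field:

#include <stdint.h>
#include <stdio.h>

struct nandc_props {
	uint32_t bam_offset;	/* per-SoC base for BAM-visible registers */
};

static const struct nandc_props ipq8074_props = { .bam_offset = 0x30000 };

int main(void)
{
	const struct nandc_props *props = &ipq8074_props;
	uint32_t reg_off = 0x20;
	int size = 3;

	/* one 32-bit register per command element, as in the loop above */
	for (int i = 0; i < size; i++) {
		uint32_t offset = props->bam_offset + reg_off + 4 * i;
		printf("ce[%d] -> 0x%05x\n", i, offset);
	}
	return 0;
}
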
+diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
+index 5eaa0be367cdb8..1003cf118c01b8 100644
+--- a/drivers/mtd/nand/raw/qcom_nandc.c
++++ b/drivers/mtd/nand/raw/qcom_nandc.c
+@@ -1863,7 +1863,12 @@ static int qcom_param_page_type_exec(struct nand_chip *chip,  const struct nand_
+ 	const struct nand_op_instr *instr = NULL;
+ 	unsigned int op_id = 0;
+ 	unsigned int len = 0;
+-	int ret;
++	int ret, reg_base;
++
++	reg_base = NAND_READ_LOCATION_0;
++
++	if (nandc->props->qpic_version2)
++		reg_base = NAND_READ_LOCATION_LAST_CW_0;
+ 
+ 	ret = qcom_parse_instructions(chip, subop, &q_op);
+ 	if (ret)
+@@ -1915,14 +1920,17 @@ static int qcom_param_page_type_exec(struct nand_chip *chip,  const struct nand_
+ 	op_id = q_op.data_instr_idx;
+ 	len = nand_subop_get_data_len(subop, op_id);
+ 
+-	nandc_set_read_loc(chip, 0, 0, 0, len, 1);
++	if (nandc->props->qpic_version2)
++		nandc_set_read_loc_last(chip, reg_base, 0, len, 1);
++	else
++		nandc_set_read_loc_first(chip, reg_base, 0, len, 1);
+ 
+ 	if (!nandc->props->qpic_version2) {
+ 		qcom_write_reg_dma(nandc, &nandc->regs->vld, NAND_DEV_CMD_VLD, 1, 0);
+ 		qcom_write_reg_dma(nandc, &nandc->regs->cmd1, NAND_DEV_CMD1, 1, NAND_BAM_NEXT_SGL);
+ 	}
+ 
+-	nandc->buf_count = len;
++	nandc->buf_count = 512;
+ 	memset(nandc->data_buffer, 0xff, nandc->buf_count);
+ 
+ 	config_nand_single_cw_page_read(chip, false, 0);
+@@ -2360,6 +2368,7 @@ static const struct qcom_nandc_props ipq806x_nandc_props = {
+ 	.supports_bam = false,
+ 	.use_codeword_fixup = true,
+ 	.dev_cmd_reg_start = 0x0,
++	.bam_offset = 0x30000,
+ };
+ 
+ static const struct qcom_nandc_props ipq4019_nandc_props = {
+@@ -2367,6 +2376,7 @@ static const struct qcom_nandc_props ipq4019_nandc_props = {
+ 	.supports_bam = true,
+ 	.nandc_part_of_qpic = true,
+ 	.dev_cmd_reg_start = 0x0,
++	.bam_offset = 0x30000,
+ };
+ 
+ static const struct qcom_nandc_props ipq8074_nandc_props = {
+@@ -2374,6 +2384,7 @@ static const struct qcom_nandc_props ipq8074_nandc_props = {
+ 	.supports_bam = true,
+ 	.nandc_part_of_qpic = true,
+ 	.dev_cmd_reg_start = 0x7000,
++	.bam_offset = 0x30000,
+ };
+ 
+ static const struct qcom_nandc_props sdx55_nandc_props = {
+@@ -2382,6 +2393,7 @@ static const struct qcom_nandc_props sdx55_nandc_props = {
+ 	.nandc_part_of_qpic = true,
+ 	.qpic_version2 = true,
+ 	.dev_cmd_reg_start = 0x7000,
++	.bam_offset = 0x30000,
+ };
+ 
+ /*
+diff --git a/drivers/mtd/nand/raw/sunxi_nand.c b/drivers/mtd/nand/raw/sunxi_nand.c
+index fab371e3e9b780..162cd5f4f2344c 100644
+--- a/drivers/mtd/nand/raw/sunxi_nand.c
++++ b/drivers/mtd/nand/raw/sunxi_nand.c
+@@ -817,6 +817,7 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct nand_chip *nand,
+ 	if (ret)
+ 		return ret;
+ 
++	sunxi_nfc_randomizer_config(nand, page, false);
+ 	sunxi_nfc_randomizer_enable(nand);
+ 	writel(NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD | NFC_ECC_OP,
+ 	       nfc->regs + NFC_REG_CMD);
+@@ -1049,6 +1050,7 @@ static int sunxi_nfc_hw_ecc_write_chunk(struct nand_chip *nand,
+ 	if (ret)
+ 		return ret;
+ 
++	sunxi_nfc_randomizer_config(nand, page, false);
+ 	sunxi_nfc_randomizer_enable(nand);
+ 	sunxi_nfc_hw_ecc_set_prot_oob_bytes(nand, oob, 0, bbm, page);
+ 
+diff --git a/drivers/mtd/nand/spi/alliancememory.c b/drivers/mtd/nand/spi/alliancememory.c
+index 6046c73f8424e9..0f9522009843bc 100644
+--- a/drivers/mtd/nand/spi/alliancememory.c
++++ b/drivers/mtd/nand/spi/alliancememory.c
+@@ -17,12 +17,12 @@
+ #define AM_STATUS_ECC_MAX_CORRECTED	(3 << 4)
+ 
+ static SPINAND_OP_VARIANTS(read_cache_variants,
+-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
++		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+ 
+ static SPINAND_OP_VARIANTS(write_cache_variants,
+ 			   SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
+diff --git a/drivers/mtd/nand/spi/ato.c b/drivers/mtd/nand/spi/ato.c
+index bb5298911137f0..88dc51308e1b84 100644
+--- a/drivers/mtd/nand/spi/ato.c
++++ b/drivers/mtd/nand/spi/ato.c
+@@ -14,9 +14,9 @@
+ 
+ 
+ static SPINAND_OP_VARIANTS(read_cache_variants,
+-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+ 
+ static SPINAND_OP_VARIANTS(write_cache_variants,
+ 		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
+diff --git a/drivers/mtd/nand/spi/esmt.c b/drivers/mtd/nand/spi/esmt.c
+index a164d821464d27..cda718e385a22f 100644
+--- a/drivers/mtd/nand/spi/esmt.c
++++ b/drivers/mtd/nand/spi/esmt.c
+@@ -18,10 +18,10 @@
+ 	(CFG_OTP_ENABLE | ESMT_F50L1G41LB_CFG_OTP_PROTECT)
+ 
+ static SPINAND_OP_VARIANTS(read_cache_variants,
+-			   SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
+-			   SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
+-			   SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
+-			   SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
++			   SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
++			   SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
++			   SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
++			   SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+ 
+ static SPINAND_OP_VARIANTS(write_cache_variants,
+ 			   SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
+diff --git a/drivers/mtd/nand/spi/foresee.c b/drivers/mtd/nand/spi/foresee.c
+index ecd5f6bffa3342..21ad44032286ff 100644
+--- a/drivers/mtd/nand/spi/foresee.c
++++ b/drivers/mtd/nand/spi/foresee.c
+@@ -12,10 +12,10 @@
+ #define SPINAND_MFR_FORESEE		0xCD
+ 
+ static SPINAND_OP_VARIANTS(read_cache_variants,
+-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+ 
+ static SPINAND_OP_VARIANTS(write_cache_variants,
+ 		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
+diff --git a/drivers/mtd/nand/spi/gigadevice.c b/drivers/mtd/nand/spi/gigadevice.c
+index d620bb02a20a0d..3ce79ae1bac4e8 100644
+--- a/drivers/mtd/nand/spi/gigadevice.c
++++ b/drivers/mtd/nand/spi/gigadevice.c
+@@ -24,36 +24,36 @@
+ #define GD5FXGQ4UXFXXG_STATUS_ECC_UNCOR_ERROR	(7 << 4)
+ 
+ static SPINAND_OP_VARIANTS(read_cache_variants,
+-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
++		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+ 
+ static SPINAND_OP_VARIANTS(read_cache_variants_f,
+-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP_3A(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP_3A(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP_3A(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_OP_3A(0, 0, NULL, 0));
++		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_FAST_3A_1S_1S_1S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_1S_OP(0, 0, NULL, 0));
+ 
+ static SPINAND_OP_VARIANTS(read_cache_variants_1gq5,
+-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 2, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
++		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+ 
+ static SPINAND_OP_VARIANTS(read_cache_variants_2gq5,
+-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 4, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 2, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
++		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 4, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 2, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+ 
+ static SPINAND_OP_VARIANTS(write_cache_variants,
+ 		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
+diff --git a/drivers/mtd/nand/spi/macronix.c b/drivers/mtd/nand/spi/macronix.c
+index 1ef08ad850a2fe..208db1a2151049 100644
+--- a/drivers/mtd/nand/spi/macronix.c
++++ b/drivers/mtd/nand/spi/macronix.c
+@@ -28,10 +28,10 @@ struct macronix_priv {
+ };
+ 
+ static SPINAND_OP_VARIANTS(read_cache_variants,
+-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+ 
+ static SPINAND_OP_VARIANTS(write_cache_variants,
+ 		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
+diff --git a/drivers/mtd/nand/spi/micron.c b/drivers/mtd/nand/spi/micron.c
+index 691f8a2e0791d0..f92c28b8d55712 100644
+--- a/drivers/mtd/nand/spi/micron.c
++++ b/drivers/mtd/nand/spi/micron.c
+@@ -35,12 +35,12 @@
+ 	(CFG_OTP_ENABLE | MICRON_MT29F2G01ABAGD_CFG_OTP_STATE)
+ 
+ static SPINAND_OP_VARIANTS(quadio_read_cache_variants,
+-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 2, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
++		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+ 
+ static SPINAND_OP_VARIANTS(x4_write_cache_variants,
+ 		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
+@@ -52,10 +52,10 @@ static SPINAND_OP_VARIANTS(x4_update_cache_variants,
+ 
+ /* Micron  MT29F2G01AAAED Device */
+ static SPINAND_OP_VARIANTS(x4_read_cache_variants,
+-			   SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
+-			   SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
+-			   SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
+-			   SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
++			   SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
++			   SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
++			   SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
++			   SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+ 
+ static SPINAND_OP_VARIANTS(x1_write_cache_variants,
+ 			   SPINAND_PROG_LOAD(true, 0, NULL, 0));
+diff --git a/drivers/mtd/nand/spi/paragon.c b/drivers/mtd/nand/spi/paragon.c
+index 6e7cc6995380c0..b5ea248618036d 100644
+--- a/drivers/mtd/nand/spi/paragon.c
++++ b/drivers/mtd/nand/spi/paragon.c
+@@ -22,12 +22,12 @@
+ 
+ 
+ static SPINAND_OP_VARIANTS(read_cache_variants,
+-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 2, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
++		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+ 
+ static SPINAND_OP_VARIANTS(write_cache_variants,
+ 		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
+diff --git a/drivers/mtd/nand/spi/skyhigh.c b/drivers/mtd/nand/spi/skyhigh.c
+index 961df0d74984a8..ac73f43e9365c7 100644
+--- a/drivers/mtd/nand/spi/skyhigh.c
++++ b/drivers/mtd/nand/spi/skyhigh.c
+@@ -17,12 +17,12 @@
+ #define SKYHIGH_CONFIG_PROTECT_EN		BIT(1)
+ 
+ static SPINAND_OP_VARIANTS(read_cache_variants,
+-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 4, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 2, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
++		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 4, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 2, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+ 
+ static SPINAND_OP_VARIANTS(write_cache_variants,
+ 		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
+diff --git a/drivers/mtd/nand/spi/toshiba.c b/drivers/mtd/nand/spi/toshiba.c
+index 2e2106b2705f08..f3f2c0ed1d0cae 100644
+--- a/drivers/mtd/nand/spi/toshiba.c
++++ b/drivers/mtd/nand/spi/toshiba.c
+@@ -15,10 +15,10 @@
+ #define TOSH_STATUS_ECC_HAS_BITFLIPS_T	(3 << 4)
+ 
+ static SPINAND_OP_VARIANTS(read_cache_variants,
+-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+ 
+ static SPINAND_OP_VARIANTS(write_cache_x4_variants,
+ 		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
+diff --git a/drivers/mtd/nand/spi/winbond.c b/drivers/mtd/nand/spi/winbond.c
+index 8394a1b1fb0c12..397c90b745e3af 100644
+--- a/drivers/mtd/nand/spi/winbond.c
++++ b/drivers/mtd/nand/spi/winbond.c
+@@ -24,25 +24,25 @@
+  */
+ 
+ static SPINAND_OP_VARIANTS(read_cache_dtr_variants,
+-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_DTR_OP(0, 8, NULL, 0, 80 * HZ_PER_MHZ),
+-		SPINAND_PAGE_READ_FROM_CACHE_X4_DTR_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
+-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 2, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_DTR_OP(0, 4, NULL, 0, 80 * HZ_PER_MHZ),
+-		SPINAND_PAGE_READ_FROM_CACHE_X2_DTR_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
+-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_DTR_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
+-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0, 54 * HZ_PER_MHZ));
++		SPINAND_PAGE_READ_FROM_CACHE_1S_4D_4D_OP(0, 8, NULL, 0, 80 * HZ_PER_MHZ),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_4D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0, 104 * HZ_PER_MHZ),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_2D_2D_OP(0, 4, NULL, 0, 80 * HZ_PER_MHZ),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_2D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0, 104 * HZ_PER_MHZ),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_1D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
++		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 54 * HZ_PER_MHZ));
+ 
+ static SPINAND_OP_VARIANTS(read_cache_variants,
+-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 2, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
++		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+ 
+ static SPINAND_OP_VARIANTS(write_cache_variants,
+ 		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
+diff --git a/drivers/mtd/nand/spi/xtx.c b/drivers/mtd/nand/spi/xtx.c
+index 3f539ca0de861c..abbbcd594c2c1f 100644
+--- a/drivers/mtd/nand/spi/xtx.c
++++ b/drivers/mtd/nand/spi/xtx.c
+@@ -23,12 +23,12 @@
+ #define XT26XXXD_STATUS_ECC_UNCOR_ERROR     (2)
+ 
+ static SPINAND_OP_VARIANTS(read_cache_variants,
+-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
+-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
++		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+ 
+ static SPINAND_OP_VARIANTS(write_cache_variants,
+ 		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
+diff --git a/drivers/net/can/kvaser_pciefd.c b/drivers/net/can/kvaser_pciefd.c
+index f6921368cd14e9..0071a51ce2c1b2 100644
+--- a/drivers/net/can/kvaser_pciefd.c
++++ b/drivers/net/can/kvaser_pciefd.c
+@@ -966,7 +966,7 @@ static int kvaser_pciefd_setup_can_ctrls(struct kvaser_pciefd *pcie)
+ 		u32 status, tx_nr_packets_max;
+ 
+ 		netdev = alloc_candev(sizeof(struct kvaser_pciefd_can),
+-				      KVASER_PCIEFD_CAN_TX_MAX_COUNT);
++				      roundup_pow_of_two(KVASER_PCIEFD_CAN_TX_MAX_COUNT));
+ 		if (!netdev)
+ 			return -ENOMEM;
+ 
+@@ -995,7 +995,6 @@ static int kvaser_pciefd_setup_can_ctrls(struct kvaser_pciefd *pcie)
+ 		can->tx_max_count = min(KVASER_PCIEFD_CAN_TX_MAX_COUNT, tx_nr_packets_max - 1);
+ 
+ 		can->can.clock.freq = pcie->freq;
+-		can->can.echo_skb_max = roundup_pow_of_two(can->tx_max_count);
+ 		spin_lock_init(&can->lock);
+ 
+ 		can->can.bittiming_const = &kvaser_pciefd_bittiming_const;
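
The kvaser_pciefd fix sizes the echo-skb array at alloc_candev() time with
roundup_pow_of_two(KVASER_PCIEFD_CAN_TX_MAX_COUNT), instead of later
bumping echo_skb_max to a power of two that could exceed what was
allocated. A userspace re-implementation of the rounding helper (the
kernel's version is different, this only shows the arithmetic):

#include <stdio.h>

/* Smallest power of two >= n, for n >= 1 (mirrors roundup_pow_of_two()). */
static unsigned long roundup_pow2(unsigned long n)
{
	unsigned long p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

int main(void)
{
	printf("%lu %lu %lu\n",
	       roundup_pow2(17), roundup_pow2(32), roundup_pow2(1));
	/* prints: 32 32 1 */
	return 0;
}
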
+diff --git a/drivers/net/can/m_can/tcan4x5x-core.c b/drivers/net/can/m_can/tcan4x5x-core.c
+index e5c162f8c589b2..8edaa339d590b6 100644
+--- a/drivers/net/can/m_can/tcan4x5x-core.c
++++ b/drivers/net/can/m_can/tcan4x5x-core.c
+@@ -411,10 +411,11 @@ static int tcan4x5x_can_probe(struct spi_device *spi)
+ 	priv = cdev_to_priv(mcan_class);
+ 
+ 	priv->power = devm_regulator_get_optional(&spi->dev, "vsup");
+-	if (PTR_ERR(priv->power) == -EPROBE_DEFER) {
+-		ret = -EPROBE_DEFER;
+-		goto out_m_can_class_free_dev;
+-	} else {
++	if (IS_ERR(priv->power)) {
++		if (PTR_ERR(priv->power) == -EPROBE_DEFER) {
++			ret = -EPROBE_DEFER;
++			goto out_m_can_class_free_dev;
++		}
+ 		priv->power = NULL;
+ 	}
+ 
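
The tcan4x5x fix only inspects PTR_ERR() after IS_ERR() confirms the
pointer encodes an error, so a valid regulator is never thrown away:
-EPROBE_DEFER is still propagated, and any other error means the optional
supply is simply absent. The kernel packs small negative errnos into the
pointer value; a simplified, self-contained userspace rendition of that
encoding:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_ERRNO	4095
#define EPROBE_DEFER	517	/* kernel-internal errno, not in errno.h */

static void *ERR_PTR(long err) { return (void *)err; }
static long PTR_ERR(const void *p) { return (long)p; }
static int IS_ERR(const void *p)
{
	return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}

/* Pretend the optional supply is not described: return -ENODEV. */
static void *get_optional_regulator(void)
{
	return ERR_PTR(-ENODEV);
}

int main(void)
{
	void *power = get_optional_regulator();

	if (IS_ERR(power)) {
		if (PTR_ERR(power) == -EPROBE_DEFER)
			return 1;	/* resource not ready, retry later */
		power = NULL;		/* supply genuinely not present */
	}
	printf("power = %p\n", power);
	return 0;
}
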
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_main.c b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
+index c1d1673c5749d6..b565189e591398 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_main.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
+@@ -123,7 +123,6 @@ static netdev_tx_t aq_ndev_start_xmit(struct sk_buff *skb, struct net_device *nd
+ 	}
+ #endif
+ 
+-	skb_tx_timestamp(skb);
+ 	return aq_nic_xmit(aq_nic, skb);
+ }
+ 
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+index bf3aa46887a1c1..e71cd10e4e1f14 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+@@ -898,6 +898,8 @@ int aq_nic_xmit(struct aq_nic_s *self, struct sk_buff *skb)
+ 
+ 	frags = aq_nic_map_skb(self, skb, ring);
+ 
++	skb_tx_timestamp(skb);
++
+ 	if (likely(frags)) {
+ 		err = self->aq_hw_ops->hw_ring_tx_xmit(self->aq_hw,
+ 						       ring, frags);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 6afc2ab6fad228..c365a9e64f7281 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -10738,6 +10738,72 @@ void bnxt_del_one_rss_ctx(struct bnxt *bp, struct bnxt_rss_ctx *rss_ctx,
+ 	bp->num_rss_ctx--;
+ }
+ 
++static bool bnxt_vnic_has_rx_ring(struct bnxt *bp, struct bnxt_vnic_info *vnic,
++				  int rxr_id)
++{
++	u16 tbl_size = bnxt_get_rxfh_indir_size(bp->dev);
++	int i, vnic_rx;
++
++	/* The ntuple VNIC always has all the rx rings, so any ring id
++	 * change must be applied to it, because a future filter may use it.
++	 */
++	if (vnic->flags & BNXT_VNIC_NTUPLE_FLAG)
++		return true;
++
++	for (i = 0; i < tbl_size; i++) {
++		if (vnic->flags & BNXT_VNIC_RSSCTX_FLAG)
++			vnic_rx = ethtool_rxfh_context_indir(vnic->rss_ctx)[i];
++		else
++			vnic_rx = bp->rss_indir_tbl[i];
++
++		if (rxr_id == vnic_rx)
++			return true;
++	}
++
++	return false;
++}
++
++static int bnxt_set_vnic_mru_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic,
++				u16 mru, int rxr_id)
++{
++	int rc;
++
++	if (!bnxt_vnic_has_rx_ring(bp, vnic, rxr_id))
++		return 0;
++
++	if (mru) {
++		rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true);
++		if (rc) {
++			netdev_err(bp->dev, "hwrm vnic %d set rss failure rc: %d\n",
++				   vnic->vnic_id, rc);
++			return rc;
++		}
++	}
++	vnic->mru = mru;
++	bnxt_hwrm_vnic_update(bp, vnic,
++			      VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
++
++	return 0;
++}
++
++static int bnxt_set_rss_ctx_vnic_mru(struct bnxt *bp, u16 mru, int rxr_id)
++{
++	struct ethtool_rxfh_context *ctx;
++	unsigned long context;
++	int rc;
++
++	xa_for_each(&bp->dev->ethtool->rss_ctx, context, ctx) {
++		struct bnxt_rss_ctx *rss_ctx = ethtool_rxfh_context_priv(ctx);
++		struct bnxt_vnic_info *vnic = &rss_ctx->vnic;
++
++		rc = bnxt_set_vnic_mru_p5(bp, vnic, mru, rxr_id);
++		if (rc)
++			return rc;
++	}
++
++	return 0;
++}
++
+ static void bnxt_hwrm_realloc_rss_ctx_vnic(struct bnxt *bp)
+ {
+ 	bool set_tpa = !!(bp->flags & BNXT_FLAG_TPA);
+@@ -15884,6 +15950,7 @@ static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
+ 	struct bnxt_vnic_info *vnic;
+ 	struct bnxt_napi *bnapi;
+ 	int i, rc;
++	u16 mru;
+ 
+ 	rxr = &bp->rx_ring[idx];
+ 	clone = qmem;
+@@ -15933,21 +16000,15 @@ static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
+ 	napi_enable_locked(&bnapi->napi);
+ 	bnxt_db_nq_arm(bp, &cpr->cp_db, cpr->cp_raw_cons);
+ 
++	mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN;
+ 	for (i = 0; i < bp->nr_vnics; i++) {
+ 		vnic = &bp->vnic_info[i];
+ 
+-		rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true);
+-		if (rc) {
+-			netdev_err(bp->dev, "hwrm vnic %d set rss failure rc: %d\n",
+-				   vnic->vnic_id, rc);
++		rc = bnxt_set_vnic_mru_p5(bp, vnic, mru, idx);
++		if (rc)
+ 			return rc;
+-		}
+-		vnic->mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN;
+-		bnxt_hwrm_vnic_update(bp, vnic,
+-				      VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
+ 	}
+-
+-	return 0;
++	return bnxt_set_rss_ctx_vnic_mru(bp, mru, idx);
+ 
+ err_reset:
+ 	netdev_err(bp->dev, "Unexpected HWRM error during queue start rc: %d\n",
+@@ -15969,10 +16030,10 @@ static int bnxt_queue_stop(struct net_device *dev, void *qmem, int idx)
+ 
+ 	for (i = 0; i < bp->nr_vnics; i++) {
+ 		vnic = &bp->vnic_info[i];
+-		vnic->mru = 0;
+-		bnxt_hwrm_vnic_update(bp, vnic,
+-				      VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
++
++		bnxt_set_vnic_mru_p5(bp, vnic, 0, idx);
+ 	}
++	bnxt_set_rss_ctx_vnic_mru(bp, 0, idx);
+ 	/* Make sure NAPI sees that the VNIC is disabled */
+ 	synchronize_net();
+ 	rxr = &bp->rx_ring[idx];
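
bnxt_vnic_has_rx_ring() above lets queue start/stop skip VNICs whose RSS
indirection table never points at the ring in question (ntuple VNICs
excepted, as the comment explains). Stripped of driver state, the check is
a linear scan; a standalone version with an invented table:

#include <stdbool.h>
#include <stdio.h>

static bool vnic_has_rx_ring(const int *indir_tbl, int tbl_size, int rxr_id)
{
	for (int i = 0; i < tbl_size; i++)
		if (indir_tbl[i] == rxr_id)
			return true;
	return false;
}

int main(void)
{
	int tbl[] = { 0, 1, 2, 0, 1, 2 };   /* rings referenced by this VNIC */

	printf("%d %d\n",
	       vnic_has_rx_ring(tbl, 6, 2),   /* 1: ring is in the table */
	       vnic_has_rx_ring(tbl, 6, 5));  /* 0: skip the MRU update */
	return 0;
}
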
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+index 7564705d64783e..2450a369b79201 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+@@ -149,7 +149,6 @@ void bnxt_unregister_dev(struct bnxt_en_dev *edev)
+ 	struct net_device *dev = edev->net;
+ 	struct bnxt *bp = netdev_priv(dev);
+ 	struct bnxt_ulp *ulp;
+-	int i = 0;
+ 
+ 	ulp = edev->ulp_tbl;
+ 	netdev_lock(dev);
+@@ -165,10 +164,6 @@ void bnxt_unregister_dev(struct bnxt_en_dev *edev)
+ 	synchronize_rcu();
+ 	ulp->max_async_event_id = 0;
+ 	ulp->async_events_bmap = NULL;
+-	while (atomic_read(&ulp->ref_count) != 0 && i < 10) {
+-		msleep(100);
+-		i++;
+-	}
+ 	mutex_unlock(&edev->en_dev_lock);
+ 	netdev_unlock(dev);
+ 	return;
+@@ -236,10 +231,9 @@ void bnxt_ulp_stop(struct bnxt *bp)
+ 		return;
+ 
+ 	mutex_lock(&edev->en_dev_lock);
+-	if (!bnxt_ulp_registered(edev)) {
+-		mutex_unlock(&edev->en_dev_lock);
+-		return;
+-	}
++	if (!bnxt_ulp_registered(edev) ||
++	    (edev->flags & BNXT_EN_FLAG_ULP_STOPPED))
++		goto ulp_stop_exit;
+ 
+ 	edev->flags |= BNXT_EN_FLAG_ULP_STOPPED;
+ 	if (aux_priv) {
+@@ -255,6 +249,7 @@ void bnxt_ulp_stop(struct bnxt *bp)
+ 			adrv->suspend(adev, pm);
+ 		}
+ 	}
++ulp_stop_exit:
+ 	mutex_unlock(&edev->en_dev_lock);
+ }
+ 
+@@ -263,19 +258,13 @@ void bnxt_ulp_start(struct bnxt *bp, int err)
+ 	struct bnxt_aux_priv *aux_priv = bp->aux_priv;
+ 	struct bnxt_en_dev *edev = bp->edev;
+ 
+-	if (!edev)
+-		return;
+-
+-	edev->flags &= ~BNXT_EN_FLAG_ULP_STOPPED;
+-
+-	if (err)
++	if (!edev || err)
+ 		return;
+ 
+ 	mutex_lock(&edev->en_dev_lock);
+-	if (!bnxt_ulp_registered(edev)) {
+-		mutex_unlock(&edev->en_dev_lock);
+-		return;
+-	}
++	if (!bnxt_ulp_registered(edev) ||
++	    !(edev->flags & BNXT_EN_FLAG_ULP_STOPPED))
++		goto ulp_start_exit;
+ 
+ 	if (edev->ulp_tbl->msix_requested)
+ 		bnxt_fill_msix_vecs(bp, edev->msix_entries);
+@@ -292,6 +281,8 @@ void bnxt_ulp_start(struct bnxt *bp, int err)
+ 			adrv->resume(adev);
+ 		}
+ 	}
++ulp_start_exit:
++	edev->flags &= ~BNXT_EN_FLAG_ULP_STOPPED;
+ 	mutex_unlock(&edev->en_dev_lock);
+ }
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
+index 7fa3b8d1ebd288..f6b5efb5e77535 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
+@@ -50,7 +50,6 @@ struct bnxt_ulp {
+ 	unsigned long	*async_events_bmap;
+ 	u16		max_async_event_id;
+ 	u16		msix_requested;
+-	atomic_t	ref_count;
+ };
+ 
+ struct bnxt_en_dev {
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index e1e8bd2ec155b8..d1f1ae5ea161cc 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -5283,7 +5283,11 @@ static int macb_probe(struct platform_device *pdev)
+ 
+ #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+ 	if (GEM_BFEXT(DAW64, gem_readl(bp, DCFG6))) {
+-		dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(44));
++		err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(44));
++		if (err) {
++			dev_err(&pdev->dev, "failed to set DMA mask\n");
++			goto err_out_free_netdev;
++		}
+ 		bp->hw_dma_cap |= HW_DMA_CAP_64B;
+ 	}
+ #endif
+diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
+index 517a15904fb085..6a2004bbe87f93 100644
+--- a/drivers/net/ethernet/cortina/gemini.c
++++ b/drivers/net/ethernet/cortina/gemini.c
+@@ -1144,6 +1144,7 @@ static int gmac_map_tx_bufs(struct net_device *netdev, struct sk_buff *skb,
+ 	struct gmac_txdesc *txd;
+ 	skb_frag_t *skb_frag;
+ 	dma_addr_t mapping;
++	bool tcp = false;
+ 	void *buffer;
+ 	u16 mss;
+ 	int ret;
+@@ -1151,6 +1152,13 @@ static int gmac_map_tx_bufs(struct net_device *netdev, struct sk_buff *skb,
+ 	word1 = skb->len;
+ 	word3 = SOF_BIT;
+ 
++	/* Determine if we are doing TCP */
++	if (skb->protocol == htons(ETH_P_IP))
++		tcp = (ip_hdr(skb)->protocol == IPPROTO_TCP);
++	else
++		/* IPv6 */
++		tcp = (ipv6_hdr(skb)->nexthdr == IPPROTO_TCP);
++
+ 	mss = skb_shinfo(skb)->gso_size;
+ 	if (mss) {
+ 		/* This means we are dealing with TCP and skb->len is the
+@@ -1163,8 +1171,26 @@ static int gmac_map_tx_bufs(struct net_device *netdev, struct sk_buff *skb,
+ 			   mss, skb->len);
+ 		word1 |= TSS_MTU_ENABLE_BIT;
+ 		word3 |= mss;
++	} else if (tcp) {
++		/* Even if we are not using TSO, use the hardware offloader
++		 * for transferring the TCP frame: this hardware has partial
++		 * TCP awareness (called TOE - TCP Offload Engine) and will
++		 * according to the datasheet put packets belonging to the
++		 * same TCP connection in the same queue for the TOE/TSO
++		 * engine to process. The engine will deal with chopping
++		 * up frames that exceed ETH_DATA_LEN which the
++		 * checksumming engine cannot handle (see below) into
++		 * manageable chunks. It flawlessly deals with quite big
++		 * frames and frames containing custom DSA EtherTypes.
++		 */
++		mss = netdev->mtu + skb_tcp_all_headers(skb);
++		mss = min(mss, skb->len);
++		netdev_dbg(netdev, "TOE/TSO len %04x mtu %04x mss %04x\n",
++			   skb->len, netdev->mtu, mss);
++		word1 |= TSS_MTU_ENABLE_BIT;
++		word3 |= mss;
+ 	} else if (skb->len >= ETH_FRAME_LEN) {
+-		/* Hardware offloaded checksumming isn't working on frames
++		/* Hardware offloaded checksumming isn't working on non-TCP frames
+ 		 * bigger than 1514 bytes. A hypothesis about this is that the
+ 		 * checksum buffer is only 1518 bytes, so when the frames get
+ 		 * bigger they get truncated, or the last few bytes get
+@@ -1181,21 +1207,16 @@ static int gmac_map_tx_bufs(struct net_device *netdev, struct sk_buff *skb,
+ 	}
+ 
+ 	if (skb->ip_summed == CHECKSUM_PARTIAL) {
+-		int tcp = 0;
+-
+ 		/* We do not switch off the checksumming on non TCP/UDP
+ 		 * frames: as is shown from tests, the checksumming engine
+ 		 * is smart enough to see that a frame is not actually TCP
+ 		 * or UDP and then just pass it through without any changes
+ 		 * to the frame.
+ 		 */
+-		if (skb->protocol == htons(ETH_P_IP)) {
++		if (skb->protocol == htons(ETH_P_IP))
+ 			word1 |= TSS_IP_CHKSUM_BIT;
+-			tcp = ip_hdr(skb)->protocol == IPPROTO_TCP;
+-		} else { /* IPv6 */
++		else
+ 			word1 |= TSS_IPV6_ENABLE_BIT;
+-			tcp = ipv6_hdr(skb)->nexthdr == IPPROTO_TCP;
+-		}
+ 
+ 		word1 |= tcp ? TSS_TCP_CHKSUM_BIT : TSS_UDP_CHKSUM_BIT;
+ 	}
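
For plain (non-TSO) TCP frames the gemini driver now still routes the frame
through the TOE engine, advertising a segment size of min(mtu + TCP
headers, skb->len). The arithmetic in isolation (the header length is an
assumed example value):

#include <stdio.h>

static unsigned int toe_mss(unsigned int mtu, unsigned int hdr_len,
			    unsigned int skb_len)
{
	unsigned int mss = mtu + hdr_len;

	return mss < skb_len ? mss : skb_len;	/* min(mss, skb->len) */
}

int main(void)
{
	/* 1500-byte MTU, 54 bytes of Ethernet + IP + TCP headers */
	printf("%u\n", toe_mss(1500, 54, 400));   /* short frame: 400 */
	printf("%u\n", toe_mss(1500, 54, 9000));  /* long frame: 1554 */
	return 0;
}
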
+diff --git a/drivers/net/ethernet/dlink/dl2k.c b/drivers/net/ethernet/dlink/dl2k.c
+index 232e839a9d0719..038a0400c1f956 100644
+--- a/drivers/net/ethernet/dlink/dl2k.c
++++ b/drivers/net/ethernet/dlink/dl2k.c
+@@ -146,6 +146,8 @@ rio_probe1 (struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	np->ioaddr = ioaddr;
+ 	np->chip_id = chip_idx;
+ 	np->pdev = pdev;
++
++	spin_lock_init(&np->stats_lock);
+ 	spin_lock_init (&np->tx_lock);
+ 	spin_lock_init (&np->rx_lock);
+ 
+@@ -865,7 +867,6 @@ tx_error (struct net_device *dev, int tx_status)
+ 	frame_id = (tx_status & 0xffff0000);
+ 	printk (KERN_ERR "%s: Transmit error, TxStatus %4.4x, FrameId %d.\n",
+ 		dev->name, tx_status, frame_id);
+-	dev->stats.tx_errors++;
+ 	/* Transmit Underrun */
+ 	if (tx_status & 0x10) {
+ 		dev->stats.tx_fifo_errors++;
+@@ -902,9 +903,15 @@ tx_error (struct net_device *dev, int tx_status)
+ 		rio_set_led_mode(dev);
+ 		/* Let TxStartThresh stay default value */
+ 	}
++
++	spin_lock(&np->stats_lock);
+ 	/* Maximum Collisions */
+ 	if (tx_status & 0x08)
+ 		dev->stats.collisions++;
++
++	dev->stats.tx_errors++;
++	spin_unlock(&np->stats_lock);
++
+ 	/* Restart the Tx */
+ 	dw32(MACCtrl, dr16(MACCtrl) | TxEnable);
+ }
+@@ -1073,7 +1080,9 @@ get_stats (struct net_device *dev)
+ 	int i;
+ #endif
+ 	unsigned int stat_reg;
++	unsigned long flags;
+ 
++	spin_lock_irqsave(&np->stats_lock, flags);
+ 	/* All statistics registers need to be acknowledged,
+ 	   else statistic overflow could cause problems */
+ 
+@@ -1123,6 +1132,9 @@ get_stats (struct net_device *dev)
+ 	dr16(TCPCheckSumErrors);
+ 	dr16(UDPCheckSumErrors);
+ 	dr16(IPCheckSumErrors);
++
++	spin_unlock_irqrestore(&np->stats_lock, flags);
++
+ 	return &dev->stats;
+ }
+ 
+diff --git a/drivers/net/ethernet/dlink/dl2k.h b/drivers/net/ethernet/dlink/dl2k.h
+index 0e33e2eaae9606..56aff2f0bdbfa0 100644
+--- a/drivers/net/ethernet/dlink/dl2k.h
++++ b/drivers/net/ethernet/dlink/dl2k.h
+@@ -372,6 +372,8 @@ struct netdev_private {
+ 	struct pci_dev *pdev;
+ 	void __iomem *ioaddr;
+ 	void __iomem *eeprom_addr;
++	// To ensure synchronization when stats are updated.
++	spinlock_t stats_lock;
+ 	spinlock_t tx_lock;
+ 	spinlock_t rx_lock;
+ 	unsigned int rx_buf_sz;		/* Based on MTU+slack. */
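
The dl2k changes put one stats_lock around every dev->stats update that can
race between the tx-error handler and get_stats() (irqsave in get_stats(),
since the writer can run in interrupt context). As a userspace analogue,
the same discipline with a pthread mutex; the kernel primitives differ,
this only illustrates taking one lock around both the update and the
read-out (build with -pthread):

#include <pthread.h>
#include <stdio.h>

static struct {
	pthread_mutex_t lock;
	unsigned long tx_errors;
	unsigned long collisions;
} stats = { .lock = PTHREAD_MUTEX_INITIALIZER };

static void *tx_error_path(void *arg)	/* interrupt path in the driver */
{
	(void)arg;
	for (int i = 0; i < 100000; i++) {
		pthread_mutex_lock(&stats.lock);
		stats.tx_errors++;
		stats.collisions++;
		pthread_mutex_unlock(&stats.lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, tx_error_path, NULL);

	pthread_mutex_lock(&stats.lock);  /* get_stats(): consistent snapshot */
	unsigned long e = stats.tx_errors, c = stats.collisions;
	pthread_mutex_unlock(&stats.lock);

	pthread_join(t, NULL);
	printf("snapshot: %lu errors, %lu collisions\n", e, c);
	return 0;
}
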
+diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
+index 51b8377edd1d04..a89aa4ac0a064a 100644
+--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
++++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
+@@ -1609,7 +1609,7 @@ int be_cmd_get_stats(struct be_adapter *adapter, struct be_dma_mem *nonemb_cmd)
+ 	/* version 1 of the cmd is not supported only by BE2 */
+ 	if (BE2_chip(adapter))
+ 		hdr->version = 0;
+-	if (BE3_chip(adapter) || lancer_chip(adapter))
++	else if (BE3_chip(adapter) || lancer_chip(adapter))
+ 		hdr->version = 1;
+ 	else
+ 		hdr->version = 2;
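
The be_cmds.c one-liner adds a missing else: previously a BE2 adapter set
version 0 and then fell into the else branch of the next if, which
overwrote it with 2. A compilable reduction of the bug next to the fix:

#include <stdio.h>

static int version_buggy(int be2, int be3)
{
	int v = -1;

	if (be2)
		v = 0;
	if (be3)	/* no 'else if': for a BE2 chip this is false... */
		v = 1;
	else
		v = 2;	/* ...so this clobbers the 0 with 2 */
	return v;
}

static int version_fixed(int be2, int be3)
{
	int v = -1;

	if (be2)
		v = 0;
	else if (be3)
		v = 1;
	else
		v = 2;
	return v;
}

int main(void)
{
	printf("BE2: buggy=%d fixed=%d\n",
	       version_buggy(1, 0), version_fixed(1, 0));	/* 2 vs 0 */
	return 0;
}
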
+diff --git a/drivers/net/ethernet/faraday/Kconfig b/drivers/net/ethernet/faraday/Kconfig
+index c699bd6bcbb938..474073c7f94d74 100644
+--- a/drivers/net/ethernet/faraday/Kconfig
++++ b/drivers/net/ethernet/faraday/Kconfig
+@@ -31,6 +31,7 @@ config FTGMAC100
+ 	depends on ARM || COMPILE_TEST
+ 	depends on !64BIT || BROKEN
+ 	select PHYLIB
++	select FIXED_PHY
+ 	select MDIO_ASPEED if MACH_ASPEED_G6
+ 	select CRC32
+ 	help
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index 8ebcb6a7d608ae..a0045aa9550b59 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -3534,9 +3534,6 @@ s32 e1000e_get_base_timinca(struct e1000_adapter *adapter, u32 *timinca)
+ 	case e1000_pch_cnp:
+ 	case e1000_pch_tgp:
+ 	case e1000_pch_adp:
+-	case e1000_pch_mtp:
+-	case e1000_pch_lnp:
+-	case e1000_pch_ptp:
+ 	case e1000_pch_nvp:
+ 		if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) {
+ 			/* Stable 24MHz frequency */
+@@ -3552,6 +3549,17 @@ s32 e1000e_get_base_timinca(struct e1000_adapter *adapter, u32 *timinca)
+ 			adapter->cc.shift = shift;
+ 		}
+ 		break;
++	case e1000_pch_mtp:
++	case e1000_pch_lnp:
++	case e1000_pch_ptp:
++		/* System firmware can misreport this value, so set it to a
++		 * stable 38400KHz frequency.
++		 */
++		incperiod = INCPERIOD_38400KHZ;
++		incvalue = INCVALUE_38400KHZ;
++		shift = INCVALUE_SHIFT_38400KHZ;
++		adapter->cc.shift = shift;
++		break;
+ 	case e1000_82574:
+ 	case e1000_82583:
+ 		/* Stable 25MHz frequency */
+diff --git a/drivers/net/ethernet/intel/e1000e/ptp.c b/drivers/net/ethernet/intel/e1000e/ptp.c
+index 89d57dd911dc89..ea3c3eb2ef2020 100644
+--- a/drivers/net/ethernet/intel/e1000e/ptp.c
++++ b/drivers/net/ethernet/intel/e1000e/ptp.c
+@@ -295,15 +295,17 @@ void e1000e_ptp_init(struct e1000_adapter *adapter)
+ 	case e1000_pch_cnp:
+ 	case e1000_pch_tgp:
+ 	case e1000_pch_adp:
+-	case e1000_pch_mtp:
+-	case e1000_pch_lnp:
+-	case e1000_pch_ptp:
+ 	case e1000_pch_nvp:
+ 		if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI)
+ 			adapter->ptp_clock_info.max_adj = MAX_PPB_24MHZ;
+ 		else
+ 			adapter->ptp_clock_info.max_adj = MAX_PPB_38400KHZ;
+ 		break;
++	case e1000_pch_mtp:
++	case e1000_pch_lnp:
++	case e1000_pch_ptp:
++		adapter->ptp_clock_info.max_adj = MAX_PPB_38400KHZ;
++		break;
+ 	case e1000_82574:
+ 	case e1000_82583:
+ 		adapter->ptp_clock_info.max_adj = MAX_PPB_25MHZ;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
+index 370b4bddee4419..b11c35e307ca96 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
+@@ -817,10 +817,11 @@ int i40e_pf_reset(struct i40e_hw *hw)
+ void i40e_clear_hw(struct i40e_hw *hw)
+ {
+ 	u32 num_queues, base_queue;
+-	u32 num_pf_int;
+-	u32 num_vf_int;
++	s32 num_pf_int;
++	s32 num_vf_int;
+ 	u32 num_vfs;
+-	u32 i, j;
++	s32 i;
++	u32 j;
+ 	u32 val;
+ 	u32 eol = 0x7ff;
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_arfs.c b/drivers/net/ethernet/intel/ice/ice_arfs.c
+index 2bc5c7f598444d..1f7834c0355027 100644
+--- a/drivers/net/ethernet/intel/ice/ice_arfs.c
++++ b/drivers/net/ethernet/intel/ice/ice_arfs.c
+@@ -377,6 +377,50 @@ ice_arfs_is_perfect_flow_set(struct ice_hw *hw, __be16 l3_proto, u8 l4_proto)
+ 	return false;
+ }
+ 
++/**
++ * ice_arfs_cmp - Check if aRFS filter matches this flow.
++ * @fltr_info: filter info of the saved ARFS entry.
++ * @fk: flow dissector keys.
++ * @n_proto:  One of htons(ETH_P_IP) or htons(ETH_P_IPV6).
++ * @ip_proto: One of IPPROTO_TCP or IPPROTO_UDP.
++ *
++ * Since this function assumes limited values for n_proto and ip_proto, it
++ * is meant to be called only from ice_rx_flow_steer().
++ *
++ * Return:
++ * * true	- fltr_info refers to the same flow as fk.
++ * * false	- fltr_info and fk refer to different flows.
++ */
++static bool
++ice_arfs_cmp(const struct ice_fdir_fltr *fltr_info, const struct flow_keys *fk,
++	     __be16 n_proto, u8 ip_proto)
++{
++	/* Determine if the filter is for IPv4 or IPv6 based on flow_type,
++	 * which is one of ICE_FLTR_PTYPE_NONF_IPV{4,6}_{TCP,UDP}.
++	 */
++	bool is_v4 = fltr_info->flow_type == ICE_FLTR_PTYPE_NONF_IPV4_TCP ||
++		     fltr_info->flow_type == ICE_FLTR_PTYPE_NONF_IPV4_UDP;
++
++	/* The following checks are arranged with the quickest and most
++	 * discriminative fields first, for early failure.
++	 */
++	if (is_v4)
++		return n_proto == htons(ETH_P_IP) &&
++			fltr_info->ip.v4.src_port == fk->ports.src &&
++			fltr_info->ip.v4.dst_port == fk->ports.dst &&
++			fltr_info->ip.v4.src_ip == fk->addrs.v4addrs.src &&
++			fltr_info->ip.v4.dst_ip == fk->addrs.v4addrs.dst &&
++			fltr_info->ip.v4.proto == ip_proto;
++
++	return fltr_info->ip.v6.src_port == fk->ports.src &&
++		fltr_info->ip.v6.dst_port == fk->ports.dst &&
++		fltr_info->ip.v6.proto == ip_proto &&
++		!memcmp(&fltr_info->ip.v6.src_ip, &fk->addrs.v6addrs.src,
++			sizeof(struct in6_addr)) &&
++		!memcmp(&fltr_info->ip.v6.dst_ip, &fk->addrs.v6addrs.dst,
++			sizeof(struct in6_addr));
++}
++
+ /**
+  * ice_rx_flow_steer - steer the Rx flow to where application is being run
+  * @netdev: ptr to the netdev being adjusted
+@@ -448,6 +492,10 @@ ice_rx_flow_steer(struct net_device *netdev, const struct sk_buff *skb,
+ 			continue;
+ 
+ 		fltr_info = &arfs_entry->fltr_info;
++
++		if (!ice_arfs_cmp(fltr_info, &fk, n_proto, ip_proto))
++			continue;
++
+ 		ret = fltr_info->fltr_id;
+ 
+ 		if (fltr_info->q_index == rxq_idx ||
+diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c
+index ed21d7f55ac11b..5b9a7ee278f17b 100644
+--- a/drivers/net/ethernet/intel/ice/ice_eswitch.c
++++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c
+@@ -502,10 +502,14 @@ ice_eswitch_attach(struct ice_pf *pf, struct ice_repr *repr, unsigned long *id)
+  */
+ int ice_eswitch_attach_vf(struct ice_pf *pf, struct ice_vf *vf)
+ {
+-	struct ice_repr *repr = ice_repr_create_vf(vf);
+ 	struct devlink *devlink = priv_to_devlink(pf);
++	struct ice_repr *repr;
+ 	int err;
+ 
++	if (!ice_is_eswitch_mode_switchdev(pf))
++		return 0;
++
++	repr = ice_repr_create_vf(vf);
+ 	if (IS_ERR(repr))
+ 		return PTR_ERR(repr);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
+index 4a91e0aaf0a5e5..9d9a7edd3618af 100644
+--- a/drivers/net/ethernet/intel/ice/ice_switch.c
++++ b/drivers/net/ethernet/intel/ice/ice_switch.c
+@@ -3146,7 +3146,7 @@ ice_add_update_vsi_list(struct ice_hw *hw,
+ 		u16 vsi_handle_arr[2];
+ 
+ 		/* A rule already exists with the new VSI being added */
+-		if (cur_fltr->fwd_id.hw_vsi_id == new_fltr->fwd_id.hw_vsi_id)
++		if (cur_fltr->vsi_handle == new_fltr->vsi_handle)
+ 			return -EEXIST;
+ 
+ 		vsi_handle_arr[0] = cur_fltr->vsi_handle;
+@@ -5978,7 +5978,7 @@ ice_adv_add_update_vsi_list(struct ice_hw *hw,
+ 
+ 		/* A rule already exists with the new VSI being added */
+ 		if (test_bit(vsi_handle, m_entry->vsi_list_info->vsi_map))
+-			return 0;
++			return -EEXIST;
+ 
+ 		/* Update the previously created VSI list set with
+ 		 * the new VSI ID passed in
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
+index 0a03a8bb5f8869..2d54828bdfbbcc 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
+@@ -167,7 +167,7 @@ int ixgbe_write_i2c_combined_generic_int(struct ixgbe_hw *hw, u8 addr,
+ 					 u16 reg, u16 val, bool lock)
+ {
+ 	u32 swfw_mask = hw->phy.phy_semaphore_mask;
+-	int max_retry = 1;
++	int max_retry = 3;
+ 	int retry = 0;
+ 	u8 reg_high;
+ 	u8 csum;
+@@ -2285,7 +2285,7 @@ static int ixgbe_write_i2c_byte_generic_int(struct ixgbe_hw *hw, u8 byte_offset,
+ 					    u8 dev_addr, u8 data, bool lock)
+ {
+ 	u32 swfw_mask = hw->phy.phy_semaphore_mask;
+-	u32 max_retry = 1;
++	u32 max_retry = 3;
+ 	u32 retry = 0;
+ 	int status;
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
+index c3b6e0f60a7998..7f6a435ac68069 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
+@@ -357,9 +357,12 @@ int cn10k_free_matchall_ipolicer(struct otx2_nic *pfvf)
+ 	mutex_lock(&pfvf->mbox.lock);
+ 
+ 	/* Remove RQ's policer mapping */
+-	for (qidx = 0; qidx < hw->rx_queues; qidx++)
+-		cn10k_map_unmap_rq_policer(pfvf, qidx,
+-					   hw->matchall_ipolicer, false);
++	for (qidx = 0; qidx < hw->rx_queues; qidx++) {
++		rc = cn10k_map_unmap_rq_policer(pfvf, qidx, hw->matchall_ipolicer, false);
++		if (rc)
++			dev_warn(pfvf->dev, "Failed to unmap RQ %d's policer (error %d).",
++				 qidx, rc);
++	}
+ 
+ 	rc = cn10k_free_leaf_profile(pfvf, hw->matchall_ipolicer);
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index 84cd029a85aab7..1b3004ba4493ed 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -1822,7 +1822,7 @@ int otx2_nix_config_bp(struct otx2_nic *pfvf, bool enable)
+ 		req->chan_cnt = IEEE_8021QAZ_MAX_TCS;
+ 		req->bpid_per_chan = 1;
+ 	} else {
+-		req->chan_cnt = 1;
++		req->chan_cnt = pfvf->hw.rx_chan_cnt;
+ 		req->bpid_per_chan = 0;
+ 	}
+ 
+@@ -1847,7 +1847,7 @@ int otx2_nix_cpt_config_bp(struct otx2_nic *pfvf, bool enable)
+ 		req->chan_cnt = IEEE_8021QAZ_MAX_TCS;
+ 		req->bpid_per_chan = 1;
+ 	} else {
+-		req->chan_cnt = 1;
++		req->chan_cnt = pfvf->hw.rx_chan_cnt;
+ 		req->bpid_per_chan = 0;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+index cd17a3f4faf83e..a68cd3f0304c64 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+@@ -1897,6 +1897,7 @@ static int mlx4_en_get_ts_info(struct net_device *dev,
+ 	if (mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_TS) {
+ 		info->so_timestamping |=
+ 			SOF_TIMESTAMPING_TX_HARDWARE |
++			SOF_TIMESTAMPING_TX_SOFTWARE |
+ 			SOF_TIMESTAMPING_RX_HARDWARE |
+ 			SOF_TIMESTAMPING_RAW_HARDWARE;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c
+index 32de8bfc7644f5..3f8f4306d90b38 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c
+@@ -329,16 +329,12 @@ static void hws_bwc_rule_list_add(struct mlx5hws_bwc_rule *bwc_rule, u16 idx)
+ {
+ 	struct mlx5hws_bwc_matcher *bwc_matcher = bwc_rule->bwc_matcher;
+ 
+-	atomic_inc(&bwc_matcher->num_of_rules);
+ 	bwc_rule->bwc_queue_idx = idx;
+ 	list_add(&bwc_rule->list_node, &bwc_matcher->rules[idx]);
+ }
+ 
+ static void hws_bwc_rule_list_remove(struct mlx5hws_bwc_rule *bwc_rule)
+ {
+-	struct mlx5hws_bwc_matcher *bwc_matcher = bwc_rule->bwc_matcher;
+-
+-	atomic_dec(&bwc_matcher->num_of_rules);
+ 	list_del_init(&bwc_rule->list_node);
+ }
+ 
+@@ -391,6 +387,7 @@ int mlx5hws_bwc_rule_destroy_simple(struct mlx5hws_bwc_rule *bwc_rule)
+ 	mutex_lock(queue_lock);
+ 
+ 	ret = hws_bwc_rule_destroy_hws_sync(bwc_rule, &attr);
++	atomic_dec(&bwc_matcher->num_of_rules);
+ 	hws_bwc_rule_list_remove(bwc_rule);
+ 
+ 	mutex_unlock(queue_lock);
+@@ -860,7 +857,7 @@ int mlx5hws_bwc_rule_create_simple(struct mlx5hws_bwc_rule *bwc_rule,
+ 	}
+ 
+ 	/* check if number of rules require rehash */
+-	num_of_rules = atomic_read(&bwc_matcher->num_of_rules);
++	num_of_rules = atomic_inc_return(&bwc_matcher->num_of_rules);
+ 
+ 	if (unlikely(hws_bwc_matcher_rehash_size_needed(bwc_matcher, num_of_rules))) {
+ 		mutex_unlock(queue_lock);
+@@ -874,6 +871,7 @@ int mlx5hws_bwc_rule_create_simple(struct mlx5hws_bwc_rule *bwc_rule,
+ 				    bwc_matcher->size_log - MLX5HWS_BWC_MATCHER_SIZE_LOG_STEP,
+ 				    bwc_matcher->size_log,
+ 				    ret);
++			atomic_dec(&bwc_matcher->num_of_rules);
+ 			return ret;
+ 		}
+ 
+@@ -906,6 +904,7 @@ int mlx5hws_bwc_rule_create_simple(struct mlx5hws_bwc_rule *bwc_rule,
+ 
+ 	if (ret) {
+ 		mlx5hws_err(ctx, "BWC rule insertion: rehash failed (%d)\n", ret);
++		atomic_dec(&bwc_matcher->num_of_rules);
+ 		return ret;
+ 	}
+ 
+@@ -921,6 +920,7 @@ int mlx5hws_bwc_rule_create_simple(struct mlx5hws_bwc_rule *bwc_rule,
+ 	if (unlikely(ret)) {
+ 		mutex_unlock(queue_lock);
+ 		mlx5hws_err(ctx, "BWC rule insertion failed (%d)\n", ret);
++		atomic_dec(&bwc_matcher->num_of_rules);
+ 		return ret;
+ 	}
+ 
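
The bwc.c hunks move the rule counter to atomic_inc_return() before the
rehash-threshold check and add atomic_dec() on each failure path, so the
count covers in-flight insertions when deciding whether to rehash and is
rolled back if the insert does not complete. A C11-atomics sketch of that
claim-then-roll-back pattern (the threshold and names are invented):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define REHASH_THRESHOLD 4

static atomic_int num_rules;

static bool insert_rule(bool will_fail)
{
	/* claim the slot first; the returned value includes this rule */
	int n = atomic_fetch_add(&num_rules, 1) + 1;

	if (n > REHASH_THRESHOLD)
		printf("rule %d: rehash needed\n", n);

	if (will_fail) {
		atomic_fetch_sub(&num_rules, 1);  /* roll back the claim */
		return false;
	}
	return true;
}

int main(void)
{
	for (int i = 0; i < 5; i++)
		insert_rule(false);
	insert_rule(true);		/* failure path: count restored */
	printf("count = %d\n", atomic_load(&num_rules));	/* 5 */
	return 0;
}
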
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/definer.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/definer.c
+index 293459458cc5f9..ecda35597111ea 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/definer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/definer.c
+@@ -509,7 +509,7 @@ static int
+ hws_definer_conv_outer(struct mlx5hws_definer_conv_data *cd,
+ 		       u32 *match_param)
+ {
+-	bool is_s_ipv6, is_d_ipv6, smac_set, dmac_set;
++	bool is_ipv6, smac_set, dmac_set, ip_addr_set, ip_ver_set;
+ 	struct mlx5hws_definer_fc *fc = cd->fc;
+ 	struct mlx5hws_definer_fc *curr_fc;
+ 	u32 *s_ipv6, *d_ipv6;
+@@ -521,6 +521,20 @@ hws_definer_conv_outer(struct mlx5hws_definer_conv_data *cd,
+ 		return -EINVAL;
+ 	}
+ 
++	ip_addr_set = HWS_IS_FLD_SET_SZ(match_param,
++					outer_headers.src_ipv4_src_ipv6,
++					0x80) ||
++		      HWS_IS_FLD_SET_SZ(match_param,
++					outer_headers.dst_ipv4_dst_ipv6, 0x80);
++	ip_ver_set = HWS_IS_FLD_SET(match_param, outer_headers.ip_version) ||
++		     HWS_IS_FLD_SET(match_param, outer_headers.ethertype);
++
++	if (ip_addr_set && !ip_ver_set) {
++		mlx5hws_err(cd->ctx,
++			    "Unsupported match on IP address without version or ethertype\n");
++		return -EINVAL;
++	}
++
+ 	/* L2 Check ethertype */
+ 	HWS_SET_HDR(fc, match_param, ETH_TYPE_O,
+ 		    outer_headers.ethertype,
+@@ -573,10 +587,16 @@ hws_definer_conv_outer(struct mlx5hws_definer_conv_data *cd,
+ 			      outer_headers.dst_ipv4_dst_ipv6.ipv6_layout);
+ 
+ 	/* Assume IPv6 is used if ipv6 bits are set */
+-	is_s_ipv6 = s_ipv6[0] || s_ipv6[1] || s_ipv6[2];
+-	is_d_ipv6 = d_ipv6[0] || d_ipv6[1] || d_ipv6[2];
++	is_ipv6 = s_ipv6[0] || s_ipv6[1] || s_ipv6[2] ||
++		  d_ipv6[0] || d_ipv6[1] || d_ipv6[2];
+ 
+-	if (is_s_ipv6) {
++	/* IHL is an IPv4-specific field. */
++	if (is_ipv6 && HWS_IS_FLD_SET(match_param, outer_headers.ipv4_ihl)) {
++		mlx5hws_err(cd->ctx, "Unsupported match on IPv6 address and IPv4 IHL\n");
++		return -EINVAL;
++	}
++
++	if (is_ipv6) {
+ 		/* Handle IPv6 source address */
+ 		HWS_SET_HDR(fc, match_param, IPV6_SRC_127_96_O,
+ 			    outer_headers.src_ipv4_src_ipv6.ipv6_simple_layout.ipv6_127_96,
+@@ -590,13 +610,6 @@ hws_definer_conv_outer(struct mlx5hws_definer_conv_data *cd,
+ 		HWS_SET_HDR(fc, match_param, IPV6_SRC_31_0_O,
+ 			    outer_headers.src_ipv4_src_ipv6.ipv6_simple_layout.ipv6_31_0,
+ 			    ipv6_src_outer.ipv6_address_31_0);
+-	} else {
+-		/* Handle IPv4 source address */
+-		HWS_SET_HDR(fc, match_param, IPV4_SRC_O,
+-			    outer_headers.src_ipv4_src_ipv6.ipv6_simple_layout.ipv6_31_0,
+-			    ipv4_src_dest_outer.source_address);
+-	}
+-	if (is_d_ipv6) {
+ 		/* Handle IPv6 destination address */
+ 		HWS_SET_HDR(fc, match_param, IPV6_DST_127_96_O,
+ 			    outer_headers.dst_ipv4_dst_ipv6.ipv6_simple_layout.ipv6_127_96,
+@@ -611,6 +624,10 @@ hws_definer_conv_outer(struct mlx5hws_definer_conv_data *cd,
+ 			    outer_headers.dst_ipv4_dst_ipv6.ipv6_simple_layout.ipv6_31_0,
+ 			    ipv6_dst_outer.ipv6_address_31_0);
+ 	} else {
++		/* Handle IPv4 source address */
++		HWS_SET_HDR(fc, match_param, IPV4_SRC_O,
++			    outer_headers.src_ipv4_src_ipv6.ipv6_simple_layout.ipv6_31_0,
++			    ipv4_src_dest_outer.source_address);
+ 		/* Handle IPv4 destination address */
+ 		HWS_SET_HDR(fc, match_param, IPV4_DST_O,
+ 			    outer_headers.dst_ipv4_dst_ipv6.ipv6_simple_layout.ipv6_31_0,
+@@ -668,7 +685,7 @@ static int
+ hws_definer_conv_inner(struct mlx5hws_definer_conv_data *cd,
+ 		       u32 *match_param)
+ {
+-	bool is_s_ipv6, is_d_ipv6, smac_set, dmac_set;
++	bool is_ipv6, smac_set, dmac_set, ip_addr_set, ip_ver_set;
+ 	struct mlx5hws_definer_fc *fc = cd->fc;
+ 	struct mlx5hws_definer_fc *curr_fc;
+ 	u32 *s_ipv6, *d_ipv6;
+@@ -680,6 +697,20 @@ hws_definer_conv_inner(struct mlx5hws_definer_conv_data *cd,
+ 		return -EINVAL;
+ 	}
+ 
++	ip_addr_set = HWS_IS_FLD_SET_SZ(match_param,
++					inner_headers.src_ipv4_src_ipv6,
++					0x80) ||
++		      HWS_IS_FLD_SET_SZ(match_param,
++					inner_headers.dst_ipv4_dst_ipv6, 0x80);
++	ip_ver_set = HWS_IS_FLD_SET(match_param, inner_headers.ip_version) ||
++		     HWS_IS_FLD_SET(match_param, inner_headers.ethertype);
++
++	if (ip_addr_set && !ip_ver_set) {
++		mlx5hws_err(cd->ctx,
++			    "Unsupported match on IP address without version or ethertype\n");
++		return -EINVAL;
++	}
++
+ 	/* L2 Check ethertype */
+ 	HWS_SET_HDR(fc, match_param, ETH_TYPE_I,
+ 		    inner_headers.ethertype,
+@@ -731,10 +762,16 @@ hws_definer_conv_inner(struct mlx5hws_definer_conv_data *cd,
+ 			      inner_headers.dst_ipv4_dst_ipv6.ipv6_layout);
+ 
+ 	/* Assume IPv6 is used if ipv6 bits are set */
+-	is_s_ipv6 = s_ipv6[0] || s_ipv6[1] || s_ipv6[2];
+-	is_d_ipv6 = d_ipv6[0] || d_ipv6[1] || d_ipv6[2];
++	is_ipv6 = s_ipv6[0] || s_ipv6[1] || s_ipv6[2] ||
++		  d_ipv6[0] || d_ipv6[1] || d_ipv6[2];
+ 
+-	if (is_s_ipv6) {
++	/* IHL is an IPv4-specific field. */
++	if (is_ipv6 && HWS_IS_FLD_SET(match_param, inner_headers.ipv4_ihl)) {
++		mlx5hws_err(cd->ctx, "Unsupported match on IPv6 address and IPv4 IHL\n");
++		return -EINVAL;
++	}
++
++	if (is_ipv6) {
+ 		/* Handle IPv6 source address */
+ 		HWS_SET_HDR(fc, match_param, IPV6_SRC_127_96_I,
+ 			    inner_headers.src_ipv4_src_ipv6.ipv6_simple_layout.ipv6_127_96,
+@@ -748,13 +785,6 @@ hws_definer_conv_inner(struct mlx5hws_definer_conv_data *cd,
+ 		HWS_SET_HDR(fc, match_param, IPV6_SRC_31_0_I,
+ 			    inner_headers.src_ipv4_src_ipv6.ipv6_simple_layout.ipv6_31_0,
+ 			    ipv6_src_inner.ipv6_address_31_0);
+-	} else {
+-		/* Handle IPv4 source address */
+-		HWS_SET_HDR(fc, match_param, IPV4_SRC_I,
+-			    inner_headers.src_ipv4_src_ipv6.ipv6_simple_layout.ipv6_31_0,
+-			    ipv4_src_dest_inner.source_address);
+-	}
+-	if (is_d_ipv6) {
+ 		/* Handle IPv6 destination address */
+ 		HWS_SET_HDR(fc, match_param, IPV6_DST_127_96_I,
+ 			    inner_headers.dst_ipv4_dst_ipv6.ipv6_simple_layout.ipv6_127_96,
+@@ -769,6 +799,10 @@ hws_definer_conv_inner(struct mlx5hws_definer_conv_data *cd,
+ 			    inner_headers.dst_ipv4_dst_ipv6.ipv6_simple_layout.ipv6_31_0,
+ 			    ipv6_dst_inner.ipv6_address_31_0);
+ 	} else {
++		/* Handle IPv4 source address */
++		HWS_SET_HDR(fc, match_param, IPV4_SRC_I,
++			    inner_headers.src_ipv4_src_ipv6.ipv6_simple_layout.ipv6_31_0,
++			    ipv4_src_dest_inner.source_address);
+ 		/* Handle IPv4 destination address */
+ 		HWS_SET_HDR(fc, match_param, IPV4_DST_I,
+ 			    inner_headers.dst_ipv4_dst_ipv6.ipv6_simple_layout.ipv6_31_0,
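
The outer and inner conversion hunks above tighten the same contract: a
match on an IP source or destination address is rejected unless the IP
version or ethertype is also matched (otherwise the shared 128-bit field
is ambiguous), and an IPv6 address match may not be combined with the
IPv4-only IHL field. A minimal standalone sketch of the version rule —
the types and names below are the editor's illustration, not the
driver's API:

#include <stdbool.h>
#include <stdio.h>

struct match_desc {
	bool src_addr_set;	/* any bit of src_ipv4_src_ipv6 set */
	bool dst_addr_set;	/* any bit of dst_ipv4_dst_ipv6 set */
	bool ip_version_set;
	bool ethertype_set;
};

static int validate_ip_match(const struct match_desc *m)
{
	bool addr_set = m->src_addr_set || m->dst_addr_set;
	bool ver_set = m->ip_version_set || m->ethertype_set;

	/* An address match is ambiguous without the IP version: the
	 * same 128-bit field holds either an IPv4 or IPv6 address.
	 */
	if (addr_set && !ver_set)
		return -1;	/* -EINVAL in the driver */
	return 0;
}

int main(void)
{
	struct match_desc m = { .src_addr_set = true };

	printf("%d\n", validate_ip_match(&m));	/* -1: rejected */
	m.ethertype_set = true;
	printf("%d\n", validate_ip_match(&m));	/* 0: accepted */
	return 0;
}

Merging is_s_ipv6/is_d_ipv6 into a single is_ipv6 flag also means both
addresses are decided by one IPv4-vs-IPv6 branch, which is why the IPv4
source handling moves into the same else branch as the destination
handling.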
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/vport.c b/drivers/net/ethernet/mellanox/mlx5/core/vport.c
+index d10d4c39604085..da5c24fc7b30ab 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/vport.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/vport.c
+@@ -465,19 +465,22 @@ int mlx5_query_nic_vport_node_guid(struct mlx5_core_dev *mdev, u64 *node_guid)
+ {
+ 	u32 *out;
+ 	int outlen = MLX5_ST_SZ_BYTES(query_nic_vport_context_out);
++	int err;
+ 
+ 	out = kvzalloc(outlen, GFP_KERNEL);
+ 	if (!out)
+ 		return -ENOMEM;
+ 
+-	mlx5_query_nic_vport_context(mdev, 0, out);
++	err = mlx5_query_nic_vport_context(mdev, 0, out);
++	if (err)
++		goto out;
+ 
+ 	*node_guid = MLX5_GET64(query_nic_vport_context_out, out,
+ 				nic_vport_context.node_guid);
+-
++out:
+ 	kvfree(out);
+ 
+-	return 0;
++	return err;
+ }
+ EXPORT_SYMBOL_GPL(mlx5_query_nic_vport_node_guid);
+ 
+@@ -519,19 +522,22 @@ int mlx5_query_nic_vport_qkey_viol_cntr(struct mlx5_core_dev *mdev,
+ {
+ 	u32 *out;
+ 	int outlen = MLX5_ST_SZ_BYTES(query_nic_vport_context_out);
++	int err;
+ 
+ 	out = kvzalloc(outlen, GFP_KERNEL);
+ 	if (!out)
+ 		return -ENOMEM;
+ 
+-	mlx5_query_nic_vport_context(mdev, 0, out);
++	err = mlx5_query_nic_vport_context(mdev, 0, out);
++	if (err)
++		goto out;
+ 
+ 	*qkey_viol_cntr = MLX5_GET(query_nic_vport_context_out, out,
+ 				   nic_vport_context.qkey_violation_counter);
+-
++out:
+ 	kvfree(out);
+ 
+-	return 0;
++	return err;
+ }
+ EXPORT_SYMBOL_GPL(mlx5_query_nic_vport_qkey_viol_cntr);
+ 
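
Both vport.c hunks fix the same bug: the return value of
mlx5_query_nic_vport_context() was ignored, so on failure the MLX5_GET
accessors read an unfilled buffer and the functions reported success
anyway. A standalone sketch of the resulting single-exit pattern —
query_context() is a stand-in, not the mlx5 API:

#include <stdio.h>
#include <stdlib.h>

static int query_context(void *out) { (void)out; return 0; }

static int query_field(unsigned long long *field)
{
	void *out;
	int err;

	out = calloc(1, 256);
	if (!out)
		return -1;	/* -ENOMEM in the driver */

	err = query_context(out);
	if (err)
		goto out;	/* never read from an unfilled buffer */

	*field = *(unsigned long long *)out;	/* MLX5_GET64 stand-in */
out:
	free(out);	/* one exit frees the buffer on both paths */
	return err;
}

int main(void)
{
	unsigned long long v;

	printf("%d\n", query_field(&v));
	return 0;
}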
+diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
+index fb2e5b844c150d..d76d7a945899c6 100644
+--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
++++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
+@@ -447,8 +447,10 @@ static int mlxbf_gige_probe(struct platform_device *pdev)
+ 	priv->llu_plu_irq = platform_get_irq(pdev, MLXBF_GIGE_LLU_PLU_INTR_IDX);
+ 
+ 	phy_irq = acpi_dev_gpio_irq_get_by(ACPI_COMPANION(&pdev->dev), "phy", 0);
+-	if (phy_irq < 0) {
+-		dev_err(&pdev->dev, "Error getting PHY irq. Use polling instead");
++	if (phy_irq == -EPROBE_DEFER) {
++		err = -EPROBE_DEFER;
++		goto out;
++	} else if (phy_irq < 0) {
+ 		phy_irq = PHY_POLL;
+ 	}
+ 
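
The mlxbf_gige probe fix distinguishes two very different failures from
acpi_dev_gpio_irq_get_by(): -EPROBE_DEFER means the GPIO provider has
simply not probed yet, so the whole probe must be retried later, while
any other error is permanent and PHY polling is an acceptable fallback.
A standalone sketch of that policy, with handle_phy_irq() as a
hypothetical stand-in:

#include <stdio.h>

#define EPROBE_DEFER 517	/* value used by the kernel */
#define PHY_POLL      -1

static int handle_phy_irq(int phy_irq, int *out_irq)
{
	if (phy_irq == -EPROBE_DEFER)
		return -EPROBE_DEFER;	/* bubble up, retry probe later */
	if (phy_irq < 0)
		phy_irq = PHY_POLL;	/* degrade gracefully */
	*out_irq = phy_irq;
	return 0;
}

int main(void)
{
	int irq;

	printf("%d\n", handle_phy_irq(-EPROBE_DEFER, &irq)); /* -517 */
	printf("%d\n", handle_phy_irq(-22, &irq));           /* 0, poll */
	return 0;
}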
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_fw.c b/drivers/net/ethernet/meta/fbnic/fbnic_fw.c
+index 3d9636a6c968ec..0c3985613ea181 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_fw.c
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_fw.c
+@@ -127,11 +127,8 @@ static int fbnic_mbx_map_msg(struct fbnic_dev *fbd, int mbx_idx,
+ 		return -EBUSY;
+ 
+ 	addr = dma_map_single(fbd->dev, msg, PAGE_SIZE, direction);
+-	if (dma_mapping_error(fbd->dev, addr)) {
+-		free_page((unsigned long)msg);
+-
++	if (dma_mapping_error(fbd->dev, addr))
+ 		return -ENOSPC;
+-	}
+ 
+ 	mbx->buf_info[tail].msg = msg;
+ 	mbx->buf_info[tail].addr = addr;
+diff --git a/drivers/net/ethernet/microchip/lan743x_ethtool.c b/drivers/net/ethernet/microchip/lan743x_ethtool.c
+index 1459acfb1e618b..64a3b953cc175d 100644
+--- a/drivers/net/ethernet/microchip/lan743x_ethtool.c
++++ b/drivers/net/ethernet/microchip/lan743x_ethtool.c
+@@ -18,6 +18,8 @@
+ #define EEPROM_MAC_OFFSET		    (0x01)
+ #define MAX_EEPROM_SIZE			    (512)
+ #define MAX_OTP_SIZE			    (1024)
++#define MAX_HS_OTP_SIZE			    (8 * 1024)
++#define MAX_HS_EEPROM_SIZE		    (64 * 1024)
+ #define OTP_INDICATOR_1			    (0xF3)
+ #define OTP_INDICATOR_2			    (0xF7)
+ 
+@@ -272,6 +274,9 @@ static int lan743x_hs_otp_read(struct lan743x_adapter *adapter, u32 offset,
+ 	int ret;
+ 	int i;
+ 
++	if (offset + length > MAX_HS_OTP_SIZE)
++		return -EINVAL;
++
+ 	ret = lan743x_hs_syslock_acquire(adapter, LOCK_TIMEOUT_MAX_CNT);
+ 	if (ret < 0)
+ 		return ret;
+@@ -320,6 +325,9 @@ static int lan743x_hs_otp_write(struct lan743x_adapter *adapter, u32 offset,
+ 	int ret;
+ 	int i;
+ 
++	if (offset + length > MAX_HS_OTP_SIZE)
++		return -EINVAL;
++
+ 	ret = lan743x_hs_syslock_acquire(adapter, LOCK_TIMEOUT_MAX_CNT);
+ 	if (ret < 0)
+ 		return ret;
+@@ -497,6 +505,9 @@ static int lan743x_hs_eeprom_read(struct lan743x_adapter *adapter,
+ 	u32 val;
+ 	int i;
+ 
++	if (offset + length > MAX_HS_EEPROM_SIZE)
++		return -EINVAL;
++
+ 	retval = lan743x_hs_syslock_acquire(adapter, LOCK_TIMEOUT_MAX_CNT);
+ 	if (retval < 0)
+ 		return retval;
+@@ -539,6 +550,9 @@ static int lan743x_hs_eeprom_write(struct lan743x_adapter *adapter,
+ 	u32 val;
+ 	int i;
+ 
++	if (offset + length > MAX_HS_EEPROM_SIZE)
++		return -EINVAL;
++
+ 	retval = lan743x_hs_syslock_acquire(adapter, LOCK_TIMEOUT_MAX_CNT);
+ 	if (retval < 0)
+ 		return retval;
+@@ -604,9 +618,9 @@ static int lan743x_ethtool_get_eeprom_len(struct net_device *netdev)
+ 	struct lan743x_adapter *adapter = netdev_priv(netdev);
+ 
+ 	if (adapter->flags & LAN743X_ADAPTER_FLAG_OTP)
+-		return MAX_OTP_SIZE;
++		return adapter->is_pci11x1x ? MAX_HS_OTP_SIZE : MAX_OTP_SIZE;
+ 
+-	return MAX_EEPROM_SIZE;
++	return adapter->is_pci11x1x ? MAX_HS_EEPROM_SIZE : MAX_EEPROM_SIZE;
+ }
+ 
+ static int lan743x_ethtool_get_eeprom(struct net_device *netdev,
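
The four lan743x hunks add the same guard so that caller-supplied
offset/length pairs cannot read or write past the high-speed OTP and
EEPROM windows. One caveat worth noting: with 32-bit operands,
offset + length can in principle wrap. The sketch below adds an
explicit overflow check (the kernel would use check_add_overflow());
that hardening is the editor's illustration, not part of the patch — in
practice the ethtool core already caps requests at get_eeprom_len().

#include <stdint.h>
#include <stdio.h>

#define MAX_HS_OTP_SIZE (8 * 1024)

static int otp_range_ok(uint32_t offset, uint32_t length)
{
	uint32_t end;

	if (__builtin_add_overflow(offset, length, &end))
		return 0;	/* sum wrapped: reject */
	return end <= MAX_HS_OTP_SIZE;
}

int main(void)
{
	printf("%d\n", otp_range_ok(0, 8192));          /* 1 */
	printf("%d\n", otp_range_ok(8192, 1));          /* 0 */
	printf("%d\n", otp_range_ok(0xFFFFFFF0u, 32));  /* 0, would wrap */
	return 0;
}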
+diff --git a/drivers/net/ethernet/microchip/lan743x_ptp.h b/drivers/net/ethernet/microchip/lan743x_ptp.h
+index 0d29914cd46063..225e8232474d73 100644
+--- a/drivers/net/ethernet/microchip/lan743x_ptp.h
++++ b/drivers/net/ethernet/microchip/lan743x_ptp.h
+@@ -18,9 +18,9 @@
+  */
+ #define LAN743X_PTP_N_EVENT_CHAN	2
+ #define LAN743X_PTP_N_PEROUT		LAN743X_PTP_N_EVENT_CHAN
+-#define LAN743X_PTP_N_EXTTS		4
+-#define LAN743X_PTP_N_PPS		0
+ #define PCI11X1X_PTP_IO_MAX_CHANNELS	8
++#define LAN743X_PTP_N_EXTTS		PCI11X1X_PTP_IO_MAX_CHANNELS
++#define LAN743X_PTP_N_PPS		0
+ #define PTP_CMD_CTL_TIMEOUT_CNT		50
+ 
+ struct lan743x_adapter;
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_main.c b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+index daf1e82cb76b34..0e60a6bef99a3e 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_main.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+@@ -516,9 +516,9 @@ static int __ionic_dev_cmd_wait(struct ionic *ionic, unsigned long max_seconds,
+ 	unsigned long start_time;
+ 	unsigned long max_wait;
+ 	unsigned long duration;
+-	int done = 0;
+ 	bool fw_up;
+ 	int opcode;
++	bool done;
+ 	int err;
+ 
+ 	/* Wait for dev cmd to complete, retrying if we get EAGAIN,
+@@ -526,6 +526,7 @@ static int __ionic_dev_cmd_wait(struct ionic *ionic, unsigned long max_seconds,
+ 	 */
+ 	max_wait = jiffies + (max_seconds * HZ);
+ try_again:
++	done = false;
+ 	opcode = idev->opcode;
+ 	start_time = jiffies;
+ 	for (fw_up = ionic_is_fw_running(idev);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 3a049a158ea111..1d716cee0cb108 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -4493,8 +4493,6 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	if (priv->sarc_type)
+ 		stmmac_set_desc_sarc(priv, first, priv->sarc_type);
+ 
+-	skb_tx_timestamp(skb);
+-
+ 	if (unlikely((skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
+ 		     priv->hwts_tx_en)) {
+ 		/* declare that device is doing timestamping */
+@@ -4527,6 +4525,7 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	}
+ 
+ 	netdev_tx_sent_queue(netdev_get_tx_queue(dev, queue), skb->len);
++	skb_tx_timestamp(skb);
+ 
+ 	stmmac_flush_tx_descriptors(priv, queue);
+ 	stmmac_tx_timer_arm(priv, queue);
+@@ -4770,8 +4769,6 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	if (priv->sarc_type)
+ 		stmmac_set_desc_sarc(priv, first, priv->sarc_type);
+ 
+-	skb_tx_timestamp(skb);
+-
+ 	/* Ready to fill the first descriptor and set the OWN bit w/o any
+ 	 * problems because all the descriptors are actually ready to be
+ 	 * passed to the DMA engine.
+@@ -4818,7 +4815,7 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	netdev_tx_sent_queue(netdev_get_tx_queue(dev, queue), skb->len);
+ 
+ 	stmmac_enable_dma_transmission(priv, priv->ioaddr, queue);
+-
++	skb_tx_timestamp(skb);
+ 	stmmac_flush_tx_descriptors(priv, queue);
+ 	stmmac_tx_timer_arm(priv, queue);
+ 
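
Both stmmac hunks move skb_tx_timestamp() later in the transmit path:
instead of stamping before the descriptors are even built, the frame is
stamped after queue accounting and immediately before the descriptor
flush that hands it to the DMA engine, so the software timestamp
reflects the actual hand-off as closely as possible. A trivial
standalone sketch of the intended ordering — every function is a
stand-in for one of the driver's steps:

#include <stdio.h>

static void fill_descriptors(void) { puts("1. fill descriptors"); }
static void account_bql(void)      { puts("2. netdev_tx_sent_queue"); }
static void sw_tx_timestamp(void)  { puts("3. skb_tx_timestamp"); }
static void flush_to_hw(void)      { puts("4. flush descriptors / doorbell"); }

int main(void)
{
	fill_descriptors();
	account_bql();
	sw_tx_timestamp();	/* moved: was before fill_descriptors() */
	flush_to_hw();
	return 0;
}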
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 30665ffe78cf91..4cec05e0e3d9bb 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -2679,7 +2679,9 @@ static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common)
+ 			goto of_node_put;
+ 
+ 		ret = of_get_mac_address(port_np, port->slave.mac_addr);
+-		if (ret) {
++		if (ret == -EPROBE_DEFER) {
++			goto of_node_put;
++		} else if (ret) {
+ 			am65_cpsw_am654_get_efuse_macid(port_np,
+ 							port->port_id,
+ 							port->slave.mac_addr);
+@@ -3561,6 +3563,16 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
++	am65_cpsw_nuss_get_ver(common);
++
++	ret = am65_cpsw_nuss_init_host_p(common);
++	if (ret)
++		goto err_pm_clear;
++
++	ret = am65_cpsw_nuss_init_slave_ports(common);
++	if (ret)
++		goto err_pm_clear;
++
+ 	node = of_get_child_by_name(dev->of_node, "mdio");
+ 	if (!node) {
+ 		dev_warn(dev, "MDIO node not found\n");
+@@ -3577,16 +3589,6 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
+ 	}
+ 	of_node_put(node);
+ 
+-	am65_cpsw_nuss_get_ver(common);
+-
+-	ret = am65_cpsw_nuss_init_host_p(common);
+-	if (ret)
+-		goto err_of_clear;
+-
+-	ret = am65_cpsw_nuss_init_slave_ports(common);
+-	if (ret)
+-		goto err_of_clear;
+-
+ 	/* init common data */
+ 	ale_params.dev = dev;
+ 	ale_params.ale_ageout = AM65_CPSW_ALE_AGEOUT_DEFAULT;
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_common.c b/drivers/net/ethernet/ti/icssg/icssg_common.c
+index d88a0180294e0f..7ae069e7af92ba 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_common.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_common.c
+@@ -98,20 +98,11 @@ void prueth_xmit_free(struct prueth_tx_chn *tx_chn,
+ {
+ 	struct cppi5_host_desc_t *first_desc, *next_desc;
+ 	dma_addr_t buf_dma, next_desc_dma;
+-	struct prueth_swdata *swdata;
+-	struct page *page;
+ 	u32 buf_dma_len;
+ 
+ 	first_desc = desc;
+ 	next_desc = first_desc;
+ 
+-	swdata = cppi5_hdesc_get_swdata(desc);
+-	if (swdata->type == PRUETH_SWDATA_PAGE) {
+-		page = swdata->data.page;
+-		page_pool_recycle_direct(page->pp, swdata->data.page);
+-		goto free_desc;
+-	}
+-
+ 	cppi5_hdesc_get_obuf(first_desc, &buf_dma, &buf_dma_len);
+ 	k3_udma_glue_tx_cppi5_to_dma_addr(tx_chn->tx_chn, &buf_dma);
+ 
+@@ -135,7 +126,6 @@ void prueth_xmit_free(struct prueth_tx_chn *tx_chn,
+ 		k3_cppi_desc_pool_free(tx_chn->desc_pool, next_desc);
+ 	}
+ 
+-free_desc:
+ 	k3_cppi_desc_pool_free(tx_chn->desc_pool, first_desc);
+ }
+ EXPORT_SYMBOL_GPL(prueth_xmit_free);
+@@ -612,13 +602,8 @@ u32 emac_xmit_xdp_frame(struct prueth_emac *emac,
+ 	k3_udma_glue_tx_dma_to_cppi5_addr(tx_chn->tx_chn, &buf_dma);
+ 	cppi5_hdesc_attach_buf(first_desc, buf_dma, xdpf->len, buf_dma, xdpf->len);
+ 	swdata = cppi5_hdesc_get_swdata(first_desc);
+-	if (page) {
+-		swdata->type = PRUETH_SWDATA_PAGE;
+-		swdata->data.page = page;
+-	} else {
+-		swdata->type = PRUETH_SWDATA_XDPF;
+-		swdata->data.xdpf = xdpf;
+-	}
++	swdata->type = PRUETH_SWDATA_XDPF;
++	swdata->data.xdpf = xdpf;
+ 
+ 	/* Report BQL before sending the packet */
+ 	netif_txq = netdev_get_tx_queue(ndev, tx_chn->id);
+diff --git a/drivers/net/ethernet/vertexcom/mse102x.c b/drivers/net/ethernet/vertexcom/mse102x.c
+index e4d993f3137407..545177e84c0eba 100644
+--- a/drivers/net/ethernet/vertexcom/mse102x.c
++++ b/drivers/net/ethernet/vertexcom/mse102x.c
+@@ -306,7 +306,7 @@ static void mse102x_dump_packet(const char *msg, int len, const char *data)
+ 		       data, len, true);
+ }
+ 
+-static void mse102x_rx_pkt_spi(struct mse102x_net *mse)
++static irqreturn_t mse102x_rx_pkt_spi(struct mse102x_net *mse)
+ {
+ 	struct sk_buff *skb;
+ 	unsigned int rxalign;
+@@ -327,7 +327,7 @@ static void mse102x_rx_pkt_spi(struct mse102x_net *mse)
+ 		mse102x_tx_cmd_spi(mse, CMD_CTR);
+ 		ret = mse102x_rx_cmd_spi(mse, (u8 *)&rx);
+ 		if (ret)
+-			return;
++			return IRQ_NONE;
+ 
+ 		cmd_resp = be16_to_cpu(rx);
+ 		if ((cmd_resp & CMD_MASK) != CMD_RTS) {
+@@ -360,7 +360,7 @@ static void mse102x_rx_pkt_spi(struct mse102x_net *mse)
+ 	rxalign = ALIGN(rxlen + DET_SOF_LEN + DET_DFT_LEN, 4);
+ 	skb = netdev_alloc_skb_ip_align(mse->ndev, rxalign);
+ 	if (!skb)
+-		return;
++		return IRQ_NONE;
+ 
+ 	/* 2 bytes Start of frame (before ethernet header)
+ 	 * 2 bytes Data frame tail (after ethernet frame)
+@@ -370,7 +370,7 @@ static void mse102x_rx_pkt_spi(struct mse102x_net *mse)
+ 	if (mse102x_rx_frame_spi(mse, rxpkt, rxlen, drop)) {
+ 		mse->ndev->stats.rx_errors++;
+ 		dev_kfree_skb(skb);
+-		return;
++		return IRQ_HANDLED;
+ 	}
+ 
+ 	if (netif_msg_pktdata(mse))
+@@ -381,6 +381,8 @@ static void mse102x_rx_pkt_spi(struct mse102x_net *mse)
+ 
+ 	mse->ndev->stats.rx_packets++;
+ 	mse->ndev->stats.rx_bytes += rxlen;
++
++	return IRQ_HANDLED;
+ }
+ 
+ static int mse102x_tx_pkt_spi(struct mse102x_net *mse, struct sk_buff *txb,
+@@ -512,12 +514,13 @@ static irqreturn_t mse102x_irq(int irq, void *_mse)
+ {
+ 	struct mse102x_net *mse = _mse;
+ 	struct mse102x_net_spi *mses = to_mse102x_spi(mse);
++	irqreturn_t ret;
+ 
+ 	mutex_lock(&mses->lock);
+-	mse102x_rx_pkt_spi(mse);
++	ret = mse102x_rx_pkt_spi(mse);
+ 	mutex_unlock(&mses->lock);
+ 
+-	return IRQ_HANDLED;
++	return ret;
+ }
+ 
+ static int mse102x_net_open(struct net_device *ndev)
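
The point of threading irqreturn_t through mse102x_rx_pkt_spi() is that
returning IRQ_NONE when nothing was actually serviced lets the kernel's
spurious-interrupt detector notice and disable a stuck line. A
standalone sketch of the contract, with a userspace stand-in for the
kernel's irqreturn_t:

#include <stdio.h>

typedef enum { IRQ_NONE, IRQ_HANDLED } irqreturn_t;

static irqreturn_t rx_pkt(int got_valid_cmd)
{
	if (!got_valid_cmd)
		return IRQ_NONE;	/* nothing read: maybe spurious */
	/* ... receive and deliver the frame ... */
	return IRQ_HANDLED;
}

static irqreturn_t irq_handler(int got_valid_cmd)
{
	/* lock, delegate, unlock; propagate the result unchanged */
	return rx_pkt(got_valid_cmd);
}

int main(void)
{
	printf("%d %d\n", irq_handler(0), irq_handler(1));	/* 0 1 */
	return 0;
}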
+diff --git a/drivers/net/hyperv/netvsc_bpf.c b/drivers/net/hyperv/netvsc_bpf.c
+index e01c5997a551c6..1dd3755d9e6df3 100644
+--- a/drivers/net/hyperv/netvsc_bpf.c
++++ b/drivers/net/hyperv/netvsc_bpf.c
+@@ -183,7 +183,7 @@ int netvsc_vf_setxdp(struct net_device *vf_netdev, struct bpf_prog *prog)
+ 	xdp.command = XDP_SETUP_PROG;
+ 	xdp.prog = prog;
+ 
+-	ret = dev_xdp_propagate(vf_netdev, &xdp);
++	ret = netif_xdp_propagate(vf_netdev, &xdp);
+ 
+ 	if (ret && prog)
+ 		bpf_prog_put(prog);
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index d8b169ac0343c5..31242921c8285a 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2462,8 +2462,6 @@ static int netvsc_unregister_vf(struct net_device *vf_netdev)
+ 
+ 	netdev_info(ndev, "VF unregistering: %s\n", vf_netdev->name);
+ 
+-	netvsc_vf_setxdp(vf_netdev, NULL);
+-
+ 	reinit_completion(&net_device_ctx->vf_add);
+ 	netdev_rx_handler_unregister(vf_netdev);
+ 	netdev_upper_dev_unlink(vf_netdev, ndev);
+@@ -2631,7 +2629,9 @@ static int netvsc_probe(struct hv_device *dev,
+ 			continue;
+ 
+ 		netvsc_prepare_bonding(vf_netdev);
++		netdev_lock_ops(vf_netdev);
+ 		netvsc_register_vf(vf_netdev, VF_REG_IN_PROBE);
++		netdev_unlock_ops(vf_netdev);
+ 		__netvsc_vf_setup(net, vf_netdev);
+ 		break;
+ 	}
+diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
+index 31a06e71be25bb..b2a3518015372f 100644
+--- a/drivers/net/netdevsim/netdev.c
++++ b/drivers/net/netdevsim/netdev.c
+@@ -29,6 +29,7 @@
+ #include <net/pkt_cls.h>
+ #include <net/rtnetlink.h>
+ #include <net/udp_tunnel.h>
++#include <net/busy_poll.h>
+ 
+ #include "netdevsim.h"
+ 
+@@ -357,6 +358,7 @@ static int nsim_rcv(struct nsim_rq *rq, int budget)
+ 			break;
+ 
+ 		skb = skb_dequeue(&rq->skb_queue);
++		skb_mark_napi_id(skb, &rq->napi);
+ 		netif_receive_skb(skb);
+ 	}
+ 
+diff --git a/drivers/net/phy/marvell-88q2xxx.c b/drivers/net/phy/marvell-88q2xxx.c
+index 23e1f0521f5498..65f31d3c348106 100644
+--- a/drivers/net/phy/marvell-88q2xxx.c
++++ b/drivers/net/phy/marvell-88q2xxx.c
+@@ -119,7 +119,6 @@
+ #define MV88Q2XXX_LED_INDEX_GPIO			1
+ 
+ struct mv88q2xxx_priv {
+-	bool enable_temp;
+ 	bool enable_led0;
+ };
+ 
+@@ -482,49 +481,6 @@ static int mv88q2xxx_config_aneg(struct phy_device *phydev)
+ 	return phydev->drv->soft_reset(phydev);
+ }
+ 
+-static int mv88q2xxx_config_init(struct phy_device *phydev)
+-{
+-	struct mv88q2xxx_priv *priv = phydev->priv;
+-	int ret;
+-
+-	/* The 88Q2XXX PHYs do have the extended ability register available, but
+-	 * register MDIO_PMA_EXTABLE where they should signalize it does not
+-	 * work according to specification. Therefore, we force it here.
+-	 */
+-	phydev->pma_extable = MDIO_PMA_EXTABLE_BT1;
+-
+-	/* Configure interrupt with default settings, output is driven low for
+-	 * active interrupt and high for inactive.
+-	 */
+-	if (phy_interrupt_is_valid(phydev)) {
+-		ret = phy_set_bits_mmd(phydev, MDIO_MMD_PCS,
+-				       MDIO_MMD_PCS_MV_GPIO_INT_CTRL,
+-				       MDIO_MMD_PCS_MV_GPIO_INT_CTRL_TRI_DIS);
+-		if (ret < 0)
+-			return ret;
+-	}
+-
+-	/* Enable LED function and disable TX disable feature on LED/TX_ENABLE */
+-	if (priv->enable_led0) {
+-		ret = phy_clear_bits_mmd(phydev, MDIO_MMD_PCS,
+-					 MDIO_MMD_PCS_MV_RESET_CTRL,
+-					 MDIO_MMD_PCS_MV_RESET_CTRL_TX_DISABLE);
+-		if (ret < 0)
+-			return ret;
+-	}
+-
+-	/* Enable temperature sense */
+-	if (priv->enable_temp) {
+-		ret = phy_modify_mmd(phydev, MDIO_MMD_PCS,
+-				     MDIO_MMD_PCS_MV_TEMP_SENSOR2,
+-				     MDIO_MMD_PCS_MV_TEMP_SENSOR2_DIS_MASK, 0);
+-		if (ret < 0)
+-			return ret;
+-	}
+-
+-	return 0;
+-}
+-
+ static int mv88q2xxx_get_sqi(struct phy_device *phydev)
+ {
+ 	int ret;
+@@ -667,6 +623,12 @@ static int mv88q2xxx_resume(struct phy_device *phydev)
+ }
+ 
+ #if IS_ENABLED(CONFIG_HWMON)
++static int mv88q2xxx_enable_temp_sense(struct phy_device *phydev)
++{
++	return phy_modify_mmd(phydev, MDIO_MMD_PCS, MDIO_MMD_PCS_MV_TEMP_SENSOR2,
++			      MDIO_MMD_PCS_MV_TEMP_SENSOR2_DIS_MASK, 0);
++}
++
+ static const struct hwmon_channel_info * const mv88q2xxx_hwmon_info[] = {
+ 	HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT | HWMON_T_MAX | HWMON_T_ALARM),
+ 	NULL
+@@ -762,11 +724,13 @@ static const struct hwmon_chip_info mv88q2xxx_hwmon_chip_info = {
+ 
+ static int mv88q2xxx_hwmon_probe(struct phy_device *phydev)
+ {
+-	struct mv88q2xxx_priv *priv = phydev->priv;
+ 	struct device *dev = &phydev->mdio.dev;
+ 	struct device *hwmon;
++	int ret;
+ 
+-	priv->enable_temp = true;
++	ret = mv88q2xxx_enable_temp_sense(phydev);
++	if (ret < 0)
++		return ret;
+ 
+ 	hwmon = devm_hwmon_device_register_with_info(dev, NULL, phydev,
+ 						     &mv88q2xxx_hwmon_chip_info,
+@@ -776,6 +740,11 @@ static int mv88q2xxx_hwmon_probe(struct phy_device *phydev)
+ }
+ 
+ #else
++static int mv88q2xxx_enable_temp_sense(struct phy_device *phydev)
++{
++	return 0;
++}
++
+ static int mv88q2xxx_hwmon_probe(struct phy_device *phydev)
+ {
+ 	return 0;
+@@ -853,6 +822,48 @@ static int mv88q222x_probe(struct phy_device *phydev)
+ 	return mv88q2xxx_hwmon_probe(phydev);
+ }
+ 
++static int mv88q2xxx_config_init(struct phy_device *phydev)
++{
++	struct mv88q2xxx_priv *priv = phydev->priv;
++	int ret;
++
++	/* The 88Q2XXX PHYs do have the extended ability register available, but
++	 * register MDIO_PMA_EXTABLE where they should signalize it does not
++	 * work according to specification. Therefore, we force it here.
++	 */
++	phydev->pma_extable = MDIO_PMA_EXTABLE_BT1;
++
++	/* Configure interrupt with default settings, output is driven low for
++	 * active interrupt and high for inactive.
++	 */
++	if (phy_interrupt_is_valid(phydev)) {
++		ret = phy_set_bits_mmd(phydev, MDIO_MMD_PCS,
++				       MDIO_MMD_PCS_MV_GPIO_INT_CTRL,
++				       MDIO_MMD_PCS_MV_GPIO_INT_CTRL_TRI_DIS);
++		if (ret < 0)
++			return ret;
++	}
++
++	/* Enable LED function and disable TX disable feature on LED/TX_ENABLE */
++	if (priv->enable_led0) {
++		ret = phy_clear_bits_mmd(phydev, MDIO_MMD_PCS,
++					 MDIO_MMD_PCS_MV_RESET_CTRL,
++					 MDIO_MMD_PCS_MV_RESET_CTRL_TX_DISABLE);
++		if (ret < 0)
++			return ret;
++	}
++
++	/* Enable temperature sense again. There might have been a hard reset
++	 * of the PHY and in this case the register content is restored to
++	 * defaults and we need to enable it again.
++	 */
++	ret = mv88q2xxx_enable_temp_sense(phydev);
++	if (ret < 0)
++		return ret;
++
++	return 0;
++}
++
+ static int mv88q2110_config_init(struct phy_device *phydev)
+ {
+ 	int ret;
+diff --git a/drivers/net/phy/mediatek/mtk-ge-soc.c b/drivers/net/phy/mediatek/mtk-ge-soc.c
+index 175cf5239bba86..21975ef946d5b7 100644
+--- a/drivers/net/phy/mediatek/mtk-ge-soc.c
++++ b/drivers/net/phy/mediatek/mtk-ge-soc.c
+@@ -7,6 +7,7 @@
+ #include <linux/pinctrl/consumer.h>
+ #include <linux/phy.h>
+ #include <linux/regmap.h>
++#include <linux/of.h>
+ 
+ #include "../phylib.h"
+ #include "mtk.h"
+@@ -1319,6 +1320,7 @@ static int mt7988_phy_probe_shared(struct phy_device *phydev)
+ {
+ 	struct device_node *np = dev_of_node(&phydev->mdio.bus->dev);
+ 	struct mtk_socphy_shared *shared = phy_package_get_priv(phydev);
++	struct device_node *pio_np;
+ 	struct regmap *regmap;
+ 	u32 reg;
+ 	int ret;
+@@ -1336,7 +1338,13 @@ static int mt7988_phy_probe_shared(struct phy_device *phydev)
+ 	 * The 4 bits in TPBANK0 are kept as package shared data and are used to
+ 	 * set LED polarity for each of the LED0.
+ 	 */
+-	regmap = syscon_regmap_lookup_by_phandle(np, "mediatek,pio");
++	pio_np = of_parse_phandle(np, "mediatek,pio", 0);
++	if (!pio_np)
++		return -ENODEV;
++
++	regmap = device_node_to_regmap(pio_np);
++	of_node_put(pio_np);
++
+ 	if (IS_ERR(regmap))
+ 		return PTR_ERR(regmap);
+ 
+diff --git a/drivers/net/usb/asix.h b/drivers/net/usb/asix.h
+index 74162190bccc10..8531b804021aa4 100644
+--- a/drivers/net/usb/asix.h
++++ b/drivers/net/usb/asix.h
+@@ -224,7 +224,6 @@ int asix_write_rx_ctl(struct usbnet *dev, u16 mode, int in_pm);
+ 
+ u16 asix_read_medium_status(struct usbnet *dev, int in_pm);
+ int asix_write_medium_mode(struct usbnet *dev, u16 mode, int in_pm);
+-void asix_adjust_link(struct net_device *netdev);
+ 
+ int asix_write_gpio(struct usbnet *dev, u16 value, int sleep, int in_pm);
+ 
+diff --git a/drivers/net/usb/asix_common.c b/drivers/net/usb/asix_common.c
+index 72ffc89b477ad8..7fd763917ae2cf 100644
+--- a/drivers/net/usb/asix_common.c
++++ b/drivers/net/usb/asix_common.c
+@@ -414,28 +414,6 @@ int asix_write_medium_mode(struct usbnet *dev, u16 mode, int in_pm)
+ 	return ret;
+ }
+ 
+-/* set MAC link settings according to information from phylib */
+-void asix_adjust_link(struct net_device *netdev)
+-{
+-	struct phy_device *phydev = netdev->phydev;
+-	struct usbnet *dev = netdev_priv(netdev);
+-	u16 mode = 0;
+-
+-	if (phydev->link) {
+-		mode = AX88772_MEDIUM_DEFAULT;
+-
+-		if (phydev->duplex == DUPLEX_HALF)
+-			mode &= ~AX_MEDIUM_FD;
+-
+-		if (phydev->speed != SPEED_100)
+-			mode &= ~AX_MEDIUM_PS;
+-	}
+-
+-	asix_write_medium_mode(dev, mode, 0);
+-	phy_print_status(phydev);
+-	usbnet_link_change(dev, phydev->link, 0);
+-}
+-
+ int asix_write_gpio(struct usbnet *dev, u16 value, int sleep, int in_pm)
+ {
+ 	int ret;
+diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c
+index da24941a6e4446..9b0318fb50b55c 100644
+--- a/drivers/net/usb/asix_devices.c
++++ b/drivers/net/usb/asix_devices.c
+@@ -752,7 +752,6 @@ static void ax88772_mac_link_down(struct phylink_config *config,
+ 	struct usbnet *dev = netdev_priv(to_net_dev(config->dev));
+ 
+ 	asix_write_medium_mode(dev, 0, 0);
+-	usbnet_link_change(dev, false, false);
+ }
+ 
+ static void ax88772_mac_link_up(struct phylink_config *config,
+@@ -783,7 +782,6 @@ static void ax88772_mac_link_up(struct phylink_config *config,
+ 		m |= AX_MEDIUM_RFC;
+ 
+ 	asix_write_medium_mode(dev, m, 0);
+-	usbnet_link_change(dev, true, false);
+ }
+ 
+ static const struct phylink_mac_ops ax88772_phylink_mac_ops = {
+@@ -1350,10 +1348,9 @@ static const struct driver_info ax88772_info = {
+ 	.description = "ASIX AX88772 USB 2.0 Ethernet",
+ 	.bind = ax88772_bind,
+ 	.unbind = ax88772_unbind,
+-	.status = asix_status,
+ 	.reset = ax88772_reset,
+ 	.stop = ax88772_stop,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_LINK_INTR | FLAG_MULTI_PACKET,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_MULTI_PACKET,
+ 	.rx_fixup = asix_rx_fixup_common,
+ 	.tx_fixup = asix_tx_fixup,
+ };
+@@ -1362,11 +1359,9 @@ static const struct driver_info ax88772b_info = {
+ 	.description = "ASIX AX88772B USB 2.0 Ethernet",
+ 	.bind = ax88772_bind,
+ 	.unbind = ax88772_unbind,
+-	.status = asix_status,
+ 	.reset = ax88772_reset,
+ 	.stop = ax88772_stop,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_LINK_INTR |
+-	         FLAG_MULTI_PACKET,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_MULTI_PACKET,
+ 	.rx_fixup = asix_rx_fixup_common,
+ 	.tx_fixup = asix_tx_fixup,
+ 	.data = FLAG_EEPROM_MAC,
+@@ -1376,11 +1371,9 @@ static const struct driver_info lxausb_t1l_info = {
+ 	.description = "Linux Automation GmbH USB 10Base-T1L",
+ 	.bind = ax88772_bind,
+ 	.unbind = ax88772_unbind,
+-	.status = asix_status,
+ 	.reset = ax88772_reset,
+ 	.stop = ax88772_stop,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_LINK_INTR |
+-		 FLAG_MULTI_PACKET,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_MULTI_PACKET,
+ 	.rx_fixup = asix_rx_fixup_common,
+ 	.tx_fixup = asix_tx_fixup,
+ 	.data = FLAG_EEPROM_MAC,
+@@ -1412,10 +1405,8 @@ static const struct driver_info hg20f9_info = {
+ 	.description = "HG20F9 USB 2.0 Ethernet",
+ 	.bind = ax88772_bind,
+ 	.unbind = ax88772_unbind,
+-	.status = asix_status,
+ 	.reset = ax88772_reset,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_LINK_INTR |
+-	         FLAG_MULTI_PACKET,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_MULTI_PACKET,
+ 	.rx_fixup = asix_rx_fixup_common,
+ 	.tx_fixup = asix_tx_fixup,
+ 	.data = FLAG_EEPROM_MAC,
+diff --git a/drivers/net/usb/ch9200.c b/drivers/net/usb/ch9200.c
+index f69d9b902da04a..a206ffa76f1b93 100644
+--- a/drivers/net/usb/ch9200.c
++++ b/drivers/net/usb/ch9200.c
+@@ -178,6 +178,7 @@ static int ch9200_mdio_read(struct net_device *netdev, int phy_id, int loc)
+ {
+ 	struct usbnet *dev = netdev_priv(netdev);
+ 	unsigned char buff[2];
++	int ret;
+ 
+ 	netdev_dbg(netdev, "%s phy_id:%02x loc:%02x\n",
+ 		   __func__, phy_id, loc);
+@@ -185,8 +186,10 @@ static int ch9200_mdio_read(struct net_device *netdev, int phy_id, int loc)
+ 	if (phy_id != 0)
+ 		return -ENODEV;
+ 
+-	control_read(dev, REQUEST_READ, 0, loc * 2, buff, 0x02,
+-		     CONTROL_TIMEOUT_MS);
++	ret = control_read(dev, REQUEST_READ, 0, loc * 2, buff, 0x02,
++			   CONTROL_TIMEOUT_MS);
++	if (ret < 0)
++		return ret;
+ 
+ 	return (buff[0] | buff[1] << 8);
+ }
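
Before this ch9200 fix, a failed USB control transfer left the stack
buffer uninitialized and the MDIO read returned garbage as a register
value; now the transfer's error code is propagated. A standalone sketch
with control_read() as a stand-in for the USB control transfer:

#include <stdio.h>

static int control_read(unsigned char *buf, int fail)
{
	if (fail)
		return -5;	/* -EIO */
	buf[0] = 0x34;
	buf[1] = 0x12;
	return 2;
}

static int mdio_read(int fail)
{
	unsigned char buff[2];
	int ret = control_read(buff, fail);

	if (ret < 0)
		return ret;	/* was silently ignored before the fix */
	return buff[0] | buff[1] << 8;
}

int main(void)
{
	printf("0x%x\n", mdio_read(0));	/* 0x1234 */
	printf("%d\n", mdio_read(1));	/* -5 */
	return 0;
}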
+diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
+index 9ccc3f09f71b8c..edbf1088c7d741 100644
+--- a/drivers/net/vxlan/vxlan_core.c
++++ b/drivers/net/vxlan/vxlan_core.c
+@@ -610,10 +610,10 @@ static int vxlan_fdb_append(struct vxlan_fdb *f,
+ 	if (rd == NULL)
+ 		return -ENOMEM;
+ 
+-	if (dst_cache_init(&rd->dst_cache, GFP_ATOMIC)) {
+-		kfree(rd);
+-		return -ENOMEM;
+-	}
++	/* The driver can work correctly without a dst cache, so do not treat
++	 * dst cache initialization errors as fatal.
++	 */
++	dst_cache_init(&rd->dst_cache, GFP_ATOMIC | __GFP_NOWARN);
+ 
+ 	rd->remote_ip = *ip;
+ 	rd->remote_port = port;
+@@ -1916,12 +1916,15 @@ static int arp_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni)
+ 			goto out;
+ 		}
+ 
++		rcu_read_lock();
+ 		f = vxlan_find_mac(vxlan, n->ha, vni);
+ 		if (f && vxlan_addr_any(&(first_remote_rcu(f)->remote_ip))) {
+ 			/* bridge-local neighbor */
+ 			neigh_release(n);
++			rcu_read_unlock();
+ 			goto out;
+ 		}
++		rcu_read_unlock();
+ 
+ 		reply = arp_create(ARPOP_REPLY, ETH_P_ARP, sip, dev, tip, sha,
+ 				n->ha, sha);
+@@ -2648,14 +2651,10 @@ static void vxlan_xmit_nh(struct sk_buff *skb, struct net_device *dev,
+ 	memset(&nh_rdst, 0, sizeof(struct vxlan_rdst));
+ 	hash = skb_get_hash(skb);
+ 
+-	rcu_read_lock();
+ 	nh = rcu_dereference(f->nh);
+-	if (!nh) {
+-		rcu_read_unlock();
++	if (!nh)
+ 		goto drop;
+-	}
+ 	do_xmit = vxlan_fdb_nh_path_select(nh, hash, &nh_rdst);
+-	rcu_read_unlock();
+ 
+ 	if (likely(do_xmit))
+ 		vxlan_xmit_one(skb, dev, vni, &nh_rdst, did_rsc);
+@@ -2782,6 +2781,7 @@ static netdev_tx_t vxlan_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	}
+ 
+ 	eth = eth_hdr(skb);
++	rcu_read_lock();
+ 	f = vxlan_find_mac(vxlan, eth->h_dest, vni);
+ 	did_rsc = false;
+ 
+@@ -2804,7 +2804,7 @@ static netdev_tx_t vxlan_xmit(struct sk_buff *skb, struct net_device *dev)
+ 			vxlan_vnifilter_count(vxlan, vni, NULL,
+ 					      VXLAN_VNI_STATS_TX_DROPS, 0);
+ 			kfree_skb_reason(skb, SKB_DROP_REASON_NO_TX_TARGET);
+-			return NETDEV_TX_OK;
++			goto out;
+ 		}
+ 	}
+ 
+@@ -2829,6 +2829,8 @@ static netdev_tx_t vxlan_xmit(struct sk_buff *skb, struct net_device *dev)
+ 			kfree_skb_reason(skb, SKB_DROP_REASON_NO_TX_TARGET);
+ 	}
+ 
++out:
++	rcu_read_unlock();
+ 	return NETDEV_TX_OK;
+ }
+ 
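
The vxlan changes make the RCU lifetime of FDB lookups explicit:
vxlan_find_mac() returns an RCU-protected entry, so arp_reduce() now
takes the read lock around its lookup, and vxlan_xmit() opens one
read-side section before the lookup and holds it to the end of
transmission — which is why vxlan_xmit_nh(), always called inside it,
can drop its own inner locking. The shape of the rule as a
kernel-context sketch (lookup_rcu() and struct entry are hypothetical;
this is not a standalone program):

#include <linux/rcupdate.h>

struct entry { int val; };
struct entry *lookup_rcu(int key);	/* returns RCU-protected pointer */

static int read_entry(int key)
{
	struct entry *e;
	int val = -1;

	rcu_read_lock();
	e = lookup_rcu(key);
	if (e)
		val = e->val;	/* only valid inside the read section */
	rcu_read_unlock();
	/* e must not be dereferenced past this point */
	return val;
}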
+diff --git a/drivers/net/wireless/ath/ath11k/ce.c b/drivers/net/wireless/ath/ath11k/ce.c
+index e66e86bdec20ff..9d8efec46508a1 100644
+--- a/drivers/net/wireless/ath/ath11k/ce.c
++++ b/drivers/net/wireless/ath/ath11k/ce.c
+@@ -393,11 +393,10 @@ static int ath11k_ce_completed_recv_next(struct ath11k_ce_pipe *pipe,
+ 		goto err;
+ 	}
+ 
++	/* Make sure descriptor is read after the head pointer. */
++	dma_rmb();
++
+ 	*nbytes = ath11k_hal_ce_dst_status_get_length(desc);
+-	if (*nbytes == 0) {
+-		ret = -EIO;
+-		goto err;
+-	}
+ 
+ 	*skb = pipe->dest_ring->skb[sw_index];
+ 	pipe->dest_ring->skb[sw_index] = NULL;
+@@ -430,8 +429,8 @@ static void ath11k_ce_recv_process_cb(struct ath11k_ce_pipe *pipe)
+ 		dma_unmap_single(ab->dev, ATH11K_SKB_RXCB(skb)->paddr,
+ 				 max_nbytes, DMA_FROM_DEVICE);
+ 
+-		if (unlikely(max_nbytes < nbytes)) {
+-			ath11k_warn(ab, "rxed more than expected (nbytes %d, max %d)",
++		if (unlikely(max_nbytes < nbytes || nbytes == 0)) {
++			ath11k_warn(ab, "unexpected rx length (nbytes %d, max %d)",
+ 				    nbytes, max_nbytes);
+ 			dev_kfree_skb_any(skb);
+ 			continue;
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index 22eb1b0377ffed..0281ce6fb71773 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -907,6 +907,52 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
+ 	},
+ };
+ 
++static const struct dmi_system_id ath11k_pm_quirk_table[] = {
++	{
++		.driver_data = (void *)ATH11K_PM_WOW,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "21J4"),
++		},
++	},
++	{
++		.driver_data = (void *)ATH11K_PM_WOW,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "21K4"),
++		},
++	},
++	{
++		.driver_data = (void *)ATH11K_PM_WOW,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "21K6"),
++		},
++	},
++	{
++		.driver_data = (void *)ATH11K_PM_WOW,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "21K8"),
++		},
++	},
++	{
++		.driver_data = (void *)ATH11K_PM_WOW,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "21KA"),
++		},
++	},
++	{
++		.driver_data = (void *)ATH11K_PM_WOW,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "21F9"),
++		},
++	},
++	{}
++};
++
+ static inline struct ath11k_pdev *ath11k_core_get_single_pdev(struct ath11k_base *ab)
+ {
+ 	WARN_ON(!ab->hw_params.single_pdev_only);
+@@ -2334,8 +2380,17 @@ EXPORT_SYMBOL(ath11k_core_pre_init);
+ 
+ int ath11k_core_init(struct ath11k_base *ab)
+ {
++	const struct dmi_system_id *dmi_id;
+ 	int ret;
+ 
++	dmi_id = dmi_first_match(ath11k_pm_quirk_table);
++	if (dmi_id)
++		ab->pm_policy = (kernel_ulong_t)dmi_id->driver_data;
++	else
++		ab->pm_policy = ATH11K_PM_DEFAULT;
++
++	ath11k_dbg(ab, ATH11K_DBG_BOOT, "pm policy %u\n", ab->pm_policy);
++
+ 	ret = ath11k_core_soc_create(ab);
+ 	if (ret) {
+ 		ath11k_err(ab, "failed to create soc core: %d\n", ret);
+diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
+index 529aca4f40621e..81a8fe7bed4484 100644
+--- a/drivers/net/wireless/ath/ath11k/core.h
++++ b/drivers/net/wireless/ath/ath11k/core.h
+@@ -894,6 +894,11 @@ struct ath11k_msi_config {
+ 	u16 hw_rev;
+ };
+ 
++enum ath11k_pm_policy {
++	ATH11K_PM_DEFAULT,
++	ATH11K_PM_WOW,
++};
++
+ /* Master structure to hold the hw data which may be used in core module */
+ struct ath11k_base {
+ 	enum ath11k_hw_rev hw_rev;
+@@ -1060,6 +1065,8 @@ struct ath11k_base {
+ 	} testmode;
+ #endif
+ 
++	enum ath11k_pm_policy pm_policy;
++
+ 	/* must be last */
+ 	u8 drv_priv[] __aligned(sizeof(void *));
+ };
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index 218ab41c0f3c9a..ea2959305dec65 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -2637,7 +2637,7 @@ int ath11k_dp_process_rx(struct ath11k_base *ab, int ring_id,
+ 	struct ath11k *ar;
+ 	struct hal_reo_dest_ring *desc;
+ 	enum hal_reo_dest_ring_push_reason push_reason;
+-	u32 cookie;
++	u32 cookie, info0, rx_msdu_info0, rx_mpdu_info0;
+ 	int i;
+ 
+ 	for (i = 0; i < MAX_RADIOS; i++)
+@@ -2650,11 +2650,14 @@ int ath11k_dp_process_rx(struct ath11k_base *ab, int ring_id,
+ try_again:
+ 	ath11k_hal_srng_access_begin(ab, srng);
+ 
++	/* Make sure descriptor is read after the head pointer. */
++	dma_rmb();
++
+ 	while (likely(desc =
+ 	      (struct hal_reo_dest_ring *)ath11k_hal_srng_dst_get_next_entry(ab,
+ 									     srng))) {
+ 		cookie = FIELD_GET(BUFFER_ADDR_INFO1_SW_COOKIE,
+-				   desc->buf_addr_info.info1);
++				   READ_ONCE(desc->buf_addr_info.info1));
+ 		buf_id = FIELD_GET(DP_RXDMA_BUF_COOKIE_BUF_ID,
+ 				   cookie);
+ 		mac_id = FIELD_GET(DP_RXDMA_BUF_COOKIE_PDEV_ID, cookie);
+@@ -2683,8 +2686,9 @@ int ath11k_dp_process_rx(struct ath11k_base *ab, int ring_id,
+ 
+ 		num_buffs_reaped[mac_id]++;
+ 
++		info0 = READ_ONCE(desc->info0);
+ 		push_reason = FIELD_GET(HAL_REO_DEST_RING_INFO0_PUSH_REASON,
+-					desc->info0);
++					info0);
+ 		if (unlikely(push_reason !=
+ 			     HAL_REO_DEST_RING_PUSH_REASON_ROUTING_INSTRUCTION)) {
+ 			dev_kfree_skb_any(msdu);
+@@ -2692,18 +2696,21 @@ int ath11k_dp_process_rx(struct ath11k_base *ab, int ring_id,
+ 			continue;
+ 		}
+ 
+-		rxcb->is_first_msdu = !!(desc->rx_msdu_info.info0 &
++		rx_msdu_info0 = READ_ONCE(desc->rx_msdu_info.info0);
++		rx_mpdu_info0 = READ_ONCE(desc->rx_mpdu_info.info0);
++
++		rxcb->is_first_msdu = !!(rx_msdu_info0 &
+ 					 RX_MSDU_DESC_INFO0_FIRST_MSDU_IN_MPDU);
+-		rxcb->is_last_msdu = !!(desc->rx_msdu_info.info0 &
++		rxcb->is_last_msdu = !!(rx_msdu_info0 &
+ 					RX_MSDU_DESC_INFO0_LAST_MSDU_IN_MPDU);
+-		rxcb->is_continuation = !!(desc->rx_msdu_info.info0 &
++		rxcb->is_continuation = !!(rx_msdu_info0 &
+ 					   RX_MSDU_DESC_INFO0_MSDU_CONTINUATION);
+ 		rxcb->peer_id = FIELD_GET(RX_MPDU_DESC_META_DATA_PEER_ID,
+-					  desc->rx_mpdu_info.meta_data);
++					  READ_ONCE(desc->rx_mpdu_info.meta_data));
+ 		rxcb->seq_no = FIELD_GET(RX_MPDU_DESC_INFO0_SEQ_NUM,
+-					 desc->rx_mpdu_info.info0);
++					 rx_mpdu_info0);
+ 		rxcb->tid = FIELD_GET(HAL_REO_DEST_RING_INFO0_RX_QUEUE_NUM,
+-				      desc->info0);
++				      info0);
+ 
+ 		rxcb->mac_id = mac_id;
+ 		__skb_queue_tail(&msdu_list[mac_id], msdu);
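
The ath11k hunks above (and the matching ath12k ones below) enforce one
DMA ordering discipline: the head pointer the device advances is loaded
with READ_ONCE(), a dma_rmb() orders that load before any loads from
the descriptors it covers, and each descriptor word is read once into a
local instead of being re-read from DMA memory. A condensed
kernel-context sketch — struct ring and reap_one() are hypothetical,
not driver API:

struct ring {
	u32 *hp_addr;			/* head pointer, device-updated */
	u32 tp;				/* tail pointer, software-owned */
	u32 num_entries;
	struct { u32 info0; } *desc;	/* coherent DMA descriptors */
};

static bool reap_one(struct ring *r, u32 *info0)
{
	u32 hp = READ_ONCE(*r->hp_addr);

	if (hp == r->tp)
		return false;		/* ring empty */

	/* No descriptor load may be satisfied from before the head
	 * pointer update, or stale fields could be consumed.
	 */
	dma_rmb();

	*info0 = READ_ONCE(r->desc[r->tp].info0);
	r->tp = (r->tp + 1) % r->num_entries;
	return true;
}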
+diff --git a/drivers/net/wireless/ath/ath11k/hal.c b/drivers/net/wireless/ath/ath11k/hal.c
+index 61f4b6dd53807f..8cb1505a5a0c3f 100644
+--- a/drivers/net/wireless/ath/ath11k/hal.c
++++ b/drivers/net/wireless/ath/ath11k/hal.c
+@@ -599,7 +599,7 @@ u32 ath11k_hal_ce_dst_status_get_length(void *buf)
+ 	struct hal_ce_srng_dst_status_desc *desc = buf;
+ 	u32 len;
+ 
+-	len = FIELD_GET(HAL_CE_DST_STATUS_DESC_FLAGS_LEN, desc->flags);
++	len = FIELD_GET(HAL_CE_DST_STATUS_DESC_FLAGS_LEN, READ_ONCE(desc->flags));
+ 	desc->flags &= ~HAL_CE_DST_STATUS_DESC_FLAGS_LEN;
+ 
+ 	return len;
+@@ -829,7 +829,7 @@ void ath11k_hal_srng_access_begin(struct ath11k_base *ab, struct hal_srng *srng)
+ 		srng->u.src_ring.cached_tp =
+ 			*(volatile u32 *)srng->u.src_ring.tp_addr;
+ 	} else {
+-		srng->u.dst_ring.cached_hp = *srng->u.dst_ring.hp_addr;
++		srng->u.dst_ring.cached_hp = READ_ONCE(*srng->u.dst_ring.hp_addr);
+ 
+ 		/* Try to prefetch the next descriptor in the ring */
+ 		if (srng->flags & HAL_SRNG_FLAGS_CACHED)
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
+index 4f8b08ed1bbc6e..83a48a77c53ee5 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.c
++++ b/drivers/net/wireless/ath/ath11k/qmi.c
+@@ -1993,6 +1993,15 @@ static int ath11k_qmi_alloc_target_mem_chunk(struct ath11k_base *ab)
+ 			    chunk->prev_size == chunk->size)
+ 				continue;
+ 
++			if (ab->qmi.mem_seg_count <= ATH11K_QMI_FW_MEM_REQ_SEGMENT_CNT) {
++				ath11k_dbg(ab, ATH11K_DBG_QMI,
++					   "size/type mismatch (current %d %u) (prev %d %u), will retry later with smaller size\n",
++					   chunk->size, chunk->type,
++					   chunk->prev_size, chunk->prev_type);
++				ab->qmi.target_mem_delayed = true;
++				return 0;
++			}
++
+ 			/* cannot reuse the existing chunk */
+ 			dma_free_coherent(ab->dev, chunk->prev_size,
+ 					  chunk->vaddr, chunk->paddr);
+diff --git a/drivers/net/wireless/ath/ath12k/ce.c b/drivers/net/wireless/ath/ath12k/ce.c
+index be0d669d31fcce..740586fe49d1f9 100644
+--- a/drivers/net/wireless/ath/ath12k/ce.c
++++ b/drivers/net/wireless/ath/ath12k/ce.c
+@@ -343,11 +343,10 @@ static int ath12k_ce_completed_recv_next(struct ath12k_ce_pipe *pipe,
+ 		goto err;
+ 	}
+ 
++	/* Make sure descriptor is read after the head pointer. */
++	dma_rmb();
++
+ 	*nbytes = ath12k_hal_ce_dst_status_get_length(desc);
+-	if (*nbytes == 0) {
+-		ret = -EIO;
+-		goto err;
+-	}
+ 
+ 	*skb = pipe->dest_ring->skb[sw_index];
+ 	pipe->dest_ring->skb[sw_index] = NULL;
+@@ -380,8 +379,8 @@ static void ath12k_ce_recv_process_cb(struct ath12k_ce_pipe *pipe)
+ 		dma_unmap_single(ab->dev, ATH12K_SKB_RXCB(skb)->paddr,
+ 				 max_nbytes, DMA_FROM_DEVICE);
+ 
+-		if (unlikely(max_nbytes < nbytes)) {
+-			ath12k_warn(ab, "rxed more than expected (nbytes %d, max %d)",
++		if (unlikely(max_nbytes < nbytes || nbytes == 0)) {
++			ath12k_warn(ab, "unexpected rx length (nbytes %d, max %d)",
+ 				    nbytes, max_nbytes);
+ 			dev_kfree_skb_any(skb);
+ 			continue;
+diff --git a/drivers/net/wireless/ath/ath12k/ce.h b/drivers/net/wireless/ath/ath12k/ce.h
+index 1a14b9fb86b885..f85188af5de2f0 100644
+--- a/drivers/net/wireless/ath/ath12k/ce.h
++++ b/drivers/net/wireless/ath/ath12k/ce.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022, 2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2022, 2024-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH12K_CE_H
+@@ -39,8 +39,8 @@
+ #define PIPEDIR_INOUT_H2H	4 /* bidirectional, host to host */
+ 
+ /* CE address/mask */
+-#define CE_HOST_IE_ADDRESS	0x00A1803C
+-#define CE_HOST_IE_2_ADDRESS	0x00A18040
++#define CE_HOST_IE_ADDRESS	0x75804C
++#define CE_HOST_IE_2_ADDRESS	0x758050
+ #define CE_HOST_IE_3_ADDRESS	CE_HOST_IE_ADDRESS
+ 
+ #define CE_HOST_IE_3_SHIFT	0xC
+diff --git a/drivers/net/wireless/ath/ath12k/dp.c b/drivers/net/wireless/ath/ath12k/dp.c
+index 50c36e6ea10278..34e1bd2934ce3d 100644
+--- a/drivers/net/wireless/ath/ath12k/dp.c
++++ b/drivers/net/wireless/ath/ath12k/dp.c
+@@ -1261,22 +1261,24 @@ static void ath12k_dp_reoq_lut_cleanup(struct ath12k_base *ab)
+ 	if (!ab->hw_params->reoq_lut_support)
+ 		return;
+ 
+-	if (dp->reoq_lut.vaddr) {
++	if (dp->reoq_lut.vaddr_unaligned) {
+ 		ath12k_hif_write32(ab,
+ 				   HAL_SEQ_WCSS_UMAC_REO_REG +
+ 				   HAL_REO1_QDESC_LUT_BASE0(ab), 0);
+-		dma_free_coherent(ab->dev, DP_REOQ_LUT_SIZE,
+-				  dp->reoq_lut.vaddr, dp->reoq_lut.paddr);
+-		dp->reoq_lut.vaddr = NULL;
++		dma_free_coherent(ab->dev, dp->reoq_lut.size,
++				  dp->reoq_lut.vaddr_unaligned,
++				  dp->reoq_lut.paddr_unaligned);
++		dp->reoq_lut.vaddr_unaligned = NULL;
+ 	}
+ 
+-	if (dp->ml_reoq_lut.vaddr) {
++	if (dp->ml_reoq_lut.vaddr_unaligned) {
+ 		ath12k_hif_write32(ab,
+ 				   HAL_SEQ_WCSS_UMAC_REO_REG +
+ 				   HAL_REO1_QDESC_LUT_BASE1(ab), 0);
+-		dma_free_coherent(ab->dev, DP_REOQ_LUT_SIZE,
+-				  dp->ml_reoq_lut.vaddr, dp->ml_reoq_lut.paddr);
+-		dp->ml_reoq_lut.vaddr = NULL;
++		dma_free_coherent(ab->dev, dp->ml_reoq_lut.size,
++				  dp->ml_reoq_lut.vaddr_unaligned,
++				  dp->ml_reoq_lut.paddr_unaligned);
++		dp->ml_reoq_lut.vaddr_unaligned = NULL;
+ 	}
+ }
+ 
+@@ -1608,39 +1610,66 @@ static int ath12k_dp_cc_init(struct ath12k_base *ab)
+ 	return ret;
+ }
+ 
++static int ath12k_dp_alloc_reoq_lut(struct ath12k_base *ab,
++				    struct ath12k_reo_q_addr_lut *lut)
++{
++	lut->size = DP_REOQ_LUT_SIZE + HAL_REO_QLUT_ADDR_ALIGN - 1;
++	lut->vaddr_unaligned = dma_alloc_coherent(ab->dev, lut->size,
++						  &lut->paddr_unaligned,
++						  GFP_KERNEL | __GFP_ZERO);
++	if (!lut->vaddr_unaligned)
++		return -ENOMEM;
++
++	lut->vaddr = PTR_ALIGN(lut->vaddr_unaligned, HAL_REO_QLUT_ADDR_ALIGN);
++	lut->paddr = lut->paddr_unaligned +
++		     ((unsigned long)lut->vaddr - (unsigned long)lut->vaddr_unaligned);
++	return 0;
++}
++
+ static int ath12k_dp_reoq_lut_setup(struct ath12k_base *ab)
+ {
+ 	struct ath12k_dp *dp = &ab->dp;
++	u32 val;
++	int ret;
+ 
+ 	if (!ab->hw_params->reoq_lut_support)
+ 		return 0;
+ 
+-	dp->reoq_lut.vaddr = dma_alloc_coherent(ab->dev,
+-						DP_REOQ_LUT_SIZE,
+-						&dp->reoq_lut.paddr,
+-						GFP_KERNEL | __GFP_ZERO);
+-	if (!dp->reoq_lut.vaddr) {
++	ret = ath12k_dp_alloc_reoq_lut(ab, &dp->reoq_lut);
++	if (ret) {
+ 		ath12k_warn(ab, "failed to allocate memory for reoq table");
+-		return -ENOMEM;
++		return ret;
+ 	}
+ 
+-	dp->ml_reoq_lut.vaddr = dma_alloc_coherent(ab->dev,
+-						   DP_REOQ_LUT_SIZE,
+-						   &dp->ml_reoq_lut.paddr,
+-						   GFP_KERNEL | __GFP_ZERO);
+-	if (!dp->ml_reoq_lut.vaddr) {
++	ret = ath12k_dp_alloc_reoq_lut(ab, &dp->ml_reoq_lut);
++	if (ret) {
+ 		ath12k_warn(ab, "failed to allocate memory for ML reoq table");
+-		dma_free_coherent(ab->dev, DP_REOQ_LUT_SIZE,
+-				  dp->reoq_lut.vaddr, dp->reoq_lut.paddr);
+-		dp->reoq_lut.vaddr = NULL;
+-		return -ENOMEM;
++		dma_free_coherent(ab->dev, dp->reoq_lut.size,
++				  dp->reoq_lut.vaddr_unaligned,
++				  dp->reoq_lut.paddr_unaligned);
++		dp->reoq_lut.vaddr_unaligned = NULL;
++		return ret;
+ 	}
+ 
++	/* The register holds bits [39:8] of the LUT base address, so the
++	 * buffer must be 256-byte aligned with the low eight bits zero.
++	 * The current design supports a paddr of at most 4 GB, so the
++	 * shifted value fits in the 32-bit register.
++	 */
++
+ 	ath12k_hif_write32(ab, HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO1_QDESC_LUT_BASE0(ab),
+-			   dp->reoq_lut.paddr);
++			   dp->reoq_lut.paddr >> 8);
++
+ 	ath12k_hif_write32(ab, HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO1_QDESC_LUT_BASE1(ab),
+ 			   dp->ml_reoq_lut.paddr >> 8);
+ 
++	val = ath12k_hif_read32(ab, HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO1_QDESC_ADDR(ab));
++
++	ath12k_hif_write32(ab, HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO1_QDESC_ADDR(ab),
++			   val | HAL_REO_QDESC_ADDR_READ_LUT_ENABLE);
++
++	ath12k_hif_write32(ab, HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO1_QDESC_MAX_PEERID(ab),
++			   HAL_REO_QDESC_MAX_PEERID);
++
+ 	return 0;
+ }
+ 
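
Rather than rely on the allocator's alignment guarantees, the new
ath12k_dp_alloc_reoq_lut() over-allocates by HAL_REO_QLUT_ADDR_ALIGN - 1
bytes, rounds the virtual address up with PTR_ALIGN(), and advances the
physical address by the same byte offset; the aligned paddr is then
programmed as bits [39:8], hence the new >> 8 on the BASE0 write. A
standalone sketch of the arithmetic — malloc() stands in for
dma_alloc_coherent(), whose vaddr and paddr share their low-order bits,
so the toy simply reuses the virtual address as the "DMA" address:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define ALIGN_TO 256	/* HAL_REO_QLUT_ADDR_ALIGN */

int main(void)
{
	size_t size = 4096 + ALIGN_TO - 1;	/* over-allocate */
	uint8_t *unaligned = malloc(size);
	uintptr_t v = (uintptr_t)unaligned;
	uint64_t paddr_unaligned = (uint64_t)v;

	/* PTR_ALIGN() equivalent: round up to the next 256B boundary */
	uintptr_t vaddr = (v + ALIGN_TO - 1) & ~(uintptr_t)(ALIGN_TO - 1);
	uint64_t paddr = paddr_unaligned + (vaddr - v);

	/* HAL_REO1_QDESC_LUT_BASE* takes address bits [39:8]; with a
	 * 256-byte-aligned paddr nothing is lost in the >> 8 shift.
	 */
	printf("aligned paddr 0x%" PRIx64 ", reg 0x%" PRIx64 "\n",
	       paddr, paddr >> 8);
	free(unaligned);
	return 0;
}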
+diff --git a/drivers/net/wireless/ath/ath12k/dp.h b/drivers/net/wireless/ath/ath12k/dp.h
+index 427a87b63dec3b..e8dbba0c3bb7d4 100644
+--- a/drivers/net/wireless/ath/ath12k/dp.h
++++ b/drivers/net/wireless/ath/ath12k/dp.h
+@@ -311,8 +311,11 @@ struct ath12k_reo_queue_ref {
+ } __packed;
+ 
+ struct ath12k_reo_q_addr_lut {
+-	dma_addr_t paddr;
++	u32 *vaddr_unaligned;
+ 	u32 *vaddr;
++	dma_addr_t paddr_unaligned;
++	dma_addr_t paddr;
++	u32 size;
+ };
+ 
+ struct ath12k_dp {
+diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c
+index 600d97169f241a..826c9723a7a68d 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_mon.c
++++ b/drivers/net/wireless/ath/ath12k/dp_mon.c
+@@ -2097,6 +2097,8 @@ static void ath12k_dp_mon_rx_deliver_msdu(struct ath12k *ar, struct napi_struct
+ 	bool is_mcbc = rxcb->is_mcbc;
+ 	bool is_eapol_tkip = rxcb->is_eapol;
+ 
++	status->link_valid = 0;
++
+ 	if ((status->encoding == RX_ENC_HE) && !(status->flag & RX_FLAG_RADIOTAP_HE) &&
+ 	    !(status->flag & RX_FLAG_SKIP_MONITOR)) {
+ 		he = skb_push(msdu, sizeof(known));
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
+index 7fadd366ec13de..cc5a23a46ea151 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
+@@ -2259,6 +2259,11 @@ static void ath12k_dp_rx_h_mpdu(struct ath12k *ar,
+ 	spin_lock_bh(&ar->ab->base_lock);
+ 	peer = ath12k_dp_rx_h_find_peer(ar->ab, msdu, rx_info);
+ 	if (peer) {
++		/* Clear the mcbc bit when the peer reaches us only via a
++		 * unicast RA: in AP mode, a STA's to-DS frames address the
++		 * AP even when the DA is multicast/broadcast.
++		 */
++		rxcb->is_mcbc = rxcb->is_mcbc && !peer->ucast_ra_only;
++
+ 		if (rxcb->is_mcbc)
+ 			enctype = peer->sec_type_grp;
+ 		else
+@@ -3250,8 +3255,14 @@ static int ath12k_dp_rx_h_defrag_reo_reinject(struct ath12k *ar,
+ 	reo_ent_ring->rx_mpdu_info.peer_meta_data =
+ 		reo_dest_ring->rx_mpdu_info.peer_meta_data;
+ 
+-	reo_ent_ring->queue_addr_lo = cpu_to_le32(lower_32_bits(rx_tid->paddr));
+-	queue_addr_hi = upper_32_bits(rx_tid->paddr);
++	if (ab->hw_params->reoq_lut_support) {
++		reo_ent_ring->queue_addr_lo = reo_dest_ring->rx_mpdu_info.peer_meta_data;
++		queue_addr_hi = 0;
++	} else {
++		reo_ent_ring->queue_addr_lo = cpu_to_le32(lower_32_bits(rx_tid->paddr));
++		queue_addr_hi = upper_32_bits(rx_tid->paddr);
++	}
++
+ 	reo_ent_ring->info0 = le32_encode_bits(queue_addr_hi,
+ 					       HAL_REO_ENTR_RING_INFO0_QUEUE_ADDR_HI) |
+ 			      le32_encode_bits(dst_ind,
+diff --git a/drivers/net/wireless/ath/ath12k/hal.c b/drivers/net/wireless/ath/ath12k/hal.c
+index d00869a33fea06..faf74b54594109 100644
+--- a/drivers/net/wireless/ath/ath12k/hal.c
++++ b/drivers/net/wireless/ath/ath12k/hal.c
+@@ -449,8 +449,8 @@ static u8 *ath12k_hw_qcn9274_rx_desc_mpdu_start_addr2(struct hal_rx_desc *desc)
+ 
+ static bool ath12k_hw_qcn9274_rx_desc_is_da_mcbc(struct hal_rx_desc *desc)
+ {
+-	return __le32_to_cpu(desc->u.qcn9274.mpdu_start.info6) &
+-	       RX_MPDU_START_INFO6_MCAST_BCAST;
++	return __le16_to_cpu(desc->u.qcn9274.msdu_end.info5) &
++	       RX_MSDU_END_INFO5_DA_IS_MCBC;
+ }
+ 
+ static void ath12k_hw_qcn9274_rx_desc_get_dot11_hdr(struct hal_rx_desc *desc,
+@@ -902,8 +902,8 @@ static u8 *ath12k_hw_qcn9274_compact_rx_desc_mpdu_start_addr2(struct hal_rx_desc
+ 
+ static bool ath12k_hw_qcn9274_compact_rx_desc_is_da_mcbc(struct hal_rx_desc *desc)
+ {
+-	return __le32_to_cpu(desc->u.qcn9274_compact.mpdu_start.info6) &
+-	       RX_MPDU_START_INFO6_MCAST_BCAST;
++	return __le16_to_cpu(desc->u.qcn9274_compact.msdu_end.info5) &
++	       RX_MSDU_END_INFO5_DA_IS_MCBC;
+ }
+ 
+ static void ath12k_hw_qcn9274_compact_rx_desc_get_dot11_hdr(struct hal_rx_desc *desc,
+@@ -1943,7 +1943,7 @@ u32 ath12k_hal_ce_dst_status_get_length(struct hal_ce_srng_dst_status_desc *desc
+ {
+ 	u32 len;
+ 
+-	len = le32_get_bits(desc->flags, HAL_CE_DST_STATUS_DESC_FLAGS_LEN);
++	len = le32_get_bits(READ_ONCE(desc->flags), HAL_CE_DST_STATUS_DESC_FLAGS_LEN);
+ 	desc->flags &= ~cpu_to_le32(HAL_CE_DST_STATUS_DESC_FLAGS_LEN);
+ 
+ 	return len;
+@@ -2113,7 +2113,7 @@ void ath12k_hal_srng_access_begin(struct ath12k_base *ab, struct hal_srng *srng)
+ 		srng->u.src_ring.cached_tp =
+ 			*(volatile u32 *)srng->u.src_ring.tp_addr;
+ 	else
+-		srng->u.dst_ring.cached_hp = *srng->u.dst_ring.hp_addr;
++		srng->u.dst_ring.cached_hp = READ_ONCE(*srng->u.dst_ring.hp_addr);
+ }
+ 
+ /* Update cached ring head/tail pointers to HW. ath12k_hal_srng_access_begin()
+diff --git a/drivers/net/wireless/ath/ath12k/hal.h b/drivers/net/wireless/ath/ath12k/hal.h
+index c8205672cd3dd5..cb8530dfdd911d 100644
+--- a/drivers/net/wireless/ath/ath12k/hal.h
++++ b/drivers/net/wireless/ath/ath12k/hal.h
+@@ -21,6 +21,7 @@ struct ath12k_base;
+ #define HAL_MAX_AVAIL_BLK_RES			3
+ 
+ #define HAL_RING_BASE_ALIGN	8
++#define HAL_REO_QLUT_ADDR_ALIGN 256
+ 
+ #define HAL_WBM_IDLE_SCATTER_BUF_SIZE_MAX	32704
+ /* TODO: Check with hw team on the supported scatter buf size */
+@@ -39,6 +40,7 @@ struct ath12k_base;
+ #define HAL_OFFSET_FROM_HP_TO_TP		4
+ 
+ #define HAL_SHADOW_REG(x) (HAL_SHADOW_BASE_ADDR + (4 * (x)))
++#define HAL_REO_QDESC_MAX_PEERID		8191
+ 
+ /* WCSS Relative address */
+ #define HAL_SEQ_WCSS_UMAC_OFFSET		0x00a00000
+@@ -139,6 +141,8 @@ struct ath12k_base;
+ #define HAL_REO1_DEST_RING_CTRL_IX_1		0x00000008
+ #define HAL_REO1_DEST_RING_CTRL_IX_2		0x0000000c
+ #define HAL_REO1_DEST_RING_CTRL_IX_3		0x00000010
++#define HAL_REO1_QDESC_ADDR(ab)		((ab)->hw_params->regs->hal_reo1_qdesc_addr)
++#define HAL_REO1_QDESC_MAX_PEERID(ab)	((ab)->hw_params->regs->hal_reo1_qdesc_max_peerid)
+ #define HAL_REO1_SW_COOKIE_CFG0(ab)	((ab)->hw_params->regs->hal_reo1_sw_cookie_cfg0)
+ #define HAL_REO1_SW_COOKIE_CFG1(ab)	((ab)->hw_params->regs->hal_reo1_sw_cookie_cfg1)
+ #define HAL_REO1_QDESC_LUT_BASE0(ab)	((ab)->hw_params->regs->hal_reo1_qdesc_lut_base0)
+@@ -326,6 +330,8 @@ struct ath12k_base;
+ #define HAL_REO1_SW_COOKIE_CFG_ALIGN			BIT(18)
+ #define HAL_REO1_SW_COOKIE_CFG_ENABLE			BIT(19)
+ #define HAL_REO1_SW_COOKIE_CFG_GLOBAL_ENABLE		BIT(20)
++#define HAL_REO_QDESC_ADDR_READ_LUT_ENABLE		BIT(7)
++#define HAL_REO_QDESC_ADDR_READ_CLEAR_QDESC_ARRAY	BIT(6)
+ 
+ /* CE ring bit field mask and shift */
+ #define HAL_CE_DST_R0_DEST_CTRL_MAX_LEN			GENMASK(15, 0)
+diff --git a/drivers/net/wireless/ath/ath12k/hal_desc.h b/drivers/net/wireless/ath/ath12k/hal_desc.h
+index 63d279fab32249..f8a51aa0217a8e 100644
+--- a/drivers/net/wireless/ath/ath12k/hal_desc.h
++++ b/drivers/net/wireless/ath/ath12k/hal_desc.h
+@@ -707,7 +707,7 @@ enum hal_rx_msdu_desc_reo_dest_ind {
+ #define RX_MSDU_DESC_INFO0_DECAP_FORMAT		GENMASK(30, 29)
+ 
+ #define HAL_RX_MSDU_PKT_LENGTH_GET(val)		\
+-	(u32_get_bits((val), RX_MSDU_DESC_INFO0_MSDU_LENGTH))
++	(le32_get_bits((val), RX_MSDU_DESC_INFO0_MSDU_LENGTH))
+ 
+ struct rx_msdu_desc {
+ 	__le32 info0;
+diff --git a/drivers/net/wireless/ath/ath12k/hw.c b/drivers/net/wireless/ath/ath12k/hw.c
+index 1bfb11bae7add3..a5fa3b6a831ae4 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.c
++++ b/drivers/net/wireless/ath/ath12k/hw.c
+@@ -748,6 +748,8 @@ static const struct ath12k_hw_regs qcn9274_v2_regs = {
+ 	.hal_reo1_sw_cookie_cfg1 = 0x00000070,
+ 	.hal_reo1_qdesc_lut_base0 = 0x00000074,
+ 	.hal_reo1_qdesc_lut_base1 = 0x00000078,
++	.hal_reo1_qdesc_addr = 0x0000007c,
++	.hal_reo1_qdesc_max_peerid = 0x00000088,
+ 	.hal_reo1_ring_base_lsb = 0x00000500,
+ 	.hal_reo1_ring_base_msb = 0x00000504,
+ 	.hal_reo1_ring_id = 0x00000508,
+diff --git a/drivers/net/wireless/ath/ath12k/hw.h b/drivers/net/wireless/ath/ath12k/hw.h
+index 862b11325a9021..e1ad03daebcd48 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.h
++++ b/drivers/net/wireless/ath/ath12k/hw.h
+@@ -299,6 +299,9 @@ struct ath12k_hw_regs {
+ 
+ 	u32 hal_tcl_status_ring_base_lsb;
+ 
++	u32 hal_reo1_qdesc_addr;
++	u32 hal_reo1_qdesc_max_peerid;
++
+ 	u32 hal_wbm_idle_ring_base_lsb;
+ 	u32 hal_wbm_idle_ring_misc_addr;
+ 	u32 hal_wbm_r0_idle_list_cntl_addr;
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index 331bcf5e6c4cce..d1d3c9f34372da 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -3451,7 +3451,10 @@ static void ath12k_recalculate_mgmt_rate(struct ath12k *ar,
+ 	}
+ 
+ 	sband = hw->wiphy->bands[def->chan->band];
+-	basic_rate_idx = ffs(bss_conf->basic_rates) - 1;
++	if (bss_conf->basic_rates)
++		basic_rate_idx = __ffs(bss_conf->basic_rates);
++	else
++		basic_rate_idx = 0;
+ 	bitrate = sband->bitrates[basic_rate_idx].bitrate;
+ 
+ 	hw_rate_code = ath12k_mac_get_rate_hw_value(bitrate);
+@@ -3703,6 +3706,8 @@ static void ath12k_mac_op_vif_cfg_changed(struct ieee80211_hw *hw,
+ 	unsigned long links = ahvif->links_map;
+ 	struct ieee80211_bss_conf *info;
+ 	struct ath12k_link_vif *arvif;
++	struct ieee80211_sta *sta;
++	struct ath12k_sta *ahsta;
+ 	struct ath12k *ar;
+ 	u8 link_id;
+ 
+@@ -3715,6 +3720,35 @@ static void ath12k_mac_op_vif_cfg_changed(struct ieee80211_hw *hw,
+ 	}
+ 
+ 	if (changed & BSS_CHANGED_ASSOC) {
++		if (vif->cfg.assoc) {
++			/* we can only get here in station mode, so it is
++			 * safe to use ap_addr
++			 */
++			rcu_read_lock();
++			sta = ieee80211_find_sta(vif, vif->cfg.ap_addr);
++			if (!sta) {
++				rcu_read_unlock();
++				WARN_ONCE(1, "failed to find sta with addr %pM\n",
++					  vif->cfg.ap_addr);
++				return;
++			}
++
++			ahsta = ath12k_sta_to_ahsta(sta);
++			arvif = wiphy_dereference(hw->wiphy,
++						  ahvif->link[ahsta->assoc_link_id]);
++			rcu_read_unlock();
++
++			ar = arvif->ar;
++			/* the assoc link's bss info is guaranteed to exist
++			 * at this point
++			 */
++			info = ath12k_mac_get_link_bss_conf(arvif);
++			ath12k_bss_assoc(ar, arvif, info);
++
++			/* exclude assoc link as it is done above */
++			links &= ~BIT(ahsta->assoc_link_id);
++		}
++
+ 		for_each_set_bit(link_id, &links, IEEE80211_MLD_MAX_NUM_LINKS) {
+ 			arvif = wiphy_dereference(hw->wiphy, ahvif->link[link_id]);
+ 			if (!arvif || !arvif->ar)
+@@ -3984,10 +4018,14 @@ static void ath12k_mac_bss_info_changed(struct ath12k *ar,
+ 		band = def.chan->band;
+ 		mcast_rate = info->mcast_rate[band];
+ 
+-		if (mcast_rate > 0)
++		if (mcast_rate > 0) {
+ 			rateidx = mcast_rate - 1;
+-		else
+-			rateidx = ffs(info->basic_rates) - 1;
++		} else {
++			if (info->basic_rates)
++				rateidx = __ffs(info->basic_rates);
++			else
++				rateidx = 0;
++		}
+ 
+ 		if (ar->pdev->cap.supported_bands & WMI_HOST_WLAN_5GHZ_CAP)
+ 			rateidx += ATH12K_MAC_FIRST_OFDM_RATE_IDX;
+@@ -5522,10 +5560,13 @@ static int ath12k_mac_station_add(struct ath12k *ar,
+ 			    ar->max_num_stations);
+ 		goto exit;
+ 	}
+-	arsta->rx_stats = kzalloc(sizeof(*arsta->rx_stats), GFP_KERNEL);
++
+ 	if (!arsta->rx_stats) {
+-		ret = -ENOMEM;
+-		goto dec_num_station;
++		arsta->rx_stats = kzalloc(sizeof(*arsta->rx_stats), GFP_KERNEL);
++		if (!arsta->rx_stats) {
++			ret = -ENOMEM;
++			goto dec_num_station;
++		}
+ 	}
+ 
+ 	peer_param.vdev_id = arvif->vdev_id;
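
Two of the mac.c hunks close the same hole: ffs() returns 0 when its
argument is 0, so the old "ffs(basic_rates) - 1" indexed the bitrate
table at -1 whenever no basic rate was set. The new code only calls
__ffs() (whose result is undefined for 0) behind an explicit zero check
and falls back to index 0. A standalone sketch using the equivalent
compiler builtin:

#include <stdio.h>
#include <strings.h>	/* ffs() */

static int rate_index(unsigned int basic_rates)
{
	if (basic_rates)
		return __builtin_ctz(basic_rates);	/* __ffs() equivalent */
	return 0;					/* safe fallback */
}

int main(void)
{
	printf("old: %d (out of bounds)\n", ffs(0) - 1);	/* -1 */
	printf("new: %d\n", rate_index(0));			/* 0 */
	printf("new: %d\n", rate_index(0x10));			/* 4 */
	return 0;
}

The station-add hunk is separate: it keeps an already-allocated
rx_stats across re-association instead of leaking it and allocating a
fresh one.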
+diff --git a/drivers/net/wireless/ath/ath12k/pci.c b/drivers/net/wireless/ath/ath12k/pci.c
+index 2e7d302ace679d..9289910887217f 100644
+--- a/drivers/net/wireless/ath/ath12k/pci.c
++++ b/drivers/net/wireless/ath/ath12k/pci.c
+@@ -1491,6 +1491,9 @@ void ath12k_pci_power_down(struct ath12k_base *ab, bool is_suspend)
+ {
+ 	struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
+ 
++	if (!test_bit(ATH12K_PCI_FLAG_INIT_DONE, &ab_pci->flags))
++		return;
++
+ 	/* restore aspm in case firmware bootup fails */
+ 	ath12k_pci_aspm_restore(ab_pci);
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/peer.c b/drivers/net/wireless/ath/ath12k/peer.c
+index 792cca8a3fb1b0..ec7236bbccc0fe 100644
+--- a/drivers/net/wireless/ath/ath12k/peer.c
++++ b/drivers/net/wireless/ath/ath12k/peer.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022, 2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2022, 2024-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include "core.h"
+@@ -383,6 +383,9 @@ int ath12k_peer_create(struct ath12k *ar, struct ath12k_link_vif *arvif,
+ 		arvif->ast_idx = peer->hw_peer_id;
+ 	}
+ 
++	if (vif->type == NL80211_IFTYPE_AP)
++		peer->ucast_ra_only = true;
++
+ 	if (sta) {
+ 		ahsta = ath12k_sta_to_ahsta(sta);
+ 		arsta = wiphy_dereference(ath12k_ar_to_hw(ar)->wiphy,
+diff --git a/drivers/net/wireless/ath/ath12k/peer.h b/drivers/net/wireless/ath/ath12k/peer.h
+index 5870ee11a8c7ec..f3a5e054d2b556 100644
+--- a/drivers/net/wireless/ath/ath12k/peer.h
++++ b/drivers/net/wireless/ath/ath12k/peer.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH12K_PEER_H
+@@ -62,6 +62,7 @@ struct ath12k_peer {
+ 
+ 	/* for reference to ath12k_link_sta */
+ 	u8 link_id;
++	bool ucast_ra_only;
+ };
+ 
+ struct ath12k_ml_peer {
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index fe50c3d3cb8201..a44fc9106634b6 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -1037,14 +1037,24 @@ int ath12k_wmi_vdev_down(struct ath12k *ar, u8 vdev_id)
+ static void ath12k_wmi_put_wmi_channel(struct ath12k_wmi_channel_params *chan,
+ 				       struct wmi_vdev_start_req_arg *arg)
+ {
++	u32 center_freq1 = arg->band_center_freq1;
++
+ 	memset(chan, 0, sizeof(*chan));
+ 
+ 	chan->mhz = cpu_to_le32(arg->freq);
+-	chan->band_center_freq1 = cpu_to_le32(arg->band_center_freq1);
+-	if (arg->mode == MODE_11AC_VHT80_80)
++	chan->band_center_freq1 = cpu_to_le32(center_freq1);
++	if (arg->mode == MODE_11BE_EHT160) {
++		if (arg->freq > center_freq1)
++			chan->band_center_freq1 = cpu_to_le32(center_freq1 + 40);
++		else
++			chan->band_center_freq1 = cpu_to_le32(center_freq1 - 40);
++
++		chan->band_center_freq2 = cpu_to_le32(center_freq1);
++	} else if (arg->mode == MODE_11BE_EHT80_80) {
+ 		chan->band_center_freq2 = cpu_to_le32(arg->band_center_freq2);
+-	else
++	} else {
+ 		chan->band_center_freq2 = 0;
++	}
+ 
+ 	chan->info |= le32_encode_bits(arg->mode, WMI_CHAN_INFO_MODE);
+ 	if (arg->passive)
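
For MODE_11BE_EHT160 the hunk above re-encodes the channel: freq2 carries the 160 MHz block center, while freq1 is shifted by +/-40 MHz toward the primary channel, which lands on the center of the 80 MHz half containing the primary (an inference from the offsets, not stated in the patch). A small worked example with illustrative channel numbers:

#include <stdio.h>

int main(void)
{
	/* 5 GHz channel 36 (5180 MHz) in a 160 MHz block centered at
	 * 5250 MHz: the primary sits in the lower 80 MHz half, whose
	 * center is 5250 - 40 = 5210 MHz. */
	unsigned int freq = 5180, center_freq1 = 5250;
	unsigned int freq1, freq2;

	if (freq > center_freq1)
		freq1 = center_freq1 + 40;
	else
		freq1 = center_freq1 - 40;
	freq2 = center_freq1;

	printf("band_center_freq1 = %u MHz\n", freq1);	/* 5210 */
	printf("band_center_freq2 = %u MHz\n", freq2);	/* 5250 */
	return 0;
}
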
+@@ -3665,7 +3675,8 @@ ath12k_fill_band_to_mac_param(struct ath12k_base  *soc,
+ }
+ 
+ static void
+-ath12k_wmi_copy_resource_config(struct ath12k_wmi_resource_config_params *wmi_cfg,
++ath12k_wmi_copy_resource_config(struct ath12k_base *ab,
++				struct ath12k_wmi_resource_config_params *wmi_cfg,
+ 				struct ath12k_wmi_resource_config_arg *tg_cfg)
+ {
+ 	wmi_cfg->num_vdevs = cpu_to_le32(tg_cfg->num_vdevs);
+@@ -3732,6 +3743,9 @@ ath12k_wmi_copy_resource_config(struct ath12k_wmi_resource_config_params *wmi_cf
+ 					   WMI_RSRC_CFG_FLAGS2_RX_PEER_METADATA_VERSION);
+ 	wmi_cfg->host_service_flags = cpu_to_le32(tg_cfg->is_reg_cc_ext_event_supported <<
+ 				WMI_RSRC_CFG_HOST_SVC_FLAG_REG_CC_EXT_SUPPORT_BIT);
++	if (ab->hw_params->reoq_lut_support)
++		wmi_cfg->host_service_flags |=
++			cpu_to_le32(1 << WMI_RSRC_CFG_HOST_SVC_FLAG_REO_QREF_SUPPORT_BIT);
+ 	wmi_cfg->ema_max_vap_cnt = cpu_to_le32(tg_cfg->ema_max_vap_cnt);
+ 	wmi_cfg->ema_max_profile_period = cpu_to_le32(tg_cfg->ema_max_profile_period);
+ 	wmi_cfg->flags2 |= cpu_to_le32(WMI_RSRC_CFG_FLAGS2_CALC_NEXT_DTIM_COUNT_SET);
+@@ -3772,7 +3786,7 @@ static int ath12k_init_cmd_send(struct ath12k_wmi_pdev *wmi,
+ 	ptr = skb->data + sizeof(*cmd);
+ 	cfg = ptr;
+ 
+-	ath12k_wmi_copy_resource_config(cfg, &arg->res_cfg);
++	ath12k_wmi_copy_resource_config(ab, cfg, &arg->res_cfg);
+ 
+ 	cfg->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_RESOURCE_CONFIG,
+ 						 sizeof(*cfg));
+@@ -6019,7 +6033,7 @@ static int ath12k_reg_chan_list_event(struct ath12k_base *ab, struct sk_buff *sk
+ 		goto fallback;
+ 	}
+ 
+-	spin_lock(&ab->base_lock);
++	spin_lock_bh(&ab->base_lock);
+ 	if (test_bit(ATH12K_FLAG_REGISTERED, &ab->dev_flags)) {
+ 		/* Once mac is registered, ar is valid and all CC events from
+ 		 * fw is considered to be received due to user requests
+@@ -6043,7 +6057,7 @@ static int ath12k_reg_chan_list_event(struct ath12k_base *ab, struct sk_buff *sk
+ 		ab->default_regd[pdev_idx] = regd;
+ 	}
+ 	ab->dfs_region = reg_info->dfs_region;
+-	spin_unlock(&ab->base_lock);
++	spin_unlock_bh(&ab->base_lock);
+ 
+ 	goto mem_free;
+ 
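
The spin_lock() to spin_lock_bh() switch in the reg_chan_list hunk follows the usual rule: a lock that is also taken from bottom-half (softirq) context must disable BHs when taken from process context, or a softirq firing on the same CPU can deadlock on the held lock. A minimal kernel-style sketch of the pattern; this is illustrative only, not code from this driver:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(base_lock);

static void process_context_path(void)
{
	spin_lock_bh(&base_lock);	/* BHs off: softirq can't preempt us */
	/* ... update shared regulatory state ... */
	spin_unlock_bh(&base_lock);
}

static void softirq_path(void)		/* e.g. an RX completion handler */
{
	spin_lock(&base_lock);		/* already running in BH context */
	/* ... read shared state ... */
	spin_unlock(&base_lock);
}
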
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.h b/drivers/net/wireless/ath/ath12k/wmi.h
+index be4ac91dd34f50..bd7312f3cf24aa 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.h
++++ b/drivers/net/wireless/ath/ath12k/wmi.h
+@@ -2461,6 +2461,7 @@ struct wmi_init_cmd {
+ } __packed;
+ 
+ #define WMI_RSRC_CFG_HOST_SVC_FLAG_REG_CC_EXT_SUPPORT_BIT 4
++#define WMI_RSRC_CFG_HOST_SVC_FLAG_REO_QREF_SUPPORT_BIT   12
+ #define WMI_RSRC_CFG_FLAGS2_RX_PEER_METADATA_VERSION		GENMASK(5, 4)
+ #define WMI_RSRC_CFG_FLAG1_BSS_CHANNEL_INFO_64	BIT(5)
+ #define WMI_RSRC_CFG_FLAGS2_CALC_NEXT_DTIM_COUNT_SET      BIT(9)
+diff --git a/drivers/net/wireless/ath/carl9170/usb.c b/drivers/net/wireless/ath/carl9170/usb.c
+index a3e03580cd9ff0..564ca6a619856b 100644
+--- a/drivers/net/wireless/ath/carl9170/usb.c
++++ b/drivers/net/wireless/ath/carl9170/usb.c
+@@ -438,14 +438,21 @@ static void carl9170_usb_rx_complete(struct urb *urb)
+ 
+ 		if (atomic_read(&ar->rx_anch_urbs) == 0) {
+ 			/*
+-			 * The system is too slow to cope with
+-			 * the enormous workload. We have simply
+-			 * run out of active rx urbs and this
+-			 * unfortunately leads to an unpredictable
+-			 * device.
++			 * At this point, either the system is too slow to
++			 * cope with the enormous workload (so we have simply
++			 * run out of active rx urbs and this unfortunately
++			 * leads to an unpredictable device), or the device
++			 * is not fully functional after an unsuccessful
++			 * firmware loading attempt (so it doesn't pass
++			 * ieee80211_register_hw() and there is no internal
++			 * workqueue at all).
+ 			 */
+ 
+-			ieee80211_queue_work(ar->hw, &ar->ping_work);
++			if (ar->registered)
++				ieee80211_queue_work(ar->hw, &ar->ping_work);
++			else
++				pr_warn_once("device %s is not registered\n",
++					     dev_name(&ar->udev->dev));
+ 		}
+ 	} else {
+ 		/*
+diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
+index 130b9a8aa7ebe1..67ee3b6e6d85c0 100644
+--- a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
++++ b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
+@@ -44,6 +44,8 @@
+ 	IWL_QU_C_HR_B_FW_PRE "-" __stringify(api) ".ucode"
+ #define IWL_QU_B_JF_B_MODULE_FIRMWARE(api) \
+ 	IWL_QU_B_JF_B_FW_PRE "-" __stringify(api) ".ucode"
++#define IWL_QU_C_JF_B_MODULE_FIRMWARE(api) \
++	IWL_QU_C_JF_B_FW_PRE "-" __stringify(api) ".ucode"
+ #define IWL_CC_A_MODULE_FIRMWARE(api)			\
+ 	IWL_CC_A_FW_PRE "-" __stringify(api) ".ucode"
+ 
+@@ -422,6 +424,7 @@ const struct iwl_cfg iwl_cfg_quz_a0_hr_b0 = {
+ MODULE_FIRMWARE(IWL_QU_B_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL_QU_C_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL_QU_B_JF_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
++MODULE_FIRMWARE(IWL_QU_C_JF_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL_QUZ_A_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL_QUZ_A_JF_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL_CC_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/main.c b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
+index a27a72cc017a30..a7f9e244c0975e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/dvm/main.c
++++ b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
+@@ -1382,14 +1382,14 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans,
+ 
+ 	err = iwl_trans_start_hw(priv->trans);
+ 	if (err)
+-		goto out_free_hw;
++		goto out_leave_trans;
+ 
+ 	/* Read the EEPROM */
+ 	err = iwl_read_eeprom(priv->trans, &priv->eeprom_blob,
+ 			      &priv->eeprom_blob_size);
+ 	if (err) {
+ 		IWL_ERR(priv, "Unable to init EEPROM\n");
+-		goto out_free_hw;
++		goto out_leave_trans;
+ 	}
+ 
+ 	/* Reset chip to save power until we load uCode during "up". */
+@@ -1508,6 +1508,8 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans,
+ 	kfree(priv->eeprom_blob);
+ out_free_eeprom:
+ 	kfree(priv->nvm_data);
++out_leave_trans:
++	iwl_trans_op_mode_leave(priv->trans);
+ out_free_hw:
+ 	ieee80211_free_hw(priv->hw);
+ out:
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/d3.c b/drivers/net/wireless/intel/iwlwifi/mld/d3.c
+index ee99298eebf595..7ce01ad3608e18 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/d3.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/d3.c
+@@ -1757,7 +1757,7 @@ iwl_mld_send_proto_offload(struct iwl_mld *mld,
+ 
+ 		addrconf_addr_solict_mult(&wowlan_data->target_ipv6_addrs[i],
+ 					  &solicited_addr);
+-		for (j = 0; j < c; j++)
++		for (j = 0; j < n_nsc && j < c; j++)
+ 			if (ipv6_addr_cmp(&nsc[j].dest_ipv6_addr,
+ 					  &solicited_addr) == 0)
+ 				break;
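
The iwlwifi fix above clamps the scan to both the element count c and the capacity n_nsc of the nsc[] array, so a counter that overshoots the array can no longer walk past its end. The same defensive pattern in isolation, with hypothetical names:

#include <stdio.h>

#define NSC_CAPACITY 4	/* hypothetical size of the nsc[] array */

int main(void)
{
	int entries[NSC_CAPACITY] = { 1, 2, 3, 4 };
	int count = 6;	/* counter that overshot the capacity */
	int j;

	/* bound by capacity *and* count: never reads entries[4..5] */
	for (j = 0; j < NSC_CAPACITY && j < count; j++)
		printf("entry %d = %d\n", j, entries[j]);
	return 0;
}
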
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mld/mac80211.c
+index 68d97d3b8f0260..2d5233dc3e2423 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/mac80211.c
+@@ -2460,7 +2460,7 @@ iwl_mld_change_vif_links(struct ieee80211_hw *hw,
+ 		added |= BIT(0);
+ 
+ 	for (int i = 0; i < IEEE80211_MLD_MAX_NUM_LINKS; i++) {
+-		if (removed & BIT(i))
++		if (removed & BIT(i) && !WARN_ON(!old[i]))
+ 			iwl_mld_remove_link(mld, old[i]);
+ 	}
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/mld.c b/drivers/net/wireless/intel/iwlwifi/mld/mld.c
+index 7a098942dc8021..21f65442638dda 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/mld.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/mld.c
+@@ -475,8 +475,9 @@ iwl_op_mode_mld_stop(struct iwl_op_mode *op_mode)
+ 	iwl_mld_ptp_remove(mld);
+ 	iwl_mld_leds_exit(mld);
+ 
+-	wiphy_lock(mld->wiphy);
+ 	iwl_mld_thermal_exit(mld);
++
++	wiphy_lock(mld->wiphy);
+ 	iwl_mld_low_latency_stop(mld);
+ 	iwl_mld_deinit_time_sync(mld);
+ 	wiphy_unlock(mld->wiphy);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/thermal.c b/drivers/net/wireless/intel/iwlwifi/mld/thermal.c
+index 1909953a9be984..670ac43528006b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/thermal.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/thermal.c
+@@ -419,6 +419,8 @@ static void iwl_mld_cooling_device_unregister(struct iwl_mld *mld)
+ 
+ void iwl_mld_thermal_initialize(struct iwl_mld *mld)
+ {
++	lockdep_assert_not_held(&mld->wiphy->mtx);
++
+ 	wiphy_delayed_work_init(&mld->ct_kill_exit_wk, iwl_mld_exit_ctkill);
+ 
+ #ifdef CONFIG_THERMAL
+@@ -429,7 +431,9 @@ void iwl_mld_thermal_initialize(struct iwl_mld *mld)
+ 
+ void iwl_mld_thermal_exit(struct iwl_mld *mld)
+ {
++	wiphy_lock(mld->wiphy);
+ 	wiphy_delayed_work_cancel(mld->wiphy, &mld->ct_kill_exit_wk);
++	wiphy_unlock(mld->wiphy);
+ 
+ #ifdef CONFIG_THERMAL
+ 	iwl_mld_cooling_device_unregister(mld);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+index bec18d197f3102..83f1ed94ccab99 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2012-2014, 2018-2024 Intel Corporation
++ * Copyright (C) 2012-2014, 2018-2025 Intel Corporation
+  * Copyright (C) 2013-2014 Intel Mobile Communications GmbH
+  * Copyright (C) 2015-2017 Intel Deutschland GmbH
+  */
+@@ -941,7 +941,7 @@ u16 iwl_mvm_mac_ctxt_get_beacon_flags(const struct iwl_fw *fw, u8 rate_idx)
+ 	u16 flags = iwl_mvm_mac80211_idx_to_hwrate(fw, rate_idx);
+ 	bool is_new_rate = iwl_fw_lookup_cmd_ver(fw, BEACON_TEMPLATE_CMD, 0) > 10;
+ 
+-	if (rate_idx <= IWL_FIRST_CCK_RATE)
++	if (rate_idx <= IWL_LAST_CCK_RATE)
+ 		flags |= is_new_rate ? IWL_MAC_BEACON_CCK
+ 			  : IWL_MAC_BEACON_CCK_V1;
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index 102a6123bba0e4..4cc7a2e5746d2b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -2942,6 +2942,8 @@ static ssize_t iwl_dbgfs_rx_queue_read(struct file *file,
+ 	for (i = 0; i < trans->num_rx_queues && pos < bufsz; i++) {
+ 		struct iwl_rxq *rxq = &trans_pcie->rxq[i];
+ 
++		spin_lock_bh(&rxq->lock);
++
+ 		pos += scnprintf(buf + pos, bufsz - pos, "queue#: %2d\n",
+ 				 i);
+ 		pos += scnprintf(buf + pos, bufsz - pos, "\tread: %u\n",
+@@ -2962,6 +2964,7 @@ static ssize_t iwl_dbgfs_rx_queue_read(struct file *file,
+ 			pos += scnprintf(buf + pos, bufsz - pos,
+ 					 "\tclosed_rb_num: Not Allocated\n");
+ 		}
++		spin_unlock_bh(&rxq->lock);
+ 	}
+ 	ret = simple_read_from_buffer(user_buf, count, ppos, buf, pos);
+ 	kfree(buf);
+@@ -3662,8 +3665,11 @@ iwl_trans_pcie_dump_data(struct iwl_trans *trans, u32 dump_mask,
+ 		/* Dump RBs is supported only for pre-9000 devices (1 queue) */
+ 		struct iwl_rxq *rxq = &trans_pcie->rxq[0];
+ 		/* RBs */
++		spin_lock_bh(&rxq->lock);
+ 		num_rbs = iwl_get_closed_rb_stts(trans, rxq);
+ 		num_rbs = (num_rbs - rxq->read) & RX_QUEUE_MASK;
++		spin_unlock_bh(&rxq->lock);
++
+ 		len += num_rbs * (sizeof(*data) +
+ 				  sizeof(struct iwl_fw_error_dump_rb) +
+ 				  (PAGE_SIZE << trans_pcie->rx_page_order));
+diff --git a/drivers/net/wireless/intersil/p54/fwio.c b/drivers/net/wireless/intersil/p54/fwio.c
+index 772084a9bd8d7c..3baf8ab01e22b0 100644
+--- a/drivers/net/wireless/intersil/p54/fwio.c
++++ b/drivers/net/wireless/intersil/p54/fwio.c
+@@ -231,6 +231,7 @@ int p54_download_eeprom(struct p54_common *priv, void *buf,
+ 
+ 	mutex_lock(&priv->eeprom_mutex);
+ 	priv->eeprom = buf;
++	priv->eeprom_slice_size = len;
+ 	eeprom_hdr = skb_put(skb, eeprom_hdr_size + len);
+ 
+ 	if (priv->fw_var < 0x509) {
+@@ -253,6 +254,7 @@ int p54_download_eeprom(struct p54_common *priv, void *buf,
+ 		ret = -EBUSY;
+ 	}
+ 	priv->eeprom = NULL;
++	priv->eeprom_slice_size = 0;
+ 	mutex_unlock(&priv->eeprom_mutex);
+ 	return ret;
+ }
+diff --git a/drivers/net/wireless/intersil/p54/p54.h b/drivers/net/wireless/intersil/p54/p54.h
+index 522656de415987..aeb5e40cc5ef3f 100644
+--- a/drivers/net/wireless/intersil/p54/p54.h
++++ b/drivers/net/wireless/intersil/p54/p54.h
+@@ -258,6 +258,7 @@ struct p54_common {
+ 
+ 	/* eeprom handling */
+ 	void *eeprom;
++	size_t eeprom_slice_size;
+ 	struct completion eeprom_comp;
+ 	struct mutex eeprom_mutex;
+ };
+diff --git a/drivers/net/wireless/intersil/p54/txrx.c b/drivers/net/wireless/intersil/p54/txrx.c
+index 8414aa208655f6..2deb1bb54f24bd 100644
+--- a/drivers/net/wireless/intersil/p54/txrx.c
++++ b/drivers/net/wireless/intersil/p54/txrx.c
+@@ -496,14 +496,19 @@ static void p54_rx_eeprom_readback(struct p54_common *priv,
+ 		return ;
+ 
+ 	if (priv->fw_var >= 0x509) {
+-		memcpy(priv->eeprom, eeprom->v2.data,
+-		       le16_to_cpu(eeprom->v2.len));
++		if (le16_to_cpu(eeprom->v2.len) != priv->eeprom_slice_size)
++			return;
++
++		memcpy(priv->eeprom, eeprom->v2.data, priv->eeprom_slice_size);
+ 	} else {
+-		memcpy(priv->eeprom, eeprom->v1.data,
+-		       le16_to_cpu(eeprom->v1.len));
++		if (le16_to_cpu(eeprom->v1.len) != priv->eeprom_slice_size)
++			return;
++
++		memcpy(priv->eeprom, eeprom->v1.data, priv->eeprom_slice_size);
+ 	}
+ 
+ 	priv->eeprom = NULL;
++	priv->eeprom_slice_size = 0;
+ 	tmp = p54_find_and_unlink_skb(priv, hdr->req_id);
+ 	dev_kfree_skb_any(tmp);
+ 	complete(&priv->eeprom_comp);
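
The p54 changes above record how large the requested EEPROM slice is and refuse the readback when the device-reported length differs, closing a heap overflow where a malicious or confused device could claim a larger payload than the host buffer. A userspace sketch of the same bounds check; the function and names are hypothetical:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Never trust a device-supplied length: copy only when it matches the
 * size the host actually requested. */
static int copy_eeprom_chunk(uint8_t *dst, size_t expected,
			     const uint8_t *src, uint16_t dev_len)
{
	if (dev_len != expected)	/* device lied or response garbled */
		return -1;
	memcpy(dst, src, expected);
	return 0;
}

int main(void)
{
	uint8_t buf[32], payload[64] = { 0 };

	/* a response claiming 64 bytes is rejected instead of
	 * overflowing the 32-byte slice the host asked for */
	printf("%d\n", copy_eeprom_chunk(buf, sizeof(buf), payload, 64));
	printf("%d\n", copy_eeprom_chunk(buf, sizeof(buf), payload, 32));
	return 0;
}
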
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c b/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
+index 84ef80ab4afbfa..96cecc576a9867 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
+@@ -17,6 +17,8 @@ static const struct usb_device_id mt76x2u_device_table[] = {
+ 	{ USB_DEVICE(0x057c, 0x8503) },	/* Avm FRITZ!WLAN AC860 */
+ 	{ USB_DEVICE(0x7392, 0xb711) },	/* Edimax EW 7722 UAC */
+ 	{ USB_DEVICE(0x0e8d, 0x7632) },	/* HC-M7662BU1 */
++	{ USB_DEVICE(0x0471, 0x2126) }, /* LiteOn WN4516R module, nonstandard USB connector */
++	{ USB_DEVICE(0x0471, 0x7600) }, /* LiteOn WN4519R module, nonstandard USB connector */
+ 	{ USB_DEVICE(0x2c4e, 0x0103) },	/* Mercury UD13 */
+ 	{ USB_DEVICE(0x0846, 0x9014) },	/* Netgear WNDA3100v3 */
+ 	{ USB_DEVICE(0x0846, 0x9053) },	/* Netgear A6210 */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/usb_init.c b/drivers/net/wireless/mediatek/mt76/mt76x2/usb_init.c
+index 33a14365ec9b98..3b556281151158 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2/usb_init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2/usb_init.c
+@@ -191,6 +191,7 @@ int mt76x2u_register_device(struct mt76x02_dev *dev)
+ {
+ 	struct ieee80211_hw *hw = mt76_hw(dev);
+ 	struct mt76_usb *usb = &dev->mt76.usb;
++	bool vht;
+ 	int err;
+ 
+ 	INIT_DELAYED_WORK(&dev->cal_work, mt76x2u_phy_calibrate);
+@@ -217,7 +218,17 @@ int mt76x2u_register_device(struct mt76x02_dev *dev)
+ 
+ 	/* check hw sg support in order to enable AMSDU */
+ 	hw->max_tx_fragments = dev->mt76.usb.sg_en ? MT_TX_SG_MAX_SIZE : 1;
+-	err = mt76_register_device(&dev->mt76, true, mt76x02_rates,
++	switch (dev->mt76.rev) {
++	case 0x76320044:
++		/* these ASIC revisions do not support VHT */
++		vht = false;
++		break;
++	default:
++		vht = true;
++		break;
++	}
++
++	err = mt76_register_device(&dev->mt76, vht, mt76x02_rates,
+ 				   ARRAY_SIZE(mt76x02_rates));
+ 	if (err)
+ 		goto fail;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index 826c48a2ee6964..1fffa43379b2b2 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -83,6 +83,11 @@ mt7921_init_he_caps(struct mt792x_phy *phy, enum nl80211_band band,
+ 			he_cap_elem->phy_cap_info[9] |=
+ 				IEEE80211_HE_PHY_CAP9_TX_1024_QAM_LESS_THAN_242_TONE_RU |
+ 				IEEE80211_HE_PHY_CAP9_RX_1024_QAM_LESS_THAN_242_TONE_RU;
++
++			if (is_mt7922(phy->mt76->dev)) {
++				he_cap_elem->phy_cap_info[0] |=
++					IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G;
++			}
+ 			break;
+ 		case NL80211_IFTYPE_STATION:
+ 			he_cap_elem->mac_cap_info[1] |=
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/init.c b/drivers/net/wireless/mediatek/mt76/mt7925/init.c
+index 79639be0d29aca..2a83ff59a968c9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/init.c
+@@ -322,6 +322,12 @@ static void mt7925_init_work(struct work_struct *work)
+ 		return;
+ 	}
+ 
++	ret = mt7925_mcu_set_thermal_protect(dev);
++	if (ret) {
++		dev_err(dev->mt76.dev, "thermal protection enable failed\n");
++		return;
++	}
++
+ 	/* we support chip reset now */
+ 	dev->hw_init_done = true;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+index dea5b9bcb3fdfb..7d96b88cff803a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+@@ -974,6 +974,23 @@ int mt7925_mcu_set_deep_sleep(struct mt792x_dev *dev, bool enable)
+ }
+ EXPORT_SYMBOL_GPL(mt7925_mcu_set_deep_sleep);
+ 
++int mt7925_mcu_set_thermal_protect(struct mt792x_dev *dev)
++{
++	char cmd[64];
++	int ret = 0;
++
++	snprintf(cmd, sizeof(cmd), "ThermalProtGband %d %d %d %d %d %d %d %d %d %d",
++		 0, 100, 90, 80, 30, 1, 1, 115, 105, 5);
++	ret = mt7925_mcu_chip_config(dev, cmd);
++
++	snprintf(cmd, sizeof(cmd), "ThermalProtAband %d %d %d %d %d %d %d %d %d %d",
++		 1, 100, 90, 80, 30, 1, 1, 115, 105, 5);
++	ret |= mt7925_mcu_chip_config(dev, cmd);
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(mt7925_mcu_set_thermal_protect);
++
+ int mt7925_run_firmware(struct mt792x_dev *dev)
+ {
+ 	int err;
+@@ -3306,7 +3323,8 @@ int mt7925_mcu_fill_message(struct mt76_dev *mdev, struct sk_buff *skb,
+ 		else
+ 			uni_txd->option = MCU_CMD_UNI_EXT_ACK;
+ 
+-		if (cmd == MCU_UNI_CMD(HIF_CTRL))
++		if (cmd == MCU_UNI_CMD(HIF_CTRL) ||
++		    cmd == MCU_UNI_CMD(CHIP_CONFIG))
+ 			uni_txd->option &= ~MCU_CMD_ACK;
+ 
+ 		goto exit;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
+index 8ac43feb26d64f..a855a451350280 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
+@@ -637,6 +637,7 @@ int mt7925_mcu_add_bss_info(struct mt792x_phy *phy,
+ int mt7925_mcu_set_timing(struct mt792x_phy *phy,
+ 			  struct ieee80211_bss_conf *link_conf);
+ int mt7925_mcu_set_deep_sleep(struct mt792x_dev *dev, bool enable);
++int mt7925_mcu_set_thermal_protect(struct mt792x_dev *dev);
+ int mt7925_mcu_set_channel_domain(struct mt76_phy *phy);
+ int mt7925_mcu_set_radio_en(struct mt792x_phy *phy, bool enable);
+ int mt7925_mcu_set_chctx(struct mt76_phy *phy, struct mt76_vif_link *mvif,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/pci.c b/drivers/net/wireless/mediatek/mt76/mt7925/pci.c
+index c7b5dc1dbb3495..2e20ecc712071c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/pci.c
+@@ -490,9 +490,6 @@ static int mt7925_pci_suspend(struct device *device)
+ 
+ 	/* disable interrupt */
+ 	mt76_wr(dev, dev->irq_map->host_irq_enable, 0);
+-	mt76_wr(dev, MT_WFDMA0_HOST_INT_DIS,
+-		dev->irq_map->tx.all_complete_mask |
+-		MT_INT_RX_DONE_ALL | MT_INT_MCU_CMD);
+ 
+ 	mt76_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0x0);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/regs.h b/drivers/net/wireless/mediatek/mt76/mt7925/regs.h
+index 985794a40c1a8e..547489092c2947 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/regs.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/regs.h
+@@ -28,7 +28,7 @@
+ #define MT_MDP_TO_HIF			0
+ #define MT_MDP_TO_WM			1
+ 
+-#define MT_WFDMA0_HOST_INT_ENA		MT_WFDMA0(0x228)
++#define MT_WFDMA0_HOST_INT_ENA		MT_WFDMA0(0x204)
+ #define MT_WFDMA0_HOST_INT_DIS		MT_WFDMA0(0x22c)
+ #define HOST_RX_DONE_INT_ENA4		BIT(12)
+ #define HOST_RX_DONE_INT_ENA5		BIT(13)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+index d89c06f47997fa..2108361543a0c0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+@@ -647,6 +647,14 @@ mt7996_mac_fill_rx(struct mt7996_dev *dev, enum mt76_rxq_id q,
+ 		status->last_amsdu = amsdu_info == MT_RXD4_LAST_AMSDU_FRAME;
+ 	}
+ 
++	/* IEEE 802.11 fragmentation can only be applied to unicast frames.
++	 * Hence, drop fragments with multicast/broadcast RA.
++	 * This check fixes vulnerabilities like CVE-2020-26145.
++	 */
++	if ((ieee80211_has_morefrags(fc) || seq_ctrl & IEEE80211_SCTL_FRAG) &&
++	    FIELD_GET(MT_RXD3_NORMAL_ADDR_TYPE, rxd3) != MT_RXD3_NORMAL_U2M)
++		return -EINVAL;
++
+ 	hdr_gap = (u8 *)rxd - skb->data + 2 * remove_pad;
+ 	if (hdr_trans && ieee80211_has_morefrags(fc)) {
+ 		if (mt7996_reverse_frag0_hdr_trans(skb, hdr_gap))
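
The added mt7996 check enforces the 802.11 rule that fragmentation applies to unicast frames only; a fragment whose receiver address is a group address is inherently bogus and is the basis of attacks such as CVE-2020-26145. A standalone model of the predicate, using the standard frame-control and sequence-control bit values:

#include <stdint.h>
#include <stdio.h>

#define FCTL_MOREFRAGS 0x0400	/* IEEE80211_FCTL_MOREFRAGS */
#define SCTL_FRAG      0x000F	/* IEEE80211_SCTL_FRAG */

/* 802.11 fragmentation is unicast-only: drop any fragment whose
 * receiver address is not a unicast address. */
static int should_drop(uint16_t fc, uint16_t seq_ctrl, int addr_is_unicast)
{
	int is_fragment = (fc & FCTL_MOREFRAGS) || (seq_ctrl & SCTL_FRAG);

	return is_fragment && !addr_is_unicast;
}

int main(void)
{
	printf("%d\n", should_drop(FCTL_MOREFRAGS, 0, 0));	/* 1: drop */
	printf("%d\n", should_drop(0, 0x0003, 1));		/* 0: keep */
	return 0;
}
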
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/main.c b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+index a3295b22523a61..b11dd3dd5c46f0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+@@ -991,7 +991,7 @@ mt7996_mac_sta_add_links(struct mt7996_dev *dev, struct ieee80211_vif *vif,
+ {
+ 	struct mt7996_sta *msta = (struct mt7996_sta *)sta->drv_priv;
+ 	unsigned int link_id;
+-	int err;
++	int err = 0;
+ 
+ 	for_each_set_bit(link_id, &new_links, IEEE80211_MLD_MAX_NUM_LINKS) {
+ 		struct ieee80211_bss_conf *link_conf;
+@@ -1254,7 +1254,7 @@ static void mt7996_tx(struct ieee80211_hw *hw,
+ static int mt7996_set_rts_threshold(struct ieee80211_hw *hw, u32 val)
+ {
+ 	struct mt7996_dev *dev = mt7996_hw_dev(hw);
+-	int i, ret;
++	int i, ret = 0;
+ 
+ 	mutex_lock(&dev->mt76.mutex);
+ 
+diff --git a/drivers/net/wireless/purelifi/plfxlc/usb.c b/drivers/net/wireless/purelifi/plfxlc/usb.c
+index 10d2e2124ff81a..c2a1234b59db6c 100644
+--- a/drivers/net/wireless/purelifi/plfxlc/usb.c
++++ b/drivers/net/wireless/purelifi/plfxlc/usb.c
+@@ -503,8 +503,10 @@ int plfxlc_usb_wreq_async(struct plfxlc_usb *usb, const u8 *buffer,
+ 			  (void *)buffer, buffer_len, complete_fn, context);
+ 
+ 	r = usb_submit_urb(urb, GFP_ATOMIC);
+-	if (r)
++	if (r) {
++		usb_free_urb(urb);
+ 		dev_err(&udev->dev, "Async write submit failed (%d)\n", r);
++	}
+ 
+ 	return r;
+ }
+diff --git a/drivers/net/wireless/realtek/rtlwifi/pci.c b/drivers/net/wireless/realtek/rtlwifi/pci.c
+index 0eafc4d125f91d..898f597f70a96d 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/pci.c
++++ b/drivers/net/wireless/realtek/rtlwifi/pci.c
+@@ -155,6 +155,16 @@ static void _rtl_pci_update_default_setting(struct ieee80211_hw *hw)
+ 	    ((u8)init_aspm) == (PCI_EXP_LNKCTL_ASPM_L0S |
+ 				PCI_EXP_LNKCTL_ASPM_L1 | PCI_EXP_LNKCTL_CCC))
+ 		ppsc->support_aspm = false;
++
++	/* The RTL8723BE found on some ASUSTek laptops, such as the F441U
++	 * and X555UQ with subsystem ID 11ad:1723, is known to output large
++	 * amounts of PCIe AER errors during and after boot up, causing
++	 * heavy lags, poor network throughput, and occasional lock-ups.
++	 */
++	if (rtlpriv->rtlhal.hw_type == HARDWARE_TYPE_RTL8723BE &&
++	    (rtlpci->pdev->subsystem_vendor == 0x11ad &&
++	     rtlpci->pdev->subsystem_device == 0x1723))
++		ppsc->support_aspm = false;
+ }
+ 
+ static bool _rtl_pci_platform_switch_device_pci_aspm(
+diff --git a/drivers/net/wireless/realtek/rtw88/hci.h b/drivers/net/wireless/realtek/rtw88/hci.h
+index 96aeda26014e20..d4bee9c3ecfeab 100644
+--- a/drivers/net/wireless/realtek/rtw88/hci.h
++++ b/drivers/net/wireless/realtek/rtw88/hci.h
+@@ -19,6 +19,8 @@ struct rtw_hci_ops {
+ 	void (*link_ps)(struct rtw_dev *rtwdev, bool enter);
+ 	void (*interface_cfg)(struct rtw_dev *rtwdev);
+ 	void (*dynamic_rx_agg)(struct rtw_dev *rtwdev, bool enable);
++	void (*write_firmware_page)(struct rtw_dev *rtwdev, u32 page,
++				    const u8 *data, u32 size);
+ 
+ 	int (*write_data_rsvd_page)(struct rtw_dev *rtwdev, u8 *buf, u32 size);
+ 	int (*write_data_h2c)(struct rtw_dev *rtwdev, u8 *buf, u32 size);
+@@ -79,6 +81,12 @@ static inline void rtw_hci_dynamic_rx_agg(struct rtw_dev *rtwdev, bool enable)
+ 		rtwdev->hci.ops->dynamic_rx_agg(rtwdev, enable);
+ }
+ 
++static inline void rtw_hci_write_firmware_page(struct rtw_dev *rtwdev, u32 page,
++					       const u8 *data, u32 size)
++{
++	rtwdev->hci.ops->write_firmware_page(rtwdev, page, data, size);
++}
++
+ static inline int
+ rtw_hci_write_data_rsvd_page(struct rtw_dev *rtwdev, u8 *buf, u32 size)
+ {
+diff --git a/drivers/net/wireless/realtek/rtw88/mac.c b/drivers/net/wireless/realtek/rtw88/mac.c
+index 0491f501c13839..f66d1b302dc509 100644
+--- a/drivers/net/wireless/realtek/rtw88/mac.c
++++ b/drivers/net/wireless/realtek/rtw88/mac.c
+@@ -856,8 +856,8 @@ static void en_download_firmware_legacy(struct rtw_dev *rtwdev, bool en)
+ 	}
+ }
+ 
+-static void
+-write_firmware_page(struct rtw_dev *rtwdev, u32 page, const u8 *data, u32 size)
++void rtw_write_firmware_page(struct rtw_dev *rtwdev, u32 page,
++			     const u8 *data, u32 size)
+ {
+ 	u32 val32;
+ 	u32 block_nr;
+@@ -887,6 +887,7 @@ write_firmware_page(struct rtw_dev *rtwdev, u32 page, const u8 *data, u32 size)
+ 		rtw_write32(rtwdev, write_addr, le32_to_cpu(remain_data));
+ 	}
+ }
++EXPORT_SYMBOL(rtw_write_firmware_page);
+ 
+ static int
+ download_firmware_legacy(struct rtw_dev *rtwdev, const u8 *data, u32 size)
+@@ -904,11 +905,13 @@ download_firmware_legacy(struct rtw_dev *rtwdev, const u8 *data, u32 size)
+ 	rtw_write8_set(rtwdev, REG_MCUFW_CTRL, BIT_FWDL_CHK_RPT);
+ 
+ 	for (page = 0; page < total_page; page++) {
+-		write_firmware_page(rtwdev, page, data, DLFW_PAGE_SIZE_LEGACY);
++		rtw_hci_write_firmware_page(rtwdev, page, data,
++					    DLFW_PAGE_SIZE_LEGACY);
+ 		data += DLFW_PAGE_SIZE_LEGACY;
+ 	}
+ 	if (last_page_size)
+-		write_firmware_page(rtwdev, page, data, last_page_size);
++		rtw_hci_write_firmware_page(rtwdev, page, data,
++					    last_page_size);
+ 
+ 	if (!check_hw_ready(rtwdev, REG_MCUFW_CTRL, BIT_FWDL_CHK_RPT, 1)) {
+ 		rtw_err(rtwdev, "failed to check download firmware report\n");
+diff --git a/drivers/net/wireless/realtek/rtw88/mac.h b/drivers/net/wireless/realtek/rtw88/mac.h
+index 6905e27473721a..e92b1483728d5f 100644
+--- a/drivers/net/wireless/realtek/rtw88/mac.h
++++ b/drivers/net/wireless/realtek/rtw88/mac.h
+@@ -34,6 +34,8 @@ int rtw_pwr_seq_parser(struct rtw_dev *rtwdev,
+ 		       const struct rtw_pwr_seq_cmd * const *cmd_seq);
+ int rtw_mac_power_on(struct rtw_dev *rtwdev);
+ void rtw_mac_power_off(struct rtw_dev *rtwdev);
++void rtw_write_firmware_page(struct rtw_dev *rtwdev, u32 page,
++			     const u8 *data, u32 size);
+ int rtw_download_firmware(struct rtw_dev *rtwdev, struct rtw_fw_state *fw);
+ int rtw_mac_init(struct rtw_dev *rtwdev);
+ void rtw_mac_flush_queues(struct rtw_dev *rtwdev, u32 queues, bool drop);
+diff --git a/drivers/net/wireless/realtek/rtw88/mac80211.c b/drivers/net/wireless/realtek/rtw88/mac80211.c
+index 026fbf4ad9cce9..77f9fbe1870c66 100644
+--- a/drivers/net/wireless/realtek/rtw88/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw88/mac80211.c
+@@ -396,6 +396,8 @@ static void rtw_ops_bss_info_changed(struct ieee80211_hw *hw,
+ 			if (rtw_bf_support)
+ 				rtw_bf_assoc(rtwdev, vif, conf);
+ 
++			rtw_set_ampdu_factor(rtwdev, vif, conf);
++
+ 			rtw_fw_beacon_filter_config(rtwdev, true, vif);
+ 		} else {
+ 			rtw_leave_lps(rtwdev);
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index 959f56a3cc1ab4..bc2c1a5a30b379 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -2447,6 +2447,38 @@ void rtw_core_enable_beacon(struct rtw_dev *rtwdev, bool enable)
+ 	}
+ }
+ 
++void rtw_set_ampdu_factor(struct rtw_dev *rtwdev, struct ieee80211_vif *vif,
++			  struct ieee80211_bss_conf *bss_conf)
++{
++	const struct rtw_chip_ops *ops = rtwdev->chip->ops;
++	struct ieee80211_sta *sta;
++	u8 factor = 0xff;
++
++	if (!ops->set_ampdu_factor)
++		return;
++
++	rcu_read_lock();
++
++	sta = ieee80211_find_sta(vif, bss_conf->bssid);
++	if (!sta) {
++		rcu_read_unlock();
++		rtw_warn(rtwdev, "%s: failed to find station %pM\n",
++			 __func__, bss_conf->bssid);
++		return;
++	}
++
++	if (sta->deflink.vht_cap.vht_supported)
++		factor = u32_get_bits(sta->deflink.vht_cap.cap,
++				      IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK);
++	else if (sta->deflink.ht_cap.ht_supported)
++		factor = sta->deflink.ht_cap.ampdu_factor;
++
++	rcu_read_unlock();
++
++	if (factor != 0xff)
++		ops->set_ampdu_factor(rtwdev, factor);
++}
++
+ MODULE_AUTHOR("Realtek Corporation");
+ MODULE_DESCRIPTION("Realtek 802.11ac wireless core module");
+ MODULE_LICENSE("Dual BSD/GPL");
+diff --git a/drivers/net/wireless/realtek/rtw88/main.h b/drivers/net/wireless/realtek/rtw88/main.h
+index 02343e059fd977..f410c554da58a3 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.h
++++ b/drivers/net/wireless/realtek/rtw88/main.h
+@@ -878,6 +878,7 @@ struct rtw_chip_ops {
+ 			   u32 antenna_rx);
+ 	void (*cfg_ldo25)(struct rtw_dev *rtwdev, bool enable);
+ 	void (*efuse_grant)(struct rtw_dev *rtwdev, bool enable);
++	void (*set_ampdu_factor)(struct rtw_dev *rtwdev, u8 factor);
+ 	void (*false_alarm_statistics)(struct rtw_dev *rtwdev);
+ 	void (*phy_calibration)(struct rtw_dev *rtwdev);
+ 	void (*dpk_track)(struct rtw_dev *rtwdev);
+@@ -2272,4 +2273,6 @@ void rtw_update_channel(struct rtw_dev *rtwdev, u8 center_channel,
+ void rtw_core_port_switch(struct rtw_dev *rtwdev, struct ieee80211_vif *vif);
+ bool rtw_core_check_sta_active(struct rtw_dev *rtwdev);
+ void rtw_core_enable_beacon(struct rtw_dev *rtwdev, bool enable);
++void rtw_set_ampdu_factor(struct rtw_dev *rtwdev, struct ieee80211_vif *vif,
++			  struct ieee80211_bss_conf *bss_conf);
+ #endif
+diff --git a/drivers/net/wireless/realtek/rtw88/pci.c b/drivers/net/wireless/realtek/rtw88/pci.c
+index bb4c4ccb31d41a..7f2b6dc21f566b 100644
+--- a/drivers/net/wireless/realtek/rtw88/pci.c
++++ b/drivers/net/wireless/realtek/rtw88/pci.c
+@@ -12,6 +12,7 @@
+ #include "fw.h"
+ #include "ps.h"
+ #include "debug.h"
++#include "mac.h"
+ 
+ static bool rtw_disable_msi;
+ static bool rtw_pci_disable_aspm;
+@@ -1602,6 +1603,7 @@ static const struct rtw_hci_ops rtw_pci_ops = {
+ 	.link_ps = rtw_pci_link_ps,
+ 	.interface_cfg = rtw_pci_interface_cfg,
+ 	.dynamic_rx_agg = NULL,
++	.write_firmware_page = rtw_write_firmware_page,
+ 
+ 	.read8 = rtw_pci_read8,
+ 	.read16 = rtw_pci_read16,
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8703b.c b/drivers/net/wireless/realtek/rtw88/rtw8703b.c
+index 1d232adbdd7e31..5e59cfe4dfdf5d 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8703b.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8703b.c
+@@ -1904,6 +1904,7 @@ static const struct rtw_chip_ops rtw8703b_ops = {
+ 	.set_antenna		= NULL,
+ 	.cfg_ldo25		= rtw8723x_cfg_ldo25,
+ 	.efuse_grant		= rtw8723x_efuse_grant,
++	.set_ampdu_factor	= NULL,
+ 	.false_alarm_statistics	= rtw8723x_false_alarm_statistics,
+ 	.phy_calibration	= rtw8703b_phy_calibration,
+ 	.dpk_track		= NULL,
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8723d.c b/drivers/net/wireless/realtek/rtw88/rtw8723d.c
+index 87715bd54860a0..31876e708f9ef6 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8723d.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8723d.c
+@@ -1404,6 +1404,7 @@ static const struct rtw_chip_ops rtw8723d_ops = {
+ 	.set_antenna		= NULL,
+ 	.cfg_ldo25		= rtw8723x_cfg_ldo25,
+ 	.efuse_grant		= rtw8723x_efuse_grant,
++	.set_ampdu_factor	= NULL,
+ 	.false_alarm_statistics	= rtw8723x_false_alarm_statistics,
+ 	.phy_calibration	= rtw8723d_phy_calibration,
+ 	.cck_pd_set		= rtw8723d_phy_cck_pd_set,
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8812a.c b/drivers/net/wireless/realtek/rtw88/rtw8812a.c
+index f9ba2aa2928a42..adbfb37105d051 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8812a.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8812a.c
+@@ -925,6 +925,7 @@ static const struct rtw_chip_ops rtw8812a_ops = {
+ 	.set_tx_power_index	= rtw88xxa_set_tx_power_index,
+ 	.cfg_ldo25		= rtw8812a_cfg_ldo25,
+ 	.efuse_grant		= rtw88xxa_efuse_grant,
++	.set_ampdu_factor	= NULL,
+ 	.false_alarm_statistics	= rtw88xxa_false_alarm_statistics,
+ 	.phy_calibration	= rtw8812a_phy_calibration,
+ 	.cck_pd_set		= rtw88xxa_phy_cck_pd_set,
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8814a.c b/drivers/net/wireless/realtek/rtw88/rtw8814a.c
+index cfd35d40d46e22..ce8d4e4c6c57bc 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8814a.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8814a.c
+@@ -1332,6 +1332,16 @@ static void rtw8814a_cfg_ldo25(struct rtw_dev *rtwdev, bool enable)
+ {
+ }
+ 
++/* Without this the RTL8814A sends too many frames and some 11n APs
++ * can't handle it, resulting in low TX speed. Other chips seem fine.
++ */
++static void rtw8814a_set_ampdu_factor(struct rtw_dev *rtwdev, u8 factor)
++{
++	factor = min_t(u8, factor, IEEE80211_VHT_MAX_AMPDU_256K);
++
++	rtw_write32(rtwdev, REG_AMPDU_MAX_LENGTH, (8192 << factor) - 1);
++}
++
+ static void rtw8814a_false_alarm_statistics(struct rtw_dev *rtwdev)
+ {
+ 	struct rtw_dm_info *dm_info = &rtwdev->dm_info;
+@@ -2051,6 +2061,7 @@ static const struct rtw_chip_ops rtw8814a_ops = {
+ 	.set_antenna		= NULL,
+ 	.cfg_ldo25		= rtw8814a_cfg_ldo25,
+ 	.efuse_grant		= rtw8814a_efuse_grant,
++	.set_ampdu_factor	= rtw8814a_set_ampdu_factor,
+ 	.false_alarm_statistics	= rtw8814a_false_alarm_statistics,
+ 	.phy_calibration	= rtw8814a_phy_calibration,
+ 	.cck_pd_set		= rtw8814a_phy_cck_pd_set,
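
The new rtw8814a_set_ampdu_factor() clamps the peer's max A-MPDU length exponent and programs (8192 << factor) - 1, i.e. 2^(13 + factor) - 1 bytes, the standard VHT encoding where factor 7 corresponds to 1 MiB - 1. A quick table of the mapping:

#include <stdio.h>

int main(void)
{
	unsigned int factor;

	/* VHT max A-MPDU length = 2^(13 + factor) - 1 bytes */
	for (factor = 0; factor <= 7; factor++)
		printf("factor %u -> %7u bytes\n",
		       factor, (8192u << factor) - 1);
	return 0;
}
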
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821a.c b/drivers/net/wireless/realtek/rtw88/rtw8821a.c
+index f68239b073191d..4d81fb29c9fcd8 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8821a.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8821a.c
+@@ -871,6 +871,7 @@ static const struct rtw_chip_ops rtw8821a_ops = {
+ 	.set_tx_power_index	= rtw88xxa_set_tx_power_index,
+ 	.cfg_ldo25		= rtw8821a_cfg_ldo25,
+ 	.efuse_grant		= rtw88xxa_efuse_grant,
++	.set_ampdu_factor	= NULL,
+ 	.false_alarm_statistics	= rtw88xxa_false_alarm_statistics,
+ 	.phy_calibration	= rtw8821a_phy_calibration,
+ 	.cck_pd_set		= rtw88xxa_phy_cck_pd_set,
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c.c b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+index 0ade7f11cbd2eb..f68b0041dcc06c 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8821c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+@@ -1668,6 +1668,7 @@ static const struct rtw_chip_ops rtw8821c_ops = {
+ 	.set_antenna		= NULL,
+ 	.set_tx_power_index	= rtw8821c_set_tx_power_index,
+ 	.cfg_ldo25		= rtw8821c_cfg_ldo25,
++	.set_ampdu_factor	= NULL,
+ 	.false_alarm_statistics	= rtw8821c_false_alarm_statistics,
+ 	.phy_calibration	= rtw8821c_phy_calibration,
+ 	.cck_pd_set		= rtw8821c_phy_cck_pd_set,
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822b.c b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
+index b4934da88e33ad..0da212e27d55b3 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822b.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
+@@ -2158,6 +2158,7 @@ static const struct rtw_chip_ops rtw8822b_ops = {
+ 	.set_tx_power_index	= rtw8822b_set_tx_power_index,
+ 	.set_antenna		= rtw8822b_set_antenna,
+ 	.cfg_ldo25		= rtw8822b_cfg_ldo25,
++	.set_ampdu_factor	= NULL,
+ 	.false_alarm_statistics	= rtw8822b_false_alarm_statistics,
+ 	.phy_calibration	= rtw8822b_phy_calibration,
+ 	.pwr_track		= rtw8822b_pwr_track,
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822bu.c b/drivers/net/wireless/realtek/rtw88/rtw8822bu.c
+index 572d1f31832ee4..ab50b3c4056261 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822bu.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822bu.c
+@@ -77,6 +77,8 @@ static const struct usb_device_id rtw_8822bu_id_table[] = {
+ 	  .driver_info = (kernel_ulong_t)&(rtw8822b_hw_spec) }, /* Mercusys MA30N */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x3322, 0xff, 0xff, 0xff),
+ 	  .driver_info = (kernel_ulong_t)&(rtw8822b_hw_spec) }, /* D-Link DWA-T185 rev. A1 */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x0411, 0x03d1, 0xff, 0xff, 0xff),
++	  .driver_info = (kernel_ulong_t)&(rtw8822b_hw_spec) }, /* BUFFALO WI-U2-866DM */
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(usb, rtw_8822bu_id_table);
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+index 8937a7b656edb1..a7dc79773f6249 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+@@ -4969,6 +4969,7 @@ static const struct rtw_chip_ops rtw8822c_ops = {
+ 	.set_tx_power_index	= rtw8822c_set_tx_power_index,
+ 	.set_antenna		= rtw8822c_set_antenna,
+ 	.cfg_ldo25		= rtw8822c_cfg_ldo25,
++	.set_ampdu_factor	= NULL,
+ 	.false_alarm_statistics	= rtw8822c_false_alarm_statistics,
+ 	.dpk_track		= rtw8822c_dpk_track,
+ 	.phy_calibration	= rtw8822c_phy_calibration,
+diff --git a/drivers/net/wireless/realtek/rtw88/sdio.c b/drivers/net/wireless/realtek/rtw88/sdio.c
+index 410f637b1add58..682d4d6e314eea 100644
+--- a/drivers/net/wireless/realtek/rtw88/sdio.c
++++ b/drivers/net/wireless/realtek/rtw88/sdio.c
+@@ -10,6 +10,7 @@
+ #include <linux/mmc/host.h>
+ #include <linux/mmc/sdio_func.h>
+ #include "main.h"
++#include "mac.h"
+ #include "debug.h"
+ #include "fw.h"
+ #include "ps.h"
+@@ -1154,6 +1155,7 @@ static const struct rtw_hci_ops rtw_sdio_ops = {
+ 	.link_ps = rtw_sdio_link_ps,
+ 	.interface_cfg = rtw_sdio_interface_cfg,
+ 	.dynamic_rx_agg = NULL,
++	.write_firmware_page = rtw_write_firmware_page,
+ 
+ 	.read8 = rtw_sdio_read8,
+ 	.read16 = rtw_sdio_read16,
+diff --git a/drivers/net/wireless/realtek/rtw88/usb.c b/drivers/net/wireless/realtek/rtw88/usb.c
+index c8092fa0d9f13d..1c181431e71ca9 100644
+--- a/drivers/net/wireless/realtek/rtw88/usb.c
++++ b/drivers/net/wireless/realtek/rtw88/usb.c
+@@ -139,7 +139,7 @@ static void rtw_usb_write(struct rtw_dev *rtwdev, u32 addr, u32 val, int len)
+ 
+ 	ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ 			      RTW_USB_CMD_REQ, RTW_USB_CMD_WRITE,
+-			      addr, 0, data, len, 30000);
++			      addr, 0, data, len, 500);
+ 	if (ret < 0 && ret != -ENODEV && count++ < 4)
+ 		rtw_err(rtwdev, "write register 0x%x failed with %d\n",
+ 			addr, ret);
+@@ -165,6 +165,60 @@ static void rtw_usb_write32(struct rtw_dev *rtwdev, u32 addr, u32 val)
+ 	rtw_usb_write(rtwdev, addr, val, 4);
+ }
+ 
++static void rtw_usb_write_firmware_page(struct rtw_dev *rtwdev, u32 page,
++					const u8 *data, u32 size)
++{
++	struct rtw_usb *rtwusb = rtw_get_usb_priv(rtwdev);
++	struct usb_device *udev = rtwusb->udev;
++	u32 addr = FW_START_ADDR_LEGACY;
++	u8 *data_dup, *buf;
++	u32 n, block_size;
++	int ret;
++
++	switch (rtwdev->chip->id) {
++	case RTW_CHIP_TYPE_8723D:
++		block_size = 254;
++		break;
++	default:
++		block_size = 196;
++		break;
++	}
++
++	data_dup = kmemdup(data, size, GFP_KERNEL);
++	if (!data_dup)
++		return;
++
++	buf = data_dup;
++
++	rtw_write32_mask(rtwdev, REG_MCUFW_CTRL, BIT_ROM_PGE, page);
++
++	while (size > 0) {
++		if (size >= block_size)
++			n = block_size;
++		else if (size >= 8)
++			n = 8;
++		else
++			n = 1;
++
++		ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
++				      RTW_USB_CMD_REQ, RTW_USB_CMD_WRITE,
++				      addr, 0, buf, n, 500);
++		if (ret != n) {
++			if (ret != -ENODEV)
++				rtw_err(rtwdev,
++					"write 0x%x len %d failed: %d\n",
++					addr, n, ret);
++			break;
++		}
++
++		addr += n;
++		buf += n;
++		size -= n;
++	}
++
++	kfree(data_dup);
++}
++
+ static int dma_mapping_to_ep(enum rtw_dma_mapping dma_mapping)
+ {
+ 	switch (dma_mapping) {
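
rtw_usb_write_firmware_page() above streams each page with a simple size ladder: full block_size writes while possible, then 8-byte writes, then single bytes, presumably because the control endpoint only accepts those transfer sizes (an inference from the code, not documented here). A userspace model of the scheduling:

#include <stdio.h>

int main(void)
{
	/* 254 is the RTW_CHIP_TYPE_8723D block size from the code above */
	unsigned int size = 517, block_size = 254, n, off = 0;

	while (size > 0) {
		if (size >= block_size)
			n = block_size;
		else if (size >= 8)
			n = 8;
		else
			n = 1;
		printf("write %3u bytes at offset %u\n", n, off);
		off += n;
		size -= n;
	}
	return 0;
}
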
+@@ -891,6 +945,7 @@ static const struct rtw_hci_ops rtw_usb_ops = {
+ 	.link_ps = rtw_usb_link_ps,
+ 	.interface_cfg = rtw_usb_interface_cfg,
+ 	.dynamic_rx_agg = rtw_usb_dynamic_rx_agg,
++	.write_firmware_page = rtw_usb_write_firmware_page,
+ 
+ 	.write8  = rtw_usb_write8,
+ 	.write16 = rtw_usb_write16,
+diff --git a/drivers/net/wireless/realtek/rtw89/cam.c b/drivers/net/wireless/realtek/rtw89/cam.c
+index eca3d767ff6032..bc6f799e291e84 100644
+--- a/drivers/net/wireless/realtek/rtw89/cam.c
++++ b/drivers/net/wireless/realtek/rtw89/cam.c
+@@ -6,6 +6,7 @@
+ #include "debug.h"
+ #include "fw.h"
+ #include "mac.h"
++#include "ps.h"
+ 
+ static struct sk_buff *
+ rtw89_cam_get_sec_key_cmd(struct rtw89_dev *rtwdev,
+@@ -471,9 +472,11 @@ int rtw89_cam_sec_key_add(struct rtw89_dev *rtwdev,
+ 
+ 	switch (key->cipher) {
+ 	case WLAN_CIPHER_SUITE_WEP40:
++		rtw89_leave_ips_by_hwflags(rtwdev);
+ 		hw_key_type = RTW89_SEC_KEY_TYPE_WEP40;
+ 		break;
+ 	case WLAN_CIPHER_SUITE_WEP104:
++		rtw89_leave_ips_by_hwflags(rtwdev);
+ 		hw_key_type = RTW89_SEC_KEY_TYPE_WEP104;
+ 		break;
+ 	case WLAN_CIPHER_SUITE_TKIP:
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8922a_rfk.c b/drivers/net/wireless/realtek/rtw89/rtw8922a_rfk.c
+index c4c93f836a2f5a..1659ea64ade119 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8922a_rfk.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8922a_rfk.c
+@@ -77,11 +77,6 @@ void rtw8922a_ctl_band_ch_bw(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy,
+ 					     RR_CFGCH_BAND0 | RR_CFGCH_CH);
+ 			rf_reg[path][i] |= u32_encode_bits(central_ch, RR_CFGCH_CH);
+ 
+-			if (band == RTW89_BAND_2G)
+-				rtw89_write_rf(rtwdev, path, RR_SMD, RR_VCO2, 0x0);
+-			else
+-				rtw89_write_rf(rtwdev, path, RR_SMD, RR_VCO2, 0x1);
+-
+ 			switch (band) {
+ 			case RTW89_BAND_2G:
+ 			default:
+diff --git a/drivers/net/wireless/virtual/mac80211_hwsim.c b/drivers/net/wireless/virtual/mac80211_hwsim.c
+index cf3e976471c614..6ca5d9d0fe532a 100644
+--- a/drivers/net/wireless/virtual/mac80211_hwsim.c
++++ b/drivers/net/wireless/virtual/mac80211_hwsim.c
+@@ -1229,6 +1229,11 @@ static void mac80211_hwsim_set_tsf(struct ieee80211_hw *hw,
+ 	/* MLD not supported here */
+ 	u32 bcn_int = data->link_data[0].beacon_int;
+ 	u64 delta = abs(tsf - now);
++	struct ieee80211_bss_conf *conf;
++
++	conf = link_conf_dereference_protected(vif, data->link_data[0].link_id);
++	if (conf && !conf->enable_beacon)
++		return;
+ 
+ 	/* adjust after beaconing with new timestamp at old TBTT */
+ 	if (tsf > now) {
+diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
+index f29107d95ff26d..13aab3ca34f61e 100644
+--- a/drivers/nvme/host/ioctl.c
++++ b/drivers/nvme/host/ioctl.c
+@@ -429,21 +429,14 @@ static enum rq_end_io_ret nvme_uring_cmd_end_io(struct request *req,
+ 	pdu->result = le64_to_cpu(nvme_req(req)->result.u64);
+ 
+ 	/*
+-	 * For iopoll, complete it directly. Note that using the uring_cmd
+-	 * helper for this is safe only because we check blk_rq_is_poll().
+-	 * As that returns false if we're NOT on a polled queue, then it's
+-	 * safe to use the polled completion helper.
+-	 *
+-	 * Otherwise, move the completion to task work.
++	 * IOPOLL could potentially complete this request directly, but
++	 * if multiple rings are polling on the same queue, then it's possible
++	 * for one ring to find completions for another ring. Punting the
++	 * completion via task_work will always direct it to the right
++	 * location, rather than potentially completing requests for ringA
++	 * under iopoll invocations from ringB.
+ 	 */
+-	if (blk_rq_is_poll(req)) {
+-		if (pdu->bio)
+-			blk_rq_unmap_user(pdu->bio);
+-		io_uring_cmd_iopoll_done(ioucmd, pdu->result, pdu->status);
+-	} else {
+-		io_uring_cmd_do_in_task_lazy(ioucmd, nvme_uring_task_cb);
+-	}
+-
++	io_uring_cmd_do_in_task_lazy(ioucmd, nvme_uring_task_cb);
+ 	return RQ_END_IO_FREE;
+ }
+ 
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index aba365f97cf6b4..947fac9128b30d 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -2392,7 +2392,7 @@ static int nvme_tcp_setup_ctrl(struct nvme_ctrl *ctrl, bool new)
+ 		nvme_tcp_teardown_admin_queue(ctrl, false);
+ 		ret = nvme_tcp_configure_admin_queue(ctrl, false);
+ 		if (ret)
+-			return ret;
++			goto destroy_admin;
+ 	}
+ 
+ 	if (ctrl->icdoff) {
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+index 599ec4b1223e9d..112ae200b393a6 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+@@ -292,13 +292,14 @@ static int cdns_pcie_ep_set_msix(struct pci_epc *epc, u8 fn, u8 vfn,
+ 	struct cdns_pcie *pcie = &ep->pcie;
+ 	u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET;
+ 	u32 val, reg;
++	u16 actual_interrupts = interrupts + 1;
+ 
+ 	fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
+ 
+ 	reg = cap + PCI_MSIX_FLAGS;
+ 	val = cdns_pcie_ep_fn_readw(pcie, fn, reg);
+ 	val &= ~PCI_MSIX_FLAGS_QSIZE;
+-	val |= interrupts;
++	val |= interrupts; /* 0's based value */
+ 	cdns_pcie_ep_fn_writew(pcie, fn, reg, val);
+ 
+ 	/* Set MSI-X BAR and offset */
+@@ -308,7 +309,7 @@ static int cdns_pcie_ep_set_msix(struct pci_epc *epc, u8 fn, u8 vfn,
+ 
+ 	/* Set PBA BAR and offset.  BAR must match MSI-X BAR */
+ 	reg = cap + PCI_MSIX_PBA;
+-	val = (offset + (interrupts * PCI_MSIX_ENTRY_SIZE)) | bir;
++	val = (offset + (actual_interrupts * PCI_MSIX_ENTRY_SIZE)) | bir;
+ 	cdns_pcie_ep_fn_writel(pcie, fn, reg, val);
+ 
+ 	return 0;
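
Both this Cadence fix and the DesignWare fix below correct the same off-by-one: the MSI-X Table Size field is "0's based", so a register value of N encodes N + 1 vectors, and the PBA must be placed after interrupts + 1 table entries, not after interrupts of them. Worked numbers (the offset is illustrative):

#include <stdio.h>

#define PCI_MSIX_ENTRY_SIZE 16	/* bytes per MSI-X table entry */

int main(void)
{
	/* with 8 vectors the caller passes interrupts = 7 (0's based) */
	unsigned int interrupts = 7;
	unsigned int actual_interrupts = interrupts + 1;
	unsigned int offset = 0x1000;	/* hypothetical table offset */

	/* buggy PBA offset overlaps the last table entry */
	printf("buggy PBA offset: 0x%x\n",
	       offset + interrupts * PCI_MSIX_ENTRY_SIZE);
	printf("fixed PBA offset: 0x%x\n",
	       offset + actual_interrupts * PCI_MSIX_ENTRY_SIZE);
	return 0;
}
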
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index 1a0bf9341542ea..24026f3f34133b 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -585,6 +585,7 @@ static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
+ 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+ 	struct dw_pcie_ep_func *ep_func;
+ 	u32 val, reg;
++	u16 actual_interrupts = interrupts + 1;
+ 
+ 	ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
+ 	if (!ep_func || !ep_func->msix_cap)
+@@ -595,7 +596,7 @@ static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
+ 	reg = ep_func->msix_cap + PCI_MSIX_FLAGS;
+ 	val = dw_pcie_ep_readw_dbi(ep, func_no, reg);
+ 	val &= ~PCI_MSIX_FLAGS_QSIZE;
+-	val |= interrupts;
++	val |= interrupts; /* 0's based value */
+ 	dw_pcie_writew_dbi(pci, reg, val);
+ 
+ 	reg = ep_func->msix_cap + PCI_MSIX_TABLE;
+@@ -603,7 +604,7 @@ static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
+ 	dw_pcie_ep_writel_dbi(ep, func_no, reg, val);
+ 
+ 	reg = ep_func->msix_cap + PCI_MSIX_PBA;
+-	val = (offset + (interrupts * PCI_MSIX_ENTRY_SIZE)) | bir;
++	val = (offset + (actual_interrupts * PCI_MSIX_ENTRY_SIZE)) | bir;
+ 	dw_pcie_ep_writel_dbi(ep, func_no, reg, val);
+ 
+ 	dw_pcie_dbi_ro_wr_dis(pci);
+diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+index c624b7ebd118c0..bbe9d750316b99 100644
+--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
++++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+@@ -44,7 +44,6 @@
+ #define PCIE_LINKUP			(PCIE_SMLH_LINKUP | PCIE_RDLH_LINKUP)
+ #define PCIE_RDLH_LINK_UP_CHGED		BIT(1)
+ #define PCIE_LINK_REQ_RST_NOT_INT	BIT(2)
+-#define PCIE_L0S_ENTRY			0x11
+ #define PCIE_CLIENT_GENERAL_CONTROL	0x0
+ #define PCIE_CLIENT_INTR_STATUS_LEGACY	0x8
+ #define PCIE_CLIENT_INTR_MASK_LEGACY	0x1c
+@@ -177,8 +176,7 @@ static int rockchip_pcie_link_up(struct dw_pcie *pci)
+ 	struct rockchip_pcie *rockchip = to_rockchip_pcie(pci);
+ 	u32 val = rockchip_pcie_get_ltssm(rockchip);
+ 
+-	if ((val & PCIE_LINKUP) == PCIE_LINKUP &&
+-	    (val & PCIE_LTSSM_STATUS_MASK) == PCIE_L0S_ENTRY)
++	if ((val & PCIE_LINKUP) == PCIE_LINKUP)
+ 		return 1;
+ 
+ 	return 0;
+@@ -410,8 +408,8 @@ static int rockchip_pcie_phy_init(struct rockchip_pcie *rockchip)
+ 
+ static void rockchip_pcie_phy_deinit(struct rockchip_pcie *rockchip)
+ {
+-	phy_exit(rockchip->phy);
+ 	phy_power_off(rockchip->phy);
++	phy_exit(rockchip->phy);
+ }
+ 
+ static const struct dw_pcie_ops dw_pcie_ops = {
+diff --git a/drivers/pci/controller/pcie-apple.c b/drivers/pci/controller/pcie-apple.c
+index 3d778d8b018756..32f57b8a6ecbd5 100644
+--- a/drivers/pci/controller/pcie-apple.c
++++ b/drivers/pci/controller/pcie-apple.c
+@@ -754,7 +754,7 @@ static int apple_pcie_init(struct pci_config_window *cfg)
+ 	if (ret)
+ 		return ret;
+ 
+-	for_each_child_of_node_scoped(dev->of_node, of_port) {
++	for_each_available_child_of_node_scoped(dev->of_node, of_port) {
+ 		ret = apple_pcie_setup_port(pcie, of_port);
+ 		if (ret) {
+ 			dev_err(pcie->dev, "Port %pOF setup fail: %d\n", of_port, ret);
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index ebd342bda235d4..91d2d92717d981 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -771,7 +771,7 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
+ 		u16 ignored_events = PCI_EXP_SLTSTA_DLLSC;
+ 
+ 		if (!ctrl->inband_presence_disabled)
+-			ignored_events |= events & PCI_EXP_SLTSTA_PDC;
++			ignored_events |= PCI_EXP_SLTSTA_PDC;
+ 
+ 		events &= ~ignored_events;
+ 		pciehp_ignore_link_change(ctrl, pdev, irq, ignored_events);
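
The pciehp change switches from masking PDC only when it is already pending ("events & PCI_EXP_SLTSTA_PDC") to masking it unconditionally, so a presence-detect change that arrives later in the ignore window is filtered too. A small demonstration using the real register bit values (PDC = 0x0008, DLLSC = 0x0100):

#include <stdio.h>

#define SLTSTA_DLLSC 0x0100
#define SLTSTA_PDC   0x0008

int main(void)
{
	unsigned int events = SLTSTA_DLLSC;	/* PDC not pending *yet* */
	unsigned int old_mask = SLTSTA_DLLSC | (events & SLTSTA_PDC);
	unsigned int new_mask = SLTSTA_DLLSC | SLTSTA_PDC;
	unsigned int later = SLTSTA_PDC;	/* PDC arrives afterwards */

	printf("old mask drops later PDC? %s\n",
	       (later & ~old_mask) ? "no - PDC slips through" : "yes");
	printf("new mask drops later PDC? %s\n",
	       (later & ~new_mask) ? "no" : "yes - PDC ignored as intended");
	return 0;
}
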
+diff --git a/drivers/pci/hotplug/s390_pci_hpc.c b/drivers/pci/hotplug/s390_pci_hpc.c
+index e9e9aaa91770ae..d9996516f49e67 100644
+--- a/drivers/pci/hotplug/s390_pci_hpc.c
++++ b/drivers/pci/hotplug/s390_pci_hpc.c
+@@ -65,9 +65,9 @@ static int disable_slot(struct hotplug_slot *hotplug_slot)
+ 
+ 	rc = zpci_deconfigure_device(zdev);
+ out:
+-	mutex_unlock(&zdev->state_lock);
+ 	if (pdev)
+ 		pci_dev_put(pdev);
++	mutex_unlock(&zdev->state_lock);
+ 	return rc;
+ }
+ 
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 4d84ed41248442..027a71c9c06f13 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -5538,7 +5538,8 @@ static void pci_slot_unlock(struct pci_slot *slot)
+ 			continue;
+ 		if (dev->subordinate)
+ 			pci_bus_unlock(dev->subordinate);
+-		pci_dev_unlock(dev);
++		else
++			pci_dev_unlock(dev);
+ 	}
+ }
+ 
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 94daca15a096f7..d0f7b749b9a620 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4995,6 +4995,18 @@ static int pci_quirk_brcm_acs(struct pci_dev *dev, u16 acs_flags)
+ 		PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
+ }
+ 
++static int pci_quirk_loongson_acs(struct pci_dev *dev, u16 acs_flags)
++{
++	/*
++	 * Loongson PCIe Root Ports don't advertise an ACS capability, but
++	 * they do not allow peer-to-peer transactions between Root Ports.
++	 * Allow each Root Port to be in a separate IOMMU group by masking
++	 * SV/RR/CR/UF bits.
++	 */
++	return pci_acs_ctrl_enabled(acs_flags,
++		PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
++}
++
+ /*
+  * Wangxun 40G/25G/10G/1G NICs have no ACS capability, but on
+  * multi-function devices, the hardware isolates the functions by
+@@ -5128,6 +5140,17 @@ static const struct pci_dev_acs_enabled {
+ 	{ PCI_VENDOR_ID_BROADCOM, 0x1762, pci_quirk_mf_endpoint_acs },
+ 	{ PCI_VENDOR_ID_BROADCOM, 0x1763, pci_quirk_mf_endpoint_acs },
+ 	{ PCI_VENDOR_ID_BROADCOM, 0xD714, pci_quirk_brcm_acs },
++	/* Loongson PCIe Root Ports */
++	{ PCI_VENDOR_ID_LOONGSON, 0x3C09, pci_quirk_loongson_acs },
++	{ PCI_VENDOR_ID_LOONGSON, 0x3C19, pci_quirk_loongson_acs },
++	{ PCI_VENDOR_ID_LOONGSON, 0x3C29, pci_quirk_loongson_acs },
++	{ PCI_VENDOR_ID_LOONGSON, 0x7A09, pci_quirk_loongson_acs },
++	{ PCI_VENDOR_ID_LOONGSON, 0x7A19, pci_quirk_loongson_acs },
++	{ PCI_VENDOR_ID_LOONGSON, 0x7A29, pci_quirk_loongson_acs },
++	{ PCI_VENDOR_ID_LOONGSON, 0x7A39, pci_quirk_loongson_acs },
++	{ PCI_VENDOR_ID_LOONGSON, 0x7A49, pci_quirk_loongson_acs },
++	{ PCI_VENDOR_ID_LOONGSON, 0x7A59, pci_quirk_loongson_acs },
++	{ PCI_VENDOR_ID_LOONGSON, 0x7A69, pci_quirk_loongson_acs },
+ 	/* Amazon Annapurna Labs */
+ 	{ PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031, pci_quirk_al_acs },
+ 	/* Zhaoxin multi-function devices */
+diff --git a/drivers/phy/freescale/phy-fsl-imx8mq-usb.c b/drivers/phy/freescale/phy-fsl-imx8mq-usb.c
+index a974ef94de9a08..9598a80739910c 100644
+--- a/drivers/phy/freescale/phy-fsl-imx8mq-usb.c
++++ b/drivers/phy/freescale/phy-fsl-imx8mq-usb.c
+@@ -317,12 +317,12 @@ static u32 phy_tx_preemp_amp_tune_from_property(u32 microamp)
+ static u32 phy_tx_vboost_level_from_property(u32 microvolt)
+ {
+ 	switch (microvolt) {
+-	case 0 ... 960:
+-		return 0;
+-	case 961 ... 1160:
+-		return 2;
+-	default:
++	case 1156:
++		return 5;
++	case 844:
+ 		return 3;
++	default:
++		return 4;
+ 	}
+ }
+ 
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+index 79f9c08e5039c3..5c0177b4e4a37f 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+@@ -358,9 +358,7 @@ static int armada_37xx_pmx_set_by_name(struct pinctrl_dev *pctldev,
+ 
+ 	val = grp->val[func];
+ 
+-	regmap_update_bits(info->regmap, reg, mask, val);
+-
+-	return 0;
++	return regmap_update_bits(info->regmap, reg, mask, val);
+ }
+ 
+ static int armada_37xx_pmx_set(struct pinctrl_dev *pctldev,
+@@ -402,10 +400,13 @@ static int armada_37xx_gpio_get_direction(struct gpio_chip *chip,
+ 	struct armada_37xx_pinctrl *info = gpiochip_get_data(chip);
+ 	unsigned int reg = OUTPUT_EN;
+ 	unsigned int val, mask;
++	int ret;
+ 
+ 	armada_37xx_update_reg(&reg, &offset);
+ 	mask = BIT(offset);
+-	regmap_read(info->regmap, reg, &val);
++	ret = regmap_read(info->regmap, reg, &val);
++	if (ret)
++		return ret;
+ 
+ 	if (val & mask)
+ 		return GPIO_LINE_DIRECTION_OUT;
+@@ -442,11 +443,14 @@ static int armada_37xx_gpio_get(struct gpio_chip *chip, unsigned int offset)
+ 	struct armada_37xx_pinctrl *info = gpiochip_get_data(chip);
+ 	unsigned int reg = INPUT_VAL;
+ 	unsigned int val, mask;
++	int ret;
+ 
+ 	armada_37xx_update_reg(&reg, &offset);
+ 	mask = BIT(offset);
+ 
+-	regmap_read(info->regmap, reg, &val);
++	ret = regmap_read(info->regmap, reg, &val);
++	if (ret)
++		return ret;
+ 
+ 	return (val & mask) != 0;
+ }
+@@ -471,16 +475,17 @@ static int armada_37xx_pmx_gpio_set_direction(struct pinctrl_dev *pctldev,
+ {
+ 	struct armada_37xx_pinctrl *info = pinctrl_dev_get_drvdata(pctldev);
+ 	struct gpio_chip *chip = range->gc;
++	int ret;
+ 
+ 	dev_dbg(info->dev, "gpio_direction for pin %u as %s-%d to %s\n",
+ 		offset, range->name, offset, input ? "input" : "output");
+ 
+ 	if (input)
+-		armada_37xx_gpio_direction_input(chip, offset);
++		ret = armada_37xx_gpio_direction_input(chip, offset);
+ 	else
+-		armada_37xx_gpio_direction_output(chip, offset, 0);
++		ret = armada_37xx_gpio_direction_output(chip, offset, 0);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int armada_37xx_gpio_request_enable(struct pinctrl_dev *pctldev,
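
The armada-37xx hunks above all apply one pattern: regmap accessor return
codes are checked and propagated instead of being silently discarded. A
minimal sketch of the idea, with illustrative names that are not taken
from the driver:

#include <linux/bits.h>
#include <linux/errno.h>
#include <linux/regmap.h>

/* Sketch only: surface the regmap error instead of assuming success. */
static int example_get_bit(struct regmap *map, unsigned int reg,
			   unsigned int bit)
{
	unsigned int val;
	int ret;

	ret = regmap_read(map, reg, &val);
	if (ret)
		return ret;		/* e.g. -EIO from the bus */

	return !!(val & BIT(bit));
}
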
+diff --git a/drivers/pinctrl/pinctrl-mcp23s08.c b/drivers/pinctrl/pinctrl-mcp23s08.c
+index 4d1f41488017e4..c2f4b16f42d20b 100644
+--- a/drivers/pinctrl/pinctrl-mcp23s08.c
++++ b/drivers/pinctrl/pinctrl-mcp23s08.c
+@@ -636,6 +636,14 @@ int mcp23s08_probe_one(struct mcp23s08 *mcp, struct device *dev,
+ 
+ 	mcp->reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
+ 
++	/*
++	 * Reset the chip - we don't really know what state it's in, so reset
++	 * all pins to input first to prevent surprises.
++	 */
++	ret = mcp_write(mcp, MCP_IODIR, mcp->chip.ngpio == 16 ? 0xFFFF : 0xFF);
++	if (ret < 0)
++		return ret;
++
+ 	/* verify MCP_IOCON.SEQOP = 0, so sequential reads work,
+ 	 * and MCP_IOCON.HAEN = 1, so we work with all chips.
+ 	 */
+diff --git a/drivers/platform/loongarch/loongson-laptop.c b/drivers/platform/loongarch/loongson-laptop.c
+index 99203584949daa..61b18ac206c9ee 100644
+--- a/drivers/platform/loongarch/loongson-laptop.c
++++ b/drivers/platform/loongarch/loongson-laptop.c
+@@ -56,8 +56,7 @@ static struct input_dev *generic_inputdev;
+ static acpi_handle hotkey_handle;
+ static struct key_entry hotkey_keycode_map[GENERIC_HOTKEY_MAP_MAX];
+ 
+-int loongson_laptop_turn_on_backlight(void);
+-int loongson_laptop_turn_off_backlight(void);
++static bool bl_powered;
+ static int loongson_laptop_backlight_update(struct backlight_device *bd);
+ 
+ /* 2. ACPI Helpers and device model */
+@@ -354,16 +353,42 @@ static int ec_backlight_level(u8 level)
+ 	return level;
+ }
+ 
++static int ec_backlight_set_power(bool state)
++{
++	int status;
++	union acpi_object arg0 = { ACPI_TYPE_INTEGER };
++	struct acpi_object_list args = { 1, &arg0 };
++
++	arg0.integer.value = state;
++	status = acpi_evaluate_object(NULL, "\\BLSW", &args, NULL);
++	if (ACPI_FAILURE(status)) {
++		pr_info("Loongson lvds error: 0x%x\n", status);
++		return -EIO;
++	}
++
++	return 0;
++}
++
+ static int loongson_laptop_backlight_update(struct backlight_device *bd)
+ {
+-	int lvl = ec_backlight_level(bd->props.brightness);
++	bool target_powered = !backlight_is_blank(bd);
++	int ret = 0, lvl = ec_backlight_level(bd->props.brightness);
+ 
+ 	if (lvl < 0)
+ 		return -EIO;
++
+ 	if (ec_set_brightness(lvl))
+ 		return -EIO;
+ 
+-	return 0;
++	if (target_powered != bl_powered) {
++		ret = ec_backlight_set_power(target_powered);
++		if (ret < 0)
++			return ret;
++
++		bl_powered = target_powered;
++	}
++
++	return ret;
+ }
+ 
+ static int loongson_laptop_get_brightness(struct backlight_device *bd)
+@@ -384,7 +409,7 @@ static const struct backlight_ops backlight_laptop_ops = {
+ 
+ static int laptop_backlight_register(void)
+ {
+-	int status = 0;
++	int status = 0, ret;
+ 	struct backlight_properties props;
+ 
+ 	memset(&props, 0, sizeof(props));
+@@ -392,44 +417,20 @@ static int laptop_backlight_register(void)
+ 	if (!acpi_evalf(hotkey_handle, &status, "ECLL", "d"))
+ 		return -EIO;
+ 
+-	props.brightness = 1;
++	ret = ec_backlight_set_power(true);
++	if (ret)
++		return ret;
++
++	bl_powered = true;
++
+ 	props.max_brightness = status;
++	props.brightness = ec_get_brightness();
++	props.power = BACKLIGHT_POWER_ON;
+ 	props.type = BACKLIGHT_PLATFORM;
+ 
+ 	backlight_device_register("loongson_laptop",
+ 				NULL, NULL, &backlight_laptop_ops, &props);
+ 
+-	return 0;
+-}
+-
+-int loongson_laptop_turn_on_backlight(void)
+-{
+-	int status;
+-	union acpi_object arg0 = { ACPI_TYPE_INTEGER };
+-	struct acpi_object_list args = { 1, &arg0 };
+-
+-	arg0.integer.value = 1;
+-	status = acpi_evaluate_object(NULL, "\\BLSW", &args, NULL);
+-	if (ACPI_FAILURE(status)) {
+-		pr_info("Loongson lvds error: 0x%x\n", status);
+-		return -ENODEV;
+-	}
+-
+-	return 0;
+-}
+-
+-int loongson_laptop_turn_off_backlight(void)
+-{
+-	int status;
+-	union acpi_object arg0 = { ACPI_TYPE_INTEGER };
+-	struct acpi_object_list args = { 1, &arg0 };
+-
+-	arg0.integer.value = 0;
+-	status = acpi_evaluate_object(NULL, "\\BLSW", &args, NULL);
+-	if (ACPI_FAILURE(status)) {
+-		pr_info("Loongson lvds error: 0x%x\n", status);
+-		return -ENODEV;
+-	}
+ 
+ 	return 0;
+ }
+@@ -611,11 +612,17 @@ static int __init generic_acpi_laptop_init(void)
+ 
+ static void __exit generic_acpi_laptop_exit(void)
+ {
++	int i;
++
+ 	if (generic_inputdev) {
+-		if (input_device_registered)
+-			input_unregister_device(generic_inputdev);
+-		else
++		if (!input_device_registered) {
+ 			input_free_device(generic_inputdev);
++		} else {
++			input_unregister_device(generic_inputdev);
++
++			for (i = 0; i < ARRAY_SIZE(generic_sub_drivers); i++)
++				generic_subdriver_exit(&generic_sub_drivers[i]);
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/platform/x86/amd/pmc/pmc.c b/drivers/platform/x86/amd/pmc/pmc.c
+index 0329fafe14ebcd..f45525bbd15495 100644
+--- a/drivers/platform/x86/amd/pmc/pmc.c
++++ b/drivers/platform/x86/amd/pmc/pmc.c
+@@ -157,6 +157,8 @@ static int amd_pmc_setup_smu_logging(struct amd_pmc_dev *dev)
+ 			return -ENOMEM;
+ 	}
+ 
++	memset_io(dev->smu_virt_addr, 0, sizeof(struct smu_metrics));
++
+ 	/* Start the logging */
+ 	amd_pmc_send_cmd(dev, 0, NULL, SMU_MSG_LOG_RESET, false);
+ 	amd_pmc_send_cmd(dev, 0, NULL, SMU_MSG_LOG_START, false);
+diff --git a/drivers/platform/x86/amd/pmf/core.c b/drivers/platform/x86/amd/pmf/core.c
+index 96821101ec773c..395c011e837f1b 100644
+--- a/drivers/platform/x86/amd/pmf/core.c
++++ b/drivers/platform/x86/amd/pmf/core.c
+@@ -280,7 +280,7 @@ int amd_pmf_set_dram_addr(struct amd_pmf_dev *dev, bool alloc_buffer)
+ 			dev_err(dev->dev, "Invalid CPU id: 0x%x", dev->cpu_id);
+ 		}
+ 
+-		dev->buf = kzalloc(dev->mtable_size, GFP_KERNEL);
++		dev->buf = devm_kzalloc(dev->dev, dev->mtable_size, GFP_KERNEL);
+ 		if (!dev->buf)
+ 			return -ENOMEM;
+ 	}
+@@ -493,7 +493,6 @@ static void amd_pmf_remove(struct platform_device *pdev)
+ 	mutex_destroy(&dev->lock);
+ 	mutex_destroy(&dev->update_mutex);
+ 	mutex_destroy(&dev->cb_mutex);
+-	kfree(dev->buf);
+ }
+ 
+ static const struct attribute_group *amd_pmf_driver_groups[] = {
+diff --git a/drivers/platform/x86/amd/pmf/tee-if.c b/drivers/platform/x86/amd/pmf/tee-if.c
+index d3bd12ad036ae0..76efce48a45ce2 100644
+--- a/drivers/platform/x86/amd/pmf/tee-if.c
++++ b/drivers/platform/x86/amd/pmf/tee-if.c
+@@ -358,30 +358,28 @@ static ssize_t amd_pmf_get_pb_data(struct file *filp, const char __user *buf,
+ 		return -EINVAL;
+ 
+ 	/* re-alloc to the new buffer length of the policy binary */
+-	new_policy_buf = memdup_user(buf, length);
+-	if (IS_ERR(new_policy_buf))
+-		return PTR_ERR(new_policy_buf);
++	new_policy_buf = devm_kzalloc(dev->dev, length, GFP_KERNEL);
++	if (!new_policy_buf)
++		return -ENOMEM;
++
++	if (copy_from_user(new_policy_buf, buf, length)) {
++		devm_kfree(dev->dev, new_policy_buf);
++		return -EFAULT;
++	}
+ 
+-	kfree(dev->policy_buf);
++	devm_kfree(dev->dev, dev->policy_buf);
+ 	dev->policy_buf = new_policy_buf;
+ 	dev->policy_sz = length;
+ 
+-	if (!amd_pmf_pb_valid(dev)) {
+-		ret = -EINVAL;
+-		goto cleanup;
+-	}
++	if (!amd_pmf_pb_valid(dev))
++		return -EINVAL;
+ 
+ 	amd_pmf_hex_dump_pb(dev);
+ 	ret = amd_pmf_start_policy_engine(dev);
+ 	if (ret < 0)
+-		goto cleanup;
++		return ret;
+ 
+ 	return length;
+-
+-cleanup:
+-	kfree(dev->policy_buf);
+-	dev->policy_buf = NULL;
+-	return ret;
+ }
+ 
+ static const struct file_operations pb_fops = {
+@@ -422,12 +420,12 @@ static int amd_pmf_ta_open_session(struct tee_context *ctx, u32 *id, const uuid_
+ 	rc = tee_client_open_session(ctx, &sess_arg, NULL);
+ 	if (rc < 0 || sess_arg.ret != 0) {
+ 		pr_err("Failed to open TEE session err:%#x, rc:%d\n", sess_arg.ret, rc);
+-		return rc;
++		return rc ?: -EINVAL;
+ 	}
+ 
+ 	*id = sess_arg.session;
+ 
+-	return rc;
++	return 0;
+ }
+ 
+ static int amd_pmf_register_input_device(struct amd_pmf_dev *dev)
+@@ -462,7 +460,9 @@ static int amd_pmf_tee_init(struct amd_pmf_dev *dev, const uuid_t *uuid)
+ 	dev->tee_ctx = tee_client_open_context(NULL, amd_pmf_amdtee_ta_match, NULL, NULL);
+ 	if (IS_ERR(dev->tee_ctx)) {
+ 		dev_err(dev->dev, "Failed to open TEE context\n");
+-		return PTR_ERR(dev->tee_ctx);
++		ret = PTR_ERR(dev->tee_ctx);
++		dev->tee_ctx = NULL;
++		return ret;
+ 	}
+ 
+ 	ret = amd_pmf_ta_open_session(dev->tee_ctx, &dev->session_id, uuid);
+@@ -502,9 +502,12 @@ static int amd_pmf_tee_init(struct amd_pmf_dev *dev, const uuid_t *uuid)
+ 
+ static void amd_pmf_tee_deinit(struct amd_pmf_dev *dev)
+ {
++	if (!dev->tee_ctx)
++		return;
+ 	tee_shm_free(dev->fw_shm_pool);
+ 	tee_client_close_session(dev->tee_ctx, dev->session_id);
+ 	tee_client_close_context(dev->tee_ctx);
++	dev->tee_ctx = NULL;
+ }
+ 
+ int amd_pmf_init_smart_pc(struct amd_pmf_dev *dev)
+@@ -532,13 +535,13 @@ int amd_pmf_init_smart_pc(struct amd_pmf_dev *dev)
+ 	dev->policy_base = devm_ioremap_resource(dev->dev, dev->res);
+ 	if (IS_ERR(dev->policy_base)) {
+ 		ret = PTR_ERR(dev->policy_base);
+-		goto err_free_dram_buf;
++		goto err_cancel_work;
+ 	}
+ 
+-	dev->policy_buf = kzalloc(dev->policy_sz, GFP_KERNEL);
++	dev->policy_buf = devm_kzalloc(dev->dev, dev->policy_sz, GFP_KERNEL);
+ 	if (!dev->policy_buf) {
+ 		ret = -ENOMEM;
+-		goto err_free_dram_buf;
++		goto err_cancel_work;
+ 	}
+ 
+ 	memcpy_fromio(dev->policy_buf, dev->policy_base, dev->policy_sz);
+@@ -546,21 +549,21 @@ int amd_pmf_init_smart_pc(struct amd_pmf_dev *dev)
+ 	if (!amd_pmf_pb_valid(dev)) {
+ 		dev_info(dev->dev, "No Smart PC policy present\n");
+ 		ret = -EINVAL;
+-		goto err_free_policy;
++		goto err_cancel_work;
+ 	}
+ 
+ 	amd_pmf_hex_dump_pb(dev);
+ 
+-	dev->prev_data = kzalloc(sizeof(*dev->prev_data), GFP_KERNEL);
++	dev->prev_data = devm_kzalloc(dev->dev, sizeof(*dev->prev_data), GFP_KERNEL);
+ 	if (!dev->prev_data) {
+ 		ret = -ENOMEM;
+-		goto err_free_policy;
++		goto err_cancel_work;
+ 	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(amd_pmf_ta_uuid); i++) {
+ 		ret = amd_pmf_tee_init(dev, &amd_pmf_ta_uuid[i]);
+ 		if (ret)
+-			goto err_free_prev_data;
++			goto err_cancel_work;
+ 
+ 		ret = amd_pmf_start_policy_engine(dev);
+ 		switch (ret) {
+@@ -575,7 +578,7 @@ int amd_pmf_init_smart_pc(struct amd_pmf_dev *dev)
+ 		default:
+ 			ret = -EINVAL;
+ 			amd_pmf_tee_deinit(dev);
+-			goto err_free_prev_data;
++			goto err_cancel_work;
+ 		}
+ 
+ 		if (status)
+@@ -584,7 +587,7 @@ int amd_pmf_init_smart_pc(struct amd_pmf_dev *dev)
+ 
+ 	if (!status && !pb_side_load) {
+ 		ret = -EINVAL;
+-		goto err_free_prev_data;
++		goto err_cancel_work;
+ 	}
+ 
+ 	if (pb_side_load)
+@@ -600,12 +603,6 @@ int amd_pmf_init_smart_pc(struct amd_pmf_dev *dev)
+ 	if (pb_side_load && dev->esbin)
+ 		amd_pmf_remove_pb(dev);
+ 	amd_pmf_tee_deinit(dev);
+-err_free_prev_data:
+-	kfree(dev->prev_data);
+-err_free_policy:
+-	kfree(dev->policy_buf);
+-err_free_dram_buf:
+-	kfree(dev->buf);
+ err_cancel_work:
+ 	cancel_delayed_work_sync(&dev->pb_work);
+ 
+@@ -621,11 +618,5 @@ void amd_pmf_deinit_smart_pc(struct amd_pmf_dev *dev)
+ 		amd_pmf_remove_pb(dev);
+ 
+ 	cancel_delayed_work_sync(&dev->pb_work);
+-	kfree(dev->prev_data);
+-	dev->prev_data = NULL;
+-	kfree(dev->policy_buf);
+-	dev->policy_buf = NULL;
+-	kfree(dev->buf);
+-	dev->buf = NULL;
+ 	amd_pmf_tee_deinit(dev);
+ }
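
The tee-if.c changes above lean on device-managed allocation to delete the
manual kfree() error ladder: once a buffer comes from devm_kzalloc(), every
early return needs no matching free. A hedged sketch of the pattern, with
hypothetical names:

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/gfp.h>

/* Sketch: devm memory is released automatically when the device detaches. */
static int example_alloc(struct device *dev, size_t len, void **out)
{
	void *buf = devm_kzalloc(dev, len, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;

	*out = buf;
	return 0;		/* no cleanup label required */
}
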
+diff --git a/drivers/platform/x86/dell/alienware-wmi-wmax.c b/drivers/platform/x86/dell/alienware-wmi-wmax.c
+index 08b82c151e0710..eb5cbe6ae9e907 100644
+--- a/drivers/platform/x86/dell/alienware-wmi-wmax.c
++++ b/drivers/platform/x86/dell/alienware-wmi-wmax.c
+@@ -91,7 +91,7 @@ static const struct dmi_system_id awcc_dmi_table[] __initconst = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Alienware"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Alienware m16 R1 AMD"),
+ 		},
+-		.driver_data = &g_series_quirks,
++		.driver_data = &generic_quirks,
+ 	},
+ 	{
+ 		.ident = "Alienware m16 R2",
+diff --git a/drivers/platform/x86/dell/dell_rbu.c b/drivers/platform/x86/dell/dell_rbu.c
+index e30ca325938cb6..8dea70b7f8c154 100644
+--- a/drivers/platform/x86/dell/dell_rbu.c
++++ b/drivers/platform/x86/dell/dell_rbu.c
+@@ -292,7 +292,7 @@ static int packet_read_list(char *data, size_t * pread_length)
+ 	remaining_bytes = *pread_length;
+ 	bytes_read = rbu_data.packet_read_count;
+ 
+-	list_for_each_entry(newpacket, (&packet_data_head.list)->next, list) {
++	list_for_each_entry(newpacket, &packet_data_head.list, list) {
+ 		bytes_copied = do_packet_read(pdest, newpacket,
+ 			remaining_bytes, bytes_read, &temp_count);
+ 		remaining_bytes -= bytes_copied;
+@@ -315,14 +315,14 @@ static void packet_empty_list(void)
+ {
+ 	struct packet_data *newpacket, *tmp;
+ 
+-	list_for_each_entry_safe(newpacket, tmp, (&packet_data_head.list)->next, list) {
++	list_for_each_entry_safe(newpacket, tmp, &packet_data_head.list, list) {
+ 		list_del(&newpacket->list);
+ 
+ 		/*
+ 		 * zero out the RBU packet memory before freeing
+ 		 * to make sure there are no stale RBU packets left in memory
+ 		 */
+-		memset(newpacket->data, 0, rbu_data.packetsize);
++		memset(newpacket->data, 0, newpacket->length);
+ 		set_memory_wb((unsigned long)newpacket->data,
+ 			1 << newpacket->ordernum);
+ 		free_pages((unsigned long) newpacket->data,
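
The dell_rbu fix above corrects a classic misuse: list_for_each_entry()
expects the list head itself, since the macro already begins iterating at
head->next. The same discipline in isolation (types are hypothetical):

#include <linux/list.h>

struct pkt {
	struct list_head list;
	int len;
};

/* Sketch: pass &head as the anchor, never head.next. */
static int example_total(struct list_head *head)
{
	struct pkt *p;
	int total = 0;

	list_for_each_entry(p, head, list)
		total += p->len;

	return total;
}
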
+diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
+index ede483573fe0dd..b5e4da6a67798a 100644
+--- a/drivers/platform/x86/ideapad-laptop.c
++++ b/drivers/platform/x86/ideapad-laptop.c
+@@ -15,6 +15,7 @@
+ #include <linux/bug.h>
+ #include <linux/cleanup.h>
+ #include <linux/debugfs.h>
++#include <linux/delay.h>
+ #include <linux/device.h>
+ #include <linux/dmi.h>
+ #include <linux/i8042.h>
+@@ -267,6 +268,20 @@ static void ideapad_shared_exit(struct ideapad_private *priv)
+  */
+ #define IDEAPAD_EC_TIMEOUT 200 /* in ms */
+ 
++/*
++ * Some models (e.g., ThinkBook since 2024) have a low tolerance for being
++ * polled too frequently. Doing so may break the state machine in the EC,
++ * resulting in a hard shutdown.
++ *
++ * It is also observed that frequent polls may disturb the ongoing operation
++ * and notably delay the availability of EC response.
++ *
++ * These values are used as the delay before the first poll and the interval
++ * between subsequent polls to solve the above issues.
++ */
++#define IDEAPAD_EC_POLL_MIN_US 150
++#define IDEAPAD_EC_POLL_MAX_US 300
++
+ static int eval_int(acpi_handle handle, const char *name, unsigned long *res)
+ {
+ 	unsigned long long result;
+@@ -383,7 +398,7 @@ static int read_ec_data(acpi_handle handle, unsigned long cmd, unsigned long *da
+ 	end_jiffies = jiffies + msecs_to_jiffies(IDEAPAD_EC_TIMEOUT) + 1;
+ 
+ 	while (time_before(jiffies, end_jiffies)) {
+-		schedule();
++		usleep_range(IDEAPAD_EC_POLL_MIN_US, IDEAPAD_EC_POLL_MAX_US);
+ 
+ 		err = eval_vpcr(handle, 1, &val);
+ 		if (err)
+@@ -414,7 +429,7 @@ static int write_ec_cmd(acpi_handle handle, unsigned long cmd, unsigned long dat
+ 	end_jiffies = jiffies + msecs_to_jiffies(IDEAPAD_EC_TIMEOUT) + 1;
+ 
+ 	while (time_before(jiffies, end_jiffies)) {
+-		schedule();
++		usleep_range(IDEAPAD_EC_POLL_MIN_US, IDEAPAD_EC_POLL_MAX_US);
+ 
+ 		err = eval_vpcr(handle, 1, &val);
+ 		if (err)
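
The ideapad change swaps a schedule() busy-loop for usleep_range(), so the
EC sees bounded, spaced-out polls rather than back-to-back reads. A sketch
of the loop shape, with a hypothetical status callback and the patch's
150/300 us window:

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

/* Sketch: sleep 150-300 us between polls; give up after roughly 200 ms. */
static int example_poll(int (*poll_done)(unsigned long *), unsigned long *val)
{
	unsigned long end = jiffies + msecs_to_jiffies(200) + 1;

	while (time_before(jiffies, end)) {
		usleep_range(150, 300);
		if (poll_done(val) == 0)
			return 0;
	}

	return -ETIMEDOUT;
}
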
+diff --git a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c
+index 4aa6c227ec8202..b67bf85532aec0 100644
+--- a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c
++++ b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c
+@@ -467,10 +467,13 @@ static int uncore_probe(struct auxiliary_device *auxdev, const struct auxiliary_
+ 
+ 	/* Get the package ID from the TPMI core */
+ 	plat_info = tpmi_get_platform_data(auxdev);
+-	if (plat_info)
+-		pkg = plat_info->package_id;
+-	else
++	if (unlikely(!plat_info)) {
+ 		dev_info(&auxdev->dev, "Platform information is NULL\n");
++		ret = -ENODEV;
++		goto err_rem_common;
++	}
++
++	pkg = plat_info->package_id;
+ 
+ 	for (i = 0; i < num_resources; ++i) {
+ 		struct tpmi_uncore_power_domain_info *pd_info;
+diff --git a/drivers/pmdomain/core.c b/drivers/pmdomain/core.c
+index d6c1ddb807b208..7a3bad106e175d 100644
+--- a/drivers/pmdomain/core.c
++++ b/drivers/pmdomain/core.c
+@@ -2229,8 +2229,10 @@ static int genpd_alloc_data(struct generic_pm_domain *genpd)
+ 	return 0;
+ put:
+ 	put_device(&genpd->dev);
+-	if (genpd->free_states == genpd_free_default_power_state)
++	if (genpd->free_states == genpd_free_default_power_state) {
+ 		kfree(genpd->states);
++		genpd->states = NULL;
++	}
+ free:
+ 	if (genpd_is_cpu_domain(genpd))
+ 		free_cpumask_var(genpd->cpus);
+diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
+index 2f31d750a4c1e3..93dcebbe114175 100644
+--- a/drivers/power/supply/bq27xxx_battery.c
++++ b/drivers/power/supply/bq27xxx_battery.c
+@@ -2131,7 +2131,7 @@ static int bq27xxx_battery_get_property(struct power_supply *psy,
+ 	mutex_unlock(&di->lock);
+ 
+ 	if (psp != POWER_SUPPLY_PROP_PRESENT && di->cache.flags < 0)
+-		return -ENODEV;
++		return di->cache.flags;
+ 
+ 	switch (psp) {
+ 	case POWER_SUPPLY_PROP_STATUS:
+diff --git a/drivers/power/supply/bq27xxx_battery_i2c.c b/drivers/power/supply/bq27xxx_battery_i2c.c
+index ba0d22d9042950..868e95f0887e11 100644
+--- a/drivers/power/supply/bq27xxx_battery_i2c.c
++++ b/drivers/power/supply/bq27xxx_battery_i2c.c
+@@ -6,6 +6,7 @@
+  *	Andrew F. Davis <afd@ti.com>
+  */
+ 
++#include <linux/delay.h>
+ #include <linux/i2c.h>
+ #include <linux/interrupt.h>
+ #include <linux/module.h>
+@@ -31,6 +32,7 @@ static int bq27xxx_battery_i2c_read(struct bq27xxx_device_info *di, u8 reg,
+ 	struct i2c_msg msg[2];
+ 	u8 data[2];
+ 	int ret;
++	int retry = 0;
+ 
+ 	if (!client->adapter)
+ 		return -ENODEV;
+@@ -47,7 +49,16 @@ static int bq27xxx_battery_i2c_read(struct bq27xxx_device_info *di, u8 reg,
+ 	else
+ 		msg[1].len = 2;
+ 
+-	ret = i2c_transfer(client->adapter, msg, ARRAY_SIZE(msg));
++	do {
++		ret = i2c_transfer(client->adapter, msg, ARRAY_SIZE(msg));
++		if (ret == -EBUSY && ++retry < 3) {
++			/* sleep 10 milliseconds when busy */
++			usleep_range(10000, 11000);
++			continue;
++		}
++		break;
++	} while (1);
++
+ 	if (ret < 0)
+ 		return ret;
+ 
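
The bq27xxx change wraps the transfer in a bounded retry: back off about
10 ms on -EBUSY, stop after three attempts, and surface any other error
immediately. The same shape in isolation (the transfer callback is
hypothetical):

#include <linux/delay.h>
#include <linux/errno.h>

static int example_xfer_retry(int (*xfer)(void *), void *arg)
{
	int ret, retry = 0;

	do {
		ret = xfer(arg);
		if (ret == -EBUSY && ++retry < 3) {
			usleep_range(10000, 11000);	/* ~10 ms backoff */
			continue;
		}
		break;
	} while (1);

	return ret;
}
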
+diff --git a/drivers/power/supply/collie_battery.c b/drivers/power/supply/collie_battery.c
+index 68390bd1004f04..3daf7befc0bf64 100644
+--- a/drivers/power/supply/collie_battery.c
++++ b/drivers/power/supply/collie_battery.c
+@@ -440,6 +440,7 @@ static int collie_bat_probe(struct ucb1x00_dev *dev)
+ 
+ static void collie_bat_remove(struct ucb1x00_dev *dev)
+ {
++	device_init_wakeup(&ucb->dev, 0);
+ 	free_irq(gpiod_to_irq(collie_bat_main.gpio_full), &collie_bat_main);
+ 	power_supply_unregister(collie_bat_bu.psy);
+ 	power_supply_unregister(collie_bat_main.psy);
+diff --git a/drivers/power/supply/gpio-charger.c b/drivers/power/supply/gpio-charger.c
+index 1dfd5b0cb30d8e..1b2da9b5fb6541 100644
+--- a/drivers/power/supply/gpio-charger.c
++++ b/drivers/power/supply/gpio-charger.c
+@@ -366,7 +366,9 @@ static int gpio_charger_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, gpio_charger);
+ 
+-	device_init_wakeup(dev, 1);
++	ret = devm_device_init_wakeup(dev);
++	if (ret)
++		return dev_err_probe(dev, ret, "Failed to init wakeup\n");
+ 
+ 	return 0;
+ }
+diff --git a/drivers/power/supply/max17040_battery.c b/drivers/power/supply/max17040_battery.c
+index 51310f6e4803b9..c1640bc6accd27 100644
+--- a/drivers/power/supply/max17040_battery.c
++++ b/drivers/power/supply/max17040_battery.c
+@@ -410,8 +410,9 @@ static int max17040_get_property(struct power_supply *psy,
+ 		if (!chip->channel_temp)
+ 			return -ENODATA;
+ 
+-		iio_read_channel_processed_scale(chip->channel_temp,
+-						 &val->intval, 10);
++		iio_read_channel_processed(chip->channel_temp, &val->intval);
++		val->intval /= 100; /* Convert from milli- to deci-degree */
++
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c
+index 35a5994bf64f63..36f57d7b4a6671 100644
+--- a/drivers/ptp/ptp_clock.c
++++ b/drivers/ptp/ptp_clock.c
+@@ -121,7 +121,8 @@ static int ptp_clock_adjtime(struct posix_clock *pc, struct __kernel_timex *tx)
+ 	struct ptp_clock_info *ops;
+ 	int err = -EOPNOTSUPP;
+ 
+-	if (ptp_clock_freerun(ptp)) {
++	if (tx->modes & (ADJ_SETOFFSET | ADJ_FREQUENCY | ADJ_OFFSET) &&
++	    ptp_clock_freerun(ptp)) {
+ 		pr_err("ptp: physical clock is free running\n");
+ 		return -EBUSY;
+ 	}
+diff --git a/drivers/ptp/ptp_private.h b/drivers/ptp/ptp_private.h
+index 528d86a33f37de..a6aad743c282f4 100644
+--- a/drivers/ptp/ptp_private.h
++++ b/drivers/ptp/ptp_private.h
+@@ -98,7 +98,27 @@ static inline int queue_cnt(const struct timestamp_event_queue *q)
+ /* Check if ptp virtual clock is in use */
+ static inline bool ptp_vclock_in_use(struct ptp_clock *ptp)
+ {
+-	return !ptp->is_virtual_clock;
++	bool in_use = false;
++
++	/* Virtual clocks can't be stacked on top of virtual clocks.
++	 * Avoid acquiring the n_vclocks_mux on virtual clocks, to allow this
++	 * function to be called from code paths where the n_vclocks_mux of the
++	 * parent physical clock is already held. Functionally that's not an
++	 * issue, but lockdep would complain, because they have the same lock
++	 * class.
++	 */
++	if (ptp->is_virtual_clock)
++		return false;
++
++	if (mutex_lock_interruptible(&ptp->n_vclocks_mux))
++		return true;
++
++	if (ptp->n_vclocks)
++		in_use = true;
++
++	mutex_unlock(&ptp->n_vclocks_mux);
++
++	return in_use;
+ }
+ 
+ /* Check if ptp clock shall be free running */
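
The ptp_vclock_in_use() rewrite above turns a constant into a locked
counter check, and deliberately errs on the side of "in use" if the lock
attempt is interrupted by a signal. Reduced to its essentials (the struct
is illustrative):

#include <linux/mutex.h>
#include <linux/types.h>

struct counted {
	struct mutex lock;
	unsigned int users;
};

static bool example_in_use(struct counted *c)
{
	bool in_use;

	if (mutex_lock_interruptible(&c->lock))
		return true;	/* conservative answer on a signal */

	in_use = c->users != 0;
	mutex_unlock(&c->lock);

	return in_use;
}
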
+diff --git a/drivers/pwm/pwm-axi-pwmgen.c b/drivers/pwm/pwm-axi-pwmgen.c
+index 4337c8f5acf055..60dcd354237316 100644
+--- a/drivers/pwm/pwm-axi-pwmgen.c
++++ b/drivers/pwm/pwm-axi-pwmgen.c
+@@ -257,7 +257,7 @@ static int axi_pwmgen_probe(struct platform_device *pdev)
+ 	struct regmap *regmap;
+ 	struct pwm_chip *chip;
+ 	struct axi_pwmgen_ddata *ddata;
+-	struct clk *clk;
++	struct clk *axi_clk, *clk;
+ 	void __iomem *io_base;
+ 	int ret;
+ 
+@@ -280,9 +280,26 @@ static int axi_pwmgen_probe(struct platform_device *pdev)
+ 	ddata = pwmchip_get_drvdata(chip);
+ 	ddata->regmap = regmap;
+ 
+-	clk = devm_clk_get_enabled(dev, NULL);
++	/*
++	 * Using NULL here instead of "axi" for backwards compatibility. There
++	 * are some dtbs that don't give clock-names and have the "ext" clock
++	 * as the one and only clock (due to mistake in the original bindings).
++	 */
++	axi_clk = devm_clk_get_enabled(dev, NULL);
++	if (IS_ERR(axi_clk))
++		return dev_err_probe(dev, PTR_ERR(axi_clk), "failed to get axi clock\n");
++
++	clk = devm_clk_get_optional_enabled(dev, "ext");
+ 	if (IS_ERR(clk))
+-		return dev_err_probe(dev, PTR_ERR(clk), "failed to get clock\n");
++		return dev_err_probe(dev, PTR_ERR(clk), "failed to get ext clock\n");
++
++	/*
++	 * If there is no "ext" clock, it means the HDL was compiled with
++	 * ASYNC_CLK_EN=0. In this case, the AXI clock is also used for the
++	 * PWM output clock.
++	 */
++	if (!clk)
++		clk = axi_clk;
+ 
+ 	ret = devm_clk_rate_exclusive_get(dev, clk);
+ 	if (ret)
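
The axi-pwmgen probe relies on the *_optional clock getters returning NULL
rather than an error when the devicetree simply omits the clock, which
makes the fallback a one-liner. A hedged sketch:

#include <linux/clk.h>
#include <linux/err.h>

/* Sketch: use the "ext" clock when present, else reuse the bus clock. */
static struct clk *example_pick_clk(struct device *dev, struct clk *bus_clk)
{
	struct clk *clk = devm_clk_get_optional_enabled(dev, "ext");

	if (IS_ERR(clk))
		return clk;		/* real error, e.g. -EPROBE_DEFER */

	return clk ?: bus_clk;		/* NULL means "not specified" */
}
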
+diff --git a/drivers/rapidio/rio_cm.c b/drivers/rapidio/rio_cm.c
+index 9135227301c8d6..e548edf64eca04 100644
+--- a/drivers/rapidio/rio_cm.c
++++ b/drivers/rapidio/rio_cm.c
+@@ -789,6 +789,9 @@ static int riocm_ch_send(u16 ch_id, void *buf, int len)
+ 	if (buf == NULL || ch_id == 0 || len == 0 || len > RIO_MAX_MSG_SIZE)
+ 		return -EINVAL;
+ 
++	if (len < sizeof(struct rio_ch_chan_hdr))
++		return -EINVAL;		/* insufficient data from user */
++
+ 	ch = riocm_get_channel(ch_id);
+ 	if (!ch) {
+ 		riocm_error("%s(%d) ch_%d not found", current->comm,
+diff --git a/drivers/regulator/max14577-regulator.c b/drivers/regulator/max14577-regulator.c
+index 5e7171b9065ae7..41fd15adfd1fdd 100644
+--- a/drivers/regulator/max14577-regulator.c
++++ b/drivers/regulator/max14577-regulator.c
+@@ -40,11 +40,14 @@ static int max14577_reg_get_current_limit(struct regulator_dev *rdev)
+ 	struct max14577 *max14577 = rdev_get_drvdata(rdev);
+ 	const struct maxim_charger_current *limits =
+ 		&maxim_charger_currents[max14577->dev_type];
++	int ret;
+ 
+ 	if (rdev_get_id(rdev) != MAX14577_CHARGER)
+ 		return -EINVAL;
+ 
+-	max14577_read_reg(rmap, MAX14577_CHG_REG_CHG_CTRL4, &reg_data);
++	ret = max14577_read_reg(rmap, MAX14577_CHG_REG_CHG_CTRL4, &reg_data);
++	if (ret < 0)
++		return ret;
+ 
+ 	if ((reg_data & CHGCTRL4_MBCICHWRCL_MASK) == 0)
+ 		return limits->min;
+diff --git a/drivers/regulator/max20086-regulator.c b/drivers/regulator/max20086-regulator.c
+index 3d333b61fb18c8..fcdd2d0317a573 100644
+--- a/drivers/regulator/max20086-regulator.c
++++ b/drivers/regulator/max20086-regulator.c
+@@ -29,7 +29,7 @@
+ #define	MAX20086_REG_ADC4		0x09
+ 
+ /* DEVICE IDs */
+-#define MAX20086_DEVICE_ID_MAX20086	0x40
++#define MAX20086_DEVICE_ID_MAX20086	0x30
+ #define MAX20086_DEVICE_ID_MAX20087	0x20
+ #define MAX20086_DEVICE_ID_MAX20088	0x10
+ #define MAX20086_DEVICE_ID_MAX20089	0x00
+@@ -264,7 +264,7 @@ static int max20086_i2c_probe(struct i2c_client *i2c)
+ 	 * shutdown.
+ 	 */
+ 	flags = boot_on ? GPIOD_OUT_HIGH : GPIOD_OUT_LOW;
+-	chip->ena_gpiod = devm_gpiod_get(chip->dev, "enable", flags);
++	chip->ena_gpiod = devm_gpiod_get_optional(chip->dev, "enable", flags);
+ 	if (IS_ERR(chip->ena_gpiod)) {
+ 		ret = PTR_ERR(chip->ena_gpiod);
+ 		dev_err(chip->dev, "Failed to get enable GPIO: %d\n", ret);
+diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
+index b21eedefff877a..48d146e1fa5603 100644
+--- a/drivers/remoteproc/remoteproc_core.c
++++ b/drivers/remoteproc/remoteproc_core.c
+@@ -1617,7 +1617,7 @@ static int rproc_attach(struct rproc *rproc)
+ 	ret = rproc_set_rsc_table(rproc);
+ 	if (ret) {
+ 		dev_err(dev, "can't load resource table: %d\n", ret);
+-		goto unprepare_device;
++		goto clean_up_resources;
+ 	}
+ 
+ 	/* reset max_notifyid */
+@@ -1634,7 +1634,7 @@ static int rproc_attach(struct rproc *rproc)
+ 	ret = rproc_handle_resources(rproc, rproc_loading_handlers);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to process resources: %d\n", ret);
+-		goto unprepare_device;
++		goto clean_up_resources;
+ 	}
+ 
+ 	/* Allocate carveout resources associated to rproc */
+@@ -1653,9 +1653,9 @@ static int rproc_attach(struct rproc *rproc)
+ 
+ clean_up_resources:
+ 	rproc_resource_cleanup(rproc);
+-unprepare_device:
+ 	/* release HW resources if needed */
+ 	rproc_unprepare_device(rproc);
++	kfree(rproc->clean_table);
+ disable_iommu:
+ 	rproc_disable_iommu(rproc);
+ 	return ret;
+diff --git a/drivers/remoteproc/ti_k3_m4_remoteproc.c b/drivers/remoteproc/ti_k3_m4_remoteproc.c
+index a16fb165fcedd4..6cd50b16a8e82a 100644
+--- a/drivers/remoteproc/ti_k3_m4_remoteproc.c
++++ b/drivers/remoteproc/ti_k3_m4_remoteproc.c
+@@ -228,7 +228,7 @@ static int k3_m4_rproc_unprepare(struct rproc *rproc)
+ 	int ret;
+ 
+ 	/* If the core is going to be detached do not assert the module reset */
+-	if (rproc->state == RPROC_ATTACHED)
++	if (rproc->state == RPROC_DETACHED)
+ 		return 0;
+ 
+ 	ret = kproc->ti_sci->ops.dev_ops.put_device(kproc->ti_sci,
+diff --git a/drivers/s390/scsi/zfcp_sysfs.c b/drivers/s390/scsi/zfcp_sysfs.c
+index 41e36af35488eb..90a84ae98b971c 100644
+--- a/drivers/s390/scsi/zfcp_sysfs.c
++++ b/drivers/s390/scsi/zfcp_sysfs.c
+@@ -449,6 +449,8 @@ static ssize_t zfcp_sysfs_unit_add_store(struct device *dev,
+ 	if (kstrtoull(buf, 0, (unsigned long long *) &fcp_lun))
+ 		return -EINVAL;
+ 
++	flush_work(&port->rport_work);
++
+ 	retval = zfcp_unit_add(port, fcp_lun);
+ 	if (retval)
+ 		return retval;
+diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
+index 5a5525054d71c8..5b079b8b7a082b 100644
+--- a/drivers/scsi/elx/efct/efct_hw.c
++++ b/drivers/scsi/elx/efct/efct_hw.c
+@@ -1120,7 +1120,7 @@ int
+ efct_hw_parse_filter(struct efct_hw *hw, void *value)
+ {
+ 	int rc = 0;
+-	char *p = NULL;
++	char *p = NULL, *pp = NULL;
+ 	char *token;
+ 	u32 idx = 0;
+ 
+@@ -1132,6 +1132,7 @@ efct_hw_parse_filter(struct efct_hw *hw, void *value)
+ 		efc_log_err(hw->os, "p is NULL\n");
+ 		return -ENOMEM;
+ 	}
++	pp = p;
+ 
+ 	idx = 0;
+ 	while ((token = strsep(&p, ",")) && *token) {
+@@ -1144,7 +1145,7 @@ efct_hw_parse_filter(struct efct_hw *hw, void *value)
+ 		if (idx == ARRAY_SIZE(hw->config.filter_def))
+ 			break;
+ 	}
+-	kfree(p);
++	kfree(pp);
+ 
+ 	return rc;
+ }
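
The efct fix above is the standard strsep() discipline: the parse cursor
advances through the buffer, so only the saved original pointer may be
handed to kfree(). In isolation:

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/string.h>

static int example_parse(const char *value)
{
	char *p = kstrdup(value, GFP_KERNEL);
	char *cursor = p;
	char *token;

	if (!p)
		return -ENOMEM;

	while ((token = strsep(&cursor, ",")) && *token)
		;	/* handle each token here */

	kfree(p);	/* cursor may be NULL by now; p is still valid */
	return 0;
}
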
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index c2ec4db6728697..c256c3edd66392 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -5055,7 +5055,7 @@ lpfc_check_sli_ndlp(struct lpfc_hba *phba,
+ 		case CMD_GEN_REQUEST64_CR:
+ 			if (iocb->ndlp == ndlp)
+ 				return 1;
+-			fallthrough;
++			break;
+ 		case CMD_ELS_REQUEST64_CR:
+ 			if (remote_id == ndlp->nlp_DID)
+ 				return 1;
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 6574f9e744766d..a335d34070d3c5 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -6003,9 +6003,9 @@ lpfc_sli4_get_ctl_attr(struct lpfc_hba *phba)
+ 	phba->sli4_hba.flash_id = bf_get(lpfc_cntl_attr_flash_id, cntl_attr);
+ 	phba->sli4_hba.asic_rev = bf_get(lpfc_cntl_attr_asic_rev, cntl_attr);
+ 
+-	memset(phba->BIOSVersion, 0, sizeof(phba->BIOSVersion));
+-	strlcat(phba->BIOSVersion, (char *)cntl_attr->bios_ver_str,
++	memcpy(phba->BIOSVersion, cntl_attr->bios_ver_str,
+ 		sizeof(phba->BIOSVersion));
++	phba->BIOSVersion[sizeof(phba->BIOSVersion) - 1] = '\0';
+ 
+ 	lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
+ 			"3086 lnk_type:%d, lnk_numb:%d, bios_ver:%s, "
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index 6c9dec7e3128fd..66c9d1a2c94de2 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -9709,6 +9709,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x1bd4, 0x0089)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++				0x1bd4, 0x00a3)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x1ff9, 0x00a1)
+@@ -10045,6 +10049,30 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       PCI_VENDOR_ID_ADAPTEC2, 0x14f0)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x207d, 0x4044)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x207d, 0x4054)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x207d, 0x4084)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x207d, 0x4094)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x207d, 0x4140)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x207d, 0x4240)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       PCI_VENDOR_ID_ADVANTECH, 0x8312)
+@@ -10261,6 +10289,14 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x1cc4, 0x0201)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1018, 0x8238)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1f3f, 0x0610)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       PCI_VENDOR_ID_LENOVO, 0x0220)
+@@ -10269,10 +10305,30 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       PCI_VENDOR_ID_LENOVO, 0x0221)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_LENOVO, 0x0222)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_LENOVO, 0x0223)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_LENOVO, 0x0224)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_LENOVO, 0x0225)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       PCI_VENDOR_ID_LENOVO, 0x0520)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_LENOVO, 0x0521)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       PCI_VENDOR_ID_LENOVO, 0x0522)
+@@ -10293,6 +10349,26 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       PCI_VENDOR_ID_LENOVO, 0x0623)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_LENOVO, 0x0624)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_LENOVO, 0x0625)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_LENOVO, 0x0626)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_LENOVO, 0x0627)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_LENOVO, 0x0628)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 				0x1014, 0x0718)
+@@ -10321,6 +10397,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x1137, 0x0300)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++				0x1ded, 0x3301)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x1ff9, 0x0045)
+@@ -10469,6 +10549,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 				0x1f51, 0x100a)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++				0x1f51, 0x100b)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x1f51, 0x100e)
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index 2e6b2412d2c946..d9e59204a9c369 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -362,7 +362,7 @@ MODULE_PARM_DESC(ring_avail_percent_lowater,
+ /*
+  * Timeout in seconds for all devices managed by this driver.
+  */
+-static int storvsc_timeout = 180;
++static const int storvsc_timeout = 180;
+ 
+ #if IS_ENABLED(CONFIG_SCSI_FC_ATTRS)
+ static struct scsi_transport_template *fc_transport_template;
+@@ -768,7 +768,7 @@ static void  handle_multichannel_storage(struct hv_device *device, int max_chns)
+ 		return;
+ 	}
+ 
+-	t = wait_for_completion_timeout(&request->wait_event, 10*HZ);
++	t = wait_for_completion_timeout(&request->wait_event, storvsc_timeout * HZ);
+ 	if (t == 0) {
+ 		dev_err(dev, "Failed to create sub-channel: timed out\n");
+ 		return;
+@@ -833,7 +833,7 @@ static int storvsc_execute_vstor_op(struct hv_device *device,
+ 	if (ret != 0)
+ 		return ret;
+ 
+-	t = wait_for_completion_timeout(&request->wait_event, 5*HZ);
++	t = wait_for_completion_timeout(&request->wait_event, storvsc_timeout * HZ);
+ 	if (t == 0)
+ 		return -ETIMEDOUT;
+ 
+@@ -1350,6 +1350,8 @@ static int storvsc_connect_to_vsp(struct hv_device *device, u32 ring_size,
+ 		return ret;
+ 
+ 	ret = storvsc_channel_init(device, is_fc);
++	if (ret)
++		vmbus_close(device->channel);
+ 
+ 	return ret;
+ }
+@@ -1668,7 +1670,7 @@ static int storvsc_host_reset_handler(struct scsi_cmnd *scmnd)
+ 	if (ret != 0)
+ 		return FAILED;
+ 
+-	t = wait_for_completion_timeout(&request->wait_event, 5*HZ);
++	t = wait_for_completion_timeout(&request->wait_event, storvsc_timeout * HZ);
+ 	if (t == 0)
+ 		return TIMEOUT_ERROR;
+ 
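
The storvsc_connect_to_vsp() hunk closes the channel it just opened when
the follow-up init step fails, so callers never inherit a half-open
channel. The general shape, with all names hypothetical:

#include <linux/errno.h>

struct dev_state;
int dev_open(struct dev_state *s);
int dev_init(struct dev_state *s);
void dev_close(struct dev_state *s);

static int example_connect(struct dev_state *s)
{
	int ret = dev_open(s);

	if (ret)
		return ret;

	ret = dev_init(s);
	if (ret)
		dev_close(s);	/* undo the open on failure */

	return ret;
}
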
+diff --git a/drivers/soc/qcom/pmic_glink_altmode.c b/drivers/soc/qcom/pmic_glink_altmode.c
+index bd06ce16180411..7f11acd33323b7 100644
+--- a/drivers/soc/qcom/pmic_glink_altmode.c
++++ b/drivers/soc/qcom/pmic_glink_altmode.c
+@@ -218,21 +218,29 @@ static void pmic_glink_altmode_worker(struct work_struct *work)
+ {
+ 	struct pmic_glink_altmode_port *alt_port = work_to_altmode_port(work);
+ 	struct pmic_glink_altmode *altmode = alt_port->altmode;
++	enum drm_connector_status conn_status;
+ 
+ 	typec_switch_set(alt_port->typec_switch, alt_port->orientation);
+ 
+-	if (alt_port->svid == USB_TYPEC_DP_SID && alt_port->mode == 0xff)
+-		pmic_glink_altmode_safe(altmode, alt_port);
+-	else if (alt_port->svid == USB_TYPEC_DP_SID)
+-		pmic_glink_altmode_enable_dp(altmode, alt_port, alt_port->mode,
+-					     alt_port->hpd_state, alt_port->hpd_irq);
+-	else
+-		pmic_glink_altmode_enable_usb(altmode, alt_port);
++	if (alt_port->svid == USB_TYPEC_DP_SID) {
++		if (alt_port->mode == 0xff) {
++			pmic_glink_altmode_safe(altmode, alt_port);
++		} else {
++			pmic_glink_altmode_enable_dp(altmode, alt_port,
++						     alt_port->mode,
++						     alt_port->hpd_state,
++						     alt_port->hpd_irq);
++		}
+ 
+-	drm_aux_hpd_bridge_notify(&alt_port->bridge->dev,
+-				  alt_port->hpd_state ?
+-				  connector_status_connected :
+-				  connector_status_disconnected);
++		if (alt_port->hpd_state)
++			conn_status = connector_status_connected;
++		else
++			conn_status = connector_status_disconnected;
++
++		drm_aux_hpd_bridge_notify(&alt_port->bridge->dev, conn_status);
++	} else {
++		pmic_glink_altmode_enable_usb(altmode, alt_port);
++	}
+ 
+ 	pmic_glink_altmode_request(altmode, ALTMODE_PAN_ACK, alt_port->index);
+ }
+diff --git a/drivers/spi/spi-qpic-snand.c b/drivers/spi/spi-qpic-snand.c
+index 44a8f58e46fe12..d3a4e091dca4ee 100644
+--- a/drivers/spi/spi-qpic-snand.c
++++ b/drivers/spi/spi-qpic-snand.c
+@@ -1636,6 +1636,7 @@ static void qcom_spi_remove(struct platform_device *pdev)
+ 
+ static const struct qcom_nandc_props ipq9574_snandc_props = {
+ 	.dev_cmd_reg_start = 0x7000,
++	.bam_offset = 0x30000,
+ 	.supports_bam = true,
+ };
+ 
+diff --git a/drivers/staging/iio/impedance-analyzer/ad5933.c b/drivers/staging/iio/impedance-analyzer/ad5933.c
+index d5544fc2fe989b..f8fcc10ea81509 100644
+--- a/drivers/staging/iio/impedance-analyzer/ad5933.c
++++ b/drivers/staging/iio/impedance-analyzer/ad5933.c
+@@ -411,7 +411,7 @@ static ssize_t ad5933_store(struct device *dev,
+ 		ret = ad5933_cmd(st, 0);
+ 		break;
+ 	case AD5933_OUT_SETTLING_CYCLES:
+-		val = clamp(val, (u16)0, (u16)0x7FF);
++		val = clamp(val, (u16)0, (u16)0x7FC);
+ 		st->settling_cycles = val;
+ 
+ 		/* 2x, 4x handling, see datasheet */
+diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
+index a9bfd5305410c2..65befffc356966 100644
+--- a/drivers/staging/media/rkvdec/rkvdec.c
++++ b/drivers/staging/media/rkvdec/rkvdec.c
+@@ -825,24 +825,24 @@ static int rkvdec_open(struct file *filp)
+ 	rkvdec_reset_decoded_fmt(ctx);
+ 	v4l2_fh_init(&ctx->fh, video_devdata(filp));
+ 
+-	ret = rkvdec_init_ctrls(ctx);
+-	if (ret)
+-		goto err_free_ctx;
+-
+ 	ctx->fh.m2m_ctx = v4l2_m2m_ctx_init(rkvdec->m2m_dev, ctx,
+ 					    rkvdec_queue_init);
+ 	if (IS_ERR(ctx->fh.m2m_ctx)) {
+ 		ret = PTR_ERR(ctx->fh.m2m_ctx);
+-		goto err_cleanup_ctrls;
++		goto err_free_ctx;
+ 	}
+ 
++	ret = rkvdec_init_ctrls(ctx);
++	if (ret)
++		goto err_cleanup_m2m_ctx;
++
+ 	filp->private_data = &ctx->fh;
+ 	v4l2_fh_add(&ctx->fh);
+ 
+ 	return 0;
+ 
+-err_cleanup_ctrls:
+-	v4l2_ctrl_handler_free(&ctx->ctrl_hdl);
++err_cleanup_m2m_ctx:
++	v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
+ 
+ err_free_ctx:
+ 	kfree(ctx);
+diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
+index d113679b1e2d7a..acc7998758ad84 100644
+--- a/drivers/tee/tee_core.c
++++ b/drivers/tee/tee_core.c
+@@ -10,6 +10,7 @@
+ #include <linux/fs.h>
+ #include <linux/idr.h>
+ #include <linux/module.h>
++#include <linux/overflow.h>
+ #include <linux/slab.h>
+ #include <linux/tee_core.h>
+ #include <linux/uaccess.h>
+@@ -19,7 +20,7 @@
+ 
+ #define TEE_NUM_DEVICES	32
+ 
+-#define TEE_IOCTL_PARAM_SIZE(x) (sizeof(struct tee_param) * (x))
++#define TEE_IOCTL_PARAM_SIZE(x) (size_mul(sizeof(struct tee_param), (x)))
+ 
+ #define TEE_UUID_NS_NAME_SIZE	128
+ 
+@@ -487,7 +488,7 @@ static int tee_ioctl_open_session(struct tee_context *ctx,
+ 	if (copy_from_user(&arg, uarg, sizeof(arg)))
+ 		return -EFAULT;
+ 
+-	if (sizeof(arg) + TEE_IOCTL_PARAM_SIZE(arg.num_params) != buf.buf_len)
++	if (size_add(sizeof(arg), TEE_IOCTL_PARAM_SIZE(arg.num_params)) != buf.buf_len)
+ 		return -EINVAL;
+ 
+ 	if (arg.num_params) {
+@@ -565,7 +566,7 @@ static int tee_ioctl_invoke(struct tee_context *ctx,
+ 	if (copy_from_user(&arg, uarg, sizeof(arg)))
+ 		return -EFAULT;
+ 
+-	if (sizeof(arg) + TEE_IOCTL_PARAM_SIZE(arg.num_params) != buf.buf_len)
++	if (size_add(sizeof(arg), TEE_IOCTL_PARAM_SIZE(arg.num_params)) != buf.buf_len)
+ 		return -EINVAL;
+ 
+ 	if (arg.num_params) {
+@@ -699,7 +700,7 @@ static int tee_ioctl_supp_recv(struct tee_context *ctx,
+ 	if (get_user(num_params, &uarg->num_params))
+ 		return -EFAULT;
+ 
+-	if (sizeof(*uarg) + TEE_IOCTL_PARAM_SIZE(num_params) != buf.buf_len)
++	if (size_add(sizeof(*uarg), TEE_IOCTL_PARAM_SIZE(num_params)) != buf.buf_len)
+ 		return -EINVAL;
+ 
+ 	params = kcalloc(num_params, sizeof(struct tee_param), GFP_KERNEL);
+@@ -798,7 +799,7 @@ static int tee_ioctl_supp_send(struct tee_context *ctx,
+ 	    get_user(num_params, &uarg->num_params))
+ 		return -EFAULT;
+ 
+-	if (sizeof(*uarg) + TEE_IOCTL_PARAM_SIZE(num_params) > buf.buf_len)
++	if (size_add(sizeof(*uarg), TEE_IOCTL_PARAM_SIZE(num_params)) > buf.buf_len)
+ 		return -EINVAL;
+ 
+ 	params = kcalloc(num_params, sizeof(struct tee_param), GFP_KERNEL);
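
The tee_core.c hunks replace raw sizeof arithmetic with size_add() and
size_mul(), which saturate at SIZE_MAX instead of wrapping, so an
oversized num_params from userspace can no longer collide with a small
buffer length. Condensed:

#include <linux/overflow.h>
#include <linux/types.h>

/* Sketch: saturating math keeps the equality test safe even for
 * attacker-chosen element counts. */
static bool example_len_ok(size_t hdr, size_t elem, size_t n, size_t buf_len)
{
	return size_add(hdr, size_mul(elem, n)) == buf_len;
}
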
+diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c
+index 69c1df0f4ca541..aac67a4413cea9 100644
+--- a/drivers/uio/uio_hv_generic.c
++++ b/drivers/uio/uio_hv_generic.c
+@@ -243,6 +243,9 @@ hv_uio_probe(struct hv_device *dev,
+ 	if (!ring_size)
+ 		ring_size = SZ_2M;
+ 
++	/* Adjust ring size if necessary to have it page aligned */
++	ring_size = VMBUS_RING_SIZE(ring_size);
++
+ 	pdata = devm_kzalloc(&dev->device, sizeof(*pdata), GFP_KERNEL);
+ 	if (!pdata)
+ 		return -ENOMEM;
+@@ -274,13 +277,13 @@ hv_uio_probe(struct hv_device *dev,
+ 	pdata->info.mem[INT_PAGE_MAP].name = "int_page";
+ 	pdata->info.mem[INT_PAGE_MAP].addr
+ 		= (uintptr_t)vmbus_connection.int_page;
+-	pdata->info.mem[INT_PAGE_MAP].size = PAGE_SIZE;
++	pdata->info.mem[INT_PAGE_MAP].size = HV_HYP_PAGE_SIZE;
+ 	pdata->info.mem[INT_PAGE_MAP].memtype = UIO_MEM_LOGICAL;
+ 
+ 	pdata->info.mem[MON_PAGE_MAP].name = "monitor_page";
+ 	pdata->info.mem[MON_PAGE_MAP].addr
+ 		= (uintptr_t)vmbus_connection.monitor_pages[1];
+-	pdata->info.mem[MON_PAGE_MAP].size = PAGE_SIZE;
++	pdata->info.mem[MON_PAGE_MAP].size = HV_HYP_PAGE_SIZE;
+ 	pdata->info.mem[MON_PAGE_MAP].memtype = UIO_MEM_LOGICAL;
+ 
+ 	if (channel->device_id == HV_NIC) {
+diff --git a/drivers/video/console/dummycon.c b/drivers/video/console/dummycon.c
+index 139049368fdcf8..7d02470f19b932 100644
+--- a/drivers/video/console/dummycon.c
++++ b/drivers/video/console/dummycon.c
+@@ -85,6 +85,15 @@ static bool dummycon_blank(struct vc_data *vc, enum vesa_blank_mode blank,
+ 	/* Redraw, so that we get putc(s) for output done while blanked */
+ 	return true;
+ }
++
++static bool dummycon_switch(struct vc_data *vc)
++{
++	/*
++	 * Redraw, so that we get putc(s) for output done while switched
++	 * away. Informs deferred consoles to take over the display.
++	 */
++	return true;
++}
+ #else
+ static void dummycon_putc(struct vc_data *vc, u16 c, unsigned int y,
+ 			  unsigned int x) { }
+@@ -95,6 +104,10 @@ static bool dummycon_blank(struct vc_data *vc, enum vesa_blank_mode blank,
+ {
+ 	return false;
+ }
++static bool dummycon_switch(struct vc_data *vc)
++{
++	return false;
++}
+ #endif
+ 
+ static const char *dummycon_startup(void)
+@@ -124,11 +137,6 @@ static bool dummycon_scroll(struct vc_data *vc, unsigned int top,
+ 	return false;
+ }
+ 
+-static bool dummycon_switch(struct vc_data *vc)
+-{
+-	return false;
+-}
+-
+ /*
+  *  The console `switch' structure for the dummy console
+  *
+diff --git a/drivers/video/console/vgacon.c b/drivers/video/console/vgacon.c
+index 37bd18730fe0df..f9cdbf8c53e34b 100644
+--- a/drivers/video/console/vgacon.c
++++ b/drivers/video/console/vgacon.c
+@@ -1168,7 +1168,7 @@ static bool vgacon_scroll(struct vc_data *c, unsigned int t, unsigned int b,
+ 				     c->vc_screenbuf_size - delta);
+ 			c->vc_origin = vga_vram_end - c->vc_screenbuf_size;
+ 			vga_rolled_over = 0;
+-		} else
++		} else if (oldo - delta >= (unsigned long)c->vc_screenbuf)
+ 			c->vc_origin -= delta;
+ 		c->vc_scr_end = c->vc_origin + c->vc_screenbuf_size;
+ 		scr_memsetw((u16 *) (c->vc_origin), c->vc_video_erase_char,
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index ac3c99ed92d1aa..2df48037688d1d 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -117,9 +117,14 @@ static signed char con2fb_map_boot[MAX_NR_CONSOLES];
+ 
+ static struct fb_info *fbcon_info_from_console(int console)
+ {
++	signed char fb;
+ 	WARN_CONSOLE_UNLOCKED();
+ 
+-	return fbcon_registered_fb[con2fb_map[console]];
++	fb = con2fb_map[console];
++	if (fb < 0 || fb >= ARRAY_SIZE(fbcon_registered_fb))
++		return NULL;
++
++	return fbcon_registered_fb[fb];
+ }
+ 
+ static int logo_lines;
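
The fbcon lookup above guards a signed map entry, which may be -1 for
"unset", before indexing the registered-fb array. The same guard in
isolation (the table is hypothetical):

#include <linux/stddef.h>

struct fb_info;

static struct fb_info *example_lookup(signed char idx,
				      struct fb_info *table[], size_t n)
{
	if (idx < 0 || (size_t)idx >= n)
		return NULL;	/* unset or out of range */

	return table[idx];
}
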
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index 3c568cff2913e4..eca2498f243685 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -328,8 +328,10 @@ fb_set_var(struct fb_info *info, struct fb_var_screeninfo *var)
+ 	    !list_empty(&info->modelist))
+ 		ret = fb_add_videomode(&mode, &info->modelist);
+ 
+-	if (ret)
++	if (ret) {
++		info->var = old_var;
+ 		return ret;
++	}
+ 
+ 	event.info = info;
+ 	event.data = &mode;
+@@ -388,7 +390,7 @@ static int fb_check_foreignness(struct fb_info *fi)
+ 
+ static int do_register_framebuffer(struct fb_info *fb_info)
+ {
+-	int i;
++	int i, err = 0;
+ 	struct fb_videomode mode;
+ 
+ 	if (fb_check_foreignness(fb_info))
+@@ -397,10 +399,18 @@ static int do_register_framebuffer(struct fb_info *fb_info)
+ 	if (num_registered_fb == FB_MAX)
+ 		return -ENXIO;
+ 
+-	num_registered_fb++;
+ 	for (i = 0 ; i < FB_MAX; i++)
+ 		if (!registered_fb[i])
+ 			break;
++
++	if (!fb_info->modelist.prev || !fb_info->modelist.next)
++		INIT_LIST_HEAD(&fb_info->modelist);
++
++	fb_var_to_videomode(&mode, &fb_info->var);
++	err = fb_add_videomode(&mode, &fb_info->modelist);
++	if (err < 0)
++		return err;
++
+ 	fb_info->node = i;
+ 	refcount_set(&fb_info->count, 1);
+ 	mutex_init(&fb_info->lock);
+@@ -426,16 +436,12 @@ static int do_register_framebuffer(struct fb_info *fb_info)
+ 	if (bitmap_empty(fb_info->pixmap.blit_y, FB_MAX_BLIT_HEIGHT))
+ 		bitmap_fill(fb_info->pixmap.blit_y, FB_MAX_BLIT_HEIGHT);
+ 
+-	if (!fb_info->modelist.prev || !fb_info->modelist.next)
+-		INIT_LIST_HEAD(&fb_info->modelist);
+-
+ 	if (fb_info->skip_vt_switch)
+ 		pm_vt_switch_required(fb_info->device, false);
+ 	else
+ 		pm_vt_switch_required(fb_info->device, true);
+ 
+-	fb_var_to_videomode(&mode, &fb_info->var);
+-	fb_add_videomode(&mode, &fb_info->modelist);
++	num_registered_fb++;
+ 	registered_fb[i] = fb_info;
+ 
+ #ifdef CONFIG_GUMSTIX_AM200EPD
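
The fbmem reordering follows a general rule: perform every step that can
fail before publishing the object into global state, so an error path has
nothing to roll back. Abstractly, with hypothetical names:

#include <linux/errno.h>

struct registry {
	void *slot[8];
	int count;
};

static int example_register(struct registry *r, void *obj,
			    int (*prepare)(void *))
{
	int i, err;

	for (i = 0; i < 8; i++)
		if (!r->slot[i])
			break;
	if (i == 8)
		return -ENXIO;

	err = prepare(obj);	/* all failable work first */
	if (err)
		return err;	/* nothing published yet */

	r->slot[i] = obj;	/* commit only after success */
	r->count++;
	return 0;
}
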
+diff --git a/drivers/video/screen_info_pci.c b/drivers/video/screen_info_pci.c
+index 6c583351714100..66bfc1d0a6dc82 100644
+--- a/drivers/video/screen_info_pci.c
++++ b/drivers/video/screen_info_pci.c
+@@ -7,8 +7,8 @@
+ 
+ static struct pci_dev *screen_info_lfb_pdev;
+ static size_t screen_info_lfb_bar;
+-static resource_size_t screen_info_lfb_offset;
+-static struct resource screen_info_lfb_res = DEFINE_RES_MEM(0, 0);
++static resource_size_t screen_info_lfb_res_start; // original start of resource
++static resource_size_t screen_info_lfb_offset; // framebuffer offset within resource
+ 
+ static bool __screen_info_relocation_is_valid(const struct screen_info *si, struct resource *pr)
+ {
+@@ -31,7 +31,7 @@ void screen_info_apply_fixups(void)
+ 	if (screen_info_lfb_pdev) {
+ 		struct resource *pr = &screen_info_lfb_pdev->resource[screen_info_lfb_bar];
+ 
+-		if (pr->start != screen_info_lfb_res.start) {
++		if (pr->start != screen_info_lfb_res_start) {
+ 			if (__screen_info_relocation_is_valid(si, pr)) {
+ 				/*
+ 				 * Only update base if we have an actual
+@@ -47,46 +47,67 @@ void screen_info_apply_fixups(void)
+ 	}
+ }
+ 
++static int __screen_info_lfb_pci_bus_region(const struct screen_info *si, unsigned int type,
++					    struct pci_bus_region *r)
++{
++	u64 base, size;
++
++	base = __screen_info_lfb_base(si);
++	if (!base)
++		return -EINVAL;
++
++	size = __screen_info_lfb_size(si, type);
++	if (!size)
++		return -EINVAL;
++
++	r->start = base;
++	r->end = base + size - 1;
++
++	return 0;
++}
++
+ static void screen_info_fixup_lfb(struct pci_dev *pdev)
+ {
+ 	unsigned int type;
+-	struct resource res[SCREEN_INFO_MAX_RESOURCES];
+-	size_t i, numres;
++	struct pci_bus_region bus_region;
+ 	int ret;
++	struct resource r = {
++		.flags = IORESOURCE_MEM,
++	};
++	const struct resource *pr;
+ 	const struct screen_info *si = &screen_info;
+ 
+ 	if (screen_info_lfb_pdev)
+ 		return; // already found
+ 
+ 	type = screen_info_video_type(si);
+-	if (type != VIDEO_TYPE_EFI)
+-		return; // only applies to EFI
++	if (!__screen_info_has_lfb(type))
++		return; // only applies to EFI; maybe VESA
+ 
+-	ret = screen_info_resources(si, res, ARRAY_SIZE(res));
++	ret = __screen_info_lfb_pci_bus_region(si, type, &bus_region);
+ 	if (ret < 0)
+ 		return;
+-	numres = ret;
+ 
+-	for (i = 0; i < numres; ++i) {
+-		struct resource *r = &res[i];
+-		const struct resource *pr;
+-
+-		if (!(r->flags & IORESOURCE_MEM))
+-			continue;
+-		pr = pci_find_resource(pdev, r);
+-		if (!pr)
+-			continue;
+-
+-		/*
+-		 * We've found a PCI device with the framebuffer
+-		 * resource. Store away the parameters to track
+-		 * relocation of the framebuffer aperture.
+-		 */
+-		screen_info_lfb_pdev = pdev;
+-		screen_info_lfb_bar = pr - pdev->resource;
+-		screen_info_lfb_offset = r->start - pr->start;
+-		memcpy(&screen_info_lfb_res, r, sizeof(screen_info_lfb_res));
+-	}
++	/*
++	 * Translate the PCI bus address to resource. Account
++	 * for an offset if the framebuffer is behind a PCI host
++	 * bridge.
++	 */
++	pcibios_bus_to_resource(pdev->bus, &r, &bus_region);
++
++	pr = pci_find_resource(pdev, &r);
++	if (!pr)
++		return;
++
++	/*
++	 * We've found a PCI device with the framebuffer
++	 * resource. Store away the parameters to track
++	 * relocation of the framebuffer aperture.
++	 */
++	screen_info_lfb_pdev = pdev;
++	screen_info_lfb_bar = pr - pdev->resource;
++	screen_info_lfb_offset = r.start - pr->start;
++	screen_info_lfb_res_start = bus_region.start;
+ }
+ DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_ANY_ID, PCI_ANY_ID, PCI_BASE_CLASS_DISPLAY, 16,
+ 			       screen_info_fixup_lfb);
+diff --git a/drivers/virt/coco/tsm.c b/drivers/virt/coco/tsm.c
+index 9432d4e303f16b..8a638bc34d4a9e 100644
+--- a/drivers/virt/coco/tsm.c
++++ b/drivers/virt/coco/tsm.c
+@@ -15,6 +15,7 @@
+ static struct tsm_provider {
+ 	const struct tsm_ops *ops;
+ 	void *data;
++	atomic_t count;
+ } provider;
+ static DECLARE_RWSEM(tsm_rwsem);
+ 
+@@ -92,6 +93,10 @@ static ssize_t tsm_report_privlevel_store(struct config_item *cfg,
+ 	if (rc)
+ 		return rc;
+ 
++	guard(rwsem_write)(&tsm_rwsem);
++	if (!provider.ops)
++		return -ENXIO;
++
+ 	/*
+ 	 * The valid privilege levels that a TSM might accept, if it accepts a
+ 	 * privilege level setting at all, are a max of TSM_PRIVLEVEL_MAX (see
+@@ -101,7 +106,6 @@ static ssize_t tsm_report_privlevel_store(struct config_item *cfg,
+ 	if (provider.ops->privlevel_floor > val || val > TSM_PRIVLEVEL_MAX)
+ 		return -EINVAL;
+ 
+-	guard(rwsem_write)(&tsm_rwsem);
+ 	rc = try_advance_write_generation(report);
+ 	if (rc)
+ 		return rc;
+@@ -115,6 +119,10 @@ static ssize_t tsm_report_privlevel_floor_show(struct config_item *cfg,
+ 					       char *buf)
+ {
+ 	guard(rwsem_read)(&tsm_rwsem);
++
++	if (!provider.ops)
++		return -ENXIO;
++
+ 	return sysfs_emit(buf, "%u\n", provider.ops->privlevel_floor);
+ }
+ CONFIGFS_ATTR_RO(tsm_report_, privlevel_floor);
+@@ -217,6 +225,9 @@ CONFIGFS_ATTR_RO(tsm_report_, generation);
+ static ssize_t tsm_report_provider_show(struct config_item *cfg, char *buf)
+ {
+ 	guard(rwsem_read)(&tsm_rwsem);
++	if (!provider.ops)
++		return -ENXIO;
++
+ 	return sysfs_emit(buf, "%s\n", provider.ops->name);
+ }
+ CONFIGFS_ATTR_RO(tsm_report_, provider);
+@@ -284,7 +295,7 @@ static ssize_t tsm_report_read(struct tsm_report *report, void *buf,
+ 	guard(rwsem_write)(&tsm_rwsem);
+ 	ops = provider.ops;
+ 	if (!ops)
+-		return -ENOTTY;
++		return -ENXIO;
+ 	if (!report->desc.inblob_len)
+ 		return -EINVAL;
+ 
+@@ -421,12 +432,20 @@ static struct config_item *tsm_report_make_item(struct config_group *group,
+ 	if (!state)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	atomic_inc(&provider.count);
+ 	config_item_init_type_name(&state->cfg, name, &tsm_report_type);
+ 	return &state->cfg;
+ }
+ 
++static void tsm_report_drop_item(struct config_group *group, struct config_item *item)
++{
++	config_item_put(item);
++	atomic_dec(&provider.count);
++}
++
+ static struct configfs_group_operations tsm_report_group_ops = {
+ 	.make_item = tsm_report_make_item,
++	.drop_item = tsm_report_drop_item,
+ };
+ 
+ static const struct config_item_type tsm_reports_type = {
+@@ -459,6 +478,11 @@ int tsm_register(const struct tsm_ops *ops, void *priv)
+ 		return -EBUSY;
+ 	}
+ 
++	if (atomic_read(&provider.count)) {
++		pr_err("configfs/tsm/report not empty\n");
++		return -EBUSY;
++	}
++
+ 	provider.ops = ops;
+ 	provider.data = priv;
+ 	return 0;
+@@ -470,6 +494,9 @@ int tsm_unregister(const struct tsm_ops *ops)
+ 	guard(rwsem_write)(&tsm_rwsem);
+ 	if (ops != provider.ops)
+ 		return -EBUSY;
++	if (atomic_read(&provider.count))
++		pr_warn("\"%s\" unregistered with items present in configfs/tsm/report\n",
++			provider.ops->name);
+ 	provider.ops = NULL;
+ 	provider.data = NULL;
+ 	return 0;
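
The tsm.c changes add a bare atomic counter of live configfs items so that
register and unregister can detect lingering report entries. Minimal form:

#include <linux/atomic.h>
#include <linux/types.h>

static atomic_t live_items = ATOMIC_INIT(0);

static void item_created(void) { atomic_inc(&live_items); }
static void item_dropped(void) { atomic_dec(&live_items); }

static bool provider_busy(void)
{
	return atomic_read(&live_items) != 0;
}
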
+diff --git a/drivers/watchdog/da9052_wdt.c b/drivers/watchdog/da9052_wdt.c
+index 77039f2f0be542..bc0946233ced00 100644
+--- a/drivers/watchdog/da9052_wdt.c
++++ b/drivers/watchdog/da9052_wdt.c
+@@ -168,6 +168,7 @@ static int da9052_wdt_probe(struct platform_device *pdev)
+ 	da9052_wdt = &driver_data->wdt;
+ 
+ 	da9052_wdt->timeout = DA9052_DEF_TIMEOUT;
++	da9052_wdt->min_hw_heartbeat_ms = DA9052_TWDMIN;
+ 	da9052_wdt->info = &da9052_wdt_info;
+ 	da9052_wdt->ops = &da9052_wdt_ops;
+ 	da9052_wdt->parent = dev;
+diff --git a/drivers/watchdog/stm32_iwdg.c b/drivers/watchdog/stm32_iwdg.c
+index 8ad06b54c5adc6..b356a272ff9a0a 100644
+--- a/drivers/watchdog/stm32_iwdg.c
++++ b/drivers/watchdog/stm32_iwdg.c
+@@ -291,7 +291,7 @@ static int stm32_iwdg_irq_init(struct platform_device *pdev,
+ 		return 0;
+ 
+ 	if (of_property_read_bool(np, "wakeup-source")) {
+-		ret = device_init_wakeup(dev, true);
++		ret = devm_device_init_wakeup(dev);
+ 		if (ret)
+ 			return ret;
+ 
+diff --git a/fs/anon_inodes.c b/fs/anon_inodes.c
+index 583ac81669c249..e51e7d88980a22 100644
+--- a/fs/anon_inodes.c
++++ b/fs/anon_inodes.c
+@@ -24,9 +24,50 @@
+ 
+ #include <linux/uaccess.h>
+ 
++#include "internal.h"
++
+ static struct vfsmount *anon_inode_mnt __ro_after_init;
+ static struct inode *anon_inode_inode __ro_after_init;
+ 
++/*
++ * User space expects anonymous inodes to have no file type in st_mode.
++ *
++ * In particular, 'lsof' has this legacy logic:
++ *
++ *	type = s->st_mode & S_IFMT;
++ *	switch (type) {
++ *	  ...
++ *	case 0:
++ *		if (!strcmp(p, "anon_inode"))
++ *			Lf->ntype = Ntype = N_ANON_INODE;
++ *
++ * to detect our old anon_inode logic.
++ *
++ * Rather than mess with our internal sane inode data, just fix it
++ * up here in getattr() by masking off the format bits.
++ */
++int anon_inode_getattr(struct mnt_idmap *idmap, const struct path *path,
++		       struct kstat *stat, u32 request_mask,
++		       unsigned int query_flags)
++{
++	struct inode *inode = d_inode(path->dentry);
++
++	generic_fillattr(&nop_mnt_idmap, request_mask, inode, stat);
++	stat->mode &= ~S_IFMT;
++	return 0;
++}
++
++int anon_inode_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
++		       struct iattr *attr)
++{
++	return -EOPNOTSUPP;
++}
++
++static const struct inode_operations anon_inode_operations = {
++	.getattr = anon_inode_getattr,
++	.setattr = anon_inode_setattr,
++};
++
+ /*
+  * anon_inodefs_dname() is called from d_path().
+  */
+@@ -45,6 +86,8 @@ static int anon_inodefs_init_fs_context(struct fs_context *fc)
+ 	struct pseudo_fs_context *ctx = init_pseudo(fc, ANON_INODE_FS_MAGIC);
+ 	if (!ctx)
+ 		return -ENOMEM;
++	fc->s_iflags |= SB_I_NOEXEC;
++	fc->s_iflags |= SB_I_NODEV;
+ 	ctx->dops = &anon_inodefs_dentry_operations;
+ 	return 0;
+ }
+@@ -66,6 +109,7 @@ static struct inode *anon_inode_make_secure_inode(
+ 	if (IS_ERR(inode))
+ 		return inode;
+ 	inode->i_flags &= ~S_PRIVATE;
++	inode->i_op = &anon_inode_operations;
+ 	error =	security_inode_init_security_anon(inode, &QSTR(name),
+ 						  context_inode);
+ 	if (error) {
+@@ -313,6 +357,7 @@ static int __init anon_inode_init(void)
+ 	anon_inode_inode = alloc_anon_inode(anon_inode_mnt->mnt_sb);
+ 	if (IS_ERR(anon_inode_inode))
+ 		panic("anon_inode_init() inode allocation failed (%ld)\n", PTR_ERR(anon_inode_inode));
++	anon_inode_inode->i_op = &anon_inode_operations;
+ 
+ 	return 0;
+ }
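
Together with the alloc_anon_inode() change in fs/libfs.c further down,
anonymous inodes are now regular files internally while stat() keeps
reporting a zero file type, preserving the lsof heuristic quoted in the
comment. A small userspace illustration of the masking (not kernel code):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
	mode_t internal = S_IFREG | S_IRUSR | S_IWUSR;	/* as in alloc_anon_inode() */
	mode_t reported = internal & ~S_IFMT;		/* as in anon_inode_getattr() */

	printf("internal %#o reported %#o type %#o\n",
	       (unsigned)internal, (unsigned)reported,
	       (unsigned)(reported & S_IFMT));		/* type prints as 0 */
	return 0;
}
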
+diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
+index b95c4cb21c13f0..60a621b00c656d 100644
+--- a/fs/ceph/addr.c
++++ b/fs/ceph/addr.c
+@@ -409,6 +409,15 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq)
+ 		struct page **pages;
+ 		size_t page_off;
+ 
++		/*
++		 * FIXME: io_iter.count needs to be corrected to aligned
++		 * length. Otherwise, iov_iter_get_pages_alloc2() operates
++		 * with the initial unaligned length value. As a result,
++		 * ceph_msg_data_cursor_init() triggers BUG_ON() when
++		 * msg->sparse_read_total > msg->data_length.
++		 */
++		subreq->io_iter.count = len;
++
+ 		err = iov_iter_get_pages_alloc2(&subreq->io_iter, &pages, len, &page_off);
+ 		if (err < 0) {
+ 			doutc(cl, "%llx.%llx failed to allocate pages, %d\n",
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index f3951253e393a6..fc4cab8b7b7781 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -1227,6 +1227,7 @@ static int ceph_set_super(struct super_block *s, struct fs_context *fc)
+ 	s->s_time_min = 0;
+ 	s->s_time_max = U32_MAX;
+ 	s->s_flags |= SB_NODIRATIME | SB_NOATIME;
++	s->s_magic = CEPH_SUPER_MAGIC;
+ 
+ 	ceph_fscrypt_set_ops(s);
+ 
+diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
+index 5568cb74b32243..f43be44bff6243 100644
+--- a/fs/configfs/dir.c
++++ b/fs/configfs/dir.c
+@@ -619,7 +619,7 @@ static int populate_attrs(struct config_item *item)
+ 				break;
+ 		}
+ 	}
+-	if (t->ct_bin_attrs) {
++	if (!error && t->ct_bin_attrs) {
+ 		for (i = 0; (bin_attr = t->ct_bin_attrs[i]) != NULL; i++) {
+ 			if (ops && ops->is_bin_visible && !ops->is_bin_visible(item, bin_attr, i))
+ 				continue;
+diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
+index 70abd4da17a63a..90abcd07f8898d 100644
+--- a/fs/dlm/lowcomms.c
++++ b/fs/dlm/lowcomms.c
+@@ -160,6 +160,7 @@ struct dlm_proto_ops {
+ 	bool try_new_addr;
+ 	const char *name;
+ 	int proto;
++	int how;
+ 
+ 	void (*sockopts)(struct socket *sock);
+ 	int (*bind)(struct socket *sock);
+@@ -810,7 +811,7 @@ static void shutdown_connection(struct connection *con, bool and_other)
+ 		return;
+ 	}
+ 
+-	ret = kernel_sock_shutdown(con->sock, SHUT_WR);
++	ret = kernel_sock_shutdown(con->sock, dlm_proto_ops->how);
+ 	up_read(&con->sock_lock);
+ 	if (ret) {
+ 		log_print("Connection %p failed to shutdown: %d will force close",
+@@ -1858,6 +1859,7 @@ static int dlm_tcp_listen_bind(struct socket *sock)
+ static const struct dlm_proto_ops dlm_tcp_ops = {
+ 	.name = "TCP",
+ 	.proto = IPPROTO_TCP,
++	.how = SHUT_WR,
+ 	.sockopts = dlm_tcp_sockopts,
+ 	.bind = dlm_tcp_bind,
+ 	.listen_validate = dlm_tcp_listen_validate,
+@@ -1896,6 +1898,7 @@ static void dlm_sctp_sockopts(struct socket *sock)
+ static const struct dlm_proto_ops dlm_sctp_ops = {
+ 	.name = "SCTP",
+ 	.proto = IPPROTO_SCTP,
++	.how = SHUT_RDWR,
+ 	.try_new_addr = true,
+ 	.sockopts = dlm_sctp_sockopts,
+ 	.bind = dlm_sctp_bind,
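
The new .how field exists because SCTP has no TCP-style half-closed state:
shutting down only the write side of an SCTP association does not behave
like SHUT_WR on TCP, so the shutdown mode becomes a per-transport property.
A hedged userspace sketch of the same idea:

#include <netinet/in.h>
#include <sys/socket.h>

struct proto_ops {
	int proto;
	int how;	/* second argument for shutdown(2) */
};

static const struct proto_ops tcp_ops  = { IPPROTO_TCP,  SHUT_WR   };
static const struct proto_ops sctp_ops = { IPPROTO_SCTP, SHUT_RDWR };

static int shutdown_conn(int fd, const struct proto_ops *ops)
{
	return shutdown(fd, ops->how);
}
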
+diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
+index 14ea47f954f552..0bebc6e3a4d7dd 100644
+--- a/fs/erofs/zmap.c
++++ b/fs/erofs/zmap.c
+@@ -597,6 +597,10 @@ static int z_erofs_map_blocks_ext(struct inode *inode,
+ 
+ 			if (la > map->m_la) {
+ 				r = mid;
++				if (la > lend) {
++					DBG_BUGON(1);
++					return -EFSCORRUPTED;
++				}
+ 				lend = la;
+ 			} else {
+ 				l = mid + 1;
+@@ -635,12 +639,6 @@ static int z_erofs_map_blocks_ext(struct inode *inode,
+ 		}
+ 	}
+ 	map->m_llen = lend - map->m_la;
+-	if (!last && map->m_llen < sb->s_blocksize) {
+-		erofs_err(sb, "extent too small %llu @ offset %llu of nid %llu",
+-			  map->m_llen, map->m_la, vi->nid);
+-		DBG_BUGON(1);
+-		return -EFSCORRUPTED;
+-	}
+ 	return 0;
+ }
+ 
+diff --git a/fs/exfat/nls.c b/fs/exfat/nls.c
+index d47896a895965b..1729bf42eb5169 100644
+--- a/fs/exfat/nls.c
++++ b/fs/exfat/nls.c
+@@ -801,4 +801,5 @@ int exfat_create_upcase_table(struct super_block *sb)
+ void exfat_free_upcase_table(struct exfat_sb_info *sbi)
+ {
+ 	kvfree(sbi->vol_utbl);
++	sbi->vol_utbl = NULL;
+ }
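
Clearing the pointer after kvfree() turns a second teardown pass (for
example after a failed remount) into a harmless no-op rather than a double
free, because kvfree(NULL), like free(NULL), is defined to do nothing. The
same idiom in plain C:

#include <stdlib.h>

struct sb_info { void *vol_utbl; };

static void free_upcase_table(struct sb_info *sbi)
{
	free(sbi->vol_utbl);
	sbi->vol_utbl = NULL;	/* a repeat call becomes free(NULL), a no-op */
}
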
+diff --git a/fs/exfat/super.c b/fs/exfat/super.c
+index 8465033a6cf0c0..7ed858937d45d2 100644
+--- a/fs/exfat/super.c
++++ b/fs/exfat/super.c
+@@ -36,31 +36,12 @@ static void exfat_put_super(struct super_block *sb)
+ 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
+ 
+ 	mutex_lock(&sbi->s_lock);
++	exfat_clear_volume_dirty(sb);
+ 	exfat_free_bitmap(sbi);
+ 	brelse(sbi->boot_bh);
+ 	mutex_unlock(&sbi->s_lock);
+ }
+ 
+-static int exfat_sync_fs(struct super_block *sb, int wait)
+-{
+-	struct exfat_sb_info *sbi = EXFAT_SB(sb);
+-	int err = 0;
+-
+-	if (unlikely(exfat_forced_shutdown(sb)))
+-		return 0;
+-
+-	if (!wait)
+-		return 0;
+-
+-	/* If there are some dirty buffers in the bdev inode */
+-	mutex_lock(&sbi->s_lock);
+-	sync_blockdev(sb->s_bdev);
+-	if (exfat_clear_volume_dirty(sb))
+-		err = -EIO;
+-	mutex_unlock(&sbi->s_lock);
+-	return err;
+-}
+-
+ static int exfat_statfs(struct dentry *dentry, struct kstatfs *buf)
+ {
+ 	struct super_block *sb = dentry->d_sb;
+@@ -219,7 +200,6 @@ static const struct super_operations exfat_sops = {
+ 	.write_inode	= exfat_write_inode,
+ 	.evict_inode	= exfat_evict_inode,
+ 	.put_super	= exfat_put_super,
+-	.sync_fs	= exfat_sync_fs,
+ 	.statfs		= exfat_statfs,
+ 	.show_options	= exfat_show_options,
+ 	.shutdown	= exfat_shutdown,
+@@ -751,10 +731,14 @@ static void exfat_free(struct fs_context *fc)
+ 
+ static int exfat_reconfigure(struct fs_context *fc)
+ {
++	struct super_block *sb = fc->root->d_sb;
+ 	fc->sb_flags |= SB_NODIRATIME;
+ 
+-	/* volume flag will be updated in exfat_sync_fs */
+-	sync_filesystem(fc->root->d_sb);
++	sync_filesystem(sb);
++	mutex_lock(&EXFAT_SB(sb)->s_lock);
++	exfat_clear_volume_dirty(sb);
++	mutex_unlock(&EXFAT_SB(sb)->s_lock);
++
+ 	return 0;
+ }
+ 
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 5a20e9cd718491..8664bb5367c53b 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -3378,6 +3378,13 @@ static inline unsigned int ext4_flex_bg_size(struct ext4_sb_info *sbi)
+ 	return 1 << sbi->s_log_groups_per_flex;
+ }
+ 
++static inline loff_t ext4_get_maxbytes(struct inode *inode)
++{
++	if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
++		return inode->i_sb->s_maxbytes;
++	return EXT4_SB(inode->i_sb)->s_bitmap_maxbytes;
++}
++
+ #define ext4_std_error(sb, errno)				\
+ do {								\
+ 	if ((errno))						\
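
The helper folds the extents-vs-bitmap branch that was open-coded in
ext4_llseek(), the fiemap range check, and __ext4_iget() (all converted in
later hunks) into one place, so the copies cannot drift apart. The shape of
the consolidation as a standalone sketch, with illustrative types:

typedef long long loff_t;

struct inode {
	int extent_based;	/* models the EXT4_INODE_EXTENTS flag */
	loff_t sb_maxbytes;	/* models sb->s_maxbytes */
	loff_t bitmap_maxbytes;	/* models sbi->s_bitmap_maxbytes */
};

static loff_t get_maxbytes(const struct inode *inode)
{
	return inode->extent_based ? inode->sb_maxbytes
				   : inode->bitmap_maxbytes;
}

/* llseek-style caller: one call instead of a repeated if/else */
static int offset_ok(const struct inode *inode, loff_t offset)
{
	return offset <= get_maxbytes(inode);
}
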
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index c616a16a9f36d3..828a78a9600a81 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -1530,7 +1530,7 @@ static int ext4_ext_search_left(struct inode *inode,
+ static int ext4_ext_search_right(struct inode *inode,
+ 				 struct ext4_ext_path *path,
+ 				 ext4_lblk_t *logical, ext4_fsblk_t *phys,
+-				 struct ext4_extent *ret_ex)
++				 struct ext4_extent *ret_ex, int flags)
+ {
+ 	struct buffer_head *bh = NULL;
+ 	struct ext4_extent_header *eh;
+@@ -1604,7 +1604,8 @@ static int ext4_ext_search_right(struct inode *inode,
+ 	ix++;
+ 	while (++depth < path->p_depth) {
+ 		/* subtract from p_depth to get proper eh_depth */
+-		bh = read_extent_tree_block(inode, ix, path->p_depth - depth, 0);
++		bh = read_extent_tree_block(inode, ix, path->p_depth - depth,
++					    flags);
+ 		if (IS_ERR(bh))
+ 			return PTR_ERR(bh);
+ 		eh = ext_block_hdr(bh);
+@@ -1612,7 +1613,7 @@ static int ext4_ext_search_right(struct inode *inode,
+ 		put_bh(bh);
+ 	}
+ 
+-	bh = read_extent_tree_block(inode, ix, path->p_depth - depth, 0);
++	bh = read_extent_tree_block(inode, ix, path->p_depth - depth, flags);
+ 	if (IS_ERR(bh))
+ 		return PTR_ERR(bh);
+ 	eh = ext_block_hdr(bh);
+@@ -2396,18 +2397,19 @@ int ext4_ext_calc_credits_for_single_extent(struct inode *inode, int nrblocks,
+ int ext4_ext_index_trans_blocks(struct inode *inode, int extents)
+ {
+ 	int index;
+-	int depth;
+ 
+ 	/* If we are converting the inline data, only one is needed here. */
+ 	if (ext4_has_inline_data(inode))
+ 		return 1;
+ 
+-	depth = ext_depth(inode);
+-
++	/*
++	 * Extent tree can change between the time we estimate credits and
++	 * the time we actually modify the tree. Assume the worst case.
++	 */
+ 	if (extents <= 1)
+-		index = depth * 2;
++		index = EXT4_MAX_EXTENT_DEPTH * 2;
+ 	else
+-		index = depth * 3;
++		index = EXT4_MAX_EXTENT_DEPTH * 3;
+ 
+ 	return index;
+ }
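
The extent tree can grow between the moment credits are estimated and the
moment the tree is modified, so the reservation is now sized for the
maximum possible depth instead of the current one. EXT4_MAX_EXTENT_DEPTH is
5 in current kernels, which makes the arithmetic easy to check (standalone
illustration):

#include <stdio.h>

#define MAX_EXTENT_DEPTH 5	/* EXT4_MAX_EXTENT_DEPTH */

static int index_trans_blocks(int extents)
{
	return (extents <= 1 ? 2 : 3) * MAX_EXTENT_DEPTH;
}

int main(void)
{
	printf("1 extent -> %d blocks, 4 extents -> %d blocks\n",
	       index_trans_blocks(1), index_trans_blocks(4));	/* 10 and 15 */
	return 0;
}
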
+@@ -2821,6 +2823,7 @@ int ext4_ext_remove_space(struct inode *inode, ext4_lblk_t start,
+ 	struct partial_cluster partial;
+ 	handle_t *handle;
+ 	int i = 0, err = 0;
++	int flags = EXT4_EX_NOCACHE | EXT4_EX_NOFAIL;
+ 
+ 	partial.pclu = 0;
+ 	partial.lblk = 0;
+@@ -2851,8 +2854,7 @@ int ext4_ext_remove_space(struct inode *inode, ext4_lblk_t start,
+ 		ext4_fsblk_t pblk;
+ 
+ 		/* find extent for or closest extent to this block */
+-		path = ext4_find_extent(inode, end, NULL,
+-					EXT4_EX_NOCACHE | EXT4_EX_NOFAIL);
++		path = ext4_find_extent(inode, end, NULL, flags);
+ 		if (IS_ERR(path)) {
+ 			ext4_journal_stop(handle);
+ 			return PTR_ERR(path);
+@@ -2918,7 +2920,7 @@ int ext4_ext_remove_space(struct inode *inode, ext4_lblk_t start,
+ 			 */
+ 			lblk = ex_end + 1;
+ 			err = ext4_ext_search_right(inode, path, &lblk, &pblk,
+-						    NULL);
++						    NULL, flags);
+ 			if (err < 0)
+ 				goto out;
+ 			if (pblk) {
+@@ -2994,8 +2996,7 @@ int ext4_ext_remove_space(struct inode *inode, ext4_lblk_t start,
+ 				  i + 1, ext4_idx_pblock(path[i].p_idx));
+ 			memset(path + i + 1, 0, sizeof(*path));
+ 			bh = read_extent_tree_block(inode, path[i].p_idx,
+-						    depth - i - 1,
+-						    EXT4_EX_NOCACHE);
++						    depth - i - 1, flags);
+ 			if (IS_ERR(bh)) {
+ 				/* should we reset i_size? */
+ 				err = PTR_ERR(bh);
+@@ -4314,7 +4315,8 @@ int ext4_ext_map_blocks(handle_t *handle, struct inode *inode,
+ 	if (err)
+ 		goto out;
+ 	ar.lright = map->m_lblk;
+-	err = ext4_ext_search_right(inode, path, &ar.lright, &ar.pright, &ex2);
++	err = ext4_ext_search_right(inode, path, &ar.lright, &ar.pright,
++				    &ex2, 0);
+ 	if (err < 0)
+ 		goto out;
+ 
+@@ -4931,12 +4933,7 @@ static const struct iomap_ops ext4_iomap_xattr_ops = {
+ 
+ static int ext4_fiemap_check_ranges(struct inode *inode, u64 start, u64 *len)
+ {
+-	u64 maxbytes;
+-
+-	if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
+-		maxbytes = inode->i_sb->s_maxbytes;
+-	else
+-		maxbytes = EXT4_SB(inode->i_sb)->s_bitmap_maxbytes;
++	u64 maxbytes = ext4_get_maxbytes(inode);
+ 
+ 	if (*len == 0)
+ 		return -EINVAL;
+@@ -4999,7 +4996,9 @@ int ext4_get_es_cache(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ 	}
+ 
+ 	if (fieinfo->fi_flags & FIEMAP_FLAG_CACHE) {
++		inode_lock_shared(inode);
+ 		error = ext4_ext_precache(inode);
++		inode_unlock_shared(inode);
+ 		if (error)
+ 			return error;
+ 		fieinfo->fi_flags &= ~FIEMAP_FLAG_CACHE;
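
This hunk and the EXT4_IOC_PRECACHE_EXTENTS hunk in fs/ext4/ioctl.c below
apply the same fix: the extent-tree walk that populates the cache now runs
under the shared inode lock, so concurrent writers cannot modify the tree
underneath it. The pattern, modeled in userspace with a pthread rwlock:

#include <pthread.h>

static pthread_rwlock_t inode_lock = PTHREAD_RWLOCK_INITIALIZER;

static void precache_extents(void)
{
	pthread_rwlock_rdlock(&inode_lock);	/* inode_lock_shared() */
	/* ... walk the extent tree and populate the cache ... */
	pthread_rwlock_unlock(&inode_lock);	/* inode_unlock_shared() */
}
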
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index beb078ee4811d6..b845a25f7932c6 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -929,12 +929,7 @@ static int ext4_file_open(struct inode *inode, struct file *filp)
+ loff_t ext4_llseek(struct file *file, loff_t offset, int whence)
+ {
+ 	struct inode *inode = file->f_mapping->host;
+-	loff_t maxbytes;
+-
+-	if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+-		maxbytes = EXT4_SB(inode->i_sb)->s_bitmap_maxbytes;
+-	else
+-		maxbytes = inode->i_sb->s_maxbytes;
++	loff_t maxbytes = ext4_get_maxbytes(inode);
+ 
+ 	switch (whence) {
+ 	default:
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 2c9b762925c72f..e5e6bf0d338b96 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -397,7 +397,7 @@ static int ext4_update_inline_data(handle_t *handle, struct inode *inode,
+ }
+ 
+ static int ext4_prepare_inline_data(handle_t *handle, struct inode *inode,
+-				    unsigned int len)
++				    loff_t len)
+ {
+ 	int ret, size, no_expand;
+ 	struct ext4_inode_info *ei = EXT4_I(inode);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 94c7d2d828a64e..7fcdc01a0220a7 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1009,7 +1009,12 @@ int ext4_walk_page_buffers(handle_t *handle, struct inode *inode,
+  */
+ static int ext4_dirty_journalled_data(handle_t *handle, struct buffer_head *bh)
+ {
+-	folio_mark_dirty(bh->b_folio);
++	struct folio *folio = bh->b_folio;
++	struct inode *inode = folio->mapping->host;
++
++	/* only regular files have a_ops */
++	if (S_ISREG(inode->i_mode))
++		folio_mark_dirty(folio);
+ 	return ext4_handle_dirty_metadata(handle, NULL, bh);
+ }
+ 
+@@ -4006,7 +4011,7 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
+ 	struct inode *inode = file_inode(file);
+ 	struct super_block *sb = inode->i_sb;
+ 	ext4_lblk_t start_lblk, end_lblk;
+-	loff_t max_end = EXT4_SB(sb)->s_bitmap_maxbytes - sb->s_blocksize;
++	loff_t max_end = sb->s_maxbytes;
+ 	loff_t end = offset + length;
+ 	handle_t *handle;
+ 	unsigned int credits;
+@@ -4015,14 +4020,20 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
+ 	trace_ext4_punch_hole(inode, offset, length, 0);
+ 	WARN_ON_ONCE(!inode_is_locked(inode));
+ 
++	/*
++	 * For indirect-block based inodes, make sure that the hole ends
++	 * at least one block before the maximum file size.
++	 */
++	if (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
++		max_end = EXT4_SB(sb)->s_bitmap_maxbytes - sb->s_blocksize;
++
+ 	/* No need to punch hole beyond i_size */
+-	if (offset >= inode->i_size)
++	if (offset >= inode->i_size || offset >= max_end)
+ 		return 0;
+ 
+ 	/*
+ 	 * If the hole extends beyond i_size, set the hole to end after
+-	 * the page that contains i_size, and also make sure that the hole
+-	 * within one block before last range.
++	 * the page that contains i_size.
+ 	 */
+ 	if (end > inode->i_size)
+ 		end = round_up(inode->i_size, PAGE_SIZE);
+@@ -4916,7 +4927,8 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ 		ei->i_file_acl |=
+ 			((__u64)le16_to_cpu(raw_inode->i_file_acl_high)) << 32;
+ 	inode->i_size = ext4_isize(sb, raw_inode);
+-	if ((size = i_size_read(inode)) < 0) {
++	size = i_size_read(inode);
++	if (size < 0 || size > ext4_get_maxbytes(inode)) {
+ 		ext4_error_inode(inode, function, line, 0,
+ 				 "iget: bad i_size value: %lld", size);
+ 		ret = -EFSCORRUPTED;
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index d17207386ead13..0e240013c84d21 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -1505,8 +1505,14 @@ static long __ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 		return 0;
+ 	}
+ 	case EXT4_IOC_PRECACHE_EXTENTS:
+-		return ext4_ext_precache(inode);
++	{
++		int ret;
+ 
++		inode_lock_shared(inode);
++		ret = ext4_ext_precache(inode);
++		inode_unlock_shared(inode);
++		return ret;
++	}
+ 	case FS_IOC_SET_ENCRYPTION_POLICY:
+ 		if (!ext4_has_feature_encrypt(sb))
+ 			return -EOPNOTSUPP;
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 9b94810675c193..5a9b6d5f3ae0a8 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -178,8 +178,7 @@ void f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct folio *folio)
+ #ifdef CONFIG_F2FS_FS_LZO
+ static int lzo_init_compress_ctx(struct compress_ctx *cc)
+ {
+-	cc->private = f2fs_kvmalloc(F2FS_I_SB(cc->inode),
+-				LZO1X_MEM_COMPRESS, GFP_NOFS);
++	cc->private = f2fs_vmalloc(LZO1X_MEM_COMPRESS);
+ 	if (!cc->private)
+ 		return -ENOMEM;
+ 
+@@ -189,7 +188,7 @@ static int lzo_init_compress_ctx(struct compress_ctx *cc)
+ 
+ static void lzo_destroy_compress_ctx(struct compress_ctx *cc)
+ {
+-	kvfree(cc->private);
++	vfree(cc->private);
+ 	cc->private = NULL;
+ }
+ 
+@@ -246,7 +245,7 @@ static int lz4_init_compress_ctx(struct compress_ctx *cc)
+ 		size = LZ4HC_MEM_COMPRESS;
+ #endif
+ 
+-	cc->private = f2fs_kvmalloc(F2FS_I_SB(cc->inode), size, GFP_NOFS);
++	cc->private = f2fs_vmalloc(size);
+ 	if (!cc->private)
+ 		return -ENOMEM;
+ 
+@@ -261,7 +260,7 @@ static int lz4_init_compress_ctx(struct compress_ctx *cc)
+ 
+ static void lz4_destroy_compress_ctx(struct compress_ctx *cc)
+ {
+-	kvfree(cc->private);
++	vfree(cc->private);
+ 	cc->private = NULL;
+ }
+ 
+@@ -342,8 +341,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
+ 	params = zstd_get_params(level, cc->rlen);
+ 	workspace_size = zstd_cstream_workspace_bound(&params.cParams);
+ 
+-	workspace = f2fs_kvmalloc(F2FS_I_SB(cc->inode),
+-					workspace_size, GFP_NOFS);
++	workspace = f2fs_vmalloc(workspace_size);
+ 	if (!workspace)
+ 		return -ENOMEM;
+ 
+@@ -351,7 +349,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
+ 	if (!stream) {
+ 		f2fs_err_ratelimited(F2FS_I_SB(cc->inode),
+ 				"%s zstd_init_cstream failed", __func__);
+-		kvfree(workspace);
++		vfree(workspace);
+ 		return -EIO;
+ 	}
+ 
+@@ -364,7 +362,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
+ 
+ static void zstd_destroy_compress_ctx(struct compress_ctx *cc)
+ {
+-	kvfree(cc->private);
++	vfree(cc->private);
+ 	cc->private = NULL;
+ 	cc->private2 = NULL;
+ }
+@@ -423,8 +421,7 @@ static int zstd_init_decompress_ctx(struct decompress_io_ctx *dic)
+ 
+ 	workspace_size = zstd_dstream_workspace_bound(max_window_size);
+ 
+-	workspace = f2fs_kvmalloc(F2FS_I_SB(dic->inode),
+-					workspace_size, GFP_NOFS);
++	workspace = f2fs_vmalloc(workspace_size);
+ 	if (!workspace)
+ 		return -ENOMEM;
+ 
+@@ -432,7 +429,7 @@ static int zstd_init_decompress_ctx(struct decompress_io_ctx *dic)
+ 	if (!stream) {
+ 		f2fs_err_ratelimited(F2FS_I_SB(dic->inode),
+ 				"%s zstd_init_dstream failed", __func__);
+-		kvfree(workspace);
++		vfree(workspace);
+ 		return -EIO;
+ 	}
+ 
+@@ -444,7 +441,7 @@ static int zstd_init_decompress_ctx(struct decompress_io_ctx *dic)
+ 
+ static void zstd_destroy_decompress_ctx(struct decompress_io_ctx *dic)
+ {
+-	kvfree(dic->private);
++	vfree(dic->private);
+ 	dic->private = NULL;
+ 	dic->private2 = NULL;
+ }
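
kvmalloc() may not fall back to vmalloc() for non-GFP_KERNEL flags such as
GFP_NOFS, so these large, process-context-only compressor workspaces are
switched to plain vmalloc(), and every matching kvfree() becomes vfree() to
keep the alloc/free pairs consistent. A userspace model of the paired
helpers (malloc/free stand in for vmalloc/vfree):

#include <stdlib.h>

struct compress_ctx { void *private; };

static void *workspace_alloc(size_t size) { return malloc(size); }
static void workspace_free(void *p) { free(p); }

static int ctx_init(struct compress_ctx *cc, size_t mem)
{
	cc->private = workspace_alloc(mem);
	return cc->private ? 0 : -1;	/* -ENOMEM in the kernel */
}

static void ctx_destroy(struct compress_ctx *cc)
{
	workspace_free(cc->private);	/* must match the allocator used */
	cc->private = NULL;
}
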
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 4f34a7d9760a10..34e4ae2a5f5ba3 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -3527,6 +3527,11 @@ static inline void *f2fs_kvzalloc(struct f2fs_sb_info *sbi,
+ 	return f2fs_kvmalloc(sbi, size, flags | __GFP_ZERO);
+ }
+ 
++static inline void *f2fs_vmalloc(size_t size)
++{
++	return vmalloc(size);
++}
++
+ static inline int get_extra_isize(struct inode *inode)
+ {
+ 	return F2FS_I(inode)->i_extra_isize / sizeof(__le32);
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 83f862578fc80c..f5991e8751b9bb 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -34,7 +34,9 @@ void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync)
+ 	if (f2fs_inode_dirtied(inode, sync))
+ 		return;
+ 
+-	if (f2fs_is_atomic_file(inode))
++	/* only an atomic file w/ FI_ATOMIC_COMMITTED can be marked vfs dirty */
++	if (f2fs_is_atomic_file(inode) &&
++			!is_inode_flag_set(inode, FI_ATOMIC_COMMITTED))
+ 		return;
+ 
+ 	mark_inode_dirty_sync(inode);
+@@ -286,6 +288,12 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page)
+ 		return false;
+ 	}
+ 
++	if (ino_of_node(node_page) == fi->i_xattr_nid) {
++		f2fs_warn(sbi, "%s: corrupted inode i_ino=%lx, xnid=%x, run fsck to fix.",
++			  __func__, inode->i_ino, fi->i_xattr_nid);
++		return false;
++	}
++
+ 	if (f2fs_has_extra_attr(inode)) {
+ 		if (!f2fs_sb_has_extra_attr(sbi)) {
+ 			f2fs_warn(sbi, "%s: inode (ino=%lx) is with extra_attr, but extra_attr feature is off",
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 28137d499f8f65..f775dc62511c24 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -569,6 +569,15 @@ static int f2fs_unlink(struct inode *dir, struct dentry *dentry)
+ 		goto fail;
+ 	}
+ 
++	if (unlikely(inode->i_nlink == 0)) {
++		f2fs_warn(F2FS_I_SB(inode), "%s: inode (ino=%lx) has zero i_nlink",
++			  __func__, inode->i_ino);
++		err = -EFSCORRUPTED;
++		set_sbi_flag(F2FS_I_SB(inode), SBI_NEED_FSCK);
++		f2fs_put_page(page, 0);
++		goto fail;
++	}
++
+ 	f2fs_balance_fs(sbi, true);
+ 
+ 	f2fs_lock_op(sbi);
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 5f15c224bf782e..f476c2e7b09629 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -2107,10 +2107,14 @@ int f2fs_sync_node_pages(struct f2fs_sb_info *sbi,
+ 
+ 			ret = __write_node_page(&folio->page, false, &submitted,
+ 						wbc, do_balance, io_type, NULL);
+-			if (ret)
++			if (ret) {
+ 				folio_unlock(folio);
+-			else if (submitted)
++				folio_batch_release(&fbatch);
++				ret = -EIO;
++				goto out;
++			} else if (submitted) {
+ 				nwritten++;
++			}
+ 
+ 			if (--wbc->nr_to_write == 0)
+ 				break;
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 41ca73622c8d46..876e97ec5f5703 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -376,7 +376,13 @@ static int __f2fs_commit_atomic_write(struct inode *inode)
+ 	} else {
+ 		sbi->committed_atomic_block += fi->atomic_write_cnt;
+ 		set_inode_flag(inode, FI_ATOMIC_COMMITTED);
++
++		/*
++		 * inode may have no FI_ATOMIC_DIRTIED flag due to no write
++		 * before commit.
++		 */
+ 		if (is_inode_flag_set(inode, FI_ATOMIC_DIRTIED)) {
++			/* clear atomic dirty status and set vfs dirty status */
+ 			clear_inode_flag(inode, FI_ATOMIC_DIRTIED);
+ 			f2fs_mark_inode_dirty_sync(inode, true);
+ 		}
+@@ -2836,7 +2842,11 @@ static int get_new_segment(struct f2fs_sb_info *sbi,
+ 	}
+ got_it:
+ 	/* set it as dirty segment in free segmap */
+-	f2fs_bug_on(sbi, test_bit(segno, free_i->free_segmap));
++	if (test_bit(segno, free_i->free_segmap)) {
++		ret = -EFSCORRUPTED;
++		f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_CORRUPTED_FREE_BITMAP);
++		goto out_unlock;
++	}
+ 
+ 	/* no free section in conventional device or conventional zone */
+ 	if (new_sec && pinning &&
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 386326f7a440eb..bc510c91f3eba4 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1531,7 +1531,9 @@ int f2fs_inode_dirtied(struct inode *inode, bool sync)
+ 	}
+ 	spin_unlock(&sbi->inode_lock[DIRTY_META]);
+ 
+-	if (!ret && f2fs_is_atomic_file(inode))
++	/* if the atomic write is not committed yet, mark the inode atomic dirty */
++	if (!ret && f2fs_is_atomic_file(inode) &&
++			!is_inode_flag_set(inode, FI_ATOMIC_COMMITTED))
+ 		set_inode_flag(inode, FI_ATOMIC_DIRTIED);
+ 
+ 	return ret;
+@@ -3717,6 +3719,7 @@ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi)
+ 	block_t user_block_count, valid_user_blocks;
+ 	block_t avail_node_count, valid_node_count;
+ 	unsigned int nat_blocks, nat_bits_bytes, nat_bits_blocks;
++	unsigned int sit_blk_cnt;
+ 	int i, j;
+ 
+ 	total = le32_to_cpu(raw_super->segment_count);
+@@ -3828,6 +3831,13 @@ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi)
+ 		return 1;
+ 	}
+ 
++	sit_blk_cnt = DIV_ROUND_UP(main_segs, SIT_ENTRY_PER_BLOCK);
++	if (sit_bitmap_size * 8 < sit_blk_cnt) {
++		f2fs_err(sbi, "Wrong bitmap size: sit: %u, sit_blk_cnt:%u",
++			 sit_bitmap_size, sit_blk_cnt);
++		return 1;
++	}
++
+ 	cp_pack_start_sum = __start_sum_addr(sbi);
+ 	cp_payload = __cp_payload(sbi);
+ 	if (cp_pack_start_sum < cp_payload + 1 ||
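
The added check is plain bit accounting: the SIT validity bitmap carries
one bit per SIT block, so a bitmap of sit_bitmap_size bytes can describe at
most sit_bitmap_size * 8 blocks, and an image claiming more SIT blocks than
that is corrupt. Standalone version of the bound (SIT_ENTRY_PER_BLOCK is 55
with 4 KiB blocks; treat the constant as illustrative):

#define SIT_ENTRY_PER_BLOCK 55

static int sit_bitmap_ok(unsigned int main_segs, unsigned int sit_bitmap_size)
{
	unsigned int sit_blk_cnt = (main_segs + SIT_ENTRY_PER_BLOCK - 1) /
				   SIT_ENTRY_PER_BLOCK;	/* DIV_ROUND_UP() */

	return sit_bitmap_size * 8 >= sit_blk_cnt;
}
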
+diff --git a/fs/file.c b/fs/file.c
+index 3a3146664cf371..b6db031545e650 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -1198,8 +1198,12 @@ bool file_seek_cur_needs_f_lock(struct file *file)
+ 	if (!(file->f_mode & FMODE_ATOMIC_POS) && !file->f_op->iterate_shared)
+ 		return false;
+ 
+-	VFS_WARN_ON_ONCE((file_count(file) > 1) &&
+-			 !mutex_is_locked(&file->f_pos_lock));
++	/*
++	 * Note that we are not guaranteed to be called after fdget_pos() on
++	 * this file obj, in which case the caller is expected to provide the
++	 * appropriate locking.
++	 */
++
+ 	return true;
+ }
+ 
+diff --git a/fs/gfs2/lock_dlm.c b/fs/gfs2/lock_dlm.c
+index 58aeeae7ed8cd0..2c9172dd41e7ee 100644
+--- a/fs/gfs2/lock_dlm.c
++++ b/fs/gfs2/lock_dlm.c
+@@ -996,14 +996,15 @@ static int control_mount(struct gfs2_sbd *sdp)
+ 		if (sdp->sd_args.ar_spectator) {
+ 			fs_info(sdp, "Recovery is required. Waiting for a "
+ 				"non-spectator to mount.\n");
++			spin_unlock(&ls->ls_recover_spin);
+ 			msleep_interruptible(1000);
+ 		} else {
+ 			fs_info(sdp, "control_mount wait1 block %u start %u "
+ 				"mount %u lvb %u flags %lx\n", block_gen,
+ 				start_gen, mount_gen, lvb_gen,
+ 				ls->ls_recover_flags);
++			spin_unlock(&ls->ls_recover_spin);
+ 		}
+-		spin_unlock(&ls->ls_recover_spin);
+ 		goto restart;
+ 	}
+ 
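
The bug fixed here was sleeping with ls_recover_spin held: in the spectator
branch, msleep_interruptible() was reached before the unlock that used to
sit after the if/else. Duplicating the unlock into both branches guarantees
the lock is dropped before any sleep. The same shape in userspace:

#include <pthread.h>
#include <unistd.h>

static pthread_spinlock_t recover_spin;

static void wait_for_recovery(int spectator)
{
	pthread_spin_lock(&recover_spin);
	/* ... inspect recovery generations ... */
	if (spectator) {
		pthread_spin_unlock(&recover_spin);	/* unlock BEFORE sleeping */
		usleep(1000 * 1000);
	} else {
		/* log the state, then unlock on this path too */
		pthread_spin_unlock(&recover_spin);
	}
	/* restart the outer loop */
}
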
+diff --git a/fs/internal.h b/fs/internal.h
+index b9b3e29a73fd8c..f545400ce607f9 100644
+--- a/fs/internal.h
++++ b/fs/internal.h
+@@ -343,3 +343,8 @@ static inline bool path_mounted(const struct path *path)
+ void file_f_owner_release(struct file *file);
+ bool file_seek_cur_needs_f_lock(struct file *file);
+ int statmount_mnt_idmap(struct mnt_idmap *idmap, struct seq_file *seq, bool uid_map);
++int anon_inode_getattr(struct mnt_idmap *idmap, const struct path *path,
++		       struct kstat *stat, u32 request_mask,
++		       unsigned int query_flags);
++int anon_inode_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
++		       struct iattr *attr);
+diff --git a/fs/ioctl.c b/fs/ioctl.c
+index c91fd2b46a77f6..03d9a11f22475b 100644
+--- a/fs/ioctl.c
++++ b/fs/ioctl.c
+@@ -821,7 +821,8 @@ static int do_vfs_ioctl(struct file *filp, unsigned int fd,
+ 		return ioctl_fioasync(fd, filp, argp);
+ 
+ 	case FIOQSIZE:
+-		if (S_ISDIR(inode->i_mode) || S_ISREG(inode->i_mode) ||
++		if (S_ISDIR(inode->i_mode) ||
++		    (S_ISREG(inode->i_mode) && !IS_ANON_FILE(inode)) ||
+ 		    S_ISLNK(inode->i_mode)) {
+ 			loff_t res = inode_get_bytes(inode);
+ 			return copy_to_user(argp, &res, sizeof(res)) ?
+@@ -856,7 +857,7 @@ static int do_vfs_ioctl(struct file *filp, unsigned int fd,
+ 		return ioctl_file_dedupe_range(filp, argp);
+ 
+ 	case FIONREAD:
+-		if (!S_ISREG(inode->i_mode))
++		if (!S_ISREG(inode->i_mode) || IS_ANON_FILE(inode))
+ 			return vfs_ioctl(filp, cmd, arg);
+ 
+ 		return put_user(i_size_read(inode) - filp->f_pos,
+@@ -881,7 +882,7 @@ static int do_vfs_ioctl(struct file *filp, unsigned int fd,
+ 		return ioctl_get_fs_sysfs_path(filp, argp);
+ 
+ 	default:
+-		if (S_ISREG(inode->i_mode))
++		if (S_ISREG(inode->i_mode) && !IS_ANON_FILE(inode))
+ 			return file_ioctl(filp, cmd, argp);
+ 		break;
+ 	}
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index 47038e6608123c..d5da9817df9b36 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -1275,6 +1275,7 @@ static int isofs_read_inode(struct inode *inode, int relocated)
+ 	unsigned long offset;
+ 	struct iso_inode_info *ei = ISOFS_I(inode);
+ 	int ret = -EIO;
++	struct timespec64 ts;
+ 
+ 	block = ei->i_iget5_block;
+ 	bh = sb_bread(inode->i_sb, block);
+@@ -1387,8 +1388,10 @@ static int isofs_read_inode(struct inode *inode, int relocated)
+ 			inode->i_ino, de->flags[-high_sierra]);
+ 	}
+ #endif
+-	inode_set_mtime_to_ts(inode,
+-			      inode_set_atime_to_ts(inode, inode_set_ctime(inode, iso_date(de->date, high_sierra), 0)));
++	ts = iso_date(de->date, high_sierra ? ISO_DATE_HIGH_SIERRA : 0);
++	inode_set_ctime_to_ts(inode, ts);
++	inode_set_atime_to_ts(inode, ts);
++	inode_set_mtime_to_ts(inode, ts);
+ 
+ 	ei->i_first_extent = (isonum_733(de->extent) +
+ 			isonum_711(de->ext_attr_length));
+diff --git a/fs/isofs/isofs.h b/fs/isofs/isofs.h
+index 2d55207c9a9902..50655583753334 100644
+--- a/fs/isofs/isofs.h
++++ b/fs/isofs/isofs.h
+@@ -106,7 +106,9 @@ static inline unsigned int isonum_733(u8 *p)
+ 	/* Ignore bigendian datum due to broken mastering programs */
+ 	return get_unaligned_le32(p);
+ }
+-extern int iso_date(u8 *, int);
++#define ISO_DATE_HIGH_SIERRA (1 << 0)
++#define ISO_DATE_LONG_FORM (1 << 1)
++struct timespec64 iso_date(u8 *p, int flags);
+ 
+ struct inode;		/* To make gcc happy */
+ 
+diff --git a/fs/isofs/rock.c b/fs/isofs/rock.c
+index dbf911126e610e..576498245b9d7c 100644
+--- a/fs/isofs/rock.c
++++ b/fs/isofs/rock.c
+@@ -412,7 +412,12 @@ parse_rock_ridge_inode_internal(struct iso_directory_record *de,
+ 				}
+ 			}
+ 			break;
+-		case SIG('T', 'F'):
++		case SIG('T', 'F'): {
++			int flags, size, slen;
++
++			flags = rr->u.TF.flags & TF_LONG_FORM ? ISO_DATE_LONG_FORM : 0;
++			size = rr->u.TF.flags & TF_LONG_FORM ? 17 : 7;
++			slen = rr->len - 5;
+ 			/*
+ 			 * Some RRIP writers incorrectly place ctime in the
+ 			 * TF_CREATE field. Try to handle this correctly for
+@@ -420,27 +425,28 @@ parse_rock_ridge_inode_internal(struct iso_directory_record *de,
+ 			 */
+ 			/* Rock ridge never appears on a High Sierra disk */
+ 			cnt = 0;
+-			if (rr->u.TF.flags & TF_CREATE) {
+-				inode_set_ctime(inode,
+-						iso_date(rr->u.TF.times[cnt++].time, 0),
+-						0);
++			if ((rr->u.TF.flags & TF_CREATE) && size <= slen) {
++				inode_set_ctime_to_ts(inode,
++						iso_date(rr->u.TF.data + size * cnt++, flags));
++				slen -= size;
+ 			}
+-			if (rr->u.TF.flags & TF_MODIFY) {
+-				inode_set_mtime(inode,
+-						iso_date(rr->u.TF.times[cnt++].time, 0),
+-						0);
++			if ((rr->u.TF.flags & TF_MODIFY) && size <= slen) {
++				inode_set_mtime_to_ts(inode,
++						iso_date(rr->u.TF.data + size * cnt++, flags));
++				slen -= size;
+ 			}
+-			if (rr->u.TF.flags & TF_ACCESS) {
+-				inode_set_atime(inode,
+-						iso_date(rr->u.TF.times[cnt++].time, 0),
+-						0);
++			if ((rr->u.TF.flags & TF_ACCESS) && size <= slen) {
++				inode_set_atime_to_ts(inode,
++						iso_date(rr->u.TF.data + size * cnt++, flags));
++				slen -= size;
+ 			}
+-			if (rr->u.TF.flags & TF_ATTRIBUTES) {
+-				inode_set_ctime(inode,
+-						iso_date(rr->u.TF.times[cnt++].time, 0),
+-						0);
++			if ((rr->u.TF.flags & TF_ATTRIBUTES) && size <= slen) {
++				inode_set_ctime_to_ts(inode,
++						iso_date(rr->u.TF.data + size * cnt++, flags));
++				slen -= size;
+ 			}
+ 			break;
++		}
+ 		case SIG('S', 'L'):
+ 			{
+ 				int slen;
+diff --git a/fs/isofs/rock.h b/fs/isofs/rock.h
+index 7755e587f77850..c0856fa9bb6a4e 100644
+--- a/fs/isofs/rock.h
++++ b/fs/isofs/rock.h
+@@ -65,13 +65,9 @@ struct RR_PL_s {
+ 	__u8 location[8];
+ };
+ 
+-struct stamp {
+-	__u8 time[7];		/* actually 6 unsigned, 1 signed */
+-} __attribute__ ((packed));
+-
+ struct RR_TF_s {
+ 	__u8 flags;
+-	struct stamp times[];	/* Variable number of these beasts */
++	__u8 data[];
+ } __attribute__ ((packed));
+ 
+ /* Linux-specific extension for transparent decompression */
+diff --git a/fs/isofs/util.c b/fs/isofs/util.c
+index e88dba72166187..42f479da0b282c 100644
+--- a/fs/isofs/util.c
++++ b/fs/isofs/util.c
+@@ -16,29 +16,44 @@
+  * to GMT.  Thus  we should always be correct.
+  */
+ 
+-int iso_date(u8 *p, int flag)
++struct timespec64 iso_date(u8 *p, int flags)
+ {
+ 	int year, month, day, hour, minute, second, tz;
+-	int crtime;
++	struct timespec64 ts;
++
++	if (flags & ISO_DATE_LONG_FORM) {
++		year = (p[0] - '0') * 1000 +
++		       (p[1] - '0') * 100 +
++		       (p[2] - '0') * 10 +
++		       (p[3] - '0') - 1900;
++		month = ((p[4] - '0') * 10 + (p[5] - '0'));
++		day = ((p[6] - '0') * 10 + (p[7] - '0'));
++		hour = ((p[8] - '0') * 10 + (p[9] - '0'));
++		minute = ((p[10] - '0') * 10 + (p[11] - '0'));
++		second = ((p[12] - '0') * 10 + (p[13] - '0'));
++		ts.tv_nsec = ((p[14] - '0') * 10 + (p[15] - '0')) * 10000000;
++		tz = p[16];
++	} else {
++		year = p[0];
++		month = p[1];
++		day = p[2];
++		hour = p[3];
++		minute = p[4];
++		second = p[5];
++		ts.tv_nsec = 0;
++		/* High sierra has no time zone */
++		tz = flags & ISO_DATE_HIGH_SIERRA ? 0 : p[6];
++	}
+ 
+-	year = p[0];
+-	month = p[1];
+-	day = p[2];
+-	hour = p[3];
+-	minute = p[4];
+-	second = p[5];
+-	if (flag == 0) tz = p[6]; /* High sierra has no time zone */
+-	else tz = 0;
+-	
+ 	if (year < 0) {
+-		crtime = 0;
++		ts.tv_sec = 0;
+ 	} else {
+-		crtime = mktime64(year+1900, month, day, hour, minute, second);
++		ts.tv_sec = mktime64(year+1900, month, day, hour, minute, second);
+ 
+ 		/* sign extend */
+ 		if (tz & 0x80)
+ 			tz |= (-1 << 8);
+-		
++
+ 		/* 
+ 		 * The timezone offset is unreliable on some disks,
+ 		 * so we make a sanity check.  In no case is it ever
+@@ -65,7 +80,7 @@ int iso_date(u8 *p, int flag)
+ 		 * for pointing out the sign error.
+ 		 */
+ 		if (-52 <= tz && tz <= 52)
+-			crtime -= tz * 15 * 60;
++			ts.tv_sec -= tz * 15 * 60;
+ 	}
+-	return crtime;
+-}		
++	return ts;
++}
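
The long form handled above is the 17-byte ASCII timestamp from ECMA-119
(decimal digits for the date and time, hundredths of a second, then a
signed timezone offset in 15-minute units), as opposed to the 7-byte binary
form. A standalone userspace parser of the same layout:

#include <stdio.h>

static int two(const char *p) { return (p[0] - '0') * 10 + (p[1] - '0'); }

static void parse_long_form(const char *p)
{
	int year = (p[0]-'0')*1000 + (p[1]-'0')*100 + (p[2]-'0')*10 + (p[3]-'0');
	int month = two(p + 4), day = two(p + 6), hour = two(p + 8);
	int minute = two(p + 10), second = two(p + 12);
	long nsec = two(p + 14) * 10000000L;	/* hundredths of a second */
	signed char tz = (signed char)p[16];	/* 15-minute offsets from GMT */

	printf("%04d-%02d-%02d %02d:%02d:%02d (+%ldns) tz=%d\n",
	       year, month, day, hour, minute, second, nsec, tz);
}

int main(void)
{
	parse_long_form("2025052610210000\0");	/* 16 digits + offset byte */
	return 0;
}
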
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index cbc4785462f537..c7867139af69dd 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -1509,7 +1509,7 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ 				jh->b_next_transaction == transaction);
+ 		spin_unlock(&jh->b_state_lock);
+ 	}
+-	if (jh->b_modified == 1) {
++	if (data_race(jh->b_modified == 1)) {
+ 		/* If it's in our transaction it must be in BJ_Metadata list. */
+ 		if (data_race(jh->b_transaction == transaction &&
+ 		    jh->b_jlist != BJ_Metadata)) {
+@@ -1528,7 +1528,6 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ 		goto out;
+ 	}
+ 
+-	journal = transaction->t_journal;
+ 	spin_lock(&jh->b_state_lock);
+ 
+ 	if (is_handle_aborted(handle)) {
+@@ -1543,6 +1542,8 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ 		goto out_unlock_bh;
+ 	}
+ 
++	journal = transaction->t_journal;
++
+ 	if (jh->b_modified == 0) {
+ 		/*
+ 		 * This buffer's got modified and becoming part
+diff --git a/fs/jffs2/erase.c b/fs/jffs2/erase.c
+index ef3a1e1b6cb065..fda9f4d6093f94 100644
+--- a/fs/jffs2/erase.c
++++ b/fs/jffs2/erase.c
+@@ -425,7 +425,9 @@ static void jffs2_mark_erased_block(struct jffs2_sb_info *c, struct jffs2_eraseb
+ 			.totlen =	cpu_to_je32(c->cleanmarker_size)
+ 		};
+ 
+-		jffs2_prealloc_raw_node_refs(c, jeb, 1);
++		ret = jffs2_prealloc_raw_node_refs(c, jeb, 1);
++		if (ret)
++			goto filebad;
+ 
+ 		marker.hdr_crc = cpu_to_je32(crc32(0, &marker, sizeof(struct jffs2_unknown_node)-4));
+ 
+diff --git a/fs/jffs2/scan.c b/fs/jffs2/scan.c
+index 29671e33a1714c..62879c218d4b11 100644
+--- a/fs/jffs2/scan.c
++++ b/fs/jffs2/scan.c
+@@ -256,7 +256,9 @@ int jffs2_scan_medium(struct jffs2_sb_info *c)
+ 
+ 		jffs2_dbg(1, "%s(): Skipping %d bytes in nextblock to ensure page alignment\n",
+ 			  __func__, skip);
+-		jffs2_prealloc_raw_node_refs(c, c->nextblock, 1);
++		ret = jffs2_prealloc_raw_node_refs(c, c->nextblock, 1);
++		if (ret)
++			goto out;
+ 		jffs2_scan_dirty_space(c, c->nextblock, skip);
+ 	}
+ #endif
+diff --git a/fs/jffs2/summary.c b/fs/jffs2/summary.c
+index 4fe64519870f1a..d83372d3e1a07b 100644
+--- a/fs/jffs2/summary.c
++++ b/fs/jffs2/summary.c
+@@ -858,7 +858,10 @@ int jffs2_sum_write_sumnode(struct jffs2_sb_info *c)
+ 	spin_unlock(&c->erase_completion_lock);
+ 
+ 	jeb = c->nextblock;
+-	jffs2_prealloc_raw_node_refs(c, jeb, 1);
++	ret = jffs2_prealloc_raw_node_refs(c, jeb, 1);
++
++	if (ret)
++		goto out;
+ 
+ 	if (!c->summary->sum_num || !c->summary->sum_list_head) {
+ 		JFFS2_WARNING("Empty summary info!!!\n");
+@@ -872,6 +875,8 @@ int jffs2_sum_write_sumnode(struct jffs2_sb_info *c)
+ 	datasize += padsize;
+ 
+ 	ret = jffs2_sum_write_data(c, jeb, infosize, datasize, padsize);
++
++out:
+ 	spin_lock(&c->erase_completion_lock);
+ 	return ret;
+ }
+diff --git a/fs/jfs/jfs_discard.c b/fs/jfs/jfs_discard.c
+index 5f4b305030ad5e..4b660296caf39c 100644
+--- a/fs/jfs/jfs_discard.c
++++ b/fs/jfs/jfs_discard.c
+@@ -86,7 +86,8 @@ int jfs_ioc_trim(struct inode *ip, struct fstrim_range *range)
+ 	down_read(&sb->s_umount);
+ 	bmp = JFS_SBI(ip->i_sb)->bmap;
+ 
+-	if (minlen > bmp->db_agsize ||
++	if (bmp == NULL ||
++	    minlen > bmp->db_agsize ||
+ 	    start >= bmp->db_mapsize ||
+ 	    range->len < sb->s_blocksize) {
+ 		up_read(&sb->s_umount);
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 26e89d0c69b61e..35e063c9f3a42e 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -194,7 +194,11 @@ int dbMount(struct inode *ipbmap)
+ 	    !bmp->db_numag || (bmp->db_numag > MAXAG) ||
+ 	    (bmp->db_maxag >= MAXAG) || (bmp->db_maxag < 0) ||
+ 	    (bmp->db_agpref >= MAXAG) || (bmp->db_agpref < 0) ||
+-	    !bmp->db_agwidth ||
++	    (bmp->db_agheight < 0) || (bmp->db_agheight > (L2LPERCTL >> 1)) ||
++	    (bmp->db_agwidth < 1) || (bmp->db_agwidth > (LPERCTL / MAXAG)) ||
++	    (bmp->db_agwidth > (1 << (L2LPERCTL - (bmp->db_agheight << 1)))) ||
++	    (bmp->db_agstart < 0) ||
++	    (bmp->db_agstart > (CTLTREESIZE - 1 - bmp->db_agwidth * (MAXAG - 1))) ||
+ 	    (bmp->db_agl2size > L2MAXL2SIZE - L2MAXAG) ||
+ 	    (bmp->db_agl2size < 0) ||
+ 	    ((bmp->db_mapsize - 1) >> bmp->db_agl2size) > MAXAG) {
+diff --git a/fs/jfs/jfs_dtree.c b/fs/jfs/jfs_dtree.c
+index 93db6eec446556..ab11849cf9cc3c 100644
+--- a/fs/jfs/jfs_dtree.c
++++ b/fs/jfs/jfs_dtree.c
+@@ -2613,7 +2613,7 @@ void dtInitRoot(tid_t tid, struct inode *ip, u32 idotdot)
+  *	     fsck.jfs should really fix this, but it currently does not.
+  *	     Called from jfs_readdir when bad index is detected.
+  */
+-static void add_missing_indices(struct inode *inode, s64 bn)
++static int add_missing_indices(struct inode *inode, s64 bn)
+ {
+ 	struct ldtentry *d;
+ 	struct dt_lock *dtlck;
+@@ -2622,7 +2622,7 @@ static void add_missing_indices(struct inode *inode, s64 bn)
+ 	struct lv *lv;
+ 	struct metapage *mp;
+ 	dtpage_t *p;
+-	int rc;
++	int rc = 0;
+ 	s8 *stbl;
+ 	tid_t tid;
+ 	struct tlock *tlck;
+@@ -2647,6 +2647,16 @@ static void add_missing_indices(struct inode *inode, s64 bn)
+ 
+ 	stbl = DT_GETSTBL(p);
+ 	for (i = 0; i < p->header.nextindex; i++) {
++		if (stbl[i] < 0) {
++			jfs_err("jfs: add_missing_indices: Invalid stbl[%d] = %d for inode %ld, block = %lld",
++				i, stbl[i], (long)inode->i_ino, (long long)bn);
++			rc = -EIO;
++
++			DT_PUTPAGE(mp);
++			txAbort(tid, 0);
++			goto end;
++		}
++
+ 		d = (struct ldtentry *) &p->slot[stbl[i]];
+ 		index = le32_to_cpu(d->index);
+ 		if ((index < 2) || (index >= JFS_IP(inode)->next_index)) {
+@@ -2664,6 +2674,7 @@ static void add_missing_indices(struct inode *inode, s64 bn)
+ 	(void) txCommit(tid, 1, &inode, 0);
+ end:
+ 	txEnd(tid);
++	return rc;
+ }
+ 
+ /*
+@@ -3017,7 +3028,8 @@ int jfs_readdir(struct file *file, struct dir_context *ctx)
+ 		}
+ 
+ 		if (fix_page) {
+-			add_missing_indices(ip, bn);
++			if ((rc = add_missing_indices(ip, bn)))
++				goto out;
+ 			page_fixed = 1;
+ 		}
+ 
+diff --git a/fs/libfs.c b/fs/libfs.c
+index 6393d7c49ee6e6..e28da9574a652b 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -1647,10 +1647,16 @@ struct inode *alloc_anon_inode(struct super_block *s)
+ 	 * that it already _is_ on the dirty list.
+ 	 */
+ 	inode->i_state = I_DIRTY;
+-	inode->i_mode = S_IRUSR | S_IWUSR;
++	/*
++	 * Historically anonymous inodes didn't have a type at all and
++	 * userspace has come to rely on this. Internally they're just
++	 * regular files but S_IFREG is masked off when reporting
++	 * information to userspace.
++	 */
++	inode->i_mode = S_IFREG | S_IRUSR | S_IWUSR;
+ 	inode->i_uid = current_fsuid();
+ 	inode->i_gid = current_fsgid();
+-	inode->i_flags |= S_PRIVATE;
++	inode->i_flags |= S_PRIVATE | S_ANON_INODE;
+ 	simple_inode_init_ts(inode);
+ 	return inode;
+ }
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c
+index 6d63b958c4bb13..d8fe7c0e7e052d 100644
+--- a/fs/nfs/client.c
++++ b/fs/nfs/client.c
+@@ -439,7 +439,7 @@ struct nfs_client *nfs_get_client(const struct nfs_client_initdata *cl_init)
+ 			spin_unlock(&nn->nfs_client_lock);
+ 			new = rpc_ops->init_client(new, cl_init);
+ 			if (!IS_ERR(new))
+-				 nfs_local_probe(new);
++				 nfs_local_probe_async(new);
+ 			return new;
+ 		}
+ 
+diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+index 4a304cf17c4b07..656d5c50bbce1c 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c
++++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+@@ -400,7 +400,7 @@ nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg,
+ 		 * keep ds_clp even if DS is local, so that if local IO cannot
+ 		 * proceed somehow, we can fall back to NFS whenever we want.
+ 		 */
+-		nfs_local_probe(ds->ds_clp);
++		nfs_local_probe_async(ds->ds_clp);
+ 		max_payload =
+ 			nfs_block_size(rpc_max_payload(ds->ds_clp->cl_rpcclient),
+ 				       NULL);
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 6655e5f32ec63c..69c2c10ee658c9 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -455,7 +455,6 @@ extern int nfs_wait_bit_killable(struct wait_bit_key *key, int mode);
+ 
+ #if IS_ENABLED(CONFIG_NFS_LOCALIO)
+ /* localio.c */
+-extern void nfs_local_probe(struct nfs_client *);
+ extern void nfs_local_probe_async(struct nfs_client *);
+ extern void nfs_local_probe_async_work(struct work_struct *);
+ extern struct nfsd_file *nfs_local_open_fh(struct nfs_client *,
+diff --git a/fs/nfs/localio.c b/fs/nfs/localio.c
+index e6d36b3d3fc059..510d0a16cfe917 100644
+--- a/fs/nfs/localio.c
++++ b/fs/nfs/localio.c
+@@ -171,7 +171,7 @@ static bool nfs_server_uuid_is_local(struct nfs_client *clp)
+  * - called after alloc_client and init_client (so cl_rpcclient exists)
+  * - this function is idempotent, it can be called for old or new clients
+  */
+-void nfs_local_probe(struct nfs_client *clp)
++static void nfs_local_probe(struct nfs_client *clp)
+ {
+ 	/* Disallow localio if disabled via sysfs or AUTH_SYS isn't used */
+ 	if (!localio_enabled ||
+@@ -191,14 +191,16 @@ void nfs_local_probe(struct nfs_client *clp)
+ 		nfs_localio_enable_client(clp);
+ 	nfs_uuid_end(&clp->cl_uuid);
+ }
+-EXPORT_SYMBOL_GPL(nfs_local_probe);
+ 
+ void nfs_local_probe_async_work(struct work_struct *work)
+ {
+ 	struct nfs_client *clp =
+ 		container_of(work, struct nfs_client, cl_local_probe_work);
+ 
++	if (!refcount_inc_not_zero(&clp->cl_count))
++		return;
+ 	nfs_local_probe(clp);
++	nfs_put_client(clp);
+ }
+ 
+ void nfs_local_probe_async(struct nfs_client *clp)
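
The probe work item can fire after the last regular reference to the client
has been dropped, so it now takes a reference conditionally and bails out
if the client is already dying, pairing the success case with
nfs_put_client(). The guard, modeled with C11 atomics:

#include <stdatomic.h>
#include <stdbool.h>

struct client { atomic_int refs; };

static bool get_if_live(struct client *c)
{
	int old = atomic_load(&c->refs);

	while (old != 0)
		if (atomic_compare_exchange_weak(&c->refs, &old, old + 1))
			return true;	/* refcount_inc_not_zero() succeeded */
	return false;			/* teardown already started: bail */
}

static void probe_work(struct client *c)
{
	if (!get_if_live(c))
		return;
	/* ... run the local I/O probe ... */
	atomic_fetch_sub(&c->refs, 1);	/* nfs_put_client() */
}
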
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 4b123bca65e12d..9db317e7dea177 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -3976,8 +3976,9 @@ static int _nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *f
+ 		     FATTR4_WORD0_CASE_INSENSITIVE |
+ 		     FATTR4_WORD0_CASE_PRESERVING;
+ 	if (minorversion)
+-		bitmask[2] = FATTR4_WORD2_SUPPATTR_EXCLCREAT |
+-			     FATTR4_WORD2_OPEN_ARGUMENTS;
++		bitmask[2] = FATTR4_WORD2_SUPPATTR_EXCLCREAT;
++	if (minorversion > 1)
++		bitmask[2] |= FATTR4_WORD2_OPEN_ARGUMENTS;
+ 
+ 	status = nfs4_call_sync(server->client, server, &msg, &args.seq_args, &res.seq_res, 0);
+ 	if (status == 0) {
+diff --git a/fs/nfs/read.c b/fs/nfs/read.c
+index 81bd1b9aba176f..3c1fa320b3f1bd 100644
+--- a/fs/nfs/read.c
++++ b/fs/nfs/read.c
+@@ -56,7 +56,8 @@ static int nfs_return_empty_folio(struct folio *folio)
+ {
+ 	folio_zero_segment(folio, 0, folio_size(folio));
+ 	folio_mark_uptodate(folio);
+-	folio_unlock(folio);
++	if (nfs_netfs_folio_unlock(folio))
++		folio_unlock(folio);
+ 	return 0;
+ }
+ 
+diff --git a/fs/nfsd/export.c b/fs/nfsd/export.c
+index 0363720280d4cb..88ae410b411333 100644
+--- a/fs/nfsd/export.c
++++ b/fs/nfsd/export.c
+@@ -1124,7 +1124,8 @@ __be32 check_nfsd_access(struct svc_export *exp, struct svc_rqst *rqstp,
+ 		    test_bit(XPT_PEER_AUTH, &xprt->xpt_flags))
+ 			goto ok;
+ 	}
+-	goto denied;
++	if (!may_bypass_gss)
++		goto denied;
+ 
+ ok:
+ 	/* legacy gss-only clients are always OK: */
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index b397246dae7b7e..d0358c801c428a 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -3766,7 +3766,8 @@ bool nfsd4_spo_must_allow(struct svc_rqst *rqstp)
+ 	struct nfs4_op_map *allow = &cstate->clp->cl_spo_must_allow;
+ 	u32 opiter;
+ 
+-	if (!cstate->minorversion)
++	if (rqstp->rq_procinfo != &nfsd_version4.vs_proc[NFSPROC4_COMPOUND] ||
++	    cstate->minorversion == 0)
+ 		return false;
+ 
+ 	if (cstate->spo_must_allowed)
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index e67420729ecd60..9eb8e570462251 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -3391,6 +3391,23 @@ static __be32 nfsd4_encode_fattr4_suppattr_exclcreat(struct xdr_stream *xdr,
+ 	return nfsd4_encode_bitmap4(xdr, supp[0], supp[1], supp[2]);
+ }
+ 
++/*
++ * Copied from generic_remap_checks/generic_remap_file_range_prep.
++ *
++ * These generic functions use the file system's s_blocksize, but
++ * individual file systems aren't required to use
++ * generic_remap_file_range_prep. Until there is a mechanism for
++ * determining a particular file system's (or file's) clone block
++ * size, this is the best NFSD can do.
++ */
++static __be32 nfsd4_encode_fattr4_clone_blksize(struct xdr_stream *xdr,
++						const struct nfsd4_fattr_args *args)
++{
++	struct inode *inode = d_inode(args->dentry);
++
++	return nfsd4_encode_uint32_t(xdr, inode->i_sb->s_blocksize);
++}
++
+ #ifdef CONFIG_NFSD_V4_SECURITY_LABEL
+ static __be32 nfsd4_encode_fattr4_sec_label(struct xdr_stream *xdr,
+ 					    const struct nfsd4_fattr_args *args)
+@@ -3545,7 +3562,7 @@ static const nfsd4_enc_attr nfsd4_enc_fattr4_encode_ops[] = {
+ 	[FATTR4_MODE_SET_MASKED]	= nfsd4_encode_fattr4__noop,
+ 	[FATTR4_SUPPATTR_EXCLCREAT]	= nfsd4_encode_fattr4_suppattr_exclcreat,
+ 	[FATTR4_FS_CHARSET_CAP]		= nfsd4_encode_fattr4__noop,
+-	[FATTR4_CLONE_BLKSIZE]		= nfsd4_encode_fattr4__noop,
++	[FATTR4_CLONE_BLKSIZE]		= nfsd4_encode_fattr4_clone_blksize,
+ 	[FATTR4_SPACE_FREED]		= nfsd4_encode_fattr4__noop,
+ 	[FATTR4_CHANGE_ATTR_TYPE]	= nfsd4_encode_fattr4__noop,
+ 
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index ac265d6fde35df..332bfb508c20bc 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1611,7 +1611,7 @@ int nfsd_nl_rpc_status_get_dumpit(struct sk_buff *skb,
+  */
+ int nfsd_nl_threads_set_doit(struct sk_buff *skb, struct genl_info *info)
+ {
+-	int *nthreads, count = 0, nrpools, i, ret = -EOPNOTSUPP, rem;
++	int *nthreads, nrpools = 0, i, ret = -EOPNOTSUPP, rem;
+ 	struct net *net = genl_info_net(info);
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+ 	const struct nlattr *attr;
+@@ -1623,12 +1623,11 @@ int nfsd_nl_threads_set_doit(struct sk_buff *skb, struct genl_info *info)
+ 	/* count number of SERVER_THREADS values */
+ 	nlmsg_for_each_attr(attr, info->nlhdr, GENL_HDRLEN, rem) {
+ 		if (nla_type(attr) == NFSD_A_SERVER_THREADS)
+-			count++;
++			nrpools++;
+ 	}
+ 
+ 	mutex_lock(&nfsd_mutex);
+ 
+-	nrpools = max(count, nfsd_nrpools(net));
+ 	nthreads = kcalloc(nrpools, sizeof(int), GFP_KERNEL);
+ 	if (!nthreads) {
+ 		ret = -ENOMEM;
+@@ -2291,12 +2290,9 @@ static int __init init_nfsd(void)
+ 	if (retval)
+ 		goto out_free_pnfs;
+ 	nfsd_lockd_init();	/* lockd->nfsd callbacks */
+-	retval = create_proc_exports_entry();
+-	if (retval)
+-		goto out_free_lockd;
+ 	retval = register_pernet_subsys(&nfsd_net_ops);
+ 	if (retval < 0)
+-		goto out_free_exports;
++		goto out_free_lockd;
+ 	retval = register_cld_notifier();
+ 	if (retval)
+ 		goto out_free_subsys;
+@@ -2305,22 +2301,26 @@ static int __init init_nfsd(void)
+ 		goto out_free_cld;
+ 	retval = register_filesystem(&nfsd_fs_type);
+ 	if (retval)
+-		goto out_free_all;
++		goto out_free_nfsd4;
+ 	retval = genl_register_family(&nfsd_nl_family);
++	if (retval)
++		goto out_free_filesystem;
++	retval = create_proc_exports_entry();
+ 	if (retval)
+ 		goto out_free_all;
+ 	nfsd_localio_ops_init();
+ 
+ 	return 0;
+ out_free_all:
++	genl_unregister_family(&nfsd_nl_family);
++out_free_filesystem:
++	unregister_filesystem(&nfsd_fs_type);
++out_free_nfsd4:
+ 	nfsd4_destroy_laundry_wq();
+ out_free_cld:
+ 	unregister_cld_notifier();
+ out_free_subsys:
+ 	unregister_pernet_subsys(&nfsd_net_ops);
+-out_free_exports:
+-	remove_proc_entry("fs/nfs/exports", NULL);
+-	remove_proc_entry("fs/nfs", NULL);
+ out_free_lockd:
+ 	nfsd_lockd_shutdown();
+ 	nfsd_drc_slab_free();
+@@ -2333,14 +2333,14 @@ static int __init init_nfsd(void)
+ 
+ static void __exit exit_nfsd(void)
+ {
++	remove_proc_entry("fs/nfs/exports", NULL);
++	remove_proc_entry("fs/nfs", NULL);
+ 	genl_unregister_family(&nfsd_nl_family);
+ 	unregister_filesystem(&nfsd_fs_type);
+ 	nfsd4_destroy_laundry_wq();
+ 	unregister_cld_notifier();
+ 	unregister_pernet_subsys(&nfsd_net_ops);
+ 	nfsd_drc_slab_free();
+-	remove_proc_entry("fs/nfs/exports", NULL);
+-	remove_proc_entry("fs/nfs", NULL);
+ 	nfsd_lockd_shutdown();
+ 	nfsd4_free_slabs();
+ 	nfsd4_exit_pnfs();
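
The init_nfsd() rework makes the registration order and the error labels
exact mirror images: previously a late failure (for example in
genl_register_family()) jumped to a label that skipped unregistering the
filesystem type, and the proc entries were created early but torn down out
of order. The general goto-unwind idiom the reorder restores, sketched:

static int step_a(void) { return 0; }
static int step_b(void) { return 0; }
static int step_c(void) { return 0; }
static void undo_a(void) { }
static void undo_b(void) { }

static int init_all(void)
{
	int err;

	err = step_a();
	if (err)
		return err;
	err = step_b();
	if (err)
		goto out_a;
	err = step_c();
	if (err)
		goto out_b;
	return 0;

out_b:
	undo_b();
out_a:
	undo_a();	/* unwind strictly in reverse order of setup */
	return err;
}
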
+diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
+index 9b3d6cff0e1e24..8ed143ef8b4115 100644
+--- a/fs/nfsd/nfssvc.c
++++ b/fs/nfsd/nfssvc.c
+@@ -396,13 +396,13 @@ static int nfsd_startup_net(struct net *net, const struct cred *cred)
+ 	if (ret)
+ 		goto out_filecache;
+ 
++#ifdef CONFIG_NFSD_V4_2_INTER_SSC
++	nfsd4_ssc_init_umount_work(nn);
++#endif
+ 	ret = nfs4_state_start_net(net);
+ 	if (ret)
+ 		goto out_reply_cache;
+ 
+-#ifdef CONFIG_NFSD_V4_2_INTER_SSC
+-	nfsd4_ssc_init_umount_work(nn);
+-#endif
+ 	nn->nfsd_net_up = true;
+ 	return 0;
+ 
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index 969b458100fe5e..dfea7bd800cb4b 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -48,8 +48,8 @@ static struct file *ovl_open_realfile(const struct file *file,
+ 		if (!inode_owner_or_capable(real_idmap, realinode))
+ 			flags &= ~O_NOATIME;
+ 
+-		realfile = backing_file_open(&file->f_path, flags, realpath,
+-					     current_cred());
++		realfile = backing_file_open(file_user_path((struct file *) file),
++					     flags, realpath, current_cred());
+ 	}
+ 	ovl_revert_creds(old_cred);
+ 
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index aef942a758cea5..c69c34e11c74dc 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -246,9 +246,11 @@ static inline struct dentry *ovl_do_mkdir(struct ovl_fs *ofs,
+ 					  struct dentry *dentry,
+ 					  umode_t mode)
+ {
+-	dentry = vfs_mkdir(ovl_upper_mnt_idmap(ofs), dir, dentry, mode);
+-	pr_debug("mkdir(%pd2, 0%o) = %i\n", dentry, mode, PTR_ERR_OR_ZERO(dentry));
+-	return dentry;
++	struct dentry *ret;
++
++	ret = vfs_mkdir(ovl_upper_mnt_idmap(ofs), dir, dentry, mode);
++	pr_debug("mkdir(%pd2, 0%o) = %i\n", dentry, mode, PTR_ERR_OR_ZERO(ret));
++	return ret;
+ }
+ 
+ static inline int ovl_do_mknod(struct ovl_fs *ofs,
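
The old ovl_do_mkdir() overwrote its only known-good dentry pointer with
the return value of vfs_mkdir(), which may be an error pointer, and then
passed it to the %pd2 debug print; dereferencing an ERR_PTR there can
crash. Keeping the result in a separate variable preserves a valid pointer
for logging. The bug class in miniature (userspace, with NULL standing in
for ERR_PTR):

#include <errno.h>
#include <stdio.h>

struct dentry { const char *name; };

/* stand-in for vfs_mkdir(): may fail */
static struct dentry *make_dir(struct dentry *d, int fail)
{
	return fail ? NULL : d;
}

static struct dentry *do_mkdir(struct dentry *dentry, int fail)
{
	struct dentry *ret = make_dir(dentry, fail);

	/* log through 'dentry', which is valid on both paths */
	printf("mkdir(%s) = %d\n", dentry->name, ret ? 0 : -EACCES);
	return ret;
}
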
+diff --git a/fs/pidfs.c b/fs/pidfs.c
+index 87a53d2ae4bb78..005976025ce9fd 100644
+--- a/fs/pidfs.c
++++ b/fs/pidfs.c
+@@ -826,7 +826,7 @@ static int pidfs_init_inode(struct inode *inode, void *data)
+ 	const struct pid *pid = data;
+ 
+ 	inode->i_private = data;
+-	inode->i_flags |= S_PRIVATE;
++	inode->i_flags |= S_PRIVATE | S_ANON_INODE;
+ 	inode->i_mode |= S_IRWXU;
+ 	inode->i_op = &pidfs_inode_operations;
+ 	inode->i_fop = &pidfs_file_operations;
+diff --git a/fs/smb/client/cached_dir.c b/fs/smb/client/cached_dir.c
+index 240d82c6f90806..9831eac7b948a8 100644
+--- a/fs/smb/client/cached_dir.c
++++ b/fs/smb/client/cached_dir.c
+@@ -486,8 +486,17 @@ void close_all_cached_dirs(struct cifs_sb_info *cifs_sb)
+ 		spin_lock(&cfids->cfid_list_lock);
+ 		list_for_each_entry(cfid, &cfids->entries, entry) {
+ 			tmp_list = kmalloc(sizeof(*tmp_list), GFP_ATOMIC);
+-			if (tmp_list == NULL)
+-				break;
++			if (tmp_list == NULL) {
++				/*
++				 * If the malloc() fails, we won't drop all
++				 * dentries, and unmounting is likely to trigger
++				 * a 'Dentry still in use' error.
++				 */
++				cifs_tcon_dbg(VFS, "Out of memory while dropping dentries\n");
++				spin_unlock(&cfids->cfid_list_lock);
++				spin_unlock(&cifs_sb->tlink_tree_lock);
++				goto done;
++			}
+ 			spin_lock(&cfid->fid_lock);
+ 			tmp_list->dentry = cfid->dentry;
+ 			cfid->dentry = NULL;
+@@ -499,6 +508,7 @@ void close_all_cached_dirs(struct cifs_sb_info *cifs_sb)
+ 	}
+ 	spin_unlock(&cifs_sb->tlink_tree_lock);
+ 
++done:
+ 	list_for_each_entry_safe(tmp_list, q, &entry, entry) {
+ 		list_del(&tmp_list->entry);
+ 		dput(tmp_list->dentry);
+diff --git a/fs/smb/client/cached_dir.h b/fs/smb/client/cached_dir.h
+index 1dfe79d947a62f..bc8a812ff95f8b 100644
+--- a/fs/smb/client/cached_dir.h
++++ b/fs/smb/client/cached_dir.h
+@@ -21,10 +21,10 @@ struct cached_dirent {
+ struct cached_dirents {
+ 	bool is_valid:1;
+ 	bool is_failed:1;
+-	struct dir_context *ctx; /*
+-				  * Only used to make sure we only take entries
+-				  * from a single context. Never dereferenced.
+-				  */
++	struct file *file; /*
++			    * Used to associate the cache with a single
++			    * open file instance.
++			    */
+ 	struct mutex de_mutex;
+ 	int pos;		 /* Expected ctx->pos */
+ 	struct list_head entries;
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 3b32116b0b4964..0c80ca352f3fae 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -1084,6 +1084,7 @@ struct cifs_chan {
+ };
+ 
+ #define CIFS_SES_FLAG_SCALE_CHANNELS (0x1)
++#define CIFS_SES_FLAGS_PENDING_QUERY_INTERFACES (0x2)
+ 
+ /*
+  * Session structure.  One of these for each uid session with a particular host
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index 6bf04d9a549138..f9aef60f1901ac 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -116,13 +116,9 @@ static void smb2_query_server_interfaces(struct work_struct *work)
+ 	rc = server->ops->query_server_interfaces(xid, tcon, false);
+ 	free_xid(xid);
+ 
+-	if (rc) {
+-		if (rc == -EOPNOTSUPP)
+-			return;
+-
++	if (rc)
+ 		cifs_dbg(FYI, "%s: failed to query server interfaces: %d\n",
+ 				__func__, rc);
+-	}
+ 
+ 	queue_delayed_work(cifsiod_wq, &tcon->query_interfaces,
+ 			   (SMB_INTERFACE_POLL_INTERVAL * HZ));
+@@ -377,6 +373,13 @@ static int __cifs_reconnect(struct TCP_Server_Info *server,
+ 	if (!cifs_tcp_ses_needs_reconnect(server, 1))
+ 		return 0;
+ 
++	/*
++	 * If the smb session has been marked for reconnect, also reconnect
++	 * all of its connections so none of them ends up in a bad state.
++	 */
++	if (mark_smb_session)
++		cifs_signal_cifsd_for_reconnect(server, mark_smb_session);
++
+ 	cifs_mark_tcp_ses_conns_for_reconnect(server, mark_smb_session);
+ 
+ 	cifs_abort_connection(server);
+@@ -385,7 +388,8 @@ static int __cifs_reconnect(struct TCP_Server_Info *server,
+ 		try_to_freeze();
+ 		cifs_server_lock(server);
+ 
+-		if (!cifs_swn_set_server_dstaddr(server)) {
++		if (!cifs_swn_set_server_dstaddr(server) &&
++		    !SERVER_IS_CHAN(server)) {
+ 			/* resolve the hostname again to make sure that IP address is up-to-date */
+ 			rc = reconn_set_ipaddr_from_hostname(server);
+ 			cifs_dbg(FYI, "%s: reconn_set_ipaddr_from_hostname: rc=%d\n", __func__, rc);
+@@ -4189,6 +4193,7 @@ cifs_negotiate_protocol(const unsigned int xid, struct cifs_ses *ses,
+ 		return 0;
+ 	}
+ 
++	server->lstrp = jiffies;
+ 	server->tcpStatus = CifsInNegotiate;
+ 	spin_unlock(&server->srv_lock);
+ 
+diff --git a/fs/smb/client/namespace.c b/fs/smb/client/namespace.c
+index e3f9213131c467..a6655807c0865a 100644
+--- a/fs/smb/client/namespace.c
++++ b/fs/smb/client/namespace.c
+@@ -146,6 +146,9 @@ static char *automount_fullpath(struct dentry *dentry, void *page)
+ 	}
+ 	spin_unlock(&tcon->tc_lock);
+ 
++	if (unlikely(!page))
++		return ERR_PTR(-ENOMEM);
++
+ 	s = dentry_path_raw(dentry, page, PATH_MAX);
+ 	if (IS_ERR(s))
+ 		return s;
+diff --git a/fs/smb/client/readdir.c b/fs/smb/client/readdir.c
+index 787d6bcb5d1dc4..c3feb26fcfd03a 100644
+--- a/fs/smb/client/readdir.c
++++ b/fs/smb/client/readdir.c
+@@ -850,9 +850,9 @@ static bool emit_cached_dirents(struct cached_dirents *cde,
+ }
+ 
+ static void update_cached_dirents_count(struct cached_dirents *cde,
+-					struct dir_context *ctx)
++					struct file *file)
+ {
+-	if (cde->ctx != ctx)
++	if (cde->file != file)
+ 		return;
+ 	if (cde->is_valid || cde->is_failed)
+ 		return;
+@@ -861,9 +861,9 @@ static void update_cached_dirents_count(struct cached_dirents *cde,
+ }
+ 
+ static void finished_cached_dirents_count(struct cached_dirents *cde,
+-					struct dir_context *ctx)
++					struct dir_context *ctx, struct file *file)
+ {
+-	if (cde->ctx != ctx)
++	if (cde->file != file)
+ 		return;
+ 	if (cde->is_valid || cde->is_failed)
+ 		return;
+@@ -876,11 +876,12 @@ static void finished_cached_dirents_count(struct cached_dirents *cde,
+ static void add_cached_dirent(struct cached_dirents *cde,
+ 			      struct dir_context *ctx,
+ 			      const char *name, int namelen,
+-			      struct cifs_fattr *fattr)
++			      struct cifs_fattr *fattr,
++				  struct file *file)
+ {
+ 	struct cached_dirent *de;
+ 
+-	if (cde->ctx != ctx)
++	if (cde->file != file)
+ 		return;
+ 	if (cde->is_valid || cde->is_failed)
+ 		return;
+@@ -910,7 +911,8 @@ static void add_cached_dirent(struct cached_dirents *cde,
+ static bool cifs_dir_emit(struct dir_context *ctx,
+ 			  const char *name, int namelen,
+ 			  struct cifs_fattr *fattr,
+-			  struct cached_fid *cfid)
++			  struct cached_fid *cfid,
++			  struct file *file)
+ {
+ 	bool rc;
+ 	ino_t ino = cifs_uniqueid_to_ino_t(fattr->cf_uniqueid);
+@@ -922,7 +924,7 @@ static bool cifs_dir_emit(struct dir_context *ctx,
+ 	if (cfid) {
+ 		mutex_lock(&cfid->dirents.de_mutex);
+ 		add_cached_dirent(&cfid->dirents, ctx, name, namelen,
+-				  fattr);
++				  fattr, file);
+ 		mutex_unlock(&cfid->dirents.de_mutex);
+ 	}
+ 
+@@ -1022,7 +1024,7 @@ static int cifs_filldir(char *find_entry, struct file *file,
+ 	cifs_prime_dcache(file_dentry(file), &name, &fattr);
+ 
+ 	return !cifs_dir_emit(ctx, name.name, name.len,
+-			      &fattr, cfid);
++			      &fattr, cfid, file);
+ }
+ 
+ 
+@@ -1073,8 +1075,8 @@ int cifs_readdir(struct file *file, struct dir_context *ctx)
+ 	 * we need to initialize scanning and storing the
+ 	 * directory content.
+ 	 */
+-	if (ctx->pos == 0 && cfid->dirents.ctx == NULL) {
+-		cfid->dirents.ctx = ctx;
++	if (ctx->pos == 0 && cfid->dirents.file == NULL) {
++		cfid->dirents.file = file;
+ 		cfid->dirents.pos = 2;
+ 	}
+ 	/*
+@@ -1142,7 +1144,7 @@ int cifs_readdir(struct file *file, struct dir_context *ctx)
+ 	} else {
+ 		if (cfid) {
+ 			mutex_lock(&cfid->dirents.de_mutex);
+-			finished_cached_dirents_count(&cfid->dirents, ctx);
++			finished_cached_dirents_count(&cfid->dirents, ctx, file);
+ 			mutex_unlock(&cfid->dirents.de_mutex);
+ 		}
+ 		cifs_dbg(FYI, "Could not find entry\n");
+@@ -1183,7 +1185,7 @@ int cifs_readdir(struct file *file, struct dir_context *ctx)
+ 		ctx->pos++;
+ 		if (cfid) {
+ 			mutex_lock(&cfid->dirents.de_mutex);
+-			update_cached_dirents_count(&cfid->dirents, ctx);
++			update_cached_dirents_count(&cfid->dirents, file);
+ 			mutex_unlock(&cfid->dirents.de_mutex);
+ 		}
+ 
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index bb25e77c5540c2..511611206dab48 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -1172,7 +1172,6 @@ static bool wsl_to_fattr(struct cifs_open_info_data *data,
+ 	if (!have_xattr_dev && (tag == IO_REPARSE_TAG_LX_CHR || tag == IO_REPARSE_TAG_LX_BLK))
+ 		return false;
+ 
+-	fattr->cf_dtype = S_DT(fattr->cf_mode);
+ 	return true;
+ }
+ 
+diff --git a/fs/smb/client/sess.c b/fs/smb/client/sess.c
+index b3fa9ee2691272..12c99fb4023dc2 100644
+--- a/fs/smb/client/sess.c
++++ b/fs/smb/client/sess.c
+@@ -445,6 +445,10 @@ cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server)
+ 
+ 	ses->chans[chan_index].iface = iface;
+ 	spin_unlock(&ses->chan_lock);
++
++	spin_lock(&server->srv_lock);
++	memcpy(&server->dstaddr, &iface->sockaddr, sizeof(server->dstaddr));
++	spin_unlock(&server->srv_lock);
+ }
+ 
+ static int
+@@ -494,8 +498,7 @@ cifs_ses_add_channel(struct cifs_ses *ses,
+ 	ctx->domainauto = ses->domainAuto;
+ 	ctx->domainname = ses->domainName;
+ 
+-	/* no hostname for extra channels */
+-	ctx->server_hostname = "";
++	ctx->server_hostname = ses->server->hostname;
+ 
+ 	ctx->username = ses->user_name;
+ 	ctx->password = ses->password;
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 399185ca7cacb0..72903265b17069 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -411,14 +411,23 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon,
+ 	if (!rc &&
+ 	    (server->capabilities & SMB2_GLOBAL_CAP_MULTI_CHANNEL) &&
+ 	    server->ops->query_server_interfaces) {
+-		mutex_unlock(&ses->session_mutex);
+-
+ 		/*
+-		 * query server network interfaces, in case they change
++		 * query server network interfaces, in case they change.
++		 * Also mark the session as pending this update while the query
++		 * is in progress. This will be used to avoid calling
++		 * smb2_reconnect recursively.
+ 		 */
++		ses->flags |= CIFS_SES_FLAGS_PENDING_QUERY_INTERFACES;
+ 		xid = get_xid();
+ 		rc = server->ops->query_server_interfaces(xid, tcon, false);
+ 		free_xid(xid);
++		ses->flags &= ~CIFS_SES_FLAGS_PENDING_QUERY_INTERFACES;
++
++	/* regardless of rc value, set up polling */
++		queue_delayed_work(cifsiod_wq, &tcon->query_interfaces,
++				   (SMB_INTERFACE_POLL_INTERVAL * HZ));
++
++		mutex_unlock(&ses->session_mutex);
+ 
+ 		if (rc == -EOPNOTSUPP && ses->chan_count > 1) {
+ 			/*
+@@ -438,11 +447,8 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon,
+ 		if (ses->chan_max > ses->chan_count &&
+ 		    ses->iface_count &&
+ 		    !SERVER_IS_CHAN(server)) {
+-			if (ses->chan_count == 1) {
++			if (ses->chan_count == 1)
+ 				cifs_server_dbg(VFS, "supports multichannel now\n");
+-				queue_delayed_work(cifsiod_wq, &tcon->query_interfaces,
+-						 (SMB_INTERFACE_POLL_INTERVAL * HZ));
+-			}
+ 
+ 			cifs_try_adding_channels(ses);
+ 		}
+@@ -560,11 +566,18 @@ static int smb2_ioctl_req_init(u32 opcode, struct cifs_tcon *tcon,
+ 			       struct TCP_Server_Info *server,
+ 			       void **request_buf, unsigned int *total_len)
+ {
+-	/* Skip reconnect only for FSCTL_VALIDATE_NEGOTIATE_INFO IOCTLs */
+-	if (opcode == FSCTL_VALIDATE_NEGOTIATE_INFO) {
++	/*
++	 * Skip reconnect in one of the following cases:
++	 * 1. For FSCTL_VALIDATE_NEGOTIATE_INFO IOCTLs
++	 * 2. For FSCTL_QUERY_NETWORK_INTERFACE_INFO IOCTL when called from
++	 * smb2_reconnect (indicated by CIFS_SES_FLAGS_PENDING_QUERY_INTERFACES)
++	 */
++	if (opcode == FSCTL_VALIDATE_NEGOTIATE_INFO ||
++	    (opcode == FSCTL_QUERY_NETWORK_INTERFACE_INFO &&
++	     (tcon->ses->flags & CIFS_SES_FLAGS_PENDING_QUERY_INTERFACES)))
+ 		return __smb2_plain_req_init(SMB2_IOCTL, tcon, server,
+ 					     request_buf, total_len);
+-	}
++
+ 	return smb2_plain_req_init(SMB2_IOCTL, tcon, server,
+ 				   request_buf, total_len);
+ }
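
The smb2pdu.c hunks above bracket the interface query with a per-session flag and teach smb2_ioctl_req_init() to skip reconnect handling while that flag is set, breaking the recursion between smb2_reconnect() and the query ioctl. A minimal standalone sketch of this reentrancy-guard pattern, with illustrative names rather than the kernel's:

#include <stdbool.h>
#include <stdio.h>

struct session {
	bool pending_query;	/* guard: a query is already in flight */
};

static int issue_ioctl(struct session *s);

/* Reconnect path: run the query with the guard flag held. */
static int reconnect(struct session *s)
{
	int rc;

	s->pending_query = true;	/* mark query in progress */
	rc = issue_ioctl(s);		/* may call back into this layer */
	s->pending_query = false;	/* always cleared, even on error */
	return rc;
}

/* Ioctl path: skip the recursive reconnect while the guard is set. */
static int issue_ioctl(struct session *s)
{
	if (s->pending_query) {
		puts("query in flight: plain init, no reconnect");
		return 0;
	}
	puts("normal path: reconnect-aware init");
	return 0;
}

int main(void)
{
	struct session s = { .pending_query = false };

	return reconnect(&s);
}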
+diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
+index b0b7254661e926..9d8be034f103f2 100644
+--- a/fs/smb/client/smbdirect.c
++++ b/fs/smb/client/smbdirect.c
+@@ -2552,13 +2552,14 @@ static ssize_t smb_extract_folioq_to_rdma(struct iov_iter *iter,
+ 		size_t fsize = folioq_folio_size(folioq, slot);
+ 
+ 		if (offset < fsize) {
+-			size_t part = umin(maxsize - ret, fsize - offset);
++			size_t part = umin(maxsize, fsize - offset);
+ 
+ 			if (!smb_set_sge(rdma, folio_page(folio, 0), offset, part))
+ 				return -EIO;
+ 
+ 			offset += part;
+ 			ret += part;
++			maxsize -= part;
+ 		}
+ 
+ 		if (offset >= fsize) {
+@@ -2573,7 +2574,7 @@ static ssize_t smb_extract_folioq_to_rdma(struct iov_iter *iter,
+ 				slot = 0;
+ 			}
+ 		}
+-	} while (rdma->nr_sge < rdma->max_sge || maxsize > 0);
++	} while (rdma->nr_sge < rdma->max_sge && maxsize > 0);
+ 
+ 	iter->folioq = folioq;
+ 	iter->folioq_slot = slot;
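
The smbdirect fix above changes the per-iteration bound from umin(maxsize - ret, ...) to umin(maxsize, ...), decrements maxsize as bytes are consumed, and tightens the loop condition from || to && so the walk stops once either the SGE table or the byte budget is exhausted. A standalone sketch of the corrected accounting, with plain buffer chunks standing in for folios:

#include <stddef.h>
#include <stdio.h>

static size_t umin(size_t a, size_t b) { return a < b ? a : b; }

/* Consume up to maxsize bytes from fixed-size chunks, at most max_sge of them. */
static size_t extract(const size_t *chunk_sizes, size_t nchunks,
		      size_t max_sge, size_t maxsize)
{
	size_t ret = 0, nr_sge = 0, i = 0;

	while (nr_sge < max_sge && maxsize > 0 && i < nchunks) {
		size_t part = umin(maxsize, chunk_sizes[i]);

		ret += part;
		maxsize -= part;	/* shrink the remaining budget */
		nr_sge++;		/* one SGE per chunk consumed */
		i++;
	}
	return ret;
}

int main(void)
{
	size_t chunks[] = { 4096, 4096, 4096 };

	/* Budget of 6000 bytes: expect 4096 + 1904, not a runaway walk. */
	printf("%zu\n", extract(chunks, 3, 16, 6000));
	return 0;
}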
+diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c
+index 266af17aa7d99a..191783f553ce88 100644
+--- a/fs/smb/client/transport.c
++++ b/fs/smb/client/transport.c
+@@ -1018,14 +1018,16 @@ struct TCP_Server_Info *cifs_pick_channel(struct cifs_ses *ses)
+ 	uint index = 0;
+ 	unsigned int min_in_flight = UINT_MAX, max_in_flight = 0;
+ 	struct TCP_Server_Info *server = NULL;
+-	int i;
++	int i, start, cur;
+ 
+ 	if (!ses)
+ 		return NULL;
+ 
+ 	spin_lock(&ses->chan_lock);
++	start = atomic_inc_return(&ses->chan_seq);
+ 	for (i = 0; i < ses->chan_count; i++) {
+-		server = ses->chans[i].server;
++		cur = (start + i) % ses->chan_count;
++		server = ses->chans[cur].server;
+ 		if (!server || server->terminate)
+ 			continue;
+ 
+@@ -1042,17 +1044,15 @@ struct TCP_Server_Info *cifs_pick_channel(struct cifs_ses *ses)
+ 		 */
+ 		if (server->in_flight < min_in_flight) {
+ 			min_in_flight = server->in_flight;
+-			index = i;
++			index = cur;
+ 		}
+ 		if (server->in_flight > max_in_flight)
+ 			max_in_flight = server->in_flight;
+ 	}
+ 
+ 	/* if all channels are equally loaded, fall back to round-robin */
+-	if (min_in_flight == max_in_flight) {
+-		index = (uint)atomic_inc_return(&ses->chan_seq);
+-		index %= ses->chan_count;
+-	}
++	if (min_in_flight == max_in_flight)
++		index = (uint)start % ses->chan_count;
+ 
+ 	server = ses->chans[index].server;
+ 	spin_unlock(&ses->chan_lock);
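
The transport.c hunk above rotates the scan's starting channel on every call, so that both the least-loaded scan and the equal-load round-robin fallback stop favouring channel 0. A standalone sketch of the selection logic under that assumption:

#include <limits.h>
#include <stdio.h>

struct chan { unsigned int in_flight; };

/* Pick the least-loaded channel, scanning from a rotating start index. */
static unsigned int pick_channel(struct chan *chans, unsigned int count,
				 unsigned int *seq)
{
	unsigned int start = (*seq)++;
	unsigned int min_if = UINT_MAX, max_if = 0, index = 0;

	for (unsigned int i = 0; i < count; i++) {
		unsigned int cur = (start + i) % count;

		if (chans[cur].in_flight < min_if) {
			min_if = chans[cur].in_flight;
			index = cur;
		}
		if (chans[cur].in_flight > max_if)
			max_if = chans[cur].in_flight;
	}

	/* All equally loaded: fall back to plain round-robin. */
	if (min_if == max_if)
		index = start % count;
	return index;
}

int main(void)
{
	struct chan chans[3] = { {2}, {2}, {2} };
	unsigned int seq = 0;

	for (int i = 0; i < 4; i++)
		printf("picked %u\n", pick_channel(chans, 3, &seq));
	return 0;
}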
+diff --git a/fs/smb/server/connection.c b/fs/smb/server/connection.c
+index 83764c230e9d4c..3f04a2977ba86c 100644
+--- a/fs/smb/server/connection.c
++++ b/fs/smb/server/connection.c
+@@ -40,7 +40,7 @@ void ksmbd_conn_free(struct ksmbd_conn *conn)
+ 	kvfree(conn->request_buf);
+ 	kfree(conn->preauth_info);
+ 	if (atomic_dec_and_test(&conn->refcnt)) {
+-		ksmbd_free_transport(conn->transport);
++		conn->transport->ops->free_transport(conn->transport);
+ 		kfree(conn);
+ 	}
+ }
+diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h
+index 14620e147dda57..572102098c1080 100644
+--- a/fs/smb/server/connection.h
++++ b/fs/smb/server/connection.h
+@@ -132,6 +132,7 @@ struct ksmbd_transport_ops {
+ 			  void *buf, unsigned int len,
+ 			  struct smb2_buffer_desc_v1 *desc,
+ 			  unsigned int desc_len);
++	void (*free_transport)(struct ksmbd_transport *kt);
+ };
+ 
+ struct ksmbd_transport {
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index f2a2be8467c669..c6b990c93bfa75 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -1607,17 +1607,18 @@ static int krb5_authenticate(struct ksmbd_work *work,
+ 	out_len = work->response_sz -
+ 		(le16_to_cpu(rsp->SecurityBufferOffset) + 4);
+ 
+-	/* Check previous session */
+-	prev_sess_id = le64_to_cpu(req->PreviousSessionId);
+-	if (prev_sess_id && prev_sess_id != sess->id)
+-		destroy_previous_session(conn, sess->user, prev_sess_id);
+-
+ 	retval = ksmbd_krb5_authenticate(sess, in_blob, in_len,
+ 					 out_blob, &out_len);
+ 	if (retval) {
+ 		ksmbd_debug(SMB, "krb5 authentication failed\n");
+ 		return -EINVAL;
+ 	}
++
++	/* Check previous session */
++	prev_sess_id = le64_to_cpu(req->PreviousSessionId);
++	if (prev_sess_id && prev_sess_id != sess->id)
++		destroy_previous_session(conn, sess->user, prev_sess_id);
++
+ 	rsp->SecurityBufferLength = cpu_to_le16(out_len);
+ 
+ 	if ((conn->sign || server_conf.enforced_signing) ||
+diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c
+index 4998df04ab95ae..64a428a06ace0c 100644
+--- a/fs/smb/server/transport_rdma.c
++++ b/fs/smb/server/transport_rdma.c
+@@ -159,7 +159,8 @@ struct smb_direct_transport {
+ };
+ 
+ #define KSMBD_TRANS(t) ((struct ksmbd_transport *)&((t)->transport))
+-
++#define SMBD_TRANS(t)	((struct smb_direct_transport *)container_of(t, \
++				struct smb_direct_transport, transport))
+ enum {
+ 	SMB_DIRECT_MSG_NEGOTIATE_REQ = 0,
+ 	SMB_DIRECT_MSG_DATA_TRANSFER
+@@ -410,6 +411,11 @@ static struct smb_direct_transport *alloc_transport(struct rdma_cm_id *cm_id)
+ 	return NULL;
+ }
+ 
++static void smb_direct_free_transport(struct ksmbd_transport *kt)
++{
++	kfree(SMBD_TRANS(kt));
++}
++
+ static void free_transport(struct smb_direct_transport *t)
+ {
+ 	struct smb_direct_recvmsg *recvmsg;
+@@ -455,7 +461,6 @@ static void free_transport(struct smb_direct_transport *t)
+ 
+ 	smb_direct_destroy_pools(t);
+ 	ksmbd_conn_free(KSMBD_TRANS(t)->conn);
+-	kfree(t);
+ }
+ 
+ static struct smb_direct_sendmsg
+@@ -2281,4 +2286,5 @@ static const struct ksmbd_transport_ops ksmbd_smb_direct_transport_ops = {
+ 	.read		= smb_direct_read,
+ 	.rdma_read	= smb_direct_rdma_read,
+ 	.rdma_write	= smb_direct_rdma_write,
++	.free_transport = smb_direct_free_transport,
+ };
+diff --git a/fs/smb/server/transport_tcp.c b/fs/smb/server/transport_tcp.c
+index abedf510899a74..4e9f98db9ff409 100644
+--- a/fs/smb/server/transport_tcp.c
++++ b/fs/smb/server/transport_tcp.c
+@@ -93,7 +93,7 @@ static struct tcp_transport *alloc_transport(struct socket *client_sk)
+ 	return t;
+ }
+ 
+-void ksmbd_free_transport(struct ksmbd_transport *kt)
++static void ksmbd_tcp_free_transport(struct ksmbd_transport *kt)
+ {
+ 	struct tcp_transport *t = TCP_TRANS(kt);
+ 
+@@ -656,4 +656,5 @@ static const struct ksmbd_transport_ops ksmbd_tcp_transport_ops = {
+ 	.read		= ksmbd_tcp_read,
+ 	.writev		= ksmbd_tcp_writev,
+ 	.disconnect	= ksmbd_tcp_disconnect,
++	.free_transport = ksmbd_tcp_free_transport,
+ };
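
The ksmbd changes above move transport teardown behind a free_transport method in ksmbd_transport_ops, so generic connection code can free a transport without knowing whether it is TCP or RDMA. A standalone sketch of that virtual-destructor-in-C pattern, reduced to illustrative types:

#include <stdio.h>
#include <stdlib.h>

struct transport;

struct transport_ops {
	const char *name;
	void (*free_transport)(struct transport *t);	/* per-type destructor */
};

struct transport {
	const struct transport_ops *ops;
	/* type-specific state would follow in the real containers */
};

static void tcp_free(struct transport *t)
{
	printf("freeing %s transport\n", t->ops->name);
	free(t);
}

static const struct transport_ops tcp_ops = {
	.name		= "tcp",
	.free_transport	= tcp_free,
};

/* Generic code: no knowledge of the concrete transport type. */
static void conn_free(struct transport *t)
{
	t->ops->free_transport(t);
}

int main(void)
{
	struct transport *t = malloc(sizeof(*t));

	if (!t)
		return 1;
	t->ops = &tcp_ops;
	conn_free(t);
	return 0;
}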
+diff --git a/fs/xattr.c b/fs/xattr.c
+index 8ec5b0204bfdc5..600ae97969cf24 100644
+--- a/fs/xattr.c
++++ b/fs/xattr.c
+@@ -1479,6 +1479,7 @@ ssize_t simple_xattr_list(struct inode *inode, struct simple_xattrs *xattrs,
+ 		buffer += err;
+ 	}
+ 	remaining_size -= err;
++	err = 0;
+ 
+ 	read_lock(&xattrs->lock);
+ 	for (rbp = rb_first(&xattrs->rb_root); rbp; rbp = rb_next(rbp)) {
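
The fs/xattr.c fix above resets err to 0 after it has been used to carry a byte count, so a leftover positive length cannot later be mistaken for an error (or a result) on a path that never reassigns it. A standalone sketch of this bug class, with made-up names and sizes:

#include <stdio.h>

/* err doubles as a byte count and a status code; the reset keeps the
 * two roles from bleeding into each other. */
static int list_names(char *buf, int buflen)
{
	int err;
	int remaining = buflen;

	err = 7;		/* stands in for "copied 7 bytes" */
	remaining -= err;
	err = 0;		/* the fix: back to "no error" before the loop */

	for (int i = 0; i < 3; i++) {
		if (remaining < 4) {
			err = -34;	/* -ERANGE: buffer too small */
			break;
		}
		remaining -= 4;
	}

	/* Without the reset, a clean run would return 7 instead of 19. */
	return err ? err : buflen - remaining;
}

int main(void)
{
	char buf[32];

	printf("%d\n", list_names(buf, sizeof(buf)));
	return 0;
}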
+diff --git a/include/acpi/actypes.h b/include/acpi/actypes.h
+index f7b3c4a4b7e7c3..5b9f9a6125484f 100644
+--- a/include/acpi/actypes.h
++++ b/include/acpi/actypes.h
+@@ -527,7 +527,7 @@ typedef u64 acpi_integer;
+ 
+ /* Support for the special RSDP signature (8 characters) */
+ 
+-#define ACPI_VALIDATE_RSDP_SIG(a)       (!strncmp (ACPI_CAST_PTR (char, (a)), ACPI_SIG_RSDP, 8))
++#define ACPI_VALIDATE_RSDP_SIG(a)       (!strncmp (ACPI_CAST_PTR (char, (a)), ACPI_SIG_RSDP, (sizeof(a) < 8) ? ACPI_NAMESEG_SIZE : 8))
+ #define ACPI_MAKE_RSDP_SIG(dest)        (memcpy (ACPI_CAST_PTR (char, (dest)), ACPI_SIG_RSDP, 8))
+ 
+ /* Support for OEMx signature (x can be any character) */
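
The actypes.h change above clamps the strncmp() length when the supplied signature buffer is smaller than the 8-byte "RSD PTR " string, using sizeof on the macro argument. This only works when the argument is a genuine array (sizeof on a pointer yields the pointer size), which is an assumption the macro makes about its callers. A standalone illustration, with ACPI_NAMESEG_SIZE replaced by its value of 4:

#include <stdio.h>
#include <string.h>

#define SIG_RSDP "RSD PTR "	/* 8 significant characters */

/* Compare only as many bytes as the argument can actually hold. */
#define VALIDATE_RSDP_SIG(a) \
	(!strncmp((const char *)(a), SIG_RSDP, (sizeof(a) < 8) ? 4 : 8))

int main(void)
{
	char full[9] = "RSD PTR ";
	char shrt[4] = { 'R', 'S', 'D', ' ' };	/* 4-byte table signature */

	printf("full: %d\n", VALIDATE_RSDP_SIG(full));	/* compares 8 bytes */
	printf("short: %d\n", VALIDATE_RSDP_SIG(shrt));	/* compares 4 bytes */
	return 0;
}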
+diff --git a/include/drm/display/drm_dp_helper.h b/include/drm/display/drm_dp_helper.h
+index 5ae4241959f24e..736dbfdd6321de 100644
+--- a/include/drm/display/drm_dp_helper.h
++++ b/include/drm/display/drm_dp_helper.h
+@@ -518,6 +518,11 @@ struct drm_dp_aux {
+ 	 * @powered_down: If true then the remote endpoint is powered down.
+ 	 */
+ 	bool powered_down;
++
++	/**
++	 * @no_zero_sized: If the hw can't use zero sized transfers (NVIDIA)
++	 */
++	bool no_zero_sized;
+ };
+ 
+ int drm_dp_dpcd_probe(struct drm_dp_aux *aux, unsigned int offset);
+diff --git a/include/linux/acpi.h b/include/linux/acpi.h
+index 3f2e93ed973016..fc372bbaa54769 100644
+--- a/include/linux/acpi.h
++++ b/include/linux/acpi.h
+@@ -1125,13 +1125,13 @@ void acpi_os_set_prepare_extended_sleep(int (*func)(u8 sleep_state,
+ 
+ acpi_status acpi_os_prepare_extended_sleep(u8 sleep_state,
+ 					   u32 val_a, u32 val_b);
+-#if defined(CONFIG_SUSPEND) && defined(CONFIG_X86)
+ struct acpi_s2idle_dev_ops {
+ 	struct list_head list_node;
+ 	void (*prepare)(void);
+ 	void (*check)(void);
+ 	void (*restore)(void);
+ };
++#if defined(CONFIG_SUSPEND) && defined(CONFIG_X86)
+ int acpi_register_lps0_dev(struct acpi_s2idle_dev_ops *arg);
+ void acpi_unregister_lps0_dev(struct acpi_s2idle_dev_ops *arg);
+ int acpi_get_lps0_constraint(struct acpi_device *adev);
+@@ -1140,6 +1140,13 @@ static inline int acpi_get_lps0_constraint(struct device *dev)
+ {
+ 	return ACPI_STATE_UNKNOWN;
+ }
++static inline int acpi_register_lps0_dev(struct acpi_s2idle_dev_ops *arg)
++{
++	return -ENODEV;
++}
++static inline void acpi_unregister_lps0_dev(struct acpi_s2idle_dev_ops *arg)
++{
++}
+ #endif /* CONFIG_SUSPEND && CONFIG_X86 */
+ void arch_reserve_mem_area(acpi_physical_address addr, size_t size);
+ #else
+diff --git a/include/linux/atmdev.h b/include/linux/atmdev.h
+index 9b02961d65ee66..45f2f278b50a8a 100644
+--- a/include/linux/atmdev.h
++++ b/include/linux/atmdev.h
+@@ -249,6 +249,12 @@ static inline void atm_account_tx(struct atm_vcc *vcc, struct sk_buff *skb)
+ 	ATM_SKB(skb)->atm_options = vcc->atm_options;
+ }
+ 
++static inline void atm_return_tx(struct atm_vcc *vcc, struct sk_buff *skb)
++{
++	WARN_ON_ONCE(refcount_sub_and_test(ATM_SKB(skb)->acct_truesize,
++					   &sk_atm(vcc)->sk_wmem_alloc));
++}
++
+ static inline void atm_force_charge(struct atm_vcc *vcc,int truesize)
+ {
+ 	atomic_add(truesize, &sk_atm(vcc)->sk_rmem_alloc);
+diff --git a/include/linux/bus/stm32_firewall_device.h b/include/linux/bus/stm32_firewall_device.h
+index 5178b72bc92098..eaa7a3f5445072 100644
+--- a/include/linux/bus/stm32_firewall_device.h
++++ b/include/linux/bus/stm32_firewall_device.h
+@@ -114,27 +114,30 @@ void stm32_firewall_release_access_by_id(struct stm32_firewall *firewall, u32 su
+ 
+ #else /* CONFIG_STM32_FIREWALL */
+ 
+-int stm32_firewall_get_firewall(struct device_node *np, struct stm32_firewall *firewall,
+-				unsigned int nb_firewall)
++static inline int stm32_firewall_get_firewall(struct device_node *np,
++					      struct stm32_firewall *firewall,
++					      unsigned int nb_firewall)
+ {
+ 	return -ENODEV;
+ }
+ 
+-int stm32_firewall_grant_access(struct stm32_firewall *firewall)
++static inline int stm32_firewall_grant_access(struct stm32_firewall *firewall)
+ {
+ 	return -ENODEV;
+ }
+ 
+-void stm32_firewall_release_access(struct stm32_firewall *firewall)
++static inline void stm32_firewall_release_access(struct stm32_firewall *firewall)
+ {
+ }
+ 
+-int stm32_firewall_grant_access_by_id(struct stm32_firewall *firewall, u32 subsystem_id)
++static inline int stm32_firewall_grant_access_by_id(struct stm32_firewall *firewall,
++						    u32 subsystem_id)
+ {
+ 	return -ENODEV;
+ }
+ 
+-void stm32_firewall_release_access_by_id(struct stm32_firewall *firewall, u32 subsystem_id)
++static inline void stm32_firewall_release_access_by_id(struct stm32_firewall *firewall,
++						       u32 subsystem_id)
+ {
+ }
+ 
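
The stm32_firewall_device.h hunk above turns the !CONFIG_STM32_FIREWALL fallbacks into static inline functions: a non-inline definition in a header is emitted in every translation unit that includes it and collides at link time, while a static inline stub gives each includer a private, zero-cost no-op. A minimal illustration of the stub pattern, pretending the feature is compiled out:

#include <stdio.h>

/* What the header provides when the feature is disabled. */
static inline int feature_grant_access(void)
{
	return -19;	/* -ENODEV: feature compiled out */
}

static inline void feature_release_access(void)
{
}

int main(void)
{
	/* Callers compile unchanged whether the feature exists or not. */
	if (feature_grant_access() < 0)
		puts("feature unavailable, continuing without it");
	feature_release_access();
	return 0;
}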
+diff --git a/include/linux/codetag.h b/include/linux/codetag.h
+index 0ee4c21c6dbc7c..5f2b9a1f722c76 100644
+--- a/include/linux/codetag.h
++++ b/include/linux/codetag.h
+@@ -36,8 +36,8 @@ union codetag_ref {
+ struct codetag_type_desc {
+ 	const char *section;
+ 	size_t tag_size;
+-	void (*module_load)(struct module *mod,
+-			    struct codetag *start, struct codetag *end);
++	int (*module_load)(struct module *mod,
++			   struct codetag *start, struct codetag *end);
+ 	void (*module_unload)(struct module *mod,
+ 			      struct codetag *start, struct codetag *end);
+ #ifdef CONFIG_MODULES
+@@ -89,7 +89,7 @@ void *codetag_alloc_module_section(struct module *mod, const char *name,
+ 				   unsigned long align);
+ void codetag_free_module_sections(struct module *mod);
+ void codetag_module_replaced(struct module *mod, struct module *new_mod);
+-void codetag_load_module(struct module *mod);
++int codetag_load_module(struct module *mod);
+ void codetag_unload_module(struct module *mod);
+ 
+ #else /* defined(CONFIG_CODE_TAGGING) && defined(CONFIG_MODULES) */
+@@ -103,7 +103,7 @@ codetag_alloc_module_section(struct module *mod, const char *name,
+ 			     unsigned long align) { return NULL; }
+ static inline void codetag_free_module_sections(struct module *mod) {}
+ static inline void codetag_module_replaced(struct module *mod, struct module *new_mod) {}
+-static inline void codetag_load_module(struct module *mod) {}
++static inline int codetag_load_module(struct module *mod) { return 0; }
+ static inline void codetag_unload_module(struct module *mod) {}
+ 
+ #endif /* defined(CONFIG_CODE_TAGGING) && defined(CONFIG_MODULES) */
+diff --git a/include/linux/execmem.h b/include/linux/execmem.h
+index ca42d5e46ccc6b..3be35680a54fd1 100644
+--- a/include/linux/execmem.h
++++ b/include/linux/execmem.h
+@@ -54,7 +54,7 @@ enum execmem_range_flags {
+ 	EXECMEM_ROX_CACHE	= (1 << 1),
+ };
+ 
+-#if defined(CONFIG_ARCH_HAS_EXECMEM_ROX) && defined(CONFIG_EXECMEM)
++#ifdef CONFIG_ARCH_HAS_EXECMEM_ROX
+ /**
+  * execmem_fill_trapping_insns - set memory to contain instructions that
+  *				 will trap
+@@ -94,15 +94,9 @@ int execmem_make_temp_rw(void *ptr, size_t size);
+  * Return: 0 on success or negative error code on failure.
+  */
+ int execmem_restore_rox(void *ptr, size_t size);
+-
+-/*
+- * Called from mark_readonly(), where the system transitions to ROX.
+- */
+-void execmem_cache_make_ro(void);
+ #else
+ static inline int execmem_make_temp_rw(void *ptr, size_t size) { return 0; }
+ static inline int execmem_restore_rox(void *ptr, size_t size) { return 0; }
+-static inline void execmem_cache_make_ro(void) { }
+ #endif
+ 
+ /**
+diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
+index c24f8bc01045df..5206d63b33860b 100644
+--- a/include/linux/f2fs_fs.h
++++ b/include/linux/f2fs_fs.h
+@@ -78,6 +78,7 @@ enum stop_cp_reason {
+ 	STOP_CP_REASON_UPDATE_INODE,
+ 	STOP_CP_REASON_FLUSH_FAIL,
+ 	STOP_CP_REASON_NO_SEGMENT,
++	STOP_CP_REASON_CORRUPTED_FREE_BITMAP,
+ 	STOP_CP_REASON_MAX,
+ };
+ 
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index a4fd649e2c3fcd..ef41a8213a6fa0 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -2344,6 +2344,7 @@ struct super_operations {
+ #define S_CASEFOLD	(1 << 15) /* Casefolded file */
+ #define S_VERITY	(1 << 16) /* Verity file (using fs/verity/) */
+ #define S_KERNEL_FILE	(1 << 17) /* File is in use by the kernel (eg. fs/cachefiles) */
++#define S_ANON_INODE	(1 << 19) /* Inode is an anonymous inode */
+ 
+ /*
+  * Note that nosuid etc flags are inode-specific: setting some file-system
+@@ -2400,6 +2401,7 @@ static inline bool sb_rdonly(const struct super_block *sb) { return sb->s_flags
+ 
+ #define IS_WHITEOUT(inode)	(S_ISCHR(inode->i_mode) && \
+ 				 (inode)->i_rdev == WHITEOUT_DEV)
++#define IS_ANON_FILE(inode)	((inode)->i_flags & S_ANON_INODE)
+ 
+ static inline bool HAS_UNMAPPED_ID(struct mnt_idmap *idmap,
+ 				   struct inode *inode)
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 4861a7e304bbf4..9a93e6dbb95e34 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -276,6 +276,7 @@ bool is_hugetlb_entry_migration(pte_t pte);
+ bool is_hugetlb_entry_hwpoisoned(pte_t pte);
+ void hugetlb_unshare_all_pmds(struct vm_area_struct *vma);
+ void fixup_hugetlb_reservations(struct vm_area_struct *vma);
++void hugetlb_split(struct vm_area_struct *vma, unsigned long addr);
+ 
+ #else /* !CONFIG_HUGETLB_PAGE */
+ 
+@@ -473,6 +474,8 @@ static inline void fixup_hugetlb_reservations(struct vm_area_struct *vma)
+ {
+ }
+ 
++static inline void hugetlb_split(struct vm_area_struct *vma, unsigned long addr) {}
++
+ #endif /* !CONFIG_HUGETLB_PAGE */
+ 
+ #ifndef pgd_write
+diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
+index 526fce58165754..ddcdf23d731c47 100644
+--- a/include/linux/mmc/card.h
++++ b/include/linux/mmc/card.h
+@@ -329,6 +329,7 @@ struct mmc_card {
+ #define MMC_QUIRK_BROKEN_SD_CACHE	(1<<15)	/* Disable broken SD cache support */
+ #define MMC_QUIRK_BROKEN_CACHE_FLUSH	(1<<16)	/* Don't flush cache until the write has occurred */
+ #define MMC_QUIRK_BROKEN_SD_POWEROFF_NOTIFY	(1<<17) /* Disable broken SD poweroff notify support */
++#define MMC_QUIRK_NO_UHS_DDR50_TUNING	(1<<18) /* Disable DDR50 tuning */
+ 
+ 	bool			written_flag;	/* Indicates eMMC has been written since power on */
+ 	bool			reenable_cmdq;	/* Re-enable Command Queue */
+diff --git a/include/linux/module.h b/include/linux/module.h
+index 8050f77c3b64f8..b3329110d6686f 100644
+--- a/include/linux/module.h
++++ b/include/linux/module.h
+@@ -586,11 +586,6 @@ struct module {
+ 	atomic_t refcnt;
+ #endif
+ 
+-#ifdef CONFIG_MITIGATION_ITS
+-	int its_num_pages;
+-	void **its_page_array;
+-#endif
+-
+ #ifdef CONFIG_CONSTRUCTORS
+ 	/* Constructor functions. */
+ 	ctor_fn_t *ctors;
+diff --git a/include/linux/mtd/nand-qpic-common.h b/include/linux/mtd/nand-qpic-common.h
+index cd7172e6c1bbff..e8462deda6dbf6 100644
+--- a/include/linux/mtd/nand-qpic-common.h
++++ b/include/linux/mtd/nand-qpic-common.h
+@@ -199,9 +199,6 @@
+  */
+ #define dev_cmd_reg_addr(nandc, reg) ((nandc)->props->dev_cmd_reg_start + (reg))
+ 
+-/* Returns the NAND register physical address */
+-#define nandc_reg_phys(chip, offset) ((chip)->base_phys + (offset))
+-
+ /* Returns the dma address for reg read buffer */
+ #define reg_buf_dma_addr(chip, vaddr) \
+ 	((chip)->reg_read_dma + \
+@@ -454,6 +451,7 @@ struct qcom_nand_controller {
+ struct qcom_nandc_props {
+ 	u32 ecc_modes;
+ 	u32 dev_cmd_reg_start;
++	u32 bam_offset;
+ 	bool supports_bam;
+ 	bool nandc_part_of_qpic;
+ 	bool qpic_version2;
+diff --git a/include/linux/mtd/spinand.h b/include/linux/mtd/spinand.h
+index 311f145eb4e843..aba653207c0f72 100644
+--- a/include/linux/mtd/spinand.h
++++ b/include/linux/mtd/spinand.h
+@@ -62,108 +62,110 @@
+ 		   SPI_MEM_OP_NO_DUMMY,					\
+ 		   SPI_MEM_OP_NO_DATA)
+ 
+-#define SPINAND_PAGE_READ_FROM_CACHE_OP(addr, ndummy, buf, len, ...) \
++#define SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(addr, ndummy, buf, len, ...) \
+ 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x03, 1),				\
+ 		   SPI_MEM_OP_ADDR(2, addr, 1),				\
+ 		   SPI_MEM_OP_DUMMY(ndummy, 1),				\
+ 		   SPI_MEM_OP_DATA_IN(len, buf, 1),			\
+ 		   SPI_MEM_OP_MAX_FREQ(__VA_ARGS__ + 0))
+ 
+-#define SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(addr, ndummy, buf, len) \
+-	SPI_MEM_OP(SPI_MEM_OP_CMD(0x0b, 1),			\
++#define SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(addr, ndummy, buf, len) \
++	SPI_MEM_OP(SPI_MEM_OP_CMD(0x0b, 1),				\
+ 			 SPI_MEM_OP_ADDR(2, addr, 1),			\
+ 			 SPI_MEM_OP_DUMMY(ndummy, 1),			\
+ 			 SPI_MEM_OP_DATA_IN(len, buf, 1))
+ 
+-#define SPINAND_PAGE_READ_FROM_CACHE_OP_3A(addr, ndummy, buf, len) \
++#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_1S_OP(addr, ndummy, buf, len) \
+ 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x03, 1),				\
+ 		   SPI_MEM_OP_ADDR(3, addr, 1),				\
+ 		   SPI_MEM_OP_DUMMY(ndummy, 1),				\
+ 		   SPI_MEM_OP_DATA_IN(len, buf, 1))
+ 
+-#define SPINAND_PAGE_READ_FROM_CACHE_FAST_OP_3A(addr, ndummy, buf, len) \
++#define SPINAND_PAGE_READ_FROM_CACHE_FAST_3A_1S_1S_1S_OP(addr, ndummy, buf, len) \
+ 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x0b, 1),				\
+ 		   SPI_MEM_OP_ADDR(3, addr, 1),				\
+ 		   SPI_MEM_OP_DUMMY(ndummy, 1),				\
+ 		   SPI_MEM_OP_DATA_IN(len, buf, 1))
+ 
+-#define SPINAND_PAGE_READ_FROM_CACHE_DTR_OP(addr, ndummy, buf, len, freq) \
++#define SPINAND_PAGE_READ_FROM_CACHE_1S_1D_1D_OP(addr, ndummy, buf, len, freq) \
+ 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x0d, 1),				\
+ 		   SPI_MEM_DTR_OP_ADDR(2, addr, 1),			\
+ 		   SPI_MEM_DTR_OP_DUMMY(ndummy, 1),			\
+ 		   SPI_MEM_DTR_OP_DATA_IN(len, buf, 1),			\
+ 		   SPI_MEM_OP_MAX_FREQ(freq))
+ 
+-#define SPINAND_PAGE_READ_FROM_CACHE_X2_OP(addr, ndummy, buf, len)	\
++#define SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(addr, ndummy, buf, len) \
+ 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x3b, 1),				\
+ 		   SPI_MEM_OP_ADDR(2, addr, 1),				\
+ 		   SPI_MEM_OP_DUMMY(ndummy, 1),				\
+ 		   SPI_MEM_OP_DATA_IN(len, buf, 2))
+ 
+-#define SPINAND_PAGE_READ_FROM_CACHE_X2_OP_3A(addr, ndummy, buf, len)	\
++#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_2S_OP(addr, ndummy, buf, len) \
+ 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x3b, 1),				\
+ 		   SPI_MEM_OP_ADDR(3, addr, 1),				\
+ 		   SPI_MEM_OP_DUMMY(ndummy, 1),				\
+ 		   SPI_MEM_OP_DATA_IN(len, buf, 2))
+ 
+-#define SPINAND_PAGE_READ_FROM_CACHE_X2_DTR_OP(addr, ndummy, buf, len, freq) \
++#define SPINAND_PAGE_READ_FROM_CACHE_1S_1D_2D_OP(addr, ndummy, buf, len, freq) \
+ 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x3d, 1),				\
+ 		   SPI_MEM_DTR_OP_ADDR(2, addr, 1),			\
+ 		   SPI_MEM_DTR_OP_DUMMY(ndummy, 1),			\
+ 		   SPI_MEM_DTR_OP_DATA_IN(len, buf, 2),			\
+ 		   SPI_MEM_OP_MAX_FREQ(freq))
+ 
+-#define SPINAND_PAGE_READ_FROM_CACHE_X4_OP(addr, ndummy, buf, len)	\
++#define SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(addr, ndummy, buf, len, ...) \
++	SPI_MEM_OP(SPI_MEM_OP_CMD(0xbb, 1),				\
++		   SPI_MEM_OP_ADDR(2, addr, 2),				\
++		   SPI_MEM_OP_DUMMY(ndummy, 2),				\
++		   SPI_MEM_OP_DATA_IN(len, buf, 2),			\
++		   SPI_MEM_OP_MAX_FREQ(__VA_ARGS__ + 0))
++
++#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_2S_2S_OP(addr, ndummy, buf, len) \
++	SPI_MEM_OP(SPI_MEM_OP_CMD(0xbb, 1),				\
++		   SPI_MEM_OP_ADDR(3, addr, 2),				\
++		   SPI_MEM_OP_DUMMY(ndummy, 2),				\
++		   SPI_MEM_OP_DATA_IN(len, buf, 2))
++
++#define SPINAND_PAGE_READ_FROM_CACHE_1S_2D_2D_OP(addr, ndummy, buf, len, freq) \
++	SPI_MEM_OP(SPI_MEM_OP_CMD(0xbd, 1),				\
++		   SPI_MEM_DTR_OP_ADDR(2, addr, 2),			\
++		   SPI_MEM_DTR_OP_DUMMY(ndummy, 2),			\
++		   SPI_MEM_DTR_OP_DATA_IN(len, buf, 2),			\
++		   SPI_MEM_OP_MAX_FREQ(freq))
++
++#define SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(addr, ndummy, buf, len) \
+ 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x6b, 1),				\
+ 		   SPI_MEM_OP_ADDR(2, addr, 1),				\
+ 		   SPI_MEM_OP_DUMMY(ndummy, 1),				\
+ 		   SPI_MEM_OP_DATA_IN(len, buf, 4))
+ 
+-#define SPINAND_PAGE_READ_FROM_CACHE_X4_OP_3A(addr, ndummy, buf, len)	\
++#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_4S_OP(addr, ndummy, buf, len)	\
+ 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x6b, 1),				\
+ 		   SPI_MEM_OP_ADDR(3, addr, 1),				\
+ 		   SPI_MEM_OP_DUMMY(ndummy, 1),				\
+ 		   SPI_MEM_OP_DATA_IN(len, buf, 4))
+ 
+-#define SPINAND_PAGE_READ_FROM_CACHE_X4_DTR_OP(addr, ndummy, buf, len, freq) \
++#define SPINAND_PAGE_READ_FROM_CACHE_1S_1D_4D_OP(addr, ndummy, buf, len, freq) \
+ 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x6d, 1),				\
+ 		   SPI_MEM_DTR_OP_ADDR(2, addr, 1),			\
+ 		   SPI_MEM_DTR_OP_DUMMY(ndummy, 1),			\
+ 		   SPI_MEM_DTR_OP_DATA_IN(len, buf, 4),			\
+ 		   SPI_MEM_OP_MAX_FREQ(freq))
+ 
+-#define SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(addr, ndummy, buf, len)	\
+-	SPI_MEM_OP(SPI_MEM_OP_CMD(0xbb, 1),				\
+-		   SPI_MEM_OP_ADDR(2, addr, 2),				\
+-		   SPI_MEM_OP_DUMMY(ndummy, 2),				\
+-		   SPI_MEM_OP_DATA_IN(len, buf, 2))
+-
+-#define SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP_3A(addr, ndummy, buf, len) \
+-	SPI_MEM_OP(SPI_MEM_OP_CMD(0xbb, 1),				\
+-		   SPI_MEM_OP_ADDR(3, addr, 2),				\
+-		   SPI_MEM_OP_DUMMY(ndummy, 2),				\
+-		   SPI_MEM_OP_DATA_IN(len, buf, 2))
+-
+-#define SPINAND_PAGE_READ_FROM_CACHE_DUALIO_DTR_OP(addr, ndummy, buf, len, freq) \
+-	SPI_MEM_OP(SPI_MEM_OP_CMD(0xbd, 1),				\
+-		   SPI_MEM_DTR_OP_ADDR(2, addr, 2),			\
+-		   SPI_MEM_DTR_OP_DUMMY(ndummy, 2),			\
+-		   SPI_MEM_DTR_OP_DATA_IN(len, buf, 2),			\
+-		   SPI_MEM_OP_MAX_FREQ(freq))
+-
+-#define SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(addr, ndummy, buf, len)	\
++#define SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(addr, ndummy, buf, len, ...) \
+ 	SPI_MEM_OP(SPI_MEM_OP_CMD(0xeb, 1),				\
+ 		   SPI_MEM_OP_ADDR(2, addr, 4),				\
+ 		   SPI_MEM_OP_DUMMY(ndummy, 4),				\
+-		   SPI_MEM_OP_DATA_IN(len, buf, 4))
++		   SPI_MEM_OP_DATA_IN(len, buf, 4),			\
++		   SPI_MEM_OP_MAX_FREQ(__VA_ARGS__ + 0))
+ 
+-#define SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP_3A(addr, ndummy, buf, len) \
++#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_4S_4S_OP(addr, ndummy, buf, len) \
+ 	SPI_MEM_OP(SPI_MEM_OP_CMD(0xeb, 1),				\
+ 		   SPI_MEM_OP_ADDR(3, addr, 4),				\
+ 		   SPI_MEM_OP_DUMMY(ndummy, 4),				\
+ 		   SPI_MEM_OP_DATA_IN(len, buf, 4))
+ 
+-#define SPINAND_PAGE_READ_FROM_CACHE_QUADIO_DTR_OP(addr, ndummy, buf, len, freq) \
++#define SPINAND_PAGE_READ_FROM_CACHE_1S_4D_4D_OP(addr, ndummy, buf, len, freq) \
+ 	SPI_MEM_OP(SPI_MEM_OP_CMD(0xed, 1),				\
+ 		   SPI_MEM_DTR_OP_ADDR(2, addr, 4),			\
+ 		   SPI_MEM_DTR_OP_DUMMY(ndummy, 4),			\
+diff --git a/include/linux/tcp.h b/include/linux/tcp.h
+index 1669d95bb0f9aa..5c7c5038d47b51 100644
+--- a/include/linux/tcp.h
++++ b/include/linux/tcp.h
+@@ -340,7 +340,7 @@ struct tcp_sock {
+ 	} rcv_rtt_est;
+ /* Receiver queue space */
+ 	struct {
+-		u32	space;
++		int	space;
+ 		u32	seq;
+ 		u64	time;
+ 	} rcvq_space;
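
The tcp.h change above turns rcvq_space.space from u32 into int. Mixed signed/unsigned arithmetic around an unsigned field silently converts negative intermediate results into huge positive values; a standalone illustration of the hazard:

#include <stdio.h>

typedef unsigned int u32;

int main(void)
{
	u32 uspace = 100;
	int sspace = 100;
	int copied = 40, overhead = 60;

	/* 40 - 60 converted to unsigned wraps to ~4 billion ... */
	if (uspace < (u32)(copied - overhead))
		puts("u32: bogus 'needs more space' decision");

	/* ... while the signed comparison behaves as intended. */
	if (sspace < copied - overhead)
		puts("int: never printed here");
	else
		puts("int: comparison stays sane");
	return 0;
}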
+diff --git a/include/net/mac80211.h b/include/net/mac80211.h
+index c498f685d01f3c..5349df59615711 100644
+--- a/include/net/mac80211.h
++++ b/include/net/mac80211.h
+@@ -5346,22 +5346,6 @@ void ieee80211_get_tx_rates(struct ieee80211_vif *vif,
+ 			    struct ieee80211_tx_rate *dest,
+ 			    int max_rates);
+ 
+-/**
+- * ieee80211_sta_set_expected_throughput - set the expected tpt for a station
+- *
+- * Call this function to notify mac80211 about a change in expected throughput
+- * to a station. A driver for a device that does rate control in firmware can
+- * call this function when the expected throughput estimate towards a station
+- * changes. The information is used to tune the CoDel AQM applied to traffic
+- * going towards that station (which can otherwise be too aggressive and cause
+- * slow stations to starve).
+- *
+- * @pubsta: the station to set throughput for.
+- * @thr: the current expected throughput in kbps.
+- */
+-void ieee80211_sta_set_expected_throughput(struct ieee80211_sta *pubsta,
+-					   u32 thr);
+-
+ /**
+  * ieee80211_tx_rate_update - transmit rate update callback
+  *
+diff --git a/include/trace/events/erofs.h b/include/trace/events/erofs.h
+index c69c7b1e41d137..a71b19ed5d0cf6 100644
+--- a/include/trace/events/erofs.h
++++ b/include/trace/events/erofs.h
+@@ -211,24 +211,6 @@ TRACE_EVENT(erofs_map_blocks_exit,
+ 		  show_mflags(__entry->mflags), __entry->ret)
+ );
+ 
+-TRACE_EVENT(erofs_destroy_inode,
+-	TP_PROTO(struct inode *inode),
+-
+-	TP_ARGS(inode),
+-
+-	TP_STRUCT__entry(
+-		__field(	dev_t,		dev		)
+-		__field(	erofs_nid_t,	nid		)
+-	),
+-
+-	TP_fast_assign(
+-		__entry->dev	= inode->i_sb->s_dev;
+-		__entry->nid	= EROFS_I(inode)->nid;
+-	),
+-
+-	TP_printk("dev = (%d,%d), nid = %llu", show_dev_nid(__entry))
+-);
+-
+ #endif /* _TRACE_EROFS_H */
+ 
+  /* This part must be outside protection */
+diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
+index c8cb2796130f8d..af86ece741e94d 100644
+--- a/include/uapi/linux/videodev2.h
++++ b/include/uapi/linux/videodev2.h
+@@ -153,10 +153,18 @@ enum v4l2_buf_type {
+ 	V4L2_BUF_TYPE_SDR_OUTPUT           = 12,
+ 	V4L2_BUF_TYPE_META_CAPTURE         = 13,
+ 	V4L2_BUF_TYPE_META_OUTPUT	   = 14,
++	/*
++	 * Note: V4L2_TYPE_IS_VALID and V4L2_TYPE_IS_OUTPUT must
++	 * be updated if a new type is added.
++	 */
+ 	/* Deprecated, do not use */
+ 	V4L2_BUF_TYPE_PRIVATE              = 0x80,
+ };
+ 
++#define V4L2_TYPE_IS_VALID(type)		 \
++	((type) >= V4L2_BUF_TYPE_VIDEO_CAPTURE &&\
++	 (type) <= V4L2_BUF_TYPE_META_OUTPUT)
++
+ #define V4L2_TYPE_IS_MULTIPLANAR(type)			\
+ 	((type) == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE	\
+ 	 || (type) == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE)
+@@ -164,14 +172,14 @@ enum v4l2_buf_type {
+ #define V4L2_TYPE_IS_OUTPUT(type)				\
+ 	((type) == V4L2_BUF_TYPE_VIDEO_OUTPUT			\
+ 	 || (type) == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE		\
+-	 || (type) == V4L2_BUF_TYPE_VIDEO_OVERLAY		\
+ 	 || (type) == V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY	\
+ 	 || (type) == V4L2_BUF_TYPE_VBI_OUTPUT			\
+ 	 || (type) == V4L2_BUF_TYPE_SLICED_VBI_OUTPUT		\
+ 	 || (type) == V4L2_BUF_TYPE_SDR_OUTPUT			\
+ 	 || (type) == V4L2_BUF_TYPE_META_OUTPUT)
+ 
+-#define V4L2_TYPE_IS_CAPTURE(type) (!V4L2_TYPE_IS_OUTPUT(type))
++#define V4L2_TYPE_IS_CAPTURE(type)	\
++	(V4L2_TYPE_IS_VALID(type) && !V4L2_TYPE_IS_OUTPUT(type))
+ 
+ enum v4l2_tuner_type {
+ 	V4L2_TUNER_RADIO	     = 1,
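
The videodev2.h hunk above adds V4L2_TYPE_IS_VALID and folds it into V4L2_TYPE_IS_CAPTURE, so out-of-range values (including the deprecated 0x80 private type) are no longer misclassified as capture buffers. A standalone sketch of the classification, with the enum reduced to a few representative values:

#include <stdio.h>

enum buf_type {
	BUF_TYPE_VIDEO_CAPTURE	= 1,
	BUF_TYPE_VIDEO_OUTPUT	= 2,
	BUF_TYPE_META_OUTPUT	= 14,	/* last valid type */
	BUF_TYPE_PRIVATE	= 0x80,	/* deprecated, must not match */
};

#define TYPE_IS_VALID(t)   ((t) >= BUF_TYPE_VIDEO_CAPTURE && (t) <= BUF_TYPE_META_OUTPUT)
#define TYPE_IS_OUTPUT(t)  ((t) == BUF_TYPE_VIDEO_OUTPUT || (t) == BUF_TYPE_META_OUTPUT)
#define TYPE_IS_CAPTURE(t) (TYPE_IS_VALID(t) && !TYPE_IS_OUTPUT(t))

int main(void)
{
	/* Without the validity check, 0x80 would look like "capture". */
	printf("capture(1)=%d output(2)=%d private(0x80)=%d\n",
	       TYPE_IS_CAPTURE(BUF_TYPE_VIDEO_CAPTURE),
	       TYPE_IS_CAPTURE(BUF_TYPE_VIDEO_OUTPUT),
	       TYPE_IS_CAPTURE(BUF_TYPE_PRIVATE));
	return 0;
}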
+diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
+index 04a75d66619510..0825c1b30f8fc5 100644
+--- a/io_uring/io-wq.c
++++ b/io_uring/io-wq.c
+@@ -1236,8 +1236,10 @@ struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
+ 	atomic_set(&wq->worker_refs, 1);
+ 	init_completion(&wq->worker_done);
+ 	ret = cpuhp_state_add_instance_nocalls(io_wq_online, &wq->cpuhp_node);
+-	if (ret)
++	if (ret) {
++		put_task_struct(wq->task);
+ 		goto err;
++	}
+ 
+ 	return wq;
+ err:
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index e5466f65682699..74218c7b760483 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -1689,7 +1689,7 @@ static __cold void io_drain_req(struct io_kiocb *req)
+ 	spin_unlock(&ctx->completion_lock);
+ 
+ 	io_prep_async_link(req);
+-	de = kmalloc(sizeof(*de), GFP_KERNEL);
++	de = kmalloc(sizeof(*de), GFP_KERNEL_ACCOUNT);
+ 	if (!de) {
+ 		ret = -ENOMEM;
+ 		io_req_defer_failed(req, ret);
+diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
+index 953d5e74256916..a8467f6aba54d7 100644
+--- a/io_uring/kbuf.c
++++ b/io_uring/kbuf.c
+@@ -270,8 +270,11 @@ static int io_ring_buffers_peek(struct io_kiocb *req, struct buf_sel_arg *arg,
+ 		/* truncate end piece, if needed, for non partial buffers */
+ 		if (len > arg->max_len) {
+ 			len = arg->max_len;
+-			if (!(bl->flags & IOBL_INC))
++			if (!(bl->flags & IOBL_INC)) {
++				if (iov != arg->iovs)
++					break;
+ 				buf->len = len;
++			}
+ 		}
+ 
+ 		iov->iov_base = u64_to_user_ptr(buf->addr);
+@@ -624,7 +627,7 @@ int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
+ 		io_destroy_bl(ctx, bl);
+ 	}
+ 
+-	free_bl = bl = kzalloc(sizeof(*bl), GFP_KERNEL);
++	free_bl = bl = kzalloc(sizeof(*bl), GFP_KERNEL_ACCOUNT);
+ 	if (!bl)
+ 		return -ENOMEM;
+ 
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 27f37fa2ef7936..3feceb2b5b97ec 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -829,7 +829,7 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
+ 	if (sr->flags & IORING_RECVSEND_BUNDLE) {
+ 		size_t this_ret = *ret - sr->done_io;
+ 
+-		cflags |= io_put_kbufs(req, *ret, io_bundle_nbufs(kmsg, this_ret),
++		cflags |= io_put_kbufs(req, this_ret, io_bundle_nbufs(kmsg, this_ret),
+ 				      issue_flags);
+ 		if (sr->retry)
+ 			cflags = req->cqe.flags | (cflags & CQE_F_MASK);
+@@ -840,7 +840,7 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
+ 		 * If more is available AND it was a full transfer, retry and
+ 		 * append to this one
+ 		 */
+-		if (!sr->retry && kmsg->msg.msg_inq > 0 && this_ret > 0 &&
++		if (!sr->retry && kmsg->msg.msg_inq > 1 && this_ret > 0 &&
+ 		    !iov_iter_count(&kmsg->msg.msg_iter)) {
+ 			req->cqe.flags = cflags & ~CQE_F_MASK;
+ 			sr->len = kmsg->msg.msg_inq;
+@@ -1077,7 +1077,7 @@ static int io_recv_buf_select(struct io_kiocb *req, struct io_async_msghdr *kmsg
+ 			arg.mode |= KBUF_MODE_FREE;
+ 		}
+ 
+-		if (kmsg->msg.msg_inq > 0)
++		if (kmsg->msg.msg_inq > 1)
+ 			arg.max_len = min_not_zero(sr->len, kmsg->msg.msg_inq);
+ 
+ 		ret = io_buffers_peek(req, &arg);
+diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
+index f80a77c4973f30..794d4ae6f0bc8d 100644
+--- a/io_uring/rsrc.c
++++ b/io_uring/rsrc.c
+@@ -810,10 +810,8 @@ static struct io_rsrc_node *io_sqe_buffer_register(struct io_ring_ctx *ctx,
+ 
+ 	imu->nr_bvecs = nr_pages;
+ 	ret = io_buffer_account_pin(ctx, pages, nr_pages, imu, last_hpage);
+-	if (ret) {
+-		unpin_user_pages(pages, nr_pages);
++	if (ret)
+ 		goto done;
+-	}
+ 
+ 	size = iov->iov_len;
+ 	/* store original address for later verification */
+@@ -843,6 +841,8 @@ static struct io_rsrc_node *io_sqe_buffer_register(struct io_ring_ctx *ctx,
+ 	if (ret) {
+ 		if (imu)
+ 			io_free_imu(ctx, imu);
++		if (pages)
++			unpin_user_pages(pages, nr_pages);
+ 		io_cache_free(&ctx->node_cache, node);
+ 		node = ERR_PTR(ret);
+ 	}
+@@ -1174,6 +1174,8 @@ static int io_clone_buffers(struct io_ring_ctx *ctx, struct io_ring_ctx *src_ctx
+ 		return -EINVAL;
+ 	if (check_add_overflow(arg->nr, arg->dst_off, &nbufs))
+ 		return -EOVERFLOW;
++	if (nbufs > IORING_MAX_REG_BUFFERS)
++		return -EINVAL;
+ 
+ 	ret = io_rsrc_data_alloc(&data, max(nbufs, ctx->buf_table.nr));
+ 	if (ret)
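
The rsrc.c hunk above pairs the existing check_add_overflow() with an explicit upper bound: an addition that does not wrap can still produce a value far beyond anything the buffer table may hold. A standalone sketch using the compiler builtin that check_add_overflow() wraps, with a stand-in limit:

#include <stdio.h>

#define MAX_REG_BUFFERS (1U << 14)	/* stand-in for the io_uring limit */

static int validate_range(unsigned int nr, unsigned int dst_off)
{
	unsigned int nbufs;

	if (__builtin_add_overflow(nr, dst_off, &nbufs))
		return -75;		/* -EOVERFLOW: wrapped past UINT_MAX */
	if (nbufs > MAX_REG_BUFFERS)
		return -22;		/* -EINVAL: no wrap, but still absurd */
	return 0;
}

int main(void)
{
	printf("%d\n", validate_range(100, 200));		/* 0 */
	printf("%d\n", validate_range(0xffffffffu, 2));		/* -75 */
	printf("%d\n", validate_range(1u << 20, 0));		/* -22 */
	return 0;
}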
+diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
+index 268d2fbe6160c2..d3a94cd0f5e656 100644
+--- a/io_uring/sqpoll.c
++++ b/io_uring/sqpoll.c
+@@ -419,7 +419,6 @@ void io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
+ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
+ 				struct io_uring_params *p)
+ {
+-	struct task_struct *task_to_put = NULL;
+ 	int ret;
+ 
+ 	/* Retain compatibility with failing for an invalid attach attempt */
+@@ -498,7 +497,7 @@ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
+ 		rcu_assign_pointer(sqd->thread, tsk);
+ 		mutex_unlock(&sqd->lock);
+ 
+-		task_to_put = get_task_struct(tsk);
++		get_task_struct(tsk);
+ 		ret = io_uring_alloc_task_context(tsk, ctx);
+ 		wake_up_new_task(tsk);
+ 		if (ret)
+@@ -513,8 +512,6 @@ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
+ 	complete(&ctx->sq_data->exited);
+ err:
+ 	io_sq_thread_finish(ctx);
+-	if (task_to_put)
+-		put_task_struct(task_to_put);
+ 	return ret;
+ }
+ 
+diff --git a/ipc/shm.c b/ipc/shm.c
+index 99564c87008408..492fcc6999857a 100644
+--- a/ipc/shm.c
++++ b/ipc/shm.c
+@@ -431,8 +431,11 @@ static int shm_try_destroy_orphaned(int id, void *p, void *data)
+ void shm_destroy_orphaned(struct ipc_namespace *ns)
+ {
+ 	down_write(&shm_ids(ns).rwsem);
+-	if (shm_ids(ns).in_use)
++	if (shm_ids(ns).in_use) {
++		rcu_read_lock();
+ 		idr_for_each(&shm_ids(ns).ipcs_idr, &shm_try_destroy_orphaned, ns);
++		rcu_read_unlock();
++	}
+ 	up_write(&shm_ids(ns).rwsem);
+ }
+ 
+diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
+index db13ee70d94d5c..96113633e391a1 100644
+--- a/kernel/bpf/bpf_struct_ops.c
++++ b/kernel/bpf/bpf_struct_ops.c
+@@ -601,7 +601,7 @@ int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
+ 	if (model->ret_size > 0)
+ 		flags |= BPF_TRAMP_F_RET_FENTRY_RET;
+ 
+-	size = arch_bpf_trampoline_size(model, flags, tlinks, NULL);
++	size = arch_bpf_trampoline_size(model, flags, tlinks, stub_func);
+ 	if (size <= 0)
+ 		return size ? : -EFAULT;
+ 
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 16ba36f34dfab7..656ee11aff6762 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -6829,10 +6829,10 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ 			/* Is this a func with potential NULL args? */
+ 			if (strcmp(tname, raw_tp_null_args[i].func))
+ 				continue;
+-			if (raw_tp_null_args[i].mask & (0x1 << (arg * 4)))
++			if (raw_tp_null_args[i].mask & (0x1ULL << (arg * 4)))
+ 				info->reg_type |= PTR_MAYBE_NULL;
+ 			/* Is the current arg IS_ERR? */
+-			if (raw_tp_null_args[i].mask & (0x2 << (arg * 4)))
++			if (raw_tp_null_args[i].mask & (0x2ULL << (arg * 4)))
+ 				ptr_err_raw_tp = true;
+ 			break;
+ 		}
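
The btf.c fix above widens the shifted constant to unsigned long long: with a plain int literal, arg values of 8 or more shift by 32+ bits, which is undefined behaviour and in practice drops the high mask bits, so later arguments were never matched. A standalone demonstration:

#include <stdio.h>

int main(void)
{
	unsigned long long mask = 0x10000000000ULL;	/* bit for arg 10 */
	int arg = 10;

	/* 0x1 << 40 shifts a 32-bit int by 40 bits: undefined behaviour,
	 * and on common compilers the test simply never fires:
	 *
	 *	if (mask & (0x1 << (arg * 4))) ...
	 *
	 * Widening the literal keeps the shift inside a 64-bit type. */
	if (mask & (0x1ULL << (arg * 4)))
		puts("arg 10 matched with the 64-bit shift");
	return 0;
}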
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index e3a2662f4e3365..a71aa4cb85fae6 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -129,7 +129,8 @@ const struct bpf_func_proto bpf_map_peek_elem_proto = {
+ 
+ BPF_CALL_3(bpf_map_lookup_percpu_elem, struct bpf_map *, map, void *, key, u32, cpu)
+ {
+-	WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_bh_held());
++	WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_trace_held() &&
++		     !rcu_read_lock_bh_held());
+ 	return (unsigned long) map->ops->map_lookup_percpu_elem(map, key, cpu);
+ }
+ 
+diff --git a/kernel/cgroup/legacy_freezer.c b/kernel/cgroup/legacy_freezer.c
+index 039d1eb2f215bb..507b8f19a262e0 100644
+--- a/kernel/cgroup/legacy_freezer.c
++++ b/kernel/cgroup/legacy_freezer.c
+@@ -188,13 +188,12 @@ static void freezer_attach(struct cgroup_taskset *tset)
+ 		if (!(freezer->state & CGROUP_FREEZING)) {
+ 			__thaw_task(task);
+ 		} else {
+-			freeze_task(task);
+-
+ 			/* clear FROZEN and propagate upwards */
+ 			while (freezer && (freezer->state & CGROUP_FROZEN)) {
+ 				freezer->state &= ~CGROUP_FROZEN;
+ 				freezer = parent_freezer(freezer);
+ 			}
++			freeze_task(task);
+ 		}
+ 	}
+ 
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index e97bc9220fd1a8..2d1131e2cfc02c 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -207,6 +207,19 @@ static void perf_ctx_unlock(struct perf_cpu_context *cpuctx,
+ 	__perf_ctx_unlock(&cpuctx->ctx);
+ }
+ 
++typedef struct {
++	struct perf_cpu_context *cpuctx;
++	struct perf_event_context *ctx;
++} class_perf_ctx_lock_t;
++
++static inline void class_perf_ctx_lock_destructor(class_perf_ctx_lock_t *_T)
++{ perf_ctx_unlock(_T->cpuctx, _T->ctx); }
++
++static inline class_perf_ctx_lock_t
++class_perf_ctx_lock_constructor(struct perf_cpu_context *cpuctx,
++				struct perf_event_context *ctx)
++{ perf_ctx_lock(cpuctx, ctx); return (class_perf_ctx_lock_t){ cpuctx, ctx }; }
++
+ #define TASK_TOMBSTONE ((void *)-1L)
+ 
+ static bool is_kernel_event(struct perf_event *event)
+@@ -944,7 +957,13 @@ static void perf_cgroup_switch(struct task_struct *task)
+ 	if (READ_ONCE(cpuctx->cgrp) == cgrp)
+ 		return;
+ 
+-	perf_ctx_lock(cpuctx, cpuctx->task_ctx);
++	guard(perf_ctx_lock)(cpuctx, cpuctx->task_ctx);
++	/*
++	 * Re-check, could've raced vs perf_remove_from_context().
++	 */
++	if (READ_ONCE(cpuctx->cgrp) == NULL)
++		return;
++
+ 	perf_ctx_disable(&cpuctx->ctx, true);
+ 
+ 	ctx_sched_out(&cpuctx->ctx, NULL, EVENT_ALL|EVENT_CGROUP);
+@@ -962,7 +981,6 @@ static void perf_cgroup_switch(struct task_struct *task)
+ 	ctx_sched_in(&cpuctx->ctx, NULL, EVENT_ALL|EVENT_CGROUP);
+ 
+ 	perf_ctx_enable(&cpuctx->ctx, true);
+-	perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
+ }
+ 
+ static int perf_cgroup_ensure_storage(struct perf_event *event,
+@@ -2145,8 +2163,9 @@ perf_aux_output_match(struct perf_event *event, struct perf_event *aux_event)
+ }
+ 
+ static void put_event(struct perf_event *event);
+-static void event_sched_out(struct perf_event *event,
+-			    struct perf_event_context *ctx);
++static void __event_disable(struct perf_event *event,
++			    struct perf_event_context *ctx,
++			    enum perf_event_state state);
+ 
+ static void perf_put_aux_event(struct perf_event *event)
+ {
+@@ -2179,8 +2198,7 @@ static void perf_put_aux_event(struct perf_event *event)
+ 		 * state so that we don't try to schedule it again. Note
+ 		 * that perf_event_enable() will clear the ERROR status.
+ 		 */
+-		event_sched_out(iter, ctx);
+-		perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
++		__event_disable(iter, ctx, PERF_EVENT_STATE_ERROR);
+ 	}
+ }
+ 
+@@ -2238,18 +2256,6 @@ static inline struct list_head *get_event_list(struct perf_event *event)
+ 				    &event->pmu_ctx->flexible_active;
+ }
+ 
+-/*
+- * Events that have PERF_EV_CAP_SIBLING require being part of a group and
+- * cannot exist on their own, schedule them out and move them into the ERROR
+- * state. Also see _perf_event_enable(), it will not be able to recover
+- * this ERROR state.
+- */
+-static inline void perf_remove_sibling_event(struct perf_event *event)
+-{
+-	event_sched_out(event, event->ctx);
+-	perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
+-}
+-
+ static void perf_group_detach(struct perf_event *event)
+ {
+ 	struct perf_event *leader = event->group_leader;
+@@ -2285,8 +2291,15 @@ static void perf_group_detach(struct perf_event *event)
+ 	 */
+ 	list_for_each_entry_safe(sibling, tmp, &event->sibling_list, sibling_list) {
+ 
++		/*
++		 * Events that have PERF_EV_CAP_SIBLING require being part of
++		 * a group and cannot exist on their own, schedule them out
++		 * and move them into the ERROR state. Also see
++		 * _perf_event_enable(), it will not be able to recover this
++		 * ERROR state.
++		 */
+ 		if (sibling->event_caps & PERF_EV_CAP_SIBLING)
+-			perf_remove_sibling_event(sibling);
++			__event_disable(sibling, ctx, PERF_EVENT_STATE_ERROR);
+ 
+ 		sibling->group_leader = sibling;
+ 		list_del_init(&sibling->sibling_list);
+@@ -2545,6 +2558,15 @@ static void perf_remove_from_context(struct perf_event *event, unsigned long fla
+ 	event_function_call(event, __perf_remove_from_context, (void *)flags);
+ }
+ 
++static void __event_disable(struct perf_event *event,
++			    struct perf_event_context *ctx,
++			    enum perf_event_state state)
++{
++	event_sched_out(event, ctx);
++	perf_cgroup_event_disable(event, ctx);
++	perf_event_set_state(event, state);
++}
++
+ /*
+  * Cross CPU call to disable a performance event
+  */
+@@ -2559,13 +2581,18 @@ static void __perf_event_disable(struct perf_event *event,
+ 	perf_pmu_disable(event->pmu_ctx->pmu);
+ 	ctx_time_update_event(ctx, event);
+ 
++	/*
++	 * When disabling a group leader, the whole group becomes ineligible
++	 * to run, so schedule out the full group.
++	 */
+ 	if (event == event->group_leader)
+ 		group_sched_out(event, ctx);
+-	else
+-		event_sched_out(event, ctx);
+ 
+-	perf_event_set_state(event, PERF_EVENT_STATE_OFF);
+-	perf_cgroup_event_disable(event, ctx);
++	/*
++	 * But only mark the leader OFF; the siblings will remain
++	 * INACTIVE.
++	 */
++	__event_disable(event, ctx, PERF_EVENT_STATE_OFF);
+ 
+ 	perf_pmu_enable(event->pmu_ctx->pmu);
+ }
+@@ -7363,6 +7390,10 @@ perf_sample_ustack_size(u16 stack_size, u16 header_size,
+ 	if (!regs)
+ 		return 0;
+ 
++	/* No mm, no stack, no dump. */
++	if (!current->mm)
++		return 0;
++
+ 	/*
+ 	 * Check if we fit in with the requested stack size into the:
+ 	 * - TASK_SIZE
+@@ -8074,6 +8105,9 @@ perf_callchain(struct perf_event *event, struct pt_regs *regs)
+ 	const u32 max_stack = event->attr.sample_max_stack;
+ 	struct perf_callchain_entry *callchain;
+ 
++	if (!current->mm)
++		user = false;
++
+ 	if (!kernel && !user)
+ 		return &__empty_callchain;
+ 
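
The events/core.c hunks above wrap perf_ctx_lock/unlock in a constructor/destructor pair so the lock can be taken with guard(), letting the newly added early return in perf_cgroup_switch() drop the lock automatically. Outside the kernel's class machinery, the same scope-guard effect can be sketched with the GCC/Clang cleanup attribute (the macro and names here are illustrative):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Destructor: runs automatically when the guarded variable leaves scope. */
static void unlock_cleanup(pthread_mutex_t **l)
{
	pthread_mutex_unlock(*l);
	puts("unlocked on scope exit");
}

#define guard_lock(l) \
	__attribute__((cleanup(unlock_cleanup))) pthread_mutex_t *_g = (l); \
	pthread_mutex_lock(_g)

static int do_work(int bail_early)
{
	guard_lock(&lock);

	if (bail_early)
		return -1;	/* early return still unlocks */

	puts("did the work under the lock");
	return 0;
}

int main(void)
{
	do_work(1);
	do_work(0);
	return 0;
}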
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 1b51dc099f1e05..771dd7b226c18f 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -937,6 +937,15 @@ void __noreturn do_exit(long code)
+ 	tsk->exit_code = code;
+ 	taskstats_exit(tsk, group_dead);
+ 
++	/*
++	 * Since sampling can touch ->mm, make sure to stop everything before we
++	 * tear it down.
++	 *
++	 * Also flushes inherited counters to the parent - before the parent
++	 * gets woken up by child-exit notifications.
++	 */
++	perf_event_exit_task(tsk);
++
+ 	exit_mm();
+ 
+ 	if (group_dead)
+@@ -953,14 +962,6 @@ void __noreturn do_exit(long code)
+ 	exit_task_work(tsk);
+ 	exit_thread(tsk);
+ 
+-	/*
+-	 * Flush inherited counters to the parent - before the parent
+-	 * gets woken up by child-exit notifications.
+-	 *
+-	 * because of cgroup mode, must be called before cgroup_exit()
+-	 */
+-	perf_event_exit_task(tsk);
+-
+ 	sched_autogroup_exit_task(tsk);
+ 	cgroup_exit(tsk);
+ 
+diff --git a/kernel/module/main.c b/kernel/module/main.c
+index 5c6ab20240a6d6..9861c2ac5fd500 100644
+--- a/kernel/module/main.c
++++ b/kernel/module/main.c
+@@ -3399,11 +3399,12 @@ static int load_module(struct load_info *info, const char __user *uargs,
+ 			goto sysfs_cleanup;
+ 	}
+ 
++	if (codetag_load_module(mod))
++		goto sysfs_cleanup;
++
+ 	/* Get rid of temporary copy. */
+ 	free_copy(info, flags);
+ 
+-	codetag_load_module(mod);
+-
+ 	/* Done! */
+ 	trace_module_load(mod);
+ 
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index d593d6612ba07e..39fac649aa142a 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -8571,7 +8571,7 @@ void __init sched_init(void)
+ 		init_cfs_bandwidth(&root_task_group.cfs_bandwidth, NULL);
+ #endif /* CONFIG_FAIR_GROUP_SCHED */
+ #ifdef CONFIG_EXT_GROUP_SCHED
+-		root_task_group.scx_weight = CGROUP_WEIGHT_DFL;
++		scx_tg_init(&root_task_group);
+ #endif /* CONFIG_EXT_GROUP_SCHED */
+ #ifdef CONFIG_RT_GROUP_SCHED
+ 		root_task_group.rt_se = (struct sched_rt_entity **)ptr;
+@@ -9011,7 +9011,7 @@ struct task_group *sched_create_group(struct task_group *parent)
+ 	if (!alloc_rt_sched_group(tg, parent))
+ 		goto err;
+ 
+-	scx_group_set_weight(tg, CGROUP_WEIGHT_DFL);
++	scx_tg_init(tg);
+ 	alloc_uclamp_sched_group(tg, parent);
+ 
+ 	return tg;
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index f5133249fd4d92..afaf49e5ecb976 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -3936,6 +3936,11 @@ bool scx_can_stop_tick(struct rq *rq)
+ DEFINE_STATIC_PERCPU_RWSEM(scx_cgroup_rwsem);
+ static bool scx_cgroup_enabled;
+ 
++void scx_tg_init(struct task_group *tg)
++{
++	tg->scx_weight = CGROUP_WEIGHT_DFL;
++}
++
+ int scx_tg_online(struct task_group *tg)
+ {
+ 	int ret = 0;
+diff --git a/kernel/sched/ext.h b/kernel/sched/ext.h
+index 1bda96b19a1bf6..d5c02af0ef58e5 100644
+--- a/kernel/sched/ext.h
++++ b/kernel/sched/ext.h
+@@ -80,6 +80,7 @@ static inline void scx_update_idle(struct rq *rq, bool idle, bool do_notify) {}
+ 
+ #ifdef CONFIG_CGROUP_SCHED
+ #ifdef CONFIG_EXT_GROUP_SCHED
++void scx_tg_init(struct task_group *tg);
+ int scx_tg_online(struct task_group *tg);
+ void scx_tg_offline(struct task_group *tg);
+ int scx_cgroup_can_attach(struct cgroup_taskset *tset);
+@@ -89,6 +90,7 @@ void scx_cgroup_cancel_attach(struct cgroup_taskset *tset);
+ void scx_group_set_weight(struct task_group *tg, unsigned long cgrp_weight);
+ void scx_group_set_idle(struct task_group *tg, bool idle);
+ #else	/* CONFIG_EXT_GROUP_SCHED */
++static inline void scx_tg_init(struct task_group *tg) {}
+ static inline int scx_tg_online(struct task_group *tg) { return 0; }
+ static inline void scx_tg_offline(struct task_group *tg) {}
+ static inline int scx_cgroup_can_attach(struct cgroup_taskset *tset) { return 0; }
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 0c04ed41485259..138d9f4658d5f8 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3795,6 +3795,7 @@ static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
+ 		update_entity_lag(cfs_rq, se);
+ 		se->deadline -= se->vruntime;
+ 		se->rel_deadline = 1;
++		cfs_rq->nr_queued--;
+ 		if (!curr)
+ 			__dequeue_entity(cfs_rq, se);
+ 		update_load_sub(&cfs_rq->load, se->load.weight);
+@@ -3821,10 +3822,11 @@ static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
+ 
+ 	enqueue_load_avg(cfs_rq, se);
+ 	if (se->on_rq) {
+-		update_load_add(&cfs_rq->load, se->load.weight);
+ 		place_entity(cfs_rq, se, 0);
++		update_load_add(&cfs_rq->load, se->load.weight);
+ 		if (!curr)
+ 			__enqueue_entity(cfs_rq, se);
++		cfs_rq->nr_queued++;
+ 
+ 		/*
+ 		 * The entity's vruntime has been adjusted, so let's check
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index fa03ec3ed56a20..bfcb8b0a1e2c46 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -1883,6 +1883,27 @@ static int find_lowest_rq(struct task_struct *task)
+ 	return -1;
+ }
+ 
++static struct task_struct *pick_next_pushable_task(struct rq *rq)
++{
++	struct task_struct *p;
++
++	if (!has_pushable_tasks(rq))
++		return NULL;
++
++	p = plist_first_entry(&rq->rt.pushable_tasks,
++			      struct task_struct, pushable_tasks);
++
++	BUG_ON(rq->cpu != task_cpu(p));
++	BUG_ON(task_current(rq, p));
++	BUG_ON(task_current_donor(rq, p));
++	BUG_ON(p->nr_cpus_allowed <= 1);
++
++	BUG_ON(!task_on_rq_queued(p));
++	BUG_ON(!rt_task(p));
++
++	return p;
++}
++
+ /* Will lock the rq it finds */
+ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
+ {
+@@ -1913,18 +1934,16 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
+ 			/*
+ 			 * We had to unlock the run queue. In
+ 			 * the mean time, task could have
+-			 * migrated already or had its affinity changed.
+-			 * Also make sure that it wasn't scheduled on its rq.
++			 * migrated already or had its affinity changed,
++			 * therefore check if the task is still at the
++			 * head of the pushable tasks list.
+ 			 * It is possible the task was scheduled, set
+ 			 * "migrate_disabled" and then got preempted, so we must
+ 			 * check the task migration disable flag here too.
+ 			 */
+-			if (unlikely(task_rq(task) != rq ||
++			if (unlikely(is_migration_disabled(task) ||
+ 				     !cpumask_test_cpu(lowest_rq->cpu, &task->cpus_mask) ||
+-				     task_on_cpu(rq, task) ||
+-				     !rt_task(task) ||
+-				     is_migration_disabled(task) ||
+-				     !task_on_rq_queued(task))) {
++				     task != pick_next_pushable_task(rq))) {
+ 
+ 				double_unlock_balance(rq, lowest_rq);
+ 				lowest_rq = NULL;
+@@ -1944,27 +1963,6 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
+ 	return lowest_rq;
+ }
+ 
+-static struct task_struct *pick_next_pushable_task(struct rq *rq)
+-{
+-	struct task_struct *p;
+-
+-	if (!has_pushable_tasks(rq))
+-		return NULL;
+-
+-	p = plist_first_entry(&rq->rt.pushable_tasks,
+-			      struct task_struct, pushable_tasks);
+-
+-	BUG_ON(rq->cpu != task_cpu(p));
+-	BUG_ON(task_current(rq, p));
+-	BUG_ON(task_current_donor(rq, p));
+-	BUG_ON(p->nr_cpus_allowed <= 1);
+-
+-	BUG_ON(!task_on_rq_queued(p));
+-	BUG_ON(!rt_task(p));
+-
+-	return p;
+-}
+-
+ /*
+  * If the current CPU has more than one RT task, see if the non
+  * running task can migrate over to a CPU that is running a task
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index bb48498ebb5a80..6a8bc7da906263 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -310,7 +310,7 @@ static void clocksource_verify_choose_cpus(void)
+ {
+ 	int cpu, i, n = verify_n_cpus;
+ 
+-	if (n < 0) {
++	if (n < 0 || n >= num_online_cpus()) {
+ 		/* Check all of the CPUs. */
+ 		cpumask_copy(&cpus_chosen, cpu_online_mask);
+ 		cpumask_clear_cpu(smp_processor_id(), &cpus_chosen);
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 6981830c312859..6859675516f756 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -7395,9 +7395,10 @@ void ftrace_release_mod(struct module *mod)
+ 
+ 	mutex_lock(&ftrace_lock);
+ 
+-	if (ftrace_disabled)
+-		goto out_unlock;
+-
++	/*
++	 * To avoid the UAF problem after the module is unloaded, the
++	 * 'mod_map' resource needs to be released unconditionally.
++	 */
+ 	list_for_each_entry_safe(mod_map, n, &ftrace_mod_maps, list) {
+ 		if (mod_map->mod == mod) {
+ 			list_del_rcu(&mod_map->list);
+@@ -7406,6 +7407,9 @@ void ftrace_release_mod(struct module *mod)
+ 		}
+ 	}
+ 
++	if (ftrace_disabled)
++		goto out_unlock;
++
+ 	/*
+ 	 * Each module has its own ftrace_pages, remove
+ 	 * them from the list.
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 766cb3cd254e05..14e1e1ed55058d 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -6032,6 +6032,7 @@ unsigned long trace_adjust_address(struct trace_array *tr, unsigned long addr)
+ 	struct trace_module_delta *module_delta;
+ 	struct trace_scratch *tscratch;
+ 	struct trace_mod_entry *entry;
++	unsigned long raddr;
+ 	int idx = 0, nr_entries;
+ 
+ 	/* If we don't have last boot delta, return the address */
+@@ -6045,7 +6046,9 @@ unsigned long trace_adjust_address(struct trace_array *tr, unsigned long addr)
+ 	module_delta = READ_ONCE(tr->module_delta);
+ 	if (!module_delta || !tscratch->nr_entries ||
+ 	    tscratch->entries[0].mod_addr > addr) {
+-		return addr + tr->text_delta;
++		raddr = addr + tr->text_delta;
++		return __is_kernel(raddr) || is_kernel_core_data(raddr) ||
++			is_kernel_rodata(raddr) ? raddr : addr;
+ 	}
+ 
+ 	/* Note that entries must be sorted. */
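
The trace_adjust_address() hunk applies the boot-time text delta only when
the relocated result still lands in a range the kernel recognizes, falling
back to the raw address otherwise. A self-contained sketch with a made-up
range predicate standing in for __is_kernel() / is_kernel_core_data() /
is_kernel_rodata():

#include <stdio.h>

/* Hypothetical stand-in for the kernel's address-range predicates. */
static int in_known_range(unsigned long addr)
{
	return addr >= 0xffff0000UL && addr < 0xffffff00UL;
}

/* Apply a relocation delta only when the result is plausible. */
static unsigned long adjust_address(unsigned long addr, unsigned long delta)
{
	unsigned long raddr = addr + delta;

	return in_known_range(raddr) ? raddr : addr;
}

int main(void)
{
	printf("%#lx\n", adjust_address(0xffff1000UL, 0x100UL)); /* adjusted */
	printf("%#lx\n", adjust_address(0x1000UL, 0x100UL));     /* unchanged */
	return 0;
}
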
+diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
+index 2048560264bb48..cca676f651b108 100644
+--- a/kernel/trace/trace_events_filter.c
++++ b/kernel/trace/trace_events_filter.c
+@@ -1335,22 +1335,137 @@ static void filter_free_subsystem_preds(struct trace_subsystem_dir *dir,
+ 	}
+ }
+ 
++struct filter_list {
++	struct list_head	list;
++	struct event_filter	*filter;
++};
++
++struct filter_head {
++	struct list_head	list;
++	struct rcu_head		rcu;
++};
++
++
++static void free_filter_list(struct rcu_head *rhp)
++{
++	struct filter_head *filter_list = container_of(rhp, struct filter_head, rcu);
++	struct filter_list *filter_item, *tmp;
++
++	list_for_each_entry_safe(filter_item, tmp, &filter_list->list, list) {
++		__free_filter(filter_item->filter);
++		list_del(&filter_item->list);
++		kfree(filter_item);
++	}
++	kfree(filter_list);
++}
++
++static void free_filter_list_tasks(struct rcu_head *rhp)
++{
++	call_rcu(rhp, free_filter_list);
++}
++
++/*
++ * tracepoint_synchronize_unregister() performs a double RCU wait:
++ * synchronize_rcu_tasks_trace() followed by synchronize_rcu().
++ * Instead of blocking on both, chain the free through the call_rcu*()
++ * variants.
++ */
++static void delay_free_filter(struct filter_head *head)
++{
++	call_rcu_tasks_trace(&head->rcu, free_filter_list_tasks);
++}
++
++static void try_delay_free_filter(struct event_filter *filter)
++{
++	struct filter_head *head;
++	struct filter_list *item;
++
++	head = kmalloc(sizeof(*head), GFP_KERNEL);
++	if (!head)
++		goto free_now;
++
++	INIT_LIST_HEAD(&head->list);
++
++	item = kmalloc(sizeof(*item), GFP_KERNEL);
++	if (!item) {
++		kfree(head);
++		goto free_now;
++	}
++
++	item->filter = filter;
++	list_add_tail(&item->list, &head->list);
++	delay_free_filter(head);
++	return;
++
++ free_now:
++	/* Make sure the filter is not being used */
++	tracepoint_synchronize_unregister();
++	__free_filter(filter);
++}
++
+ static inline void __free_subsystem_filter(struct trace_event_file *file)
+ {
+ 	__free_filter(file->filter);
+ 	file->filter = NULL;
+ }
+ 
++static inline void event_set_filter(struct trace_event_file *file,
++				    struct event_filter *filter)
++{
++	rcu_assign_pointer(file->filter, filter);
++}
++
++static inline void event_clear_filter(struct trace_event_file *file)
++{
++	RCU_INIT_POINTER(file->filter, NULL);
++}
++
+ static void filter_free_subsystem_filters(struct trace_subsystem_dir *dir,
+-					  struct trace_array *tr)
++					  struct trace_array *tr,
++					  struct event_filter *filter)
+ {
+ 	struct trace_event_file *file;
++	struct filter_head *head;
++	struct filter_list *item;
++
++	head = kmalloc(sizeof(*head), GFP_KERNEL);
++	if (!head)
++		goto free_now;
++
++	INIT_LIST_HEAD(&head->list);
++
++	item = kmalloc(sizeof(*item), GFP_KERNEL);
++	if (!item)
++		goto free_now;
++
++	item->filter = filter;
++	list_add_tail(&item->list, &head->list);
+ 
+ 	list_for_each_entry(file, &tr->events, list) {
+ 		if (file->system != dir)
+ 			continue;
+-		__free_subsystem_filter(file);
++		item = kmalloc(sizeof(*item), GFP_KERNEL);
++		if (!item)
++			goto free_now;
++		item->filter = event_filter(file);
++		list_add_tail(&item->list, &head->list);
++		event_clear_filter(file);
++	}
++
++	delay_free_filter(head);
++	return;
++ free_now:
++	tracepoint_synchronize_unregister();
++
++	if (head)
++		free_filter_list(&head->rcu);
++
++	list_for_each_entry(file, &tr->events, list) {
++		if (file->system != dir || !file->filter)
++			continue;
++		__free_filter(file->filter);
+ 	}
++	__free_filter(filter);
+ }
+ 
+ int filter_assign_type(const char *type)
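
delay_free_filter() above implements that comment literally: rather than
blocking in tracepoint_synchronize_unregister(), the free is chained through
both grace periods asynchronously. call_rcu_tasks_trace() runs a callback
that re-queues the same rcu_head via call_rcu(), and only the second
callback frees the list. A toy model with synchronous stand-ins for the two
grace-period primitives (the real ones invoke the callback later, from
kernel context):

#include <stdio.h>
#include <stdlib.h>

struct rcu_head { void (*func)(struct rcu_head *); };
struct filter_head { struct rcu_head rcu; int nfilters; };

/* Synchronous stand-ins for the asynchronous kernel primitives. */
static void call_rcu(struct rcu_head *rhp, void (*f)(struct rcu_head *))
{
	rhp->func = f;
	f(rhp);		/* kernel: runs after a normal RCU grace period */
}

static void call_rcu_tasks_trace(struct rcu_head *rhp,
				 void (*f)(struct rcu_head *))
{
	rhp->func = f;
	f(rhp);		/* kernel: runs after a tasks-trace grace period */
}

static void free_filter_list(struct rcu_head *rhp)
{
	/* Valid cast: rcu is the first member of struct filter_head. */
	struct filter_head *head = (struct filter_head *)rhp;

	printf("freeing %d filters after both grace periods\n", head->nfilters);
	free(head);
}

/* Stage 1: after the tasks-trace grace period, queue stage 2. */
static void free_filter_list_tasks(struct rcu_head *rhp)
{
	call_rcu(rhp, free_filter_list);
}

int main(void)
{
	struct filter_head *h = malloc(sizeof(*h));

	h->nfilters = 3;
	call_rcu_tasks_trace(&h->rcu, free_filter_list_tasks);
	return 0;	/* the kernel version returns without ever blocking */
}
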
+@@ -2120,22 +2235,6 @@ static inline void event_set_filtered_flag(struct trace_event_file *file)
+ 		trace_buffered_event_enable();
+ }
+ 
+-static inline void event_set_filter(struct trace_event_file *file,
+-				    struct event_filter *filter)
+-{
+-	rcu_assign_pointer(file->filter, filter);
+-}
+-
+-static inline void event_clear_filter(struct trace_event_file *file)
+-{
+-	RCU_INIT_POINTER(file->filter, NULL);
+-}
+-
+-struct filter_list {
+-	struct list_head	list;
+-	struct event_filter	*filter;
+-};
+-
+ static int process_system_preds(struct trace_subsystem_dir *dir,
+ 				struct trace_array *tr,
+ 				struct filter_parse_error *pe,
+@@ -2144,11 +2243,16 @@ static int process_system_preds(struct trace_subsystem_dir *dir,
+ 	struct trace_event_file *file;
+ 	struct filter_list *filter_item;
+ 	struct event_filter *filter = NULL;
+-	struct filter_list *tmp;
+-	LIST_HEAD(filter_list);
++	struct filter_head *filter_list;
+ 	bool fail = true;
+ 	int err;
+ 
++	filter_list = kmalloc(sizeof(*filter_list), GFP_KERNEL);
++	if (!filter_list)
++		return -ENOMEM;
++
++	INIT_LIST_HEAD(&filter_list->list);
++
+ 	list_for_each_entry(file, &tr->events, list) {
+ 
+ 		if (file->system != dir)
+@@ -2175,7 +2279,7 @@ static int process_system_preds(struct trace_subsystem_dir *dir,
+ 		if (!filter_item)
+ 			goto fail_mem;
+ 
+-		list_add_tail(&filter_item->list, &filter_list);
++		list_add_tail(&filter_item->list, &filter_list->list);
+ 		/*
+ 		 * Regardless of whether this returned an error, we still
+ 		 * replace the filter for the call.
+@@ -2195,31 +2299,22 @@ static int process_system_preds(struct trace_subsystem_dir *dir,
+ 	 * Do a synchronize_rcu() to ensure all calls are
+ 	 * done with the filters before we free them.
+ 	 */
+-	tracepoint_synchronize_unregister();
+-	list_for_each_entry_safe(filter_item, tmp, &filter_list, list) {
+-		__free_filter(filter_item->filter);
+-		list_del(&filter_item->list);
+-		kfree(filter_item);
+-	}
++	delay_free_filter(filter_list);
+ 	return 0;
+  fail:
+ 	/* No call succeeded */
+-	list_for_each_entry_safe(filter_item, tmp, &filter_list, list) {
+-		list_del(&filter_item->list);
+-		kfree(filter_item);
+-	}
++	free_filter_list(&filter_list->rcu);
+ 	parse_error(pe, FILT_ERR_BAD_SUBSYS_FILTER, 0);
+ 	return -EINVAL;
+  fail_mem:
+ 	__free_filter(filter);
++
+ 	/* If any call succeeded, we still need to sync */
+ 	if (!fail)
+-		tracepoint_synchronize_unregister();
+-	list_for_each_entry_safe(filter_item, tmp, &filter_list, list) {
+-		__free_filter(filter_item->filter);
+-		list_del(&filter_item->list);
+-		kfree(filter_item);
+-	}
++		delay_free_filter(filter_list);
++	else
++		free_filter_list(&filter_list->rcu);
++
+ 	return -ENOMEM;
+ }
+ 
+@@ -2361,9 +2456,7 @@ int apply_event_filter(struct trace_event_file *file, char *filter_string)
+ 
+ 		event_clear_filter(file);
+ 
+-		/* Make sure the filter is not being used */
+-		tracepoint_synchronize_unregister();
+-		__free_filter(filter);
++		try_delay_free_filter(filter);
+ 
+ 		return 0;
+ 	}
+@@ -2387,11 +2480,8 @@ int apply_event_filter(struct trace_event_file *file, char *filter_string)
+ 
+ 		event_set_filter(file, filter);
+ 
+-		if (tmp) {
+-			/* Make sure the call is done with the filter */
+-			tracepoint_synchronize_unregister();
+-			__free_filter(tmp);
+-		}
++		if (tmp)
++			try_delay_free_filter(tmp);
+ 	}
+ 
+ 	return err;
+@@ -2417,9 +2507,7 @@ int apply_subsystem_event_filter(struct trace_subsystem_dir *dir,
+ 		filter = system->filter;
+ 		system->filter = NULL;
+ 		/* Ensure all filters are no longer used */
+-		tracepoint_synchronize_unregister();
+-		filter_free_subsystem_filters(dir, tr);
+-		__free_filter(filter);
++		filter_free_subsystem_filters(dir, tr, filter);
+ 		return 0;
+ 	}
+ 
+diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
+index 0c357a89c58e01..e9eab4a021ab74 100644
+--- a/kernel/trace/trace_functions_graph.c
++++ b/kernel/trace/trace_functions_graph.c
+@@ -475,10 +475,16 @@ static int graph_trace_init(struct trace_array *tr)
+ 	return 0;
+ }
+ 
++static struct tracer graph_trace;
++
+ static int ftrace_graph_trace_args(struct trace_array *tr, int set)
+ {
+ 	trace_func_graph_ent_t entry;
+ 
++	/* Do nothing if the current tracer is not this tracer */
++	if (tr->current_trace != &graph_trace)
++		return 0;
++
+ 	if (set)
+ 		entry = trace_graph_entry_args;
+ 	else
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index 9fa2af9dbf2cec..2d283e92be5abc 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -47,6 +47,7 @@ int __read_mostly watchdog_user_enabled = 1;
+ static int __read_mostly watchdog_hardlockup_user_enabled = WATCHDOG_HARDLOCKUP_DEFAULT;
+ static int __read_mostly watchdog_softlockup_user_enabled = 1;
+ int __read_mostly watchdog_thresh = 10;
++static int __read_mostly watchdog_thresh_next;
+ static int __read_mostly watchdog_hardlockup_available;
+ 
+ struct cpumask watchdog_cpumask __read_mostly;
+@@ -870,12 +871,20 @@ int lockup_detector_offline_cpu(unsigned int cpu)
+ 	return 0;
+ }
+ 
+-static void __lockup_detector_reconfigure(void)
++static void __lockup_detector_reconfigure(bool thresh_changed)
+ {
+ 	cpus_read_lock();
+ 	watchdog_hardlockup_stop();
+ 
+ 	softlockup_stop_all();
++	/*
++	 * To prevent watchdog_timer_fn from using the old interval and
++	 * the new watchdog_thresh at the same time, which could lead to
++	 * false softlockup reports, it is necessary to update the
++	 * watchdog_thresh after the softlockup detector has been stopped.
++	 */
++	if (thresh_changed)
++		watchdog_thresh = READ_ONCE(watchdog_thresh_next);
+ 	set_sample_period();
+ 	lockup_detector_update_enable();
+ 	if (watchdog_enabled && watchdog_thresh)
+@@ -888,7 +897,7 @@ static void __lockup_detector_reconfigure(void)
+ void lockup_detector_reconfigure(void)
+ {
+ 	mutex_lock(&watchdog_mutex);
+-	__lockup_detector_reconfigure();
++	__lockup_detector_reconfigure(false);
+ 	mutex_unlock(&watchdog_mutex);
+ }
+ 
+@@ -908,27 +917,29 @@ static __init void lockup_detector_setup(void)
+ 		return;
+ 
+ 	mutex_lock(&watchdog_mutex);
+-	__lockup_detector_reconfigure();
++	__lockup_detector_reconfigure(false);
+ 	softlockup_initialized = true;
+ 	mutex_unlock(&watchdog_mutex);
+ }
+ 
+ #else /* CONFIG_SOFTLOCKUP_DETECTOR */
+-static void __lockup_detector_reconfigure(void)
++static void __lockup_detector_reconfigure(bool thresh_changed)
+ {
+ 	cpus_read_lock();
+ 	watchdog_hardlockup_stop();
++	if (thresh_changed)
++		watchdog_thresh = READ_ONCE(watchdog_thresh_next);
+ 	lockup_detector_update_enable();
+ 	watchdog_hardlockup_start();
+ 	cpus_read_unlock();
+ }
+ void lockup_detector_reconfigure(void)
+ {
+-	__lockup_detector_reconfigure();
++	__lockup_detector_reconfigure(false);
+ }
+ static inline void lockup_detector_setup(void)
+ {
+-	__lockup_detector_reconfigure();
++	__lockup_detector_reconfigure(false);
+ }
+ #endif /* !CONFIG_SOFTLOCKUP_DETECTOR */
+ 
+@@ -946,11 +957,11 @@ void lockup_detector_soft_poweroff(void)
+ #ifdef CONFIG_SYSCTL
+ 
+ /* Propagate any changes to the watchdog infrastructure */
+-static void proc_watchdog_update(void)
++static void proc_watchdog_update(bool thresh_changed)
+ {
+ 	/* Remove impossible cpus to keep sysctl output clean. */
+ 	cpumask_and(&watchdog_cpumask, &watchdog_cpumask, cpu_possible_mask);
+-	__lockup_detector_reconfigure();
++	__lockup_detector_reconfigure(thresh_changed);
+ }
+ 
+ /*
+@@ -984,7 +995,7 @@ static int proc_watchdog_common(int which, const struct ctl_table *table, int wr
+ 	} else {
+ 		err = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+ 		if (!err && old != READ_ONCE(*param))
+-			proc_watchdog_update();
++			proc_watchdog_update(false);
+ 	}
+ 	mutex_unlock(&watchdog_mutex);
+ 	return err;
+@@ -1035,11 +1046,13 @@ static int proc_watchdog_thresh(const struct ctl_table *table, int write,
+ 
+ 	mutex_lock(&watchdog_mutex);
+ 
+-	old = READ_ONCE(watchdog_thresh);
++	watchdog_thresh_next = READ_ONCE(watchdog_thresh);
++
++	old = watchdog_thresh_next;
+ 	err = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+ 
+-	if (!err && write && old != READ_ONCE(watchdog_thresh))
+-		proc_watchdog_update();
++	if (!err && write && old != READ_ONCE(watchdog_thresh_next))
++		proc_watchdog_update(true);
+ 
+ 	mutex_unlock(&watchdog_mutex);
+ 	return err;
+@@ -1060,7 +1073,7 @@ static int proc_watchdog_cpumask(const struct ctl_table *table, int write,
+ 
+ 	err = proc_do_large_bitmap(table, write, buffer, lenp, ppos);
+ 	if (!err && write)
+-		proc_watchdog_update();
++		proc_watchdog_update(false);
+ 
+ 	mutex_unlock(&watchdog_mutex);
+ 	return err;
+@@ -1080,7 +1093,7 @@ static const struct ctl_table watchdog_sysctls[] = {
+ 	},
+ 	{
+ 		.procname	= "watchdog_thresh",
+-		.data		= &watchdog_thresh,
++		.data		= &watchdog_thresh_next,
+ 		.maxlen		= sizeof(int),
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_watchdog_thresh,
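
The watchdog change splits the sysctl-visible value (watchdog_thresh_next)
from the live watchdog_thresh and refreshes the live one only after the
soft-lockup workers have been stopped, so no timer callback can combine the
old sample period with the new threshold. A minimal model of that
stop-apply-restart sequence:

#include <stdio.h>

static int thresh = 10;		/* value the timer function reads */
static int thresh_next = 10;	/* value the sysctl handler writes */

static void workers_stop(void)
{
	printf("workers stopped\n");
}

static void workers_start(void)
{
	printf("workers started, thresh=%d\n", thresh);
}

/* The live threshold changes only while nothing is running. */
static void reconfigure(int thresh_changed)
{
	workers_stop();
	if (thresh_changed)
		thresh = thresh_next;
	workers_start();
}

int main(void)
{
	thresh_next = 30;	/* what a sysctl write would store */
	reconfigure(1);
	return 0;
}
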
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index cf620328273757..c8083dd3001670 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -3241,7 +3241,7 @@ __acquires(&pool->lock)
+ 	 * point will only record its address.
+ 	 */
+ 	trace_workqueue_execute_end(work, worker->current_func);
+-	pwq->stats[PWQ_STAT_COMPLETED]++;
++
+ 	lock_map_release(&lockdep_map);
+ 	if (!bh_draining)
+ 		lock_map_release(pwq->wq->lockdep_map);
+@@ -3272,6 +3272,8 @@ __acquires(&pool->lock)
+ 
+ 	raw_spin_lock_irq(&pool->lock);
+ 
++	pwq->stats[PWQ_STAT_COMPLETED]++;
++
+ 	/*
+ 	 * In addition to %WQ_CPU_INTENSIVE, @worker may also have been marked
+ 	 * CPU intensive by wq_worker_tick() if @work hogged CPU longer than
+@@ -7754,7 +7756,8 @@ void __init workqueue_init_early(void)
+ 		restrict_unbound_cpumask("workqueue.unbound_cpus", &wq_cmdline_cpumask);
+ 
+ 	cpumask_copy(wq_requested_unbound_cpumask, wq_unbound_cpumask);
+-
++	cpumask_andnot(wq_isolated_cpumask, cpu_possible_mask,
++						housekeeping_cpumask(HK_TYPE_DOMAIN));
+ 	pwq_cache = KMEM_CACHE(pool_workqueue, SLAB_PANIC);
+ 
+ 	unbound_wq_update_pwq_attrs_buf = alloc_workqueue_attrs();
+diff --git a/lib/Kconfig b/lib/Kconfig
+index 6c1b8f1842678c..37db228f70a99f 100644
+--- a/lib/Kconfig
++++ b/lib/Kconfig
+@@ -716,6 +716,7 @@ config GENERIC_LIB_DEVMEM_IS_ALLOWED
+ 
+ config PLDMFW
+ 	bool
++	select CRC32
+ 	default n
+ 
+ config ASN1_ENCODER
+diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
+index c7f602fa7b23fc..ac72849b5da93a 100644
+--- a/lib/alloc_tag.c
++++ b/lib/alloc_tag.c
+@@ -618,15 +618,16 @@ static void release_module_tags(struct module *mod, bool used)
+ 	mas_unlock(&mas);
+ }
+ 
+-static void load_module(struct module *mod, struct codetag *start, struct codetag *stop)
++static int load_module(struct module *mod, struct codetag *start, struct codetag *stop)
+ {
+ 	/* Allocate module alloc_tag percpu counters */
+ 	struct alloc_tag *start_tag;
+ 	struct alloc_tag *stop_tag;
+ 	struct alloc_tag *tag;
+ 
++	/* percpu counters for core allocations are already statically allocated */
+ 	if (!mod)
+-		return;
++		return 0;
+ 
+ 	start_tag = ct_to_alloc_tag(start);
+ 	stop_tag = ct_to_alloc_tag(stop);
+@@ -638,12 +639,13 @@ static void load_module(struct module *mod, struct codetag *start, struct codeta
+ 				free_percpu(tag->counters);
+ 				tag->counters = NULL;
+ 			}
+-			shutdown_mem_profiling(true);
+-			pr_err("Failed to allocate memory for allocation tag percpu counters in the module %s. Memory allocation profiling is disabled!\n",
++			pr_err("Failed to allocate memory for allocation tag percpu counters in the module %s\n",
+ 			       mod->name);
+-			break;
++			return -ENOMEM;
+ 		}
+ 	}
++
++	return 0;
+ }
+ 
+ static void replace_module(struct module *mod, struct module *new_mod)
+diff --git a/lib/codetag.c b/lib/codetag.c
+index de332e98d6f5b5..650d54d7e14da6 100644
+--- a/lib/codetag.c
++++ b/lib/codetag.c
+@@ -167,6 +167,7 @@ static int codetag_module_init(struct codetag_type *cttype, struct module *mod)
+ {
+ 	struct codetag_range range;
+ 	struct codetag_module *cmod;
++	int mod_id;
+ 	int err;
+ 
+ 	range = get_section_range(mod, cttype->desc.section);
+@@ -190,11 +191,20 @@ static int codetag_module_init(struct codetag_type *cttype, struct module *mod)
+ 	cmod->range = range;
+ 
+ 	down_write(&cttype->mod_lock);
+-	err = idr_alloc(&cttype->mod_idr, cmod, 0, 0, GFP_KERNEL);
+-	if (err >= 0) {
+-		cttype->count += range_size(cttype, &range);
+-		if (cttype->desc.module_load)
+-			cttype->desc.module_load(mod, range.start, range.stop);
++	mod_id = idr_alloc(&cttype->mod_idr, cmod, 0, 0, GFP_KERNEL);
++	if (mod_id >= 0) {
++		if (cttype->desc.module_load) {
++			err = cttype->desc.module_load(mod, range.start, range.stop);
++			if (!err)
++				cttype->count += range_size(cttype, &range);
++			else
++				idr_remove(&cttype->mod_idr, mod_id);
++		} else {
++			cttype->count += range_size(cttype, &range);
++			err = 0;
++		}
++	} else {
++		err = mod_id;
+ 	}
+ 	up_write(&cttype->mod_lock);
+ 
+@@ -295,17 +305,23 @@ void codetag_module_replaced(struct module *mod, struct module *new_mod)
+ 	mutex_unlock(&codetag_lock);
+ }
+ 
+-void codetag_load_module(struct module *mod)
++int codetag_load_module(struct module *mod)
+ {
+ 	struct codetag_type *cttype;
++	int ret = 0;
+ 
+ 	if (!mod)
+-		return;
++		return 0;
+ 
+ 	mutex_lock(&codetag_lock);
+-	list_for_each_entry(cttype, &codetag_types, link)
+-		codetag_module_init(cttype, mod);
++	list_for_each_entry(cttype, &codetag_types, link) {
++		ret = codetag_module_init(cttype, mod);
++		if (ret)
++			break;
++	}
+ 	mutex_unlock(&codetag_lock);
++
++	return ret;
+ }
+ 
+ void codetag_unload_module(struct module *mod)
+diff --git a/mm/execmem.c b/mm/execmem.c
+index 6f7a2653b280ed..e6c4f5076ca8d8 100644
+--- a/mm/execmem.c
++++ b/mm/execmem.c
+@@ -254,34 +254,6 @@ static void *__execmem_cache_alloc(struct execmem_range *range, size_t size)
+ 	return ptr;
+ }
+ 
+-static bool execmem_cache_rox = false;
+-
+-void execmem_cache_make_ro(void)
+-{
+-	struct maple_tree *free_areas = &execmem_cache.free_areas;
+-	struct maple_tree *busy_areas = &execmem_cache.busy_areas;
+-	MA_STATE(mas_free, free_areas, 0, ULONG_MAX);
+-	MA_STATE(mas_busy, busy_areas, 0, ULONG_MAX);
+-	struct mutex *mutex = &execmem_cache.mutex;
+-	void *area;
+-
+-	execmem_cache_rox = true;
+-
+-	mutex_lock(mutex);
+-
+-	mas_for_each(&mas_free, area, ULONG_MAX) {
+-		unsigned long pages = mas_range_len(&mas_free) >> PAGE_SHIFT;
+-		set_memory_ro(mas_free.index, pages);
+-	}
+-
+-	mas_for_each(&mas_busy, area, ULONG_MAX) {
+-		unsigned long pages = mas_range_len(&mas_busy) >> PAGE_SHIFT;
+-		set_memory_ro(mas_busy.index, pages);
+-	}
+-
+-	mutex_unlock(mutex);
+-}
+-
+ static int execmem_cache_populate(struct execmem_range *range, size_t size)
+ {
+ 	unsigned long vm_flags = VM_ALLOW_HUGE_VMAP;
+@@ -302,15 +274,9 @@ static int execmem_cache_populate(struct execmem_range *range, size_t size)
+ 	/* fill memory with instructions that will trap */
+ 	execmem_fill_trapping_insns(p, alloc_size, /* writable = */ true);
+ 
+-	if (execmem_cache_rox) {
+-		err = set_memory_rox((unsigned long)p, vm->nr_pages);
+-		if (err)
+-			goto err_free_mem;
+-	} else {
+-		err = set_memory_x((unsigned long)p, vm->nr_pages);
+-		if (err)
+-			goto err_free_mem;
+-	}
++	err = set_memory_rox((unsigned long)p, vm->nr_pages);
++	if (err)
++		goto err_free_mem;
+ 
+ 	err = execmem_cache_add(p, alloc_size);
+ 	if (err)
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 6a3cf7935c1499..395857ca8118b4 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -120,7 +120,7 @@ static void hugetlb_vma_lock_free(struct vm_area_struct *vma);
+ static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma);
+ static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma);
+ static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
+-		unsigned long start, unsigned long end);
++		unsigned long start, unsigned long end, bool take_locks);
+ static struct resv_map *vma_resv_map(struct vm_area_struct *vma);
+ 
+ static void hugetlb_free_folio(struct folio *folio)
+@@ -5426,26 +5426,40 @@ static int hugetlb_vm_op_split(struct vm_area_struct *vma, unsigned long addr)
+ {
+ 	if (addr & ~(huge_page_mask(hstate_vma(vma))))
+ 		return -EINVAL;
++	return 0;
++}
+ 
++void hugetlb_split(struct vm_area_struct *vma, unsigned long addr)
++{
+ 	/*
+ 	 * PMD sharing is only possible for PUD_SIZE-aligned address ranges
+ 	 * in HugeTLB VMAs. If we will lose PUD_SIZE alignment due to this
+ 	 * split, unshare PMDs in the PUD_SIZE interval surrounding addr now.
++	 * This function is called in the middle of a VMA split operation, with
++	 * MM, VMA and rmap all write-locked to prevent concurrent page table
++	 * walks (except hardware and gup_fast()).
+ 	 */
++	vma_assert_write_locked(vma);
++	i_mmap_assert_write_locked(vma->vm_file->f_mapping);
++
+ 	if (addr & ~PUD_MASK) {
+-		/*
+-		 * hugetlb_vm_op_split is called right before we attempt to
+-		 * split the VMA. We will need to unshare PMDs in the old and
+-		 * new VMAs, so let's unshare before we split.
+-		 */
+ 		unsigned long floor = addr & PUD_MASK;
+ 		unsigned long ceil = floor + PUD_SIZE;
+ 
+-		if (floor >= vma->vm_start && ceil <= vma->vm_end)
+-			hugetlb_unshare_pmds(vma, floor, ceil);
++		if (floor >= vma->vm_start && ceil <= vma->vm_end) {
++			/*
++			 * Locking:
++			 * Use take_locks=false here.
++			 * The file rmap lock is already held.
++			 * The hugetlb VMA lock can't be taken when we already
++			 * hold the file rmap lock, and we don't need it because
++			 * its purpose is to synchronize against concurrent page
++			 * table walks, which are not possible thanks to the
++			 * locks held by our caller.
++			 */
++			hugetlb_unshare_pmds(vma, floor, ceil, /* take_locks = */ false);
++		}
+ 	}
+-
+-	return 0;
+ }
+ 
+ static unsigned long hugetlb_vm_op_pagesize(struct vm_area_struct *vma)
+@@ -7614,6 +7628,13 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
+ 		return 0;
+ 
+ 	pud_clear(pud);
++	/*
++	 * Once our caller drops the rmap lock, some other process might be
++	 * using this page table as a normal, non-hugetlb page table.
++	 * Wait for pending gup_fast() in other threads to finish before letting
++	 * that happen.
++	 */
++	tlb_remove_table_sync_one();
+ 	ptdesc_pmd_pts_dec(virt_to_ptdesc(ptep));
+ 	mm_dec_nr_pmds(mm);
+ 	return 1;
+@@ -7884,9 +7905,16 @@ void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int re
+ 	spin_unlock_irq(&hugetlb_lock);
+ }
+ 
++/*
++ * If @take_locks is false, the caller must ensure that no concurrent page table
++ * access can happen (except for gup_fast() and hardware page walks).
++ * If @take_locks is true, we take the hugetlb VMA lock (to lock out things like
++ * concurrent page fault handling) and the file rmap lock.
++ */
+ static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
+ 				   unsigned long start,
+-				   unsigned long end)
++				   unsigned long end,
++				   bool take_locks)
+ {
+ 	struct hstate *h = hstate_vma(vma);
+ 	unsigned long sz = huge_page_size(h);
+@@ -7910,8 +7938,12 @@ static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
+ 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
+ 				start, end);
+ 	mmu_notifier_invalidate_range_start(&range);
+-	hugetlb_vma_lock_write(vma);
+-	i_mmap_lock_write(vma->vm_file->f_mapping);
++	if (take_locks) {
++		hugetlb_vma_lock_write(vma);
++		i_mmap_lock_write(vma->vm_file->f_mapping);
++	} else {
++		i_mmap_assert_write_locked(vma->vm_file->f_mapping);
++	}
+ 	for (address = start; address < end; address += PUD_SIZE) {
+ 		ptep = hugetlb_walk(vma, address, sz);
+ 		if (!ptep)
+@@ -7921,8 +7953,10 @@ static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
+ 		spin_unlock(ptl);
+ 	}
+ 	flush_hugetlb_tlb_range(vma, start, end);
+-	i_mmap_unlock_write(vma->vm_file->f_mapping);
+-	hugetlb_vma_unlock_write(vma);
++	if (take_locks) {
++		i_mmap_unlock_write(vma->vm_file->f_mapping);
++		hugetlb_vma_unlock_write(vma);
++	}
+ 	/*
+ 	 * No need to call mmu_notifier_arch_invalidate_secondary_tlbs(), see
+ 	 * Documentation/mm/mmu_notifier.rst.
+@@ -7937,7 +7971,8 @@ static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
+ void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
+ {
+ 	hugetlb_unshare_pmds(vma, ALIGN(vma->vm_start, PUD_SIZE),
+-			ALIGN_DOWN(vma->vm_end, PUD_SIZE));
++			ALIGN_DOWN(vma->vm_end, PUD_SIZE),
++			/* take_locks = */ true);
+ }
+ 
+ /*
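
hugetlb_unshare_pmds() now takes a take_locks flag: ordinary callers pass
true and the function acquires the VMA and rmap locks itself, while the
VMA-split path, which already holds both, passes false and merely asserts.
A pthread sketch of the take-or-assert convention (the lock and function
names are placeholders):

#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t rmap_lock = PTHREAD_MUTEX_INITIALIZER;
static bool rmap_locked;	/* tracked by hand, for the assertion */

/* If take_locks is false, the caller must already hold rmap_lock. */
static void unshare(bool take_locks)
{
	if (take_locks) {
		pthread_mutex_lock(&rmap_lock);
		rmap_locked = true;
	} else {
		assert(rmap_locked);	/* i_mmap_assert_write_locked() */
	}

	printf("unsharing with the lock held\n");

	if (take_locks) {
		rmap_locked = false;
		pthread_mutex_unlock(&rmap_lock);
	}
}

int main(void)
{
	unshare(true);			/* standalone caller */

	pthread_mutex_lock(&rmap_lock);	/* split path: already locked */
	rmap_locked = true;
	unshare(false);
	rmap_locked = false;
	pthread_mutex_unlock(&rmap_lock);
	return 0;
}
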
+diff --git a/mm/madvise.c b/mm/madvise.c
+index b17f684322ad79..f5ddf766c8015e 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -503,6 +503,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
+ 					pte_offset_map_lock(mm, pmd, addr, &ptl);
+ 				if (!start_pte)
+ 					break;
++				flush_tlb_batched_pending(mm);
+ 				arch_enter_lazy_mmu_mode();
+ 				if (!err)
+ 					nr = 0;
+@@ -736,6 +737,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
+ 				start_pte = pte;
+ 				if (!start_pte)
+ 					break;
++				flush_tlb_batched_pending(mm);
+ 				arch_enter_lazy_mmu_mode();
+ 				if (!err)
+ 					nr = 0;
+@@ -1830,7 +1832,9 @@ static ssize_t vector_madvise(struct mm_struct *mm, struct iov_iter *iter,
+ 
+ 			/* Drop and reacquire lock to unwind race. */
+ 			madvise_unlock(mm, behavior);
+-			madvise_lock(mm, behavior);
++			ret = madvise_lock(mm, behavior);
++			if (ret)
++				goto out;
+ 			continue;
+ 		}
+ 		if (ret < 0)
+@@ -1839,6 +1843,7 @@ static ssize_t vector_madvise(struct mm_struct *mm, struct iov_iter *iter,
+ 	}
+ 	madvise_unlock(mm, behavior);
+ 
++out:
+ 	ret = (total_len - iov_iter_count(iter)) ? : ret;
+ 
+ 	return ret;
+diff --git a/mm/page-writeback.c b/mm/page-writeback.c
+index c81624bc396971..20e1d76f1eba3f 100644
+--- a/mm/page-writeback.c
++++ b/mm/page-writeback.c
+@@ -520,8 +520,8 @@ static int dirty_ratio_handler(const struct ctl_table *table, int write, void *b
+ 
+ 	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+ 	if (ret == 0 && write && vm_dirty_ratio != old_ratio) {
+-		writeback_set_ratelimit();
+ 		vm_dirty_bytes = 0;
++		writeback_set_ratelimit();
+ 	}
+ 	return ret;
+ }
+diff --git a/mm/readahead.c b/mm/readahead.c
+index 6a4e96b69702b3..20d36d6b055ed5 100644
+--- a/mm/readahead.c
++++ b/mm/readahead.c
+@@ -690,9 +690,15 @@ EXPORT_SYMBOL_GPL(page_cache_async_ra);
+ 
+ ssize_t ksys_readahead(int fd, loff_t offset, size_t count)
+ {
++	struct file *file;
++	const struct inode *inode;
++
+ 	CLASS(fd, f)(fd);
++	if (fd_empty(f))
++		return -EBADF;
+ 
+-	if (fd_empty(f) || !(fd_file(f)->f_mode & FMODE_READ))
++	file = fd_file(f);
++	if (!(file->f_mode & FMODE_READ))
+ 		return -EBADF;
+ 
+ 	/*
+@@ -700,9 +706,15 @@ ssize_t ksys_readahead(int fd, loff_t offset, size_t count)
+ 	 * that can execute readahead. If readahead is not possible
+ 	 * on this file, then we must return -EINVAL.
+ 	 */
+-	if (!fd_file(f)->f_mapping || !fd_file(f)->f_mapping->a_ops ||
+-	    (!S_ISREG(file_inode(fd_file(f))->i_mode) &&
+-	    !S_ISBLK(file_inode(fd_file(f))->i_mode)))
++	if (!file->f_mapping)
++		return -EINVAL;
++	if (!file->f_mapping->a_ops)
++		return -EINVAL;
++
++	inode = file_inode(file);
++	if (!S_ISREG(inode->i_mode) && !S_ISBLK(inode->i_mode))
++		return -EINVAL;
++	if (IS_ANON_FILE(inode))
+ 		return -EINVAL;
+ 
+ 	return vfs_fadvise(fd_file(f), offset, count, POSIX_FADV_WILLNEED);
+diff --git a/mm/vma.c b/mm/vma.c
+index a468d4c29c0cd4..81df9487cba0ee 100644
+--- a/mm/vma.c
++++ b/mm/vma.c
+@@ -144,6 +144,9 @@ static void init_multi_vma_prep(struct vma_prepare *vp,
+ 	vp->file = vma->vm_file;
+ 	if (vp->file)
+ 		vp->mapping = vma->vm_file->f_mapping;
++
++	if (vmg && vmg->skip_vma_uprobe)
++		vp->skip_vma_uprobe = true;
+ }
+ 
+ /*
+@@ -333,10 +336,13 @@ static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi,
+ 
+ 	if (vp->file) {
+ 		i_mmap_unlock_write(vp->mapping);
+-		uprobe_mmap(vp->vma);
+ 
+-		if (vp->adj_next)
+-			uprobe_mmap(vp->adj_next);
++		if (!vp->skip_vma_uprobe) {
++			uprobe_mmap(vp->vma);
++
++			if (vp->adj_next)
++				uprobe_mmap(vp->adj_next);
++		}
+ 	}
+ 
+ 	if (vp->remove) {
+@@ -510,7 +516,14 @@ __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
+ 	init_vma_prep(&vp, vma);
+ 	vp.insert = new;
+ 	vma_prepare(&vp);
++
++	/*
++	 * Get rid of huge pages and shared page tables straddling the split
++	 * boundary.
++	 */
+ 	vma_adjust_trans_huge(vma, vma->vm_start, addr, NULL);
++	if (is_vm_hugetlb_page(vma))
++		hugetlb_split(vma, addr);
+ 
+ 	if (new_below) {
+ 		vma->vm_start = addr;
+@@ -914,26 +927,9 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
+ 		err = dup_anon_vma(next, middle, &anon_dup);
+ 	}
+ 
+-	if (err)
++	if (err || commit_merge(vmg))
+ 		goto abort;
+ 
+-	err = commit_merge(vmg);
+-	if (err) {
+-		VM_WARN_ON(err != -ENOMEM);
+-
+-		if (anon_dup)
+-			unlink_anon_vmas(anon_dup);
+-
+-		/*
+-		 * We've cleaned up any cloned anon_vma's, no VMAs have been
+-		 * modified, no harm no foul if the user requests that we not
+-		 * report this and just give up, leaving the VMAs unmerged.
+-		 */
+-		if (!vmg->give_up_on_oom)
+-			vmg->state = VMA_MERGE_ERROR_NOMEM;
+-		return NULL;
+-	}
+-
+ 	khugepaged_enter_vma(vmg->target, vmg->flags);
+ 	vmg->state = VMA_MERGE_SUCCESS;
+ 	return vmg->target;
+@@ -942,6 +938,9 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
+ 	vma_iter_set(vmg->vmi, start);
+ 	vma_iter_load(vmg->vmi);
+ 
++	if (anon_dup)
++		unlink_anon_vmas(anon_dup);
++
+ 	/*
+ 	 * This means we have failed to clone anon_vma's correctly, but no
+ 	 * actual changes to VMAs have occurred, so no harm no foul - if the
+@@ -1783,6 +1782,14 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
+ 		faulted_in_anon_vma = false;
+ 	}
+ 
++	/*
++	 * If the VMA we are copying might contain a uprobe PTE, ensure
++	 * that we do not establish one upon merge. Otherwise, when mremap()
++	 * moves page tables, it will orphan the newly created PTE.
++	 */
++	if (vma->vm_file)
++		vmg.skip_vma_uprobe = true;
++
+ 	new_vma = find_vma_prev(mm, addr, &vmg.prev);
+ 	if (new_vma && new_vma->vm_start < addr + len)
+ 		return NULL;	/* should never get here */
+diff --git a/mm/vma.h b/mm/vma.h
+index 149926e8a6d1ac..7e8aa136e6f770 100644
+--- a/mm/vma.h
++++ b/mm/vma.h
+@@ -19,6 +19,8 @@ struct vma_prepare {
+ 	struct vm_area_struct *insert;
+ 	struct vm_area_struct *remove;
+ 	struct vm_area_struct *remove2;
++
++	bool skip_vma_uprobe :1;
+ };
+ 
+ struct unlink_vma_file_batch {
+@@ -120,6 +122,11 @@ struct vma_merge_struct {
+ 	 */
+ 	bool give_up_on_oom :1;
+ 
++	/*
++	 * If set, skip uprobe_mmap() on the merged vma.
++	 */
++	bool skip_vma_uprobe :1;
++
+ 	/* Internal flags set during merge process: */
+ 
+ 	/*
+diff --git a/net/atm/common.c b/net/atm/common.c
+index 9b75699992ff92..d7f7976ea13ac6 100644
+--- a/net/atm/common.c
++++ b/net/atm/common.c
+@@ -635,6 +635,7 @@ int vcc_sendmsg(struct socket *sock, struct msghdr *m, size_t size)
+ 
+ 	skb->dev = NULL; /* for paths shared with net_device interfaces */
+ 	if (!copy_from_iter_full(skb_put(skb, size), size, &m->msg_iter)) {
++		atm_return_tx(vcc, skb);
+ 		kfree_skb(skb);
+ 		error = -EFAULT;
+ 		goto out;
+diff --git a/net/atm/lec.c b/net/atm/lec.c
+index ded2f0df2ee664..07c01b6f2b2bed 100644
+--- a/net/atm/lec.c
++++ b/net/atm/lec.c
+@@ -124,6 +124,7 @@ static unsigned char bus_mac[ETH_ALEN] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
+ 
+ /* Device structures */
+ static struct net_device *dev_lec[MAX_LEC_ITF];
++static DEFINE_MUTEX(lec_mutex);
+ 
+ #if IS_ENABLED(CONFIG_BRIDGE)
+ static void lec_handle_bridge(struct sk_buff *skb, struct net_device *dev)
+@@ -685,6 +686,7 @@ static int lec_vcc_attach(struct atm_vcc *vcc, void __user *arg)
+ 	int bytes_left;
+ 	struct atmlec_ioc ioc_data;
+ 
++	lockdep_assert_held(&lec_mutex);
+ 	/* Lecd must be up in this case */
+ 	bytes_left = copy_from_user(&ioc_data, arg, sizeof(struct atmlec_ioc));
+ 	if (bytes_left != 0)
+@@ -710,6 +712,7 @@ static int lec_vcc_attach(struct atm_vcc *vcc, void __user *arg)
+ 
+ static int lec_mcast_attach(struct atm_vcc *vcc, int arg)
+ {
++	lockdep_assert_held(&lec_mutex);
+ 	if (arg < 0 || arg >= MAX_LEC_ITF)
+ 		return -EINVAL;
+ 	arg = array_index_nospec(arg, MAX_LEC_ITF);
+@@ -725,6 +728,7 @@ static int lecd_attach(struct atm_vcc *vcc, int arg)
+ 	int i;
+ 	struct lec_priv *priv;
+ 
++	lockdep_assert_held(&lec_mutex);
+ 	if (arg < 0)
+ 		arg = 0;
+ 	if (arg >= MAX_LEC_ITF)
+@@ -742,6 +746,7 @@ static int lecd_attach(struct atm_vcc *vcc, int arg)
+ 		snprintf(dev_lec[i]->name, IFNAMSIZ, "lec%d", i);
+ 		if (register_netdev(dev_lec[i])) {
+ 			free_netdev(dev_lec[i]);
++			dev_lec[i] = NULL;
+ 			return -EINVAL;
+ 		}
+ 
+@@ -904,7 +909,6 @@ static void *lec_itf_walk(struct lec_state *state, loff_t *l)
+ 	v = (dev && netdev_priv(dev)) ?
+ 		lec_priv_walk(state, l, netdev_priv(dev)) : NULL;
+ 	if (!v && dev) {
+-		dev_put(dev);
+ 		/* Partial state reset for the next time we get called */
+ 		dev = NULL;
+ 	}
+@@ -928,6 +932,7 @@ static void *lec_seq_start(struct seq_file *seq, loff_t *pos)
+ {
+ 	struct lec_state *state = seq->private;
+ 
++	mutex_lock(&lec_mutex);
+ 	state->itf = 0;
+ 	state->dev = NULL;
+ 	state->locked = NULL;
+@@ -945,8 +950,9 @@ static void lec_seq_stop(struct seq_file *seq, void *v)
+ 	if (state->dev) {
+ 		spin_unlock_irqrestore(&state->locked->lec_arp_lock,
+ 				       state->flags);
+-		dev_put(state->dev);
++		state->dev = NULL;
+ 	}
++	mutex_unlock(&lec_mutex);
+ }
+ 
+ static void *lec_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+@@ -1003,6 +1009,7 @@ static int lane_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ 		return -ENOIOCTLCMD;
+ 	}
+ 
++	mutex_lock(&lec_mutex);
+ 	switch (cmd) {
+ 	case ATMLEC_CTRL:
+ 		err = lecd_attach(vcc, (int)arg);
+@@ -1017,6 +1024,7 @@ static int lane_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ 		break;
+ 	}
+ 
++	mutex_unlock(&lec_mutex);
+ 	return err;
+ }
+ 
+diff --git a/net/atm/raw.c b/net/atm/raw.c
+index 2b5f78a7ec3e4a..1e6511ec842cbc 100644
+--- a/net/atm/raw.c
++++ b/net/atm/raw.c
+@@ -36,7 +36,7 @@ static void atm_pop_raw(struct atm_vcc *vcc, struct sk_buff *skb)
+ 
+ 	pr_debug("(%d) %d -= %d\n",
+ 		 vcc->vci, sk_wmem_alloc_get(sk), ATM_SKB(skb)->acct_truesize);
+-	WARN_ON(refcount_sub_and_test(ATM_SKB(skb)->acct_truesize, &sk->sk_wmem_alloc));
++	atm_return_tx(vcc, skb);
+ 	dev_kfree_skb_any(skb);
+ 	sk->sk_write_space(sk);
+ }
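
Both atm fixes restore charge/return symmetry on the socket's write-memory
accounting: vcc_sendmsg() must hand back the truesize it charged before
freeing the skb when the copy from userspace fails, and atm_pop_raw() now
reuses the atm_return_tx() helper instead of open-coding the refcount
subtraction. A tiny accounting model of the error path:

#include <stdio.h>

static int wmem_alloc;	/* models sk->sk_wmem_alloc */

static void charge(int truesize)
{
	wmem_alloc += truesize;
}

static void uncharge(int truesize)	/* models atm_return_tx() */
{
	wmem_alloc -= truesize;
}

static int send_one(int truesize, int copy_fails)
{
	charge(truesize);
	if (copy_fails) {
		uncharge(truesize);	/* the call the fix adds */
		return -1;		/* ...then free the skb */
	}
	uncharge(truesize);		/* normal completion pops it too */
	return 0;
}

int main(void)
{
	send_one(512, 1);
	printf("leaked charge: %d\n", wmem_alloc);	/* 0 with the fix */
	return 0;
}
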
+diff --git a/net/bridge/br_mst.c b/net/bridge/br_mst.c
+index 1820f09ff59ceb..3f24b4ee49c274 100644
+--- a/net/bridge/br_mst.c
++++ b/net/bridge/br_mst.c
+@@ -80,10 +80,10 @@ static void br_mst_vlan_set_state(struct net_bridge_vlan_group *vg,
+ 	if (br_vlan_get_state(v) == state)
+ 		return;
+ 
+-	br_vlan_set_state(v, state);
+-
+ 	if (v->vid == vg->pvid)
+ 		br_vlan_set_pvid_state(vg, state);
++
++	br_vlan_set_state(v, state);
+ }
+ 
+ int br_mst_set_state(struct net_bridge_port *p, u16 msti, u8 state,
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index dcbf058de1e3b4..7e0b2362b9ee5f 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -2105,12 +2105,17 @@ static void __br_multicast_enable_port_ctx(struct net_bridge_mcast_port *pmctx)
+ 	}
+ }
+ 
+-void br_multicast_enable_port(struct net_bridge_port *port)
++static void br_multicast_enable_port_ctx(struct net_bridge_mcast_port *pmctx)
+ {
+-	struct net_bridge *br = port->br;
++	struct net_bridge *br = pmctx->port->br;
+ 
+ 	spin_lock_bh(&br->multicast_lock);
+-	__br_multicast_enable_port_ctx(&port->multicast_ctx);
++	if (br_multicast_port_ctx_is_vlan(pmctx) &&
++	    !(pmctx->vlan->priv_flags & BR_VLFLAG_MCAST_ENABLED)) {
++		spin_unlock_bh(&br->multicast_lock);
++		return;
++	}
++	__br_multicast_enable_port_ctx(pmctx);
+ 	spin_unlock_bh(&br->multicast_lock);
+ }
+ 
+@@ -2137,11 +2142,67 @@ static void __br_multicast_disable_port_ctx(struct net_bridge_mcast_port *pmctx)
+ 	br_multicast_rport_del_notify(pmctx, del);
+ }
+ 
++static void br_multicast_disable_port_ctx(struct net_bridge_mcast_port *pmctx)
++{
++	struct net_bridge *br = pmctx->port->br;
++
++	spin_lock_bh(&br->multicast_lock);
++	if (br_multicast_port_ctx_is_vlan(pmctx) &&
++	    !(pmctx->vlan->priv_flags & BR_VLFLAG_MCAST_ENABLED)) {
++		spin_unlock_bh(&br->multicast_lock);
++		return;
++	}
++
++	__br_multicast_disable_port_ctx(pmctx);
++	spin_unlock_bh(&br->multicast_lock);
++}
++
++static void br_multicast_toggle_port(struct net_bridge_port *port, bool on)
++{
++#if IS_ENABLED(CONFIG_BRIDGE_VLAN_FILTERING)
++	if (br_opt_get(port->br, BROPT_MCAST_VLAN_SNOOPING_ENABLED)) {
++		struct net_bridge_vlan_group *vg;
++		struct net_bridge_vlan *vlan;
++
++		rcu_read_lock();
++		vg = nbp_vlan_group_rcu(port);
++		if (!vg) {
++			rcu_read_unlock();
++			return;
++		}
++
++		/* iterate each vlan, toggle vlan multicast context */
++		list_for_each_entry_rcu(vlan, &vg->vlan_list, vlist) {
++			struct net_bridge_mcast_port *pmctx =
++						&vlan->port_mcast_ctx;
++			u8 state = br_vlan_get_state(vlan);
++			/* enable vlan multicast context when state is
++			 * LEARNING or FORWARDING
++			 */
++			if (on && br_vlan_state_allowed(state, true))
++				br_multicast_enable_port_ctx(pmctx);
++			else
++				br_multicast_disable_port_ctx(pmctx);
++		}
++		rcu_read_unlock();
++		return;
++	}
++#endif
++	/* toggle port multicast context when vlan snooping is disabled */
++	if (on)
++		br_multicast_enable_port_ctx(&port->multicast_ctx);
++	else
++		br_multicast_disable_port_ctx(&port->multicast_ctx);
++}
++
++void br_multicast_enable_port(struct net_bridge_port *port)
++{
++	br_multicast_toggle_port(port, true);
++}
++
+ void br_multicast_disable_port(struct net_bridge_port *port)
+ {
+-	spin_lock_bh(&port->br->multicast_lock);
+-	__br_multicast_disable_port_ctx(&port->multicast_ctx);
+-	spin_unlock_bh(&port->br->multicast_lock);
++	br_multicast_toggle_port(port, false);
+ }
+ 
+ static int __grp_src_delete_marked(struct net_bridge_port_group *pg)
+@@ -4211,6 +4272,32 @@ static void __br_multicast_stop(struct net_bridge_mcast *brmctx)
+ #endif
+ }
+ 
++void br_multicast_update_vlan_mcast_ctx(struct net_bridge_vlan *v, u8 state)
++{
++#if IS_ENABLED(CONFIG_BRIDGE_VLAN_FILTERING)
++	struct net_bridge *br;
++
++	if (!br_vlan_should_use(v))
++		return;
++
++	if (br_vlan_is_master(v))
++		return;
++
++	br = v->port->br;
++
++	if (!br_opt_get(br, BROPT_MCAST_VLAN_SNOOPING_ENABLED))
++		return;
++
++	if (br_vlan_state_allowed(state, true))
++		br_multicast_enable_port_ctx(&v->port_mcast_ctx);
++
++	/* Multicast is not disabled for the vlan when it goes into
++	 * blocking state because the timers will expire and stop by
++	 * themselves without sending more queries.
++	 */
++#endif
++}
++
+ void br_multicast_toggle_one_vlan(struct net_bridge_vlan *vlan, bool on)
+ {
+ 	struct net_bridge *br;
+@@ -4304,9 +4391,9 @@ int br_multicast_toggle_vlan_snooping(struct net_bridge *br, bool on,
+ 		__br_multicast_open(&br->multicast_ctx);
+ 	list_for_each_entry(p, &br->port_list, list) {
+ 		if (on)
+-			br_multicast_disable_port(p);
++			br_multicast_disable_port_ctx(&p->multicast_ctx);
+ 		else
+-			br_multicast_enable_port(p);
++			br_multicast_enable_port_ctx(&p->multicast_ctx);
+ 	}
+ 
+ 	list_for_each_entry(vlan, &vg->vlan_list, vlist)
+diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
+index 4715a8d6dc3266..c41d315b09d327 100644
+--- a/net/bridge/br_private.h
++++ b/net/bridge/br_private.h
+@@ -1052,6 +1052,7 @@ void br_multicast_port_ctx_init(struct net_bridge_port *port,
+ 				struct net_bridge_vlan *vlan,
+ 				struct net_bridge_mcast_port *pmctx);
+ void br_multicast_port_ctx_deinit(struct net_bridge_mcast_port *pmctx);
++void br_multicast_update_vlan_mcast_ctx(struct net_bridge_vlan *v, u8 state);
+ void br_multicast_toggle_one_vlan(struct net_bridge_vlan *vlan, bool on);
+ int br_multicast_toggle_vlan_snooping(struct net_bridge *br, bool on,
+ 				      struct netlink_ext_ack *extack);
+@@ -1502,6 +1503,11 @@ static inline void br_multicast_port_ctx_deinit(struct net_bridge_mcast_port *pm
+ {
+ }
+ 
++static inline void br_multicast_update_vlan_mcast_ctx(struct net_bridge_vlan *v,
++						      u8 state)
++{
++}
++
+ static inline void br_multicast_toggle_one_vlan(struct net_bridge_vlan *vlan,
+ 						bool on)
+ {
+@@ -1862,7 +1868,9 @@ bool br_vlan_global_opts_can_enter_range(const struct net_bridge_vlan *v_curr,
+ bool br_vlan_global_opts_fill(struct sk_buff *skb, u16 vid, u16 vid_range,
+ 			      const struct net_bridge_vlan *v_opts);
+ 
+-/* vlan state manipulation helpers using *_ONCE to annotate lock-free access */
++/* vlan state manipulation helpers using *_ONCE to annotate lock-free access,
++ * while br_vlan_set_state() may access data protected by multicast_lock.
++ */
+ static inline u8 br_vlan_get_state(const struct net_bridge_vlan *v)
+ {
+ 	return READ_ONCE(v->state);
+@@ -1871,6 +1879,7 @@ static inline u8 br_vlan_get_state(const struct net_bridge_vlan *v)
+ static inline void br_vlan_set_state(struct net_bridge_vlan *v, u8 state)
+ {
+ 	WRITE_ONCE(v->state, state);
++	br_multicast_update_vlan_mcast_ctx(v, state);
+ }
+ 
+ static inline u8 br_vlan_get_pvid_state(const struct net_bridge_vlan_group *vg)
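
With per-vlan snooping enabled, the bridge now walks the port's vlan list
and toggles one multicast context per vlan (skipping vlans whose STP state
disallows it) instead of flipping the single per-port context. A condensed
dispatch sketch, stripped of the RCU and locking the real code needs:

#include <stdbool.h>
#include <stdio.h>

struct vlan { int vid; bool forwarding; };

static void ctx_toggle(const char *what, int id, bool on)
{
	printf("%s %d: %s\n", what, id, on ? "enable" : "disable");
}

/* Per-vlan contexts when snooping; the port context otherwise. */
static void toggle_port(bool vlan_snooping, struct vlan *v, int n, bool on)
{
	if (vlan_snooping) {
		for (int i = 0; i < n; i++)
			ctx_toggle("vlan", v[i].vid,
				   on && v[i].forwarding);
		return;
	}
	ctx_toggle("port", 0, on);
}

int main(void)
{
	struct vlan vlans[] = { { 10, true }, { 20, false } };

	toggle_port(true, vlans, 2, true);	/* vlan 20 stays disabled */
	return 0;
}
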
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 2b20aadaf9268d..0c66ee5358812b 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -9863,6 +9863,7 @@ int netif_xdp_propagate(struct net_device *dev, struct netdev_bpf *bpf)
+ 
+ 	return dev->netdev_ops->ndo_bpf(dev, bpf);
+ }
++EXPORT_SYMBOL_GPL(netif_xdp_propagate);
+ 
+ u32 dev_xdp_prog_id(struct net_device *dev, enum bpf_xdp_mode mode)
+ {
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 357d26b76c22d9..34f91c3aacb2f1 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -3233,6 +3233,13 @@ static const struct bpf_func_proto bpf_skb_vlan_pop_proto = {
+ 	.arg1_type      = ARG_PTR_TO_CTX,
+ };
+ 
++static void bpf_skb_change_protocol(struct sk_buff *skb, u16 proto)
++{
++	skb->protocol = htons(proto);
++	if (skb_valid_dst(skb))
++		skb_dst_drop(skb);
++}
++
+ static int bpf_skb_generic_push(struct sk_buff *skb, u32 off, u32 len)
+ {
+ 	/* Caller already did skb_cow() with len as headroom,
+@@ -3329,7 +3336,7 @@ static int bpf_skb_proto_4_to_6(struct sk_buff *skb)
+ 		}
+ 	}
+ 
+-	skb->protocol = htons(ETH_P_IPV6);
++	bpf_skb_change_protocol(skb, ETH_P_IPV6);
+ 	skb_clear_hash(skb);
+ 
+ 	return 0;
+@@ -3359,7 +3366,7 @@ static int bpf_skb_proto_6_to_4(struct sk_buff *skb)
+ 		}
+ 	}
+ 
+-	skb->protocol = htons(ETH_P_IP);
++	bpf_skb_change_protocol(skb, ETH_P_IP);
+ 	skb_clear_hash(skb);
+ 
+ 	return 0;
+@@ -3550,10 +3557,10 @@ static int bpf_skb_net_grow(struct sk_buff *skb, u32 off, u32 len_diff,
+ 		/* Match skb->protocol to new outer l3 protocol */
+ 		if (skb->protocol == htons(ETH_P_IP) &&
+ 		    flags & BPF_F_ADJ_ROOM_ENCAP_L3_IPV6)
+-			skb->protocol = htons(ETH_P_IPV6);
++			bpf_skb_change_protocol(skb, ETH_P_IPV6);
+ 		else if (skb->protocol == htons(ETH_P_IPV6) &&
+ 			 flags & BPF_F_ADJ_ROOM_ENCAP_L3_IPV4)
+-			skb->protocol = htons(ETH_P_IP);
++			bpf_skb_change_protocol(skb, ETH_P_IP);
+ 	}
+ 
+ 	if (skb_is_gso(skb)) {
+@@ -3606,10 +3613,10 @@ static int bpf_skb_net_shrink(struct sk_buff *skb, u32 off, u32 len_diff,
+ 	/* Match skb->protocol to new outer l3 protocol */
+ 	if (skb->protocol == htons(ETH_P_IP) &&
+ 	    flags & BPF_F_ADJ_ROOM_DECAP_L3_IPV6)
+-		skb->protocol = htons(ETH_P_IPV6);
++		bpf_skb_change_protocol(skb, ETH_P_IPV6);
+ 	else if (skb->protocol == htons(ETH_P_IPV6) &&
+ 		 flags & BPF_F_ADJ_ROOM_DECAP_L3_IPV4)
+-		skb->protocol = htons(ETH_P_IP);
++		bpf_skb_change_protocol(skb, ETH_P_IP);
+ 
+ 	if (skb_is_gso(skb)) {
+ 		struct skb_shared_info *shinfo = skb_shinfo(skb);
+diff --git a/net/core/page_pool.c b/net/core/page_pool.c
+index 2d9c51f480fb5f..3eabe78c93f4c2 100644
+--- a/net/core/page_pool.c
++++ b/net/core/page_pool.c
+@@ -836,6 +836,10 @@ static bool page_pool_napi_local(const struct page_pool *pool)
+ 	const struct napi_struct *napi;
+ 	u32 cpuid;
+ 
++	/* On PREEMPT_RT the softirq can be preempted by the consumer */
++	if (IS_ENABLED(CONFIG_PREEMPT_RT))
++		return false;
++
+ 	if (unlikely(!in_softirq()))
+ 		return false;
+ 
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 74a2d886a35b51..86cc58376392b7 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -6220,9 +6220,6 @@ int skb_ensure_writable(struct sk_buff *skb, unsigned int write_len)
+ 	if (!pskb_may_pull(skb, write_len))
+ 		return -ENOMEM;
+ 
+-	if (!skb_frags_readable(skb))
+-		return -EFAULT;
+-
+ 	if (!skb_cloned(skb) || skb_clone_writable(skb, write_len))
+ 		return 0;
+ 
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 6d689918c2b390..34c51eb1a14fb4 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -690,7 +690,8 @@ static void sk_psock_backlog(struct work_struct *work)
+ 			if (ret <= 0) {
+ 				if (ret == -EAGAIN) {
+ 					sk_psock_skb_state(psock, state, len, off);
+-
++					/* Restore redir info we cleared before */
++					skb_bpf_set_redir(skb, psock->sk, ingress);
+ 					/* Delay slightly to prioritize any
+ 					 * other work that might be here.
+ 					 */
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 5034d0fbd4a427..3e8c548cb1f878 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -4004,7 +4004,7 @@ static int assign_proto_idx(struct proto *prot)
+ {
+ 	prot->inuse_idx = find_first_zero_bit(proto_inuse_idx, PROTO_INUSE_NR);
+ 
+-	if (unlikely(prot->inuse_idx == PROTO_INUSE_NR - 1)) {
++	if (unlikely(prot->inuse_idx == PROTO_INUSE_NR)) {
+ 		pr_err("PROTO_INUSE_NR exhausted\n");
+ 		return -ENOSPC;
+ 	}
+@@ -4015,7 +4015,7 @@ static int assign_proto_idx(struct proto *prot)
+ 
+ static void release_proto_idx(struct proto *prot)
+ {
+-	if (prot->inuse_idx != PROTO_INUSE_NR - 1)
++	if (prot->inuse_idx != PROTO_INUSE_NR)
+ 		clear_bit(prot->inuse_idx, proto_inuse_idx);
+ }
+ #else
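
The sock.c change fixes an off-by-one in the exhaustion test:
find_first_zero_bit() returns the bitmap size itself (not size - 1) when
every bit is set, so comparing against PROTO_INUSE_NR - 1 both rejected a
legitimately free last slot and failed to catch true exhaustion. A small
demonstration of the convention:

#include <stdio.h>

#define NBITS 8

/* First zero bit, or NBITS when all bits are set, mirroring the
 * kernel's find_first_zero_bit() return convention. */
static unsigned int find_first_zero(unsigned char map)
{
	unsigned int i;

	for (i = 0; i < NBITS; i++)
		if (!(map & (1u << i)))
			return i;
	return NBITS;
}

int main(void)
{
	unsigned int idx = find_first_zero(0xff);

	if (idx == NBITS - 1)	/* old test: never fires on a full map */
		printf("wrong check fires\n");
	if (idx == NBITS)	/* corrected exhaustion test */
		printf("bitmap exhausted at %u\n", idx);
	return 0;
}
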
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 753704f75b2c65..5d7c7efea66cc6 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -189,7 +189,11 @@ const __u8 ip_tos2prio[16] = {
+ EXPORT_SYMBOL(ip_tos2prio);
+ 
+ static DEFINE_PER_CPU(struct rt_cache_stat, rt_cache_stat);
++#ifndef CONFIG_PREEMPT_RT
+ #define RT_CACHE_STAT_INC(field) raw_cpu_inc(rt_cache_stat.field)
++#else
++#define RT_CACHE_STAT_INC(field) this_cpu_inc(rt_cache_stat.field)
++#endif
+ 
+ #ifdef CONFIG_PROC_FS
+ static void *rt_cache_seq_start(struct seq_file *seq, loff_t *pos)
+diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
+index 1a6b1bc5424514..dd8b60f5c5553e 100644
+--- a/net/ipv4/tcp_fastopen.c
++++ b/net/ipv4/tcp_fastopen.c
+@@ -3,6 +3,7 @@
+ #include <linux/tcp.h>
+ #include <linux/rcupdate.h>
+ #include <net/tcp.h>
++#include <net/busy_poll.h>
+ 
+ void tcp_fastopen_init_key_once(struct net *net)
+ {
+@@ -279,6 +280,8 @@ static struct sock *tcp_fastopen_create_child(struct sock *sk,
+ 
+ 	refcount_set(&req->rsk_refcnt, 2);
+ 
++	sk_mark_napi_id_set(child, skb);
++
+ 	/* Now finish processing the fastopen child socket. */
+ 	tcp_init_transfer(child, BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB, skb);
+ 
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index a35018e2d0ba27..bce2a111cc9e05 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -664,10 +664,12 @@ EXPORT_IPV6_MOD(tcp_initialize_rcv_mss);
+  */
+ static void tcp_rcv_rtt_update(struct tcp_sock *tp, u32 sample, int win_dep)
+ {
+-	u32 new_sample = tp->rcv_rtt_est.rtt_us;
+-	long m = sample;
++	u32 new_sample, old_sample = tp->rcv_rtt_est.rtt_us;
++	long m = sample << 3;
+ 
+-	if (new_sample != 0) {
++	if (old_sample == 0 || m < old_sample) {
++		new_sample = m;
++	} else {
+ 		/* If we sample in larger samples in the non-timestamp
+ 		 * case, we could grossly overestimate the RTT especially
+ 		 * with chatty applications or bulk transfer apps which
+@@ -678,17 +680,9 @@ static void tcp_rcv_rtt_update(struct tcp_sock *tp, u32 sample, int win_dep)
+ 		 * else with timestamps disabled convergence takes too
+ 		 * long.
+ 		 */
+-		if (!win_dep) {
+-			m -= (new_sample >> 3);
+-			new_sample += m;
+-		} else {
+-			m <<= 3;
+-			if (m < new_sample)
+-				new_sample = m;
+-		}
+-	} else {
+-		/* No previous measure. */
+-		new_sample = m << 3;
++		if (win_dep)
++			return;
++		new_sample = old_sample - (old_sample >> 3) + sample;
+ 	}
+ 
+ 	tp->rcv_rtt_est.rtt_us = new_sample;
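
The rewritten tcp_rcv_rtt_update() keeps rcv_rtt_est.rtt_us in fixed point
scaled by 8. A smaller sample always replaces the estimate outright; on the
non-window-dependent path a larger sample feeds an EWMA with gain 1/8,
new = old - old/8 + sample in the scaled domain, i.e. the estimate moves one
eighth of the way toward each sample. A runnable check of the arithmetic:

#include <stdio.h>

/* rtt is stored scaled by 8; 'sample' arrives unscaled (microseconds). */
static unsigned int rcv_rtt_update(unsigned int rtt, unsigned int sample,
				   int win_dep)
{
	unsigned int m = sample << 3;

	if (rtt == 0 || m < rtt)
		return m;		/* first sample, or a new minimum */
	if (win_dep)
		return rtt;		/* window-dependent: minima only */
	return rtt - (rtt >> 3) + sample;	/* EWMA, gain 1/8 */
}

int main(void)
{
	unsigned int samples[] = { 100, 120, 90, 200 };
	unsigned int rtt = 0;
	unsigned int i;

	for (i = 0; i < 4; i++) {
		rtt = rcv_rtt_update(rtt, samples[i], 0);
		printf("sample %3u us -> estimate %u us\n",
		       samples[i], rtt >> 3);
	}
	return 0;
}
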
+@@ -712,7 +706,7 @@ static inline void tcp_rcv_rtt_measure(struct tcp_sock *tp)
+ 	tp->rcv_rtt_est.time = tp->tcp_mstamp;
+ }
+ 
+-static s32 tcp_rtt_tsopt_us(const struct tcp_sock *tp)
++static s32 tcp_rtt_tsopt_us(const struct tcp_sock *tp, u32 min_delta)
+ {
+ 	u32 delta, delta_us;
+ 
+@@ -722,7 +716,7 @@ static s32 tcp_rtt_tsopt_us(const struct tcp_sock *tp)
+ 
+ 	if (likely(delta < INT_MAX / (USEC_PER_SEC / TCP_TS_HZ))) {
+ 		if (!delta)
+-			delta = 1;
++			delta = min_delta;
+ 		delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ);
+ 		return delta_us;
+ 	}
+@@ -740,9 +734,9 @@ static inline void tcp_rcv_rtt_measure_ts(struct sock *sk,
+ 
+ 	if (TCP_SKB_CB(skb)->end_seq -
+ 	    TCP_SKB_CB(skb)->seq >= inet_csk(sk)->icsk_ack.rcv_mss) {
+-		s32 delta = tcp_rtt_tsopt_us(tp);
++		s32 delta = tcp_rtt_tsopt_us(tp, 0);
+ 
+-		if (delta >= 0)
++		if (delta > 0)
+ 			tcp_rcv_rtt_update(tp, delta, 0);
+ 	}
+ }
+@@ -754,8 +748,7 @@ static inline void tcp_rcv_rtt_measure_ts(struct sock *sk,
+ void tcp_rcv_space_adjust(struct sock *sk)
+ {
+ 	struct tcp_sock *tp = tcp_sk(sk);
+-	u32 copied;
+-	int time;
++	int time, inq, copied;
+ 
+ 	trace_tcp_rcv_space_adjust(sk);
+ 
+@@ -766,6 +759,9 @@ void tcp_rcv_space_adjust(struct sock *sk)
+ 
+ 	/* Number of bytes copied to user in last RTT */
+ 	copied = tp->copied_seq - tp->rcvq_space.seq;
++	/* Number of bytes in receive queue. */
++	inq = tp->rcv_nxt - tp->copied_seq;
++	copied -= inq;
+ 	if (copied <= tp->rcvq_space.space)
+ 		goto new_measure;
+ 
+@@ -2484,20 +2480,33 @@ static inline bool tcp_packet_delayed(const struct tcp_sock *tp)
+ {
+ 	const struct sock *sk = (const struct sock *)tp;
+ 
+-	if (tp->retrans_stamp &&
+-	    tcp_tsopt_ecr_before(tp, tp->retrans_stamp))
+-		return true;  /* got echoed TS before first retransmission */
++	/* Received an echoed timestamp before the first retransmission? */
++	if (tp->retrans_stamp)
++		return tcp_tsopt_ecr_before(tp, tp->retrans_stamp);
++
++	/* We set tp->retrans_stamp upon the first retransmission of a loss
++	 * recovery episode, so normally if tp->retrans_stamp is 0 then no
++	 * retransmission has happened yet (likely due to TSQ, which can cause
++	 * fast retransmits to be delayed). So if snd_una advanced while
++	 * tp->retrans_stamp is 0, then apparently a packet was merely delayed,
++	 * not lost. But there are exceptions where we retransmit but then
++	 * clear tp->retrans_stamp, so we check for those exceptions.
++	 */
+ 
+-	/* Check if nothing was retransmitted (retrans_stamp==0), which may
+-	 * happen in fast recovery due to TSQ. But we ignore zero retrans_stamp
+-	 * in TCP_SYN_SENT, since when we set FLAG_SYN_ACKED we also clear
+-	 * retrans_stamp even if we had retransmitted the SYN.
++	/* (1) For non-SACK connections, tcp_is_non_sack_preventing_reopen()
++	 * clears tp->retrans_stamp when snd_una == high_seq.
+ 	 */
+-	if (!tp->retrans_stamp &&	   /* no record of a retransmit/SYN? */
+-	    sk->sk_state != TCP_SYN_SENT)  /* not the FLAG_SYN_ACKED case? */
+-		return true;  /* nothing was retransmitted */
++	if (!tcp_is_sack(tp) && !before(tp->snd_una, tp->high_seq))
++		return false;
+ 
+-	return false;
++	/* (2) In TCP_SYN_SENT tcp_clean_rtx_queue() clears tp->retrans_stamp
++	 * when FLAG_SYN_ACKED is set, even if the SYN was
++	 * retransmitted.
++	 */
++	if (sk->sk_state == TCP_SYN_SENT)
++		return false;
++
++	return true;	/* tp->retrans_stamp is zero; no retransmit yet */
+ }
+ 
+ /* Undo procedures. */
+@@ -3226,7 +3235,7 @@ static bool tcp_ack_update_rtt(struct sock *sk, const int flag,
+ 	 */
+ 	if (seq_rtt_us < 0 && tp->rx_opt.saw_tstamp &&
+ 	    tp->rx_opt.rcv_tsecr && flag & FLAG_ACKED)
+-		seq_rtt_us = ca_rtt_us = tcp_rtt_tsopt_us(tp);
++		seq_rtt_us = ca_rtt_us = tcp_rtt_tsopt_us(tp, 1);
+ 
+ 	rs->rtt_us = ca_rtt_us; /* RTT of last (S)ACKed packet (or -1) */
+ 	if (seq_rtt_us < 0)
+@@ -6873,6 +6882,9 @@ tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
+ 		if (!tp->srtt_us)
+ 			tcp_synack_rtt_meas(sk, req);
+ 
++		if (tp->rx_opt.tstamp_ok)
++			tp->advmss -= TCPOLEN_TSTAMP_ALIGNED;
++
+ 		if (req) {
+ 			tcp_rcv_synrecv_state_fastopen(sk);
+ 		} else {
+@@ -6898,9 +6910,6 @@ tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
+ 		tp->snd_wnd = ntohs(th->window) << tp->rx_opt.snd_wscale;
+ 		tcp_init_wl(tp, TCP_SKB_CB(skb)->seq);
+ 
+-		if (tp->rx_opt.tstamp_ok)
+-			tp->advmss -= TCPOLEN_TSTAMP_ALIGNED;
+-
+ 		if (!inet_csk(sk)->icsk_ca_ops->cong_control)
+ 			tcp_update_pacing_rate(sk);
+ 
+diff --git a/net/ipv6/calipso.c b/net/ipv6/calipso.c
+index 62618a058b8fad..a247bb93908bf4 100644
+--- a/net/ipv6/calipso.c
++++ b/net/ipv6/calipso.c
+@@ -1207,6 +1207,10 @@ static int calipso_req_setattr(struct request_sock *req,
+ 	struct ipv6_opt_hdr *old, *new;
+ 	struct sock *sk = sk_to_full_sk(req_to_sk(req));
+ 
++	/* sk is NULL for SYN+ACK w/ SYN Cookie */
++	if (!sk)
++		return -ENOMEM;
++
+ 	if (req_inet->ipv6_opt && req_inet->ipv6_opt->hopopt)
+ 		old = req_inet->ipv6_opt->hopopt;
+ 	else
+@@ -1247,6 +1251,10 @@ static void calipso_req_delattr(struct request_sock *req)
+ 	struct ipv6_txoptions *txopts;
+ 	struct sock *sk = sk_to_full_sk(req_to_sk(req));
+ 
++	/* sk is NULL for SYN+ACK w/ SYN Cookie */
++	if (!sk)
++		return;
++
+ 	if (!req_inet->ipv6_opt || !req_inet->ipv6_opt->hopopt)
+ 		return;
+ 
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 9f683f838431da..acfde525fad2fa 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -2904,7 +2904,7 @@ static int ieee80211_scan(struct wiphy *wiphy,
+ 		 * the frames sent while scanning on other channel will be
+ 		 * lost)
+ 		 */
+-		if (sdata->deflink.u.ap.beacon &&
++		if (ieee80211_num_beaconing_links(sdata) &&
+ 		    (!(wiphy->features & NL80211_FEATURE_AP_SCAN) ||
+ 		     !(req->flags & NL80211_SCAN_FLAG_AP)))
+ 			return -EOPNOTSUPP;
+diff --git a/net/mac80211/debugfs_sta.c b/net/mac80211/debugfs_sta.c
+index a8948f4d983e5e..49061bd4151bcf 100644
+--- a/net/mac80211/debugfs_sta.c
++++ b/net/mac80211/debugfs_sta.c
+@@ -150,12 +150,6 @@ static ssize_t sta_aqm_read(struct file *file, char __user *userbuf,
+ 	spin_lock_bh(&local->fq.lock);
+ 	rcu_read_lock();
+ 
+-	p += scnprintf(p,
+-		       bufsz + buf - p,
+-		       "target %uus interval %uus ecn %s\n",
+-		       codel_time_to_us(sta->cparams.target),
+-		       codel_time_to_us(sta->cparams.interval),
+-		       sta->cparams.ecn ? "yes" : "no");
+ 	p += scnprintf(p,
+ 		       bufsz + buf - p,
+ 		       "tid ac backlog-bytes backlog-packets new-flows drops marks overlimit collisions tx-bytes tx-packets flags\n");
+diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c
+index c94a9c7ca960ef..91444301a84a4c 100644
+--- a/net/mac80211/mesh_hwmp.c
++++ b/net/mac80211/mesh_hwmp.c
+@@ -636,7 +636,7 @@ static void hwmp_preq_frame_process(struct ieee80211_sub_if_data *sdata,
+ 				mesh_path_add_gate(mpath);
+ 		}
+ 		rcu_read_unlock();
+-	} else {
++	} else if (ifmsh->mshcfg.dot11MeshForwarding) {
+ 		rcu_read_lock();
+ 		mpath = mesh_path_lookup(sdata, target_addr);
+ 		if (mpath) {
+@@ -654,6 +654,8 @@ static void hwmp_preq_frame_process(struct ieee80211_sub_if_data *sdata,
+ 			}
+ 		}
+ 		rcu_read_unlock();
++	} else {
++		forward = false;
+ 	}
+ 
+ 	if (reply) {
+@@ -671,7 +673,7 @@ static void hwmp_preq_frame_process(struct ieee80211_sub_if_data *sdata,
+ 		}
+ 	}
+ 
+-	if (forward && ifmsh->mshcfg.dot11MeshForwarding) {
++	if (forward) {
+ 		u32 preq_id;
+ 		u8 hopcount;
+ 
+diff --git a/net/mac80211/rate.c b/net/mac80211/rate.c
+index 0d056db9f81e63..6a193278005410 100644
+--- a/net/mac80211/rate.c
++++ b/net/mac80211/rate.c
+@@ -990,8 +990,6 @@ int rate_control_set_rates(struct ieee80211_hw *hw,
+ 	if (sta->uploaded)
+ 		drv_sta_rate_tbl_update(hw_to_local(hw), sta->sdata, pubsta);
+ 
+-	ieee80211_sta_set_expected_throughput(pubsta, sta_get_expected_throughput(sta));
+-
+ 	return 0;
+ }
+ EXPORT_SYMBOL(rate_control_set_rates);
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index 248e1f63bf7397..84b18be1f0b16a 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -18,7 +18,6 @@
+ #include <linux/timer.h>
+ #include <linux/rtnetlink.h>
+ 
+-#include <net/codel.h>
+ #include <net/mac80211.h>
+ #include "ieee80211_i.h"
+ #include "driver-ops.h"
+@@ -701,12 +700,6 @@ __sta_info_alloc(struct ieee80211_sub_if_data *sdata,
+ 		}
+ 	}
+ 
+-	sta->cparams.ce_threshold = CODEL_DISABLED_THRESHOLD;
+-	sta->cparams.target = MS2TIME(20);
+-	sta->cparams.interval = MS2TIME(100);
+-	sta->cparams.ecn = true;
+-	sta->cparams.ce_threshold_selector = 0;
+-	sta->cparams.ce_threshold_mask = 0;
+ 
+ 	sta_dbg(sdata, "Allocated STA %pM\n", sta->sta.addr);
+ 
+@@ -2905,27 +2898,6 @@ unsigned long ieee80211_sta_last_active(struct sta_info *sta)
+ 	return sta->deflink.status_stats.last_ack;
+ }
+ 
+-static void sta_update_codel_params(struct sta_info *sta, u32 thr)
+-{
+-	if (thr && thr < STA_SLOW_THRESHOLD * sta->local->num_sta) {
+-		sta->cparams.target = MS2TIME(50);
+-		sta->cparams.interval = MS2TIME(300);
+-		sta->cparams.ecn = false;
+-	} else {
+-		sta->cparams.target = MS2TIME(20);
+-		sta->cparams.interval = MS2TIME(100);
+-		sta->cparams.ecn = true;
+-	}
+-}
+-
+-void ieee80211_sta_set_expected_throughput(struct ieee80211_sta *pubsta,
+-					   u32 thr)
+-{
+-	struct sta_info *sta = container_of(pubsta, struct sta_info, sta);
+-
+-	sta_update_codel_params(sta, thr);
+-}
+-
+ int ieee80211_sta_allocate_link(struct sta_info *sta, unsigned int link_id)
+ {
+ 	struct ieee80211_sub_if_data *sdata = sta->sdata;
+diff --git a/net/mac80211/sta_info.h b/net/mac80211/sta_info.h
+index 07b7ec39a52f9a..7a95d8d34fca89 100644
+--- a/net/mac80211/sta_info.h
++++ b/net/mac80211/sta_info.h
+@@ -466,14 +466,6 @@ struct ieee80211_fragment_cache {
+ 	unsigned int next;
+ };
+ 
+-/*
+- * The bandwidth threshold below which the per-station CoDel parameters will be
+- * scaled to be more lenient (to prevent starvation of slow stations). This
+- * value will be scaled by the number of active stations when it is being
+- * applied.
+- */
+-#define STA_SLOW_THRESHOLD 6000 /* 6 Mbps */
+-
+ /**
+  * struct link_sta_info - Link STA information
+  * All link specific sta info are stored here for reference. This can be
+@@ -626,7 +618,6 @@ struct link_sta_info {
+  * @sta: station information we share with the driver
+  * @sta_state: duplicates information about station state (for debug)
+  * @rcu_head: RCU head used for freeing this station struct
+- * @cparams: CoDel parameters for this station.
+  * @reserved_tid: reserved TID (if any, otherwise IEEE80211_TID_UNRESERVED)
+  * @amsdu_mesh_control: track the mesh A-MSDU format used by the peer:
+  *
+@@ -717,8 +708,6 @@ struct sta_info {
+ 	struct dentry *debugfs_dir;
+ #endif
+ 
+-	struct codel_params cparams;
+-
+ 	u8 reserved_tid;
+ 	s8 amsdu_mesh_control;
+ 
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 20179db88c4a6c..695db38ccfb41a 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -1402,16 +1402,9 @@ static struct sk_buff *fq_tin_dequeue_func(struct fq *fq,
+ 
+ 	local = container_of(fq, struct ieee80211_local, fq);
+ 	txqi = container_of(tin, struct txq_info, tin);
++	cparams = &local->cparams;
+ 	cstats = &txqi->cstats;
+ 
+-	if (txqi->txq.sta) {
+-		struct sta_info *sta = container_of(txqi->txq.sta,
+-						    struct sta_info, sta);
+-		cparams = &sta->cparams;
+-	} else {
+-		cparams = &local->cparams;
+-	}
+-
+ 	if (flow == &tin->default_flow)
+ 		cvars = &txqi->def_cvars;
+ 	else
+@@ -4526,8 +4519,10 @@ netdev_tx_t ieee80211_subif_start_xmit(struct sk_buff *skb,
+ 						     IEEE80211_TX_CTRL_MLO_LINK_UNSPEC,
+ 						     NULL);
+ 	} else if (ieee80211_vif_is_mld(&sdata->vif) &&
+-		   sdata->vif.type == NL80211_IFTYPE_AP &&
+-		   !ieee80211_hw_check(&sdata->local->hw, MLO_MCAST_MULTI_LINK_TX)) {
++		   ((sdata->vif.type == NL80211_IFTYPE_AP &&
++		     !ieee80211_hw_check(&sdata->local->hw, MLO_MCAST_MULTI_LINK_TX)) ||
++		    (sdata->vif.type == NL80211_IFTYPE_AP_VLAN &&
++		     !sdata->wdev.use_4addr))) {
+ 		ieee80211_mlo_multicast_tx(dev, skb);
+ 	} else {
+ normal:
+diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c
+index 1f63b32d76d678..aab2e79fcff045 100644
+--- a/net/mpls/af_mpls.c
++++ b/net/mpls/af_mpls.c
+@@ -81,8 +81,8 @@ static struct mpls_route *mpls_route_input_rcu(struct net *net, unsigned index)
+ 
+ 	if (index < net->mpls.platform_labels) {
+ 		struct mpls_route __rcu **platform_label =
+-			rcu_dereference(net->mpls.platform_label);
+-		rt = rcu_dereference(platform_label[index]);
++			rcu_dereference_rtnl(net->mpls.platform_label);
++		rt = rcu_dereference_rtnl(platform_label[index]);
+ 	}
+ 	return rt;
+ }
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 0529e4ef752070..c5855069bdaba0 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -663,6 +663,9 @@ static int pipapo_realloc_mt(struct nft_pipapo_field *f,
+ 	    check_add_overflow(rules, extra, &rules_alloc))
+ 		return -EOVERFLOW;
+ 
++	if (rules_alloc > (INT_MAX / sizeof(*new_mt)))
++		return -ENOMEM;
++
+ 	new_mt = kvmalloc_array(rules_alloc, sizeof(*new_mt), GFP_KERNEL_ACCOUNT);
+ 	if (!new_mt)
+ 		return -ENOMEM;
+@@ -1499,6 +1502,9 @@ static struct nft_pipapo_match *pipapo_clone(struct nft_pipapo_match *old)
+ 		       src->groups * NFT_PIPAPO_BUCKETS(src->bb));
+ 
+ 		if (src->rules > 0) {
++			if (src->rules_alloc > (INT_MAX / sizeof(*src->mt)))
++				goto out_mt;
++
+ 			dst->mt = kvmalloc_array(src->rules_alloc,
+ 						 sizeof(*src->mt),
+ 						 GFP_KERNEL_ACCOUNT);
+diff --git a/net/nfc/nci/uart.c b/net/nfc/nci/uart.c
+index ed1508a9e093ed..aab107727f186e 100644
+--- a/net/nfc/nci/uart.c
++++ b/net/nfc/nci/uart.c
+@@ -119,22 +119,22 @@ static int nci_uart_set_driver(struct tty_struct *tty, unsigned int driver)
+ 
+ 	memcpy(nu, nci_uart_drivers[driver], sizeof(struct nci_uart));
+ 	nu->tty = tty;
+-	tty->disc_data = nu;
+ 	skb_queue_head_init(&nu->tx_q);
+ 	INIT_WORK(&nu->write_work, nci_uart_write_work);
+ 	spin_lock_init(&nu->rx_lock);
+ 
+ 	ret = nu->ops.open(nu);
+ 	if (ret) {
+-		tty->disc_data = NULL;
+ 		kfree(nu);
++		return ret;
+ 	} else if (!try_module_get(nu->owner)) {
+ 		nu->ops.close(nu);
+-		tty->disc_data = NULL;
+ 		kfree(nu);
+ 		return -ENOENT;
+ 	}
+-	return ret;
++	tty->disc_data = nu;
++
++	return 0;
+ }
+ 
+ /* ------ LDISC part ------ */
+diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
+index 77fa02f2bfcd56..a8cca549b5a2eb 100644
+--- a/net/sched/sch_sfq.c
++++ b/net/sched/sch_sfq.c
+@@ -656,6 +656,14 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt,
+ 		NL_SET_ERR_MSG_MOD(extack, "invalid quantum");
+ 		return -EINVAL;
+ 	}
++
++	if (ctl->perturb_period < 0 ||
++	    ctl->perturb_period > INT_MAX / HZ) {
++		NL_SET_ERR_MSG_MOD(extack, "invalid perturb period");
++		return -EINVAL;
++	}
++	perturb_period = ctl->perturb_period * HZ;
++
+ 	if (ctl_v1 && !red_check_params(ctl_v1->qth_min, ctl_v1->qth_max,
+ 					ctl_v1->Wlog, ctl_v1->Scell_log, NULL))
+ 		return -EINVAL;
+@@ -672,14 +680,12 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt,
+ 	headdrop = q->headdrop;
+ 	maxdepth = q->maxdepth;
+ 	maxflows = q->maxflows;
+-	perturb_period = q->perturb_period;
+ 	quantum = q->quantum;
+ 	flags = q->flags;
+ 
+ 	/* update and validate configuration */
+ 	if (ctl->quantum)
+ 		quantum = ctl->quantum;
+-	perturb_period = ctl->perturb_period * HZ;
+ 	if (ctl->flows)
+ 		maxflows = min_t(u32, ctl->flows, SFQ_MAX_FLOWS);
+ 	if (ctl->divisor) {
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 14021b81232906..2b14c81a87e5c4 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -1328,13 +1328,15 @@ static int taprio_dev_notifier(struct notifier_block *nb, unsigned long event,
+ 
+ 		stab = rtnl_dereference(q->root->stab);
+ 
+-		oper = rtnl_dereference(q->oper_sched);
++		rcu_read_lock();
++		oper = rcu_dereference(q->oper_sched);
+ 		if (oper)
+ 			taprio_update_queue_max_sdu(q, oper, stab);
+ 
+-		admin = rtnl_dereference(q->admin_sched);
++		admin = rcu_dereference(q->admin_sched);
+ 		if (admin)
+ 			taprio_update_queue_max_sdu(q, admin, stab);
++		rcu_read_unlock();
+ 
+ 		break;
+ 	}
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 53725ee7ba06d7..b301d64d9d80f3 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -9100,7 +9100,8 @@ static void __sctp_write_space(struct sctp_association *asoc)
+ 		wq = rcu_dereference(sk->sk_wq);
+ 		if (wq) {
+ 			if (waitqueue_active(&wq->wait))
+-				wake_up_interruptible(&wq->wait);
++				wake_up_interruptible_poll(&wq->wait, EPOLLOUT |
++						EPOLLWRNORM | EPOLLWRBAND);
+ 
+ 			/* Note that we try to include the Async I/O support
+ 			 * here by modeling from the current TCP/UDP code.
+diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
+index 7ce5e28a6c0316..131090f31e6a83 100644
+--- a/net/sunrpc/cache.c
++++ b/net/sunrpc/cache.c
+@@ -135,6 +135,8 @@ static struct cache_head *sunrpc_cache_add_entry(struct cache_detail *detail,
+ 
+ 	hlist_add_head_rcu(&new->cache_list, head);
+ 	detail->entries++;
++	if (detail->nextcheck > new->expiry_time)
++		detail->nextcheck = new->expiry_time + 1;
+ 	cache_get(new);
+ 	spin_unlock(&detail->hash_lock);
+ 
+@@ -462,24 +464,21 @@ static int cache_clean(void)
+ 		}
+ 	}
+ 
++	spin_lock(&current_detail->hash_lock);
++
+ 	/* find a non-empty bucket in the table */
+-	while (current_detail &&
+-	       current_index < current_detail->hash_size &&
++	while (current_index < current_detail->hash_size &&
+ 	       hlist_empty(&current_detail->hash_table[current_index]))
+ 		current_index++;
+ 
+ 	/* find a cleanable entry in the bucket and clean it, or set to next bucket */
+-
+-	if (current_detail && current_index < current_detail->hash_size) {
++	if (current_index < current_detail->hash_size) {
+ 		struct cache_head *ch = NULL;
+ 		struct cache_detail *d;
+ 		struct hlist_head *head;
+ 		struct hlist_node *tmp;
+ 
+-		spin_lock(&current_detail->hash_lock);
+-
+ 		/* Ok, now to clean this strand */
+-
+ 		head = &current_detail->hash_table[current_index];
+ 		hlist_for_each_entry_safe(ch, tmp, head, cache_list) {
+ 			if (current_detail->nextcheck > ch->expiry_time)
+@@ -500,8 +499,10 @@ static int cache_clean(void)
+ 		spin_unlock(&cache_list_lock);
+ 		if (ch)
+ 			sunrpc_end_cache_remove_entry(ch, d);
+-	} else
++	} else {
++		spin_unlock(&current_detail->hash_lock);
+ 		spin_unlock(&cache_list_lock);
++	}
+ 
+ 	return rv;
+ }
+diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
+index e7f9c295d13c03..7b6ec2d4279573 100644
+--- a/net/sunrpc/svc.c
++++ b/net/sunrpc/svc.c
+@@ -1369,7 +1369,8 @@ svc_process_common(struct svc_rqst *rqstp)
+ 	case SVC_OK:
+ 		break;
+ 	case SVC_GARBAGE:
+-		goto err_garbage_args;
++		rqstp->rq_auth_stat = rpc_autherr_badcred;
++		goto err_bad_auth;
+ 	case SVC_SYSERR:
+ 		goto err_system_err;
+ 	case SVC_DENIED:
+@@ -1510,14 +1511,6 @@ svc_process_common(struct svc_rqst *rqstp)
+ 	*rqstp->rq_accept_statp = rpc_proc_unavail;
+ 	goto sendit;
+ 
+-err_garbage_args:
+-	svc_printk(rqstp, "failed to decode RPC header\n");
+-
+-	if (serv->sv_stats)
+-		serv->sv_stats->rpcbadfmt++;
+-	*rqstp->rq_accept_statp = rpc_garbage_args;
+-	goto sendit;
+-
+ err_system_err:
+ 	if (serv->sv_stats)
+ 		serv->sv_stats->rpcbadfmt++;
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+index ca6172822b68ae..3d7f1413df0233 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+@@ -577,6 +577,7 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
+ 	if (newxprt->sc_qp && !IS_ERR(newxprt->sc_qp))
+ 		ib_destroy_qp(newxprt->sc_qp);
+ 	rdma_destroy_id(newxprt->sc_cm_id);
++	rpcrdma_rn_unregister(dev, &newxprt->sc_rn);
+ 	/* This call to put will destroy the transport */
+ 	svc_xprt_put(&newxprt->sc_xprt);
+ 	return NULL;
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 83cc095846d356..4b10ecf4c26538 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -2740,6 +2740,11 @@ static void xs_tcp_tls_setup_socket(struct work_struct *work)
+ 	}
+ 	rpc_shutdown_client(lower_clnt);
+ 
++	/* Check for ingress data that arrived before the socket's
++	 * ->data_ready callback was set up.
++	 */
++	xs_poll_check_readable(upper_transport);
++
+ out_unlock:
+ 	current_restore_flags(pflags, PF_MEMALLOC);
+ 	upper_transport->clnt = NULL;
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index 79f91b6ca8c847..ea5bb131ebd060 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -425,7 +425,7 @@ static void tipc_aead_free(struct rcu_head *rp)
+ 	}
+ 	free_percpu(aead->tfm_entry);
+ 	kfree_sensitive(aead->key);
+-	kfree(aead);
++	kfree_sensitive(aead);
+ }
+ 
+ static int tipc_aead_users(struct tipc_aead __rcu *aead)
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index 108a4cc2e00107..258d6aa4f21ae4 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -489,7 +489,7 @@ int tipc_udp_nl_dump_remoteip(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 		rtnl_lock();
+ 		b = tipc_bearer_find(net, bname);
+-		if (!b) {
++		if (!b || b->bcast_addr.media_id != TIPC_MEDIA_TYPE_UDP) {
+ 			rtnl_unlock();
+ 			return -EINVAL;
+ 		}
+@@ -500,7 +500,7 @@ int tipc_udp_nl_dump_remoteip(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 		rtnl_lock();
+ 		b = rtnl_dereference(tn->bearer_list[bid]);
+-		if (!b) {
++		if (!b || b->bcast_addr.media_id != TIPC_MEDIA_TYPE_UDP) {
+ 			rtnl_unlock();
+ 			return -EINVAL;
+ 		}
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 784a2d124749f5..614b58cb26ab71 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -178,11 +178,27 @@ static inline int verify_replay(struct xfrm_usersa_info *p,
+ 				       "Replay seq and seq_hi should be 0 for output SA");
+ 			return -EINVAL;
+ 		}
+-		if (rs->oseq_hi && !(p->flags & XFRM_STATE_ESN)) {
+-			NL_SET_ERR_MSG(
+-				extack,
+-				"Replay oseq_hi should be 0 in non-ESN mode for output SA");
+-			return -EINVAL;
++
++		if (!(p->flags & XFRM_STATE_ESN)) {
++			if (rs->oseq_hi) {
++				NL_SET_ERR_MSG(
++					extack,
++					"Replay oseq_hi should be 0 in non-ESN mode for output SA");
++				return -EINVAL;
++			}
++			if (rs->oseq == U32_MAX) {
++				NL_SET_ERR_MSG(
++					extack,
++					"Replay oseq should be less than 0xFFFFFFFF in non-ESN mode for output SA");
++				return -EINVAL;
++			}
++		} else {
++			if (rs->oseq == U32_MAX && rs->oseq_hi == U32_MAX) {
++				NL_SET_ERR_MSG(
++					extack,
++					"Replay oseq and oseq_hi should be less than 0xFFFFFFFF for output SA");
++				return -EINVAL;
++			}
+ 		}
+ 		if (rs->bmp_len) {
+ 			NL_SET_ERR_MSG(extack, "Replay bmp_len should 0 for output SA");
+@@ -196,11 +212,27 @@ static inline int verify_replay(struct xfrm_usersa_info *p,
+ 				       "Replay oseq and oseq_hi should be 0 for input SA");
+ 			return -EINVAL;
+ 		}
+-		if (rs->seq_hi && !(p->flags & XFRM_STATE_ESN)) {
+-			NL_SET_ERR_MSG(
+-				extack,
+-				"Replay seq_hi should be 0 in non-ESN mode for input SA");
+-			return -EINVAL;
++		if (!(p->flags & XFRM_STATE_ESN)) {
++			if (rs->seq_hi) {
++				NL_SET_ERR_MSG(
++					extack,
++					"Replay seq_hi should be 0 in non-ESN mode for input SA");
++				return -EINVAL;
++			}
++
++			if (rs->seq == U32_MAX) {
++				NL_SET_ERR_MSG(
++					extack,
++					"Replay seq should be less than 0xFFFFFFFF in non-ESN mode for input SA");
++				return -EINVAL;
++			}
++		} else {
++			if (rs->seq == U32_MAX && rs->seq_hi == U32_MAX) {
++				NL_SET_ERR_MSG(
++					extack,
++					"Replay seq and seq_hi should be less than 0xFFFFFFFF for input SA");
++				return -EINVAL;
++			}
+ 		}
+ 	}
+ 
+diff --git a/scripts/Makefile.compiler b/scripts/Makefile.compiler
+index f4fcc1eaaeaee8..65cfa72e376be0 100644
+--- a/scripts/Makefile.compiler
++++ b/scripts/Makefile.compiler
+@@ -43,7 +43,7 @@ as-instr = $(call try-run,\
+ # __cc-option
+ # Usage: MY_CFLAGS += $(call __cc-option,$(CC),$(MY_CFLAGS),-march=winchip-c6,-march=i586)
+ __cc-option = $(call try-run,\
+-	$(1) -Werror $(2) $(3) -c -x c /dev/null -o "$$TMP",$(3),$(4))
++	$(1) -Werror $(2) $(3:-Wno-%=-W%) -c -x c /dev/null -o "$$TMP",$(3),$(4))
+ 
+ # cc-option
+ # Usage: cflags-y += $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -57,7 +57,7 @@ cc-option-yn = $(if $(call cc-option,$1),y,n)
+ 
+ # cc-disable-warning
+ # Usage: cflags-y += $(call cc-disable-warning,unused-but-set-variable)
+-cc-disable-warning = $(if $(call cc-option,-W$(strip $1)),-Wno-$(strip $1))
++cc-disable-warning = $(call cc-option,-Wno-$(strip $1))
+ 
+ # gcc-min-version
+ # Usage: cflags-$(call gcc-min-version, 70100) += -foo
+diff --git a/security/selinux/xfrm.c b/security/selinux/xfrm.c
+index 90ec4ef1b082f9..61d56b0c2be138 100644
+--- a/security/selinux/xfrm.c
++++ b/security/selinux/xfrm.c
+@@ -94,7 +94,7 @@ static int selinux_xfrm_alloc_user(struct xfrm_sec_ctx **ctxp,
+ 
+ 	ctx->ctx_doi = XFRM_SC_DOI_LSM;
+ 	ctx->ctx_alg = XFRM_SC_ALG_SELINUX;
+-	ctx->ctx_len = str_len;
++	ctx->ctx_len = str_len + 1;
+ 	memcpy(ctx->ctx_str, &uctx[1], str_len);
+ 	ctx->ctx_str[str_len] = '\0';
+ 	rc = security_context_to_sid(ctx->ctx_str, str_len,
+diff --git a/sound/pci/hda/cs35l41_hda_property.c b/sound/pci/hda/cs35l41_hda_property.c
+index 61d2314834e7b1..d8249d997c2a0b 100644
+--- a/sound/pci/hda/cs35l41_hda_property.c
++++ b/sound/pci/hda/cs35l41_hda_property.c
+@@ -31,6 +31,9 @@ struct cs35l41_config {
+ };
+ 
+ static const struct cs35l41_config cs35l41_config_table[] = {
++	{ "10251826", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, -1, -1, 0, 0, 0 },
++	{ "1025182C", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, -1, -1, 0, 0, 0 },
++	{ "10251844", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, -1, -1, 0, 0, 0 },
+ 	{ "10280B27", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
+ 	{ "10280B28", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
+ 	{ "10280BEB", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, -1, 0, 0, 0, 0 },
+@@ -452,6 +455,9 @@ struct cs35l41_prop_model {
+ static const struct cs35l41_prop_model cs35l41_prop_model_table[] = {
+ 	{ "CLSA0100", NULL, lenovo_legion_no_acpi },
+ 	{ "CLSA0101", NULL, lenovo_legion_no_acpi },
++	{ "CSC3551", "10251826", generic_dsd_config },
++	{ "CSC3551", "1025182C", generic_dsd_config },
++	{ "CSC3551", "10251844", generic_dsd_config },
+ 	{ "CSC3551", "10280B27", generic_dsd_config },
+ 	{ "CSC3551", "10280B28", generic_dsd_config },
+ 	{ "CSC3551", "10280BEB", generic_dsd_config },
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 512fb22f5e5eb9..77a2984c3741de 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2276,6 +2276,8 @@ static const struct snd_pci_quirk power_save_denylist[] = {
+ 	SND_PCI_QUIRK(0x1734, 0x1232, "KONTRON SinglePC", 0),
+ 	/* Dell ALC3271 */
+ 	SND_PCI_QUIRK(0x1028, 0x0962, "Dell ALC3271", 0),
++	/* https://bugzilla.kernel.org/show_bug.cgi?id=220210 */
++	SND_PCI_QUIRK(0x17aa, 0x5079, "Lenovo Thinkpad E15", 0),
+ 	{}
+ };
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 20ab1fb2195ff6..02a424b7a99204 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8029,6 +8029,7 @@ enum {
+ 	ALC283_FIXUP_DELL_HP_RESUME,
+ 	ALC294_FIXUP_ASUS_CS35L41_SPI_2,
+ 	ALC274_FIXUP_HP_AIO_BIND_DACS,
++	ALC287_FIXUP_PREDATOR_SPK_CS35L41_I2C_2,
+ };
+ 
+ /* A special fixup for Lenovo C940 and Yoga Duet 7;
+@@ -9301,6 +9302,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 			{ }
+ 		}
+ 	},
++	[ALC287_FIXUP_PREDATOR_SPK_CS35L41_I2C_2] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = cs35l41_fixup_i2c_two,
++		.chained = true,
++		.chain_id = ALC255_FIXUP_PREDATOR_SUBWOOFER
++	},
+ 	[ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = (const struct hda_pintbl[]) {
+@@ -10456,6 +10463,9 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x1534, "Acer Predator PH315-54", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x159c, "Acer Nitro 5 AN515-58", ALC2XX_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x169a, "Acer Swift SFG16", ALC256_FIXUP_ACER_SFG16_MICMUTE_LED),
++	SND_PCI_QUIRK(0x1025, 0x1826, "Acer Helios ZPC", ALC287_FIXUP_PREDATOR_SPK_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x1025, 0x182c, "Acer Helios ZPD", ALC287_FIXUP_PREDATOR_SPK_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x1025, 0x1844, "Acer Helios ZPS", ALC287_FIXUP_PREDATOR_SPK_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
+ 	SND_PCI_QUIRK(0x1028, 0x053c, "Dell Latitude E5430", ALC292_FIXUP_DELL_E7X),
+ 	SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS),
+@@ -10499,6 +10509,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0871, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0872, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0873, "Dell Precision 3930", ALC255_FIXUP_DUMMY_LINEOUT_VERB),
++	SND_PCI_QUIRK(0x1028, 0x0879, "Dell Latitude 5420 Rugged", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x08ad, "Dell WYSE AIO", ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x08ae, "Dell WYSE NB", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0935, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+@@ -10777,6 +10788,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8b97, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8bb3, "HP Slim OMEN", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8bb4, "HP Slim OMEN", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x103c, 0x8bc8, "HP Victus 15-fa1xxx", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8bcd, "HP Omen 16-xd0xxx", ALC245_FIXUP_HP_MUTE_LED_V1_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8bdd, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8bde, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10830,6 +10842,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8c91, "HP EliteBook 660", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8c96, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8c97, "HP ZBook", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++	SND_PCI_QUIRK(0x103c, 0x8c9c, "HP Victus 16-s1xxx (MB 8C9C)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8ca1, "HP ZBook Power", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8ca2, "HP ZBook Power", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8ca4, "HP ZBook Fury", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+@@ -10894,6 +10907,8 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8e60, "HP Trekker ", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8e61, "HP Trekker ", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8e62, "HP Trekker ", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x1043, 0x1032, "ASUS VivoBook X513EA", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1043, 0x1034, "ASUS GU605C", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ 	SND_PCI_QUIRK(0x1043, 0x1054, "ASUS G614FH/FM/FP", ALC287_FIXUP_CS35L41_I2C_2),
+diff --git a/sound/soc/amd/acp/acp-sdw-legacy-mach.c b/sound/soc/amd/acp/acp-sdw-legacy-mach.c
+index 2020c5cfb3d5d7..582c68aee6e589 100644
+--- a/sound/soc/amd/acp/acp-sdw-legacy-mach.c
++++ b/sound/soc/amd/acp/acp-sdw-legacy-mach.c
+@@ -272,7 +272,7 @@ static int create_sdw_dailinks(struct snd_soc_card *card,
+ 
+ 	/* generate DAI links by each sdw link */
+ 	while (soc_dais->initialised) {
+-		int current_be_id;
++		int current_be_id = 0;
+ 
+ 		ret = create_sdw_dailink(card, soc_dais, dai_links,
+ 					 &current_be_id, codec_conf, sdw_platform_component);
+diff --git a/sound/soc/amd/acp/acp-sdw-sof-mach.c b/sound/soc/amd/acp/acp-sdw-sof-mach.c
+index c09b1f118a6cc1..75bdd843ca3681 100644
+--- a/sound/soc/amd/acp/acp-sdw-sof-mach.c
++++ b/sound/soc/amd/acp/acp-sdw-sof-mach.c
+@@ -219,7 +219,7 @@ static int create_sdw_dailinks(struct snd_soc_card *card,
+ 
+ 	/* generate DAI links by each sdw link */
+ 	while (sof_dais->initialised) {
+-		int current_be_id;
++		int current_be_id = 0;
+ 
+ 		ret = create_sdw_dailink(card, sof_dais, dai_links,
+ 					 &current_be_id, codec_conf);
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index e632f16c910250..3d9da93d22ee84 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -311,6 +311,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "83AS"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "83HN"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+@@ -360,7 +367,7 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "M5402RA"),
+ 		}
+ 	},
+-        {
++	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."),
+diff --git a/sound/soc/codecs/tas2770.c b/sound/soc/codecs/tas2770.c
+index 7f219df8be7046..8de7e94d4ba478 100644
+--- a/sound/soc/codecs/tas2770.c
++++ b/sound/soc/codecs/tas2770.c
+@@ -156,11 +156,37 @@ static const struct snd_kcontrol_new isense_switch =
+ static const struct snd_kcontrol_new vsense_switch =
+ 	SOC_DAPM_SINGLE("Switch", TAS2770_PWR_CTRL, 2, 1, 1);
+ 
++static int sense_event(struct snd_soc_dapm_widget *w,
++			struct snd_kcontrol *kcontrol, int event)
++{
++	struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
++	struct tas2770_priv *tas2770 = snd_soc_component_get_drvdata(component);
++
++	/*
++	 * Powering up ISENSE/VSENSE requires a trip through the shutdown state.
++	 * Do that here to ensure that our changes are applied properly, otherwise
++	 * we might end up with non-functional IVSENSE if playback started earlier,
++	 * which would break software speaker protection.
++	 */
++	switch (event) {
++	case SND_SOC_DAPM_PRE_REG:
++		return snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
++						    TAS2770_PWR_CTRL_MASK,
++						    TAS2770_PWR_CTRL_SHUTDOWN);
++	case SND_SOC_DAPM_POST_REG:
++		return tas2770_update_pwr_ctrl(tas2770);
++	default:
++		return 0;
++	}
++}
++
+ static const struct snd_soc_dapm_widget tas2770_dapm_widgets[] = {
+ 	SND_SOC_DAPM_AIF_IN("ASI1", "ASI1 Playback", 0, SND_SOC_NOPM, 0, 0),
+ 	SND_SOC_DAPM_MUX("ASI1 Sel", SND_SOC_NOPM, 0, 0, &tas2770_asi1_mux),
+-	SND_SOC_DAPM_SWITCH("ISENSE", TAS2770_PWR_CTRL, 3, 1, &isense_switch),
+-	SND_SOC_DAPM_SWITCH("VSENSE", TAS2770_PWR_CTRL, 2, 1, &vsense_switch),
++	SND_SOC_DAPM_SWITCH_E("ISENSE", TAS2770_PWR_CTRL, 3, 1, &isense_switch,
++		sense_event, SND_SOC_DAPM_PRE_REG | SND_SOC_DAPM_POST_REG),
++	SND_SOC_DAPM_SWITCH_E("VSENSE", TAS2770_PWR_CTRL, 2, 1, &vsense_switch,
++		sense_event, SND_SOC_DAPM_PRE_REG | SND_SOC_DAPM_POST_REG),
+ 	SND_SOC_DAPM_DAC_E("DAC", NULL, SND_SOC_NOPM, 0, 0, tas2770_dac_event,
+ 			   SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD),
+ 	SND_SOC_DAPM_OUTPUT("OUT"),
+diff --git a/sound/soc/codecs/wcd937x.c b/sound/soc/codecs/wcd937x.c
+index dd2045a5d26d23..6101d52a73b85e 100644
+--- a/sound/soc/codecs/wcd937x.c
++++ b/sound/soc/codecs/wcd937x.c
+@@ -91,7 +91,6 @@ struct wcd937x_priv {
+ 	struct regmap_irq_chip *wcd_regmap_irq_chip;
+ 	struct regmap_irq_chip_data *irq_chip;
+ 	struct regulator_bulk_data supplies[WCD937X_MAX_BULK_SUPPLY];
+-	struct regulator *buck_supply;
+ 	struct snd_soc_jack *jack;
+ 	unsigned long status_mask;
+ 	s32 micb_ref[WCD937X_MAX_MICBIAS];
+@@ -2945,10 +2944,8 @@ static int wcd937x_probe(struct platform_device *pdev)
+ 		return dev_err_probe(dev, ret, "Failed to get supplies\n");
+ 
+ 	ret = regulator_bulk_enable(WCD937X_MAX_BULK_SUPPLY, wcd937x->supplies);
+-	if (ret) {
+-		regulator_bulk_free(WCD937X_MAX_BULK_SUPPLY, wcd937x->supplies);
++	if (ret)
+ 		return dev_err_probe(dev, ret, "Failed to enable supplies\n");
+-	}
+ 
+ 	wcd937x_dt_parse_micbias_info(dev, wcd937x);
+ 
+@@ -2984,7 +2981,6 @@ static int wcd937x_probe(struct platform_device *pdev)
+ 
+ err_disable_regulators:
+ 	regulator_bulk_disable(WCD937X_MAX_BULK_SUPPLY, wcd937x->supplies);
+-	regulator_bulk_free(WCD937X_MAX_BULK_SUPPLY, wcd937x->supplies);
+ 
+ 	return ret;
+ }
+@@ -3001,7 +2997,6 @@ static void wcd937x_remove(struct platform_device *pdev)
+ 	pm_runtime_dont_use_autosuspend(dev);
+ 
+ 	regulator_bulk_disable(WCD937X_MAX_BULK_SUPPLY, wcd937x->supplies);
+-	regulator_bulk_free(WCD937X_MAX_BULK_SUPPLY, wcd937x->supplies);
+ }
+ 
+ #if defined(CONFIG_OF)
+diff --git a/sound/soc/generic/simple-card-utils.c b/sound/soc/generic/simple-card-utils.c
+index 3ae2a212a2e38a..355f7ec8943c24 100644
+--- a/sound/soc/generic/simple-card-utils.c
++++ b/sound/soc/generic/simple-card-utils.c
+@@ -1119,12 +1119,16 @@ int graph_util_parse_dai(struct simple_util_priv *priv, struct device_node *ep,
+ 	args.np = ep;
+ 	dai = snd_soc_get_dai_via_args(&args);
+ 	if (dai) {
++		const char *dai_name = snd_soc_dai_name_get(dai);
++		const struct of_phandle_args *dai_args = snd_soc_copy_dai_args(dev, &args);
++
+ 		ret = -ENOMEM;
++		if (!dai_args)
++			goto err;
++
+ 		dlc->of_node  = node;
+-		dlc->dai_name = snd_soc_dai_name_get(dai);
+-		dlc->dai_args = snd_soc_copy_dai_args(dev, &args);
+-		if (!dlc->dai_args)
+-			goto end;
++		dlc->dai_name = dai_name;
++		dlc->dai_args = dai_args;
+ 
+ 		goto parse_dai_end;
+ 	}
+@@ -1154,16 +1158,17 @@ int graph_util_parse_dai(struct simple_util_priv *priv, struct device_node *ep,
+ 	 *    if he unbinded CPU or Codec.
+ 	 */
+ 	ret = snd_soc_get_dlc(&args, dlc);
+-	if (ret < 0) {
+-		of_node_put(node);
+-		goto end;
+-	}
++	if (ret < 0)
++		goto err;
+ 
+ parse_dai_end:
+ 	if (is_single_link)
+ 		*is_single_link = of_graph_get_endpoint_count(node) == 1;
+ 	ret = 0;
+-end:
++err:
++	if (ret < 0)
++		of_node_put(node);
++
+ 	return simple_ret(priv, ret);
+ }
+ EXPORT_SYMBOL_GPL(graph_util_parse_dai);
+diff --git a/sound/soc/meson/meson-card-utils.c b/sound/soc/meson/meson-card-utils.c
+index cfc7f6e41ab5c4..68531183fb60ca 100644
+--- a/sound/soc/meson/meson-card-utils.c
++++ b/sound/soc/meson/meson-card-utils.c
+@@ -231,7 +231,7 @@ static int meson_card_parse_of_optional(struct snd_soc_card *card,
+ 						    const char *p))
+ {
+ 	/* If property is not provided, don't fail ... */
+-	if (!of_property_read_bool(card->dev->of_node, propname))
++	if (!of_property_present(card->dev->of_node, propname))
+ 		return 0;
+ 
+ 	/* ... but do fail if it is provided and the parsing fails */
+diff --git a/sound/soc/qcom/sdm845.c b/sound/soc/qcom/sdm845.c
+index fcc7df75346fc1..a233b80049ee74 100644
+--- a/sound/soc/qcom/sdm845.c
++++ b/sound/soc/qcom/sdm845.c
+@@ -91,6 +91,10 @@ static int sdm845_slim_snd_hw_params(struct snd_pcm_substream *substream,
+ 		else
+ 			ret = snd_soc_dai_set_channel_map(cpu_dai, tx_ch_cnt,
+ 							  tx_ch, 0, NULL);
++		if (ret != 0 && ret != -ENOTSUPP) {
++			dev_err(rtd->dev, "failed to set cpu chan map, err:%d\n", ret);
++			return ret;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/sound/soc/sdw_utils/soc_sdw_rt_amp.c b/sound/soc/sdw_utils/soc_sdw_rt_amp.c
+index 0538c252ba69b1..83c2368170cb5e 100644
+--- a/sound/soc/sdw_utils/soc_sdw_rt_amp.c
++++ b/sound/soc/sdw_utils/soc_sdw_rt_amp.c
+@@ -190,7 +190,7 @@ int asoc_sdw_rt_amp_spk_rtd_init(struct snd_soc_pcm_runtime *rtd, struct snd_soc
+ 	const struct snd_soc_dapm_route *rt_amp_map;
+ 	char codec_name[CODEC_NAME_SIZE];
+ 	struct snd_soc_dai *codec_dai;
+-	int ret;
++	int ret = -EINVAL;
+ 	int i;
+ 
+ 	rt_amp_map = get_codec_name_and_route(dai, codec_name);
+diff --git a/sound/soc/tegra/tegra210_ahub.c b/sound/soc/tegra/tegra210_ahub.c
+index 99683f292b5d84..ae4965a9f7649e 100644
+--- a/sound/soc/tegra/tegra210_ahub.c
++++ b/sound/soc/tegra/tegra210_ahub.c
+@@ -1359,6 +1359,8 @@ static int tegra_ahub_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	ahub->soc_data = of_device_get_match_data(&pdev->dev);
++	if (!ahub->soc_data)
++		return -ENODEV;
+ 
+ 	platform_set_drvdata(pdev, ahub);
+ 
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index 0e9b5431a47f20..faac7df1fbcf02 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -383,6 +383,13 @@ static const struct usbmix_name_map ms_usb_link_map[] = {
+ 	{ 0 }   /* terminator */
+ };
+ 
++/* KTMicro USB */
++static struct usbmix_name_map s31b2_0022_map[] = {
++	{ 23, "Speaker Playback" },
++	{ 18, "Headphone Playback" },
++	{ 0 }
++};
++
+ /* ASUS ROG Zenith II with Realtek ALC1220-VB */
+ static const struct usbmix_name_map asus_zenith_ii_map[] = {
+ 	{ 19, NULL, 12 }, /* FU, Input Gain Pad - broken response, disabled */
+@@ -692,6 +699,11 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ 		.id = USB_ID(0x045e, 0x083c),
+ 		.map = ms_usb_link_map,
+ 	},
++	{
++		/* KTMicro USB */
++		.id = USB_ID(0X31b2, 0x0022),
++		.map = s31b2_0022_map,
++	},
+ 	{ 0 } /* terminator */
+ };
+ 
+diff --git a/tools/bpf/bpftool/cgroup.c b/tools/bpf/bpftool/cgroup.c
+index 3f1d6be512151d..944ebe21a2169a 100644
+--- a/tools/bpf/bpftool/cgroup.c
++++ b/tools/bpf/bpftool/cgroup.c
+@@ -318,11 +318,11 @@ static int show_bpf_progs(int cgroup_fd, enum bpf_attach_type type,
+ 
+ static int do_show(int argc, char **argv)
+ {
+-	enum bpf_attach_type type;
+ 	int has_attached_progs;
+ 	const char *path;
+ 	int cgroup_fd;
+ 	int ret = -1;
++	unsigned int i;
+ 
+ 	query_flags = 0;
+ 
+@@ -370,14 +370,14 @@ static int do_show(int argc, char **argv)
+ 		       "AttachFlags", "Name");
+ 
+ 	btf_vmlinux = libbpf_find_kernel_btf();
+-	for (type = 0; type < __MAX_BPF_ATTACH_TYPE; type++) {
++	for (i = 0; i < ARRAY_SIZE(cgroup_attach_types); i++) {
+ 		/*
+ 		 * Not all attach types may be supported, so it's expected,
+ 		 * that some requests will fail.
+ 		 * If we were able to get the show for at least one
+ 		 * attach type, let's return 0.
+ 		 */
+-		if (show_bpf_progs(cgroup_fd, type, 0) == 0)
++		if (show_bpf_progs(cgroup_fd, cgroup_attach_types[i], 0) == 0)
+ 			ret = 0;
+ 	}
+ 
+@@ -400,9 +400,9 @@ static int do_show(int argc, char **argv)
+ static int do_show_tree_fn(const char *fpath, const struct stat *sb,
+ 			   int typeflag, struct FTW *ftw)
+ {
+-	enum bpf_attach_type type;
+ 	int has_attached_progs;
+ 	int cgroup_fd;
++	unsigned int i;
+ 
+ 	if (typeflag != FTW_D)
+ 		return 0;
+@@ -434,8 +434,8 @@ static int do_show_tree_fn(const char *fpath, const struct stat *sb,
+ 	}
+ 
+ 	btf_vmlinux = libbpf_find_kernel_btf();
+-	for (type = 0; type < __MAX_BPF_ATTACH_TYPE; type++)
+-		show_bpf_progs(cgroup_fd, type, ftw->level);
++	for (i = 0; i < ARRAY_SIZE(cgroup_attach_types); i++)
++		show_bpf_progs(cgroup_fd, cgroup_attach_types[i], ftw->level);
+ 
+ 	if (errno == EINVAL)
+ 		/* Last attach type does not support query.
+diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
+index 38bc6b14b0666a..39b18521d5472c 100644
+--- a/tools/lib/bpf/btf.c
++++ b/tools/lib/bpf/btf.c
+@@ -996,7 +996,7 @@ static struct btf *btf_new_empty(struct btf *base_btf)
+ 	if (base_btf) {
+ 		btf->base_btf = base_btf;
+ 		btf->start_id = btf__type_cnt(base_btf);
+-		btf->start_str_off = base_btf->hdr->str_len;
++		btf->start_str_off = base_btf->hdr->str_len + base_btf->start_str_off;
+ 		btf->swapped_endian = base_btf->swapped_endian;
+ 	}
+ 
+@@ -4390,6 +4390,19 @@ static bool btf_dedup_identical_structs(struct btf_dedup *d, __u32 id1, __u32 id
+ 	return true;
+ }
+ 
++static bool btf_dedup_identical_ptrs(struct btf_dedup *d, __u32 id1, __u32 id2)
++{
++	struct btf_type *t1, *t2;
++
++	t1 = btf_type_by_id(d->btf, id1);
++	t2 = btf_type_by_id(d->btf, id2);
++
++	if (!btf_is_ptr(t1) || !btf_is_ptr(t2))
++		return false;
++
++	return t1->type == t2->type;
++}
++
+ /*
+  * Check equivalence of BTF type graph formed by candidate struct/union (we'll
+  * call it "candidate graph" in this description for brevity) to a type graph
+@@ -4522,6 +4535,9 @@ static int btf_dedup_is_equiv(struct btf_dedup *d, __u32 cand_id,
+ 		 */
+ 		if (btf_dedup_identical_structs(d, hypot_type_id, cand_id))
+ 			return 1;
++		/* A similar case is again observed for PTRs. */
++		if (btf_dedup_identical_ptrs(d, hypot_type_id, cand_id))
++			return 1;
+ 		return 0;
+ 	}
+ 
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 147964bb64c8f4..30cf210261032e 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -14078,6 +14078,12 @@ int bpf_object__attach_skeleton(struct bpf_object_skeleton *s)
+ 		}
+ 
+ 		link = map_skel->link;
++		if (!link) {
++			pr_warn("map '%s': BPF map skeleton link is uninitialized\n",
++				bpf_map__name(map));
++			continue;
++		}
++
+ 		if (*link)
+ 			continue;
+ 
+diff --git a/tools/net/ynl/pyynl/lib/ynl.py b/tools/net/ynl/pyynl/lib/ynl.py
+index dcc2c6b298d603..61deb592306711 100644
+--- a/tools/net/ynl/pyynl/lib/ynl.py
++++ b/tools/net/ynl/pyynl/lib/ynl.py
+@@ -231,14 +231,7 @@ class NlMsg:
+                     self.extack['unknown'].append(extack)
+ 
+             if attr_space:
+-                # We don't have the ability to parse nests yet, so only do global
+-                if 'miss-type' in self.extack and 'miss-nest' not in self.extack:
+-                    miss_type = self.extack['miss-type']
+-                    if miss_type in attr_space.attrs_by_val:
+-                        spec = attr_space.attrs_by_val[miss_type]
+-                        self.extack['miss-type'] = spec['name']
+-                        if 'doc' in spec:
+-                            self.extack['miss-type-doc'] = spec['doc']
++                self.annotate_extack(attr_space)
+ 
+     def _decode_policy(self, raw):
+         policy = {}
+@@ -264,6 +257,18 @@ class NlMsg:
+                 policy['mask'] = attr.as_scalar('u64')
+         return policy
+ 
++    def annotate_extack(self, attr_space):
++        """ Make extack more human friendly with attribute information """
++
++        # We don't have the ability to parse nests yet, so only do global
++        if 'miss-type' in self.extack and 'miss-nest' not in self.extack:
++            miss_type = self.extack['miss-type']
++            if miss_type in attr_space.attrs_by_val:
++                spec = attr_space.attrs_by_val[miss_type]
++                self.extack['miss-type'] = spec['name']
++                if 'doc' in spec:
++                    self.extack['miss-type-doc'] = spec['doc']
++
+     def cmd(self):
+         return self.nl_type
+ 
+@@ -277,12 +282,12 @@ class NlMsg:
+ 
+ 
+ class NlMsgs:
+-    def __init__(self, data, attr_space=None):
++    def __init__(self, data):
+         self.msgs = []
+ 
+         offset = 0
+         while offset < len(data):
+-            msg = NlMsg(data, offset, attr_space=attr_space)
++            msg = NlMsg(data, offset)
+             offset += msg.nl_len
+             self.msgs.append(msg)
+ 
+@@ -594,7 +599,7 @@ class YnlFamily(SpecFamily):
+             scalar_selector = self._get_scalar(attr, value["selector"])
+             attr_payload = struct.pack("II", scalar_value, scalar_selector)
+         elif attr['type'] == 'sub-message':
+-            msg_format = self._resolve_selector(attr, search_attrs)
++            msg_format, _ = self._resolve_selector(attr, search_attrs)
+             attr_payload = b''
+             if msg_format.fixed_header:
+                 attr_payload += self._encode_struct(msg_format.fixed_header, value)
+@@ -712,10 +717,10 @@ class YnlFamily(SpecFamily):
+             raise Exception(f"No message format for '{value}' in sub-message spec '{sub_msg}'")
+ 
+         spec = sub_msg_spec.formats[value]
+-        return spec
++        return spec, value
+ 
+     def _decode_sub_msg(self, attr, attr_spec, search_attrs):
+-        msg_format = self._resolve_selector(attr_spec, search_attrs)
++        msg_format, _ = self._resolve_selector(attr_spec, search_attrs)
+         decoded = {}
+         offset = 0
+         if msg_format.fixed_header:
+@@ -787,7 +792,7 @@ class YnlFamily(SpecFamily):
+ 
+         return rsp
+ 
+-    def _decode_extack_path(self, attrs, attr_set, offset, target):
++    def _decode_extack_path(self, attrs, attr_set, offset, target, search_attrs):
+         for attr in attrs:
+             try:
+                 attr_spec = attr_set.attrs_by_val[attr.type]
+@@ -801,26 +806,37 @@ class YnlFamily(SpecFamily):
+             if offset + attr.full_len <= target:
+                 offset += attr.full_len
+                 continue
+-            if attr_spec['type'] != 'nest':
++
++            pathname = attr_spec.name
++            if attr_spec['type'] == 'nest':
++                sub_attrs = self.attr_sets[attr_spec['nested-attributes']]
++                search_attrs = SpaceAttrs(sub_attrs, search_attrs.lookup(attr_spec['name']))
++            elif attr_spec['type'] == 'sub-message':
++                msg_format, value = self._resolve_selector(attr_spec, search_attrs)
++                if msg_format is None:
++                    raise Exception(f"Can't resolve sub-message of {attr_spec['name']} for extack")
++                sub_attrs = self.attr_sets[msg_format.attr_set]
++                pathname += f"({value})"
++            else:
+                 raise Exception(f"Can't dive into {attr.type} ({attr_spec['name']}) for extack")
+             offset += 4
+-            subpath = self._decode_extack_path(NlAttrs(attr.raw),
+-                                               self.attr_sets[attr_spec['nested-attributes']],
+-                                               offset, target)
++            subpath = self._decode_extack_path(NlAttrs(attr.raw), sub_attrs,
++                                               offset, target, search_attrs)
+             if subpath is None:
+                 return None
+-            return '.' + attr_spec.name + subpath
++            return '.' + pathname + subpath
+ 
+         return None
+ 
+-    def _decode_extack(self, request, op, extack):
++    def _decode_extack(self, request, op, extack, vals):
+         if 'bad-attr-offs' not in extack:
+             return
+ 
+         msg = self.nlproto.decode(self, NlMsg(request, 0, op.attr_set), op)
+         offset = self.nlproto.msghdr_size() + self._struct_size(op.fixed_header)
++        search_attrs = SpaceAttrs(op.attr_set, vals)
+         path = self._decode_extack_path(msg.raw_attrs, op.attr_set, offset,
+-                                        extack['bad-attr-offs'])
++                                        extack['bad-attr-offs'], search_attrs)
+         if path:
+             del extack['bad-attr-offs']
+             extack['bad-attr'] = path
+@@ -1012,7 +1028,7 @@ class YnlFamily(SpecFamily):
+         for (method, vals, flags) in ops:
+             op = self.ops[method]
+             msg = self._encode_message(op, vals, flags, req_seq)
+-            reqs_by_seq[req_seq] = (op, msg, flags)
++            reqs_by_seq[req_seq] = (op, vals, msg, flags)
+             payload += msg
+             req_seq += 1
+ 
+@@ -1023,13 +1039,14 @@ class YnlFamily(SpecFamily):
+         op_rsp = []
+         while not done:
+             reply = self.sock.recv(self._recv_size)
+-            nms = NlMsgs(reply, attr_space=op.attr_set)
++            nms = NlMsgs(reply)
+             self._recv_dbg_print(reply, nms)
+             for nl_msg in nms:
+                 if nl_msg.nl_seq in reqs_by_seq:
+-                    (op, req_msg, req_flags) = reqs_by_seq[nl_msg.nl_seq]
++                    (op, vals, req_msg, req_flags) = reqs_by_seq[nl_msg.nl_seq]
+                     if nl_msg.extack:
+-                        self._decode_extack(req_msg, op, nl_msg.extack)
++                        nl_msg.annotate_extack(op.attr_set)
++                        self._decode_extack(req_msg, op, nl_msg.extack, vals)
+                 else:
+                     op = None
+                     req_flags = []
+diff --git a/tools/perf/tests/tests-scripts.c b/tools/perf/tests/tests-scripts.c
+index 1d5759d0814174..3a2a8438f9af1b 100644
+--- a/tools/perf/tests/tests-scripts.c
++++ b/tools/perf/tests/tests-scripts.c
+@@ -260,6 +260,7 @@ static void append_scripts_in_dir(int dir_fd,
+ 			continue; /* Skip scripts that have a separate driver. */
+ 		fd = openat(dir_fd, ent->d_name, O_PATH);
+ 		append_scripts_in_dir(fd, result, result_sz);
++		close(fd);
+ 	}
+ 	for (i = 0; i < n_dirs; i++) /* Clean up */
+ 		zfree(&entlist[i]);
+diff --git a/tools/perf/util/print-events.c b/tools/perf/util/print-events.c
+index a786cbfb0ff56a..83aaf7cda63590 100644
+--- a/tools/perf/util/print-events.c
++++ b/tools/perf/util/print-events.c
+@@ -268,6 +268,7 @@ bool is_event_supported(u8 type, u64 config)
+ 			ret = evsel__open(evsel, NULL, tmap) >= 0;
+ 		}
+ 
++		evsel__close(evsel);
+ 		evsel__delete(evsel);
+ 	}
+ 
+diff --git a/tools/testing/selftests/x86/Makefile b/tools/testing/selftests/x86/Makefile
+index 28422c32cc8ff4..334ac9cbb5cd45 100644
+--- a/tools/testing/selftests/x86/Makefile
++++ b/tools/testing/selftests/x86/Makefile
+@@ -12,7 +12,7 @@ CAN_BUILD_WITH_NOPIE := $(shell ./check_cc.sh "$(CC)" trivial_program.c -no-pie)
+ 
+ TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt test_mremap_vdso \
+ 			check_initial_reg_state sigreturn iopl ioperm \
+-			test_vsyscall mov_ss_trap \
++			test_vsyscall mov_ss_trap sigtrap_loop \
+ 			syscall_arg_fault fsgsbase_restore sigaltstack
+ TARGETS_C_BOTHBITS += nx_stack
+ TARGETS_C_32BIT_ONLY := entry_from_vm86 test_syscall_vdso unwind_vdso \
+diff --git a/tools/testing/selftests/x86/sigtrap_loop.c b/tools/testing/selftests/x86/sigtrap_loop.c
+new file mode 100644
+index 00000000000000..9d065479e89f94
+--- /dev/null
++++ b/tools/testing/selftests/x86/sigtrap_loop.c
+@@ -0,0 +1,101 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/*
++ * Copyright (C) 2025 Intel Corporation
++ */
++#define _GNU_SOURCE
++
++#include <err.h>
++#include <signal.h>
++#include <stdio.h>
++#include <stdlib.h>
++#include <string.h>
++#include <sys/ucontext.h>
++
++#ifdef __x86_64__
++# define REG_IP REG_RIP
++#else
++# define REG_IP REG_EIP
++#endif
++
++static void sethandler(int sig, void (*handler)(int, siginfo_t *, void *), int flags)
++{
++	struct sigaction sa;
++
++	memset(&sa, 0, sizeof(sa));
++	sa.sa_sigaction = handler;
++	sa.sa_flags = SA_SIGINFO | flags;
++	sigemptyset(&sa.sa_mask);
++
++	if (sigaction(sig, &sa, 0))
++		err(1, "sigaction");
++
++	return;
++}
++
++static void sigtrap(int sig, siginfo_t *info, void *ctx_void)
++{
++	ucontext_t *ctx = (ucontext_t *)ctx_void;
++	static unsigned int loop_count_on_same_ip;
++	static unsigned long last_trap_ip;
++
++	if (last_trap_ip == ctx->uc_mcontext.gregs[REG_IP]) {
++		printf("\tTrapped at %016lx\n", last_trap_ip);
++
++		/*
++		 * If the same IP is hit more than 10 times in a row, it is
++		 * _considered_ an infinite loop.
++		 */
++		if (++loop_count_on_same_ip > 10) {
++			printf("[FAIL]\tDetected SIGTRAP infinite loop\n");
++			exit(1);
++		}
++
++		return;
++	}
++
++	loop_count_on_same_ip = 0;
++	last_trap_ip = ctx->uc_mcontext.gregs[REG_IP];
++	printf("\tTrapped at %016lx\n", last_trap_ip);
++}
++
++int main(int argc, char *argv[])
++{
++	sethandler(SIGTRAP, sigtrap, 0);
++
++	/*
++	 * Set the Trap Flag (TF) to single-step the test code, therefore to
++	 * trigger a SIGTRAP signal after each instruction until the TF is
++	 * cleared.
++	 *
++	 * Because the arithmetic flags are not significant here, the TF is
++	 * set by pushing 0x302 onto the stack and then popping it into the
++	 * flags register.
++	 *
++	 * Four instructions in the following asm code are executed with the
++	 * TF set, thus the SIGTRAP handler is expected to run four times.
++	 */
++	printf("[RUN]\tSIGTRAP infinite loop detection\n");
++	asm volatile(
++#ifdef __x86_64__
++		/*
++		 * Avoid clobbering the redzone
++		 *
++		 * Equivalent to "sub $128, %rsp", however -128 can be encoded
++		 * in a single byte immediate while 128 uses 4 bytes.
++		 */
++		"add $-128, %rsp\n\t"
++#endif
++		"push $0x302\n\t"
++		"popf\n\t"
++		"nop\n\t"
++		"nop\n\t"
++		"push $0x202\n\t"
++		"popf\n\t"
++#ifdef __x86_64__
++		"sub $-128, %rsp\n\t"
++#endif
++	);
++
++	printf("[OK]\tNo SIGTRAP infinite loop detected\n");
++	return 0;
++}
+diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
+index 572ab2cea763f5..90eccd19098505 100644
+--- a/tools/testing/vma/vma_internal.h
++++ b/tools/testing/vma/vma_internal.h
+@@ -793,6 +793,8 @@ static inline void vma_adjust_trans_huge(struct vm_area_struct *vma,
+ 	(void)next;
+ }
+ 
++static inline void hugetlb_split(struct vm_area_struct *, unsigned long) {}
++
+ static inline void vma_iter_free(struct vma_iterator *vmi)
+ {
+ 	mas_destroy(&vmi->mas);
+diff --git a/tools/tracing/rtla/src/utils.c b/tools/tracing/rtla/src/utils.c
+index 4995d35cf3ec6e..d6ab15dcb4907e 100644
+--- a/tools/tracing/rtla/src/utils.c
++++ b/tools/tracing/rtla/src/utils.c
+@@ -227,6 +227,8 @@ long parse_ns_duration(char *val)
+ #  define __NR_sched_setattr	355
+ # elif __s390x__
+ #  define __NR_sched_setattr	345
++# elif __loongarch__
++#  define __NR_sched_setattr	274
+ # endif
+ #endif
+ 


* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-07-06 13:42 Arisu Tachibana
  0 siblings, 0 replies; 19+ messages in thread
From: Arisu Tachibana @ 2025-07-06 13:42 UTC (permalink / raw
  To: gentoo-commits

commit:     d3b265e5f14e6ea2e40fe5964cee3c0787a6adaf
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Sun Jul  6 13:22:22 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Sun Jul  6 13:22:22 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d3b265e5

Linux patch 6.15.5

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README             |     5 +
 1004_linux-6.15.5.patch | 13988 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 13993 insertions(+)

diff --git a/0000_README b/0000_README
index 703fe152..6933269c 100644
--- a/0000_README
+++ b/0000_README
@@ -42,6 +42,7 @@ EXPERIMENTAL
 
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
+
 Patch:  1000_linux-6.15.1.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.15.1
@@ -58,6 +59,10 @@ Patch:  1003_linux-6.15.4.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.15.4
 
+Patch:  1004_linux-6.15.5.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.15.5
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1004_linux-6.15.5.patch b/1004_linux-6.15.5.patch
new file mode 100644
index 00000000..309b027a
--- /dev/null
+++ b/1004_linux-6.15.5.patch
@@ -0,0 +1,13988 @@
+diff --git a/Documentation/devicetree/bindings/serial/8250.yaml b/Documentation/devicetree/bindings/serial/8250.yaml
+index dc0d52920575ff..eb9bd305b3a993 100644
+--- a/Documentation/devicetree/bindings/serial/8250.yaml
++++ b/Documentation/devicetree/bindings/serial/8250.yaml
+@@ -45,7 +45,7 @@ allOf:
+                   - ns16550
+                   - ns16550a
+     then:
+-      anyOf:
++      oneOf:
+         - required: [ clock-frequency ]
+         - required: [ clocks ]
+ 
+diff --git a/Documentation/netlink/specs/tc.yaml b/Documentation/netlink/specs/tc.yaml
+index 953aa837958b3f..5702a6d21038cb 100644
+--- a/Documentation/netlink/specs/tc.yaml
++++ b/Documentation/netlink/specs/tc.yaml
+@@ -227,7 +227,7 @@ definitions:
+         type: u8
+         doc: log(P_max / (qth-max - qth-min))
+       -
+-        name: Scell_log
++        name: Scell-log
+         type: u8
+         doc: cell size for idle damping
+       -
+@@ -248,7 +248,7 @@ definitions:
+         name: DPs
+         type: u32
+       -
+-        name: def_DP
++        name: def-DP
+         type: u32
+       -
+         name: grio
+diff --git a/Makefile b/Makefile
+index c1bde4eef2bfb2..66b61bf9038814 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 15
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm64/boot/dts/qcom/x1-crd.dtsi b/arch/arm64/boot/dts/qcom/x1-crd.dtsi
+new file mode 100644
+index 00000000000000..60f0b32baded32
+--- /dev/null
++++ b/arch/arm64/boot/dts/qcom/x1-crd.dtsi
+@@ -0,0 +1,1277 @@
++// SPDX-License-Identifier: BSD-3-Clause
++/*
++ * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved.
++ */
++
++#include <dt-bindings/gpio/gpio.h>
++#include <dt-bindings/input/gpio-keys.h>
++#include <dt-bindings/input/input.h>
++#include <dt-bindings/pinctrl/qcom,pmic-gpio.h>
++#include <dt-bindings/regulator/qcom,rpmh-regulator.h>
++
++#include "x1e80100-pmics.dtsi"
++
++/ {
++	model = "Qualcomm Technologies, Inc. X1E80100 CRD";
++	compatible = "qcom,x1e80100-crd", "qcom,x1e80100";
++
++	aliases {
++		serial0 = &uart21;
++	};
++
++	wcd938x: audio-codec {
++		compatible = "qcom,wcd9385-codec";
++
++		pinctrl-names = "default";
++		pinctrl-0 = <&wcd_default>;
++
++		qcom,micbias1-microvolt = <1800000>;
++		qcom,micbias2-microvolt = <1800000>;
++		qcom,micbias3-microvolt = <1800000>;
++		qcom,micbias4-microvolt = <1800000>;
++		qcom,mbhc-buttons-vthreshold-microvolt = <75000 150000 237000 500000 500000 500000 500000 500000>;
++		qcom,mbhc-headset-vthreshold-microvolt = <1700000>;
++		qcom,mbhc-headphone-vthreshold-microvolt = <50000>;
++		qcom,rx-device = <&wcd_rx>;
++		qcom,tx-device = <&wcd_tx>;
++
++		reset-gpios = <&tlmm 191 GPIO_ACTIVE_LOW>;
++
++		vdd-buck-supply = <&vreg_l15b_1p8>;
++		vdd-rxtx-supply = <&vreg_l15b_1p8>;
++		vdd-io-supply = <&vreg_l15b_1p8>;
++		vdd-mic-bias-supply = <&vreg_bob1>;
++
++		#sound-dai-cells = <1>;
++	};
++
++	chosen {
++		stdout-path = "serial0:115200n8";
++	};
++
++	gpio-keys {
++		compatible = "gpio-keys";
++
++		pinctrl-0 = <&hall_int_n_default>;
++		pinctrl-names = "default";
++
++		switch-lid {
++			gpios = <&tlmm 92 GPIO_ACTIVE_LOW>;
++			linux,input-type = <EV_SW>;
++			linux,code = <SW_LID>;
++			wakeup-source;
++			wakeup-event-action = <EV_ACT_DEASSERTED>;
++		};
++	};
++
++	pmic-glink {
++		compatible = "qcom,x1e80100-pmic-glink",
++			     "qcom,sm8550-pmic-glink",
++			     "qcom,pmic-glink";
++		#address-cells = <1>;
++		#size-cells = <0>;
++		orientation-gpios = <&tlmm 121 GPIO_ACTIVE_HIGH>,
++				    <&tlmm 123 GPIO_ACTIVE_HIGH>,
++				    <&tlmm 125 GPIO_ACTIVE_HIGH>;
++
++		/* Left-side rear port */
++		connector@0 {
++			compatible = "usb-c-connector";
++			reg = <0>;
++			power-role = "dual";
++			data-role = "dual";
++
++			ports {
++				#address-cells = <1>;
++				#size-cells = <0>;
++
++				port@0 {
++					reg = <0>;
++
++					pmic_glink_ss0_hs_in: endpoint {
++						remote-endpoint = <&usb_1_ss0_dwc3_hs>;
++					};
++				};
++
++				port@1 {
++					reg = <1>;
++
++					pmic_glink_ss0_ss_in: endpoint {
++						remote-endpoint = <&usb_1_ss0_qmpphy_out>;
++					};
++				};
++			};
++		};
++
++		/* Left-side front port */
++		connector@1 {
++			compatible = "usb-c-connector";
++			reg = <1>;
++			power-role = "dual";
++			data-role = "dual";
++
++			ports {
++				#address-cells = <1>;
++				#size-cells = <0>;
++
++				port@0 {
++					reg = <0>;
++
++					pmic_glink_ss1_hs_in: endpoint {
++						remote-endpoint = <&usb_1_ss1_dwc3_hs>;
++					};
++				};
++
++				port@1 {
++					reg = <1>;
++
++					pmic_glink_ss1_ss_in: endpoint {
++						remote-endpoint = <&usb_1_ss1_qmpphy_out>;
++					};
++				};
++			};
++		};
++
++		/* Right-side port */
++		connector@2 {
++			compatible = "usb-c-connector";
++			reg = <2>;
++			power-role = "dual";
++			data-role = "dual";
++
++			ports {
++				#address-cells = <1>;
++				#size-cells = <0>;
++
++				port@0 {
++					reg = <0>;
++
++					pmic_glink_ss2_hs_in: endpoint {
++						remote-endpoint = <&usb_1_ss2_dwc3_hs>;
++					};
++				};
++
++				port@1 {
++					reg = <1>;
++
++					pmic_glink_ss2_ss_in: endpoint {
++						remote-endpoint = <&usb_1_ss2_qmpphy_out>;
++					};
++				};
++			};
++		};
++	};
++
++	reserved-memory {
++		linux,cma {
++			compatible = "shared-dma-pool";
++			size = <0x0 0x8000000>;
++			reusable;
++			linux,cma-default;
++		};
++	};
++
++	sound {
++		compatible = "qcom,x1e80100-sndcard";
++		model = "X1E80100-CRD";
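++		/* each audio-routing entry is a <sink, source> pair */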
++		audio-routing = "WooferLeft IN", "WSA WSA_SPK1 OUT",
++				"TweeterLeft IN", "WSA WSA_SPK2 OUT",
++				"WooferRight IN", "WSA2 WSA_SPK2 OUT",
++				"TweeterRight IN", "WSA2 WSA_SPK2 OUT",
++				"IN1_HPHL", "HPHL_OUT",
++				"IN2_HPHR", "HPHR_OUT",
++				"AMIC2", "MIC BIAS2",
++				"VA DMIC0", "MIC BIAS3",
++				"VA DMIC1", "MIC BIAS3",
++				"VA DMIC2", "MIC BIAS1",
++				"VA DMIC3", "MIC BIAS1",
++				"VA DMIC0", "VA MIC BIAS3",
++				"VA DMIC1", "VA MIC BIAS3",
++				"VA DMIC2", "VA MIC BIAS1",
++				"VA DMIC3", "VA MIC BIAS1",
++				"TX SWR_INPUT1", "ADC2_OUTPUT";
++
++		wcd-playback-dai-link {
++			link-name = "WCD Playback";
++
++			cpu {
++				sound-dai = <&q6apmbedai RX_CODEC_DMA_RX_0>;
++			};
++
++			codec {
++				sound-dai = <&wcd938x 0>, <&swr1 0>, <&lpass_rxmacro 0>;
++			};
++
++			platform {
++				sound-dai = <&q6apm>;
++			};
++		};
++
++		wcd-capture-dai-link {
++			link-name = "WCD Capture";
++
++			cpu {
++				sound-dai = <&q6apmbedai TX_CODEC_DMA_TX_3>;
++			};
++
++			codec {
++				sound-dai = <&wcd938x 1>, <&swr2 1>, <&lpass_txmacro 0>;
++			};
++
++			platform {
++				sound-dai = <&q6apm>;
++			};
++		};
++
++		wsa-dai-link {
++			link-name = "WSA Playback";
++
++			cpu {
++				sound-dai = <&q6apmbedai WSA_CODEC_DMA_RX_0>;
++			};
++
++			codec {
++				sound-dai = <&left_woofer>, <&left_tweeter>,
++					    <&swr0 0>, <&lpass_wsamacro 0>,
++					    <&right_woofer>, <&right_tweeter>,
++					    <&swr3 0>, <&lpass_wsa2macro 0>;
++			};
++
++			platform {
++				sound-dai = <&q6apm>;
++			};
++		};
++
++		va-dai-link {
++			link-name = "VA Capture";
++
++			cpu {
++				sound-dai = <&q6apmbedai VA_CODEC_DMA_TX_0>;
++			};
++
++			codec {
++				sound-dai = <&lpass_vamacro 0>;
++			};
++
++			platform {
++				sound-dai = <&q6apm>;
++			};
++		};
++	};
++
++	vreg_edp_3p3: regulator-edp-3p3 {
++		compatible = "regulator-fixed";
++
++		regulator-name = "VREG_EDP_3P3";
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
++
++		gpio = <&tlmm 70 GPIO_ACTIVE_HIGH>;
++		enable-active-high;
++
++		pinctrl-0 = <&edp_reg_en>;
++		pinctrl-names = "default";
++
++		regulator-boot-on;
++	};
++
++	vreg_misc_3p3: regulator-misc-3p3 {
++		compatible = "regulator-fixed";
++
++		regulator-name = "VREG_MISC_3P3";
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
++
++		gpio = <&pm8550ve_8_gpios 6 GPIO_ACTIVE_HIGH>;
++		enable-active-high;
++
++		pinctrl-names = "default";
++		pinctrl-0 = <&misc_3p3_reg_en>;
++
++		regulator-boot-on;
++		regulator-always-on;
++	};
++
++	vreg_nvme: regulator-nvme {
++		compatible = "regulator-fixed";
++
++		regulator-name = "VREG_NVME_3P3";
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
++
++		gpio = <&tlmm 18 GPIO_ACTIVE_HIGH>;
++		enable-active-high;
++
++		pinctrl-names = "default";
++		pinctrl-0 = <&nvme_reg_en>;
++
++		regulator-boot-on;
++	};
++
++	vph_pwr: regulator-vph-pwr {
++		compatible = "regulator-fixed";
++
++		regulator-name = "vph_pwr";
++		regulator-min-microvolt = <3700000>;
++		regulator-max-microvolt = <3700000>;
++
++		regulator-always-on;
++		regulator-boot-on;
++	};
++
++	vreg_wwan: regulator-wwan {
++		compatible = "regulator-fixed";
++
++		regulator-name = "SDX_VPH_PWR";
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
++
++		gpio = <&tlmm 221 GPIO_ACTIVE_HIGH>;
++		enable-active-high;
++
++		pinctrl-0 = <&wwan_sw_en>;
++		pinctrl-names = "default";
++
++		regulator-boot-on;
++	};
++};
++
++&apps_rsc {
++	regulators-0 {
++		compatible = "qcom,pm8550-rpmh-regulators";
++		qcom,pmic-id = "b";
++
++		vdd-bob1-supply = <&vph_pwr>;
++		vdd-bob2-supply = <&vph_pwr>;
++		vdd-l1-l4-l10-supply = <&vreg_s4c_1p8>;
++		vdd-l2-l13-l14-supply = <&vreg_bob1>;
++		vdd-l5-l16-supply = <&vreg_bob1>;
++		vdd-l6-l7-supply = <&vreg_bob2>;
++		vdd-l8-l9-supply = <&vreg_bob1>;
++		vdd-l12-supply = <&vreg_s5j_1p2>;
++		vdd-l15-supply = <&vreg_s4c_1p8>;
++		vdd-l17-supply = <&vreg_bob2>;
++
++		vreg_bob1: bob1 {
++			regulator-name = "vreg_bob1";
++			regulator-min-microvolt = <3008000>;
++			regulator-max-microvolt = <3960000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_bob2: bob2 {
++			regulator-name = "vreg_bob2";
++			regulator-min-microvolt = <2504000>;
++			regulator-max-microvolt = <3008000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l1b_1p8: ldo1 {
++			regulator-name = "vreg_l1b_1p8";
++			regulator-min-microvolt = <1800000>;
++			regulator-max-microvolt = <1800000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l2b_3p0: ldo2 {
++			regulator-name = "vreg_l2b_3p0";
++			regulator-min-microvolt = <3072000>;
++			regulator-max-microvolt = <3100000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l4b_1p8: ldo4 {
++			regulator-name = "vreg_l4b_1p8";
++			regulator-min-microvolt = <1800000>;
++			regulator-max-microvolt = <1800000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l5b_3p0: ldo5 {
++			regulator-name = "vreg_l5b_3p0";
++			regulator-min-microvolt = <3000000>;
++			regulator-max-microvolt = <3000000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l6b_1p8: ldo6 {
++			regulator-name = "vreg_l6b_1p8";
++			regulator-min-microvolt = <1800000>;
++			regulator-max-microvolt = <2960000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l7b_2p8: ldo7 {
++			regulator-name = "vreg_l7b_2p8";
++			regulator-min-microvolt = <2800000>;
++			regulator-max-microvolt = <2800000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l8b_3p0: ldo8 {
++			regulator-name = "vreg_l8b_3p0";
++			regulator-min-microvolt = <3072000>;
++			regulator-max-microvolt = <3072000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l9b_2p9: ldo9 {
++			regulator-name = "vreg_l9b_2p9";
++			regulator-min-microvolt = <2960000>;
++			regulator-max-microvolt = <2960000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l10b_1p8: ldo10 {
++			regulator-name = "vreg_l10b_1p8";
++			regulator-min-microvolt = <1800000>;
++			regulator-max-microvolt = <1800000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l12b_1p2: ldo12 {
++			regulator-name = "vreg_l12b_1p2";
++			regulator-min-microvolt = <1200000>;
++			regulator-max-microvolt = <1200000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
++		};
++
++		vreg_l13b_3p0: ldo13 {
++			regulator-name = "vreg_l13b_3p0";
++			regulator-min-microvolt = <3072000>;
++			regulator-max-microvolt = <3100000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l14b_3p0: ldo14 {
++			regulator-name = "vreg_l14b_3p0";
++			regulator-min-microvolt = <3072000>;
++			regulator-max-microvolt = <3072000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l15b_1p8: ldo15 {
++			regulator-name = "vreg_l15b_1p8";
++			regulator-min-microvolt = <1800000>;
++			regulator-max-microvolt = <1800000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
++		};
++
++		vreg_l16b_2p9: ldo16 {
++			regulator-name = "vreg_l16b_2p9";
++			regulator-min-microvolt = <2912000>;
++			regulator-max-microvolt = <2912000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l17b_2p5: ldo17 {
++			regulator-name = "vreg_l17b_2p5";
++			regulator-min-microvolt = <2504000>;
++			regulator-max-microvolt = <2504000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++	};
++
++	regulators-1 {
++		compatible = "qcom,pm8550ve-rpmh-regulators";
++		qcom,pmic-id = "c";
++
++		vdd-l1-supply = <&vreg_s5j_1p2>;
++		vdd-l2-supply = <&vreg_s1f_0p7>;
++		vdd-l3-supply = <&vreg_s1f_0p7>;
++		vdd-s4-supply = <&vph_pwr>;
++
++		vreg_s4c_1p8: smps4 {
++			regulator-name = "vreg_s4c_1p8";
++			regulator-min-microvolt = <1856000>;
++			regulator-max-microvolt = <2000000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l1c_1p2: ldo1 {
++			regulator-name = "vreg_l1c_1p2";
++			regulator-min-microvolt = <1200000>;
++			regulator-max-microvolt = <1200000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l2c_0p8: ldo2 {
++			regulator-name = "vreg_l2c_0p8";
++			regulator-min-microvolt = <880000>;
++			regulator-max-microvolt = <920000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l3c_0p8: ldo3 {
++			regulator-name = "vreg_l3c_0p8";
++			regulator-min-microvolt = <880000>;
++			regulator-max-microvolt = <920000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++	};
++
++	regulators-2 {
++		compatible = "qcom,pmc8380-rpmh-regulators";
++		qcom,pmic-id = "d";
++
++		vdd-l1-supply = <&vreg_s1f_0p7>;
++		vdd-l2-supply = <&vreg_s1f_0p7>;
++		vdd-l3-supply = <&vreg_s4c_1p8>;
++		vdd-s1-supply = <&vph_pwr>;
++
++		vreg_l1d_0p8: ldo1 {
++			regulator-name = "vreg_l1d_0p8";
++			regulator-min-microvolt = <880000>;
++			regulator-max-microvolt = <920000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l2d_0p9: ldo2 {
++			regulator-name = "vreg_l2d_0p9";
++			regulator-min-microvolt = <912000>;
++			regulator-max-microvolt = <920000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l3d_1p8: ldo3 {
++			regulator-name = "vreg_l3d_1p8";
++			regulator-min-microvolt = <1800000>;
++			regulator-max-microvolt = <1800000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++	};
++
++	regulators-3 {
++		compatible = "qcom,pmc8380-rpmh-regulators";
++		qcom,pmic-id = "e";
++
++		vdd-l2-supply = <&vreg_s1f_0p7>;
++		vdd-l3-supply = <&vreg_s5j_1p2>;
++
++		vreg_l2e_0p8: ldo2 {
++			regulator-name = "vreg_l2e_0p8";
++			regulator-min-microvolt = <880000>;
++			regulator-max-microvolt = <920000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l3e_1p2: ldo3 {
++			regulator-name = "vreg_l3e_1p2";
++			regulator-min-microvolt = <1200000>;
++			regulator-max-microvolt = <1200000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++	};
++
++	regulators-4 {
++		compatible = "qcom,pmc8380-rpmh-regulators";
++		qcom,pmic-id = "f";
++
++		vdd-l1-supply = <&vreg_s5j_1p2>;
++		vdd-l2-supply = <&vreg_s5j_1p2>;
++		vdd-l3-supply = <&vreg_s5j_1p2>;
++		vdd-s1-supply = <&vph_pwr>;
++
++		vreg_s1f_0p7: smps1 {
++			regulator-name = "vreg_s1f_0p7";
++			regulator-min-microvolt = <700000>;
++			regulator-max-microvolt = <1100000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l1f_1p0: ldo1 {
++			regulator-name = "vreg_l1f_1p0";
++			regulator-min-microvolt = <1024000>;
++			regulator-max-microvolt = <1024000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l2f_1p0: ldo2 {
++			regulator-name = "vreg_l2f_1p0";
++			regulator-min-microvolt = <1024000>;
++			regulator-max-microvolt = <1024000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l3f_1p0: ldo3 {
++			regulator-name = "vreg_l3f_1p0";
++			regulator-min-microvolt = <1024000>;
++			regulator-max-microvolt = <1024000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++	};
++
++	regulators-6 {
++		compatible = "qcom,pm8550ve-rpmh-regulators";
++		qcom,pmic-id = "i";
++
++		vdd-l1-supply = <&vreg_s4c_1p8>;
++		vdd-l2-supply = <&vreg_s5j_1p2>;
++		vdd-l3-supply = <&vreg_s1f_0p7>;
++		vdd-s1-supply = <&vph_pwr>;
++		vdd-s2-supply = <&vph_pwr>;
++
++		vreg_s1i_0p9: smps1 {
++			regulator-name = "vreg_s1i_0p9";
++			regulator-min-microvolt = <900000>;
++			regulator-max-microvolt = <920000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_s2i_1p0: smps2 {
++			regulator-name = "vreg_s2i_1p0";
++			regulator-min-microvolt = <1000000>;
++			regulator-max-microvolt = <1100000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l1i_1p8: ldo1 {
++			regulator-name = "vreg_l1i_1p8";
++			regulator-min-microvolt = <1800000>;
++			regulator-max-microvolt = <1800000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l2i_1p2: ldo2 {
++			regulator-name = "vreg_l2i_1p2";
++			regulator-min-microvolt = <1200000>;
++			regulator-max-microvolt = <1200000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l3i_0p8: ldo3 {
++			regulator-name = "vreg_l3i_0p8";
++			regulator-min-microvolt = <880000>;
++			regulator-max-microvolt = <920000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++	};
++
++	regulators-7 {
++		compatible = "qcom,pm8550ve-rpmh-regulators";
++		qcom,pmic-id = "j";
++
++		vdd-l1-supply = <&vreg_s1f_0p7>;
++		vdd-l2-supply = <&vreg_s5j_1p2>;
++		vdd-l3-supply = <&vreg_s1f_0p7>;
++		vdd-s5-supply = <&vph_pwr>;
++
++		vreg_s5j_1p2: smps5 {
++			regulator-name = "vreg_s5j_1p2";
++			regulator-min-microvolt = <1256000>;
++			regulator-max-microvolt = <1304000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l1j_0p8: ldo1 {
++			regulator-name = "vreg_l1j_0p8";
++			regulator-min-microvolt = <880000>;
++			regulator-max-microvolt = <920000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l2j_1p2: ldo2 {
++			regulator-name = "vreg_l2j_1p2";
++			regulator-min-microvolt = <1256000>;
++			regulator-max-microvolt = <1256000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++
++		vreg_l3j_0p8: ldo3 {
++			regulator-name = "vreg_l3j_0p8";
++			regulator-min-microvolt = <880000>;
++			regulator-max-microvolt = <920000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
++	};
++};
++
++&gpu {
++	status = "okay";
++};
++
++&i2c0 {
++	clock-frequency = <400000>;
++
++	status = "okay";
++
++	touchpad@15 {
++		compatible = "hid-over-i2c";
++		reg = <0x15>;
++
++		hid-descr-addr = <0x1>;
++		interrupts-extended = <&tlmm 3 IRQ_TYPE_LEVEL_LOW>;
++
++		vdd-supply = <&vreg_misc_3p3>;
++		vddl-supply = <&vreg_l12b_1p2>;
++
++		pinctrl-0 = <&tpad_default>;
++		pinctrl-names = "default";
++
++		wakeup-source;
++	};
++
++	keyboard@3a {
++		compatible = "hid-over-i2c";
++		reg = <0x3a>;
++
++		hid-descr-addr = <0x1>;
++		interrupts-extended = <&tlmm 67 IRQ_TYPE_LEVEL_LOW>;
++
++		vdd-supply = <&vreg_misc_3p3>;
++		vddl-supply = <&vreg_l12b_1p2>;
++
++		pinctrl-0 = <&kybd_default>;
++		pinctrl-names = "default";
++
++		wakeup-source;
++	};
++};
++
++&i2c8 {
++	clock-frequency = <400000>;
++
++	status = "okay";
++
++	touchscreen@10 {
++		compatible = "hid-over-i2c";
++		reg = <0x10>;
++
++		hid-descr-addr = <0x1>;
++		interrupts-extended = <&tlmm 51 IRQ_TYPE_LEVEL_LOW>;
++
++		vdd-supply = <&vreg_misc_3p3>;
++		vddl-supply = <&vreg_l15b_1p8>;
++
++		pinctrl-0 = <&ts0_default>;
++		pinctrl-names = "default";
++	};
++};
++
++&lpass_tlmm {
++	spkr_01_sd_n_active: spkr-01-sd-n-active-state {
++		pins = "gpio12";
++		function = "gpio";
++		drive-strength = <16>;
++		bias-disable;
++		output-low;
++	};
++
++	spkr_23_sd_n_active: spkr-23-sd-n-active-state {
++		pins = "gpio13";
++		function = "gpio";
++		drive-strength = <16>;
++		bias-disable;
++		output-low;
++	};
++};
++
++&lpass_vamacro {
++	pinctrl-0 = <&dmic01_default>, <&dmic23_default>;
++	pinctrl-names = "default";
++
++	vdd-micb-supply = <&vreg_l1b_1p8>;
++	qcom,dmic-sample-rate = <4800000>;
++};
++
++&mdss {
++	status = "okay";
++};
++
++&mdss_dp3 {
++	compatible = "qcom,x1e80100-dp";
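++	/* drop the #sound-dai-cells inherited from the SoC dtsi; this instance drives the eDP panel on the aux-bus below */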
++	/delete-property/ #sound-dai-cells;
++
++	status = "okay";
++
++	aux-bus {
++		panel {
++			compatible = "samsung,atna45af01", "samsung,atna33xc20";
++			enable-gpios = <&pmc8380_3_gpios 4 GPIO_ACTIVE_HIGH>;
++			power-supply = <&vreg_edp_3p3>;
++
++			pinctrl-0 = <&edp_bl_en>;
++			pinctrl-names = "default";
++
++			port {
++				edp_panel_in: endpoint {
++					remote-endpoint = <&mdss_dp3_out>;
++				};
++			};
++		};
++	};
++
++	ports {
++		port@1 {
++			reg = <1>;
++			mdss_dp3_out: endpoint {
++				data-lanes = <0 1 2 3>;
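++				/* DisplayPort link rates in Hz: RBR, HBR, HBR2, HBR3 */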
++				link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
++
++				remote-endpoint = <&edp_panel_in>;
++			};
++		};
++	};
++};
++
++&mdss_dp3_phy {
++	vdda-phy-supply = <&vreg_l3j_0p8>;
++	vdda-pll-supply = <&vreg_l2j_1p2>;
++
++	status = "okay";
++};
++
++&pcie4 {
++	perst-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>;
++	wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>;
++
++	pinctrl-0 = <&pcie4_default>;
++	pinctrl-names = "default";
++
++	status = "okay";
++};
++
++&pcie4_phy {
++	vdda-phy-supply = <&vreg_l3i_0p8>;
++	vdda-pll-supply = <&vreg_l3e_1p2>;
++
++	status = "okay";
++};
++
++&pcie5 {
++	perst-gpios = <&tlmm 149 GPIO_ACTIVE_LOW>;
++	wake-gpios = <&tlmm 151 GPIO_ACTIVE_LOW>;
++
++	vddpe-3v3-supply = <&vreg_wwan>;
++
++	pinctrl-0 = <&pcie5_default>;
++	pinctrl-names = "default";
++
++	status = "okay";
++};
++
++&pcie5_phy {
++	vdda-phy-supply = <&vreg_l3i_0p8>;
++	vdda-pll-supply = <&vreg_l3e_1p2>;
++
++	status = "okay";
++};
++
++&pcie6a {
++	perst-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>;
++	wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>;
++
++	vddpe-3v3-supply = <&vreg_nvme>;
++
++	pinctrl-names = "default";
++	pinctrl-0 = <&pcie6a_default>;
++
++	status = "okay";
++};
++
++&pcie6a_phy {
++	vdda-phy-supply = <&vreg_l1d_0p8>;
++	vdda-pll-supply = <&vreg_l2j_1p2>;
++
++	status = "okay";
++};
++
++&pm8550ve_8_gpios {
++	misc_3p3_reg_en: misc-3p3-reg-en-state {
++		pins = "gpio6";
++		function = "normal";
++		bias-disable;
++		input-disable;
++		output-enable;
++		drive-push-pull;
++		power-source = <1>; /* 1.8 V */
++		qcom,drive-strength = <PMIC_GPIO_STRENGTH_LOW>;
++	};
++};
++
++&pmc8380_3_gpios {
++	edp_bl_en: edp-bl-en-state {
++		pins = "gpio4";
++		function = "normal";
++		power-source = <1>; /* 1.8V */
++		input-disable;
++		output-enable;
++	};
++};
++
++&qupv3_0 {
++	status = "okay";
++};
++
++&qupv3_1 {
++	status = "okay";
++};
++
++&qupv3_2 {
++	status = "okay";
++};
++
++&remoteproc_adsp {
++	firmware-name = "qcom/x1e80100/adsp.mbn",
++			"qcom/x1e80100/adsp_dtb.mbn";
++
++	status = "okay";
++};
++
++&remoteproc_cdsp {
++	firmware-name = "qcom/x1e80100/cdsp.mbn",
++			"qcom/x1e80100/cdsp_dtb.mbn";
++
++	status = "okay";
++};
++
++&smb2360_0 {
++	status = "okay";
++};
++
++&smb2360_0_eusb2_repeater {
++	vdd18-supply = <&vreg_l3d_1p8>;
++	vdd3-supply = <&vreg_l2b_3p0>;
++};
++
++&smb2360_1 {
++	status = "okay";
++};
++
++&smb2360_1_eusb2_repeater {
++	vdd18-supply = <&vreg_l3d_1p8>;
++	vdd3-supply = <&vreg_l14b_3p0>;
++};
++
++&smb2360_2 {
++	status = "okay";
++};
++
++&smb2360_2_eusb2_repeater {
++	vdd18-supply = <&vreg_l3d_1p8>;
++	vdd3-supply = <&vreg_l8b_3p0>;
++};
++
++&swr0 {
++	status = "okay";
++
++	pinctrl-0 = <&wsa_swr_active>, <&spkr_01_sd_n_active>;
++	pinctrl-names = "default";
++
++	/* WSA8845, Left Woofer */
++	left_woofer: speaker@0,0 {
++		compatible = "sdw20217020400";
++		reg = <0 0>;
++		reset-gpios = <&lpass_tlmm 12 GPIO_ACTIVE_LOW>;
++		#sound-dai-cells = <0>;
++		sound-name-prefix = "WooferLeft";
++		vdd-1p8-supply = <&vreg_l15b_1p8>;
++		vdd-io-supply = <&vreg_l12b_1p2>;
++		qcom,port-mapping = <1 2 3 7 10 13>;
++	};
++
++	/* WSA8845, Left Tweeter */
++	left_tweeter: speaker@0,1 {
++		compatible = "sdw20217020400";
++		reg = <0 1>;
++		reset-gpios = <&lpass_tlmm 12 GPIO_ACTIVE_LOW>;
++		#sound-dai-cells = <0>;
++		sound-name-prefix = "TweeterLeft";
++		vdd-1p8-supply = <&vreg_l15b_1p8>;
++		vdd-io-supply = <&vreg_l12b_1p2>;
++		qcom,port-mapping = <4 5 6 7 11 13>;
++	};
++};
++
++&swr1 {
++	status = "okay";
++
++	/* WCD9385 RX */
++	wcd_rx: codec@0,4 {
++		compatible = "sdw20217010d00";
++		reg = <0 4>;
++		qcom,rx-port-mapping = <1 2 3 4 5>;
++	};
++};
++
++&swr2 {
++	status = "okay";
++
++	/* WCD9385 TX */
++	wcd_tx: codec@0,3 {
++		compatible = "sdw20217010d00";
++		reg = <0 3>;
++		qcom,tx-port-mapping = <2 2 3 4>;
++	};
++};
++
++&swr3 {
++	status = "okay";
++
++	pinctrl-0 = <&wsa2_swr_active>, <&spkr_23_sd_n_active>;
++	pinctrl-names = "default";
++
++	/* WSA8845, Right Woofer */
++	right_woofer: speaker@0,0 {
++		compatible = "sdw20217020400";
++		reg = <0 0>;
++		reset-gpios = <&lpass_tlmm 13 GPIO_ACTIVE_LOW>;
++		#sound-dai-cells = <0>;
++		sound-name-prefix = "WooferRight";
++		vdd-1p8-supply = <&vreg_l15b_1p8>;
++		vdd-io-supply = <&vreg_l12b_1p2>;
++		qcom,port-mapping = <1 2 3 7 10 13>;
++	};
++
++	/* WSA8845, Right Tweeter */
++	right_tweeter: speaker@0,1 {
++		compatible = "sdw20217020400";
++		reg = <0 1>;
++		reset-gpios = <&lpass_tlmm 13 GPIO_ACTIVE_LOW>;
++		#sound-dai-cells = <0>;
++		sound-name-prefix = "TweeterRight";
++		vdd-1p8-supply = <&vreg_l15b_1p8>;
++		vdd-io-supply = <&vreg_l12b_1p2>;
++		qcom,port-mapping = <4 5 6 7 11 13>;
++	};
++};
++
++&tlmm {
++	gpio-reserved-ranges = <34 2>, /* Unused */
++			       <44 4>, /* SPI (TPM) */
++			       <238 1>; /* UFS Reset */
++
++	edp_reg_en: edp-reg-en-state {
++		pins = "gpio70";
++		function = "gpio";
++		drive-strength = <16>;
++		bias-disable;
++	};
++
++	hall_int_n_default: hall-int-n-state {
++		pins = "gpio92";
++		function = "gpio";
++		bias-disable;
++	};
++
++	kybd_default: kybd-default-state {
++		pins = "gpio67";
++		function = "gpio";
++		bias-disable;
++	};
++
++	nvme_reg_en: nvme-reg-en-state {
++		pins = "gpio18";
++		function = "gpio";
++		drive-strength = <2>;
++		bias-disable;
++	};
++
++	pcie4_default: pcie4-default-state {
++		clkreq-n-pins {
++			pins = "gpio147";
++			function = "pcie4_clk";
++			drive-strength = <2>;
++			bias-pull-up;
++		};
++
++		perst-n-pins {
++			pins = "gpio146";
++			function = "gpio";
++			drive-strength = <2>;
++			bias-disable;
++		};
++
++		wake-n-pins {
++			pins = "gpio148";
++			function = "gpio";
++			drive-strength = <2>;
++			bias-pull-up;
++		};
++	};
++
++	pcie5_default: pcie5-default-state {
++		clkreq-n-pins {
++			pins = "gpio150";
++			function = "pcie5_clk";
++			drive-strength = <2>;
++			bias-pull-up;
++		};
++
++		perst-n-pins {
++			pins = "gpio149";
++			function = "gpio";
++			drive-strength = <2>;
++			bias-disable;
++		};
++
++		wake-n-pins {
++			pins = "gpio151";
++			function = "gpio";
++			drive-strength = <2>;
++			bias-pull-up;
++		};
++	};
++
++	pcie6a_default: pcie6a-default-state {
++		clkreq-n-pins {
++			pins = "gpio153";
++			function = "pcie6a_clk";
++			drive-strength = <2>;
++			bias-pull-up;
++		};
++
++		perst-n-pins {
++			pins = "gpio152";
++			function = "gpio";
++			drive-strength = <2>;
++			bias-disable;
++		};
++
++		wake-n-pins {
++			pins = "gpio154";
++			function = "gpio";
++			drive-strength = <2>;
++			bias-pull-up;
++		};
++	};
++
++	tpad_default: tpad-default-state {
++		pins = "gpio3";
++		function = "gpio";
++		bias-disable;
++	};
++
++	ts0_default: ts0-default-state {
++		int-n-pins {
++			pins = "gpio51";
++			function = "gpio";
++			bias-disable;
++		};
++
++		reset-n-pins {
++			pins = "gpio48";
++			function = "gpio";
++			output-high;
++			drive-strength = <16>;
++		};
++	};
++
++	wcd_default: wcd-reset-n-active-state {
++		pins = "gpio191";
++		function = "gpio";
++		drive-strength = <16>;
++		bias-disable;
++		output-low;
++	};
++
++	wwan_sw_en: wwan-sw-en-state {
++		pins = "gpio221";
++		function = "gpio";
++		drive-strength = <4>;
++		bias-disable;
++	};
++};
++
++&uart21 {
++	compatible = "qcom,geni-debug-uart";
++	status = "okay";
++};
++
++&usb_1_ss0_hsphy {
++	vdd-supply = <&vreg_l3j_0p8>;
++	vdda12-supply = <&vreg_l2j_1p2>;
++
++	phys = <&smb2360_0_eusb2_repeater>;
++
++	status = "okay";
++};
++
++&usb_1_ss0_qmpphy {
++	vdda-phy-supply = <&vreg_l2j_1p2>;
++	vdda-pll-supply = <&vreg_l1j_0p8>;
++
++	status = "okay";
++};
++
++&usb_1_ss0 {
++	status = "okay";
++};
++
++&usb_1_ss0_dwc3 {
++	dr_mode = "host";
++};
++
++&usb_1_ss0_dwc3_hs {
++	remote-endpoint = <&pmic_glink_ss0_hs_in>;
++};
++
++&usb_1_ss0_qmpphy_out {
++	remote-endpoint = <&pmic_glink_ss0_ss_in>;
++};
++
++&usb_1_ss1_hsphy {
++	vdd-supply = <&vreg_l3j_0p8>;
++	vdda12-supply = <&vreg_l2j_1p2>;
++
++	phys = <&smb2360_1_eusb2_repeater>;
++
++	status = "okay";
++};
++
++&usb_1_ss1_qmpphy {
++	vdda-phy-supply = <&vreg_l2j_1p2>;
++	vdda-pll-supply = <&vreg_l2d_0p9>;
++
++	status = "okay";
++};
++
++&usb_1_ss1 {
++	status = "okay";
++};
++
++&usb_1_ss1_dwc3 {
++	dr_mode = "host";
++};
++
++&usb_1_ss1_dwc3_hs {
++	remote-endpoint = <&pmic_glink_ss1_hs_in>;
++};
++
++&usb_1_ss1_qmpphy_out {
++	remote-endpoint = <&pmic_glink_ss1_ss_in>;
++};
++
++&usb_1_ss2_hsphy {
++	vdd-supply = <&vreg_l3j_0p8>;
++	vdda12-supply = <&vreg_l2j_1p2>;
++
++	phys = <&smb2360_2_eusb2_repeater>;
++
++	status = "okay";
++};
++
++&usb_1_ss2_qmpphy {
++	vdda-phy-supply = <&vreg_l2j_1p2>;
++	vdda-pll-supply = <&vreg_l2d_0p9>;
++
++	status = "okay";
++};
++
++&usb_1_ss2 {
++	status = "okay";
++};
++
++&usb_1_ss2_dwc3 {
++	dr_mode = "host";
++};
++
++&usb_1_ss2_dwc3_hs {
++	remote-endpoint = <&pmic_glink_ss2_hs_in>;
++};
++
++&usb_1_ss2_qmpphy_out {
++	remote-endpoint = <&pmic_glink_ss2_ss_in>;
++};
+diff --git a/arch/arm64/boot/dts/qcom/x1e78100-lenovo-thinkpad-t14s.dts b/arch/arm64/boot/dts/qcom/x1e78100-lenovo-thinkpad-t14s.dts
+index b2c2347f54fa65..999d966b448694 100644
+--- a/arch/arm64/boot/dts/qcom/x1e78100-lenovo-thinkpad-t14s.dts
++++ b/arch/arm64/boot/dts/qcom/x1e78100-lenovo-thinkpad-t14s.dts
+@@ -9,6 +9,7 @@
+ #include <dt-bindings/gpio/gpio.h>
+ #include <dt-bindings/input/gpio-keys.h>
+ #include <dt-bindings/input/input.h>
++#include <dt-bindings/pinctrl/qcom,pmic-gpio.h>
+ #include <dt-bindings/regulator/qcom,rpmh-regulator.h>
+ 
+ #include "x1e80100.dtsi"
+@@ -153,6 +154,23 @@ vreg_edp_3p3: regulator-edp-3p3 {
+ 		regulator-boot-on;
+ 	};
+ 
++	vreg_misc_3p3: regulator-misc-3p3 {
++		compatible = "regulator-fixed";
++
++		regulator-name = "VCC3B";
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
++
++		gpio = <&pm8550ve_8_gpios 6 GPIO_ACTIVE_HIGH>;
++		enable-active-high;
++
++		pinctrl-0 = <&misc_3p3_reg_en>;
++		pinctrl-names = "default";
++
++		regulator-boot-on;
++		regulator-always-on;
++	};
++
+ 	vreg_nvme: regulator-nvme {
+ 		compatible = "regulator-fixed";
+ 
+@@ -344,6 +362,7 @@ vreg_l12b_1p2: ldo12 {
+ 			regulator-min-microvolt = <1200000>;
+ 			regulator-max-microvolt = <1200000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
+ 		};
+ 
+ 		vreg_l13b_3p0: ldo13 {
+@@ -365,6 +384,7 @@ vreg_l15b_1p8: ldo15 {
+ 			regulator-min-microvolt = <1800000>;
+ 			regulator-max-microvolt = <1800000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
+ 		};
+ 
+ 		vreg_l17b_2p5: ldo17 {
+@@ -578,6 +598,9 @@ touchpad@15 {
+ 		hid-descr-addr = <0x1>;
+ 		interrupts-extended = <&tlmm 3 IRQ_TYPE_LEVEL_LOW>;
+ 
++		vdd-supply = <&vreg_misc_3p3>;
++		vddl-supply = <&vreg_l12b_1p2>;
++
+ 		wakeup-source;
+ 	};
+ 
+@@ -589,6 +612,9 @@ touchpad@2c {
+ 		hid-descr-addr = <0x20>;
+ 		interrupts-extended = <&tlmm 3 IRQ_TYPE_LEVEL_LOW>;
+ 
++		vdd-supply = <&vreg_misc_3p3>;
++		vddl-supply = <&vreg_l12b_1p2>;
++
+ 		wakeup-source;
+ 	};
+ 
+@@ -600,6 +626,9 @@ keyboard@3a {
+ 		hid-descr-addr = <0x1>;
+ 		interrupts-extended = <&tlmm 67 IRQ_TYPE_LEVEL_LOW>;
+ 
++		vdd-supply = <&vreg_misc_3p3>;
++		vddl-supply = <&vreg_l15b_1p8>;
++
+ 		pinctrl-0 = <&kybd_default>;
+ 		pinctrl-names = "default";
+ 
+@@ -668,6 +697,9 @@ touchscreen@10 {
+ 		hid-descr-addr = <0x1>;
+ 		interrupts-extended = <&tlmm 51 IRQ_TYPE_LEVEL_LOW>;
+ 
++		vdd-supply = <&vreg_misc_3p3>;
++		vddl-supply = <&vreg_l15b_1p8>;
++
+ 		pinctrl-0 = <&ts0_default>;
+ 		pinctrl-names = "default";
+ 	};
+@@ -787,6 +819,19 @@ edp_bl_en: edp-bl-en-state {
+ 	};
+ };
+ 
++&pm8550ve_8_gpios {
++	misc_3p3_reg_en: misc-3p3-reg-en-state {
++		pins = "gpio6";
++		function = "normal";
++		bias-disable;
++		drive-push-pull;
++		input-disable;
++		output-enable;
++		power-source = <1>; /* 1.8 V */
++		qcom,drive-strength = <PMIC_GPIO_STRENGTH_LOW>;
++	};
++};
++
+ &qupv3_0 {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
+index ff5b3472fafd35..976b8e44b5763b 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
+@@ -5,1278 +5,14 @@
+ 
+ /dts-v1/;
+ 
+-#include <dt-bindings/gpio/gpio.h>
+-#include <dt-bindings/input/gpio-keys.h>
+-#include <dt-bindings/input/input.h>
+-#include <dt-bindings/pinctrl/qcom,pmic-gpio.h>
+-#include <dt-bindings/regulator/qcom,rpmh-regulator.h>
+-
+ #include "x1e80100.dtsi"
+-#include "x1e80100-pmics.dtsi"
++#include "x1-crd.dtsi"
+ 
+ / {
+ 	model = "Qualcomm Technologies, Inc. X1E80100 CRD";
+ 	compatible = "qcom,x1e80100-crd", "qcom,x1e80100";
+-
+-	aliases {
+-		serial0 = &uart21;
+-	};
+-
+-	wcd938x: audio-codec {
+-		compatible = "qcom,wcd9385-codec";
+-
+-		pinctrl-names = "default";
+-		pinctrl-0 = <&wcd_default>;
+-
+-		qcom,micbias1-microvolt = <1800000>;
+-		qcom,micbias2-microvolt = <1800000>;
+-		qcom,micbias3-microvolt = <1800000>;
+-		qcom,micbias4-microvolt = <1800000>;
+-		qcom,mbhc-buttons-vthreshold-microvolt = <75000 150000 237000 500000 500000 500000 500000 500000>;
+-		qcom,mbhc-headset-vthreshold-microvolt = <1700000>;
+-		qcom,mbhc-headphone-vthreshold-microvolt = <50000>;
+-		qcom,rx-device = <&wcd_rx>;
+-		qcom,tx-device = <&wcd_tx>;
+-
+-		reset-gpios = <&tlmm 191 GPIO_ACTIVE_LOW>;
+-
+-		vdd-buck-supply = <&vreg_l15b_1p8>;
+-		vdd-rxtx-supply = <&vreg_l15b_1p8>;
+-		vdd-io-supply = <&vreg_l15b_1p8>;
+-		vdd-mic-bias-supply = <&vreg_bob1>;
+-
+-		#sound-dai-cells = <1>;
+-	};
+-
+-	chosen {
+-		stdout-path = "serial0:115200n8";
+-	};
+-
+-	gpio-keys {
+-		compatible = "gpio-keys";
+-
+-		pinctrl-0 = <&hall_int_n_default>;
+-		pinctrl-names = "default";
+-
+-		switch-lid {
+-			gpios = <&tlmm 92 GPIO_ACTIVE_LOW>;
+-			linux,input-type = <EV_SW>;
+-			linux,code = <SW_LID>;
+-			wakeup-source;
+-			wakeup-event-action = <EV_ACT_DEASSERTED>;
+-		};
+-	};
+-
+-	pmic-glink {
+-		compatible = "qcom,x1e80100-pmic-glink",
+-			     "qcom,sm8550-pmic-glink",
+-			     "qcom,pmic-glink";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+-		orientation-gpios = <&tlmm 121 GPIO_ACTIVE_HIGH>,
+-				    <&tlmm 123 GPIO_ACTIVE_HIGH>,
+-				    <&tlmm 125 GPIO_ACTIVE_HIGH>;
+-
+-		/* Left-side rear port */
+-		connector@0 {
+-			compatible = "usb-c-connector";
+-			reg = <0>;
+-			power-role = "dual";
+-			data-role = "dual";
+-
+-			ports {
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+-
+-				port@0 {
+-					reg = <0>;
+-
+-					pmic_glink_ss0_hs_in: endpoint {
+-						remote-endpoint = <&usb_1_ss0_dwc3_hs>;
+-					};
+-				};
+-
+-				port@1 {
+-					reg = <1>;
+-
+-					pmic_glink_ss0_ss_in: endpoint {
+-						remote-endpoint = <&usb_1_ss0_qmpphy_out>;
+-					};
+-				};
+-			};
+-		};
+-
+-		/* Left-side front port */
+-		connector@1 {
+-			compatible = "usb-c-connector";
+-			reg = <1>;
+-			power-role = "dual";
+-			data-role = "dual";
+-
+-			ports {
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+-
+-				port@0 {
+-					reg = <0>;
+-
+-					pmic_glink_ss1_hs_in: endpoint {
+-						remote-endpoint = <&usb_1_ss1_dwc3_hs>;
+-					};
+-				};
+-
+-				port@1 {
+-					reg = <1>;
+-
+-					pmic_glink_ss1_ss_in: endpoint {
+-						remote-endpoint = <&usb_1_ss1_qmpphy_out>;
+-					};
+-				};
+-			};
+-		};
+-
+-		/* Right-side port */
+-		connector@2 {
+-			compatible = "usb-c-connector";
+-			reg = <2>;
+-			power-role = "dual";
+-			data-role = "dual";
+-
+-			ports {
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+-
+-				port@0 {
+-					reg = <0>;
+-
+-					pmic_glink_ss2_hs_in: endpoint {
+-						remote-endpoint = <&usb_1_ss2_dwc3_hs>;
+-					};
+-				};
+-
+-				port@1 {
+-					reg = <1>;
+-
+-					pmic_glink_ss2_ss_in: endpoint {
+-						remote-endpoint = <&usb_1_ss2_qmpphy_out>;
+-					};
+-				};
+-			};
+-		};
+-	};
+-
+-	reserved-memory {
+-		linux,cma {
+-			compatible = "shared-dma-pool";
+-			size = <0x0 0x8000000>;
+-			reusable;
+-			linux,cma-default;
+-		};
+-	};
+-
+-	sound {
+-		compatible = "qcom,x1e80100-sndcard";
+-		model = "X1E80100-CRD";
+-		audio-routing = "WooferLeft IN", "WSA WSA_SPK1 OUT",
+-				"TweeterLeft IN", "WSA WSA_SPK2 OUT",
+-				"WooferRight IN", "WSA2 WSA_SPK2 OUT",
+-				"TweeterRight IN", "WSA2 WSA_SPK2 OUT",
+-				"IN1_HPHL", "HPHL_OUT",
+-				"IN2_HPHR", "HPHR_OUT",
+-				"AMIC2", "MIC BIAS2",
+-				"VA DMIC0", "MIC BIAS3",
+-				"VA DMIC1", "MIC BIAS3",
+-				"VA DMIC2", "MIC BIAS1",
+-				"VA DMIC3", "MIC BIAS1",
+-				"VA DMIC0", "VA MIC BIAS3",
+-				"VA DMIC1", "VA MIC BIAS3",
+-				"VA DMIC2", "VA MIC BIAS1",
+-				"VA DMIC3", "VA MIC BIAS1",
+-				"TX SWR_INPUT1", "ADC2_OUTPUT";
+-
+-		wcd-playback-dai-link {
+-			link-name = "WCD Playback";
+-
+-			cpu {
+-				sound-dai = <&q6apmbedai RX_CODEC_DMA_RX_0>;
+-			};
+-
+-			codec {
+-				sound-dai = <&wcd938x 0>, <&swr1 0>, <&lpass_rxmacro 0>;
+-			};
+-
+-			platform {
+-				sound-dai = <&q6apm>;
+-			};
+-		};
+-
+-		wcd-capture-dai-link {
+-			link-name = "WCD Capture";
+-
+-			cpu {
+-				sound-dai = <&q6apmbedai TX_CODEC_DMA_TX_3>;
+-			};
+-
+-			codec {
+-				sound-dai = <&wcd938x 1>, <&swr2 1>, <&lpass_txmacro 0>;
+-			};
+-
+-			platform {
+-				sound-dai = <&q6apm>;
+-			};
+-		};
+-
+-		wsa-dai-link {
+-			link-name = "WSA Playback";
+-
+-			cpu {
+-				sound-dai = <&q6apmbedai WSA_CODEC_DMA_RX_0>;
+-			};
+-
+-			codec {
+-				sound-dai = <&left_woofer>, <&left_tweeter>,
+-					    <&swr0 0>, <&lpass_wsamacro 0>,
+-					    <&right_woofer>, <&right_tweeter>,
+-					    <&swr3 0>, <&lpass_wsa2macro 0>;
+-			};
+-
+-			platform {
+-				sound-dai = <&q6apm>;
+-			};
+-		};
+-
+-		va-dai-link {
+-			link-name = "VA Capture";
+-
+-			cpu {
+-				sound-dai = <&q6apmbedai VA_CODEC_DMA_TX_0>;
+-			};
+-
+-			codec {
+-				sound-dai = <&lpass_vamacro 0>;
+-			};
+-
+-			platform {
+-				sound-dai = <&q6apm>;
+-			};
+-		};
+-	};
+-
+-	vreg_edp_3p3: regulator-edp-3p3 {
+-		compatible = "regulator-fixed";
+-
+-		regulator-name = "VREG_EDP_3P3";
+-		regulator-min-microvolt = <3300000>;
+-		regulator-max-microvolt = <3300000>;
+-
+-		gpio = <&tlmm 70 GPIO_ACTIVE_HIGH>;
+-		enable-active-high;
+-
+-		pinctrl-0 = <&edp_reg_en>;
+-		pinctrl-names = "default";
+-
+-		regulator-boot-on;
+-	};
+-
+-	vreg_misc_3p3: regulator-misc-3p3 {
+-		compatible = "regulator-fixed";
+-
+-		regulator-name = "VREG_MISC_3P3";
+-		regulator-min-microvolt = <3300000>;
+-		regulator-max-microvolt = <3300000>;
+-
+-		gpio = <&pm8550ve_8_gpios 6 GPIO_ACTIVE_HIGH>;
+-		enable-active-high;
+-
+-		pinctrl-names = "default";
+-		pinctrl-0 = <&misc_3p3_reg_en>;
+-
+-		regulator-boot-on;
+-		regulator-always-on;
+-	};
+-
+-	vreg_nvme: regulator-nvme {
+-		compatible = "regulator-fixed";
+-
+-		regulator-name = "VREG_NVME_3P3";
+-		regulator-min-microvolt = <3300000>;
+-		regulator-max-microvolt = <3300000>;
+-
+-		gpio = <&tlmm 18 GPIO_ACTIVE_HIGH>;
+-		enable-active-high;
+-
+-		pinctrl-names = "default";
+-		pinctrl-0 = <&nvme_reg_en>;
+-
+-		regulator-boot-on;
+-	};
+-
+-	vph_pwr: regulator-vph-pwr {
+-		compatible = "regulator-fixed";
+-
+-		regulator-name = "vph_pwr";
+-		regulator-min-microvolt = <3700000>;
+-		regulator-max-microvolt = <3700000>;
+-
+-		regulator-always-on;
+-		regulator-boot-on;
+-	};
+-
+-	vreg_wwan: regulator-wwan {
+-		compatible = "regulator-fixed";
+-
+-		regulator-name = "SDX_VPH_PWR";
+-		regulator-min-microvolt = <3300000>;
+-		regulator-max-microvolt = <3300000>;
+-
+-		gpio = <&tlmm 221 GPIO_ACTIVE_HIGH>;
+-		enable-active-high;
+-
+-		pinctrl-0 = <&wwan_sw_en>;
+-		pinctrl-names = "default";
+-
+-		regulator-boot-on;
+-	};
+-};
+-
+-&apps_rsc {
+-	regulators-0 {
+-		compatible = "qcom,pm8550-rpmh-regulators";
+-		qcom,pmic-id = "b";
+-
+-		vdd-bob1-supply = <&vph_pwr>;
+-		vdd-bob2-supply = <&vph_pwr>;
+-		vdd-l1-l4-l10-supply = <&vreg_s4c_1p8>;
+-		vdd-l2-l13-l14-supply = <&vreg_bob1>;
+-		vdd-l5-l16-supply = <&vreg_bob1>;
+-		vdd-l6-l7-supply = <&vreg_bob2>;
+-		vdd-l8-l9-supply = <&vreg_bob1>;
+-		vdd-l12-supply = <&vreg_s5j_1p2>;
+-		vdd-l15-supply = <&vreg_s4c_1p8>;
+-		vdd-l17-supply = <&vreg_bob2>;
+-
+-		vreg_bob1: bob1 {
+-			regulator-name = "vreg_bob1";
+-			regulator-min-microvolt = <3008000>;
+-			regulator-max-microvolt = <3960000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_bob2: bob2 {
+-			regulator-name = "vreg_bob2";
+-			regulator-min-microvolt = <2504000>;
+-			regulator-max-microvolt = <3008000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l1b_1p8: ldo1 {
+-			regulator-name = "vreg_l1b_1p8";
+-			regulator-min-microvolt = <1800000>;
+-			regulator-max-microvolt = <1800000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l2b_3p0: ldo2 {
+-			regulator-name = "vreg_l2b_3p0";
+-			regulator-min-microvolt = <3072000>;
+-			regulator-max-microvolt = <3100000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l4b_1p8: ldo4 {
+-			regulator-name = "vreg_l4b_1p8";
+-			regulator-min-microvolt = <1800000>;
+-			regulator-max-microvolt = <1800000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l5b_3p0: ldo5 {
+-			regulator-name = "vreg_l5b_3p0";
+-			regulator-min-microvolt = <3000000>;
+-			regulator-max-microvolt = <3000000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l6b_1p8: ldo6 {
+-			regulator-name = "vreg_l6b_1p8";
+-			regulator-min-microvolt = <1800000>;
+-			regulator-max-microvolt = <2960000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l7b_2p8: ldo7 {
+-			regulator-name = "vreg_l7b_2p8";
+-			regulator-min-microvolt = <2800000>;
+-			regulator-max-microvolt = <2800000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l8b_3p0: ldo8 {
+-			regulator-name = "vreg_l8b_3p0";
+-			regulator-min-microvolt = <3072000>;
+-			regulator-max-microvolt = <3072000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l9b_2p9: ldo9 {
+-			regulator-name = "vreg_l9b_2p9";
+-			regulator-min-microvolt = <2960000>;
+-			regulator-max-microvolt = <2960000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l10b_1p8: ldo10 {
+-			regulator-name = "vreg_l10b_1p8";
+-			regulator-min-microvolt = <1800000>;
+-			regulator-max-microvolt = <1800000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l12b_1p2: ldo12 {
+-			regulator-name = "vreg_l12b_1p2";
+-			regulator-min-microvolt = <1200000>;
+-			regulator-max-microvolt = <1200000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l13b_3p0: ldo13 {
+-			regulator-name = "vreg_l13b_3p0";
+-			regulator-min-microvolt = <3072000>;
+-			regulator-max-microvolt = <3100000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l14b_3p0: ldo14 {
+-			regulator-name = "vreg_l14b_3p0";
+-			regulator-min-microvolt = <3072000>;
+-			regulator-max-microvolt = <3072000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l15b_1p8: ldo15 {
+-			regulator-name = "vreg_l15b_1p8";
+-			regulator-min-microvolt = <1800000>;
+-			regulator-max-microvolt = <1800000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l16b_2p9: ldo16 {
+-			regulator-name = "vreg_l16b_2p9";
+-			regulator-min-microvolt = <2912000>;
+-			regulator-max-microvolt = <2912000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l17b_2p5: ldo17 {
+-			regulator-name = "vreg_l17b_2p5";
+-			regulator-min-microvolt = <2504000>;
+-			regulator-max-microvolt = <2504000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-	};
+-
+-	regulators-1 {
+-		compatible = "qcom,pm8550ve-rpmh-regulators";
+-		qcom,pmic-id = "c";
+-
+-		vdd-l1-supply = <&vreg_s5j_1p2>;
+-		vdd-l2-supply = <&vreg_s1f_0p7>;
+-		vdd-l3-supply = <&vreg_s1f_0p7>;
+-		vdd-s4-supply = <&vph_pwr>;
+-
+-		vreg_s4c_1p8: smps4 {
+-			regulator-name = "vreg_s4c_1p8";
+-			regulator-min-microvolt = <1856000>;
+-			regulator-max-microvolt = <2000000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l1c_1p2: ldo1 {
+-			regulator-name = "vreg_l1c_1p2";
+-			regulator-min-microvolt = <1200000>;
+-			regulator-max-microvolt = <1200000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l2c_0p8: ldo2 {
+-			regulator-name = "vreg_l2c_0p8";
+-			regulator-min-microvolt = <880000>;
+-			regulator-max-microvolt = <920000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l3c_0p8: ldo3 {
+-			regulator-name = "vreg_l3c_0p8";
+-			regulator-min-microvolt = <880000>;
+-			regulator-max-microvolt = <920000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-	};
+-
+-	regulators-2 {
+-		compatible = "qcom,pmc8380-rpmh-regulators";
+-		qcom,pmic-id = "d";
+-
+-		vdd-l1-supply = <&vreg_s1f_0p7>;
+-		vdd-l2-supply = <&vreg_s1f_0p7>;
+-		vdd-l3-supply = <&vreg_s4c_1p8>;
+-		vdd-s1-supply = <&vph_pwr>;
+-
+-		vreg_l1d_0p8: ldo1 {
+-			regulator-name = "vreg_l1d_0p8";
+-			regulator-min-microvolt = <880000>;
+-			regulator-max-microvolt = <920000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l2d_0p9: ldo2 {
+-			regulator-name = "vreg_l2d_0p9";
+-			regulator-min-microvolt = <912000>;
+-			regulator-max-microvolt = <920000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l3d_1p8: ldo3 {
+-			regulator-name = "vreg_l3d_1p8";
+-			regulator-min-microvolt = <1800000>;
+-			regulator-max-microvolt = <1800000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-	};
+-
+-	regulators-3 {
+-		compatible = "qcom,pmc8380-rpmh-regulators";
+-		qcom,pmic-id = "e";
+-
+-		vdd-l2-supply = <&vreg_s1f_0p7>;
+-		vdd-l3-supply = <&vreg_s5j_1p2>;
+-
+-		vreg_l2e_0p8: ldo2 {
+-			regulator-name = "vreg_l2e_0p8";
+-			regulator-min-microvolt = <880000>;
+-			regulator-max-microvolt = <920000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l3e_1p2: ldo3 {
+-			regulator-name = "vreg_l3e_1p2";
+-			regulator-min-microvolt = <1200000>;
+-			regulator-max-microvolt = <1200000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-	};
+-
+-	regulators-4 {
+-		compatible = "qcom,pmc8380-rpmh-regulators";
+-		qcom,pmic-id = "f";
+-
+-		vdd-l1-supply = <&vreg_s5j_1p2>;
+-		vdd-l2-supply = <&vreg_s5j_1p2>;
+-		vdd-l3-supply = <&vreg_s5j_1p2>;
+-		vdd-s1-supply = <&vph_pwr>;
+-
+-		vreg_s1f_0p7: smps1 {
+-			regulator-name = "vreg_s1f_0p7";
+-			regulator-min-microvolt = <700000>;
+-			regulator-max-microvolt = <1100000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l1f_1p0: ldo1 {
+-			regulator-name = "vreg_l1f_1p0";
+-			regulator-min-microvolt = <1024000>;
+-			regulator-max-microvolt = <1024000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l2f_1p0: ldo2 {
+-			regulator-name = "vreg_l2f_1p0";
+-			regulator-min-microvolt = <1024000>;
+-			regulator-max-microvolt = <1024000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l3f_1p0: ldo3 {
+-			regulator-name = "vreg_l3f_1p0";
+-			regulator-min-microvolt = <1024000>;
+-			regulator-max-microvolt = <1024000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-	};
+-
+-	regulators-6 {
+-		compatible = "qcom,pm8550ve-rpmh-regulators";
+-		qcom,pmic-id = "i";
+-
+-		vdd-l1-supply = <&vreg_s4c_1p8>;
+-		vdd-l2-supply = <&vreg_s5j_1p2>;
+-		vdd-l3-supply = <&vreg_s1f_0p7>;
+-		vdd-s1-supply = <&vph_pwr>;
+-		vdd-s2-supply = <&vph_pwr>;
+-
+-		vreg_s1i_0p9: smps1 {
+-			regulator-name = "vreg_s1i_0p9";
+-			regulator-min-microvolt = <900000>;
+-			regulator-max-microvolt = <920000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_s2i_1p0: smps2 {
+-			regulator-name = "vreg_s2i_1p0";
+-			regulator-min-microvolt = <1000000>;
+-			regulator-max-microvolt = <1100000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l1i_1p8: ldo1 {
+-			regulator-name = "vreg_l1i_1p8";
+-			regulator-min-microvolt = <1800000>;
+-			regulator-max-microvolt = <1800000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l2i_1p2: ldo2 {
+-			regulator-name = "vreg_l2i_1p2";
+-			regulator-min-microvolt = <1200000>;
+-			regulator-max-microvolt = <1200000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l3i_0p8: ldo3 {
+-			regulator-name = "vreg_l3i_0p8";
+-			regulator-min-microvolt = <880000>;
+-			regulator-max-microvolt = <920000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-	};
+-
+-	regulators-7 {
+-		compatible = "qcom,pm8550ve-rpmh-regulators";
+-		qcom,pmic-id = "j";
+-
+-		vdd-l1-supply = <&vreg_s1f_0p7>;
+-		vdd-l2-supply = <&vreg_s5j_1p2>;
+-		vdd-l3-supply = <&vreg_s1f_0p7>;
+-		vdd-s5-supply = <&vph_pwr>;
+-
+-		vreg_s5j_1p2: smps5 {
+-			regulator-name = "vreg_s5j_1p2";
+-			regulator-min-microvolt = <1256000>;
+-			regulator-max-microvolt = <1304000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l1j_0p8: ldo1 {
+-			regulator-name = "vreg_l1j_0p8";
+-			regulator-min-microvolt = <880000>;
+-			regulator-max-microvolt = <920000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l2j_1p2: ldo2 {
+-			regulator-name = "vreg_l2j_1p2";
+-			regulator-min-microvolt = <1200000>;
+-			regulator-max-microvolt = <1200000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-
+-		vreg_l3j_0p8: ldo3 {
+-			regulator-name = "vreg_l3j_0p8";
+-			regulator-min-microvolt = <880000>;
+-			regulator-max-microvolt = <920000>;
+-			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+-		};
+-	};
+-};
+-
+-&gpu {
+-	status = "okay";
+-
+-	zap-shader {
+-		firmware-name = "qcom/x1e80100/gen70500_zap.mbn";
+-	};
+-};
+-
+-&i2c0 {
+-	clock-frequency = <400000>;
+-
+-	status = "okay";
+-
+-	touchpad@15 {
+-		compatible = "hid-over-i2c";
+-		reg = <0x15>;
+-
+-		hid-descr-addr = <0x1>;
+-		interrupts-extended = <&tlmm 3 IRQ_TYPE_LEVEL_LOW>;
+-
+-		vdd-supply = <&vreg_misc_3p3>;
+-		vddl-supply = <&vreg_l12b_1p2>;
+-
+-		pinctrl-0 = <&tpad_default>;
+-		pinctrl-names = "default";
+-
+-		wakeup-source;
+-	};
+-
+-	keyboard@3a {
+-		compatible = "hid-over-i2c";
+-		reg = <0x3a>;
+-
+-		hid-descr-addr = <0x1>;
+-		interrupts-extended = <&tlmm 67 IRQ_TYPE_LEVEL_LOW>;
+-
+-		vdd-supply = <&vreg_misc_3p3>;
+-		vddl-supply = <&vreg_l12b_1p2>;
+-
+-		pinctrl-0 = <&kybd_default>;
+-		pinctrl-names = "default";
+-
+-		wakeup-source;
+-	};
+-};
+-
+-&i2c8 {
+-	clock-frequency = <400000>;
+-
+-	status = "okay";
+-
+-	touchscreen@10 {
+-		compatible = "hid-over-i2c";
+-		reg = <0x10>;
+-
+-		hid-descr-addr = <0x1>;
+-		interrupts-extended = <&tlmm 51 IRQ_TYPE_LEVEL_LOW>;
+-
+-		vdd-supply = <&vreg_misc_3p3>;
+-		vddl-supply = <&vreg_l15b_1p8>;
+-
+-		pinctrl-0 = <&ts0_default>;
+-		pinctrl-names = "default";
+-	};
+-};
+-
+-&lpass_tlmm {
+-	spkr_01_sd_n_active: spkr-01-sd-n-active-state {
+-		pins = "gpio12";
+-		function = "gpio";
+-		drive-strength = <16>;
+-		bias-disable;
+-		output-low;
+-	};
+-
+-	spkr_23_sd_n_active: spkr-23-sd-n-active-state {
+-		pins = "gpio13";
+-		function = "gpio";
+-		drive-strength = <16>;
+-		bias-disable;
+-		output-low;
+-	};
+-};
+-
+-&lpass_vamacro {
+-	pinctrl-0 = <&dmic01_default>, <&dmic23_default>;
+-	pinctrl-names = "default";
+-
+-	vdd-micb-supply = <&vreg_l1b_1p8>;
+-	qcom,dmic-sample-rate = <4800000>;
+-};
+-
+-&mdss {
+-	status = "okay";
+-};
+-
+-&mdss_dp3 {
+-	compatible = "qcom,x1e80100-dp";
+-	/delete-property/ #sound-dai-cells;
+-
+-	status = "okay";
+-
+-	aux-bus {
+-		panel {
+-			compatible = "samsung,atna45af01", "samsung,atna33xc20";
+-			enable-gpios = <&pmc8380_3_gpios 4 GPIO_ACTIVE_HIGH>;
+-			power-supply = <&vreg_edp_3p3>;
+-
+-			pinctrl-0 = <&edp_bl_en>;
+-			pinctrl-names = "default";
+-
+-			port {
+-				edp_panel_in: endpoint {
+-					remote-endpoint = <&mdss_dp3_out>;
+-				};
+-			};
+-		};
+-	};
+-
+-	ports {
+-		port@1 {
+-			reg = <1>;
+-			mdss_dp3_out: endpoint {
+-				data-lanes = <0 1 2 3>;
+-				link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
+-
+-				remote-endpoint = <&edp_panel_in>;
+-			};
+-		};
+-	};
+-};
+-
+-&mdss_dp3_phy {
+-	vdda-phy-supply = <&vreg_l3j_0p8>;
+-	vdda-pll-supply = <&vreg_l2j_1p2>;
+-
+-	status = "okay";
+-};
+-
+-&pcie4 {
+-	perst-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>;
+-	wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>;
+-
+-	pinctrl-0 = <&pcie4_default>;
+-	pinctrl-names = "default";
+-
+-	status = "okay";
+-};
+-
+-&pcie4_phy {
+-	vdda-phy-supply = <&vreg_l3i_0p8>;
+-	vdda-pll-supply = <&vreg_l3e_1p2>;
+-
+-	status = "okay";
+-};
+-
+-&pcie5 {
+-	perst-gpios = <&tlmm 149 GPIO_ACTIVE_LOW>;
+-	wake-gpios = <&tlmm 151 GPIO_ACTIVE_LOW>;
+-
+-	vddpe-3v3-supply = <&vreg_wwan>;
+-
+-	pinctrl-0 = <&pcie5_default>;
+-	pinctrl-names = "default";
+-
+-	status = "okay";
+-};
+-
+-&pcie5_phy {
+-	vdda-phy-supply = <&vreg_l3i_0p8>;
+-	vdda-pll-supply = <&vreg_l3e_1p2>;
+-
+-	status = "okay";
+-};
+-
+-&pcie6a {
+-	perst-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>;
+-	wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>;
+-
+-	vddpe-3v3-supply = <&vreg_nvme>;
+-
+-	pinctrl-names = "default";
+-	pinctrl-0 = <&pcie6a_default>;
+-
+-	status = "okay";
+-};
+-
+-&pcie6a_phy {
+-	vdda-phy-supply = <&vreg_l1d_0p8>;
+-	vdda-pll-supply = <&vreg_l2j_1p2>;
+-
+-	status = "okay";
+-};
+-
+-&pm8550ve_8_gpios {
+-	misc_3p3_reg_en: misc-3p3-reg-en-state {
+-		pins = "gpio6";
+-		function = "normal";
+-		bias-disable;
+-		input-disable;
+-		output-enable;
+-		drive-push-pull;
+-		power-source = <1>; /* 1.8 V */
+-		qcom,drive-strength = <PMIC_GPIO_STRENGTH_LOW>;
+-	};
+-};
+-
+-&pmc8380_3_gpios {
+-	edp_bl_en: edp-bl-en-state {
+-		pins = "gpio4";
+-		function = "normal";
+-		power-source = <1>; /* 1.8V */
+-		input-disable;
+-		output-enable;
+-	};
+-};
+-
+-&qupv3_0 {
+-	status = "okay";
+-};
+-
+-&qupv3_1 {
+-	status = "okay";
+-};
+-
+-&qupv3_2 {
+-	status = "okay";
+-};
+-
+-&remoteproc_adsp {
+-	firmware-name = "qcom/x1e80100/adsp.mbn",
+-			"qcom/x1e80100/adsp_dtb.mbn";
+-
+-	status = "okay";
+-};
+-
+-&remoteproc_cdsp {
+-	firmware-name = "qcom/x1e80100/cdsp.mbn",
+-			"qcom/x1e80100/cdsp_dtb.mbn";
+-
+-	status = "okay";
+-};
+-
+-&smb2360_0 {
+-	status = "okay";
+-};
+-
+-&smb2360_0_eusb2_repeater {
+-	vdd18-supply = <&vreg_l3d_1p8>;
+-	vdd3-supply = <&vreg_l2b_3p0>;
+-};
+-
+-&smb2360_1 {
+-	status = "okay";
+-};
+-
+-&smb2360_1_eusb2_repeater {
+-	vdd18-supply = <&vreg_l3d_1p8>;
+-	vdd3-supply = <&vreg_l14b_3p0>;
+-};
+-
+-&smb2360_2 {
+-	status = "okay";
+-};
+-
+-&smb2360_2_eusb2_repeater {
+-	vdd18-supply = <&vreg_l3d_1p8>;
+-	vdd3-supply = <&vreg_l8b_3p0>;
+-};
+-
+-&swr0 {
+-	status = "okay";
+-
+-	pinctrl-0 = <&wsa_swr_active>, <&spkr_01_sd_n_active>;
+-	pinctrl-names = "default";
+-
+-	/* WSA8845, Left Woofer */
+-	left_woofer: speaker@0,0 {
+-		compatible = "sdw20217020400";
+-		reg = <0 0>;
+-		reset-gpios = <&lpass_tlmm 12 GPIO_ACTIVE_LOW>;
+-		#sound-dai-cells = <0>;
+-		sound-name-prefix = "WooferLeft";
+-		vdd-1p8-supply = <&vreg_l15b_1p8>;
+-		vdd-io-supply = <&vreg_l12b_1p2>;
+-		qcom,port-mapping = <1 2 3 7 10 13>;
+-	};
+-
+-	/* WSA8845, Left Tweeter */
+-	left_tweeter: speaker@0,1 {
+-		compatible = "sdw20217020400";
+-		reg = <0 1>;
+-		reset-gpios = <&lpass_tlmm 12 GPIO_ACTIVE_LOW>;
+-		#sound-dai-cells = <0>;
+-		sound-name-prefix = "TweeterLeft";
+-		vdd-1p8-supply = <&vreg_l15b_1p8>;
+-		vdd-io-supply = <&vreg_l12b_1p2>;
+-		qcom,port-mapping = <4 5 6 7 11 13>;
+-	};
+-};
+-
+-&swr1 {
+-	status = "okay";
+-
+-	/* WCD9385 RX */
+-	wcd_rx: codec@0,4 {
+-		compatible = "sdw20217010d00";
+-		reg = <0 4>;
+-		qcom,rx-port-mapping = <1 2 3 4 5>;
+-	};
+-};
+-
+-&swr2 {
+-	status = "okay";
+-
+-	/* WCD9385 TX */
+-	wcd_tx: codec@0,3 {
+-		compatible = "sdw20217010d00";
+-		reg = <0 3>;
+-		qcom,tx-port-mapping = <2 2 3 4>;
+-	};
+-};
+-
+-&swr3 {
+-	status = "okay";
+-
+-	pinctrl-0 = <&wsa2_swr_active>, <&spkr_23_sd_n_active>;
+-	pinctrl-names = "default";
+-
+-	/* WSA8845, Right Woofer */
+-	right_woofer: speaker@0,0 {
+-		compatible = "sdw20217020400";
+-		reg = <0 0>;
+-		reset-gpios = <&lpass_tlmm 13 GPIO_ACTIVE_LOW>;
+-		#sound-dai-cells = <0>;
+-		sound-name-prefix = "WooferRight";
+-		vdd-1p8-supply = <&vreg_l15b_1p8>;
+-		vdd-io-supply = <&vreg_l12b_1p2>;
+-		qcom,port-mapping = <1 2 3 7 10 13>;
+-	};
+-
+-	/* WSA8845, Right Tweeter */
+-	right_tweeter: speaker@0,1 {
+-		compatible = "sdw20217020400";
+-		reg = <0 1>;
+-		reset-gpios = <&lpass_tlmm 13 GPIO_ACTIVE_LOW>;
+-		#sound-dai-cells = <0>;
+-		sound-name-prefix = "TweeterRight";
+-		vdd-1p8-supply = <&vreg_l15b_1p8>;
+-		vdd-io-supply = <&vreg_l12b_1p2>;
+-		qcom,port-mapping = <4 5 6 7 11 13>;
+-	};
+-};
+-
+-&tlmm {
+-	gpio-reserved-ranges = <34 2>, /* Unused */
+-			       <44 4>, /* SPI (TPM) */
+-			       <238 1>; /* UFS Reset */
+-
+-	edp_reg_en: edp-reg-en-state {
+-		pins = "gpio70";
+-		function = "gpio";
+-		drive-strength = <16>;
+-		bias-disable;
+-	};
+-
+-	hall_int_n_default: hall-int-n-state {
+-		pins = "gpio92";
+-		function = "gpio";
+-		bias-disable;
+-	};
+-
+-	kybd_default: kybd-default-state {
+-		pins = "gpio67";
+-		function = "gpio";
+-		bias-disable;
+-	};
+-
+-	nvme_reg_en: nvme-reg-en-state {
+-		pins = "gpio18";
+-		function = "gpio";
+-		drive-strength = <2>;
+-		bias-disable;
+-	};
+-
+-	pcie4_default: pcie4-default-state {
+-		clkreq-n-pins {
+-			pins = "gpio147";
+-			function = "pcie4_clk";
+-			drive-strength = <2>;
+-			bias-pull-up;
+-		};
+-
+-		perst-n-pins {
+-			pins = "gpio146";
+-			function = "gpio";
+-			drive-strength = <2>;
+-			bias-disable;
+-		};
+-
+-		wake-n-pins {
+-			pins = "gpio148";
+-			function = "gpio";
+-			drive-strength = <2>;
+-			bias-pull-up;
+-		};
+-	};
+-
+-	pcie5_default: pcie5-default-state {
+-		clkreq-n-pins {
+-			pins = "gpio150";
+-			function = "pcie5_clk";
+-			drive-strength = <2>;
+-			bias-pull-up;
+-		};
+-
+-		perst-n-pins {
+-			pins = "gpio149";
+-			function = "gpio";
+-			drive-strength = <2>;
+-			bias-disable;
+-		};
+-
+-		wake-n-pins {
+-			pins = "gpio151";
+-			function = "gpio";
+-			drive-strength = <2>;
+-			bias-pull-up;
+-		};
+-	};
+-
+-	pcie6a_default: pcie6a-default-state {
+-		clkreq-n-pins {
+-			pins = "gpio153";
+-			function = "pcie6a_clk";
+-			drive-strength = <2>;
+-			bias-pull-up;
+-		};
+-
+-		perst-n-pins {
+-			pins = "gpio152";
+-			function = "gpio";
+-			drive-strength = <2>;
+-			bias-disable;
+-		};
+-
+-		wake-n-pins {
+-			pins = "gpio154";
+-			function = "gpio";
+-			drive-strength = <2>;
+-			bias-pull-up;
+-		};
+-	};
+-
+-	tpad_default: tpad-default-state {
+-		pins = "gpio3";
+-		function = "gpio";
+-		bias-disable;
+-	};
+-
+-	ts0_default: ts0-default-state {
+-		int-n-pins {
+-			pins = "gpio51";
+-			function = "gpio";
+-			bias-disable;
+-		};
+-
+-		reset-n-pins {
+-			pins = "gpio48";
+-			function = "gpio";
+-			output-high;
+-			drive-strength = <16>;
+-		};
+-	};
+-
+-	wcd_default: wcd-reset-n-active-state {
+-		pins = "gpio191";
+-		function = "gpio";
+-		drive-strength = <16>;
+-		bias-disable;
+-		output-low;
+-	};
+-
+-	wwan_sw_en: wwan-sw-en-state {
+-		pins = "gpio221";
+-		function = "gpio";
+-		drive-strength = <4>;
+-		bias-disable;
+-	};
+-};
+-
+-&uart21 {
+-	compatible = "qcom,geni-debug-uart";
+-	status = "okay";
+-};
+-
+-&usb_1_ss0_hsphy {
+-	vdd-supply = <&vreg_l3j_0p8>;
+-	vdda12-supply = <&vreg_l2j_1p2>;
+-
+-	phys = <&smb2360_0_eusb2_repeater>;
+-
+-	status = "okay";
+-};
+-
+-&usb_1_ss0_qmpphy {
+-	vdda-phy-supply = <&vreg_l2j_1p2>;
+-	vdda-pll-supply = <&vreg_l1j_0p8>;
+-
+-	status = "okay";
+-};
+-
+-&usb_1_ss0 {
+-	status = "okay";
+-};
+-
+-&usb_1_ss0_dwc3 {
+-	dr_mode = "host";
+-};
+-
+-&usb_1_ss0_dwc3_hs {
+-	remote-endpoint = <&pmic_glink_ss0_hs_in>;
+-};
+-
+-&usb_1_ss0_qmpphy_out {
+-	remote-endpoint = <&pmic_glink_ss0_ss_in>;
+-};
+-
+-&usb_1_ss1_hsphy {
+-	vdd-supply = <&vreg_l3j_0p8>;
+-	vdda12-supply = <&vreg_l2j_1p2>;
+-
+-	phys = <&smb2360_1_eusb2_repeater>;
+-
+-	status = "okay";
+-};
+-
+-&usb_1_ss1_qmpphy {
+-	vdda-phy-supply = <&vreg_l2j_1p2>;
+-	vdda-pll-supply = <&vreg_l2d_0p9>;
+-
+-	status = "okay";
+-};
+-
+-&usb_1_ss1 {
+-	status = "okay";
+-};
+-
+-&usb_1_ss1_dwc3 {
+-	dr_mode = "host";
+-};
+-
+-&usb_1_ss1_dwc3_hs {
+-	remote-endpoint = <&pmic_glink_ss1_hs_in>;
+-};
+-
+-&usb_1_ss1_qmpphy_out {
+-	remote-endpoint = <&pmic_glink_ss1_ss_in>;
+-};
+-
+-&usb_1_ss2_hsphy {
+-	vdd-supply = <&vreg_l3j_0p8>;
+-	vdda12-supply = <&vreg_l2j_1p2>;
+-
+-	phys = <&smb2360_2_eusb2_repeater>;
+-
+-	status = "okay";
+-};
+-
+-&usb_1_ss2_qmpphy {
+-	vdda-phy-supply = <&vreg_l2j_1p2>;
+-	vdda-pll-supply = <&vreg_l2d_0p9>;
+-
+-	status = "okay";
+-};
+-
+-&usb_1_ss2 {
+-	status = "okay";
+-};
+-
+-&usb_1_ss2_dwc3 {
+-	dr_mode = "host";
+-};
+-
+-&usb_1_ss2_dwc3_hs {
+-	remote-endpoint = <&pmic_glink_ss2_hs_in>;
+ };
+ 
+-&usb_1_ss2_qmpphy_out {
+-	remote-endpoint = <&pmic_glink_ss2_ss_in>;
++&gpu_zap_shader {
++	firmware-name = "qcom/x1e80100/gen70500_zap.mbn";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index 607d32f68c3406..a25783c85e1639 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -3748,7 +3748,7 @@ gpu: gpu@3d00000 {
+ 
+ 			status = "disabled";
+ 
+-			zap-shader {
++			gpu_zap_shader: zap-shader {
+ 				memory-region = <&gpu_microcode_mem>;
+ 			};
+ 
+diff --git a/arch/loongarch/kvm/intc/eiointc.c b/arch/loongarch/kvm/intc/eiointc.c
+index f39929d7bf8a24..a75f865d6fb96c 100644
+--- a/arch/loongarch/kvm/intc/eiointc.c
++++ b/arch/loongarch/kvm/intc/eiointc.c
+@@ -9,7 +9,8 @@
+ 
+ static void eiointc_set_sw_coreisr(struct loongarch_eiointc *s)
+ {
+-	int ipnum, cpu, irq_index, irq_mask, irq;
++	int ipnum, cpu, cpuid, irq_index, irq_mask, irq;
++	struct kvm_vcpu *vcpu;
+ 
+ 	for (irq = 0; irq < EIOINTC_IRQS; irq++) {
+ 		ipnum = s->ipmap.reg_u8[irq / 32];
+@@ -20,7 +21,12 @@ static void eiointc_set_sw_coreisr(struct loongarch_eiointc *s)
+ 		irq_index = irq / 32;
+ 		irq_mask = BIT(irq & 0x1f);
+ 
+-		cpu = s->coremap.reg_u8[irq];
++		cpuid = s->coremap.reg_u8[irq];
++		vcpu = kvm_get_vcpu_by_cpuid(s->kvm, cpuid);
++		if (!vcpu)
++			continue;
++
++		cpu = vcpu->vcpu_id;
+ 		if (!!(s->coreisr.reg_u32[cpu][irq_index] & irq_mask))
+ 			set_bit(irq, s->sw_coreisr[cpu][ipnum]);
+ 		else
+@@ -66,20 +72,25 @@ static void eiointc_update_irq(struct loongarch_eiointc *s, int irq, int level)
+ }
+ 
+ static inline void eiointc_update_sw_coremap(struct loongarch_eiointc *s,
+-					int irq, void *pvalue, u32 len, bool notify)
++					int irq, u64 val, u32 len, bool notify)
+ {
+-	int i, cpu;
+-	u64 val = *(u64 *)pvalue;
++	int i, cpu, cpuid;
++	struct kvm_vcpu *vcpu;
+ 
+ 	for (i = 0; i < len; i++) {
+-		cpu = val & 0xff;
++		cpuid = val & 0xff;
+ 		val = val >> 8;
+ 
+ 		if (!(s->status & BIT(EIOINTC_ENABLE_CPU_ENCODE))) {
+-			cpu = ffs(cpu) - 1;
+-			cpu = (cpu >= 4) ? 0 : cpu;
++			cpuid = ffs(cpuid) - 1;
++			cpuid = (cpuid >= 4) ? 0 : cpuid;
+ 		}
+ 
++		vcpu = kvm_get_vcpu_by_cpuid(s->kvm, cpuid);
++		if (!vcpu)
++			continue;
++
++		cpu = vcpu->vcpu_id;
+ 		if (s->sw_coremap[irq + i] == cpu)
+ 			continue;
+ 
+@@ -305,6 +316,11 @@ static int kvm_eiointc_read(struct kvm_vcpu *vcpu,
+ 		return -EINVAL;
+ 	}
+ 
++	if (addr & (len - 1)) {
++		kvm_err("%s: eiointc not aligned addr %llx len %d\n", __func__, addr, len);
++		return -EINVAL;
++	}
++
+ 	vcpu->kvm->stat.eiointc_read_exits++;
+ 	spin_lock_irqsave(&eiointc->lock, flags);
+ 	switch (len) {
+@@ -398,7 +414,7 @@ static int loongarch_eiointc_writeb(struct kvm_vcpu *vcpu,
+ 		irq = offset - EIOINTC_COREMAP_START;
+ 		index = irq;
+ 		s->coremap.reg_u8[index] = data;
+-		eiointc_update_sw_coremap(s, irq, (void *)&data, sizeof(data), true);
++		eiointc_update_sw_coremap(s, irq, data, sizeof(data), true);
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+@@ -436,17 +452,16 @@ static int loongarch_eiointc_writew(struct kvm_vcpu *vcpu,
+ 		break;
+ 	case EIOINTC_ENABLE_START ... EIOINTC_ENABLE_END:
+ 		index = (offset - EIOINTC_ENABLE_START) >> 1;
+-		old_data = s->enable.reg_u32[index];
++		old_data = s->enable.reg_u16[index];
+ 		s->enable.reg_u16[index] = data;
+ 		/*
+ 		 * 1: enable irq.
+ 		 * update irq when isr is set.
+ 		 */
+ 		data = s->enable.reg_u16[index] & ~old_data & s->isr.reg_u16[index];
+-		index = index << 1;
+ 		for (i = 0; i < sizeof(data); i++) {
+ 			u8 mask = (data >> (i * 8)) & 0xff;
+-			eiointc_enable_irq(vcpu, s, index + i, mask, 1);
++			eiointc_enable_irq(vcpu, s, index * 2 + i, mask, 1);
+ 		}
+ 		/*
+ 		 * 0: disable irq.
+@@ -455,7 +470,7 @@ static int loongarch_eiointc_writew(struct kvm_vcpu *vcpu,
+ 		data = ~s->enable.reg_u16[index] & old_data & s->isr.reg_u16[index];
+ 		for (i = 0; i < sizeof(data); i++) {
+ 			u8 mask = (data >> (i * 8)) & 0xff;
+-			eiointc_enable_irq(vcpu, s, index, mask, 0);
++			eiointc_enable_irq(vcpu, s, index * 2 + i, mask, 0);
+ 		}
+ 		break;
+ 	case EIOINTC_BOUNCE_START ... EIOINTC_BOUNCE_END:
+@@ -484,7 +499,7 @@ static int loongarch_eiointc_writew(struct kvm_vcpu *vcpu,
+ 		irq = offset - EIOINTC_COREMAP_START;
+ 		index = irq >> 1;
+ 		s->coremap.reg_u16[index] = data;
+-		eiointc_update_sw_coremap(s, irq, (void *)&data, sizeof(data), true);
++		eiointc_update_sw_coremap(s, irq, data, sizeof(data), true);
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+@@ -529,10 +544,9 @@ static int loongarch_eiointc_writel(struct kvm_vcpu *vcpu,
+ 		 * update irq when isr is set.
+ 		 */
+ 		data = s->enable.reg_u32[index] & ~old_data & s->isr.reg_u32[index];
+-		index = index << 2;
+ 		for (i = 0; i < sizeof(data); i++) {
+ 			u8 mask = (data >> (i * 8)) & 0xff;
+-			eiointc_enable_irq(vcpu, s, index + i, mask, 1);
++			eiointc_enable_irq(vcpu, s, index * 4 + i, mask, 1);
+ 		}
+ 		/*
+ 		 * 0: disable irq.
+@@ -541,7 +555,7 @@ static int loongarch_eiointc_writel(struct kvm_vcpu *vcpu,
+ 		data = ~s->enable.reg_u32[index] & old_data & s->isr.reg_u32[index];
+ 		for (i = 0; i < sizeof(data); i++) {
+ 			u8 mask = (data >> (i * 8)) & 0xff;
+-			eiointc_enable_irq(vcpu, s, index, mask, 0);
++			eiointc_enable_irq(vcpu, s, index * 4 + i, mask, 0);
+ 		}
+ 		break;
+ 	case EIOINTC_BOUNCE_START ... EIOINTC_BOUNCE_END:
+@@ -570,7 +584,7 @@ static int loongarch_eiointc_writel(struct kvm_vcpu *vcpu,
+ 		irq = offset - EIOINTC_COREMAP_START;
+ 		index = irq >> 2;
+ 		s->coremap.reg_u32[index] = data;
+-		eiointc_update_sw_coremap(s, irq, (void *)&data, sizeof(data), true);
++		eiointc_update_sw_coremap(s, irq, data, sizeof(data), true);
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+@@ -615,10 +629,9 @@ static int loongarch_eiointc_writeq(struct kvm_vcpu *vcpu,
+ 		 * update irq when isr is set.
+ 		 */
+ 		data = s->enable.reg_u64[index] & ~old_data & s->isr.reg_u64[index];
+-		index = index << 3;
+ 		for (i = 0; i < sizeof(data); i++) {
+ 			u8 mask = (data >> (i * 8)) & 0xff;
+-			eiointc_enable_irq(vcpu, s, index + i, mask, 1);
++			eiointc_enable_irq(vcpu, s, index * 8 + i, mask, 1);
+ 		}
+ 		/*
+ 		 * 0: disable irq.
+@@ -627,7 +640,7 @@ static int loongarch_eiointc_writeq(struct kvm_vcpu *vcpu,
+ 		data = ~s->enable.reg_u64[index] & old_data & s->isr.reg_u64[index];
+ 		for (i = 0; i < sizeof(data); i++) {
+ 			u8 mask = (data >> (i * 8)) & 0xff;
+-			eiointc_enable_irq(vcpu, s, index, mask, 0);
++			eiointc_enable_irq(vcpu, s, index * 8 + i, mask, 0);
+ 		}
+ 		break;
+ 	case EIOINTC_BOUNCE_START ... EIOINTC_BOUNCE_END:
+@@ -656,7 +669,7 @@ static int loongarch_eiointc_writeq(struct kvm_vcpu *vcpu,
+ 		irq = offset - EIOINTC_COREMAP_START;
+ 		index = irq >> 3;
+ 		s->coremap.reg_u64[index] = data;
+-		eiointc_update_sw_coremap(s, irq, (void *)&data, sizeof(data), true);
++		eiointc_update_sw_coremap(s, irq, data, sizeof(data), true);
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+@@ -679,6 +692,11 @@ static int kvm_eiointc_write(struct kvm_vcpu *vcpu,
+ 		return -EINVAL;
+ 	}
+ 
++	if (addr & (len - 1)) {
++		kvm_err("%s: eiointc not aligned addr %llx len %d\n", __func__, addr, len);
++		return -EINVAL;
++	}
++
+ 	vcpu->kvm->stat.eiointc_write_exits++;
+ 	spin_lock_irqsave(&eiointc->lock, flags);
+ 	switch (len) {
+@@ -787,7 +805,7 @@ static int kvm_eiointc_ctrl_access(struct kvm_device *dev,
+ 	int ret = 0;
+ 	unsigned long flags;
+ 	unsigned long type = (unsigned long)attr->attr;
+-	u32 i, start_irq;
++	u32 i, start_irq, val;
+ 	void __user *data;
+ 	struct loongarch_eiointc *s = dev->kvm->arch.eiointc;
+ 
+@@ -795,8 +813,14 @@ static int kvm_eiointc_ctrl_access(struct kvm_device *dev,
+ 	spin_lock_irqsave(&s->lock, flags);
+ 	switch (type) {
+ 	case KVM_DEV_LOONGARCH_EXTIOI_CTRL_INIT_NUM_CPU:
+-		if (copy_from_user(&s->num_cpu, data, 4))
++		if (copy_from_user(&val, data, 4))
+ 			ret = -EFAULT;
++		else {
++			if (val >= EIOINTC_ROUTE_MAX_VCPUS)
++				ret = -EINVAL;
++			else
++				s->num_cpu = val;
++		}
+ 		break;
+ 	case KVM_DEV_LOONGARCH_EXTIOI_CTRL_INIT_FEATURE:
+ 		if (copy_from_user(&s->features, data, 4))
+@@ -809,7 +833,7 @@ static int kvm_eiointc_ctrl_access(struct kvm_device *dev,
+ 		for (i = 0; i < (EIOINTC_IRQS / 4); i++) {
+ 			start_irq = i * 4;
+ 			eiointc_update_sw_coremap(s, start_irq,
+-					(void *)&s->coremap.reg_u32[i], sizeof(u32), false);
++					s->coremap.reg_u32[i], sizeof(u32), false);
+ 		}
+ 		break;
+ 	default:
+@@ -824,7 +848,7 @@ static int kvm_eiointc_regs_access(struct kvm_device *dev,
+ 					struct kvm_device_attr *attr,
+ 					bool is_write)
+ {
+-	int addr, cpuid, offset, ret = 0;
++	int addr, cpu, offset, ret = 0;
+ 	unsigned long flags;
+ 	void *p = NULL;
+ 	void __user *data;
+@@ -832,7 +856,7 @@ static int kvm_eiointc_regs_access(struct kvm_device *dev,
+ 
+ 	s = dev->kvm->arch.eiointc;
+ 	addr = attr->attr;
+-	cpuid = addr >> 16;
++	cpu = addr >> 16;
+ 	addr &= 0xffff;
+ 	data = (void __user *)attr->addr;
+ 	switch (addr) {
+@@ -857,8 +881,11 @@ static int kvm_eiointc_regs_access(struct kvm_device *dev,
+ 		p = &s->isr.reg_u32[offset];
+ 		break;
+ 	case EIOINTC_COREISR_START ... EIOINTC_COREISR_END:
++		if (cpu >= s->num_cpu)
++			return -EINVAL;
++
+ 		offset = (addr - EIOINTC_COREISR_START) / 4;
+-		p = &s->coreisr.reg_u32[cpuid][offset];
++		p = &s->coreisr.reg_u32[cpu][offset];
+ 		break;
+ 	case EIOINTC_COREMAP_START ... EIOINTC_COREMAP_END:
+ 		offset = (addr - EIOINTC_COREMAP_START) / 4;
+@@ -899,9 +926,15 @@ static int kvm_eiointc_sw_status_access(struct kvm_device *dev,
+ 	data = (void __user *)attr->addr;
+ 	switch (addr) {
+ 	case KVM_DEV_LOONGARCH_EXTIOI_SW_STATUS_NUM_CPU:
++		if (is_write)
++			return ret;
++
+ 		p = &s->num_cpu;
+ 		break;
+ 	case KVM_DEV_LOONGARCH_EXTIOI_SW_STATUS_FEATURE:
++		if (is_write)
++			return ret;
++
+ 		p = &s->features;
+ 		break;
+ 	case KVM_DEV_LOONGARCH_EXTIOI_SW_STATUS_STATE:
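The LoongArch eiointc hunks above all fix the same confusion between two ID spaces: the guest writes a physical cpuid into the coremap, but the in-kernel arrays (coreisr, sw_coremap) are indexed by vcpu_id. A minimal sketch of the translation pattern, assuming only kvm_get_vcpu_by_cpuid() from the patch (the wrapper itself is illustrative, not part of the series):

	/* Sketch only: resolve a guest-visible cpuid to a safe array
	 * index before touching coreisr/sw_coremap. */
	static int eiointc_cpuid_to_index(struct loongarch_eiointc *s, int cpuid)
	{
		struct kvm_vcpu *vcpu = kvm_get_vcpu_by_cpuid(s->kvm, cpuid);

		if (!vcpu)		/* unmapped cpuid: skip, don't index out of bounds */
			return -ENOENT;

		return vcpu->vcpu_id;	/* valid index below num_cpu */
	}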
+diff --git a/arch/powerpc/crypto/Kconfig b/arch/powerpc/crypto/Kconfig
+index 370db8192ce62d..7b785c8664f5dd 100644
+--- a/arch/powerpc/crypto/Kconfig
++++ b/arch/powerpc/crypto/Kconfig
+@@ -110,6 +110,7 @@ config CRYPTO_CHACHA20_P10
+ config CRYPTO_POLY1305_P10
+ 	tristate "Hash functions: Poly1305 (P10 or later)"
+ 	depends on PPC64 && CPU_LITTLE_ENDIAN && VSX
++	depends on BROKEN # Needs to be fixed to work in softirq context
+ 	select CRYPTO_HASH
+ 	select CRYPTO_LIB_POLY1305_GENERIC
+ 	help
+diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
+index f56b409361fbe0..7201da46694f71 100644
+--- a/arch/riscv/include/asm/cpufeature.h
++++ b/arch/riscv/include/asm/cpufeature.h
+@@ -71,7 +71,6 @@ bool __init check_unaligned_access_emulated_all_cpus(void);
+ void check_unaligned_access_emulated(struct work_struct *work __always_unused);
+ void unaligned_emulation_finish(void);
+ bool unaligned_ctl_available(void);
+-DECLARE_PER_CPU(long, misaligned_access_speed);
+ #else
+ static inline bool unaligned_ctl_available(void)
+ {
+@@ -79,6 +78,10 @@ static inline bool unaligned_ctl_available(void)
+ }
+ #endif
+ 
++#if defined(CONFIG_RISCV_MISALIGNED)
++DECLARE_PER_CPU(long, misaligned_access_speed);
++#endif
++
+ bool __init check_vector_unaligned_access_emulated_all_cpus(void);
+ #if defined(CONFIG_RISCV_VECTOR_MISALIGNED)
+ void check_vector_unaligned_access_emulated(struct work_struct *work __always_unused);
+diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
+index 428e48e5f57d06..dffb42572d7917 100644
+--- a/arch/riscv/include/asm/pgtable.h
++++ b/arch/riscv/include/asm/pgtable.h
+@@ -980,7 +980,6 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
+  */
+ #ifdef CONFIG_64BIT
+ #define TASK_SIZE_64	(PGDIR_SIZE * PTRS_PER_PGD / 2)
+-#define TASK_SIZE_MAX	LONG_MAX
+ 
+ #ifdef CONFIG_COMPAT
+ #define TASK_SIZE_32	(_AC(0x80000000, UL) - PAGE_SIZE)
+diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
+index 5f56eb9d114a95..e27fe5b97b50b2 100644
+--- a/arch/riscv/include/asm/processor.h
++++ b/arch/riscv/include/asm/processor.h
+@@ -103,6 +103,7 @@ struct thread_struct {
+ 	struct __riscv_d_ext_state fstate;
+ 	unsigned long bad_cause;
+ 	unsigned long envcfg;
++	unsigned long sum;
+ 	u32 riscv_v_flags;
+ 	u32 vstate_ctrl;
+ 	struct __riscv_v_ext_state vstate;
+diff --git a/arch/riscv/include/asm/runtime-const.h b/arch/riscv/include/asm/runtime-const.h
+index 451fd76b881152..d766e2b9e6df1b 100644
+--- a/arch/riscv/include/asm/runtime-const.h
++++ b/arch/riscv/include/asm/runtime-const.h
+@@ -206,7 +206,7 @@ static inline void __runtime_fixup_32(__le16 *lui_parcel, __le16 *addi_parcel, u
+ 		addi_insn_mask &= 0x07fff;
+ 	}
+ 
+-	if (lower_immediate & 0x00000fff) {
++	if (lower_immediate & 0x00000fff || lui_insn == RISCV_INSN_NOP4) {
+ 		/* replace upper 12 bits of addi with lower 12 bits of val */
+ 		addi_insn &= addi_insn_mask;
+ 		addi_insn |= (lower_immediate & 0x00000fff) << 20;
+diff --git a/arch/riscv/include/asm/vector.h b/arch/riscv/include/asm/vector.h
+index e8a83f55be2ba5..7df6355023a30a 100644
+--- a/arch/riscv/include/asm/vector.h
++++ b/arch/riscv/include/asm/vector.h
+@@ -200,11 +200,11 @@ static inline void __riscv_v_vstate_save(struct __riscv_v_ext_state *save_to,
+ 			THEAD_VSETVLI_T4X0E8M8D1
+ 			THEAD_VSB_V_V0T0
+ 			"add		t0, t0, t4\n\t"
+-			THEAD_VSB_V_V0T0
++			THEAD_VSB_V_V8T0
+ 			"add		t0, t0, t4\n\t"
+-			THEAD_VSB_V_V0T0
++			THEAD_VSB_V_V16T0
+ 			"add		t0, t0, t4\n\t"
+-			THEAD_VSB_V_V0T0
++			THEAD_VSB_V_V24T0
+ 			: : "r" (datap) : "memory", "t0", "t4");
+ 	} else {
+ 		asm volatile (
+@@ -236,11 +236,11 @@ static inline void __riscv_v_vstate_restore(struct __riscv_v_ext_state *restore_
+ 			THEAD_VSETVLI_T4X0E8M8D1
+ 			THEAD_VLB_V_V0T0
+ 			"add		t0, t0, t4\n\t"
+-			THEAD_VLB_V_V0T0
++			THEAD_VLB_V_V8T0
+ 			"add		t0, t0, t4\n\t"
+-			THEAD_VLB_V_V0T0
++			THEAD_VLB_V_V16T0
+ 			"add		t0, t0, t4\n\t"
+-			THEAD_VLB_V_V0T0
++			THEAD_VLB_V_V24T0
+ 			: : "r" (datap) : "memory", "t0", "t4");
+ 	} else {
+ 		asm volatile (
+diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
+index 16490755304e0c..190dd6cc5a8ece 100644
+--- a/arch/riscv/kernel/asm-offsets.c
++++ b/arch/riscv/kernel/asm-offsets.c
+@@ -34,6 +34,7 @@ void asm_offsets(void)
+ 	OFFSET(TASK_THREAD_S9, task_struct, thread.s[9]);
+ 	OFFSET(TASK_THREAD_S10, task_struct, thread.s[10]);
+ 	OFFSET(TASK_THREAD_S11, task_struct, thread.s[11]);
++	OFFSET(TASK_THREAD_SUM, task_struct, thread.sum);
+ 
+ 	OFFSET(TASK_TI_CPU, task_struct, thread_info.cpu);
+ 	OFFSET(TASK_TI_PREEMPT_COUNT, task_struct, thread_info.preempt_count);
+@@ -346,6 +347,10 @@ void asm_offsets(void)
+ 		  offsetof(struct task_struct, thread.s[11])
+ 		- offsetof(struct task_struct, thread.ra)
+ 	);
++	DEFINE(TASK_THREAD_SUM_RA,
++		  offsetof(struct task_struct, thread.sum)
++		- offsetof(struct task_struct, thread.ra)
++	);
+ 
+ 	DEFINE(TASK_THREAD_F0_F0,
+ 		  offsetof(struct task_struct, thread.fstate.f[0])
+diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
+index 33a5a9f2a0d4e1..a49e19ce3a975e 100644
+--- a/arch/riscv/kernel/entry.S
++++ b/arch/riscv/kernel/entry.S
+@@ -397,9 +397,18 @@ SYM_FUNC_START(__switch_to)
+ 	REG_S s9,  TASK_THREAD_S9_RA(a3)
+ 	REG_S s10, TASK_THREAD_S10_RA(a3)
+ 	REG_S s11, TASK_THREAD_S11_RA(a3)
++
++	/* save the user space access flag */
++	csrr  s0, CSR_STATUS
++	REG_S s0, TASK_THREAD_SUM_RA(a3)
++
+ 	/* Save the kernel shadow call stack pointer */
+ 	scs_save_current
+ 	/* Restore context from next->thread */
++	REG_L s0,  TASK_THREAD_SUM_RA(a4)
++	li    s1,  SR_SUM
++	and   s0,  s0, s1
++	csrs  CSR_STATUS, s0
+ 	REG_L ra,  TASK_THREAD_RA_RA(a4)
+ 	REG_L sp,  TASK_THREAD_SP_RA(a4)
+ 	REG_L s0,  TASK_THREAD_S0_RA(a4)
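In C terms, the new __switch_to lines preserve the user-memory-access permission (SUM) across a context switch, so a task preempted in the middle of a user copy resumes with SUM still set. A hedged sketch of the equivalent logic, using the kernel's usual csr_read()/csr_set() accessors for illustration:

	prev->thread.sum = csr_read(CSR_STATUS);	/* save whole status; only SUM is consumed */
	csr_set(CSR_STATUS, next->thread.sum & SR_SUM);	/* re-enable SUM iff next task had it set  */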
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index f7c9a1caa83e62..14888e5ea19ab1 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -50,6 +50,7 @@ atomic_t hart_lottery __section(".sdata")
+ #endif
+ ;
+ unsigned long boot_cpu_hartid;
++EXPORT_SYMBOL_GPL(boot_cpu_hartid);
+ 
+ /*
+  * Place kernel memory regions on the resource tree so that
+diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
+index 56f06a27d45fb1..fe0ab912014baa 100644
+--- a/arch/riscv/kernel/traps_misaligned.c
++++ b/arch/riscv/kernel/traps_misaligned.c
+@@ -368,9 +368,7 @@ static int handle_scalar_misaligned_load(struct pt_regs *regs)
+ 
+ 	perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);
+ 
+-#ifdef CONFIG_RISCV_PROBE_UNALIGNED_ACCESS
+ 	*this_cpu_ptr(&misaligned_access_speed) = RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED;
+-#endif
+ 
+ 	if (!unaligned_enabled)
+ 		return -1;
+@@ -455,7 +453,7 @@ static int handle_scalar_misaligned_load(struct pt_regs *regs)
+ 
+ 	val.data_u64 = 0;
+ 	if (user_mode(regs)) {
+-		if (copy_from_user_nofault(&val, (u8 __user *)addr, len))
++		if (copy_from_user(&val, (u8 __user *)addr, len))
+ 			return -1;
+ 	} else {
+ 		memcpy(&val, (u8 *)addr, len);
+@@ -556,7 +554,7 @@ static int handle_scalar_misaligned_store(struct pt_regs *regs)
+ 		return -EOPNOTSUPP;
+ 
+ 	if (user_mode(regs)) {
+-		if (copy_to_user_nofault((u8 __user *)addr, &val, len))
++		if (copy_to_user((u8 __user *)addr, &val, len))
+ 			return -1;
+ 	} else {
+ 		memcpy((u8 *)addr, &val, len);
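Dropping the _nofault copy helpers matters because this handler runs in a context where page faults may be taken. Sketch of the assumed semantics:

	/* copy_from_user():         may sleep, services the fault, then copies      */
	/* copy_from_user_nofault(): fails on any fault, even a resolvable soft one  */
	if (copy_from_user(&val, (u8 __user *)addr, len))
		return -1;	/* only genuinely bad addresses fail now */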
+diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
+index b8167272988723..b2e4b81763f888 100644
+--- a/arch/riscv/mm/cacheflush.c
++++ b/arch/riscv/mm/cacheflush.c
+@@ -24,7 +24,20 @@ void flush_icache_all(void)
+ 
+ 	if (num_online_cpus() < 2)
+ 		return;
+-	else if (riscv_use_sbi_for_rfence())
++
++	/*
++	 * Make sure all previous writes to the D$ are ordered before making
++	 * the IPI. The RISC-V spec states that a hart must execute a data fence
++	 * before triggering a remote fence.i in order to make the modification
++	 * visible to remote harts.
++	 *
++	 * IPIs on RISC-V are triggered by MMIO writes to either CLINT or
++	 * S-IMSIC, so the fence ensures previous data writes "happen before"
++	 * the MMIO.
++	 */
++	RISCV_FENCE(w, o);
++
++	if (riscv_use_sbi_for_rfence())
+ 		sbi_remote_fence_i(NULL);
+ 	else
+ 		on_each_cpu(ipi_remote_fence_i, NULL, 1);
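The ordering that the new RISCV_FENCE(w, o) enforces can be pictured as a three-step protocol between harts; this is an illustrative sketch of the sequence, not additional kernel code:

	/* patching hart                             remote hart
	 * -------------                             -----------
	 * store new insns      (normal writes)
	 * fence w, o            (writes ordered before MMIO)
	 * store to CLINT/IMSIC  --- IPI --->        fence.i (observes the new insns)
	 */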
+diff --git a/arch/s390/kernel/ptrace.c b/arch/s390/kernel/ptrace.c
+index 34b8d9e745df05..1b8bc1720d60a5 100644
+--- a/arch/s390/kernel/ptrace.c
++++ b/arch/s390/kernel/ptrace.c
+@@ -1574,5 +1574,5 @@ unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs, unsigned int n)
+ 	addr = kernel_stack_pointer(regs) + n * sizeof(long);
+ 	if (!regs_within_kernel_stack(regs, addr))
+ 		return 0;
+-	return READ_ONCE_NOCHECK(addr);
++	return READ_ONCE_NOCHECK(*(unsigned long *)addr);
+ }
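The s390 one-liner is easy to misread: the old code returned the computed stack address itself rather than the long stored there. Side by side, as a sketch:

	/* addr points at the n-th slot on the kernel stack */
	return READ_ONCE_NOCHECK(addr);			/* bug: yields the pointer value */
	return READ_ONCE_NOCHECK(*(unsigned long *)addr);	/* fix: yields the slot contents */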
+diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
+index da84ff6770deca..8b3f6dd00eab25 100644
+--- a/arch/s390/mm/fault.c
++++ b/arch/s390/mm/fault.c
+@@ -442,6 +442,8 @@ void do_secure_storage_access(struct pt_regs *regs)
+ 		if (rc)
+ 			BUG();
+ 	} else {
++		if (faulthandler_disabled())
++			return handle_fault_error_nolock(regs, 0);
+ 		mm = current->mm;
+ 		mmap_read_lock(mm);
+ 		vma = find_vma(mm, addr);
+diff --git a/arch/um/drivers/ubd_user.c b/arch/um/drivers/ubd_user.c
+index c5e6545f6fcf67..8e8a8bf518b634 100644
+--- a/arch/um/drivers/ubd_user.c
++++ b/arch/um/drivers/ubd_user.c
+@@ -41,7 +41,7 @@ int start_io_thread(struct os_helper_thread **td_out, int *fd_out)
+ 	*fd_out = fds[1];
+ 
+ 	err = os_set_fd_block(*fd_out, 0);
+-	err = os_set_fd_block(kernel_fd, 0);
++	err |= os_set_fd_block(kernel_fd, 0);
+ 	if (err) {
+ 		printk("start_io_thread - failed to set nonblocking I/O.\n");
+ 		goto out_close;
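The ubd change is about error accumulation: plain assignment on the second call discarded any failure from the first. A tiny illustration of the difference:

	err  = os_set_fd_block(*fd_out, 0);	/* first result...                 */
	err  = os_set_fd_block(kernel_fd, 0);	/* ...silently overwritten (old)   */
	err |= os_set_fd_block(kernel_fd, 0);	/* non-zero if either failed (new) */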
+diff --git a/arch/um/include/asm/asm-prototypes.h b/arch/um/include/asm/asm-prototypes.h
+index 5898a26daa0dd4..408b31d591279d 100644
+--- a/arch/um/include/asm/asm-prototypes.h
++++ b/arch/um/include/asm/asm-prototypes.h
+@@ -1 +1,6 @@
+ #include <asm-generic/asm-prototypes.h>
++#include <asm/checksum.h>
++
++#ifdef CONFIG_UML_X86
++extern void cmpxchg8b_emu(void);
++#endif
+diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
+index ef2272e92a4321..8a2e68d07de616 100644
+--- a/arch/um/kernel/trap.c
++++ b/arch/um/kernel/trap.c
+@@ -18,6 +18,122 @@
+ #include <skas.h>
+ #include <arch.h>
+ 
++/*
++ * NOTE: UML does not have exception tables. As such, this is almost a copy
++ * of the code in mm/memory.c, only adjusting the logic to simply check whether
++ * we are coming from the kernel instead of doing an additional lookup in the
++ * exception table.
++ * We can do this simplification because we never get here if the exception was
++ * fixable.
++ */
++static inline bool get_mmap_lock_carefully(struct mm_struct *mm, bool is_user)
++{
++	if (likely(mmap_read_trylock(mm)))
++		return true;
++
++	if (!is_user)
++		return false;
++
++	return !mmap_read_lock_killable(mm);
++}
++
++static inline bool mmap_upgrade_trylock(struct mm_struct *mm)
++{
++	/*
++	 * We don't have this operation yet.
++	 *
++	 * It should be easy enough to do: it's basically a
++	 *    atomic_long_try_cmpxchg_acquire()
++	 * from RWSEM_READER_BIAS -> RWSEM_WRITER_LOCKED, but
++	 * it also needs the proper lockdep magic etc.
++	 */
++	return false;
++}
++
++static inline bool upgrade_mmap_lock_carefully(struct mm_struct *mm, bool is_user)
++{
++	mmap_read_unlock(mm);
++	if (!is_user)
++		return false;
++
++	return !mmap_write_lock_killable(mm);
++}
++
++/*
++ * Helper for page fault handling.
++ *
++ * This is kind of equivalent to "mmap_read_lock()" followed
++ * by "find_extend_vma()", except it's a lot more careful about
++ * the locking (and will drop the lock on failure).
++ *
++ * For example, if we have a kernel bug that causes a page
++ * fault, we don't want to just use mmap_read_lock() to get
++ * the mm lock, because that would deadlock if the bug were
++ * to happen while we're holding the mm lock for writing.
++ *
++ * So this checks the exception tables on kernel faults in
++ * order to do all this only for instructions that are actually
++ * expected to fault.
++ *
++ * We can also actually take the mm lock for writing if we
++ * need to extend the vma, which helps the VM layer a lot.
++ */
++static struct vm_area_struct *
++um_lock_mm_and_find_vma(struct mm_struct *mm,
++			unsigned long addr, bool is_user)
++{
++	struct vm_area_struct *vma;
++
++	if (!get_mmap_lock_carefully(mm, is_user))
++		return NULL;
++
++	vma = find_vma(mm, addr);
++	if (likely(vma && (vma->vm_start <= addr)))
++		return vma;
++
++	/*
++	 * Well, dang. We might still be successful, but only
++	 * if we can extend a vma to do so.
++	 */
++	if (!vma || !(vma->vm_flags & VM_GROWSDOWN)) {
++		mmap_read_unlock(mm);
++		return NULL;
++	}
++
++	/*
++	 * We can try to upgrade the mmap lock atomically,
++	 * in which case we can continue to use the vma
++	 * we already looked up.
++	 *
++	 * Otherwise we'll have to drop the mmap lock and
++	 * re-take it, and also look up the vma again,
++	 * re-checking it.
++	 */
++	if (!mmap_upgrade_trylock(mm)) {
++		if (!upgrade_mmap_lock_carefully(mm, is_user))
++			return NULL;
++
++		vma = find_vma(mm, addr);
++		if (!vma)
++			goto fail;
++		if (vma->vm_start <= addr)
++			goto success;
++		if (!(vma->vm_flags & VM_GROWSDOWN))
++			goto fail;
++	}
++
++	if (expand_stack_locked(vma, addr))
++		goto fail;
++
++success:
++	mmap_write_downgrade(mm);
++	return vma;
++
++fail:
++	mmap_write_unlock(mm);
++	return NULL;
++}
++
+ /*
+  * Note this is constrained to return 0, -EFAULT, -EACCES, -ENOMEM by
+  * segv().
+@@ -44,21 +160,10 @@ int handle_page_fault(unsigned long address, unsigned long ip,
+ 	if (is_user)
+ 		flags |= FAULT_FLAG_USER;
+ retry:
+-	mmap_read_lock(mm);
+-	vma = find_vma(mm, address);
+-	if (!vma)
+-		goto out;
+-	if (vma->vm_start <= address)
+-		goto good_area;
+-	if (!(vma->vm_flags & VM_GROWSDOWN))
+-		goto out;
+-	if (is_user && !ARCH_IS_STACKGROW(address))
+-		goto out;
+-	vma = expand_stack(mm, address);
++	vma = um_lock_mm_and_find_vma(mm, address, is_user);
+ 	if (!vma)
+ 		goto out_nosemaphore;
+ 
+-good_area:
+ 	*code_out = SEGV_ACCERR;
+ 	if (is_write) {
+ 		if (!(vma->vm_flags & VM_WRITE))
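The caller contract of um_lock_mm_and_find_vma() mirrors lock_mm_and_find_vma() in mm/memory.c: on success the mmap lock is held for read; on failure it has already been dropped, so the caller skips straight to the no-semaphore path. Usage, as in the hunk above:

	vma = um_lock_mm_and_find_vma(mm, address, is_user);
	if (!vma)
		goto out_nosemaphore;	/* lock already released on failure */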
+diff --git a/arch/x86/include/uapi/asm/debugreg.h b/arch/x86/include/uapi/asm/debugreg.h
+index 0007ba077c0c2b..41da492dfb01f0 100644
+--- a/arch/x86/include/uapi/asm/debugreg.h
++++ b/arch/x86/include/uapi/asm/debugreg.h
+@@ -15,7 +15,26 @@
+    which debugging register was responsible for the trap.  The other bits
+    are either reserved or not of interest to us. */
+ 
+-/* Define reserved bits in DR6 which are always set to 1 */
++/*
++ * Define bits in DR6 which are set to 1 by default.
++ *
++ * This is also the DR6 architectural value following Power-up, Reset or INIT.
++ *
++ * Note, with the introduction of Bus Lock Detection (BLD) and Restricted
++ * Transactional Memory (RTM), the DR6 register has been modified:
++ *
++ * 1) BLD flag (bit 11) is no longer reserved to 1 if the CPU supports
++ *    Bus Lock Detection.  The assertion of a bus lock could clear it.
++ *
++ * 2) RTM flag (bit 16) is no longer reserved to 1 if the CPU supports
++ *    restricted transactional memory.  A #DB that occurred inside an
++ *    RTM region could clear it.
++ *
++ * Apparently, DR6.BLD and DR6.RTM are active low bits.
++ *
++ * As a result, DR6_RESERVED is an incorrect name now, but it is kept for
++ * compatibility.
++ */
+ #define DR6_RESERVED	(0xFFFF0FF0)
+ 
+ #define DR_TRAP0	(0x1)		/* db0 */
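A worked example of the active-low convention the new comment describes (values illustrative): after a bus-lock #DB the CPU clears DR6.BLD (bit 11), and XOR-ing with DR6_RESERVED flips the register to positive polarity so handlers can test bits normally:

	unsigned long raw = 0xFFFF0FF0UL & ~(1UL << 11);	/* DR6 after a BLD #DB: 0xFFFF07F0 */
	unsigned long dr6 = raw ^ 0xFFFF0FF0UL;			/* 0x00000800: BLD now reads as set */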
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 5de4a879232a6c..bf86a0145d8bdc 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -2205,20 +2205,16 @@ EXPORT_PER_CPU_SYMBOL(__stack_chk_guard);
+ #endif
+ #endif
+ 
+-/*
+- * Clear all 6 debug registers:
+- */
+-static void clear_all_debug_regs(void)
++static void initialize_debug_regs(void)
+ {
+-	int i;
+-
+-	for (i = 0; i < 8; i++) {
+-		/* Ignore db4, db5 */
+-		if ((i == 4) || (i == 5))
+-			continue;
+-
+-		set_debugreg(0, i);
+-	}
++	/* Control register first -- to make sure everything is disabled. */
++	set_debugreg(0, 7);
++	set_debugreg(DR6_RESERVED, 6);
++	/* dr5 and dr4 don't exist */
++	set_debugreg(0, 3);
++	set_debugreg(0, 2);
++	set_debugreg(0, 1);
++	set_debugreg(0, 0);
+ }
+ 
+ #ifdef CONFIG_KGDB
+@@ -2379,7 +2375,7 @@ void cpu_init(void)
+ 
+ 	load_mm_ldt(&init_mm);
+ 
+-	clear_all_debug_regs();
++	initialize_debug_regs();
+ 	dbg_restore_debug_regs();
+ 
+ 	doublefault_init_cpu_tss();
+diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
+index 6c69cb28b2983b..514c17238cdea8 100644
+--- a/arch/x86/kernel/fpu/signal.c
++++ b/arch/x86/kernel/fpu/signal.c
+@@ -114,7 +114,6 @@ static inline bool save_xstate_epilog(void __user *buf, int ia32_frame,
+ {
+ 	struct xregs_state __user *x = buf;
+ 	struct _fpx_sw_bytes sw_bytes = {};
+-	u32 xfeatures;
+ 	int err;
+ 
+ 	/* Setup the bytes not touched by the [f]xsave and reserved for SW. */
+@@ -127,12 +126,6 @@ static inline bool save_xstate_epilog(void __user *buf, int ia32_frame,
+ 	err |= __put_user(FP_XSTATE_MAGIC2,
+ 			  (__u32 __user *)(buf + fpstate->user_size));
+ 
+-	/*
+-	 * Read the xfeatures which we copied (directly from the cpu or
+-	 * from the state in task struct) to the user buffers.
+-	 */
+-	err |= __get_user(xfeatures, (__u32 __user *)&x->header.xfeatures);
+-
+ 	/*
+ 	 * For legacy compatibility, we always set FP/SSE bits in the bit
+ 	 * vector while saving the state to the user context. This will
+@@ -144,9 +137,7 @@ static inline bool save_xstate_epilog(void __user *buf, int ia32_frame,
+ 	 * header as well as change any contents in the memory layout.
+ 	 * xrestore as part of sigreturn will capture all the changes.
+ 	 */
+-	xfeatures |= XFEATURE_MASK_FPSSE;
+-
+-	err |= __put_user(xfeatures, (__u32 __user *)&x->header.xfeatures);
++	err |= set_xfeature_in_sigframe(x, XFEATURE_MASK_FPSSE);
+ 
+ 	return !err;
+ }
+diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h
+index 0fd34f53f0258a..7b684411ad78ff 100644
+--- a/arch/x86/kernel/fpu/xstate.h
++++ b/arch/x86/kernel/fpu/xstate.h
+@@ -69,21 +69,31 @@ static inline u64 xfeatures_mask_independent(void)
+ 	return fpu_kernel_cfg.independent_features;
+ }
+ 
++static inline int set_xfeature_in_sigframe(struct xregs_state __user *xbuf, u64 mask)
++{
++	u64 xfeatures;
++	int err;
++
++	/* Read the xfeatures value already saved in the user buffer */
++	err  = __get_user(xfeatures, &xbuf->header.xfeatures);
++	xfeatures |= mask;
++	err |= __put_user(xfeatures, &xbuf->header.xfeatures);
++
++	return err;
++}
++
+ /*
+  * Update the value of PKRU register that was already pushed onto the signal frame.
+  */
+-static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u64 mask, u32 pkru)
++static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u32 pkru)
+ {
+-	u64 xstate_bv;
+ 	int err;
+ 
+ 	if (unlikely(!cpu_feature_enabled(X86_FEATURE_OSPKE)))
+ 		return 0;
+ 
+ 	/* Mark PKRU as in-use so that it is restored correctly. */
+-	xstate_bv = (mask & xfeatures_in_use()) | XFEATURE_MASK_PKRU;
+-
+-	err =  __put_user(xstate_bv, &buf->header.xfeatures);
++	err = set_xfeature_in_sigframe(buf, XFEATURE_MASK_PKRU);
+ 	if (err)
+ 		return err;
+ 
+@@ -307,7 +317,7 @@ static inline int xsave_to_user_sigframe(struct xregs_state __user *buf, u32 pkr
+ 	clac();
+ 
+ 	if (!err)
+-		err = update_pkru_in_sigframe(buf, mask, pkru);
++		err = update_pkru_in_sigframe(buf, pkru);
+ 
+ 	return err;
+ }
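The new set_xfeature_in_sigframe() replaces a blind overwrite of the sigframe's xfeatures word with a read-modify-write, so bits the CPU already recorded via XSAVE survive. Its two call sites in the hunks above, shown in isolation:

	err = set_xfeature_in_sigframe(x, XFEATURE_MASK_FPSSE);	/* legacy FP/SSE bits */
	err = set_xfeature_in_sigframe(buf, XFEATURE_MASK_PKRU);	/* mark PKRU in use   */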
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 9f88b8a78e5091..a5f3fa4b6c39ea 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -1021,24 +1021,32 @@ static bool is_sysenter_singlestep(struct pt_regs *regs)
+ #endif
+ }
+ 
+-static __always_inline unsigned long debug_read_clear_dr6(void)
++static __always_inline unsigned long debug_read_reset_dr6(void)
+ {
+ 	unsigned long dr6;
+ 
++	get_debugreg(dr6, 6);
++	dr6 ^= DR6_RESERVED; /* Flip to positive polarity */
++
+ 	/*
+ 	 * The Intel SDM says:
+ 	 *
+-	 *   Certain debug exceptions may clear bits 0-3. The remaining
+-	 *   contents of the DR6 register are never cleared by the
+-	 *   processor. To avoid confusion in identifying debug
+-	 *   exceptions, debug handlers should clear the register before
+-	 *   returning to the interrupted task.
++	 *   Certain debug exceptions may clear bits 0-3 of DR6.
++	 *
++	 *   BLD induced #DB clears DR6.BLD and any other debug
++	 *   exception doesn't modify DR6.BLD.
+ 	 *
+-	 * Keep it simple: clear DR6 immediately.
++	 *   RTM induced #DB clears DR6.RTM and any other debug
++	 *   exception sets DR6.RTM.
++	 *
++	 *   To avoid confusion in identifying debug exceptions,
++	 *   debug handlers should set DR6.BLD and DR6.RTM, and
++	 *   clear other DR6 bits before returning.
++	 *
++	 * Keep it simple: write DR6 with its architectural reset
++	 * value 0xFFFF0FF0, defined as DR6_RESERVED, immediately.
+ 	 */
+-	get_debugreg(dr6, 6);
+ 	set_debugreg(DR6_RESERVED, 6);
+-	dr6 ^= DR6_RESERVED; /* Flip to positive polarity */
+ 
+ 	return dr6;
+ }
+@@ -1238,13 +1246,13 @@ static noinstr void exc_debug_user(struct pt_regs *regs, unsigned long dr6)
+ /* IST stack entry */
+ DEFINE_IDTENTRY_DEBUG(exc_debug)
+ {
+-	exc_debug_kernel(regs, debug_read_clear_dr6());
++	exc_debug_kernel(regs, debug_read_reset_dr6());
+ }
+ 
+ /* User entry, runs on regular task stack */
+ DEFINE_IDTENTRY_DEBUG_USER(exc_debug)
+ {
+-	exc_debug_user(regs, debug_read_clear_dr6());
++	exc_debug_user(regs, debug_read_reset_dr6());
+ }
+ 
+ #ifdef CONFIG_X86_FRED
+@@ -1263,7 +1271,7 @@ DEFINE_FREDENTRY_DEBUG(exc_debug)
+ {
+ 	/*
+ 	 * FRED #DB stores DR6 on the stack in the format which
+-	 * debug_read_clear_dr6() returns for the IDT entry points.
++	 * debug_read_reset_dr6() returns for the IDT entry points.
+ 	 */
+ 	unsigned long dr6 = fred_event_data(regs);
+ 
+@@ -1278,7 +1286,7 @@ DEFINE_FREDENTRY_DEBUG(exc_debug)
+ /* 32 bit does not have separate entry points. */
+ DEFINE_IDTENTRY_RAW(exc_debug)
+ {
+-	unsigned long dr6 = debug_read_clear_dr6();
++	unsigned long dr6 = debug_read_reset_dr6();
+ 
+ 	if (user_mode(regs))
+ 		exc_debug_user(regs, dr6);
+diff --git a/arch/x86/um/asm/checksum.h b/arch/x86/um/asm/checksum.h
+index b07824500363fa..ddc144657efad9 100644
+--- a/arch/x86/um/asm/checksum.h
++++ b/arch/x86/um/asm/checksum.h
+@@ -20,6 +20,9 @@
+  */
+ extern __wsum csum_partial(const void *buff, int len, __wsum sum);
+ 
++/* Do not call this directly. Declared for export type visibility. */
++extern __visible __wsum csum_partial_copy_generic(const void *src, void *dst, int len);
++
+ /**
+  * csum_fold - Fold and invert a 32bit checksum.
+  * sum: 32bit unfolded sum
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 3fc65df9ccb839..931da4749e8086 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -1456,7 +1456,7 @@ static bool ahci_broken_lpm(struct pci_dev *pdev)
+ 		{
+ 			.matches = {
+ 				DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-				DMI_MATCH(DMI_PRODUCT_VERSION, "ASUSPRO D840MB_M840SA"),
++				DMI_MATCH(DMI_PRODUCT_NAME, "ASUSPRO D840MB_M840SA"),
+ 			},
+ 			/* 320 is broken, there is no known good version. */
+ 		},
+diff --git a/drivers/bus/mhi/host/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c
+index 03aa8879520986..059cfd77382f06 100644
+--- a/drivers/bus/mhi/host/pci_generic.c
++++ b/drivers/bus/mhi/host/pci_generic.c
+@@ -782,6 +782,42 @@ static const struct mhi_pci_dev_info mhi_telit_fe990a_info = {
+ 	.mru_default = 32768,
+ };
+ 
++static const struct mhi_channel_config mhi_telit_fn920c04_channels[] = {
++	MHI_CHANNEL_CONFIG_UL_SBL(2, "SAHARA", 32, 0),
++	MHI_CHANNEL_CONFIG_DL_SBL(3, "SAHARA", 32, 0),
++	MHI_CHANNEL_CONFIG_UL(4, "DIAG", 64, 1),
++	MHI_CHANNEL_CONFIG_DL(5, "DIAG", 64, 1),
++	MHI_CHANNEL_CONFIG_UL(14, "QMI", 32, 0),
++	MHI_CHANNEL_CONFIG_DL(15, "QMI", 32, 0),
++	MHI_CHANNEL_CONFIG_UL(32, "DUN", 32, 0),
++	MHI_CHANNEL_CONFIG_DL(33, "DUN", 32, 0),
++	MHI_CHANNEL_CONFIG_UL_FP(34, "FIREHOSE", 32, 0),
++	MHI_CHANNEL_CONFIG_DL_FP(35, "FIREHOSE", 32, 0),
++	MHI_CHANNEL_CONFIG_UL(92, "DUN2", 32, 1),
++	MHI_CHANNEL_CONFIG_DL(93, "DUN2", 32, 1),
++	MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0", 128, 2),
++	MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0", 128, 3),
++};
++
++static const struct mhi_controller_config modem_telit_fn920c04_config = {
++	.max_channels = 128,
++	.timeout_ms = 50000,
++	.num_channels = ARRAY_SIZE(mhi_telit_fn920c04_channels),
++	.ch_cfg = mhi_telit_fn920c04_channels,
++	.num_events = ARRAY_SIZE(mhi_telit_fn990_events),
++	.event_cfg = mhi_telit_fn990_events,
++};
++
++static const struct mhi_pci_dev_info mhi_telit_fn920c04_info = {
++	.name = "telit-fn920c04",
++	.config = &modem_telit_fn920c04_config,
++	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
++	.dma_data_width = 32,
++	.sideband_wake = false,
++	.mru_default = 32768,
++	.edl_trigger = true,
++};
++
+ static const struct mhi_pci_dev_info mhi_netprisma_lcur57_info = {
+ 	.name = "netprisma-lcur57",
+ 	.edl = "qcom/prog_firehose_sdx24.mbn",
+@@ -806,6 +842,9 @@ static const struct mhi_pci_dev_info mhi_netprisma_fcun69_info = {
+ static const struct pci_device_id mhi_pci_id_table[] = {
+ 	{PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0116),
+ 		.driver_data = (kernel_ulong_t) &mhi_qcom_sa8775p_info },
++	/* Telit FN920C04 (sdx35) */
++	{PCI_DEVICE_SUB(PCI_VENDOR_ID_QCOM, 0x011a, 0x1c5d, 0x2020),
++		.driver_data = (kernel_ulong_t) &mhi_telit_fn920c04_info },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0304),
+ 		.driver_data = (kernel_ulong_t) &mhi_qcom_sdx24_info },
+ 	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_QCOM, 0x0306, PCI_VENDOR_ID_QCOM, 0x010c),
+diff --git a/drivers/cxl/core/ras.c b/drivers/cxl/core/ras.c
+index 485a831695c705..2731ba3a07993c 100644
+--- a/drivers/cxl/core/ras.c
++++ b/drivers/cxl/core/ras.c
+@@ -31,40 +31,38 @@ static void cxl_cper_trace_uncorr_port_prot_err(struct pci_dev *pdev,
+ 					       ras_cap.header_log);
+ }
+ 
+-static void cxl_cper_trace_corr_prot_err(struct pci_dev *pdev,
+-				  struct cxl_ras_capability_regs ras_cap)
++static void cxl_cper_trace_corr_prot_err(struct cxl_memdev *cxlmd,
++					 struct cxl_ras_capability_regs ras_cap)
+ {
+ 	u32 status = ras_cap.cor_status & ~ras_cap.cor_mask;
+-	struct cxl_dev_state *cxlds;
+ 
+-	cxlds = pci_get_drvdata(pdev);
+-	if (!cxlds)
+-		return;
+-
+-	trace_cxl_aer_correctable_error(cxlds->cxlmd, status);
++	trace_cxl_aer_correctable_error(cxlmd, status);
+ }
+ 
+-static void cxl_cper_trace_uncorr_prot_err(struct pci_dev *pdev,
+-				    struct cxl_ras_capability_regs ras_cap)
++static void
++cxl_cper_trace_uncorr_prot_err(struct cxl_memdev *cxlmd,
++			       struct cxl_ras_capability_regs ras_cap)
+ {
+ 	u32 status = ras_cap.uncor_status & ~ras_cap.uncor_mask;
+-	struct cxl_dev_state *cxlds;
+ 	u32 fe;
+ 
+-	cxlds = pci_get_drvdata(pdev);
+-	if (!cxlds)
+-		return;
+-
+ 	if (hweight32(status) > 1)
+ 		fe = BIT(FIELD_GET(CXL_RAS_CAP_CONTROL_FE_MASK,
+ 				   ras_cap.cap_control));
+ 	else
+ 		fe = status;
+ 
+-	trace_cxl_aer_uncorrectable_error(cxlds->cxlmd, status, fe,
++	trace_cxl_aer_uncorrectable_error(cxlmd, status, fe,
+ 					  ras_cap.header_log);
+ }
+ 
++static int match_memdev_by_parent(struct device *dev, const void *uport)
++{
++	if (is_cxl_memdev(dev) && dev->parent == uport)
++		return 1;
++	return 0;
++}
++
+ static void cxl_cper_handle_prot_err(struct cxl_cper_prot_err_work_data *data)
+ {
+ 	unsigned int devfn = PCI_DEVFN(data->prot_err.agent_addr.device,
+@@ -73,13 +71,12 @@ static void cxl_cper_handle_prot_err(struct cxl_cper_prot_err_work_data *data)
+ 		pci_get_domain_bus_and_slot(data->prot_err.agent_addr.segment,
+ 					    data->prot_err.agent_addr.bus,
+ 					    devfn);
++	struct cxl_memdev *cxlmd;
+ 	int port_type;
+ 
+ 	if (!pdev)
+ 		return;
+ 
+-	guard(device)(&pdev->dev);
+-
+ 	port_type = pci_pcie_type(pdev);
+ 	if (port_type == PCI_EXP_TYPE_ROOT_PORT ||
+ 	    port_type == PCI_EXP_TYPE_DOWNSTREAM ||
+@@ -92,10 +89,20 @@ static void cxl_cper_handle_prot_err(struct cxl_cper_prot_err_work_data *data)
+ 		return;
+ 	}
+ 
++	guard(device)(&pdev->dev);
++	if (!pdev->dev.driver)
++		return;
++
++	struct device *mem_dev __free(put_device) = bus_find_device(
++		&cxl_bus_type, NULL, pdev, match_memdev_by_parent);
++	if (!mem_dev)
++		return;
++
++	cxlmd = to_cxl_memdev(mem_dev);
+ 	if (data->severity == AER_CORRECTABLE)
+-		cxl_cper_trace_corr_prot_err(pdev, data->ras_cap);
++		cxl_cper_trace_corr_prot_err(cxlmd, data->ras_cap);
+ 	else
+-		cxl_cper_trace_uncorr_prot_err(pdev, data->ras_cap);
++		cxl_cper_trace_uncorr_prot_err(cxlmd, data->ras_cap);
+ }
+ 
+ static void cxl_cper_prot_err_work_fn(struct work_struct *work)
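The CXL fix stops trusting pci_get_drvdata() on a device whose driver may not be bound, and instead walks the CXL bus for the memdev whose parent is the PCI device; the __free(put_device) annotation drops the acquired reference automatically at scope exit. A condensed sketch of the lookup, assuming the same helpers the patch adds:

	guard(device)(&pdev->dev);
	if (!pdev->dev.driver)		/* no driver bound: drvdata would be stale */
		return;

	struct device *mem_dev __free(put_device) =
		bus_find_device(&cxl_bus_type, NULL, pdev, match_memdev_by_parent);
	if (!mem_dev)
		return;

	struct cxl_memdev *cxlmd = to_cxl_memdev(mem_dev);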
+diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
+index c3f4dc244df77b..7585f0302f3a2a 100644
+--- a/drivers/cxl/core/region.c
++++ b/drivers/cxl/core/region.c
+@@ -1446,7 +1446,7 @@ static int cxl_port_setup_targets(struct cxl_port *port,
+ 
+ 	if (test_bit(CXL_REGION_F_AUTO, &cxlr->flags)) {
+ 		if (cxld->interleave_ways != iw ||
+-		    cxld->interleave_granularity != ig ||
++		    (iw > 1 && cxld->interleave_granularity != ig) ||
+ 		    !region_res_match_cxl_range(p, &cxld->hpa_range) ||
+ 		    ((cxld->flags & CXL_DECODER_F_ENABLE) == 0)) {
+ 			dev_err(&cxlr->dev,
+@@ -1805,6 +1805,13 @@ static int find_pos_and_ways(struct cxl_port *port, struct range *range,
+ 	}
+ 	put_device(dev);
+ 
++	if (rc)
++		dev_err(port->uport_dev,
++			"failed to find %s:%s in target list of %s\n",
++			dev_name(&port->dev),
++			dev_name(port->parent_dport->dport_dev),
++			dev_name(&cxlsd->cxld.dev));
++
+ 	return rc;
+ }
+ 
+diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
+index 6d12033649f817..bc934bc249df1f 100644
+--- a/drivers/dma/idxd/cdev.c
++++ b/drivers/dma/idxd/cdev.c
+@@ -349,7 +349,9 @@ static void idxd_cdev_evl_drain_pasid(struct idxd_wq *wq, u32 pasid)
+ 			set_bit(h, evl->bmap);
+ 		h = (h + 1) % size;
+ 	}
+-	drain_workqueue(wq->wq);
++	if (wq->wq)
++		drain_workqueue(wq->wq);
++
+ 	mutex_unlock(&evl->lock);
+ }
+ 
+diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
+index 3ad44afd0e74ee..8f26b6eff3f3e9 100644
+--- a/drivers/dma/xilinx/xilinx_dma.c
++++ b/drivers/dma/xilinx/xilinx_dma.c
+@@ -2909,6 +2909,8 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
+ 		return -EINVAL;
+ 	}
+ 
++	xdev->common.directions |= chan->direction;
++
+ 	/* Request the interrupt */
+ 	chan->irq = of_irq_get(node, chan->tdest);
+ 	if (chan->irq < 0)
+diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
+index 390f5756b66ed0..9a8866db1c3a70 100644
+--- a/drivers/edac/amd64_edac.c
++++ b/drivers/edac/amd64_edac.c
+@@ -1209,7 +1209,9 @@ static int umc_get_cs_mode(int dimm, u8 ctrl, struct amd64_pvt *pvt)
+ 	if (csrow_enabled(2 * dimm + 1, ctrl, pvt))
+ 		cs_mode |= CS_ODD_PRIMARY;
+ 
+-	/* Asymmetric dual-rank DIMM support. */
++	if (csrow_sec_enabled(2 * dimm, ctrl, pvt))
++		cs_mode |= CS_EVEN_SECONDARY;
++
+ 	if (csrow_sec_enabled(2 * dimm + 1, ctrl, pvt))
+ 		cs_mode |= CS_ODD_SECONDARY;
+ 
+@@ -1230,12 +1232,13 @@ static int umc_get_cs_mode(int dimm, u8 ctrl, struct amd64_pvt *pvt)
+ 	return cs_mode;
+ }
+ 
+-static int __addr_mask_to_cs_size(u32 addr_mask_orig, unsigned int cs_mode,
+-				  int csrow_nr, int dimm)
++static int calculate_cs_size(u32 mask, unsigned int cs_mode)
+ {
+-	u32 msb, weight, num_zero_bits;
+-	u32 addr_mask_deinterleaved;
+-	int size = 0;
++	int msb, weight, num_zero_bits;
++	u32 deinterleaved_mask;
++
++	if (!mask)
++		return 0;
+ 
+ 	/*
+ 	 * The number of zero bits in the mask is equal to the number of bits
+@@ -1248,19 +1251,30 @@ static int __addr_mask_to_cs_size(u32 addr_mask_orig, unsigned int cs_mode,
+ 	 * without swapping with the most significant bit. This can be handled
+ 	 * by keeping the MSB where it is and ignoring the single zero bit.
+ 	 */
+-	msb = fls(addr_mask_orig) - 1;
+-	weight = hweight_long(addr_mask_orig);
++	msb = fls(mask) - 1;
++	weight = hweight_long(mask);
+ 	num_zero_bits = msb - weight - !!(cs_mode & CS_3R_INTERLEAVE);
+ 
+ 	/* Take the number of zero bits off from the top of the mask. */
+-	addr_mask_deinterleaved = GENMASK_ULL(msb - num_zero_bits, 1);
++	deinterleaved_mask = GENMASK(msb - num_zero_bits, 1);
++	edac_dbg(1, "  Deinterleaved AddrMask: 0x%x\n", deinterleaved_mask);
++
++	return (deinterleaved_mask >> 2) + 1;
++}
++
++static int __addr_mask_to_cs_size(u32 addr_mask, u32 addr_mask_sec,
++				  unsigned int cs_mode, int csrow_nr, int dimm)
++{
++	int size;
+ 
+ 	edac_dbg(1, "CS%d DIMM%d AddrMasks:\n", csrow_nr, dimm);
+-	edac_dbg(1, "  Original AddrMask: 0x%x\n", addr_mask_orig);
+-	edac_dbg(1, "  Deinterleaved AddrMask: 0x%x\n", addr_mask_deinterleaved);
++	edac_dbg(1, "  Primary AddrMask: 0x%x\n", addr_mask);
+ 
+ 	/* Register [31:1] = Address [39:9]. Size is in kBs here. */
+-	size = (addr_mask_deinterleaved >> 2) + 1;
++	size = calculate_cs_size(addr_mask, cs_mode);
++
++	edac_dbg(1, "  Secondary AddrMask: 0x%x\n", addr_mask_sec);
++	size += calculate_cs_size(addr_mask_sec, cs_mode);
+ 
+ 	/* Return size in MBs. */
+ 	return size >> 10;
+@@ -1269,8 +1283,8 @@ static int __addr_mask_to_cs_size(u32 addr_mask_orig, unsigned int cs_mode,
+ static int umc_addr_mask_to_cs_size(struct amd64_pvt *pvt, u8 umc,
+ 				    unsigned int cs_mode, int csrow_nr)
+ {
++	u32 addr_mask = 0, addr_mask_sec = 0;
+ 	int cs_mask_nr = csrow_nr;
+-	u32 addr_mask_orig;
+ 	int dimm, size = 0;
+ 
+ 	/* No Chip Selects are enabled. */
+@@ -1308,13 +1322,13 @@ static int umc_addr_mask_to_cs_size(struct amd64_pvt *pvt, u8 umc,
+ 	if (!pvt->flags.zn_regs_v2)
+ 		cs_mask_nr >>= 1;
+ 
+-	/* Asymmetric dual-rank DIMM support. */
+-	if ((csrow_nr & 1) && (cs_mode & CS_ODD_SECONDARY))
+-		addr_mask_orig = pvt->csels[umc].csmasks_sec[cs_mask_nr];
+-	else
+-		addr_mask_orig = pvt->csels[umc].csmasks[cs_mask_nr];
++	if (cs_mode & (CS_EVEN_PRIMARY | CS_ODD_PRIMARY))
++		addr_mask = pvt->csels[umc].csmasks[cs_mask_nr];
++
++	if (cs_mode & (CS_EVEN_SECONDARY | CS_ODD_SECONDARY))
++		addr_mask_sec = pvt->csels[umc].csmasks_sec[cs_mask_nr];
+ 
+-	return __addr_mask_to_cs_size(addr_mask_orig, cs_mode, csrow_nr, dimm);
++	return __addr_mask_to_cs_size(addr_mask, addr_mask_sec, cs_mode, csrow_nr, dimm);
+ }
+ 
+ static void umc_debug_display_dimm_sizes(struct amd64_pvt *pvt, u8 ctrl)
+@@ -3512,9 +3526,10 @@ static void gpu_get_err_info(struct mce *m, struct err_info *err)
+ static int gpu_addr_mask_to_cs_size(struct amd64_pvt *pvt, u8 umc,
+ 				    unsigned int cs_mode, int csrow_nr)
+ {
+-	u32 addr_mask_orig = pvt->csels[umc].csmasks[csrow_nr];
++	u32 addr_mask		= pvt->csels[umc].csmasks[csrow_nr];
++	u32 addr_mask_sec	= pvt->csels[umc].csmasks_sec[csrow_nr];
+ 
+-	return __addr_mask_to_cs_size(addr_mask_orig, cs_mode, csrow_nr, csrow_nr >> 1);
++	return __addr_mask_to_cs_size(addr_mask, addr_mask_sec, cs_mode, csrow_nr, csrow_nr >> 1);
+ }
+ 
+ static void gpu_debug_display_dimm_sizes(struct amd64_pvt *pvt, u8 ctrl)
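The mask arithmetic in calculate_cs_size() is easiest to see with numbers. An illustrative, non-interleaved case (register bits [31:1] map to address bits [39:9], so the intermediate result is in kB):

	u32 mask = 0x3FFFFFE;			/* bits 1..25 set: one 16 GB rank, no interleave */
	int msb = fls(mask) - 1;		/* 25                                            */
	int weight = hweight_long(mask);	/* 25 -> num_zero_bits = 0, mask unchanged       */
	int size_kb = (mask >> 2) + 1;		/* 0x1000000 kB == 16 GB                         */
	/* With the fix, primary and secondary masks are summed, so an
	 * asymmetric dual-rank DIMM (e.g. 16 GB + 8 GB) reports 24 GB. */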
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+index a1450f13d9632c..0961e99c393426 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+@@ -1902,7 +1902,7 @@ static void amdgpu_ib_preempt_mark_partial_job(struct amdgpu_ring *ring)
+ 			continue;
+ 		}
+ 		job = to_amdgpu_job(s_job);
+-		if (preempted && (&job->hw_fence) == fence)
++		if (preempted && (&job->hw_fence.base) == fence)
+ 			/* mark the job as preempted */
+ 			job->preemption_status |= AMDGPU_IB_PREEMPTED;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 95124a4a0a67cf..82a22a62b99bee 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -6083,7 +6083,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
+ 	 *
+ 	 * job->base holds a reference to parent fence
+ 	 */
+-	if (job && dma_fence_is_signaled(&job->hw_fence)) {
++	if (job && dma_fence_is_signaled(&job->hw_fence.base)) {
+ 		job_signaled = true;
+ 		dev_info(adev->dev, "Guilty job already signaled, skipping HW reset");
+ 		goto skip_hw_reset;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+index 9e738fae2b74f1..6d34eac0539d4e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+@@ -311,10 +311,12 @@ static int amdgpu_discovery_read_binary_from_file(struct amdgpu_device *adev,
+ 	const struct firmware *fw;
+ 	int r;
+ 
+-	r = request_firmware(&fw, fw_name, adev->dev);
++	r = firmware_request_nowarn(&fw, fw_name, adev->dev);
+ 	if (r) {
+-		dev_err(adev->dev, "can't load firmware \"%s\"\n",
+-			fw_name);
++		if (amdgpu_discovery == 2)
++			dev_err(adev->dev, "can't load firmware \"%s\"\n", fw_name);
++		else
++			drm_info(&adev->ddev, "Optional firmware \"%s\" was not found\n", fw_name);
+ 		return r;
+ 	}
+ 
+@@ -449,16 +451,12 @@ static int amdgpu_discovery_init(struct amdgpu_device *adev)
+ 	/* Read from file if it is the preferred option */
+ 	fw_name = amdgpu_discovery_get_fw_name(adev);
+ 	if (fw_name != NULL) {
+-		dev_info(adev->dev, "use ip discovery information from file");
++		drm_dbg(&adev->ddev, "use ip discovery information from file");
+ 		r = amdgpu_discovery_read_binary_from_file(adev, adev->mman.discovery_bin, fw_name);
+-
+-		if (r) {
+-			dev_err(adev->dev, "failed to read ip discovery binary from file\n");
+-			r = -EINVAL;
++		if (r)
+ 			goto out;
+-		}
+-
+ 	} else {
++		drm_dbg(&adev->ddev, "use ip discovery information from memory");
+ 		r = amdgpu_discovery_read_binary_from_mem(
+ 			adev, adev->mman.discovery_bin);
+ 		if (r)
+@@ -1328,10 +1326,8 @@ static int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
+ 	int r;
+ 
+ 	r = amdgpu_discovery_init(adev);
+-	if (r) {
+-		DRM_ERROR("amdgpu_discovery_init failed\n");
++	if (r)
+ 		return r;
+-	}
+ 
+ 	wafl_ver = 0;
+ 	adev->gfx.xcc_mask = 0;
+@@ -2569,8 +2565,10 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
+ 		break;
+ 	default:
+ 		r = amdgpu_discovery_reg_base_init(adev);
+-		if (r)
+-			return -EINVAL;
++		if (r) {
++			drm_err(&adev->ddev, "discovery failed: %d\n", r);
++			return r;
++		}
+ 
+ 		amdgpu_discovery_harvest_ip(adev);
+ 		amdgpu_discovery_get_gfx_info(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+index 5f5c00ace96ba4..f5855c412321f8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+@@ -41,22 +41,6 @@
+ #include "amdgpu_trace.h"
+ #include "amdgpu_reset.h"
+ 
+-/*
+- * Fences mark an event in the GPUs pipeline and are used
+- * for GPU/CPU synchronization.  When the fence is written,
+- * it is expected that all buffers associated with that fence
+- * are no longer in use by the associated ring on the GPU and
+- * that the relevant GPU caches have been flushed.
+- */
+-
+-struct amdgpu_fence {
+-	struct dma_fence base;
+-
+-	/* RB, DMA, etc. */
+-	struct amdgpu_ring		*ring;
+-	ktime_t				start_timestamp;
+-};
+-
+ static struct kmem_cache *amdgpu_fence_slab;
+ 
+ int amdgpu_fence_slab_init(void)
+@@ -151,12 +135,12 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
+ 		am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
+ 		if (am_fence == NULL)
+ 			return -ENOMEM;
+-		fence = &am_fence->base;
+-		am_fence->ring = ring;
+ 	} else {
+ 		/* take use of job-embedded fence */
+-		fence = &job->hw_fence;
++		am_fence = &job->hw_fence;
+ 	}
++	fence = &am_fence->base;
++	am_fence->ring = ring;
+ 
+ 	seq = ++ring->fence_drv.sync_seq;
+ 	if (job && job->job_run_counter) {
+@@ -718,7 +702,7 @@ void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring)
+ 			 * it right here or we won't be able to track them in fence_drv
+ 			 * and they will remain unsignaled during sa_bo free.
+ 			 */
+-			job = container_of(old, struct amdgpu_job, hw_fence);
++			job = container_of(old, struct amdgpu_job, hw_fence.base);
+ 			if (!job->base.s_fence && !dma_fence_is_signaled(old))
+ 				dma_fence_signal(old);
+ 			RCU_INIT_POINTER(*ptr, NULL);
+@@ -780,7 +764,7 @@ static const char *amdgpu_fence_get_timeline_name(struct dma_fence *f)
+ 
+ static const char *amdgpu_job_fence_get_timeline_name(struct dma_fence *f)
+ {
+-	struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
++	struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
+ 
+ 	return (const char *)to_amdgpu_ring(job->base.sched)->name;
+ }
+@@ -810,7 +794,7 @@ static bool amdgpu_fence_enable_signaling(struct dma_fence *f)
+  */
+ static bool amdgpu_job_fence_enable_signaling(struct dma_fence *f)
+ {
+-	struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
++	struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
+ 
+ 	if (!timer_pending(&to_amdgpu_ring(job->base.sched)->fence_drv.fallback_timer))
+ 		amdgpu_fence_schedule_fallback(to_amdgpu_ring(job->base.sched));
+@@ -845,7 +829,7 @@ static void amdgpu_job_fence_free(struct rcu_head *rcu)
+ 	struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
+ 
+ 	/* free job if fence has a parent job */
+-	kfree(container_of(f, struct amdgpu_job, hw_fence));
++	kfree(container_of(f, struct amdgpu_job, hw_fence.base));
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index 1dc06e4ab49705..54b55f4e8a20c9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -2192,6 +2192,9 @@ void amdgpu_gfx_profile_ring_begin_use(struct amdgpu_ring *ring)
+ 	enum PP_SMC_POWER_PROFILE profile;
+ 	int r;
+ 
++	if (amdgpu_dpm_is_overdrive_enabled(adev))
++		return;
++
+ 	if (adev->gfx.num_gfx_rings)
+ 		profile = PP_SMC_POWER_PROFILE_FULLSCREEN3D;
+ 	else
+@@ -2222,6 +2225,11 @@ void amdgpu_gfx_profile_ring_begin_use(struct amdgpu_ring *ring)
+ 
+ void amdgpu_gfx_profile_ring_end_use(struct amdgpu_ring *ring)
+ {
++	struct amdgpu_device *adev = ring->adev;
++
++	if (amdgpu_dpm_is_overdrive_enabled(adev))
++		return;
++
+ 	atomic_dec(&ring->adev->gfx.total_submission_cnt);
+ 
+ 	schedule_delayed_work(&ring->adev->gfx.idle_work, GFX_PROFILE_IDLE_TIMEOUT);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+index acb21fc8b3ce5d..ddb9d3269357cf 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+@@ -272,8 +272,8 @@ void amdgpu_job_free_resources(struct amdgpu_job *job)
+ 	/* Check if any fences were initialized */
+ 	if (job->base.s_fence && job->base.s_fence->finished.ops)
+ 		f = &job->base.s_fence->finished;
+-	else if (job->hw_fence.ops)
+-		f = &job->hw_fence;
++	else if (job->hw_fence.base.ops)
++		f = &job->hw_fence.base;
+ 	else
+ 		f = NULL;
+ 
+@@ -290,10 +290,10 @@ static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
+ 	amdgpu_sync_free(&job->explicit_sync);
+ 
+ 	/* only put the hw fence if has embedded fence */
+-	if (!job->hw_fence.ops)
++	if (!job->hw_fence.base.ops)
+ 		kfree(job);
+ 	else
+-		dma_fence_put(&job->hw_fence);
++		dma_fence_put(&job->hw_fence.base);
+ }
+ 
+ void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
+@@ -322,10 +322,10 @@ void amdgpu_job_free(struct amdgpu_job *job)
+ 	if (job->gang_submit != &job->base.s_fence->scheduled)
+ 		dma_fence_put(job->gang_submit);
+ 
+-	if (!job->hw_fence.ops)
++	if (!job->hw_fence.base.ops)
+ 		kfree(job);
+ 	else
+-		dma_fence_put(&job->hw_fence);
++		dma_fence_put(&job->hw_fence.base);
+ }
+ 
+ struct dma_fence *amdgpu_job_submit(struct amdgpu_job *job)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
+index ce6b9ba967fff0..4fe033d8f35683 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
+@@ -48,7 +48,7 @@ struct amdgpu_job {
+ 	struct drm_sched_job    base;
+ 	struct amdgpu_vm	*vm;
+ 	struct amdgpu_sync	explicit_sync;
+-	struct dma_fence	hw_fence;
++	struct amdgpu_fence	hw_fence;
+ 	struct dma_fence	*gang_submit;
+ 	uint32_t		preamble_status;
+ 	uint32_t                preemption_status;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+index 5590ad5e8cd76c..7164948001e9d2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+@@ -836,7 +836,9 @@ int amdgpu_mes_map_legacy_queue(struct amdgpu_device *adev,
+ 	queue_input.mqd_addr = amdgpu_bo_gpu_offset(ring->mqd_obj);
+ 	queue_input.wptr_addr = ring->wptr_gpu_addr;
+ 
++	amdgpu_mes_lock(&adev->mes);
+ 	r = adev->mes.funcs->map_legacy_queue(&adev->mes, &queue_input);
++	amdgpu_mes_unlock(&adev->mes);
+ 	if (r)
+ 		DRM_ERROR("failed to map legacy queue\n");
+ 
+@@ -859,7 +861,9 @@ int amdgpu_mes_unmap_legacy_queue(struct amdgpu_device *adev,
+ 	queue_input.trail_fence_addr = gpu_addr;
+ 	queue_input.trail_fence_data = seq;
+ 
++	amdgpu_mes_lock(&adev->mes);
+ 	r = adev->mes.funcs->unmap_legacy_queue(&adev->mes, &queue_input);
++	amdgpu_mes_unlock(&adev->mes);
+ 	if (r)
+ 		DRM_ERROR("failed to unmap legacy queue\n");
+ 
+@@ -886,7 +890,9 @@ int amdgpu_mes_reset_legacy_queue(struct amdgpu_device *adev,
+ 	queue_input.vmid = vmid;
+ 	queue_input.use_mmio = use_mmio;
+ 
++	amdgpu_mes_lock(&adev->mes);
+ 	r = adev->mes.funcs->reset_legacy_queue(&adev->mes, &queue_input);
++	amdgpu_mes_unlock(&adev->mes);
+ 	if (r)
+ 		DRM_ERROR("failed to reset legacy queue\n");
+ 
+@@ -916,7 +922,9 @@ uint32_t amdgpu_mes_rreg(struct amdgpu_device *adev, uint32_t reg)
+ 		goto error;
+ 	}
+ 
++	amdgpu_mes_lock(&adev->mes);
+ 	r = adev->mes.funcs->misc_op(&adev->mes, &op_input);
++	amdgpu_mes_unlock(&adev->mes);
+ 	if (r)
+ 		DRM_ERROR("failed to read reg (0x%x)\n", reg);
+ 	else
+@@ -944,7 +952,9 @@ int amdgpu_mes_wreg(struct amdgpu_device *adev,
+ 		goto error;
+ 	}
+ 
++	amdgpu_mes_lock(&adev->mes);
+ 	r = adev->mes.funcs->misc_op(&adev->mes, &op_input);
++	amdgpu_mes_unlock(&adev->mes);
+ 	if (r)
+ 		DRM_ERROR("failed to write reg (0x%x)\n", reg);
+ 
+@@ -971,7 +981,9 @@ int amdgpu_mes_reg_write_reg_wait(struct amdgpu_device *adev,
+ 		goto error;
+ 	}
+ 
++	amdgpu_mes_lock(&adev->mes);
+ 	r = adev->mes.funcs->misc_op(&adev->mes, &op_input);
++	amdgpu_mes_unlock(&adev->mes);
+ 	if (r)
+ 		DRM_ERROR("failed to reg_write_reg_wait\n");
+ 
+@@ -996,7 +1008,9 @@ int amdgpu_mes_reg_wait(struct amdgpu_device *adev, uint32_t reg,
+ 		goto error;
+ 	}
+ 
++	amdgpu_mes_lock(&adev->mes);
+ 	r = adev->mes.funcs->misc_op(&adev->mes, &op_input);
++	amdgpu_mes_unlock(&adev->mes);
+ 	if (r)
+ 		DRM_ERROR("failed to reg_write_reg_wait\n");
+ 
+@@ -1687,7 +1701,9 @@ static int amdgpu_mes_set_enforce_isolation(struct amdgpu_device *adev,
+ 		goto error;
+ 	}
+ 
++	amdgpu_mes_lock(&adev->mes);
+ 	r = adev->mes.funcs->misc_op(&adev->mes, &op_input);
++	amdgpu_mes_unlock(&adev->mes);
+ 	if (r)
+ 		dev_err(adev->dev, "failed to change_config.\n");
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index df5d5dbd7f0fe2..20c5507b629355 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -3521,8 +3521,12 @@ int psp_init_sos_microcode(struct psp_context *psp, const char *chip_name)
+ 	uint8_t *ucode_array_start_addr;
+ 	int err = 0;
+ 
+-	err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, AMDGPU_UCODE_REQUIRED,
+-				   "amdgpu/%s_sos.bin", chip_name);
++	if (amdgpu_is_kicker_fw(adev))
++		err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, AMDGPU_UCODE_REQUIRED,
++					   "amdgpu/%s_sos_kicker.bin", chip_name);
++	else
++		err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, AMDGPU_UCODE_REQUIRED,
++					   "amdgpu/%s_sos.bin", chip_name);
+ 	if (err)
+ 		goto out;
+ 
+@@ -3798,8 +3802,12 @@ int psp_init_ta_microcode(struct psp_context *psp, const char *chip_name)
+ 	struct amdgpu_device *adev = psp->adev;
+ 	int err;
+ 
+-	err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, AMDGPU_UCODE_REQUIRED,
+-				   "amdgpu/%s_ta.bin", chip_name);
++	if (amdgpu_is_kicker_fw(adev))
++		err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, AMDGPU_UCODE_REQUIRED,
++					   "amdgpu/%s_ta_kicker.bin", chip_name);
++	else
++		err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, AMDGPU_UCODE_REQUIRED,
++					   "amdgpu/%s_ta.bin", chip_name);
+ 	if (err)
+ 		return err;
+ 
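
The two PSP hunks above pick a separate "_kicker" firmware image whenever amdgpu_is_kicker_fw() matches the device. As a minimal stand-alone sketch of that name-selection idiom — the helper name, chip string, and the single consolidated snprintf() are illustrative assumptions, not the driver's code, which keeps two full amdgpu_ucode_request() calls:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Sketch only: fold the "_kicker" suffix into one format string. */
static void pick_sos_firmware(bool is_kicker, const char *chip,
			      char *buf, size_t len)
{
	snprintf(buf, len, "amdgpu/%s_sos%s.bin", chip,
		 is_kicker ? "_kicker" : "");
}

int main(void)
{
	char name[64];

	pick_sos_firmware(true, "psp_13_0_0", name, sizeof(name));
	printf("%s\n", name);	/* amdgpu/psp_13_0_0_sos_kicker.bin */
	return 0;
}
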
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+index bb2b663852237c..8ec9cb5f9bb52d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+@@ -127,6 +127,22 @@ struct amdgpu_fence_driver {
+ 	struct dma_fence		**fences;
+ };
+ 
++/*
++ * Fences mark an event in the GPU's pipeline and are used
++ * for GPU/CPU synchronization.  When the fence is written,
++ * it is expected that all buffers associated with that fence
++ * are no longer in use by the associated ring on the GPU and
++ * that the relevant GPU caches have been flushed.
++ */
++
++struct amdgpu_fence {
++	struct dma_fence base;
++
++	/* RB, DMA, etc. */
++	struct amdgpu_ring		*ring;
++	ktime_t				start_timestamp;
++};
++
+ extern const struct drm_sched_backend_ops amdgpu_sched_ops;
+ 
+ void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_seq64.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_seq64.c
+index e22cb2b5cd9264..dba8051b8c14b2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_seq64.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_seq64.c
+@@ -133,7 +133,7 @@ void amdgpu_seq64_unmap(struct amdgpu_device *adev, struct amdgpu_fpriv *fpriv)
+ 
+ 	vm = &fpriv->vm;
+ 
+-	drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
++	drm_exec_init(&exec, 0, 0);
+ 	drm_exec_until_all_locked(&exec) {
+ 		r = amdgpu_vm_lock_pd(vm, &exec, 0);
+ 		if (likely(!r))
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+index 3d9e9fdc10b478..7cdcb7a6f98fa4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+@@ -30,6 +30,10 @@
+ 
+ #define AMDGPU_UCODE_NAME_MAX		(128)
+ 
++static const struct kicker_device kicker_device_list[] = {
++	{0x744B, 0x00},
++};
++
+ static void amdgpu_ucode_print_common_hdr(const struct common_firmware_header *hdr)
+ {
+ 	DRM_DEBUG("size_bytes: %u\n", le32_to_cpu(hdr->size_bytes));
+@@ -1384,6 +1388,19 @@ static const char *amdgpu_ucode_legacy_naming(struct amdgpu_device *adev, int bl
+ 	return NULL;
+ }
+ 
++bool amdgpu_is_kicker_fw(struct amdgpu_device *adev)
++{
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(kicker_device_list); i++) {
++		if (adev->pdev->device == kicker_device_list[i].device &&
++		    adev->pdev->revision == kicker_device_list[i].revision)
++			return true;
++	}
++
++	return false;
++}
++
+ void amdgpu_ucode_ip_version_decode(struct amdgpu_device *adev, int block_type, char *ucode_prefix, int len)
+ {
+ 	int maj, min, rev;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+index 4eedd92f000be9..bd0f6df14efa70 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+@@ -602,6 +602,11 @@ struct amdgpu_firmware {
+ 	uint64_t fw_buf_mc;
+ };
+ 
++struct kicker_device {
++	unsigned short device;
++	u8 revision;
++};
++
+ void amdgpu_ucode_print_mc_hdr(const struct common_firmware_header *hdr);
+ void amdgpu_ucode_print_smc_hdr(const struct common_firmware_header *hdr);
+ void amdgpu_ucode_print_imu_hdr(const struct common_firmware_header *hdr);
+@@ -629,5 +634,6 @@ amdgpu_ucode_get_load_type(struct amdgpu_device *adev, int load_type);
+ const char *amdgpu_ucode_name(enum AMDGPU_UCODE_ID ucode_id);
+ 
+ void amdgpu_ucode_ip_version_decode(struct amdgpu_device *adev, int block_type, char *ucode_prefix, int len);
++bool amdgpu_is_kicker_fw(struct amdgpu_device *adev);
+ 
+ #endif
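
amdgpu_is_kicker_fw() added above is a plain scan of a PCI device/revision table. A self-contained sketch of the same lookup, with the adev->pdev fields replaced by plain parameters for illustration:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct kicker_device {
	unsigned short device;
	unsigned char revision;
};

/* Same table shape as the hunk above: one PCI device/revision pair so far. */
static const struct kicker_device kicker_device_list[] = {
	{0x744B, 0x00},
};

static bool is_kicker_fw(unsigned short device, unsigned char revision)
{
	size_t i;

	for (i = 0; i < sizeof(kicker_device_list) / sizeof(kicker_device_list[0]); i++) {
		if (device == kicker_device_list[i].device &&
		    revision == kicker_device_list[i].revision)
			return true;
	}
	return false;
}

int main(void)
{
	printf("%s\n", is_kicker_fw(0x744B, 0x00) ? "kicker" : "regular");
	return 0;
}
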
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+index 2d7f82e98df92c..abdc52b0895a60 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+@@ -463,7 +463,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
+ 	int r;
+ 
+ 	lpfn = (u64)place->lpfn << PAGE_SHIFT;
+-	if (!lpfn)
++	if (!lpfn || lpfn > man->size)
+ 		lpfn = man->size;
+ 
+ 	fpfn = (u64)place->fpfn << PAGE_SHIFT;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+index 914c18f48e8e1a..b03c52a18610f2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+@@ -83,6 +83,7 @@ MODULE_FIRMWARE("amdgpu/gc_11_0_0_pfp.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_0_me.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_0_mec.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc.bin");
++MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc_kicker.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc_1.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_0_toc.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_1_pfp.bin");
+@@ -744,6 +745,10 @@ static int gfx_v11_0_init_microcode(struct amdgpu_device *adev)
+ 			err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw,
+ 						   AMDGPU_UCODE_REQUIRED,
+ 						   "amdgpu/gc_11_0_0_rlc_1.bin");
++		else if (amdgpu_is_kicker_fw(adev))
++			err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw,
++						   AMDGPU_UCODE_REQUIRED,
++						   "amdgpu/%s_rlc_kicker.bin", ucode_prefix);
+ 		else
+ 			err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw,
+ 						   AMDGPU_UCODE_REQUIRED,
+diff --git a/drivers/gpu/drm/amd/amdgpu/imu_v11_0.c b/drivers/gpu/drm/amd/amdgpu/imu_v11_0.c
+index cfa91d709d4996..cc626036ed9c3d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/imu_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/imu_v11_0.c
+@@ -32,6 +32,7 @@
+ #include "gc/gc_11_0_0_sh_mask.h"
+ 
+ MODULE_FIRMWARE("amdgpu/gc_11_0_0_imu.bin");
++MODULE_FIRMWARE("amdgpu/gc_11_0_0_imu_kicker.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_1_imu.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_2_imu.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_3_imu.bin");
+@@ -51,8 +52,12 @@ static int imu_v11_0_init_microcode(struct amdgpu_device *adev)
+ 	DRM_DEBUG("\n");
+ 
+ 	amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix));
+-	err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED,
+-				   "amdgpu/%s_imu.bin", ucode_prefix);
++	if (amdgpu_is_kicker_fw(adev))
++		err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED,
++					   "amdgpu/%s_imu_kicker.bin", ucode_prefix);
++	else
++		err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED,
++					   "amdgpu/%s_imu.bin", ucode_prefix);
+ 	if (err)
+ 		goto out;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+index 821c9baf5baa6a..cc6476f4dbe6a2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+@@ -1649,10 +1649,12 @@ static int mes_v11_0_hw_init(struct amdgpu_ip_block *ip_block)
+ 	if (r)
+ 		goto failure;
+ 
+-	r = mes_v11_0_set_hw_resources_1(&adev->mes);
+-	if (r) {
+-		DRM_ERROR("failed mes_v11_0_set_hw_resources_1, r=%d\n", r);
+-		goto failure;
++	if ((adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x50) {
++		r = mes_v11_0_set_hw_resources_1(&adev->mes);
++		if (r) {
++			DRM_ERROR("failed mes_v11_0_set_hw_resources_1, r=%d\n", r);
++			goto failure;
++		}
+ 	}
+ 
+ 	r = mes_v11_0_query_sched_status(&adev->mes);
+diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
+index 7984ebda5b8bf5..196848c347f6d7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
+@@ -1761,7 +1761,8 @@ static int mes_v12_0_hw_init(struct amdgpu_ip_block *ip_block)
+ 	if (r)
+ 		goto failure;
+ 
+-	mes_v12_0_set_hw_resources_1(&adev->mes, AMDGPU_MES_SCHED_PIPE);
++	if ((adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x4b)
++		mes_v12_0_set_hw_resources_1(&adev->mes, AMDGPU_MES_SCHED_PIPE);
+ 
+ 	mes_v12_0_init_aggregated_doorbell(&adev->mes);
+ 
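
Both MES hunks above gate the set_hw_resources_1 call on the scheduler firmware version: >= 0x50 for MES 11 and >= 0x4b for MES 12, after masking sched_version with AMDGPU_MES_VERSION_MASK. A small sketch of that check; the 0xfff mask value is an assumption for illustration, since the driver defines the real constant itself:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed mask for illustration; the driver defines AMDGPU_MES_VERSION_MASK. */
#define MES_VERSION_MASK 0xfffu

/* MES 11 only sends set_hw_resources_1 to scheduler fw versions >= 0x50. */
static bool mes11_supports_hw_resources_1(uint32_t sched_version)
{
	return (sched_version & MES_VERSION_MASK) >= 0x50;
}

int main(void)
{
	printf("%d\n", mes11_supports_hw_resources_1(0x12340073));	/* 1: 0x073 >= 0x50 */
	printf("%d\n", mes11_supports_hw_resources_1(0x1234004f));	/* 0: 0x04f <  0x50 */
	return 0;
}
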
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
+index f5f616ab20e704..da604000a3697c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
+@@ -42,7 +42,9 @@ MODULE_FIRMWARE("amdgpu/psp_13_0_5_ta.bin");
+ MODULE_FIRMWARE("amdgpu/psp_13_0_8_toc.bin");
+ MODULE_FIRMWARE("amdgpu/psp_13_0_8_ta.bin");
+ MODULE_FIRMWARE("amdgpu/psp_13_0_0_sos.bin");
++MODULE_FIRMWARE("amdgpu/psp_13_0_0_sos_kicker.bin");
+ MODULE_FIRMWARE("amdgpu/psp_13_0_0_ta.bin");
++MODULE_FIRMWARE("amdgpu/psp_13_0_0_ta_kicker.bin");
+ MODULE_FIRMWARE("amdgpu/psp_13_0_7_sos.bin");
+ MODULE_FIRMWARE("amdgpu/psp_13_0_7_ta.bin");
+ MODULE_FIRMWARE("amdgpu/psp_13_0_10_sos.bin");
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+index 688a720bbbbd8e..b26a8301a1bdd4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+@@ -489,7 +489,7 @@ static void sdma_v4_4_2_inst_gfx_stop(struct amdgpu_device *adev,
+ {
+ 	struct amdgpu_ring *sdma[AMDGPU_MAX_SDMA_INSTANCES];
+ 	u32 doorbell_offset, doorbell;
+-	u32 rb_cntl, ib_cntl;
++	u32 rb_cntl, ib_cntl, sdma_cntl;
+ 	int i;
+ 
+ 	for_each_inst(i, inst_mask) {
+@@ -501,6 +501,9 @@ static void sdma_v4_4_2_inst_gfx_stop(struct amdgpu_device *adev,
+ 		ib_cntl = RREG32_SDMA(i, regSDMA_GFX_IB_CNTL);
+ 		ib_cntl = REG_SET_FIELD(ib_cntl, SDMA_GFX_IB_CNTL, IB_ENABLE, 0);
+ 		WREG32_SDMA(i, regSDMA_GFX_IB_CNTL, ib_cntl);
++		sdma_cntl = RREG32_SDMA(i, regSDMA_CNTL);
++		sdma_cntl = REG_SET_FIELD(sdma_cntl, SDMA_CNTL, UTC_L1_ENABLE, 0);
++		WREG32_SDMA(i, regSDMA_CNTL, sdma_cntl);
+ 
+ 		if (sdma[i]->use_doorbell) {
+ 			doorbell = RREG32_SDMA(i, regSDMA_GFX_DOORBELL);
+@@ -994,6 +997,7 @@ static int sdma_v4_4_2_inst_start(struct amdgpu_device *adev,
+ 		/* set utc l1 enable flag always to 1 */
+ 		temp = RREG32_SDMA(i, regSDMA_CNTL);
+ 		temp = REG_SET_FIELD(temp, SDMA_CNTL, UTC_L1_ENABLE, 1);
++		WREG32_SDMA(i, regSDMA_CNTL, temp);
+ 
+ 		if (amdgpu_ip_version(adev, SDMA0_HWIP, 0) < IP_VERSION(4, 4, 5)) {
+ 			/* enable context empty interrupt during initialization */
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+index 3eec1b8feaeea4..58b527a6b795fc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+@@ -1158,6 +1158,11 @@ static int vcn_v2_5_start_dpg_mode(struct amdgpu_vcn_inst *vinst, bool indirect)
+ 	WREG32_P(SOC15_REG_OFFSET(VCN, inst_idx, mmUVD_POWER_STATUS),
+ 		0, ~UVD_POWER_STATUS__STALL_DPG_POWER_UP_MASK);
+ 
++	/* Keeping one read-back to ensure all register writes are done,
++	 * otherwise it may introduce race conditions.
++	 */
++	RREG32_SOC15(VCN, inst_idx, mmUVD_STATUS);
++
+ 	return 0;
+ }
+ 
+@@ -1343,6 +1348,11 @@ static int vcn_v2_5_start(struct amdgpu_vcn_inst *vinst)
+ 	WREG32_SOC15(VCN, i, mmUVD_RB_SIZE2, ring->ring_size / 4);
+ 	fw_shared->multi_queue.encode_lowlatency_queue_mode &= ~FW_QUEUE_RING_RESET;
+ 
++	/* Keeping one read-back to ensure all register writes are done,
++	 * otherwise it may introduce race conditions.
++	 */
++	RREG32_SOC15(VCN, i, mmUVD_STATUS);
++
+ 	return 0;
+ }
+ 
+@@ -1569,6 +1579,11 @@ static int vcn_v2_5_stop_dpg_mode(struct amdgpu_vcn_inst *vinst)
+ 	WREG32_P(SOC15_REG_OFFSET(VCN, inst_idx, mmUVD_POWER_STATUS), 0,
+ 			~UVD_POWER_STATUS__UVD_PG_MODE_MASK);
+ 
++	/* Keeping one read-back to ensure all register writes are done,
++	 * otherwise it may introduce race conditions.
++	 */
++	RREG32_SOC15(VCN, inst_idx, mmUVD_STATUS);
++
+ 	return 0;
+ }
+ 
+@@ -1635,6 +1650,10 @@ static int vcn_v2_5_stop(struct amdgpu_vcn_inst *vinst)
+ 		 UVD_POWER_STATUS__UVD_POWER_STATUS_MASK,
+ 		 ~UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);
+ 
++	/* Keeping one read-back to ensure all register writes are done,
++	 * otherwise it may introduce race conditions.
++	 */
++	RREG32_SOC15(VCN, i, mmUVD_STATUS);
+ done:
+ 	if (adev->pm.dpm_enabled)
+ 		amdgpu_dpm_enable_vcn(adev, false, i);
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+index 0b19f0ab4480da..9fb0d53805892d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+@@ -1173,6 +1173,11 @@ static int vcn_v3_0_start_dpg_mode(struct amdgpu_vcn_inst *vinst, bool indirect)
+ 	WREG32_P(SOC15_REG_OFFSET(VCN, inst_idx, mmUVD_POWER_STATUS),
+ 		0, ~UVD_POWER_STATUS__STALL_DPG_POWER_UP_MASK);
+ 
++	/* Keeping one read-back to ensure all register writes are done,
++	 * otherwise it may introduce race conditions.
++	 */
++	RREG32_SOC15(VCN, inst_idx, mmUVD_STATUS);
++
+ 	return 0;
+ }
+ 
+@@ -1360,6 +1365,11 @@ static int vcn_v3_0_start(struct amdgpu_vcn_inst *vinst)
+ 		fw_shared->multi_queue.encode_lowlatency_queue_mode &= cpu_to_le32(~FW_QUEUE_RING_RESET);
+ 	}
+ 
++	/* Keeping one read-back to ensure all register writes are done,
++	 * otherwise it may introduce race conditions.
++	 */
++	RREG32_SOC15(VCN, i, mmUVD_STATUS);
++
+ 	return 0;
+ }
+ 
+@@ -1602,6 +1612,11 @@ static int vcn_v3_0_stop_dpg_mode(struct amdgpu_vcn_inst *vinst)
+ 	WREG32_P(SOC15_REG_OFFSET(VCN, inst_idx, mmUVD_POWER_STATUS), 0,
+ 		~UVD_POWER_STATUS__UVD_PG_MODE_MASK);
+ 
++	/* Keeping one read-back to ensure all register writes are done,
++	 * otherwise it may introduce race conditions.
++	 */
++	RREG32_SOC15(VCN, inst_idx, mmUVD_STATUS);
++
+ 	return 0;
+ }
+ 
+@@ -1674,6 +1689,11 @@ static int vcn_v3_0_stop(struct amdgpu_vcn_inst *vinst)
+ 	/* enable VCN power gating */
+ 	vcn_v3_0_enable_static_power_gating(vinst);
+ 
++	/* Keeping one read-back to ensure all register writes are done,
++	 * otherwise it may introduce race conditions.
++	 */
++	RREG32_SOC15(VCN, i, mmUVD_STATUS);
++
+ done:
+ 	if (adev->pm.dpm_enabled)
+ 		amdgpu_dpm_enable_vcn(adev, false, i);
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+index 1f777c125b00de..4a88a4d37aeebc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+@@ -1122,6 +1122,11 @@ static int vcn_v4_0_start_dpg_mode(struct amdgpu_vcn_inst *vinst, bool indirect)
+ 			ring->doorbell_index << VCN_RB1_DB_CTRL__OFFSET__SHIFT |
+ 			VCN_RB1_DB_CTRL__EN_MASK);
+ 
++	/* Keeping one read-back to ensure all register writes are done,
++	 * otherwise it may introduce race conditions.
++	 */
++	RREG32_SOC15(VCN, inst_idx, regUVD_STATUS);
++
+ 	return 0;
+ }
+ 
+@@ -1303,6 +1308,11 @@ static int vcn_v4_0_start(struct amdgpu_vcn_inst *vinst)
+ 	WREG32_SOC15(VCN, i, regVCN_RB_ENABLE, tmp);
+ 	fw_shared->sq.queue_mode &= ~(FW_QUEUE_RING_RESET | FW_QUEUE_DPG_HOLD_OFF);
+ 
++	/* Keeping one read-back to ensure all register writes are done,
++	 * otherwise it may introduce race conditions.
++	 */
++	RREG32_SOC15(VCN, i, regUVD_STATUS);
++
+ 	return 0;
+ }
+ 
+@@ -1583,6 +1593,11 @@ static void vcn_v4_0_stop_dpg_mode(struct amdgpu_vcn_inst *vinst)
+ 	/* disable dynamic power gating mode */
+ 	WREG32_P(SOC15_REG_OFFSET(VCN, inst_idx, regUVD_POWER_STATUS), 0,
+ 		~UVD_POWER_STATUS__UVD_PG_MODE_MASK);
++
++	/* Keeping one read-back to ensure all register writes are done,
++	 * otherwise it may introduce race conditions.
++	 */
++	RREG32_SOC15(VCN, inst_idx, regUVD_STATUS);
+ }
+ 
+ /**
+@@ -1666,6 +1681,11 @@ static int vcn_v4_0_stop(struct amdgpu_vcn_inst *vinst)
+ 	/* enable VCN power gating */
+ 	vcn_v4_0_enable_static_power_gating(vinst);
+ 
++	/* Keeping one read-back to ensure all register writes are done,
++	 * otherwise it may introduce race conditions.
++	 */
++	RREG32_SOC15(VCN, i, regUVD_STATUS);
++
+ done:
+ 	if (adev->pm.dpm_enabled)
+ 		amdgpu_dpm_enable_vcn(adev, false, i);
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c
+index e0e84ef7f56866..4f336ab468638d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c
+@@ -629,6 +629,9 @@ static int vcn_v5_0_1_start_dpg_mode(struct amdgpu_vcn_inst *vinst,
+ 	if (indirect)
+ 		amdgpu_vcn_psp_update_sram(adev, inst_idx, AMDGPU_UCODE_ID_VCN0_RAM);
+ 
++	/* resetting ring, fw should not check RB ring */
++	fw_shared->sq.queue_mode |= FW_QUEUE_RING_RESET;
++
+ 	/* Pause dpg */
+ 	vcn_v5_0_1_pause_dpg_mode(vinst, &state);
+ 
+@@ -641,7 +644,7 @@ static int vcn_v5_0_1_start_dpg_mode(struct amdgpu_vcn_inst *vinst,
+ 	tmp = RREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE);
+ 	tmp &= ~(VCN_RB_ENABLE__RB1_EN_MASK);
+ 	WREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE, tmp);
+-	fw_shared->sq.queue_mode |= FW_QUEUE_RING_RESET;
++
+ 	WREG32_SOC15(VCN, vcn_inst, regUVD_RB_RPTR, 0);
+ 	WREG32_SOC15(VCN, vcn_inst, regUVD_RB_WPTR, 0);
+ 
+@@ -652,6 +655,7 @@ static int vcn_v5_0_1_start_dpg_mode(struct amdgpu_vcn_inst *vinst,
+ 	tmp = RREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE);
+ 	tmp |= VCN_RB_ENABLE__RB1_EN_MASK;
+ 	WREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE, tmp);
++	/* resetting done, fw can check RB ring */
+ 	fw_shared->sq.queue_mode &= ~(FW_QUEUE_RING_RESET | FW_QUEUE_DPG_HOLD_OFF);
+ 
+ 	WREG32_SOC15(VCN, vcn_inst, regVCN_RB1_DB_CTRL,
+@@ -809,6 +813,11 @@ static int vcn_v5_0_1_start(struct amdgpu_vcn_inst *vinst)
+ 	WREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE, tmp);
+ 	fw_shared->sq.queue_mode &= ~(FW_QUEUE_RING_RESET | FW_QUEUE_DPG_HOLD_OFF);
+ 
++	/* Keeping one read-back to ensure all register writes are done,
++	 * otherwise it may introduce race conditions.
++	 */
++	RREG32_SOC15(VCN, vcn_inst, regUVD_STATUS);
++
+ 	return 0;
+ }
+ 
+@@ -843,6 +852,11 @@ static void vcn_v5_0_1_stop_dpg_mode(struct amdgpu_vcn_inst *vinst)
+ 	/* disable dynamic power gating mode */
+ 	WREG32_P(SOC15_REG_OFFSET(VCN, vcn_inst, regUVD_POWER_STATUS), 0,
+ 		~UVD_POWER_STATUS__UVD_PG_MODE_MASK);
++
++	/* Keeping one read-back to ensure all register writes are done,
++	 * otherwise it may introduce race conditions.
++	 */
++	RREG32_SOC15(VCN, vcn_inst, regUVD_STATUS);
+ }
+ 
+ /**
+@@ -918,6 +932,11 @@ static int vcn_v5_0_1_stop(struct amdgpu_vcn_inst *vinst)
+ 	/* clear status */
+ 	WREG32_SOC15(VCN, vcn_inst, regUVD_STATUS, 0);
+ 
++	/* Keeping one read-back to ensure all register writes are done,
++	 * otherwise it may introduce race conditions.
++	 */
++	RREG32_SOC15(VCN, vcn_inst, regUVD_STATUS);
++
+ 	return 0;
+ }
+ 
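
The read-backs added across the VCN start/stop paths above flush posted MMIO writes: a read on the same path cannot complete until the earlier writes have landed at the device. A minimal sketch of the idiom, with an in-memory array standing in for a real MMIO mapping and an illustrative register index:

#include <stdint.h>

/* An in-memory array stands in for a real MMIO mapping; volatile keeps the
 * accesses ordered the way a device mapping would. */
static volatile uint32_t fake_mmio[16];

static void reg_write(unsigned int reg, uint32_t val)
{
	fake_mmio[reg] = val;
}

static uint32_t reg_read(unsigned int reg)
{
	return fake_mmio[reg];
}

int main(void)
{
	enum { UVD_STATUS = 4 };	/* illustrative register index */

	reg_write(UVD_STATUS, 0);
	/* Read-back: the value is discarded, the ordering is the point. */
	(void)reg_read(UVD_STATUS);
	return 0;
}
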
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+index fecdb679407503..3a926eb82379b6 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_events.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+@@ -1331,6 +1331,7 @@ void kfd_signal_poison_consumed_event(struct kfd_node *dev, u32 pasid)
+ 	user_gpu_id = kfd_process_get_user_gpu_id(p, dev->id);
+ 	if (unlikely(user_gpu_id == -EINVAL)) {
+ 		WARN_ONCE(1, "Could not get user_gpu_id from dev->id:%x\n", dev->id);
++		kfd_unref_process(p);
+ 		return;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c
+index 2893fd5e5d0038..6e1ab51ec441ec 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c
+@@ -237,7 +237,7 @@ static int pm_map_queues_v9(struct packet_manager *pm, uint32_t *buffer,
+ 
+ 	packet->bitfields2.engine_sel =
+ 		engine_sel__mes_map_queues__compute_vi;
+-	packet->bitfields2.gws_control_queue = q->gws ? 1 : 0;
++	packet->bitfields2.gws_control_queue = q->properties.is_gws ? 1 : 0;
+ 	packet->bitfields2.extended_engine_sel =
+ 		extended_engine_sel__mes_map_queues__legacy_engine_sel;
+ 	packet->bitfields2.queue_type =
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 96118a0e1ffeb2..87c2bc5f64a6ce 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -4655,45 +4655,72 @@ static int get_brightness_range(const struct amdgpu_dm_backlight_caps *caps,
+ 	return 1;
+ }
+ 
+-static u32 convert_brightness_from_user(const struct amdgpu_dm_backlight_caps *caps,
+-					uint32_t brightness)
++/* Rescale from [0..max-min] to [0..MAX_BACKLIGHT_LEVEL] */
++static inline u32 scale_input_to_fw(int min, int max, u64 input)
+ {
+-	unsigned int min, max;
+-	u8 prev_signal = 0, prev_lum = 0;
++	return DIV_ROUND_CLOSEST_ULL(input * MAX_BACKLIGHT_LEVEL, max - min);
++}
+ 
+-	if (!get_brightness_range(caps, &min, &max))
+-		return brightness;
++/* Rescale from [0..MAX_BACKLIGHT_LEVEL] to [min..max] */
++static inline u32 scale_fw_to_input(int min, int max, u64 input)
++{
++	return min + DIV_ROUND_CLOSEST_ULL(input * (max - min), MAX_BACKLIGHT_LEVEL);
++}
++
++static void convert_custom_brightness(const struct amdgpu_dm_backlight_caps *caps,
++				      unsigned int min, unsigned int max,
++				      uint32_t *user_brightness)
++{
++	u32 brightness = scale_input_to_fw(min, max, *user_brightness);
++	u8 prev_signal = 0, prev_lum = 0;
++	int i = 0;
+ 
+-	for (int i = 0; i < caps->data_points; i++) {
+-		u8 signal, lum;
++	if (amdgpu_dc_debug_mask & DC_DISABLE_CUSTOM_BRIGHTNESS_CURVE)
++		return;
+ 
+-		if (amdgpu_dc_debug_mask & DC_DISABLE_CUSTOM_BRIGHTNESS_CURVE)
+-			break;
++	if (!caps->data_points)
++		return;
+ 
+-		signal = caps->luminance_data[i].input_signal;
+-		lum = caps->luminance_data[i].luminance;
++	/* choose the start index to run fewer interpolation steps */
++	if (caps->luminance_data[caps->data_points / 2].input_signal > brightness)
++		i = caps->data_points / 2;
++	do {
++		u8 signal = caps->luminance_data[i].input_signal;
++		u8 lum = caps->luminance_data[i].luminance;
+ 
+ 		/*
+ 		 * brightness == signal: luminance is percent numerator
+ 		 * brightness < signal: interpolate between previous and current luminance numerator
+ 		 * brightness > signal: find next data point
+ 		 */
+-		if (brightness < signal)
+-			lum = prev_lum + DIV_ROUND_CLOSEST((lum - prev_lum) *
+-							   (brightness - prev_signal),
+-							   signal - prev_signal);
+-		else if (brightness > signal) {
++		if (brightness > signal) {
+ 			prev_signal = signal;
+ 			prev_lum = lum;
++			i++;
+ 			continue;
+ 		}
+-		brightness = DIV_ROUND_CLOSEST(lum * brightness, 101);
+-		break;
+-	}
++		if (brightness < signal)
++			lum = prev_lum + DIV_ROUND_CLOSEST((lum - prev_lum) *
++							   (brightness - prev_signal),
++							   signal - prev_signal);
++		*user_brightness = scale_fw_to_input(min, max,
++						     DIV_ROUND_CLOSEST(lum * brightness, 101));
++		return;
++	} while (i < caps->data_points);
++}
+ 
+-	// Rescale 0..255 to min..max
+-	return min + DIV_ROUND_CLOSEST((max - min) * brightness,
+-				       AMDGPU_MAX_BL_LEVEL);
++static u32 convert_brightness_from_user(const struct amdgpu_dm_backlight_caps *caps,
++					uint32_t brightness)
++{
++	unsigned int min, max;
++
++	if (!get_brightness_range(caps, &min, &max))
++		return brightness;
++
++	convert_custom_brightness(caps, min, max, &brightness);
++
++	// Rescale 0..max to min..max
++	return min + DIV_ROUND_CLOSEST_ULL((u64)(max - min) * brightness, max);
+ }
+ 
+ static u32 convert_brightness_to_user(const struct amdgpu_dm_backlight_caps *caps,
+@@ -4706,8 +4733,8 @@ static u32 convert_brightness_to_user(const struct amdgpu_dm_backlight_caps *cap
+ 
+ 	if (brightness < min)
+ 		return 0;
+-	// Rescale min..max to 0..255
+-	return DIV_ROUND_CLOSEST(AMDGPU_MAX_BL_LEVEL * (brightness - min),
++	// Rescale min..max to 0..max
++	return DIV_ROUND_CLOSEST_ULL((u64)max * (brightness - min),
+ 				 max - min);
+ }
+ 
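
The two scaling helpers introduced above map between the user brightness range [0..max-min] and the firmware range [0..MAX_BACKLIGHT_LEVEL]. A stand-alone sketch with assumed values — the 0xFF level cap, the 10..250 AUX range, and the local rounding helper are all illustrative:

#include <stdint.h>
#include <stdio.h>

/* Assumed level cap for illustration; the driver takes this from its headers. */
#define MAX_BACKLIGHT_LEVEL 0xFF

/* Rounded division, same shape as DIV_ROUND_CLOSEST_ULL for non-negative x. */
static uint64_t div_round_closest(uint64_t x, uint64_t d)
{
	return (x + d / 2) / d;
}

/* [0..max-min] -> [0..MAX_BACKLIGHT_LEVEL] */
static uint32_t scale_input_to_fw(int min, int max, uint64_t input)
{
	(void)min;	/* kept to mirror the driver's signature */
	return (uint32_t)div_round_closest(input * MAX_BACKLIGHT_LEVEL,
					   (uint64_t)(max - min));
}

/* [0..MAX_BACKLIGHT_LEVEL] -> [min..max] */
static uint32_t scale_fw_to_input(int min, int max, uint64_t input)
{
	return (uint32_t)(min + div_round_closest(input * (uint64_t)(max - min),
						  MAX_BACKLIGHT_LEVEL));
}

int main(void)
{
	int min = 10, max = 250;	/* illustrative AUX brightness range */

	printf("fw  = %u\n", (unsigned int)scale_input_to_fw(min, max, 120));	/* 128 */
	printf("aux = %u\n", (unsigned int)scale_fw_to_input(min, max, 128));	/* 130 */
	return 0;
}

The two helpers are not exact inverses: the forward map is zero-based while the reverse re-adds min, so the pair is offset by min rather than a strict round trip.
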
+@@ -4832,8 +4859,9 @@ amdgpu_dm_register_backlight_device(struct amdgpu_dm_connector *aconnector)
+ 	struct drm_device *drm = aconnector->base.dev;
+ 	struct amdgpu_display_manager *dm = &drm_to_adev(drm)->dm;
+ 	struct backlight_properties props = { 0 };
+-	struct amdgpu_dm_backlight_caps caps = { 0 };
++	struct amdgpu_dm_backlight_caps *caps;
+ 	char bl_name[16];
++	int min, max;
+ 
+ 	if (aconnector->bl_idx == -1)
+ 		return;
+@@ -4845,18 +4873,21 @@ amdgpu_dm_register_backlight_device(struct amdgpu_dm_connector *aconnector)
+ 		return;
+ 	}
+ 
+-	amdgpu_acpi_get_backlight_caps(&caps);
+-	if (caps.caps_valid) {
++	caps = &dm->backlight_caps[aconnector->bl_idx];
++	if (get_brightness_range(caps, &min, &max)) {
+ 		if (power_supply_is_system_supplied() > 0)
+-			props.brightness = caps.ac_level;
++			props.brightness = (max - min) * DIV_ROUND_CLOSEST(caps->ac_level, 100);
+ 		else
+-			props.brightness = caps.dc_level;
++			props.brightness = (max - min) * DIV_ROUND_CLOSEST(caps->dc_level, 100);
++		/* min is zero, so max needs to be adjusted */
++		props.max_brightness = max - min;
++		drm_dbg(drm, "Backlight caps: min: %d, max: %d, ac %d, dc %d\n", min, max,
++			caps->ac_level, caps->dc_level);
+ 	} else
+-		props.brightness = AMDGPU_MAX_BL_LEVEL;
++		props.brightness = props.max_brightness = MAX_BACKLIGHT_LEVEL;
+ 
+-	if (caps.data_points && !(amdgpu_dc_debug_mask & DC_DISABLE_CUSTOM_BRIGHTNESS_CURVE))
++	if (caps->data_points && !(amdgpu_dc_debug_mask & DC_DISABLE_CUSTOM_BRIGHTNESS_CURVE))
+ 		drm_info(drm, "Using custom brightness curve\n");
+-	props.max_brightness = AMDGPU_MAX_BL_LEVEL;
+ 	props.type = BACKLIGHT_RAW;
+ 
+ 	snprintf(bl_name, sizeof(bl_name), "amdgpu_bl%d",
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+index 1395a748d726c4..72dc9d5e5c001e 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+@@ -1016,6 +1016,10 @@ enum dc_edid_status dm_helpers_read_local_edid(
+ 			return EDID_NO_RESPONSE;
+ 
+ 		edid = drm_edid_raw(drm_edid); // FIXME: Get rid of drm_edid_raw()
++		if (!edid ||
++		    edid->extensions >= sizeof(sink->dc_edid.raw_edid) / EDID_LENGTH)
++			return EDID_BAD_INPUT;
++
+ 		sink->dc_edid.length = EDID_LENGTH * (edid->extensions + 1);
+ 		memmove(sink->dc_edid.raw_edid, (uint8_t *)edid, sink->dc_edid.length);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index ba4ce8a63158bb..40561c4deb3c9d 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -239,6 +239,7 @@ static bool create_links(
+ 	DC_LOG_DC("BIOS object table - end");
+ 
+ 	/* Create a link for each usb4 dpia port */
++	dc->lowest_dpia_link_index = MAX_LINKS;
+ 	for (i = 0; i < dc->res_pool->usb4_dpia_count; i++) {
+ 		struct link_init_data link_init_params = {0};
+ 		struct dc_link *link;
+@@ -251,6 +252,9 @@ static bool create_links(
+ 
+ 		link = dc->link_srv->create_link(&link_init_params);
+ 		if (link) {
++			if (dc->lowest_dpia_link_index > dc->link_count)
++				dc->lowest_dpia_link_index = dc->link_count;
++
+ 			dc->links[dc->link_count] = link;
+ 			link->dc = dc;
+ 			++dc->link_count;
+@@ -6247,6 +6251,35 @@ struct dc_power_profile dc_get_power_profile_for_dc_state(const struct dc_state
+ 		profile.power_level = dc->res_pool->funcs->get_power_profile(context);
+ 	return profile;
+ }
++/**
++ ***********************************************************************************************
++ * dc_get_host_router_index: Get index of host router from a dpia link
++ *
++ * This function returns the host router index of the target link when the target link is a DPIA link.
++ *
++ * @param [in] link: target link
++ * @param [out] host_router_index: host router index of the target link
++ *
++ * @return: true if the host router index is found and valid.
++ *
++ ***********************************************************************************************
++ */
++bool dc_get_host_router_index(const struct dc_link *link, unsigned int *host_router_index)
++{
++	struct dc *dc = link->ctx->dc;
++
++	if (link->ep_type != DISPLAY_ENDPOINT_USB4_DPIA)
++		return false;
++
++	if (link->link_index < dc->lowest_dpia_link_index)
++		return false;
++
++	*host_router_index = (link->link_index - dc->lowest_dpia_link_index) / dc->caps.num_of_dpias_per_host_router;
++	if (*host_router_index < dc->caps.num_of_host_routers)
++		return true;
++	else
++		return false;
++}
+ 
+ /*
+  **********************************************************************************
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 7c2ee052692604..0761fc95a15800 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -66,7 +66,8 @@ struct dmub_notification;
+ #define MAX_STREAMS 6
+ #define MIN_VIEWPORT_SIZE 12
+ #define MAX_NUM_EDP 2
+-#define MAX_HOST_ROUTERS_NUM 2
++#define MAX_HOST_ROUTERS_NUM 3
++#define MAX_DPIA_PER_HOST_ROUTER 2
+ 
+ /* Display Core Interfaces */
+ struct dc_versions {
+@@ -303,6 +304,8 @@ struct dc_caps {
+ 	/* Conservative limit for DCC cases which require ODM4:1 to support*/
+ 	uint32_t dcc_plane_width_limit;
+ 	struct dc_scl_caps scl_caps;
++	uint8_t num_of_host_routers;
++	uint8_t num_of_dpias_per_host_router;
+ };
+ 
+ struct dc_bug_wa {
+@@ -1431,6 +1434,7 @@ struct dc {
+ 
+ 	uint8_t link_count;
+ 	struct dc_link *links[MAX_LINKS];
++	uint8_t lowest_dpia_link_index;
+ 	struct link_service *link_srv;
+ 
+ 	struct dc_state *current_state;
+@@ -2586,6 +2590,8 @@ struct dc_power_profile dc_get_power_profile_for_dc_state(const struct dc_state
+ 
+ unsigned int dc_get_det_buffer_size_from_state(const struct dc_state *context);
+ 
++bool dc_get_host_router_index(const struct dc_link *link, unsigned int *host_router_index);
++
+ /* DSC Interfaces */
+ #include "dc_dsc.h"
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_dp_types.h b/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
+index 77c87ad5722078..bbd6701096ca94 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
+@@ -1157,8 +1157,8 @@ struct dc_lttpr_caps {
+ 	union dp_128b_132b_supported_lttpr_link_rates supported_128b_132b_rates;
+ 	union dp_alpm_lttpr_cap alpm;
+ 	uint8_t aux_rd_interval[MAX_REPEATER_CNT - 1];
+-	uint8_t lttpr_ieee_oui[3];
+-	uint8_t lttpr_device_id[6];
++	uint8_t lttpr_ieee_oui[3]; // Always read from closest LTTPR to host
++	uint8_t lttpr_device_id[6]; // Always read from closest LTTPR to host
+ };
+ 
+ struct dc_dongle_dfp_cap_ext {
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
+index 731fbd4bc600b4..f775df25584189 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
+@@ -785,6 +785,7 @@ static void populate_dml21_plane_config_from_plane_state(struct dml2_context *dm
+ 		plane->pixel_format = dml2_420_10;
+ 		break;
+ 	case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616:
++	case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616:
+ 	case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616F:
+ 	case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616F:
+ 		plane->pixel_format = dml2_444_64;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
+index 4c504cb0e1c539..49911085f62125 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
+@@ -4750,7 +4750,10 @@ static void calculate_tdlut_setting(
+ 	//the tdlut is fetched during the 2 row times of prefetch.
+ 	if (p->setup_for_tdlut) {
+ 		*p->tdlut_groups_per_2row_ub = (unsigned int)math_ceil2((double) *p->tdlut_bytes_per_frame / *p->tdlut_bytes_per_group, 1);
+-		*p->tdlut_opt_time = (*p->tdlut_bytes_per_frame - p->cursor_buffer_size * 1024) / tdlut_drain_rate;
++		if (*p->tdlut_bytes_per_frame > p->cursor_buffer_size * 1024)
++			*p->tdlut_opt_time = (*p->tdlut_bytes_per_frame - p->cursor_buffer_size * 1024) / tdlut_drain_rate;
++		else
++			*p->tdlut_opt_time = 0;
+ 		*p->tdlut_drain_time = p->cursor_buffer_size * 1024 / tdlut_drain_rate;
+ 		*p->tdlut_bytes_to_deliver = (unsigned int) (p->cursor_buffer_size * 1024.0);
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+index 5de775fd8fceef..208630754c8a34 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+@@ -953,6 +953,7 @@ static void populate_dml_surface_cfg_from_plane_state(enum dml_project_id dml2_p
+ 		out->SourcePixelFormat[location] = dml_420_10;
+ 		break;
+ 	case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616:
++	case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616:
+ 	case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616F:
+ 	case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616F:
+ 		out->SourcePixelFormat[location] = dml_444_64;
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+index 5656d10368add3..a998c498a47715 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+@@ -952,8 +952,8 @@ void dce110_edp_backlight_control(
+ 	struct dc_context *ctx = link->ctx;
+ 	struct bp_transmitter_control cntl = { 0 };
+ 	uint8_t pwrseq_instance = 0;
+-	unsigned int pre_T11_delay = OLED_PRE_T11_DELAY;
+-	unsigned int post_T7_delay = OLED_POST_T7_DELAY;
++	unsigned int pre_T11_delay = (link->dpcd_sink_ext_caps.bits.oled ? OLED_PRE_T11_DELAY : 0);
++	unsigned int post_T7_delay = (link->dpcd_sink_ext_caps.bits.oled ? OLED_POST_T7_DELAY : 0);
+ 
+ 	if (dal_graphics_object_id_get_connector_id(link->link_enc->connector)
+ 		!= CONNECTOR_ID_EDP) {
+@@ -1069,7 +1069,8 @@ void dce110_edp_backlight_control(
+ 	if (!enable) {
+ 		/* follow OEM panel config's requirement */
+ 		pre_T11_delay += link->panel_config.pps.extra_pre_t11_ms;
+-		msleep(pre_T11_delay);
++		if (pre_T11_delay)
++			msleep(pre_T11_delay);
+ 	}
+ }
+ 
+@@ -1221,7 +1222,7 @@ void dce110_blank_stream(struct pipe_ctx *pipe_ctx)
+ 	struct dce_hwseq *hws = link->dc->hwseq;
+ 
+ 	if (link->local_sink && link->local_sink->sink_signal == SIGNAL_TYPE_EDP) {
+-		if (!link->skip_implict_edp_power_control)
++		if (!link->skip_implict_edp_power_control && hws)
+ 			hws->funcs.edp_backlight_control(link, false);
+ 		link->dc->hwss.set_abm_immediate_disable(pipe_ctx);
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+index 63077c1fad859c..dc00b0b9f3300a 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+@@ -1047,6 +1047,15 @@ void dcn35_calc_blocks_to_gate(struct dc *dc, struct dc_state *context,
+ 			if (dc->caps.sequential_ono) {
+ 				update_state->pg_pipe_res_update[PG_HUBP][pipe_ctx->stream_res.dsc->inst] = false;
+ 				update_state->pg_pipe_res_update[PG_DPP][pipe_ctx->stream_res.dsc->inst] = false;
++
++				/* All HUBP/DPP instances must be powered if the DSC inst != HUBP inst */
++				if (!pipe_ctx->top_pipe && pipe_ctx->plane_res.hubp &&
++				    pipe_ctx->plane_res.hubp->inst != pipe_ctx->stream_res.dsc->inst) {
++					for (j = 0; j < dc->res_pool->pipe_count; ++j) {
++						update_state->pg_pipe_res_update[PG_HUBP][j] = false;
++						update_state->pg_pipe_res_update[PG_DPP][j] = false;
++					}
++				}
+ 			}
+ 		}
+ 
+@@ -1193,6 +1202,25 @@ void dcn35_calc_blocks_to_ungate(struct dc *dc, struct dc_state *context,
+ 		update_state->pg_pipe_res_update[PG_HDMISTREAM][0] = true;
+ 
+ 	if (dc->caps.sequential_ono) {
++		for (i = 0; i < dc->res_pool->pipe_count; i++) {
++			struct pipe_ctx *new_pipe = &context->res_ctx.pipe_ctx[i];
++
++			if (new_pipe->stream_res.dsc && !new_pipe->top_pipe &&
++			    update_state->pg_pipe_res_update[PG_DSC][new_pipe->stream_res.dsc->inst]) {
++				update_state->pg_pipe_res_update[PG_HUBP][new_pipe->stream_res.dsc->inst] = true;
++				update_state->pg_pipe_res_update[PG_DPP][new_pipe->stream_res.dsc->inst] = true;
++
++				/* All HUBP/DPP instances must be powered if the DSC inst != HUBP inst */
++				if (new_pipe->plane_res.hubp &&
++				    new_pipe->plane_res.hubp->inst != new_pipe->stream_res.dsc->inst) {
++					for (j = 0; j < dc->res_pool->pipe_count; ++j) {
++						update_state->pg_pipe_res_update[PG_HUBP][j] = true;
++						update_state->pg_pipe_res_update[PG_DPP][j] = true;
++					}
++				}
++			}
++		}
++
+ 		for (i = dc->res_pool->pipe_count - 1; i >= 0; i--) {
+ 			if (update_state->pg_pipe_res_update[PG_HUBP][i] &&
+ 			    update_state->pg_pipe_res_update[PG_DPP][i]) {
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+index 21ee0d96c9d48f..ed9d396e3d0ea2 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+@@ -158,6 +158,14 @@ uint8_t dp_parse_lttpr_repeater_count(uint8_t lttpr_repeater_count)
+ 	return 0; // invalid value
+ }
+ 
++uint32_t dp_get_closest_lttpr_offset(uint8_t lttpr_count)
++{
++	/* Calculate offset for the LTTPR closest to DPTX, which is highest in the chain.
++	 * Offset is 0 for single-LTTPR cases, as base LTTPR DPCD addresses target LTTPR 1.
++	 */
++	return DP_REPEATER_CONFIGURATION_AND_STATUS_SIZE * (lttpr_count - 1);
++}
++
+ uint32_t link_bw_kbps_from_raw_frl_link_rate_data(uint8_t bw)
+ {
+ 	switch (bw) {
+@@ -377,9 +385,15 @@ bool dp_is_128b_132b_signal(struct pipe_ctx *pipe_ctx)
+ bool dp_is_lttpr_present(struct dc_link *link)
+ {
+ 	/* Some sink devices report invalid LTTPR revision, so don't validate against that cap */
+-	return (dp_parse_lttpr_repeater_count(link->dpcd_caps.lttpr_caps.phy_repeater_cnt) != 0 &&
++	uint32_t lttpr_count = dp_parse_lttpr_repeater_count(link->dpcd_caps.lttpr_caps.phy_repeater_cnt);
++	bool is_lttpr_present = (lttpr_count > 0 &&
+ 			link->dpcd_caps.lttpr_caps.max_lane_count > 0 &&
+ 			link->dpcd_caps.lttpr_caps.max_lane_count <= 4);
++
++	if (lttpr_count > 0 && !is_lttpr_present)
++		DC_LOG_ERROR("LTTPR count is nonzero but invalid lane count reported. Assuming no LTTPR present.\n");
++
++	return is_lttpr_present;
+ }
+ 
+ /* in DP compliance test, DPR-120 may have
+@@ -1543,6 +1557,8 @@ enum dc_status dp_retrieve_lttpr_cap(struct dc_link *link)
+ 	uint8_t lttpr_dpcd_data[10] = {0};
+ 	enum dc_status status;
+ 	bool is_lttpr_present;
++	uint32_t lttpr_count;
++	uint32_t closest_lttpr_offset;
+ 
+ 	/* Logic to determine LTTPR support*/
+ 	bool vbios_lttpr_interop = link->dc->caps.vbios_lttpr_aware;
+@@ -1594,20 +1610,22 @@ enum dc_status dp_retrieve_lttpr_cap(struct dc_link *link)
+ 			lttpr_dpcd_data[DP_LTTPR_ALPM_CAPABILITIES -
+ 							DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV];
+ 
++	lttpr_count = dp_parse_lttpr_repeater_count(link->dpcd_caps.lttpr_caps.phy_repeater_cnt);
++
+ 	/* If this chip cap is set, at least one retimer must exist in the chain
+ 	 * Override count to 1 if we receive a known bad count (0 or an invalid value) */
+ 	if (((link->chip_caps & AMD_EXT_DISPLAY_PATH_CAPS__EXT_CHIP_MASK) == AMD_EXT_DISPLAY_PATH_CAPS__DP_FIXED_VS_EN) &&
+-			(dp_parse_lttpr_repeater_count(link->dpcd_caps.lttpr_caps.phy_repeater_cnt) == 0)) {
++			lttpr_count == 0) {
+ 		/* If you see this message consistently, either the host platform has FIXED_VS flag
+ 		 * incorrectly configured or the sink device is returning an invalid count.
+ 		 */
+ 		DC_LOG_ERROR("lttpr_caps phy_repeater_cnt is 0x%x, forcing it to 0x80.",
+ 			     link->dpcd_caps.lttpr_caps.phy_repeater_cnt);
+ 		link->dpcd_caps.lttpr_caps.phy_repeater_cnt = 0x80;
++		lttpr_count = 1;
+ 		DC_LOG_DC("lttpr_caps forced phy_repeater_cnt = %d\n", link->dpcd_caps.lttpr_caps.phy_repeater_cnt);
+ 	}
+ 
+-	/* Attempt to train in LTTPR transparent mode if repeater count exceeds 8. */
+ 	is_lttpr_present = dp_is_lttpr_present(link);
+ 
+ 	DC_LOG_DC("is_lttpr_present = %d\n", is_lttpr_present);
+@@ -1615,11 +1633,25 @@ enum dc_status dp_retrieve_lttpr_cap(struct dc_link *link)
+ 	if (is_lttpr_present) {
+ 		CONN_DATA_DETECT(link, lttpr_dpcd_data, sizeof(lttpr_dpcd_data), "LTTPR Caps: ");
+ 
+-		core_link_read_dpcd(link, DP_LTTPR_IEEE_OUI, link->dpcd_caps.lttpr_caps.lttpr_ieee_oui, sizeof(link->dpcd_caps.lttpr_caps.lttpr_ieee_oui));
+-		CONN_DATA_DETECT(link, link->dpcd_caps.lttpr_caps.lttpr_ieee_oui, sizeof(link->dpcd_caps.lttpr_caps.lttpr_ieee_oui), "LTTPR IEEE OUI: ");
++		// Identify closest LTTPR to determine if workarounds required for known embedded LTTPR
++		closest_lttpr_offset = dp_get_closest_lttpr_offset(lttpr_count);
++
++		core_link_read_dpcd(link, (DP_LTTPR_IEEE_OUI + closest_lttpr_offset),
++				link->dpcd_caps.lttpr_caps.lttpr_ieee_oui, sizeof(link->dpcd_caps.lttpr_caps.lttpr_ieee_oui));
++		core_link_read_dpcd(link, (DP_LTTPR_DEVICE_ID + closest_lttpr_offset),
++				link->dpcd_caps.lttpr_caps.lttpr_device_id, sizeof(link->dpcd_caps.lttpr_caps.lttpr_device_id));
+ 
+-		core_link_read_dpcd(link, DP_LTTPR_DEVICE_ID, link->dpcd_caps.lttpr_caps.lttpr_device_id, sizeof(link->dpcd_caps.lttpr_caps.lttpr_device_id));
+-		CONN_DATA_DETECT(link, link->dpcd_caps.lttpr_caps.lttpr_device_id, sizeof(link->dpcd_caps.lttpr_caps.lttpr_device_id), "LTTPR Device ID: ");
++		if (lttpr_count > 1) {
++			CONN_DATA_DETECT(link, link->dpcd_caps.lttpr_caps.lttpr_ieee_oui, sizeof(link->dpcd_caps.lttpr_caps.lttpr_ieee_oui),
++					"Closest LTTPR To Host's IEEE OUI: ");
++			CONN_DATA_DETECT(link, link->dpcd_caps.lttpr_caps.lttpr_device_id, sizeof(link->dpcd_caps.lttpr_caps.lttpr_device_id),
++					"Closest LTTPR To Host's LTTPR Device ID: ");
++		} else {
++			CONN_DATA_DETECT(link, link->dpcd_caps.lttpr_caps.lttpr_ieee_oui, sizeof(link->dpcd_caps.lttpr_caps.lttpr_ieee_oui),
++					"LTTPR IEEE OUI: ");
++			CONN_DATA_DETECT(link, link->dpcd_caps.lttpr_caps.lttpr_device_id, sizeof(link->dpcd_caps.lttpr_caps.lttpr_device_id),
++					"LTTPR Device ID: ");
++		}
+ 	}
+ 
+ 	return status;
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.h b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.h
+index 0ce0af3ddbebe7..940b147cc5d426 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.h
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.h
+@@ -48,6 +48,9 @@ enum dc_status dp_retrieve_lttpr_cap(struct dc_link *link);
+ /* Convert PHY repeater count read from DPCD uint8_t. */
+ uint8_t dp_parse_lttpr_repeater_count(uint8_t lttpr_repeater_count);
+ 
++/* Calculate embedded LTTPR address offset for vendor-specific behaviour */
++uint32_t dp_get_closest_lttpr_offset(uint8_t lttpr_count);
++
+ bool dp_is_sink_present(struct dc_link *link);
+ 
+ bool dp_is_lttpr_present(struct dc_link *link);
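
dp_get_closest_lttpr_offset() above addresses the highest-numbered LTTPR, the one closest to the host. A sketch of the offset math; the 0x50 per-repeater DPCD stride is stated as an assumption mirroring DP_REPEATER_CONFIGURATION_AND_STATUS_SIZE:

#include <stdint.h>
#include <stdio.h>

/* Assumed stride between per-LTTPR DPCD register blocks. */
#define REPEATER_BLOCK_SIZE 0x50

static uint32_t closest_lttpr_offset(uint8_t lttpr_count)
{
	/* Base LTTPR addresses target LTTPR 1, so one repeater means offset 0;
	 * the LTTPR closest to the host is the highest-numbered one. */
	return REPEATER_BLOCK_SIZE * (uint32_t)(lttpr_count - 1);
}

int main(void)
{
	unsigned int n;

	for (n = 1; n <= 3; n++)
		printf("%u LTTPR(s): closest offset 0x%02X\n",
		       n, (unsigned int)closest_lttpr_offset((uint8_t)n));
	return 0;	/* 0x00, 0x50, 0xA0 */
}
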
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
+index ef358afdfb65b1..2dc1a660e50450 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
+@@ -785,7 +785,6 @@ void override_training_settings(
+ 		lt_settings->lttpr_mode = LTTPR_MODE_NON_LTTPR;
+ 
+ 	dp_get_lttpr_mode_override(link, &lt_settings->lttpr_mode);
+-
+ }
+ 
+ enum dc_dp_training_pattern decide_cr_training_pattern(
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_8b_10b.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_8b_10b.c
+index 5a5d48fadbf27b..66d0fb1b9b9d2e 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_8b_10b.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_8b_10b.c
+@@ -142,6 +142,14 @@ void decide_8b_10b_training_settings(
+ 	lt_settings->lttpr_mode = dp_decide_8b_10b_lttpr_mode(link);
+ 	lt_settings->cr_pattern_time = get_cr_training_aux_rd_interval(link, link_setting, lt_settings->lttpr_mode);
+ 	dp_hw_to_dpcd_lane_settings(lt_settings, lt_settings->hw_lane_settings, lt_settings->dpcd_lane_settings);
++
++	/* Some embedded LTTPRs rely on receiving TPS2 before LT to interop reliably with sensitive VGA dongles
++	 * This allows these LTTPRs to minimize freq/phase and skew variation during lock and deskew sequences
++	 */
++	if ((link->chip_caps & AMD_EXT_DISPLAY_PATH_CAPS__EXT_CHIP_MASK) ==
++			AMD_EXT_DISPLAY_PATH_CAPS__DP_EARLY_8B10B_TPS2) {
++		lt_settings->lttpr_early_tps2 = true;
++	}
+ }
+ 
+ enum lttpr_mode dp_decide_8b_10b_lttpr_mode(struct dc_link *link)
+@@ -173,6 +181,42 @@ enum lttpr_mode dp_decide_8b_10b_lttpr_mode(struct dc_link *link)
+ 	return LTTPR_MODE_NON_LTTPR;
+ }
+ 
++static void set_link_settings_and_perform_early_tps2_retimer_pre_lt_sequence(struct dc_link *link,
++	const struct link_resource *link_res,
++	struct link_training_settings *lt_settings,
++	uint32_t lttpr_count)
++{
++	/* Vendor-specific LTTPR early TPS2 sequence:
++	 * 1. Output TPS2
++	 * 2. Wait 400us
++	 * 3. Set link settings as usual
++	 * 4. Write TPS1 to DP_TRAINING_PATTERN_SET_PHY_REPEATERx targeting LTTPR closest to host
++	 * 5. Wait 1ms
++	 * 6. Begin link training as usual
++	 */
++
++	uint32_t closest_lttpr_address_offset = dp_get_closest_lttpr_offset(lttpr_count);
++
++	union dpcd_training_pattern dpcd_pattern = {0};
++
++	dpcd_pattern.v1_4.TRAINING_PATTERN_SET = 1;
++	dpcd_pattern.v1_4.SCRAMBLING_DISABLE = 1;
++
++	DC_LOG_HW_LINK_TRAINING("%s\n GPU sends TPS2. Wait 400us.\n", __func__);
++
++	dp_set_hw_training_pattern(link, link_res, DP_TRAINING_PATTERN_SEQUENCE_2, DPRX);
++
++	dp_set_hw_lane_settings(link, link_res, lt_settings, DPRX);
++
++	udelay(400);
++
++	dpcd_set_link_settings(link, lt_settings);
++
++	core_link_write_dpcd(link, DP_TRAINING_PATTERN_SET_PHY_REPEATER1 + closest_lttpr_address_offset, &dpcd_pattern.raw, 1);
++
++	udelay(1000);
++}
++
+ enum link_training_result perform_8b_10b_clock_recovery_sequence(
+ 	struct dc_link *link,
+ 	const struct link_resource *link_res,
+@@ -383,7 +427,7 @@ enum link_training_result dp_perform_8b_10b_link_training(
+ {
+ 	enum link_training_result status = LINK_TRAINING_SUCCESS;
+ 
+-	uint8_t repeater_cnt;
++	uint8_t repeater_cnt = dp_parse_lttpr_repeater_count(link->dpcd_caps.lttpr_caps.phy_repeater_cnt);
+ 	uint8_t repeater_id;
+ 	uint8_t lane = 0;
+ 
+@@ -391,14 +435,16 @@ enum link_training_result dp_perform_8b_10b_link_training(
+ 		start_clock_recovery_pattern_early(link, link_res, lt_settings, DPRX);
+ 
+ 	/* 1. set link rate, lane count and spread. */
+-	dpcd_set_link_settings(link, lt_settings);
++	if (lt_settings->lttpr_early_tps2)
++		set_link_settings_and_perform_early_tps2_retimer_pre_lt_sequence(link, link_res, lt_settings, repeater_cnt);
++	else
++		dpcd_set_link_settings(link, lt_settings);
+ 
+ 	if (lt_settings->lttpr_mode == LTTPR_MODE_NON_TRANSPARENT) {
+ 
+ 		/* 2. perform link training (set link training done
+ 		 *  to false is done as well)
+ 		 */
+-		repeater_cnt = dp_parse_lttpr_repeater_count(link->dpcd_caps.lttpr_caps.phy_repeater_cnt);
+ 
+ 		for (repeater_id = repeater_cnt; (repeater_id > 0 && status == LINK_TRAINING_SUCCESS);
+ 				repeater_id--) {
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
+index dddddbfef85f8f..1abaa948386f1a 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
+@@ -1954,6 +1954,9 @@ static bool dcn31_resource_construct(
+ 	dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
+ 	dc->caps.color.mpc.ocsc = 1;
+ 
++	dc->caps.num_of_host_routers = 2;
++	dc->caps.num_of_dpias_per_host_router = 2;
++
+ 	/* Use pipe context based otg sync logic */
+ 	dc->config.use_pipe_ctx_sync_logic = true;
+ 	dc->config.disable_hbr_audio_dp2 = true;
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
+index 26becc4cb80408..f47cd281d6e7e5 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
+@@ -1885,6 +1885,9 @@ static bool dcn314_resource_construct(
+ 
+ 	dc->caps.max_disp_clock_khz_at_vmin = 650000;
+ 
++	dc->caps.num_of_host_routers = 2;
++	dc->caps.num_of_dpias_per_host_router = 2;
++
+ 	/* Use pipe context based otg sync logic */
+ 	dc->config.use_pipe_ctx_sync_logic = true;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
+index 8948d44a7a80e3..ace5bd9dfd3849 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
+@@ -1894,6 +1894,9 @@ static bool dcn35_resource_construct(
+ 	dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
+ 	dc->caps.color.mpc.ocsc = 1;
+ 
++	dc->caps.num_of_host_routers = 2;
++	dc->caps.num_of_dpias_per_host_router = 2;
++
+ 	/* max_disp_clock_khz_at_vmin is slightly lower than the STA value in order
+ 	 * to provide some margin.
+ 	 * It's expected for future ASICs to have equal or higher value, in order to
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
+index 98f5bc1b929ecf..6a94832e7d8081 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
+@@ -1866,6 +1866,9 @@ static bool dcn351_resource_construct(
+ 	dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
+ 	dc->caps.color.mpc.ocsc = 1;
+ 
++	dc->caps.num_of_host_routers = 2;
++	dc->caps.num_of_dpias_per_host_router = 2;
++
+ 	/* max_disp_clock_khz_at_vmin is slightly lower than the STA value in order
+ 	 * to provide some margin.
+ 	 * It's expected for future ASICs to have equal or higher value, in order to
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn36/dcn36_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn36/dcn36_resource.c
+index 7f19689e976a17..9f196d0ffdd96f 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn36/dcn36_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn36/dcn36_resource.c
+@@ -1867,6 +1867,9 @@ static bool dcn36_resource_construct(
+ 	dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
+ 	dc->caps.color.mpc.ocsc = 1;
+ 
++	dc->caps.num_of_host_routers = 2;
++	dc->caps.num_of_dpias_per_host_router = 2;
++
+ 	/* max_disp_clock_khz_at_vmin is slightly lower than the STA value in order
+ 	 * to provide some margin.
+ 	 * It's expected for future ASICs to have equal or higher value, in order to
+diff --git a/drivers/gpu/drm/amd/display/include/link_service_types.h b/drivers/gpu/drm/amd/display/include/link_service_types.h
+index 1867aac57cf2c4..da74ed66c8f9d8 100644
+--- a/drivers/gpu/drm/amd/display/include/link_service_types.h
++++ b/drivers/gpu/drm/amd/display/include/link_service_types.h
+@@ -89,6 +89,8 @@ struct link_training_settings {
+ 	bool enhanced_framing;
+ 	enum lttpr_mode lttpr_mode;
+ 
++	bool lttpr_early_tps2;
++
+ 	/* disallow different lanes to have different lane settings */
+ 	bool disallow_per_lane_settings;
+ 	/* dpcd lane settings will always use the same hw lane settings
+diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
+index 8c137d7c032e1f..e58e7b93810be7 100644
+--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
+@@ -368,6 +368,9 @@ enum mod_hdcp_status mod_hdcp_hdcp1_enable_encryption(struct mod_hdcp *hdcp)
+ 	struct mod_hdcp_display *display = get_first_active_display(hdcp);
+ 	enum mod_hdcp_status status = MOD_HDCP_STATUS_SUCCESS;
+ 
++	if (!display)
++		return MOD_HDCP_STATUS_DISPLAY_NOT_FOUND;
++
+ 	mutex_lock(&psp->hdcp_context.mutex);
+ 	hdcp_cmd = (struct ta_hdcp_shared_memory *)psp->hdcp_context.context.mem_context.shared_buf;
+ 	memset(hdcp_cmd, 0, sizeof(struct ta_hdcp_shared_memory));
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+index 3533d43ed1e73d..8f9d3a140e3672 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+@@ -1654,6 +1654,28 @@ int amdgpu_dpm_is_overdrive_supported(struct amdgpu_device *adev)
+ 	}
+ }
+ 
++int amdgpu_dpm_is_overdrive_enabled(struct amdgpu_device *adev)
++{
++	if (is_support_sw_smu(adev)) {
++		struct smu_context *smu = adev->powerplay.pp_handle;
++
++		return smu->od_enabled;
++	} else {
++		struct pp_hwmgr *hwmgr;
++
++		/*
++		 * dpm on some legacy asics doesn't carry the od_enabled member,
++		 * as its pp_handle is cast directly from adev.
++		 */
++		if (amdgpu_dpm_is_legacy_dpm(adev))
++			return false;
++
++		hwmgr = (struct pp_hwmgr *)adev->powerplay.pp_handle;
++
++		return hwmgr->od_enabled;
++	}
++}
++
+ int amdgpu_dpm_set_pp_table(struct amdgpu_device *adev,
+ 			    const char *buf,
+ 			    size_t size)
+diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
+index 4c0f7ad1481661..a2b8f3b5c0e9eb 100644
+--- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
++++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
+@@ -559,6 +559,7 @@ int amdgpu_dpm_get_smu_prv_buf_details(struct amdgpu_device *adev,
+ 				       void **addr,
+ 				       size_t *size);
+ int amdgpu_dpm_is_overdrive_supported(struct amdgpu_device *adev);
++int amdgpu_dpm_is_overdrive_enabled(struct amdgpu_device *adev);
+ int amdgpu_dpm_set_pp_table(struct amdgpu_device *adev,
+ 			    const char *buf,
+ 			    size_t size);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+index 075f381ad311ba..575709871d6b6b 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+@@ -58,6 +58,7 @@
+ 
+ MODULE_FIRMWARE("amdgpu/aldebaran_smc.bin");
+ MODULE_FIRMWARE("amdgpu/smu_13_0_0.bin");
++MODULE_FIRMWARE("amdgpu/smu_13_0_0_kicker.bin");
+ MODULE_FIRMWARE("amdgpu/smu_13_0_7.bin");
+ MODULE_FIRMWARE("amdgpu/smu_13_0_10.bin");
+ 
+@@ -92,7 +93,7 @@ const int pmfw_decoded_link_width[7] = {0, 1, 2, 4, 8, 12, 16};
+ int smu_v13_0_init_microcode(struct smu_context *smu)
+ {
+ 	struct amdgpu_device *adev = smu->adev;
+-	char ucode_prefix[15];
++	char ucode_prefix[30];
+ 	int err = 0;
+ 	const struct smc_firmware_header_v1_0 *hdr;
+ 	const struct common_firmware_header *header;
+@@ -103,8 +104,13 @@ int smu_v13_0_init_microcode(struct smu_context *smu)
+ 		return 0;
+ 
+ 	amdgpu_ucode_ip_version_decode(adev, MP1_HWIP, ucode_prefix, sizeof(ucode_prefix));
+-	err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED,
+-				   "amdgpu/%s.bin", ucode_prefix);
++
++	if (amdgpu_is_kicker_fw(adev))
++		err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED,
++					   "amdgpu/%s_kicker.bin", ucode_prefix);
++	else
++		err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED,
++					   "amdgpu/%s.bin", ucode_prefix);
+ 	if (err)
+ 		goto out;
+ 
+diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
+index c3b95067548584..da4451693a1f07 100644
+--- a/drivers/gpu/drm/ast/ast_mode.c
++++ b/drivers/gpu/drm/ast/ast_mode.c
+@@ -922,9 +922,9 @@ static void ast_mode_config_helper_atomic_commit_tail(struct drm_atomic_state *s
+ 
+ 	/*
+ 	 * Concurrent operations could possibly trigger a call to
+-	 * drm_connector_helper_funcs.get_modes by trying to read the
+-	 * display modes. Protect access to I/O registers by acquiring
+-	 * the I/O-register lock. Released in atomic_flush().
++	 * drm_connector_helper_funcs.get_modes by reading the display
++	 * modes. Protect access to registers by acquiring the modeset
++	 * lock.
+ 	 */
+ 	mutex_lock(&ast->modeset_lock);
+ 	drm_atomic_helper_commit_tail(state);
+diff --git a/drivers/gpu/drm/bridge/cadence/cdns-dsi-core.c b/drivers/gpu/drm/bridge/cadence/cdns-dsi-core.c
+index c7a0247e06adf3..6a77ca36cb9ddb 100644
+--- a/drivers/gpu/drm/bridge/cadence/cdns-dsi-core.c
++++ b/drivers/gpu/drm/bridge/cadence/cdns-dsi-core.c
+@@ -568,15 +568,18 @@ static int cdns_dsi_check_conf(struct cdns_dsi *dsi,
+ 	struct phy_configure_opts_mipi_dphy *phy_cfg = &output->phy_opts.mipi_dphy;
+ 	unsigned long dsi_hss_hsa_hse_hbp;
+ 	unsigned int nlanes = output->dev->lanes;
++	int mode_clock = (mode_valid_check ? mode->clock : mode->crtc_clock);
+ 	int ret;
+ 
+ 	ret = cdns_dsi_mode2cfg(dsi, mode, dsi_cfg, mode_valid_check);
+ 	if (ret)
+ 		return ret;
+ 
+-	phy_mipi_dphy_get_default_config(mode->crtc_clock * 1000,
+-					 mipi_dsi_pixel_format_to_bpp(output->dev->format),
+-					 nlanes, phy_cfg);
++	ret = phy_mipi_dphy_get_default_config(mode_clock * 1000,
++					       mipi_dsi_pixel_format_to_bpp(output->dev->format),
++					       nlanes, phy_cfg);
++	if (ret)
++		return ret;
+ 
+ 	ret = cdns_dsi_adjust_phy_config(dsi, dsi_cfg, phy_cfg, mode, mode_valid_check);
+ 	if (ret)
+@@ -680,6 +683,11 @@ static void cdns_dsi_bridge_post_disable(struct drm_bridge *bridge)
+ 	struct cdns_dsi_input *input = bridge_to_cdns_dsi_input(bridge);
+ 	struct cdns_dsi *dsi = input_to_dsi(input);
+ 
++	dsi->phy_initialized = false;
++	dsi->link_initialized = false;
++	phy_power_off(dsi->dphy);
++	phy_exit(dsi->dphy);
++
+ 	pm_runtime_put(dsi->base.dev);
+ }
+ 
+@@ -761,7 +769,7 @@ static void cdns_dsi_bridge_enable(struct drm_bridge *bridge)
+ 	struct phy_configure_opts_mipi_dphy *phy_cfg = &output->phy_opts.mipi_dphy;
+ 	unsigned long tx_byte_period;
+ 	struct cdns_dsi_cfg dsi_cfg;
+-	u32 tmp, reg_wakeup, div;
++	u32 tmp, reg_wakeup, div, status;
+ 	int nlanes;
+ 
+ 	if (WARN_ON(pm_runtime_get_sync(dsi->base.dev) < 0))
+@@ -778,6 +786,19 @@ static void cdns_dsi_bridge_enable(struct drm_bridge *bridge)
+ 	cdns_dsi_hs_init(dsi);
+ 	cdns_dsi_init_link(dsi);
+ 
++	/*
++	 * Now that the DSI Link and DSI Phy are initialized,
++	 * wait for the CLK and Data Lanes to be ready.
++	 */
++	tmp = CLK_LANE_RDY;
++	for (int i = 0; i < nlanes; i++)
++		tmp |= DATA_LANE_RDY(i);
++
++	if (readl_poll_timeout(dsi->regs + MCTL_MAIN_STS, status,
++			       (tmp == (status & tmp)), 100, 500000))
++		dev_err(dsi->base.dev,
++			"Timed Out: DSI-DPhy Clock and Data Lanes not ready.\n");
++
+ 	writel(HBP_LEN(dsi_cfg.hbp) | HSA_LEN(dsi_cfg.hsa),
+ 	       dsi->regs + VID_HSIZE1);
+ 	writel(HFP_LEN(dsi_cfg.hfp) | HACT_LEN(dsi_cfg.hact),
+@@ -952,7 +973,7 @@ static int cdns_dsi_attach(struct mipi_dsi_host *host,
+ 		bridge = drm_panel_bridge_add_typed(panel,
+ 						    DRM_MODE_CONNECTOR_DSI);
+ 	} else {
+-		bridge = of_drm_find_bridge(dev->dev.of_node);
++		bridge = of_drm_find_bridge(np);
+ 		if (!bridge)
+ 			bridge = ERR_PTR(-EINVAL);
+ 	}
+@@ -1152,7 +1173,6 @@ static int __maybe_unused cdns_dsi_suspend(struct device *dev)
+ 	clk_disable_unprepare(dsi->dsi_sys_clk);
+ 	clk_disable_unprepare(dsi->dsi_p_clk);
+ 	reset_control_assert(dsi->dsi_p_rst);
+-	dsi->link_initialized = false;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi86.c b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+index 01d456b955abbb..4ea13e5a3a54a6 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi86.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+@@ -330,12 +330,18 @@ static void ti_sn65dsi86_enable_comms(struct ti_sn65dsi86 *pdata)
+ 	 * 200 ms.  We'll assume that the panel driver will have the hardcoded
+ 	 * delay in its prepare and always disable HPD.
+ 	 *
+-	 * If HPD somehow makes sense on some future panel we'll have to
+-	 * change this to be conditional on someone specifying that HPD should
+-	 * be used.
++	 * For the DisplayPort bridge type, HPD is needed, so the bridge type
++	 * is used to conditionally disable HPD.
++	 * NOTE: The bridge type is set in ti_sn_bridge_probe(), but enable_comms()
++	 * can be called before that, so for DisplayPort HPD is only enabled once
++	 * the bridge type is set. The bridge type is used instead of the "no-hpd"
++	 * property because that property is not used consistently in devicetree
++	 * descriptions and is therefore unreliable.
+ 	 */
+-	regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG, HPD_DISABLE,
+-			   HPD_DISABLE);
++
++	if (pdata->bridge.type != DRM_MODE_CONNECTOR_DisplayPort)
++		regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG, HPD_DISABLE,
++				   HPD_DISABLE);
+ 
+ 	pdata->comms_enabled = true;
+ 
+@@ -423,36 +429,8 @@ static int status_show(struct seq_file *s, void *data)
+ 
+ 	return 0;
+ }
+-
+ DEFINE_SHOW_ATTRIBUTE(status);
+ 
+-static void ti_sn65dsi86_debugfs_remove(void *data)
+-{
+-	debugfs_remove_recursive(data);
+-}
+-
+-static void ti_sn65dsi86_debugfs_init(struct ti_sn65dsi86 *pdata)
+-{
+-	struct device *dev = pdata->dev;
+-	struct dentry *debugfs;
+-	int ret;
+-
+-	debugfs = debugfs_create_dir(dev_name(dev), NULL);
+-
+-	/*
+-	 * We might get an error back if debugfs wasn't enabled in the kernel
+-	 * so let's just silently return upon failure.
+-	 */
+-	if (IS_ERR_OR_NULL(debugfs))
+-		return;
+-
+-	ret = devm_add_action_or_reset(dev, ti_sn65dsi86_debugfs_remove, debugfs);
+-	if (ret)
+-		return;
+-
+-	debugfs_create_file("status", 0600, debugfs, pdata, &status_fops);
+-}
+-
+ /* -----------------------------------------------------------------------------
+  * Auxiliary Devices (*not* AUX)
+  */
+@@ -1200,9 +1178,14 @@ static enum drm_connector_status ti_sn_bridge_detect(struct drm_bridge *bridge)
+ 	struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge);
+ 	int val = 0;
+ 
+-	pm_runtime_get_sync(pdata->dev);
++	/*
++	 * The runtime PM reference is grabbed in ti_sn_bridge_hpd_enable(),
++	 * as the chip won't report HPD just after being powered on.
++	 * HPD_DEBOUNCED_STATE reflects the correct state only after the
++	 * debounce time (~100-400 ms).
++	 */
++
+ 	regmap_read(pdata->regmap, SN_HPD_DISABLE_REG, &val);
+-	pm_runtime_put_autosuspend(pdata->dev);
+ 
+ 	return val & HPD_DEBOUNCED_STATE ? connector_status_connected
+ 					 : connector_status_disconnected;
+@@ -1216,6 +1199,35 @@ static const struct drm_edid *ti_sn_bridge_edid_read(struct drm_bridge *bridge,
+ 	return drm_edid_read_ddc(connector, &pdata->aux.ddc);
+ }
+ 
++static void ti_sn65dsi86_debugfs_init(struct drm_bridge *bridge, struct dentry *root)
++{
++	struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge);
++	struct dentry *debugfs;
++
++	debugfs = debugfs_create_dir(dev_name(pdata->dev), root);
++	debugfs_create_file("status", 0600, debugfs, pdata, &status_fops);
++}
++
++static void ti_sn_bridge_hpd_enable(struct drm_bridge *bridge)
++{
++	struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge);
++
++	/*
++	 * The device needs to be powered on before reading the HPD state
++	 * for reliable HPD detection in ti_sn_bridge_detect() due to
++	 * the high debounce time.
++	 */
++
++	pm_runtime_get_sync(pdata->dev);
++}
++
++static void ti_sn_bridge_hpd_disable(struct drm_bridge *bridge)
++{
++	struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge);
++
++	pm_runtime_put_autosuspend(pdata->dev);
++}
++
+ static const struct drm_bridge_funcs ti_sn_bridge_funcs = {
+ 	.attach = ti_sn_bridge_attach,
+ 	.detach = ti_sn_bridge_detach,
+@@ -1229,6 +1241,9 @@ static const struct drm_bridge_funcs ti_sn_bridge_funcs = {
+ 	.atomic_reset = drm_atomic_helper_bridge_reset,
+ 	.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
+ 	.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
++	.debugfs_init = ti_sn65dsi86_debugfs_init,
++	.hpd_enable = ti_sn_bridge_hpd_enable,
++	.hpd_disable = ti_sn_bridge_hpd_disable,
+ };
+ 
+ static void ti_sn_bridge_parse_lanes(struct ti_sn65dsi86 *pdata,
+@@ -1317,8 +1332,26 @@ static int ti_sn_bridge_probe(struct auxiliary_device *adev,
+ 	pdata->bridge.type = pdata->next_bridge->type == DRM_MODE_CONNECTOR_DisplayPort
+ 			   ? DRM_MODE_CONNECTOR_DisplayPort : DRM_MODE_CONNECTOR_eDP;
+ 
+-	if (pdata->bridge.type == DRM_MODE_CONNECTOR_DisplayPort)
+-		pdata->bridge.ops = DRM_BRIDGE_OP_EDID | DRM_BRIDGE_OP_DETECT;
++	if (pdata->bridge.type == DRM_MODE_CONNECTOR_DisplayPort) {
++		pdata->bridge.ops = DRM_BRIDGE_OP_EDID | DRM_BRIDGE_OP_DETECT |
++				    DRM_BRIDGE_OP_HPD;
++		/*
++		 * If comms were already enabled they would have been enabled
++		 * with the wrong value of HPD_DISABLE. Update it now. Comms
++		 * could be enabled if anyone is holding a pm_runtime reference
++		 * (like if a GPIO is in use). Note that in most cases nobody
++		 * is doing AUX channel xfers before the bridge is added so
++		 * HPD doesn't _really_ matter then. The only exception is in
++		 * the eDP case, where the panel wants to read the EDID before
++		 * the bridge is added; HPD is always left disabled for eDP.
++		 */
++		mutex_lock(&pdata->comms_mutex);
++		if (pdata->comms_enabled)
++			regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG,
++					   HPD_DISABLE, 0);
++		mutex_unlock(&pdata->comms_mutex);
++	}
+ 
+ 	drm_bridge_add(&pdata->bridge);
+ 
+@@ -1937,8 +1970,6 @@ static int ti_sn65dsi86_probe(struct i2c_client *client)
+ 	if (ret)
+ 		return ret;
+ 
+-	ti_sn65dsi86_debugfs_init(pdata);
+-
+ 	/*
+ 	 * Break ourselves up into a collection of aux devices. The only real
+	 * motivation here is to solve the chicken-and-egg problem of probe
+diff --git a/drivers/gpu/drm/drm_writeback.c b/drivers/gpu/drm/drm_writeback.c
+index edbeab88ff2b6d..d983ee85cf134f 100644
+--- a/drivers/gpu/drm/drm_writeback.c
++++ b/drivers/gpu/drm/drm_writeback.c
+@@ -343,17 +343,18 @@ EXPORT_SYMBOL(drm_writeback_connector_init_with_encoder);
+ /**
+  * drm_writeback_connector_cleanup - Cleanup the writeback connector
+  * @dev: DRM device
+- * @wb_connector: Pointer to the writeback connector to clean up
++ * @data: Pointer to the writeback connector to clean up
+  *
+  * This will decrement the reference counter of blobs and destroy properties. It
+  * will also clean the remaining jobs in this writeback connector. Caution: This helper will not
+  * clean up the attached encoder and the drm_connector.
+  */
+ static void drm_writeback_connector_cleanup(struct drm_device *dev,
+-					    struct drm_writeback_connector *wb_connector)
++					    void *data)
+ {
+ 	unsigned long flags;
+ 	struct drm_writeback_job *pos, *n;
++	struct drm_writeback_connector *wb_connector = data;
+ 
+ 	delete_writeback_properties(dev);
+ 	drm_property_blob_put(wb_connector->pixel_formats_blob_ptr);
+@@ -405,7 +406,7 @@ int drmm_writeback_connector_init(struct drm_device *dev,
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = drmm_add_action_or_reset(dev, (void *)drm_writeback_connector_cleanup,
++	ret = drmm_add_action_or_reset(dev, drm_writeback_connector_cleanup,
+ 				       wb_connector);
+ 	if (ret)
+ 		return ret;
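
The change above also removes an undefined-behavior pattern: the old code cast
drm_writeback_connector_cleanup to void * so its pointer type matched what
drmm_add_action_or_reset() expects, but calling a function through a pointer of
an incompatible type is undefined behavior in C and trips kCFI. A minimal
sketch of the corrected pattern, with hypothetical names:

	static void my_cleanup(struct drm_device *dev, void *data)
	{
		struct my_state *state = data;	/* recover the real type inside */
		/* ... */
	}

	/* signature matches exactly, so no cast is needed at the call site */
	drmm_add_action_or_reset(dev, my_cleanup, state);
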
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+index 76a3a3e517d8d9..71e2e6b9d71393 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+@@ -35,6 +35,7 @@ static enum drm_gpu_sched_stat etnaviv_sched_timedout_job(struct drm_sched_job
+ 							  *sched_job)
+ {
+ 	struct etnaviv_gem_submit *submit = to_etnaviv_submit(sched_job);
++	struct drm_gpu_scheduler *sched = sched_job->sched;
+ 	struct etnaviv_gpu *gpu = submit->gpu;
+ 	u32 dma_addr, primid = 0;
+ 	int change;
+@@ -89,7 +90,9 @@ static enum drm_gpu_sched_stat etnaviv_sched_timedout_job(struct drm_sched_job
+ 	return DRM_GPU_SCHED_STAT_NOMINAL;
+ 
+ out_no_timeout:
+-	list_add(&sched_job->list, &sched_job->sched->pending_list);
++	spin_lock(&sched->job_list_lock);
++	list_add(&sched_job->list, &sched->pending_list);
++	spin_unlock(&sched->job_list_lock);
+ 	return DRM_GPU_SCHED_STAT_NOMINAL;
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_cx0_phy.c b/drivers/gpu/drm/i915/display/intel_cx0_phy.c
+index 22595766eac533..f9fa5416c389d5 100644
+--- a/drivers/gpu/drm/i915/display/intel_cx0_phy.c
++++ b/drivers/gpu/drm/i915/display/intel_cx0_phy.c
+@@ -2761,9 +2761,9 @@ static void intel_program_port_clock_ctl(struct intel_encoder *encoder,
+ 	val |= XELPDP_FORWARD_CLOCK_UNGATE;
+ 
+ 	if (!is_dp && is_hdmi_frl(port_clock))
+-		val |= XELPDP_DDI_CLOCK_SELECT(XELPDP_DDI_CLOCK_SELECT_DIV18CLK);
++		val |= XELPDP_DDI_CLOCK_SELECT_PREP(display, XELPDP_DDI_CLOCK_SELECT_DIV18CLK);
+ 	else
+-		val |= XELPDP_DDI_CLOCK_SELECT(XELPDP_DDI_CLOCK_SELECT_MAXPCLK);
++		val |= XELPDP_DDI_CLOCK_SELECT_PREP(display, XELPDP_DDI_CLOCK_SELECT_MAXPCLK);
+ 
+ 	/* TODO: HDMI FRL */
+ 	/* DP2.0 10G and 20G rates enable MPLLA*/
+@@ -2774,7 +2774,7 @@ static void intel_program_port_clock_ctl(struct intel_encoder *encoder,
+ 
+ 	intel_de_rmw(display, XELPDP_PORT_CLOCK_CTL(display, encoder->port),
+ 		     XELPDP_LANE1_PHY_CLOCK_SELECT | XELPDP_FORWARD_CLOCK_UNGATE |
+-		     XELPDP_DDI_CLOCK_SELECT_MASK | XELPDP_SSC_ENABLE_PLLA |
++		     XELPDP_DDI_CLOCK_SELECT_MASK(display) | XELPDP_SSC_ENABLE_PLLA |
+ 		     XELPDP_SSC_ENABLE_PLLB, val);
+ }
+ 
+@@ -3097,10 +3097,7 @@ int intel_mtl_tbt_calc_port_clock(struct intel_encoder *encoder)
+ 
+ 	val = intel_de_read(display, XELPDP_PORT_CLOCK_CTL(display, encoder->port));
+ 
+-	if (DISPLAY_VER(display) >= 30)
+-		clock = REG_FIELD_GET(XE3_DDI_CLOCK_SELECT_MASK, val);
+-	else
+-		clock = REG_FIELD_GET(XELPDP_DDI_CLOCK_SELECT_MASK, val);
++	clock = XELPDP_DDI_CLOCK_SELECT_GET(display, val);
+ 
+ 	drm_WARN_ON(display->drm, !(val & XELPDP_FORWARD_CLOCK_UNGATE));
+ 	drm_WARN_ON(display->drm, !(val & XELPDP_TBT_CLOCK_REQUEST));
+@@ -3168,13 +3165,9 @@ static void intel_mtl_tbt_pll_enable(struct intel_encoder *encoder,
+ 	 * clock muxes, gating and SSC
+ 	 */
+ 
+-	if (DISPLAY_VER(display) >= 30) {
+-		mask = XE3_DDI_CLOCK_SELECT_MASK;
+-		val |= XE3_DDI_CLOCK_SELECT(intel_mtl_tbt_clock_select(display, crtc_state->port_clock));
+-	} else {
+-		mask = XELPDP_DDI_CLOCK_SELECT_MASK;
+-		val |= XELPDP_DDI_CLOCK_SELECT(intel_mtl_tbt_clock_select(display, crtc_state->port_clock));
+-	}
++	mask = XELPDP_DDI_CLOCK_SELECT_MASK(display);
++	val |= XELPDP_DDI_CLOCK_SELECT_PREP(display,
++					    intel_mtl_tbt_clock_select(display, crtc_state->port_clock));
+ 
+ 	mask |= XELPDP_FORWARD_CLOCK_UNGATE;
+ 	val |= XELPDP_FORWARD_CLOCK_UNGATE;
+@@ -3287,7 +3280,7 @@ static void intel_cx0pll_disable(struct intel_encoder *encoder)
+ 
+ 	/* 7. Program PORT_CLOCK_CTL register to disable and gate clocks. */
+ 	intel_de_rmw(display, XELPDP_PORT_CLOCK_CTL(display, encoder->port),
+-		     XELPDP_DDI_CLOCK_SELECT_MASK, 0);
++		     XELPDP_DDI_CLOCK_SELECT_MASK(display), 0);
+ 	intel_de_rmw(display, XELPDP_PORT_CLOCK_CTL(display, encoder->port),
+ 		     XELPDP_FORWARD_CLOCK_UNGATE, 0);
+ 
+@@ -3336,7 +3329,7 @@ static void intel_mtl_tbt_pll_disable(struct intel_encoder *encoder)
+ 	 * 5. Program PORT CLOCK CTRL register to disable and gate clocks
+ 	 */
+ 	intel_de_rmw(display, XELPDP_PORT_CLOCK_CTL(display, encoder->port),
+-		     XELPDP_DDI_CLOCK_SELECT_MASK |
++		     XELPDP_DDI_CLOCK_SELECT_MASK(display) |
+ 		     XELPDP_FORWARD_CLOCK_UNGATE, 0);
+ 
+ 	/* 6. Program DDI_CLK_VALFREQ to 0. */
+@@ -3365,7 +3358,7 @@ intel_mtl_port_pll_type(struct intel_encoder *encoder,
+ 	 * handling is done via the standard shared DPLL framework.
+ 	 */
+ 	val = intel_de_read(display, XELPDP_PORT_CLOCK_CTL(display, encoder->port));
+-	clock = REG_FIELD_GET(XELPDP_DDI_CLOCK_SELECT_MASK, val);
++	clock = XELPDP_DDI_CLOCK_SELECT_GET(display, val);
+ 
+ 	if (clock == XELPDP_DDI_CLOCK_SELECT_MAXPCLK ||
+ 	    clock == XELPDP_DDI_CLOCK_SELECT_DIV18CLK)
+diff --git a/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h b/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h
+index 960f7f778fb81e..59c22beaf1de50 100644
+--- a/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h
++++ b/drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h
+@@ -192,10 +192,17 @@
+ 
+ #define   XELPDP_TBT_CLOCK_REQUEST			REG_BIT(19)
+ #define   XELPDP_TBT_CLOCK_ACK				REG_BIT(18)
+-#define   XELPDP_DDI_CLOCK_SELECT_MASK			REG_GENMASK(15, 12)
+-#define   XE3_DDI_CLOCK_SELECT_MASK			REG_GENMASK(16, 12)
+-#define   XELPDP_DDI_CLOCK_SELECT(val)			REG_FIELD_PREP(XELPDP_DDI_CLOCK_SELECT_MASK, val)
+-#define   XE3_DDI_CLOCK_SELECT(val)			REG_FIELD_PREP(XE3_DDI_CLOCK_SELECT_MASK, val)
++#define   _XELPDP_DDI_CLOCK_SELECT_MASK			REG_GENMASK(15, 12)
++#define   _XE3_DDI_CLOCK_SELECT_MASK			REG_GENMASK(16, 12)
++#define   XELPDP_DDI_CLOCK_SELECT_MASK(display)		(DISPLAY_VER(display) >= 30 ? \
++							 _XE3_DDI_CLOCK_SELECT_MASK : _XELPDP_DDI_CLOCK_SELECT_MASK)
++#define   XELPDP_DDI_CLOCK_SELECT_PREP(display, val)	(DISPLAY_VER(display) >= 30 ? \
++							 REG_FIELD_PREP(_XE3_DDI_CLOCK_SELECT_MASK, (val)) : \
++							 REG_FIELD_PREP(_XELPDP_DDI_CLOCK_SELECT_MASK, (val)))
++#define   XELPDP_DDI_CLOCK_SELECT_GET(display, val)	(DISPLAY_VER(display) >= 30 ? \
++							 REG_FIELD_GET(_XE3_DDI_CLOCK_SELECT_MASK, (val)) : \
++							 REG_FIELD_GET(_XELPDP_DDI_CLOCK_SELECT_MASK, (val)))
++
+ #define   XELPDP_DDI_CLOCK_SELECT_NONE			0x0
+ #define   XELPDP_DDI_CLOCK_SELECT_MAXPCLK		0x8
+ #define   XELPDP_DDI_CLOCK_SELECT_DIV18CLK		0x9
+diff --git a/drivers/gpu/drm/i915/display/intel_display_driver.c b/drivers/gpu/drm/i915/display/intel_display_driver.c
+index 31740a677dd807..14c8b3259bbf58 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_driver.c
++++ b/drivers/gpu/drm/i915/display/intel_display_driver.c
+@@ -241,31 +241,45 @@ int intel_display_driver_probe_noirq(struct intel_display *display)
+ 	intel_dmc_init(display);
+ 
+ 	display->wq.modeset = alloc_ordered_workqueue("i915_modeset", 0);
++	if (!display->wq.modeset) {
++		ret = -ENOMEM;
++		goto cleanup_vga_client_pw_domain_dmc;
++	}
++
+ 	display->wq.flip = alloc_workqueue("i915_flip", WQ_HIGHPRI |
+ 						WQ_UNBOUND, WQ_UNBOUND_MAX_ACTIVE);
++	if (!display->wq.flip) {
++		ret = -ENOMEM;
++		goto cleanup_wq_modeset;
++	}
++
+ 	display->wq.cleanup = alloc_workqueue("i915_cleanup", WQ_HIGHPRI, 0);
++	if (!display->wq.cleanup) {
++		ret = -ENOMEM;
++		goto cleanup_wq_flip;
++	}
+ 
+ 	intel_mode_config_init(display);
+ 
+ 	ret = intel_cdclk_init(display);
+ 	if (ret)
+-		goto cleanup_vga_client_pw_domain_dmc;
++		goto cleanup_wq_cleanup;
+ 
+ 	ret = intel_color_init(display);
+ 	if (ret)
+-		goto cleanup_vga_client_pw_domain_dmc;
++		goto cleanup_wq_cleanup;
+ 
+ 	ret = intel_dbuf_init(i915);
+ 	if (ret)
+-		goto cleanup_vga_client_pw_domain_dmc;
++		goto cleanup_wq_cleanup;
+ 
+ 	ret = intel_bw_init(i915);
+ 	if (ret)
+-		goto cleanup_vga_client_pw_domain_dmc;
++		goto cleanup_wq_cleanup;
+ 
+ 	ret = intel_pmdemand_init(display);
+ 	if (ret)
+-		goto cleanup_vga_client_pw_domain_dmc;
++		goto cleanup_wq_cleanup;
+ 
+ 	intel_init_quirks(display);
+ 
+@@ -273,6 +287,12 @@ int intel_display_driver_probe_noirq(struct intel_display *display)
+ 
+ 	return 0;
+ 
++cleanup_wq_cleanup:
++	destroy_workqueue(display->wq.cleanup);
++cleanup_wq_flip:
++	destroy_workqueue(display->wq.flip);
++cleanup_wq_modeset:
++	destroy_workqueue(display->wq.modeset);
+ cleanup_vga_client_pw_domain_dmc:
+ 	intel_dmc_fini(display);
+ 	intel_power_domains_driver_remove(display);
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index cd8f728d5fddc4..ad0306283990c5 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -4457,6 +4457,23 @@ intel_dp_mst_disconnect(struct intel_dp *intel_dp)
+ static bool
+ intel_dp_get_sink_irq_esi(struct intel_dp *intel_dp, u8 *esi)
+ {
++	struct intel_display *display = to_intel_display(intel_dp);
++
++	/*
++	 * Display WA for HSD #13013007775: mtl/arl/lnl
++	 * Read the sink count and link service IRQ registers in separate
++	 * transactions to prevent disconnecting the sink on a TBT link
++	 * inadvertently.
++	 */
++	if (IS_DISPLAY_VER(display, 14, 20) && !display->platform.battlemage) {
++		if (drm_dp_dpcd_read(&intel_dp->aux, DP_SINK_COUNT_ESI, esi, 3) != 3)
++			return false;
++
++		/* DP_SINK_COUNT_ESI + 3 == DP_LINK_SERVICE_IRQ_VECTOR_ESI0 */
++		return drm_dp_dpcd_readb(&intel_dp->aux, DP_LINK_SERVICE_IRQ_VECTOR_ESI0,
++					 &esi[3]) == 1;
++	}
++
+ 	return drm_dp_dpcd_read(&intel_dp->aux, DP_SINK_COUNT_ESI, esi, 4) == 4;
+ }
+ 
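
For reference, the address arithmetic in the comment above follows from the
DPCD ESI register map. A sketch of the split transfer, using the register
constants as defined in the DRM DP helpers:

	/*
	 * 0x2002 DP_SINK_COUNT_ESI
	 * 0x2003 DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0
	 * 0x2004 DP_DEVICE_SERVICE_IRQ_VECTOR_ESI1
	 * 0x2005 DP_LINK_SERVICE_IRQ_VECTOR_ESI0
	 */
	drm_dp_dpcd_read(&intel_dp->aux, DP_SINK_COUNT_ESI, esi, 3);	/* 0x2002..0x2004 */
	drm_dp_dpcd_readb(&intel_dp->aux, DP_LINK_SERVICE_IRQ_VECTOR_ESI0, &esi[3]);

Splitting the read at the 0x2004/0x2005 boundary keeps the sink-count and
link-service registers in separate AUX transactions, which is what the
workaround requires.
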
+diff --git a/drivers/gpu/drm/i915/display/intel_snps_hdmi_pll.c b/drivers/gpu/drm/i915/display/intel_snps_hdmi_pll.c
+index 74bb3bedf30f5d..5111bdc3075b58 100644
+--- a/drivers/gpu/drm/i915/display/intel_snps_hdmi_pll.c
++++ b/drivers/gpu/drm/i915/display/intel_snps_hdmi_pll.c
+@@ -103,8 +103,8 @@ static void get_ana_cp_int_prop(u64 vco_clk,
+ 			    DIV_ROUND_DOWN_ULL(curve_1_interpolated, CURVE0_MULTIPLIER)));
+ 
+ 	ana_cp_int_temp =
+-		DIV_ROUND_CLOSEST_ULL(DIV_ROUND_DOWN_ULL(adjusted_vco_clk1, curve_2_scaled1),
+-				      CURVE2_MULTIPLIER);
++		DIV64_U64_ROUND_CLOSEST(DIV_ROUND_DOWN_ULL(adjusted_vco_clk1, curve_2_scaled1),
++					CURVE2_MULTIPLIER);
+ 
+ 	*ana_cp_int = max(1, min(ana_cp_int_temp, 127));
+ 
+diff --git a/drivers/gpu/drm/i915/display/vlv_dsi.c b/drivers/gpu/drm/i915/display/vlv_dsi.c
+index af717df831977c..661de51dfd2290 100644
+--- a/drivers/gpu/drm/i915/display/vlv_dsi.c
++++ b/drivers/gpu/drm/i915/display/vlv_dsi.c
+@@ -1060,7 +1060,7 @@ static void bxt_dsi_get_pipe_config(struct intel_encoder *encoder,
+ 				              BXT_MIPI_TRANS_VACTIVE(port));
+ 	adjusted_mode->crtc_vtotal =
+ 				intel_de_read(display,
+-				              BXT_MIPI_TRANS_VTOTAL(port));
++				              BXT_MIPI_TRANS_VTOTAL(port)) + 1;
+ 
+ 	hactive = adjusted_mode->crtc_hdisplay;
+ 	hfp = intel_de_read(display, MIPI_HFP_COUNT(display, port));
+@@ -1265,7 +1265,7 @@ static void set_dsi_timings(struct intel_encoder *encoder,
+ 			intel_de_write(display, BXT_MIPI_TRANS_VACTIVE(port),
+ 				       adjusted_mode->crtc_vdisplay);
+ 			intel_de_write(display, BXT_MIPI_TRANS_VTOTAL(port),
+-				       adjusted_mode->crtc_vtotal);
++				       adjusted_mode->crtc_vtotal - 1);
+ 		}
+ 
+ 		intel_de_write(display, MIPI_HACTIVE_AREA_COUNT(display, port),
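
The two vlv_dsi hunks above form a matched pair: BXT_MIPI_TRANS_VTOTAL stores
vtotal minus one, so the write path subtracts one and the readout path adds it
back. A worked round trip with illustrative numbers:

	/* mode with crtc_vtotal = 525 lines */
	intel_de_write(display, BXT_MIPI_TRANS_VTOTAL(port), 525 - 1);	  /* register holds 524 */
	vtotal = intel_de_read(display, BXT_MIPI_TRANS_VTOTAL(port)) + 1; /* 525 again */

Without both adjustments, state readout would report 524 and would likely be
flagged as a mismatch against the programmed mode.
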
+diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
+index 990bfaba3ce4e8..5bc696bfbb0fea 100644
+--- a/drivers/gpu/drm/i915/i915_pmu.c
++++ b/drivers/gpu/drm/i915/i915_pmu.c
+@@ -108,7 +108,7 @@ static unsigned int config_bit(const u64 config)
+ 		return other_bit(config);
+ }
+ 
+-static u32 config_mask(const u64 config)
++static __always_inline u32 config_mask(const u64 config)
+ {
+ 	unsigned int bit = config_bit(config);
+ 
+diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+index 6970b0f7f457c8..2e1d5c3432728c 100644
+--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
++++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+@@ -156,6 +156,7 @@ void msm_devfreq_init(struct msm_gpu *gpu)
+ 	priv->gpu_devfreq_config.downdifferential = 10;
+ 
+ 	mutex_init(&df->lock);
++	df->suspended = true;
+ 
+ 	ret = dev_pm_qos_add_request(&gpu->pdev->dev, &df->boost_freq,
+ 				     DEV_PM_QOS_MIN_FREQUENCY, 0);
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 53211b5eaa09b9..10f3221cf00b64 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -4455,6 +4455,12 @@ static const struct panel_desc tianma_tm070jdhg34_00 = {
+ 		.width = 150, /* 149.76 */
+ 		.height = 94, /* 93.60 */
+ 	},
++	.delay = {
++		.prepare = 15,		/* Tp1 */
++		.enable = 150,		/* Tp2 */
++		.disable = 150,		/* Tp4 */
++		.unprepare = 120,	/* Tp3 */
++	},
+ 	.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
+ 	.connector_type = DRM_MODE_CONNECTOR_LVDS,
+ };
+diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
+index bd39db7bb24087..e671aa24172068 100644
+--- a/drivers/gpu/drm/scheduler/sched_entity.c
++++ b/drivers/gpu/drm/scheduler/sched_entity.c
+@@ -176,6 +176,7 @@ static void drm_sched_entity_kill_jobs_work(struct work_struct *wrk)
+ {
+ 	struct drm_sched_job *job = container_of(wrk, typeof(*job), work);
+ 
++	drm_sched_fence_scheduled(job->s_fence, NULL);
+ 	drm_sched_fence_finished(job->s_fence, -ESRCH);
+ 	WARN_ON(job->s_fence->parent);
+ 	job->sched->ops->free_job(job);
+diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c
+index 798507a8ae56d6..59d5c1ba145a82 100644
+--- a/drivers/gpu/drm/tegra/dc.c
++++ b/drivers/gpu/drm/tegra/dc.c
+@@ -1321,10 +1321,16 @@ static struct drm_plane *tegra_dc_add_shared_planes(struct drm_device *drm,
+ 		if (wgrp->dc == dc->pipe) {
+ 			for (j = 0; j < wgrp->num_windows; j++) {
+ 				unsigned int index = wgrp->windows[j];
++				enum drm_plane_type type;
++
++				if (primary)
++					type = DRM_PLANE_TYPE_OVERLAY;
++				else
++					type = DRM_PLANE_TYPE_PRIMARY;
+ 
+ 				plane = tegra_shared_plane_create(drm, dc,
+ 								  wgrp->index,
+-								  index);
++								  index, type);
+ 				if (IS_ERR(plane))
+ 					return plane;
+ 
+@@ -1332,10 +1338,8 @@ static struct drm_plane *tegra_dc_add_shared_planes(struct drm_device *drm,
+ 				 * Choose the first shared plane owned by this
+ 				 * head as the primary plane.
+ 				 */
+-				if (!primary) {
+-					plane->type = DRM_PLANE_TYPE_PRIMARY;
++				if (!primary)
+ 					primary = plane;
+-				}
+ 			}
+ 		}
+ 	}
+@@ -1389,7 +1393,10 @@ static void tegra_crtc_reset(struct drm_crtc *crtc)
+ 	if (crtc->state)
+ 		tegra_crtc_atomic_destroy_state(crtc, crtc->state);
+ 
+-	__drm_atomic_helper_crtc_reset(crtc, &state->base);
++	if (state)
++		__drm_atomic_helper_crtc_reset(crtc, &state->base);
++	else
++		__drm_atomic_helper_crtc_reset(crtc, NULL);
+ }
+ 
+ static struct drm_crtc_state *
+diff --git a/drivers/gpu/drm/tegra/hub.c b/drivers/gpu/drm/tegra/hub.c
+index fa6140fc37fb16..8f779f23dc0904 100644
+--- a/drivers/gpu/drm/tegra/hub.c
++++ b/drivers/gpu/drm/tegra/hub.c
+@@ -755,9 +755,9 @@ static const struct drm_plane_helper_funcs tegra_shared_plane_helper_funcs = {
+ struct drm_plane *tegra_shared_plane_create(struct drm_device *drm,
+ 					    struct tegra_dc *dc,
+ 					    unsigned int wgrp,
+-					    unsigned int index)
++					    unsigned int index,
++					    enum drm_plane_type type)
+ {
+-	enum drm_plane_type type = DRM_PLANE_TYPE_OVERLAY;
+ 	struct tegra_drm *tegra = drm->dev_private;
+ 	struct tegra_display_hub *hub = tegra->hub;
+ 	struct tegra_shared_plane *plane;
+diff --git a/drivers/gpu/drm/tegra/hub.h b/drivers/gpu/drm/tegra/hub.h
+index 23c4b2115ed1e3..a66f18c4facc9d 100644
+--- a/drivers/gpu/drm/tegra/hub.h
++++ b/drivers/gpu/drm/tegra/hub.h
+@@ -80,7 +80,8 @@ void tegra_display_hub_cleanup(struct tegra_display_hub *hub);
+ struct drm_plane *tegra_shared_plane_create(struct drm_device *drm,
+ 					    struct tegra_dc *dc,
+ 					    unsigned int wgrp,
+-					    unsigned int index);
++					    unsigned int index,
++					    enum drm_plane_type type);
+ 
+ int tegra_display_hub_atomic_check(struct drm_device *drm,
+ 				   struct drm_atomic_state *state);
+diff --git a/drivers/gpu/drm/tiny/cirrus-qemu.c b/drivers/gpu/drm/tiny/cirrus-qemu.c
+index 52ec1e4ea9e511..a00d3b7ded6c5e 100644
+--- a/drivers/gpu/drm/tiny/cirrus-qemu.c
++++ b/drivers/gpu/drm/tiny/cirrus-qemu.c
+@@ -318,7 +318,6 @@ static void cirrus_pitch_set(struct cirrus_device *cirrus, unsigned int pitch)
+ 	/* Enable extended blanking and pitch bits, and enable full memory */
+ 	cr1b = 0x22;
+ 	cr1b |= (pitch >> 7) & 0x10;
+-	cr1b |= (pitch >> 6) & 0x40;
+ 	wreg_crt(cirrus, 0x1b, cr1b);
+ 
+ 	cirrus_set_start_address(cirrus, 0);
+diff --git a/drivers/gpu/drm/tiny/simpledrm.c b/drivers/gpu/drm/tiny/simpledrm.c
+index 5d9ab8adf80058..c2b022a81afd3c 100644
+--- a/drivers/gpu/drm/tiny/simpledrm.c
++++ b/drivers/gpu/drm/tiny/simpledrm.c
+@@ -284,7 +284,7 @@ static struct simpledrm_device *simpledrm_device_of_dev(struct drm_device *dev)
+ 
+ static void simpledrm_device_release_clocks(void *res)
+ {
+-	struct simpledrm_device *sdev = simpledrm_device_of_dev(res);
++	struct simpledrm_device *sdev = res;
+ 	unsigned int i;
+ 
+ 	for (i = 0; i < sdev->clk_count; ++i) {
+@@ -382,7 +382,7 @@ static int simpledrm_device_init_clocks(struct simpledrm_device *sdev)
+ 
+ static void simpledrm_device_release_regulators(void *res)
+ {
+-	struct simpledrm_device *sdev = simpledrm_device_of_dev(res);
++	struct simpledrm_device *sdev = res;
+ 	unsigned int i;
+ 
+ 	for (i = 0; i < sdev->regulator_count; ++i) {
+diff --git a/drivers/gpu/drm/udl/udl_drv.c b/drivers/gpu/drm/udl/udl_drv.c
+index 05b3a152cc3321..7e7d704be0c0ef 100644
+--- a/drivers/gpu/drm/udl/udl_drv.c
++++ b/drivers/gpu/drm/udl/udl_drv.c
+@@ -127,9 +127,9 @@ static void udl_usb_disconnect(struct usb_interface *interface)
+ {
+ 	struct drm_device *dev = usb_get_intfdata(interface);
+ 
++	drm_dev_unplug(dev);
+ 	drm_kms_helper_poll_fini(dev);
+ 	udl_drop_usb(dev);
+-	drm_dev_unplug(dev);
+ }
+ 
+ /*
+diff --git a/drivers/gpu/drm/xe/display/xe_display.c b/drivers/gpu/drm/xe/display/xe_display.c
+index 0b0aca7a25afd0..18062cfb265f7a 100644
+--- a/drivers/gpu/drm/xe/display/xe_display.c
++++ b/drivers/gpu/drm/xe/display/xe_display.c
+@@ -104,6 +104,8 @@ int xe_display_create(struct xe_device *xe)
+ 	spin_lock_init(&xe->display.fb_tracking.lock);
+ 
+ 	xe->display.hotplug.dp_wq = alloc_ordered_workqueue("xe-dp", 0);
++	if (!xe->display.hotplug.dp_wq)
++		return -ENOMEM;
+ 
+ 	return drmm_add_action_or_reset(&xe->drm, display_destroy, NULL);
+ }
+diff --git a/drivers/gpu/drm/xe/display/xe_dsb_buffer.c b/drivers/gpu/drm/xe/display/xe_dsb_buffer.c
+index f95375451e2faf..9f941fc2e36bb2 100644
+--- a/drivers/gpu/drm/xe/display/xe_dsb_buffer.c
++++ b/drivers/gpu/drm/xe/display/xe_dsb_buffer.c
+@@ -17,10 +17,7 @@ u32 intel_dsb_buffer_ggtt_offset(struct intel_dsb_buffer *dsb_buf)
+ 
+ void intel_dsb_buffer_write(struct intel_dsb_buffer *dsb_buf, u32 idx, u32 val)
+ {
+-	struct xe_device *xe = dsb_buf->vma->bo->tile->xe;
+-
+ 	iosys_map_wr(&dsb_buf->vma->bo->vmap, idx * 4, u32, val);
+-	xe_device_l2_flush(xe);
+ }
+ 
+ u32 intel_dsb_buffer_read(struct intel_dsb_buffer *dsb_buf, u32 idx)
+@@ -30,12 +27,9 @@ u32 intel_dsb_buffer_read(struct intel_dsb_buffer *dsb_buf, u32 idx)
+ 
+ void intel_dsb_buffer_memset(struct intel_dsb_buffer *dsb_buf, u32 idx, u32 val, size_t size)
+ {
+-	struct xe_device *xe = dsb_buf->vma->bo->tile->xe;
+-
+ 	WARN_ON(idx > (dsb_buf->buf_size - size) / sizeof(*dsb_buf->cmd_buf));
+ 
+ 	iosys_map_memset(&dsb_buf->vma->bo->vmap, idx * 4, val, size);
+-	xe_device_l2_flush(xe);
+ }
+ 
+ bool intel_dsb_buffer_create(struct intel_crtc *crtc, struct intel_dsb_buffer *dsb_buf, size_t size)
+@@ -74,9 +68,12 @@ void intel_dsb_buffer_cleanup(struct intel_dsb_buffer *dsb_buf)
+ 
+ void intel_dsb_buffer_flush_map(struct intel_dsb_buffer *dsb_buf)
+ {
++	struct xe_device *xe = dsb_buf->vma->bo->tile->xe;
++
+ 	/*
+ 	 * The memory barrier here is to ensure coherency of DSB vs MMIO,
+ 	 * both for weak ordering archs and discrete cards.
+ 	 */
+-	xe_device_wmb(dsb_buf->vma->bo->tile->xe);
++	xe_device_wmb(xe);
++	xe_device_l2_flush(xe);
+ }
+diff --git a/drivers/gpu/drm/xe/display/xe_fb_pin.c b/drivers/gpu/drm/xe/display/xe_fb_pin.c
+index d918ae1c806182..55259969480b47 100644
+--- a/drivers/gpu/drm/xe/display/xe_fb_pin.c
++++ b/drivers/gpu/drm/xe/display/xe_fb_pin.c
+@@ -164,6 +164,9 @@ static int __xe_pin_fb_vma_dpt(const struct intel_framebuffer *fb,
+ 
+ 	vma->dpt = dpt;
+ 	vma->node = dpt->ggtt_node[tile0->id];
++
++	/* Ensure DPT writes are flushed */
++	xe_device_l2_flush(xe);
+ 	return 0;
+ }
+ 
+@@ -333,8 +336,6 @@ static struct i915_vma *__xe_pin_fb_vma(const struct intel_framebuffer *fb,
+ 	if (ret)
+ 		goto err_unpin;
+ 
+-	/* Ensure DPT writes are flushed */
+-	xe_device_l2_flush(xe);
+ 	return vma;
+ 
+ err_unpin:
+diff --git a/drivers/gpu/drm/xe/xe_ggtt.c b/drivers/gpu/drm/xe/xe_ggtt.c
+index 5fcb2b4c2c1397..60bce5de527256 100644
+--- a/drivers/gpu/drm/xe/xe_ggtt.c
++++ b/drivers/gpu/drm/xe/xe_ggtt.c
+@@ -201,6 +201,13 @@ static const struct xe_ggtt_pt_ops xelpg_pt_wa_ops = {
+ 	.ggtt_set_pte = xe_ggtt_set_pte_and_flush,
+ };
+ 
++static void dev_fini_ggtt(void *arg)
++{
++	struct xe_ggtt *ggtt = arg;
++
++	drain_workqueue(ggtt->wq);
++}
++
+ /**
+  * xe_ggtt_init_early - Early GGTT initialization
+  * @ggtt: the &xe_ggtt to be initialized
+@@ -257,6 +264,10 @@ int xe_ggtt_init_early(struct xe_ggtt *ggtt)
+ 	if (err)
+ 		return err;
+ 
++	err = devm_add_action_or_reset(xe->drm.dev, dev_fini_ggtt, ggtt);
++	if (err)
++		return err;
++
+ 	if (IS_SRIOV_VF(xe)) {
+ 		err = xe_gt_sriov_vf_prepare_ggtt(xe_tile_get_gt(ggtt->tile, 0));
+ 		if (err)
+diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.h b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
+index c250ea773491ec..308061f0cf372a 100644
+--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.h
++++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
+@@ -51,7 +51,15 @@ static inline void xe_sched_tdr_queue_imm(struct xe_gpu_scheduler *sched)
+ 
+ static inline void xe_sched_resubmit_jobs(struct xe_gpu_scheduler *sched)
+ {
+-	drm_sched_resubmit_jobs(&sched->base);
++	struct drm_sched_job *s_job;
++
++	list_for_each_entry(s_job, &sched->base.pending_list, list) {
++		struct drm_sched_fence *s_fence = s_job->s_fence;
++		struct dma_fence *hw_fence = s_fence->parent;
++
++		if (hw_fence && !dma_fence_is_signaled(hw_fence))
++			sched->base.ops->run_job(s_job);
++	}
+ }
+ 
+ static inline bool
+diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+index 084cbdeba8eaa5..e1362e608146b6 100644
+--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
++++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+@@ -137,6 +137,14 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
+ 	struct xe_gt_tlb_invalidation_fence *fence, *next;
+ 	int pending_seqno;
+ 
++	/*
++	 * We can get here before the CTs are even initialized if we're wedging
++	 * very early, in which case there are no pending fences and we can
++	 * bail immediately.
++	 */
++	if (!xe_guc_ct_initialized(&gt->uc.guc.ct))
++		return;
++
+ 	/*
+ 	 * CT channel is already disabled at this point. No new TLB requests can
+ 	 * appear.
+diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
+index 72ad576fc18eb5..1fe1510e580b58 100644
+--- a/drivers/gpu/drm/xe/xe_guc_ct.c
++++ b/drivers/gpu/drm/xe/xe_guc_ct.c
+@@ -34,6 +34,11 @@
+ #include "xe_pm.h"
+ #include "xe_trace_guc.h"
+ 
++static void receive_g2h(struct xe_guc_ct *ct);
++static void g2h_worker_func(struct work_struct *w);
++static void safe_mode_worker_func(struct work_struct *w);
++static void ct_exit_safe_mode(struct xe_guc_ct *ct);
++
+ #if IS_ENABLED(CONFIG_DRM_XE_DEBUG)
+ enum {
+ 	/* Internal states, not error conditions */
+@@ -186,14 +191,11 @@ static void guc_ct_fini(struct drm_device *drm, void *arg)
+ {
+ 	struct xe_guc_ct *ct = arg;
+ 
++	ct_exit_safe_mode(ct);
+ 	destroy_workqueue(ct->g2h_wq);
+ 	xa_destroy(&ct->fence_lookup);
+ }
+ 
+-static void receive_g2h(struct xe_guc_ct *ct);
+-static void g2h_worker_func(struct work_struct *w);
+-static void safe_mode_worker_func(struct work_struct *w);
+-
+ static void primelockdep(struct xe_guc_ct *ct)
+ {
+ 	if (!IS_ENABLED(CONFIG_LOCKDEP))
+@@ -513,6 +515,9 @@ void xe_guc_ct_disable(struct xe_guc_ct *ct)
+  */
+ void xe_guc_ct_stop(struct xe_guc_ct *ct)
+ {
++	if (!xe_guc_ct_initialized(ct))
++		return;
++
+ 	xe_guc_ct_set_state(ct, XE_GUC_CT_STATE_STOPPED);
+ 	stop_g2h_handler(ct);
+ }
+@@ -759,7 +764,7 @@ static int __guc_ct_send_locked(struct xe_guc_ct *ct, const u32 *action,
+ 	u16 seqno;
+ 	int ret;
+ 
+-	xe_gt_assert(gt, ct->state != XE_GUC_CT_STATE_NOT_INITIALIZED);
++	xe_gt_assert(gt, xe_guc_ct_initialized(ct));
+ 	xe_gt_assert(gt, !g2h_len || !g2h_fence);
+ 	xe_gt_assert(gt, !num_g2h || !g2h_fence);
+ 	xe_gt_assert(gt, !g2h_len || num_g2h);
+@@ -1342,7 +1347,7 @@ static int g2h_read(struct xe_guc_ct *ct, u32 *msg, bool fast_path)
+ 	u32 action;
+ 	u32 *hxg;
+ 
+-	xe_gt_assert(gt, ct->state != XE_GUC_CT_STATE_NOT_INITIALIZED);
++	xe_gt_assert(gt, xe_guc_ct_initialized(ct));
+ 	lockdep_assert_held(&ct->fast_lock);
+ 
+ 	if (ct->state == XE_GUC_CT_STATE_DISABLED)
+diff --git a/drivers/gpu/drm/xe/xe_guc_ct.h b/drivers/gpu/drm/xe/xe_guc_ct.h
+index 82c4ae458dda39..582aac10646945 100644
+--- a/drivers/gpu/drm/xe/xe_guc_ct.h
++++ b/drivers/gpu/drm/xe/xe_guc_ct.h
+@@ -22,6 +22,11 @@ void xe_guc_ct_snapshot_print(struct xe_guc_ct_snapshot *snapshot, struct drm_pr
+ void xe_guc_ct_snapshot_free(struct xe_guc_ct_snapshot *snapshot);
+ void xe_guc_ct_print(struct xe_guc_ct *ct, struct drm_printer *p, bool want_ctb);
+ 
++static inline bool xe_guc_ct_initialized(struct xe_guc_ct *ct)
++{
++	return ct->state != XE_GUC_CT_STATE_NOT_INITIALIZED;
++}
++
+ static inline bool xe_guc_ct_enabled(struct xe_guc_ct *ct)
+ {
+ 	return ct->state == XE_GUC_CT_STATE_ENABLED;
+diff --git a/drivers/gpu/drm/xe/xe_guc_pc.c b/drivers/gpu/drm/xe/xe_guc_pc.c
+index 43b1192ba61cde..a7b8bacfe64efc 100644
+--- a/drivers/gpu/drm/xe/xe_guc_pc.c
++++ b/drivers/gpu/drm/xe/xe_guc_pc.c
+@@ -1053,7 +1053,7 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
+ 		goto out;
+ 	}
+ 
+-	memset(pc->bo->vmap.vaddr, 0, size);
++	xe_map_memset(xe, &pc->bo->vmap, 0, 0, size);
+ 	slpc_shared_data_write(pc, header.size, size);
+ 
+ 	earlier = ktime_get();
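
The xe_map_memset() change matters because the BO's vmap may live in
write-combined I/O memory rather than kernel virtual memory, where a plain
memset() is not portable. The xe_map helpers wrap iosys_map, which dispatches
to memset_io() for I/O mappings. A minimal sketch of the underlying mechanism,
with hypothetical phys_addr/size values:

	#include <linux/iosys-map.h>

	struct iosys_map map;
	void __iomem *vaddr = ioremap_wc(phys_addr, size);	/* I/O-memory case */

	iosys_map_set_vaddr_iomem(&map, vaddr);
	iosys_map_memset(&map, 0, 0, size);	/* uses memset_io() for this map */
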
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index 769781d577df60..71ddd26ec30e80 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -229,6 +229,17 @@ static bool exec_queue_killed_or_banned_or_wedged(struct xe_exec_queue *q)
+ static void guc_submit_fini(struct drm_device *drm, void *arg)
+ {
+ 	struct xe_guc *guc = arg;
++	struct xe_device *xe = guc_to_xe(guc);
++	struct xe_gt *gt = guc_to_gt(guc);
++	int ret;
++
++	ret = wait_event_timeout(guc->submission_state.fini_wq,
++				 xa_empty(&guc->submission_state.exec_queue_lookup),
++				 HZ * 5);
++
++	drain_workqueue(xe->destroy_wq);
++
++	xe_gt_assert(gt, ret);
+ 
+ 	xa_destroy(&guc->submission_state.exec_queue_lookup);
+ }
+@@ -300,6 +311,8 @@ int xe_guc_submit_init(struct xe_guc *guc, unsigned int num_ids)
+ 
+ 	primelockdep(guc);
+ 
++	guc->submission_state.initialized = true;
++
+ 	return drmm_add_action_or_reset(&xe->drm, guc_submit_fini, guc);
+ }
+ 
+@@ -834,6 +847,13 @@ void xe_guc_submit_wedge(struct xe_guc *guc)
+ 
+ 	xe_gt_assert(guc_to_gt(guc), guc_to_xe(guc)->wedged.mode);
+ 
++	/*
++	 * If the device is being wedged even before submission_state is
++	 * initialized, there's nothing to do here.
++	 */
++	if (!guc->submission_state.initialized)
++		return;
++
+ 	err = devm_add_action_or_reset(guc_to_xe(guc)->drm.dev,
+ 				       guc_submit_wedged_fini, guc);
+ 	if (err) {
+@@ -1739,6 +1759,9 @@ int xe_guc_submit_reset_prepare(struct xe_guc *guc)
+ {
+ 	int ret;
+ 
++	if (!guc->submission_state.initialized)
++		return 0;
++
+ 	/*
+ 	 * Using an atomic here rather than submission_state.lock as this
+ 	 * function can be called while holding the CT lock (engine reset
+diff --git a/drivers/gpu/drm/xe/xe_guc_types.h b/drivers/gpu/drm/xe/xe_guc_types.h
+index 63bac64429a5dc..1fde7614fcc522 100644
+--- a/drivers/gpu/drm/xe/xe_guc_types.h
++++ b/drivers/gpu/drm/xe/xe_guc_types.h
+@@ -89,6 +89,11 @@ struct xe_guc {
+ 		struct mutex lock;
+ 		/** @submission_state.enabled: submission is enabled */
+ 		bool enabled;
++		/**
++		 * @submission_state.initialized: set once the submission state is
++		 * initialized; before that, not even the lock is valid
++		 */
++		bool initialized;
+ 		/** @submission_state.fini_wq: submit fini wait queue */
+ 		wait_queue_head_t fini_wq;
+ 	} submission_state;
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index cc1ae8ba9bb750..ecae71a03b83c6 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -1678,8 +1678,10 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
+ 	 * scheduler drops all the references of it, hence protecting the VM
+ 	 * for this case is necessary.
+ 	 */
+-	if (flags & XE_VM_FLAG_LR_MODE)
++	if (flags & XE_VM_FLAG_LR_MODE) {
++		INIT_WORK(&vm->preempt.rebind_work, preempt_rebind_work_func);
+ 		xe_pm_runtime_get_noresume(xe);
++	}
+ 
+ 	if (flags & XE_VM_FLAG_FAULT_MODE) {
+ 		err = xe_svm_init(vm);
+@@ -1730,10 +1732,8 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
+ 		vm->batch_invalidate_tlb = true;
+ 	}
+ 
+-	if (vm->flags & XE_VM_FLAG_LR_MODE) {
+-		INIT_WORK(&vm->preempt.rebind_work, preempt_rebind_work_func);
++	if (vm->flags & XE_VM_FLAG_LR_MODE)
+ 		vm->batch_invalidate_tlb = false;
+-	}
+ 
+ 	/* Fill pt_root after allocating scratch tables */
+ 	for_each_tile(tile, xe, id) {
+diff --git a/drivers/hid/hid-appletb-kbd.c b/drivers/hid/hid-appletb-kbd.c
+index 029ccbaa1d1269..747945dd450a24 100644
+--- a/drivers/hid/hid-appletb-kbd.c
++++ b/drivers/hid/hid-appletb-kbd.c
+@@ -435,6 +435,8 @@ static int appletb_kbd_probe(struct hid_device *hdev, const struct hid_device_id
+ 	return 0;
+ 
+ close_hw:
++	if (kbd->backlight_dev)
++		put_device(&kbd->backlight_dev->dev);
+ 	hid_hw_close(hdev);
+ stop_hw:
+ 	hid_hw_stop(hdev);
+@@ -450,6 +452,9 @@ static void appletb_kbd_remove(struct hid_device *hdev)
+ 	input_unregister_handler(&kbd->inp_handler);
+ 	timer_delete_sync(&kbd->inactivity_timer);
+ 
++	if (kbd->backlight_dev)
++		put_device(&kbd->backlight_dev->dev);
++
+ 	hid_hw_close(hdev);
+ 	hid_hw_stop(hdev);
+ }
+diff --git a/drivers/hid/hid-lenovo.c b/drivers/hid/hid-lenovo.c
+index af29ba840522f9..a3c23a72316ac2 100644
+--- a/drivers/hid/hid-lenovo.c
++++ b/drivers/hid/hid-lenovo.c
+@@ -548,11 +548,14 @@ static void lenovo_features_set_cptkbd(struct hid_device *hdev)
+ 
+ 	/*
+ 	 * Tell the keyboard a driver understands it, and turn F7, F9, F11 into
+-	 * regular keys
++	 * regular keys (Compact only)
+ 	 */
+-	ret = lenovo_send_cmd_cptkbd(hdev, 0x01, 0x03);
+-	if (ret)
+-		hid_warn(hdev, "Failed to switch F7/9/11 mode: %d\n", ret);
++	if (hdev->product == USB_DEVICE_ID_LENOVO_CUSBKBD ||
++	    hdev->product == USB_DEVICE_ID_LENOVO_CBTKBD) {
++		ret = lenovo_send_cmd_cptkbd(hdev, 0x01, 0x03);
++		if (ret)
++			hid_warn(hdev, "Failed to switch F7/9/11 mode: %d\n", ret);
++	}
+ 
+ 	/* Switch middle button to native mode */
+ 	ret = lenovo_send_cmd_cptkbd(hdev, 0x09, 0x01);
+diff --git a/drivers/hid/intel-thc-hid/intel-quicki2c/quicki2c-protocol.c b/drivers/hid/intel-thc-hid/intel-quicki2c/quicki2c-protocol.c
+index f493df0d5dc4e3..a63f8c833252de 100644
+--- a/drivers/hid/intel-thc-hid/intel-quicki2c/quicki2c-protocol.c
++++ b/drivers/hid/intel-thc-hid/intel-quicki2c/quicki2c-protocol.c
+@@ -4,6 +4,7 @@
+ #include <linux/bitfield.h>
+ #include <linux/hid.h>
+ #include <linux/hid-over-i2c.h>
++#include <linux/unaligned.h>
+ 
+ #include "intel-thc-dev.h"
+ #include "intel-thc-dma.h"
+@@ -200,6 +201,9 @@ int quicki2c_set_report(struct quicki2c_device *qcdev, u8 report_type,
+ 
+ int quicki2c_reset(struct quicki2c_device *qcdev)
+ {
++	u16 input_reg = le16_to_cpu(qcdev->dev_desc.input_reg);
++	size_t read_len = HIDI2C_LENGTH_LEN;
++	u32 prd_len = read_len;
+ 	int ret;
+ 
+ 	qcdev->reset_ack = false;
+@@ -213,12 +217,32 @@ int quicki2c_reset(struct quicki2c_device *qcdev)
+ 
+ 	ret = wait_event_interruptible_timeout(qcdev->reset_ack_wq, qcdev->reset_ack,
+ 					       HIDI2C_RESET_TIMEOUT * HZ);
+-	if (ret <= 0 || !qcdev->reset_ack) {
++	if (qcdev->reset_ack)
++		return 0;
++
++	/*
++	 * Manually read the reset response if it wasn't received, in case the reset
++	 * interrupt was missed by the touch device or THC hardware.
++	 */
++	ret = thc_tic_pio_read(qcdev->thc_hw, input_reg, read_len, &prd_len,
++			       (u32 *)qcdev->input_buf);
++	if (ret) {
++		dev_err_once(qcdev->dev, "Read Reset Response failed, ret %d\n", ret);
++		return ret;
++	}
++
++	/*
++	 * Check the response packet length in the first 16 bits of the packet.
++	 * A zero length indicates a reset response; any other value does not.
++	 */
++	if (get_unaligned_le16(qcdev->input_buf)) {
+ 		dev_err_once(qcdev->dev,
+ 			     "Wait reset response timed out ret:%d timeout:%ds\n",
+ 			     ret, HIDI2C_RESET_TIMEOUT);
+ 		return -ETIMEDOUT;
+ 	}
+ 
++	qcdev->reset_ack = true;
++
+ 	return 0;
+ }
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index eaf099b2efdb0a..9a57504e51a19c 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -2048,14 +2048,18 @@ static int wacom_initialize_remotes(struct wacom *wacom)
+ 
+ 	remote->remote_dir = kobject_create_and_add("wacom_remote",
+ 						    &wacom->hdev->dev.kobj);
+-	if (!remote->remote_dir)
++	if (!remote->remote_dir) {
++		kfifo_free(&remote->remote_fifo);
+ 		return -ENOMEM;
++	}
+ 
+ 	error = sysfs_create_files(remote->remote_dir, remote_unpair_attrs);
+ 
+ 	if (error) {
+ 		hid_err(wacom->hdev,
+ 			"cannot create sysfs group err: %d\n", error);
++		kfifo_free(&remote->remote_fifo);
++		kobject_put(remote->remote_dir);
+ 		return error;
+ 	}
+ 
+@@ -2901,6 +2905,7 @@ static void wacom_remove(struct hid_device *hdev)
+ 	hid_hw_stop(hdev);
+ 
+ 	cancel_delayed_work_sync(&wacom->init_work);
++	cancel_delayed_work_sync(&wacom->aes_battery_work);
+ 	cancel_work_sync(&wacom->wireless_work);
+ 	cancel_work_sync(&wacom->battery_work);
+ 	cancel_work_sync(&wacom->remote_work);
+diff --git a/drivers/hwmon/isl28022.c b/drivers/hwmon/isl28022.c
+index 1fb9864635db9e..1b4fb0824d6c02 100644
+--- a/drivers/hwmon/isl28022.c
++++ b/drivers/hwmon/isl28022.c
+@@ -154,6 +154,7 @@ static int isl28022_read_current(struct device *dev, u32 attr, long *val)
+ 	struct isl28022_data *data = dev_get_drvdata(dev);
+ 	unsigned int regval;
+ 	int err;
++	u16 sign_bit;
+ 
+ 	switch (attr) {
+ 	case hwmon_curr_input:
+@@ -161,8 +162,9 @@ static int isl28022_read_current(struct device *dev, u32 attr, long *val)
+ 				  ISL28022_REG_CURRENT, &regval);
+ 		if (err < 0)
+ 			return err;
+-		*val = ((long)regval * 1250L * (long)data->gain) /
+-			(long)data->shunt;
++		sign_bit = (regval >> 15) & 0x01;
++		*val = (((long)(((u16)regval) & 0x7FFF) - (sign_bit * 32768)) *
++			1250L * (long)data->gain) / (long)data->shunt;
+ 		break;
+ 	default:
+ 		return -EOPNOTSUPP;
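
The current register is a 16-bit two's-complement value; the old expression
scaled the raw register as if it were unsigned, so discharge (negative)
currents read back as large positive values. A worked example of the
reconstruction, which is equivalent to a plain s16 cast:

	u16 regval = 0xFFFF;				/* two's complement: -1 */
	long a = (long)(regval & 0x7FFF)
		 - (((regval >> 15) & 1) * 32768);	/* -1, as in the fix  */
	long b = (s16)regval;				/* equivalent shortcut */

Both forms yield -1 before the 1250 * gain / shunt scaling is applied.
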
+diff --git a/drivers/hwmon/pmbus/max34440.c b/drivers/hwmon/pmbus/max34440.c
+index c9dda33831ff24..d6d556b0138532 100644
+--- a/drivers/hwmon/pmbus/max34440.c
++++ b/drivers/hwmon/pmbus/max34440.c
+@@ -34,16 +34,21 @@ enum chips { max34440, max34441, max34446, max34451, max34460, max34461 };
+ /*
+  * The whole max344* family have IOUT_OC_WARN_LIMIT and IOUT_OC_FAULT_LIMIT
+  * swapped from the standard pmbus spec addresses.
++ * For max34451, versions MAX34451ETNA6+ and later have this issue fixed.
+  */
+ #define MAX34440_IOUT_OC_WARN_LIMIT	0x46
+ #define MAX34440_IOUT_OC_FAULT_LIMIT	0x4A
+ 
++#define MAX34451ETNA6_MFR_REV		0x0012
++
+ #define MAX34451_MFR_CHANNEL_CONFIG	0xe4
+ #define MAX34451_MFR_CHANNEL_CONFIG_SEL_MASK	0x3f
+ 
+ struct max34440_data {
+ 	int id;
+ 	struct pmbus_driver_info info;
++	u8 iout_oc_warn_limit;
++	u8 iout_oc_fault_limit;
+ };
+ 
+ #define to_max34440_data(x)  container_of(x, struct max34440_data, info)
+@@ -60,11 +65,11 @@ static int max34440_read_word_data(struct i2c_client *client, int page,
+ 	switch (reg) {
+ 	case PMBUS_IOUT_OC_FAULT_LIMIT:
+ 		ret = pmbus_read_word_data(client, page, phase,
+-					   MAX34440_IOUT_OC_FAULT_LIMIT);
++					   data->iout_oc_fault_limit);
+ 		break;
+ 	case PMBUS_IOUT_OC_WARN_LIMIT:
+ 		ret = pmbus_read_word_data(client, page, phase,
+-					   MAX34440_IOUT_OC_WARN_LIMIT);
++					   data->iout_oc_warn_limit);
+ 		break;
+ 	case PMBUS_VIRT_READ_VOUT_MIN:
+ 		ret = pmbus_read_word_data(client, page, phase,
+@@ -133,11 +138,11 @@ static int max34440_write_word_data(struct i2c_client *client, int page,
+ 
+ 	switch (reg) {
+ 	case PMBUS_IOUT_OC_FAULT_LIMIT:
+-		ret = pmbus_write_word_data(client, page, MAX34440_IOUT_OC_FAULT_LIMIT,
++		ret = pmbus_write_word_data(client, page, data->iout_oc_fault_limit,
+ 					    word);
+ 		break;
+ 	case PMBUS_IOUT_OC_WARN_LIMIT:
+-		ret = pmbus_write_word_data(client, page, MAX34440_IOUT_OC_WARN_LIMIT,
++		ret = pmbus_write_word_data(client, page, data->iout_oc_warn_limit,
+ 					    word);
+ 		break;
+ 	case PMBUS_VIRT_RESET_POUT_HISTORY:
+@@ -235,6 +240,25 @@ static int max34451_set_supported_funcs(struct i2c_client *client,
+ 	 */
+ 
+ 	int page, rv;
++	bool max34451_na6 = false;
++
++	rv = i2c_smbus_read_word_data(client, PMBUS_MFR_REVISION);
++	if (rv < 0)
++		return rv;
++
++	if (rv >= MAX34451ETNA6_MFR_REV) {
++		max34451_na6 = true;
++		data->info.format[PSC_VOLTAGE_IN] = direct;
++		data->info.format[PSC_CURRENT_IN] = direct;
++		data->info.m[PSC_VOLTAGE_IN] = 1;
++		data->info.b[PSC_VOLTAGE_IN] = 0;
++		data->info.R[PSC_VOLTAGE_IN] = 3;
++		data->info.m[PSC_CURRENT_IN] = 1;
++		data->info.b[PSC_CURRENT_IN] = 0;
++		data->info.R[PSC_CURRENT_IN] = 2;
++		data->iout_oc_fault_limit = PMBUS_IOUT_OC_FAULT_LIMIT;
++		data->iout_oc_warn_limit = PMBUS_IOUT_OC_WARN_LIMIT;
++	}
+ 
+ 	for (page = 0; page < 16; page++) {
+ 		rv = i2c_smbus_write_byte_data(client, PMBUS_PAGE, page);
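
For context on the coefficients programmed above: in PMBus "direct" format a
real-world value X is recovered from the raw word Y as X = (Y * 10^-R - b) / m.
With m = 1, b = 0, R = 3 for PSC_VOLTAGE_IN, a raw reading decodes as, for
example:

	X = (Y * 10^-R - b) / m
	  = (12000 * 10^-3 - 0) / 1
	  = 12.000 V

i.e. the raw voltage word is simply millivolts, and with R = 2 the raw current
word counts units of 10 mA.
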
+@@ -251,16 +275,30 @@ static int max34451_set_supported_funcs(struct i2c_client *client,
+ 		case 0x20:
+ 			data->info.func[page] = PMBUS_HAVE_VOUT |
+ 				PMBUS_HAVE_STATUS_VOUT;
++
++			if (max34451_na6)
++				data->info.func[page] |= PMBUS_HAVE_VIN |
++					PMBUS_HAVE_STATUS_INPUT;
+ 			break;
+ 		case 0x21:
+ 			data->info.func[page] = PMBUS_HAVE_VOUT;
++
++			if (max34451_na6)
++				data->info.func[page] |= PMBUS_HAVE_VIN;
+ 			break;
+ 		case 0x22:
+ 			data->info.func[page] = PMBUS_HAVE_IOUT |
+ 				PMBUS_HAVE_STATUS_IOUT;
++
++			if (max34451_na6)
++				data->info.func[page] |= PMBUS_HAVE_IIN |
++					PMBUS_HAVE_STATUS_INPUT;
+ 			break;
+ 		case 0x23:
+ 			data->info.func[page] = PMBUS_HAVE_IOUT;
++
++			if (max34451_na6)
++				data->info.func[page] |= PMBUS_HAVE_IIN;
+ 			break;
+ 		default:
+ 			break;
+@@ -494,6 +532,8 @@ static int max34440_probe(struct i2c_client *client)
+ 		return -ENOMEM;
+ 	data->id = i2c_match_id(max34440_id, client)->driver_data;
+ 	data->info = max34440_info[data->id];
++	data->iout_oc_fault_limit = MAX34440_IOUT_OC_FAULT_LIMIT;
++	data->iout_oc_warn_limit = MAX34440_IOUT_OC_WARN_LIMIT;
+ 
+ 	if (data->id == max34451) {
+ 		rv = max34451_set_supported_funcs(client, data);
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index d3523f0262af82..dfcb30da2e20b8 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -131,7 +131,8 @@ coresight_find_out_connection(struct coresight_device *csdev,
+ 
+ static inline u32 coresight_read_claim_tags(struct coresight_device *csdev)
+ {
+-	return csdev_access_relaxed_read32(&csdev->access, CORESIGHT_CLAIMCLR);
++	return FIELD_GET(CORESIGHT_CLAIM_MASK,
++			 csdev_access_relaxed_read32(&csdev->access, CORESIGHT_CLAIMCLR));
+ }
+ 
+ static inline bool coresight_is_claimed_self_hosted(struct coresight_device *csdev)
+diff --git a/drivers/hwtracing/coresight/coresight-priv.h b/drivers/hwtracing/coresight/coresight-priv.h
+index 82644aff8d2b78..38bb4e8b50ef6b 100644
+--- a/drivers/hwtracing/coresight/coresight-priv.h
++++ b/drivers/hwtracing/coresight/coresight-priv.h
+@@ -35,6 +35,7 @@ extern const struct device_type coresight_dev_type[];
+  * Coresight device CLAIM protocol.
+  * See PSCI - ARM DEN 0022D, Section: 6.8.1 Debug and Trace save and restore.
+  */
++#define CORESIGHT_CLAIM_MASK		GENMASK(1, 0)
+ #define CORESIGHT_CLAIM_SELF_HOSTED	BIT(1)
+ 
+ #define TIMEOUT_US		100
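
Only bits [1:0] of the CLAIMCLR register carry the self-hosted/external claim
tags; on some implementations the upper bits are not guaranteed to read as
zero, so an unmasked read could make an unclaimed device look claimed. A worked
example of the masked read:

	/* hypothetical raw CLAIMCLR readback on a quirky implementation */
	u32 raw  = 0xF0000002;
	u32 tags = FIELD_GET(GENMASK(1, 0), raw);	/* 0x2: SELF_HOSTED only */
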
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index 9e5d454d8318e8..2f6945e02a4b47 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -1008,7 +1008,7 @@ static inline int i2c_imx_isr_read(struct imx_i2c_struct *i2c_imx)
+ 	/* setup bus to read data */
+ 	temp = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2CR);
+ 	temp &= ~I2CR_MTX;
+-	if (i2c_imx->msg->len - 1)
++	if ((i2c_imx->msg->len - 1) || (i2c_imx->msg->flags & I2C_M_RECV_LEN))
+ 		temp &= ~I2CR_TXAK;
+ 
+ 	imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2CR);
+@@ -1063,6 +1063,7 @@ static inline void i2c_imx_isr_read_block_data_len(struct imx_i2c_struct *i2c_im
+ 		wake_up(&i2c_imx->queue);
+ 	}
+ 	i2c_imx->msg->len += len;
++	i2c_imx->msg->buf[i2c_imx->msg_buf_idx++] = len;
+ }
+ 
+ static irqreturn_t i2c_imx_master_isr(struct imx_i2c_struct *i2c_imx, unsigned int status)
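
Both i2c-imx changes above follow from the SMBus block-read protocol: with
I2C_M_RECV_LEN the first received byte is the byte count, so it must be ACKed
even when msg->len - 1 starts at zero, and the count byte itself belongs at the
start of the caller's buffer. A sketch of such a message from the caller's
side, illustrative rather than taken from this driver:

	u8 buf[1 + I2C_SMBUS_BLOCK_MAX];
	struct i2c_msg msg = {
		.addr  = client->addr,
		.flags = I2C_M_RD | I2C_M_RECV_LEN,
		.len   = 1,	/* grows by buf[0] once the count byte arrives */
		.buf   = buf,	/* buf[0] receives the byte count */
	};
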
+diff --git a/drivers/i2c/busses/i2c-omap.c b/drivers/i2c/busses/i2c-omap.c
+index 876791d20ed55e..5e46dc2cbbd75d 100644
+--- a/drivers/i2c/busses/i2c-omap.c
++++ b/drivers/i2c/busses/i2c-omap.c
+@@ -1461,13 +1461,13 @@ omap_i2c_probe(struct platform_device *pdev)
+ 		if (IS_ERR(mux_state)) {
+ 			r = PTR_ERR(mux_state);
+ 			dev_dbg(&pdev->dev, "failed to get I2C mux: %d\n", r);
+-			goto err_disable_pm;
++			goto err_put_pm;
+ 		}
+ 		omap->mux_state = mux_state;
+ 		r = mux_state_select(omap->mux_state);
+ 		if (r) {
+ 			dev_err(&pdev->dev, "failed to select I2C mux: %d\n", r);
+-			goto err_disable_pm;
++			goto err_put_pm;
+ 		}
+ 	}
+ 
+@@ -1515,6 +1515,9 @@ omap_i2c_probe(struct platform_device *pdev)
+ 
+ err_unuse_clocks:
+ 	omap_i2c_write_reg(omap, OMAP_I2C_CON_REG, 0);
++	if (omap->mux_state)
++		mux_state_deselect(omap->mux_state);
++err_put_pm:
+ 	pm_runtime_dont_use_autosuspend(omap->dev);
+ 	pm_runtime_put_sync(omap->dev);
+ err_disable_pm:
+diff --git a/drivers/i2c/busses/i2c-robotfuzz-osif.c b/drivers/i2c/busses/i2c-robotfuzz-osif.c
+index 80d45079b763c0..e0a76fb5bc31f5 100644
+--- a/drivers/i2c/busses/i2c-robotfuzz-osif.c
++++ b/drivers/i2c/busses/i2c-robotfuzz-osif.c
+@@ -111,6 +111,11 @@ static u32 osif_func(struct i2c_adapter *adapter)
+ 	return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL;
+ }
+ 
++/* prevent invalid 0-length usb_control_msg */
++static const struct i2c_adapter_quirks osif_quirks = {
++	.flags = I2C_AQ_NO_ZERO_LEN_READ,
++};
++
+ static const struct i2c_algorithm osif_algorithm = {
+ 	.xfer = osif_xfer,
+ 	.functionality = osif_func,
+@@ -143,6 +148,7 @@ static int osif_probe(struct usb_interface *interface,
+ 
+ 	priv->adapter.owner = THIS_MODULE;
+ 	priv->adapter.class = I2C_CLASS_HWMON;
++	priv->adapter.quirks = &osif_quirks;
+ 	priv->adapter.algo = &osif_algorithm;
+ 	priv->adapter.algo_data = priv;
+ 	snprintf(priv->adapter.name, sizeof(priv->adapter.name),
+diff --git a/drivers/i2c/busses/i2c-tiny-usb.c b/drivers/i2c/busses/i2c-tiny-usb.c
+index 0f2ed181b2665c..0cc7c0a816fc02 100644
+--- a/drivers/i2c/busses/i2c-tiny-usb.c
++++ b/drivers/i2c/busses/i2c-tiny-usb.c
+@@ -138,6 +138,11 @@ static u32 usb_func(struct i2c_adapter *adapter)
+ 	return ret;
+ }
+ 
++/* prevent invalid 0-length usb_control_msg */
++static const struct i2c_adapter_quirks usb_quirks = {
++	.flags = I2C_AQ_NO_ZERO_LEN_READ,
++};
++
+ /* This is the actual algorithm we define */
+ static const struct i2c_algorithm usb_algorithm = {
+ 	.xfer = usb_xfer,
+@@ -246,6 +251,7 @@ static int i2c_tiny_usb_probe(struct usb_interface *interface,
+ 	/* setup i2c adapter description */
+ 	dev->adapter.owner = THIS_MODULE;
+ 	dev->adapter.class = I2C_CLASS_HWMON;
++	dev->adapter.quirks = &usb_quirks;
+ 	dev->adapter.algo = &usb_algorithm;
+ 	dev->adapter.algo_data = dev;
+ 	snprintf(dev->adapter.name, sizeof(dev->adapter.name),
+diff --git a/drivers/iio/adc/ad7606_spi.c b/drivers/iio/adc/ad7606_spi.c
+index b37458ce3c7087..5553a44ff83bc2 100644
+--- a/drivers/iio/adc/ad7606_spi.c
++++ b/drivers/iio/adc/ad7606_spi.c
+@@ -174,11 +174,13 @@ static int ad7616_sw_mode_config(struct iio_dev *indio_dev)
+ static int ad7606B_sw_mode_config(struct iio_dev *indio_dev)
+ {
+ 	struct ad7606_state *st = iio_priv(indio_dev);
++	int ret;
+ 
+ 	/* Configure device spi to output on a single channel */
+-	st->bops->reg_write(st,
+-			    AD7606_CONFIGURATION_REGISTER,
+-			    AD7606_SINGLE_DOUT);
++	ret = st->bops->reg_write(st, AD7606_CONFIGURATION_REGISTER,
++				  AD7606_SINGLE_DOUT);
++	if (ret)
++		return ret;
+ 
+ 	/*
+ 	 * Scale can be configured individually for each channel
+diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c
+index 6c37f8e21120b8..4c5f8d29a559fe 100644
+--- a/drivers/iio/adc/ad_sigma_delta.c
++++ b/drivers/iio/adc/ad_sigma_delta.c
+@@ -587,6 +587,10 @@ static irqreturn_t ad_sd_trigger_handler(int irq, void *p)
+ 		 * byte set to zero. */
+ 		ad_sd_read_reg_raw(sigma_delta, data_reg, transfer_size, &data[1]);
+ 		break;
++
++	default:
++		dev_err_ratelimited(&indio_dev->dev, "Unsupported reg_size: %u\n", reg_size);
++		goto irq_handled;
+ 	}
+ 
+ 	/*
+diff --git a/drivers/iio/dac/adi-axi-dac.c b/drivers/iio/dac/adi-axi-dac.c
+index 05b374e137d35d..fd93ed31328353 100644
+--- a/drivers/iio/dac/adi-axi-dac.c
++++ b/drivers/iio/dac/adi-axi-dac.c
+@@ -84,6 +84,7 @@
+ #define AXI_DAC_CHAN_CNTRL_7_REG(c)		(0x0418 + (c) * 0x40)
+ #define   AXI_DAC_CHAN_CNTRL_7_DATA_SEL		GENMASK(3, 0)
+ 
++#define AXI_DAC_CHAN_CNTRL_MAX			15
+ #define AXI_DAC_RD_ADDR(x)			(BIT(7) | (x))
+ 
+ /* 360 degrees in rad */
+@@ -186,6 +187,9 @@ static int __axi_dac_frequency_get(struct axi_dac_state *st, unsigned int chan,
+ 	u32 reg, raw;
+ 	int ret;
+ 
++	if (chan > AXI_DAC_CHAN_CNTRL_MAX)
++		return -EINVAL;
++
+ 	if (!st->dac_clk) {
+ 		dev_err(st->dev, "Sampling rate is 0...\n");
+ 		return -EINVAL;
+@@ -230,6 +234,9 @@ static int axi_dac_scale_get(struct axi_dac_state *st,
+ 	int ret, vals[2];
+ 	u32 reg, raw;
+ 
++	if (chan->channel > AXI_DAC_CHAN_CNTRL_MAX)
++		return -EINVAL;
++
+ 	if (tone_2)
+ 		reg = AXI_DAC_CHAN_CNTRL_3_REG(chan->channel);
+ 	else
+@@ -264,6 +271,9 @@ static int axi_dac_phase_get(struct axi_dac_state *st,
+ 	u32 reg, raw, phase;
+ 	int ret, vals[2];
+ 
++	if (chan->channel > AXI_DAC_CHAN_CNTRL_MAX)
++		return -EINVAL;
++
+ 	if (tone_2)
+ 		reg = AXI_DAC_CHAN_CNTRL_4_REG(chan->channel);
+ 	else
+@@ -291,6 +301,9 @@ static int __axi_dac_frequency_set(struct axi_dac_state *st, unsigned int chan,
+ 	u16 raw;
+ 	int ret;
+ 
++	if (chan > AXI_DAC_CHAN_CNTRL_MAX)
++		return -EINVAL;
++
+ 	if (!sample_rate || freq > sample_rate / 2) {
+ 		dev_err(st->dev, "Invalid frequency(%u) dac_clk(%llu)\n",
+ 			freq, sample_rate);
+@@ -342,6 +355,9 @@ static int axi_dac_scale_set(struct axi_dac_state *st,
+ 	u32 raw = 0, reg;
+ 	int ret;
+ 
++	if (chan->channel > AXI_DAC_CHAN_CNTRL_MAX)
++		return -EINVAL;
++
+ 	ret = iio_str_to_fixpoint(buf, 100000, &integer, &frac);
+ 	if (ret)
+ 		return ret;
+@@ -385,6 +401,9 @@ static int axi_dac_phase_set(struct axi_dac_state *st,
+ 	u32 raw, reg;
+ 	int ret;
+ 
++	if (chan->channel > AXI_DAC_CHAN_CNTRL_MAX)
++		return -EINVAL;
++
+ 	ret = iio_str_to_fixpoint(buf, 100000, &integer, &frac);
+ 	if (ret)
+ 		return ret;
+@@ -493,6 +512,9 @@ static int axi_dac_data_source_set(struct iio_backend *back, unsigned int chan,
+ {
+ 	struct axi_dac_state *st = iio_backend_get_priv(back);
+ 
++	if (chan > AXI_DAC_CHAN_CNTRL_MAX)
++		return -EINVAL;
++
+ 	switch (data) {
+ 	case IIO_BACKEND_INTERNAL_CONTINUOUS_WAVE:
+ 		return regmap_update_bits(st->regmap,
+@@ -521,6 +543,8 @@ static int axi_dac_set_sample_rate(struct iio_backend *back, unsigned int chan,
+ 	unsigned int freq;
+ 	int ret, tone;
+ 
++	if (chan > AXI_DAC_CHAN_CNTRL_MAX)
++		return -EINVAL;
+ 	if (!sample_rate)
+ 		return -EINVAL;
+ 	if (st->reg_config & AXI_DAC_CONFIG_DDS_DISABLE)
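
The repeated guards in the adi-axi-dac hunks all protect the CHAN_CNTRL register macros, which are plain multiply-and-add expressions and would happily compute an address past the register map for an oversized channel. A small standalone sketch of validating before the arithmetic; the constants mirror the ones defined in the hunk, the function is illustrative:

    #include <errno.h>
    #include <stdio.h>

    #define CHAN_REG(c)     (0x0418 + (c) * 0x40)   /* same shape as the driver macro */
    #define CHAN_MAX        15

    static int chan_reg(unsigned int chan, unsigned int *reg)
    {
        /* Validate before the arithmetic: the macro itself has no bound. */
        if (chan > CHAN_MAX)
            return -EINVAL;
        *reg = CHAN_REG(chan);
        return 0;
    }

    int main(void)
    {
        unsigned int reg;

        if (chan_reg(99, &reg))
            puts("rejected channel 99");
        return 0;
    }
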
+diff --git a/drivers/iio/light/al3000a.c b/drivers/iio/light/al3000a.c
+index e2fbb1270040f4..6d5115b2a06c5a 100644
+--- a/drivers/iio/light/al3000a.c
++++ b/drivers/iio/light/al3000a.c
+@@ -85,12 +85,17 @@ static void al3000a_set_pwr_off(void *_data)
+ 
+ static int al3000a_init(struct al3000a_data *data)
+ {
++	struct device *dev = regmap_get_device(data->regmap);
+ 	int ret;
+ 
+ 	ret = al3000a_set_pwr_on(data);
+ 	if (ret)
+ 		return ret;
+ 
++	ret = devm_add_action_or_reset(dev, al3000a_set_pwr_off, data);
++	if (ret)
++		return dev_err_probe(dev, ret, "failed to add action\n");
++
+ 	ret = regmap_write(data->regmap, AL3000A_REG_SYSTEM, AL3000A_CONFIG_RESET);
+ 	if (ret)
+ 		return ret;
+@@ -157,10 +162,6 @@ static int al3000a_probe(struct i2c_client *client)
+ 	if (ret)
+ 		return dev_err_probe(dev, ret, "failed to init ALS\n");
+ 
+-	ret = devm_add_action_or_reset(dev, al3000a_set_pwr_off, data);
+-	if (ret)
+-		return dev_err_probe(dev, ret, "failed to add action\n");
+-
+ 	return devm_iio_device_register(dev, indio_dev);
+ }
+ 
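
The al3000a change moves the devm power-off action to right after the power-on it undoes, so a failure in any later init step (the reset and configuration writes) still powers the device back down; before, the action was only registered once init had fully succeeded. A toy model of the register-the-undo-first ordering, where a small action stack stands in for the devm core, which runs queued actions automatically when probe fails:

    #include <stdio.h>

    typedef void (*action_fn)(void);
    static action_fn actions[4];
    static int nactions;

    static void set_pwr_off(void) { puts("power off"); }

    /* Toy devm_add_action_or_reset(): queue an undo for later unwinding. */
    static int add_action_or_reset(action_fn fn)
    {
        actions[nactions++] = fn;
        return 0;
    }

    static void unwind(void)        /* devm runs this on probe failure */
    {
        while (nactions)
            actions[--nactions]();
    }

    static int probe(void)
    {
        puts("power on");
        /* Register the undo immediately, before any step that can fail. */
        if (add_action_or_reset(set_pwr_off))
            return -1;

        /* a later init step fails ... */
        unwind();               /* ... and the queued undo still powers off */
        return -1;
    }

    int main(void) { return probe() ? 1 : 0; }
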
+diff --git a/drivers/iio/light/hid-sensor-prox.c b/drivers/iio/light/hid-sensor-prox.c
+index 4c65b32d34ce41..46f788b0bc3e21 100644
+--- a/drivers/iio/light/hid-sensor-prox.c
++++ b/drivers/iio/light/hid-sensor-prox.c
+@@ -215,6 +215,9 @@ static int prox_capture_sample(struct hid_sensor_hub_device *hsdev,
+ 	case 1:
+ 		prox_state->human_presence[chan] = *(u8 *)raw_data * multiplier;
+ 		return 0;
++	case 2:
++		prox_state->human_presence[chan] = *(u16 *)raw_data * multiplier;
++		return 0;
+ 	case 4:
+ 		prox_state->human_presence[chan] = *(u32 *)raw_data * multiplier;
+ 		return 0;
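
The prox hunk fills in the missing 2-byte case of a size-switched decoder, so 16-bit presence samples are scaled like the 1- and 4-byte ones instead of being dropped. A minimal standalone version of the same switch:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode a raw sample by report size, mirroring the driver's switch. */
    static uint32_t decode(const void *raw, int size, uint32_t mult)
    {
        switch (size) {
        case 1: return *(const uint8_t *)raw * mult;
        case 2: return *(const uint16_t *)raw * mult;   /* the added case */
        case 4: return *(const uint32_t *)raw * mult;
        default: return 0;
        }
    }

    int main(void)
    {
        uint16_t sample = 300;

        printf("%u\n", decode(&sample, sizeof(sample), 2));   /* 600 */
        return 0;
    }
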
+diff --git a/drivers/iio/pressure/zpa2326.c b/drivers/iio/pressure/zpa2326.c
+index 9db1c94dfc188f..b2e04368532a09 100644
+--- a/drivers/iio/pressure/zpa2326.c
++++ b/drivers/iio/pressure/zpa2326.c
+@@ -582,7 +582,7 @@ static int zpa2326_fill_sample_buffer(struct iio_dev               *indio_dev,
+ 	struct {
+ 		u32 pressure;
+ 		u16 temperature;
+-		u64 timestamp;
++		aligned_s64 timestamp;
+ 	}   sample;
+ 	int err;
+ 
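
aligned_s64 forces the on-stack sample's timestamp slot to 8-byte alignment, which a plain u64 member does not guarantee on 32-bit ABIs where 64-bit types may be only 4-byte aligned; IIO pushes the timestamp into that slot and expects the alignment. A userspace stand-in for the typedef, using the GCC/Clang attribute the kernel's __aligned(8) expands to; the struct is the hunk's sample layout, reduced:

    #include <stdint.h>
    #include <stdio.h>

    /* Mirrors the kernel's "typedef s64 __aligned(8) aligned_s64". */
    typedef int64_t aligned_s64 __attribute__((aligned(8)));

    struct sample {
        uint32_t pressure;
        uint16_t temperature;
        aligned_s64 timestamp;  /* always lands on an 8-byte boundary */
    };

    int main(void)
    {
        printf("alignof = %zu, sizeof = %zu\n",
               _Alignof(aligned_s64), sizeof(struct sample));
        return 0;
    }
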
+diff --git a/drivers/leds/led-class-multicolor.c b/drivers/leds/led-class-multicolor.c
+index b2a87c9948165e..fd66d2bdeace8e 100644
+--- a/drivers/leds/led-class-multicolor.c
++++ b/drivers/leds/led-class-multicolor.c
+@@ -59,7 +59,8 @@ static ssize_t multi_intensity_store(struct device *dev,
+ 	for (i = 0; i < mcled_cdev->num_colors; i++)
+ 		mcled_cdev->subled_info[i].intensity = intensity_value[i];
+ 
+-	led_set_brightness(led_cdev, led_cdev->brightness);
++	if (!test_bit(LED_BLINK_SW, &led_cdev->work_flags))
++		led_set_brightness(led_cdev, led_cdev->brightness);
+ 	ret = size;
+ err_out:
+ 	mutex_unlock(&led_cdev->led_access);
+diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
+index 0593b4d0368595..aea0e690b63ee0 100644
+--- a/drivers/mailbox/mailbox.c
++++ b/drivers/mailbox/mailbox.c
+@@ -486,8 +486,8 @@ void mbox_free_channel(struct mbox_chan *chan)
+ 	if (chan->txdone_method == TXDONE_BY_ACK)
+ 		chan->txdone_method = TXDONE_BY_POLL;
+ 
+-	module_put(chan->mbox->dev->driver->owner);
+ 	spin_unlock_irqrestore(&chan->lock, flags);
++	module_put(chan->mbox->dev->driver->owner);
+ }
+ EXPORT_SYMBOL_GPL(mbox_free_channel);
+ 
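
The mailbox fix shrinks the critical section: channel bookkeeping stays under the spinlock, while the module reference drop moves after the unlock, keeping non-essential work out of the locked region. A toy shape of the same reordering, with a pthread mutex standing in for the spinlock:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t chan_lock = PTHREAD_MUTEX_INITIALIZER;

    static void drop_module_ref(void) { puts("module ref dropped"); }

    static void free_channel(void)
    {
        pthread_mutex_lock(&chan_lock);
        /* ... clear channel state only ... */
        pthread_mutex_unlock(&chan_lock);

        drop_module_ref();      /* previously ran while the lock was held */
    }

    int main(void) { free_channel(); return 0; }
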
+diff --git a/drivers/md/bcache/Kconfig b/drivers/md/bcache/Kconfig
+index d4697e79d5a391..b2d10063d35fb4 100644
+--- a/drivers/md/bcache/Kconfig
++++ b/drivers/md/bcache/Kconfig
+@@ -5,7 +5,6 @@ config BCACHE
+ 	select BLOCK_HOLDER_DEPRECATED if SYSFS
+ 	select CRC64
+ 	select CLOSURES
+-	select MIN_HEAP
+ 	help
+ 	Allows a block device to be used as cache for other devices; uses
+ 	a btree for indexing and the layout is optimized for SSDs.
+diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
+index 8998e61efa406f..48ce750bf70af9 100644
+--- a/drivers/md/bcache/alloc.c
++++ b/drivers/md/bcache/alloc.c
+@@ -164,61 +164,40 @@ static void bch_invalidate_one_bucket(struct cache *ca, struct bucket *b)
+  * prio is worth 1/8th of what INITIAL_PRIO is worth.
+  */
+ 
+-static inline unsigned int new_bucket_prio(struct cache *ca, struct bucket *b)
+-{
+-	unsigned int min_prio = (INITIAL_PRIO - ca->set->min_prio) / 8;
+-
+-	return (b->prio - ca->set->min_prio + min_prio) * GC_SECTORS_USED(b);
+-}
+-
+-static inline bool new_bucket_max_cmp(const void *l, const void *r, void *args)
+-{
+-	struct bucket **lhs = (struct bucket **)l;
+-	struct bucket **rhs = (struct bucket **)r;
+-	struct cache *ca = args;
+-
+-	return new_bucket_prio(ca, *lhs) > new_bucket_prio(ca, *rhs);
+-}
+-
+-static inline bool new_bucket_min_cmp(const void *l, const void *r, void *args)
+-{
+-	struct bucket **lhs = (struct bucket **)l;
+-	struct bucket **rhs = (struct bucket **)r;
+-	struct cache *ca = args;
++#define bucket_prio(b)							\
++({									\
++	unsigned int min_prio = (INITIAL_PRIO - ca->set->min_prio) / 8;	\
++									\
++	(b->prio - ca->set->min_prio + min_prio) * GC_SECTORS_USED(b);	\
++})
+ 
+-	return new_bucket_prio(ca, *lhs) < new_bucket_prio(ca, *rhs);
+-}
++#define bucket_max_cmp(l, r)	(bucket_prio(l) < bucket_prio(r))
++#define bucket_min_cmp(l, r)	(bucket_prio(l) > bucket_prio(r))
+ 
+ static void invalidate_buckets_lru(struct cache *ca)
+ {
+ 	struct bucket *b;
+-	const struct min_heap_callbacks bucket_max_cmp_callback = {
+-		.less = new_bucket_max_cmp,
+-		.swp = NULL,
+-	};
+-	const struct min_heap_callbacks bucket_min_cmp_callback = {
+-		.less = new_bucket_min_cmp,
+-		.swp = NULL,
+-	};
++	ssize_t i;
+ 
+-	ca->heap.nr = 0;
++	ca->heap.used = 0;
+ 
+ 	for_each_bucket(b, ca) {
+ 		if (!bch_can_invalidate_bucket(ca, b))
+ 			continue;
+ 
+-		if (!min_heap_full(&ca->heap))
+-			min_heap_push(&ca->heap, &b, &bucket_max_cmp_callback, ca);
+-		else if (!new_bucket_max_cmp(&b, min_heap_peek(&ca->heap), ca)) {
++		if (!heap_full(&ca->heap))
++			heap_add(&ca->heap, b, bucket_max_cmp);
++		else if (bucket_max_cmp(b, heap_peek(&ca->heap))) {
+ 			ca->heap.data[0] = b;
+-			min_heap_sift_down(&ca->heap, 0, &bucket_max_cmp_callback, ca);
++			heap_sift(&ca->heap, 0, bucket_max_cmp);
+ 		}
+ 	}
+ 
+-	min_heapify_all(&ca->heap, &bucket_min_cmp_callback, ca);
++	for (i = ca->heap.used / 2 - 1; i >= 0; --i)
++		heap_sift(&ca->heap, i, bucket_min_cmp);
+ 
+ 	while (!fifo_full(&ca->free_inc)) {
+-		if (!ca->heap.nr) {
++		if (!heap_pop(&ca->heap, b, bucket_min_cmp)) {
+ 			/*
+ 			 * We don't want to be calling invalidate_buckets()
+ 			 * multiple times when it can't do anything
+@@ -227,8 +206,6 @@ static void invalidate_buckets_lru(struct cache *ca)
+ 			wake_up_gc(ca->set);
+ 			return;
+ 		}
+-		b = min_heap_peek(&ca->heap)[0];
+-		min_heap_pop(&ca->heap, &bucket_min_cmp_callback, ca);
+ 
+ 		bch_invalidate_one_bucket(ca, b);
+ 	}
+diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
+index 785b0d9008face..1d33e40d26ea51 100644
+--- a/drivers/md/bcache/bcache.h
++++ b/drivers/md/bcache/bcache.h
+@@ -458,7 +458,7 @@ struct cache {
+ 	/* Allocation stuff: */
+ 	struct bucket		*buckets;
+ 
+-	DEFINE_MIN_HEAP(struct bucket *, cache_heap) heap;
++	DECLARE_HEAP(struct bucket *, heap);
+ 
+ 	/*
+ 	 * If nonzero, we know we aren't going to find any buckets to invalidate
+diff --git a/drivers/md/bcache/bset.c b/drivers/md/bcache/bset.c
+index 68258a16e125cf..463eb13bd0b2a7 100644
+--- a/drivers/md/bcache/bset.c
++++ b/drivers/md/bcache/bset.c
+@@ -54,11 +54,9 @@ void bch_dump_bucket(struct btree_keys *b)
+ int __bch_count_data(struct btree_keys *b)
+ {
+ 	unsigned int ret = 0;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	struct bkey *k;
+ 
+-	min_heap_init(&iter.heap, NULL, MAX_BSETS);
+-
+ 	if (b->ops->is_extents)
+ 		for_each_key(b, k, &iter)
+ 			ret += KEY_SIZE(k);
+@@ -69,11 +67,9 @@ void __bch_check_keys(struct btree_keys *b, const char *fmt, ...)
+ {
+ 	va_list args;
+ 	struct bkey *k, *p = NULL;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	const char *err;
+ 
+-	min_heap_init(&iter.heap, NULL, MAX_BSETS);
+-
+ 	for_each_key(b, k, &iter) {
+ 		if (b->ops->is_extents) {
+ 			err = "Keys out of order";
+@@ -114,9 +110,9 @@ void __bch_check_keys(struct btree_keys *b, const char *fmt, ...)
+ 
+ static void bch_btree_iter_next_check(struct btree_iter *iter)
+ {
+-	struct bkey *k = iter->heap.data->k, *next = bkey_next(k);
++	struct bkey *k = iter->data->k, *next = bkey_next(k);
+ 
+-	if (next < iter->heap.data->end &&
++	if (next < iter->data->end &&
+ 	    bkey_cmp(k, iter->b->ops->is_extents ?
+ 		     &START_KEY(next) : next) > 0) {
+ 		bch_dump_bucket(iter->b);
+@@ -883,14 +879,12 @@ unsigned int bch_btree_insert_key(struct btree_keys *b, struct bkey *k,
+ 	unsigned int status = BTREE_INSERT_STATUS_NO_INSERT;
+ 	struct bset *i = bset_tree_last(b)->data;
+ 	struct bkey *m, *prev = NULL;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	struct bkey preceding_key_on_stack = ZERO_KEY;
+ 	struct bkey *preceding_key_p = &preceding_key_on_stack;
+ 
+ 	BUG_ON(b->ops->is_extents && !KEY_SIZE(k));
+ 
+-	min_heap_init(&iter.heap, NULL, MAX_BSETS);
+-
+ 	/*
+ 	 * If k has preceding key, preceding_key_p will be set to address
+ 	 *  of k's preceding key; otherwise preceding_key_p will be set
+@@ -901,9 +895,9 @@ unsigned int bch_btree_insert_key(struct btree_keys *b, struct bkey *k,
+ 	else
+ 		preceding_key(k, &preceding_key_p);
+ 
+-	m = bch_btree_iter_init(b, &iter, preceding_key_p);
++	m = bch_btree_iter_stack_init(b, &iter, preceding_key_p);
+ 
+-	if (b->ops->insert_fixup(b, k, &iter, replace_key))
++	if (b->ops->insert_fixup(b, k, &iter.iter, replace_key))
+ 		return status;
+ 
+ 	status = BTREE_INSERT_STATUS_INSERT;
+@@ -1083,94 +1077,79 @@ struct bkey *__bch_bset_search(struct btree_keys *b, struct bset_tree *t,
+ 
+ /* Btree iterator */
+ 
+-typedef bool (new_btree_iter_cmp_fn)(const void *, const void *, void *);
++typedef bool (btree_iter_cmp_fn)(struct btree_iter_set,
++				 struct btree_iter_set);
+ 
+-static inline bool new_btree_iter_cmp(const void *l, const void *r, void __always_unused *args)
++static inline bool btree_iter_cmp(struct btree_iter_set l,
++				  struct btree_iter_set r)
+ {
+-	const struct btree_iter_set *_l = l;
+-	const struct btree_iter_set *_r = r;
+-
+-	return bkey_cmp(_l->k, _r->k) <= 0;
++	return bkey_cmp(l.k, r.k) > 0;
+ }
+ 
+ static inline bool btree_iter_end(struct btree_iter *iter)
+ {
+-	return !iter->heap.nr;
++	return !iter->used;
+ }
+ 
+ void bch_btree_iter_push(struct btree_iter *iter, struct bkey *k,
+ 			 struct bkey *end)
+ {
+-	const struct min_heap_callbacks callbacks = {
+-		.less = new_btree_iter_cmp,
+-		.swp = NULL,
+-	};
+-
+ 	if (k != end)
+-		BUG_ON(!min_heap_push(&iter->heap,
+-				 &((struct btree_iter_set) { k, end }),
+-				 &callbacks,
+-				 NULL));
++		BUG_ON(!heap_add(iter,
++				 ((struct btree_iter_set) { k, end }),
++				 btree_iter_cmp));
+ }
+ 
+-static struct bkey *__bch_btree_iter_init(struct btree_keys *b,
+-					  struct btree_iter *iter,
+-					  struct bkey *search,
+-					  struct bset_tree *start)
++static struct bkey *__bch_btree_iter_stack_init(struct btree_keys *b,
++						struct btree_iter_stack *iter,
++						struct bkey *search,
++						struct bset_tree *start)
+ {
+ 	struct bkey *ret = NULL;
+ 
+-	iter->heap.size = ARRAY_SIZE(iter->heap.preallocated);
+-	iter->heap.nr = 0;
++	iter->iter.size = ARRAY_SIZE(iter->stack_data);
++	iter->iter.used = 0;
+ 
+ #ifdef CONFIG_BCACHE_DEBUG
+-	iter->b = b;
++	iter->iter.b = b;
+ #endif
+ 
+ 	for (; start <= bset_tree_last(b); start++) {
+ 		ret = bch_bset_search(b, start, search);
+-		bch_btree_iter_push(iter, ret, bset_bkey_last(start->data));
++		bch_btree_iter_push(&iter->iter, ret, bset_bkey_last(start->data));
+ 	}
+ 
+ 	return ret;
+ }
+ 
+-struct bkey *bch_btree_iter_init(struct btree_keys *b,
+-				 struct btree_iter *iter,
++struct bkey *bch_btree_iter_stack_init(struct btree_keys *b,
++				 struct btree_iter_stack *iter,
+ 				 struct bkey *search)
+ {
+-	return __bch_btree_iter_init(b, iter, search, b->set);
++	return __bch_btree_iter_stack_init(b, iter, search, b->set);
+ }
+ 
+ static inline struct bkey *__bch_btree_iter_next(struct btree_iter *iter,
+-						 new_btree_iter_cmp_fn *cmp)
++						 btree_iter_cmp_fn *cmp)
+ {
+ 	struct btree_iter_set b __maybe_unused;
+ 	struct bkey *ret = NULL;
+-	const struct min_heap_callbacks callbacks = {
+-		.less = cmp,
+-		.swp = NULL,
+-	};
+ 
+ 	if (!btree_iter_end(iter)) {
+ 		bch_btree_iter_next_check(iter);
+ 
+-		ret = iter->heap.data->k;
+-		iter->heap.data->k = bkey_next(iter->heap.data->k);
++		ret = iter->data->k;
++		iter->data->k = bkey_next(iter->data->k);
+ 
+-		if (iter->heap.data->k > iter->heap.data->end) {
++		if (iter->data->k > iter->data->end) {
+ 			WARN_ONCE(1, "bset was corrupt!\n");
+-			iter->heap.data->k = iter->heap.data->end;
++			iter->data->k = iter->data->end;
+ 		}
+ 
+-		if (iter->heap.data->k == iter->heap.data->end) {
+-			if (iter->heap.nr) {
+-				b = min_heap_peek(&iter->heap)[0];
+-				min_heap_pop(&iter->heap, &callbacks, NULL);
+-			}
+-		}
++		if (iter->data->k == iter->data->end)
++			heap_pop(iter, b, cmp);
+ 		else
+-			min_heap_sift_down(&iter->heap, 0, &callbacks, NULL);
++			heap_sift(iter, 0, cmp);
+ 	}
+ 
+ 	return ret;
+@@ -1178,7 +1157,7 @@ static inline struct bkey *__bch_btree_iter_next(struct btree_iter *iter,
+ 
+ struct bkey *bch_btree_iter_next(struct btree_iter *iter)
+ {
+-	return __bch_btree_iter_next(iter, new_btree_iter_cmp);
++	return __bch_btree_iter_next(iter, btree_iter_cmp);
+ 
+ }
+ 
+@@ -1216,18 +1195,16 @@ static void btree_mergesort(struct btree_keys *b, struct bset *out,
+ 			    struct btree_iter *iter,
+ 			    bool fixup, bool remove_stale)
+ {
++	int i;
+ 	struct bkey *k, *last = NULL;
+ 	BKEY_PADDED(k) tmp;
+ 	bool (*bad)(struct btree_keys *, const struct bkey *) = remove_stale
+ 		? bch_ptr_bad
+ 		: bch_ptr_invalid;
+-	const struct min_heap_callbacks callbacks = {
+-		.less = b->ops->sort_cmp,
+-		.swp = NULL,
+-	};
+ 
+ 	/* Heapify the iterator, using our comparison function */
+-	min_heapify_all(&iter->heap, &callbacks, NULL);
++	for (i = iter->used / 2 - 1; i >= 0; --i)
++		heap_sift(iter, i, b->ops->sort_cmp);
+ 
+ 	while (!btree_iter_end(iter)) {
+ 		if (b->ops->sort_fixup && fixup)
+@@ -1316,11 +1293,10 @@ void bch_btree_sort_partial(struct btree_keys *b, unsigned int start,
+ 			    struct bset_sort_state *state)
+ {
+ 	size_t order = b->page_order, keys = 0;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	int oldsize = bch_count_data(b);
+ 
+-	min_heap_init(&iter.heap, NULL, MAX_BSETS);
+-	__bch_btree_iter_init(b, &iter, NULL, &b->set[start]);
++	__bch_btree_iter_stack_init(b, &iter, NULL, &b->set[start]);
+ 
+ 	if (start) {
+ 		unsigned int i;
+@@ -1331,7 +1307,7 @@ void bch_btree_sort_partial(struct btree_keys *b, unsigned int start,
+ 		order = get_order(__set_bytes(b->set->data, keys));
+ 	}
+ 
+-	__btree_sort(b, &iter, start, order, false, state);
++	__btree_sort(b, &iter.iter, start, order, false, state);
+ 
+ 	EBUG_ON(oldsize >= 0 && bch_count_data(b) != oldsize);
+ }
+@@ -1347,13 +1323,11 @@ void bch_btree_sort_into(struct btree_keys *b, struct btree_keys *new,
+ 			 struct bset_sort_state *state)
+ {
+ 	uint64_t start_time = local_clock();
+-	struct btree_iter iter;
+-
+-	min_heap_init(&iter.heap, NULL, MAX_BSETS);
++	struct btree_iter_stack iter;
+ 
+-	bch_btree_iter_init(b, &iter, NULL);
++	bch_btree_iter_stack_init(b, &iter, NULL);
+ 
+-	btree_mergesort(b, new->set->data, &iter, false, true);
++	btree_mergesort(b, new->set->data, &iter.iter, false, true);
+ 
+ 	bch_time_stats_update(&state->time, start_time);
+ 
+diff --git a/drivers/md/bcache/bset.h b/drivers/md/bcache/bset.h
+index f79441acd4c18e..011f6062c4c04f 100644
+--- a/drivers/md/bcache/bset.h
++++ b/drivers/md/bcache/bset.h
+@@ -187,9 +187,8 @@ struct bset_tree {
+ };
+ 
+ struct btree_keys_ops {
+-	bool		(*sort_cmp)(const void *l,
+-				    const void *r,
+-					void *args);
++	bool		(*sort_cmp)(struct btree_iter_set l,
++				    struct btree_iter_set r);
+ 	struct bkey	*(*sort_fixup)(struct btree_iter *iter,
+ 				       struct bkey *tmp);
+ 	bool		(*insert_fixup)(struct btree_keys *b,
+@@ -313,17 +312,23 @@ enum {
+ 	BTREE_INSERT_STATUS_FRONT_MERGE,
+ };
+ 
+-struct btree_iter_set {
+-	struct bkey *k, *end;
+-};
+-
+ /* Btree key iteration */
+ 
+ struct btree_iter {
++	size_t size, used;
+ #ifdef CONFIG_BCACHE_DEBUG
+ 	struct btree_keys *b;
+ #endif
+-	MIN_HEAP_PREALLOCATED(struct btree_iter_set, btree_iter_heap, MAX_BSETS) heap;
++	struct btree_iter_set {
++		struct bkey *k, *end;
++	} data[];
++};
++
++/* Fixed-size btree_iter that can be allocated on the stack */
++
++struct btree_iter_stack {
++	struct btree_iter iter;
++	struct btree_iter_set stack_data[MAX_BSETS];
+ };
+ 
+ typedef bool (*ptr_filter_fn)(struct btree_keys *b, const struct bkey *k);
+@@ -335,9 +340,9 @@ struct bkey *bch_btree_iter_next_filter(struct btree_iter *iter,
+ 
+ void bch_btree_iter_push(struct btree_iter *iter, struct bkey *k,
+ 			 struct bkey *end);
+-struct bkey *bch_btree_iter_init(struct btree_keys *b,
+-				 struct btree_iter *iter,
+-				 struct bkey *search);
++struct bkey *bch_btree_iter_stack_init(struct btree_keys *b,
++				       struct btree_iter_stack *iter,
++				       struct bkey *search);
+ 
+ struct bkey *__bch_bset_search(struct btree_keys *b, struct bset_tree *t,
+ 			       const struct bkey *search);
+@@ -352,13 +357,14 @@ static inline struct bkey *bch_bset_search(struct btree_keys *b,
+ 	return search ? __bch_bset_search(b, t, search) : t->data->start;
+ }
+ 
+-#define for_each_key_filter(b, k, iter, filter)				\
+-	for (bch_btree_iter_init((b), (iter), NULL);			\
+-	     ((k) = bch_btree_iter_next_filter((iter), (b), filter));)
++#define for_each_key_filter(b, k, stack_iter, filter)                      \
++	for (bch_btree_iter_stack_init((b), (stack_iter), NULL);           \
++	     ((k) = bch_btree_iter_next_filter(&((stack_iter)->iter), (b), \
++					       filter));)
+ 
+-#define for_each_key(b, k, iter)					\
+-	for (bch_btree_iter_init((b), (iter), NULL);			\
+-	     ((k) = bch_btree_iter_next(iter));)
++#define for_each_key(b, k, stack_iter)                           \
++	for (bch_btree_iter_stack_init((b), (stack_iter), NULL); \
++	     ((k) = bch_btree_iter_next(&((stack_iter)->iter)));)
+ 
+ /* Sorting */
+ 
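
The restored struct btree_iter ends in a flexible array member, and btree_iter_stack gives on-stack users a fixed-size variant by placing MAX_BSETS slots immediately after it; heap users such as bch_btree_node_read_done() instead allocate the iterator with as many slots as they need. A compile-and-run sketch of that pairing, which, like the kernel code, relies on stack_data immediately following the flexible array (GCC accepts a flexible-array struct as a member as an extension; strict ISO C forbids it):

    #include <stdio.h>

    struct set { int *k, *end; };

    struct iter {
        size_t size, used;
        struct set data[];      /* flexible array member */
    };

    #define MAX_SETS 4

    /* Fixed-size iter that can live on the stack. */
    struct iter_stack {
        struct iter iter;
        struct set stack_data[MAX_SETS];
    };

    int main(void)
    {
        struct iter_stack s;

        s.iter.size = MAX_SETS; /* the slots live in stack_data[] */
        s.iter.used = 0;
        printf("stack iter capacity: %zu\n", s.iter.size);
        return 0;
    }
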
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index ed40d86006564d..4e6ccf2c8a0bf3 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -149,19 +149,19 @@ void bch_btree_node_read_done(struct btree *b)
+ {
+ 	const char *err = "bad btree header";
+ 	struct bset *i = btree_bset_first(b);
+-	struct btree_iter iter;
++	struct btree_iter *iter;
+ 
+ 	/*
+ 	 * c->fill_iter can allocate an iterator with more memory space
+ 	 * than static MAX_BSETS.
+ 	 * See the comment around cache_set->fill_iter.
+ 	 */
+-	iter.heap.data = mempool_alloc(&b->c->fill_iter, GFP_NOIO);
+-	iter.heap.size = b->c->cache->sb.bucket_size / b->c->cache->sb.block_size;
+-	iter.heap.nr = 0;
++	iter = mempool_alloc(&b->c->fill_iter, GFP_NOIO);
++	iter->size = b->c->cache->sb.bucket_size / b->c->cache->sb.block_size;
++	iter->used = 0;
+ 
+ #ifdef CONFIG_BCACHE_DEBUG
+-	iter.b = &b->keys;
++	iter->b = &b->keys;
+ #endif
+ 
+ 	if (!i->seq)
+@@ -199,7 +199,7 @@ void bch_btree_node_read_done(struct btree *b)
+ 		if (i != b->keys.set[0].data && !i->keys)
+ 			goto err;
+ 
+-		bch_btree_iter_push(&iter, i->start, bset_bkey_last(i));
++		bch_btree_iter_push(iter, i->start, bset_bkey_last(i));
+ 
+ 		b->written += set_blocks(i, block_bytes(b->c->cache));
+ 	}
+@@ -211,7 +211,7 @@ void bch_btree_node_read_done(struct btree *b)
+ 		if (i->seq == b->keys.set[0].data->seq)
+ 			goto err;
+ 
+-	bch_btree_sort_and_fix_extents(&b->keys, &iter, &b->c->sort);
++	bch_btree_sort_and_fix_extents(&b->keys, iter, &b->c->sort);
+ 
+ 	i = b->keys.set[0].data;
+ 	err = "short btree key";
+@@ -223,7 +223,7 @@ void bch_btree_node_read_done(struct btree *b)
+ 		bch_bset_init_next(&b->keys, write_block(b),
+ 				   bset_magic(&b->c->cache->sb));
+ out:
+-	mempool_free(iter.heap.data, &b->c->fill_iter);
++	mempool_free(iter, &b->c->fill_iter);
+ 	return;
+ err:
+ 	set_btree_node_io_error(b);
+@@ -1309,11 +1309,9 @@ static bool btree_gc_mark_node(struct btree *b, struct gc_stat *gc)
+ 	uint8_t stale = 0;
+ 	unsigned int keys = 0, good_keys = 0;
+ 	struct bkey *k;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	struct bset_tree *t;
+ 
+-	min_heap_init(&iter.heap, NULL, MAX_BSETS);
+-
+ 	gc->nodes++;
+ 
+ 	for_each_key_filter(&b->keys, k, &iter, bch_ptr_invalid) {
+@@ -1572,11 +1570,9 @@ static int btree_gc_rewrite_node(struct btree *b, struct btree_op *op,
+ static unsigned int btree_gc_count_keys(struct btree *b)
+ {
+ 	struct bkey *k;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	unsigned int ret = 0;
+ 
+-	min_heap_init(&iter.heap, NULL, MAX_BSETS);
+-
+ 	for_each_key_filter(&b->keys, k, &iter, bch_ptr_bad)
+ 		ret += bkey_u64s(k);
+ 
+@@ -1615,18 +1611,18 @@ static int btree_gc_recurse(struct btree *b, struct btree_op *op,
+ 	int ret = 0;
+ 	bool should_rewrite;
+ 	struct bkey *k;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	struct gc_merge_info r[GC_MERGE_NODES];
+ 	struct gc_merge_info *i, *last = r + ARRAY_SIZE(r) - 1;
+ 
+-	min_heap_init(&iter.heap, NULL, MAX_BSETS);
+-	bch_btree_iter_init(&b->keys, &iter, &b->c->gc_done);
++	bch_btree_iter_stack_init(&b->keys, &iter, &b->c->gc_done);
+ 
+ 	for (i = r; i < r + ARRAY_SIZE(r); i++)
+ 		i->b = ERR_PTR(-EINTR);
+ 
+ 	while (1) {
+-		k = bch_btree_iter_next_filter(&iter, &b->keys, bch_ptr_bad);
++		k = bch_btree_iter_next_filter(&iter.iter, &b->keys,
++					       bch_ptr_bad);
+ 		if (k) {
+ 			r->b = bch_btree_node_get(b->c, op, k, b->level - 1,
+ 						  true, b);
+@@ -1921,9 +1917,7 @@ static int bch_btree_check_recurse(struct btree *b, struct btree_op *op)
+ {
+ 	int ret = 0;
+ 	struct bkey *k, *p = NULL;
+-	struct btree_iter iter;
+-
+-	min_heap_init(&iter.heap, NULL, MAX_BSETS);
++	struct btree_iter_stack iter;
+ 
+ 	for_each_key_filter(&b->keys, k, &iter, bch_ptr_invalid)
+ 		bch_initial_mark_key(b->c, b->level, k);
+@@ -1931,10 +1925,10 @@ static int bch_btree_check_recurse(struct btree *b, struct btree_op *op)
+ 	bch_initial_mark_key(b->c, b->level + 1, &b->key);
+ 
+ 	if (b->level) {
+-		bch_btree_iter_init(&b->keys, &iter, NULL);
++		bch_btree_iter_stack_init(&b->keys, &iter, NULL);
+ 
+ 		do {
+-			k = bch_btree_iter_next_filter(&iter, &b->keys,
++			k = bch_btree_iter_next_filter(&iter.iter, &b->keys,
+ 						       bch_ptr_bad);
+ 			if (k) {
+ 				btree_node_prefetch(b, k);
+@@ -1962,7 +1956,7 @@ static int bch_btree_check_thread(void *arg)
+ 	struct btree_check_info *info = arg;
+ 	struct btree_check_state *check_state = info->state;
+ 	struct cache_set *c = check_state->c;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	struct bkey *k, *p;
+ 	int cur_idx, prev_idx, skip_nr;
+ 
+@@ -1970,11 +1964,9 @@ static int bch_btree_check_thread(void *arg)
+ 	cur_idx = prev_idx = 0;
+ 	ret = 0;
+ 
+-	min_heap_init(&iter.heap, NULL, MAX_BSETS);
+-
+ 	/* root node keys are checked before thread created */
+-	bch_btree_iter_init(&c->root->keys, &iter, NULL);
+-	k = bch_btree_iter_next_filter(&iter, &c->root->keys, bch_ptr_bad);
++	bch_btree_iter_stack_init(&c->root->keys, &iter, NULL);
++	k = bch_btree_iter_next_filter(&iter.iter, &c->root->keys, bch_ptr_bad);
+ 	BUG_ON(!k);
+ 
+ 	p = k;
+@@ -1992,7 +1984,7 @@ static int bch_btree_check_thread(void *arg)
+ 		skip_nr = cur_idx - prev_idx;
+ 
+ 		while (skip_nr) {
+-			k = bch_btree_iter_next_filter(&iter,
++			k = bch_btree_iter_next_filter(&iter.iter,
+ 						       &c->root->keys,
+ 						       bch_ptr_bad);
+ 			if (k)
+@@ -2065,11 +2057,9 @@ int bch_btree_check(struct cache_set *c)
+ 	int ret = 0;
+ 	int i;
+ 	struct bkey *k = NULL;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	struct btree_check_state check_state;
+ 
+-	min_heap_init(&iter.heap, NULL, MAX_BSETS);
+-
+ 	/* check and mark root node keys */
+ 	for_each_key_filter(&c->root->keys, k, &iter, bch_ptr_invalid)
+ 		bch_initial_mark_key(c, c->root->level, k);
+@@ -2563,12 +2553,11 @@ static int bch_btree_map_nodes_recurse(struct btree *b, struct btree_op *op,
+ 
+ 	if (b->level) {
+ 		struct bkey *k;
+-		struct btree_iter iter;
++		struct btree_iter_stack iter;
+ 
+-		min_heap_init(&iter.heap, NULL, MAX_BSETS);
+-		bch_btree_iter_init(&b->keys, &iter, from);
++		bch_btree_iter_stack_init(&b->keys, &iter, from);
+ 
+-		while ((k = bch_btree_iter_next_filter(&iter, &b->keys,
++		while ((k = bch_btree_iter_next_filter(&iter.iter, &b->keys,
+ 						       bch_ptr_bad))) {
+ 			ret = bcache_btree(map_nodes_recurse, k, b,
+ 				    op, from, fn, flags);
+@@ -2597,12 +2586,12 @@ int bch_btree_map_keys_recurse(struct btree *b, struct btree_op *op,
+ {
+ 	int ret = MAP_CONTINUE;
+ 	struct bkey *k;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 
+-	min_heap_init(&iter.heap, NULL, MAX_BSETS);
+-	bch_btree_iter_init(&b->keys, &iter, from);
++	bch_btree_iter_stack_init(&b->keys, &iter, from);
+ 
+-	while ((k = bch_btree_iter_next_filter(&iter, &b->keys, bch_ptr_bad))) {
++	while ((k = bch_btree_iter_next_filter(&iter.iter, &b->keys,
++					       bch_ptr_bad))) {
+ 		ret = !b->level
+ 			? fn(op, b, k)
+ 			: bcache_btree(map_keys_recurse, k,
+diff --git a/drivers/md/bcache/extents.c b/drivers/md/bcache/extents.c
+index 4b84fda1530a79..d626ffcbecb99c 100644
+--- a/drivers/md/bcache/extents.c
++++ b/drivers/md/bcache/extents.c
+@@ -33,16 +33,15 @@ static void sort_key_next(struct btree_iter *iter,
+ 	i->k = bkey_next(i->k);
+ 
+ 	if (i->k == i->end)
+-		*i = iter->heap.data[--iter->heap.nr];
++		*i = iter->data[--iter->used];
+ }
+ 
+-static bool new_bch_key_sort_cmp(const void *l, const void *r, void *args)
++static bool bch_key_sort_cmp(struct btree_iter_set l,
++			     struct btree_iter_set r)
+ {
+-	struct btree_iter_set *_l = (struct btree_iter_set *)l;
+-	struct btree_iter_set *_r = (struct btree_iter_set *)r;
+-	int64_t c = bkey_cmp(_l->k, _r->k);
++	int64_t c = bkey_cmp(l.k, r.k);
+ 
+-	return !(c ? c > 0 : _l->k < _r->k);
++	return c ? c > 0 : l.k < r.k;
+ }
+ 
+ static bool __ptr_invalid(struct cache_set *c, const struct bkey *k)
+@@ -239,7 +238,7 @@ static bool bch_btree_ptr_insert_fixup(struct btree_keys *bk,
+ }
+ 
+ const struct btree_keys_ops bch_btree_keys_ops = {
+-	.sort_cmp	= new_bch_key_sort_cmp,
++	.sort_cmp	= bch_key_sort_cmp,
+ 	.insert_fixup	= bch_btree_ptr_insert_fixup,
+ 	.key_invalid	= bch_btree_ptr_invalid,
+ 	.key_bad	= bch_btree_ptr_bad,
+@@ -256,28 +255,22 @@ const struct btree_keys_ops bch_btree_keys_ops = {
+  * Necessary for btree_sort_fixup() - if there are multiple keys that compare
+  * equal in different sets, we have to process them newest to oldest.
+  */
+-
+-static bool new_bch_extent_sort_cmp(const void *l, const void *r, void __always_unused *args)
++static bool bch_extent_sort_cmp(struct btree_iter_set l,
++				struct btree_iter_set r)
+ {
+-	struct btree_iter_set *_l = (struct btree_iter_set *)l;
+-	struct btree_iter_set *_r = (struct btree_iter_set *)r;
+-	int64_t c = bkey_cmp(&START_KEY(_l->k), &START_KEY(_r->k));
++	int64_t c = bkey_cmp(&START_KEY(l.k), &START_KEY(r.k));
+ 
+-	return !(c ? c > 0 : _l->k < _r->k);
++	return c ? c > 0 : l.k < r.k;
+ }
+ 
+ static struct bkey *bch_extent_sort_fixup(struct btree_iter *iter,
+ 					  struct bkey *tmp)
+ {
+-	const struct min_heap_callbacks callbacks = {
+-		.less = new_bch_extent_sort_cmp,
+-		.swp = NULL,
+-	};
+-	while (iter->heap.nr > 1) {
+-		struct btree_iter_set *top = iter->heap.data, *i = top + 1;
+-
+-		if (iter->heap.nr > 2 &&
+-		    !new_bch_extent_sort_cmp(&i[0], &i[1], NULL))
++	while (iter->used > 1) {
++		struct btree_iter_set *top = iter->data, *i = top + 1;
++
++		if (iter->used > 2 &&
++		    bch_extent_sort_cmp(i[0], i[1]))
+ 			i++;
+ 
+ 		if (bkey_cmp(top->k, &START_KEY(i->k)) <= 0)
+@@ -285,7 +278,7 @@ static struct bkey *bch_extent_sort_fixup(struct btree_iter *iter,
+ 
+ 		if (!KEY_SIZE(i->k)) {
+ 			sort_key_next(iter, i);
+-			min_heap_sift_down(&iter->heap, i - top, &callbacks, NULL);
++			heap_sift(iter, i - top, bch_extent_sort_cmp);
+ 			continue;
+ 		}
+ 
+@@ -295,7 +288,7 @@ static struct bkey *bch_extent_sort_fixup(struct btree_iter *iter,
+ 			else
+ 				bch_cut_front(top->k, i->k);
+ 
+-			min_heap_sift_down(&iter->heap, i - top, &callbacks, NULL);
++			heap_sift(iter, i - top, bch_extent_sort_cmp);
+ 		} else {
+ 			/* can't happen because of comparison func */
+ 			BUG_ON(!bkey_cmp(&START_KEY(top->k), &START_KEY(i->k)));
+@@ -305,7 +298,7 @@ static struct bkey *bch_extent_sort_fixup(struct btree_iter *iter,
+ 
+ 				bch_cut_back(&START_KEY(i->k), tmp);
+ 				bch_cut_front(i->k, top->k);
+-				min_heap_sift_down(&iter->heap, 0, &callbacks, NULL);
++				heap_sift(iter, 0, bch_extent_sort_cmp);
+ 
+ 				return tmp;
+ 			} else {
+@@ -625,7 +618,7 @@ static bool bch_extent_merge(struct btree_keys *bk,
+ }
+ 
+ const struct btree_keys_ops bch_extent_keys_ops = {
+-	.sort_cmp	= new_bch_extent_sort_cmp,
++	.sort_cmp	= bch_extent_sort_cmp,
+ 	.sort_fixup	= bch_extent_sort_fixup,
+ 	.insert_fixup	= bch_extent_insert_fixup,
+ 	.key_invalid	= bch_extent_invalid,
+diff --git a/drivers/md/bcache/movinggc.c b/drivers/md/bcache/movinggc.c
+index 45ca134cbf023e..26a6a535ec325f 100644
+--- a/drivers/md/bcache/movinggc.c
++++ b/drivers/md/bcache/movinggc.c
+@@ -182,19 +182,16 @@ err:		if (!IS_ERR_OR_NULL(w->private))
+ 	closure_sync(&cl);
+ }
+ 
+-static bool new_bucket_cmp(const void *l, const void *r, void __always_unused *args)
++static bool bucket_cmp(struct bucket *l, struct bucket *r)
+ {
+-	struct bucket **_l = (struct bucket **)l;
+-	struct bucket **_r = (struct bucket **)r;
+-
+-	return GC_SECTORS_USED(*_l) >= GC_SECTORS_USED(*_r);
++	return GC_SECTORS_USED(l) < GC_SECTORS_USED(r);
+ }
+ 
+ static unsigned int bucket_heap_top(struct cache *ca)
+ {
+ 	struct bucket *b;
+ 
+-	return (b = min_heap_peek(&ca->heap)[0]) ? GC_SECTORS_USED(b) : 0;
++	return (b = heap_peek(&ca->heap)) ? GC_SECTORS_USED(b) : 0;
+ }
+ 
+ void bch_moving_gc(struct cache_set *c)
+@@ -202,10 +199,6 @@ void bch_moving_gc(struct cache_set *c)
+ 	struct cache *ca = c->cache;
+ 	struct bucket *b;
+ 	unsigned long sectors_to_move, reserve_sectors;
+-	const struct min_heap_callbacks callbacks = {
+-		.less = new_bucket_cmp,
+-		.swp = NULL,
+-	};
+ 
+ 	if (!c->copy_gc_enabled)
+ 		return;
+@@ -216,7 +209,7 @@ void bch_moving_gc(struct cache_set *c)
+ 	reserve_sectors = ca->sb.bucket_size *
+ 			     fifo_used(&ca->free[RESERVE_MOVINGGC]);
+ 
+-	ca->heap.nr = 0;
++	ca->heap.used = 0;
+ 
+ 	for_each_bucket(b, ca) {
+ 		if (GC_MARK(b) == GC_MARK_METADATA ||
+@@ -225,31 +218,25 @@ void bch_moving_gc(struct cache_set *c)
+ 		    atomic_read(&b->pin))
+ 			continue;
+ 
+-		if (!min_heap_full(&ca->heap)) {
++		if (!heap_full(&ca->heap)) {
+ 			sectors_to_move += GC_SECTORS_USED(b);
+-			min_heap_push(&ca->heap, &b, &callbacks, NULL);
+-		} else if (!new_bucket_cmp(&b, min_heap_peek(&ca->heap), ca)) {
++			heap_add(&ca->heap, b, bucket_cmp);
++		} else if (bucket_cmp(b, heap_peek(&ca->heap))) {
+ 			sectors_to_move -= bucket_heap_top(ca);
+ 			sectors_to_move += GC_SECTORS_USED(b);
+ 
+ 			ca->heap.data[0] = b;
+-			min_heap_sift_down(&ca->heap, 0, &callbacks, NULL);
++			heap_sift(&ca->heap, 0, bucket_cmp);
+ 		}
+ 	}
+ 
+ 	while (sectors_to_move > reserve_sectors) {
+-		if (ca->heap.nr) {
+-			b = min_heap_peek(&ca->heap)[0];
+-			min_heap_pop(&ca->heap, &callbacks, NULL);
+-		}
++		heap_pop(&ca->heap, b, bucket_cmp);
+ 		sectors_to_move -= GC_SECTORS_USED(b);
+ 	}
+ 
+-	while (ca->heap.nr) {
+-		b = min_heap_peek(&ca->heap)[0];
+-		min_heap_pop(&ca->heap, &callbacks, NULL);
++	while (heap_pop(&ca->heap, b, bucket_cmp))
+ 		SET_GC_MOVE(b, 1);
+-	}
+ 
+ 	mutex_unlock(&c->bucket_lock);
+ 
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 813b38aec3e4e0..8fc9e98af31e88 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1733,7 +1733,12 @@ static CLOSURE_CALLBACK(cache_set_flush)
+ 			mutex_unlock(&b->write_lock);
+ 		}
+ 
+-	if (ca->alloc_thread)
++	/*
++	 * If the register_cache_set() call to bch_cache_set_alloc() failed,
++	 * ca has not been assigned a value and an error is returned.
++	 * So we need to check that ca is not NULL in bch_cache_set_unregister().
++	 */
++	if (ca && ca->alloc_thread)
+ 		kthread_stop(ca->alloc_thread);
+ 
+ 	if (c->journal.cur) {
+@@ -1907,7 +1912,8 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
+ 	INIT_LIST_HEAD(&c->btree_cache_freed);
+ 	INIT_LIST_HEAD(&c->data_buckets);
+ 
+-	iter_size = ((meta_bucket_pages(sb) * PAGE_SECTORS) / sb->block_size) *
++	iter_size = sizeof(struct btree_iter) +
++		    ((meta_bucket_pages(sb) * PAGE_SECTORS) / sb->block_size) *
+ 			    sizeof(struct btree_iter_set);
+ 
+ 	c->devices = kcalloc(c->nr_uuids, sizeof(void *), GFP_KERNEL);
+diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
+index e8f696cb58c056..826b14cae4e58e 100644
+--- a/drivers/md/bcache/sysfs.c
++++ b/drivers/md/bcache/sysfs.c
+@@ -660,9 +660,7 @@ static unsigned int bch_root_usage(struct cache_set *c)
+ 	unsigned int bytes = 0;
+ 	struct bkey *k;
+ 	struct btree *b;
+-	struct btree_iter iter;
+-
+-	min_heap_init(&iter.heap, NULL, MAX_BSETS);
++	struct btree_iter_stack iter;
+ 
+ 	goto lock_root;
+ 
+diff --git a/drivers/md/bcache/util.h b/drivers/md/bcache/util.h
+index 539454d8e2d089..f61ab1bada6cf5 100644
+--- a/drivers/md/bcache/util.h
++++ b/drivers/md/bcache/util.h
+@@ -9,7 +9,6 @@
+ #include <linux/kernel.h>
+ #include <linux/sched/clock.h>
+ #include <linux/llist.h>
+-#include <linux/min_heap.h>
+ #include <linux/ratelimit.h>
+ #include <linux/vmalloc.h>
+ #include <linux/workqueue.h>
+@@ -31,10 +30,16 @@ struct closure;
+ 
+ #endif
+ 
++#define DECLARE_HEAP(type, name)					\
++	struct {							\
++		size_t size, used;					\
++		type *data;						\
++	} name
++
+ #define init_heap(heap, _size, gfp)					\
+ ({									\
+ 	size_t _bytes;							\
+-	(heap)->nr = 0;						\
++	(heap)->used = 0;						\
+ 	(heap)->size = (_size);						\
+ 	_bytes = (heap)->size * sizeof(*(heap)->data);			\
+ 	(heap)->data = kvmalloc(_bytes, (gfp) & GFP_KERNEL);		\
+@@ -47,6 +52,64 @@ do {									\
+ 	(heap)->data = NULL;						\
+ } while (0)
+ 
++#define heap_swap(h, i, j)	swap((h)->data[i], (h)->data[j])
++
++#define heap_sift(h, i, cmp)						\
++do {									\
++	size_t _r, _j = i;						\
++									\
++	for (; _j * 2 + 1 < (h)->used; _j = _r) {			\
++		_r = _j * 2 + 1;					\
++		if (_r + 1 < (h)->used &&				\
++		    cmp((h)->data[_r], (h)->data[_r + 1]))		\
++			_r++;						\
++									\
++		if (cmp((h)->data[_r], (h)->data[_j]))			\
++			break;						\
++		heap_swap(h, _r, _j);					\
++	}								\
++} while (0)
++
++#define heap_sift_down(h, i, cmp)					\
++do {									\
++	while (i) {							\
++		size_t p = (i - 1) / 2;					\
++		if (cmp((h)->data[i], (h)->data[p]))			\
++			break;						\
++		heap_swap(h, i, p);					\
++		i = p;							\
++	}								\
++} while (0)
++
++#define heap_add(h, d, cmp)						\
++({									\
++	bool _r = !heap_full(h);					\
++	if (_r) {							\
++		size_t _i = (h)->used++;				\
++		(h)->data[_i] = d;					\
++									\
++		heap_sift_down(h, _i, cmp);				\
++		heap_sift(h, _i, cmp);					\
++	}								\
++	_r;								\
++})
++
++#define heap_pop(h, d, cmp)						\
++({									\
++	bool _r = (h)->used;						\
++	if (_r) {							\
++		(d) = (h)->data[0];					\
++		(h)->used--;						\
++		heap_swap(h, 0, (h)->used);				\
++		heap_sift(h, 0, cmp);					\
++	}								\
++	_r;								\
++})
++
++#define heap_peek(h)	((h)->used ? (h)->data[0] : NULL)
++
++#define heap_full(h)	((h)->used == (h)->size)
++
+ #define DECLARE_FIFO(type, name)					\
+ 	struct {							\
+ 		size_t front, back, size, mask;				\
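
The bcache hunks in this series revert the driver from the generic min_heap API back to its own DECLARE_HEAP/heap_* macros, restored verbatim in the util.h hunk above. A userspace adaptation that compiles and runs, to illustrate the comparator convention: cmp(a, b) returning true means a belongs below b, so `l > r` yields a min-heap. malloc replaces kvmalloc, __typeof__ replaces the kernel's swap(), and the GNU statement expressions match the originals:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define DECLARE_HEAP(type, name)                                \
        struct { size_t size, used; type *data; } name

    #define heap_swap(h, i, j) do {                                 \
        __typeof__((h)->data[0]) _t = (h)->data[i];                 \
        (h)->data[i] = (h)->data[j];                                \
        (h)->data[j] = _t;                                          \
    } while (0)

    #define heap_full(h)    ((h)->used == (h)->size)

    #define heap_sift(h, i, cmp) do {                               \
        size_t _r, _j = i;                                          \
        for (; _j * 2 + 1 < (h)->used; _j = _r) {                   \
            _r = _j * 2 + 1;                                        \
            if (_r + 1 < (h)->used &&                               \
                cmp((h)->data[_r], (h)->data[_r + 1]))              \
                _r++;                                               \
            if (cmp((h)->data[_r], (h)->data[_j]))                  \
                break;                                              \
            heap_swap(h, _r, _j);                                   \
        }                                                           \
    } while (0)

    #define heap_sift_down(h, i, cmp) do {                          \
        while (i) {                                                 \
            size_t _p = ((i) - 1) / 2;                              \
            if (cmp((h)->data[i], (h)->data[_p]))                   \
                break;                                              \
            heap_swap(h, i, _p);                                    \
            i = _p;                                                 \
        }                                                           \
    } while (0)

    #define heap_add(h, d, cmp) ({                                  \
        bool _r = !heap_full(h);                                    \
        if (_r) {                                                   \
            size_t _i = (h)->used++;                                \
            (h)->data[_i] = d;                                      \
            heap_sift_down(h, _i, cmp);                             \
            heap_sift(h, _i, cmp);                                  \
        }                                                           \
        _r;                                                         \
    })

    #define heap_pop(h, d, cmp) ({                                  \
        bool _r = (h)->used;                                        \
        if (_r) {                                                   \
            (d) = (h)->data[0];                                     \
            (h)->used--;                                            \
            heap_swap(h, 0, (h)->used);                             \
            heap_sift(h, 0, cmp);                                   \
        }                                                           \
        _r;                                                         \
    })

    #define min_cmp(l, r)   ((l) > (r))     /* smallest element on top */

    int main(void)
    {
        DECLARE_HEAP(int, heap);
        int v, vals[] = { 5, 1, 4, 2, 3 };

        heap.size = 8;
        heap.used = 0;
        heap.data = malloc(heap.size * sizeof(*heap.data));

        for (size_t i = 0; i < 5; i++)
            heap_add(&heap, vals[i], min_cmp);
        while (heap_pop(&heap, v, min_cmp))
            printf("%d ", v);               /* 1 2 3 4 5 */
        printf("\n");

        free(heap.data);
        return 0;
    }
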
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index 453efbbdc8eeeb..302e75f1fc4b63 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -908,16 +908,15 @@ static int bch_dirty_init_thread(void *arg)
+ 	struct dirty_init_thrd_info *info = arg;
+ 	struct bch_dirty_init_state *state = info->state;
+ 	struct cache_set *c = state->c;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	struct bkey *k, *p;
+ 	int cur_idx, prev_idx, skip_nr;
+ 
+ 	k = p = NULL;
+ 	prev_idx = 0;
+ 
+-	min_heap_init(&iter.heap, NULL, MAX_BSETS);
+-	bch_btree_iter_init(&c->root->keys, &iter, NULL);
+-	k = bch_btree_iter_next_filter(&iter, &c->root->keys, bch_ptr_bad);
++	bch_btree_iter_stack_init(&c->root->keys, &iter, NULL);
++	k = bch_btree_iter_next_filter(&iter.iter, &c->root->keys, bch_ptr_bad);
+ 	BUG_ON(!k);
+ 
+ 	p = k;
+@@ -931,7 +930,7 @@ static int bch_dirty_init_thread(void *arg)
+ 		skip_nr = cur_idx - prev_idx;
+ 
+ 		while (skip_nr) {
+-			k = bch_btree_iter_next_filter(&iter,
++			k = bch_btree_iter_next_filter(&iter.iter,
+ 						       &c->root->keys,
+ 						       bch_ptr_bad);
+ 			if (k)
+@@ -980,13 +979,11 @@ void bch_sectors_dirty_init(struct bcache_device *d)
+ 	int i;
+ 	struct btree *b = NULL;
+ 	struct bkey *k = NULL;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	struct sectors_dirty_init op;
+ 	struct cache_set *c = d->c;
+ 	struct bch_dirty_init_state state;
+ 
+-	min_heap_init(&iter.heap, NULL, MAX_BSETS);
+-
+ retry_lock:
+ 	b = c->root;
+ 	rw_lock(0, b, b->level);
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 6adc55fd90d3d4..0d40c18fe3ead7 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -2410,7 +2410,7 @@ static int super_init_validation(struct raid_set *rs, struct md_rdev *rdev)
+ 	 */
+ 	sb_retrieve_failed_devices(sb, failed_devices);
+ 	rdev_for_each(r, mddev) {
+-		if (test_bit(Journal, &rdev->flags) ||
++		if (test_bit(Journal, &r->flags) ||
+ 		    !r->sb_page)
+ 			continue;
+ 		sb2 = page_address(r->sb_page);
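
The dm-raid change fixes a copy/paste bug: inside rdev_for_each(r, mddev) the Journal test read the outer rdev argument instead of the loop cursor r, so every iteration checked the same device's flags. A tiny standalone illustration of the bug class:

    #include <stdio.h>

    struct dev { int journal; const char *name; };

    int main(void)
    {
        struct dev rdev = { .journal = 1, .name = "outer" };   /* outer arg */
        struct dev list[] = { { 0, "a" }, { 1, "b" }, { 0, "c" } };

        (void)rdev;
        for (struct dev *r = list; r < list + 3; r++) {
            /* BUG pattern: testing rdev.journal here would skip everything;
             * the fix tests the cursor r->journal instead. */
            if (r->journal)
                continue;
            printf("process %s\n", r->name);
        }
        return 0;
    }
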
+diff --git a/drivers/md/dm-vdo/indexer/volume.c b/drivers/md/dm-vdo/indexer/volume.c
+index 655453bb276bed..425b3a74f4dbae 100644
+--- a/drivers/md/dm-vdo/indexer/volume.c
++++ b/drivers/md/dm-vdo/indexer/volume.c
+@@ -754,10 +754,11 @@ static int get_volume_page_protected(struct volume *volume, struct uds_request *
+ 				     u32 physical_page, struct cached_page **page_ptr)
+ {
+ 	struct cached_page *page;
++	unsigned int zone_number = request->zone_number;
+ 
+ 	get_page_from_cache(&volume->page_cache, physical_page, &page);
+ 	if (page != NULL) {
+-		if (request->zone_number == 0) {
++		if (zone_number == 0) {
+ 			/* Only one zone is allowed to update the LRU. */
+ 			make_page_most_recent(&volume->page_cache, page);
+ 		}
+@@ -767,7 +768,7 @@ static int get_volume_page_protected(struct volume *volume, struct uds_request *
+ 	}
+ 
+ 	/* Prepare to enqueue a read for the page. */
+-	end_pending_search(&volume->page_cache, request->zone_number);
++	end_pending_search(&volume->page_cache, zone_number);
+ 	mutex_lock(&volume->read_threads_mutex);
+ 
+ 	/*
+@@ -787,8 +788,7 @@ static int get_volume_page_protected(struct volume *volume, struct uds_request *
+ 		 * the order does not matter for correctness as it does below.
+ 		 */
+ 		mutex_unlock(&volume->read_threads_mutex);
+-		begin_pending_search(&volume->page_cache, physical_page,
+-				     request->zone_number);
++		begin_pending_search(&volume->page_cache, physical_page, zone_number);
+ 		return UDS_QUEUED;
+ 	}
+ 
+@@ -797,7 +797,7 @@ static int get_volume_page_protected(struct volume *volume, struct uds_request *
+ 	 * "search pending" state in careful order so no other thread can mess with the data before
+ 	 * the caller gets to look at it.
+ 	 */
+-	begin_pending_search(&volume->page_cache, physical_page, request->zone_number);
++	begin_pending_search(&volume->page_cache, physical_page, zone_number);
+ 	mutex_unlock(&volume->read_threads_mutex);
+ 	*page_ptr = page;
+ 	return UDS_SUCCESS;
+@@ -849,6 +849,7 @@ static int search_cached_index_page(struct volume *volume, struct uds_request *r
+ {
+ 	int result;
+ 	struct cached_page *page = NULL;
++	unsigned int zone_number = request->zone_number;
+ 	u32 physical_page = map_to_physical_page(volume->geometry, chapter,
+ 						 index_page_number);
+ 
+@@ -858,18 +859,18 @@ static int search_cached_index_page(struct volume *volume, struct uds_request *r
+ 	 * invalidation by the reader thread, before the reader thread has noticed that the
+ 	 * invalidate_counter has been incremented.
+ 	 */
+-	begin_pending_search(&volume->page_cache, physical_page, request->zone_number);
++	begin_pending_search(&volume->page_cache, physical_page, zone_number);
+ 
+ 	result = get_volume_page_protected(volume, request, physical_page, &page);
+ 	if (result != UDS_SUCCESS) {
+-		end_pending_search(&volume->page_cache, request->zone_number);
++		end_pending_search(&volume->page_cache, zone_number);
+ 		return result;
+ 	}
+ 
+ 	result = uds_search_chapter_index_page(&page->index_page, volume->geometry,
+ 					       &request->record_name,
+ 					       record_page_number);
+-	end_pending_search(&volume->page_cache, request->zone_number);
++	end_pending_search(&volume->page_cache, zone_number);
+ 	return result;
+ }
+ 
+@@ -882,6 +883,7 @@ int uds_search_cached_record_page(struct volume *volume, struct uds_request *req
+ {
+ 	struct cached_page *record_page;
+ 	struct index_geometry *geometry = volume->geometry;
++	unsigned int zone_number = request->zone_number;
+ 	int result;
+ 	u32 physical_page, page_number;
+ 
+@@ -905,11 +907,11 @@ int uds_search_cached_record_page(struct volume *volume, struct uds_request *req
+ 	 * invalidation by the reader thread, before the reader thread has noticed that the
+ 	 * invalidate_counter has been incremented.
+ 	 */
+-	begin_pending_search(&volume->page_cache, physical_page, request->zone_number);
++	begin_pending_search(&volume->page_cache, physical_page, zone_number);
+ 
+ 	result = get_volume_page_protected(volume, request, physical_page, &record_page);
+ 	if (result != UDS_SUCCESS) {
+-		end_pending_search(&volume->page_cache, request->zone_number);
++		end_pending_search(&volume->page_cache, zone_number);
+ 		return result;
+ 	}
+ 
+@@ -917,7 +919,7 @@ int uds_search_cached_record_page(struct volume *volume, struct uds_request *req
+ 			       &request->record_name, geometry, &request->old_metadata))
+ 		*found = true;
+ 
+-	end_pending_search(&volume->page_cache, request->zone_number);
++	end_pending_search(&volume->page_cache, zone_number);
+ 	return UDS_SUCCESS;
+ }
+ 
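
The volume.c hunks snapshot request->zone_number into a local before the request can be handed off to the reader threads, so begin_pending_search() and end_pending_search() are guaranteed to pair on the same zone even if the request object is requeued or recycled in between. A reduced sketch of the snapshot idiom, with illustrative names:

    #include <stdio.h>

    struct request { unsigned int zone; };

    static void begin_search(unsigned int z) { printf("begin zone %u\n", z); }
    static void end_search(unsigned int z)   { printf("end zone %u\n", z); }

    static void lookup(struct request *req)
    {
        unsigned int zone = req->zone;  /* read the field exactly once */

        begin_search(zone);
        /* ... req may be handed to another thread and reused here ... */
        end_search(zone);               /* still pairs with the begin above */
    }

    int main(void)
    {
        struct request r = { .zone = 0 };

        lookup(&r);
        return 0;
    }
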
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index 37b08f26c62f5d..45dd3d9f01a8e1 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -789,7 +789,7 @@ static int md_bitmap_new_disk_sb(struct bitmap *bitmap)
+ 	 * is a good choice?  We choose COUNTER_MAX / 2 arbitrarily.
+ 	 */
+ 	write_behind = bitmap->mddev->bitmap_info.max_write_behind;
+-	if (write_behind > COUNTER_MAX)
++	if (write_behind > COUNTER_MAX / 2)
+ 		write_behind = COUNTER_MAX / 2;
+ 	sb->write_behind = cpu_to_le32(write_behind);
+ 	bitmap->mddev->bitmap_info.max_write_behind = write_behind;
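
The md-bitmap fix makes the comparison bound match the clamp value: with the old `> COUNTER_MAX` test, values between COUNTER_MAX/2 and COUNTER_MAX slipped through unclamped even though the code meant to cap at COUNTER_MAX/2. A toy version, with a placeholder constant rather than md's real COUNTER_MAX:

    #include <stdio.h>

    #define COUNTER_MAX 8192u       /* placeholder value for illustration */

    static unsigned int clamp_write_behind(unsigned int wb)
    {
        /* Compare against the same bound we clamp to. */
        if (wb > COUNTER_MAX / 2)
            wb = COUNTER_MAX / 2;
        return wb;
    }

    int main(void)
    {
        /* 6000 passed the old "> COUNTER_MAX" test untouched. */
        printf("%u\n", clamp_write_behind(6000));   /* 4096 */
        return 0;
    }
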
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index bc7e2005fc6c72..44b6513c526421 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -1812,38 +1812,49 @@ static void uvc_ctrl_send_slave_event(struct uvc_video_chain *chain,
+ 	uvc_ctrl_send_event(chain, handle, ctrl, mapping, val, changes);
+ }
+ 
+-static void uvc_ctrl_set_handle(struct uvc_fh *handle, struct uvc_control *ctrl,
+-				struct uvc_fh *new_handle)
++static int uvc_ctrl_set_handle(struct uvc_fh *handle, struct uvc_control *ctrl,
++			       struct uvc_fh *new_handle)
+ {
+ 	lockdep_assert_held(&handle->chain->ctrl_mutex);
+ 
+ 	if (new_handle) {
++		int ret;
++
+ 		if (ctrl->handle)
+ 			dev_warn_ratelimited(&handle->stream->dev->udev->dev,
+ 					     "UVC non compliance: Setting an async control with a pending operation.");
+ 
+ 		if (new_handle == ctrl->handle)
+-			return;
++			return 0;
+ 
+ 		if (ctrl->handle) {
+ 			WARN_ON(!ctrl->handle->pending_async_ctrls);
+ 			if (ctrl->handle->pending_async_ctrls)
+ 				ctrl->handle->pending_async_ctrls--;
++			ctrl->handle = new_handle;
++			handle->pending_async_ctrls++;
++			return 0;
+ 		}
+ 
++		ret = uvc_pm_get(handle->chain->dev);
++		if (ret)
++			return ret;
++
+ 		ctrl->handle = new_handle;
+ 		handle->pending_async_ctrls++;
+-		return;
++		return 0;
+ 	}
+ 
+ 	/* Cannot clear the handle for a control not owned by us.*/
+ 	if (WARN_ON(ctrl->handle != handle))
+-		return;
++		return -EINVAL;
+ 
+ 	ctrl->handle = NULL;
+ 	if (WARN_ON(!handle->pending_async_ctrls))
+-		return;
++		return -EINVAL;
+ 	handle->pending_async_ctrls--;
++	uvc_pm_put(handle->chain->dev);
++	return 0;
+ }
+ 
+ void uvc_ctrl_status_event(struct uvc_video_chain *chain,
+@@ -2108,7 +2119,7 @@ static int uvc_ctrl_commit_entity(struct uvc_device *dev,
+ 	unsigned int processed_ctrls = 0;
+ 	struct uvc_control *ctrl;
+ 	unsigned int i;
+-	int ret;
++	int ret = 0;
+ 
+ 	if (entity == NULL)
+ 		return 0;
+@@ -2137,8 +2148,6 @@ static int uvc_ctrl_commit_entity(struct uvc_device *dev,
+ 				dev->intfnum, ctrl->info.selector,
+ 				uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT),
+ 				ctrl->info.size);
+-		else
+-			ret = 0;
+ 
+ 		if (!ret)
+ 			processed_ctrls++;
+@@ -2150,17 +2159,24 @@ static int uvc_ctrl_commit_entity(struct uvc_device *dev,
+ 
+ 		ctrl->dirty = 0;
+ 
+-		if (ret < 0) {
++		if (!rollback && handle && !ret &&
++		    ctrl->info.flags & UVC_CTRL_FLAG_ASYNCHRONOUS)
++			ret = uvc_ctrl_set_handle(handle, ctrl, handle);
++
++		if (ret < 0 && !rollback) {
+ 			if (err_ctrl)
+ 				*err_ctrl = ctrl;
+-			return ret;
++			/*
++			 * If we fail to set a control, we need to roll back
++			 * the next ones.
++			 */
++			rollback = 1;
+ 		}
+-
+-		if (!rollback && handle &&
+-		    ctrl->info.flags & UVC_CTRL_FLAG_ASYNCHRONOUS)
+-			uvc_ctrl_set_handle(handle, ctrl, handle);
+ 	}
+ 
++	if (ret)
++		return ret;
++
+ 	return processed_ctrls;
+ }
+ 
+@@ -2191,7 +2207,8 @@ int __uvc_ctrl_commit(struct uvc_fh *handle, int rollback,
+ 	struct uvc_video_chain *chain = handle->chain;
+ 	struct uvc_control *err_ctrl;
+ 	struct uvc_entity *entity;
+-	int ret = 0;
++	int ret_out = 0;
++	int ret;
+ 
+ 	/* Find the control. */
+ 	list_for_each_entry(entity, &chain->entities, chain) {
+@@ -2202,17 +2219,23 @@ int __uvc_ctrl_commit(struct uvc_fh *handle, int rollback,
+ 				ctrls->error_idx =
+ 					uvc_ctrl_find_ctrl_idx(entity, ctrls,
+ 							       err_ctrl);
+-			goto done;
++			/*
++			 * When we fail to commit an entity, we need to
++			 * restore the UVC_CTRL_DATA_BACKUP for all the
++			 * controls in the other entities, otherwise our cache
++			 * and the hardware will be out of sync.
++			 */
++			rollback = 1;
++
++			ret_out = ret;
+ 		} else if (ret > 0 && !rollback) {
+ 			uvc_ctrl_send_events(handle, entity,
+ 					     ctrls->controls, ctrls->count);
+ 		}
+ 	}
+ 
+-	ret = 0;
+-done:
+ 	mutex_unlock(&chain->ctrl_mutex);
+-	return ret;
++	return ret_out;
+ }
+ 
+ static int uvc_mapping_get_xctrl_compound(struct uvc_video_chain *chain,
+@@ -3237,6 +3260,7 @@ int uvc_ctrl_init_device(struct uvc_device *dev)
+ void uvc_ctrl_cleanup_fh(struct uvc_fh *handle)
+ {
+ 	struct uvc_entity *entity;
++	int i;
+ 
+ 	guard(mutex)(&handle->chain->ctrl_mutex);
+ 
+@@ -3251,7 +3275,11 @@ void uvc_ctrl_cleanup_fh(struct uvc_fh *handle)
+ 		}
+ 	}
+ 
+-	WARN_ON(handle->pending_async_ctrls);
++	if (!WARN_ON(handle->pending_async_ctrls))
++		return;
++
++	for (i = 0; i < handle->pending_async_ctrls; i++)
++		uvc_pm_put(handle->stream->dev);
+ }
+ 
+ /*
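
The uvc_ctrl.c changes tie one uvc_pm_get() reference to each pending asynchronous control: the reference is taken when a control first records the handle, dropped when the handle is cleared, and the file-handle cleanup path releases one reference per control still pending. A toy refcount-pairing model of that bookkeeping:

    #include <assert.h>
    #include <stdio.h>

    static int pm_refs;

    static int  pm_get(void) { pm_refs++; return 0; }
    static void pm_put(void) { assert(pm_refs > 0); pm_refs--; }

    struct handle { int pending; };

    /* One PM reference per pending async control ... */
    static void record_pending(struct handle *h)
    {
        if (pm_get())
            return;
        h->pending++;
    }

    /* ... and exactly that many releases when the handle goes away. */
    static void cleanup(struct handle *h)
    {
        for (; h->pending > 0; h->pending--)
            pm_put();
    }

    int main(void)
    {
        struct handle h = { 0 };

        record_pending(&h);
        record_pending(&h);
        cleanup(&h);
        printf("refs left: %d\n", pm_refs);     /* 0: balanced */
        return 0;
    }
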
+diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
+index 39065db44e864b..8bccf7e17528b6 100644
+--- a/drivers/media/usb/uvc/uvc_v4l2.c
++++ b/drivers/media/usb/uvc/uvc_v4l2.c
+@@ -26,6 +26,27 @@
+ 
+ #include "uvcvideo.h"
+ 
++int uvc_pm_get(struct uvc_device *dev)
++{
++	int ret;
++
++	ret = usb_autopm_get_interface(dev->intf);
++	if (ret)
++		return ret;
++
++	ret = uvc_status_get(dev);
++	if (ret)
++		usb_autopm_put_interface(dev->intf);
++
++	return ret;
++}
++
++void uvc_pm_put(struct uvc_device *dev)
++{
++	uvc_status_put(dev);
++	usb_autopm_put_interface(dev->intf);
++}
++
+ static int uvc_acquire_privileges(struct uvc_fh *handle);
+ 
+ static int uvc_control_add_xu_mapping(struct uvc_video_chain *chain,
+@@ -642,20 +663,13 @@ static int uvc_v4l2_open(struct file *file)
+ 	stream = video_drvdata(file);
+ 	uvc_dbg(stream->dev, CALLS, "%s\n", __func__);
+ 
+-	ret = usb_autopm_get_interface(stream->dev->intf);
+-	if (ret < 0)
+-		return ret;
+-
+ 	/* Create the device handle. */
+ 	handle = kzalloc(sizeof(*handle), GFP_KERNEL);
+-	if (handle == NULL) {
+-		usb_autopm_put_interface(stream->dev->intf);
++	if (!handle)
+ 		return -ENOMEM;
+-	}
+ 
+-	ret = uvc_status_get(stream->dev);
++	ret = uvc_pm_get(stream->dev);
+ 	if (ret) {
+-		usb_autopm_put_interface(stream->dev->intf);
+ 		kfree(handle);
+ 		return ret;
+ 	}
+@@ -683,6 +697,9 @@ static int uvc_v4l2_release(struct file *file)
+ 	if (uvc_has_privileges(handle))
+ 		uvc_queue_release(&stream->queue);
+ 
++	if (handle->is_streaming)
++		uvc_pm_put(stream->dev);
++
+ 	/* Release the file handle. */
+ 	uvc_dismiss_privileges(handle);
+ 	v4l2_fh_del(&handle->vfh);
+@@ -690,9 +707,7 @@ static int uvc_v4l2_release(struct file *file)
+ 	kfree(handle);
+ 	file->private_data = NULL;
+ 
+-	uvc_status_put(stream->dev);
+-
+-	usb_autopm_put_interface(stream->dev->intf);
++	uvc_pm_put(stream->dev);
+ 	return 0;
+ }
+ 
+@@ -841,11 +856,23 @@ static int uvc_ioctl_streamon(struct file *file, void *fh,
+ 	if (!uvc_has_privileges(handle))
+ 		return -EBUSY;
+ 
+-	mutex_lock(&stream->mutex);
++	guard(mutex)(&stream->mutex);
++
++	if (handle->is_streaming)
++		return 0;
++
+ 	ret = uvc_queue_streamon(&stream->queue, type);
+-	mutex_unlock(&stream->mutex);
++	if (ret)
++		return ret;
+ 
+-	return ret;
++	ret = uvc_pm_get(stream->dev);
++	if (ret) {
++		uvc_queue_streamoff(&stream->queue, type);
++		return ret;
++	}
++	handle->is_streaming = true;
++
++	return 0;
+ }
+ 
+ static int uvc_ioctl_streamoff(struct file *file, void *fh,
+@@ -857,9 +884,13 @@ static int uvc_ioctl_streamoff(struct file *file, void *fh,
+ 	if (!uvc_has_privileges(handle))
+ 		return -EBUSY;
+ 
+-	mutex_lock(&stream->mutex);
++	guard(mutex)(&stream->mutex);
++
+ 	uvc_queue_streamoff(&stream->queue, type);
+-	mutex_unlock(&stream->mutex);
++	if (handle->is_streaming) {
++		handle->is_streaming = false;
++		uvc_pm_put(stream->dev);
++	}
+ 
+ 	return 0;
+ }
+@@ -1358,9 +1389,11 @@ static int uvc_v4l2_put_xu_query(const struct uvc_xu_control_query *kp,
+ #define UVCIOC_CTRL_MAP32	_IOWR('u', 0x20, struct uvc_xu_control_mapping32)
+ #define UVCIOC_CTRL_QUERY32	_IOWR('u', 0x21, struct uvc_xu_control_query32)
+ 
++DEFINE_FREE(uvc_pm_put, struct uvc_device *, if (_T) uvc_pm_put(_T))
+ static long uvc_v4l2_compat_ioctl32(struct file *file,
+ 		     unsigned int cmd, unsigned long arg)
+ {
++	struct uvc_device *uvc_device __free(uvc_pm_put) = NULL;
+ 	struct uvc_fh *handle = file->private_data;
+ 	union {
+ 		struct uvc_xu_control_mapping xmap;
+@@ -1369,6 +1402,12 @@ static long uvc_v4l2_compat_ioctl32(struct file *file,
+ 	void __user *up = compat_ptr(arg);
+ 	long ret;
+ 
++	ret = uvc_pm_get(handle->stream->dev);
++	if (ret)
++		return ret;
++
++	uvc_device = handle->stream->dev;
++
+ 	switch (cmd) {
+ 	case UVCIOC_CTRL_MAP32:
+ 		ret = uvc_v4l2_get_xu_mapping(&karg.xmap, up);
+@@ -1403,6 +1442,22 @@ static long uvc_v4l2_compat_ioctl32(struct file *file,
+ }
+ #endif
+ 
++static long uvc_v4l2_unlocked_ioctl(struct file *file,
++				    unsigned int cmd, unsigned long arg)
++{
++	struct uvc_fh *handle = file->private_data;
++	int ret;
++
++	ret = uvc_pm_get(handle->stream->dev);
++	if (ret)
++		return ret;
++
++	ret = video_ioctl2(file, cmd, arg);
++
++	uvc_pm_put(handle->stream->dev);
++	return ret;
++}
++
+ static ssize_t uvc_v4l2_read(struct file *file, char __user *data,
+ 		    size_t count, loff_t *ppos)
+ {
+@@ -1487,7 +1542,7 @@ const struct v4l2_file_operations uvc_fops = {
+ 	.owner		= THIS_MODULE,
+ 	.open		= uvc_v4l2_open,
+ 	.release	= uvc_v4l2_release,
+-	.unlocked_ioctl	= video_ioctl2,
++	.unlocked_ioctl	= uvc_v4l2_unlocked_ioctl,
+ #ifdef CONFIG_COMPAT
+ 	.compat_ioctl32	= uvc_v4l2_compat_ioctl32,
+ #endif
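
guard(mutex) and the DEFINE_FREE/__free pair in the uvc_v4l2.c hunks both ride on the compiler's cleanup attribute, which runs a designated function when a variable leaves scope, on every return path; the kernel wraps this in linux/cleanup.h. A minimal standalone model of the mechanism (GCC/Clang extension):

    #include <stdio.h>

    static void release(int **resp)
    {
        if (*resp)
            printf("released %d\n", **resp);
    }

    int main(void)
    {
        int resource = 42;
        /* release() runs automatically when 'held' goes out of scope,
         * no matter which return path is taken. */
        int *held __attribute__((cleanup(release))) = &resource;

        printf("using %d\n", *held);
        return 0;
    }
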
+diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
+index b4ee701835fc01..b9f8eb62ba1d82 100644
+--- a/drivers/media/usb/uvc/uvcvideo.h
++++ b/drivers/media/usb/uvc/uvcvideo.h
+@@ -630,6 +630,7 @@ struct uvc_fh {
+ 	struct uvc_streaming *stream;
+ 	enum uvc_handle_state state;
+ 	unsigned int pending_async_ctrls;
++	bool is_streaming;
+ };
+ 
+ /* ------------------------------------------------------------------------
+@@ -767,6 +768,10 @@ void uvc_status_suspend(struct uvc_device *dev);
+ int uvc_status_get(struct uvc_device *dev);
+ void uvc_status_put(struct uvc_device *dev);
+ 
++/* PM */
++int uvc_pm_get(struct uvc_device *dev);
++void uvc_pm_put(struct uvc_device *dev);
++
+ /* Controls */
+ extern const struct v4l2_subscribed_event_ops uvc_ctrl_sub_ev_ops;
+ 
+diff --git a/drivers/mfd/88pm886.c b/drivers/mfd/88pm886.c
+index 891fdce5d8c124..177878aa32f86e 100644
+--- a/drivers/mfd/88pm886.c
++++ b/drivers/mfd/88pm886.c
+@@ -124,7 +124,11 @@ static int pm886_probe(struct i2c_client *client)
+ 	if (err)
+ 		return dev_err_probe(dev, err, "Failed to register power off handler\n");
+ 
+-	device_init_wakeup(dev, device_property_read_bool(dev, "wakeup-source"));
++	if (device_property_read_bool(dev, "wakeup-source")) {
++		err = devm_device_init_wakeup(dev);
++		if (err)
++			return dev_err_probe(dev, err, "Failed to init wakeup\n");
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/mfd/max14577.c b/drivers/mfd/max14577.c
+index 6fce79ec2dc646..7e7e8af9af2246 100644
+--- a/drivers/mfd/max14577.c
++++ b/drivers/mfd/max14577.c
+@@ -456,6 +456,7 @@ static void max14577_i2c_remove(struct i2c_client *i2c)
+ {
+ 	struct max14577 *max14577 = i2c_get_clientdata(i2c);
+ 
++	device_init_wakeup(max14577->dev, false);
+ 	mfd_remove_devices(max14577->dev);
+ 	regmap_del_irq_chip(max14577->irq, max14577->irq_data);
+ 	if (max14577->dev_type == MAXIM_DEVICE_TYPE_MAX77836)
+diff --git a/drivers/mfd/max77541.c b/drivers/mfd/max77541.c
+index d77c31c86e4356..f91b4f5373ce93 100644
+--- a/drivers/mfd/max77541.c
++++ b/drivers/mfd/max77541.c
+@@ -152,7 +152,7 @@ static int max77541_pmic_setup(struct device *dev)
+ 	if (ret)
+ 		return dev_err_probe(dev, ret, "Failed to initialize IRQ\n");
+ 
+-	ret = device_init_wakeup(dev, true);
++	ret = devm_device_init_wakeup(dev);
+ 	if (ret)
+ 		return dev_err_probe(dev, ret, "Unable to init wakeup\n");
+ 
+diff --git a/drivers/mfd/max77705.c b/drivers/mfd/max77705.c
+index 60c457c21d952c..6b263bacb8c28d 100644
+--- a/drivers/mfd/max77705.c
++++ b/drivers/mfd/max77705.c
+@@ -131,7 +131,9 @@ static int max77705_i2c_probe(struct i2c_client *i2c)
+ 	if (ret)
+ 		return dev_err_probe(dev, ret, "Failed to register child devices\n");
+ 
+-	device_init_wakeup(dev, true);
++	ret = devm_device_init_wakeup(dev);
++	if (ret)
++		return dev_err_probe(dev, ret, "Failed to init wakeup\n");
+ 
+ 	return 0;
+ }
+diff --git a/drivers/mfd/sprd-sc27xx-spi.c b/drivers/mfd/sprd-sc27xx-spi.c
+index 7186e2108108f0..d6b4350779e6ae 100644
+--- a/drivers/mfd/sprd-sc27xx-spi.c
++++ b/drivers/mfd/sprd-sc27xx-spi.c
+@@ -210,7 +210,10 @@ static int sprd_pmic_probe(struct spi_device *spi)
+ 		return ret;
+ 	}
+ 
+-	device_init_wakeup(&spi->dev, true);
++	ret = devm_device_init_wakeup(&spi->dev);
++	if (ret)
++		return dev_err_probe(&spi->dev, ret, "Failed to init wakeup\n");
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/misc/tps6594-pfsm.c b/drivers/misc/tps6594-pfsm.c
+index 0a24ce44cc37c4..6db1c9d48f8fcf 100644
+--- a/drivers/misc/tps6594-pfsm.c
++++ b/drivers/misc/tps6594-pfsm.c
+@@ -281,6 +281,9 @@ static int tps6594_pfsm_probe(struct platform_device *pdev)
+ 	pfsm->miscdev.minor = MISC_DYNAMIC_MINOR;
+ 	pfsm->miscdev.name = devm_kasprintf(dev, GFP_KERNEL, "pfsm-%ld-0x%02x",
+ 					    tps->chip_id, tps->reg);
++	if (!pfsm->miscdev.name)
++		return -ENOMEM;
++
+ 	pfsm->miscdev.fops = &tps6594_pfsm_fops;
+ 	pfsm->miscdev.parent = dev->parent;
+ 	pfsm->chip_id = tps->chip_id;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index c365a9e64f7281..9de6eefad97913 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -2958,6 +2958,7 @@ static int __bnxt_poll_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ {
+ 	struct bnxt_napi *bnapi = cpr->bnapi;
+ 	u32 raw_cons = cpr->cp_raw_cons;
++	bool flush_xdp = false;
+ 	u32 cons;
+ 	int rx_pkts = 0;
+ 	u8 event = 0;
+@@ -3011,6 +3012,8 @@ static int __bnxt_poll_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ 			else
+ 				rc = bnxt_force_rx_discard(bp, cpr, &raw_cons,
+ 							   &event);
++			if (event & BNXT_REDIRECT_EVENT)
++				flush_xdp = true;
+ 			if (likely(rc >= 0))
+ 				rx_pkts += rc;
+ 			/* Increment rx_pkts when rc is -ENOMEM to count towards
+@@ -3035,7 +3038,7 @@ static int __bnxt_poll_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ 		}
+ 	}
+ 
+-	if (event & BNXT_REDIRECT_EVENT) {
++	if (flush_xdp) {
+ 		xdp_do_flush();
+ 		event &= ~BNXT_REDIRECT_EVENT;
+ 	}
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_hw.h b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+index 4098f01479bc0a..53e8d18c7a34a5 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_hw.h
++++ b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+@@ -507,7 +507,7 @@ static inline u64 _enetc_rd_reg64(void __iomem *reg)
+ 		tmp = ioread32(reg + 4);
+ 	} while (high != tmp);
+ 
+-	return le64_to_cpu((__le64)high << 32 | low);
++	return (u64)high << 32 | low;
+ }
+ #endif
+ 
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+index 2ac59564ded188..d10b58ebf6034a 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+@@ -321,7 +321,7 @@ static int ionic_xdp_post_frame(struct ionic_queue *q, struct xdp_frame *frame,
+ 					   len, DMA_TO_DEVICE);
+ 	} else /* XDP_REDIRECT */ {
+ 		dma_addr = ionic_tx_map_single(q, frame->data, len);
+-		if (!dma_addr)
++		if (dma_addr == DMA_MAPPING_ERROR)
+ 			return -EIO;
+ 	}
+ 
+@@ -357,7 +357,7 @@ static int ionic_xdp_post_frame(struct ionic_queue *q, struct xdp_frame *frame,
+ 			} else {
+ 				dma_addr = ionic_tx_map_frag(q, frag, 0,
+ 							     skb_frag_size(frag));
+-				if (dma_mapping_error(q->dev, dma_addr)) {
++				if (dma_addr == DMA_MAPPING_ERROR) {
+ 					ionic_tx_desc_unmap_bufs(q, desc_info);
+ 					return -EIO;
+ 				}
+@@ -1083,7 +1083,7 @@ static dma_addr_t ionic_tx_map_single(struct ionic_queue *q,
+ 		net_warn_ratelimited("%s: DMA single map failed on %s!\n",
+ 				     dev_name(dev), q->name);
+ 		q_to_tx_stats(q)->dma_map_err++;
+-		return 0;
++		return DMA_MAPPING_ERROR;
+ 	}
+ 	return dma_addr;
+ }
+@@ -1100,7 +1100,7 @@ static dma_addr_t ionic_tx_map_frag(struct ionic_queue *q,
+ 		net_warn_ratelimited("%s: DMA frag map failed on %s!\n",
+ 				     dev_name(dev), q->name);
+ 		q_to_tx_stats(q)->dma_map_err++;
+-		return 0;
++		return DMA_MAPPING_ERROR;
+ 	}
+ 	return dma_addr;
+ }
+@@ -1116,7 +1116,7 @@ static int ionic_tx_map_skb(struct ionic_queue *q, struct sk_buff *skb,
+ 	int frag_idx;
+ 
+ 	dma_addr = ionic_tx_map_single(q, skb->data, skb_headlen(skb));
+-	if (!dma_addr)
++	if (dma_addr == DMA_MAPPING_ERROR)
+ 		return -EIO;
+ 	buf_info->dma_addr = dma_addr;
+ 	buf_info->len = skb_headlen(skb);
+@@ -1126,7 +1126,7 @@ static int ionic_tx_map_skb(struct ionic_queue *q, struct sk_buff *skb,
+ 	nfrags = skb_shinfo(skb)->nr_frags;
+ 	for (frag_idx = 0; frag_idx < nfrags; frag_idx++, frag++) {
+ 		dma_addr = ionic_tx_map_frag(q, frag, 0, skb_frag_size(frag));
+-		if (!dma_addr)
++		if (dma_addr == DMA_MAPPING_ERROR)
+ 			goto dma_fail;
+ 		buf_info->dma_addr = dma_addr;
+ 		buf_info->len = skb_frag_size(frag);
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_lib.c b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
+index e69eaa65e0de85..f4242bbad5fc57 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_lib.c
++++ b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
+@@ -2496,7 +2496,7 @@ static int wx_alloc_page_pool(struct wx_ring *rx_ring)
+ 	struct page_pool_params pp_params = {
+ 		.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
+ 		.order = 0,
+-		.pool_size = rx_ring->size,
++		.pool_size = rx_ring->count,
+ 		.nid = dev_to_node(rx_ring->dev),
+ 		.dev = rx_ring->dev,
+ 		.dma_dir = DMA_FROM_DEVICE,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/fw.c b/drivers/net/wireless/intel/iwlwifi/mld/fw.c
+index 4b083d447ee2f7..6be9366bd4b14d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/fw.c
+@@ -339,10 +339,6 @@ int iwl_mld_load_fw(struct iwl_mld *mld)
+ 	if (ret)
+ 		goto err;
+ 
+-	ret = iwl_mld_init_mcc(mld);
+-	if (ret)
+-		goto err;
+-
+ 	mld->fw_status.running = true;
+ 
+ 	return 0;
+@@ -535,6 +531,10 @@ int iwl_mld_start_fw(struct iwl_mld *mld)
+ 	if (ret)
+ 		goto error;
+ 
++	ret = iwl_mld_init_mcc(mld);
++	if (ret)
++		goto error;
++
+ 	return 0;
+ 
+ error:
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 93a8119ad5ca66..d253b829011107 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1996,21 +1996,41 @@ static void nvme_configure_metadata(struct nvme_ctrl *ctrl,
+ }
+ 
+ 
+-static void nvme_update_atomic_write_disk_info(struct nvme_ns *ns,
+-			struct nvme_id_ns *id, struct queue_limits *lim,
+-			u32 bs, u32 atomic_bs)
++static u32 nvme_configure_atomic_write(struct nvme_ns *ns,
++		struct nvme_id_ns *id, struct queue_limits *lim, u32 bs)
+ {
+-	unsigned int boundary = 0;
++	u32 atomic_bs, boundary = 0;
+ 
+-	if (id->nsfeat & NVME_NS_FEAT_ATOMICS && id->nawupf) {
+-		if (le16_to_cpu(id->nabspf))
++	/*
++	 * We do not support an offset for the atomic boundaries.
++	 */
++	if (id->nabo)
++		return bs;
++
++	if ((id->nsfeat & NVME_NS_FEAT_ATOMICS) && id->nawupf) {
++		/*
++		 * Use the per-namespace atomic write unit when available.
++		 */
++		atomic_bs = (1 + le16_to_cpu(id->nawupf)) * bs;
++		if (id->nabspf)
+ 			boundary = (le16_to_cpu(id->nabspf) + 1) * bs;
++	} else {
++		/*
++		 * Use the controller wide atomic write unit.  This sucks
++		 * because the limit is defined in terms of logical blocks while
++		 * namespaces can have different formats, and because there is
++		 * no clear language in the specification prohibiting different
++		 * values for different controllers in the subsystem.
++		 */
++		atomic_bs = (1 + ns->ctrl->subsys->awupf) * bs;
+ 	}
++
+ 	lim->atomic_write_hw_max = atomic_bs;
+ 	lim->atomic_write_hw_boundary = boundary;
+ 	lim->atomic_write_hw_unit_min = bs;
+ 	lim->atomic_write_hw_unit_max = rounddown_pow_of_two(atomic_bs);
+ 	lim->features |= BLK_FEAT_ATOMIC_WRITES;
++	return atomic_bs;
+ }
+ 
+ static u32 nvme_max_drv_segments(struct nvme_ctrl *ctrl)
+@@ -2048,34 +2068,8 @@ static bool nvme_update_disk_info(struct nvme_ns *ns, struct nvme_id_ns *id,
+ 		valid = false;
+ 	}
+ 
+-	atomic_bs = phys_bs = bs;
+-	if (id->nabo == 0) {
+-		/*
+-		 * Bit 1 indicates whether NAWUPF is defined for this namespace
+-		 * and whether it should be used instead of AWUPF. If NAWUPF ==
+-		 * 0 then AWUPF must be used instead.
+-		 */
+-		if (id->nsfeat & NVME_NS_FEAT_ATOMICS && id->nawupf)
+-			atomic_bs = (1 + le16_to_cpu(id->nawupf)) * bs;
+-		else
+-			atomic_bs = (1 + ns->ctrl->awupf) * bs;
+-
+-		/*
+-		 * Set subsystem atomic bs.
+-		 */
+-		if (ns->ctrl->subsys->atomic_bs) {
+-			if (atomic_bs != ns->ctrl->subsys->atomic_bs) {
+-				dev_err_ratelimited(ns->ctrl->device,
+-					"%s: Inconsistent Atomic Write Size, Namespace will not be added: Subsystem=%d bytes, Controller/Namespace=%d bytes\n",
+-					ns->disk ? ns->disk->disk_name : "?",
+-					ns->ctrl->subsys->atomic_bs,
+-					atomic_bs);
+-			}
+-		} else
+-			ns->ctrl->subsys->atomic_bs = atomic_bs;
+-
+-		nvme_update_atomic_write_disk_info(ns, id, lim, bs, atomic_bs);
+-	}
++	phys_bs = bs;
++	atomic_bs = nvme_configure_atomic_write(ns, id, lim, bs);
+ 
+ 	if (id->nsfeat & NVME_NS_FEAT_IO_OPT) {
+ 		/* NPWG = Namespace Preferred Write Granularity */
+@@ -2215,16 +2209,6 @@ static int nvme_update_ns_info_block(struct nvme_ns *ns,
+ 	if (!nvme_update_disk_info(ns, id, &lim))
+ 		capacity = 0;
+ 
+-	/*
+-	 * Validate the max atomic write size fits within the subsystem's
+-	 * atomic write capabilities.
+-	 */
+-	if (lim.atomic_write_hw_max > ns->ctrl->subsys->atomic_bs) {
+-		blk_mq_unfreeze_queue(ns->disk->queue, memflags);
+-		ret = -ENXIO;
+-		goto out;
+-	}
+-
+ 	nvme_config_discard(ns, &lim);
+ 	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
+ 	    ns->head->ids.csi == NVME_CSI_ZNS)
+@@ -3040,6 +3024,7 @@ static int nvme_init_subsystem(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
+ 	memcpy(subsys->model, id->mn, sizeof(subsys->model));
+ 	subsys->vendor_id = le16_to_cpu(id->vid);
+ 	subsys->cmic = id->cmic;
++	subsys->awupf = le16_to_cpu(id->awupf);
+ 
+ 	/* Versions prior to 1.4 don't necessarily report a valid type */
+ 	if (id->cntrltype == NVME_CTRL_DISC ||
+@@ -3369,6 +3354,15 @@ static int nvme_init_identify(struct nvme_ctrl *ctrl)
+ 		if (ret)
+ 			goto out_free;
+ 	}
++
++	if (le16_to_cpu(id->awupf) != ctrl->subsys->awupf) {
++		dev_err_ratelimited(ctrl->device,
++			"inconsistent AWUPF, controller not added (%u/%u).\n",
++			le16_to_cpu(id->awupf), ctrl->subsys->awupf);
++		ret = -EINVAL;
++		goto out_free;
++	}
++
+ 	memcpy(ctrl->subsys->firmware_rev, id->fr,
+ 	       sizeof(ctrl->subsys->firmware_rev));
+ 
+@@ -3464,7 +3458,6 @@ static int nvme_init_identify(struct nvme_ctrl *ctrl)
+ 		dev_pm_qos_expose_latency_tolerance(ctrl->device);
+ 	else if (!ctrl->apst_enabled && prev_apst_enabled)
+ 		dev_pm_qos_hide_latency_tolerance(ctrl->device);
+-	ctrl->awupf = le16_to_cpu(id->awupf);
+ out_free:
+ 	kfree(id);
+ 	return ret;
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 8fc4683418a3a6..d8c4e545f732c3 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -410,7 +410,6 @@ struct nvme_ctrl {
+ 
+ 	enum nvme_ctrl_type cntrltype;
+ 	enum nvme_dctype dctype;
+-	u16 awupf; /* 0's based value. */
+ };
+ 
+ static inline enum nvme_ctrl_state nvme_ctrl_state(struct nvme_ctrl *ctrl)
+@@ -443,11 +442,11 @@ struct nvme_subsystem {
+ 	u8			cmic;
+ 	enum nvme_subsys_type	subtype;
+ 	u16			vendor_id;
++	u16			awupf; /* 0's based value. */
+ 	struct ida		ns_ida;
+ #ifdef CONFIG_NVME_MULTIPATH
+ 	enum nvme_iopolicy	iopolicy;
+ #endif
+-	u32			atomic_bs;
+ };
+ 
+ /*
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 947fac9128b30d..b882ee6ef40f6e 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -452,7 +452,8 @@ nvme_tcp_fetch_request(struct nvme_tcp_queue *queue)
+ 			return NULL;
+ 	}
+ 
+-	list_del(&req->entry);
++	list_del_init(&req->entry);
++	init_llist_node(&req->lentry);
+ 	return req;
+ }
+ 
+@@ -560,6 +561,8 @@ static int nvme_tcp_init_request(struct blk_mq_tag_set *set,
+ 	req->queue = queue;
+ 	nvme_req(rq)->ctrl = &ctrl->ctrl;
+ 	nvme_req(rq)->cmd = &pdu->cmd;
++	init_llist_node(&req->lentry);
++	INIT_LIST_HEAD(&req->entry);
+ 
+ 	return 0;
+ }
+@@ -764,6 +767,14 @@ static int nvme_tcp_handle_r2t(struct nvme_tcp_queue *queue,
+ 		return -EPROTO;
+ 	}
+ 
++	if (llist_on_list(&req->lentry) ||
++	    !list_empty(&req->entry)) {
++		dev_err(queue->ctrl->ctrl.device,
++			"req %d unexpected r2t while processing request\n",
++			rq->tag);
++		return -EPROTO;
++	}
++
+ 	req->pdu_len = 0;
+ 	req->h2cdata_left = r2t_length;
+ 	req->h2cdata_offset = r2t_offset;
+@@ -1348,7 +1359,7 @@ static int nvme_tcp_try_recv(struct nvme_tcp_queue *queue)
+ 	queue->nr_cqe = 0;
+ 	consumed = sock->ops->read_sock(sk, &rd_desc, nvme_tcp_recv_skb);
+ 	release_sock(sk);
+-	return consumed;
++	return consumed == -EAGAIN ? 0 : consumed;
+ }
+ 
+ static void nvme_tcp_io_work(struct work_struct *w)
+@@ -1376,6 +1387,11 @@ static void nvme_tcp_io_work(struct work_struct *w)
+ 		else if (unlikely(result < 0))
+ 			return;
+ 
++		/* did we get some space after spending time in recv? */
++		if (nvme_tcp_queue_has_pending(queue) &&
++		    sk_stream_is_writeable(queue->sock->sk))
++			pending = true;
++
+ 		if (!pending || !queue->rd_enabled)
+ 			return;
+ 
+@@ -2636,6 +2652,8 @@ static void nvme_tcp_submit_async_event(struct nvme_ctrl *arg)
+ 	ctrl->async_req.offset = 0;
+ 	ctrl->async_req.curr_bio = NULL;
+ 	ctrl->async_req.data_len = 0;
++	init_llist_node(&ctrl->async_req.lentry);
++	INIT_LIST_HEAD(&ctrl->async_req.entry);
+ 
+ 	nvme_tcp_queue_request(&ctrl->async_req, true, true);
+ }
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index ea5c06371171ff..1b6fbf507166f4 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -48,6 +48,8 @@
+ #define IMX95_PCIE_SS_RW_REG_0			0xf0
+ #define IMX95_PCIE_REF_CLKEN			BIT(23)
+ #define IMX95_PCIE_PHY_CR_PARA_SEL		BIT(9)
++#define IMX95_PCIE_SS_RW_REG_1			0xf4
++#define IMX95_PCIE_SYS_AUX_PWR_DET		BIT(31)
+ 
+ #define IMX95_PE0_GEN_CTRL_1			0x1050
+ #define IMX95_PCIE_DEVICE_TYPE			GENMASK(3, 0)
+@@ -231,6 +233,19 @@ static unsigned int imx_pcie_grp_offset(const struct imx_pcie *imx_pcie)
+ 
+ static int imx95_pcie_init_phy(struct imx_pcie *imx_pcie)
+ {
++	/*
++	 * ERR051624: The Controller Without Vaux Cannot Exit L23 Ready
++	 * Through Beacon or PERST# De-assertion
++	 *
++	 * When the auxiliary power is not available, the controller
++	 * cannot exit from L23 Ready with beacon or PERST# de-assertion
++	 * when main power is not removed.
++	 *
++	 * Workaround: Set SS_RW_REG_1[SYS_AUX_PWR_DET] to 1.
++	 */
++	regmap_set_bits(imx_pcie->iomuxc_gpr, IMX95_PCIE_SS_RW_REG_1,
++			IMX95_PCIE_SYS_AUX_PWR_DET);
++
+ 	regmap_update_bits(imx_pcie->iomuxc_gpr,
+ 			IMX95_PCIE_SS_RW_REG_0,
+ 			IMX95_PCIE_PHY_CR_PARA_SEL,
+diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
+index 97d76d3dc066ef..be348b341e3cfb 100644
+--- a/drivers/pci/controller/dwc/pcie-designware.c
++++ b/drivers/pci/controller/dwc/pcie-designware.c
+@@ -797,22 +797,19 @@ static void dw_pcie_link_set_max_link_width(struct dw_pcie *pci, u32 num_lanes)
+ 	/* Set link width speed control register */
+ 	lwsc = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
+ 	lwsc &= ~PORT_LOGIC_LINK_WIDTH_MASK;
++	lwsc |= PORT_LOGIC_LINK_WIDTH_1_LANES;
+ 	switch (num_lanes) {
+ 	case 1:
+ 		plc |= PORT_LINK_MODE_1_LANES;
+-		lwsc |= PORT_LOGIC_LINK_WIDTH_1_LANES;
+ 		break;
+ 	case 2:
+ 		plc |= PORT_LINK_MODE_2_LANES;
+-		lwsc |= PORT_LOGIC_LINK_WIDTH_2_LANES;
+ 		break;
+ 	case 4:
+ 		plc |= PORT_LINK_MODE_4_LANES;
+-		lwsc |= PORT_LOGIC_LINK_WIDTH_4_LANES;
+ 		break;
+ 	case 8:
+ 		plc |= PORT_LINK_MODE_8_LANES;
+-		lwsc |= PORT_LOGIC_LINK_WIDTH_8_LANES;
+ 		break;
+ 	default:
+ 		dev_err(pci->dev, "num-lanes %u: invalid value\n", num_lanes);
+diff --git a/drivers/pci/controller/pcie-apple.c b/drivers/pci/controller/pcie-apple.c
+index 32f57b8a6ecbd5..8f7fc9161760ce 100644
+--- a/drivers/pci/controller/pcie-apple.c
++++ b/drivers/pci/controller/pcie-apple.c
+@@ -584,6 +584,9 @@ static int apple_pcie_setup_port(struct apple_pcie *pcie,
+ 	list_add_tail(&port->entry, &pcie->ports);
+ 	init_completion(&pcie->event);
+ 
++	/* In the success path, we keep a reference to np around */
++	of_node_get(np);
++
+ 	ret = apple_pcie_port_register_irqs(port);
+ 	WARN_ON(ret);
+ 
+diff --git a/drivers/s390/crypto/pkey_api.c b/drivers/s390/crypto/pkey_api.c
+index 3a39e167bdbff8..d62fea0fbdfc1f 100644
+--- a/drivers/s390/crypto/pkey_api.c
++++ b/drivers/s390/crypto/pkey_api.c
+@@ -85,7 +85,7 @@ static void *_copy_apqns_from_user(void __user *uapqns, size_t nr_apqns)
+ 	if (!uapqns || nr_apqns == 0)
+ 		return NULL;
+ 
+-	return memdup_user(uapqns, nr_apqns * sizeof(struct pkey_apqn));
++	return memdup_array_user(uapqns, nr_apqns, sizeof(struct pkey_apqn));
+ }
+ 
+ static int pkey_ioctl_genseck(struct pkey_genseck __user *ugs)
+diff --git a/drivers/scsi/fnic/fdls_disc.c b/drivers/scsi/fnic/fdls_disc.c
+index c2b6f4eb338e65..146744ca97c243 100644
+--- a/drivers/scsi/fnic/fdls_disc.c
++++ b/drivers/scsi/fnic/fdls_disc.c
+@@ -763,47 +763,69 @@ static void fdls_send_fabric_abts(struct fnic_iport_s *iport)
+ 	iport->fabric.timer_pending = 1;
+ }
+ 
+-static void fdls_send_fdmi_abts(struct fnic_iport_s *iport)
++static uint8_t *fdls_alloc_init_fdmi_abts_frame(struct fnic_iport_s *iport,
++		uint16_t oxid)
+ {
+-	uint8_t *frame;
++	struct fc_frame_header *pfdmi_abts;
+ 	uint8_t d_id[3];
++	uint8_t *frame;
+ 	struct fnic *fnic = iport->fnic;
+-	struct fc_frame_header *pfabric_abts;
+-	unsigned long fdmi_tov;
+-	uint16_t oxid;
+-	uint16_t frame_size = FNIC_ETH_FCOE_HDRS_OFFSET +
+-			sizeof(struct fc_frame_header);
+ 
+ 	frame = fdls_alloc_frame(iport);
+ 	if (frame == NULL) {
+ 		FNIC_FCS_DBG(KERN_ERR, fnic->host, fnic->fnic_num,
+ 				"Failed to allocate frame to send FDMI ABTS");
+-		return;
++		return NULL;
+ 	}
+ 
+-	pfabric_abts = (struct fc_frame_header *) (frame + FNIC_ETH_FCOE_HDRS_OFFSET);
++	pfdmi_abts = (struct fc_frame_header *) (frame + FNIC_ETH_FCOE_HDRS_OFFSET);
+ 	fdls_init_fabric_abts_frame(frame, iport);
+ 
+ 	hton24(d_id, FC_FID_MGMT_SERV);
+-	FNIC_STD_SET_D_ID(*pfabric_abts, d_id);
++	FNIC_STD_SET_D_ID(*pfdmi_abts, d_id);
++	FNIC_STD_SET_OX_ID(*pfdmi_abts, oxid);
++
++	return frame;
++}
++
++static void fdls_send_fdmi_abts(struct fnic_iport_s *iport)
++{
++	uint8_t *frame;
++	unsigned long fdmi_tov;
++	uint16_t frame_size = FNIC_ETH_FCOE_HDRS_OFFSET +
++			sizeof(struct fc_frame_header);
+ 
+ 	if (iport->fabric.fdmi_pending & FDLS_FDMI_PLOGI_PENDING) {
+-		oxid = iport->active_oxid_fdmi_plogi;
+-		FNIC_STD_SET_OX_ID(*pfabric_abts, oxid);
++		frame = fdls_alloc_init_fdmi_abts_frame(iport,
++						iport->active_oxid_fdmi_plogi);
++		if (frame == NULL)
++			return;
++
+ 		fnic_send_fcoe_frame(iport, frame, frame_size);
+ 	} else {
+ 		if (iport->fabric.fdmi_pending & FDLS_FDMI_REG_HBA_PENDING) {
+-			oxid = iport->active_oxid_fdmi_rhba;
+-			FNIC_STD_SET_OX_ID(*pfabric_abts, oxid);
++			frame = fdls_alloc_init_fdmi_abts_frame(iport,
++						iport->active_oxid_fdmi_rhba);
++			if (frame == NULL)
++				return;
++
+ 			fnic_send_fcoe_frame(iport, frame, frame_size);
+ 		}
+ 		if (iport->fabric.fdmi_pending & FDLS_FDMI_RPA_PENDING) {
+-			oxid = iport->active_oxid_fdmi_rpa;
+-			FNIC_STD_SET_OX_ID(*pfabric_abts, oxid);
++			frame = fdls_alloc_init_fdmi_abts_frame(iport,
++						iport->active_oxid_fdmi_rpa);
++			if (frame == NULL) {
++				if (iport->fabric.fdmi_pending & FDLS_FDMI_REG_HBA_PENDING)
++					goto arm_timer;
++				else
++					return;
++			}
++
+ 			fnic_send_fcoe_frame(iport, frame, frame_size);
+ 		}
+ 	}
+ 
++arm_timer:
+ 	fdmi_tov = jiffies + msecs_to_jiffies(2 * iport->e_d_tov);
+ 	mod_timer(&iport->fabric.fdmi_timer, round_jiffies(fdmi_tov));
+ 	iport->fabric.fdmi_pending |= FDLS_FDMI_ABORT_PENDING;
+@@ -2244,6 +2266,21 @@ void fdls_fabric_timer_callback(struct timer_list *t)
+ 	spin_unlock_irqrestore(&fnic->fnic_lock, flags);
+ }
+ 
++void fdls_fdmi_retry_plogi(struct fnic_iport_s *iport)
++{
++	struct fnic *fnic = iport->fnic;
++
++	iport->fabric.fdmi_pending = 0;
++	/* If max retries not exhausted, start over from fdmi plogi */
++	if (iport->fabric.fdmi_retry < FDLS_FDMI_MAX_RETRY) {
++		iport->fabric.fdmi_retry++;
++		FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
++					 "Retry FDMI PLOGI. FDMI retry: %d",
++					 iport->fabric.fdmi_retry);
++		fdls_send_fdmi_plogi(iport);
++	}
++}
++
+ void fdls_fdmi_timer_callback(struct timer_list *t)
+ {
+ 	struct fnic_fdls_fabric_s *fabric = from_timer(fabric, t, fdmi_timer);
+@@ -2289,14 +2326,7 @@ void fdls_fdmi_timer_callback(struct timer_list *t)
+ 	FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
+ 		"fdmi timer callback : 0x%x\n", iport->fabric.fdmi_pending);
+ 
+-	iport->fabric.fdmi_pending = 0;
+-	/* If max retries not exhaused, start over from fdmi plogi */
+-	if (iport->fabric.fdmi_retry < FDLS_FDMI_MAX_RETRY) {
+-		iport->fabric.fdmi_retry++;
+-		FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
+-					 "retry fdmi timer %d", iport->fabric.fdmi_retry);
+-		fdls_send_fdmi_plogi(iport);
+-	}
++	fdls_fdmi_retry_plogi(iport);
+ 	FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
+ 		"fdmi timer callback : 0x%x\n", iport->fabric.fdmi_pending);
+ 	spin_unlock_irqrestore(&fnic->fnic_lock, flags);
+@@ -3714,11 +3744,32 @@ static void fdls_process_fdmi_abts_rsp(struct fnic_iport_s *iport,
+ 	switch (FNIC_FRAME_TYPE(oxid)) {
+ 	case FNIC_FRAME_TYPE_FDMI_PLOGI:
+ 		fdls_free_oxid(iport, oxid, &iport->active_oxid_fdmi_plogi);
++
++		iport->fabric.fdmi_pending &= ~FDLS_FDMI_PLOGI_PENDING;
++		iport->fabric.fdmi_pending &= ~FDLS_FDMI_ABORT_PENDING;
+ 		break;
+ 	case FNIC_FRAME_TYPE_FDMI_RHBA:
++		iport->fabric.fdmi_pending &= ~FDLS_FDMI_REG_HBA_PENDING;
++
++		/* If RPA is still pending, don't turn off ABORT PENDING.
++		 * We count on the timer to detect the ABTS timeout and take
++		 * corrective action.
++		 */
++		if (!(iport->fabric.fdmi_pending & FDLS_FDMI_RPA_PENDING))
++			iport->fabric.fdmi_pending &= ~FDLS_FDMI_ABORT_PENDING;
++
+ 		fdls_free_oxid(iport, oxid, &iport->active_oxid_fdmi_rhba);
+ 		break;
+ 	case FNIC_FRAME_TYPE_FDMI_RPA:
++		iport->fabric.fdmi_pending &= ~FDLS_FDMI_RPA_PENDING;
++
++		/* If RHBA is still pending, don't turn off ABORT PENDING.
++		 * We count on the timer to detect the ABTS timeout and take
++		 * corrective action.
++		 */
++		if (!(iport->fabric.fdmi_pending & FDLS_FDMI_REG_HBA_PENDING))
++			iport->fabric.fdmi_pending &= ~FDLS_FDMI_ABORT_PENDING;
++
+ 		fdls_free_oxid(iport, oxid, &iport->active_oxid_fdmi_rpa);
+ 		break;
+ 	default:
+@@ -3728,10 +3779,16 @@ static void fdls_process_fdmi_abts_rsp(struct fnic_iport_s *iport,
+ 		break;
+ 	}
+ 
+-	timer_delete_sync(&iport->fabric.fdmi_timer);
+-	iport->fabric.fdmi_pending &= ~FDLS_FDMI_ABORT_PENDING;
+-
+-	fdls_send_fdmi_plogi(iport);
++	/*
++	 * Only if ABORT PENDING is off, delete the timer, and if no other
++	 * operations are pending, retry FDMI.
++	 * Otherwise, let the timer pop and take the appropriate action.
++	 */
++	if (!(iport->fabric.fdmi_pending & FDLS_FDMI_ABORT_PENDING)) {
++		timer_delete_sync(&iport->fabric.fdmi_timer);
++		if (!iport->fabric.fdmi_pending)
++			fdls_fdmi_retry_plogi(iport);
++	}
+ }
+ 
+ static void
+@@ -4970,9 +5027,12 @@ void fnic_fdls_link_down(struct fnic_iport_s *iport)
+ 		fdls_delete_tport(iport, tport);
+ 	}
+ 
+-	if ((fnic_fdmi_support == 1) && (iport->fabric.fdmi_pending > 0)) {
+-		timer_delete_sync(&iport->fabric.fdmi_timer);
+-		iport->fabric.fdmi_pending = 0;
++	if (fnic_fdmi_support == 1) {
++		if (iport->fabric.fdmi_pending > 0) {
++			timer_delete_sync(&iport->fabric.fdmi_timer);
++			iport->fabric.fdmi_pending = 0;
++		}
++		iport->flags &= ~FNIC_FDMI_ACTIVE;
+ 	}
+ 
+ 	FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
+diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h
+index 6c5f6046b1f5b2..c2fdc6553e627a 100644
+--- a/drivers/scsi/fnic/fnic.h
++++ b/drivers/scsi/fnic/fnic.h
+@@ -30,7 +30,7 @@
+ 
+ #define DRV_NAME		"fnic"
+ #define DRV_DESCRIPTION		"Cisco FCoE HBA Driver"
+-#define DRV_VERSION		"1.8.0.0"
++#define DRV_VERSION		"1.8.0.2"
+ #define PFX			DRV_NAME ": "
+ #define DFX                     DRV_NAME "%d: "
+ 
+diff --git a/drivers/scsi/fnic/fnic_fcs.c b/drivers/scsi/fnic/fnic_fcs.c
+index 1e8cd64f9a5c55..103ab6f1f7cd1c 100644
+--- a/drivers/scsi/fnic/fnic_fcs.c
++++ b/drivers/scsi/fnic/fnic_fcs.c
+@@ -636,6 +636,8 @@ static int fnic_send_frame(struct fnic *fnic, void *frame, int frame_len)
+ 	unsigned long flags;
+ 
+ 	pa = dma_map_single(&fnic->pdev->dev, frame, frame_len, DMA_TO_DEVICE);
++	if (dma_mapping_error(&fnic->pdev->dev, pa))
++		return -ENOMEM;
+ 
+ 	if ((fnic_fc_trace_set_data(fnic->fnic_num,
+ 				FNIC_FC_SEND | 0x80, (char *) frame,
+diff --git a/drivers/scsi/fnic/fnic_fdls.h b/drivers/scsi/fnic/fnic_fdls.h
+index 8e610b65ad57d3..531d0b37e450f1 100644
+--- a/drivers/scsi/fnic/fnic_fdls.h
++++ b/drivers/scsi/fnic/fnic_fdls.h
+@@ -394,6 +394,7 @@ void fdls_send_tport_abts(struct fnic_iport_s *iport,
+ bool fdls_delete_tport(struct fnic_iport_s *iport,
+ 		       struct fnic_tport_s *tport);
+ void fdls_fdmi_timer_callback(struct timer_list *t);
++void fdls_fdmi_retry_plogi(struct fnic_iport_s *iport);
+ 
+ /* fnic_fcs.c */
+ void fnic_fdls_init(struct fnic *fnic, int usefip);
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 5e33d411fa3d8d..6d4de06082d144 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -5910,7 +5910,11 @@ megasas_set_high_iops_queue_affinity_and_hint(struct megasas_instance *instance)
+ 	const struct cpumask *mask;
+ 
+ 	if (instance->perf_mode == MR_BALANCED_PERF_MODE) {
+-		mask = cpumask_of_node(dev_to_node(&instance->pdev->dev));
++		int nid = dev_to_node(&instance->pdev->dev);
++
++		if (nid == NUMA_NO_NODE)
++			nid = 0;
++		mask = cpumask_of_node(nid);
+ 
+ 		for (i = 0; i < instance->low_latency_index_start; i++) {
+ 			irq = pci_irq_vector(instance->pdev, i);
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index c90462783b3f9f..506a139fbd2c56 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -1958,10 +1958,10 @@ static int cqspi_probe(struct platform_device *pdev)
+ 			goto probe_setup_failed;
+ 	}
+ 
+-	ret = devm_pm_runtime_enable(dev);
+-	if (ret) {
+-		if (cqspi->rx_chan)
+-			dma_release_channel(cqspi->rx_chan);
++	pm_runtime_enable(dev);
++
++	if (cqspi->rx_chan) {
++		dma_release_channel(cqspi->rx_chan);
+ 		goto probe_setup_failed;
+ 	}
+ 
+@@ -1981,6 +1981,7 @@ static int cqspi_probe(struct platform_device *pdev)
+ 	return 0;
+ probe_setup_failed:
+ 	cqspi_controller_enable(cqspi, 0);
++	pm_runtime_disable(dev);
+ probe_reset_failed:
+ 	if (cqspi->is_jh7110)
+ 		cqspi_jh7110_disable_clk(pdev, cqspi);
+@@ -1999,7 +2000,8 @@ static void cqspi_remove(struct platform_device *pdev)
+ 	if (cqspi->rx_chan)
+ 		dma_release_channel(cqspi->rx_chan);
+ 
+-	clk_disable_unprepare(cqspi->clk);
++	if (pm_runtime_get_sync(&pdev->dev) >= 0)
++		clk_disable(cqspi->clk);
+ 
+ 	if (cqspi->is_jh7110)
+ 		cqspi_jh7110_disable_clk(pdev, cqspi);
+diff --git a/drivers/staging/rtl8723bs/core/rtw_security.c b/drivers/staging/rtl8723bs/core/rtw_security.c
+index 1e9eff01b1aa52..e9f382c280d9b0 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_security.c
++++ b/drivers/staging/rtl8723bs/core/rtw_security.c
+@@ -868,29 +868,21 @@ static signed int aes_cipher(u8 *key, uint	hdrlen,
+ 		num_blocks, payload_index;
+ 
+ 	u8 pn_vector[6];
+-	u8 mic_iv[16];
+-	u8 mic_header1[16];
+-	u8 mic_header2[16];
+-	u8 ctr_preload[16];
++	u8 mic_iv[16] = {};
++	u8 mic_header1[16] = {};
++	u8 mic_header2[16] = {};
++	u8 ctr_preload[16] = {};
+ 
+ 	/* Intermediate Buffers */
+-	u8 chain_buffer[16];
+-	u8 aes_out[16];
+-	u8 padded_buffer[16];
++	u8 chain_buffer[16] = {};
++	u8 aes_out[16] = {};
++	u8 padded_buffer[16] = {};
+ 	u8 mic[8];
+ 	uint	frtype  = GetFrameType(pframe);
+ 	uint	frsubtype  = GetFrameSubType(pframe);
+ 
+ 	frsubtype = frsubtype>>4;
+ 
+-	memset((void *)mic_iv, 0, 16);
+-	memset((void *)mic_header1, 0, 16);
+-	memset((void *)mic_header2, 0, 16);
+-	memset((void *)ctr_preload, 0, 16);
+-	memset((void *)chain_buffer, 0, 16);
+-	memset((void *)aes_out, 0, 16);
+-	memset((void *)padded_buffer, 0, 16);
+-
+ 	if ((hdrlen == WLAN_HDR_A3_LEN) || (hdrlen ==  WLAN_HDR_A3_QOS_LEN))
+ 		a4_exists = 0;
+ 	else
+@@ -1080,15 +1072,15 @@ static signed int aes_decipher(u8 *key, uint	hdrlen,
+ 			num_blocks, payload_index;
+ 	signed int res = _SUCCESS;
+ 	u8 pn_vector[6];
+-	u8 mic_iv[16];
+-	u8 mic_header1[16];
+-	u8 mic_header2[16];
+-	u8 ctr_preload[16];
++	u8 mic_iv[16] = {};
++	u8 mic_header1[16] = {};
++	u8 mic_header2[16] = {};
++	u8 ctr_preload[16] = {};
+ 
+ 		/* Intermediate Buffers */
+-	u8 chain_buffer[16];
+-	u8 aes_out[16];
+-	u8 padded_buffer[16];
++	u8 chain_buffer[16] = {};
++	u8 aes_out[16] = {};
++	u8 padded_buffer[16] = {};
+ 	u8 mic[8];
+ 
+ 	uint frtype  = GetFrameType(pframe);
+@@ -1096,14 +1088,6 @@ static signed int aes_decipher(u8 *key, uint	hdrlen,
+ 
+ 	frsubtype = frsubtype>>4;
+ 
+-	memset((void *)mic_iv, 0, 16);
+-	memset((void *)mic_header1, 0, 16);
+-	memset((void *)mic_header2, 0, 16);
+-	memset((void *)ctr_preload, 0, 16);
+-	memset((void *)chain_buffer, 0, 16);
+-	memset((void *)aes_out, 0, 16);
+-	memset((void *)padded_buffer, 0, 16);
+-
+ 	/* start to decrypt the payload */
+ 
+ 	num_blocks = (plen-8) / 16; /* plen including LLC, payload_length and mic) */
+diff --git a/drivers/tty/serial/8250/8250_pci1xxxx.c b/drivers/tty/serial/8250/8250_pci1xxxx.c
+index e9c51d4e447dd2..4c149db846925f 100644
+--- a/drivers/tty/serial/8250/8250_pci1xxxx.c
++++ b/drivers/tty/serial/8250/8250_pci1xxxx.c
+@@ -115,6 +115,7 @@
+ 
+ #define UART_RESET_REG				0x94
+ #define UART_RESET_D3_RESET_DISABLE		BIT(16)
++#define UART_RESET_HOT_RESET_DISABLE		BIT(17)
+ 
+ #define UART_BURST_STATUS_REG			0x9C
+ #define UART_TX_BURST_FIFO			0xA0
+@@ -620,6 +621,10 @@ static int pci1xxxx_suspend(struct device *dev)
+ 	}
+ 
+ 	data = readl(p + UART_RESET_REG);
++
++	if (priv->dev_rev >= 0xC0)
++		data |= UART_RESET_HOT_RESET_DISABLE;
++
+ 	writel(data | UART_RESET_D3_RESET_DISABLE, p + UART_RESET_REG);
+ 
+ 	if (wakeup)
+@@ -647,7 +652,12 @@ static int pci1xxxx_resume(struct device *dev)
+ 	}
+ 
+ 	data = readl(p + UART_RESET_REG);
++
++	if (priv->dev_rev >= 0xC0)
++		data &= ~UART_RESET_HOT_RESET_DISABLE;
++
+ 	writel(data & ~UART_RESET_D3_RESET_DISABLE, p + UART_RESET_REG);
++
+ 	iounmap(p);
+ 
+ 	for (i = 0; i < priv->nr; i++) {
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index e4b6f1bfdc95cd..a91017dc5760d7 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -235,6 +235,7 @@ struct imx_port {
+ 	enum imx_tx_state	tx_state;
+ 	struct hrtimer		trigger_start_tx;
+ 	struct hrtimer		trigger_stop_tx;
++	unsigned int		rxtl;
+ };
+ 
+ struct imx_port_ucrs {
+@@ -1339,6 +1340,7 @@ static void imx_uart_clear_rx_errors(struct imx_port *sport)
+ 
+ #define TXTL_DEFAULT 8
+ #define RXTL_DEFAULT 8 /* 8 characters or aging timer */
++#define RXTL_CONSOLE_DEFAULT 1
+ #define TXTL_DMA 8 /* DMA burst setting */
+ #define RXTL_DMA 9 /* DMA burst setting */
+ 
+@@ -1457,7 +1459,7 @@ static void imx_uart_disable_dma(struct imx_port *sport)
+ 	ucr1 &= ~(UCR1_RXDMAEN | UCR1_TXDMAEN | UCR1_ATDMAEN);
+ 	imx_uart_writel(sport, ucr1, UCR1);
+ 
+-	imx_uart_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT);
++	imx_uart_setup_ufcr(sport, TXTL_DEFAULT, sport->rxtl);
+ 
+ 	sport->dma_is_enabled = 0;
+ }
+@@ -1482,7 +1484,12 @@ static int imx_uart_startup(struct uart_port *port)
+ 		return retval;
+ 	}
+ 
+-	imx_uart_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT);
++	if (uart_console(&sport->port))
++		sport->rxtl = RXTL_CONSOLE_DEFAULT;
++	else
++		sport->rxtl = RXTL_DEFAULT;
++
++	imx_uart_setup_ufcr(sport, TXTL_DEFAULT, sport->rxtl);
+ 
+ 	/* disable the DREN bit (Data Ready interrupt enable) before
+ 	 * requesting IRQs
+@@ -1948,7 +1955,7 @@ static int imx_uart_poll_init(struct uart_port *port)
+ 	if (retval)
+ 		clk_disable_unprepare(sport->clk_ipg);
+ 
+-	imx_uart_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT);
++	imx_uart_setup_ufcr(sport, TXTL_DEFAULT, sport->rxtl);
+ 
+ 	uart_port_lock_irqsave(&sport->port, &flags);
+ 
+@@ -2040,7 +2047,7 @@ static int imx_uart_rs485_config(struct uart_port *port, struct ktermios *termio
+ 		/* If the receiver trigger is 0, set it to a default value */
+ 		ufcr = imx_uart_readl(sport, UFCR);
+ 		if ((ufcr & UFCR_RXTL_MASK) == 0)
+-			imx_uart_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT);
++			imx_uart_setup_ufcr(sport, TXTL_DEFAULT, sport->rxtl);
+ 		imx_uart_start_rx(port);
+ 	}
+ 
+@@ -2302,7 +2309,7 @@ imx_uart_console_setup(struct console *co, char *options)
+ 	else
+ 		imx_uart_console_get_options(sport, &baud, &parity, &bits);
+ 
+-	imx_uart_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT);
++	imx_uart_setup_ufcr(sport, TXTL_DEFAULT, sport->rxtl);
+ 
+ 	retval = uart_set_options(&sport->port, co, baud, parity, bits, flow);
+ 
+diff --git a/drivers/tty/serial/serial_base_bus.c b/drivers/tty/serial/serial_base_bus.c
+index 5d1677f1b651c2..cb3b127b06b613 100644
+--- a/drivers/tty/serial/serial_base_bus.c
++++ b/drivers/tty/serial/serial_base_bus.c
+@@ -72,6 +72,7 @@ static int serial_base_device_init(struct uart_port *port,
+ 	dev->parent = parent_dev;
+ 	dev->bus = &serial_base_bus_type;
+ 	dev->release = release;
++	device_set_of_node_from_dev(dev, parent_dev);
+ 
+ 	if (!serial_base_initialized) {
+ 		dev_dbg(port->dev, "uart_add_one_port() called before arch_initcall()?\n");
+diff --git a/drivers/tty/serial/uartlite.c b/drivers/tty/serial/uartlite.c
+index a41e7fc373b7c6..39c1fd1ff9cedd 100644
+--- a/drivers/tty/serial/uartlite.c
++++ b/drivers/tty/serial/uartlite.c
+@@ -880,16 +880,6 @@ static int ulite_probe(struct platform_device *pdev)
+ 	pm_runtime_set_active(&pdev->dev);
+ 	pm_runtime_enable(&pdev->dev);
+ 
+-	if (!ulite_uart_driver.state) {
+-		dev_dbg(&pdev->dev, "uartlite: calling uart_register_driver()\n");
+-		ret = uart_register_driver(&ulite_uart_driver);
+-		if (ret < 0) {
+-			dev_err(&pdev->dev, "Failed to register driver\n");
+-			clk_disable_unprepare(pdata->clk);
+-			return ret;
+-		}
+-	}
+-
+ 	ret = ulite_assign(&pdev->dev, id, res->start, irq, pdata);
+ 
+ 	pm_runtime_mark_last_busy(&pdev->dev);
+@@ -929,16 +919,25 @@ static struct platform_driver ulite_platform_driver = {
+ 
+ static int __init ulite_init(void)
+ {
++	int ret;
++
++	pr_debug("uartlite: calling uart_register_driver()\n");
++	ret = uart_register_driver(&ulite_uart_driver);
++	if (ret)
++		return ret;
+ 
+ 	pr_debug("uartlite: calling platform_driver_register()\n");
+-	return platform_driver_register(&ulite_platform_driver);
++	ret = platform_driver_register(&ulite_platform_driver);
++	if (ret)
++		uart_unregister_driver(&ulite_uart_driver);
++
++	return ret;
+ }
+ 
+ static void __exit ulite_exit(void)
+ {
+ 	platform_driver_unregister(&ulite_platform_driver);
+-	if (ulite_uart_driver.state)
+-		uart_unregister_driver(&ulite_uart_driver);
++	uart_unregister_driver(&ulite_uart_driver);
+ }
+ 
+ module_init(ulite_init);
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 04f769d907a446..e7e6bbc04d21cd 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -1379,6 +1379,7 @@ static int ufshcd_clock_scaling_prepare(struct ufs_hba *hba, u64 timeout_us)
+ 	 * make sure that there are no outstanding requests when
+ 	 * clock scaling is in progress
+ 	 */
++	mutex_lock(&hba->host->scan_mutex);
+ 	blk_mq_quiesce_tagset(&hba->host->tag_set);
+ 	mutex_lock(&hba->wb_mutex);
+ 	down_write(&hba->clk_scaling_lock);
+@@ -1389,6 +1390,7 @@ static int ufshcd_clock_scaling_prepare(struct ufs_hba *hba, u64 timeout_us)
+ 		up_write(&hba->clk_scaling_lock);
+ 		mutex_unlock(&hba->wb_mutex);
+ 		blk_mq_unquiesce_tagset(&hba->host->tag_set);
++		mutex_unlock(&hba->host->scan_mutex);
+ 		goto out;
+ 	}
+ 
+@@ -1410,6 +1412,7 @@ static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, int err)
+ 	mutex_unlock(&hba->wb_mutex);
+ 
+ 	blk_mq_unquiesce_tagset(&hba->host->tag_set);
++	mutex_unlock(&hba->host->scan_mutex);
+ 	ufshcd_release(hba);
+ }
+ 
+@@ -7750,7 +7753,8 @@ static int ufshcd_host_reset_and_restore(struct ufs_hba *hba)
+ 	hba->silence_err_logs = false;
+ 
+ 	/* scale up clocks to max frequency before full reinitialization */
+-	ufshcd_scale_clks(hba, ULONG_MAX, true);
++	if (ufshcd_is_clkscaling_supported(hba))
++		ufshcd_scale_clks(hba, ULONG_MAX, true);
+ 
+ 	err = ufshcd_hba_enable(hba);
+ 
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index 16e7fa4d488d37..ecd6d1f39e4984 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -92,7 +92,6 @@ struct wdm_device {
+ 	u16			wMaxCommand;
+ 	u16			wMaxPacketSize;
+ 	__le16			inum;
+-	int			reslength;
+ 	int			length;
+ 	int			read;
+ 	int			count;
+@@ -214,6 +213,11 @@ static void wdm_in_callback(struct urb *urb)
+ 	if (desc->rerr == 0 && status != -EPIPE)
+ 		desc->rerr = status;
+ 
++	if (length == 0) {
++		dev_dbg(&desc->intf->dev, "received ZLP\n");
++		goto skip_zlp;
++	}
++
+ 	if (length + desc->length > desc->wMaxCommand) {
+ 		/* The buffer would overflow */
+ 		set_bit(WDM_OVERFLOW, &desc->flags);
+@@ -222,18 +226,18 @@ static void wdm_in_callback(struct urb *urb)
+ 		if (!test_bit(WDM_OVERFLOW, &desc->flags)) {
+ 			memmove(desc->ubuf + desc->length, desc->inbuf, length);
+ 			desc->length += length;
+-			desc->reslength = length;
+ 		}
+ 	}
+ skip_error:
+ 
+ 	if (desc->rerr) {
+ 		/*
+-		 * Since there was an error, userspace may decide to not read
+-		 * any data after poll'ing.
++		 * If there was a ZLP or an error, userspace may decide to not
++		 * read any data after poll'ing.
+ 		 * We should respond to further attempts from the device to send
+ 		 * data, so that we can get unstuck.
+ 		 */
++skip_zlp:
+ 		schedule_work(&desc->service_outs_intr);
+ 	} else {
+ 		set_bit(WDM_READ, &desc->flags);
+@@ -585,15 +589,6 @@ static ssize_t wdm_read
+ 			goto retry;
+ 		}
+ 
+-		if (!desc->reslength) { /* zero length read */
+-			dev_dbg(&desc->intf->dev, "zero length - clearing WDM_READ\n");
+-			clear_bit(WDM_READ, &desc->flags);
+-			rv = service_outstanding_interrupt(desc);
+-			spin_unlock_irq(&desc->iuspin);
+-			if (rv < 0)
+-				goto err;
+-			goto retry;
+-		}
+ 		cntr = desc->length;
+ 		spin_unlock_irq(&desc->iuspin);
+ 	}
+@@ -1016,7 +1011,7 @@ static void service_interrupt_work(struct work_struct *work)
+ 
+ 	spin_lock_irq(&desc->iuspin);
+ 	service_outstanding_interrupt(desc);
+-	if (!desc->resp_count) {
++	if (!desc->resp_count && (desc->length || desc->rerr)) {
+ 		set_bit(WDM_READ, &desc->flags);
+ 		wake_up(&desc->wait);
+ 	}
+diff --git a/drivers/usb/common/usb-conn-gpio.c b/drivers/usb/common/usb-conn-gpio.c
+index 1e36be2a28fd5c..421c3af38e0697 100644
+--- a/drivers/usb/common/usb-conn-gpio.c
++++ b/drivers/usb/common/usb-conn-gpio.c
+@@ -21,6 +21,9 @@
+ #include <linux/regulator/consumer.h>
+ #include <linux/string_choices.h>
+ #include <linux/usb/role.h>
++#include <linux/idr.h>
++
++static DEFINE_IDA(usb_conn_ida);
+ 
+ #define USB_GPIO_DEB_MS		20	/* ms */
+ #define USB_GPIO_DEB_US		((USB_GPIO_DEB_MS) * 1000)	/* us */
+@@ -30,6 +33,7 @@
+ 
+ struct usb_conn_info {
+ 	struct device *dev;
++	int conn_id; /* store the IDA-allocated ID */
+ 	struct usb_role_switch *role_sw;
+ 	enum usb_role last_role;
+ 	struct regulator *vbus;
+@@ -161,7 +165,17 @@ static int usb_conn_psy_register(struct usb_conn_info *info)
+ 		.fwnode = dev_fwnode(dev),
+ 	};
+ 
+-	desc->name = "usb-charger";
++	info->conn_id = ida_alloc(&usb_conn_ida, GFP_KERNEL);
++	if (info->conn_id < 0)
++		return info->conn_id;
++
++	desc->name = devm_kasprintf(dev, GFP_KERNEL, "usb-charger-%d",
++				    info->conn_id);
++	if (!desc->name) {
++		ida_free(&usb_conn_ida, info->conn_id);
++		return -ENOMEM;
++	}
++
+ 	desc->properties = usb_charger_properties;
+ 	desc->num_properties = ARRAY_SIZE(usb_charger_properties);
+ 	desc->get_property = usb_charger_get_property;
+@@ -169,8 +183,10 @@ static int usb_conn_psy_register(struct usb_conn_info *info)
+ 	cfg.drv_data = info;
+ 
+ 	info->charger = devm_power_supply_register(dev, desc, &cfg);
+-	if (IS_ERR(info->charger))
+-		dev_err(dev, "Unable to register charger\n");
++	if (IS_ERR(info->charger)) {
++		dev_err(dev, "Unable to register charger %d\n", info->conn_id);
++		ida_free(&usb_conn_ida, info->conn_id);
++	}
+ 
+ 	return PTR_ERR_OR_ZERO(info->charger);
+ }
+@@ -278,6 +294,9 @@ static void usb_conn_remove(struct platform_device *pdev)
+ 
+ 	cancel_delayed_work_sync(&info->dw_det);
+ 
++	if (info->charger)
++		ida_free(&usb_conn_ida, info->conn_id);
++
+ 	if (info->last_role == USB_ROLE_HOST && info->vbus)
+ 		regulator_disable(info->vbus);
+ 
+diff --git a/drivers/usb/core/usb.c b/drivers/usb/core/usb.c
+index 0b4685aad2d503..118fa4c93a7956 100644
+--- a/drivers/usb/core/usb.c
++++ b/drivers/usb/core/usb.c
+@@ -695,15 +695,16 @@ struct usb_device *usb_alloc_dev(struct usb_device *parent,
+ 		device_set_of_node_from_dev(&dev->dev, bus->sysdev);
+ 		dev_set_name(&dev->dev, "usb%d", bus->busnum);
+ 	} else {
++		int n;
++
+ 		/* match any labeling on the hubs; it's one-based */
+ 		if (parent->devpath[0] == '0') {
+-			snprintf(dev->devpath, sizeof dev->devpath,
+-				"%d", port1);
++			n = snprintf(dev->devpath, sizeof(dev->devpath), "%d", port1);
+ 			/* Root ports are not counted in route string */
+ 			dev->route = 0;
+ 		} else {
+-			snprintf(dev->devpath, sizeof dev->devpath,
+-				"%s.%d", parent->devpath, port1);
++			n = snprintf(dev->devpath, sizeof(dev->devpath), "%s.%d",
++				     parent->devpath, port1);
+ 			/* Route string assumes hubs have less than 16 ports */
+ 			if (port1 < 15)
+ 				dev->route = parent->route +
+@@ -712,6 +713,11 @@ struct usb_device *usb_alloc_dev(struct usb_device *parent,
+ 				dev->route = parent->route +
+ 					(15 << ((parent->level - 1)*4));
+ 		}
++		if (n >= sizeof(dev->devpath)) {
++			usb_put_hcd(bus_to_hcd(bus));
++			usb_put_dev(dev);
++			return NULL;
++		}
+ 
+ 		dev->dev.parent = &parent->dev;
+ 		dev_set_name(&dev->dev, "%d-%s", bus->busnum, dev->devpath);
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index 300ea4969f0cff..f323fb5597b32f 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -4604,6 +4604,12 @@ static int dwc2_hsotg_udc_stop(struct usb_gadget *gadget)
+ 	if (!hsotg)
+ 		return -ENODEV;
+ 
++	/* Exit clock gating when driver is stopped. */
++	if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_NONE &&
++	    hsotg->bus_suspended && !hsotg->params.no_clock_gating) {
++		dwc2_gadget_exit_clock_gating(hsotg, 0);
++	}
++
+ 	/* all endpoints should be shutdown */
+ 	for (ep = 1; ep < hsotg->num_of_eps; ep++) {
+ 		if (hsotg->eps_in[ep])
+diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
+index c7a05f842745bc..d8bd2d82e9ec63 100644
+--- a/drivers/usb/gadget/function/f_hid.c
++++ b/drivers/usb/gadget/function/f_hid.c
+@@ -75,6 +75,7 @@ struct f_hidg {
+ 	/* recv report */
+ 	spinlock_t			read_spinlock;
+ 	wait_queue_head_t		read_queue;
++	bool				disabled;
+ 	/* recv report - interrupt out only (use_out_ep == 1) */
+ 	struct list_head		completed_out_req;
+ 	unsigned int			qlen;
+@@ -329,7 +330,7 @@ static ssize_t f_hidg_intout_read(struct file *file, char __user *buffer,
+ 
+ 	spin_lock_irqsave(&hidg->read_spinlock, flags);
+ 
+-#define READ_COND_INTOUT (!list_empty(&hidg->completed_out_req))
++#define READ_COND_INTOUT (!list_empty(&hidg->completed_out_req) || hidg->disabled)
+ 
+ 	/* wait for at least one buffer to complete */
+ 	while (!READ_COND_INTOUT) {
+@@ -343,6 +344,11 @@ static ssize_t f_hidg_intout_read(struct file *file, char __user *buffer,
+ 		spin_lock_irqsave(&hidg->read_spinlock, flags);
+ 	}
+ 
++	if (hidg->disabled) {
++		spin_unlock_irqrestore(&hidg->read_spinlock, flags);
++		return -ESHUTDOWN;
++	}
++
+ 	/* pick the first one */
+ 	list = list_first_entry(&hidg->completed_out_req,
+ 				struct f_hidg_req_list, list);
+@@ -387,7 +393,7 @@ static ssize_t f_hidg_intout_read(struct file *file, char __user *buffer,
+ 	return count;
+ }
+ 
+-#define READ_COND_SSREPORT (hidg->set_report_buf != NULL)
++#define READ_COND_SSREPORT (hidg->set_report_buf != NULL || hidg->disabled)
+ 
+ static ssize_t f_hidg_ssreport_read(struct file *file, char __user *buffer,
+ 				    size_t count, loff_t *ptr)
+@@ -1012,6 +1018,11 @@ static void hidg_disable(struct usb_function *f)
+ 	}
+ 	spin_unlock_irqrestore(&hidg->get_report_spinlock, flags);
+ 
++	spin_lock_irqsave(&hidg->read_spinlock, flags);
++	hidg->disabled = true;
++	spin_unlock_irqrestore(&hidg->read_spinlock, flags);
++	wake_up(&hidg->read_queue);
++
+ 	spin_lock_irqsave(&hidg->write_spinlock, flags);
+ 	if (!hidg->write_pending) {
+ 		free_ep_req(hidg->in_ep, hidg->req);
+@@ -1097,6 +1108,10 @@ static int hidg_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
+ 		}
+ 	}
+ 
++	spin_lock_irqsave(&hidg->read_spinlock, flags);
++	hidg->disabled = false;
++	spin_unlock_irqrestore(&hidg->read_spinlock, flags);
++
+ 	if (hidg->in_ep != NULL) {
+ 		spin_lock_irqsave(&hidg->write_spinlock, flags);
+ 		hidg->req = req_in;
+diff --git a/drivers/usb/gadget/function/f_tcm.c b/drivers/usb/gadget/function/f_tcm.c
+index 5a2e1237f85c3f..6e8804f04baa77 100644
+--- a/drivers/usb/gadget/function/f_tcm.c
++++ b/drivers/usb/gadget/function/f_tcm.c
+@@ -1641,14 +1641,14 @@ static struct se_portal_group *usbg_make_tpg(struct se_wwn *wwn,
+ 	struct usbg_tport *tport = container_of(wwn, struct usbg_tport,
+ 			tport_wwn);
+ 	struct usbg_tpg *tpg;
+-	unsigned long tpgt;
++	u16 tpgt;
+ 	int ret;
+ 	struct f_tcm_opts *opts;
+ 	unsigned i;
+ 
+ 	if (strstr(name, "tpgt_") != name)
+ 		return ERR_PTR(-EINVAL);
+-	if (kstrtoul(name + 5, 0, &tpgt) || tpgt > UINT_MAX)
++	if (kstrtou16(name + 5, 0, &tpgt))
+ 		return ERR_PTR(-EINVAL);
+ 	ret = -ENODEV;
+ 	mutex_lock(&tpg_instances_lock);
+diff --git a/drivers/usb/typec/altmodes/displayport.c b/drivers/usb/typec/altmodes/displayport.c
+index ac84a6d64c2fbb..b09b58d7311de9 100644
+--- a/drivers/usb/typec/altmodes/displayport.c
++++ b/drivers/usb/typec/altmodes/displayport.c
+@@ -393,6 +393,10 @@ static int dp_altmode_vdm(struct typec_altmode *alt,
+ 		break;
+ 	case CMDT_RSP_NAK:
+ 		switch (cmd) {
++		case DP_CMD_STATUS_UPDATE:
++			if (typec_altmode_exit(alt))
++				dev_err(&dp->alt->dev, "Exit Mode Failed!\n");
++			break;
+ 		case DP_CMD_CONFIGURE:
+ 			dp->data.conf = 0;
+ 			ret = dp_altmode_configured(dp);
+diff --git a/drivers/usb/typec/mux.c b/drivers/usb/typec/mux.c
+index 49926d6e72c71b..182c902c42f61c 100644
+--- a/drivers/usb/typec/mux.c
++++ b/drivers/usb/typec/mux.c
+@@ -214,7 +214,7 @@ int typec_switch_set(struct typec_switch *sw,
+ 		sw_dev = sw->sw_devs[i];
+ 
+ 		ret = sw_dev->set(sw_dev, orientation);
+-		if (ret)
++		if (ret && ret != -EOPNOTSUPP)
+ 			return ret;
+ 	}
+ 
+@@ -378,7 +378,7 @@ int typec_mux_set(struct typec_mux *mux, struct typec_mux_state *state)
+ 		mux_dev = mux->mux_devs[i];
+ 
+ 		ret = mux_dev->set(mux_dev, state);
+-		if (ret)
++		if (ret && ret != -EOPNOTSUPP)
+ 			return ret;
+ 	}
+ 
+diff --git a/drivers/usb/typec/tcpm/tcpci_maxim_core.c b/drivers/usb/typec/tcpm/tcpci_maxim_core.c
+index 648311f5e3cf13..b5a5ed40faea9c 100644
+--- a/drivers/usb/typec/tcpm/tcpci_maxim_core.c
++++ b/drivers/usb/typec/tcpm/tcpci_maxim_core.c
+@@ -537,7 +537,10 @@ static int max_tcpci_probe(struct i2c_client *client)
+ 		return dev_err_probe(&client->dev, ret,
+ 				     "IRQ initialization failed\n");
+ 
+-	device_init_wakeup(chip->dev, true);
++	ret = devm_device_init_wakeup(chip->dev);
++	if (ret)
++		return dev_err_probe(chip->dev, ret, "Failed to init wakeup\n");
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/typec/tipd/core.c b/drivers/usb/typec/tipd/core.c
+index 7ee721a877c12d..dcf141ada07812 100644
+--- a/drivers/usb/typec/tipd/core.c
++++ b/drivers/usb/typec/tipd/core.c
+@@ -1431,7 +1431,7 @@ static int tps6598x_probe(struct i2c_client *client)
+ 
+ 	tps->wakeup = device_property_read_bool(tps->dev, "wakeup-source");
+ 	if (tps->wakeup && client->irq) {
+-		device_init_wakeup(&client->dev, true);
++		devm_device_init_wakeup(&client->dev);
+ 		enable_irq_wake(client->irq);
+ 	}
+ 
+diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
+index 74e6140312747f..953637115956b9 100644
+--- a/fs/btrfs/backref.h
++++ b/fs/btrfs/backref.h
+@@ -423,8 +423,8 @@ struct btrfs_backref_node *btrfs_backref_alloc_node(
+ struct btrfs_backref_edge *btrfs_backref_alloc_edge(
+ 		struct btrfs_backref_cache *cache);
+ 
+-#define		LINK_LOWER	(1 << 0)
+-#define		LINK_UPPER	(1 << 1)
++#define		LINK_LOWER	(1U << 0)
++#define		LINK_UPPER	(1U << 1)
+ 
+ void btrfs_backref_link_edge(struct btrfs_backref_edge *edge,
+ 			     struct btrfs_backref_node *lower,
+diff --git a/fs/btrfs/direct-io.c b/fs/btrfs/direct-io.c
+index a374ce7a1813b0..7900cba56225da 100644
+--- a/fs/btrfs/direct-io.c
++++ b/fs/btrfs/direct-io.c
+@@ -151,8 +151,8 @@ static struct extent_map *btrfs_create_dio_extent(struct btrfs_inode *inode,
+ 	}
+ 
+ 	ordered = btrfs_alloc_ordered_extent(inode, start, file_extent,
+-					     (1 << type) |
+-					     (1 << BTRFS_ORDERED_DIRECT));
++					     (1U << type) |
++					     (1U << BTRFS_ORDERED_DIRECT));
+ 	if (IS_ERR(ordered)) {
+ 		if (em) {
+ 			free_extent_map(em);
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index aa58e0663a5d7b..321feb99c17977 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -2164,8 +2164,7 @@ static int load_global_roots_objectid(struct btrfs_root *tree_root,
+ 		found = true;
+ 		root = read_tree_root_path(tree_root, path, &key);
+ 		if (IS_ERR(root)) {
+-			if (!btrfs_test_opt(fs_info, IGNOREBADROOTS))
+-				ret = PTR_ERR(root);
++			ret = PTR_ERR(root);
+ 			break;
+ 		}
+ 		set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
+@@ -4385,8 +4384,8 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
+ 	 *
+ 	 * So wait for all ongoing ordered extents to complete and then run
+ 	 * delayed iputs. This works because once we reach this point no one
+-	 * can either create new ordered extents nor create delayed iputs
+-	 * through some other means.
++	 * can create new ordered extents, but delayed iputs can still be added
++	 * by a reclaim worker (see comments further below).
+ 	 *
+ 	 * Also note that btrfs_wait_ordered_roots() is not safe here, because
+ 	 * it waits for BTRFS_ORDERED_COMPLETE to be set on an ordered extent,
+@@ -4397,15 +4396,29 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
+ 	btrfs_flush_workqueue(fs_info->endio_write_workers);
+ 	/* Ordered extents for free space inodes. */
+ 	btrfs_flush_workqueue(fs_info->endio_freespace_worker);
++	/*
++	 * Run delayed iputs in case an async reclaim worker is waiting for them
++	 * to be run as mentioned above.
++	 */
+ 	btrfs_run_delayed_iputs(fs_info);
+-	/* There should be no more workload to generate new delayed iputs. */
+-	set_bit(BTRFS_FS_STATE_NO_DELAYED_IPUT, &fs_info->fs_state);
+ 
+ 	cancel_work_sync(&fs_info->async_reclaim_work);
+ 	cancel_work_sync(&fs_info->async_data_reclaim_work);
+ 	cancel_work_sync(&fs_info->preempt_reclaim_work);
+ 	cancel_work_sync(&fs_info->em_shrinker_work);
+ 
++	/*
++	 * Run delayed iputs again because an async reclaim worker may have
++	 * added new ones if it was flushing delalloc:
++	 *
++	 * shrink_delalloc() -> btrfs_start_delalloc_roots() ->
++	 *    start_delalloc_inodes() -> btrfs_add_delayed_iput()
++	 */
++	btrfs_run_delayed_iputs(fs_info);
++
++	/* There should be no more workload to generate new delayed iputs. */
++	set_bit(BTRFS_FS_STATE_NO_DELAYED_IPUT, &fs_info->fs_state);
++
+ 	/* Cancel or finish ongoing discard work */
+ 	btrfs_discard_cleanup(fs_info);
+ 
+diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
+index f5b28b5c4908b2..91d9cb5ff401ba 100644
+--- a/fs/btrfs/extent_io.h
++++ b/fs/btrfs/extent_io.h
+@@ -79,7 +79,7 @@ enum {
+  *    single word in a bitmap may straddle two pages in the extent buffer.
+  */
+ #define BIT_BYTE(nr) ((nr) / BITS_PER_BYTE)
+-#define BYTE_MASK ((1 << BITS_PER_BYTE) - 1)
++#define BYTE_MASK ((1U << BITS_PER_BYTE) - 1)
+ #define BITMAP_FIRST_BYTE_MASK(start) \
+ 	((BYTE_MASK << ((start) & (BITS_PER_BYTE - 1))) & BYTE_MASK)
+ #define BITMAP_LAST_BYTE_MASK(nbits) \
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 8a3f44302788cd..391172b443e50e 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1171,7 +1171,7 @@ static void submit_one_async_extent(struct async_chunk *async_chunk,
+ 	free_extent_map(em);
+ 
+ 	ordered = btrfs_alloc_ordered_extent(inode, start, &file_extent,
+-					     1 << BTRFS_ORDERED_COMPRESSED);
++					     1U << BTRFS_ORDERED_COMPRESSED);
+ 	if (IS_ERR(ordered)) {
+ 		btrfs_drop_extent_map_range(inode, start, end, false);
+ 		ret = PTR_ERR(ordered);
+@@ -1418,7 +1418,7 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
+ 		free_extent_map(em);
+ 
+ 		ordered = btrfs_alloc_ordered_extent(inode, start, &file_extent,
+-						     1 << BTRFS_ORDERED_REGULAR);
++						     1U << BTRFS_ORDERED_REGULAR);
+ 		if (IS_ERR(ordered)) {
+ 			unlock_extent(&inode->io_tree, start,
+ 				      start + cur_alloc_size - 1, &cached);
+@@ -1999,8 +1999,8 @@ static int nocow_one_range(struct btrfs_inode *inode, struct folio *locked_folio
+ 
+ 	ordered = btrfs_alloc_ordered_extent(inode, file_pos, &nocow_args->file_extent,
+ 					     is_prealloc
+-					     ? (1 << BTRFS_ORDERED_PREALLOC)
+-					     : (1 << BTRFS_ORDERED_NOCOW));
++					     ? (1U << BTRFS_ORDERED_PREALLOC)
++					     : (1U << BTRFS_ORDERED_NOCOW));
+ 	if (IS_ERR(ordered)) {
+ 		if (is_prealloc)
+ 			btrfs_drop_extent_map_range(inode, file_pos, end, false);
+@@ -7979,6 +7979,7 @@ static int btrfs_rename_exchange(struct inode *old_dir,
+ 	int ret;
+ 	int ret2;
+ 	bool need_abort = false;
++	bool logs_pinned = false;
+ 	struct fscrypt_name old_fname, new_fname;
+ 	struct fscrypt_str *old_name, *new_name;
+ 
+@@ -8102,6 +8103,31 @@ static int btrfs_rename_exchange(struct inode *old_dir,
+ 	inode_inc_iversion(new_inode);
+ 	simple_rename_timestamp(old_dir, old_dentry, new_dir, new_dentry);
+ 
++	if (old_ino != BTRFS_FIRST_FREE_OBJECTID &&
++	    new_ino != BTRFS_FIRST_FREE_OBJECTID) {
++		/*
++		 * If we are renaming in the same directory (and it's not for
++		 * root entries) pin the log early to prevent any concurrent
++		 * task from logging the directory after we removed the old
++		 * entries and before we add the new entries, otherwise that
++		 * task can sync a log without any entry for the inodes we are
++		 * renaming and therefore replaying that log, if a power failure
++		 * happens after syncing the log, would result in deleting the
++		 * inodes.
++		 *
++		 * If the rename affects two different directories, we want to
++		 * make sure that there's no log commit that contains
++		 * updates for only one of the directories but not for the
++		 * other.
++		 *
++		 * If we are renaming an entry for a root, we don't care about
++		 * log updates since we called btrfs_set_log_full_commit().
++		 */
++		btrfs_pin_log_trans(root);
++		btrfs_pin_log_trans(dest);
++		logs_pinned = true;
++	}
++
+ 	if (old_dentry->d_parent != new_dentry->d_parent) {
+ 		btrfs_record_unlink_dir(trans, BTRFS_I(old_dir),
+ 					BTRFS_I(old_inode), true);
+@@ -8173,30 +8199,23 @@ static int btrfs_rename_exchange(struct inode *old_dir,
+ 		BTRFS_I(new_inode)->dir_index = new_idx;
+ 
+ 	/*
+-	 * Now pin the logs of the roots. We do it to ensure that no other task
+-	 * can sync the logs while we are in progress with the rename, because
+-	 * that could result in an inconsistency in case any of the inodes that
+-	 * are part of this rename operation were logged before.
++	 * Do the log updates for all inodes.
++	 *
++	 * If either entry is for a root we don't need to update the logs since
++	 * we've called btrfs_set_log_full_commit() before.
+ 	 */
+-	if (old_ino != BTRFS_FIRST_FREE_OBJECTID)
+-		btrfs_pin_log_trans(root);
+-	if (new_ino != BTRFS_FIRST_FREE_OBJECTID)
+-		btrfs_pin_log_trans(dest);
+-
+-	/* Do the log updates for all inodes. */
+-	if (old_ino != BTRFS_FIRST_FREE_OBJECTID)
++	if (logs_pinned) {
+ 		btrfs_log_new_name(trans, old_dentry, BTRFS_I(old_dir),
+ 				   old_rename_ctx.index, new_dentry->d_parent);
+-	if (new_ino != BTRFS_FIRST_FREE_OBJECTID)
+ 		btrfs_log_new_name(trans, new_dentry, BTRFS_I(new_dir),
+ 				   new_rename_ctx.index, old_dentry->d_parent);
++	}
+ 
+-	/* Now unpin the logs. */
+-	if (old_ino != BTRFS_FIRST_FREE_OBJECTID)
++out_fail:
++	if (logs_pinned) {
+ 		btrfs_end_log_trans(root);
+-	if (new_ino != BTRFS_FIRST_FREE_OBJECTID)
+ 		btrfs_end_log_trans(dest);
+-out_fail:
++	}
+ 	ret2 = btrfs_end_transaction(trans);
+ 	ret = ret ? ret : ret2;
+ out_notrans:
+@@ -8246,6 +8265,7 @@ static int btrfs_rename(struct mnt_idmap *idmap,
+ 	int ret2;
+ 	u64 old_ino = btrfs_ino(BTRFS_I(old_inode));
+ 	struct fscrypt_name old_fname, new_fname;
++	bool logs_pinned = false;
+ 
+ 	if (btrfs_ino(BTRFS_I(new_dir)) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID)
+ 		return -EPERM;
+@@ -8380,6 +8400,29 @@ static int btrfs_rename(struct mnt_idmap *idmap,
+ 	inode_inc_iversion(old_inode);
+ 	simple_rename_timestamp(old_dir, old_dentry, new_dir, new_dentry);
+ 
++	if (old_ino != BTRFS_FIRST_FREE_OBJECTID) {
++		/*
++		 * If we are renaming in the same directory (and it's not a
++		 * root entry) pin the log to prevent any concurrent task from
++		 * logging the directory after we removed the old entry and
++		 * before we add the new entry, otherwise that task can sync
++		 * a log without any entry for the inode we are renaming and
++		 * therefore replaying that log, if a power failure happens
++		 * after syncing the log, would result in deleting the inode.
++		 *
++		 * If the rename affects two different directories, we want to
++		 * make sure that there's no log commit that contains
++		 * updates for only one of the directories but not for the
++		 * other.
++		 *
++		 * If we are renaming an entry for a root, we don't care about
++		 * log updates since we called btrfs_set_log_full_commit().
++		 */
++		btrfs_pin_log_trans(root);
++		btrfs_pin_log_trans(dest);
++		logs_pinned = true;
++	}
++
+ 	if (old_dentry->d_parent != new_dentry->d_parent)
+ 		btrfs_record_unlink_dir(trans, BTRFS_I(old_dir),
+ 					BTRFS_I(old_inode), true);
+@@ -8444,7 +8487,7 @@ static int btrfs_rename(struct mnt_idmap *idmap,
+ 	if (old_inode->i_nlink == 1)
+ 		BTRFS_I(old_inode)->dir_index = index;
+ 
+-	if (old_ino != BTRFS_FIRST_FREE_OBJECTID)
++	if (logs_pinned)
+ 		btrfs_log_new_name(trans, old_dentry, BTRFS_I(old_dir),
+ 				   rename_ctx.index, new_dentry->d_parent);
+ 
+@@ -8460,6 +8503,10 @@ static int btrfs_rename(struct mnt_idmap *idmap,
+ 		}
+ 	}
+ out_fail:
++	if (logs_pinned) {
++		btrfs_end_log_trans(root);
++		btrfs_end_log_trans(dest);
++	}
+ 	ret2 = btrfs_end_transaction(trans);
+ 	ret = ret ? ret : ret2;
+ out_notrans:
+@@ -9733,8 +9780,8 @@ ssize_t btrfs_do_encoded_write(struct kiocb *iocb, struct iov_iter *from,
+ 	free_extent_map(em);
+ 
+ 	ordered = btrfs_alloc_ordered_extent(inode, start, &file_extent,
+-				       (1 << BTRFS_ORDERED_ENCODED) |
+-				       (1 << BTRFS_ORDERED_COMPRESSED));
++				       (1U << BTRFS_ORDERED_ENCODED) |
++				       (1U << BTRFS_ORDERED_COMPRESSED));
+ 	if (IS_ERR(ordered)) {
+ 		btrfs_drop_extent_map_range(inode, start, end, false);
+ 		ret = PTR_ERR(ordered);
+diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
+index 03c945711003c0..1d7ddd664c0d39 100644
+--- a/fs/btrfs/ordered-data.c
++++ b/fs/btrfs/ordered-data.c
+@@ -153,9 +153,10 @@ static struct btrfs_ordered_extent *alloc_ordered_extent(
+ 	struct btrfs_ordered_extent *entry;
+ 	int ret;
+ 	u64 qgroup_rsv = 0;
++	const bool is_nocow = (flags &
++	       ((1U << BTRFS_ORDERED_NOCOW) | (1U << BTRFS_ORDERED_PREALLOC)));
+ 
+-	if (flags &
+-	    ((1 << BTRFS_ORDERED_NOCOW) | (1 << BTRFS_ORDERED_PREALLOC))) {
++	if (is_nocow) {
+ 		/* For nocow write, we can release the qgroup rsv right now */
+ 		ret = btrfs_qgroup_free_data(inode, NULL, file_offset, num_bytes, &qgroup_rsv);
+ 		if (ret < 0)
+@@ -170,8 +171,13 @@ static struct btrfs_ordered_extent *alloc_ordered_extent(
+ 			return ERR_PTR(ret);
+ 	}
+ 	entry = kmem_cache_zalloc(btrfs_ordered_extent_cache, GFP_NOFS);
+-	if (!entry)
++	if (!entry) {
++		if (!is_nocow)
++			btrfs_qgroup_free_refroot(inode->root->fs_info,
++						  btrfs_root_id(inode->root),
++						  qgroup_rsv, BTRFS_QGROUP_RSV_DATA);
+ 		return ERR_PTR(-ENOMEM);
++	}
+ 
+ 	entry->file_offset = file_offset;
+ 	entry->num_bytes = num_bytes;
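The hunk above plugs a leak: when kmem_cache_zalloc() fails in the COW case, the qgroup reservation recorded in qgroup_rsv a few lines earlier was never returned. The unwind rule it restores, as a minimal sketch with illustrative names:

    #include <stdlib.h>

    struct rsv { long bytes; };

    static int reserve(struct rsv *r, long n)  { r->bytes += n; return 0; }
    static void release(struct rsv *r, long n) { r->bytes -= n; }

    /* A resource taken before an allocation must be given back on the
     * allocation's error path, or it is leaked. */
    static void *alloc_with_rsv(struct rsv *r, long n, size_t objsz)
    {
        void *obj;

        if (reserve(r, n))
            return NULL;
        obj = calloc(1, objsz);
        if (!obj) {
            release(r, n);    /* the fix: undo the reservation on ENOMEM */
            return NULL;
        }
        return obj;
    }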
+@@ -253,7 +259,7 @@ static void insert_ordered_extent(struct btrfs_ordered_extent *entry)
+  * @disk_bytenr:     Offset of extent on disk.
+  * @disk_num_bytes:  Size of extent on disk.
+  * @offset:          Offset into unencoded data where file data starts.
+- * @flags:           Flags specifying type of extent (1 << BTRFS_ORDERED_*).
++ * @flags:           Flags specifying type of extent (1U << BTRFS_ORDERED_*).
+  * @compress_type:   Compression algorithm used for data.
+  *
+  * Most of these parameters correspond to &struct btrfs_file_extent_item. The
+diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
+index cdd373c277848e..7285bf7926e3ad 100644
+--- a/fs/btrfs/raid56.c
++++ b/fs/btrfs/raid56.c
+@@ -200,8 +200,7 @@ int btrfs_alloc_stripe_hash_table(struct btrfs_fs_info *info)
+ 	struct btrfs_stripe_hash_table *x;
+ 	struct btrfs_stripe_hash *cur;
+ 	struct btrfs_stripe_hash *h;
+-	int num_entries = 1 << BTRFS_STRIPE_HASH_TABLE_BITS;
+-	int i;
++	unsigned int num_entries = 1U << BTRFS_STRIPE_HASH_TABLE_BITS;
+ 
+ 	if (info->stripe_hash_table)
+ 		return 0;
+@@ -222,7 +221,7 @@ int btrfs_alloc_stripe_hash_table(struct btrfs_fs_info *info)
+ 
+ 	h = table->table;
+ 
+-	for (i = 0; i < num_entries; i++) {
++	for (unsigned int i = 0; i < num_entries; i++) {
+ 		cur = h + i;
+ 		INIT_LIST_HEAD(&cur->hash_list);
+ 		spin_lock_init(&cur->lock);
+diff --git a/fs/btrfs/tests/extent-io-tests.c b/fs/btrfs/tests/extent-io-tests.c
+index 74aca7180a5a91..400574c6e82ecd 100644
+--- a/fs/btrfs/tests/extent-io-tests.c
++++ b/fs/btrfs/tests/extent-io-tests.c
+@@ -14,9 +14,9 @@
+ #include "../disk-io.h"
+ #include "../btrfs_inode.h"
+ 
+-#define PROCESS_UNLOCK		(1 << 0)
+-#define PROCESS_RELEASE		(1 << 1)
+-#define PROCESS_TEST_LOCKED	(1 << 2)
++#define PROCESS_UNLOCK		(1U << 0)
++#define PROCESS_RELEASE		(1U << 1)
++#define PROCESS_TEST_LOCKED	(1U << 2)
+ 
+ static noinline int process_page_range(struct inode *inode, u64 start, u64 end,
+ 				       unsigned long flags)
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index f5af11565b8760..ef9660eabf0c63 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -668,15 +668,12 @@ static noinline int replay_one_extent(struct btrfs_trans_handle *trans,
+ 		extent_end = ALIGN(start + size,
+ 				   fs_info->sectorsize);
+ 	} else {
+-		ret = 0;
+-		goto out;
++		return 0;
+ 	}
+ 
+ 	inode = read_one_inode(root, key->objectid);
+-	if (!inode) {
+-		ret = -EIO;
+-		goto out;
+-	}
++	if (!inode)
++		return -EIO;
+ 
+ 	/*
+ 	 * first check to see if we already have this extent in the
+@@ -961,7 +958,8 @@ static noinline int drop_one_dir_item(struct btrfs_trans_handle *trans,
+ 	ret = unlink_inode_for_log_replay(trans, dir, inode, &name);
+ out:
+ 	kfree(name.name);
+-	iput(&inode->vfs_inode);
++	if (inode)
++		iput(&inode->vfs_inode);
+ 	return ret;
+ }
+ 
+@@ -1176,8 +1174,8 @@ static inline int __add_inode_ref(struct btrfs_trans_handle *trans,
+ 					ret = unlink_inode_for_log_replay(trans,
+ 							victim_parent,
+ 							inode, &victim_name);
++					iput(&victim_parent->vfs_inode);
+ 				}
+-				iput(&victim_parent->vfs_inode);
+ 				kfree(victim_name.name);
+ 				if (ret)
+ 					return ret;
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 8e6b6fed7429a9..9ad222c4391f6f 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -3281,6 +3281,12 @@ int btrfs_remove_chunk(struct btrfs_trans_handle *trans, u64 chunk_offset)
+ 					device->bytes_used - dev_extent_len);
+ 			atomic64_add(dev_extent_len, &fs_info->free_chunk_space);
+ 			btrfs_clear_space_info_full(fs_info);
++
++			if (list_empty(&device->post_commit_list)) {
++				list_add_tail(&device->post_commit_list,
++					      &trans->transaction->dev_update_list);
++			}
++
+ 			mutex_unlock(&fs_info->chunk_mutex);
+ 		}
+ 	}
+diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
+index 3541efa765c73f..94c19331823a92 100644
+--- a/fs/btrfs/zstd.c
++++ b/fs/btrfs/zstd.c
+@@ -24,7 +24,7 @@
+ #include "super.h"
+ 
+ #define ZSTD_BTRFS_MAX_WINDOWLOG 17
+-#define ZSTD_BTRFS_MAX_INPUT (1 << ZSTD_BTRFS_MAX_WINDOWLOG)
++#define ZSTD_BTRFS_MAX_INPUT (1U << ZSTD_BTRFS_MAX_WINDOWLOG)
+ #define ZSTD_BTRFS_DEFAULT_LEVEL 3
+ #define ZSTD_BTRFS_MIN_LEVEL -15
+ #define ZSTD_BTRFS_MAX_LEVEL 15
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 851d70200c6b8f..a7254cab44cc2e 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -2616,7 +2616,7 @@ static int ceph_zero_objects(struct inode *inode, loff_t offset, loff_t length)
+ 	s32 stripe_unit = ci->i_layout.stripe_unit;
+ 	s32 stripe_count = ci->i_layout.stripe_count;
+ 	s32 object_size = ci->i_layout.object_size;
+-	u64 object_set_size = object_size * stripe_count;
++	u64 object_set_size = (u64) object_size * stripe_count;
+ 	u64 nearly, t;
+ 
+ 	/* round offset up to next period boundary */
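The cast above is the entire ceph fix: object_size and stripe_count are both s32, so the product was computed in 32 bits and could wrap before being stored into the u64. A user-space sketch of the difference (illustrative values):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t object_size  = 1 << 26;    /* 64 MiB objects */
        int32_t stripe_count = 64;         /* product is exactly 2^32 */

        /* Bug pattern: the multiply truncates to 32 bits first. */
        uint64_t wrong = (uint32_t)object_size * (uint32_t)stripe_count;

        /* Fixed pattern: widen one operand before multiplying. */
        uint64_t right = (uint64_t)object_size * stripe_count;

        printf("wrong=%llu right=%llu\n",
               (unsigned long long)wrong, (unsigned long long)right);
        return 0;
    }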
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index abbcbb5865a316..f2de3c886c086b 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -35,6 +35,17 @@
+ #include <trace/events/f2fs.h>
+ #include <uapi/linux/f2fs.h>
+ 
++static void f2fs_zero_post_eof_page(struct inode *inode, loff_t new_size)
++{
++	loff_t old_size = i_size_read(inode);
++
++	if (old_size >= new_size)
++		return;
++
++	/* zero or drop pages only in the range [old_size, new_size] */
++	truncate_pagecache(inode, old_size);
++}
++
+ static vm_fault_t f2fs_filemap_fault(struct vm_fault *vmf)
+ {
+ 	struct inode *inode = file_inode(vmf->vma->vm_file);
+@@ -103,8 +114,13 @@ static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf)
+ 
+ 	f2fs_bug_on(sbi, f2fs_has_inline_data(inode));
+ 
++	filemap_invalidate_lock(inode->i_mapping);
++	f2fs_zero_post_eof_page(inode, (folio->index + 1) << PAGE_SHIFT);
++	filemap_invalidate_unlock(inode->i_mapping);
++
+ 	file_update_time(vmf->vma->vm_file);
+ 	filemap_invalidate_lock_shared(inode->i_mapping);
++
+ 	folio_lock(folio);
+ 	if (unlikely(folio->mapping != inode->i_mapping ||
+ 			folio_pos(folio) > i_size_read(inode) ||
+@@ -1106,6 +1122,8 @@ int f2fs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 		f2fs_down_write(&fi->i_gc_rwsem[WRITE]);
+ 		filemap_invalidate_lock(inode->i_mapping);
+ 
++		if (attr->ia_size > old_size)
++			f2fs_zero_post_eof_page(inode, attr->ia_size);
+ 		truncate_setsize(inode, attr->ia_size);
+ 
+ 		if (attr->ia_size <= old_size)
+@@ -1224,6 +1242,10 @@ static int f2fs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
+ 	if (ret)
+ 		return ret;
+ 
++	filemap_invalidate_lock(inode->i_mapping);
++	f2fs_zero_post_eof_page(inode, offset + len);
++	filemap_invalidate_unlock(inode->i_mapping);
++
+ 	pg_start = ((unsigned long long) offset) >> PAGE_SHIFT;
+ 	pg_end = ((unsigned long long) offset + len) >> PAGE_SHIFT;
+ 
+@@ -1507,6 +1529,8 @@ static int f2fs_do_collapse(struct inode *inode, loff_t offset, loff_t len)
+ 	f2fs_down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 	filemap_invalidate_lock(inode->i_mapping);
+ 
++	f2fs_zero_post_eof_page(inode, offset + len);
++
+ 	f2fs_lock_op(sbi);
+ 	f2fs_drop_extent_tree(inode);
+ 	truncate_pagecache(inode, offset);
+@@ -1628,6 +1652,10 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
+ 	if (ret)
+ 		return ret;
+ 
++	filemap_invalidate_lock(mapping);
++	f2fs_zero_post_eof_page(inode, offset + len);
++	filemap_invalidate_unlock(mapping);
++
+ 	pg_start = ((unsigned long long) offset) >> PAGE_SHIFT;
+ 	pg_end = ((unsigned long long) offset + len) >> PAGE_SHIFT;
+ 
+@@ -1759,6 +1787,8 @@ static int f2fs_insert_range(struct inode *inode, loff_t offset, loff_t len)
+ 	/* avoid gc operation during block exchange */
+ 	f2fs_down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 	filemap_invalidate_lock(mapping);
++
++	f2fs_zero_post_eof_page(inode, offset + len);
+ 	truncate_pagecache(inode, offset);
+ 
+ 	while (!ret && idx > pg_start) {
+@@ -1816,6 +1846,10 @@ static int f2fs_expand_inode_data(struct inode *inode, loff_t offset,
+ 	if (err)
+ 		return err;
+ 
++	filemap_invalidate_lock(inode->i_mapping);
++	f2fs_zero_post_eof_page(inode, offset + len);
++	filemap_invalidate_unlock(inode->i_mapping);
++
+ 	f2fs_balance_fs(sbi, true);
+ 
+ 	pg_start = ((unsigned long long)offset) >> PAGE_SHIFT;
+@@ -4846,6 +4880,10 @@ static ssize_t f2fs_write_checks(struct kiocb *iocb, struct iov_iter *from)
+ 	err = file_modified(file);
+ 	if (err)
+ 		return err;
++
++	filemap_invalidate_lock(inode->i_mapping);
++	f2fs_zero_post_eof_page(inode, iocb->ki_pos + iov_iter_count(from));
++	filemap_invalidate_unlock(inode->i_mapping);
+ 	return count;
+ }
+ 
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index bc510c91f3eba4..86dd30eb50b1de 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1806,26 +1806,32 @@ static int f2fs_statfs_project(struct super_block *sb,
+ 
+ 	limit = min_not_zero(dquot->dq_dqb.dqb_bsoftlimit,
+ 					dquot->dq_dqb.dqb_bhardlimit);
+-	if (limit)
+-		limit >>= sb->s_blocksize_bits;
++	limit >>= sb->s_blocksize_bits;
++
++	if (limit) {
++		uint64_t remaining = 0;
+ 
+-	if (limit && buf->f_blocks > limit) {
+ 		curblock = (dquot->dq_dqb.dqb_curspace +
+ 			    dquot->dq_dqb.dqb_rsvspace) >> sb->s_blocksize_bits;
+-		buf->f_blocks = limit;
+-		buf->f_bfree = buf->f_bavail =
+-			(buf->f_blocks > curblock) ?
+-			 (buf->f_blocks - curblock) : 0;
++		if (limit > curblock)
++			remaining = limit - curblock;
++
++		buf->f_blocks = min(buf->f_blocks, limit);
++		buf->f_bfree = min(buf->f_bfree, remaining);
++		buf->f_bavail = min(buf->f_bavail, remaining);
+ 	}
+ 
+ 	limit = min_not_zero(dquot->dq_dqb.dqb_isoftlimit,
+ 					dquot->dq_dqb.dqb_ihardlimit);
+ 
+-	if (limit && buf->f_files > limit) {
+-		buf->f_files = limit;
+-		buf->f_ffree =
+-			(buf->f_files > dquot->dq_dqb.dqb_curinodes) ?
+-			 (buf->f_files - dquot->dq_dqb.dqb_curinodes) : 0;
++	if (limit) {
++		uint64_t remaining = 0;
++
++		if (limit > dquot->dq_dqb.dqb_curinodes)
++			remaining = limit - dquot->dq_dqb.dqb_curinodes;
++
++		buf->f_files = min(buf->f_files, limit);
++		buf->f_ffree = min(buf->f_ffree, remaining);
+ 	}
+ 
+ 	spin_unlock(&dquot->dq_dqb_lock);
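The rewritten f2fs_statfs_project() derives a guarded, non-negative remaining count and clamps each statfs field independently, so f_bfree/f_bavail can no longer exceed the quota headroom when f_blocks happens to sit under the limit. The arithmetic reduces to this small helper (a sketch; names are illustrative):

    #include <stdint.h>

    static inline uint64_t min_u64(uint64_t a, uint64_t b)
    {
        return a < b ? a : b;
    }

    static void clamp_to_quota(uint64_t limit, uint64_t used,
                               uint64_t *total, uint64_t *avail)
    {
        uint64_t remaining = 0;

        if (limit > used)
            remaining = limit - used;    /* guarded: never underflows */

        *total = min_u64(*total, limit);
        *avail = min_u64(*avail, remaining);
    }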
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 83ac192e7fdd19..fa90309030e215 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -1946,6 +1946,7 @@ int fuse_do_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 	int err;
+ 	bool trust_local_cmtime = is_wb;
+ 	bool fault_blocked = false;
++	u64 attr_version;
+ 
+ 	if (!fc->default_permissions)
+ 		attr->ia_valid |= ATTR_FORCE;
+@@ -2030,6 +2031,8 @@ int fuse_do_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 		if (fc->handle_killpriv_v2 && !capable(CAP_FSETID))
+ 			inarg.valid |= FATTR_KILL_SUIDGID;
+ 	}
++
++	attr_version = fuse_get_attr_version(fm->fc);
+ 	fuse_setattr_fill(fc, &args, inode, &inarg, &outarg);
+ 	err = fuse_simple_request(fm, &args);
+ 	if (err) {
+@@ -2055,6 +2058,14 @@ int fuse_do_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 		/* FIXME: clear I_DIRTY_SYNC? */
+ 	}
+ 
++	if (fi->attr_version > attr_version) {
++		/*
++		 * Apply attributes, for example for fsnotify_change(), but set
++		 * attribute timeout to zero.
++		 * Apply the attributes, for example for fsnotify_change(), but
++		 * set the attribute timeout to zero.
++	}
++
+ 	fuse_change_attributes_common(inode, &outarg.attr, NULL,
+ 				      ATTR_TIMEOUT(&outarg),
+ 				      fuse_get_cache_mask(inode), 0);
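The new check compares the inode's attribute version against a snapshot taken just before the SETATTR request was sent; if it advanced, another update raced with the RPC, so the reply's attributes are applied but with a zero cache timeout. The versioned-cache rule in miniature (illustrative names):

    #include <stdint.h>

    struct cached_attr {
        uint64_t version;    /* bumped on every local attribute change */
        uint64_t timeout;    /* validity period of the cached copy */
    };

    static void apply_reply(struct cached_attr *c,
                            uint64_t version_at_request,
                            uint64_t reply_timeout)
    {
        /* A reply computed before a newer local update may still be
         * applied (e.g. so fsnotify sees it), but must not be trusted
         * beyond the present moment. */
        if (c->version > version_at_request)
            reply_timeout = 0;
        c->timeout = reply_timeout;
    }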
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index fd48e8d37f2edc..fa6d6b728ffc27 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -9,6 +9,7 @@
+ #include "fuse_i.h"
+ #include "dev_uring_i.h"
+ 
++#include <linux/dax.h>
+ #include <linux/pagemap.h>
+ #include <linux/slab.h>
+ #include <linux/file.h>
+@@ -162,6 +163,9 @@ static void fuse_evict_inode(struct inode *inode)
+ 	/* Will write inode on close/munmap and in all other dirtiers */
+ 	WARN_ON(inode->i_state & I_DIRTY_INODE);
+ 
++	if (FUSE_IS_DAX(inode))
++		dax_break_layout_final(inode);
++
+ 	truncate_inode_pages_final(&inode->i_data);
+ 	clear_inode(inode);
+ 	if (inode->i_sb->s_flags & SB_ACTIVE) {
+diff --git a/fs/namespace.c b/fs/namespace.c
+index d6ac7e533b0212..dfb72f827d4a7f 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -2765,14 +2765,14 @@ static int attach_recursive_mnt(struct mount *source_mnt,
+ 	hlist_for_each_entry_safe(child, n, &tree_list, mnt_hash) {
+ 		struct mount *q;
+ 		hlist_del_init(&child->mnt_hash);
+-		q = __lookup_mnt(&child->mnt_parent->mnt,
+-				 child->mnt_mountpoint);
+-		if (q)
+-			mnt_change_mountpoint(child, smp, q);
+ 		/* Notice when we are propagating across user namespaces */
+ 		if (child->mnt_parent->mnt_ns->user_ns != user_ns)
+ 			lock_mnt_tree(child);
+ 		child->mnt.mnt_flags &= ~MNT_LOCKED;
++		q = __lookup_mnt(&child->mnt_parent->mnt,
++				 child->mnt_mountpoint);
++		if (q)
++			mnt_change_mountpoint(child, smp, q);
+ 		commit_tree(child);
+ 	}
+ 	put_mountpoint(smp);
+@@ -5307,16 +5307,12 @@ SYSCALL_DEFINE5(open_tree_attr, int, dfd, const char __user *, filename,
+ 			kattr.kflags |= MOUNT_KATTR_RECURSE;
+ 
+ 		ret = wants_mount_setattr(uattr, usize, &kattr);
+-		if (ret < 0)
+-			return ret;
+-
+-		if (ret) {
++		if (ret > 0) {
+ 			ret = do_mount_setattr(&file->f_path, &kattr);
+-			if (ret)
+-				return ret;
+-
+ 			finish_mount_kattr(&kattr);
+ 		}
++		if (ret)
++			return ret;
+ 	}
+ 
+ 	fd = get_unused_fd_flags(flags & O_CLOEXEC);
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 119e447758b994..8ab7868807a7d9 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -557,6 +557,8 @@ nfs_fhget(struct super_block *sb, struct nfs_fh *fh, struct nfs_fattr *fattr)
+ 			set_nlink(inode, fattr->nlink);
+ 		else if (fattr_supported & NFS_ATTR_FATTR_NLINK)
+ 			nfs_set_cache_invalid(inode, NFS_INO_INVALID_NLINK);
++		else
++			set_nlink(inode, 1);
+ 		if (fattr->valid & NFS_ATTR_FATTR_OWNER)
+ 			inode->i_uid = fattr->uid;
+ 		else if (fattr_supported & NFS_ATTR_FATTR_OWNER)
+@@ -633,6 +635,34 @@ nfs_fattr_fixup_delegated(struct inode *inode, struct nfs_fattr *fattr)
+ 	}
+ }
+ 
++static void nfs_set_timestamps_to_ts(struct inode *inode, struct iattr *attr)
++{
++	unsigned int cache_flags = 0;
++
++	if (attr->ia_valid & ATTR_MTIME_SET) {
++		struct timespec64 ctime = inode_get_ctime(inode);
++		struct timespec64 mtime = inode_get_mtime(inode);
++		struct timespec64 now;
++		int updated = 0;
++
++		now = inode_set_ctime_current(inode);
++		if (!timespec64_equal(&now, &ctime))
++			updated |= S_CTIME;
++
++		inode_set_mtime_to_ts(inode, attr->ia_mtime);
++		if (!timespec64_equal(&now, &mtime))
++			updated |= S_MTIME;
++
++		inode_maybe_inc_iversion(inode, updated);
++		cache_flags |= NFS_INO_INVALID_CTIME | NFS_INO_INVALID_MTIME;
++	}
++	if (attr->ia_valid & ATTR_ATIME_SET) {
++		inode_set_atime_to_ts(inode, attr->ia_atime);
++		cache_flags |= NFS_INO_INVALID_ATIME;
++	}
++	NFS_I(inode)->cache_validity &= ~cache_flags;
++}
++
+ static void nfs_update_timestamps(struct inode *inode, unsigned int ia_valid)
+ {
+ 	enum file_time_flags time_flags = 0;
+@@ -701,14 +731,27 @@ nfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 
+ 	if (nfs_have_delegated_mtime(inode) && attr->ia_valid & ATTR_MTIME) {
+ 		spin_lock(&inode->i_lock);
+-		nfs_update_timestamps(inode, attr->ia_valid);
++		if (attr->ia_valid & ATTR_MTIME_SET) {
++			nfs_set_timestamps_to_ts(inode, attr);
++			attr->ia_valid &= ~(ATTR_MTIME|ATTR_MTIME_SET|
++						ATTR_ATIME|ATTR_ATIME_SET);
++		} else {
++			nfs_update_timestamps(inode, attr->ia_valid);
++			attr->ia_valid &= ~(ATTR_MTIME|ATTR_ATIME);
++		}
+ 		spin_unlock(&inode->i_lock);
+-		attr->ia_valid &= ~(ATTR_MTIME | ATTR_ATIME);
+ 	} else if (nfs_have_delegated_atime(inode) &&
+ 		   attr->ia_valid & ATTR_ATIME &&
+ 		   !(attr->ia_valid & ATTR_MTIME)) {
+-		nfs_update_delegated_atime(inode);
+-		attr->ia_valid &= ~ATTR_ATIME;
++		if (attr->ia_valid & ATTR_ATIME_SET) {
++			spin_lock(&inode->i_lock);
++			nfs_set_timestamps_to_ts(inode, attr);
++			spin_unlock(&inode->i_lock);
++			attr->ia_valid &= ~(ATTR_ATIME|ATTR_ATIME_SET);
++		} else {
++			nfs_update_delegated_atime(inode);
++			attr->ia_valid &= ~ATTR_ATIME;
++		}
+ 	}
+ 
+ 	/* Optimization: if the end result is no change, don't RPC */
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 9db317e7dea177..2f5a6aa3fd48ea 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -325,14 +325,14 @@ static void nfs4_bitmap_copy_adjust(__u32 *dst, const __u32 *src,
+ 
+ 	if (nfs_have_delegated_mtime(inode)) {
+ 		if (!(cache_validity & NFS_INO_INVALID_ATIME))
+-			dst[1] &= ~FATTR4_WORD1_TIME_ACCESS;
++			dst[1] &= ~(FATTR4_WORD1_TIME_ACCESS|FATTR4_WORD1_TIME_ACCESS_SET);
+ 		if (!(cache_validity & NFS_INO_INVALID_MTIME))
+-			dst[1] &= ~FATTR4_WORD1_TIME_MODIFY;
++			dst[1] &= ~(FATTR4_WORD1_TIME_MODIFY|FATTR4_WORD1_TIME_MODIFY_SET);
+ 		if (!(cache_validity & NFS_INO_INVALID_CTIME))
+-			dst[1] &= ~FATTR4_WORD1_TIME_METADATA;
++			dst[1] &= ~(FATTR4_WORD1_TIME_METADATA|FATTR4_WORD1_TIME_MODIFY_SET);
+ 	} else if (nfs_have_delegated_atime(inode)) {
+ 		if (!(cache_validity & NFS_INO_INVALID_ATIME))
+-			dst[1] &= ~FATTR4_WORD1_TIME_ACCESS;
++			dst[1] &= ~(FATTR4_WORD1_TIME_ACCESS|FATTR4_WORD1_TIME_ACCESS_SET);
+ 	}
+ }
+ 
+@@ -6220,6 +6220,8 @@ static ssize_t nfs4_proc_get_acl(struct inode *inode, void *buf, size_t buflen,
+ 	struct nfs_server *server = NFS_SERVER(inode);
+ 	int ret;
+ 
++	if (unlikely(NFS_FH(inode)->size == 0))
++		return -ENODATA;
+ 	if (!nfs4_server_supports_acls(server, type))
+ 		return -EOPNOTSUPP;
+ 	ret = nfs_revalidate_inode(inode, NFS_INO_INVALID_CHANGE);
+@@ -6294,6 +6296,9 @@ static int nfs4_proc_set_acl(struct inode *inode, const void *buf,
+ {
+ 	struct nfs4_exception exception = { };
+ 	int err;
++
++	if (unlikely(NFS_FH(inode)->size == 0))
++		return -ENODATA;
+ 	do {
+ 		err = __nfs4_proc_set_acl(inode, buf, buflen, type);
+ 		trace_nfs4_set_acl(inode, err);
+@@ -10861,7 +10866,7 @@ const struct nfs4_minor_version_ops *nfs_v4_minor_ops[] = {
+ 
+ static ssize_t nfs4_listxattr(struct dentry *dentry, char *list, size_t size)
+ {
+-	ssize_t error, error2, error3;
++	ssize_t error, error2, error3, error4;
+ 	size_t left = size;
+ 
+ 	error = generic_listxattr(dentry, list, left);
+@@ -10884,8 +10889,16 @@ static ssize_t nfs4_listxattr(struct dentry *dentry, char *list, size_t size)
+ 	error3 = nfs4_listxattr_nfs4_user(d_inode(dentry), list, left);
+ 	if (error3 < 0)
+ 		return error3;
++	if (list) {
++		list += error3;
++		left -= error3;
++	}
++
++	error4 = security_inode_listsecurity(d_inode(dentry), list, left);
++	if (error4 < 0)
++		return error4;
+ 
+-	error += error2 + error3;
++	error += error2 + error3 + error4;
+ 	if (size && error > size)
+ 		return -ERANGE;
+ 	return error;
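nfs4_listxattr() now chains a fourth name provider, security_inode_listsecurity(), into the same buffer: each provider returns the bytes it wrote (or would need when the buffer is NULL), and the caller advances the cursor and shrinks the remaining space before the final ERANGE check. The chaining in compact form (a sketch; names are illustrative):

    #include <sys/types.h>
    #include <stddef.h>
    #include <errno.h>

    typedef ssize_t (*lister_t)(char *buf, size_t left);

    static ssize_t list_all(lister_t *providers, size_t n,
                            char *buf, size_t size)
    {
        size_t left = size;
        ssize_t total = 0;

        for (size_t i = 0; i < n; i++) {
            /* Providers write at most 'left' bytes when buf is
             * non-NULL and return their length either way. */
            ssize_t len = providers[i](buf, left);

            if (len < 0)
                return len;
            if (buf) {
                buf += len;
                left -= len;
            }
            total += len;
        }
        if (size && (size_t)total > size)
            return -ERANGE;
        return total;
    }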
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index 0819c739cc2ffc..5d6b60d56c275e 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -305,7 +305,9 @@ enum ovl_path_type ovl_path_realdata(struct dentry *dentry, struct path *path)
+ 
+ struct dentry *ovl_dentry_upper(struct dentry *dentry)
+ {
+-	return ovl_upperdentry_dereference(OVL_I(d_inode(dentry)));
++	struct inode *inode = d_inode(dentry);
++
++	return inode ? ovl_upperdentry_dereference(OVL_I(inode)) : NULL;
+ }
+ 
+ struct dentry *ovl_dentry_lower(struct dentry *dentry)
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 994cde10e3f4d3..854d23be3d7bbc 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -2179,7 +2179,7 @@ static unsigned long pagemap_thp_category(struct pagemap_scan_private *p,
+ 				categories |= PAGE_IS_FILE;
+ 		}
+ 
+-		if (is_zero_pfn(pmd_pfn(pmd)))
++		if (is_huge_zero_pmd(pmd))
+ 			categories |= PAGE_IS_PFNZERO;
+ 		if (pmd_soft_dirty(pmd))
+ 			categories |= PAGE_IS_SOFT_DIRTY;
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 0c80ca352f3fae..56381cbb63990f 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -709,6 +709,7 @@ inc_rfc1001_len(void *buf, int count)
+ struct TCP_Server_Info {
+ 	struct list_head tcp_ses_list;
+ 	struct list_head smb_ses_list;
++	struct list_head rlist; /* reconnect list */
+ 	spinlock_t srv_lock;  /* protect anything here that is not protected */
+ 	__u64 conn_id; /* connection identifier (useful for debugging) */
+ 	int srv_count; /* reference counter */
+@@ -773,6 +774,7 @@ struct TCP_Server_Info {
+ 	char workstation_RFC1001_name[RFC1001_NAME_LEN_WITH_NULL];
+ 	__u32 sequence_number; /* for signing, protected by srv_mutex */
+ 	__u32 reconnect_instance; /* incremented on each reconnect */
++	__le32 session_key_id; /* retrieved from negotiate response and sent in session setup request */
+ 	struct session_key session_key;
+ 	unsigned long lstrp; /* when we got last response from this server */
+ 	struct cifs_secmech secmech; /* crypto sec mech functs, descriptors */
+diff --git a/fs/smb/client/cifspdu.h b/fs/smb/client/cifspdu.h
+index 1b79fe07476f65..d9cf7db0ac35e9 100644
+--- a/fs/smb/client/cifspdu.h
++++ b/fs/smb/client/cifspdu.h
+@@ -597,7 +597,7 @@ typedef union smb_com_session_setup_andx {
+ 		__le16 MaxBufferSize;
+ 		__le16 MaxMpxCount;
+ 		__le16 VcNumber;
+-		__u32 SessionKey;
++		__le32 SessionKey;
+ 		__le16 SecurityBlobLength;
+ 		__u32 Reserved;
+ 		__le32 Capabilities;	/* see below */
+@@ -616,7 +616,7 @@ typedef union smb_com_session_setup_andx {
+ 		__le16 MaxBufferSize;
+ 		__le16 MaxMpxCount;
+ 		__le16 VcNumber;
+-		__u32 SessionKey;
++		__le32 SessionKey;
+ 		__le16 CaseInsensitivePasswordLength; /* ASCII password len */
+ 		__le16 CaseSensitivePasswordLength; /* Unicode password length*/
+ 		__u32 Reserved;	/* see below */
+@@ -654,7 +654,7 @@ typedef union smb_com_session_setup_andx {
+ 		__le16 MaxBufferSize;
+ 		__le16 MaxMpxCount;
+ 		__le16 VcNumber;
+-		__u32 SessionKey;
++		__le32 SessionKey;
+ 		__le16 PasswordLength;
+ 		__u32 Reserved; /* encrypt key len and offset */
+ 		__le16 ByteCount;
+diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
+index a3ba3346ed313f..7216fcec79e8b2 100644
+--- a/fs/smb/client/cifssmb.c
++++ b/fs/smb/client/cifssmb.c
+@@ -498,6 +498,7 @@ CIFSSMBNegotiate(const unsigned int xid,
+ 	server->max_rw = le32_to_cpu(pSMBr->MaxRawSize);
+ 	cifs_dbg(NOISY, "Max buf = %d\n", ses->server->maxBuf);
+ 	server->capabilities = le32_to_cpu(pSMBr->Capabilities);
++	server->session_key_id = pSMBr->SessionKey;
+ 	server->timeAdj = (int)(__s16)le16_to_cpu(pSMBr->ServerTimeZone);
+ 	server->timeAdj *= 60;
+ 
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index f9aef60f1901ac..e92c7b71626fd7 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -124,6 +124,14 @@ static void smb2_query_server_interfaces(struct work_struct *work)
+ 			   (SMB_INTERFACE_POLL_INTERVAL * HZ));
+ }
+ 
++#define set_need_reco(server) \
++do { \
++	spin_lock(&server->srv_lock); \
++	if (server->tcpStatus != CifsExiting) \
++		server->tcpStatus = CifsNeedReconnect; \
++	spin_unlock(&server->srv_lock); \
++} while (0)
++
+ /*
+  * Update the tcpStatus for the server.
+  * This is used to signal the cifsd thread to call cifs_reconnect
+@@ -137,39 +145,45 @@ void
+ cifs_signal_cifsd_for_reconnect(struct TCP_Server_Info *server,
+ 				bool all_channels)
+ {
+-	struct TCP_Server_Info *pserver;
++	struct TCP_Server_Info *nserver;
+ 	struct cifs_ses *ses;
++	LIST_HEAD(reco);
+ 	int i;
+ 
+-	/* If server is a channel, select the primary channel */
+-	pserver = SERVER_IS_CHAN(server) ? server->primary_server : server;
+-
+ 	/* if we need to signal just this channel */
+ 	if (!all_channels) {
+-		spin_lock(&server->srv_lock);
+-		if (server->tcpStatus != CifsExiting)
+-			server->tcpStatus = CifsNeedReconnect;
+-		spin_unlock(&server->srv_lock);
++		set_need_reco(server);
+ 		return;
+ 	}
+ 
+-	spin_lock(&cifs_tcp_ses_lock);
+-	list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) {
+-		if (cifs_ses_exiting(ses))
+-			continue;
+-		spin_lock(&ses->chan_lock);
+-		for (i = 0; i < ses->chan_count; i++) {
+-			if (!ses->chans[i].server)
++	if (SERVER_IS_CHAN(server))
++		server = server->primary_server;
++	scoped_guard(spinlock, &cifs_tcp_ses_lock) {
++		set_need_reco(server);
++		list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) {
++			spin_lock(&ses->ses_lock);
++			if (ses->ses_status == SES_EXITING) {
++				spin_unlock(&ses->ses_lock);
+ 				continue;
+-
+-			spin_lock(&ses->chans[i].server->srv_lock);
+-			if (ses->chans[i].server->tcpStatus != CifsExiting)
+-				ses->chans[i].server->tcpStatus = CifsNeedReconnect;
+-			spin_unlock(&ses->chans[i].server->srv_lock);
++			}
++			spin_lock(&ses->chan_lock);
++			for (i = 1; i < ses->chan_count; i++) {
++				nserver = ses->chans[i].server;
++				if (!nserver)
++					continue;
++				nserver->srv_count++;
++				list_add(&nserver->rlist, &reco);
++			}
++			spin_unlock(&ses->chan_lock);
++			spin_unlock(&ses->ses_lock);
+ 		}
+-		spin_unlock(&ses->chan_lock);
+ 	}
+-	spin_unlock(&cifs_tcp_ses_lock);
++
++	list_for_each_entry_safe(server, nserver, &reco, rlist) {
++		list_del_init(&server->rlist);
++		set_need_reco(server);
++		cifs_put_tcp_session(server, 0);
++	}
+ }
+ 
+ /*
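The rework above stops taking each channel's srv_lock while holding cifs_tcp_ses_lock: secondary channels are pinned and collected onto a private list under the global lock, then marked for reconnect and released after it is dropped. The same collect-then-process shape in user-space terms (a sketch; names are illustrative):

    #include <pthread.h>
    #include <stddef.h>

    struct chan {
        struct chan *next;       /* private reconnect-list linkage */
        int refcount;
        int need_reconnect;
    };

    static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

    static void signal_reconnect(struct chan **channels, size_t n)
    {
        struct chan *reco = NULL;

        pthread_mutex_lock(&global_lock);
        for (size_t i = 0; i < n; i++) {
            channels[i]->refcount++;     /* pin while off the lock */
            channels[i]->next = reco;
            reco = channels[i];
        }
        pthread_mutex_unlock(&global_lock);

        while (reco) {                   /* global lock no longer held */
            struct chan *c = reco;

            reco = c->next;
            c->need_reconnect = 1;       /* may take per-channel locks */
            c->refcount--;               /* drop the pinned reference */
        }
    }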
+diff --git a/fs/smb/client/misc.c b/fs/smb/client/misc.c
+index 7b6ed9b23e713d..e77017f470845f 100644
+--- a/fs/smb/client/misc.c
++++ b/fs/smb/client/misc.c
+@@ -326,6 +326,14 @@ check_smb_hdr(struct smb_hdr *smb)
+ 	if (smb->Command == SMB_COM_LOCKING_ANDX)
+ 		return 0;
+ 
++	/*
++	 * A Windows NT server returns an error response (e.g. STATUS_DELETE_PENDING
++	 * or STATUS_OBJECT_NAME_NOT_FOUND or ERRDOS/ERRbadfile or any other)
++	 * for some TRANS2 requests without the RESPONSE flag set in the header.
++	 */
++	if (smb->Command == SMB_COM_TRANSACTION2 && smb->Status.CifsError != 0)
++		return 0;
++
+ 	cifs_dbg(VFS, "Server sent request, not response. mid=%u\n",
+ 		 get_mid(smb));
+ 	return 1;
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index 511611206dab48..1c40e42e4d8973 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -875,15 +875,8 @@ int smb2_parse_native_symlink(char **target, const char *buf, unsigned int len,
+ 			abs_path += sizeof("\\DosDevices\\")-1;
+ 		else if (strstarts(abs_path, "\\GLOBAL??\\"))
+ 			abs_path += sizeof("\\GLOBAL??\\")-1;
+-		else {
+-			/* Unhandled absolute symlink, points outside of DOS/Win32 */
+-			cifs_dbg(VFS,
+-				 "absolute symlink '%s' cannot be converted from NT format "
+-				 "because points to unknown target\n",
+-				 smb_target);
+-			rc = -EIO;
+-			goto out;
+-		}
++		else
++			goto out_unhandled_target;
+ 
+ 		/* Sometimes path separator after \?? is double backslash */
+ 		if (abs_path[0] == '\\')
+@@ -910,13 +903,7 @@ int smb2_parse_native_symlink(char **target, const char *buf, unsigned int len,
+ 			abs_path++;
+ 			abs_path[0] = drive_letter;
+ 		} else {
+-			/* Unhandled absolute symlink. Report an error. */
+-			cifs_dbg(VFS,
+-				 "absolute symlink '%s' cannot be converted from NT format "
+-				 "because points to unknown target\n",
+-				 smb_target);
+-			rc = -EIO;
+-			goto out;
++			goto out_unhandled_target;
+ 		}
+ 
+ 		abs_path_len = strlen(abs_path)+1;
+@@ -966,6 +953,7 @@ int smb2_parse_native_symlink(char **target, const char *buf, unsigned int len,
+ 		 * These paths have same format as Linux symlinks, so no
+ 		 * conversion is needed.
+ 		 */
++out_unhandled_target:
+ 		linux_target = smb_target;
+ 		smb_target = NULL;
+ 	}
+diff --git a/fs/smb/client/sess.c b/fs/smb/client/sess.c
+index 12c99fb4023dc2..330bc3d25badd8 100644
+--- a/fs/smb/client/sess.c
++++ b/fs/smb/client/sess.c
+@@ -631,6 +631,7 @@ static __u32 cifs_ssetup_hdr(struct cifs_ses *ses,
+ 					USHRT_MAX));
+ 	pSMB->req.MaxMpxCount = cpu_to_le16(server->maxReq);
+ 	pSMB->req.VcNumber = cpu_to_le16(1);
++	pSMB->req.SessionKey = server->session_key_id;
+ 
+ 	/* Now no need to set SMBFLG_CASELESS or obsolete CANONICAL PATH */
+ 
+@@ -1687,22 +1688,22 @@ _sess_auth_rawntlmssp_assemble_req(struct sess_data *sess_data)
+ 	pSMB = (SESSION_SETUP_ANDX *)sess_data->iov[0].iov_base;
+ 
+ 	capabilities = cifs_ssetup_hdr(ses, server, pSMB);
+-	if ((pSMB->req.hdr.Flags2 & SMBFLG2_UNICODE) == 0) {
+-		cifs_dbg(VFS, "NTLMSSP requires Unicode support\n");
+-		return -ENOSYS;
+-	}
+-
+ 	pSMB->req.hdr.Flags2 |= SMBFLG2_EXT_SEC;
+ 	capabilities |= CAP_EXTENDED_SECURITY;
+ 	pSMB->req.Capabilities |= cpu_to_le32(capabilities);
+ 
+ 	bcc_ptr = sess_data->iov[2].iov_base;
+-	/* unicode strings must be word aligned */
+-	if (!IS_ALIGNED(sess_data->iov[0].iov_len + sess_data->iov[1].iov_len, 2)) {
+-		*bcc_ptr = 0;
+-		bcc_ptr++;
++
++	if (pSMB->req.hdr.Flags2 & SMBFLG2_UNICODE) {
++		/* unicode strings must be word aligned */
++		if (!IS_ALIGNED(sess_data->iov[0].iov_len + sess_data->iov[1].iov_len, 2)) {
++			*bcc_ptr = 0;
++			bcc_ptr++;
++		}
++		unicode_oslm_strings(&bcc_ptr, sess_data->nls_cp);
++	} else {
++		ascii_oslm_strings(&bcc_ptr, sess_data->nls_cp);
+ 	}
+-	unicode_oslm_strings(&bcc_ptr, sess_data->nls_cp);
+ 
+ 	sess_data->iov[2].iov_len = (long) bcc_ptr -
+ 					(long) sess_data->iov[2].iov_base;
+diff --git a/fs/smb/client/trace.h b/fs/smb/client/trace.h
+index 52bcb55d995276..93e5b2bb9f28a2 100644
+--- a/fs/smb/client/trace.h
++++ b/fs/smb/client/trace.h
+@@ -140,7 +140,7 @@ DECLARE_EVENT_CLASS(smb3_rw_err_class,
+ 		__entry->len = len;
+ 		__entry->rc = rc;
+ 	),
+-	TP_printk("\tR=%08x[%x] xid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x rc=%d",
++	TP_printk("R=%08x[%x] xid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x rc=%d",
+ 		  __entry->rreq_debug_id, __entry->rreq_debug_index,
+ 		  __entry->xid, __entry->sesid, __entry->tid, __entry->fid,
+ 		  __entry->offset, __entry->len, __entry->rc)
+@@ -190,7 +190,7 @@ DECLARE_EVENT_CLASS(smb3_other_err_class,
+ 		__entry->len = len;
+ 		__entry->rc = rc;
+ 	),
+-	TP_printk("\txid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x rc=%d",
++	TP_printk("xid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x rc=%d",
+ 		__entry->xid, __entry->sesid, __entry->tid, __entry->fid,
+ 		__entry->offset, __entry->len, __entry->rc)
+ )
+@@ -247,7 +247,7 @@ DECLARE_EVENT_CLASS(smb3_copy_range_err_class,
+ 		__entry->len = len;
+ 		__entry->rc = rc;
+ 	),
+-	TP_printk("\txid=%u sid=0x%llx tid=0x%x source fid=0x%llx source offset=0x%llx target fid=0x%llx target offset=0x%llx len=0x%x rc=%d",
++	TP_printk("xid=%u sid=0x%llx tid=0x%x source fid=0x%llx source offset=0x%llx target fid=0x%llx target offset=0x%llx len=0x%x rc=%d",
+ 		__entry->xid, __entry->sesid, __entry->tid, __entry->target_fid,
+ 		__entry->src_offset, __entry->target_fid, __entry->target_offset, __entry->len, __entry->rc)
+ )
+@@ -298,7 +298,7 @@ DECLARE_EVENT_CLASS(smb3_copy_range_done_class,
+ 		__entry->target_offset = target_offset;
+ 		__entry->len = len;
+ 	),
+-	TP_printk("\txid=%u sid=0x%llx tid=0x%x source fid=0x%llx source offset=0x%llx target fid=0x%llx target offset=0x%llx len=0x%x",
++	TP_printk("xid=%u sid=0x%llx tid=0x%x source fid=0x%llx source offset=0x%llx target fid=0x%llx target offset=0x%llx len=0x%x",
+ 		__entry->xid, __entry->sesid, __entry->tid, __entry->target_fid,
+ 		__entry->src_offset, __entry->target_fid, __entry->target_offset, __entry->len)
+ )
+@@ -482,7 +482,7 @@ DECLARE_EVENT_CLASS(smb3_fd_class,
+ 		__entry->tid = tid;
+ 		__entry->sesid = sesid;
+ 	),
+-	TP_printk("\txid=%u sid=0x%llx tid=0x%x fid=0x%llx",
++	TP_printk("xid=%u sid=0x%llx tid=0x%x fid=0x%llx",
+ 		__entry->xid, __entry->sesid, __entry->tid, __entry->fid)
+ )
+ 
+@@ -521,7 +521,7 @@ DECLARE_EVENT_CLASS(smb3_fd_err_class,
+ 		__entry->sesid = sesid;
+ 		__entry->rc = rc;
+ 	),
+-	TP_printk("\txid=%u sid=0x%llx tid=0x%x fid=0x%llx rc=%d",
++	TP_printk("xid=%u sid=0x%llx tid=0x%x fid=0x%llx rc=%d",
+ 		__entry->xid, __entry->sesid, __entry->tid, __entry->fid,
+ 		__entry->rc)
+ )
+@@ -794,7 +794,7 @@ DECLARE_EVENT_CLASS(smb3_cmd_err_class,
+ 		__entry->status = status;
+ 		__entry->rc = rc;
+ 	),
+-	TP_printk("\tsid=0x%llx tid=0x%x cmd=%u mid=%llu status=0x%x rc=%d",
++	TP_printk("sid=0x%llx tid=0x%x cmd=%u mid=%llu status=0x%x rc=%d",
+ 		__entry->sesid, __entry->tid, __entry->cmd, __entry->mid,
+ 		__entry->status, __entry->rc)
+ )
+@@ -829,7 +829,7 @@ DECLARE_EVENT_CLASS(smb3_cmd_done_class,
+ 		__entry->cmd = cmd;
+ 		__entry->mid = mid;
+ 	),
+-	TP_printk("\tsid=0x%llx tid=0x%x cmd=%u mid=%llu",
++	TP_printk("sid=0x%llx tid=0x%x cmd=%u mid=%llu",
+ 		__entry->sesid, __entry->tid,
+ 		__entry->cmd, __entry->mid)
+ )
+@@ -867,7 +867,7 @@ DECLARE_EVENT_CLASS(smb3_mid_class,
+ 		__entry->when_sent = when_sent;
+ 		__entry->when_received = when_received;
+ 	),
+-	TP_printk("\tcmd=%u mid=%llu pid=%u, when_sent=%lu when_rcv=%lu",
++	TP_printk("cmd=%u mid=%llu pid=%u, when_sent=%lu when_rcv=%lu",
+ 		__entry->cmd, __entry->mid, __entry->pid, __entry->when_sent,
+ 		__entry->when_received)
+ )
+@@ -898,7 +898,7 @@ DECLARE_EVENT_CLASS(smb3_exit_err_class,
+ 		__assign_str(func_name);
+ 		__entry->rc = rc;
+ 	),
+-	TP_printk("\t%s: xid=%u rc=%d",
++	TP_printk("%s: xid=%u rc=%d",
+ 		__get_str(func_name), __entry->xid, __entry->rc)
+ )
+ 
+@@ -924,7 +924,7 @@ DECLARE_EVENT_CLASS(smb3_sync_err_class,
+ 		__entry->ino = ino;
+ 		__entry->rc = rc;
+ 	),
+-	TP_printk("\tino=%lu rc=%d",
++	TP_printk("ino=%lu rc=%d",
+ 		__entry->ino, __entry->rc)
+ )
+ 
+@@ -950,7 +950,7 @@ DECLARE_EVENT_CLASS(smb3_enter_exit_class,
+ 		__entry->xid = xid;
+ 		__assign_str(func_name);
+ 	),
+-	TP_printk("\t%s: xid=%u",
++	TP_printk("%s: xid=%u",
+ 		__get_str(func_name), __entry->xid)
+ )
+ 
+diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h
+index 572102098c1080..dd3e0e3f7bf046 100644
+--- a/fs/smb/server/connection.h
++++ b/fs/smb/server/connection.h
+@@ -108,6 +108,7 @@ struct ksmbd_conn {
+ 	__le16				signing_algorithm;
+ 	bool				binding;
+ 	atomic_t			refcnt;
++	bool				is_aapl;
+ };
+ 
+ struct ksmbd_conn_ops {
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index c6b990c93bfa75..ad2b15ec3b561d 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -2875,7 +2875,7 @@ int smb2_open(struct ksmbd_work *work)
+ 	int req_op_level = 0, open_flags = 0, may_flags = 0, file_info = 0;
+ 	int rc = 0;
+ 	int contxt_cnt = 0, query_disk_id = 0;
+-	int maximal_access_ctxt = 0, posix_ctxt = 0;
++	bool maximal_access_ctxt = false, posix_ctxt = false;
+ 	int s_type = 0;
+ 	int next_off = 0;
+ 	char *name = NULL;
+@@ -2904,6 +2904,27 @@ int smb2_open(struct ksmbd_work *work)
+ 		return create_smb2_pipe(work);
+ 	}
+ 
++	if (req->CreateContextsOffset && tcon->posix_extensions) {
++		context = smb2_find_context_vals(req, SMB2_CREATE_TAG_POSIX, 16);
++		if (IS_ERR(context)) {
++			rc = PTR_ERR(context);
++			goto err_out2;
++		} else if (context) {
++			struct create_posix *posix = (struct create_posix *)context;
++
++			if (le16_to_cpu(context->DataOffset) +
++				le32_to_cpu(context->DataLength) <
++			    sizeof(struct create_posix) - 4) {
++				rc = -EINVAL;
++				goto err_out2;
++			}
++			ksmbd_debug(SMB, "get posix context\n");
++
++			posix_mode = le32_to_cpu(posix->Mode);
++			posix_ctxt = true;
++		}
++	}
++
+ 	if (req->NameLength) {
+ 		name = smb2_get_name((char *)req + le16_to_cpu(req->NameOffset),
+ 				     le16_to_cpu(req->NameLength),
+@@ -2926,9 +2947,11 @@ int smb2_open(struct ksmbd_work *work)
+ 				goto err_out2;
+ 		}
+ 
+-		rc = ksmbd_validate_filename(name);
+-		if (rc < 0)
+-			goto err_out2;
++		if (posix_ctxt == false) {
++			rc = ksmbd_validate_filename(name);
++			if (rc < 0)
++				goto err_out2;
++		}
+ 
+ 		if (ksmbd_share_veto_filename(share, name)) {
+ 			rc = -ENOENT;
+@@ -3086,28 +3109,6 @@ int smb2_open(struct ksmbd_work *work)
+ 			rc = -EBADF;
+ 			goto err_out2;
+ 		}
+-
+-		if (tcon->posix_extensions) {
+-			context = smb2_find_context_vals(req,
+-							 SMB2_CREATE_TAG_POSIX, 16);
+-			if (IS_ERR(context)) {
+-				rc = PTR_ERR(context);
+-				goto err_out2;
+-			} else if (context) {
+-				struct create_posix *posix =
+-					(struct create_posix *)context;
+-				if (le16_to_cpu(context->DataOffset) +
+-				    le32_to_cpu(context->DataLength) <
+-				    sizeof(struct create_posix) - 4) {
+-					rc = -EINVAL;
+-					goto err_out2;
+-				}
+-				ksmbd_debug(SMB, "get posix context\n");
+-
+-				posix_mode = le32_to_cpu(posix->Mode);
+-				posix_ctxt = 1;
+-			}
+-		}
+ 	}
+ 
+ 	if (ksmbd_override_fsids(work)) {
+@@ -3540,6 +3541,15 @@ int smb2_open(struct ksmbd_work *work)
+ 			ksmbd_debug(SMB, "get query on disk id context\n");
+ 			query_disk_id = 1;
+ 		}
++
++		if (conn->is_aapl == false) {
++			context = smb2_find_context_vals(req, SMB2_CREATE_AAPL, 4);
++			if (IS_ERR(context)) {
++				rc = PTR_ERR(context);
++				goto err_out1;
++			} else if (context)
++				conn->is_aapl = true;
++		}
+ 	}
+ 
+ 	rc = ksmbd_vfs_getattr(&path, &stat);
+@@ -3979,7 +3989,10 @@ static int smb2_populate_readdir_entry(struct ksmbd_conn *conn, int info_level,
+ 		if (dinfo->EaSize)
+ 			dinfo->ExtFileAttributes = FILE_ATTRIBUTE_REPARSE_POINT_LE;
+ 		dinfo->Reserved = 0;
+-		dinfo->UniqueId = cpu_to_le64(ksmbd_kstat->kstat->ino);
++		if (conn->is_aapl)
++			dinfo->UniqueId = 0;
++		else
++			dinfo->UniqueId = cpu_to_le64(ksmbd_kstat->kstat->ino);
+ 		if (d_info->hide_dot_file && d_info->name[0] == '.')
+ 			dinfo->ExtFileAttributes |= FILE_ATTRIBUTE_HIDDEN_LE;
+ 		memcpy(dinfo->FileName, conv_name, conv_len);
+@@ -3996,7 +4009,10 @@ static int smb2_populate_readdir_entry(struct ksmbd_conn *conn, int info_level,
+ 			smb2_get_reparse_tag_special_file(ksmbd_kstat->kstat->mode);
+ 		if (fibdinfo->EaSize)
+ 			fibdinfo->ExtFileAttributes = FILE_ATTRIBUTE_REPARSE_POINT_LE;
+-		fibdinfo->UniqueId = cpu_to_le64(ksmbd_kstat->kstat->ino);
++		if (conn->is_aapl)
++			fibdinfo->UniqueId = 0;
++		else
++			fibdinfo->UniqueId = cpu_to_le64(ksmbd_kstat->kstat->ino);
+ 		fibdinfo->ShortNameLength = 0;
+ 		fibdinfo->Reserved = 0;
+ 		fibdinfo->Reserved2 = cpu_to_le16(0);
+diff --git a/fs/smb/server/smb2pdu.h b/fs/smb/server/smb2pdu.h
+index 17a0b18a8406b3..16ae8a10490beb 100644
+--- a/fs/smb/server/smb2pdu.h
++++ b/fs/smb/server/smb2pdu.h
+@@ -63,6 +63,9 @@ struct preauth_integrity_info {
+ 
+ #define SMB2_SESSION_TIMEOUT		(10 * HZ)
+ 
++/* Apple Defined Contexts */
++#define SMB2_CREATE_AAPL		"AAPL"
++
+ struct create_durable_req_v2 {
+ 	struct create_context_hdr ccontext;
+ 	__u8   Name[8];
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index d15316bffd70bb..6e9d2a856a6b0a 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -29,6 +29,7 @@
+ #include <linux/idr.h>
+ #include <linux/leds.h>
+ #include <linux/rculist.h>
++#include <linux/srcu.h>
+ 
+ #include <net/bluetooth/hci.h>
+ #include <net/bluetooth/hci_sync.h>
+@@ -345,6 +346,7 @@ struct adv_monitor {
+ 
+ struct hci_dev {
+ 	struct list_head list;
++	struct srcu_struct srcu;
+ 	struct mutex	lock;
+ 
+ 	struct ida	unset_handle_ida;
+diff --git a/include/uapi/linux/vm_sockets.h b/include/uapi/linux/vm_sockets.h
+index ed07181d4eff91..e05280e4152286 100644
+--- a/include/uapi/linux/vm_sockets.h
++++ b/include/uapi/linux/vm_sockets.h
+@@ -17,6 +17,10 @@
+ #ifndef _UAPI_VM_SOCKETS_H
+ #define _UAPI_VM_SOCKETS_H
+ 
++#ifndef __KERNEL__
++#include <sys/socket.h>        /* for struct sockaddr and sa_family_t */
++#endif
++
+ #include <linux/socket.h>
+ #include <linux/types.h>
+ 
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 74218c7b760483..211f7f5f507ea9 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -1647,11 +1647,12 @@ static void io_iopoll_req_issued(struct io_kiocb *req, unsigned int issue_flags)
+ 
+ io_req_flags_t io_file_get_flags(struct file *file)
+ {
++	struct inode *inode = file_inode(file);
+ 	io_req_flags_t res = 0;
+ 
+ 	BUILD_BUG_ON(REQ_F_ISREG_BIT != REQ_F_SUPPORT_NOWAIT_BIT + 1);
+ 
+-	if (S_ISREG(file_inode(file)->i_mode))
++	if (S_ISREG(inode->i_mode) && !(inode->i_flags & S_ANON_INODE))
+ 		res |= REQ_F_ISREG;
+ 	if ((file->f_flags & O_NONBLOCK) || (file->f_mode & FMODE_NOWAIT))
+ 		res |= REQ_F_SUPPORT_NOWAIT;
+diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
+index a8467f6aba54d7..6c676aedc9a20e 100644
+--- a/io_uring/kbuf.c
++++ b/io_uring/kbuf.c
+@@ -271,6 +271,7 @@ static int io_ring_buffers_peek(struct io_kiocb *req, struct buf_sel_arg *arg,
+ 		if (len > arg->max_len) {
+ 			len = arg->max_len;
+ 			if (!(bl->flags & IOBL_INC)) {
++				arg->partial_map = 1;
+ 				if (iov != arg->iovs)
+ 					break;
+ 				buf->len = len;
+diff --git a/io_uring/kbuf.h b/io_uring/kbuf.h
+index 2ec0b983ce243c..6eb16ba6ed3921 100644
+--- a/io_uring/kbuf.h
++++ b/io_uring/kbuf.h
+@@ -55,6 +55,7 @@ struct buf_sel_arg {
+ 	size_t max_len;
+ 	unsigned short nr_iovs;
+ 	unsigned short mode;
++	unsigned short partial_map;
+ };
+ 
+ void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 3feceb2b5b97ec..7bf5e62d5a292e 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -76,12 +76,17 @@ struct io_sr_msg {
+ 	u16				flags;
+ 	/* initialised and used only by !msg send variants */
+ 	u16				buf_group;
+-	bool				retry;
++	unsigned short			retry_flags;
+ 	void __user			*msg_control;
+ 	/* used only for send zerocopy */
+ 	struct io_kiocb 		*notif;
+ };
+ 
++enum sr_retry_flags {
++	IO_SR_MSG_RETRY		= 1,
++	IO_SR_MSG_PARTIAL_MAP	= 2,
++};
++
+ /*
+  * Number of times we'll try and do receives if there's more data. If we
+  * exceed this limit, then add us to the back of the queue and retry from
+@@ -188,7 +193,7 @@ static inline void io_mshot_prep_retry(struct io_kiocb *req,
+ 
+ 	req->flags &= ~REQ_F_BL_EMPTY;
+ 	sr->done_io = 0;
+-	sr->retry = false;
++	sr->retry_flags = 0;
+ 	sr->len = 0; /* get from the provided buffer */
+ 	req->buf_index = sr->buf_group;
+ }
+@@ -401,7 +406,7 @@ int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 	struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
+ 
+ 	sr->done_io = 0;
+-	sr->retry = false;
++	sr->retry_flags = 0;
+ 	sr->len = READ_ONCE(sqe->len);
+ 	sr->flags = READ_ONCE(sqe->ioprio);
+ 	if (sr->flags & ~SENDMSG_FLAGS)
+@@ -759,7 +764,7 @@ int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 	struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
+ 
+ 	sr->done_io = 0;
+-	sr->retry = false;
++	sr->retry_flags = 0;
+ 
+ 	if (unlikely(sqe->file_index || sqe->addr2))
+ 		return -EINVAL;
+@@ -831,7 +836,7 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
+ 
+ 		cflags |= io_put_kbufs(req, this_ret, io_bundle_nbufs(kmsg, this_ret),
+ 				      issue_flags);
+-		if (sr->retry)
++		if (sr->retry_flags & IO_SR_MSG_RETRY)
+ 			cflags = req->cqe.flags | (cflags & CQE_F_MASK);
+ 		/* bundle with no more immediate buffers, we're done */
+ 		if (req->flags & REQ_F_BL_EMPTY)
+@@ -840,12 +845,12 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
+ 		 * If more is available AND it was a full transfer, retry and
+ 		 * append to this one
+ 		 */
+-		if (!sr->retry && kmsg->msg.msg_inq > 1 && this_ret > 0 &&
++		if (!sr->retry_flags && kmsg->msg.msg_inq > 1 && this_ret > 0 &&
+ 		    !iov_iter_count(&kmsg->msg.msg_iter)) {
+ 			req->cqe.flags = cflags & ~CQE_F_MASK;
+ 			sr->len = kmsg->msg.msg_inq;
+ 			sr->done_io += this_ret;
+-			sr->retry = true;
++			sr->retry_flags |= IO_SR_MSG_RETRY;
+ 			return false;
+ 		}
+ 	} else {
+@@ -1084,6 +1089,14 @@ static int io_recv_buf_select(struct io_kiocb *req, struct io_async_msghdr *kmsg
+ 		if (unlikely(ret < 0))
+ 			return ret;
+ 
++		if (arg.iovs != &kmsg->fast_iov && arg.iovs != kmsg->vec.iovec) {
++			kmsg->vec.nr = ret;
++			kmsg->vec.iovec = arg.iovs;
++			req->flags |= REQ_F_NEED_CLEANUP;
++		}
++		if (arg.partial_map)
++			sr->retry_flags |= IO_SR_MSG_PARTIAL_MAP;
++
+ 		/* special case 1 vec, can be a fast path */
+ 		if (ret == 1) {
+ 			sr->buf = arg.iovs[0].iov_base;
+@@ -1092,11 +1105,6 @@ static int io_recv_buf_select(struct io_kiocb *req, struct io_async_msghdr *kmsg
+ 		}
+ 		iov_iter_init(&kmsg->msg.msg_iter, ITER_DEST, arg.iovs, ret,
+ 				arg.out_len);
+-		if (arg.iovs != &kmsg->fast_iov && arg.iovs != kmsg->vec.iovec) {
+-			kmsg->vec.nr = ret;
+-			kmsg->vec.iovec = arg.iovs;
+-			req->flags |= REQ_F_NEED_CLEANUP;
+-		}
+ 	} else {
+ 		void __user *buf;
+ 
+@@ -1284,7 +1292,7 @@ int io_send_zc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 	int ret;
+ 
+ 	zc->done_io = 0;
+-	zc->retry = false;
++	zc->retry_flags = 0;
+ 
+ 	if (unlikely(READ_ONCE(sqe->__pad2[0]) || READ_ONCE(sqe->addr3)))
+ 		return -EINVAL;
+diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
+index 794d4ae6f0bc8d..7409af671c9ee2 100644
+--- a/io_uring/rsrc.c
++++ b/io_uring/rsrc.c
+@@ -80,10 +80,21 @@ static int io_account_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
+ 	return 0;
+ }
+ 
+-int io_buffer_validate(struct iovec *iov)
++int io_validate_user_buf_range(u64 uaddr, u64 ulen)
+ {
+-	unsigned long tmp, acct_len = iov->iov_len + (PAGE_SIZE - 1);
++	unsigned long tmp, base = (unsigned long)uaddr;
++	unsigned long acct_len = (unsigned long)PAGE_ALIGN(ulen);
+ 
++	/* arbitrary limit, but we need something */
++	if (ulen > SZ_1G || !ulen)
++		return -EFAULT;
++	if (check_add_overflow(base, acct_len, &tmp))
++		return -EOVERFLOW;
++	return 0;
++}
++
++static int io_buffer_validate(struct iovec *iov)
++{
+ 	/*
+ 	 * Don't impose further limits on the size and buffer
+ 	 * constraints here, we'll -EINVAL later when IO is
+@@ -91,17 +102,9 @@ int io_buffer_validate(struct iovec *iov)
+ 	 */
+ 	if (!iov->iov_base)
+ 		return iov->iov_len ? -EFAULT : 0;
+-	if (!iov->iov_len)
+-		return -EFAULT;
+-
+-	/* arbitrary limit, but we need something */
+-	if (iov->iov_len > SZ_1G)
+-		return -EFAULT;
+ 
+-	if (check_add_overflow((unsigned long)iov->iov_base, acct_len, &tmp))
+-		return -EOVERFLOW;
+-
+-	return 0;
++	return io_validate_user_buf_range((unsigned long)iov->iov_base,
++					  iov->iov_len);
+ }
+ 
+ static void io_release_ubuf(void *priv)
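io_validate_user_buf_range() factors the old iovec checks into a form the zero-copy receive code further down can reuse: reject empty or over-1-GiB lengths, page-align the accounted length, and refuse ranges whose end wraps the address space. Roughly, in user-space form (illustrative constants):

    #include <stdint.h>
    #include <errno.h>

    #define DEMO_PAGE_SIZE 4096ULL
    #define DEMO_MAX_LEN   (1ULL << 30)    /* mirrors the SZ_1G cap */

    static int validate_user_buf_range(uint64_t uaddr, uint64_t ulen)
    {
        uint64_t acct_len = (ulen + DEMO_PAGE_SIZE - 1) & ~(DEMO_PAGE_SIZE - 1);
        uint64_t end;

        if (!ulen || ulen > DEMO_MAX_LEN)
            return -EFAULT;
        if (__builtin_add_overflow(uaddr, acct_len, &end))
            return -EOVERFLOW;             /* range wraps the address space */
        return 0;
    }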
+@@ -109,8 +112,11 @@ static void io_release_ubuf(void *priv)
+ 	struct io_mapped_ubuf *imu = priv;
+ 	unsigned int i;
+ 
+-	for (i = 0; i < imu->nr_bvecs; i++)
+-		unpin_user_page(imu->bvec[i].bv_page);
++	for (i = 0; i < imu->nr_bvecs; i++) {
++		struct folio *folio = page_folio(imu->bvec[i].bv_page);
++
++		unpin_user_folio(folio, 1);
++	}
+ }
+ 
+ static struct io_mapped_ubuf *io_alloc_imu(struct io_ring_ctx *ctx,
+@@ -732,6 +738,7 @@ bool io_check_coalesce_buffer(struct page **page_array, int nr_pages,
+ 
+ 	data->nr_pages_mid = folio_nr_pages(folio);
+ 	data->folio_shift = folio_shift(folio);
++	data->first_folio_page_idx = folio_page_idx(folio, page_array[0]);
+ 
+ 	/*
+ 	 * Check if pages are contiguous inside a folio, and all folios have
+@@ -825,7 +832,11 @@ static struct io_rsrc_node *io_sqe_buffer_register(struct io_ring_ctx *ctx,
+ 	if (coalesced)
+ 		imu->folio_shift = data.folio_shift;
+ 	refcount_set(&imu->refs, 1);
+-	off = (unsigned long) iov->iov_base & ((1UL << imu->folio_shift) - 1);
++
++	off = (unsigned long)iov->iov_base & ~PAGE_MASK;
++	if (coalesced)
++		off += data.first_folio_page_idx << PAGE_SHIFT;
++
+ 	node->buf = imu;
+ 	ret = 0;
+ 
+@@ -841,8 +852,10 @@ static struct io_rsrc_node *io_sqe_buffer_register(struct io_ring_ctx *ctx,
+ 	if (ret) {
+ 		if (imu)
+ 			io_free_imu(ctx, imu);
+-		if (pages)
+-			unpin_user_pages(pages, nr_pages);
++		if (pages) {
++			for (i = 0; i < nr_pages; i++)
++				unpin_user_folio(page_folio(pages[i]), 1);
++		}
+ 		io_cache_free(&ctx->node_cache, node);
+ 		node = ERR_PTR(ret);
+ 	}
+@@ -1326,7 +1339,6 @@ static int io_vec_fill_bvec(int ddir, struct iov_iter *iter,
+ {
+ 	unsigned long folio_size = 1 << imu->folio_shift;
+ 	unsigned long folio_mask = folio_size - 1;
+-	u64 folio_addr = imu->ubuf & ~folio_mask;
+ 	struct bio_vec *res_bvec = vec->bvec;
+ 	size_t total_len = 0;
+ 	unsigned bvec_idx = 0;
+@@ -1348,8 +1360,13 @@ static int io_vec_fill_bvec(int ddir, struct iov_iter *iter,
+ 		if (unlikely(check_add_overflow(total_len, iov_len, &total_len)))
+ 			return -EOVERFLOW;
+ 
+-		/* by using folio address it also accounts for bvec offset */
+-		offset = buf_addr - folio_addr;
++		offset = buf_addr - imu->ubuf;
++		/*
++		 * Only the first bvec can have non zero bv_offset, account it
++		 * here and work with full folios below.
++		 */
++		offset += imu->bvec[0].bv_offset;
++
+ 		src_bvec = imu->bvec + (offset >> imu->folio_shift);
+ 		offset &= folio_mask;
+ 
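The reworked arithmetic indexes bvecs from the buffer's own start and folds the first bvec's in-folio offset back in, rather than assuming the registered buffer begins on a folio boundary; with folio coalescing the first page may sit anywhere inside its folio. Worked through with illustrative numbers:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned folio_shift = 12;                  /* 4 KiB folios */
        uint64_t folio_mask  = (1ULL << folio_shift) - 1;

        uint64_t ubuf       = 3 * 4096 + 512;  /* buffer starts 512 B into a folio */
        uint64_t bv0_offset = ubuf & folio_mask;    /* 512 */
        uint64_t buf_addr   = ubuf + 5000;     /* I/O starts 5000 B into the buffer */

        /* byte offset within the buffer, shifted by where the first
         * bvec begins inside its folio */
        uint64_t offset = (buf_addr - ubuf) + bv0_offset;

        printf("bvec index %llu, in-folio offset %llu\n",
               (unsigned long long)(offset >> folio_shift),
               (unsigned long long)(offset & folio_mask));
        return 0;
    }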
+diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
+index b52242852ff342..ba00ee6f0650e8 100644
+--- a/io_uring/rsrc.h
++++ b/io_uring/rsrc.h
+@@ -49,6 +49,7 @@ struct io_imu_folio_data {
+ 	unsigned int	nr_pages_mid;
+ 	unsigned int	folio_shift;
+ 	unsigned int	nr_folios;
++	unsigned long	first_folio_page_idx;
+ };
+ 
+ bool io_rsrc_cache_init(struct io_ring_ctx *ctx);
+@@ -83,7 +84,7 @@ int io_register_rsrc_update(struct io_ring_ctx *ctx, void __user *arg,
+ 			    unsigned size, unsigned type);
+ int io_register_rsrc(struct io_ring_ctx *ctx, void __user *arg,
+ 			unsigned int size, unsigned int type);
+-int io_buffer_validate(struct iovec *iov);
++int io_validate_user_buf_range(u64 uaddr, u64 ulen);
+ 
+ bool io_check_coalesce_buffer(struct page **page_array, int nr_pages,
+ 			      struct io_imu_folio_data *data);
+diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
+index fe86606b9f304d..a53058dd6b7a18 100644
+--- a/io_uring/zcrx.c
++++ b/io_uring/zcrx.c
+@@ -26,12 +26,61 @@
+ #include "zcrx.h"
+ #include "rsrc.h"
+ 
++#define IO_DMA_ATTR (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
++
+ static inline struct io_zcrx_ifq *io_pp_to_ifq(struct page_pool *pp)
+ {
+ 	return pp->mp_priv;
+ }
+ 
+-#define IO_DMA_ATTR (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
++static inline struct io_zcrx_area *io_zcrx_iov_to_area(const struct net_iov *niov)
++{
++	struct net_iov_area *owner = net_iov_owner(niov);
++
++	return container_of(owner, struct io_zcrx_area, nia);
++}
++
++static inline struct page *io_zcrx_iov_page(const struct net_iov *niov)
++{
++	struct io_zcrx_area *area = io_zcrx_iov_to_area(niov);
++
++	return area->mem.pages[net_iov_idx(niov)];
++}
++
++static void io_release_area_mem(struct io_zcrx_mem *mem)
++{
++	if (mem->pages) {
++		unpin_user_pages(mem->pages, mem->nr_folios);
++		kvfree(mem->pages);
++	}
++}
++
++static int io_import_area(struct io_zcrx_ifq *ifq,
++			  struct io_zcrx_mem *mem,
++			  struct io_uring_zcrx_area_reg *area_reg)
++{
++	struct page **pages;
++	int nr_pages;
++	int ret;
++
++	ret = io_validate_user_buf_range(area_reg->addr, area_reg->len);
++	if (ret)
++		return ret;
++	if (!area_reg->addr)
++		return -EFAULT;
++	if (area_reg->addr & ~PAGE_MASK || area_reg->len & ~PAGE_MASK)
++		return -EINVAL;
++
++	pages = io_pin_pages((unsigned long)area_reg->addr, area_reg->len,
++				   &nr_pages);
++	if (IS_ERR(pages))
++		return PTR_ERR(pages);
++
++	mem->pages = pages;
++	mem->nr_folios = nr_pages;
++	mem->size = area_reg->len;
++	return 0;
++}
+ 
+ static void __io_zcrx_unmap_area(struct io_zcrx_ifq *ifq,
+ 				 struct io_zcrx_area *area, int nr_mapped)
+@@ -70,8 +119,8 @@ static int io_zcrx_map_area(struct io_zcrx_ifq *ifq, struct io_zcrx_area *area)
+ 		struct net_iov *niov = &area->nia.niovs[i];
+ 		dma_addr_t dma;
+ 
+-		dma = dma_map_page_attrs(ifq->dev, area->pages[i], 0, PAGE_SIZE,
+-					 DMA_FROM_DEVICE, IO_DMA_ATTR);
++		dma = dma_map_page_attrs(ifq->dev, area->mem.pages[i], 0,
++					 PAGE_SIZE, DMA_FROM_DEVICE, IO_DMA_ATTR);
+ 		if (dma_mapping_error(ifq->dev, dma))
+ 			break;
+ 		if (net_mp_niov_set_dma_addr(niov, dma)) {
+@@ -118,13 +167,6 @@ struct io_zcrx_args {
+ 
+ static const struct memory_provider_ops io_uring_pp_zc_ops;
+ 
+-static inline struct io_zcrx_area *io_zcrx_iov_to_area(const struct net_iov *niov)
+-{
+-	struct net_iov_area *owner = net_iov_owner(niov);
+-
+-	return container_of(owner, struct io_zcrx_area, nia);
+-}
+-
+ static inline atomic_t *io_get_user_counter(struct net_iov *niov)
+ {
+ 	struct io_zcrx_area *area = io_zcrx_iov_to_area(niov);
+@@ -147,13 +189,6 @@ static void io_zcrx_get_niov_uref(struct net_iov *niov)
+ 	atomic_inc(io_get_user_counter(niov));
+ }
+ 
+-static inline struct page *io_zcrx_iov_page(const struct net_iov *niov)
+-{
+-	struct io_zcrx_area *area = io_zcrx_iov_to_area(niov);
+-
+-	return area->pages[net_iov_idx(niov)];
+-}
+-
+ static int io_allocate_rbuf_ring(struct io_zcrx_ifq *ifq,
+ 				 struct io_uring_zcrx_ifq_reg *reg,
+ 				 struct io_uring_region_desc *rd)
+@@ -187,15 +222,13 @@ static void io_free_rbuf_ring(struct io_zcrx_ifq *ifq)
+ 
+ static void io_zcrx_free_area(struct io_zcrx_area *area)
+ {
+-	io_zcrx_unmap_area(area->ifq, area);
++	if (area->ifq)
++		io_zcrx_unmap_area(area->ifq, area);
++	io_release_area_mem(&area->mem);
+ 
+ 	kvfree(area->freelist);
+ 	kvfree(area->nia.niovs);
+ 	kvfree(area->user_refs);
+-	if (area->pages) {
+-		unpin_user_pages(area->pages, area->nr_folios);
+-		kvfree(area->pages);
+-	}
+ 	kfree(area);
+ }
+ 
+@@ -204,37 +237,27 @@ static int io_zcrx_create_area(struct io_zcrx_ifq *ifq,
+ 			       struct io_uring_zcrx_area_reg *area_reg)
+ {
+ 	struct io_zcrx_area *area;
+-	int i, ret, nr_pages, nr_iovs;
+-	struct iovec iov;
++	unsigned nr_iovs;
++	int i, ret;
+ 
+ 	if (area_reg->flags || area_reg->rq_area_token)
+ 		return -EINVAL;
+ 	if (area_reg->__resv1 || area_reg->__resv2[0] || area_reg->__resv2[1])
+ 		return -EINVAL;
+-	if (area_reg->addr & ~PAGE_MASK || area_reg->len & ~PAGE_MASK)
+-		return -EINVAL;
+-
+-	iov.iov_base = u64_to_user_ptr(area_reg->addr);
+-	iov.iov_len = area_reg->len;
+-	ret = io_buffer_validate(&iov);
+-	if (ret)
+-		return ret;
+ 
+ 	ret = -ENOMEM;
+ 	area = kzalloc(sizeof(*area), GFP_KERNEL);
+ 	if (!area)
+ 		goto err;
+ 
+-	area->pages = io_pin_pages((unsigned long)area_reg->addr, area_reg->len,
+-				   &nr_pages);
+-	if (IS_ERR(area->pages)) {
+-		ret = PTR_ERR(area->pages);
+-		area->pages = NULL;
++	ret = io_import_area(ifq, &area->mem, area_reg);
++	if (ret)
+ 		goto err;
+-	}
+-	area->nr_folios = nr_iovs = nr_pages;
++
++	nr_iovs = area->mem.size >> PAGE_SHIFT;
+ 	area->nia.num_niovs = nr_iovs;
+ 
++	ret = -ENOMEM;
+ 	area->nia.niovs = kvmalloc_array(nr_iovs, sizeof(area->nia.niovs[0]),
+ 					 GFP_KERNEL | __GFP_ZERO);
+ 	if (!area->nia.niovs)
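io_import_area() above front-loads the user-range checks before pinning
anything: a generic range validation, a non-NULL base, and page alignment
of both base and length. A tiny standalone validator of the same shape,
with PAGE_SIZE fixed at 4 KiB for the demo and the overflow check merely
standing in for io_validate_user_buf_range(), whose body is not shown in
this hunk:

#include <stdint.h>
#include <stdio.h>

#define DEMO_PAGE_SIZE 4096ULL
#define DEMO_PAGE_MASK (~(DEMO_PAGE_SIZE - 1))

static int validate_area(uint64_t addr, uint64_t len)
{
	uint64_t end;

	if (__builtin_add_overflow(addr, len, &end))
		return -75;				/* -EOVERFLOW */
	if (!addr)
		return -14;				/* -EFAULT */
	if ((addr & ~DEMO_PAGE_MASK) || (len & ~DEMO_PAGE_MASK))
		return -22;				/* -EINVAL */
	return 0;
}

int main(void)
{
	printf("%d\n", validate_area(0x10000, 0x4000));	/* 0: accepted */
	printf("%d\n", validate_area(0x10001, 0x4000));	/* -22: misaligned */
	printf("%d\n", validate_area(0, 0x1000));	/* -14: NULL base */
	return 0;
}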
+diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h
+index f2bc811f022c67..64796c90851e10 100644
+--- a/io_uring/zcrx.h
++++ b/io_uring/zcrx.h
+@@ -7,6 +7,13 @@
+ #include <net/page_pool/types.h>
+ #include <net/net_trackers.h>
+ 
++struct io_zcrx_mem {
++	unsigned long			size;
++
++	struct page			**pages;
++	unsigned long			nr_folios;
++};
++
+ struct io_zcrx_area {
+ 	struct net_iov_area	nia;
+ 	struct io_zcrx_ifq	*ifq;
+@@ -14,13 +21,13 @@ struct io_zcrx_area {
+ 
+ 	bool			is_mapped;
+ 	u16			area_id;
+-	struct page		**pages;
+-	unsigned long		nr_folios;
+ 
+ 	/* freelist */
+ 	spinlock_t		freelist_lock ____cacheline_aligned_in_smp;
+ 	u32			free_count;
+ 	u32			*freelist;
++
++	struct io_zcrx_mem	mem;
+ };
+ 
+ struct io_zcrx_ifq {
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index afaf49e5ecb976..86ce43fa366931 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -4074,12 +4074,12 @@ void scx_group_set_weight(struct task_group *tg, unsigned long weight)
+ {
+ 	percpu_down_read(&scx_cgroup_rwsem);
+ 
+-	if (scx_cgroup_enabled && tg->scx_weight != weight) {
+-		if (SCX_HAS_OP(cgroup_set_weight))
+-			SCX_CALL_OP(SCX_KF_UNLOCKED, cgroup_set_weight, NULL,
+-				    tg_cgrp(tg), weight);
+-		tg->scx_weight = weight;
+-	}
++	if (scx_cgroup_enabled && SCX_HAS_OP(cgroup_set_weight) &&
++	    tg->scx_weight != weight)
++		SCX_CALL_OP(SCX_KF_UNLOCKED, cgroup_set_weight, NULL,
++			    tg_cgrp(tg), weight);
++
++	tg->scx_weight = weight;
+ 
+ 	percpu_up_read(&scx_cgroup_rwsem);
+ }
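The scx_group_set_weight() change decouples caching from notification:
tg->scx_weight is now recorded unconditionally, while the BPF callback
fires only when one is registered and the value actually changed, so the
cached weight no longer goes stale when the op is absent. A sketch of that
shape, with invented names:

#include <stdio.h>

struct group {
	unsigned long weight;
	void (*set_weight_op)(unsigned long);	/* may be NULL */
};

static void notify(unsigned long w)
{
	printf("notified: %lu\n", w);
}

static void group_set_weight(struct group *g, unsigned long weight)
{
	if (g->set_weight_op && g->weight != weight)
		g->set_weight_op(weight);
	g->weight = weight;	/* kept in sync even without a callback */
}

int main(void)
{
	struct group g = { 100, NULL };

	group_set_weight(&g, 200);	/* silently cached */
	g.set_weight_op = notify;
	group_set_weight(&g, 300);	/* now also notifies */
	printf("weight=%lu\n", g.weight);
	return 0;
}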
+diff --git a/lib/group_cpus.c b/lib/group_cpus.c
+index ee272c4cefcc13..18d43a406114b9 100644
+--- a/lib/group_cpus.c
++++ b/lib/group_cpus.c
+@@ -352,6 +352,9 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
+ 	int ret = -ENOMEM;
+ 	struct cpumask *masks = NULL;
+ 
++	if (numgrps == 0)
++		return NULL;
++
+ 	if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
+ 		return NULL;
+ 
+@@ -426,8 +429,12 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
+ #else /* CONFIG_SMP */
+ struct cpumask *group_cpus_evenly(unsigned int numgrps)
+ {
+-	struct cpumask *masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL);
++	struct cpumask *masks;
+ 
++	if (numgrps == 0)
++		return NULL;
++
++	masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL);
+ 	if (!masks)
+ 		return NULL;
+ 
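Both group_cpus_evenly() variants now reject numgrps == 0 before touching
the allocator; in the kernel, kcalloc(0, ...) returns the non-NULL
ZERO_SIZE_PTR, so a NULL check alone would not catch the empty case. A
userspace analogue (calloc(0, ...) is likewise allowed to return a
non-NULL pointer that must never be dereferenced):

#include <stdio.h>
#include <stdlib.h>

struct mask { unsigned long bits[4]; };

static struct mask *alloc_groups(unsigned int numgrps)
{
	if (numgrps == 0)	/* mirror the added guard */
		return NULL;
	return calloc(numgrps, sizeof(struct mask));
}

int main(void)
{
	struct mask *m = alloc_groups(0);

	printf("zero groups -> %p (always NULL, never a 0-byte block)\n",
	       (void *)m);
	free(m);		/* free(NULL) is a no-op */
	return 0;
}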
+diff --git a/lib/maple_tree.c b/lib/maple_tree.c
+index d0bea23fa4bc9f..3ecb1c390a6ac7 100644
+--- a/lib/maple_tree.c
++++ b/lib/maple_tree.c
+@@ -5496,8 +5496,9 @@ int mas_preallocate(struct ma_state *mas, void *entry, gfp_t gfp)
+ 	mas->store_type = mas_wr_store_type(&wr_mas);
+ 	request = mas_prealloc_calc(mas, entry);
+ 	if (!request)
+-		return ret;
++		goto set_flag;
+ 
++	mas->mas_flags &= ~MA_STATE_PREALLOC;
+ 	mas_node_count_gfp(mas, request, gfp);
+ 	if (mas_is_err(mas)) {
+ 		mas_set_alloc_req(mas, 0);
+@@ -5507,6 +5508,7 @@ int mas_preallocate(struct ma_state *mas, void *entry, gfp_t gfp)
+ 		return ret;
+ 	}
+ 
++set_flag:
+ 	mas->mas_flags |= MA_STATE_PREALLOC;
+ 	return ret;
+ }
+diff --git a/mm/damon/sysfs-schemes.c b/mm/damon/sysfs-schemes.c
+index 23b562df0839f2..08d3e21ec878e8 100644
+--- a/mm/damon/sysfs-schemes.c
++++ b/mm/damon/sysfs-schemes.c
+@@ -471,6 +471,7 @@ static ssize_t memcg_path_store(struct kobject *kobj,
+ 		return -ENOMEM;
+ 
+ 	strscpy(path, buf, count + 1);
++	kfree(filter->memcg_path);
+ 	filter->memcg_path = path;
+ 	return count;
+ }
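The one-line DAMON fix frees the previously stored memcg_path before
installing the new one, closing a leak when the sysfs file is written more
than once. The general free-before-overwrite pattern, as an illustrative
userspace setter (not the DAMON API):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct filter { char *memcg_path; };

static int set_path(struct filter *f, const char *buf)
{
	char *path = strdup(buf);

	if (!path)
		return -12;		/* -ENOMEM */
	free(f->memcg_path);		/* drop the old value; may be NULL */
	f->memcg_path = path;
	return 0;
}

int main(void)
{
	struct filter f = { NULL };

	set_path(&f, "/workload-a");
	set_path(&f, "/workload-b");	/* no leak on the second store */
	puts(f.memcg_path);
	free(f.memcg_path);
	return 0;
}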
+diff --git a/mm/gup.c b/mm/gup.c
+index 84461d384ae2be..696fe16e2d51a6 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -2320,13 +2320,13 @@ static void pofs_unpin(struct pages_or_folios *pofs)
+ /*
+  * Returns the number of collected folios. Return value is always >= 0.
+  */
+-static void collect_longterm_unpinnable_folios(
++static unsigned long collect_longterm_unpinnable_folios(
+ 		struct list_head *movable_folio_list,
+ 		struct pages_or_folios *pofs)
+ {
++	unsigned long i, collected = 0;
+ 	struct folio *prev_folio = NULL;
+ 	bool drain_allow = true;
+-	unsigned long i;
+ 
+ 	for (i = 0; i < pofs->nr_entries; i++) {
+ 		struct folio *folio = pofs_get_folio(pofs, i);
+@@ -2338,6 +2338,8 @@ static void collect_longterm_unpinnable_folios(
+ 		if (folio_is_longterm_pinnable(folio))
+ 			continue;
+ 
++		collected++;
++
+ 		if (folio_is_device_coherent(folio))
+ 			continue;
+ 
+@@ -2359,6 +2361,8 @@ static void collect_longterm_unpinnable_folios(
+ 				    NR_ISOLATED_ANON + folio_is_file_lru(folio),
+ 				    folio_nr_pages(folio));
+ 	}
++
++	return collected;
+ }
+ 
+ /*
+@@ -2435,9 +2439,11 @@ static long
+ check_and_migrate_movable_pages_or_folios(struct pages_or_folios *pofs)
+ {
+ 	LIST_HEAD(movable_folio_list);
++	unsigned long collected;
+ 
+-	collect_longterm_unpinnable_folios(&movable_folio_list, pofs);
+-	if (list_empty(&movable_folio_list))
++	collected = collect_longterm_unpinnable_folios(&movable_folio_list,
++						       pofs);
++	if (!collected)
+ 		return 0;
+ 
+ 	return migrate_longterm_unpinnable_folios(&movable_folio_list, pofs);
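Returning a count instead of re-testing list_empty() matters because
device-coherent folios are counted as collected but handled in place and
never added to the movable list, so an empty list does not prove there was
nothing to migrate. A toy model of that divergence:

#include <stdio.h>

int main(void)
{
	/* 0 = pinnable, 1 = unpinnable, 2 = unpinnable, handled off-list */
	int folios[5] = { 0, 2, 0, 2, 0 };
	unsigned long collected = 0, listed = 0;
	int i;

	for (i = 0; i < 5; i++) {
		if (!folios[i])
			continue;	/* long-term pinnable, skip */
		collected++;
		if (folios[i] == 2)
			continue;	/* migrated without listing */
		listed++;
	}
	/* collected=2, listed=0: a list_empty() check would claim success */
	printf("collected=%lu listed=%lu\n", collected, listed);
	return 0;
}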
+diff --git a/mm/memory.c b/mm/memory.c
+index 49199410805cd0..2c7d9bb28e88ec 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -4224,26 +4224,6 @@ static struct folio *__alloc_swap_folio(struct vm_fault *vmf)
+ }
+ 
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+-static inline int non_swapcache_batch(swp_entry_t entry, int max_nr)
+-{
+-	struct swap_info_struct *si = swp_swap_info(entry);
+-	pgoff_t offset = swp_offset(entry);
+-	int i;
+-
+-	/*
+-	 * While allocating a large folio and doing swap_read_folio, which is
+-	 * the case the being faulted pte doesn't have swapcache. We need to
+-	 * ensure all PTEs have no cache as well, otherwise, we might go to
+-	 * swap devices while the content is in swapcache.
+-	 */
+-	for (i = 0; i < max_nr; i++) {
+-		if ((si->swap_map[offset + i] & SWAP_HAS_CACHE))
+-			return i;
+-	}
+-
+-	return i;
+-}
+-
+ /*
+  * Check if the PTEs within a range are contiguous swap entries
+  * and have consistent swapcache, zeromap.
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 99327c30507c44..12882a39759b44 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2262,6 +2262,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
+ 	folio = swap_cache_get_folio(swap, NULL, 0);
+ 	order = xa_get_order(&mapping->i_pages, index);
+ 	if (!folio) {
++		int nr_pages = 1 << order;
+ 		bool fallback_order0 = false;
+ 
+ 		/* Or update major stats only when swapin succeeds?? */
+@@ -2275,9 +2276,12 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
+ 		 * If uffd is active for the vma, we need per-page fault
+ 		 * fidelity to maintain the uffd semantics, then fallback
+ 		 * to swapin order-0 folio, as well as for zswap case.
++		 * Any existing sub folio in the swap cache also blocks
++		 * mTHP swapin.
+ 		 */
+ 		if (order > 0 && ((vma && unlikely(userfaultfd_armed(vma))) ||
+-				  !zswap_never_enabled()))
++				  !zswap_never_enabled() ||
++				  non_swapcache_batch(swap, nr_pages) != nr_pages))
+ 			fallback_order0 = true;
+ 
+ 		/* Skip swapcache for synchronous device. */
+diff --git a/mm/swap.h b/mm/swap.h
+index 6f4a3f927edb2d..ca4d7abfafabc9 100644
+--- a/mm/swap.h
++++ b/mm/swap.h
+@@ -106,6 +106,25 @@ static inline int swap_zeromap_batch(swp_entry_t entry, int max_nr,
+ 		return find_next_bit(sis->zeromap, end, start) - start;
+ }
+ 
++static inline int non_swapcache_batch(swp_entry_t entry, int max_nr)
++{
++	struct swap_info_struct *si = swp_swap_info(entry);
++	pgoff_t offset = swp_offset(entry);
++	int i;
++
++	/*
++	 * While allocating a large folio and doing mTHP swapin, we need to
++	 * ensure all entries are not cached, otherwise, the mTHP folio will
++	 * be in conflict with the folio in swap cache.
++	 */
++	for (i = 0; i < max_nr; i++) {
++		if ((si->swap_map[offset + i] & SWAP_HAS_CACHE))
++			return i;
++	}
++
++	return i;
++}
++
+ #else /* CONFIG_SWAP */
+ struct swap_iocb;
+ static inline void swap_read_folio(struct folio *folio, struct swap_iocb **plug)
+@@ -199,6 +218,10 @@ static inline int swap_zeromap_batch(swp_entry_t entry, int max_nr,
+ 	return 0;
+ }
+ 
++static inline int non_swapcache_batch(swp_entry_t entry, int max_nr)
++{
++	return 0;
++}
+ #endif /* CONFIG_SWAP */
+ 
+ #endif /* _MM_SWAP_H */
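non_swapcache_batch() reports how many consecutive swap entries, starting
at the given offset, have no page in the swap cache, stopping at the first
cached one. A self-contained model of that scan; the flag value below is
assumed purely for the demo:

#include <stdio.h>

#define DEMO_SWAP_HAS_CACHE 0x40	/* assumed flag value */

static int demo_non_swapcache_batch(const unsigned char *swap_map,
				    int offset, int max_nr)
{
	int i;

	for (i = 0; i < max_nr; i++) {
		if (swap_map[offset + i] & DEMO_SWAP_HAS_CACHE)
			return i;	/* stop at the first cached entry */
	}
	return i;			/* all max_nr entries are uncached */
}

int main(void)
{
	unsigned char map[8] = { 0, 0, 0, DEMO_SWAP_HAS_CACHE, 0, 0, 0, 0 };

	/* three uncached entries before the cached one at index 3 */
	printf("batch = %d\n", demo_non_swapcache_batch(map, 0, 8));
	return 0;
}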
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index e0db855c89b41a..416c573ed36316 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -1084,8 +1084,18 @@ static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
+ 			 pte_t orig_dst_pte, pte_t orig_src_pte,
+ 			 pmd_t *dst_pmd, pmd_t dst_pmdval,
+ 			 spinlock_t *dst_ptl, spinlock_t *src_ptl,
+-			 struct folio *src_folio)
++			 struct folio *src_folio,
++			 struct swap_info_struct *si, swp_entry_t entry)
+ {
++	/*
++	 * Check if the folio still belongs to the target swap entry after
++	 * acquiring the lock. Folio can be freed in the swap cache while
++	 * not locked.
++	 */
++	if (src_folio && unlikely(!folio_test_swapcache(src_folio) ||
++				  entry.val != src_folio->swap.val))
++		return -EAGAIN;
++
+ 	double_pt_lock(dst_ptl, src_ptl);
+ 
+ 	if (!is_pte_pages_stable(dst_pte, src_pte, orig_dst_pte, orig_src_pte,
+@@ -1102,6 +1112,25 @@ static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
+ 	if (src_folio) {
+ 		folio_move_anon_rmap(src_folio, dst_vma);
+ 		src_folio->index = linear_page_index(dst_vma, dst_addr);
++	} else {
++		/*
++		 * Check if the swap entry is cached after acquiring the src_pte
++		 * lock. Otherwise, we might miss a newly loaded swap cache folio.
++		 *
++		 * Check swap_map directly to minimize overhead, READ_ONCE is sufficient.
++		 * We are trying to catch newly added swap cache, the only possible case is
++		 * when a folio is swapped in and out again staying in swap cache, using the
++		 * same entry before the PTE check above. The PTL is acquired and released
++		 * twice, each time after updating the swap_map's flag. So holding
++		 * the PTL here ensures we see the updated value. False positive is possible,
++		 * e.g. SWP_SYNCHRONOUS_IO swapin may set the flag without touching the
++		 * cache, or during the tiny synchronization window between swap cache and
++		 * swap_map, but it will be gone very quickly, worst result is retry jitters.
++		 */
++		if (READ_ONCE(si->swap_map[swp_offset(entry)]) & SWAP_HAS_CACHE) {
++			double_pt_unlock(dst_ptl, src_ptl);
++			return -EAGAIN;
++		}
+ 	}
+ 
+ 	orig_src_pte = ptep_get_and_clear(mm, src_addr, src_pte);
+@@ -1412,7 +1441,7 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
+ 		}
+ 		err = move_swap_pte(mm, dst_vma, dst_addr, src_addr, dst_pte, src_pte,
+ 				orig_dst_pte, orig_src_pte, dst_pmd, dst_pmdval,
+-				dst_ptl, src_ptl, src_folio);
++				dst_ptl, src_ptl, src_folio, si, entry);
+ 	}
+ 
+ out:
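move_swap_pte() now revalidates its pre-lock snapshot once the page-table
locks are held: both the folio-to-entry binding and the swap_map cache bit
may have changed in between, and a stale snapshot means returning -EAGAIN
so the caller retries. A pthread sketch of the lock-then-recheck pattern,
names invented for the demo:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long entry_val = 42;	/* protected by lock */

/* returns 0 on success, -11 (-EAGAIN) if the snapshot went stale */
static int move_entry(unsigned long snapshot)
{
	pthread_mutex_lock(&lock);
	if (entry_val != snapshot) {	/* recheck under the lock */
		pthread_mutex_unlock(&lock);
		return -11;
	}
	entry_val = 0;			/* safe: still what we observed */
	pthread_mutex_unlock(&lock);
	return 0;
}

int main(void)
{
	unsigned long snapshot = entry_val;	/* read without the lock */

	/* another thread could change entry_val at this point */
	printf("move:  %d\n", move_entry(snapshot));
	printf("retry: %d\n", move_entry(snapshot));	/* now stale */
	return 0;
}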
+diff --git a/net/atm/clip.c b/net/atm/clip.c
+index 61b5b700817de5..b234dc3bcb0d4a 100644
+--- a/net/atm/clip.c
++++ b/net/atm/clip.c
+@@ -193,12 +193,6 @@ static void clip_push(struct atm_vcc *vcc, struct sk_buff *skb)
+ 
+ 	pr_debug("\n");
+ 
+-	if (!clip_devs) {
+-		atm_return(vcc, skb->truesize);
+-		kfree_skb(skb);
+-		return;
+-	}
+-
+ 	if (!skb) {
+ 		pr_debug("removing VCC %p\n", clip_vcc);
+ 		if (clip_vcc->entry)
+@@ -208,6 +202,11 @@ static void clip_push(struct atm_vcc *vcc, struct sk_buff *skb)
+ 		return;
+ 	}
+ 	atm_return(vcc, skb->truesize);
++	if (!clip_devs) {
++		kfree_skb(skb);
++		return;
++	}
++
+ 	skb->dev = clip_vcc->entry ? clip_vcc->entry->neigh->dev : clip_devs;
+ 	/* clip_vcc->entry == NULL if we don't have an IP address yet */
+ 	if (!skb->dev) {
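The clip_push() reorder fixes an ordering bug: the old !clip_devs branch
read skb->truesize before the !skb check ran, so the NULL skb used to
signal VCC teardown could be dereferenced. Accounting is now returned
once, after the sentinel check, whether the packet is delivered or
dropped. A minimal model of classifying the sentinel before touching any
field:

#include <stddef.h>
#include <stdio.h>

struct pkt { unsigned int truesize; };

static void push(struct pkt *p, int have_devs)
{
	if (!p) {		/* sentinel: handle before any field access */
		puts("teardown signal");
		return;
	}
	printf("return %u bytes to accounting\n", p->truesize);
	if (!have_devs) {	/* only now is dropping safe */
		puts("dropped: no devices");
		return;
	}
	puts("delivered");
}

int main(void)
{
	struct pkt p = { 1500 };

	push(NULL, 0);		/* the old ordering would crash here */
	push(&p, 0);
	push(&p, 1);
	return 0;
}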
+diff --git a/net/atm/resources.c b/net/atm/resources.c
+index 995d29e7fb138c..b19d851e1f4439 100644
+--- a/net/atm/resources.c
++++ b/net/atm/resources.c
+@@ -146,11 +146,10 @@ void atm_dev_deregister(struct atm_dev *dev)
+ 	 */
+ 	mutex_lock(&atm_dev_mutex);
+ 	list_del(&dev->dev_list);
+-	mutex_unlock(&atm_dev_mutex);
+-
+ 	atm_dev_release_vccs(dev);
+ 	atm_unregister_sysfs(dev);
+ 	atm_proc_dev_deregister(dev);
++	mutex_unlock(&atm_dev_mutex);
+ 
+ 	atm_dev_put(dev);
+ }
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index af30a420bab75a..abff4690cb88ff 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -64,7 +64,7 @@ static DEFINE_IDA(hci_index_ida);
+ 
+ /* Get HCI device by index.
+  * Device is held on return. */
+-struct hci_dev *hci_dev_get(int index)
++static struct hci_dev *__hci_dev_get(int index, int *srcu_index)
+ {
+ 	struct hci_dev *hdev = NULL, *d;
+ 
+@@ -77,6 +77,8 @@ struct hci_dev *hci_dev_get(int index)
+ 	list_for_each_entry(d, &hci_dev_list, list) {
+ 		if (d->id == index) {
+ 			hdev = hci_dev_hold(d);
++			if (srcu_index)
++				*srcu_index = srcu_read_lock(&d->srcu);
+ 			break;
+ 		}
+ 	}
+@@ -84,6 +86,22 @@ struct hci_dev *hci_dev_get(int index)
+ 	return hdev;
+ }
+ 
++struct hci_dev *hci_dev_get(int index)
++{
++	return __hci_dev_get(index, NULL);
++}
++
++static struct hci_dev *hci_dev_get_srcu(int index, int *srcu_index)
++{
++	return __hci_dev_get(index, srcu_index);
++}
++
++static void hci_dev_put_srcu(struct hci_dev *hdev, int srcu_index)
++{
++	srcu_read_unlock(&hdev->srcu, srcu_index);
++	hci_dev_put(hdev);
++}
++
+ /* ---- Inquiry support ---- */
+ 
+ bool hci_discovery_active(struct hci_dev *hdev)
+@@ -568,9 +586,9 @@ static int hci_dev_do_reset(struct hci_dev *hdev)
+ int hci_dev_reset(__u16 dev)
+ {
+ 	struct hci_dev *hdev;
+-	int err;
++	int err, srcu_index;
+ 
+-	hdev = hci_dev_get(dev);
++	hdev = hci_dev_get_srcu(dev, &srcu_index);
+ 	if (!hdev)
+ 		return -ENODEV;
+ 
+@@ -592,7 +610,7 @@ int hci_dev_reset(__u16 dev)
+ 	err = hci_dev_do_reset(hdev);
+ 
+ done:
+-	hci_dev_put(hdev);
++	hci_dev_put_srcu(hdev, srcu_index);
+ 	return err;
+ }
+ 
+@@ -2419,6 +2437,11 @@ struct hci_dev *hci_alloc_dev_priv(int sizeof_priv)
+ 	if (!hdev)
+ 		return NULL;
+ 
++	if (init_srcu_struct(&hdev->srcu)) {
++		kfree(hdev);
++		return NULL;
++	}
++
+ 	hdev->pkt_type  = (HCI_DM1 | HCI_DH1 | HCI_HV1);
+ 	hdev->esco_type = (ESCO_HV1);
+ 	hdev->link_mode = (HCI_LM_ACCEPT);
+@@ -2664,6 +2687,9 @@ void hci_unregister_dev(struct hci_dev *hdev)
+ 	list_del(&hdev->list);
+ 	write_unlock(&hci_dev_list_lock);
+ 
++	synchronize_srcu(&hdev->srcu);
++	cleanup_srcu_struct(&hdev->srcu);
++
+ 	disable_work_sync(&hdev->rx_work);
+ 	disable_work_sync(&hdev->cmd_work);
+ 	disable_work_sync(&hdev->tx_work);
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index a5bde5db58efcb..40daa38276f353 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -3415,7 +3415,7 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
+ 	struct l2cap_conf_rfc rfc = { .mode = L2CAP_MODE_BASIC };
+ 	struct l2cap_conf_efs efs;
+ 	u8 remote_efs = 0;
+-	u16 mtu = L2CAP_DEFAULT_MTU;
++	u16 mtu = 0;
+ 	u16 result = L2CAP_CONF_SUCCESS;
+ 	u16 size;
+ 
+@@ -3520,6 +3520,13 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
+ 		/* Configure output options and let the other side know
+ 		 * which ones we don't like. */
+ 
++		/* If MTU is not provided in configure request, use the most recently
++		 * explicitly or implicitly accepted value for the other direction,
++		 * or the default value.
++		 */
++		if (mtu == 0)
++			mtu = chan->imtu ? chan->imtu : L2CAP_DEFAULT_MTU;
++
+ 		if (mtu < L2CAP_DEFAULT_MIN_MTU)
+ 			result = L2CAP_CONF_UNACCEPT;
+ 		else {
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index 7e0b2362b9ee5f..d35a409b5e4aa4 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -2014,10 +2014,19 @@ void br_multicast_port_ctx_init(struct net_bridge_port *port,
+ 
+ void br_multicast_port_ctx_deinit(struct net_bridge_mcast_port *pmctx)
+ {
++	struct net_bridge *br = pmctx->port->br;
++	bool del = false;
++
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	timer_delete_sync(&pmctx->ip6_mc_router_timer);
+ #endif
+ 	timer_delete_sync(&pmctx->ip4_mc_router_timer);
++
++	spin_lock_bh(&br->multicast_lock);
++	del |= br_ip6_multicast_rport_del(pmctx);
++	del |= br_ip4_multicast_rport_del(pmctx);
++	br_multicast_rport_del_notify(pmctx, del);
++	spin_unlock_bh(&br->multicast_lock);
+ }
+ 
+ int br_multicast_add_port(struct net_bridge_port *port)
+diff --git a/net/core/netpoll.c b/net/core/netpoll.c
+index 4ddb7490df4b88..6ad84d4a2b464b 100644
+--- a/net/core/netpoll.c
++++ b/net/core/netpoll.c
+@@ -432,6 +432,7 @@ int netpoll_send_udp(struct netpoll *np, const char *msg, int len)
+ 	udph->dest = htons(np->remote_port);
+ 	udph->len = htons(udp_len);
+ 
++	udph->check = 0;
+ 	if (np->ipv6) {
+ 		udph->check = csum_ipv6_magic(&np->local_ip.in6,
+ 					      &np->remote_ip.in6,
+@@ -460,7 +461,6 @@ int netpoll_send_udp(struct netpoll *np, const char *msg, int len)
+ 		skb_reset_mac_header(skb);
+ 		skb->protocol = eth->h_proto = htons(ETH_P_IPV6);
+ 	} else {
+-		udph->check = 0;
+ 		udph->check = csum_tcpudp_magic(np->local_ip.ip,
+ 						np->remote_ip.ip,
+ 						udp_len, IPPROTO_UDP,
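Hoisting udph->check = 0 above the branch matters because a UDP checksum
must be computed with the checksum field itself zeroed, and previously
only the IPv4 path cleared it. A standalone RFC 1071 ones'-complement sum
over a fake 8-byte UDP header; the real computation also covers a
pseudo-header, omitted here for brevity:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint16_t csum16(const uint8_t *p, size_t len)
{
	uint32_t sum = 0;

	while (len > 1) {
		sum += (uint32_t)p[0] << 8 | p[1];
		p += 2;
		len -= 2;
	}
	if (len)
		sum += (uint32_t)p[0] << 8;	/* pad odd trailing byte */
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}

int main(void)
{
	/* src port 12345, dst port 53, length 8, checksum field last */
	uint8_t udph[8] = { 0x30, 0x39, 0x00, 0x35, 0x00, 0x08, 0, 0 };
	uint16_t c;

	memset(&udph[6], 0, 2);	/* field must be zero while summing */
	c = csum16(udph, sizeof(udph));
	udph[6] = c >> 8;
	udph[7] = c & 0xff;
	printf("check = 0x%04x\n", c);
	return 0;
}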
+diff --git a/net/core/selftests.c b/net/core/selftests.c
+index 35f807ea995235..406faf8e5f3f9f 100644
+--- a/net/core/selftests.c
++++ b/net/core/selftests.c
+@@ -160,8 +160,9 @@ static struct sk_buff *net_test_get_skb(struct net_device *ndev,
+ 	skb->csum = 0;
+ 	skb->ip_summed = CHECKSUM_PARTIAL;
+ 	if (attr->tcp) {
+-		thdr->check = ~tcp_v4_check(skb->len, ihdr->saddr,
+-					    ihdr->daddr, 0);
++		int l4len = skb->len - skb_transport_offset(skb);
++
++		thdr->check = ~tcp_v4_check(l4len, ihdr->saddr, ihdr->daddr, 0);
+ 		skb->csum_start = skb_transport_header(skb) - skb->head;
+ 		skb->csum_offset = offsetof(struct tcphdr, check);
+ 	} else {
+diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c
+index c3bfac58151f63..3aaf5abf1acc13 100644
+--- a/net/mac80211/chan.c
++++ b/net/mac80211/chan.c
+@@ -2131,6 +2131,9 @@ void ieee80211_link_release_channel(struct ieee80211_link_data *link)
+ {
+ 	struct ieee80211_sub_if_data *sdata = link->sdata;
+ 
++	if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
++		return;
++
+ 	lockdep_assert_wiphy(sdata->local->hw.wiphy);
+ 
+ 	if (rcu_access_pointer(link->conf->chanctx_conf))
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index fb05f3cd37ec42..e0b44dbebe001b 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -1226,6 +1226,15 @@ struct ieee80211_sub_if_data *vif_to_sdata(struct ieee80211_vif *p)
+ 	if ((_link = wiphy_dereference((_local)->hw.wiphy,		\
+ 				       ___sdata->link[___link_id])))
+ 
++#define for_each_link_data(sdata, __link)					\
++	struct ieee80211_sub_if_data *__sdata = sdata;				\
++	for (int __link_id = 0;							\
++	     __link_id < ARRAY_SIZE((__sdata)->link); __link_id++)		\
++		if ((!(__sdata)->vif.valid_links ||				\
++		     (__sdata)->vif.valid_links & BIT(__link_id)) &&		\
++		    ((__link) = sdata_dereference((__sdata)->link[__link_id],	\
++						  (__sdata))))
++
+ static inline int
+ ieee80211_get_mbssid_beacon_len(struct cfg80211_mbssid_elems *elems,
+ 				struct cfg80211_rnr_elems *rnr_elems,
+@@ -2078,6 +2087,9 @@ static inline void ieee80211_vif_clear_links(struct ieee80211_sub_if_data *sdata
+ 	ieee80211_vif_set_links(sdata, 0, 0);
+ }
+ 
++void ieee80211_apvlan_link_setup(struct ieee80211_sub_if_data *sdata);
++void ieee80211_apvlan_link_clear(struct ieee80211_sub_if_data *sdata);
++
+ /* tx handling */
+ void ieee80211_clear_tx_pending(struct ieee80211_local *local);
+ void ieee80211_tx_pending(struct tasklet_struct *t);
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 969b3e2c496af5..7d93e5aa595b28 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -485,6 +485,9 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
+ 	case NL80211_IFTYPE_MONITOR:
+ 		list_del_rcu(&sdata->u.mntr.list);
+ 		break;
++	case NL80211_IFTYPE_AP_VLAN:
++		ieee80211_apvlan_link_clear(sdata);
++		break;
+ 	default:
+ 		break;
+ 	}
+@@ -1268,6 +1271,8 @@ int ieee80211_do_open(struct wireless_dev *wdev, bool coming_up)
+ 		sdata->crypto_tx_tailroom_needed_cnt +=
+ 			master->crypto_tx_tailroom_needed_cnt;
+ 
++		ieee80211_apvlan_link_setup(sdata);
++
+ 		break;
+ 		}
+ 	case NL80211_IFTYPE_AP:
+@@ -1324,7 +1329,12 @@ int ieee80211_do_open(struct wireless_dev *wdev, bool coming_up)
+ 	case NL80211_IFTYPE_AP_VLAN:
+ 		/* no need to tell driver, but set carrier and chanctx */
+ 		if (sdata->bss->active) {
+-			ieee80211_link_vlan_copy_chanctx(&sdata->deflink);
++			struct ieee80211_link_data *link;
++
++			for_each_link_data(sdata, link) {
++				ieee80211_link_vlan_copy_chanctx(link);
++			}
++
+ 			netif_carrier_on(dev);
+ 			ieee80211_set_vif_encap_ops(sdata);
+ 		} else {
+diff --git a/net/mac80211/link.c b/net/mac80211/link.c
+index 58a76bcd6ae686..4f7b7d0f64f24b 100644
+--- a/net/mac80211/link.c
++++ b/net/mac80211/link.c
+@@ -12,6 +12,71 @@
+ #include "key.h"
+ #include "debugfs_netdev.h"
+ 
++static void ieee80211_update_apvlan_links(struct ieee80211_sub_if_data *sdata)
++{
++	struct ieee80211_sub_if_data *vlan;
++	struct ieee80211_link_data *link;
++	u16 ap_bss_links = sdata->vif.valid_links;
++	u16 new_links, vlan_links;
++	unsigned long add;
++
++	list_for_each_entry(vlan, &sdata->u.ap.vlans, u.vlan.list) {
++		int link_id;
++
++		if (!vlan)
++			continue;
++
++		/* No support for 4addr with MLO yet */
++		if (vlan->wdev.use_4addr)
++			return;
++
++		vlan_links = vlan->vif.valid_links;
++
++		new_links = ap_bss_links;
++
++		add = new_links & ~vlan_links;
++		if (!add)
++			continue;
++
++		ieee80211_vif_set_links(vlan, add, 0);
++
++		for_each_set_bit(link_id, &add, IEEE80211_MLD_MAX_NUM_LINKS) {
++			link = sdata_dereference(vlan->link[link_id], vlan);
++			ieee80211_link_vlan_copy_chanctx(link);
++		}
++	}
++}
++
++void ieee80211_apvlan_link_setup(struct ieee80211_sub_if_data *sdata)
++{
++	struct ieee80211_sub_if_data *ap_bss = container_of(sdata->bss,
++					    struct ieee80211_sub_if_data, u.ap);
++	u16 new_links = ap_bss->vif.valid_links;
++	unsigned long add;
++	int link_id;
++
++	if (!ap_bss->vif.valid_links)
++		return;
++
++	add = new_links;
++	for_each_set_bit(link_id, &add, IEEE80211_MLD_MAX_NUM_LINKS) {
++		sdata->wdev.valid_links |= BIT(link_id);
++		ether_addr_copy(sdata->wdev.links[link_id].addr,
++				ap_bss->wdev.links[link_id].addr);
++	}
++
++	ieee80211_vif_set_links(sdata, new_links, 0);
++}
++
++void ieee80211_apvlan_link_clear(struct ieee80211_sub_if_data *sdata)
++{
++	if (!sdata->wdev.valid_links)
++		return;
++
++	sdata->wdev.valid_links = 0;
++	ieee80211_vif_clear_links(sdata);
++}
++
+ void ieee80211_link_setup(struct ieee80211_link_data *link)
+ {
+ 	if (link->sdata->vif.type == NL80211_IFTYPE_STATION)
+@@ -28,8 +93,16 @@ void ieee80211_link_init(struct ieee80211_sub_if_data *sdata,
+ 	if (link_id < 0)
+ 		link_id = 0;
+ 
+-	rcu_assign_pointer(sdata->vif.link_conf[link_id], link_conf);
+-	rcu_assign_pointer(sdata->link[link_id], link);
++	if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN) {
++		struct ieee80211_sub_if_data *ap_bss;
++		struct ieee80211_bss_conf *ap_bss_conf;
++
++		ap_bss = container_of(sdata->bss,
++				      struct ieee80211_sub_if_data, u.ap);
++		ap_bss_conf = sdata_dereference(ap_bss->vif.link_conf[link_id],
++						ap_bss);
++		memcpy(link_conf, ap_bss_conf, sizeof(*link_conf));
++	}
+ 
+ 	link->sdata = sdata;
+ 	link->link_id = link_id;
+@@ -54,6 +127,7 @@ void ieee80211_link_init(struct ieee80211_sub_if_data *sdata,
+ 	if (!deflink) {
+ 		switch (sdata->vif.type) {
+ 		case NL80211_IFTYPE_AP:
++		case NL80211_IFTYPE_AP_VLAN:
+ 			ether_addr_copy(link_conf->addr,
+ 					sdata->wdev.links[link_id].addr);
+ 			link_conf->bssid = link_conf->addr;
+@@ -68,6 +142,9 @@ void ieee80211_link_init(struct ieee80211_sub_if_data *sdata,
+ 
+ 		ieee80211_link_debugfs_add(link);
+ 	}
++
++	rcu_assign_pointer(sdata->vif.link_conf[link_id], link_conf);
++	rcu_assign_pointer(sdata->link[link_id], link);
+ }
+ 
+ void ieee80211_link_stop(struct ieee80211_link_data *link)
+@@ -177,6 +254,7 @@ static void ieee80211_set_vif_links_bitmaps(struct ieee80211_sub_if_data *sdata,
+ 
+ 	switch (sdata->vif.type) {
+ 	case NL80211_IFTYPE_AP:
++	case NL80211_IFTYPE_AP_VLAN:
+ 		/* in an AP all links are always active */
+ 		sdata->vif.active_links = valid_links;
+ 
+@@ -278,12 +356,16 @@ static int ieee80211_vif_update_links(struct ieee80211_sub_if_data *sdata,
+ 		ieee80211_set_vif_links_bitmaps(sdata, new_links, dormant_links);
+ 
+ 		/* tell the driver */
+-		ret = drv_change_vif_links(sdata->local, sdata,
+-					   old_links & old_active,
+-					   new_links & sdata->vif.active_links,
+-					   old);
++		if (sdata->vif.type != NL80211_IFTYPE_AP_VLAN)
++			ret = drv_change_vif_links(sdata->local, sdata,
++						   old_links & old_active,
++						   new_links & sdata->vif.active_links,
++						   old);
+ 		if (!new_links)
+ 			ieee80211_debugfs_recreate_netdev(sdata, false);
++
++		if (sdata->vif.type == NL80211_IFTYPE_AP)
++			ieee80211_update_apvlan_links(sdata);
+ 	}
+ 
+ 	if (ret) {
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index dec6e16b8c7d28..82256eddd16bd4 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -3899,7 +3899,7 @@ void ieee80211_recalc_dtim(struct ieee80211_local *local,
+ {
+ 	u64 tsf = drv_get_tsf(local, sdata);
+ 	u64 dtim_count = 0;
+-	u16 beacon_int = sdata->vif.bss_conf.beacon_int * 1024;
++	u32 beacon_int = sdata->vif.bss_conf.beacon_int * 1024;
+ 	u8 dtim_period = sdata->vif.bss_conf.dtim_period;
+ 	struct ps_data *ps;
+ 	u8 bcns_from_dtim;
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 6f75862d97820c..21426c3049d350 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -2771,8 +2771,13 @@ rpc_decode_header(struct rpc_task *task, struct xdr_stream *xdr)
+ 	case -EPROTONOSUPPORT:
+ 		goto out_err;
+ 	case -EACCES:
+-		/* Re-encode with a fresh cred */
+-		fallthrough;
++		/* possible RPCSEC_GSS out-of-sequence event (RFC2203),
++		 * reset recv state and keep waiting, don't retransmit
++		 */
++		task->tk_rqstp->rq_reply_bytes_recvd = 0;
++		task->tk_status = xprt_request_enqueue_receive(task);
++		task->tk_action = call_transmit_status;
++		return -EBADMSG;
+ 	default:
+ 		goto out_garbage;
+ 	}
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index f78a2492826f9c..52f2812d2fa5b2 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -654,6 +654,11 @@ static void unix_sock_destructor(struct sock *sk)
+ #endif
+ }
+ 
++static unsigned int unix_skb_len(const struct sk_buff *skb)
++{
++	return skb->len - UNIXCB(skb).consumed;
++}
++
+ static void unix_release_sock(struct sock *sk, int embrion)
+ {
+ 	struct unix_sock *u = unix_sk(sk);
+@@ -688,10 +693,16 @@ static void unix_release_sock(struct sock *sk, int embrion)
+ 
+ 	if (skpair != NULL) {
+ 		if (sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET) {
++			struct sk_buff *skb = skb_peek(&sk->sk_receive_queue);
++
++#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
++			if (skb && !unix_skb_len(skb))
++				skb = skb_peek_next(skb, &sk->sk_receive_queue);
++#endif
+ 			unix_state_lock(skpair);
+ 			/* No more writes */
+ 			WRITE_ONCE(skpair->sk_shutdown, SHUTDOWN_MASK);
+-			if (!skb_queue_empty_lockless(&sk->sk_receive_queue) || embrion)
++			if (skb || embrion)
+ 				WRITE_ONCE(skpair->sk_err, ECONNRESET);
+ 			unix_state_unlock(skpair);
+ 			skpair->sk_state_change(skpair);
+@@ -2578,11 +2589,6 @@ static long unix_stream_data_wait(struct sock *sk, long timeo,
+ 	return timeo;
+ }
+ 
+-static unsigned int unix_skb_len(const struct sk_buff *skb)
+-{
+-	return skb->len - UNIXCB(skb).consumed;
+-}
+-
+ struct unix_stream_read_state {
+ 	int (*recv_actor)(struct sk_buff *, int, int,
+ 			  struct unix_stream_read_state *);
+@@ -2597,11 +2603,11 @@ struct unix_stream_read_state {
+ #if IS_ENABLED(CONFIG_AF_UNIX_OOB)
+ static int unix_stream_recv_urg(struct unix_stream_read_state *state)
+ {
++	struct sk_buff *oob_skb, *read_skb = NULL;
+ 	struct socket *sock = state->socket;
+ 	struct sock *sk = sock->sk;
+ 	struct unix_sock *u = unix_sk(sk);
+ 	int chunk = 1;
+-	struct sk_buff *oob_skb;
+ 
+ 	mutex_lock(&u->iolock);
+ 	unix_state_lock(sk);
+@@ -2616,9 +2622,16 @@ static int unix_stream_recv_urg(struct unix_stream_read_state *state)
+ 
+ 	oob_skb = u->oob_skb;
+ 
+-	if (!(state->flags & MSG_PEEK))
++	if (!(state->flags & MSG_PEEK)) {
+ 		WRITE_ONCE(u->oob_skb, NULL);
+ 
++		if (oob_skb->prev != (struct sk_buff *)&sk->sk_receive_queue &&
++		    !unix_skb_len(oob_skb->prev)) {
++			read_skb = oob_skb->prev;
++			__skb_unlink(read_skb, &sk->sk_receive_queue);
++		}
++	}
++
+ 	spin_unlock(&sk->sk_receive_queue.lock);
+ 	unix_state_unlock(sk);
+ 
+@@ -2629,6 +2642,8 @@ static int unix_stream_recv_urg(struct unix_stream_read_state *state)
+ 
+ 	mutex_unlock(&u->iolock);
+ 
++	consume_skb(read_skb);
++
+ 	if (chunk < 0)
+ 		return -EFAULT;
+ 
+diff --git a/rust/Makefile b/rust/Makefile
+index 313a200112ce18..d62b58d0a55cc4 100644
+--- a/rust/Makefile
++++ b/rust/Makefile
+@@ -275,7 +275,7 @@ bindgen_skip_c_flags := -mno-fp-ret-in-387 -mpreferred-stack-boundary=% \
+ 	-fzero-call-used-regs=% -fno-stack-clash-protection \
+ 	-fno-inline-functions-called-once -fsanitize=bounds-strict \
+ 	-fstrict-flex-arrays=% -fmin-function-alignment=% \
+-	-fzero-init-padding-bits=% \
++	-fzero-init-padding-bits=% -mno-fdpic \
+ 	--param=% --param asan-%
+ 
+ # Derived from `scripts/Makefile.clang`.
+diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
+index ab37e1d35c70d5..3f66570b875635 100644
+--- a/rust/bindings/bindings_helper.h
++++ b/rust/bindings/bindings_helper.h
+@@ -10,6 +10,7 @@
+ #include <linux/blk-mq.h>
+ #include <linux/blk_types.h>
+ #include <linux/blkdev.h>
++#include <linux/completion.h>
+ #include <linux/cpumask.h>
+ #include <linux/cred.h>
+ #include <linux/device/faux.h>
+diff --git a/rust/helpers/completion.c b/rust/helpers/completion.c
+new file mode 100644
+index 00000000000000..b2443262a2aed3
+--- /dev/null
++++ b/rust/helpers/completion.c
+@@ -0,0 +1,8 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/completion.h>
++
++void rust_helper_init_completion(struct completion *x)
++{
++	init_completion(x);
++}
+diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
+index 1e7c84df725211..97cb759d92d4f1 100644
+--- a/rust/helpers/helpers.c
++++ b/rust/helpers/helpers.c
+@@ -11,6 +11,7 @@
+ #include "bug.c"
+ #include "build_assert.c"
+ #include "build_bug.c"
++#include "completion.c"
+ #include "cpumask.c"
+ #include "cred.c"
+ #include "device.c"
+diff --git a/rust/kernel/devres.rs b/rust/kernel/devres.rs
+index ddb1ce4a78d944..dc6ea014ee60e2 100644
+--- a/rust/kernel/devres.rs
++++ b/rust/kernel/devres.rs
+@@ -12,26 +12,28 @@
+     error::{Error, Result},
+     ffi::c_void,
+     prelude::*,
+-    revocable::Revocable,
+-    sync::Arc,
++    revocable::{Revocable, RevocableGuard},
++    sync::{rcu, Arc, Completion},
+     types::ARef,
+ };
+ 
+-use core::ops::Deref;
+-
+ #[pin_data]
+ struct DevresInner<T> {
+     dev: ARef<Device>,
+     callback: unsafe extern "C" fn(*mut c_void),
+     #[pin]
+     data: Revocable<T>,
++    #[pin]
++    revoke: Completion,
+ }
+ 
+ /// This abstraction is meant to be used by subsystems to containerize [`Device`] bound resources to
+ /// manage their lifetime.
+ ///
+ /// [`Device`] bound resources should be freed when either the resource goes out of scope or the
+-/// [`Device`] is unbound respectively, depending on what happens first.
++/// [`Device`] is unbound respectively, depending on what happens first. In any case, it is always
++/// guaranteed that revoking the device resource is completed before the corresponding [`Device`]
++/// is unbound.
+ ///
+ /// To achieve that [`Devres`] registers a devres callback on creation, which is called once the
+ /// [`Device`] is unbound, revoking access to the encapsulated resource (see also [`Revocable`]).
+@@ -105,6 +107,7 @@ fn new(dev: &Device, data: T, flags: Flags) -> Result<Arc<DevresInner<T>>> {
+                 dev: dev.into(),
+                 callback: Self::devres_callback,
+                 data <- Revocable::new(data),
++                revoke <- Completion::new(),
+             }),
+             flags,
+         )?;
+@@ -133,26 +136,28 @@ fn as_ptr(&self) -> *const Self {
+         self as _
+     }
+ 
+-    fn remove_action(this: &Arc<Self>) {
++    fn remove_action(this: &Arc<Self>) -> bool {
+         // SAFETY:
+         // - `self.inner.dev` is a valid `Device`,
+         // - the `action` and `data` pointers are the exact same ones as given to devm_add_action()
+         //   previously,
+         // - `self` is always valid, even if the action has been released already.
+-        let ret = unsafe {
++        let success = unsafe {
+             bindings::devm_remove_action_nowarn(
+                 this.dev.as_raw(),
+                 Some(this.callback),
+                 this.as_ptr() as _,
+             )
+-        };
++        } == 0;
+ 
+-        if ret == 0 {
++        if success {
+             // SAFETY: We leaked an `Arc` reference to devm_add_action() in `DevresInner::new`; if
+             // devm_remove_action_nowarn() was successful we can (and have to) claim back ownership
+             // of this reference.
+             let _ = unsafe { Arc::from_raw(this.as_ptr()) };
+         }
++
++        success
+     }
+ 
+     #[allow(clippy::missing_safety_doc)]
+@@ -164,7 +169,12 @@ fn remove_action(this: &Arc<Self>) {
+         //         `DevresInner::new`.
+         let inner = unsafe { Arc::from_raw(ptr) };
+ 
+-        inner.data.revoke();
++        if !inner.data.revoke() {
++            // If `revoke()` returns false, it means that `Devres::drop` already started revoking
++            // `inner.data` for us. Hence we have to wait until `Devres::drop()` signals that it
++            // completed revoking `inner.data`.
++            inner.revoke.wait_for_completion();
++        }
+     }
+ }
+ 
+@@ -184,18 +194,29 @@ pub fn new_foreign_owned(dev: &Device, data: T, flags: Flags) -> Result {
+ 
+         Ok(())
+     }
+-}
+ 
+-impl<T> Deref for Devres<T> {
+-    type Target = Revocable<T>;
++    /// [`Devres`] accessor for [`Revocable::try_access`].
++    pub fn try_access(&self) -> Option<RevocableGuard<'_, T>> {
++        self.0.data.try_access()
++    }
+ 
+-    fn deref(&self) -> &Self::Target {
+-        &self.0.data
++    /// [`Devres`] accessor for [`Revocable::try_access_with_guard`].
++    pub fn try_access_with_guard<'a>(&'a self, guard: &'a rcu::Guard) -> Option<&'a T> {
++        self.0.data.try_access_with_guard(guard)
+     }
+ }
+ 
+ impl<T> Drop for Devres<T> {
+     fn drop(&mut self) {
+-        DevresInner::remove_action(&self.0);
++        // SAFETY: When `drop` runs, it is guaranteed that nobody is accessing the revocable data
++        // anymore, hence it is safe not to wait for the grace period to finish.
++        if unsafe { self.0.data.revoke_nosync() } {
++            // We revoked `self.0.data` before the devres action did, hence try to remove it.
++            if !DevresInner::remove_action(&self.0) {
++                // We could not remove the devres action, which means that it now runs concurrently,
++                // hence signal that `self.0.data` has been revoked successfully.
++                self.0.revoke.complete_all();
++            }
++        }
+     }
+ }
+diff --git a/rust/kernel/revocable.rs b/rust/kernel/revocable.rs
+index 1e5a9d25c21b27..3f0fbee4acb5c4 100644
+--- a/rust/kernel/revocable.rs
++++ b/rust/kernel/revocable.rs
+@@ -126,8 +126,10 @@ pub fn try_access_with_guard<'a>(&'a self, _guard: &'a rcu::Guard) -> Option<&'a
+     /// # Safety
+     ///
+     /// Callers must ensure that there are no more concurrent users of the revocable object.
+-    unsafe fn revoke_internal<const SYNC: bool>(&self) {
+-        if self.is_available.swap(false, Ordering::Relaxed) {
++    unsafe fn revoke_internal<const SYNC: bool>(&self) -> bool {
++        let revoke = self.is_available.swap(false, Ordering::Relaxed);
++
++        if revoke {
+             if SYNC {
+                 // SAFETY: Just an FFI call, there are no further requirements.
+                 unsafe { bindings::synchronize_rcu() };
+@@ -137,6 +139,8 @@ unsafe fn revoke_internal<const SYNC: bool>(&self) {
+             // `compare_exchange` above that takes `is_available` from `true` to `false`.
+             unsafe { drop_in_place(self.data.get()) };
+         }
++
++        revoke
+     }
+ 
+     /// Revokes access to and drops the wrapped object.
+@@ -144,10 +148,13 @@ unsafe fn revoke_internal<const SYNC: bool>(&self) {
+     /// Access to the object is revoked immediately to new callers of [`Revocable::try_access`],
+     /// expecting that there are no concurrent users of the object.
+     ///
++    /// Returns `true` if `&self` has been revoked with this call, `false` if it was revoked
++    /// already.
++    ///
+     /// # Safety
+     ///
+     /// Callers must ensure that there are no more concurrent users of the revocable object.
+-    pub unsafe fn revoke_nosync(&self) {
++    pub unsafe fn revoke_nosync(&self) -> bool {
+         // SAFETY: By the safety requirement of this function, the caller ensures that nobody is
+         // accessing the data anymore and hence we don't have to wait for the grace period to
+         // finish.
+@@ -161,7 +168,10 @@ pub unsafe fn revoke_nosync(&self) {
+     /// If there are concurrent users of the object (i.e., ones that called
+     /// [`Revocable::try_access`] beforehand and still haven't dropped the returned guard), this
+     /// function waits for the concurrent access to complete before dropping the wrapped object.
+-    pub fn revoke(&self) {
++    ///
++    /// Returns `true` if `&self` has been revoked with this call, `false` if it was revoked
++    /// already.
++    pub fn revoke(&self) -> bool {
+         // SAFETY: By passing `true` we ask `revoke_internal` to wait for the grace period to
+         // finish.
+         unsafe { self.revoke_internal::<true>() }
+diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
+index 36a7190155833e..c23a1263992478 100644
+--- a/rust/kernel/sync.rs
++++ b/rust/kernel/sync.rs
+@@ -10,6 +10,7 @@
+ use pin_init;
+ 
+ mod arc;
++pub mod completion;
+ mod condvar;
+ pub mod lock;
+ mod locked_by;
+@@ -17,6 +18,7 @@
+ pub mod rcu;
+ 
+ pub use arc::{Arc, ArcBorrow, UniqueArc};
++pub use completion::Completion;
+ pub use condvar::{new_condvar, CondVar, CondVarTimeoutResult};
+ pub use lock::global::{global_lock, GlobalGuard, GlobalLock, GlobalLockBackend, GlobalLockedBy};
+ pub use lock::mutex::{new_mutex, Mutex, MutexGuard};
+diff --git a/rust/kernel/sync/completion.rs b/rust/kernel/sync/completion.rs
+new file mode 100644
+index 00000000000000..c50012a940a3c7
+--- /dev/null
++++ b/rust/kernel/sync/completion.rs
+@@ -0,0 +1,112 @@
++// SPDX-License-Identifier: GPL-2.0
++
++//! Completion support.
++//!
++//! Reference: <https://docs.kernel.org/scheduler/completion.html>
++//!
++//! C header: [`include/linux/completion.h`](srctree/include/linux/completion.h)
++
++use crate::{bindings, prelude::*, types::Opaque};
++
++/// Synchronization primitive to signal when a certain task has been completed.
++///
++/// The [`Completion`] synchronization primitive signals when a certain task has been completed by
++/// waking up other tasks that have been queued up to wait for the [`Completion`] to be completed.
++///
++/// # Examples
++///
++/// ```
++/// use kernel::sync::{Arc, Completion};
++/// use kernel::workqueue::{self, impl_has_work, new_work, Work, WorkItem};
++///
++/// #[pin_data]
++/// struct MyTask {
++///     #[pin]
++///     work: Work<MyTask>,
++///     #[pin]
++///     done: Completion,
++/// }
++///
++/// impl_has_work! {
++///     impl HasWork<Self> for MyTask { self.work }
++/// }
++///
++/// impl MyTask {
++///     fn new() -> Result<Arc<Self>> {
++///         let this = Arc::pin_init(pin_init!(MyTask {
++///             work <- new_work!("MyTask::work"),
++///             done <- Completion::new(),
++///         }), GFP_KERNEL)?;
++///
++///         let _ = workqueue::system().enqueue(this.clone());
++///
++///         Ok(this)
++///     }
++///
++///     fn wait_for_completion(&self) {
++///         self.done.wait_for_completion();
++///
++///         pr_info!("Completion: task complete\n");
++///     }
++/// }
++///
++/// impl WorkItem for MyTask {
++///     type Pointer = Arc<MyTask>;
++///
++///     fn run(this: Arc<MyTask>) {
++///         // process this task
++///         this.done.complete_all();
++///     }
++/// }
++///
++/// let task = MyTask::new()?;
++/// task.wait_for_completion();
++/// # Ok::<(), Error>(())
++/// ```
++#[pin_data]
++pub struct Completion {
++    #[pin]
++    inner: Opaque<bindings::completion>,
++}
++
++// SAFETY: `Completion` is safe to be sent to any task.
++unsafe impl Send for Completion {}
++
++// SAFETY: `Completion` is safe to be accessed concurrently.
++unsafe impl Sync for Completion {}
++
++impl Completion {
++    /// Create an initializer for a new [`Completion`].
++    pub fn new() -> impl PinInit<Self> {
++        pin_init!(Self {
++            inner <- Opaque::ffi_init(|slot: *mut bindings::completion| {
++                // SAFETY: `slot` is a valid pointer to an uninitialized `struct completion`.
++                unsafe { bindings::init_completion(slot) };
++            }),
++        })
++    }
++
++    fn as_raw(&self) -> *mut bindings::completion {
++        self.inner.get()
++    }
++
++    /// Signal all tasks waiting on this completion.
++    ///
++    /// This method wakes up all tasks waiting on this completion; after this operation the
++    /// completion is permanently done, i.e. signals all current and future waiters.
++    pub fn complete_all(&self) {
++        // SAFETY: `self.as_raw()` is a pointer to a valid `struct completion`.
++        unsafe { bindings::complete_all(self.as_raw()) };
++    }
++
++    /// Wait for completion of a task.
++    ///
++    /// This method waits for the completion of a task; it is not interruptible and there is no
++    /// timeout.
++    ///
++    /// See also [`Completion::complete_all`].
++    pub fn wait_for_completion(&self) {
++        // SAFETY: `self.as_raw()` is a pointer to a valid `struct completion`.
++        unsafe { bindings::wait_for_completion(self.as_raw()) };
++    }
++}
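The new Rust Completion wraps struct completion and exposes two
operations: complete_all(), after which the completion is permanently
done, and wait_for_completion(), which blocks until then. A userspace
pthread sketch of those semantics (a condition variable plus a sticky
flag; all names invented for the demo):

#include <pthread.h>
#include <stdio.h>

struct completion_sketch {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	int done;	/* once set, current and future waiters return */
};

static void complete_all(struct completion_sketch *c)
{
	pthread_mutex_lock(&c->lock);
	c->done = 1;
	pthread_cond_broadcast(&c->cond);
	pthread_mutex_unlock(&c->lock);
}

static void wait_for_completion(struct completion_sketch *c)
{
	pthread_mutex_lock(&c->lock);
	while (!c->done)
		pthread_cond_wait(&c->cond, &c->lock);
	pthread_mutex_unlock(&c->lock);
}

static void *worker(void *arg)
{
	complete_all(arg);	/* task body elided: just signal done */
	return NULL;
}

int main(void)
{
	struct completion_sketch c = {
		PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0
	};
	pthread_t t;

	pthread_create(&t, NULL, worker, &c);
	wait_for_completion(&c);	/* blocks until the worker signals */
	pthread_join(t, NULL);
	puts("completed");
	return 0;
}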
+diff --git a/rust/macros/module.rs b/rust/macros/module.rs
+index 2f66107847f785..44e5cb108cea70 100644
+--- a/rust/macros/module.rs
++++ b/rust/macros/module.rs
+@@ -278,6 +278,7 @@ mod __module_init {{
+                     #[cfg(MODULE)]
+                     #[doc(hidden)]
+                     #[no_mangle]
++                    #[link_section = \".exit.text\"]
+                     pub extern \"C\" fn cleanup_module() {{
+                         // SAFETY:
+                         // - This function is inaccessible to the outside due to the double
+diff --git a/scripts/gdb/linux/vfs.py b/scripts/gdb/linux/vfs.py
+index c77b9ce75f6d25..b5fbb18ccb77ab 100644
+--- a/scripts/gdb/linux/vfs.py
++++ b/scripts/gdb/linux/vfs.py
+@@ -22,7 +22,7 @@ def dentry_name(d):
+     if parent == d or parent == 0:
+         return ""
+     p = dentry_name(d['d_parent']) + "/"
+-    return p + d['d_iname'].string()
++    return p + d['d_shortname']['string'].string()
+ 
+ class DentryName(gdb.Function):
+     """Return string of the full path of a dentry.
+diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
+index e431772c616890..89fca3ffd21c95 100644
+--- a/security/selinux/ss/services.c
++++ b/security/selinux/ss/services.c
+@@ -1909,11 +1909,17 @@ static int security_compute_sid(u32 ssid,
+ 			goto out_unlock;
+ 	}
+ 	/* Obtain the sid for the context. */
+-	rc = sidtab_context_to_sid(sidtab, &newcontext, out_sid);
+-	if (rc == -ESTALE) {
+-		rcu_read_unlock();
+-		context_destroy(&newcontext);
+-		goto retry;
++	if (context_equal(scontext, &newcontext))
++		*out_sid = ssid;
++	else if (context_equal(tcontext, &newcontext))
++		*out_sid = tsid;
++	else {
++		rc = sidtab_context_to_sid(sidtab, &newcontext, out_sid);
++		if (rc == -ESTALE) {
++			rcu_read_unlock();
++			context_destroy(&newcontext);
++			goto retry;
++		}
+ 	}
+ out_unlock:
+ 	rcu_read_unlock();
+diff --git a/sound/pci/hda/hda_bind.c b/sound/pci/hda/hda_bind.c
+index 1fef350d821ef0..df8f88beddd073 100644
+--- a/sound/pci/hda/hda_bind.c
++++ b/sound/pci/hda/hda_bind.c
+@@ -44,7 +44,7 @@ static void hda_codec_unsol_event(struct hdac_device *dev, unsigned int ev)
+ 	struct hda_codec *codec = container_of(dev, struct hda_codec, core);
+ 
+ 	/* ignore unsol events during shutdown */
+-	if (codec->bus->shutdown)
++	if (codec->card->shutdown || codec->bus->shutdown)
+ 		return;
+ 
+ 	/* ignore unsol events during system suspend/resume */
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 77a2984c3741de..eb7ffa152b97b1 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2717,6 +2717,9 @@ static const struct pci_device_id azx_ids[] = {
+ 	{ PCI_VDEVICE(ATI, 0xab38),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
+ 	  AZX_DCAPS_PM_RUNTIME },
++	{ PCI_VDEVICE(ATI, 0xab40),
++	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
++	  AZX_DCAPS_PM_RUNTIME },
+ 	/* GLENFLY */
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_GLENFLY, PCI_ANY_ID),
+ 	  .class = PCI_CLASS_MULTIMEDIA_HD_AUDIO << 8,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 02a424b7a99204..03ffaec49998d1 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -11002,6 +11002,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1df3, "ASUS UM5606WA", ALC294_FIXUP_BASS_SPEAKER_15),
+ 	SND_PCI_QUIRK(0x1043, 0x1264, "ASUS UM5606KA", ALC294_FIXUP_BASS_SPEAKER_15),
+ 	SND_PCI_QUIRK(0x1043, 0x1e02, "ASUS UX3402ZA", ALC245_FIXUP_CS35L41_SPI_2),
++	SND_PCI_QUIRK(0x1043, 0x1e10, "ASUS VivoBook X507UAR", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
+ 	SND_PCI_QUIRK(0x1043, 0x1e12, "ASUS UM3402", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x1043, 0x1e1f, "ASUS Vivobook 15 X1504VAP", ALC2XX_FIXUP_HEADSET_MIC),
+diff --git a/sound/soc/amd/ps/acp63.h b/sound/soc/amd/ps/acp63.h
+index 85feae45c44c5c..d7c994e26e4dfa 100644
+--- a/sound/soc/amd/ps/acp63.h
++++ b/sound/soc/amd/ps/acp63.h
+@@ -334,6 +334,8 @@ struct acp_hw_ops {
+  * @addr: pci ioremap address
+  * @reg_range: ACP register range
+  * @acp_rev: ACP PCI revision id
++ * @acp_sw_pad_keeper_en: store acp SoundWire pad keeper enable register value
++ * @acp_pad_pulldown_ctrl: store acp pad pulldown control register value
+  * @acp63_sdw0-dma_intr_stat: DMA interrupt status array for ACP6.3 platform SoundWire
+  * manager-SW0 instance
+  * @acp63_sdw_dma_intr_stat: DMA interrupt status array for ACP6.3 platform SoundWire
+@@ -367,6 +369,8 @@ struct acp63_dev_data {
+ 	u32 addr;
+ 	u32 reg_range;
+ 	u32 acp_rev;
++	u32 acp_sw_pad_keeper_en;
++	u32 acp_pad_pulldown_ctrl;
+ 	u16 acp63_sdw0_dma_intr_stat[ACP63_SDW0_DMA_MAX_STREAMS];
+ 	u16 acp63_sdw1_dma_intr_stat[ACP63_SDW1_DMA_MAX_STREAMS];
+ 	u16 acp70_sdw0_dma_intr_stat[ACP70_SDW0_DMA_MAX_STREAMS];
+diff --git a/sound/soc/amd/ps/ps-common.c b/sound/soc/amd/ps/ps-common.c
+index 1c89fb5fe1da5d..7b4966b75dc675 100644
+--- a/sound/soc/amd/ps/ps-common.c
++++ b/sound/soc/amd/ps/ps-common.c
+@@ -160,6 +160,8 @@ static int __maybe_unused snd_acp63_suspend(struct device *dev)
+ 
+ 	adata = dev_get_drvdata(dev);
+ 	if (adata->is_sdw_dev) {
++		adata->acp_sw_pad_keeper_en = readl(adata->acp63_base + ACP_SW0_PAD_KEEPER_EN);
++		adata->acp_pad_pulldown_ctrl = readl(adata->acp63_base + ACP_PAD_PULLDOWN_CTRL);
+ 		adata->sdw_en_stat = check_acp_sdw_enable_status(adata);
+ 		if (adata->sdw_en_stat) {
+ 			writel(1, adata->acp63_base + ACP_ZSC_DSP_CTRL);
+@@ -197,6 +199,7 @@ static int __maybe_unused snd_acp63_runtime_resume(struct device *dev)
+ static int __maybe_unused snd_acp63_resume(struct device *dev)
+ {
+ 	struct acp63_dev_data *adata;
++	u32 acp_sw_pad_keeper_en;
+ 	int ret;
+ 
+ 	adata = dev_get_drvdata(dev);
+@@ -209,6 +212,12 @@ static int __maybe_unused snd_acp63_resume(struct device *dev)
+ 	if (ret)
+ 		dev_err(dev, "ACP init failed\n");
+ 
++	acp_sw_pad_keeper_en = readl(adata->acp63_base + ACP_SW0_PAD_KEEPER_EN);
++	dev_dbg(dev, "ACP_SW0_PAD_KEEPER_EN:0x%x\n", acp_sw_pad_keeper_en);
++	if (!acp_sw_pad_keeper_en) {
++		writel(adata->acp_sw_pad_keeper_en, adata->acp63_base + ACP_SW0_PAD_KEEPER_EN);
++		writel(adata->acp_pad_pulldown_ctrl, adata->acp63_base + ACP_PAD_PULLDOWN_CTRL);
++	}
+ 	return ret;
+ }
+ 
+@@ -408,6 +417,8 @@ static int __maybe_unused snd_acp70_suspend(struct device *dev)
+ 
+ 	adata = dev_get_drvdata(dev);
+ 	if (adata->is_sdw_dev) {
++		adata->acp_sw_pad_keeper_en = readl(adata->acp63_base + ACP_SW0_PAD_KEEPER_EN);
++		adata->acp_pad_pulldown_ctrl = readl(adata->acp63_base + ACP_PAD_PULLDOWN_CTRL);
+ 		adata->sdw_en_stat = check_acp_sdw_enable_status(adata);
+ 		if (adata->sdw_en_stat) {
+ 			writel(1, adata->acp63_base + ACP_ZSC_DSP_CTRL);
+@@ -445,6 +456,7 @@ static int __maybe_unused snd_acp70_runtime_resume(struct device *dev)
+ static int __maybe_unused snd_acp70_resume(struct device *dev)
+ {
+ 	struct acp63_dev_data *adata;
++	u32 acp_sw_pad_keeper_en;
+ 	int ret;
+ 
+ 	adata = dev_get_drvdata(dev);
+@@ -459,6 +471,12 @@ static int __maybe_unused snd_acp70_resume(struct device *dev)
+ 	if (ret)
+ 		dev_err(dev, "ACP init failed\n");
+ 
++	acp_sw_pad_keeper_en = readl(adata->acp63_base + ACP_SW0_PAD_KEEPER_EN);
++	dev_dbg(dev, "ACP_SW0_PAD_KEEPER_EN:0x%x\n", acp_sw_pad_keeper_en);
++	if (!acp_sw_pad_keeper_en) {
++		writel(adata->acp_sw_pad_keeper_en, adata->acp63_base + ACP_SW0_PAD_KEEPER_EN);
++		writel(adata->acp_pad_pulldown_ctrl, adata->acp63_base + ACP_PAD_PULLDOWN_CTRL);
++	}
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 3d9da93d22ee84..b27966f82c8b65 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -353,6 +353,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "83J2"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "83J3"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+diff --git a/sound/soc/codecs/rt1320-sdw.c b/sound/soc/codecs/rt1320-sdw.c
+index f51ba345a16e64..015cc710e6dc08 100644
+--- a/sound/soc/codecs/rt1320-sdw.c
++++ b/sound/soc/codecs/rt1320-sdw.c
+@@ -204,7 +204,7 @@ static const struct reg_sequence rt1320_vc_blind_write[] = {
+ 	{ 0x3fc2bfc0, 0x03 },
+ 	{ 0x0000d486, 0x43 },
+ 	{ SDW_SDCA_CTL(FUNC_NUM_AMP, RT1320_SDCA_ENT_PDE23, RT1320_SDCA_CTL_REQ_POWER_STATE, 0), 0x00 },
+-	{ 0x1000db00, 0x04 },
++	{ 0x1000db00, 0x07 },
+ 	{ 0x1000db01, 0x00 },
+ 	{ 0x1000db02, 0x11 },
+ 	{ 0x1000db03, 0x00 },
+@@ -225,6 +225,21 @@ static const struct reg_sequence rt1320_vc_blind_write[] = {
+ 	{ 0x1000db12, 0x00 },
+ 	{ 0x1000db13, 0x00 },
+ 	{ 0x1000db14, 0x45 },
++	{ 0x1000db15, 0x0d },
++	{ 0x1000db16, 0x01 },
++	{ 0x1000db17, 0x00 },
++	{ 0x1000db18, 0x00 },
++	{ 0x1000db19, 0xbf },
++	{ 0x1000db1a, 0x13 },
++	{ 0x1000db1b, 0x09 },
++	{ 0x1000db1c, 0x00 },
++	{ 0x1000db1d, 0x00 },
++	{ 0x1000db1e, 0x00 },
++	{ 0x1000db1f, 0x12 },
++	{ 0x1000db20, 0x09 },
++	{ 0x1000db21, 0x00 },
++	{ 0x1000db22, 0x00 },
++	{ 0x1000db23, 0x00 },
+ 	{ 0x0000d540, 0x01 },
+ 	{ 0x0000c081, 0xfc },
+ 	{ 0x0000f01e, 0x80 },
+diff --git a/sound/soc/codecs/wcd9335.c b/sound/soc/codecs/wcd9335.c
+index 7cef43bb2a8800..5e19e813748dfa 100644
+--- a/sound/soc/codecs/wcd9335.c
++++ b/sound/soc/codecs/wcd9335.c
+@@ -17,7 +17,7 @@
+ #include <sound/soc.h>
+ #include <sound/pcm_params.h>
+ #include <sound/soc-dapm.h>
+-#include <linux/of_gpio.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/of.h>
+ #include <linux/of_irq.h>
+ #include <sound/tlv.h>
+@@ -331,8 +331,7 @@ struct wcd9335_codec {
+ 	int comp_enabled[COMPANDER_MAX];
+ 
+ 	int intr1;
+-	int reset_gpio;
+-	struct regulator_bulk_data supplies[WCD9335_MAX_SUPPLY];
++	struct gpio_desc *reset_gpio;
+ 
+ 	unsigned int rx_port_value[WCD9335_RX_MAX];
+ 	unsigned int tx_port_value[WCD9335_TX_MAX];
+@@ -355,6 +354,10 @@ struct wcd9335_irq {
+ 	char *name;
+ };
+ 
++static const char * const wcd9335_supplies[] = {
++	"vdd-buck", "vdd-buck-sido", "vdd-tx", "vdd-rx", "vdd-io",
++};
++
+ static const struct wcd9335_slim_ch wcd9335_tx_chs[WCD9335_TX_MAX] = {
+ 	WCD9335_SLIM_TX_CH(0),
+ 	WCD9335_SLIM_TX_CH(1),
+@@ -4975,12 +4978,11 @@ static const struct regmap_irq_chip wcd9335_regmap_irq1_chip = {
+ static int wcd9335_parse_dt(struct wcd9335_codec *wcd)
+ {
+ 	struct device *dev = wcd->dev;
+-	struct device_node *np = dev->of_node;
+ 	int ret;
+ 
+-	wcd->reset_gpio = of_get_named_gpio(np,	"reset-gpios", 0);
+-	if (wcd->reset_gpio < 0)
+-		return dev_err_probe(dev, wcd->reset_gpio, "Reset GPIO missing from DT\n");
++	wcd->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
++	if (IS_ERR(wcd->reset_gpio))
++		return dev_err_probe(dev, PTR_ERR(wcd->reset_gpio), "Reset GPIO missing from DT\n");
+ 
+ 	wcd->mclk = devm_clk_get(dev, "mclk");
+ 	if (IS_ERR(wcd->mclk))
+@@ -4990,30 +4992,16 @@ static int wcd9335_parse_dt(struct wcd9335_codec *wcd)
+ 	if (IS_ERR(wcd->native_clk))
+ 		return dev_err_probe(dev, PTR_ERR(wcd->native_clk), "slimbus clock not found\n");
+ 
+-	wcd->supplies[0].supply = "vdd-buck";
+-	wcd->supplies[1].supply = "vdd-buck-sido";
+-	wcd->supplies[2].supply = "vdd-tx";
+-	wcd->supplies[3].supply = "vdd-rx";
+-	wcd->supplies[4].supply = "vdd-io";
+-
+-	ret = regulator_bulk_get(dev, WCD9335_MAX_SUPPLY, wcd->supplies);
++	ret = devm_regulator_bulk_get_enable(dev, ARRAY_SIZE(wcd9335_supplies),
++					     wcd9335_supplies);
+ 	if (ret)
+-		return dev_err_probe(dev, ret, "Failed to get supplies\n");
++		return dev_err_probe(dev, ret, "Failed to get and enable supplies\n");
+ 
+ 	return 0;
+ }
+ 
+ static int wcd9335_power_on_reset(struct wcd9335_codec *wcd)
+ {
+-	struct device *dev = wcd->dev;
+-	int ret;
+-
+-	ret = regulator_bulk_enable(WCD9335_MAX_SUPPLY, wcd->supplies);
+-	if (ret) {
+-		dev_err(dev, "Failed to get supplies: err = %d\n", ret);
+-		return ret;
+-	}
+-
+ 	/*
+ 	 * For WCD9335, it takes about 600us for the Vout_A and
+ 	 * Vout_D to be ready after BUCK_SIDO is powered up.
+@@ -5023,9 +5011,9 @@ static int wcd9335_power_on_reset(struct wcd9335_codec *wcd)
+ 	 */
+ 	usleep_range(600, 650);
+ 
+-	gpio_direction_output(wcd->reset_gpio, 0);
++	gpiod_set_value(wcd->reset_gpio, 1);
+ 	msleep(20);
+-	gpio_set_value(wcd->reset_gpio, 1);
++	gpiod_set_value(wcd->reset_gpio, 0);
+ 	msleep(20);
+ 
+ 	return 0;
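
For context on the flipped constants above: gpiod works in logical levels, so
gpiod_set_value(desc, 1) asserts the line and the DT polarity flag supplies the
physical level, whereas the legacy gpio_set_value() took the raw level. A
minimal probe-side sketch, assuming an active-low reset line in DT (variable
names hypothetical):

	struct gpio_desc *reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);

	if (IS_ERR(reset))
		return PTR_ERR(reset);
	gpiod_set_value(reset, 1);	/* assert reset; pin goes low if active-low in DT */
	msleep(20);
	gpiod_set_value(reset, 0);	/* release reset */
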
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index dbbc9eb935a4b3..f302bcebaa9d09 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2284,6 +2284,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_DISABLE_AUTOSUSPEND),
+ 	DEVICE_FLG(0x17aa, 0x104d, /* Lenovo ThinkStation P620 Internal Speaker + Front Headset */
+ 		   QUIRK_FLAG_DISABLE_AUTOSUSPEND),
++	DEVICE_FLG(0x17ef, 0x3083, /* Lenovo TBT3 dock */
++		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x1852, 0x5062, /* Luxman D-08u */
+ 		   QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY),
+ 	DEVICE_FLG(0x1852, 0x5065, /* Luxman DA-06 */
+diff --git a/sound/usb/stream.c b/sound/usb/stream.c
+index c1ea8844a46fc4..aa91d63749f2ca 100644
+--- a/sound/usb/stream.c
++++ b/sound/usb/stream.c
+@@ -987,6 +987,8 @@ snd_usb_get_audioformat_uac3(struct snd_usb_audio *chip,
+ 	 * and request Cluster Descriptor
+ 	 */
+ 	wLength = le16_to_cpu(hc_header.wLength);
++	if (wLength < sizeof(cluster))
++		return NULL;
+ 	cluster = kzalloc(wLength, GFP_KERNEL);
+ 	if (!cluster)
+ 		return ERR_PTR(-ENOMEM);
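
The added check matters because wLength is read from the device: it sizes the
kzalloc() above, and a value smaller than sizeof(cluster) would let the
subsequent parsing of the cluster header read past the end of the heap buffer.
A minimal sketch of the pattern, with hypothetical names:

	static void *read_desc(u16 wLength, size_t min_len)
	{
		if (wLength < min_len)	/* reject truncated descriptors */
			return NULL;
		return kzalloc(wLength, GFP_KERNEL);
	}
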
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index 460c3e57fadb64..0381f209920a66 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -226,6 +226,9 @@ static void btf_dump_free_names(struct hashmap *map)
+ 	size_t bkt;
+ 	struct hashmap_entry *cur;
+ 
++	if (!map)
++		return;
++
+ 	hashmap__for_each_entry(map, cur, bkt)
+ 		free((void *)cur->pkey);
+ 
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 30cf210261032e..97605ea8093ff6 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -597,7 +597,7 @@ struct extern_desc {
+ 	int sym_idx;
+ 	int btf_id;
+ 	int sec_btf_id;
+-	const char *name;
++	char *name;
+ 	char *essent_name;
+ 	bool is_set;
+ 	bool is_weak;
+@@ -4259,7 +4259,9 @@ static int bpf_object__collect_externs(struct bpf_object *obj)
+ 			return ext->btf_id;
+ 		}
+ 		t = btf__type_by_id(obj->btf, ext->btf_id);
+-		ext->name = btf__name_by_offset(obj->btf, t->name_off);
++		ext->name = strdup(btf__name_by_offset(obj->btf, t->name_off));
++		if (!ext->name)
++			return -ENOMEM;
+ 		ext->sym_idx = i;
+ 		ext->is_weak = ELF64_ST_BIND(sym->st_info) == STB_WEAK;
+ 
+@@ -9138,8 +9140,10 @@ void bpf_object__close(struct bpf_object *obj)
+ 	zfree(&obj->btf_custom_path);
+ 	zfree(&obj->kconfig);
+ 
+-	for (i = 0; i < obj->nr_extern; i++)
++	for (i = 0; i < obj->nr_extern; i++) {
++		zfree(&obj->externs[i].name);
+ 		zfree(&obj->externs[i].essent_name);
++	}
+ 
+ 	zfree(&obj->externs);
+ 	obj->nr_extern = 0;
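
These hunks make ext->name an owned copy instead of a pointer borrowed from the
BTF string section: resizing a datasec rewrites the BTF data, so a borrowed
pointer could dangle, which is the use-after-free the selftest below exercises.
A minimal sketch of the ownership pattern (names hypothetical):

	#include <errno.h>
	#include <stdlib.h>
	#include <string.h>

	struct ext { char *name; };

	static int ext_set_name(struct ext *e, const char *borrowed)
	{
		e->name = strdup(borrowed);	/* keep a stable copy */
		return e->name ? 0 : -ENOMEM;
	}

	static void ext_free(struct ext *e)
	{
		free(e->name);
		e->name = NULL;
	}
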
+diff --git a/tools/testing/selftests/bpf/progs/test_global_map_resize.c b/tools/testing/selftests/bpf/progs/test_global_map_resize.c
+index a3f220ba7025bd..ee65bad0436d0c 100644
+--- a/tools/testing/selftests/bpf/progs/test_global_map_resize.c
++++ b/tools/testing/selftests/bpf/progs/test_global_map_resize.c
+@@ -32,6 +32,16 @@ int my_int_last SEC(".data.array_not_last");
+ 
+ int percpu_arr[1] SEC(".data.percpu_arr");
+ 
++/* at least one extern is included, to ensure that a specific
++ * regression is tested whereby resizing resulted in a use-after-free
++ * bug after type information is invalidated by the resize operation.
++ *
++ * There isn't a particularly good API to test for this specific condition,
++ * but having externs in the resizing tests covers this path.
++ */
++extern int LINUX_KERNEL_VERSION __kconfig;
++long version_sink;
++
+ SEC("tp/syscalls/sys_enter_getpid")
+ int bss_array_sum(void *ctx)
+ {
+@@ -44,6 +54,9 @@ int bss_array_sum(void *ctx)
+ 	for (size_t i = 0; i < bss_array_len; ++i)
+ 		sum += array[i];
+ 
++	/* see above; ensure this is not optimized out */
++	version_sink = LINUX_KERNEL_VERSION;
++
+ 	return 0;
+ }
+ 
+@@ -59,6 +72,9 @@ int data_array_sum(void *ctx)
+ 	for (size_t i = 0; i < data_array_len; ++i)
+ 		sum += my_array[i];
+ 
++	/* see above; ensure this is not optimized out */
++	version_sink = LINUX_KERNEL_VERSION;
++
+ 	return 0;
+ }
+ 


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-07-11  2:26 Arisu Tachibana
  0 siblings, 0 replies; 19+ messages in thread
From: Arisu Tachibana @ 2025-07-11  2:26 UTC (permalink / raw
  To: gentoo-commits

commit:     bd764b8d3d33eb41259f5a44e3ca47c67f8d2c0c
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Jul 11 02:26:21 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Jul 11 02:26:21 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bd764b8d

Linux patch 6.15.6

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README             |    4 +
 1005_linux-6.15.6.patch | 8234 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 8238 insertions(+)

diff --git a/0000_README b/0000_README
index 6933269c..7184e8ec 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1004_linux-6.15.5.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.15.5
 
+Patch:  1005_linux-6.15.6.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.15.6
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1005_linux-6.15.6.patch b/1005_linux-6.15.6.patch
new file mode 100644
index 00000000..36fd9119
--- /dev/null
+++ b/1005_linux-6.15.6.patch
@@ -0,0 +1,8234 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index 6a1acabb29d85f..53755b2021ed01 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -523,6 +523,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
+ 		/sys/devices/system/cpu/vulnerabilities/spectre_v1
+ 		/sys/devices/system/cpu/vulnerabilities/spectre_v2
+ 		/sys/devices/system/cpu/vulnerabilities/srbds
++		/sys/devices/system/cpu/vulnerabilities/tsa
+ 		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+ Date:		January 2018
+ Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
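
The new entry means the TSA status is reported like the other files in this
directory. A hedged userspace sketch for reading it (the output string depends
on the selected mitigation):

	#include <stdio.h>

	int main(void)
	{
		char buf[128];
		FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/tsa", "r");

		if (f && fgets(buf, sizeof(buf), f))
			fputs(buf, stdout);	/* e.g. "Mitigation: Clear CPU buffers" */
		if (f)
			fclose(f);
		return 0;
	}
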
+diff --git a/Documentation/ABI/testing/sysfs-driver-ufs b/Documentation/ABI/testing/sysfs-driver-ufs
+index e36d2de16cbdad..0397664e869a0d 100644
+--- a/Documentation/ABI/testing/sysfs-driver-ufs
++++ b/Documentation/ABI/testing/sysfs-driver-ufs
+@@ -711,7 +711,7 @@ Description:	This file shows the thin provisioning type. This is one of
+ 
+ 		The file is read only.
+ 
+-What:		/sys/class/scsi_device/*/device/unit_descriptor/physical_memory_resourse_count
++What:		/sys/class/scsi_device/*/device/unit_descriptor/physical_memory_resource_count
+ Date:		February 2018
+ Contact:	Stanislav Nijnikov <stanislav.nijnikov@wdc.com>
+ Description:	This file shows the total physical memory resources. This is
+diff --git a/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst b/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
+index 1302fd1b55e83c..6dba18dbb9abc8 100644
+--- a/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
++++ b/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
+@@ -157,9 +157,7 @@ This is achieved by using the otherwise unused and obsolete VERW instruction in
+ combination with a microcode update. The microcode clears the affected CPU
+ buffers when the VERW instruction is executed.
+ 
+-Kernel reuses the MDS function to invoke the buffer clearing:
+-
+-	mds_clear_cpu_buffers()
++Kernel does the buffer clearing with x86_clear_cpu_buffers().
+ 
+ On MDS affected CPUs, the kernel already invokes CPU buffer clear on
+ kernel/userspace, hypervisor/guest and C-state (idle) transitions. No
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 8f75ec17739944..76a34e0aef7634 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -7423,6 +7423,19 @@
+ 			having this key zero'ed is acceptable. E.g. in testing
+ 			scenarios.
+ 
++	tsa=		[X86] Control mitigation for Transient Scheduler
++			Attacks on AMD CPUs. Search the following in your
++			favourite search engine for more details:
++
++			"Technical guidance for mitigating transient scheduler
++			attacks".
++
++			off		- disable the mitigation
++			on		- enable the mitigation (default)
++			user		- mitigate only user/kernel transitions
++			vm		- mitigate only guest/host transitions
++
++
+ 	tsc=		Disable clocksource stability checks for TSC.
+ 			Format: <string>
+ 			[x86] reliable: mark tsc clocksource as reliable, this
+diff --git a/Documentation/arch/x86/mds.rst b/Documentation/arch/x86/mds.rst
+index 5a2e6c0ef04a53..3518671e1a8503 100644
+--- a/Documentation/arch/x86/mds.rst
++++ b/Documentation/arch/x86/mds.rst
+@@ -93,7 +93,7 @@ enters a C-state.
+ 
+ The kernel provides a function to invoke the buffer clearing:
+ 
+-    mds_clear_cpu_buffers()
++    x86_clear_cpu_buffers()
+ 
+ Also macro CLEAR_CPU_BUFFERS can be used in ASM late in exit-to-user path.
+ Other than CFLAGS.ZF, this macro doesn't clobber any registers.
+@@ -185,9 +185,9 @@ Mitigation points
+    idle clearing would be a window dressing exercise and is therefore not
+    activated.
+ 
+-   The invocation is controlled by the static key mds_idle_clear which is
+-   switched depending on the chosen mitigation mode and the SMT state of
+-   the system.
++   The invocation is controlled by the static key cpu_buf_idle_clear which is
++   switched depending on the chosen mitigation mode and the SMT state of the
++   system.
+ 
+    The buffer clear is only invoked before entering the C-State to prevent
+    that stale data from the idling CPU from spilling to the Hyper-Thread
+diff --git a/Documentation/core-api/symbol-namespaces.rst b/Documentation/core-api/symbol-namespaces.rst
+index 06f766a6aab244..c6f59c5e25648e 100644
+--- a/Documentation/core-api/symbol-namespaces.rst
++++ b/Documentation/core-api/symbol-namespaces.rst
+@@ -28,6 +28,9 @@ kernel. As of today, modules that make use of symbols exported into namespaces,
+ are required to import the namespace. Otherwise the kernel will, depending on
+ its configuration, reject loading the module or warn about a missing import.
+ 
++Additionally, it is possible to put symbols into a module namespace, strictly
++limiting which modules are allowed to use these symbols.
++
+ 2. How to define Symbol Namespaces
+ ==================================
+ 
+@@ -83,6 +86,22 @@ unit as preprocessor statement. The above example would then read::
+ within the corresponding compilation unit before the #include for
+ <linux/export.h>. Typically it's placed before the first #include statement.
+ 
++2.3 Using the EXPORT_SYMBOL_GPL_FOR_MODULES() macro
++===================================================
++
++Symbols exported using this macro are put into a module namespace. This
++namespace cannot be imported.
++
++The macro takes a comma separated list of module names, allowing only those
++modules to access this symbol. Simple tail-globs are supported.
++
++For example::
++
++  EXPORT_SYMBOL_GPL_FOR_MODULES(preempt_notifier_inc, "kvm,kvm-*")
++
++will limit usage of this symbol to modules whose name matches the given
++patterns.
++
+ 3. How to use Symbols exported in Namespaces
+ ============================================
+ 
+@@ -154,3 +173,6 @@ in-tree modules::
+ You can also run nsdeps for external module builds. A typical usage is::
+ 
+ 	$ make -C <path_to_kernel_src> M=$PWD nsdeps
++
++Note: it will happily generate an import statement for the module namespace,
++which will not work and will generate build and runtime failures.
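
For the exporting side of the new macro, a minimal sketch (the helper name is
hypothetical; the macro and pattern string are the ones documented above):

	#include <linux/export.h>

	int kvm_only_helper(void)
	{
		return 0;
	}
	EXPORT_SYMBOL_GPL_FOR_MODULES(kvm_only_helper, "kvm,kvm-*");
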
+diff --git a/Documentation/devicetree/bindings/i2c/realtek,rtl9301-i2c.yaml b/Documentation/devicetree/bindings/i2c/realtek,rtl9301-i2c.yaml
+index eddfd329c67b74..69ac5db8b91489 100644
+--- a/Documentation/devicetree/bindings/i2c/realtek,rtl9301-i2c.yaml
++++ b/Documentation/devicetree/bindings/i2c/realtek,rtl9301-i2c.yaml
+@@ -26,7 +26,8 @@ properties:
+       - const: realtek,rtl9301-i2c
+ 
+   reg:
+-    description: Register offset and size of this I2C controller.
++    items:
++      - description: Register offset and size of this I2C controller.
+ 
+   "#address-cells":
+     const: 1
+diff --git a/Documentation/devicetree/bindings/net/sophgo,sg2044-dwmac.yaml b/Documentation/devicetree/bindings/net/sophgo,sg2044-dwmac.yaml
+index 4dd2dc9c678b70..8afbd9ebd73f69 100644
+--- a/Documentation/devicetree/bindings/net/sophgo,sg2044-dwmac.yaml
++++ b/Documentation/devicetree/bindings/net/sophgo,sg2044-dwmac.yaml
+@@ -80,6 +80,8 @@ examples:
+       interrupt-parent = <&intc>;
+       interrupts = <296 IRQ_TYPE_LEVEL_HIGH>;
+       interrupt-names = "macirq";
++      phy-handle = <&phy0>;
++      phy-mode = "rgmii-id";
+       resets = <&rst 30>;
+       reset-names = "stmmaceth";
+       snps,multicast-filter-bins = <0>;
+@@ -91,7 +93,6 @@ examples:
+       snps,mtl-rx-config = <&gmac0_mtl_rx_setup>;
+       snps,mtl-tx-config = <&gmac0_mtl_tx_setup>;
+       snps,axi-config = <&gmac0_stmmac_axi_setup>;
+-      status = "disabled";
+ 
+       gmac0_mtl_rx_setup: rx-queues-config {
+         snps,rx-queues-to-use = <8>;
+diff --git a/Makefile b/Makefile
+index 66b61bf9038814..959b55a05d1ba2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 15
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm64/boot/dts/apple/spi1-nvram.dtsi b/arch/arm64/boot/dts/apple/spi1-nvram.dtsi
+index 3df2fd3993b528..9740fbf200f0bc 100644
+--- a/arch/arm64/boot/dts/apple/spi1-nvram.dtsi
++++ b/arch/arm64/boot/dts/apple/spi1-nvram.dtsi
+@@ -20,8 +20,6 @@ flash@0 {
+ 		compatible = "jedec,spi-nor";
+ 		reg = <0x0>;
+ 		spi-max-frequency = <25000000>;
+-		#address-cells = <1>;
+-		#size-cells = <1>;
+ 
+ 		partitions {
+ 			compatible = "fixed-partitions";
+diff --git a/arch/arm64/boot/dts/apple/t8103-j293.dts b/arch/arm64/boot/dts/apple/t8103-j293.dts
+index e2d9439397f71a..5b3c42e9f0e677 100644
+--- a/arch/arm64/boot/dts/apple/t8103-j293.dts
++++ b/arch/arm64/boot/dts/apple/t8103-j293.dts
+@@ -100,6 +100,8 @@ dfr_mipi_out_panel: endpoint@0 {
+ 
+ &displaydfr_mipi {
+ 	status = "okay";
++	#address-cells = <1>;
++	#size-cells = <0>;
+ 
+ 	dfr_panel: panel@0 {
+ 		compatible = "apple,j293-summit", "apple,summit";
+diff --git a/arch/arm64/boot/dts/apple/t8103-jxxx.dtsi b/arch/arm64/boot/dts/apple/t8103-jxxx.dtsi
+index 8e82231acab59c..0c8206156bfefd 100644
+--- a/arch/arm64/boot/dts/apple/t8103-jxxx.dtsi
++++ b/arch/arm64/boot/dts/apple/t8103-jxxx.dtsi
+@@ -71,7 +71,7 @@ hpm1: usb-pd@3f {
+  */
+ &port00 {
+ 	bus-range = <1 1>;
+-	wifi0: network@0,0 {
++	wifi0: wifi@0,0 {
+ 		compatible = "pci14e4,4425";
+ 		reg = <0x10000 0x0 0x0 0x0 0x0>;
+ 		/* To be filled by the loader */
+diff --git a/arch/arm64/boot/dts/apple/t8103.dtsi b/arch/arm64/boot/dts/apple/t8103.dtsi
+index 97b6a067394e31..229b10efaab9ed 100644
+--- a/arch/arm64/boot/dts/apple/t8103.dtsi
++++ b/arch/arm64/boot/dts/apple/t8103.dtsi
+@@ -404,8 +404,6 @@ displaydfr_mipi: dsi@228600000 {
+ 			compatible = "apple,t8103-display-pipe-mipi", "apple,h7-display-pipe-mipi";
+ 			reg = <0x2 0x28600000 0x0 0x100000>;
+ 			power-domains = <&ps_mipi_dsi>;
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			status = "disabled";
+ 
+ 			ports {
+diff --git a/arch/arm64/boot/dts/apple/t8112-j493.dts b/arch/arm64/boot/dts/apple/t8112-j493.dts
+index be86d34c6696cb..fb8ad7d4c65a8f 100644
+--- a/arch/arm64/boot/dts/apple/t8112-j493.dts
++++ b/arch/arm64/boot/dts/apple/t8112-j493.dts
+@@ -63,6 +63,8 @@ dfr_mipi_out_panel: endpoint@0 {
+ 
+ &displaydfr_mipi {
+ 	status = "okay";
++	#address-cells = <1>;
++	#size-cells = <0>;
+ 
+ 	dfr_panel: panel@0 {
+ 		compatible = "apple,j493-summit", "apple,summit";
+diff --git a/arch/arm64/boot/dts/apple/t8112.dtsi b/arch/arm64/boot/dts/apple/t8112.dtsi
+index d9b966d68e4fae..7488e3850493b0 100644
+--- a/arch/arm64/boot/dts/apple/t8112.dtsi
++++ b/arch/arm64/boot/dts/apple/t8112.dtsi
+@@ -420,8 +420,6 @@ displaydfr_mipi: dsi@228600000 {
+ 			compatible = "apple,t8112-display-pipe-mipi", "apple,h7-display-pipe-mipi";
+ 			reg = <0x2 0x28600000 0x0 0x100000>;
+ 			power-domains = <&ps_mipi_dsi>;
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			status = "disabled";
+ 
+ 			ports {
+diff --git a/arch/powerpc/include/uapi/asm/ioctls.h b/arch/powerpc/include/uapi/asm/ioctls.h
+index 2c145da3b774a1..b5211e413829a2 100644
+--- a/arch/powerpc/include/uapi/asm/ioctls.h
++++ b/arch/powerpc/include/uapi/asm/ioctls.h
+@@ -23,10 +23,10 @@
+ #define TCSETSW		_IOW('t', 21, struct termios)
+ #define TCSETSF		_IOW('t', 22, struct termios)
+ 
+-#define TCGETA		_IOR('t', 23, struct termio)
+-#define TCSETA		_IOW('t', 24, struct termio)
+-#define TCSETAW		_IOW('t', 25, struct termio)
+-#define TCSETAF		_IOW('t', 28, struct termio)
++#define TCGETA		0x40147417 /* _IOR('t', 23, struct termio) */
++#define TCSETA		0x80147418 /* _IOW('t', 24, struct termio) */
++#define TCSETAW		0x80147419 /* _IOW('t', 25, struct termio) */
++#define TCSETAF		0x8014741c /* _IOW('t', 28, struct termio) */
+ 
+ #define TCSBRK		_IO('t', 29)
+ #define TCXONC		_IO('t', 30)
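
The literals encode powerpc's ioctl layout (direction in the top three bits, a
13-bit size field, then type and number); with sizeof(struct termio) being 20
bytes on powerpc, each constant equals the macro expansion it replaces. A
hedged compile-time check, assuming kernel-internal headers:

	#include <linux/ioctl.h>
	#include <asm/termios.h>

	_Static_assert(0x40147417 == _IOR('t', 23, struct termio),
		       "TCGETA literal must match its former macro expansion");
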
+diff --git a/arch/riscv/kernel/cpu_ops_sbi.c b/arch/riscv/kernel/cpu_ops_sbi.c
+index e6fbaaf549562d..87d6559448039c 100644
+--- a/arch/riscv/kernel/cpu_ops_sbi.c
++++ b/arch/riscv/kernel/cpu_ops_sbi.c
+@@ -18,10 +18,10 @@ const struct cpu_operations cpu_ops_sbi;
+ 
+ /*
+  * Ordered booting via HSM brings one cpu at a time. However, cpu hotplug can
+- * be invoked from multiple threads in parallel. Define a per cpu data
++ * be invoked from multiple threads in parallel. Define an array of boot data
+  * to handle that.
+  */
+-static DEFINE_PER_CPU(struct sbi_hart_boot_data, boot_data);
++static struct sbi_hart_boot_data boot_data[NR_CPUS];
+ 
+ static int sbi_hsm_hart_start(unsigned long hartid, unsigned long saddr,
+ 			      unsigned long priv)
+@@ -67,7 +67,7 @@ static int sbi_cpu_start(unsigned int cpuid, struct task_struct *tidle)
+ 	unsigned long boot_addr = __pa_symbol(secondary_start_sbi);
+ 	unsigned long hartid = cpuid_to_hartid_map(cpuid);
+ 	unsigned long hsm_data;
+-	struct sbi_hart_boot_data *bdata = &per_cpu(boot_data, cpuid);
++	struct sbi_hart_boot_data *bdata = &boot_data[cpuid];
+ 
+ 	/* Make sure tidle is updated */
+ 	smp_mb();
+diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c
+index 2fbee3887d13aa..6c8922ad70f340 100644
+--- a/arch/s390/pci/pci_event.c
++++ b/arch/s390/pci/pci_event.c
+@@ -106,6 +106,10 @@ static pci_ers_result_t zpci_event_do_error_state_clear(struct pci_dev *pdev,
+ 	struct zpci_dev *zdev = to_zpci(pdev);
+ 	int rc;
+ 
++	/* The underlying device may have been disabled by the event */
++	if (!zdev_enabled(zdev))
++		return PCI_ERS_RESULT_NEED_RESET;
++
+ 	pr_info("%s: Unblocking device access for examination\n", pci_name(pdev));
+ 	rc = zpci_reset_load_store_blocked(zdev);
+ 	if (rc) {
+@@ -273,6 +277,8 @@ static void __zpci_event_error(struct zpci_ccdf_err *ccdf)
+ 	struct zpci_dev *zdev = get_zdev_by_fid(ccdf->fid);
+ 	struct pci_dev *pdev = NULL;
+ 	pci_ers_result_t ers_res;
++	u32 fh = 0;
++	int rc;
+ 
+ 	zpci_dbg(3, "err fid:%x, fh:%x, pec:%x\n",
+ 		 ccdf->fid, ccdf->fh, ccdf->pec);
+@@ -281,6 +287,15 @@ static void __zpci_event_error(struct zpci_ccdf_err *ccdf)
+ 
+ 	if (zdev) {
+ 		mutex_lock(&zdev->state_lock);
++		rc = clp_refresh_fh(zdev->fid, &fh);
++		if (rc)
++			goto no_pdev;
++		if (!fh || ccdf->fh != fh) {
++			/* Ignore events with stale handles */
++			zpci_dbg(3, "err fid:%x, fh:%x (stale %x)\n",
++				 ccdf->fid, fh, ccdf->fh);
++			goto no_pdev;
++		}
+ 		zpci_update_fh(zdev, ccdf->fh);
+ 		if (zdev->zbus->bus)
+ 			pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn);
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 47932d5f44990a..020180dfb87f65 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2723,6 +2723,15 @@ config MITIGATION_ITS
+ 	  disabled, mitigation cannot be enabled via cmdline.
+ 	  See <file:Documentation/admin-guide/hw-vuln/indirect-target-selection.rst>
+ 
++config MITIGATION_TSA
++	bool "Mitigate Transient Scheduler Attacks"
++	depends on CPU_SUP_AMD
++	default y
++	help
++	  Enable mitigation for Transient Scheduler Attacks. TSA is a hardware
++	  security vulnerability on AMD CPUs which can lead to forwarding of
++	  invalid info to subsequent instructions and thus can affect their
++	  timing and thereby cause a leakage.
+ endif
+ 
+ config ARCH_HAS_ADD_PAGES
+diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
+index 175958b02f2bfd..8e9a0cc20a4ab7 100644
+--- a/arch/x86/entry/entry.S
++++ b/arch/x86/entry/entry.S
+@@ -36,20 +36,20 @@ EXPORT_SYMBOL_GPL(write_ibpb);
+ 
+ /*
+  * Define the VERW operand that is disguised as entry code so that
+- * it can be referenced with KPTI enabled. This ensure VERW can be
++ * it can be referenced with KPTI enabled. This ensures VERW can be
+  * used late in exit-to-user path after page tables are switched.
+  */
+ .pushsection .entry.text, "ax"
+ 
+ .align L1_CACHE_BYTES, 0xcc
+-SYM_CODE_START_NOALIGN(mds_verw_sel)
++SYM_CODE_START_NOALIGN(x86_verw_sel)
+ 	UNWIND_HINT_UNDEFINED
+ 	ANNOTATE_NOENDBR
+ 	.word __KERNEL_DS
+ .align L1_CACHE_BYTES, 0xcc
+-SYM_CODE_END(mds_verw_sel);
++SYM_CODE_END(x86_verw_sel);
+ /* For KVM */
+-EXPORT_SYMBOL_GPL(mds_verw_sel);
++EXPORT_SYMBOL_GPL(x86_verw_sel);
+ 
+ .popsection
+ 
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 30144ef9ef02fb..6f894740663c16 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -453,6 +453,7 @@
+ #define X86_FEATURE_NO_NESTED_DATA_BP	(20*32+ 0) /* No Nested Data Breakpoints */
+ #define X86_FEATURE_WRMSR_XX_BASE_NS	(20*32+ 1) /* WRMSR to {FS,GS,KERNEL_GS}_BASE is non-serializing */
+ #define X86_FEATURE_LFENCE_RDTSC	(20*32+ 2) /* LFENCE always serializing / synchronizes RDTSC */
++#define X86_FEATURE_VERW_CLEAR		(20*32+ 5) /* The memory form of VERW mitigates TSA */
+ #define X86_FEATURE_NULL_SEL_CLR_BASE	(20*32+ 6) /* Null Selector Clears Base */
+ #define X86_FEATURE_AUTOIBRS		(20*32+ 8) /* Automatic IBRS */
+ #define X86_FEATURE_NO_SMM_CTL_MSR	(20*32+ 9) /* SMM_CTL MSR is not present */
+@@ -482,6 +483,9 @@
+ #define X86_FEATURE_AMD_WORKLOAD_CLASS	(21*32 + 7) /* Workload Classification */
+ #define X86_FEATURE_PREFER_YMM		(21*32 + 8) /* Avoid ZMM registers due to downclocking */
+ #define X86_FEATURE_INDIRECT_THUNK_ITS	(21*32 + 9) /* Use thunk for indirect branches in lower half of cacheline */
++#define X86_FEATURE_TSA_SQ_NO		(21*32+11) /* AMD CPU not vulnerable to TSA-SQ */
++#define X86_FEATURE_TSA_L1_NO		(21*32+12) /* AMD CPU not vulnerable to TSA-L1 */
++#define X86_FEATURE_CLEAR_CPU_BUF_VM	(21*32+13) /* Clear CPU buffers using VERW before VMRUN */
+ 
+ /*
+  * BUG word(s)
+@@ -536,4 +540,5 @@
+ #define X86_BUG_SPECTRE_V2_USER		X86_BUG(1*32 + 5) /* "spectre_v2_user" CPU is affected by Spectre variant 2 attack between user processes */
+ #define X86_BUG_ITS			X86_BUG(1*32 + 6) /* "its" CPU is affected by Indirect Target Selection */
+ #define X86_BUG_ITS_NATIVE_ONLY		X86_BUG(1*32 + 7) /* "its_native_only" CPU is affected by ITS, VMX is not affected */
+#define X86_BUG_TSA			X86_BUG(1*32 + 9) /* "tsa" CPU is affected by Transient Scheduler Attacks */
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index 9a9b21b78905a6..b30e5474c18e1b 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -44,13 +44,13 @@ static __always_inline void native_irq_enable(void)
+ 
+ static __always_inline void native_safe_halt(void)
+ {
+-	mds_idle_clear_cpu_buffers();
++	x86_idle_clear_cpu_buffers();
+ 	asm volatile("sti; hlt": : :"memory");
+ }
+ 
+ static __always_inline void native_halt(void)
+ {
+-	mds_idle_clear_cpu_buffers();
++	x86_idle_clear_cpu_buffers();
+ 	asm volatile("hlt": : :"memory");
+ }
+ 
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 7bc174a1f1cb8c..f094493232b645 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -756,6 +756,7 @@ enum kvm_only_cpuid_leafs {
+ 	CPUID_8000_0022_EAX,
+ 	CPUID_7_2_EDX,
+ 	CPUID_24_0_EBX,
++	CPUID_8000_0021_ECX,
+ 	NR_KVM_CPU_CAPS,
+ 
+ 	NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,
+diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
+index 54dc313bcdf018..d3d51caade48f6 100644
+--- a/arch/x86/include/asm/mwait.h
++++ b/arch/x86/include/asm/mwait.h
+@@ -43,8 +43,6 @@ static __always_inline void __monitorx(const void *eax, unsigned long ecx,
+ 
+ static __always_inline void __mwait(unsigned long eax, unsigned long ecx)
+ {
+-	mds_idle_clear_cpu_buffers();
+-
+ 	/* "mwait %eax, %ecx;" */
+ 	asm volatile(".byte 0x0f, 0x01, 0xc9;"
+ 		     :: "a" (eax), "c" (ecx));
+@@ -79,7 +77,7 @@ static __always_inline void __mwait(unsigned long eax, unsigned long ecx)
+ static __always_inline void __mwaitx(unsigned long eax, unsigned long ebx,
+ 				     unsigned long ecx)
+ {
+-	/* No MDS buffer clear as this is AMD/HYGON only */
++	/* No need for TSA buffer clearing on AMD */
+ 
+ 	/* "mwaitx %eax, %ebx, %ecx;" */
+ 	asm volatile(".byte 0x0f, 0x01, 0xfb;"
+@@ -97,7 +95,7 @@ static __always_inline void __mwaitx(unsigned long eax, unsigned long ebx,
+  */
+ static __always_inline void __sti_mwait(unsigned long eax, unsigned long ecx)
+ {
+-	mds_idle_clear_cpu_buffers();
++
+ 	/* "mwait %eax, %ecx;" */
+ 	asm volatile("sti; .byte 0x0f, 0x01, 0xc9;"
+ 		     :: "a" (eax), "c" (ecx));
+@@ -115,21 +113,29 @@ static __always_inline void __sti_mwait(unsigned long eax, unsigned long ecx)
+  */
+ static __always_inline void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
+ {
++	if (need_resched())
++		return;
++
++	x86_idle_clear_cpu_buffers();
++
+ 	if (static_cpu_has_bug(X86_BUG_MONITOR) || !current_set_polling_and_test()) {
+ 		const void *addr = &current_thread_info()->flags;
+ 
+ 		alternative_input("", "clflush (%[addr])", X86_BUG_CLFLUSH_MONITOR, [addr] "a" (addr));
+ 		__monitor(addr, 0, 0);
+ 
+-		if (!need_resched()) {
+-			if (ecx & 1) {
+-				__mwait(eax, ecx);
+-			} else {
+-				__sti_mwait(eax, ecx);
+-				raw_local_irq_disable();
+-			}
++		if (need_resched())
++			goto out;
++
++		if (ecx & 1) {
++			__mwait(eax, ecx);
++		} else {
++			__sti_mwait(eax, ecx);
++			raw_local_irq_disable();
+ 		}
+ 	}
++
++out:
+ 	current_clr_polling();
+ }
+ 
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 7d04ade3354115..6cc5432438cb81 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -302,25 +302,31 @@
+ .endm
+ 
+ /*
+- * Macro to execute VERW instruction that mitigate transient data sampling
+- * attacks such as MDS. On affected systems a microcode update overloaded VERW
+- * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
+- *
++ * Macro to execute VERW insns that mitigate transient data sampling
++ * attacks such as MDS or TSA. On affected systems a microcode update
++ * overloaded VERW insns to also clear the CPU buffers. VERW clobbers
++ * CFLAGS.ZF.
+  * Note: Only the memory operand variant of VERW clears the CPU buffers.
+  */
+-.macro CLEAR_CPU_BUFFERS
++.macro __CLEAR_CPU_BUFFERS feature
+ #ifdef CONFIG_X86_64
+-	ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
++	ALTERNATIVE "", "verw x86_verw_sel(%rip)", \feature
+ #else
+ 	/*
+ 	 * In 32bit mode, the memory operand must be a %cs reference. The data
+ 	 * segments may not be usable (vm86 mode), and the stack segment may not
+ 	 * be flat (ESPFIX32).
+ 	 */
+-	ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
++	ALTERNATIVE "", "verw %cs:x86_verw_sel", \feature
+ #endif
+ .endm
+ 
++#define CLEAR_CPU_BUFFERS \
++	__CLEAR_CPU_BUFFERS X86_FEATURE_CLEAR_CPU_BUF
++
++#define VM_CLEAR_CPU_BUFFERS \
++	__CLEAR_CPU_BUFFERS X86_FEATURE_CLEAR_CPU_BUF_VM
++
+ #ifdef CONFIG_X86_64
+ .macro CLEAR_BRANCH_HISTORY
+ 	ALTERNATIVE "", "call clear_bhb_loop", X86_FEATURE_CLEAR_BHB_LOOP
+@@ -567,24 +573,24 @@ DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
+ 
+ DECLARE_STATIC_KEY_FALSE(switch_vcpu_ibpb);
+ 
+-DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
++DECLARE_STATIC_KEY_FALSE(cpu_buf_idle_clear);
+ 
+ DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
+ 
+ DECLARE_STATIC_KEY_FALSE(mmio_stale_data_clear);
+ 
+-extern u16 mds_verw_sel;
++extern u16 x86_verw_sel;
+ 
+ #include <asm/segment.h>
+ 
+ /**
+- * mds_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
++ * x86_clear_cpu_buffers - Buffer clearing support for different x86 CPU vulns
+  *
+  * This uses the otherwise unused and obsolete VERW instruction in
+  * combination with microcode which triggers a CPU buffer flush when the
+  * instruction is executed.
+  */
+-static __always_inline void mds_clear_cpu_buffers(void)
++static __always_inline void x86_clear_cpu_buffers(void)
+ {
+ 	static const u16 ds = __KERNEL_DS;
+ 
+@@ -601,14 +607,15 @@ static __always_inline void mds_clear_cpu_buffers(void)
+ }
+ 
+ /**
+- * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
++ * x86_idle_clear_cpu_buffers - Buffer clearing support in idle for the MDS
++ * and TSA vulnerabilities.
+  *
+  * Clear CPU buffers if the corresponding static key is enabled
+  */
+-static __always_inline void mds_idle_clear_cpu_buffers(void)
++static __always_inline void x86_idle_clear_cpu_buffers(void)
+ {
+-	if (static_branch_likely(&mds_idle_clear))
+-		mds_clear_cpu_buffers();
++	if (static_branch_likely(&cpu_buf_idle_clear))
++		x86_clear_cpu_buffers();
+ }
+ 
+ #endif /* __ASSEMBLER__ */
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index a59d6d8fc71f9f..2c5003f05920b8 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -375,6 +375,47 @@ static void bsp_determine_snp(struct cpuinfo_x86 *c)
+ #endif
+ }
+ 
++#define ZEN_MODEL_STEP_UCODE(fam, model, step, ucode) \
++	X86_MATCH_VFM_STEPS(VFM_MAKE(X86_VENDOR_AMD, fam, model), \
++			    step, step, ucode)
++
++static const struct x86_cpu_id amd_tsa_microcode[] = {
++	ZEN_MODEL_STEP_UCODE(0x19, 0x01, 0x1, 0x0a0011d7),
++	ZEN_MODEL_STEP_UCODE(0x19, 0x01, 0x2, 0x0a00123b),
++	ZEN_MODEL_STEP_UCODE(0x19, 0x08, 0x2, 0x0a00820d),
++	ZEN_MODEL_STEP_UCODE(0x19, 0x11, 0x1, 0x0a10114c),
++	ZEN_MODEL_STEP_UCODE(0x19, 0x11, 0x2, 0x0a10124c),
++	ZEN_MODEL_STEP_UCODE(0x19, 0x18, 0x1, 0x0a108109),
++	ZEN_MODEL_STEP_UCODE(0x19, 0x21, 0x0, 0x0a20102e),
++	ZEN_MODEL_STEP_UCODE(0x19, 0x21, 0x2, 0x0a201211),
++	ZEN_MODEL_STEP_UCODE(0x19, 0x44, 0x1, 0x0a404108),
++	ZEN_MODEL_STEP_UCODE(0x19, 0x50, 0x0, 0x0a500012),
++	ZEN_MODEL_STEP_UCODE(0x19, 0x61, 0x2, 0x0a60120a),
++	ZEN_MODEL_STEP_UCODE(0x19, 0x74, 0x1, 0x0a704108),
++	ZEN_MODEL_STEP_UCODE(0x19, 0x75, 0x2, 0x0a705208),
++	ZEN_MODEL_STEP_UCODE(0x19, 0x78, 0x0, 0x0a708008),
++	ZEN_MODEL_STEP_UCODE(0x19, 0x7c, 0x0, 0x0a70c008),
++	ZEN_MODEL_STEP_UCODE(0x19, 0xa0, 0x2, 0x0aa00216),
++	{},
++};
++
++static void tsa_init(struct cpuinfo_x86 *c)
++{
++	if (cpu_has(c, X86_FEATURE_HYPERVISOR))
++		return;
++
++	if (cpu_has(c, X86_FEATURE_ZEN3) ||
++	    cpu_has(c, X86_FEATURE_ZEN4)) {
++		if (x86_match_min_microcode_rev(amd_tsa_microcode))
++			setup_force_cpu_cap(X86_FEATURE_VERW_CLEAR);
++		else
++			pr_debug("%s: current revision: 0x%x\n", __func__, c->microcode);
++	} else {
++		setup_force_cpu_cap(X86_FEATURE_TSA_SQ_NO);
++		setup_force_cpu_cap(X86_FEATURE_TSA_L1_NO);
++	}
++}
++
+ static void bsp_init_amd(struct cpuinfo_x86 *c)
+ {
+ 	if (cpu_has(c, X86_FEATURE_CONSTANT_TSC)) {
+@@ -487,6 +528,9 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
+ 	}
+ 
+ 	bsp_determine_snp(c);
++
++	tsa_init(c);
++
+ 	return;
+ 
+ warn:
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 8596ce85026c0d..0f6bc28db1828a 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -50,6 +50,7 @@ static void __init l1d_flush_select_mitigation(void);
+ static void __init srso_select_mitigation(void);
+ static void __init gds_select_mitigation(void);
+ static void __init its_select_mitigation(void);
++static void __init tsa_select_mitigation(void);
+ 
+ /* The base value of the SPEC_CTRL MSR without task-specific bits set */
+ u64 x86_spec_ctrl_base;
+@@ -125,9 +126,9 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
+ DEFINE_STATIC_KEY_FALSE(switch_vcpu_ibpb);
+ EXPORT_SYMBOL_GPL(switch_vcpu_ibpb);
+ 
+-/* Control MDS CPU buffer clear before idling (halt, mwait) */
+-DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
+-EXPORT_SYMBOL_GPL(mds_idle_clear);
++/* Control CPU buffer clear before idling (halt, mwait) */
++DEFINE_STATIC_KEY_FALSE(cpu_buf_idle_clear);
++EXPORT_SYMBOL_GPL(cpu_buf_idle_clear);
+ 
+ /*
+  * Controls whether l1d flush based mitigations are enabled,
+@@ -188,6 +189,7 @@ void __init cpu_select_mitigations(void)
+ 	srso_select_mitigation();
+ 	gds_select_mitigation();
+ 	its_select_mitigation();
++	tsa_select_mitigation();
+ }
+ 
+ /*
+@@ -469,7 +471,7 @@ static void __init mmio_select_mitigation(void)
+ 	 * is required irrespective of SMT state.
+ 	 */
+ 	if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO))
+-		static_branch_enable(&mds_idle_clear);
++		static_branch_enable(&cpu_buf_idle_clear);
+ 
+ 	/*
+ 	 * Check if the system has the right microcode.
+@@ -2063,10 +2065,10 @@ static void update_mds_branch_idle(void)
+ 		return;
+ 
+ 	if (sched_smt_active()) {
+-		static_branch_enable(&mds_idle_clear);
++		static_branch_enable(&cpu_buf_idle_clear);
+ 	} else if (mmio_mitigation == MMIO_MITIGATION_OFF ||
+ 		   (x86_arch_cap_msr & ARCH_CAP_FBSDP_NO)) {
+-		static_branch_disable(&mds_idle_clear);
++		static_branch_disable(&cpu_buf_idle_clear);
+ 	}
+ }
+ 
+@@ -2074,6 +2076,94 @@ static void update_mds_branch_idle(void)
+ #define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
+ #define MMIO_MSG_SMT "MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.\n"
+ 
++#undef pr_fmt
++#define pr_fmt(fmt)	"Transient Scheduler Attacks: " fmt
++
++enum tsa_mitigations {
++	TSA_MITIGATION_NONE,
++	TSA_MITIGATION_UCODE_NEEDED,
++	TSA_MITIGATION_USER_KERNEL,
++	TSA_MITIGATION_VM,
++	TSA_MITIGATION_FULL,
++};
++
++static const char * const tsa_strings[] = {
++	[TSA_MITIGATION_NONE]		= "Vulnerable",
++	[TSA_MITIGATION_UCODE_NEEDED]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
++	[TSA_MITIGATION_USER_KERNEL]	= "Mitigation: Clear CPU buffers: user/kernel boundary",
++	[TSA_MITIGATION_VM]		= "Mitigation: Clear CPU buffers: VM",
++	[TSA_MITIGATION_FULL]		= "Mitigation: Clear CPU buffers",
++};
++
++static enum tsa_mitigations tsa_mitigation __ro_after_init =
++	IS_ENABLED(CONFIG_MITIGATION_TSA) ? TSA_MITIGATION_FULL : TSA_MITIGATION_NONE;
++
++static int __init tsa_parse_cmdline(char *str)
++{
++	if (!str)
++		return -EINVAL;
++
++	if (!strcmp(str, "off"))
++		tsa_mitigation = TSA_MITIGATION_NONE;
++	else if (!strcmp(str, "on"))
++		tsa_mitigation = TSA_MITIGATION_FULL;
++	else if (!strcmp(str, "user"))
++		tsa_mitigation = TSA_MITIGATION_USER_KERNEL;
++	else if (!strcmp(str, "vm"))
++		tsa_mitigation = TSA_MITIGATION_VM;
++	else
++		pr_err("Ignoring unknown tsa=%s option.\n", str);
++
++	return 0;
++}
++early_param("tsa", tsa_parse_cmdline);
++
++static void __init tsa_select_mitigation(void)
++{
++	if (tsa_mitigation == TSA_MITIGATION_NONE)
++		return;
++
++	if (cpu_mitigations_off() || !boot_cpu_has_bug(X86_BUG_TSA)) {
++		tsa_mitigation = TSA_MITIGATION_NONE;
++		return;
++	}
++
++	if (!boot_cpu_has(X86_FEATURE_VERW_CLEAR))
++		tsa_mitigation = TSA_MITIGATION_UCODE_NEEDED;
++
++	switch (tsa_mitigation) {
++	case TSA_MITIGATION_USER_KERNEL:
++		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
++		break;
++
++	case TSA_MITIGATION_VM:
++		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
++		break;
++
++	case TSA_MITIGATION_UCODE_NEEDED:
++		if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
++			goto out;
++
++		pr_notice("Forcing mitigation on in a VM\n");
++
++		/*
++		 * On the off-chance that microcode has been updated
++		 * on the host, enable the mitigation in the guest just
++		 * in case.
++		 */
++		fallthrough;
++	case TSA_MITIGATION_FULL:
++		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
++		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
++		break;
++	default:
++		break;
++	}
++
++out:
++	pr_info("%s\n", tsa_strings[tsa_mitigation]);
++}
++
+ void cpu_bugs_smt_update(void)
+ {
+ 	mutex_lock(&spec_ctrl_mutex);
+@@ -2130,6 +2220,24 @@ void cpu_bugs_smt_update(void)
+ 		break;
+ 	}
+ 
++	switch (tsa_mitigation) {
++	case TSA_MITIGATION_USER_KERNEL:
++	case TSA_MITIGATION_VM:
++	case TSA_MITIGATION_FULL:
++	case TSA_MITIGATION_UCODE_NEEDED:
++		/*
++		 * TSA-SQ can potentially lead to info leakage between
++		 * SMT threads.
++		 */
++		if (sched_smt_active())
++			static_branch_enable(&cpu_buf_idle_clear);
++		else
++			static_branch_disable(&cpu_buf_idle_clear);
++		break;
++	case TSA_MITIGATION_NONE:
++		break;
++	}
++
+ 	mutex_unlock(&spec_ctrl_mutex);
+ }
+ 
+@@ -3078,6 +3186,11 @@ static ssize_t gds_show_state(char *buf)
+ 	return sysfs_emit(buf, "%s\n", gds_strings[gds_mitigation]);
+ }
+ 
++static ssize_t tsa_show_state(char *buf)
++{
++	return sysfs_emit(buf, "%s\n", tsa_strings[tsa_mitigation]);
++}
++
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ 			       char *buf, unsigned int bug)
+ {
+@@ -3139,6 +3252,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 	case X86_BUG_ITS:
+ 		return its_show_state(buf);
+ 
++	case X86_BUG_TSA:
++		return tsa_show_state(buf);
++
+ 	default:
+ 		break;
+ 	}
+@@ -3223,6 +3339,11 @@ ssize_t cpu_show_indirect_target_selection(struct device *dev, struct device_att
+ {
+ 	return cpu_show_common(dev, attr, buf, X86_BUG_ITS);
+ }
++
++ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return cpu_show_common(dev, attr, buf, X86_BUG_TSA);
++}
+ #endif
+ 
+ void __warn_thunk(void)
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index bf86a0145d8bdc..c39c5c37f4e825 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1232,6 +1232,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ #define ITS		BIT(8)
+ /* CPU is affected by Indirect Target Selection, but guest-host isolation is not affected */
+ #define ITS_NATIVE_ONLY	BIT(9)
++/* CPU is affected by Transient Scheduler Attacks */
++#define TSA		BIT(10)
+ 
+ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 	VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE,	     X86_STEP_MAX,	SRBDS),
+@@ -1279,7 +1281,7 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 	VULNBL_AMD(0x16, RETBLEED),
+ 	VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO),
+ 	VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO),
+-	VULNBL_AMD(0x19, SRSO),
++	VULNBL_AMD(0x19, SRSO | TSA),
+ 	VULNBL_AMD(0x1a, SRSO),
+ 	{}
+ };
+@@ -1492,6 +1494,16 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 			setup_force_cpu_bug(X86_BUG_ITS_NATIVE_ONLY);
+ 	}
+ 
++	if (c->x86_vendor == X86_VENDOR_AMD) {
++		if (!cpu_has(c, X86_FEATURE_TSA_SQ_NO) ||
++		    !cpu_has(c, X86_FEATURE_TSA_L1_NO)) {
++			if (cpu_matches(cpu_vuln_blacklist, TSA) ||
++			    /* Enable bug on Zen guests to allow for live migration. */
++			    (cpu_has(c, X86_FEATURE_HYPERVISOR) && cpu_has(c, X86_FEATURE_ZEN)))
++				setup_force_cpu_bug(X86_BUG_TSA);
++		}
++	}
++
+ 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ 		return;
+ 
+diff --git a/arch/x86/kernel/cpu/microcode/amd_shas.c b/arch/x86/kernel/cpu/microcode/amd_shas.c
+index 2a1655b1fdd883..1fd349cfc8024a 100644
+--- a/arch/x86/kernel/cpu/microcode/amd_shas.c
++++ b/arch/x86/kernel/cpu/microcode/amd_shas.c
+@@ -231,6 +231,13 @@ static const struct patch_digest phashes[] = {
+ 		0x0d,0x5b,0x65,0x34,0x69,0xb2,0x62,0x21,
+ 	}
+  },
++ { 0xa0011d7, {
++                0x35,0x07,0xcd,0x40,0x94,0xbc,0x81,0x6b,
++                0xfc,0x61,0x56,0x1a,0xe2,0xdb,0x96,0x12,
++                0x1c,0x1c,0x31,0xb1,0x02,0x6f,0xe5,0xd2,
++                0xfe,0x1b,0x04,0x03,0x2c,0x8f,0x4c,0x36,
++        }
++ },
+  { 0xa001223, {
+ 		0xfb,0x32,0x5f,0xc6,0x83,0x4f,0x8c,0xb8,
+ 		0xa4,0x05,0xf9,0x71,0x53,0x01,0x16,0xc4,
+@@ -294,6 +301,13 @@ static const struct patch_digest phashes[] = {
+ 		0xc0,0xcd,0x33,0xf2,0x8d,0xf9,0xef,0x59,
+ 	}
+  },
++ { 0xa00123b, {
++		0xef,0xa1,0x1e,0x71,0xf1,0xc3,0x2c,0xe2,
++		0xc3,0xef,0x69,0x41,0x7a,0x54,0xca,0xc3,
++		0x8f,0x62,0x84,0xee,0xc2,0x39,0xd9,0x28,
++		0x95,0xa7,0x12,0x49,0x1e,0x30,0x71,0x72,
++	}
++ },
+  { 0xa00820c, {
+ 		0xa8,0x0c,0x81,0xc0,0xa6,0x00,0xe7,0xf3,
+ 		0x5f,0x65,0xd3,0xb9,0x6f,0xea,0x93,0x63,
+@@ -301,6 +315,13 @@ static const struct patch_digest phashes[] = {
+ 		0xe1,0x3b,0x8d,0xb2,0xf8,0x22,0x03,0xe2,
+ 	}
+  },
++ { 0xa00820d, {
++		0xf9,0x2a,0xc0,0xf4,0x9e,0xa4,0x87,0xa4,
++		0x7d,0x87,0x00,0xfd,0xab,0xda,0x19,0xca,
++		0x26,0x51,0x32,0xc1,0x57,0x91,0xdf,0xc1,
++		0x05,0xeb,0x01,0x7c,0x5a,0x95,0x21,0xb7,
++	}
++ },
+  { 0xa10113e, {
+ 		0x05,0x3c,0x66,0xd7,0xa9,0x5a,0x33,0x10,
+ 		0x1b,0xf8,0x9c,0x8f,0xed,0xfc,0xa7,0xa0,
+@@ -322,6 +343,13 @@ static const struct patch_digest phashes[] = {
+ 		0xf1,0x5e,0xb0,0xde,0xb4,0x98,0xae,0xc4,
+ 	}
+  },
++ { 0xa10114c, {
++		0x9e,0xb6,0xa2,0xd9,0x87,0x38,0xc5,0x64,
++		0xd8,0x88,0xfa,0x78,0x98,0xf9,0x6f,0x74,
++		0x39,0x90,0x1b,0xa5,0xcf,0x5e,0xb4,0x2a,
++		0x02,0xff,0xd4,0x8c,0x71,0x8b,0xe2,0xc0,
++	}
++ },
+  { 0xa10123e, {
+ 		0x03,0xb9,0x2c,0x76,0x48,0x93,0xc9,0x18,
+ 		0xfb,0x56,0xfd,0xf7,0xe2,0x1d,0xca,0x4d,
+@@ -343,6 +371,13 @@ static const struct patch_digest phashes[] = {
+ 		0x1b,0x7d,0x64,0x9d,0x4b,0x53,0x13,0x75,
+ 	}
+  },
++ { 0xa10124c, {
++		0x29,0xea,0xf1,0x2c,0xb2,0xe4,0xef,0x90,
++		0xa4,0xcd,0x1d,0x86,0x97,0x17,0x61,0x46,
++		0xfc,0x22,0xcb,0x57,0x75,0x19,0xc8,0xcc,
++		0x0c,0xf5,0xbc,0xac,0x81,0x9d,0x9a,0xd2,
++	}
++ },
+  { 0xa108108, {
+ 		0xed,0xc2,0xec,0xa1,0x15,0xc6,0x65,0xe9,
+ 		0xd0,0xef,0x39,0xaa,0x7f,0x55,0x06,0xc6,
+@@ -350,6 +385,13 @@ static const struct patch_digest phashes[] = {
+ 		0x28,0x1e,0x9c,0x59,0x69,0x99,0x4d,0x16,
+ 	}
+  },
++ { 0xa108109, {
++		0x85,0xb4,0xbd,0x7c,0x49,0xa7,0xbd,0xfa,
++		0x49,0x36,0x80,0x81,0xc5,0xb7,0x39,0x1b,
++		0x9a,0xaa,0x50,0xde,0x9b,0xe9,0x32,0x35,
++		0x42,0x7e,0x51,0x4f,0x52,0x2c,0x28,0x59,
++	}
++ },
+  { 0xa20102d, {
+ 		0xf9,0x6e,0xf2,0x32,0xd3,0x0f,0x5f,0x11,
+ 		0x59,0xa1,0xfe,0xcc,0xcd,0x9b,0x42,0x89,
+@@ -357,6 +399,13 @@ static const struct patch_digest phashes[] = {
+ 		0x8c,0xe9,0x19,0x3e,0xcc,0x3f,0x7b,0xb4,
+ 	}
+  },
++ { 0xa20102e, {
++		0xbe,0x1f,0x32,0x04,0x0d,0x3c,0x9c,0xdd,
++		0xe1,0xa4,0xbf,0x76,0x3a,0xec,0xc2,0xf6,
++		0x11,0x00,0xa7,0xaf,0x0f,0xe5,0x02,0xc5,
++		0x54,0x3a,0x1f,0x8c,0x16,0xb5,0xff,0xbe,
++	}
++ },
+  { 0xa201210, {
+ 		0xe8,0x6d,0x51,0x6a,0x8e,0x72,0xf3,0xfe,
+ 		0x6e,0x16,0xbc,0x62,0x59,0x40,0x17,0xe9,
+@@ -364,6 +413,13 @@ static const struct patch_digest phashes[] = {
+ 		0xf7,0x55,0xf0,0x13,0xbb,0x22,0xf6,0x41,
+ 	}
+  },
++ { 0xa201211, {
++		0x69,0xa1,0x17,0xec,0xd0,0xf6,0x6c,0x95,
++		0xe2,0x1e,0xc5,0x59,0x1a,0x52,0x0a,0x27,
++		0xc4,0xed,0xd5,0x59,0x1f,0xbf,0x00,0xff,
++		0x08,0x88,0xb5,0xe1,0x12,0xb6,0xcc,0x27,
++	}
++ },
+  { 0xa404107, {
+ 		0xbb,0x04,0x4e,0x47,0xdd,0x5e,0x26,0x45,
+ 		0x1a,0xc9,0x56,0x24,0xa4,0x4c,0x82,0xb0,
+@@ -371,6 +427,13 @@ static const struct patch_digest phashes[] = {
+ 		0x13,0xbc,0xc5,0x25,0xe4,0xc5,0xc3,0x99,
+ 	}
+  },
++ { 0xa404108, {
++		0x69,0x67,0x43,0x06,0xf8,0x0c,0x62,0xdc,
++		0xa4,0x21,0x30,0x4f,0x0f,0x21,0x2c,0xcb,
++		0xcc,0x37,0xf1,0x1c,0xc3,0xf8,0x2f,0x19,
++		0xdf,0x53,0x53,0x46,0xb1,0x15,0xea,0x00,
++	}
++ },
+  { 0xa500011, {
+ 		0x23,0x3d,0x70,0x7d,0x03,0xc3,0xc4,0xf4,
+ 		0x2b,0x82,0xc6,0x05,0xda,0x80,0x0a,0xf1,
+@@ -378,6 +441,13 @@ static const struct patch_digest phashes[] = {
+ 		0x11,0x5e,0x96,0x7e,0x71,0xe9,0xfc,0x74,
+ 	}
+  },
++ { 0xa500012, {
++		0xeb,0x74,0x0d,0x47,0xa1,0x8e,0x09,0xe4,
++		0x93,0x4c,0xad,0x03,0x32,0x4c,0x38,0x16,
++		0x10,0x39,0xdd,0x06,0xaa,0xce,0xd6,0x0f,
++		0x62,0x83,0x9d,0x8e,0x64,0x55,0xbe,0x63,
++	}
++ },
+  { 0xa601209, {
+ 		0x66,0x48,0xd4,0x09,0x05,0xcb,0x29,0x32,
+ 		0x66,0xb7,0x9a,0x76,0xcd,0x11,0xf3,0x30,
+@@ -385,6 +455,13 @@ static const struct patch_digest phashes[] = {
+ 		0xe8,0x73,0xe2,0xd6,0xdb,0xd2,0x77,0x1d,
+ 	}
+  },
++ { 0xa60120a, {
++		0x0c,0x8b,0x3d,0xfd,0x52,0x52,0x85,0x7d,
++		0x20,0x3a,0xe1,0x7e,0xa4,0x21,0x3b,0x7b,
++		0x17,0x86,0xae,0xac,0x13,0xb8,0x63,0x9d,
++		0x06,0x01,0xd0,0xa0,0x51,0x9a,0x91,0x2c,
++	}
++ },
+  { 0xa704107, {
+ 		0xf3,0xc6,0x58,0x26,0xee,0xac,0x3f,0xd6,
+ 		0xce,0xa1,0x72,0x47,0x3b,0xba,0x2b,0x93,
+@@ -392,6 +469,13 @@ static const struct patch_digest phashes[] = {
+ 		0x64,0x39,0x71,0x8c,0xce,0xe7,0x41,0x39,
+ 	}
+  },
++ { 0xa704108, {
++		0xd7,0x55,0x15,0x2b,0xfe,0xc4,0xbc,0x93,
++		0xec,0x91,0xa0,0xae,0x45,0xb7,0xc3,0x98,
++		0x4e,0xff,0x61,0x77,0x88,0xc2,0x70,0x49,
++		0xe0,0x3a,0x1d,0x84,0x38,0x52,0xbf,0x5a,
++	}
++ },
+  { 0xa705206, {
+ 		0x8d,0xc0,0x76,0xbd,0x58,0x9f,0x8f,0xa4,
+ 		0x12,0x9d,0x21,0xfb,0x48,0x21,0xbc,0xe7,
+@@ -399,6 +483,13 @@ static const struct patch_digest phashes[] = {
+ 		0x03,0x35,0xe9,0xbe,0xfb,0x06,0xdf,0xfc,
+ 	}
+  },
++ { 0xa705208, {
++		0x30,0x1d,0x55,0x24,0xbc,0x6b,0x5a,0x19,
++		0x0c,0x7d,0x1d,0x74,0xaa,0xd1,0xeb,0xd2,
++		0x16,0x62,0xf7,0x5b,0xe1,0x1f,0x18,0x11,
++		0x5c,0xf0,0x94,0x90,0x26,0xec,0x69,0xff,
++	}
++ },
+  { 0xa708007, {
+ 		0x6b,0x76,0xcc,0x78,0xc5,0x8a,0xa3,0xe3,
+ 		0x32,0x2d,0x79,0xe4,0xc3,0x80,0xdb,0xb2,
+@@ -406,6 +497,13 @@ static const struct patch_digest phashes[] = {
+ 		0xdf,0x92,0x73,0x84,0x87,0x3c,0x73,0x93,
+ 	}
+  },
++ { 0xa708008, {
++		0x08,0x6e,0xf0,0x22,0x4b,0x8e,0xc4,0x46,
++		0x58,0x34,0xe6,0x47,0xa2,0x28,0xfd,0xab,
++		0x22,0x3d,0xdd,0xd8,0x52,0x9e,0x1d,0x16,
++		0xfa,0x01,0x68,0x14,0x79,0x3e,0xe8,0x6b,
++	}
++ },
+  { 0xa70c005, {
+ 		0x88,0x5d,0xfb,0x79,0x64,0xd8,0x46,0x3b,
+ 		0x4a,0x83,0x8e,0x77,0x7e,0xcf,0xb3,0x0f,
+@@ -413,6 +511,13 @@ static const struct patch_digest phashes[] = {
+ 		0xee,0x49,0xac,0xe1,0x8b,0x13,0xc5,0x13,
+ 	}
+  },
++ { 0xa70c008, {
++		0x0f,0xdb,0x37,0xa1,0x10,0xaf,0xd4,0x21,
++		0x94,0x0d,0xa4,0xa2,0xe9,0x86,0x6c,0x0e,
++		0x85,0x7c,0x36,0x30,0xa3,0x3a,0x78,0x66,
++		0x18,0x10,0x60,0x0d,0x78,0x3d,0x44,0xd0,
++	}
++ },
+  { 0xaa00116, {
+ 		0xe8,0x4c,0x2c,0x88,0xa1,0xac,0x24,0x63,
+ 		0x65,0xe5,0xaa,0x2d,0x16,0xa9,0xc3,0xf5,
+@@ -441,4 +546,11 @@ static const struct patch_digest phashes[] = {
+ 		0x68,0x2f,0x46,0xee,0xfe,0xc6,0x6d,0xef,
+ 	}
+  },
++ { 0xaa00216, {
++		0x79,0xfb,0x5b,0x9f,0xb6,0xe6,0xa8,0xf5,
++		0x4e,0x7c,0x4f,0x8e,0x1d,0xad,0xd0,0x08,
++		0xc2,0x43,0x7c,0x8b,0xe6,0xdb,0xd0,0xd2,
++		0xe8,0x39,0x26,0xc1,0xe5,0x5a,0x48,0xf1,
++	}
++ },
+ };
+diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
+index 16f3ca30626ab2..8e1b087ca936c1 100644
+--- a/arch/x86/kernel/cpu/scattered.c
++++ b/arch/x86/kernel/cpu/scattered.c
+@@ -49,6 +49,8 @@ static const struct cpuid_bit cpuid_bits[] = {
+ 	{ X86_FEATURE_MBA,			CPUID_EBX,  6, 0x80000008, 0 },
+ 	{ X86_FEATURE_SMBA,			CPUID_EBX,  2, 0x80000020, 0 },
+ 	{ X86_FEATURE_BMEC,			CPUID_EBX,  3, 0x80000020, 0 },
++	{ X86_FEATURE_TSA_SQ_NO,		CPUID_ECX,  1, 0x80000021, 0 },
++	{ X86_FEATURE_TSA_L1_NO,		CPUID_ECX,  2, 0x80000021, 0 },
+ 	{ X86_FEATURE_AMD_WORKLOAD_CLASS,	CPUID_EAX, 22, 0x80000021, 0 },
+ 	{ X86_FEATURE_PERFMON_V2,		CPUID_EAX,  0, 0x80000022, 0 },
+ 	{ X86_FEATURE_AMD_LBR_V2,		CPUID_EAX,  1, 0x80000022, 0 },
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 4940fcd409251c..bc61a09b5a6608 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -912,16 +912,24 @@ static __init bool prefer_mwait_c1_over_halt(void)
+  */
+ static __cpuidle void mwait_idle(void)
+ {
++	if (need_resched())
++		return;
++
++	x86_idle_clear_cpu_buffers();
++
+ 	if (!current_set_polling_and_test()) {
+ 		const void *addr = &current_thread_info()->flags;
+ 
+ 		alternative_input("", "clflush (%[addr])", X86_BUG_CLFLUSH_MONITOR, [addr] "a" (addr));
+ 		__monitor(addr, 0, 0);
+-		if (!need_resched()) {
+-			__sti_mwait(0, 0);
+-			raw_local_irq_disable();
+-		}
++		if (need_resched())
++			goto out;
++
++		__sti_mwait(0, 0);
++		raw_local_irq_disable();
+ 	}
++
++out:
+ 	__current_clr_polling();
+ }
+ 
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 571c906ffcbfe2..77be7196772590 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -1164,6 +1164,7 @@ void kvm_set_cpu_caps(void)
+ 
+ 	kvm_cpu_cap_init(CPUID_8000_0021_EAX,
+ 		F(NO_NESTED_DATA_BP),
++		F(WRMSR_XX_BASE_NS),
+ 		/*
+ 		 * Synthesize "LFENCE is serializing" into the AMD-defined entry
+ 		 * in KVM's supported CPUID, i.e. if the feature is reported as
+@@ -1176,17 +1177,26 @@ void kvm_set_cpu_caps(void)
+ 		 */
+ 		SYNTHESIZED_F(LFENCE_RDTSC),
+ 		/* SmmPgCfgLock */
++		/* 4: Resv */
++		SYNTHESIZED_F(VERW_CLEAR),
+ 		F(NULL_SEL_CLR_BASE),
++		/* UpperAddressIgnore */
+ 		F(AUTOIBRS),
+ 		EMULATED_F(NO_SMM_CTL_MSR),
+ 		/* PrefetchCtlMsr */
+-		F(WRMSR_XX_BASE_NS),
++		/* GpOnUserCpuid */
++		/* EPSF */
+ 		SYNTHESIZED_F(SBPB),
+ 		SYNTHESIZED_F(IBPB_BRTYPE),
+ 		SYNTHESIZED_F(SRSO_NO),
+ 		F(SRSO_USER_KERNEL_NO),
+ 	);
+ 
++	kvm_cpu_cap_init(CPUID_8000_0021_ECX,
++		SYNTHESIZED_F(TSA_SQ_NO),
++		SYNTHESIZED_F(TSA_L1_NO),
++	);
++
+ 	kvm_cpu_cap_init(CPUID_8000_0022_EAX,
+ 		F(PERFMON_V2),
+ 	);
+@@ -1756,8 +1766,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ 		entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
+ 		break;
+ 	case 0x80000021:
+-		entry->ebx = entry->ecx = entry->edx = 0;
++		entry->ebx = entry->edx = 0;
+ 		cpuid_entry_override(entry, CPUID_8000_0021_EAX);
++		cpuid_entry_override(entry, CPUID_8000_0021_ECX);
+ 		break;
+ 	/* AMD Extended Performance Monitoring and Debug */
+ 	case 0x80000022: {
+diff --git a/arch/x86/kvm/reverse_cpuid.h b/arch/x86/kvm/reverse_cpuid.h
+index fde0ae9860039c..c53b92379e6e65 100644
+--- a/arch/x86/kvm/reverse_cpuid.h
++++ b/arch/x86/kvm/reverse_cpuid.h
+@@ -52,6 +52,10 @@
+ /* CPUID level 0x80000022 (EAX) */
+ #define KVM_X86_FEATURE_PERFMON_V2	KVM_X86_FEATURE(CPUID_8000_0022_EAX, 0)
+ 
++/* CPUID level 0x80000021 (ECX) */
++#define KVM_X86_FEATURE_TSA_SQ_NO	KVM_X86_FEATURE(CPUID_8000_0021_ECX, 1)
++#define KVM_X86_FEATURE_TSA_L1_NO	KVM_X86_FEATURE(CPUID_8000_0021_ECX, 2)
++
+ struct cpuid_reg {
+ 	u32 function;
+ 	u32 index;
+@@ -82,6 +86,7 @@ static const struct cpuid_reg reverse_cpuid[] = {
+ 	[CPUID_8000_0022_EAX] = {0x80000022, 0, CPUID_EAX},
+ 	[CPUID_7_2_EDX]       = {         7, 2, CPUID_EDX},
+ 	[CPUID_24_0_EBX]      = {      0x24, 0, CPUID_EBX},
++	[CPUID_8000_0021_ECX] = {0x80000021, 0, CPUID_ECX},
+ };
+ 
+ /*
+@@ -121,6 +126,8 @@ static __always_inline u32 __feature_translate(int x86_feature)
+ 	KVM_X86_TRANSLATE_FEATURE(PERFMON_V2);
+ 	KVM_X86_TRANSLATE_FEATURE(RRSBA_CTRL);
+ 	KVM_X86_TRANSLATE_FEATURE(BHI_CTRL);
++	KVM_X86_TRANSLATE_FEATURE(TSA_SQ_NO);
++	KVM_X86_TRANSLATE_FEATURE(TSA_L1_NO);
+ 	default:
+ 		return x86_feature;
+ 	}
+diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
+index 0c61153b275f64..235c4af6b692a4 100644
+--- a/arch/x86/kvm/svm/vmenter.S
++++ b/arch/x86/kvm/svm/vmenter.S
+@@ -169,6 +169,9 @@ SYM_FUNC_START(__svm_vcpu_run)
+ #endif
+ 	mov VCPU_RDI(%_ASM_DI), %_ASM_DI
+ 
++	/* Clobbers EFLAGS.ZF */
++	VM_CLEAR_CPU_BUFFERS
++
+ 	/* Enter guest mode */
+ 3:	vmrun %_ASM_AX
+ 4:
+@@ -335,6 +338,9 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
+ 	mov SVM_current_vmcb(%rdi), %rax
+ 	mov KVM_VMCB_pa(%rax), %rax
+ 
++	/* Clobbers EFLAGS.ZF */
++	VM_CLEAR_CPU_BUFFERS
++
+ 	/* Enter guest mode */
+ 1:	vmrun %rax
+ 2:
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 0b66b856d673b2..eec0aa13e00297 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -7366,7 +7366,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
+ 		vmx_l1d_flush(vcpu);
+ 	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
+ 		 kvm_arch_has_assigned_device(vcpu->kvm))
+-		mds_clear_cpu_buffers();
++		x86_clear_cpu_buffers();
+ 
+ 	vmx_disable_fb_clear(vmx);
+ 
+diff --git a/drivers/acpi/acpica/dsmethod.c b/drivers/acpi/acpica/dsmethod.c
+index e809c2aed78aed..a232746d150a75 100644
+--- a/drivers/acpi/acpica/dsmethod.c
++++ b/drivers/acpi/acpica/dsmethod.c
+@@ -483,6 +483,13 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread,
+ 		return_ACPI_STATUS(AE_NULL_OBJECT);
+ 	}
+ 
++	if (this_walk_state->num_operands < obj_desc->method.param_count) {
++		ACPI_ERROR((AE_INFO, "Missing argument for method [%4.4s]",
++			    acpi_ut_get_node_name(method_node)));
++
++		return_ACPI_STATUS(AE_AML_UNINITIALIZED_ARG);
++	}
++
+ 	/* Init for new method, possibly wait on method mutex */
+ 
+ 	status =
+diff --git a/drivers/ata/libata-acpi.c b/drivers/ata/libata-acpi.c
+index b7f0bf79552134..f2140fc06ba0f8 100644
+--- a/drivers/ata/libata-acpi.c
++++ b/drivers/ata/libata-acpi.c
+@@ -514,15 +514,19 @@ unsigned int ata_acpi_gtm_xfermask(struct ata_device *dev,
+ EXPORT_SYMBOL_GPL(ata_acpi_gtm_xfermask);
+ 
+ /**
+- * ata_acpi_cbl_80wire		-	Check for 80 wire cable
++ * ata_acpi_cbl_pata_type - Return PATA cable type
+  * @ap: Port to check
+- * @gtm: GTM data to use
+  *
+- * Return 1 if the @gtm indicates the BIOS selected an 80wire mode.
++ * Return ATA_CBL_PATA* according to the transfer mode selected by BIOS
+  */
+-int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm)
++int ata_acpi_cbl_pata_type(struct ata_port *ap)
+ {
+ 	struct ata_device *dev;
++	int ret = ATA_CBL_PATA_UNK;
++	const struct ata_acpi_gtm *gtm = ata_acpi_init_gtm(ap);
++
++	if (!gtm)
++		return ATA_CBL_PATA40;
+ 
+ 	ata_for_each_dev(dev, &ap->link, ENABLED) {
+ 		unsigned int xfer_mask, udma_mask;
+@@ -530,13 +534,17 @@ int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm)
+ 		xfer_mask = ata_acpi_gtm_xfermask(dev, gtm);
+ 		ata_unpack_xfermask(xfer_mask, NULL, NULL, &udma_mask);
+ 
+-		if (udma_mask & ~ATA_UDMA_MASK_40C)
+-			return 1;
++		ret = ATA_CBL_PATA40;
++
++		if (udma_mask & ~ATA_UDMA_MASK_40C) {
++			ret = ATA_CBL_PATA80;
++			break;
++		}
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+-EXPORT_SYMBOL_GPL(ata_acpi_cbl_80wire);
++EXPORT_SYMBOL_GPL(ata_acpi_cbl_pata_type);
+ 
+ static void ata_acpi_gtf_to_tf(struct ata_device *dev,
+ 			       const struct ata_acpi_gtf *gtf,
+diff --git a/drivers/ata/pata_cs5536.c b/drivers/ata/pata_cs5536.c
+index b811efd2cc346a..73e81e160c91fb 100644
+--- a/drivers/ata/pata_cs5536.c
++++ b/drivers/ata/pata_cs5536.c
+@@ -27,7 +27,7 @@
+ #include <scsi/scsi_host.h>
+ #include <linux/dmi.h>
+ 
+-#ifdef CONFIG_X86_32
++#if defined(CONFIG_X86) && defined(CONFIG_X86_32)
+ #include <asm/msr.h>
+ static int use_msr;
+ module_param_named(msr, use_msr, int, 0644);
+diff --git a/drivers/ata/pata_via.c b/drivers/ata/pata_via.c
+index d82728a01832b5..bb80e7800dcbe9 100644
+--- a/drivers/ata/pata_via.c
++++ b/drivers/ata/pata_via.c
+@@ -201,11 +201,9 @@ static int via_cable_detect(struct ata_port *ap) {
+ 	   two drives */
+ 	if (ata66 & (0x10100000 >> (16 * ap->port_no)))
+ 		return ATA_CBL_PATA80;
++
+ 	/* Check with ACPI so we can spot BIOS reported SATA bridges */
+-	if (ata_acpi_init_gtm(ap) &&
+-	    ata_acpi_cbl_80wire(ap, ata_acpi_init_gtm(ap)))
+-		return ATA_CBL_PATA80;
+-	return ATA_CBL_PATA40;
++	return ata_acpi_cbl_pata_type(ap);
+ }
+ 
+ static int via_pre_reset(struct ata_link *link, unsigned long deadline)
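
The libata change above replaces a 0/1 "is 80-wire" probe with one that owns its own ACPI lookup and returns the cable type directly, which is why via_cable_detect() shrinks to a single call. A simplified stand-alone sketch of the new shape; get_gtm() and the mask are invented stand-ins, not the libata code:

    #include <stdio.h>

    enum pata_cbl { CBL_PATA_UNK, CBL_PATA40, CBL_PATA80 };

    /* stand-in for ata_acpi_init_gtm(): NULL means no BIOS data */
    static const int *get_gtm(void) { return NULL; }

    static enum pata_cbl cbl_pata_type(void)
    {
        const int *gtm = get_gtm();

        if (!gtm)               /* no ACPI data: assume 40-wire, as the patch does */
            return CBL_PATA40;
        return (*gtm & 0x4) ? CBL_PATA80 : CBL_PATA40;
    }

    int main(void)
    {
        printf("cable type: %d\n", cbl_pata_type());
        return 0;
    }
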
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 50651435577c8f..a79a57dc571fb1 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -601,6 +601,7 @@ CPU_SHOW_VULN_FALLBACK(gds);
+ CPU_SHOW_VULN_FALLBACK(reg_file_data_sampling);
+ CPU_SHOW_VULN_FALLBACK(ghostwrite);
+ CPU_SHOW_VULN_FALLBACK(indirect_target_selection);
++CPU_SHOW_VULN_FALLBACK(tsa);
+ 
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+@@ -618,6 +619,7 @@ static DEVICE_ATTR(gather_data_sampling, 0444, cpu_show_gds, NULL);
+ static DEVICE_ATTR(reg_file_data_sampling, 0444, cpu_show_reg_file_data_sampling, NULL);
+ static DEVICE_ATTR(ghostwrite, 0444, cpu_show_ghostwrite, NULL);
+ static DEVICE_ATTR(indirect_target_selection, 0444, cpu_show_indirect_target_selection, NULL);
++static DEVICE_ATTR(tsa, 0444, cpu_show_tsa, NULL);
+ 
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_meltdown.attr,
+@@ -636,6 +638,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_reg_file_data_sampling.attr,
+ 	&dev_attr_ghostwrite.attr,
+ 	&dev_attr_indirect_target_selection.attr,
++	&dev_attr_tsa.attr,
+ 	NULL
+ };
+ 
+diff --git a/drivers/block/aoe/aoe.h b/drivers/block/aoe/aoe.h
+index 749ae1246f4cf8..d35caa3c69e15e 100644
+--- a/drivers/block/aoe/aoe.h
++++ b/drivers/block/aoe/aoe.h
+@@ -80,6 +80,7 @@ enum {
+ 	DEVFL_NEWSIZE = (1<<6),	/* need to update dev size in block layer */
+ 	DEVFL_FREEING = (1<<7),	/* set when device is being cleaned up */
+ 	DEVFL_FREED = (1<<8),	/* device has been cleaned up */
++	DEVFL_DEAD = (1<<9),	/* device has exceeded the aoe_deadsecs timeout */
+ };
+ 
+ enum {
+diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c
+index 92b06d1de4cc7b..6c94cfd1c480ea 100644
+--- a/drivers/block/aoe/aoecmd.c
++++ b/drivers/block/aoe/aoecmd.c
+@@ -754,7 +754,7 @@ rexmit_timer(struct timer_list *timer)
+ 
+ 	utgts = count_targets(d, NULL);
+ 
+-	if (d->flags & DEVFL_TKILL) {
++	if (d->flags & (DEVFL_TKILL | DEVFL_DEAD)) {
+ 		spin_unlock_irqrestore(&d->lock, flags);
+ 		return;
+ 	}
+@@ -786,7 +786,8 @@ rexmit_timer(struct timer_list *timer)
+ 			 * to clean up.
+ 			 */
+ 			list_splice(&flist, &d->factive[0]);
+-			aoedev_downdev(d);
++			d->flags |= DEVFL_DEAD;
++			queue_work(aoe_wq, &d->work);
+ 			goto out;
+ 		}
+ 
+@@ -898,6 +899,9 @@ aoecmd_sleepwork(struct work_struct *work)
+ {
+ 	struct aoedev *d = container_of(work, struct aoedev, work);
+ 
++	if (d->flags & DEVFL_DEAD)
++		aoedev_downdev(d);
++
+ 	if (d->flags & DEVFL_GDALLOC)
+ 		aoeblk_gdalloc(d);
+ 
+diff --git a/drivers/block/aoe/aoedev.c b/drivers/block/aoe/aoedev.c
+index 8c18034cb3d69e..9b66f8ebce50d0 100644
+--- a/drivers/block/aoe/aoedev.c
++++ b/drivers/block/aoe/aoedev.c
+@@ -200,8 +200,11 @@ aoedev_downdev(struct aoedev *d)
+ 	struct list_head *head, *pos, *nx;
+ 	struct request *rq, *rqnext;
+ 	int i;
++	unsigned long flags;
+ 
+-	d->flags &= ~DEVFL_UP;
++	spin_lock_irqsave(&d->lock, flags);
++	d->flags &= ~(DEVFL_UP | DEVFL_DEAD);
++	spin_unlock_irqrestore(&d->lock, flags);
+ 
+ 	/* clean out active and to-be-retransmitted buffers */
+ 	for (i = 0; i < NFACTIVE; i++) {
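
The three aoe hunks above stop calling aoedev_downdev() straight from rexmit_timer(): the timer path now only sets DEVFL_DEAD and queues work, and aoecmd_sleepwork() performs the teardown where taking d->lock (which aoedev_downdev() now does itself) is safe. A pthread analog of "set a flag in the constrained path, clean up in a worker" (illustrative only, not the aoe code):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    static atomic_int dead;    /* stands in for DEVFL_DEAD */

    static void *worker(void *arg)    /* stands in for aoecmd_sleepwork() */
    {
        (void)arg;
        while (!atomic_load(&dead))
            usleep(1000);
        puts("worker: doing the heavy teardown");
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_create(&t, NULL, worker, NULL);
        atomic_store(&dead, 1);    /* "timer" path: mark only, never block */
        pthread_join(t, NULL);
        return 0;
    }
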
+diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
+index b1ef4546346d44..bea3e9858aca56 100644
+--- a/drivers/dma-buf/dma-resv.c
++++ b/drivers/dma-buf/dma-resv.c
+@@ -685,11 +685,13 @@ long dma_resv_wait_timeout(struct dma_resv *obj, enum dma_resv_usage usage,
+ 	dma_resv_iter_begin(&cursor, obj, usage);
+ 	dma_resv_for_each_fence_unlocked(&cursor, fence) {
+ 
+-		ret = dma_fence_wait_timeout(fence, intr, ret);
+-		if (ret <= 0) {
+-			dma_resv_iter_end(&cursor);
+-			return ret;
+-		}
++		ret = dma_fence_wait_timeout(fence, intr, timeout);
++		if (ret <= 0)
++			break;
++
++		/* Even for zero timeout the return value is 1 */
++		if (timeout)
++			timeout = ret;
+ 	}
+ 	dma_resv_iter_end(&cursor);
+ 
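
The dma-resv bug was feeding the previous wait's return value back in as the next wait's timeout instead of passing the caller's remaining budget, so a short first wait could silently shrink every later one. The corrected shape as a standalone loop; wait_one() is a hypothetical stand-in for dma_fence_wait_timeout():

    #include <stdio.h>

    /* pretend wait: consumes up to 10 "ticks" of the budget */
    static long wait_one(long timeout)
    {
        long used = timeout < 10 ? timeout : 10;
        long left = timeout - used;

        return left > 0 ? left : 1;    /* success with 0 left still returns 1 */
    }

    int main(void)
    {
        long timeout = 25, ret = 1;

        for (int i = 0; i < 4; i++) {
            ret = wait_one(timeout);    /* pass the budget, not last ret */
            if (ret <= 0)
                break;
            if (timeout)                /* keep shrinking the budget */
                timeout = ret;
        }
        printf("remaining: %ld\n", ret);
        return 0;
    }
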
+diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c
+index fe55613a8ea993..37eb2e6c2f9f4d 100644
+--- a/drivers/firmware/arm_ffa/driver.c
++++ b/drivers/firmware/arm_ffa/driver.c
+@@ -110,7 +110,7 @@ struct ffa_drv_info {
+ 	struct work_struct sched_recv_irq_work;
+ 	struct xarray partition_info;
+ 	DECLARE_HASHTABLE(notifier_hash, ilog2(FFA_MAX_NOTIFICATIONS));
+-	struct mutex notify_lock; /* lock to protect notifier hashtable  */
++	rwlock_t notify_lock; /* lock to protect notifier hashtable  */
+ };
+ 
+ static struct ffa_drv_info *drv_info;
+@@ -1250,13 +1250,12 @@ notifier_hnode_get_by_type(u16 notify_id, enum notify_type type)
+ 	return NULL;
+ }
+ 
+-static int
+-update_notifier_cb(struct ffa_device *dev, int notify_id, void *cb,
+-		   void *cb_data, bool is_registration, bool is_framework)
++static int update_notifier_cb(struct ffa_device *dev, int notify_id,
++			      struct notifier_cb_info *cb, bool is_framework)
+ {
+ 	struct notifier_cb_info *cb_info = NULL;
+ 	enum notify_type type = ffa_notify_type_get(dev->vm_id);
+-	bool cb_found;
++	bool cb_found, is_registration = !!cb;
+ 
+ 	if (is_framework)
+ 		cb_info = notifier_hnode_get_by_vmid_uuid(notify_id, dev->vm_id,
+@@ -1270,20 +1269,10 @@ update_notifier_cb(struct ffa_device *dev, int notify_id, void *cb,
+ 		return -EINVAL;
+ 
+ 	if (is_registration) {
+-		cb_info = kzalloc(sizeof(*cb_info), GFP_KERNEL);
+-		if (!cb_info)
+-			return -ENOMEM;
+-
+-		cb_info->dev = dev;
+-		cb_info->cb_data = cb_data;
+-		if (is_framework)
+-			cb_info->fwk_cb = cb;
+-		else
+-			cb_info->cb = cb;
+-
+-		hash_add(drv_info->notifier_hash, &cb_info->hnode, notify_id);
++		hash_add(drv_info->notifier_hash, &cb->hnode, notify_id);
+ 	} else {
+ 		hash_del(&cb_info->hnode);
++		kfree(cb_info);
+ 	}
+ 
+ 	return 0;
+@@ -1300,20 +1289,19 @@ static int __ffa_notify_relinquish(struct ffa_device *dev, int notify_id,
+ 	if (notify_id >= FFA_MAX_NOTIFICATIONS)
+ 		return -EINVAL;
+ 
+-	mutex_lock(&drv_info->notify_lock);
++	write_lock(&drv_info->notify_lock);
+ 
+-	rc = update_notifier_cb(dev, notify_id, NULL, NULL, false,
+-				is_framework);
++	rc = update_notifier_cb(dev, notify_id, NULL, is_framework);
+ 	if (rc) {
+ 		pr_err("Could not unregister notification callback\n");
+-		mutex_unlock(&drv_info->notify_lock);
++		write_unlock(&drv_info->notify_lock);
+ 		return rc;
+ 	}
+ 
+ 	if (!is_framework)
+ 		rc = ffa_notification_unbind(dev->vm_id, BIT(notify_id));
+ 
+-	mutex_unlock(&drv_info->notify_lock);
++	write_unlock(&drv_info->notify_lock);
+ 
+ 	return rc;
+ }
+@@ -1334,6 +1322,7 @@ static int __ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu,
+ {
+ 	int rc;
+ 	u32 flags = 0;
++	struct notifier_cb_info *cb_info = NULL;
+ 
+ 	if (ffa_notifications_disabled())
+ 		return -EOPNOTSUPP;
+@@ -1341,28 +1330,40 @@ static int __ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu,
+ 	if (notify_id >= FFA_MAX_NOTIFICATIONS)
+ 		return -EINVAL;
+ 
+-	mutex_lock(&drv_info->notify_lock);
++	cb_info = kzalloc(sizeof(*cb_info), GFP_KERNEL);
++	if (!cb_info)
++		return -ENOMEM;
++
++	cb_info->dev = dev;
++	cb_info->cb_data = cb_data;
++	if (is_framework)
++		cb_info->fwk_cb = cb;
++	else
++		cb_info->cb = cb;
++
++	write_lock(&drv_info->notify_lock);
+ 
+ 	if (!is_framework) {
+ 		if (is_per_vcpu)
+ 			flags = PER_VCPU_NOTIFICATION_FLAG;
+ 
+ 		rc = ffa_notification_bind(dev->vm_id, BIT(notify_id), flags);
+-		if (rc) {
+-			mutex_unlock(&drv_info->notify_lock);
+-			return rc;
+-		}
++		if (rc)
++			goto out_unlock_free;
+ 	}
+ 
+-	rc = update_notifier_cb(dev, notify_id, cb, cb_data, true,
+-				is_framework);
++	rc = update_notifier_cb(dev, notify_id, cb_info, is_framework);
+ 	if (rc) {
+ 		pr_err("Failed to register callback for %d - %d\n",
+ 		       notify_id, rc);
+ 		if (!is_framework)
+ 			ffa_notification_unbind(dev->vm_id, BIT(notify_id));
+ 	}
+-	mutex_unlock(&drv_info->notify_lock);
++
++out_unlock_free:
++	write_unlock(&drv_info->notify_lock);
++	if (rc)
++		kfree(cb_info);
+ 
+ 	return rc;
+ }
+@@ -1406,9 +1407,9 @@ static void handle_notif_callbacks(u64 bitmap, enum notify_type type)
+ 		if (!(bitmap & 1))
+ 			continue;
+ 
+-		mutex_lock(&drv_info->notify_lock);
++		read_lock(&drv_info->notify_lock);
+ 		cb_info = notifier_hnode_get_by_type(notify_id, type);
+-		mutex_unlock(&drv_info->notify_lock);
++		read_unlock(&drv_info->notify_lock);
+ 
+ 		if (cb_info && cb_info->cb)
+ 			cb_info->cb(notify_id, cb_info->cb_data);
+@@ -1446,9 +1447,9 @@ static void handle_fwk_notif_callbacks(u32 bitmap)
+ 
+ 	ffa_rx_release();
+ 
+-	mutex_lock(&drv_info->notify_lock);
++	read_lock(&drv_info->notify_lock);
+ 	cb_info = notifier_hnode_get_by_vmid_uuid(notify_id, target, &uuid);
+-	mutex_unlock(&drv_info->notify_lock);
++	read_unlock(&drv_info->notify_lock);
+ 
+ 	if (cb_info && cb_info->fwk_cb)
+ 		cb_info->fwk_cb(notify_id, cb_info->cb_data, buf);
+@@ -1973,7 +1974,7 @@ static void ffa_notifications_setup(void)
+ 		goto cleanup;
+ 
+ 	hash_init(drv_info->notifier_hash);
+-	mutex_init(&drv_info->notify_lock);
++	rwlock_init(&drv_info->notify_lock);
+ 
+ 	drv_info->notif_enabled = true;
+ 	return;
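
Because rwlock_t is a spinning lock, the GFP_KERNEL allocation that update_notifier_cb() used to do under the old mutex cannot live inside the new critical section; the patch therefore allocates cb_info up front and only publishes it under write_lock(), freeing it on failure after the unlock. The same ordering with pthreads, as a minimal sketch (names invented):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
    static void *registered;    /* stands in for the notifier hashtable slot */

    static int register_cb(int want_fail)
    {
        /* allocate while no lock is held -- may sleep/fail freely here */
        void *cb = malloc(64);
        int rc = 0;

        if (!cb)
            return -1;

        pthread_rwlock_wrlock(&lock);
        if (want_fail)              /* e.g. ffa_notification_bind() failed */
            rc = -1;
        else
            registered = cb;        /* publish under the lock only */
        pthread_rwlock_unlock(&lock);

        if (rc)
            free(cb);               /* undo outside the lock */
        return rc;
    }

    int main(void)
    {
        printf("ok=%d fail=%d\n", register_cb(0), register_cb(1));
        return 0;
    }
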
+diff --git a/drivers/firmware/samsung/exynos-acpm.c b/drivers/firmware/samsung/exynos-acpm.c
+index e80cb7a8da8f23..520a9fd3b0fd32 100644
+--- a/drivers/firmware/samsung/exynos-acpm.c
++++ b/drivers/firmware/samsung/exynos-acpm.c
+@@ -430,6 +430,9 @@ int acpm_do_xfer(const struct acpm_handle *handle, const struct acpm_xfer *xfer)
+ 		return -EOPNOTSUPP;
+ 	}
+ 
++	msg.chan_id = xfer->acpm_chan_id;
++	msg.chan_type = EXYNOS_MBOX_CHAN_TYPE_DOORBELL;
++
+ 	scoped_guard(mutex, &achan->tx_lock) {
+ 		tx_front = readl(achan->tx.front);
+ 		idx = (tx_front + 1) % achan->qlen;
+@@ -446,25 +449,15 @@ int acpm_do_xfer(const struct acpm_handle *handle, const struct acpm_xfer *xfer)
+ 
+ 		/* Advance TX front. */
+ 		writel(idx, achan->tx.front);
+-	}
+ 
+-	msg.chan_id = xfer->acpm_chan_id;
+-	msg.chan_type = EXYNOS_MBOX_CHAN_TYPE_DOORBELL;
+-	ret = mbox_send_message(achan->chan, (void *)&msg);
+-	if (ret < 0)
+-		return ret;
+-
+-	ret = acpm_wait_for_message_response(achan, xfer);
++		ret = mbox_send_message(achan->chan, (void *)&msg);
++		if (ret < 0)
++			return ret;
+ 
+-	/*
+-	 * NOTE: we might prefer not to need the mailbox ticker to manage the
+-	 * transfer queueing since the protocol layer queues things by itself.
+-	 * Unfortunately, we have to kick the mailbox framework after we have
+-	 * received our message.
+-	 */
+-	mbox_client_txdone(achan->chan, ret);
++		mbox_client_txdone(achan->chan, 0);
++	}
+ 
+-	return ret;
++	return acpm_wait_for_message_response(achan, xfer);
+ }
+ 
+ /**
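
The exynos-acpm hunk moves mbox_send_message() (and the immediate mbox_client_txdone()) inside the scoped tx_lock guard, presumably so the TX-ring advance and the doorbell become one atomic step and two senders cannot ring in a different order than they queued. The ordering idea in miniature, with a mutex (illustrative only):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t tx_lock = PTHREAD_MUTEX_INITIALIZER;
    static int ring_front;

    static void send_msg(int payload)
    {
        pthread_mutex_lock(&tx_lock);
        int slot = ring_front++;    /* advance the ring ... */
        /* ... and ring the doorbell before anyone else can queue */
        printf("slot %d: doorbell for payload %d\n", slot, payload);
        pthread_mutex_unlock(&tx_lock);
    }

    int main(void)
    {
        send_msg(1);
        send_msg(2);
        return 0;
    }
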
+diff --git a/drivers/gpu/drm/bridge/aux-hpd-bridge.c b/drivers/gpu/drm/bridge/aux-hpd-bridge.c
+index 48f297c78ee67c..1ec4f16b9939d2 100644
+--- a/drivers/gpu/drm/bridge/aux-hpd-bridge.c
++++ b/drivers/gpu/drm/bridge/aux-hpd-bridge.c
+@@ -64,10 +64,11 @@ struct auxiliary_device *devm_drm_dp_hpd_bridge_alloc(struct device *parent, str
+ 	adev->id = ret;
+ 	adev->name = "dp_hpd_bridge";
+ 	adev->dev.parent = parent;
+-	adev->dev.of_node = of_node_get(parent->of_node);
+ 	adev->dev.release = drm_aux_hpd_bridge_release;
+ 	adev->dev.platform_data = of_node_get(np);
+ 
++	device_set_of_node_from_dev(&adev->dev, parent);
++
+ 	ret = auxiliary_device_init(adev);
+ 	if (ret) {
+ 		of_node_put(adev->dev.platform_data);
+diff --git a/drivers/gpu/drm/bridge/panel.c b/drivers/gpu/drm/bridge/panel.c
+index 258c85c83a2834..6c58d0759f2271 100644
+--- a/drivers/gpu/drm/bridge/panel.c
++++ b/drivers/gpu/drm/bridge/panel.c
+@@ -298,6 +298,7 @@ struct drm_bridge *drm_panel_bridge_add_typed(struct drm_panel *panel,
+ 	panel_bridge->bridge.of_node = panel->dev->of_node;
+ 	panel_bridge->bridge.ops = DRM_BRIDGE_OP_MODES;
+ 	panel_bridge->bridge.type = connector_type;
++	panel_bridge->bridge.pre_enable_prev_first = panel->prepare_prev_first;
+ 
+ 	drm_bridge_add(&panel_bridge->bridge);
+ 
+@@ -412,8 +413,6 @@ struct drm_bridge *devm_drm_panel_bridge_add_typed(struct device *dev,
+ 		return bridge;
+ 	}
+ 
+-	bridge->pre_enable_prev_first = panel->prepare_prev_first;
+-
+ 	*ptr = bridge;
+ 	devres_add(dev, ptr);
+ 
+@@ -455,8 +454,6 @@ struct drm_bridge *drmm_panel_bridge_add(struct drm_device *drm,
+ 	if (ret)
+ 		return ERR_PTR(ret);
+ 
+-	bridge->pre_enable_prev_first = panel->prepare_prev_first;
+-
+ 	return bridge;
+ }
+ EXPORT_SYMBOL(drmm_panel_bridge_add);
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.c b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+index c394cc702d7d42..205c238cc73a6d 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_fimd.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+@@ -187,6 +187,7 @@ struct fimd_context {
+ 	u32				i80ifcon;
+ 	bool				i80_if;
+ 	bool				suspended;
++	bool				dp_clk_enabled;
+ 	wait_queue_head_t		wait_vsync_queue;
+ 	atomic_t			wait_vsync_event;
+ 	atomic_t			win_updated;
+@@ -1047,7 +1048,18 @@ static void fimd_dp_clock_enable(struct exynos_drm_clk *clk, bool enable)
+ 	struct fimd_context *ctx = container_of(clk, struct fimd_context,
+ 						dp_clk);
+ 	u32 val = enable ? DP_MIE_CLK_DP_ENABLE : DP_MIE_CLK_DISABLE;
++
++	if (enable == ctx->dp_clk_enabled)
++		return;
++
++	if (enable)
++		pm_runtime_resume_and_get(ctx->dev);
++
++	ctx->dp_clk_enabled = enable;
+ 	writel(val, ctx->regs + DP_MIE_CLKCON);
++
++	if (!enable)
++		pm_runtime_put(ctx->dev);
+ }
+ 
+ static const struct exynos_drm_crtc_ops fimd_crtc_ops = {
+diff --git a/drivers/gpu/drm/i915/gt/intel_gsc.c b/drivers/gpu/drm/i915/gt/intel_gsc.c
+index 1e925c75fb080d..c43febc862dc3d 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gsc.c
++++ b/drivers/gpu/drm/i915/gt/intel_gsc.c
+@@ -284,7 +284,7 @@ static void gsc_irq_handler(struct intel_gt *gt, unsigned int intf_id)
+ 	if (gt->gsc.intf[intf_id].irq < 0)
+ 		return;
+ 
+-	ret = generic_handle_irq(gt->gsc.intf[intf_id].irq);
++	ret = generic_handle_irq_safe(gt->gsc.intf[intf_id].irq);
+ 	if (ret)
+ 		gt_err_ratelimited(gt, "error handling GSC irq: %d\n", ret);
+ }
+diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
+index 6e9977b2d18029..44a4b6723582ff 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c
++++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
+@@ -604,7 +604,6 @@ static int ring_context_alloc(struct intel_context *ce)
+ 	/* One ringbuffer to rule them all */
+ 	GEM_BUG_ON(!engine->legacy.ring);
+ 	ce->ring = engine->legacy.ring;
+-	ce->timeline = intel_timeline_get(engine->legacy.timeline);
+ 
+ 	GEM_BUG_ON(ce->state);
+ 	if (engine->context_size) {
+@@ -617,6 +616,8 @@ static int ring_context_alloc(struct intel_context *ce)
+ 		ce->state = vma;
+ 	}
+ 
++	ce->timeline = intel_timeline_get(engine->legacy.timeline);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c b/drivers/gpu/drm/i915/selftests/i915_request.c
+index 88870844b5bd90..2fb7a9e7efec67 100644
+--- a/drivers/gpu/drm/i915/selftests/i915_request.c
++++ b/drivers/gpu/drm/i915/selftests/i915_request.c
+@@ -73,8 +73,8 @@ static int igt_add_request(void *arg)
+ 	/* Basic preliminary test to create a request and let it loose! */
+ 
+ 	request = mock_request(rcs0(i915)->kernel_context, HZ / 10);
+-	if (!request)
+-		return -ENOMEM;
++	if (IS_ERR(request))
++		return PTR_ERR(request);
+ 
+ 	i915_request_add(request);
+ 
+@@ -91,8 +91,8 @@ static int igt_wait_request(void *arg)
+ 	/* Submit a request, then wait upon it */
+ 
+ 	request = mock_request(rcs0(i915)->kernel_context, T);
+-	if (!request)
+-		return -ENOMEM;
++	if (IS_ERR(request))
++		return PTR_ERR(request);
+ 
+ 	i915_request_get(request);
+ 
+@@ -160,8 +160,8 @@ static int igt_fence_wait(void *arg)
+ 	/* Submit a request, treat it as a fence and wait upon it */
+ 
+ 	request = mock_request(rcs0(i915)->kernel_context, T);
+-	if (!request)
+-		return -ENOMEM;
++	if (IS_ERR(request))
++		return PTR_ERR(request);
+ 
+ 	if (dma_fence_wait_timeout(&request->fence, false, T) != -ETIME) {
+ 		pr_err("fence wait success before submit (expected timeout)!\n");
+@@ -219,8 +219,8 @@ static int igt_request_rewind(void *arg)
+ 	GEM_BUG_ON(IS_ERR(ce));
+ 	request = mock_request(ce, 2 * HZ);
+ 	intel_context_put(ce);
+-	if (!request) {
+-		err = -ENOMEM;
++	if (IS_ERR(request)) {
++		err = PTR_ERR(request);
+ 		goto err_context_0;
+ 	}
+ 
+@@ -237,8 +237,8 @@ static int igt_request_rewind(void *arg)
+ 	GEM_BUG_ON(IS_ERR(ce));
+ 	vip = mock_request(ce, 0);
+ 	intel_context_put(ce);
+-	if (!vip) {
+-		err = -ENOMEM;
++	if (IS_ERR(vip)) {
++		err = PTR_ERR(vip);
+ 		goto err_context_1;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/i915/selftests/mock_request.c b/drivers/gpu/drm/i915/selftests/mock_request.c
+index 09f747228dff57..1b0cf073e9643f 100644
+--- a/drivers/gpu/drm/i915/selftests/mock_request.c
++++ b/drivers/gpu/drm/i915/selftests/mock_request.c
+@@ -35,7 +35,7 @@ mock_request(struct intel_context *ce, unsigned long delay)
+ 	/* NB the i915->requests slab cache is enlarged to fit mock_request */
+ 	request = intel_context_create_request(ce);
+ 	if (IS_ERR(request))
+-		return NULL;
++		return request;
+ 
+ 	request->mock.delay = delay;
+ 	return request;
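
mock_request() used to collapse every failure into NULL, losing the errno carried by intel_context_create_request(); the selftest hunks above switch all callers to the kernel's ERR_PTR convention so the real code propagates. That convention, reimplemented standalone (simplified from include/linux/err.h):

    #include <errno.h>
    #include <stdio.h>

    #define MAX_ERRNO 4095

    static inline void *ERR_PTR(long error) { return (void *)error; }
    static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
    static inline int IS_ERR(const void *ptr)
    {
        /* small negative values map into the top page of the address space */
        return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
    }

    static void *make_request(int fail)
    {
        if (fail)
            return ERR_PTR(-ENOMEM);    /* keep the real errno */
        static int obj;
        return &obj;
    }

    int main(void)
    {
        void *req = make_request(1);

        if (IS_ERR(req))                /* caller recovers the code */
            printf("error %ld\n", PTR_ERR(req));
        return 0;
    }
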
+diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
+index 3e9aa2cc38ef99..d4f71bb54e84c0 100644
+--- a/drivers/gpu/drm/msm/msm_gem_submit.c
++++ b/drivers/gpu/drm/msm/msm_gem_submit.c
+@@ -85,6 +85,15 @@ void __msm_gem_submit_destroy(struct kref *kref)
+ 			container_of(kref, struct msm_gem_submit, ref);
+ 	unsigned i;
+ 
++	/*
++	 * In error paths, we could unref the submit without calling
++	 * drm_sched_entity_push_job(), so msm_job_free() will never
++	 * get called.  Since drm_sched_job_cleanup() will NULL out
++	 * s_fence, we can use that to detect this case.
++	 */
++	if (submit->base.s_fence)
++		drm_sched_job_cleanup(&submit->base);
++
+ 	if (submit->fence_id) {
+ 		spin_lock(&submit->queue->idr_lock);
+ 		idr_remove(&submit->queue->fence_idr, submit->fence_id);
+@@ -649,6 +658,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
+ 	struct msm_ringbuffer *ring;
+ 	struct msm_submit_post_dep *post_deps = NULL;
+ 	struct drm_syncobj **syncobjs_to_reset = NULL;
++	struct sync_file *sync_file = NULL;
+ 	int out_fence_fd = -1;
+ 	unsigned i;
+ 	int ret;
+@@ -858,7 +868,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
+ 	}
+ 
+ 	if (ret == 0 && args->flags & MSM_SUBMIT_FENCE_FD_OUT) {
+-		struct sync_file *sync_file = sync_file_create(submit->user_fence);
++		sync_file = sync_file_create(submit->user_fence);
+ 		if (!sync_file) {
+ 			ret = -ENOMEM;
+ 		} else {
+@@ -892,8 +902,11 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
+ out_unlock:
+ 	mutex_unlock(&queue->lock);
+ out_post_unlock:
+-	if (ret && (out_fence_fd >= 0))
++	if (ret && (out_fence_fd >= 0)) {
+ 		put_unused_fd(out_fence_fd);
++		if (sync_file)
++			fput(sync_file->file);
++	}
+ 
+ 	if (!IS_ERR_OR_NULL(submit)) {
+ 		msm_gem_submit_put(submit);
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
+index de4a9e18f6a903..7b6ce3d7b0af55 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.h
++++ b/drivers/gpu/drm/v3d/v3d_drv.h
+@@ -101,6 +101,12 @@ enum v3d_gen {
+ 	V3D_GEN_71 = 71,
+ };
+ 
++enum v3d_irq {
++	V3D_CORE_IRQ,
++	V3D_HUB_IRQ,
++	V3D_MAX_IRQS,
++};
++
+ struct v3d_dev {
+ 	struct drm_device drm;
+ 
+@@ -112,6 +118,8 @@ struct v3d_dev {
+ 
+ 	bool single_irq_line;
+ 
++	int irq[V3D_MAX_IRQS];
++
+ 	struct v3d_perfmon_info perfmon_info;
+ 
+ 	void __iomem *hub_regs;
+diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
+index 1ea6d3832c2212..58d63b3e780432 100644
+--- a/drivers/gpu/drm/v3d/v3d_gem.c
++++ b/drivers/gpu/drm/v3d/v3d_gem.c
+@@ -118,6 +118,8 @@ v3d_reset(struct v3d_dev *v3d)
+ 	if (false)
+ 		v3d_idle_axi(v3d, 0);
+ 
++	v3d_irq_disable(v3d);
++
+ 	v3d_idle_gca(v3d);
+ 	v3d_reset_v3d(v3d);
+ 
+diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c
+index 2cca5d3a26a22c..a515a301e48029 100644
+--- a/drivers/gpu/drm/v3d/v3d_irq.c
++++ b/drivers/gpu/drm/v3d/v3d_irq.c
+@@ -260,7 +260,7 @@ v3d_hub_irq(int irq, void *arg)
+ int
+ v3d_irq_init(struct v3d_dev *v3d)
+ {
+-	int irq1, ret, core;
++	int irq, ret, core;
+ 
+ 	INIT_WORK(&v3d->overflow_mem_work, v3d_overflow_mem_work);
+ 
+@@ -271,17 +271,24 @@ v3d_irq_init(struct v3d_dev *v3d)
+ 		V3D_CORE_WRITE(core, V3D_CTL_INT_CLR, V3D_CORE_IRQS(v3d->ver));
+ 	V3D_WRITE(V3D_HUB_INT_CLR, V3D_HUB_IRQS(v3d->ver));
+ 
+-	irq1 = platform_get_irq_optional(v3d_to_pdev(v3d), 1);
+-	if (irq1 == -EPROBE_DEFER)
+-		return irq1;
+-	if (irq1 > 0) {
+-		ret = devm_request_irq(v3d->drm.dev, irq1,
++	irq = platform_get_irq_optional(v3d_to_pdev(v3d), 1);
++	if (irq == -EPROBE_DEFER)
++		return irq;
++	if (irq > 0) {
++		v3d->irq[V3D_CORE_IRQ] = irq;
++
++		ret = devm_request_irq(v3d->drm.dev, v3d->irq[V3D_CORE_IRQ],
+ 				       v3d_irq, IRQF_SHARED,
+ 				       "v3d_core0", v3d);
+ 		if (ret)
+ 			goto fail;
+-		ret = devm_request_irq(v3d->drm.dev,
+-				       platform_get_irq(v3d_to_pdev(v3d), 0),
++
++		irq = platform_get_irq(v3d_to_pdev(v3d), 0);
++		if (irq < 0)
++			return irq;
++		v3d->irq[V3D_HUB_IRQ] = irq;
++
++		ret = devm_request_irq(v3d->drm.dev, v3d->irq[V3D_HUB_IRQ],
+ 				       v3d_hub_irq, IRQF_SHARED,
+ 				       "v3d_hub", v3d);
+ 		if (ret)
+@@ -289,8 +296,12 @@ v3d_irq_init(struct v3d_dev *v3d)
+ 	} else {
+ 		v3d->single_irq_line = true;
+ 
+-		ret = devm_request_irq(v3d->drm.dev,
+-				       platform_get_irq(v3d_to_pdev(v3d), 0),
++		irq = platform_get_irq(v3d_to_pdev(v3d), 0);
++		if (irq < 0)
++			return irq;
++		v3d->irq[V3D_CORE_IRQ] = irq;
++
++		ret = devm_request_irq(v3d->drm.dev, v3d->irq[V3D_CORE_IRQ],
+ 				       v3d_irq, IRQF_SHARED,
+ 				       "v3d", v3d);
+ 		if (ret)
+@@ -331,6 +342,12 @@ v3d_irq_disable(struct v3d_dev *v3d)
+ 		V3D_CORE_WRITE(core, V3D_CTL_INT_MSK_SET, ~0);
+ 	V3D_WRITE(V3D_HUB_INT_MSK_SET, ~0);
+ 
++	/* Finish any interrupt handler still in flight. */
++	for (int i = 0; i < V3D_MAX_IRQS; i++) {
++		if (v3d->irq[i])
++			synchronize_irq(v3d->irq[i]);
++	}
++
+ 	/* Clear any pending interrupts we might have left. */
+ 	for (core = 0; core < v3d->cores; core++)
+ 		V3D_CORE_WRITE(core, V3D_CTL_INT_CLR, V3D_CORE_IRQS(v3d->ver));
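
v3d previously requested its interrupts without recording the numbers, so v3d_irq_disable() had nothing to pass to synchronize_irq() and a handler could still be in flight when v3d_reset() proceeded; the new v3d->irq[] array fixes that. The general shape, remembering handles so teardown can wait, as a pthread analog (illustrative, join standing in for synchronize_irq()):

    #include <pthread.h>
    #include <stdio.h>

    enum { CORE_IRQ, HUB_IRQ, MAX_IRQS };

    static pthread_t irq_thread[MAX_IRQS];    /* stands in for v3d->irq[] */

    static void *handler(void *arg) { (void)arg; return NULL; }

    int main(void)
    {
        for (int i = 0; i < MAX_IRQS; i++)
            pthread_create(&irq_thread[i], NULL, handler, NULL);

        /* teardown: wait for every in-flight handler before touching HW */
        for (int i = 0; i < MAX_IRQS; i++)
            pthread_join(irq_thread[i], NULL);

        puts("safe to reset");
        return 0;
    }
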
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+index 0f32471c853325..55c822a61b9add 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+@@ -769,7 +769,7 @@ static int vmw_setup_pci_resources(struct vmw_private *dev,
+ 		dev->fifo_mem = devm_memremap(dev->drm.dev,
+ 					      fifo_start,
+ 					      fifo_size,
+-					      MEMREMAP_WB);
++					      MEMREMAP_WB | MEMREMAP_DEC);
+ 
+ 		if (IS_ERR(dev->fifo_mem)) {
+ 			drm_err(&dev->drm,
+diff --git a/drivers/gpu/drm/xe/Kconfig b/drivers/gpu/drm/xe/Kconfig
+index 9a46aafcb33bae..0dc3512876b3a9 100644
+--- a/drivers/gpu/drm/xe/Kconfig
++++ b/drivers/gpu/drm/xe/Kconfig
+@@ -1,7 +1,8 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ config DRM_XE
+ 	tristate "Intel Xe Graphics"
+-	depends on DRM && PCI && MMU && (m || (y && KUNIT=y))
++	depends on DRM && PCI && MMU
++	depends on KUNIT || !KUNIT
+ 	depends on INTEL_VSEC || !INTEL_VSEC
+ 	depends on X86_PLATFORM_DEVICES || !(X86 && ACPI)
+ 	select INTERVAL_TREE
+diff --git a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
+index d633f1c739e43a..7de8f827281fcd 100644
+--- a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
++++ b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
+@@ -367,6 +367,7 @@ enum xe_guc_klv_ids {
+ 	GUC_WA_KLV_NP_RD_WRITE_TO_CLEAR_RCSM_AT_CGP_LATE_RESTORE			= 0x9008,
+ 	GUC_WORKAROUND_KLV_ID_BACK_TO_BACK_RCS_ENGINE_RESET				= 0x9009,
+ 	GUC_WA_KLV_WAKE_POWER_DOMAINS_FOR_OUTBOUND_MMIO					= 0x900a,
++	GUC_WA_KLV_RESET_BB_STACK_PTR_ON_VF_SWITCH					= 0x900b,
+ };
+ 
+ #endif
+diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
+index 00191227bc95c5..f3123914b1abf4 100644
+--- a/drivers/gpu/drm/xe/xe_device.c
++++ b/drivers/gpu/drm/xe/xe_device.c
+@@ -38,6 +38,7 @@
+ #include "xe_gt_printk.h"
+ #include "xe_gt_sriov_vf.h"
+ #include "xe_guc.h"
++#include "xe_guc_pc.h"
+ #include "xe_hw_engine_group.h"
+ #include "xe_hwmon.h"
+ #include "xe_irq.h"
+@@ -972,38 +973,15 @@ void xe_device_wmb(struct xe_device *xe)
+ 		xe_mmio_write32(xe_root_tile_mmio(xe), VF_CAP_REG, 0);
+ }
+ 
+-/**
+- * xe_device_td_flush() - Flush transient L3 cache entries
+- * @xe: The device
+- *
+- * Display engine has direct access to memory and is never coherent with L3/L4
+- * caches (or CPU caches), however KMD is responsible for specifically flushing
+- * transient L3 GPU cache entries prior to the flip sequence to ensure scanout
+- * can happen from such a surface without seeing corruption.
+- *
+- * Display surfaces can be tagged as transient by mapping it using one of the
+- * various L3:XD PAT index modes on Xe2.
+- *
+- * Note: On non-discrete xe2 platforms, like LNL, the entire L3 cache is flushed
+- * at the end of each submission via PIPE_CONTROL for compute/render, since SA
+- * Media is not coherent with L3 and we want to support render-vs-media
+- * usescases. For other engines like copy/blt the HW internally forces uncached
+- * behaviour, hence why we can skip the TDF on such platforms.
++/*
++ * Issue a TRANSIENT_FLUSH_REQUEST and wait for completion on each gt.
+  */
+-void xe_device_td_flush(struct xe_device *xe)
++static void tdf_request_sync(struct xe_device *xe)
+ {
+-	struct xe_gt *gt;
+ 	unsigned int fw_ref;
++	struct xe_gt *gt;
+ 	u8 id;
+ 
+-	if (!IS_DGFX(xe) || GRAPHICS_VER(xe) < 20)
+-		return;
+-
+-	if (XE_WA(xe_root_mmio_gt(xe), 16023588340)) {
+-		xe_device_l2_flush(xe);
+-		return;
+-	}
+-
+ 	for_each_gt(gt, xe, id) {
+ 		if (xe_gt_is_media_type(gt))
+ 			continue;
+@@ -1013,6 +991,7 @@ void xe_device_td_flush(struct xe_device *xe)
+ 			return;
+ 
+ 		xe_mmio_write32(&gt->mmio, XE2_TDF_CTRL, TRANSIENT_FLUSH_REQUEST);
++
+ 		/*
+ 		 * FIXME: We can likely do better here with our choice of
+ 		 * timeout. Currently we just assume the worst case, i.e. 150us,
+@@ -1043,15 +1022,52 @@ void xe_device_l2_flush(struct xe_device *xe)
+ 		return;
+ 
+ 	spin_lock(&gt->global_invl_lock);
+-	xe_mmio_write32(&gt->mmio, XE2_GLOBAL_INVAL, 0x1);
+ 
++	xe_mmio_write32(&gt->mmio, XE2_GLOBAL_INVAL, 0x1);
+ 	if (xe_mmio_wait32(&gt->mmio, XE2_GLOBAL_INVAL, 0x1, 0x0, 500, NULL, true))
+ 		xe_gt_err_once(gt, "Global invalidation timeout\n");
++
+ 	spin_unlock(&gt->global_invl_lock);
+ 
+ 	xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ }
+ 
++/**
++ * xe_device_td_flush() - Flush transient L3 cache entries
++ * @xe: The device
++ *
++ * Display engine has direct access to memory and is never coherent with L3/L4
++ * caches (or CPU caches), however KMD is responsible for specifically flushing
++ * transient L3 GPU cache entries prior to the flip sequence to ensure scanout
++ * can happen from such a surface without seeing corruption.
++ *
++ * Display surfaces can be tagged as transient by mapping it using one of the
++ * various L3:XD PAT index modes on Xe2.
++ *
++ * Note: On non-discrete xe2 platforms, like LNL, the entire L3 cache is flushed
++ * at the end of each submission via PIPE_CONTROL for compute/render, since SA
++ * Media is not coherent with L3 and we want to support render-vs-media
++ * use cases. For other engines like copy/blt the HW internally forces uncached
++ * behaviour, which is why we can skip the TDF on such platforms.
++ */
++void xe_device_td_flush(struct xe_device *xe)
++{
++	struct xe_gt *root_gt;
++
++	if (!IS_DGFX(xe) || GRAPHICS_VER(xe) < 20)
++		return;
++
++	root_gt = xe_root_mmio_gt(xe);
++	if (XE_WA(root_gt, 16023588340)) {
++		/* A transient flush is not sufficient: flush the L2 */
++		xe_device_l2_flush(xe);
++	} else {
++		xe_guc_pc_apply_flush_freq_limit(&root_gt->uc.guc.pc);
++		tdf_request_sync(xe);
++		xe_guc_pc_remove_flush_freq_limit(&root_gt->uc.guc.pc);
++	}
++}
++
+ u32 xe_device_ccs_bytes(struct xe_device *xe, u64 size)
+ {
+ 	return xe_device_has_flat_ccs(xe) ?
+diff --git a/drivers/gpu/drm/xe/xe_guc_ads.c b/drivers/gpu/drm/xe/xe_guc_ads.c
+index 7031542a70cebc..1fccdef9e532e8 100644
+--- a/drivers/gpu/drm/xe/xe_guc_ads.c
++++ b/drivers/gpu/drm/xe/xe_guc_ads.c
+@@ -376,6 +376,11 @@ static void guc_waklv_init(struct xe_guc_ads *ads)
+ 					GUC_WORKAROUND_KLV_ID_BACK_TO_BACK_RCS_ENGINE_RESET,
+ 					&offset, &remain);
+ 
++	if (GUC_FIRMWARE_VER(&gt->uc.guc) >= MAKE_GUC_VER(70, 44, 0) && XE_WA(gt, 16026508708))
++		guc_waklv_enable_simple(ads,
++					GUC_WA_KLV_RESET_BB_STACK_PTR_ON_VF_SWITCH,
++					&offset, &remain);
++
+ 	size = guc_ads_waklv_size(ads) - remain;
+ 	if (!size)
+ 		return;
+diff --git a/drivers/gpu/drm/xe/xe_guc_pc.c b/drivers/gpu/drm/xe/xe_guc_pc.c
+index a7b8bacfe64efc..1c7b044413f262 100644
+--- a/drivers/gpu/drm/xe/xe_guc_pc.c
++++ b/drivers/gpu/drm/xe/xe_guc_pc.c
+@@ -5,8 +5,11 @@
+ 
+ #include "xe_guc_pc.h"
+ 
++#include <linux/cleanup.h>
+ #include <linux/delay.h>
++#include <linux/jiffies.h>
+ #include <linux/ktime.h>
++#include <linux/wait_bit.h>
+ 
+ #include <drm/drm_managed.h>
+ #include <drm/drm_print.h>
+@@ -51,9 +54,12 @@
+ 
+ #define LNL_MERT_FREQ_CAP	800
+ #define BMG_MERT_FREQ_CAP	2133
++#define BMG_MIN_FREQ		1200
++#define BMG_MERT_FLUSH_FREQ_CAP	2600
+ 
+ #define SLPC_RESET_TIMEOUT_MS 5 /* roughly 5ms, but no need for precision */
+ #define SLPC_RESET_EXTENDED_TIMEOUT_MS 1000 /* To be used only at pc_start */
++#define SLPC_ACT_FREQ_TIMEOUT_MS 100
+ 
+ /**
+  * DOC: GuC Power Conservation (PC)
+@@ -141,6 +147,36 @@ static int wait_for_pc_state(struct xe_guc_pc *pc,
+ 	return -ETIMEDOUT;
+ }
+ 
++static int wait_for_flush_complete(struct xe_guc_pc *pc)
++{
++	const unsigned long timeout = msecs_to_jiffies(30);
++
++	if (!wait_var_event_timeout(&pc->flush_freq_limit,
++				    !atomic_read(&pc->flush_freq_limit),
++				    timeout))
++		return -ETIMEDOUT;
++
++	return 0;
++}
++
++static int wait_for_act_freq_limit(struct xe_guc_pc *pc, u32 freq)
++{
++	int timeout_us = SLPC_ACT_FREQ_TIMEOUT_MS * USEC_PER_MSEC;
++	int slept, wait = 10;
++
++	for (slept = 0; slept < timeout_us;) {
++		if (xe_guc_pc_get_act_freq(pc) <= freq)
++			return 0;
++
++		usleep_range(wait, wait << 1);
++		slept += wait;
++		wait <<= 1;
++		if (slept + wait > timeout_us)
++			wait = timeout_us - slept;
++	}
++
++	return -ETIMEDOUT;
++}
++
+ static int pc_action_reset(struct xe_guc_pc *pc)
+ {
+ 	struct xe_guc_ct *ct = pc_to_ct(pc);
+@@ -538,6 +574,25 @@ u32 xe_guc_pc_get_rpn_freq(struct xe_guc_pc *pc)
+ 	return pc->rpn_freq;
+ }
+ 
++static int xe_guc_pc_get_min_freq_locked(struct xe_guc_pc *pc, u32 *freq)
++{
++	int ret;
++
++	lockdep_assert_held(&pc->freq_lock);
++
++	/* Might be in the middle of a gt reset */
++	if (!pc->freq_ready)
++		return -EAGAIN;
++
++	ret = pc_action_query_task_state(pc);
++	if (ret)
++		return ret;
++
++	*freq = pc_get_min_freq(pc);
++
++	return 0;
++}
++
+ /**
+  * xe_guc_pc_get_min_freq - Get the min operational frequency
+  * @pc: The GuC PC
+@@ -547,27 +602,29 @@ u32 xe_guc_pc_get_rpn_freq(struct xe_guc_pc *pc)
+  *         -EAGAIN if GuC PC not ready (likely in middle of a reset).
+  */
+ int xe_guc_pc_get_min_freq(struct xe_guc_pc *pc, u32 *freq)
++{
++	guard(mutex)(&pc->freq_lock);
++
++	return xe_guc_pc_get_min_freq_locked(pc, freq);
++}
++
++static int xe_guc_pc_set_min_freq_locked(struct xe_guc_pc *pc, u32 freq)
+ {
+ 	int ret;
+ 
+-	xe_device_assert_mem_access(pc_to_xe(pc));
++	lockdep_assert_held(&pc->freq_lock);
+ 
+-	mutex_lock(&pc->freq_lock);
+-	if (!pc->freq_ready) {
+-		/* Might be in the middle of a gt reset */
+-		ret = -EAGAIN;
+-		goto out;
+-	}
++	/* Might be in the middle of a gt reset */
++	if (!pc->freq_ready)
++		return -EAGAIN;
+ 
+-	ret = pc_action_query_task_state(pc);
++	ret = pc_set_min_freq(pc, freq);
+ 	if (ret)
+-		goto out;
++		return ret;
+ 
+-	*freq = pc_get_min_freq(pc);
++	pc->user_requested_min = freq;
+ 
+-out:
+-	mutex_unlock(&pc->freq_lock);
+-	return ret;
++	return 0;
+ }
+ 
+ /**
+@@ -580,25 +637,29 @@ int xe_guc_pc_get_min_freq(struct xe_guc_pc *pc, u32 *freq)
+  *         -EINVAL if value out of bounds.
+  */
+ int xe_guc_pc_set_min_freq(struct xe_guc_pc *pc, u32 freq)
++{
++	guard(mutex)(&pc->freq_lock);
++
++	return xe_guc_pc_set_min_freq_locked(pc, freq);
++}
++
++static int xe_guc_pc_get_max_freq_locked(struct xe_guc_pc *pc, u32 *freq)
+ {
+ 	int ret;
+ 
+-	mutex_lock(&pc->freq_lock);
+-	if (!pc->freq_ready) {
+-		/* Might be in the middle of a gt reset */
+-		ret = -EAGAIN;
+-		goto out;
+-	}
++	lockdep_assert_held(&pc->freq_lock);
+ 
+-	ret = pc_set_min_freq(pc, freq);
++	/* Might be in the middle of a gt reset */
++	if (!pc->freq_ready)
++		return -EAGAIN;
++
++	ret = pc_action_query_task_state(pc);
+ 	if (ret)
+-		goto out;
++		return ret;
+ 
+-	pc->user_requested_min = freq;
++	*freq = pc_get_max_freq(pc);
+ 
+-out:
+-	mutex_unlock(&pc->freq_lock);
+-	return ret;
++	return 0;
+ }
+ 
+ /**
+@@ -610,25 +671,29 @@ int xe_guc_pc_set_min_freq(struct xe_guc_pc *pc, u32 freq)
+  *         -EAGAIN if GuC PC not ready (likely in middle of a reset).
+  */
+ int xe_guc_pc_get_max_freq(struct xe_guc_pc *pc, u32 *freq)
++{
++	guard(mutex)(&pc->freq_lock);
++
++	return xe_guc_pc_get_max_freq_locked(pc, freq);
++}
++
++static int xe_guc_pc_set_max_freq_locked(struct xe_guc_pc *pc, u32 freq)
+ {
+ 	int ret;
+ 
+-	mutex_lock(&pc->freq_lock);
+-	if (!pc->freq_ready) {
+-		/* Might be in the middle of a gt reset */
+-		ret = -EAGAIN;
+-		goto out;
+-	}
++	lockdep_assert_held(&pc->freq_lock);
+ 
+-	ret = pc_action_query_task_state(pc);
++	/* Might be in the middle of a gt reset */
++	if (!pc->freq_ready)
++		return -EAGAIN;
++
++	ret = pc_set_max_freq(pc, freq);
+ 	if (ret)
+-		goto out;
++		return ret;
+ 
+-	*freq = pc_get_max_freq(pc);
++	pc->user_requested_max = freq;
+ 
+-out:
+-	mutex_unlock(&pc->freq_lock);
+-	return ret;
++	return 0;
+ }
+ 
+ /**
+@@ -642,24 +707,14 @@ int xe_guc_pc_get_max_freq(struct xe_guc_pc *pc, u32 *freq)
+  */
+ int xe_guc_pc_set_max_freq(struct xe_guc_pc *pc, u32 freq)
+ {
+-	int ret;
+-
+-	mutex_lock(&pc->freq_lock);
+-	if (!pc->freq_ready) {
+-		/* Might be in the middle of a gt reset */
+-		ret = -EAGAIN;
+-		goto out;
++	if (XE_WA(pc_to_gt(pc), 22019338487)) {
++		if (wait_for_flush_complete(pc) != 0)
++			return -EAGAIN;
+ 	}
+ 
+-	ret = pc_set_max_freq(pc, freq);
+-	if (ret)
+-		goto out;
+-
+-	pc->user_requested_max = freq;
++	guard(mutex)(&pc->freq_lock);
+ 
+-out:
+-	mutex_unlock(&pc->freq_lock);
+-	return ret;
++	return xe_guc_pc_set_max_freq_locked(pc, freq);
+ }
+ 
+ /**
+@@ -802,6 +857,7 @@ void xe_guc_pc_init_early(struct xe_guc_pc *pc)
+ 
+ static int pc_adjust_freq_bounds(struct xe_guc_pc *pc)
+ {
++	struct xe_tile *tile = gt_to_tile(pc_to_gt(pc));
+ 	int ret;
+ 
+ 	lockdep_assert_held(&pc->freq_lock);
+@@ -828,6 +884,9 @@ static int pc_adjust_freq_bounds(struct xe_guc_pc *pc)
+ 	if (pc_get_min_freq(pc) > pc->rp0_freq)
+ 		ret = pc_set_min_freq(pc, pc->rp0_freq);
+ 
++	if (XE_WA(tile->primary_gt, 14022085890))
++		ret = pc_set_min_freq(pc, max(BMG_MIN_FREQ, pc_get_min_freq(pc)));
++
+ out:
+ 	return ret;
+ }
+@@ -853,6 +912,92 @@ static int pc_adjust_requested_freq(struct xe_guc_pc *pc)
+ 	return ret;
+ }
+ 
++static bool needs_flush_freq_limit(struct xe_guc_pc *pc)
++{
++	struct xe_gt *gt = pc_to_gt(pc);
++
++	return XE_WA(gt, 22019338487) &&
++		pc->rp0_freq > BMG_MERT_FLUSH_FREQ_CAP;
++}
++
++/**
++ * xe_guc_pc_apply_flush_freq_limit() - Limit max GT freq during L2 flush
++ * @pc: the xe_guc_pc object
++ *
++ * As per the WA, reduce max GT frequency during L2 cache flush
++ */
++void xe_guc_pc_apply_flush_freq_limit(struct xe_guc_pc *pc)
++{
++	struct xe_gt *gt = pc_to_gt(pc);
++	u32 max_freq;
++	int ret;
++
++	if (!needs_flush_freq_limit(pc))
++		return;
++
++	guard(mutex)(&pc->freq_lock);
++
++	ret = xe_guc_pc_get_max_freq_locked(pc, &max_freq);
++	if (!ret && max_freq > BMG_MERT_FLUSH_FREQ_CAP) {
++		ret = pc_set_max_freq(pc, BMG_MERT_FLUSH_FREQ_CAP);
++		if (ret) {
++			xe_gt_err_once(gt, "Failed to cap max freq on flush to %u, %pe\n",
++				       BMG_MERT_FLUSH_FREQ_CAP, ERR_PTR(ret));
++			return;
++		}
++
++		atomic_set(&pc->flush_freq_limit, 1);
++
++		/*
++		 * If user has previously changed max freq, stash that value to
++		 * restore later, otherwise use the current max. New user
++		 * requests wait on flush.
++		 */
++		if (pc->user_requested_max != 0)
++			pc->stashed_max_freq = pc->user_requested_max;
++		else
++			pc->stashed_max_freq = max_freq;
++	}
++
++	/*
++	 * Wait for actual freq to go below the flush cap: even if the previous
++	 * max was below cap, the current one might still be above it
++	 */
++	ret = wait_for_act_freq_limit(pc, BMG_MERT_FLUSH_FREQ_CAP);
++	if (ret)
++		xe_gt_err_once(gt, "Actual freq did not reduce to %u, %pe\n",
++			       BMG_MERT_FLUSH_FREQ_CAP, ERR_PTR(ret));
++}
++
++/**
++ * xe_guc_pc_remove_flush_freq_limit() - Remove max GT freq limit after L2 flush completes.
++ * @pc: the xe_guc_pc object
++ *
++ * Restore the previously stashed GT max frequency value.
++ */
++void xe_guc_pc_remove_flush_freq_limit(struct xe_guc_pc *pc)
++{
++	struct xe_gt *gt = pc_to_gt(pc);
++	int ret = 0;
++
++	if (!needs_flush_freq_limit(pc))
++		return;
++
++	if (!atomic_read(&pc->flush_freq_limit))
++		return;
++
++	mutex_lock(&pc->freq_lock);
++
++	ret = pc_set_max_freq(&gt->uc.guc.pc, pc->stashed_max_freq);
++	if (ret)
++		xe_gt_err_once(gt, "Failed to restore max freq %u:%d\n",
++			       pc->stashed_max_freq, ret);
++
++	atomic_set(&pc->flush_freq_limit, 0);
++	mutex_unlock(&pc->freq_lock);
++	wake_up_var(&pc->flush_freq_limit);
++}
++
+ static int pc_set_mert_freq_cap(struct xe_guc_pc *pc)
+ {
+ 	int ret = 0;
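
wait_for_act_freq_limit() above polls the actual frequency with an exponentially growing sleep, clamping the final step so it lands exactly on the deadline. The same loop standalone, with usleep() standing in for usleep_range() and the polled condition stubbed out:

    #include <stdio.h>
    #include <unistd.h>

    static int condition_met(void) { return 0; }    /* stub: never satisfied */

    static int poll_with_backoff(int timeout_us)
    {
        int slept = 0, wait = 10;

        while (slept < timeout_us) {
            if (condition_met())
                return 0;
            usleep(wait);
            slept += wait;
            wait <<= 1;                          /* 10, 20, 40, ... */
            if (slept + wait > timeout_us)
                wait = timeout_us - slept;       /* clamp the last step */
        }
        return -1;                               /* -ETIMEDOUT in the kernel */
    }

    int main(void)
    {
        printf("rc=%d\n", poll_with_backoff(1000));
        return 0;
    }
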
+diff --git a/drivers/gpu/drm/xe/xe_guc_pc.h b/drivers/gpu/drm/xe/xe_guc_pc.h
+index 39102b79602fd8..0302c7426ccde4 100644
+--- a/drivers/gpu/drm/xe/xe_guc_pc.h
++++ b/drivers/gpu/drm/xe/xe_guc_pc.h
+@@ -37,5 +37,7 @@ u64 xe_guc_pc_mc6_residency(struct xe_guc_pc *pc);
+ void xe_guc_pc_init_early(struct xe_guc_pc *pc);
+ int xe_guc_pc_restore_stashed_freq(struct xe_guc_pc *pc);
+ void xe_guc_pc_raise_unslice(struct xe_guc_pc *pc);
++void xe_guc_pc_apply_flush_freq_limit(struct xe_guc_pc *pc);
++void xe_guc_pc_remove_flush_freq_limit(struct xe_guc_pc *pc);
+ 
+ #endif /* _XE_GUC_PC_H_ */
+diff --git a/drivers/gpu/drm/xe/xe_guc_pc_types.h b/drivers/gpu/drm/xe/xe_guc_pc_types.h
+index 2978ac9a249b5f..c02053948a579c 100644
+--- a/drivers/gpu/drm/xe/xe_guc_pc_types.h
++++ b/drivers/gpu/drm/xe/xe_guc_pc_types.h
+@@ -15,6 +15,8 @@
+ struct xe_guc_pc {
+ 	/** @bo: GGTT buffer object that is shared with GuC PC */
+ 	struct xe_bo *bo;
++	/** @flush_freq_limit: 1 when max freq changes are limited by driver */
++	atomic_t flush_freq_limit;
+ 	/** @rp0_freq: HW RP0 frequency - The Maximum one */
+ 	u32 rp0_freq;
+ 	/** @rpa_freq: HW RPa frequency - The Achievable one */
+diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
+index 5a3e89022c3812..752f57fd515e1c 100644
+--- a/drivers/gpu/drm/xe/xe_migrate.c
++++ b/drivers/gpu/drm/xe/xe_migrate.c
+@@ -82,7 +82,7 @@ struct xe_migrate {
+  * of the instruction.  Subtracting the instruction header (1 dword) and
+  * address (2 dwords), that leaves 0x3FD dwords (0x1FE qwords) for PTE values.
+  */
+-#define MAX_PTE_PER_SDI 0x1FE
++#define MAX_PTE_PER_SDI 0x1FEU
+ 
+ /**
+  * xe_tile_migrate_exec_queue() - Get this tile's migrate exec queue.
+@@ -1550,15 +1550,17 @@ static u32 pte_update_cmd_size(u64 size)
+ 	u64 entries = DIV_U64_ROUND_UP(size, XE_PAGE_SIZE);
+ 
+ 	XE_WARN_ON(size > MAX_PREEMPTDISABLE_TRANSFER);
++
+ 	/*
+ 	 * MI_STORE_DATA_IMM command is used to update page table. Each
+-	 * instruction can update maximumly 0x1ff pte entries. To update
+-	 * n (n <= 0x1ff) pte entries, we need:
+-	 * 1 dword for the MI_STORE_DATA_IMM command header (opcode etc)
+-	 * 2 dword for the page table's physical location
+-	 * 2*n dword for value of pte to fill (each pte entry is 2 dwords)
++	 * instruction can update at most MAX_PTE_PER_SDI pte entries. To
++	 * update n (n <= MAX_PTE_PER_SDI) pte entries, we need:
++	 *
++	 * - 1 dword for the MI_STORE_DATA_IMM command header (opcode etc)
++	 * - 2 dword for the page table's physical location
++	 * - 2*n dword for value of pte to fill (each pte entry is 2 dwords)
+ 	 */
+-	num_dword = (1 + 2) * DIV_U64_ROUND_UP(entries, 0x1ff);
++	num_dword = (1 + 2) * DIV_U64_ROUND_UP(entries, MAX_PTE_PER_SDI);
+ 	num_dword += entries * 2;
+ 
+ 	return num_dword;
+@@ -1574,7 +1576,7 @@ static void build_pt_update_batch_sram(struct xe_migrate *m,
+ 
+ 	ptes = DIV_ROUND_UP(size, XE_PAGE_SIZE);
+ 	while (ptes) {
+-		u32 chunk = min(0x1ffU, ptes);
++		u32 chunk = min(MAX_PTE_PER_SDI, ptes);
+ 
+ 		bb->cs[bb->len++] = MI_STORE_DATA_IMM | MI_SDI_NUM_QW(chunk);
+ 		bb->cs[bb->len++] = pt_offset;
+diff --git a/drivers/gpu/drm/xe/xe_wa_oob.rules b/drivers/gpu/drm/xe/xe_wa_oob.rules
+index 9b9e176992a837..320766f6c5dffb 100644
+--- a/drivers/gpu/drm/xe/xe_wa_oob.rules
++++ b/drivers/gpu/drm/xe/xe_wa_oob.rules
+@@ -57,3 +57,9 @@ no_media_l3	MEDIA_VERSION(3000)
+ 		GRAPHICS_VERSION(1260), GRAPHICS_STEP(A0, B0)
+ 16023105232	GRAPHICS_VERSION_RANGE(2001, 3001)
+ 		MEDIA_VERSION_RANGE(1301, 3000)
++16026508708	GRAPHICS_VERSION_RANGE(1200, 3001)
++		MEDIA_VERSION_RANGE(1300, 3000)
++
++# SoC workaround - currently applies to all platforms with the following
++# primary GT GMDID
++14022085890	GRAPHICS_VERSION(2001)
+diff --git a/drivers/hid/hid-appletb-kbd.c b/drivers/hid/hid-appletb-kbd.c
+index 747945dd450a24..0d43938f44fbc8 100644
+--- a/drivers/hid/hid-appletb-kbd.c
++++ b/drivers/hid/hid-appletb-kbd.c
+@@ -427,16 +427,20 @@ static int appletb_kbd_probe(struct hid_device *hdev, const struct hid_device_id
+ 	ret = appletb_kbd_set_mode(kbd, appletb_tb_def_mode);
+ 	if (ret) {
+ 		dev_err_probe(dev, ret, "Failed to set touchbar mode\n");
+-		goto close_hw;
++		goto unregister_handler;
+ 	}
+ 
+ 	hid_set_drvdata(hdev, kbd);
+ 
+ 	return 0;
+ 
++unregister_handler:
++	input_unregister_handler(&kbd->inp_handler);
+ close_hw:
+-	if (kbd->backlight_dev)
++	if (kbd->backlight_dev) {
+ 		put_device(&kbd->backlight_dev->dev);
++		timer_delete_sync(&kbd->inactivity_timer);
++	}
+ 	hid_hw_close(hdev);
+ stop_hw:
+ 	hid_hw_stop(hdev);
+@@ -450,10 +454,10 @@ static void appletb_kbd_remove(struct hid_device *hdev)
+ 	appletb_kbd_set_mode(kbd, APPLETB_KBD_MODE_OFF);
+ 
+ 	input_unregister_handler(&kbd->inp_handler);
+-	timer_delete_sync(&kbd->inactivity_timer);
+-
+-	if (kbd->backlight_dev)
++	if (kbd->backlight_dev) {
+ 		put_device(&kbd->backlight_dev->dev);
++		timer_delete_sync(&kbd->inactivity_timer);
++	}
+ 
+ 	hid_hw_close(hdev);
+ 	hid_hw_stop(hdev);
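
The appletb-kbd fix is a reminder that each goto label in a probe path must undo exactly what succeeded before the failing step, in reverse order: set_mode() failing after input_register_handler() now jumps to a label that unregisters the handler first rather than straight to close_hw. The canonical ladder, as a runnable sketch (step names invented):

    #include <stdio.h>

    static int step(const char *name, int fail)
    {
        printf("%s\n", name);
        return fail ? -1 : 0;
    }

    static int probe(void)
    {
        int ret;

        ret = step("start_hw", 0);
        if (ret)
            return ret;
        ret = step("register_handler", 0);
        if (ret)
            goto stop_hw;
        ret = step("set_mode", 1);          /* fails */
        if (ret)
            goto unregister_handler;        /* not stop_hw! */
        return 0;

    unregister_handler:
        puts("undo: unregister_handler");
    stop_hw:
        puts("undo: stop_hw");
        return ret;
    }

    int main(void) { return probe() ? 1 : 0; }
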
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index c5394229b77f57..40aa5114bf8c90 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -363,6 +363,7 @@ static int amd_i2c_dw_xfer_quirk(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ 
+ 	dev->msgs = msgs;
+ 	dev->msgs_num = num_msgs;
++	dev->msg_write_idx = 0;
+ 	i2c_dw_xfer_init(dev);
+ 
+ 	/* Initiate messages read/write transaction */
+diff --git a/drivers/infiniband/hw/mlx5/counters.c b/drivers/infiniband/hw/mlx5/counters.c
+index b847084dcd9986..a506fafd2b1513 100644
+--- a/drivers/infiniband/hw/mlx5/counters.c
++++ b/drivers/infiniband/hw/mlx5/counters.c
+@@ -398,7 +398,7 @@ static int do_get_hw_stats(struct ib_device *ibdev,
+ 		return ret;
+ 
+ 	/* We don't expose device counters over Vports */
+-	if (is_mdev_switchdev_mode(dev->mdev) && port_num != 0)
++	if (is_mdev_switchdev_mode(dev->mdev) && dev->is_rep && port_num != 0)
+ 		goto done;
+ 
+ 	if (MLX5_CAP_PCAM_FEATURE(dev->mdev, rx_icrc_encapsulated_counter)) {
+@@ -418,7 +418,7 @@ static int do_get_hw_stats(struct ib_device *ibdev,
+ 			 */
+ 			goto done;
+ 		}
+-		ret = mlx5_lag_query_cong_counters(dev->mdev,
++		ret = mlx5_lag_query_cong_counters(mdev,
+ 						   stats->value +
+ 						   cnts->num_q_counters,
+ 						   cnts->num_cong_counters,
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index 2479da8620ca91..843dcd3122424d 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -1958,6 +1958,7 @@ subscribe_event_xa_alloc(struct mlx5_devx_event_table *devx_event_table,
+ 			/* Level1 is valid for future use, no need to free */
+ 			return -ENOMEM;
+ 
++		INIT_LIST_HEAD(&obj_event->obj_sub_list);
+ 		err = xa_insert(&event->object_ids,
+ 				key_level2,
+ 				obj_event,
+@@ -1966,7 +1967,6 @@ subscribe_event_xa_alloc(struct mlx5_devx_event_table *devx_event_table,
+ 			kfree(obj_event);
+ 			return err;
+ 		}
+-		INIT_LIST_HEAD(&obj_event->obj_sub_list);
+ 	}
+ 
+ 	return 0;
+@@ -2669,7 +2669,7 @@ static void devx_wait_async_destroy(struct mlx5_async_cmd *cmd)
+ 
+ void mlx5_ib_ufile_hw_cleanup(struct ib_uverbs_file *ufile)
+ {
+-	struct mlx5_async_cmd async_cmd[MAX_ASYNC_CMDS];
++	struct mlx5_async_cmd *async_cmd;
+ 	struct ib_ucontext *ucontext = ufile->ucontext;
+ 	struct ib_device *device = ucontext->device;
+ 	struct mlx5_ib_dev *dev = to_mdev(device);
+@@ -2678,6 +2678,10 @@ void mlx5_ib_ufile_hw_cleanup(struct ib_uverbs_file *ufile)
+ 	int head = 0;
+ 	int tail = 0;
+ 
++	async_cmd = kcalloc(MAX_ASYNC_CMDS, sizeof(*async_cmd), GFP_KERNEL);
++	if (!async_cmd)
++		return;
++
+ 	list_for_each_entry(uobject, &ufile->uobjects, list) {
+ 		WARN_ON(uverbs_try_lock_object(uobject, UVERBS_LOOKUP_WRITE));
+ 
+@@ -2713,6 +2717,8 @@ void mlx5_ib_ufile_hw_cleanup(struct ib_uverbs_file *ufile)
+ 		devx_wait_async_destroy(&async_cmd[head % MAX_ASYNC_CMDS]);
+ 		head++;
+ 	}
++
++	kfree(async_cmd);
+ }
+ 
+ static ssize_t devx_async_cmd_event_read(struct file *filp, char __user *buf,
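
The devx.c hunk moves `struct mlx5_async_cmd async_cmd[MAX_ASYNC_CMDS]` off the kernel stack, which is only a few KiB, onto the heap with kcalloc(), degrading gracefully when the allocation fails. Userspace stacks are far larger, but the transformation is the same (sketch, sizes invented):

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_CMDS 128

    struct cmd { long payload[8]; };    /* large enough to matter on an 8-16K stack */

    static void cleanup(void)
    {
        /* before: struct cmd cmds[MAX_CMDS];  -- lived on the stack */
        struct cmd *cmds = calloc(MAX_CMDS, sizeof(*cmds));

        if (!cmds)
            return;                     /* bail out quietly, as the fix does */

        /* ... use cmds[0..MAX_CMDS-1] ... */

        free(cmds);
    }

    int main(void)
    {
        cleanup();
        return 0;
    }
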
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index d07cacaa0abd00..4ffe3afb560c9b 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -1779,6 +1779,33 @@ static void deallocate_uars(struct mlx5_ib_dev *dev,
+ 					     context->devx_uid);
+ }
+ 
++static int mlx5_ib_enable_lb_mp(struct mlx5_core_dev *master,
++				struct mlx5_core_dev *slave)
++{
++	int err;
++
++	err = mlx5_nic_vport_update_local_lb(master, true);
++	if (err)
++		return err;
++
++	err = mlx5_nic_vport_update_local_lb(slave, true);
++	if (err)
++		goto out;
++
++	return 0;
++
++out:
++	mlx5_nic_vport_update_local_lb(master, false);
++	return err;
++}
++
++static void mlx5_ib_disable_lb_mp(struct mlx5_core_dev *master,
++				  struct mlx5_core_dev *slave)
++{
++	mlx5_nic_vport_update_local_lb(slave, false);
++	mlx5_nic_vport_update_local_lb(master, false);
++}
++
+ int mlx5_ib_enable_lb(struct mlx5_ib_dev *dev, bool td, bool qp)
+ {
+ 	int err = 0;
+@@ -3483,6 +3510,8 @@ static void mlx5_ib_unbind_slave_port(struct mlx5_ib_dev *ibdev,
+ 
+ 	lockdep_assert_held(&mlx5_ib_multiport_mutex);
+ 
++	mlx5_ib_disable_lb_mp(ibdev->mdev, mpi->mdev);
++
+ 	mlx5_core_mp_event_replay(ibdev->mdev,
+ 				  MLX5_DRIVER_EVENT_AFFILIATION_REMOVED,
+ 				  NULL);
+@@ -3578,6 +3607,10 @@ static bool mlx5_ib_bind_slave_port(struct mlx5_ib_dev *ibdev,
+ 				  MLX5_DRIVER_EVENT_AFFILIATION_DONE,
+ 				  &key);
+ 
++	err = mlx5_ib_enable_lb_mp(ibdev->mdev, mpi->mdev);
++	if (err)
++		goto unbind;
++
+ 	return true;
+ 
+ unbind:
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 5fbebafc87742d..cb403134eeaea6 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -2027,23 +2027,50 @@ void mlx5_ib_revoke_data_direct_mrs(struct mlx5_ib_dev *dev)
+ 	}
+ }
+ 
+-static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
++static int mlx5_umr_revoke_mr_with_lock(struct mlx5_ib_mr *mr)
+ {
+-	struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device);
+-	struct mlx5_cache_ent *ent = mr->mmkey.cache_ent;
+-	bool is_odp = is_odp_mr(mr);
+ 	bool is_odp_dma_buf = is_dmabuf_mr(mr) &&
+-			!to_ib_umem_dmabuf(mr->umem)->pinned;
+-	bool from_cache = !!ent;
+-	int ret = 0;
++			      !to_ib_umem_dmabuf(mr->umem)->pinned;
++	bool is_odp = is_odp_mr(mr);
++	int ret;
+ 
+ 	if (is_odp)
+ 		mutex_lock(&to_ib_umem_odp(mr->umem)->umem_mutex);
+ 
+ 	if (is_odp_dma_buf)
+-		dma_resv_lock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv, NULL);
++		dma_resv_lock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv,
++			      NULL);
++
++	ret = mlx5r_umr_revoke_mr(mr);
++
++	if (is_odp) {
++		if (!ret)
++			to_ib_umem_odp(mr->umem)->private = NULL;
++		mutex_unlock(&to_ib_umem_odp(mr->umem)->umem_mutex);
++	}
++
++	if (is_odp_dma_buf) {
++		if (!ret)
++			to_ib_umem_dmabuf(mr->umem)->private = NULL;
++		dma_resv_unlock(
++			to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv);
++	}
+ 
+-	if (mr->mmkey.cacheable && !mlx5r_umr_revoke_mr(mr) && !cache_ent_find_and_store(dev, mr)) {
++	return ret;
++}
++
++static int mlx5r_handle_mkey_cleanup(struct mlx5_ib_mr *mr)
++{
++	bool is_odp_dma_buf = is_dmabuf_mr(mr) &&
++			      !to_ib_umem_dmabuf(mr->umem)->pinned;
++	struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device);
++	struct mlx5_cache_ent *ent = mr->mmkey.cache_ent;
++	bool is_odp = is_odp_mr(mr);
++	bool from_cache = !!ent;
++	int ret;
++
++	if (mr->mmkey.cacheable && !mlx5_umr_revoke_mr_with_lock(mr) &&
++	    !cache_ent_find_and_store(dev, mr)) {
+ 		ent = mr->mmkey.cache_ent;
+ 		/* upon storing to a clean temp entry - schedule its cleanup */
+ 		spin_lock_irq(&ent->mkeys_queue.lock);
+@@ -2055,7 +2082,7 @@ static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
+ 			ent->tmp_cleanup_scheduled = true;
+ 		}
+ 		spin_unlock_irq(&ent->mkeys_queue.lock);
+-		goto out;
++		return 0;
+ 	}
+ 
+ 	if (ent) {
+@@ -2064,8 +2091,14 @@ static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
+ 		mr->mmkey.cache_ent = NULL;
+ 		spin_unlock_irq(&ent->mkeys_queue.lock);
+ 	}
++
++	if (is_odp)
++		mutex_lock(&to_ib_umem_odp(mr->umem)->umem_mutex);
++
++	if (is_odp_dma_buf)
++		dma_resv_lock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv,
++			      NULL);
+ 	ret = destroy_mkey(dev, mr);
+-out:
+ 	if (is_odp) {
+ 		if (!ret)
+ 			to_ib_umem_odp(mr->umem)->private = NULL;
+@@ -2075,9 +2108,9 @@ static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
+ 	if (is_odp_dma_buf) {
+ 		if (!ret)
+ 			to_ib_umem_dmabuf(mr->umem)->private = NULL;
+-		dma_resv_unlock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv);
++		dma_resv_unlock(
++			to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv);
+ 	}
+-
+ 	return ret;
+ }
+ 
+@@ -2126,7 +2159,7 @@ static int __mlx5_ib_dereg_mr(struct ib_mr *ibmr)
+ 	}
+ 
+ 	/* Stop DMA */
+-	rc = mlx5_revoke_mr(mr);
++	rc = mlx5r_handle_mkey_cleanup(mr);
+ 	if (rc)
+ 		return rc;
+ 
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index 86d8fa63bf691a..4183bab753a390 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -247,8 +247,8 @@ static void destroy_unused_implicit_child_mr(struct mlx5_ib_mr *mr)
+ 	}
+ 
+ 	if (MLX5_CAP_ODP(mr_to_mdev(mr)->mdev, mem_page_fault))
+-		__xa_erase(&mr_to_mdev(mr)->odp_mkeys,
+-			   mlx5_base_mkey(mr->mmkey.key));
++		xa_erase(&mr_to_mdev(mr)->odp_mkeys,
++			 mlx5_base_mkey(mr->mmkey.key));
+ 	xa_unlock(&imr->implicit_children);
+ 
+ 	/* Freeing a MR is a sleeping operation, so bounce to a work queue */
+@@ -521,8 +521,8 @@ static struct mlx5_ib_mr *implicit_get_child_mr(struct mlx5_ib_mr *imr,
+ 	}
+ 
+ 	if (MLX5_CAP_ODP(dev->mdev, mem_page_fault)) {
+-		ret = __xa_store(&dev->odp_mkeys, mlx5_base_mkey(mr->mmkey.key),
+-				 &mr->mmkey, GFP_KERNEL);
++		ret = xa_store(&dev->odp_mkeys, mlx5_base_mkey(mr->mmkey.key),
++			       &mr->mmkey, GFP_KERNEL);
+ 		if (xa_is_err(ret)) {
+ 			ret = ERR_PTR(xa_err(ret));
+ 			__xa_erase(&imr->implicit_children, idx);
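
In the XArray API the double-underscore forms (__xa_erase(), __xa_store()) assume the caller already holds that array's xa_lock; the odp.c code held imr->implicit_children's lock while operating on the separate dev->odp_mkeys array, so the self-locking forms are the correct ones. The naming contract in miniature, with a pthread mutex (illustrative, not the XArray code):

    #include <pthread.h>
    #include <stdio.h>

    struct store {
        pthread_mutex_t lock;
        int value;
    };

    /* __store_set: caller must already hold s->lock (the "__" contract) */
    static void __store_set(struct store *s, int v)
    {
        s->value = v;
    }

    /* store_set: self-locking form, safe when s->lock is not held */
    static void store_set(struct store *s, int v)
    {
        pthread_mutex_lock(&s->lock);
        __store_set(s, v);
        pthread_mutex_unlock(&s->lock);
    }

    int main(void)
    {
        struct store a = { PTHREAD_MUTEX_INITIALIZER, 0 };
        struct store b = { PTHREAD_MUTEX_INITIALIZER, 0 };

        pthread_mutex_lock(&a.lock);
        __store_set(&a, 1);    /* fine: a's lock is held */
        store_set(&b, 2);      /* b's lock is NOT held, use the locking form */
        pthread_mutex_unlock(&a.lock);

        printf("%d %d\n", a.value, b.value);
        return 0;
    }
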
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 1008858f78e207..8c0af1d26a30df 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -174,6 +174,7 @@ static const struct xpad_device {
+ 	{ 0x05fd, 0x107a, "InterAct 'PowerPad Pro' X-Box pad (Germany)", 0, XTYPE_XBOX },
+ 	{ 0x05fe, 0x3030, "Chic Controller", 0, XTYPE_XBOX },
+ 	{ 0x05fe, 0x3031, "Chic Controller", 0, XTYPE_XBOX },
++	{ 0x0502, 0x1305, "Acer NGR200", 0, XTYPE_XBOX },
+ 	{ 0x062a, 0x0020, "Logic3 Xbox GamePad", 0, XTYPE_XBOX },
+ 	{ 0x062a, 0x0033, "Competition Pro Steering Wheel", 0, XTYPE_XBOX },
+ 	{ 0x06a3, 0x0200, "Saitek Racing Wheel", 0, XTYPE_XBOX },
+@@ -520,6 +521,7 @@ static const struct usb_device_id xpad_table[] = {
+ 	XPAD_XBOX360_VENDOR(0x045e),		/* Microsoft Xbox 360 controllers */
+ 	XPAD_XBOXONE_VENDOR(0x045e),		/* Microsoft Xbox One controllers */
+ 	XPAD_XBOX360_VENDOR(0x046d),		/* Logitech Xbox 360-style controllers */
++	XPAD_XBOX360_VENDOR(0x0502),		/* Acer Inc. Xbox 360 style controllers */
+ 	XPAD_XBOX360_VENDOR(0x056e),		/* Elecom JC-U3613M */
+ 	XPAD_XBOX360_VENDOR(0x06a3),		/* Saitek P3600 */
+ 	XPAD_XBOX360_VENDOR(0x0738),		/* Mad Catz Xbox 360 controllers */
+diff --git a/drivers/input/misc/cs40l50-vibra.c b/drivers/input/misc/cs40l50-vibra.c
+index dce3b0ec8cf368..330f0912363183 100644
+--- a/drivers/input/misc/cs40l50-vibra.c
++++ b/drivers/input/misc/cs40l50-vibra.c
+@@ -238,6 +238,8 @@ static int cs40l50_upload_owt(struct cs40l50_work *work_data)
+ 	header.data_words = len / sizeof(u32);
+ 
+ 	new_owt_effect_data = kmalloc(sizeof(header) + len, GFP_KERNEL);
++	if (!new_owt_effect_data)
++		return -ENOMEM;
+ 
+ 	memcpy(new_owt_effect_data, &header, sizeof(header));
+ 	memcpy(new_owt_effect_data + sizeof(header), work_data->custom_data, len);
+diff --git a/drivers/input/misc/iqs7222.c b/drivers/input/misc/iqs7222.c
+index 80b917944b51e7..6fac31c0d99f2b 100644
+--- a/drivers/input/misc/iqs7222.c
++++ b/drivers/input/misc/iqs7222.c
+@@ -301,6 +301,7 @@ struct iqs7222_dev_desc {
+ 	int allow_offset;
+ 	int event_offset;
+ 	int comms_offset;
++	int ext_chan;
+ 	bool legacy_gesture;
+ 	struct iqs7222_reg_grp_desc reg_grps[IQS7222_NUM_REG_GRPS];
+ };
+@@ -315,6 +316,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = {
+ 		.allow_offset = 9,
+ 		.event_offset = 10,
+ 		.comms_offset = 12,
++		.ext_chan = 10,
+ 		.reg_grps = {
+ 			[IQS7222_REG_GRP_STAT] = {
+ 				.base = IQS7222_SYS_STATUS,
+@@ -373,6 +375,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = {
+ 		.allow_offset = 9,
+ 		.event_offset = 10,
+ 		.comms_offset = 12,
++		.ext_chan = 10,
+ 		.legacy_gesture = true,
+ 		.reg_grps = {
+ 			[IQS7222_REG_GRP_STAT] = {
+@@ -2244,7 +2247,7 @@ static int iqs7222_parse_chan(struct iqs7222_private *iqs7222,
+ 	const struct iqs7222_dev_desc *dev_desc = iqs7222->dev_desc;
+ 	struct i2c_client *client = iqs7222->client;
+ 	int num_chan = dev_desc->reg_grps[IQS7222_REG_GRP_CHAN].num_row;
+-	int ext_chan = rounddown(num_chan, 10);
++	int ext_chan = dev_desc->ext_chan ? : num_chan;
+ 	int error, i;
+ 	u16 *chan_setup = iqs7222->chan_setup[chan_index];
+ 	u16 *sys_setup = iqs7222->sys_setup;
+@@ -2445,7 +2448,7 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222,
+ 	const struct iqs7222_dev_desc *dev_desc = iqs7222->dev_desc;
+ 	struct i2c_client *client = iqs7222->client;
+ 	int num_chan = dev_desc->reg_grps[IQS7222_REG_GRP_CHAN].num_row;
+-	int ext_chan = rounddown(num_chan, 10);
++	int ext_chan = dev_desc->ext_chan ? : num_chan;
+ 	int count, error, reg_offset, i;
+ 	u16 *event_mask = &iqs7222->sys_setup[dev_desc->event_offset];
+ 	u16 *sldr_setup = iqs7222->sldr_setup[sldr_index];
+diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
+index fc35cba5914532..47692cbfaabdb3 100644
+--- a/drivers/iommu/intel/cache.c
++++ b/drivers/iommu/intel/cache.c
+@@ -40,9 +40,8 @@ static bool cache_tage_match(struct cache_tag *tag, u16 domain_id,
+ }
+ 
+ /* Assign a cache tag with specified type to domain. */
+-static int cache_tag_assign(struct dmar_domain *domain, u16 did,
+-			    struct device *dev, ioasid_t pasid,
+-			    enum cache_tag_type type)
++int cache_tag_assign(struct dmar_domain *domain, u16 did, struct device *dev,
++		     ioasid_t pasid, enum cache_tag_type type)
+ {
+ 	struct device_domain_info *info = dev_iommu_priv_get(dev);
+ 	struct intel_iommu *iommu = info->iommu;
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 9d2d34c1b2ff8f..ff07ee2940f5f0 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -3819,8 +3819,17 @@ static void intel_iommu_probe_finalize(struct device *dev)
+ 	    !pci_enable_pasid(to_pci_dev(dev), info->pasid_supported & ~1))
+ 		info->pasid_enabled = 1;
+ 
+-	if (sm_supported(iommu) && !dev_is_real_dma_subdevice(dev))
++	if (sm_supported(iommu) && !dev_is_real_dma_subdevice(dev)) {
+ 		iommu_enable_pci_ats(info);
++		/* Assign a DEVTLB cache tag to the default domain. */
++		if (info->ats_enabled && info->domain) {
++			u16 did = domain_id_iommu(info->domain, iommu);
++
++			if (cache_tag_assign(info->domain, did, dev,
++					     IOMMU_NO_PASID, CACHE_TAG_DEVTLB))
++				iommu_disable_pci_ats(info);
++		}
++	}
+ 	iommu_enable_pci_pri(info);
+ }
+ 
+diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
+index 8661e0864701ce..814f5ed2001898 100644
+--- a/drivers/iommu/intel/iommu.h
++++ b/drivers/iommu/intel/iommu.h
+@@ -1277,6 +1277,8 @@ struct cache_tag {
+ 	unsigned int users;
+ };
+ 
++int cache_tag_assign(struct dmar_domain *domain, u16 did, struct device *dev,
++		     ioasid_t pasid, enum cache_tag_type type);
+ int cache_tag_assign_domain(struct dmar_domain *domain,
+ 			    struct device *dev, ioasid_t pasid);
+ void cache_tag_unassign_domain(struct dmar_domain *domain,
+diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
+index af4cc91b2bbfc8..97d4bfcd8241bb 100644
+--- a/drivers/iommu/rockchip-iommu.c
++++ b/drivers/iommu/rockchip-iommu.c
+@@ -1155,7 +1155,6 @@ static int rk_iommu_of_xlate(struct device *dev,
+ 		return -ENOMEM;
+ 
+ 	data->iommu = platform_get_drvdata(iommu_dev);
+-	data->iommu->domain = &rk_identity_domain;
+ 	dev_iommu_priv_set(dev, data);
+ 
+ 	platform_device_put(iommu_dev);
+@@ -1193,6 +1192,8 @@ static int rk_iommu_probe(struct platform_device *pdev)
+ 	if (!iommu)
+ 		return -ENOMEM;
+ 
++	iommu->domain = &rk_identity_domain;
++
+ 	platform_set_drvdata(pdev, iommu);
+ 	iommu->dev = dev;
+ 	iommu->num_mmu = 0;
+diff --git a/drivers/mmc/core/quirks.h b/drivers/mmc/core/quirks.h
+index 7f893bafaa607d..c417ed34c05767 100644
+--- a/drivers/mmc/core/quirks.h
++++ b/drivers/mmc/core/quirks.h
+@@ -44,6 +44,12 @@ static const struct mmc_fixup __maybe_unused mmc_sd_fixups[] = {
+ 		   0, -1ull, SDIO_ANY_ID, SDIO_ANY_ID, add_quirk_sd,
+ 		   MMC_QUIRK_NO_UHS_DDR50_TUNING, EXT_CSD_REV_ANY),
+ 
++	/*
++	 * Some SD cards report discard support even though they don't
++	 */
++	MMC_FIXUP(CID_NAME_ANY, CID_MANFID_SANDISK_SD, 0x5344, add_quirk_sd,
++		  MMC_QUIRK_BROKEN_SD_DISCARD),
++
+ 	END_FIXUP
+ };
+ 
+@@ -147,12 +153,6 @@ static const struct mmc_fixup __maybe_unused mmc_blk_fixups[] = {
+ 	MMC_FIXUP("M62704", CID_MANFID_KINGSTON, 0x0100, add_quirk_mmc,
+ 		  MMC_QUIRK_TRIM_BROKEN),
+ 
+-	/*
+-	 * Some SD cards reports discard support while they don't
+-	 */
+-	MMC_FIXUP(CID_NAME_ANY, CID_MANFID_SANDISK_SD, 0x5344, add_quirk_sd,
+-		  MMC_QUIRK_BROKEN_SD_DISCARD),
+-
+ 	END_FIXUP
+ };
+ 
+diff --git a/drivers/mmc/core/sd_uhs2.c b/drivers/mmc/core/sd_uhs2.c
+index 1c31d0dfa96152..de17d1611290cf 100644
+--- a/drivers/mmc/core/sd_uhs2.c
++++ b/drivers/mmc/core/sd_uhs2.c
+@@ -91,8 +91,8 @@ static int sd_uhs2_phy_init(struct mmc_host *host)
+ 
+ 	err = host->ops->uhs2_control(host, UHS2_PHY_INIT);
+ 	if (err) {
+-		pr_err("%s: failed to initial phy for UHS-II!\n",
+-		       mmc_hostname(host));
++		pr_debug("%s: failed to initialize phy for UHS-II!\n",
++			 mmc_hostname(host));
+ 	}
+ 
+ 	return err;
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 345ea91629e0f5..b15dc542a2ea8a 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -827,12 +827,18 @@ static inline void msdc_dma_setup(struct msdc_host *host, struct msdc_dma *dma,
+ static void msdc_prepare_data(struct msdc_host *host, struct mmc_data *data)
+ {
+ 	if (!(data->host_cookie & MSDC_PREPARE_FLAG)) {
+-		data->host_cookie |= MSDC_PREPARE_FLAG;
+ 		data->sg_count = dma_map_sg(host->dev, data->sg, data->sg_len,
+ 					    mmc_get_dma_dir(data));
++		if (data->sg_count)
++			data->host_cookie |= MSDC_PREPARE_FLAG;
+ 	}
+ }
+ 
++static bool msdc_data_prepared(struct mmc_data *data)
++{
++	return data->host_cookie & MSDC_PREPARE_FLAG;
++}
++
+ static void msdc_unprepare_data(struct msdc_host *host, struct mmc_data *data)
+ {
+ 	if (data->host_cookie & MSDC_ASYNC_FLAG)
+@@ -1465,8 +1471,19 @@ static void msdc_ops_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ 	WARN_ON(!host->hsq_en && host->mrq);
+ 	host->mrq = mrq;
+ 
+-	if (mrq->data)
++	if (mrq->data) {
+ 		msdc_prepare_data(host, mrq->data);
++		if (!msdc_data_prepared(mrq->data)) {
++			host->mrq = NULL;
++			/*
++			 * Failed to prepare DMA area, fail fast before
++			 * starting any commands.
++			 */
++			mrq->cmd->error = -ENOSPC;
++			mmc_request_done(mmc_from_priv(host), mrq);
++			return;
++		}
++	}
+ 
+ 	/* if SBC is required, we have HW option and SW option.
+ 	 * if HW option is enabled, and SBC does not have "special" flags,
+diff --git a/drivers/mmc/host/sdhci-uhs2.c b/drivers/mmc/host/sdhci-uhs2.c
+index c53b64d50c0de5..0efeb9d0c3765a 100644
+--- a/drivers/mmc/host/sdhci-uhs2.c
++++ b/drivers/mmc/host/sdhci-uhs2.c
+@@ -99,8 +99,8 @@ void sdhci_uhs2_reset(struct sdhci_host *host, u16 mask)
+ 	/* hw clears the bit when it's done */
+ 	if (read_poll_timeout_atomic(sdhci_readw, val, !(val & mask), 10,
+ 				     UHS2_RESET_TIMEOUT_100MS, true, host, SDHCI_UHS2_SW_RESET)) {
+-		pr_warn("%s: %s: Reset 0x%x never completed. %s: clean reset bit.\n", __func__,
+-			mmc_hostname(host->mmc), (int)mask, mmc_hostname(host->mmc));
++		pr_debug("%s: %s: Reset 0x%x never completed. %s: clean reset bit.\n", __func__,
++			 mmc_hostname(host->mmc), (int)mask, mmc_hostname(host->mmc));
+ 		sdhci_writeb(host, 0, SDHCI_UHS2_SW_RESET);
+ 		return;
+ 	}
+@@ -335,8 +335,8 @@ static int sdhci_uhs2_interface_detect(struct sdhci_host *host)
+ 	if (read_poll_timeout(sdhci_readl, val, (val & SDHCI_UHS2_IF_DETECT),
+ 			      100, UHS2_INTERFACE_DETECT_TIMEOUT_100MS, true,
+ 			      host, SDHCI_PRESENT_STATE)) {
+-		pr_warn("%s: not detect UHS2 interface in 100ms.\n", mmc_hostname(host->mmc));
+-		sdhci_dumpregs(host);
++		pr_debug("%s: failed to detect UHS2 interface in 100ms.\n", mmc_hostname(host->mmc));
++		sdhci_dbg_dumpregs(host, "UHS2 interface detect timeout in 100ms");
+ 		return -EIO;
+ 	}
+ 
+@@ -345,8 +345,8 @@ static int sdhci_uhs2_interface_detect(struct sdhci_host *host)
+ 
+ 	if (read_poll_timeout(sdhci_readl, val, (val & SDHCI_UHS2_LANE_SYNC),
+ 			      100, UHS2_LANE_SYNC_TIMEOUT_150MS, true, host, SDHCI_PRESENT_STATE)) {
+-		pr_warn("%s: UHS2 Lane sync fail in 150ms.\n", mmc_hostname(host->mmc));
+-		sdhci_dumpregs(host);
++		pr_debug("%s: UHS2 Lane sync fail in 150ms.\n", mmc_hostname(host->mmc));
++		sdhci_dbg_dumpregs(host, "UHS2 Lane sync fail in 150ms");
+ 		return -EIO;
+ 	}
+ 
+@@ -417,12 +417,12 @@ static int sdhci_uhs2_do_detect_init(struct mmc_host *mmc)
+ 		host->ops->uhs2_pre_detect_init(host);
+ 
+ 	if (sdhci_uhs2_interface_detect(host)) {
+-		pr_warn("%s: cannot detect UHS2 interface.\n", mmc_hostname(host->mmc));
++		pr_debug("%s: cannot detect UHS2 interface.\n", mmc_hostname(host->mmc));
+ 		return -EIO;
+ 	}
+ 
+ 	if (sdhci_uhs2_init(host)) {
+-		pr_warn("%s: UHS2 init fail.\n", mmc_hostname(host->mmc));
++		pr_debug("%s: UHS2 init fail.\n", mmc_hostname(host->mmc));
+ 		return -EIO;
+ 	}
+ 
+@@ -504,8 +504,8 @@ static int sdhci_uhs2_check_dormant(struct sdhci_host *host)
+ 	if (read_poll_timeout(sdhci_readl, val, (val & SDHCI_UHS2_IN_DORMANT_STATE),
+ 			      100, UHS2_CHECK_DORMANT_TIMEOUT_100MS, true, host,
+ 			      SDHCI_PRESENT_STATE)) {
+-		pr_warn("%s: UHS2 IN_DORMANT fail in 100ms.\n", mmc_hostname(host->mmc));
+-		sdhci_dumpregs(host);
++		pr_debug("%s: UHS2 IN_DORMANT fail in 100ms.\n", mmc_hostname(host->mmc));
++		sdhci_dbg_dumpregs(host, "UHS2 IN_DORMANT fail in 100ms");
+ 		return -EIO;
+ 	}
+ 	return 0;
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 5f78be7ae16d78..0f31746b8a096d 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -2065,15 +2065,10 @@ void sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
+ 
+ 	host->mmc->actual_clock = 0;
+ 
+-	clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
+-	if (clk & SDHCI_CLOCK_CARD_EN)
+-		sdhci_writew(host, clk & ~SDHCI_CLOCK_CARD_EN,
+-			SDHCI_CLOCK_CONTROL);
++	sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL);
+ 
+-	if (clock == 0) {
+-		sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL);
++	if (clock == 0)
+ 		return;
+-	}
+ 
+ 	clk = sdhci_calc_clk(host, clock, &host->mmc->actual_clock);
+ 	sdhci_enable_clk(host, clk);
+diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
+index cd0e35a805427c..2c28240e600350 100644
+--- a/drivers/mmc/host/sdhci.h
++++ b/drivers/mmc/host/sdhci.h
+@@ -898,4 +898,20 @@ void sdhci_switch_external_dma(struct sdhci_host *host, bool en);
+ void sdhci_set_data_timeout_irq(struct sdhci_host *host, bool enable);
+ void __sdhci_set_timeout(struct sdhci_host *host, struct mmc_command *cmd);
+ 
++#if defined(CONFIG_DYNAMIC_DEBUG) || \
++	(defined(CONFIG_DYNAMIC_DEBUG_CORE) && defined(DYNAMIC_DEBUG_MODULE))
++#define SDHCI_DBG_ANYWAY 0
++#elif defined(DEBUG)
++#define SDHCI_DBG_ANYWAY 1
++#else
++#define SDHCI_DBG_ANYWAY 0
++#endif
++
++#define sdhci_dbg_dumpregs(host, fmt)					\
++do {									\
++	DEFINE_DYNAMIC_DEBUG_METADATA(descriptor, fmt);			\
++	if (DYNAMIC_DEBUG_BRANCH(descriptor) ||	SDHCI_DBG_ANYWAY)	\
++		sdhci_dumpregs(host);					\
++} while (0)
++
+ #endif /* __SDHCI_HW_H */
+diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
+index d16e42cf8faebd..303aede6e5aa49 100644
+--- a/drivers/mtd/nand/spi/core.c
++++ b/drivers/mtd/nand/spi/core.c
+@@ -1585,6 +1585,7 @@ static void spinand_cleanup(struct spinand_device *spinand)
+ {
+ 	struct nand_device *nand = spinand_to_nand(spinand);
+ 
++	nanddev_ecc_engine_cleanup(nand);
+ 	nanddev_cleanup(nand);
+ 	spinand_manufacturer_cleanup(spinand);
+ 	kfree(spinand->databuf);
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+index 3b70f67376331e..aa25a8a0a106f6 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+@@ -1373,6 +1373,8 @@
+ #define MDIO_VEND2_CTRL1_SS13		BIT(13)
+ #endif
+ 
++#define XGBE_VEND2_MAC_AUTO_SW		BIT(9)
++
+ /* MDIO mask values */
+ #define XGBE_AN_CL73_INT_CMPLT		BIT(0)
+ #define XGBE_AN_CL73_INC_LINK		BIT(1)
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+index 07f4f3418d0187..ed76a8df6ec6ed 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+@@ -375,6 +375,10 @@ static void xgbe_an37_set(struct xgbe_prv_data *pdata, bool enable,
+ 		reg |= MDIO_VEND2_CTRL1_AN_RESTART;
+ 
+ 	XMDIO_WRITE(pdata, MDIO_MMD_VEND2, MDIO_CTRL1, reg);
++
++	reg = XMDIO_READ(pdata, MDIO_MMD_VEND2, MDIO_PCS_DIG_CTRL);
++	reg |= XGBE_VEND2_MAC_AUTO_SW;
++	XMDIO_WRITE(pdata, MDIO_MMD_VEND2, MDIO_PCS_DIG_CTRL, reg);
+ }
+ 
+ static void xgbe_an37_restart(struct xgbe_prv_data *pdata)
+@@ -1003,6 +1007,11 @@ static void xgbe_an37_init(struct xgbe_prv_data *pdata)
+ 
+ 	netif_dbg(pdata, link, pdata->netdev, "CL37 AN (%s) initialized\n",
+ 		  (pdata->an_mode == XGBE_AN_MODE_CL37) ? "BaseX" : "SGMII");
++
++	reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_CTRL1);
++	reg &= ~MDIO_AN_CTRL1_ENABLE;
++	XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_CTRL1, reg);
++
+ }
+ 
+ static void xgbe_an73_init(struct xgbe_prv_data *pdata)
+@@ -1404,6 +1413,10 @@ static void xgbe_phy_status(struct xgbe_prv_data *pdata)
+ 
+ 	pdata->phy.link = pdata->phy_if.phy_impl.link_status(pdata,
+ 							     &an_restart);
++	/* bail out if the link status register read fails */
++	if (pdata->phy.link < 0)
++		return;
++
+ 	if (an_restart) {
+ 		xgbe_phy_config_aneg(pdata);
+ 		goto adjust_link;
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+index 268399dfcf22f0..32e633d1134843 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+@@ -2855,8 +2855,7 @@ static bool xgbe_phy_valid_speed(struct xgbe_prv_data *pdata, int speed)
+ static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart)
+ {
+ 	struct xgbe_phy_data *phy_data = pdata->phy_data;
+-	unsigned int reg;
+-	int ret;
++	int reg, ret;
+ 
+ 	*an_restart = 0;
+ 
+@@ -2890,11 +2889,20 @@ static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart)
+ 			return 0;
+ 	}
+ 
+-	/* Link status is latched low, so read once to clear
+-	 * and then read again to get current state
+-	 */
+-	reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);
+ 	reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);
++	if (reg < 0)
++		return reg;
++
++	/* Link status is latched low so that momentary link drops
++	 * can be detected. If the link was already down, read again
++	 * to get the latest state.
++	 */
++
++	if (!pdata->phy.link && !(reg & MDIO_STAT1_LSTATUS)) {
++		reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);
++		if (reg < 0)
++			return reg;
++	}
+ 
+ 	if (pdata->en_rx_adap) {
+ 		/* if the link is available and adaptation is done,
+@@ -2913,9 +2921,7 @@ static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart)
+ 			xgbe_phy_set_mode(pdata, phy_data->cur_mode);
+ 		}
+ 
+-		/* check again for the link and adaptation status */
+-		reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);
+-		if ((reg & MDIO_STAT1_LSTATUS) && pdata->rx_adapt_done)
++		if (pdata->rx_adapt_done)
+ 			return 1;
+ 	} else if (reg & MDIO_STAT1_LSTATUS)
+ 		return 1;
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
+index ed5d43c16d0e23..7526a0906b3914 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
++++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
+@@ -292,12 +292,12 @@
+ #define XGBE_LINK_TIMEOUT		5
+ #define XGBE_KR_TRAINING_WAIT_ITER	50
+ 
+-#define XGBE_SGMII_AN_LINK_STATUS	BIT(1)
++#define XGBE_SGMII_AN_LINK_DUPLEX	BIT(1)
+ #define XGBE_SGMII_AN_LINK_SPEED	(BIT(2) | BIT(3))
+ #define XGBE_SGMII_AN_LINK_SPEED_10	0x00
+ #define XGBE_SGMII_AN_LINK_SPEED_100	0x04
+ #define XGBE_SGMII_AN_LINK_SPEED_1000	0x08
+-#define XGBE_SGMII_AN_LINK_DUPLEX	BIT(4)
++#define XGBE_SGMII_AN_LINK_STATUS	BIT(4)
+ 
+ /* ECC correctable error notification window (seconds) */
+ #define XGBE_ECC_LIMIT			60
+diff --git a/drivers/net/ethernet/atheros/atlx/atl1.c b/drivers/net/ethernet/atheros/atlx/atl1.c
+index 38cd84b7677cfd..86663aa0de17ec 100644
+--- a/drivers/net/ethernet/atheros/atlx/atl1.c
++++ b/drivers/net/ethernet/atheros/atlx/atl1.c
+@@ -1861,14 +1861,21 @@ static u16 atl1_alloc_rx_buffers(struct atl1_adapter *adapter)
+ 			break;
+ 		}
+ 
+-		buffer_info->alloced = 1;
+-		buffer_info->skb = skb;
+-		buffer_info->length = (u16) adapter->rx_buffer_len;
+ 		page = virt_to_page(skb->data);
+ 		offset = offset_in_page(skb->data);
+ 		buffer_info->dma = dma_map_page(&pdev->dev, page, offset,
+ 						adapter->rx_buffer_len,
+ 						DMA_FROM_DEVICE);
++		if (dma_mapping_error(&pdev->dev, buffer_info->dma)) {
++			kfree_skb(skb);
++			adapter->soft_stats.rx_dropped++;
++			break;
++		}
++
++		buffer_info->alloced = 1;
++		buffer_info->skb = skb;
++		buffer_info->length = (u16)adapter->rx_buffer_len;
++
+ 		rfd_desc->buffer_addr = cpu_to_le64(buffer_info->dma);
+ 		rfd_desc->buf_len = cpu_to_le16(adapter->rx_buffer_len);
+ 		rfd_desc->coalese = 0;
+@@ -2183,8 +2190,8 @@ static int atl1_tx_csum(struct atl1_adapter *adapter, struct sk_buff *skb,
+ 	return 0;
+ }
+ 
+-static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+-	struct tx_packet_desc *ptpd)
++static bool atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
++			struct tx_packet_desc *ptpd)
+ {
+ 	struct atl1_tpd_ring *tpd_ring = &adapter->tpd_ring;
+ 	struct atl1_buffer *buffer_info;
+@@ -2194,6 +2201,7 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ 	unsigned int nr_frags;
+ 	unsigned int f;
+ 	int retval;
++	u16 first_mapped;
+ 	u16 next_to_use;
+ 	u16 data_len;
+ 	u8 hdr_len;
+@@ -2201,6 +2209,7 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ 	buf_len -= skb->data_len;
+ 	nr_frags = skb_shinfo(skb)->nr_frags;
+ 	next_to_use = atomic_read(&tpd_ring->next_to_use);
++	first_mapped = next_to_use;
+ 	buffer_info = &tpd_ring->buffer_info[next_to_use];
+ 	BUG_ON(buffer_info->skb);
+ 	/* put skb in last TPD */
+@@ -2216,6 +2225,8 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ 		buffer_info->dma = dma_map_page(&adapter->pdev->dev, page,
+ 						offset, hdr_len,
+ 						DMA_TO_DEVICE);
++		if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma))
++			goto dma_err;
+ 
+ 		if (++next_to_use == tpd_ring->count)
+ 			next_to_use = 0;
+@@ -2242,6 +2253,9 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ 								page, offset,
+ 								buffer_info->length,
+ 								DMA_TO_DEVICE);
++				if (dma_mapping_error(&adapter->pdev->dev,
++						      buffer_info->dma))
++					goto dma_err;
+ 				if (++next_to_use == tpd_ring->count)
+ 					next_to_use = 0;
+ 			}
+@@ -2254,6 +2268,8 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ 		buffer_info->dma = dma_map_page(&adapter->pdev->dev, page,
+ 						offset, buf_len,
+ 						DMA_TO_DEVICE);
++		if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma))
++			goto dma_err;
+ 		if (++next_to_use == tpd_ring->count)
+ 			next_to_use = 0;
+ 	}
+@@ -2277,6 +2293,9 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ 			buffer_info->dma = skb_frag_dma_map(&adapter->pdev->dev,
+ 				frag, i * ATL1_MAX_TX_BUF_LEN,
+ 				buffer_info->length, DMA_TO_DEVICE);
++			if (dma_mapping_error(&adapter->pdev->dev,
++					      buffer_info->dma))
++				goto dma_err;
+ 
+ 			if (++next_to_use == tpd_ring->count)
+ 				next_to_use = 0;
+@@ -2285,6 +2304,22 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ 
+ 	/* last tpd's buffer-info */
+ 	buffer_info->skb = skb;
++
++	return true;
++
++ dma_err:
++	while (first_mapped != next_to_use) {
++		buffer_info = &tpd_ring->buffer_info[first_mapped];
++		dma_unmap_page(&adapter->pdev->dev,
++			       buffer_info->dma,
++			       buffer_info->length,
++			       DMA_TO_DEVICE);
++		buffer_info->dma = 0;
++
++		if (++first_mapped == tpd_ring->count)
++			first_mapped = 0;
++	}
++	return false;
+ }
+ 
+ static void atl1_tx_queue(struct atl1_adapter *adapter, u16 count,
+@@ -2355,10 +2390,8 @@ static netdev_tx_t atl1_xmit_frame(struct sk_buff *skb,
+ 
+ 	len = skb_headlen(skb);
+ 
+-	if (unlikely(skb->len <= 0)) {
+-		dev_kfree_skb_any(skb);
+-		return NETDEV_TX_OK;
+-	}
++	if (unlikely(skb->len <= 0))
++		goto drop_packet;
+ 
+ 	nr_frags = skb_shinfo(skb)->nr_frags;
+ 	for (f = 0; f < nr_frags; f++) {
+@@ -2371,10 +2404,9 @@ static netdev_tx_t atl1_xmit_frame(struct sk_buff *skb,
+ 	if (mss) {
+ 		if (skb->protocol == htons(ETH_P_IP)) {
+ 			proto_hdr_len = skb_tcp_all_headers(skb);
+-			if (unlikely(proto_hdr_len > len)) {
+-				dev_kfree_skb_any(skb);
+-				return NETDEV_TX_OK;
+-			}
++			if (unlikely(proto_hdr_len > len))
++				goto drop_packet;
++
+ 			/* need additional TPD ? */
+ 			if (proto_hdr_len != len)
+ 				count += (len - proto_hdr_len +
+@@ -2406,23 +2438,26 @@ static netdev_tx_t atl1_xmit_frame(struct sk_buff *skb,
+ 	}
+ 
+ 	tso = atl1_tso(adapter, skb, ptpd);
+-	if (tso < 0) {
+-		dev_kfree_skb_any(skb);
+-		return NETDEV_TX_OK;
+-	}
++	if (tso < 0)
++		goto drop_packet;
+ 
+ 	if (!tso) {
+ 		ret_val = atl1_tx_csum(adapter, skb, ptpd);
+-		if (ret_val < 0) {
+-			dev_kfree_skb_any(skb);
+-			return NETDEV_TX_OK;
+-		}
++		if (ret_val < 0)
++			goto drop_packet;
+ 	}
+ 
+-	atl1_tx_map(adapter, skb, ptpd);
++	if (!atl1_tx_map(adapter, skb, ptpd))
++		goto drop_packet;
++
+ 	atl1_tx_queue(adapter, count, ptpd);
+ 	atl1_update_mailbox(adapter);
+ 	return NETDEV_TX_OK;
++
++drop_packet:
++	adapter->soft_stats.tx_errors++;
++	dev_kfree_skb_any(skb);
++	return NETDEV_TX_OK;
+ }
+ 
+ static int atl1_rings_clean(struct napi_struct *napi, int budget)
+diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
+index c753c35b26ebd1..5a8ca5be9ca00e 100644
+--- a/drivers/net/ethernet/cisco/enic/enic_main.c
++++ b/drivers/net/ethernet/cisco/enic/enic_main.c
+@@ -1864,10 +1864,10 @@ static int enic_change_mtu(struct net_device *netdev, int new_mtu)
+ 	if (enic_is_dynamic(enic) || enic_is_sriov_vf(enic))
+ 		return -EOPNOTSUPP;
+ 
+-	if (netdev->mtu > enic->port_mtu)
++	if (new_mtu > enic->port_mtu)
+ 		netdev_warn(netdev,
+ 			    "interface MTU (%d) set higher than port MTU (%d)\n",
+-			    netdev->mtu, enic->port_mtu);
++			    new_mtu, enic->port_mtu);
+ 
+ 	return _enic_change_mtu(netdev, new_mtu);
+ }
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index 29886a8ba73f33..efd0048acd3b2d 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -3928,6 +3928,7 @@ static int dpaa2_eth_setup_rx_flow(struct dpaa2_eth_priv *priv,
+ 					 MEM_TYPE_PAGE_ORDER0, NULL);
+ 	if (err) {
+ 		dev_err(dev, "xdp_rxq_info_reg_mem_model failed\n");
++		xdp_rxq_info_unreg(&fq->channel->xdp_rxq);
+ 		return err;
+ 	}
+ 
+@@ -4421,17 +4422,25 @@ static int dpaa2_eth_bind_dpni(struct dpaa2_eth_priv *priv)
+ 			return -EINVAL;
+ 		}
+ 		if (err)
+-			return err;
++			goto out;
+ 	}
+ 
+ 	err = dpni_get_qdid(priv->mc_io, 0, priv->mc_token,
+ 			    DPNI_QUEUE_TX, &priv->tx_qdid);
+ 	if (err) {
+ 		dev_err(dev, "dpni_get_qdid() failed\n");
+-		return err;
++		goto out;
+ 	}
+ 
+ 	return 0;
++
++out:
++	while (i--) {
++		if (priv->fq[i].type == DPAA2_RX_FQ &&
++		    xdp_rxq_info_is_reg(&priv->fq[i].channel->xdp_rxq))
++			xdp_rxq_info_unreg(&priv->fq[i].channel->xdp_rxq);
++	}
++	return err;
+ }
+ 
+ /* Allocate rings for storing incoming frame descriptors */
+@@ -4814,6 +4823,17 @@ static void dpaa2_eth_del_ch_napi(struct dpaa2_eth_priv *priv)
+ 	}
+ }
+ 
++static void dpaa2_eth_free_rx_xdp_rxq(struct dpaa2_eth_priv *priv)
++{
++	int i;
++
++	for (i = 0; i < priv->num_fqs; i++) {
++		if (priv->fq[i].type == DPAA2_RX_FQ &&
++		    xdp_rxq_info_is_reg(&priv->fq[i].channel->xdp_rxq))
++			xdp_rxq_info_unreg(&priv->fq[i].channel->xdp_rxq);
++	}
++}
++
+ static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
+ {
+ 	struct device *dev;
+@@ -5017,6 +5037,7 @@ static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
+ 	free_percpu(priv->percpu_stats);
+ err_alloc_percpu_stats:
+ 	dpaa2_eth_del_ch_napi(priv);
++	dpaa2_eth_free_rx_xdp_rxq(priv);
+ err_bind:
+ 	dpaa2_eth_free_dpbps(priv);
+ err_dpbp_setup:
+@@ -5069,6 +5090,7 @@ static void dpaa2_eth_remove(struct fsl_mc_device *ls_dev)
+ 	free_percpu(priv->percpu_extras);
+ 
+ 	dpaa2_eth_del_ch_napi(priv);
++	dpaa2_eth_free_rx_xdp_rxq(priv);
+ 	dpaa2_eth_free_dpbps(priv);
+ 	dpaa2_eth_free_dpio(priv);
+ 	dpaa2_eth_free_dpni(priv);
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_controlq.c b/drivers/net/ethernet/intel/idpf/idpf_controlq.c
+index b28991dd187036..48b8e184f3db63 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_controlq.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_controlq.c
+@@ -96,7 +96,7 @@ static void idpf_ctlq_init_rxq_bufs(struct idpf_ctlq_info *cq)
+  */
+ static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
+ {
+-	mutex_lock(&cq->cq_lock);
++	spin_lock(&cq->cq_lock);
+ 
+ 	/* free ring buffers and the ring itself */
+ 	idpf_ctlq_dealloc_ring_res(hw, cq);
+@@ -104,8 +104,7 @@ static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
+ 	/* Set ring_size to 0 to indicate uninitialized queue */
+ 	cq->ring_size = 0;
+ 
+-	mutex_unlock(&cq->cq_lock);
+-	mutex_destroy(&cq->cq_lock);
++	spin_unlock(&cq->cq_lock);
+ }
+ 
+ /**
+@@ -173,7 +172,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
+ 
+ 	idpf_ctlq_init_regs(hw, cq, is_rxq);
+ 
+-	mutex_init(&cq->cq_lock);
++	spin_lock_init(&cq->cq_lock);
+ 
+ 	list_add(&cq->cq_list, &hw->cq_list_head);
+ 
+@@ -272,7 +271,7 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
+ 	int err = 0;
+ 	int i;
+ 
+-	mutex_lock(&cq->cq_lock);
++	spin_lock(&cq->cq_lock);
+ 
+ 	/* Ensure there are enough descriptors to send all messages */
+ 	num_desc_avail = IDPF_CTLQ_DESC_UNUSED(cq);
+@@ -332,7 +331,7 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
+ 	wr32(hw, cq->reg.tail, cq->next_to_use);
+ 
+ err_unlock:
+-	mutex_unlock(&cq->cq_lock);
++	spin_unlock(&cq->cq_lock);
+ 
+ 	return err;
+ }
+@@ -364,7 +363,7 @@ int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
+ 	if (*clean_count > cq->ring_size)
+ 		return -EBADR;
+ 
+-	mutex_lock(&cq->cq_lock);
++	spin_lock(&cq->cq_lock);
+ 
+ 	ntc = cq->next_to_clean;
+ 
+@@ -397,7 +396,7 @@ int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
+ 
+ 	cq->next_to_clean = ntc;
+ 
+-	mutex_unlock(&cq->cq_lock);
++	spin_unlock(&cq->cq_lock);
+ 
+ 	/* Return number of descriptors actually cleaned */
+ 	*clean_count = i;
+@@ -435,7 +434,7 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
+ 	if (*buff_count > 0)
+ 		buffs_avail = true;
+ 
+-	mutex_lock(&cq->cq_lock);
++	spin_lock(&cq->cq_lock);
+ 
+ 	if (tbp >= cq->ring_size)
+ 		tbp = 0;
+@@ -524,7 +523,7 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
+ 		wr32(hw, cq->reg.tail, cq->next_to_post);
+ 	}
+ 
+-	mutex_unlock(&cq->cq_lock);
++	spin_unlock(&cq->cq_lock);
+ 
+ 	/* return the number of buffers that were not posted */
+ 	*buff_count = *buff_count - i;
+@@ -552,7 +551,7 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
+ 	u16 i;
+ 
+ 	/* take the lock before we start messing with the ring */
+-	mutex_lock(&cq->cq_lock);
++	spin_lock(&cq->cq_lock);
+ 
+ 	ntc = cq->next_to_clean;
+ 
+@@ -614,7 +613,7 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
+ 
+ 	cq->next_to_clean = ntc;
+ 
+-	mutex_unlock(&cq->cq_lock);
++	spin_unlock(&cq->cq_lock);
+ 
+ 	*num_q_msg = i;
+ 	if (*num_q_msg == 0)
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_controlq_api.h b/drivers/net/ethernet/intel/idpf/idpf_controlq_api.h
+index e8e046ef2f0d76..5890d8adca4a87 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_controlq_api.h
++++ b/drivers/net/ethernet/intel/idpf/idpf_controlq_api.h
+@@ -99,7 +99,7 @@ struct idpf_ctlq_info {
+ 
+ 	enum idpf_ctlq_type cq_type;
+ 	int q_id;
+-	struct mutex cq_lock;		/* control queue lock */
++	spinlock_t cq_lock;		/* control queue lock */
+ 	/* used for interrupt processing */
+ 	u16 next_to_use;
+ 	u16 next_to_clean;
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
+index 59b1a1a099967f..f72420cf68216c 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
+@@ -46,7 +46,7 @@ static u32 idpf_get_rxfh_key_size(struct net_device *netdev)
+ 	struct idpf_vport_user_config_data *user_config;
+ 
+ 	if (!idpf_is_cap_ena_all(np->adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS))
+-		return -EOPNOTSUPP;
++		return 0;
+ 
+ 	user_config = &np->adapter->vport_config[np->vport_idx]->user_config;
+ 
+@@ -65,7 +65,7 @@ static u32 idpf_get_rxfh_indir_size(struct net_device *netdev)
+ 	struct idpf_vport_user_config_data *user_config;
+ 
+ 	if (!idpf_is_cap_ena_all(np->adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS))
+-		return -EOPNOTSUPP;
++		return 0;
+ 
+ 	user_config = &np->adapter->vport_config[np->vport_idx]->user_config;
+ 
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+index 2ed801398971cc..fe96e20573660f 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+@@ -2329,8 +2329,12 @@ void *idpf_alloc_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem, u64 size)
+ 	struct idpf_adapter *adapter = hw->back;
+ 	size_t sz = ALIGN(size, 4096);
+ 
+-	mem->va = dma_alloc_coherent(&adapter->pdev->dev, sz,
+-				     &mem->pa, GFP_KERNEL);
++	/* The control queue resources are freed under a spinlock; contiguous
++	 * pages avoid IOMMU remapping and the use of vmap (and vunmap in the
++	 * dma_free_*() path).
++	 */
++	mem->va = dma_alloc_attrs(&adapter->pdev->dev, sz, &mem->pa,
++				  GFP_KERNEL, DMA_ATTR_FORCE_CONTIGUOUS);
+ 	mem->size = sz;
+ 
+ 	return mem->va;
+@@ -2345,8 +2349,8 @@ void idpf_free_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem)
+ {
+ 	struct idpf_adapter *adapter = hw->back;
+ 
+-	dma_free_coherent(&adapter->pdev->dev, mem->size,
+-			  mem->va, mem->pa);
++	dma_free_attrs(&adapter->pdev->dev, mem->size,
++		       mem->va, mem->pa, DMA_ATTR_FORCE_CONTIGUOUS);
+ 	mem->size = 0;
+ 	mem->va = NULL;
+ 	mem->pa = 0;
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index b1669d7cf43591..7d8cc783b22820 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -7046,6 +7046,10 @@ static int igc_probe(struct pci_dev *pdev,
+ 	adapter->port_num = hw->bus.func;
+ 	adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE);
+ 
++	/* Disable ASPM L1.2 on I226 devices to avoid packet loss */
++	if (igc_is_device_id_i226(hw))
++		pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2);
++
+ 	err = pci_save_state(pdev);
+ 	if (err)
+ 		goto err_ioremap;
+@@ -7426,6 +7430,9 @@ static int __igc_resume(struct device *dev, bool rpm)
+ 	pci_enable_wake(pdev, PCI_D3hot, 0);
+ 	pci_enable_wake(pdev, PCI_D3cold, 0);
+ 
++	if (igc_is_device_id_i226(hw))
++		pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2);
++
+ 	if (igc_init_interrupt_scheme(adapter, true)) {
+ 		netdev_err(netdev, "Unable to allocate memory for queues\n");
+ 		return -ENOMEM;
+@@ -7551,6 +7558,9 @@ static pci_ers_result_t igc_io_slot_reset(struct pci_dev *pdev)
+ 		pci_enable_wake(pdev, PCI_D3hot, 0);
+ 		pci_enable_wake(pdev, PCI_D3cold, 0);
+ 
++		if (igc_is_device_id_i226(hw))
++			pci_disable_link_state_locked(pdev, PCIE_LINK_STATE_L1_2);
++
+ 		/* In case of PCI error, adapter loses its HW address
+ 		 * so we should re-assign it here.
+ 		 */
+diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c
+index 379b6e90121d9f..02b8a9e5c9a902 100644
+--- a/drivers/net/ethernet/sun/niu.c
++++ b/drivers/net/ethernet/sun/niu.c
+@@ -3336,7 +3336,7 @@ static int niu_rbr_add_page(struct niu *np, struct rx_ring_info *rp,
+ 
+ 	addr = np->ops->map_page(np->device, page, 0,
+ 				 PAGE_SIZE, DMA_FROM_DEVICE);
+-	if (!addr) {
++	if (np->ops->mapping_error(np->device, addr)) {
+ 		__free_page(page);
+ 		return -ENOMEM;
+ 	}
+@@ -6676,6 +6676,8 @@ static netdev_tx_t niu_start_xmit(struct sk_buff *skb,
+ 	len = skb_headlen(skb);
+ 	mapping = np->ops->map_single(np->device, skb->data,
+ 				      len, DMA_TO_DEVICE);
++	if (np->ops->mapping_error(np->device, mapping))
++		goto out_drop;
+ 
+ 	prod = rp->prod;
+ 
+@@ -6717,6 +6719,8 @@ static netdev_tx_t niu_start_xmit(struct sk_buff *skb,
+ 		mapping = np->ops->map_page(np->device, skb_frag_page(frag),
+ 					    skb_frag_off(frag), len,
+ 					    DMA_TO_DEVICE);
++		if (np->ops->mapping_error(np->device, mapping))
++			goto out_unmap;
+ 
+ 		rp->tx_buffs[prod].skb = NULL;
+ 		rp->tx_buffs[prod].mapping = mapping;
+@@ -6741,6 +6745,19 @@ static netdev_tx_t niu_start_xmit(struct sk_buff *skb,
+ out:
+ 	return NETDEV_TX_OK;
+ 
++out_unmap:
++	while (i--) {
++		const skb_frag_t *frag;
++
++		prod = PREVIOUS_TX(rp, prod);
++		frag = &skb_shinfo(skb)->frags[i];
++		np->ops->unmap_page(np->device, rp->tx_buffs[prod].mapping,
++				    skb_frag_size(frag), DMA_TO_DEVICE);
++	}
++
++	np->ops->unmap_single(np->device, rp->tx_buffs[rp->prod].mapping,
++			      skb_headlen(skb), DMA_TO_DEVICE);
++
+ out_drop:
+ 	rp->tx_errors++;
+ 	kfree_skb(skb);
+@@ -9644,6 +9661,11 @@ static void niu_pci_unmap_single(struct device *dev, u64 dma_address,
+ 	dma_unmap_single(dev, dma_address, size, direction);
+ }
+ 
++static int niu_pci_mapping_error(struct device *dev, u64 addr)
++{
++	return dma_mapping_error(dev, addr);
++}
++
+ static const struct niu_ops niu_pci_ops = {
+ 	.alloc_coherent	= niu_pci_alloc_coherent,
+ 	.free_coherent	= niu_pci_free_coherent,
+@@ -9651,6 +9673,7 @@ static const struct niu_ops niu_pci_ops = {
+ 	.unmap_page	= niu_pci_unmap_page,
+ 	.map_single	= niu_pci_map_single,
+ 	.unmap_single	= niu_pci_unmap_single,
++	.mapping_error	= niu_pci_mapping_error,
+ };
+ 
+ static void niu_driver_version(void)
+@@ -10019,6 +10042,11 @@ static void niu_phys_unmap_single(struct device *dev, u64 dma_address,
+ 	/* Nothing to do.  */
+ }
+ 
++static int niu_phys_mapping_error(struct device *dev, u64 dma_address)
++{
++	return false;
++}
++
+ static const struct niu_ops niu_phys_ops = {
+ 	.alloc_coherent	= niu_phys_alloc_coherent,
+ 	.free_coherent	= niu_phys_free_coherent,
+@@ -10026,6 +10054,7 @@ static const struct niu_ops niu_phys_ops = {
+ 	.unmap_page	= niu_phys_unmap_page,
+ 	.map_single	= niu_phys_map_single,
+ 	.unmap_single	= niu_phys_unmap_single,
++	.mapping_error	= niu_phys_mapping_error,
+ };
+ 
+ static int niu_of_probe(struct platform_device *op)
+diff --git a/drivers/net/ethernet/sun/niu.h b/drivers/net/ethernet/sun/niu.h
+index 04c215f91fc08e..0b169c08b0f2d1 100644
+--- a/drivers/net/ethernet/sun/niu.h
++++ b/drivers/net/ethernet/sun/niu.h
+@@ -2879,6 +2879,9 @@ struct tx_ring_info {
+ #define NEXT_TX(tp, index) \
+ 	(((index) + 1) < (tp)->pending ? ((index) + 1) : 0)
+ 
++#define PREVIOUS_TX(tp, index) \
++	(((index) - 1) >= 0 ? ((index) - 1) : (((tp)->pending) - 1))
++
+ static inline u32 niu_tx_avail(struct tx_ring_info *tp)
+ {
+ 	return (tp->pending -
+@@ -3140,6 +3143,7 @@ struct niu_ops {
+ 			  enum dma_data_direction direction);
+ 	void (*unmap_single)(struct device *dev, u64 dma_address,
+ 			     size_t size, enum dma_data_direction direction);
++	int (*mapping_error)(struct device *dev, u64 dma_address);
+ };
+ 
+ struct niu_link_config {
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_lib.c b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
+index f4242bbad5fc57..f77bdf732f5f37 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_lib.c
++++ b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
+@@ -1641,6 +1641,7 @@ static void wx_set_rss_queues(struct wx *wx)
+ 
+ 	clear_bit(WX_FLAG_FDIR_HASH, wx->flags);
+ 
++	wx->ring_feature[RING_F_FDIR].indices = 1;
+ 	/* Use Flow Director in addition to RSS to ensure the best
+ 	 * distribution of flows across cores, even when an FDIR flow
+ 	 * isn't matched.
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c
+index 8658a51ee8106e..44547e69c026fe 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c
+@@ -68,7 +68,6 @@ int txgbe_request_queue_irqs(struct wx *wx)
+ 		free_irq(wx->msix_q_entries[vector].vector,
+ 			 wx->q_vector[vector]);
+ 	}
+-	wx_reset_interrupt_capability(wx);
+ 	return err;
+ }
+ 
+@@ -172,6 +171,7 @@ void txgbe_free_misc_irq(struct txgbe *txgbe)
+ 	free_irq(txgbe->link_irq, txgbe);
+ 	free_irq(txgbe->misc.irq, txgbe);
+ 	txgbe_del_irq_domain(txgbe);
++	txgbe->wx->misc_irq_domain = false;
+ }
+ 
+ int txgbe_setup_misc_irq(struct txgbe *txgbe)
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+index 38206a46693bcd..acfef74fd2fc6d 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+@@ -354,10 +354,14 @@ static int txgbe_open(struct net_device *netdev)
+ 
+ 	wx_configure(wx);
+ 
+-	err = txgbe_request_queue_irqs(wx);
++	err = txgbe_setup_misc_irq(wx->priv);
+ 	if (err)
+ 		goto err_free_resources;
+ 
++	err = txgbe_request_queue_irqs(wx);
++	if (err)
++		goto err_free_misc_irq;
++
+ 	/* Notify the stack of the actual queue counts. */
+ 	err = netif_set_real_num_tx_queues(netdev, wx->num_tx_queues);
+ 	if (err)
+@@ -375,6 +379,9 @@ static int txgbe_open(struct net_device *netdev)
+ 
+ err_free_irq:
+ 	wx_free_irq(wx);
++err_free_misc_irq:
++	txgbe_free_misc_irq(wx->priv);
++	wx_reset_interrupt_capability(wx);
+ err_free_resources:
+ 	wx_free_resources(wx);
+ err_reset:
+@@ -415,6 +422,7 @@ static int txgbe_close(struct net_device *netdev)
+ 	wx_ptp_stop(wx);
+ 	txgbe_down(wx);
+ 	wx_free_irq(wx);
++	txgbe_free_misc_irq(wx->priv);
+ 	wx_free_resources(wx);
+ 	txgbe_fdir_filter_exit(wx);
+ 	wx_control_hw(wx, false);
+@@ -460,7 +468,6 @@ static void txgbe_shutdown(struct pci_dev *pdev)
+ int txgbe_setup_tc(struct net_device *dev, u8 tc)
+ {
+ 	struct wx *wx = netdev_priv(dev);
+-	struct txgbe *txgbe = wx->priv;
+ 
+ 	/* Hardware has to reinitialize queues and interrupts to
+ 	 * match packet buffer alignment. Unfortunately, the
+@@ -471,7 +478,6 @@ int txgbe_setup_tc(struct net_device *dev, u8 tc)
+ 	else
+ 		txgbe_reset(wx);
+ 
+-	txgbe_free_misc_irq(txgbe);
+ 	wx_clear_interrupt_scheme(wx);
+ 
+ 	if (tc)
+@@ -480,7 +486,6 @@ int txgbe_setup_tc(struct net_device *dev, u8 tc)
+ 		netdev_reset_tc(dev);
+ 
+ 	wx_init_interrupt_scheme(wx);
+-	txgbe_setup_misc_irq(txgbe);
+ 
+ 	if (netif_running(dev))
+ 		txgbe_open(dev);
+@@ -729,13 +734,9 @@ static int txgbe_probe(struct pci_dev *pdev,
+ 
+ 	txgbe_init_fdir(txgbe);
+ 
+-	err = txgbe_setup_misc_irq(txgbe);
+-	if (err)
+-		goto err_release_hw;
+-
+ 	err = txgbe_init_phy(txgbe);
+ 	if (err)
+-		goto err_free_misc_irq;
++		goto err_release_hw;
+ 
+ 	err = register_netdev(netdev);
+ 	if (err)
+@@ -763,8 +764,6 @@ static int txgbe_probe(struct pci_dev *pdev,
+ 
+ err_remove_phy:
+ 	txgbe_remove_phy(txgbe);
+-err_free_misc_irq:
+-	txgbe_free_misc_irq(txgbe);
+ err_release_hw:
+ 	wx_clear_interrupt_scheme(wx);
+ 	wx_control_hw(wx, false);
+@@ -798,7 +797,6 @@ static void txgbe_remove(struct pci_dev *pdev)
+ 	unregister_netdev(netdev);
+ 
+ 	txgbe_remove_phy(txgbe);
+-	txgbe_free_misc_irq(txgbe);
+ 	wx_free_isb_resources(wx);
+ 
+ 	pci_release_selected_regions(pdev,
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index e4f1663b62047b..5f014b86268538 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -4344,8 +4344,6 @@ static void lan78xx_disconnect(struct usb_interface *intf)
+ 	if (!dev)
+ 		return;
+ 
+-	netif_napi_del(&dev->napi);
+-
+ 	udev = interface_to_usbdev(intf);
+ 	net = dev->net;
+ 
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index e53ba600605a5d..d5be73a9647081 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -778,6 +778,26 @@ static unsigned int mergeable_ctx_to_truesize(void *mrg_ctx)
+ 	return (unsigned long)mrg_ctx & ((1 << MRG_CTX_HEADER_SHIFT) - 1);
+ }
+ 
++static int check_mergeable_len(struct net_device *dev, void *mrg_ctx,
++			       unsigned int len)
++{
++	unsigned int headroom, tailroom, room, truesize;
++
++	truesize = mergeable_ctx_to_truesize(mrg_ctx);
++	headroom = mergeable_ctx_to_headroom(mrg_ctx);
++	tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
++	room = SKB_DATA_ALIGN(headroom + tailroom);
++
++	if (len > truesize - room) {
++		pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
++			 dev->name, len, (unsigned long)(truesize - room));
++		DEV_STATS_INC(dev, rx_length_errors);
++		return -1;
++	}
++
++	return 0;
++}
++
+ static struct sk_buff *virtnet_build_skb(void *buf, unsigned int buflen,
+ 					 unsigned int headroom,
+ 					 unsigned int len)
+@@ -1127,15 +1147,29 @@ static void check_sq_full_and_disable(struct virtnet_info *vi,
+ 	}
+ }
+ 
++/* Note that @len is the length of the received data without the virtio header */
+ static struct xdp_buff *buf_to_xdp(struct virtnet_info *vi,
+-				   struct receive_queue *rq, void *buf, u32 len)
++				   struct receive_queue *rq, void *buf,
++				   u32 len, bool first_buf)
+ {
+ 	struct xdp_buff *xdp;
+ 	u32 bufsize;
+ 
+ 	xdp = (struct xdp_buff *)buf;
+ 
+-	bufsize = xsk_pool_get_rx_frame_size(rq->xsk_pool) + vi->hdr_len;
++	/* In virtnet_add_recvbuf_xsk, we use part of XDP_PACKET_HEADROOM for
++	 * the virtio header and ask the vhost to fill data from
++	 *         hard_start + XDP_PACKET_HEADROOM - vi->hdr_len
++	 * The first buffer carries the virtio header, so the region left for
++	 * frame data is
++	 *         xsk_pool_get_rx_frame_size()
++	 * Buffers other than the first do not carry a virtio header, so their
++	 * maximum frame data length can be
++	 *         xsk_pool_get_rx_frame_size() + vi->hdr_len
++	 */
++	bufsize = xsk_pool_get_rx_frame_size(rq->xsk_pool);
++	if (!first_buf)
++		bufsize += vi->hdr_len;
+ 
+ 	if (unlikely(len > bufsize)) {
+ 		pr_debug("%s: rx error: len %u exceeds truesize %u\n",
+@@ -1260,7 +1294,7 @@ static int xsk_append_merge_buffer(struct virtnet_info *vi,
+ 
+ 		u64_stats_add(&stats->bytes, len);
+ 
+-		xdp = buf_to_xdp(vi, rq, buf, len);
++		xdp = buf_to_xdp(vi, rq, buf, len, false);
+ 		if (!xdp)
+ 			goto err;
+ 
+@@ -1358,7 +1392,7 @@ static void virtnet_receive_xsk_buf(struct virtnet_info *vi, struct receive_queu
+ 
+ 	u64_stats_add(&stats->bytes, len);
+ 
+-	xdp = buf_to_xdp(vi, rq, buf, len);
++	xdp = buf_to_xdp(vi, rq, buf, len, true);
+ 	if (!xdp)
+ 		return;
+ 
+@@ -1797,7 +1831,8 @@ static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
+  * across multiple buffers (num_buf > 1), and we make sure buffers
+  * have enough headroom.
+  */
+-static struct page *xdp_linearize_page(struct receive_queue *rq,
++static struct page *xdp_linearize_page(struct net_device *dev,
++				       struct receive_queue *rq,
+ 				       int *num_buf,
+ 				       struct page *p,
+ 				       int offset,
+@@ -1817,18 +1852,27 @@ static struct page *xdp_linearize_page(struct receive_queue *rq,
+ 	memcpy(page_address(page) + page_off, page_address(p) + offset, *len);
+ 	page_off += *len;
+ 
++	/* Only mergeable mode can enter this while loop. In small mode,
++	 * *num_buf == 1, so the loop body is never executed.
++	 */
+ 	while (--*num_buf) {
+ 		unsigned int buflen;
+ 		void *buf;
++		void *ctx;
+ 		int off;
+ 
+-		buf = virtnet_rq_get_buf(rq, &buflen, NULL);
++		buf = virtnet_rq_get_buf(rq, &buflen, &ctx);
+ 		if (unlikely(!buf))
+ 			goto err_buf;
+ 
+ 		p = virt_to_head_page(buf);
+ 		off = buf - page_address(p);
+ 
++		if (check_mergeable_len(dev, ctx, buflen)) {
++			put_page(p);
++			goto err_buf;
++		}
++
+ 		/* guard against a misconfigured or uncooperative backend that
+ 		 * is sending packet larger than the MTU.
+ 		 */
+@@ -1917,7 +1961,7 @@ static struct sk_buff *receive_small_xdp(struct net_device *dev,
+ 		headroom = vi->hdr_len + header_offset;
+ 		buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) +
+ 			SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+-		xdp_page = xdp_linearize_page(rq, &num_buf, page,
++		xdp_page = xdp_linearize_page(dev, rq, &num_buf, page,
+ 					      offset, header_offset,
+ 					      &tlen);
+ 		if (!xdp_page)
+@@ -2252,7 +2296,7 @@ static void *mergeable_xdp_get_buf(struct virtnet_info *vi,
+ 	 */
+ 	if (!xdp_prog->aux->xdp_has_frags) {
+ 		/* linearize data for XDP */
+-		xdp_page = xdp_linearize_page(rq, num_buf,
++		xdp_page = xdp_linearize_page(vi->dev, rq, num_buf,
+ 					      *page, offset,
+ 					      XDP_PACKET_HEADROOM,
+ 					      len);
+diff --git a/drivers/net/wireless/ath/ath6kl/bmi.c b/drivers/net/wireless/ath/ath6kl/bmi.c
+index af98e871199d31..5a9e93fd1ef42a 100644
+--- a/drivers/net/wireless/ath/ath6kl/bmi.c
++++ b/drivers/net/wireless/ath/ath6kl/bmi.c
+@@ -87,7 +87,9 @@ int ath6kl_bmi_get_target_info(struct ath6kl *ar,
+ 		 * We need to do some backwards compatibility to make this work.
+ 		 */
+ 		if (le32_to_cpu(targ_info->byte_count) != sizeof(*targ_info)) {
+-			WARN_ON(1);
++			ath6kl_err("mismatched byte count %d vs. expected %zd\n",
++				   le32_to_cpu(targ_info->byte_count),
++				   sizeof(*targ_info));
+ 			return -EINVAL;
+ 		}
+ 
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index d253b829011107..ae584c97f52842 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -382,7 +382,7 @@ static void nvme_log_err_passthru(struct request *req)
+ 		nr->cmd->common.cdw12,
+ 		nr->cmd->common.cdw13,
+ 		nr->cmd->common.cdw14,
+-		nr->cmd->common.cdw14);
++		nr->cmd->common.cdw15);
+ }
+ 
+ enum nvme_disposition {
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index cf0ef4745564c3..700dfbd5a4512b 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -1050,7 +1050,8 @@ void nvme_mpath_add_sysfs_link(struct nvme_ns_head *head)
+ 	 */
+ 	srcu_idx = srcu_read_lock(&head->srcu);
+ 
+-	list_for_each_entry_rcu(ns, &head->list, siblings) {
++	list_for_each_entry_srcu(ns, &head->list, siblings,
++				 srcu_read_lock_held(&head->srcu)) {
+ 		/*
+ 		 * Ensure that ns path disk node is already added otherwise we
+ 		 * may get invalid kobj name for target
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index f1dd804151b1c9..776c867fb64d57 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2031,8 +2031,6 @@ static void nvme_map_cmb(struct nvme_dev *dev)
+ 	if ((dev->cmbsz & (NVME_CMBSZ_WDS | NVME_CMBSZ_RDS)) ==
+ 			(NVME_CMBSZ_WDS | NVME_CMBSZ_RDS))
+ 		pci_p2pmem_publish(pdev, true);
+-
+-	nvme_update_attrs(dev);
+ }
+ 
+ static int nvme_set_host_mem(struct nvme_dev *dev, u32 bits)
+@@ -2969,6 +2967,8 @@ static void nvme_reset_work(struct work_struct *work)
+ 	if (result < 0)
+ 		goto out;
+ 
++	nvme_update_attrs(dev);
++
+ 	result = nvme_setup_io_queues(dev);
+ 	if (result)
+ 		goto out;
+@@ -3305,6 +3305,8 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	if (result < 0)
+ 		goto out_disable;
+ 
++	nvme_update_attrs(dev);
++
+ 	result = nvme_setup_io_queues(dev);
+ 	if (result)
+ 		goto out_disable;
+diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
+index b6db8b74dc4ad3..85e50e29817422 100644
+--- a/drivers/nvme/target/nvmet.h
++++ b/drivers/nvme/target/nvmet.h
+@@ -857,6 +857,8 @@ static inline void nvmet_req_bio_put(struct nvmet_req *req, struct bio *bio)
+ {
+ 	if (bio != &req->b.inline_bio)
+ 		bio_put(bio);
++	else
++		bio_uninit(bio);
+ }
+ 
+ #ifdef CONFIG_NVME_TARGET_TCP_TLS
+diff --git a/drivers/platform/mellanox/mlxbf-pmc.c b/drivers/platform/mellanox/mlxbf-pmc.c
+index 36a00692347dce..771a24c397c12b 100644
+--- a/drivers/platform/mellanox/mlxbf-pmc.c
++++ b/drivers/platform/mellanox/mlxbf-pmc.c
+@@ -713,7 +713,7 @@ static const struct mlxbf_pmc_events mlxbf_pmc_llt_events[] = {
+ 	{101, "GDC_BANK0_HIT_DCL_PARTIAL"},
+ 	{102, "GDC_BANK0_EVICT_DCL"},
+ 	{103, "GDC_BANK0_G_RSE_PIPE_CACHE_DATA0"},
+-	{103, "GDC_BANK0_G_RSE_PIPE_CACHE_DATA1"},
++	{104, "GDC_BANK0_G_RSE_PIPE_CACHE_DATA1"},
+ 	{105, "GDC_BANK0_ARB_STRB"},
+ 	{106, "GDC_BANK0_ARB_WAIT"},
+ 	{107, "GDC_BANK0_GGA_STRB"},
+diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c
+index aae99adb29eb0e..70c58c4c6c842a 100644
+--- a/drivers/platform/mellanox/mlxbf-tmfifo.c
++++ b/drivers/platform/mellanox/mlxbf-tmfifo.c
+@@ -281,7 +281,8 @@ static int mlxbf_tmfifo_alloc_vrings(struct mlxbf_tmfifo *fifo,
+ 		vring->align = SMP_CACHE_BYTES;
+ 		vring->index = i;
+ 		vring->vdev_id = tm_vdev->vdev.id.device;
+-		vring->drop_desc.len = VRING_DROP_DESC_MAX_LEN;
++		vring->drop_desc.len = cpu_to_virtio32(&tm_vdev->vdev,
++						       VRING_DROP_DESC_MAX_LEN);
+ 		dev = &tm_vdev->vdev.dev;
+ 
+ 		size = vring_size(vring->num, vring->align);
+diff --git a/drivers/platform/mellanox/mlxreg-lc.c b/drivers/platform/mellanox/mlxreg-lc.c
+index aee395bb48ae44..8681ceb7144bae 100644
+--- a/drivers/platform/mellanox/mlxreg-lc.c
++++ b/drivers/platform/mellanox/mlxreg-lc.c
+@@ -688,7 +688,7 @@ static int mlxreg_lc_completion_notify(void *handle, struct i2c_adapter *parent,
+ 	if (regval & mlxreg_lc->data->mask) {
+ 		mlxreg_lc->state |= MLXREG_LC_SYNCED;
+ 		mlxreg_lc_state_update_locked(mlxreg_lc, MLXREG_LC_SYNCED, 1);
+-		if (mlxreg_lc->state & ~MLXREG_LC_POWERED) {
++		if (!(mlxreg_lc->state & MLXREG_LC_POWERED)) {
+ 			err = mlxreg_lc_power_on_off(mlxreg_lc, 1);
+ 			if (err)
+ 				goto mlxreg_lc_regmap_power_on_off_fail;
+diff --git a/drivers/platform/mellanox/nvsw-sn2201.c b/drivers/platform/mellanox/nvsw-sn2201.c
+index 0c047aa2345b36..490001a9afd769 100644
+--- a/drivers/platform/mellanox/nvsw-sn2201.c
++++ b/drivers/platform/mellanox/nvsw-sn2201.c
+@@ -1088,7 +1088,7 @@ static int nvsw_sn2201_i2c_completion_notify(void *handle, int id)
+ 	if (!nvsw_sn2201->main_mux_devs->adapter) {
+ 		err = -ENODEV;
+ 		dev_err(nvsw_sn2201->dev, "Failed to get adapter for bus %d\n",
+-			nvsw_sn2201->cpld_devs->nr);
++			nvsw_sn2201->main_mux_devs->nr);
+ 		goto i2c_get_adapter_main_fail;
+ 	}
+ 
+diff --git a/drivers/platform/x86/amd/hsmp/hsmp.c b/drivers/platform/x86/amd/hsmp/hsmp.c
+index a3ac09a90de456..ab877112f4c809 100644
+--- a/drivers/platform/x86/amd/hsmp/hsmp.c
++++ b/drivers/platform/x86/amd/hsmp/hsmp.c
+@@ -99,7 +99,7 @@ static int __hsmp_send_message(struct hsmp_socket *sock, struct hsmp_message *ms
+ 	short_sleep = jiffies + msecs_to_jiffies(HSMP_SHORT_SLEEP);
+ 	timeout	= jiffies + msecs_to_jiffies(HSMP_MSG_TIMEOUT);
+ 
+-	while (time_before(jiffies, timeout)) {
++	while (true) {
+ 		ret = sock->amd_hsmp_rdwr(sock, mbinfo->msg_resp_off, &mbox_status, HSMP_RD);
+ 		if (ret) {
+ 			dev_err(sock->dev, "Error %d reading mailbox status\n", ret);
+@@ -108,6 +108,10 @@ static int __hsmp_send_message(struct hsmp_socket *sock, struct hsmp_message *ms
+ 
+ 		if (mbox_status != HSMP_STATUS_NOT_READY)
+ 			break;
++
++		if (!time_before(jiffies, timeout))
++			break;
++
+ 		if (time_before(jiffies, short_sleep))
+ 			usleep_range(50, 100);
+ 		else
+diff --git a/drivers/platform/x86/amd/pmc/pmc-quirks.c b/drivers/platform/x86/amd/pmc/pmc-quirks.c
+index 2e3f6fc67c568d..7ed12c1d3b34c0 100644
+--- a/drivers/platform/x86/amd/pmc/pmc-quirks.c
++++ b/drivers/platform/x86/amd/pmc/pmc-quirks.c
+@@ -224,6 +224,15 @@ static const struct dmi_system_id fwbug_list[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "WUJIE14-GX4HRXL"),
+ 		}
+ 	},
++	/* https://bugzilla.kernel.org/show_bug.cgi?id=220116 */
++	{
++		.ident = "PCSpecialist Lafite Pro V 14M",
++		.driver_data = &quirk_spurious_8042,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "PCSpecialist"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Lafite Pro V 14M"),
++		}
++	},
+ 	{}
+ };
+ 
+diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/dell-wmi-sysman.h b/drivers/platform/x86/dell/dell-wmi-sysman/dell-wmi-sysman.h
+index 3ad33a094588c6..817ee7ba07ca08 100644
+--- a/drivers/platform/x86/dell/dell-wmi-sysman/dell-wmi-sysman.h
++++ b/drivers/platform/x86/dell/dell-wmi-sysman/dell-wmi-sysman.h
+@@ -89,6 +89,11 @@ extern struct wmi_sysman_priv wmi_priv;
+ 
+ enum { ENUM, INT, STR, PO };
+ 
++#define ENUM_MIN_ELEMENTS		8
++#define INT_MIN_ELEMENTS		9
++#define STR_MIN_ELEMENTS		8
++#define PO_MIN_ELEMENTS			4
++
+ enum {
+ 	ATTR_NAME,
+ 	DISPL_NAME_LANG_CODE,
+diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/enum-attributes.c b/drivers/platform/x86/dell/dell-wmi-sysman/enum-attributes.c
+index 8cc212c8526683..fc2f58b4cbc6ef 100644
+--- a/drivers/platform/x86/dell/dell-wmi-sysman/enum-attributes.c
++++ b/drivers/platform/x86/dell/dell-wmi-sysman/enum-attributes.c
+@@ -23,9 +23,10 @@ static ssize_t current_value_show(struct kobject *kobj, struct kobj_attribute *a
+ 	obj = get_wmiobj_pointer(instance_id, DELL_WMI_BIOS_ENUMERATION_ATTRIBUTE_GUID);
+ 	if (!obj)
+ 		return -EIO;
+-	if (obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_STRING) {
++	if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < ENUM_MIN_ELEMENTS ||
++	    obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_STRING) {
+ 		kfree(obj);
+-		return -EINVAL;
++		return -EIO;
+ 	}
+ 	ret = snprintf(buf, PAGE_SIZE, "%s\n", obj->package.elements[CURRENT_VAL].string.pointer);
+ 	kfree(obj);
+diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/int-attributes.c b/drivers/platform/x86/dell/dell-wmi-sysman/int-attributes.c
+index 951e75b538fad4..73524806423914 100644
+--- a/drivers/platform/x86/dell/dell-wmi-sysman/int-attributes.c
++++ b/drivers/platform/x86/dell/dell-wmi-sysman/int-attributes.c
+@@ -25,9 +25,10 @@ static ssize_t current_value_show(struct kobject *kobj, struct kobj_attribute *a
+ 	obj = get_wmiobj_pointer(instance_id, DELL_WMI_BIOS_INTEGER_ATTRIBUTE_GUID);
+ 	if (!obj)
+ 		return -EIO;
+-	if (obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_INTEGER) {
++	if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < INT_MIN_ELEMENTS ||
++	    obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_INTEGER) {
+ 		kfree(obj);
+-		return -EINVAL;
++		return -EIO;
+ 	}
+ 	ret = snprintf(buf, PAGE_SIZE, "%lld\n", obj->package.elements[CURRENT_VAL].integer.value);
+ 	kfree(obj);
+diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c b/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c
+index d8f1bf5e58a0f4..3167e06d416ede 100644
+--- a/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c
++++ b/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c
+@@ -26,9 +26,10 @@ static ssize_t is_enabled_show(struct kobject *kobj, struct kobj_attribute *attr
+ 	obj = get_wmiobj_pointer(instance_id, DELL_WMI_BIOS_PASSOBJ_ATTRIBUTE_GUID);
+ 	if (!obj)
+ 		return -EIO;
+-	if (obj->package.elements[IS_PASS_SET].type != ACPI_TYPE_INTEGER) {
++	if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < PO_MIN_ELEMENTS ||
++	    obj->package.elements[IS_PASS_SET].type != ACPI_TYPE_INTEGER) {
+ 		kfree(obj);
+-		return -EINVAL;
++		return -EIO;
+ 	}
+ 	ret = snprintf(buf, PAGE_SIZE, "%lld\n", obj->package.elements[IS_PASS_SET].integer.value);
+ 	kfree(obj);
+diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/string-attributes.c b/drivers/platform/x86/dell/dell-wmi-sysman/string-attributes.c
+index c392f0ecf8b55b..0d2c74f8d1aad7 100644
+--- a/drivers/platform/x86/dell/dell-wmi-sysman/string-attributes.c
++++ b/drivers/platform/x86/dell/dell-wmi-sysman/string-attributes.c
+@@ -25,9 +25,10 @@ static ssize_t current_value_show(struct kobject *kobj, struct kobj_attribute *a
+ 	obj = get_wmiobj_pointer(instance_id, DELL_WMI_BIOS_STRING_ATTRIBUTE_GUID);
+ 	if (!obj)
+ 		return -EIO;
+-	if (obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_STRING) {
++	if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < STR_MIN_ELEMENTS ||
++	    obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_STRING) {
+ 		kfree(obj);
+-		return -EINVAL;
++		return -EIO;
+ 	}
+ 	ret = snprintf(buf, PAGE_SIZE, "%s\n", obj->package.elements[CURRENT_VAL].string.pointer);
+ 	kfree(obj);
+diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c
+index d00389b860e4ea..f5402b71465729 100644
+--- a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c
++++ b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c
+@@ -407,10 +407,10 @@ static int init_bios_attributes(int attr_type, const char *guid)
+ 		return retval;
+ 
+ 	switch (attr_type) {
+-	case ENUM:	min_elements = 8;	break;
+-	case INT:	min_elements = 9;	break;
+-	case STR:	min_elements = 8;	break;
+-	case PO:	min_elements = 4;	break;
++	case ENUM:	min_elements = ENUM_MIN_ELEMENTS;	break;
++	case INT:	min_elements = INT_MIN_ELEMENTS;	break;
++	case STR:	min_elements = STR_MIN_ELEMENTS;	break;
++	case PO:	min_elements = PO_MIN_ELEMENTS;		break;
+ 	default:
+ 		pr_err("Error: Unknown attr_type: %d\n", attr_type);
+ 		return -EINVAL;
+@@ -597,7 +597,7 @@ static int __init sysman_init(void)
+ 	release_attributes_data();
+ 
+ err_destroy_classdev:
+-	device_destroy(&firmware_attributes_class, MKDEV(0, 0));
++	device_unregister(wmi_priv.class_dev);
+ 
+ err_exit_bios_attr_pass_interface:
+ 	exit_bios_attr_pass_interface();
+@@ -611,7 +611,7 @@ static int __init sysman_init(void)
+ static void __exit sysman_exit(void)
+ {
+ 	release_attributes_data();
+-	device_destroy(&firmware_attributes_class, MKDEV(0, 0));
++	device_unregister(wmi_priv.class_dev);
+ 	exit_bios_attr_set_interface();
+ 	exit_bios_attr_pass_interface();
+ }
+diff --git a/drivers/platform/x86/hp/hp-bioscfg/bioscfg.c b/drivers/platform/x86/hp/hp-bioscfg/bioscfg.c
+index 13237890fc9200..5bfa7159f5bcd5 100644
+--- a/drivers/platform/x86/hp/hp-bioscfg/bioscfg.c
++++ b/drivers/platform/x86/hp/hp-bioscfg/bioscfg.c
+@@ -1034,7 +1034,7 @@ static int __init hp_init(void)
+ 	release_attributes_data();
+ 
+ err_destroy_classdev:
+-	device_destroy(&firmware_attributes_class, MKDEV(0, 0));
++	device_unregister(bioscfg_drv.class_dev);
+ 
+ err_unregister_class:
+ 	hp_exit_attr_set_interface();
+@@ -1045,7 +1045,7 @@ static int __init hp_init(void)
+ static void __exit hp_exit(void)
+ {
+ 	release_attributes_data();
+-	device_destroy(&firmware_attributes_class, MKDEV(0, 0));
++	device_unregister(bioscfg_drv.class_dev);
+ 
+ 	hp_exit_attr_set_interface();
+ }
+diff --git a/drivers/platform/x86/think-lmi.c b/drivers/platform/x86/think-lmi.c
+index 00b1e7c79a3d1e..b73b84fdb15e86 100644
+--- a/drivers/platform/x86/think-lmi.c
++++ b/drivers/platform/x86/think-lmi.c
+@@ -973,6 +973,7 @@ static const struct attribute_group auth_attr_group = {
+ 	.is_visible = auth_attr_is_visible,
+ 	.attrs = auth_attrs,
+ };
++__ATTRIBUTE_GROUPS(auth_attr);
+ 
+ /* ---- Attributes sysfs --------------------------------------------------------- */
+ static ssize_t display_name_show(struct kobject *kobj, struct kobj_attribute *attr,
+@@ -1188,6 +1189,7 @@ static const struct attribute_group tlmi_attr_group = {
+ 	.is_visible = attr_is_visible,
+ 	.attrs = tlmi_attrs,
+ };
++__ATTRIBUTE_GROUPS(tlmi_attr);
+ 
+ static void tlmi_attr_setting_release(struct kobject *kobj)
+ {
+@@ -1207,11 +1209,13 @@ static void tlmi_pwd_setting_release(struct kobject *kobj)
+ static const struct kobj_type tlmi_attr_setting_ktype = {
+ 	.release        = &tlmi_attr_setting_release,
+ 	.sysfs_ops	= &kobj_sysfs_ops,
++	.default_groups = tlmi_attr_groups,
+ };
+ 
+ static const struct kobj_type tlmi_pwd_setting_ktype = {
+ 	.release        = &tlmi_pwd_setting_release,
+ 	.sysfs_ops	= &kobj_sysfs_ops,
++	.default_groups = auth_attr_groups,
+ };
+ 
+ static ssize_t pending_reboot_show(struct kobject *kobj, struct kobj_attribute *attr,
+@@ -1380,21 +1384,18 @@ static struct kobj_attribute debug_cmd = __ATTR_WO(debug_cmd);
+ /* ---- Initialisation --------------------------------------------------------- */
+ static void tlmi_release_attr(void)
+ {
+-	int i;
++	struct kobject *pos, *n;
+ 
+ 	/* Attribute structures */
+-	for (i = 0; i < TLMI_SETTINGS_COUNT; i++) {
+-		if (tlmi_priv.setting[i]) {
+-			sysfs_remove_group(&tlmi_priv.setting[i]->kobj, &tlmi_attr_group);
+-			kobject_put(&tlmi_priv.setting[i]->kobj);
+-		}
+-	}
+ 	sysfs_remove_file(&tlmi_priv.attribute_kset->kobj, &pending_reboot.attr);
+ 	sysfs_remove_file(&tlmi_priv.attribute_kset->kobj, &save_settings.attr);
+ 
+ 	if (tlmi_priv.can_debug_cmd && debug_support)
+ 		sysfs_remove_file(&tlmi_priv.attribute_kset->kobj, &debug_cmd.attr);
+ 
++	list_for_each_entry_safe(pos, n, &tlmi_priv.attribute_kset->list, entry)
++		kobject_put(pos);
++
+ 	kset_unregister(tlmi_priv.attribute_kset);
+ 
+ 	/* Free up any saved signatures */
+@@ -1402,19 +1403,8 @@ static void tlmi_release_attr(void)
+ 	kfree(tlmi_priv.pwd_admin->save_signature);
+ 
+ 	/* Authentication structures */
+-	sysfs_remove_group(&tlmi_priv.pwd_admin->kobj, &auth_attr_group);
+-	kobject_put(&tlmi_priv.pwd_admin->kobj);
+-	sysfs_remove_group(&tlmi_priv.pwd_power->kobj, &auth_attr_group);
+-	kobject_put(&tlmi_priv.pwd_power->kobj);
+-
+-	if (tlmi_priv.opcode_support) {
+-		sysfs_remove_group(&tlmi_priv.pwd_system->kobj, &auth_attr_group);
+-		kobject_put(&tlmi_priv.pwd_system->kobj);
+-		sysfs_remove_group(&tlmi_priv.pwd_hdd->kobj, &auth_attr_group);
+-		kobject_put(&tlmi_priv.pwd_hdd->kobj);
+-		sysfs_remove_group(&tlmi_priv.pwd_nvme->kobj, &auth_attr_group);
+-		kobject_put(&tlmi_priv.pwd_nvme->kobj);
+-	}
++	list_for_each_entry_safe(pos, n, &tlmi_priv.authentication_kset->list, entry)
++		kobject_put(pos);
+ 
+ 	kset_unregister(tlmi_priv.authentication_kset);
+ }
+@@ -1455,6 +1445,14 @@ static int tlmi_sysfs_init(void)
+ 		goto fail_device_created;
+ 	}
+ 
++	tlmi_priv.authentication_kset = kset_create_and_add("authentication", NULL,
++							    &tlmi_priv.class_dev->kobj);
++	if (!tlmi_priv.authentication_kset) {
++		kset_unregister(tlmi_priv.attribute_kset);
++		ret = -ENOMEM;
++		goto fail_device_created;
++	}
++
+ 	for (i = 0; i < TLMI_SETTINGS_COUNT; i++) {
+ 		/* Check if index is a valid setting - skip if it isn't */
+ 		if (!tlmi_priv.setting[i])
+@@ -1471,12 +1469,8 @@ static int tlmi_sysfs_init(void)
+ 
+ 		/* Build attribute */
+ 		tlmi_priv.setting[i]->kobj.kset = tlmi_priv.attribute_kset;
+-		ret = kobject_add(&tlmi_priv.setting[i]->kobj, NULL,
+-				  "%s", tlmi_priv.setting[i]->display_name);
+-		if (ret)
+-			goto fail_create_attr;
+-
+-		ret = sysfs_create_group(&tlmi_priv.setting[i]->kobj, &tlmi_attr_group);
++		ret = kobject_init_and_add(&tlmi_priv.setting[i]->kobj, &tlmi_attr_setting_ktype,
++					   NULL, "%s", tlmi_priv.setting[i]->display_name);
+ 		if (ret)
+ 			goto fail_create_attr;
+ 	}
+@@ -1496,55 +1490,34 @@ static int tlmi_sysfs_init(void)
+ 	}
+ 
+ 	/* Create authentication entries */
+-	tlmi_priv.authentication_kset = kset_create_and_add("authentication", NULL,
+-								&tlmi_priv.class_dev->kobj);
+-	if (!tlmi_priv.authentication_kset) {
+-		ret = -ENOMEM;
+-		goto fail_create_attr;
+-	}
+ 	tlmi_priv.pwd_admin->kobj.kset = tlmi_priv.authentication_kset;
+-	ret = kobject_add(&tlmi_priv.pwd_admin->kobj, NULL, "%s", "Admin");
+-	if (ret)
+-		goto fail_create_attr;
+-
+-	ret = sysfs_create_group(&tlmi_priv.pwd_admin->kobj, &auth_attr_group);
++	ret = kobject_init_and_add(&tlmi_priv.pwd_admin->kobj, &tlmi_pwd_setting_ktype,
++				   NULL, "%s", "Admin");
+ 	if (ret)
+ 		goto fail_create_attr;
+ 
+ 	tlmi_priv.pwd_power->kobj.kset = tlmi_priv.authentication_kset;
+-	ret = kobject_add(&tlmi_priv.pwd_power->kobj, NULL, "%s", "Power-on");
+-	if (ret)
+-		goto fail_create_attr;
+-
+-	ret = sysfs_create_group(&tlmi_priv.pwd_power->kobj, &auth_attr_group);
++	ret = kobject_init_and_add(&tlmi_priv.pwd_power->kobj, &tlmi_pwd_setting_ktype,
++				   NULL, "%s", "Power-on");
+ 	if (ret)
+ 		goto fail_create_attr;
+ 
+ 	if (tlmi_priv.opcode_support) {
+ 		tlmi_priv.pwd_system->kobj.kset = tlmi_priv.authentication_kset;
+-		ret = kobject_add(&tlmi_priv.pwd_system->kobj, NULL, "%s", "System");
+-		if (ret)
+-			goto fail_create_attr;
+-
+-		ret = sysfs_create_group(&tlmi_priv.pwd_system->kobj, &auth_attr_group);
++		ret = kobject_init_and_add(&tlmi_priv.pwd_system->kobj, &tlmi_pwd_setting_ktype,
++					   NULL, "%s", "System");
+ 		if (ret)
+ 			goto fail_create_attr;
+ 
+ 		tlmi_priv.pwd_hdd->kobj.kset = tlmi_priv.authentication_kset;
+-		ret = kobject_add(&tlmi_priv.pwd_hdd->kobj, NULL, "%s", "HDD");
+-		if (ret)
+-			goto fail_create_attr;
+-
+-		ret = sysfs_create_group(&tlmi_priv.pwd_hdd->kobj, &auth_attr_group);
++		ret = kobject_init_and_add(&tlmi_priv.pwd_hdd->kobj, &tlmi_pwd_setting_ktype,
++					   NULL, "%s", "HDD");
+ 		if (ret)
+ 			goto fail_create_attr;
+ 
+ 		tlmi_priv.pwd_nvme->kobj.kset = tlmi_priv.authentication_kset;
+-		ret = kobject_add(&tlmi_priv.pwd_nvme->kobj, NULL, "%s", "NVMe");
+-		if (ret)
+-			goto fail_create_attr;
+-
+-		ret = sysfs_create_group(&tlmi_priv.pwd_nvme->kobj, &auth_attr_group);
++		ret = kobject_init_and_add(&tlmi_priv.pwd_nvme->kobj, &tlmi_pwd_setting_ktype,
++					   NULL, "%s", "NVMe");
+ 		if (ret)
+ 			goto fail_create_attr;
+ 	}
+@@ -1554,7 +1527,7 @@ static int tlmi_sysfs_init(void)
+ fail_create_attr:
+ 	tlmi_release_attr();
+ fail_device_created:
+-	device_destroy(&firmware_attributes_class, MKDEV(0, 0));
++	device_unregister(tlmi_priv.class_dev);
+ fail_class_created:
+ 	return ret;
+ }
+@@ -1577,8 +1550,6 @@ static struct tlmi_pwd_setting *tlmi_create_auth(const char *pwd_type,
+ 	new_pwd->maxlen = tlmi_priv.pwdcfg.core.max_length;
+ 	new_pwd->index = 0;
+ 
+-	kobject_init(&new_pwd->kobj, &tlmi_pwd_setting_ktype);
+-
+ 	return new_pwd;
+ }
+ 
+@@ -1683,7 +1654,6 @@ static int tlmi_analyze(struct wmi_device *wdev)
+ 		if (setting->possible_values)
+ 			strreplace(setting->possible_values, ',', ';');
+ 
+-		kobject_init(&setting->kobj, &tlmi_attr_setting_ktype);
+ 		tlmi_priv.setting[i] = setting;
+ 		kfree(item);
+ 	}
+@@ -1781,7 +1751,7 @@ static int tlmi_analyze(struct wmi_device *wdev)
+ static void tlmi_remove(struct wmi_device *wdev)
+ {
+ 	tlmi_release_attr();
+-	device_destroy(&firmware_attributes_class, MKDEV(0, 0));
++	device_unregister(tlmi_priv.class_dev);
+ }
+ 
+ static int tlmi_probe(struct wmi_device *wdev, const void *context)
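
The think-lmi conversion leans on a kobject core facility: attribute groups
listed in kobj_type.default_groups are created when the kobject is added and
removed automatically before release, so the driver no longer pairs
sysfs_create_group() with sysfs_remove_group() by hand. A minimal sketch with
illustrative names:

	#include <linux/kobject.h>
	#include <linux/sysfs.h>

	static ssize_t demo_show(struct kobject *kobj,
				 struct kobj_attribute *attr, char *buf)
	{
		return sysfs_emit(buf, "demo\n");
	}

	static struct kobj_attribute demo_attr = __ATTR_RO(demo);

	static struct attribute *demo_attrs[] = {
		&demo_attr.attr,
		NULL
	};
	__ATTRIBUTE_GROUPS(demo);		/* emits demo_groups[] */

	static const struct kobj_type demo_ktype = {
		.sysfs_ops	= &kobj_sysfs_ops,
		.default_groups	= demo_groups,	/* torn down with the kobject */
	};
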
+diff --git a/drivers/platform/x86/wmi.c b/drivers/platform/x86/wmi.c
+index e46453750d5f14..03aecf8bb7f8ef 100644
+--- a/drivers/platform/x86/wmi.c
++++ b/drivers/platform/x86/wmi.c
+@@ -177,16 +177,22 @@ static int wmi_device_enable(struct wmi_device *wdev, bool enable)
+ 	acpi_handle handle;
+ 	acpi_status status;
+ 
+-	if (!(wblock->gblock.flags & ACPI_WMI_EXPENSIVE))
+-		return 0;
+-
+ 	if (wblock->dev.dev.type == &wmi_type_method)
+ 		return 0;
+ 
+-	if (wblock->dev.dev.type == &wmi_type_event)
++	if (wblock->dev.dev.type == &wmi_type_event) {
++		/*
++		 * Windows always enables/disables WMI events, even when they are
++		 * not marked as being expensive. We follow this behavior for
++		 * compatibility reasons.
++		 */
+ 		snprintf(method, sizeof(method), "WE%02X", wblock->gblock.notify_id);
+-	else
++	} else {
++		if (!(wblock->gblock.flags & ACPI_WMI_EXPENSIVE))
++			return 0;
++
+ 		get_acpi_method_name(wblock, 'C', method);
++	}
+ 
+ 	/*
+ 	 * Not all WMI devices marked as expensive actually implement the
+diff --git a/drivers/powercap/intel_rapl_common.c b/drivers/powercap/intel_rapl_common.c
+index 5ab3feb296868d..09beab439bb1fb 100644
+--- a/drivers/powercap/intel_rapl_common.c
++++ b/drivers/powercap/intel_rapl_common.c
+@@ -340,12 +340,28 @@ static int set_domain_enable(struct powercap_zone *power_zone, bool mode)
+ {
+ 	struct rapl_domain *rd = power_zone_to_rapl_domain(power_zone);
+ 	struct rapl_defaults *defaults = get_defaults(rd->rp);
++	u64 val;
+ 	int ret;
+ 
+ 	cpus_read_lock();
+ 	ret = rapl_write_pl_data(rd, POWER_LIMIT1, PL_ENABLE, mode);
+-	if (!ret && defaults->set_floor_freq)
++	if (ret)
++		goto end;
++
++	ret = rapl_read_pl_data(rd, POWER_LIMIT1, PL_ENABLE, false, &val);
++	if (ret)
++		goto end;
++
++	if (mode != val) {
++		pr_debug("%s cannot be %s\n", power_zone->name,
++			 str_enabled_disabled(mode));
++		goto end;
++	}
++
++	if (defaults->set_floor_freq)
+ 		defaults->set_floor_freq(rd, mode);
++
++end:
+ 	cpus_read_unlock();
+ 
+ 	return ret;
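
The RAPL hunk adds a read-back because firmware can lock the enable bit, in
which case the write "succeeds" without taking effect. A generic sketch of the
verify-after-write pattern; reg_write(), reg_read() and REG_ENABLE are
hypothetical stand-ins, not the RAPL interface:

	static int set_enable_checked(struct device *dev, bool mode)
	{
		u64 val;
		int ret;

		ret = reg_write(dev, REG_ENABLE, mode);
		if (ret)
			return ret;

		ret = reg_read(dev, REG_ENABLE, &val);
		if (ret)
			return ret;

		/* Not treated as an error, only reported. */
		if ((bool)val != mode)
			dev_dbg(dev, "enable bit is locked, request ignored\n");

		return 0;
	}
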
+diff --git a/drivers/regulator/fan53555.c b/drivers/regulator/fan53555.c
+index bd9447dac5967a..c282236959b180 100644
+--- a/drivers/regulator/fan53555.c
++++ b/drivers/regulator/fan53555.c
+@@ -147,6 +147,7 @@ struct fan53555_device_info {
+ 	unsigned int slew_mask;
+ 	const unsigned int *ramp_delay_table;
+ 	unsigned int n_ramp_values;
++	unsigned int enable_time;
+ 	unsigned int slew_rate;
+ };
+ 
+@@ -282,6 +283,7 @@ static int fan53526_voltages_setup_fairchild(struct fan53555_device_info *di)
+ 	di->slew_mask = CTL_SLEW_MASK;
+ 	di->ramp_delay_table = slew_rates;
+ 	di->n_ramp_values = ARRAY_SIZE(slew_rates);
++	di->enable_time = 250;
+ 	di->vsel_count = FAN53526_NVOLTAGES;
+ 
+ 	return 0;
+@@ -296,10 +298,12 @@ static int fan53555_voltages_setup_fairchild(struct fan53555_device_info *di)
+ 		case FAN53555_CHIP_REV_00:
+ 			di->vsel_min = 600000;
+ 			di->vsel_step = 10000;
++			di->enable_time = 400;
+ 			break;
+ 		case FAN53555_CHIP_REV_13:
+ 			di->vsel_min = 800000;
+ 			di->vsel_step = 10000;
++			di->enable_time = 400;
+ 			break;
+ 		default:
+ 			dev_err(di->dev,
+@@ -311,13 +315,19 @@ static int fan53555_voltages_setup_fairchild(struct fan53555_device_info *di)
+ 	case FAN53555_CHIP_ID_01:
+ 	case FAN53555_CHIP_ID_03:
+ 	case FAN53555_CHIP_ID_05:
++		di->vsel_min = 600000;
++		di->vsel_step = 10000;
++		di->enable_time = 400;
++		break;
+ 	case FAN53555_CHIP_ID_08:
+ 		di->vsel_min = 600000;
+ 		di->vsel_step = 10000;
++		di->enable_time = 175;
+ 		break;
+ 	case FAN53555_CHIP_ID_04:
+ 		di->vsel_min = 603000;
+ 		di->vsel_step = 12826;
++		di->enable_time = 400;
+ 		break;
+ 	default:
+ 		dev_err(di->dev,
+@@ -350,6 +360,7 @@ static int fan53555_voltages_setup_rockchip(struct fan53555_device_info *di)
+ 	di->slew_mask = CTL_SLEW_MASK;
+ 	di->ramp_delay_table = slew_rates;
+ 	di->n_ramp_values = ARRAY_SIZE(slew_rates);
++	di->enable_time = 360;
+ 	di->vsel_count = FAN53555_NVOLTAGES;
+ 
+ 	return 0;
+@@ -372,6 +383,7 @@ static int rk8602_voltages_setup_rockchip(struct fan53555_device_info *di)
+ 	di->slew_mask = CTL_SLEW_MASK;
+ 	di->ramp_delay_table = slew_rates;
+ 	di->n_ramp_values = ARRAY_SIZE(slew_rates);
++	di->enable_time = 360;
+ 	di->vsel_count = RK8602_NVOLTAGES;
+ 
+ 	return 0;
+@@ -395,6 +407,7 @@ static int fan53555_voltages_setup_silergy(struct fan53555_device_info *di)
+ 	di->slew_mask = CTL_SLEW_MASK;
+ 	di->ramp_delay_table = slew_rates;
+ 	di->n_ramp_values = ARRAY_SIZE(slew_rates);
++	di->enable_time = 400;
+ 	di->vsel_count = FAN53555_NVOLTAGES;
+ 
+ 	return 0;
+@@ -594,6 +607,7 @@ static int fan53555_regulator_register(struct fan53555_device_info *di,
+ 	rdesc->ramp_mask = di->slew_mask;
+ 	rdesc->ramp_delay_table = di->ramp_delay_table;
+ 	rdesc->n_ramp_values = di->n_ramp_values;
++	rdesc->enable_time = di->enable_time;
+ 	rdesc->owner = THIS_MODULE;
+ 
+ 	rdev = devm_regulator_register(di->dev, &di->desc, config);
+diff --git a/drivers/regulator/gpio-regulator.c b/drivers/regulator/gpio-regulator.c
+index 65927fa2ef161c..1bdd494cf8821e 100644
+--- a/drivers/regulator/gpio-regulator.c
++++ b/drivers/regulator/gpio-regulator.c
+@@ -260,8 +260,10 @@ static int gpio_regulator_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 	}
+ 
+-	drvdata->gpiods = devm_kzalloc(dev, sizeof(struct gpio_desc *),
+-				       GFP_KERNEL);
++	drvdata->gpiods = devm_kcalloc(dev, config->ngpios,
++				       sizeof(struct gpio_desc *), GFP_KERNEL);
++	if (!drvdata->gpiods)
++		return -ENOMEM;
+ 
+ 	if (config->input_supply) {
+ 		drvdata->desc.supply_name = devm_kstrdup(&pdev->dev,
+@@ -274,8 +276,6 @@ static int gpio_regulator_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	if (!drvdata->gpiods)
+-		return -ENOMEM;
+ 	for (i = 0; i < config->ngpios; i++) {
+ 		drvdata->gpiods[i] = devm_gpiod_get_index(dev,
+ 							  NULL,
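
Two bugs meet in the gpio-regulator hunk: the array was sized for a single
pointer rather than config->ngpios entries, and the NULL check ran only after
intervening code. The corrected idiom, in generic form:

	/* Size by element count; fail before anything touches the array. */
	descs = devm_kcalloc(dev, ngpios, sizeof(*descs), GFP_KERNEL);
	if (!descs)
		return -ENOMEM;
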
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index 8172869bd3d793..0743c6acd6e2c1 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -692,8 +692,12 @@ static irqreturn_t cmos_interrupt(int irq, void *p)
+ {
+ 	u8		irqstat;
+ 	u8		rtc_control;
++	unsigned long	flags;
+ 
+-	spin_lock(&rtc_lock);
++	/* We cannot use spin_lock() here, as cmos_interrupt() is also called
++	 * in a non-irq context.
++	 */
++	spin_lock_irqsave(&rtc_lock, flags);
+ 
+ 	/* When the HPET interrupt handler calls us, the interrupt
+ 	 * status is passed as arg1 instead of the irq number.  But
+@@ -727,7 +731,7 @@ static irqreturn_t cmos_interrupt(int irq, void *p)
+ 			hpet_mask_rtc_irq_bit(RTC_AIE);
+ 		CMOS_READ(RTC_INTR_FLAGS);
+ 	}
+-	spin_unlock(&rtc_lock);
++	spin_unlock_irqrestore(&rtc_lock, flags);
+ 
+ 	if (is_intr(irqstat)) {
+ 		rtc_update_irq(p, 1, irqstat);
+@@ -1295,9 +1299,7 @@ static void cmos_check_wkalrm(struct device *dev)
+ 	 * ACK the rtc irq here
+ 	 */
+ 	if (t_now >= cmos->alarm_expires && cmos_use_acpi_alarm()) {
+-		local_irq_disable();
+ 		cmos_interrupt(0, (void *)cmos->rtc);
+-		local_irq_enable();
+ 		return;
+ 	}
+ 
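The rtc-cmos change applies the standard rule for a handler that runs in both
hard-irq and process context: a plain spin_lock() can deadlock if the interrupt
arrives while the lock is held from process context, so the irqsave variant is
used and the caller's local_irq_disable()/local_irq_enable() bracket becomes
unnecessary. In generic form (some_lock is a placeholder):

	static irqreturn_t shared_handler(int irq, void *data)
	{
		unsigned long flags;

		/* Safe from hard-irq and process context alike. */
		spin_lock_irqsave(&some_lock, flags);
		/* ... touch state shared with process context ... */
		spin_unlock_irqrestore(&some_lock, flags);

		return IRQ_HANDLED;
	}
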
+diff --git a/drivers/rtc/rtc-pcf2127.c b/drivers/rtc/rtc-pcf2127.c
+index 31c7dca8f4692c..02e4b107509793 100644
+--- a/drivers/rtc/rtc-pcf2127.c
++++ b/drivers/rtc/rtc-pcf2127.c
+@@ -1465,6 +1465,11 @@ static int pcf2127_i2c_probe(struct i2c_client *client)
+ 		variant = &pcf21xx_cfg[type];
+ 	}
+ 
++	if (variant->type == PCF2131) {
++		config.read_flag_mask = 0x0;
++		config.write_flag_mask = 0x0;
++	}
++
+ 	config.max_register = variant->max_register,
+ 
+ 	regmap = devm_regmap_init(&client->dev, &pcf2127_i2c_regmap,
+@@ -1538,7 +1543,7 @@ static int pcf2127_spi_probe(struct spi_device *spi)
+ 		variant = &pcf21xx_cfg[type];
+ 	}
+ 
+-	config.max_register = variant->max_register,
++	config.max_register = variant->max_register;
+ 
+ 	regmap = devm_regmap_init_spi(spi, &config);
+ 	if (IS_ERR(regmap)) {
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index e021f1106beabf..cc5d05dc395c46 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -473,10 +473,17 @@ struct Scsi_Host *scsi_host_alloc(const struct scsi_host_template *sht, int priv
+ 	else
+ 		shost->max_sectors = SCSI_DEFAULT_MAX_SECTORS;
+ 
+-	if (sht->max_segment_size)
+-		shost->max_segment_size = sht->max_segment_size;
+-	else
+-		shost->max_segment_size = BLK_MAX_SEGMENT_SIZE;
++	shost->virt_boundary_mask = sht->virt_boundary_mask;
++	if (shost->virt_boundary_mask) {
++		WARN_ON_ONCE(sht->max_segment_size &&
++			     sht->max_segment_size != UINT_MAX);
++		shost->max_segment_size = UINT_MAX;
++	} else {
++		if (sht->max_segment_size)
++			shost->max_segment_size = sht->max_segment_size;
++		else
++			shost->max_segment_size = BLK_MAX_SEGMENT_SIZE;
++	}
+ 
+ 	/* 32-byte (dword) is a common minimum for HBAs. */
+ 	if (sht->dma_alignment)
+@@ -492,9 +499,6 @@ struct Scsi_Host *scsi_host_alloc(const struct scsi_host_template *sht, int priv
+ 	else
+ 		shost->dma_boundary = 0xffffffff;
+ 
+-	if (sht->virt_boundary_mask)
+-		shost->virt_boundary_mask = sht->virt_boundary_mask;
+-
+ 	device_initialize(&shost->shost_gendev);
+ 	dev_set_name(&shost->shost_gendev, "host%d", shost->host_no);
+ 	shost->shost_gendev.bus = &scsi_bus_type;
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index 0cd6f3e1488249..13b6cb1b93acd9 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -2147,7 +2147,7 @@ qla24xx_get_port_database(scsi_qla_host_t *vha, u16 nport_handle,
+ 
+ 	pdb_dma = dma_map_single(&vha->hw->pdev->dev, pdb,
+ 	    sizeof(*pdb), DMA_FROM_DEVICE);
+-	if (!pdb_dma) {
++	if (dma_mapping_error(&vha->hw->pdev->dev, pdb_dma)) {
+ 		ql_log(ql_log_warn, vha, 0x1116, "Failed to map dma buffer.\n");
+ 		return QLA_MEMORY_ALLOC_FAILED;
+ 	}
+diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
+index d540d66e6ffc4f..80c4b4e7526b1c 100644
+--- a/drivers/scsi/qla4xxx/ql4_os.c
++++ b/drivers/scsi/qla4xxx/ql4_os.c
+@@ -3420,6 +3420,8 @@ static int qla4xxx_alloc_pdu(struct iscsi_task *task, uint8_t opcode)
+ 		task_data->data_dma = dma_map_single(&ha->pdev->dev, task->data,
+ 						     task->data_count,
+ 						     DMA_TO_DEVICE);
++		if (dma_mapping_error(&ha->pdev->dev, task_data->data_dma))
++			return -ENOMEM;
+ 	}
+ 
+ 	DEBUG2(ql4_printk(KERN_INFO, ha, "%s: MaxRecvLen %u, iscsi hrd %d\n",
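
Both qla hunks correct the same misconception: dma_map_single() does not
return NULL on failure, and 0 can even be a valid DMA address on some
platforms. The only portable check is dma_mapping_error():

	dma_addr_t addr;

	addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, addr))
		return -ENOMEM;	/* handle unusable; never compare against NULL or 0 */
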
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 950d8c9fb88437..89d5c4b17bc462 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3384,7 +3384,7 @@ static void sd_read_block_limits_ext(struct scsi_disk *sdkp)
+ 
+ 	rcu_read_lock();
+ 	vpd = rcu_dereference(sdkp->device->vpd_pgb7);
+-	if (vpd && vpd->len >= 2)
++	if (vpd && vpd->len >= 6)
+ 		sdkp->rscs = vpd->data[5] & 1;
+ 	rcu_read_unlock();
+ }
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 863781ba6c1601..0dcd4911409505 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -983,11 +983,20 @@ static int dspi_transfer_one_message(struct spi_controller *ctlr,
+ 		if (dspi->devtype_data->trans_mode == DSPI_DMA_MODE) {
+ 			status = dspi_dma_xfer(dspi);
+ 		} else {
++			/*
++			 * Reinitialize the completion before transferring data
++			 * to avoid the case where it might remain in the done
++			 * state due to a spurious interrupt from a previous
++			 * transfer. This could falsely signal that the current
++			 * transfer has completed.
++			 */
++			if (dspi->irq)
++				reinit_completion(&dspi->xfer_done);
++
+ 			dspi_fifo_write(dspi);
+ 
+ 			if (dspi->irq) {
+ 				wait_for_completion(&dspi->xfer_done);
+-				reinit_completion(&dspi->xfer_done);
+ 			} else {
+ 				do {
+ 					status = dspi_poll(dspi);
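
The dspi fix encodes an ordering rule for completions: re-arm before triggering
the event rather than after waiting, or a stale complete() from a spurious
interrupt can satisfy the next wait immediately. Schematically (generic names):

	reinit_completion(&done);	/* discard any stale complete() */
	start_transfer();		/* hypothetical trigger */
	wait_for_completion(&done);
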
+diff --git a/drivers/spi/spi-qpic-snand.c b/drivers/spi/spi-qpic-snand.c
+index d3a4e091dca4ee..80856d2fa35c66 100644
+--- a/drivers/spi/spi-qpic-snand.c
++++ b/drivers/spi/spi-qpic-snand.c
+@@ -316,6 +316,22 @@ static int qcom_spi_ecc_init_ctx_pipelined(struct nand_device *nand)
+ 
+ 	mtd_set_ooblayout(mtd, &qcom_spi_ooblayout);
+ 
++	/*
++	 * Free the temporary BAM transaction allocated initially by
++	 * qcom_nandc_alloc(), and allocate a new one based on the
++	 * updated max_cwperpage value.
++	 */
++	qcom_free_bam_transaction(snandc);
++
++	snandc->max_cwperpage = cwperpage;
++
++	snandc->bam_txn = qcom_alloc_bam_transaction(snandc);
++	if (!snandc->bam_txn) {
++		dev_err(snandc->dev, "failed to allocate BAM transaction\n");
++		ret = -ENOMEM;
++		goto err_free_ecc_cfg;
++	}
++
+ 	ecc_cfg->cfg0 = FIELD_PREP(CW_PER_PAGE_MASK, (cwperpage - 1)) |
+ 			FIELD_PREP(UD_SIZE_BYTES_MASK, ecc_cfg->cw_data) |
+ 			FIELD_PREP(DISABLE_STATUS_AFTER_WRITE, 1) |
+diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c
+index 34cf2c399b399d..70905805cb1756 100644
+--- a/drivers/target/target_core_pr.c
++++ b/drivers/target/target_core_pr.c
+@@ -1842,7 +1842,9 @@ core_scsi3_decode_spec_i_port(
+ 		}
+ 
+ 		kmem_cache_free(t10_pr_reg_cache, dest_pr_reg);
+-		core_scsi3_lunacl_undepend_item(dest_se_deve);
++
++		if (dest_se_deve)
++			core_scsi3_lunacl_undepend_item(dest_se_deve);
+ 
+ 		if (is_local)
+ 			continue;
+diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c
+index f3af5666bb1182..f9ef7d94cebd7a 100644
+--- a/drivers/tee/optee/ffa_abi.c
++++ b/drivers/tee/optee/ffa_abi.c
+@@ -728,12 +728,21 @@ static bool optee_ffa_exchange_caps(struct ffa_device *ffa_dev,
+ 	return true;
+ }
+ 
++static void notif_work_fn(struct work_struct *work)
++{
++	struct optee_ffa *optee_ffa = container_of(work, struct optee_ffa,
++						   notif_work);
++	struct optee *optee = container_of(optee_ffa, struct optee, ffa);
++
++	optee_do_bottom_half(optee->ctx);
++}
++
+ static void notif_callback(int notify_id, void *cb_data)
+ {
+ 	struct optee *optee = cb_data;
+ 
+ 	if (notify_id == optee->ffa.bottom_half_value)
+-		optee_do_bottom_half(optee->ctx);
++		queue_work(optee->ffa.notif_wq, &optee->ffa.notif_work);
+ 	else
+ 		optee_notif_send(optee, notify_id);
+ }
+@@ -817,9 +826,11 @@ static void optee_ffa_remove(struct ffa_device *ffa_dev)
+ 	struct optee *optee = ffa_dev_get_drvdata(ffa_dev);
+ 	u32 bottom_half_id = optee->ffa.bottom_half_value;
+ 
+-	if (bottom_half_id != U32_MAX)
++	if (bottom_half_id != U32_MAX) {
+ 		ffa_dev->ops->notifier_ops->notify_relinquish(ffa_dev,
+ 							      bottom_half_id);
++		destroy_workqueue(optee->ffa.notif_wq);
++	}
+ 	optee_remove_common(optee);
+ 
+ 	mutex_destroy(&optee->ffa.mutex);
+@@ -835,6 +846,13 @@ static int optee_ffa_async_notif_init(struct ffa_device *ffa_dev,
+ 	u32 notif_id = 0;
+ 	int rc;
+ 
++	INIT_WORK(&optee->ffa.notif_work, notif_work_fn);
++	optee->ffa.notif_wq = create_workqueue("optee_notification");
++	if (!optee->ffa.notif_wq) {
++		rc = -EINVAL;
++		goto err;
++	}
++
+ 	while (true) {
+ 		rc = ffa_dev->ops->notifier_ops->notify_request(ffa_dev,
+ 								is_per_vcpu,
+@@ -851,19 +869,24 @@ static int optee_ffa_async_notif_init(struct ffa_device *ffa_dev,
+ 		 * notifications in that case.
+ 		 */
+ 		if (rc != -EACCES)
+-			return rc;
++			goto err_wq;
+ 		notif_id++;
+ 		if (notif_id >= OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE)
+-			return rc;
++			goto err_wq;
+ 	}
+ 	optee->ffa.bottom_half_value = notif_id;
+ 
+ 	rc = enable_async_notif(optee);
+-	if (rc < 0) {
+-		ffa_dev->ops->notifier_ops->notify_relinquish(ffa_dev,
+-							      notif_id);
+-		optee->ffa.bottom_half_value = U32_MAX;
+-	}
++	if (rc < 0)
++		goto err_rel;
++
++	return 0;
++err_rel:
++	ffa_dev->ops->notifier_ops->notify_relinquish(ffa_dev, notif_id);
++err_wq:
++	destroy_workqueue(optee->ffa.notif_wq);
++err:
++	optee->ffa.bottom_half_value = U32_MAX;
+ 
+ 	return rc;
+ }
+diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h
+index dc0f355ef72aae..9526087f0e680f 100644
+--- a/drivers/tee/optee/optee_private.h
++++ b/drivers/tee/optee/optee_private.h
+@@ -165,6 +165,8 @@ struct optee_ffa {
+ 	/* Serializes access to @global_ids */
+ 	struct mutex mutex;
+ 	struct rhashtable global_ids;
++	struct workqueue_struct *notif_wq;
++	struct work_struct notif_work;
+ };
+ 
+ struct optee;
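
The OP-TEE change moves the bottom half out of the notification callback, which
may run in a context that cannot sleep, into a dedicated workqueue; the
callback now only queues work. The skeleton of that pattern, with illustrative
names:

	#include <linux/workqueue.h>

	static struct workqueue_struct *demo_wq;
	static struct work_struct demo_work;

	static void demo_work_fn(struct work_struct *work)
	{
		/* Runs in process context and may sleep. */
	}

	static int demo_setup(void)
	{
		INIT_WORK(&demo_work, demo_work_fn);
		demo_wq = create_workqueue("demo_notif");
		return demo_wq ? 0 : -ENOMEM;
	}

	static void demo_callback(void)
	{
		queue_work(demo_wq, &demo_work);	/* defer, don't sleep here */
	}

	static void demo_teardown(void)
	{
		destroy_workqueue(demo_wq);
	}
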
+diff --git a/drivers/ufs/core/ufs-sysfs.c b/drivers/ufs/core/ufs-sysfs.c
+index 634cf163f4cb10..c53f924f7fa832 100644
+--- a/drivers/ufs/core/ufs-sysfs.c
++++ b/drivers/ufs/core/ufs-sysfs.c
+@@ -1675,7 +1675,7 @@ UFS_UNIT_DESC_PARAM(logical_block_size, _LOGICAL_BLK_SIZE, 1);
+ UFS_UNIT_DESC_PARAM(logical_block_count, _LOGICAL_BLK_COUNT, 8);
+ UFS_UNIT_DESC_PARAM(erase_block_size, _ERASE_BLK_SIZE, 4);
+ UFS_UNIT_DESC_PARAM(provisioning_type, _PROVISIONING_TYPE, 1);
+-UFS_UNIT_DESC_PARAM(physical_memory_resourse_count, _PHY_MEM_RSRC_CNT, 8);
++UFS_UNIT_DESC_PARAM(physical_memory_resource_count, _PHY_MEM_RSRC_CNT, 8);
+ UFS_UNIT_DESC_PARAM(context_capabilities, _CTX_CAPABILITIES, 2);
+ UFS_UNIT_DESC_PARAM(large_unit_granularity, _LARGE_UNIT_SIZE_M1, 1);
+ UFS_UNIT_DESC_PARAM(wb_buf_alloc_units, _WB_BUF_ALLOC_UNITS, 4);
+@@ -1692,7 +1692,7 @@ static struct attribute *ufs_sysfs_unit_descriptor[] = {
+ 	&dev_attr_logical_block_count.attr,
+ 	&dev_attr_erase_block_size.attr,
+ 	&dev_attr_provisioning_type.attr,
+-	&dev_attr_physical_memory_resourse_count.attr,
++	&dev_attr_physical_memory_resource_count.attr,
+ 	&dev_attr_context_capabilities.attr,
+ 	&dev_attr_large_unit_granularity.attr,
+ 	&dev_attr_wb_buf_alloc_units.attr,
+diff --git a/drivers/usb/cdns3/cdnsp-debug.h b/drivers/usb/cdns3/cdnsp-debug.h
+index cd138acdcce165..86860686d8363e 100644
+--- a/drivers/usb/cdns3/cdnsp-debug.h
++++ b/drivers/usb/cdns3/cdnsp-debug.h
+@@ -327,12 +327,13 @@ static inline const char *cdnsp_decode_trb(char *str, size_t size, u32 field0,
+ 	case TRB_RESET_EP:
+ 	case TRB_HALT_ENDPOINT:
+ 		ret = scnprintf(str, size,
+-				"%s: ep%d%s(%d) ctx %08x%08x slot %ld flags %c",
++				"%s: ep%d%s(%d) ctx %08x%08x slot %ld flags %c %c",
+ 				cdnsp_trb_type_string(type),
+ 				ep_num, ep_id % 2 ? "out" : "in",
+ 				TRB_TO_EP_INDEX(field3), field1, field0,
+ 				TRB_TO_SLOT_ID(field3),
+-				field3 & TRB_CYCLE ? 'C' : 'c');
++				field3 & TRB_CYCLE ? 'C' : 'c',
++				field3 & TRB_ESP ? 'P' : 'p');
+ 		break;
+ 	case TRB_STOP_RING:
+ 		ret = scnprintf(str, size,
+diff --git a/drivers/usb/cdns3/cdnsp-ep0.c b/drivers/usb/cdns3/cdnsp-ep0.c
+index f317d3c8478108..5cd9b898ce971f 100644
+--- a/drivers/usb/cdns3/cdnsp-ep0.c
++++ b/drivers/usb/cdns3/cdnsp-ep0.c
+@@ -414,6 +414,7 @@ static int cdnsp_ep0_std_request(struct cdnsp_device *pdev,
+ void cdnsp_setup_analyze(struct cdnsp_device *pdev)
+ {
+ 	struct usb_ctrlrequest *ctrl = &pdev->setup;
++	struct cdnsp_ep *pep;
+ 	int ret = -EINVAL;
+ 	u16 len;
+ 
+@@ -427,10 +428,21 @@ void cdnsp_setup_analyze(struct cdnsp_device *pdev)
+ 		goto out;
+ 	}
+ 
++	pep = &pdev->eps[0];
++
+ 	/* Restore the ep0 to Stopped/Running state. */
+-	if (pdev->eps[0].ep_state & EP_HALTED) {
+-		trace_cdnsp_ep0_halted("Restore to normal state");
+-		cdnsp_halt_endpoint(pdev, &pdev->eps[0], 0);
++	if (pep->ep_state & EP_HALTED) {
++		if (GET_EP_CTX_STATE(pep->out_ctx) == EP_STATE_HALTED)
++			cdnsp_halt_endpoint(pdev, pep, 0);
++
++		/*
++		 * On SSP2, the Halt Endpoint Command for ep0 preserves the
++		 * current endpoint state, so the driver has to synchronize
++		 * the software endpoint state with the endpoint output
++		 * context state.
++		 */
++		pep->ep_state &= ~EP_HALTED;
++		pep->ep_state |= EP_STOPPED;
+ 	}
+ 
+ 	/*
+diff --git a/drivers/usb/cdns3/cdnsp-gadget.h b/drivers/usb/cdns3/cdnsp-gadget.h
+index 2afa3e558f85ca..a91cca509db080 100644
+--- a/drivers/usb/cdns3/cdnsp-gadget.h
++++ b/drivers/usb/cdns3/cdnsp-gadget.h
+@@ -987,6 +987,12 @@ enum cdnsp_setup_dev {
+ #define STREAM_ID_FOR_TRB(p)		((((p)) << 16) & GENMASK(31, 16))
+ #define SCT_FOR_TRB(p)			(((p) << 1) & 0x7)
+ 
++/*
++ * Halt Endpoint Command TRB field.
++ * The ESP bit only exists in the SSP2 controller.
++ */
++#define TRB_ESP				BIT(9)
++
+ /* Link TRB specific fields. */
+ #define TRB_TC				BIT(1)
+ 
+diff --git a/drivers/usb/cdns3/cdnsp-ring.c b/drivers/usb/cdns3/cdnsp-ring.c
+index fd06cb85c4ea84..0758f171f73ecf 100644
+--- a/drivers/usb/cdns3/cdnsp-ring.c
++++ b/drivers/usb/cdns3/cdnsp-ring.c
+@@ -772,7 +772,9 @@ static int cdnsp_update_port_id(struct cdnsp_device *pdev, u32 port_id)
+ 	}
+ 
+ 	if (port_id != old_port) {
+-		cdnsp_disable_slot(pdev);
++		if (pdev->slot_id)
++			cdnsp_disable_slot(pdev);
++
+ 		pdev->active_port = port;
+ 		cdnsp_enable_slot(pdev);
+ 	}
+@@ -2483,7 +2485,8 @@ void cdnsp_queue_halt_endpoint(struct cdnsp_device *pdev, unsigned int ep_index)
+ {
+ 	cdnsp_queue_command(pdev, 0, 0, 0, TRB_TYPE(TRB_HALT_ENDPOINT) |
+ 			    SLOT_ID_FOR_TRB(pdev->slot_id) |
+-			    EP_ID_FOR_TRB(ep_index));
++			    EP_ID_FOR_TRB(ep_index) |
++			    (!ep_index ? TRB_ESP : 0));
+ }
+ 
+ void cdnsp_force_header_wakeup(struct cdnsp_device *pdev, int intf_num)
+diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c
+index 8a9b31fd5c89d8..1a48e6440e6c29 100644
+--- a/drivers/usb/chipidea/udc.c
++++ b/drivers/usb/chipidea/udc.c
+@@ -2374,6 +2374,10 @@ static void udc_suspend(struct ci_hdrc *ci)
+ 	 */
+ 	if (hw_read(ci, OP_ENDPTLISTADDR, ~0) == 0)
+ 		hw_write(ci, OP_ENDPTLISTADDR, ~0, ~0);
++
++	if (ci->gadget.connected &&
++	    (!ci->suspended || !device_may_wakeup(ci->dev)))
++		usb_gadget_disconnect(&ci->gadget);
+ }
+ 
+ static void udc_resume(struct ci_hdrc *ci, bool power_lost)
+@@ -2384,6 +2388,9 @@ static void udc_resume(struct ci_hdrc *ci, bool power_lost)
+ 					OTGSC_BSVIS | OTGSC_BSVIE);
+ 		if (ci->vbus_active)
+ 			usb_gadget_vbus_disconnect(&ci->gadget);
++	} else if (ci->vbus_active && ci->driver &&
++		   !ci->gadget.connected) {
++		usb_gadget_connect(&ci->gadget);
+ 	}
+ 
+ 	/* Restore value 0 if it was set for power lost check */
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 9f19fc7494e022..f71b807d71a7a5 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -2337,6 +2337,9 @@ void usb_disconnect(struct usb_device **pdev)
+ 	usb_remove_ep_devs(&udev->ep0);
+ 	usb_unlock_device(udev);
+ 
++	if (udev->usb4_link)
++		device_link_del(udev->usb4_link);
++
+ 	/* Unregister the device.  The device driver is responsible
+ 	 * for de-configuring the device and invoking the remove-device
+ 	 * notifier chain (used by usbfs and possibly others).
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 53d68d20fb62e0..0cf94c7a2c9ce6 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -227,7 +227,8 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	{ USB_DEVICE(0x046a, 0x0023), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
+ 	/* Logitech HD Webcam C270 */
+-	{ USB_DEVICE(0x046d, 0x0825), .driver_info = USB_QUIRK_RESET_RESUME },
++	{ USB_DEVICE(0x046d, 0x0825), .driver_info = USB_QUIRK_RESET_RESUME |
++		USB_QUIRK_NO_LPM},
+ 
+ 	/* Logitech HD Pro Webcams C920, C920-C, C922, C925e and C930e */
+ 	{ USB_DEVICE(0x046d, 0x082d), .driver_info = USB_QUIRK_DELAY_INIT },
+diff --git a/drivers/usb/core/usb-acpi.c b/drivers/usb/core/usb-acpi.c
+index ea1ce8beb0cbb2..489dbdc96f94ad 100644
+--- a/drivers/usb/core/usb-acpi.c
++++ b/drivers/usb/core/usb-acpi.c
+@@ -157,7 +157,7 @@ EXPORT_SYMBOL_GPL(usb_acpi_set_power_state);
+  */
+ static int usb_acpi_add_usb4_devlink(struct usb_device *udev)
+ {
+-	const struct device_link *link;
++	struct device_link *link;
+ 	struct usb_port *port_dev;
+ 	struct usb_hub *hub;
+ 
+@@ -188,6 +188,8 @@ static int usb_acpi_add_usb4_devlink(struct usb_device *udev)
+ 	dev_dbg(&port_dev->dev, "Created device link from %s to %s\n",
+ 		dev_name(&port_dev->child->dev), dev_name(nhi_fwnode->dev));
+ 
++	udev->usb4_link = link;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 66a08b5271653a..f36bc933c55bcb 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -2388,6 +2388,7 @@ static int dwc3_suspend_common(struct dwc3 *dwc, pm_message_t msg)
+ {
+ 	u32 reg;
+ 	int i;
++	int ret;
+ 
+ 	if (!pm_runtime_suspended(dwc->dev) && !PMSG_IS_AUTO(msg)) {
+ 		dwc->susphy_state = (dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0)) &
+@@ -2406,7 +2407,9 @@ static int dwc3_suspend_common(struct dwc3 *dwc, pm_message_t msg)
+ 	case DWC3_GCTL_PRTCAP_DEVICE:
+ 		if (pm_runtime_suspended(dwc->dev))
+ 			break;
+-		dwc3_gadget_suspend(dwc);
++		ret = dwc3_gadget_suspend(dwc);
++		if (ret)
++			return ret;
+ 		synchronize_irq(dwc->irq_gadget);
+ 		dwc3_core_exit(dwc);
+ 		break;
+@@ -2441,7 +2444,9 @@ static int dwc3_suspend_common(struct dwc3 *dwc, pm_message_t msg)
+ 			break;
+ 
+ 		if (dwc->current_otg_role == DWC3_OTG_ROLE_DEVICE) {
+-			dwc3_gadget_suspend(dwc);
++			ret = dwc3_gadget_suspend(dwc);
++			if (ret)
++				return ret;
+ 			synchronize_irq(dwc->irq_gadget);
+ 		}
+ 
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 321361288935db..74968f93d4a353 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -3516,7 +3516,7 @@ static int dwc3_gadget_ep_reclaim_completed_trb(struct dwc3_ep *dep,
+ 	 * We're going to do that here to avoid problems of HW trying
+ 	 * to use bogus TRBs for transfers.
+ 	 */
+-	if (chain && (trb->ctrl & DWC3_TRB_CTRL_HWO))
++	if (trb->ctrl & DWC3_TRB_CTRL_HWO)
+ 		trb->ctrl &= ~DWC3_TRB_CTRL_HWO;
+ 
+ 	/*
+@@ -4821,26 +4821,22 @@ int dwc3_gadget_suspend(struct dwc3 *dwc)
+ 	int ret;
+ 
+ 	ret = dwc3_gadget_soft_disconnect(dwc);
+-	if (ret)
+-		goto err;
+-
+-	spin_lock_irqsave(&dwc->lock, flags);
+-	if (dwc->gadget_driver)
+-		dwc3_disconnect_gadget(dwc);
+-	spin_unlock_irqrestore(&dwc->lock, flags);
+-
+-	return 0;
+-
+-err:
+ 	/*
+ 	 * Attempt to reset the controller's state. Likely no
+ 	 * communication can be established until the host
+ 	 * performs a port reset.
+ 	 */
+-	if (dwc->softconnect)
++	if (ret && dwc->softconnect) {
+ 		dwc3_gadget_soft_connect(dwc);
++		return -EAGAIN;
++	}
+ 
+-	return ret;
++	spin_lock_irqsave(&dwc->lock, flags);
++	if (dwc->gadget_driver)
++		dwc3_disconnect_gadget(dwc);
++	spin_unlock_irqrestore(&dwc->lock, flags);
++
++	return 0;
+ }
+ 
+ int dwc3_gadget_resume(struct dwc3 *dwc)
+diff --git a/drivers/usb/host/xhci-dbgcap.c b/drivers/usb/host/xhci-dbgcap.c
+index 0d4ce5734165ed..06a2edb9e86ef7 100644
+--- a/drivers/usb/host/xhci-dbgcap.c
++++ b/drivers/usb/host/xhci-dbgcap.c
+@@ -652,6 +652,10 @@ static void xhci_dbc_stop(struct xhci_dbc *dbc)
+ 	case DS_DISABLED:
+ 		return;
+ 	case DS_CONFIGURED:
++		spin_lock(&dbc->lock);
++		xhci_dbc_flush_requests(dbc);
++		spin_unlock(&dbc->lock);
++
+ 		if (dbc->driver->disconnect)
+ 			dbc->driver->disconnect(dbc);
+ 		break;
+diff --git a/drivers/usb/host/xhci-dbgtty.c b/drivers/usb/host/xhci-dbgtty.c
+index 60ed753c85bbc1..d894081d8d1572 100644
+--- a/drivers/usb/host/xhci-dbgtty.c
++++ b/drivers/usb/host/xhci-dbgtty.c
+@@ -617,6 +617,7 @@ int dbc_tty_init(void)
+ 	dbc_tty_driver->type = TTY_DRIVER_TYPE_SERIAL;
+ 	dbc_tty_driver->subtype = SERIAL_TYPE_NORMAL;
+ 	dbc_tty_driver->init_termios = tty_std_termios;
++	dbc_tty_driver->init_termios.c_lflag &= ~ECHO;
+ 	dbc_tty_driver->init_termios.c_cflag =
+ 			B9600 | CS8 | CREAD | HUPCL | CLOCAL;
+ 	dbc_tty_driver->init_termios.c_ispeed = 9600;
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index d698095fc88d3c..a5e7980ac1031f 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -1420,6 +1420,10 @@ int xhci_endpoint_init(struct xhci_hcd *xhci,
+ 	/* Periodic endpoint bInterval limit quirk */
+ 	if (usb_endpoint_xfer_int(&ep->desc) ||
+ 	    usb_endpoint_xfer_isoc(&ep->desc)) {
++		if ((xhci->quirks & XHCI_LIMIT_ENDPOINT_INTERVAL_9) &&
++		    interval >= 9) {
++			interval = 8;
++		}
+ 		if ((xhci->quirks & XHCI_LIMIT_ENDPOINT_INTERVAL_7) &&
+ 		    udev->speed >= USB_SPEED_HIGH &&
+ 		    interval >= 7) {
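
For context on the new quirk: in the endpoint context the interval field is an
exponent, so the service interval is 2^interval x 125 us. Clamping
interval >= 9 down to 8 therefore caps the period at 2^8 x 125 us = 32 ms
instead of 64 ms or more, which the AMD/ATI hosts listed in the xhci-pci hunk
below apparently require to keep periodic endpoints serviced.
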
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 0c481cbc8f085d..00fac8b233d2a9 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -71,12 +71,22 @@
+ #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_XHCI		0x15ec
+ #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_XHCI		0x15f0
+ 
++#define PCI_DEVICE_ID_AMD_ARIEL_TYPEC_XHCI		0x13ed
++#define PCI_DEVICE_ID_AMD_ARIEL_TYPEA_XHCI		0x13ee
++#define PCI_DEVICE_ID_AMD_STARSHIP_XHCI			0x148c
++#define PCI_DEVICE_ID_AMD_FIREFLIGHT_15D4_XHCI		0x15d4
++#define PCI_DEVICE_ID_AMD_FIREFLIGHT_15D5_XHCI		0x15d5
++#define PCI_DEVICE_ID_AMD_RAVEN_15E0_XHCI		0x15e0
++#define PCI_DEVICE_ID_AMD_RAVEN_15E1_XHCI		0x15e1
++#define PCI_DEVICE_ID_AMD_RAVEN2_XHCI			0x15e5
+ #define PCI_DEVICE_ID_AMD_RENOIR_XHCI			0x1639
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_4			0x43b9
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_3			0x43ba
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_2			0x43bb
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_1			0x43bc
+ 
++#define PCI_DEVICE_ID_ATI_NAVI10_7316_XHCI		0x7316
++
+ #define PCI_DEVICE_ID_ASMEDIA_1042_XHCI			0x1042
+ #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI		0x1142
+ #define PCI_DEVICE_ID_ASMEDIA_1142_XHCI			0x1242
+@@ -280,6 +290,21 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	if (pdev->vendor == PCI_VENDOR_ID_NEC)
+ 		xhci->quirks |= XHCI_NEC_HOST;
+ 
++	if (pdev->vendor == PCI_VENDOR_ID_AMD &&
++	    (pdev->device == PCI_DEVICE_ID_AMD_ARIEL_TYPEC_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_AMD_ARIEL_TYPEA_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_AMD_STARSHIP_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_AMD_FIREFLIGHT_15D4_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_AMD_FIREFLIGHT_15D5_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_AMD_RAVEN_15E0_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_AMD_RAVEN_15E1_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_AMD_RAVEN2_XHCI))
++		xhci->quirks |= XHCI_LIMIT_ENDPOINT_INTERVAL_9;
++
++	if (pdev->vendor == PCI_VENDOR_ID_ATI &&
++	    pdev->device == PCI_DEVICE_ID_ATI_NAVI10_7316_XHCI)
++		xhci->quirks |= XHCI_LIMIT_ENDPOINT_INTERVAL_9;
++
+ 	if (pdev->vendor == PCI_VENDOR_ID_AMD && xhci->hci_version == 0x96)
+ 		xhci->quirks |= XHCI_AMD_0x96_HOST;
+ 
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index 3155e3a842da9a..619481dec8e8d8 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -326,7 +326,8 @@ int xhci_plat_probe(struct platform_device *pdev, struct device *sysdev, const s
+ 	}
+ 
+ 	usb3_hcd = xhci_get_usb3_hcd(xhci);
+-	if (usb3_hcd && HCC_MAX_PSA(xhci->hcc_params) >= 4)
++	if (usb3_hcd && HCC_MAX_PSA(xhci->hcc_params) >= 4 &&
++	    !(xhci->quirks & XHCI_BROKEN_STREAMS))
+ 		usb3_hcd->can_do_streams = 1;
+ 
+ 	if (xhci->shared_hcd) {
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 423bf36495705e..b720e04ce7d86c 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -518,9 +518,8 @@ static int xhci_abort_cmd_ring(struct xhci_hcd *xhci, unsigned long flags)
+ 	 * In the future we should distinguish between -ENODEV and -ETIMEDOUT
+ 	 * and try to recover a -ETIMEDOUT with a host controller reset.
+ 	 */
+-	ret = xhci_handshake_check_state(xhci, &xhci->op_regs->cmd_ring,
+-			CMD_RING_RUNNING, 0, 5 * 1000 * 1000,
+-			XHCI_STATE_REMOVING);
++	ret = xhci_handshake(&xhci->op_regs->cmd_ring,
++			CMD_RING_RUNNING, 0, 5 * 1000 * 1000);
+ 	if (ret < 0) {
+ 		xhci_err(xhci, "Abort failed to stop command ring: %d\n", ret);
+ 		xhci_halt(xhci);
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 90eb491267b584..cb9f35acb1f94f 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -83,29 +83,6 @@ int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, u64 timeout_us)
+ 	return ret;
+ }
+ 
+-/*
+- * xhci_handshake_check_state - same as xhci_handshake but takes an additional
+- * exit_state parameter, and bails out with an error immediately when xhc_state
+- * has exit_state flag set.
+- */
+-int xhci_handshake_check_state(struct xhci_hcd *xhci, void __iomem *ptr,
+-		u32 mask, u32 done, int usec, unsigned int exit_state)
+-{
+-	u32	result;
+-	int	ret;
+-
+-	ret = readl_poll_timeout_atomic(ptr, result,
+-				(result & mask) == done ||
+-				result == U32_MAX ||
+-				xhci->xhc_state & exit_state,
+-				1, usec);
+-
+-	if (result == U32_MAX || xhci->xhc_state & exit_state)
+-		return -ENODEV;
+-
+-	return ret;
+-}
+-
+ /*
+  * Disable interrupts and begin the xHCI halting process.
+  */
+@@ -226,8 +203,7 @@ int xhci_reset(struct xhci_hcd *xhci, u64 timeout_us)
+ 	if (xhci->quirks & XHCI_INTEL_HOST)
+ 		udelay(1000);
+ 
+-	ret = xhci_handshake_check_state(xhci, &xhci->op_regs->command,
+-				CMD_RESET, 0, timeout_us, XHCI_STATE_REMOVING);
++	ret = xhci_handshake(&xhci->op_regs->command, CMD_RESET, 0, timeout_us);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1084,7 +1060,10 @@ int xhci_resume(struct xhci_hcd *xhci, bool power_lost, bool is_auto_resume)
+ 		xhci_dbg(xhci, "Stop HCD\n");
+ 		xhci_halt(xhci);
+ 		xhci_zero_64b_regs(xhci);
+-		retval = xhci_reset(xhci, XHCI_RESET_LONG_USEC);
++		if (xhci->xhc_state & XHCI_STATE_REMOVING)
++			retval = -ENODEV;
++		else
++			retval = xhci_reset(xhci, XHCI_RESET_LONG_USEC);
+ 		spin_unlock_irq(&xhci->lock);
+ 		if (retval)
+ 			return retval;
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 242ab9fbc8ae6b..4bd691ea979fe0 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1637,6 +1637,7 @@ struct xhci_hcd {
+ #define XHCI_WRITE_64_HI_LO	BIT_ULL(47)
+ #define XHCI_CDNS_SCTX_QUIRK	BIT_ULL(48)
+ #define XHCI_ETRON_HOST	BIT_ULL(49)
++#define XHCI_LIMIT_ENDPOINT_INTERVAL_9 BIT_ULL(50)
+ 
+ 	unsigned int		num_active_eps;
+ 	unsigned int		limit_active_eps;
+@@ -1855,8 +1856,6 @@ void xhci_remove_secondary_interrupter(struct usb_hcd
+ /* xHCI host controller glue */
+ typedef void (*xhci_get_quirks_t)(struct device *, struct xhci_hcd *);
+ int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, u64 timeout_us);
+-int xhci_handshake_check_state(struct xhci_hcd *xhci, void __iomem *ptr,
+-		u32 mask, u32 done, int usec, unsigned int exit_state);
+ void xhci_quiesce(struct xhci_hcd *xhci);
+ int xhci_halt(struct xhci_hcd *xhci);
+ int xhci_start(struct xhci_hcd *xhci);
+diff --git a/drivers/usb/typec/altmodes/displayport.c b/drivers/usb/typec/altmodes/displayport.c
+index b09b58d7311de9..d8b906ec4d1c88 100644
+--- a/drivers/usb/typec/altmodes/displayport.c
++++ b/drivers/usb/typec/altmodes/displayport.c
+@@ -394,8 +394,7 @@ static int dp_altmode_vdm(struct typec_altmode *alt,
+ 	case CMDT_RSP_NAK:
+ 		switch (cmd) {
+ 		case DP_CMD_STATUS_UPDATE:
+-			if (typec_altmode_exit(alt))
+-				dev_err(&dp->alt->dev, "Exit Mode Failed!\n");
++			dp->state = DP_STATE_EXIT;
+ 			break;
+ 		case DP_CMD_CONFIGURE:
+ 			dp->data.conf = 0;
+@@ -677,7 +676,7 @@ static ssize_t pin_assignment_show(struct device *dev,
+ 
+ 	assignments = get_current_pin_assignments(dp);
+ 
+-	for (i = 0; assignments; assignments >>= 1, i++) {
++	for (i = 0; assignments && i < DP_PIN_ASSIGN_MAX; assignments >>= 1, i++) {
+ 		if (assignments & 1) {
+ 			if (i == cur)
+ 				len += sprintf(buf + len, "[%s] ",
+diff --git a/fs/anon_inodes.c b/fs/anon_inodes.c
+index e51e7d88980a22..1d847a939f29a4 100644
+--- a/fs/anon_inodes.c
++++ b/fs/anon_inodes.c
+@@ -98,14 +98,25 @@ static struct file_system_type anon_inode_fs_type = {
+ 	.kill_sb	= kill_anon_super,
+ };
+ 
+-static struct inode *anon_inode_make_secure_inode(
+-	const char *name,
+-	const struct inode *context_inode)
++/**
++ * anon_inode_make_secure_inode - allocate an anonymous inode with security context
++ * @sb:		[in]	Superblock to allocate from
++ * @name:	[in]	Name of the class of the new file (e.g., "secretmem")
++ * @context_inode:
++ *		[in]	Optional parent inode for security inheritance
++ *
++ * The function ensures proper security initialization through the LSM hook
++ * security_inode_init_security_anon().
++ *
++ * Return:	Pointer to new inode on success, ERR_PTR on failure.
++ */
++struct inode *anon_inode_make_secure_inode(struct super_block *sb, const char *name,
++					   const struct inode *context_inode)
+ {
+ 	struct inode *inode;
+ 	int error;
+ 
+-	inode = alloc_anon_inode(anon_inode_mnt->mnt_sb);
++	inode = alloc_anon_inode(sb);
+ 	if (IS_ERR(inode))
+ 		return inode;
+ 	inode->i_flags &= ~S_PRIVATE;
+@@ -118,6 +129,7 @@ static struct inode *anon_inode_make_secure_inode(
+ 	}
+ 	return inode;
+ }
++EXPORT_SYMBOL_GPL_FOR_MODULES(anon_inode_make_secure_inode, "kvm");
+ 
+ static struct file *__anon_inode_getfile(const char *name,
+ 					 const struct file_operations *fops,
+@@ -132,7 +144,8 @@ static struct file *__anon_inode_getfile(const char *name,
+ 		return ERR_PTR(-ENOENT);
+ 
+ 	if (make_inode) {
+-		inode =	anon_inode_make_secure_inode(name, context_inode);
++		inode =	anon_inode_make_secure_inode(anon_inode_mnt->mnt_sb,
++						     name, context_inode);
+ 		if (IS_ERR(inode)) {
+ 			file = ERR_CAST(inode);
+ 			goto err;
+diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
+index 36937eeab9b8a1..08548b0b5ea1d0 100644
+--- a/fs/btrfs/block-group.h
++++ b/fs/btrfs/block-group.h
+@@ -83,6 +83,8 @@ enum btrfs_block_group_flags {
+ 	BLOCK_GROUP_FLAG_ZONED_DATA_RELOC,
+ 	/* Does the block group need to be added to the free space tree? */
+ 	BLOCK_GROUP_FLAG_NEEDS_FREE_SPACE,
++	/* Set after we add a new block group to the free space tree. */
++	BLOCK_GROUP_FLAG_FREE_SPACE_ADDED,
+ 	/* Indicate that the block group is placed on a sequential zone */
+ 	BLOCK_GROUP_FLAG_SEQUENTIAL_ZONE,
+ 	/*
+diff --git a/fs/btrfs/free-space-tree.c b/fs/btrfs/free-space-tree.c
+index 39c6b96a4c25a8..b65a20fd519ba5 100644
+--- a/fs/btrfs/free-space-tree.c
++++ b/fs/btrfs/free-space-tree.c
+@@ -1217,6 +1217,7 @@ static int clear_free_space_tree(struct btrfs_trans_handle *trans,
+ {
+ 	BTRFS_PATH_AUTO_FREE(path);
+ 	struct btrfs_key key;
++	struct rb_node *node;
+ 	int nr;
+ 	int ret;
+ 
+@@ -1245,6 +1246,16 @@ static int clear_free_space_tree(struct btrfs_trans_handle *trans,
+ 		btrfs_release_path(path);
+ 	}
+ 
++	node = rb_first_cached(&trans->fs_info->block_group_cache_tree);
++	while (node) {
++		struct btrfs_block_group *bg;
++
++		bg = rb_entry(node, struct btrfs_block_group, cache_node);
++		clear_bit(BLOCK_GROUP_FLAG_FREE_SPACE_ADDED, &bg->runtime_flags);
++		node = rb_next(node);
++		cond_resched();
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1334,12 +1345,18 @@ int btrfs_rebuild_free_space_tree(struct btrfs_fs_info *fs_info)
+ 
+ 		block_group = rb_entry(node, struct btrfs_block_group,
+ 				       cache_node);
++
++		if (test_bit(BLOCK_GROUP_FLAG_FREE_SPACE_ADDED,
++			     &block_group->runtime_flags))
++			goto next;
++
+ 		ret = populate_free_space_tree(trans, block_group);
+ 		if (ret) {
+ 			btrfs_abort_transaction(trans, ret);
+ 			btrfs_end_transaction(trans);
+ 			return ret;
+ 		}
++next:
+ 		if (btrfs_should_end_transaction(trans)) {
+ 			btrfs_end_transaction(trans);
+ 			trans = btrfs_start_transaction(free_space_root, 1);
+@@ -1366,6 +1383,29 @@ static int __add_block_group_free_space(struct btrfs_trans_handle *trans,
+ 
+ 	clear_bit(BLOCK_GROUP_FLAG_NEEDS_FREE_SPACE, &block_group->runtime_flags);
+ 
++	/*
++	 * While rebuilding the free space tree we may allocate new metadata
++	 * block groups while modifying the free space tree.
++	 *
++	 * Because during the rebuild (at btrfs_rebuild_free_space_tree()) we
++	 * can use multiple transactions, every time btrfs_end_transaction() is
++	 * called at btrfs_rebuild_free_space_tree() we finish the creation of
++	 * new block groups by calling btrfs_create_pending_block_groups(), and
++	 * that in turn calls us, through add_block_group_free_space(), to add
++	 * a free space info item and a free space extent item for the block
++	 * group.
++	 *
++	 * Then later btrfs_rebuild_free_space_tree() may find such new block
++	 * groups and processes them with populate_free_space_tree(), which can
++	 * fail with EEXIST since there are already items for the block group in
++	 * the free space tree. Notice that we say "may find" because a new
++	 * block group may be added to the block groups rbtree in a node before
++	 * or after the block group currently being processed by the rebuild
++	 * process. So signal the rebuild process to skip such new block groups
++	 * if it finds them.
++	 */
++	set_bit(BLOCK_GROUP_FLAG_FREE_SPACE_ADDED, &block_group->runtime_flags);
++
+ 	ret = add_new_free_space_info(trans, block_group, path);
+ 	if (ret)
+ 		return ret;
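
The two free-space-tree hunks reduce to a guard-flag pattern: the
per-block-group work is made skippable by recording completion in a runtime
flag, so the rebuild's full scan passes over groups that
add_block_group_free_space() already handled while the rebuild was running.
Schematically (names generic):

	/* Rebuild scan: */
	if (test_bit(FLAG_ADDED, &bg->runtime_flags))
		continue;	/* populated concurrently, skip */
	populate(bg);

	/* Creation path: */
	set_bit(FLAG_ADDED, &bg->runtime_flags);
	add_items(bg);
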
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 391172b443e50e..f18f4d59d389e4 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4724,7 +4724,6 @@ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry)
+ 	struct btrfs_fs_info *fs_info = BTRFS_I(inode)->root->fs_info;
+ 	int ret = 0;
+ 	struct btrfs_trans_handle *trans;
+-	u64 last_unlink_trans;
+ 	struct fscrypt_name fname;
+ 
+ 	if (inode->i_size > BTRFS_EMPTY_DIR_SIZE)
+@@ -4750,6 +4749,23 @@ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry)
+ 		goto out_notrans;
+ 	}
+ 
++	/*
++	 * Propagate the last_unlink_trans value of the deleted dir to its
++	 * parent directory. This is to prevent an unrecoverable log tree in the
++	 * case we do something like this:
++	 * 1) create dir foo
++	 * 2) create snapshot under dir foo
++	 * 3) delete the snapshot
++	 * 4) rmdir foo
++	 * 5) mkdir foo
++	 * 6) fsync foo or some file inside foo
++	 *
++	 * This is because we can't unlink other roots when replaying the dir
++	 * deletes for directory foo.
++	 */
++	if (BTRFS_I(inode)->last_unlink_trans >= trans->transid)
++		btrfs_record_snapshot_destroy(trans, BTRFS_I(dir));
++
+ 	if (unlikely(btrfs_ino(BTRFS_I(inode)) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID)) {
+ 		ret = btrfs_unlink_subvol(trans, BTRFS_I(dir), dentry);
+ 		goto out;
+@@ -4759,27 +4775,11 @@ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry)
+ 	if (ret)
+ 		goto out;
+ 
+-	last_unlink_trans = BTRFS_I(inode)->last_unlink_trans;
+-
+ 	/* now the directory is empty */
+ 	ret = btrfs_unlink_inode(trans, BTRFS_I(dir), BTRFS_I(d_inode(dentry)),
+ 				 &fname.disk_name);
+-	if (!ret) {
++	if (!ret)
+ 		btrfs_i_size_write(BTRFS_I(inode), 0);
+-		/*
+-		 * Propagate the last_unlink_trans value of the deleted dir to
+-		 * its parent directory. This is to prevent an unrecoverable
+-		 * log tree in the case we do something like this:
+-		 * 1) create dir foo
+-		 * 2) create snapshot under dir foo
+-		 * 3) delete the snapshot
+-		 * 4) rmdir foo
+-		 * 5) mkdir foo
+-		 * 6) fsync foo or some file inside foo
+-		 */
+-		if (last_unlink_trans >= trans->transid)
+-			BTRFS_I(dir)->last_unlink_trans = last_unlink_trans;
+-	}
+ out:
+ 	btrfs_end_transaction(trans);
+ out_notrans:
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 63aeacc5494574..b70ef4455610a0 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -666,14 +666,14 @@ static noinline int create_subvol(struct mnt_idmap *idmap,
+ 		goto out;
+ 	}
+ 
++	btrfs_record_new_subvolume(trans, BTRFS_I(dir));
++
+ 	ret = btrfs_create_new_inode(trans, &new_inode_args);
+ 	if (ret) {
+ 		btrfs_abort_transaction(trans, ret);
+ 		goto out;
+ 	}
+ 
+-	btrfs_record_new_subvolume(trans, BTRFS_I(dir));
+-
+ 	d_instantiate_new(dentry, new_inode_args.inode);
+ 	new_inode_args.inode = NULL;
+ 
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index ef9660eabf0c63..e05140ce95be9f 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -143,6 +143,9 @@ static struct btrfs_inode *btrfs_iget_logging(u64 objectid, struct btrfs_root *r
+ 	unsigned int nofs_flag;
+ 	struct btrfs_inode *inode;
+ 
++	/* Only meant to be called for subvolume roots and not for log roots. */
++	ASSERT(is_fstree(btrfs_root_id(root)));
++
+ 	/*
+ 	 * We're holding a transaction handle whether we are logging or
+ 	 * replaying a log tree, so we must make sure NOFS semantics apply
+@@ -604,21 +607,6 @@ static int read_alloc_one_name(struct extent_buffer *eb, void *start, int len,
+ 	return 0;
+ }
+ 
+-/*
+- * simple helper to read an inode off the disk from a given root
+- * This can only be called for subvolume roots and not for the log
+- */
+-static noinline struct btrfs_inode *read_one_inode(struct btrfs_root *root,
+-						   u64 objectid)
+-{
+-	struct btrfs_inode *inode;
+-
+-	inode = btrfs_iget_logging(objectid, root);
+-	if (IS_ERR(inode))
+-		return NULL;
+-	return inode;
+-}
+-
+ /* replays a single extent in 'eb' at 'slot' with 'key' into the
+  * subvolume 'root'.  path is released on entry and should be released
+  * on exit.
+@@ -671,9 +659,9 @@ static noinline int replay_one_extent(struct btrfs_trans_handle *trans,
+ 		return 0;
+ 	}
+ 
+-	inode = read_one_inode(root, key->objectid);
+-	if (!inode)
+-		return -EIO;
++	inode = btrfs_iget_logging(key->objectid, root);
++	if (IS_ERR(inode))
++		return PTR_ERR(inode);
+ 
+ 	/*
+ 	 * first check to see if we already have this extent in the
+@@ -945,9 +933,10 @@ static noinline int drop_one_dir_item(struct btrfs_trans_handle *trans,
+ 
+ 	btrfs_release_path(path);
+ 
+-	inode = read_one_inode(root, location.objectid);
+-	if (!inode) {
+-		ret = -EIO;
++	inode = btrfs_iget_logging(location.objectid, root);
++	if (IS_ERR(inode)) {
++		ret = PTR_ERR(inode);
++		inode = NULL;
+ 		goto out;
+ 	}
+ 
+@@ -1070,7 +1059,9 @@ static inline int __add_inode_ref(struct btrfs_trans_handle *trans,
+ 	search_key.type = BTRFS_INODE_REF_KEY;
+ 	search_key.offset = parent_objectid;
+ 	ret = btrfs_search_slot(NULL, root, &search_key, path, 0, 0);
+-	if (ret == 0) {
++	if (ret < 0) {
++		return ret;
++	} else if (ret == 0) {
+ 		struct btrfs_inode_ref *victim_ref;
+ 		unsigned long ptr;
+ 		unsigned long ptr_end;
+@@ -1143,13 +1134,13 @@ static inline int __add_inode_ref(struct btrfs_trans_handle *trans,
+ 			struct fscrypt_str victim_name;
+ 
+ 			extref = (struct btrfs_inode_extref *)(base + cur_offset);
++			victim_name.len = btrfs_inode_extref_name_len(leaf, extref);
+ 
+ 			if (btrfs_inode_extref_parent(leaf, extref) != parent_objectid)
+ 				goto next;
+ 
+ 			ret = read_alloc_one_name(leaf, &extref->name,
+-				 btrfs_inode_extref_name_len(leaf, extref),
+-				 &victim_name);
++						  victim_name.len, &victim_name);
+ 			if (ret)
+ 				return ret;
+ 
+@@ -1164,10 +1155,10 @@ static inline int __add_inode_ref(struct btrfs_trans_handle *trans,
+ 				kfree(victim_name.name);
+ 				return ret;
+ 			} else if (!ret) {
+-				ret = -ENOENT;
+-				victim_parent = read_one_inode(root,
+-						parent_objectid);
+-				if (victim_parent) {
++				victim_parent = btrfs_iget_logging(parent_objectid, root);
++				if (IS_ERR(victim_parent)) {
++					ret = PTR_ERR(victim_parent);
++				} else {
+ 					inc_nlink(&inode->vfs_inode);
+ 					btrfs_release_path(path);
+ 
+@@ -1312,9 +1303,9 @@ static int unlink_old_inode_refs(struct btrfs_trans_handle *trans,
+ 			struct btrfs_inode *dir;
+ 
+ 			btrfs_release_path(path);
+-			dir = read_one_inode(root, parent_id);
+-			if (!dir) {
+-				ret = -ENOENT;
++			dir = btrfs_iget_logging(parent_id, root);
++			if (IS_ERR(dir)) {
++				ret = PTR_ERR(dir);
+ 				kfree(name.name);
+ 				goto out;
+ 			}
+@@ -1386,15 +1377,17 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 	 * copy the back ref in.  The link count fixup code will take
+ 	 * care of the rest
+ 	 */
+-	dir = read_one_inode(root, parent_objectid);
+-	if (!dir) {
+-		ret = -ENOENT;
++	dir = btrfs_iget_logging(parent_objectid, root);
++	if (IS_ERR(dir)) {
++		ret = PTR_ERR(dir);
++		dir = NULL;
+ 		goto out;
+ 	}
+ 
+-	inode = read_one_inode(root, inode_objectid);
+-	if (!inode) {
+-		ret = -EIO;
++	inode = btrfs_iget_logging(inode_objectid, root);
++	if (IS_ERR(inode)) {
++		ret = PTR_ERR(inode);
++		inode = NULL;
+ 		goto out;
+ 	}
+ 
+@@ -1406,11 +1399,13 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 			 * parent object can change from one array
+ 			 * item to another.
+ 			 */
+-			if (!dir)
+-				dir = read_one_inode(root, parent_objectid);
+ 			if (!dir) {
+-				ret = -ENOENT;
+-				goto out;
++				dir = btrfs_iget_logging(parent_objectid, root);
++				if (IS_ERR(dir)) {
++					ret = PTR_ERR(dir);
++					dir = NULL;
++					goto out;
++				}
+ 			}
+ 		} else {
+ 			ret = ref_get_fields(eb, ref_ptr, &name, &ref_index);
+@@ -1679,9 +1674,9 @@ static noinline int fixup_inode_link_counts(struct btrfs_trans_handle *trans,
+ 			break;
+ 
+ 		btrfs_release_path(path);
+-		inode = read_one_inode(root, key.offset);
+-		if (!inode) {
+-			ret = -EIO;
++		inode = btrfs_iget_logging(key.offset, root);
++		if (IS_ERR(inode)) {
++			ret = PTR_ERR(inode);
+ 			break;
+ 		}
+ 
+@@ -1717,9 +1712,9 @@ static noinline int link_to_fixup_dir(struct btrfs_trans_handle *trans,
+ 	struct btrfs_inode *inode;
+ 	struct inode *vfs_inode;
+ 
+-	inode = read_one_inode(root, objectid);
+-	if (!inode)
+-		return -EIO;
++	inode = btrfs_iget_logging(objectid, root);
++	if (IS_ERR(inode))
++		return PTR_ERR(inode);
+ 
+ 	vfs_inode = &inode->vfs_inode;
+ 	key.objectid = BTRFS_TREE_LOG_FIXUP_OBJECTID;
+@@ -1758,14 +1753,14 @@ static noinline int insert_one_name(struct btrfs_trans_handle *trans,
+ 	struct btrfs_inode *dir;
+ 	int ret;
+ 
+-	inode = read_one_inode(root, location->objectid);
+-	if (!inode)
+-		return -ENOENT;
++	inode = btrfs_iget_logging(location->objectid, root);
++	if (IS_ERR(inode))
++		return PTR_ERR(inode);
+ 
+-	dir = read_one_inode(root, dirid);
+-	if (!dir) {
++	dir = btrfs_iget_logging(dirid, root);
++	if (IS_ERR(dir)) {
+ 		iput(&inode->vfs_inode);
+-		return -EIO;
++		return PTR_ERR(dir);
+ 	}
+ 
+ 	ret = btrfs_add_link(trans, dir, inode, name, 1, index);
+@@ -1842,9 +1837,9 @@ static noinline int replay_one_name(struct btrfs_trans_handle *trans,
+ 	bool update_size = true;
+ 	bool name_added = false;
+ 
+-	dir = read_one_inode(root, key->objectid);
+-	if (!dir)
+-		return -EIO;
++	dir = btrfs_iget_logging(key->objectid, root);
++	if (IS_ERR(dir))
++		return PTR_ERR(dir);
+ 
+ 	ret = read_alloc_one_name(eb, di + 1, btrfs_dir_name_len(eb, di), &name);
+ 	if (ret)
+@@ -2144,9 +2139,10 @@ static noinline int check_item_in_log(struct btrfs_trans_handle *trans,
+ 	btrfs_dir_item_key_to_cpu(eb, di, &location);
+ 	btrfs_release_path(path);
+ 	btrfs_release_path(log_path);
+-	inode = read_one_inode(root, location.objectid);
+-	if (!inode) {
+-		ret = -EIO;
++	inode = btrfs_iget_logging(location.objectid, root);
++	if (IS_ERR(inode)) {
++		ret = PTR_ERR(inode);
++		inode = NULL;
+ 		goto out;
+ 	}
+ 
+@@ -2298,14 +2294,17 @@ static noinline int replay_dir_deletes(struct btrfs_trans_handle *trans,
+ 	if (!log_path)
+ 		return -ENOMEM;
+ 
+-	dir = read_one_inode(root, dirid);
+-	/* it isn't an error if the inode isn't there, that can happen
+-	 * because we replay the deletes before we copy in the inode item
+-	 * from the log
++	dir = btrfs_iget_logging(dirid, root);
++	/*
++	 * It isn't an error if the inode isn't there, that can happen because
++	 * we replay the deletes before we copy in the inode item from the log.
+ 	 */
+-	if (!dir) {
++	if (IS_ERR(dir)) {
+ 		btrfs_free_path(log_path);
+-		return 0;
++		ret = PTR_ERR(dir);
++		if (ret == -ENOENT)
++			ret = 0;
++		return ret;
+ 	}
+ 
+ 	range_start = 0;
+@@ -2464,9 +2463,9 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 				struct btrfs_inode *inode;
+ 				u64 from;
+ 
+-				inode = read_one_inode(root, key.objectid);
+-				if (!inode) {
+-					ret = -EIO;
++				inode = btrfs_iget_logging(key.objectid, root);
++				if (IS_ERR(inode)) {
++					ret = PTR_ERR(inode);
+ 					break;
+ 				}
+ 				from = ALIGN(i_size_read(&inode->vfs_inode),
+@@ -7445,6 +7444,8 @@ void btrfs_record_snapshot_destroy(struct btrfs_trans_handle *trans,
+  * full log sync.
+  * Also we don't need to worry with renames, since btrfs_rename() marks the log
+  * for full commit when renaming a subvolume.
++ *
++ * Must be called before creating the subvolume entry in its parent directory.
+  */
+ void btrfs_record_new_subvolume(const struct btrfs_trans_handle *trans,
+ 				struct btrfs_inode *dir)
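
The btrfs hunks above drop the NULL-returning read_one_inode() wrapper so callers see the real errno from btrfs_iget_logging() via PTR_ERR() instead of a blanket -EIO or -ENOENT. A minimal userspace sketch of that ERR_PTR()/PTR_ERR() encoding, simplified from the kernel's err.h (iget() here is a hypothetical stand-in for the lookup):

#include <stdio.h>
#include <errno.h>

#define MAX_ERRNO 4095

/* Encode a negative errno in the pointer value, as the kernel does. */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static int the_obj;

/* Hypothetical lookup: returns the object or an encoded errno, never NULL. */
static void *iget(long objectid)
{
	if (objectid < 0)
		return ERR_PTR(-ENOENT);
	return &the_obj;
}

int main(void)
{
	void *inode = iget(-1);

	if (IS_ERR(inode)) {
		/* Propagate the real errno instead of collapsing it to -EIO. */
		fprintf(stderr, "lookup failed: %ld\n", PTR_ERR(inode));
		return 1;
	}
	return 0;
}
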
+diff --git a/fs/exec.c b/fs/exec.c
+index 8e4ea5f1e64c69..a18d5875c8d945 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -111,6 +111,9 @@ static inline void put_binfmt(struct linux_binfmt * fmt)
+ 
+ bool path_noexec(const struct path *path)
+ {
++	/* If it's an anonymous inode make sure that we catch any shenanigans. */
++	VFS_WARN_ON_ONCE(IS_ANON_FILE(d_inode(path->dentry)) &&
++			 !(path->mnt->mnt_sb->s_iflags & SB_I_NOEXEC));
+ 	return (path->mnt->mnt_flags & MNT_NOEXEC) ||
+ 	       (path->mnt->mnt_sb->s_iflags & SB_I_NOEXEC);
+ }
+@@ -894,13 +897,15 @@ static struct file *do_open_execat(int fd, struct filename *name, int flags)
+ 	if (IS_ERR(file))
+ 		return file;
+ 
++	if (path_noexec(&file->f_path))
++		return ERR_PTR(-EACCES);
++
+ 	/*
+ 	 * In the past the regular type check was here. It moved to may_open() in
+ 	 * 633fb6ac3980 ("exec: move S_ISREG() check earlier"). Since then it is
+ 	 * an invariant that all non-regular files error out before we get here.
+ 	 */
+-	if (WARN_ON_ONCE(!S_ISREG(file_inode(file)->i_mode)) ||
+-	    path_noexec(&file->f_path))
++	if (WARN_ON_ONCE(!S_ISREG(file_inode(file)->i_mode)))
+ 		return ERR_PTR(-EACCES);
+ 
+ 	err = exe_file_deny_write_access(file);
+diff --git a/fs/libfs.c b/fs/libfs.c
+index e28da9574a652b..14e0e9b18c8ed8 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -1648,12 +1648,10 @@ struct inode *alloc_anon_inode(struct super_block *s)
+ 	 */
+ 	inode->i_state = I_DIRTY;
+ 	/*
+-	 * Historically anonymous inodes didn't have a type at all and
+-	 * userspace has come to rely on this. Internally they're just
+-	 * regular files but S_IFREG is masked off when reporting
+-	 * information to userspace.
++	 * Historically anonymous inodes don't have a type at all and
++	 * userspace has come to rely on this.
+ 	 */
+-	inode->i_mode = S_IFREG | S_IRUSR | S_IWUSR;
++	inode->i_mode = S_IRUSR | S_IWUSR;
+ 	inode->i_uid = current_fsuid();
+ 	inode->i_gid = current_fsgid();
+ 	inode->i_flags |= S_PRIVATE | S_ANON_INODE;
+diff --git a/fs/namei.c b/fs/namei.c
+index 84a0e0b0111c78..8e165c99feee2c 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -3464,7 +3464,7 @@ static int may_open(struct mnt_idmap *idmap, const struct path *path,
+ 			return -EACCES;
+ 		break;
+ 	default:
+-		VFS_BUG_ON_INODE(1, inode);
++		VFS_BUG_ON_INODE(!IS_ANON_FILE(inode), inode);
+ 	}
+ 
+ 	error = inode_permission(idmap, inode, MAY_OPEN | acc_mode);
+diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
+index dbb544e183d13d..9f22ff890a8cd5 100644
+--- a/fs/netfs/buffered_write.c
++++ b/fs/netfs/buffered_write.c
+@@ -64,6 +64,7 @@ static void netfs_update_i_size(struct netfs_inode *ctx, struct inode *inode,
+ 		return;
+ 	}
+ 
++	spin_lock(&inode->i_lock);
+ 	i_size_write(inode, pos);
+ #if IS_ENABLED(CONFIG_FSCACHE)
+ 	fscache_update_cookie(ctx->cache, NULL, &pos);
+@@ -77,6 +78,7 @@ static void netfs_update_i_size(struct netfs_inode *ctx, struct inode *inode,
+ 					DIV_ROUND_UP(pos, SECTOR_SIZE),
+ 					inode->i_blocks + add);
+ 	}
++	spin_unlock(&inode->i_lock);
+ }
+ 
+ /**
+diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
+index fa9a5bf3c6d512..3efa5894b2c07d 100644
+--- a/fs/netfs/direct_write.c
++++ b/fs/netfs/direct_write.c
+@@ -14,13 +14,17 @@ static void netfs_cleanup_dio_write(struct netfs_io_request *wreq)
+ 	struct inode *inode = wreq->inode;
+ 	unsigned long long end = wreq->start + wreq->transferred;
+ 
+-	if (!wreq->error &&
+-	    i_size_read(inode) < end) {
++	if (wreq->error || end <= i_size_read(inode))
++		return;
++
++	spin_lock(&inode->i_lock);
++	if (end > i_size_read(inode)) {
+ 		if (wreq->netfs_ops->update_i_size)
+ 			wreq->netfs_ops->update_i_size(inode, end);
+ 		else
+ 			i_size_write(inode, end);
+ 	}
++	spin_unlock(&inode->i_lock);
+ }
+ 
+ /*
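
Both netfs hunks move the i_size compare-and-update under inode->i_lock so two completing writes cannot both pass the size check and then store out of order. A pthread sketch of the same check-then-act pattern (names are illustrative, not the netfs API):

#include <pthread.h>
#include <stdio.h>

static long long i_size;
static pthread_mutex_t i_lock = PTHREAD_MUTEX_INITIALIZER;

/* Re-check under the lock so a concurrent caller with a larger 'end'
 * cannot be overwritten by a smaller, later store. */
static void update_i_size(long long end)
{
	pthread_mutex_lock(&i_lock);
	if (end > i_size)
		i_size = end;
	pthread_mutex_unlock(&i_lock);
}

int main(void)
{
	update_i_size(4096);
	update_i_size(1024);		/* loses: the size never shrinks */
	printf("%lld\n", i_size);	/* prints 4096 */
	return 0;
}
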
+diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
+index 43b67a28a8fa07..8b1c11ef32aa50 100644
+--- a/fs/netfs/misc.c
++++ b/fs/netfs/misc.c
+@@ -381,7 +381,12 @@ void netfs_wait_for_in_progress_stream(struct netfs_io_request *rreq,
+ static int netfs_collect_in_app(struct netfs_io_request *rreq,
+ 				bool (*collector)(struct netfs_io_request *rreq))
+ {
+-	bool need_collect = false, inactive = true;
++	bool need_collect = false, inactive = true, done = true;
++
++	if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags)) {
++		trace_netfs_rreq(rreq, netfs_rreq_trace_recollect);
++		return 1; /* Done */
++	}
+ 
+ 	for (int i = 0; i < NR_IO_STREAMS; i++) {
+ 		struct netfs_io_subrequest *subreq;
+@@ -400,9 +405,11 @@ static int netfs_collect_in_app(struct netfs_io_request *rreq,
+ 			need_collect = true;
+ 			break;
+ 		}
++		if (subreq || !test_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags))
++			done = false;
+ 	}
+ 
+-	if (!need_collect && !inactive)
++	if (!need_collect && !inactive && !done)
+ 		return 0; /* Sleep */
+ 
+ 	__set_current_state(TASK_RUNNING);
+@@ -423,8 +430,8 @@ static int netfs_collect_in_app(struct netfs_io_request *rreq,
+ /*
+  * Wait for a request to complete, successfully or otherwise.
+  */
+-static ssize_t netfs_wait_for_request(struct netfs_io_request *rreq,
+-				      bool (*collector)(struct netfs_io_request *rreq))
++static ssize_t netfs_wait_for_in_progress(struct netfs_io_request *rreq,
++					  bool (*collector)(struct netfs_io_request *rreq))
+ {
+ 	DEFINE_WAIT(myself);
+ 	ssize_t ret;
+@@ -440,6 +447,9 @@ static ssize_t netfs_wait_for_request(struct netfs_io_request *rreq,
+ 			case 1:
+ 				goto all_collected;
+ 			case 2:
++				if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags))
++					break;
++				cond_resched();
+ 				continue;
+ 			}
+ 		}
+@@ -478,12 +488,12 @@ static ssize_t netfs_wait_for_request(struct netfs_io_request *rreq,
+ 
+ ssize_t netfs_wait_for_read(struct netfs_io_request *rreq)
+ {
+-	return netfs_wait_for_request(rreq, netfs_read_collection);
++	return netfs_wait_for_in_progress(rreq, netfs_read_collection);
+ }
+ 
+ ssize_t netfs_wait_for_write(struct netfs_io_request *rreq)
+ {
+-	return netfs_wait_for_request(rreq, netfs_write_collection);
++	return netfs_wait_for_in_progress(rreq, netfs_write_collection);
+ }
+ 
+ /*
+@@ -507,6 +517,10 @@ static void netfs_wait_for_pause(struct netfs_io_request *rreq,
+ 			case 1:
+ 				goto all_collected;
+ 			case 2:
++				if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags) ||
++				    !test_bit(NETFS_RREQ_PAUSE, &rreq->flags))
++					break;
++				cond_resched();
+ 				continue;
+ 			}
+ 		}
+diff --git a/fs/netfs/write_retry.c b/fs/netfs/write_retry.c
+index 9d1d8a8bab7261..7158657061e981 100644
+--- a/fs/netfs/write_retry.c
++++ b/fs/netfs/write_retry.c
+@@ -153,7 +153,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
+ 			trace_netfs_sreq_ref(wreq->debug_id, subreq->debug_index,
+ 					     refcount_read(&subreq->ref),
+ 					     netfs_sreq_trace_new);
+-			netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
++			trace_netfs_sreq(subreq, netfs_sreq_trace_split);
+ 
+ 			list_add(&subreq->rreq_link, &to->rreq_link);
+ 			to = list_next_entry(to, rreq_link);
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index e6909cafab6864..4bea008dbebd7c 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -1105,6 +1105,7 @@ static void ff_layout_reset_read(struct nfs_pgio_header *hdr)
+ }
+ 
+ static int ff_layout_async_handle_error_v4(struct rpc_task *task,
++					   u32 op_status,
+ 					   struct nfs4_state *state,
+ 					   struct nfs_client *clp,
+ 					   struct pnfs_layout_segment *lseg,
+@@ -1115,32 +1116,42 @@ static int ff_layout_async_handle_error_v4(struct rpc_task *task,
+ 	struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx);
+ 	struct nfs4_slot_table *tbl = &clp->cl_session->fc_slot_table;
+ 
+-	switch (task->tk_status) {
+-	case -NFS4ERR_BADSESSION:
+-	case -NFS4ERR_BADSLOT:
+-	case -NFS4ERR_BAD_HIGH_SLOT:
+-	case -NFS4ERR_DEADSESSION:
+-	case -NFS4ERR_CONN_NOT_BOUND_TO_SESSION:
+-	case -NFS4ERR_SEQ_FALSE_RETRY:
+-	case -NFS4ERR_SEQ_MISORDERED:
++	switch (op_status) {
++	case NFS4_OK:
++	case NFS4ERR_NXIO:
++		break;
++	case NFSERR_PERM:
++		if (!task->tk_xprt)
++			break;
++		xprt_force_disconnect(task->tk_xprt);
++		goto out_retry;
++	case NFS4ERR_BADSESSION:
++	case NFS4ERR_BADSLOT:
++	case NFS4ERR_BAD_HIGH_SLOT:
++	case NFS4ERR_DEADSESSION:
++	case NFS4ERR_CONN_NOT_BOUND_TO_SESSION:
++	case NFS4ERR_SEQ_FALSE_RETRY:
++	case NFS4ERR_SEQ_MISORDERED:
+ 		dprintk("%s ERROR %d, Reset session. Exchangeid "
+ 			"flags 0x%x\n", __func__, task->tk_status,
+ 			clp->cl_exchange_flags);
+ 		nfs4_schedule_session_recovery(clp->cl_session, task->tk_status);
+-		break;
+-	case -NFS4ERR_DELAY:
+-	case -NFS4ERR_GRACE:
++		goto out_retry;
++	case NFS4ERR_DELAY:
++		nfs_inc_stats(lseg->pls_layout->plh_inode, NFSIOS_DELAY);
++		fallthrough;
++	case NFS4ERR_GRACE:
+ 		rpc_delay(task, FF_LAYOUT_POLL_RETRY_MAX);
+-		break;
+-	case -NFS4ERR_RETRY_UNCACHED_REP:
+-		break;
++		goto out_retry;
++	case NFS4ERR_RETRY_UNCACHED_REP:
++		goto out_retry;
+ 	/* Invalidate Layout errors */
+-	case -NFS4ERR_PNFS_NO_LAYOUT:
+-	case -ESTALE:           /* mapped NFS4ERR_STALE */
+-	case -EBADHANDLE:       /* mapped NFS4ERR_BADHANDLE */
+-	case -EISDIR:           /* mapped NFS4ERR_ISDIR */
+-	case -NFS4ERR_FHEXPIRED:
+-	case -NFS4ERR_WRONG_TYPE:
++	case NFS4ERR_PNFS_NO_LAYOUT:
++	case NFS4ERR_STALE:
++	case NFS4ERR_BADHANDLE:
++	case NFS4ERR_ISDIR:
++	case NFS4ERR_FHEXPIRED:
++	case NFS4ERR_WRONG_TYPE:
+ 		dprintk("%s Invalid layout error %d\n", __func__,
+ 			task->tk_status);
+ 		/*
+@@ -1153,6 +1164,11 @@ static int ff_layout_async_handle_error_v4(struct rpc_task *task,
+ 		pnfs_destroy_layout(NFS_I(inode));
+ 		rpc_wake_up(&tbl->slot_tbl_waitq);
+ 		goto reset;
++	default:
++		break;
++	}
++
++	switch (task->tk_status) {
+ 	/* RPC connection errors */
+ 	case -ENETDOWN:
+ 	case -ENETUNREACH:
+@@ -1172,27 +1188,56 @@ static int ff_layout_async_handle_error_v4(struct rpc_task *task,
+ 		nfs4_delete_deviceid(devid->ld, devid->nfs_client,
+ 				&devid->deviceid);
+ 		rpc_wake_up(&tbl->slot_tbl_waitq);
+-		fallthrough;
++		break;
+ 	default:
+-		if (ff_layout_avoid_mds_available_ds(lseg))
+-			return -NFS4ERR_RESET_TO_PNFS;
+-reset:
+-		dprintk("%s Retry through MDS. Error %d\n", __func__,
+-			task->tk_status);
+-		return -NFS4ERR_RESET_TO_MDS;
++		break;
+ 	}
++
++	if (ff_layout_avoid_mds_available_ds(lseg))
++		return -NFS4ERR_RESET_TO_PNFS;
++reset:
++	dprintk("%s Retry through MDS. Error %d\n", __func__,
++		task->tk_status);
++	return -NFS4ERR_RESET_TO_MDS;
++
++out_retry:
+ 	task->tk_status = 0;
+ 	return -EAGAIN;
+ }
+ 
+ /* Retry all errors through either pNFS or MDS except for -EJUKEBOX */
+ static int ff_layout_async_handle_error_v3(struct rpc_task *task,
++					   u32 op_status,
+ 					   struct nfs_client *clp,
+ 					   struct pnfs_layout_segment *lseg,
+ 					   u32 idx)
+ {
+ 	struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx);
+ 
++	switch (op_status) {
++	case NFS_OK:
++	case NFSERR_NXIO:
++		break;
++	case NFSERR_PERM:
++		if (!task->tk_xprt)
++			break;
++		xprt_force_disconnect(task->tk_xprt);
++		goto out_retry;
++	case NFSERR_ACCES:
++	case NFSERR_BADHANDLE:
++	case NFSERR_FBIG:
++	case NFSERR_IO:
++	case NFSERR_NOSPC:
++	case NFSERR_ROFS:
++	case NFSERR_STALE:
++		goto out_reset_to_pnfs;
++	case NFSERR_JUKEBOX:
++		nfs_inc_stats(lseg->pls_layout->plh_inode, NFSIOS_DELAY);
++		goto out_retry;
++	default:
++		break;
++	}
++
+ 	switch (task->tk_status) {
+ 	/* File access problems. Don't mark the device as unavailable */
+ 	case -EACCES:
+@@ -1216,6 +1261,7 @@ static int ff_layout_async_handle_error_v3(struct rpc_task *task,
+ 		nfs4_delete_deviceid(devid->ld, devid->nfs_client,
+ 				&devid->deviceid);
+ 	}
++out_reset_to_pnfs:
+ 	/* FIXME: Need to prevent infinite looping here. */
+ 	return -NFS4ERR_RESET_TO_PNFS;
+ out_retry:
+@@ -1226,6 +1272,7 @@ static int ff_layout_async_handle_error_v3(struct rpc_task *task,
+ }
+ 
+ static int ff_layout_async_handle_error(struct rpc_task *task,
++					u32 op_status,
+ 					struct nfs4_state *state,
+ 					struct nfs_client *clp,
+ 					struct pnfs_layout_segment *lseg,
+@@ -1244,10 +1291,11 @@ static int ff_layout_async_handle_error(struct rpc_task *task,
+ 
+ 	switch (vers) {
+ 	case 3:
+-		return ff_layout_async_handle_error_v3(task, clp, lseg, idx);
+-	case 4:
+-		return ff_layout_async_handle_error_v4(task, state, clp,
++		return ff_layout_async_handle_error_v3(task, op_status, clp,
+ 						       lseg, idx);
++	case 4:
++		return ff_layout_async_handle_error_v4(task, op_status, state,
++						       clp, lseg, idx);
+ 	default:
+ 		/* should never happen */
+ 		WARN_ON_ONCE(1);
+@@ -1300,6 +1348,7 @@ static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg,
+ 	switch (status) {
+ 	case NFS4ERR_DELAY:
+ 	case NFS4ERR_GRACE:
++	case NFS4ERR_PERM:
+ 		break;
+ 	case NFS4ERR_NXIO:
+ 		ff_layout_mark_ds_unreachable(lseg, idx);
+@@ -1332,7 +1381,8 @@ static int ff_layout_read_done_cb(struct rpc_task *task,
+ 		trace_ff_layout_read_error(hdr, task->tk_status);
+ 	}
+ 
+-	err = ff_layout_async_handle_error(task, hdr->args.context->state,
++	err = ff_layout_async_handle_error(task, hdr->res.op_status,
++					   hdr->args.context->state,
+ 					   hdr->ds_clp, hdr->lseg,
+ 					   hdr->pgio_mirror_idx);
+ 
+@@ -1505,7 +1555,8 @@ static int ff_layout_write_done_cb(struct rpc_task *task,
+ 		trace_ff_layout_write_error(hdr, task->tk_status);
+ 	}
+ 
+-	err = ff_layout_async_handle_error(task, hdr->args.context->state,
++	err = ff_layout_async_handle_error(task, hdr->res.op_status,
++					   hdr->args.context->state,
+ 					   hdr->ds_clp, hdr->lseg,
+ 					   hdr->pgio_mirror_idx);
+ 
+@@ -1554,8 +1605,9 @@ static int ff_layout_commit_done_cb(struct rpc_task *task,
+ 		trace_ff_layout_commit_error(data, task->tk_status);
+ 	}
+ 
+-	err = ff_layout_async_handle_error(task, NULL, data->ds_clp,
+-					   data->lseg, data->ds_commit_index);
++	err = ff_layout_async_handle_error(task, data->res.op_status,
++					   NULL, data->ds_clp, data->lseg,
++					   data->ds_commit_index);
+ 
+ 	trace_nfs4_pnfs_commit_ds(data, err);
+ 	switch (err) {
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 8ab7868807a7d9..a2fa6bc4d74e37 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -2589,15 +2589,26 @@ EXPORT_SYMBOL_GPL(nfs_net_id);
+ static int nfs_net_init(struct net *net)
+ {
+ 	struct nfs_net *nn = net_generic(net, nfs_net_id);
++	int err;
+ 
+ 	nfs_clients_init(net);
+ 
+ 	if (!rpc_proc_register(net, &nn->rpcstats)) {
+-		nfs_clients_exit(net);
+-		return -ENOMEM;
++		err = -ENOMEM;
++		goto err_proc_rpc;
+ 	}
+ 
+-	return nfs_fs_proc_net_init(net);
++	err = nfs_fs_proc_net_init(net);
++	if (err)
++		goto err_proc_nfs;
++
++	return 0;
++
++err_proc_nfs:
++	rpc_proc_unregister(net, "nfs");
++err_proc_rpc:
++	nfs_clients_exit(net);
++	return err;
+ }
+ 
+ static void nfs_net_exit(struct net *net)
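
The nfs_net_init() change converts the early returns into the usual goto unwind ladder, tearing down each successfully registered resource in reverse order. A generic sketch of the idiom (the register/unregister names are placeholders):

#include <errno.h>

static int register_a(void) { return 0; }
static void unregister_a(void) {}
static int register_b(void) { return 0; }
static void unregister_b(void) {}
static int register_c(void) { return -ENOMEM; }

static int subsystem_init(void)
{
	int err;

	err = register_a();
	if (err)
		return err;

	err = register_b();
	if (err)
		goto err_a;

	err = register_c();
	if (err)
		goto err_b;

	return 0;

err_b:	/* undo in reverse order of setup */
	unregister_b();
err_a:
	unregister_a();
	return err;
}

int main(void) { return subsystem_init() ? 1 : 0; }
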
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 3adb7d0dbec7ac..1a7ec68bde1532 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -2059,8 +2059,10 @@ static void nfs_layoutget_begin(struct pnfs_layout_hdr *lo)
+ static void nfs_layoutget_end(struct pnfs_layout_hdr *lo)
+ {
+ 	if (atomic_dec_and_test(&lo->plh_outstanding) &&
+-	    test_and_clear_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags))
++	    test_and_clear_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags)) {
++		smp_mb__after_atomic();
+ 		wake_up_bit(&lo->plh_flags, NFS_LAYOUT_DRAIN);
++	}
+ }
+ 
+ static bool pnfs_is_first_layoutget(struct pnfs_layout_hdr *lo)
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 56381cbb63990f..eed29f043114ad 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -777,6 +777,7 @@ struct TCP_Server_Info {
+ 	__le32 session_key_id; /* retrieved from negotiate response and send in session setup request */
+ 	struct session_key session_key;
+ 	unsigned long lstrp; /* when we got last response from this server */
++	unsigned long neg_start; /* when negotiate started (jiffies) */
+ 	struct cifs_secmech secmech; /* crypto sec mech functs, descriptors */
+ #define	CIFS_NEGFLAVOR_UNENCAP	1	/* wct == 17, but no ext_sec */
+ #define	CIFS_NEGFLAVOR_EXTENDED	2	/* wct == 17, ext_sec bit set */
+@@ -1303,6 +1304,7 @@ struct cifs_tcon {
+ 	bool use_persistent:1; /* use persistent instead of durable handles */
+ 	bool no_lease:1;    /* Do not request leases on files or directories */
+ 	bool use_witness:1; /* use witness protocol */
++	bool dummy:1; /* dummy tcon used for reconnecting channels */
+ 	__le32 capabilities;
+ 	__u32 share_flags;
+ 	__u32 maximal_access;
+diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
+index 66093fa78aed7d..045227ed4efc96 100644
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -136,6 +136,7 @@ extern int SendReceiveBlockingLock(const unsigned int xid,
+ 			struct smb_hdr *out_buf,
+ 			int *bytes_returned);
+ 
++void smb2_query_server_interfaces(struct work_struct *work);
+ void
+ cifs_signal_cifsd_for_reconnect(struct TCP_Server_Info *server,
+ 				      bool all_channels);
+diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
+index 7216fcec79e8b2..0e509a0433fb67 100644
+--- a/fs/smb/client/cifssmb.c
++++ b/fs/smb/client/cifssmb.c
+@@ -1335,6 +1335,7 @@ cifs_readv_callback(struct mid_q_entry *mid)
+ 		break;
+ 	case MID_REQUEST_SUBMITTED:
+ 	case MID_RETRY_NEEDED:
++		__set_bit(NETFS_SREQ_NEED_RETRY, &rdata->subreq.flags);
+ 		rdata->result = -EAGAIN;
+ 		if (server->sign && rdata->got_bytes)
+ 			/* reset bytes number since we can not check a sign */
+@@ -1714,6 +1715,7 @@ cifs_writev_callback(struct mid_q_entry *mid)
+ 		break;
+ 	case MID_REQUEST_SUBMITTED:
+ 	case MID_RETRY_NEEDED:
++		__set_bit(NETFS_SREQ_NEED_RETRY, &wdata->subreq.flags);
+ 		result = -EAGAIN;
+ 		break;
+ 	default:
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index e92c7b71626fd7..7ebed5e856dc84 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -97,7 +97,7 @@ static int reconn_set_ipaddr_from_hostname(struct TCP_Server_Info *server)
+ 	return rc;
+ }
+ 
+-static void smb2_query_server_interfaces(struct work_struct *work)
++void smb2_query_server_interfaces(struct work_struct *work)
+ {
+ 	int rc;
+ 	int xid;
+@@ -679,12 +679,12 @@ server_unresponsive(struct TCP_Server_Info *server)
+ 	/*
+ 	 * If we're in the process of mounting a share or reconnecting a session
+ 	 * and the server abruptly shut down (e.g. socket wasn't closed, packet
+-	 * had been ACK'ed but no SMB response), don't wait longer than 20s to
+-	 * negotiate protocol.
++	 * had been ACK'ed but no SMB response), don't wait longer than 20s from
++	 * when negotiation actually started.
+ 	 */
+ 	spin_lock(&server->srv_lock);
+ 	if (server->tcpStatus == CifsInNegotiate &&
+-	    time_after(jiffies, server->lstrp + 20 * HZ)) {
++	    time_after(jiffies, server->neg_start + 20 * HZ)) {
+ 		spin_unlock(&server->srv_lock);
+ 		cifs_reconnect(server, false);
+ 		return true;
+@@ -2880,20 +2880,14 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb3_fs_context *ctx)
+ 	tcon->max_cached_dirs = ctx->max_cached_dirs;
+ 	tcon->nodelete = ctx->nodelete;
+ 	tcon->local_lease = ctx->local_lease;
+-	INIT_LIST_HEAD(&tcon->pending_opens);
+ 	tcon->status = TID_GOOD;
+ 
+-	INIT_DELAYED_WORK(&tcon->query_interfaces,
+-			  smb2_query_server_interfaces);
+ 	if (ses->server->dialect >= SMB30_PROT_ID &&
+ 	    (ses->server->capabilities & SMB2_GLOBAL_CAP_MULTI_CHANNEL)) {
+ 		/* schedule query interfaces poll */
+ 		queue_delayed_work(cifsiod_wq, &tcon->query_interfaces,
+ 				   (SMB_INTERFACE_POLL_INTERVAL * HZ));
+ 	}
+-#ifdef CONFIG_CIFS_DFS_UPCALL
+-	INIT_DELAYED_WORK(&tcon->dfs_cache_work, dfs_cache_refresh);
+-#endif
+ 	spin_lock(&cifs_tcp_ses_lock);
+ 	list_add(&tcon->tcon_list, &ses->tcon_list);
+ 	spin_unlock(&cifs_tcp_ses_lock);
+@@ -4209,6 +4203,7 @@ cifs_negotiate_protocol(const unsigned int xid, struct cifs_ses *ses,
+ 
+ 	server->lstrp = jiffies;
+ 	server->tcpStatus = CifsInNegotiate;
++	server->neg_start = jiffies;
+ 	spin_unlock(&server->srv_lock);
+ 
+ 	rc = server->ops->negotiate(xid, ses, server);
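
The connect.c change stamps neg_start when negotiation actually begins and compares it with time_after(), which stays correct across jiffies wraparound. A userspace sketch of the wraparound-safe comparison (the kernel's time_after() reduces to this signed subtraction):

#include <stdio.h>

/* True if a is after b, even across unsigned wraparound. */
#define time_after(a, b) ((long)((b) - (a)) < 0)

int main(void)
{
	unsigned long neg_start = (unsigned long)-5;	/* just before wrap */
	unsigned long now = 10;				/* just after wrap */

	/* A naive 'now > neg_start' would say no; the signed delta says yes. */
	printf("%d\n", time_after(now, neg_start));	/* prints 1 */
	return 0;
}
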
+diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c
+index a634a34d4086a0..59ccc2229ab300 100644
+--- a/fs/smb/client/fs_context.c
++++ b/fs/smb/client/fs_context.c
+@@ -1824,10 +1824,14 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ 			cifs_errorf(fc, "symlinkroot mount options must be absolute path\n");
+ 			goto cifs_parse_mount_err;
+ 		}
+-		kfree(ctx->symlinkroot);
+-		ctx->symlinkroot = kstrdup(param->string, GFP_KERNEL);
+-		if (!ctx->symlinkroot)
++		if (strnlen(param->string, PATH_MAX) == PATH_MAX) {
++			cifs_errorf(fc, "symlinkroot path too long (max path length: %u)\n",
++				    PATH_MAX - 1);
+ 			goto cifs_parse_mount_err;
++		}
++		kfree(ctx->symlinkroot);
++		ctx->symlinkroot = param->string;
++		param->string = NULL;
+ 		break;
+ 	}
+ 	/* case Opt_ignore: - is ignored as expected ... */
+@@ -1837,13 +1841,6 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ 		goto cifs_parse_mount_err;
+ 	}
+ 
+-	/*
+-	 * By default resolve all native absolute symlinks relative to "/mnt/".
+-	 * Same default has drvfs driver running in WSL for resolving SMB shares.
+-	 */
+-	if (!ctx->symlinkroot)
+-		ctx->symlinkroot = kstrdup("/mnt/", GFP_KERNEL);
+-
+ 	return 0;
+ 
+  cifs_parse_mount_err:
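
The symlinkroot change bounds the length check so parsing never scans past PATH_MAX, then takes ownership of param->string instead of duplicating it. A standalone sketch of the bounded check plus pointer transfer (memchr() stands in for the patch's strnlen() test; the names are illustrative):

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

#define PATH_MAX 4096	/* stand-in for the kernel's limit */

/* Reject over-long strings without scanning past the bound, then steal
 * the caller's pointer so the string is never copied or double-freed. */
static char *take_symlinkroot(char **strp)
{
	char *s = *strp;

	if (!memchr(s, '\0', PATH_MAX))
		return NULL;	/* no terminator within the bound: too long */
	*strp = NULL;		/* ownership transferred */
	return s;
}

int main(void)
{
	char *param = strdup("/mnt/");
	char *symlinkroot = take_symlinkroot(&param);

	printf("%s\n", symlinkroot ? symlinkroot : "(rejected)");
	free(symlinkroot);
	free(param);	/* NULL after a successful transfer; free(NULL) is a no-op */
	return 0;
}
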
+diff --git a/fs/smb/client/misc.c b/fs/smb/client/misc.c
+index e77017f470845f..da23cc12a52caa 100644
+--- a/fs/smb/client/misc.c
++++ b/fs/smb/client/misc.c
+@@ -151,6 +151,12 @@ tcon_info_alloc(bool dir_leases_enabled, enum smb3_tcon_ref_trace trace)
+ #ifdef CONFIG_CIFS_DFS_UPCALL
+ 	INIT_LIST_HEAD(&ret_buf->dfs_ses_list);
+ #endif
++	INIT_LIST_HEAD(&ret_buf->pending_opens);
++	INIT_DELAYED_WORK(&ret_buf->query_interfaces,
++			  smb2_query_server_interfaces);
++#ifdef CONFIG_CIFS_DFS_UPCALL
++	INIT_DELAYED_WORK(&ret_buf->dfs_cache_work, dfs_cache_refresh);
++#endif
+ 
+ 	return ret_buf;
+ }
+diff --git a/fs/smb/client/readdir.c b/fs/smb/client/readdir.c
+index c3feb26fcfd03a..7bf3214117a91e 100644
+--- a/fs/smb/client/readdir.c
++++ b/fs/smb/client/readdir.c
+@@ -263,7 +263,7 @@ cifs_posix_to_fattr(struct cifs_fattr *fattr, struct smb2_posix_info *info,
+ 	/* The Mode field in the response can now include the file type as well */
+ 	fattr->cf_mode = wire_mode_to_posix(le32_to_cpu(info->Mode),
+ 					    fattr->cf_cifsattrs & ATTR_DIRECTORY);
+-	fattr->cf_dtype = S_DT(le32_to_cpu(info->Mode));
++	fattr->cf_dtype = S_DT(fattr->cf_mode);
+ 
+ 	switch (fattr->cf_mode & S_IFMT) {
+ 	case S_IFLNK:
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index 1c40e42e4d8973..5fa29a97ac154b 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -57,6 +57,7 @@ static int create_native_symlink(const unsigned int xid, struct inode *inode,
+ 	struct reparse_symlink_data_buffer *buf = NULL;
+ 	struct cifs_open_info_data data = {};
+ 	struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
++	const char *symroot = cifs_sb->ctx->symlinkroot;
+ 	struct inode *new;
+ 	struct kvec iov;
+ 	__le16 *path = NULL;
+@@ -82,7 +83,8 @@ static int create_native_symlink(const unsigned int xid, struct inode *inode,
+ 		.symlink_target = symlink_target,
+ 	};
+ 
+-	if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS) && symname[0] == '/') {
++	if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS) &&
++	    symroot && symname[0] == '/') {
+ 		/*
+ 		 * This is a request to create an absolute symlink on the server
+ 		 * which does not support POSIX paths, and expects symlink in
+@@ -92,7 +94,7 @@ static int create_native_symlink(const unsigned int xid, struct inode *inode,
+ 		 * ensure compatibility of this symlink stored in absolute form
+ 		 * on the SMB server.
+ 		 */
+-		if (!strstarts(symname, cifs_sb->ctx->symlinkroot)) {
++		if (!strstarts(symname, symroot)) {
+ 			/*
+ 			 * If the absolute Linux symlink target path is not
+ 			 * inside "symlinkroot" location then there is no way
+@@ -101,12 +103,12 @@ static int create_native_symlink(const unsigned int xid, struct inode *inode,
+ 			cifs_dbg(VFS,
+ 				 "absolute symlink '%s' cannot be converted to NT format "
+ 				 "because it is outside of symlinkroot='%s'\n",
+-				 symname, cifs_sb->ctx->symlinkroot);
++				 symname, symroot);
+ 			rc = -EINVAL;
+ 			goto out;
+ 		}
+-		len = strlen(cifs_sb->ctx->symlinkroot);
+-		if (cifs_sb->ctx->symlinkroot[len-1] != '/')
++		len = strlen(symroot);
++		if (symroot[len - 1] != '/')
+ 			len++;
+ 		if (symname[len] >= 'a' && symname[len] <= 'z' &&
+ 		    (symname[len+1] == '/' || symname[len+1] == '\0')) {
+@@ -782,6 +784,7 @@ int smb2_parse_native_symlink(char **target, const char *buf, unsigned int len,
+ 			      const char *full_path,
+ 			      struct cifs_sb_info *cifs_sb)
+ {
++	const char *symroot = cifs_sb->ctx->symlinkroot;
+ 	char sep = CIFS_DIR_SEP(cifs_sb);
+ 	char *linux_target = NULL;
+ 	char *smb_target = NULL;
+@@ -815,7 +818,8 @@ int smb2_parse_native_symlink(char **target, const char *buf, unsigned int len,
+ 		goto out;
+ 	}
+ 
+-	if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS) && !relative) {
++	if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS) &&
++	    symroot && !relative) {
+ 		/*
+ 		 * This is an absolute symlink from the server which does not
+ 		 * support POSIX paths, so the symlink is in NT-style path.
+@@ -907,15 +911,15 @@ int smb2_parse_native_symlink(char **target, const char *buf, unsigned int len,
+ 		}
+ 
+ 		abs_path_len = strlen(abs_path)+1;
+-		symlinkroot_len = strlen(cifs_sb->ctx->symlinkroot);
+-		if (cifs_sb->ctx->symlinkroot[symlinkroot_len-1] == '/')
++		symlinkroot_len = strlen(symroot);
++		if (symroot[symlinkroot_len - 1] == '/')
+ 			symlinkroot_len--;
+ 		linux_target = kmalloc(symlinkroot_len + 1 + abs_path_len, GFP_KERNEL);
+ 		if (!linux_target) {
+ 			rc = -ENOMEM;
+ 			goto out;
+ 		}
+-		memcpy(linux_target, cifs_sb->ctx->symlinkroot, symlinkroot_len);
++		memcpy(linux_target, symroot, symlinkroot_len);
+ 		linux_target[symlinkroot_len] = '/';
+ 		memcpy(linux_target + symlinkroot_len + 1, abs_path, abs_path_len);
+ 	} else if (smb_target[0] == sep && relative) {
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 72903265b17069..2c0cc544dfb31d 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -423,9 +423,9 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon,
+ 		free_xid(xid);
+ 		ses->flags &= ~CIFS_SES_FLAGS_PENDING_QUERY_INTERFACES;
+ 
+-		/* regardless of rc value, setup polling */
+-		queue_delayed_work(cifsiod_wq, &tcon->query_interfaces,
+-				   (SMB_INTERFACE_POLL_INTERVAL * HZ));
++		if (!tcon->ipc && !tcon->dummy)
++			queue_delayed_work(cifsiod_wq, &tcon->query_interfaces,
++					   (SMB_INTERFACE_POLL_INTERVAL * HZ));
+ 
+ 		mutex_unlock(&ses->session_mutex);
+ 
+@@ -4221,10 +4221,8 @@ void smb2_reconnect_server(struct work_struct *work)
+ 		}
+ 		goto done;
+ 	}
+-
+ 	tcon->status = TID_GOOD;
+-	tcon->retry = false;
+-	tcon->need_reconnect = false;
++	tcon->dummy = true;
+ 
+ 	/* now reconnect sessions for necessary channels */
+ 	list_for_each_entry_safe(ses, ses2, &tmp_ses_list, rlist) {
+@@ -4854,6 +4852,7 @@ smb2_writev_callback(struct mid_q_entry *mid)
+ 		break;
+ 	case MID_REQUEST_SUBMITTED:
+ 	case MID_RETRY_NEEDED:
++		__set_bit(NETFS_SREQ_NEED_RETRY, &wdata->subreq.flags);
+ 		result = -EAGAIN;
+ 		break;
+ 	case MID_RESPONSE_MALFORMED:
+diff --git a/fs/xfs/xfs_rtalloc.c b/fs/xfs/xfs_rtalloc.c
+index 6484c596eceaf2..736eb0924573d3 100644
+--- a/fs/xfs/xfs_rtalloc.c
++++ b/fs/xfs/xfs_rtalloc.c
+@@ -1259,6 +1259,8 @@ xfs_growfs_check_rtgeom(
+ 
+ 	kfree(nmp);
+ 
++	trace_xfs_growfs_check_rtgeom(mp, min_logfsbs);
++
+ 	if (min_logfsbs > mp->m_sb.sb_logblocks)
+ 		return -EINVAL;
+ 
+diff --git a/include/linux/arm_ffa.h b/include/linux/arm_ffa.h
+index 5bded24dc24fea..e1634897e159cd 100644
+--- a/include/linux/arm_ffa.h
++++ b/include/linux/arm_ffa.h
+@@ -283,6 +283,7 @@ struct ffa_indirect_msg_hdr {
+ 	u32 offset;
+ 	u32 send_recv_id;
+ 	u32 size;
++	u32 res1;
+ 	uuid_t uuid;
+ };
+ 
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 3aa955102b349a..1ee8e9e3037d0e 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -80,6 +80,7 @@ extern ssize_t cpu_show_reg_file_data_sampling(struct device *dev,
+ extern ssize_t cpu_show_ghostwrite(struct device *dev, struct device_attribute *attr, char *buf);
+ extern ssize_t cpu_show_indirect_target_selection(struct device *dev,
+ 						  struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf);
+ 
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
+diff --git a/include/linux/export.h b/include/linux/export.h
+index a8c23d945634b3..f35d03b4113b19 100644
+--- a/include/linux/export.h
++++ b/include/linux/export.h
+@@ -24,11 +24,17 @@
+ 	.long sym
+ #endif
+ 
+-#define ___EXPORT_SYMBOL(sym, license, ns)		\
++/*
++ * LLVM integrated assembler can merge adjacent string literals (like
++ * C and GNU-as) passed to '.ascii', but not to '.asciz' and chokes on:
++ *
++ *   .asciz "MODULE_" "kvm" ;
++ */
++#define ___EXPORT_SYMBOL(sym, license, ns...)		\
+ 	.section ".export_symbol","a"		ASM_NL	\
+ 	__export_symbol_##sym:			ASM_NL	\
+ 		.asciz license			ASM_NL	\
+-		.asciz ns			ASM_NL	\
++		.ascii ns "\0"			ASM_NL	\
+ 		__EXPORT_SYMBOL_REF(sym)	ASM_NL	\
+ 	.previous
+ 
+@@ -85,4 +91,6 @@
+ #define EXPORT_SYMBOL_NS(sym, ns)	__EXPORT_SYMBOL(sym, "", ns)
+ #define EXPORT_SYMBOL_NS_GPL(sym, ns)	__EXPORT_SYMBOL(sym, "GPL", ns)
+ 
++#define EXPORT_SYMBOL_GPL_FOR_MODULES(sym, mods) __EXPORT_SYMBOL(sym, "GPL", "module:" mods)
++
+ #endif /* _LINUX_EXPORT_H */
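
The new ___EXPORT_SYMBOL() relies on the assembler merging adjacent string literals so that "module:" mods becomes a single namespace string. The same concatenation rule in plain C, for illustration only (the macro below merely mirrors the shape of EXPORT_SYMBOL_GPL_FOR_MODULES):

#include <stdio.h>

/* Adjacent string literals merge into one, so "module:" mods becomes a
 * single string such as "module:kvm". */
#define NS_FOR_MODULES(mods) "module:" mods

int main(void)
{
	puts(NS_FOR_MODULES("kvm"));	/* prints module:kvm */
	return 0;
}
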
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index ef41a8213a6fa0..1fe7bf10d442c5 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -3552,6 +3552,8 @@ extern int simple_write_begin(struct file *file, struct address_space *mapping,
+ extern const struct address_space_operations ram_aops;
+ extern int always_delete_dentry(const struct dentry *);
+ extern struct inode *alloc_anon_inode(struct super_block *);
++struct inode *anon_inode_make_secure_inode(struct super_block *sb, const char *name,
++					   const struct inode *context_inode);
+ extern int simple_nosetlease(struct file *, int, struct file_lease **, void **);
+ extern const struct dentry_operations simple_dentry_operations;
+ 
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index e5695998acb027..ec3b0c9c2a8cff 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -1363,7 +1363,7 @@ int ata_acpi_stm(struct ata_port *ap, const struct ata_acpi_gtm *stm);
+ int ata_acpi_gtm(struct ata_port *ap, struct ata_acpi_gtm *stm);
+ unsigned int ata_acpi_gtm_xfermask(struct ata_device *dev,
+ 				   const struct ata_acpi_gtm *gtm);
+-int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm);
++int ata_acpi_cbl_pata_type(struct ata_port *ap);
+ #else
+ static inline const struct ata_acpi_gtm *ata_acpi_init_gtm(struct ata_port *ap)
+ {
+@@ -1388,10 +1388,9 @@ static inline unsigned int ata_acpi_gtm_xfermask(struct ata_device *dev,
+ 	return 0;
+ }
+ 
+-static inline int ata_acpi_cbl_80wire(struct ata_port *ap,
+-				      const struct ata_acpi_gtm *gtm)
++static inline int ata_acpi_cbl_pata_type(struct ata_port *ap)
+ {
+-	return 0;
++	return ATA_CBL_PATA40;
+ }
+ #endif
+ 
+diff --git a/include/linux/usb.h b/include/linux/usb.h
+index b46738701f8dc4..c63a7a9191ca7d 100644
+--- a/include/linux/usb.h
++++ b/include/linux/usb.h
+@@ -614,6 +614,7 @@ struct usb3_lpm_parameters {
+  *	FIXME -- complete doc
+  * @authenticated: Crypto authentication passed
+  * @tunnel_mode: Connection native or tunneled over USB4
++ * @usb4_link: device link to the USB4 host interface
+  * @lpm_capable: device supports LPM
+  * @lpm_devinit_allow: Allow USB3 device initiated LPM, exit latency is in range
+  * @usb2_hw_lpm_capable: device can perform USB2 hardware LPM
+@@ -724,6 +725,7 @@ struct usb_device {
+ 	unsigned reset_resume:1;
+ 	unsigned port_is_suspended:1;
+ 	enum usb_link_tunnel_mode tunnel_mode;
++	struct device_link *usb4_link;
+ 
+ 	int slot_id;
+ 	struct usb2_lpm_parameters l1_params;
+diff --git a/include/linux/usb/typec_dp.h b/include/linux/usb/typec_dp.h
+index f2da264d9c140c..acb0ad03bdacbd 100644
+--- a/include/linux/usb/typec_dp.h
++++ b/include/linux/usb/typec_dp.h
+@@ -57,6 +57,7 @@ enum {
+ 	DP_PIN_ASSIGN_D,
+ 	DP_PIN_ASSIGN_E,
+ 	DP_PIN_ASSIGN_F, /* Not supported after v1.0b */
++	DP_PIN_ASSIGN_MAX,
+ };
+ 
+ /* DisplayPort alt mode specific commands */
+diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
+index 4175eec40048ad..ecc1b852661e39 100644
+--- a/include/trace/events/netfs.h
++++ b/include/trace/events/netfs.h
+@@ -56,6 +56,7 @@
+ 	EM(netfs_rreq_trace_dirty,		"DIRTY  ")	\
+ 	EM(netfs_rreq_trace_done,		"DONE   ")	\
+ 	EM(netfs_rreq_trace_free,		"FREE   ")	\
++	EM(netfs_rreq_trace_recollect,		"RECLLCT")	\
+ 	EM(netfs_rreq_trace_redirty,		"REDIRTY")	\
+ 	EM(netfs_rreq_trace_resubmit,		"RESUBMT")	\
+ 	EM(netfs_rreq_trace_set_abandon,	"S-ABNDN")	\
+diff --git a/kernel/irq/irq_sim.c b/kernel/irq/irq_sim.c
+index 1a3d483548e2f3..ae4c9cbd1b4b9e 100644
+--- a/kernel/irq/irq_sim.c
++++ b/kernel/irq/irq_sim.c
+@@ -202,7 +202,7 @@ struct irq_domain *irq_domain_create_sim_full(struct fwnode_handle *fwnode,
+ 					      void *data)
+ {
+ 	struct irq_sim_work_ctx *work_ctx __free(kfree) =
+-				kmalloc(sizeof(*work_ctx), GFP_KERNEL);
++				kzalloc(sizeof(*work_ctx), GFP_KERNEL);
+ 
+ 	if (!work_ctx)
+ 		return ERR_PTR(-ENOMEM);
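
The irq_sim fix swaps kmalloc() for kzalloc() so later error paths never free or inspect an uninitialized pointer field. The same failure mode in miniature, with calloc() playing the kzalloc() role (the struct and field here are hypothetical):

#include <stdlib.h>

struct work_ctx {
	void *irq_buf;	/* released on the error path even if never set */
};

int main(void)
{
	/* calloc() zeroes the struct, so freeing ctx->irq_buf on an early
	 * error path sees NULL instead of heap garbage. */
	struct work_ctx *ctx = calloc(1, sizeof(*ctx));
	if (!ctx)
		return 1;

	free(ctx->irq_buf);	/* safe: free(NULL) is a no-op */
	free(ctx);
	return 0;
}
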
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 80b10893b5038d..37778d913f0115 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -3068,6 +3068,10 @@ __call_rcu_common(struct rcu_head *head, rcu_callback_t func, bool lazy_in)
+ 	/* Misaligned rcu_head! */
+ 	WARN_ON_ONCE((unsigned long)head & (sizeof(void *) - 1));
+ 
++	/* Avoid NULL dereference if callback is NULL. */
++	if (WARN_ON_ONCE(!func))
++		return;
++
+ 	if (debug_rcu_head_queue(head)) {
+ 		/*
+ 		 * Probable double call_rcu(), so leak the callback.
+diff --git a/lib/test_objagg.c b/lib/test_objagg.c
+index d34df4306b874f..222b39fc2629e2 100644
+--- a/lib/test_objagg.c
++++ b/lib/test_objagg.c
+@@ -899,8 +899,10 @@ static int check_expect_hints_stats(struct objagg_hints *objagg_hints,
+ 	int err;
+ 
+ 	stats = objagg_hints_stats_get(objagg_hints);
+-	if (IS_ERR(stats))
++	if (IS_ERR(stats)) {
++		*errmsg = "objagg_hints_stats_get() failed.";
+ 		return PTR_ERR(stats);
++	}
+ 	err = __check_expect_stats(stats, expect_stats, errmsg);
+ 	objagg_stats_put(stats);
+ 	return err;
+diff --git a/mm/secretmem.c b/mm/secretmem.c
+index 1b0a214ee5580e..4662f2510ae5f7 100644
+--- a/mm/secretmem.c
++++ b/mm/secretmem.c
+@@ -195,18 +195,11 @@ static struct file *secretmem_file_create(unsigned long flags)
+ 	struct file *file;
+ 	struct inode *inode;
+ 	const char *anon_name = "[secretmem]";
+-	int err;
+ 
+-	inode = alloc_anon_inode(secretmem_mnt->mnt_sb);
++	inode = anon_inode_make_secure_inode(secretmem_mnt->mnt_sb, anon_name, NULL);
+ 	if (IS_ERR(inode))
+ 		return ERR_CAST(inode);
+ 
+-	err = security_inode_init_security_anon(inode, &QSTR(anon_name), NULL);
+-	if (err) {
+-		file = ERR_PTR(err);
+-		goto err_free_inode;
+-	}
+-
+ 	file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
+ 				 O_RDWR, &secretmem_fops);
+ 	if (IS_ERR(file))
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 00cf1b575c8962..3fb534dcf14d93 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -3100,7 +3100,7 @@ static void clear_vm_uninitialized_flag(struct vm_struct *vm)
+ 	/*
+ 	 * Before removing VM_UNINITIALIZED,
+ 	 * we should make sure that vm has proper values.
+-	 * Pair with smp_rmb() in show_numa_info().
++	 * Pair with smp_rmb() in vread_iter() and vmalloc_info_show().
+ 	 */
+ 	smp_wmb();
+ 	vm->flags &= ~VM_UNINITIALIZED;
+@@ -4934,28 +4934,29 @@ bool vmalloc_dump_obj(void *object)
+ #endif
+ 
+ #ifdef CONFIG_PROC_FS
+-static void show_numa_info(struct seq_file *m, struct vm_struct *v)
+-{
+-	if (IS_ENABLED(CONFIG_NUMA)) {
+-		unsigned int nr, *counters = m->private;
+-		unsigned int step = 1U << vm_area_page_order(v);
+ 
+-		if (!counters)
+-			return;
++/*
++ * Print number of pages allocated on each memory node.
++ *
++ * This function can only be called if CONFIG_NUMA is enabled
++ * and the VM_UNINITIALIZED bit in v->flags is not set.
++ */
++static void show_numa_info(struct seq_file *m, struct vm_struct *v,
++				 unsigned int *counters)
++{
++	unsigned int nr;
++	unsigned int step = 1U << vm_area_page_order(v);
+ 
+-		if (v->flags & VM_UNINITIALIZED)
+-			return;
+-		/* Pair with smp_wmb() in clear_vm_uninitialized_flag() */
+-		smp_rmb();
++	if (!counters)
++		return;
+ 
+-		memset(counters, 0, nr_node_ids * sizeof(unsigned int));
++	memset(counters, 0, nr_node_ids * sizeof(unsigned int));
+ 
+-		for (nr = 0; nr < v->nr_pages; nr += step)
+-			counters[page_to_nid(v->pages[nr])] += step;
+-		for_each_node_state(nr, N_HIGH_MEMORY)
+-			if (counters[nr])
+-				seq_printf(m, " N%u=%u", nr, counters[nr]);
+-	}
++	for (nr = 0; nr < v->nr_pages; nr += step)
++		counters[page_to_nid(v->pages[nr])] += step;
++	for_each_node_state(nr, N_HIGH_MEMORY)
++		if (counters[nr])
++			seq_printf(m, " N%u=%u", nr, counters[nr]);
+ }
+ 
+ static void show_purge_info(struct seq_file *m)
+@@ -4983,6 +4984,10 @@ static int vmalloc_info_show(struct seq_file *m, void *p)
+ 	struct vmap_area *va;
+ 	struct vm_struct *v;
+ 	int i;
++	unsigned int *counters;
++
++	if (IS_ENABLED(CONFIG_NUMA))
++		counters = kmalloc(nr_node_ids * sizeof(unsigned int), GFP_KERNEL);
+ 
+ 	for (i = 0; i < nr_vmap_nodes; i++) {
+ 		vn = &vmap_nodes[i];
+@@ -4999,6 +5004,11 @@ static int vmalloc_info_show(struct seq_file *m, void *p)
+ 			}
+ 
+ 			v = va->vm;
++			if (v->flags & VM_UNINITIALIZED)
++				continue;
++
++			/* Pair with smp_wmb() in clear_vm_uninitialized_flag() */
++			smp_rmb();
+ 
+ 			seq_printf(m, "0x%pK-0x%pK %7ld",
+ 				v->addr, v->addr + v->size, v->size);
+@@ -5033,7 +5043,9 @@ static int vmalloc_info_show(struct seq_file *m, void *p)
+ 			if (is_vmalloc_addr(v->pages))
+ 				seq_puts(m, " vpages");
+ 
+-			show_numa_info(m, v);
++			if (IS_ENABLED(CONFIG_NUMA))
++				show_numa_info(m, v, counters);
++
+ 			seq_putc(m, '\n');
+ 		}
+ 		spin_unlock(&vn->busy.lock);
+@@ -5043,19 +5055,14 @@ static int vmalloc_info_show(struct seq_file *m, void *p)
+ 	 * As a final step, dump "unpurged" areas.
+ 	 */
+ 	show_purge_info(m);
++	if (IS_ENABLED(CONFIG_NUMA))
++		kfree(counters);
+ 	return 0;
+ }
+ 
+ static int __init proc_vmalloc_init(void)
+ {
+-	void *priv_data = NULL;
+-
+-	if (IS_ENABLED(CONFIG_NUMA))
+-		priv_data = kmalloc(nr_node_ids * sizeof(unsigned int), GFP_KERNEL);
+-
+-	proc_create_single_data("vmallocinfo",
+-		0400, NULL, vmalloc_info_show, priv_data);
+-
++	proc_create_single("vmallocinfo", 0400, NULL, vmalloc_info_show);
+ 	return 0;
+ }
+ module_init(proc_vmalloc_init);
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 66052d6aaa1d50..4d5ace9d245d9c 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -2150,40 +2150,6 @@ static u8 hci_cc_set_adv_param(struct hci_dev *hdev, void *data,
+ 	return rp->status;
+ }
+ 
+-static u8 hci_cc_set_ext_adv_param(struct hci_dev *hdev, void *data,
+-				   struct sk_buff *skb)
+-{
+-	struct hci_rp_le_set_ext_adv_params *rp = data;
+-	struct hci_cp_le_set_ext_adv_params *cp;
+-	struct adv_info *adv_instance;
+-
+-	bt_dev_dbg(hdev, "status 0x%2.2x", rp->status);
+-
+-	if (rp->status)
+-		return rp->status;
+-
+-	cp = hci_sent_cmd_data(hdev, HCI_OP_LE_SET_EXT_ADV_PARAMS);
+-	if (!cp)
+-		return rp->status;
+-
+-	hci_dev_lock(hdev);
+-	hdev->adv_addr_type = cp->own_addr_type;
+-	if (!cp->handle) {
+-		/* Store in hdev for instance 0 */
+-		hdev->adv_tx_power = rp->tx_power;
+-	} else {
+-		adv_instance = hci_find_adv_instance(hdev, cp->handle);
+-		if (adv_instance)
+-			adv_instance->tx_power = rp->tx_power;
+-	}
+-	/* Update adv data as tx power is known now */
+-	hci_update_adv_data(hdev, cp->handle);
+-
+-	hci_dev_unlock(hdev);
+-
+-	return rp->status;
+-}
+-
+ static u8 hci_cc_read_rssi(struct hci_dev *hdev, void *data,
+ 			   struct sk_buff *skb)
+ {
+@@ -4164,8 +4130,6 @@ static const struct hci_cc {
+ 	HCI_CC(HCI_OP_LE_READ_NUM_SUPPORTED_ADV_SETS,
+ 	       hci_cc_le_read_num_adv_sets,
+ 	       sizeof(struct hci_rp_le_read_num_supported_adv_sets)),
+-	HCI_CC(HCI_OP_LE_SET_EXT_ADV_PARAMS, hci_cc_set_ext_adv_param,
+-	       sizeof(struct hci_rp_le_set_ext_adv_params)),
+ 	HCI_CC_STATUS(HCI_OP_LE_SET_EXT_ADV_ENABLE,
+ 		      hci_cc_le_set_ext_adv_enable),
+ 	HCI_CC_STATUS(HCI_OP_LE_SET_ADV_SET_RAND_ADDR,
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 83de3847c8eaf7..9955d6cd7b76f5 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -1205,9 +1205,126 @@ static int hci_set_adv_set_random_addr_sync(struct hci_dev *hdev, u8 instance,
+ 				     sizeof(cp), &cp, HCI_CMD_TIMEOUT);
+ }
+ 
++static int
++hci_set_ext_adv_params_sync(struct hci_dev *hdev, struct adv_info *adv,
++			    const struct hci_cp_le_set_ext_adv_params *cp,
++			    struct hci_rp_le_set_ext_adv_params *rp)
++{
++	struct sk_buff *skb;
++
++	skb = __hci_cmd_sync(hdev, HCI_OP_LE_SET_EXT_ADV_PARAMS, sizeof(*cp),
++			     cp, HCI_CMD_TIMEOUT);
++
++	/* If the command returns a status event, skb will be set to -ENODATA */
++	if (skb == ERR_PTR(-ENODATA))
++		return 0;
++
++	if (IS_ERR(skb)) {
++		bt_dev_err(hdev, "Opcode 0x%4.4x failed: %ld",
++			   HCI_OP_LE_SET_EXT_ADV_PARAMS, PTR_ERR(skb));
++		return PTR_ERR(skb);
++	}
++
++	if (skb->len != sizeof(*rp)) {
++		bt_dev_err(hdev, "Invalid response length for 0x%4.4x: %u",
++			   HCI_OP_LE_SET_EXT_ADV_PARAMS, skb->len);
++		kfree_skb(skb);
++		return -EIO;
++	}
++
++	memcpy(rp, skb->data, sizeof(*rp));
++	kfree_skb(skb);
++
++	if (!rp->status) {
++		hdev->adv_addr_type = cp->own_addr_type;
++		if (!cp->handle) {
++			/* Store in hdev for instance 0 */
++			hdev->adv_tx_power = rp->tx_power;
++		} else if (adv) {
++			adv->tx_power = rp->tx_power;
++		}
++	}
++
++	return rp->status;
++}
++
++static int hci_set_ext_adv_data_sync(struct hci_dev *hdev, u8 instance)
++{
++	DEFINE_FLEX(struct hci_cp_le_set_ext_adv_data, pdu, data, length,
++		    HCI_MAX_EXT_AD_LENGTH);
++	u8 len;
++	struct adv_info *adv = NULL;
++	int err;
++
++	if (instance) {
++		adv = hci_find_adv_instance(hdev, instance);
++		if (!adv || !adv->adv_data_changed)
++			return 0;
++	}
++
++	len = eir_create_adv_data(hdev, instance, pdu->data,
++				  HCI_MAX_EXT_AD_LENGTH);
++
++	pdu->length = len;
++	pdu->handle = adv ? adv->handle : instance;
++	pdu->operation = LE_SET_ADV_DATA_OP_COMPLETE;
++	pdu->frag_pref = LE_SET_ADV_DATA_NO_FRAG;
++
++	err = __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_EXT_ADV_DATA,
++				    struct_size(pdu, data, len), pdu,
++				    HCI_CMD_TIMEOUT);
++	if (err)
++		return err;
++
++	/* Update data if the command succeeds */
++	if (adv) {
++		adv->adv_data_changed = false;
++	} else {
++		memcpy(hdev->adv_data, pdu->data, len);
++		hdev->adv_data_len = len;
++	}
++
++	return 0;
++}
++
++static int hci_set_adv_data_sync(struct hci_dev *hdev, u8 instance)
++{
++	struct hci_cp_le_set_adv_data cp;
++	u8 len;
++
++	memset(&cp, 0, sizeof(cp));
++
++	len = eir_create_adv_data(hdev, instance, cp.data, sizeof(cp.data));
++
++	/* There's nothing to do if the data hasn't changed */
++	if (hdev->adv_data_len == len &&
++	    memcmp(cp.data, hdev->adv_data, len) == 0)
++		return 0;
++
++	memcpy(hdev->adv_data, cp.data, sizeof(cp.data));
++	hdev->adv_data_len = len;
++
++	cp.length = len;
++
++	return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_ADV_DATA,
++				     sizeof(cp), &cp, HCI_CMD_TIMEOUT);
++}
++
++int hci_update_adv_data_sync(struct hci_dev *hdev, u8 instance)
++{
++	if (!hci_dev_test_flag(hdev, HCI_LE_ENABLED))
++		return 0;
++
++	if (ext_adv_capable(hdev))
++		return hci_set_ext_adv_data_sync(hdev, instance);
++
++	return hci_set_adv_data_sync(hdev, instance);
++}
++
+ int hci_setup_ext_adv_instance_sync(struct hci_dev *hdev, u8 instance)
+ {
+ 	struct hci_cp_le_set_ext_adv_params cp;
++	struct hci_rp_le_set_ext_adv_params rp;
+ 	bool connectable;
+ 	u32 flags;
+ 	bdaddr_t random_addr;
+@@ -1314,8 +1431,12 @@ int hci_setup_ext_adv_instance_sync(struct hci_dev *hdev, u8 instance)
+ 		cp.secondary_phy = HCI_ADV_PHY_1M;
+ 	}
+ 
+-	err = __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_EXT_ADV_PARAMS,
+-				    sizeof(cp), &cp, HCI_CMD_TIMEOUT);
++	err = hci_set_ext_adv_params_sync(hdev, adv, &cp, &rp);
++	if (err)
++		return err;
++
++	/* Update adv data as tx power is known now */
++	err = hci_set_ext_adv_data_sync(hdev, cp.handle);
+ 	if (err)
+ 		return err;
+ 
+@@ -1808,79 +1929,6 @@ int hci_le_terminate_big_sync(struct hci_dev *hdev, u8 handle, u8 reason)
+ 				     sizeof(cp), &cp, HCI_CMD_TIMEOUT);
+ }
+ 
+-static int hci_set_ext_adv_data_sync(struct hci_dev *hdev, u8 instance)
+-{
+-	DEFINE_FLEX(struct hci_cp_le_set_ext_adv_data, pdu, data, length,
+-		    HCI_MAX_EXT_AD_LENGTH);
+-	u8 len;
+-	struct adv_info *adv = NULL;
+-	int err;
+-
+-	if (instance) {
+-		adv = hci_find_adv_instance(hdev, instance);
+-		if (!adv || !adv->adv_data_changed)
+-			return 0;
+-	}
+-
+-	len = eir_create_adv_data(hdev, instance, pdu->data,
+-				  HCI_MAX_EXT_AD_LENGTH);
+-
+-	pdu->length = len;
+-	pdu->handle = adv ? adv->handle : instance;
+-	pdu->operation = LE_SET_ADV_DATA_OP_COMPLETE;
+-	pdu->frag_pref = LE_SET_ADV_DATA_NO_FRAG;
+-
+-	err = __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_EXT_ADV_DATA,
+-				    struct_size(pdu, data, len), pdu,
+-				    HCI_CMD_TIMEOUT);
+-	if (err)
+-		return err;
+-
+-	/* Update data if the command succeed */
+-	if (adv) {
+-		adv->adv_data_changed = false;
+-	} else {
+-		memcpy(hdev->adv_data, pdu->data, len);
+-		hdev->adv_data_len = len;
+-	}
+-
+-	return 0;
+-}
+-
+-static int hci_set_adv_data_sync(struct hci_dev *hdev, u8 instance)
+-{
+-	struct hci_cp_le_set_adv_data cp;
+-	u8 len;
+-
+-	memset(&cp, 0, sizeof(cp));
+-
+-	len = eir_create_adv_data(hdev, instance, cp.data, sizeof(cp.data));
+-
+-	/* There's nothing to do if the data hasn't changed */
+-	if (hdev->adv_data_len == len &&
+-	    memcmp(cp.data, hdev->adv_data, len) == 0)
+-		return 0;
+-
+-	memcpy(hdev->adv_data, cp.data, sizeof(cp.data));
+-	hdev->adv_data_len = len;
+-
+-	cp.length = len;
+-
+-	return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_ADV_DATA,
+-				     sizeof(cp), &cp, HCI_CMD_TIMEOUT);
+-}
+-
+-int hci_update_adv_data_sync(struct hci_dev *hdev, u8 instance)
+-{
+-	if (!hci_dev_test_flag(hdev, HCI_LE_ENABLED))
+-		return 0;
+-
+-	if (ext_adv_capable(hdev))
+-		return hci_set_ext_adv_data_sync(hdev, instance);
+-
+-	return hci_set_adv_data_sync(hdev, instance);
+-}
+-
+ int hci_schedule_adv_instance_sync(struct hci_dev *hdev, u8 instance,
+ 				   bool force)
+ {
+@@ -1956,13 +2004,10 @@ static int hci_clear_adv_sets_sync(struct hci_dev *hdev, struct sock *sk)
+ static int hci_clear_adv_sync(struct hci_dev *hdev, struct sock *sk, bool force)
+ {
+ 	struct adv_info *adv, *n;
+-	int err = 0;
+ 
+ 	if (ext_adv_capable(hdev))
+ 		/* Remove all existing sets */
+-		err = hci_clear_adv_sets_sync(hdev, sk);
+-	if (ext_adv_capable(hdev))
+-		return err;
++		return hci_clear_adv_sets_sync(hdev, sk);
+ 
+ 	/* This is safe as long as there is no command send while the lock is
+ 	 * held.
+@@ -1990,13 +2035,11 @@ static int hci_clear_adv_sync(struct hci_dev *hdev, struct sock *sk, bool force)
+ static int hci_remove_adv_sync(struct hci_dev *hdev, u8 instance,
+ 			       struct sock *sk)
+ {
+-	int err = 0;
++	int err;
+ 
+ 	/* If we use extended advertising, instance has to be removed first. */
+ 	if (ext_adv_capable(hdev))
+-		err = hci_remove_ext_adv_instance_sync(hdev, instance, sk);
+-	if (ext_adv_capable(hdev))
+-		return err;
++		return hci_remove_ext_adv_instance_sync(hdev, instance, sk);
+ 
+ 	/* This is safe as long as there is no command send while the lock is
+ 	 * held.
+@@ -2095,16 +2138,13 @@ int hci_read_tx_power_sync(struct hci_dev *hdev, __le16 handle, u8 type)
+ int hci_disable_advertising_sync(struct hci_dev *hdev)
+ {
+ 	u8 enable = 0x00;
+-	int err = 0;
+ 
+ 	/* If controller is not advertising we are done. */
+ 	if (!hci_dev_test_flag(hdev, HCI_LE_ADV))
+ 		return 0;
+ 
+ 	if (ext_adv_capable(hdev))
+-		err = hci_disable_ext_adv_instance_sync(hdev, 0x00);
+-	if (ext_adv_capable(hdev))
+-		return err;
++		return hci_disable_ext_adv_instance_sync(hdev, 0x00);
+ 
+ 	return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_ADV_ENABLE,
+ 				     sizeof(enable), &enable, HCI_CMD_TIMEOUT);
+@@ -2467,6 +2507,10 @@ static int hci_pause_advertising_sync(struct hci_dev *hdev)
+ 	int err;
+ 	int old_state;
+ 
++	/* If controller is not advertising we are done. */
++	if (!hci_dev_test_flag(hdev, HCI_LE_ADV))
++		return 0;
++
+ 	/* If already been paused there is nothing to do. */
+ 	if (hdev->advertising_paused)
+ 		return 0;
+@@ -6263,6 +6307,7 @@ static int hci_le_ext_directed_advertising_sync(struct hci_dev *hdev,
+ 						struct hci_conn *conn)
+ {
+ 	struct hci_cp_le_set_ext_adv_params cp;
++	struct hci_rp_le_set_ext_adv_params rp;
+ 	int err;
+ 	bdaddr_t random_addr;
+ 	u8 own_addr_type;
+@@ -6304,8 +6349,12 @@ static int hci_le_ext_directed_advertising_sync(struct hci_dev *hdev,
+ 	if (err)
+ 		return err;
+ 
+-	err = __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_EXT_ADV_PARAMS,
+-				    sizeof(cp), &cp, HCI_CMD_TIMEOUT);
++	err = hci_set_ext_adv_params_sync(hdev, NULL, &cp, &rp);
++	if (err)
++		return err;
++
++	/* Update adv data as tx power is known now */
++	err = hci_set_ext_adv_data_sync(hdev, cp.handle);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index d540f7b4f75fbf..1485b455ade464 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -1080,7 +1080,8 @@ static int mesh_send_done_sync(struct hci_dev *hdev, void *data)
+ 	struct mgmt_mesh_tx *mesh_tx;
+ 
+ 	hci_dev_clear_flag(hdev, HCI_MESH_SENDING);
+-	hci_disable_advertising_sync(hdev);
++	if (list_empty(&hdev->adv_instances))
++		hci_disable_advertising_sync(hdev);
+ 	mesh_tx = mgmt_mesh_next(hdev, NULL);
+ 
+ 	if (mesh_tx)
+@@ -2153,6 +2154,9 @@ static int set_mesh_sync(struct hci_dev *hdev, void *data)
+ 	else
+ 		hci_dev_clear_flag(hdev, HCI_MESH);
+ 
++	hdev->le_scan_interval = __le16_to_cpu(cp->period);
++	hdev->le_scan_window = __le16_to_cpu(cp->window);
++
+ 	len -= sizeof(*cp);
+ 
+ 	/* If filters don't fit, forward all adv pkts */
+@@ -2167,6 +2171,7 @@ static int set_mesh(struct sock *sk, struct hci_dev *hdev, void *data, u16 len)
+ {
+ 	struct mgmt_cp_set_mesh *cp = data;
+ 	struct mgmt_pending_cmd *cmd;
++	__u16 period, window;
+ 	int err = 0;
+ 
+ 	bt_dev_dbg(hdev, "sock %p", sk);
+@@ -2180,6 +2185,23 @@ static int set_mesh(struct sock *sk, struct hci_dev *hdev, void *data, u16 len)
+ 		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER,
+ 				       MGMT_STATUS_INVALID_PARAMS);
+ 
++	/* Keep allowed ranges in sync with set_scan_params() */
++	period = __le16_to_cpu(cp->period);
++
++	if (period < 0x0004 || period > 0x4000)
++		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER,
++				       MGMT_STATUS_INVALID_PARAMS);
++
++	window = __le16_to_cpu(cp->window);
++
++	if (window < 0x0004 || window > 0x4000)
++		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER,
++				       MGMT_STATUS_INVALID_PARAMS);
++
++	if (window > period)
++		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER,
++				       MGMT_STATUS_INVALID_PARAMS);
++
+ 	hci_dev_lock(hdev);
+ 
+ 	cmd = mgmt_pending_add(sk, MGMT_OP_SET_MESH_RECEIVER, hdev, data, len);
+@@ -6432,6 +6454,7 @@ static int set_scan_params(struct sock *sk, struct hci_dev *hdev,
+ 		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_SCAN_PARAMS,
+ 				       MGMT_STATUS_NOT_SUPPORTED);
+ 
++	/* Keep allowed ranges in sync with set_mesh() */
+ 	interval = __le16_to_cpu(cp->interval);
+ 
+ 	if (interval < 0x0004 || interval > 0x4000)
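
The comments added in the two hunks above keep set_mesh() and
set_scan_params() pointing at each other because they now duplicate the
same range check. A minimal sketch of what a shared helper could look
like, assuming a hypothetical name (this helper is not part of the patch):

	/* Values are in 0.625 ms units; 0x0004..0x4000 per the Core spec. */
	static bool le_scan_param_valid(u16 window, u16 period)
	{
		if (period < 0x0004 || period > 0x4000)
			return false;
		if (window < 0x0004 || window > 0x4000)
			return false;
		/* The scan window must not exceed the scan interval. */
		return window <= period;
	}
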
+diff --git a/net/ipv4/ip_input.c b/net/ipv4/ip_input.c
+index 30a5e9460d006d..5a49eb99e5c480 100644
+--- a/net/ipv4/ip_input.c
++++ b/net/ipv4/ip_input.c
+@@ -319,8 +319,8 @@ static int ip_rcv_finish_core(struct net *net,
+ 			      const struct sk_buff *hint)
+ {
+ 	const struct iphdr *iph = ip_hdr(skb);
+-	int err, drop_reason;
+ 	struct rtable *rt;
++	int drop_reason;
+ 
+ 	if (ip_can_use_hint(skb, iph, hint)) {
+ 		drop_reason = ip_route_use_hint(skb, iph->daddr, iph->saddr,
+@@ -345,9 +345,10 @@ static int ip_rcv_finish_core(struct net *net,
+ 			break;
+ 		case IPPROTO_UDP:
+ 			if (READ_ONCE(net->ipv4.sysctl_udp_early_demux)) {
+-				err = udp_v4_early_demux(skb);
+-				if (unlikely(err))
++				drop_reason = udp_v4_early_demux(skb);
++				if (unlikely(drop_reason))
+ 					goto drop_error;
++				drop_reason = SKB_DROP_REASON_NOT_SPECIFIED;
+ 
+ 				/* must reload iph, skb->head might have changed */
+ 				iph = ip_hdr(skb);
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 09beb65d6108b4..e73431549ce77e 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -4432,6 +4432,10 @@ static bool ieee80211_accept_frame(struct ieee80211_rx_data *rx)
+ 		if (!multicast &&
+ 		    !ether_addr_equal(sdata->dev->dev_addr, hdr->addr1))
+ 			return false;
++		/* reject invalid/our STA address */
++		if (!is_valid_ether_addr(hdr->addr2) ||
++		    ether_addr_equal(sdata->dev->dev_addr, hdr->addr2))
++			return false;
+ 		if (!rx->sta) {
+ 			int rate_idx;
+ 			if (status->encoding != RX_ENC_LEGACY)
+diff --git a/net/rose/rose_route.c b/net/rose/rose_route.c
+index 2dd6bd3a3011f5..b72bf8a08d489f 100644
+--- a/net/rose/rose_route.c
++++ b/net/rose/rose_route.c
+@@ -497,22 +497,15 @@ void rose_rt_device_down(struct net_device *dev)
+ 			t         = rose_node;
+ 			rose_node = rose_node->next;
+ 
+-			for (i = 0; i < t->count; i++) {
++			for (i = t->count - 1; i >= 0; i--) {
+ 				if (t->neighbour[i] != s)
+ 					continue;
+ 
+ 				t->count--;
+ 
+-				switch (i) {
+-				case 0:
+-					t->neighbour[0] = t->neighbour[1];
+-					fallthrough;
+-				case 1:
+-					t->neighbour[1] = t->neighbour[2];
+-					break;
+-				case 2:
+-					break;
+-				}
++				memmove(&t->neighbour[i], &t->neighbour[i + 1],
++					sizeof(t->neighbour[0]) *
++						(t->count - i));
+ 			}
+ 
+ 			if (t->count <= 0)
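
The memmove() introduced above is the generic idiom for deleting entry i
from a count-sized array, and the loop now runs downward so the element
shifted into slot i is not skipped on the next iteration. A standalone
sketch of the same idiom (illustrative types only):

	#include <string.h>

	/* Remove element i from arr[0..*count), preserving order. */
	static void array_remove(int *arr, int *count, int i)
	{
		(*count)--;
		memmove(&arr[i], &arr[i + 1], sizeof(arr[0]) * (*count - i));
	}
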
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index f74a097f54ae76..d58921ffcf35e7 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -779,15 +779,12 @@ static u32 qdisc_alloc_handle(struct net_device *dev)
+ 
+ void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len)
+ {
+-	bool qdisc_is_offloaded = sch->flags & TCQ_F_OFFLOADED;
+ 	const struct Qdisc_class_ops *cops;
+ 	unsigned long cl;
+ 	u32 parentid;
+ 	bool notify;
+ 	int drops;
+ 
+-	if (n == 0 && len == 0)
+-		return;
+ 	drops = max_t(int, n, 0);
+ 	rcu_read_lock();
+ 	while ((parentid = sch->parent)) {
+@@ -796,17 +793,8 @@ void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len)
+ 
+ 		if (sch->flags & TCQ_F_NOPARENT)
+ 			break;
+-		/* Notify parent qdisc only if child qdisc becomes empty.
+-		 *
+-		 * If child was empty even before update then backlog
+-		 * counter is screwed and we skip notification because
+-		 * parent class is already passive.
+-		 *
+-		 * If the original child was offloaded then it is allowed
+-		 * to be seem as empty, so the parent is notified anyway.
+-		 */
+-		notify = !sch->q.qlen && !WARN_ON_ONCE(!n &&
+-						       !qdisc_is_offloaded);
++		/* Notify parent qdisc only if child qdisc becomes empty. */
++		notify = !sch->q.qlen;
+ 		/* TODO: perform the search on a per txq basis */
+ 		sch = qdisc_lookup_rcu(qdisc_dev(sch), TC_H_MAJ(parentid));
+ 		if (sch == NULL) {
+@@ -815,6 +803,9 @@ void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len)
+ 		}
+ 		cops = sch->ops->cl_ops;
+ 		if (notify && cops->qlen_notify) {
++			/* Note that qlen_notify must be idempotent as it may get called
++			 * multiple times.
++			 */
+ 			cl = cops->find(sch, parentid);
+ 			cops->qlen_notify(sch, cl);
+ 		}
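
The new comment above is the contract that makes the simplification
safe: a parent may now be notified more than once for the same child,
so every qlen_notify() implementation has to tolerate repeated calls.
A hedged sketch of an idempotent implementation (names are illustrative,
not taken from this patch):

	struct example_class {
		struct list_head alist;	/* linkage on the parent's active list */
	};

	static void example_qlen_notify(struct Qdisc *sch, unsigned long arg)
	{
		struct example_class *cl = (struct example_class *)arg;

		/* Only unlink if still linked; a second call is a no-op. */
		if (!list_empty(&cl->alist))
			list_del_init(&cl->alist);
	}
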
+diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
+index b370070194fa4a..7eccd6708d6649 100644
+--- a/net/vmw_vsock/vmci_transport.c
++++ b/net/vmw_vsock/vmci_transport.c
+@@ -119,6 +119,8 @@ vmci_transport_packet_init(struct vmci_transport_packet *pkt,
+ 			   u16 proto,
+ 			   struct vmci_handle handle)
+ {
++	memset(pkt, 0, sizeof(*pkt));
++
+ 	/* We register the stream control handler as an any cid handle so we
+ 	 * must always send from a source address of VMADDR_CID_ANY
+ 	 */
+@@ -131,8 +133,6 @@ vmci_transport_packet_init(struct vmci_transport_packet *pkt,
+ 	pkt->type = type;
+ 	pkt->src_port = src->svm_port;
+ 	pkt->dst_port = dst->svm_port;
+-	memset(&pkt->proto, 0, sizeof(pkt->proto));
+-	memset(&pkt->_reserved2, 0, sizeof(pkt->_reserved2));
+ 
+ 	switch (pkt->type) {
+ 	case VMCI_TRANSPORT_PACKET_TYPE_INVALID:
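
The hunks above replace two field-wise memsets with a single memset()
of the whole packet before any field is written. Besides covering the
reserved fields, this also clears compiler-inserted padding, which
per-field memsets cannot reach. A minimal illustration with a
hypothetical struct:

	#include <stdint.h>
	#include <string.h>

	struct example_pkt {
		uint32_t type;
		uint16_t src_port;	/* the compiler may pad after this */
		uint64_t payload;
	};

	static void example_pkt_init(struct example_pkt *pkt, uint32_t type)
	{
		memset(pkt, 0, sizeof(*pkt));	/* fields and padding alike */
		pkt->type = type;
	}
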
+diff --git a/sound/isa/sb/sb16_main.c b/sound/isa/sb/sb16_main.c
+index 74db115250030e..5a083eecaa6b99 100644
+--- a/sound/isa/sb/sb16_main.c
++++ b/sound/isa/sb/sb16_main.c
+@@ -703,6 +703,9 @@ static int snd_sb16_dma_control_put(struct snd_kcontrol *kcontrol, struct snd_ct
+ 	unsigned char nval, oval;
+ 	int change;
+ 	
++	if (chip->mode & (SB_MODE_PLAYBACK | SB_MODE_CAPTURE))
++		return -EBUSY;
++
+ 	nval = ucontrol->value.enumerated.item[0];
+ 	if (nval > 2)
+ 		return -EINVAL;
+@@ -711,6 +714,10 @@ static int snd_sb16_dma_control_put(struct snd_kcontrol *kcontrol, struct snd_ct
+ 	change = nval != oval;
+ 	snd_sb16_set_dma_mode(chip, nval);
+ 	spin_unlock_irqrestore(&chip->reg_lock, flags);
++	if (change) {
++		snd_dma_disable(chip->dma8);
++		snd_dma_disable(chip->dma16);
++	}
+ 	return change;
+ }
+ 
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index b27966f82c8b65..723cb7bc128516 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -451,6 +451,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Bravo 17 D7VEK"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Micro-Star International Co., Ltd."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Bravo 17 D7VF"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+@@ -514,6 +521,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "OMEN by HP Gaming Laptop 16z-n000"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "HP"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Victus by HP Gaming Laptop 15-fb2xxx"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c
+index 1a8e85afe9aa51..e61218c0537f22 100644
+--- a/tools/testing/selftests/iommu/iommufd.c
++++ b/tools/testing/selftests/iommu/iommufd.c
+@@ -54,6 +54,8 @@ static __attribute__((constructor)) void setup_sizes(void)
+ 
+ 	mfd_buffer = memfd_mmap(BUFFER_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
+ 				&mfd);
++	assert(mfd_buffer != MAP_FAILED);
++	assert(mfd > 0);
+ }
+ 
+ FIXTURE(iommufd)
+@@ -2008,6 +2010,7 @@ FIXTURE_VARIANT(iommufd_dirty_tracking)
+ 
+ FIXTURE_SETUP(iommufd_dirty_tracking)
+ {
++	size_t mmap_buffer_size;
+ 	unsigned long size;
+ 	int mmap_flags;
+ 	void *vrc;
+@@ -2022,22 +2025,33 @@ FIXTURE_SETUP(iommufd_dirty_tracking)
+ 	self->fd = open("/dev/iommu", O_RDWR);
+ 	ASSERT_NE(-1, self->fd);
+ 
+-	rc = posix_memalign(&self->buffer, HUGEPAGE_SIZE, variant->buffer_size);
+-	if (rc || !self->buffer) {
+-		SKIP(return, "Skipping buffer_size=%lu due to errno=%d",
+-			   variant->buffer_size, rc);
+-	}
+-
+ 	mmap_flags = MAP_SHARED | MAP_ANONYMOUS | MAP_FIXED;
++	mmap_buffer_size = variant->buffer_size;
+ 	if (variant->hugepages) {
+ 		/*
+ 		 * MAP_POPULATE will cause the kernel to fail mmap if THPs are
+ 		 * not available.
+ 		 */
+ 		mmap_flags |= MAP_HUGETLB | MAP_POPULATE;
++
++		/*
++		 * Allocation must be aligned to the HUGEPAGE_SIZE, because the
++		 * following mmap() will automatically align the length to be a
++		 * multiple of the underlying huge page size. Failing to do the
++		 * same at this allocation will result in a memory overwrite by
++		 * the mmap().
++		 */
++		if (mmap_buffer_size < HUGEPAGE_SIZE)
++			mmap_buffer_size = HUGEPAGE_SIZE;
++	}
++
++	rc = posix_memalign(&self->buffer, HUGEPAGE_SIZE, mmap_buffer_size);
++	if (rc || !self->buffer) {
++		SKIP(return, "Skipping buffer_size=%lu due to errno=%d",
++			   mmap_buffer_size, rc);
+ 	}
+ 	assert((uintptr_t)self->buffer % HUGEPAGE_SIZE == 0);
+-	vrc = mmap(self->buffer, variant->buffer_size, PROT_READ | PROT_WRITE,
++	vrc = mmap(self->buffer, mmap_buffer_size, PROT_READ | PROT_WRITE,
+ 		   mmap_flags, -1, 0);
+ 	assert(vrc == self->buffer);
+ 
+@@ -2066,8 +2080,8 @@ FIXTURE_SETUP(iommufd_dirty_tracking)
+ 
+ FIXTURE_TEARDOWN(iommufd_dirty_tracking)
+ {
+-	munmap(self->buffer, variant->buffer_size);
+-	munmap(self->bitmap, DIV_ROUND_UP(self->bitmap_size, BITS_PER_BYTE));
++	free(self->buffer);
++	free(self->bitmap);
+ 	teardown_iommufd(self->fd, _metadata);
+ }
+ 
+diff --git a/tools/testing/selftests/iommu/iommufd_utils.h b/tools/testing/selftests/iommu/iommufd_utils.h
+index 72f6636e5d9099..6e967b58acfd34 100644
+--- a/tools/testing/selftests/iommu/iommufd_utils.h
++++ b/tools/testing/selftests/iommu/iommufd_utils.h
+@@ -60,13 +60,18 @@ static inline void *memfd_mmap(size_t length, int prot, int flags, int *mfd_p)
+ {
+ 	int mfd_flags = (flags & MAP_HUGETLB) ? MFD_HUGETLB : 0;
+ 	int mfd = memfd_create("buffer", mfd_flags);
++	void *buf = MAP_FAILED;
+ 
+ 	if (mfd <= 0)
+ 		return MAP_FAILED;
+ 	if (ftruncate(mfd, length))
+-		return MAP_FAILED;
++		goto out;
+ 	*mfd_p = mfd;
+-	return mmap(0, length, prot, flags, mfd, 0);
++	buf = mmap(0, length, prot, flags, mfd, 0);
++out:
++	if (buf == MAP_FAILED)
++		close(mfd);
++	return buf;
+ }
+ 
+ /*
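
With this change, memfd_mmap() no longer leaks the memfd on any failure
path, and MAP_FAILED stays the single error value callers must check,
which is exactly what the new asserts in setup_sizes() rely on. A brief
usage sketch (BUFFER_SIZE as used elsewhere in the selftest):

	int mfd = -1;
	void *buf = memfd_mmap(BUFFER_SIZE, PROT_READ | PROT_WRITE,
			       MAP_SHARED, &mfd);

	assert(buf != MAP_FAILED);	/* on failure the memfd is already closed */
	assert(mfd > 0);
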


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-07-18 12:05 Arisu Tachibana
  0 siblings, 0 replies; 19+ messages in thread
From: Arisu Tachibana @ 2025-07-18 12:05 UTC (permalink / raw
  To: gentoo-commits

commit:     84479599fce7e5092108413e8c59aafad67bce07
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Jul 18 12:05:16 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Jul 18 12:05:16 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=84479599

Linux patch 6.15.7

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README             |    4 +
 1006_linux-6.15.7.patch | 7004 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7008 insertions(+)

diff --git a/0000_README b/0000_README
index 7184e8ec..ef16828c 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-6.15.6.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.15.6
 
+Patch:  1006_linux-6.15.7.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.15.7
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1006_linux-6.15.7.patch b/1006_linux-6.15.7.patch
new file mode 100644
index 00000000..c3df4fcd
--- /dev/null
+++ b/1006_linux-6.15.7.patch
@@ -0,0 +1,7004 @@
+diff --git a/Documentation/bpf/map_hash.rst b/Documentation/bpf/map_hash.rst
+index d2343952f2cbd3..8606bf958a8cf0 100644
+--- a/Documentation/bpf/map_hash.rst
++++ b/Documentation/bpf/map_hash.rst
+@@ -233,10 +233,16 @@ attempts in order to enforce the LRU property which have increasing impacts on
+ other CPUs involved in the following operation attempts:
+ 
+ - Attempt to use CPU-local state to batch operations
+-- Attempt to fetch free nodes from global lists
++- Attempt to fetch ``target_free`` free nodes from global lists
+ - Attempt to pull any node from a global list and remove it from the hashmap
+ - Attempt to pull any node from any CPU's list and remove it from the hashmap
+ 
++The number of nodes to borrow from the global list in a batch, ``target_free``,
++depends on the size of the map. Larger batch size reduces lock contention, but
++may also exhaust the global structure. The value is computed at map init to
++avoid exhaustion, by limiting aggregate reservation by all CPUs to half the map
++size, with a minimum of a single element and a maximum budget of 128 at a time.
++
+ This algorithm is described visually in the following diagram. See the
+ description in commit 3a08c2fd7634 ("bpf: LRU List") for a full explanation of
+ the corresponding operations:
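
The new paragraph pins down how ``target_free`` is chosen; a minimal
sketch of that computation, assuming a helper name and using only the
documentation's own numbers (this is not code taken from the kernel):

	#include <stddef.h>

	#define MAX_TARGET_FREE 128

	/* Per-CPU budget: at most half the map shared across all CPUs,
	 * clamped to [1, MAX_TARGET_FREE]. Assumes nr_cpus >= 1. */
	static unsigned int lru_target_free(size_t map_size, unsigned int nr_cpus)
	{
		size_t per_cpu = (map_size / 2) / nr_cpus;

		if (per_cpu < 1)
			return 1;
		if (per_cpu > MAX_TARGET_FREE)
			return MAX_TARGET_FREE;
		return per_cpu;
	}
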
+diff --git a/Documentation/bpf/map_lru_hash_update.dot b/Documentation/bpf/map_lru_hash_update.dot
+index a0fee349d29c27..ab10058f5b79f5 100644
+--- a/Documentation/bpf/map_lru_hash_update.dot
++++ b/Documentation/bpf/map_lru_hash_update.dot
+@@ -35,18 +35,18 @@ digraph {
+   fn_bpf_lru_list_pop_free_to_local [shape=rectangle,fillcolor=2,
+     label="Flush local pending,
+     Rotate Global list, move
+-    LOCAL_FREE_TARGET
++    target_free
+     from global -> local"]
+   // Also corresponds to:
+   // fn__local_list_flush()
+   // fn_bpf_lru_list_rotate()
+   fn___bpf_lru_node_move_to_free[shape=diamond,fillcolor=2,
+-    label="Able to free\nLOCAL_FREE_TARGET\nnodes?"]
++    label="Able to free\ntarget_free\nnodes?"]
+ 
+   fn___bpf_lru_list_shrink_inactive [shape=rectangle,fillcolor=3,
+     label="Shrink inactive list
+       up to remaining
+-      LOCAL_FREE_TARGET
++      target_free
+       (global LRU -> local)"]
+   fn___bpf_lru_list_shrink [shape=diamond,fillcolor=2,
+     label="> 0 entries in\nlocal free list?"]
+diff --git a/Documentation/devicetree/bindings/clock/mediatek,mt8188-clock.yaml b/Documentation/devicetree/bindings/clock/mediatek,mt8188-clock.yaml
+index 2985c8c717d728..5403242545ab12 100644
+--- a/Documentation/devicetree/bindings/clock/mediatek,mt8188-clock.yaml
++++ b/Documentation/devicetree/bindings/clock/mediatek,mt8188-clock.yaml
+@@ -52,6 +52,9 @@ properties:
+   '#clock-cells':
+     const: 1
+ 
++  '#reset-cells':
++    const: 1
++
+ required:
+   - compatible
+   - reg
+diff --git a/Makefile b/Makefile
+index 959b55a05d1ba2..29a19c24428d05 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 15
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 4c46d80aa64b7c..ffb1d2317488e7 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -3122,6 +3122,13 @@ static bool has_sve_feature(const struct arm64_cpu_capabilities *cap, int scope)
+ }
+ #endif
+ 
++#ifdef CONFIG_ARM64_SME
++static bool has_sme_feature(const struct arm64_cpu_capabilities *cap, int scope)
++{
++	return system_supports_sme() && has_user_cpuid_feature(cap, scope);
++}
++#endif
++
+ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
+ 	HWCAP_CAP(ID_AA64ISAR0_EL1, AES, PMULL, CAP_HWCAP, KERNEL_HWCAP_PMULL),
+ 	HWCAP_CAP(ID_AA64ISAR0_EL1, AES, AES, CAP_HWCAP, KERNEL_HWCAP_AES),
+@@ -3210,31 +3217,31 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
+ 	HWCAP_CAP(ID_AA64ISAR2_EL1, BC, IMP, CAP_HWCAP, KERNEL_HWCAP_HBC),
+ #ifdef CONFIG_ARM64_SME
+ 	HWCAP_CAP(ID_AA64PFR1_EL1, SME, IMP, CAP_HWCAP, KERNEL_HWCAP_SME),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, FA64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_FA64),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, LUTv2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_LUTV2),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, SMEver, SME2p2, CAP_HWCAP, KERNEL_HWCAP_SME2P2),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, SMEver, SME2p1, CAP_HWCAP, KERNEL_HWCAP_SME2P1),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, SMEver, SME2, CAP_HWCAP, KERNEL_HWCAP_SME2),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, I16I64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I64),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, F64F64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F64F64),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, I16I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I32),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, B16B16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16B16),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, F16F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F16),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, F8F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F16),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, F8F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F32),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, I8I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I8I32),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, F16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F32),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, B16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16F32),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, BI32I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_BI32I32),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, F32F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F32F32),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, SF8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8FMA),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, SF8DP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP4),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, SF8DP2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP2),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, SBitPerm, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SBITPERM),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, AES, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_AES),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, SFEXPA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SFEXPA),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, STMOP, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_STMOP),
+-	HWCAP_CAP(ID_AA64SMFR0_EL1, SMOP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SMOP4),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, FA64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_FA64),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, LUTv2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_LUTV2),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMEver, SME2p2, CAP_HWCAP, KERNEL_HWCAP_SME2P2),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMEver, SME2p1, CAP_HWCAP, KERNEL_HWCAP_SME2P1),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMEver, SME2, CAP_HWCAP, KERNEL_HWCAP_SME2),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, I16I64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I64),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F64F64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F64F64),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, I16I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I32),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, B16B16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16B16),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F16F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F16),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F8F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F16),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F8F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F32),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, I8I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I8I32),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F32),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, B16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16F32),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, BI32I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_BI32I32),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F32F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F32F32),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SF8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8FMA),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SF8DP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP4),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SF8DP2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP2),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SBitPerm, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SBITPERM),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, AES, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_AES),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SFEXPA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SFEXPA),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, STMOP, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_STMOP),
++	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMOP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SMOP4),
+ #endif /* CONFIG_ARM64_SME */
+ 	HWCAP_CAP(ID_AA64FPFR0_EL1, F8CVT, IMP, CAP_HWCAP, KERNEL_HWCAP_F8CVT),
+ 	HWCAP_CAP(ID_AA64FPFR0_EL1, F8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_F8FMA),
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index 42faebb7b71232..4bc70205312e47 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -638,6 +638,11 @@ static void permission_overlay_switch(struct task_struct *next)
+ 	current->thread.por_el0 = read_sysreg_s(SYS_POR_EL0);
+ 	if (current->thread.por_el0 != next->thread.por_el0) {
+ 		write_sysreg_s(next->thread.por_el0, SYS_POR_EL0);
++		/*
++		 * No ISB required as we can tolerate spurious Overlay faults -
++		 * the fault handler will check again based on the new value
++		 * of POR_EL0.
++		 */
+ 	}
+ }
+ 
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index ec0a337891ddfc..11eb8d1adc8418 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -487,17 +487,29 @@ static void do_bad_area(unsigned long far, unsigned long esr,
+ 	}
+ }
+ 
+-static bool fault_from_pkey(unsigned long esr, struct vm_area_struct *vma,
+-			unsigned int mm_flags)
++static bool fault_from_pkey(struct vm_area_struct *vma, unsigned int mm_flags)
+ {
+-	unsigned long iss2 = ESR_ELx_ISS2(esr);
+-
+ 	if (!system_supports_poe())
+ 		return false;
+ 
+-	if (esr_fsc_is_permission_fault(esr) && (iss2 & ESR_ELx_Overlay))
+-		return true;
+-
++	/*
++	 * We do not check whether an Overlay fault has occurred because we
++	 * cannot make a decision based solely on its value:
++	 *
++	 * - If Overlay is set, a fault did occur due to POE, but it may be
++	 *   spurious in those cases where we update POR_EL0 without ISB (e.g.
++	 *   on context-switch). We would then need to manually check POR_EL0
++	 *   against vma_pkey(vma), which is exactly what
++	 *   arch_vma_access_permitted() does.
++	 *
++	 * - If Overlay is not set, we may still need to report a pkey fault.
++	 *   This is the case if an access was made within a mapping but with no
++	 *   page mapped, and POR_EL0 forbids the access (according to
++	 *   vma_pkey()). Such access will result in a SIGSEGV regardless
++	 *   because core code checks arch_vma_access_permitted(), but in order
++	 *   to report the correct error code - SEGV_PKUERR - we must handle
++	 *   that case here.
++	 */
+ 	return !arch_vma_access_permitted(vma,
+ 			mm_flags & FAULT_FLAG_WRITE,
+ 			mm_flags & FAULT_FLAG_INSTRUCTION,
+@@ -635,7 +647,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
+ 		goto bad_area;
+ 	}
+ 
+-	if (fault_from_pkey(esr, vma, mm_flags)) {
++	if (fault_from_pkey(vma, mm_flags)) {
+ 		pkey = vma_pkey(vma);
+ 		vma_end_read(vma);
+ 		fault = 0;
+@@ -679,7 +691,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
+ 		goto bad_area;
+ 	}
+ 
+-	if (fault_from_pkey(esr, vma, mm_flags)) {
++	if (fault_from_pkey(vma, mm_flags)) {
+ 		pkey = vma_pkey(vma);
+ 		mmap_read_unlock(mm);
+ 		fault = 0;
+diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
+index fb30c8804f87b0..46a18af52980dd 100644
+--- a/arch/arm64/mm/proc.S
++++ b/arch/arm64/mm/proc.S
+@@ -533,7 +533,6 @@ alternative_else_nop_endif
+ #undef PTE_MAYBE_SHARED
+ 
+ 	orr	tcr2, tcr2, TCR2_EL1_PIE
+-	msr	REG_TCR2_EL1, x0
+ 
+ .Lskip_indirection:
+ 
+diff --git a/arch/riscv/kernel/vdso/vdso.lds.S b/arch/riscv/kernel/vdso/vdso.lds.S
+index 8e86965a8aae4d..646e268ede4432 100644
+--- a/arch/riscv/kernel/vdso/vdso.lds.S
++++ b/arch/riscv/kernel/vdso/vdso.lds.S
+@@ -30,7 +30,7 @@ SECTIONS
+ 		*(.data .data.* .gnu.linkonce.d.*)
+ 		*(.dynbss)
+ 		*(.bss .bss.* .gnu.linkonce.b.*)
+-	}
++	}						:text
+ 
+ 	.note		: { *(.note.*) }		:text	:note
+ 
+diff --git a/arch/s390/crypto/sha1_s390.c b/arch/s390/crypto/sha1_s390.c
+index bc3a22704e0930..10950953429e66 100644
+--- a/arch/s390/crypto/sha1_s390.c
++++ b/arch/s390/crypto/sha1_s390.c
+@@ -38,6 +38,7 @@ static int s390_sha1_init(struct shash_desc *desc)
+ 	sctx->state[4] = SHA1_H4;
+ 	sctx->count = 0;
+ 	sctx->func = CPACF_KIMD_SHA_1;
++	sctx->first_message_part = 0;
+ 
+ 	return 0;
+ }
+@@ -62,6 +63,7 @@ static int s390_sha1_import(struct shash_desc *desc, const void *in)
+ 	memcpy(sctx->state, ictx->state, sizeof(ictx->state));
+ 	memcpy(sctx->buf, ictx->buffer, sizeof(ictx->buffer));
+ 	sctx->func = CPACF_KIMD_SHA_1;
++	sctx->first_message_part = 0;
+ 	return 0;
+ }
+ 
+diff --git a/arch/s390/crypto/sha256_s390.c b/arch/s390/crypto/sha256_s390.c
+index 6f1ccdf93d3e5e..0204d4bca34032 100644
+--- a/arch/s390/crypto/sha256_s390.c
++++ b/arch/s390/crypto/sha256_s390.c
+@@ -31,6 +31,7 @@ static int s390_sha256_init(struct shash_desc *desc)
+ 	sctx->state[7] = SHA256_H7;
+ 	sctx->count = 0;
+ 	sctx->func = CPACF_KIMD_SHA_256;
++	sctx->first_message_part = 0;
+ 
+ 	return 0;
+ }
+@@ -55,6 +56,7 @@ static int sha256_import(struct shash_desc *desc, const void *in)
+ 	memcpy(sctx->state, ictx->state, sizeof(ictx->state));
+ 	memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
+ 	sctx->func = CPACF_KIMD_SHA_256;
++	sctx->first_message_part = 0;
+ 	return 0;
+ }
+ 
+@@ -90,6 +92,7 @@ static int s390_sha224_init(struct shash_desc *desc)
+ 	sctx->state[7] = SHA224_H7;
+ 	sctx->count = 0;
+ 	sctx->func = CPACF_KIMD_SHA_256;
++	sctx->first_message_part = 0;
+ 
+ 	return 0;
+ }
+diff --git a/arch/s390/crypto/sha512_s390.c b/arch/s390/crypto/sha512_s390.c
+index 04f11c40776345..b53a7793bd244f 100644
+--- a/arch/s390/crypto/sha512_s390.c
++++ b/arch/s390/crypto/sha512_s390.c
+@@ -32,6 +32,7 @@ static int sha512_init(struct shash_desc *desc)
+ 	*(__u64 *)&ctx->state[14] = SHA512_H7;
+ 	ctx->count = 0;
+ 	ctx->func = CPACF_KIMD_SHA_512;
++	ctx->first_message_part = 0;
+ 
+ 	return 0;
+ }
+@@ -60,6 +61,7 @@ static int sha512_import(struct shash_desc *desc, const void *in)
+ 	memcpy(sctx->state, ictx->state, sizeof(ictx->state));
+ 	memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
+ 	sctx->func = CPACF_KIMD_SHA_512;
++	sctx->first_message_part = 0;
+ 	return 0;
+ }
+ 
+@@ -97,6 +99,7 @@ static int sha384_init(struct shash_desc *desc)
+ 	*(__u64 *)&ctx->state[14] = SHA384_H7;
+ 	ctx->count = 0;
+ 	ctx->func = CPACF_KIMD_SHA_512;
++	ctx->first_message_part = 0;
+ 
+ 	return 0;
+ }
+diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c
+index b97bb52dd56260..70f8d7e87fb81c 100644
+--- a/arch/um/drivers/vector_kern.c
++++ b/arch/um/drivers/vector_kern.c
+@@ -1592,35 +1592,19 @@ static void vector_eth_configure(
+ 
+ 	device->dev = dev;
+ 
+-	*vp = ((struct vector_private)
+-		{
+-		.list			= LIST_HEAD_INIT(vp->list),
+-		.dev			= dev,
+-		.unit			= n,
+-		.options		= get_transport_options(def),
+-		.rx_irq			= 0,
+-		.tx_irq			= 0,
+-		.parsed			= def,
+-		.max_packet		= get_mtu(def) + ETH_HEADER_OTHER,
+-		/* TODO - we need to calculate headroom so that ip header
+-		 * is 16 byte aligned all the time
+-		 */
+-		.headroom		= get_headroom(def),
+-		.form_header		= NULL,
+-		.verify_header		= NULL,
+-		.header_rxbuffer	= NULL,
+-		.header_txbuffer	= NULL,
+-		.header_size		= 0,
+-		.rx_header_size		= 0,
+-		.rexmit_scheduled	= false,
+-		.opened			= false,
+-		.transport_data		= NULL,
+-		.in_write_poll		= false,
+-		.coalesce		= 2,
+-		.req_size		= get_req_size(def),
+-		.in_error		= false,
+-		.bpf			= NULL
+-	});
++	INIT_LIST_HEAD(&vp->list);
++	vp->dev		= dev;
++	vp->unit	= n;
++	vp->options	= get_transport_options(def);
++	vp->parsed	= def;
++	vp->max_packet	= get_mtu(def) + ETH_HEADER_OTHER;
++	/*
++	 * TODO - we need to calculate headroom so that ip header
++	 * is 16 byte aligned all the time
++	 */
++	vp->headroom	= get_headroom(def);
++	vp->coalesce	= 2;
++	vp->req_size	= get_req_size(def);
+ 
+ 	dev->features = dev->hw_features = (NETIF_F_SG | NETIF_F_FRAGLIST);
+ 	INIT_WORK(&vp->reset_tx, vector_reset_tx);
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 020180dfb87f65..5d4857b476a649 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -146,7 +146,7 @@ config X86
+ 	select ARCH_WANTS_DYNAMIC_TASK_STRUCT
+ 	select ARCH_WANTS_NO_INSTR
+ 	select ARCH_WANT_GENERAL_HUGETLB
+-	select ARCH_WANT_HUGE_PMD_SHARE
++	select ARCH_WANT_HUGE_PMD_SHARE		if X86_64
+ 	select ARCH_WANT_LD_ORPHAN_WARN
+ 	select ARCH_WANT_OPTIMIZE_DAX_VMEMMAP	if X86_64
+ 	select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP	if X86_64
+diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
+index 36beaac713c128..e2ee6bb3008f74 100644
+--- a/arch/x86/coco/sev/core.c
++++ b/arch/x86/coco/sev/core.c
+@@ -103,7 +103,7 @@ static u64 secrets_pa __ro_after_init;
+  */
+ static u64 snp_tsc_scale __ro_after_init;
+ static u64 snp_tsc_offset __ro_after_init;
+-static u64 snp_tsc_freq_khz __ro_after_init;
++static unsigned long snp_tsc_freq_khz __ro_after_init;
+ 
+ /* #VC handler runtime per-CPU data */
+ struct sev_es_runtime_data {
+@@ -3347,15 +3347,31 @@ static unsigned long securetsc_get_tsc_khz(void)
+ 
+ void __init snp_secure_tsc_init(void)
+ {
+-	unsigned long long tsc_freq_mhz;
++	struct snp_secrets_page *secrets;
++	unsigned long tsc_freq_mhz;
++	void *mem;
+ 
+ 	if (!cc_platform_has(CC_ATTR_GUEST_SNP_SECURE_TSC))
+ 		return;
+ 
++	mem = early_memremap_encrypted(secrets_pa, PAGE_SIZE);
++	if (!mem) {
++		pr_err("Unable to get TSC_FACTOR: failed to map the SNP secrets page.\n");
++		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_SECURE_TSC);
++	}
++
++	secrets = (__force struct snp_secrets_page *)mem;
++
+ 	setup_force_cpu_cap(X86_FEATURE_TSC_KNOWN_FREQ);
+ 	rdmsrl(MSR_AMD64_GUEST_TSC_FREQ, tsc_freq_mhz);
+-	snp_tsc_freq_khz = (unsigned long)(tsc_freq_mhz * 1000);
++
++	/* Extract the guest TSC frequency in MHz from bits [17:0]; the rest is reserved */
++	tsc_freq_mhz &= GENMASK_ULL(17, 0);
++
++	snp_tsc_freq_khz = SNP_SCALE_TSC_FREQ(tsc_freq_mhz * 1000, secrets->tsc_factor);
+ 
+ 	x86_platform.calibrate_cpu = securetsc_get_tsc_khz;
+ 	x86_platform.calibrate_tsc = securetsc_get_tsc_khz;
++
++	early_memunmap(mem, PAGE_SIZE);
+ }
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index e7d2f460fcc699..2333f4e7bc2f1a 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -624,6 +624,7 @@
+ #define MSR_AMD64_OSVW_STATUS		0xc0010141
+ #define MSR_AMD_PPIN_CTL		0xc00102f0
+ #define MSR_AMD_PPIN			0xc00102f1
++#define MSR_AMD64_CPUID_FN_7		0xc0011002
+ #define MSR_AMD64_CPUID_FN_1		0xc0011004
+ #define MSR_AMD64_LS_CFG		0xc0011020
+ #define MSR_AMD64_DC_CFG		0xc0011022
+diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
+index ba7999f66abe6d..488a029c848b11 100644
+--- a/arch/x86/include/asm/sev.h
++++ b/arch/x86/include/asm/sev.h
+@@ -192,6 +192,18 @@ struct snp_tsc_info_resp {
+ 	u8 rsvd2[100];
+ } __packed;
+ 
++/*
++ * Obtain the mean TSC frequency by decreasing the nominal TSC frequency with
++ * TSC_FACTOR as documented in the SNP Firmware ABI specification:
++ *
++ * GUEST_TSC_FREQ * (1 - (TSC_FACTOR * 0.00001))
++ *
++ * which is equivalent to:
++ *
++ * GUEST_TSC_FREQ -= (GUEST_TSC_FREQ * TSC_FACTOR) / 100000;
++ */
++#define SNP_SCALE_TSC_FREQ(freq, factor) ((freq) - (freq) * (factor) / 100000)
++
+ struct snp_guest_req {
+ 	void *req_buf;
+ 	size_t req_sz;
+@@ -251,8 +263,11 @@ struct snp_secrets_page {
+ 	u8 svsm_guest_vmpl;
+ 	u8 rsvd3[3];
+ 
++	/* The percentage decrease from nominal to mean TSC frequency. */
++	u32 tsc_factor;
++
+ 	/* Remainder of page */
+-	u8 rsvd4[3744];
++	u8 rsvd4[3740];
+ } __packed;
+ 
+ struct snp_msg_desc {
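
A worked example of the macro above, under the assumption of a 2 GHz
nominal guest TSC (2,000,000 kHz) and TSC_FACTOR = 100 (a 0.1% decrease):

	/* 2000000 - (2000000 * 100) / 100000 = 2000000 - 2000 */
	unsigned long mean_khz = SNP_SCALE_TSC_FREQ(2000000UL, 100);	/* 1998000 kHz */
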
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 2c5003f05920b8..6f9a6185df7f60 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -972,6 +972,16 @@ static void init_amd_zen2(struct cpuinfo_x86 *c)
+ 	init_spectral_chicken(c);
+ 	fix_erratum_1386(c);
+ 	zen2_zenbleed_check(c);
++
++	/* Disable RDSEED on AMD Cyan Skillfish because of an error. */
++	if (c->x86_model == 0x47 && c->x86_stepping == 0x0) {
++		clear_cpu_cap(c, X86_FEATURE_RDSEED);
++		msr_clear_bit(MSR_AMD64_CPUID_FN_7, 18);
++		pr_emerg("RDSEED is not reliable on this platform; disabling.\n");
++	}
++
++	/* Correct misconfigured CPUID on some clients. */
++	clear_cpu_cap(c, X86_FEATURE_INVLPGB);
+ }
+ 
+ static void init_amd_zen3(struct cpuinfo_x86 *c)
+diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
+index 1075a90141daed..8c79d13ed4cc35 100644
+--- a/arch/x86/kernel/cpu/mce/amd.c
++++ b/arch/x86/kernel/cpu/mce/amd.c
+@@ -350,7 +350,6 @@ static void smca_configure(unsigned int bank, unsigned int cpu)
+ 
+ struct thresh_restart {
+ 	struct threshold_block	*b;
+-	int			reset;
+ 	int			set_lvt_off;
+ 	int			lvt_off;
+ 	u16			old_limit;
+@@ -432,13 +431,13 @@ static void threshold_restart_bank(void *_tr)
+ 
+ 	rdmsr(tr->b->address, lo, hi);
+ 
+-	if (tr->b->threshold_limit < (hi & THRESHOLD_MAX))
+-		tr->reset = 1;	/* limit cannot be lower than err count */
+-
+-	if (tr->reset) {		/* reset err count and overflow bit */
+-		hi =
+-		    (hi & ~(MASK_ERR_COUNT_HI | MASK_OVERFLOW_HI)) |
+-		    (THRESHOLD_MAX - tr->b->threshold_limit);
++	/*
++	 * Reset error count and overflow bit.
++	 * This is done during init or after handling an interrupt.
++	 */
++	if (hi & MASK_OVERFLOW_HI || tr->set_lvt_off) {
++		hi &= ~(MASK_ERR_COUNT_HI | MASK_OVERFLOW_HI);
++		hi |= THRESHOLD_MAX - tr->b->threshold_limit;
+ 	} else if (tr->old_limit) {	/* change limit w/o reset */
+ 		int new_count = (hi & THRESHOLD_MAX) +
+ 		    (tr->old_limit - tr->b->threshold_limit);
+@@ -1113,13 +1112,20 @@ static const char *get_name(unsigned int cpu, unsigned int bank, struct threshol
+ 	}
+ 
+ 	bank_type = smca_get_bank_type(cpu, bank);
+-	if (bank_type >= N_SMCA_BANK_TYPES)
+-		return NULL;
+ 
+ 	if (b && (bank_type == SMCA_UMC || bank_type == SMCA_UMC_V2)) {
+ 		if (b->block < ARRAY_SIZE(smca_umc_block_names))
+ 			return smca_umc_block_names[b->block];
+-		return NULL;
++	}
++
++	if (b && b->block) {
++		snprintf(buf_mcatype, MAX_MCATYPE_NAME_LEN, "th_block_%u", b->block);
++		return buf_mcatype;
++	}
++
++	if (bank_type >= N_SMCA_BANK_TYPES) {
++		snprintf(buf_mcatype, MAX_MCATYPE_NAME_LEN, "th_bank_%u", bank);
++		return buf_mcatype;
+ 	}
+ 
+ 	if (per_cpu(smca_bank_counts, cpu)[bank_type] == 1)
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index f6fd71b64b6638..50acb0fcb27e2a 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -1740,6 +1740,11 @@ static void mc_poll_banks_default(void)
+ 
+ void (*mc_poll_banks)(void) = mc_poll_banks_default;
+ 
++static bool should_enable_timer(unsigned long iv)
++{
++	return !mca_cfg.ignore_ce && iv;
++}
++
+ static void mce_timer_fn(struct timer_list *t)
+ {
+ 	struct timer_list *cpu_t = this_cpu_ptr(&mce_timer);
+@@ -1763,7 +1768,7 @@ static void mce_timer_fn(struct timer_list *t)
+ 
+ 	if (mce_get_storm_mode()) {
+ 		__start_timer(t, HZ);
+-	} else {
++	} else if (should_enable_timer(iv)) {
+ 		__this_cpu_write(mce_next_interval, iv);
+ 		__start_timer(t, iv);
+ 	}
+@@ -2156,11 +2161,10 @@ static void mce_start_timer(struct timer_list *t)
+ {
+ 	unsigned long iv = check_interval * HZ;
+ 
+-	if (mca_cfg.ignore_ce || !iv)
+-		return;
+-
+-	this_cpu_write(mce_next_interval, iv);
+-	__start_timer(t, iv);
++	if (should_enable_timer(iv)) {
++		this_cpu_write(mce_next_interval, iv);
++		__start_timer(t, iv);
++	}
+ }
+ 
+ static void __mcheck_cpu_setup_timer(void)
+@@ -2801,15 +2805,9 @@ static int mce_cpu_dead(unsigned int cpu)
+ static int mce_cpu_online(unsigned int cpu)
+ {
+ 	struct timer_list *t = this_cpu_ptr(&mce_timer);
+-	int ret;
+ 
+ 	mce_device_create(cpu);
+-
+-	ret = mce_threshold_create_device(cpu);
+-	if (ret) {
+-		mce_device_remove(cpu);
+-		return ret;
+-	}
++	mce_threshold_create_device(cpu);
+ 	mce_reenable_cpu();
+ 	mce_start_timer(t);
+ 	return 0;
+diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
+index f863df0ff42ce8..512b5f6c4d5c85 100644
+--- a/arch/x86/kernel/cpu/mce/intel.c
++++ b/arch/x86/kernel/cpu/mce/intel.c
+@@ -478,6 +478,7 @@ void mce_intel_feature_init(struct cpuinfo_x86 *c)
+ void mce_intel_feature_clear(struct cpuinfo_x86 *c)
+ {
+ 	intel_clear_lmce();
++	cmci_clear();
+ }
+ 
+ bool intel_filter_mce(struct mce *m)
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index 24f0318c50d790..fdc8cfa8423fda 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -1979,6 +1979,9 @@ int kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
+ 		if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY)
+ 			goto out_flush_all;
+ 
++		if (is_noncanonical_invlpg_address(entries[i], vcpu))
++			continue;
++
+ 		/*
+ 		 * Lower 12 bits of 'address' encode the number of additional
+ 		 * pages to flush.
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index a7a7dc5073363b..c581ab85bbef31 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -2032,6 +2032,10 @@ static int sev_check_source_vcpus(struct kvm *dst, struct kvm *src)
+ 	struct kvm_vcpu *src_vcpu;
+ 	unsigned long i;
+ 
++	if (src->created_vcpus != atomic_read(&src->online_vcpus) ||
++	    dst->created_vcpus != atomic_read(&dst->online_vcpus))
++		return -EBUSY;
++
+ 	if (!sev_es_guest(src))
+ 		return 0;
+ 
+diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
+index 38b33cdd4232d9..160cbde4323df9 100644
+--- a/arch/x86/kvm/xen.c
++++ b/arch/x86/kvm/xen.c
+@@ -1970,8 +1970,19 @@ int kvm_xen_setup_evtchn(struct kvm *kvm,
+ {
+ 	struct kvm_vcpu *vcpu;
+ 
+-	if (ue->u.xen_evtchn.port >= max_evtchn_port(kvm))
+-		return -EINVAL;
++	/*
++	 * Don't check for the port being within range of max_evtchn_port().
++	 * Userspace can configure whatever targets it likes; events just won't
++	 * be delivered if/while the target is invalid, just like userspace can
++	 * configure MSIs which target non-existent APICs.
++	 *
++	 * This allows the IRQ routing table to be restored on Live Migration
++	 * and Live Update *independently* of other things like creating vCPUs,
++	 * without imposing an ordering dependency on userspace.  In this
++	 * particular case, the problematic ordering would be with setting the
++	 * Xen 'long mode' flag, which changes max_evtchn_port() to allow 4096
++	 * instead of 1024 event channels.
++	 */
+ 
+ 	/* We only support 2 level event channels for now */
+ 	if (ue->u.xen_evtchn.priority != KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL)
+diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
+index 93bb1f7d909867..6760330a8af55d 100644
+--- a/drivers/acpi/battery.c
++++ b/drivers/acpi/battery.c
+@@ -243,23 +243,10 @@ static int acpi_battery_get_property(struct power_supply *psy,
+ 		break;
+ 	case POWER_SUPPLY_PROP_CURRENT_NOW:
+ 	case POWER_SUPPLY_PROP_POWER_NOW:
+-		if (battery->rate_now == ACPI_BATTERY_VALUE_UNKNOWN) {
++		if (battery->rate_now == ACPI_BATTERY_VALUE_UNKNOWN)
+ 			ret = -ENODEV;
+-			break;
+-		}
+-
+-		val->intval = battery->rate_now * 1000;
+-		/*
+-		 * When discharging, the current should be reported as a
+-		 * negative number as per the power supply class interface
+-		 * definition.
+-		 */
+-		if (psp == POWER_SUPPLY_PROP_CURRENT_NOW &&
+-		    (battery->state & ACPI_BATTERY_STATE_DISCHARGING) &&
+-		    acpi_battery_handle_discharging(battery)
+-				== POWER_SUPPLY_STATUS_DISCHARGING)
+-			val->intval = -val->intval;
+-
++		else
++			val->intval = battery->rate_now * 1000;
+ 		break;
+ 	case POWER_SUPPLY_PROP_CHARGE_FULL_DESIGN:
+ 	case POWER_SUPPLY_PROP_ENERGY_FULL_DESIGN:
+diff --git a/drivers/atm/idt77252.c b/drivers/atm/idt77252.c
+index a876024d8a05f9..63d41320cd5cf0 100644
+--- a/drivers/atm/idt77252.c
++++ b/drivers/atm/idt77252.c
+@@ -852,6 +852,8 @@ queue_skb(struct idt77252_dev *card, struct vc_map *vc,
+ 
+ 	IDT77252_PRV_PADDR(skb) = dma_map_single(&card->pcidev->dev, skb->data,
+ 						 skb->len, DMA_TO_DEVICE);
++	if (dma_mapping_error(&card->pcidev->dev, IDT77252_PRV_PADDR(skb)))
++		return -ENOMEM;
+ 
+ 	error = -EINVAL;
+ 
+@@ -1857,6 +1859,8 @@ add_rx_skb(struct idt77252_dev *card, int queue,
+ 		paddr = dma_map_single(&card->pcidev->dev, skb->data,
+ 				       skb_end_pointer(skb) - skb->data,
+ 				       DMA_FROM_DEVICE);
++		if (dma_mapping_error(&card->pcidev->dev, paddr))
++			goto outpoolrm;
+ 		IDT77252_PRV_PADDR(skb) = paddr;
+ 
+ 		if (push_rx_skb(card, skb, queue)) {
+@@ -1871,6 +1875,7 @@ add_rx_skb(struct idt77252_dev *card, int queue,
+ 	dma_unmap_single(&card->pcidev->dev, IDT77252_PRV_PADDR(skb),
+ 			 skb_end_pointer(skb) - skb->data, DMA_FROM_DEVICE);
+ 
++outpoolrm:
+ 	handle = IDT77252_PRV_POOL(skb);
+ 	card->sbpool[POOL_QUEUE(handle)].skb[POOL_INDEX(handle)] = NULL;
+ 
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 7bdc7eb808ea93..2592bd19ebc151 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -2198,9 +2198,7 @@ static int nbd_genl_connect(struct sk_buff *skb, struct genl_info *info)
+ 				goto out;
+ 		}
+ 	}
+-	ret = nbd_start_device(nbd);
+-	if (ret)
+-		goto out;
++
+ 	if (info->attrs[NBD_ATTR_BACKEND_IDENTIFIER]) {
+ 		nbd->backend = nla_strdup(info->attrs[NBD_ATTR_BACKEND_IDENTIFIER],
+ 					  GFP_KERNEL);
+@@ -2216,6 +2214,8 @@ static int nbd_genl_connect(struct sk_buff *skb, struct genl_info *info)
+ 		goto out;
+ 	}
+ 	set_bit(NBD_RT_HAS_BACKEND_FILE, &config->runtime_flags);
++
++	ret = nbd_start_device(nbd);
+ out:
+ 	mutex_unlock(&nbd->config_lock);
+ 	if (!ret) {
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 8a482853a75ede..0e017eae97fb1d 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -2710,7 +2710,8 @@ static int ublk_ctrl_add_dev(const struct ublksrv_ctrl_cmd *header)
+ 	if (copy_from_user(&info, argp, sizeof(info)))
+ 		return -EFAULT;
+ 
+-	if (info.queue_depth > UBLK_MAX_QUEUE_DEPTH || info.nr_hw_queues > UBLK_MAX_NR_QUEUES)
++	if (info.queue_depth > UBLK_MAX_QUEUE_DEPTH || !info.queue_depth ||
++	    info.nr_hw_queues > UBLK_MAX_NR_QUEUES || !info.nr_hw_queues)
+ 		return -EINVAL;
+ 
+ 	if (capable(CAP_SYS_ADMIN))
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index a2dc39c005f4f8..976ec88a0f62aa 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -2392,10 +2392,17 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ 			 */
+ 			qcadev->bt_power->pwrseq = devm_pwrseq_get(&serdev->dev,
+ 								   "bluetooth");
+-			if (IS_ERR(qcadev->bt_power->pwrseq))
+-				return PTR_ERR(qcadev->bt_power->pwrseq);
+ 
+-			break;
++			/*
++			 * Some modules have BT_EN enabled via a hardware pull-up,
++			 * meaning it is not defined in the DTS and is not controlled
++			 * through the power sequence. In such cases, fall through
++			 * to follow the legacy flow.
++			 */
++			if (IS_ERR(qcadev->bt_power->pwrseq))
++				qcadev->bt_power->pwrseq = NULL;
++			else
++				break;
+ 		}
+ 		fallthrough;
+ 	case QCA_WCN3950:
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index 3ba9d7e9a6c7ca..6047c9600e03aa 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -1241,7 +1241,7 @@ int ipmi_create_user(unsigned int          if_num,
+ 	}
+ 	/* Not found, return an error */
+ 	rv = -EINVAL;
+-	goto out_kfree;
++	goto out_unlock;
+ 
+  found:
+ 	if (atomic_add_return(1, &intf->nr_users) > max_users) {
+@@ -1283,6 +1283,7 @@ int ipmi_create_user(unsigned int          if_num,
+ 
+ out_kfree:
+ 	atomic_dec(&intf->nr_users);
++out_unlock:
+ 	srcu_read_unlock(&ipmi_interfaces_srcu, index);
+ 	vfree(new_user);
+ 	return rv;
+diff --git a/drivers/clk/clk-scmi.c b/drivers/clk/clk-scmi.c
+index 15510c2ff21c03..1b1561c84127b9 100644
+--- a/drivers/clk/clk-scmi.c
++++ b/drivers/clk/clk-scmi.c
+@@ -404,6 +404,7 @@ static int scmi_clocks_probe(struct scmi_device *sdev)
+ 	const struct scmi_handle *handle = sdev->handle;
+ 	struct scmi_protocol_handle *ph;
+ 	const struct clk_ops *scmi_clk_ops_db[SCMI_MAX_CLK_OPS] = {};
++	struct scmi_clk *sclks;
+ 
+ 	if (!handle)
+ 		return -ENODEV;
+@@ -430,18 +431,21 @@ static int scmi_clocks_probe(struct scmi_device *sdev)
+ 	transport_is_atomic = handle->is_transport_atomic(handle,
+ 							  &atomic_threshold_us);
+ 
++	sclks = devm_kcalloc(dev, count, sizeof(*sclks), GFP_KERNEL);
++	if (!sclks)
++		return -ENOMEM;
++
++	for (idx = 0; idx < count; idx++)
++		hws[idx] = &sclks[idx].hw;
++
+ 	for (idx = 0; idx < count; idx++) {
+-		struct scmi_clk *sclk;
++		struct scmi_clk *sclk = &sclks[idx];
+ 		const struct clk_ops *scmi_ops;
+ 
+-		sclk = devm_kzalloc(dev, sizeof(*sclk), GFP_KERNEL);
+-		if (!sclk)
+-			return -ENOMEM;
+-
+ 		sclk->info = scmi_proto_clk_ops->info_get(ph, idx);
+ 		if (!sclk->info) {
+ 			dev_dbg(dev, "invalid clock info for idx %d\n", idx);
+-			devm_kfree(dev, sclk);
++			hws[idx] = NULL;
+ 			continue;
+ 		}
+ 
+@@ -479,13 +483,11 @@ static int scmi_clocks_probe(struct scmi_device *sdev)
+ 		if (err) {
+ 			dev_err(dev, "failed to register clock %d\n", idx);
+ 			devm_kfree(dev, sclk->parent_data);
+-			devm_kfree(dev, sclk);
+ 			hws[idx] = NULL;
+ 		} else {
+ 			dev_dbg(dev, "Registered clock:%s%s\n",
+ 				sclk->info->name,
+ 				scmi_ops->enable ? " (atomic ops)" : "");
+-			hws[idx] = &sclk->hw;
+ 		}
+ 	}
+ 
+diff --git a/drivers/clk/imx/clk-imx95-blk-ctl.c b/drivers/clk/imx/clk-imx95-blk-ctl.c
+index 25974947ad0c18..cc2ee2be18195f 100644
+--- a/drivers/clk/imx/clk-imx95-blk-ctl.c
++++ b/drivers/clk/imx/clk-imx95-blk-ctl.c
+@@ -219,11 +219,15 @@ static const struct imx95_blk_ctl_dev_data lvds_csr_dev_data = {
+ 	.clk_reg_offset = 0,
+ };
+ 
++static const char * const disp_engine_parents[] = {
++	"videopll1", "dsi_pll", "ldb_pll_div7"
++};
++
+ static const struct imx95_blk_ctl_clk_dev_data dispmix_csr_clk_dev_data[] = {
+ 	[IMX95_CLK_DISPMIX_ENG0_SEL] = {
+ 		.name = "disp_engine0_sel",
+-		.parent_names = (const char *[]){"videopll1", "dsi_pll", "ldb_pll_div7", },
+-		.num_parents = 4,
++		.parent_names = disp_engine_parents,
++		.num_parents = ARRAY_SIZE(disp_engine_parents),
+ 		.reg = 0,
+ 		.bit_idx = 0,
+ 		.bit_width = 2,
+@@ -232,8 +236,8 @@ static const struct imx95_blk_ctl_clk_dev_data dispmix_csr_clk_dev_data[] = {
+ 	},
+ 	[IMX95_CLK_DISPMIX_ENG1_SEL] = {
+ 		.name = "disp_engine1_sel",
+-		.parent_names = (const char *[]){"videopll1", "dsi_pll", "ldb_pll_div7", },
+-		.num_parents = 4,
++		.parent_names = disp_engine_parents,
++		.num_parents = ARRAY_SIZE(disp_engine_parents),
+ 		.reg = 0,
+ 		.bit_idx = 2,
+ 		.bit_width = 2,
+diff --git a/drivers/edac/ecs.c b/drivers/edac/ecs.c
+index 1d51838a60c111..51c451c7f0f0b2 100755
+--- a/drivers/edac/ecs.c
++++ b/drivers/edac/ecs.c
+@@ -170,8 +170,10 @@ static int ecs_create_desc(struct device *ecs_dev, const struct attribute_group
+ 		fru_ctx->dev_attr[ECS_RESET]		= EDAC_ECS_ATTR_WO(reset, fru);
+ 		fru_ctx->dev_attr[ECS_THRESHOLD]	= EDAC_ECS_ATTR_RW(threshold, fru);
+ 
+-		for (i = 0; i < ECS_MAX_ATTRS; i++)
++		for (i = 0; i < ECS_MAX_ATTRS; i++) {
++			sysfs_attr_init(&fru_ctx->dev_attr[i].dev_attr.attr);
+ 			fru_ctx->ecs_attrs[i] = &fru_ctx->dev_attr[i].dev_attr.attr;
++		}
+ 
+ 		sprintf(fru_ctx->name, "%s%d", EDAC_ECS_FRU_NAME, fru);
+ 		group->name = fru_ctx->name;
+diff --git a/drivers/edac/mem_repair.c b/drivers/edac/mem_repair.c
+index 3b1a845457b08f..1df8957a8459ae 100755
+--- a/drivers/edac/mem_repair.c
++++ b/drivers/edac/mem_repair.c
+@@ -324,6 +324,7 @@ static int mem_repair_create_desc(struct device *dev,
+ 	for (i = 0; i < MR_MAX_ATTRS; i++) {
+ 		memcpy(&ctx->mem_repair_dev_attr[i],
+ 		       &dev_attr[i], sizeof(dev_attr[i]));
++		sysfs_attr_init(&ctx->mem_repair_dev_attr[i].dev_attr.attr);
+ 		ctx->mem_repair_attrs[i] =
+ 			&ctx->mem_repair_dev_attr[i].dev_attr.attr;
+ 	}
+diff --git a/drivers/edac/scrub.c b/drivers/edac/scrub.c
+index e421d3ebd959f4..f9d02af2fc3a20 100755
+--- a/drivers/edac/scrub.c
++++ b/drivers/edac/scrub.c
+@@ -176,6 +176,7 @@ static int scrub_create_desc(struct device *scrub_dev,
+ 	group = &scrub_ctx->group;
+ 	for (i = 0; i < SCRUB_MAX_ATTRS; i++) {
+ 		memcpy(&scrub_ctx->scrub_dev_attr[i], &dev_attr[i], sizeof(dev_attr[i]));
++		sysfs_attr_init(&scrub_ctx->scrub_dev_attr[i].dev_attr.attr);
+ 		scrub_ctx->scrub_attrs[i] = &scrub_ctx->scrub_dev_attr[i].dev_attr.attr;
+ 	}
+ 	sprintf(scrub_ctx->name, "%s%d", "scrub", instance);
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 113c5d90f2df46..0063d62d3bc486 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -3277,14 +3277,15 @@ static int gpiod_get_raw_value_commit(const struct gpio_desc *desc)
+ static int gpio_chip_get_multiple(struct gpio_chip *gc,
+ 				  unsigned long *mask, unsigned long *bits)
+ {
+-	int ret;
+-	
+ 	lockdep_assert_held(&gc->gpiodev->srcu);
+ 
+ 	if (gc->get_multiple) {
++		int ret;
++
+ 		ret = gc->get_multiple(gc, mask, bits);
+ 		if (ret > 0)
+ 			return -EBADE;
++		return ret;
+ 	}
+ 
+ 	if (gc->get) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
+index ca4a6b82817f53..df77558e03ef21 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
+@@ -561,6 +561,13 @@ static uint32_t read_vmid_from_vmfault_reg(struct amdgpu_device *adev)
+ 	return REG_GET_FIELD(status, VM_CONTEXT1_PROTECTION_FAULT_STATUS, VMID);
+ }
+ 
++static uint32_t kgd_hqd_sdma_get_doorbell(struct amdgpu_device *adev,
++					  int engine, int queue)
++{
++	return 0;
++}
++
+ const struct kfd2kgd_calls gfx_v7_kfd2kgd = {
+ 	.program_sh_mem_settings = kgd_program_sh_mem_settings,
+ 	.set_pasid_vmid_mapping = kgd_set_pasid_vmid_mapping,
+@@ -578,4 +585,5 @@ const struct kfd2kgd_calls gfx_v7_kfd2kgd = {
+ 	.set_scratch_backing_va = set_scratch_backing_va,
+ 	.set_vm_context_page_table_base = set_vm_context_page_table_base,
+ 	.read_vmid_from_vmfault_reg = read_vmid_from_vmfault_reg,
++	.hqd_sdma_get_doorbell = kgd_hqd_sdma_get_doorbell,
+ };
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v8.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v8.c
+index 0f3e2944edd7e9..e68c0fa8d7513a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v8.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v8.c
+@@ -582,6 +582,13 @@ static void set_vm_context_page_table_base(struct amdgpu_device *adev,
+ 			lower_32_bits(page_table_base));
+ }
+ 
++static uint32_t kgd_hqd_sdma_get_doorbell(struct amdgpu_device *adev,
++					  int engine, int queue)
++{
++	return 0;
++}
++
+ const struct kfd2kgd_calls gfx_v8_kfd2kgd = {
+ 	.program_sh_mem_settings = kgd_program_sh_mem_settings,
+ 	.set_pasid_vmid_mapping = kgd_set_pasid_vmid_mapping,
+@@ -599,4 +606,5 @@ const struct kfd2kgd_calls gfx_v8_kfd2kgd = {
+ 			get_atc_vmid_pasid_mapping_info,
+ 	.set_scratch_backing_va = set_scratch_backing_va,
+ 	.set_vm_context_page_table_base = set_vm_context_page_table_base,
++	.hqd_sdma_get_doorbell = kgd_hqd_sdma_get_doorbell,
+ };
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+index b26a8301a1bdd4..a09ec778172d86 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+@@ -45,6 +45,7 @@
+ #include "amdgpu_ras.h"
+ 
+ MODULE_FIRMWARE("amdgpu/sdma_4_4_2.bin");
++MODULE_FIRMWARE("amdgpu/sdma_4_4_4.bin");
+ MODULE_FIRMWARE("amdgpu/sdma_4_4_5.bin");
+ 
+ static const struct amdgpu_hwip_reg_entry sdma_reg_list_4_4_2[] = {
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+index 72be6e152e881e..b6393a7f69428b 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+@@ -1171,13 +1171,12 @@ svm_range_split_head(struct svm_range *prange, uint64_t new_start,
+ }
+ 
+ static void
+-svm_range_add_child(struct svm_range *prange, struct mm_struct *mm,
+-		    struct svm_range *pchild, enum svm_work_list_ops op)
++svm_range_add_child(struct svm_range *prange, struct svm_range *pchild, enum svm_work_list_ops op)
+ {
+ 	pr_debug("add child 0x%p [0x%lx 0x%lx] to prange 0x%p child list %d\n",
+ 		 pchild, pchild->start, pchild->last, prange, op);
+ 
+-	pchild->work_item.mm = mm;
++	pchild->work_item.mm = NULL;
+ 	pchild->work_item.op = op;
+ 	list_add_tail(&pchild->child_list, &prange->child_list);
+ }
+@@ -2394,15 +2393,17 @@ svm_range_add_list_work(struct svm_range_list *svms, struct svm_range *prange,
+ 		    prange->work_item.op != SVM_OP_UNMAP_RANGE)
+ 			prange->work_item.op = op;
+ 	} else {
+-		prange->work_item.op = op;
+-
+-		/* Pairs with mmput in deferred_list_work */
+-		mmget(mm);
+-		prange->work_item.mm = mm;
+-		list_add_tail(&prange->deferred_list,
+-			      &prange->svms->deferred_range_list);
+-		pr_debug("add prange 0x%p [0x%lx 0x%lx] to work list op %d\n",
+-			 prange, prange->start, prange->last, op);
++		/* Pairs with mmput in deferred_list_work.
++		 * If process is exiting and mm is gone, don't update mmu notifier.
++		 */
++		if (mmget_not_zero(mm)) {
++			prange->work_item.mm = mm;
++			prange->work_item.op = op;
++			list_add_tail(&prange->deferred_list,
++				      &prange->svms->deferred_range_list);
++			pr_debug("add prange 0x%p [0x%lx 0x%lx] to work list op %d\n",
++				 prange, prange->start, prange->last, op);
++		}
+ 	}
+ 	spin_unlock(&svms->deferred_list_lock);
+ }
+@@ -2416,8 +2417,7 @@ void schedule_deferred_list_work(struct svm_range_list *svms)
+ }
+ 
+ static void
+-svm_range_unmap_split(struct mm_struct *mm, struct svm_range *parent,
+-		      struct svm_range *prange, unsigned long start,
++svm_range_unmap_split(struct svm_range *parent, struct svm_range *prange, unsigned long start,
+ 		      unsigned long last)
+ {
+ 	struct svm_range *head;
+@@ -2438,12 +2438,12 @@ svm_range_unmap_split(struct mm_struct *mm, struct svm_range *parent,
+ 		svm_range_split(tail, last + 1, tail->last, &head);
+ 
+ 	if (head != prange && tail != prange) {
+-		svm_range_add_child(parent, mm, head, SVM_OP_UNMAP_RANGE);
+-		svm_range_add_child(parent, mm, tail, SVM_OP_ADD_RANGE);
++		svm_range_add_child(parent, head, SVM_OP_UNMAP_RANGE);
++		svm_range_add_child(parent, tail, SVM_OP_ADD_RANGE);
+ 	} else if (tail != prange) {
+-		svm_range_add_child(parent, mm, tail, SVM_OP_UNMAP_RANGE);
++		svm_range_add_child(parent, tail, SVM_OP_UNMAP_RANGE);
+ 	} else if (head != prange) {
+-		svm_range_add_child(parent, mm, head, SVM_OP_UNMAP_RANGE);
++		svm_range_add_child(parent, head, SVM_OP_UNMAP_RANGE);
+ 	} else if (parent != prange) {
+ 		prange->work_item.op = SVM_OP_UNMAP_RANGE;
+ 	}
+@@ -2520,14 +2520,14 @@ svm_range_unmap_from_cpu(struct mm_struct *mm, struct svm_range *prange,
+ 		l = min(last, pchild->last);
+ 		if (l >= s)
+ 			svm_range_unmap_from_gpus(pchild, s, l, trigger);
+-		svm_range_unmap_split(mm, prange, pchild, start, last);
++		svm_range_unmap_split(prange, pchild, start, last);
+ 		mutex_unlock(&pchild->lock);
+ 	}
+ 	s = max(start, prange->start);
+ 	l = min(last, prange->last);
+ 	if (l >= s)
+ 		svm_range_unmap_from_gpus(prange, s, l, trigger);
+-	svm_range_unmap_split(mm, prange, prange, start, last);
++	svm_range_unmap_split(prange, prange, start, last);
+ 
+ 	if (unmap_parent)
+ 		svm_range_add_list_work(svms, prange, mm, SVM_OP_UNMAP_RANGE);
+@@ -2570,8 +2570,6 @@ svm_range_cpu_invalidate_pagetables(struct mmu_interval_notifier *mni,
+ 
+ 	if (range->event == MMU_NOTIFY_RELEASE)
+ 		return true;
+-	if (!mmget_not_zero(mni->mm))
+-		return true;
+ 
+ 	start = mni->interval_tree.start;
+ 	last = mni->interval_tree.last;
+@@ -2598,7 +2596,6 @@ svm_range_cpu_invalidate_pagetables(struct mmu_interval_notifier *mni,
+ 	}
+ 
+ 	svm_range_unlock(prange);
+-	mmput(mni->mm);
+ 
+ 	return true;
+ }
+diff --git a/drivers/gpu/drm/drm_framebuffer.c b/drivers/gpu/drm/drm_framebuffer.c
+index b781601946db8b..63a70f285ccea5 100644
+--- a/drivers/gpu/drm/drm_framebuffer.c
++++ b/drivers/gpu/drm/drm_framebuffer.c
+@@ -862,11 +862,23 @@ EXPORT_SYMBOL_FOR_TESTS_ONLY(drm_framebuffer_free);
+ int drm_framebuffer_init(struct drm_device *dev, struct drm_framebuffer *fb,
+ 			 const struct drm_framebuffer_funcs *funcs)
+ {
++	unsigned int i;
+ 	int ret;
++	bool exists;
+ 
+ 	if (WARN_ON_ONCE(fb->dev != dev || !fb->format))
+ 		return -EINVAL;
+ 
++	for (i = 0; i < fb->format->num_planes; i++) {
++		if (drm_WARN_ON_ONCE(dev, fb->internal_flags & DRM_FRAMEBUFFER_HAS_HANDLE_REF(i)))
++			fb->internal_flags &= ~DRM_FRAMEBUFFER_HAS_HANDLE_REF(i);
++		if (fb->obj[i]) {
++			exists = drm_gem_object_handle_get_if_exists_unlocked(fb->obj[i]);
++			if (exists)
++				fb->internal_flags |= DRM_FRAMEBUFFER_HAS_HANDLE_REF(i);
++		}
++	}
++
+ 	INIT_LIST_HEAD(&fb->filp_head);
+ 
+ 	fb->funcs = funcs;
+@@ -875,7 +887,7 @@ int drm_framebuffer_init(struct drm_device *dev, struct drm_framebuffer *fb,
+ 	ret = __drm_mode_object_add(dev, &fb->base, DRM_MODE_OBJECT_FB,
+ 				    false, drm_framebuffer_free);
+ 	if (ret)
+-		goto out;
++		goto err;
+ 
+ 	mutex_lock(&dev->mode_config.fb_lock);
+ 	dev->mode_config.num_fb++;
+@@ -883,7 +895,16 @@ int drm_framebuffer_init(struct drm_device *dev, struct drm_framebuffer *fb,
+ 	mutex_unlock(&dev->mode_config.fb_lock);
+ 
+ 	drm_mode_object_register(dev, &fb->base);
+-out:
++
++	return 0;
++
++err:
++	for (i = 0; i < fb->format->num_planes; i++) {
++		if (fb->internal_flags & DRM_FRAMEBUFFER_HAS_HANDLE_REF(i)) {
++			drm_gem_object_handle_put_unlocked(fb->obj[i]);
++			fb->internal_flags &= ~DRM_FRAMEBUFFER_HAS_HANDLE_REF(i);
++		}
++	}
+ 	return ret;
+ }
+ EXPORT_SYMBOL(drm_framebuffer_init);
+@@ -960,6 +981,12 @@ EXPORT_SYMBOL(drm_framebuffer_unregister_private);
+ void drm_framebuffer_cleanup(struct drm_framebuffer *fb)
+ {
+ 	struct drm_device *dev = fb->dev;
++	unsigned int i;
++
++	for (i = 0; i < fb->format->num_planes; i++) {
++		if (fb->internal_flags & DRM_FRAMEBUFFER_HAS_HANDLE_REF(i))
++			drm_gem_object_handle_put_unlocked(fb->obj[i]);
++	}
+ 
+ 	mutex_lock(&dev->mode_config.fb_lock);
+ 	list_del(&fb->head);
+diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
+index c6240bab3fa558..84f5ea77f2df51 100644
+--- a/drivers/gpu/drm/drm_gem.c
++++ b/drivers/gpu/drm/drm_gem.c
+@@ -212,6 +212,46 @@ void drm_gem_private_object_fini(struct drm_gem_object *obj)
+ }
+ EXPORT_SYMBOL(drm_gem_private_object_fini);
+ 
++static void drm_gem_object_handle_get(struct drm_gem_object *obj)
++{
++	struct drm_device *dev = obj->dev;
++
++	drm_WARN_ON(dev, !mutex_is_locked(&dev->object_name_lock));
++
++	if (obj->handle_count++ == 0)
++		drm_gem_object_get(obj);
++}
++
++/**
++ * drm_gem_object_handle_get_if_exists_unlocked - acquire reference on user-space handle, if any
++ * @obj: GEM object
++ *
++ * Acquires a reference on the GEM buffer object's handle. Required to keep
++ * the GEM object alive. Call drm_gem_object_handle_put_unlocked()
++ * to release the reference. Does nothing if the buffer object has no handle.
++ *
++ * Returns:
++ * True if a handle exists, or false otherwise
++ */
++bool drm_gem_object_handle_get_if_exists_unlocked(struct drm_gem_object *obj)
++{
++	struct drm_device *dev = obj->dev;
++
++	guard(mutex)(&dev->object_name_lock);
++
++	/*
++	 * First ref taken during GEM object creation, if any. Some
++	 * drivers set up internal framebuffers with GEM objects that
++	 * do not have a GEM handle. Hence, this counter can be zero.
++	 */
++	if (!obj->handle_count)
++		return false;
++
++	drm_gem_object_handle_get(obj);
++
++	return true;
++}
++
+ /**
+  * drm_gem_object_handle_free - release resources bound to userspace handles
+  * @obj: GEM object to clean up.
+@@ -242,20 +282,26 @@ static void drm_gem_object_exported_dma_buf_free(struct drm_gem_object *obj)
+ 	}
+ }
+ 
+-static void
+-drm_gem_object_handle_put_unlocked(struct drm_gem_object *obj)
++/**
++ * drm_gem_object_handle_put_unlocked - releases reference on user-space handle
++ * @obj: GEM object
++ *
++ * Releases a reference on the GEM buffer object's handle. Possibly releases
++ * the GEM buffer object and associated dma-buf objects.
++ */
++void drm_gem_object_handle_put_unlocked(struct drm_gem_object *obj)
+ {
+ 	struct drm_device *dev = obj->dev;
+ 	bool final = false;
+ 
+-	if (WARN_ON(READ_ONCE(obj->handle_count) == 0))
++	if (drm_WARN_ON(dev, READ_ONCE(obj->handle_count) == 0))
+ 		return;
+ 
+ 	/*
+-	* Must bump handle count first as this may be the last
+-	* ref, in which case the object would disappear before we
+-	* checked for a name
+-	*/
++	 * Must bump handle count first as this may be the last
++	 * ref, in which case the object would disappear before
++	 * we checked for a name.
++	 */
+ 
+ 	mutex_lock(&dev->object_name_lock);
+ 	if (--obj->handle_count == 0) {
+@@ -279,6 +325,9 @@ drm_gem_object_release_handle(int id, void *ptr, void *data)
+ 	struct drm_file *file_priv = data;
+ 	struct drm_gem_object *obj = ptr;
+ 
++	if (drm_WARN_ON(obj->dev, !data))
++		return 0;
++
+ 	if (obj->funcs->close)
+ 		obj->funcs->close(obj, file_priv);
+ 
+@@ -389,8 +438,8 @@ drm_gem_handle_create_tail(struct drm_file *file_priv,
+ 	int ret;
+ 
+ 	WARN_ON(!mutex_is_locked(&dev->object_name_lock));
+-	if (obj->handle_count++ == 0)
+-		drm_gem_object_get(obj);
++
++	drm_gem_object_handle_get(obj);
+ 
+ 	/*
+ 	 * Get the user-visible handle using idr.  Preload and perform
+@@ -399,7 +448,7 @@ drm_gem_handle_create_tail(struct drm_file *file_priv,
+ 	idr_preload(GFP_KERNEL);
+ 	spin_lock(&file_priv->table_lock);
+ 
+-	ret = idr_alloc(&file_priv->object_idr, obj, 1, 0, GFP_NOWAIT);
++	ret = idr_alloc(&file_priv->object_idr, NULL, 1, 0, GFP_NOWAIT);
+ 
+ 	spin_unlock(&file_priv->table_lock);
+ 	idr_preload_end();
+@@ -420,6 +469,11 @@ drm_gem_handle_create_tail(struct drm_file *file_priv,
+ 			goto err_revoke;
+ 	}
+ 
++	/* mirrors drm_gem_handle_delete to avoid races */
++	spin_lock(&file_priv->table_lock);
++	obj = idr_replace(&file_priv->object_idr, obj, handle);
++	WARN_ON(obj != NULL);
++	spin_unlock(&file_priv->table_lock);
+ 	*handlep = handle;
+ 	return 0;
+ 
+diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
+index b2b6a8e49dda46..81fe2f81d814c8 100644
+--- a/drivers/gpu/drm/drm_internal.h
++++ b/drivers/gpu/drm/drm_internal.h
+@@ -161,6 +161,8 @@ void drm_sysfs_lease_event(struct drm_device *dev);
+ 
+ /* drm_gem.c */
+ int drm_gem_init(struct drm_device *dev);
++bool drm_gem_object_handle_get_if_exists_unlocked(struct drm_gem_object *obj);
++void drm_gem_object_handle_put_unlocked(struct drm_gem_object *obj);
+ int drm_gem_handle_create_tail(struct drm_file *file_priv,
+ 			       struct drm_gem_object *obj,
+ 			       u32 *handlep);
+diff --git a/drivers/gpu/drm/exynos/exynos7_drm_decon.c b/drivers/gpu/drm/exynos/exynos7_drm_decon.c
+index f91daefa9d2bc5..805aa28c172300 100644
+--- a/drivers/gpu/drm/exynos/exynos7_drm_decon.c
++++ b/drivers/gpu/drm/exynos/exynos7_drm_decon.c
+@@ -636,6 +636,10 @@ static irqreturn_t decon_irq_handler(int irq, void *dev_id)
+ 	if (!ctx->drm_dev)
+ 		goto out;
+ 
++	/* check if crtc and vblank have been initialized properly */
++	if (!drm_dev_has_vblank(ctx->drm_dev))
++		goto out;
++
+ 	if (!ctx->i80_if) {
+ 		drm_crtc_handle_vblank(&ctx->crtc->base);
+ 
+diff --git a/drivers/gpu/drm/imagination/pvr_power.c b/drivers/gpu/drm/imagination/pvr_power.c
+index ba7816fd28ec77..850b318605da4c 100644
+--- a/drivers/gpu/drm/imagination/pvr_power.c
++++ b/drivers/gpu/drm/imagination/pvr_power.c
+@@ -363,13 +363,13 @@ pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
+ 		if (!err) {
+ 			if (hard_reset) {
+ 				pvr_dev->fw_dev.booted = false;
+-				WARN_ON(pm_runtime_force_suspend(from_pvr_device(pvr_dev)->dev));
++				WARN_ON(pvr_power_device_suspend(from_pvr_device(pvr_dev)->dev));
+ 
+ 				err = pvr_fw_hard_reset(pvr_dev);
+ 				if (err)
+ 					goto err_device_lost;
+ 
+-				err = pm_runtime_force_resume(from_pvr_device(pvr_dev)->dev);
++				err = pvr_power_device_resume(from_pvr_device(pvr_dev)->dev);
+ 				pvr_dev->fw_dev.booted = true;
+ 				if (err)
+ 					goto err_device_lost;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+index 200e65a7cefc4e..c7869a639befce 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c
++++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+@@ -314,14 +314,10 @@ nouveau_debugfs_fini(struct nouveau_drm *drm)
+ 	drm->debugfs = NULL;
+ }
+ 
+-int
++void
+ nouveau_module_debugfs_init(void)
+ {
+ 	nouveau_debugfs_root = debugfs_create_dir("nouveau", NULL);
+-	if (IS_ERR(nouveau_debugfs_root))
+-		return PTR_ERR(nouveau_debugfs_root);
+-
+-	return 0;
+ }
+ 
+ void
+diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.h b/drivers/gpu/drm/nouveau/nouveau_debugfs.h
+index b7617b344ee26d..d05ed0e641c4aa 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_debugfs.h
++++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.h
+@@ -24,7 +24,7 @@ extern void nouveau_debugfs_fini(struct nouveau_drm *);
+ 
+ extern struct dentry *nouveau_debugfs_root;
+ 
+-int  nouveau_module_debugfs_init(void);
++void nouveau_module_debugfs_init(void);
+ void nouveau_module_debugfs_fini(void);
+ #else
+ static inline void
+@@ -42,10 +42,9 @@ nouveau_debugfs_fini(struct nouveau_drm *drm)
+ {
+ }
+ 
+-static inline int
++static inline void
+ nouveau_module_debugfs_init(void)
+ {
+-	return 0;
+ }
+ 
+ static inline void
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index c69139701056d7..ba18abd9643e04 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -1456,9 +1456,7 @@ nouveau_drm_init(void)
+ 	if (!nouveau_modeset)
+ 		return 0;
+ 
+-	ret = nouveau_module_debugfs_init();
+-	if (ret)
+-		return ret;
++	nouveau_module_debugfs_init();
+ 
+ #ifdef CONFIG_NOUVEAU_PLATFORM_DRIVER
+ 	platform_driver_register(&nouveau_platform_driver);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+index 53a4af00103926..d220c68bfe9149 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+@@ -1047,7 +1047,6 @@ r535_gsp_acpi_caps(acpi_handle handle, CAPS_METHOD_DATA *caps)
+ 	union acpi_object argv4 = {
+ 		.buffer.type    = ACPI_TYPE_BUFFER,
+ 		.buffer.length  = 4,
+-		.buffer.pointer = kmalloc(argv4.buffer.length, GFP_KERNEL),
+ 	}, *obj;
+ 
+ 	caps->status = 0xffff;
+@@ -1055,17 +1054,22 @@ r535_gsp_acpi_caps(acpi_handle handle, CAPS_METHOD_DATA *caps)
+ 	if (!acpi_check_dsm(handle, &NVOP_DSM_GUID, NVOP_DSM_REV, BIT_ULL(0x1a)))
+ 		return;
+ 
++	argv4.buffer.pointer = kmalloc(argv4.buffer.length, GFP_KERNEL);
++	if (!argv4.buffer.pointer)
++		return;
++
+ 	obj = acpi_evaluate_dsm(handle, &NVOP_DSM_GUID, NVOP_DSM_REV, 0x1a, &argv4);
+ 	if (!obj)
+-		return;
++		goto done;
+ 
+ 	if (WARN_ON(obj->type != ACPI_TYPE_BUFFER) ||
+ 	    WARN_ON(obj->buffer.length != 4))
+-		return;
++		goto done;
+ 
+ 	caps->status = 0;
+ 	caps->optimusCaps = *(u32 *)obj->buffer.pointer;
+ 
++done:
+ 	ACPI_FREE(obj);
+ 
+ 	kfree(argv4.buffer.pointer);
+@@ -1082,24 +1086,28 @@ r535_gsp_acpi_jt(acpi_handle handle, JT_METHOD_DATA *jt)
+ 	union acpi_object argv4 = {
+ 		.buffer.type    = ACPI_TYPE_BUFFER,
+ 		.buffer.length  = sizeof(caps),
+-		.buffer.pointer = kmalloc(argv4.buffer.length, GFP_KERNEL),
+ 	}, *obj;
+ 
+ 	jt->status = 0xffff;
+ 
++	argv4.buffer.pointer = kmalloc(argv4.buffer.length, GFP_KERNEL);
++	if (!argv4.buffer.pointer)
++		return;
++
+ 	obj = acpi_evaluate_dsm(handle, &JT_DSM_GUID, JT_DSM_REV, 0x1, &argv4);
+ 	if (!obj)
+-		return;
++		goto done;
+ 
+ 	if (WARN_ON(obj->type != ACPI_TYPE_BUFFER) ||
+ 	    WARN_ON(obj->buffer.length != 4))
+-		return;
++		goto done;
+ 
+ 	jt->status = 0;
+ 	jt->jtCaps = *(u32 *)obj->buffer.pointer;
+ 	jt->jtRevId = (jt->jtCaps & 0xfff00000) >> 20;
+ 	jt->bSBIOSCaps = 0;
+ 
++done:
+ 	ACPI_FREE(obj);
+ 
+ 	kfree(argv4.buffer.pointer);
+diff --git a/drivers/gpu/drm/tegra/nvdec.c b/drivers/gpu/drm/tegra/nvdec.c
+index 2d9a0a3f6c381d..7a38664e890e37 100644
+--- a/drivers/gpu/drm/tegra/nvdec.c
++++ b/drivers/gpu/drm/tegra/nvdec.c
+@@ -261,10 +261,8 @@ static int nvdec_load_falcon_firmware(struct nvdec *nvdec)
+ 
+ 	if (!client->group) {
+ 		virt = dma_alloc_coherent(nvdec->dev, size, &iova, GFP_KERNEL);
+-
+-		err = dma_mapping_error(nvdec->dev, iova);
+-		if (err < 0)
+-			return err;
++		if (!virt)
++			return -ENOMEM;
+ 	} else {
+ 		virt = tegra_drm_alloc(tegra, size, &iova);
+ 		if (IS_ERR(virt))
+diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
+index 15cab9bda17fb9..bd90404ea609ca 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
++++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
+@@ -254,6 +254,13 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
+ 	ret = dma_resv_trylock(&fbo->base.base._resv);
+ 	WARN_ON(!ret);
+ 
++	ret = dma_resv_reserve_fences(&fbo->base.base._resv, 1);
++	if (ret) {
++		dma_resv_unlock(&fbo->base.base._resv);
++		kfree(fbo);
++		return ret;
++	}
++
+ 	if (fbo->base.resource) {
+ 		ttm_resource_set_bo(fbo->base.resource, &fbo->base);
+ 		bo->resource = NULL;
+@@ -262,12 +269,6 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
+ 		fbo->base.bulk_move = NULL;
+ 	}
+ 
+-	ret = dma_resv_reserve_fences(&fbo->base.base._resv, 1);
+-	if (ret) {
+-		kfree(fbo);
+-		return ret;
+-	}
+-
+ 	ttm_bo_get(bo);
+ 	fbo->bo = bo;
+ 
+diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
+index 0c22b3a3665500..4a6c314429d6df 100644
+--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
++++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
+@@ -444,6 +444,7 @@ static int xe_alloc_pf_queue(struct xe_gt *gt, struct pf_queue *pf_queue)
+ #define PF_MULTIPLIER	8
+ 	pf_queue->num_dw =
+ 		(num_eus + XE_NUM_HW_ENGINES) * PF_MSG_LEN_DW * PF_MULTIPLIER;
++	pf_queue->num_dw = roundup_pow_of_two(pf_queue->num_dw);
+ #undef PF_MULTIPLIER
+ 
+ 	pf_queue->gt = gt;
+diff --git a/drivers/gpu/drm/xe/xe_lmtt.c b/drivers/gpu/drm/xe/xe_lmtt.c
+index 89393dcb53d9d6..1337cf49f1c201 100644
+--- a/drivers/gpu/drm/xe/xe_lmtt.c
++++ b/drivers/gpu/drm/xe/xe_lmtt.c
+@@ -78,6 +78,9 @@ static struct xe_lmtt_pt *lmtt_pt_alloc(struct xe_lmtt *lmtt, unsigned int level
+ 	}
+ 
+ 	lmtt_assert(lmtt, xe_bo_is_vram(bo));
++	lmtt_debug(lmtt, "level=%u addr=%#llx\n", level, (u64)xe_bo_main_addr(bo, XE_PAGE_SIZE));
++
++	xe_map_memset(lmtt_to_xe(lmtt), &bo->vmap, 0, 0, bo->size);
+ 
+ 	pt->level = level;
+ 	pt->bo = bo;
+@@ -91,6 +94,9 @@ static struct xe_lmtt_pt *lmtt_pt_alloc(struct xe_lmtt *lmtt, unsigned int level
+ 
+ static void lmtt_pt_free(struct xe_lmtt_pt *pt)
+ {
++	lmtt_debug(&pt->bo->tile->sriov.pf.lmtt, "level=%u addr=%llx\n",
++		   pt->level, (u64)xe_bo_main_addr(pt->bo, XE_PAGE_SIZE));
++
+ 	xe_bo_unpin_map_no_vm(pt->bo);
+ 	kfree(pt);
+ }
+@@ -226,9 +232,14 @@ static void lmtt_write_pte(struct xe_lmtt *lmtt, struct xe_lmtt_pt *pt,
+ 
+ 	switch (lmtt->ops->lmtt_pte_size(level)) {
+ 	case sizeof(u32):
++		lmtt_assert(lmtt, !overflows_type(pte, u32));
++		lmtt_assert(lmtt, !pte || !iosys_map_rd(&pt->bo->vmap, idx * sizeof(u32), u32));
++
+ 		xe_map_wr(lmtt_to_xe(lmtt), &pt->bo->vmap, idx * sizeof(u32), u32, pte);
+ 		break;
+ 	case sizeof(u64):
++		lmtt_assert(lmtt, !pte || !iosys_map_rd(&pt->bo->vmap, idx * sizeof(u64), u64));
++
+ 		xe_map_wr(lmtt_to_xe(lmtt), &pt->bo->vmap, idx * sizeof(u64), u64, pte);
+ 		break;
+ 	default:
+diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
+index 752f57fd515e1c..a32a0817525376 100644
+--- a/drivers/gpu/drm/xe/xe_migrate.c
++++ b/drivers/gpu/drm/xe/xe_migrate.c
+@@ -860,7 +860,7 @@ struct dma_fence *xe_migrate_copy(struct xe_migrate *m,
+ 		if (src_is_vram && xe_migrate_allow_identity(src_L0, &src_it))
+ 			xe_res_next(&src_it, src_L0);
+ 		else
+-			emit_pte(m, bb, src_L0_pt, src_is_vram, copy_system_ccs,
++			emit_pte(m, bb, src_L0_pt, src_is_vram, copy_system_ccs || use_comp_pat,
+ 				 &src_it, src_L0, src);
+ 
+ 		if (dst_is_vram && xe_migrate_allow_identity(src_L0, &dst_it))
+diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
+index 30f7ce06c89690..57547d02daa798 100644
+--- a/drivers/gpu/drm/xe/xe_pci.c
++++ b/drivers/gpu/drm/xe/xe_pci.c
+@@ -137,7 +137,6 @@ static const struct xe_graphics_desc graphics_xelpg = {
+ 	.has_asid = 1, \
+ 	.has_atomic_enable_pte_bit = 1, \
+ 	.has_flat_ccs = 1, \
+-	.has_indirect_ring_state = 1, \
+ 	.has_range_tlb_invalidation = 1, \
+ 	.has_usm = 1, \
+ 	.has_64bit_timestamp = 1, \
+diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
+index 7b6b754ad6eb78..20f8522bb04a55 100644
+--- a/drivers/gpu/drm/xe/xe_pm.c
++++ b/drivers/gpu/drm/xe/xe_pm.c
+@@ -135,7 +135,7 @@ int xe_pm_suspend(struct xe_device *xe)
+ 	/* FIXME: Super racey... */
+ 	err = xe_bo_evict_all(xe);
+ 	if (err)
+-		goto err_pxp;
++		goto err_display;
+ 
+ 	for_each_gt(gt, xe, id) {
+ 		err = xe_gt_suspend(gt);
+@@ -152,7 +152,6 @@ int xe_pm_suspend(struct xe_device *xe)
+ 
+ err_display:
+ 	xe_display_pm_resume(xe);
+-err_pxp:
+ 	xe_pxp_pm_resume(xe->pxp);
+ err:
+ 	drm_dbg(&xe->drm, "Device suspend failed %d\n", err);
+@@ -707,11 +706,13 @@ void xe_pm_assert_unbounded_bridge(struct xe_device *xe)
+ }
+ 
+ /**
+- * xe_pm_set_vram_threshold - Set a vram threshold for allowing/blocking D3Cold
++ * xe_pm_set_vram_threshold - Set a VRAM threshold for allowing/blocking D3Cold
+  * @xe: xe device instance
+- * @threshold: VRAM size in bites for the D3cold threshold
++ * @threshold: VRAM size in MiB for the D3cold threshold
+  *
+- * Returns 0 for success, negative error code otherwise.
++ * Return:
++ * * 0		- success
++ * * -EINVAL	- invalid argument
+  */
+ int xe_pm_set_vram_threshold(struct xe_device *xe, u32 threshold)
+ {
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 1062731315a2a5..b937af010e3545 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -311,6 +311,8 @@
+ #define USB_DEVICE_ID_ASUS_AK1D		0x1125
+ #define USB_DEVICE_ID_CHICONY_TOSHIBA_WT10A	0x1408
+ #define USB_DEVICE_ID_CHICONY_ACER_SWITCH12	0x1421
++#define USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA	0xb824
++#define USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA2	0xb82c
+ 
+ #define USB_VENDOR_ID_CHUNGHWAT		0x2247
+ #define USB_DEVICE_ID_CHUNGHWAT_MULTITOUCH	0x0001
+@@ -818,6 +820,7 @@
+ #define USB_DEVICE_ID_LENOVO_TPPRODOCK	0x6067
+ #define USB_DEVICE_ID_LENOVO_X1_COVER	0x6085
+ #define USB_DEVICE_ID_LENOVO_X1_TAB	0x60a3
++#define USB_DEVICE_ID_LENOVO_X1_TAB2	0x60a4
+ #define USB_DEVICE_ID_LENOVO_X1_TAB3	0x60b5
+ #define USB_DEVICE_ID_LENOVO_X12_TAB	0x60fe
+ #define USB_DEVICE_ID_LENOVO_X12_TAB2	0x61ae
+@@ -1524,4 +1527,7 @@
+ #define USB_VENDOR_ID_SIGNOTEC			0x2133
+ #define USB_DEVICE_ID_SIGNOTEC_VIEWSONIC_PD1011	0x0018
+ 
++#define USB_VENDOR_ID_SMARTLINKTECHNOLOGY              0x4c4a
++#define USB_DEVICE_ID_SMARTLINKTECHNOLOGY_4155         0x4155
++
+ #endif
+diff --git a/drivers/hid/hid-lenovo.c b/drivers/hid/hid-lenovo.c
+index a3c23a72316ac2..b3121fa7a72d73 100644
+--- a/drivers/hid/hid-lenovo.c
++++ b/drivers/hid/hid-lenovo.c
+@@ -492,6 +492,7 @@ static int lenovo_input_mapping(struct hid_device *hdev,
+ 	case USB_DEVICE_ID_LENOVO_X12_TAB:
+ 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
+ 	case USB_DEVICE_ID_LENOVO_X1_TAB:
++	case USB_DEVICE_ID_LENOVO_X1_TAB2:
+ 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
+ 		return lenovo_input_mapping_x1_tab_kbd(hdev, hi, field, usage, bit, max);
+ 	default:
+@@ -608,6 +609,7 @@ static ssize_t attr_fn_lock_store(struct device *dev,
+ 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
+ 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
+ 	case USB_DEVICE_ID_LENOVO_X1_TAB:
++	case USB_DEVICE_ID_LENOVO_X1_TAB2:
+ 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
+ 		ret = lenovo_led_set_tp10ubkbd(hdev, TP10UBKBD_FN_LOCK_LED, value);
+ 		if (ret)
+@@ -864,6 +866,7 @@ static int lenovo_event(struct hid_device *hdev, struct hid_field *field,
+ 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
+ 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
+ 	case USB_DEVICE_ID_LENOVO_X1_TAB:
++	case USB_DEVICE_ID_LENOVO_X1_TAB2:
+ 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
+ 		return lenovo_event_tp10ubkbd(hdev, field, usage, value);
+ 	default:
+@@ -1147,6 +1150,7 @@ static int lenovo_led_brightness_set(struct led_classdev *led_cdev,
+ 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
+ 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
+ 	case USB_DEVICE_ID_LENOVO_X1_TAB:
++	case USB_DEVICE_ID_LENOVO_X1_TAB2:
+ 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
+ 		ret = lenovo_led_set_tp10ubkbd(hdev, tp10ubkbd_led[led_nr], value);
+ 		break;
+@@ -1387,6 +1391,7 @@ static int lenovo_probe(struct hid_device *hdev,
+ 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
+ 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
+ 	case USB_DEVICE_ID_LENOVO_X1_TAB:
++	case USB_DEVICE_ID_LENOVO_X1_TAB2:
+ 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
+ 		ret = lenovo_probe_tp10ubkbd(hdev);
+ 		break;
+@@ -1476,6 +1481,7 @@ static void lenovo_remove(struct hid_device *hdev)
+ 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
+ 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
+ 	case USB_DEVICE_ID_LENOVO_X1_TAB:
++	case USB_DEVICE_ID_LENOVO_X1_TAB2:
+ 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
+ 		lenovo_remove_tp10ubkbd(hdev);
+ 		break;
+@@ -1526,6 +1532,8 @@ static const struct hid_device_id lenovo_devices[] = {
+ 	 */
+ 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+ 		     USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB) },
++	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
++		     USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB2) },
+ 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+ 		     USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB3) },
+ 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 7ac8e16e61581b..536a0a47518fa4 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -2122,12 +2122,18 @@ static const struct hid_device_id mt_devices[] = {
+ 		HID_DEVICE(BUS_I2C, HID_GROUP_GENERIC,
+ 			USB_VENDOR_ID_LG, I2C_DEVICE_ID_LG_7010) },
+ 
+-	/* Lenovo X1 TAB Gen 2 */
++	/* Lenovo X1 TAB Gen 1 */
+ 	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
+ 		HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8,
+ 			   USB_VENDOR_ID_LENOVO,
+ 			   USB_DEVICE_ID_LENOVO_X1_TAB) },
+ 
++	/* Lenovo X1 TAB Gen 2 */
++	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
++		HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8,
++			   USB_VENDOR_ID_LENOVO,
++			   USB_DEVICE_ID_LENOVO_X1_TAB2) },
++
+ 	/* Lenovo X1 TAB Gen 3 */
+ 	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
+ 		HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8,
+diff --git a/drivers/hid/hid-nintendo.c b/drivers/hid/hid-nintendo.c
+index 839d5bcd72b1ed..fb4985988615b3 100644
+--- a/drivers/hid/hid-nintendo.c
++++ b/drivers/hid/hid-nintendo.c
+@@ -308,6 +308,7 @@ enum joycon_ctlr_state {
+ 	JOYCON_CTLR_STATE_INIT,
+ 	JOYCON_CTLR_STATE_READ,
+ 	JOYCON_CTLR_STATE_REMOVED,
++	JOYCON_CTLR_STATE_SUSPENDED,
+ };
+ 
+ /* Controller type received as part of device info */
+@@ -2750,14 +2751,46 @@ static void nintendo_hid_remove(struct hid_device *hdev)
+ 
+ static int nintendo_hid_resume(struct hid_device *hdev)
+ {
+-	int ret = joycon_init(hdev);
++	struct joycon_ctlr *ctlr = hid_get_drvdata(hdev);
++	int ret;
++
++	hid_dbg(hdev, "resume\n");
++	if (!joycon_using_usb(ctlr)) {
++		hid_dbg(hdev, "no-op resume for bt ctlr\n");
++		ctlr->ctlr_state = JOYCON_CTLR_STATE_READ;
++		return 0;
++	}
+ 
++	ret = joycon_init(hdev);
+ 	if (ret)
+-		hid_err(hdev, "Failed to restore controller after resume");
++		hid_err(hdev,
++			"Failed to restore controller after resume: %d\n",
++			ret);
++	else
++		ctlr->ctlr_state = JOYCON_CTLR_STATE_READ;
+ 
+ 	return ret;
+ }
+ 
++static int nintendo_hid_suspend(struct hid_device *hdev, pm_message_t message)
++{
++	struct joycon_ctlr *ctlr = hid_get_drvdata(hdev);
++
++	hid_dbg(hdev, "suspend: %d\n", message.event);
++	/*
++	 * Avoid any blocking loops in suspend/resume transitions.
++	 *
++	 * joycon_enforce_subcmd_rate() can result in repeated retries if for
++	 * whatever reason the controller stops providing input reports.
++	 *
++	 * This has been observed with bluetooth controllers which lose
++	 * connectivity prior to suspend (but not long enough to result in
++	 * complete disconnection).
++	 */
++	ctlr->ctlr_state = JOYCON_CTLR_STATE_SUSPENDED;
++	return 0;
++}
++
+ #endif
+ 
+ static const struct hid_device_id nintendo_hid_devices[] = {
+@@ -2796,6 +2829,7 @@ static struct hid_driver nintendo_hid_driver = {
+ 
+ #ifdef CONFIG_PM
+ 	.resume		= nintendo_hid_resume,
++	.suspend	= nintendo_hid_suspend,
+ #endif
+ };
+ static int __init nintendo_init(void)
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 0731473cc9b1ad..06c27308e497bd 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -757,6 +757,8 @@ static const struct hid_device_id hid_ignore_list[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_AVERMEDIA, USB_DEVICE_ID_AVER_FM_MR800) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_AXENTIA, USB_DEVICE_ID_AXENTIA_FM_RADIO) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_BERKSHIRE, USB_DEVICE_ID_BERKSHIRE_PCWD) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA2) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CIDC, 0x0103) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CYGNAL, USB_DEVICE_ID_CYGNAL_RADIO_SI470X) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CYGNAL, USB_DEVICE_ID_CYGNAL_RADIO_SI4713) },
+@@ -904,6 +906,7 @@ static const struct hid_device_id hid_ignore_list[] = {
+ #endif
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_YEALINK, USB_DEVICE_ID_YEALINK_P1K_P4K_B2K) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_HP_5MP_CAMERA_5473) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_SMARTLINKTECHNOLOGY, USB_DEVICE_ID_SMARTLINKTECHNOLOGY_4155) },
+ 	{ }
+ };
+ 
+diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
+index 08bb3b031f2309..6539869759b9ed 100644
+--- a/drivers/irqchip/Kconfig
++++ b/drivers/irqchip/Kconfig
+@@ -74,6 +74,7 @@ config ARM_VIC_NR
+ 
+ config IRQ_MSI_LIB
+ 	bool
++	select GENERIC_MSI_IRQ
+ 
+ config ARMADA_370_XP_IRQ
+ 	bool
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index 45dd3d9f01a8e1..d946cf1ac24700 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -2357,8 +2357,7 @@ static int bitmap_get_stats(void *data, struct md_bitmap_stats *stats)
+ 
+ 	if (!bitmap)
+ 		return -ENOENT;
+-	if (!bitmap->mddev->bitmap_info.external &&
+-	    !bitmap->storage.sb_page)
++	if (!bitmap->storage.sb_page)
+ 		return -EINVAL;
+ 	sb = kmap_local_page(bitmap->storage.sb_page);
+ 	stats->sync_size = le64_to_cpu(sb->sync_size);
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 1fe645e6300121..3d99a4e38e1c62 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -1399,7 +1399,7 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
+ 	}
+ 	read_bio = bio_alloc_clone(mirror->rdev->bdev, bio, gfp,
+ 				   &mddev->bio_set);
+-
++	read_bio->bi_opf &= ~REQ_NOWAIT;
+ 	r1_bio->bios[rdisk] = read_bio;
+ 
+ 	read_bio->bi_iter.bi_sector = r1_bio->sector +
+@@ -1649,6 +1649,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ 				wait_for_serialization(rdev, r1_bio);
+ 		}
+ 
++		mbio->bi_opf &= ~REQ_NOWAIT;
+ 		r1_bio->bios[i] = mbio;
+ 
+ 		mbio->bi_iter.bi_sector	= (r1_bio->sector + rdev->data_offset);
+@@ -3431,6 +3432,7 @@ static int raid1_reshape(struct mddev *mddev)
+ 	/* ok, everything is stopped */
+ 	oldpool = conf->r1bio_pool;
+ 	conf->r1bio_pool = newpool;
++	init_waitqueue_head(&conf->r1bio_pool.wait);
+ 
+ 	for (d = d2 = 0; d < conf->raid_disks; d++) {
+ 		struct md_rdev *rdev = conf->mirrors[d].rdev;
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 54320a887ecc50..6a55374a6ba37d 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -1182,8 +1182,11 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
+ 		}
+ 	}
+ 
+-	if (!regular_request_wait(mddev, conf, bio, r10_bio->sectors))
++	if (!regular_request_wait(mddev, conf, bio, r10_bio->sectors)) {
++		raid_end_bio_io(r10_bio);
+ 		return;
++	}
++
+ 	rdev = read_balance(conf, r10_bio, &max_sectors);
+ 	if (!rdev) {
+ 		if (err_rdev) {
+@@ -1221,6 +1224,7 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
+ 		r10_bio->master_bio = bio;
+ 	}
+ 	read_bio = bio_alloc_clone(rdev->bdev, bio, gfp, &mddev->bio_set);
++	read_bio->bi_opf &= ~REQ_NOWAIT;
+ 
+ 	r10_bio->devs[slot].bio = read_bio;
+ 	r10_bio->devs[slot].rdev = rdev;
+@@ -1256,6 +1260,7 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
+ 			     conf->mirrors[devnum].rdev;
+ 
+ 	mbio = bio_alloc_clone(rdev->bdev, bio, GFP_NOIO, &mddev->bio_set);
++	mbio->bi_opf &= ~REQ_NOWAIT;
+ 	if (replacement)
+ 		r10_bio->devs[n_copy].repl_bio = mbio;
+ 	else
+@@ -1370,8 +1375,11 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
+ 	}
+ 
+ 	sectors = r10_bio->sectors;
+-	if (!regular_request_wait(mddev, conf, bio, sectors))
++	if (!regular_request_wait(mddev, conf, bio, sectors)) {
++		raid_end_bio_io(r10_bio);
+ 		return;
++	}
++
+ 	if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
+ 	    (mddev->reshape_backwards
+ 	     ? (bio->bi_iter.bi_sector < conf->reshape_safe &&
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index c2c116ce1087c0..782131de5ef767 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -665,7 +665,7 @@ static int m_can_handle_lost_msg(struct net_device *dev)
+ 	struct can_frame *frame;
+ 	u32 timestamp = 0;
+ 
+-	netdev_err(dev, "msg lost in rxf0\n");
++	netdev_dbg(dev, "msg lost in rxf0\n");
+ 
+ 	stats->rx_errors++;
+ 	stats->rx_over_errors++;
+diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c
+index af28a9300a15c7..9e70cf14c8c70d 100644
+--- a/drivers/net/ethernet/airoha/airoha_eth.c
++++ b/drivers/net/ethernet/airoha/airoha_eth.c
+@@ -2628,6 +2628,7 @@ static int airoha_probe(struct platform_device *pdev)
+ error_napi_stop:
+ 	for (i = 0; i < ARRAY_SIZE(eth->qdma); i++)
+ 		airoha_qdma_stop_napi(&eth->qdma[i]);
++	airoha_ppe_deinit(eth);
+ error_hw_cleanup:
+ 	for (i = 0; i < ARRAY_SIZE(eth->qdma); i++)
+ 		airoha_hw_cleanup(&eth->qdma[i]);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 9de6eefad97913..d66519ce57af08 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -11565,11 +11565,9 @@ static void bnxt_free_irq(struct bnxt *bp)
+ 
+ static int bnxt_request_irq(struct bnxt *bp)
+ {
++	struct cpu_rmap *rmap = NULL;
+ 	int i, j, rc = 0;
+ 	unsigned long flags = 0;
+-#ifdef CONFIG_RFS_ACCEL
+-	struct cpu_rmap *rmap;
+-#endif
+ 
+ 	rc = bnxt_setup_int_mode(bp);
+ 	if (rc) {
+@@ -11590,15 +11588,15 @@ static int bnxt_request_irq(struct bnxt *bp)
+ 		int map_idx = bnxt_cp_num_to_irq_num(bp, i);
+ 		struct bnxt_irq *irq = &bp->irq_tbl[map_idx];
+ 
+-#ifdef CONFIG_RFS_ACCEL
+-		if (rmap && bp->bnapi[i]->rx_ring) {
++		if (IS_ENABLED(CONFIG_RFS_ACCEL) &&
++		    rmap && bp->bnapi[i]->rx_ring) {
+ 			rc = irq_cpu_rmap_add(rmap, irq->vector);
+ 			if (rc)
+ 				netdev_warn(bp->dev, "failed adding irq rmap for ring %d\n",
+ 					    j);
+ 			j++;
+ 		}
+-#endif
++
+ 		rc = request_irq(irq->vector, irq->handler, flags, irq->name,
+ 				 bp->bnapi[i]);
+ 		if (rc)
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.c
+index a000d3f630bd3b..187695af6611f3 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.c
+@@ -368,23 +368,27 @@ static u32 bnxt_get_ctx_coredump(struct bnxt *bp, void *buf, u32 offset,
+ 		if (!ctxm->mem_valid || !seg_id)
+ 			continue;
+ 
+-		if (trace)
++		if (trace) {
+ 			extra_hlen = BNXT_SEG_RCD_LEN;
++			if (buf) {
++				u16 trace_type = bnxt_bstore_to_trace[type];
++
++				bnxt_fill_drv_seg_record(bp, &record, ctxm,
++							 trace_type);
++			}
++		}
++
+ 		if (buf)
+ 			data = buf + BNXT_SEG_HDR_LEN + extra_hlen;
++
+ 		seg_len = bnxt_copy_ctx_mem(bp, ctxm, data, 0) + extra_hlen;
+ 		if (buf) {
+ 			bnxt_fill_coredump_seg_hdr(bp, &seg_hdr, NULL, seg_len,
+ 						   0, 0, 0, comp_id, seg_id);
+ 			memcpy(buf, &seg_hdr, BNXT_SEG_HDR_LEN);
+ 			buf += BNXT_SEG_HDR_LEN;
+-			if (trace) {
+-				u16 trace_type = bnxt_bstore_to_trace[type];
+-
+-				bnxt_fill_drv_seg_record(bp, &record, ctxm,
+-							 trace_type);
++			if (trace)
+ 				memcpy(buf, &record, BNXT_SEG_RCD_LEN);
+-			}
+ 			buf += seg_len;
+ 		}
+ 		len += BNXT_SEG_HDR_LEN + seg_len;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
+index 0dbb880a7aa0e7..71e14be2507e1e 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
+@@ -487,7 +487,9 @@ static int bnxt_ets_validate(struct bnxt *bp, struct ieee_ets *ets, u8 *tc)
+ 
+ 		if ((ets->tc_tx_bw[i] || ets->tc_tsa[i]) && i > bp->max_tc)
+ 			return -EINVAL;
++	}
+ 
++	for (i = 0; i < max_tc; i++) {
+ 		switch (ets->tc_tsa[i]) {
+ 		case IEEE_8021QAZ_TSA_STRICT:
+ 			break;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+index e675611777b528..aedd9e145ff9c4 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+@@ -115,7 +115,7 @@ static void __bnxt_xmit_xdp_redirect(struct bnxt *bp,
+ 	tx_buf->action = XDP_REDIRECT;
+ 	tx_buf->xdpf = xdpf;
+ 	dma_unmap_addr_set(tx_buf, mapping, mapping);
+-	dma_unmap_len_set(tx_buf, len, 0);
++	dma_unmap_len_set(tx_buf, len, len);
+ }
+ 
+ void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
+index a189038d88df03..246ddce753f929 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.h
++++ b/drivers/net/ethernet/ibm/ibmvnic.h
+@@ -211,7 +211,6 @@ struct ibmvnic_statistics {
+ 	u8 reserved[72];
+ } __packed __aligned(8);
+ 
+-#define NUM_TX_STATS 3
+ struct ibmvnic_tx_queue_stats {
+ 	u64 batched_packets;
+ 	u64 direct_packets;
+@@ -219,13 +218,18 @@ struct ibmvnic_tx_queue_stats {
+ 	u64 dropped_packets;
+ };
+ 
+-#define NUM_RX_STATS 3
++#define NUM_TX_STATS \
++	(sizeof(struct ibmvnic_tx_queue_stats) / sizeof(u64))
++
+ struct ibmvnic_rx_queue_stats {
+ 	u64 packets;
+ 	u64 bytes;
+ 	u64 interrupts;
+ };
+ 
++#define NUM_RX_STATS \
++	(sizeof(struct ibmvnic_rx_queue_stats) / sizeof(u64))
++
+ struct ibmvnic_acl_buffer {
+ 	__be32 len;
+ 	__be32 version;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+index b5c3a2a9d2a59b..9560fcba643f50 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+@@ -18,7 +18,8 @@ enum {
+ 
+ enum {
+ 	MLX5E_TC_PRIO = 0,
+-	MLX5E_NIC_PRIO
++	MLX5E_PROMISC_PRIO,
++	MLX5E_NIC_PRIO,
+ };
+ 
+ struct mlx5e_flow_table {
+@@ -68,9 +69,13 @@ struct mlx5e_l2_table {
+ 				 MLX5_HASH_FIELD_SEL_DST_IP   |\
+ 				 MLX5_HASH_FIELD_SEL_IPSEC_SPI)
+ 
+-/* NIC prio FTS */
++/* NIC promisc FT level */
+ enum {
+ 	MLX5E_PROMISC_FT_LEVEL,
++};
++
++/* NIC prio FTS */
++enum {
+ 	MLX5E_VLAN_FT_LEVEL,
+ 	MLX5E_L2_FT_LEVEL,
+ 	MLX5E_TTC_FT_LEVEL,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dim.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dim.c
+index 298bb74ec5e942..d1d629697e285f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dim.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dim.c
+@@ -113,7 +113,7 @@ int mlx5e_dim_rx_change(struct mlx5e_rq *rq, bool enable)
+ 		__set_bit(MLX5E_RQ_STATE_DIM, &rq->state);
+ 	} else {
+ 		__clear_bit(MLX5E_RQ_STATE_DIM, &rq->state);
+-
++		synchronize_net();
+ 		mlx5e_dim_disable(rq->dim);
+ 		rq->dim = NULL;
+ 	}
+@@ -140,7 +140,7 @@ int mlx5e_dim_tx_change(struct mlx5e_txqsq *sq, bool enable)
+ 		__set_bit(MLX5E_SQ_STATE_DIM, &sq->state);
+ 	} else {
+ 		__clear_bit(MLX5E_SQ_STATE_DIM, &sq->state);
+-
++		synchronize_net();
+ 		mlx5e_dim_disable(sq->dim);
+ 		sq->dim = NULL;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+index 05058710d2c79d..537e732085b22a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+@@ -776,7 +776,7 @@ static int mlx5e_create_promisc_table(struct mlx5e_flow_steering *fs)
+ 	ft_attr.max_fte = MLX5E_PROMISC_TABLE_SIZE;
+ 	ft_attr.autogroup.max_num_groups = 1;
+ 	ft_attr.level = MLX5E_PROMISC_FT_LEVEL;
+-	ft_attr.prio = MLX5E_NIC_PRIO;
++	ft_attr.prio = MLX5E_PROMISC_PRIO;
+ 
+ 	ft->t = mlx5_create_auto_grouped_flow_table(fs->ns, &ft_attr);
+ 	if (IS_ERR(ft->t)) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+index b6ae384396b335..ad9f6fca9b6a20 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+@@ -1076,6 +1076,7 @@ static int esw_qos_vports_node_update_parent(struct mlx5_esw_sched_node *node,
+ 		return err;
+ 	}
+ 	esw_qos_node_set_parent(node, parent);
++	node->bw_share = 0;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 445301ea70426d..53c4eba9867df1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -113,13 +113,16 @@
+ #define ETHTOOL_PRIO_NUM_LEVELS 1
+ #define ETHTOOL_NUM_PRIOS 11
+ #define ETHTOOL_MIN_LEVEL (KERNEL_MIN_LEVEL + ETHTOOL_NUM_PRIOS)
+-/* Promiscuous, Vlan, mac, ttc, inner ttc, {UDP/ANY/aRFS/accel/{esp, esp_err}}, IPsec policy,
++/* Vlan, mac, ttc, inner ttc, {UDP/ANY/aRFS/accel/{esp, esp_err}}, IPsec policy,
+  * {IPsec RoCE MPV,Alias table},IPsec RoCE policy
+  */
+-#define KERNEL_NIC_PRIO_NUM_LEVELS 11
++#define KERNEL_NIC_PRIO_NUM_LEVELS 10
+ #define KERNEL_NIC_NUM_PRIOS 1
+-/* One more level for tc */
+-#define KERNEL_MIN_LEVEL (KERNEL_NIC_PRIO_NUM_LEVELS + 1)
++/* One more level for tc, and one more for promisc */
++#define KERNEL_MIN_LEVEL (KERNEL_NIC_PRIO_NUM_LEVELS + 2)
++
++#define KERNEL_NIC_PROMISC_NUM_PRIOS 1
++#define KERNEL_NIC_PROMISC_NUM_LEVELS 1
+ 
+ #define KERNEL_NIC_TC_NUM_PRIOS  1
+ #define KERNEL_NIC_TC_NUM_LEVELS 3
+@@ -187,6 +190,8 @@ static struct init_tree_node {
+ 			   ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_DEF,
+ 				  ADD_MULTIPLE_PRIO(KERNEL_NIC_TC_NUM_PRIOS,
+ 						    KERNEL_NIC_TC_NUM_LEVELS),
++				  ADD_MULTIPLE_PRIO(KERNEL_NIC_PROMISC_NUM_PRIOS,
++						    KERNEL_NIC_PROMISC_NUM_LEVELS),
+ 				  ADD_MULTIPLE_PRIO(KERNEL_NIC_NUM_PRIOS,
+ 						    KERNEL_NIC_PRIO_NUM_LEVELS))),
+ 		  ADD_PRIO(0, BY_PASS_MIN_LEVEL, 0, FS_CHAINING_CAPS,
+diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
+index 4ffaf758888527..3dc94349820d22 100644
+--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
++++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
+@@ -31,6 +31,9 @@ static void mana_gd_init_pf_regs(struct pci_dev *pdev)
+ 	gc->db_page_base = gc->bar0_va +
+ 				mana_gd_r64(gc, GDMA_PF_REG_DB_PAGE_OFF);
+ 
++	gc->phys_db_page_base = gc->bar0_pa +
++				mana_gd_r64(gc, GDMA_PF_REG_DB_PAGE_OFF);
++
+ 	sriov_base_off = mana_gd_r64(gc, GDMA_SRIOV_REG_CFG_BASE_OFF);
+ 
+ 	sriov_base_va = gc->bar0_va + sriov_base_off;
+diff --git a/drivers/net/ethernet/renesas/rtsn.c b/drivers/net/ethernet/renesas/rtsn.c
+index 6b3f7fca8d1572..05c4b6c8c9c3d0 100644
+--- a/drivers/net/ethernet/renesas/rtsn.c
++++ b/drivers/net/ethernet/renesas/rtsn.c
+@@ -1259,7 +1259,12 @@ static int rtsn_probe(struct platform_device *pdev)
+ 	priv = netdev_priv(ndev);
+ 	priv->pdev = pdev;
+ 	priv->ndev = ndev;
++
+ 	priv->ptp_priv = rcar_gen4_ptp_alloc(pdev);
++	if (!priv->ptp_priv) {
++		ret = -ENOMEM;
++		goto error_free;
++	}
+ 
+ 	spin_lock_init(&priv->lock);
+ 	platform_set_drvdata(pdev, priv);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
+index 7840bc403788ef..5dcc95bc0ad28b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
+@@ -364,19 +364,17 @@ static int dwxgmac2_dma_interrupt(struct stmmac_priv *priv,
+ 	}
+ 
+ 	/* TX/RX NORMAL interrupts */
+-	if (likely(intr_status & XGMAC_NIS)) {
+-		if (likely(intr_status & XGMAC_RI)) {
+-			u64_stats_update_begin(&stats->syncp);
+-			u64_stats_inc(&stats->rx_normal_irq_n[chan]);
+-			u64_stats_update_end(&stats->syncp);
+-			ret |= handle_rx;
+-		}
+-		if (likely(intr_status & (XGMAC_TI | XGMAC_TBU))) {
+-			u64_stats_update_begin(&stats->syncp);
+-			u64_stats_inc(&stats->tx_normal_irq_n[chan]);
+-			u64_stats_update_end(&stats->syncp);
+-			ret |= handle_tx;
+-		}
++	if (likely(intr_status & XGMAC_RI)) {
++		u64_stats_update_begin(&stats->syncp);
++		u64_stats_inc(&stats->rx_normal_irq_n[chan]);
++		u64_stats_update_end(&stats->syncp);
++		ret |= handle_rx;
++	}
++	if (likely(intr_status & (XGMAC_TI | XGMAC_TBU))) {
++		u64_stats_update_begin(&stats->syncp);
++		u64_stats_inc(&stats->tx_normal_irq_n[chan]);
++		u64_stats_update_end(&stats->syncp);
++		ret |= handle_tx;
+ 	}
+ 
+ 	/* Clear interrupts */
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 4cec05e0e3d9bb..10e8a8309a0345 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -856,8 +856,6 @@ static struct sk_buff *am65_cpsw_build_skb(void *page_addr,
+ {
+ 	struct sk_buff *skb;
+ 
+-	len += AM65_CPSW_HEADROOM;
+-
+ 	skb = build_skb(page_addr, len);
+ 	if (unlikely(!skb))
+ 		return NULL;
+@@ -1344,7 +1342,7 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_rx_flow *flow,
+ 	}
+ 
+ 	skb = am65_cpsw_build_skb(page_addr, ndev,
+-				  AM65_CPSW_MAX_PACKET_SIZE, headroom);
++				  PAGE_SIZE, headroom);
+ 	if (unlikely(!skb)) {
+ 		new_page = page;
+ 		goto requeue;
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_lib.c b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
+index f77bdf732f5f37..4eac40c4e8514e 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_lib.c
++++ b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
+@@ -1680,7 +1680,7 @@ static void wx_set_num_queues(struct wx *wx)
+  */
+ static int wx_acquire_msix_vectors(struct wx *wx)
+ {
+-	struct irq_affinity affd = { .pre_vectors = 1 };
++	struct irq_affinity affd = { .post_vectors = 1 };
+ 	int nvecs, i;
+ 
+ 	/* We start by asking for one vector per queue pair */
+@@ -1717,16 +1717,17 @@ static int wx_acquire_msix_vectors(struct wx *wx)
+ 		return nvecs;
+ 	}
+ 
+-	wx->msix_entry->entry = 0;
+-	wx->msix_entry->vector = pci_irq_vector(wx->pdev, 0);
+ 	nvecs -= 1;
+ 	for (i = 0; i < nvecs; i++) {
+ 		wx->msix_q_entries[i].entry = i;
+-		wx->msix_q_entries[i].vector = pci_irq_vector(wx->pdev, i + 1);
++		wx->msix_q_entries[i].vector = pci_irq_vector(wx->pdev, i);
+ 	}
+ 
+ 	wx->num_q_vectors = nvecs;
+ 
++	wx->msix_entry->entry = nvecs;
++	wx->msix_entry->vector = pci_irq_vector(wx->pdev, nvecs);
++
+ 	return 0;
+ }
+ 
+@@ -2182,7 +2183,6 @@ static void wx_set_ivar(struct wx *wx, s8 direction,
+ 		wr32(wx, WX_PX_MISC_IVAR, ivar);
+ 	} else {
+ 		/* tx or rx causes */
+-		msix_vector += 1; /* offset for queue vectors */
+ 		msix_vector |= WX_PX_IVAR_ALLOC_VAL;
+ 		index = ((16 * (queue & 1)) + (8 * direction));
+ 		ivar = rd32(wx, WX_PX_IVAR(queue >> 1));
+@@ -2220,7 +2220,7 @@ void wx_write_eitr(struct wx_q_vector *q_vector)
+ 
+ 	itr_reg |= WX_PX_ITR_CNT_WDIS;
+ 
+-	wr32(wx, WX_PX_ITR(v_idx + 1), itr_reg);
++	wr32(wx, WX_PX_ITR(v_idx), itr_reg);
+ }
+ 
+ /**
+@@ -2266,9 +2266,9 @@ void wx_configure_vectors(struct wx *wx)
+ 		wx_write_eitr(q_vector);
+ 	}
+ 
+-	wx_set_ivar(wx, -1, 0, 0);
++	wx_set_ivar(wx, -1, 0, v_idx);
+ 	if (pdev->msix_enabled)
+-		wr32(wx, WX_PX_ITR(0), 1950);
++		wr32(wx, WX_PX_ITR(v_idx), 1950);
+ }
+ EXPORT_SYMBOL(wx_configure_vectors);
+ 
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_type.h b/drivers/net/ethernet/wangxun/libwx/wx_type.h
+index 4c545b2aa997cb..3a9c226567f801 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_type.h
++++ b/drivers/net/ethernet/wangxun/libwx/wx_type.h
+@@ -1242,7 +1242,7 @@ struct wx {
+ };
+ 
+ #define WX_INTR_ALL (~0ULL)
+-#define WX_INTR_Q(i) BIT((i) + 1)
++#define WX_INTR_Q(i) BIT((i))
+ 
+ /* register operations */
+ #define wr32(a, reg, value)	writel((value), ((a)->hw_addr + (reg)))
+diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c b/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
+index 91b3055a5a9f59..d37fd9004b410c 100644
+--- a/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
++++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
+@@ -155,7 +155,7 @@ static void ngbe_irq_enable(struct wx *wx, bool queues)
+ 	if (queues)
+ 		wx_intr_enable(wx, NGBE_INTR_ALL);
+ 	else
+-		wx_intr_enable(wx, NGBE_INTR_MISC);
++		wx_intr_enable(wx, NGBE_INTR_MISC(wx));
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_type.h b/drivers/net/ethernet/wangxun/ngbe/ngbe_type.h
+index 992adbb98c7d12..9e277a9330c94b 100644
+--- a/drivers/net/ethernet/wangxun/ngbe/ngbe_type.h
++++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_type.h
+@@ -85,7 +85,7 @@
+ #define NGBE_PX_MISC_IC_TIMESYNC		BIT(11) /* time sync */
+ 
+ #define NGBE_INTR_ALL				0x1FF
+-#define NGBE_INTR_MISC				BIT(0)
++#define NGBE_INTR_MISC(A)			BIT((A)->num_q_vectors)
+ 
+ #define NGBE_PHY_CONFIG(reg_offset)		(0x14000 + ((reg_offset) * 4))
+ #define NGBE_CFG_LAN_SPEED			0x14440
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c
+index 44547e69c026fe..07a6e7b5460711 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c
+@@ -21,7 +21,7 @@ void txgbe_irq_enable(struct wx *wx, bool queues)
+ 	wr32(wx, WX_PX_MISC_IEN, TXGBE_PX_MISC_IEN_MASK);
+ 
+ 	/* unmask interrupt */
+-	wx_intr_enable(wx, TXGBE_INTR_MISC);
++	wx_intr_enable(wx, TXGBE_INTR_MISC(wx));
+ 	if (queues)
+ 		wx_intr_enable(wx, TXGBE_INTR_QALL(wx));
+ }
+@@ -147,7 +147,7 @@ static irqreturn_t txgbe_misc_irq_thread_fn(int irq, void *data)
+ 		nhandled++;
+ 	}
+ 
+-	wx_intr_enable(wx, TXGBE_INTR_MISC);
++	wx_intr_enable(wx, TXGBE_INTR_MISC(wx));
+ 	return (nhandled > 0 ? IRQ_HANDLED : IRQ_NONE);
+ }
+ 
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+index f423012dec2256..b91f2f7bd1fb5d 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+@@ -280,8 +280,8 @@ struct txgbe_fdir_filter {
+ #define TXGBE_DEFAULT_RX_WORK           128
+ #endif
+ 
+-#define TXGBE_INTR_MISC       BIT(0)
+-#define TXGBE_INTR_QALL(A)    GENMASK((A)->num_q_vectors, 1)
++#define TXGBE_INTR_MISC(A)    BIT((A)->num_q_vectors)
++#define TXGBE_INTR_QALL(A)    (TXGBE_INTR_MISC(A) - 1)
+ 
+ #define TXGBE_MAX_EITR        GENMASK(11, 3)
+ 
+diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c
+index edb36ff07a0c6f..6f82203a414cd8 100644
+--- a/drivers/net/ethernet/xilinx/ll_temac_main.c
++++ b/drivers/net/ethernet/xilinx/ll_temac_main.c
+@@ -1309,7 +1309,7 @@ ll_temac_ethtools_set_ringparam(struct net_device *ndev,
+ 	if (ering->rx_pending > RX_BD_NUM_MAX ||
+ 	    ering->rx_mini_pending ||
+ 	    ering->rx_jumbo_pending ||
+-	    ering->rx_pending > TX_BD_NUM_MAX)
++	    ering->tx_pending > TX_BD_NUM_MAX)
+ 		return -EINVAL;
+ 
+ 	if (netif_running(ndev))
+diff --git a/drivers/net/phy/microchip.c b/drivers/net/phy/microchip.c
+index 93de88c1c8fd58..55822d36889c34 100644
+--- a/drivers/net/phy/microchip.c
++++ b/drivers/net/phy/microchip.c
+@@ -332,7 +332,7 @@ static void lan88xx_link_change_notify(struct phy_device *phydev)
+ 	 * As workaround, set to 10 before setting to 100
+ 	 * at forced 100 F/H mode.
+ 	 */
+-	if (!phydev->autoneg && phydev->speed == 100) {
++	if (phydev->state == PHY_NOLINK && !phydev->autoneg && phydev->speed == 100) {
+ 		/* disable phy interrupt */
+ 		temp = phy_read(phydev, LAN88XX_INT_MASK);
+ 		temp &= ~LAN88XX_INT_MASK_MDINTPIN_EN_;
+@@ -486,6 +486,7 @@ static struct phy_driver microchip_phy_driver[] = {
+ 	.config_init	= lan88xx_config_init,
+ 	.config_aneg	= lan88xx_config_aneg,
+ 	.link_change_notify = lan88xx_link_change_notify,
++	.soft_reset	= genphy_soft_reset,
+ 
+ 	/* Interrupt handling is broken, do not define related
+ 	 * functions to force polling.
+diff --git a/drivers/net/phy/qcom/at803x.c b/drivers/net/phy/qcom/at803x.c
+index 26350b962890b0..8f26e395e39f9a 100644
+--- a/drivers/net/phy/qcom/at803x.c
++++ b/drivers/net/phy/qcom/at803x.c
+@@ -26,9 +26,6 @@
+ 
+ #define AT803X_LED_CONTROL			0x18
+ 
+-#define AT803X_PHY_MMD3_WOL_CTRL		0x8012
+-#define AT803X_WOL_EN				BIT(5)
+-
+ #define AT803X_REG_CHIP_CONFIG			0x1f
+ #define AT803X_BT_BX_REG_SEL			0x8000
+ 
+@@ -866,30 +863,6 @@ static int at8031_config_init(struct phy_device *phydev)
+ 	return at803x_config_init(phydev);
+ }
+ 
+-static int at8031_set_wol(struct phy_device *phydev,
+-			  struct ethtool_wolinfo *wol)
+-{
+-	int ret;
+-
+-	/* First setup MAC address and enable WOL interrupt */
+-	ret = at803x_set_wol(phydev, wol);
+-	if (ret)
+-		return ret;
+-
+-	if (wol->wolopts & WAKE_MAGIC)
+-		/* Enable WOL function for 1588 */
+-		ret = phy_modify_mmd(phydev, MDIO_MMD_PCS,
+-				     AT803X_PHY_MMD3_WOL_CTRL,
+-				     0, AT803X_WOL_EN);
+-	else
+-		/* Disable WoL function for 1588 */
+-		ret = phy_modify_mmd(phydev, MDIO_MMD_PCS,
+-				     AT803X_PHY_MMD3_WOL_CTRL,
+-				     AT803X_WOL_EN, 0);
+-
+-	return ret;
+-}
+-
+ static int at8031_config_intr(struct phy_device *phydev)
+ {
+ 	struct at803x_priv *priv = phydev->priv;
+diff --git a/drivers/net/phy/qcom/qca808x.c b/drivers/net/phy/qcom/qca808x.c
+index 71498c518f0feb..6de16c0eaa0897 100644
+--- a/drivers/net/phy/qcom/qca808x.c
++++ b/drivers/net/phy/qcom/qca808x.c
+@@ -633,7 +633,7 @@ static struct phy_driver qca808x_driver[] = {
+ 	.handle_interrupt	= at803x_handle_interrupt,
+ 	.get_tunable		= at803x_get_tunable,
+ 	.set_tunable		= at803x_set_tunable,
+-	.set_wol		= at803x_set_wol,
++	.set_wol		= at8031_set_wol,
+ 	.get_wol		= at803x_get_wol,
+ 	.get_features		= qca808x_get_features,
+ 	.config_aneg		= qca808x_config_aneg,
+diff --git a/drivers/net/phy/qcom/qcom-phy-lib.c b/drivers/net/phy/qcom/qcom-phy-lib.c
+index d28815ef56bbf3..af7d0d8e81be5c 100644
+--- a/drivers/net/phy/qcom/qcom-phy-lib.c
++++ b/drivers/net/phy/qcom/qcom-phy-lib.c
+@@ -115,6 +115,31 @@ int at803x_set_wol(struct phy_device *phydev,
+ }
+ EXPORT_SYMBOL_GPL(at803x_set_wol);
+ 
++int at8031_set_wol(struct phy_device *phydev,
++		   struct ethtool_wolinfo *wol)
++{
++	int ret;
++
++	/* First setup MAC address and enable WOL interrupt */
++	ret = at803x_set_wol(phydev, wol);
++	if (ret)
++		return ret;
++
++	if (wol->wolopts & WAKE_MAGIC)
++		/* Enable WOL function for 1588 */
++		ret = phy_modify_mmd(phydev, MDIO_MMD_PCS,
++				     AT803X_PHY_MMD3_WOL_CTRL,
++				     0, AT803X_WOL_EN);
++	else
++		/* Disable WoL function for 1588 */
++		ret = phy_modify_mmd(phydev, MDIO_MMD_PCS,
++				     AT803X_PHY_MMD3_WOL_CTRL,
++				     AT803X_WOL_EN, 0);
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(at8031_set_wol);
++
+ void at803x_get_wol(struct phy_device *phydev,
+ 		    struct ethtool_wolinfo *wol)
+ {
+diff --git a/drivers/net/phy/qcom/qcom.h b/drivers/net/phy/qcom/qcom.h
+index 4bb541728846d3..7f7151c8bacaa5 100644
+--- a/drivers/net/phy/qcom/qcom.h
++++ b/drivers/net/phy/qcom/qcom.h
+@@ -172,6 +172,9 @@
+ #define AT803X_LOC_MAC_ADDR_16_31_OFFSET	0x804B
+ #define AT803X_LOC_MAC_ADDR_32_47_OFFSET	0x804A
+ 
++#define AT803X_PHY_MMD3_WOL_CTRL		0x8012
++#define AT803X_WOL_EN				BIT(5)
++
+ #define AT803X_DEBUG_ADDR			0x1D
+ #define AT803X_DEBUG_DATA			0x1E
+ 
+@@ -215,6 +218,8 @@ int at803x_debug_reg_mask(struct phy_device *phydev, u16 reg,
+ int at803x_debug_reg_write(struct phy_device *phydev, u16 reg, u16 data);
+ int at803x_set_wol(struct phy_device *phydev,
+ 		   struct ethtool_wolinfo *wol);
++int at8031_set_wol(struct phy_device *phydev,
++		   struct ethtool_wolinfo *wol);
+ void at803x_get_wol(struct phy_device *phydev,
+ 		    struct ethtool_wolinfo *wol);
+ int at803x_ack_interrupt(struct phy_device *phydev);
+diff --git a/drivers/net/phy/smsc.c b/drivers/net/phy/smsc.c
+index 31463b9e5697f3..b6489da5cfcdfb 100644
+--- a/drivers/net/phy/smsc.c
++++ b/drivers/net/phy/smsc.c
+@@ -155,10 +155,29 @@ static int smsc_phy_reset(struct phy_device *phydev)
+ 
+ static int lan87xx_config_aneg(struct phy_device *phydev)
+ {
+-	int rc;
++	u8 mdix_ctrl;
+ 	int val;
++	int rc;
++
++	/* When auto-negotiation is disabled (forced mode), the PHY's
++	 * Auto-MDIX will continue toggling the TX/RX pairs.
++	 *
++	 * To establish a stable link, we must select a fixed MDI mode.
++	 * If the user has not specified a fixed MDI mode (i.e., mdix_ctrl is
++	 * 'auto'), we default to ETH_TP_MDI. This choice of ETH_TP_MDI
++	 * mirrors the behavior the hardware would exhibit if the AUTOMDIX_EN
++	 * strap were configured for a fixed MDI connection.
++	 */
++	if (phydev->autoneg == AUTONEG_DISABLE) {
++		if (phydev->mdix_ctrl == ETH_TP_MDI_AUTO)
++			mdix_ctrl = ETH_TP_MDI;
++		else
++			mdix_ctrl = phydev->mdix_ctrl;
++	} else {
++		mdix_ctrl = phydev->mdix_ctrl;
++	}
+ 
+-	switch (phydev->mdix_ctrl) {
++	switch (mdix_ctrl) {
+ 	case ETH_TP_MDI:
+ 		val = SPECIAL_CTRL_STS_OVRRD_AMDIX_;
+ 		break;
+@@ -167,7 +186,8 @@ static int lan87xx_config_aneg(struct phy_device *phydev)
+ 			SPECIAL_CTRL_STS_AMDIX_STATE_;
+ 		break;
+ 	case ETH_TP_MDI_AUTO:
+-		val = SPECIAL_CTRL_STS_AMDIX_ENABLE_;
++		val = SPECIAL_CTRL_STS_OVRRD_AMDIX_ |
++			SPECIAL_CTRL_STS_AMDIX_ENABLE_;
+ 		break;
+ 	default:
+ 		return genphy_config_aneg(phydev);
+@@ -183,7 +203,7 @@ static int lan87xx_config_aneg(struct phy_device *phydev)
+ 	rc |= val;
+ 	phy_write(phydev, SPECIAL_CTRL_STS, rc);
+ 
+-	phydev->mdix = phydev->mdix_ctrl;
++	phydev->mdix = mdix_ctrl;
+ 	return genphy_config_aneg(phydev);
+ }
+ 
+@@ -261,6 +281,33 @@ int lan87xx_read_status(struct phy_device *phydev)
+ }
+ EXPORT_SYMBOL_GPL(lan87xx_read_status);
+ 
++static int lan87xx_phy_config_init(struct phy_device *phydev)
++{
++	int rc;
++
++	/* The LAN87xx PHY's initial MDI-X mode is determined by the AUTOMDIX_EN
++	 * hardware strap, but the driver cannot read the strap's status. This
++	 * creates an unpredictable initial state.
++	 *
++	 * To ensure consistent and reliable behavior across all boards,
++	 * override the strap configuration on initialization and force the PHY
++	 * into a known state with Auto-MDIX enabled, which is the expected
++	 * default for modern hardware.
++	 */
++	rc = phy_modify(phydev, SPECIAL_CTRL_STS,
++			SPECIAL_CTRL_STS_OVRRD_AMDIX_ |
++			SPECIAL_CTRL_STS_AMDIX_ENABLE_ |
++			SPECIAL_CTRL_STS_AMDIX_STATE_,
++			SPECIAL_CTRL_STS_OVRRD_AMDIX_ |
++			SPECIAL_CTRL_STS_AMDIX_ENABLE_);
++	if (rc < 0)
++		return rc;
++
++	phydev->mdix_ctrl = ETH_TP_MDI_AUTO;
++
++	return smsc_phy_config_init(phydev);
++}
++
+ static int lan874x_phy_config_init(struct phy_device *phydev)
+ {
+ 	u16 val;
+@@ -695,7 +742,7 @@ static struct phy_driver smsc_phy_driver[] = {
+ 
+ 	/* basic functions */
+ 	.read_status	= lan87xx_read_status,
+-	.config_init	= smsc_phy_config_init,
++	.config_init	= lan87xx_phy_config_init,
+ 	.soft_reset	= smsc_phy_reset,
+ 	.config_aneg	= lan87xx_config_aneg,
+ 
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index b586b1c13a47f9..f5647ee0addec2 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1426,6 +1426,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_QUIRK_SET_DTR(0x22de, 0x9051, 2)}, /* Hucom Wireless HM-211S/K */
+ 	{QMI_FIXED_INTF(0x22de, 0x9061, 3)},	/* WeTelecom WPD-600N */
+ 	{QMI_QUIRK_SET_DTR(0x1e0e, 0x9001, 5)},	/* SIMCom 7100E, 7230E, 7600E ++ */
++	{QMI_QUIRK_SET_DTR(0x1e0e, 0x9071, 3)},	/* SIMCom 8230C ++ */
+ 	{QMI_QUIRK_SET_DTR(0x2c7c, 0x0121, 4)},	/* Quectel EC21 Mini PCIe */
+ 	{QMI_QUIRK_SET_DTR(0x2c7c, 0x0191, 4)},	/* Quectel EG91 */
+ 	{QMI_QUIRK_SET_DTR(0x2c7c, 0x0195, 4)},	/* Quectel EG95 */
+diff --git a/drivers/net/wireless/marvell/mwifiex/util.c b/drivers/net/wireless/marvell/mwifiex/util.c
+index 1f1f6280a0f251..86e20edb593b3f 100644
+--- a/drivers/net/wireless/marvell/mwifiex/util.c
++++ b/drivers/net/wireless/marvell/mwifiex/util.c
+@@ -477,7 +477,9 @@ mwifiex_process_mgmt_packet(struct mwifiex_private *priv,
+ 				    "auth: receive authentication from %pM\n",
+ 				    ieee_hdr->addr3);
+ 		} else {
+-			if (!priv->wdev.connected)
++			if (!priv->wdev.connected ||
++			    !ether_addr_equal(ieee_hdr->addr3,
++					      priv->curr_bss_params.bss_descriptor.mac_address))
+ 				return 0;
+ 
+ 			if (ieee80211_is_deauth(ieee_hdr->frame_control)) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index bafcf5a279e23f..407df42f0f9b61 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -288,7 +288,7 @@ __mt76_connac_mcu_alloc_sta_req(struct mt76_dev *dev, struct mt76_vif_link *mvif
+ 
+ 	mt76_connac_mcu_get_wlan_idx(dev, wcid, &hdr.wlan_idx_lo,
+ 				     &hdr.wlan_idx_hi);
+-	skb = mt76_mcu_msg_alloc(dev, NULL, len);
++	skb = __mt76_mcu_msg_alloc(dev, NULL, len, len, GFP_ATOMIC);
+ 	if (!skb)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -1703,8 +1703,8 @@ int mt76_connac_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 		if (!sreq->ssids[i].ssid_len)
+ 			continue;
+ 
+-		req->ssids[i].ssid_len = cpu_to_le32(sreq->ssids[i].ssid_len);
+-		memcpy(req->ssids[i].ssid, sreq->ssids[i].ssid,
++		req->ssids[n_ssids].ssid_len = cpu_to_le32(sreq->ssids[i].ssid_len);
++		memcpy(req->ssids[n_ssids].ssid, sreq->ssids[i].ssid,
+ 		       sreq->ssids[i].ssid_len);
+ 		n_ssids++;
+ 	}
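
This hunk and the mt7925 counterpart further below fix the same off-by-index bug: non-empty SSIDs must be written to consecutive destination slots indexed by n_ssids, not by the loop index i, otherwise skipped wildcard entries leave holes that the firmware parses as stale data. The compaction pattern in isolation, as a sketch reusing the hunk's own names:

	int i, n_ssids = 0;

	for (i = 0; i < sreq->n_ssids; i++) {
		if (!sreq->ssids[i].ssid_len)
			continue;	/* wildcard entry: skip, leave no hole */

		req->ssids[n_ssids].ssid_len = cpu_to_le32(sreq->ssids[i].ssid_len);
		memcpy(req->ssids[n_ssids].ssid, sreq->ssids[i].ssid,
		       sreq->ssids[i].ssid_len);
		n_ssids++;	/* destination index advances only on a copy */
	}
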
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index 1fffa43379b2b2..77f73ae1d7ecce 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -1180,6 +1180,9 @@ static void mt7921_sta_set_decap_offload(struct ieee80211_hw *hw,
+ 	struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
+ 	struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ 
++	if (!msta->deflink.wcid.sta)
++		return;
++
+ 	mt792x_mutex_acquire(dev);
+ 
+ 	if (enabled)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/init.c b/drivers/net/wireless/mediatek/mt76/mt7925/init.c
+index 2a83ff59a968c9..4249bad83c930e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/init.c
+@@ -52,6 +52,8 @@ static int mt7925_thermal_init(struct mt792x_phy *phy)
+ 
+ 	name = devm_kasprintf(&wiphy->dev, GFP_KERNEL, "mt7925_%s",
+ 			      wiphy_name(wiphy));
++	if (!name)
++		return -ENOMEM;
+ 
+ 	hwmon = devm_hwmon_device_register_with_groups(&wiphy->dev, name, phy,
+ 						       mt7925_hwmon_groups);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/main.c b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
+index 66f327781947b8..f1a5e8f09c3d58 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
+@@ -1600,6 +1600,9 @@ static void mt7925_sta_set_decap_offload(struct ieee80211_hw *hw,
+ 	unsigned long valid = mvif->valid_links;
+ 	u8 i;
+ 
++	if (!msta->vif)
++		return;
++
+ 	mt792x_mutex_acquire(dev);
+ 
+ 	valid = ieee80211_vif_is_mld(vif) ? mvif->valid_links : BIT(0);
+@@ -1614,6 +1617,9 @@ static void mt7925_sta_set_decap_offload(struct ieee80211_hw *hw,
+ 		else
+ 			clear_bit(MT_WCID_FLAG_HDR_TRANS, &mlink->wcid.flags);
+ 
++		if (!mlink->wcid.sta)
++			continue;
++
+ 		mt7925_mcu_wtbl_update_hdr_trans(dev, vif, sta, i);
+ 	}
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+index 7d96b88cff803a..b3d18514c58187 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+@@ -2834,8 +2834,8 @@ int mt7925_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 		if (!sreq->ssids[i].ssid_len)
+ 			continue;
+ 
+-		ssid->ssids[i].ssid_len = cpu_to_le32(sreq->ssids[i].ssid_len);
+-		memcpy(ssid->ssids[i].ssid, sreq->ssids[i].ssid,
++		ssid->ssids[n_ssids].ssid_len = cpu_to_le32(sreq->ssids[i].ssid_len);
++		memcpy(ssid->ssids[n_ssids].ssid, sreq->ssids[i].ssid,
+ 		       sreq->ssids[i].ssid_len);
+ 		n_ssids++;
+ 	}
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/regs.h b/drivers/net/wireless/mediatek/mt76/mt7925/regs.h
+index 547489092c2947..341987e47f67a0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/regs.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/regs.h
+@@ -58,7 +58,7 @@
+ 
+ #define MT_INT_TX_DONE_MCU		(MT_INT_TX_DONE_MCU_WM |	\
+ 					 MT_INT_TX_DONE_FWDL)
+-#define MT_INT_TX_DONE_ALL		(MT_INT_TX_DONE_MCU_WM |	\
++#define MT_INT_TX_DONE_ALL		(MT_INT_TX_DONE_MCU |	\
+ 					 MT_INT_TX_DONE_BAND0 |	\
+ 					GENMASK(18, 4))
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+index 2108361543a0c0..3646806088e9a7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+@@ -2318,20 +2318,12 @@ void mt7996_mac_update_stats(struct mt7996_phy *phy)
+ void mt7996_mac_sta_rc_work(struct work_struct *work)
+ {
+ 	struct mt7996_dev *dev = container_of(work, struct mt7996_dev, rc_work);
+-	struct ieee80211_bss_conf *link_conf;
+-	struct ieee80211_link_sta *link_sta;
+ 	struct mt7996_sta_link *msta_link;
+-	struct mt7996_vif_link *link;
+-	struct mt76_vif_link *mlink;
+-	struct ieee80211_sta *sta;
+ 	struct ieee80211_vif *vif;
+-	struct mt7996_sta *msta;
+ 	struct mt7996_vif *mvif;
+ 	LIST_HEAD(list);
+ 	u32 changed;
+-	u8 link_id;
+ 
+-	rcu_read_lock();
+ 	spin_lock_bh(&dev->mt76.sta_poll_lock);
+ 	list_splice_init(&dev->sta_rc_list, &list);
+ 
+@@ -2342,46 +2334,28 @@ void mt7996_mac_sta_rc_work(struct work_struct *work)
+ 
+ 		changed = msta_link->changed;
+ 		msta_link->changed = 0;
+-
+-		sta = wcid_to_sta(&msta_link->wcid);
+-		link_id = msta_link->wcid.link_id;
+-		msta = msta_link->sta;
+-		mvif = msta->vif;
+-		vif = container_of((void *)mvif, struct ieee80211_vif, drv_priv);
+-
+-		mlink = rcu_dereference(mvif->mt76.link[link_id]);
+-		if (!mlink)
+-			continue;
+-
+-		link_sta = rcu_dereference(sta->link[link_id]);
+-		if (!link_sta)
+-			continue;
+-
+-		link_conf = rcu_dereference(vif->link_conf[link_id]);
+-		if (!link_conf)
+-			continue;
++		mvif = msta_link->sta->vif;
++		vif = container_of((void *)mvif, struct ieee80211_vif,
++				   drv_priv);
+ 
+ 		spin_unlock_bh(&dev->mt76.sta_poll_lock);
+ 
+-		link = (struct mt7996_vif_link *)mlink;
+-
+ 		if (changed & (IEEE80211_RC_SUPP_RATES_CHANGED |
+ 			       IEEE80211_RC_NSS_CHANGED |
+ 			       IEEE80211_RC_BW_CHANGED))
+-			mt7996_mcu_add_rate_ctrl(dev, vif, link_conf,
+-						 link_sta, link, msta_link,
++			mt7996_mcu_add_rate_ctrl(dev, msta_link->sta, vif,
++						 msta_link->wcid.link_id,
+ 						 true);
+ 
+ 		if (changed & IEEE80211_RC_SMPS_CHANGED)
+-			mt7996_mcu_set_fixed_field(dev, link_sta, link,
+-						   msta_link, NULL,
++			mt7996_mcu_set_fixed_field(dev, msta_link->sta, NULL,
++						   msta_link->wcid.link_id,
+ 						   RATE_PARAM_MMPS_UPDATE);
+ 
+ 		spin_lock_bh(&dev->mt76.sta_poll_lock);
+ 	}
+ 
+ 	spin_unlock_bh(&dev->mt76.sta_poll_lock);
+-	rcu_read_unlock();
+ }
+ 
+ void mt7996_mac_work(struct work_struct *work)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/main.c b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+index b11dd3dd5c46f0..5584bea9e2a3f8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+@@ -1112,9 +1112,8 @@ mt7996_mac_sta_event(struct mt7996_dev *dev, struct ieee80211_vif *vif,
+ 			if (err)
+ 				return err;
+ 
+-			err = mt7996_mcu_add_rate_ctrl(dev, vif, link_conf,
+-						       link_sta, link,
+-						       msta_link, false);
++			err = mt7996_mcu_add_rate_ctrl(dev, msta_link->sta, vif,
++						       link_id, false);
+ 			if (err)
+ 				return err;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+index ddd555942c7389..63dc6df20c3e42 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+@@ -1883,22 +1883,35 @@ int mt7996_mcu_set_fixed_rate_ctrl(struct mt7996_dev *dev,
+ 				     MCU_WM_UNI_CMD(RA), true);
+ }
+ 
+-int mt7996_mcu_set_fixed_field(struct mt7996_dev *dev,
+-			       struct ieee80211_link_sta *link_sta,
+-			       struct mt7996_vif_link *link,
+-			       struct mt7996_sta_link *msta_link,
+-			       void *data, u32 field)
++int mt7996_mcu_set_fixed_field(struct mt7996_dev *dev, struct mt7996_sta *msta,
++			       void *data, u8 link_id, u32 field)
+ {
+-	struct sta_phy_uni *phy = data;
++	struct mt7996_vif *mvif = msta->vif;
++	struct mt7996_sta_link *msta_link;
+ 	struct sta_rec_ra_fixed_uni *ra;
++	struct sta_phy_uni *phy = data;
++	struct mt76_vif_link *mlink;
+ 	struct sk_buff *skb;
++	int err = -ENODEV;
+ 	struct tlv *tlv;
+ 
+-	skb = __mt76_connac_mcu_alloc_sta_req(&dev->mt76, &link->mt76,
++	rcu_read_lock();
++
++	mlink = rcu_dereference(mvif->mt76.link[link_id]);
++	if (!mlink)
++		goto error_unlock;
++
++	msta_link = rcu_dereference(msta->link[link_id]);
++	if (!msta_link)
++		goto error_unlock;
++
++	skb = __mt76_connac_mcu_alloc_sta_req(&dev->mt76, mlink,
+ 					      &msta_link->wcid,
+ 					      MT7996_STA_UPDATE_MAX_SIZE);
+-	if (IS_ERR(skb))
+-		return PTR_ERR(skb);
++	if (IS_ERR(skb)) {
++		err = PTR_ERR(skb);
++		goto error_unlock;
++	}
+ 
+ 	tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_RA_UPDATE, sizeof(*ra));
+ 	ra = (struct sta_rec_ra_fixed_uni *)tlv;
+@@ -1913,106 +1926,149 @@ int mt7996_mcu_set_fixed_field(struct mt7996_dev *dev,
+ 		if (phy)
+ 			ra->phy = *phy;
+ 		break;
+-	case RATE_PARAM_MMPS_UPDATE:
++	case RATE_PARAM_MMPS_UPDATE: {
++		struct ieee80211_sta *sta = wcid_to_sta(&msta_link->wcid);
++		struct ieee80211_link_sta *link_sta;
++
++		link_sta = rcu_dereference(sta->link[link_id]);
++		if (!link_sta) {
++			dev_kfree_skb(skb);
++			goto error_unlock;
++		}
++
+ 		ra->mmps_mode = mt7996_mcu_get_mmps_mode(link_sta->smps_mode);
+ 		break;
++	}
+ 	default:
+ 		break;
+ 	}
+ 	ra->field = cpu_to_le32(field);
+ 
++	rcu_read_unlock();
++
+ 	return mt76_mcu_skb_send_msg(&dev->mt76, skb,
+ 				     MCU_WMWA_UNI_CMD(STA_REC_UPDATE), true);
++error_unlock:
++	rcu_read_unlock();
++
++	return err;
+ }
+ 
+ static int
+-mt7996_mcu_add_rate_ctrl_fixed(struct mt7996_dev *dev,
+-			       struct ieee80211_link_sta *link_sta,
+-			       struct mt7996_vif_link *link,
+-			       struct mt7996_sta_link *msta_link)
++mt7996_mcu_add_rate_ctrl_fixed(struct mt7996_dev *dev, struct mt7996_sta *msta,
++			       struct ieee80211_vif *vif, u8 link_id)
+ {
+-	struct cfg80211_chan_def *chandef = &link->phy->mt76->chandef;
+-	struct cfg80211_bitrate_mask *mask = &link->bitrate_mask;
+-	enum nl80211_band band = chandef->chan->band;
++	struct ieee80211_link_sta *link_sta;
++	struct cfg80211_bitrate_mask mask;
++	struct mt7996_sta_link *msta_link;
++	struct mt7996_vif_link *link;
+ 	struct sta_phy_uni phy = {};
+-	int ret, nrates = 0;
++	struct ieee80211_sta *sta;
++	int ret, nrates = 0, idx;
++	enum nl80211_band band;
++	bool has_he;
+ 
+ #define __sta_phy_bitrate_mask_check(_mcs, _gi, _ht, _he)			\
+ 	do {									\
+-		u8 i, gi = mask->control[band]._gi;				\
++		u8 i, gi = mask.control[band]._gi;				\
+ 		gi = (_he) ? gi : gi == NL80211_TXRATE_FORCE_SGI;		\
+ 		phy.sgi = gi;							\
+-		phy.he_ltf = mask->control[band].he_ltf;			\
+-		for (i = 0; i < ARRAY_SIZE(mask->control[band]._mcs); i++) {	\
+-			if (!mask->control[band]._mcs[i])			\
++		phy.he_ltf = mask.control[band].he_ltf;				\
++		for (i = 0; i < ARRAY_SIZE(mask.control[band]._mcs); i++) {	\
++			if (!mask.control[band]._mcs[i])			\
+ 				continue;					\
+-			nrates += hweight16(mask->control[band]._mcs[i]);	\
+-			phy.mcs = ffs(mask->control[band]._mcs[i]) - 1;		\
++			nrates += hweight16(mask.control[band]._mcs[i]);	\
++			phy.mcs = ffs(mask.control[band]._mcs[i]) - 1;		\
+ 			if (_ht)						\
+ 				phy.mcs += 8 * i;				\
+ 		}								\
+ 	} while (0)
+ 
+-	if (link_sta->he_cap.has_he) {
++	rcu_read_lock();
++
++	link = mt7996_vif_link(dev, vif, link_id);
++	if (!link)
++		goto error_unlock;
++
++	msta_link = rcu_dereference(msta->link[link_id]);
++	if (!msta_link)
++		goto error_unlock;
++
++	sta = wcid_to_sta(&msta_link->wcid);
++	link_sta = rcu_dereference(sta->link[link_id]);
++	if (!link_sta)
++		goto error_unlock;
++
++	band = link->phy->mt76->chandef.chan->band;
++	has_he = link_sta->he_cap.has_he;
++	mask = link->bitrate_mask;
++	idx = msta_link->wcid.idx;
++
++	if (has_he) {
+ 		__sta_phy_bitrate_mask_check(he_mcs, he_gi, 0, 1);
+ 	} else if (link_sta->vht_cap.vht_supported) {
+ 		__sta_phy_bitrate_mask_check(vht_mcs, gi, 0, 0);
+ 	} else if (link_sta->ht_cap.ht_supported) {
+ 		__sta_phy_bitrate_mask_check(ht_mcs, gi, 1, 0);
+ 	} else {
+-		nrates = hweight32(mask->control[band].legacy);
+-		phy.mcs = ffs(mask->control[band].legacy) - 1;
++		nrates = hweight32(mask.control[band].legacy);
++		phy.mcs = ffs(mask.control[band].legacy) - 1;
+ 	}
++
++	rcu_read_unlock();
++
+ #undef __sta_phy_bitrate_mask_check
+ 
+ 	/* fall back to auto rate control */
+-	if (mask->control[band].gi == NL80211_TXRATE_DEFAULT_GI &&
+-	    mask->control[band].he_gi == GENMASK(7, 0) &&
+-	    mask->control[band].he_ltf == GENMASK(7, 0) &&
++	if (mask.control[band].gi == NL80211_TXRATE_DEFAULT_GI &&
++	    mask.control[band].he_gi == GENMASK(7, 0) &&
++	    mask.control[band].he_ltf == GENMASK(7, 0) &&
+ 	    nrates != 1)
+ 		return 0;
+ 
+ 	/* fixed single rate */
+ 	if (nrates == 1) {
+-		ret = mt7996_mcu_set_fixed_field(dev, link_sta, link,
+-						 msta_link, &phy,
++		ret = mt7996_mcu_set_fixed_field(dev, msta, &phy, link_id,
+ 						 RATE_PARAM_FIXED_MCS);
+ 		if (ret)
+ 			return ret;
+ 	}
+ 
+ 	/* fixed GI */
+-	if (mask->control[band].gi != NL80211_TXRATE_DEFAULT_GI ||
+-	    mask->control[band].he_gi != GENMASK(7, 0)) {
++	if (mask.control[band].gi != NL80211_TXRATE_DEFAULT_GI ||
++	    mask.control[band].he_gi != GENMASK(7, 0)) {
+ 		u32 addr;
+ 
+ 		/* firmware updates only TXCMD but doesn't take WTBL into
+ 		 * account, so driver should update here to reflect the
+ 		 * actual txrate hardware sends out.
+ 		 */
+-		addr = mt7996_mac_wtbl_lmac_addr(dev, msta_link->wcid.idx, 7);
+-		if (link_sta->he_cap.has_he)
++		addr = mt7996_mac_wtbl_lmac_addr(dev, idx, 7);
++		if (has_he)
+ 			mt76_rmw_field(dev, addr, GENMASK(31, 24), phy.sgi);
+ 		else
+ 			mt76_rmw_field(dev, addr, GENMASK(15, 12), phy.sgi);
+ 
+-		ret = mt7996_mcu_set_fixed_field(dev, link_sta, link,
+-						 msta_link, &phy,
++		ret = mt7996_mcu_set_fixed_field(dev, msta, &phy, link_id,
+ 						 RATE_PARAM_FIXED_GI);
+ 		if (ret)
+ 			return ret;
+ 	}
+ 
+ 	/* fixed HE_LTF */
+-	if (mask->control[band].he_ltf != GENMASK(7, 0)) {
+-		ret = mt7996_mcu_set_fixed_field(dev, link_sta, link,
+-						 msta_link, &phy,
++	if (mask.control[band].he_ltf != GENMASK(7, 0)) {
++		ret = mt7996_mcu_set_fixed_field(dev, msta, &phy, link_id,
+ 						 RATE_PARAM_FIXED_HE_LTF);
+ 		if (ret)
+ 			return ret;
+ 	}
+ 
+ 	return 0;
++
++error_unlock:
++	rcu_read_unlock();
++
++	return -ENODEV;
+ }
+ 
+ static void
+@@ -2123,21 +2179,44 @@ mt7996_mcu_sta_rate_ctrl_tlv(struct sk_buff *skb, struct mt7996_dev *dev,
+ 	memset(ra->rx_rcpi, INIT_RCPI, sizeof(ra->rx_rcpi));
+ }
+ 
+-int mt7996_mcu_add_rate_ctrl(struct mt7996_dev *dev,
+-			     struct ieee80211_vif *vif,
+-			     struct ieee80211_bss_conf *link_conf,
+-			     struct ieee80211_link_sta *link_sta,
+-			     struct mt7996_vif_link *link,
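
The reordering matters because the consumer pops based on job_count: incrementing only after the xchg publishes the node leaves a window where the count lags the nodes actually visible in the queue. With the increment first and smp_mb__after_atomic() ordering it before the publishing store, any consumer that observes the node also observes a count covering it. The invariant as a sketch, with a single plain store standing in for the real xchg-based linking:

static void push_sketch(atomic_t *count, struct spsc_node **slot,
			struct spsc_node *node)
{
	atomic_inc(count);		/* 1: account for the new node */
	smp_mb__after_atomic();		/* order step 1 before step 2 */
	WRITE_ONCE(*slot, node);	/* 2: make the node visible */
}
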
+-			     struct mt7996_sta_link *msta_link, bool changed)
++int mt7996_mcu_add_rate_ctrl(struct mt7996_dev *dev, struct mt7996_sta *msta,
++			     struct ieee80211_vif *vif, u8 link_id,
++			     bool changed)
+ {
++	struct ieee80211_bss_conf *link_conf;
++	struct ieee80211_link_sta *link_sta;
++	struct mt7996_sta_link *msta_link;
++	struct mt7996_vif_link *link;
++	struct ieee80211_sta *sta;
+ 	struct sk_buff *skb;
+-	int ret;
++	int ret = -ENODEV;
++
++	rcu_read_lock();
++
++	link = mt7996_vif_link(dev, vif, link_id);
++	if (!link)
++		goto error_unlock;
++
++	msta_link = rcu_dereference(msta->link[link_id]);
++	if (!msta_link)
++		goto error_unlock;
++
++	sta = wcid_to_sta(&msta_link->wcid);
++	link_sta = rcu_dereference(sta->link[link_id]);
++	if (!link_sta)
++		goto error_unlock;
++
++	link_conf = rcu_dereference(vif->link_conf[link_id]);
++	if (!link_conf)
++		goto error_unlock;
+ 
+ 	skb = __mt76_connac_mcu_alloc_sta_req(&dev->mt76, &link->mt76,
+ 					      &msta_link->wcid,
+ 					      MT7996_STA_UPDATE_MAX_SIZE);
+-	if (IS_ERR(skb))
+-		return PTR_ERR(skb);
++	if (IS_ERR(skb)) {
++		ret = PTR_ERR(skb);
++		goto error_unlock;
++	}
+ 
+ 	/* firmware rc algorithm refers to sta_rec_he for HE control.
+ 	 * once dev->rc_work changes the settings driver should also
+@@ -2151,12 +2230,19 @@ int mt7996_mcu_add_rate_ctrl(struct mt7996_dev *dev,
+ 	 */
+ 	mt7996_mcu_sta_rate_ctrl_tlv(skb, dev, vif, link_conf, link_sta, link);
+ 
++	rcu_read_unlock();
++
+ 	ret = mt76_mcu_skb_send_msg(&dev->mt76, skb,
+ 				    MCU_WMWA_UNI_CMD(STA_REC_UPDATE), true);
+ 	if (ret)
+ 		return ret;
+ 
+-	return mt7996_mcu_add_rate_ctrl_fixed(dev, link_sta, link, msta_link);
++	return mt7996_mcu_add_rate_ctrl_fixed(dev, msta, vif, link_id);
++
++error_unlock:
++	rcu_read_unlock();
++
++	return ret;
+ }
+ 
+ static int
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
+index 77605403b39661..8220a7310f2859 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
+@@ -604,23 +604,17 @@ int mt7996_mcu_beacon_inband_discov(struct mt7996_dev *dev,
+ int mt7996_mcu_add_obss_spr(struct mt7996_phy *phy,
+ 			    struct mt7996_vif_link *link,
+ 			    struct ieee80211_he_obss_pd *he_obss_pd);
+-int mt7996_mcu_add_rate_ctrl(struct mt7996_dev *dev,
+-			     struct ieee80211_vif *vif,
+-			     struct ieee80211_bss_conf *link_conf,
+-			     struct ieee80211_link_sta *link_sta,
+-			     struct mt7996_vif_link *link,
+-			     struct mt7996_sta_link *msta_link, bool changed);
++int mt7996_mcu_add_rate_ctrl(struct mt7996_dev *dev, struct mt7996_sta *msta,
++			     struct ieee80211_vif *vif, u8 link_id,
++			     bool changed);
+ int mt7996_set_channel(struct mt76_phy *mphy);
+ int mt7996_mcu_set_chan_info(struct mt7996_phy *phy, u16 tag);
+ int mt7996_mcu_set_tx(struct mt7996_dev *dev, struct ieee80211_vif *vif,
+ 		      struct ieee80211_bss_conf *link_conf);
+ int mt7996_mcu_set_fixed_rate_ctrl(struct mt7996_dev *dev,
+ 				   void *data, u16 version);
+-int mt7996_mcu_set_fixed_field(struct mt7996_dev *dev,
+-			       struct ieee80211_link_sta *link_sta,
+-			       struct mt7996_vif_link *link,
+-			       struct mt7996_sta_link *msta_link,
+-			       void *data, u32 field);
++int mt7996_mcu_set_fixed_field(struct mt7996_dev *dev, struct mt7996_sta *msta,
++			       void *data, u8 link_id, u32 field);
+ int mt7996_mcu_set_eeprom(struct mt7996_dev *dev);
+ int mt7996_mcu_get_eeprom(struct mt7996_dev *dev, u32 offset, u8 *buf, u32 buf_len);
+ int mt7996_mcu_get_eeprom_free_block(struct mt7996_dev *dev, u8 *block_num);
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00soc.c b/drivers/net/wireless/ralink/rt2x00/rt2x00soc.c
+index eface610178d2e..f7f3a2340c3929 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2x00soc.c
++++ b/drivers/net/wireless/ralink/rt2x00/rt2x00soc.c
+@@ -108,7 +108,7 @@ int rt2x00soc_probe(struct platform_device *pdev, const struct rt2x00_ops *ops)
+ }
+ EXPORT_SYMBOL_GPL(rt2x00soc_probe);
+ 
+-int rt2x00soc_remove(struct platform_device *pdev)
++void rt2x00soc_remove(struct platform_device *pdev)
+ {
+ 	struct ieee80211_hw *hw = platform_get_drvdata(pdev);
+ 	struct rt2x00_dev *rt2x00dev = hw->priv;
+@@ -119,8 +119,6 @@ int rt2x00soc_remove(struct platform_device *pdev)
+ 	rt2x00lib_remove_dev(rt2x00dev);
+ 	rt2x00soc_free_reg(rt2x00dev);
+ 	ieee80211_free_hw(hw);
+-
+-	return 0;
+ }
+ EXPORT_SYMBOL_GPL(rt2x00soc_remove);
+ 
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00soc.h b/drivers/net/wireless/ralink/rt2x00/rt2x00soc.h
+index 021fd06b362723..d6226b8a10e00b 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2x00soc.h
++++ b/drivers/net/wireless/ralink/rt2x00/rt2x00soc.h
+@@ -17,7 +17,7 @@
+  * SoC driver handlers.
+  */
+ int rt2x00soc_probe(struct platform_device *pdev, const struct rt2x00_ops *ops);
+-int rt2x00soc_remove(struct platform_device *pdev);
++void rt2x00soc_remove(struct platform_device *pdev);
+ #ifdef CONFIG_PM
+ int rt2x00soc_suspend(struct platform_device *pdev, pm_message_t state);
+ int rt2x00soc_resume(struct platform_device *pdev);
+diff --git a/drivers/net/wireless/zydas/zd1211rw/zd_mac.c b/drivers/net/wireless/zydas/zd1211rw/zd_mac.c
+index 9653dbaac3c059..781510a3ec6d5a 100644
+--- a/drivers/net/wireless/zydas/zd1211rw/zd_mac.c
++++ b/drivers/net/wireless/zydas/zd1211rw/zd_mac.c
+@@ -583,7 +583,11 @@ void zd_mac_tx_to_dev(struct sk_buff *skb, int error)
+ 
+ 		skb_queue_tail(q, skb);
+ 		while (skb_queue_len(q) > ZD_MAC_MAX_ACK_WAITERS) {
+-			zd_mac_tx_status(hw, skb_dequeue(q),
++			skb = skb_dequeue(q);
++			if (!skb)
++				break;
++
++			zd_mac_tx_status(hw, skb,
+ 					 mac->ack_pending ? mac->ack_signal : 0,
+ 					 NULL);
+ 			mac->ack_pending = 0;
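
skb_queue_len() and skb_dequeue() each take the queue lock separately, so the queue can be drained by a concurrent completion between the check and the dequeue; the hunk therefore tests the dequeued pointer instead of handing a possible NULL to zd_mac_tx_status(). The defensive shape of such a drain loop, sketched:

	while (skb_queue_len(q) > ZD_MAC_MAX_ACK_WAITERS) {
		struct sk_buff *skb = skb_dequeue(q);	/* may race to NULL */

		if (!skb)
			break;	/* queue emptied concurrently, nothing to do */

		/* ... report TX status for skb ... */
	}
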
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index b78e0e41732445..af370628e58393 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -1676,19 +1676,24 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
+ 		return NULL;
+ 
+ 	root_ops = kzalloc(sizeof(*root_ops), GFP_KERNEL);
+-	if (!root_ops)
+-		goto free_ri;
++	if (!root_ops) {
++		kfree(ri);
++		return NULL;
++	}
+ 
+ 	ri->cfg = pci_acpi_setup_ecam_mapping(root);
+-	if (!ri->cfg)
+-		goto free_root_ops;
++	if (!ri->cfg) {
++		kfree(ri);
++		kfree(root_ops);
++		return NULL;
++	}
+ 
+ 	root_ops->release_info = pci_acpi_generic_release_info;
+ 	root_ops->prepare_resources = pci_acpi_root_prepare_resources;
+ 	root_ops->pci_ops = (struct pci_ops *)&ri->cfg->ops->pci_ops;
+ 	bus = acpi_pci_root_create(root, root_ops, &ri->common, ri->cfg);
+ 	if (!bus)
+-		goto free_cfg;
++		return NULL;
+ 
+ 	/* If we must preserve the resource configuration, claim now */
+ 	host = pci_find_host_bridge(bus);
+@@ -1705,14 +1710,6 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
+ 		pcie_bus_configure_settings(child);
+ 
+ 	return bus;
+-
+-free_cfg:
+-	pci_ecam_free(ri->cfg);
+-free_root_ops:
+-	kfree(root_ops);
+-free_ri:
+-	kfree(ri);
+-	return NULL;
+ }
+ 
+ void pcibios_add_bus(struct pci_bus *bus)
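
The goto ladder goes away because cleanup ownership changes hands partway through the function: once root_ops->release_info is populated and acpi_pci_root_create() is called, its failure path invokes that callback, which already frees ri->cfg, root_ops and ri, so freeing again in the caller would be a double free. Before that handoff, each failure frees exactly what has been allocated so far. The idiom sketched with hypothetical names (register_with_release_cb() is made up):

static struct thing *setup_sketch(void)
{
	struct thing *t = kzalloc(sizeof(*t), GFP_KERNEL);

	if (!t)
		return NULL;

	if (!register_with_release_cb(t)) {
		/* the callee's release callback already freed t on
		 * failure: do not touch it again here
		 */
		return NULL;
	}
	return t;
}
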
+diff --git a/drivers/pinctrl/nuvoton/pinctrl-ma35.c b/drivers/pinctrl/nuvoton/pinctrl-ma35.c
+index 06ae1fe8b8c545..b51704bafd8111 100644
+--- a/drivers/pinctrl/nuvoton/pinctrl-ma35.c
++++ b/drivers/pinctrl/nuvoton/pinctrl-ma35.c
+@@ -1074,7 +1074,10 @@ static int ma35_pinctrl_probe_dt(struct platform_device *pdev, struct ma35_pinct
+ 	u32 idx = 0;
+ 	int ret;
+ 
+-	for_each_gpiochip_node(dev, child) {
++	device_for_each_child_node(dev, child) {
++		if (fwnode_property_present(child, "gpio-controller"))
++			continue;
++
+ 		npctl->nfunctions++;
+ 		npctl->ngroups += of_get_child_count(to_of_node(child));
+ 	}
+@@ -1092,7 +1095,10 @@ static int ma35_pinctrl_probe_dt(struct platform_device *pdev, struct ma35_pinct
+ 	if (!npctl->groups)
+ 		return -ENOMEM;
+ 
+-	for_each_gpiochip_node(dev, child) {
++	device_for_each_child_node(dev, child) {
++		if (fwnode_property_present(child, "gpio-controller"))
++			continue;
++
+ 		ret = ma35_pinctrl_parse_functions(child, npctl, idx++);
+ 		if (ret) {
+ 			fwnode_handle_put(child);
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index 1d7fdcdec4c855..bf55d1b4db67b1 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -934,6 +934,17 @@ static int amd_gpio_suspend_hibernate_common(struct device *dev, bool is_suspend
+ 				  pin, is_suspend ? "suspend" : "hibernate");
+ 		}
+ 
++		/*
++		 * Debounce left enabled over suspend has shown issues with a
++		 * GPIO being unable to wake the system. Since only the actual
++		 * wakeup event is of interest here, clear the debounce filter.
++		 */
++		if (gpio_dev->saved_regs[i] & (DB_CNTRl_MASK << DB_CNTRL_OFF)) {
++			amd_gpio_set_debounce(gpio_dev, pin, 0);
++			pm_pr_dbg("Clearing debounce for GPIO #%d during %s.\n",
++				  pin, is_suspend ? "suspend" : "hibernate");
++		}
++
+ 		raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ 	}
+ 
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
+index 0eb816395dc64d..389bd20efca012 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
+@@ -1036,6 +1036,25 @@ static bool msm_gpio_needs_dual_edge_parent_workaround(struct irq_data *d,
+ 	       test_bit(d->hwirq, pctrl->skip_wake_irqs);
+ }
+ 
++static void msm_gpio_irq_init_valid_mask(struct gpio_chip *gc,
++					 unsigned long *valid_mask,
++					 unsigned int ngpios)
++{
++	struct msm_pinctrl *pctrl = gpiochip_get_data(gc);
++	const struct msm_pingroup *g;
++	int i;
++
++	bitmap_fill(valid_mask, ngpios);
++
++	for (i = 0; i < ngpios; i++) {
++		g = &pctrl->soc->groups[i];
++
++		if (g->intr_detection_width != 1 &&
++		    g->intr_detection_width != 2)
++			clear_bit(i, valid_mask);
++	}
++}
++
+ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ {
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+@@ -1439,6 +1458,7 @@ static int msm_gpio_init(struct msm_pinctrl *pctrl)
+ 	girq->default_type = IRQ_TYPE_NONE;
+ 	girq->handler = handle_bad_irq;
+ 	girq->parents[0] = pctrl->irq;
++	girq->init_valid_mask = msm_gpio_irq_init_valid_mask;
+ 
+ 	ret = gpiochip_add_data(&pctrl->chip, pctrl);
+ 	if (ret) {
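
girq->init_valid_mask is invoked once by gpiolib while gpiochip_add_data() registers the chip; pins whose intr_detection_width is neither 1 nor 2 have no interrupt detection logic, so clearing their bits makes IRQ requests on them fail cleanly instead of programming nonexistent trigger registers. Roughly how the core consumes the callback (a sketch of the contract, not the exact gpiolib code):

	if (gc->irq.init_valid_mask)
		gc->irq.init_valid_mask(gc, gc->irq.valid_mask, gc->ngpio);
	/* later, IRQ mappings for cleared bits are refused with an error */
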
+diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c
+index 0387bd838487b1..b28351c71e4d11 100644
+--- a/drivers/pwm/core.c
++++ b/drivers/pwm/core.c
+@@ -549,7 +549,7 @@ static bool pwm_state_valid(const struct pwm_state *state)
+ 	 * and supposed to be ignored. So also ignore any strange values and
+ 	 * consider the state ok.
+ 	 */
+-	if (state->enabled)
++	if (!state->enabled)
+ 		return true;
+ 
+ 	if (!state->period)
+diff --git a/drivers/pwm/pwm-mediatek.c b/drivers/pwm/pwm-mediatek.c
+index 7eaab58314995c..33d3554b9197ab 100644
+--- a/drivers/pwm/pwm-mediatek.c
++++ b/drivers/pwm/pwm-mediatek.c
+@@ -130,8 +130,10 @@ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ 		return ret;
+ 
+ 	clk_rate = clk_get_rate(pc->clk_pwms[pwm->hwpwm]);
+-	if (!clk_rate)
+-		return -EINVAL;
++	if (!clk_rate) {
++		ret = -EINVAL;
++		goto out;
++	}
+ 
+ 	/* Make sure we use the bus clock and not the 26MHz clock */
+ 	if (pc->soc->has_ck_26m_sel)
+@@ -150,9 +152,9 @@ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	}
+ 
+ 	if (clkdiv > PWM_CLK_DIV_MAX) {
+-		pwm_mediatek_clk_disable(chip, pwm);
+ 		dev_err(pwmchip_parent(chip), "period of %d ns not supported\n", period_ns);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto out;
+ 	}
+ 
+ 	if (pc->soc->pwm45_fixup && pwm->hwpwm > 2) {
+@@ -169,9 +171,10 @@ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	pwm_mediatek_writel(pc, pwm->hwpwm, reg_width, cnt_period);
+ 	pwm_mediatek_writel(pc, pwm->hwpwm, reg_thres, cnt_duty);
+ 
++out:
+ 	pwm_mediatek_clk_disable(chip, pwm);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int pwm_mediatek_enable(struct pwm_chip *chip, struct pwm_device *pwm)
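
pwm_mediatek_config() enables the PWM clocks at its top purely to access the registers, so every exit has to disable them again; the early return on a zero clk_rate previously leaked an enable count. The single-exit shape the hunk converges on, sketched with a hypothetical validation step (validation_fails() is made up):

static int config_sketch(struct clk *clk)
{
	int ret;

	ret = clk_prepare_enable(clk);
	if (ret)
		return ret;		/* nothing to undo yet */

	if (validation_fails()) {
		ret = -EINVAL;
		goto out;
	}

	/* ... program the hardware while the clock runs ... */
	ret = 0;
out:
	clk_disable_unprepare(clk);	/* exactly once, on every path */
	return ret;
}
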
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index f5642b3038e4dd..ed5bbf704a7d10 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -4566,6 +4566,7 @@ void do_unblank_screen(int leaving_gfx)
+ 	set_palette(vc);
+ 	set_cursor(vc);
+ 	vt_event_post(VT_EVENT_UNBLANK, vc->vc_num, vc->vc_num);
++	notify_update(vc);
+ }
+ EXPORT_SYMBOL(do_unblank_screen);
+ 
+diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c
+index 36fff45e8c9b7a..4b6484cfa0bd1f 100644
+--- a/drivers/usb/gadget/function/u_serial.c
++++ b/drivers/usb/gadget/function/u_serial.c
+@@ -295,8 +295,8 @@ __acquires(&port->port_lock)
+ 			break;
+ 	}
+ 
+-	if (do_tty_wake && port->port.tty)
+-		tty_wakeup(port->port.tty);
++	if (do_tty_wake)
++		tty_port_tty_wakeup(&port->port);
+ 	return status;
+ }
+ 
+@@ -544,20 +544,16 @@ static int gs_alloc_requests(struct usb_ep *ep, struct list_head *head,
+ static int gs_start_io(struct gs_port *port)
+ {
+ 	struct list_head	*head = &port->read_pool;
+-	struct usb_ep		*ep;
++	struct usb_ep		*ep = port->port_usb->out;
+ 	int			status;
+ 	unsigned		started;
+ 
+-	if (!port->port_usb || !port->port.tty)
+-		return -EIO;
+-
+ 	/* Allocate RX and TX I/O buffers.  We can't easily do this much
+ 	 * earlier (with GFP_KERNEL) because the requests are coupled to
+ 	 * endpoints, as are the packet sizes we'll be using.  Different
+ 	 * configurations may use different endpoints with a given port;
+ 	 * and high speed vs full speed changes packet sizes too.
+ 	 */
+-	ep = port->port_usb->out;
+ 	status = gs_alloc_requests(ep, head, gs_read_complete,
+ 		&port->read_allocated);
+ 	if (status)
+@@ -578,7 +574,7 @@ static int gs_start_io(struct gs_port *port)
+ 		gs_start_tx(port);
+ 		/* Unblock any pending writes into our circular buffer, in case
+ 		 * we didn't in gs_start_tx() */
+-		tty_wakeup(port->port.tty);
++		tty_port_tty_wakeup(&port->port);
+ 	} else {
+ 		/* Free reqs only if we are still connected */
+ 		if (port->port_usb) {
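
tty_port_tty_wakeup() is the safe replacement for the open-coded check-then-call it removes: the helper looks the tty up under the port lock and holds a reference across the wakeup, so a concurrent hangup clearing port->port.tty cannot turn the call into a use-after-free. What the helper boils down to:

	struct tty_struct *tty = tty_port_tty_get(port); /* ref under lock */

	if (tty) {
		tty_wakeup(tty);
		tty_kref_put(tty);	/* drop the temporary reference */
	}
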
+diff --git a/fs/btrfs/free-space-tree.c b/fs/btrfs/free-space-tree.c
+index b65a20fd519ba5..64af363f36ddce 100644
+--- a/fs/btrfs/free-space-tree.c
++++ b/fs/btrfs/free-space-tree.c
+@@ -1099,11 +1099,21 @@ static int populate_free_space_tree(struct btrfs_trans_handle *trans,
+ 	ret = btrfs_search_slot_for_read(extent_root, &key, path, 1, 0);
+ 	if (ret < 0)
+ 		goto out_locked;
+-	ASSERT(ret == 0);
++	/*
++	 * If ret is 1 (no key found), it means this is an empty block group,
++	 * without any extents allocated from it and there's no block group
++	 * item (key BTRFS_BLOCK_GROUP_ITEM_KEY) located in the extent tree
++	 * because we are using the block group tree feature, so block group
++	 * items are stored in the block group tree. It also means there are no
++	 * extents allocated for block groups with a start offset beyond this
++	 * block group's end offset (this is the last, highest, block group).
++	 */
++	if (!btrfs_fs_compat_ro(trans->fs_info, BLOCK_GROUP_TREE))
++		ASSERT(ret == 0);
+ 
+ 	start = block_group->start;
+ 	end = block_group->start + block_group->length;
+-	while (1) {
++	while (ret == 0) {
+ 		btrfs_item_key_to_cpu(path->nodes[0], &key, path->slots[0]);
+ 
+ 		if (key.type == BTRFS_EXTENT_ITEM_KEY ||
+@@ -1133,8 +1143,6 @@ static int populate_free_space_tree(struct btrfs_trans_handle *trans,
+ 		ret = btrfs_next_item(extent_root, path);
+ 		if (ret < 0)
+ 			goto out_locked;
+-		if (ret)
+-			break;
+ 	}
+ 	if (start < end) {
+ 		ret = __add_to_free_space_tree(trans, block_group, path2,
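
With the block-group-tree feature there is no BLOCK_GROUP_ITEM in the extent tree, so the search may legitimately find nothing for an empty, last block group; the rewritten loop keys off the return convention shared by btrfs_search_slot_for_read() and btrfs_next_item(): negative on error, 0 when positioned on an item, 1 when there is nothing (left) to read. The resulting scan shape, sketched:

	ret = btrfs_search_slot_for_read(root, &key, path, 1, 0);
	if (ret < 0)
		goto out;

	while (ret == 0) {
		/* ... process path->nodes[0] at path->slots[0] ... */
		ret = btrfs_next_item(root, path);	/* 1: past the end */
		if (ret < 0)
			goto out;
	}
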
+diff --git a/fs/erofs/data.c b/fs/erofs/data.c
+index 2409d2ab0c280e..33cb0a7330d23f 100644
+--- a/fs/erofs/data.c
++++ b/fs/erofs/data.c
+@@ -213,9 +213,11 @@ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
+ 
+ /*
+  * bit 30: I/O error occurred on this folio
++ * bit 29: CPU has dirty data in D-cache (needs aliasing handling);
+  * bit 0 - 29: remaining parts to complete this folio
+  */
+-#define EROFS_ONLINEFOLIO_EIO			(1 << 30)
++#define EROFS_ONLINEFOLIO_EIO		30
++#define EROFS_ONLINEFOLIO_DIRTY		29
+ 
+ void erofs_onlinefolio_init(struct folio *folio)
+ {
+@@ -232,19 +234,23 @@ void erofs_onlinefolio_split(struct folio *folio)
+ 	atomic_inc((atomic_t *)&folio->private);
+ }
+ 
+-void erofs_onlinefolio_end(struct folio *folio, int err)
++void erofs_onlinefolio_end(struct folio *folio, int err, bool dirty)
+ {
+ 	int orig, v;
+ 
+ 	do {
+ 		orig = atomic_read((atomic_t *)&folio->private);
+-		v = (orig - 1) | (err ? EROFS_ONLINEFOLIO_EIO : 0);
++		DBG_BUGON(orig <= 0);
++		v = dirty << EROFS_ONLINEFOLIO_DIRTY;
++		v |= (orig - 1) | (!!err << EROFS_ONLINEFOLIO_EIO);
+ 	} while (atomic_cmpxchg((atomic_t *)&folio->private, orig, v) != orig);
+ 
+-	if (v & ~EROFS_ONLINEFOLIO_EIO)
++	if (v & (BIT(EROFS_ONLINEFOLIO_DIRTY) - 1))
+ 		return;
+ 	folio->private = 0;
+-	folio_end_read(folio, !(v & EROFS_ONLINEFOLIO_EIO));
++	if (v & BIT(EROFS_ONLINEFOLIO_DIRTY))
++		flush_dcache_folio(folio);
++	folio_end_read(folio, !(v & BIT(EROFS_ONLINEFOLIO_EIO)));
+ }
+ 
+ static int erofs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+@@ -350,11 +356,16 @@ int erofs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+  */
+ static int erofs_read_folio(struct file *file, struct folio *folio)
+ {
++	trace_erofs_read_folio(folio, true);
++
+ 	return iomap_read_folio(folio, &erofs_iomap_ops);
+ }
+ 
+ static void erofs_readahead(struct readahead_control *rac)
+ {
++	trace_erofs_readahead(rac->mapping->host, readahead_index(rac),
++					readahead_count(rac), true);
++
+ 	return iomap_readahead(rac, &erofs_iomap_ops);
+ }
+ 
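
folio->private acts here as a packed completion state: bits 0-28 count outstanding parts, the new bit 29 records that the CPU wrote data through a kernel mapping so D-cache aliasing must be handled, and bit 30 latches any I/O error. A small model of the cmpxchg loop that decrements the count and merges the flags in one atomic step, assuming the same bit layout:

#define F_EIO	BIT(30)			/* latched I/O error */
#define F_DIRTY	BIT(29)			/* D-cache flush needed */

static void part_end_sketch(atomic_t *state, bool err, bool dirty)
{
	int orig, v;

	do {
		orig = atomic_read(state);
		v = (orig - 1) | (err ? F_EIO : 0) | (dirty ? F_DIRTY : 0);
	} while (atomic_cmpxchg(state, orig, v) != orig);

	if (v & (F_DIRTY - 1))		/* low bits: parts still pending */
		return;

	/* last part: flush the D-cache if dirty, then complete the folio */
}
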
+diff --git a/fs/erofs/decompressor.c b/fs/erofs/decompressor.c
+index bf62e2836b604c..358061d7b66074 100644
+--- a/fs/erofs/decompressor.c
++++ b/fs/erofs/decompressor.c
+@@ -301,13 +301,11 @@ static int z_erofs_transform_plain(struct z_erofs_decompress_req *rq,
+ 		cur = min(cur, rq->outputsize);
+ 		if (cur && rq->out[0]) {
+ 			kin = kmap_local_page(rq->in[nrpages_in - 1]);
+-			if (rq->out[0] == rq->in[nrpages_in - 1]) {
++			if (rq->out[0] == rq->in[nrpages_in - 1])
+ 				memmove(kin + rq->pageofs_out, kin + pi, cur);
+-				flush_dcache_page(rq->out[0]);
+-			} else {
++			else
+ 				memcpy_to_page(rq->out[0], rq->pageofs_out,
+ 					       kin + pi, cur);
+-			}
+ 			kunmap_local(kin);
+ 		}
+ 		rq->outputsize -= cur;
+@@ -325,14 +323,12 @@ static int z_erofs_transform_plain(struct z_erofs_decompress_req *rq,
+ 			po = (rq->pageofs_out + cur + pi) & ~PAGE_MASK;
+ 			DBG_BUGON(no >= nrpages_out);
+ 			cnt = min(insz - pi, PAGE_SIZE - po);
+-			if (rq->out[no] == rq->in[ni]) {
++			if (rq->out[no] == rq->in[ni])
+ 				memmove(kin + po,
+ 					kin + rq->pageofs_in + pi, cnt);
+-				flush_dcache_page(rq->out[no]);
+-			} else if (rq->out[no]) {
++			else if (rq->out[no])
+ 				memcpy_to_page(rq->out[no], po,
+ 					       kin + rq->pageofs_in + pi, cnt);
+-			}
+ 			pi += cnt;
+ 		} while (pi < insz);
+ 		kunmap_local(kin);
+diff --git a/fs/erofs/fileio.c b/fs/erofs/fileio.c
+index 60c7cc4c105c67..da1304a9bb4353 100644
+--- a/fs/erofs/fileio.c
++++ b/fs/erofs/fileio.c
+@@ -38,7 +38,7 @@ static void erofs_fileio_ki_complete(struct kiocb *iocb, long ret)
+ 	} else {
+ 		bio_for_each_folio_all(fi, &rq->bio) {
+ 			DBG_BUGON(folio_test_uptodate(fi.folio));
+-			erofs_onlinefolio_end(fi.folio, ret);
++			erofs_onlinefolio_end(fi.folio, ret, false);
+ 		}
+ 	}
+ 	bio_uninit(&rq->bio);
+@@ -158,7 +158,7 @@ static int erofs_fileio_scan_folio(struct erofs_fileio *io, struct folio *folio)
+ 		}
+ 		cur += len;
+ 	}
+-	erofs_onlinefolio_end(folio, err);
++	erofs_onlinefolio_end(folio, err, false);
+ 	return err;
+ }
+ 
+@@ -180,7 +180,7 @@ static void erofs_fileio_readahead(struct readahead_control *rac)
+ 	struct folio *folio;
+ 	int err;
+ 
+-	trace_erofs_readpages(inode, readahead_index(rac),
++	trace_erofs_readahead(inode, readahead_index(rac),
+ 			      readahead_count(rac), true);
+ 	while ((folio = readahead_folio(rac))) {
+ 		err = erofs_fileio_scan_folio(&io, folio);
+diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
+index 4ac188d5d8944f..2e8823653c2e24 100644
+--- a/fs/erofs/internal.h
++++ b/fs/erofs/internal.h
+@@ -314,10 +314,12 @@ static inline struct folio *erofs_grab_folio_nowait(struct address_space *as,
+ /* The length of extent is full */
+ #define EROFS_MAP_FULL_MAPPED	0x0008
+ /* Located in the special packed inode */
+-#define EROFS_MAP_FRAGMENT	0x0010
++#define __EROFS_MAP_FRAGMENT	0x0010
+ /* The extent refers to partial decompressed data */
+ #define EROFS_MAP_PARTIAL_REF	0x0020
+ 
++#define EROFS_MAP_FRAGMENT	(EROFS_MAP_MAPPED | __EROFS_MAP_FRAGMENT)
++
+ struct erofs_map_blocks {
+ 	struct erofs_buf buf;
+ 
+@@ -389,7 +391,7 @@ int erofs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ int erofs_map_blocks(struct inode *inode, struct erofs_map_blocks *map);
+ void erofs_onlinefolio_init(struct folio *folio);
+ void erofs_onlinefolio_split(struct folio *folio);
+-void erofs_onlinefolio_end(struct folio *folio, int err);
++void erofs_onlinefolio_end(struct folio *folio, int err, bool dirty);
+ struct inode *erofs_iget(struct super_block *sb, erofs_nid_t nid);
+ int erofs_getattr(struct mnt_idmap *idmap, const struct path *path,
+ 		  struct kstat *stat, u32 request_mask,
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index b8e6b76c23d5ee..d21ae4802c7f11 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -1003,7 +1003,7 @@ static int z_erofs_scan_folio(struct z_erofs_frontend *f,
+ 		if (!(map->m_flags & EROFS_MAP_MAPPED)) {
+ 			folio_zero_segment(folio, cur, end);
+ 			tight = false;
+-		} else if (map->m_flags & EROFS_MAP_FRAGMENT) {
++		} else if (map->m_flags & __EROFS_MAP_FRAGMENT) {
+ 			erofs_off_t fpos = offset + cur - map->m_la;
+ 
+ 			err = z_erofs_read_fragment(inode->i_sb, folio, cur,
+@@ -1060,7 +1060,7 @@ static int z_erofs_scan_folio(struct z_erofs_frontend *f,
+ 			tight = (bs == PAGE_SIZE);
+ 		}
+ 	} while ((end = cur) > 0);
+-	erofs_onlinefolio_end(folio, err);
++	erofs_onlinefolio_end(folio, err, false);
+ 	return err;
+ }
+ 
+@@ -1165,7 +1165,7 @@ static void z_erofs_fill_other_copies(struct z_erofs_backend *be, int err)
+ 			cur += len;
+ 		}
+ 		kunmap_local(dst);
+-		erofs_onlinefolio_end(page_folio(bvi->bvec.page), err);
++		erofs_onlinefolio_end(page_folio(bvi->bvec.page), err, true);
+ 		list_del(p);
+ 		kfree(bvi);
+ 	}
+@@ -1324,7 +1324,7 @@ static int z_erofs_decompress_pcluster(struct z_erofs_backend *be, int err)
+ 
+ 		DBG_BUGON(z_erofs_page_is_invalidated(page));
+ 		if (!z_erofs_is_shortlived_page(page)) {
+-			erofs_onlinefolio_end(page_folio(page), err);
++			erofs_onlinefolio_end(page_folio(page), err, true);
+ 			continue;
+ 		}
+ 		if (pcl->algorithmformat != Z_EROFS_COMPRESSION_LZ4) {
+@@ -1855,13 +1855,12 @@ static void z_erofs_readahead(struct readahead_control *rac)
+ {
+ 	struct inode *const inode = rac->mapping->host;
+ 	Z_EROFS_DEFINE_FRONTEND(f, inode, readahead_pos(rac));
+-	struct folio *head = NULL, *folio;
+ 	unsigned int nrpages = readahead_count(rac);
++	struct folio *head = NULL, *folio;
+ 	int err;
+ 
++	trace_erofs_readahead(inode, readahead_index(rac), nrpages, false);
+ 	z_erofs_pcluster_readmore(&f, rac, true);
+-	nrpages = readahead_count(rac);
+-	trace_erofs_readpages(inode, readahead_index(rac), nrpages, false);
+ 	while ((folio = readahead_folio(rac))) {
+ 		folio->private = head;
+ 		head = folio;
+diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
+index 0bebc6e3a4d7dd..f1a15ff22147ba 100644
+--- a/fs/erofs/zmap.c
++++ b/fs/erofs/zmap.c
+@@ -413,8 +413,7 @@ static int z_erofs_map_blocks_fo(struct inode *inode,
+ 	    !vi->z_tailextent_headlcn) {
+ 		map->m_la = 0;
+ 		map->m_llen = inode->i_size;
+-		map->m_flags = EROFS_MAP_MAPPED |
+-			EROFS_MAP_FULL_MAPPED | EROFS_MAP_FRAGMENT;
++		map->m_flags = EROFS_MAP_FRAGMENT;
+ 		return 0;
+ 	}
+ 	initial_lcn = ofs >> lclusterbits;
+@@ -489,7 +488,7 @@ static int z_erofs_map_blocks_fo(struct inode *inode,
+ 			goto unmap_out;
+ 		}
+ 	} else if (fragment && m.lcn == vi->z_tailextent_headlcn) {
+-		map->m_flags |= EROFS_MAP_FRAGMENT;
++		map->m_flags = EROFS_MAP_FRAGMENT;
+ 	} else {
+ 		map->m_pa = erofs_pos(sb, m.pblk);
+ 		err = z_erofs_get_extent_compressedlen(&m, initial_lcn);
+@@ -617,7 +616,7 @@ static int z_erofs_map_blocks_ext(struct inode *inode,
+ 	if (lstart < lend) {
+ 		map->m_la = lstart;
+ 		if (last && (vi->z_advise & Z_EROFS_ADVISE_FRAGMENT_PCLUSTER)) {
+-			map->m_flags |= EROFS_MAP_MAPPED | EROFS_MAP_FRAGMENT;
++			map->m_flags = EROFS_MAP_FRAGMENT;
+ 			vi->z_fragmentoff = map->m_plen;
+ 			if (recsz > offsetof(struct z_erofs_extent, pstart_lo))
+ 				vi->z_fragmentoff |= map->m_pa << 32;
+@@ -797,7 +796,7 @@ static int z_erofs_iomap_begin_report(struct inode *inode, loff_t offset,
+ 	iomap->length = map.m_llen;
+ 	if (map.m_flags & EROFS_MAP_MAPPED) {
+ 		iomap->type = IOMAP_MAPPED;
+-		iomap->addr = map.m_flags & EROFS_MAP_FRAGMENT ?
++		iomap->addr = map.m_flags & __EROFS_MAP_FRAGMENT ?
+ 			      IOMAP_NULL_ADDR : map.m_pa;
+ 	} else {
+ 		iomap->type = IOMAP_HOLE;
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index d4dbffdedd08e3..0fbf5dfedb24e2 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -883,7 +883,7 @@ static bool __ep_remove(struct eventpoll *ep, struct epitem *epi, bool force)
+ 	kfree_rcu(epi, rcu);
+ 
+ 	percpu_counter_dec(&ep->user->epoll_watches);
+-	return ep_refcount_dec_and_test(ep);
++	return true;
+ }
+ 
+ /*
+@@ -891,14 +891,14 @@ static bool __ep_remove(struct eventpoll *ep, struct epitem *epi, bool force)
+  */
+ static void ep_remove_safe(struct eventpoll *ep, struct epitem *epi)
+ {
+-	WARN_ON_ONCE(__ep_remove(ep, epi, false));
++	if (__ep_remove(ep, epi, false))
++		WARN_ON_ONCE(ep_refcount_dec_and_test(ep));
+ }
+ 
+ static void ep_clear_and_put(struct eventpoll *ep)
+ {
+ 	struct rb_node *rbp, *next;
+ 	struct epitem *epi;
+-	bool dispose;
+ 
+ 	/* We need to release all tasks waiting for these file */
+ 	if (waitqueue_active(&ep->poll_wait))
+@@ -931,10 +931,8 @@ static void ep_clear_and_put(struct eventpoll *ep)
+ 		cond_resched();
+ 	}
+ 
+-	dispose = ep_refcount_dec_and_test(ep);
+ 	mutex_unlock(&ep->mtx);
+-
+-	if (dispose)
++	if (ep_refcount_dec_and_test(ep))
+ 		ep_free(ep);
+ }
+ 
+@@ -1137,7 +1135,7 @@ void eventpoll_release_file(struct file *file)
+ 		dispose = __ep_remove(ep, epi, true);
+ 		mutex_unlock(&ep->mtx);
+ 
+-		if (dispose)
++		if (dispose && ep_refcount_dec_and_test(ep))
+ 			ep_free(ep);
+ 		goto again;
+ 	}
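
All three call sites now drop their reference only after mutex_unlock(&ep->mtx). The reason is that ep_free(), run by whichever task sees the refcount hit zero, destroys that very mutex; decrementing while the mutex is still held lets another task free the lock out from under the pending unlock. The safe ordering, sketched:

	mutex_lock(&ep->mtx);
	/* ... unlink items ... */
	mutex_unlock(&ep->mtx);			/* 1: done touching ep->mtx */

	if (ep_refcount_dec_and_test(ep))	/* 2: maybe the last reference */
		ep_free(ep);			/* 3: safe, mutex now unused */
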
+diff --git a/fs/proc/inode.c b/fs/proc/inode.c
+index a3eb3b740f7664..3604b616311c27 100644
+--- a/fs/proc/inode.c
++++ b/fs/proc/inode.c
+@@ -42,7 +42,7 @@ static void proc_evict_inode(struct inode *inode)
+ 
+ 	head = ei->sysctl;
+ 	if (head) {
+-		RCU_INIT_POINTER(ei->sysctl, NULL);
++		WRITE_ONCE(ei->sysctl, NULL);
+ 		proc_sys_evict_inode(inode, head);
+ 	}
+ }
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index cc9d74a06ff03c..08b78150cdde1f 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -918,17 +918,21 @@ static int proc_sys_compare(const struct dentry *dentry,
+ 	struct ctl_table_header *head;
+ 	struct inode *inode;
+ 
+-	/* Although proc doesn't have negative dentries, rcu-walk means
+-	 * that inode here can be NULL */
+-	/* AV: can it, indeed? */
+-	inode = d_inode_rcu(dentry);
+-	if (!inode)
+-		return 1;
+ 	if (name->len != len)
+ 		return 1;
+ 	if (memcmp(name->name, str, len))
+ 		return 1;
+-	head = rcu_dereference(PROC_I(inode)->sysctl);
++
++	// false positive is fine here - we'll recheck anyway
++	if (d_in_lookup(dentry))
++		return 0;
++
++	inode = d_inode_rcu(dentry);
++	// we just might have run into dentry in the middle of __dentry_kill()
++	if (!inode)
++		return 1;
++
++	head = READ_ONCE(PROC_I(inode)->sysctl);
+ 	return !head || !sysctl_is_seen(head);
+ }
+ 
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 854d23be3d7bbc..e57e323817e78e 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -36,9 +36,9 @@ void task_mem(struct seq_file *m, struct mm_struct *mm)
+ 	unsigned long text, lib, swap, anon, file, shmem;
+ 	unsigned long hiwater_vm, total_vm, hiwater_rss, total_rss;
+ 
+-	anon = get_mm_counter(mm, MM_ANONPAGES);
+-	file = get_mm_counter(mm, MM_FILEPAGES);
+-	shmem = get_mm_counter(mm, MM_SHMEMPAGES);
++	anon = get_mm_counter_sum(mm, MM_ANONPAGES);
++	file = get_mm_counter_sum(mm, MM_FILEPAGES);
++	shmem = get_mm_counter_sum(mm, MM_SHMEMPAGES);
+ 
+ 	/*
+ 	 * Note: to minimize their overhead, mm maintains hiwater_vm and
+@@ -59,7 +59,7 @@ void task_mem(struct seq_file *m, struct mm_struct *mm)
+ 	text = min(text, mm->exec_vm << PAGE_SHIFT);
+ 	lib = (mm->exec_vm << PAGE_SHIFT) - text;
+ 
+-	swap = get_mm_counter(mm, MM_SWAPENTS);
++	swap = get_mm_counter_sum(mm, MM_SWAPENTS);
+ 	SEQ_PUT_DEC("VmPeak:\t", hiwater_vm);
+ 	SEQ_PUT_DEC(" kB\nVmSize:\t", total_vm);
+ 	SEQ_PUT_DEC(" kB\nVmLck:\t", mm->locked_vm);
+@@ -92,12 +92,12 @@ unsigned long task_statm(struct mm_struct *mm,
+ 			 unsigned long *shared, unsigned long *text,
+ 			 unsigned long *data, unsigned long *resident)
+ {
+-	*shared = get_mm_counter(mm, MM_FILEPAGES) +
+-			get_mm_counter(mm, MM_SHMEMPAGES);
++	*shared = get_mm_counter_sum(mm, MM_FILEPAGES) +
++			get_mm_counter_sum(mm, MM_SHMEMPAGES);
+ 	*text = (PAGE_ALIGN(mm->end_code) - (mm->start_code & PAGE_MASK))
+ 								>> PAGE_SHIFT;
+ 	*data = mm->data_vm + mm->stack_vm;
+-	*resident = *shared + get_mm_counter(mm, MM_ANONPAGES);
++	*resident = *shared + get_mm_counter_sum(mm, MM_ANONPAGES);
+ 	return mm->total_vm;
+ }
+ 
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index ad2b15ec3b561d..f1c7ed1a6ca59d 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -8535,11 +8535,6 @@ static void smb20_oplock_break_ack(struct ksmbd_work *work)
+ 		goto err_out;
+ 	}
+ 
+-	opinfo->op_state = OPLOCK_STATE_NONE;
+-	wake_up_interruptible_all(&opinfo->oplock_q);
+-	opinfo_put(opinfo);
+-	ksmbd_fd_put(work, fp);
+-
+ 	rsp->StructureSize = cpu_to_le16(24);
+ 	rsp->OplockLevel = rsp_oplevel;
+ 	rsp->Reserved = 0;
+@@ -8547,16 +8542,15 @@ static void smb20_oplock_break_ack(struct ksmbd_work *work)
+ 	rsp->VolatileFid = volatile_id;
+ 	rsp->PersistentFid = persistent_id;
+ 	ret = ksmbd_iov_pin_rsp(work, rsp, sizeof(struct smb2_oplock_break));
+-	if (!ret)
+-		return;
+-
++	if (ret) {
+ err_out:
++		smb2_set_err_rsp(work);
++	}
++
+ 	opinfo->op_state = OPLOCK_STATE_NONE;
+ 	wake_up_interruptible_all(&opinfo->oplock_q);
+-
+ 	opinfo_put(opinfo);
+ 	ksmbd_fd_put(work, fp);
+-	smb2_set_err_rsp(work);
+ }
+ 
+ static int check_lease_state(struct lease *lease, __le32 req_state)
+@@ -8686,11 +8680,6 @@ static void smb21_lease_break_ack(struct ksmbd_work *work)
+ 	}
+ 
+ 	lease_state = lease->state;
+-	opinfo->op_state = OPLOCK_STATE_NONE;
+-	wake_up_interruptible_all(&opinfo->oplock_q);
+-	atomic_dec(&opinfo->breaking_cnt);
+-	wake_up_interruptible_all(&opinfo->oplock_brk);
+-	opinfo_put(opinfo);
+ 
+ 	rsp->StructureSize = cpu_to_le16(36);
+ 	rsp->Reserved = 0;
+@@ -8699,16 +8688,16 @@ static void smb21_lease_break_ack(struct ksmbd_work *work)
+ 	rsp->LeaseState = lease_state;
+ 	rsp->LeaseDuration = 0;
+ 	ret = ksmbd_iov_pin_rsp(work, rsp, sizeof(struct smb2_lease_ack));
+-	if (!ret)
+-		return;
+-
++	if (ret) {
+ err_out:
++		smb2_set_err_rsp(work);
++	}
++
++	opinfo->op_state = OPLOCK_STATE_NONE;
+ 	wake_up_interruptible_all(&opinfo->oplock_q);
+ 	atomic_dec(&opinfo->breaking_cnt);
+ 	wake_up_interruptible_all(&opinfo->oplock_brk);
+-
+ 	opinfo_put(opinfo);
+-	smb2_set_err_rsp(work);
+ }
+ 
+ /**
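
Both ack paths use the same slightly unusual construct: the err_out label sits inside the if (ret) body, so straight-line code calls smb2_set_err_rsp() only when pinning the response failed, earlier failures goto into the block, and the state reset plus the puts after it run exactly once on every path. A sketch of the shape with made-up helper names (validate(), pin_response(), set_error_response(), reset_state(), put_op()):

static void ack_sketch(struct work_ctx *work, struct op *op)
{
	int ret;

	if (validate(work))
		goto err_out;		/* jumps into the if-body below */

	ret = pin_response(work);
	if (ret) {
err_out:
		set_error_response(work);	/* all failures funnel here */
	}

	reset_state(op);		/* common tail, success and error */
	put_op(op);
}
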
+diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c
+index 64a428a06ace0c..c6cbe0d56e3212 100644
+--- a/fs/smb/server/transport_rdma.c
++++ b/fs/smb/server/transport_rdma.c
+@@ -433,7 +433,8 @@ static void free_transport(struct smb_direct_transport *t)
+ 	if (t->qp) {
+ 		ib_drain_qp(t->qp);
+ 		ib_mr_pool_destroy(t->qp, &t->qp->rdma_mrs);
+-		ib_destroy_qp(t->qp);
++		t->qp = NULL;
++		rdma_destroy_qp(t->cm_id);
+ 	}
+ 
+ 	ksmbd_debug(RDMA, "drain the reassembly queue\n");
+@@ -1940,8 +1941,8 @@ static int smb_direct_create_qpair(struct smb_direct_transport *t,
+ 	return 0;
+ err:
+ 	if (t->qp) {
+-		ib_destroy_qp(t->qp);
+ 		t->qp = NULL;
++		rdma_destroy_qp(t->cm_id);
+ 	}
+ 	if (t->recv_cq) {
+ 		ib_destroy_cq(t->recv_cq);
+diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c
+index baf0d3031a44a0..134cabdd60eb3b 100644
+--- a/fs/smb/server/vfs.c
++++ b/fs/smb/server/vfs.c
+@@ -1280,6 +1280,7 @@ int ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name,
+ 
+ 		err = ksmbd_vfs_lock_parent(parent_path->dentry, path->dentry);
+ 		if (err) {
++			mnt_drop_write(parent_path->mnt);
+ 			path_put(path);
+ 			path_put(parent_path);
+ 		}
+diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
+index 94d365b2250521..e85d49d6fa1af2 100644
+--- a/include/drm/drm_file.h
++++ b/include/drm/drm_file.h
+@@ -300,6 +300,9 @@ struct drm_file {
+ 	 *
+ 	 * Mapping of mm object handles to object pointers. Used by the GEM
+ 	 * subsystem. Protected by @table_lock.
++	 *
++	 * Note that allocated entries might be NULL as a transient state when
++	 * creating or deleting a handle.
+ 	 */
+ 	struct idr object_idr;
+ 
+diff --git a/include/drm/drm_framebuffer.h b/include/drm/drm_framebuffer.h
+index 668077009fced0..38b24fc8978d34 100644
+--- a/include/drm/drm_framebuffer.h
++++ b/include/drm/drm_framebuffer.h
+@@ -23,6 +23,7 @@
+ #ifndef __DRM_FRAMEBUFFER_H__
+ #define __DRM_FRAMEBUFFER_H__
+ 
++#include <linux/bits.h>
+ #include <linux/ctype.h>
+ #include <linux/list.h>
+ #include <linux/sched.h>
+@@ -100,6 +101,8 @@ struct drm_framebuffer_funcs {
+ 		     unsigned num_clips);
+ };
+ 
++#define DRM_FRAMEBUFFER_HAS_HANDLE_REF(_i)	BIT(0u + (_i))
++
+ /**
+  * struct drm_framebuffer - frame buffer object
+  *
+@@ -188,6 +191,10 @@ struct drm_framebuffer {
+ 	 * DRM_MODE_FB_MODIFIERS.
+ 	 */
+ 	int flags;
++	/**
++	 * @internal_flags: Framebuffer flags like DRM_FRAMEBUFFER_HAS_HANDLE_REF.
++	 */
++	unsigned int internal_flags;
+ 	/**
+ 	 * @filp_head: Placed on &drm_file.fbs, protected by &drm_file.fbs_lock.
+ 	 */
+diff --git a/include/drm/spsc_queue.h b/include/drm/spsc_queue.h
+index 125f096c88cb96..ee9df8cc67b730 100644
+--- a/include/drm/spsc_queue.h
++++ b/include/drm/spsc_queue.h
+@@ -70,9 +70,11 @@ static inline bool spsc_queue_push(struct spsc_queue *queue, struct spsc_node *n
+ 
+ 	preempt_disable();
+ 
++	atomic_inc(&queue->job_count);
++	smp_mb__after_atomic();
++
+ 	tail = (struct spsc_node **)atomic_long_xchg(&queue->tail, (long)&node->next);
+ 	WRITE_ONCE(*tail, node);
+-	atomic_inc(&queue->job_count);
+ 
+ 	/*
+ 	 * In case of first element verify new node will be visible to the consumer
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 9a1f0ee40b5661..7c2a66995518ad 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -268,11 +268,16 @@ static inline dev_t disk_devt(struct gendisk *disk)
+ 	return MKDEV(disk->major, disk->first_minor);
+ }
+ 
++#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ /*
+  * We should strive for 1 << (PAGE_SHIFT + MAX_PAGECACHE_ORDER)
+  * however we constrain this to what we can validate and test.
+  */
+ #define BLK_MAX_BLOCK_SIZE      SZ_64K
++#else
++#define BLK_MAX_BLOCK_SIZE      PAGE_SIZE
++#endif
++
+ 
+ /* blk_validate_limits() validates bsize, so drivers don't usually need to */
+ static inline int blk_validate_block_size(unsigned long bsize)
+diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h
+index 7edc3fb0641cba..f16a073928e9fc 100644
+--- a/include/linux/ieee80211.h
++++ b/include/linux/ieee80211.h
+@@ -662,18 +662,6 @@ static inline bool ieee80211_s1g_has_cssid(__le16 fc)
+ 		(fc & cpu_to_le16(IEEE80211_S1G_BCN_CSSID));
+ }
+ 
+-/**
+- * ieee80211_is_s1g_short_beacon - check if frame is an S1G short beacon
+- * @fc: frame control bytes in little-endian byteorder
+- * Return: whether or not the frame is an S1G short beacon,
+- *	i.e. it is an S1G beacon with 'next TBTT' flag set
+- */
+-static inline bool ieee80211_is_s1g_short_beacon(__le16 fc)
+-{
+-	return ieee80211_is_s1g_beacon(fc) &&
+-		(fc & cpu_to_le16(IEEE80211_S1G_BCN_NEXT_TBTT));
+-}
+-
+ /**
+  * ieee80211_is_atim - check if IEEE80211_FTYPE_MGMT && IEEE80211_STYPE_ATIM
+  * @fc: frame control bytes in little-endian byteorder
+@@ -4897,6 +4885,39 @@ static inline bool ieee80211_is_ftm(struct sk_buff *skb)
+ 	return false;
+ }
+ 
++/**
++ * ieee80211_is_s1g_short_beacon - check if frame is an S1G short beacon
++ * @fc: frame control bytes in little-endian byteorder
++ * @variable: pointer to the beacon frame elements
++ * @variable_len: length of the frame elements
++ * Return: whether or not the frame is an S1G short beacon. As per
++ *	IEEE80211-2024 11.1.3.10.1, the S1G beacon compatibility element shall
++ *	always be present as the first element in beacon frames generated at a
++ *	TBTT (Target Beacon Transmission Time), so any frame not containing
++ *	this element must have been generated at a TSBTT (Target Short Beacon
++ *	Transmission Time) that is not a TBTT. Additionally, short beacons are
++ *	prohibited from containing the S1G beacon compatibility element as per
++ *	IEEE80211-2024 9.3.4.3 Table 9-76, so if we have an S1G beacon with
++ *	either no elements or the first element is not the beacon compatibility
++ *	element, we have a short beacon.
++ */
++static inline bool ieee80211_is_s1g_short_beacon(__le16 fc, const u8 *variable,
++						 size_t variable_len)
++{
++	if (!ieee80211_is_s1g_beacon(fc))
++		return false;
++
++	/*
++	 * If the frame does not contain at least 1 element (this is perfectly
++	 * valid in a short beacon) and is an S1G beacon, we have a short
++	 * beacon.
++	 */
++	if (variable_len < 2)
++		return true;
++
++	return variable[0] != WLAN_EID_S1G_BCN_COMPAT;
++}
++
+ struct element {
+ 	u8 id;
+ 	u8 datalen;
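
Callers must now hand the helper the beacon body so it can inspect the first element. A hedged call-site sketch, assuming the received frame is laid out as struct ieee80211_ext and ignoring the optional S1G beacon fields that real callers account for before the element list begins:

	struct ieee80211_ext *ext = (void *)skb->data;
	size_t body_len = skb->len - offsetof(struct ieee80211_ext,
					      u.s1g_beacon.variable);
	bool short_bcn = ieee80211_is_s1g_short_beacon(ext->frame_control,
						       ext->u.s1g_beacon.variable,
						       body_len);
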
+diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
+index b44d201520d857..ef76eb5c572d05 100644
+--- a/include/linux/io_uring_types.h
++++ b/include/linux/io_uring_types.h
+@@ -701,6 +701,8 @@ struct io_kiocb {
+ 		struct hlist_node	hash_node;
+ 		/* For IOPOLL setup queues, with hybrid polling */
+ 		u64                     iopoll_start;
++		/* for private io_kiocb freeing */
++		struct rcu_head		rcu_head;
+ 	};
+ 	/* internal polling, see IORING_FEAT_FAST_POLL */
+ 	struct async_poll		*apoll;
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index e51dba8398f747..2e4584e1bfcd9a 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -2708,6 +2708,11 @@ static inline unsigned long get_mm_counter(struct mm_struct *mm, int member)
+ 	return percpu_counter_read_positive(&mm->rss_stat[member]);
+ }
+ 
++static inline unsigned long get_mm_counter_sum(struct mm_struct *mm, int member)
++{
++	return percpu_counter_sum_positive(&mm->rss_stat[member]);
++}
++
+ void mm_trace_rss_stat(struct mm_struct *mm, int member);
+ 
+ static inline void add_mm_counter(struct mm_struct *mm, int member, long value)
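
percpu_counter_read_positive() reads only the shared count, so it can lag the truth by up to the batch size on every CPU: cheap, but visibly wrong in /proc for freshly faulted or unmapped pages. percpu_counter_sum_positive() folds in every per-CPU delta under the counter lock, exact but O(nr_cpus). That tradeoff is why only the user-visible reporting paths in task_mmu.c above switch to the new helper. The choice, sketched:

static unsigned long rss_pages(struct mm_struct *mm, int member, bool exact)
{
	if (exact)	/* O(nr_cpus): folds all per-CPU deltas */
		return percpu_counter_sum_positive(&mm->rss_stat[member]);

	return percpu_counter_read_positive(&mm->rss_stat[member]);
}
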
+diff --git a/include/linux/psp-sev.h b/include/linux/psp-sev.h
+index f3cad182d4ef6f..1f3620aaa4e76b 100644
+--- a/include/linux/psp-sev.h
++++ b/include/linux/psp-sev.h
+@@ -594,6 +594,7 @@ struct sev_data_snp_addr {
+  * @imi_en: launch flow is launching an IMI (Incoming Migration Image) for the
+  *          purpose of guest-assisted migration.
+  * @rsvd: reserved
++ * @desired_tsc_khz: hypervisor desired mean TSC freq in kHz of the guest
+  * @gosvw: guest OS-visible workarounds, as defined by hypervisor
+  */
+ struct sev_data_snp_launch_start {
+@@ -603,6 +604,7 @@ struct sev_data_snp_launch_start {
+ 	u32 ma_en:1;				/* In */
+ 	u32 imi_en:1;				/* In */
+ 	u32 rsvd:30;
++	u32 desired_tsc_khz;			/* In */
+ 	u8 gosvw[16];				/* In */
+ } __packed;
+ 
+diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
+index 9e85424c834353..70302c92d329f6 100644
+--- a/include/net/af_vsock.h
++++ b/include/net/af_vsock.h
+@@ -242,8 +242,8 @@ int __vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
+ int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
+ 			size_t len, int flags);
+ 
+-#ifdef CONFIG_BPF_SYSCALL
+ extern struct proto vsock_proto;
++#ifdef CONFIG_BPF_SYSCALL
+ int vsock_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore);
+ void __init vsock_bpf_build_proto(void);
+ #else
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 6e9d2a856a6b0a..1cf60ed7ac89b3 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1346,8 +1346,7 @@ hci_conn_hash_lookup_big_state(struct hci_dev *hdev, __u8 handle,  __u16 state)
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != BIS_LINK || bacmp(&c->dst, BDADDR_ANY) ||
+-		    c->state != state)
++		if (c->type != BIS_LINK || c->state != state)
+ 			continue;
+ 
+ 		if (handle == c->iso_qos.bcast.big) {
+diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
+index d711642e78b57c..c003cd194fa2ae 100644
+--- a/include/net/netfilter/nf_flow_table.h
++++ b/include/net/netfilter/nf_flow_table.h
+@@ -370,7 +370,7 @@ static inline __be16 __nf_flow_pppoe_proto(const struct sk_buff *skb)
+ 
+ static inline bool nf_flow_pppoe_proto(struct sk_buff *skb, __be16 *inner_proto)
+ {
+-	if (!pskb_may_pull(skb, PPPOE_SES_HLEN))
++	if (!pskb_may_pull(skb, ETH_HLEN + PPPOE_SES_HLEN))
+ 		return false;
+ 
+ 	*inner_proto = __nf_flow_pppoe_proto(skb);
+diff --git a/include/sound/soc-acpi.h b/include/sound/soc-acpi.h
+index 72e371a217676b..b8af309c2683f9 100644
+--- a/include/sound/soc-acpi.h
++++ b/include/sound/soc-acpi.h
+@@ -10,6 +10,7 @@
+ #include <linux/acpi.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/soundwire/sdw.h>
++#include <sound/soc.h>
+ 
+ struct snd_soc_acpi_package_context {
+ 	char *name;           /* package name */
+@@ -193,6 +194,15 @@ struct snd_soc_acpi_link_adr {
+  *  is not constant since this field may be updated at run-time
+  * @sof_tplg_filename: Sound Open Firmware topology file name, if enabled
+  * @tplg_quirk_mask: quirks to select different topology files dynamically
++ * @get_function_tplg_files: optional callback; if set, it returns the list of
++ *	function-topology files to load instead of the single sof_tplg_filename.
++ *	Return value: the number of files, or a negative errno. 0 means the
++ *		      single topology file is used and no function-topology
++ *		      split is possible on the machine.
++ *	@card: pointer to the card
++ *	@mach: pointer to the machine driver
++ *	@prefix: prefix of the topology file name (typically the path)
++ *	@tplg_files: pointer to the array of topology file names
+  */
+ /* Descriptor for SST ASoC machine driver */
+ struct snd_soc_acpi_mach {
+@@ -212,6 +222,9 @@ struct snd_soc_acpi_mach {
+ 	struct snd_soc_acpi_mach_params mach_params;
+ 	const char *sof_tplg_filename;
+ 	const u32 tplg_quirk_mask;
++	int (*get_function_tplg_files)(struct snd_soc_card *card,
++				       const struct snd_soc_acpi_mach *mach,
++				       const char *prefix, const char ***tplg_files);
+ };
+ 
+ #define SND_SOC_ACPI_MAX_CODECS 3
+diff --git a/include/trace/events/erofs.h b/include/trace/events/erofs.h
+index a71b19ed5d0cf6..dad7360f42f953 100644
+--- a/include/trace/events/erofs.h
++++ b/include/trace/events/erofs.h
+@@ -113,7 +113,7 @@ TRACE_EVENT(erofs_read_folio,
+ 		__entry->raw)
+ );
+ 
+-TRACE_EVENT(erofs_readpages,
++TRACE_EVENT(erofs_readahead,
+ 
+ 	TP_PROTO(struct inode *inode, pgoff_t start, unsigned int nrpage,
+ 		bool raw),
+diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
+index 50a958e9c92129..c73eca5943c0cd 100644
+--- a/io_uring/msg_ring.c
++++ b/io_uring/msg_ring.c
+@@ -82,7 +82,7 @@ static void io_msg_tw_complete(struct io_kiocb *req, io_tw_token_t tw)
+ 		spin_unlock(&ctx->msg_lock);
+ 	}
+ 	if (req)
+-		kmem_cache_free(req_cachep, req);
++		kfree_rcu(req, rcu_head);
+ 	percpu_ref_put(&ctx->refs);
+ }
+ 
+@@ -90,7 +90,7 @@ static int io_msg_remote_post(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ 			      int res, u32 cflags, u64 user_data)
+ {
+ 	if (!READ_ONCE(ctx->submitter_task)) {
+-		kmem_cache_free(req_cachep, req);
++		kfree_rcu(req, rcu_head);
+ 		return -EOWNERDEAD;
+ 	}
+ 	req->opcode = IORING_OP_NOP;
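
These msg_ring hunks work together with the rcu_head added to struct io_kiocb
above: handing the request to kfree_rcu() defers the free until concurrent RCU
readers are done with it. A rough user-space sketch of the same deferred-free
pattern, assuming the liburcu library and its call_rcu()/rcu_barrier() API
(build with -lurcu); this illustrates the idea only, not the io_uring code:

#include <stdio.h>
#include <stdlib.h>
#include <urcu.h>

struct req {
	int opcode;
	struct rcu_head rcu_head;	/* embedded, like io_kiocb's new member */
};

static void req_free_cb(struct rcu_head *head)
{
	struct req *r = caa_container_of(head, struct req, rcu_head);

	printf("freeing opcode %d after grace period\n", r->opcode);
	free(r);
}

int main(void)
{
	struct req *r = calloc(1, sizeof(*r));

	rcu_register_thread();
	r->opcode = 42;
	/* Readers inside rcu_read_lock() sections that saw r may still use
	 * it; the actual free runs only after a grace period elapses. */
	call_rcu(&r->rcu_head, req_free_cb);
	rcu_barrier();			/* demo only: flush pending callbacks */
	rcu_unregister_thread();
	return 0;
}
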
+diff --git a/io_uring/opdef.c b/io_uring/opdef.c
+index 489384c0438bd8..78ef5976bf003d 100644
+--- a/io_uring/opdef.c
++++ b/io_uring/opdef.c
+@@ -216,6 +216,7 @@ const struct io_issue_def io_issue_defs[] = {
+ 	},
+ 	[IORING_OP_FALLOCATE] = {
+ 		.needs_file		= 1,
++		.hash_reg_file          = 1,
+ 		.prep			= io_fallocate_prep,
+ 		.issue			= io_fallocate,
+ 	},
+diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
+index a53058dd6b7a18..adb1e426987edd 100644
+--- a/io_uring/zcrx.c
++++ b/io_uring/zcrx.c
+@@ -676,10 +676,7 @@ static int io_pp_zc_init(struct page_pool *pp)
+ static void io_pp_zc_destroy(struct page_pool *pp)
+ {
+ 	struct io_zcrx_ifq *ifq = io_pp_to_ifq(pp);
+-	struct io_zcrx_area *area = ifq->area;
+ 
+-	if (WARN_ON_ONCE(area->free_count != area->nia.num_niovs))
+-		return;
+ 	percpu_ref_put(&ifq->ctx->refs);
+ }
+ 
+diff --git a/kernel/bpf/bpf_lru_list.c b/kernel/bpf/bpf_lru_list.c
+index 3dabdd137d1021..2d6e1c98d8adc3 100644
+--- a/kernel/bpf/bpf_lru_list.c
++++ b/kernel/bpf/bpf_lru_list.c
+@@ -337,12 +337,12 @@ static void bpf_lru_list_pop_free_to_local(struct bpf_lru *lru,
+ 				 list) {
+ 		__bpf_lru_node_move_to_free(l, node, local_free_list(loc_l),
+ 					    BPF_LRU_LOCAL_LIST_T_FREE);
+-		if (++nfree == LOCAL_FREE_TARGET)
++		if (++nfree == lru->target_free)
+ 			break;
+ 	}
+ 
+-	if (nfree < LOCAL_FREE_TARGET)
+-		__bpf_lru_list_shrink(lru, l, LOCAL_FREE_TARGET - nfree,
++	if (nfree < lru->target_free)
++		__bpf_lru_list_shrink(lru, l, lru->target_free - nfree,
+ 				      local_free_list(loc_l),
+ 				      BPF_LRU_LOCAL_LIST_T_FREE);
+ 
+@@ -577,6 +577,9 @@ static void bpf_common_lru_populate(struct bpf_lru *lru, void *buf,
+ 		list_add(&node->list, &l->lists[BPF_LRU_LIST_T_FREE]);
+ 		buf += elem_size;
+ 	}
++
++	lru->target_free = clamp((nr_elems / num_possible_cpus()) / 2,
++				 1, LOCAL_FREE_TARGET);
+ }
+ 
+ static void bpf_percpu_lru_populate(struct bpf_lru *lru, void *buf,
+diff --git a/kernel/bpf/bpf_lru_list.h b/kernel/bpf/bpf_lru_list.h
+index cbd8d3720c2bbe..fe2661a58ea94a 100644
+--- a/kernel/bpf/bpf_lru_list.h
++++ b/kernel/bpf/bpf_lru_list.h
+@@ -58,6 +58,7 @@ struct bpf_lru {
+ 	del_from_htab_func del_from_htab;
+ 	void *del_arg;
+ 	unsigned int hash_offset;
++	unsigned int target_free;
+ 	unsigned int nr_scans;
+ 	bool percpu;
+ };
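
The populate-time hunk sizes the new target_free field from the element count
instead of always shrinking toward a fixed constant. The arithmetic is easy to
check in isolation; a small sketch (the value 128 for LOCAL_FREE_TARGET is an
assumption for the demo):

#include <stdio.h>

#define LOCAL_FREE_TARGET 128	/* assumed; upper clamp bound */

static unsigned int clamp_uint(unsigned int v, unsigned int lo, unsigned int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/* target_free = clamp((nr_elems / nr_cpus) / 2, 1, LOCAL_FREE_TARGET) */
static unsigned int lru_target_free(unsigned int nr_elems, unsigned int nr_cpus)
{
	return clamp_uint(nr_elems / nr_cpus / 2, 1, LOCAL_FREE_TARGET);
}

int main(void)
{
	/* A small map on many CPUs now aims for 4 local frees, not 128. */
	printf("%u\n", lru_target_free(512, 64));
	printf("%u\n", lru_target_free(1u << 20, 8));
	return 0;
}
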
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 2d1131e2cfc02c..5a2ed3344392fa 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -951,8 +951,6 @@ static void perf_cgroup_switch(struct task_struct *task)
+ 	if (READ_ONCE(cpuctx->cgrp) == NULL)
+ 		return;
+ 
+-	WARN_ON_ONCE(cpuctx->ctx.nr_cgroups == 0);
+-
+ 	cgrp = perf_cgroup_from_task(task, NULL);
+ 	if (READ_ONCE(cpuctx->cgrp) == cgrp)
+ 		return;
+@@ -964,6 +962,8 @@ static void perf_cgroup_switch(struct task_struct *task)
+ 	if (READ_ONCE(cpuctx->cgrp) == NULL)
+ 		return;
+ 
++	WARN_ON_ONCE(cpuctx->ctx.nr_cgroups == 0);
++
+ 	perf_ctx_disable(&cpuctx->ctx, true);
+ 
+ 	ctx_sched_out(&cpuctx->ctx, NULL, EVENT_ALL|EVENT_CGROUP);
+@@ -11050,7 +11050,7 @@ static int perf_uprobe_event_init(struct perf_event *event)
+ 	if (event->attr.type != perf_uprobe.type)
+ 		return -ENOENT;
+ 
+-	if (!perfmon_capable())
++	if (!capable(CAP_SYS_ADMIN))
+ 		return -EACCES;
+ 
+ 	/*
+diff --git a/kernel/module/main.c b/kernel/module/main.c
+index 9861c2ac5fd500..9d8a845d946657 100644
+--- a/kernel/module/main.c
++++ b/kernel/module/main.c
+@@ -2615,7 +2615,7 @@ static int find_module_sections(struct module *mod, struct load_info *info)
+ static int move_module(struct module *mod, struct load_info *info)
+ {
+ 	int i;
+-	enum mod_mem_type t = 0;
++	enum mod_mem_type t = MOD_MEM_NUM_TYPES;
+ 	int ret = -ENOMEM;
+ 	bool codetag_section_found = false;
+ 
+@@ -2694,7 +2694,7 @@ static int move_module(struct module *mod, struct load_info *info)
+ 	return 0;
+ out_err:
+ 	module_memory_restore_rox(mod);
+-	for (t--; t >= 0; t--)
++	while (t--)
+ 		module_memory_free(mod, t);
+ 	if (codetag_section_found)
+ 		codetag_free_module_sections(mod);
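
The key detail in the move_module() fix is the unwind loop: seeding the cursor
with the number of types and stepping with while (t--) visits exactly the
entries set up so far, and keeps working when the counter has an unsigned type
(the old "for (t--; t >= 0; t--)" never terminates in that case). A standalone
sketch of the pattern:

#include <stdio.h>
#include <stdlib.h>

#define NTYPES 4	/* stand-in for MOD_MEM_NUM_TYPES */

int main(void)
{
	void *mem[NTYPES] = { NULL };
	unsigned int t;	/* unsigned: "t >= 0" would always be true */

	for (t = 0; t < NTYPES; t++) {
		mem[t] = malloc(16);
		if (!mem[t])
			break;	/* t indexes the slot that failed */
	}

	/* Unwind only what was set up; the first t-- skips the failed slot. */
	while (t--) {
		free(mem[t]);
		printf("freed slot %u\n", t);
	}
	return 0;
}
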
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 39fac649aa142a..7d5f51e2f76156 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -3935,6 +3935,11 @@ static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
+ 	if (!scx_allow_ttwu_queue(p))
+ 		return false;
+ 
++#ifdef CONFIG_SMP
++	if (p->sched_class == &stop_sched_class)
++		return false;
++#endif
++
+ 	/*
+ 	 * Do not complicate things with the async wake_list while the CPU is
+ 	 * in hotplug state.
+@@ -7690,7 +7695,7 @@ const char *preempt_model_str(void)
+ 
+ 		if (IS_ENABLED(CONFIG_PREEMPT_DYNAMIC)) {
+ 			seq_buf_printf(&s, "(%s)%s",
+-				       preempt_dynamic_mode > 0 ?
++				       preempt_dynamic_mode >= 0 ?
+ 				       preempt_modes[preempt_dynamic_mode] : "undef",
+ 				       brace ? "}" : "");
+ 			return seq_buf_str(&s);
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index ad45a8fea245ee..89019a14082642 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -1504,7 +1504,9 @@ static void update_curr_dl_se(struct rq *rq, struct sched_dl_entity *dl_se, s64
+ 	if (dl_entity_is_special(dl_se))
+ 		return;
+ 
+-	scaled_delta_exec = dl_scaled_delta_exec(rq, dl_se, delta_exec);
++	scaled_delta_exec = delta_exec;
++	if (!dl_server(dl_se))
++		scaled_delta_exec = dl_scaled_delta_exec(rq, dl_se, delta_exec);
+ 
+ 	dl_se->runtime -= scaled_delta_exec;
+ 
+@@ -1611,7 +1613,7 @@ static void update_curr_dl_se(struct rq *rq, struct sched_dl_entity *dl_se, s64
+  */
+ void dl_server_update_idle_time(struct rq *rq, struct task_struct *p)
+ {
+-	s64 delta_exec, scaled_delta_exec;
++	s64 delta_exec;
+ 
+ 	if (!rq->fair_server.dl_defer)
+ 		return;
+@@ -1624,9 +1626,7 @@ void dl_server_update_idle_time(struct rq *rq, struct task_struct *p)
+ 	if (delta_exec < 0)
+ 		return;
+ 
+-	scaled_delta_exec = dl_scaled_delta_exec(rq, &rq->fair_server, delta_exec);
+-
+-	rq->fair_server.runtime -= scaled_delta_exec;
++	rq->fair_server.runtime -= delta_exec;
+ 
+ 	if (rq->fair_server.runtime < 0) {
+ 		rq->fair_server.dl_defer_running = 0;
+diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
+index 5d2d0562115b31..3fe6b0c99f3d89 100644
+--- a/kernel/stop_machine.c
++++ b/kernel/stop_machine.c
+@@ -82,18 +82,15 @@ static void cpu_stop_signal_done(struct cpu_stop_done *done)
+ }
+ 
+ static void __cpu_stop_queue_work(struct cpu_stopper *stopper,
+-					struct cpu_stop_work *work,
+-					struct wake_q_head *wakeq)
++				  struct cpu_stop_work *work)
+ {
+ 	list_add_tail(&work->list, &stopper->works);
+-	wake_q_add(wakeq, stopper->thread);
+ }
+ 
+ /* queue @work to @stopper.  if offline, @work is completed immediately */
+ static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
+ {
+ 	struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu);
+-	DEFINE_WAKE_Q(wakeq);
+ 	unsigned long flags;
+ 	bool enabled;
+ 
+@@ -101,12 +98,13 @@ static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
+ 	raw_spin_lock_irqsave(&stopper->lock, flags);
+ 	enabled = stopper->enabled;
+ 	if (enabled)
+-		__cpu_stop_queue_work(stopper, work, &wakeq);
++		__cpu_stop_queue_work(stopper, work);
+ 	else if (work->done)
+ 		cpu_stop_signal_done(work->done);
+ 	raw_spin_unlock_irqrestore(&stopper->lock, flags);
+ 
+-	wake_up_q(&wakeq);
++	if (enabled)
++		wake_up_process(stopper->thread);
+ 	preempt_enable();
+ 
+ 	return enabled;
+@@ -264,7 +262,6 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
+ {
+ 	struct cpu_stopper *stopper1 = per_cpu_ptr(&cpu_stopper, cpu1);
+ 	struct cpu_stopper *stopper2 = per_cpu_ptr(&cpu_stopper, cpu2);
+-	DEFINE_WAKE_Q(wakeq);
+ 	int err;
+ 
+ retry:
+@@ -300,8 +297,8 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
+ 	}
+ 
+ 	err = 0;
+-	__cpu_stop_queue_work(stopper1, work1, &wakeq);
+-	__cpu_stop_queue_work(stopper2, work2, &wakeq);
++	__cpu_stop_queue_work(stopper1, work1);
++	__cpu_stop_queue_work(stopper2, work2);
+ 
+ unlock:
+ 	raw_spin_unlock(&stopper2->lock);
+@@ -316,7 +313,10 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
+ 		goto retry;
+ 	}
+ 
+-	wake_up_q(&wakeq);
++	if (!err) {
++		wake_up_process(stopper1->thread);
++		wake_up_process(stopper2->thread);
++	}
+ 	preempt_enable();
+ 
+ 	return err;
+diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
+index ac72849b5da93a..d933eca329afb6 100644
+--- a/lib/alloc_tag.c
++++ b/lib/alloc_tag.c
+@@ -134,6 +134,9 @@ size_t alloc_tag_top_users(struct codetag_bytes *tags, size_t count, bool can_sl
+ 	struct codetag_bytes n;
+ 	unsigned int i, nr = 0;
+ 
++	if (IS_ERR_OR_NULL(alloc_tag_cttype))
++		return 0;
++
+ 	if (can_sleep)
+ 		codetag_lock_module_list(alloc_tag_cttype, true);
+ 	else if (!codetag_trylock_module_list(alloc_tag_cttype))
+diff --git a/lib/maple_tree.c b/lib/maple_tree.c
+index 3ecb1c390a6ac7..3fead314f5c615 100644
+--- a/lib/maple_tree.c
++++ b/lib/maple_tree.c
+@@ -5288,6 +5288,7 @@ static void mt_destroy_walk(struct maple_enode *enode, struct maple_tree *mt,
+ 	struct maple_enode *start;
+ 
+ 	if (mte_is_leaf(enode)) {
++		mte_set_node_dead(enode);
+ 		node->type = mte_node_type(enode);
+ 		goto free_leaf;
+ 	}
+diff --git a/mm/damon/core.c b/mm/damon/core.c
+index f0c1676f0599e0..629c9a1adff8ac 100644
+--- a/mm/damon/core.c
++++ b/mm/damon/core.c
+@@ -1427,6 +1427,7 @@ static unsigned long damon_get_intervals_score(struct damon_ctx *c)
+ 		}
+ 	}
+ 	target_access_events = max_access_events * goal_bp / 10000;
++	target_access_events = target_access_events ? : 1;
+ 	return access_events * 10000 / target_access_events;
+ }
+ 
+@@ -2306,9 +2307,8 @@ static void kdamond_usleep(unsigned long usecs)
+  *
+  * If there is a &struct damon_call_control request that registered via
+  * &damon_call() on @ctx, do or cancel the invocation of the function depending
+- * on @cancel.  @cancel is set when the kdamond is deactivated by DAMOS
+- * watermarks, or the kdamond is already out of the main loop and therefore
+- * will be terminated.
++ * on @cancel.  @cancel is set when the kdamond is already out of the main loop
++ * and therefore will be terminated.
+  */
+ static void kdamond_call(struct damon_ctx *ctx, bool cancel)
+ {
+@@ -2356,7 +2356,7 @@ static int kdamond_wait_activation(struct damon_ctx *ctx)
+ 		if (ctx->callback.after_wmarks_check &&
+ 				ctx->callback.after_wmarks_check(ctx))
+ 			break;
+-		kdamond_call(ctx, true);
++		kdamond_call(ctx, false);
+ 		damos_walk_cancel(ctx);
+ 	}
+ 	return -EBUSY;
+diff --git a/mm/kasan/report.c b/mm/kasan/report.c
+index 8357e1a33699b5..b0877035491f80 100644
+--- a/mm/kasan/report.c
++++ b/mm/kasan/report.c
+@@ -370,36 +370,6 @@ static inline bool init_task_stack_addr(const void *addr)
+ 			sizeof(init_thread_union.stack));
+ }
+ 
+-/*
+- * This function is invoked with report_lock (a raw_spinlock) held. A
+- * PREEMPT_RT kernel cannot call find_vm_area() as it will acquire a sleeping
+- * rt_spinlock.
+- *
+- * For !RT kernel, the PROVE_RAW_LOCK_NESTING config option will print a
+- * lockdep warning for this raw_spinlock -> spinlock dependency. This config
+- * option is enabled by default to ensure better test coverage to expose this
+- * kind of RT kernel problem. This lockdep splat, however, can be suppressed
+- * by using DEFINE_WAIT_OVERRIDE_MAP() if it serves a useful purpose and the
+- * invalid PREEMPT_RT case has been taken care of.
+- */
+-static inline struct vm_struct *kasan_find_vm_area(void *addr)
+-{
+-	static DEFINE_WAIT_OVERRIDE_MAP(vmalloc_map, LD_WAIT_SLEEP);
+-	struct vm_struct *va;
+-
+-	if (IS_ENABLED(CONFIG_PREEMPT_RT))
+-		return NULL;
+-
+-	/*
+-	 * Suppress lockdep warning and fetch vmalloc area of the
+-	 * offending address.
+-	 */
+-	lock_map_acquire_try(&vmalloc_map);
+-	va = find_vm_area(addr);
+-	lock_map_release(&vmalloc_map);
+-	return va;
+-}
+-
+ static void print_address_description(void *addr, u8 tag,
+ 				      struct kasan_report_info *info)
+ {
+@@ -429,19 +399,8 @@ static void print_address_description(void *addr, u8 tag,
+ 	}
+ 
+ 	if (is_vmalloc_addr(addr)) {
+-		struct vm_struct *va = kasan_find_vm_area(addr);
+-
+-		if (va) {
+-			pr_err("The buggy address belongs to the virtual mapping at\n"
+-			       " [%px, %px) created by:\n"
+-			       " %pS\n",
+-			       va->addr, va->addr + va->size, va->caller);
+-			pr_err("\n");
+-
+-			page = vmalloc_to_page(addr);
+-		} else {
+-			pr_err("The buggy address %px belongs to a vmalloc virtual mapping\n", addr);
+-		}
++		pr_err("The buggy address %px belongs to a vmalloc virtual mapping\n", addr);
++		page = vmalloc_to_page(addr);
+ 	}
+ 
+ 	if (page) {
+diff --git a/mm/rmap.c b/mm/rmap.c
+index 67bb273dfb80da..e2c87e3af2d564 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -1845,23 +1845,32 @@ void folio_remove_rmap_pud(struct folio *folio, struct page *page,
+ #endif
+ }
+ 
+-/* We support batch unmapping of PTEs for lazyfree large folios */
+-static inline bool can_batch_unmap_folio_ptes(unsigned long addr,
+-			struct folio *folio, pte_t *ptep)
++static inline unsigned int folio_unmap_pte_batch(struct folio *folio,
++			struct page_vma_mapped_walk *pvmw,
++			enum ttu_flags flags, pte_t pte)
+ {
+ 	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+-	int max_nr = folio_nr_pages(folio);
+-	pte_t pte = ptep_get(ptep);
++	unsigned long end_addr, addr = pvmw->address;
++	struct vm_area_struct *vma = pvmw->vma;
++	unsigned int max_nr;
++
++	if (flags & TTU_HWPOISON)
++		return 1;
++	if (!folio_test_large(folio))
++		return 1;
+ 
++	/* We may only batch within a single VMA and a single page table. */
++	end_addr = pmd_addr_end(addr, vma->vm_end);
++	max_nr = (end_addr - addr) >> PAGE_SHIFT;
++
++	/* We only support lazyfree batching for now ... */
+ 	if (!folio_test_anon(folio) || folio_test_swapbacked(folio))
+-		return false;
++		return 1;
+ 	if (pte_unused(pte))
+-		return false;
+-	if (pte_pfn(pte) != folio_pfn(folio))
+-		return false;
++		return 1;
+ 
+-	return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
+-			       NULL, NULL) == max_nr;
++	return folio_pte_batch(folio, addr, pvmw->pte, pte, max_nr, fpb_flags,
++			       NULL, NULL, NULL);
+ }
+ 
+ /*
+@@ -2024,9 +2033,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
+ 			if (pte_dirty(pteval))
+ 				folio_mark_dirty(folio);
+ 		} else if (likely(pte_present(pteval))) {
+-			if (folio_test_large(folio) && !(flags & TTU_HWPOISON) &&
+-			    can_batch_unmap_folio_ptes(address, folio, pvmw.pte))
+-				nr_pages = folio_nr_pages(folio);
++			nr_pages = folio_unmap_pte_batch(folio, &pvmw, flags, pteval);
+ 			end_addr = address + nr_pages * PAGE_SIZE;
+ 			flush_cache_range(vma, address, end_addr);
+ 
+@@ -2206,13 +2213,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
+ 			hugetlb_remove_rmap(folio);
+ 		} else {
+ 			folio_remove_rmap_ptes(folio, subpage, nr_pages, vma);
+-			folio_ref_sub(folio, nr_pages - 1);
+ 		}
+ 		if (vma->vm_flags & VM_LOCKED)
+ 			mlock_drain_local();
+-		folio_put(folio);
+-		/* We have already batched the entire folio */
+-		if (nr_pages > 1)
++		folio_put_refs(folio, nr_pages);
++
++		/*
++		 * If we batched the entire folio and cleared all PTEs, we can
++		 * stop the page table walk right here.
++		 */
++		if (nr_pages == folio_nr_pages(folio))
+ 			goto walk_done;
+ 		continue;
+ walk_abort:
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 3fb534dcf14d93..b679c33168011c 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -487,6 +487,7 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
+ 		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
+ 		pgtbl_mod_mask *mask)
+ {
++	int err = 0;
+ 	pte_t *pte;
+ 
+ 	/*
+@@ -500,18 +501,25 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
+ 	do {
+ 		struct page *page = pages[*nr];
+ 
+-		if (WARN_ON(!pte_none(ptep_get(pte))))
+-			return -EBUSY;
+-		if (WARN_ON(!page))
+-			return -ENOMEM;
+-		if (WARN_ON(!pfn_valid(page_to_pfn(page))))
+-			return -EINVAL;
++		if (WARN_ON(!pte_none(ptep_get(pte)))) {
++			err = -EBUSY;
++			break;
++		}
++		if (WARN_ON(!page)) {
++			err = -ENOMEM;
++			break;
++		}
++		if (WARN_ON(!pfn_valid(page_to_pfn(page)))) {
++			err = -EINVAL;
++			break;
++		}
+ 
+ 		set_pte_at(&init_mm, addr, pte, mk_pte(page, prot));
+ 		(*nr)++;
+ 	} while (pte++, addr += PAGE_SIZE, addr != end);
+ 	*mask |= PGTBL_PTE_MODIFIED;
+-	return 0;
++
++	return err;
+ }
+ 
+ static int vmap_pages_pmd_range(pud_t *pud, unsigned long addr,
+diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c
+index b068651984fe3d..fa7f002b14fa3c 100644
+--- a/net/appletalk/ddp.c
++++ b/net/appletalk/ddp.c
+@@ -576,6 +576,7 @@ static int atrtr_create(struct rtentry *r, struct net_device *devhint)
+ 
+ 	/* Fill in the routing entry */
+ 	rt->target  = ta->sat_addr;
++	dev_put(rt->dev); /* Release old device */
+ 	dev_hold(devhint);
+ 	rt->dev     = devhint;
+ 	rt->flags   = r->rt_flags;
+diff --git a/net/atm/clip.c b/net/atm/clip.c
+index b234dc3bcb0d4a..f7a5565e794ef1 100644
+--- a/net/atm/clip.c
++++ b/net/atm/clip.c
+@@ -45,7 +45,8 @@
+ #include <net/atmclip.h>
+ 
+ static struct net_device *clip_devs;
+-static struct atm_vcc *atmarpd;
++static struct atm_vcc __rcu *atmarpd;
++static DEFINE_MUTEX(atmarpd_lock);
+ static struct timer_list idle_timer;
+ static const struct neigh_ops clip_neigh_ops;
+ 
+@@ -53,24 +54,35 @@ static int to_atmarpd(enum atmarp_ctrl_type type, int itf, __be32 ip)
+ {
+ 	struct sock *sk;
+ 	struct atmarp_ctrl *ctrl;
++	struct atm_vcc *vcc;
+ 	struct sk_buff *skb;
++	int err = 0;
+ 
+ 	pr_debug("(%d)\n", type);
+-	if (!atmarpd)
+-		return -EUNATCH;
++
++	rcu_read_lock();
++	vcc = rcu_dereference(atmarpd);
++	if (!vcc) {
++		err = -EUNATCH;
++		goto unlock;
++	}
+ 	skb = alloc_skb(sizeof(struct atmarp_ctrl), GFP_ATOMIC);
+-	if (!skb)
+-		return -ENOMEM;
++	if (!skb) {
++		err = -ENOMEM;
++		goto unlock;
++	}
+ 	ctrl = skb_put(skb, sizeof(struct atmarp_ctrl));
+ 	ctrl->type = type;
+ 	ctrl->itf_num = itf;
+ 	ctrl->ip = ip;
+-	atm_force_charge(atmarpd, skb->truesize);
++	atm_force_charge(vcc, skb->truesize);
+ 
+-	sk = sk_atm(atmarpd);
++	sk = sk_atm(vcc);
+ 	skb_queue_tail(&sk->sk_receive_queue, skb);
+ 	sk->sk_data_ready(sk);
+-	return 0;
++unlock:
++	rcu_read_unlock();
++	return err;
+ }
+ 
+ static void link_vcc(struct clip_vcc *clip_vcc, struct atmarp_entry *entry)
+@@ -417,6 +429,8 @@ static int clip_mkip(struct atm_vcc *vcc, int timeout)
+ 
+ 	if (!vcc->push)
+ 		return -EBADFD;
++	if (vcc->user_back)
++		return -EINVAL;
+ 	clip_vcc = kmalloc(sizeof(struct clip_vcc), GFP_KERNEL);
+ 	if (!clip_vcc)
+ 		return -ENOMEM;
+@@ -607,17 +621,27 @@ static void atmarpd_close(struct atm_vcc *vcc)
+ {
+ 	pr_debug("\n");
+ 
+-	rtnl_lock();
+-	atmarpd = NULL;
++	mutex_lock(&atmarpd_lock);
++	RCU_INIT_POINTER(atmarpd, NULL);
++	mutex_unlock(&atmarpd_lock);
++
++	synchronize_rcu();
+ 	skb_queue_purge(&sk_atm(vcc)->sk_receive_queue);
+-	rtnl_unlock();
+ 
+ 	pr_debug("(done)\n");
+ 	module_put(THIS_MODULE);
+ }
+ 
++static int atmarpd_send(struct atm_vcc *vcc, struct sk_buff *skb)
++{
++	atm_return_tx(vcc, skb);
++	dev_kfree_skb_any(skb);
++	return 0;
++}
++
+ static const struct atmdev_ops atmarpd_dev_ops = {
+-	.close = atmarpd_close
++	.close = atmarpd_close,
++	.send = atmarpd_send
+ };
+ 
+ 
+@@ -631,15 +655,18 @@ static struct atm_dev atmarpd_dev = {
+ 
+ static int atm_init_atmarp(struct atm_vcc *vcc)
+ {
+-	rtnl_lock();
++	if (vcc->push == clip_push)
++		return -EINVAL;
++
++	mutex_lock(&atmarpd_lock);
+ 	if (atmarpd) {
+-		rtnl_unlock();
++		mutex_unlock(&atmarpd_lock);
+ 		return -EADDRINUSE;
+ 	}
+ 
+ 	mod_timer(&idle_timer, jiffies + CLIP_CHECK_INTERVAL * HZ);
+ 
+-	atmarpd = vcc;
++	rcu_assign_pointer(atmarpd, vcc);
+ 	set_bit(ATM_VF_META, &vcc->flags);
+ 	set_bit(ATM_VF_READY, &vcc->flags);
+ 	    /* allow replies and avoid getting closed if signaling dies */
+@@ -648,13 +675,14 @@ static int atm_init_atmarp(struct atm_vcc *vcc)
+ 	vcc->push = NULL;
+ 	vcc->pop = NULL; /* crash */
+ 	vcc->push_oam = NULL; /* crash */
+-	rtnl_unlock();
++	mutex_unlock(&atmarpd_lock);
+ 	return 0;
+ }
+ 
+ static int clip_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ {
+ 	struct atm_vcc *vcc = ATM_SD(sock);
++	struct sock *sk = sock->sk;
+ 	int err = 0;
+ 
+ 	switch (cmd) {
+@@ -675,14 +703,18 @@ static int clip_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ 		err = clip_create(arg);
+ 		break;
+ 	case ATMARPD_CTRL:
++		lock_sock(sk);
+ 		err = atm_init_atmarp(vcc);
+ 		if (!err) {
+ 			sock->state = SS_CONNECTED;
+ 			__module_get(THIS_MODULE);
+ 		}
++		release_sock(sk);
+ 		break;
+ 	case ATMARP_MKIP:
++		lock_sock(sk);
+ 		err = clip_mkip(vcc, arg);
++		release_sock(sk);
+ 		break;
+ 	case ATMARP_SETENTRY:
+ 		err = clip_setentry(vcc, (__force __be32)arg);
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 4d5ace9d245d9c..992131f88a4568 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6966,7 +6966,10 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ 		bis->iso_qos.bcast.in.sdu = le16_to_cpu(ev->max_pdu);
+ 
+ 		if (!ev->status) {
++			bis->state = BT_CONNECTED;
+ 			set_bit(HCI_CONN_BIG_SYNC, &bis->flags);
++			hci_debugfs_create_conn(bis);
++			hci_conn_add_sysfs(bis);
+ 			hci_iso_setup_path(bis);
+ 		}
+ 	}
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 9955d6cd7b76f5..3ac8d436e3e3a5 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -1345,7 +1345,7 @@ int hci_setup_ext_adv_instance_sync(struct hci_dev *hdev, u8 instance)
+ 	 * Command Disallowed error, so we must first disable the
+ 	 * instance if it is active.
+ 	 */
+-	if (adv && !adv->pending) {
++	if (adv) {
+ 		err = hci_disable_ext_adv_instance_sync(hdev, instance);
+ 		if (err)
+ 			return err;
+@@ -5479,7 +5479,7 @@ static int hci_disconnect_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ {
+ 	struct hci_cp_disconnect cp;
+ 
+-	if (test_bit(HCI_CONN_BIG_CREATED, &conn->flags)) {
++	if (conn->type == BIS_LINK) {
+ 		/* This is a BIS connection, hci_conn_del will
+ 		 * do the necessary cleanup.
+ 		 */
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 6edc441b37023d..905cf7635f77fa 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1154,7 +1154,7 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
+ 		goto do_error;
+ 
+ 	while (msg_data_left(msg)) {
+-		ssize_t copy = 0;
++		int copy = 0;
+ 
+ 		skb = tcp_write_queue_tail(sk);
+ 		if (skb)
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index c6b22170dc4928..4ef3a3c166c6e9 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -3525,11 +3525,9 @@ static void addrconf_gre_config(struct net_device *dev)
+ 
+ 	ASSERT_RTNL();
+ 
+-	idev = ipv6_find_idev(dev);
+-	if (IS_ERR(idev)) {
+-		pr_debug("%s: add_dev failed\n", __func__);
++	idev = addrconf_add_dev(dev);
++	if (IS_ERR(idev))
+ 		return;
+-	}
+ 
+ 	/* Generate the IPv6 link-local address using addrconf_addr_gen(),
+ 	 * unless we have an IPv4 GRE device not bound to an IP address and
+@@ -3543,9 +3541,6 @@ static void addrconf_gre_config(struct net_device *dev)
+ 	}
+ 
+ 	add_v4_addrs(idev);
+-
+-	if (dev->flags & IFF_POINTOPOINT)
+-		addrconf_add_mroute(dev);
+ }
+ #endif
+ 
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index acfde525fad2fa..4a8d9c3ea480f6 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -1942,6 +1942,20 @@ static int sta_link_apply_parameters(struct ieee80211_local *local,
+ 	ieee80211_sta_init_nss(link_sta);
+ 
+ 	if (params->opmode_notif_used) {
++		enum nl80211_chan_width width = link->conf->chanreq.oper.width;
++
++		switch (width) {
++		case NL80211_CHAN_WIDTH_20:
++		case NL80211_CHAN_WIDTH_40:
++		case NL80211_CHAN_WIDTH_80:
++		case NL80211_CHAN_WIDTH_160:
++		case NL80211_CHAN_WIDTH_80P80:
++		case NL80211_CHAN_WIDTH_320: /* not VHT, allowed for HE/EHT */
++			break;
++		default:
++			return -EINVAL;
++		}
++
+ 		/* returned value is only needed for rc update, but the
+ 		 * rc isn't initialized here yet, so ignore it
+ 		 */
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 53d5ffad87be87..dc8df3129c007e 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -7194,6 +7194,7 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_link_data *link,
+ 	struct ieee80211_bss_conf *bss_conf = link->conf;
+ 	struct ieee80211_vif_cfg *vif_cfg = &sdata->vif.cfg;
+ 	struct ieee80211_mgmt *mgmt = (void *) hdr;
++	struct ieee80211_ext *ext = NULL;
+ 	size_t baselen;
+ 	struct ieee802_11_elems *elems;
+ 	struct ieee80211_local *local = sdata->local;
+@@ -7219,7 +7220,7 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_link_data *link,
+ 	/* Process beacon from the current BSS */
+ 	bssid = ieee80211_get_bssid(hdr, len, sdata->vif.type);
+ 	if (ieee80211_is_s1g_beacon(mgmt->frame_control)) {
+-		struct ieee80211_ext *ext = (void *) mgmt;
++		ext = (void *)mgmt;
+ 		variable = ext->u.s1g_beacon.variable +
+ 			   ieee80211_s1g_optional_len(ext->frame_control);
+ 	}
+@@ -7406,7 +7407,9 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_link_data *link,
+ 	}
+ 
+ 	if ((ncrc == link->u.mgd.beacon_crc && link->u.mgd.beacon_crc_valid) ||
+-	    ieee80211_is_s1g_short_beacon(mgmt->frame_control))
++	    (ext && ieee80211_is_s1g_short_beacon(ext->frame_control,
++						  parse_params.start,
++						  parse_params.len)))
+ 		goto free;
+ 	link->u.mgd.beacon_crc = ncrc;
+ 	link->u.mgd.beacon_crc_valid = true;
+diff --git a/net/mac80211/parse.c b/net/mac80211/parse.c
+index 6da39c864f45ba..922ea9a6e2412c 100644
+--- a/net/mac80211/parse.c
++++ b/net/mac80211/parse.c
+@@ -758,7 +758,6 @@ static size_t ieee802_11_find_bssid_profile(const u8 *start, size_t len,
+ {
+ 	const struct element *elem, *sub;
+ 	size_t profile_len = 0;
+-	bool found = false;
+ 
+ 	if (!bss || !bss->transmitted_bss)
+ 		return profile_len;
+@@ -809,15 +808,14 @@ static size_t ieee802_11_find_bssid_profile(const u8 *start, size_t len,
+ 					       index[2],
+ 					       new_bssid);
+ 			if (ether_addr_equal(new_bssid, bss->bssid)) {
+-				found = true;
+ 				elems->bssid_index_len = index[1];
+ 				elems->bssid_index = (void *)&index[2];
+-				break;
++				return profile_len;
+ 			}
+ 		}
+ 	}
+ 
+-	return found ? profile_len : 0;
++	return 0;
+ }
+ 
+ static void
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index 82256eddd16bd4..0fc3527e6fdd1f 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -2155,11 +2155,6 @@ int ieee80211_reconfig(struct ieee80211_local *local)
+ 		cfg80211_sched_scan_stopped_locked(local->hw.wiphy, 0);
+ 
+  wake_up:
+-
+-	if (local->virt_monitors > 0 &&
+-	    local->virt_monitors == local->open_count)
+-		ieee80211_add_virtual_monitor(local);
+-
+ 	/*
+ 	 * Clear the WLAN_STA_BLOCK_BA flag so new aggregation
+ 	 * sessions can be established after a resume.
+@@ -2213,6 +2208,10 @@ int ieee80211_reconfig(struct ieee80211_local *local)
+ 		}
+ 	}
+ 
++	if (local->virt_monitors > 0 &&
++	    local->virt_monitors == local->open_count)
++		ieee80211_add_virtual_monitor(local);
++
+ 	if (!suspended)
+ 		return 0;
+ 
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index e8972a857e51e9..6332a0e0659675 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -387,7 +387,6 @@ static void netlink_skb_set_owner_r(struct sk_buff *skb, struct sock *sk)
+ 	WARN_ON(skb->sk != NULL);
+ 	skb->sk = sk;
+ 	skb->destructor = netlink_skb_destructor;
+-	atomic_add(skb->truesize, &sk->sk_rmem_alloc);
+ 	sk_mem_charge(sk, skb->truesize);
+ }
+ 
+@@ -1212,41 +1211,48 @@ struct sk_buff *netlink_alloc_large_skb(unsigned int size, int broadcast)
+ int netlink_attachskb(struct sock *sk, struct sk_buff *skb,
+ 		      long *timeo, struct sock *ssk)
+ {
++	DECLARE_WAITQUEUE(wait, current);
+ 	struct netlink_sock *nlk;
++	unsigned int rmem;
+ 
+ 	nlk = nlk_sk(sk);
++	rmem = atomic_add_return(skb->truesize, &sk->sk_rmem_alloc);
+ 
+-	if ((atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf ||
+-	     test_bit(NETLINK_S_CONGESTED, &nlk->state))) {
+-		DECLARE_WAITQUEUE(wait, current);
+-		if (!*timeo) {
+-			if (!ssk || netlink_is_kernel(ssk))
+-				netlink_overrun(sk);
+-			sock_put(sk);
+-			kfree_skb(skb);
+-			return -EAGAIN;
+-		}
+-
+-		__set_current_state(TASK_INTERRUPTIBLE);
+-		add_wait_queue(&nlk->wait, &wait);
++	if ((rmem == skb->truesize || rmem < READ_ONCE(sk->sk_rcvbuf)) &&
++	    !test_bit(NETLINK_S_CONGESTED, &nlk->state)) {
++		netlink_skb_set_owner_r(skb, sk);
++		return 0;
++	}
+ 
+-		if ((atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf ||
+-		     test_bit(NETLINK_S_CONGESTED, &nlk->state)) &&
+-		    !sock_flag(sk, SOCK_DEAD))
+-			*timeo = schedule_timeout(*timeo);
++	atomic_sub(skb->truesize, &sk->sk_rmem_alloc);
+ 
+-		__set_current_state(TASK_RUNNING);
+-		remove_wait_queue(&nlk->wait, &wait);
++	if (!*timeo) {
++		if (!ssk || netlink_is_kernel(ssk))
++			netlink_overrun(sk);
+ 		sock_put(sk);
++		kfree_skb(skb);
++		return -EAGAIN;
++	}
+ 
+-		if (signal_pending(current)) {
+-			kfree_skb(skb);
+-			return sock_intr_errno(*timeo);
+-		}
+-		return 1;
++	__set_current_state(TASK_INTERRUPTIBLE);
++	add_wait_queue(&nlk->wait, &wait);
++	rmem = atomic_read(&sk->sk_rmem_alloc);
++
++	if (((rmem && rmem + skb->truesize > READ_ONCE(sk->sk_rcvbuf)) ||
++	     test_bit(NETLINK_S_CONGESTED, &nlk->state)) &&
++	    !sock_flag(sk, SOCK_DEAD))
++		*timeo = schedule_timeout(*timeo);
++
++	__set_current_state(TASK_RUNNING);
++	remove_wait_queue(&nlk->wait, &wait);
++	sock_put(sk);
++
++	if (signal_pending(current)) {
++		kfree_skb(skb);
++		return sock_intr_errno(*timeo);
+ 	}
+-	netlink_skb_set_owner_r(skb, sk);
+-	return 0;
++
++	return 1;
+ }
+ 
+ static int __netlink_sendskb(struct sock *sk, struct sk_buff *skb)
+@@ -1307,6 +1313,7 @@ static int netlink_unicast_kernel(struct sock *sk, struct sk_buff *skb,
+ 	ret = -ECONNREFUSED;
+ 	if (nlk->netlink_rcv != NULL) {
+ 		ret = skb->len;
++		atomic_add(skb->truesize, &sk->sk_rmem_alloc);
+ 		netlink_skb_set_owner_r(skb, sk);
+ 		NETLINK_CB(skb).sk = ssk;
+ 		netlink_deliver_tap_kernel(sk, ssk, skb);
+@@ -1383,13 +1390,19 @@ EXPORT_SYMBOL_GPL(netlink_strict_get_check);
+ static int netlink_broadcast_deliver(struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct netlink_sock *nlk = nlk_sk(sk);
++	unsigned int rmem, rcvbuf;
+ 
+-	if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf &&
++	rmem = atomic_add_return(skb->truesize, &sk->sk_rmem_alloc);
++	rcvbuf = READ_ONCE(sk->sk_rcvbuf);
++
++	if ((rmem == skb->truesize || rmem <= rcvbuf) &&
+ 	    !test_bit(NETLINK_S_CONGESTED, &nlk->state)) {
+ 		netlink_skb_set_owner_r(skb, sk);
+ 		__netlink_sendskb(sk, skb);
+-		return atomic_read(&sk->sk_rmem_alloc) > (sk->sk_rcvbuf >> 1);
++		return rmem > (rcvbuf >> 1);
+ 	}
++
++	atomic_sub(skb->truesize, &sk->sk_rmem_alloc);
+ 	return -1;
+ }
+ 
+@@ -2245,6 +2258,7 @@ static int netlink_dump(struct sock *sk, bool lock_taken)
+ 	struct netlink_ext_ack extack = {};
+ 	struct netlink_callback *cb;
+ 	struct sk_buff *skb = NULL;
++	unsigned int rmem, rcvbuf;
+ 	size_t max_recvmsg_len;
+ 	struct module *module;
+ 	int err = -ENOBUFS;
+@@ -2258,9 +2272,6 @@ static int netlink_dump(struct sock *sk, bool lock_taken)
+ 		goto errout_skb;
+ 	}
+ 
+-	if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf)
+-		goto errout_skb;
+-
+ 	/* NLMSG_GOODSIZE is small to avoid high order allocations being
+ 	 * required, but it makes sense to _attempt_ a 32KiB allocation
+ 	 * to reduce number of system calls on dump operations, if user
+@@ -2283,6 +2294,13 @@ static int netlink_dump(struct sock *sk, bool lock_taken)
+ 	if (!skb)
+ 		goto errout_skb;
+ 
++	rcvbuf = READ_ONCE(sk->sk_rcvbuf);
++	rmem = atomic_add_return(skb->truesize, &sk->sk_rmem_alloc);
++	if (rmem != skb->truesize && rmem >= rcvbuf) {
++		atomic_sub(skb->truesize, &sk->sk_rmem_alloc);
++		goto errout_skb;
++	}
++
+ 	/* Trim skb to allocated size. User is expected to provide buffer as
+ 	 * large as max(min_dump_alloc, 32KiB (max_recvmsg_len capped at
+ 	 * netlink_recvmsg())). dump will pack as many smaller messages as
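
The common thread in these netlink hunks is charge-first accounting: the skb's
truesize is reserved with atomic_add_return() before the buffer check, and
rolled back with atomic_sub() when the budget would be exceeded, so no second
writer can slip in between the check and the add. A C11 sketch of that
reserve-then-rollback shape (the "one message always fits" special case
mirrors the rmem == truesize test):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_uint rmem_alloc;	/* bytes currently charged */

static bool charge(unsigned int size, unsigned int rcvbuf)
{
	/* Reserve first ... */
	unsigned int rmem = atomic_fetch_add(&rmem_alloc, size) + size;

	/* ... then verify: an otherwise-empty queue may take one message
	 * of any size, everything else must fit within rcvbuf. */
	if (rmem == size || rmem <= rcvbuf)
		return true;

	atomic_fetch_sub(&rmem_alloc, size);	/* roll the charge back */
	return false;
}

int main(void)
{
	printf("%d\n", charge(4096, 8192));	/* fits -> 1 */
	printf("%d\n", charge(8192, 8192));	/* over budget -> 0 */
	printf("charged: %u\n", atomic_load(&rmem_alloc));
	return 0;
}
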
+diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
+index e685034ce4f7ce..0b51f3ccf03583 100644
+--- a/net/rxrpc/call_accept.c
++++ b/net/rxrpc/call_accept.c
+@@ -149,6 +149,7 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
+ 
+ id_in_use:
+ 	write_unlock(&rx->call_lock);
++	rxrpc_prefail_call(call, RXRPC_CALL_LOCAL_ERROR, -EBADSLT);
+ 	rxrpc_cleanup_call(call);
+ 	_leave(" = -EBADSLT");
+ 	return -EBADSLT;
+@@ -253,6 +254,9 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx,
+ 	unsigned short call_tail, conn_tail, peer_tail;
+ 	unsigned short call_count, conn_count;
+ 
++	if (!b)
++		return NULL;
++
+ 	/* #calls >= #conns >= #peers must hold true. */
+ 	call_head = smp_load_acquire(&b->call_backlog_head);
+ 	call_tail = b->call_backlog_tail;
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index d58921ffcf35e7..a96f9f74777bc5 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -335,17 +335,22 @@ struct Qdisc *qdisc_lookup_rcu(struct net_device *dev, u32 handle)
+ 	return q;
+ }
+ 
+-static struct Qdisc *qdisc_leaf(struct Qdisc *p, u32 classid)
++static struct Qdisc *qdisc_leaf(struct Qdisc *p, u32 classid,
++				struct netlink_ext_ack *extack)
+ {
+ 	unsigned long cl;
+ 	const struct Qdisc_class_ops *cops = p->ops->cl_ops;
+ 
+-	if (cops == NULL)
+-		return NULL;
++	if (cops == NULL) {
++		NL_SET_ERR_MSG(extack, "Parent qdisc is not classful");
++		return ERR_PTR(-EOPNOTSUPP);
++	}
+ 	cl = cops->find(p, classid);
+ 
+-	if (cl == 0)
+-		return NULL;
++	if (cl == 0) {
++		NL_SET_ERR_MSG(extack, "Specified class not found");
++		return ERR_PTR(-ENOENT);
++	}
+ 	return cops->leaf(p, cl);
+ }
+ 
+@@ -1489,7 +1494,7 @@ static int __tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ 					NL_SET_ERR_MSG(extack, "Failed to find qdisc with specified classid");
+ 					return -ENOENT;
+ 				}
+-				q = qdisc_leaf(p, clid);
++				q = qdisc_leaf(p, clid, extack);
+ 			} else if (dev_ingress_queue(dev)) {
+ 				q = rtnl_dereference(dev_ingress_queue(dev)->qdisc_sleeping);
+ 			}
+@@ -1500,6 +1505,8 @@ static int __tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ 			NL_SET_ERR_MSG(extack, "Cannot find specified qdisc on specified device");
+ 			return -ENOENT;
+ 		}
++		if (IS_ERR(q))
++			return PTR_ERR(q);
+ 
+ 		if (tcm->tcm_handle && q->handle != tcm->tcm_handle) {
+ 			NL_SET_ERR_MSG(extack, "Invalid handle");
+@@ -1601,7 +1608,9 @@ static int __tc_modify_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ 					NL_SET_ERR_MSG(extack, "Failed to find specified qdisc");
+ 					return -ENOENT;
+ 				}
+-				q = qdisc_leaf(p, clid);
++				q = qdisc_leaf(p, clid, extack);
++				if (IS_ERR(q))
++					return PTR_ERR(q);
+ 			} else if (dev_ingress_queue_create(dev)) {
+ 				q = rtnl_dereference(dev_ingress_queue(dev)->qdisc_sleeping);
+ 			}
+diff --git a/net/tipc/topsrv.c b/net/tipc/topsrv.c
+index 8ee0c07d00e9bb..ffe577bf6b5155 100644
+--- a/net/tipc/topsrv.c
++++ b/net/tipc/topsrv.c
+@@ -704,8 +704,10 @@ static void tipc_topsrv_stop(struct net *net)
+ 	for (id = 0; srv->idr_in_use; id++) {
+ 		con = idr_find(&srv->conn_idr, id);
+ 		if (con) {
++			conn_get(con);
+ 			spin_unlock_bh(&srv->idr_lock);
+ 			tipc_conn_close(con);
++			conn_put(con);
+ 			spin_lock_bh(&srv->idr_lock);
+ 		}
+ 	}
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index fc6afbc8d6806a..c50184eddb4455 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -407,6 +407,8 @@ EXPORT_SYMBOL_GPL(vsock_enqueue_accept);
+ 
+ static bool vsock_use_local_transport(unsigned int remote_cid)
+ {
++	lockdep_assert_held(&vsock_register_mutex);
++
+ 	if (!transport_local)
+ 		return false;
+ 
+@@ -464,6 +466,8 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
+ 
+ 	remote_flags = vsk->remote_addr.svm_flags;
+ 
++	mutex_lock(&vsock_register_mutex);
++
+ 	switch (sk->sk_type) {
+ 	case SOCK_DGRAM:
+ 		new_transport = transport_dgram;
+@@ -479,12 +483,15 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
+ 			new_transport = transport_h2g;
+ 		break;
+ 	default:
+-		return -ESOCKTNOSUPPORT;
++		ret = -ESOCKTNOSUPPORT;
++		goto err;
+ 	}
+ 
+ 	if (vsk->transport) {
+-		if (vsk->transport == new_transport)
+-			return 0;
++		if (vsk->transport == new_transport) {
++			ret = 0;
++			goto err;
++		}
+ 
+ 		/* transport->release() must be called with sock lock acquired.
+ 		 * This path can only be taken during vsock_connect(), where we
+@@ -508,8 +515,16 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
+ 	/* We increase the module refcnt to prevent the transport unloading
+ 	 * while there are open sockets assigned to it.
+ 	 */
+-	if (!new_transport || !try_module_get(new_transport->module))
+-		return -ENODEV;
++	if (!new_transport || !try_module_get(new_transport->module)) {
++		ret = -ENODEV;
++		goto err;
++	}
++
++	/* It's safe to release the mutex after a successful try_module_get().
++	 * Whichever transport `new_transport` points at, it won't go away until
++	 * the last module_put() below or in vsock_deassign_transport().
++	 */
++	mutex_unlock(&vsock_register_mutex);
+ 
+ 	if (sk->sk_type == SOCK_SEQPACKET) {
+ 		if (!new_transport->seqpacket_allow ||
+@@ -528,12 +543,31 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
+ 	vsk->transport = new_transport;
+ 
+ 	return 0;
++err:
++	mutex_unlock(&vsock_register_mutex);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(vsock_assign_transport);
+ 
++/*
++ * Provide safe access to static transport_{h2g,g2h,dgram,local} callbacks.
++ * Otherwise we may race with module removal. Do not use on `vsk->transport`.
++ */
++static u32 vsock_registered_transport_cid(const struct vsock_transport **transport)
++{
++	u32 cid = VMADDR_CID_ANY;
++
++	mutex_lock(&vsock_register_mutex);
++	if (*transport)
++		cid = (*transport)->get_local_cid();
++	mutex_unlock(&vsock_register_mutex);
++
++	return cid;
++}
++
+ bool vsock_find_cid(unsigned int cid)
+ {
+-	if (transport_g2h && cid == transport_g2h->get_local_cid())
++	if (cid == vsock_registered_transport_cid(&transport_g2h))
+ 		return true;
+ 
+ 	if (transport_h2g && cid == VMADDR_CID_HOST)
+@@ -2503,18 +2537,19 @@ static long vsock_dev_do_ioctl(struct file *filp,
+ 			       unsigned int cmd, void __user *ptr)
+ {
+ 	u32 __user *p = ptr;
+-	u32 cid = VMADDR_CID_ANY;
+ 	int retval = 0;
++	u32 cid;
+ 
+ 	switch (cmd) {
+ 	case IOCTL_VM_SOCKETS_GET_LOCAL_CID:
+ 		/* To be compatible with the VMCI behavior, we prioritize the
+ 		 * guest CID instead of well-know host CID (VMADDR_CID_HOST).
+ 		 */
+-		if (transport_g2h)
+-			cid = transport_g2h->get_local_cid();
+-		else if (transport_h2g)
+-			cid = transport_h2g->get_local_cid();
++		cid = vsock_registered_transport_cid(&transport_g2h);
++		if (cid == VMADDR_CID_ANY)
++			cid = vsock_registered_transport_cid(&transport_h2g);
++		if (cid == VMADDR_CID_ANY)
++			cid = vsock_registered_transport_cid(&transport_local);
+ 
+ 		if (put_user(cid, p) != 0)
+ 			retval = -EFAULT;
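
vsock_registered_transport_cid() reads the transport pointer and calls through
it without dropping vsock_register_mutex, so a concurrent unregistration
(which takes the same mutex) cannot free the transport mid-call. A pthread
sketch of that guarded-getter shape; every name below is illustrative:

#include <pthread.h>
#include <stdio.h>

#define CID_ANY ((unsigned int)-1)	/* stand-in for VMADDR_CID_ANY */

struct transport {
	unsigned int (*get_local_cid)(void);
};

static pthread_mutex_t register_mutex = PTHREAD_MUTEX_INITIALIZER;
static const struct transport *transport_g2h;	/* written under the mutex */

static unsigned int registered_transport_cid(const struct transport **tp)
{
	unsigned int cid = CID_ANY;

	/* Pointer load and indirect call both happen under the lock. */
	pthread_mutex_lock(&register_mutex);
	if (*tp)
		cid = (*tp)->get_local_cid();
	pthread_mutex_unlock(&register_mutex);
	return cid;
}

static unsigned int g2h_cid(void) { return 3; }

int main(void)
{
	static const struct transport g2h = { .get_local_cid = g2h_cid };

	printf("%u\n", registered_transport_cid(&transport_g2h)); /* CID_ANY */
	pthread_mutex_lock(&register_mutex);
	transport_g2h = &g2h;
	pthread_mutex_unlock(&register_mutex);
	printf("%u\n", registered_transport_cid(&transport_g2h)); /* 3 */
	return 0;
}
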
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index f039a7d0d6f739..0c7e8389bc49e8 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -229,6 +229,7 @@ static int validate_beacon_head(const struct nlattr *attr,
+ 	unsigned int len = nla_len(attr);
+ 	const struct element *elem;
+ 	const struct ieee80211_mgmt *mgmt = (void *)data;
++	const struct ieee80211_ext *ext;
+ 	unsigned int fixedlen, hdrlen;
+ 	bool s1g_bcn;
+ 
+@@ -237,8 +238,10 @@ static int validate_beacon_head(const struct nlattr *attr,
+ 
+ 	s1g_bcn = ieee80211_is_s1g_beacon(mgmt->frame_control);
+ 	if (s1g_bcn) {
+-		fixedlen = offsetof(struct ieee80211_ext,
+-				    u.s1g_beacon.variable);
++		ext = (struct ieee80211_ext *)mgmt;
++		fixedlen =
++			offsetof(struct ieee80211_ext, u.s1g_beacon.variable) +
++			ieee80211_s1g_optional_len(ext->frame_control);
+ 		hdrlen = offsetof(struct ieee80211_ext, u.s1g_beacon);
+ 	} else {
+ 		fixedlen = offsetof(struct ieee80211_mgmt,
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index ed868c0f7ca8ed..1ad5a6bdfd755b 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -820,6 +820,52 @@ bool ieee80211_is_valid_amsdu(struct sk_buff *skb, u8 mesh_hdr)
+ }
+ EXPORT_SYMBOL(ieee80211_is_valid_amsdu);
+ 
++
++/*
++ * Detects if an MSDU frame was maliciously converted into an A-MSDU
++ * frame by an adversary. This is done by parsing the received frame
++ * as if it were a regular MSDU, even though the A-MSDU flag is set.
++ *
++ * For non-mesh interfaces, detection involves checking whether the
++ * payload, when interpreted as an MSDU, begins with a valid RFC1042
++ * header. This is done by comparing the A-MSDU subheader's destination
++ * address to the start of the RFC1042 header.
++ *
++ * For mesh interfaces, the MSDU includes a 6-byte Mesh Control field
++ * and an optional variable-length Mesh Address Extension field before
++ * the RFC1042 header. The position of the RFC1042 header must therefore
++ * be calculated based on the mesh header length.
++ *
++ * Since this function intentionally parses an A-MSDU frame as an MSDU,
++ * it only assumes that the A-MSDU subframe header is present, and
++ * beyond this it performs its own bounds checks under the assumption
++ * that the frame is instead parsed as a non-aggregated MSDU.
++ */
++static bool
++is_amsdu_aggregation_attack(struct ethhdr *eth, struct sk_buff *skb,
++			    enum nl80211_iftype iftype)
++{
++	int offset;
++
++	/* Non-mesh case can be directly compared */
++	if (iftype != NL80211_IFTYPE_MESH_POINT)
++		return ether_addr_equal(eth->h_dest, rfc1042_header);
++
++	offset = __ieee80211_get_mesh_hdrlen(eth->h_dest[0]);
++	if (offset == 6) {
++		/* Mesh case with empty address extension field */
++		return ether_addr_equal(eth->h_source, rfc1042_header);
++	} else if (offset + ETH_ALEN <= skb->len) {
++		/* Mesh case with non-empty address extension field */
++		u8 temp[ETH_ALEN];
++
++		skb_copy_bits(skb, offset, temp, ETH_ALEN);
++		return ether_addr_equal(temp, rfc1042_header);
++	}
++
++	return false;
++}
++
+ void ieee80211_amsdu_to_8023s(struct sk_buff *skb, struct sk_buff_head *list,
+ 			      const u8 *addr, enum nl80211_iftype iftype,
+ 			      const unsigned int extra_headroom,
+@@ -861,8 +907,10 @@ void ieee80211_amsdu_to_8023s(struct sk_buff *skb, struct sk_buff_head *list,
+ 		/* the last MSDU has no padding */
+ 		if (subframe_len > remaining)
+ 			goto purge;
+-		/* mitigate A-MSDU aggregation injection attacks */
+-		if (ether_addr_equal(hdr.eth.h_dest, rfc1042_header))
++		/* mitigate A-MSDU aggregation injection attacks; only needs
++		 * checking while processing the first subframe (offset == 0).
++		 */
++		if (offset == 0 && is_amsdu_aggregation_attack(&hdr.eth, skb, iftype))
+ 			goto purge;
+ 
+ 		offset += sizeof(struct ethhdr);
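
In the non-mesh branch the detection is just a six-byte comparison between the
subframe's destination-address field and the start of the RFC1042 LLC/SNAP
header. A self-contained sketch (AA AA 03 00 00 00 is the well-known SNAP
prefix):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* First six bytes of an RFC1042 LLC/SNAP header. */
static const unsigned char rfc1042_header[6] = {
	0xaa, 0xaa, 0x03, 0x00, 0x00, 0x00
};

/* If the "destination address" of the first subframe is actually the
 * start of an RFC1042 header, the frame was a plain MSDU whose A-MSDU
 * flag was flipped by an attacker. */
static bool looks_like_amsdu_attack(const unsigned char h_dest[6])
{
	return memcmp(h_dest, rfc1042_header, 6) == 0;
}

int main(void)
{
	const unsigned char normal[6] = { 0x02, 0x11, 0x22, 0x33, 0x44, 0x55 };

	printf("forged: %d\n", looks_like_amsdu_attack(rfc1042_header));
	printf("normal: %d\n", looks_like_amsdu_attack(normal));
	return 0;
}
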
+diff --git a/samples/damon/prcl.c b/samples/damon/prcl.c
+index c3acbdab7a6202..b753c9c9ba3af0 100644
+--- a/samples/damon/prcl.c
++++ b/samples/damon/prcl.c
+@@ -122,8 +122,12 @@ static int damon_sample_prcl_enable_store(
+ 	if (enable == enabled)
+ 		return 0;
+ 
+-	if (enable)
+-		return damon_sample_prcl_start();
++	if (enable) {
++		err = damon_sample_prcl_start();
++		if (err)
++			enable = false;
++		return err;
++	}
+ 	damon_sample_prcl_stop();
+ 	return 0;
+ }
+diff --git a/samples/damon/wsse.c b/samples/damon/wsse.c
+index 11be2580327445..e20238a249e7b5 100644
+--- a/samples/damon/wsse.c
++++ b/samples/damon/wsse.c
+@@ -102,8 +102,12 @@ static int damon_sample_wsse_enable_store(
+ 	if (enable == enabled)
+ 		return 0;
+ 
+-	if (enable)
+-		return damon_sample_wsse_start();
++	if (enable) {
++		err = damon_sample_wsse_start();
++		if (err)
++			enable = false;
++		return err;
++	}
+ 	damon_sample_wsse_stop();
+ 	return 0;
+ }
+diff --git a/scripts/gdb/linux/constants.py.in b/scripts/gdb/linux/constants.py.in
+index fd6bd69c5096ac..f795302ddfa8b3 100644
+--- a/scripts/gdb/linux/constants.py.in
++++ b/scripts/gdb/linux/constants.py.in
+@@ -20,6 +20,7 @@
+ #include <linux/of_fdt.h>
+ #include <linux/page_ext.h>
+ #include <linux/radix-tree.h>
++#include <linux/maple_tree.h>
+ #include <linux/slab.h>
+ #include <linux/threads.h>
+ #include <linux/vmalloc.h>
+@@ -93,6 +94,12 @@ LX_GDBPARSED(RADIX_TREE_MAP_SIZE)
+ LX_GDBPARSED(RADIX_TREE_MAP_SHIFT)
+ LX_GDBPARSED(RADIX_TREE_MAP_MASK)
+ 
++/* linux/maple_tree.h */
++LX_VALUE(MAPLE_NODE_SLOTS)
++LX_VALUE(MAPLE_RANGE64_SLOTS)
++LX_VALUE(MAPLE_ARANGE64_SLOTS)
++LX_GDBPARSED(MAPLE_NODE_MASK)
++
+ /* linux/vmalloc.h */
+ LX_VALUE(VM_IOREMAP)
+ LX_VALUE(VM_ALLOC)
+diff --git a/scripts/gdb/linux/interrupts.py b/scripts/gdb/linux/interrupts.py
+index 616a5f26377a8c..f4f715a8f0e36e 100644
+--- a/scripts/gdb/linux/interrupts.py
++++ b/scripts/gdb/linux/interrupts.py
+@@ -7,7 +7,7 @@ import gdb
+ from linux import constants
+ from linux import cpus
+ from linux import utils
+-from linux import radixtree
++from linux import mapletree
+ 
+ irq_desc_type = utils.CachedType("struct irq_desc")
+ 
+@@ -23,12 +23,12 @@ def irqd_is_level(desc):
+ def show_irq_desc(prec, irq):
+     text = ""
+ 
+-    desc = radixtree.lookup(gdb.parse_and_eval("&irq_desc_tree"), irq)
++    desc = mapletree.mtree_load(gdb.parse_and_eval("&sparse_irqs"), irq)
+     if desc is None:
+         return text
+ 
+-    desc = desc.cast(irq_desc_type.get_type())
+-    if desc is None:
++    desc = desc.cast(irq_desc_type.get_type().pointer())
++    if desc == 0:
+         return text
+ 
+     if irq_settings_is_hidden(desc):
+@@ -110,7 +110,7 @@ def x86_show_mce(prec, var, pfx, desc):
+     pvar = gdb.parse_and_eval(var)
+     text = "%*s: " % (prec, pfx)
+     for cpu in cpus.each_online_cpu():
+-        text += "%10u " % (cpus.per_cpu(pvar, cpu))
++        text += "%10u " % (cpus.per_cpu(pvar, cpu).dereference())
+     text += "  %s\n" % (desc)
+     return text
+ 
+@@ -142,7 +142,7 @@ def x86_show_interupts(prec):
+ 
+     if constants.LX_CONFIG_X86_MCE:
+         text += x86_show_mce(prec, "&mce_exception_count", "MCE", "Machine check exceptions")
+-        text == x86_show_mce(prec, "&mce_poll_count", "MCP", "Machine check polls")
++        text += x86_show_mce(prec, "&mce_poll_count", "MCP", "Machine check polls")
+ 
+     text += show_irq_err_count(prec)
+ 
+@@ -221,8 +221,8 @@ class LxInterruptList(gdb.Command):
+             gdb.write("CPU%-8d" % cpu)
+         gdb.write("\n")
+ 
+-        if utils.gdb_eval_or_none("&irq_desc_tree") is None:
+-            return
++        if utils.gdb_eval_or_none("&sparse_irqs") is None:
++            raise gdb.GdbError("Unable to find the sparse IRQ tree, is CONFIG_SPARSE_IRQ enabled?")
+ 
+         for irq in range(nr_irqs):
+             gdb.write(show_irq_desc(prec, irq))
+diff --git a/scripts/gdb/linux/mapletree.py b/scripts/gdb/linux/mapletree.py
+new file mode 100644
+index 00000000000000..d52d51c0a03fcb
+--- /dev/null
++++ b/scripts/gdb/linux/mapletree.py
+@@ -0,0 +1,252 @@
++# SPDX-License-Identifier: GPL-2.0
++#
++#  Maple tree helpers
++#
++# Copyright (c) 2025 Broadcom
++#
++# Authors:
++#  Florian Fainelli <florian.fainelli@broadcom.com>
++
++import gdb
++
++from linux import utils
++from linux import constants
++from linux import xarray
++
++maple_tree_root_type = utils.CachedType("struct maple_tree")
++maple_node_type = utils.CachedType("struct maple_node")
++maple_enode_type = utils.CachedType("void")
++
++maple_dense = 0
++maple_leaf_64 = 1
++maple_range_64 = 2
++maple_arange_64 = 3
++
++class Mas(object):
++    ma_active = 0
++    ma_start = 1
++    ma_root = 2
++    ma_none = 3
++    ma_pause = 4
++    ma_overflow = 5
++    ma_underflow = 6
++    ma_error = 7
++
++    def __init__(self, mt, first, end):
++        if mt.type == maple_tree_root_type.get_type().pointer():
++            self.tree = mt.dereference()
++        elif mt.type == maple_tree_root_type.get_type():
++            self.tree = mt
++        else:
++            raise gdb.GdbError("must be {} not {}".format(maple_tree_root_type.get_type(), mt.type))
++        self.index = first
++        self.last = end
++        self.node = None
++        self.status = self.ma_start
++        self.min = 0
++        self.max = -1
++
++    def is_start(self):
++        # mas_is_start()
++        return self.status == self.ma_start
++
++    def is_ptr(self):
++        # mas_is_ptr()
++        return self.status == self.ma_root
++
++    def is_none(self):
++        # mas_is_none()
++        return self.status == self.ma_none
++
++    def root(self):
++        # mas_root()
++        return self.tree['ma_root'].cast(maple_enode_type.get_type().pointer())
++
++    def start(self):
++        # mas_start()
++        if self.is_start() is False:
++            return None
++
++        self.min = 0
++        self.max = ~0
++
++        while True:
++            self.depth = 0
++            root = self.root()
++            if xarray.xa_is_node(root):
++                self.depth = 0
++                self.status = self.ma_active
++                self.node = mte_safe_root(root)
++                self.offset = 0
++                if mte_dead_node(self.node) is True:
++                    continue
++
++                return None
++
++            self.node = None
++            # Empty tree
++            if root is None:
++                self.status = self.ma_none
++                self.offset = constants.LX_MAPLE_NODE_SLOTS
++                return None
++
++            # Single entry tree
++            self.status = self.ma_root
++            self.offset = constants.LX_MAPLE_NODE_SLOTS
++
++            if self.index != 0:
++                return None
++
++            return root
++
++        return None
++
++    def reset(self):
++        # mas_reset()
++        self.status = self.ma_start
++        self.node = None
++
++def mte_safe_root(node):
++    if node.type != maple_enode_type.get_type().pointer():
++        raise gdb.GdbError("{} must be {} not {}"
++                           .format(mte_safe_root.__name__, maple_enode_type.get_type().pointer(), node.type))
++    ulong_type = utils.get_ulong_type()
++    indirect_ptr = node.cast(ulong_type) & ~0x2
++    val = indirect_ptr.cast(maple_enode_type.get_type().pointer())
++    return val
++
++def mte_node_type(entry):
++    ulong_type = utils.get_ulong_type()
++    val = None
++    if entry.type == maple_enode_type.get_type().pointer():
++        val = entry.cast(ulong_type)
++    elif entry.type == ulong_type:
++        val = entry
++    else:
++        raise gdb.GdbError("{} must be {} not {}"
++                           .format(mte_node_type.__name__, maple_enode_type.get_type().pointer(), entry.type))
++    return (val >> 0x3) & 0xf
++
++def ma_dead_node(node):
++    if node.type != maple_node_type.get_type().pointer():
++        raise gdb.GdbError("{} must be {} not {}"
++                           .format(ma_dead_node.__name__, maple_node_type.get_type().pointer(), node.type))
++    ulong_type = utils.get_ulong_type()
++    parent = node['parent']
++    indirect_ptr = parent.cast(ulong_type) & ~constants.LX_MAPLE_NODE_MASK
++    return indirect_ptr == node
++
++def mte_to_node(enode):
++    ulong_type = utils.get_ulong_type()
++    if enode.type == maple_enode_type.get_type().pointer():
++        indirect_ptr = enode.cast(ulong_type)
++    elif enode.type == ulong_type:
++        indirect_ptr = enode
++    else:
++        raise gdb.GdbError("{} must be {} not {}"
++                           .format(mte_to_node.__name__, maple_enode_type.get_type().pointer(), enode.type))
++    indirect_ptr = indirect_ptr & ~constants.LX_MAPLE_NODE_MASK
++    return indirect_ptr.cast(maple_node_type.get_type().pointer())
++
++def mte_dead_node(enode):
++    if enode.type != maple_enode_type.get_type().pointer():
++        raise gdb.GdbError("{} must be {} not {}"
++                           .format(mte_dead_node.__name__, maple_enode_type.get_type().pointer(), enode.type))
++    node = mte_to_node(enode)
++    return ma_dead_node(node)
++
++def ma_is_leaf(tp):
++    # dense and leaf_64 node types are leaves
++    return tp < maple_range_64
++
++def mt_pivots(t):
++    if t == maple_dense:
++        return 0
++    elif t == maple_leaf_64 or t == maple_range_64:
++        return constants.LX_MAPLE_RANGE64_SLOTS - 1
++    elif t == maple_arange_64:
++        return constants.LX_MAPLE_ARANGE64_SLOTS - 1
++
++def ma_pivots(node, t):
++    if node.type != maple_node_type.get_type().pointer():
++        raise gdb.GdbError("{}: must be {} not {}"
++                           .format(ma_pivots.__name__, maple_node_type.get_type().pointer(), node.type))
++    if t == maple_arange_64:
++        return node['ma64']['pivot']
++    elif t == maple_leaf_64 or t == maple_range_64:
++        return node['mr64']['pivot']
++    else:
++        return None
++
++def ma_slots(node, tp):
++    if node.type != maple_node_type.get_type().pointer():
++        raise gdb.GdbError("{}: must be {} not {}"
++                           .format(ma_slots.__name__, maple_node_type.get_type().pointer(), node.type))
++    if tp == maple_arange_64:
++        return node['ma64']['slot']
++    elif tp == maple_range_64 or tp == maple_leaf_64:
++        return node['mr64']['slot']
++    elif tp == maple_dense:
++        return node['slot']
++    else:
++        return None
++
++def mt_slot(mt, slots, offset):
++    ulong_type = utils.get_ulong_type()
++    return slots[offset].cast(ulong_type)
++
++def mtree_lookup_walk(mas):
++    n = mas.node
++
++    while True:
++        node = mte_to_node(n)
++        tp = mte_node_type(n)
++        pivots = ma_pivots(node, tp)
++        end = mt_pivots(tp)
++        offset = 0
++        while True:
++            if pivots[offset] >= mas.index:
++                break
++            if offset >= end:
++                break
++            offset += 1
++
++        slots = ma_slots(node, tp)
++        n = mt_slot(mas.tree, slots, offset)
++        if ma_dead_node(node):
++            mas.reset()
++            return None
++
++        if ma_is_leaf(tp):
++            break
++
++    return n
++
++def mtree_load(mt, index):
++    # MT_STATE(...)
++    mas = Mas(mt, index, index)
++    entry = None
++
++    while True:
++        entry = mas.start()
++        if mas.is_none():
++            return None
++
++        if mas.is_ptr():
++            if index != 0:
++                entry = None
++            return entry
++
++        entry = mtree_lookup_walk(mas)
++        if entry is None and mas.is_start():
++            continue
++        else:
++            break
++
++    if xarray.xa_is_zero(entry):
++        return None
++
++    return entry
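
For orientation, here is a standalone sketch of the encoded-node tagging
that mte_node_type() and mte_to_node() decode above: the node type sits
in bits 3..6 of the tagged pointer and the struct maple_node address sits
above the low byte. Plain Python, no gdb; the mask mirrors the kernel's
MAPLE_NODE_MASK (255) and the pointer value is hypothetical.

    MAPLE_NODE_MASK = 0xff

    def decode_enode(enode):
        node_type = (enode >> 3) & 0xf           # what mte_node_type() extracts
        node_ptr = enode & ~MAPLE_NODE_MASK      # what mte_to_node() extracts
        return node_type, node_ptr

    # Hypothetical tagged pointer: node at 0xffff888100400000, type 1.
    enode = 0xffff888100400000 | (1 << 3)
    assert decode_enode(enode) == (1, 0xffff888100400000)
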
+diff --git a/scripts/gdb/linux/vfs.py b/scripts/gdb/linux/vfs.py
+index b5fbb18ccb77ab..9e921b645a68a8 100644
+--- a/scripts/gdb/linux/vfs.py
++++ b/scripts/gdb/linux/vfs.py
+@@ -22,7 +22,7 @@ def dentry_name(d):
+     if parent == d or parent == 0:
+         return ""
+     p = dentry_name(d['d_parent']) + "/"
+-    return p + d['d_shortname']['string'].string()
++    return p + d['d_name']['name'].string()
+ 
+ class DentryName(gdb.Function):
+     """Return string of the full path of a dentry.
+diff --git a/scripts/gdb/linux/xarray.py b/scripts/gdb/linux/xarray.py
+new file mode 100644
+index 00000000000000..f4477b5def75fc
+--- /dev/null
++++ b/scripts/gdb/linux/xarray.py
+@@ -0,0 +1,28 @@
++# SPDX-License-Identifier: GPL-2.0
++#
++#  Xarray helpers
++#
++# Copyright (c) 2025 Broadcom
++#
++# Authors:
++#  Florian Fainelli <florian.fainelli@broadcom.com>
++
++import gdb
++
++from linux import utils
++from linux import constants
++
++def xa_is_internal(entry):
++    ulong_type = utils.get_ulong_type()
++    return ((entry.cast(ulong_type) & 3) == 2)
++
++def xa_mk_internal(v):
++    return ((v << 2) | 2)
++
++def xa_is_zero(entry):
++    ulong_type = utils.get_ulong_type()
++    return entry.cast(ulong_type) == xa_mk_internal(257)
++
++def xa_is_node(entry):
++    ulong_type = utils.get_ulong_type()
++    return xa_is_internal(entry) and (entry.cast(ulong_type) > 4096)
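
To see the bit arithmetic these xarray helpers implement, the same logic
as plain Python (no gdb): internal entries carry 0b10 in their low two
bits, and the zero entry is the internal entry for value 257.

    def xa_mk_internal(v):
        return (v << 2) | 2

    def xa_is_internal(entry):
        return (entry & 3) == 2

    assert xa_mk_internal(257) == 0x406            # the kernel's XA_ZERO_ENTRY
    assert xa_is_internal(xa_mk_internal(257))
    assert not xa_is_internal(0xffff888100400000)  # aligned pointer, low bits 00
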
+diff --git a/sound/isa/ad1816a/ad1816a.c b/sound/isa/ad1816a/ad1816a.c
+index 99006dc4777e91..5c9e2d41d9005f 100644
+--- a/sound/isa/ad1816a/ad1816a.c
++++ b/sound/isa/ad1816a/ad1816a.c
+@@ -98,7 +98,7 @@ static int snd_card_ad1816a_pnp(int dev, struct pnp_card_link *card,
+ 	pdev = pnp_request_card_device(card, id->devs[1].id, NULL);
+ 	if (pdev == NULL) {
+ 		mpu_port[dev] = -1;
+-		dev_warn(&pdev->dev, "MPU401 device busy, skipping.\n");
++		pr_warn("MPU401 device busy, skipping.\n");
+ 		return 0;
+ 	}
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 03ffaec49998d1..f7bb97230201f0 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2656,6 +2656,7 @@ static const struct hda_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX),
+ 	SND_PCI_QUIRK(0x1558, 0x3702, "Clevo X370SN[VW]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x50d3, "Clevo PC50[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x5802, "Clevo X58[05]WN[RST]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x65e1, "Clevo PB51[ED][DF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+@@ -6609,6 +6610,7 @@ static void alc294_fixup_bass_speaker_15(struct hda_codec *codec,
+ 	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+ 		static const hda_nid_t conn[] = { 0x02, 0x03 };
+ 		snd_hda_override_conn_list(codec, 0x15, ARRAY_SIZE(conn), conn);
++		snd_hda_gen_add_micmute_led_cdev(codec, NULL);
+ 	}
+ }
+ 
+@@ -10714,6 +10716,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8975, "HP EliteBook x360 840 Aero G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x897d, "HP mt440 Mobile Thin Client U74", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8981, "HP Elite Dragonfly G3", ALC245_FIXUP_CS35L41_SPI_4),
++	SND_PCI_QUIRK(0x103c, 0x898a, "HP Pavilion 15-eg100", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x898e, "HP EliteBook 835 G9", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x898f, "HP EliteBook 835 G9", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8991, "HP EliteBook 845 G9", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+@@ -10855,6 +10858,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8ce0, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8cf5, "HP ZBook Studio 16", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8d01, "HP ZBook Power 14 G12", ALC285_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8d07, "HP Victus 15-fb2xxx (MB 8D07)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8d18, "HP EliteStudio 8 AIO", ALC274_FIXUP_HP_AIO_BIND_DACS),
+ 	SND_PCI_QUIRK(0x103c, 0x8d84, "HP EliteBook X G1i", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8d85, "HP EliteBook 14 G12", ALC285_FIXUP_HP_GPIO_LED),
+@@ -10884,7 +10888,9 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8def, "HP EliteBook 660 G12", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8df0, "HP EliteBook 630 G12", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8df1, "HP EliteBook 630 G12", ALC236_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8dfb, "HP EliteBook 6 G1a 14", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8dfc, "HP EliteBook 645 G12", ALC236_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8dfd, "HP EliteBook 6 G1a 16", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8dfe, "HP EliteBook 665 G12", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8e11, "HP Trekker", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8e12, "HP Trekker", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -11112,6 +11118,8 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x14a1, "Clevo L141MU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x2624, "Clevo L240TU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x28c1, "Clevo V370VND", ALC2XX_FIXUP_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1558, 0x35a1, "Clevo V3[56]0EN[CDE]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x35b1, "Clevo V3[57]0WN[MNP]Q", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x4018, "Clevo NV40M[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x4019, "Clevo NV40MZ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x4020, "Clevo NV40MB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -11139,6 +11147,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x51b1, "Clevo NS50AU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x51b3, "Clevo NS70AU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x5630, "Clevo NP50RNJS", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x5700, "Clevo X560WN[RST]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x70a1, "Clevo NB70T[HJK]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x70b3, "Clevo NK70SB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x70f2, "Clevo NH79EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -11178,6 +11187,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0xa650, "Clevo NP[567]0SN[CD]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xa671, "Clevo NP70SN[CDE]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xa741, "Clevo V54x_6x_TNE", ALC245_FIXUP_CLEVO_NOISY_MIC),
++	SND_PCI_QUIRK(0x1558, 0xa743, "Clevo V54x_6x_TU", ALC245_FIXUP_CLEVO_NOISY_MIC),
+ 	SND_PCI_QUIRK(0x1558, 0xa763, "Clevo V54x_6x_TU", ALC245_FIXUP_CLEVO_NOISY_MIC),
+ 	SND_PCI_QUIRK(0x1558, 0xb018, "Clevo NP50D[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xb019, "Clevo NH77D[BE]Q", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 723cb7bc128516..1689b6b22598e2 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -346,6 +346,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "83Q3"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "RB"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Nitro ANV15-41"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+diff --git a/sound/soc/codecs/cs35l56-shared.c b/sound/soc/codecs/cs35l56-shared.c
+index e28bfefa72f33e..016a6248ab8f07 100644
+--- a/sound/soc/codecs/cs35l56-shared.c
++++ b/sound/soc/codecs/cs35l56-shared.c
+@@ -811,7 +811,7 @@ int cs35l56_hw_init(struct cs35l56_base *cs35l56_base)
+ 		break;
+ 	default:
+ 		dev_err(cs35l56_base->dev, "Unknown device %x\n", devid);
+-		return ret;
++		return -ENODEV;
+ 	}
+ 
+ 	cs35l56_base->type = devid & 0xFF;
+diff --git a/sound/soc/codecs/rt721-sdca.c b/sound/soc/codecs/rt721-sdca.c
+index 1c9f32e405cf95..ba080957e93361 100644
+--- a/sound/soc/codecs/rt721-sdca.c
++++ b/sound/soc/codecs/rt721-sdca.c
+@@ -430,6 +430,7 @@ static int rt721_sdca_set_gain_get(struct snd_kcontrol *kcontrol,
+ 	unsigned int read_l, read_r, ctl_l = 0, ctl_r = 0;
+ 	unsigned int adc_vol_flag = 0;
+ 	const unsigned int interval_offset = 0xc0;
++	const unsigned int tendA = 0x200;
+ 	const unsigned int tendB = 0xa00;
+ 
+ 	if (strstr(ucontrol->id.name, "FU1E Capture Volume") ||
+@@ -439,9 +440,16 @@ static int rt721_sdca_set_gain_get(struct snd_kcontrol *kcontrol,
+ 	regmap_read(rt721->mbq_regmap, mc->reg, &read_l);
+ 	regmap_read(rt721->mbq_regmap, mc->rreg, &read_r);
+ 
+-	if (mc->shift == 8) /* boost gain */
++	if (mc->shift == 8) {
++		/* boost gain */
+ 		ctl_l = read_l / tendB;
+-	else {
++	} else if (mc->shift == 1) {
++		/* FU33 boost gain */
++		if (read_l == 0x8000 || read_l == 0xfe00)
++			ctl_l = 0;
++		else
++			ctl_l = read_l / tendA + 1;
++	} else {
+ 		if (adc_vol_flag)
+ 			ctl_l = mc->max - (((0x1e00 - read_l) & 0xffff) / interval_offset);
+ 		else
+@@ -449,9 +457,16 @@ static int rt721_sdca_set_gain_get(struct snd_kcontrol *kcontrol,
+ 	}
+ 
+ 	if (read_l != read_r) {
+-		if (mc->shift == 8) /* boost gain */
++		if (mc->shift == 8) {
++			/* boost gain */
+ 			ctl_r = read_r / tendB;
+-		else { /* ADC/DAC gain */
++		} else if (mc->shift == 1) {
++			/* FU33 boost gain */
++			if (read_r == 0x8000 || read_r == 0xfe00)
++				ctl_r = 0;
++			else
++				ctl_r = read_r / tendA + 1;
++		} else { /* ADC/DAC gain */
+ 			if (adc_vol_flag)
+ 				ctl_r = mc->max - (((0x1e00 - read_r) & 0xffff) / interval_offset);
+ 			else
+diff --git a/sound/soc/fsl/fsl_asrc.c b/sound/soc/fsl/fsl_asrc.c
+index 677529916dc0e5..745532ccbdba69 100644
+--- a/sound/soc/fsl/fsl_asrc.c
++++ b/sound/soc/fsl/fsl_asrc.c
+@@ -517,7 +517,8 @@ static int fsl_asrc_config_pair(struct fsl_asrc_pair *pair, bool use_ideal_rate)
+ 	regmap_update_bits(asrc->regmap, REG_ASRCTR,
+ 			   ASRCTR_ATSi_MASK(index), ASRCTR_ATS(index));
+ 	regmap_update_bits(asrc->regmap, REG_ASRCTR,
+-			   ASRCTR_USRi_MASK(index), 0);
++			   ASRCTR_IDRi_MASK(index) | ASRCTR_USRi_MASK(index),
++			   ASRCTR_USR(index));
+ 
+ 	/* Set the input and output clock sources */
+ 	regmap_update_bits(asrc->regmap, REG_ASRCSR,
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index ed2b4780c47079..f244e36799759f 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -771,13 +771,15 @@ static void fsl_sai_config_disable(struct fsl_sai *sai, int dir)
+ 	 * anymore. Add software reset to fix this issue.
+ 	 * This is a hardware bug, and will be fixed in the
+ 	 * next sai version.
++	 *
++	 * In consumer mode, this can happen even after a
++	 * single open/close, especially if both tx and rx
++	 * are running concurrently.
+ 	 */
+-	if (!sai->is_consumer_mode[tx]) {
+-		/* Software Reset */
+-		regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), FSL_SAI_CSR_SR);
+-		/* Clear SR bit to finish the reset */
+-		regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), 0);
+-	}
++	/* Software Reset */
++	regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), FSL_SAI_CSR_SR);
++	/* Clear SR bit to finish the reset */
++	regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), 0);
+ }
+ 
+ static int fsl_sai_trigger(struct snd_pcm_substream *substream, int cmd,
+diff --git a/sound/soc/intel/boards/Kconfig b/sound/soc/intel/boards/Kconfig
+index 9b80b19bb8d06c..4db7931ba561eb 100644
+--- a/sound/soc/intel/boards/Kconfig
++++ b/sound/soc/intel/boards/Kconfig
+@@ -42,6 +42,7 @@ config SND_SOC_INTEL_SOF_NUVOTON_COMMON
+ 	tristate
+ 
+ config SND_SOC_INTEL_SOF_BOARD_HELPERS
++	select SND_SOC_ACPI_INTEL_MATCH
+ 	tristate
+ 
+ if SND_SOC_INTEL_CATPT
+diff --git a/sound/soc/intel/common/Makefile b/sound/soc/intel/common/Makefile
+index 0afd114be9e5e6..7822bcae6c69d3 100644
+--- a/sound/soc/intel/common/Makefile
++++ b/sound/soc/intel/common/Makefile
+@@ -12,7 +12,7 @@ snd-soc-acpi-intel-match-y := soc-acpi-intel-byt-match.o soc-acpi-intel-cht-matc
+ 	soc-acpi-intel-lnl-match.o \
+ 	soc-acpi-intel-ptl-match.o \
+ 	soc-acpi-intel-hda-match.o \
+-	soc-acpi-intel-sdw-mockup-match.o
++	soc-acpi-intel-sdw-mockup-match.o sof-function-topology-lib.o
+ 
+ snd-soc-acpi-intel-match-y += soc-acpi-intel-ssp-common.o
+ 
+diff --git a/sound/soc/intel/common/soc-acpi-intel-arl-match.c b/sound/soc/intel/common/soc-acpi-intel-arl-match.c
+index 32147dc9d2d666..1ad704ca2c5f2b 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-arl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-arl-match.c
+@@ -8,6 +8,7 @@
+ #include <sound/soc-acpi.h>
+ #include <sound/soc-acpi-intel-match.h>
+ #include <sound/soc-acpi-intel-ssp-common.h>
++#include "sof-function-topology-lib.h"
+ 
+ static const struct snd_soc_acpi_endpoint single_endpoint = {
+ 	.num = 0,
+@@ -436,42 +437,49 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_arl_sdw_machines[] = {
+ 		.links = arl_cs42l43_l0_cs35l56_l23,
+ 		.drv_name = "sof_sdw",
+ 		.sof_tplg_filename = "sof-arl-cs42l43-l0-cs35l56-l23.tplg",
++		.get_function_tplg_files = sof_sdw_get_tplg_files,
+ 	},
+ 	{
+ 		.link_mask = BIT(0) | BIT(2) | BIT(3),
+ 		.links = arl_cs42l43_l0_cs35l56_2_l23,
+ 		.drv_name = "sof_sdw",
+ 		.sof_tplg_filename = "sof-arl-cs42l43-l0-cs35l56-l23.tplg",
++		.get_function_tplg_files = sof_sdw_get_tplg_files,
+ 	},
+ 	{
+ 		.link_mask = BIT(0) | BIT(2) | BIT(3),
+ 		.links = arl_cs42l43_l0_cs35l56_3_l23,
+ 		.drv_name = "sof_sdw",
+ 		.sof_tplg_filename = "sof-arl-cs42l43-l0-cs35l56-l23.tplg",
++		.get_function_tplg_files = sof_sdw_get_tplg_files,
+ 	},
+ 	{
+ 		.link_mask = BIT(0) | BIT(2),
+ 		.links = arl_cs42l43_l0_cs35l56_l2,
+ 		.drv_name = "sof_sdw",
+ 		.sof_tplg_filename = "sof-arl-cs42l43-l0-cs35l56-l2.tplg",
++		.get_function_tplg_files = sof_sdw_get_tplg_files,
+ 	},
+ 	{
+ 		.link_mask = BIT(0),
+ 		.links = arl_cs42l43_l0,
+ 		.drv_name = "sof_sdw",
+ 		.sof_tplg_filename = "sof-arl-cs42l43-l0.tplg",
+-	},
+-	{
+-		.link_mask = BIT(2),
+-		.links = arl_cs42l43_l2,
+-		.drv_name = "sof_sdw",
+-		.sof_tplg_filename = "sof-arl-cs42l43-l2.tplg",
++		.get_function_tplg_files = sof_sdw_get_tplg_files,
+ 	},
+ 	{
+ 		.link_mask = BIT(2) | BIT(3),
+ 		.links = arl_cs42l43_l2_cs35l56_l3,
+ 		.drv_name = "sof_sdw",
+ 		.sof_tplg_filename = "sof-arl-cs42l43-l2-cs35l56-l3.tplg",
++		.get_function_tplg_files = sof_sdw_get_tplg_files,
++	},
++	{
++		.link_mask = BIT(2),
++		.links = arl_cs42l43_l2,
++		.drv_name = "sof_sdw",
++		.sof_tplg_filename = "sof-arl-cs42l43-l2.tplg",
++		.get_function_tplg_files = sof_sdw_get_tplg_files,
+ 	},
+ 	{
+ 		.link_mask = 0x1, /* link0 required */
+@@ -490,6 +498,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_arl_sdw_machines[] = {
+ 		.links = arl_rt722_l0_rt1320_l2,
+ 		.drv_name = "sof_sdw",
+ 		.sof_tplg_filename = "sof-arl-rt722-l0_rt1320-l2.tplg",
++		.get_function_tplg_files = sof_sdw_get_tplg_files,
+ 	},
+ 	{},
+ };
+diff --git a/sound/soc/intel/common/sof-function-topology-lib.c b/sound/soc/intel/common/sof-function-topology-lib.c
+new file mode 100644
+index 00000000000000..3cc81dcf047e3a
+--- /dev/null
++++ b/sound/soc/intel/common/sof-function-topology-lib.c
+@@ -0,0 +1,136 @@
++// SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
++//
++// This file is provided under a dual BSD/GPLv2 license.  When using or
++// redistributing this file, you may do so under either license.
++//
++// Copyright(c) 2025 Intel Corporation.
++//
++
++#include <linux/device.h>
++#include <linux/errno.h>
++#include <linux/firmware.h>
++#include <sound/soc.h>
++#include <sound/soc-acpi.h>
++#include "sof-function-topology-lib.h"
++
++enum tplg_device_id {
++	TPLG_DEVICE_SDCA_JACK,
++	TPLG_DEVICE_SDCA_AMP,
++	TPLG_DEVICE_SDCA_MIC,
++	TPLG_DEVICE_INTEL_PCH_DMIC,
++	TPLG_DEVICE_HDMI,
++	TPLG_DEVICE_MAX
++};
++
++#define SDCA_DEVICE_MASK (BIT(TPLG_DEVICE_SDCA_JACK) | BIT(TPLG_DEVICE_SDCA_AMP) | \
++			  BIT(TPLG_DEVICE_SDCA_MIC))
++
++#define SOF_INTEL_PLATFORM_NAME_MAX 4
++
++int sof_sdw_get_tplg_files(struct snd_soc_card *card, const struct snd_soc_acpi_mach *mach,
++			   const char *prefix, const char ***tplg_files)
++{
++	struct snd_soc_acpi_mach_params mach_params = mach->mach_params;
++	struct snd_soc_dai_link *dai_link;
++	const struct firmware *fw;
++	char platform[SOF_INTEL_PLATFORM_NAME_MAX];
++	unsigned long tplg_mask = 0;
++	int tplg_num = 0;
++	int tplg_dev;
++	int ret;
++	int i;
++
++	ret = sscanf(mach->sof_tplg_filename, "sof-%3s-*.tplg", platform);
++	if (ret != 1) {
++		dev_err(card->dev, "Invalid platform name %s of tplg %s\n",
++			platform, mach->sof_tplg_filename);
++		return -EINVAL;
++	}
++
++	for_each_card_prelinks(card, i, dai_link) {
++		char *tplg_dev_name;
++
++		dev_dbg(card->dev, "dai_link %s id %d\n", dai_link->name, dai_link->id);
++		if (strstr(dai_link->name, "SimpleJack")) {
++			tplg_dev = TPLG_DEVICE_SDCA_JACK;
++			tplg_dev_name = "sdca-jack";
++		} else if (strstr(dai_link->name, "SmartAmp")) {
++			tplg_dev = TPLG_DEVICE_SDCA_AMP;
++			tplg_dev_name = devm_kasprintf(card->dev, GFP_KERNEL,
++						       "sdca-%damp", dai_link->num_cpus);
++			if (!tplg_dev_name)
++				return -ENOMEM;
++		} else if (strstr(dai_link->name, "SmartMic")) {
++			tplg_dev = TPLG_DEVICE_SDCA_MIC;
++			tplg_dev_name = "sdca-mic";
++		} else if (strstr(dai_link->name, "dmic")) {
++			switch (mach_params.dmic_num) {
++			case 2:
++				tplg_dev_name = "dmic-2ch";
++				break;
++			case 4:
++				tplg_dev_name = "dmic-4ch";
++				break;
++			default:
++				dev_warn(card->dev,
++					 "unsupported number of dmics: %d\n",
++					 mach_params.dmic_num);
++				continue;
++			}
++			tplg_dev = TPLG_DEVICE_INTEL_PCH_DMIC;
++		} else if (strstr(dai_link->name, "iDisp")) {
++			tplg_dev = TPLG_DEVICE_HDMI;
++			tplg_dev_name = "hdmi-pcm5";
++
++		} else {
++			/* The dai link is not supported by separated tplg yet */
++			dev_dbg(card->dev,
++				"dai_link %s is not supported by separated tplg yet\n",
++				dai_link->name);
++			return 0;
++		}
++		if (tplg_mask & BIT(tplg_dev))
++			continue;
++
++		tplg_mask |= BIT(tplg_dev);
++
++		/*
++		 * The tplg file naming rule is sof-<platform>-<function>-id<BE id number>.tplg
++		 * where <platform> is only required for the DMIC function as the nhlt blob
++		 * is platform dependent.
++		 */
++		switch (tplg_dev) {
++		case TPLG_DEVICE_INTEL_PCH_DMIC:
++			(*tplg_files)[tplg_num] = devm_kasprintf(card->dev, GFP_KERNEL,
++								 "%s/sof-%s-%s-id%d.tplg",
++								 prefix, platform,
++								 tplg_dev_name, dai_link->id);
++			break;
++		default:
++			(*tplg_files)[tplg_num] = devm_kasprintf(card->dev, GFP_KERNEL,
++								 "%s/sof-%s-id%d.tplg",
++								 prefix, tplg_dev_name,
++								 dai_link->id);
++			break;
++		}
++		if (!(*tplg_files)[tplg_num])
++			return -ENOMEM;
++		tplg_num++;
++	}
++
++	dev_dbg(card->dev, "tplg_mask %#lx tplg_num %d\n", tplg_mask, tplg_num);
++
++	/* Check presence of sub-topologies */
++	for (i = 0; i < tplg_num; i++) {
++		ret = firmware_request_nowarn(&fw, (*tplg_files)[i], card->dev);
++		if (!ret) {
++			release_firmware(fw);
++		} else {
++			dev_dbg(card->dev, "Failed to open topology file: %s\n", (*tplg_files)[i]);
++			return 0;
++		}
++	}
++
++	return tplg_num;
++}
++
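As a worked example of the naming rule described in the comment above, a
small Python sketch of the two format strings used by
sof_sdw_get_tplg_files(); the prefix and BE ids below are hypothetical,
chosen only to illustrate the layout:

    def tplg_name(prefix, function, be_id, platform=None):
        # <platform> is only inserted for the PCH DMIC function.
        if platform:
            return "%s/sof-%s-%s-id%d.tplg" % (prefix, platform, function, be_id)
        return "%s/sof-%s-id%d.tplg" % (prefix, function, be_id)

    tplg_name("some/prefix", "sdca-jack", 0)        # some/prefix/sof-sdca-jack-id0.tplg
    tplg_name("some/prefix", "dmic-2ch", 2, "arl")  # some/prefix/sof-arl-dmic-2ch-id2.tplg
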
+diff --git a/sound/soc/intel/common/sof-function-topology-lib.h b/sound/soc/intel/common/sof-function-topology-lib.h
+new file mode 100644
+index 00000000000000..e7d0c39d07883c
+--- /dev/null
++++ b/sound/soc/intel/common/sof-function-topology-lib.h
+@@ -0,0 +1,15 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * sof-function-topology-lib.h - get-tplg-files ops
++ *
++ * Copyright (c) 2025, Intel Corporation.
++ *
++ */
++
++#ifndef _SND_SOC_ACPI_INTEL_GET_TPLG_H
++#define _SND_SOC_ACPI_INTEL_GET_TPLG_H
++
++int sof_sdw_get_tplg_files(struct snd_soc_card *card, const struct snd_soc_acpi_mach *mach,
++			   const char *prefix, const char ***tplg_files);
++
++#endif
+diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c
+index 6a3932d90b43a9..27b077f5c8f58b 100644
+--- a/sound/soc/sof/intel/hda.c
++++ b/sound/soc/sof/intel/hda.c
+@@ -1254,11 +1254,11 @@ static int check_tplg_quirk_mask(struct snd_soc_acpi_mach *mach)
+ 	return 0;
+ }
+ 
+-static char *remove_file_ext(const char *tplg_filename)
++static char *remove_file_ext(struct device *dev, const char *tplg_filename)
+ {
+ 	char *filename, *tmp;
+ 
+-	filename = kstrdup(tplg_filename, GFP_KERNEL);
++	filename = devm_kstrdup(dev, tplg_filename, GFP_KERNEL);
+ 	if (!filename)
+ 		return NULL;
+ 
+@@ -1342,7 +1342,7 @@ struct snd_soc_acpi_mach *hda_machine_select(struct snd_sof_dev *sdev)
+ 		 */
+ 		if (!sof_pdata->tplg_filename) {
+ 			/* remove file extension if it exists */
+-			tplg_filename = remove_file_ext(mach->sof_tplg_filename);
++			tplg_filename = remove_file_ext(sdev->dev, mach->sof_tplg_filename);
+ 			if (!tplg_filename)
+ 				return NULL;
+ 
+diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h
+index e6134ef2263d50..8b48a54b627a3d 100644
+--- a/tools/arch/x86/include/asm/msr-index.h
++++ b/tools/arch/x86/include/asm/msr-index.h
+@@ -616,6 +616,7 @@
+ #define MSR_AMD64_OSVW_STATUS		0xc0010141
+ #define MSR_AMD_PPIN_CTL		0xc00102f0
+ #define MSR_AMD_PPIN			0xc00102f1
++#define MSR_AMD64_CPUID_FN_7		0xc0011002
+ #define MSR_AMD64_CPUID_FN_1		0xc0011004
+ #define MSR_AMD64_LS_CFG		0xc0011020
+ #define MSR_AMD64_DC_CFG		0xc0011022
+diff --git a/tools/include/linux/kallsyms.h b/tools/include/linux/kallsyms.h
+index 5a37ccbec54fbc..f61a01dd7eb7c7 100644
+--- a/tools/include/linux/kallsyms.h
++++ b/tools/include/linux/kallsyms.h
+@@ -18,6 +18,7 @@ static inline const char *kallsyms_lookup(unsigned long addr,
+ 	return NULL;
+ }
+ 
++#ifdef HAVE_BACKTRACE_SUPPORT
+ #include <execinfo.h>
+ #include <stdlib.h>
+ static inline void print_ip_sym(const char *loglvl, unsigned long ip)
+@@ -30,5 +31,8 @@ static inline void print_ip_sym(const char *loglvl, unsigned long ip)
+ 
+ 	free(name);
+ }
++#else
++static inline void print_ip_sym(const char *loglvl, unsigned long ip) {}
++#endif
+ 
+ #endif
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index f23bdda737aaa5..d967ac001498bb 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -2318,6 +2318,7 @@ static int read_annotate(struct objtool_file *file,
+ 
+ 	for_each_reloc(sec->rsec, reloc) {
+ 		type = *(u32 *)(sec->data->d_buf + (reloc_idx(reloc) * sec->sh.sh_entsize) + 4);
++		type = bswap_if_needed(file->elf, type);
+ 
+ 		offset = reloc->sym->offset + reloc_addend(reloc);
+ 		insn = find_insn(file, reloc->sym->sec, offset);
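
The added bswap_if_needed() matters when objtool runs on a host whose
endianness differs from the target ELF. A minimal Python illustration of
the failure mode being fixed (the value is hypothetical):

    import struct

    raw = struct.pack("<I", 0x12345678)  # 32-bit annotation type, little-endian ELF
    # Read on a big-endian host without swapping, the value comes back garbled:
    assert struct.unpack(">I", raw)[0] == 0x78563412
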
+diff --git a/tools/testing/selftests/bpf/test_lru_map.c b/tools/testing/selftests/bpf/test_lru_map.c
+index fda7589c50236c..0921939532c6c2 100644
+--- a/tools/testing/selftests/bpf/test_lru_map.c
++++ b/tools/testing/selftests/bpf/test_lru_map.c
+@@ -138,6 +138,18 @@ static int sched_next_online(int pid, int *next_to_try)
+ 	return ret;
+ }
+ 
++/* Derive target_free from map_size, same as bpf_common_lru_populate */
++static unsigned int __tgt_size(unsigned int map_size)
++{
++	return (map_size / nr_cpus) / 2;
++}
++
++/* Inverse of how bpf_common_lru_populate derives target_free from map_size. */
++static unsigned int __map_size(unsigned int tgt_free)
++{
++	return tgt_free * nr_cpus * 2;
++}
++
+ /* Size of the LRU map is 2
+  * Add key=1 (+1 key)
+  * Add key=2 (+1 key)
+@@ -231,11 +243,11 @@ static void test_lru_sanity0(int map_type, int map_flags)
+ 	printf("Pass\n");
+ }
+ 
+-/* Size of the LRU map is 1.5*tgt_free
+- * Insert 1 to tgt_free (+tgt_free keys)
+- * Lookup 1 to tgt_free/2
+- * Insert 1+tgt_free to 2*tgt_free (+tgt_free keys)
+- * => 1+tgt_free/2 to LOCALFREE_TARGET will be removed by LRU
++/* Verify that unreferenced elements are recycled before referenced ones.
++ * Insert elements.
++ * Reference a subset of these.
++ * Insert more, enough to trigger recycling.
++ * Verify that unreferenced are recycled.
+  */
+ static void test_lru_sanity1(int map_type, int map_flags, unsigned int tgt_free)
+ {
+@@ -257,7 +269,7 @@ static void test_lru_sanity1(int map_type, int map_flags, unsigned int tgt_free)
+ 	batch_size = tgt_free / 2;
+ 	assert(batch_size * 2 == tgt_free);
+ 
+-	map_size = tgt_free + batch_size;
++	map_size = __map_size(tgt_free) + batch_size;
+ 	lru_map_fd = create_map(map_type, map_flags, map_size);
+ 	assert(lru_map_fd != -1);
+ 
+@@ -266,13 +278,13 @@ static void test_lru_sanity1(int map_type, int map_flags, unsigned int tgt_free)
+ 
+ 	value[0] = 1234;
+ 
+-	/* Insert 1 to tgt_free (+tgt_free keys) */
+-	end_key = 1 + tgt_free;
++	/* Insert map_size - batch_size keys */
++	end_key = 1 + __map_size(tgt_free);
+ 	for (key = 1; key < end_key; key++)
+ 		assert(!bpf_map_update_elem(lru_map_fd, &key, value,
+ 					    BPF_NOEXIST));
+ 
+-	/* Lookup 1 to tgt_free/2 */
++	/* Lookup 1 to batch_size */
+ 	end_key = 1 + batch_size;
+ 	for (key = 1; key < end_key; key++) {
+ 		assert(!bpf_map_lookup_elem_with_ref_bit(lru_map_fd, key, value));
+@@ -280,12 +292,13 @@ static void test_lru_sanity1(int map_type, int map_flags, unsigned int tgt_free)
+ 					    BPF_NOEXIST));
+ 	}
+ 
+-	/* Insert 1+tgt_free to 2*tgt_free
+-	 * => 1+tgt_free/2 to LOCALFREE_TARGET will be
++	/* Insert another map_size - batch_size keys
++	 * Map will contain 1 to batch_size plus these latest, i.e.,
++	 * => previous 1+batch_size to map_size - batch_size will have been
+ 	 * removed by LRU
+ 	 */
+-	key = 1 + tgt_free;
+-	end_key = key + tgt_free;
++	key = 1 + __map_size(tgt_free);
++	end_key = key + __map_size(tgt_free);
+ 	for (; key < end_key; key++) {
+ 		assert(!bpf_map_update_elem(lru_map_fd, &key, value,
+ 					    BPF_NOEXIST));
+@@ -301,17 +314,8 @@ static void test_lru_sanity1(int map_type, int map_flags, unsigned int tgt_free)
+ 	printf("Pass\n");
+ }
+ 
+-/* Size of the LRU map 1.5 * tgt_free
+- * Insert 1 to tgt_free (+tgt_free keys)
+- * Update 1 to tgt_free/2
+- *   => The original 1 to tgt_free/2 will be removed due to
+- *      the LRU shrink process
+- * Re-insert 1 to tgt_free/2 again and do a lookup immeidately
+- * Insert 1+tgt_free to tgt_free*3/2
+- * Insert 1+tgt_free*3/2 to tgt_free*5/2
+- *   => Key 1+tgt_free to tgt_free*3/2
+- *      will be removed from LRU because it has never
+- *      been lookup and ref bit is not set
++/* Verify that insertions exceeding map size will recycle the oldest.
++ * Verify that unreferenced elements are recycled before referenced.
+  */
+ static void test_lru_sanity2(int map_type, int map_flags, unsigned int tgt_free)
+ {
+@@ -334,7 +338,7 @@ static void test_lru_sanity2(int map_type, int map_flags, unsigned int tgt_free)
+ 	batch_size = tgt_free / 2;
+ 	assert(batch_size * 2 == tgt_free);
+ 
+-	map_size = tgt_free + batch_size;
++	map_size = __map_size(tgt_free) + batch_size;
+ 	lru_map_fd = create_map(map_type, map_flags, map_size);
+ 	assert(lru_map_fd != -1);
+ 
+@@ -343,8 +347,8 @@ static void test_lru_sanity2(int map_type, int map_flags, unsigned int tgt_free)
+ 
+ 	value[0] = 1234;
+ 
+-	/* Insert 1 to tgt_free (+tgt_free keys) */
+-	end_key = 1 + tgt_free;
++	/* Insert map_size - batch_size keys */
++	end_key = 1 + __map_size(tgt_free);
+ 	for (key = 1; key < end_key; key++)
+ 		assert(!bpf_map_update_elem(lru_map_fd, &key, value,
+ 					    BPF_NOEXIST));
+@@ -357,8 +361,7 @@ static void test_lru_sanity2(int map_type, int map_flags, unsigned int tgt_free)
+ 	 * shrink the inactive list to get tgt_free
+ 	 * number of free nodes.
+ 	 *
+-	 * Hence, the oldest key 1 to tgt_free/2
+-	 * are removed from the LRU list.
++	 * Hence, the oldest key is removed from the LRU list.
+ 	 */
+ 	key = 1;
+ 	if (map_type == BPF_MAP_TYPE_LRU_PERCPU_HASH) {
+@@ -370,8 +373,7 @@ static void test_lru_sanity2(int map_type, int map_flags, unsigned int tgt_free)
+ 					   BPF_EXIST));
+ 	}
+ 
+-	/* Re-insert 1 to tgt_free/2 again and do a lookup
+-	 * immeidately.
++	/* Re-insert 1 to batch_size again and do a lookup immediately.
+ 	 */
+ 	end_key = 1 + batch_size;
+ 	value[0] = 4321;
+@@ -387,17 +389,18 @@ static void test_lru_sanity2(int map_type, int map_flags, unsigned int tgt_free)
+ 
+ 	value[0] = 1234;
+ 
+-	/* Insert 1+tgt_free to tgt_free*3/2 */
+-	end_key = 1 + tgt_free + batch_size;
+-	for (key = 1 + tgt_free; key < end_key; key++)
++	/* Insert batch_size new elements */
++	key = 1 + __map_size(tgt_free);
++	end_key = key + batch_size;
++	for (; key < end_key; key++)
+ 		/* These newly added but not referenced keys will be
+ 		 * gone during the next LRU shrink.
+ 		 */
+ 		assert(!bpf_map_update_elem(lru_map_fd, &key, value,
+ 					    BPF_NOEXIST));
+ 
+-	/* Insert 1+tgt_free*3/2 to  tgt_free*5/2 */
+-	end_key = key + tgt_free;
++	/* Insert map_size - batch_size elements */
++	end_key += __map_size(tgt_free);
+ 	for (; key < end_key; key++) {
+ 		assert(!bpf_map_update_elem(lru_map_fd, &key, value,
+ 					    BPF_NOEXIST));
+@@ -413,12 +416,12 @@ static void test_lru_sanity2(int map_type, int map_flags, unsigned int tgt_free)
+ 	printf("Pass\n");
+ }
+ 
+-/* Size of the LRU map is 2*tgt_free
+- * It is to test the active/inactive list rotation
+- * Insert 1 to 2*tgt_free (+2*tgt_free keys)
+- * Lookup key 1 to tgt_free*3/2
+- * Add 1+2*tgt_free to tgt_free*5/2 (+tgt_free/2 keys)
+- *  => key 1+tgt_free*3/2 to 2*tgt_free are removed from LRU
++/* Test the active/inactive list rotation
++ *
++ * Fill the whole map, deplete the free list.
++ * Reference all except the last lru->target_free elements.
++ * Insert lru->target_free new elements. This triggers one shrink.
++ * Verify that the non-referenced elements are replaced.
+  */
+ static void test_lru_sanity3(int map_type, int map_flags, unsigned int tgt_free)
+ {
+@@ -437,8 +440,7 @@ static void test_lru_sanity3(int map_type, int map_flags, unsigned int tgt_free)
+ 
+ 	assert(sched_next_online(0, &next_cpu) != -1);
+ 
+-	batch_size = tgt_free / 2;
+-	assert(batch_size * 2 == tgt_free);
++	batch_size = __tgt_size(tgt_free);
+ 
+ 	map_size = tgt_free * 2;
+ 	lru_map_fd = create_map(map_type, map_flags, map_size);
+@@ -449,23 +451,21 @@ static void test_lru_sanity3(int map_type, int map_flags, unsigned int tgt_free)
+ 
+ 	value[0] = 1234;
+ 
+-	/* Insert 1 to 2*tgt_free (+2*tgt_free keys) */
+-	end_key = 1 + (2 * tgt_free);
++	/* Fill the map */
++	end_key = 1 + map_size;
+ 	for (key = 1; key < end_key; key++)
+ 		assert(!bpf_map_update_elem(lru_map_fd, &key, value,
+ 					    BPF_NOEXIST));
+ 
+-	/* Lookup key 1 to tgt_free*3/2 */
+-	end_key = tgt_free + batch_size;
++	/* Reference all but the last batch_size */
++	end_key = 1 + map_size - batch_size;
+ 	for (key = 1; key < end_key; key++) {
+ 		assert(!bpf_map_lookup_elem_with_ref_bit(lru_map_fd, key, value));
+ 		assert(!bpf_map_update_elem(expected_map_fd, &key, value,
+ 					    BPF_NOEXIST));
+ 	}
+ 
+-	/* Add 1+2*tgt_free to tgt_free*5/2
+-	 * (+tgt_free/2 keys)
+-	 */
++	/* Insert new batch_size: replaces the non-referenced elements */
+ 	key = 2 * tgt_free + 1;
+ 	end_key = key + batch_size;
+ 	for (; key < end_key; key++) {
+@@ -500,7 +500,8 @@ static void test_lru_sanity4(int map_type, int map_flags, unsigned int tgt_free)
+ 		lru_map_fd = create_map(map_type, map_flags,
+ 					3 * tgt_free * nr_cpus);
+ 	else
+-		lru_map_fd = create_map(map_type, map_flags, 3 * tgt_free);
++		lru_map_fd = create_map(map_type, map_flags,
++					3 * __map_size(tgt_free));
+ 	assert(lru_map_fd != -1);
+ 
+ 	expected_map_fd = create_map(BPF_MAP_TYPE_HASH, 0,
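
The new __tgt_size()/__map_size() helpers are exact inverses (up to
integer truncation), which the updated tests rely on. A quick check in
Python, assuming a hypothetical 4-CPU host and tgt_free of 128:

    def tgt_size(map_size, nr_cpus):
        return (map_size // nr_cpus) // 2   # mirrors __tgt_size()

    def map_size(tgt_free, nr_cpus):
        return tgt_free * nr_cpus * 2       # mirrors __map_size()

    assert map_size(128, 4) == 1024
    assert tgt_size(1024, 4) == 128
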
+diff --git a/tools/testing/selftests/net/lib.sh b/tools/testing/selftests/net/lib.sh
+index 701905eeff66d8..f380f304259459 100644
+--- a/tools/testing/selftests/net/lib.sh
++++ b/tools/testing/selftests/net/lib.sh
+@@ -286,7 +286,7 @@ log_test_result()
+ 	local test_name=$1; shift
+ 	local opt_str=$1; shift
+ 	local result=$1; shift
+-	local retmsg=$1; shift
++	local retmsg=$1
+ 
+ 	printf "TEST: %-60s  [%s]\n" "$test_name $opt_str" "$result"
+ 	if [[ $retmsg ]]; then
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index e85b33a92624d9..d0ce45c7b5cd1a 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -2515,6 +2515,8 @@ static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
+ 		r = xa_reserve(&kvm->mem_attr_array, i, GFP_KERNEL_ACCOUNT);
+ 		if (r)
+ 			goto out_unlock;
++
++		cond_resched();
+ 	}
+ 
+ 	kvm_handle_gfn_range(kvm, &pre_set_range);
+@@ -2523,6 +2525,7 @@ static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
+ 		r = xa_err(xa_store(&kvm->mem_attr_array, i, entry,
+ 				    GFP_KERNEL_ACCOUNT));
+ 		KVM_BUG_ON(r, kvm);
++		cond_resched();
+ 	}
+ 
+ 	kvm_handle_gfn_range(kvm, &post_set_range);


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-07-24  9:17 Arisu Tachibana
  0 siblings, 0 replies; 19+ messages in thread
From: Arisu Tachibana @ 2025-07-24  9:17 UTC (permalink / raw
  To: gentoo-commits

commit:     644c5fb688c2c2e53691b1ca96a48dc46a119713
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Jul 24 09:17:29 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Jul 24 09:17:29 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=644c5fb6

Linux patch 6.15.8

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README             |    4 +
 1007_linux-6.15.8.patch | 9321 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 9325 insertions(+)

diff --git a/0000_README b/0000_README
index ef16828c..21f71559 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-6.15.7.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.15.7
 
+Patch:  1007_linux-6.15.8.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.15.8
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1007_linux-6.15.8.patch b/1007_linux-6.15.8.patch
new file mode 100644
index 00000000..571038a4
--- /dev/null
+++ b/1007_linux-6.15.8.patch
@@ -0,0 +1,9321 @@
+diff --git a/Makefile b/Makefile
+index 29a19c24428d05..8e2d0372336867 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 15
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
+index 0baf256b44003f..983b2f0e87970a 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
+@@ -687,11 +687,12 @@ lpuart5: serial@29a0000 {
+ 		};
+ 
+ 		wdog0: watchdog@2ad0000 {
+-			compatible = "fsl,imx21-wdt";
++			compatible = "fsl,ls1046a-wdt", "fsl,imx21-wdt";
+ 			reg = <0x0 0x2ad0000 0x0 0x10000>;
+ 			interrupts = <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
+ 					    QORIQ_CLK_PLL_DIV(2)>;
++			big-endian;
+ 		};
+ 
+ 		edma0: dma-controller@2c00000 {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
+index b46566f3ce2056..b736dbc1e11b33 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
+@@ -464,6 +464,7 @@ reg_vdd_phy: LDO4 {
+ 			};
+ 
+ 			reg_nvcc_sd: LDO5 {
++				regulator-always-on;
+ 				regulator-max-microvolt = <3300000>;
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-name = "On-module +V3.3_1.8_SD (LDO5)";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-venice-gw71xx.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-venice-gw71xx.dtsi
+index 2f740d74707bdf..4bf818873fe3c5 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-venice-gw71xx.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp-venice-gw71xx.dtsi
+@@ -70,7 +70,7 @@ &ecspi2 {
+ 	tpm@1 {
+ 		compatible = "atmel,attpm20p", "tcg,tpm_tis-spi";
+ 		reg = <0x1>;
+-		spi-max-frequency = <36000000>;
++		spi-max-frequency = <25000000>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-venice-gw72xx.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-venice-gw72xx.dtsi
+index 5ab3ffe9931d4a..cf747ec6fa16eb 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-venice-gw72xx.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp-venice-gw72xx.dtsi
+@@ -110,7 +110,7 @@ &ecspi2 {
+ 	tpm@1 {
+ 		compatible = "atmel,attpm20p", "tcg,tpm_tis-spi";
+ 		reg = <0x1>;
+-		spi-max-frequency = <36000000>;
++		spi-max-frequency = <25000000>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-venice-gw73xx.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-venice-gw73xx.dtsi
+index e2b5e7ac3e465f..5eb114d2360a3b 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-venice-gw73xx.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp-venice-gw73xx.dtsi
+@@ -122,7 +122,7 @@ &ecspi2 {
+ 	tpm@1 {
+ 		compatible = "atmel,attpm20p", "tcg,tpm_tis-spi";
+ 		reg = <0x1>;
+-		spi-max-frequency = <36000000>;
++		spi-max-frequency = <25000000>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts b/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
+index 6daa2313f87900..568d24265ddf8e 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
+@@ -201,7 +201,7 @@ &ecspi1 {
+ 	tpm@0 {
+ 		compatible = "atmel,attpm20p", "tcg,tpm_tis-spi";
+ 		reg = <0x0>;
+-		spi-max-frequency = <36000000>;
++		spi-max-frequency = <25000000>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx95-15x15-evk.dts b/arch/arm64/boot/dts/freescale/imx95-15x15-evk.dts
+index 514f2429dcbc27..3ab4d27de1a220 100644
+--- a/arch/arm64/boot/dts/freescale/imx95-15x15-evk.dts
++++ b/arch/arm64/boot/dts/freescale/imx95-15x15-evk.dts
+@@ -558,17 +558,17 @@ &sai3 {
+ &scmi_iomuxc {
+ 	pinctrl_emdio: emdiogrp {
+ 		fsl,pins = <
+-			IMX95_PAD_ENET2_MDC__NETCMIX_TOP_NETC_MDC		0x57e
+-			IMX95_PAD_ENET2_MDIO__NETCMIX_TOP_NETC_MDIO		0x97e
++			IMX95_PAD_ENET2_MDC__NETCMIX_TOP_NETC_MDC		0x50e
++			IMX95_PAD_ENET2_MDIO__NETCMIX_TOP_NETC_MDIO		0x90e
+ 		>;
+ 	};
+ 
+ 	pinctrl_enetc0: enetc0grp {
+ 		fsl,pins = <
+-			IMX95_PAD_ENET1_TD3__NETCMIX_TOP_ETH0_RGMII_TD3		0x57e
+-			IMX95_PAD_ENET1_TD2__NETCMIX_TOP_ETH0_RGMII_TD2		0x57e
+-			IMX95_PAD_ENET1_TD1__NETCMIX_TOP_ETH0_RGMII_TD1		0x57e
+-			IMX95_PAD_ENET1_TD0__NETCMIX_TOP_ETH0_RGMII_TD0		0x57e
++			IMX95_PAD_ENET1_TD3__NETCMIX_TOP_ETH0_RGMII_TD3		0x50e
++			IMX95_PAD_ENET1_TD2__NETCMIX_TOP_ETH0_RGMII_TD2		0x50e
++			IMX95_PAD_ENET1_TD1__NETCMIX_TOP_ETH0_RGMII_TD1		0x50e
++			IMX95_PAD_ENET1_TD0__NETCMIX_TOP_ETH0_RGMII_TD0		0x50e
+ 			IMX95_PAD_ENET1_TX_CTL__NETCMIX_TOP_ETH0_RGMII_TX_CTL	0x57e
+ 			IMX95_PAD_ENET1_TXC__NETCMIX_TOP_ETH0_RGMII_TX_CLK	0x58e
+ 			IMX95_PAD_ENET1_RX_CTL__NETCMIX_TOP_ETH0_RGMII_RX_CTL	0x57e
+@@ -582,10 +582,10 @@ IMX95_PAD_ENET1_RD3__NETCMIX_TOP_ETH0_RGMII_RD3		0x57e
+ 
+ 	pinctrl_enetc1: enetc1grp {
+ 		fsl,pins = <
+-			IMX95_PAD_ENET2_TD3__NETCMIX_TOP_ETH1_RGMII_TD3		0x57e
+-			IMX95_PAD_ENET2_TD2__NETCMIX_TOP_ETH1_RGMII_TD2		0x57e
+-			IMX95_PAD_ENET2_TD1__NETCMIX_TOP_ETH1_RGMII_TD1		0x57e
+-			IMX95_PAD_ENET2_TD0__NETCMIX_TOP_ETH1_RGMII_TD0		0x57e
++			IMX95_PAD_ENET2_TD3__NETCMIX_TOP_ETH1_RGMII_TD3		0x50e
++			IMX95_PAD_ENET2_TD2__NETCMIX_TOP_ETH1_RGMII_TD2		0x50e
++			IMX95_PAD_ENET2_TD1__NETCMIX_TOP_ETH1_RGMII_TD1		0x50e
++			IMX95_PAD_ENET2_TD0__NETCMIX_TOP_ETH1_RGMII_TD0		0x50e
+ 			IMX95_PAD_ENET2_TX_CTL__NETCMIX_TOP_ETH1_RGMII_TX_CTL	0x57e
+ 			IMX95_PAD_ENET2_TXC__NETCMIX_TOP_ETH1_RGMII_TX_CLK	0x58e
+ 			IMX95_PAD_ENET2_RX_CTL__NETCMIX_TOP_ETH1_RGMII_RX_CTL	0x57e
+diff --git a/arch/arm64/boot/dts/freescale/imx95-19x19-evk.dts b/arch/arm64/boot/dts/freescale/imx95-19x19-evk.dts
+index 25ac331f03183e..9a4d5f7f9e7f99 100644
+--- a/arch/arm64/boot/dts/freescale/imx95-19x19-evk.dts
++++ b/arch/arm64/boot/dts/freescale/imx95-19x19-evk.dts
+@@ -536,17 +536,17 @@ &wdog3 {
+ &scmi_iomuxc {
+ 	pinctrl_emdio: emdiogrp{
+ 		fsl,pins = <
+-			IMX95_PAD_ENET1_MDC__NETCMIX_TOP_NETC_MDC		0x57e
+-			IMX95_PAD_ENET1_MDIO__NETCMIX_TOP_NETC_MDIO		0x97e
++			IMX95_PAD_ENET1_MDC__NETCMIX_TOP_NETC_MDC		0x50e
++			IMX95_PAD_ENET1_MDIO__NETCMIX_TOP_NETC_MDIO		0x90e
+ 		>;
+ 	};
+ 
+ 	pinctrl_enetc0: enetc0grp {
+ 		fsl,pins = <
+-			IMX95_PAD_ENET1_TD3__NETCMIX_TOP_ETH0_RGMII_TD3		0x57e
+-			IMX95_PAD_ENET1_TD2__NETCMIX_TOP_ETH0_RGMII_TD2		0x57e
+-			IMX95_PAD_ENET1_TD1__NETCMIX_TOP_ETH0_RGMII_TD1		0x57e
+-			IMX95_PAD_ENET1_TD0__NETCMIX_TOP_ETH0_RGMII_TD0		0x57e
++			IMX95_PAD_ENET1_TD3__NETCMIX_TOP_ETH0_RGMII_TD3		0x50e
++			IMX95_PAD_ENET1_TD2__NETCMIX_TOP_ETH0_RGMII_TD2		0x50e
++			IMX95_PAD_ENET1_TD1__NETCMIX_TOP_ETH0_RGMII_TD1		0x50e
++			IMX95_PAD_ENET1_TD0__NETCMIX_TOP_ETH0_RGMII_TD0		0x50e
+ 			IMX95_PAD_ENET1_TX_CTL__NETCMIX_TOP_ETH0_RGMII_TX_CTL	0x57e
+ 			IMX95_PAD_ENET1_TXC__NETCMIX_TOP_ETH0_RGMII_TX_CLK	0x58e
+ 			IMX95_PAD_ENET1_RX_CTL__NETCMIX_TOP_ETH0_RGMII_RX_CTL	0x57e
+diff --git a/arch/arm64/boot/dts/freescale/imx95.dtsi b/arch/arm64/boot/dts/freescale/imx95.dtsi
+index 59f057ba6fa7ff..7ad9adfb26533d 100644
+--- a/arch/arm64/boot/dts/freescale/imx95.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx95.dtsi
+@@ -1678,7 +1678,7 @@ pcie0_ep: pcie-ep@4c300000 {
+ 			      <0x9 0 1 0>;
+ 			reg-names = "dbi","atu", "dbi2", "app", "dma", "addr_space";
+ 			num-lanes = <1>;
+-			interrupts = <GIC_SPI 317 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 311 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "dma";
+ 			clocks = <&scmi_clk IMX95_CLK_HSIO>,
+ 				 <&scmi_clk IMX95_CLK_HSIOPLL>,
+diff --git a/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi b/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi
+index 142244d5270608..31354354bdf483 100644
+--- a/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi
++++ b/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi
+@@ -363,6 +363,18 @@ pmic_int: pmic-int {
+ 				<0 RK_PA7 RK_FUNC_GPIO &pcfg_pull_up>;
+ 		};
+ 	};
++
++	spi1 {
++		spi1_csn0_gpio_pin: spi1-csn0-gpio-pin {
++			rockchip,pins =
++				<3 RK_PB1 RK_FUNC_GPIO &pcfg_pull_up_4ma>;
++		};
++
++		spi1_csn1_gpio_pin: spi1-csn1-gpio-pin {
++			rockchip,pins =
++				<3 RK_PB2 RK_FUNC_GPIO &pcfg_pull_up_4ma>;
++		};
++	};
+ };
+ 
+ &pmu_io_domains {
+@@ -380,6 +392,17 @@ &sdmmc {
+ 	vqmmc-supply = <&vccio_sd>;
+ };
+ 
++&spi1 {
++	/*
++	 * Hardware CS has a very slow rise time of about 6us,
++	 * causing transmission errors.
++	 * With cs-gpios we have a rise time of about 20ns.
++	 */
++	cs-gpios = <&gpio3 RK_PB1 GPIO_ACTIVE_LOW>, <&gpio3 RK_PB2 GPIO_ACTIVE_LOW>;
++	pinctrl-names = "default";
++	pinctrl-0 = <&spi1_clk &spi1_csn0_gpio_pin &spi1_csn1_gpio_pin &spi1_miso &spi1_mosi>;
++};
++
+ &tsadc {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3576-armsom-sige5.dts b/arch/arm64/boot/dts/rockchip/rk3576-armsom-sige5.dts
+index 314067ba6f3c4f..9cb05434d6da0c 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3576-armsom-sige5.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3576-armsom-sige5.dts
+@@ -177,10 +177,38 @@ vcc_3v3_ufs_s0: regulator-vcc-ufs-s0 {
+ 	};
+ };
+ 
++&cpu_b0 {
++	cpu-supply = <&vdd_cpu_big_s0>;
++};
++
++&cpu_b1 {
++	cpu-supply = <&vdd_cpu_big_s0>;
++};
++
++&cpu_b2 {
++	cpu-supply = <&vdd_cpu_big_s0>;
++};
++
++&cpu_b3 {
++	cpu-supply = <&vdd_cpu_big_s0>;
++};
++
+ &cpu_l0 {
+ 	cpu-supply = <&vdd_cpu_lit_s0>;
+ };
+ 
++&cpu_l1 {
++	cpu-supply = <&vdd_cpu_lit_s0>;
++};
++
++&cpu_l2 {
++	cpu-supply = <&vdd_cpu_lit_s0>;
++};
++
++&cpu_l3 {
++	cpu-supply = <&vdd_cpu_lit_s0>;
++};
++
+ &gmac0 {
+ 	phy-mode = "rgmii-id";
+ 	clock_in_out = "output";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-coolpi-cm5.dtsi b/arch/arm64/boot/dts/rockchip/rk3588-coolpi-cm5.dtsi
+index cc37f082adea0f..b07543315f8785 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-coolpi-cm5.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588-coolpi-cm5.dtsi
+@@ -321,6 +321,7 @@ &sdmmc {
+ 	bus-width = <4>;
+ 	cap-mmc-highspeed;
+ 	cap-sd-highspeed;
++	cd-gpios = <&gpio0 RK_PA4 GPIO_ACTIVE_LOW>;
+ 	disable-wp;
+ 	max-frequency = <150000000>;
+ 	no-sdio;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588s-coolpi-4b.dts b/arch/arm64/boot/dts/rockchip/rk3588s-coolpi-4b.dts
+index 8b717c4017a46a..b2947b36fadaf6 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588s-coolpi-4b.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3588s-coolpi-4b.dts
+@@ -474,6 +474,7 @@ &sdmmc {
+ 	bus-width = <4>;
+ 	cap-mmc-highspeed;
+ 	cap-sd-highspeed;
++	cd-gpios = <&gpio0 RK_PA4 GPIO_ACTIVE_LOW>;
+ 	disable-wp;
+ 	max-frequency = <150000000>;
+ 	no-sdio;
+diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
+index 9c83848797a78b..80230de167def3 100644
+--- a/arch/riscv/kernel/traps.c
++++ b/arch/riscv/kernel/traps.c
+@@ -6,6 +6,7 @@
+ #include <linux/cpu.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
++#include <linux/irqflags.h>
+ #include <linux/randomize_kstack.h>
+ #include <linux/sched.h>
+ #include <linux/sched/debug.h>
+@@ -151,7 +152,9 @@ asmlinkage __visible __trap_section void name(struct pt_regs *regs)		\
+ {										\
+ 	if (user_mode(regs)) {							\
+ 		irqentry_enter_from_user_mode(regs);				\
++		local_irq_enable();						\
+ 		do_trap_error(regs, signo, code, regs->epc, "Oops - " str);	\
++		local_irq_disable();						\
+ 		irqentry_exit_to_user_mode(regs);				\
+ 	} else {								\
+ 		irqentry_state_t state = irqentry_nmi_enter(regs);		\
+@@ -173,17 +176,14 @@ asmlinkage __visible __trap_section void do_trap_insn_illegal(struct pt_regs *re
+ 
+ 	if (user_mode(regs)) {
+ 		irqentry_enter_from_user_mode(regs);
+-
+ 		local_irq_enable();
+ 
+ 		handled = riscv_v_first_use_handler(regs);
+-
+-		local_irq_disable();
+-
+ 		if (!handled)
+ 			do_trap_error(regs, SIGILL, ILL_ILLOPC, regs->epc,
+ 				      "Oops - illegal instruction");
+ 
++		local_irq_disable();
+ 		irqentry_exit_to_user_mode(regs);
+ 	} else {
+ 		irqentry_state_t state = irqentry_nmi_enter(regs);
+@@ -308,9 +308,11 @@ asmlinkage __visible __trap_section void do_trap_break(struct pt_regs *regs)
+ {
+ 	if (user_mode(regs)) {
+ 		irqentry_enter_from_user_mode(regs);
++		local_irq_enable();
+ 
+ 		handle_break(regs);
+ 
++		local_irq_disable();
+ 		irqentry_exit_to_user_mode(regs);
+ 	} else {
+ 		irqentry_state_t state = irqentry_nmi_enter(regs);
+diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
+index fe0ab912014baa..f3123f1d20505f 100644
+--- a/arch/riscv/kernel/traps_misaligned.c
++++ b/arch/riscv/kernel/traps_misaligned.c
+@@ -460,7 +460,7 @@ static int handle_scalar_misaligned_load(struct pt_regs *regs)
+ 	}
+ 
+ 	if (!fp)
+-		SET_RD(insn, regs, val.data_ulong << shift >> shift);
++		SET_RD(insn, regs, (long)(val.data_ulong << shift) >> shift);
+ 	else if (len == 8)
+ 		set_f64_rd(insn, regs, val.data_u64);
+ 	else
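
The one-line riscv fix above restores sign extension: shifting an
unsigned value left and back right only zero-fills, while casting to a
signed type first makes the right shift arithmetic. A Python sketch of
the two behaviours for a 16-bit load of 0xffff into a 64-bit register
(the signed case is emulated, since Python ints are unbounded):

    def load_unsigned(val, width, bits=64):
        shift = bits - width
        return ((val << shift) & (2**bits - 1)) >> shift  # buggy: stays 0xffff

    def load_signed(val, width, bits=64):
        x = (val << (bits - width)) & (2**bits - 1)
        if x & (1 << (bits - 1)):                         # emulate the (long) cast
            x -= 2**bits
        return x >> (bits - width)                        # arithmetic shift

    assert load_unsigned(0xffff, 16) == 0xffff
    assert load_signed(0xffff, 16) == -1
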
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index 945106b5562db0..396cde0f48e371 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -544,7 +544,15 @@ static void bpf_jit_plt(struct bpf_plt *plt, void *ret, void *target)
+ {
+ 	memcpy(plt, &bpf_plt, sizeof(*plt));
+ 	plt->ret = ret;
+-	plt->target = target;
++	/*
++	 * (target == NULL) implies that the branch to this PLT entry was
++	 * patched and became a no-op. However, some CPU could have jumped
++	 * to this PLT entry before patching and may be still executing it.
++	 *
++	 * Since the intention in this case is to make the PLT entry a no-op,
++	 * make the target point to the return label instead of NULL.
++	 */
++	plt->target = target ?: ret;
+ }
+ 
+ /*
+diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
+index 160cbde4323df9..49ba9395e28215 100644
+--- a/arch/x86/kvm/xen.c
++++ b/arch/x86/kvm/xen.c
+@@ -1526,7 +1526,7 @@ static bool kvm_xen_schedop_poll(struct kvm_vcpu *vcpu, bool longmode,
+ 	if (kvm_read_guest_virt(vcpu, (gva_t)sched_poll.ports, ports,
+ 				sched_poll.nr_ports * sizeof(*ports), &e)) {
+ 		*r = -EFAULT;
+-		return true;
++		goto out;
+ 	}
+ 
+ 	for (i = 0; i < sched_poll.nr_ports; i++) {
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 1f9b45b0b9ee76..12a5059089a2f8 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -964,4 +964,5 @@ void blk_unregister_queue(struct gendisk *disk)
+ 	kobject_del(&disk->queue_kobj);
+ 
+ 	blk_debugfs_remove(disk);
++	kobject_put(&disk->queue_kobj);
+ }
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index f8d136684109aa..3999056877572e 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -308,14 +308,13 @@ static void lo_complete_rq(struct request *rq)
+ static void lo_rw_aio_do_completion(struct loop_cmd *cmd)
+ {
+ 	struct request *rq = blk_mq_rq_from_pdu(cmd);
+-	struct loop_device *lo = rq->q->queuedata;
+ 
+ 	if (!atomic_dec_and_test(&cmd->ref))
+ 		return;
+ 	kfree(cmd->bvec);
+ 	cmd->bvec = NULL;
+ 	if (req_op(rq) == REQ_OP_WRITE)
+-		file_end_write(lo->lo_backing_file);
++		kiocb_end_write(&cmd->iocb);
+ 	if (likely(!blk_should_fake_timeout(rq->q)))
+ 		blk_mq_complete_request(rq);
+ }
+@@ -391,7 +390,7 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
+ 	}
+ 
+ 	if (rw == ITER_SOURCE) {
+-		file_start_write(lo->lo_backing_file);
++		kiocb_start_write(&cmd->iocb);
+ 		ret = file->f_op->write_iter(&cmd->iocb, &iter);
+ 	} else
+ 		ret = file->f_op->read_iter(&cmd->iocb, &iter);
+diff --git a/drivers/bluetooth/bfusb.c b/drivers/bluetooth/bfusb.c
+index 0d6ad50da0466e..8df310983bf6b3 100644
+--- a/drivers/bluetooth/bfusb.c
++++ b/drivers/bluetooth/bfusb.c
+@@ -670,7 +670,7 @@ static int bfusb_probe(struct usb_interface *intf, const struct usb_device_id *i
+ 	hdev->flush = bfusb_flush;
+ 	hdev->send  = bfusb_send_frame;
+ 
+-	set_bit(HCI_QUIRK_BROKEN_LOCAL_COMMANDS, &hdev->quirks);
++	hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LOCAL_COMMANDS);
+ 
+ 	if (hci_register_dev(hdev) < 0) {
+ 		BT_ERR("Can't register HCI device");
+diff --git a/drivers/bluetooth/bpa10x.c b/drivers/bluetooth/bpa10x.c
+index 1fa58c059cbf5c..8b43dfc755de19 100644
+--- a/drivers/bluetooth/bpa10x.c
++++ b/drivers/bluetooth/bpa10x.c
+@@ -398,7 +398,7 @@ static int bpa10x_probe(struct usb_interface *intf,
+ 	hdev->send     = bpa10x_send_frame;
+ 	hdev->set_diag = bpa10x_set_diag;
+ 
+-	set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
++	hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
+ 
+ 	err = hci_register_dev(hdev);
+ 	if (err < 0) {
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index 0a60660fc8ce80..3a3a56ddbb06d0 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -135,7 +135,7 @@ int btbcm_check_bdaddr(struct hci_dev *hdev)
+ 		if (btbcm_set_bdaddr_from_efi(hdev) != 0) {
+ 			bt_dev_info(hdev, "BCM: Using default device address (%pMR)",
+ 				    &bda->bdaddr);
+-			set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
++			hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
+ 		}
+ 	}
+ 
+@@ -467,7 +467,7 @@ static int btbcm_print_controller_features(struct hci_dev *hdev)
+ 
+ 	/* Read DMI and disable broken Read LE Min/Max Tx Power */
+ 	if (dmi_first_match(disable_broken_read_transmit_power))
+-		set_bit(HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER);
+ 
+ 	return 0;
+ }
+@@ -706,7 +706,7 @@ int btbcm_finalize(struct hci_dev *hdev, bool *fw_load_done, bool use_autobaud_m
+ 
+ 	btbcm_check_bdaddr(hdev);
+ 
+-	set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
++	hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER);
+ 
+ 	return 0;
+ }
+@@ -769,7 +769,7 @@ int btbcm_setup_apple(struct hci_dev *hdev)
+ 		kfree_skb(skb);
+ 	}
+ 
+-	set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
++	hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index 46d9bbd8e411b3..99e6603b773f27 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -88,7 +88,7 @@ int btintel_check_bdaddr(struct hci_dev *hdev)
+ 	if (!bacmp(&bda->bdaddr, BDADDR_INTEL)) {
+ 		bt_dev_err(hdev, "Found Intel default device address (%pMR)",
+ 			   &bda->bdaddr);
+-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
+ 	}
+ 
+ 	kfree_skb(skb);
+@@ -2027,7 +2027,7 @@ static int btintel_download_fw(struct hci_dev *hdev,
+ 	 */
+ 	if (!bacmp(&params->otp_bdaddr, BDADDR_ANY)) {
+ 		bt_dev_info(hdev, "No device address configured");
+-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
+ 	}
+ 
+ download:
+@@ -2295,7 +2295,7 @@ static int btintel_prepare_fw_download_tlv(struct hci_dev *hdev,
+ 		 */
+ 		if (!bacmp(&ver->otp_bd_addr, BDADDR_ANY)) {
+ 			bt_dev_info(hdev, "No device address configured");
+-			set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
++			hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
+ 		}
+ 	}
+ 
+@@ -2670,7 +2670,7 @@ static u8 btintel_classify_pkt_type(struct hci_dev *hdev, struct sk_buff *skb)
+ 	 * Distinguish ISO data packets form ACL data packets
+ 	 * based on their connection handle value range.
+ 	 */
+-	if (hci_skb_pkt_type(skb) == HCI_ACLDATA_PKT) {
++	if (iso_capable(hdev) && hci_skb_pkt_type(skb) == HCI_ACLDATA_PKT) {
+ 		__u16 handle = __le16_to_cpu(hci_acl_hdr(skb)->handle);
+ 
+ 		if (hci_handle(handle) >= BTINTEL_ISODATA_HANDLE_BASE)
+@@ -3435,9 +3435,9 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 	}
+ 
+ 	/* Apply the common HCI quirks for Intel device */
+-	set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
+-	set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+-	set_bit(HCI_QUIRK_NON_PERSISTENT_DIAG, &hdev->quirks);
++	hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER);
++	hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
++	hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_DIAG);
+ 
+ 	/* Set up the quality report callback for Intel devices */
+ 	hdev->set_quality_report = btintel_set_quality_report;
+@@ -3475,8 +3475,8 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 			 */
+ 			if (!btintel_test_flag(hdev,
+ 					       INTEL_ROM_LEGACY_NO_WBS_SUPPORT))
+-				set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
+-					&hdev->quirks);
++				hci_set_quirk(hdev,
++					      HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
+ 
+ 			err = btintel_legacy_rom_setup(hdev, &ver);
+ 			break;
+@@ -3491,11 +3491,11 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 			 *
+ 			 * All Legacy bootloader devices support WBS
+ 			 */
+-			set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
+-				&hdev->quirks);
++			hci_set_quirk(hdev,
++				      HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
+ 
+ 			/* These variants don't seem to support LE Coded PHY */
+-			set_bit(HCI_QUIRK_BROKEN_LE_CODED, &hdev->quirks);
++			hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LE_CODED);
+ 
+ 			/* Setup MSFT Extension support */
+ 			btintel_set_msft_opcode(hdev, ver.hw_variant);
+@@ -3571,10 +3571,10 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 		 *
+ 		 * All Legacy bootloader devices support WBS
+ 		 */
+-		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
+ 
+ 		/* These variants don't seem to support LE Coded PHY */
+-		set_bit(HCI_QUIRK_BROKEN_LE_CODED, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LE_CODED);
+ 
+ 		/* Setup MSFT Extension support */
+ 		btintel_set_msft_opcode(hdev, ver.hw_variant);
+@@ -3600,7 +3600,7 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 		 *
+ 		 * All TLV based devices support WBS
+ 		 */
+-		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
+ 
+ 		/* Setup MSFT Extension support */
+ 		btintel_set_msft_opcode(hdev,
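
Aside from the accessor conversion, the btintel.c change above also tightens packet classification: ACL-typed packets are only rerouted to the ISO type when the controller actually supports ISO, so plain ACL traffic whose handle happens to land at or above BTINTEL_ISODATA_HANDLE_BASE is no longer misclassified on non-ISO hardware. The resulting rule, restated as a sketch (iso_capable() is taken from the diff context and assumed to report controller ISO support):

if (iso_capable(hdev) && hci_skb_pkt_type(skb) == HCI_ACLDATA_PKT) {
	__u16 handle = __le16_to_cpu(hci_acl_hdr(skb)->handle);

	if (hci_handle(handle) >= BTINTEL_ISODATA_HANDLE_BASE)
		return HCI_ISODATA_PKT;	/* ISO carried with ACL type */
}
return hci_skb_pkt_type(skb);		/* otherwise keep the type */
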
+diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
+index 385e29367dd1df..24d73bae14ec9c 100644
+--- a/drivers/bluetooth/btintel_pcie.c
++++ b/drivers/bluetooth/btintel_pcie.c
+@@ -1941,9 +1941,9 @@ static int btintel_pcie_setup_internal(struct hci_dev *hdev)
+ 	}
+ 
+ 	/* Apply the common HCI quirks for Intel device */
+-	set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
+-	set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+-	set_bit(HCI_QUIRK_NON_PERSISTENT_DIAG, &hdev->quirks);
++	hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER);
++	hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
++	hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_DIAG);
+ 
+ 	/* Set up the quality report callback for Intel devices */
+ 	hdev->set_quality_report = btintel_set_quality_report;
+@@ -1983,7 +1983,7 @@ static int btintel_pcie_setup_internal(struct hci_dev *hdev)
+ 		 *
+ 		 * All TLV based devices support WBS
+ 		 */
+-		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
+ 
+ 		/* Setup MSFT Extension support */
+ 		btintel_set_msft_opcode(hdev,
+diff --git a/drivers/bluetooth/btmtksdio.c b/drivers/bluetooth/btmtksdio.c
+index c16a3518b8ffa4..4fc673640bfce8 100644
+--- a/drivers/bluetooth/btmtksdio.c
++++ b/drivers/bluetooth/btmtksdio.c
+@@ -1141,7 +1141,7 @@ static int btmtksdio_setup(struct hci_dev *hdev)
+ 		}
+ 
+ 		/* Enable WBS with mSBC codec */
+-		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
+ 
+ 		/* Enable GPIO reset mechanism */
+ 		if (bdev->reset) {
+@@ -1384,7 +1384,7 @@ static int btmtksdio_probe(struct sdio_func *func,
+ 	SET_HCIDEV_DEV(hdev, &func->dev);
+ 
+ 	hdev->manufacturer = 70;
+-	set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
++	hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP);
+ 
+ 	sdio_set_drvdata(func, bdev);
+ 
+diff --git a/drivers/bluetooth/btmtkuart.c b/drivers/bluetooth/btmtkuart.c
+index c97e260fcb0c30..51400a891f6e6d 100644
+--- a/drivers/bluetooth/btmtkuart.c
++++ b/drivers/bluetooth/btmtkuart.c
+@@ -872,7 +872,7 @@ static int btmtkuart_probe(struct serdev_device *serdev)
+ 	SET_HCIDEV_DEV(hdev, &serdev->dev);
+ 
+ 	hdev->manufacturer = 70;
+-	set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
++	hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP);
+ 
+ 	if (btmtkuart_is_standalone(bdev)) {
+ 		err = clk_prepare_enable(bdev->osc);
+diff --git a/drivers/bluetooth/btnxpuart.c b/drivers/bluetooth/btnxpuart.c
+index 604ab2bba231c5..43a212cc032ec9 100644
+--- a/drivers/bluetooth/btnxpuart.c
++++ b/drivers/bluetooth/btnxpuart.c
+@@ -1769,7 +1769,7 @@ static int nxp_serdev_probe(struct serdev_device *serdev)
+ 				      "local-bd-address",
+ 				      (u8 *)&ba, sizeof(ba));
+ 	if (bacmp(&ba, BDADDR_ANY))
+-		set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY);
+ 
+ 	if (hci_register_dev(hdev) < 0) {
+ 		dev_err(&serdev->dev, "Can't register HCI device\n");
+diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
+index edefb9dc76aa1a..7c958d6065bec1 100644
+--- a/drivers/bluetooth/btqca.c
++++ b/drivers/bluetooth/btqca.c
+@@ -739,7 +739,7 @@ static int qca_check_bdaddr(struct hci_dev *hdev, const struct qca_fw_config *co
+ 
+ 	bda = (struct hci_rp_read_bd_addr *)skb->data;
+ 	if (!bacmp(&bda->bdaddr, &config->bdaddr))
+-		set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY);
+ 
+ 	kfree_skb(skb);
+ 
+diff --git a/drivers/bluetooth/btqcomsmd.c b/drivers/bluetooth/btqcomsmd.c
+index c0eb71d6ffd3bd..d2e13fcb6babfc 100644
+--- a/drivers/bluetooth/btqcomsmd.c
++++ b/drivers/bluetooth/btqcomsmd.c
+@@ -117,7 +117,7 @@ static int btqcomsmd_setup(struct hci_dev *hdev)
+ 	/* Devices do not have persistent storage for BD address. Retrieve
+ 	 * it from the firmware node property.
+ 	 */
+-	set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
++	hci_set_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/bluetooth/btrtl.c b/drivers/bluetooth/btrtl.c
+index 7838c89e529e0c..4d182cf6e03723 100644
+--- a/drivers/bluetooth/btrtl.c
++++ b/drivers/bluetooth/btrtl.c
+@@ -1287,7 +1287,7 @@ void btrtl_set_quirks(struct hci_dev *hdev, struct btrtl_device_info *btrtl_dev)
+ 	/* Enable controller to do both LE scan and BR/EDR inquiry
+ 	 * simultaneously.
+ 	 */
+-	set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
++	hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
+ 
+ 	/* Enable central-peripheral role (able to create new connections with
+ 	 * an existing connection in slave role).
+@@ -1301,7 +1301,7 @@ void btrtl_set_quirks(struct hci_dev *hdev, struct btrtl_device_info *btrtl_dev)
+ 	case CHIP_ID_8851B:
+ 	case CHIP_ID_8922A:
+ 	case CHIP_ID_8852BT:
+-		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
+ 
+ 		/* RTL8852C needs to transmit mSBC data continuously without
+ 		 * the zero length of USB packets for the ALT 6 supported chips
+@@ -1312,7 +1312,8 @@ void btrtl_set_quirks(struct hci_dev *hdev, struct btrtl_device_info *btrtl_dev)
+ 		if (btrtl_dev->project_id == CHIP_ID_8852A ||
+ 		    btrtl_dev->project_id == CHIP_ID_8852B ||
+ 		    btrtl_dev->project_id == CHIP_ID_8852C)
+-			set_bit(HCI_QUIRK_USE_MSFT_EXT_ADDRESS_FILTER, &hdev->quirks);
++			hci_set_quirk(hdev,
++				      HCI_QUIRK_USE_MSFT_EXT_ADDRESS_FILTER);
+ 
+ 		hci_set_aosp_capable(hdev);
+ 		break;
+@@ -1331,8 +1332,7 @@ void btrtl_set_quirks(struct hci_dev *hdev, struct btrtl_device_info *btrtl_dev)
+ 		 * but it doesn't support any features from page 2 -
+ 		 * it either responds with garbage or with error status
+ 		 */
+-		set_bit(HCI_QUIRK_BROKEN_LOCAL_EXT_FEATURES_PAGE_2,
+-			&hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LOCAL_EXT_FEATURES_PAGE_2);
+ 		break;
+ 	default:
+ 		break;
+diff --git a/drivers/bluetooth/btsdio.c b/drivers/bluetooth/btsdio.c
+index a69feb08486a5a..8325655ce6aa81 100644
+--- a/drivers/bluetooth/btsdio.c
++++ b/drivers/bluetooth/btsdio.c
+@@ -327,7 +327,7 @@ static int btsdio_probe(struct sdio_func *func,
+ 	hdev->send     = btsdio_send_frame;
+ 
+ 	if (func->vendor == 0x0104 && func->device == 0x00c5)
+-		set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
+ 
+ 	err = hci_register_dev(hdev);
+ 	if (err < 0) {
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index ef9689f877691e..6f2fd043fd3fa6 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -2510,18 +2510,18 @@ static int btusb_setup_csr(struct hci_dev *hdev)
+ 		 * Probably will need to be expanded in the future;
+ 		 * without these the controller will lock up.
+ 		 */
+-		set_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks);
+-		set_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks);
+-		set_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks);
+-		set_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks);
+-		set_bit(HCI_QUIRK_BROKEN_READ_VOICE_SETTING, &hdev->quirks);
+-		set_bit(HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_STORED_LINK_KEY);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_ERR_DATA_REPORTING);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL);
++		hci_set_quirk(hdev, HCI_QUIRK_NO_SUSPEND_NOTIFIER);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_READ_VOICE_SETTING);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE);
+ 
+ 		/* Clear the reset quirk since this is not an actual
+ 		 * early Bluetooth 1.1 device from CSR.
+ 		 */
+-		clear_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
+-		clear_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
++		hci_clear_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
++		hci_clear_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
+ 
+ 		/*
+ 		 * Special workaround for these BT 4.0 chip clones, and potentially more:
+@@ -3230,6 +3230,32 @@ static const struct qca_device_info qca_devices_table[] = {
+ 	{ 0x00190200, 40, 4, 16 }, /* WCN785x 2.0 */
+ };
+ 
++static u16 qca_extract_board_id(const struct qca_version *ver)
++{
++	u16 flag = le16_to_cpu(ver->flag);
++	u16 board_id = 0;
++
++	if (((flag >> 8) & 0xff) == QCA_FLAG_MULTI_NVM) {
++		/* The board_id is split into two bytes: the high byte is the
++		 * chip ID and the low byte is the platform ID. For example,
++		 * board ID 0x0A01 means chip ID 0x0A and platform ID 0x01.
++		 * New platform IDs are added as further platforms appear.
++		 * Platform ID:
++		 * 0x00 is for Mobile
++		 * 0x01 is for X86
++		 * 0x02 is for Automotive
++		 * 0x03 is for Consumer electronics
++		 */
++		board_id = (ver->chip_id << 8) + ver->platform_id;
++	}
++
++	/* Treat 0xffff as an invalid board ID */
++	if (board_id == 0xffff)
++		board_id = 0;
++
++	return board_id;
++}
++
+ static int btusb_qca_send_vendor_req(struct usb_device *udev, u8 request,
+ 				     void *data, u16 size)
+ {
+@@ -3386,44 +3412,28 @@ static void btusb_generate_qca_nvm_name(char *fwname, size_t max_size,
+ 					const struct qca_version *ver)
+ {
+ 	u32 rom_version = le32_to_cpu(ver->rom_version);
+-	u16 flag = le16_to_cpu(ver->flag);
++	const char *variant;
++	int len;
++	u16 board_id;
+ 
+-	if (((flag >> 8) & 0xff) == QCA_FLAG_MULTI_NVM) {
+-		/* The board_id should be split into two bytes
+-		 * The 1st byte is chip ID, and the 2nd byte is platform ID
+-		 * For example, board ID 0x010A, 0x01 is platform ID. 0x0A is chip ID
+-		 * we have several platforms, and platform IDs are continuously added
+-		 * Platform ID:
+-		 * 0x00 is for Mobile
+-		 * 0x01 is for X86
+-		 * 0x02 is for Automotive
+-		 * 0x03 is for Consumer electronic
+-		 */
+-		u16 board_id = (ver->chip_id << 8) + ver->platform_id;
+-		const char *variant;
++	board_id = qca_extract_board_id(ver);
+ 
+-		switch (le32_to_cpu(ver->ram_version)) {
+-		case WCN6855_2_0_RAM_VERSION_GF:
+-		case WCN6855_2_1_RAM_VERSION_GF:
+-			variant = "_gf";
+-			break;
+-		default:
+-			variant = "";
+-			break;
+-		}
+-
+-		if (board_id == 0) {
+-			snprintf(fwname, max_size, "qca/nvm_usb_%08x%s.bin",
+-				rom_version, variant);
+-		} else {
+-			snprintf(fwname, max_size, "qca/nvm_usb_%08x%s_%04x.bin",
+-				rom_version, variant, board_id);
+-		}
+-	} else {
+-		snprintf(fwname, max_size, "qca/nvm_usb_%08x.bin",
+-			rom_version);
++	switch (le32_to_cpu(ver->ram_version)) {
++	case WCN6855_2_0_RAM_VERSION_GF:
++	case WCN6855_2_1_RAM_VERSION_GF:
++		variant = "_gf";
++		break;
++	default:
++		variant = NULL;
++		break;
+ 	}
+ 
++	len = snprintf(fwname, max_size, "qca/nvm_usb_%08x", rom_version);
++	if (variant)
++		len += snprintf(fwname + len, max_size - len, "%s", variant);
++	if (board_id)
++		len += snprintf(fwname + len, max_size - len, "_%04x", board_id);
++	len += snprintf(fwname + len, max_size - len, ".bin");
+ }
+ 
+ static int btusb_setup_qca_load_nvm(struct hci_dev *hdev,
+@@ -3532,7 +3542,7 @@ static int btusb_setup_qca(struct hci_dev *hdev)
+ 	/* Mark HCI_OP_ENHANCED_SETUP_SYNC_CONN as broken as it doesn't seem to
+ 	 * work with the likes of HSP/HFP mSBC.
+ 	 */
+-	set_bit(HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN, &hdev->quirks);
++	hci_set_quirk(hdev, HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN);
+ 
+ 	return 0;
+ }
+@@ -3943,10 +3953,10 @@ static int btusb_probe(struct usb_interface *intf,
+ 	}
+ #endif
+ 	if (id->driver_info & BTUSB_CW6622)
+-		set_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_STORED_LINK_KEY);
+ 
+ 	if (id->driver_info & BTUSB_BCM2045)
+-		set_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_STORED_LINK_KEY);
+ 
+ 	if (id->driver_info & BTUSB_BCM92035)
+ 		hdev->setup = btusb_setup_bcm92035;
+@@ -4003,8 +4013,8 @@ static int btusb_probe(struct usb_interface *intf,
+ 		hdev->reset = btmtk_reset_sync;
+ 		hdev->set_bdaddr = btmtk_set_bdaddr;
+ 		hdev->send = btusb_send_frame_mtk;
+-		set_bit(HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN, &hdev->quirks);
+-		set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN);
++		hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP);
+ 		data->recv_acl = btmtk_usb_recv_acl;
+ 		data->suspend = btmtk_usb_suspend;
+ 		data->resume = btmtk_usb_resume;
+@@ -4012,20 +4022,20 @@ static int btusb_probe(struct usb_interface *intf,
+ 	}
+ 
+ 	if (id->driver_info & BTUSB_SWAVE) {
+-		set_bit(HCI_QUIRK_FIXUP_INQUIRY_MODE, &hdev->quirks);
+-		set_bit(HCI_QUIRK_BROKEN_LOCAL_COMMANDS, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_FIXUP_INQUIRY_MODE);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LOCAL_COMMANDS);
+ 	}
+ 
+ 	if (id->driver_info & BTUSB_INTEL_BOOT) {
+ 		hdev->manufacturer = 2;
+-		set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_RAW_DEVICE);
+ 	}
+ 
+ 	if (id->driver_info & BTUSB_ATH3012) {
+ 		data->setup_on_usb = btusb_setup_qca;
+ 		hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
+-		set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+-		set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
++		hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER);
+ 	}
+ 
+ 	if (id->driver_info & BTUSB_QCA_ROME) {
+@@ -4033,7 +4043,7 @@ static int btusb_probe(struct usb_interface *intf,
+ 		hdev->shutdown = btusb_shutdown_qca;
+ 		hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
+ 		hdev->reset = btusb_qca_reset;
+-		set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
+ 		btusb_check_needs_reset_resume(intf);
+ 	}
+ 
+@@ -4047,7 +4057,7 @@ static int btusb_probe(struct usb_interface *intf,
+ 		hdev->shutdown = btusb_shutdown_qca;
+ 		hdev->set_bdaddr = btusb_set_bdaddr_wcn6855;
+ 		hdev->reset = btusb_qca_reset;
+-		set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
+ 		hci_set_msft_opcode(hdev, 0xFD70);
+ 	}
+ 
+@@ -4075,35 +4085,35 @@ static int btusb_probe(struct usb_interface *intf,
+ 
+ 	if (id->driver_info & BTUSB_ACTIONS_SEMI) {
+ 		/* Support is advertised, but not implemented */
+-		set_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks);
+-		set_bit(HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER, &hdev->quirks);
+-		set_bit(HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT, &hdev->quirks);
+-		set_bit(HCI_QUIRK_BROKEN_EXT_SCAN, &hdev->quirks);
+-		set_bit(HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE, &hdev->quirks);
+-		set_bit(HCI_QUIRK_BROKEN_EXT_CREATE_CONN, &hdev->quirks);
+-		set_bit(HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_ERR_DATA_REPORTING);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_EXT_SCAN);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_EXT_CREATE_CONN);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT);
+ 	}
+ 
+ 	if (!reset)
+-		set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
+ 
+ 	if (force_scofix || id->driver_info & BTUSB_WRONG_SCO_MTU) {
+ 		if (!disable_scofix)
+-			set_bit(HCI_QUIRK_FIXUP_BUFFER_SIZE, &hdev->quirks);
++			hci_set_quirk(hdev, HCI_QUIRK_FIXUP_BUFFER_SIZE);
+ 	}
+ 
+ 	if (id->driver_info & BTUSB_BROKEN_ISOC)
+ 		data->isoc = NULL;
+ 
+ 	if (id->driver_info & BTUSB_WIDEBAND_SPEECH)
+-		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
+ 
+ 	if (id->driver_info & BTUSB_INVALID_LE_STATES)
+-		set_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LE_STATES);
+ 
+ 	if (id->driver_info & BTUSB_DIGIANSWER) {
+ 		data->cmdreq_type = USB_TYPE_VENDOR;
+-		set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
+ 	}
+ 
+ 	if (id->driver_info & BTUSB_CSR) {
+@@ -4112,10 +4122,10 @@ static int btusb_probe(struct usb_interface *intf,
+ 
+ 		/* Old firmware would otherwise execute USB reset */
+ 		if (bcdDevice < 0x117)
+-			set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
++			hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
+ 
+ 		/* This must be set first in case we disable it for fakes */
+-		set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
+ 
+ 		/* Fake CSR devices with broken commands */
+ 		if (le16_to_cpu(udev->descriptor.idVendor)  == 0x0a12 &&
+@@ -4128,7 +4138,7 @@ static int btusb_probe(struct usb_interface *intf,
+ 
+ 		/* New sniffer firmware has crippled HCI interface */
+ 		if (le16_to_cpu(udev->descriptor.bcdDevice) > 0x997)
+-			set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
++			hci_set_quirk(hdev, HCI_QUIRK_RAW_DEVICE);
+ 	}
+ 
+ 	if (id->driver_info & BTUSB_INTEL_BOOT) {
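
The btusb.c QCA refactor above factors board-ID extraction into qca_extract_board_id() and rebuilds the NVM firmware name from chained snprintf() calls instead of three separate format strings. A worked example with assumed values (chip_id 0x0a, platform_id 0x01, a GF RAM version, and rom_version 0x00190200 from the qca_devices_table entry above):

u16 board_id = (0x0a << 8) + 0x01;	/* 0x0a01 */
char fwname[64];
int len;

len  = snprintf(fwname, sizeof(fwname), "qca/nvm_usb_%08x", 0x00190200);
len += snprintf(fwname + len, sizeof(fwname) - len, "%s", "_gf");
len += snprintf(fwname + len, sizeof(fwname) - len, "_%04x", board_id);
len += snprintf(fwname + len, sizeof(fwname) - len, ".bin");
/* fwname is now "qca/nvm_usb_00190200_gf_0a01.bin"; with variant NULL
 * and board_id 0 the optional pieces simply drop out, matching the
 * names the old three-branch code produced.
 */
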
+diff --git a/drivers/bluetooth/hci_aml.c b/drivers/bluetooth/hci_aml.c
+index dc9541e76d8199..ecdc61126c9b03 100644
+--- a/drivers/bluetooth/hci_aml.c
++++ b/drivers/bluetooth/hci_aml.c
+@@ -425,7 +425,7 @@ static int aml_check_bdaddr(struct hci_dev *hdev)
+ 
+ 	if (!bacmp(&paddr->bdaddr, AML_BDADDR_DEFAULT)) {
+ 		bt_dev_info(hdev, "amlbt using default bdaddr (%pM)", &paddr->bdaddr);
+-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
+ 	}
+ 
+ exit:
+diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c
+index 9684eb16059bb0..f96617b85d8777 100644
+--- a/drivers/bluetooth/hci_bcm.c
++++ b/drivers/bluetooth/hci_bcm.c
+@@ -643,8 +643,8 @@ static int bcm_setup(struct hci_uart *hu)
+ 	 * Allow the bootloader to set a valid address through the
+ 	 * device tree.
+ 	 */
+-	if (test_bit(HCI_QUIRK_INVALID_BDADDR, &hu->hdev->quirks))
+-		set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hu->hdev->quirks);
++	if (hci_test_quirk(hu->hdev, HCI_QUIRK_INVALID_BDADDR))
++		hci_set_quirk(hu->hdev, HCI_QUIRK_USE_BDADDR_PROPERTY);
+ 
+ 	if (!bcm_request_irq(bcm))
+ 		err = bcm_setup_sleep(hu);
+diff --git a/drivers/bluetooth/hci_bcm4377.c b/drivers/bluetooth/hci_bcm4377.c
+index 9bce53e49cfa60..8a9aa33776b030 100644
+--- a/drivers/bluetooth/hci_bcm4377.c
++++ b/drivers/bluetooth/hci_bcm4377.c
+@@ -1435,7 +1435,7 @@ static int bcm4377_check_bdaddr(struct bcm4377_data *bcm4377)
+ 
+ 	bda = (struct hci_rp_read_bd_addr *)skb->data;
+ 	if (!bcm4377_is_valid_bdaddr(bcm4377, &bda->bdaddr))
+-		set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &bcm4377->hdev->quirks);
++		hci_set_quirk(bcm4377->hdev, HCI_QUIRK_USE_BDADDR_PROPERTY);
+ 
+ 	kfree_skb(skb);
+ 	return 0;
+@@ -2389,13 +2389,13 @@ static int bcm4377_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	hdev->setup = bcm4377_hci_setup;
+ 
+ 	if (bcm4377->hw->broken_mws_transport_config)
+-		set_bit(HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG);
+ 	if (bcm4377->hw->broken_ext_scan)
+-		set_bit(HCI_QUIRK_BROKEN_EXT_SCAN, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_EXT_SCAN);
+ 	if (bcm4377->hw->broken_le_coded)
+-		set_bit(HCI_QUIRK_BROKEN_LE_CODED, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LE_CODED);
+ 	if (bcm4377->hw->broken_le_ext_adv_report_phy)
+-		set_bit(HCI_QUIRK_FIXUP_LE_EXT_ADV_REPORT_PHY, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_FIXUP_LE_EXT_ADV_REPORT_PHY);
+ 
+ 	pci_set_drvdata(pdev, bcm4377);
+ 	hci_set_drvdata(hdev, bcm4377);
+diff --git a/drivers/bluetooth/hci_intel.c b/drivers/bluetooth/hci_intel.c
+index 811f33701f8477..d22fbb7f9fc5e1 100644
+--- a/drivers/bluetooth/hci_intel.c
++++ b/drivers/bluetooth/hci_intel.c
+@@ -660,7 +660,7 @@ static int intel_setup(struct hci_uart *hu)
+ 	 */
+ 	if (!bacmp(&params.otp_bdaddr, BDADDR_ANY)) {
+ 		bt_dev_info(hdev, "No device address configured");
+-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
+ 	}
+ 
+ 	/* With this Intel bootloader only the hardware variant and device
+diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c
+index acba83156de9a6..d0adae3267b41e 100644
+--- a/drivers/bluetooth/hci_ldisc.c
++++ b/drivers/bluetooth/hci_ldisc.c
+@@ -667,13 +667,13 @@ static int hci_uart_register_dev(struct hci_uart *hu)
+ 	SET_HCIDEV_DEV(hdev, hu->tty->dev);
+ 
+ 	if (test_bit(HCI_UART_RAW_DEVICE, &hu->hdev_flags))
+-		set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_RAW_DEVICE);
+ 
+ 	if (test_bit(HCI_UART_EXT_CONFIG, &hu->hdev_flags))
+-		set_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG);
+ 
+ 	if (!test_bit(HCI_UART_RESET_ON_INIT, &hu->hdev_flags))
+-		set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
+ 
+ 	/* Only call open() for the protocol after hdev is fully initialized as
+ 	 * open() (or a timer/workqueue it starts) may attempt to reference it.
+diff --git a/drivers/bluetooth/hci_ll.c b/drivers/bluetooth/hci_ll.c
+index e19e9bd4955564..7044c86325cedc 100644
+--- a/drivers/bluetooth/hci_ll.c
++++ b/drivers/bluetooth/hci_ll.c
+@@ -649,11 +649,11 @@ static int ll_setup(struct hci_uart *hu)
+ 		/* This means that there was an error getting the BD address
+ 		 * during probe, so mark the device as having a bad address.
+ 		 */
+-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hu->hdev->quirks);
++		hci_set_quirk(hu->hdev, HCI_QUIRK_INVALID_BDADDR);
+ 	} else if (bacmp(&lldev->bdaddr, BDADDR_ANY)) {
+ 		err = ll_set_bdaddr(hu->hdev, &lldev->bdaddr);
+ 		if (err)
+-			set_bit(HCI_QUIRK_INVALID_BDADDR, &hu->hdev->quirks);
++			hci_set_quirk(hu->hdev, HCI_QUIRK_INVALID_BDADDR);
+ 	}
+ 
+ 	/* Operational speed if any */
+diff --git a/drivers/bluetooth/hci_nokia.c b/drivers/bluetooth/hci_nokia.c
+index 9fc10a16fd962d..cd7575c20f653f 100644
+--- a/drivers/bluetooth/hci_nokia.c
++++ b/drivers/bluetooth/hci_nokia.c
+@@ -439,7 +439,7 @@ static int nokia_setup(struct hci_uart *hu)
+ 
+ 	if (btdev->man_id == NOKIA_ID_BCM2048) {
+ 		hu->hdev->set_bdaddr = btbcm_set_bdaddr;
+-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hu->hdev->quirks);
++		hci_set_quirk(hu->hdev, HCI_QUIRK_INVALID_BDADDR);
+ 		dev_dbg(dev, "bcm2048 has invalid bluetooth address!");
+ 	}
+ 
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 976ec88a0f62aa..4badab1e86a358 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -1892,7 +1892,7 @@ static int qca_setup(struct hci_uart *hu)
+ 	/* Enable controller to do both LE scan and BR/EDR inquiry
+ 	 * simultaneously.
+ 	 */
+-	set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
++	hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
+ 
+ 	switch (soc_type) {
+ 	case QCA_QCA2066:
+@@ -1944,7 +1944,7 @@ static int qca_setup(struct hci_uart *hu)
+ 	case QCA_WCN7850:
+ 		qcadev = serdev_device_get_drvdata(hu->serdev);
+ 		if (qcadev->bdaddr_property_broken)
+-			set_bit(HCI_QUIRK_BDADDR_PROPERTY_BROKEN, &hdev->quirks);
++			hci_set_quirk(hdev, HCI_QUIRK_BDADDR_PROPERTY_BROKEN);
+ 
+ 		hci_set_aosp_capable(hdev);
+ 
+@@ -2487,7 +2487,7 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ 	hdev = qcadev->serdev_hu.hdev;
+ 
+ 	if (power_ctrl_enabled) {
+-		set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP);
+ 		hdev->shutdown = qca_power_off;
+ 	}
+ 
+@@ -2496,11 +2496,11 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ 		 * be queried via hci. Same with the valid le states quirk.
+ 		 */
+ 		if (data->capabilities & QCA_CAP_WIDEBAND_SPEECH)
+-			set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
+-				&hdev->quirks);
++			hci_set_quirk(hdev,
++				      HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
+ 
+ 		if (!(data->capabilities & QCA_CAP_VALID_LE_STATES))
+-			set_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks);
++			hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LE_STATES);
+ 	}
+ 
+ 	return 0;
+@@ -2550,7 +2550,7 @@ static void qca_serdev_shutdown(struct device *dev)
+ 		 * invoked and the SOC is already in the initial state, so
+ 		 * don't also need to send the VSC.
+ 		 */
+-		if (test_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks) ||
++		if (hci_test_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP) ||
+ 		    hci_dev_test_flag(hdev, HCI_SETUP))
+ 			return;
+ 
+diff --git a/drivers/bluetooth/hci_serdev.c b/drivers/bluetooth/hci_serdev.c
+index 89a22e9b3253a8..593d9cefbbf925 100644
+--- a/drivers/bluetooth/hci_serdev.c
++++ b/drivers/bluetooth/hci_serdev.c
+@@ -152,7 +152,7 @@ static int hci_uart_close(struct hci_dev *hdev)
+ 	 * BT SOC is completely powered OFF during BT OFF, holding port
+ 	 * open may drain the battery.
+ 	 */
+-	if (test_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks)) {
++	if (hci_test_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP)) {
+ 		clear_bit(HCI_UART_PROTO_READY, &hu->flags);
+ 		serdev_device_close(hu->serdev);
+ 	}
+@@ -358,13 +358,13 @@ int hci_uart_register_device_priv(struct hci_uart *hu,
+ 	SET_HCIDEV_DEV(hdev, &hu->serdev->dev);
+ 
+ 	if (test_bit(HCI_UART_NO_SUSPEND_NOTIFIER, &hu->flags))
+-		set_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_NO_SUSPEND_NOTIFIER);
+ 
+ 	if (test_bit(HCI_UART_RAW_DEVICE, &hu->hdev_flags))
+-		set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_RAW_DEVICE);
+ 
+ 	if (test_bit(HCI_UART_EXT_CONFIG, &hu->hdev_flags))
+-		set_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG);
+ 
+ 	if (test_bit(HCI_UART_INIT_PENDING, &hu->hdev_flags))
+ 		return 0;
+diff --git a/drivers/bluetooth/hci_vhci.c b/drivers/bluetooth/hci_vhci.c
+index 59f4d7bdffdcb5..f7d8c3c00655a8 100644
+--- a/drivers/bluetooth/hci_vhci.c
++++ b/drivers/bluetooth/hci_vhci.c
+@@ -415,16 +415,16 @@ static int __vhci_create_device(struct vhci_data *data, __u8 opcode)
+ 	hdev->get_codec_config_data = vhci_get_codec_config_data;
+ 	hdev->wakeup = vhci_wakeup;
+ 	hdev->setup = vhci_setup;
+-	set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
+-	set_bit(HCI_QUIRK_SYNC_FLOWCTL_SUPPORTED, &hdev->quirks);
++	hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP);
++	hci_set_quirk(hdev, HCI_QUIRK_SYNC_FLOWCTL_SUPPORTED);
+ 
+ 	/* bit 6 is for external configuration */
+ 	if (opcode & 0x40)
+-		set_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG);
+ 
+ 	/* bit 7 is for raw device */
+ 	if (opcode & 0x80)
+-		set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
++		hci_set_quirk(hdev, HCI_QUIRK_RAW_DEVICE);
+ 
+ 	if (hci_register_dev(hdev) < 0) {
+ 		BT_ERR("Can't register HCI device");
+diff --git a/drivers/bluetooth/virtio_bt.c b/drivers/bluetooth/virtio_bt.c
+index 756f292df9e87d..6f1a37e85c6a47 100644
+--- a/drivers/bluetooth/virtio_bt.c
++++ b/drivers/bluetooth/virtio_bt.c
+@@ -327,17 +327,17 @@ static int virtbt_probe(struct virtio_device *vdev)
+ 			hdev->setup = virtbt_setup_intel;
+ 			hdev->shutdown = virtbt_shutdown_generic;
+ 			hdev->set_bdaddr = virtbt_set_bdaddr_intel;
+-			set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
+-			set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+-			set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
++			hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER);
++			hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
++			hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
+ 			break;
+ 
+ 		case VIRTIO_BT_CONFIG_VENDOR_REALTEK:
+ 			hdev->manufacturer = 93;
+ 			hdev->setup = virtbt_setup_realtek;
+ 			hdev->shutdown = virtbt_shutdown_generic;
+-			set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+-			set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
++			hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
++			hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
+ 			break;
+ 		}
+ 	}
+diff --git a/drivers/comedi/comedi_fops.c b/drivers/comedi/comedi_fops.c
+index b9df9b19d4bd97..07bc81a706b4d3 100644
+--- a/drivers/comedi/comedi_fops.c
++++ b/drivers/comedi/comedi_fops.c
+@@ -1556,21 +1556,27 @@ static int do_insnlist_ioctl(struct comedi_device *dev,
+ 	}
+ 
+ 	for (i = 0; i < n_insns; ++i) {
++		unsigned int n = insns[i].n;
++
+ 		if (insns[i].insn & INSN_MASK_WRITE) {
+ 			if (copy_from_user(data, insns[i].data,
+-					   insns[i].n * sizeof(unsigned int))) {
++					   n * sizeof(unsigned int))) {
+ 				dev_dbg(dev->class_dev,
+ 					"copy_from_user failed\n");
+ 				ret = -EFAULT;
+ 				goto error;
+ 			}
++			if (n < MIN_SAMPLES) {
++				memset(&data[n], 0, (MIN_SAMPLES - n) *
++						    sizeof(unsigned int));
++			}
+ 		}
+ 		ret = parse_insn(dev, insns + i, data, file);
+ 		if (ret < 0)
+ 			goto error;
+ 		if (insns[i].insn & INSN_MASK_READ) {
+ 			if (copy_to_user(insns[i].data, data,
+-					 insns[i].n * sizeof(unsigned int))) {
++					 n * sizeof(unsigned int))) {
+ 				dev_dbg(dev->class_dev,
+ 					"copy_to_user failed\n");
+ 				ret = -EFAULT;
+@@ -1589,6 +1595,16 @@ static int do_insnlist_ioctl(struct comedi_device *dev,
+ 	return i;
+ }
+ 
++#define MAX_INSNS   MAX_SAMPLES
++static int check_insnlist_len(struct comedi_device *dev, unsigned int n_insns)
++{
++	if (n_insns > MAX_INSNS) {
++		dev_dbg(dev->class_dev, "insnlist length too large\n");
++		return -EINVAL;
++	}
++	return 0;
++}
++
+ /*
+  * COMEDI_INSN ioctl
+  * synchronous instruction
+@@ -1633,6 +1649,10 @@ static int do_insn_ioctl(struct comedi_device *dev,
+ 			ret = -EFAULT;
+ 			goto error;
+ 		}
++		if (insn->n < MIN_SAMPLES) {
++			memset(&data[insn->n], 0,
++			       (MIN_SAMPLES - insn->n) * sizeof(unsigned int));
++		}
+ 	}
+ 	ret = parse_insn(dev, insn, data, file);
+ 	if (ret < 0)
+@@ -2239,6 +2259,9 @@ static long comedi_unlocked_ioctl(struct file *file, unsigned int cmd,
+ 			rc = -EFAULT;
+ 			break;
+ 		}
++		rc = check_insnlist_len(dev, insnlist.n_insns);
++		if (rc)
++			break;
+ 		insns = kcalloc(insnlist.n_insns, sizeof(*insns), GFP_KERNEL);
+ 		if (!insns) {
+ 			rc = -ENOMEM;
+@@ -3090,6 +3113,9 @@ static int compat_insnlist(struct file *file, unsigned long arg)
+ 	if (copy_from_user(&insnlist32, compat_ptr(arg), sizeof(insnlist32)))
+ 		return -EFAULT;
+ 
++	rc = check_insnlist_len(dev, insnlist32.n_insns);
++	if (rc)
++		return rc;
+ 	insns = kcalloc(insnlist32.n_insns, sizeof(*insns), GFP_KERNEL);
+ 	if (!insns)
+ 		return -ENOMEM;
+diff --git a/drivers/comedi/drivers.c b/drivers/comedi/drivers.c
+index 376130bfba8a2c..9e4b7c840a8f5a 100644
+--- a/drivers/comedi/drivers.c
++++ b/drivers/comedi/drivers.c
+@@ -339,10 +339,10 @@ int comedi_dio_insn_config(struct comedi_device *dev,
+ 			   unsigned int *data,
+ 			   unsigned int mask)
+ {
+-	unsigned int chan_mask = 1 << CR_CHAN(insn->chanspec);
++	unsigned int chan = CR_CHAN(insn->chanspec);
+ 
+-	if (!mask)
+-		mask = chan_mask;
++	if (!mask && chan < 32)
++		mask = 1U << chan;
+ 
+ 	switch (data[0]) {
+ 	case INSN_CONFIG_DIO_INPUT:
+@@ -382,7 +382,7 @@ EXPORT_SYMBOL_GPL(comedi_dio_insn_config);
+ unsigned int comedi_dio_update_state(struct comedi_subdevice *s,
+ 				     unsigned int *data)
+ {
+-	unsigned int chanmask = (s->n_chan < 32) ? ((1 << s->n_chan) - 1)
++	unsigned int chanmask = (s->n_chan < 32) ? ((1U << s->n_chan) - 1)
+ 						 : 0xffffffff;
+ 	unsigned int mask = data[0] & chanmask;
+ 	unsigned int bits = data[1];
+@@ -615,6 +615,9 @@ static int insn_rw_emulate_bits(struct comedi_device *dev,
+ 	unsigned int _data[2];
+ 	int ret;
+ 
++	if (insn->n == 0)
++		return 0;
++
+ 	memset(_data, 0, sizeof(_data));
+ 	memset(&_insn, 0, sizeof(_insn));
+ 	_insn.insn = INSN_BITS;
+@@ -625,8 +628,8 @@ static int insn_rw_emulate_bits(struct comedi_device *dev,
+ 	if (insn->insn == INSN_WRITE) {
+ 		if (!(s->subdev_flags & SDF_WRITABLE))
+ 			return -EINVAL;
+-		_data[0] = 1 << (chan - base_chan);		    /* mask */
+-		_data[1] = data[0] ? (1 << (chan - base_chan)) : 0; /* bits */
++		_data[0] = 1U << (chan - base_chan);		     /* mask */
++		_data[1] = data[0] ? (1U << (chan - base_chan)) : 0; /* bits */
+ 	}
+ 
+ 	ret = s->insn_bits(dev, s, &_insn, _data);
+@@ -709,7 +712,7 @@ static int __comedi_device_postconfig(struct comedi_device *dev)
+ 
+ 		if (s->type == COMEDI_SUBD_DO) {
+ 			if (s->n_chan < 32)
+-				s->io_bits = (1 << s->n_chan) - 1;
++				s->io_bits = (1U << s->n_chan) - 1;
+ 			else
+ 				s->io_bits = 0xffffffff;
+ 		}
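
The drivers.c hunks above are shift-safety fixes: `1 << chan` with a signed int is undefined behavior once chan reaches 31 (the sign bit) and for any chan >= 32, so the mask computations switch to `1U << chan` and gate on `chan < 32`. A two-line illustration:

unsigned int ok  = 1U << 31;	/* well-defined: 0x80000000 */
unsigned int bad = 1  << 31;	/* UB: shifting a signed 1 into the sign bit */
/* and any shift count >= 32 is UB regardless of signedness,
 * hence the added "chan < 32" checks.
 */
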
+diff --git a/drivers/comedi/drivers/aio_iiro_16.c b/drivers/comedi/drivers/aio_iiro_16.c
+index b00fab0b89d4c4..739cc4db52ac7e 100644
+--- a/drivers/comedi/drivers/aio_iiro_16.c
++++ b/drivers/comedi/drivers/aio_iiro_16.c
+@@ -177,7 +177,8 @@ static int aio_iiro_16_attach(struct comedi_device *dev,
+ 	 * Digital input change of state interrupts are optionally supported
+ 	 * using IRQ 2-7, 10-12, 14, or 15.
+ 	 */
+-	if ((1 << it->options[1]) & 0xdcfc) {
++	if (it->options[1] > 0 && it->options[1] < 16 &&
++	    (1 << it->options[1]) & 0xdcfc) {
+ 		ret = request_irq(it->options[1], aio_iiro_16_cos, 0,
+ 				  dev->board_name, dev);
+ 		if (ret == 0)
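
The aio_iiro_16 change (and the matching das16m1, das6402 and pcl812 fixes below) range-checks the user-supplied IRQ option before using it as a shift count. The mask itself encodes the legal lines: 0xdcfc is binary 1101 1100 1111 1100, i.e. bits 2-7, 10-12, 14 and 15, matching the comment. The test could be expressed as a small helper (a sketch, not code from the patch):

/* true iff opt names an IRQ line present in valid_mask; the range
 * check also keeps the shift count defined.
 */
static bool irq_option_valid(int opt, unsigned int valid_mask)
{
	return opt > 0 && opt < 16 && ((1U << opt) & valid_mask);
}

/* e.g. irq_option_valid(it->options[1], 0xdcfc) */
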
+diff --git a/drivers/comedi/drivers/comedi_test.c b/drivers/comedi/drivers/comedi_test.c
+index da17d891f0e5d2..733e22c6852ed1 100644
+--- a/drivers/comedi/drivers/comedi_test.c
++++ b/drivers/comedi/drivers/comedi_test.c
+@@ -790,7 +790,7 @@ static void waveform_detach(struct comedi_device *dev)
+ {
+ 	struct waveform_private *devpriv = dev->private;
+ 
+-	if (devpriv) {
++	if (devpriv && dev->n_subdevices) {
+ 		timer_delete_sync(&devpriv->ai_timer);
+ 		timer_delete_sync(&devpriv->ao_timer);
+ 	}
+diff --git a/drivers/comedi/drivers/das16m1.c b/drivers/comedi/drivers/das16m1.c
+index b8ea737ad3d14c..1b638f5b5a4fb9 100644
+--- a/drivers/comedi/drivers/das16m1.c
++++ b/drivers/comedi/drivers/das16m1.c
+@@ -522,7 +522,8 @@ static int das16m1_attach(struct comedi_device *dev,
+ 	devpriv->extra_iobase = dev->iobase + DAS16M1_8255_IOBASE;
+ 
+ 	/* only irqs 2, 3, 4, 5, 6, 7, 10, 11, 12, 14, and 15 are valid */
+-	if ((1 << it->options[1]) & 0xdcfc) {
++	if (it->options[1] >= 2 && it->options[1] <= 15 &&
++	    (1 << it->options[1]) & 0xdcfc) {
+ 		ret = request_irq(it->options[1], das16m1_interrupt, 0,
+ 				  dev->board_name, dev);
+ 		if (ret == 0)
+diff --git a/drivers/comedi/drivers/das6402.c b/drivers/comedi/drivers/das6402.c
+index 68f95330de45fd..7660487e563c56 100644
+--- a/drivers/comedi/drivers/das6402.c
++++ b/drivers/comedi/drivers/das6402.c
+@@ -567,7 +567,8 @@ static int das6402_attach(struct comedi_device *dev,
+ 	das6402_reset(dev);
+ 
+ 	/* IRQs 2,3,5,6,7, 10,11,15 are valid for "enhanced" mode */
+-	if ((1 << it->options[1]) & 0x8cec) {
++	if (it->options[1] > 0 && it->options[1] < 16 &&
++	    (1 << it->options[1]) & 0x8cec) {
+ 		ret = request_irq(it->options[1], das6402_interrupt, 0,
+ 				  dev->board_name, dev);
+ 		if (ret == 0) {
+diff --git a/drivers/comedi/drivers/pcl812.c b/drivers/comedi/drivers/pcl812.c
+index 0df639c6a595e5..abca61a72cf7ea 100644
+--- a/drivers/comedi/drivers/pcl812.c
++++ b/drivers/comedi/drivers/pcl812.c
+@@ -1149,7 +1149,8 @@ static int pcl812_attach(struct comedi_device *dev, struct comedi_devconfig *it)
+ 		if (IS_ERR(dev->pacer))
+ 			return PTR_ERR(dev->pacer);
+ 
+-		if ((1 << it->options[1]) & board->irq_bits) {
++		if (it->options[1] > 0 && it->options[1] < 16 &&
++		    (1 << it->options[1]) & board->irq_bits) {
+ 			ret = request_irq(it->options[1], pcl812_interrupt, 0,
+ 					  dev->board_name, dev);
+ 			if (ret == 0)
+diff --git a/drivers/cpuidle/cpuidle-psci.c b/drivers/cpuidle/cpuidle-psci.c
+index b46a83f5ffe463..2812e3bf200def 100644
+--- a/drivers/cpuidle/cpuidle-psci.c
++++ b/drivers/cpuidle/cpuidle-psci.c
+@@ -39,7 +39,6 @@ struct psci_cpuidle_data {
+ static DEFINE_PER_CPU_READ_MOSTLY(struct psci_cpuidle_data, psci_cpuidle_data);
+ static DEFINE_PER_CPU(u32, domain_state);
+ static bool psci_cpuidle_use_syscore;
+-static bool psci_cpuidle_use_cpuhp;
+ 
+ void psci_set_domain_state(u32 state)
+ {
+@@ -108,8 +107,12 @@ static int psci_idle_cpuhp_up(unsigned int cpu)
+ {
+ 	struct device *pd_dev = __this_cpu_read(psci_cpuidle_data.dev);
+ 
+-	if (pd_dev)
+-		pm_runtime_get_sync(pd_dev);
++	if (pd_dev) {
++		if (!IS_ENABLED(CONFIG_PREEMPT_RT))
++			pm_runtime_get_sync(pd_dev);
++		else
++			dev_pm_genpd_resume(pd_dev);
++	}
+ 
+ 	return 0;
+ }
+@@ -119,7 +122,11 @@ static int psci_idle_cpuhp_down(unsigned int cpu)
+ 	struct device *pd_dev = __this_cpu_read(psci_cpuidle_data.dev);
+ 
+ 	if (pd_dev) {
+-		pm_runtime_put_sync(pd_dev);
++		if (!IS_ENABLED(CONFIG_PREEMPT_RT))
++			pm_runtime_put_sync(pd_dev);
++		else
++			dev_pm_genpd_suspend(pd_dev);
++
+ 		/* Clear domain state to start fresh at next online. */
+ 		psci_set_domain_state(0);
+ 	}
+@@ -180,9 +187,6 @@ static void psci_idle_init_cpuhp(void)
+ {
+ 	int err;
+ 
+-	if (!psci_cpuidle_use_cpuhp)
+-		return;
+-
+ 	err = cpuhp_setup_state_nocalls(CPUHP_AP_CPU_PM_STARTING,
+ 					"cpuidle/psci:online",
+ 					psci_idle_cpuhp_up,
+@@ -243,10 +247,8 @@ static int psci_dt_cpu_init_topology(struct cpuidle_driver *drv,
+ 	 * s2ram and s2idle.
+ 	 */
+ 	drv->states[state_count - 1].enter_s2idle = psci_enter_s2idle_domain_idle_state;
+-	if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
++	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+ 		drv->states[state_count - 1].enter = psci_enter_domain_idle_state;
+-		psci_cpuidle_use_cpuhp = true;
+-	}
+ 
+ 	return 0;
+ }
+@@ -323,7 +325,6 @@ static void psci_cpu_deinit_idle(int cpu)
+ 
+ 	dt_idle_detach_cpu(data->dev);
+ 	psci_cpuidle_use_syscore = false;
+-	psci_cpuidle_use_cpuhp = false;
+ }
+ 
+ static int psci_idle_init_cpu(struct device *dev, int cpu)
+diff --git a/drivers/dma/mediatek/mtk-cqdma.c b/drivers/dma/mediatek/mtk-cqdma.c
+index 47c8adfdc15504..9f0c41ca7770dc 100644
+--- a/drivers/dma/mediatek/mtk-cqdma.c
++++ b/drivers/dma/mediatek/mtk-cqdma.c
+@@ -449,9 +449,9 @@ static enum dma_status mtk_cqdma_tx_status(struct dma_chan *c,
+ 		return ret;
+ 
+ 	spin_lock_irqsave(&cvc->pc->lock, flags);
+-	spin_lock_irqsave(&cvc->vc.lock, flags);
++	spin_lock(&cvc->vc.lock);
+ 	vd = mtk_cqdma_find_active_desc(c, cookie);
+-	spin_unlock_irqrestore(&cvc->vc.lock, flags);
++	spin_unlock(&cvc->vc.lock);
+ 	spin_unlock_irqrestore(&cvc->pc->lock, flags);
+ 
+ 	if (vd) {
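
The mtk-cqdma fix above repairs a nested-locking bug: calling spin_lock_irqsave() a second time with the same `flags` variable overwrites the IRQ state saved by the first call, so the final spin_unlock_irqrestore() could restore the wrong state. Since interrupts are already disabled while the outer lock is held, the inner lock needs no save/restore at all. The corrected shape, generically (lock names illustrative):

unsigned long flags;

spin_lock_irqsave(&outer, flags);	/* saves and disables IRQs */
spin_lock(&inner);			/* IRQs already off: plain lock */
/* ... critical section ... */
spin_unlock(&inner);
spin_unlock_irqrestore(&outer, flags);	/* restores the saved state */
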
+diff --git a/drivers/dma/nbpfaxi.c b/drivers/dma/nbpfaxi.c
+index 0d6324c4e2be0b..7a2488a0d6a326 100644
+--- a/drivers/dma/nbpfaxi.c
++++ b/drivers/dma/nbpfaxi.c
+@@ -1351,7 +1351,7 @@ static int nbpf_probe(struct platform_device *pdev)
+ 	if (irqs == 1) {
+ 		eirq = irqbuf[0];
+ 
+-		for (i = 0; i <= num_channels; i++)
++		for (i = 0; i < num_channels; i++)
+ 			nbpf->chan[i].irq = irqbuf[0];
+ 	} else {
+ 		eirq = platform_get_irq_byname(pdev, "error");
+@@ -1361,16 +1361,15 @@ static int nbpf_probe(struct platform_device *pdev)
+ 		if (irqs == num_channels + 1) {
+ 			struct nbpf_channel *chan;
+ 
+-			for (i = 0, chan = nbpf->chan; i <= num_channels;
++			for (i = 0, chan = nbpf->chan; i < num_channels;
+ 			     i++, chan++) {
+ 				/* Skip the error IRQ */
+ 				if (irqbuf[i] == eirq)
+ 					i++;
++				if (i >= ARRAY_SIZE(irqbuf))
++					return -EINVAL;
+ 				chan->irq = irqbuf[i];
+ 			}
+-
+-			if (chan != nbpf->chan + num_channels)
+-				return -EINVAL;
+ 		} else {
+ 			/* 2 IRQs and more than one channel */
+ 			if (irqbuf[0] == eirq)
+@@ -1378,7 +1377,7 @@ static int nbpf_probe(struct platform_device *pdev)
+ 			else
+ 				irq = irqbuf[0];
+ 
+-			for (i = 0; i <= num_channels; i++)
++			for (i = 0; i < num_channels; i++)
+ 				nbpf->chan[i].irq = irq;
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+index 59acdbfe28d871..f37afeea26e71c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+@@ -463,6 +463,7 @@ bool amdgpu_ring_soft_recovery(struct amdgpu_ring *ring, unsigned int vmid,
+ {
+ 	unsigned long flags;
+ 	ktime_t deadline;
++	bool ret;
+ 
+ 	if (unlikely(ring->adev->debug_disable_soft_recovery))
+ 		return false;
+@@ -477,12 +478,16 @@ bool amdgpu_ring_soft_recovery(struct amdgpu_ring *ring, unsigned int vmid,
+ 		dma_fence_set_error(fence, -ENODATA);
+ 	spin_unlock_irqrestore(fence->lock, flags);
+ 
+-	atomic_inc(&ring->adev->gpu_reset_counter);
+ 	while (!dma_fence_is_signaled(fence) &&
+ 	       ktime_to_ns(ktime_sub(deadline, ktime_get())) > 0)
+ 		ring->funcs->soft_recovery(ring, vmid);
+ 
+-	return dma_fence_is_signaled(fence);
++	ret = dma_fence_is_signaled(fence);
++	/* increment the counter only if soft reset worked */
++	if (ret)
++		atomic_inc(&ring->adev->gpu_reset_counter);
++
++	return ret;
+ }
+ 
+ /*
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+index fc73be4ab0685b..85ac36966a31ac 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+@@ -4664,6 +4664,7 @@ static int gfx_v8_0_kcq_init_queue(struct amdgpu_ring *ring)
+ 			memcpy(mqd, adev->gfx.mec.mqd_backup[mqd_idx], sizeof(struct vi_mqd_allocation));
+ 		/* reset ring buffer */
+ 		ring->wptr = 0;
++		atomic64_set((atomic64_t *)ring->wptr_cpu_addr, 0);
+ 		amdgpu_ring_clear_ring(ring);
+ 	}
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+index e8bdd7f0c46079..db157b38f862ba 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+@@ -737,7 +737,16 @@ int amdgpu_dm_crtc_init(struct amdgpu_display_manager *dm,
+ 	 * support programmable degamma anywhere.
+ 	 */
+ 	is_dcn = dm->adev->dm.dc->caps.color.dpp.dcn_arch;
+-	drm_crtc_enable_color_mgmt(&acrtc->base, is_dcn ? MAX_COLOR_LUT_ENTRIES : 0,
++	/* Don't enable the DRM CRTC degamma property for DCN401, since its
++	 * pre-blending degamma LUT doesn't apply to the cursor and therefore
++	 * can't behave like the post-blending degamma LUT of other hw
++	 * versions.
++	 * TODO: revisit it once KMS plane color API is merged.
++	 */
++	drm_crtc_enable_color_mgmt(&acrtc->base,
++				   (is_dcn &&
++				    dm->adev->dm.dc->ctx->dce_version != DCN_VERSION_4_01) ?
++				     MAX_COLOR_LUT_ENTRIES : 0,
+ 				   true, MAX_COLOR_LUT_ENTRIES);
+ 
+ 	drm_mode_crtc_set_gamma_size(&acrtc->base, MAX_COLOR_LEGACY_LUT_ENTRIES);
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn401/dcn401_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn401/dcn401_clk_mgr.c
+index a3b8e3d4a429e3..4b17d2fcd56588 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn401/dcn401_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn401/dcn401_clk_mgr.c
+@@ -1565,7 +1565,7 @@ struct clk_mgr_internal *dcn401_clk_mgr_construct(
+ 	clk_mgr->base.bw_params = kzalloc(sizeof(*clk_mgr->base.bw_params), GFP_KERNEL);
+ 	if (!clk_mgr->base.bw_params) {
+ 		BREAK_TO_DEBUGGER();
+-		kfree(clk_mgr);
++		kfree(clk_mgr401);
+ 		return NULL;
+ 	}
+ 
+@@ -1576,6 +1576,7 @@ struct clk_mgr_internal *dcn401_clk_mgr_construct(
+ 	if (!clk_mgr->wm_range_table) {
+ 		BREAK_TO_DEBUGGER();
+ 		kfree(clk_mgr->base.bw_params);
++		kfree(clk_mgr401);
+ 		return NULL;
+ 	}
+ 
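
Both error paths in dcn401_clk_mgr_construct() now free clk_mgr401, the pointer the allocator returned, instead of clk_mgr, which points at the clk_mgr_internal embedded inside it. Freeing the embedded member only happens to work while it sits at offset zero and corrupts the heap the moment the layout changes. The general rule, sketched with illustrative names:

struct outer {
	struct inner base;
	/* ... more members ... */
};

struct outer *o = kzalloc(sizeof(*o), GFP_KERNEL);
struct inner *i = &o->base;

kfree(o);	/* correct: free what was allocated */
/* kfree(i);	wrong in general: i is a member, not the
 *		allocation (even if it aliases o today)
 */
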
+diff --git a/drivers/gpu/drm/mediatek/mtk_crtc.c b/drivers/gpu/drm/mediatek/mtk_crtc.c
+index 8f6fba4217ece5..bc7527542fdc6f 100644
+--- a/drivers/gpu/drm/mediatek/mtk_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_crtc.c
+@@ -719,6 +719,39 @@ int mtk_crtc_plane_check(struct drm_crtc *crtc, struct drm_plane *plane,
+ 	return 0;
+ }
+ 
++void mtk_crtc_plane_disable(struct drm_crtc *crtc, struct drm_plane *plane)
++{
++#if IS_REACHABLE(CONFIG_MTK_CMDQ)
++	struct mtk_crtc *mtk_crtc = to_mtk_crtc(crtc);
++	struct mtk_plane_state *plane_state = to_mtk_plane_state(plane->state);
++	int i;
++
++	/* no need to wait for disabling the plane by CPU */
++	if (!mtk_crtc->cmdq_client.chan)
++		return;
++
++	if (!mtk_crtc->enabled)
++		return;
++
++	/* set pending plane state to disabled */
++	for (i = 0; i < mtk_crtc->layer_nr; i++) {
++		struct drm_plane *mtk_plane = &mtk_crtc->planes[i];
++		struct mtk_plane_state *mtk_plane_state = to_mtk_plane_state(mtk_plane->state);
++
++		if (mtk_plane->index == plane->index) {
++			memcpy(mtk_plane_state, plane_state, sizeof(*plane_state));
++			break;
++		}
++	}
++	mtk_crtc_update_config(mtk_crtc, false);
++
++	/* wait for planes to be disabled by CMDQ */
++	wait_event_timeout(mtk_crtc->cb_blocking_queue,
++			   mtk_crtc->cmdq_vblank_cnt == 0,
++			   msecs_to_jiffies(500));
++#endif
++}
++
+ void mtk_crtc_async_update(struct drm_crtc *crtc, struct drm_plane *plane,
+ 			   struct drm_atomic_state *state)
+ {
+@@ -930,7 +963,8 @@ static int mtk_crtc_init_comp_planes(struct drm_device *drm_dev,
+ 				mtk_ddp_comp_supported_rotations(comp),
+ 				mtk_ddp_comp_get_blend_modes(comp),
+ 				mtk_ddp_comp_get_formats(comp),
+-				mtk_ddp_comp_get_num_formats(comp), i);
++				mtk_ddp_comp_get_num_formats(comp),
++				mtk_ddp_comp_is_afbc_supported(comp), i);
+ 		if (ret)
+ 			return ret;
+ 
+diff --git a/drivers/gpu/drm/mediatek/mtk_crtc.h b/drivers/gpu/drm/mediatek/mtk_crtc.h
+index 388e900b6f4ded..828f109b83e78f 100644
+--- a/drivers/gpu/drm/mediatek/mtk_crtc.h
++++ b/drivers/gpu/drm/mediatek/mtk_crtc.h
+@@ -21,6 +21,7 @@ int mtk_crtc_create(struct drm_device *drm_dev, const unsigned int *path,
+ 		    unsigned int num_conn_routes);
+ int mtk_crtc_plane_check(struct drm_crtc *crtc, struct drm_plane *plane,
+ 			 struct mtk_plane_state *state);
++void mtk_crtc_plane_disable(struct drm_crtc *crtc, struct drm_plane *plane);
+ void mtk_crtc_async_update(struct drm_crtc *crtc, struct drm_plane *plane,
+ 			   struct drm_atomic_state *plane_state);
+ struct device *mtk_crtc_dma_dev_get(struct drm_crtc *crtc);
+diff --git a/drivers/gpu/drm/mediatek/mtk_ddp_comp.c b/drivers/gpu/drm/mediatek/mtk_ddp_comp.c
+index edc6417639e642..ac6620e10262e3 100644
+--- a/drivers/gpu/drm/mediatek/mtk_ddp_comp.c
++++ b/drivers/gpu/drm/mediatek/mtk_ddp_comp.c
+@@ -366,6 +366,7 @@ static const struct mtk_ddp_comp_funcs ddp_ovl = {
+ 	.get_blend_modes = mtk_ovl_get_blend_modes,
+ 	.get_formats = mtk_ovl_get_formats,
+ 	.get_num_formats = mtk_ovl_get_num_formats,
++	.is_afbc_supported = mtk_ovl_is_afbc_supported,
+ };
+ 
+ static const struct mtk_ddp_comp_funcs ddp_postmask = {
+diff --git a/drivers/gpu/drm/mediatek/mtk_ddp_comp.h b/drivers/gpu/drm/mediatek/mtk_ddp_comp.h
+index 39720b27f4e9ed..7289b3dcf22f22 100644
+--- a/drivers/gpu/drm/mediatek/mtk_ddp_comp.h
++++ b/drivers/gpu/drm/mediatek/mtk_ddp_comp.h
+@@ -83,6 +83,7 @@ struct mtk_ddp_comp_funcs {
+ 	u32 (*get_blend_modes)(struct device *dev);
+ 	const u32 *(*get_formats)(struct device *dev);
+ 	size_t (*get_num_formats)(struct device *dev);
++	bool (*is_afbc_supported)(struct device *dev);
+ 	void (*connect)(struct device *dev, struct device *mmsys_dev, unsigned int next);
+ 	void (*disconnect)(struct device *dev, struct device *mmsys_dev, unsigned int next);
+ 	void (*add)(struct device *dev, struct mtk_mutex *mutex);
+@@ -294,6 +295,14 @@ size_t mtk_ddp_comp_get_num_formats(struct mtk_ddp_comp *comp)
+ 	return 0;
+ }
+ 
++static inline bool mtk_ddp_comp_is_afbc_supported(struct mtk_ddp_comp *comp)
++{
++	if (comp->funcs && comp->funcs->is_afbc_supported)
++		return comp->funcs->is_afbc_supported(comp->dev);
++
++	return false;
++}
++
+ static inline bool mtk_ddp_comp_add(struct mtk_ddp_comp *comp, struct mtk_mutex *mutex)
+ {
+ 	if (comp->funcs && comp->funcs->add) {
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_drv.h b/drivers/gpu/drm/mediatek/mtk_disp_drv.h
+index 04217a36939cdf..679d413bf10be1 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_drv.h
++++ b/drivers/gpu/drm/mediatek/mtk_disp_drv.h
+@@ -106,6 +106,7 @@ void mtk_ovl_disable_vblank(struct device *dev);
+ u32 mtk_ovl_get_blend_modes(struct device *dev);
+ const u32 *mtk_ovl_get_formats(struct device *dev);
+ size_t mtk_ovl_get_num_formats(struct device *dev);
++bool mtk_ovl_is_afbc_supported(struct device *dev);
+ 
+ void mtk_ovl_adaptor_add_comp(struct device *dev, struct mtk_mutex *mutex);
+ void mtk_ovl_adaptor_remove_comp(struct device *dev, struct mtk_mutex *mutex);
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+index d0581c4e3c999c..e0236353d4997e 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+@@ -236,6 +236,13 @@ size_t mtk_ovl_get_num_formats(struct device *dev)
+ 	return ovl->data->num_formats;
+ }
+ 
++bool mtk_ovl_is_afbc_supported(struct device *dev)
++{
++	struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
++
++	return ovl->data->supports_afbc;
++}
++
+ int mtk_ovl_clk_enable(struct device *dev)
+ {
+ 	struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
+diff --git a/drivers/gpu/drm/mediatek/mtk_plane.c b/drivers/gpu/drm/mediatek/mtk_plane.c
+index 655106bbb76d33..cbc4f37da8ba81 100644
+--- a/drivers/gpu/drm/mediatek/mtk_plane.c
++++ b/drivers/gpu/drm/mediatek/mtk_plane.c
+@@ -285,9 +285,14 @@ static void mtk_plane_atomic_disable(struct drm_plane *plane,
+ 	struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state,
+ 									   plane);
+ 	struct mtk_plane_state *mtk_plane_state = to_mtk_plane_state(new_state);
++	struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state,
++									   plane);
++
+ 	mtk_plane_state->pending.enable = false;
+ 	wmb(); /* Make sure the above parameter is set before update */
+ 	mtk_plane_state->pending.dirty = true;
++
++	mtk_crtc_plane_disable(old_state->crtc, plane);
+ }
+ 
+ static void mtk_plane_atomic_update(struct drm_plane *plane,
+@@ -321,7 +326,8 @@ static const struct drm_plane_helper_funcs mtk_plane_helper_funcs = {
+ int mtk_plane_init(struct drm_device *dev, struct drm_plane *plane,
+ 		   unsigned long possible_crtcs, enum drm_plane_type type,
+ 		   unsigned int supported_rotations, const u32 blend_modes,
+-		   const u32 *formats, size_t num_formats, unsigned int plane_idx)
++		   const u32 *formats, size_t num_formats,
++		   bool supports_afbc, unsigned int plane_idx)
+ {
+ 	int err;
+ 
+@@ -332,7 +338,9 @@ int mtk_plane_init(struct drm_device *dev, struct drm_plane *plane,
+ 
+ 	err = drm_universal_plane_init(dev, plane, possible_crtcs,
+ 				       &mtk_plane_funcs, formats,
+-				       num_formats, modifiers, type, NULL);
++				       num_formats,
++				       supports_afbc ? modifiers : NULL,
++				       type, NULL);
+ 	if (err) {
+ 		DRM_ERROR("failed to initialize plane\n");
+ 		return err;
+diff --git a/drivers/gpu/drm/mediatek/mtk_plane.h b/drivers/gpu/drm/mediatek/mtk_plane.h
+index 3b13b89989c7e4..95c5fa5295d8ac 100644
+--- a/drivers/gpu/drm/mediatek/mtk_plane.h
++++ b/drivers/gpu/drm/mediatek/mtk_plane.h
+@@ -49,5 +49,6 @@ to_mtk_plane_state(struct drm_plane_state *state)
+ int mtk_plane_init(struct drm_device *dev, struct drm_plane *plane,
+ 		   unsigned long possible_crtcs, enum drm_plane_type type,
+ 		   unsigned int supported_rotations, const u32 blend_modes,
+-		   const u32 *formats, size_t num_formats, unsigned int plane_idx);
++		   const u32 *formats, size_t num_formats,
++		   bool supports_afbc, unsigned int plane_idx);
+ #endif
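
The mediatek series above threads an is_afbc_supported() capability from the OVL hardware description up through mtk_ddp_comp and mtk_crtc into mtk_plane_init(), so the AFBC format modifiers are only advertised on planes whose OVL can actually scan them out; passing NULL as the modifier list to drm_universal_plane_init() declares that the plane supports no modifiers. The decisive call, with an assumed modifier table (the real one lives in mtk_plane.c and is not shown in this excerpt):

static const u64 modifiers[] = {
	DRM_FORMAT_MOD_LINEAR,
	DRM_FORMAT_MOD_ARM_AFBC(AFBC_FORMAT_MOD_BLOCK_SIZE_32x8 |
				AFBC_FORMAT_MOD_SPLIT |
				AFBC_FORMAT_MOD_SPARSE),
	DRM_FORMAT_MOD_INVALID
};

err = drm_universal_plane_init(dev, plane, possible_crtcs,
			       &mtk_plane_funcs, formats, num_formats,
			       supports_afbc ? modifiers : NULL,
			       type, NULL);
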
+diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
+index 5657106c2f7d0a..15e2d505550f48 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_job.c
++++ b/drivers/gpu/drm/panfrost/panfrost_job.c
+@@ -841,7 +841,6 @@ int panfrost_job_init(struct panfrost_device *pfdev)
+ 		.num_rqs = DRM_SCHED_PRIORITY_COUNT,
+ 		.credit_limit = 2,
+ 		.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS),
+-		.timeout_wq = pfdev->reset.wq,
+ 		.name = "pan_js",
+ 		.dev = pfdev->dev,
+ 	};
+@@ -879,6 +878,7 @@ int panfrost_job_init(struct panfrost_device *pfdev)
+ 	pfdev->reset.wq = alloc_ordered_workqueue("panfrost-reset", 0);
+ 	if (!pfdev->reset.wq)
+ 		return -ENOMEM;
++	args.timeout_wq = pfdev->reset.wq;
+ 
+ 	for (j = 0; j < NUM_JOB_SLOTS; j++) {
+ 		js->queue[j].fence_context = dma_fence_context_alloc(1);
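
The panfrost change above is an initialization-order fix: the designated initializer captured pfdev->reset.wq while it was still NULL (the ordered workqueue is only allocated further down), so the scheduler was handed no timeout workqueue. A C initializer snapshots the value at that point; it is not a lazy reference. Sketched:

struct drm_sched_init_args args = {
	.timeout_wq = pfdev->reset.wq,	/* bug: still NULL here */
	/* ... other fields ... */
};

pfdev->reset.wq = alloc_ordered_workqueue("panfrost-reset", 0);
if (!pfdev->reset.wq)
	return -ENOMEM;
args.timeout_wq = pfdev->reset.wq;	/* fix: assign after allocation */
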
+diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
+index 4bad8894fa12c3..f627f845596221 100644
+--- a/drivers/gpu/drm/xe/xe_gt.c
++++ b/drivers/gpu/drm/xe/xe_gt.c
+@@ -375,6 +375,8 @@ int xe_gt_init_early(struct xe_gt *gt)
+ 	if (err)
+ 		return err;
+ 
++	xe_mocs_init_early(gt);
++
+ 	return 0;
+ }
+ 
+@@ -588,17 +590,15 @@ int xe_gt_init(struct xe_gt *gt)
+ 	if (err)
+ 		return err;
+ 
+-	err = xe_gt_pagefault_init(gt);
++	err = xe_gt_sysfs_init(gt);
+ 	if (err)
+ 		return err;
+ 
+-	xe_mocs_init_early(gt);
+-
+-	err = xe_gt_sysfs_init(gt);
++	err = gt_fw_domain_init(gt);
+ 	if (err)
+ 		return err;
+ 
+-	err = gt_fw_domain_init(gt);
++	err = xe_gt_pagefault_init(gt);
+ 	if (err)
+ 		return err;
+ 
+@@ -797,6 +797,9 @@ static int gt_reset(struct xe_gt *gt)
+ 		goto err_out;
+ 	}
+ 
++	if (IS_SRIOV_PF(gt_to_xe(gt)))
++		xe_gt_sriov_pf_stop_prepare(gt);
++
+ 	xe_uc_gucrc_disable(&gt->uc);
+ 	xe_uc_stop_prepare(&gt->uc);
+ 	xe_gt_pagefault_reset(gt);
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
+index c08efca6420e71..35489fa818259d 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
+@@ -172,6 +172,25 @@ void xe_gt_sriov_pf_sanitize_hw(struct xe_gt *gt, unsigned int vfid)
+ 	pf_clear_vf_scratch_regs(gt, vfid);
+ }
+ 
++static void pf_cancel_restart(struct xe_gt *gt)
++{
++	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
++
++	if (cancel_work_sync(&gt->sriov.pf.workers.restart))
++		xe_gt_sriov_dbg_verbose(gt, "pending restart canceled!\n");
++}
++
++/**
++ * xe_gt_sriov_pf_stop_prepare() - Prepare to stop SR-IOV support.
++ * @gt: the &xe_gt
++ *
++ * This function can only be called on the PF.
++ */
++void xe_gt_sriov_pf_stop_prepare(struct xe_gt *gt)
++{
++	pf_cancel_restart(gt);
++}
++
+ static void pf_restart(struct xe_gt *gt)
+ {
+ 	struct xe_device *xe = gt_to_xe(gt);
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf.h
+index f474509411c0cd..e2b2ff8132dc58 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf.h
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf.h
+@@ -13,6 +13,7 @@ int xe_gt_sriov_pf_init_early(struct xe_gt *gt);
+ int xe_gt_sriov_pf_init(struct xe_gt *gt);
+ void xe_gt_sriov_pf_init_hw(struct xe_gt *gt);
+ void xe_gt_sriov_pf_sanitize_hw(struct xe_gt *gt, unsigned int vfid);
++void xe_gt_sriov_pf_stop_prepare(struct xe_gt *gt);
+ void xe_gt_sriov_pf_restart(struct xe_gt *gt);
+ #else
+ static inline int xe_gt_sriov_pf_init_early(struct xe_gt *gt)
+@@ -29,6 +30,10 @@ static inline void xe_gt_sriov_pf_init_hw(struct xe_gt *gt)
+ {
+ }
+ 
++static inline void xe_gt_sriov_pf_stop_prepare(struct xe_gt *gt)
++{
++}
++
+ static inline void xe_gt_sriov_pf_restart(struct xe_gt *gt)
+ {
+ }
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+index 10be109bf357ff..96289195a3d3ec 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+@@ -2356,6 +2356,21 @@ int xe_gt_sriov_pf_config_restore(struct xe_gt *gt, unsigned int vfid,
+ 	return err;
+ }
+ 
++static int pf_push_self_config(struct xe_gt *gt)
++{
++	int err;
++
++	err = pf_push_full_vf_config(gt, PFID);
++	if (err) {
++		xe_gt_sriov_err(gt, "Failed to push self configuration (%pe)\n",
++				ERR_PTR(err));
++		return err;
++	}
++
++	xe_gt_sriov_dbg_verbose(gt, "self configuration completed\n");
++	return 0;
++}
++
+ static void fini_config(void *arg)
+ {
+ 	struct xe_gt *gt = arg;
+@@ -2379,9 +2394,17 @@ static void fini_config(void *arg)
+ int xe_gt_sriov_pf_config_init(struct xe_gt *gt)
+ {
+ 	struct xe_device *xe = gt_to_xe(gt);
++	int err;
+ 
+ 	xe_gt_assert(gt, IS_SRIOV_PF(xe));
+ 
++	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
++	err = pf_push_self_config(gt);
++	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
++
++	if (err)
++		return err;
++
+ 	return devm_add_action_or_reset(xe->drm.dev, fini_config, gt);
+ }
+ 
+@@ -2399,6 +2422,10 @@ void xe_gt_sriov_pf_config_restart(struct xe_gt *gt)
+ 	unsigned int n, total_vfs = xe_sriov_pf_get_totalvfs(gt_to_xe(gt));
+ 	unsigned int fail = 0, skip = 0;
+ 
++	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
++	pf_push_self_config(gt);
++	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
++
+ 	for (n = 1; n <= total_vfs; n++) {
+ 		if (xe_gt_sriov_pf_config_is_empty(gt, n))
+ 			skip++;
+diff --git a/drivers/gpu/drm/xe/xe_ring_ops.c b/drivers/gpu/drm/xe/xe_ring_ops.c
+index bc1689db4cd716..7b50c7c1ee21da 100644
+--- a/drivers/gpu/drm/xe/xe_ring_ops.c
++++ b/drivers/gpu/drm/xe/xe_ring_ops.c
+@@ -110,13 +110,14 @@ static int emit_bb_start(u64 batch_addr, u32 ppgtt_flag, u32 *dw, int i)
+ 	return i;
+ }
+ 
+-static int emit_flush_invalidate(u32 *dw, int i)
++static int emit_flush_invalidate(u32 addr, u32 val, u32 *dw, int i)
+ {
+ 	dw[i++] = MI_FLUSH_DW | MI_INVALIDATE_TLB | MI_FLUSH_DW_OP_STOREDW |
+-		  MI_FLUSH_IMM_DW | MI_FLUSH_DW_STORE_INDEX;
+-	dw[i++] = LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR;
+-	dw[i++] = 0;
++		  MI_FLUSH_IMM_DW;
++
++	dw[i++] = addr | MI_FLUSH_DW_USE_GTT;
+ 	dw[i++] = 0;
++	dw[i++] = val;
+ 
+ 	return i;
+ }
+@@ -397,23 +398,20 @@ static void __emit_job_gen12_render_compute(struct xe_sched_job *job,
+ static void emit_migration_job_gen12(struct xe_sched_job *job,
+ 				     struct xe_lrc *lrc, u32 seqno)
+ {
++	u32 saddr = xe_lrc_start_seqno_ggtt_addr(lrc);
+ 	u32 dw[MAX_JOB_SIZE_DW], i = 0;
+ 
+ 	i = emit_copy_timestamp(lrc, dw, i);
+ 
+-	i = emit_store_imm_ggtt(xe_lrc_start_seqno_ggtt_addr(lrc),
+-				seqno, dw, i);
++	i = emit_store_imm_ggtt(saddr, seqno, dw, i);
+ 
+ 	dw[i++] = MI_ARB_ON_OFF | MI_ARB_DISABLE; /* Enabled again below */
+ 
+ 	i = emit_bb_start(job->ptrs[0].batch_addr, BIT(8), dw, i);
+ 
+-	if (!IS_SRIOV_VF(gt_to_xe(job->q->gt))) {
+-		/* XXX: Do we need this? Leaving for now. */
+-		dw[i++] = preparser_disable(true);
+-		i = emit_flush_invalidate(dw, i);
+-		dw[i++] = preparser_disable(false);
+-	}
++	dw[i++] = preparser_disable(true);
++	i = emit_flush_invalidate(saddr, seqno, dw, i);
++	dw[i++] = preparser_disable(false);
+ 
+ 	i = emit_bb_start(job->ptrs[1].batch_addr, BIT(8), dw, i);
+ 
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 4741ff6267710b..cd4215007d1a4a 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1883,9 +1883,12 @@ u8 *hid_alloc_report_buf(struct hid_report *report, gfp_t flags)
+ 	/*
+ 	 * 7 extra bytes are necessary to achieve proper functionality
+ 	 * of implement() working on 8 byte chunks
++	 * 1 extra byte for the report ID if it is null (not used) so
++	 * we can reserve that extra byte in the first position of the buffer
++	 * when sending it to .raw_request()
+ 	 */
+ 
+-	u32 len = hid_report_len(report) + 7;
++	u32 len = hid_report_len(report) + 7 + (report->id == 0);
+ 
+ 	return kzalloc(len, flags);
+ }
+@@ -1973,7 +1976,7 @@ static struct hid_report *hid_get_report(struct hid_report_enum *report_enum,
+ int __hid_request(struct hid_device *hid, struct hid_report *report,
+ 		enum hid_class_request reqtype)
+ {
+-	char *buf;
++	char *buf, *data_buf;
+ 	int ret;
+ 	u32 len;
+ 
+@@ -1981,13 +1984,19 @@ int __hid_request(struct hid_device *hid, struct hid_report *report,
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
++	data_buf = buf;
+ 	len = hid_report_len(report);
+ 
++	if (report->id == 0) {
++		/* reserve the first byte for the report ID */
++		data_buf++;
++		len++;
++	}
++
+ 	if (reqtype == HID_REQ_SET_REPORT)
+-		hid_output_report(report, buf);
++		hid_output_report(report, data_buf);
+ 
+-	ret = hid->ll_driver->raw_request(hid, report->id, buf, len,
+-					  report->type, reqtype);
++	ret = hid_hw_raw_request(hid, report->id, buf, len, report->type, reqtype);
+ 	if (ret < 0) {
+ 		dbg_hid("unable to complete request: %d\n", ret);
+ 		goto out;
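For reports without an ID, the raw_request() path still expects the (zero) report ID in the first byte, so the hunk above grows the buffer by one byte and points data_buf past it. A standalone sketch of the offset arithmetic, with hypothetical names:

    /* Sketch only: reserve a prefix byte for an implicit report ID 0. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    static uint8_t *build_request(const uint8_t *payload, size_t payload_len,
                                  uint8_t report_id, size_t *out_len)
    {
            size_t extra = (report_id == 0) ? 1 : 0;  /* one spare byte */
            uint8_t *buf = calloc(1, payload_len + extra);

            if (!buf)
                    return NULL;
            /* byte 0 stays zero (the implicit ID); data follows it */
            memcpy(buf + extra, payload, payload_len);
            *out_len = payload_len + extra;
            return buf;
    }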
+diff --git a/drivers/hwmon/corsair-cpro.c b/drivers/hwmon/corsair-cpro.c
+index e1a7f7aa7f8048..b7b911f8359c7f 100644
+--- a/drivers/hwmon/corsair-cpro.c
++++ b/drivers/hwmon/corsair-cpro.c
+@@ -89,6 +89,7 @@ struct ccp_device {
+ 	struct mutex mutex; /* whenever buffer is used, lock before send_usb_cmd */
+ 	u8 *cmd_buffer;
+ 	u8 *buffer;
++	int buffer_recv_size; /* number of received bytes in buffer */
+ 	int target[6];
+ 	DECLARE_BITMAP(temp_cnct, NUM_TEMP_SENSORS);
+ 	DECLARE_BITMAP(fan_cnct, NUM_FANS);
+@@ -146,6 +147,9 @@ static int send_usb_cmd(struct ccp_device *ccp, u8 command, u8 byte1, u8 byte2,
+ 	if (!t)
+ 		return -ETIMEDOUT;
+ 
++	if (ccp->buffer_recv_size != IN_BUFFER_SIZE)
++		return -EPROTO;
++
+ 	return ccp_get_errno(ccp);
+ }
+ 
+@@ -157,6 +161,7 @@ static int ccp_raw_event(struct hid_device *hdev, struct hid_report *report, u8
+ 	spin_lock(&ccp->wait_input_report_lock);
+ 	if (!completion_done(&ccp->wait_input_report)) {
+ 		memcpy(ccp->buffer, data, min(IN_BUFFER_SIZE, size));
++		ccp->buffer_recv_size = size;
+ 		complete_all(&ccp->wait_input_report);
+ 	}
+ 	spin_unlock(&ccp->wait_input_report_lock);
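The corsair-cpro change records how many bytes the interrupt handler actually copied and fails the command with -EPROTO rather than parsing a short reply. A sketch of the check, with the buffer size as an illustrative constant:

    /* Sketch only: reject short transfers before parsing a reply. */
    #define IN_BUFFER_SIZE 64                /* illustrative size */

    static int check_reply(const unsigned char *buf, int received)
    {
            if (received != IN_BUFFER_SIZE)
                    return -71;              /* -EPROTO */
            /* only now is buf[0..IN_BUFFER_SIZE-1] fully populated */
            return buf[0];
    }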
+diff --git a/drivers/i2c/busses/i2c-omap.c b/drivers/i2c/busses/i2c-omap.c
+index 5e46dc2cbbd75d..60e70b25c87f32 100644
+--- a/drivers/i2c/busses/i2c-omap.c
++++ b/drivers/i2c/busses/i2c-omap.c
+@@ -1472,7 +1472,9 @@ omap_i2c_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* reset ASAP, clearing any IRQs */
+-	omap_i2c_init(omap);
++	r = omap_i2c_init(omap);
++	if (r)
++		goto err_mux_state_deselect;
+ 
+ 	if (omap->rev < OMAP_I2C_OMAP1_REV_2)
+ 		r = devm_request_irq(&pdev->dev, omap->irq, omap_i2c_omap1_isr,
+@@ -1515,12 +1517,13 @@ omap_i2c_probe(struct platform_device *pdev)
+ 
+ err_unuse_clocks:
+ 	omap_i2c_write_reg(omap, OMAP_I2C_CON_REG, 0);
++err_mux_state_deselect:
+ 	if (omap->mux_state)
+ 		mux_state_deselect(omap->mux_state);
+ err_put_pm:
+-	pm_runtime_dont_use_autosuspend(omap->dev);
+ 	pm_runtime_put_sync(omap->dev);
+ err_disable_pm:
++	pm_runtime_dont_use_autosuspend(omap->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 
+ 	return r;
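The omap probe fix routes the new omap_i2c_init() failure through a label that undoes exactly the steps taken so far, and reorders the pm_runtime teardown to mirror setup. A sketch of the reverse-order unwind convention, with hypothetical helpers:

    /* Sketch only: error labels unwind in reverse order of setup. */
    extern int acquire_a(void), acquire_b(void), init_hw(void);
    extern void release_a(void), release_b(void);

    static int probe_sketch(void)
    {
            int r;

            r = acquire_a();
            if (r)
                    return r;

            r = acquire_b();
            if (r)
                    goto err_release_a;

            r = init_hw();                   /* may fail after both */
            if (r)
                    goto err_release_b;

            return 0;

    err_release_b:
            release_b();
    err_release_a:
            release_a();
            return r;
    }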
+diff --git a/drivers/i2c/busses/i2c-stm32.c b/drivers/i2c/busses/i2c-stm32.c
+index 157c64e27d0bd3..f84ec056e36dfe 100644
+--- a/drivers/i2c/busses/i2c-stm32.c
++++ b/drivers/i2c/busses/i2c-stm32.c
+@@ -102,7 +102,6 @@ int stm32_i2c_prep_dma_xfer(struct device *dev, struct stm32_i2c_dma *dma,
+ 			    void *dma_async_param)
+ {
+ 	struct dma_async_tx_descriptor *txdesc;
+-	struct device *chan_dev;
+ 	int ret;
+ 
+ 	if (rd_wr) {
+@@ -116,11 +115,10 @@ int stm32_i2c_prep_dma_xfer(struct device *dev, struct stm32_i2c_dma *dma,
+ 	}
+ 
+ 	dma->dma_len = len;
+-	chan_dev = dma->chan_using->device->dev;
+ 
+-	dma->dma_buf = dma_map_single(chan_dev, buf, dma->dma_len,
++	dma->dma_buf = dma_map_single(dev, buf, dma->dma_len,
+ 				      dma->dma_data_dir);
+-	if (dma_mapping_error(chan_dev, dma->dma_buf)) {
++	if (dma_mapping_error(dev, dma->dma_buf)) {
+ 		dev_err(dev, "DMA mapping failed\n");
+ 		return -EINVAL;
+ 	}
+@@ -150,7 +148,7 @@ int stm32_i2c_prep_dma_xfer(struct device *dev, struct stm32_i2c_dma *dma,
+ 	return 0;
+ 
+ err:
+-	dma_unmap_single(chan_dev, dma->dma_buf, dma->dma_len,
++	dma_unmap_single(dev, dma->dma_buf, dma->dma_len,
+ 			 dma->dma_data_dir);
+ 	return ret;
+ }
+diff --git a/drivers/i2c/busses/i2c-stm32f7.c b/drivers/i2c/busses/i2c-stm32f7.c
+index 973a3a8c6d4a18..1f8c6593e99aec 100644
+--- a/drivers/i2c/busses/i2c-stm32f7.c
++++ b/drivers/i2c/busses/i2c-stm32f7.c
+@@ -739,12 +739,13 @@ static void stm32f7_i2c_disable_dma_req(struct stm32f7_i2c_dev *i2c_dev)
+ 
+ static void stm32f7_i2c_dma_callback(void *arg)
+ {
+-	struct stm32f7_i2c_dev *i2c_dev = (struct stm32f7_i2c_dev *)arg;
++	struct stm32f7_i2c_dev *i2c_dev = arg;
+ 	struct stm32_i2c_dma *dma = i2c_dev->dma;
+-	struct device *dev = dma->chan_using->device->dev;
+ 
+ 	stm32f7_i2c_disable_dma_req(i2c_dev);
+-	dma_unmap_single(dev, dma->dma_buf, dma->dma_len, dma->dma_data_dir);
++	dmaengine_terminate_async(dma->chan_using);
++	dma_unmap_single(i2c_dev->dev, dma->dma_buf, dma->dma_len,
++			 dma->dma_data_dir);
+ 	complete(&dma->dma_complete);
+ }
+ 
+@@ -1510,7 +1511,6 @@ static irqreturn_t stm32f7_i2c_handle_isr_errs(struct stm32f7_i2c_dev *i2c_dev,
+ 	u16 addr = f7_msg->addr;
+ 	void __iomem *base = i2c_dev->base;
+ 	struct device *dev = i2c_dev->dev;
+-	struct stm32_i2c_dma *dma = i2c_dev->dma;
+ 
+ 	/* Bus error */
+ 	if (status & STM32F7_I2C_ISR_BERR) {
+@@ -1551,10 +1551,8 @@ static irqreturn_t stm32f7_i2c_handle_isr_errs(struct stm32f7_i2c_dev *i2c_dev,
+ 	}
+ 
+ 	/* Disable dma */
+-	if (i2c_dev->use_dma) {
+-		stm32f7_i2c_disable_dma_req(i2c_dev);
+-		dmaengine_terminate_async(dma->chan_using);
+-	}
++	if (i2c_dev->use_dma)
++		stm32f7_i2c_dma_callback(i2c_dev);
+ 
+ 	i2c_dev->master_mode = false;
+ 	complete(&i2c_dev->complete);
+@@ -1600,7 +1598,6 @@ static irqreturn_t stm32f7_i2c_isr_event_thread(int irq, void *data)
+ {
+ 	struct stm32f7_i2c_dev *i2c_dev = data;
+ 	struct stm32f7_i2c_msg *f7_msg = &i2c_dev->f7_msg;
+-	struct stm32_i2c_dma *dma = i2c_dev->dma;
+ 	void __iomem *base = i2c_dev->base;
+ 	u32 status, mask;
+ 	int ret;
+@@ -1619,10 +1616,8 @@ static irqreturn_t stm32f7_i2c_isr_event_thread(int irq, void *data)
+ 		dev_dbg(i2c_dev->dev, "<%s>: Receive NACK (addr %x)\n",
+ 			__func__, f7_msg->addr);
+ 		writel_relaxed(STM32F7_I2C_ICR_NACKCF, base + STM32F7_I2C_ICR);
+-		if (i2c_dev->use_dma) {
+-			stm32f7_i2c_disable_dma_req(i2c_dev);
+-			dmaengine_terminate_async(dma->chan_using);
+-		}
++		if (i2c_dev->use_dma)
++			stm32f7_i2c_dma_callback(i2c_dev);
+ 		f7_msg->result = -ENXIO;
+ 	}
+ 
+@@ -1640,8 +1635,7 @@ static irqreturn_t stm32f7_i2c_isr_event_thread(int irq, void *data)
+ 			ret = wait_for_completion_timeout(&i2c_dev->dma->dma_complete, HZ);
+ 			if (!ret) {
+ 				dev_dbg(i2c_dev->dev, "<%s>: Timed out\n", __func__);
+-				stm32f7_i2c_disable_dma_req(i2c_dev);
+-				dmaengine_terminate_async(dma->chan_using);
++				stm32f7_i2c_dma_callback(i2c_dev);
+ 				f7_msg->result = -ETIMEDOUT;
+ 			}
+ 		}
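The stm32f7 hunks replace three copies of the same DMA teardown (disable the request, terminate the channel, unmap, complete) with one call into the callback, so every error path stays in step. A sketch of the single-teardown-helper shape, with hypothetical helpers:

    /* Sketch only: one teardown helper shared by every error path. */
    extern void stop_dma_request(void);
    extern void terminate_dma(void);
    extern void unmap_buffer(void);
    extern void signal_completion(void);

    static void dma_teardown(void)
    {
            /* same sequence for the NACK, bus-error and timeout paths */
            stop_dma_request();
            terminate_dma();
            unmap_buffer();
            signal_completion();
    }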
+diff --git a/drivers/iio/accel/fxls8962af-core.c b/drivers/iio/accel/fxls8962af-core.c
+index ae965a8f560d3b..e68403db97e86c 100644
+--- a/drivers/iio/accel/fxls8962af-core.c
++++ b/drivers/iio/accel/fxls8962af-core.c
+@@ -877,6 +877,8 @@ static int fxls8962af_buffer_predisable(struct iio_dev *indio_dev)
+ 	if (ret)
+ 		return ret;
+ 
++	synchronize_irq(data->irq);
++
+ 	ret = __fxls8962af_fifo_set_mode(data, false);
+ 
+ 	if (data->enable_event)
+diff --git a/drivers/iio/accel/st_accel_core.c b/drivers/iio/accel/st_accel_core.c
+index 99cb661fabb2d9..a7961c610ed203 100644
+--- a/drivers/iio/accel/st_accel_core.c
++++ b/drivers/iio/accel/st_accel_core.c
+@@ -1353,6 +1353,7 @@ static int apply_acpi_orientation(struct iio_dev *indio_dev)
+ 	union acpi_object *ont;
+ 	union acpi_object *elements;
+ 	acpi_status status;
++	struct device *parent = indio_dev->dev.parent;
+ 	int ret = -EINVAL;
+ 	unsigned int val;
+ 	int i, j;
+@@ -1371,7 +1372,7 @@ static int apply_acpi_orientation(struct iio_dev *indio_dev)
+ 	};
+ 
+ 
+-	adev = ACPI_COMPANION(indio_dev->dev.parent);
++	adev = ACPI_COMPANION(parent);
+ 	if (!adev)
+ 		return -ENXIO;
+ 
+@@ -1380,8 +1381,7 @@ static int apply_acpi_orientation(struct iio_dev *indio_dev)
+ 	if (status == AE_NOT_FOUND) {
+ 		return -ENXIO;
+ 	} else if (ACPI_FAILURE(status)) {
+-		dev_warn(&indio_dev->dev, "failed to execute _ONT: %d\n",
+-			 status);
++		dev_warn(parent, "failed to execute _ONT: %d\n", status);
+ 		return status;
+ 	}
+ 
+@@ -1457,12 +1457,12 @@ static int apply_acpi_orientation(struct iio_dev *indio_dev)
+ 	}
+ 
+ 	ret = 0;
+-	dev_info(&indio_dev->dev, "computed mount matrix from ACPI\n");
++	dev_info(parent, "computed mount matrix from ACPI\n");
+ 
+ out:
+ 	kfree(buffer.pointer);
+ 	if (ret)
+-		dev_dbg(&indio_dev->dev,
++		dev_dbg(parent,
+ 			"failed to apply ACPI orientation data: %d\n", ret);
+ 
+ 	return ret;
+diff --git a/drivers/iio/adc/ad7380.c b/drivers/iio/adc/ad7380.c
+index aef85093eb16cb..fd17e28e279191 100644
+--- a/drivers/iio/adc/ad7380.c
++++ b/drivers/iio/adc/ad7380.c
+@@ -1920,8 +1920,9 @@ static int ad7380_probe(struct spi_device *spi)
+ 
+ 	if (st->chip_info->has_hardware_gain) {
+ 		device_for_each_child_node_scoped(dev, node) {
+-			unsigned int channel, gain;
++			unsigned int channel;
+ 			int gain_idx;
++			u16 gain;
+ 
+ 			ret = fwnode_property_read_u32(node, "reg", &channel);
+ 			if (ret)
+@@ -1933,7 +1934,7 @@ static int ad7380_probe(struct spi_device *spi)
+ 						     "Invalid channel number %i\n",
+ 						     channel);
+ 
+-			ret = fwnode_property_read_u32(node, "adi,gain-milli",
++			ret = fwnode_property_read_u16(node, "adi,gain-milli",
+ 						       &gain);
+ 			if (ret && ret != -EINVAL)
+ 				return dev_err_probe(dev, ret,
+diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
+index cf942c043457cc..fc745297bcb82c 100644
+--- a/drivers/iio/adc/adi-axi-adc.c
++++ b/drivers/iio/adc/adi-axi-adc.c
+@@ -445,7 +445,7 @@ static int axi_adc_raw_read(struct iio_backend *back, u32 *val)
+ static int ad7606_bus_reg_read(struct iio_backend *back, u32 reg, u32 *val)
+ {
+ 	struct adi_axi_adc_state *st = iio_backend_get_priv(back);
+-	int addr;
++	u32 addr, reg_val;
+ 
+ 	guard(mutex)(&st->lock);
+ 
+@@ -455,7 +455,9 @@ static int ad7606_bus_reg_read(struct iio_backend *back, u32 reg, u32 *val)
+ 	 */
+ 	addr = FIELD_PREP(ADI_AXI_REG_ADDRESS_MASK, reg) | ADI_AXI_REG_READ_BIT;
+ 	axi_adc_raw_write(back, addr);
+-	axi_adc_raw_read(back, val);
++	axi_adc_raw_read(back, &reg_val);
++
++	*val = FIELD_GET(ADI_AXI_REG_VALUE_MASK, reg_val);
+ 
+ 	/* Write 0x0 on the bus to get back to ADC mode */
+ 	axi_adc_raw_write(back, 0);
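The read path now masks the raw bus word with FIELD_GET() so callers see only the value field, not the address and control bits echoed back. A sketch of the pack/unpack pairing, with illustrative mask values:

    /* Sketch only: pack/unpack a bus word with explicit field masks. */
    #include <stdint.h>

    #define REG_ADDR_MASK  0x7F000000u       /* illustrative layout */
    #define REG_VALUE_MASK 0x0000FFFFu
    #define REG_READ_BIT   0x80000000u

    static uint32_t pack_read_cmd(uint32_t reg)
    {
            /* like FIELD_PREP(): shift into the address field */
            return ((reg << 24) & REG_ADDR_MASK) | REG_READ_BIT;
    }

    static uint32_t unpack_value(uint32_t word)
    {
            /* like FIELD_GET(): keep only the value field */
            return word & REG_VALUE_MASK;
    }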
+diff --git a/drivers/iio/adc/axp20x_adc.c b/drivers/iio/adc/axp20x_adc.c
+index 9fd7027623d0c2..d8a9b93853db6a 100644
+--- a/drivers/iio/adc/axp20x_adc.c
++++ b/drivers/iio/adc/axp20x_adc.c
+@@ -187,6 +187,7 @@ static struct iio_map axp717_maps[] = {
+ 		.consumer_channel = "batt_chrg_i",
+ 		.adc_channel_label = "batt_chrg_i",
+ 	},
++	{ }
+ };
+ 
+ /*
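The axp20x one-liner adds the { } sentinel that iio_map consumers rely on to find the end of the table; without it, iteration walks off the array. A sketch of the sentinel-terminated walk, with illustrative types:

    /* Sketch only: sentinel-terminated table walk. */
    struct map { const char *name; };

    static const struct map maps[] = {
            { "batt_v" },
            { "batt_chrg_i" },
            { }                              /* sentinel ends iteration */
    };

    static int count_maps(const struct map *m)
    {
            int n = 0;

            while (m[n].name)                /* stops at the zeroed entry */
                    n++;
            return n;
    }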
+diff --git a/drivers/iio/adc/max1363.c b/drivers/iio/adc/max1363.c
+index 35717ec082cedb..dc8c1e61369e24 100644
+--- a/drivers/iio/adc/max1363.c
++++ b/drivers/iio/adc/max1363.c
+@@ -511,10 +511,10 @@ static const struct iio_event_spec max1363_events[] = {
+ 	MAX1363_CHAN_U(1, _s1, 1, bits, ev_spec, num_ev_spec),		\
+ 	MAX1363_CHAN_U(2, _s2, 2, bits, ev_spec, num_ev_spec),		\
+ 	MAX1363_CHAN_U(3, _s3, 3, bits, ev_spec, num_ev_spec),		\
+-	MAX1363_CHAN_B(0, 1, d0m1, 4, bits, ev_spec, num_ev_spec),	\
+-	MAX1363_CHAN_B(2, 3, d2m3, 5, bits, ev_spec, num_ev_spec),	\
+-	MAX1363_CHAN_B(1, 0, d1m0, 6, bits, ev_spec, num_ev_spec),	\
+-	MAX1363_CHAN_B(3, 2, d3m2, 7, bits, ev_spec, num_ev_spec),	\
++	MAX1363_CHAN_B(0, 1, d0m1, 12, bits, ev_spec, num_ev_spec),	\
++	MAX1363_CHAN_B(2, 3, d2m3, 13, bits, ev_spec, num_ev_spec),	\
++	MAX1363_CHAN_B(1, 0, d1m0, 18, bits, ev_spec, num_ev_spec),	\
++	MAX1363_CHAN_B(3, 2, d3m2, 19, bits, ev_spec, num_ev_spec),	\
+ 	IIO_CHAN_SOFT_TIMESTAMP(8)					\
+ 	}
+ 
+@@ -532,23 +532,23 @@ static const struct iio_chan_spec max1363_channels[] =
+ /* Applies to max1236, max1237 */
+ static const enum max1363_modes max1236_mode_list[] = {
+ 	_s0, _s1, _s2, _s3,
+-	s0to1, s0to2, s0to3,
++	s0to1, s0to2, s2to3, s0to3,
+ 	d0m1, d2m3, d1m0, d3m2,
+ 	d0m1to2m3, d1m0to3m2,
+-	s2to3,
+ };
+ 
+ /* Applies to max1238, max1239 */
+ static const enum max1363_modes max1238_mode_list[] = {
+ 	_s0, _s1, _s2, _s3, _s4, _s5, _s6, _s7, _s8, _s9, _s10, _s11,
+ 	s0to1, s0to2, s0to3, s0to4, s0to5, s0to6,
++	s6to7, s6to8, s6to9, s6to10, s6to11,
+ 	s0to7, s0to8, s0to9, s0to10, s0to11,
+ 	d0m1, d2m3, d4m5, d6m7, d8m9, d10m11,
+ 	d1m0, d3m2, d5m4, d7m6, d9m8, d11m10,
+-	d0m1to2m3, d0m1to4m5, d0m1to6m7, d0m1to8m9, d0m1to10m11,
+-	d1m0to3m2, d1m0to5m4, d1m0to7m6, d1m0to9m8, d1m0to11m10,
+-	s6to7, s6to8, s6to9, s6to10, s6to11,
+-	d6m7to8m9, d6m7to10m11, d7m6to9m8, d7m6to11m10,
++	d0m1to2m3, d0m1to4m5, d0m1to6m7, d6m7to8m9,
++	d0m1to8m9, d6m7to10m11, d0m1to10m11, d1m0to3m2,
++	d1m0to5m4, d1m0to7m6, d7m6to9m8, d1m0to9m8,
++	d7m6to11m10, d1m0to11m10,
+ };
+ 
+ #define MAX1363_12X_CHANS(bits) {				\
+@@ -584,16 +584,15 @@ static const struct iio_chan_spec max1238_channels[] = MAX1363_12X_CHANS(12);
+ 
+ static const enum max1363_modes max11607_mode_list[] = {
+ 	_s0, _s1, _s2, _s3,
+-	s0to1, s0to2, s0to3,
+-	s2to3,
++	s0to1, s0to2, s2to3,
++	s0to3,
+ 	d0m1, d2m3, d1m0, d3m2,
+ 	d0m1to2m3, d1m0to3m2,
+ };
+ 
+ static const enum max1363_modes max11608_mode_list[] = {
+ 	_s0, _s1, _s2, _s3, _s4, _s5, _s6, _s7,
+-	s0to1, s0to2, s0to3, s0to4, s0to5, s0to6, s0to7,
+-	s6to7,
++	s0to1, s0to2, s0to3, s0to4, s0to5, s0to6, s6to7, s0to7,
+ 	d0m1, d2m3, d4m5, d6m7,
+ 	d1m0, d3m2, d5m4, d7m6,
+ 	d0m1to2m3, d0m1to4m5, d0m1to6m7,
+@@ -609,14 +608,14 @@ static const enum max1363_modes max11608_mode_list[] = {
+ 	MAX1363_CHAN_U(5, _s5, 5, bits, NULL, 0),	\
+ 	MAX1363_CHAN_U(6, _s6, 6, bits, NULL, 0),	\
+ 	MAX1363_CHAN_U(7, _s7, 7, bits, NULL, 0),	\
+-	MAX1363_CHAN_B(0, 1, d0m1, 8, bits, NULL, 0),	\
+-	MAX1363_CHAN_B(2, 3, d2m3, 9, bits, NULL, 0),	\
+-	MAX1363_CHAN_B(4, 5, d4m5, 10, bits, NULL, 0),	\
+-	MAX1363_CHAN_B(6, 7, d6m7, 11, bits, NULL, 0),	\
+-	MAX1363_CHAN_B(1, 0, d1m0, 12, bits, NULL, 0),	\
+-	MAX1363_CHAN_B(3, 2, d3m2, 13, bits, NULL, 0),	\
+-	MAX1363_CHAN_B(5, 4, d5m4, 14, bits, NULL, 0),	\
+-	MAX1363_CHAN_B(7, 6, d7m6, 15, bits, NULL, 0),	\
++	MAX1363_CHAN_B(0, 1, d0m1, 12, bits, NULL, 0),	\
++	MAX1363_CHAN_B(2, 3, d2m3, 13, bits, NULL, 0),	\
++	MAX1363_CHAN_B(4, 5, d4m5, 14, bits, NULL, 0),	\
++	MAX1363_CHAN_B(6, 7, d6m7, 15, bits, NULL, 0),	\
++	MAX1363_CHAN_B(1, 0, d1m0, 18, bits, NULL, 0),	\
++	MAX1363_CHAN_B(3, 2, d3m2, 19, bits, NULL, 0),	\
++	MAX1363_CHAN_B(5, 4, d5m4, 20, bits, NULL, 0),	\
++	MAX1363_CHAN_B(7, 6, d7m6, 21, bits, NULL, 0),	\
+ 	IIO_CHAN_SOFT_TIMESTAMP(16)			\
+ }
+ static const struct iio_chan_spec max11602_channels[] = MAX1363_8X_CHANS(8);
+diff --git a/drivers/iio/adc/stm32-adc-core.c b/drivers/iio/adc/stm32-adc-core.c
+index 0914148d1a2268..7ab8e00be4f964 100644
+--- a/drivers/iio/adc/stm32-adc-core.c
++++ b/drivers/iio/adc/stm32-adc-core.c
+@@ -429,10 +429,9 @@ static int stm32_adc_irq_probe(struct platform_device *pdev,
+ 		return -ENOMEM;
+ 	}
+ 
+-	for (i = 0; i < priv->cfg->num_irqs; i++) {
+-		irq_set_chained_handler(priv->irq[i], stm32_adc_irq_handler);
+-		irq_set_handler_data(priv->irq[i], priv);
+-	}
++	for (i = 0; i < priv->cfg->num_irqs; i++)
++		irq_set_chained_handler_and_data(priv->irq[i],
++						 stm32_adc_irq_handler, priv);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/iio/common/st_sensors/st_sensors_core.c b/drivers/iio/common/st_sensors/st_sensors_core.c
+index e4f5a7ff7e74c9..5ced0e1482e92f 100644
+--- a/drivers/iio/common/st_sensors/st_sensors_core.c
++++ b/drivers/iio/common/st_sensors/st_sensors_core.c
+@@ -154,7 +154,7 @@ static int st_sensors_set_fullscale(struct iio_dev *indio_dev, unsigned int fs)
+ 	return err;
+ 
+ st_accel_set_fullscale_error:
+-	dev_err(&indio_dev->dev, "failed to set new fullscale.\n");
++	dev_err(indio_dev->dev.parent, "failed to set new fullscale.\n");
+ 	return err;
+ }
+ 
+@@ -231,8 +231,7 @@ int st_sensors_power_enable(struct iio_dev *indio_dev)
+ 					     ARRAY_SIZE(regulator_names),
+ 					     regulator_names);
+ 	if (err)
+-		return dev_err_probe(&indio_dev->dev, err,
+-				     "unable to enable supplies\n");
++		return dev_err_probe(parent, err, "unable to enable supplies\n");
+ 
+ 	return 0;
+ }
+@@ -241,13 +240,14 @@ EXPORT_SYMBOL_NS(st_sensors_power_enable, "IIO_ST_SENSORS");
+ static int st_sensors_set_drdy_int_pin(struct iio_dev *indio_dev,
+ 					struct st_sensors_platform_data *pdata)
+ {
++	struct device *parent = indio_dev->dev.parent;
+ 	struct st_sensor_data *sdata = iio_priv(indio_dev);
+ 
+ 	/* Sensor does not support interrupts */
+ 	if (!sdata->sensor_settings->drdy_irq.int1.addr &&
+ 	    !sdata->sensor_settings->drdy_irq.int2.addr) {
+ 		if (pdata->drdy_int_pin)
+-			dev_info(&indio_dev->dev,
++			dev_info(parent,
+ 				 "DRDY on pin INT%d specified, but sensor does not support interrupts\n",
+ 				 pdata->drdy_int_pin);
+ 		return 0;
+@@ -256,29 +256,27 @@ static int st_sensors_set_drdy_int_pin(struct iio_dev *indio_dev,
+ 	switch (pdata->drdy_int_pin) {
+ 	case 1:
+ 		if (!sdata->sensor_settings->drdy_irq.int1.mask) {
+-			dev_err(&indio_dev->dev,
+-					"DRDY on INT1 not available.\n");
++			dev_err(parent, "DRDY on INT1 not available.\n");
+ 			return -EINVAL;
+ 		}
+ 		sdata->drdy_int_pin = 1;
+ 		break;
+ 	case 2:
+ 		if (!sdata->sensor_settings->drdy_irq.int2.mask) {
+-			dev_err(&indio_dev->dev,
+-					"DRDY on INT2 not available.\n");
++			dev_err(parent, "DRDY on INT2 not available.\n");
+ 			return -EINVAL;
+ 		}
+ 		sdata->drdy_int_pin = 2;
+ 		break;
+ 	default:
+-		dev_err(&indio_dev->dev, "DRDY on pdata not valid.\n");
++		dev_err(parent, "DRDY on pdata not valid.\n");
+ 		return -EINVAL;
+ 	}
+ 
+ 	if (pdata->open_drain) {
+ 		if (!sdata->sensor_settings->drdy_irq.int1.addr_od &&
+ 		    !sdata->sensor_settings->drdy_irq.int2.addr_od)
+-			dev_err(&indio_dev->dev,
++			dev_err(parent,
+ 				"open drain requested but unsupported.\n");
+ 		else
+ 			sdata->int_pin_open_drain = true;
+@@ -336,6 +334,7 @@ EXPORT_SYMBOL_NS(st_sensors_dev_name_probe, "IIO_ST_SENSORS");
+ int st_sensors_init_sensor(struct iio_dev *indio_dev,
+ 					struct st_sensors_platform_data *pdata)
+ {
++	struct device *parent = indio_dev->dev.parent;
+ 	struct st_sensor_data *sdata = iio_priv(indio_dev);
+ 	struct st_sensors_platform_data *of_pdata;
+ 	int err = 0;
+@@ -343,7 +342,7 @@ int st_sensors_init_sensor(struct iio_dev *indio_dev,
+ 	mutex_init(&sdata->odr_lock);
+ 
+ 	/* If OF/DT pdata exists, it will take precedence over anything else */
+-	of_pdata = st_sensors_dev_probe(indio_dev->dev.parent, pdata);
++	of_pdata = st_sensors_dev_probe(parent, pdata);
+ 	if (IS_ERR(of_pdata))
+ 		return PTR_ERR(of_pdata);
+ 	if (of_pdata)
+@@ -370,7 +369,7 @@ int st_sensors_init_sensor(struct iio_dev *indio_dev,
+ 		if (err < 0)
+ 			return err;
+ 	} else
+-		dev_info(&indio_dev->dev, "Full-scale not possible\n");
++		dev_info(parent, "Full-scale not possible\n");
+ 
+ 	err = st_sensors_set_odr(indio_dev, sdata->odr);
+ 	if (err < 0)
+@@ -405,7 +404,7 @@ int st_sensors_init_sensor(struct iio_dev *indio_dev,
+ 			mask = sdata->sensor_settings->drdy_irq.int2.mask_od;
+ 		}
+ 
+-		dev_info(&indio_dev->dev,
++		dev_info(parent,
+ 			 "set interrupt line to open drain mode on pin %d\n",
+ 			 sdata->drdy_int_pin);
+ 		err = st_sensors_write_data_with_mask(indio_dev, addr,
+@@ -594,21 +593,20 @@ EXPORT_SYMBOL_NS(st_sensors_get_settings_index, "IIO_ST_SENSORS");
+ int st_sensors_verify_id(struct iio_dev *indio_dev)
+ {
+ 	struct st_sensor_data *sdata = iio_priv(indio_dev);
++	struct device *parent = indio_dev->dev.parent;
+ 	int wai, err;
+ 
+ 	if (sdata->sensor_settings->wai_addr) {
+ 		err = regmap_read(sdata->regmap,
+ 				  sdata->sensor_settings->wai_addr, &wai);
+ 		if (err < 0) {
+-			dev_err(&indio_dev->dev,
+-				"failed to read Who-Am-I register.\n");
+-			return err;
++			return dev_err_probe(parent, err,
++					     "failed to read Who-Am-I register.\n");
+ 		}
+ 
+ 		if (sdata->sensor_settings->wai != wai) {
+-			dev_warn(&indio_dev->dev,
+-				"%s: WhoAmI mismatch (0x%x).\n",
+-				indio_dev->name, wai);
++			dev_warn(parent, "%s: WhoAmI mismatch (0x%x).\n",
++				 indio_dev->name, wai);
+ 		}
+ 	}
+ 
+diff --git a/drivers/iio/common/st_sensors/st_sensors_trigger.c b/drivers/iio/common/st_sensors/st_sensors_trigger.c
+index 9d4bf822a15dfc..8a8ab688d7980f 100644
+--- a/drivers/iio/common/st_sensors/st_sensors_trigger.c
++++ b/drivers/iio/common/st_sensors/st_sensors_trigger.c
+@@ -127,7 +127,7 @@ int st_sensors_allocate_trigger(struct iio_dev *indio_dev,
+ 	sdata->trig = devm_iio_trigger_alloc(parent, "%s-trigger",
+ 					     indio_dev->name);
+ 	if (sdata->trig == NULL) {
+-		dev_err(&indio_dev->dev, "failed to allocate iio trigger.\n");
++		dev_err(parent, "failed to allocate iio trigger.\n");
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -143,7 +143,7 @@ int st_sensors_allocate_trigger(struct iio_dev *indio_dev,
+ 	case IRQF_TRIGGER_FALLING:
+ 	case IRQF_TRIGGER_LOW:
+ 		if (!sdata->sensor_settings->drdy_irq.addr_ihl) {
+-			dev_err(&indio_dev->dev,
++			dev_err(parent,
+ 				"falling/low specified for IRQ but hardware supports only rising/high: will request rising/high\n");
+ 			if (irq_trig == IRQF_TRIGGER_FALLING)
+ 				irq_trig = IRQF_TRIGGER_RISING;
+@@ -156,21 +156,19 @@ int st_sensors_allocate_trigger(struct iio_dev *indio_dev,
+ 				sdata->sensor_settings->drdy_irq.mask_ihl, 1);
+ 			if (err < 0)
+ 				return err;
+-			dev_info(&indio_dev->dev,
++			dev_info(parent,
+ 				 "interrupts on the falling edge or active low level\n");
+ 		}
+ 		break;
+ 	case IRQF_TRIGGER_RISING:
+-		dev_info(&indio_dev->dev,
+-			 "interrupts on the rising edge\n");
++		dev_info(parent, "interrupts on the rising edge\n");
+ 		break;
+ 	case IRQF_TRIGGER_HIGH:
+-		dev_info(&indio_dev->dev,
+-			 "interrupts active high level\n");
++		dev_info(parent, "interrupts active high level\n");
+ 		break;
+ 	default:
+ 		/* This is the most preferred mode, if possible */
+-		dev_err(&indio_dev->dev,
++		dev_err(parent,
+ 			"unsupported IRQ trigger specified (%lx), enforce rising edge\n", irq_trig);
+ 		irq_trig = IRQF_TRIGGER_RISING;
+ 	}
+@@ -179,7 +177,7 @@ int st_sensors_allocate_trigger(struct iio_dev *indio_dev,
+ 	if (irq_trig == IRQF_TRIGGER_FALLING ||
+ 	    irq_trig == IRQF_TRIGGER_RISING) {
+ 		if (!sdata->sensor_settings->drdy_irq.stat_drdy.addr) {
+-			dev_err(&indio_dev->dev,
++			dev_err(parent,
+ 				"edge IRQ not supported w/o stat register.\n");
+ 			return -EOPNOTSUPP;
+ 		}
+@@ -214,13 +212,13 @@ int st_sensors_allocate_trigger(struct iio_dev *indio_dev,
+ 					sdata->trig->name,
+ 					sdata->trig);
+ 	if (err) {
+-		dev_err(&indio_dev->dev, "failed to request trigger IRQ.\n");
++		dev_err(parent, "failed to request trigger IRQ.\n");
+ 		return err;
+ 	}
+ 
+ 	err = devm_iio_trigger_register(parent, sdata->trig);
+ 	if (err < 0) {
+-		dev_err(&indio_dev->dev, "failed to register iio trigger.\n");
++		dev_err(parent, "failed to register iio trigger.\n");
+ 		return err;
+ 	}
+ 	indio_dev->trig = iio_trigger_get(sdata->trig);
+diff --git a/drivers/iio/industrialio-backend.c b/drivers/iio/industrialio-backend.c
+index a43c8d1bb3d0f4..31fe793e345ea7 100644
+--- a/drivers/iio/industrialio-backend.c
++++ b/drivers/iio/industrialio-backend.c
+@@ -155,11 +155,14 @@ static ssize_t iio_backend_debugfs_write_reg(struct file *file,
+ 	ssize_t rc;
+ 	int ret;
+ 
++	if (count >= sizeof(buf))
++		return -ENOSPC;
++
+ 	rc = simple_write_to_buffer(buf, sizeof(buf) - 1, ppos, userbuf, count);
+ 	if (rc < 0)
+ 		return rc;
+ 
+-	buf[count] = '\0';
++	buf[rc] = '\0';
+ 
+ 	ret = sscanf(buf, "%i %i", &back->cached_reg_addr, &val);
+ 
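Two fixes in one hunk: writes that cannot fit (including the terminator) are rejected up front, and the terminator lands at rc, the number of bytes simple_write_to_buffer() actually copied, rather than the caller's count. A standalone sketch of the bounded copy-and-terminate:

    /* Sketch only: bounded copy and NUL-termination of a command. */
    #include <stddef.h>
    #include <string.h>

    static int parse_cmd(char *dst, size_t dst_sz,
                         const char *src, size_t count)
    {
            if (count >= dst_sz)
                    return -28;              /* -ENOSPC: no room for '\0' */

            memcpy(dst, src, count);
            dst[count] = '\0';               /* terminate at bytes copied */
            return 0;
    }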
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 8c0af1d26a30df..c963aac4d6e728 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -169,12 +169,12 @@ static const struct xpad_device {
+ 	{ 0x046d, 0xca88, "Logitech Compact Controller for Xbox", 0, XTYPE_XBOX },
+ 	{ 0x046d, 0xca8a, "Logitech Precision Vibration Feedback Wheel", 0, XTYPE_XBOX },
+ 	{ 0x046d, 0xcaa3, "Logitech DriveFx Racing Wheel", 0, XTYPE_XBOX360 },
++	{ 0x0502, 0x1305, "Acer NGR200", 0, XTYPE_XBOX360 },
+ 	{ 0x056e, 0x2004, "Elecom JC-U3613M", 0, XTYPE_XBOX360 },
+ 	{ 0x05fd, 0x1007, "Mad Catz Controller (unverified)", 0, XTYPE_XBOX },
+ 	{ 0x05fd, 0x107a, "InterAct 'PowerPad Pro' X-Box pad (Germany)", 0, XTYPE_XBOX },
+ 	{ 0x05fe, 0x3030, "Chic Controller", 0, XTYPE_XBOX },
+ 	{ 0x05fe, 0x3031, "Chic Controller", 0, XTYPE_XBOX },
+-	{ 0x0502, 0x1305, "Acer NGR200", 0, XTYPE_XBOX },
+ 	{ 0x062a, 0x0020, "Logic3 Xbox GamePad", 0, XTYPE_XBOX },
+ 	{ 0x062a, 0x0033, "Competition Pro Steering Wheel", 0, XTYPE_XBOX },
+ 	{ 0x06a3, 0x0200, "Saitek Racing Wheel", 0, XTYPE_XBOX },
+diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
+index f0b5a6931161a0..8b3699204f6f9c 100644
+--- a/drivers/md/dm-bufio.c
++++ b/drivers/md/dm-bufio.c
+@@ -2750,7 +2750,11 @@ static unsigned long __evict_many(struct dm_bufio_client *c,
+ 		__make_buffer_clean(b);
+ 		__free_buffer_wake(b);
+ 
+-		cond_resched();
++		if (need_resched()) {
++			dm_bufio_unlock(c);
++			cond_resched();
++			dm_bufio_lock(c);
++		}
+ 	}
+ 
+ 	return count;
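cond_resched() cannot help waiters blocked on the bufio lock itself, so the hunk drops and retakes the lock around the reschedule, and only when need_resched() reports one is pending. A userspace sketch of the same shape using pthreads:

    /* Sketch only: drop a lock around a voluntary yield in a long loop. */
    #include <pthread.h>
    #include <sched.h>
    #include <stdbool.h>

    extern bool work_remaining(void);
    extern void evict_one(void);             /* must hold the lock */

    void evict_many(pthread_mutex_t *lock)
    {
            pthread_mutex_lock(lock);
            while (work_remaining()) {
                    evict_one();
                    /* yield without the lock so waiters can progress;
                     * the kernel version does this only if need_resched()
                     * to keep the common path cheap
                     */
                    pthread_mutex_unlock(lock);
                    sched_yield();
                    pthread_mutex_lock(lock);
            }
            pthread_mutex_unlock(lock);
    }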
+diff --git a/drivers/memstick/core/memstick.c b/drivers/memstick/core/memstick.c
+index 043b9ec756ff27..7f3f47db4c98a5 100644
+--- a/drivers/memstick/core/memstick.c
++++ b/drivers/memstick/core/memstick.c
+@@ -324,7 +324,7 @@ EXPORT_SYMBOL(memstick_init_req);
+ static int h_memstick_read_dev_id(struct memstick_dev *card,
+ 				  struct memstick_request **mrq)
+ {
+-	struct ms_id_register id_reg;
++	struct ms_id_register id_reg = {};
+ 
+ 	if (!(*mrq)) {
+ 		memstick_init_req(&card->current_mrq, MS_TPC_READ_REG, &id_reg,
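The = {} zero-fills the on-stack register block, so whatever the READ_REG transfer leaves untouched is zero rather than stack garbage. A sketch with an illustrative layout:

    /* Sketch only: zero-init an on-stack struct handed to another layer. */
    #include <stdint.h>
    #include <stddef.h>

    struct id_register {                     /* illustrative fields */
            uint8_t type;
            uint8_t category;
            uint8_t class_code;
            uint8_t reserved[5];
    };

    void start_probe(void (*submit)(const void *, size_t))
    {
            struct id_register id = {0};     /* every member starts at 0 */
            submit(&id, sizeof(id));         /* no stack contents leak */
    }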
+diff --git a/drivers/mmc/host/bcm2835.c b/drivers/mmc/host/bcm2835.c
+index e5f151d092cd31..521a1cce34e845 100644
+--- a/drivers/mmc/host/bcm2835.c
++++ b/drivers/mmc/host/bcm2835.c
+@@ -503,7 +503,8 @@ void bcm2835_prepare_dma(struct bcm2835_host *host, struct mmc_data *data)
+ 				       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ 
+ 	if (!desc) {
+-		dma_unmap_sg(dma_chan->device->dev, data->sg, sg_len, dir_data);
++		dma_unmap_sg(dma_chan->device->dev, data->sg, data->sg_len,
++			     dir_data);
+ 		return;
+ 	}
+ 
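The unmap now passes data->sg_len, the entry count originally given to dma_map_sg(), instead of the (possibly smaller) mapped count it returned; the DMA API requires the original nents on unmap. A kernel-style fragment of the rule, not compilable standalone:

    /* Sketch only: unmap with the nents passed to map, not its return. */
    int mapped = dma_map_sg(dev, sgl, nents, DMA_FROM_DEVICE);
    if (!mapped)
            return -ENOMEM;
    /* ... use sgl entries 0..mapped-1 ... */
    dma_unmap_sg(dev, sgl, nents, DMA_FROM_DEVICE); /* original nents */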
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index 13a84b9309e064..e3877a1c72a90f 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -913,7 +913,8 @@ static bool glk_broken_cqhci(struct sdhci_pci_slot *slot)
+ {
+ 	return slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_GLK_EMMC &&
+ 	       (dmi_match(DMI_BIOS_VENDOR, "LENOVO") ||
+-		dmi_match(DMI_SYS_VENDOR, "IRBIS"));
++		dmi_match(DMI_SYS_VENDOR, "IRBIS") ||
++		dmi_match(DMI_SYS_VENDOR, "Positivo Tecnologia SA"));
+ }
+ 
+ static bool jsl_broken_hs400es(struct sdhci_pci_slot *slot)
+diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c
+index 73385ff4c0f30b..9e94998e8df7d2 100644
+--- a/drivers/mmc/host/sdhci_am654.c
++++ b/drivers/mmc/host/sdhci_am654.c
+@@ -613,7 +613,8 @@ static const struct sdhci_ops sdhci_am654_ops = {
+ static const struct sdhci_pltfm_data sdhci_am654_pdata = {
+ 	.ops = &sdhci_am654_ops,
+ 	.quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
+-	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
++	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
++		   SDHCI_QUIRK2_DISABLE_HW_TIMEOUT,
+ };
+ 
+ static const struct sdhci_am654_driver_data sdhci_am654_sr1_drvdata = {
+@@ -643,7 +644,8 @@ static const struct sdhci_ops sdhci_j721e_8bit_ops = {
+ static const struct sdhci_pltfm_data sdhci_j721e_8bit_pdata = {
+ 	.ops = &sdhci_j721e_8bit_ops,
+ 	.quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
+-	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
++	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
++		   SDHCI_QUIRK2_DISABLE_HW_TIMEOUT,
+ };
+ 
+ static const struct sdhci_am654_driver_data sdhci_j721e_8bit_drvdata = {
+@@ -667,7 +669,8 @@ static const struct sdhci_ops sdhci_j721e_4bit_ops = {
+ static const struct sdhci_pltfm_data sdhci_j721e_4bit_pdata = {
+ 	.ops = &sdhci_j721e_4bit_ops,
+ 	.quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
+-	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
++	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
++		   SDHCI_QUIRK2_DISABLE_HW_TIMEOUT,
+ };
+ 
+ static const struct sdhci_am654_driver_data sdhci_j721e_4bit_drvdata = {
+diff --git a/drivers/net/can/m_can/tcan4x5x-core.c b/drivers/net/can/m_can/tcan4x5x-core.c
+index 8edaa339d590b6..39b0b5277b11f5 100644
+--- a/drivers/net/can/m_can/tcan4x5x-core.c
++++ b/drivers/net/can/m_can/tcan4x5x-core.c
+@@ -343,21 +343,19 @@ static void tcan4x5x_get_dt_data(struct m_can_classdev *cdev)
+ 		of_property_read_bool(cdev->dev->of_node, "ti,nwkrq-voltage-vio");
+ }
+ 
+-static int tcan4x5x_get_gpios(struct m_can_classdev *cdev,
+-			      const struct tcan4x5x_version_info *version_info)
++static int tcan4x5x_get_gpios(struct m_can_classdev *cdev)
+ {
+ 	struct tcan4x5x_priv *tcan4x5x = cdev_to_priv(cdev);
+ 	int ret;
+ 
+-	if (version_info->has_wake_pin) {
+-		tcan4x5x->device_wake_gpio = devm_gpiod_get(cdev->dev, "device-wake",
+-							    GPIOD_OUT_HIGH);
+-		if (IS_ERR(tcan4x5x->device_wake_gpio)) {
+-			if (PTR_ERR(tcan4x5x->device_wake_gpio) == -EPROBE_DEFER)
+-				return -EPROBE_DEFER;
++	tcan4x5x->device_wake_gpio = devm_gpiod_get_optional(cdev->dev,
++							     "device-wake",
++							     GPIOD_OUT_HIGH);
++	if (IS_ERR(tcan4x5x->device_wake_gpio)) {
++		if (PTR_ERR(tcan4x5x->device_wake_gpio) == -EPROBE_DEFER)
++			return -EPROBE_DEFER;
+ 
+-			tcan4x5x_disable_wake(cdev);
+-		}
++		tcan4x5x->device_wake_gpio = NULL;
+ 	}
+ 
+ 	tcan4x5x->reset_gpio = devm_gpiod_get_optional(cdev->dev, "reset",
+@@ -369,14 +367,31 @@ static int tcan4x5x_get_gpios(struct m_can_classdev *cdev,
+ 	if (ret)
+ 		return ret;
+ 
+-	if (version_info->has_state_pin) {
+-		tcan4x5x->device_state_gpio = devm_gpiod_get_optional(cdev->dev,
+-								      "device-state",
+-								      GPIOD_IN);
+-		if (IS_ERR(tcan4x5x->device_state_gpio)) {
+-			tcan4x5x->device_state_gpio = NULL;
+-			tcan4x5x_disable_state(cdev);
+-		}
++	tcan4x5x->device_state_gpio = devm_gpiod_get_optional(cdev->dev,
++							      "device-state",
++							      GPIOD_IN);
++	if (IS_ERR(tcan4x5x->device_state_gpio))
++		tcan4x5x->device_state_gpio = NULL;
++
++	return 0;
++}
++
++static int tcan4x5x_check_gpios(struct m_can_classdev *cdev,
++				const struct tcan4x5x_version_info *version_info)
++{
++	struct tcan4x5x_priv *tcan4x5x = cdev_to_priv(cdev);
++	int ret;
++
++	if (version_info->has_wake_pin && !tcan4x5x->device_wake_gpio) {
++		ret = tcan4x5x_disable_wake(cdev);
++		if (ret)
++			return ret;
++	}
++
++	if (version_info->has_state_pin && !tcan4x5x->device_state_gpio) {
++		ret = tcan4x5x_disable_state(cdev);
++		if (ret)
++			return ret;
+ 	}
+ 
+ 	return 0;
+@@ -468,15 +483,21 @@ static int tcan4x5x_can_probe(struct spi_device *spi)
+ 		goto out_m_can_class_free_dev;
+ 	}
+ 
++	ret = tcan4x5x_get_gpios(mcan_class);
++	if (ret) {
++		dev_err(&spi->dev, "Getting gpios failed %pe\n", ERR_PTR(ret));
++		goto out_power;
++	}
++
+ 	version_info = tcan4x5x_find_version(priv);
+ 	if (IS_ERR(version_info)) {
+ 		ret = PTR_ERR(version_info);
+ 		goto out_power;
+ 	}
+ 
+-	ret = tcan4x5x_get_gpios(mcan_class, version_info);
++	ret = tcan4x5x_check_gpios(mcan_class, version_info);
+ 	if (ret) {
+-		dev_err(&spi->dev, "Getting gpios failed %pe\n", ERR_PTR(ret));
++		dev_err(&spi->dev, "Checking gpios failed %pe\n", ERR_PTR(ret));
+ 		goto out_power;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/airoha/airoha_npu.c b/drivers/net/ethernet/airoha/airoha_npu.c
+index ead0625e781f57..760367c2c033ba 100644
+--- a/drivers/net/ethernet/airoha/airoha_npu.c
++++ b/drivers/net/ethernet/airoha/airoha_npu.c
+@@ -344,12 +344,13 @@ struct airoha_npu *airoha_npu_get(struct device *dev)
+ 		return ERR_PTR(-ENODEV);
+ 
+ 	pdev = of_find_device_by_node(np);
+-	of_node_put(np);
+ 
+ 	if (!pdev) {
+ 		dev_err(dev, "cannot find device node %s\n", np->name);
++		of_node_put(np);
+ 		return ERR_PTR(-ENODEV);
+ 	}
++	of_node_put(np);
+ 
+ 	if (!try_module_get(THIS_MODULE)) {
+ 		dev_err(dev, "failed to get the device driver module\n");
+diff --git a/drivers/net/ethernet/intel/ice/ice_debugfs.c b/drivers/net/ethernet/intel/ice/ice_debugfs.c
+index 9fc0fd95a13d8f..cb71eca6a85bf6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_debugfs.c
++++ b/drivers/net/ethernet/intel/ice/ice_debugfs.c
+@@ -606,7 +606,7 @@ void ice_debugfs_fwlog_init(struct ice_pf *pf)
+ 
+ 	pf->ice_debugfs_pf_fwlog = debugfs_create_dir("fwlog",
+ 						      pf->ice_debugfs_pf);
+-	if (IS_ERR(pf->ice_debugfs_pf))
++	if (IS_ERR(pf->ice_debugfs_pf_fwlog))
+ 		goto err_create_module_files;
+ 
+ 	fw_modules_dir = debugfs_create_dir("modules",
+diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c
+index 2410aee59fb2d5..d132eb4775513c 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lag.c
++++ b/drivers/net/ethernet/intel/ice/ice_lag.c
+@@ -2226,7 +2226,8 @@ bool ice_lag_is_switchdev_running(struct ice_pf *pf)
+ 	struct ice_lag *lag = pf->lag;
+ 	struct net_device *tmp_nd;
+ 
+-	if (!ice_is_feature_supported(pf, ICE_F_SRIOV_LAG) || !lag)
++	if (!ice_is_feature_supported(pf, ICE_F_SRIOV_LAG) ||
++	    !lag || !lag->upper_netdev)
+ 		return false;
+ 
+ 	rcu_read_lock();
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index 5fd70b4d55beb4..e37cf4f754c480 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -1154,8 +1154,9 @@ static void mlx5e_lro_update_tcp_hdr(struct mlx5_cqe64 *cqe, struct tcphdr *tcp)
+ 	}
+ }
+ 
+-static void mlx5e_lro_update_hdr(struct sk_buff *skb, struct mlx5_cqe64 *cqe,
+-				 u32 cqe_bcnt)
++static unsigned int mlx5e_lro_update_hdr(struct sk_buff *skb,
++					 struct mlx5_cqe64 *cqe,
++					 u32 cqe_bcnt)
+ {
+ 	struct ethhdr	*eth = (struct ethhdr *)(skb->data);
+ 	struct tcphdr	*tcp;
+@@ -1205,6 +1206,8 @@ static void mlx5e_lro_update_hdr(struct sk_buff *skb, struct mlx5_cqe64 *cqe,
+ 		tcp->check = tcp_v6_check(payload_len, &ipv6->saddr,
+ 					  &ipv6->daddr, check);
+ 	}
++
++	return (unsigned int)((unsigned char *)tcp + tcp->doff * 4 - skb->data);
+ }
+ 
+ static void *mlx5e_shampo_get_packet_hd(struct mlx5e_rq *rq, u16 header_index)
+@@ -1561,8 +1564,9 @@ static inline void mlx5e_build_rx_skb(struct mlx5_cqe64 *cqe,
+ 		mlx5e_macsec_offload_handle_rx_skb(netdev, skb, cqe);
+ 
+ 	if (lro_num_seg > 1) {
+-		mlx5e_lro_update_hdr(skb, cqe, cqe_bcnt);
+-		skb_shinfo(skb)->gso_size = DIV_ROUND_UP(cqe_bcnt, lro_num_seg);
++		unsigned int hdrlen = mlx5e_lro_update_hdr(skb, cqe, cqe_bcnt);
++
++		skb_shinfo(skb)->gso_size = DIV_ROUND_UP(cqe_bcnt - hdrlen, lro_num_seg);
+ 		/* Subtract one since we already counted this as one
+ 		 * "regular" packet in mlx5e_complete_rx_cqe()
+ 		 */
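gso_size must describe the payload carried per coalesced segment, so the header bytes (now returned by mlx5e_lro_update_hdr()) are subtracted before dividing. A worked sketch of the arithmetic:

    /* Sketch only: per-segment size for a coalesced (LRO) buffer. */
    #include <stdint.h>

    static inline uint32_t div_round_up(uint32_t n, uint32_t d)
    {
            return (n + d - 1) / d;
    }

    uint32_t gso_size(uint32_t total_bytes, uint32_t hdr_len,
                      uint32_t num_segs)
    {
            /* headers are carried once, payload num_segs times */
            return div_round_up(total_bytes - hdr_len, num_segs);
    }

For example, three 1448-byte segments behind 66 bytes of headers give total_bytes = 4410; (4410 - 66) / 3 yields 1448 exactly, where dividing the full 4410 would have reported 1470.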
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 41e8660c819c05..9c1504d29d34c3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -2257,6 +2257,7 @@ static const struct pci_device_id mlx5_core_pci_table[] = {
+ 	{ PCI_VDEVICE(MELLANOX, 0x1021) },			/* ConnectX-7 */
+ 	{ PCI_VDEVICE(MELLANOX, 0x1023) },			/* ConnectX-8 */
+ 	{ PCI_VDEVICE(MELLANOX, 0x1025) },			/* ConnectX-9 */
++	{ PCI_VDEVICE(MELLANOX, 0x1027) },			/* ConnectX-10 */
+ 	{ PCI_VDEVICE(MELLANOX, 0xa2d2) },			/* BlueField integrated ConnectX-5 network controller */
+ 	{ PCI_VDEVICE(MELLANOX, 0xa2d3), MLX5_PCI_DEV_IS_VF},	/* BlueField integrated ConnectX-5 network controller VF */
+ 	{ PCI_VDEVICE(MELLANOX, 0xa2d6) },			/* BlueField-2 integrated ConnectX-6 Dx network controller */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+index c8bb9265bbb48a..6b9787cf4a042e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+@@ -433,6 +433,12 @@ static int intel_crosststamp(ktime_t *device,
+ 		return -ETIMEDOUT;
+ 	}
+ 
++	*system = (struct system_counterval_t) {
++		.cycles = 0,
++		.cs_id = CSID_X86_ART,
++		.use_nsecs = false,
++	};
++
+ 	num_snapshot = (readl(ioaddr + GMAC_TIMESTAMP_STATUS) &
+ 			GMAC_TIMESTAMP_ATSNS_MASK) >>
+ 			GMAC_TIMESTAMP_ATSNS_SHIFT;
+@@ -448,7 +454,7 @@ static int intel_crosststamp(ktime_t *device,
+ 	}
+ 
+ 	system->cycles *= intel_priv->crossts_adj;
+-	system->cs_id = CSID_X86_ART;
++
+ 	priv->plat->flags &= ~STMMAC_FLAG_INT_SNAPSHOT_EN;
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_hw.c b/drivers/net/ethernet/wangxun/libwx/wx_hw.c
+index 490d34233d38c5..8a03eec3d9c930 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_hw.c
++++ b/drivers/net/ethernet/wangxun/libwx/wx_hw.c
+@@ -1699,7 +1699,6 @@ static void wx_configure_rx_ring(struct wx *wx,
+ 				 struct wx_ring *ring)
+ {
+ 	u16 reg_idx = ring->reg_idx;
+-	union wx_rx_desc *rx_desc;
+ 	u64 rdba = ring->dma;
+ 	u32 rxdctl;
+ 
+@@ -1729,9 +1728,9 @@ static void wx_configure_rx_ring(struct wx *wx,
+ 	memset(ring->rx_buffer_info, 0,
+ 	       sizeof(struct wx_rx_buffer) * ring->count);
+ 
+-	/* initialize Rx descriptor 0 */
+-	rx_desc = WX_RX_DESC(ring, 0);
+-	rx_desc->wb.upper.length = 0;
++	/* reset ntu and ntc to place SW in sync with hardware */
++	ring->next_to_clean = 0;
++	ring->next_to_use = 0;
+ 
+ 	/* enable receive descriptor ring */
+ 	wr32m(wx, WX_PX_RR_CFG(reg_idx),
+@@ -2524,6 +2523,8 @@ void wx_update_stats(struct wx *wx)
+ 		hwstats->fdirmiss += rd32(wx, WX_RDB_FDIR_MISS);
+ 	}
+ 
++	/* qmprc is not cleared on read, manual reset it */
++	hwstats->qmprc = 0;
+ 	for (i = 0; i < wx->mac.max_rx_queues; i++)
+ 		hwstats->qmprc += rd32(wx, WX_PX_MPRC(i));
+ }
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_lib.c b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
+index 4eac40c4e8514e..f133fbcc92aaca 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_lib.c
++++ b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
+@@ -173,10 +173,6 @@ static void wx_dma_sync_frag(struct wx_ring *rx_ring,
+ 				      skb_frag_off(frag),
+ 				      skb_frag_size(frag),
+ 				      DMA_FROM_DEVICE);
+-
+-	/* If the page was released, just unmap it. */
+-	if (unlikely(WX_CB(skb)->page_released))
+-		page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false);
+ }
+ 
+ static struct wx_rx_buffer *wx_get_rx_buffer(struct wx_ring *rx_ring,
+@@ -226,10 +222,6 @@ static void wx_put_rx_buffer(struct wx_ring *rx_ring,
+ 			     struct sk_buff *skb,
+ 			     int rx_buffer_pgcnt)
+ {
+-	if (!IS_ERR(skb) && WX_CB(skb)->dma == rx_buffer->dma)
+-		/* the page has been released from the ring */
+-		WX_CB(skb)->page_released = true;
+-
+ 	/* clear contents of rx_buffer */
+ 	rx_buffer->page = NULL;
+ 	rx_buffer->skb = NULL;
+@@ -314,7 +306,7 @@ static bool wx_alloc_mapped_page(struct wx_ring *rx_ring,
+ 		return false;
+ 	dma = page_pool_get_dma_addr(page);
+ 
+-	bi->page_dma = dma;
++	bi->dma = dma;
+ 	bi->page = page;
+ 	bi->page_offset = 0;
+ 
+@@ -351,7 +343,7 @@ void wx_alloc_rx_buffers(struct wx_ring *rx_ring, u16 cleaned_count)
+ 						 DMA_FROM_DEVICE);
+ 
+ 		rx_desc->read.pkt_addr =
+-			cpu_to_le64(bi->page_dma + bi->page_offset);
++			cpu_to_le64(bi->dma + bi->page_offset);
+ 
+ 		rx_desc++;
+ 		bi++;
+@@ -364,6 +356,8 @@ void wx_alloc_rx_buffers(struct wx_ring *rx_ring, u16 cleaned_count)
+ 
+ 		/* clear the status bits for the next_to_use descriptor */
+ 		rx_desc->wb.upper.status_error = 0;
++		/* clear the length for the next_to_use descriptor */
++		rx_desc->wb.upper.length = 0;
+ 
+ 		cleaned_count--;
+ 	} while (cleaned_count);
+@@ -2288,9 +2282,6 @@ static void wx_clean_rx_ring(struct wx_ring *rx_ring)
+ 		if (rx_buffer->skb) {
+ 			struct sk_buff *skb = rx_buffer->skb;
+ 
+-			if (WX_CB(skb)->page_released)
+-				page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false);
+-
+ 			dev_kfree_skb(skb);
+ 		}
+ 
+@@ -2314,6 +2305,9 @@ static void wx_clean_rx_ring(struct wx_ring *rx_ring)
+ 		}
+ 	}
+ 
++	/* Zero out the descriptor ring */
++	memset(rx_ring->desc, 0, rx_ring->size);
++
+ 	rx_ring->next_to_alloc = 0;
+ 	rx_ring->next_to_clean = 0;
+ 	rx_ring->next_to_use = 0;
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_type.h b/drivers/net/ethernet/wangxun/libwx/wx_type.h
+index 3a9c226567f801..3554b870ed592a 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_type.h
++++ b/drivers/net/ethernet/wangxun/libwx/wx_type.h
+@@ -859,7 +859,6 @@ enum wx_reset_type {
+ struct wx_cb {
+ 	dma_addr_t dma;
+ 	u16     append_cnt;      /* number of skb's appended */
+-	bool    page_released;
+ 	bool    dma_released;
+ };
+ 
+@@ -948,7 +947,6 @@ struct wx_tx_buffer {
+ struct wx_rx_buffer {
+ 	struct sk_buff *skb;
+ 	dma_addr_t dma;
+-	dma_addr_t page_dma;
+ 	struct page *page;
+ 	unsigned int page_offset;
+ };
+diff --git a/drivers/net/ethernet/xilinx/xilinx_emaclite.c b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+index ecf47107146dc7..4719d40a63ba38 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c
++++ b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+@@ -286,7 +286,7 @@ static void xemaclite_aligned_read(u32 *src_ptr, u8 *dest_ptr,
+ 
+ 		/* Read the remaining data */
+ 		for (; length > 0; length--)
+-			*to_u8_ptr = *from_u8_ptr;
++			*to_u8_ptr++ = *from_u8_ptr++;
+ 	}
+ }
+ 
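The one-character emaclite fix above: the tail-copy loop iterated the length but never advanced either pointer, so every remaining byte was read from and written to the same address. The corrected shape, standalone:

    /* Sketch only: the tail-copy loop with its pointer increments. */
    #include <stddef.h>

    void tail_copy(unsigned char *dst, const unsigned char *src, size_t len)
    {
            /* the broken form was "*dst = *src;": both pointers stood
             * still, so only one byte was ever transferred
             */
            for (; len > 0; len--)
                    *dst++ = *src++;         /* advance both each pass */
    }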
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 31242921c8285a..8c73f710510202 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2317,8 +2317,11 @@ static int netvsc_prepare_bonding(struct net_device *vf_netdev)
+ 	if (!ndev)
+ 		return NOTIFY_DONE;
+ 
+-	/* set slave flag before open to prevent IPv6 addrconf */
++	/* Set slave flag and no addrconf flag before open
++	 * to prevent IPv6 addrconf.
++	 */
+ 	vf_netdev->flags |= IFF_SLAVE;
++	vf_netdev->priv_flags |= IFF_NO_ADDRCONF;
+ 	return NOTIFY_DONE;
+ }
+ 
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 7d5e76a3db0e94..2f5bb4d0911d24 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -3391,7 +3391,8 @@ static int phy_probe(struct device *dev)
+ 	/* Get the LEDs from the device tree, and instantiate standard
+ 	 * LEDs for them.
+ 	 */
+-	if (IS_ENABLED(CONFIG_PHYLIB_LEDS))
++	if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev) &&
++	    !phy_driver_is_genphy_10g(phydev))
+ 		err = of_phy_leds(phydev);
+ 
+ out:
+@@ -3408,7 +3409,8 @@ static int phy_remove(struct device *dev)
+ 
+ 	cancel_delayed_work_sync(&phydev->state_queue);
+ 
+-	if (IS_ENABLED(CONFIG_PHYLIB_LEDS))
++	if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev) &&
++	    !phy_driver_is_genphy_10g(phydev))
+ 		phy_leds_unregister(phydev);
+ 
+ 	phydev->state = PHY_DOWN;
+diff --git a/drivers/net/usb/sierra_net.c b/drivers/net/usb/sierra_net.c
+index dec6e82eb0e033..fc9e560e197d41 100644
+--- a/drivers/net/usb/sierra_net.c
++++ b/drivers/net/usb/sierra_net.c
+@@ -689,6 +689,10 @@ static int sierra_net_bind(struct usbnet *dev, struct usb_interface *intf)
+ 			status);
+ 		return -ENODEV;
+ 	}
++	if (!dev->status) {
++		dev_err(&dev->udev->dev, "No status endpoint found");
++		return -ENODEV;
++	}
+ 	/* Initialize sierra private data */
+ 	priv = kzalloc(sizeof *priv, GFP_KERNEL);
+ 	if (!priv)
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index d5be73a9647081..c69c2419480194 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -7074,7 +7074,7 @@ static int virtnet_probe(struct virtio_device *vdev)
+ 	   otherwise get link status from config. */
+ 	netif_carrier_off(dev);
+ 	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
+-		virtnet_config_changed_work(&vi->config_work);
++		virtio_config_changed(vi->vdev);
+ 	} else {
+ 		vi->status = VIRTIO_NET_S_LINK_UP;
+ 		virtnet_update_settings(vi);
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h b/drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h
+index 5cdc09d465d4fe..e90f3187e55c49 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
+ /*
+- * Copyright (C) 2012-2014, 2018-2024 Intel Corporation
++ * Copyright (C) 2012-2014, 2018-2025 Intel Corporation
+  * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
+  * Copyright (C) 2016-2017 Intel Deutschland GmbH
+  */
+@@ -754,7 +754,7 @@ struct iwl_lari_config_change_cmd_v10 {
+  *	according to the BIOS definitions.
+  *	For LARI cmd version 11 - bits 0:4 are supported.
+  *	For LARI cmd version 12 - bits 0:6 are supported and bits 7:31 are
+- *	reserved. No need to mask out the reserved bits.
++ *	reserved.
+  * @force_disable_channels_bitmap: Bitmap of disabled bands/channels.
+  *	Each bit represents a set of channels in a specific band that should be
+  *	disabled
+@@ -787,6 +787,7 @@ struct iwl_lari_config_change_cmd {
+ /* Activate UNII-1 (5.2GHz) for World Wide */
+ #define ACTIVATE_5G2_IN_WW_MASK			BIT(4)
+ #define CHAN_STATE_ACTIVE_BITMAP_CMD_V11	0x1F
++#define CHAN_STATE_ACTIVE_BITMAP_CMD_V12	0x7F
+ 
+ /**
+  * struct iwl_pnvm_init_complete_ntfy - PNVM initialization complete
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/regulatory.c b/drivers/net/wireless/intel/iwlwifi/fw/regulatory.c
+index 6adcfa6e214a0a..9947ab7c2f4b34 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/regulatory.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/regulatory.c
+@@ -603,6 +603,7 @@ int iwl_fill_lari_config(struct iwl_fw_runtime *fwrt,
+ 
+ 	ret = iwl_bios_get_dsm(fwrt, DSM_FUNC_ACTIVATE_CHANNEL, &value);
+ 	if (!ret) {
++		value &= CHAN_STATE_ACTIVE_BITMAP_CMD_V12;
+ 		if (cmd_ver < 8)
+ 			value &= ~ACTIVATE_5G2_IN_WW_MASK;
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/regulatory.c b/drivers/net/wireless/intel/iwlwifi/mld/regulatory.c
+index a75af8c1e8ab04..2116b717b9267f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/regulatory.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/regulatory.c
+@@ -251,8 +251,10 @@ void iwl_mld_configure_lari(struct iwl_mld *mld)
+ 			cpu_to_le32(value &= DSM_UNII4_ALLOW_BITMAP);
+ 
+ 	ret = iwl_bios_get_dsm(fwrt, DSM_FUNC_ACTIVATE_CHANNEL, &value);
+-	if (!ret)
++	if (!ret) {
++		value &= CHAN_STATE_ACTIVE_BITMAP_CMD_V12;
+ 		cmd.chan_state_active_bitmap = cpu_to_le32(value);
++	}
+ 
+ 	ret = iwl_bios_get_dsm(fwrt, DSM_FUNC_ENABLE_6E, &value);
+ 	if (!ret)
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index ae584c97f52842..88fec86b8baae4 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -377,12 +377,12 @@ static void nvme_log_err_passthru(struct request *req)
+ 		nr->status & NVME_SC_MASK,	/* Status Code */
+ 		nr->status & NVME_STATUS_MORE ? "MORE " : "",
+ 		nr->status & NVME_STATUS_DNR  ? "DNR "  : "",
+-		nr->cmd->common.cdw10,
+-		nr->cmd->common.cdw11,
+-		nr->cmd->common.cdw12,
+-		nr->cmd->common.cdw13,
+-		nr->cmd->common.cdw14,
+-		nr->cmd->common.cdw15);
++		le32_to_cpu(nr->cmd->common.cdw10),
++		le32_to_cpu(nr->cmd->common.cdw11),
++		le32_to_cpu(nr->cmd->common.cdw12),
++		le32_to_cpu(nr->cmd->common.cdw13),
++		le32_to_cpu(nr->cmd->common.cdw14),
++		le32_to_cpu(nr->cmd->common.cdw15));
+ }
+ 
+ enum nvme_disposition {
+@@ -759,6 +759,10 @@ blk_status_t nvme_fail_nonready_command(struct nvme_ctrl *ctrl,
+ 	    !test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags) &&
+ 	    !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
+ 		return BLK_STS_RESOURCE;
++
++	if (!(rq->rq_flags & RQF_DONTPREP))
++		nvme_clear_nvme_request(rq);
++
+ 	return nvme_host_path_error(rq);
+ }
+ EXPORT_SYMBOL_GPL(nvme_fail_nonready_command);
+@@ -3354,15 +3358,6 @@ static int nvme_init_identify(struct nvme_ctrl *ctrl)
+ 		if (ret)
+ 			goto out_free;
+ 	}
+-
+-	if (le16_to_cpu(id->awupf) != ctrl->subsys->awupf) {
+-		dev_err_ratelimited(ctrl->device,
+-			"inconsistent AWUPF, controller not added (%u/%u).\n",
+-			le16_to_cpu(id->awupf), ctrl->subsys->awupf);
+-		ret = -EINVAL;
+-		goto out_free;
+-	}
+-
+ 	memcpy(ctrl->subsys->firmware_rev, id->fr,
+ 	       sizeof(ctrl->subsys->firmware_rev));
+ 
+@@ -3889,7 +3884,7 @@ static void nvme_ns_add_to_ctrl_list(struct nvme_ns *ns)
+ 			return;
+ 		}
+ 	}
+-	list_add(&ns->list, &ns->ctrl->namespaces);
++	list_add_rcu(&ns->list, &ns->ctrl->namespaces);
+ }
+ 
+ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
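Two independent nvme-core fixes sit in this file: the cdw10..cdw15 fields are little-endian on the wire and must pass through le32_to_cpu() before formatting, and nonready commands clear their nvme_request before the host-path error is returned. A sketch of the endianness rule only:

    /* Sketch only: convert little-endian wire fields before printing. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t le32_to_host(const uint8_t b[4])
    {
            /* byte-wise assembly is endian-safe on any host */
            return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
                   ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
    }

    void log_cdw10(const uint8_t wire[4])
    {
            /* printing the raw word would come out byte-swapped on
             * big-endian hosts
             */
            printf("cdw10=0x%08x\n", le32_to_host(wire));
    }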
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 12a5cb8641ca30..de70ff9102538a 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -1967,10 +1967,10 @@ static void nvmet_tcp_alloc_queue(struct nvmet_tcp_port *port,
+ 		struct sock *sk = queue->sock->sk;
+ 
+ 		/* Restore the default callbacks before starting upcall */
+-		read_lock_bh(&sk->sk_callback_lock);
++		write_lock_bh(&sk->sk_callback_lock);
+ 		sk->sk_user_data = NULL;
+ 		sk->sk_data_ready = port->data_ready;
+-		read_unlock_bh(&sk->sk_callback_lock);
++		write_unlock_bh(&sk->sk_callback_lock);
+ 		if (!nvmet_tcp_try_peek_pdu(queue)) {
+ 			if (!nvmet_tcp_tls_handshake(queue))
+ 				return;
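Restoring sk_user_data and sk_data_ready mutates the socket, so the exclusive write side of sk_callback_lock is required; the read side is only for paths that look the callbacks up. A pthread sketch of the reader/writer split:

    /* Sketch only: writers of a shared callback take the write lock. */
    #include <pthread.h>
    #include <stddef.h>

    static pthread_rwlock_t cb_lock = PTHREAD_RWLOCK_INITIALIZER;
    static void (*data_ready)(void);

    void restore_callback(void (*original)(void))
    {
            pthread_rwlock_wrlock(&cb_lock); /* mutation: write side */
            data_ready = original;
            pthread_rwlock_unlock(&cb_lock);
    }

    void invoke_callback(void)
    {
            pthread_rwlock_rdlock(&cb_lock); /* lookup only: read side */
            if (data_ready)
                    data_ready();
            pthread_rwlock_unlock(&cb_lock);
    }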
+diff --git a/drivers/nvmem/imx-ocotp-ele.c b/drivers/nvmem/imx-ocotp-ele.c
+index ca6dd71d8a2e29..7807ec0e2d18dc 100644
+--- a/drivers/nvmem/imx-ocotp-ele.c
++++ b/drivers/nvmem/imx-ocotp-ele.c
+@@ -12,6 +12,7 @@
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
++#include <linux/if_ether.h>	/* ETH_ALEN */
+ 
+ enum fuse_type {
+ 	FUSE_FSB = BIT(0),
+@@ -118,9 +119,11 @@ static int imx_ocotp_cell_pp(void *context, const char *id, int index,
+ 	int i;
+ 
+ 	/* Deal with some post processing of nvmem cell data */
+-	if (id && !strcmp(id, "mac-address"))
++	if (id && !strcmp(id, "mac-address")) {
++		bytes = min(bytes, ETH_ALEN);
+ 		for (i = 0; i < bytes / 2; i++)
+ 			swap(buf[i], buf[bytes - i - 1]);
++	}
+ 
+ 	return 0;
+ }
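The mac-address post-processing reverses the cell bytes in place, so both OCOTP drivers now clamp the swap to ETH_ALEN (6) in case a larger cell is handed in. A standalone sketch of the clamped reversal:

    /* Sketch only: clamped in-place byte reversal of a MAC address. */
    #include <stddef.h>

    #define ETH_ALEN 6

    void reverse_mac(unsigned char *buf, size_t bytes)
    {
            size_t i;

            if (bytes > ETH_ALEN)            /* never swap past the MAC */
                    bytes = ETH_ALEN;
            for (i = 0; i < bytes / 2; i++) {
                    unsigned char tmp = buf[i];

                    buf[i] = buf[bytes - i - 1];
                    buf[bytes - i - 1] = tmp;
            }
    }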
+diff --git a/drivers/nvmem/imx-ocotp.c b/drivers/nvmem/imx-ocotp.c
+index 79dd4fda03295a..7bf7656d4f9631 100644
+--- a/drivers/nvmem/imx-ocotp.c
++++ b/drivers/nvmem/imx-ocotp.c
+@@ -23,6 +23,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+ #include <linux/delay.h>
++#include <linux/if_ether.h>	/* ETH_ALEN */
+ 
+ #define IMX_OCOTP_OFFSET_B0W0		0x400 /* Offset from base address of the
+ 					       * OTP Bank0 Word0
+@@ -227,9 +228,11 @@ static int imx_ocotp_cell_pp(void *context, const char *id, int index,
+ 	int i;
+ 
+ 	/* Deal with some post processing of nvmem cell data */
+-	if (id && !strcmp(id, "mac-address"))
++	if (id && !strcmp(id, "mac-address")) {
++		bytes = min(bytes, ETH_ALEN);
+ 		for (i = 0; i < bytes / 2; i++)
+ 			swap(buf[i], buf[bytes - i - 1]);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/nvmem/layouts/u-boot-env.c b/drivers/nvmem/layouts/u-boot-env.c
+index 436426d4e8f910..8571aac56295a6 100644
+--- a/drivers/nvmem/layouts/u-boot-env.c
++++ b/drivers/nvmem/layouts/u-boot-env.c
+@@ -92,7 +92,7 @@ int u_boot_env_parse(struct device *dev, struct nvmem_device *nvmem,
+ 	size_t crc32_data_offset;
+ 	size_t crc32_data_len;
+ 	size_t crc32_offset;
+-	__le32 *crc32_addr;
++	uint32_t *crc32_addr;
+ 	size_t data_offset;
+ 	size_t data_len;
+ 	size_t dev_size;
+@@ -143,8 +143,8 @@ int u_boot_env_parse(struct device *dev, struct nvmem_device *nvmem,
+ 		goto err_kfree;
+ 	}
+ 
+-	crc32_addr = (__le32 *)(buf + crc32_offset);
+-	crc32 = le32_to_cpu(*crc32_addr);
++	crc32_addr = (uint32_t *)(buf + crc32_offset);
++	crc32 = *crc32_addr;
+ 	crc32_data_len = dev_size - crc32_data_offset;
+ 	data_len = dev_size - data_offset;
+ 
+diff --git a/drivers/phy/phy-core.c b/drivers/phy/phy-core.c
+index 8e2daea81666bf..04a5a34e7a950a 100644
+--- a/drivers/phy/phy-core.c
++++ b/drivers/phy/phy-core.c
+@@ -994,7 +994,8 @@ struct phy *phy_create(struct device *dev, struct device_node *node,
+ 	}
+ 
+ 	device_initialize(&phy->dev);
+-	mutex_init(&phy->mutex);
++	lockdep_register_key(&phy->lockdep_key);
++	mutex_init_with_key(&phy->mutex, &phy->lockdep_key);
+ 
+ 	phy->dev.class = &phy_class;
+ 	phy->dev.parent = dev;
+@@ -1259,6 +1260,8 @@ static void phy_release(struct device *dev)
+ 	dev_vdbg(dev, "releasing '%s'\n", dev_name(dev));
+ 	debugfs_remove_recursive(phy->debugfs);
+ 	regulator_put(phy->pwr);
++	mutex_destroy(&phy->mutex);
++	lockdep_unregister_key(&phy->lockdep_key);
+ 	ida_free(&phy_ida, phy->id);
+ 	kfree(phy);
+ }
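Giving every phy its own lock class key, registered at create and unregistered at release, lets lockdep distinguish nested acquisitions of different phys' mutexes instead of flagging them as self-deadlock. A kernel-style fragment of the pairing used in the hunk, not compilable standalone:

    /* Sketch only: per-instance lock class for lockdep. */
    struct widget {
            struct lock_class_key key;
            struct mutex lock;
    };

    /* at creation */
    lockdep_register_key(&w->key);
    mutex_init_with_key(&w->lock, &w->key);

    /* at release, reverse order */
    mutex_destroy(&w->lock);
    lockdep_unregister_key(&w->key);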
+diff --git a/drivers/phy/tegra/xusb-tegra186.c b/drivers/phy/tegra/xusb-tegra186.c
+index 23a23f2d64e586..e818f6c3980e6b 100644
+--- a/drivers/phy/tegra/xusb-tegra186.c
++++ b/drivers/phy/tegra/xusb-tegra186.c
+@@ -648,14 +648,15 @@ static void tegra186_utmi_bias_pad_power_on(struct tegra_xusb_padctl *padctl)
+ 		udelay(100);
+ 	}
+ 
+-	if (padctl->soc->trk_hw_mode) {
+-		value = padctl_readl(padctl, XUSB_PADCTL_USB2_BIAS_PAD_CTL2);
+-		value |= USB2_TRK_HW_MODE;
++	value = padctl_readl(padctl, XUSB_PADCTL_USB2_BIAS_PAD_CTL2);
++	if (padctl->soc->trk_update_on_idle)
+ 		value &= ~CYA_TRK_CODE_UPDATE_ON_IDLE;
+-		padctl_writel(padctl, value, XUSB_PADCTL_USB2_BIAS_PAD_CTL2);
+-	} else {
++	if (padctl->soc->trk_hw_mode)
++		value |= USB2_TRK_HW_MODE;
++	padctl_writel(padctl, value, XUSB_PADCTL_USB2_BIAS_PAD_CTL2);
++
++	if (!padctl->soc->trk_hw_mode)
+ 		clk_disable_unprepare(priv->usb2_trk_clk);
+-	}
+ }
+ 
+ static void tegra186_utmi_bias_pad_power_off(struct tegra_xusb_padctl *padctl)
+@@ -782,13 +783,15 @@ static int tegra186_xusb_padctl_vbus_override(struct tegra_xusb_padctl *padctl,
+ }
+ 
+ static int tegra186_xusb_padctl_id_override(struct tegra_xusb_padctl *padctl,
+-					    bool status)
++					    struct tegra_xusb_usb2_port *port, bool status)
+ {
+-	u32 value;
++	u32 value, id_override;
++	int err = 0;
+ 
+ 	dev_dbg(padctl->dev, "%s id override\n", status ? "set" : "clear");
+ 
+ 	value = padctl_readl(padctl, USB2_VBUS_ID);
++	id_override = value & ID_OVERRIDE(~0);
+ 
+ 	if (status) {
+ 		if (value & VBUS_OVERRIDE) {
+@@ -799,15 +802,35 @@ static int tegra186_xusb_padctl_id_override(struct tegra_xusb_padctl *padctl,
+ 			value = padctl_readl(padctl, USB2_VBUS_ID);
+ 		}
+ 
+-		value &= ~ID_OVERRIDE(~0);
+-		value |= ID_OVERRIDE_GROUNDED;
++		if (id_override != ID_OVERRIDE_GROUNDED) {
++			value &= ~ID_OVERRIDE(~0);
++			value |= ID_OVERRIDE_GROUNDED;
++			padctl_writel(padctl, value, USB2_VBUS_ID);
++
++			err = regulator_enable(port->supply);
++			if (err) {
++				dev_err(padctl->dev, "Failed to enable regulator: %d\n", err);
++				return err;
++			}
++		}
+ 	} else {
+-		value &= ~ID_OVERRIDE(~0);
+-		value |= ID_OVERRIDE_FLOATING;
++		if (id_override == ID_OVERRIDE_GROUNDED) {
++			/*
++			 * The regulator is disabled only when the role transitions
++			 * from USB_ROLE_HOST to USB_ROLE_NONE.
++			 */
++			err = regulator_disable(port->supply);
++			if (err) {
++				dev_err(padctl->dev, "Failed to disable regulator: %d\n", err);
++				return err;
++			}
++
++			value &= ~ID_OVERRIDE(~0);
++			value |= ID_OVERRIDE_FLOATING;
++			padctl_writel(padctl, value, USB2_VBUS_ID);
++		}
+ 	}
+ 
+-	padctl_writel(padctl, value, USB2_VBUS_ID);
+-
+ 	return 0;
+ }
+ 
+@@ -826,27 +849,20 @@ static int tegra186_utmi_phy_set_mode(struct phy *phy, enum phy_mode mode,
+ 
+ 	if (mode == PHY_MODE_USB_OTG) {
+ 		if (submode == USB_ROLE_HOST) {
+-			tegra186_xusb_padctl_id_override(padctl, true);
+-
+-			err = regulator_enable(port->supply);
++			err = tegra186_xusb_padctl_id_override(padctl, port, true);
++			if (err)
++				goto out;
+ 		} else if (submode == USB_ROLE_DEVICE) {
+ 			tegra186_xusb_padctl_vbus_override(padctl, true);
+ 		} else if (submode == USB_ROLE_NONE) {
+-			/*
+-			 * When port is peripheral only or role transitions to
+-			 * USB_ROLE_NONE from USB_ROLE_DEVICE, regulator is not
+-			 * enabled.
+-			 */
+-			if (regulator_is_enabled(port->supply))
+-				regulator_disable(port->supply);
+-
+-			tegra186_xusb_padctl_id_override(padctl, false);
++			err = tegra186_xusb_padctl_id_override(padctl, port, false);
++			if (err)
++				goto out;
+ 			tegra186_xusb_padctl_vbus_override(padctl, false);
+ 		}
+ 	}
+-
++out:
+ 	mutex_unlock(&padctl->lock);
+-
+ 	return err;
+ }
+ 
+@@ -1710,7 +1726,8 @@ const struct tegra_xusb_padctl_soc tegra234_xusb_padctl_soc = {
+ 	.num_supplies = ARRAY_SIZE(tegra194_xusb_padctl_supply_names),
+ 	.supports_gen2 = true,
+ 	.poll_trk_completed = true,
+-	.trk_hw_mode = true,
++	.trk_hw_mode = false,
++	.trk_update_on_idle = true,
+ 	.supports_lp_cfg_en = true,
+ };
+ EXPORT_SYMBOL_GPL(tegra234_xusb_padctl_soc);
+diff --git a/drivers/phy/tegra/xusb.h b/drivers/phy/tegra/xusb.h
+index 6e45d194c68947..d2b5f95651324a 100644
+--- a/drivers/phy/tegra/xusb.h
++++ b/drivers/phy/tegra/xusb.h
+@@ -434,6 +434,7 @@ struct tegra_xusb_padctl_soc {
+ 	bool need_fake_usb3_port;
+ 	bool poll_trk_completed;
+ 	bool trk_hw_mode;
++	bool trk_update_on_idle;
+ 	bool supports_lp_cfg_en;
+ };
+ 
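
Two behavioural changes above: the tracking configuration is split into independent trk_hw_mode and trk_update_on_idle SoC flags (Tegra234 now sets only the latter), and the VBUS supply is enabled or disabled strictly on edges of the ID-override state, keeping the regulator refcount balanced instead of probing it with regulator_is_enabled(). The edge-triggered regulator idiom, sketched with simplified names (want_host, cur_override and set_id_override() are illustrative, not the driver's identifiers):

/* Enable/disable a refcounted supply only when the override state
 * actually changes, so enable and disable calls stay strictly paired. */
if (want_host && cur_override != ID_GROUNDED) {
	set_id_override(ID_GROUNDED);
	err = regulator_enable(supply);		/* one enable per edge */
} else if (!want_host && cur_override == ID_GROUNDED) {
	err = regulator_disable(supply);	/* one disable per edge */
	if (!err)
		set_id_override(ID_FLOATING);
}
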
+diff --git a/drivers/pmdomain/governor.c b/drivers/pmdomain/governor.c
+index d1a10eeebd1616..600592f19669f7 100644
+--- a/drivers/pmdomain/governor.c
++++ b/drivers/pmdomain/governor.c
+@@ -8,6 +8,7 @@
+ #include <linux/pm_domain.h>
+ #include <linux/pm_qos.h>
+ #include <linux/hrtimer.h>
++#include <linux/cpu.h>
+ #include <linux/cpuidle.h>
+ #include <linux/cpumask.h>
+ #include <linux/ktime.h>
+@@ -349,6 +350,8 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd)
+ 	struct cpuidle_device *dev;
+ 	ktime_t domain_wakeup, next_hrtimer;
+ 	ktime_t now = ktime_get();
++	struct device *cpu_dev;
++	s64 cpu_constraint, global_constraint;
+ 	s64 idle_duration_ns;
+ 	int cpu, i;
+ 
+@@ -359,6 +362,7 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd)
+ 	if (!(genpd->flags & GENPD_FLAG_CPU_DOMAIN))
+ 		return true;
+ 
++	global_constraint = cpu_latency_qos_limit();
+ 	/*
+ 	 * Find the next wakeup for any of the online CPUs within the PM domain
+ 	 * and its subdomains. Note, we only need the genpd->cpus, as it already
+@@ -372,8 +376,16 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd)
+ 			if (ktime_before(next_hrtimer, domain_wakeup))
+ 				domain_wakeup = next_hrtimer;
+ 		}
++
++		cpu_dev = get_cpu_device(cpu);
++		if (cpu_dev) {
++			cpu_constraint = dev_pm_qos_raw_resume_latency(cpu_dev);
++			if (cpu_constraint < global_constraint)
++				global_constraint = cpu_constraint;
++		}
+ 	}
+ 
++	global_constraint *= NSEC_PER_USEC;
+ 	/* The minimum idle duration is from now - until the next wakeup. */
+ 	idle_duration_ns = ktime_to_ns(ktime_sub(domain_wakeup, now));
+ 	if (idle_duration_ns <= 0)
+@@ -389,8 +401,10 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd)
+ 	 */
+ 	i = genpd->state_idx;
+ 	do {
+-		if (idle_duration_ns >= (genpd->states[i].residency_ns +
+-		    genpd->states[i].power_off_latency_ns)) {
++		if ((idle_duration_ns >= (genpd->states[i].residency_ns +
++		    genpd->states[i].power_off_latency_ns)) &&
++		    (global_constraint >= (genpd->states[i].power_on_latency_ns +
++		    genpd->states[i].power_off_latency_ns))) {
+ 			genpd->state_idx = i;
+ 			return true;
+ 		}
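
cpu_power_down_ok() now folds each online CPU's per-device resume-latency QoS into the global cpu_latency_qos_limit(), converts the tightest value from microseconds to nanoseconds, and accepts a domain idle state only if its exit-plus-entry latency fits that budget in addition to the existing residency check. The selection logic reduces to roughly this sketch (made-up struct, deepest-first search as in the governor):

struct dstate {
	long long residency_ns;
	long long power_off_latency_ns;
	long long power_on_latency_ns;
};

/* Return the deepest state that both pays off (residency) and can be
 * entered and exited within the QoS latency budget; -1 if none fits. */
static int pick_idle_state(const struct dstate *s, int deepest,
			   long long idle_ns, long long constraint_ns)
{
	int i;

	for (i = deepest; i >= 0; i--) {
		if (idle_ns >= s[i].residency_ns + s[i].power_off_latency_ns &&
		    constraint_ns >= s[i].power_on_latency_ns +
				     s[i].power_off_latency_ns)
			return i;
	}
	return -1;
}
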
+diff --git a/drivers/soc/aspeed/aspeed-lpc-snoop.c b/drivers/soc/aspeed/aspeed-lpc-snoop.c
+index ef8f355589a584..fc3a2c41cc1073 100644
+--- a/drivers/soc/aspeed/aspeed-lpc-snoop.c
++++ b/drivers/soc/aspeed/aspeed-lpc-snoop.c
+@@ -58,6 +58,7 @@ struct aspeed_lpc_snoop_model_data {
+ };
+ 
+ struct aspeed_lpc_snoop_channel {
++	bool enabled;
+ 	struct kfifo		fifo;
+ 	wait_queue_head_t	wq;
+ 	struct miscdevice	miscdev;
+@@ -190,6 +191,9 @@ static int aspeed_lpc_enable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+ 	const struct aspeed_lpc_snoop_model_data *model_data =
+ 		of_device_get_match_data(dev);
+ 
++	if (WARN_ON(lpc_snoop->chan[channel].enabled))
++		return -EBUSY;
++
+ 	init_waitqueue_head(&lpc_snoop->chan[channel].wq);
+ 	/* Create FIFO datastructure */
+ 	rc = kfifo_alloc(&lpc_snoop->chan[channel].fifo,
+@@ -236,6 +240,8 @@ static int aspeed_lpc_enable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+ 		regmap_update_bits(lpc_snoop->regmap, HICRB,
+ 				hicrb_en, hicrb_en);
+ 
++	lpc_snoop->chan[channel].enabled = true;
++
+ 	return 0;
+ 
+ err_misc_deregister:
+@@ -248,6 +254,9 @@ static int aspeed_lpc_enable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+ static void aspeed_lpc_disable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+ 				     int channel)
+ {
++	if (!lpc_snoop->chan[channel].enabled)
++		return;
++
+ 	switch (channel) {
+ 	case 0:
+ 		regmap_update_bits(lpc_snoop->regmap, HICR5,
+@@ -263,8 +272,10 @@ static void aspeed_lpc_disable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+ 		return;
+ 	}
+ 
+-	kfifo_free(&lpc_snoop->chan[channel].fifo);
++	lpc_snoop->chan[channel].enabled = false;
++	/* Consider improving safety wrt concurrent reader(s) */
+ 	misc_deregister(&lpc_snoop->chan[channel].miscdev);
++	kfifo_free(&lpc_snoop->chan[channel].fifo);
+ }
+ 
+ static int aspeed_lpc_snoop_probe(struct platform_device *pdev)
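
The new `enabled` flag makes channel setup and teardown idempotent: enabling an already-enabled channel fails with -EBUSY instead of re-initialising live state, and disabling a channel that never came up is a no-op. Teardown also deregisters the misc device before freeing the FIFO it reads from. The shape of the pattern, sketched with placeholder helpers:

struct chan {
	bool enabled;
	/* kfifo, miscdev, ... */
};

static int chan_enable(struct chan *c)
{
	if (WARN_ON(c->enabled))
		return -EBUSY;		/* refuse double-enable */
	/* allocate the FIFO, register the misc device ... */
	c->enabled = true;
	return 0;
}

static void chan_disable(struct chan *c)
{
	if (!c->enabled)
		return;			/* nothing was set up */
	c->enabled = false;
	/* deregister the misc device first, then free the FIFO its
	 * readers were draining (reverse of setup order) */
}
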
+diff --git a/drivers/soundwire/amd_manager.c b/drivers/soundwire/amd_manager.c
+index a12c68b93b1c30..7a671a7861979c 100644
+--- a/drivers/soundwire/amd_manager.c
++++ b/drivers/soundwire/amd_manager.c
+@@ -238,7 +238,7 @@ static u64 amd_sdw_send_cmd_get_resp(struct amd_sdw_manager *amd_manager, u32 lo
+ 
+ 	if (sts & AMD_SDW_IMM_RES_VALID) {
+ 		dev_err(amd_manager->dev, "SDW%x manager is in bad state\n", amd_manager->instance);
+-		writel(0x00, amd_manager->mmio + ACP_SW_IMM_CMD_STS);
++		writel(AMD_SDW_IMM_RES_VALID, amd_manager->mmio + ACP_SW_IMM_CMD_STS);
+ 	}
+ 	writel(upper_data, amd_manager->mmio + ACP_SW_IMM_CMD_UPPER_WORD);
+ 	writel(lower_data, amd_manager->mmio + ACP_SW_IMM_CMD_LOWER_QWORD);
+@@ -1209,6 +1209,7 @@ static int __maybe_unused amd_suspend(struct device *dev)
+ 	}
+ 
+ 	if (amd_manager->power_mode_mask & AMD_SDW_CLK_STOP_MODE) {
++		cancel_work_sync(&amd_manager->amd_sdw_work);
+ 		amd_sdw_wake_enable(amd_manager, false);
+ 		if (amd_manager->acp_rev >= ACP70_PCI_REV_ID) {
+ 			ret = amd_sdw_host_wake_enable(amd_manager, false);
+@@ -1219,6 +1220,7 @@ static int __maybe_unused amd_suspend(struct device *dev)
+ 		if (ret)
+ 			return ret;
+ 	} else if (amd_manager->power_mode_mask & AMD_SDW_POWER_OFF_MODE) {
++		cancel_work_sync(&amd_manager->amd_sdw_work);
+ 		amd_sdw_wake_enable(amd_manager, false);
+ 		if (amd_manager->acp_rev >= ACP70_PCI_REV_ID) {
+ 			ret = amd_sdw_host_wake_enable(amd_manager, false);
+diff --git a/drivers/soundwire/qcom.c b/drivers/soundwire/qcom.c
+index 295a46dc2be755..0f45e3404756f6 100644
+--- a/drivers/soundwire/qcom.c
++++ b/drivers/soundwire/qcom.c
+@@ -156,7 +156,6 @@ struct qcom_swrm_port_config {
+ 	u8 word_length;
+ 	u8 blk_group_count;
+ 	u8 lane_control;
+-	u8 ch_mask;
+ };
+ 
+ /*
+@@ -1049,13 +1048,9 @@ static int qcom_swrm_port_enable(struct sdw_bus *bus,
+ {
+ 	u32 reg = SWRM_DP_PORT_CTRL_BANK(enable_ch->port_num, bank);
+ 	struct qcom_swrm_ctrl *ctrl = to_qcom_sdw(bus);
+-	struct qcom_swrm_port_config *pcfg;
+ 	u32 val;
+ 
+-	pcfg = &ctrl->pconfig[enable_ch->port_num];
+ 	ctrl->reg_read(ctrl, reg, &val);
+-	if (pcfg->ch_mask != SWR_INVALID_PARAM && pcfg->ch_mask != 0)
+-		enable_ch->ch_mask = pcfg->ch_mask;
+ 
+ 	if (enable_ch->enable)
+ 		val |= (enable_ch->ch_mask << SWRM_DP_PORT_CTRL_EN_CHAN_SHFT);
+@@ -1275,26 +1270,6 @@ static void *qcom_swrm_get_sdw_stream(struct snd_soc_dai *dai, int direction)
+ 	return ctrl->sruntime[dai->id];
+ }
+ 
+-static int qcom_swrm_set_channel_map(struct snd_soc_dai *dai,
+-				     unsigned int tx_num, const unsigned int *tx_slot,
+-				     unsigned int rx_num, const unsigned int *rx_slot)
+-{
+-	struct qcom_swrm_ctrl *ctrl = dev_get_drvdata(dai->dev);
+-	int i;
+-
+-	if (tx_slot) {
+-		for (i = 0; i < tx_num; i++)
+-			ctrl->pconfig[i].ch_mask = tx_slot[i];
+-	}
+-
+-	if (rx_slot) {
+-		for (i = 0; i < rx_num; i++)
+-			ctrl->pconfig[i].ch_mask = rx_slot[i];
+-	}
+-
+-	return 0;
+-}
+-
+ static int qcom_swrm_startup(struct snd_pcm_substream *substream,
+ 			     struct snd_soc_dai *dai)
+ {
+@@ -1331,7 +1306,6 @@ static const struct snd_soc_dai_ops qcom_swrm_pdm_dai_ops = {
+ 	.shutdown = qcom_swrm_shutdown,
+ 	.set_stream = qcom_swrm_set_sdw_stream,
+ 	.get_stream = qcom_swrm_get_sdw_stream,
+-	.set_channel_map = qcom_swrm_set_channel_map,
+ };
+ 
+ static const struct snd_soc_component_driver qcom_swrm_dai_component = {
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 90e27729ef6b22..c6ac4b05bdb110 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -4133,10 +4133,13 @@ static int __spi_validate(struct spi_device *spi, struct spi_message *message)
+ 				xfer->tx_nbits != SPI_NBITS_OCTAL)
+ 				return -EINVAL;
+ 			if ((xfer->tx_nbits == SPI_NBITS_DUAL) &&
+-				!(spi->mode & (SPI_TX_DUAL | SPI_TX_QUAD)))
++				!(spi->mode & (SPI_TX_DUAL | SPI_TX_QUAD | SPI_TX_OCTAL)))
+ 				return -EINVAL;
+ 			if ((xfer->tx_nbits == SPI_NBITS_QUAD) &&
+-				!(spi->mode & SPI_TX_QUAD))
++				!(spi->mode & (SPI_TX_QUAD | SPI_TX_OCTAL)))
++				return -EINVAL;
++			if ((xfer->tx_nbits == SPI_NBITS_OCTAL) &&
++				!(spi->mode & SPI_TX_OCTAL))
+ 				return -EINVAL;
+ 		}
+ 		/* Check transfer rx_nbits */
+@@ -4149,10 +4152,13 @@ static int __spi_validate(struct spi_device *spi, struct spi_message *message)
+ 				xfer->rx_nbits != SPI_NBITS_OCTAL)
+ 				return -EINVAL;
+ 			if ((xfer->rx_nbits == SPI_NBITS_DUAL) &&
+-				!(spi->mode & (SPI_RX_DUAL | SPI_RX_QUAD)))
++				!(spi->mode & (SPI_RX_DUAL | SPI_RX_QUAD | SPI_RX_OCTAL)))
+ 				return -EINVAL;
+ 			if ((xfer->rx_nbits == SPI_NBITS_QUAD) &&
+-				!(spi->mode & SPI_RX_QUAD))
++				!(spi->mode & (SPI_RX_QUAD | SPI_RX_OCTAL)))
++				return -EINVAL;
++			if ((xfer->rx_nbits == SPI_NBITS_OCTAL) &&
++				!(spi->mode & SPI_RX_OCTAL))
+ 				return -EINVAL;
+ 		}
+ 
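
The validation above treats the controller mode bits as a capability hierarchy: an octal-capable bus implicitly satisfies the dual and quad checks, and a new explicit case rejects SPI_NBITS_OCTAL transfers on buses without SPI_TX_OCTAL/SPI_RX_OCTAL (previously octal transfers passed with no mode check at all). Condensed for the TX side (RX is symmetric; constants per <linux/spi/spi.h>):

static int check_tx_nbits(u8 nbits, u32 mode)
{
	switch (nbits) {
	case SPI_NBITS_DUAL:	/* 2-bit: any wider controller also qualifies */
		return mode & (SPI_TX_DUAL | SPI_TX_QUAD | SPI_TX_OCTAL) ?
			0 : -EINVAL;
	case SPI_NBITS_QUAD:	/* 4-bit: quad or octal controllers */
		return mode & (SPI_TX_QUAD | SPI_TX_OCTAL) ? 0 : -EINVAL;
	case SPI_NBITS_OCTAL:	/* 8-bit: octal controllers only */
		return mode & SPI_TX_OCTAL ? 0 : -EINVAL;
	}
	return 0;		/* SPI_NBITS_SINGLE */
}
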
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index 5dbf8d53db09f1..6434cbdc1a6ef3 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -97,6 +97,13 @@ struct vchiq_arm_state {
+ 	 * tracked separately with the state.
+ 	 */
+ 	int peer_use_count;
++
++	/*
++	 * Flag to indicate that the first vchiq connect has made it through.
++	 * This means that both sides should be fully ready, and we should
++	 * be able to suspend after this point.
++	 */
++	int first_connect;
+ };
+ 
+ static int
+@@ -273,6 +280,29 @@ static int vchiq_platform_init(struct platform_device *pdev, struct vchiq_state
+ 	return 0;
+ }
+ 
++int
++vchiq_platform_init_state(struct vchiq_state *state)
++{
++	struct vchiq_arm_state *platform_state;
++
++	platform_state = devm_kzalloc(state->dev, sizeof(*platform_state), GFP_KERNEL);
++	if (!platform_state)
++		return -ENOMEM;
++
++	rwlock_init(&platform_state->susp_res_lock);
++
++	init_completion(&platform_state->ka_evt);
++	atomic_set(&platform_state->ka_use_count, 0);
++	atomic_set(&platform_state->ka_use_ack_count, 0);
++	atomic_set(&platform_state->ka_release_count, 0);
++
++	platform_state->state = state;
++
++	state->platform_state = (struct opaque_platform_state *)platform_state;
++
++	return 0;
++}
++
+ static struct vchiq_arm_state *vchiq_platform_get_arm_state(struct vchiq_state *state)
+ {
+ 	return (struct vchiq_arm_state *)state->platform_state;
+@@ -981,39 +1011,6 @@ vchiq_keepalive_thread_func(void *v)
+ 	return 0;
+ }
+ 
+-int
+-vchiq_platform_init_state(struct vchiq_state *state)
+-{
+-	struct vchiq_arm_state *platform_state;
+-	char threadname[16];
+-
+-	platform_state = devm_kzalloc(state->dev, sizeof(*platform_state), GFP_KERNEL);
+-	if (!platform_state)
+-		return -ENOMEM;
+-
+-	snprintf(threadname, sizeof(threadname), "vchiq-keep/%d",
+-		 state->id);
+-	platform_state->ka_thread = kthread_create(&vchiq_keepalive_thread_func,
+-						   (void *)state, threadname);
+-	if (IS_ERR(platform_state->ka_thread)) {
+-		dev_err(state->dev, "couldn't create thread %s\n", threadname);
+-		return PTR_ERR(platform_state->ka_thread);
+-	}
+-
+-	rwlock_init(&platform_state->susp_res_lock);
+-
+-	init_completion(&platform_state->ka_evt);
+-	atomic_set(&platform_state->ka_use_count, 0);
+-	atomic_set(&platform_state->ka_use_ack_count, 0);
+-	atomic_set(&platform_state->ka_release_count, 0);
+-
+-	platform_state->state = state;
+-
+-	state->platform_state = (struct opaque_platform_state *)platform_state;
+-
+-	return 0;
+-}
+-
+ int
+ vchiq_use_internal(struct vchiq_state *state, struct vchiq_service *service,
+ 		   enum USE_TYPE_E use_type)
+@@ -1329,19 +1326,37 @@ vchiq_check_service(struct vchiq_service *service)
+ 	return ret;
+ }
+ 
+-void vchiq_platform_connected(struct vchiq_state *state)
+-{
+-	struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state);
+-
+-	wake_up_process(arm_state->ka_thread);
+-}
+-
+ void vchiq_platform_conn_state_changed(struct vchiq_state *state,
+ 				       enum vchiq_connstate oldstate,
+ 				       enum vchiq_connstate newstate)
+ {
++	struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state);
++	char threadname[16];
++
+ 	dev_dbg(state->dev, "suspend: %d: %s->%s\n",
+ 		state->id, get_conn_state_name(oldstate), get_conn_state_name(newstate));
++	if (state->conn_state != VCHIQ_CONNSTATE_CONNECTED)
++		return;
++
++	write_lock_bh(&arm_state->susp_res_lock);
++	if (arm_state->first_connect) {
++		write_unlock_bh(&arm_state->susp_res_lock);
++		return;
++	}
++
++	arm_state->first_connect = 1;
++	write_unlock_bh(&arm_state->susp_res_lock);
++	snprintf(threadname, sizeof(threadname), "vchiq-keep/%d",
++		 state->id);
++	arm_state->ka_thread = kthread_create(&vchiq_keepalive_thread_func,
++					      (void *)state,
++					      threadname);
++	if (IS_ERR(arm_state->ka_thread)) {
++		dev_err(state->dev, "suspend: Couldn't create thread %s\n",
++			threadname);
++	} else {
++		wake_up_process(arm_state->ka_thread);
++	}
+ }
+ 
+ static const struct of_device_id vchiq_of_match[] = {
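
The keepalive thread is no longer created in vchiq_platform_init_state(); it is spawned on the first transition to VCHIQ_CONNSTATE_CONNECTED, when both sides are known to be up. The `first_connect` flag is tested and set under the susp_res_lock write lock so exactly one caller performs the one-time work. The guard in isolation (spawn_keepalive_thread() stands in for the kthread_create()/wake_up_process() pair):

/* One-shot initialisation: the flag flip is serialised by the lock,
 * the expensive work happens outside it. */
write_lock_bh(&st->susp_res_lock);
if (st->first_connect) {
	write_unlock_bh(&st->susp_res_lock);
	return;			/* already done by an earlier transition */
}
st->first_connect = 1;
write_unlock_bh(&st->susp_res_lock);

/* safe: no other caller can get past the flag now */
spawn_keepalive_thread(st);
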
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+index e7b0c800a205dd..e2cac0898b8faa 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+@@ -3343,7 +3343,6 @@ vchiq_connect_internal(struct vchiq_state *state, struct vchiq_instance *instanc
+ 			return -EAGAIN;
+ 
+ 		vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTED);
+-		vchiq_platform_connected(state);
+ 		complete(&state->connect);
+ 	}
+ 
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.h b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.h
+index 3b5c0618e5672f..9b4e766990a493 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.h
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.h
+@@ -575,8 +575,6 @@ int vchiq_send_remote_use(struct vchiq_state *state);
+ 
+ int vchiq_send_remote_use_active(struct vchiq_state *state);
+ 
+-void vchiq_platform_connected(struct vchiq_state *state);
+-
+ void vchiq_platform_conn_state_changed(struct vchiq_state *state,
+ 				       enum vchiq_connstate oldstate,
+ 				       enum vchiq_connstate newstate);
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index 6a2116cbb06f92..60818c1bec4831 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -1450,7 +1450,7 @@ int tb_dp_port_set_hops(struct tb_port *port, unsigned int video,
+ 		return ret;
+ 
+ 	data[0] &= ~ADP_DP_CS_0_VIDEO_HOPID_MASK;
+-	data[1] &= ~ADP_DP_CS_1_AUX_RX_HOPID_MASK;
++	data[1] &= ~ADP_DP_CS_1_AUX_TX_HOPID_MASK;
+ 	data[1] &= ~ADP_DP_CS_1_AUX_RX_HOPID_MASK;
+ 
+ 	data[0] |= (video << ADP_DP_CS_0_VIDEO_HOPID_SHIFT) &
+@@ -3437,7 +3437,7 @@ void tb_sw_set_unplugged(struct tb_switch *sw)
+ 	}
+ }
+ 
+-static int tb_switch_set_wake(struct tb_switch *sw, unsigned int flags)
++static int tb_switch_set_wake(struct tb_switch *sw, unsigned int flags, bool runtime)
+ {
+ 	if (flags)
+ 		tb_sw_dbg(sw, "enabling wakeup: %#x\n", flags);
+@@ -3445,7 +3445,7 @@ static int tb_switch_set_wake(struct tb_switch *sw, unsigned int flags)
+ 		tb_sw_dbg(sw, "disabling wakeup\n");
+ 
+ 	if (tb_switch_is_usb4(sw))
+-		return usb4_switch_set_wake(sw, flags);
++		return usb4_switch_set_wake(sw, flags, runtime);
+ 	return tb_lc_set_wake(sw, flags);
+ }
+ 
+@@ -3521,7 +3521,7 @@ int tb_switch_resume(struct tb_switch *sw, bool runtime)
+ 		tb_switch_check_wakes(sw);
+ 
+ 	/* Disable wakes */
+-	tb_switch_set_wake(sw, 0);
++	tb_switch_set_wake(sw, 0, true);
+ 
+ 	err = tb_switch_tmu_init(sw);
+ 	if (err)
+@@ -3602,7 +3602,7 @@ void tb_switch_suspend(struct tb_switch *sw, bool runtime)
+ 		flags |= TB_WAKE_ON_USB4 | TB_WAKE_ON_USB3 | TB_WAKE_ON_PCIE;
+ 	}
+ 
+-	tb_switch_set_wake(sw, flags);
++	tb_switch_set_wake(sw, flags, runtime);
+ 
+ 	if (tb_switch_is_usb4(sw))
+ 		usb4_switch_set_sleep(sw);
+diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
+index b54147a1ba8778..d14381eb540672 100644
+--- a/drivers/thunderbolt/tb.h
++++ b/drivers/thunderbolt/tb.h
+@@ -1304,7 +1304,7 @@ int usb4_switch_read_uid(struct tb_switch *sw, u64 *uid);
+ int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf,
+ 			  size_t size);
+ bool usb4_switch_lane_bonding_possible(struct tb_switch *sw);
+-int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags);
++int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags, bool runtime);
+ int usb4_switch_set_sleep(struct tb_switch *sw);
+ int usb4_switch_nvm_sector_size(struct tb_switch *sw);
+ int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
+diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
+index 3e96f1afd4268e..cc05211f269cec 100644
+--- a/drivers/thunderbolt/usb4.c
++++ b/drivers/thunderbolt/usb4.c
+@@ -403,12 +403,12 @@ bool usb4_switch_lane_bonding_possible(struct tb_switch *sw)
+  * usb4_switch_set_wake() - Enable/disable wake
+  * @sw: USB4 router
+  * @flags: Wakeup flags (%0 to disable)
++ * @runtime: Wake is being programmed during system runtime
+  *
+  * Enables/disables router to wake up from sleep.
+  */
+-int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags)
++int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags, bool runtime)
+ {
+-	struct usb4_port *usb4;
+ 	struct tb_port *port;
+ 	u64 route = tb_route(sw);
+ 	u32 val;
+@@ -438,13 +438,11 @@ int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags)
+ 			val |= PORT_CS_19_WOU4;
+ 		} else {
+ 			bool configured = val & PORT_CS_19_PC;
+-			usb4 = port->usb4;
++			bool wakeup = runtime || device_may_wakeup(&port->usb4->dev);
+ 
+-			if (((flags & TB_WAKE_ON_CONNECT) &&
+-			      device_may_wakeup(&usb4->dev)) && !configured)
++			if ((flags & TB_WAKE_ON_CONNECT) && wakeup && !configured)
+ 				val |= PORT_CS_19_WOC;
+-			if (((flags & TB_WAKE_ON_DISCONNECT) &&
+-			      device_may_wakeup(&usb4->dev)) && configured)
++			if ((flags & TB_WAKE_ON_DISCONNECT) && wakeup && configured)
+ 				val |= PORT_CS_19_WOD;
+ 			if ((flags & TB_WAKE_ON_USB4) && configured)
+ 				val |= PORT_CS_19_WOU4;
+diff --git a/drivers/tty/serial/pch_uart.c b/drivers/tty/serial/pch_uart.c
+index 508e8c6f01d4d2..884fefbfd5a109 100644
+--- a/drivers/tty/serial/pch_uart.c
++++ b/drivers/tty/serial/pch_uart.c
+@@ -954,7 +954,7 @@ static unsigned int dma_handle_tx(struct eg20t_port *priv)
+ 			__func__);
+ 		return 0;
+ 	}
+-	dma_sync_sg_for_device(port->dev, priv->sg_tx_p, nent, DMA_TO_DEVICE);
++	dma_sync_sg_for_device(port->dev, priv->sg_tx_p, num, DMA_TO_DEVICE);
+ 	priv->desc_tx = desc;
+ 	desc->callback = pch_dma_tx_complete;
+ 	desc->callback_param = priv;
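
This one-identifier fix encodes a DMA API rule: dma_sync_sg_for_device(), like dma_unmap_sg(), must be passed the nents value originally given to dma_map_sg(), not the possibly smaller mapped count dma_map_sg() returned. Here `num` is the original entry count and `nent` the mapped one. The rule in sketch form:

int mapped;

mapped = dma_map_sg(dev, sgl, num, DMA_TO_DEVICE);
if (!mapped)
	return -EIO;

/* build and submit the descriptor over 'mapped' entries ... */

/* sync and unmap always take the ORIGINAL count, never 'mapped' */
dma_sync_sg_for_device(dev, sgl, num, DMA_TO_DEVICE);
dma_unmap_sg(dev, sgl, num, DMA_TO_DEVICE);
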
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index f71b807d71a7a5..1e0be266e9b200 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -68,6 +68,12 @@
+  */
+ #define USB_SHORT_SET_ADDRESS_REQ_TIMEOUT	500  /* ms */
+ 
++/*
++ * Give SS hubs 200 ms after wake to train downstream links before
++ * assuming no port activity and allowing the hub to runtime suspend again.
++ */
++#define USB_SS_PORT_U0_WAKE_TIME	200  /* ms */
++
+ /* Protect struct usb_device->state and ->children members
+  * Note: Both are also protected by ->dev.sem, except that ->state can
+  * change to USB_STATE_NOTATTACHED even when the semaphore isn't held. */
+@@ -1095,6 +1101,7 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ 			goto init2;
+ 		goto init3;
+ 	}
++
+ 	hub_get(hub);
+ 
+ 	/* The superspeed hub except for root hub has to use Hub Depth
+@@ -1343,6 +1350,17 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ 		device_unlock(&hdev->dev);
+ 	}
+ 
++	if (type == HUB_RESUME && hub_is_superspeed(hub->hdev)) {
++		/* give usb3 downstream links training time after hub resume */
++		usb_autopm_get_interface_no_resume(
++			to_usb_interface(hub->intfdev));
++
++		queue_delayed_work(system_power_efficient_wq,
++				   &hub->post_resume_work,
++				   msecs_to_jiffies(USB_SS_PORT_U0_WAKE_TIME));
++		return;
++	}
++
+ 	hub_put(hub);
+ }
+ 
+@@ -1361,6 +1379,14 @@ static void hub_init_func3(struct work_struct *ws)
+ 	hub_activate(hub, HUB_INIT3);
+ }
+ 
++static void hub_post_resume(struct work_struct *ws)
++{
++	struct usb_hub *hub = container_of(ws, struct usb_hub, post_resume_work.work);
++
++	usb_autopm_put_interface_async(to_usb_interface(hub->intfdev));
++	hub_put(hub);
++}
++
+ enum hub_quiescing_type {
+ 	HUB_DISCONNECT, HUB_PRE_RESET, HUB_SUSPEND
+ };
+@@ -1386,6 +1412,7 @@ static void hub_quiesce(struct usb_hub *hub, enum hub_quiescing_type type)
+ 
+ 	/* Stop hub_wq and related activity */
+ 	timer_delete_sync(&hub->irq_urb_retry);
++	flush_delayed_work(&hub->post_resume_work);
+ 	usb_kill_urb(hub->urb);
+ 	if (hub->has_indicators)
+ 		cancel_delayed_work_sync(&hub->leds);
+@@ -1944,6 +1971,7 @@ static int hub_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ 	hub->hdev = hdev;
+ 	INIT_DELAYED_WORK(&hub->leds, led_work);
+ 	INIT_DELAYED_WORK(&hub->init_work, NULL);
++	INIT_DELAYED_WORK(&hub->post_resume_work, hub_post_resume);
+ 	INIT_WORK(&hub->events, hub_event);
+ 	INIT_LIST_HEAD(&hub->onboard_devs);
+ 	spin_lock_init(&hub->irq_urb_lock);
+@@ -5719,6 +5747,7 @@ static void port_event(struct usb_hub *hub, int port1)
+ 	struct usb_device *hdev = hub->hdev;
+ 	u16 portstatus, portchange;
+ 	int i = 0;
++	int err;
+ 
+ 	connect_change = test_bit(port1, hub->change_bits);
+ 	clear_bit(port1, hub->event_bits);
+@@ -5815,8 +5844,11 @@ static void port_event(struct usb_hub *hub, int port1)
+ 		} else if (!udev || !(portstatus & USB_PORT_STAT_CONNECTION)
+ 				|| udev->state == USB_STATE_NOTATTACHED) {
+ 			dev_dbg(&port_dev->dev, "do warm reset, port only\n");
+-			if (hub_port_reset(hub, port1, NULL,
+-					HUB_BH_RESET_TIME, true) < 0)
++			err = hub_port_reset(hub, port1, NULL,
++					     HUB_BH_RESET_TIME, true);
++			if (!udev && err == -ENOTCONN)
++				connect_change = 0;
++			else if (err < 0)
+ 				hub_port_disable(hub, port1, 1);
+ 		} else {
+ 			dev_dbg(&port_dev->dev, "do warm reset, full device\n");
+diff --git a/drivers/usb/core/hub.h b/drivers/usb/core/hub.h
+index e6ae73f8a95dc8..9ebc5ef54a325d 100644
+--- a/drivers/usb/core/hub.h
++++ b/drivers/usb/core/hub.h
+@@ -70,6 +70,7 @@ struct usb_hub {
+ 	u8			indicator[USB_MAXCHILDREN];
+ 	struct delayed_work	leds;
+ 	struct delayed_work	init_work;
++	struct delayed_work	post_resume_work;
+ 	struct work_struct      events;
+ 	spinlock_t		irq_urb_lock;
+ 	struct timer_list	irq_urb_retry;
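
For a SuperSpeed hub resuming from suspend, hub_activate() now pins the hub with usb_autopm_get_interface_no_resume() and schedules post_resume_work 200 ms out; the worker drops that reference (plus the hub_get() taken earlier), giving downstream links time to retrain to U0 before the hub may runtime-suspend again, and hub_quiesce() flushes the work so the reference cannot leak. The grace-period idiom, reduced to two illustrative helpers (hub_grace_start/hub_grace_end are not the driver's names):

/* On resume: pin the interface without forcing a resume, then arm the
 * delayed release. */
static void hub_grace_start(struct usb_hub *hub)
{
	usb_autopm_get_interface_no_resume(to_usb_interface(hub->intfdev));
	queue_delayed_work(system_power_efficient_wq,
			   &hub->post_resume_work,
			   msecs_to_jiffies(USB_SS_PORT_U0_WAKE_TIME));
}

/* Worker: drop the reference once links have had time to retrain. */
static void hub_grace_end(struct work_struct *ws)
{
	struct usb_hub *hub = container_of(ws, struct usb_hub,
					   post_resume_work.work);

	usb_autopm_put_interface_async(to_usb_interface(hub->intfdev));
}
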
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index f323fb5597b32f..290c1d846836e6 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -5389,20 +5389,34 @@ int dwc2_gadget_enter_hibernation(struct dwc2_hsotg *hsotg)
+ 	if (gusbcfg & GUSBCFG_ULPI_UTMI_SEL) {
+ 		/* ULPI interface */
+ 		gpwrdn |= GPWRDN_ULPI_LATCH_EN_DURING_HIB_ENTRY;
+-	}
+-	dwc2_writel(hsotg, gpwrdn, GPWRDN);
+-	udelay(10);
++		dwc2_writel(hsotg, gpwrdn, GPWRDN);
++		udelay(10);
+ 
+-	/* Suspend the Phy Clock */
+-	pcgcctl = dwc2_readl(hsotg, PCGCTL);
+-	pcgcctl |= PCGCTL_STOPPCLK;
+-	dwc2_writel(hsotg, pcgcctl, PCGCTL);
+-	udelay(10);
++		/* Suspend the Phy Clock */
++		pcgcctl = dwc2_readl(hsotg, PCGCTL);
++		pcgcctl |= PCGCTL_STOPPCLK;
++		dwc2_writel(hsotg, pcgcctl, PCGCTL);
++		udelay(10);
+ 
+-	gpwrdn = dwc2_readl(hsotg, GPWRDN);
+-	gpwrdn |= GPWRDN_PMUACTV;
+-	dwc2_writel(hsotg, gpwrdn, GPWRDN);
+-	udelay(10);
++		gpwrdn = dwc2_readl(hsotg, GPWRDN);
++		gpwrdn |= GPWRDN_PMUACTV;
++		dwc2_writel(hsotg, gpwrdn, GPWRDN);
++		udelay(10);
++	} else {
++		/* UTMI+ Interface */
++		dwc2_writel(hsotg, gpwrdn, GPWRDN);
++		udelay(10);
++
++		gpwrdn = dwc2_readl(hsotg, GPWRDN);
++		gpwrdn |= GPWRDN_PMUACTV;
++		dwc2_writel(hsotg, gpwrdn, GPWRDN);
++		udelay(10);
++
++		pcgcctl = dwc2_readl(hsotg, PCGCTL);
++		pcgcctl |= PCGCTL_STOPPCLK;
++		dwc2_writel(hsotg, pcgcctl, PCGCTL);
++		udelay(10);
++	}
+ 
+ 	/* Set flag to indicate that we are in hibernation */
+ 	hsotg->hibernated = 1;
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index 58683bb672e952..9b7485b84302d1 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -763,13 +763,13 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
+ 	ret = reset_control_deassert(qcom->resets);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to deassert resets, err=%d\n", ret);
+-		goto reset_assert;
++		return ret;
+ 	}
+ 
+ 	ret = dwc3_qcom_clk_init(qcom, of_clk_get_parent_count(np));
+ 	if (ret) {
+ 		dev_err_probe(dev, ret, "failed to get clocks\n");
+-		goto reset_assert;
++		return ret;
+ 	}
+ 
+ 	qcom->qscratch_base = devm_platform_ioremap_resource(pdev, 0);
+@@ -835,8 +835,6 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
+ 		clk_disable_unprepare(qcom->clks[i]);
+ 		clk_put(qcom->clks[i]);
+ 	}
+-reset_assert:
+-	reset_control_assert(qcom->resets);
+ 
+ 	return ret;
+ }
+@@ -857,8 +855,6 @@ static void dwc3_qcom_remove(struct platform_device *pdev)
+ 	qcom->num_clocks = 0;
+ 
+ 	dwc3_qcom_interconnect_exit(qcom);
+-	reset_control_assert(qcom->resets);
+-
+ 	pm_runtime_allow(dev);
+ 	pm_runtime_disable(dev);
+ }
+diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
+index fba2a56dae974c..f94ea196ce547b 100644
+--- a/drivers/usb/gadget/configfs.c
++++ b/drivers/usb/gadget/configfs.c
+@@ -1065,6 +1065,8 @@ static ssize_t webusb_landingPage_store(struct config_item *item, const char *pa
+ 	unsigned int bytes_to_strip = 0;
+ 	int l = len;
+ 
++	if (!len)
++		return len;
+ 	if (page[l - 1] == '\n') {
+ 		--l;
+ 		++bytes_to_strip;
+@@ -1188,6 +1190,8 @@ static ssize_t os_desc_qw_sign_store(struct config_item *item, const char *page,
+ 	struct gadget_info *gi = os_desc_item_to_gadget_info(item);
+ 	int res, l;
+ 
++	if (!len)
++		return len;
+ 	l = min_t(int, len, OS_STRING_QW_SIGN_LEN >> 1);
+ 	if (page[l - 1] == '\n')
+ 		--l;
+diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c
+index 6869c58367f2d0..caf4d4cd4b75b8 100644
+--- a/drivers/usb/musb/musb_gadget.c
++++ b/drivers/usb/musb/musb_gadget.c
+@@ -1913,6 +1913,7 @@ static int musb_gadget_stop(struct usb_gadget *g)
+ 	 * gadget driver here and have everything work;
+ 	 * that currently misbehaves.
+ 	 */
++	usb_gadget_set_state(g, USB_STATE_NOTATTACHED);
+ 
+ 	/* Force check of devctl register for PM runtime */
+ 	pm_runtime_mark_last_busy(musb->controller);
+@@ -2019,6 +2020,7 @@ void musb_g_disconnect(struct musb *musb)
+ 	case OTG_STATE_B_PERIPHERAL:
+ 	case OTG_STATE_B_IDLE:
+ 		musb_set_state(musb, OTG_STATE_B_IDLE);
++		usb_gadget_set_state(&musb->g, USB_STATE_NOTATTACHED);
+ 		break;
+ 	case OTG_STATE_B_SRP_INIT:
+ 		break;
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 6ac7a0a5cf074e..abfcfca3f9718b 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -803,6 +803,8 @@ static const struct usb_device_id id_table_combined[] = {
+ 		.driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_NDI_AURORA_SCU_PID),
+ 		.driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk },
++	{ USB_DEVICE(FTDI_NDI_VID, FTDI_NDI_EMGUIDE_GEMINI_PID),
++		.driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk },
+ 	{ USB_DEVICE(TELLDUS_VID, TELLDUS_TELLSTICK_PID) },
+ 	{ USB_DEVICE(NOVITUS_VID, NOVITUS_BONO_E_PID) },
+ 	{ USB_DEVICE(FTDI_VID, RTSYSTEMS_USB_VX8_PID) },
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 9acb6f83732763..4cc1fae8acb970 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -204,6 +204,9 @@
+ #define FTDI_NDI_FUTURE_3_PID		0xDA73	/* NDI future device #3 */
+ #define FTDI_NDI_AURORA_SCU_PID		0xDA74	/* NDI Aurora SCU */
+ 
++#define FTDI_NDI_VID			0x23F2
++#define FTDI_NDI_EMGUIDE_GEMINI_PID	0x0003	/* NDI Emguide Gemini */
++
+ /*
+  * ChamSys Limited (www.chamsys.co.uk) USB wing/interface product IDs
+  */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 27879cc575365c..147ca50c94beec 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1415,6 +1415,9 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(5) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x60) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10c7, 0xff, 0xff, 0x30),	/* Telit FE910C04 (ECM) */
++	  .driver_info = NCTRL(4) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10c7, 0xff, 0xff, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x30),	/* Telit FN990B (MBIM) */
+ 	  .driver_info = NCTRL(6) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x40) },
+@@ -2343,6 +2346,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe145, 0xff),			/* Foxconn T99W651 RNDIS */
+ 	  .driver_info = RSVD(5) | RSVD(6) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe167, 0xff),                     /* Foxconn T99W640 MBIM */
++	  .driver_info = RSVD(3) },
+ 	{ USB_DEVICE(0x1508, 0x1001),						/* Fibocom NL668 (IOT version) */
+ 	  .driver_info = RSVD(4) | RSVD(5) | RSVD(6) },
+ 	{ USB_DEVICE(0x1782, 0x4d10) },						/* Fibocom L610 (AT mode) */
+diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
+index c08e4a66ac07a7..3e0576d9db1d2f 100644
+--- a/fs/cachefiles/io.c
++++ b/fs/cachefiles/io.c
+@@ -347,8 +347,6 @@ int __cachefiles_write(struct cachefiles_object *object,
+ 	default:
+ 		ki->was_async = false;
+ 		cachefiles_write_complete(&ki->iocb, ret);
+-		if (ret > 0)
+-			ret = 0;
+ 		break;
+ 	}
+ 
+diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
+index d9bc6717612829..a7ed86fa98bb8c 100644
+--- a/fs/cachefiles/ondemand.c
++++ b/fs/cachefiles/ondemand.c
+@@ -83,10 +83,8 @@ static ssize_t cachefiles_ondemand_fd_write_iter(struct kiocb *kiocb,
+ 
+ 	trace_cachefiles_ondemand_fd_write(object, file_inode(file), pos, len);
+ 	ret = __cachefiles_write(object, file, pos, iter, NULL, NULL);
+-	if (!ret) {
+-		ret = len;
++	if (ret > 0)
+ 		kiocb->ki_pos += ret;
+-	}
+ 
+ out:
+ 	fput(file);
+diff --git a/fs/efivarfs/super.c b/fs/efivarfs/super.c
+index 0486e9b68bc6e2..f681814fe8bb03 100644
+--- a/fs/efivarfs/super.c
++++ b/fs/efivarfs/super.c
+@@ -387,10 +387,16 @@ static int efivarfs_reconfigure(struct fs_context *fc)
+ 	return 0;
+ }
+ 
++static void efivarfs_free(struct fs_context *fc)
++{
++	kfree(fc->s_fs_info);
++}
++
+ static const struct fs_context_operations efivarfs_context_ops = {
+ 	.get_tree	= efivarfs_get_tree,
+ 	.parse_param	= efivarfs_parse_param,
+ 	.reconfigure	= efivarfs_reconfigure,
++	.free		= efivarfs_free,
+ };
+ 
+ struct efivarfs_ctx {
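
Adding a ->free() hook to the fs_context operations lets the VFS core release s_fs_info on every context teardown path, including mount failures, instead of relying on the filesystem to remember. The minimal pairing, sketched (other ops elided):

static void my_fs_context_free(struct fs_context *fc)
{
	kfree(fc->s_fs_info);	/* allocated in init_fs_context() */
}

static const struct fs_context_operations my_context_ops = {
	.free	= my_fs_context_free,
};
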
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index d5da9817df9b36..33e6a620c103e0 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -1440,9 +1440,16 @@ static int isofs_read_inode(struct inode *inode, int relocated)
+ 		inode->i_op = &page_symlink_inode_operations;
+ 		inode_nohighmem(inode);
+ 		inode->i_data.a_ops = &isofs_symlink_aops;
+-	} else
++	} else if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode) ||
++		   S_ISFIFO(inode->i_mode) || S_ISSOCK(inode->i_mode)) {
+ 		/* XXX - parse_rock_ridge_inode() had already set i_rdev. */
+ 		init_special_inode(inode, inode->i_mode, inode->i_rdev);
++	} else {
++		printk(KERN_DEBUG "ISOFS: Invalid file type 0%04o for inode %lu.\n",
++			inode->i_mode, inode->i_ino);
++		ret = -EIO;
++		goto fail;
++	}
+ 
+ 	ret = 0;
+ out:
+diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c
+index 5bbe906a551d57..8097bc069c1de6 100644
+--- a/fs/netfs/read_pgpriv2.c
++++ b/fs/netfs/read_pgpriv2.c
+@@ -110,6 +110,8 @@ static struct netfs_io_request *netfs_pgpriv2_begin_copy_to_cache(
+ 	if (!creq->io_streams[1].avail)
+ 		goto cancel_put;
+ 
++	__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &creq->flags);
++	trace_netfs_copy2cache(rreq, creq);
+ 	trace_netfs_write(creq, netfs_write_trace_copy_to_cache);
+ 	netfs_stat(&netfs_n_wh_copy_to_cache);
+ 	rreq->copy_to_cache = creq;
+@@ -154,6 +156,9 @@ void netfs_pgpriv2_end_copy_to_cache(struct netfs_io_request *rreq)
+ 	netfs_issue_write(creq, &creq->io_streams[1]);
+ 	smp_wmb(); /* Write lists before ALL_QUEUED. */
+ 	set_bit(NETFS_RREQ_ALL_QUEUED, &creq->flags);
++	trace_netfs_rreq(rreq, netfs_rreq_trace_end_copy_to_cache);
++	if (list_empty_careful(&creq->io_streams[1].subrequests))
++		netfs_wake_collector(creq);
+ 
+ 	netfs_put_request(creq, netfs_rreq_trace_put_return);
+ 	creq->copy_to_cache = NULL;
+diff --git a/fs/notify/dnotify/dnotify.c b/fs/notify/dnotify/dnotify.c
+index c4cdaf5fa7eda6..9fb73bafd41d28 100644
+--- a/fs/notify/dnotify/dnotify.c
++++ b/fs/notify/dnotify/dnotify.c
+@@ -308,6 +308,10 @@ int fcntl_dirnotify(int fd, struct file *filp, unsigned int arg)
+ 		goto out_err;
+ 	}
+ 
++	error = file_f_owner_allocate(filp);
++	if (error)
++		goto out_err;
++
+ 	/* new fsnotify mark, we expect most fcntl calls to add a new mark */
+ 	new_dn_mark = kmem_cache_alloc(dnotify_mark_cache, GFP_KERNEL);
+ 	if (!new_dn_mark) {
+@@ -315,10 +319,6 @@ int fcntl_dirnotify(int fd, struct file *filp, unsigned int arg)
+ 		goto out_err;
+ 	}
+ 
+-	error = file_f_owner_allocate(filp);
+-	if (error)
+-		goto out_err;
+-
+ 	/* set up the new_fsn_mark and new_dn_mark */
+ 	new_fsn_mark = &new_dn_mark->fsn_mark;
+ 	fsnotify_init_mark(new_fsn_mark, dnotify_group);
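
file_f_owner_allocate() moves ahead of the mark allocation, so the call that can fail independently of fsnotify state now fails before any mark exists and the error path has nothing extra to unwind. The reordered sequence, with the rationale as comments:

/* Do fallible setup that needs no rollback before allocations the
 * error path would otherwise have to free. */
error = file_f_owner_allocate(filp);	/* nothing to undo on failure */
if (error)
	goto out_err;

new_dn_mark = kmem_cache_alloc(dnotify_mark_cache, GFP_KERNEL);
if (!new_dn_mark) {
	error = -ENOMEM;
	goto out_err;			/* still nothing extra to free */
}
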
+diff --git a/fs/smb/client/cifs_debug.c b/fs/smb/client/cifs_debug.c
+index e03c890de0a068..c0196be0e65fc0 100644
+--- a/fs/smb/client/cifs_debug.c
++++ b/fs/smb/client/cifs_debug.c
+@@ -362,6 +362,10 @@ static int cifs_debug_data_proc_show(struct seq_file *m, void *v)
+ 	c = 0;
+ 	spin_lock(&cifs_tcp_ses_lock);
+ 	list_for_each_entry(server, &cifs_tcp_ses_list, tcp_ses_list) {
++#ifdef CONFIG_CIFS_SMB_DIRECT
++		struct smbdirect_socket_parameters *sp;
++#endif
++
+ 		/* channel info will be printed as a part of sessions below */
+ 		if (SERVER_IS_CHAN(server))
+ 			continue;
+@@ -383,25 +387,26 @@ static int cifs_debug_data_proc_show(struct seq_file *m, void *v)
+ 			seq_printf(m, "\nSMBDirect transport not available");
+ 			goto skip_rdma;
+ 		}
++		sp = &server->smbd_conn->socket.parameters;
+ 
+ 		seq_printf(m, "\nSMBDirect (in hex) protocol version: %x "
+ 			"transport status: %x",
+ 			server->smbd_conn->protocol,
+-			server->smbd_conn->transport_status);
++			server->smbd_conn->socket.status);
+ 		seq_printf(m, "\nConn receive_credit_max: %x "
+ 			"send_credit_target: %x max_send_size: %x",
+-			server->smbd_conn->receive_credit_max,
+-			server->smbd_conn->send_credit_target,
+-			server->smbd_conn->max_send_size);
++			sp->recv_credit_max,
++			sp->send_credit_target,
++			sp->max_send_size);
+ 		seq_printf(m, "\nConn max_fragmented_recv_size: %x "
+ 			"max_fragmented_send_size: %x max_receive_size:%x",
+-			server->smbd_conn->max_fragmented_recv_size,
+-			server->smbd_conn->max_fragmented_send_size,
+-			server->smbd_conn->max_receive_size);
++			sp->max_fragmented_recv_size,
++			sp->max_fragmented_send_size,
++			sp->max_recv_size);
+ 		seq_printf(m, "\nConn keep_alive_interval: %x "
+ 			"max_readwrite_size: %x rdma_readwrite_threshold: %x",
+-			server->smbd_conn->keep_alive_interval,
+-			server->smbd_conn->max_readwrite_size,
++			sp->keepalive_interval_msec * 1000,
++			sp->max_read_write_size,
+ 			server->smbd_conn->rdma_readwrite_threshold);
+ 		seq_printf(m, "\nDebug count_get_receive_buffer: %x "
+ 			"count_put_receive_buffer: %x count_send_empty: %x",
+diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
+index 9835672267d277..3819378b10129d 100644
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -3084,7 +3084,8 @@ void cifs_oplock_break(struct work_struct *work)
+ 	struct cifsFileInfo *cfile = container_of(work, struct cifsFileInfo,
+ 						  oplock_break);
+ 	struct inode *inode = d_inode(cfile->dentry);
+-	struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
++	struct super_block *sb = inode->i_sb;
++	struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
+ 	struct cifsInodeInfo *cinode = CIFS_I(inode);
+ 	struct cifs_tcon *tcon;
+ 	struct TCP_Server_Info *server;
+@@ -3094,6 +3095,12 @@ void cifs_oplock_break(struct work_struct *work)
+ 	__u64 persistent_fid, volatile_fid;
+ 	__u16 net_fid;
+ 
++	/*
++	 * Hold a reference to the superblock to prevent it and its inodes from
++	 * being freed while we are accessing cinode. Otherwise, _cifsFileInfo_put()
++	 * may release the last reference to the sb and trigger inode eviction.
++	 */
++	cifs_sb_active(sb);
+ 	wait_on_bit(&cinode->flags, CIFS_INODE_PENDING_WRITERS,
+ 			TASK_UNINTERRUPTIBLE);
+ 
+@@ -3166,6 +3173,7 @@ void cifs_oplock_break(struct work_struct *work)
+ 	cifs_put_tlink(tlink);
+ out:
+ 	cifs_done_oplock_break(cinode);
++	cifs_sb_deactive(sb);
+ }
+ 
+ static int cifs_swap_activate(struct swap_info_struct *sis,
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index 2a3e46b8e15af6..a11a2a693c5194 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -1346,7 +1346,8 @@ struct inode *smb2_get_reparse_inode(struct cifs_open_info_data *data,
+ 	 * empty object on the server.
+ 	 */
+ 	if (!(le32_to_cpu(tcon->fsAttrInfo.Attributes) & FILE_SUPPORTS_REPARSE_POINTS))
+-		return ERR_PTR(-EOPNOTSUPP);
++		if (!tcon->posix_extensions)
++			return ERR_PTR(-EOPNOTSUPP);
+ 
+ 	oparms = CIFS_OPARMS(cifs_sb, tcon, full_path,
+ 			     SYNCHRONIZE | DELETE |
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 2fe8eeb9853563..fcbc4048f10644 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -504,6 +504,9 @@ smb3_negotiate_wsize(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ 	wsize = min_t(unsigned int, wsize, server->max_write);
+ #ifdef CONFIG_CIFS_SMB_DIRECT
+ 	if (server->rdma) {
++		struct smbdirect_socket_parameters *sp =
++			&server->smbd_conn->socket.parameters;
++
+ 		if (server->sign)
+ 			/*
+ 			 * Account for SMB2 data transfer packet header and
+@@ -511,12 +514,12 @@ smb3_negotiate_wsize(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ 			 */
+ 			wsize = min_t(unsigned int,
+ 				wsize,
+-				server->smbd_conn->max_fragmented_send_size -
++				sp->max_fragmented_send_size -
+ 					SMB2_READWRITE_PDU_HEADER_SIZE -
+ 					sizeof(struct smb2_transform_hdr));
+ 		else
+ 			wsize = min_t(unsigned int,
+-				wsize, server->smbd_conn->max_readwrite_size);
++				wsize, sp->max_read_write_size);
+ 	}
+ #endif
+ 	if (!(server->capabilities & SMB2_GLOBAL_CAP_LARGE_MTU))
+@@ -552,6 +555,9 @@ smb3_negotiate_rsize(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ 	rsize = min_t(unsigned int, rsize, server->max_read);
+ #ifdef CONFIG_CIFS_SMB_DIRECT
+ 	if (server->rdma) {
++		struct smbdirect_socket_parameters *sp =
++			&server->smbd_conn->socket.parameters;
++
+ 		if (server->sign)
+ 			/*
+ 			 * Account for SMB2 data transfer packet header and
+@@ -559,12 +565,12 @@ smb3_negotiate_rsize(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ 			 */
+ 			rsize = min_t(unsigned int,
+ 				rsize,
+-				server->smbd_conn->max_fragmented_recv_size -
++				sp->max_fragmented_recv_size -
+ 					SMB2_READWRITE_PDU_HEADER_SIZE -
+ 					sizeof(struct smb2_transform_hdr));
+ 		else
+ 			rsize = min_t(unsigned int,
+-				rsize, server->smbd_conn->max_readwrite_size);
++				rsize, sp->max_read_write_size);
+ 	}
+ #endif
+ 
+@@ -4307,6 +4313,7 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
+ 	u8 key[SMB3_ENC_DEC_KEY_SIZE];
+ 	struct aead_request *req;
+ 	u8 *iv;
++	DECLARE_CRYPTO_WAIT(wait);
+ 	unsigned int crypt_len = le32_to_cpu(tr_hdr->OriginalMessageSize);
+ 	void *creq;
+ 	size_t sensitive_size;
+@@ -4357,7 +4364,11 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
+ 	aead_request_set_crypt(req, sg, sg, crypt_len, iv);
+ 	aead_request_set_ad(req, assoc_data_len);
+ 
+-	rc = enc ? crypto_aead_encrypt(req) : crypto_aead_decrypt(req);
++	aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
++				  crypto_req_done, &wait);
++
++	rc = crypto_wait_req(enc ? crypto_aead_encrypt(req)
++				: crypto_aead_decrypt(req), &wait);
+ 
+ 	if (!rc && enc)
+ 		memcpy(&tr_hdr->Signature, sign, SMB2_SIGNATURE_SIZE);
+@@ -5246,7 +5257,8 @@ static int smb2_make_node(unsigned int xid, struct inode *inode,
+ 	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL) {
+ 		rc = cifs_sfu_make_node(xid, inode, dentry, tcon,
+ 					full_path, mode, dev);
+-	} else if (le32_to_cpu(tcon->fsAttrInfo.Attributes) & FILE_SUPPORTS_REPARSE_POINTS) {
++	} else if ((le32_to_cpu(tcon->fsAttrInfo.Attributes) & FILE_SUPPORTS_REPARSE_POINTS)
++		|| (tcon->posix_extensions)) {
+ 		rc = smb2_mknod_reparse(xid, inode, dentry, tcon,
+ 					full_path, mode, dev);
+ 	}
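
crypt_message() previously used the return value of crypto_aead_encrypt()/crypto_aead_decrypt() directly, which is wrong for asynchronous AEAD implementations that return -EINPROGRESS or -EBUSY. The hunk adds the kernel's standard synchronous-wait pattern; in isolation:

DECLARE_CRYPTO_WAIT(wait);

aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
			  crypto_req_done, &wait);

/* crypto_wait_req() returns immediately for sync TFMs; for async ones
 * it sleeps until crypto_req_done() fires and yields the real rc. */
rc = crypto_wait_req(enc ? crypto_aead_encrypt(req)
			 : crypto_aead_decrypt(req), &wait);
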
+diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
+index 9d8be034f103f2..754e94a0e07f50 100644
+--- a/fs/smb/client/smbdirect.c
++++ b/fs/smb/client/smbdirect.c
+@@ -7,6 +7,7 @@
+ #include <linux/module.h>
+ #include <linux/highmem.h>
+ #include <linux/folio_queue.h>
++#include "../common/smbdirect/smbdirect_pdu.h"
+ #include "smbdirect.h"
+ #include "cifs_debug.h"
+ #include "cifsproto.h"
+@@ -50,9 +51,6 @@ struct smb_extract_to_rdma {
+ static ssize_t smb_extract_iter_to_rdma(struct iov_iter *iter, size_t len,
+ 					struct smb_extract_to_rdma *rdma);
+ 
+-/* SMBD version number */
+-#define SMBD_V1	0x0100
+-
+ /* Port numbers for SMBD transport */
+ #define SMB_PORT	445
+ #define SMBD_PORT	5445
+@@ -165,10 +163,11 @@ static void smbd_disconnect_rdma_work(struct work_struct *work)
+ {
+ 	struct smbd_connection *info =
+ 		container_of(work, struct smbd_connection, disconnect_work);
++	struct smbdirect_socket *sc = &info->socket;
+ 
+-	if (info->transport_status == SMBD_CONNECTED) {
+-		info->transport_status = SMBD_DISCONNECTING;
+-		rdma_disconnect(info->id);
++	if (sc->status == SMBDIRECT_SOCKET_CONNECTED) {
++		sc->status = SMBDIRECT_SOCKET_DISCONNECTING;
++		rdma_disconnect(sc->rdma.cm_id);
+ 	}
+ }
+ 
+@@ -182,6 +181,7 @@ static int smbd_conn_upcall(
+ 		struct rdma_cm_id *id, struct rdma_cm_event *event)
+ {
+ 	struct smbd_connection *info = id->context;
++	struct smbdirect_socket *sc = &info->socket;
+ 
+ 	log_rdma_event(INFO, "event=%d status=%d\n",
+ 		event->event, event->status);
+@@ -205,7 +205,7 @@ static int smbd_conn_upcall(
+ 
+ 	case RDMA_CM_EVENT_ESTABLISHED:
+ 		log_rdma_event(INFO, "connected event=%d\n", event->event);
+-		info->transport_status = SMBD_CONNECTED;
++		sc->status = SMBDIRECT_SOCKET_CONNECTED;
+ 		wake_up_interruptible(&info->conn_wait);
+ 		break;
+ 
+@@ -213,20 +213,20 @@ static int smbd_conn_upcall(
+ 	case RDMA_CM_EVENT_UNREACHABLE:
+ 	case RDMA_CM_EVENT_REJECTED:
+ 		log_rdma_event(INFO, "connecting failed event=%d\n", event->event);
+-		info->transport_status = SMBD_DISCONNECTED;
++		sc->status = SMBDIRECT_SOCKET_DISCONNECTED;
+ 		wake_up_interruptible(&info->conn_wait);
+ 		break;
+ 
+ 	case RDMA_CM_EVENT_DEVICE_REMOVAL:
+ 	case RDMA_CM_EVENT_DISCONNECTED:
+ 		/* This happens when we fail the negotiation */
+-		if (info->transport_status == SMBD_NEGOTIATE_FAILED) {
+-			info->transport_status = SMBD_DISCONNECTED;
++		if (sc->status == SMBDIRECT_SOCKET_NEGOTIATE_FAILED) {
++			sc->status = SMBDIRECT_SOCKET_DISCONNECTED;
+ 			wake_up(&info->conn_wait);
+ 			break;
+ 		}
+ 
+-		info->transport_status = SMBD_DISCONNECTED;
++		sc->status = SMBDIRECT_SOCKET_DISCONNECTED;
+ 		wake_up_interruptible(&info->disconn_wait);
+ 		wake_up_interruptible(&info->wait_reassembly_queue);
+ 		wake_up_interruptible_all(&info->wait_send_queue);
+@@ -275,6 +275,8 @@ static void send_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	int i;
+ 	struct smbd_request *request =
+ 		container_of(wc->wr_cqe, struct smbd_request, cqe);
++	struct smbd_connection *info = request->info;
++	struct smbdirect_socket *sc = &info->socket;
+ 
+ 	log_rdma_send(INFO, "smbd_request 0x%p completed wc->status=%d\n",
+ 		request, wc->status);
+@@ -286,7 +288,7 @@ static void send_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	}
+ 
+ 	for (i = 0; i < request->num_sge; i++)
+-		ib_dma_unmap_single(request->info->id->device,
++		ib_dma_unmap_single(sc->ib.dev,
+ 			request->sge[i].addr,
+ 			request->sge[i].length,
+ 			DMA_TO_DEVICE);
+@@ -299,7 +301,7 @@ static void send_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	mempool_free(request, request->info->request_mempool);
+ }
+ 
+-static void dump_smbd_negotiate_resp(struct smbd_negotiate_resp *resp)
++static void dump_smbdirect_negotiate_resp(struct smbdirect_negotiate_resp *resp)
+ {
+ 	log_rdma_event(INFO, "resp message min_version %u max_version %u negotiated_version %u credits_requested %u credits_granted %u status %u max_readwrite_size %u preferred_send_size %u max_receive_size %u max_fragmented_size %u\n",
+ 		       resp->min_version, resp->max_version,
+@@ -318,15 +320,17 @@ static bool process_negotiation_response(
+ 		struct smbd_response *response, int packet_length)
+ {
+ 	struct smbd_connection *info = response->info;
+-	struct smbd_negotiate_resp *packet = smbd_response_payload(response);
++	struct smbdirect_socket *sc = &info->socket;
++	struct smbdirect_socket_parameters *sp = &sc->parameters;
++	struct smbdirect_negotiate_resp *packet = smbd_response_payload(response);
+ 
+-	if (packet_length < sizeof(struct smbd_negotiate_resp)) {
++	if (packet_length < sizeof(struct smbdirect_negotiate_resp)) {
+ 		log_rdma_event(ERR,
+ 			"error: packet_length=%d\n", packet_length);
+ 		return false;
+ 	}
+ 
+-	if (le16_to_cpu(packet->negotiated_version) != SMBD_V1) {
++	if (le16_to_cpu(packet->negotiated_version) != SMBDIRECT_V1) {
+ 		log_rdma_event(ERR, "error: negotiated_version=%x\n",
+ 			le16_to_cpu(packet->negotiated_version));
+ 		return false;
+@@ -347,20 +351,20 @@ static bool process_negotiation_response(
+ 
+ 	atomic_set(&info->receive_credits, 0);
+ 
+-	if (le32_to_cpu(packet->preferred_send_size) > info->max_receive_size) {
++	if (le32_to_cpu(packet->preferred_send_size) > sp->max_recv_size) {
+ 		log_rdma_event(ERR, "error: preferred_send_size=%d\n",
+ 			le32_to_cpu(packet->preferred_send_size));
+ 		return false;
+ 	}
+-	info->max_receive_size = le32_to_cpu(packet->preferred_send_size);
++	sp->max_recv_size = le32_to_cpu(packet->preferred_send_size);
+ 
+ 	if (le32_to_cpu(packet->max_receive_size) < SMBD_MIN_RECEIVE_SIZE) {
+ 		log_rdma_event(ERR, "error: max_receive_size=%d\n",
+ 			le32_to_cpu(packet->max_receive_size));
+ 		return false;
+ 	}
+-	info->max_send_size = min_t(int, info->max_send_size,
+-					le32_to_cpu(packet->max_receive_size));
++	sp->max_send_size = min_t(u32, sp->max_send_size,
++				  le32_to_cpu(packet->max_receive_size));
+ 
+ 	if (le32_to_cpu(packet->max_fragmented_size) <
+ 			SMBD_MIN_FRAGMENTED_SIZE) {
+@@ -368,18 +372,18 @@ static bool process_negotiation_response(
+ 			le32_to_cpu(packet->max_fragmented_size));
+ 		return false;
+ 	}
+-	info->max_fragmented_send_size =
++	sp->max_fragmented_send_size =
+ 		le32_to_cpu(packet->max_fragmented_size);
+ 	info->rdma_readwrite_threshold =
+-		rdma_readwrite_threshold > info->max_fragmented_send_size ?
+-		info->max_fragmented_send_size :
++		rdma_readwrite_threshold > sp->max_fragmented_send_size ?
++		sp->max_fragmented_send_size :
+ 		rdma_readwrite_threshold;
+ 
+ 
+-	info->max_readwrite_size = min_t(u32,
++	sp->max_read_write_size = min_t(u32,
+ 			le32_to_cpu(packet->max_readwrite_size),
+ 			info->max_frmr_depth * PAGE_SIZE);
+-	info->max_frmr_depth = info->max_readwrite_size / PAGE_SIZE;
++	info->max_frmr_depth = sp->max_read_write_size / PAGE_SIZE;
+ 
+ 	return true;
+ }
+@@ -393,8 +397,9 @@ static void smbd_post_send_credits(struct work_struct *work)
+ 	struct smbd_connection *info =
+ 		container_of(work, struct smbd_connection,
+ 			post_send_credits_work);
++	struct smbdirect_socket *sc = &info->socket;
+ 
+-	if (info->transport_status != SMBD_CONNECTED) {
++	if (sc->status != SMBDIRECT_SOCKET_CONNECTED) {
+ 		wake_up(&info->wait_receive_queues);
+ 		return;
+ 	}
+@@ -448,7 +453,7 @@ static void smbd_post_send_credits(struct work_struct *work)
+ /* Called from softirq, when recv is done */
+ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ {
+-	struct smbd_data_transfer *data_transfer;
++	struct smbdirect_data_transfer *data_transfer;
+ 	struct smbd_response *response =
+ 		container_of(wc->wr_cqe, struct smbd_response, cqe);
+ 	struct smbd_connection *info = response->info;
+@@ -474,7 +479,7 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	switch (response->type) {
+ 	/* SMBD negotiation response */
+ 	case SMBD_NEGOTIATE_RESP:
+-		dump_smbd_negotiate_resp(smbd_response_payload(response));
++		dump_smbdirect_negotiate_resp(smbd_response_payload(response));
+ 		info->full_packet_received = true;
+ 		info->negotiate_done =
+ 			process_negotiation_response(response, wc->byte_len);
+@@ -531,7 +536,7 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		/* Send a KEEP_ALIVE response right away if requested */
+ 		info->keep_alive_requested = KEEP_ALIVE_NONE;
+ 		if (le16_to_cpu(data_transfer->flags) &
+-				SMB_DIRECT_RESPONSE_REQUESTED) {
++				SMBDIRECT_FLAG_RESPONSE_REQUESTED) {
+ 			info->keep_alive_requested = KEEP_ALIVE_PENDING;
+ 		}
+ 
+@@ -635,32 +640,34 @@ static int smbd_ia_open(
+ 		struct smbd_connection *info,
+ 		struct sockaddr *dstaddr, int port)
+ {
++	struct smbdirect_socket *sc = &info->socket;
+ 	int rc;
+ 
+-	info->id = smbd_create_id(info, dstaddr, port);
+-	if (IS_ERR(info->id)) {
+-		rc = PTR_ERR(info->id);
++	sc->rdma.cm_id = smbd_create_id(info, dstaddr, port);
++	if (IS_ERR(sc->rdma.cm_id)) {
++		rc = PTR_ERR(sc->rdma.cm_id);
+ 		goto out1;
+ 	}
++	sc->ib.dev = sc->rdma.cm_id->device;
+ 
+-	if (!frwr_is_supported(&info->id->device->attrs)) {
++	if (!frwr_is_supported(&sc->ib.dev->attrs)) {
+ 		log_rdma_event(ERR, "Fast Registration Work Requests (FRWR) is not supported\n");
+ 		log_rdma_event(ERR, "Device capability flags = %llx max_fast_reg_page_list_len = %u\n",
+-			       info->id->device->attrs.device_cap_flags,
+-			       info->id->device->attrs.max_fast_reg_page_list_len);
++			       sc->ib.dev->attrs.device_cap_flags,
++			       sc->ib.dev->attrs.max_fast_reg_page_list_len);
+ 		rc = -EPROTONOSUPPORT;
+ 		goto out2;
+ 	}
+ 	info->max_frmr_depth = min_t(int,
+ 		smbd_max_frmr_depth,
+-		info->id->device->attrs.max_fast_reg_page_list_len);
++		sc->ib.dev->attrs.max_fast_reg_page_list_len);
+ 	info->mr_type = IB_MR_TYPE_MEM_REG;
+-	if (info->id->device->attrs.kernel_cap_flags & IBK_SG_GAPS_REG)
++	if (sc->ib.dev->attrs.kernel_cap_flags & IBK_SG_GAPS_REG)
+ 		info->mr_type = IB_MR_TYPE_SG_GAPS;
+ 
+-	info->pd = ib_alloc_pd(info->id->device, 0);
+-	if (IS_ERR(info->pd)) {
+-		rc = PTR_ERR(info->pd);
++	sc->ib.pd = ib_alloc_pd(sc->ib.dev, 0);
++	if (IS_ERR(sc->ib.pd)) {
++		rc = PTR_ERR(sc->ib.pd);
+ 		log_rdma_event(ERR, "ib_alloc_pd() returned %d\n", rc);
+ 		goto out2;
+ 	}
+@@ -668,8 +675,8 @@ static int smbd_ia_open(
+ 	return 0;
+ 
+ out2:
+-	rdma_destroy_id(info->id);
+-	info->id = NULL;
++	rdma_destroy_id(sc->rdma.cm_id);
++	sc->rdma.cm_id = NULL;
+ 
+ out1:
+ 	return rc;
+@@ -683,10 +690,12 @@ static int smbd_ia_open(
+  */
+ static int smbd_post_send_negotiate_req(struct smbd_connection *info)
+ {
++	struct smbdirect_socket *sc = &info->socket;
++	struct smbdirect_socket_parameters *sp = &sc->parameters;
+ 	struct ib_send_wr send_wr;
+ 	int rc = -ENOMEM;
+ 	struct smbd_request *request;
+-	struct smbd_negotiate_req *packet;
++	struct smbdirect_negotiate_req *packet;
+ 
+ 	request = mempool_alloc(info->request_mempool, GFP_KERNEL);
+ 	if (!request)
+@@ -695,29 +704,29 @@ static int smbd_post_send_negotiate_req(struct smbd_connection *info)
+ 	request->info = info;
+ 
+ 	packet = smbd_request_payload(request);
+-	packet->min_version = cpu_to_le16(SMBD_V1);
+-	packet->max_version = cpu_to_le16(SMBD_V1);
++	packet->min_version = cpu_to_le16(SMBDIRECT_V1);
++	packet->max_version = cpu_to_le16(SMBDIRECT_V1);
+ 	packet->reserved = 0;
+-	packet->credits_requested = cpu_to_le16(info->send_credit_target);
+-	packet->preferred_send_size = cpu_to_le32(info->max_send_size);
+-	packet->max_receive_size = cpu_to_le32(info->max_receive_size);
++	packet->credits_requested = cpu_to_le16(sp->send_credit_target);
++	packet->preferred_send_size = cpu_to_le32(sp->max_send_size);
++	packet->max_receive_size = cpu_to_le32(sp->max_recv_size);
+ 	packet->max_fragmented_size =
+-		cpu_to_le32(info->max_fragmented_recv_size);
++		cpu_to_le32(sp->max_fragmented_recv_size);
+ 
+ 	request->num_sge = 1;
+ 	request->sge[0].addr = ib_dma_map_single(
+-				info->id->device, (void *)packet,
++				sc->ib.dev, (void *)packet,
+ 				sizeof(*packet), DMA_TO_DEVICE);
+-	if (ib_dma_mapping_error(info->id->device, request->sge[0].addr)) {
++	if (ib_dma_mapping_error(sc->ib.dev, request->sge[0].addr)) {
+ 		rc = -EIO;
+ 		goto dma_mapping_failed;
+ 	}
+ 
+ 	request->sge[0].length = sizeof(*packet);
+-	request->sge[0].lkey = info->pd->local_dma_lkey;
++	request->sge[0].lkey = sc->ib.pd->local_dma_lkey;
+ 
+ 	ib_dma_sync_single_for_device(
+-		info->id->device, request->sge[0].addr,
++		sc->ib.dev, request->sge[0].addr,
+ 		request->sge[0].length, DMA_TO_DEVICE);
+ 
+ 	request->cqe.done = send_done;
+@@ -734,14 +743,14 @@ static int smbd_post_send_negotiate_req(struct smbd_connection *info)
+ 		request->sge[0].length, request->sge[0].lkey);
+ 
+ 	atomic_inc(&info->send_pending);
+-	rc = ib_post_send(info->id->qp, &send_wr, NULL);
++	rc = ib_post_send(sc->ib.qp, &send_wr, NULL);
+ 	if (!rc)
+ 		return 0;
+ 
+ 	/* if we reach here, post send failed */
+ 	log_rdma_send(ERR, "ib_post_send failed rc=%d\n", rc);
+ 	atomic_dec(&info->send_pending);
+-	ib_dma_unmap_single(info->id->device, request->sge[0].addr,
++	ib_dma_unmap_single(sc->ib.dev, request->sge[0].addr,
+ 		request->sge[0].length, DMA_TO_DEVICE);
+ 
+ 	smbd_disconnect_rdma_connection(info);
+@@ -774,10 +783,10 @@ static int manage_credits_prior_sending(struct smbd_connection *info)
+ /*
+  * Check if we need to send a KEEP_ALIVE message
+  * The idle connection timer triggers a KEEP_ALIVE message when expires
+- * SMB_DIRECT_RESPONSE_REQUESTED is set in the message flag to have peer send
++ * SMBDIRECT_FLAG_RESPONSE_REQUESTED is set in the message flag to have peer send
+  * back a response.
+  * return value:
+- * 1 if SMB_DIRECT_RESPONSE_REQUESTED needs to be set
++ * 1 if SMBDIRECT_FLAG_RESPONSE_REQUESTED needs to be set
+  * 0: otherwise
+  */
+ static int manage_keep_alive_before_sending(struct smbd_connection *info)
+@@ -793,6 +802,8 @@ static int manage_keep_alive_before_sending(struct smbd_connection *info)
+ static int smbd_post_send(struct smbd_connection *info,
+ 		struct smbd_request *request)
+ {
++	struct smbdirect_socket *sc = &info->socket;
++	struct smbdirect_socket_parameters *sp = &sc->parameters;
+ 	struct ib_send_wr send_wr;
+ 	int rc, i;
+ 
+@@ -801,7 +812,7 @@ static int smbd_post_send(struct smbd_connection *info,
+ 			"rdma_request sge[%d] addr=0x%llx length=%u\n",
+ 			i, request->sge[i].addr, request->sge[i].length);
+ 		ib_dma_sync_single_for_device(
+-			info->id->device,
++			sc->ib.dev,
+ 			request->sge[i].addr,
+ 			request->sge[i].length,
+ 			DMA_TO_DEVICE);
+@@ -816,7 +827,7 @@ static int smbd_post_send(struct smbd_connection *info,
+ 	send_wr.opcode = IB_WR_SEND;
+ 	send_wr.send_flags = IB_SEND_SIGNALED;
+ 
+-	rc = ib_post_send(info->id->qp, &send_wr, NULL);
++	rc = ib_post_send(sc->ib.qp, &send_wr, NULL);
+ 	if (rc) {
+ 		log_rdma_send(ERR, "ib_post_send failed rc=%d\n", rc);
+ 		smbd_disconnect_rdma_connection(info);
+@@ -824,7 +835,7 @@ static int smbd_post_send(struct smbd_connection *info,
+ 	} else
+ 		/* Reset timer for idle connection after packet is sent */
+ 		mod_delayed_work(info->workqueue, &info->idle_timer_work,
+-			info->keep_alive_interval*HZ);
++			msecs_to_jiffies(sp->keepalive_interval_msec));
+ 
+ 	return rc;
+ }
+@@ -833,22 +844,24 @@ static int smbd_post_send_iter(struct smbd_connection *info,
+ 			       struct iov_iter *iter,
+ 			       int *_remaining_data_length)
+ {
++	struct smbdirect_socket *sc = &info->socket;
++	struct smbdirect_socket_parameters *sp = &sc->parameters;
+ 	int i, rc;
+ 	int header_length;
+ 	int data_length;
+ 	struct smbd_request *request;
+-	struct smbd_data_transfer *packet;
++	struct smbdirect_data_transfer *packet;
+ 	int new_credits = 0;
+ 
+ wait_credit:
+ 	/* Wait for send credits. A SMBD packet needs one credit */
+ 	rc = wait_event_interruptible(info->wait_send_queue,
+ 		atomic_read(&info->send_credits) > 0 ||
+-		info->transport_status != SMBD_CONNECTED);
++		sc->status != SMBDIRECT_SOCKET_CONNECTED);
+ 	if (rc)
+ 		goto err_wait_credit;
+ 
+-	if (info->transport_status != SMBD_CONNECTED) {
++	if (sc->status != SMBDIRECT_SOCKET_CONNECTED) {
+ 		log_outgoing(ERR, "disconnected not sending on wait_credit\n");
+ 		rc = -EAGAIN;
+ 		goto err_wait_credit;
+@@ -860,17 +873,17 @@ static int smbd_post_send_iter(struct smbd_connection *info,
+ 
+ wait_send_queue:
+ 	wait_event(info->wait_post_send,
+-		atomic_read(&info->send_pending) < info->send_credit_target ||
+-		info->transport_status != SMBD_CONNECTED);
++		atomic_read(&info->send_pending) < sp->send_credit_target ||
++		sc->status != SMBDIRECT_SOCKET_CONNECTED);
+ 
+-	if (info->transport_status != SMBD_CONNECTED) {
++	if (sc->status != SMBDIRECT_SOCKET_CONNECTED) {
+ 		log_outgoing(ERR, "disconnected not sending on wait_send_queue\n");
+ 		rc = -EAGAIN;
+ 		goto err_wait_send_queue;
+ 	}
+ 
+ 	if (unlikely(atomic_inc_return(&info->send_pending) >
+-				info->send_credit_target)) {
++				sp->send_credit_target)) {
+ 		atomic_dec(&info->send_pending);
+ 		goto wait_send_queue;
+ 	}
+@@ -890,12 +903,14 @@ static int smbd_post_send_iter(struct smbd_connection *info,
+ 			.nr_sge		= 1,
+ 			.max_sge	= SMBDIRECT_MAX_SEND_SGE,
+ 			.sge		= request->sge,
+-			.device		= info->id->device,
+-			.local_dma_lkey	= info->pd->local_dma_lkey,
++			.device		= sc->ib.dev,
++			.local_dma_lkey	= sc->ib.pd->local_dma_lkey,
+ 			.direction	= DMA_TO_DEVICE,
+ 		};
++		size_t payload_len = umin(*_remaining_data_length,
++					  sp->max_send_size - sizeof(*packet));
+ 
+-		rc = smb_extract_iter_to_rdma(iter, *_remaining_data_length,
++		rc = smb_extract_iter_to_rdma(iter, payload_len,
+ 					      &extract);
+ 		if (rc < 0)
+ 			goto err_dma;
+@@ -909,7 +924,7 @@ static int smbd_post_send_iter(struct smbd_connection *info,
+ 
+ 	/* Fill in the packet header */
+ 	packet = smbd_request_payload(request);
+-	packet->credits_requested = cpu_to_le16(info->send_credit_target);
++	packet->credits_requested = cpu_to_le16(sp->send_credit_target);
+ 
+ 	new_credits = manage_credits_prior_sending(info);
+ 	atomic_add(new_credits, &info->receive_credits);
+@@ -919,7 +934,7 @@ static int smbd_post_send_iter(struct smbd_connection *info,
+ 
+ 	packet->flags = 0;
+ 	if (manage_keep_alive_before_sending(info))
+-		packet->flags |= cpu_to_le16(SMB_DIRECT_RESPONSE_REQUESTED);
++		packet->flags |= cpu_to_le16(SMBDIRECT_FLAG_RESPONSE_REQUESTED);
+ 
+ 	packet->reserved = 0;
+ 	if (!data_length)
+@@ -938,23 +953,23 @@ static int smbd_post_send_iter(struct smbd_connection *info,
+ 		     le32_to_cpu(packet->remaining_data_length));
+ 
+ 	/* Map the packet to DMA */
+-	header_length = sizeof(struct smbd_data_transfer);
++	header_length = sizeof(struct smbdirect_data_transfer);
+ 	/* If this is a packet without payload, don't send padding */
+ 	if (!data_length)
+-		header_length = offsetof(struct smbd_data_transfer, padding);
++		header_length = offsetof(struct smbdirect_data_transfer, padding);
+ 
+-	request->sge[0].addr = ib_dma_map_single(info->id->device,
++	request->sge[0].addr = ib_dma_map_single(sc->ib.dev,
+ 						 (void *)packet,
+ 						 header_length,
+ 						 DMA_TO_DEVICE);
+-	if (ib_dma_mapping_error(info->id->device, request->sge[0].addr)) {
++	if (ib_dma_mapping_error(sc->ib.dev, request->sge[0].addr)) {
+ 		rc = -EIO;
+ 		request->sge[0].addr = 0;
+ 		goto err_dma;
+ 	}
+ 
+ 	request->sge[0].length = header_length;
+-	request->sge[0].lkey = info->pd->local_dma_lkey;
++	request->sge[0].lkey = sc->ib.pd->local_dma_lkey;
+ 
+ 	rc = smbd_post_send(info, request);
+ 	if (!rc)
+@@ -963,7 +978,7 @@ static int smbd_post_send_iter(struct smbd_connection *info,
+ err_dma:
+ 	for (i = 0; i < request->num_sge; i++)
+ 		if (request->sge[i].addr)
+-			ib_dma_unmap_single(info->id->device,
++			ib_dma_unmap_single(sc->ib.dev,
+ 					    request->sge[i].addr,
+ 					    request->sge[i].length,
+ 					    DMA_TO_DEVICE);
+@@ -1000,6 +1015,27 @@ static int smbd_post_send_empty(struct smbd_connection *info)
+ 	return smbd_post_send_iter(info, NULL, &remaining_data_length);
+ }
+ 
++static int smbd_post_send_full_iter(struct smbd_connection *info,
++				    struct iov_iter *iter,
++				    int *_remaining_data_length)
++{
++	int rc = 0;
++
++	/*
++	 * smbd_post_send_iter() respects the
++	 * negotiated max_send_size, so we need to
++	 * loop until the full iter is posted
++	 */
++
++	while (iov_iter_count(iter) > 0) {
++		rc = smbd_post_send_iter(info, iter, _remaining_data_length);
++		if (rc < 0)
++			break;
++	}
++
++	return rc;
++}
++
+ /*
+  * Post a receive request to the transport
+  * The remote peer can only send data when a receive request is posted
+@@ -1008,17 +1044,19 @@ static int smbd_post_send_empty(struct smbd_connection *info)
+ static int smbd_post_recv(
+ 		struct smbd_connection *info, struct smbd_response *response)
+ {
++	struct smbdirect_socket *sc = &info->socket;
++	struct smbdirect_socket_parameters *sp = &sc->parameters;
+ 	struct ib_recv_wr recv_wr;
+ 	int rc = -EIO;
+ 
+ 	response->sge.addr = ib_dma_map_single(
+-				info->id->device, response->packet,
+-				info->max_receive_size, DMA_FROM_DEVICE);
+-	if (ib_dma_mapping_error(info->id->device, response->sge.addr))
++				sc->ib.dev, response->packet,
++				sp->max_recv_size, DMA_FROM_DEVICE);
++	if (ib_dma_mapping_error(sc->ib.dev, response->sge.addr))
+ 		return rc;
+ 
+-	response->sge.length = info->max_receive_size;
+-	response->sge.lkey = info->pd->local_dma_lkey;
++	response->sge.length = sp->max_recv_size;
++	response->sge.lkey = sc->ib.pd->local_dma_lkey;
+ 
+ 	response->cqe.done = recv_done;
+ 
+@@ -1027,9 +1065,9 @@ static int smbd_post_recv(
+ 	recv_wr.sg_list = &response->sge;
+ 	recv_wr.num_sge = 1;
+ 
+-	rc = ib_post_recv(info->id->qp, &recv_wr, NULL);
++	rc = ib_post_recv(sc->ib.qp, &recv_wr, NULL);
+ 	if (rc) {
+-		ib_dma_unmap_single(info->id->device, response->sge.addr,
++		ib_dma_unmap_single(sc->ib.dev, response->sge.addr,
+ 				    response->sge.length, DMA_FROM_DEVICE);
+ 		smbd_disconnect_rdma_connection(info);
+ 		log_rdma_recv(ERR, "ib_post_recv failed rc=%d\n", rc);
+@@ -1187,9 +1225,10 @@ static struct smbd_response *get_receive_buffer(struct smbd_connection *info)
+ static void put_receive_buffer(
+ 	struct smbd_connection *info, struct smbd_response *response)
+ {
++	struct smbdirect_socket *sc = &info->socket;
+ 	unsigned long flags;
+ 
+-	ib_dma_unmap_single(info->id->device, response->sge.addr,
++	ib_dma_unmap_single(sc->ib.dev, response->sge.addr,
+ 		response->sge.length, DMA_FROM_DEVICE);
+ 
+ 	spin_lock_irqsave(&info->receive_queue_lock, flags);
+@@ -1264,6 +1303,8 @@ static void idle_connection_timer(struct work_struct *work)
+ 	struct smbd_connection *info = container_of(
+ 					work, struct smbd_connection,
+ 					idle_timer_work.work);
++	struct smbdirect_socket *sc = &info->socket;
++	struct smbdirect_socket_parameters *sp = &sc->parameters;
+ 
+ 	if (info->keep_alive_requested != KEEP_ALIVE_NONE) {
+ 		log_keep_alive(ERR,
+@@ -1278,7 +1319,7 @@ static void idle_connection_timer(struct work_struct *work)
+ 
+ 	/* Setup the next idle timeout work */
+ 	queue_delayed_work(info->workqueue, &info->idle_timer_work,
+-			info->keep_alive_interval*HZ);
++			msecs_to_jiffies(sp->keepalive_interval_msec));
+ }
+ 
+ /*
+@@ -1289,6 +1330,8 @@ static void idle_connection_timer(struct work_struct *work)
+ void smbd_destroy(struct TCP_Server_Info *server)
+ {
+ 	struct smbd_connection *info = server->smbd_conn;
++	struct smbdirect_socket *sc;
++	struct smbdirect_socket_parameters *sp;
+ 	struct smbd_response *response;
+ 	unsigned long flags;
+ 
+@@ -1296,19 +1339,22 @@ void smbd_destroy(struct TCP_Server_Info *server)
+ 		log_rdma_event(INFO, "rdma session already destroyed\n");
+ 		return;
+ 	}
++	sc = &info->socket;
++	sp = &sc->parameters;
+ 
+ 	log_rdma_event(INFO, "destroying rdma session\n");
+-	if (info->transport_status != SMBD_DISCONNECTED) {
+-		rdma_disconnect(server->smbd_conn->id);
++	if (sc->status != SMBDIRECT_SOCKET_DISCONNECTED) {
++		rdma_disconnect(sc->rdma.cm_id);
+ 		log_rdma_event(INFO, "wait for transport being disconnected\n");
+ 		wait_event_interruptible(
+ 			info->disconn_wait,
+-			info->transport_status == SMBD_DISCONNECTED);
++			sc->status == SMBDIRECT_SOCKET_DISCONNECTED);
+ 	}
+ 
+ 	log_rdma_event(INFO, "destroying qp\n");
+-	ib_drain_qp(info->id->qp);
+-	rdma_destroy_qp(info->id);
++	ib_drain_qp(sc->ib.qp);
++	rdma_destroy_qp(sc->rdma.cm_id);
++	sc->ib.qp = NULL;
+ 
+ 	log_rdma_event(INFO, "cancelling idle timer\n");
+ 	cancel_delayed_work_sync(&info->idle_timer_work);
+@@ -1336,7 +1382,7 @@ void smbd_destroy(struct TCP_Server_Info *server)
+ 	log_rdma_event(INFO, "free receive buffers\n");
+ 	wait_event(info->wait_receive_queues,
+ 		info->count_receive_queue + info->count_empty_packet_queue
+-			== info->receive_credit_max);
++			== sp->recv_credit_max);
+ 	destroy_receive_buffers(info);
+ 
+ 	/*
+@@ -1355,10 +1401,10 @@ void smbd_destroy(struct TCP_Server_Info *server)
+ 	}
+ 	destroy_mr_list(info);
+ 
+-	ib_free_cq(info->send_cq);
+-	ib_free_cq(info->recv_cq);
+-	ib_dealloc_pd(info->pd);
+-	rdma_destroy_id(info->id);
++	ib_free_cq(sc->ib.send_cq);
++	ib_free_cq(sc->ib.recv_cq);
++	ib_dealloc_pd(sc->ib.pd);
++	rdma_destroy_id(sc->rdma.cm_id);
+ 
+ 	/* free mempools */
+ 	mempool_destroy(info->request_mempool);
+@@ -1367,7 +1413,7 @@ void smbd_destroy(struct TCP_Server_Info *server)
+ 	mempool_destroy(info->response_mempool);
+ 	kmem_cache_destroy(info->response_cache);
+ 
+-	info->transport_status = SMBD_DESTROYED;
++	sc->status = SMBDIRECT_SOCKET_DESTROYED;
+ 
+ 	destroy_workqueue(info->workqueue);
+ 	log_rdma_event(INFO,  "rdma session destroyed\n");
+@@ -1392,7 +1438,7 @@ int smbd_reconnect(struct TCP_Server_Info *server)
+ 	 * This is possible if transport is disconnected and we haven't received
+ 	 * notification from RDMA, but upper layer has detected timeout
+ 	 */
+-	if (server->smbd_conn->transport_status == SMBD_CONNECTED) {
++	if (server->smbd_conn->socket.status == SMBDIRECT_SOCKET_CONNECTED) {
+ 		log_rdma_event(INFO, "disconnecting transport\n");
+ 		smbd_destroy(server);
+ 	}
+@@ -1424,37 +1470,47 @@ static void destroy_caches_and_workqueue(struct smbd_connection *info)
+ #define MAX_NAME_LEN	80
+ static int allocate_caches_and_workqueue(struct smbd_connection *info)
+ {
++	struct smbdirect_socket *sc = &info->socket;
++	struct smbdirect_socket_parameters *sp = &sc->parameters;
+ 	char name[MAX_NAME_LEN];
+ 	int rc;
+ 
++	if (WARN_ON_ONCE(sp->max_recv_size < sizeof(struct smbdirect_data_transfer)))
++		return -ENOMEM;
++
+ 	scnprintf(name, MAX_NAME_LEN, "smbd_request_%p", info);
+ 	info->request_cache =
+ 		kmem_cache_create(
+ 			name,
+ 			sizeof(struct smbd_request) +
+-				sizeof(struct smbd_data_transfer),
++				sizeof(struct smbdirect_data_transfer),
+ 			0, SLAB_HWCACHE_ALIGN, NULL);
+ 	if (!info->request_cache)
+ 		return -ENOMEM;
+ 
+ 	info->request_mempool =
+-		mempool_create(info->send_credit_target, mempool_alloc_slab,
++		mempool_create(sp->send_credit_target, mempool_alloc_slab,
+ 			mempool_free_slab, info->request_cache);
+ 	if (!info->request_mempool)
+ 		goto out1;
+ 
+ 	scnprintf(name, MAX_NAME_LEN, "smbd_response_%p", info);
++
++	struct kmem_cache_args response_args = {
++		.align		= __alignof__(struct smbd_response),
++		.useroffset	= (offsetof(struct smbd_response, packet) +
++				   sizeof(struct smbdirect_data_transfer)),
++		.usersize	= sp->max_recv_size - sizeof(struct smbdirect_data_transfer),
++	};
+ 	info->response_cache =
+-		kmem_cache_create(
+-			name,
+-			sizeof(struct smbd_response) +
+-				info->max_receive_size,
+-			0, SLAB_HWCACHE_ALIGN, NULL);
++		kmem_cache_create(name,
++				  sizeof(struct smbd_response) + sp->max_recv_size,
++				  &response_args, SLAB_HWCACHE_ALIGN);
+ 	if (!info->response_cache)
+ 		goto out2;
+ 
+ 	info->response_mempool =
+-		mempool_create(info->receive_credit_max, mempool_alloc_slab,
++		mempool_create(sp->recv_credit_max, mempool_alloc_slab,
+ 		       mempool_free_slab, info->response_cache);
+ 	if (!info->response_mempool)
+ 		goto out3;
+@@ -1464,7 +1520,7 @@ static int allocate_caches_and_workqueue(struct smbd_connection *info)
+ 	if (!info->workqueue)
+ 		goto out4;
+ 
+-	rc = allocate_receive_buffers(info, info->receive_credit_max);
++	rc = allocate_receive_buffers(info, sp->recv_credit_max);
+ 	if (rc) {
+ 		log_rdma_event(ERR, "failed to allocate receive buffers\n");
+ 		goto out5;
+@@ -1491,6 +1547,8 @@ static struct smbd_connection *_smbd_get_connection(
+ {
+ 	int rc;
+ 	struct smbd_connection *info;
++	struct smbdirect_socket *sc;
++	struct smbdirect_socket_parameters *sp;
+ 	struct rdma_conn_param conn_param;
+ 	struct ib_qp_init_attr qp_attr;
+ 	struct sockaddr_in *addr_in = (struct sockaddr_in *) dstaddr;
+@@ -1500,101 +1558,102 @@ static struct smbd_connection *_smbd_get_connection(
+ 	info = kzalloc(sizeof(struct smbd_connection), GFP_KERNEL);
+ 	if (!info)
+ 		return NULL;
++	sc = &info->socket;
++	sp = &sc->parameters;
+ 
+-	info->transport_status = SMBD_CONNECTING;
++	sc->status = SMBDIRECT_SOCKET_CONNECTING;
+ 	rc = smbd_ia_open(info, dstaddr, port);
+ 	if (rc) {
+ 		log_rdma_event(INFO, "smbd_ia_open rc=%d\n", rc);
+ 		goto create_id_failed;
+ 	}
+ 
+-	if (smbd_send_credit_target > info->id->device->attrs.max_cqe ||
+-	    smbd_send_credit_target > info->id->device->attrs.max_qp_wr) {
++	if (smbd_send_credit_target > sc->ib.dev->attrs.max_cqe ||
++	    smbd_send_credit_target > sc->ib.dev->attrs.max_qp_wr) {
+ 		log_rdma_event(ERR, "consider lowering send_credit_target = %d. Possible CQE overrun, device reporting max_cqe %d max_qp_wr %d\n",
+ 			       smbd_send_credit_target,
+-			       info->id->device->attrs.max_cqe,
+-			       info->id->device->attrs.max_qp_wr);
++			       sc->ib.dev->attrs.max_cqe,
++			       sc->ib.dev->attrs.max_qp_wr);
+ 		goto config_failed;
+ 	}
+ 
+-	if (smbd_receive_credit_max > info->id->device->attrs.max_cqe ||
+-	    smbd_receive_credit_max > info->id->device->attrs.max_qp_wr) {
++	if (smbd_receive_credit_max > sc->ib.dev->attrs.max_cqe ||
++	    smbd_receive_credit_max > sc->ib.dev->attrs.max_qp_wr) {
+ 		log_rdma_event(ERR, "consider lowering receive_credit_max = %d. Possible CQE overrun, device reporting max_cqe %d max_qp_wr %d\n",
+ 			       smbd_receive_credit_max,
+-			       info->id->device->attrs.max_cqe,
+-			       info->id->device->attrs.max_qp_wr);
++			       sc->ib.dev->attrs.max_cqe,
++			       sc->ib.dev->attrs.max_qp_wr);
+ 		goto config_failed;
+ 	}
+ 
+-	info->receive_credit_max = smbd_receive_credit_max;
+-	info->send_credit_target = smbd_send_credit_target;
+-	info->max_send_size = smbd_max_send_size;
+-	info->max_fragmented_recv_size = smbd_max_fragmented_recv_size;
+-	info->max_receive_size = smbd_max_receive_size;
+-	info->keep_alive_interval = smbd_keep_alive_interval;
++	sp->recv_credit_max = smbd_receive_credit_max;
++	sp->send_credit_target = smbd_send_credit_target;
++	sp->max_send_size = smbd_max_send_size;
++	sp->max_fragmented_recv_size = smbd_max_fragmented_recv_size;
++	sp->max_recv_size = smbd_max_receive_size;
++	sp->keepalive_interval_msec = smbd_keep_alive_interval * 1000;
+ 
+-	if (info->id->device->attrs.max_send_sge < SMBDIRECT_MAX_SEND_SGE ||
+-	    info->id->device->attrs.max_recv_sge < SMBDIRECT_MAX_RECV_SGE) {
++	if (sc->ib.dev->attrs.max_send_sge < SMBDIRECT_MAX_SEND_SGE ||
++	    sc->ib.dev->attrs.max_recv_sge < SMBDIRECT_MAX_RECV_SGE) {
+ 		log_rdma_event(ERR,
+ 			"device %.*s max_send_sge/max_recv_sge = %d/%d too small\n",
+ 			IB_DEVICE_NAME_MAX,
+-			info->id->device->name,
+-			info->id->device->attrs.max_send_sge,
+-			info->id->device->attrs.max_recv_sge);
++			sc->ib.dev->name,
++			sc->ib.dev->attrs.max_send_sge,
++			sc->ib.dev->attrs.max_recv_sge);
+ 		goto config_failed;
+ 	}
+ 
+-	info->send_cq = NULL;
+-	info->recv_cq = NULL;
+-	info->send_cq =
+-		ib_alloc_cq_any(info->id->device, info,
+-				info->send_credit_target, IB_POLL_SOFTIRQ);
+-	if (IS_ERR(info->send_cq)) {
+-		info->send_cq = NULL;
++	sc->ib.send_cq =
++		ib_alloc_cq_any(sc->ib.dev, info,
++				sp->send_credit_target, IB_POLL_SOFTIRQ);
++	if (IS_ERR(sc->ib.send_cq)) {
++		sc->ib.send_cq = NULL;
+ 		goto alloc_cq_failed;
+ 	}
+ 
+-	info->recv_cq =
+-		ib_alloc_cq_any(info->id->device, info,
+-				info->receive_credit_max, IB_POLL_SOFTIRQ);
+-	if (IS_ERR(info->recv_cq)) {
+-		info->recv_cq = NULL;
++	sc->ib.recv_cq =
++		ib_alloc_cq_any(sc->ib.dev, info,
++				sp->recv_credit_max, IB_POLL_SOFTIRQ);
++	if (IS_ERR(sc->ib.recv_cq)) {
++		sc->ib.recv_cq = NULL;
+ 		goto alloc_cq_failed;
+ 	}
+ 
+ 	memset(&qp_attr, 0, sizeof(qp_attr));
+ 	qp_attr.event_handler = smbd_qp_async_error_upcall;
+ 	qp_attr.qp_context = info;
+-	qp_attr.cap.max_send_wr = info->send_credit_target;
+-	qp_attr.cap.max_recv_wr = info->receive_credit_max;
++	qp_attr.cap.max_send_wr = sp->send_credit_target;
++	qp_attr.cap.max_recv_wr = sp->recv_credit_max;
+ 	qp_attr.cap.max_send_sge = SMBDIRECT_MAX_SEND_SGE;
+ 	qp_attr.cap.max_recv_sge = SMBDIRECT_MAX_RECV_SGE;
+ 	qp_attr.cap.max_inline_data = 0;
+ 	qp_attr.sq_sig_type = IB_SIGNAL_REQ_WR;
+ 	qp_attr.qp_type = IB_QPT_RC;
+-	qp_attr.send_cq = info->send_cq;
+-	qp_attr.recv_cq = info->recv_cq;
++	qp_attr.send_cq = sc->ib.send_cq;
++	qp_attr.recv_cq = sc->ib.recv_cq;
+ 	qp_attr.port_num = ~0;
+ 
+-	rc = rdma_create_qp(info->id, info->pd, &qp_attr);
++	rc = rdma_create_qp(sc->rdma.cm_id, sc->ib.pd, &qp_attr);
+ 	if (rc) {
+ 		log_rdma_event(ERR, "rdma_create_qp failed %i\n", rc);
+ 		goto create_qp_failed;
+ 	}
++	sc->ib.qp = sc->rdma.cm_id->qp;
+ 
+ 	memset(&conn_param, 0, sizeof(conn_param));
+ 	conn_param.initiator_depth = 0;
+ 
+ 	conn_param.responder_resources =
+-		min(info->id->device->attrs.max_qp_rd_atom,
++		min(sc->ib.dev->attrs.max_qp_rd_atom,
+ 		    SMBD_CM_RESPONDER_RESOURCES);
+ 	info->responder_resources = conn_param.responder_resources;
+ 	log_rdma_mr(INFO, "responder_resources=%d\n",
+ 		info->responder_resources);
+ 
+ 	/* Need to send IRD/ORD in private data for iWARP */
+-	info->id->device->ops.get_port_immutable(
+-		info->id->device, info->id->port_num, &port_immutable);
++	sc->ib.dev->ops.get_port_immutable(
++		sc->ib.dev, sc->rdma.cm_id->port_num, &port_immutable);
+ 	if (port_immutable.core_cap_flags & RDMA_CORE_PORT_IWARP) {
+ 		ird_ord_hdr[0] = info->responder_resources;
+ 		ird_ord_hdr[1] = 1;
+@@ -1615,16 +1674,16 @@ static struct smbd_connection *_smbd_get_connection(
+ 	init_waitqueue_head(&info->conn_wait);
+ 	init_waitqueue_head(&info->disconn_wait);
+ 	init_waitqueue_head(&info->wait_reassembly_queue);
+-	rc = rdma_connect(info->id, &conn_param);
++	rc = rdma_connect(sc->rdma.cm_id, &conn_param);
+ 	if (rc) {
+ 		log_rdma_event(ERR, "rdma_connect() failed with %i\n", rc);
+ 		goto rdma_connect_failed;
+ 	}
+ 
+ 	wait_event_interruptible(
+-		info->conn_wait, info->transport_status != SMBD_CONNECTING);
++		info->conn_wait, sc->status != SMBDIRECT_SOCKET_CONNECTING);
+ 
+-	if (info->transport_status != SMBD_CONNECTED) {
++	if (sc->status != SMBDIRECT_SOCKET_CONNECTED) {
+ 		log_rdma_event(ERR, "rdma_connect failed port=%d\n", port);
+ 		goto rdma_connect_failed;
+ 	}
+@@ -1640,7 +1699,7 @@ static struct smbd_connection *_smbd_get_connection(
+ 	init_waitqueue_head(&info->wait_send_queue);
+ 	INIT_DELAYED_WORK(&info->idle_timer_work, idle_connection_timer);
+ 	queue_delayed_work(info->workqueue, &info->idle_timer_work,
+-		info->keep_alive_interval*HZ);
++		msecs_to_jiffies(sp->keepalive_interval_msec));
+ 
+ 	init_waitqueue_head(&info->wait_send_pending);
+ 	atomic_set(&info->send_pending, 0);
+@@ -1675,26 +1734,26 @@ static struct smbd_connection *_smbd_get_connection(
+ negotiation_failed:
+ 	cancel_delayed_work_sync(&info->idle_timer_work);
+ 	destroy_caches_and_workqueue(info);
+-	info->transport_status = SMBD_NEGOTIATE_FAILED;
++	sc->status = SMBDIRECT_SOCKET_NEGOTIATE_FAILED;
+ 	init_waitqueue_head(&info->conn_wait);
+-	rdma_disconnect(info->id);
++	rdma_disconnect(sc->rdma.cm_id);
+ 	wait_event(info->conn_wait,
+-		info->transport_status == SMBD_DISCONNECTED);
++		sc->status == SMBDIRECT_SOCKET_DISCONNECTED);
+ 
+ allocate_cache_failed:
+ rdma_connect_failed:
+-	rdma_destroy_qp(info->id);
++	rdma_destroy_qp(sc->rdma.cm_id);
+ 
+ create_qp_failed:
+ alloc_cq_failed:
+-	if (info->send_cq)
+-		ib_free_cq(info->send_cq);
+-	if (info->recv_cq)
+-		ib_free_cq(info->recv_cq);
++	if (sc->ib.send_cq)
++		ib_free_cq(sc->ib.send_cq);
++	if (sc->ib.recv_cq)
++		ib_free_cq(sc->ib.recv_cq);
+ 
+ config_failed:
+-	ib_dealloc_pd(info->pd);
+-	rdma_destroy_id(info->id);
++	ib_dealloc_pd(sc->ib.pd);
++	rdma_destroy_id(sc->rdma.cm_id);
+ 
+ create_id_failed:
+ 	kfree(info);
+@@ -1719,34 +1778,39 @@ struct smbd_connection *smbd_get_connection(
+ }
+ 
+ /*
+- * Receive data from receive reassembly queue
++ * Receive data from the transport's receive reassembly queue
+  * All the incoming data packets are placed in reassembly queue
+- * buf: the buffer to read data into
++ * iter: the buffer to read data into
+  * size: the length of data to read
+  * return value: actual data read
+- * Note: this implementation copies the data from reassebmly queue to receive
++ *
++ * Note: this implementation copies the data from reassembly queue to receive
+  * buffers used by upper layer. This is not the optimal code path. A better way
+  * to do it is to not have upper layer allocate its receive buffers but rather
+  * borrow the buffer from reassembly queue, and return it after data is
+  * consumed. But this will require more changes to upper layer code, and also
+  * need to consider packet boundaries while they still being reassembled.
+  */
+-static int smbd_recv_buf(struct smbd_connection *info, char *buf,
+-		unsigned int size)
++int smbd_recv(struct smbd_connection *info, struct msghdr *msg)
+ {
++	struct smbdirect_socket *sc = &info->socket;
+ 	struct smbd_response *response;
+-	struct smbd_data_transfer *data_transfer;
++	struct smbdirect_data_transfer *data_transfer;
++	size_t size = iov_iter_count(&msg->msg_iter);
+ 	int to_copy, to_read, data_read, offset;
+ 	u32 data_length, remaining_data_length, data_offset;
+ 	int rc;
+ 
++	if (WARN_ON_ONCE(iov_iter_rw(&msg->msg_iter) == WRITE))
++		return -EINVAL; /* It's a bug in upper layer to get there */
++
+ again:
+ 	/*
+ 	 * No need to hold the reassembly queue lock all the time as we are
+ 	 * the only one reading from the front of the queue. The transport
+ 	 * may add more entries to the back of the queue at the same time
+ 	 */
+-	log_read(INFO, "size=%d info->reassembly_data_length=%d\n", size,
++	log_read(INFO, "size=%zd info->reassembly_data_length=%d\n", size,
+ 		info->reassembly_data_length);
+ 	if (info->reassembly_data_length >= size) {
+ 		int queue_length;
+@@ -1784,7 +1848,10 @@ static int smbd_recv_buf(struct smbd_connection *info, char *buf,
+ 			if (response->first_segment && size == 4) {
+ 				unsigned int rfc1002_len =
+ 					data_length + remaining_data_length;
+-				*((__be32 *)buf) = cpu_to_be32(rfc1002_len);
++				__be32 rfc1002_hdr = cpu_to_be32(rfc1002_len);
++				if (copy_to_iter(&rfc1002_hdr, sizeof(rfc1002_hdr),
++						 &msg->msg_iter) != sizeof(rfc1002_hdr))
++					return -EFAULT;
+ 				data_read = 4;
+ 				response->first_segment = false;
+ 				log_read(INFO, "returning rfc1002 length %d\n",
+@@ -1793,10 +1860,9 @@ static int smbd_recv_buf(struct smbd_connection *info, char *buf,
+ 			}
+ 
+ 			to_copy = min_t(int, data_length - offset, to_read);
+-			memcpy(
+-				buf + data_read,
+-				(char *)data_transfer + data_offset + offset,
+-				to_copy);
++			if (copy_to_iter((char *)data_transfer + data_offset + offset,
++					 to_copy, &msg->msg_iter) != to_copy)
++				return -EFAULT;
+ 
+ 			/* move on to the next buffer? */
+ 			if (to_copy == data_length - offset) {
+@@ -1848,12 +1914,12 @@ static int smbd_recv_buf(struct smbd_connection *info, char *buf,
+ 	rc = wait_event_interruptible(
+ 		info->wait_reassembly_queue,
+ 		info->reassembly_data_length >= size ||
+-			info->transport_status != SMBD_CONNECTED);
++			sc->status != SMBDIRECT_SOCKET_CONNECTED);
+ 	/* Don't return any data if interrupted */
+ 	if (rc)
+ 		return rc;
+ 
+-	if (info->transport_status != SMBD_CONNECTED) {
++	if (sc->status != SMBDIRECT_SOCKET_CONNECTED) {
+ 		log_read(ERR, "disconnected\n");
+ 		return -ECONNABORTED;
+ 	}
+@@ -1861,89 +1927,6 @@ static int smbd_recv_buf(struct smbd_connection *info, char *buf,
+ 	goto again;
+ }
+ 
+-/*
+- * Receive a page from receive reassembly queue
+- * page: the page to read data into
+- * to_read: the length of data to read
+- * return value: actual data read
+- */
+-static int smbd_recv_page(struct smbd_connection *info,
+-		struct page *page, unsigned int page_offset,
+-		unsigned int to_read)
+-{
+-	int ret;
+-	char *to_address;
+-	void *page_address;
+-
+-	/* make sure we have the page ready for read */
+-	ret = wait_event_interruptible(
+-		info->wait_reassembly_queue,
+-		info->reassembly_data_length >= to_read ||
+-			info->transport_status != SMBD_CONNECTED);
+-	if (ret)
+-		return ret;
+-
+-	/* now we can read from reassembly queue and not sleep */
+-	page_address = kmap_atomic(page);
+-	to_address = (char *) page_address + page_offset;
+-
+-	log_read(INFO, "reading from page=%p address=%p to_read=%d\n",
+-		page, to_address, to_read);
+-
+-	ret = smbd_recv_buf(info, to_address, to_read);
+-	kunmap_atomic(page_address);
+-
+-	return ret;
+-}
+-
+-/*
+- * Receive data from transport
+- * msg: a msghdr point to the buffer, can be ITER_KVEC or ITER_BVEC
+- * return: total bytes read, or 0. SMB Direct will not do partial read.
+- */
+-int smbd_recv(struct smbd_connection *info, struct msghdr *msg)
+-{
+-	char *buf;
+-	struct page *page;
+-	unsigned int to_read, page_offset;
+-	int rc;
+-
+-	if (iov_iter_rw(&msg->msg_iter) == WRITE) {
+-		/* It's a bug in upper layer to get there */
+-		cifs_dbg(VFS, "Invalid msg iter dir %u\n",
+-			 iov_iter_rw(&msg->msg_iter));
+-		rc = -EINVAL;
+-		goto out;
+-	}
+-
+-	switch (iov_iter_type(&msg->msg_iter)) {
+-	case ITER_KVEC:
+-		buf = msg->msg_iter.kvec->iov_base;
+-		to_read = msg->msg_iter.kvec->iov_len;
+-		rc = smbd_recv_buf(info, buf, to_read);
+-		break;
+-
+-	case ITER_BVEC:
+-		page = msg->msg_iter.bvec->bv_page;
+-		page_offset = msg->msg_iter.bvec->bv_offset;
+-		to_read = msg->msg_iter.bvec->bv_len;
+-		rc = smbd_recv_page(info, page, page_offset, to_read);
+-		break;
+-
+-	default:
+-		/* It's a bug in upper layer to get there */
+-		cifs_dbg(VFS, "Invalid msg type %d\n",
+-			 iov_iter_type(&msg->msg_iter));
+-		rc = -EINVAL;
+-	}
+-
+-out:
+-	/* SMBDirect will read it all or nothing */
+-	if (rc > 0)
+-		msg->msg_iter.count = 0;
+-	return rc;
+-}
+-
+ /*
+  * Send data to transport
+  * Each rqst is transported as a SMBDirect payload
+@@ -1954,12 +1937,14 @@ int smbd_send(struct TCP_Server_Info *server,
+ 	int num_rqst, struct smb_rqst *rqst_array)
+ {
+ 	struct smbd_connection *info = server->smbd_conn;
++	struct smbdirect_socket *sc = &info->socket;
++	struct smbdirect_socket_parameters *sp = &sc->parameters;
+ 	struct smb_rqst *rqst;
+ 	struct iov_iter iter;
+ 	unsigned int remaining_data_length, klen;
+ 	int rc, i, rqst_idx;
+ 
+-	if (info->transport_status != SMBD_CONNECTED)
++	if (sc->status != SMBDIRECT_SOCKET_CONNECTED)
+ 		return -EAGAIN;
+ 
+ 	/*
+@@ -1971,10 +1956,10 @@ int smbd_send(struct TCP_Server_Info *server,
+ 	for (i = 0; i < num_rqst; i++)
+ 		remaining_data_length += smb_rqst_len(server, &rqst_array[i]);
+ 
+-	if (unlikely(remaining_data_length > info->max_fragmented_send_size)) {
++	if (unlikely(remaining_data_length > sp->max_fragmented_send_size)) {
+ 		/* assertion: payload never exceeds negotiated maximum */
+ 		log_write(ERR, "payload size %d > max size %d\n",
+-			remaining_data_length, info->max_fragmented_send_size);
++			remaining_data_length, sp->max_fragmented_send_size);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -2000,14 +1985,14 @@ int smbd_send(struct TCP_Server_Info *server,
+ 			klen += rqst->rq_iov[i].iov_len;
+ 		iov_iter_kvec(&iter, ITER_SOURCE, rqst->rq_iov, rqst->rq_nvec, klen);
+ 
+-		rc = smbd_post_send_iter(info, &iter, &remaining_data_length);
++		rc = smbd_post_send_full_iter(info, &iter, &remaining_data_length);
+ 		if (rc < 0)
+ 			break;
+ 
+ 		if (iov_iter_count(&rqst->rq_iter) > 0) {
+ 			/* And then the data pages if there are any */
+-			rc = smbd_post_send_iter(info, &rqst->rq_iter,
+-						 &remaining_data_length);
++			rc = smbd_post_send_full_iter(info, &rqst->rq_iter,
++						      &remaining_data_length);
+ 			if (rc < 0)
+ 				break;
+ 		}
+@@ -2053,6 +2038,7 @@ static void smbd_mr_recovery_work(struct work_struct *work)
+ {
+ 	struct smbd_connection *info =
+ 		container_of(work, struct smbd_connection, mr_recovery_work);
++	struct smbdirect_socket *sc = &info->socket;
+ 	struct smbd_mr *smbdirect_mr;
+ 	int rc;
+ 
+@@ -2070,7 +2056,7 @@ static void smbd_mr_recovery_work(struct work_struct *work)
+ 			}
+ 
+ 			smbdirect_mr->mr = ib_alloc_mr(
+-				info->pd, info->mr_type,
++				sc->ib.pd, info->mr_type,
+ 				info->max_frmr_depth);
+ 			if (IS_ERR(smbdirect_mr->mr)) {
+ 				log_rdma_mr(ERR, "ib_alloc_mr failed mr_type=%x max_frmr_depth=%x\n",
+@@ -2099,12 +2085,13 @@ static void smbd_mr_recovery_work(struct work_struct *work)
+ 
+ static void destroy_mr_list(struct smbd_connection *info)
+ {
++	struct smbdirect_socket *sc = &info->socket;
+ 	struct smbd_mr *mr, *tmp;
+ 
+ 	cancel_work_sync(&info->mr_recovery_work);
+ 	list_for_each_entry_safe(mr, tmp, &info->mr_list, list) {
+ 		if (mr->state == MR_INVALIDATED)
+-			ib_dma_unmap_sg(info->id->device, mr->sgt.sgl,
++			ib_dma_unmap_sg(sc->ib.dev, mr->sgt.sgl,
+ 				mr->sgt.nents, mr->dir);
+ 		ib_dereg_mr(mr->mr);
+ 		kfree(mr->sgt.sgl);
+@@ -2121,6 +2108,7 @@ static void destroy_mr_list(struct smbd_connection *info)
+  */
+ static int allocate_mr_list(struct smbd_connection *info)
+ {
++	struct smbdirect_socket *sc = &info->socket;
+ 	int i;
+ 	struct smbd_mr *smbdirect_mr, *tmp;
+ 
+@@ -2136,7 +2124,7 @@ static int allocate_mr_list(struct smbd_connection *info)
+ 		smbdirect_mr = kzalloc(sizeof(*smbdirect_mr), GFP_KERNEL);
+ 		if (!smbdirect_mr)
+ 			goto cleanup_entries;
+-		smbdirect_mr->mr = ib_alloc_mr(info->pd, info->mr_type,
++		smbdirect_mr->mr = ib_alloc_mr(sc->ib.pd, info->mr_type,
+ 					info->max_frmr_depth);
+ 		if (IS_ERR(smbdirect_mr->mr)) {
+ 			log_rdma_mr(ERR, "ib_alloc_mr failed mr_type=%x max_frmr_depth=%x\n",
+@@ -2181,20 +2169,20 @@ static int allocate_mr_list(struct smbd_connection *info)
+  */
+ static struct smbd_mr *get_mr(struct smbd_connection *info)
+ {
++	struct smbdirect_socket *sc = &info->socket;
+ 	struct smbd_mr *ret;
+ 	int rc;
+ again:
+ 	rc = wait_event_interruptible(info->wait_mr,
+ 		atomic_read(&info->mr_ready_count) ||
+-		info->transport_status != SMBD_CONNECTED);
++		sc->status != SMBDIRECT_SOCKET_CONNECTED);
+ 	if (rc) {
+ 		log_rdma_mr(ERR, "wait_event_interruptible rc=%x\n", rc);
+ 		return NULL;
+ 	}
+ 
+-	if (info->transport_status != SMBD_CONNECTED) {
+-		log_rdma_mr(ERR, "info->transport_status=%x\n",
+-			info->transport_status);
++	if (sc->status != SMBDIRECT_SOCKET_CONNECTED) {
++		log_rdma_mr(ERR, "sc->status=%x\n", sc->status);
+ 		return NULL;
+ 	}
+ 
+@@ -2247,6 +2235,7 @@ struct smbd_mr *smbd_register_mr(struct smbd_connection *info,
+ 				 struct iov_iter *iter,
+ 				 bool writing, bool need_invalidate)
+ {
++	struct smbdirect_socket *sc = &info->socket;
+ 	struct smbd_mr *smbdirect_mr;
+ 	int rc, num_pages;
+ 	enum dma_data_direction dir;
+@@ -2276,7 +2265,7 @@ struct smbd_mr *smbd_register_mr(struct smbd_connection *info,
+ 		    num_pages, iov_iter_count(iter), info->max_frmr_depth);
+ 	smbd_iter_to_mr(info, iter, &smbdirect_mr->sgt, info->max_frmr_depth);
+ 
+-	rc = ib_dma_map_sg(info->id->device, smbdirect_mr->sgt.sgl,
++	rc = ib_dma_map_sg(sc->ib.dev, smbdirect_mr->sgt.sgl,
+ 			   smbdirect_mr->sgt.nents, dir);
+ 	if (!rc) {
+ 		log_rdma_mr(ERR, "ib_dma_map_sg num_pages=%x dir=%x rc=%x\n",
+@@ -2312,7 +2301,7 @@ struct smbd_mr *smbd_register_mr(struct smbd_connection *info,
+ 	 * on IB_WR_REG_MR. Hardware enforces a barrier and order of execution
+ 	 * on the next ib_post_send when we actually send I/O to remote peer
+ 	 */
+-	rc = ib_post_send(info->id->qp, &reg_wr->wr, NULL);
++	rc = ib_post_send(sc->ib.qp, &reg_wr->wr, NULL);
+ 	if (!rc)
+ 		return smbdirect_mr;
+ 
+@@ -2321,7 +2310,7 @@ struct smbd_mr *smbd_register_mr(struct smbd_connection *info,
+ 
+ 	/* If all failed, attempt to recover this MR by setting it MR_ERROR*/
+ map_mr_error:
+-	ib_dma_unmap_sg(info->id->device, smbdirect_mr->sgt.sgl,
++	ib_dma_unmap_sg(sc->ib.dev, smbdirect_mr->sgt.sgl,
+ 			smbdirect_mr->sgt.nents, smbdirect_mr->dir);
+ 
+ dma_map_error:
+@@ -2359,6 +2348,7 @@ int smbd_deregister_mr(struct smbd_mr *smbdirect_mr)
+ {
+ 	struct ib_send_wr *wr;
+ 	struct smbd_connection *info = smbdirect_mr->conn;
++	struct smbdirect_socket *sc = &info->socket;
+ 	int rc = 0;
+ 
+ 	if (smbdirect_mr->need_invalidate) {
+@@ -2372,7 +2362,7 @@ int smbd_deregister_mr(struct smbd_mr *smbdirect_mr)
+ 		wr->send_flags = IB_SEND_SIGNALED;
+ 
+ 		init_completion(&smbdirect_mr->invalidate_done);
+-		rc = ib_post_send(info->id->qp, wr, NULL);
++		rc = ib_post_send(sc->ib.qp, wr, NULL);
+ 		if (rc) {
+ 			log_rdma_mr(ERR, "ib_post_send failed rc=%x\n", rc);
+ 			smbd_disconnect_rdma_connection(info);
+@@ -2389,7 +2379,7 @@ int smbd_deregister_mr(struct smbd_mr *smbdirect_mr)
+ 
+ 	if (smbdirect_mr->state == MR_INVALIDATED) {
+ 		ib_dma_unmap_sg(
+-			info->id->device, smbdirect_mr->sgt.sgl,
++			sc->ib.dev, smbdirect_mr->sgt.sgl,
+ 			smbdirect_mr->sgt.nents,
+ 			smbdirect_mr->dir);
+ 		smbdirect_mr->state = MR_READY;
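
The smbd_post_send_full_iter() wrapper added above exists because a single
smbd_post_send_iter() call now posts at most max_send_size minus the header
per packet, so one send may leave bytes in the iterator. A minimal userspace
sketch of the same drain-until-empty pattern (all names and the chunk size
are illustrative, not kernel API):

#include <stddef.h>
#include <stdio.h>

#define MAX_CHUNK 1364		/* stand-in for max_send_size - header */

struct iter {
	size_t remaining;	/* bytes still to post */
};

/* Posts at most MAX_CHUNK bytes, like one smbd_post_send_iter() call. */
static int post_chunk(struct iter *it)
{
	size_t n = it->remaining < MAX_CHUNK ? it->remaining : MAX_CHUNK;

	printf("posted %zu bytes\n", n);
	it->remaining -= n;
	return 0;
}

/* Loops until the iterator is drained, like smbd_post_send_full_iter(). */
static int post_full_iter(struct iter *it)
{
	int rc = 0;

	while (it->remaining > 0) {
		rc = post_chunk(it);
		if (rc < 0)
			break;
	}
	return rc;
}

int main(void)
{
	struct iter it = { .remaining = 5000 };

	return post_full_iter(&it) ? 1 : 0;
}
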
+diff --git a/fs/smb/client/smbdirect.h b/fs/smb/client/smbdirect.h
+index c08e3665150d74..3d552ab27e0f3d 100644
+--- a/fs/smb/client/smbdirect.h
++++ b/fs/smb/client/smbdirect.h
+@@ -15,6 +15,9 @@
+ #include <rdma/rdma_cm.h>
+ #include <linux/mempool.h>
+ 
++#include "../common/smbdirect/smbdirect.h"
++#include "../common/smbdirect/smbdirect_socket.h"
++
+ extern int rdma_readwrite_threshold;
+ extern int smbd_max_frmr_depth;
+ extern int smbd_keep_alive_interval;
+@@ -50,14 +53,8 @@ enum smbd_connection_status {
+  * 5. mempools for allocating packets
+  */
+ struct smbd_connection {
+-	enum smbd_connection_status transport_status;
+-
+-	/* RDMA related */
+-	struct rdma_cm_id *id;
+-	struct ib_qp_init_attr qp_attr;
+-	struct ib_pd *pd;
+-	struct ib_cq *send_cq, *recv_cq;
+-	struct ib_device_attr dev_attr;
++	struct smbdirect_socket socket;
++
+ 	int ri_rc;
+ 	struct completion ri_done;
+ 	wait_queue_head_t conn_wait;
+@@ -72,15 +69,7 @@ struct smbd_connection {
+ 	spinlock_t lock_new_credits_offered;
+ 	int new_credits_offered;
+ 
+-	/* Connection parameters defined in [MS-SMBD] 3.1.1.1 */
+-	int receive_credit_max;
+-	int send_credit_target;
+-	int max_send_size;
+-	int max_fragmented_recv_size;
+-	int max_fragmented_send_size;
+-	int max_receive_size;
+-	int keep_alive_interval;
+-	int max_readwrite_size;
++	/* dynamic connection parameters defined in [MS-SMBD] 3.1.1.1 */
+ 	enum keep_alive_status keep_alive_requested;
+ 	int protocol;
+ 	atomic_t send_credits;
+@@ -177,47 +166,6 @@ enum smbd_message_type {
+ 	SMBD_TRANSFER_DATA,
+ };
+ 
+-#define SMB_DIRECT_RESPONSE_REQUESTED 0x0001
+-
+-/* SMBD negotiation request packet [MS-SMBD] 2.2.1 */
+-struct smbd_negotiate_req {
+-	__le16 min_version;
+-	__le16 max_version;
+-	__le16 reserved;
+-	__le16 credits_requested;
+-	__le32 preferred_send_size;
+-	__le32 max_receive_size;
+-	__le32 max_fragmented_size;
+-} __packed;
+-
+-/* SMBD negotiation response packet [MS-SMBD] 2.2.2 */
+-struct smbd_negotiate_resp {
+-	__le16 min_version;
+-	__le16 max_version;
+-	__le16 negotiated_version;
+-	__le16 reserved;
+-	__le16 credits_requested;
+-	__le16 credits_granted;
+-	__le32 status;
+-	__le32 max_readwrite_size;
+-	__le32 preferred_send_size;
+-	__le32 max_receive_size;
+-	__le32 max_fragmented_size;
+-} __packed;
+-
+-/* SMBD data transfer packet with payload [MS-SMBD] 2.2.3 */
+-struct smbd_data_transfer {
+-	__le16 credits_requested;
+-	__le16 credits_granted;
+-	__le16 flags;
+-	__le16 reserved;
+-	__le32 remaining_data_length;
+-	__le32 data_offset;
+-	__le32 data_length;
+-	__le32 padding;
+-	__u8 buffer[];
+-} __packed;
+-
+ /* The packet fields for a registered RDMA buffer */
+ struct smbd_buffer_descriptor_v1 {
+ 	__le64 offset;
+diff --git a/fs/smb/common/smbdirect/smbdirect.h b/fs/smb/common/smbdirect/smbdirect.h
+new file mode 100644
+index 00000000000000..b9a385344ff31c
+--- /dev/null
++++ b/fs/smb/common/smbdirect/smbdirect.h
+@@ -0,0 +1,37 @@
++/* SPDX-License-Identifier: GPL-2.0-or-later */
++/*
++ *   Copyright (C) 2017, Microsoft Corporation.
++ *   Copyright (C) 2018, LG Electronics.
++ */
++
++#ifndef __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_H__
++#define __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_H__
++
++/* SMB-DIRECT buffer descriptor V1 structure [MS-SMBD] 2.2.3.1 */
++struct smbdirect_buffer_descriptor_v1 {
++	__le64 offset;
++	__le32 token;
++	__le32 length;
++} __packed;
++
++/*
++ * Connection parameters mostly from [MS-SMBD] 3.1.1.1
++ *
++ * These are setup and negotiated at the beginning of a
++ * connection and remain constant unless explicitly changed.
++ *
++ * Some values are important for the upper layer.
++ */
++struct smbdirect_socket_parameters {
++	__u16 recv_credit_max;
++	__u16 send_credit_target;
++	__u32 max_send_size;
++	__u32 max_fragmented_send_size;
++	__u32 max_recv_size;
++	__u32 max_fragmented_recv_size;
++	__u32 max_read_write_size;
++	__u32 keepalive_interval_msec;
++	__u32 keepalive_timeout_msec;
++} __packed;
++
++#endif /* __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_H__ */
+diff --git a/fs/smb/common/smbdirect/smbdirect_pdu.h b/fs/smb/common/smbdirect/smbdirect_pdu.h
+new file mode 100644
+index 00000000000000..ae9fdb05ce2314
+--- /dev/null
++++ b/fs/smb/common/smbdirect/smbdirect_pdu.h
+@@ -0,0 +1,55 @@
++/* SPDX-License-Identifier: GPL-2.0-or-later */
++/*
++ *   Copyright (c) 2017 Stefan Metzmacher
++ */
++
++#ifndef __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_PDU_H__
++#define __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_PDU_H__
++
++#define SMBDIRECT_V1 0x0100
++
++/* SMBD negotiation request packet [MS-SMBD] 2.2.1 */
++struct smbdirect_negotiate_req {
++	__le16 min_version;
++	__le16 max_version;
++	__le16 reserved;
++	__le16 credits_requested;
++	__le32 preferred_send_size;
++	__le32 max_receive_size;
++	__le32 max_fragmented_size;
++} __packed;
++
++/* SMBD negotiation response packet [MS-SMBD] 2.2.2 */
++struct smbdirect_negotiate_resp {
++	__le16 min_version;
++	__le16 max_version;
++	__le16 negotiated_version;
++	__le16 reserved;
++	__le16 credits_requested;
++	__le16 credits_granted;
++	__le32 status;
++	__le32 max_readwrite_size;
++	__le32 preferred_send_size;
++	__le32 max_receive_size;
++	__le32 max_fragmented_size;
++} __packed;
++
++#define SMBDIRECT_DATA_MIN_HDR_SIZE 0x14
++#define SMBDIRECT_DATA_OFFSET       0x18
++
++#define SMBDIRECT_FLAG_RESPONSE_REQUESTED 0x0001
++
++/* SMBD data transfer packet with payload [MS-SMBD] 2.2.3 */
++struct smbdirect_data_transfer {
++	__le16 credits_requested;
++	__le16 credits_granted;
++	__le16 flags;
++	__le16 reserved;
++	__le32 remaining_data_length;
++	__le32 data_offset;
++	__le32 data_length;
++	__le32 padding;
++	__u8 buffer[];
++} __packed;
++
++#endif /* __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_PDU_H__ */
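
As a usage illustration of the PDU definitions above, here is a small
endian-safe userspace decoder for the keep-alive flag in a received
data-transfer header; the helper functions are local assumptions, not kernel
API, and flags is the third little-endian 16-bit field, at offset 4:

#include <stddef.h>
#include <stdint.h>

#define SMBDIRECT_DATA_MIN_HDR_SIZE		0x14
#define SMBDIRECT_FLAG_RESPONSE_REQUESTED	0x0001

static uint16_t get_le16(const uint8_t *p)
{
	return (uint16_t)(p[0] | (p[1] << 8));
}

/*
 * Returns 1 if the peer set SMBDIRECT_FLAG_RESPONSE_REQUESTED (it
 * wants a keep-alive response), 0 if not, and -1 if the buffer is
 * too short to hold the fixed header.
 */
static int response_requested(const uint8_t *buf, size_t len)
{
	if (len < SMBDIRECT_DATA_MIN_HDR_SIZE)
		return -1;
	return (get_le16(buf + 4) & SMBDIRECT_FLAG_RESPONSE_REQUESTED) != 0;
}

int main(void)
{
	uint8_t hdr[SMBDIRECT_DATA_MIN_HDR_SIZE] = { [4] = 0x01 };

	return response_requested(hdr, sizeof(hdr)) == 1 ? 0 : 1;
}
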
+diff --git a/fs/smb/common/smbdirect/smbdirect_socket.h b/fs/smb/common/smbdirect/smbdirect_socket.h
+new file mode 100644
+index 00000000000000..e5b15cc44a7ba5
+--- /dev/null
++++ b/fs/smb/common/smbdirect/smbdirect_socket.h
+@@ -0,0 +1,43 @@
++/* SPDX-License-Identifier: GPL-2.0-or-later */
++/*
++ *   Copyright (c) 2025 Stefan Metzmacher
++ */
++
++#ifndef __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_SOCKET_H__
++#define __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_SOCKET_H__
++
++enum smbdirect_socket_status {
++	SMBDIRECT_SOCKET_CREATED,
++	SMBDIRECT_SOCKET_CONNECTING,
++	SMBDIRECT_SOCKET_CONNECTED,
++	SMBDIRECT_SOCKET_NEGOTIATE_FAILED,
++	SMBDIRECT_SOCKET_DISCONNECTING,
++	SMBDIRECT_SOCKET_DISCONNECTED,
++	SMBDIRECT_SOCKET_DESTROYED
++};
++
++struct smbdirect_socket {
++	enum smbdirect_socket_status status;
++
++	/* RDMA related */
++	struct {
++		struct rdma_cm_id *cm_id;
++	} rdma;
++
++	/* IB verbs related */
++	struct {
++		struct ib_pd *pd;
++		struct ib_cq *send_cq;
++		struct ib_cq *recv_cq;
++
++		/*
++		 * shortcuts for rdma.cm_id->{qp,device};
++		 */
++		struct ib_qp *qp;
++		struct ib_device *dev;
++	} ib;
++
++	struct smbdirect_socket_parameters parameters;
++};
++
++#endif /* __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_SOCKET_H__ */
+diff --git a/fs/xfs/libxfs/xfs_group.c b/fs/xfs/libxfs/xfs_group.c
+index e9d76bcdc820dd..20ad7c30948974 100644
+--- a/fs/xfs/libxfs/xfs_group.c
++++ b/fs/xfs/libxfs/xfs_group.c
+@@ -163,7 +163,8 @@ xfs_group_free(
+ 
+ 	xfs_defer_drain_free(&xg->xg_intents_drain);
+ #ifdef __KERNEL__
+-	kfree(xg->xg_busy_extents);
++	if (xfs_group_has_extent_busy(xg->xg_mount, xg->xg_type))
++		kfree(xg->xg_busy_extents);
+ #endif
+ 
+ 	if (uninit)
+@@ -189,9 +190,11 @@ xfs_group_insert(
+ 	xg->xg_type = type;
+ 
+ #ifdef __KERNEL__
+-	xg->xg_busy_extents = xfs_extent_busy_alloc();
+-	if (!xg->xg_busy_extents)
+-		return -ENOMEM;
++	if (xfs_group_has_extent_busy(mp, type)) {
++		xg->xg_busy_extents = xfs_extent_busy_alloc();
++		if (!xg->xg_busy_extents)
++			return -ENOMEM;
++	}
+ 	spin_lock_init(&xg->xg_state_lock);
+ 	xfs_hooks_init(&xg->xg_rmap_update_hooks);
+ #endif
+@@ -210,7 +213,8 @@ xfs_group_insert(
+ out_drain:
+ 	xfs_defer_drain_free(&xg->xg_intents_drain);
+ #ifdef __KERNEL__
+-	kfree(xg->xg_busy_extents);
++	if (xfs_group_has_extent_busy(xg->xg_mount, xg->xg_type))
++		kfree(xg->xg_busy_extents);
+ #endif
+ 	return error;
+ }
+diff --git a/fs/xfs/xfs_extent_busy.h b/fs/xfs/xfs_extent_busy.h
+index f069b04e8ea184..3e6e019b614654 100644
+--- a/fs/xfs/xfs_extent_busy.h
++++ b/fs/xfs/xfs_extent_busy.h
+@@ -68,4 +68,12 @@ static inline void xfs_extent_busy_sort(struct list_head *list)
+ 	list_sort(NULL, list, xfs_extent_busy_ag_cmp);
+ }
+ 
++/*
++ * Zoned RTGs don't need to track busy extents, as the actual block freeing only
++ * happens by a zone reset, which forces out all transactions that touched the
++ * to be reset zone first.
++ */
++#define xfs_group_has_extent_busy(mp, type) \
++	((type) == XG_TYPE_AG || !xfs_has_zoned((mp)))
++
+ #endif /* __XFS_EXTENT_BUSY_H__ */
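
The xfs_group changes above pair the conditional kfree() with the
conditional allocation by routing both through the same
xfs_group_has_extent_busy() predicate, so the init and teardown paths cannot
disagree about whether xg_busy_extents is owned. A minimal sketch of that
pairing pattern (illustrative names, not the XFS code):

#include <stdbool.h>
#include <stdlib.h>

struct group {
	bool zoned_rtg;		/* zoned RTGs skip busy-extent tracking */
	void *busy_extents;	/* allocated only when tracking is on */
};

static bool has_extent_busy(const struct group *g)
{
	return !g->zoned_rtg;
}

static int group_init(struct group *g)
{
	if (has_extent_busy(g)) {
		g->busy_extents = calloc(1, 64);
		if (!g->busy_extents)
			return -1;
	}
	return 0;
}

/* Same predicate on the way out: never free what was never allocated. */
static void group_exit(struct group *g)
{
	if (has_extent_busy(g))
		free(g->busy_extents);
}

int main(void)
{
	struct group g = { .zoned_rtg = true };

	if (group_init(&g))
		return 1;
	group_exit(&g);
	return 0;
}
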
+diff --git a/include/linux/phy/phy.h b/include/linux/phy/phy.h
+index e63e6e70e86042..51f781b5656558 100644
+--- a/include/linux/phy/phy.h
++++ b/include/linux/phy/phy.h
+@@ -149,6 +149,7 @@ struct phy_attrs {
+  * @id: id of the phy device
+  * @ops: function pointers for performing phy operations
+  * @mutex: mutex to protect phy_ops
++ * @lockdep_key: lockdep information for this mutex
+  * @init_count: used to protect when the PHY is used by multiple consumers
+  * @power_count: used to protect when the PHY is used by multiple consumers
+  * @attrs: used to specify PHY specific attributes
+@@ -160,6 +161,7 @@ struct phy {
+ 	int			id;
+ 	const struct phy_ops	*ops;
+ 	struct mutex		mutex;
++	struct lock_class_key	lockdep_key;
+ 	int			init_count;
+ 	int			power_count;
+ 	struct phy_attrs	attrs;
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 521a9d0acac692..f47dfb8b5be799 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -377,6 +377,8 @@ enum {
+ 	 * This quirk must be set before hci_register_dev is called.
+ 	 */
+ 	HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE,
++
++	__HCI_NUM_QUIRKS,
+ };
+ 
+ /* HCI device flags */
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 1cf60ed7ac89b3..d22468bb4341c6 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -462,7 +462,7 @@ struct hci_dev {
+ 
+ 	unsigned int	auto_accept_delay;
+ 
+-	unsigned long	quirks;
++	DECLARE_BITMAP(quirk_flags, __HCI_NUM_QUIRKS);
+ 
+ 	atomic_t	cmd_cnt;
+ 	unsigned int	acl_cnt;
+@@ -652,6 +652,10 @@ struct hci_dev {
+ 	u8 (*classify_pkt_type)(struct hci_dev *hdev, struct sk_buff *skb);
+ };
+ 
++#define hci_set_quirk(hdev, nr) set_bit((nr), (hdev)->quirk_flags)
++#define hci_clear_quirk(hdev, nr) clear_bit((nr), (hdev)->quirk_flags)
++#define hci_test_quirk(hdev, nr) test_bit((nr), (hdev)->quirk_flags)
++
+ #define HCI_PHY_HANDLE(handle)	(handle & 0xff)
+ 
+ enum conn_reasons {
+@@ -825,20 +829,20 @@ extern struct mutex hci_cb_list_lock;
+ #define hci_dev_test_and_clear_flag(hdev, nr)  test_and_clear_bit((nr), (hdev)->dev_flags)
+ #define hci_dev_test_and_change_flag(hdev, nr) test_and_change_bit((nr), (hdev)->dev_flags)
+ 
+-#define hci_dev_clear_volatile_flags(hdev)			\
+-	do {							\
+-		hci_dev_clear_flag(hdev, HCI_LE_SCAN);		\
+-		hci_dev_clear_flag(hdev, HCI_LE_ADV);		\
+-		hci_dev_clear_flag(hdev, HCI_LL_RPA_RESOLUTION);\
+-		hci_dev_clear_flag(hdev, HCI_PERIODIC_INQ);	\
+-		hci_dev_clear_flag(hdev, HCI_QUALITY_REPORT);	\
++#define hci_dev_clear_volatile_flags(hdev)				\
++	do {								\
++		hci_dev_clear_flag((hdev), HCI_LE_SCAN);		\
++		hci_dev_clear_flag((hdev), HCI_LE_ADV);			\
++		hci_dev_clear_flag((hdev), HCI_LL_RPA_RESOLUTION);	\
++		hci_dev_clear_flag((hdev), HCI_PERIODIC_INQ);		\
++		hci_dev_clear_flag((hdev), HCI_QUALITY_REPORT);		\
+ 	} while (0)
+ 
+ #define hci_dev_le_state_simultaneous(hdev) \
+-	(!test_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks) && \
+-	 (hdev->le_states[4] & 0x08) &&	/* Central */ \
+-	 (hdev->le_states[4] & 0x40) &&	/* Peripheral */ \
+-	 (hdev->le_states[3] & 0x10))	/* Simultaneous */
++	(!hci_test_quirk((hdev), HCI_QUIRK_BROKEN_LE_STATES) && \
++	 ((hdev)->le_states[4] & 0x08) &&	/* Central */ \
++	 ((hdev)->le_states[4] & 0x40) &&	/* Peripheral */ \
++	 ((hdev)->le_states[3] & 0x10))		/* Simultaneous */
+ 
+ /* ----- HCI interface to upper protocols ----- */
+ int l2cap_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr);
+@@ -1925,8 +1929,8 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ 		      ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_2M))
+ 
+ #define le_coded_capable(dev) (((dev)->le_features[1] & HCI_LE_PHY_CODED) && \
+-			       !test_bit(HCI_QUIRK_BROKEN_LE_CODED, \
+-					 &(dev)->quirks))
++			       !hci_test_quirk((dev), \
++					       HCI_QUIRK_BROKEN_LE_CODED))
+ 
+ #define scan_coded(dev) (((dev)->le_tx_def_phys & HCI_LE_SET_PHY_CODED) || \
+ 			 ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_CODED))
+@@ -1934,31 +1938,31 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ #define ll_privacy_capable(dev) ((dev)->le_features[0] & HCI_LE_LL_PRIVACY)
+ 
+ #define privacy_mode_capable(dev) (ll_privacy_capable(dev) && \
+-				   (hdev->commands[39] & 0x04))
++				   ((dev)->commands[39] & 0x04))
+ 
+ #define read_key_size_capable(dev) \
+ 	((dev)->commands[20] & 0x10 && \
+-	 !test_bit(HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE, &hdev->quirks))
++	 !hci_test_quirk((dev), HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE))
+ 
+ #define read_voice_setting_capable(dev) \
+ 	((dev)->commands[9] & 0x04 && \
+-	 !test_bit(HCI_QUIRK_BROKEN_READ_VOICE_SETTING, &(dev)->quirks))
++	 !hci_test_quirk((dev), HCI_QUIRK_BROKEN_READ_VOICE_SETTING))
+ 
+ /* Use enhanced synchronous connection if command is supported and its quirk
+  * has not been set.
+  */
+ #define enhanced_sync_conn_capable(dev) \
+ 	(((dev)->commands[29] & 0x08) && \
+-	 !test_bit(HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN, &(dev)->quirks))
++	 !hci_test_quirk((dev), HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN))
+ 
+ /* Use ext scanning if set ext scan param and ext scan enable is supported */
+ #define use_ext_scan(dev) (((dev)->commands[37] & 0x20) && \
+ 			   ((dev)->commands[37] & 0x40) && \
+-			   !test_bit(HCI_QUIRK_BROKEN_EXT_SCAN, &(dev)->quirks))
++			   !hci_test_quirk((dev), HCI_QUIRK_BROKEN_EXT_SCAN))
+ 
+ /* Use ext create connection if command is supported */
+ #define use_ext_conn(dev) (((dev)->commands[37] & 0x80) && \
+-	!test_bit(HCI_QUIRK_BROKEN_EXT_CREATE_CONN, &(dev)->quirks))
++	!hci_test_quirk((dev), HCI_QUIRK_BROKEN_EXT_CREATE_CONN))
+ /* Extended advertising support */
+ #define ext_adv_capable(dev) (((dev)->le_features[1] & HCI_LE_EXT_ADV))
+ 
+@@ -1973,8 +1977,8 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+  */
+ #define use_enhanced_conn_complete(dev) ((ll_privacy_capable(dev) || \
+ 					 ext_adv_capable(dev)) && \
+-					 !test_bit(HCI_QUIRK_BROKEN_EXT_CREATE_CONN, \
+-						 &(dev)->quirks))
++					 !hci_test_quirk((dev), \
++							 HCI_QUIRK_BROKEN_EXT_CREATE_CONN))
+ 
+ /* Periodic advertising support */
+ #define per_adv_capable(dev) (((dev)->le_features[1] & HCI_LE_PERIODIC_ADV))
+@@ -1991,7 +1995,7 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ #define sync_recv_capable(dev) ((dev)->le_features[3] & HCI_LE_ISO_SYNC_RECEIVER)
+ 
+ #define mws_transport_config_capable(dev) (((dev)->commands[30] & 0x08) && \
+-	(!test_bit(HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG, &(dev)->quirks)))
++	(!hci_test_quirk((dev), HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG)))
+ 
+ /* ----- HCI protocols ----- */
+ #define HCI_PROTO_DEFER             0x01
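
The hci_dev conversion above replaces the single unsigned long quirks word
with a DECLARE_BITMAP() plus hci_set_quirk()/hci_clear_quirk()/
hci_test_quirk() accessors, which lets the quirk count grow past the bits
available in one long on 32-bit targets. A self-contained userspace sketch
of the same bitmap idiom (the names and the 71-flag count are illustrative,
not the kernel helpers):

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

/* More flags than fit in one 32-bit unsigned long. */
enum { QUIRK_FIRST, /* ... */ QUIRK_LAST = 70, NUM_QUIRKS };

#define BITS_PER_LONG	(sizeof(unsigned long) * CHAR_BIT)
#define BITMAP_LONGS(n)	(((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

struct dev {
	unsigned long quirk_flags[BITMAP_LONGS(NUM_QUIRKS)];
};

static void set_quirk(struct dev *d, unsigned int nr)
{
	d->quirk_flags[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

static bool test_quirk(const struct dev *d, unsigned int nr)
{
	return (d->quirk_flags[nr / BITS_PER_LONG] >>
		(nr % BITS_PER_LONG)) & 1;
}

int main(void)
{
	struct dev d = { { 0 } };

	set_quirk(&d, QUIRK_LAST);
	printf("quirk %d set: %d\n", QUIRK_LAST, test_quirk(&d, QUIRK_LAST));
	return 0;
}
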
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index efbd79c67be21d..75f2e5782887ff 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -2720,7 +2720,7 @@ struct cfg80211_scan_request {
+ 	s8 tsf_report_link_id;
+ 
+ 	/* keep last */
+-	struct ieee80211_channel *channels[] __counted_by(n_channels);
++	struct ieee80211_channel *channels[];
+ };
+ 
+ static inline void get_random_mask_addr(u8 *buf, const u8 *addr, const u8 *mask)
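
Dropping __counted_by(n_channels) above relaxes compiler/sanitizer bounds
checking on the flexible array. With the attribute, the counter must hold
the right value before the array is indexed, a contract that is easy to
violate when a struct is filled incrementally; that is the usual reason such
an annotation gets reverted, though the specific trigger is not stated in
this patch. A minimal sketch of the attribute's contract (hypothetical
struct; counted_by is a clang/GCC extension):

#include <stdlib.h>

#if defined(__has_attribute)
# if __has_attribute(counted_by)
#  define __counted_by(m) __attribute__((counted_by(m)))
# endif
#endif
#ifndef __counted_by
# define __counted_by(m)
#endif

struct scan_req {
	int n_channels;
	int channels[] __counted_by(n_channels);
};

int main(void)
{
	struct scan_req *r = malloc(sizeof(*r) + 4 * sizeof(int));

	if (!r)
		return 1;
	/*
	 * With counted_by, sanitizers expect n_channels to be set
	 * before channels[] is indexed; writing the array first, as
	 * incremental builders tend to do, trips the bounds check.
	 */
	r->n_channels = 4;
	r->channels[0] = 36;
	free(r);
	return 0;
}
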
+diff --git a/include/net/netfilter/nf_conntrack.h b/include/net/netfilter/nf_conntrack.h
+index 3f02a45773e81c..ca26274196b999 100644
+--- a/include/net/netfilter/nf_conntrack.h
++++ b/include/net/netfilter/nf_conntrack.h
+@@ -306,8 +306,19 @@ static inline bool nf_ct_is_expired(const struct nf_conn *ct)
+ /* use after obtaining a reference count */
+ static inline bool nf_ct_should_gc(const struct nf_conn *ct)
+ {
+-	return nf_ct_is_expired(ct) && nf_ct_is_confirmed(ct) &&
+-	       !nf_ct_is_dying(ct);
++	if (!nf_ct_is_confirmed(ct))
++		return false;
++
++	/* load ct->timeout after is_confirmed() test.
++	 * Pairs with __nf_conntrack_confirm() which:
++	 * 1. Increases ct->timeout value
++	 * 2. Inserts ct into rcu hlist
++	 * 3. Sets the confirmed bit
++	 * 4. Unlocks the hlist lock
++	 */
++	smp_acquire__after_ctrl_dep();
++
++	return nf_ct_is_expired(ct) && !nf_ct_is_dying(ct);
+ }
+ 
+ #define	NF_CT_DAY	(86400 * HZ)
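
The new barrier in nf_ct_should_gc() above encodes a publish/observe
protocol: the confirm side updates the timeout before setting the confirmed
bit, and the reader must not load the timeout until after it has seen that
bit. A C11 sketch of the same ordering, using a plain load-acquire where the
kernel relies on a control dependency plus smp_acquire__after_ctrl_dep()
(illustrative, not the conntrack code):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

struct conn {
	_Atomic uint32_t timeout;
	atomic_bool confirmed;
};

static void confirm(struct conn *ct, uint32_t new_timeout)
{
	atomic_store_explicit(&ct->timeout, new_timeout,
			      memory_order_relaxed);
	/* Release: the timeout update is visible once 'confirmed' is. */
	atomic_store_explicit(&ct->confirmed, true, memory_order_release);
}

static bool should_gc(struct conn *ct, uint32_t now)
{
	if (!atomic_load_explicit(&ct->confirmed, memory_order_acquire))
		return false;
	/* The acquire above orders this load after the confirmed test. */
	return atomic_load_explicit(&ct->timeout,
				    memory_order_relaxed) <= now;
}

int main(void)
{
	struct conn ct = { 0 };

	confirm(&ct, 100);
	return should_gc(&ct, 200) ? 0 : 1;
}
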
+diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
+index ecc1b852661e39..18c1237ccb1e5d 100644
+--- a/include/trace/events/netfs.h
++++ b/include/trace/events/netfs.h
+@@ -55,6 +55,7 @@
+ 	EM(netfs_rreq_trace_complete,		"COMPLET")	\
+ 	EM(netfs_rreq_trace_dirty,		"DIRTY  ")	\
+ 	EM(netfs_rreq_trace_done,		"DONE   ")	\
++	EM(netfs_rreq_trace_end_copy_to_cache,	"END-C2C")	\
+ 	EM(netfs_rreq_trace_free,		"FREE   ")	\
+ 	EM(netfs_rreq_trace_recollect,		"RECLLCT")	\
+ 	EM(netfs_rreq_trace_redirty,		"REDIRTY")	\
+@@ -550,6 +551,35 @@ TRACE_EVENT(netfs_write,
+ 		      __entry->start, __entry->start + __entry->len - 1)
+ 	    );
+ 
++TRACE_EVENT(netfs_copy2cache,
++	    TP_PROTO(const struct netfs_io_request *rreq,
++		     const struct netfs_io_request *creq),
++
++	    TP_ARGS(rreq, creq),
++
++	    TP_STRUCT__entry(
++		    __field(unsigned int,		rreq)
++		    __field(unsigned int,		creq)
++		    __field(unsigned int,		cookie)
++		    __field(unsigned int,		ino)
++			     ),
++
++	    TP_fast_assign(
++		    struct netfs_inode *__ctx = netfs_inode(rreq->inode);
++		    struct fscache_cookie *__cookie = netfs_i_cookie(__ctx);
++		    __entry->rreq	= rreq->debug_id;
++		    __entry->creq	= creq->debug_id;
++		    __entry->cookie	= __cookie ? __cookie->debug_id : 0;
++		    __entry->ino	= rreq->inode->i_ino;
++			   ),
++
++	    TP_printk("R=%08x CR=%08x c=%08x i=%x ",
++		      __entry->rreq,
++		      __entry->creq,
++		      __entry->cookie,
++		      __entry->ino)
++	    );
++
+ TRACE_EVENT(netfs_collect,
+ 	    TP_PROTO(const struct netfs_io_request *wreq),
+ 
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index cad50d91077efa..f2ef106b619699 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -275,20 +275,24 @@
+ 	EM(rxrpc_call_put_kernel,		"PUT kernel  ") \
+ 	EM(rxrpc_call_put_poke,			"PUT poke    ") \
+ 	EM(rxrpc_call_put_recvmsg,		"PUT recvmsg ") \
++	EM(rxrpc_call_put_release_recvmsg_q,	"PUT rls-rcmq") \
+ 	EM(rxrpc_call_put_release_sock,		"PUT rls-sock") \
+ 	EM(rxrpc_call_put_release_sock_tba,	"PUT rls-sk-a") \
+ 	EM(rxrpc_call_put_sendmsg,		"PUT sendmsg ") \
+-	EM(rxrpc_call_put_unnotify,		"PUT unnotify") \
+ 	EM(rxrpc_call_put_userid_exists,	"PUT u-exists") \
+ 	EM(rxrpc_call_put_userid,		"PUT user-id ") \
+ 	EM(rxrpc_call_see_accept,		"SEE accept  ") \
+ 	EM(rxrpc_call_see_activate_client,	"SEE act-clnt") \
++	EM(rxrpc_call_see_already_released,	"SEE alrdy-rl") \
+ 	EM(rxrpc_call_see_connect_failed,	"SEE con-fail") \
+ 	EM(rxrpc_call_see_connected,		"SEE connect ") \
+ 	EM(rxrpc_call_see_conn_abort,		"SEE conn-abt") \
++	EM(rxrpc_call_see_discard,		"SEE discard ") \
+ 	EM(rxrpc_call_see_disconnected,		"SEE disconn ") \
+ 	EM(rxrpc_call_see_distribute_error,	"SEE dist-err") \
+ 	EM(rxrpc_call_see_input,		"SEE input   ") \
++	EM(rxrpc_call_see_notify_released,	"SEE nfy-rlsd") \
++	EM(rxrpc_call_see_recvmsg,		"SEE recvmsg ") \
+ 	EM(rxrpc_call_see_release,		"SEE release ") \
+ 	EM(rxrpc_call_see_userid_exists,	"SEE u-exists") \
+ 	EM(rxrpc_call_see_waiting_call,		"SEE q-conn  ") \
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 7bf5e62d5a292e..4dac01ff7ac772 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -1749,9 +1749,11 @@ int io_connect(struct io_kiocb *req, unsigned int issue_flags)
+ 	int ret;
+ 	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+ 
+-	if (unlikely(req->flags & REQ_F_FAIL)) {
+-		ret = -ECONNRESET;
+-		goto out;
++	if (connect->in_progress) {
++		struct poll_table_struct pt = { ._key = EPOLLERR };
++
++		if (vfs_poll(req->file, &pt) & EPOLLERR)
++			goto get_sock_err;
+ 	}
+ 
+ 	file_flags = force_nonblock ? O_NONBLOCK : 0;
+@@ -1776,8 +1778,10 @@ int io_connect(struct io_kiocb *req, unsigned int issue_flags)
+ 		 * which means the previous result is good. For both of these,
+ 		 * grab the sock_error() and use that for the completion.
+ 		 */
+-		if (ret == -EBADFD || ret == -EISCONN)
++		if (ret == -EBADFD || ret == -EISCONN) {
++get_sock_err:
+ 			ret = sock_error(sock_from_file(req->file)->sk);
++		}
+ 	}
+ 	if (ret == -ERESTARTSYS)
+ 		ret = -EINTR;
+diff --git a/io_uring/poll.c b/io_uring/poll.c
+index 8eb744eb9f4c3c..ddafa88b9fbeda 100644
+--- a/io_uring/poll.c
++++ b/io_uring/poll.c
+@@ -273,8 +273,6 @@ static int io_poll_check_events(struct io_kiocb *req, io_tw_token_t tw)
+ 				return IOU_POLL_REISSUE;
+ 			}
+ 		}
+-		if (unlikely(req->cqe.res & EPOLLERR))
+-			req_set_fail(req);
+ 		if (req->apoll_events & EPOLLONESHOT)
+ 			return IOU_POLL_DONE;
+ 
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index a71aa4cb85fae6..52d02bc0abb2b9 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -883,6 +883,13 @@ int bpf_bprintf_prepare(char *fmt, u32 fmt_size, const u64 *raw_args,
+ 		if (fmt[i] == 'p') {
+ 			sizeof_cur_arg = sizeof(long);
+ 
++			if (fmt[i + 1] == 0 || isspace(fmt[i + 1]) ||
++			    ispunct(fmt[i + 1])) {
++				if (tmp_buf)
++					cur_arg = raw_args[num_spec];
++				goto nocopy_fmt;
++			}
++
+ 			if ((fmt[i + 1] == 'k' || fmt[i + 1] == 'u') &&
+ 			    fmt[i + 2] == 's') {
+ 				fmt_ptype = fmt[i + 1];
+@@ -890,11 +897,9 @@ int bpf_bprintf_prepare(char *fmt, u32 fmt_size, const u64 *raw_args,
+ 				goto fmt_str;
+ 			}
+ 
+-			if (fmt[i + 1] == 0 || isspace(fmt[i + 1]) ||
+-			    ispunct(fmt[i + 1]) || fmt[i + 1] == 'K' ||
++			if (fmt[i + 1] == 'K' ||
+ 			    fmt[i + 1] == 'x' || fmt[i + 1] == 's' ||
+ 			    fmt[i + 1] == 'S') {
+-				/* just kernel pointers */
+ 				if (tmp_buf)
+ 					cur_arg = raw_args[num_spec];
+ 				i++;
+diff --git a/kernel/cgroup/legacy_freezer.c b/kernel/cgroup/legacy_freezer.c
+index 507b8f19a262e0..dd9417425d9292 100644
+--- a/kernel/cgroup/legacy_freezer.c
++++ b/kernel/cgroup/legacy_freezer.c
+@@ -66,15 +66,9 @@ static struct freezer *parent_freezer(struct freezer *freezer)
+ bool cgroup_freezing(struct task_struct *task)
+ {
+ 	bool ret;
+-	unsigned int state;
+ 
+ 	rcu_read_lock();
+-	/* Check if the cgroup is still FREEZING, but not FROZEN. The extra
+-	 * !FROZEN check is required, because the FREEZING bit is not cleared
+-	 * when the state FROZEN is reached.
+-	 */
+-	state = task_freezer(task)->state;
+-	ret = (state & CGROUP_FREEZING) && !(state & CGROUP_FROZEN);
++	ret = task_freezer(task)->state & CGROUP_FREEZING;
+ 	rcu_read_unlock();
+ 
+ 	return ret;
+diff --git a/kernel/freezer.c b/kernel/freezer.c
+index 8d530d0949ff69..6a96149aede9f5 100644
+--- a/kernel/freezer.c
++++ b/kernel/freezer.c
+@@ -201,18 +201,9 @@ static int __restore_freezer_state(struct task_struct *p, void *arg)
+ 
+ void __thaw_task(struct task_struct *p)
+ {
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&freezer_lock, flags);
+-	if (WARN_ON_ONCE(freezing(p)))
+-		goto unlock;
+-
+-	if (!frozen(p) || task_call_func(p, __restore_freezer_state, NULL))
+-		goto unlock;
+-
+-	wake_up_state(p, TASK_FROZEN);
+-unlock:
+-	spin_unlock_irqrestore(&freezer_lock, flags);
++	guard(spinlock_irqsave)(&freezer_lock);
++	if (frozen(p) && !task_call_func(p, __restore_freezer_state, NULL))
++		wake_up_state(p, TASK_FROZEN);
+ }
+ 
+ /**
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 86ce43fa366931..bb5148c87750e6 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -1149,7 +1149,8 @@ static inline struct rq *scx_locked_rq(void)
+ 
+ #define SCX_CALL_OP(mask, op, rq, args...)					\
+ do {										\
+-	update_locked_rq(rq);							\
++	if (rq)									\
++		update_locked_rq(rq);						\
+ 	if (mask) {								\
+ 		scx_kf_allow(mask);						\
+ 		scx_ops.op(args);						\
+@@ -1157,14 +1158,16 @@ do {										\
+ 	} else {								\
+ 		scx_ops.op(args);						\
+ 	}									\
+-	update_locked_rq(NULL);							\
++	if (rq)									\
++		update_locked_rq(NULL);						\
+ } while (0)
+ 
+ #define SCX_CALL_OP_RET(mask, op, rq, args...)					\
+ ({										\
+ 	__typeof__(scx_ops.op(args)) __ret;					\
+ 										\
+-	update_locked_rq(rq);							\
++	if (rq)									\
++		update_locked_rq(rq);						\
+ 	if (mask) {								\
+ 		scx_kf_allow(mask);						\
+ 		__ret = scx_ops.op(args);					\
+@@ -1172,7 +1175,8 @@ do {										\
+ 	} else {								\
+ 		__ret = scx_ops.op(args);					\
+ 	}									\
+-	update_locked_rq(NULL);							\
++	if (rq)									\
++		update_locked_rq(NULL);						\
+ 	__ret;									\
+ })
+ 
+diff --git a/kernel/sched/loadavg.c b/kernel/sched/loadavg.c
+index c48900b856a2aa..52ca8e268cfc56 100644
+--- a/kernel/sched/loadavg.c
++++ b/kernel/sched/loadavg.c
+@@ -80,7 +80,7 @@ long calc_load_fold_active(struct rq *this_rq, long adjust)
+ 	long nr_active, delta = 0;
+ 
+ 	nr_active = this_rq->nr_running - adjust;
+-	nr_active += (int)this_rq->nr_uninterruptible;
++	nr_active += (long)this_rq->nr_uninterruptible;
+ 
+ 	if (nr_active != this_rq->calc_load_active) {
+ 		delta = nr_active - this_rq->calc_load_active;
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 47972f34ea7014..d6f82833f65225 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -1147,7 +1147,7 @@ struct rq {
+ 	 * one CPU and if it got migrated afterwards it may decrease
+ 	 * it on another CPU. Always updated under the runqueue lock:
+ 	 */
+-	unsigned int		nr_uninterruptible;
++	unsigned long		nr_uninterruptible;
+ 
+ 	union {
+ 		struct task_struct __rcu *donor; /* Scheduler context */
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 069e92856bdacb..b2d7096e0ec327 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -3125,7 +3125,10 @@ __register_event(struct trace_event_call *call, struct module *mod)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	down_write(&trace_event_sem);
+ 	list_add(&call->list, &ftrace_events);
++	up_write(&trace_event_sem);
++
+ 	if (call->flags & TRACE_EVENT_FL_DYNAMIC)
+ 		atomic_set(&call->refcnt, 0);
+ 	else
+@@ -3739,6 +3742,8 @@ __trace_add_event_dirs(struct trace_array *tr)
+ 	struct trace_event_call *call;
+ 	int ret;
+ 
++	lockdep_assert_held(&trace_event_sem);
++
+ 	list_for_each_entry(call, &ftrace_events, list) {
+ 		ret = __trace_add_new_event(call, tr);
+ 		if (ret < 0)
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index e732c9e37e142c..9edcd5b2cd5afc 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -637,8 +637,8 @@ __timerlat_dump_stack(struct trace_buffer *buffer, struct trace_stack *fstack, u
+ 
+ 	entry = ring_buffer_event_data(event);
+ 
+-	memcpy(&entry->caller, fstack->calls, size);
+ 	entry->size = fstack->nr_entries;
++	memcpy(&entry->caller, fstack->calls, size);
+ 
+ 	trace_buffer_unlock_commit_nostack(buffer, event);
+ }
+diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
+index 424751cdf31f9f..40830a3ecd96c0 100644
+--- a/kernel/trace/trace_probe.c
++++ b/kernel/trace/trace_probe.c
+@@ -657,7 +657,7 @@ static int parse_btf_arg(char *varname,
+ 		ret = query_btf_context(ctx);
+ 		if (ret < 0 || ctx->nr_params == 0) {
+ 			trace_probe_log_err(ctx->offset, NO_BTF_ENTRY);
+-			return PTR_ERR(params);
++			return -ENOENT;
+ 		}
+ 	}
+ 	params = ctx->params;
+diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c
+index 41be38264493df..49a6d49c23dc59 100644
+--- a/net/8021q/vlan.c
++++ b/net/8021q/vlan.c
+@@ -358,6 +358,35 @@ static int __vlan_device_event(struct net_device *dev, unsigned long event)
+ 	return err;
+ }
+ 
++static void vlan_vid0_add(struct net_device *dev)
++{
++	struct vlan_info *vlan_info;
++	int err;
++
++	if (!(dev->features & NETIF_F_HW_VLAN_CTAG_FILTER))
++		return;
++
++	pr_info("adding VLAN 0 to HW filter on device %s\n", dev->name);
++
++	err = vlan_vid_add(dev, htons(ETH_P_8021Q), 0);
++	if (err)
++		return;
++
++	vlan_info = rtnl_dereference(dev->vlan_info);
++	vlan_info->auto_vid0 = true;
++}
++
++static void vlan_vid0_del(struct net_device *dev)
++{
++	struct vlan_info *vlan_info = rtnl_dereference(dev->vlan_info);
++
++	if (!vlan_info || !vlan_info->auto_vid0)
++		return;
++
++	vlan_info->auto_vid0 = false;
++	vlan_vid_del(dev, htons(ETH_P_8021Q), 0);
++}
++
+ static int vlan_device_event(struct notifier_block *unused, unsigned long event,
+ 			     void *ptr)
+ {
+@@ -379,15 +408,10 @@ static int vlan_device_event(struct notifier_block *unused, unsigned long event,
+ 			return notifier_from_errno(err);
+ 	}
+ 
+-	if ((event == NETDEV_UP) &&
+-	    (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) {
+-		pr_info("adding VLAN 0 to HW filter on device %s\n",
+-			dev->name);
+-		vlan_vid_add(dev, htons(ETH_P_8021Q), 0);
+-	}
+-	if (event == NETDEV_DOWN &&
+-	    (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER))
+-		vlan_vid_del(dev, htons(ETH_P_8021Q), 0);
++	if (event == NETDEV_UP)
++		vlan_vid0_add(dev);
++	else if (event == NETDEV_DOWN)
++		vlan_vid0_del(dev);
+ 
+ 	vlan_info = rtnl_dereference(dev->vlan_info);
+ 	if (!vlan_info)
+diff --git a/net/8021q/vlan.h b/net/8021q/vlan.h
+index 5eaf38875554b0..c7ffe591d59366 100644
+--- a/net/8021q/vlan.h
++++ b/net/8021q/vlan.h
+@@ -33,6 +33,7 @@ struct vlan_info {
+ 	struct vlan_group	grp;
+ 	struct list_head	vid_list;
+ 	unsigned int		nr_vids;
++	bool			auto_vid0;
+ 	struct rcu_head		rcu;
+ };
+ 
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index abff4690cb88ff..2f209e4421d7b7 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -2640,7 +2640,7 @@ int hci_register_dev(struct hci_dev *hdev)
+ 	/* Devices that are marked for raw-only usage are unconfigured
+ 	 * and should not be included in normal operation.
+ 	 */
+-	if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks))
++	if (hci_test_quirk(hdev, HCI_QUIRK_RAW_DEVICE))
+ 		hci_dev_set_flag(hdev, HCI_UNCONFIGURED);
+ 
+ 	/* Mark Remote Wakeup connection flag as supported if driver has wakeup
+@@ -2770,7 +2770,7 @@ int hci_register_suspend_notifier(struct hci_dev *hdev)
+ 	int ret = 0;
+ 
+ 	if (!hdev->suspend_notifier.notifier_call &&
+-	    !test_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks)) {
++	    !hci_test_quirk(hdev, HCI_QUIRK_NO_SUSPEND_NOTIFIER)) {
+ 		hdev->suspend_notifier.notifier_call = hci_suspend_notifier;
+ 		ret = register_pm_notifier(&hdev->suspend_notifier);
+ 	}
+diff --git a/net/bluetooth/hci_debugfs.c b/net/bluetooth/hci_debugfs.c
+index f625074d1f0020..99e2e9fc70e8c2 100644
+--- a/net/bluetooth/hci_debugfs.c
++++ b/net/bluetooth/hci_debugfs.c
+@@ -38,7 +38,7 @@ static ssize_t __name ## _read(struct file *file,			      \
+ 	struct hci_dev *hdev = file->private_data;			      \
+ 	char buf[3];							      \
+ 									      \
+-	buf[0] = test_bit(__quirk, &hdev->quirks) ? 'Y' : 'N';		      \
++	buf[0] = test_bit(__quirk, hdev->quirk_flags) ? 'Y' : 'N';	      \
+ 	buf[1] = '\n';							      \
+ 	buf[2] = '\0';							      \
+ 	return simple_read_from_buffer(user_buf, count, ppos, buf, 2);	      \
+@@ -59,10 +59,10 @@ static ssize_t __name ## _write(struct file *file,			      \
+ 	if (err)							      \
+ 		return err;						      \
+ 									      \
+-	if (enable == test_bit(__quirk, &hdev->quirks))			      \
++	if (enable == test_bit(__quirk, hdev->quirk_flags))		      \
+ 		return -EALREADY;					      \
+ 									      \
+-	change_bit(__quirk, &hdev->quirks);				      \
++	change_bit(__quirk, hdev->quirk_flags);				      \
+ 									      \
+ 	return count;							      \
+ }									      \
+@@ -1356,7 +1356,7 @@ static ssize_t vendor_diag_write(struct file *file, const char __user *user_buf,
+ 	 * for the vendor callback. Instead just store the desired value and
+ 	 * the setting will be programmed when the controller gets powered on.
+ 	 */
+-	if (test_bit(HCI_QUIRK_NON_PERSISTENT_DIAG, &hdev->quirks) &&
++	if (hci_test_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_DIAG) &&
+ 	    (!test_bit(HCI_RUNNING, &hdev->flags) ||
+ 	     hci_dev_test_flag(hdev, HCI_USER_CHANNEL)))
+ 		goto done;
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 992131f88a4568..cf4b30ac9e0e57 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -908,8 +908,8 @@ static u8 hci_cc_read_local_ext_features(struct hci_dev *hdev, void *data,
+ 		return rp->status;
+ 
+ 	if (hdev->max_page < rp->max_page) {
+-		if (test_bit(HCI_QUIRK_BROKEN_LOCAL_EXT_FEATURES_PAGE_2,
+-			     &hdev->quirks))
++		if (hci_test_quirk(hdev,
++				   HCI_QUIRK_BROKEN_LOCAL_EXT_FEATURES_PAGE_2))
+ 			bt_dev_warn(hdev, "broken local ext features page 2");
+ 		else
+ 			hdev->max_page = rp->max_page;
+@@ -936,7 +936,7 @@ static u8 hci_cc_read_buffer_size(struct hci_dev *hdev, void *data,
+ 	hdev->acl_pkts = __le16_to_cpu(rp->acl_max_pkt);
+ 	hdev->sco_pkts = __le16_to_cpu(rp->sco_max_pkt);
+ 
+-	if (test_bit(HCI_QUIRK_FIXUP_BUFFER_SIZE, &hdev->quirks)) {
++	if (hci_test_quirk(hdev, HCI_QUIRK_FIXUP_BUFFER_SIZE)) {
+ 		hdev->sco_mtu  = 64;
+ 		hdev->sco_pkts = 8;
+ 	}
+@@ -2971,7 +2971,7 @@ static void hci_inquiry_complete_evt(struct hci_dev *hdev, void *data,
+ 		 * state to indicate completion.
+ 		 */
+ 		if (!hci_dev_test_flag(hdev, HCI_LE_SCAN) ||
+-		    !test_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks))
++		    !hci_test_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY))
+ 			hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
+ 		goto unlock;
+ 	}
+@@ -2990,7 +2990,7 @@ static void hci_inquiry_complete_evt(struct hci_dev *hdev, void *data,
+ 		 * state to indicate completion.
+ 		 */
+ 		if (!hci_dev_test_flag(hdev, HCI_LE_SCAN) ||
+-		    !test_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks))
++		    !hci_test_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY))
+ 			hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
+ 	}
+ 
+@@ -3614,8 +3614,7 @@ static void hci_encrypt_change_evt(struct hci_dev *hdev, void *data,
+ 	/* We skip the WRITE_AUTH_PAYLOAD_TIMEOUT for ATS2851 based controllers
+ 	 * to avoid unexpected SMP command errors when pairing.
+ 	 */
+-	if (test_bit(HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT,
+-		     &hdev->quirks))
++	if (hci_test_quirk(hdev, HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT))
+ 		goto notify;
+ 
+ 	/* Set the default Authenticated Payload Timeout after
+@@ -5914,7 +5913,7 @@ static struct hci_conn *check_pending_le_conn(struct hci_dev *hdev,
+ 	 * while we have an existing one in peripheral role.
+ 	 */
+ 	if (hdev->conn_hash.le_num_peripheral > 0 &&
+-	    (test_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks) ||
++	    (hci_test_quirk(hdev, HCI_QUIRK_BROKEN_LE_STATES) ||
+ 	     !(hdev->le_states[3] & 0x10)))
+ 		return NULL;
+ 
+@@ -6310,8 +6309,8 @@ static void hci_le_ext_adv_report_evt(struct hci_dev *hdev, void *data,
+ 		evt_type = __le16_to_cpu(info->type) & LE_EXT_ADV_EVT_TYPE_MASK;
+ 		legacy_evt_type = ext_evt_type_to_legacy(hdev, evt_type);
+ 
+-		if (test_bit(HCI_QUIRK_FIXUP_LE_EXT_ADV_REPORT_PHY,
+-			     &hdev->quirks)) {
++		if (hci_test_quirk(hdev,
++				   HCI_QUIRK_FIXUP_LE_EXT_ADV_REPORT_PHY)) {
+ 			info->primary_phy &= 0x1f;
+ 			info->secondary_phy &= 0x1f;
+ 		}
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 3ac8d436e3e3a5..0972167c1d0622 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -393,7 +393,7 @@ static void le_scan_disable(struct work_struct *work)
+ 	if (hdev->discovery.type != DISCOV_TYPE_INTERLEAVED)
+ 		goto _return;
+ 
+-	if (test_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks)) {
++	if (hci_test_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY)) {
+ 		if (!test_bit(HCI_INQUIRY, &hdev->flags) &&
+ 		    hdev->discovery.state != DISCOVERY_RESOLVING)
+ 			goto discov_stopped;
+@@ -3573,7 +3573,7 @@ static void hci_dev_get_bd_addr_from_property(struct hci_dev *hdev)
+ 	if (ret < 0 || !bacmp(&ba, BDADDR_ANY))
+ 		return;
+ 
+-	if (test_bit(HCI_QUIRK_BDADDR_PROPERTY_BROKEN, &hdev->quirks))
++	if (hci_test_quirk(hdev, HCI_QUIRK_BDADDR_PROPERTY_BROKEN))
+ 		baswap(&hdev->public_addr, &ba);
+ 	else
+ 		bacpy(&hdev->public_addr, &ba);
+@@ -3648,7 +3648,7 @@ static int hci_init0_sync(struct hci_dev *hdev)
+ 	bt_dev_dbg(hdev, "");
+ 
+ 	/* Reset */
+-	if (!test_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks)) {
++	if (!hci_test_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE)) {
+ 		err = hci_reset_sync(hdev);
+ 		if (err)
+ 			return err;
+@@ -3661,7 +3661,7 @@ static int hci_unconf_init_sync(struct hci_dev *hdev)
+ {
+ 	int err;
+ 
+-	if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks))
++	if (hci_test_quirk(hdev, HCI_QUIRK_RAW_DEVICE))
+ 		return 0;
+ 
+ 	err = hci_init0_sync(hdev);
+@@ -3704,7 +3704,7 @@ static int hci_read_local_cmds_sync(struct hci_dev *hdev)
+ 	 * supported commands.
+ 	 */
+ 	if (hdev->hci_ver > BLUETOOTH_VER_1_1 &&
+-	    !test_bit(HCI_QUIRK_BROKEN_LOCAL_COMMANDS, &hdev->quirks))
++	    !hci_test_quirk(hdev, HCI_QUIRK_BROKEN_LOCAL_COMMANDS))
+ 		return __hci_cmd_sync_status(hdev, HCI_OP_READ_LOCAL_COMMANDS,
+ 					     0, NULL, HCI_CMD_TIMEOUT);
+ 
+@@ -3718,7 +3718,7 @@ static int hci_init1_sync(struct hci_dev *hdev)
+ 	bt_dev_dbg(hdev, "");
+ 
+ 	/* Reset */
+-	if (!test_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks)) {
++	if (!hci_test_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE)) {
+ 		err = hci_reset_sync(hdev);
+ 		if (err)
+ 			return err;
+@@ -3781,7 +3781,7 @@ static int hci_set_event_filter_sync(struct hci_dev *hdev, u8 flt_type,
+ 	if (!hci_dev_test_flag(hdev, HCI_BREDR_ENABLED))
+ 		return 0;
+ 
+-	if (test_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks))
++	if (hci_test_quirk(hdev, HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL))
+ 		return 0;
+ 
+ 	memset(&cp, 0, sizeof(cp));
+@@ -3808,7 +3808,7 @@ static int hci_clear_event_filter_sync(struct hci_dev *hdev)
+ 	 * a hci_set_event_filter_sync() call succeeds, but we do
+ 	 * the check both for parity and as a future reminder.
+ 	 */
+-	if (test_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks))
++	if (hci_test_quirk(hdev, HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL))
+ 		return 0;
+ 
+ 	return hci_set_event_filter_sync(hdev, HCI_FLT_CLEAR_ALL, 0x00,
+@@ -3832,7 +3832,7 @@ static int hci_write_sync_flowctl_sync(struct hci_dev *hdev)
+ 
+ 	/* Check if the controller supports SCO and HCI_OP_WRITE_SYNC_FLOWCTL */
+ 	if (!lmp_sco_capable(hdev) || !(hdev->commands[10] & BIT(4)) ||
+-	    !test_bit(HCI_QUIRK_SYNC_FLOWCTL_SUPPORTED, &hdev->quirks))
++	    !hci_test_quirk(hdev, HCI_QUIRK_SYNC_FLOWCTL_SUPPORTED))
+ 		return 0;
+ 
+ 	memset(&cp, 0, sizeof(cp));
+@@ -3907,7 +3907,7 @@ static int hci_write_inquiry_mode_sync(struct hci_dev *hdev)
+ 	u8 mode;
+ 
+ 	if (!lmp_inq_rssi_capable(hdev) &&
+-	    !test_bit(HCI_QUIRK_FIXUP_INQUIRY_MODE, &hdev->quirks))
++	    !hci_test_quirk(hdev, HCI_QUIRK_FIXUP_INQUIRY_MODE))
+ 		return 0;
+ 
+ 	/* If Extended Inquiry Result events are supported, then
+@@ -4097,7 +4097,7 @@ static int hci_set_event_mask_sync(struct hci_dev *hdev)
+ 	}
+ 
+ 	if (lmp_inq_rssi_capable(hdev) ||
+-	    test_bit(HCI_QUIRK_FIXUP_INQUIRY_MODE, &hdev->quirks))
++	    hci_test_quirk(hdev, HCI_QUIRK_FIXUP_INQUIRY_MODE))
+ 		events[4] |= 0x02; /* Inquiry Result with RSSI */
+ 
+ 	if (lmp_ext_feat_capable(hdev))
+@@ -4149,7 +4149,7 @@ static int hci_read_stored_link_key_sync(struct hci_dev *hdev)
+ 	struct hci_cp_read_stored_link_key cp;
+ 
+ 	if (!(hdev->commands[6] & 0x20) ||
+-	    test_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks))
++	    hci_test_quirk(hdev, HCI_QUIRK_BROKEN_STORED_LINK_KEY))
+ 		return 0;
+ 
+ 	memset(&cp, 0, sizeof(cp));
+@@ -4198,7 +4198,7 @@ static int hci_read_def_err_data_reporting_sync(struct hci_dev *hdev)
+ {
+ 	if (!(hdev->commands[18] & 0x04) ||
+ 	    !(hdev->features[0][6] & LMP_ERR_DATA_REPORTING) ||
+-	    test_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks))
++	    hci_test_quirk(hdev, HCI_QUIRK_BROKEN_ERR_DATA_REPORTING))
+ 		return 0;
+ 
+ 	return __hci_cmd_sync_status(hdev, HCI_OP_READ_DEF_ERR_DATA_REPORTING,
+@@ -4212,7 +4212,7 @@ static int hci_read_page_scan_type_sync(struct hci_dev *hdev)
+ 	 * this command in the bit mask of supported commands.
+ 	 */
+ 	if (!(hdev->commands[13] & 0x01) ||
+-	    test_bit(HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE, &hdev->quirks))
++	    hci_test_quirk(hdev, HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE))
+ 		return 0;
+ 
+ 	return __hci_cmd_sync_status(hdev, HCI_OP_READ_PAGE_SCAN_TYPE,
+@@ -4407,7 +4407,7 @@ static int hci_le_read_adv_tx_power_sync(struct hci_dev *hdev)
+ static int hci_le_read_tx_power_sync(struct hci_dev *hdev)
+ {
+ 	if (!(hdev->commands[38] & 0x80) ||
+-	    test_bit(HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER, &hdev->quirks))
++	    hci_test_quirk(hdev, HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER))
+ 		return 0;
+ 
+ 	return __hci_cmd_sync_status(hdev, HCI_OP_LE_READ_TRANSMIT_POWER,
+@@ -4450,7 +4450,7 @@ static int hci_le_set_rpa_timeout_sync(struct hci_dev *hdev)
+ 	__le16 timeout = cpu_to_le16(hdev->rpa_timeout);
+ 
+ 	if (!(hdev->commands[35] & 0x04) ||
+-	    test_bit(HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT, &hdev->quirks))
++	    hci_test_quirk(hdev, HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT))
+ 		return 0;
+ 
+ 	return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_RPA_TIMEOUT,
+@@ -4595,7 +4595,7 @@ static int hci_delete_stored_link_key_sync(struct hci_dev *hdev)
+ 	 * just disable this command.
+ 	 */
+ 	if (!(hdev->commands[6] & 0x80) ||
+-	    test_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks))
++	    hci_test_quirk(hdev, HCI_QUIRK_BROKEN_STORED_LINK_KEY))
+ 		return 0;
+ 
+ 	memset(&cp, 0, sizeof(cp));
+@@ -4721,7 +4721,7 @@ static int hci_set_err_data_report_sync(struct hci_dev *hdev)
+ 
+ 	if (!(hdev->commands[18] & 0x08) ||
+ 	    !(hdev->features[0][6] & LMP_ERR_DATA_REPORTING) ||
+-	    test_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks))
++	    hci_test_quirk(hdev, HCI_QUIRK_BROKEN_ERR_DATA_REPORTING))
+ 		return 0;
+ 
+ 	if (enabled == hdev->err_data_reporting)
+@@ -4934,7 +4934,7 @@ static int hci_dev_setup_sync(struct hci_dev *hdev)
+ 	size_t i;
+ 
+ 	if (!hci_dev_test_flag(hdev, HCI_SETUP) &&
+-	    !test_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks))
++	    !hci_test_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP))
+ 		return 0;
+ 
+ 	bt_dev_dbg(hdev, "");
+@@ -4945,7 +4945,7 @@ static int hci_dev_setup_sync(struct hci_dev *hdev)
+ 		ret = hdev->setup(hdev);
+ 
+ 	for (i = 0; i < ARRAY_SIZE(hci_broken_table); i++) {
+-		if (test_bit(hci_broken_table[i].quirk, &hdev->quirks))
++		if (hci_test_quirk(hdev, hci_broken_table[i].quirk))
+ 			bt_dev_warn(hdev, "%s", hci_broken_table[i].desc);
+ 	}
+ 
+@@ -4953,10 +4953,10 @@ static int hci_dev_setup_sync(struct hci_dev *hdev)
+ 	 * BD_ADDR invalid before creating the HCI device or in
+ 	 * its setup callback.
+ 	 */
+-	invalid_bdaddr = test_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks) ||
+-			 test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
++	invalid_bdaddr = hci_test_quirk(hdev, HCI_QUIRK_INVALID_BDADDR) ||
++			 hci_test_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY);
+ 	if (!ret) {
+-		if (test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks) &&
++		if (hci_test_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY) &&
+ 		    !bacmp(&hdev->public_addr, BDADDR_ANY))
+ 			hci_dev_get_bd_addr_from_property(hdev);
+ 
+@@ -4978,7 +4978,7 @@ static int hci_dev_setup_sync(struct hci_dev *hdev)
+ 	 * In case any of them is set, the controller has to
+ 	 * start up as unconfigured.
+ 	 */
+-	if (test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks) ||
++	if (hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG) ||
+ 	    invalid_bdaddr)
+ 		hci_dev_set_flag(hdev, HCI_UNCONFIGURED);
+ 
+@@ -5038,7 +5038,7 @@ static int hci_dev_init_sync(struct hci_dev *hdev)
+ 	 * then they need to be reprogrammed after the init procedure
+ 	 * completed.
+ 	 */
+-	if (test_bit(HCI_QUIRK_NON_PERSISTENT_DIAG, &hdev->quirks) &&
++	if (hci_test_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_DIAG) &&
+ 	    !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
+ 	    hci_dev_test_flag(hdev, HCI_VENDOR_DIAG) && hdev->set_diag)
+ 		ret = hdev->set_diag(hdev, true);
+@@ -5295,7 +5295,7 @@ int hci_dev_close_sync(struct hci_dev *hdev)
+ 	/* Reset device */
+ 	skb_queue_purge(&hdev->cmd_q);
+ 	atomic_set(&hdev->cmd_cnt, 1);
+-	if (test_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks) &&
++	if (hci_test_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE) &&
+ 	    !auto_off && !hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
+ 		set_bit(HCI_INIT, &hdev->flags);
+ 		hci_reset_sync(hdev);
+@@ -5945,7 +5945,7 @@ static int hci_active_scan_sync(struct hci_dev *hdev, uint16_t interval)
+ 		own_addr_type = ADDR_LE_DEV_PUBLIC;
+ 
+ 	if (hci_is_adv_monitoring(hdev) ||
+-	    (test_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks) &&
++	    (hci_test_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER) &&
+ 	    hdev->discovery.result_filtering)) {
+ 		/* Duplicate filter should be disabled when some advertisement
+ 		 * monitor is activated, otherwise AdvMon can only receive one
+@@ -6008,8 +6008,7 @@ int hci_start_discovery_sync(struct hci_dev *hdev)
+ 		 * and LE scanning are done sequentially with separate
+ 		 * timeouts.
+ 		 */
+-		if (test_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY,
+-			     &hdev->quirks)) {
++		if (hci_test_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY)) {
+ 			timeout = msecs_to_jiffies(DISCOV_LE_TIMEOUT);
+ 			/* During simultaneous discovery, we double LE scan
+ 			 * interval. We must leave some time for the controller
+@@ -6086,7 +6085,7 @@ static int hci_update_event_filter_sync(struct hci_dev *hdev)
+ 	/* Some fake CSR controllers lock up after setting this type of
+ 	 * filter, so avoid sending the request altogether.
+ 	 */
+-	if (test_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks))
++	if (hci_test_quirk(hdev, HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL))
+ 		return 0;
+ 
+ 	/* Always clear event filter when starting */
+@@ -6801,8 +6800,8 @@ int hci_get_random_address(struct hci_dev *hdev, bool require_privacy,
+ 		return 0;
+ 	}
+ 
+-	/* No privacy so use a public address. */
+-	*own_addr_type = ADDR_LE_DEV_PUBLIC;
++	/* No privacy, use the current address */
++	hci_copy_identity_address(hdev, rand_addr, own_addr_type);
+ 
+ 	return 0;
+ }
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 40daa38276f353..805c752ac0a9d3 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -3520,12 +3520,28 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
+ 		/* Configure output options and let the other side know
+ 		 * which ones we don't like. */
+ 
+-		/* If MTU is not provided in configure request, use the most recently
+-		 * explicitly or implicitly accepted value for the other direction,
+-		 * or the default value.
++		/* If MTU is not provided in the configure request, try adjusting
++		 * it to the current output MTU, if one has been set.
++		 *
++		 * Bluetooth Core 6.1, Vol 3, Part A, Section 4.5
++		 *
++		 * Each configuration parameter value (if any is present) in an
++		 * L2CAP_CONFIGURATION_RSP packet reflects an 'adjustment' to a
++		 * configuration parameter value that has been sent (or, in case
++		 * of default values, implied) in the corresponding
++		 * L2CAP_CONFIGURATION_REQ packet.
+ 		 */
+-		if (mtu == 0)
+-			mtu = chan->imtu ? chan->imtu : L2CAP_DEFAULT_MTU;
++		if (!mtu) {
++			/* Only adjust for ERTM channels as for older modes the
++			 * remote stack may not be able to detect that the
++			 * remote stack may not be able to detect the
++			 * adjustment, causing it to silently drop packets.
++			if (chan->mode == L2CAP_MODE_ERTM &&
++			    chan->omtu && chan->omtu != L2CAP_DEFAULT_MTU)
++				mtu = chan->omtu;
++			else
++				mtu = L2CAP_DEFAULT_MTU;
++		}
+ 
+ 		if (mtu < L2CAP_DEFAULT_MIN_MTU)
+ 			result = L2CAP_CONF_UNACCEPT;
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index 5aa55fa695943a..82d943c4cb5059 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -1703,6 +1703,9 @@ static void l2cap_sock_resume_cb(struct l2cap_chan *chan)
+ {
+ 	struct sock *sk = chan->data;
+ 
++	if (!sk)
++		return;
++
+ 	if (test_and_clear_bit(FLAG_PENDING_SECURITY, &chan->flags)) {
+ 		sk->sk_state = BT_CONNECTED;
+ 		chan->state = BT_CONNECTED;
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 1485b455ade464..63dba0503653bd 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -464,7 +464,7 @@ static int read_index_list(struct sock *sk, struct hci_dev *hdev, void *data,
+ 		/* Devices marked as raw-only are neither configured
+ 		 * nor unconfigured controllers.
+ 		 */
+-		if (test_bit(HCI_QUIRK_RAW_DEVICE, &d->quirks))
++		if (hci_test_quirk(d, HCI_QUIRK_RAW_DEVICE))
+ 			continue;
+ 
+ 		if (!hci_dev_test_flag(d, HCI_UNCONFIGURED)) {
+@@ -522,7 +522,7 @@ static int read_unconf_index_list(struct sock *sk, struct hci_dev *hdev,
+ 		/* Devices marked as raw-only are neither configured
+ 		 * nor unconfigured controllers.
+ 		 */
+-		if (test_bit(HCI_QUIRK_RAW_DEVICE, &d->quirks))
++		if (hci_test_quirk(d, HCI_QUIRK_RAW_DEVICE))
+ 			continue;
+ 
+ 		if (hci_dev_test_flag(d, HCI_UNCONFIGURED)) {
+@@ -576,7 +576,7 @@ static int read_ext_index_list(struct sock *sk, struct hci_dev *hdev,
+ 		/* Devices marked as raw-only are neither configured
+ 		 * nor unconfigured controllers.
+ 		 */
+-		if (test_bit(HCI_QUIRK_RAW_DEVICE, &d->quirks))
++		if (hci_test_quirk(d, HCI_QUIRK_RAW_DEVICE))
+ 			continue;
+ 
+ 		if (hci_dev_test_flag(d, HCI_UNCONFIGURED))
+@@ -612,12 +612,12 @@ static int read_ext_index_list(struct sock *sk, struct hci_dev *hdev,
+ 
+ static bool is_configured(struct hci_dev *hdev)
+ {
+-	if (test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks) &&
++	if (hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG) &&
+ 	    !hci_dev_test_flag(hdev, HCI_EXT_CONFIGURED))
+ 		return false;
+ 
+-	if ((test_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks) ||
+-	     test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks)) &&
++	if ((hci_test_quirk(hdev, HCI_QUIRK_INVALID_BDADDR) ||
++	     hci_test_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY)) &&
+ 	    !bacmp(&hdev->public_addr, BDADDR_ANY))
+ 		return false;
+ 
+@@ -628,12 +628,12 @@ static __le32 get_missing_options(struct hci_dev *hdev)
+ {
+ 	u32 options = 0;
+ 
+-	if (test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks) &&
++	if (hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG) &&
+ 	    !hci_dev_test_flag(hdev, HCI_EXT_CONFIGURED))
+ 		options |= MGMT_OPTION_EXTERNAL_CONFIG;
+ 
+-	if ((test_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks) ||
+-	     test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks)) &&
++	if ((hci_test_quirk(hdev, HCI_QUIRK_INVALID_BDADDR) ||
++	     hci_test_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY)) &&
+ 	    !bacmp(&hdev->public_addr, BDADDR_ANY))
+ 		options |= MGMT_OPTION_PUBLIC_ADDRESS;
+ 
+@@ -669,7 +669,7 @@ static int read_config_info(struct sock *sk, struct hci_dev *hdev,
+ 	memset(&rp, 0, sizeof(rp));
+ 	rp.manufacturer = cpu_to_le16(hdev->manufacturer);
+ 
+-	if (test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks))
++	if (hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG))
+ 		options |= MGMT_OPTION_EXTERNAL_CONFIG;
+ 
+ 	if (hdev->set_bdaddr)
+@@ -828,8 +828,7 @@ static u32 get_supported_settings(struct hci_dev *hdev)
+ 		if (lmp_sc_capable(hdev))
+ 			settings |= MGMT_SETTING_SECURE_CONN;
+ 
+-		if (test_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
+-			     &hdev->quirks))
++		if (hci_test_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED))
+ 			settings |= MGMT_SETTING_WIDEBAND_SPEECH;
+ 	}
+ 
+@@ -841,8 +840,7 @@ static u32 get_supported_settings(struct hci_dev *hdev)
+ 		settings |= MGMT_SETTING_ADVERTISING;
+ 	}
+ 
+-	if (test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks) ||
+-	    hdev->set_bdaddr)
++	if (hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG) || hdev->set_bdaddr)
+ 		settings |= MGMT_SETTING_CONFIGURATION;
+ 
+ 	if (cis_central_capable(hdev))
+@@ -4307,7 +4305,7 @@ static int set_wideband_speech(struct sock *sk, struct hci_dev *hdev,
+ 
+ 	bt_dev_dbg(hdev, "sock %p", sk);
+ 
+-	if (!test_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks))
++	if (!hci_test_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED))
+ 		return mgmt_cmd_status(sk, hdev->id,
+ 				       MGMT_OP_SET_WIDEBAND_SPEECH,
+ 				       MGMT_STATUS_NOT_SUPPORTED);
+@@ -7935,7 +7933,7 @@ static int set_external_config(struct sock *sk, struct hci_dev *hdev,
+ 		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_EXTERNAL_CONFIG,
+ 				         MGMT_STATUS_INVALID_PARAMS);
+ 
+-	if (!test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks))
++	if (!hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG))
+ 		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_EXTERNAL_CONFIG,
+ 				       MGMT_STATUS_NOT_SUPPORTED);
+ 
+@@ -9338,7 +9336,7 @@ void mgmt_index_added(struct hci_dev *hdev)
+ {
+ 	struct mgmt_ev_ext_index ev;
+ 
+-	if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks))
++	if (hci_test_quirk(hdev, HCI_QUIRK_RAW_DEVICE))
+ 		return;
+ 
+ 	if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
+@@ -9362,7 +9360,7 @@ void mgmt_index_removed(struct hci_dev *hdev)
+ 	struct mgmt_ev_ext_index ev;
+ 	struct cmd_lookup match = { NULL, hdev, MGMT_STATUS_INVALID_INDEX };
+ 
+-	if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks))
++	if (hci_test_quirk(hdev, HCI_QUIRK_RAW_DEVICE))
+ 		return;
+ 
+ 	mgmt_pending_foreach(0, hdev, true, cmd_complete_rsp, &match);
+@@ -10089,7 +10087,7 @@ static bool is_filter_match(struct hci_dev *hdev, s8 rssi, u8 *eir,
+ 	if (hdev->discovery.rssi != HCI_RSSI_INVALID &&
+ 	    (rssi == HCI_RSSI_INVALID ||
+ 	    (rssi < hdev->discovery.rssi &&
+-	     !test_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks))))
++	     !hci_test_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER))))
+ 		return  false;
+ 
+ 	if (hdev->discovery.uuid_count != 0) {
+@@ -10107,7 +10105,7 @@ static bool is_filter_match(struct hci_dev *hdev, s8 rssi, u8 *eir,
+ 	/* If duplicate filtering does not report RSSI changes, then restart
+ 	 * scanning to ensure updated result with updated RSSI values.
+ 	 */
+-	if (test_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks)) {
++	if (hci_test_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER)) {
+ 		/* Validate RSSI value against the RSSI threshold once more. */
+ 		if (hdev->discovery.rssi != HCI_RSSI_INVALID &&
+ 		    rssi < hdev->discovery.rssi)
+diff --git a/net/bluetooth/msft.c b/net/bluetooth/msft.c
+index 5a8ccc491b1444..c560d84676696b 100644
+--- a/net/bluetooth/msft.c
++++ b/net/bluetooth/msft.c
+@@ -989,7 +989,7 @@ static void msft_monitor_device_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 
+ 	handle_data = msft_find_handle_data(hdev, ev->monitor_handle, false);
+ 
+-	if (!test_bit(HCI_QUIRK_USE_MSFT_EXT_ADDRESS_FILTER, &hdev->quirks)) {
++	if (!hci_test_quirk(hdev, HCI_QUIRK_USE_MSFT_EXT_ADDRESS_FILTER)) {
+ 		if (!handle_data)
+ 			return;
+ 		mgmt_handle = handle_data->mgmt_handle;
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index 47f359f24d1fde..8115d42fc15b03 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -1379,7 +1379,7 @@ static void smp_timeout(struct work_struct *work)
+ 
+ 	bt_dev_dbg(conn->hcon->hdev, "conn %p", conn);
+ 
+-	hci_disconnect(conn->hcon, HCI_ERROR_REMOTE_USER_TERM);
++	hci_disconnect(conn->hcon, HCI_ERROR_AUTH_FAILURE);
+ }
+ 
+ static struct smp_chan *smp_chan_create(struct l2cap_conn *conn)
+@@ -2977,8 +2977,25 @@ static int smp_sig_channel(struct l2cap_chan *chan, struct sk_buff *skb)
+ 	if (code > SMP_CMD_MAX)
+ 		goto drop;
+ 
+-	if (smp && !test_and_clear_bit(code, &smp->allow_cmd))
++	if (smp && !test_and_clear_bit(code, &smp->allow_cmd)) {
++		/* If there is a context and the command is not allowed, consider
++		 * it a failure so the session is cleaned up properly.
++		 */
++		switch (code) {
++		case SMP_CMD_IDENT_INFO:
++		case SMP_CMD_IDENT_ADDR_INFO:
++		case SMP_CMD_SIGN_INFO:
++			/* 3.6.1. Key distribution and generation
++			 *
++			 * A device may reject a distributed key by sending the
++			 * Pairing Failed command with the reason set to
++			 * "Key Rejected".
++			 */
++			smp_failure(conn, SMP_KEY_REJECTED);
++			break;
++		}
+ 		goto drop;
++	}
+ 
+ 	/* If we don't have a context the only allowed commands are
+ 	 * pairing request and security request.
+diff --git a/net/bluetooth/smp.h b/net/bluetooth/smp.h
+index 87a59ec2c9f02b..c5da53dfab04f2 100644
+--- a/net/bluetooth/smp.h
++++ b/net/bluetooth/smp.h
+@@ -138,6 +138,7 @@ struct smp_cmd_keypress_notify {
+ #define SMP_NUMERIC_COMP_FAILED		0x0c
+ #define SMP_BREDR_PAIRING_IN_PROGRESS	0x0d
+ #define SMP_CROSS_TRANSP_NOT_ALLOWED	0x0e
++#define SMP_KEY_REJECTED		0x0f
+ 
+ #define SMP_MIN_ENC_KEY_SIZE		7
+ #define SMP_MAX_ENC_KEY_SIZE		16
+diff --git a/net/bridge/br_switchdev.c b/net/bridge/br_switchdev.c
+index 7b41ee8740cbba..f10bd6a233dcf9 100644
+--- a/net/bridge/br_switchdev.c
++++ b/net/bridge/br_switchdev.c
+@@ -17,6 +17,9 @@ static bool nbp_switchdev_can_offload_tx_fwd(const struct net_bridge_port *p,
+ 	if (!static_branch_unlikely(&br_switchdev_tx_fwd_offload))
+ 		return false;
+ 
++	if (br_multicast_igmp_type(skb))
++		return false;
++
+ 	return (p->flags & BR_TX_FWD_OFFLOAD) &&
+ 	       (p->hwdom != BR_INPUT_SKB_CB(skb)->src_hwdom);
+ }
+diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
+index d293087b426df7..be5c2294610e5b 100644
+--- a/net/ipv4/tcp_offload.c
++++ b/net/ipv4/tcp_offload.c
+@@ -359,6 +359,7 @@ struct sk_buff *tcp_gro_receive(struct list_head *head, struct sk_buff *skb,
+ 		flush |= skb->ip_summed != p->ip_summed;
+ 		flush |= skb->csum_level != p->csum_level;
+ 		flush |= NAPI_GRO_CB(p)->count >= 64;
++		skb_set_network_header(skb, skb_gro_receive_network_offset(skb));
+ 
+ 		if (flush || skb_gro_receive_list(p, skb))
+ 			mss = 1;
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index 9b295b2878befa..a1aca630867779 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -604,6 +604,7 @@ static struct sk_buff *udp_gro_receive_segment(struct list_head *head,
+ 					NAPI_GRO_CB(skb)->flush = 1;
+ 					return NULL;
+ 				}
++				skb_set_network_header(skb, skb_gro_receive_network_offset(skb));
+ 				ret = skb_gro_receive_list(p, skb);
+ 			} else {
+ 				skb_gro_postpull_rcsum(skb, uh,
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index 65831b4fee1fda..616bf4c0c8fd91 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -807,8 +807,8 @@ static void mld_del_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im)
+ 		} else {
+ 			im->mca_crcount = idev->mc_qrv;
+ 		}
+-		in6_dev_put(pmc->idev);
+ 		ip6_mc_clear_src(pmc);
++		in6_dev_put(pmc->idev);
+ 		kfree_rcu(pmc, rcu);
+ 	}
+ }
+diff --git a/net/ipv6/rpl_iptunnel.c b/net/ipv6/rpl_iptunnel.c
+index 7c05ac846646f3..eccfa4203e96b4 100644
+--- a/net/ipv6/rpl_iptunnel.c
++++ b/net/ipv6/rpl_iptunnel.c
+@@ -129,13 +129,13 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
+ 			     struct dst_entry *cache_dst)
+ {
+ 	struct ipv6_rpl_sr_hdr *isrh, *csrh;
+-	const struct ipv6hdr *oldhdr;
++	struct ipv6hdr oldhdr;
+ 	struct ipv6hdr *hdr;
+ 	unsigned char *buf;
+ 	size_t hdrlen;
+ 	int err;
+ 
+-	oldhdr = ipv6_hdr(skb);
++	memcpy(&oldhdr, ipv6_hdr(skb), sizeof(oldhdr));
+ 
+ 	buf = kcalloc(struct_size(srh, segments.addr, srh->segments_left), 2, GFP_ATOMIC);
+ 	if (!buf)
+@@ -147,7 +147,7 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
+ 	memcpy(isrh, srh, sizeof(*isrh));
+ 	memcpy(isrh->rpl_segaddr, &srh->rpl_segaddr[1],
+ 	       (srh->segments_left - 1) * 16);
+-	isrh->rpl_segaddr[srh->segments_left - 1] = oldhdr->daddr;
++	isrh->rpl_segaddr[srh->segments_left - 1] = oldhdr.daddr;
+ 
+ 	ipv6_rpl_srh_compress(csrh, isrh, &srh->rpl_segaddr[0],
+ 			      isrh->segments_left - 1);
+@@ -169,7 +169,7 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
+ 	skb_mac_header_rebuild(skb);
+ 
+ 	hdr = ipv6_hdr(skb);
+-	memmove(hdr, oldhdr, sizeof(*hdr));
++	memmove(hdr, &oldhdr, sizeof(*hdr));
+ 	isrh = (void *)hdr + sizeof(*hdr);
+ 	memcpy(isrh, csrh, hdrlen);
+ 
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 421ced0312890b..1f898888b22357 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -978,8 +978,9 @@ static bool check_fully_established(struct mptcp_sock *msk, struct sock *ssk,
+ 		if (subflow->mp_join)
+ 			goto reset;
+ 		subflow->mp_capable = 0;
++		if (!mptcp_try_fallback(ssk))
++			goto reset;
+ 		pr_fallback(msk);
+-		mptcp_do_fallback(ssk);
+ 		return false;
+ 	}
+ 
+diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
+index 31747f974941fa..b6870dc04095dc 100644
+--- a/net/mptcp/pm.c
++++ b/net/mptcp/pm.c
+@@ -761,8 +761,14 @@ void mptcp_pm_mp_fail_received(struct sock *sk, u64 fail_seq)
+ 
+ 	pr_debug("fail_seq=%llu\n", fail_seq);
+ 
+-	if (!READ_ONCE(msk->allow_infinite_fallback))
++	/* After accepting the fail, we can't create any other subflows */
++	spin_lock_bh(&msk->fallback_lock);
++	if (!msk->allow_infinite_fallback) {
++		spin_unlock_bh(&msk->fallback_lock);
+ 		return;
++	}
++	msk->allow_subflows = false;
++	spin_unlock_bh(&msk->fallback_lock);
+ 
+ 	if (!subflow->fail_tout) {
+ 		pr_debug("send MP_FAIL response and infinite map\n");
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 44f7ab463d7550..f31107017569d8 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -558,10 +558,9 @@ static bool mptcp_check_data_fin(struct sock *sk)
+ 
+ static void mptcp_dss_corruption(struct mptcp_sock *msk, struct sock *ssk)
+ {
+-	if (READ_ONCE(msk->allow_infinite_fallback)) {
++	if (mptcp_try_fallback(ssk)) {
+ 		MPTCP_INC_STATS(sock_net(ssk),
+ 				MPTCP_MIB_DSSCORRUPTIONFALLBACK);
+-		mptcp_do_fallback(ssk);
+ 	} else {
+ 		MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_DSSCORRUPTIONRESET);
+ 		mptcp_subflow_reset(ssk);
+@@ -790,7 +789,7 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
+ static void mptcp_subflow_joined(struct mptcp_sock *msk, struct sock *ssk)
+ {
+ 	mptcp_subflow_ctx(ssk)->map_seq = READ_ONCE(msk->ack_seq);
+-	WRITE_ONCE(msk->allow_infinite_fallback, false);
++	msk->allow_infinite_fallback = false;
+ 	mptcp_event(MPTCP_EVENT_SUB_ESTABLISHED, msk, ssk, GFP_ATOMIC);
+ }
+ 
+@@ -801,6 +800,14 @@ static bool __mptcp_finish_join(struct mptcp_sock *msk, struct sock *ssk)
+ 	if (sk->sk_state != TCP_ESTABLISHED)
+ 		return false;
+ 
++	spin_lock_bh(&msk->fallback_lock);
++	if (!msk->allow_subflows) {
++		spin_unlock_bh(&msk->fallback_lock);
++		return false;
++	}
++	mptcp_subflow_joined(msk, ssk);
++	spin_unlock_bh(&msk->fallback_lock);
++
+ 	/* attach to msk socket only after we are sure we will deal with it
+ 	 * at close time
+ 	 */
+@@ -809,7 +816,6 @@ static bool __mptcp_finish_join(struct mptcp_sock *msk, struct sock *ssk)
+ 
+ 	mptcp_subflow_ctx(ssk)->subflow_id = msk->subflow_id++;
+ 	mptcp_sockopt_sync_locked(msk, ssk);
+-	mptcp_subflow_joined(msk, ssk);
+ 	mptcp_stop_tout_timer(sk);
+ 	__mptcp_propagate_sndbuf(sk, ssk);
+ 	return true;
+@@ -1134,10 +1140,14 @@ static void mptcp_update_infinite_map(struct mptcp_sock *msk,
+ 	mpext->infinite_map = 1;
+ 	mpext->data_len = 0;
+ 
++	if (!mptcp_try_fallback(ssk)) {
++		mptcp_subflow_reset(ssk);
++		return;
++	}
++
+ 	MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPTX);
+ 	mptcp_subflow_ctx(ssk)->send_infinite_map = 0;
+ 	pr_fallback(msk);
+-	mptcp_do_fallback(ssk);
+ }
+ 
+ #define MPTCP_MAX_GSO_SIZE (GSO_LEGACY_MAX_SIZE - (MAX_TCP_HEADER + 1))
+@@ -2541,9 +2551,9 @@ static void mptcp_check_fastclose(struct mptcp_sock *msk)
+ 
+ static void __mptcp_retrans(struct sock *sk)
+ {
++	struct mptcp_sendmsg_info info = { .data_lock_held = true, };
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+ 	struct mptcp_subflow_context *subflow;
+-	struct mptcp_sendmsg_info info = {};
+ 	struct mptcp_data_frag *dfrag;
+ 	struct sock *ssk;
+ 	int ret, err;
+@@ -2588,6 +2598,18 @@ static void __mptcp_retrans(struct sock *sk)
+ 			info.sent = 0;
+ 			info.limit = READ_ONCE(msk->csum_enabled) ? dfrag->data_len :
+ 								    dfrag->already_sent;
++
++			/*
++			 * make the whole retrans decision, xmit and
++			 * disallow-fallback sequence atomic
++			 */
++			spin_lock_bh(&msk->fallback_lock);
++			if (__mptcp_check_fallback(msk)) {
++				spin_unlock_bh(&msk->fallback_lock);
++				release_sock(ssk);
++				return;
++			}
++
+ 			while (info.sent < info.limit) {
+ 				ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
+ 				if (ret <= 0)
+@@ -2601,8 +2623,9 @@ static void __mptcp_retrans(struct sock *sk)
+ 				len = max(copied, len);
+ 				tcp_push(ssk, 0, info.mss_now, tcp_sk(ssk)->nonagle,
+ 					 info.size_goal);
+-				WRITE_ONCE(msk->allow_infinite_fallback, false);
++				msk->allow_infinite_fallback = false;
+ 			}
++			spin_unlock_bh(&msk->fallback_lock);
+ 
+ 			release_sock(ssk);
+ 		}
+@@ -2728,7 +2751,8 @@ static void __mptcp_init_sock(struct sock *sk)
+ 	WRITE_ONCE(msk->first, NULL);
+ 	inet_csk(sk)->icsk_sync_mss = mptcp_sync_mss;
+ 	WRITE_ONCE(msk->csum_enabled, mptcp_is_checksum_enabled(sock_net(sk)));
+-	WRITE_ONCE(msk->allow_infinite_fallback, true);
++	msk->allow_infinite_fallback = true;
++	msk->allow_subflows = true;
+ 	msk->recovery = false;
+ 	msk->subflow_id = 1;
+ 	msk->last_data_sent = tcp_jiffies32;
+@@ -2736,6 +2760,7 @@ static void __mptcp_init_sock(struct sock *sk)
+ 	msk->last_ack_recv = tcp_jiffies32;
+ 
+ 	mptcp_pm_data_init(msk);
++	spin_lock_init(&msk->fallback_lock);
+ 
+ 	/* re-use the csk retrans timer for MPTCP-level retrans */
+ 	timer_setup(&msk->sk.icsk_retransmit_timer, mptcp_retransmit_timer, 0);
+@@ -3115,7 +3140,16 @@ static int mptcp_disconnect(struct sock *sk, int flags)
+ 	 * subflow
+ 	 */
+ 	mptcp_destroy_common(msk, MPTCP_CF_FASTCLOSE);
++
++	/* The first subflow is already in TCP_CLOSE status; the following
++	 * subflows can't overlap with a fallback anymore
++	 */
++	spin_lock_bh(&msk->fallback_lock);
++	msk->allow_subflows = true;
++	msk->allow_infinite_fallback = true;
+ 	WRITE_ONCE(msk->flags, 0);
++	spin_unlock_bh(&msk->fallback_lock);
++
+ 	msk->cb_flags = 0;
+ 	msk->recovery = false;
+ 	WRITE_ONCE(msk->can_ack, false);
+@@ -3522,7 +3556,13 @@ bool mptcp_finish_join(struct sock *ssk)
+ 
+ 	/* active subflow, already present inside the conn_list */
+ 	if (!list_empty(&subflow->node)) {
++		spin_lock_bh(&msk->fallback_lock);
++		if (!msk->allow_subflows) {
++			spin_unlock_bh(&msk->fallback_lock);
++			return false;
++		}
+ 		mptcp_subflow_joined(msk, ssk);
++		spin_unlock_bh(&msk->fallback_lock);
+ 		mptcp_propagate_sndbuf(parent, ssk);
+ 		return true;
+ 	}
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index d409586b5977f9..833efc079b20f3 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -346,10 +346,16 @@ struct mptcp_sock {
+ 		u64	rtt_us; /* last maximum rtt of subflows */
+ 	} rcvq_space;
+ 	u8		scaling_ratio;
++	bool		allow_subflows;
+ 
+ 	u32		subflow_id;
+ 	u32		setsockopt_seq;
+ 	char		ca_name[TCP_CA_NAME_MAX];
++
++	spinlock_t	fallback_lock;	/* protects fallback,
++					 * allow_infinite_fallback and
++					 * allow_subflows
++					 */
+ };
+ 
+ #define mptcp_data_lock(sk) spin_lock_bh(&(sk)->sk_lock.slock)
+@@ -1208,15 +1214,22 @@ static inline bool mptcp_check_fallback(const struct sock *sk)
+ 	return __mptcp_check_fallback(msk);
+ }
+ 
+-static inline void __mptcp_do_fallback(struct mptcp_sock *msk)
++static inline bool __mptcp_try_fallback(struct mptcp_sock *msk)
+ {
+ 	if (__mptcp_check_fallback(msk)) {
+ 		pr_debug("TCP fallback already done (msk=%p)\n", msk);
+-		return;
++		return true;
+ 	}
+-	if (WARN_ON_ONCE(!READ_ONCE(msk->allow_infinite_fallback)))
+-		return;
++	spin_lock_bh(&msk->fallback_lock);
++	if (!msk->allow_infinite_fallback) {
++		spin_unlock_bh(&msk->fallback_lock);
++		return false;
++	}
++
++	msk->allow_subflows = false;
+ 	set_bit(MPTCP_FALLBACK_DONE, &msk->flags);
++	spin_unlock_bh(&msk->fallback_lock);
++	return true;
+ }
+ 
+ static inline bool __mptcp_has_initial_subflow(const struct mptcp_sock *msk)
+@@ -1228,14 +1241,15 @@ static inline bool __mptcp_has_initial_subflow(const struct mptcp_sock *msk)
+ 			TCPF_SYN_RECV | TCPF_LISTEN));
+ }
+ 
+-static inline void mptcp_do_fallback(struct sock *ssk)
++static inline bool mptcp_try_fallback(struct sock *ssk)
+ {
+ 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+ 	struct sock *sk = subflow->conn;
+ 	struct mptcp_sock *msk;
+ 
+ 	msk = mptcp_sk(sk);
+-	__mptcp_do_fallback(msk);
++	if (!__mptcp_try_fallback(msk))
++		return false;
+ 	if (READ_ONCE(msk->snd_data_fin_enable) && !(ssk->sk_shutdown & SEND_SHUTDOWN)) {
+ 		gfp_t saved_allocation = ssk->sk_allocation;
+ 
+@@ -1247,6 +1261,7 @@ static inline void mptcp_do_fallback(struct sock *ssk)
+ 		tcp_shutdown(ssk, SEND_SHUTDOWN);
+ 		ssk->sk_allocation = saved_allocation;
+ 	}
++	return true;
+ }
+ 
+ #define pr_fallback(a) pr_debug("%s:fallback to TCP (msk=%p)\n", __func__, a)
+@@ -1256,7 +1271,7 @@ static inline void mptcp_subflow_early_fallback(struct mptcp_sock *msk,
+ {
+ 	pr_fallback(msk);
+ 	subflow->request_mptcp = 0;
+-	__mptcp_do_fallback(msk);
++	WARN_ON_ONCE(!__mptcp_try_fallback(msk));
+ }
+ 
+ static inline bool mptcp_check_infinite_map(struct sk_buff *skb)
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 24c2de1891bdf3..8a4159b0642d96 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -543,9 +543,11 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 	mptcp_get_options(skb, &mp_opt);
+ 	if (subflow->request_mptcp) {
+ 		if (!(mp_opt.suboptions & OPTION_MPTCP_MPC_SYNACK)) {
++			if (!mptcp_try_fallback(sk))
++				goto do_reset;
++
+ 			MPTCP_INC_STATS(sock_net(sk),
+ 					MPTCP_MIB_MPCAPABLEACTIVEFALLBACK);
+-			mptcp_do_fallback(sk);
+ 			pr_fallback(msk);
+ 			goto fallback;
+ 		}
+@@ -1302,20 +1304,29 @@ static void subflow_sched_work_if_closed(struct mptcp_sock *msk, struct sock *ss
+ 		mptcp_schedule_work(sk);
+ }
+ 
+-static void mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)
++static bool mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)
+ {
+ 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+ 	unsigned long fail_tout;
+ 
++	/* we are really failing; prevent any later subflow join */
++	spin_lock_bh(&msk->fallback_lock);
++	if (!msk->allow_infinite_fallback) {
++		spin_unlock_bh(&msk->fallback_lock);
++		return false;
++	}
++	msk->allow_subflows = false;
++	spin_unlock_bh(&msk->fallback_lock);
++
+ 	/* graceful failure can happen only on the MPC subflow */
+ 	if (WARN_ON_ONCE(ssk != READ_ONCE(msk->first)))
+-		return;
++		return false;
+ 
+ 	/* since the close timeout take precedence on the fail one,
+ 	 * no need to start the latter when the first is already set
+ 	 */
+ 	if (sock_flag((struct sock *)msk, SOCK_DEAD))
+-		return;
++		return true;
+ 
+ 	/* we don't need extreme accuracy here, use a zero fail_tout as special
+ 	 * value meaning no fail timeout at all;
+@@ -1327,6 +1338,7 @@ static void mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)
+ 	tcp_send_ack(ssk);
+ 
+ 	mptcp_reset_tout_timer(msk, subflow->fail_tout);
++	return true;
+ }
+ 
+ static bool subflow_check_data_avail(struct sock *ssk)
+@@ -1387,17 +1399,16 @@ static bool subflow_check_data_avail(struct sock *ssk)
+ 		    (subflow->mp_join || subflow->valid_csum_seen)) {
+ 			subflow->send_mp_fail = 1;
+ 
+-			if (!READ_ONCE(msk->allow_infinite_fallback)) {
++			if (!mptcp_subflow_fail(msk, ssk)) {
+ 				subflow->reset_transient = 0;
+ 				subflow->reset_reason = MPTCP_RST_EMIDDLEBOX;
+ 				goto reset;
+ 			}
+-			mptcp_subflow_fail(msk, ssk);
+ 			WRITE_ONCE(subflow->data_avail, true);
+ 			return true;
+ 		}
+ 
+-		if (!READ_ONCE(msk->allow_infinite_fallback)) {
++		if (!mptcp_try_fallback(ssk)) {
+ 			/* fatal protocol error, close the socket.
+ 			 * subflow_error_report() will introduce the appropriate barriers
+ 			 */
+@@ -1415,8 +1426,6 @@ static bool subflow_check_data_avail(struct sock *ssk)
+ 			WRITE_ONCE(subflow->data_avail, false);
+ 			return false;
+ 		}
+-
+-		mptcp_do_fallback(ssk);
+ 	}
+ 
+ 	skb = skb_peek(&ssk->sk_receive_queue);
+@@ -1681,7 +1690,6 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_pm_local *local,
+ 	/* discard the subflow socket */
+ 	mptcp_sock_graft(ssk, sk->sk_socket);
+ 	iput(SOCK_INODE(sf));
+-	WRITE_ONCE(msk->allow_infinite_fallback, false);
+ 	mptcp_stop_tout_timer(sk);
+ 	return 0;
+ 
+@@ -1853,7 +1861,7 @@ static void subflow_state_change(struct sock *sk)
+ 
+ 	msk = mptcp_sk(parent);
+ 	if (subflow_simultaneous_connect(sk)) {
+-		mptcp_do_fallback(sk);
++		WARN_ON_ONCE(!mptcp_try_fallback(sk));
+ 		pr_fallback(msk);
+ 		subflow->conn_finished = 1;
+ 		mptcp_propagate_state(parent, sk, subflow, NULL);
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 7f8b245e287aeb..e720505177607a 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -1121,6 +1121,12 @@ static int nf_ct_resolve_clash_harder(struct sk_buff *skb, u32 repl_idx)
+ 
+ 	hlist_nulls_add_head_rcu(&loser_ct->tuplehash[IP_CT_DIR_REPLY].hnnode,
+ 				 &nf_conntrack_hash[repl_idx]);
++	/* confirmed bit must be set after hlist add, not before:
++	 * loser_ct can still be visible to another CPU due to
++	 * SLAB_TYPESAFE_BY_RCU.
++	 */
++	smp_mb__before_atomic();
++	set_bit(IPS_CONFIRMED_BIT, &loser_ct->status);
+ 
+ 	NF_CT_STAT_INC(net, clash_resolve);
+ 	return NF_ACCEPT;
+@@ -1257,8 +1263,6 @@ __nf_conntrack_confirm(struct sk_buff *skb)
+ 	 * user context, else we insert an already 'dead' hash, blocking
+ 	 * further use of that particular connection -JM.
+ 	 */
+-	ct->status |= IPS_CONFIRMED;
+-
+ 	if (unlikely(nf_ct_is_dying(ct))) {
+ 		NF_CT_STAT_INC(net, insert_failed);
+ 		goto dying;
+@@ -1290,7 +1294,7 @@ __nf_conntrack_confirm(struct sk_buff *skb)
+ 		}
+ 	}
+ 
+-	/* Timer relative to confirmation time, not original
++	/* Timeout is relative to confirmation time, not original
+ 	   setting time, otherwise we'd get timer wrap in
+ 	   weird delay cases. */
+ 	ct->timeout += nfct_time_stamp;
+@@ -1298,11 +1302,21 @@ __nf_conntrack_confirm(struct sk_buff *skb)
+ 	__nf_conntrack_insert_prepare(ct);
+ 
+ 	/* Since the lookup is lockless, hash insertion must be done after
+-	 * starting the timer and setting the CONFIRMED bit. The RCU barriers
+-	 * guarantee that no other CPU can find the conntrack before the above
+-	 * stores are visible.
++	 * setting ct->timeout. The RCU barriers guarantee that no other CPU
++	 * can find the conntrack before the above stores are visible.
+ 	 */
+ 	__nf_conntrack_hash_insert(ct, hash, reply_hash);
++
++	/* IPS_CONFIRMED unset means 'ct not (yet) in hash'; conntrack lookups
++	 * skip entries that lack this bit.  This happens when a CPU is looking
++	 * at a stale entry that is being recycled due to SLAB_TYPESAFE_BY_RCU
++	 * or when another CPU encounters this entry right after the insertion
++	 * but before the set-confirm-bit below.  This bit must not be set until
++	 * after __nf_conntrack_hash_insert().
++	 */
++	smp_mb__before_atomic();
++	set_bit(IPS_CONFIRMED_BIT, &ct->status);
++
+ 	nf_conntrack_double_unlock(hash, reply_hash);
+ 	local_bh_enable();
+ 
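
Both hunks in this file enforce the same discipline: publish the entry into the hash first, then set IPS_CONFIRMED, with smp_mb__before_atomic() ordering the stores, because SLAB_TYPESAFE_BY_RCU lets a reader observe a recycled object at any time. A small userspace sketch of that publish-then-confirm ordering (hypothetical names; C11 atomics stand in for the kernel primitives):

    #include <stdatomic.h>

    struct entry {
            int data;
            atomic_int confirmed;           /* 0 = not yet in hash */
    };

    static _Atomic(struct entry *) bucket;  /* one-slot "hash"     */

    static void insert_then_confirm(struct entry *e)
    {
            e->data = 42;
            atomic_init(&e->confirmed, 0);
            /* Publish first; release ordering makes ->data visible. */
            atomic_store_explicit(&bucket, e, memory_order_release);
            /* Confirm last: a lookup that sees a recycled object
             * without this flag simply skips it. */
            atomic_store_explicit(&e->confirmed, 1, memory_order_release);
    }

    static struct entry *lookup(void)
    {
            struct entry *e =
                    atomic_load_explicit(&bucket, memory_order_acquire);

            if (e && atomic_load_explicit(&e->confirmed,
                                          memory_order_acquire))
                    return e;               /* trust confirmed entries only */
            return NULL;
    }
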
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 20be2c47cf4191..ab1482e7c9fdb7 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2785,7 +2785,7 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
+ 	int len_sum = 0;
+ 	int status = TP_STATUS_AVAILABLE;
+ 	int hlen, tlen, copylen = 0;
+-	long timeo = 0;
++	long timeo;
+ 
+ 	mutex_lock(&po->pg_vec_lock);
+ 
+@@ -2839,22 +2839,28 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
+ 	if ((size_max > dev->mtu + reserve + VLAN_HLEN) && !vnet_hdr_sz)
+ 		size_max = dev->mtu + reserve + VLAN_HLEN;
+ 
++	timeo = sock_sndtimeo(&po->sk, msg->msg_flags & MSG_DONTWAIT);
+ 	reinit_completion(&po->skb_completion);
+ 
+ 	do {
+ 		ph = packet_current_frame(po, &po->tx_ring,
+ 					  TP_STATUS_SEND_REQUEST);
+ 		if (unlikely(ph == NULL)) {
+-			if (need_wait && skb) {
+-				timeo = sock_sndtimeo(&po->sk, msg->msg_flags & MSG_DONTWAIT);
++			/* Note: packet_read_pending() might be slow as it
++			 * reads a per-CPU variable, but on the fast path we
++			 * never call it: only when ph is NULL do we need to
++			 * check pending_refcnt.
++			 */
++			if (need_wait && packet_read_pending(&po->tx_ring)) {
+ 				timeo = wait_for_completion_interruptible_timeout(&po->skb_completion, timeo);
+ 				if (timeo <= 0) {
+ 					err = !timeo ? -ETIMEDOUT : -ERESTARTSYS;
+ 					goto out_put;
+ 				}
+-			}
+-			/* check for additional frames */
+-			continue;
++				/* check for additional frames */
++				continue;
++			} else
++				break;
+ 		}
+ 
+ 		skb = NULL;
+@@ -2943,14 +2949,7 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
+ 		}
+ 		packet_increment_head(&po->tx_ring);
+ 		len_sum += tp_len;
+-	} while (likely((ph != NULL) ||
+-		/* Note: packet_read_pending() might be slow if we have
+-		 * to call it as it's per_cpu variable, but in fast-path
+-		 * we already short-circuit the loop with the first
+-		 * condition, and luckily don't have to go that path
+-		 * anyway.
+-		 */
+-		 (need_wait && packet_read_pending(&po->tx_ring))));
++	} while (1);
+ 
+ 	err = len_sum;
+ 	goto out_put;
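
The rework reads sock_sndtimeo() once before the loop and threads the remaining budget returned by wait_for_completion_interruptible_timeout() back into the next wait, so the total blocking time is bounded instead of the timeout being re-armed on every iteration. A userspace sketch of the same fixed-deadline discipline, using pthreads (names are illustrative):

    #include <errno.h>
    #include <pthread.h>
    #include <time.h>

    /* Wait for *done under lk/cv, but never longer than timeout_ms in
     * total: the deadline is computed once, so spurious wakeups and
     * repeated waits cannot extend the overall budget. */
    static int wait_done(pthread_mutex_t *lk, pthread_cond_t *cv,
                         const int *done, long timeout_ms)
    {
            struct timespec dl;

            clock_gettime(CLOCK_REALTIME, &dl);
            dl.tv_sec  += timeout_ms / 1000;
            dl.tv_nsec += (timeout_ms % 1000) * 1000000L;
            if (dl.tv_nsec >= 1000000000L) {
                    dl.tv_sec++;
                    dl.tv_nsec -= 1000000000L;
            }

            pthread_mutex_lock(lk);
            while (!*done) {
                    int rc = pthread_cond_timedwait(cv, lk, &dl);

                    if (rc == ETIMEDOUT) {
                            pthread_mutex_unlock(lk);
                            return -ETIMEDOUT;
                    }
            }
            pthread_mutex_unlock(lk);
            return 0;
    }
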
+diff --git a/net/phonet/pep.c b/net/phonet/pep.c
+index 53a858478e22f0..62527e1ebb883d 100644
+--- a/net/phonet/pep.c
++++ b/net/phonet/pep.c
+@@ -826,6 +826,7 @@ static struct sock *pep_sock_accept(struct sock *sk,
+ 	}
+ 
+ 	/* Check for duplicate pipe handle */
++	pn_skb_get_dst_sockaddr(skb, &dst);
+ 	newsk = pep_find_pipe(&pn->hlist, &dst, pipe_handle);
+ 	if (unlikely(newsk)) {
+ 		__sock_put(newsk);
+@@ -850,7 +851,6 @@ static struct sock *pep_sock_accept(struct sock *sk,
+ 	newsk->sk_destruct = pipe_destruct;
+ 
+ 	newpn = pep_sk(newsk);
+-	pn_skb_get_dst_sockaddr(skb, &dst);
+ 	pn_skb_get_src_sockaddr(skb, &src);
+ 	newpn->pn_sk.sobject = pn_sockaddr_get_object(&dst);
+ 	newpn->pn_sk.dobject = pn_sockaddr_get_object(&src);
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 3cc3af15086ff1..c4891a538ded04 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -42,6 +42,7 @@ enum rxrpc_skb_mark {
+ 	RXRPC_SKB_MARK_SERVICE_CONN_SECURED, /* Service connection response has been verified */
+ 	RXRPC_SKB_MARK_REJECT_BUSY,	/* Reject with BUSY */
+ 	RXRPC_SKB_MARK_REJECT_ABORT,	/* Reject with ABORT (code in skb->priority) */
++	RXRPC_SKB_MARK_REJECT_CONN_ABORT, /* Reject with connection ABORT (code in skb->priority) */
+ };
+ 
+ /*
+@@ -1197,6 +1198,8 @@ int rxrpc_encap_rcv(struct sock *, struct sk_buff *);
+ void rxrpc_error_report(struct sock *);
+ bool rxrpc_direct_abort(struct sk_buff *skb, enum rxrpc_abort_reason why,
+ 			s32 abort_code, int err);
++bool rxrpc_direct_conn_abort(struct sk_buff *skb, enum rxrpc_abort_reason why,
++			     s32 abort_code, int err);
+ int rxrpc_io_thread(void *data);
+ static inline void rxrpc_wake_up_io_thread(struct rxrpc_local *local)
+ {
+@@ -1315,6 +1318,7 @@ struct rxrpc_peer *rxrpc_lookup_peer_rcu(struct rxrpc_local *,
+ 					 const struct sockaddr_rxrpc *);
+ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *local,
+ 				     struct sockaddr_rxrpc *srx, gfp_t gfp);
++void rxrpc_assess_MTU_size(struct rxrpc_local *local, struct rxrpc_peer *peer);
+ struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *, gfp_t,
+ 				    enum rxrpc_peer_trace);
+ void rxrpc_new_incoming_peer(struct rxrpc_local *local, struct rxrpc_peer *peer);
+diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
+index 0b51f3ccf03583..3259916c926aef 100644
+--- a/net/rxrpc/call_accept.c
++++ b/net/rxrpc/call_accept.c
+@@ -219,6 +219,7 @@ void rxrpc_discard_prealloc(struct rxrpc_sock *rx)
+ 	tail = b->call_backlog_tail;
+ 	while (CIRC_CNT(head, tail, size) > 0) {
+ 		struct rxrpc_call *call = b->call_backlog[tail];
++		rxrpc_see_call(call, rxrpc_call_see_discard);
+ 		rcu_assign_pointer(call->socket, rx);
+ 		if (rx->discard_new_call) {
+ 			_debug("discard %lx", call->user_call_ID);
+@@ -372,8 +373,8 @@ bool rxrpc_new_incoming_call(struct rxrpc_local *local,
+ 	spin_lock(&rx->incoming_lock);
+ 	if (rx->sk.sk_state == RXRPC_SERVER_LISTEN_DISABLED ||
+ 	    rx->sk.sk_state == RXRPC_CLOSE) {
+-		rxrpc_direct_abort(skb, rxrpc_abort_shut_down,
+-				   RX_INVALID_OPERATION, -ESHUTDOWN);
++		rxrpc_direct_conn_abort(skb, rxrpc_abort_shut_down,
++					RX_INVALID_OPERATION, -ESHUTDOWN);
+ 		goto no_call;
+ 	}
+ 
+@@ -404,6 +405,7 @@ bool rxrpc_new_incoming_call(struct rxrpc_local *local,
+ 
+ 	spin_unlock(&rx->incoming_lock);
+ 	read_unlock_irq(&local->services_lock);
++	rxrpc_assess_MTU_size(local, call->peer);
+ 
+ 	if (hlist_unhashed(&call->error_link)) {
+ 		spin_lock_irq(&call->peer->lock);
+@@ -418,12 +420,12 @@ bool rxrpc_new_incoming_call(struct rxrpc_local *local,
+ 
+ unsupported_service:
+ 	read_unlock_irq(&local->services_lock);
+-	return rxrpc_direct_abort(skb, rxrpc_abort_service_not_offered,
+-				  RX_INVALID_OPERATION, -EOPNOTSUPP);
++	return rxrpc_direct_conn_abort(skb, rxrpc_abort_service_not_offered,
++				       RX_INVALID_OPERATION, -EOPNOTSUPP);
+ unsupported_security:
+ 	read_unlock_irq(&local->services_lock);
+-	return rxrpc_direct_abort(skb, rxrpc_abort_service_not_offered,
+-				  RX_INVALID_OPERATION, -EKEYREJECTED);
++	return rxrpc_direct_conn_abort(skb, rxrpc_abort_service_not_offered,
++				       RX_INVALID_OPERATION, -EKEYREJECTED);
+ no_call:
+ 	spin_unlock(&rx->incoming_lock);
+ 	read_unlock_irq(&local->services_lock);
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index fce58be65e7cfc..4be3713a5a339d 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -561,7 +561,7 @@ static void rxrpc_cleanup_rx_buffers(struct rxrpc_call *call)
+ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
+ {
+ 	struct rxrpc_connection *conn = call->conn;
+-	bool put = false, putu = false;
++	bool putu = false;
+ 
+ 	_enter("{%d,%d}", call->debug_id, refcount_read(&call->ref));
+ 
+@@ -573,23 +573,13 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
+ 
+ 	rxrpc_put_call_slot(call);
+ 
+-	/* Make sure we don't get any more notifications */
++	/* Note that at this point, the call may still be on, or may have
++	 * been re-added to, the socket receive queue.  recvmsg() must
++	 * discard released calls.  The CALL_RELEASED flag should prevent
++	 * further notifications.
++	 */
+ 	spin_lock_irq(&rx->recvmsg_lock);
+-
+-	if (!list_empty(&call->recvmsg_link)) {
+-		_debug("unlinking once-pending call %p { e=%lx f=%lx }",
+-		       call, call->events, call->flags);
+-		list_del(&call->recvmsg_link);
+-		put = true;
+-	}
+-
+-	/* list_empty() must return false in rxrpc_notify_socket() */
+-	call->recvmsg_link.next = NULL;
+-	call->recvmsg_link.prev = NULL;
+-
+ 	spin_unlock_irq(&rx->recvmsg_lock);
+-	if (put)
+-		rxrpc_put_call(call, rxrpc_call_put_unnotify);
+ 
+ 	write_lock(&rx->call_lock);
+ 
+@@ -638,6 +628,12 @@ void rxrpc_release_calls_on_socket(struct rxrpc_sock *rx)
+ 		rxrpc_put_call(call, rxrpc_call_put_release_sock);
+ 	}
+ 
++	while ((call = list_first_entry_or_null(&rx->recvmsg_q,
++						struct rxrpc_call, recvmsg_link))) {
++		list_del_init(&call->recvmsg_link);
++		rxrpc_put_call(call, rxrpc_call_put_release_recvmsg_q);
++	}
++
+ 	_leave("");
+ }
+ 
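
Release no longer unlinks the call from the recvmsg queue itself; recvmsg() discards released calls, and socket teardown drains whatever remains with the list_first_entry_or_null() loop added above. The drain-until-empty shape, sketched with a plain singly linked list standing in for the kernel's list_head (illustrative names):

    #include <stdlib.h>

    struct call {
            struct call *next;
            /* ... */
    };

    static void put_call(struct call *c)
    {
            free(c);                  /* drop the queue's reference */
    }

    /* Take the first entry until the queue is empty, mirroring the
     * list_first_entry_or_null() loop in the hunk above. */
    static void drain_recvmsg_q(struct call **q)
    {
            struct call *c;

            while ((c = *q) != NULL) {
                    *q = c->next;     /* unlink, cf. list_del_init() */
                    put_call(c);
            }
    }
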
+diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c
+index 64f8d77b873125..23dbcc4819990e 100644
+--- a/net/rxrpc/io_thread.c
++++ b/net/rxrpc/io_thread.c
+@@ -97,6 +97,20 @@ bool rxrpc_direct_abort(struct sk_buff *skb, enum rxrpc_abort_reason why,
+ 	return false;
+ }
+ 
++/*
++ * Directly produce a connection abort from a packet.
++ */
++bool rxrpc_direct_conn_abort(struct sk_buff *skb, enum rxrpc_abort_reason why,
++			     s32 abort_code, int err)
++{
++	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
++
++	trace_rxrpc_abort(0, why, sp->hdr.cid, 0, sp->hdr.seq, abort_code, err);
++	skb->mark = RXRPC_SKB_MARK_REJECT_CONN_ABORT;
++	skb->priority = abort_code;
++	return false;
++}
++
+ static bool rxrpc_bad_message(struct sk_buff *skb, enum rxrpc_abort_reason why)
+ {
+ 	return rxrpc_direct_abort(skb, why, RX_PROTOCOL_ERROR, -EBADMSG);
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index 95905b85a8d711..a480e7e2325ebd 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -814,6 +814,9 @@ void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb)
+ 	__be32 code;
+ 	int ret, ioc;
+ 
++	if (sp->hdr.type == RXRPC_PACKET_TYPE_ABORT)
++		return; /* Never abort an abort. */
++
+ 	rxrpc_see_skb(skb, rxrpc_skb_see_reject);
+ 
+ 	iov[0].iov_base = &whdr;
+@@ -826,7 +829,13 @@ void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb)
+ 	msg.msg_controllen = 0;
+ 	msg.msg_flags = 0;
+ 
+-	memset(&whdr, 0, sizeof(whdr));
++	whdr = (struct rxrpc_wire_header) {
++		.epoch		= htonl(sp->hdr.epoch),
++		.cid		= htonl(sp->hdr.cid),
++		.callNumber	= htonl(sp->hdr.callNumber),
++		.serviceId	= htons(sp->hdr.serviceId),
++		.flags		= ~sp->hdr.flags & RXRPC_CLIENT_INITIATED,
++	};
+ 
+ 	switch (skb->mark) {
+ 	case RXRPC_SKB_MARK_REJECT_BUSY:
+@@ -834,6 +843,9 @@ void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb)
+ 		size = sizeof(whdr);
+ 		ioc = 1;
+ 		break;
++	case RXRPC_SKB_MARK_REJECT_CONN_ABORT:
++		whdr.callNumber	= 0;
++		fallthrough;
+ 	case RXRPC_SKB_MARK_REJECT_ABORT:
+ 		whdr.type = RXRPC_PACKET_TYPE_ABORT;
+ 		code = htonl(skb->priority);
+@@ -847,14 +859,6 @@ void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb)
+ 	if (rxrpc_extract_addr_from_skb(&srx, skb) == 0) {
+ 		msg.msg_namelen = srx.transport_len;
+ 
+-		whdr.epoch	= htonl(sp->hdr.epoch);
+-		whdr.cid	= htonl(sp->hdr.cid);
+-		whdr.callNumber	= htonl(sp->hdr.callNumber);
+-		whdr.serviceId	= htons(sp->hdr.serviceId);
+-		whdr.flags	= sp->hdr.flags;
+-		whdr.flags	^= RXRPC_CLIENT_INITIATED;
+-		whdr.flags	&= RXRPC_CLIENT_INITIATED;
+-
+ 		iov_iter_kvec(&msg.msg_iter, WRITE, iov, ioc, size);
+ 		ret = do_udp_sendmsg(local->socket, &msg, size);
+ 		if (ret < 0)
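
The header is now built with one designated-initializer assignment, which zeroes every field not named and so replaces the old memset() plus piecemeal stores; the expression ~sp->hdr.flags & RXRPC_CLIENT_INITIATED likewise collapses the former assign/xor/and sequence into a single step that keeps only the flipped direction bit. A compilable sketch with simplified stand-in types:

    #include <stdint.h>
    #include <stdio.h>

    #define CLIENT_INITIATED 0x01u

    struct wire_header {
            uint32_t cid;
            uint8_t  flags;
            uint8_t  type;            /* left 0 by the initializer */
    };

    int main(void)
    {
            uint8_t in_flags = CLIENT_INITIATED;

            /* Every field not named is zeroed, so no memset() is
             * needed; the reply flips the direction bit in one
             * expression. */
            struct wire_header whdr = {
                    .cid   = 7,
                    .flags = ~in_flags & CLIENT_INITIATED,
            };

            printf("flags=%u type=%u\n", whdr.flags, whdr.type);
            return 0;
    }
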
+diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
+index 71b6e07bf16163..7df8d984a96ac2 100644
+--- a/net/rxrpc/peer_object.c
++++ b/net/rxrpc/peer_object.c
+@@ -149,8 +149,7 @@ struct rxrpc_peer *rxrpc_lookup_peer_rcu(struct rxrpc_local *local,
+  * assess the MTU size for the network interface through which this peer is
+  * reached
+  */
+-static void rxrpc_assess_MTU_size(struct rxrpc_local *local,
+-				  struct rxrpc_peer *peer)
++void rxrpc_assess_MTU_size(struct rxrpc_local *local, struct rxrpc_peer *peer)
+ {
+ 	struct net *net = local->net;
+ 	struct dst_entry *dst;
+@@ -277,8 +276,6 @@ static void rxrpc_init_peer(struct rxrpc_local *local, struct rxrpc_peer *peer,
+ 
+ 	peer->hdrsize += sizeof(struct rxrpc_wire_header);
+ 	peer->max_data = peer->if_mtu - peer->hdrsize;
+-
+-	rxrpc_assess_MTU_size(local, peer);
+ }
+ 
+ /*
+@@ -297,6 +294,7 @@ static struct rxrpc_peer *rxrpc_create_peer(struct rxrpc_local *local,
+ 	if (peer) {
+ 		memcpy(&peer->srx, srx, sizeof(*srx));
+ 		rxrpc_init_peer(local, peer, hash_key);
++		rxrpc_assess_MTU_size(local, peer);
+ 	}
+ 
+ 	_leave(" = %p", peer);
+diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
+index 32cd5f1d541dba..89dc612fb89e06 100644
+--- a/net/rxrpc/recvmsg.c
++++ b/net/rxrpc/recvmsg.c
+@@ -29,6 +29,10 @@ void rxrpc_notify_socket(struct rxrpc_call *call)
+ 
+ 	if (!list_empty(&call->recvmsg_link))
+ 		return;
++	if (test_bit(RXRPC_CALL_RELEASED, &call->flags)) {
++		rxrpc_see_call(call, rxrpc_call_see_notify_released);
++		return;
++	}
+ 
+ 	rcu_read_lock();
+ 
+@@ -351,6 +355,16 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 		goto try_again;
+ 	}
+ 
++	rxrpc_see_call(call, rxrpc_call_see_recvmsg);
++	if (test_bit(RXRPC_CALL_RELEASED, &call->flags)) {
++		rxrpc_see_call(call, rxrpc_call_see_already_released);
++		list_del_init(&call->recvmsg_link);
++		spin_unlock_irq(&rx->recvmsg_lock);
++		release_sock(&rx->sk);
++		trace_rxrpc_recvmsg(call->debug_id, rxrpc_recvmsg_unqueue, 0);
++		rxrpc_put_call(call, rxrpc_call_put_recvmsg);
++		goto try_again;
++	}
+ 	if (!(flags & MSG_PEEK))
+ 		list_del_init(&call->recvmsg_link);
+ 	else
+@@ -374,8 +388,13 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 
+ 	release_sock(&rx->sk);
+ 
+-	if (test_bit(RXRPC_CALL_RELEASED, &call->flags))
+-		BUG();
++	if (test_bit(RXRPC_CALL_RELEASED, &call->flags)) {
++		rxrpc_see_call(call, rxrpc_call_see_already_released);
++		mutex_unlock(&call->user_mutex);
++		if (!(flags & MSG_PEEK))
++			rxrpc_put_call(call, rxrpc_call_put_recvmsg);
++		goto try_again;
++	}
+ 
+ 	if (test_bit(RXRPC_CALL_HAS_USERID, &call->flags)) {
+ 		if (flags & MSG_CMSG_COMPAT) {
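
recvmsg() now re-tests RXRPC_CALL_RELEASED after taking recvmsg_lock, and again after acquiring the call's user_mutex, unqueuing and retrying instead of the old BUG(). The underlying recheck-after-lock pattern, sketched in userspace C with a pthread mutex (illustrative names):

    #include <pthread.h>
    #include <stdbool.h>

    struct item {
            pthread_mutex_t lock;
            bool released;
    };

    /* Returns true with ->lock held if the item is still live;
     * false if it raced with release and the caller must retry. */
    static bool claim(struct item *it)
    {
            pthread_mutex_lock(&it->lock);
            if (it->released) {
                    /* State observed before the lock may be stale,
                     * so re-test it here and back off rather than
                     * assert (the old code used BUG()). */
                    pthread_mutex_unlock(&it->lock);
                    return false;
            }
            return true;
    }
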
+diff --git a/net/rxrpc/security.c b/net/rxrpc/security.c
+index 9784adc8f27593..7a48fee3a6b1a8 100644
+--- a/net/rxrpc/security.c
++++ b/net/rxrpc/security.c
+@@ -137,15 +137,15 @@ const struct rxrpc_security *rxrpc_get_incoming_security(struct rxrpc_sock *rx,
+ 
+ 	sec = rxrpc_security_lookup(sp->hdr.securityIndex);
+ 	if (!sec) {
+-		rxrpc_direct_abort(skb, rxrpc_abort_unsupported_security,
+-				   RX_INVALID_OPERATION, -EKEYREJECTED);
++		rxrpc_direct_conn_abort(skb, rxrpc_abort_unsupported_security,
++					RX_INVALID_OPERATION, -EKEYREJECTED);
+ 		return NULL;
+ 	}
+ 
+ 	if (sp->hdr.securityIndex != RXRPC_SECURITY_NONE &&
+ 	    !rx->securities) {
+-		rxrpc_direct_abort(skb, rxrpc_abort_no_service_key,
+-				   sec->no_key_abort, -EKEYREJECTED);
++		rxrpc_direct_conn_abort(skb, rxrpc_abort_no_service_key,
++					sec->no_key_abort, -EKEYREJECTED);
+ 		return NULL;
+ 	}
+ 
+diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
+index 14bf71f570570f..c968ea76377463 100644
+--- a/net/sched/sch_htb.c
++++ b/net/sched/sch_htb.c
+@@ -821,7 +821,9 @@ static struct htb_class *htb_lookup_leaf(struct htb_prio *hprio, const int prio)
+ 		u32 *pid;
+ 	} stk[TC_HTB_MAXDEPTH], *sp = stk;
+ 
+-	BUG_ON(!hprio->row.rb_node);
++	if (unlikely(!hprio->row.rb_node))
++		return NULL;
++
+ 	sp->root = hprio->row.rb_node;
+ 	sp->pptr = &hprio->ptr;
+ 	sp->pid = &hprio->last_ptr_id;
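
This hunk turns an internal consistency assertion into a recoverable error: an empty row now yields NULL rather than a BUG_ON() crash. A minimal sketch of that shape (stand-in types; the real code walks an rbtree):

    #include <stddef.h>

    struct node {
            struct node *left;
    };

    static struct node *lookup_leaf(struct node *root)
    {
            /* Before: BUG_ON(!root) took the machine down.
             * After: report "no leaf" and let the caller cope. */
            if (root == NULL)
                    return NULL;

            while (root->left)
                    root = root->left;
            return root;
    }
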
+diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
+index bf1282cb22ebae..2b1b025c31a338 100644
+--- a/net/sched/sch_qfq.c
++++ b/net/sched/sch_qfq.c
+@@ -412,7 +412,7 @@ static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ 	bool existing = false;
+ 	struct nlattr *tb[TCA_QFQ_MAX + 1];
+ 	struct qfq_aggregate *new_agg = NULL;
+-	u32 weight, lmax, inv_w;
++	u32 weight, lmax, inv_w, old_weight, old_lmax;
+ 	int err;
+ 	int delta_w;
+ 
+@@ -443,12 +443,16 @@ static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ 	inv_w = ONE_FP / weight;
+ 	weight = ONE_FP / inv_w;
+ 
+-	if (cl != NULL &&
+-	    lmax == cl->agg->lmax &&
+-	    weight == cl->agg->class_weight)
+-		return 0; /* nothing to change */
++	if (cl != NULL) {
++		sch_tree_lock(sch);
++		old_weight = cl->agg->class_weight;
++		old_lmax   = cl->agg->lmax;
++		sch_tree_unlock(sch);
++		if (lmax == old_lmax && weight == old_weight)
++			return 0; /* nothing to change */
++	}
+ 
+-	delta_w = weight - (cl ? cl->agg->class_weight : 0);
++	delta_w = weight - (cl ? old_weight : 0);
+ 
+ 	if (q->wsum + delta_w > QFQ_MAX_WSUM) {
+ 		NL_SET_ERR_MSG_FMT_MOD(extack,
+@@ -555,10 +559,10 @@ static int qfq_delete_class(struct Qdisc *sch, unsigned long arg,
+ 
+ 	qdisc_purge_queue(cl->qdisc);
+ 	qdisc_class_hash_remove(&q->clhash, &cl->common);
++	qfq_destroy_class(sch, cl);
+ 
+ 	sch_tree_unlock(sch);
+ 
+-	qfq_destroy_class(sch, cl);
+ 	return 0;
+ }
+ 
+@@ -625,6 +629,7 @@ static int qfq_dump_class(struct Qdisc *sch, unsigned long arg,
+ {
+ 	struct qfq_class *cl = (struct qfq_class *)arg;
+ 	struct nlattr *nest;
++	u32 class_weight, lmax;
+ 
+ 	tcm->tcm_parent	= TC_H_ROOT;
+ 	tcm->tcm_handle	= cl->common.classid;
+@@ -633,8 +638,13 @@ static int qfq_dump_class(struct Qdisc *sch, unsigned long arg,
+ 	nest = nla_nest_start_noflag(skb, TCA_OPTIONS);
+ 	if (nest == NULL)
+ 		goto nla_put_failure;
+-	if (nla_put_u32(skb, TCA_QFQ_WEIGHT, cl->agg->class_weight) ||
+-	    nla_put_u32(skb, TCA_QFQ_LMAX, cl->agg->lmax))
++
++	sch_tree_lock(sch);
++	class_weight	= cl->agg->class_weight;
++	lmax		= cl->agg->lmax;
++	sch_tree_unlock(sch);
++	if (nla_put_u32(skb, TCA_QFQ_WEIGHT, class_weight) ||
++	    nla_put_u32(skb, TCA_QFQ_LMAX, lmax))
+ 		goto nla_put_failure;
+ 	return nla_nest_end(skb, nest);
+ 
+@@ -651,8 +661,10 @@ static int qfq_dump_class_stats(struct Qdisc *sch, unsigned long arg,
+ 
+ 	memset(&xstats, 0, sizeof(xstats));
+ 
++	sch_tree_lock(sch);
+ 	xstats.weight = cl->agg->class_weight;
+ 	xstats.lmax = cl->agg->lmax;
++	sch_tree_unlock(sch);
+ 
+ 	if (gnet_stats_copy_basic(d, NULL, &cl->bstats, true) < 0 ||
+ 	    gnet_stats_copy_rate_est(d, &cl->rate_est) < 0 ||
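
All three qfq hunks apply one rule: cl->agg->class_weight and cl->agg->lmax are only read while holding the qdisc tree lock, so the pair is always observed consistently against concurrent changes. The snapshot-under-lock shape as a userspace sketch (a pthread mutex stands in for sch_tree_lock()):

    #include <pthread.h>
    #include <stdint.h>

    struct agg {
            pthread_mutex_t lock;     /* stands in for sch_tree_lock() */
            uint32_t class_weight;
            uint32_t lmax;
    };

    /* Copy both fields inside one critical section so a concurrent
     * writer can never be observed halfway through an update. */
    static void snapshot(struct agg *a, uint32_t *w, uint32_t *l)
    {
            pthread_mutex_lock(&a->lock);
            *w = a->class_weight;
            *l = a->lmax;
            pthread_mutex_unlock(&a->lock);
    }
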
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 3760131f148450..1882bab8e00e79 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -30,6 +30,10 @@
+ #include <linux/splice.h>
+ 
+ #include <net/sock.h>
++#include <net/inet_common.h>
++#if IS_ENABLED(CONFIG_IPV6)
++#include <net/ipv6.h>
++#endif
+ #include <net/tcp.h>
+ #include <net/smc.h>
+ #include <asm/ioctls.h>
+@@ -360,6 +364,16 @@ static void smc_destruct(struct sock *sk)
+ 		return;
+ 	if (!sock_flag(sk, SOCK_DEAD))
+ 		return;
++	switch (sk->sk_family) {
++	case AF_INET:
++		inet_sock_destruct(sk);
++		break;
++#if IS_ENABLED(CONFIG_IPV6)
++	case AF_INET6:
++		inet6_sock_destruct(sk);
++		break;
++#endif
++	}
+ }
+ 
+ static struct lock_class_key smc_key;
+diff --git a/net/smc/smc.h b/net/smc/smc.h
+index 78ae10d06ed2eb..2c908496373981 100644
+--- a/net/smc/smc.h
++++ b/net/smc/smc.h
+@@ -283,10 +283,10 @@ struct smc_connection {
+ };
+ 
+ struct smc_sock {				/* smc sock container */
+-	struct sock		sk;
+-#if IS_ENABLED(CONFIG_IPV6)
+-	struct ipv6_pinfo	*pinet6;
+-#endif
++	union {
++		struct sock		sk;
++		struct inet_sock	icsk_inet;
++	};
+ 	struct socket		*clcsock;	/* internal tcp socket */
+ 	void			(*clcsk_state_change)(struct sock *sk);
+ 						/* original stat_change fct. */
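
The union overlays struct sock with struct inet_sock, whose first member is itself a struct sock, so the same object can be handed to the inet helpers used by the af_smc.c hunk above. A compilable sketch of that common-initial-member layout with simplified stand-in types:

    #include <stdio.h>

    struct sock      { int family; };
    struct inet_sock { struct sock sk; int inet_extra; };

    struct smc_sock {
            union {
                    struct sock      sk;
                    struct inet_sock icsk_inet;  /* sk is its head */
            };
            int clcsock;
    };

    int main(void)
    {
            struct smc_sock smc = { .sk = { .family = 2 } };

            /* Both views resolve to the same struct sock. */
            printf("%d %d\n", smc.sk.family, smc.icsk_inet.sk.family);
            return 0;
    }
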
+diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
+index 65b0da6fdf6a79..095cf31bae0ba9 100644
+--- a/net/tls/tls_strp.c
++++ b/net/tls/tls_strp.c
+@@ -512,9 +512,8 @@ static int tls_strp_read_sock(struct tls_strparser *strp)
+ 	if (inq < strp->stm.full_len)
+ 		return tls_strp_read_copy(strp, true);
+ 
++	tls_strp_load_anchor_with_queue(strp, inq);
+ 	if (!strp->stm.full_len) {
+-		tls_strp_load_anchor_with_queue(strp, inq);
+-
+ 		sz = tls_rx_msg_size(strp, strp->anchor);
+ 		if (sz < 0) {
+ 			tls_strp_abort_strp(strp, sz);
+diff --git a/rust/Makefile b/rust/Makefile
+index d62b58d0a55cc4..913b31d25bc4d1 100644
+--- a/rust/Makefile
++++ b/rust/Makefile
+@@ -194,6 +194,7 @@ quiet_cmd_rustdoc_test = RUSTDOC T $<
+ 	RUST_MODFILE=test.rs \
+ 	OBJTREE=$(abspath $(objtree)) \
+ 	$(RUSTDOC) --test $(rust_common_flags) \
++		-Zcrate-attr='feature(used_with_arg)' \
+ 		@$(objtree)/include/generated/rustc_cfg \
+ 		$(rustc_target_flags) $(rustdoc_test_target_flags) \
+ 		$(rustdoc_test_quiet) \
+diff --git a/rust/kernel/firmware.rs b/rust/kernel/firmware.rs
+index 2494c96e105f3a..4fe621f3571691 100644
+--- a/rust/kernel/firmware.rs
++++ b/rust/kernel/firmware.rs
+@@ -202,7 +202,7 @@ macro_rules! module_firmware {
+             };
+ 
+             #[link_section = ".modinfo"]
+-            #[used]
++            #[used(compiler)]
+             static __MODULE_FIRMWARE: [u8; $($builder)*::create(__MODULE_FIRMWARE_PREFIX)
+                 .build_length()] = $($builder)*::create(__MODULE_FIRMWARE_PREFIX).build();
+         };
+diff --git a/rust/kernel/init.rs b/rust/kernel/init.rs
+index 8d228c23795445..21ef202ab0dbfa 100644
+--- a/rust/kernel/init.rs
++++ b/rust/kernel/init.rs
+@@ -231,14 +231,14 @@ macro_rules! try_init {
+     ($(&$this:ident in)? $t:ident $(::<$($generics:ty),* $(,)?>)? {
+         $($fields:tt)*
+     }) => {
+-        ::pin_init::try_init!($(&$this in)? $t $(::<$($generics),* $(,)?>)? {
++        ::pin_init::try_init!($(&$this in)? $t $(::<$($generics),*>)? {
+             $($fields)*
+         }? $crate::error::Error)
+     };
+     ($(&$this:ident in)? $t:ident $(::<$($generics:ty),* $(,)?>)? {
+         $($fields:tt)*
+     }? $err:ty) => {
+-        ::pin_init::try_init!($(&$this in)? $t $(::<$($generics),* $(,)?>)? {
++        ::pin_init::try_init!($(&$this in)? $t $(::<$($generics),*>)? {
+             $($fields)*
+         }? $err)
+     };
+@@ -291,14 +291,14 @@ macro_rules! try_pin_init {
+     ($(&$this:ident in)? $t:ident $(::<$($generics:ty),* $(,)?>)? {
+         $($fields:tt)*
+     }) => {
+-        ::pin_init::try_pin_init!($(&$this in)? $t $(::<$($generics),* $(,)?>)? {
++        ::pin_init::try_pin_init!($(&$this in)? $t $(::<$($generics),*>)? {
+             $($fields)*
+         }? $crate::error::Error)
+     };
+     ($(&$this:ident in)? $t:ident $(::<$($generics:ty),* $(,)?>)? {
+         $($fields:tt)*
+     }? $err:ty) => {
+-        ::pin_init::try_pin_init!($(&$this in)? $t $(::<$($generics),* $(,)?>)? {
++        ::pin_init::try_pin_init!($(&$this in)? $t $(::<$($generics),*>)? {
+             $($fields)*
+         }? $err)
+     };
+diff --git a/rust/kernel/kunit.rs b/rust/kernel/kunit.rs
+index 1604fb6a5b1b00..cf95700c8612b4 100644
+--- a/rust/kernel/kunit.rs
++++ b/rust/kernel/kunit.rs
+@@ -278,7 +278,7 @@ macro_rules! kunit_unsafe_test_suite {
+                     is_init: false,
+                 };
+ 
+-            #[used]
++            #[used(compiler)]
+             #[allow(unused_unsafe)]
+             #[cfg_attr(not(target_os = "macos"), link_section = ".kunit_test_suites")]
+             static mut KUNIT_TEST_SUITE_ENTRY: *const ::kernel::bindings::kunit_suite =
+diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
+index de07aadd1ff5fe..2bdf9d14ec43ac 100644
+--- a/rust/kernel/lib.rs
++++ b/rust/kernel/lib.rs
+@@ -26,6 +26,8 @@
+ #![feature(const_mut_refs)]
+ #![feature(const_ptr_write)]
+ #![feature(const_refs_to_cell)]
++// To be determined.
++#![feature(used_with_arg)]
+ 
+ // Ensure conditional compilation based on the kernel configuration works;
+ // otherwise we may silently break things like initcall handling.
+diff --git a/rust/macros/module.rs b/rust/macros/module.rs
+index 44e5cb108cea70..b19ce62f6af12a 100644
+--- a/rust/macros/module.rs
++++ b/rust/macros/module.rs
+@@ -57,7 +57,7 @@ fn emit_base(&mut self, field: &str, content: &str, builtin: bool) {
+                 {cfg}
+                 #[doc(hidden)]
+                 #[cfg_attr(not(target_os = \"macos\"), link_section = \".modinfo\")]
+-                #[used]
++                #[used(compiler)]
+                 pub static __{module}_{counter}: [u8; {length}] = *{string};
+             ",
+             cfg = if builtin {
+@@ -247,7 +247,7 @@ mod __module_init {{
+                     // key or a new section. For the moment, keep it simple.
+                     #[cfg(MODULE)]
+                     #[doc(hidden)]
+-                    #[used]
++                    #[used(compiler)]
+                     static __IS_RUST_MODULE: () = ();
+ 
+                     static mut __MOD: core::mem::MaybeUninit<{type_}> =
+@@ -271,7 +271,7 @@ mod __module_init {{
+ 
+                     #[cfg(MODULE)]
+                     #[doc(hidden)]
+-                    #[used]
++                    #[used(compiler)]
+                     #[link_section = \".init.data\"]
+                     static __UNIQUE_ID___addressable_init_module: unsafe extern \"C\" fn() -> i32 = init_module;
+ 
+@@ -291,7 +291,7 @@ mod __module_init {{
+ 
+                     #[cfg(MODULE)]
+                     #[doc(hidden)]
+-                    #[used]
++                    #[used(compiler)]
+                     #[link_section = \".exit.data\"]
+                     static __UNIQUE_ID___addressable_cleanup_module: extern \"C\" fn() = cleanup_module;
+ 
+@@ -301,7 +301,7 @@ mod __module_init {{
+                     #[cfg(not(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS))]
+                     #[doc(hidden)]
+                     #[link_section = \"{initcall_section}\"]
+-                    #[used]
++                    #[used(compiler)]
+                     pub static __{name}_initcall: extern \"C\" fn() -> kernel::ffi::c_int = __{name}_init;
+ 
+                     #[cfg(not(MODULE))]
+diff --git a/scripts/Makefile.build b/scripts/Makefile.build
+index 13dcd86e74ca83..eb0d27812aec29 100644
+--- a/scripts/Makefile.build
++++ b/scripts/Makefile.build
+@@ -222,7 +222,7 @@ $(obj)/%.lst: $(obj)/%.c FORCE
+ # Compile Rust sources (.rs)
+ # ---------------------------------------------------------------------------
+ 
+-rust_allowed_features := asm_const,asm_goto,arbitrary_self_types,lint_reasons,raw_ref_op
++rust_allowed_features := asm_const,asm_goto,arbitrary_self_types,lint_reasons,raw_ref_op,used_with_arg
+ 
+ # `--out-dir` is required to avoid temporaries being created by `rustc` in the
+ # current working directory, which may be not accessible in the out-of-tree
+diff --git a/sound/core/compress_offload.c b/sound/core/compress_offload.c
+index 840bb9cfe78906..a66f258cafaa87 100644
+--- a/sound/core/compress_offload.c
++++ b/sound/core/compress_offload.c
+@@ -1269,62 +1269,62 @@ static long snd_compr_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+ 	stream = &data->stream;
+ 
+ 	guard(mutex)(&stream->device->lock);
+-	switch (_IOC_NR(cmd)) {
+-	case _IOC_NR(SNDRV_COMPRESS_IOCTL_VERSION):
++	switch (cmd) {
++	case SNDRV_COMPRESS_IOCTL_VERSION:
+ 		return put_user(SNDRV_COMPRESS_VERSION,
+ 				(int __user *)arg) ? -EFAULT : 0;
+-	case _IOC_NR(SNDRV_COMPRESS_GET_CAPS):
++	case SNDRV_COMPRESS_GET_CAPS:
+ 		return snd_compr_get_caps(stream, arg);
+ #ifndef COMPR_CODEC_CAPS_OVERFLOW
+-	case _IOC_NR(SNDRV_COMPRESS_GET_CODEC_CAPS):
++	case SNDRV_COMPRESS_GET_CODEC_CAPS:
+ 		return snd_compr_get_codec_caps(stream, arg);
+ #endif
+-	case _IOC_NR(SNDRV_COMPRESS_SET_PARAMS):
++	case SNDRV_COMPRESS_SET_PARAMS:
+ 		return snd_compr_set_params(stream, arg);
+-	case _IOC_NR(SNDRV_COMPRESS_GET_PARAMS):
++	case SNDRV_COMPRESS_GET_PARAMS:
+ 		return snd_compr_get_params(stream, arg);
+-	case _IOC_NR(SNDRV_COMPRESS_SET_METADATA):
++	case SNDRV_COMPRESS_SET_METADATA:
+ 		return snd_compr_set_metadata(stream, arg);
+-	case _IOC_NR(SNDRV_COMPRESS_GET_METADATA):
++	case SNDRV_COMPRESS_GET_METADATA:
+ 		return snd_compr_get_metadata(stream, arg);
+ 	}
+ 
+ 	if (stream->direction == SND_COMPRESS_ACCEL) {
+ #if IS_ENABLED(CONFIG_SND_COMPRESS_ACCEL)
+-		switch (_IOC_NR(cmd)) {
+-		case _IOC_NR(SNDRV_COMPRESS_TASK_CREATE):
++		switch (cmd) {
++		case SNDRV_COMPRESS_TASK_CREATE:
+ 			return snd_compr_task_create(stream, arg);
+-		case _IOC_NR(SNDRV_COMPRESS_TASK_FREE):
++		case SNDRV_COMPRESS_TASK_FREE:
+ 			return snd_compr_task_seq(stream, arg, snd_compr_task_free_one);
+-		case _IOC_NR(SNDRV_COMPRESS_TASK_START):
++		case SNDRV_COMPRESS_TASK_START:
+ 			return snd_compr_task_start_ioctl(stream, arg);
+-		case _IOC_NR(SNDRV_COMPRESS_TASK_STOP):
++		case SNDRV_COMPRESS_TASK_STOP:
+ 			return snd_compr_task_seq(stream, arg, snd_compr_task_stop_one);
+-		case _IOC_NR(SNDRV_COMPRESS_TASK_STATUS):
++		case SNDRV_COMPRESS_TASK_STATUS:
+ 			return snd_compr_task_status_ioctl(stream, arg);
+ 		}
+ #endif
+ 		return -ENOTTY;
+ 	}
+ 
+-	switch (_IOC_NR(cmd)) {
+-	case _IOC_NR(SNDRV_COMPRESS_TSTAMP):
++	switch (cmd) {
++	case SNDRV_COMPRESS_TSTAMP:
+ 		return snd_compr_tstamp(stream, arg);
+-	case _IOC_NR(SNDRV_COMPRESS_AVAIL):
++	case SNDRV_COMPRESS_AVAIL:
+ 		return snd_compr_ioctl_avail(stream, arg);
+-	case _IOC_NR(SNDRV_COMPRESS_PAUSE):
++	case SNDRV_COMPRESS_PAUSE:
+ 		return snd_compr_pause(stream);
+-	case _IOC_NR(SNDRV_COMPRESS_RESUME):
++	case SNDRV_COMPRESS_RESUME:
+ 		return snd_compr_resume(stream);
+-	case _IOC_NR(SNDRV_COMPRESS_START):
++	case SNDRV_COMPRESS_START:
+ 		return snd_compr_start(stream);
+-	case _IOC_NR(SNDRV_COMPRESS_STOP):
++	case SNDRV_COMPRESS_STOP:
+ 		return snd_compr_stop(stream);
+-	case _IOC_NR(SNDRV_COMPRESS_DRAIN):
++	case SNDRV_COMPRESS_DRAIN:
+ 		return snd_compr_drain(stream);
+-	case _IOC_NR(SNDRV_COMPRESS_PARTIAL_DRAIN):
++	case SNDRV_COMPRESS_PARTIAL_DRAIN:
+ 		return snd_compr_partial_drain(stream);
+-	case _IOC_NR(SNDRV_COMPRESS_NEXT_TRACK):
++	case SNDRV_COMPRESS_NEXT_TRACK:
+ 		return snd_compr_next_track(stream);
+ 	}
+ 
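
Switching on the full cmd value instead of _IOC_NR(cmd) matters because the nr field is only the low 8 bits of an ioctl command word: two unrelated ioctls can share it. A small demonstration (the two command definitions are hypothetical):

    #include <stdio.h>
    #include <linux/ioctl.h>

    #define FOO_GET _IOR('C', 0x10, int)   /* hypothetical */
    #define BAR_SET _IOW('Z', 0x10, long)  /* hypothetical */

    int main(void)
    {
            /* Same nr field, entirely different commands. */
            printf("nr equal:  %d\n", _IOC_NR(FOO_GET) == _IOC_NR(BAR_SET));
            printf("cmd equal: %d\n", FOO_GET == BAR_SET);
            return 0;
    }
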
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f7bb97230201f0..5a6d0424bfedce 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10791,6 +10791,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8b97, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8bb3, "HP Slim OMEN", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8bb4, "HP Slim OMEN", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x103c, 0x8bbe, "HP Victus 16-r0xxx (MB 8BBE)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8bc8, "HP Victus 15-fa1xxx", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8bcd, "HP Omen 16-xd0xxx", ALC245_FIXUP_HP_MUTE_LED_V1_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8bdd, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10982,6 +10983,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
+ 	SND_PCI_QUIRK(0x1043, 0x1a63, "ASUS UX3405MA", ALC245_FIXUP_CS35L41_SPI_2),
+ 	SND_PCI_QUIRK(0x1043, 0x1a83, "ASUS UM5302LA", ALC294_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x1043, 0x1a8e, "ASUS G712LWS", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ 	SND_PCI_QUIRK(0x1043, 0x1a8f, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2),
+ 	SND_PCI_QUIRK(0x1043, 0x1b11, "ASUS UX431DA", ALC294_FIXUP_ASUS_COEF_1B),
+ 	SND_PCI_QUIRK(0x1043, 0x1b13, "ASUS U41SV/GA403U", ALC285_FIXUP_ASUS_GA403U_HEADSET_MIC),
+diff --git a/tools/hv/hv_fcopy_uio_daemon.c b/tools/hv/hv_fcopy_uio_daemon.c
+index 0198321d14a291..7d9bcb066d3fb4 100644
+--- a/tools/hv/hv_fcopy_uio_daemon.c
++++ b/tools/hv/hv_fcopy_uio_daemon.c
+@@ -35,7 +35,10 @@
+ #define WIN8_SRV_MINOR		1
+ #define WIN8_SRV_VERSION	(WIN8_SRV_MAJOR << 16 | WIN8_SRV_MINOR)
+ 
+-#define FCOPY_UIO		"/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio"
++#define FCOPY_DEVICE_PATH(subdir) \
++	"/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/" #subdir
++#define FCOPY_UIO_PATH          FCOPY_DEVICE_PATH(uio)
++#define FCOPY_CHANNELS_PATH     FCOPY_DEVICE_PATH(channels)
+ 
+ #define FCOPY_VER_COUNT		1
+ static const int fcopy_versions[] = {
+@@ -47,9 +50,62 @@ static const int fw_versions[] = {
+ 	UTIL_FW_VERSION
+ };
+ 
+-#define HV_RING_SIZE		0x4000 /* 16KB ring buffer size */
++static uint32_t get_ring_buffer_size(void)
++{
++	char ring_path[PATH_MAX];
++	DIR *dir;
++	struct dirent *entry;
++	struct stat st;
++	uint32_t ring_size = 0;
++	int retry_count = 0;
++
++	/* Find the channel directory */
++	dir = opendir(FCOPY_CHANNELS_PATH);
++	if (!dir) {
++		usleep(100 * 1000); /* Avoid race with kernel, wait 100ms and retry once */
++		dir = opendir(FCOPY_CHANNELS_PATH);
++		if (!dir) {
++			syslog(LOG_ERR, "Failed to open channels directory: %s", strerror(errno));
++			return 0;
++		}
++	}
++
++retry_once:
++	while ((entry = readdir(dir)) != NULL) {
++		if (entry->d_type == DT_DIR && strcmp(entry->d_name, ".") != 0 &&
++		    strcmp(entry->d_name, "..") != 0) {
++			snprintf(ring_path, sizeof(ring_path), "%s/%s/ring",
++				 FCOPY_CHANNELS_PATH, entry->d_name);
++
++			if (stat(ring_path, &st) == 0) {
++				/*
++				 * stat returns size of Tx, Rx rings combined,
++				 * so take half of it for individual ring size.
++				 */
++				ring_size = (uint32_t)st.st_size / 2;
++				syslog(LOG_INFO, "Ring buffer size from %s: %u bytes",
++				       ring_path, ring_size);
++				break;
++			}
++		}
++	}
+ 
+-static unsigned char desc[HV_RING_SIZE];
++	if (!ring_size && retry_count == 0) {
++		retry_count = 1;
++		rewinddir(dir);
++		usleep(100 * 1000); /* Wait 100ms and retry once */
++		goto retry_once;
++	}
++
++	closedir(dir);
++
++	if (!ring_size)
++		syslog(LOG_ERR, "Could not determine ring size");
++
++	return ring_size;
++}
++
++static unsigned char *desc;
+ 
+ static int target_fd;
+ static char target_fname[PATH_MAX];
+@@ -406,7 +462,7 @@ int main(int argc, char *argv[])
+ 	int daemonize = 1, long_index = 0, opt, ret = -EINVAL;
+ 	struct vmbus_br txbr, rxbr;
+ 	void *ring;
+-	uint32_t len = HV_RING_SIZE;
++	uint32_t ring_size, len;
+ 	char uio_name[NAME_MAX] = {0};
+ 	char uio_dev_path[PATH_MAX] = {0};
+ 
+@@ -437,7 +493,20 @@ int main(int argc, char *argv[])
+ 	openlog("HV_UIO_FCOPY", 0, LOG_USER);
+ 	syslog(LOG_INFO, "starting; pid is:%d", getpid());
+ 
+-	fcopy_get_first_folder(FCOPY_UIO, uio_name);
++	ring_size = get_ring_buffer_size();
++	if (!ring_size) {
++		ret = -ENODEV;
++		goto exit;
++	}
++
++	desc = malloc(ring_size * sizeof(unsigned char));
++	if (!desc) {
++		syslog(LOG_ERR, "malloc failed for desc buffer");
++		ret = -ENOMEM;
++		goto exit;
++	}
++
++	fcopy_get_first_folder(FCOPY_UIO_PATH, uio_name);
+ 	snprintf(uio_dev_path, sizeof(uio_dev_path), "/dev/%s", uio_name);
+ 	fcopy_fd = open(uio_dev_path, O_RDWR);
+ 
+@@ -445,17 +514,17 @@ int main(int argc, char *argv[])
+ 		syslog(LOG_ERR, "open %s failed; error: %d %s",
+ 		       uio_dev_path, errno, strerror(errno));
+ 		ret = fcopy_fd;
+-		goto exit;
++		goto free_desc;
+ 	}
+ 
+-	ring = vmbus_uio_map(&fcopy_fd, HV_RING_SIZE);
++	ring = vmbus_uio_map(&fcopy_fd, ring_size);
+ 	if (!ring) {
+ 		ret = errno;
+ 		syslog(LOG_ERR, "mmap ringbuffer failed; error: %d %s", ret, strerror(ret));
+ 		goto close;
+ 	}
+-	vmbus_br_setup(&txbr, ring, HV_RING_SIZE);
+-	vmbus_br_setup(&rxbr, (char *)ring + HV_RING_SIZE, HV_RING_SIZE);
++	vmbus_br_setup(&txbr, ring, ring_size);
++	vmbus_br_setup(&rxbr, (char *)ring + ring_size, ring_size);
+ 
+ 	rxbr.vbr->imask = 0;
+ 
+@@ -472,7 +541,7 @@ int main(int argc, char *argv[])
+ 			goto close;
+ 		}
+ 
+-		len = HV_RING_SIZE;
++		len = ring_size;
+ 		ret = rte_vmbus_chan_recv_raw(&rxbr, desc, &len);
+ 		if (unlikely(ret <= 0)) {
+ 			/* This indicates a failure to communicate (or worse) */
+@@ -492,6 +561,8 @@ int main(int argc, char *argv[])
+ 	}
+ close:
+ 	close(fcopy_fd);
++free_desc:
++	free(desc);
+ exit:
+ 	return ret;
+ }
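
In short, the daemon stops assuming a 16 KiB ring and instead sizes it from the channel's ring file, halving st_size because the mapping holds the Tx and Rx rings back to back. The core of that sizing logic as a standalone sketch (error handling trimmed; the path is supplied by the caller):

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/stat.h>

    /* st_size covers the Tx and Rx rings mapped back to back, so one
     * ring is half of it; returns 0 when the file cannot be stat'ed. */
    static uint32_t ring_size_of(const char *ring_path)
    {
            struct stat st;

            if (stat(ring_path, &st) != 0)
                    return 0;
            return (uint32_t)st.st_size / 2;
    }

    int main(int argc, char **argv)
    {
            if (argc > 1)
                    printf("%u\n", ring_size_of(argv[1]));
            return 0;
    }
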
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 97605ea8093ff6..c8e29c52d28c02 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -735,7 +735,7 @@ struct bpf_object {
+ 
+ 	struct usdt_manager *usdt_man;
+ 
+-	struct bpf_map *arena_map;
++	int arena_map_idx;
+ 	void *arena_data;
+ 	size_t arena_data_sz;
+ 
+@@ -1517,6 +1517,7 @@ static struct bpf_object *bpf_object__new(const char *path,
+ 	obj->efile.obj_buf_sz = obj_buf_sz;
+ 	obj->efile.btf_maps_shndx = -1;
+ 	obj->kconfig_map_idx = -1;
++	obj->arena_map_idx = -1;
+ 
+ 	obj->kern_version = get_kernel_version();
+ 	obj->state  = OBJ_OPEN;
+@@ -2964,7 +2965,7 @@ static int init_arena_map_data(struct bpf_object *obj, struct bpf_map *map,
+ 	const long page_sz = sysconf(_SC_PAGE_SIZE);
+ 	size_t mmap_sz;
+ 
+-	mmap_sz = bpf_map_mmap_sz(obj->arena_map);
++	mmap_sz = bpf_map_mmap_sz(map);
+ 	if (roundup(data_sz, page_sz) > mmap_sz) {
+ 		pr_warn("elf: sec '%s': declared ARENA map size (%zu) is too small to hold global __arena variables of size %zu\n",
+ 			sec_name, mmap_sz, data_sz);
+@@ -3038,12 +3039,12 @@ static int bpf_object__init_user_btf_maps(struct bpf_object *obj, bool strict,
+ 		if (map->def.type != BPF_MAP_TYPE_ARENA)
+ 			continue;
+ 
+-		if (obj->arena_map) {
++		if (obj->arena_map_idx >= 0) {
+ 			pr_warn("map '%s': only single ARENA map is supported (map '%s' is also ARENA)\n",
+-				map->name, obj->arena_map->name);
++				map->name, obj->maps[obj->arena_map_idx].name);
+ 			return -EINVAL;
+ 		}
+-		obj->arena_map = map;
++		obj->arena_map_idx = i;
+ 
+ 		if (obj->efile.arena_data) {
+ 			err = init_arena_map_data(obj, map, ARENA_SEC, obj->efile.arena_data_shndx,
+@@ -3053,7 +3054,7 @@ static int bpf_object__init_user_btf_maps(struct bpf_object *obj, bool strict,
+ 				return err;
+ 		}
+ 	}
+-	if (obj->efile.arena_data && !obj->arena_map) {
++	if (obj->efile.arena_data && obj->arena_map_idx < 0) {
+ 		pr_warn("elf: sec '%s': to use global __arena variables the ARENA map should be explicitly declared in SEC(\".maps\")\n",
+ 			ARENA_SEC);
+ 		return -ENOENT;
+@@ -4583,8 +4584,13 @@ static int bpf_program__record_reloc(struct bpf_program *prog,
+ 	if (shdr_idx == obj->efile.arena_data_shndx) {
+ 		reloc_desc->type = RELO_DATA;
+ 		reloc_desc->insn_idx = insn_idx;
+-		reloc_desc->map_idx = obj->arena_map - obj->maps;
++		reloc_desc->map_idx = obj->arena_map_idx;
+ 		reloc_desc->sym_off = sym->st_value;
++
++		map = &obj->maps[obj->arena_map_idx];
++		pr_debug("prog '%s': found arena map %d (%s, sec %d, off %zu) for insn %u\n",
++			 prog->name, obj->arena_map_idx, map->name, map->sec_idx,
++			 map->sec_offset, insn_idx);
+ 		return 0;
+ 	}
+ 
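
The libbpf change swaps a cached struct bpf_map pointer for an index because obj->maps can be reallocated while the object is loaded, leaving any stored pointer dangling; an index stays valid across the realloc. The bug class in miniature:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
            size_t n = 1;
            int *arr = malloc(n * sizeof(*arr));
            int *ptr, *tmp;
            size_t idx;

            if (!arr)
                    return 1;
            arr[0] = 7;

            ptr = &arr[0];            /* like the old obj->arena_map     */
            idx = 0;                  /* like the new obj->arena_map_idx */

            n = 1024;
            tmp = realloc(arr, n * sizeof(*arr));
            if (!tmp) {
                    free(arr);
                    return 1;
            }
            arr = tmp;

            /* ptr may now dangle; arr[idx] is always correct. */
            printf("%d\n", arr[idx]);
            (void)ptr;                /* dereferencing it would be UB */
            free(arr);
            return 0;
    }
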
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index d967ac001498bb..67d76f3a1dce55 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -224,6 +224,7 @@ static bool is_rust_noreturn(const struct symbol *func)
+ 	       str_ends_with(func->name, "_4core9panicking14panic_explicit")				||
+ 	       str_ends_with(func->name, "_4core9panicking14panic_nounwind")				||
+ 	       str_ends_with(func->name, "_4core9panicking18panic_bounds_check")			||
++	       str_ends_with(func->name, "_4core9panicking18panic_nounwind_fmt")			||
+ 	       str_ends_with(func->name, "_4core9panicking19assert_failed_inner")			||
+ 	       str_ends_with(func->name, "_4core9panicking30panic_null_pointer_dereference")		||
+ 	       str_ends_with(func->name, "_4core9panicking36panic_misaligned_pointer_dereference")	||
+diff --git a/tools/testing/selftests/net/udpgro.sh b/tools/testing/selftests/net/udpgro.sh
+index d5ffd8c9172e1d..799dbc2b4b01c9 100755
+--- a/tools/testing/selftests/net/udpgro.sh
++++ b/tools/testing/selftests/net/udpgro.sh
+@@ -48,7 +48,7 @@ run_one() {
+ 
+ 	cfg_veth
+ 
+-	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${rx_args} &
++	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 100 ${rx_args} &
+ 	local PID1=$!
+ 
+ 	wait_local_port_listen ${PEER_NS} 8000 udp
+@@ -95,7 +95,7 @@ run_one_nat() {
+ 	# will land on the 'plain' one
+ 	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -G ${family} -b ${addr1} -n 0 &
+ 	local PID1=$!
+-	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${family} -b ${addr2%/*} ${rx_args} &
++	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 100 ${family} -b ${addr2%/*} ${rx_args} &
+ 	local PID2=$!
+ 
+ 	wait_local_port_listen "${PEER_NS}" 8000 udp
+@@ -117,9 +117,9 @@ run_one_2sock() {
+ 
+ 	cfg_veth
+ 
+-	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${rx_args} -p 12345 &
++	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 100 ${rx_args} -p 12345 &
+ 	local PID1=$!
+-	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 2000 -R 10 ${rx_args} &
++	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 2000 -R 100 ${rx_args} &
+ 	local PID2=$!
+ 
+ 	wait_local_port_listen "${PEER_NS}" 12345 udp
+diff --git a/tools/testing/selftests/sched_ext/exit.c b/tools/testing/selftests/sched_ext/exit.c
+index 9451782689de1d..ee25824b1cbe6d 100644
+--- a/tools/testing/selftests/sched_ext/exit.c
++++ b/tools/testing/selftests/sched_ext/exit.c
+@@ -22,6 +22,14 @@ static enum scx_test_status run(void *ctx)
+ 		struct bpf_link *link;
+ 		char buf[16];
+ 
++		/*
++		 * On single-CPU systems, ops.select_cpu() is never
++		 * invoked, so skip this test to avoid getting stuck
++		 * indefinitely.
++		 */
++		if (tc == EXIT_SELECT_CPU && libbpf_num_possible_cpus() == 1)
++			continue;
++
+ 		skel = exit__open();
+ 		SCX_ENUM_INIT(skel);
+ 		skel->rodata->exit_point = tc;


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-08-01 10:30 Arisu Tachibana
  0 siblings, 0 replies; 19+ messages in thread
From: Arisu Tachibana @ 2025-08-01 10:30 UTC (permalink / raw
  To: gentoo-commits

commit:     c27efdd4d8130112de7c85dd970296b16cbf95b7
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Aug  1 10:30:29 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Aug  1 10:30:29 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c27efdd4

Linux patch 6.15.9

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README             |    4 +
 1008_linux-6.15.9.patch | 3672 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3676 insertions(+)

diff --git a/0000_README b/0000_README
index 21f71559..1a9c19eb 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1007_linux-6.15.8.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.15.8
 
+Patch:  1008_linux-6.15.9.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.15.9
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1008_linux-6.15.9.patch b/1008_linux-6.15.9.patch
new file mode 100644
index 00000000..aa16a660
--- /dev/null
+++ b/1008_linux-6.15.9.patch
@@ -0,0 +1,3672 @@
+diff --git a/Makefile b/Makefile
+index 8e2d0372336867..d38a669b3ada6f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 15
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index 25ed6f1a7c7ae5..e55a3fee104a83 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -121,7 +121,7 @@ config ARM
+ 	select HAVE_KERNEL_XZ
+ 	select HAVE_KPROBES if !XIP_KERNEL && !CPU_ENDIAN_BE32 && !CPU_V7M
+ 	select HAVE_KRETPROBES if HAVE_KPROBES
+-	select HAVE_LD_DEAD_CODE_DATA_ELIMINATION if (LD_VERSION >= 23600 || LD_CAN_USE_KEEP_IN_OVERLAY)
++	select HAVE_LD_DEAD_CODE_DATA_ELIMINATION if (LD_VERSION >= 23600 || LD_IS_LLD) && LD_CAN_USE_KEEP_IN_OVERLAY
+ 	select HAVE_MOD_ARCH_SPECIFIC
+ 	select HAVE_NMI
+ 	select HAVE_OPTPROBES if !THUMB2_KERNEL
+diff --git a/arch/arm/Makefile b/arch/arm/Makefile
+index 4808d3ed98e42d..e31e95ffd33fcf 100644
+--- a/arch/arm/Makefile
++++ b/arch/arm/Makefile
+@@ -149,7 +149,7 @@ endif
+ # Need -Uarm for gcc < 3.x
+ KBUILD_CPPFLAGS	+=$(cpp-y)
+ KBUILD_CFLAGS	+=$(CFLAGS_ABI) $(CFLAGS_ISA) $(arch-y) $(tune-y) $(call cc-option,-mshort-load-bytes,$(call cc-option,-malignment-traps,)) -msoft-float -Uarm
+-KBUILD_AFLAGS	+=$(CFLAGS_ABI) $(AFLAGS_ISA) -Wa,$(arch-y) $(tune-y) -include asm/unified.h -msoft-float
++KBUILD_AFLAGS	+=$(CFLAGS_ABI) $(AFLAGS_ISA) -Wa,$(arch-y) $(tune-y) -include $(srctree)/arch/arm/include/asm/unified.h -msoft-float
+ KBUILD_RUSTFLAGS += --target=arm-unknown-linux-gnueabi
+ 
+ CHECKFLAGS	+= -D__arm__
+diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
+index ad63457a05c5b0..c56c21bb1eec23 100644
+--- a/arch/arm64/include/asm/assembler.h
++++ b/arch/arm64/include/asm/assembler.h
+@@ -41,6 +41,11 @@
+ /*
+  * Save/restore interrupts.
+  */
++	.macro save_and_disable_daif, flags
++	mrs	\flags, daif
++	msr	daifset, #0xf
++	.endm
++
+ 	.macro	save_and_disable_irq, flags
+ 	mrs	\flags, daif
+ 	msr	daifset, #3
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index 5ae2a34b50bda5..30dcb719685b71 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -825,6 +825,7 @@ SYM_CODE_END(__bp_harden_el1_vectors)
+  *
+  */
+ SYM_FUNC_START(cpu_switch_to)
++	save_and_disable_daif x11
+ 	mov	x10, #THREAD_CPU_CONTEXT
+ 	add	x8, x0, x10
+ 	mov	x9, sp
+@@ -848,6 +849,7 @@ SYM_FUNC_START(cpu_switch_to)
+ 	ptrauth_keys_install_kernel x1, x8, x9, x10
+ 	scs_save x0
+ 	scs_load_current
++	restore_irq x11
+ 	ret
+ SYM_FUNC_END(cpu_switch_to)
+ NOKPROBE(cpu_switch_to)
+@@ -874,6 +876,7 @@ NOKPROBE(ret_from_fork)
+  * Calls func(regs) using this CPU's irq stack and shadow irq stack.
+  */
+ SYM_FUNC_START(call_on_irq_stack)
++	save_and_disable_daif x9
+ #ifdef CONFIG_SHADOW_CALL_STACK
+ 	get_current_task x16
+ 	scs_save x16
+@@ -888,8 +891,10 @@ SYM_FUNC_START(call_on_irq_stack)
+ 
+ 	/* Move to the new stack and call the function there */
+ 	add	sp, x16, #IRQ_STACK_SIZE
++	restore_irq x9
+ 	blr	x1
+ 
++	save_and_disable_daif x9
+ 	/*
+ 	 * Restore the SP from the FP, and restore the FP and LR from the frame
+ 	 * record.
+@@ -897,6 +902,7 @@ SYM_FUNC_START(call_on_irq_stack)
+ 	mov	sp, x29
+ 	ldp	x29, x30, [sp], #16
+ 	scs_load_current
++	restore_irq x9
+ 	ret
+ SYM_FUNC_END(call_on_irq_stack)
+ NOKPROBE(call_on_irq_stack)
+diff --git a/arch/x86/hyperv/irqdomain.c b/arch/x86/hyperv/irqdomain.c
+index 31f0d29cbc5e3e..e28c317ac9e813 100644
+--- a/arch/x86/hyperv/irqdomain.c
++++ b/arch/x86/hyperv/irqdomain.c
+@@ -192,7 +192,6 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ 	struct pci_dev *dev;
+ 	struct hv_interrupt_entry out_entry, *stored_entry;
+ 	struct irq_cfg *cfg = irqd_cfg(data);
+-	const cpumask_t *affinity;
+ 	int cpu;
+ 	u64 status;
+ 
+@@ -204,8 +203,7 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ 		return;
+ 	}
+ 
+-	affinity = irq_data_get_effective_affinity_mask(data);
+-	cpu = cpumask_first_and(affinity, cpu_online_mask);
++	cpu = cpumask_first(irq_data_get_effective_affinity_mask(data));
+ 
+ 	if (data->chip_data) {
+ 		/*
+diff --git a/arch/x86/include/asm/debugreg.h b/arch/x86/include/asm/debugreg.h
+index fdbbbfec745aa5..820b4aeabd0c24 100644
+--- a/arch/x86/include/asm/debugreg.h
++++ b/arch/x86/include/asm/debugreg.h
+@@ -9,6 +9,14 @@
+ #include <asm/cpufeature.h>
+ #include <asm/msr.h>
+ 
++/*
++ * Define the bits that are always set to 1 in DR7; only bit 10 is
++ * architecturally reserved to '1'.
++ *
++ * This is also the init/reset value for DR7.
++ */
++#define DR7_FIXED_1	0x00000400
++
+ DECLARE_PER_CPU(unsigned long, cpu_dr7);
+ 
+ #ifndef CONFIG_PARAVIRT_XXL
+@@ -100,8 +108,8 @@ static __always_inline void native_set_debugreg(int regno, unsigned long value)
+ 
+ static inline void hw_breakpoint_disable(void)
+ {
+-	/* Zero the control register for HW Breakpoint */
+-	set_debugreg(0UL, 7);
++	/* Reset the control register for HW Breakpoint */
++	set_debugreg(DR7_FIXED_1, 7);
+ 
+ 	/* Zero-out the individual HW breakpoint address registers */
+ 	set_debugreg(0UL, 0);
+@@ -125,9 +133,12 @@ static __always_inline unsigned long local_db_save(void)
+ 		return 0;
+ 
+ 	get_debugreg(dr7, 7);
+-	dr7 &= ~0x400; /* architecturally set bit */
++
++	/* Architecturally set bit */
++	dr7 &= ~DR7_FIXED_1;
+ 	if (dr7)
+-		set_debugreg(0, 7);
++		set_debugreg(DR7_FIXED_1, 7);
++
+ 	/*
+ 	 * Ensure the compiler doesn't lower the above statements into
+ 	 * the critical section; disabling breakpoints late would not
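
Since bit 10 of DR7 always reads as 1, "disable all breakpoints" must write 0x400 rather than 0, and the series centralises that as DR7_FIXED_1 for every reset site touched below. A trivial sketch of the convention (plain C stand-in for the set_debugreg() calls):

    #include <stdint.h>
    #include <stdio.h>

    #define DR7_FIXED_1 0x00000400u   /* bit 10 always reads as 1 */

    static uint32_t dr7;              /* stand-in for the register */

    static void hw_breakpoint_disable_all(void)
    {
            dr7 = DR7_FIXED_1;        /* reset value, not plain 0 */
    }

    int main(void)
    {
            hw_breakpoint_disable_all();
            printf("dr7=%#x\n", (unsigned)dr7);
            return 0;
    }
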
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index f094493232b645..8980786686bffe 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -31,6 +31,7 @@
+ 
+ #include <asm/apic.h>
+ #include <asm/pvclock-abi.h>
++#include <asm/debugreg.h>
+ #include <asm/desc.h>
+ #include <asm/mtrr.h>
+ #include <asm/msr-index.h>
+@@ -247,7 +248,6 @@ enum x86_intercept_stage;
+ #define DR7_BP_EN_MASK	0x000000ff
+ #define DR7_GE		(1 << 9)
+ #define DR7_GD		(1 << 13)
+-#define DR7_FIXED_1	0x00000400
+ #define DR7_VOLATILE	0xffff2bff
+ 
+ #define KVM_GUESTDBG_VALID_MASK \
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index c39c5c37f4e825..bab44900b937cb 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -2220,7 +2220,7 @@ EXPORT_PER_CPU_SYMBOL(__stack_chk_guard);
+ static void initialize_debug_regs(void)
+ {
+ 	/* Control register first -- to make sure everything is disabled. */
+-	set_debugreg(0, 7);
++	set_debugreg(DR7_FIXED_1, 7);
+ 	set_debugreg(DR6_RESERVED, 6);
+ 	/* dr5 and dr4 don't exist */
+ 	set_debugreg(0, 3);
+diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
+index 102641fd217284..8b1a9733d13e3f 100644
+--- a/arch/x86/kernel/kgdb.c
++++ b/arch/x86/kernel/kgdb.c
+@@ -385,7 +385,7 @@ static void kgdb_disable_hw_debug(struct pt_regs *regs)
+ 	struct perf_event *bp;
+ 
+ 	/* Disable hardware debugging while we are in kgdb: */
+-	set_debugreg(0UL, 7);
++	set_debugreg(DR7_FIXED_1, 7);
+ 	for (i = 0; i < HBP_NUM; i++) {
+ 		if (!breakinfo[i].enabled)
+ 			continue;
+diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
+index 4636ef3599737c..3c7621663800b3 100644
+--- a/arch/x86/kernel/process_32.c
++++ b/arch/x86/kernel/process_32.c
+@@ -93,7 +93,7 @@ void __show_regs(struct pt_regs *regs, enum show_regs_mode mode,
+ 
+ 	/* Only print out debug registers if they are in their non-default state. */
+ 	if ((d0 == 0) && (d1 == 0) && (d2 == 0) && (d3 == 0) &&
+-	    (d6 == DR6_RESERVED) && (d7 == 0x400))
++	    (d6 == DR6_RESERVED) && (d7 == DR7_FIXED_1))
+ 		return;
+ 
+ 	printk("%sDR0: %08lx DR1: %08lx DR2: %08lx DR3: %08lx\n",
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 7196ca7048be0c..8565aa31afafe1 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -132,7 +132,7 @@ void __show_regs(struct pt_regs *regs, enum show_regs_mode mode,
+ 
+ 	/* Only print out debug registers if they are in their non-default state. */
+ 	if (!((d0 == 0) && (d1 == 0) && (d2 == 0) && (d3 == 0) &&
+-	    (d6 == DR6_RESERVED) && (d7 == 0x400))) {
++	    (d6 == DR6_RESERVED) && (d7 == DR7_FIXED_1))) {
+ 		printk("%sDR0: %016lx DR1: %016lx DR2: %016lx\n",
+ 		       log_lvl, d0, d1, d2);
+ 		printk("%sDR3: %016lx DR6: %016lx DR7: %016lx\n",
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index be7bb6d20129db..7bae91eb7b2345 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -10979,7 +10979,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		wrmsrl(MSR_IA32_XFD_ERR, vcpu->arch.guest_fpu.xfd_err);
+ 
+ 	if (unlikely(vcpu->arch.switch_db_regs)) {
+-		set_debugreg(0, 7);
++		set_debugreg(DR7_FIXED_1, 7);
+ 		set_debugreg(vcpu->arch.eff_db[0], 0);
+ 		set_debugreg(vcpu->arch.eff_db[1], 1);
+ 		set_debugreg(vcpu->arch.eff_db[2], 2);
+@@ -10988,7 +10988,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT))
+ 			kvm_x86_call(set_dr6)(vcpu, vcpu->arch.dr6);
+ 	} else if (unlikely(hw_breakpoint_active())) {
+-		set_debugreg(0, 7);
++		set_debugreg(DR7_FIXED_1, 7);
+ 	}
+ 
+ 	vcpu->arch.host_debugctl = get_debugctlmsr();
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index f2843f81467515..1f3f782a04ba23 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -1173,6 +1173,8 @@ struct regmap *__regmap_init(struct device *dev,
+ err_map:
+ 	kfree(map);
+ err:
++	if (bus && bus->free_on_exit)
++		kfree(bus);
+ 	return ERR_PTR(ret);
+ }
+ EXPORT_SYMBOL_GPL(__regmap_init);
+diff --git a/drivers/bus/fsl-mc/fsl-mc-bus.c b/drivers/bus/fsl-mc/fsl-mc-bus.c
+index 7671bd15854551..c1c0a4759c7e4f 100644
+--- a/drivers/bus/fsl-mc/fsl-mc-bus.c
++++ b/drivers/bus/fsl-mc/fsl-mc-bus.c
+@@ -943,6 +943,7 @@ struct fsl_mc_device *fsl_mc_get_endpoint(struct fsl_mc_device *mc_dev,
+ 	struct fsl_mc_obj_desc endpoint_desc = {{ 0 }};
+ 	struct dprc_endpoint endpoint1 = {{ 0 }};
+ 	struct dprc_endpoint endpoint2 = {{ 0 }};
++	struct fsl_mc_bus *mc_bus;
+ 	int state, err;
+ 
+ 	mc_bus_dev = to_fsl_mc_device(mc_dev->dev.parent);
+@@ -966,6 +967,8 @@ struct fsl_mc_device *fsl_mc_get_endpoint(struct fsl_mc_device *mc_dev,
+ 	strcpy(endpoint_desc.type, endpoint2.type);
+ 	endpoint_desc.id = endpoint2.id;
+ 	endpoint = fsl_mc_device_lookup(&endpoint_desc, mc_bus_dev);
++	if (endpoint)
++		return endpoint;
+ 
+ 	/*
+ 	 * We know that the device has an endpoint because we verified by
+@@ -973,17 +976,13 @@ struct fsl_mc_device *fsl_mc_get_endpoint(struct fsl_mc_device *mc_dev,
+ 	 * yet discovered by the fsl-mc bus, thus the lookup returned NULL.
+ 	 * Force a rescan of the devices in this container and retry the lookup.
+ 	 */
+-	if (!endpoint) {
+-		struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(mc_bus_dev);
+-
+-		if (mutex_trylock(&mc_bus->scan_mutex)) {
+-			err = dprc_scan_objects(mc_bus_dev, true);
+-			mutex_unlock(&mc_bus->scan_mutex);
+-		}
+-
+-		if (err < 0)
+-			return ERR_PTR(err);
++	mc_bus = to_fsl_mc_bus(mc_bus_dev);
++	if (mutex_trylock(&mc_bus->scan_mutex)) {
++		err = dprc_scan_objects(mc_bus_dev, true);
++		mutex_unlock(&mc_bus->scan_mutex);
+ 	}
++	if (err < 0)
++		return ERR_PTR(err);
+ 
+ 	endpoint = fsl_mc_device_lookup(&endpoint_desc, mc_bus_dev);
+ 	/*
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 82a22a62b99bee..5a074e4b478077 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -5123,6 +5123,8 @@ int amdgpu_device_resume(struct drm_device *dev, bool notify_clients)
+ 		dev->dev->power.disable_depth--;
+ #endif
+ 	}
++
++	amdgpu_vram_mgr_clear_reset_blocks(adev);
+ 	adev->in_suspend = false;
+ 
+ 	if (adev->enable_mes)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
+index 529c9696c2f32a..c2242bd5ecc4de 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
+@@ -26,6 +26,8 @@
+ #include "amdgpu_sdma.h"
+ #include "amdgpu_ras.h"
+ #include "amdgpu_reset.h"
++#include "gc/gc_10_1_0_offset.h"
++#include "gc/gc_10_3_0_sh_mask.h"
+ 
+ #define AMDGPU_CSA_SDMA_SIZE 64
+ /* SDMA CSA reside in the 3rd page of CSA */
+@@ -561,10 +563,46 @@ void amdgpu_sdma_register_on_reset_callbacks(struct amdgpu_device *adev, struct
+ 	list_add_tail(&funcs->list, &adev->sdma.reset_callback_list);
+ }
+ 
++static int amdgpu_sdma_soft_reset(struct amdgpu_device *adev, u32 instance_id)
++{
++	struct amdgpu_sdma_instance *sdma_instance = &adev->sdma.instance[instance_id];
++	int r = -EOPNOTSUPP;
++
++	switch (amdgpu_ip_version(adev, SDMA0_HWIP, 0)) {
++	case IP_VERSION(4, 4, 2):
++	case IP_VERSION(4, 4, 4):
++	case IP_VERSION(4, 4, 5):
++		/* For SDMA 4.x, use the existing DPM interface for backward compatibility,
++		 * we need to convert the logical instance ID to physical instance ID before reset.
++		 */
++		r = amdgpu_dpm_reset_sdma(adev, 1 << GET_INST(SDMA0, instance_id));
++		break;
++	case IP_VERSION(5, 0, 0):
++	case IP_VERSION(5, 0, 1):
++	case IP_VERSION(5, 0, 2):
++	case IP_VERSION(5, 0, 5):
++	case IP_VERSION(5, 2, 0):
++	case IP_VERSION(5, 2, 2):
++	case IP_VERSION(5, 2, 4):
++	case IP_VERSION(5, 2, 5):
++	case IP_VERSION(5, 2, 6):
++	case IP_VERSION(5, 2, 3):
++	case IP_VERSION(5, 2, 1):
++	case IP_VERSION(5, 2, 7):
++		if (sdma_instance->funcs->soft_reset_kernel_queue)
++			r = sdma_instance->funcs->soft_reset_kernel_queue(adev, instance_id);
++		break;
++	default:
++		break;
++	}
++
++	return r;
++}
++
+ /**
+  * amdgpu_sdma_reset_engine - Reset a specific SDMA engine
+  * @adev: Pointer to the AMDGPU device
+- * @instance_id: ID of the SDMA engine instance to reset
++ * @instance_id: Logical ID of the SDMA engine instance to reset
+  *
+  * This function performs the following steps:
+  * 1. Calls all registered pre_reset callbacks to allow KFD and AMDGPU to save their state.
+@@ -611,9 +649,9 @@ int amdgpu_sdma_reset_engine(struct amdgpu_device *adev, uint32_t instance_id)
+ 	}
+ 
+ 	/* Perform the SDMA reset for the specified instance */
+-	ret = amdgpu_dpm_reset_sdma(adev, 1 << instance_id);
++	ret = amdgpu_sdma_soft_reset(adev, instance_id);
+ 	if (ret) {
+-		dev_err(adev->dev, "Failed to reset SDMA instance %u\n", instance_id);
++		dev_err(adev->dev, "Failed to reset SDMA logical instance %u\n", instance_id);
+ 		goto exit;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.h
+index 47d56fd0589fc1..bf83d66462380e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.h
+@@ -50,6 +50,12 @@ enum amdgpu_sdma_irq {
+ 
+ #define NUM_SDMA(x) hweight32(x)
+ 
++struct amdgpu_sdma_funcs {
++	int (*stop_kernel_queue)(struct amdgpu_ring *ring);
++	int (*start_kernel_queue)(struct amdgpu_ring *ring);
++	int (*soft_reset_kernel_queue)(struct amdgpu_device *adev, u32 instance_id);
++};
++
+ struct amdgpu_sdma_instance {
+ 	/* SDMA firmware */
+ 	const struct firmware	*fw;
+@@ -68,7 +74,7 @@ struct amdgpu_sdma_instance {
+ 	/* track guilty state of GFX and PAGE queues */
+ 	bool			gfx_guilty;
+ 	bool			page_guilty;
+-
++	const struct amdgpu_sdma_funcs   *funcs;
+ };
+ 
+ enum amdgpu_sdma_ras_memory_id {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+index 208b7d1d8a277b..450e4bf093b79b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+@@ -154,6 +154,7 @@ int amdgpu_vram_mgr_reserve_range(struct amdgpu_vram_mgr *mgr,
+ 				  uint64_t start, uint64_t size);
+ int amdgpu_vram_mgr_query_page_status(struct amdgpu_vram_mgr *mgr,
+ 				      uint64_t start);
++void amdgpu_vram_mgr_clear_reset_blocks(struct amdgpu_device *adev);
+ 
+ bool amdgpu_res_cpu_visible(struct amdgpu_device *adev,
+ 			    struct ttm_resource *res);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+index abdc52b0895a60..07c936e90d8e40 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+@@ -782,6 +782,23 @@ uint64_t amdgpu_vram_mgr_vis_usage(struct amdgpu_vram_mgr *mgr)
+ 	return atomic64_read(&mgr->vis_usage);
+ }
+ 
++/**
++ * amdgpu_vram_mgr_clear_reset_blocks - reset the cleared blocks
++ *
++ * @adev: amdgpu device pointer
++ *
++ * Reset the cleared drm buddy blocks.
++ */
++void amdgpu_vram_mgr_clear_reset_blocks(struct amdgpu_device *adev)
++{
++	struct amdgpu_vram_mgr *mgr = &adev->mman.vram_mgr;
++	struct drm_buddy *mm = &mgr->mm;
++
++	mutex_lock(&mgr->lock);
++	drm_buddy_reset_clear(mm, false);
++	mutex_unlock(&mgr->lock);
++}
++
+ /**
+  * amdgpu_vram_mgr_intersects - test each drm buddy block for intersection
+  *
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 87c2bc5f64a6ce..f6d71bf7c89c20 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3548,13 +3548,15 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
+ 
+ 	luminance_range = &conn_base->display_info.luminance_range;
+ 
+-	if (luminance_range->max_luminance) {
+-		caps->aux_min_input_signal = luminance_range->min_luminance;
++	if (luminance_range->max_luminance)
+ 		caps->aux_max_input_signal = luminance_range->max_luminance;
+-	} else {
+-		caps->aux_min_input_signal = 0;
++	else
+ 		caps->aux_max_input_signal = 512;
+-	}
++
++	if (luminance_range->min_luminance)
++		caps->aux_min_input_signal = luminance_range->min_luminance;
++	else
++		caps->aux_min_input_signal = 1;
+ 
+ 	min_input_signal_override = drm_get_panel_min_brightness_quirk(aconnector->drm_edid);
+ 	if (min_input_signal_override >= 0)
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi86.c b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+index 4ea13e5a3a54a6..48766b6abd29ae 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi86.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+@@ -1351,7 +1351,7 @@ static int ti_sn_bridge_probe(struct auxiliary_device *adev,
+ 			regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG,
+ 					   HPD_DISABLE, 0);
+ 		mutex_unlock(&pdata->comms_mutex);
+-	};
++	}
+ 
+ 	drm_bridge_add(&pdata->bridge);
+ 
+diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
+index 241c855f891f8b..66aff35f864762 100644
+--- a/drivers/gpu/drm/drm_buddy.c
++++ b/drivers/gpu/drm/drm_buddy.c
+@@ -404,6 +404,49 @@ drm_get_buddy(struct drm_buddy_block *block)
+ }
+ EXPORT_SYMBOL(drm_get_buddy);
+ 
++/**
++ * drm_buddy_reset_clear - reset the blocks' clear state
++ *
++ * @mm: DRM buddy manager
++ * @is_clear: blocks clear state
++ *
++ * Reset the clear state to the @is_clear value for each block
++ * in the free lists.
++ */
++void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear)
++{
++	u64 root_size, size, start;
++	unsigned int order;
++	int i;
++
++	size = mm->size;
++	for (i = 0; i < mm->n_roots; ++i) {
++		order = ilog2(size) - ilog2(mm->chunk_size);
++		start = drm_buddy_block_offset(mm->roots[i]);
++		__force_merge(mm, start, start + size, order);
++
++		root_size = mm->chunk_size << order;
++		size -= root_size;
++	}
++
++	for (i = 0; i <= mm->max_order; ++i) {
++		struct drm_buddy_block *block;
++
++		list_for_each_entry_reverse(block, &mm->free_list[i], link) {
++			if (is_clear != drm_buddy_block_is_clear(block)) {
++				if (is_clear) {
++					mark_cleared(block);
++					mm->clear_avail += drm_buddy_block_size(mm, block);
++				} else {
++					clear_reset(block);
++					mm->clear_avail -= drm_buddy_block_size(mm, block);
++				}
++			}
++		}
++	}
++}
++EXPORT_SYMBOL(drm_buddy_reset_clear);
++
+ /**
+  * drm_buddy_free_block - free a block
+  *
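
drm_buddy_reset_clear() above force-merges each root, then walks every free list and flips each block's cleared bit while keeping the clear_avail accounting in sync. A stripped-down sketch of that walk, with simplified types standing in for the drm_buddy structures:

#include <stdbool.h>
#include <stdio.h>

struct block {
        struct block *next;
        unsigned long size;
        bool cleared;
};

/* Force every free block's clear state to is_clear, keeping the
 * aggregate clear_avail counter consistent with the per-block flags. */
static void reset_clear(struct block *head, bool is_clear,
                        unsigned long *clear_avail)
{
        for (struct block *b = head; b; b = b->next) {
                if (b->cleared == is_clear)
                        continue;
                b->cleared = is_clear;
                if (is_clear)
                        *clear_avail += b->size;
                else
                        *clear_avail -= b->size;
        }
}

int main(void)
{
        struct block b1 = { 0, 4096, true };
        struct block b0 = { &b1, 8192, false };
        unsigned long clear_avail = 4096;

        reset_clear(&b0, false, &clear_avail);
        printf("clear_avail = %lu\n", clear_avail);   /* 0 */
        return 0;
}
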
+diff --git a/drivers/gpu/drm/drm_gem_dma_helper.c b/drivers/gpu/drm/drm_gem_dma_helper.c
+index b7f033d4352a34..4f0320df858f89 100644
+--- a/drivers/gpu/drm/drm_gem_dma_helper.c
++++ b/drivers/gpu/drm/drm_gem_dma_helper.c
+@@ -230,7 +230,7 @@ void drm_gem_dma_free(struct drm_gem_dma_object *dma_obj)
+ 
+ 	if (drm_gem_is_imported(gem_obj)) {
+ 		if (dma_obj->vaddr)
+-			dma_buf_vunmap_unlocked(gem_obj->dma_buf, &map);
++			dma_buf_vunmap_unlocked(gem_obj->import_attach->dmabuf, &map);
+ 		drm_prime_gem_destroy(gem_obj, dma_obj->sgt);
+ 	} else if (dma_obj->vaddr) {
+ 		if (dma_obj->map_noncoherent)
+diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
+index 0fbeb686e561ed..2bf606ba24cd16 100644
+--- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c
++++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
+@@ -419,6 +419,7 @@ EXPORT_SYMBOL(drm_gem_fb_vunmap);
+ static void __drm_gem_fb_end_cpu_access(struct drm_framebuffer *fb, enum dma_data_direction dir,
+ 					unsigned int num_planes)
+ {
++	struct dma_buf_attachment *import_attach;
+ 	struct drm_gem_object *obj;
+ 	int ret;
+ 
+@@ -427,9 +428,10 @@ static void __drm_gem_fb_end_cpu_access(struct drm_framebuffer *fb, enum dma_dat
+ 		obj = drm_gem_fb_get_obj(fb, num_planes);
+ 		if (!obj)
+ 			continue;
++		import_attach = obj->import_attach;
+ 		if (!drm_gem_is_imported(obj))
+ 			continue;
+-		ret = dma_buf_end_cpu_access(obj->dma_buf, dir);
++		ret = dma_buf_end_cpu_access(import_attach->dmabuf, dir);
+ 		if (ret)
+ 			drm_err(fb->dev, "dma_buf_end_cpu_access(%u, %d) failed: %d\n",
+ 				ret, num_planes, dir);
+@@ -452,6 +454,7 @@ static void __drm_gem_fb_end_cpu_access(struct drm_framebuffer *fb, enum dma_dat
+  */
+ int drm_gem_fb_begin_cpu_access(struct drm_framebuffer *fb, enum dma_data_direction dir)
+ {
++	struct dma_buf_attachment *import_attach;
+ 	struct drm_gem_object *obj;
+ 	unsigned int i;
+ 	int ret;
+@@ -462,9 +465,10 @@ int drm_gem_fb_begin_cpu_access(struct drm_framebuffer *fb, enum dma_data_direct
+ 			ret = -EINVAL;
+ 			goto err___drm_gem_fb_end_cpu_access;
+ 		}
++		import_attach = obj->import_attach;
+ 		if (!drm_gem_is_imported(obj))
+ 			continue;
+-		ret = dma_buf_begin_cpu_access(obj->dma_buf, dir);
++		ret = dma_buf_begin_cpu_access(import_attach->dmabuf, dir);
+ 		if (ret)
+ 			goto err___drm_gem_fb_end_cpu_access;
+ 	}
+diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
+index d99dee67353a1f..37333aadd55080 100644
+--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
++++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
+@@ -339,13 +339,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
+ 	int ret = 0;
+ 
+ 	if (drm_gem_is_imported(obj)) {
+-		ret = dma_buf_vmap(obj->dma_buf, map);
+-		if (!ret) {
+-			if (drm_WARN_ON(obj->dev, map->is_iomem)) {
+-				dma_buf_vunmap(obj->dma_buf, map);
+-				return -EIO;
+-			}
+-		}
++		ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
+ 	} else {
+ 		pgprot_t prot = PAGE_KERNEL;
+ 
+@@ -405,7 +399,7 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
+ 	struct drm_gem_object *obj = &shmem->base;
+ 
+ 	if (drm_gem_is_imported(obj)) {
+-		dma_buf_vunmap(obj->dma_buf, map);
++		dma_buf_vunmap(obj->import_attach->dmabuf, map);
+ 	} else {
+ 		dma_resv_assert_held(shmem->base.resv);
+ 
+diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
+index bdb51c8f262e7d..32a8781cfd67b8 100644
+--- a/drivers/gpu/drm/drm_prime.c
++++ b/drivers/gpu/drm/drm_prime.c
+@@ -453,7 +453,13 @@ struct dma_buf *drm_gem_prime_handle_to_dmabuf(struct drm_device *dev,
+ 	}
+ 
+ 	mutex_lock(&dev->object_name_lock);
+-	/* re-export the original imported/exported object */
++	/* re-export the original imported object */
++	if (obj->import_attach) {
++		dmabuf = obj->import_attach->dmabuf;
++		get_dma_buf(dmabuf);
++		goto out_have_obj;
++	}
++
+ 	if (obj->dma_buf) {
+ 		get_dma_buf(obj->dma_buf);
+ 		dmabuf = obj->dma_buf;
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index ad0306283990c5..ba29efae986de2 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -1603,6 +1603,12 @@ int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
+ void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
+ 			   u8 *link_bw, u8 *rate_select)
+ {
++	struct intel_display *display = to_intel_display(intel_dp);
++
++	/* FIXME g4x can't generate an exact 2.7GHz with the 96MHz non-SSC refclk */
++	if (display->platform.g4x && port_clock == 268800)
++		port_clock = 270000;
++
+ 	/* eDP 1.4 rate select method. */
+ 	if (intel_dp->use_rate_select) {
+ 		*link_bw = 0;
+diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
+index e671aa24172068..ac678de7fe5e6e 100644
+--- a/drivers/gpu/drm/scheduler/sched_entity.c
++++ b/drivers/gpu/drm/scheduler/sched_entity.c
+@@ -355,17 +355,6 @@ void drm_sched_entity_destroy(struct drm_sched_entity *entity)
+ }
+ EXPORT_SYMBOL(drm_sched_entity_destroy);
+ 
+-/* drm_sched_entity_clear_dep - callback to clear the entities dependency */
+-static void drm_sched_entity_clear_dep(struct dma_fence *f,
+-				       struct dma_fence_cb *cb)
+-{
+-	struct drm_sched_entity *entity =
+-		container_of(cb, struct drm_sched_entity, cb);
+-
+-	entity->dependency = NULL;
+-	dma_fence_put(f);
+-}
+-
+ /*
+  * drm_sched_entity_wakeup - callback to clear the entity's dependency and
+  * wake up the scheduler
+@@ -376,7 +365,8 @@ static void drm_sched_entity_wakeup(struct dma_fence *f,
+ 	struct drm_sched_entity *entity =
+ 		container_of(cb, struct drm_sched_entity, cb);
+ 
+-	drm_sched_entity_clear_dep(f, cb);
++	entity->dependency = NULL;
++	dma_fence_put(f);
+ 	drm_sched_wakeup(entity->rq->sched);
+ }
+ 
+@@ -429,13 +419,6 @@ static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity)
+ 		fence = dma_fence_get(&s_fence->scheduled);
+ 		dma_fence_put(entity->dependency);
+ 		entity->dependency = fence;
+-		if (!dma_fence_add_callback(fence, &entity->cb,
+-					    drm_sched_entity_clear_dep))
+-			return true;
+-
+-		/* Ignore it when it is already scheduled */
+-		dma_fence_put(fence);
+-		return false;
+ 	}
+ 
+ 	if (!dma_fence_add_callback(entity->dependency, &entity->cb,
+diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
+index 16e20b5ad325f9..c3c9df8ba7bbfc 100644
+--- a/drivers/gpu/drm/xe/xe_lrc.c
++++ b/drivers/gpu/drm/xe/xe_lrc.c
+@@ -39,6 +39,7 @@
+ #define LRC_ENGINE_INSTANCE			GENMASK_ULL(53, 48)
+ 
+ #define LRC_INDIRECT_RING_STATE_SIZE		SZ_4K
++#define LRC_WA_BB_SIZE				SZ_4K
+ 
+ static struct xe_device *
+ lrc_to_xe(struct xe_lrc *lrc)
+@@ -910,7 +911,11 @@ static void xe_lrc_finish(struct xe_lrc *lrc)
+ 	xe_bo_unpin(lrc->bo);
+ 	xe_bo_unlock(lrc->bo);
+ 	xe_bo_put(lrc->bo);
+-	xe_bo_unpin_map_no_vm(lrc->bb_per_ctx_bo);
++}
++
++static size_t wa_bb_offset(struct xe_lrc *lrc)
++{
++	return lrc->bo->size - LRC_WA_BB_SIZE;
+ }
+ 
+ /*
+@@ -943,15 +948,16 @@ static void xe_lrc_finish(struct xe_lrc *lrc)
+ #define CONTEXT_ACTIVE 1ULL
+ static int xe_lrc_setup_utilization(struct xe_lrc *lrc)
+ {
++	const size_t max_size = LRC_WA_BB_SIZE;
+ 	u32 *cmd, *buf = NULL;
+ 
+-	if (lrc->bb_per_ctx_bo->vmap.is_iomem) {
+-		buf = kmalloc(lrc->bb_per_ctx_bo->size, GFP_KERNEL);
++	if (lrc->bo->vmap.is_iomem) {
++		buf = kmalloc(max_size, GFP_KERNEL);
+ 		if (!buf)
+ 			return -ENOMEM;
+ 		cmd = buf;
+ 	} else {
+-		cmd = lrc->bb_per_ctx_bo->vmap.vaddr;
++		cmd = lrc->bo->vmap.vaddr + wa_bb_offset(lrc);
+ 	}
+ 
+ 	*cmd++ = MI_STORE_REGISTER_MEM | MI_SRM_USE_GGTT | MI_SRM_ADD_CS_OFFSET;
+@@ -974,13 +980,14 @@ static int xe_lrc_setup_utilization(struct xe_lrc *lrc)
+ 	*cmd++ = MI_BATCH_BUFFER_END;
+ 
+ 	if (buf) {
+-		xe_map_memcpy_to(gt_to_xe(lrc->gt), &lrc->bb_per_ctx_bo->vmap, 0,
+-				 buf, (cmd - buf) * sizeof(*cmd));
++		xe_map_memcpy_to(gt_to_xe(lrc->gt), &lrc->bo->vmap,
++				 wa_bb_offset(lrc), buf,
++				 (cmd - buf) * sizeof(*cmd));
+ 		kfree(buf);
+ 	}
+ 
+-	xe_lrc_write_ctx_reg(lrc, CTX_BB_PER_CTX_PTR,
+-			     xe_bo_ggtt_addr(lrc->bb_per_ctx_bo) | 1);
++	xe_lrc_write_ctx_reg(lrc, CTX_BB_PER_CTX_PTR, xe_bo_ggtt_addr(lrc->bo) +
++			     wa_bb_offset(lrc) + 1);
+ 
+ 	return 0;
+ }
+@@ -1016,20 +1023,13 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
+ 	 * FIXME: Perma-pinning LRC as we don't yet support moving GGTT address
+ 	 * via VM bind calls.
+ 	 */
+-	lrc->bo = xe_bo_create_pin_map(xe, tile, vm, lrc_size,
++	lrc->bo = xe_bo_create_pin_map(xe, tile, vm,
++				       lrc_size + LRC_WA_BB_SIZE,
+ 				       ttm_bo_type_kernel,
+ 				       bo_flags);
+ 	if (IS_ERR(lrc->bo))
+ 		return PTR_ERR(lrc->bo);
+ 
+-	lrc->bb_per_ctx_bo = xe_bo_create_pin_map(xe, tile, NULL, SZ_4K,
+-						  ttm_bo_type_kernel,
+-						  bo_flags);
+-	if (IS_ERR(lrc->bb_per_ctx_bo)) {
+-		err = PTR_ERR(lrc->bb_per_ctx_bo);
+-		goto err_lrc_finish;
+-	}
+-
+ 	lrc->size = lrc_size;
+ 	lrc->ring.size = ring_size;
+ 	lrc->ring.tail = 0;
+@@ -1819,7 +1819,8 @@ struct xe_lrc_snapshot *xe_lrc_snapshot_capture(struct xe_lrc *lrc)
+ 	snapshot->seqno = xe_lrc_seqno(lrc);
+ 	snapshot->lrc_bo = xe_bo_get(lrc->bo);
+ 	snapshot->lrc_offset = xe_lrc_pphwsp_offset(lrc);
+-	snapshot->lrc_size = lrc->bo->size - snapshot->lrc_offset;
++	snapshot->lrc_size = lrc->bo->size - snapshot->lrc_offset -
++		LRC_WA_BB_SIZE;
+ 	snapshot->lrc_snapshot = NULL;
+ 	snapshot->ctx_timestamp = lower_32_bits(xe_lrc_ctx_timestamp(lrc));
+ 	snapshot->ctx_job_timestamp = xe_lrc_ctx_job_timestamp(lrc);
+diff --git a/drivers/gpu/drm/xe/xe_lrc_types.h b/drivers/gpu/drm/xe/xe_lrc_types.h
+index ae24cf6f8dd998..883e550a94234c 100644
+--- a/drivers/gpu/drm/xe/xe_lrc_types.h
++++ b/drivers/gpu/drm/xe/xe_lrc_types.h
+@@ -53,9 +53,6 @@ struct xe_lrc {
+ 
+ 	/** @ctx_timestamp: readout value of CTX_TIMESTAMP on last update */
+ 	u64 ctx_timestamp;
+-
+-	/** @bb_per_ctx_bo: buffer object for per context batch wa buffer */
+-	struct xe_bo *bb_per_ctx_bo;
+ };
+ 
+ struct xe_lrc_snapshot;
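
The two xe hunks above fold the per-context workaround batch buffer into the tail of the main LRC buffer object: the BO is allocated LRC_WA_BB_SIZE larger, wa_bb_offset() points at the carved-out tail, and CTX_BB_PER_CTX_PTR gets the GGTT address of that tail with the low bit set (the same flag the old `| 1` encoded). A sketch of the address arithmetic, with sizes and base address made up for illustration:

#include <stdint.h>
#include <stdio.h>

#define WA_BB_SIZE 4096u                /* mirrors LRC_WA_BB_SIZE (SZ_4K) */

/* The workaround batch lives in the last WA_BB_SIZE bytes of the BO. */
static uint64_t wa_bb_offset(uint64_t bo_size)
{
        return bo_size - WA_BB_SIZE;
}

int main(void)
{
        uint64_t lrc_size = 16384, bo_size = lrc_size + WA_BB_SIZE;
        uint64_t ggtt_base = 0x100000;  /* hypothetical GGTT address */

        /* low bit doubles as the enable flag in the context register */
        uint64_t reg = ggtt_base + wa_bb_offset(bo_size) + 1;

        printf("CTX_BB_PER_CTX_PTR = 0x%llx\n", (unsigned long long)reg);
        return 0;
}
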
+diff --git a/drivers/i2c/busses/i2c-qup.c b/drivers/i2c/busses/i2c-qup.c
+index 3a36d682ed5726..5b053e51f4c98f 100644
+--- a/drivers/i2c/busses/i2c-qup.c
++++ b/drivers/i2c/busses/i2c-qup.c
+@@ -452,8 +452,10 @@ static int qup_i2c_bus_active(struct qup_i2c_dev *qup, int len)
+ 		if (!(status & I2C_STATUS_BUS_ACTIVE))
+ 			break;
+ 
+-		if (time_after(jiffies, timeout))
++		if (time_after(jiffies, timeout)) {
+ 			ret = -ETIMEDOUT;
++			break;
++		}
+ 
+ 		usleep_range(len, len * 2);
+ 	}
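
The i2c-qup fix above is the classic poll-with-deadline bug: on timeout the loop recorded -ETIMEDOUT but never left, so a stuck bus meant polling forever. The fix breaks out as soon as the deadline passes. The corrected shape, as a userspace sketch with a monotonic clock standing in for jiffies:

#include <errno.h>
#include <stdio.h>
#include <time.h>

static int bus_active(void) { return 1; }       /* pretend the bus is stuck */

static double now(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
}

static int wait_bus_idle(double timeout_s)
{
        double deadline = now() + timeout_s;
        int ret = 0;

        for (;;) {
                if (!bus_active())
                        break;                  /* bus went idle: success */
                if (now() > deadline) {
                        ret = -ETIMEDOUT;
                        break;                  /* the fix: stop polling */
                }
        }
        return ret;
}

int main(void)
{
        printf("ret = %d\n", wait_bus_idle(0.01));
        return 0;
}
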
+diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
+index 049b4d154c2337..687d1e608abcd2 100644
+--- a/drivers/i2c/busses/i2c-tegra.c
++++ b/drivers/i2c/busses/i2c-tegra.c
+@@ -607,7 +607,6 @@ static int tegra_i2c_wait_for_config_load(struct tegra_i2c_dev *i2c_dev)
+ static int tegra_i2c_init(struct tegra_i2c_dev *i2c_dev)
+ {
+ 	u32 val, clk_divisor, clk_multiplier, tsu_thd, tlow, thigh, non_hs_mode;
+-	acpi_handle handle = ACPI_HANDLE(i2c_dev->dev);
+ 	struct i2c_timings *t = &i2c_dev->timings;
+ 	int err;
+ 
+@@ -619,11 +618,7 @@ static int tegra_i2c_init(struct tegra_i2c_dev *i2c_dev)
+ 	 * emit a noisy warning on error, which won't stay unnoticed and
+ 	 * won't hose machine entirely.
+ 	 */
+-	if (handle)
+-		err = acpi_evaluate_object(handle, "_RST", NULL, NULL);
+-	else
+-		err = reset_control_reset(i2c_dev->rst);
+-
++	err = device_reset(i2c_dev->dev);
+ 	WARN_ON_ONCE(err);
+ 
+ 	if (IS_DVC(i2c_dev))
+@@ -1666,19 +1661,6 @@ static void tegra_i2c_parse_dt(struct tegra_i2c_dev *i2c_dev)
+ 		i2c_dev->is_vi = true;
+ }
+ 
+-static int tegra_i2c_init_reset(struct tegra_i2c_dev *i2c_dev)
+-{
+-	if (ACPI_HANDLE(i2c_dev->dev))
+-		return 0;
+-
+-	i2c_dev->rst = devm_reset_control_get_exclusive(i2c_dev->dev, "i2c");
+-	if (IS_ERR(i2c_dev->rst))
+-		return dev_err_probe(i2c_dev->dev, PTR_ERR(i2c_dev->rst),
+-				      "failed to get reset control\n");
+-
+-	return 0;
+-}
+-
+ static int tegra_i2c_init_clocks(struct tegra_i2c_dev *i2c_dev)
+ {
+ 	int err;
+@@ -1788,10 +1770,6 @@ static int tegra_i2c_probe(struct platform_device *pdev)
+ 
+ 	tegra_i2c_parse_dt(i2c_dev);
+ 
+-	err = tegra_i2c_init_reset(i2c_dev);
+-	if (err)
+-		return err;
+-
+ 	err = tegra_i2c_init_clocks(i2c_dev);
+ 	if (err)
+ 		return err;
+diff --git a/drivers/i2c/busses/i2c-virtio.c b/drivers/i2c/busses/i2c-virtio.c
+index 2a351f961b8993..c8c40ff9765da3 100644
+--- a/drivers/i2c/busses/i2c-virtio.c
++++ b/drivers/i2c/busses/i2c-virtio.c
+@@ -116,15 +116,16 @@ static int virtio_i2c_complete_reqs(struct virtqueue *vq,
+ 	for (i = 0; i < num; i++) {
+ 		struct virtio_i2c_req *req = &reqs[i];
+ 
+-		wait_for_completion(&req->completion);
+-
+-		if (!failed && req->in_hdr.status != VIRTIO_I2C_MSG_OK)
+-			failed = true;
++		if (!failed) {
++			if (wait_for_completion_interruptible(&req->completion))
++				failed = true;
++			else if (req->in_hdr.status != VIRTIO_I2C_MSG_OK)
++				failed = true;
++			else
++				j++;
++		}
+ 
+ 		i2c_put_dma_safe_msg_buf(reqs[i].buf, &msgs[i], !failed);
+-
+-		if (!failed)
+-			j++;
+ 	}
+ 
+ 	return j;
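
The virtio-i2c rework above changes the completion loop in two ways: the wait becomes interruptible, and once one request fails (interrupted wait or bad status) the remaining requests are no longer counted as transferred, only cleaned up. A compact model of that stop-counting-on-first-failure accounting:

#include <stdbool.h>
#include <stdio.h>

/* Count leading successes; after the first failure only clean up. */
static int complete_reqs(const int *status, int num)
{
        bool failed = false;
        int j = 0;

        for (int i = 0; i < num; i++) {
                if (!failed) {
                        if (status[i] != 0)     /* bad status or interrupted */
                                failed = true;
                        else
                                j++;            /* request i transferred OK */
                }
                /* per-request cleanup would happen here in either case */
        }
        return j;                               /* msgs handled successfully */
}

int main(void)
{
        int status[] = { 0, 0, -1, 0 };

        printf("handled = %d\n", complete_reqs(status, 4));   /* 2 */
        return 0;
}
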
+diff --git a/drivers/iio/adc/ad7949.c b/drivers/iio/adc/ad7949.c
+index edd0c3a35ab73c..202561cad4012b 100644
+--- a/drivers/iio/adc/ad7949.c
++++ b/drivers/iio/adc/ad7949.c
+@@ -308,7 +308,6 @@ static void ad7949_disable_reg(void *reg)
+ 
+ static int ad7949_spi_probe(struct spi_device *spi)
+ {
+-	u32 spi_ctrl_mask = spi->controller->bits_per_word_mask;
+ 	struct device *dev = &spi->dev;
+ 	const struct ad7949_adc_spec *spec;
+ 	struct ad7949_adc_chip *ad7949_adc;
+@@ -337,11 +336,11 @@ static int ad7949_spi_probe(struct spi_device *spi)
+ 	ad7949_adc->resolution = spec->resolution;
+ 
+ 	/* Set SPI bits per word */
+-	if (spi_ctrl_mask & SPI_BPW_MASK(ad7949_adc->resolution)) {
++	if (spi_is_bpw_supported(spi, ad7949_adc->resolution)) {
+ 		spi->bits_per_word = ad7949_adc->resolution;
+-	} else if (spi_ctrl_mask == SPI_BPW_MASK(16)) {
++	} else if (spi_is_bpw_supported(spi, 16)) {
+ 		spi->bits_per_word = 16;
+-	} else if (spi_ctrl_mask == SPI_BPW_MASK(8)) {
++	} else if (spi_is_bpw_supported(spi, 8)) {
+ 		spi->bits_per_word = 8;
+ 	} else {
+ 		dev_err(dev, "unable to find common BPW with spi controller\n");
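
The ad7949 hunk swaps open-coded tests of bits_per_word_mask for spi_is_bpw_supported(). Note that the old fallback comparisons used ==, which only matched a controller supporting exactly that single width; the helper performs the intended bitwise membership test. A sketch of the preference-ordered probe, where BPW_MASK mirrors the SPI_BPW_MASK convention of bit N-1 meaning N bits per word:

#include <stdint.h>
#include <stdio.h>

#define BPW_MASK(b) (1u << ((b) - 1))   /* bit N-1 <=> N bits per word */

/* Try the ADC's native resolution first, then fall back to 16 or 8. */
static int pick_bpw(uint32_t ctrl_mask, int resolution)
{
        const int prefs[] = { resolution, 16, 8 };

        for (unsigned i = 0; i < sizeof(prefs) / sizeof(prefs[0]); i++)
                if (ctrl_mask & BPW_MASK(prefs[i]))   /* bitwise, not == */
                        return prefs[i];
        return -1;                      /* no common width with controller */
}

int main(void)
{
        uint32_t mask = BPW_MASK(8) | BPW_MASK(16);   /* controller caps */

        printf("chosen bpw: %d\n", pick_bpw(mask, 14));   /* -> 16 */
        return 0;
}
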
+diff --git a/drivers/iio/industrialio-core.c b/drivers/iio/industrialio-core.c
+index b9f4113ae5fc3e..ebf17ea5a5f932 100644
+--- a/drivers/iio/industrialio-core.c
++++ b/drivers/iio/industrialio-core.c
+@@ -410,12 +410,15 @@ static ssize_t iio_debugfs_write_reg(struct file *file,
+ 	char buf[80];
+ 	int ret;
+ 
++	if (count >= sizeof(buf))
++		return -EINVAL;
++
+ 	ret = simple_write_to_buffer(buf, sizeof(buf) - 1, ppos, userbuf,
+ 				     count);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	buf[count] = '\0';
++	buf[ret] = '\0';
+ 
+ 	ret = sscanf(buf, "%i %i", &reg, &val);
+ 
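
The industrialio-core fix above closes two holes in the debugfs write path: a count of sizeof(buf) or more could previously index buf[count] out of bounds, and the terminator was placed at the requested count rather than at the number of bytes actually copied. The same guard in a self-contained form:

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Copy a user write into a fixed buffer and NUL-terminate at the number
 * of bytes actually copied (here memcpy copies exactly `count`, so the
 * two coincide; with simple_write_to_buffer() they may not). */
static int copy_cmd(char *dst, size_t dstsz, const char *src, size_t count)
{
        if (count >= dstsz)             /* must leave room for the NUL */
                return -EINVAL;

        memcpy(dst, src, count);
        dst[count] = '\0';
        return (int)count;
}

int main(void)
{
        char buf[80];

        printf("ret = %d\n", copy_cmd(buf, sizeof(buf), "0x10 0x3", 8));
        printf("buf = \"%s\"\n", buf);
        return 0;
}
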
+diff --git a/drivers/infiniband/core/cache.c b/drivers/infiniband/core/cache.c
+index 9979a351577f17..81cf3c902e8195 100644
+--- a/drivers/infiniband/core/cache.c
++++ b/drivers/infiniband/core/cache.c
+@@ -582,8 +582,8 @@ static int __ib_cache_gid_add(struct ib_device *ib_dev, u32 port,
+ out_unlock:
+ 	mutex_unlock(&table->lock);
+ 	if (ret)
+-		pr_warn("%s: unable to add gid %pI6 error=%d\n",
+-			__func__, gid->raw, ret);
++		pr_warn_ratelimited("%s: unable to add gid %pI6 error=%d\n",
++				    __func__, gid->raw, ret);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/interconnect/icc-clk.c b/drivers/interconnect/icc-clk.c
+index 88f311c1102077..93c030608d3e0a 100644
+--- a/drivers/interconnect/icc-clk.c
++++ b/drivers/interconnect/icc-clk.c
+@@ -117,6 +117,7 @@ struct icc_provider *icc_clk_register(struct device *dev,
+ 
+ 		node->name = devm_kasprintf(dev, GFP_KERNEL, "%s_master", data[i].name);
+ 		if (!node->name) {
++			icc_node_destroy(node->id);
+ 			ret = -ENOMEM;
+ 			goto err;
+ 		}
+@@ -135,6 +136,7 @@ struct icc_provider *icc_clk_register(struct device *dev,
+ 
+ 		node->name = devm_kasprintf(dev, GFP_KERNEL, "%s_slave", data[i].name);
+ 		if (!node->name) {
++			icc_node_destroy(node->id);
+ 			ret = -ENOMEM;
+ 			goto err;
+ 		}
+diff --git a/drivers/interconnect/qcom/sc7280.c b/drivers/interconnect/qcom/sc7280.c
+index 346f18d70e9e5e..905403a3a930a2 100644
+--- a/drivers/interconnect/qcom/sc7280.c
++++ b/drivers/interconnect/qcom/sc7280.c
+@@ -238,6 +238,7 @@ static struct qcom_icc_node xm_pcie3_1 = {
+ 	.id = SC7280_MASTER_PCIE_1,
+ 	.channels = 1,
+ 	.buswidth = 8,
++	.num_links = 1,
+ 	.links = { SC7280_SLAVE_ANOC_PCIE_GEM_NOC },
+ };
+ 
+diff --git a/drivers/net/can/dev/dev.c b/drivers/net/can/dev/dev.c
+index 5ec3170b896a42..3fa805ac2c65be 100644
+--- a/drivers/net/can/dev/dev.c
++++ b/drivers/net/can/dev/dev.c
+@@ -145,13 +145,16 @@ void can_change_state(struct net_device *dev, struct can_frame *cf,
+ EXPORT_SYMBOL_GPL(can_change_state);
+ 
+ /* CAN device restart for bus-off recovery */
+-static void can_restart(struct net_device *dev)
++static int can_restart(struct net_device *dev)
+ {
+ 	struct can_priv *priv = netdev_priv(dev);
+ 	struct sk_buff *skb;
+ 	struct can_frame *cf;
+ 	int err;
+ 
++	if (!priv->do_set_mode)
++		return -EOPNOTSUPP;
++
+ 	if (netif_carrier_ok(dev))
+ 		netdev_err(dev, "Attempt to restart for bus-off recovery, but carrier is OK?\n");
+ 
+@@ -173,10 +176,14 @@ static void can_restart(struct net_device *dev)
+ 	if (err) {
+ 		netdev_err(dev, "Restart failed, error %pe\n", ERR_PTR(err));
+ 		netif_carrier_off(dev);
++
++		return err;
+ 	} else {
+ 		netdev_dbg(dev, "Restarted\n");
+ 		priv->can_stats.restarts++;
+ 	}
++
++	return 0;
+ }
+ 
+ static void can_restart_work(struct work_struct *work)
+@@ -201,9 +208,8 @@ int can_restart_now(struct net_device *dev)
+ 		return -EBUSY;
+ 
+ 	cancel_delayed_work_sync(&priv->restart_work);
+-	can_restart(dev);
+ 
+-	return 0;
++	return can_restart(dev);
+ }
+ 
+ /* CAN bus-off
+diff --git a/drivers/net/can/dev/netlink.c b/drivers/net/can/dev/netlink.c
+index f1db9b7ffd4d0e..d5aa8da87961eb 100644
+--- a/drivers/net/can/dev/netlink.c
++++ b/drivers/net/can/dev/netlink.c
+@@ -285,6 +285,12 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
+ 	}
+ 
+ 	if (data[IFLA_CAN_RESTART_MS]) {
++		if (!priv->do_set_mode) {
++			NL_SET_ERR_MSG(extack,
++				       "Device doesn't support restart from Bus Off");
++			return -EOPNOTSUPP;
++		}
++
+ 		/* Do not allow changing restart delay while running */
+ 		if (dev->flags & IFF_UP)
+ 			return -EBUSY;
+@@ -292,6 +298,12 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
+ 	}
+ 
+ 	if (data[IFLA_CAN_RESTART]) {
++		if (!priv->do_set_mode) {
++			NL_SET_ERR_MSG(extack,
++				       "Device doesn't support restart from Bus Off");
++			return -EOPNOTSUPP;
++		}
++
+ 		/* Do not allow a restart while not running */
+ 		if (!(dev->flags & IFF_UP))
+ 			return -EINVAL;
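
Both CAN hunks above guard the restart paths on priv->do_set_mode: a device that cannot leave bus-off gets a clean -EOPNOTSUPP (with an extack message on the netlink side) instead of a restart attempt that cannot complete. The optional-callback guard in miniature:

#include <errno.h>
#include <stdio.h>

struct can_ops {
        int (*set_mode)(int mode);      /* optional; NULL if unsupported */
};

static int restart(const struct can_ops *ops)
{
        if (!ops->set_mode)             /* reject before doing any work */
                return -EOPNOTSUPP;
        return ops->set_mode(1);
}

int main(void)
{
        struct can_ops no_restart = { 0 };

        printf("ret = %d\n", restart(&no_restart));   /* -EOPNOTSUPP */
        return 0;
}
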
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index efd0048acd3b2d..c744e10e640339 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -4655,12 +4655,19 @@ static int dpaa2_eth_connect_mac(struct dpaa2_eth_priv *priv)
+ 		return PTR_ERR(dpmac_dev);
+ 	}
+ 
+-	if (IS_ERR(dpmac_dev) || dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type)
++	if (IS_ERR(dpmac_dev))
+ 		return 0;
+ 
++	if (dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type) {
++		err = 0;
++		goto out_put_device;
++	}
++
+ 	mac = kzalloc(sizeof(struct dpaa2_mac), GFP_KERNEL);
+-	if (!mac)
+-		return -ENOMEM;
++	if (!mac) {
++		err = -ENOMEM;
++		goto out_put_device;
++	}
+ 
+ 	mac->mc_dev = dpmac_dev;
+ 	mac->mc_io = priv->mc_io;
+@@ -4694,6 +4701,8 @@ static int dpaa2_eth_connect_mac(struct dpaa2_eth_priv *priv)
+ 	dpaa2_mac_close(mac);
+ err_free_mac:
+ 	kfree(mac);
++out_put_device:
++	put_device(&dpmac_dev->dev);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
+index 147a93bf9fa913..4643a338061820 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
+@@ -1448,12 +1448,19 @@ static int dpaa2_switch_port_connect_mac(struct ethsw_port_priv *port_priv)
+ 	if (PTR_ERR(dpmac_dev) == -EPROBE_DEFER)
+ 		return PTR_ERR(dpmac_dev);
+ 
+-	if (IS_ERR(dpmac_dev) || dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type)
++	if (IS_ERR(dpmac_dev))
+ 		return 0;
+ 
++	if (dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type) {
++		err = 0;
++		goto out_put_device;
++	}
++
+ 	mac = kzalloc(sizeof(*mac), GFP_KERNEL);
+-	if (!mac)
+-		return -ENOMEM;
++	if (!mac) {
++		err = -ENOMEM;
++		goto out_put_device;
++	}
+ 
+ 	mac->mc_dev = dpmac_dev;
+ 	mac->mc_io = port_priv->ethsw_data->mc_io;
+@@ -1483,6 +1490,8 @@ static int dpaa2_switch_port_connect_mac(struct ethsw_port_priv *port_priv)
+ 	dpaa2_mac_close(mac);
+ err_free_mac:
+ 	kfree(mac);
++out_put_device:
++	put_device(&dpmac_dev->dev);
+ 	return err;
+ }
+ 
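
The two dpaa2 hunks fix the same leak: the endpoint lookup returns a device with its reference count raised, and both the early "not a dpmac" return and the allocation-failure path forgot to drop it. The usual goto-unwind shape, where only the success path keeps the reference:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct ref_obj { int refs; };

static void get_obj(struct ref_obj *o) { o->refs++; }
static void put_obj(struct ref_obj *o) { o->refs--; }

static int connect(struct ref_obj *dev, int is_right_type)
{
        int err;
        void *mac;

        get_obj(dev);                   /* lookup handed us a reference */

        if (!is_right_type) {
                err = 0;                /* not ours: succeed, but drop ref */
                goto out_put;
        }
        mac = malloc(64);
        if (!mac) {
                err = -ENOMEM;          /* failure: drop the ref too */
                goto out_put;
        }
        free(mac);
        return 0;                       /* success keeps the reference;
                                           disconnect drops it later */
out_put:
        put_obj(dev);
        return err;
}

int main(void)
{
        struct ref_obj dev = { 0 };

        connect(&dev, 0);
        printf("refs after mismatch: %d\n", dev.refs);   /* 0: no leak */
        return 0;
}
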
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index d561d45021a581..98f5859b99f81f 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -1916,49 +1916,56 @@ static void gve_turnup_and_check_status(struct gve_priv *priv)
+ 	gve_handle_link_status(priv, GVE_DEVICE_STATUS_LINK_STATUS_MASK & status);
+ }
+ 
+-static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
++static struct gve_notify_block *gve_get_tx_notify_block(struct gve_priv *priv,
++							unsigned int txqueue)
+ {
+-	struct gve_notify_block *block;
+-	struct gve_tx_ring *tx = NULL;
+-	struct gve_priv *priv;
+-	u32 last_nic_done;
+-	u32 current_time;
+ 	u32 ntfy_idx;
+ 
+-	netdev_info(dev, "Timeout on tx queue, %d", txqueue);
+-	priv = netdev_priv(dev);
+ 	if (txqueue > priv->tx_cfg.num_queues)
+-		goto reset;
++		return NULL;
+ 
+ 	ntfy_idx = gve_tx_idx_to_ntfy(priv, txqueue);
+ 	if (ntfy_idx >= priv->num_ntfy_blks)
+-		goto reset;
++		return NULL;
++
++	return &priv->ntfy_blocks[ntfy_idx];
++}
++
++static bool gve_tx_timeout_try_q_kick(struct gve_priv *priv,
++				      unsigned int txqueue)
++{
++	struct gve_notify_block *block;
++	u32 current_time;
+ 
+-	block = &priv->ntfy_blocks[ntfy_idx];
+-	tx = block->tx;
++	block = gve_get_tx_notify_block(priv, txqueue);
++
++	if (!block)
++		return false;
+ 
+ 	current_time = jiffies_to_msecs(jiffies);
+-	if (tx->last_kick_msec + MIN_TX_TIMEOUT_GAP > current_time)
+-		goto reset;
++	if (block->tx->last_kick_msec + MIN_TX_TIMEOUT_GAP > current_time)
++		return false;
+ 
+-	/* Check to see if there are missed completions, which will allow us to
+-	 * kick the queue.
+-	 */
+-	last_nic_done = gve_tx_load_event_counter(priv, tx);
+-	if (last_nic_done - tx->done) {
+-		netdev_info(dev, "Kicking queue %d", txqueue);
+-		iowrite32be(GVE_IRQ_MASK, gve_irq_doorbell(priv, block));
+-		napi_schedule(&block->napi);
+-		tx->last_kick_msec = current_time;
+-		goto out;
+-	} // Else reset.
++	netdev_info(priv->dev, "Kicking queue %d", txqueue);
++	napi_schedule(&block->napi);
++	block->tx->last_kick_msec = current_time;
++	return true;
++}
+ 
+-reset:
+-	gve_schedule_reset(priv);
++static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
++{
++	struct gve_notify_block *block;
++	struct gve_priv *priv;
+ 
+-out:
+-	if (tx)
+-		tx->queue_timeout++;
++	netdev_info(dev, "Timeout on tx queue, %d", txqueue);
++	priv = netdev_priv(dev);
++
++	if (!gve_tx_timeout_try_q_kick(priv, txqueue))
++		gve_schedule_reset(priv);
++
++	block = gve_get_tx_notify_block(priv, txqueue);
++	if (block)
++		block->tx->queue_timeout++;
+ 	priv->tx_timeo_cnt++;
+ }
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index b03b8758c7774e..aaa803563bd2eb 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -11,6 +11,7 @@
+ #include <linux/irq.h>
+ #include <linux/ip.h>
+ #include <linux/ipv6.h>
++#include <linux/iommu.h>
+ #include <linux/module.h>
+ #include <linux/pci.h>
+ #include <linux/skbuff.h>
+@@ -1039,6 +1040,8 @@ static bool hns3_can_use_tx_sgl(struct hns3_enet_ring *ring,
+ static void hns3_init_tx_spare_buffer(struct hns3_enet_ring *ring)
+ {
+ 	u32 alloc_size = ring->tqp->handle->kinfo.tx_spare_buf_size;
++	struct net_device *netdev = ring_to_netdev(ring);
++	struct hns3_nic_priv *priv = netdev_priv(netdev);
+ 	struct hns3_tx_spare *tx_spare;
+ 	struct page *page;
+ 	dma_addr_t dma;
+@@ -1080,6 +1083,7 @@ static void hns3_init_tx_spare_buffer(struct hns3_enet_ring *ring)
+ 	tx_spare->buf = page_address(page);
+ 	tx_spare->len = PAGE_SIZE << order;
+ 	ring->tx_spare = tx_spare;
++	ring->tx_copybreak = priv->tx_copybreak;
+ 	return;
+ 
+ dma_mapping_error:
+@@ -4874,6 +4878,30 @@ static void hns3_nic_dealloc_vector_data(struct hns3_nic_priv *priv)
+ 	devm_kfree(&pdev->dev, priv->tqp_vector);
+ }
+ 
++static void hns3_update_tx_spare_buf_config(struct hns3_nic_priv *priv)
++{
++#define HNS3_MIN_SPARE_BUF_SIZE (2 * 1024 * 1024)
++#define HNS3_MAX_PACKET_SIZE (64 * 1024)
++
++	struct iommu_domain *domain = iommu_get_domain_for_dev(priv->dev);
++	struct hnae3_ae_dev *ae_dev = hns3_get_ae_dev(priv->ae_handle);
++	struct hnae3_handle *handle = priv->ae_handle;
++
++	if (ae_dev->dev_version < HNAE3_DEVICE_VERSION_V3)
++		return;
++
++	if (!(domain && iommu_is_dma_domain(domain)))
++		return;
++
++	priv->min_tx_copybreak = HNS3_MAX_PACKET_SIZE;
++	priv->min_tx_spare_buf_size = HNS3_MIN_SPARE_BUF_SIZE;
++
++	if (priv->tx_copybreak < priv->min_tx_copybreak)
++		priv->tx_copybreak = priv->min_tx_copybreak;
++	if (handle->kinfo.tx_spare_buf_size < priv->min_tx_spare_buf_size)
++		handle->kinfo.tx_spare_buf_size = priv->min_tx_spare_buf_size;
++}
++
+ static void hns3_ring_get_cfg(struct hnae3_queue *q, struct hns3_nic_priv *priv,
+ 			      unsigned int ring_type)
+ {
+@@ -5107,6 +5135,7 @@ int hns3_init_all_ring(struct hns3_nic_priv *priv)
+ 	int i, j;
+ 	int ret;
+ 
++	hns3_update_tx_spare_buf_config(priv);
+ 	for (i = 0; i < ring_num; i++) {
+ 		ret = hns3_alloc_ring_memory(&priv->ring[i]);
+ 		if (ret) {
+@@ -5311,6 +5340,8 @@ static int hns3_client_init(struct hnae3_handle *handle)
+ 	priv->ae_handle = handle;
+ 	priv->tx_timeout_count = 0;
+ 	priv->max_non_tso_bd_num = ae_dev->dev_specs.max_non_tso_bd_num;
++	priv->min_tx_copybreak = 0;
++	priv->min_tx_spare_buf_size = 0;
+ 	set_bit(HNS3_NIC_STATE_DOWN, &priv->state);
+ 
+ 	handle->msg_enable = netif_msg_init(debug, DEFAULT_MSG_LEVEL);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+index d36c4ed16d8dd2..caf7a4df858527 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+@@ -596,6 +596,8 @@ struct hns3_nic_priv {
+ 	struct hns3_enet_coalesce rx_coal;
+ 	u32 tx_copybreak;
+ 	u32 rx_copybreak;
++	u32 min_tx_copybreak;
++	u32 min_tx_spare_buf_size;
+ };
+ 
+ union l3_hdr_info {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 3e28a08934abd2..4ea19c089578e2 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -9576,33 +9576,36 @@ static bool hclge_need_enable_vport_vlan_filter(struct hclge_vport *vport)
+ 	return false;
+ }
+ 
+-int hclge_enable_vport_vlan_filter(struct hclge_vport *vport, bool request_en)
++static int __hclge_enable_vport_vlan_filter(struct hclge_vport *vport,
++					    bool request_en)
+ {
+-	struct hclge_dev *hdev = vport->back;
+ 	bool need_en;
+ 	int ret;
+ 
+-	mutex_lock(&hdev->vport_lock);
+-
+-	vport->req_vlan_fltr_en = request_en;
+-
+ 	need_en = hclge_need_enable_vport_vlan_filter(vport);
+-	if (need_en == vport->cur_vlan_fltr_en) {
+-		mutex_unlock(&hdev->vport_lock);
++	if (need_en == vport->cur_vlan_fltr_en)
+ 		return 0;
+-	}
+ 
+ 	ret = hclge_set_vport_vlan_filter(vport, need_en);
+-	if (ret) {
+-		mutex_unlock(&hdev->vport_lock);
++	if (ret)
+ 		return ret;
+-	}
+ 
+ 	vport->cur_vlan_fltr_en = need_en;
+ 
++	return 0;
++}
++
++int hclge_enable_vport_vlan_filter(struct hclge_vport *vport, bool request_en)
++{
++	struct hclge_dev *hdev = vport->back;
++	int ret;
++
++	mutex_lock(&hdev->vport_lock);
++	vport->req_vlan_fltr_en = request_en;
++	ret = __hclge_enable_vport_vlan_filter(vport, request_en);
+ 	mutex_unlock(&hdev->vport_lock);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int hclge_enable_vlan_filter(struct hnae3_handle *handle, bool enable)
+@@ -10623,16 +10626,19 @@ static void hclge_sync_vlan_fltr_state(struct hclge_dev *hdev)
+ 					&vport->state))
+ 			continue;
+ 
+-		ret = hclge_enable_vport_vlan_filter(vport,
+-						     vport->req_vlan_fltr_en);
++		mutex_lock(&hdev->vport_lock);
++		ret = __hclge_enable_vport_vlan_filter(vport,
++						       vport->req_vlan_fltr_en);
+ 		if (ret) {
+ 			dev_err(&hdev->pdev->dev,
+ 				"failed to sync vlan filter state for vport%u, ret = %d\n",
+ 				vport->vport_id, ret);
+ 			set_bit(HCLGE_VPORT_STATE_VLAN_FLTR_CHANGE,
+ 				&vport->state);
++			mutex_unlock(&hdev->vport_lock);
+ 			return;
+ 		}
++		mutex_unlock(&hdev->vport_lock);
+ 	}
+ }
+ 
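
The hclge_main refactor above is a standard lock split: the logic moves into __hclge_enable_vport_vlan_filter(), which assumes vport_lock is already held, so hclge_sync_vlan_fltr_state() can take the lock once around the whole check-and-update instead of re-taking it inside the callee. The shape, with a pthread mutex standing in for the kernel mutex and the kernel's double-underscore "caller holds the lock" naming convention:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int cur_state;

/* Double-underscore variant: caller must already hold `lock`. */
static int __set_filter(int want)
{
        if (want == cur_state)
                return 0;               /* nothing to push to hardware */
        cur_state = want;               /* stands in for the HW update */
        return 0;
}

/* Public variant: takes the lock, then delegates. */
static int set_filter(int want)
{
        int ret;

        pthread_mutex_lock(&lock);
        ret = __set_filter(want);
        pthread_mutex_unlock(&lock);
        return ret;
}

int main(void)
{
        printf("ret = %d, state = %d\n", set_filter(1), cur_state);
        return 0;
}
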
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
+index ec581d4b696f59..4bd52eab391452 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
+@@ -497,14 +497,14 @@ int hclge_ptp_init(struct hclge_dev *hdev)
+ 	if (ret) {
+ 		dev_err(&hdev->pdev->dev,
+ 			"failed to init freq, ret = %d\n", ret);
+-		goto out;
++		goto out_clear_int;
+ 	}
+ 
+ 	ret = hclge_ptp_set_ts_mode(hdev, &hdev->ptp->ts_cfg);
+ 	if (ret) {
+ 		dev_err(&hdev->pdev->dev,
+ 			"failed to init ts mode, ret = %d\n", ret);
+-		goto out;
++		goto out_clear_int;
+ 	}
+ 
+ 	ktime_get_real_ts64(&ts);
+@@ -512,7 +512,7 @@ int hclge_ptp_init(struct hclge_dev *hdev)
+ 	if (ret) {
+ 		dev_err(&hdev->pdev->dev,
+ 			"failed to init ts time, ret = %d\n", ret);
+-		goto out;
++		goto out_clear_int;
+ 	}
+ 
+ 	set_bit(HCLGE_STATE_PTP_EN, &hdev->state);
+@@ -520,6 +520,9 @@ int hclge_ptp_init(struct hclge_dev *hdev)
+ 
+ 	return 0;
+ 
++out_clear_int:
++	clear_bit(HCLGE_PTP_FLAG_EN, &hdev->ptp->flags);
++	hclge_ptp_int_en(hdev, false);
+ out:
+ 	hclge_ptp_destroy_clock(hdev);
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index dada42e7e0ec96..27d10aeafb2b18 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -3094,11 +3094,7 @@ static void hclgevf_uninit_ae_dev(struct hnae3_ae_dev *ae_dev)
+ 
+ static u32 hclgevf_get_max_channels(struct hclgevf_dev *hdev)
+ {
+-	struct hnae3_handle *nic = &hdev->nic;
+-	struct hnae3_knic_private_info *kinfo = &nic->kinfo;
+-
+-	return min_t(u32, hdev->rss_size_max,
+-		     hdev->num_tqps / kinfo->tc_info.num_tc);
++	return min(hdev->rss_size_max, hdev->num_tqps);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/e1000e/defines.h b/drivers/net/ethernet/intel/e1000e/defines.h
+index 8294a7c4f122c3..ba331899d1861b 100644
+--- a/drivers/net/ethernet/intel/e1000e/defines.h
++++ b/drivers/net/ethernet/intel/e1000e/defines.h
+@@ -638,6 +638,9 @@
+ /* For checksumming, the sum of all words in the NVM should equal 0xBABA. */
+ #define NVM_SUM                    0xBABA
+ 
++/* Uninitialized ("empty") checksum word value */
++#define NVM_CHECKSUM_UNINITIALIZED 0xFFFF
++
+ /* PBA (printed board assembly) number words */
+ #define NVM_PBA_OFFSET_0           8
+ #define NVM_PBA_OFFSET_1           9
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index 364378133526a1..df4e7d781cb1cb 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -4274,6 +4274,8 @@ static s32 e1000_validate_nvm_checksum_ich8lan(struct e1000_hw *hw)
+ 			ret_val = e1000e_update_nvm_checksum(hw);
+ 			if (ret_val)
+ 				return ret_val;
++		} else if (hw->mac.type == e1000_pch_tgp) {
++			return 0;
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/e1000e/nvm.c b/drivers/net/ethernet/intel/e1000e/nvm.c
+index e609f4df86f455..16369e6d245a4a 100644
+--- a/drivers/net/ethernet/intel/e1000e/nvm.c
++++ b/drivers/net/ethernet/intel/e1000e/nvm.c
+@@ -558,6 +558,12 @@ s32 e1000e_validate_nvm_checksum_generic(struct e1000_hw *hw)
+ 		checksum += nvm_data;
+ 	}
+ 
++	if (hw->mac.type == e1000_pch_tgp &&
++	    nvm_data == NVM_CHECKSUM_UNINITIALIZED) {
++		e_dbg("Uninitialized NVM Checksum on TGP platform - ignoring\n");
++		return 0;
++	}
++
+ 	if (checksum != (u16)NVM_SUM) {
+ 		e_dbg("NVM Checksum Invalid\n");
+ 		return -E1000_ERR_NVM;
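
The e1000e pair of hunks special-cases pch_tgp (Tiger Lake) parts whose NVM ships with an uninitialized, all-ones checksum word: validation is skipped rather than rewriting the NVM. The underlying invariant is unchanged — all NVM words, the stored checksum word included, must sum to 0xBABA. A tiny validator of that rule:

#include <stdint.h>
#include <stdio.h>

#define NVM_SUM                    0xBABA
#define NVM_CHECKSUM_UNINITIALIZED 0xFFFF

/* All words, the stored checksum included, must sum to NVM_SUM. */
static int validate_nvm(const uint16_t *words, unsigned n, int is_tgp)
{
        uint16_t sum = 0;

        for (unsigned i = 0; i < n; i++)
                sum += words[i];

        if (is_tgp && words[n - 1] == NVM_CHECKSUM_UNINITIALIZED)
                return 0;               /* TGP: tolerate a blank checksum */

        return sum == (uint16_t)NVM_SUM ? 0 : -1;
}

int main(void)
{
        /* last word chosen so the running sum lands on 0xBABA */
        uint16_t nvm[] = { 0x1111, 0x2222, 0xBABA - 0x1111 - 0x2222 };

        printf("valid = %d\n", validate_nvm(nvm, 3, 0));   /* 0: ok */
        return 0;
}
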
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 88e6bef69342c2..7ccfc1191ae56f 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -3137,10 +3137,10 @@ static int i40e_vc_del_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
+ 		const u8 *addr = al->list[i].addr;
+ 
+ 		/* Allow to delete VF primary MAC only if it was not set
+-		 * administratively by PF or if VF is trusted.
++		 * administratively by PF.
+ 		 */
+ 		if (ether_addr_equal(addr, vf->default_lan_addr.addr)) {
+-			if (i40e_can_vf_change_mac(vf))
++			if (!vf->pf_set_mac)
+ 				was_unimac_deleted = true;
+ 			else
+ 				continue;
+@@ -5006,7 +5006,7 @@ int i40e_get_vf_stats(struct net_device *netdev, int vf_id,
+ 	vf_stats->broadcast  = stats->rx_broadcast;
+ 	vf_stats->multicast  = stats->rx_multicast;
+ 	vf_stats->rx_dropped = stats->rx_discards + stats->rx_discards_other;
+-	vf_stats->tx_dropped = stats->tx_discards;
++	vf_stats->tx_dropped = stats->tx_errors;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.c b/drivers/net/ethernet/intel/ice/ice_ddp.c
+index 59323c019544fc..351824dc3c6245 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ddp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ddp.c
+@@ -2301,6 +2301,8 @@ enum ice_ddp_state ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf,
+ 		return ICE_DDP_PKG_ERR;
+ 
+ 	buf_copy = devm_kmemdup(ice_hw_to_dev(hw), buf, len, GFP_KERNEL);
++	if (!buf_copy)
++		return ICE_DDP_PKG_ERR;
+ 
+ 	state = ice_init_pkg(hw, buf_copy, len);
+ 	if (!ice_is_init_pkg_successful(state)) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index e53dbdc0a7a17e..34256ce5473ba0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -1948,8 +1948,8 @@ static int cmd_exec(struct mlx5_core_dev *dev, void *in, int in_size, void *out,
+ 
+ 	err = mlx5_cmd_invoke(dev, inb, outb, out, out_size, callback, context,
+ 			      pages_queue, token, force_polling);
+-	if (callback)
+-		return err;
++	if (callback && !err)
++		return 0;
+ 
+ 	if (err > 0) /* Failed in FW, command didn't execute */
+ 		err = deliv_status_to_err(err);
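
The mlx5 cmd.c one-liner tightens the asynchronous path: previously any callback submission returned err directly, skipping the FW-status translation; now only a successful async submission returns early, while a failed one falls through to the same error mapping the synchronous path uses. In outline (the translation stub is a placeholder, not the real mapping):

#include <errno.h>
#include <stdio.h>

static int deliv_status_to_err(int status) { (void)status; return -EIO; }

/* err > 0 means the command reached FW but failed there. */
static int cmd_exec(int async, int err)
{
        if (async && !err)
                return 0;       /* completion will arrive via the callback */

        if (err > 0)            /* translate FW delivery status */
                err = deliv_status_to_err(err);
        return err;
}

int main(void)
{
        printf("async ok:   %d\n", cmd_exec(1, 0));
        printf("async fail: %d\n", cmd_exec(1, 3));   /* now mapped: -EIO */
        return 0;
}
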
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 0e3a977d533298..bee906661282aa 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -1182,19 +1182,19 @@ static void esw_set_peer_miss_rule_source_port(struct mlx5_eswitch *esw,
+ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
+ 				       struct mlx5_core_dev *peer_dev)
+ {
++	struct mlx5_eswitch *peer_esw = peer_dev->priv.eswitch;
+ 	struct mlx5_flow_destination dest = {};
+ 	struct mlx5_flow_act flow_act = {0};
+ 	struct mlx5_flow_handle **flows;
+-	/* total vports is the same for both e-switches */
+-	int nvports = esw->total_vports;
+ 	struct mlx5_flow_handle *flow;
++	struct mlx5_vport *peer_vport;
+ 	struct mlx5_flow_spec *spec;
+-	struct mlx5_vport *vport;
+ 	int err, pfindex;
+ 	unsigned long i;
+ 	void *misc;
+ 
+-	if (!MLX5_VPORT_MANAGER(esw->dev) && !mlx5_core_is_ecpf_esw_manager(esw->dev))
++	if (!MLX5_VPORT_MANAGER(peer_dev) &&
++	    !mlx5_core_is_ecpf_esw_manager(peer_dev))
+ 		return 0;
+ 
+ 	spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
+@@ -1203,7 +1203,7 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
+ 
+ 	peer_miss_rules_setup(esw, peer_dev, spec, &dest);
+ 
+-	flows = kvcalloc(nvports, sizeof(*flows), GFP_KERNEL);
++	flows = kvcalloc(peer_esw->total_vports, sizeof(*flows), GFP_KERNEL);
+ 	if (!flows) {
+ 		err = -ENOMEM;
+ 		goto alloc_flows_err;
+@@ -1213,10 +1213,10 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
+ 	misc = MLX5_ADDR_OF(fte_match_param, spec->match_value,
+ 			    misc_parameters);
+ 
+-	if (mlx5_core_is_ecpf_esw_manager(esw->dev)) {
+-		vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_PF);
+-		esw_set_peer_miss_rule_source_port(esw, peer_dev->priv.eswitch,
+-						   spec, MLX5_VPORT_PF);
++	if (mlx5_core_is_ecpf_esw_manager(peer_dev)) {
++		peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_PF);
++		esw_set_peer_miss_rule_source_port(esw, peer_esw, spec,
++						   MLX5_VPORT_PF);
+ 
+ 		flow = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(esw),
+ 					   spec, &flow_act, &dest, 1);
+@@ -1224,11 +1224,11 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
+ 			err = PTR_ERR(flow);
+ 			goto add_pf_flow_err;
+ 		}
+-		flows[vport->index] = flow;
++		flows[peer_vport->index] = flow;
+ 	}
+ 
+-	if (mlx5_ecpf_vport_exists(esw->dev)) {
+-		vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_ECPF);
++	if (mlx5_ecpf_vport_exists(peer_dev)) {
++		peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_ECPF);
+ 		MLX5_SET(fte_match_set_misc, misc, source_port, MLX5_VPORT_ECPF);
+ 		flow = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(esw),
+ 					   spec, &flow_act, &dest, 1);
+@@ -1236,13 +1236,14 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
+ 			err = PTR_ERR(flow);
+ 			goto add_ecpf_flow_err;
+ 		}
+-		flows[vport->index] = flow;
++		flows[peer_vport->index] = flow;
+ 	}
+ 
+-	mlx5_esw_for_each_vf_vport(esw, i, vport, mlx5_core_max_vfs(esw->dev)) {
++	mlx5_esw_for_each_vf_vport(peer_esw, i, peer_vport,
++				   mlx5_core_max_vfs(peer_dev)) {
+ 		esw_set_peer_miss_rule_source_port(esw,
+-						   peer_dev->priv.eswitch,
+-						   spec, vport->vport);
++						   peer_esw,
++						   spec, peer_vport->vport);
+ 
+ 		flow = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(esw),
+ 					   spec, &flow_act, &dest, 1);
+@@ -1250,22 +1251,22 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
+ 			err = PTR_ERR(flow);
+ 			goto add_vf_flow_err;
+ 		}
+-		flows[vport->index] = flow;
++		flows[peer_vport->index] = flow;
+ 	}
+ 
+-	if (mlx5_core_ec_sriov_enabled(esw->dev)) {
+-		mlx5_esw_for_each_ec_vf_vport(esw, i, vport, mlx5_core_max_ec_vfs(esw->dev)) {
+-			if (i >= mlx5_core_max_ec_vfs(peer_dev))
+-				break;
+-			esw_set_peer_miss_rule_source_port(esw, peer_dev->priv.eswitch,
+-							   spec, vport->vport);
++	if (mlx5_core_ec_sriov_enabled(peer_dev)) {
++		mlx5_esw_for_each_ec_vf_vport(peer_esw, i, peer_vport,
++					      mlx5_core_max_ec_vfs(peer_dev)) {
++			esw_set_peer_miss_rule_source_port(esw, peer_esw,
++							   spec,
++							   peer_vport->vport);
+ 			flow = mlx5_add_flow_rules(esw->fdb_table.offloads.slow_fdb,
+ 						   spec, &flow_act, &dest, 1);
+ 			if (IS_ERR(flow)) {
+ 				err = PTR_ERR(flow);
+ 				goto add_ec_vf_flow_err;
+ 			}
+-			flows[vport->index] = flow;
++			flows[peer_vport->index] = flow;
+ 		}
+ 	}
+ 
+@@ -1282,25 +1283,27 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
+ 	return 0;
+ 
+ add_ec_vf_flow_err:
+-	mlx5_esw_for_each_ec_vf_vport(esw, i, vport, mlx5_core_max_ec_vfs(esw->dev)) {
+-		if (!flows[vport->index])
++	mlx5_esw_for_each_ec_vf_vport(peer_esw, i, peer_vport,
++				      mlx5_core_max_ec_vfs(peer_dev)) {
++		if (!flows[peer_vport->index])
+ 			continue;
+-		mlx5_del_flow_rules(flows[vport->index]);
++		mlx5_del_flow_rules(flows[peer_vport->index]);
+ 	}
+ add_vf_flow_err:
+-	mlx5_esw_for_each_vf_vport(esw, i, vport, mlx5_core_max_vfs(esw->dev)) {
+-		if (!flows[vport->index])
++	mlx5_esw_for_each_vf_vport(peer_esw, i, peer_vport,
++				   mlx5_core_max_vfs(peer_dev)) {
++		if (!flows[peer_vport->index])
+ 			continue;
+-		mlx5_del_flow_rules(flows[vport->index]);
++		mlx5_del_flow_rules(flows[peer_vport->index]);
+ 	}
+-	if (mlx5_ecpf_vport_exists(esw->dev)) {
+-		vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_ECPF);
+-		mlx5_del_flow_rules(flows[vport->index]);
++	if (mlx5_ecpf_vport_exists(peer_dev)) {
++		peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_ECPF);
++		mlx5_del_flow_rules(flows[peer_vport->index]);
+ 	}
+ add_ecpf_flow_err:
+-	if (mlx5_core_is_ecpf_esw_manager(esw->dev)) {
+-		vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_PF);
+-		mlx5_del_flow_rules(flows[vport->index]);
++	if (mlx5_core_is_ecpf_esw_manager(peer_dev)) {
++		peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_PF);
++		mlx5_del_flow_rules(flows[peer_vport->index]);
+ 	}
+ add_pf_flow_err:
+ 	esw_warn(esw->dev, "FDB: Failed to add peer miss flow rule err %d\n", err);
+@@ -1313,37 +1316,34 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
+ static void esw_del_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
+ 					struct mlx5_core_dev *peer_dev)
+ {
++	struct mlx5_eswitch *peer_esw = peer_dev->priv.eswitch;
+ 	u16 peer_index = mlx5_get_dev_index(peer_dev);
+ 	struct mlx5_flow_handle **flows;
+-	struct mlx5_vport *vport;
++	struct mlx5_vport *peer_vport;
+ 	unsigned long i;
+ 
+ 	flows = esw->fdb_table.offloads.peer_miss_rules[peer_index];
+ 	if (!flows)
+ 		return;
+ 
+-	if (mlx5_core_ec_sriov_enabled(esw->dev)) {
+-		mlx5_esw_for_each_ec_vf_vport(esw, i, vport, mlx5_core_max_ec_vfs(esw->dev)) {
+-			/* The flow for a particular vport could be NULL if the other ECPF
+-			 * has fewer or no VFs enabled
+-			 */
+-			if (!flows[vport->index])
+-				continue;
+-			mlx5_del_flow_rules(flows[vport->index]);
+-		}
++	if (mlx5_core_ec_sriov_enabled(peer_dev)) {
++		mlx5_esw_for_each_ec_vf_vport(peer_esw, i, peer_vport,
++					      mlx5_core_max_ec_vfs(peer_dev))
++			mlx5_del_flow_rules(flows[peer_vport->index]);
+ 	}
+ 
+-	mlx5_esw_for_each_vf_vport(esw, i, vport, mlx5_core_max_vfs(esw->dev))
+-		mlx5_del_flow_rules(flows[vport->index]);
++	mlx5_esw_for_each_vf_vport(peer_esw, i, peer_vport,
++				   mlx5_core_max_vfs(peer_dev))
++		mlx5_del_flow_rules(flows[peer_vport->index]);
+ 
+-	if (mlx5_ecpf_vport_exists(esw->dev)) {
+-		vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_ECPF);
+-		mlx5_del_flow_rules(flows[vport->index]);
++	if (mlx5_ecpf_vport_exists(peer_dev)) {
++		peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_ECPF);
++		mlx5_del_flow_rules(flows[peer_vport->index]);
+ 	}
+ 
+-	if (mlx5_core_is_ecpf_esw_manager(esw->dev)) {
+-		vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_PF);
+-		mlx5_del_flow_rules(flows[vport->index]);
++	if (mlx5_core_is_ecpf_esw_manager(peer_dev)) {
++		peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_PF);
++		mlx5_del_flow_rules(flows[peer_vport->index]);
+ 	}
+ 
+ 	kvfree(flows);
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_config.c b/drivers/net/ethernet/ti/icssg/icssg_config.c
+index ddfd1c02a88544..da53eb04b0a43d 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_config.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_config.c
+@@ -288,8 +288,12 @@ static int prueth_fw_offload_buffer_setup(struct prueth_emac *emac)
+ 	int i;
+ 
+ 	addr = lower_32_bits(prueth->msmcram.pa);
+-	if (slice)
+-		addr += PRUETH_NUM_BUF_POOLS * PRUETH_EMAC_BUF_POOL_SIZE;
++	if (slice) {
++		if (prueth->pdata.banked_ms_ram)
++			addr += MSMC_RAM_BANK_SIZE;
++		else
++			addr += PRUETH_SW_TOTAL_BUF_SIZE_PER_SLICE;
++	}
+ 
+ 	if (addr % SZ_64K) {
+ 		dev_warn(prueth->dev, "buffer pool needs to be 64KB aligned\n");
+@@ -297,43 +301,66 @@ static int prueth_fw_offload_buffer_setup(struct prueth_emac *emac)
+ 	}
+ 
+ 	bpool_cfg = emac->dram.va + BUFFER_POOL_0_ADDR_OFFSET;
+-	/* workaround for f/w bug. bpool 0 needs to be initialized */
+-	for (i = 0; i <  PRUETH_NUM_BUF_POOLS; i++) {
++
++	/* Configure buffer pools for forwarding buffers
++	 * - used by firmware to store packets to be forwarded to other port
++	 * - 8 total pools per slice
++	 */
++	for (i = 0; i <  PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE; i++) {
+ 		writel(addr, &bpool_cfg[i].addr);
+-		writel(PRUETH_EMAC_BUF_POOL_SIZE, &bpool_cfg[i].len);
+-		addr += PRUETH_EMAC_BUF_POOL_SIZE;
++		writel(PRUETH_SW_FWD_BUF_POOL_SIZE, &bpool_cfg[i].len);
++		addr += PRUETH_SW_FWD_BUF_POOL_SIZE;
+ 	}
+ 
+-	if (!slice)
+-		addr += PRUETH_NUM_BUF_POOLS * PRUETH_EMAC_BUF_POOL_SIZE;
+-	else
+-		addr += PRUETH_SW_NUM_BUF_POOLS_HOST * PRUETH_SW_BUF_POOL_SIZE_HOST;
+-
+-	for (i = PRUETH_NUM_BUF_POOLS;
+-	     i < 2 * PRUETH_SW_NUM_BUF_POOLS_HOST + PRUETH_NUM_BUF_POOLS;
+-	     i++) {
+-		/* The driver only uses first 4 queues per PRU so only initialize them */
+-		if (i % PRUETH_SW_NUM_BUF_POOLS_HOST < PRUETH_SW_NUM_BUF_POOLS_PER_PRU) {
+-			writel(addr, &bpool_cfg[i].addr);
+-			writel(PRUETH_SW_BUF_POOL_SIZE_HOST, &bpool_cfg[i].len);
+-			addr += PRUETH_SW_BUF_POOL_SIZE_HOST;
++	/* Configure buffer pools for Local Injection buffers
++	 *  - used by firmware to store packets received from host core
++	 *  - 16 total pools per slice
++	 */
++	for (i = 0; i < PRUETH_NUM_LI_BUF_POOLS_PER_SLICE; i++) {
++		int cfg_idx = i + PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE;
++
++		/* The driver only uses the first 4 queues per PRU,
++		 * so only initialize buffers for them
++		 */
++		if ((i % PRUETH_NUM_LI_BUF_POOLS_PER_PORT_PER_SLICE)
++			 < PRUETH_SW_USED_LI_BUF_POOLS_PER_PORT_PER_SLICE) {
++			writel(addr, &bpool_cfg[cfg_idx].addr);
++			writel(PRUETH_SW_LI_BUF_POOL_SIZE,
++			       &bpool_cfg[cfg_idx].len);
++			addr += PRUETH_SW_LI_BUF_POOL_SIZE;
+ 		} else {
+-			writel(0, &bpool_cfg[i].addr);
+-			writel(0, &bpool_cfg[i].len);
++			writel(0, &bpool_cfg[cfg_idx].addr);
++			writel(0, &bpool_cfg[cfg_idx].len);
+ 		}
+ 	}
+ 
+-	if (!slice)
+-		addr += PRUETH_SW_NUM_BUF_POOLS_HOST * PRUETH_SW_BUF_POOL_SIZE_HOST;
+-	else
+-		addr += PRUETH_EMAC_RX_CTX_BUF_SIZE;
++	/* Express RX buffer queue
++	 *  - used by firmware to store express packets to be transmitted
++	 *    to the host core
++	 */
++	rxq_ctx = emac->dram.va + HOST_RX_Q_EXP_CONTEXT_OFFSET;
++	for (i = 0; i < 3; i++)
++		writel(addr, &rxq_ctx->start[i]);
++
++	addr += PRUETH_SW_HOST_EXP_BUF_POOL_SIZE;
++	writel(addr, &rxq_ctx->end);
+ 
++	/* Pre-emptible RX buffer queue
++	 *  - used by firmware to store preemptible packets to be transmitted
++	 *    to the host core
++	 */
+ 	rxq_ctx = emac->dram.va + HOST_RX_Q_PRE_CONTEXT_OFFSET;
+ 	for (i = 0; i < 3; i++)
+ 		writel(addr, &rxq_ctx->start[i]);
+ 
+-	addr += PRUETH_EMAC_RX_CTX_BUF_SIZE;
+-	writel(addr - SZ_2K, &rxq_ctx->end);
++	addr += PRUETH_SW_HOST_PRE_BUF_POOL_SIZE;
++	writel(addr, &rxq_ctx->end);
++
++	/* Set pointer for default dropped packet write
++	 *  - used by firmware to temporarily store packet to be dropped
++	 */
++	rxq_ctx = emac->dram.va + DEFAULT_MSMC_Q_OFFSET;
++	writel(addr, &rxq_ctx->start[0]);
+ 
+ 	return 0;
+ }
+@@ -347,13 +374,13 @@ static int prueth_emac_buffer_setup(struct prueth_emac *emac)
+ 	u32 addr;
+ 	int i;
+ 
+-	/* Layout to have 64KB aligned buffer pool
+-	 * |BPOOL0|BPOOL1|RX_CTX0|RX_CTX1|
+-	 */
+-
+ 	addr = lower_32_bits(prueth->msmcram.pa);
+-	if (slice)
+-		addr += PRUETH_NUM_BUF_POOLS * PRUETH_EMAC_BUF_POOL_SIZE;
++	if (slice) {
++		if (prueth->pdata.banked_ms_ram)
++			addr += MSMC_RAM_BANK_SIZE;
++		else
++			addr += PRUETH_EMAC_TOTAL_BUF_SIZE_PER_SLICE;
++	}
+ 
+ 	if (addr % SZ_64K) {
+ 		dev_warn(prueth->dev, "buffer pool needs to be 64KB aligned\n");
+@@ -361,39 +388,66 @@ static int prueth_emac_buffer_setup(struct prueth_emac *emac)
+ 	}
+ 
+ 	bpool_cfg = emac->dram.va + BUFFER_POOL_0_ADDR_OFFSET;
+-	/* workaround for f/w bug. bpool 0 needs to be initilalized */
+-	writel(addr, &bpool_cfg[0].addr);
+-	writel(0, &bpool_cfg[0].len);
+ 
+-	for (i = PRUETH_EMAC_BUF_POOL_START;
+-	     i < PRUETH_EMAC_BUF_POOL_START + PRUETH_NUM_BUF_POOLS;
+-	     i++) {
+-		writel(addr, &bpool_cfg[i].addr);
+-		writel(PRUETH_EMAC_BUF_POOL_SIZE, &bpool_cfg[i].len);
+-		addr += PRUETH_EMAC_BUF_POOL_SIZE;
++	/* Configure buffer pools for forwarding buffers
++	 *  - in mac mode - no forwarding so initialize all pools to 0
++	 *  - 8 total pools per slice
++	 */
++	for (i = 0; i <  PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE; i++) {
++		writel(0, &bpool_cfg[i].addr);
++		writel(0, &bpool_cfg[i].len);
+ 	}
+ 
+-	if (!slice)
+-		addr += PRUETH_NUM_BUF_POOLS * PRUETH_EMAC_BUF_POOL_SIZE;
+-	else
+-		addr += PRUETH_EMAC_RX_CTX_BUF_SIZE * 2;
++	/* Configure buffer pools for Local Injection buffers
++	 *  - used by firmware to store packets received from host core
++	 *  - 16 total pools per slice
++	 */
++	bpool_cfg = emac->dram.va + BUFFER_POOL_0_ADDR_OFFSET;
++	for (i = 0; i < PRUETH_NUM_LI_BUF_POOLS_PER_SLICE; i++) {
++		int cfg_idx = i + PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE;
++
++		/* In EMAC mode, only the first 4 buffers are used,
++		 * as one slice needs to handle only one port
++		 */
++		if (i < PRUETH_EMAC_USED_LI_BUF_POOLS_PER_PORT_PER_SLICE) {
++			writel(addr, &bpool_cfg[cfg_idx].addr);
++			writel(PRUETH_EMAC_LI_BUF_POOL_SIZE,
++			       &bpool_cfg[cfg_idx].len);
++			addr += PRUETH_EMAC_LI_BUF_POOL_SIZE;
++		} else {
++			writel(0, &bpool_cfg[cfg_idx].addr);
++			writel(0, &bpool_cfg[cfg_idx].len);
++		}
++	}
+ 
+-	/* Pre-emptible RX buffer queue */
+-	rxq_ctx = emac->dram.va + HOST_RX_Q_PRE_CONTEXT_OFFSET;
++	/* Express RX buffer queue
++	 *  - used by firmware to store express packets to be transmitted
++	 *    to host core
++	 */
++	rxq_ctx = emac->dram.va + HOST_RX_Q_EXP_CONTEXT_OFFSET;
+ 	for (i = 0; i < 3; i++)
+ 		writel(addr, &rxq_ctx->start[i]);
+ 
+-	addr += PRUETH_EMAC_RX_CTX_BUF_SIZE;
++	addr += PRUETH_EMAC_HOST_EXP_BUF_POOL_SIZE;
+ 	writel(addr, &rxq_ctx->end);
+ 
+-	/* Express RX buffer queue */
+-	rxq_ctx = emac->dram.va + HOST_RX_Q_EXP_CONTEXT_OFFSET;
++	/* Pre-emptible RX buffer queue
++	 *  - used by firmware to store preemptible packets to be transmitted
++	 *    to host core
++	 */
++	rxq_ctx = emac->dram.va + HOST_RX_Q_PRE_CONTEXT_OFFSET;
+ 	for (i = 0; i < 3; i++)
+ 		writel(addr, &rxq_ctx->start[i]);
+ 
+-	addr += PRUETH_EMAC_RX_CTX_BUF_SIZE;
++	addr += PRUETH_EMAC_HOST_PRE_BUF_POOL_SIZE;
+ 	writel(addr, &rxq_ctx->end);
+ 
++	/* Set pointer for default dropped packet write
++	 *  - used by firmware to temporarily store packet to be dropped
++	 */
++	rxq_ctx = emac->dram.va + DEFAULT_MSMC_Q_OFFSET;
++	writel(addr, &rxq_ctx->start[0]);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_config.h b/drivers/net/ethernet/ti/icssg/icssg_config.h
+index c884e9fa099e6f..60d69744ffae28 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_config.h
++++ b/drivers/net/ethernet/ti/icssg/icssg_config.h
+@@ -26,21 +26,71 @@ struct icssg_flow_cfg {
+ #define PRUETH_MAX_RX_FLOWS	1	/* excluding default flow */
+ #define PRUETH_RX_FLOW_DATA	0
+ 
+-#define PRUETH_EMAC_BUF_POOL_SIZE	SZ_8K
+-#define PRUETH_EMAC_POOLS_PER_SLICE	24
+-#define PRUETH_EMAC_BUF_POOL_START	8
+-#define PRUETH_NUM_BUF_POOLS	8
+-#define PRUETH_EMAC_RX_CTX_BUF_SIZE	SZ_16K	/* per slice */
+-#define MSMC_RAM_SIZE	\
+-	(2 * (PRUETH_EMAC_BUF_POOL_SIZE * PRUETH_NUM_BUF_POOLS + \
+-	 PRUETH_EMAC_RX_CTX_BUF_SIZE * 2))
+-
+-#define PRUETH_SW_BUF_POOL_SIZE_HOST	SZ_4K
+-#define PRUETH_SW_NUM_BUF_POOLS_HOST	8
+-#define PRUETH_SW_NUM_BUF_POOLS_PER_PRU 4
+-#define MSMC_RAM_SIZE_SWITCH_MODE \
+-	(MSMC_RAM_SIZE + \
+-	(2 * PRUETH_SW_BUF_POOL_SIZE_HOST * PRUETH_SW_NUM_BUF_POOLS_HOST))
++/* Defines for forwarding path buffer pools:
++ *   - used by firmware to store packets to be forwarded to other port
++ *   - 8 total pools per slice
++ *   - only used in switch mode (as no forwarding in mac mode)
++ */
++#define PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE			8
++#define PRUETH_SW_FWD_BUF_POOL_SIZE				(SZ_8K)
++
++/* Defines for local injection path buffer pools:
++ *   - used by firmware to store packets received from host core
++ *   - 16 total pools per slice
++ *   - 8 pools per port per slice and each slice handles both ports
++ *   - only 4 out of 8 pools used per port (as only 4 real QoS levels in ICSSG)
++ *   - switch mode: 8 total pools used
++ *   - mac mode:    4 total pools used
++ */
++#define PRUETH_NUM_LI_BUF_POOLS_PER_SLICE			16
++#define PRUETH_NUM_LI_BUF_POOLS_PER_PORT_PER_SLICE		8
++#define PRUETH_SW_LI_BUF_POOL_SIZE				SZ_4K
++#define PRUETH_SW_USED_LI_BUF_POOLS_PER_SLICE			8
++#define PRUETH_SW_USED_LI_BUF_POOLS_PER_PORT_PER_SLICE		4
++#define PRUETH_EMAC_LI_BUF_POOL_SIZE				SZ_8K
++#define PRUETH_EMAC_USED_LI_BUF_POOLS_PER_SLICE			4
++#define PRUETH_EMAC_USED_LI_BUF_POOLS_PER_PORT_PER_SLICE	4
++
++/* Defines for host egress path - express and preemptible buffers
++ *   - used by firmware to store express and preemptible packets
++ *     to be transmitted to host core
++ *   - used by both mac/switch modes
++ */
++#define PRUETH_SW_HOST_EXP_BUF_POOL_SIZE	SZ_16K
++#define PRUETH_SW_HOST_PRE_BUF_POOL_SIZE	(SZ_16K - SZ_2K)
++#define PRUETH_EMAC_HOST_EXP_BUF_POOL_SIZE	PRUETH_SW_HOST_EXP_BUF_POOL_SIZE
++#define PRUETH_EMAC_HOST_PRE_BUF_POOL_SIZE	PRUETH_SW_HOST_PRE_BUF_POOL_SIZE
++
++/* Buffer used by firmware to temporarily store packet to be dropped */
++#define PRUETH_SW_DROP_PKT_BUF_SIZE		SZ_2K
++#define PRUETH_EMAC_DROP_PKT_BUF_SIZE		PRUETH_SW_DROP_PKT_BUF_SIZE
++
++/* Total switch mode memory usage for buffers per slice */
++#define PRUETH_SW_TOTAL_BUF_SIZE_PER_SLICE \
++	(PRUETH_SW_FWD_BUF_POOL_SIZE * PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE + \
++	 PRUETH_SW_LI_BUF_POOL_SIZE * PRUETH_SW_USED_LI_BUF_POOLS_PER_SLICE + \
++	 PRUETH_SW_HOST_EXP_BUF_POOL_SIZE + \
++	 PRUETH_SW_HOST_PRE_BUF_POOL_SIZE + \
++	 PRUETH_SW_DROP_PKT_BUF_SIZE)
++
++/* Total switch mode memory usage for all buffers */
++#define PRUETH_SW_TOTAL_BUF_SIZE \
++	(2 * PRUETH_SW_TOTAL_BUF_SIZE_PER_SLICE)
++
++/* Total mac mode memory usage for buffers per slice */
++#define PRUETH_EMAC_TOTAL_BUF_SIZE_PER_SLICE \
++	(PRUETH_EMAC_LI_BUF_POOL_SIZE * \
++	 PRUETH_EMAC_USED_LI_BUF_POOLS_PER_SLICE + \
++	 PRUETH_EMAC_HOST_EXP_BUF_POOL_SIZE + \
++	 PRUETH_EMAC_HOST_PRE_BUF_POOL_SIZE + \
++	 PRUETH_EMAC_DROP_PKT_BUF_SIZE)
++
++/* Total mac mode memory usage for all buffers */
++#define PRUETH_EMAC_TOTAL_BUF_SIZE \
++	(2 * PRUETH_EMAC_TOTAL_BUF_SIZE_PER_SLICE)
++
++/* Size of 1 bank of MSMC/OC_SRAM memory */
++#define MSMC_RAM_BANK_SIZE			SZ_256K
+ 
+ #define PRUETH_SWITCH_FDB_MASK ((SIZE_OF_FDB / NUMBER_OF_FDB_BUCKET_ENTRIES) - 1)
+ 
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.c b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+index 86fc1278127c74..2f5c4335dec388 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+@@ -1764,10 +1764,15 @@ static int prueth_probe(struct platform_device *pdev)
+ 		goto put_mem;
+ 	}
+ 
+-	msmc_ram_size = MSMC_RAM_SIZE;
+ 	prueth->is_switchmode_supported = prueth->pdata.switch_mode;
+-	if (prueth->is_switchmode_supported)
+-		msmc_ram_size = MSMC_RAM_SIZE_SWITCH_MODE;
++	if (prueth->pdata.banked_ms_ram) {
++		/* Reserve 2 MSMC RAM banks for buffers to avoid arbitration */
++		msmc_ram_size = (2 * MSMC_RAM_BANK_SIZE);
++	} else {
++		msmc_ram_size = PRUETH_EMAC_TOTAL_BUF_SIZE;
++		if (prueth->is_switchmode_supported)
++			msmc_ram_size = PRUETH_SW_TOTAL_BUF_SIZE;
++	}
+ 
+ 	/* NOTE: FW bug needs buffer base to be 64KB aligned */
+ 	prueth->msmcram.va =
+@@ -1924,7 +1929,8 @@ static int prueth_probe(struct platform_device *pdev)
+ 
+ free_pool:
+ 	gen_pool_free(prueth->sram_pool,
+-		      (unsigned long)prueth->msmcram.va, msmc_ram_size);
++		      (unsigned long)prueth->msmcram.va,
++		      prueth->msmcram.size);
+ 
+ put_mem:
+ 	pruss_release_mem_region(prueth->pruss, &prueth->shram);
+@@ -1976,8 +1982,8 @@ static void prueth_remove(struct platform_device *pdev)
+ 	icss_iep_put(prueth->iep0);
+ 
+ 	gen_pool_free(prueth->sram_pool,
+-		      (unsigned long)prueth->msmcram.va,
+-		      MSMC_RAM_SIZE);
++		      (unsigned long)prueth->msmcram.va,
++		      prueth->msmcram.size);
+ 
+ 	pruss_release_mem_region(prueth->pruss, &prueth->shram);
+ 
+@@ -1994,12 +2000,14 @@ static const struct prueth_pdata am654_icssg_pdata = {
+ 	.fdqring_mode = K3_RINGACC_RING_MODE_MESSAGE,
+ 	.quirk_10m_link_issue = 1,
+ 	.switch_mode = 1,
++	.banked_ms_ram = 0,
+ };
+ 
+ static const struct prueth_pdata am64x_icssg_pdata = {
+ 	.fdqring_mode = K3_RINGACC_RING_MODE_RING,
+ 	.quirk_10m_link_issue = 1,
+ 	.switch_mode = 1,
++	.banked_ms_ram = 1,
+ };
+ 
+ static const struct of_device_id prueth_dt_match[] = {
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.h b/drivers/net/ethernet/ti/icssg/icssg_prueth.h
+index b6be4aa57a6153..0ca8ea0560e52f 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.h
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.h
+@@ -251,11 +251,13 @@ struct prueth_emac {
+  * @fdqring_mode: Free desc queue mode
+  * @quirk_10m_link_issue: 10M link detect errata
+  * @switch_mode: switch firmware support
++ * @banked_ms_ram: banked memory support
+  */
+ struct prueth_pdata {
+ 	enum k3_ring_mode fdqring_mode;
+ 	u32	quirk_10m_link_issue:1;
+ 	u32	switch_mode:1;
++	u32	banked_ms_ram:1;
+ };
+ 
+ struct icssg_firmwares {
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_switch_map.h b/drivers/net/ethernet/ti/icssg/icssg_switch_map.h
+index 424a7e945ea84a..12541a12ebd672 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_switch_map.h
++++ b/drivers/net/ethernet/ti/icssg/icssg_switch_map.h
+@@ -180,6 +180,9 @@
+ /* Used to notify the FW of the current link speed */
+ #define PORT_LINK_SPEED_OFFSET                             0x00A8
+ 
++/* 2k memory pointer reserved for default writes by PRU0 */
++#define DEFAULT_MSMC_Q_OFFSET                              0x00AC
++
+ /* TAS gate mask for windows list0 */
+ #define TAS_GATE_MASK_LIST0                                0x0100
+ 
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index c69c2419480194..0c9bedb6791695 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -3525,6 +3525,12 @@ static int virtnet_tx_resize(struct virtnet_info *vi, struct send_queue *sq,
+ {
+ 	int qindex, err;
+ 
++	if (ring_num <= MAX_SKB_FRAGS + 2) {
++		netdev_err(vi->dev, "tx size (%d) cannot be smaller than %d\n",
++			   ring_num, MAX_SKB_FRAGS + 2);
++		return -EINVAL;
++	}
++
+ 	qindex = sq - vi->sq;
+ 
+ 	virtnet_tx_pause(vi, sq);
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 364fa2a514f8a6..e03dcab8c3df56 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -2508,6 +2508,7 @@ bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *l,
+ }
+ EXPORT_SYMBOL(pci_bus_read_dev_vendor_id);
+ 
++#if IS_ENABLED(CONFIG_PCI_PWRCTRL)
+ static struct platform_device *pci_pwrctrl_create_device(struct pci_bus *bus, int devfn)
+ {
+ 	struct pci_host_bridge *host = pci_find_host_bridge(bus);
+@@ -2537,6 +2538,12 @@ static struct platform_device *pci_pwrctrl_create_device(struct pci_bus *bus, in
+ 
+ 	return pdev;
+ }
++#else
++static struct platform_device *pci_pwrctrl_create_device(struct pci_bus *bus, int devfn)
++{
++	return NULL;
++}
++#endif
+ 
+ /*
+  * Read the config data for a PCI device, sanity-check it,
+diff --git a/drivers/platform/mellanox/mlxbf-pmc.c b/drivers/platform/mellanox/mlxbf-pmc.c
+index 771a24c397c12b..1135d2cf335c80 100644
+--- a/drivers/platform/mellanox/mlxbf-pmc.c
++++ b/drivers/platform/mellanox/mlxbf-pmc.c
+@@ -15,6 +15,7 @@
+ #include <linux/hwmon.h>
+ #include <linux/platform_device.h>
+ #include <linux/string.h>
++#include <linux/string_helpers.h>
+ #include <uapi/linux/psci.h>
+ 
+ #define MLXBF_PMC_WRITE_REG_32 0x82000009
+@@ -1104,7 +1105,7 @@ static int mlxbf_pmc_get_event_num(const char *blk, const char *evt)
+ 	return -ENODEV;
+ }
+ 
+-/* Get the event number given the name */
++/* Get the event name given the number */
+ static char *mlxbf_pmc_get_event_name(const char *blk, u32 evt)
+ {
+ 	const struct mlxbf_pmc_events *events;
+@@ -1666,6 +1667,7 @@ static ssize_t mlxbf_pmc_event_store(struct device *dev,
+ 		attr, struct mlxbf_pmc_attribute, dev_attr);
+ 	unsigned int blk_num, cnt_num;
+ 	bool is_l3 = false;
++	char *evt_name;
+ 	int evt_num;
+ 	int err;
+ 
+@@ -1673,14 +1675,23 @@ static ssize_t mlxbf_pmc_event_store(struct device *dev,
+ 	cnt_num = attr_event->index;
+ 
+ 	if (isalpha(buf[0])) {
++		/* Remove the trailing newline character if present */
++		evt_name = kstrdup_and_replace(buf, '\n', '\0', GFP_KERNEL);
++		if (!evt_name)
++			return -ENOMEM;
++
+ 		evt_num = mlxbf_pmc_get_event_num(pmc->block_name[blk_num],
+-						  buf);
++						  evt_name);
++		kfree(evt_name);
+ 		if (evt_num < 0)
+ 			return -EINVAL;
+ 	} else {
+ 		err = kstrtouint(buf, 0, &evt_num);
+ 		if (err < 0)
+ 			return err;
++
++		if (!mlxbf_pmc_get_event_name(pmc->block_name[blk_num], evt_num))
++			return -EINVAL;
+ 	}
+ 
+ 	if (strstr(pmc->block_name[blk_num], "l3cache"))
+@@ -1761,13 +1772,14 @@ static ssize_t mlxbf_pmc_enable_store(struct device *dev,
+ {
+ 	struct mlxbf_pmc_attribute *attr_enable = container_of(
+ 		attr, struct mlxbf_pmc_attribute, dev_attr);
+-	unsigned int en, blk_num;
++	unsigned int blk_num;
+ 	u32 word;
+ 	int err;
++	bool en;
+ 
+ 	blk_num = attr_enable->nr;
+ 
+-	err = kstrtouint(buf, 0, &en);
++	err = kstrtobool(buf, &en);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1787,14 +1799,11 @@ static ssize_t mlxbf_pmc_enable_store(struct device *dev,
+ 			MLXBF_PMC_CRSPACE_PERFMON_CTL(pmc->block[blk_num].counters),
+ 			MLXBF_PMC_WRITE_REG_32, word);
+ 	} else {
+-		if (en && en != 1)
+-			return -EINVAL;
+-
+ 		err = mlxbf_pmc_config_l3_counters(blk_num, false, !!en);
+ 		if (err)
+ 			return err;
+ 
+-		if (en == 1) {
++		if (en) {
+ 			err = mlxbf_pmc_config_l3_counters(blk_num, true, false);
+ 			if (err)
+ 				return err;
+diff --git a/drivers/platform/x86/Makefile b/drivers/platform/x86/Makefile
+index 650dfbebb6c8cc..9a1884b03215a5 100644
+--- a/drivers/platform/x86/Makefile
++++ b/drivers/platform/x86/Makefile
+@@ -58,6 +58,8 @@ obj-$(CONFIG_X86_PLATFORM_DRIVERS_HP)	+= hp/
+ # Hewlett Packard Enterprise
+ obj-$(CONFIG_UV_SYSFS)       += uv_sysfs.o
+ 
++obj-$(CONFIG_FW_ATTR_CLASS)	+= firmware_attributes_class.o
++
+ # IBM Thinkpad and Lenovo
+ obj-$(CONFIG_IBM_RTL)		+= ibm_rtl.o
+ obj-$(CONFIG_IDEAPAD_LAPTOP)	+= ideapad-laptop.o
+@@ -122,7 +124,6 @@ obj-$(CONFIG_SYSTEM76_ACPI)	+= system76_acpi.o
+ obj-$(CONFIG_TOPSTAR_LAPTOP)	+= topstar-laptop.o
+ 
+ # Platform drivers
+-obj-$(CONFIG_FW_ATTR_CLASS)		+= firmware_attributes_class.o
+ obj-$(CONFIG_SERIAL_MULTI_INSTANTIATE)	+= serial-multi-instantiate.o
+ obj-$(CONFIG_TOUCHSCREEN_DMI)		+= touchscreen_dmi.o
+ obj-$(CONFIG_WIRELESS_HOTKEY)		+= wireless-hotkey.o
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index 3f8b2a324efdfa..f84c3d03c1de78 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -530,6 +530,15 @@ static const struct dmi_system_id asus_quirks[] = {
+ 		},
+ 		.driver_data = &quirk_asus_zenbook_duo_kbd,
+ 	},
++	{
++		.callback = dmi_matched,
++		.ident = "ASUS Zenbook Duo UX8406CA",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "UX8406CA"),
++		},
++		.driver_data = &quirk_asus_zenbook_duo_kbd,
++	},
+ 	{},
+ };
+ 
+diff --git a/drivers/platform/x86/dell/alienware-wmi-wmax.c b/drivers/platform/x86/dell/alienware-wmi-wmax.c
+index eb5cbe6ae9e907..dbbb3bc341e341 100644
+--- a/drivers/platform/x86/dell/alienware-wmi-wmax.c
++++ b/drivers/platform/x86/dell/alienware-wmi-wmax.c
+@@ -205,6 +205,7 @@ static const struct dmi_system_id awcc_dmi_table[] __initconst = {
+ 		},
+ 		.driver_data = &g_series_quirks,
+ 	},
++	{}
+ };
+ 
+ enum WMAX_THERMAL_INFORMATION_OPERATIONS {
+diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
+index b5e4da6a67798a..edb9d2fb02ec2b 100644
+--- a/drivers/platform/x86/ideapad-laptop.c
++++ b/drivers/platform/x86/ideapad-laptop.c
+@@ -1669,7 +1669,7 @@ static int ideapad_kbd_bl_init(struct ideapad_private *priv)
+ 	priv->kbd_bl.led.name                    = "platform::" LED_FUNCTION_KBD_BACKLIGHT;
+ 	priv->kbd_bl.led.brightness_get          = ideapad_kbd_bl_led_cdev_brightness_get;
+ 	priv->kbd_bl.led.brightness_set_blocking = ideapad_kbd_bl_led_cdev_brightness_set;
+-	priv->kbd_bl.led.flags                   = LED_BRIGHT_HW_CHANGED;
++	priv->kbd_bl.led.flags                   = LED_BRIGHT_HW_CHANGED | LED_RETAIN_AT_SHUTDOWN;
+ 
+ 	err = led_classdev_register(&priv->platform_device->dev, &priv->kbd_bl.led);
+ 	if (err)
+@@ -1728,7 +1728,7 @@ static int ideapad_fn_lock_led_init(struct ideapad_private *priv)
+ 	priv->fn_lock.led.name                    = "platform::" LED_FUNCTION_FNLOCK;
+ 	priv->fn_lock.led.brightness_get          = ideapad_fn_lock_led_cdev_get;
+ 	priv->fn_lock.led.brightness_set_blocking = ideapad_fn_lock_led_cdev_set;
+-	priv->fn_lock.led.flags                   = LED_BRIGHT_HW_CHANGED;
++	priv->fn_lock.led.flags                   = LED_BRIGHT_HW_CHANGED | LED_RETAIN_AT_SHUTDOWN;
+ 
+ 	err = led_classdev_register(&priv->platform_device->dev, &priv->fn_lock.led);
+ 	if (err)
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 90629a75669328..4ecad5c6c83905 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -5639,6 +5639,7 @@ static void regulator_remove_coupling(struct regulator_dev *rdev)
+ 				 ERR_PTR(err));
+ 	}
+ 
++	rdev->coupling_desc.n_coupled = 0;
+ 	kfree(rdev->coupling_desc.coupled_rdevs);
+ 	rdev->coupling_desc.coupled_rdevs = NULL;
+ }
+diff --git a/drivers/s390/net/ism_drv.c b/drivers/s390/net/ism_drv.c
+index 60ed70a39d2cca..b8464d9433e96d 100644
+--- a/drivers/s390/net/ism_drv.c
++++ b/drivers/s390/net/ism_drv.c
+@@ -130,6 +130,7 @@ static int ism_cmd(struct ism_dev *ism, void *cmd)
+ 	struct ism_req_hdr *req = cmd;
+ 	struct ism_resp_hdr *resp = cmd;
+ 
++	spin_lock(&ism->cmd_lock);
+ 	__ism_write_cmd(ism, req + 1, sizeof(*req), req->len - sizeof(*req));
+ 	__ism_write_cmd(ism, req, 0, sizeof(*req));
+ 
+@@ -143,6 +144,7 @@ static int ism_cmd(struct ism_dev *ism, void *cmd)
+ 	}
+ 	__ism_read_cmd(ism, resp + 1, sizeof(*resp), resp->len - sizeof(*resp));
+ out:
++	spin_unlock(&ism->cmd_lock);
+ 	return resp->ret;
+ }
+ 
+@@ -606,6 +608,7 @@ static int ism_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		return -ENOMEM;
+ 
+ 	spin_lock_init(&ism->lock);
++	spin_lock_init(&ism->cmd_lock);
+ 	dev_set_drvdata(&pdev->dev, ism);
+ 	ism->pdev = pdev;
+ 	ism->dev.parent = &pdev->dev;
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index 506a139fbd2c56..b3a0bc81159345 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -1960,11 +1960,6 @@ static int cqspi_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_enable(dev);
+ 
+-	if (cqspi->rx_chan) {
+-		dma_release_channel(cqspi->rx_chan);
+-		goto probe_setup_failed;
+-	}
+-
+ 	pm_runtime_set_autosuspend_delay(dev, CQSPI_AUTOSUSPEND_TIMEOUT);
+ 	pm_runtime_use_autosuspend(dev);
+ 	pm_runtime_get_noresume(dev);
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index 6434cbdc1a6ef3..721b15b7e13b9f 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -393,8 +393,7 @@ int vchiq_shutdown(struct vchiq_instance *instance)
+ 	struct vchiq_state *state = instance->state;
+ 	int ret = 0;
+ 
+-	if (mutex_lock_killable(&state->mutex))
+-		return -EAGAIN;
++	mutex_lock(&state->mutex);
+ 
+ 	/* Remove all services */
+ 	vchiq_shutdown_internal(state, instance);
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 214d45f8e55c21..5915bb249de5d0 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -1160,7 +1160,7 @@ static int tcpm_set_attached_state(struct tcpm_port *port, bool attached)
+ 				     port->data_role);
+ }
+ 
+-static int tcpm_set_roles(struct tcpm_port *port, bool attached,
++static int tcpm_set_roles(struct tcpm_port *port, bool attached, int state,
+ 			  enum typec_role role, enum typec_data_role data)
+ {
+ 	enum typec_orientation orientation;
+@@ -1197,7 +1197,7 @@ static int tcpm_set_roles(struct tcpm_port *port, bool attached,
+ 		}
+ 	}
+ 
+-	ret = tcpm_mux_set(port, TYPEC_STATE_USB, usb_role, orientation);
++	ret = tcpm_mux_set(port, state, usb_role, orientation);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -4404,16 +4404,6 @@ static int tcpm_src_attach(struct tcpm_port *port)
+ 
+ 	tcpm_enable_auto_vbus_discharge(port, true);
+ 
+-	ret = tcpm_set_roles(port, true, TYPEC_SOURCE, tcpm_data_role_for_source(port));
+-	if (ret < 0)
+-		return ret;
+-
+-	if (port->pd_supported) {
+-		ret = port->tcpc->set_pd_rx(port->tcpc, true);
+-		if (ret < 0)
+-			goto out_disable_mux;
+-	}
+-
+ 	/*
+ 	 * USB Type-C specification, version 1.2,
+ 	 * chapter 4.5.2.2.8.1 (Attached.SRC Requirements)
+@@ -4423,13 +4413,24 @@ static int tcpm_src_attach(struct tcpm_port *port)
+ 	    (polarity == TYPEC_POLARITY_CC2 && port->cc1 == TYPEC_CC_RA)) {
+ 		ret = tcpm_set_vconn(port, true);
+ 		if (ret < 0)
+-			goto out_disable_pd;
++			return ret;
+ 	}
+ 
+ 	ret = tcpm_set_vbus(port, true);
+ 	if (ret < 0)
+ 		goto out_disable_vconn;
+ 
++	ret = tcpm_set_roles(port, true, TYPEC_STATE_USB, TYPEC_SOURCE,
++			     tcpm_data_role_for_source(port));
++	if (ret < 0)
++		goto out_disable_vbus;
++
++	if (port->pd_supported) {
++		ret = port->tcpc->set_pd_rx(port->tcpc, true);
++		if (ret < 0)
++			goto out_disable_mux;
++	}
++
+ 	port->pd_capable = false;
+ 
+ 	port->partner = NULL;
+@@ -4440,14 +4441,14 @@ static int tcpm_src_attach(struct tcpm_port *port)
+ 
+ 	return 0;
+ 
+-out_disable_vconn:
+-	tcpm_set_vconn(port, false);
+-out_disable_pd:
+-	if (port->pd_supported)
+-		port->tcpc->set_pd_rx(port->tcpc, false);
+ out_disable_mux:
+ 	tcpm_mux_set(port, TYPEC_STATE_SAFE, USB_ROLE_NONE,
+ 		     TYPEC_ORIENTATION_NONE);
++out_disable_vbus:
++	tcpm_set_vbus(port, false);
++out_disable_vconn:
++	tcpm_set_vconn(port, false);
++
+ 	return ret;
+ }
+ 
+@@ -4579,7 +4580,8 @@ static int tcpm_snk_attach(struct tcpm_port *port)
+ 
+ 	tcpm_enable_auto_vbus_discharge(port, true);
+ 
+-	ret = tcpm_set_roles(port, true, TYPEC_SINK, tcpm_data_role_for_sink(port));
++	ret = tcpm_set_roles(port, true, TYPEC_STATE_USB,
++			     TYPEC_SINK, tcpm_data_role_for_sink(port));
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -4602,12 +4604,24 @@ static void tcpm_snk_detach(struct tcpm_port *port)
+ static int tcpm_acc_attach(struct tcpm_port *port)
+ {
+ 	int ret;
++	enum typec_role role;
++	enum typec_data_role data;
++	int state = TYPEC_STATE_USB;
+ 
+ 	if (port->attached)
+ 		return 0;
+ 
+-	ret = tcpm_set_roles(port, true, TYPEC_SOURCE,
+-			     tcpm_data_role_for_source(port));
++	role = tcpm_port_is_sink(port) ? TYPEC_SINK : TYPEC_SOURCE;
++	data = tcpm_port_is_sink(port) ? tcpm_data_role_for_sink(port)
++				       : tcpm_data_role_for_source(port);
++
++	if (tcpm_port_is_audio(port))
++		state = TYPEC_MODE_AUDIO;
++
++	if (tcpm_port_is_debug(port))
++		state = TYPEC_MODE_DEBUG;
++
++	ret = tcpm_set_roles(port, true, state, role, data);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -5377,7 +5391,7 @@ static void run_state_machine(struct tcpm_port *port)
+ 		 */
+ 		tcpm_set_vconn(port, false);
+ 		tcpm_set_vbus(port, false);
+-		tcpm_set_roles(port, port->self_powered, TYPEC_SOURCE,
++		tcpm_set_roles(port, port->self_powered, TYPEC_STATE_USB, TYPEC_SOURCE,
+ 			       tcpm_data_role_for_source(port));
+ 		/*
+ 		 * If tcpc fails to notify vbus off, TCPM will wait for PD_T_SAFE_0V +
+@@ -5409,7 +5423,7 @@ static void run_state_machine(struct tcpm_port *port)
+ 		tcpm_set_vconn(port, false);
+ 		if (port->pd_capable)
+ 			tcpm_set_charge(port, false);
+-		tcpm_set_roles(port, port->self_powered, TYPEC_SINK,
++		tcpm_set_roles(port, port->self_powered, TYPEC_STATE_USB, TYPEC_SINK,
+ 			       tcpm_data_role_for_sink(port));
+ 		/*
+ 		 * VBUS may or may not toggle, depending on the adapter.
+@@ -5533,10 +5547,10 @@ static void run_state_machine(struct tcpm_port *port)
+ 	case DR_SWAP_CHANGE_DR:
+ 		tcpm_unregister_altmodes(port);
+ 		if (port->data_role == TYPEC_HOST)
+-			tcpm_set_roles(port, true, port->pwr_role,
++			tcpm_set_roles(port, true, TYPEC_STATE_USB, port->pwr_role,
+ 				       TYPEC_DEVICE);
+ 		else
+-			tcpm_set_roles(port, true, port->pwr_role,
++			tcpm_set_roles(port, true, TYPEC_STATE_USB, port->pwr_role,
+ 				       TYPEC_HOST);
+ 		tcpm_ams_finish(port);
+ 		tcpm_set_state(port, ready_state(port), 0);
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index b784aab6686703..4397392bfef000 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -2797,7 +2797,7 @@ int virtqueue_resize(struct virtqueue *_vq, u32 num,
+ 		     void (*recycle_done)(struct virtqueue *vq))
+ {
+ 	struct vring_virtqueue *vq = to_vvq(_vq);
+-	int err;
++	int err, err_reset;
+ 
+ 	if (num > vq->vq.num_max)
+ 		return -E2BIG;
+@@ -2819,7 +2819,11 @@ int virtqueue_resize(struct virtqueue *_vq, u32 num,
+ 	else
+ 		err = virtqueue_resize_split(_vq, num);
+ 
+-	return virtqueue_enable_after_reset(_vq);
++	err_reset = virtqueue_enable_after_reset(_vq);
++	if (err_reset)
++		return err_reset;
++
++	return err;
+ }
+ EXPORT_SYMBOL_GPL(virtqueue_resize);
+ 
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index 6613b8fcceb0d9..5cf7328d536053 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -472,11 +472,18 @@ static int __nilfs_read_inode(struct super_block *sb,
+ 		inode->i_op = &nilfs_symlink_inode_operations;
+ 		inode_nohighmem(inode);
+ 		inode->i_mapping->a_ops = &nilfs_aops;
+-	} else {
++	} else if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode) ||
++		   S_ISFIFO(inode->i_mode) || S_ISSOCK(inode->i_mode)) {
+ 		inode->i_op = &nilfs_special_inode_operations;
+ 		init_special_inode(
+ 			inode, inode->i_mode,
+ 			huge_decode_dev(le64_to_cpu(raw_inode->i_device_code)));
++	} else {
++		nilfs_error(sb,
++			    "invalid file type bits in mode 0%o for inode %lu",
++			    inode->i_mode, ino);
++		err = -EIO;
++		goto failed_unmap;
+ 	}
+ 	nilfs_ifile_unmap_inode(raw_inode);
+ 	brelse(bh);
+diff --git a/include/drm/drm_buddy.h b/include/drm/drm_buddy.h
+index 9689a7c5dd36b2..513837632b7d37 100644
+--- a/include/drm/drm_buddy.h
++++ b/include/drm/drm_buddy.h
+@@ -160,6 +160,8 @@ int drm_buddy_block_trim(struct drm_buddy *mm,
+ 			 u64 new_size,
+ 			 struct list_head *blocks);
+ 
++void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear);
++
+ void drm_buddy_free_block(struct drm_buddy *mm, struct drm_buddy_block *block);
+ 
+ void drm_buddy_free_list(struct drm_buddy *mm,
+diff --git a/include/linux/ism.h b/include/linux/ism.h
+index 5428edd9098231..8358b4cd7ba6ae 100644
+--- a/include/linux/ism.h
++++ b/include/linux/ism.h
+@@ -28,6 +28,7 @@ struct ism_dmb {
+ 
+ struct ism_dev {
+ 	spinlock_t lock; /* protects the ism device */
++	spinlock_t cmd_lock; /* serializes cmds */
+ 	struct list_head list;
+ 	struct pci_dev *pdev;
+ 
+diff --git a/include/linux/sprintf.h b/include/linux/sprintf.h
+index 51cab2def9ec10..87613009138438 100644
+--- a/include/linux/sprintf.h
++++ b/include/linux/sprintf.h
+@@ -4,6 +4,7 @@
+ 
+ #include <linux/compiler_attributes.h>
+ #include <linux/types.h>
++#include <linux/stdarg.h>
+ 
+ int num_to_str(char *buf, int size, unsigned long long num, unsigned int width);
+ 
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 1f1861c57e2ad0..29a0759d5582c8 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -474,7 +474,7 @@ struct xfrm_type_offload {
+ 
+ int xfrm_register_type_offload(const struct xfrm_type_offload *type, unsigned short family);
+ void xfrm_unregister_type_offload(const struct xfrm_type_offload *type, unsigned short family);
+-void xfrm_set_type_offload(struct xfrm_state *x);
++void xfrm_set_type_offload(struct xfrm_state *x, bool try_load);
+ static inline void xfrm_unset_type_offload(struct xfrm_state *x)
+ {
+ 	if (!x->type_offload)
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index efa70141171290..c12dfbeb78a744 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -16441,6 +16441,8 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
+ 
+ 		if (src_reg->type == PTR_TO_STACK)
+ 			insn_flags |= INSN_F_SRC_REG_STACK;
++		if (dst_reg->type == PTR_TO_STACK)
++			insn_flags |= INSN_F_DST_REG_STACK;
+ 	} else {
+ 		if (insn->src_reg != BPF_REG_0) {
+ 			verbose(env, "BPF_JMP/JMP32 uses reserved fields\n");
+@@ -16450,10 +16452,11 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
+ 		memset(src_reg, 0, sizeof(*src_reg));
+ 		src_reg->type = SCALAR_VALUE;
+ 		__mark_reg_known(src_reg, insn->imm);
++
++		if (dst_reg->type == PTR_TO_STACK)
++			insn_flags |= INSN_F_DST_REG_STACK;
+ 	}
+ 
+-	if (dst_reg->type == PTR_TO_STACK)
+-		insn_flags |= INSN_F_DST_REG_STACK;
+ 	if (insn_flags) {
+ 		err = push_insn_history(env, this_branch, insn_flags, 0);
+ 		if (err)
+diff --git a/kernel/resource.c b/kernel/resource.c
+index 8d3e6ed0bdc1f3..f9bb5481501a37 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -1279,8 +1279,9 @@ static int __request_region_locked(struct resource *res, struct resource *parent
+ 		 * become unavailable to other users.  Conflicts are
+ 		 * not expected.  Warn to aid debugging if encountered.
+ 		 */
+-		if (conflict->desc == IORES_DESC_DEVICE_PRIVATE_MEMORY) {
+-			pr_warn("Unaddressable device %s %pR conflicts with %pR",
++		if (parent == &iomem_resource &&
++		    conflict->desc == IORES_DESC_DEVICE_PRIVATE_MEMORY) {
++			pr_warn("Unaddressable device %s %pR conflicts with %pR\n",
+ 				conflict->name, conflict, res);
+ 		}
+ 		if (conflict != parent) {
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index a009c91f7b05fc..83c65f3afccaac 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -1256,7 +1256,7 @@ int get_device_system_crosststamp(int (*get_time_fn)
+ 				  struct system_time_snapshot *history_begin,
+ 				  struct system_device_crosststamp *xtstamp)
+ {
+-	struct system_counterval_t system_counterval;
++	struct system_counterval_t system_counterval = {};
+ 	struct timekeeper *tk = &tk_core.timekeeper;
+ 	u64 cycles, now, interval_start;
+ 	unsigned int clock_was_set_seq = 0;
+diff --git a/mm/kasan/report.c b/mm/kasan/report.c
+index b0877035491f80..62c01b4527eba6 100644
+--- a/mm/kasan/report.c
++++ b/mm/kasan/report.c
+@@ -399,7 +399,9 @@ static void print_address_description(void *addr, u8 tag,
+ 	}
+ 
+ 	if (is_vmalloc_addr(addr)) {
+-		pr_err("The buggy address %px belongs to a vmalloc virtual mapping\n", addr);
++		pr_err("The buggy address belongs to a");
++		if (!vmalloc_dump_obj(addr))
++			pr_cont(" vmalloc virtual mapping\n");
+ 		page = vmalloc_to_page(addr);
+ 	}
+ 
+diff --git a/mm/ksm.c b/mm/ksm.c
+index 8583fb91ef136e..a9d3e719e08993 100644
+--- a/mm/ksm.c
++++ b/mm/ksm.c
+@@ -3669,10 +3669,10 @@ static ssize_t advisor_mode_show(struct kobject *kobj,
+ {
+ 	const char *output;
+ 
+-	if (ksm_advisor == KSM_ADVISOR_NONE)
+-		output = "[none] scan-time";
+-	else if (ksm_advisor == KSM_ADVISOR_SCAN_TIME)
++	if (ksm_advisor == KSM_ADVISOR_SCAN_TIME)
+ 		output = "none [scan-time]";
++	else
++		output = "[none] scan-time";
+ 
+ 	return sysfs_emit(buf, "%s\n", output);
+ }
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index b91a33fb6c694f..225dddff091d71 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1561,6 +1561,10 @@ static int get_hwpoison_page(struct page *p, unsigned long flags)
+ 	return ret;
+ }
+ 
++/*
++ * The caller must guarantee the folio isn't a large folio, except for hugetlb.
++ * try_to_unmap() can't handle it.
++ */
+ int unmap_poisoned_folio(struct folio *folio, unsigned long pfn, bool must_kill)
+ {
+ 	enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON;
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 3783e45bfc92e2..d3bce4d7a33980 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -1128,6 +1128,14 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
+ 			goto keep;
+ 
+ 		if (folio_contain_hwpoisoned_page(folio)) {
++			/*
++			 * unmap_poisoned_folio() can't handle large
++			 * folios, so just skip them. memory_failure() will
++			 * handle it if the UCE is triggered again.
++			 */
++			if (folio_test_large(folio))
++				goto keep_locked;
++
+ 			unmap_poisoned_folio(folio, folio_pfn(folio), false);
+ 			folio_unlock(folio);
+ 			folio_put(folio);
+diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
+index d14a7e317ac8bf..03fe0452e6e2e1 100644
+--- a/mm/zsmalloc.c
++++ b/mm/zsmalloc.c
+@@ -1053,6 +1053,9 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
+ 	if (!zspage)
+ 		return NULL;
+ 
++	if (!IS_ENABLED(CONFIG_COMPACTION))
++		gfp &= ~__GFP_MOVABLE;
++
+ 	zspage->magic = ZSPAGE_MAGIC;
+ 	zspage->pool = pool;
+ 	zspage->class = class->index;
+diff --git a/net/appletalk/aarp.c b/net/appletalk/aarp.c
+index 9c787e2e4b173d..4744e3fd45447c 100644
+--- a/net/appletalk/aarp.c
++++ b/net/appletalk/aarp.c
+@@ -35,6 +35,7 @@
+ #include <linux/seq_file.h>
+ #include <linux/export.h>
+ #include <linux/etherdevice.h>
++#include <linux/refcount.h>
+ 
+ int sysctl_aarp_expiry_time = AARP_EXPIRY_TIME;
+ int sysctl_aarp_tick_time = AARP_TICK_TIME;
+@@ -44,6 +45,7 @@ int sysctl_aarp_resolve_time = AARP_RESOLVE_TIME;
+ /* Lists of aarp entries */
+ /**
+  *	struct aarp_entry - AARP entry
++ *	@refcnt: Reference count
+  *	@last_sent: Last time we xmitted the aarp request
+  *	@packet_queue: Queue of frames wait for resolution
+  *	@status: Used for proxy AARP
+@@ -55,6 +57,7 @@ int sysctl_aarp_resolve_time = AARP_RESOLVE_TIME;
+  *	@next: Next entry in chain
+  */
+ struct aarp_entry {
++	refcount_t			refcnt;
+ 	/* These first two are only used for unresolved entries */
+ 	unsigned long		last_sent;
+ 	struct sk_buff_head	packet_queue;
+@@ -79,6 +82,17 @@ static DEFINE_RWLOCK(aarp_lock);
+ /* Used to walk the list and purge/kick entries.  */
+ static struct timer_list aarp_timer;
+ 
++static inline void aarp_entry_get(struct aarp_entry *a)
++{
++	refcount_inc(&a->refcnt);
++}
++
++static inline void aarp_entry_put(struct aarp_entry *a)
++{
++	if (refcount_dec_and_test(&a->refcnt))
++		kfree(a);
++}
++
+ /*
+  *	Delete an aarp queue
+  *
+@@ -87,7 +101,7 @@ static struct timer_list aarp_timer;
+ static void __aarp_expire(struct aarp_entry *a)
+ {
+ 	skb_queue_purge(&a->packet_queue);
+-	kfree(a);
++	aarp_entry_put(a);
+ }
+ 
+ /*
+@@ -380,9 +394,11 @@ static void aarp_purge(void)
+ static struct aarp_entry *aarp_alloc(void)
+ {
+ 	struct aarp_entry *a = kmalloc(sizeof(*a), GFP_ATOMIC);
++	if (!a)
++		return NULL;
+ 
+-	if (a)
+-		skb_queue_head_init(&a->packet_queue);
++	refcount_set(&a->refcnt, 1);
++	skb_queue_head_init(&a->packet_queue);
+ 	return a;
+ }
+ 
+@@ -477,6 +493,7 @@ int aarp_proxy_probe_network(struct atalk_iface *atif, struct atalk_addr *sa)
+ 	entry->dev = atif->dev;
+ 
+ 	write_lock_bh(&aarp_lock);
++	aarp_entry_get(entry);
+ 
+ 	hash = sa->s_node % (AARP_HASH_SIZE - 1);
+ 	entry->next = proxies[hash];
+@@ -502,6 +519,7 @@ int aarp_proxy_probe_network(struct atalk_iface *atif, struct atalk_addr *sa)
+ 		retval = 1;
+ 	}
+ 
++	aarp_entry_put(entry);
+ 	write_unlock_bh(&aarp_lock);
+ out:
+ 	return retval;
+diff --git a/net/ipv4/xfrm4_input.c b/net/ipv4/xfrm4_input.c
+index 0d31a8c108d4f6..f28cfd88eaf593 100644
+--- a/net/ipv4/xfrm4_input.c
++++ b/net/ipv4/xfrm4_input.c
+@@ -202,6 +202,9 @@ struct sk_buff *xfrm4_gro_udp_encap_rcv(struct sock *sk, struct list_head *head,
+ 	if (len <= sizeof(struct ip_esp_hdr) || udpdata32[0] == 0)
+ 		goto out;
+ 
++	/* set the transport header to ESP */
++	skb_set_transport_header(skb, offset);
++
+ 	NAPI_GRO_CB(skb)->proto = IPPROTO_UDP;
+ 
+ 	pp = call_gro_receive(ops->callbacks.gro_receive, head, skb);
+diff --git a/net/ipv6/xfrm6_input.c b/net/ipv6/xfrm6_input.c
+index 841c81abaaf4ff..9005fc156a20e6 100644
+--- a/net/ipv6/xfrm6_input.c
++++ b/net/ipv6/xfrm6_input.c
+@@ -202,6 +202,9 @@ struct sk_buff *xfrm6_gro_udp_encap_rcv(struct sock *sk, struct list_head *head,
+ 	if (len <= sizeof(struct ip_esp_hdr) || udpdata32[0] == 0)
+ 		goto out;
+ 
++	/* set the transport header to ESP */
++	skb_set_transport_header(skb, offset);
++
+ 	NAPI_GRO_CB(skb)->proto = IPPROTO_UDP;
+ 
+ 	pp = call_gro_receive(ops->callbacks.gro_receive, head, skb);
+diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
+index 2b1b025c31a338..51cc2cfb409367 100644
+--- a/net/sched/sch_qfq.c
++++ b/net/sched/sch_qfq.c
+@@ -536,9 +536,6 @@ static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ 
+ static void qfq_destroy_class(struct Qdisc *sch, struct qfq_class *cl)
+ {
+-	struct qfq_sched *q = qdisc_priv(sch);
+-
+-	qfq_rm_from_agg(q, cl);
+ 	gen_kill_estimator(&cl->rate_est);
+ 	qdisc_put(cl->qdisc);
+ 	kfree(cl);
+@@ -559,10 +556,11 @@ static int qfq_delete_class(struct Qdisc *sch, unsigned long arg,
+ 
+ 	qdisc_purge_queue(cl->qdisc);
+ 	qdisc_class_hash_remove(&q->clhash, &cl->common);
+-	qfq_destroy_class(sch, cl);
++	qfq_rm_from_agg(q, cl);
+ 
+ 	sch_tree_unlock(sch);
+ 
++	qfq_destroy_class(sch, cl);
+ 	return 0;
+ }
+ 
+@@ -1503,6 +1501,7 @@ static void qfq_destroy_qdisc(struct Qdisc *sch)
+ 	for (i = 0; i < q->clhash.hashsize; i++) {
+ 		hlist_for_each_entry_safe(cl, next, &q->clhash.hash[i],
+ 					  common.hnode) {
++			qfq_rm_from_agg(q, cl);
+ 			qfq_destroy_class(sch, cl);
+ 		}
+ 	}
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index f46a9e5764f014..a2d3a5f3b4852c 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -305,7 +305,6 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
+ 		return -EINVAL;
+ 	}
+ 
+-	xfrm_set_type_offload(x);
+ 	if (!x->type_offload) {
+ 		NL_SET_ERR_MSG(extack, "Type doesn't support offload");
+ 		dev_put(dev);
+diff --git a/net/xfrm/xfrm_interface_core.c b/net/xfrm/xfrm_interface_core.c
+index 622445f041d320..fed96bedd54e3d 100644
+--- a/net/xfrm/xfrm_interface_core.c
++++ b/net/xfrm/xfrm_interface_core.c
+@@ -875,7 +875,7 @@ static int xfrmi_changelink(struct net_device *dev, struct nlattr *tb[],
+ 		return -EINVAL;
+ 	}
+ 
+-	if (p.collect_md) {
++	if (p.collect_md || xi->p.collect_md) {
+ 		NL_SET_ERR_MSG(extack, "collect_md can't be changed");
+ 		return -EINVAL;
+ 	}
+@@ -886,11 +886,6 @@ static int xfrmi_changelink(struct net_device *dev, struct nlattr *tb[],
+ 	} else {
+ 		if (xi->dev != dev)
+ 			return -EEXIST;
+-		if (xi->p.collect_md) {
+-			NL_SET_ERR_MSG(extack,
+-				       "device can't be changed to collect_md");
+-			return -EINVAL;
+-		}
+ 	}
+ 
+ 	return xfrmi_update(xi, &p);
+diff --git a/net/xfrm/xfrm_ipcomp.c b/net/xfrm/xfrm_ipcomp.c
+index 907c3ccb440dab..a38545413b8015 100644
+--- a/net/xfrm/xfrm_ipcomp.c
++++ b/net/xfrm/xfrm_ipcomp.c
+@@ -97,7 +97,7 @@ static int ipcomp_input_done2(struct sk_buff *skb, int err)
+ 	struct ip_comp_hdr *ipch = ip_comp_hdr(skb);
+ 	const int plen = skb->len;
+ 
+-	skb_reset_transport_header(skb);
++	skb->transport_header = skb->network_header + sizeof(*ipch);
+ 
+ 	return ipcomp_post_acomp(skb, err, 0) ?:
+ 	       skb->len < (plen + sizeof(ip_comp_hdr)) ? -EINVAL :
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 5ece039846e201..0cf516b4e6d929 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -424,11 +424,10 @@ void xfrm_unregister_type_offload(const struct xfrm_type_offload *type,
+ }
+ EXPORT_SYMBOL(xfrm_unregister_type_offload);
+ 
+-void xfrm_set_type_offload(struct xfrm_state *x)
++void xfrm_set_type_offload(struct xfrm_state *x, bool try_load)
+ {
+ 	const struct xfrm_type_offload *type = NULL;
+ 	struct xfrm_state_afinfo *afinfo;
+-	bool try_load = true;
+ 
+ retry:
+ 	afinfo = xfrm_state_get_afinfo(x->props.family);
+@@ -607,6 +606,7 @@ static void ___xfrm_state_destroy(struct xfrm_state *x)
+ 	kfree(x->coaddr);
+ 	kfree(x->replay_esn);
+ 	kfree(x->preplay_esn);
++	xfrm_unset_type_offload(x);
+ 	if (x->type) {
+ 		x->type->destructor(x);
+ 		xfrm_put_type(x->type);
+@@ -780,8 +780,6 @@ void xfrm_dev_state_free(struct xfrm_state *x)
+ 	struct xfrm_dev_offload *xso = &x->xso;
+ 	struct net_device *dev = READ_ONCE(xso->dev);
+ 
+-	xfrm_unset_type_offload(x);
+-
+ 	if (dev && dev->xfrmdev_ops) {
+ 		spin_lock_bh(&xfrm_state_dev_gc_lock);
+ 		if (!hlist_unhashed(&x->dev_gclist))
+@@ -1307,14 +1305,8 @@ static void xfrm_hash_grow_check(struct net *net, int have_hash_collision)
+ static void xfrm_state_look_at(struct xfrm_policy *pol, struct xfrm_state *x,
+ 			       const struct flowi *fl, unsigned short family,
+ 			       struct xfrm_state **best, int *acq_in_progress,
+-			       int *error)
++			       int *error, unsigned int pcpu_id)
+ {
+-	/* We need the cpu id just as a lookup key,
+-	 * we don't require it to be stable.
+-	 */
+-	unsigned int pcpu_id = get_cpu();
+-	put_cpu();
+-
+ 	/* Resolution logic:
+ 	 * 1. There is a valid state with matching selector. Done.
+ 	 * 2. Valid state with inappropriate selector. Skip.
+@@ -1381,14 +1373,15 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ 	/* We need the cpu id just as a lookup key,
+ 	 * we don't require it to be stable.
+ 	 */
+-	pcpu_id = get_cpu();
+-	put_cpu();
++	pcpu_id = raw_smp_processor_id();
+ 
+ 	to_put = NULL;
+ 
+ 	sequence = read_seqcount_begin(&net->xfrm.xfrm_state_hash_generation);
+ 
+ 	rcu_read_lock();
++	xfrm_hash_ptrs_get(net, &state_ptrs);
++
+ 	hlist_for_each_entry_rcu(x, &pol->state_cache_list, state_cache) {
+ 		if (x->props.family == encap_family &&
+ 		    x->props.reqid == tmpl->reqid &&
+@@ -1400,7 +1393,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ 		    tmpl->id.proto == x->id.proto &&
+ 		    (tmpl->id.spi == x->id.spi || !tmpl->id.spi))
+ 			xfrm_state_look_at(pol, x, fl, encap_family,
+-					   &best, &acquire_in_progress, &error);
++					   &best, &acquire_in_progress, &error, pcpu_id);
+ 	}
+ 
+ 	if (best)
+@@ -1417,7 +1410,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ 		    tmpl->id.proto == x->id.proto &&
+ 		    (tmpl->id.spi == x->id.spi || !tmpl->id.spi))
+ 			xfrm_state_look_at(pol, x, fl, family,
+-					   &best, &acquire_in_progress, &error);
++					   &best, &acquire_in_progress, &error, pcpu_id);
+ 	}
+ 
+ cached:
+@@ -1429,8 +1422,6 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ 	else if (acquire_in_progress) /* XXX: acquire_in_progress should not happen */
+ 		WARN_ON(1);
+ 
+-	xfrm_hash_ptrs_get(net, &state_ptrs);
+-
+ 	h = __xfrm_dst_hash(daddr, saddr, tmpl->reqid, encap_family, state_ptrs.hmask);
+ 	hlist_for_each_entry_rcu(x, state_ptrs.bydst + h, bydst) {
+ #ifdef CONFIG_XFRM_OFFLOAD
+@@ -1460,7 +1451,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ 		    tmpl->id.proto == x->id.proto &&
+ 		    (tmpl->id.spi == x->id.spi || !tmpl->id.spi))
+ 			xfrm_state_look_at(pol, x, fl, family,
+-					   &best, &acquire_in_progress, &error);
++					   &best, &acquire_in_progress, &error, pcpu_id);
+ 	}
+ 	if (best || acquire_in_progress)
+ 		goto found;
+@@ -1495,7 +1486,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ 		    tmpl->id.proto == x->id.proto &&
+ 		    (tmpl->id.spi == x->id.spi || !tmpl->id.spi))
+ 			xfrm_state_look_at(pol, x, fl, family,
+-					   &best, &acquire_in_progress, &error);
++					   &best, &acquire_in_progress, &error, pcpu_id);
+ 	}
+ 
+ found:
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 614b58cb26ab71..d17ea437a1587d 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -977,6 +977,7 @@ static struct xfrm_state *xfrm_state_construct(struct net *net,
+ 	/* override default values from above */
+ 	xfrm_update_ae_params(x, attrs, 0);
+ 
++	xfrm_set_type_offload(x, attrs[XFRMA_OFFLOAD_DEV]);
+ 	/* configure the hardware if offload is requested */
+ 	if (attrs[XFRMA_OFFLOAD_DEV]) {
+ 		err = xfrm_dev_state_add(net, x,
+diff --git a/sound/pci/hda/hda_tegra.c b/sound/pci/hda/hda_tegra.c
+index a590d431c5ff6e..8c0dd439f5a55c 100644
+--- a/sound/pci/hda/hda_tegra.c
++++ b/sound/pci/hda/hda_tegra.c
+@@ -72,6 +72,10 @@
+ struct hda_tegra_soc {
+ 	bool has_hda2codec_2x_reset;
+ 	bool has_hda2hdmi;
++	bool has_hda2codec_2x;
++	bool input_stream;
++	bool always_on;
++	bool requires_init;
+ };
+ 
+ struct hda_tegra {
+@@ -187,7 +191,9 @@ static int hda_tegra_runtime_resume(struct device *dev)
+ 	if (rc != 0)
+ 		return rc;
+ 	if (chip->running) {
+-		hda_tegra_init(hda);
++		if (hda->soc->requires_init)
++			hda_tegra_init(hda);
++
+ 		azx_init_chip(chip, 1);
+ 		/* disable controller wake up event*/
+ 		azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) &
+@@ -250,7 +256,8 @@ static int hda_tegra_init_chip(struct azx *chip, struct platform_device *pdev)
+ 	bus->remap_addr = hda->regs + HDA_BAR0;
+ 	bus->addr = res->start + HDA_BAR0;
+ 
+-	hda_tegra_init(hda);
++	if (hda->soc->requires_init)
++		hda_tegra_init(hda);
+ 
+ 	return 0;
+ }
+@@ -323,7 +330,7 @@ static int hda_tegra_first_init(struct azx *chip, struct platform_device *pdev)
+ 	 * starts with offset 0 which is wrong as HW register for output stream
+ 	 * offset starts with 4.
+ 	 */
+-	if (of_device_is_compatible(np, "nvidia,tegra234-hda"))
++	if (!hda->soc->input_stream)
+ 		chip->capture_streams = 4;
+ 
+ 	chip->playback_streams = (gcap >> 12) & 0x0f;
+@@ -419,7 +426,6 @@ static int hda_tegra_create(struct snd_card *card,
+ 	chip->driver_caps = driver_caps;
+ 	chip->driver_type = driver_caps & 0xff;
+ 	chip->dev_index = 0;
+-	chip->jackpoll_interval = msecs_to_jiffies(5000);
+ 	INIT_LIST_HEAD(&chip->pcm_list);
+ 
+ 	chip->codec_probe_mask = -1;
+@@ -436,7 +442,16 @@ static int hda_tegra_create(struct snd_card *card,
+ 	chip->bus.core.sync_write = 0;
+ 	chip->bus.core.needs_damn_long_delay = 1;
+ 	chip->bus.core.aligned_mmio = 1;
+-	chip->bus.jackpoll_in_suspend = 1;
++
++	/*
++	 * HDA power domain and clocks are always on for Tegra264 and
++	 * the jack detection logic always works, so there is no need to
++	 * run the jack polling mechanism.
++	 */
++	if (!hda->soc->always_on) {
++		chip->jackpoll_interval = msecs_to_jiffies(5000);
++		chip->bus.jackpoll_in_suspend = 1;
++	}
+ 
+ 	err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, chip, &ops);
+ 	if (err < 0) {
+@@ -450,22 +465,44 @@ static int hda_tegra_create(struct snd_card *card,
+ static const struct hda_tegra_soc tegra30_data = {
+ 	.has_hda2codec_2x_reset = true,
+ 	.has_hda2hdmi = true,
++	.has_hda2codec_2x = true,
++	.input_stream = true,
++	.always_on = false,
++	.requires_init = true,
+ };
+ 
+ static const struct hda_tegra_soc tegra194_data = {
+ 	.has_hda2codec_2x_reset = false,
+ 	.has_hda2hdmi = true,
++	.has_hda2codec_2x = true,
++	.input_stream = true,
++	.always_on = false,
++	.requires_init = true,
+ };
+ 
+ static const struct hda_tegra_soc tegra234_data = {
+ 	.has_hda2codec_2x_reset = true,
+ 	.has_hda2hdmi = false,
++	.has_hda2codec_2x = true,
++	.input_stream = false,
++	.always_on = false,
++	.requires_init = true,
++};
++
++static const struct hda_tegra_soc tegra264_data = {
++	.has_hda2codec_2x_reset = true,
++	.has_hda2hdmi = false,
++	.has_hda2codec_2x = false,
++	.input_stream = false,
++	.always_on = true,
++	.requires_init = false,
+ };
+ 
+ static const struct of_device_id hda_tegra_match[] = {
+ 	{ .compatible = "nvidia,tegra30-hda", .data = &tegra30_data },
+ 	{ .compatible = "nvidia,tegra194-hda", .data = &tegra194_data },
+ 	{ .compatible = "nvidia,tegra234-hda", .data = &tegra234_data },
++	{ .compatible = "nvidia,tegra264-hda", .data = &tegra264_data },
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(of, hda_tegra_match);
+@@ -520,7 +557,9 @@ static int hda_tegra_probe(struct platform_device *pdev)
+ 	hda->clocks[hda->nclocks++].id = "hda";
+ 	if (hda->soc->has_hda2hdmi)
+ 		hda->clocks[hda->nclocks++].id = "hda2hdmi";
+-	hda->clocks[hda->nclocks++].id = "hda2codec_2x";
++
++	if (hda->soc->has_hda2codec_2x)
++		hda->clocks[hda->nclocks++].id = "hda2codec_2x";
+ 
+ 	err = devm_clk_bulk_get(&pdev->dev, hda->nclocks, hda->clocks);
+ 	if (err < 0)
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 7167989a8d86a8..0413e831d6cebd 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -4551,6 +4551,9 @@ HDA_CODEC_ENTRY(0x10de002e, "Tegra186 HDMI/DP1", patch_tegra_hdmi),
+ HDA_CODEC_ENTRY(0x10de002f, "Tegra194 HDMI/DP2", patch_tegra_hdmi),
+ HDA_CODEC_ENTRY(0x10de0030, "Tegra194 HDMI/DP3", patch_tegra_hdmi),
+ HDA_CODEC_ENTRY(0x10de0031, "Tegra234 HDMI/DP", patch_tegra234_hdmi),
++HDA_CODEC_ENTRY(0x10de0033, "SoC 33 HDMI/DP",	patch_tegra234_hdmi),
++HDA_CODEC_ENTRY(0x10de0034, "Tegra264 HDMI/DP",	patch_tegra234_hdmi),
++HDA_CODEC_ENTRY(0x10de0035, "SoC 35 HDMI/DP",	patch_tegra234_hdmi),
+ HDA_CODEC_ENTRY(0x10de0040, "GPU 40 HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de0041, "GPU 41 HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de0042, "GPU 42 HDMI/DP",	patch_nvhdmi),
+@@ -4589,15 +4592,32 @@ HDA_CODEC_ENTRY(0x10de0097, "GPU 97 HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de0098, "GPU 98 HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de0099, "GPU 99 HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de009a, "GPU 9a HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de009b, "GPU 9b HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de009c, "GPU 9c HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de009d, "GPU 9d HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de009e, "GPU 9e HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de009f, "GPU 9f HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a0, "GPU a0 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a1, "GPU a1 HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a3, "GPU a3 HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a4, "GPU a4 HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a5, "GPU a5 HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a6, "GPU a6 HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a7, "GPU a7 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a8, "GPU a8 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a9, "GPU a9 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00aa, "GPU aa HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00ab, "GPU ab HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00ad, "GPU ad HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00ae, "GPU ae HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00af, "GPU af HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00b0, "GPU b0 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00b1, "GPU b1 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00c0, "GPU c0 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00c1, "GPU c1 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00c3, "GPU c3 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00c4, "GPU c4 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00c5, "GPU c5 HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de8001, "MCP73 HDMI",	patch_nvhdmi_2ch),
+ HDA_CODEC_ENTRY(0x10de8067, "MCP67/68 HDMI",	patch_nvhdmi_2ch),
+ HDA_CODEC_ENTRY(0x67663d82, "Arise 82 HDMI/DP",	patch_gf_hdmi),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 5a6d0424bfedce..3c93d213571777 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4753,7 +4753,7 @@ static void alc245_fixup_hp_mute_led_v1_coefbit(struct hda_codec *codec,
+ 	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+ 		spec->mute_led_polarity = 0;
+ 		spec->mute_led_coef.idx = 0x0b;
+-		spec->mute_led_coef.mask = 1 << 3;
++		spec->mute_led_coef.mask = 3 << 2;
+ 		spec->mute_led_coef.on = 1 << 3;
+ 		spec->mute_led_coef.off = 0;
+ 		snd_hda_gen_add_mute_led_cdev(codec, coef_mute_led_set);
+@@ -10668,6 +10668,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8788, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87b7, "HP Laptop 14-fq0xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x87cc, "HP Pavilion 15-eg0xxx", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87d3, "HP Laptop 15-gw0xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x87df, "HP ProBook 430 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+@@ -10746,6 +10747,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8a2e, "HP Envy 16", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8a30, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8a31, "HP Envy 15", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x103c, 0x8a4f, "HP Victus 15-fa0xxx (MB 8A4F)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8a6e, "HP EDNA 360", ALC287_FIXUP_CS35L41_I2C_4),
+ 	SND_PCI_QUIRK(0x103c, 0x8a74, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8a78, "HP Dev One", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST),
+diff --git a/sound/soc/mediatek/common/mtk-soundcard-driver.c b/sound/soc/mediatek/common/mtk-soundcard-driver.c
+index 713a368f79cf04..95a083939f3e22 100644
+--- a/sound/soc/mediatek/common/mtk-soundcard-driver.c
++++ b/sound/soc/mediatek/common/mtk-soundcard-driver.c
+@@ -262,9 +262,13 @@ int mtk_soundcard_common_probe(struct platform_device *pdev)
+ 				soc_card_data->accdet = accdet_comp;
+ 			else
+ 				dev_err(&pdev->dev, "No sound component found from mediatek,accdet property\n");
++
++			put_device(&accdet_pdev->dev);
+ 		} else {
+ 			dev_err(&pdev->dev, "No device found from mediatek,accdet property\n");
+ 		}
++
++		of_node_put(accdet_node);
+ 	}
+ 
+ 	platform_node = of_parse_phandle(pdev->dev.of_node, "mediatek,platform", 0);
+diff --git a/sound/soc/mediatek/mt8365/mt8365-dai-i2s.c b/sound/soc/mediatek/mt8365/mt8365-dai-i2s.c
+index cae51756cead81..cb9beb172ed598 100644
+--- a/sound/soc/mediatek/mt8365/mt8365-dai-i2s.c
++++ b/sound/soc/mediatek/mt8365/mt8365-dai-i2s.c
+@@ -812,11 +812,10 @@ static const struct snd_soc_dapm_route mtk_dai_i2s_routes[] = {
+ static int mt8365_dai_i2s_set_priv(struct mtk_base_afe *afe)
+ {
+ 	int i, ret;
+-	struct mt8365_afe_private *afe_priv = afe->platform_priv;
+ 
+ 	for (i = 0; i < DAI_I2S_NUM; i++) {
+ 		ret = mt8365_dai_set_priv(afe, mt8365_i2s_priv[i].id,
+-					  sizeof(*afe_priv),
++					  sizeof(mt8365_i2s_priv[i]),
+ 					  &mt8365_i2s_priv[i]);
+ 		if (ret)
+ 			return ret;
+diff --git a/tools/hv/hv_fcopy_uio_daemon.c b/tools/hv/hv_fcopy_uio_daemon.c
+index 7d9bcb066d3fb4..92e8307b2a4678 100644
+--- a/tools/hv/hv_fcopy_uio_daemon.c
++++ b/tools/hv/hv_fcopy_uio_daemon.c
+@@ -118,8 +118,11 @@ static int hv_fcopy_create_file(char *file_name, char *path_name, __u32 flags)
+ 
+ 	filesize = 0;
+ 	p = path_name;
+-	snprintf(target_fname, sizeof(target_fname), "%s/%s",
+-		 path_name, file_name);
++	if (snprintf(target_fname, sizeof(target_fname), "%s/%s",
++		     path_name, file_name) >= sizeof(target_fname)) {
++		syslog(LOG_ERR, "target file name is too long: %s/%s", path_name, file_name);
++		goto done;
++	}
+ 
+ 	/*
+ 	 * Check to see if the path is already in place; if not,
+@@ -326,7 +329,7 @@ static void wcstoutf8(char *dest, const __u16 *src, size_t dest_size)
+ {
+ 	size_t len = 0;
+ 
+-	while (len < dest_size) {
++	while (len < dest_size && *src) {
+ 		if (src[len] < 0x80)
+ 			dest[len++] = (char)(*src++);
+ 		else
+@@ -338,27 +341,15 @@ static void wcstoutf8(char *dest, const __u16 *src, size_t dest_size)
+ 
+ static int hv_fcopy_start(struct hv_start_fcopy *smsg_in)
+ {
+-	setlocale(LC_ALL, "en_US.utf8");
+-	size_t file_size, path_size;
+-	char *file_name, *path_name;
+-	char *in_file_name = (char *)smsg_in->file_name;
+-	char *in_path_name = (char *)smsg_in->path_name;
+-
+-	file_size = wcstombs(NULL, (const wchar_t *restrict)in_file_name, 0) + 1;
+-	path_size = wcstombs(NULL, (const wchar_t *restrict)in_path_name, 0) + 1;
+-
+-	file_name = (char *)malloc(file_size * sizeof(char));
+-	path_name = (char *)malloc(path_size * sizeof(char));
+-
+-	if (!file_name || !path_name) {
+-		free(file_name);
+-		free(path_name);
+-		syslog(LOG_ERR, "Can't allocate memory for file name and/or path name");
+-		return HV_E_FAIL;
+-	}
++	/*
++	 * file_name and path_name should have the same length as the
++	 * corresponding members of struct hv_start_fcopy.
++	 */
++	char file_name[W_MAX_PATH], path_name[W_MAX_PATH];
+ 
+-	wcstoutf8(file_name, (__u16 *)in_file_name, file_size);
+-	wcstoutf8(path_name, (__u16 *)in_path_name, path_size);
++	setlocale(LC_ALL, "en_US.utf8");
++	wcstoutf8(file_name, smsg_in->file_name, W_MAX_PATH - 1);
++	wcstoutf8(path_name, smsg_in->path_name, W_MAX_PATH - 1);
+ 
+ 	return hv_fcopy_create_file(file_name, path_name, smsg_in->copy_flags);
+ }
+diff --git a/tools/testing/selftests/bpf/progs/verifier_precision.c b/tools/testing/selftests/bpf/progs/verifier_precision.c
+index 6662d4b39969a0..c65d0378282231 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_precision.c
++++ b/tools/testing/selftests/bpf/progs/verifier_precision.c
+@@ -179,4 +179,57 @@ __naked int state_loop_first_last_equal(void)
+ 	);
+ }
+ 
++__used __naked static void __bpf_cond_op_r10(void)
++{
++	asm volatile (
++	"r2 = 2314885393468386424 ll;"
++	"goto +0;"
++	"if r2 <= r10 goto +3;"
++	"if r1 >= -1835016 goto +0;"
++	"if r2 <= 8 goto +0;"
++	"if r3 <= 0 goto +0;"
++	"exit;"
++	::: __clobber_all);
++}
++
++SEC("?raw_tp")
++__success __log_level(2)
++__msg("8: (bd) if r2 <= r10 goto pc+3")
++__msg("9: (35) if r1 >= 0xffe3fff8 goto pc+0")
++__msg("10: (b5) if r2 <= 0x8 goto pc+0")
++__msg("mark_precise: frame1: last_idx 10 first_idx 0 subseq_idx -1")
++__msg("mark_precise: frame1: regs=r2 stack= before 9: (35) if r1 >= 0xffe3fff8 goto pc+0")
++__msg("mark_precise: frame1: regs=r2 stack= before 8: (bd) if r2 <= r10 goto pc+3")
++__msg("mark_precise: frame1: regs=r2 stack= before 7: (05) goto pc+0")
++__naked void bpf_cond_op_r10(void)
++{
++	asm volatile (
++	"r3 = 0 ll;"
++	"call __bpf_cond_op_r10;"
++	"r0 = 0;"
++	"exit;"
++	::: __clobber_all);
++}
++
++SEC("?raw_tp")
++__success __log_level(2)
++__msg("3: (bf) r3 = r10")
++__msg("4: (bd) if r3 <= r2 goto pc+1")
++__msg("5: (b5) if r2 <= 0x8 goto pc+2")
++__msg("mark_precise: frame0: last_idx 5 first_idx 0 subseq_idx -1")
++__msg("mark_precise: frame0: regs=r2 stack= before 4: (bd) if r3 <= r2 goto pc+1")
++__msg("mark_precise: frame0: regs=r2 stack= before 3: (bf) r3 = r10")
++__naked void bpf_cond_op_not_r10(void)
++{
++	asm volatile (
++	"r0 = 0;"
++	"r2 = 2314885393468386424 ll;"
++	"r3 = r10;"
++	"if r3 <= r2 goto +1;"
++	"if r2 <= 8 goto +2;"
++	"r0 = 2 ll;"
++	"exit;"
++	::: __clobber_all);
++}
++
+ char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/drivers/net/lib/py/load.py b/tools/testing/selftests/drivers/net/lib/py/load.py
+index da5af2c680faaa..1a9d57c3efa3c4 100644
+--- a/tools/testing/selftests/drivers/net/lib/py/load.py
++++ b/tools/testing/selftests/drivers/net/lib/py/load.py
+@@ -1,5 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ 
++import re
+ import time
+ 
+ from lib.py import ksft_pr, cmd, ip, rand_port, wait_port_listen, bkg
+@@ -10,12 +11,11 @@ class GenerateTraffic:
+ 
+         self.env = env
+ 
+-        if port is None:
+-            port = rand_port()
+-        self._iperf_server = cmd(f"iperf3 -s -1 -p {port}", background=True)
+-        wait_port_listen(port)
++        self.port = rand_port() if port is None else port
++        self._iperf_server = cmd(f"iperf3 -s -1 -p {self.port}", background=True)
++        wait_port_listen(self.port)
+         time.sleep(0.1)
+-        self._iperf_client = cmd(f"iperf3 -c {env.addr} -P 16 -p {port} -t 86400",
++        self._iperf_client = cmd(f"iperf3 -c {env.addr} -P 16 -p {self.port} -t 86400",
+                                  background=True, host=env.remote)
+ 
+         # Wait for traffic to ramp up
+@@ -74,3 +74,16 @@ class GenerateTraffic:
+             ksft_pr(">> Server:")
+             ksft_pr(self._iperf_server.stdout)
+             ksft_pr(self._iperf_server.stderr)
++        self._wait_client_stopped()
++
++    def _wait_client_stopped(self, sleep=0.005, timeout=5):
++        end = time.monotonic() + timeout
++
++        live_port_pattern = re.compile(fr":{self.port:04X} 0[^6] ")
++
++        while time.monotonic() < end:
++            data = cmd("cat /proc/net/tcp*", host=self.env.remote).stdout
++            if not live_port_pattern.search(data):
++                return
++            time.sleep(sleep)
++        raise Exception(f"Waiting for client to stop timed out after {timeout}s")
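
For reference, the check the new _wait_client_stopped() helper performs can
be read directly off /proc/net/tcp: ports and the state column are printed
in hex, and state 06 is TIME_WAIT, so a line mentioning the port with any
other state means a socket is still live. A rough C equivalent of the regex
test, with an illustrative helper name (the selftest wraps the same check in
a monotonic-clock timeout loop before declaring failure):

    #include <stdio.h>
    #include <string.h>

    /* Illustrative: return 1 if /proc/net/tcp still shows a socket on the
     * given port in a state other than TIME_WAIT (state 06). */
    static int port_still_live(unsigned int port)
    {
            char needle[16], line[512];
            FILE *f = fopen("/proc/net/tcp", "r");
            int live = 0;

            if (!f)
                    return 0;
            snprintf(needle, sizeof(needle), ":%04X ", port);
            while (fgets(line, sizeof(line), f)) {
                    char *p = strstr(line, needle);

                    /* Mirror the selftest regex ":PORT 0[^6] ". */
                    if (p && p[6] == '0' && p[7] != '6' && p[8] == ' ') {
                            live = 1;
                            break;
                    }
            }
            fclose(f);
            return live;
    }

    int main(void)
    {
            return port_still_live(8080) ? 0 : 1;
    }
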
+diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c
+index aa7400ed0e9946..f0d9c035641dc9 100644
+--- a/tools/testing/selftests/mm/split_huge_page_test.c
++++ b/tools/testing/selftests/mm/split_huge_page_test.c
+@@ -31,6 +31,7 @@ uint64_t pmd_pagesize;
+ #define INPUT_MAX 80
+ 
+ #define PID_FMT "%d,0x%lx,0x%lx,%d"
++#define PID_FMT_OFFSET "%d,0x%lx,0x%lx,%d,%d"
+ #define PATH_FMT "%s,0x%lx,0x%lx,%d"
+ 
+ #define PFN_MASK     ((1UL<<55)-1)
+@@ -483,7 +484,7 @@ void split_thp_in_pagecache_to_order_at(size_t fd_size, const char *fs_loc,
+ 		write_debugfs(PID_FMT, getpid(), (uint64_t)addr,
+ 			      (uint64_t)addr + fd_size, order);
+ 	else
+-		write_debugfs(PID_FMT, getpid(), (uint64_t)addr,
++		write_debugfs(PID_FMT_OFFSET, getpid(), (uint64_t)addr,
+ 			      (uint64_t)addr + fd_size, order, offset);
+ 
+ 	for (i = 0; i < fd_size; i++)
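
The hunk above fixes a silent format/argument mismatch: the offset variant
passes five arguments, so it needs a format string with five conversions.
A sketch of how such variadic helpers can be made to fail at compile time
instead, assuming a write_debugfs()-style wrapper (the prototype here is
hypothetical, not the selftest's):

    #include <stdarg.h>
    #include <stdio.h>

    /* With the format attribute, gcc/clang warn when the number or types
     * of arguments do not match the conversions in fmt. */
    __attribute__((format(printf, 1, 2)))
    static void write_debugfs(const char *fmt, ...)
    {
            va_list ap;

            va_start(ap, fmt);
            vprintf(fmt, ap);    /* stand-in for the debugfs write */
            va_end(ap);
    }

    int main(void)
    {
            /* Five arguments therefore need five conversions: */
            write_debugfs("%d,0x%lx,0x%lx,%d,%d\n",
                          1, 0x1000UL, 0x2000UL, 9, 4);
            return 0;
    }
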
+diff --git a/tools/testing/selftests/net/mptcp/Makefile b/tools/testing/selftests/net/mptcp/Makefile
+index 340e1a777e16a8..567e6677c9d409 100644
+--- a/tools/testing/selftests/net/mptcp/Makefile
++++ b/tools/testing/selftests/net/mptcp/Makefile
+@@ -4,7 +4,8 @@ top_srcdir = ../../../../..
+ 
+ CFLAGS += -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
+ 
+-TEST_PROGS := mptcp_connect.sh pm_netlink.sh mptcp_join.sh diag.sh \
++TEST_PROGS := mptcp_connect.sh mptcp_connect_mmap.sh mptcp_connect_sendfile.sh \
++	      mptcp_connect_checksum.sh pm_netlink.sh mptcp_join.sh diag.sh \
+ 	      simult_flows.sh mptcp_sockopt.sh userspace_pm.sh
+ 
+ TEST_GEN_FILES = mptcp_connect pm_nl_ctl mptcp_sockopt mptcp_inq mptcp_diag
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect_checksum.sh b/tools/testing/selftests/net/mptcp/mptcp_connect_checksum.sh
+new file mode 100644
+index 00000000000000..ce93ec2f107fba
+--- /dev/null
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect_checksum.sh
+@@ -0,0 +1,5 @@
++#!/bin/bash
++# SPDX-License-Identifier: GPL-2.0
++
++MPTCP_LIB_KSFT_TEST="$(basename "${0}" .sh)" \
++	"$(dirname "${0}")/mptcp_connect.sh" -C "${@}"
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect_mmap.sh b/tools/testing/selftests/net/mptcp/mptcp_connect_mmap.sh
+new file mode 100644
+index 00000000000000..5dd30f9394af6a
+--- /dev/null
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect_mmap.sh
+@@ -0,0 +1,5 @@
++#!/bin/bash
++# SPDX-License-Identifier: GPL-2.0
++
++MPTCP_LIB_KSFT_TEST="$(basename "${0}" .sh)" \
++	"$(dirname "${0}")/mptcp_connect.sh" -m mmap "${@}"
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect_sendfile.sh b/tools/testing/selftests/net/mptcp/mptcp_connect_sendfile.sh
+new file mode 100644
+index 00000000000000..1d16fb1cc9bb6d
+--- /dev/null
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect_sendfile.sh
+@@ -0,0 +1,5 @@
++#!/bin/bash
++# SPDX-License-Identifier: GPL-2.0
++
++MPTCP_LIB_KSFT_TEST="$(basename "${0}" .sh)" \
++	"$(dirname "${0}")/mptcp_connect.sh" -m sendfile "${@}"



* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-08-04  5:58 Arisu Tachibana
  0 siblings, 0 replies; 19+ messages in thread
From: Arisu Tachibana @ 2025-08-04  5:58 UTC (permalink / raw
  To: gentoo-commits

commit:     484e44b3572788de701c8a2138df1cc4dfc643c9
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Mon Aug  4 05:12:30 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Mon Aug  4 05:12:30 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=484e44b3

Add patch 1900 to fix log tree replay failure on btrfs

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                  |   4 +
 1900_btrfs_fix_log_tree_replay_failure.patch | 143 +++++++++++++++++++++++++++
 2 files changed, 147 insertions(+)

diff --git a/0000_README b/0000_README
index 1a9c19eb..0f6df62e 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch:  1730_parisc-Disable-prctl.patch
 From:   https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
 Desc:   prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
 
+Patch:  1900_btrfs_fix_log_tree_replay_failure.patch
+From:   https://gitlab.com/cki-project/kernel-ark/-/commit/e6c71b29fab08fd0ab55d2f83c4539d68d543895
+Desc:   btrfs: fix log tree replay failure due to file with 0 links and extents
+
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1900_btrfs_fix_log_tree_replay_failure.patch b/1900_btrfs_fix_log_tree_replay_failure.patch
new file mode 100644
index 00000000..335bb7f2
--- /dev/null
+++ b/1900_btrfs_fix_log_tree_replay_failure.patch
@@ -0,0 +1,143 @@
+From e6c71b29fab08fd0ab55d2f83c4539d68d543895 Mon Sep 17 00:00:00 2001
+From: Filipe Manana <fdmanana@suse.com>
+Date: Wed, 30 Jul 2025 19:18:37 +0100
+Subject: [PATCH] btrfs: fix log tree replay failure due to file with 0 links
+ and extents
+
+If we log a new inode (not persisted in a past transaction) that has 0
+links and extents, then log another inode with an higher inode number, we
+end up with failing to replay the log tree with -EINVAL. The steps for
+this are:
+
+1) create new file A
+2) write some data to file A
+3) open an fd on file A
+4) unlink file A
+5) fsync file A using the previously open fd
+6) create file B (has higher inode number than file A)
+7) fsync file B
+8) power fail before current transaction commits
+
+Now when attempting to mount the fs, the log replay will fail with
+-ENOENT at replay_one_extent() when attempting to replay the first
+extent of file A. The failure comes when trying to open the inode for
+file A in the subvolume tree, since it doesn't exist.
+
+Before commit 5f61b961599a ("btrfs: fix inode lookup error handling
+during log replay"), the returned error was -EIO instead of -ENOENT,
+since we converted any errors when attempting to read an inode during
+log replay to -EIO.
+
+The reason for this is that the log replay procedure fails to ignore
+the current inode when we are at the LOG_WALK_REPLAY_ALL stage, our
+current inode has 0 links, and the last inode we processed in the
+previous stage has a non-zero link count. In other words, the issue is
+that at replay_one_extent() we only update wc->ignore_cur_inode if the
+current replay stage is LOG_WALK_REPLAY_INODES.
+
+Fix this by updating wc->ignore_cur_inode whenever we find an inode item
+regardless of the current replay stage. This is a simple solution and easy
+to backport, but later we can pursue alternatives, such as avoiding logging
+extents or inode items other than the inode item itself for inodes with a
+link count of 0.
+
+The problem with the wc->ignore_cur_inode logic has been around since
+commit f2d72f42d5fa ("Btrfs: fix warning when replaying log after fsync
+of a tmpfile"), but it only became common to hit after the more recent
+commit 5e85262e542d ("btrfs: fix fsync of files with no hard links not
+persisting deletion"), because we stopped skipping inodes with a link
+count of 0 when logging, while before the problem would only be triggered
+if trying to replay a log tree created with an older kernel which has a
+logged inode with 0 links.
+
+A test case for fstests will be submitted soon.
+
+Reported-by: Peter Jung <ptr1337@cachyos.org>
+Link: https://lore.kernel.org/linux-btrfs/fce139db-4458-4788-bb97-c29acf6cb1df@cachyos.org/
+Reported-by: burneddi <burneddi@protonmail.com>
+Link: https://lore.kernel.org/linux-btrfs/lh4W-Lwc0Mbk-QvBhhQyZxf6VbM3E8VtIvU3fPIQgweP_Q1n7wtlUZQc33sYlCKYd-o6rryJQfhHaNAOWWRKxpAXhM8NZPojzsJPyHMf2qY=@protonmail.com/#t
+Reported-by: Russell Haley <yumpusamongus@gmail.com>
+Link: https://lore.kernel.org/linux-btrfs/598ecc75-eb80-41b3-83c2-f2317fbb9864@gmail.com/
+Fixes: f2d72f42d5fa ("Btrfs: fix warning when replaying log after fsync of a tmpfile")
+Reviewed-by: Boris Burkov <boris@bur.io>
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+---
+ fs/btrfs/tree-log.c | 45 +++++++++++++++++++++++++++++----------------
+ 1 file changed, 29 insertions(+), 16 deletions(-)
+
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index e05140ce95be9..2fb9e7bfc9077 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -321,8 +321,7 @@ struct walk_control {
+ 
+ 	/*
+ 	 * Ignore any items from the inode currently being processed. Needs
+-	 * to be set every time we find a BTRFS_INODE_ITEM_KEY and we are in
+-	 * the LOG_WALK_REPLAY_INODES stage.
++	 * to be set every time we find a BTRFS_INODE_ITEM_KEY.
+ 	 */
+ 	bool ignore_cur_inode;
+ 
+@@ -2410,23 +2409,30 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 
+ 	nritems = btrfs_header_nritems(eb);
+ 	for (i = 0; i < nritems; i++) {
+-		btrfs_item_key_to_cpu(eb, &key, i);
++		struct btrfs_inode_item *inode_item;
+ 
+-		/* inode keys are done during the first stage */
+-		if (key.type == BTRFS_INODE_ITEM_KEY &&
+-		    wc->stage == LOG_WALK_REPLAY_INODES) {
+-			struct btrfs_inode_item *inode_item;
+-			u32 mode;
++		btrfs_item_key_to_cpu(eb, &key, i);
+ 
+-			inode_item = btrfs_item_ptr(eb, i,
+-					    struct btrfs_inode_item);
++		if (key.type == BTRFS_INODE_ITEM_KEY) {
++			inode_item = btrfs_item_ptr(eb, i, struct btrfs_inode_item);
+ 			/*
+-			 * If we have a tmpfile (O_TMPFILE) that got fsync'ed
+-			 * and never got linked before the fsync, skip it, as
+-			 * replaying it is pointless since it would be deleted
+-			 * later. We skip logging tmpfiles, but it's always
+-			 * possible we are replaying a log created with a kernel
+-			 * that used to log tmpfiles.
++			 * An inode with no links is either:
++			 *
++			 * 1) A tmpfile (O_TMPFILE) that got fsync'ed and never
++			 *    got linked before the fsync, skip it, as replaying
++			 *    it is pointless since it would be deleted later.
++			 *    We skip logging tmpfiles, but it's always possible
++			 *    we are replaying a log created with a kernel that
++			 *    used to log tmpfiles;
++			 *
++			 * 2) A non-tmpfile which got its last link deleted
++			 *    while holding an open fd on it and later got
++			 *    fsynced through that fd. We always log the
++			 *    parent inodes when inode->last_unlink_trans is
++			 *    set to the current transaction, so ignore all the
++			 *    inode items for this inode. We will delete the
++			 *    inode when processing the parent directory with
++			 *    replay_dir_deletes().
+ 			 */
+ 			if (btrfs_inode_nlink(eb, inode_item) == 0) {
+ 				wc->ignore_cur_inode = true;
+@@ -2434,6 +2440,13 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 			} else {
+ 				wc->ignore_cur_inode = false;
+ 			}
++		}
++
++		/* Inode keys are done during the first stage. */
++		if (key.type == BTRFS_INODE_ITEM_KEY &&
++		    wc->stage == LOG_WALK_REPLAY_INODES) {
++			 u32 mode;
++
+ 			ret = replay_xattr_deletes(wc->trans, root, log,
+ 						   path, key.objectid);
+ 			if (ret)
+-- 
+GitLab
+
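
The eight reproducer steps in the commit message above map almost directly
onto syscalls; a hedged sketch (file names are arbitrary, error handling is
minimal, and the power failure in step 8 obviously cannot be scripted):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[4096] = { 0 };
            int fd_a, fd_b;

            fd_a = open("A", O_CREAT | O_RDWR, 0644);  /* steps 1 and 3 */
            if (write(fd_a, buf, sizeof(buf)) < 0)     /* step 2 */
                    return 1;
            unlink("A");                               /* step 4: link count drops to 0 */
            fsync(fd_a);                               /* step 5: inode A gets logged */

            fd_b = open("B", O_CREAT | O_RDWR, 0644);  /* step 6: higher inode number */
            fsync(fd_b);                               /* step 7: inode B gets logged */

            /* Step 8: power fail here; on the next mount, log replay reaches
             * the extents of A, whose inode no longer exists in the
             * subvolume tree, and fails as described above. */
            return 0;
    }
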



* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-08-16  3:10 Arisu Tachibana
  0 siblings, 0 replies; 19+ messages in thread
From: Arisu Tachibana @ 2025-08-16  3:10 UTC (permalink / raw
  To: gentoo-commits

commit:     44da50579594d344dda643238b32e4026c2cb956
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 16 03:09:52 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Aug 16 03:09:52 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=44da5057

Linux patch 6.15.10

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README              |     4 +
 1009_linux-6.15.10.patch | 18585 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 18589 insertions(+)

diff --git a/0000_README b/0000_README
index 0f6df62e..c97b9061 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1008_linux-6.15.9.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.15.9
 
+Patch:  1009_linux-6.15.10.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.15.10
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1009_linux-6.15.10.patch b/1009_linux-6.15.10.patch
new file mode 100644
index 00000000..357cf3d3
--- /dev/null
+++ b/1009_linux-6.15.10.patch
@@ -0,0 +1,18585 @@
+diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
+index e15c4275862a72..edfd30b198f79c 100644
+--- a/Documentation/filesystems/f2fs.rst
++++ b/Documentation/filesystems/f2fs.rst
+@@ -236,9 +236,9 @@ usrjquota=<file>	 Appoint specified file and type during mount, so that quota
+ grpjquota=<file>	 information can be properly updated during recovery flow,
+ prjjquota=<file>	 <quota file>: must be in root directory;
+ jqfmt=<quota type>	 <quota type>: [vfsold,vfsv0,vfsv1].
+-offusrjquota		 Turn off user journalled quota.
+-offgrpjquota		 Turn off group journalled quota.
+-offprjjquota		 Turn off project journalled quota.
++usrjquota=		 Turn off user journalled quota.
++grpjquota=		 Turn off group journalled quota.
++prjjquota=		 Turn off project journalled quota.
+ quota			 Enable plain user disk quota accounting.
+ noquota			 Disable all plain disk quota option.
+ alloc_mode=%s		 Adjust block allocation policy, which supports "reuse"
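
Per the updated option table, journalled quota is now turned off by passing
the option with an empty value (usrjquota=) instead of a dedicated off*
option. From C this is simply part of the data string handed to mount(2);
the device and mount point below are illustrative:

    #include <sys/mount.h>

    int main(void)
    {
            /* Remount an f2fs volume with user journalled quota turned off. */
            return mount("/dev/vdb1", "/mnt/data", "f2fs",
                         MS_REMOUNT, "usrjquota=") ? 1 : 0;
    }
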
+diff --git a/Documentation/netlink/specs/ethtool.yaml b/Documentation/netlink/specs/ethtool.yaml
+index c650cd3dcb80bc..85e1d3b7512d6a 100644
+--- a/Documentation/netlink/specs/ethtool.yaml
++++ b/Documentation/netlink/specs/ethtool.yaml
+@@ -2077,9 +2077,6 @@ operations:
+ 
+       do: &module-eeprom-get-op
+         request:
+-          attributes:
+-            - header
+-        reply:
+           attributes:
+             - header
+             - offset
+@@ -2087,6 +2084,9 @@ operations:
+             - page
+             - bank
+             - i2c-address
++        reply:
++          attributes:
++            - header
+             - data
+       dump: *module-eeprom-get-op
+     -
+diff --git a/Makefile b/Makefile
+index d38a669b3ada6f..7831d9cd2e6cdb 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 15
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm/boot/dts/microchip/sam9x7.dtsi b/arch/arm/boot/dts/microchip/sam9x7.dtsi
+index b217a908f52534..114449e9072065 100644
+--- a/arch/arm/boot/dts/microchip/sam9x7.dtsi
++++ b/arch/arm/boot/dts/microchip/sam9x7.dtsi
+@@ -45,11 +45,13 @@ cpu@0 {
+ 	clocks {
+ 		slow_xtal: clock-slowxtal {
+ 			compatible = "fixed-clock";
++			clock-output-names = "slow_xtal";
+ 			#clock-cells = <0>;
+ 		};
+ 
+ 		main_xtal: clock-mainxtal {
+ 			compatible = "fixed-clock";
++			clock-output-names = "main_xtal";
+ 			#clock-cells = <0>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/microchip/sama7d65.dtsi b/arch/arm/boot/dts/microchip/sama7d65.dtsi
+index b6710ccd4c360b..7b1dd28a2cfad8 100644
+--- a/arch/arm/boot/dts/microchip/sama7d65.dtsi
++++ b/arch/arm/boot/dts/microchip/sama7d65.dtsi
+@@ -38,11 +38,13 @@ cpu0: cpu@0 {
+ 	clocks {
+ 		main_xtal: clock-mainxtal {
+ 			compatible = "fixed-clock";
++			clock-output-names = "main_xtal";
+ 			#clock-cells = <0>;
+ 		};
+ 
+ 		slow_xtal: clock-slowxtal {
+ 			compatible = "fixed-clock";
++			clock-output-names = "slow_xtal";
+ 			#clock-cells = <0>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6ul-kontron-bl-common.dtsi b/arch/arm/boot/dts/nxp/imx/imx6ul-kontron-bl-common.dtsi
+index 29d2f86d5e34a7..f4c45e964daf8f 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6ul-kontron-bl-common.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6ul-kontron-bl-common.dtsi
+@@ -168,7 +168,6 @@ &uart2 {
+ 	pinctrl-0 = <&pinctrl_uart2>;
+ 	linux,rs485-enabled-at-boot-time;
+ 	rs485-rx-during-tx;
+-	rs485-rts-active-low;
+ 	uart-has-rtscts;
+ 	status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/nxp/vf/vfxxx.dtsi b/arch/arm/boot/dts/nxp/vf/vfxxx.dtsi
+index 597f20be82f1ee..62e555bf6a71d9 100644
+--- a/arch/arm/boot/dts/nxp/vf/vfxxx.dtsi
++++ b/arch/arm/boot/dts/nxp/vf/vfxxx.dtsi
+@@ -603,7 +603,7 @@ usbmisc1: usb@400b4800 {
+ 
+ 			ftm: ftm@400b8000 {
+ 				compatible = "fsl,ftm-timer";
+-				reg = <0x400b8000 0x1000 0x400b9000 0x1000>;
++				reg = <0x400b8000 0x1000>, <0x400b9000 0x1000>;
+ 				interrupts = <44 IRQ_TYPE_LEVEL_HIGH>;
+ 				clock-names = "ftm-evt", "ftm-src",
+ 					"ftm-evt-counter-en", "ftm-src-counter-en";
+diff --git a/arch/arm/boot/dts/ti/omap/am335x-boneblack.dts b/arch/arm/boot/dts/ti/omap/am335x-boneblack.dts
+index 16b567e3cb4722..b4fdcf9c02b500 100644
+--- a/arch/arm/boot/dts/ti/omap/am335x-boneblack.dts
++++ b/arch/arm/boot/dts/ti/omap/am335x-boneblack.dts
+@@ -35,7 +35,7 @@ &gpio0 {
+ 		"P9_18 [spi0_d1]",
+ 		"P9_17 [spi0_cs0]",
+ 		"[mmc0_cd]",
+-		"P8_42A [ecappwm0]",
++		"P9_42A [ecappwm0]",
+ 		"P8_35 [lcd d12]",
+ 		"P8_33 [lcd d13]",
+ 		"P8_31 [lcd d14]",
+diff --git a/arch/arm/crypto/aes-neonbs-glue.c b/arch/arm/crypto/aes-neonbs-glue.c
+index f6be80b5938b14..2fad3a0c056379 100644
+--- a/arch/arm/crypto/aes-neonbs-glue.c
++++ b/arch/arm/crypto/aes-neonbs-glue.c
+@@ -232,7 +232,7 @@ static int ctr_encrypt(struct skcipher_request *req)
+ 	while (walk.nbytes > 0) {
+ 		const u8 *src = walk.src.virt.addr;
+ 		u8 *dst = walk.dst.virt.addr;
+-		int bytes = walk.nbytes;
++		unsigned int bytes = walk.nbytes;
+ 
+ 		if (unlikely(bytes < AES_BLOCK_SIZE))
+ 			src = dst = memcpy(buf + sizeof(buf) - bytes,
+diff --git a/arch/arm64/boot/dts/exynos/google/gs101.dtsi b/arch/arm64/boot/dts/exynos/google/gs101.dtsi
+index 3de3a758f113a8..fd0badf24e6f5e 100644
+--- a/arch/arm64/boot/dts/exynos/google/gs101.dtsi
++++ b/arch/arm64/boot/dts/exynos/google/gs101.dtsi
+@@ -155,6 +155,7 @@ ananke_cpu_sleep: cpu-ananke-sleep {
+ 				idle-state-name = "c2";
+ 				compatible = "arm,idle-state";
+ 				arm,psci-suspend-param = <0x0010000>;
++				local-timer-stop;
+ 				entry-latency-us = <70>;
+ 				exit-latency-us = <160>;
+ 				min-residency-us = <2000>;
+@@ -164,6 +165,7 @@ enyo_cpu_sleep: cpu-enyo-sleep {
+ 				idle-state-name = "c2";
+ 				compatible = "arm,idle-state";
+ 				arm,psci-suspend-param = <0x0010000>;
++				local-timer-stop;
+ 				entry-latency-us = <150>;
+ 				exit-latency-us = <190>;
+ 				min-residency-us = <2500>;
+@@ -173,6 +175,7 @@ hera_cpu_sleep: cpu-hera-sleep {
+ 				idle-state-name = "c2";
+ 				compatible = "arm,idle-state";
+ 				arm,psci-suspend-param = <0x0010000>;
++				local-timer-stop;
+ 				entry-latency-us = <235>;
+ 				exit-latency-us = <220>;
+ 				min-residency-us = <3500>;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
+index 9ba0cb89fa24e0..c0f00835e47d7a 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
+@@ -286,6 +286,8 @@ &usdhc3 {
+ 	pinctrl-0 = <&pinctrl_usdhc3>;
+ 	pinctrl-1 = <&pinctrl_usdhc3_100mhz>;
+ 	pinctrl-2 = <&pinctrl_usdhc3_200mhz>;
++	assigned-clocks = <&clk IMX8MM_CLK_USDHC3>;
++	assigned-clock-rates = <400000000>;
+ 	bus-width = <8>;
+ 	non-removable;
+ 	status = "okay";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
+index bb11590473a4c7..353d0c9ff35c2e 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
+@@ -297,6 +297,8 @@ &usdhc3 {
+ 	pinctrl-0 = <&pinctrl_usdhc3>;
+ 	pinctrl-1 = <&pinctrl_usdhc3_100mhz>;
+ 	pinctrl-2 = <&pinctrl_usdhc3_200mhz>;
++	assigned-clocks = <&clk IMX8MN_CLK_USDHC3>;
++	assigned-clock-rates = <400000000>;
+ 	bus-width = <8>;
+ 	non-removable;
+ 	status = "okay";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts b/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
+index 568d24265ddf8e..12de7cf1e8538e 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
+@@ -301,7 +301,7 @@ &gpio2 {
+ &gpio3 {
+ 	gpio-line-names =
+ 		"", "", "", "", "", "", "m2_rst", "",
+-		"", "", "", "", "", "", "m2_gpio10", "",
++		"", "", "", "", "", "", "m2_wdis2#", "",
+ 		"", "", "", "", "", "", "", "",
+ 		"", "", "", "", "", "", "", "";
+ };
+@@ -310,7 +310,7 @@ &gpio4 {
+ 	gpio-line-names =
+ 		"", "", "m2_off#", "", "", "", "", "",
+ 		"", "", "", "", "", "", "", "",
+-		"", "", "m2_wdis#", "", "", "", "", "",
++		"", "", "m2_wdis1#", "", "", "", "", "",
+ 		"", "", "", "", "", "", "", "rs485_en";
+ };
+ 
+@@ -811,14 +811,14 @@ pinctrl_hog: hoggrp {
+ 			MX8MP_IOMUXC_GPIO1_IO09__GPIO1_IO09	0x40000040 /* DIO0 */
+ 			MX8MP_IOMUXC_GPIO1_IO11__GPIO1_IO11	0x40000040 /* DIO1 */
+ 			MX8MP_IOMUXC_SAI1_RXD0__GPIO4_IO02	0x40000040 /* M2SKT_OFF# */
+-			MX8MP_IOMUXC_SAI1_TXD6__GPIO4_IO18	0x40000150 /* M2SKT_WDIS# */
++			MX8MP_IOMUXC_SAI1_TXD6__GPIO4_IO18	0x40000150 /* M2SKT_WDIS1# */
+ 			MX8MP_IOMUXC_SD1_DATA4__GPIO2_IO06	0x40000040 /* M2SKT_PIN20 */
+ 			MX8MP_IOMUXC_SD1_STROBE__GPIO2_IO11	0x40000040 /* M2SKT_PIN22 */
+ 			MX8MP_IOMUXC_SD2_CLK__GPIO2_IO13	0x40000150 /* PCIE1_WDIS# */
+ 			MX8MP_IOMUXC_SD2_CMD__GPIO2_IO14	0x40000150 /* PCIE3_WDIS# */
+ 			MX8MP_IOMUXC_SD2_DATA3__GPIO2_IO18	0x40000150 /* PCIE2_WDIS# */
+ 			MX8MP_IOMUXC_NAND_DATA00__GPIO3_IO06	0x40000040 /* M2SKT_RST# */
+-			MX8MP_IOMUXC_NAND_DQS__GPIO3_IO14	0x40000040 /* M2SKT_GPIO10 */
++			MX8MP_IOMUXC_NAND_DQS__GPIO3_IO14	0x40000150 /* M2SKT_WDIS2# */
+ 			MX8MP_IOMUXC_SAI3_TXD__GPIO5_IO01	0x40000104 /* UART_TERM */
+ 			MX8MP_IOMUXC_SAI3_TXFS__GPIO4_IO31	0x40000104 /* UART_RS485 */
+ 			MX8MP_IOMUXC_SAI3_TXC__GPIO5_IO00	0x40000104 /* UART_HALF */
+diff --git a/arch/arm64/boot/dts/freescale/imx93-tqma9352.dtsi b/arch/arm64/boot/dts/freescale/imx93-tqma9352.dtsi
+index 2cabdae2422739..09385b058664c3 100644
+--- a/arch/arm64/boot/dts/freescale/imx93-tqma9352.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx93-tqma9352.dtsi
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: (GPL-2.0-or-later OR MIT)
+ /*
+- * Copyright (c) 2022 TQ-Systems GmbH <linux@ew.tq-group.com>,
++ * Copyright (c) 2022-2025 TQ-Systems GmbH <linux@ew.tq-group.com>,
+  * D-82229 Seefeld, Germany.
+  * Author: Markus Niebel
+  */
+@@ -110,11 +110,11 @@ buck1: BUCK1 {
+ 				regulator-ramp-delay = <3125>;
+ 			};
+ 
+-			/* V_DDRQ - 1.1 LPDDR4 or 0.6 LPDDR4X */
++			/* V_DDRQ - 0.6 V for LPDDR4X */
+ 			buck2: BUCK2 {
+ 				regulator-name = "BUCK2";
+ 				regulator-min-microvolt = <600000>;
+-				regulator-max-microvolt = <1100000>;
++				regulator-max-microvolt = <600000>;
+ 				regulator-boot-on;
+ 				regulator-always-on;
+ 				regulator-ramp-delay = <3125>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8976.dtsi b/arch/arm64/boot/dts/qcom/msm8976.dtsi
+index d036f31dfdca16..963996f7c927c7 100644
+--- a/arch/arm64/boot/dts/qcom/msm8976.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8976.dtsi
+@@ -1330,6 +1330,7 @@ blsp1_dma: dma-controller@7884000 {
+ 			clock-names = "bam_clk";
+ 			#dma-cells = <1>;
+ 			qcom,ee = <0>;
++			qcom,controlled-remotely;
+ 		};
+ 
+ 		blsp1_uart1: serial@78af000 {
+@@ -1450,6 +1451,7 @@ blsp2_dma: dma-controller@7ac4000 {
+ 			clock-names = "bam_clk";
+ 			#dma-cells = <1>;
+ 			qcom,ee = <0>;
++			qcom,controlled-remotely;
+ 		};
+ 
+ 		blsp2_uart2: serial@7af0000 {
+diff --git a/arch/arm64/boot/dts/qcom/qcs615.dtsi b/arch/arm64/boot/dts/qcom/qcs615.dtsi
+index 12065484904380..3fda88b32a713a 100644
+--- a/arch/arm64/boot/dts/qcom/qcs615.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs615.dtsi
+@@ -1868,6 +1868,7 @@ replicator@604a000 {
+ 
+ 			clocks = <&aoss_qmp>;
+ 			clock-names = "apb_pclk";
++			status = "disabled";
+ 
+ 			in-ports {
+ 				port {
+@@ -2427,6 +2428,9 @@ cti@6c13000 {
+ 
+ 			clocks = <&aoss_qmp>;
+ 			clock-names = "apb_pclk";
++
++			/* Not all required clocks can be enabled from the OS */
++			status = "fail";
+ 		};
+ 
+ 		cti@6c20000 {
+diff --git a/arch/arm64/boot/dts/qcom/sa8775p.dtsi b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+index 2010b7988b6cc4..958e4be164d870 100644
+--- a/arch/arm64/boot/dts/qcom/sa8775p.dtsi
++++ b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+@@ -4663,8 +4663,8 @@ remoteproc_gpdsp0: remoteproc@20c00000 {
+ 
+ 			interrupts-extended = <&intc GIC_SPI 768 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_gpdsp0_in 0 0>,
+-					      <&smp2p_gpdsp0_in 2 0>,
+ 					      <&smp2p_gpdsp0_in 1 0>,
++					      <&smp2p_gpdsp0_in 2 0>,
+ 					      <&smp2p_gpdsp0_in 3 0>;
+ 			interrupt-names = "wdog", "fatal", "ready",
+ 					  "handover", "stop-ack";
+@@ -4706,8 +4706,8 @@ remoteproc_gpdsp1: remoteproc@21c00000 {
+ 
+ 			interrupts-extended = <&intc GIC_SPI 624 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_gpdsp1_in 0 0>,
+-					      <&smp2p_gpdsp1_in 2 0>,
+ 					      <&smp2p_gpdsp1_in 1 0>,
++					      <&smp2p_gpdsp1_in 2 0>,
+ 					      <&smp2p_gpdsp1_in 3 0>;
+ 			interrupt-names = "wdog", "fatal", "ready",
+ 					  "handover", "stop-ack";
+@@ -4847,8 +4847,8 @@ remoteproc_cdsp0: remoteproc@26300000 {
+ 
+ 			interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_cdsp0_in 0 IRQ_TYPE_EDGE_RISING>,
+-					      <&smp2p_cdsp0_in 2 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_cdsp0_in 1 IRQ_TYPE_EDGE_RISING>,
++					      <&smp2p_cdsp0_in 2 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_cdsp0_in 3 IRQ_TYPE_EDGE_RISING>;
+ 			interrupt-names = "wdog", "fatal", "ready",
+ 					  "handover", "stop-ack";
+@@ -4979,8 +4979,8 @@ remoteproc_cdsp1: remoteproc@2a300000 {
+ 
+ 			interrupts-extended = <&intc GIC_SPI 798 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_cdsp1_in 0 IRQ_TYPE_EDGE_RISING>,
+-					      <&smp2p_cdsp1_in 2 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_cdsp1_in 1 IRQ_TYPE_EDGE_RISING>,
++					      <&smp2p_cdsp1_in 2 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_cdsp1_in 3 IRQ_TYPE_EDGE_RISING>;
+ 			interrupt-names = "wdog", "fatal", "ready",
+ 					  "handover", "stop-ack";
+@@ -5135,8 +5135,8 @@ remoteproc_adsp: remoteproc@30000000 {
+ 
+ 			interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+-					      <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
++					      <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
+ 			interrupt-names = "wdog", "fatal", "ready", "handover",
+ 					  "stop-ack";
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index 87c432c12a240f..7dddafa901d8d7 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -3523,18 +3523,18 @@ spmi_bus: spmi@c440000 {
+ 			#interrupt-cells = <4>;
+ 		};
+ 
+-		sram@146aa000 {
++		sram@14680000 {
+ 			compatible = "qcom,sc7180-imem", "syscon", "simple-mfd";
+-			reg = <0 0x146aa000 0 0x2000>;
++			reg = <0 0x14680000 0 0x2e000>;
+ 
+ 			#address-cells = <1>;
+ 			#size-cells = <1>;
+ 
+-			ranges = <0 0 0x146aa000 0x2000>;
++			ranges = <0 0 0x14680000 0x2e000>;
+ 
+-			pil-reloc@94c {
++			pil-reloc@2a94c {
+ 				compatible = "qcom,pil-reloc-info";
+-				reg = <0x94c 0xc8>;
++				reg = <0x2a94c 0xc8>;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index d0314cdf0b92fd..0e6ec2c54c2431 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -5078,18 +5078,18 @@ spmi_bus: spmi@c440000 {
+ 			#interrupt-cells = <4>;
+ 		};
+ 
+-		sram@146bf000 {
++		sram@14680000 {
+ 			compatible = "qcom,sdm845-imem", "syscon", "simple-mfd";
+-			reg = <0 0x146bf000 0 0x1000>;
++			reg = <0 0x14680000 0 0x40000>;
+ 
+ 			#address-cells = <1>;
+ 			#size-cells = <1>;
+ 
+-			ranges = <0 0 0x146bf000 0x1000>;
++			ranges = <0 0 0x14680000 0x40000>;
+ 
+-			pil-reloc@94c {
++			pil-reloc@3f94c {
+ 				compatible = "qcom,pil-reloc-info";
+-				reg = <0x94c 0xc8>;
++				reg = <0x3f94c 0xc8>;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3528-pinctrl.dtsi b/arch/arm64/boot/dts/rockchip/rk3528-pinctrl.dtsi
+index ea051362fb2651..59b75c91bbb7f0 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3528-pinctrl.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3528-pinctrl.dtsi
+@@ -98,42 +98,42 @@ eth_pins: eth-pins {
+ 
+ 	fephy {
+ 		/omit-if-no-ref/
+-		fephym0_led_dpx: fephym0-led_dpx {
++		fephym0_led_dpx: fephym0-led-dpx {
+ 			rockchip,pins =
+ 				/* fephy_led_dpx_m0 */
+ 				<4 RK_PB5 2 &pcfg_pull_none>;
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		fephym0_led_link: fephym0-led_link {
++		fephym0_led_link: fephym0-led-link {
+ 			rockchip,pins =
+ 				/* fephy_led_link_m0 */
+ 				<4 RK_PC0 2 &pcfg_pull_none>;
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		fephym0_led_spd: fephym0-led_spd {
++		fephym0_led_spd: fephym0-led-spd {
+ 			rockchip,pins =
+ 				/* fephy_led_spd_m0 */
+ 				<4 RK_PB7 2 &pcfg_pull_none>;
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		fephym1_led_dpx: fephym1-led_dpx {
++		fephym1_led_dpx: fephym1-led-dpx {
+ 			rockchip,pins =
+ 				/* fephy_led_dpx_m1 */
+ 				<2 RK_PA4 5 &pcfg_pull_none>;
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		fephym1_led_link: fephym1-led_link {
++		fephym1_led_link: fephym1-led-link {
+ 			rockchip,pins =
+ 				/* fephy_led_link_m1 */
+ 				<2 RK_PA6 5 &pcfg_pull_none>;
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		fephym1_led_spd: fephym1-led_spd {
++		fephym1_led_spd: fephym1-led-spd {
+ 			rockchip,pins =
+ 				/* fephy_led_spd_m1 */
+ 				<2 RK_PA5 5 &pcfg_pull_none>;
+@@ -779,7 +779,7 @@ rgmii_miim: rgmii-miim {
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		rgmii_rx_bus2: rgmii-rx_bus2 {
++		rgmii_rx_bus2: rgmii-rx-bus2 {
+ 			rockchip,pins =
+ 				/* rgmii_rxd0 */
+ 				<3 RK_PA3 2 &pcfg_pull_none>,
+@@ -790,7 +790,7 @@ rgmii_rx_bus2: rgmii-rx_bus2 {
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		rgmii_tx_bus2: rgmii-tx_bus2 {
++		rgmii_tx_bus2: rgmii-tx-bus2 {
+ 			rockchip,pins =
+ 				/* rgmii_txd0 */
+ 				<3 RK_PA1 2 &pcfg_pull_none_drv_level_2>,
+@@ -801,7 +801,7 @@ rgmii_tx_bus2: rgmii-tx_bus2 {
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		rgmii_rgmii_clk: rgmii-rgmii_clk {
++		rgmii_rgmii_clk: rgmii-rgmii-clk {
+ 			rockchip,pins =
+ 				/* rgmii_rxclk */
+ 				<3 RK_PA5 2 &pcfg_pull_none>,
+@@ -810,7 +810,7 @@ rgmii_rgmii_clk: rgmii-rgmii_clk {
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		rgmii_rgmii_bus: rgmii-rgmii_bus {
++		rgmii_rgmii_bus: rgmii-rgmii-bus {
+ 			rockchip,pins =
+ 				/* rgmii_rxd2 */
+ 				<3 RK_PA7 2 &pcfg_pull_none>,
+diff --git a/arch/arm64/boot/dts/rockchip/rk3528-radxa-e20c.dts b/arch/arm64/boot/dts/rockchip/rk3528-radxa-e20c.dts
+index 57a446b5cbd6c6..92bdb66169f273 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3528-radxa-e20c.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3528-radxa-e20c.dts
+@@ -140,6 +140,7 @@ &saradc {
+ &sdhci {
+ 	bus-width = <8>;
+ 	cap-mmc-highspeed;
++	mmc-hs200-1_8v;
+ 	no-sd;
+ 	no-sdio;
+ 	non-removable;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3576-rock-4d.dts b/arch/arm64/boot/dts/rockchip/rk3576-rock-4d.dts
+index 6756403111e704..0a93853cdf43c5 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3576-rock-4d.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3576-rock-4d.dts
+@@ -641,14 +641,16 @@ hym8563: rtc@51 {
+ 
+ &mdio0 {
+ 	rgmii_phy0: ethernet-phy@1 {
+-		compatible = "ethernet-phy-ieee802.3-c22";
++		compatible = "ethernet-phy-id001c.c916";
+ 		reg = <0x1>;
+ 		clocks = <&cru REFCLKO25M_GMAC0_OUT>;
++		assigned-clocks = <&cru REFCLKO25M_GMAC0_OUT>;
++		assigned-clock-rates = <25000000>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&rtl8211f_rst>;
+ 		reset-assert-us = <20000>;
+ 		reset-deassert-us = <100000>;
+-		reset-gpio = <&gpio2 RK_PB5 GPIO_ACTIVE_LOW>;
++		reset-gpios = <&gpio2 RK_PB5 GPIO_ACTIVE_LOW>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/st/stm32mp251.dtsi b/arch/arm64/boot/dts/st/stm32mp251.dtsi
+index 87110f91e4895a..afe88e04875aea 100644
+--- a/arch/arm64/boot/dts/st/stm32mp251.dtsi
++++ b/arch/arm64/boot/dts/st/stm32mp251.dtsi
+@@ -150,7 +150,7 @@ timer {
+ 			     <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(1) | IRQ_TYPE_LEVEL_LOW)>,
+ 			     <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(1) | IRQ_TYPE_LEVEL_LOW)>,
+ 			     <GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(1) | IRQ_TYPE_LEVEL_LOW)>;
+-		always-on;
++		arm,no-tick-in-suspend;
+ 	};
+ 
+ 	soc@0 {
+diff --git a/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-main.dtsi
+index f9b5c97518d68f..8bacb04b377320 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-main.dtsi
+@@ -250,7 +250,7 @@ secure_proxy_sa3: mailbox@43600000 {
+ 
+ 	main_pmx0: pinctrl@f4000 {
+ 		compatible = "pinctrl-single";
+-		reg = <0x00 0xf4000 0x00 0x2ac>;
++		reg = <0x00 0xf4000 0x00 0x2b0>;
+ 		#pinctrl-cells = <1>;
+ 		pinctrl-single,register-width = <32>;
+ 		pinctrl-single,function-mask = <0xffffffff>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am642-phyboard-electra-rdk.dts b/arch/arm64/boot/dts/ti/k3-am642-phyboard-electra-rdk.dts
+index f63c101b7d61a1..129524eb5b9123 100644
+--- a/arch/arm64/boot/dts/ti/k3-am642-phyboard-electra-rdk.dts
++++ b/arch/arm64/boot/dts/ti/k3-am642-phyboard-electra-rdk.dts
+@@ -322,6 +322,8 @@ AM64X_IOPAD(0x0040, PIN_OUTPUT, 7)	/* (U21) GPMC0_AD1.GPIO0_16 */
+ &icssg0_mdio {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&icssg0_mdio_pins_default &clkout0_pins_default>;
++	assigned-clocks = <&k3_clks 157 123>;
++	assigned-clock-parents = <&k3_clks 157 125>;
+ 	status = "okay";
+ 
+ 	icssg0_phy1: ethernet-phy@1 {
+diff --git a/arch/arm64/include/asm/gcs.h b/arch/arm64/include/asm/gcs.h
+index f50660603ecf5d..5bc432234d3aba 100644
+--- a/arch/arm64/include/asm/gcs.h
++++ b/arch/arm64/include/asm/gcs.h
+@@ -58,7 +58,7 @@ static inline u64 gcsss2(void)
+ 
+ static inline bool task_gcs_el0_enabled(struct task_struct *task)
+ {
+-	return current->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE;
++	return task->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE;
+ }
+ 
+ void gcs_set_el0_mode(struct task_struct *task);
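
The gcs.h one-liner above fixes a classic bug shape: a helper that takes a
task pointer but reads current, so it is only correct for the current task
and silently wrong when a caller (such as the copy_thread() path) asks
about another task. Distilled with hypothetical types:

    struct thread { unsigned long flags; };
    struct task   { struct thread thread; };

    #define FEATURE_ENABLE (1UL << 0)

    static struct task init_task;
    static struct task *current_task = &init_task;   /* stand-in for 'current' */

    /* Buggy: ignores its argument, so a caller asking about another task
     * (e.g. a freshly copied child) silently gets the caller's state. */
    static int feature_enabled_buggy(struct task *t)
    {
            (void)t;
            return !!(current_task->thread.flags & FEATURE_ENABLE);
    }

    /* Fixed: consult the task that was actually passed in. */
    static int feature_enabled(struct task *t)
    {
            return !!(t->thread.flags & FEATURE_ENABLE);
    }

    int main(void)
    {
            struct task child = { .thread = { .flags = FEATURE_ENABLE } };

            /* current has the feature off, the child has it on: */
            return (feature_enabled(&child) &&
                    !feature_enabled_buggy(&child)) ? 0 : 1;
    }
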
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index 4bc70205312e47..ce21682fe129e1 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -305,13 +305,13 @@ static int copy_thread_gcs(struct task_struct *p,
+ 	p->thread.gcs_base = 0;
+ 	p->thread.gcs_size = 0;
+ 
++	p->thread.gcs_el0_mode = current->thread.gcs_el0_mode;
++	p->thread.gcs_el0_locked = current->thread.gcs_el0_locked;
++
+ 	gcs = gcs_alloc_thread_stack(p, args);
+ 	if (IS_ERR_VALUE(gcs))
+ 		return PTR_ERR((void *)gcs);
+ 
+-	p->thread.gcs_el0_mode = current->thread.gcs_el0_mode;
+-	p->thread.gcs_el0_locked = current->thread.gcs_el0_locked;
+-
+ 	return 0;
+ }
+ 
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 634d78422adb27..a85236d0afee56 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -412,6 +412,7 @@ static void push_callee_regs(struct jit_ctx *ctx)
+ 		emit(A64_PUSH(A64_R(23), A64_R(24), A64_SP), ctx);
+ 		emit(A64_PUSH(A64_R(25), A64_R(26), A64_SP), ctx);
+ 		emit(A64_PUSH(A64_R(27), A64_R(28), A64_SP), ctx);
++		ctx->fp_used = true;
+ 	} else {
+ 		find_used_callee_regs(ctx);
+ 		for (i = 0; i + 1 < ctx->nr_used_callee_reg; i += 2) {
+diff --git a/arch/m68k/Kconfig.debug b/arch/m68k/Kconfig.debug
+index 30638a6e8edcb3..d036f903864c26 100644
+--- a/arch/m68k/Kconfig.debug
++++ b/arch/m68k/Kconfig.debug
+@@ -10,7 +10,7 @@ config BOOTPARAM_STRING
+ 
+ config EARLY_PRINTK
+ 	bool "Early printk"
+-	depends on !(SUN3 || M68000 || COLDFIRE)
++	depends on MMU_MOTOROLA
+ 	help
+ 	  Write kernel log output directly to a serial port.
+ 	  Where implemented, output goes to the framebuffer as well.
+diff --git a/arch/m68k/kernel/early_printk.c b/arch/m68k/kernel/early_printk.c
+index f11ef9f1f56fcf..521cbb8a150c99 100644
+--- a/arch/m68k/kernel/early_printk.c
++++ b/arch/m68k/kernel/early_printk.c
+@@ -16,25 +16,10 @@
+ #include "../mvme147/mvme147.h"
+ #include "../mvme16x/mvme16x.h"
+ 
+-asmlinkage void __init debug_cons_nputs(const char *s, unsigned n);
+-
+-static void __ref debug_cons_write(struct console *c,
+-				   const char *s, unsigned n)
+-{
+-#if !(defined(CONFIG_SUN3) || defined(CONFIG_M68000) || \
+-      defined(CONFIG_COLDFIRE))
+-	if (MACH_IS_MVME147)
+-		mvme147_scc_write(c, s, n);
+-	else if (MACH_IS_MVME16x)
+-		mvme16x_cons_write(c, s, n);
+-	else
+-		debug_cons_nputs(s, n);
+-#endif
+-}
++asmlinkage void __init debug_cons_nputs(struct console *c, const char *s, unsigned int n);
+ 
+ static struct console early_console_instance = {
+ 	.name  = "debug",
+-	.write = debug_cons_write,
+ 	.flags = CON_PRINTBUFFER | CON_BOOT,
+ 	.index = -1
+ };
+@@ -44,6 +29,12 @@ static int __init setup_early_printk(char *buf)
+ 	if (early_console || buf)
+ 		return 0;
+ 
++	if (MACH_IS_MVME147)
++		early_console_instance.write = mvme147_scc_write;
++	else if (MACH_IS_MVME16x)
++		early_console_instance.write = mvme16x_cons_write;
++	else
++		early_console_instance.write = debug_cons_nputs;
+ 	early_console = &early_console_instance;
+ 	register_console(early_console);
+ 
+@@ -51,20 +42,15 @@ static int __init setup_early_printk(char *buf)
+ }
+ early_param("earlyprintk", setup_early_printk);
+ 
+-/*
+- * debug_cons_nputs() defined in arch/m68k/kernel/head.S cannot be called
+- * after init sections are discarded (for platforms that use it).
+- */
+-#if !(defined(CONFIG_SUN3) || defined(CONFIG_M68000) || \
+-      defined(CONFIG_COLDFIRE))
+-
+ static int __init unregister_early_console(void)
+ {
+-	if (!early_console || MACH_IS_MVME16x)
+-		return 0;
++	/*
++	 * debug_cons_nputs() defined in arch/m68k/kernel/head.S cannot be
++	 * called after init sections are discarded (for platforms that use it).
++	 */
++	if (early_console && early_console->write == debug_cons_nputs)
++		return unregister_console(early_console);
+ 
+-	return unregister_console(early_console);
++	return 0;
+ }
+ late_initcall(unregister_early_console);
+-
+-#endif
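
The early-printk rework above replaces a wrapper that re-checked the machine
type on every write with a one-time assignment of the console's write hook
at setup. This also lets teardown identify the backend by comparing the
function pointer, as the new unregister_early_console() does. The same
pattern in miniature, with made-up backend names:

    #include <stdio.h>

    struct console {
            const char *name;
            void (*write)(struct console *c, const char *s, unsigned int n);
    };

    static void uart_write(struct console *c, const char *s, unsigned int n)
    {
            (void)c;
            fwrite(s, 1, n, stdout);   /* stand-in for a serial backend */
    }

    static void fb_write(struct console *c, const char *s, unsigned int n)
    {
            (void)c;
            fwrite(s, 1, n, stderr);   /* stand-in for a framebuffer backend */
    }

    static struct console early_console = { .name = "debug" };

    /* Decide once at setup; every later write calls the right backend
     * directly, with no per-call machine check. */
    static void setup_console(int have_uart)
    {
            early_console.write = have_uart ? uart_write : fb_write;
    }

    int main(void)
    {
            setup_console(1);
            early_console.write(&early_console, "hello\n", 6);
            return 0;
    }
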
+diff --git a/arch/m68k/kernel/head.S b/arch/m68k/kernel/head.S
+index 852255cf60dec1..ba22bc2f3d6d86 100644
+--- a/arch/m68k/kernel/head.S
++++ b/arch/m68k/kernel/head.S
+@@ -3263,8 +3263,8 @@ func_return	putn
+  *	turns around and calls the internal routines.  This routine
+  *	is used by the boot console.
+  *
+- *	The calling parameters are:
+- *		void debug_cons_nputs(const char *str, unsigned length)
++ *	The function signature is -
++ *		void debug_cons_nputs(struct console *c, const char *s, unsigned int n)
+  *
+  *	This routine does NOT understand variable arguments only
+  *	simple strings!
+@@ -3273,8 +3273,8 @@ ENTRY(debug_cons_nputs)
+ 	moveml	%d0/%d1/%a0,%sp@-
+ 	movew	%sr,%sp@-
+ 	ori	#0x0700,%sr
+-	movel	%sp@(18),%a0		/* fetch parameter */
+-	movel	%sp@(22),%d1		/* fetch parameter */
++	movel	%sp@(22),%a0		/* char *s */
++	movel	%sp@(26),%d1		/* unsigned int n */
+ 	jra	2f
+ 1:
+ #ifdef CONSOLE_DEBUG
+diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c
+index 76f3b9c0a9f0ce..347126dc010dd5 100644
+--- a/arch/mips/mm/tlb-r4k.c
++++ b/arch/mips/mm/tlb-r4k.c
+@@ -508,6 +508,60 @@ static int __init set_ntlb(char *str)
+ 
+ __setup("ntlb=", set_ntlb);
+ 
++/* Initialise all TLB entries with unique values */
++static void r4k_tlb_uniquify(void)
++{
++	int entry = num_wired_entries();
++
++	htw_stop();
++	write_c0_entrylo0(0);
++	write_c0_entrylo1(0);
++
++	while (entry < current_cpu_data.tlbsize) {
++		unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
++		unsigned long asid = 0;
++		int idx;
++
++		/* Skip wired MMID to make ginvt_mmid work */
++		if (cpu_has_mmid)
++			asid = MMID_KERNEL_WIRED + 1;
++
++		/* Check for match before using UNIQUE_ENTRYHI */
++		do {
++			if (cpu_has_mmid) {
++				write_c0_memorymapid(asid);
++				write_c0_entryhi(UNIQUE_ENTRYHI(entry));
++			} else {
++				write_c0_entryhi(UNIQUE_ENTRYHI(entry) | asid);
++			}
++			mtc0_tlbw_hazard();
++			tlb_probe();
++			tlb_probe_hazard();
++			idx = read_c0_index();
++			/* No match or match is on current entry */
++			if (idx < 0 || idx == entry)
++				break;
++			/*
++			 * If we hit a match, we need to try again with
++			 * a different ASID.
++			 */
++			asid++;
++		} while (asid < asid_mask);
++
++		if (idx >= 0 && idx != entry)
++			panic("Unable to uniquify TLB entry %d", idx);
++
++		write_c0_index(entry);
++		mtc0_tlbw_hazard();
++		tlb_write_indexed();
++		entry++;
++	}
++
++	tlbw_use_hazard();
++	htw_start();
++	flush_micro_tlb();
++}
++
+ /*
+  * Configure TLB (for init or after a CPU has been powered off).
+  */
+@@ -547,7 +601,7 @@ static void r4k_tlb_configure(void)
+ 	temp_tlb_entry = current_cpu_data.tlbsize - 1;
+ 
+ 	/* From this point on the ARC firmware is dead.	 */
+-	local_flush_tlb_all();
++	r4k_tlb_uniquify();
+ 
+ 	/* Did I tell you that ARC SUCKS?  */
+ }
+diff --git a/arch/powerpc/configs/ppc6xx_defconfig b/arch/powerpc/configs/ppc6xx_defconfig
+index a91a766b71a44c..efa1411a52e0e1 100644
+--- a/arch/powerpc/configs/ppc6xx_defconfig
++++ b/arch/powerpc/configs/ppc6xx_defconfig
+@@ -253,7 +253,6 @@ CONFIG_NET_SCH_DSMARK=m
+ CONFIG_NET_SCH_NETEM=m
+ CONFIG_NET_SCH_INGRESS=m
+ CONFIG_NET_CLS_BASIC=m
+-CONFIG_NET_CLS_TCINDEX=m
+ CONFIG_NET_CLS_ROUTE4=m
+ CONFIG_NET_CLS_FW=m
+ CONFIG_NET_CLS_U32=m
+diff --git a/arch/powerpc/kernel/eeh.c b/arch/powerpc/kernel/eeh.c
+index ca7f7bb2b47869..2b5f3323e1072d 100644
+--- a/arch/powerpc/kernel/eeh.c
++++ b/arch/powerpc/kernel/eeh.c
+@@ -1139,6 +1139,7 @@ int eeh_unfreeze_pe(struct eeh_pe *pe)
+ 
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(eeh_unfreeze_pe);
+ 
+ 
+ static struct pci_device_id eeh_reset_ids[] = {
+diff --git a/arch/powerpc/kernel/eeh_driver.c b/arch/powerpc/kernel/eeh_driver.c
+index 7efe04c68f0fe3..dd50de91c43834 100644
+--- a/arch/powerpc/kernel/eeh_driver.c
++++ b/arch/powerpc/kernel/eeh_driver.c
+@@ -257,13 +257,12 @@ static void eeh_pe_report_edev(struct eeh_dev *edev, eeh_report_fn fn,
+ 	struct pci_driver *driver;
+ 	enum pci_ers_result new_result;
+ 
+-	pci_lock_rescan_remove();
+ 	pdev = edev->pdev;
+ 	if (pdev)
+ 		get_device(&pdev->dev);
+-	pci_unlock_rescan_remove();
+ 	if (!pdev) {
+ 		eeh_edev_info(edev, "no device");
++		*result = PCI_ERS_RESULT_DISCONNECT;
+ 		return;
+ 	}
+ 	device_lock(&pdev->dev);
+@@ -304,8 +303,9 @@ static void eeh_pe_report(const char *name, struct eeh_pe *root,
+ 	struct eeh_dev *edev, *tmp;
+ 
+ 	pr_info("EEH: Beginning: '%s'\n", name);
+-	eeh_for_each_pe(root, pe) eeh_pe_for_each_dev(pe, edev, tmp)
+-		eeh_pe_report_edev(edev, fn, result);
++	eeh_for_each_pe(root, pe)
++		eeh_pe_for_each_dev(pe, edev, tmp)
++			eeh_pe_report_edev(edev, fn, result);
+ 	if (result)
+ 		pr_info("EEH: Finished:'%s' with aggregate recovery state:'%s'\n",
+ 			name, pci_ers_result_name(*result));
+@@ -383,6 +383,8 @@ static void eeh_dev_restore_state(struct eeh_dev *edev, void *userdata)
+ 	if (!edev)
+ 		return;
+ 
++	pci_lock_rescan_remove();
++
+ 	/*
+ 	 * The content in the config space isn't saved because
+ 	 * the blocked config space on some adapters. We have
+@@ -393,14 +395,19 @@ static void eeh_dev_restore_state(struct eeh_dev *edev, void *userdata)
+ 		if (list_is_last(&edev->entry, &edev->pe->edevs))
+ 			eeh_pe_restore_bars(edev->pe);
+ 
++		pci_unlock_rescan_remove();
+ 		return;
+ 	}
+ 
+ 	pdev = eeh_dev_to_pci_dev(edev);
+-	if (!pdev)
++	if (!pdev) {
++		pci_unlock_rescan_remove();
+ 		return;
++	}
+ 
+ 	pci_restore_state(pdev);
++
++	pci_unlock_rescan_remove();
+ }
+ 
+ /**
+@@ -647,9 +654,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ 	if (any_passed || driver_eeh_aware || (pe->type & EEH_PE_VF)) {
+ 		eeh_pe_dev_traverse(pe, eeh_rmv_device, rmv_data);
+ 	} else {
+-		pci_lock_rescan_remove();
+ 		pci_hp_remove_devices(bus);
+-		pci_unlock_rescan_remove();
+ 	}
+ 
+ 	/*
+@@ -665,8 +670,6 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ 	if (rc)
+ 		return rc;
+ 
+-	pci_lock_rescan_remove();
+-
+ 	/* Restore PE */
+ 	eeh_ops->configure_bridge(pe);
+ 	eeh_pe_restore_bars(pe);
+@@ -674,7 +677,6 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ 	/* Clear frozen state */
+ 	rc = eeh_clear_pe_frozen_state(pe, false);
+ 	if (rc) {
+-		pci_unlock_rescan_remove();
+ 		return rc;
+ 	}
+ 
+@@ -709,7 +711,6 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ 	pe->tstamp = tstamp;
+ 	pe->freeze_count = cnt;
+ 
+-	pci_unlock_rescan_remove();
+ 	return 0;
+ }
+ 
+@@ -843,10 +844,13 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
+ 		{LIST_HEAD_INIT(rmv_data.removed_vf_list), 0};
+ 	int devices = 0;
+ 
++	pci_lock_rescan_remove();
++
+ 	bus = eeh_pe_bus_get(pe);
+ 	if (!bus) {
+ 		pr_err("%s: Cannot find PCI bus for PHB#%x-PE#%x\n",
+ 			__func__, pe->phb->global_number, pe->addr);
++		pci_unlock_rescan_remove();
+ 		return;
+ 	}
+ 
+@@ -1094,10 +1098,15 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
+ 		eeh_pe_state_clear(pe, EEH_PE_PRI_BUS, true);
+ 		eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
+ 
+-		pci_lock_rescan_remove();
+-		pci_hp_remove_devices(bus);
+-		pci_unlock_rescan_remove();
++		bus = eeh_pe_bus_get(pe);
++		if (bus)
++			pci_hp_remove_devices(bus);
++		else
++			pr_err("%s: PCI bus for PHB#%x-PE#%x disappeared\n",
++				__func__, pe->phb->global_number, pe->addr);
++
+ 		/* The passed PE should no longer be used */
++		pci_unlock_rescan_remove();
+ 		return;
+ 	}
+ 
+@@ -1114,6 +1123,8 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
+ 			eeh_clear_slot_attention(edev->pdev);
+ 
+ 	eeh_pe_state_clear(pe, EEH_PE_RECOVERING, true);
++
++	pci_unlock_rescan_remove();
+ }
+ 
+ /**
+@@ -1132,6 +1143,7 @@ void eeh_handle_special_event(void)
+ 	unsigned long flags;
+ 	int rc;
+ 
++	pci_lock_rescan_remove();
+ 
+ 	do {
+ 		rc = eeh_ops->next_error(&pe);
+@@ -1171,10 +1183,12 @@ void eeh_handle_special_event(void)
+ 
+ 			break;
+ 		case EEH_NEXT_ERR_NONE:
++			pci_unlock_rescan_remove();
+ 			return;
+ 		default:
+ 			pr_warn("%s: Invalid value %d from next_error()\n",
+ 				__func__, rc);
++			pci_unlock_rescan_remove();
+ 			return;
+ 		}
+ 
+@@ -1186,7 +1200,9 @@ void eeh_handle_special_event(void)
+ 		if (rc == EEH_NEXT_ERR_FROZEN_PE ||
+ 		    rc == EEH_NEXT_ERR_FENCED_PHB) {
+ 			eeh_pe_state_mark(pe, EEH_PE_RECOVERING);
++			pci_unlock_rescan_remove();
+ 			eeh_handle_normal_event(pe);
++			pci_lock_rescan_remove();
+ 		} else {
+ 			eeh_for_each_pe(pe, tmp_pe)
+ 				eeh_pe_for_each_dev(tmp_pe, edev, tmp_edev)
+@@ -1199,7 +1215,6 @@ void eeh_handle_special_event(void)
+ 				eeh_report_failure, NULL);
+ 			eeh_set_channel_state(pe, pci_channel_io_perm_failure);
+ 
+-			pci_lock_rescan_remove();
+ 			list_for_each_entry(hose, &hose_list, list_node) {
+ 				phb_pe = eeh_phb_pe_get(hose);
+ 				if (!phb_pe ||
+@@ -1218,7 +1233,6 @@ void eeh_handle_special_event(void)
+ 				}
+ 				pci_hp_remove_devices(bus);
+ 			}
+-			pci_unlock_rescan_remove();
+ 		}
+ 
+ 		/*
+@@ -1228,4 +1242,6 @@ void eeh_handle_special_event(void)
+ 		if (rc == EEH_NEXT_ERR_DEAD_IOC)
+ 			break;
+ 	} while (rc != EEH_NEXT_ERR_NONE);
++
++	pci_unlock_rescan_remove();
+ }
+diff --git a/arch/powerpc/kernel/eeh_pe.c b/arch/powerpc/kernel/eeh_pe.c
+index d283d281d28e8b..e740101fadf3b1 100644
+--- a/arch/powerpc/kernel/eeh_pe.c
++++ b/arch/powerpc/kernel/eeh_pe.c
+@@ -671,10 +671,12 @@ static void eeh_bridge_check_link(struct eeh_dev *edev)
+ 	eeh_ops->write_config(edev, cap + PCI_EXP_LNKCTL, 2, val);
+ 
+ 	/* Check link */
+-	if (!edev->pdev->link_active_reporting) {
+-		eeh_edev_dbg(edev, "No link reporting capability\n");
+-		msleep(1000);
+-		return;
++	if (edev->pdev) {
++		if (!edev->pdev->link_active_reporting) {
++			eeh_edev_dbg(edev, "No link reporting capability\n");
++			msleep(1000);
++			return;
++		}
+ 	}
+ 
+ 	/* Wait the link is up until timeout (5s) */
+diff --git a/arch/powerpc/kernel/pci-hotplug.c b/arch/powerpc/kernel/pci-hotplug.c
+index 9ea74973d78d5a..6f444d0822d820 100644
+--- a/arch/powerpc/kernel/pci-hotplug.c
++++ b/arch/powerpc/kernel/pci-hotplug.c
+@@ -141,6 +141,9 @@ void pci_hp_add_devices(struct pci_bus *bus)
+ 	struct pci_controller *phb;
+ 	struct device_node *dn = pci_bus_to_OF_node(bus);
+ 
++	if (!dn)
++		return;
++
+ 	phb = pci_bus_to_host(bus);
+ 
+ 	mode = PCI_PROBE_NORMAL;
+diff --git a/arch/powerpc/platforms/pseries/dlpar.c b/arch/powerpc/platforms/pseries/dlpar.c
+index 213aa26dc8b337..979487da65223d 100644
+--- a/arch/powerpc/platforms/pseries/dlpar.c
++++ b/arch/powerpc/platforms/pseries/dlpar.c
+@@ -404,6 +404,45 @@ get_device_node_with_drc_info(u32 index)
+ 	return NULL;
+ }
+ 
++static struct device_node *
++get_device_node_with_drc_indexes(u32 drc_index)
++{
++	struct device_node *np = NULL;
++	u32 nr_indexes, index;
++	int i, rc;
++
++	for_each_node_with_property(np, "ibm,drc-indexes") {
++		/*
++		 * First element in the array is the total number of
++		 * DRC indexes returned.
++		 */
++		rc = of_property_read_u32_index(np, "ibm,drc-indexes",
++				0, &nr_indexes);
++		if (rc)
++			goto out_put_np;
++
++		/*
++		 * Retrieve DRC index from the list and return the
++		 * device node if matched with the specified index.
++		 */
++		for (i = 0; i < nr_indexes; i++) {
++			rc = of_property_read_u32_index(np, "ibm,drc-indexes",
++							i+1, &index);
++			if (rc)
++				goto out_put_np;
++
++			if (drc_index == index)
++				return np;
++		}
++	}
++
++	return NULL;
++
++out_put_np:
++	of_node_put(np);
++	return NULL;
++}
++
+ static int dlpar_hp_dt_add(u32 index)
+ {
+ 	struct device_node *np, *nodes;
+@@ -423,10 +462,19 @@ static int dlpar_hp_dt_add(u32 index)
+ 		goto out;
+ 	}
+ 
++	/*
++	 * Recent FW provides ibm,drc-info property. So search
++	 * for the user specified DRC index from ibm,drc-info
++	 * property. If this property is not available, search
++	 * in the indexes array from ibm,drc-indexes property.
++	 */
+ 	np = get_device_node_with_drc_info(index);
+ 
+-	if (!np)
+-		return -EIO;
++	if (!np) {
++		np = get_device_node_with_drc_indexes(index);
++		if (!np)
++			return -EIO;
++	}
+ 
+ 	/* Next, configure the connector. */
+ 	nodes = dlpar_configure_connector(cpu_to_be32(index), np);
+diff --git a/arch/riscv/kvm/vcpu_onereg.c b/arch/riscv/kvm/vcpu_onereg.c
+index 2e1b646f0d6130..cce6a38ea54f2a 100644
+--- a/arch/riscv/kvm/vcpu_onereg.c
++++ b/arch/riscv/kvm/vcpu_onereg.c
+@@ -23,7 +23,7 @@
+ #define KVM_ISA_EXT_ARR(ext)		\
+ [KVM_RISCV_ISA_EXT_##ext] = RISCV_ISA_EXT_##ext
+ 
+-/* Mapping between KVM ISA Extension ID & Host ISA extension ID */
++/* Mapping between KVM ISA Extension ID & guest ISA extension ID */
+ static const unsigned long kvm_isa_ext_arr[] = {
+ 	/* Single letter extensions (alphabetically sorted) */
+ 	[KVM_RISCV_ISA_EXT_A] = RISCV_ISA_EXT_a,
+@@ -35,7 +35,7 @@ static const unsigned long kvm_isa_ext_arr[] = {
+ 	[KVM_RISCV_ISA_EXT_M] = RISCV_ISA_EXT_m,
+ 	[KVM_RISCV_ISA_EXT_V] = RISCV_ISA_EXT_v,
+ 	/* Multi letter extensions (alphabetically sorted) */
+-	[KVM_RISCV_ISA_EXT_SMNPM] = RISCV_ISA_EXT_SSNPM,
++	KVM_ISA_EXT_ARR(SMNPM),
+ 	KVM_ISA_EXT_ARR(SMSTATEEN),
+ 	KVM_ISA_EXT_ARR(SSAIA),
+ 	KVM_ISA_EXT_ARR(SSCOFPMF),
+@@ -112,6 +112,36 @@ static unsigned long kvm_riscv_vcpu_base2isa_ext(unsigned long base_ext)
+ 	return KVM_RISCV_ISA_EXT_MAX;
+ }
+ 
++static int kvm_riscv_vcpu_isa_check_host(unsigned long kvm_ext, unsigned long *guest_ext)
++{
++	unsigned long host_ext;
++
++	if (kvm_ext >= KVM_RISCV_ISA_EXT_MAX ||
++	    kvm_ext >= ARRAY_SIZE(kvm_isa_ext_arr))
++		return -ENOENT;
++
++	*guest_ext = kvm_isa_ext_arr[kvm_ext];
++	switch (*guest_ext) {
++	case RISCV_ISA_EXT_SMNPM:
++		/*
++		 * Pointer masking effective in (H)S-mode is provided by the
++		 * Smnpm extension, so that extension is reported to the guest,
++		 * even though the CSR bits for configuring VS-mode pointer
++		 * masking on the host side are part of the Ssnpm extension.
++		 */
++		host_ext = RISCV_ISA_EXT_SSNPM;
++		break;
++	default:
++		host_ext = *guest_ext;
++		break;
++	}
++
++	if (!__riscv_isa_extension_available(NULL, host_ext))
++		return -ENOENT;
++
++	return 0;
++}
++
+ static bool kvm_riscv_vcpu_isa_enable_allowed(unsigned long ext)
+ {
+ 	switch (ext) {
+@@ -219,13 +249,13 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext)
+ 
+ void kvm_riscv_vcpu_setup_isa(struct kvm_vcpu *vcpu)
+ {
+-	unsigned long host_isa, i;
++	unsigned long guest_ext, i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(kvm_isa_ext_arr); i++) {
+-		host_isa = kvm_isa_ext_arr[i];
+-		if (__riscv_isa_extension_available(NULL, host_isa) &&
+-		    kvm_riscv_vcpu_isa_enable_allowed(i))
+-			set_bit(host_isa, vcpu->arch.isa);
++		if (kvm_riscv_vcpu_isa_check_host(i, &guest_ext))
++			continue;
++		if (kvm_riscv_vcpu_isa_enable_allowed(i))
++			set_bit(guest_ext, vcpu->arch.isa);
+ 	}
+ }
+ 
+@@ -607,18 +637,15 @@ static int riscv_vcpu_get_isa_ext_single(struct kvm_vcpu *vcpu,
+ 					 unsigned long reg_num,
+ 					 unsigned long *reg_val)
+ {
+-	unsigned long host_isa_ext;
+-
+-	if (reg_num >= KVM_RISCV_ISA_EXT_MAX ||
+-	    reg_num >= ARRAY_SIZE(kvm_isa_ext_arr))
+-		return -ENOENT;
++	unsigned long guest_ext;
++	int ret;
+ 
+-	host_isa_ext = kvm_isa_ext_arr[reg_num];
+-	if (!__riscv_isa_extension_available(NULL, host_isa_ext))
+-		return -ENOENT;
++	ret = kvm_riscv_vcpu_isa_check_host(reg_num, &guest_ext);
++	if (ret)
++		return ret;
+ 
+ 	*reg_val = 0;
+-	if (__riscv_isa_extension_available(vcpu->arch.isa, host_isa_ext))
++	if (__riscv_isa_extension_available(vcpu->arch.isa, guest_ext))
+ 		*reg_val = 1; /* Mark the given extension as available */
+ 
+ 	return 0;
+@@ -628,17 +655,14 @@ static int riscv_vcpu_set_isa_ext_single(struct kvm_vcpu *vcpu,
+ 					 unsigned long reg_num,
+ 					 unsigned long reg_val)
+ {
+-	unsigned long host_isa_ext;
+-
+-	if (reg_num >= KVM_RISCV_ISA_EXT_MAX ||
+-	    reg_num >= ARRAY_SIZE(kvm_isa_ext_arr))
+-		return -ENOENT;
++	unsigned long guest_ext;
++	int ret;
+ 
+-	host_isa_ext = kvm_isa_ext_arr[reg_num];
+-	if (!__riscv_isa_extension_available(NULL, host_isa_ext))
+-		return -ENOENT;
++	ret = kvm_riscv_vcpu_isa_check_host(reg_num, &guest_ext);
++	if (ret)
++		return ret;
+ 
+-	if (reg_val == test_bit(host_isa_ext, vcpu->arch.isa))
++	if (reg_val == test_bit(guest_ext, vcpu->arch.isa))
+ 		return 0;
+ 
+ 	if (!vcpu->arch.ran_atleast_once) {
+@@ -648,10 +672,10 @@ static int riscv_vcpu_set_isa_ext_single(struct kvm_vcpu *vcpu,
+ 		 */
+ 		if (reg_val == 1 &&
+ 		    kvm_riscv_vcpu_isa_enable_allowed(reg_num))
+-			set_bit(host_isa_ext, vcpu->arch.isa);
++			set_bit(guest_ext, vcpu->arch.isa);
+ 		else if (!reg_val &&
+ 			 kvm_riscv_vcpu_isa_disable_allowed(reg_num))
+-			clear_bit(host_isa_ext, vcpu->arch.isa);
++			clear_bit(guest_ext, vcpu->arch.isa);
+ 		else
+ 			return -EINVAL;
+ 		kvm_riscv_vcpu_fp_reset(vcpu);
+@@ -1009,16 +1033,15 @@ static int copy_fp_d_reg_indices(const struct kvm_vcpu *vcpu,
+ static int copy_isa_ext_reg_indices(const struct kvm_vcpu *vcpu,
+ 				u64 __user *uindices)
+ {
++	unsigned long guest_ext;
+ 	unsigned int n = 0;
+-	unsigned long isa_ext;
+ 
+ 	for (int i = 0; i < KVM_RISCV_ISA_EXT_MAX; i++) {
+ 		u64 size = IS_ENABLED(CONFIG_32BIT) ?
+ 			   KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64;
+ 		u64 reg = KVM_REG_RISCV | size | KVM_REG_RISCV_ISA_EXT | i;
+ 
+-		isa_ext = kvm_isa_ext_arr[i];
+-		if (!__riscv_isa_extension_available(NULL, isa_ext))
++		if (kvm_riscv_vcpu_isa_check_host(i, &guest_ext))
+ 			continue;
+ 
+ 		if (uindices) {
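
The RISC-V hunks above funnel every host-availability test through the new
kvm_riscv_vcpu_isa_check_host(), whose one non-trivial case is the Smnpm/Ssnpm
split explained in its comment. A minimal sketch of just that translation,
separated out for clarity (illustrative only, not part of the patch):

    /*
     * The guest-visible extension and the extension that must be
     * present on the host can differ: Smnpm is what the guest sees,
     * but the host-side VS-mode pointer-masking controls belong to
     * Ssnpm, so availability is checked against the latter.
     */
    static unsigned long host_ext_for(unsigned long guest_ext)
    {
            if (guest_ext == RISCV_ISA_EXT_SMNPM)
                    return RISCV_ISA_EXT_SSNPM;
            return guest_ext;
    }
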
+diff --git a/arch/s390/boot/startup.c b/arch/s390/boot/startup.c
+index 06316fb8e0fad8..6d435744189711 100644
+--- a/arch/s390/boot/startup.c
++++ b/arch/s390/boot/startup.c
+@@ -369,7 +369,7 @@ static unsigned long setup_kernel_memory_layout(unsigned long kernel_size)
+ 		kernel_start = round_down(kernel_end - kernel_size, THREAD_SIZE);
+ 		boot_debug("Randomization range: 0x%016lx-0x%016lx\n", vmax - kaslr_len, vmax);
+ 		boot_debug("kernel image:        0x%016lx-0x%016lx (kaslr)\n", kernel_start,
+-			   kernel_size + kernel_size);
++			   kernel_start + kernel_size);
+ 	} else if (vmax < __NO_KASLR_END_KERNEL || vsize > __NO_KASLR_END_KERNEL) {
+ 		kernel_start = round_down(vmax - kernel_size, THREAD_SIZE);
+ 		boot_debug("kernel image:        0x%016lx-0x%016lx (constrained)\n", kernel_start,
+diff --git a/arch/s390/include/asm/ap.h b/arch/s390/include/asm/ap.h
+index 395b02d6a13374..352108727d7e62 100644
+--- a/arch/s390/include/asm/ap.h
++++ b/arch/s390/include/asm/ap.h
+@@ -103,7 +103,7 @@ struct ap_tapq_hwinfo {
+ 			unsigned int accel :  1; /* A */
+ 			unsigned int ep11  :  1; /* X */
+ 			unsigned int apxa  :  1; /* APXA */
+-			unsigned int	   :  1;
++			unsigned int slcf  :  1; /* Cmd filtering avail. */
+ 			unsigned int class :  8;
+ 			unsigned int bs	   :  2; /* SE bind/assoc */
+ 			unsigned int	   : 14;
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index f244c5560e7f62..5c9789804120ab 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -719,6 +719,11 @@ static void __init memblock_add_physmem_info(void)
+ 	memblock_set_node(0, ULONG_MAX, &memblock.memory, 0);
+ }
+ 
++static void __init setup_high_memory(void)
++{
++	high_memory = __va(ident_map_size);
++}
++
+ /*
+  * Reserve memory used for lowcore.
+  */
+@@ -951,6 +956,7 @@ void __init setup_arch(char **cmdline_p)
+ 
+ 	free_physmem_info();
+ 	setup_memory_end();
++	setup_high_memory();
+ 	memblock_dump_all();
+ 	setup_memory();
+ 
+diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
+index e3a6f8ae156cd6..8f3fb69d96b2e2 100644
+--- a/arch/s390/mm/pgalloc.c
++++ b/arch/s390/mm/pgalloc.c
+@@ -176,11 +176,6 @@ void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
+ 	struct ptdesc *ptdesc = virt_to_ptdesc(pgtable);
+ 
+ 	call_rcu(&ptdesc->pt_rcu_head, pte_free_now);
+-	/*
+-	 * THPs are not allowed for KVM guests. Warn if pgste ever reaches here.
+-	 * Turn to the generic pte_free_defer() version once gmap is removed.
+-	 */
+-	WARN_ON_ONCE(mm_has_pgste(mm));
+ }
+ #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+ 
+diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
+index 448dd6ed1069b7..f48ef361bc8315 100644
+--- a/arch/s390/mm/vmem.c
++++ b/arch/s390/mm/vmem.c
+@@ -64,13 +64,12 @@ void *vmem_crst_alloc(unsigned long val)
+ 
+ pte_t __ref *vmem_pte_alloc(void)
+ {
+-	unsigned long size = PTRS_PER_PTE * sizeof(pte_t);
+ 	pte_t *pte;
+ 
+ 	if (slab_is_available())
+-		pte = (pte_t *) page_table_alloc(&init_mm);
++		pte = (pte_t *)page_table_alloc(&init_mm);
+ 	else
+-		pte = (pte_t *) memblock_alloc(size, size);
++		pte = (pte_t *)memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ 	if (!pte)
+ 		return NULL;
+ 	memset64((u64 *)pte, _PAGE_INVALID, PTRS_PER_PTE);
+diff --git a/arch/sh/Makefile b/arch/sh/Makefile
+index cab2f9c011a8db..7b420424b6d7c4 100644
+--- a/arch/sh/Makefile
++++ b/arch/sh/Makefile
+@@ -103,16 +103,16 @@ UTS_MACHINE		:= sh
+ LDFLAGS_vmlinux		+= -e _stext
+ 
+ ifdef CONFIG_CPU_LITTLE_ENDIAN
+-ld-bfd			:= elf32-sh-linux
+-LDFLAGS_vmlinux		+= --defsym jiffies=jiffies_64 --oformat $(ld-bfd)
++ld_bfd			:= elf32-sh-linux
++LDFLAGS_vmlinux		+= --defsym jiffies=jiffies_64 --oformat $(ld_bfd)
+ KBUILD_LDFLAGS		+= -EL
+ else
+-ld-bfd			:= elf32-shbig-linux
+-LDFLAGS_vmlinux		+= --defsym jiffies=jiffies_64+4 --oformat $(ld-bfd)
++ld_bfd			:= elf32-shbig-linux
++LDFLAGS_vmlinux		+= --defsym jiffies=jiffies_64+4 --oformat $(ld_bfd)
+ KBUILD_LDFLAGS		+= -EB
+ endif
+ 
+-export ld-bfd
++export ld_bfd
+ 
+ # Mach groups
+ machdir-$(CONFIG_SOLUTION_ENGINE)		+= mach-se
+diff --git a/arch/sh/boot/compressed/Makefile b/arch/sh/boot/compressed/Makefile
+index 8bc319ff54bf93..58df491778b29a 100644
+--- a/arch/sh/boot/compressed/Makefile
++++ b/arch/sh/boot/compressed/Makefile
+@@ -27,7 +27,7 @@ endif
+ 
+ ccflags-remove-$(CONFIG_MCOUNT) += -pg
+ 
+-LDFLAGS_vmlinux := --oformat $(ld-bfd) -Ttext $(IMAGE_OFFSET) -e startup \
++LDFLAGS_vmlinux := --oformat $(ld_bfd) -Ttext $(IMAGE_OFFSET) -e startup \
+ 		   -T $(obj)/../../kernel/vmlinux.lds
+ 
+ KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
+@@ -51,7 +51,7 @@ $(obj)/vmlinux.bin.lzo: $(obj)/vmlinux.bin FORCE
+ 
+ OBJCOPYFLAGS += -R .empty_zero_page
+ 
+-LDFLAGS_piggy.o := -r --format binary --oformat $(ld-bfd) -T
++LDFLAGS_piggy.o := -r --format binary --oformat $(ld_bfd) -T
+ 
+ $(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.$(suffix_y) FORCE
+ 	$(call if_changed,ld)
+diff --git a/arch/sh/boot/romimage/Makefile b/arch/sh/boot/romimage/Makefile
+index c7c8be58400cd9..17b03df0a8de4d 100644
+--- a/arch/sh/boot/romimage/Makefile
++++ b/arch/sh/boot/romimage/Makefile
+@@ -13,7 +13,7 @@ mmcif-obj-$(CONFIG_CPU_SUBTYPE_SH7724)	:= $(obj)/mmcif-sh7724.o
+ load-$(CONFIG_ROMIMAGE_MMCIF)		:= $(mmcif-load-y)
+ obj-$(CONFIG_ROMIMAGE_MMCIF)		:= $(mmcif-obj-y)
+ 
+-LDFLAGS_vmlinux := --oformat $(ld-bfd) -Ttext $(load-y) -e romstart \
++LDFLAGS_vmlinux := --oformat $(ld_bfd) -Ttext $(load-y) -e romstart \
+ 		   -T $(obj)/../../kernel/vmlinux.lds
+ 
+ $(obj)/vmlinux: $(obj)/head.o $(obj-y) $(obj)/piggy.o FORCE
+@@ -24,7 +24,7 @@ OBJCOPYFLAGS += -j .empty_zero_page
+ $(obj)/zeropage.bin: vmlinux FORCE
+ 	$(call if_changed,objcopy)
+ 
+-LDFLAGS_piggy.o := -r --format binary --oformat $(ld-bfd) -T
++LDFLAGS_piggy.o := -r --format binary --oformat $(ld_bfd) -T
+ 
+ $(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/zeropage.bin arch/sh/boot/zImage FORCE
+ 	$(call if_changed,ld)
+diff --git a/arch/um/drivers/rtc_user.c b/arch/um/drivers/rtc_user.c
+index 51e79f3148cd40..67912fcf7b2864 100644
+--- a/arch/um/drivers/rtc_user.c
++++ b/arch/um/drivers/rtc_user.c
+@@ -28,7 +28,7 @@ int uml_rtc_start(bool timetravel)
+ 	int err;
+ 
+ 	if (timetravel) {
+-		int err = os_pipe(uml_rtc_irq_fds, 1, 1);
++		err = os_pipe(uml_rtc_irq_fds, 1, 1);
+ 		if (err)
+ 			goto fail;
+ 	} else {
+diff --git a/arch/x86/boot/cpuflags.c b/arch/x86/boot/cpuflags.c
+index 916bac09b464da..63e037e94e4c03 100644
+--- a/arch/x86/boot/cpuflags.c
++++ b/arch/x86/boot/cpuflags.c
+@@ -106,5 +106,18 @@ void get_cpuflags(void)
+ 			cpuid(0x80000001, &ignored, &ignored, &cpu.flags[6],
+ 			      &cpu.flags[1]);
+ 		}
++
++		if (max_amd_level >= 0x8000001f) {
++			u32 ebx;
++
++			/*
++			 * The X86_FEATURE_COHERENCY_SFW_NO feature bit is in
++			 * the virtualization flags entry (word 8) and set by
++			 * scattered.c, so the bit needs to be explicitly set.
++			 */
++			cpuid(0x8000001f, &ignored, &ebx, &ignored, &ignored);
++			if (ebx & BIT(31))
++				set_bit(X86_FEATURE_COHERENCY_SFW_NO, cpu.flags);
++		}
+ 	}
+ }
+diff --git a/arch/x86/coco/sev/shared.c b/arch/x86/coco/sev/shared.c
+index 2e4122f8aa6b1e..383afc41a718b9 100644
+--- a/arch/x86/coco/sev/shared.c
++++ b/arch/x86/coco/sev/shared.c
+@@ -1254,6 +1254,24 @@ static void svsm_pval_terminate(struct svsm_pvalidate_call *pc, int ret, u64 svs
+ 	__pval_terminate(pfn, action, page_size, ret, svsm_ret);
+ }
+ 
++static inline void sev_evict_cache(void *va, int npages)
++{
++	volatile u8 val __always_unused;
++	u8 *bytes = va;
++	int page_idx;
++
++	/*
++	 * For SEV guests, a read from the first/last cache-lines of a 4K page
++	 * using the guest key is sufficient to cause a flush of all cache-lines
++	 * associated with that 4K page without incurring all the overhead of a
++	 * full CLFLUSH sequence.
++	 */
++	for (page_idx = 0; page_idx < npages; page_idx++) {
++		val = bytes[page_idx * PAGE_SIZE];
++		val = bytes[page_idx * PAGE_SIZE + PAGE_SIZE - 1];
++	}
++}
++
+ static void __head svsm_pval_4k_page(unsigned long paddr, bool validate)
+ {
+ 	struct svsm_pvalidate_call *pc;
+@@ -1307,6 +1325,13 @@ static void __head pvalidate_4k_page(unsigned long vaddr, unsigned long paddr,
+ 		if (ret)
+ 			sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
+ 	}
++
++	/*
++	 * If validating memory (making it private) and affected by the
++	 * cache-coherency vulnerability, perform the cache eviction mitigation.
++	 */
++	if (validate && !has_cpuflag(X86_FEATURE_COHERENCY_SFW_NO))
++		sev_evict_cache((void *)vaddr, 1);
+ }
+ 
+ static void pval_pages(struct snp_psc_desc *desc)
+@@ -1491,10 +1516,31 @@ static void svsm_pval_pages(struct snp_psc_desc *desc)
+ 
+ static void pvalidate_pages(struct snp_psc_desc *desc)
+ {
++	struct psc_entry *e;
++	unsigned int i;
++
+ 	if (snp_vmpl)
+ 		svsm_pval_pages(desc);
+ 	else
+ 		pval_pages(desc);
++
++	/*
++	 * If not affected by the cache-coherency vulnerability there is no need
++	 * to perform the cache eviction mitigation.
++	 */
++	if (cpu_feature_enabled(X86_FEATURE_COHERENCY_SFW_NO))
++		return;
++
++	for (i = 0; i <= desc->hdr.end_entry; i++) {
++		e = &desc->entries[i];
++
++		/*
++		 * If validating memory (making it private) perform the cache
++		 * eviction mitigation.
++		 */
++		if (e->operation == SNP_PAGE_STATE_PRIVATE)
++			sev_evict_cache(pfn_to_kaddr(e->gfn), e->pagesize ? 512 : 1);
++	}
+ }
+ 
+ static int vmgexit_psc(struct ghcb *ghcb, struct snp_psc_desc *desc)
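
sev_evict_cache() relies on the SEV property its comment describes: touching
one cache line of a guest-keyed 4K page flushes every line of that page, so
two reads per page replace a full CLFLUSH sweep. The page-count arithmetic in
the PSC path condenses to the following (a sketch; PMD_SIZE / PAGE_SIZE is
512 for a 2M region on x86):

    /* One PSC entry covers either a single 4K page or a 2M region. */
    unsigned int npages = e->pagesize ? PMD_SIZE / PAGE_SIZE : 1;

    if (e->operation == SNP_PAGE_STATE_PRIVATE)
            sev_evict_cache(pfn_to_kaddr(e->gfn), npages);
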
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 6f894740663c16..302f3d4b47bfff 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -218,6 +218,7 @@
+ #define X86_FEATURE_FLEXPRIORITY	( 8*32+ 1) /* "flexpriority" Intel FlexPriority */
+ #define X86_FEATURE_EPT			( 8*32+ 2) /* "ept" Intel Extended Page Table */
+ #define X86_FEATURE_VPID		( 8*32+ 3) /* "vpid" Intel Virtual Processor ID */
++#define X86_FEATURE_COHERENCY_SFW_NO	( 8*32+ 4) /* SNP cache coherency software work around not needed */
+ 
+ #define X86_FEATURE_VMMCALL		( 8*32+15) /* "vmmcall" Prefer VMMCALL to VMCALL */
+ #define X86_FEATURE_XENPV		( 8*32+16) /* Xen paravirtual guest */
+diff --git a/arch/x86/include/asm/hw_irq.h b/arch/x86/include/asm/hw_irq.h
+index 162ebd73a6981e..cbe19e6690801a 100644
+--- a/arch/x86/include/asm/hw_irq.h
++++ b/arch/x86/include/asm/hw_irq.h
+@@ -92,8 +92,6 @@ struct irq_cfg {
+ 
+ extern struct irq_cfg *irq_cfg(unsigned int irq);
+ extern struct irq_cfg *irqd_cfg(struct irq_data *irq_data);
+-extern void lock_vector_lock(void);
+-extern void unlock_vector_lock(void);
+ #ifdef CONFIG_SMP
+ extern void vector_schedule_cleanup(struct irq_cfg *);
+ extern void irq_complete_move(struct irq_cfg *cfg);
+@@ -101,12 +99,16 @@ extern void irq_complete_move(struct irq_cfg *cfg);
+ static inline void vector_schedule_cleanup(struct irq_cfg *c) { }
+ static inline void irq_complete_move(struct irq_cfg *c) { }
+ #endif
+-
+ extern void apic_ack_edge(struct irq_data *data);
+-#else	/*  CONFIG_IRQ_DOMAIN_HIERARCHY */
++#endif /* CONFIG_IRQ_DOMAIN_HIERARCHY */
++
++#ifdef CONFIG_X86_LOCAL_APIC
++extern void lock_vector_lock(void);
++extern void unlock_vector_lock(void);
++#else
+ static inline void lock_vector_lock(void) {}
+ static inline void unlock_vector_lock(void) {}
+-#endif	/* CONFIG_IRQ_DOMAIN_HIERARCHY */
++#endif
+ 
+ /* Statistics */
+ extern atomic_t irq_err_count;
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 2333f4e7bc2f1a..8a62ab0ac5c028 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -419,6 +419,7 @@
+ #define DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI	(1UL << 12)
+ #define DEBUGCTLMSR_FREEZE_IN_SMM_BIT	14
+ #define DEBUGCTLMSR_FREEZE_IN_SMM	(1UL << DEBUGCTLMSR_FREEZE_IN_SMM_BIT)
++#define DEBUGCTLMSR_RTM_DEBUG		BIT(15)
+ 
+ #define MSR_PEBS_FRONTEND		0x000003f7
+ 
+diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
+index 8e1b087ca936c1..637648bd7ddfa3 100644
+--- a/arch/x86/kernel/cpu/scattered.c
++++ b/arch/x86/kernel/cpu/scattered.c
+@@ -47,6 +47,7 @@ static const struct cpuid_bit cpuid_bits[] = {
+ 	{ X86_FEATURE_PROC_FEEDBACK,		CPUID_EDX, 11, 0x80000007, 0 },
+ 	{ X86_FEATURE_AMD_FAST_CPPC,		CPUID_EDX, 15, 0x80000007, 0 },
+ 	{ X86_FEATURE_MBA,			CPUID_EBX,  6, 0x80000008, 0 },
++	{ X86_FEATURE_COHERENCY_SFW_NO,		CPUID_EBX, 31, 0x8000001f, 0 },
+ 	{ X86_FEATURE_SMBA,			CPUID_EBX,  2, 0x80000020, 0 },
+ 	{ X86_FEATURE_BMEC,			CPUID_EBX,  3, 0x80000020, 0 },
+ 	{ X86_FEATURE_TSA_SQ_NO,		CPUID_ECX,  1, 0x80000021, 0 },
+diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
+index 6cd5d2d6c58af6..3bdf9f9003c721 100644
+--- a/arch/x86/kernel/irq.c
++++ b/arch/x86/kernel/irq.c
+@@ -256,26 +256,59 @@ static __always_inline void handle_irq(struct irq_desc *desc,
+ 		__handle_irq(desc, regs);
+ }
+ 
+-static __always_inline int call_irq_handler(int vector, struct pt_regs *regs)
++static struct irq_desc *reevaluate_vector(int vector)
+ {
+-	struct irq_desc *desc;
+-	int ret = 0;
++	struct irq_desc *desc = __this_cpu_read(vector_irq[vector]);
++
++	if (!IS_ERR_OR_NULL(desc))
++		return desc;
++
++	if (desc == VECTOR_UNUSED)
++		pr_emerg_ratelimited("No irq handler for %d.%u\n", smp_processor_id(), vector);
++	else
++		__this_cpu_write(vector_irq[vector], VECTOR_UNUSED);
++	return NULL;
++}
++
++static __always_inline bool call_irq_handler(int vector, struct pt_regs *regs)
++{
++	struct irq_desc *desc = __this_cpu_read(vector_irq[vector]);
+ 
+-	desc = __this_cpu_read(vector_irq[vector]);
+ 	if (likely(!IS_ERR_OR_NULL(desc))) {
+ 		handle_irq(desc, regs);
+-	} else {
+-		ret = -EINVAL;
+-		if (desc == VECTOR_UNUSED) {
+-			pr_emerg_ratelimited("%s: %d.%u No irq handler for vector\n",
+-					     __func__, smp_processor_id(),
+-					     vector);
+-		} else {
+-			__this_cpu_write(vector_irq[vector], VECTOR_UNUSED);
+-		}
++		return true;
+ 	}
+ 
+-	return ret;
++	/*
++	 * Reevaluate with vector_lock held to prevent a race against
++	 * request_irq() setting up the vector:
++	 *
++	 * CPU0				CPU1
++	 *				interrupt is raised in APIC IRR
++	 *				but not handled
++	 * free_irq()
++	 *   per_cpu(vector_irq, CPU1)[vector] = VECTOR_SHUTDOWN;
++	 *
++	 * request_irq()		common_interrupt()
++	 *				  d = this_cpu_read(vector_irq[vector]);
++	 *
++	 * per_cpu(vector_irq, CPU1)[vector] = desc;
++	 *
++	 *				  if (d == VECTOR_SHUTDOWN)
++	 *				    this_cpu_write(vector_irq[vector], VECTOR_UNUSED);
++	 *
++	 * This requires that the same vector on the same target CPU is
++	 * handed out or that a spurious interrupt hits that CPU/vector.
++	 */
++	lock_vector_lock();
++	desc = reevaluate_vector(vector);
++	unlock_vector_lock();
++
++	if (!desc)
++		return false;
++
++	handle_irq(desc, regs);
++	return true;
+ }
+ 
+ /*
+@@ -289,7 +322,7 @@ DEFINE_IDTENTRY_IRQ(common_interrupt)
+ 	/* entry code tells RCU that we're not quiescent.  Check it. */
+ 	RCU_LOCKDEP_WARN(!rcu_is_watching(), "IRQ failed to wake up RCU");
+ 
+-	if (unlikely(call_irq_handler(vector, regs)))
++	if (unlikely(!call_irq_handler(vector, regs)))
+ 		apic_eoi();
+ 
+ 	set_irq_regs(old_regs);
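
The rewritten call_irq_handler() turns the error return into a boolean and,
before giving up, re-reads the per-CPU vector entry under vector_lock to
close the free_irq()/request_irq() race drawn in the comment. The resulting
shape (condensed sketch; names as in the patch):

    /* Fast path: lockless per-CPU read. */
    desc = __this_cpu_read(vector_irq[vector]);
    if (!IS_ERR_OR_NULL(desc)) {
            handle_irq(desc, regs);
            return true;
    }

    /* Slow path: re-read under vector_lock, since a concurrent
     * request_irq() may have installed the descriptor after the
     * lockless read observed VECTOR_SHUTDOWN. */
    lock_vector_lock();
    desc = reevaluate_vector(vector);
    unlock_vector_lock();

If no handler ran, the caller still issues apic_eoi() so the vector is
acknowledged rather than left pending.
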
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index eec0aa13e00297..9ed43bd03c9d5c 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -2192,6 +2192,10 @@ static u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated
+ 	    (host_initiated || intel_pmu_lbr_is_enabled(vcpu)))
+ 		debugctl |= DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI;
+ 
++	if (boot_cpu_has(X86_FEATURE_RTM) &&
++	    (host_initiated || guest_cpu_cap_has(vcpu, X86_FEATURE_RTM)))
++		debugctl |= DEBUGCTLMSR_RTM_DEBUG;
++
+ 	return debugctl;
+ }
+ 
+diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
+index 51986e8a9d3535..52e22d3e1a82f8 100644
+--- a/arch/x86/mm/extable.c
++++ b/arch/x86/mm/extable.c
+@@ -122,13 +122,12 @@ static bool ex_handler_sgx(const struct exception_table_entry *fixup,
+ static bool ex_handler_fprestore(const struct exception_table_entry *fixup,
+ 				 struct pt_regs *regs)
+ {
+-	regs->ip = ex_fixup_addr(fixup);
+-
+ 	WARN_ONCE(1, "Bad FPU state detected at %pB, reinitializing FPU registers.",
+ 		  (void *)instruction_pointer(regs));
+ 
+ 	fpu_reset_from_exception_fixup();
+-	return true;
++
++	return ex_handler_default(fixup, regs);
+ }
+ 
+ /*
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index 4817e7ca03f83c..47a31e1c090937 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -186,6 +186,8 @@ static void blk_atomic_writes_update_limits(struct queue_limits *lim)
+ static void blk_validate_atomic_write_limits(struct queue_limits *lim)
+ {
+ 	unsigned int boundary_sectors;
++	unsigned int atomic_write_hw_max_sectors =
++			lim->atomic_write_hw_max >> SECTOR_SHIFT;
+ 
+ 	if (!(lim->features & BLK_FEAT_ATOMIC_WRITES))
+ 		goto unsupported;
+@@ -207,6 +209,10 @@ static void blk_validate_atomic_write_limits(struct queue_limits *lim)
+ 			 lim->atomic_write_hw_max))
+ 		goto unsupported;
+ 
++	if (WARN_ON_ONCE(lim->chunk_sectors &&
++			atomic_write_hw_max_sectors > lim->chunk_sectors))
++		goto unsupported;
++
+ 	boundary_sectors = lim->atomic_write_hw_boundary >> SECTOR_SHIFT;
+ 
+ 	if (boundary_sectors) {
+@@ -341,12 +347,19 @@ int blk_validate_limits(struct queue_limits *lim)
+ 	lim->max_discard_sectors =
+ 		min(lim->max_hw_discard_sectors, lim->max_user_discard_sectors);
+ 
++	/*
++	 * When discard is not supported, discard_granularity should be reported
++	 * as 0 to userspace.
++	 */
++	if (lim->max_discard_sectors)
++		lim->discard_granularity =
++			max(lim->discard_granularity, lim->physical_block_size);
++	else
++		lim->discard_granularity = 0;
++
+ 	if (!lim->max_discard_segments)
+ 		lim->max_discard_segments = 1;
+ 
+-	if (lim->discard_granularity < lim->physical_block_size)
+-		lim->discard_granularity = lim->physical_block_size;
+-
+ 	/*
+ 	 * By default there is no limit on the segment boundary alignment,
+ 	 * but if there is one it can't be smaller than the page size as
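
The discard hunk replaces an unconditional clamp with a rule userspace can
rely on: discard_granularity is zero exactly when the device reports no
discard support, and at least the physical block size otherwise. Stated as
an invariant (a sketch, not code from the patch):

    /* Post-condition of blk_validate_limits() for discard: */
    if (lim->max_discard_sectors)
            WARN_ON(lim->discard_granularity < lim->physical_block_size);
    else
            WARN_ON(lim->discard_granularity != 0);
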
+diff --git a/crypto/krb5/selftest.c b/crypto/krb5/selftest.c
+index 2a81a6315a0d0b..4519c572d37ef5 100644
+--- a/crypto/krb5/selftest.c
++++ b/crypto/krb5/selftest.c
+@@ -152,6 +152,7 @@ static int krb5_test_one_prf(const struct krb5_prf_test *test)
+ 
+ out:
+ 	clear_buf(&result);
++	clear_buf(&prf);
+ 	clear_buf(&octet);
+ 	clear_buf(&key);
+ 	return ret;
+diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
+index 0d619df03fa933..fe3a0b8377dbe0 100644
+--- a/drivers/block/mtip32xx/mtip32xx.c
++++ b/drivers/block/mtip32xx/mtip32xx.c
+@@ -2040,11 +2040,12 @@ static int mtip_hw_ioctl(struct driver_data *dd, unsigned int cmd,
+  * @dir      Direction (read or write)
+  *
+  * return value
+- *	None
++ *	0	The IO completed successfully.
++ *	-ENOMEM	The DMA mapping failed.
+  */
+-static void mtip_hw_submit_io(struct driver_data *dd, struct request *rq,
+-			      struct mtip_cmd *command,
+-			      struct blk_mq_hw_ctx *hctx)
++static int mtip_hw_submit_io(struct driver_data *dd, struct request *rq,
++			     struct mtip_cmd *command,
++			     struct blk_mq_hw_ctx *hctx)
+ {
+ 	struct mtip_cmd_hdr *hdr =
+ 		dd->port->command_list + sizeof(struct mtip_cmd_hdr) * rq->tag;
+@@ -2056,12 +2057,14 @@ static void mtip_hw_submit_io(struct driver_data *dd, struct request *rq,
+ 	unsigned int nents;
+ 
+ 	/* Map the scatter list for DMA access */
+-	nents = blk_rq_map_sg(rq, command->sg);
+-	nents = dma_map_sg(&dd->pdev->dev, command->sg, nents, dma_dir);
++	command->scatter_ents = blk_rq_map_sg(rq, command->sg);
++	nents = dma_map_sg(&dd->pdev->dev, command->sg,
++			   command->scatter_ents, dma_dir);
++	if (!nents)
++		return -ENOMEM;
+ 
+-	prefetch(&port->flags);
+ 
+-	command->scatter_ents = nents;
++	prefetch(&port->flags);
+ 
+ 	/*
+ 	 * The number of retries for this command before it is
+@@ -2112,11 +2115,13 @@ static void mtip_hw_submit_io(struct driver_data *dd, struct request *rq,
+ 	if (unlikely(port->flags & MTIP_PF_PAUSE_IO)) {
+ 		set_bit(rq->tag, port->cmds_to_issue);
+ 		set_bit(MTIP_PF_ISSUE_CMDS_BIT, &port->flags);
+-		return;
++		return 0;
+ 	}
+ 
+ 	/* Issue the command to the hardware */
+ 	mtip_issue_ncq_command(port, rq->tag);
++
++	return 0;
+ }
+ 
+ /*
+@@ -3315,7 +3320,9 @@ static blk_status_t mtip_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 
+ 	blk_mq_start_request(rq);
+ 
+-	mtip_hw_submit_io(dd, rq, cmd, hctx);
++	if (mtip_hw_submit_io(dd, rq, cmd, hctx))
++		return BLK_STS_IOERR;
++
+ 	return BLK_STS_OK;
+ }
+ 
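
The mtip change exists because dma_map_sg() can legitimately return 0 when
mapping fails; a void submit path had no way to report that, so the request
was previously issued with an unmapped scatterlist. The rule the fix applies,
in generic form (names illustrative):

    /* dma_map_sg() returns the number of mapped entries, or 0 on
     * failure; 0 must not be treated as success. */
    int mapped = dma_map_sg(dev, sgl, nents, dir);
    if (!mapped)
            return -ENOMEM;   /* the caller converts this to BLK_STS_IOERR */
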
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 0e017eae97fb1d..066231e66f037d 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -2373,7 +2373,7 @@ static void ublk_deinit_queues(struct ublk_device *ub)
+ 
+ 	for (i = 0; i < nr_queues; i++)
+ 		ublk_deinit_queue(ub, i);
+-	kfree(ub->__queues);
++	kvfree(ub->__queues);
+ }
+ 
+ static int ublk_init_queues(struct ublk_device *ub)
+@@ -2384,7 +2384,7 @@ static int ublk_init_queues(struct ublk_device *ub)
+ 	int i, ret = -ENOMEM;
+ 
+ 	ub->queue_size = ubq_size;
+-	ub->__queues = kcalloc(nr_queues, ubq_size, GFP_KERNEL);
++	ub->__queues = kvcalloc(nr_queues, ubq_size, GFP_KERNEL);
+ 	if (!ub->__queues)
+ 		return ret;
+ 
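
Switching ub->__queues to kvcalloc() lets a large queue array (nr_queues *
ubq_size can grow beyond what the page allocator will hand out contiguously)
fall back to vmalloc. The one rule such a change must not break is that kv*
allocations are released with kvfree(), which is why both call sites change
together. The pairing in miniature (illustrative):

    void *p = kvcalloc(nr, size, GFP_KERNEL);   /* kmalloc or vmalloc backed */
    if (!p)
            return -ENOMEM;
    /* ... use p ... */
    kvfree(p);    /* correct for either backing store; kfree() would not be */
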
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 6f2fd043fd3fa6..a2ba32319b6f6e 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -516,6 +516,10 @@ static const struct usb_device_id quirks_table[] = {
+ 	{ USB_DEVICE(0x0bda, 0xb850), .driver_info = BTUSB_REALTEK },
+ 	{ USB_DEVICE(0x13d3, 0x3600), .driver_info = BTUSB_REALTEK },
+ 
++	/* Realtek 8851BU Bluetooth devices */
++	{ USB_DEVICE(0x3625, 0x010b), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++
+ 	/* Realtek 8852AE Bluetooth devices */
+ 	{ USB_DEVICE(0x0bda, 0x2852), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
+diff --git a/drivers/bus/mhi/host/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c
+index 059cfd77382f06..cd274f4dae938c 100644
+--- a/drivers/bus/mhi/host/pci_generic.c
++++ b/drivers/bus/mhi/host/pci_generic.c
+@@ -593,8 +593,8 @@ static const struct mhi_pci_dev_info mhi_foxconn_dw5932e_info = {
+ 	.sideband_wake = false,
+ };
+ 
+-static const struct mhi_pci_dev_info mhi_foxconn_t99w515_info = {
+-	.name = "foxconn-t99w515",
++static const struct mhi_pci_dev_info mhi_foxconn_t99w640_info = {
++	.name = "foxconn-t99w640",
+ 	.edl = "qcom/sdx72m/foxconn/edl.mbn",
+ 	.edl_trigger = true,
+ 	.config = &modem_foxconn_sdx72_config,
+@@ -920,9 +920,9 @@ static const struct pci_device_id mhi_pci_id_table[] = {
+ 	/* DW5932e (sdx62), Non-eSIM */
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0f9),
+ 		.driver_data = (kernel_ulong_t) &mhi_foxconn_dw5932e_info },
+-	/* T99W515 (sdx72) */
++	/* T99W640 (sdx72) */
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe118),
+-		.driver_data = (kernel_ulong_t) &mhi_foxconn_t99w515_info },
++		.driver_data = (kernel_ulong_t) &mhi_foxconn_t99w640_info },
+ 	/* DW5934e(sdx72), With eSIM */
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe11d),
+ 		.driver_data = (kernel_ulong_t) &mhi_foxconn_dw5934e_info },
+diff --git a/drivers/char/hw_random/mtk-rng.c b/drivers/char/hw_random/mtk-rng.c
+index 1e3048f2bb38f0..6c4e40d0365f00 100644
+--- a/drivers/char/hw_random/mtk-rng.c
++++ b/drivers/char/hw_random/mtk-rng.c
+@@ -142,7 +142,9 @@ static int mtk_rng_probe(struct platform_device *pdev)
+ 	dev_set_drvdata(&pdev->dev, priv);
+ 	pm_runtime_set_autosuspend_delay(&pdev->dev, RNG_AUTOSUSPEND_TIMEOUT);
+ 	pm_runtime_use_autosuspend(&pdev->dev);
+-	devm_pm_runtime_enable(&pdev->dev);
++	ret = devm_pm_runtime_enable(&pdev->dev);
++	if (ret)
++		return ret;
+ 
+ 	dev_info(&pdev->dev, "registered RNG driver\n");
+ 
+diff --git a/drivers/clk/at91/sam9x7.c b/drivers/clk/at91/sam9x7.c
+index cbb8b220f16bcd..ffab32b047a017 100644
+--- a/drivers/clk/at91/sam9x7.c
++++ b/drivers/clk/at91/sam9x7.c
+@@ -61,44 +61,44 @@ static const struct clk_master_layout sam9x7_master_layout = {
+ 
+ /* Fractional PLL core output range. */
+ static const struct clk_range plla_core_outputs[] = {
+-	{ .min = 375000000, .max = 1600000000 },
++	{ .min = 800000000, .max = 1600000000 },
+ };
+ 
+ static const struct clk_range upll_core_outputs[] = {
+-	{ .min = 600000000, .max = 1200000000 },
++	{ .min = 600000000, .max = 960000000 },
+ };
+ 
+ static const struct clk_range lvdspll_core_outputs[] = {
+-	{ .min = 400000000, .max = 800000000 },
++	{ .min = 600000000, .max = 1200000000 },
+ };
+ 
+ static const struct clk_range audiopll_core_outputs[] = {
+-	{ .min = 400000000, .max = 800000000 },
++	{ .min = 600000000, .max = 1200000000 },
+ };
+ 
+ static const struct clk_range plladiv2_core_outputs[] = {
+-	{ .min = 375000000, .max = 1600000000 },
++	{ .min = 800000000, .max = 1600000000 },
+ };
+ 
+ /* Fractional PLL output range. */
+ static const struct clk_range plla_outputs[] = {
+-	{ .min = 732421, .max = 800000000 },
++	{ .min = 400000000, .max = 800000000 },
+ };
+ 
+ static const struct clk_range upll_outputs[] = {
+-	{ .min = 300000000, .max = 600000000 },
++	{ .min = 300000000, .max = 480000000 },
+ };
+ 
+ static const struct clk_range lvdspll_outputs[] = {
+-	{ .min = 10000000, .max = 800000000 },
++	{ .min = 175000000, .max = 550000000 },
+ };
+ 
+ static const struct clk_range audiopll_outputs[] = {
+-	{ .min = 10000000, .max = 800000000 },
++	{ .min = 0, .max = 300000000 },
+ };
+ 
+ static const struct clk_range plladiv2_outputs[] = {
+-	{ .min = 366210, .max = 400000000 },
++	{ .min = 200000000, .max = 400000000 },
+ };
+ 
+ /* PLL characteristics. */
+diff --git a/drivers/clk/clk-axi-clkgen.c b/drivers/clk/clk-axi-clkgen.c
+index 934e53a96dddac..00bf799964c61a 100644
+--- a/drivers/clk/clk-axi-clkgen.c
++++ b/drivers/clk/clk-axi-clkgen.c
+@@ -118,7 +118,7 @@ static const struct axi_clkgen_limits axi_clkgen_zynqmp_default_limits = {
+ 
+ static const struct axi_clkgen_limits axi_clkgen_zynq_default_limits = {
+ 	.fpfd_min = 10000,
+-	.fpfd_max = 300000,
++	.fpfd_max = 450000,
+ 	.fvco_min = 600000,
+ 	.fvco_max = 1200000,
+ };
+diff --git a/drivers/clk/davinci/psc.c b/drivers/clk/davinci/psc.c
+index b48322176c2101..f3ee9397bb0c78 100644
+--- a/drivers/clk/davinci/psc.c
++++ b/drivers/clk/davinci/psc.c
+@@ -277,6 +277,11 @@ davinci_lpsc_clk_register(struct device *dev, const char *name,
+ 
+ 	lpsc->pm_domain.name = devm_kasprintf(dev, GFP_KERNEL, "%s: %s",
+ 					      best_dev_name(dev), name);
++	if (!lpsc->pm_domain.name) {
++		clk_hw_unregister(&lpsc->hw);
++		kfree(lpsc);
++		return ERR_PTR(-ENOMEM);
++	}
+ 	lpsc->pm_domain.attach_dev = davinci_psc_genpd_attach_dev;
+ 	lpsc->pm_domain.detach_dev = davinci_psc_genpd_detach_dev;
+ 	lpsc->pm_domain.flags = GENPD_FLAG_PM_CLK;
+diff --git a/drivers/clk/imx/clk-imx95-blk-ctl.c b/drivers/clk/imx/clk-imx95-blk-ctl.c
+index cc2ee2be18195f..86bdcd21753102 100644
+--- a/drivers/clk/imx/clk-imx95-blk-ctl.c
++++ b/drivers/clk/imx/clk-imx95-blk-ctl.c
+@@ -342,8 +342,10 @@ static int imx95_bc_probe(struct platform_device *pdev)
+ 	if (!clk_hw_data)
+ 		return -ENOMEM;
+ 
+-	if (bc_data->rpm_enabled)
+-		pm_runtime_enable(&pdev->dev);
++	if (bc_data->rpm_enabled) {
++		devm_pm_runtime_enable(&pdev->dev);
++		pm_runtime_resume_and_get(&pdev->dev);
++	}
+ 
+ 	clk_hw_data->num = bc_data->num_clks;
+ 	hws = clk_hw_data->hws;
+@@ -383,8 +385,10 @@ static int imx95_bc_probe(struct platform_device *pdev)
+ 		goto cleanup;
+ 	}
+ 
+-	if (pm_runtime_enabled(bc->dev))
++	if (pm_runtime_enabled(bc->dev)) {
++		pm_runtime_put_sync(&pdev->dev);
+ 		clk_disable_unprepare(bc->clk_apb);
++	}
+ 
+ 	return 0;
+ 
+@@ -395,9 +399,6 @@ static int imx95_bc_probe(struct platform_device *pdev)
+ 		clk_hw_unregister(hws[i]);
+ 	}
+ 
+-	if (bc_data->rpm_enabled)
+-		pm_runtime_disable(&pdev->dev);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/clk/renesas/rzv2h-cpg.c b/drivers/clk/renesas/rzv2h-cpg.c
+index 2b9771ab2b3fad..43d2e73f9601f4 100644
+--- a/drivers/clk/renesas/rzv2h-cpg.c
++++ b/drivers/clk/renesas/rzv2h-cpg.c
+@@ -323,6 +323,7 @@ rzv2h_cpg_ddiv_clk_register(const struct cpg_core_clk *core,
+ 	init.ops = &rzv2h_ddiv_clk_divider_ops;
+ 	init.parent_names = &parent_name;
+ 	init.num_parents = 1;
++	init.flags = CLK_SET_RATE_PARENT;
+ 
+ 	ddiv->priv = priv;
+ 	ddiv->mon = cfg_ddiv.monbit;
+diff --git a/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c b/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
+index 579a81bb46df39..7744fc632ea6d1 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
++++ b/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
+@@ -347,8 +347,7 @@ static SUNXI_CCU_GATE(dram_ohci_clk,	"dram-ohci",	"dram",
+ 
+ static const char * const de_parents[] = { "pll-video", "pll-periph0" };
+ static SUNXI_CCU_M_WITH_MUX_GATE(de_clk, "de", de_parents,
+-				 0x104, 0, 4, 24, 2, BIT(31),
+-				 CLK_SET_RATE_PARENT);
++				 0x104, 0, 4, 24, 3, BIT(31), 0);
+ 
+ static const char * const tcon_parents[] = { "pll-video" };
+ static SUNXI_CCU_M_WITH_MUX_GATE(tcon_clk, "tcon", tcon_parents,
+diff --git a/drivers/clk/thead/clk-th1520-ap.c b/drivers/clk/thead/clk-th1520-ap.c
+index 4c9555fc61844d..6ab89245af1217 100644
+--- a/drivers/clk/thead/clk-th1520-ap.c
++++ b/drivers/clk/thead/clk-th1520-ap.c
+@@ -582,7 +582,14 @@ static const struct clk_parent_data peri2sys_apb_pclk_pd[] = {
+ 	{ .hw = &peri2sys_apb_pclk.common.hw }
+ };
+ 
+-static CLK_FIXED_FACTOR_FW_NAME(osc12m_clk, "osc_12m", "osc_24m", 2, 1, 0);
++static struct clk_fixed_factor osc12m_clk = {
++	.div		= 2,
++	.mult		= 1,
++	.hw.init	= CLK_HW_INIT_PARENTS_DATA("osc_12m",
++						   osc_24m_clk,
++						   &clk_fixed_factor_ops,
++						   0),
++};
+ 
+ static const char * const out_parents[] = { "osc_24m", "osc_12m" };
+ 
+diff --git a/drivers/clk/xilinx/clk-xlnx-clock-wizard.c b/drivers/clk/xilinx/clk-xlnx-clock-wizard.c
+index bbf7714480e7b7..0295a13a811cf8 100644
+--- a/drivers/clk/xilinx/clk-xlnx-clock-wizard.c
++++ b/drivers/clk/xilinx/clk-xlnx-clock-wizard.c
+@@ -669,7 +669,7 @@ static long clk_wzrd_ver_round_rate_all(struct clk_hw *hw, unsigned long rate,
+ 	u32 m, d, o, div, f;
+ 	int err;
+ 
+-	err = clk_wzrd_get_divisors(hw, rate, *prate);
++	err = clk_wzrd_get_divisors_ver(hw, rate, *prate);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/clk/xilinx/xlnx_vcu.c b/drivers/clk/xilinx/xlnx_vcu.c
+index 81501b48412ee6..88b3fd8250c202 100644
+--- a/drivers/clk/xilinx/xlnx_vcu.c
++++ b/drivers/clk/xilinx/xlnx_vcu.c
+@@ -587,8 +587,8 @@ static void xvcu_unregister_clock_provider(struct xvcu_device *xvcu)
+ 		xvcu_clk_hw_unregister_leaf(hws[CLK_XVCU_ENC_MCU]);
+ 	if (!IS_ERR_OR_NULL(hws[CLK_XVCU_ENC_CORE]))
+ 		xvcu_clk_hw_unregister_leaf(hws[CLK_XVCU_ENC_CORE]);
+-
+-	clk_hw_unregister_fixed_factor(xvcu->pll_post);
++	if (!IS_ERR_OR_NULL(xvcu->pll_post))
++		clk_hw_unregister_fixed_factor(xvcu->pll_post);
+ }
+ 
+ /**
+diff --git a/drivers/cpufreq/Makefile b/drivers/cpufreq/Makefile
+index 22ab45209f9b0b..246f58b496da05 100644
+--- a/drivers/cpufreq/Makefile
++++ b/drivers/cpufreq/Makefile
+@@ -20,6 +20,7 @@ obj-$(CONFIG_CPUFREQ_VIRT)		+= virtual-cpufreq.o
+ 
+ # Traces
+ CFLAGS_amd-pstate-trace.o               := -I$(src)
++CFLAGS_powernv-cpufreq.o                := -I$(src)
+ amd_pstate-y				:= amd-pstate.o amd-pstate-trace.o
+ 
+ ##################################################################################
+diff --git a/drivers/cpufreq/armada-8k-cpufreq.c b/drivers/cpufreq/armada-8k-cpufreq.c
+index 5a3545bd0d8d20..006f4c554dd7e9 100644
+--- a/drivers/cpufreq/armada-8k-cpufreq.c
++++ b/drivers/cpufreq/armada-8k-cpufreq.c
+@@ -132,7 +132,7 @@ static int __init armada_8k_cpufreq_init(void)
+ 	int ret = 0, opps_index = 0, cpu, nb_cpus;
+ 	struct freq_table *freq_tables;
+ 	struct device_node *node;
+-	static struct cpumask cpus;
++	static struct cpumask cpus, shared_cpus;
+ 
+ 	node = of_find_matching_node_and_match(NULL, armada_8k_cpufreq_of_match,
+ 					       NULL);
+@@ -154,7 +154,6 @@ static int __init armada_8k_cpufreq_init(void)
+ 	 * divisions of it).
+ 	 */
+ 	for_each_cpu(cpu, &cpus) {
+-		struct cpumask shared_cpus;
+ 		struct device *cpu_dev;
+ 		struct clk *clk;
+ 
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index f45ded62b0e082..5c84d56341e2ed 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -1323,6 +1323,8 @@ static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
+ 		goto err_free_real_cpus;
+ 	}
+ 
++	init_rwsem(&policy->rwsem);
++
+ 	freq_constraints_init(&policy->constraints);
+ 
+ 	policy->nb_min.notifier_call = cpufreq_notifier_min;
+@@ -1345,7 +1347,6 @@ static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
+ 	}
+ 
+ 	INIT_LIST_HEAD(&policy->policy_list);
+-	init_rwsem(&policy->rwsem);
+ 	spin_lock_init(&policy->transition_lock);
+ 	init_waitqueue_head(&policy->transition_wait);
+ 	INIT_WORK(&policy->update, handle_update);
+@@ -3009,15 +3010,6 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
+ 	cpufreq_driver = driver_data;
+ 	write_unlock_irqrestore(&cpufreq_driver_lock, flags);
+ 
+-	/*
+-	 * Mark support for the scheduler's frequency invariance engine for
+-	 * drivers that implement target(), target_index() or fast_switch().
+-	 */
+-	if (!cpufreq_driver->setpolicy) {
+-		static_branch_enable_cpuslocked(&cpufreq_freq_invariance);
+-		pr_debug("supports frequency invariance");
+-	}
+-
+ 	if (driver_data->setpolicy)
+ 		driver_data->flags |= CPUFREQ_CONST_LOOPS;
+ 
+@@ -3048,6 +3040,15 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
+ 	hp_online = ret;
+ 	ret = 0;
+ 
++	/*
++	 * Mark support for the scheduler's frequency invariance engine for
++	 * drivers that implement target(), target_index() or fast_switch().
++	 */
++	if (!cpufreq_driver->setpolicy) {
++		static_branch_enable_cpuslocked(&cpufreq_freq_invariance);
++		pr_debug("supports frequency invariance");
++	}
++
+ 	pr_debug("driver %s up and running\n", driver_data->name);
+ 	goto out;
+ 
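
Both cpufreq core hunks are pure ordering fixes: init_rwsem() moves up so
that the min/max QoS notifiers registered shortly afterwards (and any error
path) never observe an uninitialized lock, and the frequency-invariance
static key is now flipped only after cpuhp registration has succeeded, so an
early bail-out no longer leaves the key enabled with no driver behind it.
The first rule in miniature (a sketch):

    policy = kzalloc(sizeof(*policy), GFP_KERNEL);
    if (!policy)
            return NULL;
    /* Make the lock valid before anything that can fail or fire a
     * notifier, so later code can take policy->rwsem unconditionally. */
    init_rwsem(&policy->rwsem);
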
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index ba9bf06f1c7736..f9205fe199b890 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -3130,8 +3130,8 @@ static int intel_cpufreq_update_pstate(struct cpufreq_policy *policy,
+ 		int max_pstate = policy->strict_target ?
+ 					target_pstate : cpu->max_perf_ratio;
+ 
+-		intel_cpufreq_hwp_update(cpu, target_pstate, max_pstate, 0,
+-					 fast_switch);
++		intel_cpufreq_hwp_update(cpu, target_pstate, max_pstate,
++					 target_pstate, fast_switch);
+ 	} else if (target_pstate != old_pstate) {
+ 		intel_cpufreq_perf_ctl_update(cpu, target_pstate, fast_switch);
+ 	}
+diff --git a/drivers/cpufreq/powernv-cpufreq.c b/drivers/cpufreq/powernv-cpufreq.c
+index afe5abf89d3386..b7c3251e7e87f3 100644
+--- a/drivers/cpufreq/powernv-cpufreq.c
++++ b/drivers/cpufreq/powernv-cpufreq.c
+@@ -21,7 +21,6 @@
+ #include <linux/string_choices.h>
+ #include <linux/cpu.h>
+ #include <linux/hashtable.h>
+-#include <trace/events/power.h>
+ 
+ #include <asm/cputhreads.h>
+ #include <asm/firmware.h>
+@@ -30,6 +29,9 @@
+ #include <asm/opal.h>
+ #include <linux/timer.h>
+ 
++#define CREATE_TRACE_POINTS
++#include "powernv-trace.h"
++
+ #define POWERNV_MAX_PSTATES_ORDER  8
+ #define POWERNV_MAX_PSTATES	(1UL << (POWERNV_MAX_PSTATES_ORDER))
+ #define PMSR_PSAFE_ENABLE	(1UL << 30)
+diff --git a/drivers/cpufreq/powernv-trace.h b/drivers/cpufreq/powernv-trace.h
+new file mode 100644
+index 00000000000000..8cadb7c9427b36
+--- /dev/null
++++ b/drivers/cpufreq/powernv-trace.h
+@@ -0,0 +1,44 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++
++#if !defined(_POWERNV_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
++#define _POWERNV_TRACE_H
++
++#include <linux/cpufreq.h>
++#include <linux/tracepoint.h>
++#include <linux/trace_events.h>
++
++#undef TRACE_SYSTEM
++#define TRACE_SYSTEM power
++
++TRACE_EVENT(powernv_throttle,
++
++	TP_PROTO(int chip_id, const char *reason, int pmax),
++
++	TP_ARGS(chip_id, reason, pmax),
++
++	TP_STRUCT__entry(
++		__field(int, chip_id)
++		__string(reason, reason)
++		__field(int, pmax)
++	),
++
++	TP_fast_assign(
++		__entry->chip_id = chip_id;
++		__assign_str(reason);
++		__entry->pmax = pmax;
++	),
++
++	TP_printk("Chip %d Pmax %d %s", __entry->chip_id,
++		  __entry->pmax, __get_str(reason))
++);
++
++#endif /* _POWERNV_TRACE_H */
++
++/* This part must be outside protection */
++#undef TRACE_INCLUDE_PATH
++#define TRACE_INCLUDE_PATH .
++
++#undef TRACE_INCLUDE_FILE
++#define TRACE_INCLUDE_FILE powernv-trace
++
++#include <trace/define_trace.h>
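
powernv-trace.h follows the standard pattern for a driver-local trace header:
TRACE_INCLUDE_PATH is pointed at the source directory (which is why the
Makefile hunk earlier adds -I$(src) for powernv-cpufreq.o), and exactly one
translation unit defines CREATE_TRACE_POINTS before including the header so
the tracepoint bodies are emitted once. Sketch of the consuming side (the
call-site arguments are illustrative):

    /* In powernv-cpufreq.c only: */
    #define CREATE_TRACE_POINTS
    #include "powernv-trace.h"

    /* Any code that only fires the event includes the header plainly: */
    trace_powernv_throttle(chip_id, reason_str, pmax);
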
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+index 05f67661553c9a..63e66a85477e54 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+@@ -265,8 +265,8 @@ static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req
+ 	}
+ 
+ 	chan->timeout = areq->cryptlen;
+-	rctx->nr_sgs = nr_sgs;
+-	rctx->nr_sgd = nr_sgd;
++	rctx->nr_sgs = ns;
++	rctx->nr_sgd = nd;
+ 	return 0;
+ 
+ theend_sgs:
+diff --git a/drivers/crypto/ccp/ccp-debugfs.c b/drivers/crypto/ccp/ccp-debugfs.c
+index a1055554b47a24..dc26bc22c91d1d 100644
+--- a/drivers/crypto/ccp/ccp-debugfs.c
++++ b/drivers/crypto/ccp/ccp-debugfs.c
+@@ -319,5 +319,8 @@ void ccp5_debugfs_setup(struct ccp_device *ccp)
+ 
+ void ccp5_debugfs_destroy(void)
+ {
++	mutex_lock(&ccp_debugfs_lock);
+ 	debugfs_remove_recursive(ccp_debugfs_dir);
++	ccp_debugfs_dir = NULL;
++	mutex_unlock(&ccp_debugfs_lock);
+ }
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 2e87ca0e292a1c..4d790837af22c7 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -424,7 +424,7 @@ static int rmp_mark_pages_firmware(unsigned long paddr, unsigned int npages, boo
+ 	return rc;
+ }
+ 
+-static struct page *__snp_alloc_firmware_pages(gfp_t gfp_mask, int order)
++static struct page *__snp_alloc_firmware_pages(gfp_t gfp_mask, int order, bool locked)
+ {
+ 	unsigned long npages = 1ul << order, paddr;
+ 	struct sev_device *sev;
+@@ -443,7 +443,7 @@ static struct page *__snp_alloc_firmware_pages(gfp_t gfp_mask, int order)
+ 		return page;
+ 
+ 	paddr = __pa((unsigned long)page_address(page));
+-	if (rmp_mark_pages_firmware(paddr, npages, false))
++	if (rmp_mark_pages_firmware(paddr, npages, locked))
+ 		return NULL;
+ 
+ 	return page;
+@@ -453,7 +453,7 @@ void *snp_alloc_firmware_page(gfp_t gfp_mask)
+ {
+ 	struct page *page;
+ 
+-	page = __snp_alloc_firmware_pages(gfp_mask, 0);
++	page = __snp_alloc_firmware_pages(gfp_mask, 0, false);
+ 
+ 	return page ? page_address(page) : NULL;
+ }
+@@ -488,7 +488,7 @@ static void *sev_fw_alloc(unsigned long len)
+ {
+ 	struct page *page;
+ 
+-	page = __snp_alloc_firmware_pages(GFP_KERNEL, get_order(len));
++	page = __snp_alloc_firmware_pages(GFP_KERNEL, get_order(len), true);
+ 	if (!page)
+ 		return NULL;
+ 
+diff --git a/drivers/crypto/img-hash.c b/drivers/crypto/img-hash.c
+index 1dc2378aa88b4e..503bc1c9e3f336 100644
+--- a/drivers/crypto/img-hash.c
++++ b/drivers/crypto/img-hash.c
+@@ -436,7 +436,7 @@ static int img_hash_write_via_dma_stop(struct img_hash_dev *hdev)
+ 	struct img_hash_request_ctx *ctx = ahash_request_ctx(hdev->req);
+ 
+ 	if (ctx->flags & DRIVER_FLAGS_SG)
+-		dma_unmap_sg(hdev->dev, ctx->sg, ctx->dma_ct, DMA_TO_DEVICE);
++		dma_unmap_sg(hdev->dev, ctx->sg, 1, DMA_TO_DEVICE);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
+index f44c08f5f5ec4a..af4b978189e519 100644
+--- a/drivers/crypto/inside-secure/safexcel_hash.c
++++ b/drivers/crypto/inside-secure/safexcel_hash.c
+@@ -249,7 +249,9 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv,
+ 	safexcel_complete(priv, ring);
+ 
+ 	if (sreq->nents) {
+-		dma_unmap_sg(priv->dev, areq->src, sreq->nents, DMA_TO_DEVICE);
++		dma_unmap_sg(priv->dev, areq->src,
++			     sg_nents_for_len(areq->src, areq->nbytes),
++			     DMA_TO_DEVICE);
+ 		sreq->nents = 0;
+ 	}
+ 
+@@ -497,7 +499,9 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
+ 			 DMA_FROM_DEVICE);
+ unmap_sg:
+ 	if (req->nents) {
+-		dma_unmap_sg(priv->dev, areq->src, req->nents, DMA_TO_DEVICE);
++		dma_unmap_sg(priv->dev, areq->src,
++			     sg_nents_for_len(areq->src, areq->nbytes),
++			     DMA_TO_DEVICE);
+ 		req->nents = 0;
+ 	}
+ cdesc_rollback:
+diff --git a/drivers/crypto/intel/keembay/keembay-ocs-hcu-core.c b/drivers/crypto/intel/keembay/keembay-ocs-hcu-core.c
+index 95dc8979918d8b..8f9e21ced0fe1e 100644
+--- a/drivers/crypto/intel/keembay/keembay-ocs-hcu-core.c
++++ b/drivers/crypto/intel/keembay/keembay-ocs-hcu-core.c
+@@ -68,6 +68,7 @@ struct ocs_hcu_ctx {
+  * @sg_data_total:  Total data in the SG list at any time.
+  * @sg_data_offset: Offset into the data of the current individual SG node.
+  * @sg_dma_nents:   Number of sg entries mapped in dma_list.
++ * @nents:          Number of entries in the scatterlist.
+  */
+ struct ocs_hcu_rctx {
+ 	struct ocs_hcu_dev	*hcu_dev;
+@@ -91,6 +92,7 @@ struct ocs_hcu_rctx {
+ 	unsigned int		sg_data_total;
+ 	unsigned int		sg_data_offset;
+ 	unsigned int		sg_dma_nents;
++	unsigned int		nents;
+ };
+ 
+ /**
+@@ -199,7 +201,7 @@ static void kmb_ocs_hcu_dma_cleanup(struct ahash_request *req,
+ 
+ 	/* Unmap req->src (if mapped). */
+ 	if (rctx->sg_dma_nents) {
+-		dma_unmap_sg(dev, req->src, rctx->sg_dma_nents, DMA_TO_DEVICE);
++		dma_unmap_sg(dev, req->src, rctx->nents, DMA_TO_DEVICE);
+ 		rctx->sg_dma_nents = 0;
+ 	}
+ 
+@@ -260,6 +262,10 @@ static int kmb_ocs_dma_prepare(struct ahash_request *req)
+ 			rc = -ENOMEM;
+ 			goto cleanup;
+ 		}
++
++		/* Save the value of nents to pass to dma_unmap_sg. */
++		rctx->nents = nents;
++
+ 		/*
+ 		 * The value returned by dma_map_sg() can be < nents; so update
+ 		 * nents accordingly.
+diff --git a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
+index 4feeef83f7a3ee..3549167a9557f9 100644
+--- a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
++++ b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
+@@ -193,7 +193,6 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
+ 			  ICP_ACCEL_CAPABILITIES_SM4 |
+ 			  ICP_ACCEL_CAPABILITIES_AES_V2 |
+ 			  ICP_ACCEL_CAPABILITIES_ZUC |
+-			  ICP_ACCEL_CAPABILITIES_ZUC_256 |
+ 			  ICP_ACCEL_CAPABILITIES_WIRELESS_CRYPTO_EXT |
+ 			  ICP_ACCEL_CAPABILITIES_EXT_ALGCHAIN;
+ 
+@@ -225,17 +224,11 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
+ 
+ 	if (fusectl1 & ICP_ACCEL_GEN4_MASK_WCP_WAT_SLICE) {
+ 		capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC;
+-		capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC_256;
+ 		capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_WIRELESS_CRYPTO_EXT;
+ 	}
+ 
+-	if (fusectl1 & ICP_ACCEL_GEN4_MASK_EIA3_SLICE) {
++	if (fusectl1 & ICP_ACCEL_GEN4_MASK_EIA3_SLICE)
+ 		capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC;
+-		capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC_256;
+-	}
+-
+-	if (fusectl1 & ICP_ACCEL_GEN4_MASK_ZUC_256_SLICE)
+-		capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC_256;
+ 
+ 	capabilities_asym = ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC |
+ 			  ICP_ACCEL_CAPABILITIES_SM2 |
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c
+index 099949a2421c0f..b661736d9ae192 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c
+@@ -579,6 +579,28 @@ static int bank_state_restore(struct adf_hw_csr_ops *ops, void __iomem *base,
+ 	ops->write_csr_int_srcsel_w_val(base, bank, state->iaintflagsrcsel0);
+ 	ops->write_csr_exp_int_en(base, bank, state->ringexpintenable);
+ 	ops->write_csr_int_col_ctl(base, bank, state->iaintcolctl);
++
++	/*
++	 * Verify whether any exceptions were raised during the bank save process.
++	 * If exceptions occurred, the status and exception registers cannot
++	 * be directly restored. Consequently, further restoration is not
++	 * feasible, and the current state of the ring should be maintained.
++	 */
++	val = state->ringexpstat;
++	if (val) {
++		pr_info("QAT: Bank %u state not fully restored due to exception in saved state (%#x)\n",
++			bank, val);
++		return 0;
++	}
++
++	/* Ensure that the restoration process completed without exceptions */
++	tmp_val = ops->read_csr_exp_stat(base, bank);
++	if (tmp_val) {
++		pr_err("QAT: Bank %u restored with exception: %#x\n",
++		       bank, tmp_val);
++		return -EFAULT;
++	}
++
+ 	ops->write_csr_ring_srv_arb_en(base, bank, state->ringsrvarben);
+ 
+ 	/* Check that all ring statuses match the saved state. */
+@@ -612,13 +634,6 @@ static int bank_state_restore(struct adf_hw_csr_ops *ops, void __iomem *base,
+ 	if (ret)
+ 		return ret;
+ 
+-	tmp_val = ops->read_csr_exp_stat(base, bank);
+-	val = state->ringexpstat;
+-	if (tmp_val && !val) {
+-		pr_err("QAT: Bank was restored with exception: 0x%x\n", val);
+-		return -EINVAL;
+-	}
+-
+ 	return 0;
+ }
+ 
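
The bank_state_restore() rework changes both the placement and the meaning of
the exception checks: a non-zero ringexpstat captured at save time means the
status and exception registers cannot be reproduced, so the restore stops
early and keeps the current ring state, while an exception that first appears
after the restore is a hard error. The combined effect of the two hunks,
condensed (a sketch):

    if (state->ringexpstat)                      /* exception in saved state */
            return 0;                            /* keep current ring state  */
    if (ops->read_csr_exp_stat(base, bank))      /* exception raised now     */
            return -EFAULT;
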
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_sriov.c b/drivers/crypto/intel/qat/qat_common/adf_sriov.c
+index c75d0b6cb0ada3..31d1ef0cb1f52e 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_sriov.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_sriov.c
+@@ -155,7 +155,6 @@ static int adf_do_enable_sriov(struct adf_accel_dev *accel_dev)
+ 	if (!device_iommu_mapped(&GET_DEV(accel_dev))) {
+ 		dev_warn(&GET_DEV(accel_dev),
+ 			 "IOMMU should be enabled for SR-IOV to work correctly\n");
+-		return -EINVAL;
+ 	}
+ 
+ 	if (adf_dev_started(accel_dev)) {
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_transport_debug.c b/drivers/crypto/intel/qat/qat_common/adf_transport_debug.c
+index e2dd568b87b519..621b5d3dfcef91 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_transport_debug.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_transport_debug.c
+@@ -31,8 +31,10 @@ static void *adf_ring_next(struct seq_file *sfile, void *v, loff_t *pos)
+ 	struct adf_etr_ring_data *ring = sfile->private;
+ 
+ 	if (*pos >= (ADF_SIZE_TO_RING_SIZE_IN_BYTES(ring->ring_size) /
+-		     ADF_MSG_SIZE_TO_BYTES(ring->msg_size)))
++		     ADF_MSG_SIZE_TO_BYTES(ring->msg_size))) {
++		(*pos)++;
+ 		return NULL;
++	}
+ 
+ 	return ring->base_addr +
+ 		(ADF_MSG_SIZE_TO_BYTES(ring->msg_size) * (*pos)++);
+diff --git a/drivers/crypto/intel/qat/qat_common/qat_bl.c b/drivers/crypto/intel/qat/qat_common/qat_bl.c
+index 5e4dad4693caad..9b2338f58d97c4 100644
+--- a/drivers/crypto/intel/qat/qat_common/qat_bl.c
++++ b/drivers/crypto/intel/qat/qat_common/qat_bl.c
+@@ -38,7 +38,7 @@ void qat_bl_free_bufl(struct adf_accel_dev *accel_dev,
+ 		for (i = 0; i < blout->num_mapped_bufs; i++) {
+ 			dma_unmap_single(dev, blout->buffers[i].addr,
+ 					 blout->buffers[i].len,
+-					 DMA_FROM_DEVICE);
++					 DMA_BIDIRECTIONAL);
+ 		}
+ 		dma_unmap_single(dev, blpout, sz_out, DMA_TO_DEVICE);
+ 
+@@ -162,7 +162,7 @@ static int __qat_bl_sgl_to_bufl(struct adf_accel_dev *accel_dev,
+ 			}
+ 			buffers[y].addr = dma_map_single(dev, sg_virt(sg) + left,
+ 							 sg->length - left,
+-							 DMA_FROM_DEVICE);
++							 DMA_BIDIRECTIONAL);
+ 			if (unlikely(dma_mapping_error(dev, buffers[y].addr)))
+ 				goto err_out;
+ 			buffers[y].len = sg->length;
+@@ -204,7 +204,7 @@ static int __qat_bl_sgl_to_bufl(struct adf_accel_dev *accel_dev,
+ 		if (!dma_mapping_error(dev, buflout->buffers[i].addr))
+ 			dma_unmap_single(dev, buflout->buffers[i].addr,
+ 					 buflout->buffers[i].len,
+-					 DMA_FROM_DEVICE);
++					 DMA_BIDIRECTIONAL);
+ 	}
+ 
+ 	if (!buf->sgl_dst_valid)
+diff --git a/drivers/crypto/intel/qat/qat_common/qat_compression.c b/drivers/crypto/intel/qat/qat_common/qat_compression.c
+index 7842a9f22178c2..cf94ba3011d51b 100644
+--- a/drivers/crypto/intel/qat/qat_common/qat_compression.c
++++ b/drivers/crypto/intel/qat/qat_common/qat_compression.c
+@@ -197,7 +197,7 @@ static int qat_compression_alloc_dc_data(struct adf_accel_dev *accel_dev)
+ 	struct adf_dc_data *dc_data = NULL;
+ 	u8 *obuff = NULL;
+ 
+-	dc_data = devm_kzalloc(dev, sizeof(*dc_data), GFP_KERNEL);
++	dc_data = kzalloc_node(sizeof(*dc_data), GFP_KERNEL, dev_to_node(dev));
+ 	if (!dc_data)
+ 		goto err;
+ 
+@@ -205,7 +205,7 @@ static int qat_compression_alloc_dc_data(struct adf_accel_dev *accel_dev)
+ 	if (!obuff)
+ 		goto err;
+ 
+-	obuff_p = dma_map_single(dev, obuff, ovf_buff_sz, DMA_FROM_DEVICE);
++	obuff_p = dma_map_single(dev, obuff, ovf_buff_sz, DMA_BIDIRECTIONAL);
+ 	if (unlikely(dma_mapping_error(dev, obuff_p)))
+ 		goto err;
+ 
+@@ -233,9 +233,9 @@ static void qat_free_dc_data(struct adf_accel_dev *accel_dev)
+ 		return;
+ 
+ 	dma_unmap_single(dev, dc_data->ovf_buff_p, dc_data->ovf_buff_sz,
+-			 DMA_FROM_DEVICE);
++			 DMA_BIDIRECTIONAL);
+ 	kfree_sensitive(dc_data->ovf_buff);
+-	devm_kfree(dev, dc_data);
++	kfree(dc_data);
+ 	accel_dev->dc_data = NULL;
+ }
+ 
+diff --git a/drivers/crypto/marvell/cesa/cipher.c b/drivers/crypto/marvell/cesa/cipher.c
+index 48c5c8ea8c43ec..3fe0fd9226cf79 100644
+--- a/drivers/crypto/marvell/cesa/cipher.c
++++ b/drivers/crypto/marvell/cesa/cipher.c
+@@ -75,9 +75,12 @@ mv_cesa_skcipher_dma_cleanup(struct skcipher_request *req)
+ static inline void mv_cesa_skcipher_cleanup(struct skcipher_request *req)
+ {
+ 	struct mv_cesa_skcipher_req *creq = skcipher_request_ctx(req);
++	struct mv_cesa_engine *engine = creq->base.engine;
+ 
+ 	if (mv_cesa_req_get_type(&creq->base) == CESA_DMA_REQ)
+ 		mv_cesa_skcipher_dma_cleanup(req);
++
++	atomic_sub(req->cryptlen, &engine->load);
+ }
+ 
+ static void mv_cesa_skcipher_std_step(struct skcipher_request *req)
+@@ -212,7 +215,6 @@ mv_cesa_skcipher_complete(struct crypto_async_request *req)
+ 	struct mv_cesa_engine *engine = creq->base.engine;
+ 	unsigned int ivsize;
+ 
+-	atomic_sub(skreq->cryptlen, &engine->load);
+ 	ivsize = crypto_skcipher_ivsize(crypto_skcipher_reqtfm(skreq));
+ 
+ 	if (mv_cesa_req_get_type(&creq->base) == CESA_DMA_REQ) {
+diff --git a/drivers/crypto/marvell/cesa/hash.c b/drivers/crypto/marvell/cesa/hash.c
+index 6815eddc906812..e339ce7ad53310 100644
+--- a/drivers/crypto/marvell/cesa/hash.c
++++ b/drivers/crypto/marvell/cesa/hash.c
+@@ -110,9 +110,12 @@ static inline void mv_cesa_ahash_dma_cleanup(struct ahash_request *req)
+ static inline void mv_cesa_ahash_cleanup(struct ahash_request *req)
+ {
+ 	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
++	struct mv_cesa_engine *engine = creq->base.engine;
+ 
+ 	if (mv_cesa_req_get_type(&creq->base) == CESA_DMA_REQ)
+ 		mv_cesa_ahash_dma_cleanup(req);
++
++	atomic_sub(req->nbytes, &engine->load);
+ }
+ 
+ static void mv_cesa_ahash_last_cleanup(struct ahash_request *req)
+@@ -395,8 +398,6 @@ static void mv_cesa_ahash_complete(struct crypto_async_request *req)
+ 			}
+ 		}
+ 	}
+-
+-	atomic_sub(ahashreq->nbytes, &engine->load);
+ }
+ 
+ static void mv_cesa_ahash_prepare(struct crypto_async_request *req,
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 98657d3b9435c7..0d9f3d3282ec94 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -1382,15 +1382,11 @@ int devfreq_remove_governor(struct devfreq_governor *governor)
+ 		int ret;
+ 		struct device *dev = devfreq->dev.parent;
+ 
++		if (!devfreq->governor)
++			continue;
++
+ 		if (!strncmp(devfreq->governor->name, governor->name,
+ 			     DEVFREQ_NAME_LEN)) {
+-			/* we should have a devfreq governor! */
+-			if (!devfreq->governor) {
+-				dev_warn(dev, "%s: Governor %s NOT present\n",
+-					 __func__, governor->name);
+-				continue;
+-				/* Fall through */
+-			}
+ 			ret = devfreq->governor->event_handler(devfreq,
+ 						DEVFREQ_GOV_STOP, NULL);
+ 			if (ret) {
+@@ -1743,7 +1739,7 @@ static ssize_t trans_stat_show(struct device *dev,
+ 	for (i = 0; i < max_state; i++) {
+ 		if (len >= PAGE_SIZE - 1)
+ 			break;
+-		if (df->freq_table[2] == df->previous_freq)
++		if (df->freq_table[i] == df->previous_freq)
+ 			len += sysfs_emit_at(buf, len, "*");
+ 		else
+ 			len += sysfs_emit_at(buf, len, " ");
+diff --git a/drivers/dma/mmp_tdma.c b/drivers/dma/mmp_tdma.c
+index c8dc504510f1e3..b7fb843c67a6f2 100644
+--- a/drivers/dma/mmp_tdma.c
++++ b/drivers/dma/mmp_tdma.c
+@@ -641,7 +641,7 @@ static int mmp_tdma_probe(struct platform_device *pdev)
+ 	int chan_num = TDMA_CHANNEL_NUM;
+ 	struct gen_pool *pool = NULL;
+ 
+-	type = (enum mmp_tdma_type)device_get_match_data(&pdev->dev);
++	type = (kernel_ulong_t)device_get_match_data(&pdev->dev);
+ 
+ 	/* always have couple channels */
+ 	tdev = devm_kzalloc(&pdev->dev, sizeof(*tdev), GFP_KERNEL);
+diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
+index fa6e4646fdc29d..1fdcb0f5c9e725 100644
+--- a/drivers/dma/mv_xor.c
++++ b/drivers/dma/mv_xor.c
+@@ -1061,8 +1061,16 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
+ 	 */
+ 	mv_chan->dummy_src_addr = dma_map_single(dma_dev->dev,
+ 		mv_chan->dummy_src, MV_XOR_MIN_BYTE_COUNT, DMA_FROM_DEVICE);
++	if (dma_mapping_error(dma_dev->dev, mv_chan->dummy_src_addr))
++		return ERR_PTR(-ENOMEM);
++
+ 	mv_chan->dummy_dst_addr = dma_map_single(dma_dev->dev,
+ 		mv_chan->dummy_dst, MV_XOR_MIN_BYTE_COUNT, DMA_TO_DEVICE);
++	if (dma_mapping_error(dma_dev->dev, mv_chan->dummy_dst_addr)) {
++		ret = -ENOMEM;
++		goto err_unmap_src;
++	}
++
+ 
+ 	/* allocate coherent memory for hardware descriptors
+ 	 * note: writecombine gives slightly better performance, but
+@@ -1071,8 +1079,10 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
+ 	mv_chan->dma_desc_pool_virt =
+ 	  dma_alloc_wc(&pdev->dev, MV_XOR_POOL_SIZE, &mv_chan->dma_desc_pool,
+ 		       GFP_KERNEL);
+-	if (!mv_chan->dma_desc_pool_virt)
+-		return ERR_PTR(-ENOMEM);
++	if (!mv_chan->dma_desc_pool_virt) {
++		ret = -ENOMEM;
++		goto err_unmap_dst;
++	}
+ 
+ 	/* discover transaction capabilities from the platform data */
+ 	dma_dev->cap_mask = cap_mask;
+@@ -1155,6 +1165,13 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
+ err_free_dma:
+ 	dma_free_coherent(&pdev->dev, MV_XOR_POOL_SIZE,
+ 			  mv_chan->dma_desc_pool_virt, mv_chan->dma_desc_pool);
++err_unmap_dst:
++	dma_unmap_single(dma_dev->dev, mv_chan->dummy_dst_addr,
++			 MV_XOR_MIN_BYTE_COUNT, DMA_TO_DEVICE);
++err_unmap_src:
++	dma_unmap_single(dma_dev->dev, mv_chan->dummy_src_addr,
++			 MV_XOR_MIN_BYTE_COUNT, DMA_FROM_DEVICE);
++
+ 	return ERR_PTR(ret);
+ }
+ 
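
The mv_xor hunk adds the checks the DMA API requires: every dma_map_single()
result must be tested with dma_mapping_error(), and a failure after earlier
mappings must unwind them in reverse order, which is what the new
err_unmap_dst/err_unmap_src labels do. The generic shape (names illustrative):

    a = dma_map_single(dev, bufa, len, DMA_FROM_DEVICE);
    if (dma_mapping_error(dev, a))
            return -ENOMEM;

    b = dma_map_single(dev, bufb, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, b)) {
            dma_unmap_single(dev, a, len, DMA_FROM_DEVICE);
            return -ENOMEM;
    }
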
+diff --git a/drivers/dma/nbpfaxi.c b/drivers/dma/nbpfaxi.c
+index 7a2488a0d6a326..765462303de098 100644
+--- a/drivers/dma/nbpfaxi.c
++++ b/drivers/dma/nbpfaxi.c
+@@ -711,6 +711,9 @@ static int nbpf_desc_page_alloc(struct nbpf_channel *chan)
+ 		list_add_tail(&ldesc->node, &lhead);
+ 		ldesc->hwdesc_dma_addr = dma_map_single(dchan->device->dev,
+ 					hwdesc, sizeof(*hwdesc), DMA_TO_DEVICE);
++		if (dma_mapping_error(dchan->device->dev,
++				      ldesc->hwdesc_dma_addr))
++			goto unmap_error;
+ 
+ 		dev_dbg(dev, "%s(): mapped 0x%p to %pad\n", __func__,
+ 			hwdesc, &ldesc->hwdesc_dma_addr);
+@@ -737,6 +740,16 @@ static int nbpf_desc_page_alloc(struct nbpf_channel *chan)
+ 	spin_unlock_irq(&chan->lock);
+ 
+ 	return ARRAY_SIZE(dpage->desc);
++
++unmap_error:
++	while (i--) {
++		ldesc--; hwdesc--;
++
++		dma_unmap_single(dchan->device->dev, ldesc->hwdesc_dma_addr,
++				 sizeof(hwdesc), DMA_TO_DEVICE);
++	}
++
++	return -ENOMEM;
+ }
+ 
+ static void nbpf_desc_put(struct nbpf_desc *desc)
+diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c
+index c7e5a34b254bf4..683fd9b85c5ce2 100644
+--- a/drivers/firmware/arm_scmi/perf.c
++++ b/drivers/firmware/arm_scmi/perf.c
+@@ -892,7 +892,7 @@ static int scmi_dvfs_device_opps_add(const struct scmi_protocol_handle *ph,
+ 			freq = dom->opp[idx].indicative_freq * dom->mult_factor;
+ 
+ 		/* All OPPs above the sustained frequency are treated as turbo */
+-		data.turbo = freq > dom->sustained_freq_khz * 1000;
++		data.turbo = freq > dom->sustained_freq_khz * 1000UL;
+ 
+ 		data.level = dom->opp[idx].perf;
+ 		data.freq = freq;
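The UL suffix matters because sustained_freq_khz is a 32-bit quantity: without
it the multiplication is performed in 32-bit arithmetic and can wrap before
the comparison. An illustration with invented values, assuming a 64-bit
unsigned long:

    u32 sustained_freq_khz = 5000000;	/* 5 GHz expressed in kHz */
    u64 freq = 5000000000ULL;		/* 5 GHz in Hz */

    /* 5e9 wraps to 705032704 in u32, so turbo is misdetected */
    bool wrong = freq > sustained_freq_khz * 1000;
    /* one unsigned long operand widens the whole expression */
    bool right = freq > sustained_freq_khz * 1000UL;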
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 2144d124c91084..cd4605362a9301 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -9567,9 +9567,8 @@ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
+ 	kiq->pmf->kiq_unmap_queues(kiq_ring, ring, RESET_QUEUES,
+ 				   0, 0);
+ 	amdgpu_ring_commit(kiq_ring);
+-	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+-
+ 	r = amdgpu_ring_test_ring(kiq_ring);
++	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+ 	if (r)
+ 		return r;
+ 
+@@ -9605,9 +9604,8 @@ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
+ 	}
+ 	kiq->pmf->kiq_map_queues(kiq_ring, ring);
+ 	amdgpu_ring_commit(kiq_ring);
+-	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+-
+ 	r = amdgpu_ring_test_ring(kiq_ring);
++	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+ 	if (r)
+ 		return r;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index d725e2e230a3dc..59ea6e88bd9e0c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -7299,8 +7299,8 @@ static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
+ 	}
+ 	kiq->pmf->kiq_map_queues(kiq_ring, ring);
+ 	amdgpu_ring_commit(kiq_ring);
+-	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+ 	r = amdgpu_ring_test_ring(kiq_ring);
++	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+ 	if (r) {
+ 		DRM_ERROR("fail to remap queue\n");
+ 		return r;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+index 53fbf6ca7cdb93..c386b2f4cbcc24 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+@@ -3572,9 +3572,8 @@ static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
+ 	}
+ 	kiq->pmf->kiq_map_queues(kiq_ring, ring);
+ 	amdgpu_ring_commit(kiq_ring);
+-	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+-
+ 	r = amdgpu_ring_test_ring(kiq_ring);
++	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+ 	if (r) {
+ 		dev_err(adev->dev, "fail to remap queue\n");
+ 		return r;
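All three amdgpu hunks above make the same ordering change: the KIQ ring test
now runs before ring_lock is dropped, so the test is serialized with any other
KIQ submission. Condensed shape of the fixed sequence:

    spin_lock_irqsave(&kiq->ring_lock, flags);
    /* ... emit unmap/map packets for the queue ... */
    amdgpu_ring_commit(kiq_ring);
    r = amdgpu_ring_test_ring(kiq_ring);	/* still under the lock */
    spin_unlock_irqrestore(&kiq->ring_lock, flags);
    if (r)
    	return r;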
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_9.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_9.c
+index f23cb79110d615..3a78d035e128de 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_9.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_9.c
+@@ -31,9 +31,6 @@
+ 
+ #define NPS_MODE_MASK 0x000000FFL
+ 
+-/* Core 0 Port 0 counter */
+-#define smnPCIEP_NAK_COUNTER 0x1A340218
+-
+ static void nbio_v7_9_remap_hdp_registers(struct amdgpu_device *adev)
+ {
+ 	WREG32_SOC15(NBIO, 0, regBIF_BX0_REMAP_HDP_MEM_FLUSH_CNTL,
+@@ -463,22 +460,6 @@ static void nbio_v7_9_init_registers(struct amdgpu_device *adev)
+ 	}
+ }
+ 
+-static u64 nbio_v7_9_get_pcie_replay_count(struct amdgpu_device *adev)
+-{
+-	u32 val, nak_r, nak_g;
+-
+-	if (adev->flags & AMD_IS_APU)
+-		return 0;
+-
+-	/* Get the number of NAKs received and generated */
+-	val = RREG32_PCIE(smnPCIEP_NAK_COUNTER);
+-	nak_r = val & 0xFFFF;
+-	nak_g = val >> 16;
+-
+-	/* Add the total number of NAKs, i.e the number of replays */
+-	return (nak_r + nak_g);
+-}
+-
+ #define MMIO_REG_HOLE_OFFSET 0x1A000
+ 
+ static void nbio_v7_9_set_reg_remap(struct amdgpu_device *adev)
+@@ -520,7 +501,6 @@ const struct amdgpu_nbio_funcs nbio_v7_9_funcs = {
+ 	.get_memory_partition_mode = nbio_v7_9_get_memory_partition_mode,
+ 	.is_nps_switch_requested = nbio_v7_9_is_nps_switch_requested,
+ 	.init_registers = nbio_v7_9_init_registers,
+-	.get_pcie_replay_count = nbio_v7_9_get_pcie_replay_count,
+ 	.set_reg_remap = nbio_v7_9_set_reg_remap,
+ };
+ 
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu_helper.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu_helper.c
+index 79a566f3564a57..c305ea4ec17d21 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu_helper.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu_helper.c
+@@ -149,7 +149,7 @@ int phm_wait_on_indirect_register(struct pp_hwmgr *hwmgr,
+ 	}
+ 
+ 	cgs_write_register(hwmgr->device, indirect_port, index);
+-	return phm_wait_on_register(hwmgr, indirect_port + 1, mask, value);
++	return phm_wait_on_register(hwmgr, indirect_port + 1, value, mask);
+ }
+ 
+ int phm_wait_for_register_unequal(struct pp_hwmgr *hwmgr,
+diff --git a/drivers/gpu/drm/display/drm_hdmi_state_helper.c b/drivers/gpu/drm/display/drm_hdmi_state_helper.c
+index c205f37da1e12b..6bc96d5d1ab911 100644
+--- a/drivers/gpu/drm/display/drm_hdmi_state_helper.c
++++ b/drivers/gpu/drm/display/drm_hdmi_state_helper.c
+@@ -506,12 +506,12 @@ int drm_atomic_helper_connector_hdmi_check(struct drm_connector *connector,
+ 	if (!new_conn_state->crtc || !new_conn_state->best_encoder)
+ 		return 0;
+ 
+-	new_conn_state->hdmi.is_limited_range = hdmi_is_limited_range(connector, new_conn_state);
+-
+ 	ret = hdmi_compute_config(connector, new_conn_state, mode);
+ 	if (ret)
+ 		return ret;
+ 
++	new_conn_state->hdmi.is_limited_range = hdmi_is_limited_range(connector, new_conn_state);
++
+ 	ret = hdmi_generate_infoframes(connector, new_conn_state);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+index e736eb73a7e615..49aed344d34629 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+@@ -383,6 +383,7 @@ static const struct dpu_perf_cfg sc8180x_perf_data = {
+ 	.min_core_ib = 2400000,
+ 	.min_llcc_ib = 800000,
+ 	.min_dram_ib = 800000,
++	.min_prefill_lines = 24,
+ 	.danger_lut_tbl = {0xf, 0xffff, 0x0},
+ 	.safe_lut_tbl = {0xfff0, 0xf000, 0xffff},
+ 	.qos_lut_tbl = {
+diff --git a/drivers/gpu/drm/panfrost/panfrost_devfreq.c b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
+index 3385fd3ef41a47..5d0dce10336ba3 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_devfreq.c
++++ b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
+@@ -29,7 +29,7 @@ static void panfrost_devfreq_update_utilization(struct panfrost_devfreq *pfdevfr
+ static int panfrost_devfreq_target(struct device *dev, unsigned long *freq,
+ 				   u32 flags)
+ {
+-	struct panfrost_device *ptdev = dev_get_drvdata(dev);
++	struct panfrost_device *pfdev = dev_get_drvdata(dev);
+ 	struct dev_pm_opp *opp;
+ 	int err;
+ 
+@@ -40,7 +40,7 @@ static int panfrost_devfreq_target(struct device *dev, unsigned long *freq,
+ 
+ 	err = dev_pm_opp_set_rate(dev, *freq);
+ 	if (!err)
+-		ptdev->pfdevfreq.current_frequency = *freq;
++		pfdev->pfdevfreq.current_frequency = *freq;
+ 
+ 	return err;
+ }
+diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
+index bbd39348a7aba1..6f50cfdfe5a2ee 100644
+--- a/drivers/gpu/drm/radeon/radeon_device.c
++++ b/drivers/gpu/drm/radeon/radeon_device.c
+@@ -1635,11 +1635,9 @@ int radeon_suspend_kms(struct drm_device *dev, bool suspend,
+ 		pci_set_power_state(pdev, PCI_D3hot);
+ 	}
+ 
+-	if (notify_clients) {
+-		console_lock();
+-		drm_client_dev_suspend(dev, true);
+-		console_unlock();
+-	}
++	if (notify_clients)
++		drm_client_dev_suspend(dev, false);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_fb.c b/drivers/gpu/drm/rockchip/rockchip_drm_fb.c
+index dcc1f07632c3a1..5829ee061c61bb 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_fb.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_fb.c
+@@ -52,16 +52,9 @@ rockchip_fb_create(struct drm_device *dev, struct drm_file *file,
+ 	}
+ 
+ 	if (drm_is_afbc(mode_cmd->modifier[0])) {
+-		int ret, i;
+-
+ 		ret = drm_gem_fb_afbc_init(dev, mode_cmd, afbc_fb);
+ 		if (ret) {
+-			struct drm_gem_object **obj = afbc_fb->base.obj;
+-
+-			for (i = 0; i < info->num_planes; ++i)
+-				drm_gem_object_put(obj[i]);
+-
+-			kfree(afbc_fb);
++			drm_framebuffer_put(&afbc_fb->base);
+ 			return ERR_PTR(ret);
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+index d0f5fea15e21fa..186f6452a7d359 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+@@ -146,25 +146,6 @@ static void vop2_unlock(struct vop2 *vop2)
+ 	mutex_unlock(&vop2->vop2_lock);
+ }
+ 
+-/*
+- * Note:
+- * The write mask function is documented but missing on rk3566/8, writes
+- * to these bits have no effect. For newer soc(rk3588 and following) the
+- * write mask is needed for register writes.
+- *
+- * GLB_CFG_DONE_EN has no write mask bit.
+- *
+- */
+-static void vop2_cfg_done(struct vop2_video_port *vp)
+-{
+-	struct vop2 *vop2 = vp->vop2;
+-	u32 val = RK3568_REG_CFG_DONE__GLB_CFG_DONE_EN;
+-
+-	val |= BIT(vp->id) | (BIT(vp->id) << 16);
+-
+-	regmap_set_bits(vop2->map, RK3568_REG_CFG_DONE, val);
+-}
+-
+ static void vop2_win_disable(struct vop2_win *win)
+ {
+ 	vop2_win_write(win, VOP2_WIN_ENABLE, 0);
+@@ -854,6 +835,11 @@ static void vop2_enable(struct vop2 *vop2)
+ 	if (vop2->version == VOP_VERSION_RK3588)
+ 		rk3588_vop2_power_domain_enable_all(vop2);
+ 
++	if (vop2->version <= VOP_VERSION_RK3588) {
++		vop2->old_layer_sel = vop2_readl(vop2, RK3568_OVL_LAYER_SEL);
++		vop2->old_port_sel = vop2_readl(vop2, RK3568_OVL_PORT_SEL);
++	}
++
+ 	vop2_writel(vop2, RK3568_REG_CFG_DONE, RK3568_REG_CFG_DONE__GLB_CFG_DONE_EN);
+ 
+ 	/*
+@@ -2422,6 +2408,10 @@ static int vop2_create_crtcs(struct vop2 *vop2)
+ 				break;
+ 			}
+ 		}
++
++		if (!vp->primary_plane)
++			return dev_err_probe(drm->dev, -ENOENT,
++					     "no primary plane for vp %d\n", i);
+ 	}
+ 
+ 	/* Register all unused window as overlay plane */
+@@ -2724,6 +2714,7 @@ static int vop2_bind(struct device *dev, struct device *master, void *data)
+ 		return dev_err_probe(drm->dev, vop2->irq, "cannot find irq for vop2\n");
+ 
+ 	mutex_init(&vop2->vop2_lock);
++	mutex_init(&vop2->ovl_lock);
+ 
+ 	ret = devm_request_irq(dev, vop2->irq, vop2_isr, IRQF_SHARED, dev_name(dev), vop2);
+ 	if (ret)
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h
+index fc3ecb9fcd9576..fa5c56f16047e3 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h
+@@ -334,6 +334,19 @@ struct vop2 {
+ 	/* optional internal rgb encoder */
+ 	struct rockchip_rgb *rgb;
+ 
++	/*
++	 * Used to record the layer selection configuration on rk356x/rk3588,
++	 * as the registers RK3568_OVL_LAYER_SEL and RK3568_OVL_PORT_SEL are
++	 * shared by all the Video Ports.
++	 */
++	u32 old_layer_sel;
++	u32 old_port_sel;
++	/*
++	 * Ensure that the updates to these two registers
++	 * (RK3568_OVL_LAYER_SEL/RK3568_OVL_PORT_SEL) take effect in sequence.
++	 */
++	struct mutex ovl_lock;
++
+ 	/* must be put at the end of the struct */
+ 	struct vop2_win win[];
+ };
+@@ -727,6 +740,7 @@ enum dst_factor_mode {
+ #define RK3588_OVL_PORT_SEL__CLUSTER2			GENMASK(21, 20)
+ #define RK3568_OVL_PORT_SEL__CLUSTER1			GENMASK(19, 18)
+ #define RK3568_OVL_PORT_SEL__CLUSTER0			GENMASK(17, 16)
++#define RK3588_OVL_PORT_SET__PORT3_MUX			GENMASK(15, 12)
+ #define RK3568_OVL_PORT_SET__PORT2_MUX			GENMASK(11, 8)
+ #define RK3568_OVL_PORT_SET__PORT1_MUX			GENMASK(7, 4)
+ #define RK3568_OVL_PORT_SET__PORT0_MUX			GENMASK(3, 0)
+@@ -831,4 +845,23 @@ static inline struct vop2_win *to_vop2_win(struct drm_plane *p)
+ 	return container_of(p, struct vop2_win, base);
+ }
+ 
++/*
++ * Note:
++ * The write mask function is documented but missing on rk3566/8; writes
++ * to these bits have no effect. For newer SoCs (rk3588 and following) the
++ * write mask is needed for register writes.
++ *
++ * GLB_CFG_DONE_EN has no write mask bit.
++ */
++static inline void vop2_cfg_done(struct vop2_video_port *vp)
++{
++	struct vop2 *vop2 = vp->vop2;
++	u32 val = RK3568_REG_CFG_DONE__GLB_CFG_DONE_EN;
++
++	val |= BIT(vp->id) | (BIT(vp->id) << 16);
++
++	regmap_set_bits(vop2->map, RK3568_REG_CFG_DONE, val);
++}
++
+ #endif /* _ROCKCHIP_DRM_VOP2_H */
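For reference, the write-mask convention described in the note pairs each
value bit with a write-enable bit 16 positions higher, which is what the
BIT(vp->id) | (BIT(vp->id) << 16) expression builds. A hypothetical helper
(not part of the driver, valid for bit numbers below 16) showing just that
encoding:

    static inline u32 masked_write_val(unsigned int bit)
    {
    	/* low half: the value bit; high half: its write-enable bit */
    	return BIT(bit) | (BIT(bit) << 16);
    }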
+diff --git a/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c b/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
+index 32c4ed6857395a..45c5e398781331 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
++++ b/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
+@@ -2052,12 +2052,55 @@ static void vop2_setup_alpha(struct vop2_video_port *vp)
+ 	}
+ }
+ 
++static u32 rk3568_vop2_read_port_mux(struct vop2 *vop2)
++{
++	return vop2_readl(vop2, RK3568_OVL_PORT_SEL);
++}
++
++static void rk3568_vop2_wait_for_port_mux_done(struct vop2 *vop2)
++{
++	u32 port_mux_sel;
++	int ret;
++
++	/*
++	 * Spin until the previous port_mux configuration is done.
++	 */
++	ret = readx_poll_timeout_atomic(rk3568_vop2_read_port_mux, vop2, port_mux_sel,
++					port_mux_sel == vop2->old_port_sel, 0, 50 * 1000);
++	if (ret)
++		DRM_DEV_ERROR(vop2->dev, "wait port_mux done timeout: 0x%x--0x%x\n",
++			      port_mux_sel, vop2->old_port_sel);
++}
++
++static u32 rk3568_vop2_read_layer_cfg(struct vop2 *vop2)
++{
++	return vop2_readl(vop2, RK3568_OVL_LAYER_SEL);
++}
++
++static void rk3568_vop2_wait_for_layer_cfg_done(struct vop2 *vop2, u32 cfg)
++{
++	u32 atv_layer_cfg;
++	int ret;
++
++	/*
++	 * Spin until the previous layer configuration is done.
++	 */
++	ret = readx_poll_timeout_atomic(rk3568_vop2_read_layer_cfg, vop2, atv_layer_cfg,
++					atv_layer_cfg == cfg, 0, 50 * 1000);
++	if (ret)
++		DRM_DEV_ERROR(vop2->dev, "wait layer cfg done timeout: 0x%x--0x%x\n",
++			      atv_layer_cfg, cfg);
++}
++
+ static void rk3568_vop2_setup_layer_mixer(struct vop2_video_port *vp)
+ {
+ 	struct vop2 *vop2 = vp->vop2;
+ 	struct drm_plane *plane;
+ 	u32 layer_sel = 0;
+ 	u32 port_sel;
++	u32 old_layer_sel = 0;
++	u32 atv_layer_sel = 0;
++	u32 old_port_sel = 0;
+ 	u8 layer_id;
+ 	u8 old_layer_id;
+ 	u8 layer_sel_id;
+@@ -2069,19 +2112,18 @@ static void rk3568_vop2_setup_layer_mixer(struct vop2_video_port *vp)
+ 	struct vop2_video_port *vp2 = &vop2->vps[2];
+ 	struct rockchip_crtc_state *vcstate = to_rockchip_crtc_state(vp->crtc.state);
+ 
++	mutex_lock(&vop2->ovl_lock);
+ 	ovl_ctrl = vop2_readl(vop2, RK3568_OVL_CTRL);
+ 	ovl_ctrl &= ~RK3568_OVL_CTRL__LAYERSEL_REGDONE_IMD;
+ 	ovl_ctrl &= ~RK3568_OVL_CTRL__LAYERSEL_REGDONE_SEL;
+-	ovl_ctrl |= FIELD_PREP(RK3568_OVL_CTRL__LAYERSEL_REGDONE_SEL, vp->id);
+ 
+ 	if (vcstate->yuv_overlay)
+ 		ovl_ctrl |= RK3568_OVL_CTRL__YUV_MODE(vp->id);
+ 	else
+ 		ovl_ctrl &= ~RK3568_OVL_CTRL__YUV_MODE(vp->id);
+ 
+-	vop2_writel(vop2, RK3568_OVL_CTRL, ovl_ctrl);
+-
+-	port_sel = vop2_readl(vop2, RK3568_OVL_PORT_SEL);
++	old_port_sel = vop2->old_port_sel;
++	port_sel = old_port_sel;
+ 	port_sel &= RK3568_OVL_PORT_SEL__SEL_PORT;
+ 
+ 	if (vp0->nlayers)
+@@ -2102,7 +2144,13 @@ static void rk3568_vop2_setup_layer_mixer(struct vop2_video_port *vp)
+ 	else
+ 		port_sel |= FIELD_PREP(RK3568_OVL_PORT_SET__PORT2_MUX, 8);
+ 
+-	layer_sel = vop2_readl(vop2, RK3568_OVL_LAYER_SEL);
++	/* Fixed value for rk3588 */
++	if (vop2->version == VOP_VERSION_RK3588)
++		port_sel |= FIELD_PREP(RK3588_OVL_PORT_SET__PORT3_MUX, 7);
++
++	atv_layer_sel = vop2_readl(vop2, RK3568_OVL_LAYER_SEL);
++	old_layer_sel = vop2->old_layer_sel;
++	layer_sel = old_layer_sel;
+ 
+ 	ofs = 0;
+ 	for (i = 0; i < vp->id; i++)
+@@ -2186,8 +2234,37 @@ static void rk3568_vop2_setup_layer_mixer(struct vop2_video_port *vp)
+ 			     old_win->data->layer_sel_id[vp->id]);
+ 	}
+ 
++	vop2->old_layer_sel = layer_sel;
++	vop2->old_port_sel = port_sel;
++	/*
++	 * As RK3568_OVL_LAYER_SEL and RK3568_OVL_PORT_SEL are shared by all Video Ports,
++	 * and their configuration takes effect on one Video Port's vsync,
++	 * two rules must be observed when migrating layers or changing the
++	 * zpos of layers:
++	 * 1. When a layer is migrated from one VP to another, the configuration of the layer
++	 *    can only take effect after the port mux configuration is enabled.
++	 *
++	 * 2. When we change the zpos of layers, we must ensure that the change for the previous
++	 *    VP takes effect before we proceed to change the next VP. Otherwise, the new
++	 *    configuration might overwrite the previous one for the previous VP, or the
++	 *    configuration of the previous VP could take effect along with the vsync
++	 *    of the new VP.
++	 */
++	if (layer_sel != old_layer_sel || port_sel != old_port_sel)
++		ovl_ctrl |= FIELD_PREP(RK3568_OVL_CTRL__LAYERSEL_REGDONE_SEL, vp->id);
++	vop2_writel(vop2, RK3568_OVL_CTRL, ovl_ctrl);
++
++	if (port_sel != old_port_sel) {
++		vop2_writel(vop2, RK3568_OVL_PORT_SEL, port_sel);
++		vop2_cfg_done(vp);
++		rk3568_vop2_wait_for_port_mux_done(vop2);
++	}
++
++	if (layer_sel != old_layer_sel && atv_layer_sel != old_layer_sel)
++		rk3568_vop2_wait_for_layer_cfg_done(vop2, vop2->old_layer_sel);
++
+ 	vop2_writel(vop2, RK3568_OVL_LAYER_SEL, layer_sel);
+-	vop2_writel(vop2, RK3568_OVL_PORT_SEL, port_sel);
++	mutex_unlock(&vop2->ovl_lock);
+ }
+ 
+ static void rk3568_vop2_setup_dly_for_windows(struct vop2_video_port *vp)
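Both wait helpers added above are built on readx_poll_timeout_atomic(), which
re-reads through the given accessor until the condition holds or the timeout
expires. Its general shape, with placeholder names:

    u32 val;
    int ret;

    /* poll read_fn(arg) into val until the condition holds; 0 us delay
     * between reads, give up after 50 ms (both arguments in microseconds) */
    ret = readx_poll_timeout_atomic(read_fn, arg, val,
    				    val == expected, 0, 50 * 1000);
    if (ret)	/* -ETIMEDOUT if the condition never held */
    	pr_err("poll timed out: 0x%x != 0x%x\n", val, expected);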
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
+index 7fb1c88bcc475f..69dfe69ce0f87d 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
+@@ -896,7 +896,7 @@ int vmw_compat_shader_add(struct vmw_private *dev_priv,
+ 		.busy_domain = VMW_BO_DOMAIN_SYS,
+ 		.bo_type = ttm_bo_type_device,
+ 		.size = size,
+-		.pin = true,
++		.pin = false,
+ 		.keep_resv = true,
+ 	};
+ 
+diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
+index f3123914b1abf4..258c9616de19ce 100644
+--- a/drivers/gpu/drm/xe/xe_device.c
++++ b/drivers/gpu/drm/xe/xe_device.c
+@@ -678,6 +678,7 @@ static void sriov_update_device_info(struct xe_device *xe)
+ 	/* disable features that are not available/applicable to VFs */
+ 	if (IS_SRIOV_VF(xe)) {
+ 		xe->info.probe_display = 0;
++		xe->info.has_heci_cscfi = 0;
+ 		xe->info.has_heci_gscfi = 0;
+ 		xe->info.skip_guc_pc = 1;
+ 		xe->info.skip_pcode = 1;
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
+index 35489fa818259d..2ea81d81c0aeb4 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
+@@ -47,9 +47,16 @@ static int pf_alloc_metadata(struct xe_gt *gt)
+ 
+ static void pf_init_workers(struct xe_gt *gt)
+ {
++	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
+ 	INIT_WORK(&gt->sriov.pf.workers.restart, pf_worker_restart_func);
+ }
+ 
++static void pf_fini_workers(struct xe_gt *gt)
++{
++	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
++	disable_work_sync(&gt->sriov.pf.workers.restart);
++}
++
+ /**
+  * xe_gt_sriov_pf_init_early - Prepare SR-IOV PF data structures on PF.
+  * @gt: the &xe_gt to initialize
+@@ -79,6 +86,21 @@ int xe_gt_sriov_pf_init_early(struct xe_gt *gt)
+ 	return 0;
+ }
+ 
++static void pf_fini_action(void *arg)
++{
++	struct xe_gt *gt = arg;
++
++	pf_fini_workers(gt);
++}
++
++static int pf_init_late(struct xe_gt *gt)
++{
++	struct xe_device *xe = gt_to_xe(gt);
++
++	xe_gt_assert(gt, IS_SRIOV_PF(xe));
++	return devm_add_action_or_reset(xe->drm.dev, pf_fini_action, gt);
++}
++
+ /**
+  * xe_gt_sriov_pf_init - Prepare SR-IOV PF data structures on PF.
+  * @gt: the &xe_gt to initialize
+@@ -95,7 +117,15 @@ int xe_gt_sriov_pf_init(struct xe_gt *gt)
+ 	if (err)
+ 		return err;
+ 
+-	return xe_gt_sriov_pf_migration_init(gt);
++	err = xe_gt_sriov_pf_migration_init(gt);
++	if (err)
++		return err;
++
++	err = pf_init_late(gt);
++	if (err)
++		return err;
++
++	return 0;
+ }
+ 
+ static bool pf_needs_enable_ggtt_guest_update(struct xe_device *xe)
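pf_init_late() relies on devm_add_action_or_reset(), which registers a
teardown callback to run at unbind time and, if the registration itself
fails, invokes the callback immediately so the caller never unwinds by hand.
A minimal sketch of that contract, with hypothetical demo names:

    static void demo_fini(void *arg)
    {
    	/* teardown: runs on unbind, or at once if registration failed */
    }

    static int demo_init(struct device *dev, void *ctx)
    {
    	return devm_add_action_or_reset(dev, demo_fini, ctx);
    }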
+diff --git a/drivers/gpu/drm/xe/xe_vsec.c b/drivers/gpu/drm/xe/xe_vsec.c
+index b378848d3b7bc7..56930ad4296216 100644
+--- a/drivers/gpu/drm/xe/xe_vsec.c
++++ b/drivers/gpu/drm/xe/xe_vsec.c
+@@ -24,6 +24,7 @@
+ #define BMG_DEVICE_ID 0xE2F8
+ 
+ static struct intel_vsec_header bmg_telemetry = {
++	.rev = 1,
+ 	.length = 0x10,
+ 	.id = VSEC_ID_TELEMETRY,
+ 	.num_entries = 2,
+@@ -32,28 +33,19 @@ static struct intel_vsec_header bmg_telemetry = {
+ 	.offset = BMG_DISCOVERY_OFFSET,
+ };
+ 
+-static struct intel_vsec_header bmg_punit_crashlog = {
++static struct intel_vsec_header bmg_crashlog = {
++	.rev = 1,
+ 	.length = 0x10,
+ 	.id = VSEC_ID_CRASHLOG,
+-	.num_entries = 1,
+-	.entry_size = 4,
++	.num_entries = 2,
++	.entry_size = 6,
+ 	.tbir = 0,
+ 	.offset = BMG_DISCOVERY_OFFSET + 0x60,
+ };
+ 
+-static struct intel_vsec_header bmg_oobmsm_crashlog = {
+-	.length = 0x10,
+-	.id = VSEC_ID_CRASHLOG,
+-	.num_entries = 1,
+-	.entry_size = 4,
+-	.tbir = 0,
+-	.offset = BMG_DISCOVERY_OFFSET + 0x78,
+-};
+-
+ static struct intel_vsec_header *bmg_capabilities[] = {
+ 	&bmg_telemetry,
+-	&bmg_punit_crashlog,
+-	&bmg_oobmsm_crashlog,
++	&bmg_crashlog,
+ 	NULL
+ };
+ 
+diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
+index ed34f5cd5a9145..a3405b1cc668c9 100644
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -890,7 +890,8 @@ static int apple_magic_backlight_init(struct hid_device *hdev)
+ 	backlight->brightness = report_enum->report_id_hash[APPLE_MAGIC_REPORT_ID_BRIGHTNESS];
+ 	backlight->power = report_enum->report_id_hash[APPLE_MAGIC_REPORT_ID_POWER];
+ 
+-	if (!backlight->brightness || !backlight->power)
++	if (!backlight->brightness || backlight->brightness->maxfield < 2 ||
++	    !backlight->power || backlight->power->maxfield < 2)
+ 		return -ENODEV;
+ 
+ 	backlight->cdev.name = ":white:" LED_FUNCTION_KBD_BACKLIGHT;
+@@ -933,10 +934,12 @@ static int apple_probe(struct hid_device *hdev,
+ 		return ret;
+ 	}
+ 
+-	timer_setup(&asc->battery_timer, apple_battery_timer_tick, 0);
+-	mod_timer(&asc->battery_timer,
+-		  jiffies + msecs_to_jiffies(APPLE_BATTERY_TIMEOUT_MS));
+-	apple_fetch_battery(hdev);
++	if (quirks & APPLE_RDESC_BATTERY) {
++		timer_setup(&asc->battery_timer, apple_battery_timer_tick, 0);
++		mod_timer(&asc->battery_timer,
++			  jiffies + msecs_to_jiffies(APPLE_BATTERY_TIMEOUT_MS));
++		apple_fetch_battery(hdev);
++	}
+ 
+ 	if (quirks & APPLE_BACKLIGHT_CTL)
+ 		apple_backlight_init(hdev);
+@@ -950,7 +953,9 @@ static int apple_probe(struct hid_device *hdev,
+ 	return 0;
+ 
+ out_err:
+-	timer_delete_sync(&asc->battery_timer);
++	if (quirks & APPLE_RDESC_BATTERY)
++		timer_delete_sync(&asc->battery_timer);
++
+ 	hid_hw_stop(hdev);
+ 	return ret;
+ }
+@@ -959,7 +964,8 @@ static void apple_remove(struct hid_device *hdev)
+ {
+ 	struct apple_sc *asc = hid_get_drvdata(hdev);
+ 
+-	timer_delete_sync(&asc->battery_timer);
++	if (asc->quirks & APPLE_RDESC_BATTERY)
++		timer_delete_sync(&asc->battery_timer);
+ 
+ 	hid_hw_stop(hdev);
+ }
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index cd4215007d1a4a..9ebbaf8cc1310a 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -66,8 +66,12 @@ static s32 snto32(__u32 value, unsigned int n)
+ 
+ static u32 s32ton(__s32 value, unsigned int n)
+ {
+-	s32 a = value >> (n - 1);
++	s32 a;
+ 
++	if (!value || !n)
++		return 0;
++
++	a = value >> (n - 1);
+ 	if (a && a != -1)
+ 		return value < 0 ? 1 << (n - 1) : (1 << (n - 1)) - 1;
+ 	return value & ((1 << n) - 1);
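s32ton() clamps a signed 32-bit value into an n-bit two's-complement field;
the new guard matters because with n == 0 the old code shifted by
(unsigned)-1, which is undefined behaviour. A few worked values for n = 8:

    u32 a = s32ton(200, 8);		/* 0x7f: saturates at  127 */
    u32 b = s32ton(-200, 8);	/* 0x80: saturates at -128 */
    u32 c = s32ton(-1, 8);		/* 0xff: in range, low 8 bits kept */
    u32 d = s32ton(5, 0);		/* 0x00: early return instead of UB */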
+diff --git a/drivers/i2c/muxes/i2c-mux-mule.c b/drivers/i2c/muxes/i2c-mux-mule.c
+index 284ff4afeeacab..d3b32b794172ad 100644
+--- a/drivers/i2c/muxes/i2c-mux-mule.c
++++ b/drivers/i2c/muxes/i2c-mux-mule.c
+@@ -47,7 +47,6 @@ static int mule_i2c_mux_probe(struct platform_device *pdev)
+ 	struct mule_i2c_reg_mux *priv;
+ 	struct i2c_client *client;
+ 	struct i2c_mux_core *muxc;
+-	struct device_node *dev;
+ 	unsigned int readback;
+ 	int ndev, ret;
+ 	bool old_fw;
+@@ -95,7 +94,7 @@ static int mule_i2c_mux_probe(struct platform_device *pdev)
+ 				     "Failed to register mux remove\n");
+ 
+ 	/* Create device adapters */
+-	for_each_child_of_node(mux_dev->of_node, dev) {
++	for_each_child_of_node_scoped(mux_dev->of_node, dev) {
+ 		u32 reg;
+ 
+ 		ret = of_property_read_u32(dev, "reg", &reg);
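The _scoped variant fixes a reference leak: with plain
for_each_child_of_node(), every early exit from the loop body must call
of_node_put() on the child, which the error returns here did not. The scoped
form drops the reference automatically when the loop variable goes out of
scope. Sketch, with a hypothetical error condition:

    for_each_child_of_node_scoped(parent, child) {
    	if (some_error)			/* hypothetical */
    		return -EINVAL;		/* no of_node_put() needed */
    }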
+diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
+index 85e16de208d3ba..01295eb808065f 100644
+--- a/drivers/i3c/master/svc-i3c-master.c
++++ b/drivers/i3c/master/svc-i3c-master.c
+@@ -104,6 +104,7 @@
+ #define   SVC_I3C_MDATACTRL_TXTRIG_FIFO_NOT_FULL GENMASK(5, 4)
+ #define   SVC_I3C_MDATACTRL_RXTRIG_FIFO_NOT_EMPTY 0
+ #define   SVC_I3C_MDATACTRL_RXCOUNT(x) FIELD_GET(GENMASK(28, 24), (x))
++#define   SVC_I3C_MDATACTRL_TXCOUNT(x) FIELD_GET(GENMASK(20, 16), (x))
+ #define   SVC_I3C_MDATACTRL_TXFULL BIT(30)
+ #define   SVC_I3C_MDATACTRL_RXEMPTY BIT(31)
+ 
+@@ -1308,14 +1309,19 @@ static int svc_i3c_master_xfer(struct svc_i3c_master *master,
+ 		 * FIFO start filling as soon as possible after EmitStartAddr.
+ 		 */
+ 		if (svc_has_quirk(master, SVC_I3C_QUIRK_FIFO_EMPTY) && !rnw && xfer_len) {
+-			u32 end = xfer_len > SVC_I3C_FIFO_SIZE ? 0 : SVC_I3C_MWDATAB_END;
+-			u32 len = min_t(u32, xfer_len, SVC_I3C_FIFO_SIZE);
+-
+-			writesb(master->regs + SVC_I3C_MWDATAB1, out, len - 1);
+-			/* Mark END bit if this is the last byte */
+-			writel(out[len - 1] | end, master->regs + SVC_I3C_MWDATAB);
+-			xfer_len -= len;
+-			out += len;
++			u32 space, end, len;
++
++			reg = readl(master->regs + SVC_I3C_MDATACTRL);
++			space = SVC_I3C_FIFO_SIZE - SVC_I3C_MDATACTRL_TXCOUNT(reg);
++			if (space) {
++				end = xfer_len > space ? 0 : SVC_I3C_MWDATAB_END;
++				len = min_t(u32, xfer_len, space);
++				writesb(master->regs + SVC_I3C_MWDATAB1, out, len - 1);
++				/* Mark END bit if this is the last byte */
++				writel(out[len - 1] | end, master->regs + SVC_I3C_MWDATAB);
++				xfer_len -= len;
++				out += len;
++			}
+ 		}
+ 
+ 		ret = readl_poll_timeout(master->regs + SVC_I3C_MSTATUS, reg,
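The quirk path previously assumed an empty TX FIFO; the fix reads the actual
occupancy first. A worked example of the new headroom computation, assuming a
16-byte FIFO and an invented register value:

    u32 reg   = 0x000A0000;			/* illustrative MDATACTRL */
    u32 used  = SVC_I3C_MDATACTRL_TXCOUNT(reg);	/* bits 20:16 -> 10 */
    u32 space = SVC_I3C_FIFO_SIZE - used;	/* 16 - 10 = 6 bytes free */
    /* write only min(xfer_len, space); skip entirely when space == 0 */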
+diff --git a/drivers/infiniband/hw/erdma/erdma_verbs.c b/drivers/infiniband/hw/erdma/erdma_verbs.c
+index af36a8d2df2285..ec0ad40860668a 100644
+--- a/drivers/infiniband/hw/erdma/erdma_verbs.c
++++ b/drivers/infiniband/hw/erdma/erdma_verbs.c
+@@ -629,7 +629,8 @@ static struct erdma_mtt *erdma_create_cont_mtt(struct erdma_dev *dev,
+ static void erdma_destroy_mtt_buf_sg(struct erdma_dev *dev,
+ 				     struct erdma_mtt *mtt)
+ {
+-	dma_unmap_sg(&dev->pdev->dev, mtt->sglist, mtt->nsg, DMA_TO_DEVICE);
++	dma_unmap_sg(&dev->pdev->dev, mtt->sglist,
++		     DIV_ROUND_UP(mtt->size, PAGE_SIZE), DMA_TO_DEVICE);
+ 	vfree(mtt->sglist);
+ }
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index 560a1d9de408ff..cbe73d9ad52536 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -856,6 +856,7 @@ struct hns_roce_caps {
+ 	u16		default_ceq_arm_st;
+ 	u8		cong_cap;
+ 	enum hns_roce_cong_type default_cong_type;
++	u32             max_ack_req_msg_len;
+ };
+ 
+ enum hns_roce_device_state {
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index ca0798224e565c..3d479c63b117a9 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -249,15 +249,12 @@ int hns_roce_calc_hem_mhop(struct hns_roce_dev *hr_dev,
+ }
+ 
+ static struct hns_roce_hem *hns_roce_alloc_hem(struct hns_roce_dev *hr_dev,
+-					       unsigned long hem_alloc_size,
+-					       gfp_t gfp_mask)
++					       unsigned long hem_alloc_size)
+ {
+ 	struct hns_roce_hem *hem;
+ 	int order;
+ 	void *buf;
+ 
+-	WARN_ON(gfp_mask & __GFP_HIGHMEM);
+-
+ 	order = get_order(hem_alloc_size);
+ 	if (PAGE_SIZE << order != hem_alloc_size) {
+ 		dev_err(hr_dev->dev, "invalid hem_alloc_size: %lu!\n",
+@@ -265,13 +262,12 @@ static struct hns_roce_hem *hns_roce_alloc_hem(struct hns_roce_dev *hr_dev,
+ 		return NULL;
+ 	}
+ 
+-	hem = kmalloc(sizeof(*hem),
+-		      gfp_mask & ~(__GFP_HIGHMEM | __GFP_NOWARN));
++	hem = kmalloc(sizeof(*hem), GFP_KERNEL);
+ 	if (!hem)
+ 		return NULL;
+ 
+ 	buf = dma_alloc_coherent(hr_dev->dev, hem_alloc_size,
+-				 &hem->dma, gfp_mask);
++				 &hem->dma, GFP_KERNEL);
+ 	if (!buf)
+ 		goto fail;
+ 
+@@ -378,7 +374,6 @@ static int alloc_mhop_hem(struct hns_roce_dev *hr_dev,
+ {
+ 	u32 bt_size = mhop->bt_chunk_size;
+ 	struct device *dev = hr_dev->dev;
+-	gfp_t flag;
+ 	u64 bt_ba;
+ 	u32 size;
+ 	int ret;
+@@ -417,8 +412,7 @@ static int alloc_mhop_hem(struct hns_roce_dev *hr_dev,
+ 	 * alloc bt space chunk for MTT/CQE.
+ 	 */
+ 	size = table->type < HEM_TYPE_MTT ? mhop->buf_chunk_size : bt_size;
+-	flag = GFP_KERNEL | __GFP_NOWARN;
+-	table->hem[index->buf] = hns_roce_alloc_hem(hr_dev, size, flag);
++	table->hem[index->buf] = hns_roce_alloc_hem(hr_dev, size);
+ 	if (!table->hem[index->buf]) {
+ 		ret = -ENOMEM;
+ 		goto err_alloc_hem;
+@@ -546,9 +540,7 @@ int hns_roce_table_get(struct hns_roce_dev *hr_dev,
+ 		goto out;
+ 	}
+ 
+-	table->hem[i] = hns_roce_alloc_hem(hr_dev,
+-				       table->table_chunk_size,
+-				       GFP_KERNEL | __GFP_NOWARN);
++	table->hem[i] = hns_roce_alloc_hem(hr_dev, table->table_chunk_size);
+ 	if (!table->hem[i]) {
+ 		ret = -ENOMEM;
+ 		goto out;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index bbf6e1983704cf..07d93cf4557eb1 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -2181,31 +2181,36 @@ static void apply_func_caps(struct hns_roce_dev *hr_dev)
+ 
+ static int hns_roce_query_caps(struct hns_roce_dev *hr_dev)
+ {
+-	struct hns_roce_cmq_desc desc[HNS_ROCE_QUERY_PF_CAPS_CMD_NUM];
++	struct hns_roce_cmq_desc desc[HNS_ROCE_QUERY_PF_CAPS_CMD_NUM] = {};
+ 	struct hns_roce_caps *caps = &hr_dev->caps;
+ 	struct hns_roce_query_pf_caps_a *resp_a;
+ 	struct hns_roce_query_pf_caps_b *resp_b;
+ 	struct hns_roce_query_pf_caps_c *resp_c;
+ 	struct hns_roce_query_pf_caps_d *resp_d;
+ 	struct hns_roce_query_pf_caps_e *resp_e;
++	struct hns_roce_query_pf_caps_f *resp_f;
+ 	enum hns_roce_opcode_type cmd;
+ 	int ctx_hop_num;
+ 	int pbl_hop_num;
++	int cmd_num;
+ 	int ret;
+ 	int i;
+ 
+ 	cmd = hr_dev->is_vf ? HNS_ROCE_OPC_QUERY_VF_CAPS_NUM :
+ 	      HNS_ROCE_OPC_QUERY_PF_CAPS_NUM;
++	cmd_num = hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08 ?
++		  HNS_ROCE_QUERY_PF_CAPS_CMD_NUM_HIP08 :
++		  HNS_ROCE_QUERY_PF_CAPS_CMD_NUM;
+ 
+-	for (i = 0; i < HNS_ROCE_QUERY_PF_CAPS_CMD_NUM; i++) {
++	for (i = 0; i < cmd_num - 1; i++) {
+ 		hns_roce_cmq_setup_basic_desc(&desc[i], cmd, true);
+-		if (i < (HNS_ROCE_QUERY_PF_CAPS_CMD_NUM - 1))
+-			desc[i].flag |= cpu_to_le16(HNS_ROCE_CMD_FLAG_NEXT);
+-		else
+-			desc[i].flag &= ~cpu_to_le16(HNS_ROCE_CMD_FLAG_NEXT);
++		desc[i].flag |= cpu_to_le16(HNS_ROCE_CMD_FLAG_NEXT);
+ 	}
+ 
+-	ret = hns_roce_cmq_send(hr_dev, desc, HNS_ROCE_QUERY_PF_CAPS_CMD_NUM);
++	hns_roce_cmq_setup_basic_desc(&desc[cmd_num - 1], cmd, true);
++	desc[cmd_num - 1].flag &= ~cpu_to_le16(HNS_ROCE_CMD_FLAG_NEXT);
++
++	ret = hns_roce_cmq_send(hr_dev, desc, cmd_num);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -2214,6 +2219,7 @@ static int hns_roce_query_caps(struct hns_roce_dev *hr_dev)
+ 	resp_c = (struct hns_roce_query_pf_caps_c *)desc[2].data;
+ 	resp_d = (struct hns_roce_query_pf_caps_d *)desc[3].data;
+ 	resp_e = (struct hns_roce_query_pf_caps_e *)desc[4].data;
++	resp_f = (struct hns_roce_query_pf_caps_f *)desc[5].data;
+ 
+ 	caps->local_ca_ack_delay = resp_a->local_ca_ack_delay;
+ 	caps->max_sq_sg = le16_to_cpu(resp_a->max_sq_sg);
+@@ -2278,6 +2284,8 @@ static int hns_roce_query_caps(struct hns_roce_dev *hr_dev)
+ 	caps->reserved_srqs = hr_reg_read(resp_e, PF_CAPS_E_RSV_SRQS);
+ 	caps->reserved_lkey = hr_reg_read(resp_e, PF_CAPS_E_RSV_LKEYS);
+ 
++	caps->max_ack_req_msg_len = le32_to_cpu(resp_f->max_ack_req_msg_len);
++
+ 	caps->qpc_hop_num = ctx_hop_num;
+ 	caps->sccc_hop_num = ctx_hop_num;
+ 	caps->srqc_hop_num = ctx_hop_num;
+@@ -2971,14 +2979,22 @@ static int hns_roce_v2_init(struct hns_roce_dev *hr_dev)
+ {
+ 	int ret;
+ 
++	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) {
++		ret = free_mr_init(hr_dev);
++		if (ret) {
++			dev_err(hr_dev->dev, "failed to init free mr!\n");
++			return ret;
++		}
++	}
++
+ 	/* The hns ROCEE requires the extdb info to be cleared before using */
+ 	ret = hns_roce_clear_extdb_list_info(hr_dev);
+ 	if (ret)
+-		return ret;
++		goto err_clear_extdb_failed;
+ 
+ 	ret = get_hem_table(hr_dev);
+ 	if (ret)
+-		return ret;
++		goto err_get_hem_table_failed;
+ 
+ 	if (hr_dev->is_vf)
+ 		return 0;
+@@ -2993,6 +3009,11 @@ static int hns_roce_v2_init(struct hns_roce_dev *hr_dev)
+ 
+ err_llm_init_failed:
+ 	put_hem_table(hr_dev);
++err_get_hem_table_failed:
++	hns_roce_function_clear(hr_dev);
++err_clear_extdb_failed:
++	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08)
++		free_mr_exit(hr_dev);
+ 
+ 	return ret;
+ }
+@@ -4546,7 +4567,9 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
+ 	dma_addr_t trrl_ba;
+ 	dma_addr_t irrl_ba;
+ 	enum ib_mtu ib_mtu;
++	u8 ack_req_freq;
+ 	const u8 *smac;
++	int lp_msg_len;
+ 	u8 lp_pktn_ini;
+ 	u64 *mtts;
+ 	u8 *dmac;
+@@ -4629,7 +4652,8 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
+ 		return -EINVAL;
+ #define MIN_LP_MSG_LEN 1024
+ 	/* mtu * (2 ^ lp_pktn_ini) should be in the range of 1024 to mtu */
+-	lp_pktn_ini = ilog2(max(mtu, MIN_LP_MSG_LEN) / mtu);
++	lp_msg_len = max(mtu, MIN_LP_MSG_LEN);
++	lp_pktn_ini = ilog2(lp_msg_len / mtu);
+ 
+ 	if (attr_mask & IB_QP_PATH_MTU) {
+ 		hr_reg_write(context, QPC_MTU, ib_mtu);
+@@ -4639,8 +4663,22 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
+ 	hr_reg_write(context, QPC_LP_PKTN_INI, lp_pktn_ini);
+ 	hr_reg_clear(qpc_mask, QPC_LP_PKTN_INI);
+ 
+-	/* ACK_REQ_FREQ should be larger than or equal to LP_PKTN_INI */
+-	hr_reg_write(context, QPC_ACK_REQ_FREQ, lp_pktn_ini);
++	/*
++	 * There are several constraints for ACK_REQ_FREQ:
++	 * 1. mtu * (2 ^ ACK_REQ_FREQ) should not be too large, otherwise
++	 *    it may cause unexpected retries when sending large
++	 *    payloads.
++	 * 2. ACK_REQ_FREQ should be larger than or equal to LP_PKTN_INI.
++	 * 3. ACK_REQ_FREQ must be equal to LP_PKTN_INI when using the LDCP
++	 *    or HC3 congestion control algorithm.
++	 */
++	if (hr_qp->cong_type == CONG_TYPE_LDCP ||
++	    hr_qp->cong_type == CONG_TYPE_HC3 ||
++	    hr_dev->caps.max_ack_req_msg_len < lp_msg_len)
++		ack_req_freq = lp_pktn_ini;
++	else
++		ack_req_freq = ilog2(hr_dev->caps.max_ack_req_msg_len / mtu);
++	hr_reg_write(context, QPC_ACK_REQ_FREQ, ack_req_freq);
+ 	hr_reg_clear(qpc_mask, QPC_ACK_REQ_FREQ);
+ 
+ 	hr_reg_clear(qpc_mask, QPC_RX_REQ_PSN_ERR);
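A worked instance of the two computations above, assuming mtu = 256 bytes and
max_ack_req_msg_len = 4096:

    /*
     * lp_msg_len   = max(256, 1024)    = 1024
     * lp_pktn_ini  = ilog2(1024 / 256) = 2  (4 MTU-sized packets)
     * ack_req_freq = ilog2(4096 / 256) = 4  (an ACK request per 16 packets),
     *                falling back to lp_pktn_ini = 2 for LDCP/HC3 or when
     *                max_ack_req_msg_len < lp_msg_len.
     */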
+@@ -5333,11 +5371,10 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
+ {
+ 	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+ 	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
+-	struct hns_roce_v2_qp_context ctx[2];
+-	struct hns_roce_v2_qp_context *context = ctx;
+-	struct hns_roce_v2_qp_context *qpc_mask = ctx + 1;
++	struct hns_roce_v2_qp_context *context;
++	struct hns_roce_v2_qp_context *qpc_mask;
+ 	struct ib_device *ibdev = &hr_dev->ib_dev;
+-	int ret;
++	int ret = -ENOMEM;
+ 
+ 	if (attr_mask & ~IB_QP_ATTR_STANDARD_BITS)
+ 		return -EOPNOTSUPP;
+@@ -5348,7 +5385,11 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
+ 	 * we should set all bits of the relevant fields in context mask to
+ 	 * 0 at the same time, else set them to 0x1.
+ 	 */
+-	memset(context, 0, hr_dev->caps.qpc_sz);
++	context = kvzalloc(sizeof(*context), GFP_KERNEL);
++	qpc_mask = kvzalloc(sizeof(*qpc_mask), GFP_KERNEL);
++	if (!context || !qpc_mask)
++		goto out;
++
+ 	memset(qpc_mask, 0xff, hr_dev->caps.qpc_sz);
+ 
+ 	ret = hns_roce_v2_set_abs_fields(ibqp, attr, attr_mask, cur_state,
+@@ -5390,6 +5431,8 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
+ 		clear_qp(hr_qp);
+ 
+ out:
++	kvfree(qpc_mask);
++	kvfree(context);
+ 	return ret;
+ }
+ 
+@@ -7027,21 +7070,11 @@ static int __hns_roce_hw_v2_init_instance(struct hnae3_handle *handle)
+ 		goto error_failed_roce_init;
+ 	}
+ 
+-	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) {
+-		ret = free_mr_init(hr_dev);
+-		if (ret) {
+-			dev_err(hr_dev->dev, "failed to init free mr!\n");
+-			goto error_failed_free_mr_init;
+-		}
+-	}
+ 
+ 	handle->priv = hr_dev;
+ 
+ 	return 0;
+ 
+-error_failed_free_mr_init:
+-	hns_roce_exit(hr_dev);
+-
+ error_failed_roce_init:
+ 	kfree(hr_dev->priv);
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index bc7466830eaf9d..1c2660305d27c8 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -1168,7 +1168,8 @@ struct hns_roce_cfg_gmv_tb_b {
+ #define GMV_TB_B_SMAC_H GMV_TB_B_FIELD_LOC(47, 32)
+ #define GMV_TB_B_SGID_IDX GMV_TB_B_FIELD_LOC(71, 64)
+ 
+-#define HNS_ROCE_QUERY_PF_CAPS_CMD_NUM 5
++#define HNS_ROCE_QUERY_PF_CAPS_CMD_NUM_HIP08 5
++#define HNS_ROCE_QUERY_PF_CAPS_CMD_NUM 6
+ struct hns_roce_query_pf_caps_a {
+ 	u8 number_ports;
+ 	u8 local_ca_ack_delay;
+@@ -1280,6 +1281,11 @@ struct hns_roce_query_pf_caps_e {
+ 	__le16 aeq_period;
+ };
+ 
++struct hns_roce_query_pf_caps_f {
++	__le32 max_ack_req_msg_len;
++	__le32 rsv[5];
++};
++
+ #define PF_CAPS_E_FIELD_LOC(h, l) \
+ 	FIELD_LOC(struct hns_roce_query_pf_caps_e, h, l)
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index e7a497cc125cc3..11fa64044a8d85 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -947,10 +947,7 @@ static int hns_roce_init_hem(struct hns_roce_dev *hr_dev)
+ static void hns_roce_teardown_hca(struct hns_roce_dev *hr_dev)
+ {
+ 	hns_roce_cleanup_bitmap(hr_dev);
+-
+-	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_CQ_RECORD_DB ||
+-	    hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_QP_RECORD_DB)
+-		mutex_destroy(&hr_dev->pgdir_mutex);
++	mutex_destroy(&hr_dev->pgdir_mutex);
+ }
+ 
+ /**
+@@ -965,11 +962,11 @@ static int hns_roce_setup_hca(struct hns_roce_dev *hr_dev)
+ 
+ 	spin_lock_init(&hr_dev->sm_lock);
+ 
+-	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_CQ_RECORD_DB ||
+-	    hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_QP_RECORD_DB) {
+-		INIT_LIST_HEAD(&hr_dev->pgdir_list);
+-		mutex_init(&hr_dev->pgdir_mutex);
+-	}
++	INIT_LIST_HEAD(&hr_dev->qp_list);
++	spin_lock_init(&hr_dev->qp_list_lock);
++
++	INIT_LIST_HEAD(&hr_dev->pgdir_list);
++	mutex_init(&hr_dev->pgdir_mutex);
+ 
+ 	hns_roce_init_uar_table(hr_dev);
+ 
+@@ -1001,9 +998,7 @@ static int hns_roce_setup_hca(struct hns_roce_dev *hr_dev)
+ 
+ err_uar_table_free:
+ 	ida_destroy(&hr_dev->uar_ida.ida);
+-	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_CQ_RECORD_DB ||
+-	    hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_QP_RECORD_DB)
+-		mutex_destroy(&hr_dev->pgdir_mutex);
++	mutex_destroy(&hr_dev->pgdir_mutex);
+ 
+ 	return ret;
+ }
+@@ -1132,9 +1127,6 @@ int hns_roce_init(struct hns_roce_dev *hr_dev)
+ 		}
+ 	}
+ 
+-	INIT_LIST_HEAD(&hr_dev->qp_list);
+-	spin_lock_init(&hr_dev->qp_list_lock);
+-
+ 	ret = hns_roce_register_device(hr_dev);
+ 	if (ret)
+ 		goto error_failed_register_device;
+diff --git a/drivers/infiniband/hw/mana/qp.c b/drivers/infiniband/hw/mana/qp.c
+index c928af58f38bfe..456d78c6fcb896 100644
+--- a/drivers/infiniband/hw/mana/qp.c
++++ b/drivers/infiniband/hw/mana/qp.c
+@@ -773,7 +773,7 @@ static int mana_ib_gd_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 		req.ah_attr.dest_port = ROCE_V2_UDP_DPORT;
+ 		req.ah_attr.src_port = rdma_get_udp_sport(attr->ah_attr.grh.flow_label,
+ 							  ibqp->qp_num, attr->dest_qp_num);
+-		req.ah_attr.traffic_class = attr->ah_attr.grh.traffic_class;
++		req.ah_attr.traffic_class = attr->ah_attr.grh.traffic_class >> 2;
+ 		req.ah_attr.hop_limit = attr->ah_attr.grh.hop_limit;
+ 	}
+ 
+diff --git a/drivers/infiniband/hw/mlx5/dm.c b/drivers/infiniband/hw/mlx5/dm.c
+index b4c97fb62abfcc..9ded2b7c1e3199 100644
+--- a/drivers/infiniband/hw/mlx5/dm.c
++++ b/drivers/infiniband/hw/mlx5/dm.c
+@@ -282,7 +282,7 @@ static struct ib_dm *handle_alloc_dm_memic(struct ib_ucontext *ctx,
+ 	int err;
+ 	u64 address;
+ 
+-	if (!MLX5_CAP_DEV_MEM(dm_db->dev, memic))
++	if (!dm_db || !MLX5_CAP_DEV_MEM(dm_db->dev, memic))
+ 		return ERR_PTR(-EOPNOTSUPP);
+ 
+ 	dm = kzalloc(sizeof(*dm), GFP_KERNEL);
+diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c
+index 793f3c5c4d0126..80c665d152189d 100644
+--- a/drivers/infiniband/hw/mlx5/umr.c
++++ b/drivers/infiniband/hw/mlx5/umr.c
+@@ -32,13 +32,15 @@ static __be64 get_umr_disable_mr_mask(void)
+ 	return cpu_to_be64(result);
+ }
+ 
+-static __be64 get_umr_update_translation_mask(void)
++static __be64 get_umr_update_translation_mask(struct mlx5_ib_dev *dev)
+ {
+ 	u64 result;
+ 
+ 	result = MLX5_MKEY_MASK_LEN |
+ 		 MLX5_MKEY_MASK_PAGE_SIZE |
+ 		 MLX5_MKEY_MASK_START_ADDR;
++	if (MLX5_CAP_GEN_2(dev->mdev, umr_log_entity_size_5))
++		result |= MLX5_MKEY_MASK_PAGE_SIZE_5;
+ 
+ 	return cpu_to_be64(result);
+ }
+@@ -654,7 +656,7 @@ static void mlx5r_umr_final_update_xlt(struct mlx5_ib_dev *dev,
+ 		flags & MLX5_IB_UPD_XLT_ENABLE || flags & MLX5_IB_UPD_XLT_ADDR;
+ 
+ 	if (update_translation) {
+-		wqe->ctrl_seg.mkey_mask |= get_umr_update_translation_mask();
++		wqe->ctrl_seg.mkey_mask |= get_umr_update_translation_mask(dev);
+ 		if (!mr->ibmr.length)
+ 			MLX5_SET(mkc, &wqe->mkey_seg, length64, 1);
+ 	}
+diff --git a/drivers/interconnect/qcom/sc8180x.c b/drivers/interconnect/qcom/sc8180x.c
+index a741badaa966e0..4dd1d2f2e82162 100644
+--- a/drivers/interconnect/qcom/sc8180x.c
++++ b/drivers/interconnect/qcom/sc8180x.c
+@@ -1492,34 +1492,40 @@ static struct qcom_icc_bcm bcm_sh3 = {
+ 
+ static struct qcom_icc_bcm bcm_sn0 = {
+ 	.name = "SN0",
++	.num_nodes = 1,
+ 	.nodes = { &slv_qns_gemnoc_sf }
+ };
+ 
+ static struct qcom_icc_bcm bcm_sn1 = {
+ 	.name = "SN1",
++	.num_nodes = 1,
+ 	.nodes = { &slv_qxs_imem }
+ };
+ 
+ static struct qcom_icc_bcm bcm_sn2 = {
+ 	.name = "SN2",
+ 	.keepalive = true,
++	.num_nodes = 1,
+ 	.nodes = { &slv_qns_gemnoc_gc }
+ };
+ 
+ static struct qcom_icc_bcm bcm_co2 = {
+ 	.name = "CO2",
++	.num_nodes = 1,
+ 	.nodes = { &mas_qnm_npu }
+ };
+ 
+ static struct qcom_icc_bcm bcm_sn3 = {
+ 	.name = "SN3",
+ 	.keepalive = true,
++	.num_nodes = 2,
+ 	.nodes = { &slv_srvc_aggre1_noc,
+ 		  &slv_qns_cnoc }
+ };
+ 
+ static struct qcom_icc_bcm bcm_sn4 = {
+ 	.name = "SN4",
++	.num_nodes = 1,
+ 	.nodes = { &slv_qxs_pimem }
+ };
+ 
+diff --git a/drivers/interconnect/qcom/sc8280xp.c b/drivers/interconnect/qcom/sc8280xp.c
+index 0270f6c64481a9..c646cdf8a19bf6 100644
+--- a/drivers/interconnect/qcom/sc8280xp.c
++++ b/drivers/interconnect/qcom/sc8280xp.c
+@@ -48,6 +48,7 @@ static struct qcom_icc_node qnm_a1noc_cfg = {
+ 	.id = SC8280XP_MASTER_A1NOC_CFG,
+ 	.channels = 1,
+ 	.buswidth = 4,
++	.num_links = 1,
+ 	.links = { SC8280XP_SLAVE_SERVICE_A1NOC },
+ };
+ 
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index 31f8d208dedb7a..aafe94568e445f 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -634,8 +634,8 @@ static inline void pdev_disable_cap_pasid(struct pci_dev *pdev)
+ 
+ static void pdev_enable_caps(struct pci_dev *pdev)
+ {
+-	pdev_enable_cap_ats(pdev);
+ 	pdev_enable_cap_pasid(pdev);
++	pdev_enable_cap_ats(pdev);
+ 	pdev_enable_cap_pri(pdev);
+ }
+ 
+@@ -2526,8 +2526,21 @@ static inline u64 dma_max_address(enum protection_domain_mode pgtable)
+ 	if (pgtable == PD_MODE_V1)
+ 		return ~0ULL;
+ 
+-	/* V2 with 4/5 level page table */
+-	return ((1ULL << PM_LEVEL_SHIFT(amd_iommu_gpt_level)) - 1);
++	/*
++	 * V2 with 4/5 level page table. Note that "2.2.6.5 AMD64 4-Kbyte Page
++	 * Translation" shows that the V2 table sign extends the top of the
++	 * address space creating a reserved region in the middle of the
++	 * translation, just like the CPU does. Further Vasant says the docs are
++	 * incomplete and this only applies to non-zero PASIDs. If the AMDv2
++	 * page table is assigned to the 0 PASID then there is no sign extension
++	 * check.
++	 *
++	 * Since the IOMMU must have a fixed geometry, and the core code does
++	 * not understand sign extended addressing, we have to chop off the high
++	 * bit to get consistent behavior with attachments of the domain to any
++	 * PASID.
++	 */
++	return ((1ULL << (PM_LEVEL_SHIFT(amd_iommu_gpt_level) - 1)) - 1);
+ }
+ 
+ static bool amd_iommu_hd_support(struct amd_iommu *iommu)
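To put numbers on the comment above, assuming PM_LEVEL_SHIFT() yields 48 for
4-level and 57 for 5-level tables:

    /*
     * 4-level: (1ULL << 48) - 1  becomes  (1ULL << 47) - 1
     * 5-level: (1ULL << 57) - 1  becomes  (1ULL << 56) - 1
     * i.e. the high (sign) bit of the IOVA space is chopped off, so the
     * geometry is valid for any PASID the domain may later be attached to.
     */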
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index 59d02687280e8d..4f4c9e376fc4fe 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -342,7 +342,8 @@ static int qcom_adreno_smmu_init_context(struct arm_smmu_domain *smmu_domain,
+ 	priv->set_prr_addr = NULL;
+ 
+ 	if (of_device_is_compatible(np, "qcom,smmu-500") &&
+-			of_device_is_compatible(np, "qcom,adreno-smmu")) {
++	    !of_device_is_compatible(np, "qcom,sm8250-smmu-500") &&
++	    of_device_is_compatible(np, "qcom,adreno-smmu")) {
+ 		priv->set_prr_bit = qcom_adreno_smmu_set_prr_bit;
+ 		priv->set_prr_addr = qcom_adreno_smmu_set_prr_addr;
+ 	}
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index ff07ee2940f5f0..024fb7c36d884b 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -1440,7 +1440,6 @@ void domain_detach_iommu(struct dmar_domain *domain, struct intel_iommu *iommu)
+ 	if (--info->refcnt == 0) {
+ 		clear_bit(info->did, iommu->domain_ids);
+ 		xa_erase(&domain->iommu_array, iommu->seq_id);
+-		domain->nid = NUMA_NO_NODE;
+ 		kfree(info);
+ 	}
+ 	spin_unlock(&iommu->lock);
+diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
+index 6539869759b9ed..087e30bab7584c 100644
+--- a/drivers/irqchip/Kconfig
++++ b/drivers/irqchip/Kconfig
+@@ -534,6 +534,7 @@ config IMX_MU_MSI
+ 	tristate "i.MX MU used as MSI controller"
+ 	depends on OF && HAS_IOMEM
+ 	depends on ARCH_MXC || COMPILE_TEST
++	depends on ARM || ARM64
+ 	default m if ARCH_MXC
+ 	select IRQ_DOMAIN
+ 	select IRQ_DOMAIN_HIERARCHY
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 9daa78c5fe3322..47f3253c475720 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -9380,17 +9380,11 @@ static bool md_spares_need_change(struct mddev *mddev)
+ 	return false;
+ }
+ 
+-static int remove_and_add_spares(struct mddev *mddev,
+-				 struct md_rdev *this)
++static int remove_spares(struct mddev *mddev, struct md_rdev *this)
+ {
+ 	struct md_rdev *rdev;
+-	int spares = 0;
+ 	int removed = 0;
+ 
+-	if (this && test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
+-		/* Mustn't remove devices when resync thread is running */
+-		return 0;
+-
+ 	rdev_for_each(rdev, mddev) {
+ 		if ((this == NULL || rdev == this) && rdev_removeable(rdev) &&
+ 		    !mddev->pers->hot_remove_disk(mddev, rdev)) {
+@@ -9404,6 +9398,21 @@ static int remove_and_add_spares(struct mddev *mddev,
+ 	if (removed && mddev->kobj.sd)
+ 		sysfs_notify_dirent_safe(mddev->sysfs_degraded);
+ 
++	return removed;
++}
++
++static int remove_and_add_spares(struct mddev *mddev,
++				 struct md_rdev *this)
++{
++	struct md_rdev *rdev;
++	int spares = 0;
++	int removed = 0;
++
++	if (this && test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
++		/* Mustn't remove devices when resync thread is running */
++		return 0;
++
++	removed = remove_spares(mddev, this);
+ 	if (this && removed)
+ 		goto no_add;
+ 
+@@ -9446,6 +9455,7 @@ static bool md_choose_sync_action(struct mddev *mddev, int *spares)
+ 
+ 	/* Check if resync is in progress. */
+ 	if (mddev->recovery_cp < MaxSector) {
++		remove_spares(mddev, NULL);
+ 		set_bit(MD_RECOVERY_SYNC, &mddev->recovery);
+ 		clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
+ 		return true;
+@@ -9682,8 +9692,8 @@ void md_check_recovery(struct mddev *mddev)
+ 			 * remove disk.
+ 			 */
+ 			rdev_for_each_safe(rdev, tmp, mddev) {
+-				if (test_and_clear_bit(ClusterRemove, &rdev->flags) &&
+-						rdev->raid_disk < 0)
++				if (rdev->raid_disk < 0 &&
++				    test_and_clear_bit(ClusterRemove, &rdev->flags))
+ 					md_kick_rdev_from_array(rdev);
+ 			}
+ 		}
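The reorder matters because of the side effect hidden in the condition:
test_and_clear_bit() clears ClusterRemove even when the other test fails, so
the flag could be consumed without the device ever being kicked. Putting the
side-effect-free check first lets && short-circuit before the bit is touched:

    /* fixed order: the bit is only cleared for detached devices */
    if (rdev->raid_disk < 0 &&
        test_and_clear_bit(ClusterRemove, &rdev->flags))
    	md_kick_rdev_from_array(rdev);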
+@@ -9989,8 +9999,11 @@ static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
+ 
+ 	/* Check for change of roles in the active devices */
+ 	rdev_for_each_safe(rdev2, tmp, mddev) {
+-		if (test_bit(Faulty, &rdev2->flags))
++		if (test_bit(Faulty, &rdev2->flags)) {
++			if (test_bit(ClusterRemove, &rdev2->flags))
++				set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ 			continue;
++		}
+ 
+ 		/* Check if the roles changed */
+ 		role = le16_to_cpu(sb->dev_roles[rdev2->desc_nr]);
+diff --git a/drivers/media/platform/ti/j721e-csi2rx/j721e-csi2rx.c b/drivers/media/platform/ti/j721e-csi2rx/j721e-csi2rx.c
+index 6412a00be8eab8..0e358759e35faa 100644
+--- a/drivers/media/platform/ti/j721e-csi2rx/j721e-csi2rx.c
++++ b/drivers/media/platform/ti/j721e-csi2rx/j721e-csi2rx.c
+@@ -619,6 +619,7 @@ static void ti_csi2rx_dma_callback(void *param)
+ 
+ 		if (ti_csi2rx_start_dma(csi, buf)) {
+ 			dev_err(csi->dev, "Failed to queue the next buffer for DMA\n");
++			list_del(&buf->list);
+ 			vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
+ 		} else {
+ 			list_move_tail(&buf->list, &dma->submitted);
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls-core.c b/drivers/media/v4l2-core/v4l2-ctrls-core.c
+index 90d25329661ed7..b45809a82f9a66 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls-core.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls-core.c
+@@ -968,12 +968,12 @@ static int std_validate_compound(const struct v4l2_ctrl *ctrl, u32 idx,
+ 
+ 			p_h264_sps->flags &=
+ 				~V4L2_H264_SPS_FLAG_QPPRIME_Y_ZERO_TRANSFORM_BYPASS;
+-
+-			if (p_h264_sps->chroma_format_idc < 3)
+-				p_h264_sps->flags &=
+-					~V4L2_H264_SPS_FLAG_SEPARATE_COLOUR_PLANE;
+ 		}
+ 
++		if (p_h264_sps->chroma_format_idc < 3)
++			p_h264_sps->flags &=
++				~V4L2_H264_SPS_FLAG_SEPARATE_COLOUR_PLANE;
++
+ 		if (p_h264_sps->flags & V4L2_H264_SPS_FLAG_FRAME_MBS_ONLY)
+ 			p_h264_sps->flags &=
+ 				~V4L2_H264_SPS_FLAG_MB_ADAPTIVE_FRAME_FIELD;
+diff --git a/drivers/mfd/tps65219.c b/drivers/mfd/tps65219.c
+index fd390600fbf07e..297511025dd464 100644
+--- a/drivers/mfd/tps65219.c
++++ b/drivers/mfd/tps65219.c
+@@ -190,7 +190,7 @@ static const struct resource tps65219_regulator_resources[] = {
+ 
+ static const struct mfd_cell tps65214_cells[] = {
+ 	MFD_CELL_RES("tps65214-regulator", tps65214_regulator_resources),
+-	MFD_CELL_NAME("tps65215-gpio"),
++	MFD_CELL_NAME("tps65214-gpio"),
+ };
+ 
+ static const struct mfd_cell tps65215_cells[] = {
+diff --git a/drivers/misc/mei/platform-vsc.c b/drivers/misc/mei/platform-vsc.c
+index 435760b1e86f7a..b2b5a20ae3fa48 100644
+--- a/drivers/misc/mei/platform-vsc.c
++++ b/drivers/misc/mei/platform-vsc.c
+@@ -256,6 +256,9 @@ static int mei_vsc_hw_reset(struct mei_device *mei_dev, bool intr_enable)
+ 
+ 	vsc_tp_reset(hw->tp);
+ 
++	if (!intr_enable)
++		return 0;
++
+ 	return vsc_tp_init(hw->tp, mei_dev->dev);
+ }
+ 
+@@ -377,6 +380,8 @@ static int mei_vsc_probe(struct platform_device *pdev)
+ err_cancel:
+ 	mei_cancel_work(mei_dev);
+ 
++	vsc_tp_register_event_cb(tp, NULL, NULL);
++
+ 	mei_disable_interrupts(mei_dev);
+ 
+ 	return ret;
+@@ -385,11 +390,14 @@ static int mei_vsc_probe(struct platform_device *pdev)
+ static void mei_vsc_remove(struct platform_device *pdev)
+ {
+ 	struct mei_device *mei_dev = platform_get_drvdata(pdev);
++	struct mei_vsc_hw *hw = mei_dev_to_vsc_hw(mei_dev);
+ 
+ 	pm_runtime_disable(mei_dev->dev);
+ 
+ 	mei_stop(mei_dev);
+ 
++	vsc_tp_register_event_cb(hw->tp, NULL, NULL);
++
+ 	mei_disable_interrupts(mei_dev);
+ 
+ 	mei_deregister(mei_dev);
+diff --git a/drivers/misc/mei/vsc-tp.c b/drivers/misc/mei/vsc-tp.c
+index 267d0de5fade83..97df3077175d54 100644
+--- a/drivers/misc/mei/vsc-tp.c
++++ b/drivers/misc/mei/vsc-tp.c
+@@ -79,9 +79,8 @@ struct vsc_tp {
+ 
+ 	vsc_tp_event_cb_t event_notify;
+ 	void *event_notify_context;
+-
+-	/* used to protect command download */
+-	struct mutex mutex;
++	struct mutex event_notify_mutex;	/* protects event_notify + context */
++	struct mutex mutex;			/* protects command download */
+ };
+ 
+ /* GPIO resources */
+@@ -113,6 +112,8 @@ static irqreturn_t vsc_tp_thread_isr(int irq, void *data)
+ {
+ 	struct vsc_tp *tp = data;
+ 
++	guard(mutex)(&tp->event_notify_mutex);
++
+ 	if (tp->event_notify)
+ 		tp->event_notify(tp->event_notify_context);
+ 
+@@ -399,6 +400,8 @@ EXPORT_SYMBOL_NS_GPL(vsc_tp_need_read, "VSC_TP");
+ int vsc_tp_register_event_cb(struct vsc_tp *tp, vsc_tp_event_cb_t event_cb,
+ 			    void *context)
+ {
++	guard(mutex)(&tp->event_notify_mutex);
++
+ 	tp->event_notify = event_cb;
+ 	tp->event_notify_context = context;
+ 
+@@ -530,6 +533,7 @@ static int vsc_tp_probe(struct spi_device *spi)
+ 		return ret;
+ 
+ 	mutex_init(&tp->mutex);
++	mutex_init(&tp->event_notify_mutex);
+ 
+ 	/* only one child acpi device */
+ 	ret = acpi_dev_for_each_child(ACPI_COMPANION(dev),
+@@ -552,10 +556,11 @@ static int vsc_tp_probe(struct spi_device *spi)
+ 	return 0;
+ 
+ err_destroy_lock:
+-	mutex_destroy(&tp->mutex);
+-
+ 	free_irq(spi->irq, tp);
+ 
++	mutex_destroy(&tp->event_notify_mutex);
++	mutex_destroy(&tp->mutex);
++
+ 	return ret;
+ }
+ 
+@@ -565,9 +570,10 @@ static void vsc_tp_remove(struct spi_device *spi)
+ 
+ 	platform_device_unregister(tp->pdev);
+ 
+-	mutex_destroy(&tp->mutex);
+-
+ 	free_irq(spi->irq, tp);
++
++	mutex_destroy(&tp->event_notify_mutex);
++	mutex_destroy(&tp->mutex);
+ }
+ 
+ static void vsc_tp_shutdown(struct spi_device *spi)
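The new event_notify_mutex is taken with guard(mutex)(), the scope-based lock
helper from <linux/cleanup.h>: the mutex is acquired at the declaration and
released automatically on every path out of the scope. A minimal sketch of
the idiom, with a hypothetical function:

    static void demo_notify(struct vsc_tp *tp)
    {
    	guard(mutex)(&tp->event_notify_mutex);

    	if (!tp->event_notify)
    		return;		/* mutex released here */
    	tp->event_notify(tp->event_notify_context);
    }			/* ...and here */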
+diff --git a/drivers/misc/sram.c b/drivers/misc/sram.c
+index e5069882457ef6..c69644be4176ab 100644
+--- a/drivers/misc/sram.c
++++ b/drivers/misc/sram.c
+@@ -28,7 +28,8 @@ static ssize_t sram_read(struct file *filp, struct kobject *kobj,
+ {
+ 	struct sram_partition *part;
+ 
+-	part = container_of(attr, struct sram_partition, battr);
++	/* Cast away the const as the attribute is part of a larger structure */
++	part = (struct sram_partition *)container_of(attr, struct sram_partition, battr);
+ 
+ 	mutex_lock(&part->lock);
+ 	memcpy_fromio(buf, part->base + pos, count);
+@@ -43,7 +44,8 @@ static ssize_t sram_write(struct file *filp, struct kobject *kobj,
+ {
+ 	struct sram_partition *part;
+ 
+-	part = container_of(attr, struct sram_partition, battr);
++	/* Cast away the const as the attribute is part of a larger structure */
++	part = (struct sram_partition *)container_of(attr, struct sram_partition, battr);
+ 
+ 	mutex_lock(&part->lock);
+ 	memcpy_toio(part->base + pos, buf, count);
+@@ -164,8 +166,8 @@ static void sram_free_partitions(struct sram_dev *sram)
+ static int sram_reserve_cmp(void *priv, const struct list_head *a,
+ 					const struct list_head *b)
+ {
+-	struct sram_reserve *ra = list_entry(a, struct sram_reserve, list);
+-	struct sram_reserve *rb = list_entry(b, struct sram_reserve, list);
++	const struct sram_reserve *ra = list_entry(a, struct sram_reserve, list);
++	const struct sram_reserve *rb = list_entry(b, struct sram_reserve, list);
+ 
+ 	return ra->start - rb->start;
+ }
+diff --git a/drivers/mtd/ftl.c b/drivers/mtd/ftl.c
+index 8c22064ead3870..f2bd1984609ccc 100644
+--- a/drivers/mtd/ftl.c
++++ b/drivers/mtd/ftl.c
+@@ -344,7 +344,7 @@ static int erase_xfer(partition_t *part,
+             return -ENOMEM;
+ 
+     erase->addr = xfer->Offset;
+-    erase->len = 1 << part->header.EraseUnitSize;
++    erase->len = 1ULL << part->header.EraseUnitSize;
+ 
+     ret = mtd_erase(part->mbd.mtd, erase);
+     if (!ret) {
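The 1ULL matters because "1 << n" is evaluated as a 32-bit int regardless of
the 64-bit destination, so an EraseUnitSize of 31 or more overflows before
the assignment widens. Illustration:

    u64 bad  = 1 << 31;	/* undefined: signed 32-bit overflow */
    u64 good = 1ULL << 31;	/* 0x80000000, computed in 64 bits */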
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index dedcca87defc7a..84ab4a83cbd686 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -373,7 +373,7 @@ static int atmel_nand_dma_transfer(struct atmel_nand_controller *nc,
+ 	dma_cookie_t cookie;
+ 
+ 	buf_dma = dma_map_single(nc->dev, buf, len, dir);
+-	if (dma_mapping_error(nc->dev, dev_dma)) {
++	if (dma_mapping_error(nc->dev, buf_dma)) {
+ 		dev_err(nc->dev,
+ 			"Failed to prepare a buffer for DMA access\n");
+ 		goto err;
+diff --git a/drivers/mtd/nand/raw/atmel/pmecc.c b/drivers/mtd/nand/raw/atmel/pmecc.c
+index 3c7dee1be21df1..0b402823b619cf 100644
+--- a/drivers/mtd/nand/raw/atmel/pmecc.c
++++ b/drivers/mtd/nand/raw/atmel/pmecc.c
+@@ -143,6 +143,7 @@ struct atmel_pmecc_caps {
+ 	int nstrengths;
+ 	int el_offset;
+ 	bool correct_erased_chunks;
++	bool clk_ctrl;
+ };
+ 
+ struct atmel_pmecc {
+@@ -843,6 +844,10 @@ static struct atmel_pmecc *atmel_pmecc_create(struct platform_device *pdev,
+ 	if (IS_ERR(pmecc->regs.errloc))
+ 		return ERR_CAST(pmecc->regs.errloc);
+ 
++	/* pmecc data setup time */
++	if (caps->clk_ctrl)
++		writel(PMECC_CLK_133MHZ, pmecc->regs.base + ATMEL_PMECC_CLK);
++
+ 	/* Disable all interrupts before registering the PMECC handler. */
+ 	writel(0xffffffff, pmecc->regs.base + ATMEL_PMECC_IDR);
+ 	atmel_pmecc_reset(pmecc);
+@@ -896,6 +901,7 @@ static struct atmel_pmecc_caps at91sam9g45_caps = {
+ 	.strengths = atmel_pmecc_strengths,
+ 	.nstrengths = 5,
+ 	.el_offset = 0x8c,
++	.clk_ctrl = true,
+ };
+ 
+ static struct atmel_pmecc_caps sama5d4_caps = {
+diff --git a/drivers/mtd/nand/raw/rockchip-nand-controller.c b/drivers/mtd/nand/raw/rockchip-nand-controller.c
+index 63e7b9e39a5ab0..c5d7cd8a6cab41 100644
+--- a/drivers/mtd/nand/raw/rockchip-nand-controller.c
++++ b/drivers/mtd/nand/raw/rockchip-nand-controller.c
+@@ -656,9 +656,16 @@ static int rk_nfc_write_page_hwecc(struct nand_chip *chip, const u8 *buf,
+ 
+ 	dma_data = dma_map_single(nfc->dev, (void *)nfc->page_buf,
+ 				  mtd->writesize, DMA_TO_DEVICE);
++	if (dma_mapping_error(nfc->dev, dma_data))
++		return -ENOMEM;
++
+ 	dma_oob = dma_map_single(nfc->dev, nfc->oob_buf,
+ 				 ecc->steps * oob_step,
+ 				 DMA_TO_DEVICE);
++	if (dma_mapping_error(nfc->dev, dma_oob)) {
++		dma_unmap_single(nfc->dev, dma_data, mtd->writesize, DMA_TO_DEVICE);
++		return -ENOMEM;
++	}
+ 
+ 	reinit_completion(&nfc->done);
+ 	writel(INT_DMA, nfc->regs + nfc->cfg->int_en_off);
+@@ -772,9 +779,17 @@ static int rk_nfc_read_page_hwecc(struct nand_chip *chip, u8 *buf, int oob_on,
+ 	dma_data = dma_map_single(nfc->dev, nfc->page_buf,
+ 				  mtd->writesize,
+ 				  DMA_FROM_DEVICE);
++	if (dma_mapping_error(nfc->dev, dma_data))
++		return -ENOMEM;
++
+ 	dma_oob = dma_map_single(nfc->dev, nfc->oob_buf,
+ 				 ecc->steps * oob_step,
+ 				 DMA_FROM_DEVICE);
++	if (dma_mapping_error(nfc->dev, dma_oob)) {
++		dma_unmap_single(nfc->dev, dma_data, mtd->writesize,
++				 DMA_FROM_DEVICE);
++		return -ENOMEM;
++	}
+ 
+ 	/*
+ 	 * The first blocks (4, 8 or 16 depending on the device)
+diff --git a/drivers/mtd/spi-nor/spansion.c b/drivers/mtd/spi-nor/spansion.c
+index bf08dbf5e7421f..b9f156c0f8bcf9 100644
+--- a/drivers/mtd/spi-nor/spansion.c
++++ b/drivers/mtd/spi-nor/spansion.c
+@@ -17,6 +17,7 @@
+ 
+ #define SPINOR_OP_CLSR		0x30	/* Clear status register 1 */
+ #define SPINOR_OP_CLPEF		0x82	/* Clear program/erase failure flags */
++#define SPINOR_OP_CYPRESS_EX4B	0xB8	/* Exit 4-byte address mode */
+ #define SPINOR_OP_CYPRESS_DIE_ERASE		0x61	/* Chip (die) erase */
+ #define SPINOR_OP_RD_ANY_REG			0x65	/* Read any register */
+ #define SPINOR_OP_WR_ANY_REG			0x71	/* Write any register */
+@@ -58,6 +59,13 @@
+ 		   SPI_MEM_OP_DUMMY(ndummy, 0),				\
+ 		   SPI_MEM_OP_DATA_IN(1, buf, 0))
+ 
++#define CYPRESS_NOR_EN4B_EX4B_OP(enable)				\
++	SPI_MEM_OP(SPI_MEM_OP_CMD(enable ? SPINOR_OP_EN4B :		\
++					   SPINOR_OP_CYPRESS_EX4B, 0),	\
++		   SPI_MEM_OP_NO_ADDR,					\
++		   SPI_MEM_OP_NO_DUMMY,					\
++		   SPI_MEM_OP_NO_DATA)
++
+ #define SPANSION_OP(opcode)						\
+ 	SPI_MEM_OP(SPI_MEM_OP_CMD(opcode, 0),				\
+ 		   SPI_MEM_OP_NO_ADDR,					\
+@@ -356,6 +364,20 @@ static int cypress_nor_quad_enable_volatile(struct spi_nor *nor)
+ 	return 0;
+ }
+ 
++static int cypress_nor_set_4byte_addr_mode(struct spi_nor *nor, bool enable)
++{
++	int ret;
++	struct spi_mem_op op = CYPRESS_NOR_EN4B_EX4B_OP(enable);
++
++	spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
++
++	ret = spi_mem_exec_op(nor->spimem, &op);
++	if (ret)
++		dev_dbg(nor->dev, "error %d setting 4-byte mode\n", ret);
++
++	return ret;
++}
++
+ /**
+  * cypress_nor_determine_addr_mode_by_sr1() - Determine current address mode
+  *                                            (3 or 4-byte) by querying status
+@@ -526,6 +548,9 @@ s25fs256t_post_bfpt_fixup(struct spi_nor *nor,
+ 	struct spi_mem_op op;
+ 	int ret;
+ 
++	/* Assign 4-byte address mode method that is not determined in BFPT */
++	nor->params->set_4byte_addr_mode = cypress_nor_set_4byte_addr_mode;
++
+ 	ret = cypress_nor_set_addr_mode_nbytes(nor);
+ 	if (ret)
+ 		return ret;
+@@ -591,6 +616,9 @@ s25hx_t_post_bfpt_fixup(struct spi_nor *nor,
+ {
+ 	int ret;
+ 
++	/* Assign 4-byte address mode method that is not determined in BFPT */
++	nor->params->set_4byte_addr_mode = cypress_nor_set_4byte_addr_mode;
++
+ 	ret = cypress_nor_set_addr_mode_nbytes(nor);
+ 	if (ret)
+ 		return ret;
+@@ -718,6 +746,9 @@ static int s28hx_t_post_bfpt_fixup(struct spi_nor *nor,
+ 				   const struct sfdp_parameter_header *bfpt_header,
+ 				   const struct sfdp_bfpt *bfpt)
+ {
++	/* Assign 4-byte address mode method that is not determined in BFPT */
++	nor->params->set_4byte_addr_mode = cypress_nor_set_4byte_addr_mode;
++
+ 	return cypress_nor_set_addr_mode_nbytes(nor);
+ }
+ 
+diff --git a/drivers/net/can/kvaser_pciefd.c b/drivers/net/can/kvaser_pciefd.c
+index 0071a51ce2c1b2..879b3ea6e9b0fc 100644
+--- a/drivers/net/can/kvaser_pciefd.c
++++ b/drivers/net/can/kvaser_pciefd.c
+@@ -981,6 +981,7 @@ static int kvaser_pciefd_setup_can_ctrls(struct kvaser_pciefd *pcie)
+ 		can->completed_tx_bytes = 0;
+ 		can->bec.txerr = 0;
+ 		can->bec.rxerr = 0;
++		can->can.dev->dev_port = i;
+ 
+ 		init_completion(&can->start_comp);
+ 		init_completion(&can->flush_comp);
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+index dcb0bcbe0565ab..f73ccbc3140a4b 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+@@ -852,6 +852,7 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
+ 	netdev->ethtool_ops = &kvaser_usb_ethtool_ops;
+ 	SET_NETDEV_DEV(netdev, &dev->intf->dev);
+ 	netdev->dev_id = channel;
++	netdev->dev_port = channel;
+ 
+ 	dev->nets[channel] = priv;
+ 
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
+index 4d85b29a17b787..ebefc274b50a5f 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
+@@ -49,7 +49,7 @@ struct __packed pcan_ufd_fw_info {
+ 	__le32	ser_no;		/* S/N */
+ 	__le32	flags;		/* special functions */
+ 
+-	/* extended data when type == PCAN_USBFD_TYPE_EXT */
++	/* extended data when type >= PCAN_USBFD_TYPE_EXT */
+ 	u8	cmd_out_ep;	/* ep for cmd */
+ 	u8	cmd_in_ep;	/* ep for replies */
+ 	u8	data_out_ep[2];	/* ep for CANx TX */
+@@ -982,10 +982,11 @@ static int pcan_usb_fd_init(struct peak_usb_device *dev)
+ 			dev->can.ctrlmode |= CAN_CTRLMODE_FD_NON_ISO;
+ 		}
+ 
+-		/* if vendor rsp is of type 2, then it contains EP numbers to
+-		 * use for cmds pipes. If not, then default EP should be used.
++		/* if vendor rsp type is greater than or equal to 2, then it
++		 * contains EP numbers to use for cmds pipes. If not, then
++		 * default EP should be used.
+ 		 */
+-		if (fw_info->type != cpu_to_le16(PCAN_USBFD_TYPE_EXT)) {
++		if (le16_to_cpu(fw_info->type) < PCAN_USBFD_TYPE_EXT) {
+ 			fw_info->cmd_out_ep = PCAN_USBPRO_EP_CMDOUT;
+ 			fw_info->cmd_in_ep = PCAN_USBPRO_EP_CMDIN;
+ 		}
+@@ -1018,11 +1019,11 @@ static int pcan_usb_fd_init(struct peak_usb_device *dev)
+ 	dev->can_channel_id =
+ 		le32_to_cpu(pdev->usb_if->fw_info.dev_id[dev->ctrl_idx]);
+ 
+-	/* if vendor rsp is of type 2, then it contains EP numbers to
+-	 * use for data pipes. If not, then statically defined EP are used
+-	 * (see peak_usb_create_dev()).
++	/* if vendor rsp type is greater than or equal to 2, then it contains EP
++	 * numbers to use for data pipes. If not, then statically defined EP are
++	 * used (see peak_usb_create_dev()).
+ 	 */
+-	if (fw_info->type == cpu_to_le16(PCAN_USBFD_TYPE_EXT)) {
++	if (le16_to_cpu(fw_info->type) >= PCAN_USBFD_TYPE_EXT) {
+ 		dev->ep_msg_in = fw_info->data_in_ep;
+ 		dev->ep_msg_out = fw_info->data_out_ep[dev->ctrl_idx];
+ 	}
+diff --git a/drivers/net/dsa/microchip/ksz8.c b/drivers/net/dsa/microchip/ksz8.c
+index be433b4e2b1ca8..8f55be89f8bf65 100644
+--- a/drivers/net/dsa/microchip/ksz8.c
++++ b/drivers/net/dsa/microchip/ksz8.c
+@@ -371,6 +371,9 @@ static void ksz8863_r_mib_pkt(struct ksz_device *dev, int port, u16 addr,
+ 	addr -= dev->info->reg_mib_cnt;
+ 	ctrl_addr = addr ? KSZ8863_MIB_PACKET_DROPPED_TX_0 :
+ 			   KSZ8863_MIB_PACKET_DROPPED_RX_0;
++	if (ksz_is_8895_family(dev) &&
++	    ctrl_addr == KSZ8863_MIB_PACKET_DROPPED_RX_0)
++		ctrl_addr = KSZ8895_MIB_PACKET_DROPPED_RX_0;
+ 	ctrl_addr += port;
+ 	ctrl_addr |= IND_ACC_TABLE(TABLE_MIB | TABLE_READ);
+ 
+diff --git a/drivers/net/dsa/microchip/ksz8_reg.h b/drivers/net/dsa/microchip/ksz8_reg.h
+index 329688603a582b..da80e659c64809 100644
+--- a/drivers/net/dsa/microchip/ksz8_reg.h
++++ b/drivers/net/dsa/microchip/ksz8_reg.h
+@@ -784,7 +784,9 @@
+ #define KSZ8795_MIB_TOTAL_TX_1		0x105
+ 
+ #define KSZ8863_MIB_PACKET_DROPPED_TX_0 0x100
+-#define KSZ8863_MIB_PACKET_DROPPED_RX_0 0x105
++#define KSZ8863_MIB_PACKET_DROPPED_RX_0 0x103
++
++#define KSZ8895_MIB_PACKET_DROPPED_RX_0 0x105
+ 
+ #define MIB_PACKET_DROPPED		0x0000FFFF
+ 
+diff --git a/drivers/net/ethernet/airoha/airoha_npu.c b/drivers/net/ethernet/airoha/airoha_npu.c
+index 760367c2c033ba..cfd540c93dc85b 100644
+--- a/drivers/net/ethernet/airoha/airoha_npu.c
++++ b/drivers/net/ethernet/airoha/airoha_npu.c
+@@ -518,6 +518,8 @@ static struct platform_driver airoha_npu_driver = {
+ };
+ module_platform_driver(airoha_npu_driver);
+ 
++MODULE_FIRMWARE(NPU_EN7581_FIRMWARE_DATA);
++MODULE_FIRMWARE(NPU_EN7581_FIRMWARE_RV32);
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Lorenzo Bianconi <lorenzo@kernel.org>");
+ MODULE_DESCRIPTION("Airoha Network Processor Unit driver");
+diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
+index a89aa4ac0a064a..779f1324bb5f82 100644
+--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
++++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
+@@ -3852,8 +3852,8 @@ int be_cmd_set_mac_list(struct be_adapter *adapter, u8 *mac_array,
+ 	status = be_mcc_notify_wait(adapter);
+ 
+ err:
+-	dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma);
+ 	spin_unlock_bh(&adapter->mcc_lock);
++	dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma);
+ 	return status;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/fm10k/fm10k.h b/drivers/net/ethernet/intel/fm10k/fm10k.h
+index 6119a410883815..65a2816142d962 100644
+--- a/drivers/net/ethernet/intel/fm10k/fm10k.h
++++ b/drivers/net/ethernet/intel/fm10k/fm10k.h
+@@ -189,13 +189,14 @@ struct fm10k_q_vector {
+ 	struct fm10k_ring_container rx, tx;
+ 
+ 	struct napi_struct napi;
++	struct rcu_head rcu;	/* to avoid race with update stats on free */
++
+ 	cpumask_t affinity_mask;
+ 	char name[IFNAMSIZ + 9];
+ 
+ #ifdef CONFIG_DEBUG_FS
+ 	struct dentry *dbg_q_vector;
+ #endif /* CONFIG_DEBUG_FS */
+-	struct rcu_head rcu;	/* to avoid race with update stats on free */
+ 
+ 	/* for dynamic allocation of rings associated with this q_vector */
+ 	struct fm10k_ring ring[] ____cacheline_internodealigned_in_smp;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
+index c67963bfe14ed0..7c600d6e66ba7c 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e.h
++++ b/drivers/net/ethernet/intel/i40e/i40e.h
+@@ -945,6 +945,7 @@ struct i40e_q_vector {
+ 	u16 reg_idx;		/* register index of the interrupt */
+ 
+ 	struct napi_struct napi;
++	struct rcu_head rcu;	/* to avoid race with update stats on free */
+ 
+ 	struct i40e_ring_container rx;
+ 	struct i40e_ring_container tx;
+@@ -955,7 +956,6 @@ struct i40e_q_vector {
+ 	cpumask_t affinity_mask;
+ 	struct irq_affinity_notify affinity_notify;
+ 
+-	struct rcu_head rcu;	/* to avoid race with update stats on free */
+ 	char name[I40E_INT_NAME_STR_LEN];
+ 	bool arm_wb_state;
+ 	bool in_busy_poll;
+diff --git a/drivers/net/ethernet/intel/igb/igb_xsk.c b/drivers/net/ethernet/intel/igb/igb_xsk.c
+index 157d43787fa0b5..02935d4e114084 100644
+--- a/drivers/net/ethernet/intel/igb/igb_xsk.c
++++ b/drivers/net/ethernet/intel/igb/igb_xsk.c
+@@ -481,7 +481,7 @@ bool igb_xmit_zc(struct igb_ring *tx_ring, struct xsk_buff_pool *xsk_pool)
+ 	if (!nb_pkts)
+ 		return true;
+ 
+-	while (nb_pkts-- > 0) {
++	for (; i < nb_pkts; i++) {
+ 		dma = xsk_buff_raw_get_dma(xsk_pool, descs[i].addr);
+ 		xsk_buff_raw_dma_sync_for_device(xsk_pool, dma, descs[i].len);
+ 
+@@ -511,7 +511,6 @@ bool igb_xmit_zc(struct igb_ring *tx_ring, struct xsk_buff_pool *xsk_pool)
+ 
+ 		total_bytes += descs[i].len;
+ 
+-		i++;
+ 		tx_ring->next_to_use++;
+ 		tx_buffer_info->next_to_watch = tx_desc;
+ 		if (tx_ring->next_to_use == tx_ring->count)
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+index e6a380d4929bd3..fb43fba5daa1f9 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+@@ -505,9 +505,10 @@ struct ixgbe_q_vector {
+ 	struct ixgbe_ring_container rx, tx;
+ 
+ 	struct napi_struct napi;
++	struct rcu_head rcu;	/* to avoid race with update stats on free */
++
+ 	cpumask_t affinity_mask;
+ 	int numa_node;
+-	struct rcu_head rcu;	/* to avoid race with update stats on free */
+ 	char name[IFNAMSIZ + 9];
+ 
+ 	/* for dynamic allocation of rings associated with this q_vector */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+index 8e25f4ef5cccee..5ae787656a7ca0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+@@ -331,6 +331,9 @@ static int port_set_buffer(struct mlx5e_priv *priv,
+ 	if (err)
+ 		goto out;
+ 
++	/* RO bits should be set to 0 on write */
++	MLX5_SET(pbmc_reg, in, port_buffer_size, 0);
++
+ 	err = mlx5e_port_set_pbmc(mdev, in);
+ out:
+ 	kfree(in);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
+index 727fa7c185238c..6056106edcc647 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
+@@ -327,6 +327,10 @@ void mlx5e_ipsec_offload_handle_rx_skb(struct net_device *netdev,
+ 	if (unlikely(!sa_entry)) {
+ 		rcu_read_unlock();
+ 		atomic64_inc(&ipsec->sw_stats.ipsec_rx_drop_sadb_miss);
++		/* Clear secpath to prevent invalid dereference
++		 * in downstream XFRM policy checks.
++		 */
++		secpath_reset(skb);
+ 		return;
+ 	}
+ 	xfrm_state_hold(sa_entry->x);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index e37cf4f754c480..b6d733138de31f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -1567,6 +1567,7 @@ static inline void mlx5e_build_rx_skb(struct mlx5_cqe64 *cqe,
+ 		unsigned int hdrlen = mlx5e_lro_update_hdr(skb, cqe, cqe_bcnt);
+ 
+ 		skb_shinfo(skb)->gso_size = DIV_ROUND_UP(cqe_bcnt - hdrlen, lro_num_seg);
++		skb_shinfo(skb)->gso_segs = lro_num_seg;
+ 		/* Subtract one since we already counted this as one
+ 		 * "regular" packet in mlx5e_complete_rx_cqe()
+ 		 */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/dm.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/dm.c
+index 7c5516b0a84494..8115071c34a4ae 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/dm.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/dm.c
+@@ -30,7 +30,7 @@ struct mlx5_dm *mlx5_dm_create(struct mlx5_core_dev *dev)
+ 
+ 	dm = kzalloc(sizeof(*dm), GFP_KERNEL);
+ 	if (!dm)
+-		return ERR_PTR(-ENOMEM);
++		return NULL;
+ 
+ 	spin_lock_init(&dm->lock);
+ 
+@@ -96,7 +96,7 @@ struct mlx5_dm *mlx5_dm_create(struct mlx5_core_dev *dev)
+ err_steering:
+ 	kfree(dm);
+ 
+-	return ERR_PTR(-ENOMEM);
++	return NULL;
+ }
+ 
+ void mlx5_dm_cleanup(struct mlx5_core_dev *dev)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 9c1504d29d34c3..e7bcd0f0a70979 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -1102,9 +1102,6 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)
+ 	}
+ 
+ 	dev->dm = mlx5_dm_create(dev);
+-	if (IS_ERR(dev->dm))
+-		mlx5_core_warn(dev, "Failed to init device memory %ld\n", PTR_ERR(dev->dm));
+-
+ 	dev->tracer = mlx5_fw_tracer_create(dev);
+ 	dev->hv_vhca = mlx5_hv_vhca_create(dev);
+ 	dev->rsc_dump = mlx5_rsc_dump_create(dev);
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c b/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
+index 2524d9b88d591f..34c00c67006f9e 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
+@@ -33,7 +33,7 @@ int __fbnic_open(struct fbnic_net *fbn)
+ 		dev_warn(fbd->dev,
+ 			 "Error %d sending host ownership message to the firmware\n",
+ 			 err);
+-		goto free_resources;
++		goto err_reset_queues;
+ 	}
+ 
+ 	err = fbnic_time_start(fbn);
+@@ -57,6 +57,8 @@ int __fbnic_open(struct fbnic_net *fbn)
+ 	fbnic_time_stop(fbn);
+ release_ownership:
+ 	fbnic_fw_xmit_ownership_msg(fbn->fbd, false);
++err_reset_queues:
++	fbnic_reset_netif_queues(fbn);
+ free_resources:
+ 	fbnic_free_resources(fbn);
+ free_napi_vectors:
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+index ac11389a764cef..f9543d03485fe1 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+@@ -661,8 +661,8 @@ static void fbnic_page_pool_init(struct fbnic_ring *ring, unsigned int idx,
+ {
+ 	struct fbnic_rx_buf *rx_buf = &ring->rx_buf[idx];
+ 
+-	page_pool_fragment_page(page, PAGECNT_BIAS_MAX);
+-	rx_buf->pagecnt_bias = PAGECNT_BIAS_MAX;
++	page_pool_fragment_page(page, FBNIC_PAGECNT_BIAS_MAX);
++	rx_buf->pagecnt_bias = FBNIC_PAGECNT_BIAS_MAX;
+ 	rx_buf->page = page;
+ }
+ 
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
+index f46616af41eac4..37b4dadbfc6c8b 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
+@@ -91,10 +91,8 @@ struct fbnic_queue_stats {
+ 	struct u64_stats_sync syncp;
+ };
+ 
+-/* Pagecnt bias is long max to reserve the last bit to catch overflow
+- * cases where if we overcharge the bias it will flip over to be negative.
+- */
+-#define PAGECNT_BIAS_MAX	LONG_MAX
++#define FBNIC_PAGECNT_BIAS_MAX	PAGE_SIZE
++
+ struct fbnic_rx_buf {
+ 	struct page *page;
+ 	long pagecnt_bias;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 1d716cee0cb108..77f800df6f3783 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2585,7 +2585,7 @@ static bool stmmac_xdp_xmit_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
+ 
+ 	budget = min(budget, stmmac_tx_avail(priv, queue));
+ 
+-	while (budget-- > 0) {
++	for (; budget > 0; budget--) {
+ 		struct stmmac_metadata_request meta_req;
+ 		struct xsk_tx_metadata *meta = NULL;
+ 		dma_addr_t dma_addr;
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_common.c b/drivers/net/ethernet/ti/icssg/icssg_common.c
+index 7ae069e7af92ba..3579e07be3da4b 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_common.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_common.c
+@@ -706,9 +706,9 @@ static int emac_rx_packet(struct prueth_emac *emac, u32 flow_id, u32 *xdp_state)
+ 	struct page_pool *pool;
+ 	struct sk_buff *skb;
+ 	struct xdp_buff xdp;
++	int headroom, ret;
+ 	u32 *psdata;
+ 	void *pa;
+-	int ret;
+ 
+ 	*xdp_state = 0;
+ 	pool = rx_chn->pg_pool;
+@@ -757,22 +757,23 @@ static int emac_rx_packet(struct prueth_emac *emac, u32 flow_id, u32 *xdp_state)
+ 		xdp_prepare_buff(&xdp, pa, PRUETH_HEADROOM, pkt_len, false);
+ 
+ 		*xdp_state = emac_run_xdp(emac, &xdp, page, &pkt_len);
+-		if (*xdp_state == ICSSG_XDP_PASS)
+-			skb = xdp_build_skb_from_buff(&xdp);
+-		else
++		if (*xdp_state != ICSSG_XDP_PASS)
+ 			goto requeue;
++		headroom = xdp.data - xdp.data_hard_start;
++		pkt_len = xdp.data_end - xdp.data;
+ 	} else {
+-		/* prepare skb and send to n/w stack */
+-		skb = napi_build_skb(pa, PAGE_SIZE);
++		headroom = PRUETH_HEADROOM;
+ 	}
+ 
++	/* prepare skb and send to n/w stack */
++	skb = napi_build_skb(pa, PAGE_SIZE);
+ 	if (!skb) {
+ 		ndev->stats.rx_dropped++;
+ 		page_pool_recycle_direct(pool, page);
+ 		goto requeue;
+ 	}
+ 
+-	skb_reserve(skb, PRUETH_HEADROOM);
++	skb_reserve(skb, headroom);
+ 	skb_put(skb, pkt_len);
+ 	skb->dev = ndev;
+ 
+diff --git a/drivers/net/ipa/ipa_sysfs.c b/drivers/net/ipa/ipa_sysfs.c
+index a59bd215494c9b..a53e9e6f6cdf50 100644
+--- a/drivers/net/ipa/ipa_sysfs.c
++++ b/drivers/net/ipa/ipa_sysfs.c
+@@ -37,8 +37,12 @@ static const char *ipa_version_string(struct ipa *ipa)
+ 		return "4.11";
+ 	case IPA_VERSION_5_0:
+ 		return "5.0";
++	case IPA_VERSION_5_1:
++		return "5.1";
++	case IPA_VERSION_5_5:
++		return "5.5";
+ 	default:
+-		return "0.0";	/* Won't happen (checked at probe time) */
++		return "0.0";	/* Should not happen */
+ 	}
+ }
+ 
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 7edbe76b5455a8..4c75d1fea55271 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -3868,7 +3868,7 @@ static void macsec_setup(struct net_device *dev)
+ 	ether_setup(dev);
+ 	dev->min_mtu = 0;
+ 	dev->max_mtu = ETH_MAX_MTU;
+-	dev->priv_flags |= IFF_NO_QUEUE;
++	dev->priv_flags |= IFF_NO_QUEUE | IFF_UNICAST_FLT;
+ 	dev->netdev_ops = &macsec_netdev_ops;
+ 	dev->needs_free_netdev = true;
+ 	dev->priv_destructor = macsec_free_netdev;
+diff --git a/drivers/net/mdio/mdio-bcm-unimac.c b/drivers/net/mdio/mdio-bcm-unimac.c
+index 074d96328f41ad..60565e7c88bdba 100644
+--- a/drivers/net/mdio/mdio-bcm-unimac.c
++++ b/drivers/net/mdio/mdio-bcm-unimac.c
+@@ -209,10 +209,9 @@ static int unimac_mdio_clk_set(struct unimac_mdio_priv *priv)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (!priv->clk)
++	rate = clk_get_rate(priv->clk);
++	if (!rate)
+ 		rate = 250000000;
+-	else
+-		rate = clk_get_rate(priv->clk);
+ 
+ 	div = (rate / (2 * priv->clk_freq)) - 1;
+ 	if (div & ~MDIO_CLK_DIV_MASK) {
+diff --git a/drivers/net/netconsole.c b/drivers/net/netconsole.c
+index 176935a8645ff1..a35b1fd4337b94 100644
+--- a/drivers/net/netconsole.c
++++ b/drivers/net/netconsole.c
+@@ -86,10 +86,10 @@ static DEFINE_SPINLOCK(target_list_lock);
+ static DEFINE_MUTEX(target_cleanup_list_lock);
+ 
+ /*
+- * Console driver for extended netconsoles.  Registered on the first use to
+- * avoid unnecessarily enabling ext message formatting.
++ * Console driver for netconsoles.  Register only consoles that have
++ * an associated target of the same type.
+  */
+-static struct console netconsole_ext;
++static struct console netconsole_ext, netconsole;
+ 
+ struct netconsole_target_stats  {
+ 	u64_stats_t xmit_drop_count;
+@@ -97,6 +97,11 @@ struct netconsole_target_stats  {
+ 	struct u64_stats_sync syncp;
+ };
+ 
++enum console_type {
++	CONS_BASIC = BIT(0),
++	CONS_EXTENDED = BIT(1),
++};
++
+ /* Features enabled in sysdata. Contrary to userdata, this data is populated by
+  * the kernel. The fields are designed as bitwise flags, allowing multiple
+  * features to be set in sysdata_fields.
+@@ -491,6 +496,12 @@ static ssize_t enabled_store(struct config_item *item,
+ 		if (nt->extended && !console_is_registered(&netconsole_ext))
+ 			register_console(&netconsole_ext);
+ 
++		/* User might be enabling the basic format target for the very
++		 * first time, make sure the console is registered.
++		 */
++		if (!nt->extended && !console_is_registered(&netconsole))
++			register_console(&netconsole);
++
+ 		/*
+ 		 * Skip netpoll_parse_options() -- all the attributes are
+ 		 * already configured via configfs. Just print them out.
+@@ -1690,8 +1701,8 @@ static int __init init_netconsole(void)
+ {
+ 	int err;
+ 	struct netconsole_target *nt, *tmp;
++	u32 console_type_needed = 0;
+ 	unsigned int count = 0;
+-	bool extended = false;
+ 	unsigned long flags;
+ 	char *target_config;
+ 	char *input = config;
+@@ -1707,9 +1718,10 @@ static int __init init_netconsole(void)
+ 			}
+ 			/* Dump existing printks when we register */
+ 			if (nt->extended) {
+-				extended = true;
++				console_type_needed |= CONS_EXTENDED;
+ 				netconsole_ext.flags |= CON_PRINTBUFFER;
+ 			} else {
++				console_type_needed |= CONS_BASIC;
+ 				netconsole.flags |= CON_PRINTBUFFER;
+ 			}
+ 
+@@ -1728,9 +1740,10 @@ static int __init init_netconsole(void)
+ 	if (err)
+ 		goto undonotifier;
+ 
+-	if (extended)
++	if (console_type_needed & CONS_EXTENDED)
+ 		register_console(&netconsole_ext);
+-	register_console(&netconsole);
++	if (console_type_needed & CONS_BASIC)
++		register_console(&netconsole);
+ 	pr_info("network logging started\n");
+ 
+ 	return err;
+@@ -1760,7 +1773,8 @@ static void __exit cleanup_netconsole(void)
+ 
+ 	if (console_is_registered(&netconsole_ext))
+ 		unregister_console(&netconsole_ext);
+-	unregister_console(&netconsole);
++	if (console_is_registered(&netconsole))
++		unregister_console(&netconsole);
+ 	dynamic_netconsole_exit();
+ 	unregister_netdevice_notifier(&netconsole_netdev_notifier);
+ 
+diff --git a/drivers/net/phy/mscc/mscc_ptp.c b/drivers/net/phy/mscc/mscc_ptp.c
+index 6b800081eed52f..275706de5847cd 100644
+--- a/drivers/net/phy/mscc/mscc_ptp.c
++++ b/drivers/net/phy/mscc/mscc_ptp.c
+@@ -900,6 +900,7 @@ static int vsc85xx_eth1_conf(struct phy_device *phydev, enum ts_blk blk,
+ 				     get_unaligned_be32(ptp_multicast));
+ 	} else {
+ 		val |= ANA_ETH1_FLOW_ADDR_MATCH2_ANY_MULTICAST;
++		val |= ANA_ETH1_FLOW_ADDR_MATCH2_ANY_UNICAST;
+ 		vsc85xx_ts_write_csr(phydev, blk,
+ 				     MSCC_ANA_ETH1_FLOW_ADDR_MATCH2(0), val);
+ 		vsc85xx_ts_write_csr(phydev, blk,
+diff --git a/drivers/net/phy/mscc/mscc_ptp.h b/drivers/net/phy/mscc/mscc_ptp.h
+index da3465360e9018..ae9ad925bfa8c0 100644
+--- a/drivers/net/phy/mscc/mscc_ptp.h
++++ b/drivers/net/phy/mscc/mscc_ptp.h
+@@ -98,6 +98,7 @@
+ #define MSCC_ANA_ETH1_FLOW_ADDR_MATCH2(x) (MSCC_ANA_ETH1_FLOW_ENA(x) + 3)
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_MASK_MASK	GENMASK(22, 20)
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_ANY_MULTICAST	0x400000
++#define ANA_ETH1_FLOW_ADDR_MATCH2_ANY_UNICAST	0x200000
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_FULL_ADDR	0x100000
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_SRC_DEST_MASK	GENMASK(17, 16)
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_SRC_DEST	0x020000
+diff --git a/drivers/net/ppp/pptp.c b/drivers/net/ppp/pptp.c
+index 5feaa70b5f47e6..90737cb718928a 100644
+--- a/drivers/net/ppp/pptp.c
++++ b/drivers/net/ppp/pptp.c
+@@ -159,19 +159,17 @@ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
+ 	int len;
+ 	unsigned char *data;
+ 	__u32 seq_recv;
+-
+-
+ 	struct rtable *rt;
+ 	struct net_device *tdev;
+ 	struct iphdr  *iph;
+ 	int    max_headroom;
+ 
+ 	if (sk_pppox(po)->sk_state & PPPOX_DEAD)
+-		goto tx_error;
++		goto tx_drop;
+ 
+ 	rt = pptp_route_output(po, &fl4);
+ 	if (IS_ERR(rt))
+-		goto tx_error;
++		goto tx_drop;
+ 
+ 	tdev = rt->dst.dev;
+ 
+@@ -179,16 +177,20 @@ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
+ 
+ 	if (skb_headroom(skb) < max_headroom || skb_cloned(skb) || skb_shared(skb)) {
+ 		struct sk_buff *new_skb = skb_realloc_headroom(skb, max_headroom);
+-		if (!new_skb) {
+-			ip_rt_put(rt);
++
++		if (!new_skb)
+ 			goto tx_error;
+-		}
++
+ 		if (skb->sk)
+ 			skb_set_owner_w(new_skb, skb->sk);
+ 		consume_skb(skb);
+ 		skb = new_skb;
+ 	}
+ 
++	/* Ensure we can safely access protocol field and LCP code */
++	if (!pskb_may_pull(skb, 3))
++		goto tx_error;
++
+ 	data = skb->data;
+ 	islcp = ((data[0] << 8) + data[1]) == PPP_LCP && 1 <= data[2] && data[2] <= 7;
+ 
+@@ -262,6 +264,8 @@ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
+ 	return 1;
+ 
+ tx_error:
++	ip_rt_put(rt);
++tx_drop:
+ 	kfree_skb(skb);
+ 	return 1;
+ }
+diff --git a/drivers/net/team/team_core.c b/drivers/net/team/team_core.c
+index b75ceb90359f89..94fc7eec4fca39 100644
+--- a/drivers/net/team/team_core.c
++++ b/drivers/net/team/team_core.c
+@@ -933,7 +933,7 @@ static bool team_port_find(const struct team *team,
+  * Enable/disable port by adding to enabled port hashlist and setting
+  * port->index (Might be racy so reader could see incorrect ifindex when
+  * processing a flying packet, but that is not a problem). Write guarded
+- * by team->lock.
++ * by RTNL.
+  */
+ static void team_port_enable(struct team *team,
+ 			     struct team_port *port)
+@@ -1660,8 +1660,6 @@ static int team_init(struct net_device *dev)
+ 		goto err_options_register;
+ 	netif_carrier_off(dev);
+ 
+-	lockdep_register_key(&team->team_lock_key);
+-	__mutex_init(&team->lock, "team->team_lock_key", &team->team_lock_key);
+ 	netdev_lockdep_set_classes(dev);
+ 
+ 	return 0;
+@@ -1682,7 +1680,8 @@ static void team_uninit(struct net_device *dev)
+ 	struct team_port *port;
+ 	struct team_port *tmp;
+ 
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	list_for_each_entry_safe(port, tmp, &team->port_list, list)
+ 		team_port_del(team, port->dev);
+ 
+@@ -1691,9 +1690,7 @@ static void team_uninit(struct net_device *dev)
+ 	team_mcast_rejoin_fini(team);
+ 	team_notify_peers_fini(team);
+ 	team_queue_override_fini(team);
+-	mutex_unlock(&team->lock);
+ 	netdev_change_features(dev);
+-	lockdep_unregister_key(&team->team_lock_key);
+ }
+ 
+ static void team_destructor(struct net_device *dev)
+@@ -1778,7 +1775,8 @@ static void team_change_rx_flags(struct net_device *dev, int change)
+ 	struct team_port *port;
+ 	int inc;
+ 
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	list_for_each_entry(port, &team->port_list, list) {
+ 		if (change & IFF_PROMISC) {
+ 			inc = dev->flags & IFF_PROMISC ? 1 : -1;
+@@ -1789,7 +1787,6 @@ static void team_change_rx_flags(struct net_device *dev, int change)
+ 			dev_set_allmulti(port->dev, inc);
+ 		}
+ 	}
+-	mutex_unlock(&team->lock);
+ }
+ 
+ static void team_set_rx_mode(struct net_device *dev)
+@@ -1811,14 +1808,14 @@ static int team_set_mac_address(struct net_device *dev, void *p)
+ 	struct team *team = netdev_priv(dev);
+ 	struct team_port *port;
+ 
++	ASSERT_RTNL();
++
+ 	if (dev->type == ARPHRD_ETHER && !is_valid_ether_addr(addr->sa_data))
+ 		return -EADDRNOTAVAIL;
+ 	dev_addr_set(dev, addr->sa_data);
+-	mutex_lock(&team->lock);
+ 	list_for_each_entry(port, &team->port_list, list)
+ 		if (team->ops.port_change_dev_addr)
+ 			team->ops.port_change_dev_addr(team, port);
+-	mutex_unlock(&team->lock);
+ 	return 0;
+ }
+ 
+@@ -1828,11 +1825,8 @@ static int team_change_mtu(struct net_device *dev, int new_mtu)
+ 	struct team_port *port;
+ 	int err;
+ 
+-	/*
+-	 * Alhough this is reader, it's guarded by team lock. It's not possible
+-	 * to traverse list in reverse under rcu_read_lock
+-	 */
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	team->port_mtu_change_allowed = true;
+ 	list_for_each_entry(port, &team->port_list, list) {
+ 		err = dev_set_mtu(port->dev, new_mtu);
+@@ -1843,7 +1837,6 @@ static int team_change_mtu(struct net_device *dev, int new_mtu)
+ 		}
+ 	}
+ 	team->port_mtu_change_allowed = false;
+-	mutex_unlock(&team->lock);
+ 
+ 	WRITE_ONCE(dev->mtu, new_mtu);
+ 
+@@ -1853,7 +1846,6 @@ static int team_change_mtu(struct net_device *dev, int new_mtu)
+ 	list_for_each_entry_continue_reverse(port, &team->port_list, list)
+ 		dev_set_mtu(port->dev, dev->mtu);
+ 	team->port_mtu_change_allowed = false;
+-	mutex_unlock(&team->lock);
+ 
+ 	return err;
+ }
+@@ -1903,24 +1895,19 @@ static int team_vlan_rx_add_vid(struct net_device *dev, __be16 proto, u16 vid)
+ 	struct team_port *port;
+ 	int err;
+ 
+-	/*
+-	 * Alhough this is reader, it's guarded by team lock. It's not possible
+-	 * to traverse list in reverse under rcu_read_lock
+-	 */
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	list_for_each_entry(port, &team->port_list, list) {
+ 		err = vlan_vid_add(port->dev, proto, vid);
+ 		if (err)
+ 			goto unwind;
+ 	}
+-	mutex_unlock(&team->lock);
+ 
+ 	return 0;
+ 
+ unwind:
+ 	list_for_each_entry_continue_reverse(port, &team->port_list, list)
+ 		vlan_vid_del(port->dev, proto, vid);
+-	mutex_unlock(&team->lock);
+ 
+ 	return err;
+ }
+@@ -1930,10 +1917,10 @@ static int team_vlan_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid)
+ 	struct team *team = netdev_priv(dev);
+ 	struct team_port *port;
+ 
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	list_for_each_entry(port, &team->port_list, list)
+ 		vlan_vid_del(port->dev, proto, vid);
+-	mutex_unlock(&team->lock);
+ 
+ 	return 0;
+ }
+@@ -1955,9 +1942,9 @@ static void team_netpoll_cleanup(struct net_device *dev)
+ {
+ 	struct team *team = netdev_priv(dev);
+ 
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	__team_netpoll_cleanup(team);
+-	mutex_unlock(&team->lock);
+ }
+ 
+ static int team_netpoll_setup(struct net_device *dev)
+@@ -1966,7 +1953,8 @@ static int team_netpoll_setup(struct net_device *dev)
+ 	struct team_port *port;
+ 	int err = 0;
+ 
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	list_for_each_entry(port, &team->port_list, list) {
+ 		err = __team_port_enable_netpoll(port);
+ 		if (err) {
+@@ -1974,7 +1962,6 @@ static int team_netpoll_setup(struct net_device *dev)
+ 			break;
+ 		}
+ 	}
+-	mutex_unlock(&team->lock);
+ 	return err;
+ }
+ #endif
+@@ -1985,9 +1972,9 @@ static int team_add_slave(struct net_device *dev, struct net_device *port_dev,
+ 	struct team *team = netdev_priv(dev);
+ 	int err;
+ 
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	err = team_port_add(team, port_dev, extack);
+-	mutex_unlock(&team->lock);
+ 
+ 	if (!err)
+ 		netdev_change_features(dev);
+@@ -2000,18 +1987,13 @@ static int team_del_slave(struct net_device *dev, struct net_device *port_dev)
+ 	struct team *team = netdev_priv(dev);
+ 	int err;
+ 
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	err = team_port_del(team, port_dev);
+-	mutex_unlock(&team->lock);
+ 
+ 	if (err)
+ 		return err;
+ 
+-	if (netif_is_team_master(port_dev)) {
+-		lockdep_unregister_key(&team->team_lock_key);
+-		lockdep_register_key(&team->team_lock_key);
+-		lockdep_set_class(&team->lock, &team->team_lock_key);
+-	}
+ 	netdev_change_features(dev);
+ 
+ 	return err;
+@@ -2304,9 +2286,10 @@ int team_nl_noop_doit(struct sk_buff *skb, struct genl_info *info)
+ static struct team *team_nl_team_get(struct genl_info *info)
+ {
+ 	struct net *net = genl_info_net(info);
+-	int ifindex;
+ 	struct net_device *dev;
+-	struct team *team;
++	int ifindex;
++
++	ASSERT_RTNL();
+ 
+ 	if (!info->attrs[TEAM_ATTR_TEAM_IFINDEX])
+ 		return NULL;
+@@ -2318,14 +2301,11 @@ static struct team *team_nl_team_get(struct genl_info *info)
+ 		return NULL;
+ 	}
+ 
+-	team = netdev_priv(dev);
+-	mutex_lock(&team->lock);
+-	return team;
++	return netdev_priv(dev);
+ }
+ 
+ static void team_nl_team_put(struct team *team)
+ {
+-	mutex_unlock(&team->lock);
+ 	dev_put(team->dev);
+ }
+ 
+@@ -2515,9 +2495,13 @@ int team_nl_options_get_doit(struct sk_buff *skb, struct genl_info *info)
+ 	int err;
+ 	LIST_HEAD(sel_opt_inst_list);
+ 
++	rtnl_lock();
++
+ 	team = team_nl_team_get(info);
+-	if (!team)
+-		return -EINVAL;
++	if (!team) {
++		err = -EINVAL;
++		goto rtnl_unlock;
++	}
+ 
+ 	list_for_each_entry(opt_inst, &team->option_inst_list, list)
+ 		list_add_tail(&opt_inst->tmp_list, &sel_opt_inst_list);
+@@ -2527,6 +2511,9 @@ int team_nl_options_get_doit(struct sk_buff *skb, struct genl_info *info)
+ 
+ 	team_nl_team_put(team);
+ 
++rtnl_unlock:
++	rtnl_unlock();
++
+ 	return err;
+ }
+ 
+@@ -2805,15 +2792,22 @@ int team_nl_port_list_get_doit(struct sk_buff *skb,
+ 	struct team *team;
+ 	int err;
+ 
++	rtnl_lock();
++
+ 	team = team_nl_team_get(info);
+-	if (!team)
+-		return -EINVAL;
++	if (!team) {
++		err = -EINVAL;
++		goto rtnl_unlock;
++	}
+ 
+ 	err = team_nl_send_port_list_get(team, info->snd_portid, info->snd_seq,
+ 					 NLM_F_ACK, team_nl_send_unicast, NULL);
+ 
+ 	team_nl_team_put(team);
+ 
++rtnl_unlock:
++	rtnl_unlock();
++
+ 	return err;
+ }
+ 
+@@ -2961,11 +2955,9 @@ static void __team_port_change_port_removed(struct team_port *port)
+ 
+ static void team_port_change_check(struct team_port *port, bool linkup)
+ {
+-	struct team *team = port->team;
++	ASSERT_RTNL();
+ 
+-	mutex_lock(&team->lock);
+ 	__team_port_change_check(port, linkup);
+-	mutex_unlock(&team->lock);
+ }
+ 
+ 
+diff --git a/drivers/net/team/team_mode_activebackup.c b/drivers/net/team/team_mode_activebackup.c
+index e0f599e2a51dd6..1c3336c7a1b26e 100644
+--- a/drivers/net/team/team_mode_activebackup.c
++++ b/drivers/net/team/team_mode_activebackup.c
+@@ -67,8 +67,7 @@ static void ab_active_port_get(struct team *team, struct team_gsetter_ctx *ctx)
+ {
+ 	struct team_port *active_port;
+ 
+-	active_port = rcu_dereference_protected(ab_priv(team)->active_port,
+-						lockdep_is_held(&team->lock));
++	active_port = rtnl_dereference(ab_priv(team)->active_port);
+ 	if (active_port)
+ 		ctx->data.u32_val = active_port->dev->ifindex;
+ 	else
+diff --git a/drivers/net/team/team_mode_loadbalance.c b/drivers/net/team/team_mode_loadbalance.c
+index 00f8989c29c0ff..b14538bde2f824 100644
+--- a/drivers/net/team/team_mode_loadbalance.c
++++ b/drivers/net/team/team_mode_loadbalance.c
+@@ -301,8 +301,7 @@ static int lb_bpf_func_set(struct team *team, struct team_gsetter_ctx *ctx)
+ 	if (lb_priv->ex->orig_fprog) {
+ 		/* Clear old filter data */
+ 		__fprog_destroy(lb_priv->ex->orig_fprog);
+-		orig_fp = rcu_dereference_protected(lb_priv->fp,
+-						lockdep_is_held(&team->lock));
++		orig_fp = rtnl_dereference(lb_priv->fp);
+ 	}
+ 
+ 	rcu_assign_pointer(lb_priv->fp, fp);
+@@ -324,8 +323,7 @@ static void lb_bpf_func_free(struct team *team)
+ 		return;
+ 
+ 	__fprog_destroy(lb_priv->ex->orig_fprog);
+-	fp = rcu_dereference_protected(lb_priv->fp,
+-				       lockdep_is_held(&team->lock));
++	fp = rtnl_dereference(lb_priv->fp);
+ 	bpf_prog_destroy(fp);
+ }
+ 
+@@ -335,8 +333,7 @@ static void lb_tx_method_get(struct team *team, struct team_gsetter_ctx *ctx)
+ 	lb_select_tx_port_func_t *func;
+ 	char *name;
+ 
+-	func = rcu_dereference_protected(lb_priv->select_tx_port_func,
+-					 lockdep_is_held(&team->lock));
++	func = rtnl_dereference(lb_priv->select_tx_port_func);
+ 	name = lb_select_tx_port_get_name(func);
+ 	BUG_ON(!name);
+ 	ctx->data.str_val = name;
+@@ -478,7 +475,7 @@ static void lb_stats_refresh(struct work_struct *work)
+ 	team = lb_priv_ex->team;
+ 	lb_priv = get_lb_priv(team);
+ 
+-	if (!mutex_trylock(&team->lock)) {
++	if (!rtnl_trylock()) {
+ 		schedule_delayed_work(&lb_priv_ex->stats.refresh_dw, 0);
+ 		return;
+ 	}
+@@ -515,7 +512,7 @@ static void lb_stats_refresh(struct work_struct *work)
+ 	schedule_delayed_work(&lb_priv_ex->stats.refresh_dw,
+ 			      (lb_priv_ex->stats.refresh_interval * HZ) / 10);
+ 
+-	mutex_unlock(&team->lock);
++	rtnl_unlock();
+ }
+ 
+ static void lb_stats_refresh_interval_get(struct team *team,
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index c39dfa17813a3f..ef442a875b2d45 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -1113,6 +1113,9 @@ static void __handle_link_change(struct usbnet *dev)
+ 	if (!test_bit(EVENT_DEV_OPEN, &dev->flags))
+ 		return;
+ 
++	if (test_and_clear_bit(EVENT_LINK_CARRIER_ON, &dev->flags))
++		netif_carrier_on(dev->net);
++
+ 	if (!netif_carrier_ok(dev->net)) {
+ 		/* kill URBs for reading packets to save bus bandwidth */
+ 		unlink_urbs(dev, &dev->rxq);
+@@ -2009,10 +2012,12 @@ EXPORT_SYMBOL(usbnet_manage_power);
+ void usbnet_link_change(struct usbnet *dev, bool link, bool need_reset)
+ {
+ 	/* update link after link is reseted */
+-	if (link && !need_reset)
+-		netif_carrier_on(dev->net);
+-	else
++	if (link && !need_reset) {
++		set_bit(EVENT_LINK_CARRIER_ON, &dev->flags);
++	} else {
++		clear_bit(EVENT_LINK_CARRIER_ON, &dev->flags);
+ 		netif_carrier_off(dev->net);
++	}
+ 
+ 	if (need_reset && link)
+ 		usbnet_defer_kevent(dev, EVENT_LINK_RESET);
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 7168b33adadb0d..8b12b3ae580d99 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1304,6 +1304,8 @@ static void vrf_ip6_input_dst(struct sk_buff *skb, struct net_device *vrf_dev,
+ 	struct net *net = dev_net(vrf_dev);
+ 	struct rt6_info *rt6;
+ 
++	skb_dst_drop(skb);
++
+ 	rt6 = vrf_ip6_route_lookup(net, vrf_dev, &fl6, ifindex, skb,
+ 				   RT6_LOOKUP_F_HAS_SADDR | RT6_LOOKUP_F_IFACE);
+ 	if (unlikely(!rt6))
+diff --git a/drivers/net/wireless/ath/ath11k/hal.c b/drivers/net/wireless/ath/ath11k/hal.c
+index 8cb1505a5a0c3f..cab11a35f9115d 100644
+--- a/drivers/net/wireless/ath/ath11k/hal.c
++++ b/drivers/net/wireless/ath/ath11k/hal.c
+@@ -1346,6 +1346,10 @@ EXPORT_SYMBOL(ath11k_hal_srng_init);
+ void ath11k_hal_srng_deinit(struct ath11k_base *ab)
+ {
+ 	struct ath11k_hal *hal = &ab->hal;
++	int i;
++
++	for (i = 0; i < HAL_SRNG_RING_ID_MAX; i++)
++		ab->hal.srng_list[i].initialized = 0;
+ 
+ 	ath11k_hal_unregister_srng_key(ab);
+ 	ath11k_hal_free_cont_rdp(ab);
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 4763b271309aa2..9514e95d50201e 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -8734,9 +8734,9 @@ ath11k_mac_op_set_bitrate_mask(struct ieee80211_hw *hw,
+ 				    arvif->vdev_id, ret);
+ 			return ret;
+ 		}
+-		ieee80211_iterate_stations_atomic(ar->hw,
+-						  ath11k_mac_disable_peer_fixed_rate,
+-						  arvif);
++		ieee80211_iterate_stations_mtx(ar->hw,
++					       ath11k_mac_disable_peer_fixed_rate,
++					       arvif);
+ 	} else if (ath11k_mac_bitrate_mask_get_single_nss(ar, arvif, band, mask,
+ 							  &single_nss)) {
+ 		rate = WMI_FIXED_RATE_NONE;
+@@ -8803,9 +8803,9 @@ ath11k_mac_op_set_bitrate_mask(struct ieee80211_hw *hw,
+ 		}
+ 
+ 		mutex_lock(&ar->conf_mutex);
+-		ieee80211_iterate_stations_atomic(ar->hw,
+-						  ath11k_mac_disable_peer_fixed_rate,
+-						  arvif);
++		ieee80211_iterate_stations_mtx(ar->hw,
++					       ath11k_mac_disable_peer_fixed_rate,
++					       arvif);
+ 
+ 		arvif->bitrate_mask = *mask;
+ 		ieee80211_iterate_stations_atomic(ar->hw,
+diff --git a/drivers/net/wireless/ath/ath12k/dp.h b/drivers/net/wireless/ath/ath12k/dp.h
+index e8dbba0c3bb7d4..4003e81df535ac 100644
+--- a/drivers/net/wireless/ath/ath12k/dp.h
++++ b/drivers/net/wireless/ath/ath12k/dp.h
+@@ -425,6 +425,7 @@ enum htt_h2t_msg_type {
+ };
+ 
+ #define HTT_VER_REQ_INFO_MSG_ID		GENMASK(7, 0)
++#define HTT_OPTION_TCL_METADATA_VER_V1	1
+ #define HTT_OPTION_TCL_METADATA_VER_V2	2
+ #define HTT_OPTION_TAG			GENMASK(7, 0)
+ #define HTT_OPTION_LEN			GENMASK(15, 8)
+diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c
+index 826c9723a7a68d..340a7b3474b111 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_mon.c
++++ b/drivers/net/wireless/ath/ath12k/dp_mon.c
+@@ -3528,7 +3528,6 @@ int ath12k_dp_mon_srng_process(struct ath12k *ar, int *budget,
+ 	ath12k_hal_srng_access_begin(ab, srng);
+ 
+ 	while (likely(*budget)) {
+-		*budget -= 1;
+ 		mon_dst_desc = ath12k_hal_srng_dst_peek(ab, srng);
+ 		if (unlikely(!mon_dst_desc))
+ 			break;
+diff --git a/drivers/net/wireless/ath/ath12k/dp_tx.c b/drivers/net/wireless/ath/ath12k/dp_tx.c
+index f82d2c58eff3f6..5e741b221d8767 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_tx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_tx.c
+@@ -12,10 +12,9 @@
+ #include "mac.h"
+ 
+ static enum hal_tcl_encap_type
+-ath12k_dp_tx_get_encap_type(struct ath12k_link_vif *arvif, struct sk_buff *skb)
++ath12k_dp_tx_get_encap_type(struct ath12k_base *ab, struct sk_buff *skb)
+ {
+ 	struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+-	struct ath12k_base *ab = arvif->ar->ab;
+ 
+ 	if (test_bit(ATH12K_FLAG_RAW_MODE, &ab->dev_flags))
+ 		return HAL_TCL_ENCAP_TYPE_RAW;
+@@ -302,7 +301,7 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_link_vif *arvif,
+ 			u32_encode_bits(mcbc_gsn, HTT_TCL_META_DATA_GLOBAL_SEQ_NUM);
+ 	}
+ 
+-	ti.encap_type = ath12k_dp_tx_get_encap_type(arvif, skb);
++	ti.encap_type = ath12k_dp_tx_get_encap_type(ab, skb);
+ 	ti.addr_search_flags = arvif->hal_addr_search_flags;
+ 	ti.search_type = arvif->search_type;
+ 	ti.type = HAL_TCL_DESC_TYPE_BUFFER;
+@@ -1108,6 +1107,7 @@ int ath12k_dp_tx_htt_h2t_ver_req_msg(struct ath12k_base *ab)
+ 	struct sk_buff *skb;
+ 	struct htt_ver_req_cmd *cmd;
+ 	int len = sizeof(*cmd);
++	u32 metadata_version;
+ 	int ret;
+ 
+ 	init_completion(&dp->htt_tgt_version_received);
+@@ -1120,12 +1120,14 @@ int ath12k_dp_tx_htt_h2t_ver_req_msg(struct ath12k_base *ab)
+ 	cmd = (struct htt_ver_req_cmd *)skb->data;
+ 	cmd->ver_reg_info = le32_encode_bits(HTT_H2T_MSG_TYPE_VERSION_REQ,
+ 					     HTT_OPTION_TAG);
++	metadata_version = ath12k_ftm_mode ? HTT_OPTION_TCL_METADATA_VER_V1 :
++			   HTT_OPTION_TCL_METADATA_VER_V2;
+ 
+ 	cmd->tcl_metadata_version = le32_encode_bits(HTT_TAG_TCL_METADATA_VERSION,
+ 						     HTT_OPTION_TAG) |
+ 				    le32_encode_bits(HTT_TCL_METADATA_VER_SZ,
+ 						     HTT_OPTION_LEN) |
+-				    le32_encode_bits(HTT_OPTION_TCL_METADATA_VER_V2,
++				    le32_encode_bits(metadata_version,
+ 						     HTT_OPTION_VALUE);
+ 
+ 	ret = ath12k_htc_send(&ab->htc, dp->eid, skb);
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index d1d3c9f34372da..029376c574967a 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -685,6 +685,9 @@ static void ath12k_get_arvif_iter(void *data, u8 *mac,
+ 		if (WARN_ON(!arvif))
+ 			continue;
+ 
++		if (!arvif->is_created)
++			continue;
++
+ 		if (arvif->vdev_id == arvif_iter->vdev_id &&
+ 		    arvif->ar == arvif_iter->ar) {
+ 			arvif_iter->arvif = arvif;
+@@ -1844,7 +1847,7 @@ static void ath12k_mac_handle_beacon_iter(void *data, u8 *mac,
+ 	struct ath12k_vif *ahvif = ath12k_vif_to_ahvif(vif);
+ 	struct ath12k_link_vif *arvif = &ahvif->deflink;
+ 
+-	if (vif->type != NL80211_IFTYPE_STATION)
++	if (vif->type != NL80211_IFTYPE_STATION || !arvif->is_created)
+ 		return;
+ 
+ 	if (!ether_addr_equal(mgmt->bssid, vif->bss_conf.bssid))
+@@ -1867,16 +1870,16 @@ static void ath12k_mac_handle_beacon_miss_iter(void *data, u8 *mac,
+ 	u32 *vdev_id = data;
+ 	struct ath12k_vif *ahvif = ath12k_vif_to_ahvif(vif);
+ 	struct ath12k_link_vif *arvif = &ahvif->deflink;
+-	struct ath12k *ar = arvif->ar;
+-	struct ieee80211_hw *hw = ath12k_ar_to_hw(ar);
++	struct ieee80211_hw *hw;
+ 
+-	if (arvif->vdev_id != *vdev_id)
++	if (!arvif->is_created || arvif->vdev_id != *vdev_id)
+ 		return;
+ 
+ 	if (!arvif->is_up)
+ 		return;
+ 
+ 	ieee80211_beacon_loss(vif);
++	hw = ath12k_ar_to_hw(arvif->ar);
+ 
+ 	/* Firmware doesn't report beacon loss events repeatedly. If AP probe
+ 	 * (done by mac80211) succeeds but beacons do not resume then it
+@@ -3312,6 +3315,7 @@ static void ath12k_bss_assoc(struct ath12k *ar,
+ 
+ 	rcu_read_unlock();
+ 
++	peer_arg->is_assoc = true;
+ 	ret = ath12k_wmi_send_peer_assoc_cmd(ar, peer_arg);
+ 	if (ret) {
+ 		ath12k_warn(ar->ab, "failed to run peer assoc for %pM vdev %i: %d\n",
+@@ -5084,6 +5088,8 @@ static int ath12k_mac_station_assoc(struct ath12k *ar,
+ 			    "invalid peer NSS %d\n", peer_arg->peer_nss);
+ 		return -EINVAL;
+ 	}
++
++	peer_arg->is_assoc = true;
+ 	ret = ath12k_wmi_send_peer_assoc_cmd(ar, peer_arg);
+ 	if (ret) {
+ 		ath12k_warn(ar->ab, "failed to run peer assoc for STA %pM vdev %i: %d\n",
+@@ -5330,6 +5336,7 @@ static void ath12k_sta_rc_update_wk(struct wiphy *wiphy, struct wiphy_work *wk)
+ 			ath12k_peer_assoc_prepare(ar, arvif, arsta,
+ 						  peer_arg, true);
+ 
++			peer_arg->is_assoc = false;
+ 			err = ath12k_wmi_send_peer_assoc_cmd(ar, peer_arg);
+ 			if (err)
+ 				ath12k_warn(ar->ab, "failed to run peer assoc for STA %pM vdev %i: %d\n",
+@@ -7682,14 +7689,9 @@ static int ath12k_mac_start(struct ath12k *ar)
+ 
+ static void ath12k_drain_tx(struct ath12k_hw *ah)
+ {
+-	struct ath12k *ar = ah->radio;
++	struct ath12k *ar;
+ 	int i;
+ 
+-	if (ath12k_ftm_mode) {
+-		ath12k_err(ar->ab, "fail to start mac operations in ftm mode\n");
+-		return;
+-	}
+-
+ 	lockdep_assert_wiphy(ah->hw->wiphy);
+ 
+ 	for_each_ar(ah, ar, i)
+@@ -7702,6 +7704,9 @@ static int ath12k_mac_op_start(struct ieee80211_hw *hw)
+ 	struct ath12k *ar;
+ 	int ret, i;
+ 
++	if (ath12k_ftm_mode)
++		return -EPERM;
++
+ 	lockdep_assert_wiphy(hw->wiphy);
+ 
+ 	ath12k_drain_tx(ah);
+@@ -9165,7 +9170,7 @@ ath12k_mac_change_chanctx_cnt_iter(void *data, u8 *mac,
+ 		if (WARN_ON(!arvif))
+ 			continue;
+ 
+-		if (arvif->ar != arg->ar)
++		if (!arvif->is_created || arvif->ar != arg->ar)
+ 			continue;
+ 
+ 		link_conf = wiphy_dereference(ahvif->ah->hw->wiphy,
+@@ -9200,7 +9205,7 @@ ath12k_mac_change_chanctx_fill_iter(void *data, u8 *mac,
+ 		if (WARN_ON(!arvif))
+ 			continue;
+ 
+-		if (arvif->ar != arg->ar)
++		if (!arvif->is_created || arvif->ar != arg->ar)
+ 			continue;
+ 
+ 		link_conf = wiphy_dereference(ahvif->ah->hw->wiphy,
+diff --git a/drivers/net/wireless/ath/ath12k/p2p.c b/drivers/net/wireless/ath/ath12k/p2p.c
+index 84cccf7d91e72b..59589748f1a8c2 100644
+--- a/drivers/net/wireless/ath/ath12k/p2p.c
++++ b/drivers/net/wireless/ath/ath12k/p2p.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+  */
+ 
+ #include <net/mac80211.h>
+@@ -124,7 +125,7 @@ static void ath12k_p2p_noa_update_vdev_iter(void *data, u8 *mac,
+ 
+ 	WARN_ON(!rcu_read_lock_any_held());
+ 	arvif = &ahvif->deflink;
+-	if (arvif->ar != arg->ar || arvif->vdev_id != arg->vdev_id)
++	if (!arvif->is_created || arvif->ar != arg->ar || arvif->vdev_id != arg->vdev_id)
+ 		return;
+ 
+ 	ath12k_p2p_noa_update(arvif, arg->noa);
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index a44fc9106634b6..9ebe4b573f7e30 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -2136,7 +2136,7 @@ static void ath12k_wmi_copy_peer_flags(struct wmi_peer_assoc_complete_cmd *cmd,
+ 		cmd->peer_flags |= cpu_to_le32(WMI_PEER_AUTH);
+ 	if (arg->need_ptk_4_way) {
+ 		cmd->peer_flags |= cpu_to_le32(WMI_PEER_NEED_PTK_4_WAY);
+-		if (!hw_crypto_disabled)
++		if (!hw_crypto_disabled && arg->is_assoc)
+ 			cmd->peer_flags &= cpu_to_le32(~WMI_PEER_AUTH);
+ 	}
+ 	if (arg->need_gtk_2_way)
+@@ -6829,7 +6829,7 @@ static int ath12k_wmi_tlv_services_parser(struct ath12k_base *ab,
+ 					  void *data)
+ {
+ 	const struct wmi_service_available_event *ev;
+-	u32 *wmi_ext2_service_bitmap;
++	__le32 *wmi_ext2_service_bitmap;
+ 	int i, j;
+ 	u16 expected_len;
+ 
+@@ -6861,12 +6861,12 @@ static int ath12k_wmi_tlv_services_parser(struct ath12k_base *ab,
+ 			   ev->wmi_service_segment_bitmap[3]);
+ 		break;
+ 	case WMI_TAG_ARRAY_UINT32:
+-		wmi_ext2_service_bitmap = (u32 *)ptr;
++		wmi_ext2_service_bitmap = (__le32 *)ptr;
+ 		for (i = 0, j = WMI_MAX_EXT_SERVICE;
+ 		     i < WMI_SERVICE_SEGMENT_BM_SIZE32 && j < WMI_MAX_EXT2_SERVICE;
+ 		     i++) {
+ 			do {
+-				if (wmi_ext2_service_bitmap[i] &
++				if (__le32_to_cpu(wmi_ext2_service_bitmap[i]) &
+ 				    BIT(j % WMI_AVAIL_SERVICE_BITS_IN_SIZE32))
+ 					set_bit(j, ab->wmi_ab.svc_map);
+ 			} while (++j % WMI_AVAIL_SERVICE_BITS_IN_SIZE32);
+@@ -6874,8 +6874,10 @@ static int ath12k_wmi_tlv_services_parser(struct ath12k_base *ab,
+ 
+ 		ath12k_dbg(ab, ATH12K_DBG_WMI,
+ 			   "wmi_ext2_service_bitmap 0x%04x 0x%04x 0x%04x 0x%04x",
+-			   wmi_ext2_service_bitmap[0], wmi_ext2_service_bitmap[1],
+-			   wmi_ext2_service_bitmap[2], wmi_ext2_service_bitmap[3]);
++			   __le32_to_cpu(wmi_ext2_service_bitmap[0]),
++			   __le32_to_cpu(wmi_ext2_service_bitmap[1]),
++			   __le32_to_cpu(wmi_ext2_service_bitmap[2]),
++			   __le32_to_cpu(wmi_ext2_service_bitmap[3]));
+ 		break;
+ 	}
+ 	return 0;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index 4b70845e1a2643..075b99478e656b 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -1545,10 +1545,6 @@ brcmf_cfg80211_scan(struct wiphy *wiphy, struct cfg80211_scan_request *request)
+ 		return -EAGAIN;
+ 	}
+ 
+-	/* If scan req comes for p2p0, send it over primary I/F */
+-	if (vif == cfg->p2p.bss_idx[P2PAPI_BSSCFG_DEVICE].vif)
+-		vif = cfg->p2p.bss_idx[P2PAPI_BSSCFG_PRIMARY].vif;
+-
+ 	brcmf_dbg(SCAN, "START ESCAN\n");
+ 
+ 	cfg->scan_request = request;
+@@ -1564,6 +1560,10 @@ brcmf_cfg80211_scan(struct wiphy *wiphy, struct cfg80211_scan_request *request)
+ 	if (err)
+ 		goto scan_out;
+ 
++	/* If scan req comes for p2p0, send it over primary I/F */
++	if (vif == cfg->p2p.bss_idx[P2PAPI_BSSCFG_DEVICE].vif)
++		vif = cfg->p2p.bss_idx[P2PAPI_BSSCFG_PRIMARY].vif;
++
+ 	err = brcmf_do_escan(vif->ifp, request);
+ 	if (err)
+ 		goto scan_out;
+diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/main.c b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
+index a7f9e244c0975e..59c13c40bb83a6 100644
+--- a/drivers/net/wireless/intel/iwlwifi/dvm/main.c
++++ b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
+@@ -1048,9 +1048,11 @@ static void iwl_bg_restart(struct work_struct *data)
+  *
+  *****************************************************************************/
+ 
+-static void iwl_setup_deferred_work(struct iwl_priv *priv)
++static int iwl_setup_deferred_work(struct iwl_priv *priv)
+ {
+ 	priv->workqueue = alloc_ordered_workqueue(DRV_NAME, 0);
++	if (!priv->workqueue)
++		return -ENOMEM;
+ 
+ 	INIT_WORK(&priv->restart, iwl_bg_restart);
+ 	INIT_WORK(&priv->beacon_update, iwl_bg_beacon_update);
+@@ -1067,6 +1069,8 @@ static void iwl_setup_deferred_work(struct iwl_priv *priv)
+ 	timer_setup(&priv->statistics_periodic, iwl_bg_statistics_periodic, 0);
+ 
+ 	timer_setup(&priv->ucode_trace, iwl_bg_ucode_trace, 0);
++
++	return 0;
+ }
+ 
+ void iwl_cancel_deferred_work(struct iwl_priv *priv)
+@@ -1464,7 +1468,10 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans,
+ 	/********************
+ 	 * 6. Setup services
+ 	 ********************/
+-	iwl_setup_deferred_work(priv);
++	err = iwl_setup_deferred_work(priv);
++	if (err)
++		goto out_uninit_drv;
++
+ 	iwl_setup_rx_handlers(priv);
+ 
+ 	iwl_power_initialize(priv);
+@@ -1503,6 +1510,7 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans,
+ 	iwl_cancel_deferred_work(priv);
+ 	destroy_workqueue(priv->workqueue);
+ 	priv->workqueue = NULL;
++out_uninit_drv:
+ 	iwl_uninit_drv(priv);
+ out_free_eeprom_blob:
+ 	kfree(priv->eeprom_blob);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/rx.c b/drivers/net/wireless/intel/iwlwifi/mld/rx.c
+index c4f189bcece213..5a206a663470f9 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/rx.c
+@@ -1039,6 +1039,15 @@ static void iwl_mld_rx_eht(struct iwl_mld *mld, struct sk_buff *skb,
+ 			rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT;
+ 	}
+ 
++	/* update aggregation data for monitor sake on default queue */
++	if (!queue && (phy_info & IWL_RX_MPDU_PHY_TSF_OVERLOAD) &&
++	    (phy_info & IWL_RX_MPDU_PHY_AMPDU) && phy_data->first_subframe) {
++		rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT_KNOWN;
++		if (phy_data->data0 &
++		    cpu_to_le32(IWL_RX_PHY_DATA0_EHT_DELIM_EOF))
++			rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT;
++	}
++
+ 	if (phy_info & IWL_RX_MPDU_PHY_TSF_OVERLOAD)
+ 		iwl_mld_decode_eht_phy_data(mld, phy_data, rx_status, eht, usig);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 76603ef02704a6..15617cad967fa4 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -61,8 +61,10 @@ static int __init iwl_mvm_init(void)
+ 	}
+ 
+ 	ret = iwl_opmode_register("iwlmvm", &iwl_mvm_ops);
+-	if (ret)
++	if (ret) {
+ 		pr_err("Unable to register MVM op_mode: %d\n", ret);
++		iwl_mvm_rate_control_unregister();
++	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/net/wireless/marvell/mwl8k.c b/drivers/net/wireless/marvell/mwl8k.c
+index bab9ef37a1ab80..8bcb1d0dd61887 100644
+--- a/drivers/net/wireless/marvell/mwl8k.c
++++ b/drivers/net/wireless/marvell/mwl8k.c
+@@ -1227,6 +1227,10 @@ static int rxq_refill(struct ieee80211_hw *hw, int index, int limit)
+ 
+ 		addr = dma_map_single(&priv->pdev->dev, skb->data,
+ 				      MWL8K_RX_MAXSZ, DMA_FROM_DEVICE);
++		if (dma_mapping_error(&priv->pdev->dev, addr)) {
++			kfree_skb(skb);
++			break;
++		}
+ 
+ 		rxq->rxd_count++;
+ 		rx = rxq->tail++;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/main.c b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+index 5584bea9e2a3f8..45ef0f3091356a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+@@ -1061,7 +1061,7 @@ mt7996_mac_sta_add(struct mt76_phy *mphy, struct ieee80211_vif *vif,
+ 	struct mt7996_dev *dev = container_of(mdev, struct mt7996_dev, mt76);
+ 	struct mt7996_sta *msta = (struct mt7996_sta *)sta->drv_priv;
+ 	struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv;
+-	unsigned long links = sta->mlo ? sta->valid_links : BIT(0);
++	unsigned long links = sta->valid_links ? sta->valid_links : BIT(0);
+ 	int err;
+ 
+ 	mutex_lock(&mdev->mutex);
+@@ -1155,7 +1155,7 @@ mt7996_mac_sta_remove(struct mt76_phy *mphy, struct ieee80211_vif *vif,
+ {
+ 	struct mt76_dev *mdev = mphy->dev;
+ 	struct mt7996_dev *dev = container_of(mdev, struct mt7996_dev, mt76);
+-	unsigned long links = sta->mlo ? sta->valid_links : BIT(0);
++	unsigned long links = sta->valid_links ? sta->valid_links : BIT(0);
+ 
+ 	mutex_lock(&mdev->mutex);
+ 
+@@ -1216,10 +1216,17 @@ static void mt7996_tx(struct ieee80211_hw *hw,
+ 
+ 	if (vif) {
+ 		struct mt7996_vif *mvif = (void *)vif->drv_priv;
+-		struct mt76_vif_link *mlink;
++		struct mt76_vif_link *mlink = &mvif->deflink.mt76;
+ 
+-		mlink = rcu_dereference(mvif->mt76.link[link_id]);
+-		if (mlink && mlink->wcid)
++		if (link_id < IEEE80211_LINK_UNSPECIFIED)
++			mlink = rcu_dereference(mvif->mt76.link[link_id]);
++
++		if (!mlink) {
++			ieee80211_free_txskb(hw, skb);
++			goto unlock;
++		}
++
++		if (mlink->wcid)
+ 			wcid = mlink->wcid;
+ 
+ 		if (mvif->mt76.roc_phy &&
+@@ -1228,7 +1235,7 @@ static void mt7996_tx(struct ieee80211_hw *hw,
+ 			if (mphy->roc_link)
+ 				wcid = mphy->roc_link->wcid;
+ 		} else {
+-			mphy = mt76_vif_link_phy(&mvif->deflink.mt76);
++			mphy = mt76_vif_link_phy(mlink);
+ 		}
+ 	}
+ 
+@@ -1237,7 +1244,7 @@ static void mt7996_tx(struct ieee80211_hw *hw,
+ 		goto unlock;
+ 	}
+ 
+-	if (control->sta) {
++	if (control->sta && link_id < IEEE80211_LINK_UNSPECIFIED) {
+ 		struct mt7996_sta *msta = (void *)control->sta->drv_priv;
+ 		struct mt7996_sta_link *msta_link;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+index 63dc6df20c3e42..ce6e33d39d2282 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+@@ -2307,8 +2307,7 @@ mt7996_mcu_sta_mld_setup_tlv(struct mt7996_dev *dev, struct sk_buff *skb,
+ 
+ 	if (nlinks > 1) {
+ 		link_id = __ffs(links & ~BIT(msta->deflink_id));
+-		msta_link = mt76_dereference(msta->link[msta->deflink_id],
+-					     &dev->mt76);
++		msta_link = mt76_dereference(msta->link[link_id], &dev->mt76);
+ 		if (!msta_link)
+ 			return;
+ 	}
+diff --git a/drivers/net/wireless/purelifi/plfxlc/mac.c b/drivers/net/wireless/purelifi/plfxlc/mac.c
+index 82d1bf7edba20d..a7f5d287e369bd 100644
+--- a/drivers/net/wireless/purelifi/plfxlc/mac.c
++++ b/drivers/net/wireless/purelifi/plfxlc/mac.c
+@@ -99,11 +99,6 @@ int plfxlc_mac_init_hw(struct ieee80211_hw *hw)
+ 	return r;
+ }
+ 
+-void plfxlc_mac_release(struct plfxlc_mac *mac)
+-{
+-	plfxlc_chip_release(&mac->chip);
+-}
+-
+ int plfxlc_op_start(struct ieee80211_hw *hw)
+ {
+ 	plfxlc_hw_mac(hw)->chip.usb.initialized = 1;
+@@ -755,3 +750,9 @@ struct ieee80211_hw *plfxlc_mac_alloc_hw(struct usb_interface *intf)
+ 	SET_IEEE80211_DEV(hw, &intf->dev);
+ 	return hw;
+ }
++
++void plfxlc_mac_release_hw(struct ieee80211_hw *hw)
++{
++	plfxlc_chip_release(&plfxlc_hw_mac(hw)->chip);
++	ieee80211_free_hw(hw);
++}
+diff --git a/drivers/net/wireless/purelifi/plfxlc/mac.h b/drivers/net/wireless/purelifi/plfxlc/mac.h
+index 9384acddcf26a3..56da502999c1aa 100644
+--- a/drivers/net/wireless/purelifi/plfxlc/mac.h
++++ b/drivers/net/wireless/purelifi/plfxlc/mac.h
+@@ -168,7 +168,7 @@ static inline u8 *plfxlc_mac_get_perm_addr(struct plfxlc_mac *mac)
+ }
+ 
+ struct ieee80211_hw *plfxlc_mac_alloc_hw(struct usb_interface *intf);
+-void plfxlc_mac_release(struct plfxlc_mac *mac);
++void plfxlc_mac_release_hw(struct ieee80211_hw *hw);
+ 
+ int plfxlc_mac_preinit_hw(struct ieee80211_hw *hw, const u8 *hw_address);
+ int plfxlc_mac_init_hw(struct ieee80211_hw *hw);
+diff --git a/drivers/net/wireless/purelifi/plfxlc/usb.c b/drivers/net/wireless/purelifi/plfxlc/usb.c
+index c2a1234b59db6c..0817506021c360 100644
+--- a/drivers/net/wireless/purelifi/plfxlc/usb.c
++++ b/drivers/net/wireless/purelifi/plfxlc/usb.c
+@@ -604,7 +604,7 @@ static int probe(struct usb_interface *intf,
+ 	r = plfxlc_upload_mac_and_serial(intf, hw_address, serial_number);
+ 	if (r) {
+ 		dev_err(&intf->dev, "MAC and Serial upload failed (%d)\n", r);
+-		goto error;
++		goto error_free_hw;
+ 	}
+ 
+ 	chip->unit_type = STA;
+@@ -613,13 +613,13 @@ static int probe(struct usb_interface *intf,
+ 	r = plfxlc_mac_preinit_hw(hw, hw_address);
+ 	if (r) {
+ 		dev_err(&intf->dev, "Init mac failed (%d)\n", r);
+-		goto error;
++		goto error_free_hw;
+ 	}
+ 
+ 	r = ieee80211_register_hw(hw);
+ 	if (r) {
+ 		dev_err(&intf->dev, "Register device failed (%d)\n", r);
+-		goto error;
++		goto error_free_hw;
+ 	}
+ 
+ 	if ((le16_to_cpu(interface_to_usbdev(intf)->descriptor.idVendor) ==
+@@ -632,7 +632,7 @@ static int probe(struct usb_interface *intf,
+ 	}
+ 	if (r != 0) {
+ 		dev_err(&intf->dev, "FPGA download failed (%d)\n", r);
+-		goto error;
++		goto error_unreg_hw;
+ 	}
+ 
+ 	tx->mac_fifo_full = 0;
+@@ -642,21 +642,21 @@ static int probe(struct usb_interface *intf,
+ 	r = plfxlc_usb_init_hw(usb);
+ 	if (r < 0) {
+ 		dev_err(&intf->dev, "usb_init_hw failed (%d)\n", r);
+-		goto error;
++		goto error_unreg_hw;
+ 	}
+ 
+ 	msleep(PLF_MSLEEP_TIME);
+ 	r = plfxlc_chip_switch_radio(chip, PLFXLC_RADIO_ON);
+ 	if (r < 0) {
+ 		dev_dbg(&intf->dev, "chip_switch_radio_on failed (%d)\n", r);
+-		goto error;
++		goto error_unreg_hw;
+ 	}
+ 
+ 	msleep(PLF_MSLEEP_TIME);
+ 	r = plfxlc_chip_set_rate(chip, 8);
+ 	if (r < 0) {
+ 		dev_dbg(&intf->dev, "chip_set_rate failed (%d)\n", r);
+-		goto error;
++		goto error_unreg_hw;
+ 	}
+ 
+ 	msleep(PLF_MSLEEP_TIME);
+@@ -664,7 +664,7 @@ static int probe(struct usb_interface *intf,
+ 			    hw_address, ETH_ALEN, USB_REQ_MAC_WR);
+ 	if (r < 0) {
+ 		dev_dbg(&intf->dev, "MAC_WR failure (%d)\n", r);
+-		goto error;
++		goto error_unreg_hw;
+ 	}
+ 
+ 	plfxlc_chip_enable_rxtx(chip);
+@@ -691,12 +691,12 @@ static int probe(struct usb_interface *intf,
+ 	plfxlc_mac_init_hw(hw);
+ 	usb->initialized = true;
+ 	return 0;
++
++error_unreg_hw:
++	ieee80211_unregister_hw(hw);
++error_free_hw:
++	plfxlc_mac_release_hw(hw);
+ error:
+-	if (hw) {
+-		plfxlc_mac_release(plfxlc_hw_mac(hw));
+-		ieee80211_unregister_hw(hw);
+-		ieee80211_free_hw(hw);
+-	}
+ 	dev_err(&intf->dev, "pureLifi:Device error");
+ 	return r;
+ }
+@@ -730,8 +730,7 @@ static void disconnect(struct usb_interface *intf)
+ 	 */
+ 	usb_reset_device(interface_to_usbdev(intf));
+ 
+-	plfxlc_mac_release(mac);
+-	ieee80211_free_hw(hw);
++	plfxlc_mac_release_hw(hw);
+ }
+ 
+ static void plfxlc_usb_resume(struct plfxlc_usb *usb)
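
The plfxlc probe rework replaces one catch-all error label -- which unregistered and freed the hw even when registration had never happened -- with targeted labels, each releasing exactly what was acquired before the failing step. The general shape as a small compilable sketch (stand-in functions, not the driver's API):

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>

  static int do_step(int fail) { return fail ? -EIO : 0; }

  /* One error label per acquired resource, ordered so a jump releases
   * only what was actually set up before the failure. */
  static int probe(int fail_at)
  {
          void *hw = malloc(32);          /* alloc_hw() stand-in */
          int r;

          if (!hw)
                  return -ENOMEM;

          r = do_step(fail_at == 1);      /* pre-registration step */
          if (r)
                  goto error_free_hw;

          printf("registered\n");         /* register_hw() stand-in */

          r = do_step(fail_at == 2);      /* post-registration step */
          if (r)
                  goto error_unreg_hw;

          return 0;                       /* hw kept until disconnect() */

  error_unreg_hw:
          printf("unregistered\n");       /* unregister_hw() stand-in */
  error_free_hw:
          free(hw);                       /* release-and-free helper stand-in */
          return r;
  }

  int main(void)
  {
          probe(2);
          return 0;
  }
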
+diff --git a/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c b/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c
+index 220ac5bdf279a1..8a57d6c72335ef 100644
+--- a/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c
++++ b/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c
+@@ -1041,10 +1041,11 @@ static void rtl8187_stop(struct ieee80211_hw *dev, bool suspend)
+ 	rtl818x_iowrite8(priv, &priv->map->CONFIG4, reg | RTL818X_CONFIG4_VCOOFF);
+ 	rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
+ 
++	usb_kill_anchored_urbs(&priv->anchored);
++
+ 	while ((skb = skb_dequeue(&priv->b_tx_status.queue)))
+ 		dev_kfree_skb_any(skb);
+ 
+-	usb_kill_anchored_urbs(&priv->anchored);
+ 	mutex_unlock(&priv->conf_mutex);
+ 
+ 	if (!priv->is_rtl8187b)
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/core.c b/drivers/net/wireless/realtek/rtl8xxxu/core.c
+index 569856ca677f62..c6f69d87c38d41 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/core.c
+@@ -6617,7 +6617,7 @@ static int rtl8xxxu_submit_rx_urb(struct rtl8xxxu_priv *priv,
+ 		skb_size = fops->rx_agg_buf_size;
+ 		skb_size += (rx_desc_sz + sizeof(struct rtl8723au_phy_stats));
+ 	} else {
+-		skb_size = IEEE80211_MAX_FRAME_LEN;
++		skb_size = IEEE80211_MAX_FRAME_LEN + rx_desc_sz;
+ 	}
+ 
+ 	skb = __netdev_alloc_skb(NULL, skb_size, GFP_KERNEL);
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index bc2c1a5a30b379..c589727c525efb 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -349,7 +349,7 @@ int rtw_sta_add(struct rtw_dev *rtwdev, struct ieee80211_sta *sta,
+ 	struct rtw_vif *rtwvif = (struct rtw_vif *)vif->drv_priv;
+ 	int i;
+ 
+-	if (vif->type == NL80211_IFTYPE_STATION) {
++	if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) {
+ 		si->mac_id = rtwvif->mac_id;
+ 	} else {
+ 		si->mac_id = rtw_acquire_macid(rtwdev);
+@@ -386,7 +386,7 @@ void rtw_sta_remove(struct rtw_dev *rtwdev, struct ieee80211_sta *sta,
+ 
+ 	cancel_work_sync(&si->rc_work);
+ 
+-	if (vif->type != NL80211_IFTYPE_STATION)
++	if (vif->type != NL80211_IFTYPE_STATION || sta->tdls)
+ 		rtw_release_macid(rtwdev, si->mac_id);
+ 	if (fw_exist)
+ 		rtw_fw_media_status_report(rtwdev, si->mac_id, false);
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index cc9b014457acef..69546a0394942b 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -2110,6 +2110,11 @@ static void rtw89_core_cancel_6ghz_probe_tx(struct rtw89_dev *rtwdev,
+ 	if (rx_status->band != NL80211_BAND_6GHZ)
+ 		return;
+ 
++	if (unlikely(!(rtwdev->chip->support_bands & BIT(NL80211_BAND_6GHZ)))) {
++		rtw89_debug(rtwdev, RTW89_DBG_UNEXP, "invalid rx on unsupported 6 GHz\n");
++		return;
++	}
++
+ 	ssid_ie = cfg80211_find_ie(WLAN_EID_SSID, ies, skb->len);
+ 
+ 	list_for_each_entry(info, &pkt_list[NL80211_BAND_6GHZ], list) {
+diff --git a/drivers/net/wireless/realtek/rtw89/phy.c b/drivers/net/wireless/realtek/rtw89/phy.c
+index f4eee642e5ced5..4bdc6d9da62530 100644
+--- a/drivers/net/wireless/realtek/rtw89/phy.c
++++ b/drivers/net/wireless/realtek/rtw89/phy.c
+@@ -119,10 +119,12 @@ static u64 get_eht_mcs_ra_mask(u8 *max_nss, u8 start_mcs, u8 n_nss)
+ 	return mask;
+ }
+ 
+-static u64 get_eht_ra_mask(struct ieee80211_link_sta *link_sta)
++static u64 get_eht_ra_mask(struct rtw89_vif_link *rtwvif_link,
++			   struct ieee80211_link_sta *link_sta)
+ {
+-	struct ieee80211_sta_eht_cap *eht_cap = &link_sta->eht_cap;
++	struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+ 	struct ieee80211_eht_mcs_nss_supp_20mhz_only *mcs_nss_20mhz;
++	struct ieee80211_sta_eht_cap *eht_cap = &link_sta->eht_cap;
+ 	struct ieee80211_eht_mcs_nss_supp_bw *mcs_nss;
+ 	u8 *he_phy_cap = link_sta->he_cap.he_cap_elem.phy_cap_info;
+ 
+@@ -136,8 +138,8 @@ static u64 get_eht_ra_mask(struct ieee80211_link_sta *link_sta)
+ 		/* MCS 9, 11, 13 */
+ 		return get_eht_mcs_ra_mask(mcs_nss->rx_tx_max_nss, 9, 3);
+ 	case IEEE80211_STA_RX_BW_20:
+-		if (!(he_phy_cap[0] &
+-		      IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_MASK_ALL)) {
++		if (vif->type == NL80211_IFTYPE_AP &&
++		    !(he_phy_cap[0] & IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_MASK_ALL)) {
+ 			mcs_nss_20mhz = &eht_cap->eht_mcs_nss_supp.only_20mhz;
+ 			/* MCS 7, 9, 11, 13 */
+ 			return get_eht_mcs_ra_mask(mcs_nss_20mhz->rx_tx_max_nss, 7, 4);
+@@ -332,7 +334,7 @@ static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev,
+ 	/* Set the ra mask from sta's capability */
+ 	if (link_sta->eht_cap.has_eht) {
+ 		mode |= RTW89_RA_MODE_EHT;
+-		ra_mask |= get_eht_ra_mask(link_sta);
++		ra_mask |= get_eht_ra_mask(rtwvif_link, link_sta);
+ 
+ 		if (rtwdev->hal.no_mcs_12_13)
+ 			high_rate_masks = rtw89_ra_mask_eht_mcs0_11;
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 69b1ddff6731fc..90abc5c01377a5 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -1906,24 +1906,24 @@ static int __init nvmet_init(void)
+ 	if (!nvmet_wq)
+ 		goto out_free_buffered_work_queue;
+ 
+-	error = nvmet_init_discovery();
++	error = nvmet_init_debugfs();
+ 	if (error)
+ 		goto out_free_nvmet_work_queue;
+ 
+-	error = nvmet_init_debugfs();
++	error = nvmet_init_discovery();
+ 	if (error)
+-		goto out_exit_discovery;
++		goto out_exit_debugfs;
+ 
+ 	error = nvmet_init_configfs();
+ 	if (error)
+-		goto out_exit_debugfs;
++		goto out_exit_discovery;
+ 
+ 	return 0;
+ 
+-out_exit_debugfs:
+-	nvmet_exit_debugfs();
+ out_exit_discovery:
+ 	nvmet_exit_discovery();
++out_exit_debugfs:
++	nvmet_exit_debugfs();
+ out_free_nvmet_work_queue:
+ 	destroy_workqueue(nvmet_wq);
+ out_free_buffered_work_queue:
+@@ -1938,8 +1938,8 @@ static int __init nvmet_init(void)
+ static void __exit nvmet_exit(void)
+ {
+ 	nvmet_exit_configfs();
+-	nvmet_exit_debugfs();
+ 	nvmet_exit_discovery();
++	nvmet_exit_debugfs();
+ 	ida_destroy(&cntlid_ida);
+ 	destroy_workqueue(nvmet_wq);
+ 	destroy_workqueue(buffered_io_wq);
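
The nvmet reordering restores the invariant that teardown runs in exact reverse order of bring-up -- both in nvmet_exit() and in the init error ladder, where each label undoes precisely the steps completed above it. A sketch of the invariant with placeholder subsystems:

  #include <errno.h>
  #include <stdio.h>

  static int init_debugfs(void)    { puts("init debugfs");   return 0; }
  static int init_discovery(void)  { puts("init discovery"); return 0; }
  static int init_configfs(void)   { puts("init configfs");  return -ENOMEM; }
  static void exit_debugfs(void)   { puts("exit debugfs"); }
  static void exit_discovery(void) { puts("exit discovery"); }

  /* Error labels unwind in strict reverse order of bring-up, so the
   * failure path and the module exit path release state identically. */
  static int mod_init(void)
  {
          int err;

          err = init_debugfs();
          if (err)
                  return err;
          err = init_discovery();
          if (err)
                  goto out_exit_debugfs;
          err = init_configfs();          /* fails in this sketch */
          if (err)
                  goto out_exit_discovery;
          return 0;

  out_exit_discovery:
          exit_discovery();
  out_exit_debugfs:
          exit_debugfs();
          return err;
  }

  int main(void)
  {
          return mod_init() ? 1 : 0;
  }
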
+diff --git a/drivers/pci/controller/pcie-rockchip-host.c b/drivers/pci/controller/pcie-rockchip-host.c
+index 6a46be17aa91b1..2804980bab869e 100644
+--- a/drivers/pci/controller/pcie-rockchip-host.c
++++ b/drivers/pci/controller/pcie-rockchip-host.c
+@@ -439,7 +439,7 @@ static irqreturn_t rockchip_pcie_subsys_irq_handler(int irq, void *arg)
+ 			dev_dbg(dev, "malformed TLP received from the link\n");
+ 
+ 		if (sub_reg & PCIE_CORE_INT_UCR)
+-			dev_dbg(dev, "malformed TLP received from the link\n");
++			dev_dbg(dev, "Unexpected Completion received from the link\n");
+ 
+ 		if (sub_reg & PCIE_CORE_INT_FCE)
+ 			dev_dbg(dev, "an error was observed in the flow control advertisements from the other side\n");
+diff --git a/drivers/pci/endpoint/functions/pci-epf-vntb.c b/drivers/pci/endpoint/functions/pci-epf-vntb.c
+index 874cb097b093ae..62d09a528e6885 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-vntb.c
++++ b/drivers/pci/endpoint/functions/pci-epf-vntb.c
+@@ -530,7 +530,7 @@ static int epf_ntb_db_bar_init(struct epf_ntb *ntb)
+ 	struct device *dev = &ntb->epf->dev;
+ 	int ret;
+ 	struct pci_epf_bar *epf_bar;
+-	void __iomem *mw_addr;
++	void *mw_addr;
+ 	enum pci_barno barno;
+ 	size_t size = sizeof(u32) * ntb->db_count;
+ 
+@@ -700,7 +700,7 @@ static int epf_ntb_init_epc_bar(struct epf_ntb *ntb)
+ 		barno = pci_epc_get_next_free_bar(epc_features, barno);
+ 		if (barno < 0) {
+ 			dev_err(dev, "Fail to get NTB function BAR\n");
+-			return barno;
++			return -ENOENT;
+ 		}
+ 		ntb->epf_ntb_bar[bar] = barno;
+ 	}
+diff --git a/drivers/pci/hotplug/pnv_php.c b/drivers/pci/hotplug/pnv_php.c
+index 573a41869c153f..4f85e7fe29ec23 100644
+--- a/drivers/pci/hotplug/pnv_php.c
++++ b/drivers/pci/hotplug/pnv_php.c
+@@ -3,12 +3,15 @@
+  * PCI Hotplug Driver for PowerPC PowerNV platform.
+  *
+  * Copyright Gavin Shan, IBM Corporation 2016.
++ * Copyright (C) 2025 Raptor Engineering, LLC
++ * Copyright (C) 2025 Raptor Computing Systems, LLC
+  */
+ 
+ #include <linux/bitfield.h>
+ #include <linux/libfdt.h>
+ #include <linux/module.h>
+ #include <linux/pci.h>
++#include <linux/delay.h>
+ #include <linux/pci_hotplug.h>
+ #include <linux/of_fdt.h>
+ 
+@@ -36,8 +39,10 @@ static void pnv_php_register(struct device_node *dn);
+ static void pnv_php_unregister_one(struct device_node *dn);
+ static void pnv_php_unregister(struct device_node *dn);
+ 
++static void pnv_php_enable_irq(struct pnv_php_slot *php_slot);
++
+ static void pnv_php_disable_irq(struct pnv_php_slot *php_slot,
+-				bool disable_device)
++				bool disable_device, bool disable_msi)
+ {
+ 	struct pci_dev *pdev = php_slot->pdev;
+ 	u16 ctrl;
+@@ -53,19 +58,15 @@ static void pnv_php_disable_irq(struct pnv_php_slot *php_slot,
+ 		php_slot->irq = 0;
+ 	}
+ 
+-	if (php_slot->wq) {
+-		destroy_workqueue(php_slot->wq);
+-		php_slot->wq = NULL;
+-	}
+-
+-	if (disable_device) {
++	if (disable_device || disable_msi) {
+ 		if (pdev->msix_enabled)
+ 			pci_disable_msix(pdev);
+ 		else if (pdev->msi_enabled)
+ 			pci_disable_msi(pdev);
++	}
+ 
++	if (disable_device)
+ 		pci_disable_device(pdev);
+-	}
+ }
+ 
+ static void pnv_php_free_slot(struct kref *kref)
+@@ -74,7 +75,8 @@ static void pnv_php_free_slot(struct kref *kref)
+ 					struct pnv_php_slot, kref);
+ 
+ 	WARN_ON(!list_empty(&php_slot->children));
+-	pnv_php_disable_irq(php_slot, false);
++	pnv_php_disable_irq(php_slot, false, false);
++	destroy_workqueue(php_slot->wq);
+ 	kfree(php_slot->name);
+ 	kfree(php_slot);
+ }
+@@ -391,6 +393,20 @@ static int pnv_php_get_power_state(struct hotplug_slot *slot, u8 *state)
+ 	return 0;
+ }
+ 
++static int pcie_check_link_active(struct pci_dev *pdev)
++{
++	u16 lnk_status;
++	int ret;
++
++	ret = pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
++	if (ret == PCIBIOS_DEVICE_NOT_FOUND || PCI_POSSIBLE_ERROR(lnk_status))
++		return -ENODEV;
++
++	ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA);
++
++	return ret;
++}
++
+ static int pnv_php_get_adapter_state(struct hotplug_slot *slot, u8 *state)
+ {
+ 	struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
+@@ -403,6 +419,19 @@ static int pnv_php_get_adapter_state(struct hotplug_slot *slot, u8 *state)
+ 	 */
+ 	ret = pnv_pci_get_presence_state(php_slot->id, &presence);
+ 	if (ret >= 0) {
++		if (pci_pcie_type(php_slot->pdev) == PCI_EXP_TYPE_DOWNSTREAM &&
++			presence == OPAL_PCI_SLOT_EMPTY) {
++			/*
++			 * Similar to pciehp_hpc, check whether the Link Active
++			 * bit is set to account for broken downstream bridges
++			 * that don't properly assert Presence Detect State, as
++			 * was observed on the Microsemi Switchtec PM8533 PFX
++			 * [11f8:8533].
++			 */
++			if (pcie_check_link_active(php_slot->pdev) > 0)
++				presence = OPAL_PCI_SLOT_PRESENT;
++		}
++
+ 		*state = presence;
+ 		ret = 0;
+ 	} else {
+@@ -442,6 +471,61 @@ static int pnv_php_set_attention_state(struct hotplug_slot *slot, u8 state)
+ 	return 0;
+ }
+ 
++static int pnv_php_activate_slot(struct pnv_php_slot *php_slot,
++				 struct hotplug_slot *slot)
++{
++	int ret, i;
++
++	/*
++	 * Issue initial slot activation command to firmware
++	 *
++	 * Firmware will power slot on, attempt to train the link, and
++	 * discover any downstream devices. If this process fails, firmware
++	 * will return an error code and an invalid device tree. Failure
++	 * can be caused for multiple reasons, including a faulty
++	 * downstream device, poor connection to the downstream device, or
++	 * a previously latched PHB fence.  On failure, issue fundamental
++	 * reset up to three times before aborting.
++	 */
++	ret = pnv_php_set_slot_power_state(slot, OPAL_PCI_SLOT_POWER_ON);
++	if (ret) {
++		SLOT_WARN(
++			php_slot,
++			"PCI slot activation failed with error code %d, possible frozen PHB",
++			ret);
++		SLOT_WARN(
++			php_slot,
++			"Attempting complete PHB reset before retrying slot activation\n");
++		for (i = 0; i < 3; i++) {
++			/*
++			 * Slot activation failed, PHB may be fenced from a
++			 * prior device failure.
++			 *
++			 * Use the OPAL fundamental reset call to both try a
++			 * device reset and clear any potentially active PHB
++			 * fence / freeze.
++			 */
++			SLOT_WARN(php_slot, "Try %d...\n", i + 1);
++			pci_set_pcie_reset_state(php_slot->pdev,
++						 pcie_warm_reset);
++			msleep(250);
++			pci_set_pcie_reset_state(php_slot->pdev,
++						 pcie_deassert_reset);
++
++			ret = pnv_php_set_slot_power_state(
++				slot, OPAL_PCI_SLOT_POWER_ON);
++			if (!ret)
++				break;
++		}
++
++		if (i >= 3)
++			SLOT_WARN(php_slot,
++				  "Failed to bring slot online, aborting!\n");
++	}
++
++	return ret;
++}
++
+ static int pnv_php_enable(struct pnv_php_slot *php_slot, bool rescan)
+ {
+ 	struct hotplug_slot *slot = &php_slot->slot;
+@@ -504,7 +588,7 @@ static int pnv_php_enable(struct pnv_php_slot *php_slot, bool rescan)
+ 		goto scan;
+ 
+ 	/* Power is off, turn it on and then scan the slot */
+-	ret = pnv_php_set_slot_power_state(slot, OPAL_PCI_SLOT_POWER_ON);
++	ret = pnv_php_activate_slot(php_slot, slot);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -561,8 +645,58 @@ static int pnv_php_reset_slot(struct hotplug_slot *slot, bool probe)
+ static int pnv_php_enable_slot(struct hotplug_slot *slot)
+ {
+ 	struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
++	u32 prop32;
++	int ret;
++
++	ret = pnv_php_enable(php_slot, true);
++	if (ret)
++		return ret;
++
++	/* (Re-)enable interrupt if the slot supports surprise hotplug */
++	ret = of_property_read_u32(php_slot->dn, "ibm,slot-surprise-pluggable",
++				   &prop32);
++	if (!ret && prop32)
++		pnv_php_enable_irq(php_slot);
++
++	return 0;
++}
++
++/*
++ * Disable any hotplug interrupts for all slots on the provided bus, as well as
++ * all downstream slots in preparation for a hot unplug.
++ */
++static int pnv_php_disable_all_irqs(struct pci_bus *bus)
++{
++	struct pci_bus *child_bus;
++	struct pci_slot *slot;
++
++	/* First go down child buses */
++	list_for_each_entry(child_bus, &bus->children, node)
++		pnv_php_disable_all_irqs(child_bus);
++
++	/* Disable IRQs for all pnv_php slots on this bus */
++	list_for_each_entry(slot, &bus->slots, list) {
++		struct pnv_php_slot *php_slot = to_pnv_php_slot(slot->hotplug);
+ 
+-	return pnv_php_enable(php_slot, true);
++		pnv_php_disable_irq(php_slot, false, true);
++	}
++
++	return 0;
++}
++
++/*
++ * Disable any hotplug interrupts for all downstream slots on the provided
++ * bus in preparation for a hot unplug.
++ */
++static int pnv_php_disable_all_downstream_irqs(struct pci_bus *bus)
++{
++	struct pci_bus *child_bus;
++
++	/* Go down child buses, recursively deactivating their IRQs */
++	list_for_each_entry(child_bus, &bus->children, node)
++		pnv_php_disable_all_irqs(child_bus);
++
++	return 0;
+ }
+ 
+ static int pnv_php_disable_slot(struct hotplug_slot *slot)
+@@ -579,6 +713,13 @@ static int pnv_php_disable_slot(struct hotplug_slot *slot)
+ 	    php_slot->state != PNV_PHP_STATE_REGISTERED)
+ 		return 0;
+ 
++	/*
++	 * Free all IRQ resources from all child slots before remove.
++	 * Note that we do not disable the root slot IRQ here as that
++	 * would also deactivate the slot hot (re)plug interrupt!
++	 */
++	pnv_php_disable_all_downstream_irqs(php_slot->bus);
++
+ 	/* Remove all devices behind the slot */
+ 	pci_lock_rescan_remove();
+ 	pci_hp_remove_devices(php_slot->bus);
+@@ -647,6 +788,15 @@ static struct pnv_php_slot *pnv_php_alloc_slot(struct device_node *dn)
+ 		return NULL;
+ 	}
+ 
++	/* Allocate workqueue for this slot's interrupt handling */
++	php_slot->wq = alloc_workqueue("pciehp-%s", 0, 0, php_slot->name);
++	if (!php_slot->wq) {
++		SLOT_WARN(php_slot, "Cannot alloc workqueue\n");
++		kfree(php_slot->name);
++		kfree(php_slot);
++		return NULL;
++	}
++
+ 	if (dn->child && PCI_DN(dn->child))
+ 		php_slot->slot_no = PCI_SLOT(PCI_DN(dn->child)->devfn);
+ 	else
+@@ -745,16 +895,63 @@ static int pnv_php_enable_msix(struct pnv_php_slot *php_slot)
+ 	return entry.vector;
+ }
+ 
++static void
++pnv_php_detect_clear_suprise_removal_freeze(struct pnv_php_slot *php_slot)
++{
++	struct pci_dev *pdev = php_slot->pdev;
++	struct eeh_dev *edev;
++	struct eeh_pe *pe;
++	int i, rc;
++
++	/*
++	 * When a device is surprise removed from a downstream bridge slot,
++	 * the upstream bridge port can still end up frozen due to related EEH
++	 * events, which will in turn block the MSI interrupts for slot hotplug
++	 * detection.
++	 *
++	 * Detect and thaw any frozen upstream PE after slot deactivation.
++	 */
++	edev = pci_dev_to_eeh_dev(pdev);
++	pe = edev ? edev->pe : NULL;
++	rc = eeh_pe_get_state(pe);
++	if ((rc == -ENODEV) || (rc == -ENOENT)) {
++		SLOT_WARN(
++			php_slot,
++			"Upstream bridge PE state unknown, hotplug detect may fail\n");
++	} else {
++		if (pe->state & EEH_PE_ISOLATED) {
++			SLOT_WARN(
++				php_slot,
++				"Upstream bridge PE %02x frozen, thawing...\n",
++				pe->addr);
++			for (i = 0; i < 3; i++)
++				if (!eeh_unfreeze_pe(pe))
++					break;
++			if (i >= 3)
++				SLOT_WARN(
++					php_slot,
++					"Unable to thaw PE %02x, hotplug detect will fail!\n",
++					pe->addr);
++			else
++				SLOT_WARN(php_slot,
++					  "PE %02x thawed successfully\n",
++					  pe->addr);
++		}
++	}
++}
++
+ static void pnv_php_event_handler(struct work_struct *work)
+ {
+ 	struct pnv_php_event *event =
+ 		container_of(work, struct pnv_php_event, work);
+ 	struct pnv_php_slot *php_slot = event->php_slot;
+ 
+-	if (event->added)
++	if (event->added) {
+ 		pnv_php_enable_slot(&php_slot->slot);
+-	else
++	} else {
+ 		pnv_php_disable_slot(&php_slot->slot);
++		pnv_php_detect_clear_suprise_removal_freeze(php_slot);
++	}
+ 
+ 	kfree(event);
+ }
+@@ -843,14 +1040,6 @@ static void pnv_php_init_irq(struct pnv_php_slot *php_slot, int irq)
+ 	u16 sts, ctrl;
+ 	int ret;
+ 
+-	/* Allocate workqueue */
+-	php_slot->wq = alloc_workqueue("pciehp-%s", 0, 0, php_slot->name);
+-	if (!php_slot->wq) {
+-		SLOT_WARN(php_slot, "Cannot alloc workqueue\n");
+-		pnv_php_disable_irq(php_slot, true);
+-		return;
+-	}
+-
+ 	/* Check PDC (Presence Detection Change) is broken or not */
+ 	ret = of_property_read_u32(php_slot->dn, "ibm,slot-broken-pdc",
+ 				   &broken_pdc);
+@@ -869,7 +1058,7 @@ static void pnv_php_init_irq(struct pnv_php_slot *php_slot, int irq)
+ 	ret = request_irq(irq, pnv_php_interrupt, IRQF_SHARED,
+ 			  php_slot->name, php_slot);
+ 	if (ret) {
+-		pnv_php_disable_irq(php_slot, true);
++		pnv_php_disable_irq(php_slot, true, true);
+ 		SLOT_WARN(php_slot, "Error %d enabling IRQ %d\n", ret, irq);
+ 		return;
+ 	}
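
The pnv_php activation helper above retries power-on at most three times, issuing a warm reset and a 250 ms delay between attempts, and uses the loop counter itself to detect exhaustion: a successful break leaves i < 3. The control flow reduced to a compilable toy (power_on() fails until its budget is spent):

  #include <stdio.h>

  static int power_on(int *budget)
  {
          return (*budget)-- > 0 ? -1 : 0;        /* fail until budget used up */
  }

  int main(void)
  {
          int budget = 2;                 /* succeeds on the third attempt */
          int ret = power_on(&budget);
          int i;

          if (ret) {
                  for (i = 0; i < 3; i++) {
                          printf("Try %d...\n", i + 1);
                          /* warm reset + msleep(250) would go here */
                          ret = power_on(&budget);
                          if (!ret)
                                  break;
                  }
                  if (i >= 3)
                          printf("Failed to bring slot online, aborting!\n");
          }
          return ret;
  }
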
+diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
+index c8bd71a739f724..66e3bea7dc1a01 100644
+--- a/drivers/pci/pci-driver.c
++++ b/drivers/pci/pci-driver.c
+@@ -1634,7 +1634,7 @@ static int pci_bus_num_vf(struct device *dev)
+  */
+ static int pci_dma_configure(struct device *dev)
+ {
+-	struct pci_driver *driver = to_pci_driver(dev->driver);
++	const struct device_driver *drv = READ_ONCE(dev->driver);
+ 	struct device *bridge;
+ 	int ret = 0;
+ 
+@@ -1651,8 +1651,8 @@ static int pci_dma_configure(struct device *dev)
+ 
+ 	pci_put_host_bridge_device(bridge);
+ 
+-	/* @driver may not be valid when we're called from the IOMMU layer */
+-	if (!ret && dev->driver && !driver->driver_managed_dma) {
++	/* @drv may not be valid when we're called from the IOMMU layer */
++	if (!ret && drv && !to_pci_driver(drv)->driver_managed_dma) {
+ 		ret = iommu_device_use_default_domain(dev);
+ 		if (ret)
+ 			arch_teardown_dma_ops(dev);
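
The pci-driver.c change reads dev->driver exactly once, so the NULL test and the later driver_managed_dma dereference cannot observe two different values while the IOMMU layer races with unbind. Below, a userspace approximation; a relaxed C11 atomic load is only a rough stand-in for READ_ONCE():

  #include <stdatomic.h>
  #include <stdio.h>

  struct drv { int driver_managed_dma; };

  static _Atomic(struct drv *) dev_driver;        /* may be cleared concurrently */

  /* Snapshot the pointer once; re-reading dev_driver between the
   * check and the use could see NULL the second time. */
  static int dma_configure(void)
  {
          struct drv *drv = atomic_load_explicit(&dev_driver,
                                                 memory_order_relaxed);

          if (drv && !drv->driver_managed_dma)
                  return 1;       /* claim the default IOMMU domain */
          return 0;
  }

  int main(void)
  {
          struct drv d = { .driver_managed_dma = 0 };

          atomic_store(&dev_driver, &d);
          printf("use default domain: %d\n", dma_configure());
          return 0;
  }
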
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index d0f7b749b9a620..6e29f2b39dcef8 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -109,13 +109,13 @@ int pcie_failed_link_retrain(struct pci_dev *dev)
+ 	    !pcie_cap_has_lnkctl2(dev) || !dev->link_active_reporting)
+ 		return ret;
+ 
+-	pcie_capability_read_word(dev, PCI_EXP_LNKCTL2, &lnkctl2);
+ 	pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
+ 	if (!(lnksta & PCI_EXP_LNKSTA_DLLLA) && pcie_lbms_seen(dev, lnksta)) {
+-		u16 oldlnkctl2 = lnkctl2;
++		u16 oldlnkctl2;
+ 
+ 		pci_info(dev, "broken device, retraining non-functional downstream link at 2.5GT/s\n");
+ 
++		pcie_capability_read_word(dev, PCI_EXP_LNKCTL2, &oldlnkctl2);
+ 		ret = pcie_set_target_speed(dev, PCIE_SPEED_2_5GT, false);
+ 		if (ret) {
+ 			pci_info(dev, "retraining failed\n");
+@@ -127,6 +127,8 @@ int pcie_failed_link_retrain(struct pci_dev *dev)
+ 		pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
+ 	}
+ 
++	pcie_capability_read_word(dev, PCI_EXP_LNKCTL2, &lnkctl2);
++
+ 	if ((lnksta & PCI_EXP_LNKSTA_DLLLA) &&
+ 	    (lnkctl2 & PCI_EXP_LNKCTL2_TLS) == PCI_EXP_LNKCTL2_TLS_2_5GT &&
+ 	    pci_match_id(ids, dev)) {
+diff --git a/drivers/perf/arm-ni.c b/drivers/perf/arm-ni.c
+index de7b6cce4d68a8..9396d243415f48 100644
+--- a/drivers/perf/arm-ni.c
++++ b/drivers/perf/arm-ni.c
+@@ -544,6 +544,8 @@ static int arm_ni_init_cd(struct arm_ni *ni, struct arm_ni_node *node, u64 res_s
+ 		return err;
+ 
+ 	cd->cpu = cpumask_local_spread(0, dev_to_node(ni->dev));
++	irq_set_affinity(cd->irq, cpumask_of(cd->cpu));
++
+ 	cd->pmu = (struct pmu) {
+ 		.module = THIS_MODULE,
+ 		.parent = ni->dev,
+diff --git a/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c b/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c
+index 6bd1b3c75c779d..d7493c2294ef23 100644
+--- a/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c
++++ b/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c
+@@ -37,32 +37,13 @@
+ #define EUSB2_TUNE_EUSB_EQU		0x5A
+ #define EUSB2_TUNE_EUSB_HS_COMP_CUR	0x5B
+ 
+-enum eusb2_reg_layout {
+-	TUNE_EUSB_HS_COMP_CUR,
+-	TUNE_EUSB_EQU,
+-	TUNE_EUSB_SLEW,
+-	TUNE_USB2_HS_COMP_CUR,
+-	TUNE_USB2_PREEM,
+-	TUNE_USB2_EQU,
+-	TUNE_USB2_SLEW,
+-	TUNE_SQUELCH_U,
+-	TUNE_HSDISC,
+-	TUNE_RES_FSDIF,
+-	TUNE_IUSB2,
+-	TUNE_USB2_CROSSOVER,
+-	NUM_TUNE_FIELDS,
+-
+-	FORCE_VAL_5 = NUM_TUNE_FIELDS,
+-	FORCE_EN_5,
+-
+-	EN_CTL1,
+-
+-	RPTR_STATUS,
+-	LAYOUT_SIZE,
++struct eusb2_repeater_init_tbl_reg {
++	unsigned int reg;
++	unsigned int value;
+ };
+ 
+ struct eusb2_repeater_cfg {
+-	const u32 *init_tbl;
++	const struct eusb2_repeater_init_tbl_reg *init_tbl;
+ 	int init_tbl_num;
+ 	const char * const *vreg_list;
+ 	int num_vregs;
+@@ -82,16 +63,16 @@ static const char * const pm8550b_vreg_l[] = {
+ 	"vdd18", "vdd3",
+ };
+ 
+-static const u32 pm8550b_init_tbl[NUM_TUNE_FIELDS] = {
+-	[TUNE_IUSB2] = 0x8,
+-	[TUNE_SQUELCH_U] = 0x3,
+-	[TUNE_USB2_PREEM] = 0x5,
++static const struct eusb2_repeater_init_tbl_reg pm8550b_init_tbl[] = {
++	{ EUSB2_TUNE_IUSB2, 0x8 },
++	{ EUSB2_TUNE_SQUELCH_U, 0x3 },
++	{ EUSB2_TUNE_USB2_PREEM, 0x5 },
+ };
+ 
+-static const u32 smb2360_init_tbl[NUM_TUNE_FIELDS] = {
+-	[TUNE_IUSB2] = 0x5,
+-	[TUNE_SQUELCH_U] = 0x3,
+-	[TUNE_USB2_PREEM] = 0x2,
++static const struct eusb2_repeater_init_tbl_reg smb2360_init_tbl[] = {
++	{ EUSB2_TUNE_IUSB2, 0x5 },
++	{ EUSB2_TUNE_SQUELCH_U, 0x3 },
++	{ EUSB2_TUNE_USB2_PREEM, 0x2 },
+ };
+ 
+ static const struct eusb2_repeater_cfg pm8550b_eusb2_cfg = {
+@@ -129,17 +110,10 @@ static int eusb2_repeater_init(struct phy *phy)
+ 	struct eusb2_repeater *rptr = phy_get_drvdata(phy);
+ 	struct device_node *np = rptr->dev->of_node;
+ 	struct regmap *regmap = rptr->regmap;
+-	const u32 *init_tbl = rptr->cfg->init_tbl;
+-	u8 tune_usb2_preem = init_tbl[TUNE_USB2_PREEM];
+-	u8 tune_hsdisc = init_tbl[TUNE_HSDISC];
+-	u8 tune_iusb2 = init_tbl[TUNE_IUSB2];
+ 	u32 base = rptr->base;
+-	u32 val;
++	u32 poll_val;
+ 	int ret;
+-
+-	of_property_read_u8(np, "qcom,tune-usb2-amplitude", &tune_iusb2);
+-	of_property_read_u8(np, "qcom,tune-usb2-disc-thres", &tune_hsdisc);
+-	of_property_read_u8(np, "qcom,tune-usb2-preem", &tune_usb2_preem);
++	u8 val;
+ 
+ 	ret = regulator_bulk_enable(rptr->cfg->num_vregs, rptr->vregs);
+ 	if (ret)
+@@ -147,21 +121,24 @@ static int eusb2_repeater_init(struct phy *phy)
+ 
+ 	regmap_write(regmap, base + EUSB2_EN_CTL1, EUSB2_RPTR_EN);
+ 
+-	regmap_write(regmap, base + EUSB2_TUNE_EUSB_HS_COMP_CUR, init_tbl[TUNE_EUSB_HS_COMP_CUR]);
+-	regmap_write(regmap, base + EUSB2_TUNE_EUSB_EQU, init_tbl[TUNE_EUSB_EQU]);
+-	regmap_write(regmap, base + EUSB2_TUNE_EUSB_SLEW, init_tbl[TUNE_EUSB_SLEW]);
+-	regmap_write(regmap, base + EUSB2_TUNE_USB2_HS_COMP_CUR, init_tbl[TUNE_USB2_HS_COMP_CUR]);
+-	regmap_write(regmap, base + EUSB2_TUNE_USB2_EQU, init_tbl[TUNE_USB2_EQU]);
+-	regmap_write(regmap, base + EUSB2_TUNE_USB2_SLEW, init_tbl[TUNE_USB2_SLEW]);
+-	regmap_write(regmap, base + EUSB2_TUNE_SQUELCH_U, init_tbl[TUNE_SQUELCH_U]);
+-	regmap_write(regmap, base + EUSB2_TUNE_RES_FSDIF, init_tbl[TUNE_RES_FSDIF]);
+-	regmap_write(regmap, base + EUSB2_TUNE_USB2_CROSSOVER, init_tbl[TUNE_USB2_CROSSOVER]);
+-
+-	regmap_write(regmap, base + EUSB2_TUNE_USB2_PREEM, tune_usb2_preem);
+-	regmap_write(regmap, base + EUSB2_TUNE_HSDISC, tune_hsdisc);
+-	regmap_write(regmap, base + EUSB2_TUNE_IUSB2, tune_iusb2);
+-
+-	ret = regmap_read_poll_timeout(regmap, base + EUSB2_RPTR_STATUS, val, val & RPTR_OK, 10, 5);
++	/* Write registers from init table */
++	for (int i = 0; i < rptr->cfg->init_tbl_num; i++)
++		regmap_write(regmap, base + rptr->cfg->init_tbl[i].reg,
++			     rptr->cfg->init_tbl[i].value);
++
++	/* Override registers from devicetree values */
++	if (!of_property_read_u8(np, "qcom,tune-usb2-amplitude", &val))
++		regmap_write(regmap, base + EUSB2_TUNE_USB2_PREEM, val);
++
++	if (!of_property_read_u8(np, "qcom,tune-usb2-disc-thres", &val))
++		regmap_write(regmap, base + EUSB2_TUNE_HSDISC, val);
++
++	if (!of_property_read_u8(np, "qcom,tune-usb2-preem", &val))
++		regmap_write(regmap, base + EUSB2_TUNE_IUSB2, val);
++
++	/* Wait for status OK */
++	ret = regmap_read_poll_timeout(regmap, base + EUSB2_RPTR_STATUS, poll_val,
++				       poll_val & RPTR_OK, 10, 5);
+ 	if (ret)
+ 		dev_err(rptr->dev, "initialization timed-out\n");
+ 
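
The eusb2-repeater rework swaps the dense, enum-indexed tuning array for a sparse table of {register, value} pairs iterated at init, with devicetree overrides written afterwards only when the property is actually present (note the override targets follow the removed code's mapping: amplitude tunes IUSB2, preem tunes USB2_PREEM). A toy version of the table-driven write loop, with made-up register offsets and base:

  #include <stdio.h>

  struct init_tbl_reg {
          unsigned int reg;
          unsigned int value;
  };

  static void regmap_write_sim(unsigned int base, unsigned int reg,
                               unsigned int val)
  {
          printf("write [%#x] = %#x\n", base + reg, val);
  }

  /* Sparse table: only the registers this variant tunes are listed,
   * so adding one more register needs no shared layout enum. */
  static const struct init_tbl_reg init_tbl[] = {
          { 0x08, 0x8 },          /* hypothetical tuning registers */
          { 0x0c, 0x3 },
          { 0x10, 0x5 },
  };

  int main(void)
  {
          unsigned int base = 0xfd00;

          for (unsigned int i = 0; i < sizeof(init_tbl) / sizeof(init_tbl[0]); i++)
                  regmap_write_sim(base, init_tbl[i].reg, init_tbl[i].value);

          /* devicetree overrides would be applied here, one regmap
           * write per property that of_property_read_u8() finds */
          return 0;
  }
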
+diff --git a/drivers/pinctrl/berlin/berlin.c b/drivers/pinctrl/berlin/berlin.c
+index c372a2a24be4bb..9dc2da8056b722 100644
+--- a/drivers/pinctrl/berlin/berlin.c
++++ b/drivers/pinctrl/berlin/berlin.c
+@@ -204,6 +204,7 @@ static int berlin_pinctrl_build_state(struct platform_device *pdev)
+ 	const struct berlin_desc_group *desc_group;
+ 	const struct berlin_desc_function *desc_function;
+ 	int i, max_functions = 0;
++	struct pinfunction *new_functions;
+ 
+ 	pctrl->nfunctions = 0;
+ 
+@@ -229,12 +230,15 @@ static int berlin_pinctrl_build_state(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	pctrl->functions = krealloc(pctrl->functions,
++	new_functions = krealloc(pctrl->functions,
+ 				    pctrl->nfunctions * sizeof(*pctrl->functions),
+ 				    GFP_KERNEL);
+-	if (!pctrl->functions)
++	if (!new_functions) {
++		kfree(pctrl->functions);
+ 		return -ENOMEM;
++	}
+ 
++	pctrl->functions = new_functions;
+ 	/* map functions to their groups */
+ 	for (i = 0; i < pctrl->desc->ngroups; i++) {
+ 		desc_group = pctrl->desc->groups + i;
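
The berlin hunk above (and the sunxi hunk further down) fixes the classic krealloc() leak: assigning the result straight back to the only pointer discards the original buffer when the allocation fails. The userspace equivalent:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Grow a buffer without leaking it: realloc() into a temporary,
   * because on failure realloc() returns NULL and leaves the old
   * block allocated. */
  static int grow(char **buf, size_t newsz)
  {
          char *tmp = realloc(*buf, newsz);

          if (!tmp) {
                  free(*buf);     /* release the still-valid old block */
                  *buf = NULL;
                  return -1;
          }
          *buf = tmp;
          return 0;
  }

  int main(void)
  {
          char *buf = malloc(16);

          if (!buf)
                  return 1;
          strcpy(buf, "functions");
          if (grow(&buf, 4096))
                  return 1;
          puts(buf);
          free(buf);
          return 0;
  }
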
+diff --git a/drivers/pinctrl/pinctrl-k230.c b/drivers/pinctrl/pinctrl-k230.c
+index a9b4627b46b012..d716f23d837f7a 100644
+--- a/drivers/pinctrl/pinctrl-k230.c
++++ b/drivers/pinctrl/pinctrl-k230.c
+@@ -477,6 +477,10 @@ static int k230_pinctrl_parse_groups(struct device_node *np,
+ 	grp->name = np->name;
+ 
+ 	list = of_get_property(np, "pinmux", &size);
++	if (!list) {
++		dev_err(dev, "failed to get pinmux property\n");
++		return -EINVAL;
++	}
+ 	size /= sizeof(*list);
+ 
+ 	grp->num_pins = size;
+@@ -586,6 +590,7 @@ static int k230_pinctrl_probe(struct platform_device *pdev)
+ 	struct device *dev = &pdev->dev;
+ 	struct k230_pinctrl *info;
+ 	struct pinctrl_desc *pctl;
++	int ret;
+ 
+ 	info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL);
+ 	if (!info)
+@@ -611,19 +616,21 @@ static int k230_pinctrl_probe(struct platform_device *pdev)
+ 		return dev_err_probe(dev, PTR_ERR(info->regmap_base),
+ 				     "failed to init regmap\n");
+ 
++	ret = k230_pinctrl_parse_dt(pdev, info);
++	if (ret)
++		return ret;
++
+ 	info->pctl_dev = devm_pinctrl_register(dev, pctl, info);
+ 	if (IS_ERR(info->pctl_dev))
+ 		return dev_err_probe(dev, PTR_ERR(info->pctl_dev),
+ 				     "devm_pinctrl_register failed\n");
+ 
+-	k230_pinctrl_parse_dt(pdev, info);
+-
+ 	return 0;
+ }
+ 
+ static const struct of_device_id k230_dt_ids[] = {
+ 	{ .compatible = "canaan,k230-pinctrl", },
+-	{ /* sintenel */ }
++	{ /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, k230_dt_ids);
+ 
+diff --git a/drivers/pinctrl/pinmux.c b/drivers/pinctrl/pinmux.c
+index 0743190da59e81..2c31e7f2a27a86 100644
+--- a/drivers/pinctrl/pinmux.c
++++ b/drivers/pinctrl/pinmux.c
+@@ -236,18 +236,7 @@ static const char *pin_free(struct pinctrl_dev *pctldev, int pin,
+ 			if (desc->mux_usecount)
+ 				return NULL;
+ 		}
+-	}
+-
+-	/*
+-	 * If there is no kind of request function for the pin we just assume
+-	 * we got it by default and proceed.
+-	 */
+-	if (gpio_range && ops->gpio_disable_free)
+-		ops->gpio_disable_free(pctldev, gpio_range, pin);
+-	else if (ops->free)
+-		ops->free(pctldev, pin);
+ 
+-	scoped_guard(mutex, &desc->mux_lock) {
+ 		if (gpio_range) {
+ 			owner = desc->gpio_owner;
+ 			desc->gpio_owner = NULL;
+@@ -258,6 +247,15 @@ static const char *pin_free(struct pinctrl_dev *pctldev, int pin,
+ 		}
+ 	}
+ 
++	/*
++	 * If there is no kind of request function for the pin we just assume
++	 * we got it by default and proceed.
++	 */
++	if (gpio_range && ops->gpio_disable_free)
++		ops->gpio_disable_free(pctldev, gpio_range, pin);
++	else if (ops->free)
++		ops->free(pctldev, pin);
++
+ 	module_put(pctldev->owner);
+ 
+ 	return owner;
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+index f1c5a991cf7b52..ada2ec62916f9b 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+@@ -408,6 +408,7 @@ static int sunxi_pctrl_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	const char *function, *pin_prop;
+ 	const char *group;
+ 	int ret, npins, nmaps, configlen = 0, i = 0;
++	struct pinctrl_map *new_map;
+ 
+ 	*map = NULL;
+ 	*num_maps = 0;
+@@ -482,9 +483,13 @@ static int sunxi_pctrl_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	 * We now have the number of maps we need, so we can resize our
+ 	 * map array
+ 	 */
+-	*map = krealloc(*map, i * sizeof(struct pinctrl_map), GFP_KERNEL);
+-	if (!*map)
+-		return -ENOMEM;
++	new_map = krealloc(*map, i * sizeof(struct pinctrl_map), GFP_KERNEL);
++	if (!new_map) {
++		ret = -ENOMEM;
++		goto err_free_map;
++	}
++
++	*map = new_map;
+ 
+ 	return 0;
+ 
+diff --git a/drivers/platform/x86/intel/pmt/class.c b/drivers/platform/x86/intel/pmt/class.c
+index 7233b654bbad15..d046e87521738e 100644
+--- a/drivers/platform/x86/intel/pmt/class.c
++++ b/drivers/platform/x86/intel/pmt/class.c
+@@ -97,7 +97,7 @@ intel_pmt_read(struct file *filp, struct kobject *kobj,
+ 	if (count > entry->size - off)
+ 		count = entry->size - off;
+ 
+-	count = pmt_telem_read_mmio(entry->ep->pcidev, entry->cb, entry->header.guid, buf,
++	count = pmt_telem_read_mmio(entry->pcidev, entry->cb, entry->header.guid, buf,
+ 				    entry->base, off, count);
+ 
+ 	return count;
+@@ -252,6 +252,7 @@ static int intel_pmt_populate_entry(struct intel_pmt_entry *entry,
+ 		return -EINVAL;
+ 	}
+ 
++	entry->pcidev = pci_dev;
+ 	entry->guid = header->guid;
+ 	entry->size = header->size;
+ 	entry->cb = ivdev->priv_data;
+diff --git a/drivers/platform/x86/intel/pmt/class.h b/drivers/platform/x86/intel/pmt/class.h
+index b2006d57779d66..f6ce80c4e05111 100644
+--- a/drivers/platform/x86/intel/pmt/class.h
++++ b/drivers/platform/x86/intel/pmt/class.h
+@@ -39,6 +39,7 @@ struct intel_pmt_header {
+ 
+ struct intel_pmt_entry {
+ 	struct telem_endpoint	*ep;
++	struct pci_dev		*pcidev;
+ 	struct intel_pmt_header	header;
+ 	struct bin_attribute	pmt_bin_attr;
+ 	struct kobject		*kobj;
+diff --git a/drivers/power/sequencing/pwrseq-qcom-wcn.c b/drivers/power/sequencing/pwrseq-qcom-wcn.c
+index e8f5030f2639a6..7d8d6b3407495c 100644
+--- a/drivers/power/sequencing/pwrseq-qcom-wcn.c
++++ b/drivers/power/sequencing/pwrseq-qcom-wcn.c
+@@ -155,7 +155,7 @@ static const struct pwrseq_unit_data pwrseq_qcom_wcn_bt_unit_data = {
+ };
+ 
+ static const struct pwrseq_unit_data pwrseq_qcom_wcn6855_bt_unit_data = {
+-	.name = "wlan-enable",
++	.name = "bluetooth-enable",
+ 	.deps = pwrseq_qcom_wcn6855_unit_deps,
+ 	.enable = pwrseq_qcom_wcn_bt_enable,
+ 	.disable = pwrseq_qcom_wcn_bt_disable,
+diff --git a/drivers/power/supply/cpcap-charger.c b/drivers/power/supply/cpcap-charger.c
+index 13300dc60baf9b..d0c3008db53490 100644
+--- a/drivers/power/supply/cpcap-charger.c
++++ b/drivers/power/supply/cpcap-charger.c
+@@ -689,9 +689,8 @@ static void cpcap_usb_detect(struct work_struct *work)
+ 		struct power_supply *battery;
+ 
+ 		battery = power_supply_get_by_name("battery");
+-		if (IS_ERR_OR_NULL(battery)) {
+-			dev_err(ddata->dev, "battery power_supply not available %li\n",
+-					PTR_ERR(battery));
++		if (!battery) {
++			dev_err(ddata->dev, "battery power_supply not available\n");
+ 			return;
+ 		}
+ 
+diff --git a/drivers/power/supply/max14577_charger.c b/drivers/power/supply/max14577_charger.c
+index 1cef2f860b5fcc..63077d38ea30a7 100644
+--- a/drivers/power/supply/max14577_charger.c
++++ b/drivers/power/supply/max14577_charger.c
+@@ -501,7 +501,7 @@ static struct max14577_charger_platform_data *max14577_charger_dt_init(
+ static struct max14577_charger_platform_data *max14577_charger_dt_init(
+ 		struct platform_device *pdev)
+ {
+-	return NULL;
++	return ERR_PTR(-ENODATA);
+ }
+ #endif /* CONFIG_OF */
+ 
+@@ -572,7 +572,7 @@ static int max14577_charger_probe(struct platform_device *pdev)
+ 	chg->max14577 = max14577;
+ 
+ 	chg->pdata = max14577_charger_dt_init(pdev);
+-	if (IS_ERR_OR_NULL(chg->pdata))
++	if (IS_ERR(chg->pdata))
+ 		return PTR_ERR(chg->pdata);
+ 
+ 	ret = max14577_charger_reg_init(chg);
+diff --git a/drivers/power/supply/max1720x_battery.c b/drivers/power/supply/max1720x_battery.c
+index ea3912fd1de8bf..68b5314ecf3a23 100644
+--- a/drivers/power/supply/max1720x_battery.c
++++ b/drivers/power/supply/max1720x_battery.c
+@@ -288,9 +288,10 @@ static int max172xx_voltage_to_ps(unsigned int reg)
+ 	return reg * 1250;	/* in uV */
+ }
+ 
+-static int max172xx_capacity_to_ps(unsigned int reg)
++static int max172xx_capacity_to_ps(unsigned int reg,
++				   struct max1720x_device_info *info)
+ {
+-	return reg * 500;	/* in uAh */
++	return reg * (500000 / info->rsense);	/* in uAh */
+ }
+ 
+ /*
+@@ -394,11 +395,11 @@ static int max1720x_battery_get_property(struct power_supply *psy,
+ 		break;
+ 	case POWER_SUPPLY_PROP_CHARGE_FULL_DESIGN:
+ 		ret = regmap_read(info->regmap, MAX172XX_DESIGN_CAP, &reg_val);
+-		val->intval = max172xx_capacity_to_ps(reg_val);
++		val->intval = max172xx_capacity_to_ps(reg_val, info);
+ 		break;
+ 	case POWER_SUPPLY_PROP_CHARGE_AVG:
+ 		ret = regmap_read(info->regmap, MAX172XX_REPCAP, &reg_val);
+-		val->intval = max172xx_capacity_to_ps(reg_val);
++		val->intval = max172xx_capacity_to_ps(reg_val, info);
+ 		break;
+ 	case POWER_SUPPLY_PROP_TIME_TO_EMPTY_AVG:
+ 		ret = regmap_read(info->regmap, MAX172XX_TTE, &reg_val);
+@@ -422,7 +423,7 @@ static int max1720x_battery_get_property(struct power_supply *psy,
+ 		break;
+ 	case POWER_SUPPLY_PROP_CHARGE_FULL:
+ 		ret = regmap_read(info->regmap, MAX172XX_FULL_CAP, &reg_val);
+-		val->intval = max172xx_capacity_to_ps(reg_val);
++		val->intval = max172xx_capacity_to_ps(reg_val, info);
+ 		break;
+ 	case POWER_SUPPLY_PROP_MODEL_NAME:
+ 		ret = regmap_read(info->regmap, MAX172XX_DEV_NAME, &reg_val);
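
The max1720x fix stops hard-coding a 10 mOhm sense resistor: one DesignCap/RepCap count is 5.0 uVh across Rsense, and the 500000 / rsense factor implies info->rsense is kept in units of 10 uOhm (1000 meaning 10 mOhm, which reproduces the old fixed 500 uAh per count). The arithmetic as a tiny program, with illustrative register values:

  #include <stdio.h>

  /* Capacity LSB = 5.0 uVh / Rsense; with rsense in 10 uOhm units
   * that is 500000 / rsense micro-amp-hours per register count. */
  static long capacity_to_uah(unsigned int reg, unsigned int rsense)
  {
          return (long)reg * (500000 / rsense);
  }

  int main(void)
  {
          /* 10 mOhm shunt: the old hard-coded scale, 500 uAh/count */
          printf("%ld uAh\n", capacity_to_uah(0x1770, 1000)); /* 3000000 */
          /* 5 mOhm shunt: each count is worth twice the charge */
          printf("%ld uAh\n", capacity_to_uah(0x1770, 500));  /* 6000000 */
          return 0;
  }
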
+diff --git a/drivers/power/supply/qcom_pmi8998_charger.c b/drivers/power/supply/qcom_pmi8998_charger.c
+index 74a8d8ed8d9fa3..8b641b822f5234 100644
+--- a/drivers/power/supply/qcom_pmi8998_charger.c
++++ b/drivers/power/supply/qcom_pmi8998_charger.c
+@@ -1016,7 +1016,9 @@ static int smb2_probe(struct platform_device *pdev)
+ 	if (rc < 0)
+ 		return rc;
+ 
+-	rc = dev_pm_set_wake_irq(chip->dev, chip->cable_irq);
++	devm_device_init_wakeup(chip->dev);
++
++	rc = devm_pm_set_wake_irq(chip->dev, chip->cable_irq);
+ 	if (rc < 0)
+ 		return dev_err_probe(chip->dev, rc, "Couldn't set wake irq\n");
+ 
+diff --git a/drivers/powercap/dtpm_cpu.c b/drivers/powercap/dtpm_cpu.c
+index 6b6f51b215501b..99390ec1481f83 100644
+--- a/drivers/powercap/dtpm_cpu.c
++++ b/drivers/powercap/dtpm_cpu.c
+@@ -96,6 +96,8 @@ static u64 get_pd_power_uw(struct dtpm *dtpm)
+ 	int i;
+ 
+ 	pd = em_cpu_get(dtpm_cpu->cpu);
++	if (!pd)
++		return 0;
+ 
+ 	pd_mask = em_span_cpus(pd);
+ 
+diff --git a/drivers/pps/pps.c b/drivers/pps/pps.c
+index 6a02245ea35fec..9463232af8d2e6 100644
+--- a/drivers/pps/pps.c
++++ b/drivers/pps/pps.c
+@@ -41,6 +41,9 @@ static __poll_t pps_cdev_poll(struct file *file, poll_table *wait)
+ 
+ 	poll_wait(file, &pps->queue, wait);
+ 
++	if (pps->last_fetched_ev == pps->last_ev)
++		return 0;
++
+ 	return EPOLLIN | EPOLLRDNORM;
+ }
+ 
+@@ -186,9 +189,11 @@ static long pps_cdev_ioctl(struct file *file,
+ 		if (err)
+ 			return err;
+ 
+-		/* Return the fetched timestamp */
++		/* Return the fetched timestamp and save last fetched event  */
+ 		spin_lock_irq(&pps->lock);
+ 
++		pps->last_fetched_ev = pps->last_ev;
++
+ 		fdata.info.assert_sequence = pps->assert_sequence;
+ 		fdata.info.clear_sequence = pps->clear_sequence;
+ 		fdata.info.assert_tu = pps->assert_tu;
+@@ -272,9 +277,11 @@ static long pps_cdev_compat_ioctl(struct file *file,
+ 		if (err)
+ 			return err;
+ 
+-		/* Return the fetched timestamp */
++		/* Return the fetched timestamp and save last fetched event  */
+ 		spin_lock_irq(&pps->lock);
+ 
++		pps->last_fetched_ev = pps->last_ev;
++
+ 		compat.info.assert_sequence = pps->assert_sequence;
+ 		compat.info.clear_sequence = pps->clear_sequence;
+ 		compat.info.current_mode = pps->current_mode;
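
The pps change makes poll() report readable only when an event has arrived that this device has not yet handed out through the fetch ioctl; previously a reader could wake from poll() forever re-seeing the same stale event. The bookkeeping, reduced to a compilable sketch (poll_wait() and locking omitted):

  #include <stdio.h>

  struct pps_dev {
          unsigned int last_ev;           /* bumped by the PPS interrupt */
          unsigned int last_fetched_ev;   /* what userspace last consumed */
  };

  /* poll(): nothing to read unless a new event arrived since the
   * previous fetch. */
  static int pps_poll(const struct pps_dev *pps)
  {
          return pps->last_ev != pps->last_fetched_ev;
  }

  static void pps_fetch(struct pps_dev *pps)
  {
          pps->last_fetched_ev = pps->last_ev;    /* remember what we handed out */
  }

  int main(void)
  {
          struct pps_dev pps = { 0, 0 };

          pps.last_ev++;                          /* interrupt: new PPS edge */
          printf("poll=%d\n", pps_poll(&pps));    /* 1: event to fetch */
          pps_fetch(&pps);
          printf("poll=%d\n", pps_poll(&pps));    /* 0: nothing new */
          return 0;
  }
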
+diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
+index 83962a114dc9fd..48a0d3a69ed080 100644
+--- a/drivers/remoteproc/Kconfig
++++ b/drivers/remoteproc/Kconfig
+@@ -214,7 +214,7 @@ config QCOM_Q6V5_MSS
+ 	  handled by QCOM_Q6V5_PAS driver.
+ 
+ config QCOM_Q6V5_PAS
+-	tristate "Qualcomm Hexagon v5 Peripheral Authentication Service support"
++	tristate "Qualcomm Peripheral Authentication Service support"
+ 	depends on OF && ARCH_QCOM
+ 	depends on QCOM_SMEM
+ 	depends on RPMSG_QCOM_SMD || RPMSG_QCOM_SMD=n
+@@ -229,11 +229,10 @@ config QCOM_Q6V5_PAS
+ 	select QCOM_RPROC_COMMON
+ 	select QCOM_SCM
+ 	help
+-	  Say y here to support the TrustZone based Peripheral Image Loader
+-	  for the Qualcomm Hexagon v5 based remote processors. This is commonly
+-	  used to control subsystems such as ADSP (Audio DSP),
+-	  CDSP (Compute DSP), MPSS (Modem Peripheral SubSystem), and
+-	  SLPI (Sensor Low Power Island).
++	  Say y here to support the TrustZone based Peripheral Image Loader for
++	  the Qualcomm remote processors. This is commonly used to control
++	  subsystems such as ADSP (Audio DSP), CDSP (Compute DSP), MPSS (Modem
++	  Peripheral SubSystem), and SLPI (Sensor Low Power Island).
+ 
+ config QCOM_Q6V5_WCSS
+ 	tristate "Qualcomm Hexagon based WCSS Peripheral Image Loader"
+diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
+index b306f223127c45..02e29171cbbee2 100644
+--- a/drivers/remoteproc/qcom_q6v5_pas.c
++++ b/drivers/remoteproc/qcom_q6v5_pas.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Qualcomm ADSP/SLPI Peripheral Image Loader for MSM8974 and MSM8996
++ * Qualcomm Peripheral Authentication Service remoteproc driver
+  *
+  * Copyright (C) 2016 Linaro Ltd
+  * Copyright (C) 2014 Sony Mobile Communications AB
+@@ -31,11 +31,11 @@
+ #include "qcom_q6v5.h"
+ #include "remoteproc_internal.h"
+ 
+-#define ADSP_DECRYPT_SHUTDOWN_DELAY_MS	100
++#define QCOM_PAS_DECRYPT_SHUTDOWN_DELAY_MS	100
+ 
+ #define MAX_ASSIGN_COUNT 3
+ 
+-struct adsp_data {
++struct qcom_pas_data {
+ 	int crash_reason_smem;
+ 	const char *firmware_name;
+ 	const char *dtb_firmware_name;
+@@ -60,7 +60,7 @@ struct adsp_data {
+ 	int region_assign_vmid;
+ };
+ 
+-struct qcom_adsp {
++struct qcom_pas {
+ 	struct device *dev;
+ 	struct rproc *rproc;
+ 
+@@ -119,36 +119,37 @@ struct qcom_adsp {
+ 	struct qcom_scm_pas_metadata dtb_pas_metadata;
+ };
+ 
+-static void adsp_segment_dump(struct rproc *rproc, struct rproc_dump_segment *segment,
+-		       void *dest, size_t offset, size_t size)
++static void qcom_pas_segment_dump(struct rproc *rproc,
++				  struct rproc_dump_segment *segment,
++				  void *dest, size_t offset, size_t size)
+ {
+-	struct qcom_adsp *adsp = rproc->priv;
++	struct qcom_pas *pas = rproc->priv;
+ 	int total_offset;
+ 
+-	total_offset = segment->da + segment->offset + offset - adsp->mem_phys;
+-	if (total_offset < 0 || total_offset + size > adsp->mem_size) {
+-		dev_err(adsp->dev,
++	total_offset = segment->da + segment->offset + offset - pas->mem_phys;
++	if (total_offset < 0 || total_offset + size > pas->mem_size) {
++		dev_err(pas->dev,
+ 			"invalid copy request for segment %pad with offset %zu and size %zu)\n",
+ 			&segment->da, offset, size);
+ 		memset(dest, 0xff, size);
+ 		return;
+ 	}
+ 
+-	memcpy_fromio(dest, adsp->mem_region + total_offset, size);
++	memcpy_fromio(dest, pas->mem_region + total_offset, size);
+ }
+ 
+-static void adsp_minidump(struct rproc *rproc)
++static void qcom_pas_minidump(struct rproc *rproc)
+ {
+-	struct qcom_adsp *adsp = rproc->priv;
++	struct qcom_pas *pas = rproc->priv;
+ 
+ 	if (rproc->dump_conf == RPROC_COREDUMP_DISABLED)
+ 		return;
+ 
+-	qcom_minidump(rproc, adsp->minidump_id, adsp_segment_dump);
++	qcom_minidump(rproc, pas->minidump_id, qcom_pas_segment_dump);
+ }
+ 
+-static int adsp_pds_enable(struct qcom_adsp *adsp, struct device **pds,
+-			   size_t pd_count)
++static int qcom_pas_pds_enable(struct qcom_pas *pas, struct device **pds,
++			       size_t pd_count)
+ {
+ 	int ret;
+ 	int i;
+@@ -174,8 +175,8 @@ static int adsp_pds_enable(struct qcom_adsp *adsp, struct device **pds,
+ 	return ret;
+ };
+ 
+-static void adsp_pds_disable(struct qcom_adsp *adsp, struct device **pds,
+-			     size_t pd_count)
++static void qcom_pas_pds_disable(struct qcom_pas *pas, struct device **pds,
++				 size_t pd_count)
+ {
+ 	int i;
+ 
+@@ -185,65 +186,65 @@ static void adsp_pds_disable(struct qcom_adsp *adsp, struct device **pds,
+ 	}
+ }
+ 
+-static int adsp_shutdown_poll_decrypt(struct qcom_adsp *adsp)
++static int qcom_pas_shutdown_poll_decrypt(struct qcom_pas *pas)
+ {
+ 	unsigned int retry_num = 50;
+ 	int ret;
+ 
+ 	do {
+-		msleep(ADSP_DECRYPT_SHUTDOWN_DELAY_MS);
+-		ret = qcom_scm_pas_shutdown(adsp->pas_id);
++		msleep(QCOM_PAS_DECRYPT_SHUTDOWN_DELAY_MS);
++		ret = qcom_scm_pas_shutdown(pas->pas_id);
+ 	} while (ret == -EINVAL && --retry_num);
+ 
+ 	return ret;
+ }
+ 
+-static int adsp_unprepare(struct rproc *rproc)
++static int qcom_pas_unprepare(struct rproc *rproc)
+ {
+-	struct qcom_adsp *adsp = rproc->priv;
++	struct qcom_pas *pas = rproc->priv;
+ 
+ 	/*
+-	 * adsp_load() did pass pas_metadata to the SCM driver for storing
++	 * qcom_pas_load() did pass pas_metadata to the SCM driver for storing
+ 	 * metadata context. It might have been released already if
+ 	 * auth_and_reset() was successful, but in other cases clean it up
+ 	 * here.
+ 	 */
+-	qcom_scm_pas_metadata_release(&adsp->pas_metadata);
+-	if (adsp->dtb_pas_id)
+-		qcom_scm_pas_metadata_release(&adsp->dtb_pas_metadata);
++	qcom_scm_pas_metadata_release(&pas->pas_metadata);
++	if (pas->dtb_pas_id)
++		qcom_scm_pas_metadata_release(&pas->dtb_pas_metadata);
+ 
+ 	return 0;
+ }
+ 
+-static int adsp_load(struct rproc *rproc, const struct firmware *fw)
++static int qcom_pas_load(struct rproc *rproc, const struct firmware *fw)
+ {
+-	struct qcom_adsp *adsp = rproc->priv;
++	struct qcom_pas *pas = rproc->priv;
+ 	int ret;
+ 
+-	/* Store firmware handle to be used in adsp_start() */
+-	adsp->firmware = fw;
++	/* Store firmware handle to be used in qcom_pas_start() */
++	pas->firmware = fw;
+ 
+-	if (adsp->lite_pas_id)
+-		ret = qcom_scm_pas_shutdown(adsp->lite_pas_id);
++	if (pas->lite_pas_id)
++		ret = qcom_scm_pas_shutdown(pas->lite_pas_id);
+ 
+-	if (adsp->dtb_pas_id) {
+-		ret = request_firmware(&adsp->dtb_firmware, adsp->dtb_firmware_name, adsp->dev);
++	if (pas->dtb_pas_id) {
++		ret = request_firmware(&pas->dtb_firmware, pas->dtb_firmware_name, pas->dev);
+ 		if (ret) {
+-			dev_err(adsp->dev, "request_firmware failed for %s: %d\n",
+-				adsp->dtb_firmware_name, ret);
++			dev_err(pas->dev, "request_firmware failed for %s: %d\n",
++				pas->dtb_firmware_name, ret);
+ 			return ret;
+ 		}
+ 
+-		ret = qcom_mdt_pas_init(adsp->dev, adsp->dtb_firmware, adsp->dtb_firmware_name,
+-					adsp->dtb_pas_id, adsp->dtb_mem_phys,
+-					&adsp->dtb_pas_metadata);
++		ret = qcom_mdt_pas_init(pas->dev, pas->dtb_firmware, pas->dtb_firmware_name,
++					pas->dtb_pas_id, pas->dtb_mem_phys,
++					&pas->dtb_pas_metadata);
+ 		if (ret)
+ 			goto release_dtb_firmware;
+ 
+-		ret = qcom_mdt_load_no_init(adsp->dev, adsp->dtb_firmware, adsp->dtb_firmware_name,
+-					    adsp->dtb_pas_id, adsp->dtb_mem_region,
+-					    adsp->dtb_mem_phys, adsp->dtb_mem_size,
+-					    &adsp->dtb_mem_reloc);
++		ret = qcom_mdt_load_no_init(pas->dev, pas->dtb_firmware, pas->dtb_firmware_name,
++					    pas->dtb_pas_id, pas->dtb_mem_region,
++					    pas->dtb_mem_phys, pas->dtb_mem_size,
++					    &pas->dtb_mem_reloc);
+ 		if (ret)
+ 			goto release_dtb_metadata;
+ 	}
+@@ -251,248 +252,246 @@ static int adsp_load(struct rproc *rproc, const struct firmware *fw)
+ 	return 0;
+ 
+ release_dtb_metadata:
+-	qcom_scm_pas_metadata_release(&adsp->dtb_pas_metadata);
++	qcom_scm_pas_metadata_release(&pas->dtb_pas_metadata);
+ 
+ release_dtb_firmware:
+-	release_firmware(adsp->dtb_firmware);
++	release_firmware(pas->dtb_firmware);
+ 
+ 	return ret;
+ }
+ 
+-static int adsp_start(struct rproc *rproc)
++static int qcom_pas_start(struct rproc *rproc)
+ {
+-	struct qcom_adsp *adsp = rproc->priv;
++	struct qcom_pas *pas = rproc->priv;
+ 	int ret;
+ 
+-	ret = qcom_q6v5_prepare(&adsp->q6v5);
++	ret = qcom_q6v5_prepare(&pas->q6v5);
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = adsp_pds_enable(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
++	ret = qcom_pas_pds_enable(pas, pas->proxy_pds, pas->proxy_pd_count);
+ 	if (ret < 0)
+ 		goto disable_irqs;
+ 
+-	ret = clk_prepare_enable(adsp->xo);
++	ret = clk_prepare_enable(pas->xo);
+ 	if (ret)
+ 		goto disable_proxy_pds;
+ 
+-	ret = clk_prepare_enable(adsp->aggre2_clk);
++	ret = clk_prepare_enable(pas->aggre2_clk);
+ 	if (ret)
+ 		goto disable_xo_clk;
+ 
+-	if (adsp->cx_supply) {
+-		ret = regulator_enable(adsp->cx_supply);
++	if (pas->cx_supply) {
++		ret = regulator_enable(pas->cx_supply);
+ 		if (ret)
+ 			goto disable_aggre2_clk;
+ 	}
+ 
+-	if (adsp->px_supply) {
+-		ret = regulator_enable(adsp->px_supply);
++	if (pas->px_supply) {
++		ret = regulator_enable(pas->px_supply);
+ 		if (ret)
+ 			goto disable_cx_supply;
+ 	}
+ 
+-	if (adsp->dtb_pas_id) {
+-		ret = qcom_scm_pas_auth_and_reset(adsp->dtb_pas_id);
++	if (pas->dtb_pas_id) {
++		ret = qcom_scm_pas_auth_and_reset(pas->dtb_pas_id);
+ 		if (ret) {
+-			dev_err(adsp->dev,
++			dev_err(pas->dev,
+ 				"failed to authenticate dtb image and release reset\n");
+ 			goto disable_px_supply;
+ 		}
+ 	}
+ 
+-	ret = qcom_mdt_pas_init(adsp->dev, adsp->firmware, rproc->firmware, adsp->pas_id,
+-				adsp->mem_phys, &adsp->pas_metadata);
++	ret = qcom_mdt_pas_init(pas->dev, pas->firmware, rproc->firmware, pas->pas_id,
++				pas->mem_phys, &pas->pas_metadata);
+ 	if (ret)
+ 		goto disable_px_supply;
+ 
+-	ret = qcom_mdt_load_no_init(adsp->dev, adsp->firmware, rproc->firmware, adsp->pas_id,
+-				    adsp->mem_region, adsp->mem_phys, adsp->mem_size,
+-				    &adsp->mem_reloc);
++	ret = qcom_mdt_load_no_init(pas->dev, pas->firmware, rproc->firmware, pas->pas_id,
++				    pas->mem_region, pas->mem_phys, pas->mem_size,
++				    &pas->mem_reloc);
+ 	if (ret)
+ 		goto release_pas_metadata;
+ 
+-	qcom_pil_info_store(adsp->info_name, adsp->mem_phys, adsp->mem_size);
++	qcom_pil_info_store(pas->info_name, pas->mem_phys, pas->mem_size);
+ 
+-	ret = qcom_scm_pas_auth_and_reset(adsp->pas_id);
++	ret = qcom_scm_pas_auth_and_reset(pas->pas_id);
+ 	if (ret) {
+-		dev_err(adsp->dev,
++		dev_err(pas->dev,
+ 			"failed to authenticate image and release reset\n");
+ 		goto release_pas_metadata;
+ 	}
+ 
+-	ret = qcom_q6v5_wait_for_start(&adsp->q6v5, msecs_to_jiffies(5000));
++	ret = qcom_q6v5_wait_for_start(&pas->q6v5, msecs_to_jiffies(5000));
+ 	if (ret == -ETIMEDOUT) {
+-		dev_err(adsp->dev, "start timed out\n");
+-		qcom_scm_pas_shutdown(adsp->pas_id);
++		dev_err(pas->dev, "start timed out\n");
++		qcom_scm_pas_shutdown(pas->pas_id);
+ 		goto release_pas_metadata;
+ 	}
+ 
+-	qcom_scm_pas_metadata_release(&adsp->pas_metadata);
+-	if (adsp->dtb_pas_id)
+-		qcom_scm_pas_metadata_release(&adsp->dtb_pas_metadata);
++	qcom_scm_pas_metadata_release(&pas->pas_metadata);
++	if (pas->dtb_pas_id)
++		qcom_scm_pas_metadata_release(&pas->dtb_pas_metadata);
+ 
+-	/* Remove pointer to the loaded firmware, only valid in adsp_load() & adsp_start() */
+-	adsp->firmware = NULL;
++	/* firmware is only used to pass a reference to qcom_pas_start(); drop it now */
++	pas->firmware = NULL;
+ 
+ 	return 0;
+ 
+ release_pas_metadata:
+-	qcom_scm_pas_metadata_release(&adsp->pas_metadata);
+-	if (adsp->dtb_pas_id)
+-		qcom_scm_pas_metadata_release(&adsp->dtb_pas_metadata);
++	qcom_scm_pas_metadata_release(&pas->pas_metadata);
++	if (pas->dtb_pas_id)
++		qcom_scm_pas_metadata_release(&pas->dtb_pas_metadata);
+ disable_px_supply:
+-	if (adsp->px_supply)
+-		regulator_disable(adsp->px_supply);
++	if (pas->px_supply)
++		regulator_disable(pas->px_supply);
+ disable_cx_supply:
+-	if (adsp->cx_supply)
+-		regulator_disable(adsp->cx_supply);
++	if (pas->cx_supply)
++		regulator_disable(pas->cx_supply);
+ disable_aggre2_clk:
+-	clk_disable_unprepare(adsp->aggre2_clk);
++	clk_disable_unprepare(pas->aggre2_clk);
+ disable_xo_clk:
+-	clk_disable_unprepare(adsp->xo);
++	clk_disable_unprepare(pas->xo);
+ disable_proxy_pds:
+-	adsp_pds_disable(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
++	qcom_pas_pds_disable(pas, pas->proxy_pds, pas->proxy_pd_count);
+ disable_irqs:
+-	qcom_q6v5_unprepare(&adsp->q6v5);
++	qcom_q6v5_unprepare(&pas->q6v5);
+ 
+-	/* Remove pointer to the loaded firmware, only valid in adsp_load() & adsp_start() */
+-	adsp->firmware = NULL;
++	/* firmware is used to pass reference from qcom_pas_start(), drop it now */
++	pas->firmware = NULL;
+ 
+ 	return ret;
+ }
+ 
+ static void qcom_pas_handover(struct qcom_q6v5 *q6v5)
+ {
+-	struct qcom_adsp *adsp = container_of(q6v5, struct qcom_adsp, q6v5);
+-
+-	if (adsp->px_supply)
+-		regulator_disable(adsp->px_supply);
+-	if (adsp->cx_supply)
+-		regulator_disable(adsp->cx_supply);
+-	clk_disable_unprepare(adsp->aggre2_clk);
+-	clk_disable_unprepare(adsp->xo);
+-	adsp_pds_disable(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
++	struct qcom_pas *pas = container_of(q6v5, struct qcom_pas, q6v5);
++
++	if (pas->px_supply)
++		regulator_disable(pas->px_supply);
++	if (pas->cx_supply)
++		regulator_disable(pas->cx_supply);
++	clk_disable_unprepare(pas->aggre2_clk);
++	clk_disable_unprepare(pas->xo);
++	qcom_pas_pds_disable(pas, pas->proxy_pds, pas->proxy_pd_count);
+ }
+ 
+-static int adsp_stop(struct rproc *rproc)
++static int qcom_pas_stop(struct rproc *rproc)
+ {
+-	struct qcom_adsp *adsp = rproc->priv;
++	struct qcom_pas *pas = rproc->priv;
+ 	int handover;
+ 	int ret;
+ 
+-	ret = qcom_q6v5_request_stop(&adsp->q6v5, adsp->sysmon);
++	ret = qcom_q6v5_request_stop(&pas->q6v5, pas->sysmon);
+ 	if (ret == -ETIMEDOUT)
+-		dev_err(adsp->dev, "timed out on wait\n");
++		dev_err(pas->dev, "timed out on wait\n");
+ 
+-	ret = qcom_scm_pas_shutdown(adsp->pas_id);
+-	if (ret && adsp->decrypt_shutdown)
+-		ret = adsp_shutdown_poll_decrypt(adsp);
++	ret = qcom_scm_pas_shutdown(pas->pas_id);
++	if (ret && pas->decrypt_shutdown)
++		ret = qcom_pas_shutdown_poll_decrypt(pas);
+ 
+ 	if (ret)
+-		dev_err(adsp->dev, "failed to shutdown: %d\n", ret);
++		dev_err(pas->dev, "failed to shutdown: %d\n", ret);
+ 
+-	if (adsp->dtb_pas_id) {
+-		ret = qcom_scm_pas_shutdown(adsp->dtb_pas_id);
++	if (pas->dtb_pas_id) {
++		ret = qcom_scm_pas_shutdown(pas->dtb_pas_id);
+ 		if (ret)
+-			dev_err(adsp->dev, "failed to shutdown dtb: %d\n", ret);
++			dev_err(pas->dev, "failed to shutdown dtb: %d\n", ret);
+ 	}
+ 
+-	handover = qcom_q6v5_unprepare(&adsp->q6v5);
++	handover = qcom_q6v5_unprepare(&pas->q6v5);
+ 	if (handover)
+-		qcom_pas_handover(&adsp->q6v5);
++		qcom_pas_handover(&pas->q6v5);
+ 
+-	if (adsp->smem_host_id)
+-		ret = qcom_smem_bust_hwspin_lock_by_host(adsp->smem_host_id);
++	if (pas->smem_host_id)
++		ret = qcom_smem_bust_hwspin_lock_by_host(pas->smem_host_id);
+ 
+ 	return ret;
+ }
+ 
+-static void *adsp_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem)
++static void *qcom_pas_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem)
+ {
+-	struct qcom_adsp *adsp = rproc->priv;
++	struct qcom_pas *pas = rproc->priv;
+ 	int offset;
+ 
+-	offset = da - adsp->mem_reloc;
+-	if (offset < 0 || offset + len > adsp->mem_size)
++	offset = da - pas->mem_reloc;
++	if (offset < 0 || offset + len > pas->mem_size)
+ 		return NULL;
+ 
+ 	if (is_iomem)
+ 		*is_iomem = true;
+ 
+-	return adsp->mem_region + offset;
++	return pas->mem_region + offset;
+ }
+ 
+-static unsigned long adsp_panic(struct rproc *rproc)
++static unsigned long qcom_pas_panic(struct rproc *rproc)
+ {
+-	struct qcom_adsp *adsp = rproc->priv;
++	struct qcom_pas *pas = rproc->priv;
+ 
+-	return qcom_q6v5_panic(&adsp->q6v5);
++	return qcom_q6v5_panic(&pas->q6v5);
+ }
+ 
+-static const struct rproc_ops adsp_ops = {
+-	.unprepare = adsp_unprepare,
+-	.start = adsp_start,
+-	.stop = adsp_stop,
+-	.da_to_va = adsp_da_to_va,
++static const struct rproc_ops qcom_pas_ops = {
++	.unprepare = qcom_pas_unprepare,
++	.start = qcom_pas_start,
++	.stop = qcom_pas_stop,
++	.da_to_va = qcom_pas_da_to_va,
+ 	.parse_fw = qcom_register_dump_segments,
+-	.load = adsp_load,
+-	.panic = adsp_panic,
++	.load = qcom_pas_load,
++	.panic = qcom_pas_panic,
+ };
+ 
+-static const struct rproc_ops adsp_minidump_ops = {
+-	.unprepare = adsp_unprepare,
+-	.start = adsp_start,
+-	.stop = adsp_stop,
+-	.da_to_va = adsp_da_to_va,
++static const struct rproc_ops qcom_pas_minidump_ops = {
++	.unprepare = qcom_pas_unprepare,
++	.start = qcom_pas_start,
++	.stop = qcom_pas_stop,
++	.da_to_va = qcom_pas_da_to_va,
+ 	.parse_fw = qcom_register_dump_segments,
+-	.load = adsp_load,
+-	.panic = adsp_panic,
+-	.coredump = adsp_minidump,
++	.load = qcom_pas_load,
++	.panic = qcom_pas_panic,
++	.coredump = qcom_pas_minidump,
+ };
+ 
+-static int adsp_init_clock(struct qcom_adsp *adsp)
++static int qcom_pas_init_clock(struct qcom_pas *pas)
+ {
+-	adsp->xo = devm_clk_get(adsp->dev, "xo");
+-	if (IS_ERR(adsp->xo))
+-		return dev_err_probe(adsp->dev, PTR_ERR(adsp->xo),
++	pas->xo = devm_clk_get(pas->dev, "xo");
++	if (IS_ERR(pas->xo))
++		return dev_err_probe(pas->dev, PTR_ERR(pas->xo),
+ 				     "failed to get xo clock");
+ 
+-
+-	adsp->aggre2_clk = devm_clk_get_optional(adsp->dev, "aggre2");
+-	if (IS_ERR(adsp->aggre2_clk))
+-		return dev_err_probe(adsp->dev, PTR_ERR(adsp->aggre2_clk),
++	pas->aggre2_clk = devm_clk_get_optional(pas->dev, "aggre2");
++	if (IS_ERR(pas->aggre2_clk))
++		return dev_err_probe(pas->dev, PTR_ERR(pas->aggre2_clk),
+ 				     "failed to get aggre2 clock");
+ 
+ 	return 0;
+ }
+ 
+-static int adsp_init_regulator(struct qcom_adsp *adsp)
++static int qcom_pas_init_regulator(struct qcom_pas *pas)
+ {
+-	adsp->cx_supply = devm_regulator_get_optional(adsp->dev, "cx");
+-	if (IS_ERR(adsp->cx_supply)) {
+-		if (PTR_ERR(adsp->cx_supply) == -ENODEV)
+-			adsp->cx_supply = NULL;
++	pas->cx_supply = devm_regulator_get_optional(pas->dev, "cx");
++	if (IS_ERR(pas->cx_supply)) {
++		if (PTR_ERR(pas->cx_supply) == -ENODEV)
++			pas->cx_supply = NULL;
+ 		else
+-			return PTR_ERR(adsp->cx_supply);
++			return PTR_ERR(pas->cx_supply);
+ 	}
+ 
+-	if (adsp->cx_supply)
+-		regulator_set_load(adsp->cx_supply, 100000);
++	if (pas->cx_supply)
++		regulator_set_load(pas->cx_supply, 100000);
+ 
+-	adsp->px_supply = devm_regulator_get_optional(adsp->dev, "px");
+-	if (IS_ERR(adsp->px_supply)) {
+-		if (PTR_ERR(adsp->px_supply) == -ENODEV)
+-			adsp->px_supply = NULL;
++	pas->px_supply = devm_regulator_get_optional(pas->dev, "px");
++	if (IS_ERR(pas->px_supply)) {
++		if (PTR_ERR(pas->px_supply) == -ENODEV)
++			pas->px_supply = NULL;
+ 		else
+-			return PTR_ERR(adsp->px_supply);
++			return PTR_ERR(pas->px_supply);
+ 	}
+ 
+ 	return 0;
+ }
+ 
+-static int adsp_pds_attach(struct device *dev, struct device **devs,
+-			   char **pd_names)
++static int qcom_pas_pds_attach(struct device *dev, struct device **devs, char **pd_names)
+ {
+ 	size_t num_pds = 0;
+ 	int ret;
+@@ -528,10 +527,9 @@ static int adsp_pds_attach(struct device *dev, struct device **devs,
+ 	return ret;
+ };
+ 
+-static void adsp_pds_detach(struct qcom_adsp *adsp, struct device **pds,
+-			    size_t pd_count)
++static void qcom_pas_pds_detach(struct qcom_pas *pas, struct device **pds, size_t pd_count)
+ {
+-	struct device *dev = adsp->dev;
++	struct device *dev = pas->dev;
+ 	int i;
+ 
+ 	/* Handle single power domain */
+@@ -544,62 +542,62 @@ static void adsp_pds_detach(struct qcom_adsp *adsp, struct device **pds,
+ 		dev_pm_domain_detach(pds[i], false);
+ }
+ 
+-static int adsp_alloc_memory_region(struct qcom_adsp *adsp)
++static int qcom_pas_alloc_memory_region(struct qcom_pas *pas)
+ {
+ 	struct reserved_mem *rmem;
+ 	struct device_node *node;
+ 
+-	node = of_parse_phandle(adsp->dev->of_node, "memory-region", 0);
++	node = of_parse_phandle(pas->dev->of_node, "memory-region", 0);
+ 	if (!node) {
+-		dev_err(adsp->dev, "no memory-region specified\n");
++		dev_err(pas->dev, "no memory-region specified\n");
+ 		return -EINVAL;
+ 	}
+ 
+ 	rmem = of_reserved_mem_lookup(node);
+ 	of_node_put(node);
+ 	if (!rmem) {
+-		dev_err(adsp->dev, "unable to resolve memory-region\n");
++		dev_err(pas->dev, "unable to resolve memory-region\n");
+ 		return -EINVAL;
+ 	}
+ 
+-	adsp->mem_phys = adsp->mem_reloc = rmem->base;
+-	adsp->mem_size = rmem->size;
+-	adsp->mem_region = devm_ioremap_wc(adsp->dev, adsp->mem_phys, adsp->mem_size);
+-	if (!adsp->mem_region) {
+-		dev_err(adsp->dev, "unable to map memory region: %pa+%zx\n",
+-			&rmem->base, adsp->mem_size);
++	pas->mem_phys = pas->mem_reloc = rmem->base;
++	pas->mem_size = rmem->size;
++	pas->mem_region = devm_ioremap_wc(pas->dev, pas->mem_phys, pas->mem_size);
++	if (!pas->mem_region) {
++		dev_err(pas->dev, "unable to map memory region: %pa+%zx\n",
++			&rmem->base, pas->mem_size);
+ 		return -EBUSY;
+ 	}
+ 
+-	if (!adsp->dtb_pas_id)
++	if (!pas->dtb_pas_id)
+ 		return 0;
+ 
+-	node = of_parse_phandle(adsp->dev->of_node, "memory-region", 1);
++	node = of_parse_phandle(pas->dev->of_node, "memory-region", 1);
+ 	if (!node) {
+-		dev_err(adsp->dev, "no dtb memory-region specified\n");
++		dev_err(pas->dev, "no dtb memory-region specified\n");
+ 		return -EINVAL;
+ 	}
+ 
+ 	rmem = of_reserved_mem_lookup(node);
+ 	of_node_put(node);
+ 	if (!rmem) {
+-		dev_err(adsp->dev, "unable to resolve dtb memory-region\n");
++		dev_err(pas->dev, "unable to resolve dtb memory-region\n");
+ 		return -EINVAL;
+ 	}
+ 
+-	adsp->dtb_mem_phys = adsp->dtb_mem_reloc = rmem->base;
+-	adsp->dtb_mem_size = rmem->size;
+-	adsp->dtb_mem_region = devm_ioremap_wc(adsp->dev, adsp->dtb_mem_phys, adsp->dtb_mem_size);
+-	if (!adsp->dtb_mem_region) {
+-		dev_err(adsp->dev, "unable to map dtb memory region: %pa+%zx\n",
+-			&rmem->base, adsp->dtb_mem_size);
++	pas->dtb_mem_phys = pas->dtb_mem_reloc = rmem->base;
++	pas->dtb_mem_size = rmem->size;
++	pas->dtb_mem_region = devm_ioremap_wc(pas->dev, pas->dtb_mem_phys, pas->dtb_mem_size);
++	if (!pas->dtb_mem_region) {
++		dev_err(pas->dev, "unable to map dtb memory region: %pa+%zx\n",
++			&rmem->base, pas->dtb_mem_size);
+ 		return -EBUSY;
+ 	}
+ 
+ 	return 0;
+ }
+ 
+-static int adsp_assign_memory_region(struct qcom_adsp *adsp)
++static int qcom_pas_assign_memory_region(struct qcom_pas *pas)
+ {
+ 	struct qcom_scm_vmperm perm[MAX_ASSIGN_COUNT];
+ 	struct device_node *node;
+@@ -607,45 +605,45 @@ static int adsp_assign_memory_region(struct qcom_adsp *adsp)
+ 	int offset;
+ 	int ret;
+ 
+-	if (!adsp->region_assign_idx)
++	if (!pas->region_assign_idx)
+ 		return 0;
+ 
+-	for (offset = 0; offset < adsp->region_assign_count; ++offset) {
++	for (offset = 0; offset < pas->region_assign_count; ++offset) {
+ 		struct reserved_mem *rmem = NULL;
+ 
+-		node = of_parse_phandle(adsp->dev->of_node, "memory-region",
+-					adsp->region_assign_idx + offset);
++		node = of_parse_phandle(pas->dev->of_node, "memory-region",
++					pas->region_assign_idx + offset);
+ 		if (node)
+ 			rmem = of_reserved_mem_lookup(node);
+ 		of_node_put(node);
+ 		if (!rmem) {
+-			dev_err(adsp->dev, "unable to resolve shareable memory-region index %d\n",
++			dev_err(pas->dev, "unable to resolve shareable memory-region index %d\n",
+ 				offset);
+ 			return -EINVAL;
+ 		}
+ 
+-		if (adsp->region_assign_shared)  {
++		if (pas->region_assign_shared)  {
+ 			perm[0].vmid = QCOM_SCM_VMID_HLOS;
+ 			perm[0].perm = QCOM_SCM_PERM_RW;
+-			perm[1].vmid = adsp->region_assign_vmid;
++			perm[1].vmid = pas->region_assign_vmid;
+ 			perm[1].perm = QCOM_SCM_PERM_RW;
+ 			perm_size = 2;
+ 		} else {
+-			perm[0].vmid = adsp->region_assign_vmid;
++			perm[0].vmid = pas->region_assign_vmid;
+ 			perm[0].perm = QCOM_SCM_PERM_RW;
+ 			perm_size = 1;
+ 		}
+ 
+-		adsp->region_assign_phys[offset] = rmem->base;
+-		adsp->region_assign_size[offset] = rmem->size;
+-		adsp->region_assign_owners[offset] = BIT(QCOM_SCM_VMID_HLOS);
++		pas->region_assign_phys[offset] = rmem->base;
++		pas->region_assign_size[offset] = rmem->size;
++		pas->region_assign_owners[offset] = BIT(QCOM_SCM_VMID_HLOS);
+ 
+-		ret = qcom_scm_assign_mem(adsp->region_assign_phys[offset],
+-					  adsp->region_assign_size[offset],
+-					  &adsp->region_assign_owners[offset],
++		ret = qcom_scm_assign_mem(pas->region_assign_phys[offset],
++					  pas->region_assign_size[offset],
++					  &pas->region_assign_owners[offset],
+ 					  perm, perm_size);
+ 		if (ret < 0) {
+-			dev_err(adsp->dev, "assign memory %d failed\n", offset);
++			dev_err(pas->dev, "assign memory %d failed\n", offset);
+ 			return ret;
+ 		}
+ 	}
+@@ -653,35 +651,35 @@ static int adsp_assign_memory_region(struct qcom_adsp *adsp)
+ 	return 0;
+ }
+ 
+-static void adsp_unassign_memory_region(struct qcom_adsp *adsp)
++static void qcom_pas_unassign_memory_region(struct qcom_pas *pas)
+ {
+ 	struct qcom_scm_vmperm perm;
+ 	int offset;
+ 	int ret;
+ 
+-	if (!adsp->region_assign_idx || adsp->region_assign_shared)
++	if (!pas->region_assign_idx || pas->region_assign_shared)
+ 		return;
+ 
+-	for (offset = 0; offset < adsp->region_assign_count; ++offset) {
++	for (offset = 0; offset < pas->region_assign_count; ++offset) {
+ 		perm.vmid = QCOM_SCM_VMID_HLOS;
+ 		perm.perm = QCOM_SCM_PERM_RW;
+ 
+-		ret = qcom_scm_assign_mem(adsp->region_assign_phys[offset],
+-					  adsp->region_assign_size[offset],
+-					  &adsp->region_assign_owners[offset],
++		ret = qcom_scm_assign_mem(pas->region_assign_phys[offset],
++					  pas->region_assign_size[offset],
++					  &pas->region_assign_owners[offset],
+ 					  &perm, 1);
+ 		if (ret < 0)
+-			dev_err(adsp->dev, "unassign memory %d failed\n", offset);
++			dev_err(pas->dev, "unassign memory %d failed\n", offset);
+ 	}
+ }
+ 
+-static int adsp_probe(struct platform_device *pdev)
++static int qcom_pas_probe(struct platform_device *pdev)
+ {
+-	const struct adsp_data *desc;
+-	struct qcom_adsp *adsp;
++	const struct qcom_pas_data *desc;
++	struct qcom_pas *pas;
+ 	struct rproc *rproc;
+ 	const char *fw_name, *dtb_fw_name = NULL;
+-	const struct rproc_ops *ops = &adsp_ops;
++	const struct rproc_ops *ops = &qcom_pas_ops;
+ 	int ret;
+ 
+ 	desc = of_device_get_match_data(&pdev->dev);
+@@ -706,9 +704,9 @@ static int adsp_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	if (desc->minidump_id)
+-		ops = &adsp_minidump_ops;
++		ops = &qcom_pas_minidump_ops;
+ 
+-	rproc = devm_rproc_alloc(&pdev->dev, desc->sysmon_name, ops, fw_name, sizeof(*adsp));
++	rproc = devm_rproc_alloc(&pdev->dev, desc->sysmon_name, ops, fw_name, sizeof(*pas));
+ 
+ 	if (!rproc) {
+ 		dev_err(&pdev->dev, "unable to allocate remoteproc\n");
+@@ -718,68 +716,65 @@ static int adsp_probe(struct platform_device *pdev)
+ 	rproc->auto_boot = desc->auto_boot;
+ 	rproc_coredump_set_elf_info(rproc, ELFCLASS32, EM_NONE);
+ 
+-	adsp = rproc->priv;
+-	adsp->dev = &pdev->dev;
+-	adsp->rproc = rproc;
+-	adsp->minidump_id = desc->minidump_id;
+-	adsp->pas_id = desc->pas_id;
+-	adsp->lite_pas_id = desc->lite_pas_id;
+-	adsp->info_name = desc->sysmon_name;
+-	adsp->smem_host_id = desc->smem_host_id;
+-	adsp->decrypt_shutdown = desc->decrypt_shutdown;
+-	adsp->region_assign_idx = desc->region_assign_idx;
+-	adsp->region_assign_count = min_t(int, MAX_ASSIGN_COUNT, desc->region_assign_count);
+-	adsp->region_assign_vmid = desc->region_assign_vmid;
+-	adsp->region_assign_shared = desc->region_assign_shared;
++	pas = rproc->priv;
++	pas->dev = &pdev->dev;
++	pas->rproc = rproc;
++	pas->minidump_id = desc->minidump_id;
++	pas->pas_id = desc->pas_id;
++	pas->lite_pas_id = desc->lite_pas_id;
++	pas->info_name = desc->sysmon_name;
++	pas->smem_host_id = desc->smem_host_id;
++	pas->decrypt_shutdown = desc->decrypt_shutdown;
++	pas->region_assign_idx = desc->region_assign_idx;
++	pas->region_assign_count = min_t(int, MAX_ASSIGN_COUNT, desc->region_assign_count);
++	pas->region_assign_vmid = desc->region_assign_vmid;
++	pas->region_assign_shared = desc->region_assign_shared;
+ 	if (dtb_fw_name) {
+-		adsp->dtb_firmware_name = dtb_fw_name;
+-		adsp->dtb_pas_id = desc->dtb_pas_id;
++		pas->dtb_firmware_name = dtb_fw_name;
++		pas->dtb_pas_id = desc->dtb_pas_id;
+ 	}
+-	platform_set_drvdata(pdev, adsp);
++	platform_set_drvdata(pdev, pas);
+ 
+-	ret = device_init_wakeup(adsp->dev, true);
++	ret = device_init_wakeup(pas->dev, true);
+ 	if (ret)
+ 		goto free_rproc;
+ 
+-	ret = adsp_alloc_memory_region(adsp);
++	ret = qcom_pas_alloc_memory_region(pas);
+ 	if (ret)
+ 		goto free_rproc;
+ 
+-	ret = adsp_assign_memory_region(adsp);
++	ret = qcom_pas_assign_memory_region(pas);
+ 	if (ret)
+ 		goto free_rproc;
+ 
+-	ret = adsp_init_clock(adsp);
++	ret = qcom_pas_init_clock(pas);
+ 	if (ret)
+ 		goto unassign_mem;
+ 
+-	ret = adsp_init_regulator(adsp);
++	ret = qcom_pas_init_regulator(pas);
+ 	if (ret)
+ 		goto unassign_mem;
+ 
+-	ret = adsp_pds_attach(&pdev->dev, adsp->proxy_pds,
+-			      desc->proxy_pd_names);
++	ret = qcom_pas_pds_attach(&pdev->dev, pas->proxy_pds, desc->proxy_pd_names);
+ 	if (ret < 0)
+ 		goto unassign_mem;
+-	adsp->proxy_pd_count = ret;
++	pas->proxy_pd_count = ret;
+ 
+-	ret = qcom_q6v5_init(&adsp->q6v5, pdev, rproc, desc->crash_reason_smem, desc->load_state,
+-			     qcom_pas_handover);
++	ret = qcom_q6v5_init(&pas->q6v5, pdev, rproc, desc->crash_reason_smem,
++			     desc->load_state, qcom_pas_handover);
+ 	if (ret)
+ 		goto detach_proxy_pds;
+ 
+-	qcom_add_glink_subdev(rproc, &adsp->glink_subdev, desc->ssr_name);
+-	qcom_add_smd_subdev(rproc, &adsp->smd_subdev);
+-	qcom_add_pdm_subdev(rproc, &adsp->pdm_subdev);
+-	adsp->sysmon = qcom_add_sysmon_subdev(rproc,
+-					      desc->sysmon_name,
+-					      desc->ssctl_id);
+-	if (IS_ERR(adsp->sysmon)) {
+-		ret = PTR_ERR(adsp->sysmon);
++	qcom_add_glink_subdev(rproc, &pas->glink_subdev, desc->ssr_name);
++	qcom_add_smd_subdev(rproc, &pas->smd_subdev);
++	qcom_add_pdm_subdev(rproc, &pas->pdm_subdev);
++	pas->sysmon = qcom_add_sysmon_subdev(rproc, desc->sysmon_name, desc->ssctl_id);
++	if (IS_ERR(pas->sysmon)) {
++		ret = PTR_ERR(pas->sysmon);
+ 		goto deinit_remove_pdm_smd_glink;
+ 	}
+ 
+-	qcom_add_ssr_subdev(rproc, &adsp->ssr_subdev, desc->ssr_name);
++	qcom_add_ssr_subdev(rproc, &pas->ssr_subdev, desc->ssr_name);
+ 	ret = rproc_add(rproc);
+ 	if (ret)
+ 		goto remove_ssr_sysmon;
+@@ -787,41 +782,41 @@ static int adsp_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ remove_ssr_sysmon:
+-	qcom_remove_ssr_subdev(rproc, &adsp->ssr_subdev);
+-	qcom_remove_sysmon_subdev(adsp->sysmon);
++	qcom_remove_ssr_subdev(rproc, &pas->ssr_subdev);
++	qcom_remove_sysmon_subdev(pas->sysmon);
+ deinit_remove_pdm_smd_glink:
+-	qcom_remove_pdm_subdev(rproc, &adsp->pdm_subdev);
+-	qcom_remove_smd_subdev(rproc, &adsp->smd_subdev);
+-	qcom_remove_glink_subdev(rproc, &adsp->glink_subdev);
+-	qcom_q6v5_deinit(&adsp->q6v5);
++	qcom_remove_pdm_subdev(rproc, &pas->pdm_subdev);
++	qcom_remove_smd_subdev(rproc, &pas->smd_subdev);
++	qcom_remove_glink_subdev(rproc, &pas->glink_subdev);
++	qcom_q6v5_deinit(&pas->q6v5);
+ detach_proxy_pds:
+-	adsp_pds_detach(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
++	qcom_pas_pds_detach(pas, pas->proxy_pds, pas->proxy_pd_count);
+ unassign_mem:
+-	adsp_unassign_memory_region(adsp);
++	qcom_pas_unassign_memory_region(pas);
+ free_rproc:
+-	device_init_wakeup(adsp->dev, false);
++	device_init_wakeup(pas->dev, false);
+ 
+ 	return ret;
+ }
+ 
+-static void adsp_remove(struct platform_device *pdev)
++static void qcom_pas_remove(struct platform_device *pdev)
+ {
+-	struct qcom_adsp *adsp = platform_get_drvdata(pdev);
+-
+-	rproc_del(adsp->rproc);
+-
+-	qcom_q6v5_deinit(&adsp->q6v5);
+-	adsp_unassign_memory_region(adsp);
+-	qcom_remove_glink_subdev(adsp->rproc, &adsp->glink_subdev);
+-	qcom_remove_sysmon_subdev(adsp->sysmon);
+-	qcom_remove_smd_subdev(adsp->rproc, &adsp->smd_subdev);
+-	qcom_remove_pdm_subdev(adsp->rproc, &adsp->pdm_subdev);
+-	qcom_remove_ssr_subdev(adsp->rproc, &adsp->ssr_subdev);
+-	adsp_pds_detach(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
+-	device_init_wakeup(adsp->dev, false);
++	struct qcom_pas *pas = platform_get_drvdata(pdev);
++
++	rproc_del(pas->rproc);
++
++	qcom_q6v5_deinit(&pas->q6v5);
++	qcom_pas_unassign_memory_region(pas);
++	qcom_remove_glink_subdev(pas->rproc, &pas->glink_subdev);
++	qcom_remove_sysmon_subdev(pas->sysmon);
++	qcom_remove_smd_subdev(pas->rproc, &pas->smd_subdev);
++	qcom_remove_pdm_subdev(pas->rproc, &pas->pdm_subdev);
++	qcom_remove_ssr_subdev(pas->rproc, &pas->ssr_subdev);
++	qcom_pas_pds_detach(pas, pas->proxy_pds, pas->proxy_pd_count);
++	device_init_wakeup(pas->dev, false);
+ }
+ 
+-static const struct adsp_data adsp_resource_init = {
++static const struct qcom_pas_data adsp_resource_init = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.pas_id = 1,
+@@ -831,7 +826,7 @@ static const struct adsp_data adsp_resource_init = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data sa8775p_adsp_resource = {
++static const struct qcom_pas_data sa8775p_adsp_resource = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mbn",
+ 	.pas_id = 1,
+@@ -848,7 +843,7 @@ static const struct adsp_data sa8775p_adsp_resource = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data sdm845_adsp_resource_init = {
++static const struct qcom_pas_data sdm845_adsp_resource_init = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.pas_id = 1,
+@@ -859,7 +854,7 @@ static const struct adsp_data sdm845_adsp_resource_init = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data sm6350_adsp_resource = {
++static const struct qcom_pas_data sm6350_adsp_resource = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.pas_id = 1,
+@@ -875,7 +870,7 @@ static const struct adsp_data sm6350_adsp_resource = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data sm6375_mpss_resource = {
++static const struct qcom_pas_data sm6375_mpss_resource = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "modem.mdt",
+ 	.pas_id = 4,
+@@ -890,7 +885,7 @@ static const struct adsp_data sm6375_mpss_resource = {
+ 	.ssctl_id = 0x12,
+ };
+ 
+-static const struct adsp_data sm8150_adsp_resource = {
++static const struct qcom_pas_data sm8150_adsp_resource = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.pas_id = 1,
+@@ -905,7 +900,7 @@ static const struct adsp_data sm8150_adsp_resource = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data sm8250_adsp_resource = {
++static const struct qcom_pas_data sm8250_adsp_resource = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.pas_id = 1,
+@@ -922,7 +917,7 @@ static const struct adsp_data sm8250_adsp_resource = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data sm8350_adsp_resource = {
++static const struct qcom_pas_data sm8350_adsp_resource = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.pas_id = 1,
+@@ -938,7 +933,7 @@ static const struct adsp_data sm8350_adsp_resource = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data msm8996_adsp_resource = {
++static const struct qcom_pas_data msm8996_adsp_resource = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.pas_id = 1,
+@@ -952,7 +947,7 @@ static const struct adsp_data msm8996_adsp_resource = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data cdsp_resource_init = {
++static const struct qcom_pas_data cdsp_resource_init = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.pas_id = 18,
+@@ -962,7 +957,7 @@ static const struct adsp_data cdsp_resource_init = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sa8775p_cdsp0_resource = {
++static const struct qcom_pas_data sa8775p_cdsp0_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp0.mbn",
+ 	.pas_id = 18,
+@@ -980,7 +975,7 @@ static const struct adsp_data sa8775p_cdsp0_resource = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sa8775p_cdsp1_resource = {
++static const struct qcom_pas_data sa8775p_cdsp1_resource = {
+ 	.crash_reason_smem = 633,
+ 	.firmware_name = "cdsp1.mbn",
+ 	.pas_id = 30,
+@@ -998,7 +993,7 @@ static const struct adsp_data sa8775p_cdsp1_resource = {
+ 	.ssctl_id = 0x20,
+ };
+ 
+-static const struct adsp_data sdm845_cdsp_resource_init = {
++static const struct qcom_pas_data sdm845_cdsp_resource_init = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.pas_id = 18,
+@@ -1009,7 +1004,7 @@ static const struct adsp_data sdm845_cdsp_resource_init = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sm6350_cdsp_resource = {
++static const struct qcom_pas_data sm6350_cdsp_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.pas_id = 18,
+@@ -1025,7 +1020,7 @@ static const struct adsp_data sm6350_cdsp_resource = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sm8150_cdsp_resource = {
++static const struct qcom_pas_data sm8150_cdsp_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.pas_id = 18,
+@@ -1040,7 +1035,7 @@ static const struct adsp_data sm8150_cdsp_resource = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sm8250_cdsp_resource = {
++static const struct qcom_pas_data sm8250_cdsp_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.pas_id = 18,
+@@ -1055,7 +1050,7 @@ static const struct adsp_data sm8250_cdsp_resource = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sc8280xp_nsp0_resource = {
++static const struct qcom_pas_data sc8280xp_nsp0_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.pas_id = 18,
+@@ -1069,7 +1064,7 @@ static const struct adsp_data sc8280xp_nsp0_resource = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sc8280xp_nsp1_resource = {
++static const struct qcom_pas_data sc8280xp_nsp1_resource = {
+ 	.crash_reason_smem = 633,
+ 	.firmware_name = "cdsp.mdt",
+ 	.pas_id = 30,
+@@ -1083,7 +1078,7 @@ static const struct adsp_data sc8280xp_nsp1_resource = {
+ 	.ssctl_id = 0x20,
+ };
+ 
+-static const struct adsp_data x1e80100_adsp_resource = {
++static const struct qcom_pas_data x1e80100_adsp_resource = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.dtb_firmware_name = "adsp_dtb.mdt",
+@@ -1103,7 +1098,7 @@ static const struct adsp_data x1e80100_adsp_resource = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data x1e80100_cdsp_resource = {
++static const struct qcom_pas_data x1e80100_cdsp_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.dtb_firmware_name = "cdsp_dtb.mdt",
+@@ -1123,7 +1118,7 @@ static const struct adsp_data x1e80100_cdsp_resource = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sm8350_cdsp_resource = {
++static const struct qcom_pas_data sm8350_cdsp_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.pas_id = 18,
+@@ -1140,7 +1135,7 @@ static const struct adsp_data sm8350_cdsp_resource = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sa8775p_gpdsp0_resource = {
++static const struct qcom_pas_data sa8775p_gpdsp0_resource = {
+ 	.crash_reason_smem = 640,
+ 	.firmware_name = "gpdsp0.mbn",
+ 	.pas_id = 39,
+@@ -1157,7 +1152,7 @@ static const struct adsp_data sa8775p_gpdsp0_resource = {
+ 	.ssctl_id = 0x21,
+ };
+ 
+-static const struct adsp_data sa8775p_gpdsp1_resource = {
++static const struct qcom_pas_data sa8775p_gpdsp1_resource = {
+ 	.crash_reason_smem = 641,
+ 	.firmware_name = "gpdsp1.mbn",
+ 	.pas_id = 40,
+@@ -1174,7 +1169,7 @@ static const struct adsp_data sa8775p_gpdsp1_resource = {
+ 	.ssctl_id = 0x22,
+ };
+ 
+-static const struct adsp_data mpss_resource_init = {
++static const struct qcom_pas_data mpss_resource_init = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "modem.mdt",
+ 	.pas_id = 4,
+@@ -1191,7 +1186,7 @@ static const struct adsp_data mpss_resource_init = {
+ 	.ssctl_id = 0x12,
+ };
+ 
+-static const struct adsp_data sc8180x_mpss_resource = {
++static const struct qcom_pas_data sc8180x_mpss_resource = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "modem.mdt",
+ 	.pas_id = 4,
+@@ -1206,7 +1201,7 @@ static const struct adsp_data sc8180x_mpss_resource = {
+ 	.ssctl_id = 0x12,
+ };
+ 
+-static const struct adsp_data msm8996_slpi_resource_init = {
++static const struct qcom_pas_data msm8996_slpi_resource_init = {
+ 	.crash_reason_smem = 424,
+ 	.firmware_name = "slpi.mdt",
+ 	.pas_id = 12,
+@@ -1220,7 +1215,7 @@ static const struct adsp_data msm8996_slpi_resource_init = {
+ 	.ssctl_id = 0x16,
+ };
+ 
+-static const struct adsp_data sdm845_slpi_resource_init = {
++static const struct qcom_pas_data sdm845_slpi_resource_init = {
+ 	.crash_reason_smem = 424,
+ 	.firmware_name = "slpi.mdt",
+ 	.pas_id = 12,
+@@ -1236,7 +1231,7 @@ static const struct adsp_data sdm845_slpi_resource_init = {
+ 	.ssctl_id = 0x16,
+ };
+ 
+-static const struct adsp_data wcss_resource_init = {
++static const struct qcom_pas_data wcss_resource_init = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "wcnss.mdt",
+ 	.pas_id = 6,
+@@ -1246,7 +1241,7 @@ static const struct adsp_data wcss_resource_init = {
+ 	.ssctl_id = 0x12,
+ };
+ 
+-static const struct adsp_data sdx55_mpss_resource = {
++static const struct qcom_pas_data sdx55_mpss_resource = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "modem.mdt",
+ 	.pas_id = 4,
+@@ -1261,7 +1256,7 @@ static const struct adsp_data sdx55_mpss_resource = {
+ 	.ssctl_id = 0x22,
+ };
+ 
+-static const struct adsp_data sm8450_mpss_resource = {
++static const struct qcom_pas_data sm8450_mpss_resource = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "modem.mdt",
+ 	.pas_id = 4,
+@@ -1279,7 +1274,7 @@ static const struct adsp_data sm8450_mpss_resource = {
+ 	.ssctl_id = 0x12,
+ };
+ 
+-static const struct adsp_data sm8550_adsp_resource = {
++static const struct qcom_pas_data sm8550_adsp_resource = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.dtb_firmware_name = "adsp_dtb.mdt",
+@@ -1299,7 +1294,7 @@ static const struct adsp_data sm8550_adsp_resource = {
+ 	.smem_host_id = 2,
+ };
+ 
+-static const struct adsp_data sm8550_cdsp_resource = {
++static const struct qcom_pas_data sm8550_cdsp_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.dtb_firmware_name = "cdsp_dtb.mdt",
+@@ -1320,7 +1315,7 @@ static const struct adsp_data sm8550_cdsp_resource = {
+ 	.smem_host_id = 5,
+ };
+ 
+-static const struct adsp_data sm8550_mpss_resource = {
++static const struct qcom_pas_data sm8550_mpss_resource = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "modem.mdt",
+ 	.dtb_firmware_name = "modem_dtb.mdt",
+@@ -1344,7 +1339,7 @@ static const struct adsp_data sm8550_mpss_resource = {
+ 	.region_assign_vmid = QCOM_SCM_VMID_MSS_MSA,
+ };
+ 
+-static const struct adsp_data sc7280_wpss_resource = {
++static const struct qcom_pas_data sc7280_wpss_resource = {
+ 	.crash_reason_smem = 626,
+ 	.firmware_name = "wpss.mdt",
+ 	.pas_id = 6,
+@@ -1361,7 +1356,7 @@ static const struct adsp_data sc7280_wpss_resource = {
+ 	.ssctl_id = 0x19,
+ };
+ 
+-static const struct adsp_data sm8650_cdsp_resource = {
++static const struct qcom_pas_data sm8650_cdsp_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.dtb_firmware_name = "cdsp_dtb.mdt",
+@@ -1386,7 +1381,7 @@ static const struct adsp_data sm8650_cdsp_resource = {
+ 	.region_assign_vmid = QCOM_SCM_VMID_CDSP,
+ };
+ 
+-static const struct adsp_data sm8650_mpss_resource = {
++static const struct qcom_pas_data sm8650_mpss_resource = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "modem.mdt",
+ 	.dtb_firmware_name = "modem_dtb.mdt",
+@@ -1410,7 +1405,7 @@ static const struct adsp_data sm8650_mpss_resource = {
+ 	.region_assign_vmid = QCOM_SCM_VMID_MSS_MSA,
+ };
+ 
+-static const struct adsp_data sm8750_mpss_resource = {
++static const struct qcom_pas_data sm8750_mpss_resource = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "modem.mdt",
+ 	.dtb_firmware_name = "modem_dtb.mdt",
+@@ -1434,7 +1429,7 @@ static const struct adsp_data sm8750_mpss_resource = {
+ 	.region_assign_vmid = QCOM_SCM_VMID_MSS_MSA,
+ };
+ 
+-static const struct of_device_id adsp_of_match[] = {
++static const struct of_device_id qcom_pas_of_match[] = {
+ 	{ .compatible = "qcom,msm8226-adsp-pil", .data = &msm8996_adsp_resource},
+ 	{ .compatible = "qcom,msm8953-adsp-pil", .data = &msm8996_adsp_resource},
+ 	{ .compatible = "qcom,msm8974-adsp-pil", .data = &adsp_resource_init},
+@@ -1504,17 +1499,17 @@ static const struct of_device_id adsp_of_match[] = {
+ 	{ .compatible = "qcom,x1e80100-cdsp-pas", .data = &x1e80100_cdsp_resource},
+ 	{ },
+ };
+-MODULE_DEVICE_TABLE(of, adsp_of_match);
++MODULE_DEVICE_TABLE(of, qcom_pas_of_match);
+ 
+-static struct platform_driver adsp_driver = {
+-	.probe = adsp_probe,
+-	.remove = adsp_remove,
++static struct platform_driver qcom_pas_driver = {
++	.probe = qcom_pas_probe,
++	.remove = qcom_pas_remove,
+ 	.driver = {
+ 		.name = "qcom_q6v5_pas",
+-		.of_match_table = adsp_of_match,
++		.of_match_table = qcom_pas_of_match,
+ 	},
+ };
+ 
+-module_platform_driver(adsp_driver);
+-MODULE_DESCRIPTION("Qualcomm Hexagon v5 Peripheral Authentication Service driver");
++module_platform_driver(qcom_pas_driver);
++MODULE_DESCRIPTION("Qualcomm Peripheral Authentication Service remoteproc driver");
+ MODULE_LICENSE("GPL v2");
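
Aside from re-wrapped argument lists, a removed stray blank line, and the reworded firmware-pointer comments, the qcom_q6v5_pas.c hunks above are a mechanical rename: the driver state struct qcom_adsp becomes qcom_pas and every adsp_* callback becomes qcom_pas_*, reflecting that the driver boots CDSP, MPSS/modem, SLPI, WPSS and GPDSP firmware as well as the ADSP; the MODULE_DESCRIPTION is reworded to match. No functional change appears intended.
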
+diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
+index 5aeedeaf3c415e..c165422d06516b 100644
+--- a/drivers/remoteproc/xlnx_r5_remoteproc.c
++++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
+@@ -906,6 +906,8 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
+ 
+ 	rproc_coredump_set_elf_info(r5_rproc, ELFCLASS32, EM_ARM);
+ 
++	r5_rproc->recovery_disabled = true;
++	r5_rproc->has_iommu = false;
+ 	r5_rproc->auto_boot = false;
+ 	r5_core = r5_rproc->priv;
+ 	r5_core->dev = cdev;
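
The xlnx_r5_remoteproc change sets recovery_disabled and explicitly clears has_iommu before the rproc is registered, so the remoteproc core neither attempts automatic crash recovery nor tries to set up IOMMU mappings for the Cortex-R5 cores, presumably because neither is supported on the RPU.
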
+diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c
+index 5efbe69bf5ca8c..c8a666de9cbe91 100644
+--- a/drivers/rtc/rtc-ds1307.c
++++ b/drivers/rtc/rtc-ds1307.c
+@@ -1466,7 +1466,7 @@ static long ds3231_clk_sqw_round_rate(struct clk_hw *hw, unsigned long rate,
+ 			return ds3231_clk_sqw_rates[i];
+ 	}
+ 
+-	return 0;
++	return ds3231_clk_sqw_rates[ARRAY_SIZE(ds3231_clk_sqw_rates) - 1];
+ }
+ 
+ static int ds3231_clk_sqw_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-hym8563.c b/drivers/rtc/rtc-hym8563.c
+index 63f11ea3589d64..759dc2ad6e3b2a 100644
+--- a/drivers/rtc/rtc-hym8563.c
++++ b/drivers/rtc/rtc-hym8563.c
+@@ -294,7 +294,7 @@ static long hym8563_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ 		if (clkout_rates[i] <= rate)
+ 			return clkout_rates[i];
+ 
+-	return 0;
++	return clkout_rates[0];
+ }
+ 
+ static int hym8563_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-nct3018y.c b/drivers/rtc/rtc-nct3018y.c
+index 76c5f464b2daeb..cea05fca0bccdd 100644
+--- a/drivers/rtc/rtc-nct3018y.c
++++ b/drivers/rtc/rtc-nct3018y.c
+@@ -376,7 +376,7 @@ static long nct3018y_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ 		if (clkout_rates[i] <= rate)
+ 			return clkout_rates[i];
+ 
+-	return 0;
++	return clkout_rates[0];
+ }
+ 
+ static int nct3018y_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-pcf85063.c b/drivers/rtc/rtc-pcf85063.c
+index 4fa5c4ecdd5a34..b26c9bfad5d929 100644
+--- a/drivers/rtc/rtc-pcf85063.c
++++ b/drivers/rtc/rtc-pcf85063.c
+@@ -410,7 +410,7 @@ static long pcf85063_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ 		if (clkout_rates[i] <= rate)
+ 			return clkout_rates[i];
+ 
+-	return 0;
++	return clkout_rates[0];
+ }
+ 
+ static int pcf85063_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-pcf8563.c b/drivers/rtc/rtc-pcf8563.c
+index 5a084d426e58d0..e79da89015442f 100644
+--- a/drivers/rtc/rtc-pcf8563.c
++++ b/drivers/rtc/rtc-pcf8563.c
+@@ -339,7 +339,7 @@ static long pcf8563_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ 		if (clkout_rates[i] <= rate)
+ 			return clkout_rates[i];
+ 
+-	return 0;
++	return clkout_rates[0];
+ }
+ 
+ static int pcf8563_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-rv3028.c b/drivers/rtc/rtc-rv3028.c
+index 868d1b1eb0f42e..278841c2e47edf 100644
+--- a/drivers/rtc/rtc-rv3028.c
++++ b/drivers/rtc/rtc-rv3028.c
+@@ -740,7 +740,7 @@ static long rv3028_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ 		if (clkout_rates[i] <= rate)
+ 			return clkout_rates[i];
+ 
+-	return 0;
++	return clkout_rates[0];
+ }
+ 
+ static int rv3028_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
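
The six RTC hunks above (ds1307, hym8563, nct3018y, pcf85063, pcf8563, rv3028) share one fix: each clkout .round_rate callback walks a table of supported rates and used to return 0 when the requested rate was below every entry, and 0 is not a rate the hardware can produce, which misleads clk consumers that treat the rounded value as achievable. Each driver now falls back to an entry from its own table. A minimal sketch of the pattern, kernel context assumed; the table values and function name are illustrative, mirroring the hym8563-style drivers:

	#include <linux/kernel.h>
	#include <linux/clk-provider.h>

	static const int clkout_rates[] = { 32768, 1024, 32, 1 };

	static long example_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
					      unsigned long *prate)
	{
		int i;

		/* table is sorted descending: first entry <= rate wins */
		for (i = 0; i < ARRAY_SIZE(clkout_rates); i++)
			if (clkout_rates[i] <= rate)
				return clkout_rates[i];

		return clkout_rates[0];	/* was "return 0": report a real rate */
	}
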
+diff --git a/drivers/s390/crypto/ap_bus.h b/drivers/s390/crypto/ap_bus.h
+index f4622ee4d89473..6111913c858c04 100644
+--- a/drivers/s390/crypto/ap_bus.h
++++ b/drivers/s390/crypto/ap_bus.h
+@@ -180,7 +180,7 @@ struct ap_card {
+ 	atomic64_t total_request_count;	/* # requests ever for this AP device.*/
+ };
+ 
+-#define TAPQ_CARD_HWINFO_MASK 0xFEFF0000FFFF0F0FUL
++#define TAPQ_CARD_HWINFO_MASK 0xFFFF0000FFFF0F0FUL
+ #define ASSOC_IDX_INVALID 0x10000
+ 
+ #define to_ap_card(x) container_of((x), struct ap_card, ap_dev.device)
+diff --git a/drivers/scsi/elx/efct/efct_lio.c b/drivers/scsi/elx/efct/efct_lio.c
+index 9ac69356b13e08..bd3d489e56ae93 100644
+--- a/drivers/scsi/elx/efct/efct_lio.c
++++ b/drivers/scsi/elx/efct/efct_lio.c
+@@ -382,7 +382,7 @@ efct_lio_sg_unmap(struct efct_io *io)
+ 		return;
+ 
+ 	dma_unmap_sg(&io->efct->pci->dev, cmd->t_data_sg,
+-		     ocp->seg_map_cnt, cmd->data_direction);
++		     cmd->t_data_nents, cmd->data_direction);
+ 	ocp->seg_map_cnt = 0;
+ }
+ 
+diff --git a/drivers/scsi/ibmvscsi_tgt/libsrp.c b/drivers/scsi/ibmvscsi_tgt/libsrp.c
+index 8a0e28aec928e4..0ecad398ed3db0 100644
+--- a/drivers/scsi/ibmvscsi_tgt/libsrp.c
++++ b/drivers/scsi/ibmvscsi_tgt/libsrp.c
+@@ -184,7 +184,8 @@ static int srp_direct_data(struct ibmvscsis_cmd *cmd, struct srp_direct_buf *md,
+ 	err = rdma_io(cmd, sg, nsg, md, 1, dir, len);
+ 
+ 	if (dma_map)
+-		dma_unmap_sg(iue->target->dev, sg, nsg, DMA_BIDIRECTIONAL);
++		dma_unmap_sg(iue->target->dev, sg, cmd->se_cmd.t_data_nents,
++			     DMA_BIDIRECTIONAL);
+ 
+ 	return err;
+ }
+@@ -256,7 +257,8 @@ static int srp_indirect_data(struct ibmvscsis_cmd *cmd, struct srp_cmd *srp_cmd,
+ 	err = rdma_io(cmd, sg, nsg, md, nmd, dir, len);
+ 
+ 	if (dma_map)
+-		dma_unmap_sg(iue->target->dev, sg, nsg, DMA_BIDIRECTIONAL);
++		dma_unmap_sg(iue->target->dev, sg, cmd->se_cmd.t_data_nents,
++			     DMA_BIDIRECTIONAL);
+ 
+ free_mem:
+ 	if (token && dma_map) {
+diff --git a/drivers/scsi/isci/request.c b/drivers/scsi/isci/request.c
+index 355a0bc0828e74..bb89a2e33eb407 100644
+--- a/drivers/scsi/isci/request.c
++++ b/drivers/scsi/isci/request.c
+@@ -2904,7 +2904,7 @@ static void isci_request_io_request_complete(struct isci_host *ihost,
+ 					 task->total_xfer_len, task->data_dir);
+ 		else  /* unmap the sgl dma addresses */
+ 			dma_unmap_sg(&ihost->pdev->dev, task->scatter,
+-				     request->num_sg_entries, task->data_dir);
++				     task->num_scatter, task->data_dir);
+ 		break;
+ 	case SAS_PROTOCOL_SMP: {
+ 		struct scatterlist *sg = &task->smp_task.smp_req;
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 508861e88d9fe2..0f900ddb3047c7 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -10790,8 +10790,7 @@ _mpt3sas_fw_work(struct MPT3SAS_ADAPTER *ioc, struct fw_event_work *fw_event)
+ 		break;
+ 	case MPI2_EVENT_PCIE_TOPOLOGY_CHANGE_LIST:
+ 		_scsih_pcie_topology_change_event(ioc, fw_event);
+-		ioc->current_event = NULL;
+-		return;
++		break;
+ 	}
+ out:
+ 	fw_event_work_put(fw_event);
+diff --git a/drivers/scsi/mvsas/mv_sas.c b/drivers/scsi/mvsas/mv_sas.c
+index 52ac10226cb08f..3f12096528b1ea 100644
+--- a/drivers/scsi/mvsas/mv_sas.c
++++ b/drivers/scsi/mvsas/mv_sas.c
+@@ -818,7 +818,7 @@ static int mvs_task_prep(struct sas_task *task, struct mvs_info *mvi, int is_tmf
+ 	dev_printk(KERN_ERR, mvi->dev, "mvsas prep failed[%d]!\n", rc);
+ 	if (!sas_protocol_ata(task->task_proto))
+ 		if (n_elem)
+-			dma_unmap_sg(mvi->dev, task->scatter, n_elem,
++			dma_unmap_sg(mvi->dev, task->scatter, task->num_scatter,
+ 				     task->data_dir);
+ prep_out:
+ 	return rc;
+@@ -864,7 +864,7 @@ static void mvs_slot_task_free(struct mvs_info *mvi, struct sas_task *task,
+ 	if (!sas_protocol_ata(task->task_proto))
+ 		if (slot->n_elem)
+ 			dma_unmap_sg(mvi->dev, task->scatter,
+-				     slot->n_elem, task->data_dir);
++				     task->num_scatter, task->data_dir);
+ 
+ 	switch (task->task_proto) {
+ 	case SAS_PROTOCOL_SMP:
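
The efct_lio, libsrp, isci and mvsas hunks all correct the same DMA API misuse: dma_map_sg() may coalesce entries and return a mapped count smaller than nents, but dma_unmap_sg() must be called with the original nents, not the mapped count, or trailing entries stay mapped. A sketch of the contract, kernel context assumed and names illustrative:

	#include <linux/dma-mapping.h>

	static int dma_do_io(struct device *dev, struct scatterlist *sgl,
			     int nents, enum dma_data_direction dir)
	{
		int count = dma_map_sg(dev, sgl, nents, dir); /* may be < nents */

		if (!count)
			return -EIO;

		/* ... hand the 'count' mapped segments to the hardware ... */

		dma_unmap_sg(dev, sgl, nents, dir); /* original nents, not count */
		return 0;
	}
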
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index c75a806496d674..743b4c792ceb36 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2143,6 +2143,8 @@ static int iscsi_iter_destroy_conn_fn(struct device *dev, void *data)
+ 		return 0;
+ 
+ 	iscsi_remove_conn(iscsi_dev_to_conn(dev));
++	iscsi_put_conn(iscsi_dev_to_conn(dev));
++
+ 	return 0;
+ }
+ 
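
In scsi_transport_iscsi, iscsi_remove_conn() only unregisters the connection device; the added iscsi_put_conn() also drops the reference held since the connection was created, so destroying connections through this iterator no longer leaks them.
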
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 89d5c4b17bc462..2f64caa3b25302 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -4173,7 +4173,9 @@ static void sd_shutdown(struct device *dev)
+ 	if ((system_state != SYSTEM_RESTART &&
+ 	     sdkp->device->manage_system_start_stop) ||
+ 	    (system_state == SYSTEM_POWER_OFF &&
+-	     sdkp->device->manage_shutdown)) {
++	     sdkp->device->manage_shutdown) ||
++	    (system_state == SYSTEM_RUNNING &&
++	     sdkp->device->manage_runtime_start_stop)) {
+ 		sd_printk(KERN_NOTICE, sdkp, "Stopping disk\n");
+ 		sd_start_stop_device(sdkp, 0);
+ 	}
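
The sd_shutdown() condition gains a third arm: the disk is now also stopped when system_state is SYSTEM_RUNNING and the device has manage_runtime_start_stop set, which presumably covers shutdown requests issued while the system keeps running (such as driver unbind) rather than during reboot or power-off.
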
+diff --git a/drivers/soc/qcom/pmic_glink.c b/drivers/soc/qcom/pmic_glink.c
+index cde19cdfd3c7fb..e57b47c17c3c1e 100644
+--- a/drivers/soc/qcom/pmic_glink.c
++++ b/drivers/soc/qcom/pmic_glink.c
+@@ -167,7 +167,10 @@ static int pmic_glink_rpmsg_callback(struct rpmsg_device *rpdev, void *data,
+ 	return 0;
+ }
+ 
+-static void pmic_glink_aux_release(struct device *dev) {}
++static void pmic_glink_aux_release(struct device *dev)
++{
++	of_node_put(dev->of_node);
++}
+ 
+ static int pmic_glink_add_aux_device(struct pmic_glink *pg,
+ 				     struct auxiliary_device *aux,
+@@ -181,8 +184,10 @@ static int pmic_glink_add_aux_device(struct pmic_glink *pg,
+ 	aux->dev.release = pmic_glink_aux_release;
+ 	device_set_of_node_from_dev(&aux->dev, parent);
+ 	ret = auxiliary_device_init(aux);
+-	if (ret)
++	if (ret) {
++		of_node_put(aux->dev.of_node);
+ 		return ret;
++	}
+ 
+ 	ret = auxiliary_device_add(aux);
+ 	if (ret)
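
device_set_of_node_from_dev() takes a reference on the parent's of_node, so the pmic_glink fix releases that reference in the auxiliary device's release callback and, on the auxiliary_device_init() failure path, directly, balancing the get in both the success and error cases.
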
+diff --git a/drivers/soc/qcom/qmi_encdec.c b/drivers/soc/qcom/qmi_encdec.c
+index bb09eff85cff3b..dafe0a4c202e14 100644
+--- a/drivers/soc/qcom/qmi_encdec.c
++++ b/drivers/soc/qcom/qmi_encdec.c
+@@ -304,6 +304,8 @@ static int qmi_encode(const struct qmi_elem_info *ei_array, void *out_buf,
+ 	const void *buf_src;
+ 	int encode_tlv = 0;
+ 	int rc;
++	u8 val8;
++	u16 val16;
+ 
+ 	if (!ei_array)
+ 		return 0;
+@@ -338,7 +340,6 @@ static int qmi_encode(const struct qmi_elem_info *ei_array, void *out_buf,
+ 			break;
+ 
+ 		case QMI_DATA_LEN:
+-			memcpy(&data_len_value, buf_src, temp_ei->elem_size);
+ 			data_len_sz = temp_ei->elem_size == sizeof(u8) ?
+ 					sizeof(u8) : sizeof(u16);
+ 			/* Check to avoid out of range buffer access */
+@@ -348,8 +349,17 @@ static int qmi_encode(const struct qmi_elem_info *ei_array, void *out_buf,
+ 				       __func__);
+ 				return -ETOOSMALL;
+ 			}
+-			rc = qmi_encode_basic_elem(buf_dst, &data_len_value,
+-						   1, data_len_sz);
++			if (data_len_sz == sizeof(u8)) {
++				val8 = *(u8 *)buf_src;
++				data_len_value = (u32)val8;
++				rc = qmi_encode_basic_elem(buf_dst, &val8,
++							   1, data_len_sz);
++			} else {
++				val16 = *(u16 *)buf_src;
++				data_len_value = (u32)le16_to_cpu(val16);
++				rc = qmi_encode_basic_elem(buf_dst, &val16,
++							   1, data_len_sz);
++			}
+ 			UPDATE_ENCODE_VARIABLES(temp_ei, buf_dst,
+ 						encoded_bytes, tlv_len,
+ 						encode_tlv, rc);
+@@ -523,14 +533,23 @@ static int qmi_decode_string_elem(const struct qmi_elem_info *ei_array,
+ 	u32 string_len = 0;
+ 	u32 string_len_sz = 0;
+ 	const struct qmi_elem_info *temp_ei = ei_array;
++	u8 val8;
++	u16 val16;
+ 
+ 	if (dec_level == 1) {
+ 		string_len = tlv_len;
+ 	} else {
+ 		string_len_sz = temp_ei->elem_len <= U8_MAX ?
+ 				sizeof(u8) : sizeof(u16);
+-		rc = qmi_decode_basic_elem(&string_len, buf_src,
+-					   1, string_len_sz);
++		if (string_len_sz == sizeof(u8)) {
++			rc = qmi_decode_basic_elem(&val8, buf_src,
++						   1, string_len_sz);
++			string_len = (u32)val8;
++		} else {
++			rc = qmi_decode_basic_elem(&val16, buf_src,
++						   1, string_len_sz);
++			string_len = (u32)val16;
++		}
+ 		decoded_bytes += rc;
+ 	}
+ 
+@@ -604,6 +623,9 @@ static int qmi_decode(const struct qmi_elem_info *ei_array, void *out_c_struct,
+ 	u32 decoded_bytes = 0;
+ 	const void *buf_src = in_buf;
+ 	int rc;
++	u8 val8;
++	u16 val16;
++	u32 val32;
+ 
+ 	while (decoded_bytes < in_buf_len) {
+ 		if (dec_level >= 2 && temp_ei->data_type == QMI_EOTI)
+@@ -642,9 +664,17 @@ static int qmi_decode(const struct qmi_elem_info *ei_array, void *out_c_struct,
+ 		if (temp_ei->data_type == QMI_DATA_LEN) {
+ 			data_len_sz = temp_ei->elem_size == sizeof(u8) ?
+ 					sizeof(u8) : sizeof(u16);
+-			rc = qmi_decode_basic_elem(&data_len_value, buf_src,
+-						   1, data_len_sz);
+-			memcpy(buf_dst, &data_len_value, sizeof(u32));
++			if (data_len_sz == sizeof(u8)) {
++				rc = qmi_decode_basic_elem(&val8, buf_src,
++							   1, data_len_sz);
++				data_len_value = (u32)val8;
++			} else {
++				rc = qmi_decode_basic_elem(&val16, buf_src,
++							   1, data_len_sz);
++				data_len_value = (u32)val16;
++			}
++			val32 = cpu_to_le32(data_len_value);
++			memcpy(buf_dst, &val32, sizeof(u32));
+ 			temp_ei = temp_ei + 1;
+ 			buf_dst = out_c_struct + temp_ei->offset;
+ 			tlv_len -= data_len_sz;
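
The three qmi_encdec hunks replace memcpy through a u32 with exact-width loads. Copying a one- or two-byte length field into a wider integer with memcpy only works on little-endian; on big-endian the bytes land in the high half of the destination. An illustrative helper (not driver code) showing the portable pattern:

	#include <linux/string.h>
	#include <linux/types.h>

	/* memcpy(&out32, buf, 2) would fill the wrong half of out32 on
	 * big-endian; copying into an exact-width local and widening by
	 * assignment is portable. */
	static u32 read_len16(const void *buf)
	{
		u16 raw;

		memcpy(&raw, buf, sizeof(raw));
		return raw;
	}
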
+diff --git a/drivers/soc/tegra/cbb/tegra234-cbb.c b/drivers/soc/tegra/cbb/tegra234-cbb.c
+index c74629af9bb5d0..1da31ead2b5ebc 100644
+--- a/drivers/soc/tegra/cbb/tegra234-cbb.c
++++ b/drivers/soc/tegra/cbb/tegra234-cbb.c
+@@ -185,6 +185,8 @@ static void tegra234_cbb_error_clear(struct tegra_cbb *cbb)
+ {
+ 	struct tegra234_cbb *priv = to_tegra234_cbb(cbb);
+ 
++	writel(0, priv->mon + FABRIC_MN_MASTER_ERR_FORCE_0);
++
+ 	writel(0x3f, priv->mon + FABRIC_MN_MASTER_ERR_STATUS_0);
+ 	dsb(sy);
+ }
+diff --git a/drivers/soundwire/debugfs.c b/drivers/soundwire/debugfs.c
+index 3099ea074f10e2..230a51489486e1 100644
+--- a/drivers/soundwire/debugfs.c
++++ b/drivers/soundwire/debugfs.c
+@@ -291,6 +291,9 @@ static int cmd_go(void *data, u64 value)
+ 
+ 	finish_t = ktime_get();
+ 
++	dev_dbg(&slave->dev, "command completed, num_byte %zu status %d, time %lld ms\n",
++		num_bytes, ret, div_u64(finish_t - start_t, NSEC_PER_MSEC));
++
+ out:
+ 	if (fw)
+ 		release_firmware(fw);
+@@ -298,9 +301,6 @@ static int cmd_go(void *data, u64 value)
+ 	pm_runtime_mark_last_busy(&slave->dev);
+ 	pm_runtime_put(&slave->dev);
+ 
+-	dev_dbg(&slave->dev, "command completed, num_byte %zu status %d, time %lld ms\n",
+-		num_bytes, ret, div_u64(finish_t - start_t, NSEC_PER_MSEC));
+-
+ 	return ret;
+ }
+ DEFINE_DEBUGFS_ATTRIBUTE(cmd_go_fops, NULL,
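
Moving the dev_dbg() in cmd_go() above the out: label means the completion message is printed only when a command actually ran; the early error paths that jump straight to out: would otherwise have logged an elapsed time computed from a finish_t that was never set.
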
+diff --git a/drivers/soundwire/mipi_disco.c b/drivers/soundwire/mipi_disco.c
+index 65afb28ef8fab1..c69b78cd0b6209 100644
+--- a/drivers/soundwire/mipi_disco.c
++++ b/drivers/soundwire/mipi_disco.c
+@@ -451,10 +451,10 @@ int sdw_slave_read_prop(struct sdw_slave *slave)
+ 			"mipi-sdw-highPHY-capable");
+ 
+ 	prop->paging_support = mipi_device_property_read_bool(dev,
+-			"mipi-sdw-paging-support");
++			"mipi-sdw-paging-supported");
+ 
+ 	prop->bank_delay_support = mipi_device_property_read_bool(dev,
+-			"mipi-sdw-bank-delay-support");
++			"mipi-sdw-bank-delay-supported");
+ 
+ 	device_property_read_u32(dev,
+ 			"mipi-sdw-port15-read-behavior", &prop->p15_behave);
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index a4bea742b5d9a5..38c9dbd3560652 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -1510,7 +1510,7 @@ static int _sdw_prepare_stream(struct sdw_stream_runtime *stream,
+ 		if (ret < 0) {
+ 			dev_err(bus->dev, "Prepare port(s) failed ret = %d\n",
+ 				ret);
+-			return ret;
++			goto restore_params;
+ 		}
+ 	}
+ 
+diff --git a/drivers/spi/spi-cs42l43.c b/drivers/spi/spi-cs42l43.c
+index ceefc253c54904..004801c2c92538 100644
+--- a/drivers/spi/spi-cs42l43.c
++++ b/drivers/spi/spi-cs42l43.c
+@@ -293,7 +293,7 @@ static struct spi_board_info *cs42l43_create_bridge_amp(struct cs42l43_spi *priv
+ 	struct spi_board_info *info;
+ 
+ 	if (spkid >= 0) {
+-		props = devm_kmalloc(priv->dev, sizeof(*props), GFP_KERNEL);
++		props = devm_kcalloc(priv->dev, 2, sizeof(*props), GFP_KERNEL);
+ 		if (!props)
+ 			return NULL;
+ 
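
The switch from devm_kmalloc() to devm_kcalloc(priv->dev, 2, ...) in spi-cs42l43 looks like a sentinel fix: property_entry arrays handed to software nodes must end with a zeroed terminator entry, so a single property needs two zero-initialized slots, which kcalloc provides and a bare kmalloc did not.
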
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index da3517d7102dce..dc22b98bdbcc34 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -2069,9 +2069,15 @@ static int stm32_spi_probe(struct platform_device *pdev)
+ 	struct resource *res;
+ 	struct reset_control *rst;
+ 	struct device_node *np = pdev->dev.of_node;
++	const struct stm32_spi_cfg *cfg;
+ 	bool device_mode;
+ 	int ret;
+-	const struct stm32_spi_cfg *cfg = of_device_get_match_data(&pdev->dev);
++
++	cfg = of_device_get_match_data(&pdev->dev);
++	if (!cfg) {
++		dev_err(&pdev->dev, "Failed to get match data for platform\n");
++		return -ENODEV;
++	}
+ 
+ 	device_mode = of_property_read_bool(np, "spi-slave");
+ 	if (!cfg->has_device_mode && device_mode) {
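
of_device_get_match_data() returns NULL when probe runs without a matching OF entry, so stm32_spi_probe() now fails cleanly with -ENODEV instead of dereferencing a NULL cfg a few lines later.
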
+diff --git a/drivers/staging/fbtft/fbtft-core.c b/drivers/staging/fbtft/fbtft-core.c
+index da9c64152a606d..39bced40006501 100644
+--- a/drivers/staging/fbtft/fbtft-core.c
++++ b/drivers/staging/fbtft/fbtft-core.c
+@@ -692,6 +692,7 @@ struct fb_info *fbtft_framebuffer_alloc(struct fbtft_display *display,
+ 	return info;
+ 
+ release_framebuf:
++	fb_deferred_io_cleanup(info);
+ 	framebuffer_release(info);
+ 
+ alloc_fail:
+diff --git a/drivers/staging/gpib/cb7210/cb7210.c b/drivers/staging/gpib/cb7210/cb7210.c
+index 6b22a33a8c4f51..e6465331ffd0eb 100644
+--- a/drivers/staging/gpib/cb7210/cb7210.c
++++ b/drivers/staging/gpib/cb7210/cb7210.c
+@@ -1183,8 +1183,7 @@ struct local_info {
+ static int cb_gpib_probe(struct pcmcia_device *link)
+ {
+ 	struct local_info *info;
+-
+-//	int ret, i;
++	int ret;
+ 
+ 	/* Allocate space for private device-specific data */
+ 	info = kzalloc(sizeof(*info), GFP_KERNEL);
+@@ -1210,8 +1209,16 @@ static int cb_gpib_probe(struct pcmcia_device *link)
+ 
+ 	/* Register with Card Services */
+ 	curr_dev = link;
+-	return cb_gpib_config(link);
+-} /* gpib_attach */
++	ret = cb_gpib_config(link);
++	if (ret)
++		goto free_info;
++
++	return 0;
++
++free_info:
++	kfree(info);
++	return ret;
++}
+ 
+ /*
+  *   This deletes a driver "instance".  The device is de-registered
+diff --git a/drivers/staging/gpib/common/gpib_os.c b/drivers/staging/gpib/common/gpib_os.c
+index 8456b97290b80e..01a9099a6c16d8 100644
+--- a/drivers/staging/gpib/common/gpib_os.c
++++ b/drivers/staging/gpib/common/gpib_os.c
+@@ -819,7 +819,7 @@ static int board_type_ioctl(gpib_file_private_t *file_priv, struct gpib_board *b
+ 
+ 	retval = copy_from_user(&cmd, (void __user *)arg, sizeof(board_type_ioctl_t));
+ 	if (retval)
+-		return retval;
++		return -EFAULT;
+ 
+ 	for (list_ptr = registered_drivers.next; list_ptr != &registered_drivers;
+ 	     list_ptr = list_ptr->next) {
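
copy_from_user() returns the number of bytes it could not copy, never an errno, so returning its result directly would hand user space a positive byte count that looks like success to most callers. The canonical pattern, kernel context assumed and the helper name hypothetical:

	#include <linux/errno.h>
	#include <linux/uaccess.h>

	static int fetch_ioctl_arg(void *dst, const void __user *src, size_t size)
	{
		if (copy_from_user(dst, src, size))
			return -EFAULT;	/* never return the raw byte count */
		return 0;
	}
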
+diff --git a/drivers/staging/greybus/gbphy.c b/drivers/staging/greybus/gbphy.c
+index 6adcad28663305..60cf09a302a7e3 100644
+--- a/drivers/staging/greybus/gbphy.c
++++ b/drivers/staging/greybus/gbphy.c
+@@ -102,8 +102,8 @@ static int gbphy_dev_uevent(const struct device *dev, struct kobj_uevent_env *en
+ }
+ 
+ static const struct gbphy_device_id *
+-gbphy_dev_match_id(struct gbphy_device *gbphy_dev,
+-		   struct gbphy_driver *gbphy_drv)
++gbphy_dev_match_id(const struct gbphy_device *gbphy_dev,
++		   const struct gbphy_driver *gbphy_drv)
+ {
+ 	const struct gbphy_device_id *id = gbphy_drv->id_table;
+ 
+@@ -119,7 +119,7 @@ gbphy_dev_match_id(struct gbphy_device *gbphy_dev,
+ 
+ static int gbphy_dev_match(struct device *dev, const struct device_driver *drv)
+ {
+-	struct gbphy_driver *gbphy_drv = to_gbphy_driver(drv);
++	const struct gbphy_driver *gbphy_drv = to_gbphy_driver(drv);
+ 	struct gbphy_device *gbphy_dev = to_gbphy_dev(dev);
+ 	const struct gbphy_device_id *id;
+ 
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+index e176483df301f5..b86494faa63adb 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+@@ -1358,14 +1358,15 @@ static int gmin_get_config_var(struct device *maindev,
+ 	if (efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE))
+ 		status = efi.get_variable(var16, &GMIN_CFG_VAR_EFI_GUID, NULL,
+ 					  (unsigned long *)out_len, out);
+-	if (status == EFI_SUCCESS)
++	if (status == EFI_SUCCESS) {
+ 		dev_info(maindev, "found EFI entry for '%s'\n", var8);
+-	else if (is_gmin)
++		return 0;
++	}
++	if (is_gmin)
+ 		dev_info(maindev, "Failed to find EFI gmin variable %s\n", var8);
+ 	else
+ 		dev_info(maindev, "Failed to find EFI variable %s\n", var8);
+-
+-	return ret;
++	return -ENOENT;
+ }
+ 
+ int gmin_get_var_int(struct device *dev, bool is_gmin, const char *var, int def)
+diff --git a/drivers/staging/nvec/nvec_power.c b/drivers/staging/nvec/nvec_power.c
+index e0e67a3eb7222b..2faab9fdedef70 100644
+--- a/drivers/staging/nvec/nvec_power.c
++++ b/drivers/staging/nvec/nvec_power.c
+@@ -194,7 +194,7 @@ static int nvec_power_bat_notifier(struct notifier_block *nb,
+ 		break;
+ 	case MANUFACTURER:
+ 		memcpy(power->bat_manu, &res->plc, res->length - 2);
+-		power->bat_model[res->length - 2] = '\0';
++		power->bat_manu[res->length - 2] = '\0';
+ 		break;
+ 	case MODEL:
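
The nvec_power change is a copy-and-paste fix: the MANUFACTURER branch filled bat_manu but then NUL-terminated bat_model, leaving the manufacturer string potentially unterminated while clobbering a byte of the model string; both statements now operate on bat_manu.
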
+ 		memcpy(power->bat_model, &res->plc, res->length - 2);
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index e7e6bbc04d21cd..db2a2760c0d649 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -4322,7 +4322,7 @@ static int ufshcd_uic_pwr_ctrl(struct ufs_hba *hba, struct uic_command *cmd)
+ 	hba->uic_async_done = NULL;
+ 	if (reenable_intr)
+ 		ufshcd_enable_intr(hba, UIC_COMMAND_COMPL);
+-	if (ret) {
++	if (ret && !hba->pm_op_in_progress) {
+ 		ufshcd_set_link_broken(hba);
+ 		ufshcd_schedule_eh_work(hba);
+ 	}
+@@ -4330,6 +4330,14 @@ static int ufshcd_uic_pwr_ctrl(struct ufs_hba *hba, struct uic_command *cmd)
+ 	spin_unlock_irqrestore(hba->host->host_lock, flags);
+ 	mutex_unlock(&hba->uic_cmd_mutex);
+ 
++	/*
++	 * If the h8 exit fails during the runtime resume process, it becomes
++	 * stuck and cannot be recovered through the error handler.  To fix
++	 * this, use link recovery instead of the error handler.
++	 */
++	if (ret && hba->pm_op_in_progress)
++		ret = ufshcd_link_recovery(hba);
++
+ 	return ret;
+ }
+ 
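
As the added comment in ufshcd_uic_pwr_ctrl() explains, the error handler cannot recover a failed hibern8 exit while a PM operation is in progress, so the failure path now skips scheduling error-handler work in that case and calls ufshcd_link_recovery() directly once the locks are dropped.
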
+diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
+index 341408410ed934..41118bba91978d 100644
+--- a/drivers/usb/early/xhci-dbc.c
++++ b/drivers/usb/early/xhci-dbc.c
+@@ -681,6 +681,10 @@ int __init early_xdbc_setup_hardware(void)
+ 
+ 		xdbc.table_base = NULL;
+ 		xdbc.out_buf = NULL;
++
++		early_iounmap(xdbc.xhci_base, xdbc.xhci_length);
++		xdbc.xhci_base = NULL;
++		xdbc.xhci_length = 0;
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 8dbc132a505e39..a893a29ebfac5e 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -2489,6 +2489,11 @@ int composite_os_desc_req_prepare(struct usb_composite_dev *cdev,
+ 	if (!cdev->os_desc_req->buf) {
+ 		ret = -ENOMEM;
+ 		usb_ep_free_request(ep0, cdev->os_desc_req);
++		/*
++		 * Set os_desc_req to NULL so that composite_dev_cleanup()
++		 * will not try to free it again.
++		 */
++		cdev->os_desc_req = NULL;
+ 		goto end;
+ 	}
+ 	cdev->os_desc_req->context = cdev;
+diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
+index d8bd2d82e9ec63..ab4d170469f578 100644
+--- a/drivers/usb/gadget/function/f_hid.c
++++ b/drivers/usb/gadget/function/f_hid.c
+@@ -1275,18 +1275,19 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
+ 
+ 	if (!hidg->workqueue) {
+ 		status = -ENOMEM;
+-		goto fail;
++		goto fail_free_descs;
+ 	}
+ 
+ 	/* create char device */
+ 	cdev_init(&hidg->cdev, &f_hidg_fops);
+ 	status = cdev_device_add(&hidg->cdev, &hidg->dev);
+ 	if (status)
+-		goto fail_free_descs;
++		goto fail_free_all;
+ 
+ 	return 0;
+-fail_free_descs:
++fail_free_all:
+ 	destroy_workqueue(hidg->workqueue);
++fail_free_descs:
+ 	usb_free_all_descriptors(f);
+ fail:
+ 	ERROR(f->config->cdev, "hidg_bind FAILED\n");
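
The fix above reorders the error labels so that each `goto` unwinds exactly the resources acquired so far: `fail_free_all` destroys the workqueue and then falls through to the descriptor cleanup. A compilable sketch of the same ladder, with placeholder acquire steps standing in for the driver's allocations:

#include <stdio.h>
#include <stdlib.h>

static int bind_example(int fail_at)
{
	void *descs, *wq;
	int status = -1;

	descs = malloc(16);			/* step 1: descriptors */
	if (!descs)
		goto fail;

	wq = (fail_at == 2) ? NULL : malloc(16);	/* step 2: workqueue */
	if (!wq)
		goto fail_free_descs;

	if (fail_at == 3)			/* step 3: cdev_device_add() */
		goto fail_free_all;

	free(wq);				/* success path (sketch only) */
	free(descs);
	return 0;

fail_free_all:
	free(wq);		/* undo step 2, then fall through to step 1 */
fail_free_descs:
	free(descs);		/* undo step 1 */
fail:
	fprintf(stderr, "bind failed\n");
	return status;
}

int main(void)
{
	return bind_example(0);	/* try fail_at = 2 or 3 to walk the ladder */
}
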
+diff --git a/drivers/usb/gadget/function/uvc_configfs.c b/drivers/usb/gadget/function/uvc_configfs.c
+index f131943254a4c4..a4a2d3dcb0d666 100644
+--- a/drivers/usb/gadget/function/uvc_configfs.c
++++ b/drivers/usb/gadget/function/uvc_configfs.c
+@@ -2916,8 +2916,15 @@ static struct config_group *uvcg_framebased_make(struct config_group *group,
+ 		'H',  '2',  '6',  '4', 0x00, 0x00, 0x10, 0x00,
+ 		0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71
+ 	};
++	struct uvcg_color_matching *color_match;
++	struct config_item *streaming;
+ 	struct uvcg_framebased *h;
+ 
++	streaming = group->cg_item.ci_parent;
++	color_match = uvcg_format_get_default_color_match(streaming);
++	if (!color_match)
++		return ERR_PTR(-EINVAL);
++
+ 	h = kzalloc(sizeof(*h), GFP_KERNEL);
+ 	if (!h)
+ 		return ERR_PTR(-ENOMEM);
+@@ -2936,6 +2943,9 @@ static struct config_group *uvcg_framebased_make(struct config_group *group,
+ 
+ 	INIT_LIST_HEAD(&h->fmt.frames);
+ 	h->fmt.type = UVCG_FRAMEBASED;
++
++	h->fmt.color_matching = color_match;
++	color_match->refcnt++;
+ 	config_group_init_type_name(&h->fmt.group, name,
+ 				    &uvcg_framebased_type);
+ 
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index 619481dec8e8d8..87f173392a010b 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -152,7 +152,7 @@ int xhci_plat_probe(struct platform_device *pdev, struct device *sysdev, const s
+ 	int			ret;
+ 	int			irq;
+ 	struct xhci_plat_priv	*priv = NULL;
+-	bool			of_match;
++	const struct of_device_id *of_match;
+ 
+ 	if (usb_disabled())
+ 		return -ENODEV;
+diff --git a/drivers/usb/misc/apple-mfi-fastcharge.c b/drivers/usb/misc/apple-mfi-fastcharge.c
+index ac8695195c13c8..8e852f4b8262e6 100644
+--- a/drivers/usb/misc/apple-mfi-fastcharge.c
++++ b/drivers/usb/misc/apple-mfi-fastcharge.c
+@@ -44,6 +44,7 @@ MODULE_DEVICE_TABLE(usb, mfi_fc_id_table);
+ struct mfi_device {
+ 	struct usb_device *udev;
+ 	struct power_supply *battery;
++	struct power_supply_desc battery_desc;
+ 	int charge_type;
+ };
+ 
+@@ -178,6 +179,7 @@ static int mfi_fc_probe(struct usb_device *udev)
+ {
+ 	struct power_supply_config battery_cfg = {};
+ 	struct mfi_device *mfi = NULL;
++	char *battery_name;
+ 	int err;
+ 
+ 	if (!mfi_fc_match(udev))
+@@ -187,23 +189,38 @@ static int mfi_fc_probe(struct usb_device *udev)
+ 	if (!mfi)
+ 		return -ENOMEM;
+ 
++	battery_name = kasprintf(GFP_KERNEL, "apple_mfi_fastcharge_%d-%d",
++				 udev->bus->busnum, udev->devnum);
++	if (!battery_name) {
++		err = -ENOMEM;
++		goto err_free_mfi;
++	}
++
++	mfi->battery_desc = apple_mfi_fc_desc;
++	mfi->battery_desc.name = battery_name;
++
+ 	battery_cfg.drv_data = mfi;
+ 
+ 	mfi->charge_type = POWER_SUPPLY_CHARGE_TYPE_TRICKLE;
+ 	mfi->battery = power_supply_register(&udev->dev,
+-						&apple_mfi_fc_desc,
++						&mfi->battery_desc,
+ 						&battery_cfg);
+ 	if (IS_ERR(mfi->battery)) {
+ 		dev_err(&udev->dev, "Can't register battery\n");
+ 		err = PTR_ERR(mfi->battery);
+-		kfree(mfi);
+-		return err;
++		goto err_free_name;
+ 	}
+ 
+ 	mfi->udev = usb_get_dev(udev);
+ 	dev_set_drvdata(&udev->dev, mfi);
+ 
+ 	return 0;
++
++err_free_name:
++	kfree(battery_name);
++err_free_mfi:
++	kfree(mfi);
++	return err;
+ }
+ 
+ static void mfi_fc_disconnect(struct usb_device *udev)
+@@ -213,6 +230,7 @@ static void mfi_fc_disconnect(struct usb_device *udev)
+ 	mfi = dev_get_drvdata(&udev->dev);
+ 	if (mfi->battery)
+ 		power_supply_unregister(mfi->battery);
++	kfree(mfi->battery_desc.name);
+ 	dev_set_drvdata(&udev->dev, NULL);
+ 	usb_put_dev(mfi->udev);
+ 	kfree(mfi);
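
The driver previously registered every device against one shared static descriptor, so two plugged-in devices collided on the same power-supply name. The fix copies the template into a per-device struct and gives it a unique heap-allocated name, freed again on disconnect. A userspace sketch of the same idea, using asprintf() in place of kasprintf() (struct names are invented):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>

struct desc {
	char *name;
};

static const struct desc template = { .name = "apple_mfi_fastcharge" };

struct dev {
	struct desc desc;	/* per-instance copy, not the shared template */
};

static struct dev *probe(int bus, int addr)
{
	struct dev *d = calloc(1, sizeof(*d));

	if (!d)
		return NULL;
	d->desc = template;
	if (asprintf(&d->desc.name, "%s_%d-%d", template.name, bus, addr) < 0) {
		free(d);
		return NULL;
	}
	return d;
}

static void disconnect(struct dev *d)
{
	free(d->desc.name);	/* matches the kfree() in mfi_fc_disconnect() */
	free(d);
}

int main(void)
{
	struct dev *a = probe(1, 4), *b = probe(2, 7);

	if (a && b)
		printf("%s\n%s\n", a->desc.name, b->desc.name);	/* two distinct names */
	disconnect(a);
	disconnect(b);
	return 0;
}
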
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 147ca50c94beec..e5cd3309342364 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -2346,6 +2346,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe145, 0xff),			/* Foxconn T99W651 RNDIS */
+ 	  .driver_info = RSVD(5) | RSVD(6) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe15f, 0xff),                     /* Foxconn T99W709 */
++	  .driver_info = RSVD(5) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe167, 0xff),                     /* Foxconn T99W640 MBIM */
+ 	  .driver_info = RSVD(3) },
+ 	{ USB_DEVICE(0x1508, 0x1001),						/* Fibocom NL668 (IOT version) */
+diff --git a/drivers/usb/typec/ucsi/ucsi_yoga_c630.c b/drivers/usb/typec/ucsi/ucsi_yoga_c630.c
+index d33e3f2dd1d80f..47e8dd5b255b2b 100644
+--- a/drivers/usb/typec/ucsi/ucsi_yoga_c630.c
++++ b/drivers/usb/typec/ucsi/ucsi_yoga_c630.c
+@@ -133,17 +133,30 @@ static int yoga_c630_ucsi_probe(struct auxiliary_device *adev,
+ 
+ 	ret = yoga_c630_ec_register_notify(ec, &uec->nb);
+ 	if (ret)
+-		return ret;
++		goto err_destroy;
++
++	ret = ucsi_register(uec->ucsi);
++	if (ret)
++		goto err_unregister;
++
++	return 0;
+ 
+-	return ucsi_register(uec->ucsi);
++err_unregister:
++	yoga_c630_ec_unregister_notify(uec->ec, &uec->nb);
++
++err_destroy:
++	ucsi_destroy(uec->ucsi);
++
++	return ret;
+ }
+ 
+ static void yoga_c630_ucsi_remove(struct auxiliary_device *adev)
+ {
+ 	struct yoga_c630_ucsi *uec = auxiliary_get_drvdata(adev);
+ 
+-	yoga_c630_ec_unregister_notify(uec->ec, &uec->nb);
+ 	ucsi_unregister(uec->ucsi);
++	yoga_c630_ec_unregister_notify(uec->ec, &uec->nb);
++	ucsi_destroy(uec->ucsi);
+ }
+ 
+ static const struct auxiliary_device_id yoga_c630_ucsi_id_table[] = {
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index 61424342c09641..c7a20278bc3ca5 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -908,6 +908,9 @@ void mlx5_vdpa_destroy_mr_resources(struct mlx5_vdpa_dev *mvdev)
+ {
+ 	struct mlx5_vdpa_mr_resources *mres = &mvdev->mres;
+ 
++	if (!mres->wq_gc)
++		return;
++
+ 	atomic_set(&mres->shutdown, 1);
+ 
+ 	flush_delayed_work(&mres->gc_dwork_ent);
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index cccc49a08a1abf..0ed2fc28e1cefe 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -2491,7 +2491,7 @@ static void mlx5_vdpa_set_vq_num(struct vdpa_device *vdev, u16 idx, u32 num)
+         }
+ 
+ 	mvq = &ndev->vqs[idx];
+-	ndev->needs_teardown = num != mvq->num_ent;
++	ndev->needs_teardown |= num != mvq->num_ent;
+ 	mvq->num_ent = num;
+ }
+ 
+@@ -3432,15 +3432,17 @@ static void mlx5_vdpa_free(struct vdpa_device *vdev)
+ 
+ 	ndev = to_mlx5_vdpa_ndev(mvdev);
+ 
++	/* Functions called here should be able to work with
++	 * uninitialized resources.
++	 */
+ 	free_fixed_resources(ndev);
+ 	mlx5_vdpa_clean_mrs(mvdev);
+ 	mlx5_vdpa_destroy_mr_resources(&ndev->mvdev);
+-	mlx5_cmd_cleanup_async_ctx(&mvdev->async_ctx);
+-
+ 	if (!is_zero_ether_addr(ndev->config.mac)) {
+ 		pfmdev = pci_get_drvdata(pci_physfn(mvdev->mdev->pdev));
+ 		mlx5_mpfs_del_mac(pfmdev, ndev->config.mac);
+ 	}
++	mlx5_cmd_cleanup_async_ctx(&mvdev->async_ctx);
+ 	mlx5_vdpa_free_resources(&ndev->mvdev);
+ 	free_irqs(ndev);
+ 	kfree(ndev->event_cbs);
+@@ -3888,6 +3890,8 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
+ 	mvdev->actual_features =
+ 			(device_features & BIT_ULL(VIRTIO_F_VERSION_1));
+ 
++	mlx5_cmd_init_async_ctx(mdev, &mvdev->async_ctx);
++
+ 	ndev->vqs = kcalloc(max_vqs, sizeof(*ndev->vqs), GFP_KERNEL);
+ 	ndev->event_cbs = kcalloc(max_vqs + 1, sizeof(*ndev->event_cbs), GFP_KERNEL);
+ 	if (!ndev->vqs || !ndev->event_cbs) {
+@@ -3960,8 +3964,6 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
+ 		ndev->rqt_size = 1;
+ 	}
+ 
+-	mlx5_cmd_init_async_ctx(mdev, &mvdev->async_ctx);
+-
+ 	ndev->mvdev.mlx_features = device_features;
+ 	mvdev->vdev.dma_dev = &mdev->pdev->dev;
+ 	err = mlx5_vdpa_alloc_resources(&ndev->mvdev);
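
Two independent fixes in the mlx5_vnet hunks: the async command context is now initialized before any allocation that the free path may unwind, and `needs_teardown` changes from `=` to `|=` so that, once any queue requests a teardown, a later call that leaves its own queue size unchanged cannot clear it. The sticky-flag distinction in miniature:

#include <assert.h>
#include <stdbool.h>

int main(void)
{
	bool needs_teardown = false;
	int sizes[] = { 256, 512, 256 };	/* vq 1 changes, vq 2 does not */
	int cur = 256;

	for (int i = 0; i < 3; i++) {
		/* '=' would forget the earlier change; '|=' keeps it latched */
		needs_teardown |= (sizes[i] != cur);
	}
	assert(needs_teardown);	/* one change is enough */
	return 0;
}
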
+diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
+index 6a9a3735131036..04620bb77203d0 100644
+--- a/drivers/vdpa/vdpa_user/vduse_dev.c
++++ b/drivers/vdpa/vdpa_user/vduse_dev.c
+@@ -2216,6 +2216,7 @@ static void vduse_exit(void)
+ 	cdev_del(&vduse_ctrl_cdev);
+ 	unregister_chrdev_region(vduse_major, VDUSE_DEV_MAX);
+ 	class_unregister(&vduse_class);
++	idr_destroy(&vduse_idr);
+ }
+ module_exit(vduse_exit);
+ 
+diff --git a/drivers/vfio/device_cdev.c b/drivers/vfio/device_cdev.c
+index 281a8dc3ed4974..480cac3a0c274f 100644
+--- a/drivers/vfio/device_cdev.c
++++ b/drivers/vfio/device_cdev.c
+@@ -60,22 +60,50 @@ static void vfio_df_get_kvm_safe(struct vfio_device_file *df)
+ 	spin_unlock(&df->kvm_ref_lock);
+ }
+ 
++static int vfio_df_check_token(struct vfio_device *device,
++			       const struct vfio_device_bind_iommufd *bind)
++{
++	uuid_t uuid;
++
++	if (!device->ops->match_token_uuid) {
++		if (bind->flags & VFIO_DEVICE_BIND_FLAG_TOKEN)
++			return -EINVAL;
++		return 0;
++	}
++
++	if (!(bind->flags & VFIO_DEVICE_BIND_FLAG_TOKEN))
++		return device->ops->match_token_uuid(device, NULL);
++
++	if (copy_from_user(&uuid, u64_to_user_ptr(bind->token_uuid_ptr),
++			   sizeof(uuid)))
++		return -EFAULT;
++	return device->ops->match_token_uuid(device, &uuid);
++}
++
+ long vfio_df_ioctl_bind_iommufd(struct vfio_device_file *df,
+ 				struct vfio_device_bind_iommufd __user *arg)
+ {
++	const u32 VALID_FLAGS = VFIO_DEVICE_BIND_FLAG_TOKEN;
+ 	struct vfio_device *device = df->device;
+ 	struct vfio_device_bind_iommufd bind;
+ 	unsigned long minsz;
++	u32 user_size;
+ 	int ret;
+ 
+ 	static_assert(__same_type(arg->out_devid, df->devid));
+ 
+ 	minsz = offsetofend(struct vfio_device_bind_iommufd, out_devid);
+ 
+-	if (copy_from_user(&bind, arg, minsz))
+-		return -EFAULT;
++	ret = get_user(user_size, &arg->argsz);
++	if (ret)
++		return ret;
++	if (user_size < minsz)
++		return -EINVAL;
++	ret = copy_struct_from_user(&bind, minsz, arg, user_size);
++	if (ret)
++		return ret;
+ 
+-	if (bind.argsz < minsz || bind.flags || bind.iommufd < 0)
++	if (bind.iommufd < 0 || bind.flags & ~VALID_FLAGS)
+ 		return -EINVAL;
+ 
+ 	/* BIND_IOMMUFD only allowed for cdev fds */
+@@ -93,6 +121,10 @@ long vfio_df_ioctl_bind_iommufd(struct vfio_device_file *df,
+ 		goto out_unlock;
+ 	}
+ 
++	ret = vfio_df_check_token(device, &bind);
++	if (ret)
++		goto out_unlock;
++
+ 	df->iommufd = iommufd_ctx_from_fd(bind.iommufd);
+ 	if (IS_ERR(df->iommufd)) {
+ 		ret = PTR_ERR(df->iommufd);
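
The switch to copy_struct_from_user() adopts the kernel's convention for growable ioctl structs: copy min(usize, ksize) bytes, zero-fill when userspace passes a shorter struct, and reject a longer struct unless all trailing bytes are zero. A userspace model of that contract (buffer sizes here are hypothetical, not the real uapi layout):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Model of copy_struct_from_user(): dst is ksize bytes, src is usize bytes. */
static int copy_struct(void *dst, size_t ksize, const void *src, size_t usize)
{
	size_t n = usize < ksize ? usize : ksize;

	memset(dst, 0, ksize);	/* newer kernel, older userspace: zero-fill */
	memcpy(dst, src, n);

	/* Older kernel, newer userspace: trailing bytes must be zero. */
	for (size_t i = ksize; i < usize; i++)
		if (((const uint8_t *)src)[i] != 0)
			return -E2BIG;
	return 0;
}

int main(void)
{
	uint8_t user_buf[8] = { 1, 2, 3, 4, 0, 0, 0, 1 };	/* unknown tail flag set */
	uint8_t kbuf[4];

	printf("%d\n", copy_struct(kbuf, sizeof(kbuf), user_buf, 4));	/* 0 */
	printf("%d\n", copy_struct(kbuf, sizeof(kbuf), user_buf, 8));	/* -E2BIG */
	return 0;
}
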
+diff --git a/drivers/vfio/group.c b/drivers/vfio/group.c
+index c321d442f0da09..c376a6279de0e6 100644
+--- a/drivers/vfio/group.c
++++ b/drivers/vfio/group.c
+@@ -192,11 +192,10 @@ static int vfio_df_group_open(struct vfio_device_file *df)
+ 		 * implies they expected translation to exist
+ 		 */
+ 		if (!capable(CAP_SYS_RAWIO) ||
+-		    vfio_iommufd_device_has_compat_ioas(device, df->iommufd))
++		    vfio_iommufd_device_has_compat_ioas(device, df->iommufd)) {
+ 			ret = -EPERM;
+-		else
+-			ret = 0;
+-		goto out_put_kvm;
++			goto out_put_kvm;
++		}
+ 	}
+ 
+ 	ret = vfio_df_open(df);
+diff --git a/drivers/vfio/iommufd.c b/drivers/vfio/iommufd.c
+index c8c3a2d53f86e1..a38d262c602809 100644
+--- a/drivers/vfio/iommufd.c
++++ b/drivers/vfio/iommufd.c
+@@ -25,6 +25,10 @@ int vfio_df_iommufd_bind(struct vfio_device_file *df)
+ 
+ 	lockdep_assert_held(&vdev->dev_set->lock);
+ 
++	/* Returns 0 to permit device opening under noiommu mode */
++	if (vfio_device_is_noiommu(vdev))
++		return 0;
++
+ 	return vdev->ops->bind_iommufd(vdev, ictx, &df->devid);
+ }
+ 
+diff --git a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
+index d12a350440d3ca..36b60e29320433 100644
+--- a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
++++ b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
+@@ -1580,6 +1580,7 @@ static const struct vfio_device_ops hisi_acc_vfio_pci_ops = {
+ 	.mmap = vfio_pci_core_mmap,
+ 	.request = vfio_pci_core_request,
+ 	.match = vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd = vfio_iommufd_physical_bind,
+ 	.unbind_iommufd = vfio_iommufd_physical_unbind,
+ 	.attach_ioas = vfio_iommufd_physical_attach_ioas,
+diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c
+index 709543e7eb0428..d83249aea275e8 100644
+--- a/drivers/vfio/pci/mlx5/main.c
++++ b/drivers/vfio/pci/mlx5/main.c
+@@ -1387,6 +1387,7 @@ static const struct vfio_device_ops mlx5vf_pci_ops = {
+ 	.mmap = vfio_pci_core_mmap,
+ 	.request = vfio_pci_core_request,
+ 	.match = vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd = vfio_iommufd_physical_bind,
+ 	.unbind_iommufd = vfio_iommufd_physical_unbind,
+ 	.attach_ioas = vfio_iommufd_physical_attach_ioas,
+diff --git a/drivers/vfio/pci/nvgrace-gpu/main.c b/drivers/vfio/pci/nvgrace-gpu/main.c
+index e5ac39c4cc6b6f..d95761dcdd58c4 100644
+--- a/drivers/vfio/pci/nvgrace-gpu/main.c
++++ b/drivers/vfio/pci/nvgrace-gpu/main.c
+@@ -696,6 +696,7 @@ static const struct vfio_device_ops nvgrace_gpu_pci_ops = {
+ 	.mmap		= nvgrace_gpu_mmap,
+ 	.request	= vfio_pci_core_request,
+ 	.match		= vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd	= vfio_iommufd_physical_bind,
+ 	.unbind_iommufd	= vfio_iommufd_physical_unbind,
+ 	.attach_ioas	= vfio_iommufd_physical_attach_ioas,
+@@ -715,6 +716,7 @@ static const struct vfio_device_ops nvgrace_gpu_pci_core_ops = {
+ 	.mmap		= vfio_pci_core_mmap,
+ 	.request	= vfio_pci_core_request,
+ 	.match		= vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd	= vfio_iommufd_physical_bind,
+ 	.unbind_iommufd	= vfio_iommufd_physical_unbind,
+ 	.attach_ioas	= vfio_iommufd_physical_attach_ioas,
+diff --git a/drivers/vfio/pci/pds/vfio_dev.c b/drivers/vfio/pci/pds/vfio_dev.c
+index 76a80ae7087b51..f3ccb0008f6752 100644
+--- a/drivers/vfio/pci/pds/vfio_dev.c
++++ b/drivers/vfio/pci/pds/vfio_dev.c
+@@ -201,9 +201,11 @@ static const struct vfio_device_ops pds_vfio_ops = {
+ 	.mmap = vfio_pci_core_mmap,
+ 	.request = vfio_pci_core_request,
+ 	.match = vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd = vfio_iommufd_physical_bind,
+ 	.unbind_iommufd = vfio_iommufd_physical_unbind,
+ 	.attach_ioas = vfio_iommufd_physical_attach_ioas,
++	.detach_ioas = vfio_iommufd_physical_detach_ioas,
+ };
+ 
+ const struct vfio_device_ops *pds_vfio_ops_info(void)
+diff --git a/drivers/vfio/pci/qat/main.c b/drivers/vfio/pci/qat/main.c
+index 845ed15b67718c..5cce6b0b8d2f3e 100644
+--- a/drivers/vfio/pci/qat/main.c
++++ b/drivers/vfio/pci/qat/main.c
+@@ -614,6 +614,7 @@ static const struct vfio_device_ops qat_vf_pci_ops = {
+ 	.mmap = vfio_pci_core_mmap,
+ 	.request = vfio_pci_core_request,
+ 	.match = vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd = vfio_iommufd_physical_bind,
+ 	.unbind_iommufd = vfio_iommufd_physical_unbind,
+ 	.attach_ioas = vfio_iommufd_physical_attach_ioas,
+diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
+index 5ba39f7623bb76..ac10f14417f2f3 100644
+--- a/drivers/vfio/pci/vfio_pci.c
++++ b/drivers/vfio/pci/vfio_pci.c
+@@ -138,6 +138,7 @@ static const struct vfio_device_ops vfio_pci_ops = {
+ 	.mmap		= vfio_pci_core_mmap,
+ 	.request	= vfio_pci_core_request,
+ 	.match		= vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd	= vfio_iommufd_physical_bind,
+ 	.unbind_iommufd	= vfio_iommufd_physical_unbind,
+ 	.attach_ioas	= vfio_iommufd_physical_attach_ioas,
+diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
+index 6328c3a05bcdd4..fad410cf91bc22 100644
+--- a/drivers/vfio/pci/vfio_pci_core.c
++++ b/drivers/vfio/pci/vfio_pci_core.c
+@@ -1821,9 +1821,13 @@ void vfio_pci_core_request(struct vfio_device *core_vdev, unsigned int count)
+ }
+ EXPORT_SYMBOL_GPL(vfio_pci_core_request);
+ 
+-static int vfio_pci_validate_vf_token(struct vfio_pci_core_device *vdev,
+-				      bool vf_token, uuid_t *uuid)
++int vfio_pci_core_match_token_uuid(struct vfio_device *core_vdev,
++				   const uuid_t *uuid)
++
+ {
++	struct vfio_pci_core_device *vdev =
++		container_of(core_vdev, struct vfio_pci_core_device, vdev);
++
+ 	/*
+ 	 * There's always some degree of trust or collaboration between SR-IOV
+ 	 * PF and VFs, even if just that the PF hosts the SR-IOV capability and
+@@ -1854,7 +1858,7 @@ static int vfio_pci_validate_vf_token(struct vfio_pci_core_device *vdev,
+ 		bool match;
+ 
+ 		if (!pf_vdev) {
+-			if (!vf_token)
++			if (!uuid)
+ 				return 0; /* PF is not vfio-pci, no VF token */
+ 
+ 			pci_info_ratelimited(vdev->pdev,
+@@ -1862,7 +1866,7 @@ static int vfio_pci_validate_vf_token(struct vfio_pci_core_device *vdev,
+ 			return -EINVAL;
+ 		}
+ 
+-		if (!vf_token) {
++		if (!uuid) {
+ 			pci_info_ratelimited(vdev->pdev,
+ 				"VF token required to access device\n");
+ 			return -EACCES;
+@@ -1880,7 +1884,7 @@ static int vfio_pci_validate_vf_token(struct vfio_pci_core_device *vdev,
+ 	} else if (vdev->vf_token) {
+ 		mutex_lock(&vdev->vf_token->lock);
+ 		if (vdev->vf_token->users) {
+-			if (!vf_token) {
++			if (!uuid) {
+ 				mutex_unlock(&vdev->vf_token->lock);
+ 				pci_info_ratelimited(vdev->pdev,
+ 					"VF token required to access device\n");
+@@ -1893,12 +1897,12 @@ static int vfio_pci_validate_vf_token(struct vfio_pci_core_device *vdev,
+ 					"Incorrect VF token provided for device\n");
+ 				return -EACCES;
+ 			}
+-		} else if (vf_token) {
++		} else if (uuid) {
+ 			uuid_copy(&vdev->vf_token->uuid, uuid);
+ 		}
+ 
+ 		mutex_unlock(&vdev->vf_token->lock);
+-	} else if (vf_token) {
++	} else if (uuid) {
+ 		pci_info_ratelimited(vdev->pdev,
+ 			"VF token incorrectly provided, not a PF or VF\n");
+ 		return -EINVAL;
+@@ -1906,6 +1910,7 @@ static int vfio_pci_validate_vf_token(struct vfio_pci_core_device *vdev,
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(vfio_pci_core_match_token_uuid);
+ 
+ #define VF_TOKEN_ARG "vf_token="
+ 
+@@ -1952,7 +1957,8 @@ int vfio_pci_core_match(struct vfio_device *core_vdev, char *buf)
+ 		}
+ 	}
+ 
+-	ret = vfio_pci_validate_vf_token(vdev, vf_token, &uuid);
++	ret = core_vdev->ops->match_token_uuid(core_vdev,
++					       vf_token ? &uuid : NULL);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -2149,7 +2155,7 @@ int vfio_pci_core_register_device(struct vfio_pci_core_device *vdev)
+ 		return -EBUSY;
+ 	}
+ 
+-	if (pci_is_root_bus(pdev->bus)) {
++	if (pci_is_root_bus(pdev->bus) || pdev->is_virtfn) {
+ 		ret = vfio_assign_device_set(&vdev->vdev, vdev);
+ 	} else if (!pci_probe_reset_slot(pdev->slot)) {
+ 		ret = vfio_assign_device_set(&vdev->vdev, pdev->slot);
+diff --git a/drivers/vfio/pci/virtio/main.c b/drivers/vfio/pci/virtio/main.c
+index 515fe1b9f94d80..8084f3e36a9f70 100644
+--- a/drivers/vfio/pci/virtio/main.c
++++ b/drivers/vfio/pci/virtio/main.c
+@@ -94,6 +94,7 @@ static const struct vfio_device_ops virtiovf_vfio_pci_lm_ops = {
+ 	.mmap = vfio_pci_core_mmap,
+ 	.request = vfio_pci_core_request,
+ 	.match = vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd = vfio_iommufd_physical_bind,
+ 	.unbind_iommufd = vfio_iommufd_physical_unbind,
+ 	.attach_ioas = vfio_iommufd_physical_attach_ioas,
+@@ -114,6 +115,7 @@ static const struct vfio_device_ops virtiovf_vfio_pci_tran_lm_ops = {
+ 	.mmap = vfio_pci_core_mmap,
+ 	.request = vfio_pci_core_request,
+ 	.match = vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd = vfio_iommufd_physical_bind,
+ 	.unbind_iommufd = vfio_iommufd_physical_unbind,
+ 	.attach_ioas = vfio_iommufd_physical_attach_ioas,
+@@ -134,6 +136,7 @@ static const struct vfio_device_ops virtiovf_vfio_pci_ops = {
+ 	.mmap = vfio_pci_core_mmap,
+ 	.request = vfio_pci_core_request,
+ 	.match = vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd = vfio_iommufd_physical_bind,
+ 	.unbind_iommufd = vfio_iommufd_physical_unbind,
+ 	.attach_ioas = vfio_iommufd_physical_attach_ioas,
+diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
+index 1fd261efc582d0..5046cae052224e 100644
+--- a/drivers/vfio/vfio_main.c
++++ b/drivers/vfio/vfio_main.c
+@@ -583,7 +583,8 @@ void vfio_df_close(struct vfio_device_file *df)
+ 
+ 	lockdep_assert_held(&device->dev_set->lock);
+ 
+-	vfio_assert_device_open(device);
++	if (!vfio_assert_device_open(device))
++		return;
+ 	if (device->open_count == 1)
+ 		vfio_df_device_last_close(df);
+ 	device->open_count--;
+diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
+index 020d4fbb947ca0..bc0f385744974d 100644
+--- a/drivers/vhost/Kconfig
++++ b/drivers/vhost/Kconfig
+@@ -95,4 +95,22 @@ config VHOST_CROSS_ENDIAN_LEGACY
+ 
+ 	  If unsure, say "N".
+ 
++config VHOST_ENABLE_FORK_OWNER_CONTROL
++	bool "Enable VHOST_ENABLE_FORK_OWNER_CONTROL"
++	default y
++	help
++	  This option enables two IOCTLs: VHOST_SET_FORK_FROM_OWNER and
++	  VHOST_GET_FORK_FROM_OWNER. These allow userspace applications
++	  to modify the vhost worker mode for vhost devices.
++
++	  Also expose module parameter 'fork_from_owner_default' to allow users
++	  to configure the default mode for vhost workers.
++
++	  By default, `VHOST_ENABLE_FORK_OWNER_CONTROL` is set to `y`,
++	  so users can change the worker thread mode as needed.
++	  If this config is disabled (n), the related IOCTLs and parameters
++	  will be unavailable.
++
++	  If unsure, say "Y".
++
+ endif
+diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
+index 26bcf3a7f70cb2..d3b8355f79532b 100644
+--- a/drivers/vhost/scsi.c
++++ b/drivers/vhost/scsi.c
+@@ -71,7 +71,7 @@ static int vhost_scsi_set_inline_sg_cnt(const char *buf,
+ 	if (ret)
+ 		return ret;
+ 
+-	if (ret > VHOST_SCSI_PREALLOC_SGLS) {
++	if (cnt > VHOST_SCSI_PREALLOC_SGLS) {
+ 		pr_err("Max inline_sg_cnt is %u\n", VHOST_SCSI_PREALLOC_SGLS);
+ 		return -EINVAL;
+ 	}
+@@ -1148,10 +1148,8 @@ vhost_scsi_get_req(struct vhost_virtqueue *vq, struct vhost_scsi_ctx *vc,
+ 			/* validated at handler entry */
+ 			vs_tpg = vhost_vq_get_backend(vq);
+ 			tpg = READ_ONCE(vs_tpg[*vc->target]);
+-			if (unlikely(!tpg)) {
+-				vq_err(vq, "Target 0x%x does not exist\n", *vc->target);
++			if (unlikely(!tpg))
+ 				goto out;
+-			}
+ 		}
+ 
+ 		if (tpgp)
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index 63612faeab7271..79b0b7cd28601a 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -22,6 +22,7 @@
+ #include <linux/slab.h>
+ #include <linux/vmalloc.h>
+ #include <linux/kthread.h>
++#include <linux/cgroup.h>
+ #include <linux/module.h>
+ #include <linux/sort.h>
+ #include <linux/sched/mm.h>
+@@ -41,6 +42,13 @@ static int max_iotlb_entries = 2048;
+ module_param(max_iotlb_entries, int, 0444);
+ MODULE_PARM_DESC(max_iotlb_entries,
+ 	"Maximum number of iotlb entries. (default: 2048)");
++static bool fork_from_owner_default = VHOST_FORK_OWNER_TASK;
++
++#ifdef CONFIG_VHOST_ENABLE_FORK_OWNER_CONTROL
++module_param(fork_from_owner_default, bool, 0444);
++MODULE_PARM_DESC(fork_from_owner_default,
++		 "Set task mode as the default (default: Y)");
++#endif
+ 
+ enum {
+ 	VHOST_MEMORY_F_LOG = 0x1,
+@@ -242,7 +250,7 @@ static void vhost_worker_queue(struct vhost_worker *worker,
+ 		 * test_and_set_bit() implies a memory barrier.
+ 		 */
+ 		llist_add(&work->node, &worker->work_list);
+-		vhost_task_wake(worker->vtsk);
++		worker->ops->wakeup(worker);
+ 	}
+ }
+ 
+@@ -388,6 +396,44 @@ static void vhost_vq_reset(struct vhost_dev *dev,
+ 	__vhost_vq_meta_reset(vq);
+ }
+ 
++static int vhost_run_work_kthread_list(void *data)
++{
++	struct vhost_worker *worker = data;
++	struct vhost_work *work, *work_next;
++	struct vhost_dev *dev = worker->dev;
++	struct llist_node *node;
++
++	kthread_use_mm(dev->mm);
++
++	for (;;) {
++		/* mb paired w/ kthread_stop */
++		set_current_state(TASK_INTERRUPTIBLE);
++
++		if (kthread_should_stop()) {
++			__set_current_state(TASK_RUNNING);
++			break;
++		}
++		node = llist_del_all(&worker->work_list);
++		if (!node)
++			schedule();
++
++		node = llist_reverse_order(node);
++		/* make sure flag is seen after deletion */
++		smp_wmb();
++		llist_for_each_entry_safe(work, work_next, node, node) {
++			clear_bit(VHOST_WORK_QUEUED, &work->flags);
++			__set_current_state(TASK_RUNNING);
++			kcov_remote_start_common(worker->kcov_handle);
++			work->fn(work);
++			kcov_remote_stop();
++			cond_resched();
++		}
++	}
++	kthread_unuse_mm(dev->mm);
++
++	return 0;
++}
++
+ static bool vhost_run_work_list(void *data)
+ {
+ 	struct vhost_worker *worker = data;
+@@ -552,6 +598,7 @@ void vhost_dev_init(struct vhost_dev *dev,
+ 	dev->byte_weight = byte_weight;
+ 	dev->use_worker = use_worker;
+ 	dev->msg_handler = msg_handler;
++	dev->fork_owner = fork_from_owner_default;
+ 	init_waitqueue_head(&dev->wait);
+ 	INIT_LIST_HEAD(&dev->read_list);
+ 	INIT_LIST_HEAD(&dev->pending_list);
+@@ -581,6 +628,46 @@ long vhost_dev_check_owner(struct vhost_dev *dev)
+ }
+ EXPORT_SYMBOL_GPL(vhost_dev_check_owner);
+ 
++struct vhost_attach_cgroups_struct {
++	struct vhost_work work;
++	struct task_struct *owner;
++	int ret;
++};
++
++static void vhost_attach_cgroups_work(struct vhost_work *work)
++{
++	struct vhost_attach_cgroups_struct *s;
++
++	s = container_of(work, struct vhost_attach_cgroups_struct, work);
++	s->ret = cgroup_attach_task_all(s->owner, current);
++}
++
++static int vhost_attach_task_to_cgroups(struct vhost_worker *worker)
++{
++	struct vhost_attach_cgroups_struct attach;
++	int saved_cnt;
++
++	attach.owner = current;
++
++	vhost_work_init(&attach.work, vhost_attach_cgroups_work);
++	vhost_worker_queue(worker, &attach.work);
++
++	mutex_lock(&worker->mutex);
++
++	/*
++	 * Bypass attachment_cnt check in __vhost_worker_flush:
++	 * Temporarily set attachment_cnt to INT_MAX to bypass the
++	 * attachment_cnt check in __vhost_worker_flush().
++	saved_cnt = worker->attachment_cnt;
++	worker->attachment_cnt = INT_MAX;
++	__vhost_worker_flush(worker);
++	worker->attachment_cnt = saved_cnt;
++
++	mutex_unlock(&worker->mutex);
++
++	return attach.ret;
++}
++
+ /* Caller should have device mutex */
+ bool vhost_dev_has_owner(struct vhost_dev *dev)
+ {
+@@ -626,7 +713,7 @@ static void vhost_worker_destroy(struct vhost_dev *dev,
+ 
+ 	WARN_ON(!llist_empty(&worker->work_list));
+ 	xa_erase(&dev->worker_xa, worker->id);
+-	vhost_task_stop(worker->vtsk);
++	worker->ops->stop(worker);
+ 	kfree(worker);
+ }
+ 
+@@ -649,42 +736,115 @@ static void vhost_workers_free(struct vhost_dev *dev)
+ 	xa_destroy(&dev->worker_xa);
+ }
+ 
++static void vhost_task_wakeup(struct vhost_worker *worker)
++{
++	return vhost_task_wake(worker->vtsk);
++}
++
++static void vhost_kthread_wakeup(struct vhost_worker *worker)
++{
++	wake_up_process(worker->kthread_task);
++}
++
++static void vhost_task_do_stop(struct vhost_worker *worker)
++{
++	return vhost_task_stop(worker->vtsk);
++}
++
++static void vhost_kthread_do_stop(struct vhost_worker *worker)
++{
++	kthread_stop(worker->kthread_task);
++}
++
++static int vhost_task_worker_create(struct vhost_worker *worker,
++				    struct vhost_dev *dev, const char *name)
++{
++	struct vhost_task *vtsk;
++	u32 id;
++	int ret;
++
++	vtsk = vhost_task_create(vhost_run_work_list, vhost_worker_killed,
++				 worker, name);
++	if (IS_ERR(vtsk))
++		return PTR_ERR(vtsk);
++
++	worker->vtsk = vtsk;
++	vhost_task_start(vtsk);
++	ret = xa_alloc(&dev->worker_xa, &id, worker, xa_limit_32b, GFP_KERNEL);
++	if (ret < 0) {
++		vhost_task_do_stop(worker);
++		return ret;
++	}
++	worker->id = id;
++	return 0;
++}
++
++static int vhost_kthread_worker_create(struct vhost_worker *worker,
++				       struct vhost_dev *dev, const char *name)
++{
++	struct task_struct *task;
++	u32 id;
++	int ret;
++
++	task = kthread_create(vhost_run_work_kthread_list, worker, "%s", name);
++	if (IS_ERR(task))
++		return PTR_ERR(task);
++
++	worker->kthread_task = task;
++	wake_up_process(task);
++	ret = xa_alloc(&dev->worker_xa, &id, worker, xa_limit_32b, GFP_KERNEL);
++	if (ret < 0)
++		goto stop_worker;
++
++	ret = vhost_attach_task_to_cgroups(worker);
++	if (ret)
++		goto stop_worker;
++
++	worker->id = id;
++	return 0;
++
++stop_worker:
++	vhost_kthread_do_stop(worker);
++	return ret;
++}
++
++static const struct vhost_worker_ops kthread_ops = {
++	.create = vhost_kthread_worker_create,
++	.stop = vhost_kthread_do_stop,
++	.wakeup = vhost_kthread_wakeup,
++};
++
++static const struct vhost_worker_ops vhost_task_ops = {
++	.create = vhost_task_worker_create,
++	.stop = vhost_task_do_stop,
++	.wakeup = vhost_task_wakeup,
++};
++
+ static struct vhost_worker *vhost_worker_create(struct vhost_dev *dev)
+ {
+ 	struct vhost_worker *worker;
+-	struct vhost_task *vtsk;
+ 	char name[TASK_COMM_LEN];
+ 	int ret;
+-	u32 id;
++	const struct vhost_worker_ops *ops = dev->fork_owner ? &vhost_task_ops :
++							       &kthread_ops;
+ 
+ 	worker = kzalloc(sizeof(*worker), GFP_KERNEL_ACCOUNT);
+ 	if (!worker)
+ 		return NULL;
+ 
+ 	worker->dev = dev;
++	worker->ops = ops;
+ 	snprintf(name, sizeof(name), "vhost-%d", current->pid);
+ 
+-	vtsk = vhost_task_create(vhost_run_work_list, vhost_worker_killed,
+-				 worker, name);
+-	if (IS_ERR(vtsk))
+-		goto free_worker;
+-
+ 	mutex_init(&worker->mutex);
+ 	init_llist_head(&worker->work_list);
+ 	worker->kcov_handle = kcov_common_handle();
+-	worker->vtsk = vtsk;
+-
+-	vhost_task_start(vtsk);
+-
+-	ret = xa_alloc(&dev->worker_xa, &id, worker, xa_limit_32b, GFP_KERNEL);
++	ret = ops->create(worker, dev, name);
+ 	if (ret < 0)
+-		goto stop_worker;
+-	worker->id = id;
++		goto free_worker;
+ 
+ 	return worker;
+ 
+-stop_worker:
+-	vhost_task_stop(vtsk);
+ free_worker:
+ 	kfree(worker);
+ 	return NULL;
+@@ -865,6 +1025,14 @@ long vhost_worker_ioctl(struct vhost_dev *dev, unsigned int ioctl,
+ 	switch (ioctl) {
+ 	/* dev worker ioctls */
+ 	case VHOST_NEW_WORKER:
++		/*
++		 * vhost_tasks will account for worker threads under the parent's
++		 * NPROC value but kthreads do not. To avoid userspace overflowing
++		 * the system with worker threads, fork_owner must be true.
++		 */
++		if (!dev->fork_owner)
++			return -EFAULT;
++
+ 		ret = vhost_new_worker(dev, &state);
+ 		if (!ret && copy_to_user(argp, &state, sizeof(state)))
+ 			ret = -EFAULT;
+@@ -982,6 +1150,7 @@ void vhost_dev_reset_owner(struct vhost_dev *dev, struct vhost_iotlb *umem)
+ 
+ 	vhost_dev_cleanup(dev);
+ 
++	dev->fork_owner = fork_from_owner_default;
+ 	dev->umem = umem;
+ 	/* We don't need VQ locks below since vhost_dev_cleanup makes sure
+ 	 * VQs aren't running.
+@@ -2135,6 +2304,45 @@ long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
+ 		goto done;
+ 	}
+ 
++#ifdef CONFIG_VHOST_ENABLE_FORK_OWNER_CONTROL
++	if (ioctl == VHOST_SET_FORK_FROM_OWNER) {
++		/* Only allow modification before owner is set */
++		if (vhost_dev_has_owner(d)) {
++			r = -EBUSY;
++			goto done;
++		}
++		u8 fork_owner_val;
++
++		if (get_user(fork_owner_val, (u8 __user *)argp)) {
++			r = -EFAULT;
++			goto done;
++		}
++		if (fork_owner_val != VHOST_FORK_OWNER_TASK &&
++		    fork_owner_val != VHOST_FORK_OWNER_KTHREAD) {
++			r = -EINVAL;
++			goto done;
++		}
++	 * If the h8 exit fails during the runtime resume process, the link
++	 * becomes stuck and cannot be recovered through the error handler.
++	 * To fix this, use link recovery instead of the error handler.
++	}
++	if (ioctl == VHOST_GET_FORK_FROM_OWNER) {
++		u8 fork_owner_val = d->fork_owner;
++
++		if (fork_owner_val != VHOST_FORK_OWNER_TASK &&
++		    fork_owner_val != VHOST_FORK_OWNER_KTHREAD) {
++			r = -EINVAL;
++			goto done;
++		}
++		if (put_user(fork_owner_val, (u8 __user *)argp)) {
++			r = -EFAULT;
++			goto done;
++		}
++		r = 0;
++		goto done;
++	}
++#endif
++
+ 	/* You must be the owner to do anything else */
+ 	r = vhost_dev_check_owner(d);
+ 	if (r)
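
The vhost changes above reintroduce kthread-backed workers behind an ops table, selected by `fork_owner`, so the rest of vhost calls worker->ops->wakeup() without caring which backend is active. A self-contained sketch of dispatching through such a vtable (threading details omitted; only the kthread-flavored backend is stubbed in):

#include <stdio.h>

struct worker;

struct worker_ops {
	int  (*create)(struct worker *w, const char *name);
	void (*stop)(struct worker *w);
	void (*wakeup)(struct worker *w);
};

struct worker {
	const struct worker_ops *ops;
	const char *name;
};

static int kthread_create_op(struct worker *w, const char *name)
{
	w->name = name;
	printf("kthread backend: created %s\n", name);
	return 0;
}
static void kthread_stop_op(struct worker *w)   { printf("stop %s\n", w->name); }
static void kthread_wakeup_op(struct worker *w) { printf("wake %s\n", w->name); }

static const struct worker_ops kthread_ops = {
	.create = kthread_create_op,
	.stop   = kthread_stop_op,
	.wakeup = kthread_wakeup_op,
};

/* A vhost_task-style backend would be a second ops table of the same
 * shape; callers never branch on the backend type. */

int main(void)
{
	struct worker w = { .ops = &kthread_ops };

	if (w.ops->create(&w, "vhost-1234"))
		return 1;
	w.ops->wakeup(&w);	/* vhost_worker_queue() does exactly this */
	w.ops->stop(&w);
	return 0;
}
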
+diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
+index bb75a292d50cd3..ab704d84fb3446 100644
+--- a/drivers/vhost/vhost.h
++++ b/drivers/vhost/vhost.h
+@@ -26,7 +26,18 @@ struct vhost_work {
+ 	unsigned long		flags;
+ };
+ 
++struct vhost_worker;
++struct vhost_dev;
++
++struct vhost_worker_ops {
++	int (*create)(struct vhost_worker *worker, struct vhost_dev *dev,
++		      const char *name);
++	void (*stop)(struct vhost_worker *worker);
++	void (*wakeup)(struct vhost_worker *worker);
++};
++
+ struct vhost_worker {
++	struct task_struct *kthread_task;
+ 	struct vhost_task	*vtsk;
+ 	struct vhost_dev	*dev;
+ 	/* Used to serialize device wide flushing with worker swapping. */
+@@ -36,6 +47,7 @@ struct vhost_worker {
+ 	u32			id;
+ 	int			attachment_cnt;
+ 	bool			killed;
++	const struct vhost_worker_ops *ops;
+ };
+ 
+ /* Poll a file (eventfd or socket) */
+@@ -176,6 +188,16 @@ struct vhost_dev {
+ 	int byte_weight;
+ 	struct xarray worker_xa;
+ 	bool use_worker;
++	/*
++	 * If fork_owner is true we use vhost_tasks to create
++	 * the worker so all settings/limits like cgroups, NPROC,
++	 * scheduler, etc are inherited from the owner. If false,
++	 * we use kthreads and only attach to the same cgroups
++	 * as the owner for compat with older kernels.
++	 * The default value is set by the fork_from_owner_default
++	 * module parameter.
++	 */
++	bool fork_owner;
+ 	int (*msg_handler)(struct vhost_dev *dev, u32 asid,
+ 			   struct vhost_iotlb_msg *msg);
+ };
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 2df48037688d1d..2b2d36c021ba55 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -952,13 +952,13 @@ static const char *fbcon_startup(void)
+ 	int rows, cols;
+ 
+ 	/*
+-	 *  If num_registered_fb is zero, this is a call for the dummy part.
++	 *  If fbcon_num_registered_fb is zero, this is a call for the dummy part.
+ 	 *  The frame buffer devices weren't initialized yet.
+ 	 */
+ 	if (!fbcon_num_registered_fb || info_idx == -1)
+ 		return display_desc;
+ 	/*
+-	 * Instead of blindly using registered_fb[0], we use info_idx, set by
++	 * Instead of blindly using fbcon_registered_fb[0], we use info_idx, set by
+ 	 * fbcon_fb_registered();
+ 	 */
+ 	info = fbcon_registered_fb[info_idx];
+diff --git a/drivers/video/fbdev/imxfb.c b/drivers/video/fbdev/imxfb.c
+index f30da32cdaed4d..a077bf346bdf4b 100644
+--- a/drivers/video/fbdev/imxfb.c
++++ b/drivers/video/fbdev/imxfb.c
+@@ -996,8 +996,13 @@ static int imxfb_probe(struct platform_device *pdev)
+ 	info->fix.smem_start = fbi->map_dma;
+ 
+ 	INIT_LIST_HEAD(&info->modelist);
+-	for (i = 0; i < fbi->num_modes; i++)
+-		fb_add_videomode(&fbi->mode[i].mode, &info->modelist);
++	for (i = 0; i < fbi->num_modes; i++) {
++		ret = fb_add_videomode(&fbi->mode[i].mode, &info->modelist);
++		if (ret) {
++			dev_err(&pdev->dev, "Failed to add videomode\n");
++			goto failed_cmap;
++		}
++	}
+ 
+ 	/*
+ 	 * This makes sure that our colour bitfield
+diff --git a/drivers/watchdog/ziirave_wdt.c b/drivers/watchdog/ziirave_wdt.c
+index fcc1ba02e75b66..5c6e3fa001d885 100644
+--- a/drivers/watchdog/ziirave_wdt.c
++++ b/drivers/watchdog/ziirave_wdt.c
+@@ -302,6 +302,9 @@ static int ziirave_firm_verify(struct watchdog_device *wdd,
+ 		const u16 len = be16_to_cpu(rec->len);
+ 		const u32 addr = be32_to_cpu(rec->addr);
+ 
++		if (len > sizeof(data))
++			return -EINVAL;
++
+ 		if (ziirave_firm_addr_readonly(addr))
+ 			continue;
+ 
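
The record's length field comes straight from the firmware image, so it must be validated against the destination buffer before being used as a copy length. The bug class in miniature (buffer sizes invented for the example):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static int verify_record(const uint8_t *rec, uint16_t len)
{
	uint8_t data[32];

	if (len > sizeof(data))	/* attacker-controlled length: reject */
		return -EINVAL;
	memcpy(data, rec, len);
	return 0;
}

int main(void)
{
	uint8_t blob[64] = { 0 };

	printf("%d\n", verify_record(blob, 16));	/* 0 */
	printf("%d\n", verify_record(blob, 64));	/* -EINVAL */
	return 0;
}
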
+diff --git a/drivers/xen/gntdev-common.h b/drivers/xen/gntdev-common.h
+index 9c286b2a190016..ac8ce3179ba2e9 100644
+--- a/drivers/xen/gntdev-common.h
++++ b/drivers/xen/gntdev-common.h
+@@ -26,6 +26,10 @@ struct gntdev_priv {
+ 	/* lock protects maps and freeable_maps. */
+ 	struct mutex lock;
+ 
++	/* Free instances of struct gntdev_copy_batch. */
++	struct gntdev_copy_batch *batch;
++	struct mutex batch_lock;
++
+ #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+ 	/* Device for which DMA memory is allocated. */
+ 	struct device *dma_dev;
+diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
+index 5453d86324f66f..82855105ab857f 100644
+--- a/drivers/xen/gntdev-dmabuf.c
++++ b/drivers/xen/gntdev-dmabuf.c
+@@ -357,8 +357,11 @@ struct gntdev_dmabuf_export_args {
+ static int dmabuf_exp_from_pages(struct gntdev_dmabuf_export_args *args)
+ {
+ 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+-	struct gntdev_dmabuf *gntdev_dmabuf;
+-	int ret;
++	struct gntdev_dmabuf *gntdev_dmabuf __free(kfree) = NULL;
++	CLASS(get_unused_fd, ret)(O_CLOEXEC);
++
++	if (ret < 0)
++		return ret;
+ 
+ 	gntdev_dmabuf = kzalloc(sizeof(*gntdev_dmabuf), GFP_KERNEL);
+ 	if (!gntdev_dmabuf)
+@@ -383,32 +386,21 @@ static int dmabuf_exp_from_pages(struct gntdev_dmabuf_export_args *args)
+ 	exp_info.priv = gntdev_dmabuf;
+ 
+ 	gntdev_dmabuf->dmabuf = dma_buf_export(&exp_info);
+-	if (IS_ERR(gntdev_dmabuf->dmabuf)) {
+-		ret = PTR_ERR(gntdev_dmabuf->dmabuf);
+-		gntdev_dmabuf->dmabuf = NULL;
+-		goto fail;
+-	}
+-
+-	ret = dma_buf_fd(gntdev_dmabuf->dmabuf, O_CLOEXEC);
+-	if (ret < 0)
+-		goto fail;
++	if (IS_ERR(gntdev_dmabuf->dmabuf))
++		return PTR_ERR(gntdev_dmabuf->dmabuf);
+ 
+ 	gntdev_dmabuf->fd = ret;
+ 	args->fd = ret;
+ 
+ 	pr_debug("Exporting DMA buffer with fd %d\n", ret);
+ 
++	get_file(gntdev_dmabuf->priv->filp);
+ 	mutex_lock(&args->dmabuf_priv->lock);
+ 	list_add(&gntdev_dmabuf->next, &args->dmabuf_priv->exp_list);
+ 	mutex_unlock(&args->dmabuf_priv->lock);
+-	get_file(gntdev_dmabuf->priv->filp);
+-	return 0;
+ 
+-fail:
+-	if (gntdev_dmabuf->dmabuf)
+-		dma_buf_put(gntdev_dmabuf->dmabuf);
+-	kfree(gntdev_dmabuf);
+-	return ret;
++	fd_install(take_fd(ret), no_free_ptr(gntdev_dmabuf)->dmabuf->file);
++	return 0;
+ }
+ 
+ static struct gntdev_grant_map *
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index 61faea1f066305..1f21607656182a 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -56,6 +56,18 @@ MODULE_AUTHOR("Derek G. Murray <Derek.Murray@cl.cam.ac.uk>, "
+ 	      "Gerd Hoffmann <kraxel@redhat.com>");
+ MODULE_DESCRIPTION("User-space granted page access driver");
+ 
++#define GNTDEV_COPY_BATCH 16
++
++struct gntdev_copy_batch {
++	struct gnttab_copy ops[GNTDEV_COPY_BATCH];
++	struct page *pages[GNTDEV_COPY_BATCH];
++	s16 __user *status[GNTDEV_COPY_BATCH];
++	unsigned int nr_ops;
++	unsigned int nr_pages;
++	bool writeable;
++	struct gntdev_copy_batch *next;
++};
++
+ static unsigned int limit = 64*1024;
+ module_param(limit, uint, 0644);
+ MODULE_PARM_DESC(limit,
+@@ -584,6 +596,8 @@ static int gntdev_open(struct inode *inode, struct file *flip)
+ 	INIT_LIST_HEAD(&priv->maps);
+ 	mutex_init(&priv->lock);
+ 
++	mutex_init(&priv->batch_lock);
++
+ #ifdef CONFIG_XEN_GNTDEV_DMABUF
+ 	priv->dmabuf_priv = gntdev_dmabuf_init(flip);
+ 	if (IS_ERR(priv->dmabuf_priv)) {
+@@ -608,6 +622,7 @@ static int gntdev_release(struct inode *inode, struct file *flip)
+ {
+ 	struct gntdev_priv *priv = flip->private_data;
+ 	struct gntdev_grant_map *map;
++	struct gntdev_copy_batch *batch;
+ 
+ 	pr_debug("priv %p\n", priv);
+ 
+@@ -620,6 +635,14 @@ static int gntdev_release(struct inode *inode, struct file *flip)
+ 	}
+ 	mutex_unlock(&priv->lock);
+ 
++	mutex_lock(&priv->batch_lock);
++	while (priv->batch) {
++		batch = priv->batch;
++		priv->batch = batch->next;
++		kfree(batch);
++	}
++	mutex_unlock(&priv->batch_lock);
++
+ #ifdef CONFIG_XEN_GNTDEV_DMABUF
+ 	gntdev_dmabuf_fini(priv->dmabuf_priv);
+ #endif
+@@ -785,17 +808,6 @@ static long gntdev_ioctl_notify(struct gntdev_priv *priv, void __user *u)
+ 	return rc;
+ }
+ 
+-#define GNTDEV_COPY_BATCH 16
+-
+-struct gntdev_copy_batch {
+-	struct gnttab_copy ops[GNTDEV_COPY_BATCH];
+-	struct page *pages[GNTDEV_COPY_BATCH];
+-	s16 __user *status[GNTDEV_COPY_BATCH];
+-	unsigned int nr_ops;
+-	unsigned int nr_pages;
+-	bool writeable;
+-};
+-
+ static int gntdev_get_page(struct gntdev_copy_batch *batch, void __user *virt,
+ 				unsigned long *gfn)
+ {
+@@ -953,36 +965,53 @@ static int gntdev_grant_copy_seg(struct gntdev_copy_batch *batch,
+ static long gntdev_ioctl_grant_copy(struct gntdev_priv *priv, void __user *u)
+ {
+ 	struct ioctl_gntdev_grant_copy copy;
+-	struct gntdev_copy_batch batch;
++	struct gntdev_copy_batch *batch;
+ 	unsigned int i;
+ 	int ret = 0;
+ 
+ 	if (copy_from_user(&copy, u, sizeof(copy)))
+ 		return -EFAULT;
+ 
+-	batch.nr_ops = 0;
+-	batch.nr_pages = 0;
++	mutex_lock(&priv->batch_lock);
++	if (!priv->batch) {
++		batch = kmalloc(sizeof(*batch), GFP_KERNEL);
++	} else {
++		batch = priv->batch;
++		priv->batch = batch->next;
++	}
++	mutex_unlock(&priv->batch_lock);
++	if (!batch)
++		return -ENOMEM;
++
++	batch->nr_ops = 0;
++	batch->nr_pages = 0;
+ 
+ 	for (i = 0; i < copy.count; i++) {
+ 		struct gntdev_grant_copy_segment seg;
+ 
+ 		if (copy_from_user(&seg, &copy.segments[i], sizeof(seg))) {
+ 			ret = -EFAULT;
++			gntdev_put_pages(batch);
+ 			goto out;
+ 		}
+ 
+-		ret = gntdev_grant_copy_seg(&batch, &seg, &copy.segments[i].status);
+-		if (ret < 0)
++		ret = gntdev_grant_copy_seg(batch, &seg, &copy.segments[i].status);
++		if (ret < 0) {
++			gntdev_put_pages(batch);
+ 			goto out;
++		}
+ 
+ 		cond_resched();
+ 	}
+-	if (batch.nr_ops)
+-		ret = gntdev_copy(&batch);
+-	return ret;
++	if (batch->nr_ops)
++		ret = gntdev_copy(batch);
++
++ out:
++	mutex_lock(&priv->batch_lock);
++	batch->next = priv->batch;
++	priv->batch = batch;
++	mutex_unlock(&priv->batch_lock);
+ 
+-  out:
+-	gntdev_put_pages(&batch);
+ 	return ret;
+ }
+ 
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index a2e7979372ccd8..648531fe09002c 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -4585,16 +4585,13 @@ int btrfs_del_items(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ 
+ /*
+  * A helper function to walk down the tree starting at min_key, and looking
+- * for nodes or leaves that are have a minimum transaction id.
++ * for leaves that have a minimum transaction id.
+  * This is used by the btree defrag code, and tree logging
+  *
+  * This does not cow, but it does stuff the starting key it finds back
+  * into min_key, so you can call btrfs_search_slot with cow=1 on the
+  * key and get a writable path.
+  *
+- * This honors path->lowest_level to prevent descent past a given level
+- * of the tree.
+- *
+  * min_trans indicates the oldest transaction that you are interested
+  * in walking through.  Any nodes or leaves older than min_trans are
+  * skipped over (without reading them).
+@@ -4615,6 +4612,7 @@ int btrfs_search_forward(struct btrfs_root *root, struct btrfs_key *min_key,
+ 	int keep_locks = path->keep_locks;
+ 
+ 	ASSERT(!path->nowait);
++	ASSERT(path->lowest_level == 0);
+ 	path->keep_locks = 1;
+ again:
+ 	cur = btrfs_read_lock_root_node(root);
+@@ -4636,8 +4634,8 @@ int btrfs_search_forward(struct btrfs_root *root, struct btrfs_key *min_key,
+ 			goto out;
+ 		}
+ 
+-		/* at the lowest level, we're done, setup the path and exit */
+-		if (level == path->lowest_level) {
++		/* At level 0 we're done, setup the path and exit. */
++		if (level == 0) {
+ 			if (slot >= nritems)
+ 				goto find_next_key;
+ 			ret = 0;
+@@ -4678,12 +4676,6 @@ int btrfs_search_forward(struct btrfs_root *root, struct btrfs_key *min_key,
+ 				goto out;
+ 			}
+ 		}
+-		if (level == path->lowest_level) {
+-			ret = 0;
+-			/* Save our key for returning back. */
+-			btrfs_node_key_to_cpu(cur, min_key, slot);
+-			goto out;
+-		}
+ 		cur = btrfs_read_node_slot(cur, slot);
+ 		if (IS_ERR(cur)) {
+ 			ret = PTR_ERR(cur);
+@@ -4699,7 +4691,7 @@ int btrfs_search_forward(struct btrfs_root *root, struct btrfs_key *min_key,
+ out:
+ 	path->keep_locks = keep_locks;
+ 	if (ret == 0)
+-		btrfs_unlock_up_safe(path, path->lowest_level + 1);
++		btrfs_unlock_up_safe(path, 1);
+ 	return ret;
+ }
+ 
+diff --git a/fs/ceph/crypto.c b/fs/ceph/crypto.c
+index 3b3c4d8d401ece..9c70622458800a 100644
+--- a/fs/ceph/crypto.c
++++ b/fs/ceph/crypto.c
+@@ -215,35 +215,31 @@ static struct inode *parse_longname(const struct inode *parent,
+ 	struct ceph_client *cl = ceph_inode_to_client(parent);
+ 	struct inode *dir = NULL;
+ 	struct ceph_vino vino = { .snap = CEPH_NOSNAP };
+-	char *inode_number;
+-	char *name_end;
+-	int orig_len = *name_len;
++	char *name_end, *inode_number;
+ 	int ret = -EIO;
+-
++	/* NUL-terminate */
++	char *str __free(kfree) = kmemdup_nul(name, *name_len, GFP_KERNEL);
++	if (!str)
++		return ERR_PTR(-ENOMEM);
+ 	/* Skip initial '_' */
+-	name++;
+-	name_end = strrchr(name, '_');
++	str++;
++	name_end = strrchr(str, '_');
+ 	if (!name_end) {
+-		doutc(cl, "failed to parse long snapshot name: %s\n", name);
++		doutc(cl, "failed to parse long snapshot name: %s\n", str);
+ 		return ERR_PTR(-EIO);
+ 	}
+-	*name_len = (name_end - name);
++	*name_len = (name_end - str);
+ 	if (*name_len <= 0) {
+ 		pr_err_client(cl, "failed to parse long snapshot name\n");
+ 		return ERR_PTR(-EIO);
+ 	}
+ 
+ 	/* Get the inode number */
+-	inode_number = kmemdup_nul(name_end + 1,
+-				   orig_len - *name_len - 2,
+-				   GFP_KERNEL);
+-	if (!inode_number)
+-		return ERR_PTR(-ENOMEM);
++	inode_number = name_end + 1;
+ 	ret = kstrtou64(inode_number, 10, &vino.ino);
+ 	if (ret) {
+-		doutc(cl, "failed to parse inode number: %s\n", name);
+-		dir = ERR_PTR(ret);
+-		goto out;
++		doutc(cl, "failed to parse inode number: %s\n", str);
++		return ERR_PTR(ret);
+ 	}
+ 
+ 	/* And finally the inode */
+@@ -254,9 +250,6 @@ static struct inode *parse_longname(const struct inode *parent,
+ 		if (IS_ERR(dir))
+ 			doutc(cl, "can't find inode %s (%s)\n", inode_number, name);
+ 	}
+-
+-out:
+-	kfree(inode_number);
+ 	return dir;
+ }
+ 
+diff --git a/fs/exfat/file.c b/fs/exfat/file.c
+index 841a5b18e3dfdb..7ac5126aa4f1ea 100644
+--- a/fs/exfat/file.c
++++ b/fs/exfat/file.c
+@@ -623,9 +623,8 @@ static ssize_t exfat_file_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	if (pos > valid_size)
+ 		pos = valid_size;
+ 
+-	if (iocb_is_dsync(iocb) && iocb->ki_pos > pos) {
+-		ssize_t err = vfs_fsync_range(file, pos, iocb->ki_pos - 1,
+-				iocb->ki_flags & IOCB_SYNC);
++	if (iocb->ki_pos > pos) {
++		ssize_t err = generic_write_sync(iocb, iocb->ki_pos - pos);
+ 		if (err < 0)
+ 			return err;
+ 	}
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index e5e6bf0d338b96..f27d9da53fb75d 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -611,6 +611,7 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
+ 	} else
+ 		ret = ext4_block_write_begin(handle, folio, from, to,
+ 					     ext4_get_block);
++	clear_buffer_new(folio_buffers(folio));
+ 
+ 	if (!ret && ext4_should_journal_data(inode)) {
+ 		ret = ext4_walk_page_buffers(handle, inode,
+@@ -890,6 +891,7 @@ static int ext4_da_convert_inline_data_to_extent(struct address_space *mapping,
+ 		return ret;
+ 	}
+ 
++	clear_buffer_new(folio_buffers(folio));
+ 	folio_mark_dirty(folio);
+ 	folio_mark_uptodate(folio);
+ 	ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 7fcdc01a0220a7..46bfb39f610892 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1065,7 +1065,7 @@ int ext4_block_write_begin(handle_t *handle, struct folio *folio,
+ 			}
+ 			continue;
+ 		}
+-		if (buffer_new(bh))
++		if (WARN_ON_ONCE(buffer_new(bh)))
+ 			clear_buffer_new(bh);
+ 		if (!buffer_mapped(bh)) {
+ 			WARN_ON(bh->b_size != blocksize);
+@@ -1282,6 +1282,7 @@ static int write_end_fn(handle_t *handle, struct inode *inode,
+ 	ret = ext4_dirty_journalled_data(handle, bh);
+ 	clear_buffer_meta(bh);
+ 	clear_buffer_prio(bh);
++	clear_buffer_new(bh);
+ 	return ret;
+ }
+ 
+diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
+index 179e54f3a3b6a2..3d8b0f6d2dea50 100644
+--- a/fs/ext4/page-io.c
++++ b/fs/ext4/page-io.c
+@@ -236,10 +236,12 @@ static void dump_completed_IO(struct inode *inode, struct list_head *head)
+ 
+ static bool ext4_io_end_defer_completion(ext4_io_end_t *io_end)
+ {
+-	if (io_end->flag & EXT4_IO_END_UNWRITTEN)
++	if (io_end->flag & EXT4_IO_END_UNWRITTEN &&
++	    !list_empty(&io_end->list_vec))
+ 		return true;
+ 	if (test_opt(io_end->inode->i_sb, DATA_ERR_ABORT) &&
+-	    io_end->flag & EXT4_IO_END_FAILED)
++	    io_end->flag & EXT4_IO_END_FAILED &&
++	    !ext4_emergency_state(io_end->inode->i_sb))
+ 		return true;
+ 	return false;
+ }
+@@ -256,6 +258,7 @@ static void ext4_add_complete_io(ext4_io_end_t *io_end)
+ 	WARN_ON(!(io_end->flag & EXT4_IO_END_DEFER_COMPLETION));
+ 	WARN_ON(io_end->flag & EXT4_IO_END_UNWRITTEN &&
+ 		!io_end->handle && sbi->s_journal);
++	WARN_ON(!io_end->bio);
+ 
+ 	spin_lock_irqsave(&ei->i_completed_io_lock, flags);
+ 	wq = sbi->rsv_conversion_wq;
+@@ -318,12 +321,9 @@ ext4_io_end_t *ext4_init_io_end(struct inode *inode, gfp_t flags)
+ void ext4_put_io_end_defer(ext4_io_end_t *io_end)
+ {
+ 	if (refcount_dec_and_test(&io_end->count)) {
+-		if (io_end->flag & EXT4_IO_END_FAILED ||
+-		    (io_end->flag & EXT4_IO_END_UNWRITTEN &&
+-		     !list_empty(&io_end->list_vec))) {
+-			ext4_add_complete_io(io_end);
+-			return;
+-		}
++		if (ext4_io_end_defer_completion(io_end))
++			return ext4_add_complete_io(io_end);
++
+ 		ext4_release_io_end(io_end);
+ 	}
+ }
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index b0b8748ae287f4..80eb44dfe0f197 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -282,7 +282,7 @@ static void f2fs_read_end_io(struct bio *bio)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_P_SB(bio_first_page_all(bio));
+ 	struct bio_post_read_ctx *ctx;
+-	bool intask = in_task();
++	bool intask = in_task() && !irqs_disabled();
+ 
+ 	iostat_update_and_unbind_ctx(bio);
+ 	ctx = bio->bi_private;
+@@ -1572,8 +1572,11 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, int flag)
+ 	end = pgofs + maxblocks;
+ 
+ next_dnode:
+-	if (map->m_may_create)
++	if (map->m_may_create) {
++		if (f2fs_lfs_mode(sbi))
++			f2fs_balance_fs(sbi, true);
+ 		f2fs_map_lock(sbi, flag);
++	}
+ 
+ 	/* When reading holes, we need its node page */
+ 	set_new_dnode(&dn, inode, NULL, NULL, 0);
+diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c
+index 16c2dfb4f59530..3417e7e550b210 100644
+--- a/fs/f2fs/debug.c
++++ b/fs/f2fs/debug.c
+@@ -21,7 +21,7 @@
+ #include "gc.h"
+ 
+ static LIST_HEAD(f2fs_stat_list);
+-static DEFINE_RAW_SPINLOCK(f2fs_stat_lock);
++static DEFINE_SPINLOCK(f2fs_stat_lock);
+ #ifdef CONFIG_DEBUG_FS
+ static struct dentry *f2fs_debugfs_root;
+ #endif
+@@ -439,9 +439,8 @@ static int stat_show(struct seq_file *s, void *v)
+ {
+ 	struct f2fs_stat_info *si;
+ 	int i = 0, j = 0;
+-	unsigned long flags;
+ 
+-	raw_spin_lock_irqsave(&f2fs_stat_lock, flags);
++	spin_lock(&f2fs_stat_lock);
+ 	list_for_each_entry(si, &f2fs_stat_list, stat_list) {
+ 		struct f2fs_sb_info *sbi = si->sbi;
+ 
+@@ -753,7 +752,7 @@ static int stat_show(struct seq_file *s, void *v)
+ 		seq_printf(s, "  - paged : %llu KB\n",
+ 				si->page_mem >> 10);
+ 	}
+-	raw_spin_unlock_irqrestore(&f2fs_stat_lock, flags);
++	spin_unlock(&f2fs_stat_lock);
+ 	return 0;
+ }
+ 
+@@ -765,7 +764,6 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi)
+ 	struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi);
+ 	struct f2fs_stat_info *si;
+ 	struct f2fs_dev_stats *dev_stats;
+-	unsigned long flags;
+ 	int i;
+ 
+ 	si = f2fs_kzalloc(sbi, sizeof(struct f2fs_stat_info), GFP_KERNEL);
+@@ -817,9 +815,9 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi)
+ 
+ 	atomic_set(&sbi->max_aw_cnt, 0);
+ 
+-	raw_spin_lock_irqsave(&f2fs_stat_lock, flags);
++	spin_lock(&f2fs_stat_lock);
+ 	list_add_tail(&si->stat_list, &f2fs_stat_list);
+-	raw_spin_unlock_irqrestore(&f2fs_stat_lock, flags);
++	spin_unlock(&f2fs_stat_lock);
+ 
+ 	return 0;
+ }
+@@ -827,11 +825,10 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi)
+ void f2fs_destroy_stats(struct f2fs_sb_info *sbi)
+ {
+ 	struct f2fs_stat_info *si = F2FS_STAT(sbi);
+-	unsigned long flags;
+ 
+-	raw_spin_lock_irqsave(&f2fs_stat_lock, flags);
++	spin_lock(&f2fs_stat_lock);
+ 	list_del(&si->stat_list);
+-	raw_spin_unlock_irqrestore(&f2fs_stat_lock, flags);
++	spin_unlock(&f2fs_stat_lock);
+ 
+ 	kfree(si->dev_stats);
+ 	kfree(si);
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index 347b3b64783475..c4d79ab0ae916b 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -414,7 +414,7 @@ void f2fs_init_read_extent_tree(struct inode *inode, struct page *ipage)
+ 	struct f2fs_extent *i_ext = &F2FS_INODE(ipage)->i_ext;
+ 	struct extent_tree *et;
+ 	struct extent_node *en;
+-	struct extent_info ei;
++	struct extent_info ei = {0};
+ 
+ 	if (!__may_extent_tree(inode, EX_READ)) {
+ 		/* drop largest read extent */
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 34e4ae2a5f5ba3..8a8d15c219dc21 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1273,7 +1273,7 @@ struct f2fs_bio_info {
+ struct f2fs_dev_info {
+ 	struct file *bdev_file;
+ 	struct block_device *bdev;
+-	char path[MAX_PATH_LEN];
++	char path[MAX_PATH_LEN + 1];
+ 	unsigned int total_segments;
+ 	block_t start_blk;
+ 	block_t end_blk;
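
`path` holds a device path of up to MAX_PATH_LEN characters, so the array needs one extra byte for the terminating NUL; without it a maximum-length path left the string unterminated. The off-by-one in miniature (MAX_PATH_LEN shrunk for the example):

#include <stdio.h>
#include <string.h>

#define MAX_PATH_LEN 8

int main(void)
{
	char path[MAX_PATH_LEN + 1];	/* +1: room for the trailing '\0' */
	const char *src = "12345678";	/* exactly MAX_PATH_LEN characters */

	memcpy(path, src, MAX_PATH_LEN);
	path[MAX_PATH_LEN] = '\0';	/* always terminable now */
	printf("%s\n", path);
	return 0;
}
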
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 8b5a55b72264dd..67f04d140e0fca 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1893,6 +1893,7 @@ int f2fs_gc(struct f2fs_sb_info *sbi, struct f2fs_gc_control *gc_control)
+ 	/* Let's run FG_GC, if we don't have enough space. */
+ 	if (has_not_enough_free_secs(sbi, 0, 0)) {
+ 		gc_type = FG_GC;
++		gc_control->one_time = false;
+ 
+ 		/*
+ 		 * For example, if there are many prefree_segments below given
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index f5991e8751b9bb..f3c5e6e7579b3c 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -934,6 +934,19 @@ void f2fs_evict_inode(struct inode *inode)
+ 		f2fs_update_inode_page(inode);
+ 		if (dquot_initialize_needed(inode))
+ 			set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR);
++
++		/*
++		 * If both f2fs_truncate() and f2fs_update_inode_page() failed
++		 * due to a fuzzed, corrupted inode, call f2fs_inode_synced() to
++		 * avoid triggering later f2fs_bug_on().
++		 */
++		if (is_inode_flag_set(inode, FI_DIRTY_INODE)) {
++			f2fs_warn(sbi,
++				"f2fs_evict_inode: inode is dirty, ino:%lu",
++				inode->i_ino);
++			f2fs_inode_synced(inode);
++			set_sbi_flag(sbi, SBI_NEED_FSCK);
++		}
+ 	}
+ 	if (freeze_protected)
+ 		sb_end_intwrite(inode->i_sb);
+@@ -950,8 +963,12 @@ void f2fs_evict_inode(struct inode *inode)
+ 	if (likely(!f2fs_cp_error(sbi) &&
+ 				!is_sbi_flag_set(sbi, SBI_CP_DISABLED)))
+ 		f2fs_bug_on(sbi, is_inode_flag_set(inode, FI_DIRTY_INODE));
+-	else
+-		f2fs_inode_synced(inode);
++
++	/*
++	 * In any case, the inode must be removed from the
++	 * sbi->inode_list[DIRTY_META] list to avoid a UAF in
++	 * f2fs_sync_inode_meta() during checkpoint.
++	 */
++	f2fs_inode_synced(inode);
+ 
+ 	/* for the case f2fs_new_inode() was failed, .i_ino is zero, skip it */
+ 	if (inode->i_ino)
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 503f6df690bf2b..4e0a56f107800f 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -623,8 +623,7 @@ static inline void __get_secs_required(struct f2fs_sb_info *sbi,
+ 	unsigned int dent_blocks = total_dent_blocks % CAP_BLKS_PER_SEC(sbi);
+ 	unsigned int data_blocks = 0;
+ 
+-	if (f2fs_lfs_mode(sbi) &&
+-		unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
++	if (f2fs_lfs_mode(sbi)) {
+ 		total_data_blocks = get_pages(sbi, F2FS_DIRTY_DATA);
+ 		data_secs = total_data_blocks / CAP_BLKS_PER_SEC(sbi);
+ 		data_blocks = total_data_blocks % CAP_BLKS_PER_SEC(sbi);
+@@ -633,7 +632,7 @@ static inline void __get_secs_required(struct f2fs_sb_info *sbi,
+ 	if (lower_p)
+ 		*lower_p = node_secs + dent_secs + data_secs;
+ 	if (upper_p)
+-		*upper_p = node_secs + dent_secs +
++		*upper_p = node_secs + dent_secs + data_secs +
+ 			(node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0) +
+ 			(data_blocks ? 1 : 0);
+ 	if (curseg_p)
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 86dd30eb50b1de..f3d0495f3a5f7b 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -3446,6 +3446,7 @@ static int __f2fs_commit_super(struct f2fs_sb_info *sbi, struct folio *folio,
+ 		f2fs_bug_on(sbi, 1);
+ 
+ 	ret = submit_bio_wait(bio);
++	bio_put(bio);
+ 	folio_end_writeback(folio);
+ 
+ 	return ret;
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index c691613664675f..05e5d8316c706a 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -621,6 +621,27 @@ static ssize_t __sbi_store(struct f2fs_attr *a,
+ 		return count;
+ 	}
+ 
++	if (!strcmp(a->attr.name, "gc_no_zoned_gc_percent")) {
++		if (t > 100)
++			return -EINVAL;
++		*ui = (unsigned int)t;
++		return count;
++	}
++
++	if (!strcmp(a->attr.name, "gc_boost_zoned_gc_percent")) {
++		if (t > 100)
++			return -EINVAL;
++		*ui = (unsigned int)t;
++		return count;
++	}
++
++	if (!strcmp(a->attr.name, "gc_valid_thresh_ratio")) {
++		if (t > 100)
++			return -EINVAL;
++		*ui = (unsigned int)t;
++		return count;
++	}
++
+ #ifdef CONFIG_F2FS_IOSTAT
+ 	if (!strcmp(a->attr.name, "iostat_enable")) {
+ 		sbi->iostat_enable = !!t;
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index ba25b884169e50..ea96113edbe31b 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -802,7 +802,8 @@ __acquires(&gl->gl_lockref.lock)
+ 			 * We skip telling dlm to do the locking, so we won't get a
+ 			 * reply that would otherwise clear GLF_LOCK. So we clear it here.
+ 			 */
+-			clear_bit(GLF_LOCK, &gl->gl_flags);
++			if (!test_bit(GLF_CANCELING, &gl->gl_flags))
++				clear_bit(GLF_LOCK, &gl->gl_flags);
+ 			clear_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags);
+ 			gfs2_glock_queue_work(gl, GL_GLOCK_DFT_HOLD);
+ 			return;
+diff --git a/fs/gfs2/util.c b/fs/gfs2/util.c
+index 13be8d1d228b8f..ee198a261d4fad 100644
+--- a/fs/gfs2/util.c
++++ b/fs/gfs2/util.c
+@@ -232,32 +232,23 @@ static void signal_our_withdraw(struct gfs2_sbd *sdp)
+ 	 */
+ 	ret = gfs2_glock_nq(&sdp->sd_live_gh);
+ 
++	gfs2_glock_put(live_gl); /* drop extra reference we acquired */
++	clear_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags);
++
+ 	/*
+ 	 * If we actually got the "live" lock in EX mode, there are no other
+-	 * nodes available to replay our journal. So we try to replay it
+-	 * ourselves. We hold the "live" glock to prevent other mounters
+-	 * during recovery, then just dequeue it and reacquire it in our
+-	 * normal SH mode. Just in case the problem that caused us to
+-	 * withdraw prevents us from recovering our journal (e.g. io errors
+-	 * and such) we still check if the journal is clean before proceeding
+-	 * but we may wait forever until another mounter does the recovery.
++	 * nodes available to replay our journal.
+ 	 */
+ 	if (ret == 0) {
+-		fs_warn(sdp, "No other mounters found. Trying to recover our "
+-			"own journal jid %d.\n", sdp->sd_lockstruct.ls_jid);
+-		if (gfs2_recover_journal(sdp->sd_jdesc, 1))
+-			fs_warn(sdp, "Unable to recover our journal jid %d.\n",
+-				sdp->sd_lockstruct.ls_jid);
+-		gfs2_glock_dq_wait(&sdp->sd_live_gh);
+-		gfs2_holder_reinit(LM_ST_SHARED,
+-				   LM_FLAG_NOEXP | GL_EXACT | GL_NOPID,
+-				   &sdp->sd_live_gh);
+-		gfs2_glock_nq(&sdp->sd_live_gh);
++		fs_warn(sdp, "No other mounters found.\n");
++		/*
++		 * We are about to release the lockspace.  By keeping live_gl
++		 * locked here, we ensure that the next mounter coming along
++		 * will be a "first" mounter which will perform recovery.
++		 */
++		goto skip_recovery;
+ 	}
+ 
+-	gfs2_glock_put(live_gl); /* drop extra reference we acquired */
+-	clear_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags);
+-
+ 	/*
+ 	 * At this point our journal is evicted, so we need to get a new inode
+ 	 * for it. Once done, we need to call gfs2_find_jhead which
+diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
+index a81ce7a740b918..451115360f73a0 100644
+--- a/fs/hfs/inode.c
++++ b/fs/hfs/inode.c
+@@ -692,6 +692,7 @@ static const struct file_operations hfs_file_operations = {
+ 	.write_iter	= generic_file_write_iter,
+ 	.mmap		= generic_file_mmap,
+ 	.splice_read	= filemap_splice_read,
++	.splice_write	= iter_file_splice_write,
+ 	.fsync		= hfs_file_fsync,
+ 	.open		= hfs_file_open,
+ 	.release	= hfs_file_release,
+diff --git a/fs/hfsplus/extents.c b/fs/hfsplus/extents.c
+index a6d61685ae79bb..b1699b3c246ae4 100644
+--- a/fs/hfsplus/extents.c
++++ b/fs/hfsplus/extents.c
+@@ -342,9 +342,6 @@ static int hfsplus_free_extents(struct super_block *sb,
+ 	int i;
+ 	int err = 0;
+ 
+-	/* Mapping the allocation file may lock the extent tree */
+-	WARN_ON(mutex_is_locked(&HFSPLUS_SB(sb)->ext_tree->tree_lock));
+-
+ 	hfsplus_dump_extent(extent);
+ 	for (i = 0; i < 8; extent++, i++) {
+ 		count = be32_to_cpu(extent->block_count);
+diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c
+index f331e957421783..c85b5802ec0f95 100644
+--- a/fs/hfsplus/inode.c
++++ b/fs/hfsplus/inode.c
+@@ -368,6 +368,7 @@ static const struct file_operations hfsplus_file_operations = {
+ 	.write_iter	= generic_file_write_iter,
+ 	.mmap		= generic_file_mmap,
+ 	.splice_read	= filemap_splice_read,
++	.splice_write	= iter_file_splice_write,
+ 	.fsync		= hfsplus_file_fsync,
+ 	.open		= hfsplus_file_open,
+ 	.release	= hfsplus_file_release,
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 35e063c9f3a42e..5a877261c3fe48 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -1809,8 +1809,10 @@ dbAllocCtl(struct bmap * bmp, s64 nblocks, int l2nb, s64 blkno, s64 * results)
+ 			return -EIO;
+ 		dp = (struct dmap *) mp->data;
+ 
+-		if (dp->tree.budmin < 0)
++		if (dp->tree.budmin < 0) {
++			release_metapage(mp);
+ 			return -EIO;
++		}
+ 
+ 		/* try to allocate the blocks.
+ 		 */
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index d0e0b435a84316..d812179239362b 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -1828,9 +1828,7 @@ static void block_revalidate(struct dentry *dentry)
+ 
+ static void unblock_revalidate(struct dentry *dentry)
+ {
+-	/* store_release ensures wait_var_event() sees the update */
+-	smp_store_release(&dentry->d_fsdata, NULL);
+-	wake_up_var(&dentry->d_fsdata);
++	store_release_wake_up(&dentry->d_fsdata, NULL);
+ }
+ 
+ /*
+diff --git a/fs/nfs/export.c b/fs/nfs/export.c
+index e9c233b6fd2095..a10dd5f9d0786e 100644
+--- a/fs/nfs/export.c
++++ b/fs/nfs/export.c
+@@ -66,14 +66,21 @@ nfs_fh_to_dentry(struct super_block *sb, struct fid *fid,
+ {
+ 	struct nfs_fattr *fattr = NULL;
+ 	struct nfs_fh *server_fh = nfs_exp_embedfh(fid->raw);
+-	size_t fh_size = offsetof(struct nfs_fh, data) + server_fh->size;
++	size_t fh_size = offsetof(struct nfs_fh, data);
+ 	const struct nfs_rpc_ops *rpc_ops;
+ 	struct dentry *dentry;
+ 	struct inode *inode;
+-	int len = EMBED_FH_OFF + XDR_QUADLEN(fh_size);
++	int len = EMBED_FH_OFF;
+ 	u32 *p = fid->raw;
+ 	int ret;
+ 
++	/* Initial check of bounds */
++	if (fh_len < len + XDR_QUADLEN(fh_size) ||
++	    fh_len > XDR_QUADLEN(NFS_MAXFHSIZE))
++		return NULL;
++	/* Calculate embedded filehandle size */
++	fh_size += server_fh->size;
++	len += XDR_QUADLEN(fh_size);
+ 	/* NULL translates to ESTALE */
+ 	if (fh_len < len || fh_type != len)
+ 		return NULL;
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index 4bea008dbebd7c..8dc921d835388e 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -762,14 +762,14 @@ ff_layout_choose_ds_for_read(struct pnfs_layout_segment *lseg,
+ {
+ 	struct nfs4_ff_layout_segment *fls = FF_LAYOUT_LSEG(lseg);
+ 	struct nfs4_ff_layout_mirror *mirror;
+-	struct nfs4_pnfs_ds *ds;
++	struct nfs4_pnfs_ds *ds = ERR_PTR(-EAGAIN);
+ 	u32 idx;
+ 
+ 	/* mirrors are initially sorted by efficiency */
+ 	for (idx = start_idx; idx < fls->mirror_array_cnt; idx++) {
+ 		mirror = FF_LAYOUT_COMP(lseg, idx);
+ 		ds = nfs4_ff_layout_prepare_ds(lseg, mirror, false);
+-		if (!ds)
++		if (IS_ERR(ds))
+ 			continue;
+ 
+ 		if (check_device &&
+@@ -777,10 +777,10 @@ ff_layout_choose_ds_for_read(struct pnfs_layout_segment *lseg,
+ 			continue;
+ 
+ 		*best_idx = idx;
+-		return ds;
++		break;
+ 	}
+ 
+-	return NULL;
++	return ds;
+ }
+ 
+ static struct nfs4_pnfs_ds *
+@@ -942,7 +942,7 @@ ff_layout_pg_init_write(struct nfs_pageio_descriptor *pgio,
+ 	for (i = 0; i < pgio->pg_mirror_count; i++) {
+ 		mirror = FF_LAYOUT_COMP(pgio->pg_lseg, i);
+ 		ds = nfs4_ff_layout_prepare_ds(pgio->pg_lseg, mirror, true);
+-		if (!ds) {
++		if (IS_ERR(ds)) {
+ 			if (!ff_layout_no_fallback_to_mds(pgio->pg_lseg))
+ 				goto out_mds;
+ 			pnfs_generic_pg_cleanup(pgio);
+@@ -1867,6 +1867,7 @@ ff_layout_read_pagelist(struct nfs_pgio_header *hdr)
+ 	u32 idx = hdr->pgio_mirror_idx;
+ 	int vers;
+ 	struct nfs_fh *fh;
++	bool ds_fatal_error = false;
+ 
+ 	dprintk("--> %s ino %lu pgbase %u req %zu@%llu\n",
+ 		__func__, hdr->inode->i_ino,
+@@ -1874,8 +1875,10 @@ ff_layout_read_pagelist(struct nfs_pgio_header *hdr)
+ 
+ 	mirror = FF_LAYOUT_COMP(lseg, idx);
+ 	ds = nfs4_ff_layout_prepare_ds(lseg, mirror, false);
+-	if (!ds)
++	if (IS_ERR(ds)) {
++		ds_fatal_error = nfs_error_is_fatal(PTR_ERR(ds));
+ 		goto out_failed;
++	}
+ 
+ 	ds_clnt = nfs4_ff_find_or_create_ds_client(mirror, ds->ds_clp,
+ 						   hdr->inode);
+@@ -1923,7 +1926,7 @@ ff_layout_read_pagelist(struct nfs_pgio_header *hdr)
+ 	return PNFS_ATTEMPTED;
+ 
+ out_failed:
+-	if (ff_layout_avoid_mds_available_ds(lseg))
++	if (ff_layout_avoid_mds_available_ds(lseg) && !ds_fatal_error)
+ 		return PNFS_TRY_AGAIN;
+ 	trace_pnfs_mds_fallback_read_pagelist(hdr->inode,
+ 			hdr->args.offset, hdr->args.count,
+@@ -1945,11 +1948,14 @@ ff_layout_write_pagelist(struct nfs_pgio_header *hdr, int sync)
+ 	int vers;
+ 	struct nfs_fh *fh;
+ 	u32 idx = hdr->pgio_mirror_idx;
++	bool ds_fatal_error = false;
+ 
+ 	mirror = FF_LAYOUT_COMP(lseg, idx);
+ 	ds = nfs4_ff_layout_prepare_ds(lseg, mirror, true);
+-	if (!ds)
++	if (IS_ERR(ds)) {
++		ds_fatal_error = nfs_error_is_fatal(PTR_ERR(ds));
+ 		goto out_failed;
++	}
+ 
+ 	ds_clnt = nfs4_ff_find_or_create_ds_client(mirror, ds->ds_clp,
+ 						   hdr->inode);
+@@ -2000,7 +2006,7 @@ ff_layout_write_pagelist(struct nfs_pgio_header *hdr, int sync)
+ 	return PNFS_ATTEMPTED;
+ 
+ out_failed:
+-	if (ff_layout_avoid_mds_available_ds(lseg))
++	if (ff_layout_avoid_mds_available_ds(lseg) && !ds_fatal_error)
+ 		return PNFS_TRY_AGAIN;
+ 	trace_pnfs_mds_fallback_write_pagelist(hdr->inode,
+ 			hdr->args.offset, hdr->args.count,
+@@ -2043,7 +2049,7 @@ static int ff_layout_initiate_commit(struct nfs_commit_data *data, int how)
+ 	idx = calc_ds_index_from_commit(lseg, data->ds_commit_index);
+ 	mirror = FF_LAYOUT_COMP(lseg, idx);
+ 	ds = nfs4_ff_layout_prepare_ds(lseg, mirror, true);
+-	if (!ds)
++	if (IS_ERR(ds))
+ 		goto out_err;
+ 
+ 	ds_clnt = nfs4_ff_find_or_create_ds_client(mirror, ds->ds_clp,
+diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+index 656d5c50bbce1c..30365ec782bb1b 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c
++++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+@@ -370,11 +370,11 @@ nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg,
+ 			  struct nfs4_ff_layout_mirror *mirror,
+ 			  bool fail_return)
+ {
+-	struct nfs4_pnfs_ds *ds = NULL;
++	struct nfs4_pnfs_ds *ds;
+ 	struct inode *ino = lseg->pls_layout->plh_inode;
+ 	struct nfs_server *s = NFS_SERVER(ino);
+ 	unsigned int max_payload;
+-	int status;
++	int status = -EAGAIN;
+ 
+ 	if (!ff_layout_init_mirror_ds(lseg->pls_layout, mirror))
+ 		goto noconnect;
+@@ -418,7 +418,7 @@ nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg,
+ 	ff_layout_send_layouterror(lseg);
+ 	if (fail_return || !ff_layout_has_available_ds(lseg))
+ 		pnfs_error_mark_layout_for_return(ino, lseg);
+-	ds = NULL;
++	ds = ERR_PTR(status);
+ out:
+ 	return ds;
+ }
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 69c2c10ee658c9..d8f768254f1665 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -671,9 +671,12 @@ nfs_write_match_verf(const struct nfs_writeverf *verf,
+ 
+ static inline gfp_t nfs_io_gfp_mask(void)
+ {
+-	if (current->flags & PF_WQ_WORKER)
+-		return GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;
+-	return GFP_KERNEL;
++	gfp_t ret = current_gfp_context(GFP_KERNEL);
++
++	/* For workers, use __GFP_NORETRY only with __GFP_IO or __GFP_FS */
++	if ((current->flags & PF_WQ_WORKER) && ret == GFP_KERNEL)
++		ret |= __GFP_NORETRY | __GFP_NOWARN;
++	return ret;
+ }
+ 
+ /*
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 2f5a6aa3fd48ea..3c1ef174aa813d 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -10866,7 +10866,7 @@ const struct nfs4_minor_version_ops *nfs_v4_minor_ops[] = {
+ 
+ static ssize_t nfs4_listxattr(struct dentry *dentry, char *list, size_t size)
+ {
+-	ssize_t error, error2, error3, error4;
++	ssize_t error, error2, error3, error4 = 0;
+ 	size_t left = size;
+ 
+ 	error = generic_listxattr(dentry, list, left);
+@@ -10894,9 +10894,11 @@ static ssize_t nfs4_listxattr(struct dentry *dentry, char *list, size_t size)
+ 		left -= error3;
+ 	}
+ 
+-	error4 = security_inode_listsecurity(d_inode(dentry), list, left);
+-	if (error4 < 0)
+-		return error4;
++	if (!nfs_server_capable(d_inode(dentry), NFS_CAP_SECURITY_LABEL)) {
++		error4 = security_inode_listsecurity(d_inode(dentry), list, left);
++		if (error4 < 0)
++			return error4;
++	}
+ 
+ 	error += error2 + error3 + error4;
+ 	if (size && error > size)
+diff --git a/fs/nfs_common/nfslocalio.c b/fs/nfs_common/nfslocalio.c
+index 05c7c16e37ab4c..dd715cdb6c0431 100644
+--- a/fs/nfs_common/nfslocalio.c
++++ b/fs/nfs_common/nfslocalio.c
+@@ -177,7 +177,7 @@ static bool nfs_uuid_put(nfs_uuid_t *nfs_uuid)
+ 			/* nfs_close_local_fh() is doing the
+ 			 * close and we must wait. until it unlinks
+ 			 */
+-			wait_var_event_spinlock(nfl,
++			wait_var_event_spinlock(nfs_uuid,
+ 						list_first_entry_or_null(
+ 							&nfs_uuid->files,
+ 							struct nfs_file_localio,
+@@ -198,8 +198,7 @@ static bool nfs_uuid_put(nfs_uuid_t *nfs_uuid)
+ 		/* Now we can allow racing nfs_close_local_fh() to
+ 		 * skip the locking.
+ 		 */
+-		RCU_INIT_POINTER(nfl->nfs_uuid, NULL);
+-		wake_up_var_locked(&nfl->nfs_uuid, &nfs_uuid->lock);
++		store_release_wake_up(&nfl->nfs_uuid, RCU_INITIALIZER(NULL));
+ 	}
+ 
+ 	/* Remove client from nn->local_clients */
+@@ -243,15 +242,20 @@ void nfs_localio_invalidate_clients(struct list_head *nn_local_clients,
+ }
+ EXPORT_SYMBOL_GPL(nfs_localio_invalidate_clients);
+ 
+-static void nfs_uuid_add_file(nfs_uuid_t *nfs_uuid, struct nfs_file_localio *nfl)
++static int nfs_uuid_add_file(nfs_uuid_t *nfs_uuid, struct nfs_file_localio *nfl)
+ {
++	int ret = 0;
++
+ 	/* Add nfl to nfs_uuid->files if it isn't already */
+ 	spin_lock(&nfs_uuid->lock);
+-	if (list_empty(&nfl->list)) {
++	if (rcu_access_pointer(nfs_uuid->net) == NULL) {
++		ret = -ENXIO;
++	} else if (list_empty(&nfl->list)) {
+ 		rcu_assign_pointer(nfl->nfs_uuid, nfs_uuid);
+ 		list_add_tail(&nfl->list, &nfs_uuid->files);
+ 	}
+ 	spin_unlock(&nfs_uuid->lock);
++	return ret;
+ }
+ 
+ /*
+@@ -285,11 +289,13 @@ struct nfsd_file *nfs_open_local_fh(nfs_uuid_t *uuid,
+ 	}
+ 	rcu_read_unlock();
+ 	/* We have an implied reference to net thanks to nfsd_net_try_get */
+-	localio = nfs_to->nfsd_open_local_fh(net, uuid->dom, rpc_clnt,
+-					     cred, nfs_fh, pnf, fmode);
++	localio = nfs_to->nfsd_open_local_fh(net, uuid->dom, rpc_clnt, cred,
++					     nfs_fh, pnf, fmode);
++	if (!IS_ERR(localio) && nfs_uuid_add_file(uuid, nfl) < 0) {
++		/* Delete the cached file when racing with nfs_uuid_put() */
++		nfs_to_nfsd_file_put_local(pnf);
++	}
+ 	nfs_to_nfsd_net_put(net);
+-	if (!IS_ERR(localio))
+-		nfs_uuid_add_file(uuid, nfl);
+ 
+ 	return localio;
+ }
+@@ -314,7 +320,7 @@ void nfs_close_local_fh(struct nfs_file_localio *nfl)
+ 		rcu_read_unlock();
+ 		return;
+ 	}
+-	if (list_empty(&nfs_uuid->files)) {
++	if (list_empty(&nfl->list)) {
+ 		/* nfs_uuid_put() has started closing files, wait for it
+ 		 * to finished
+ 		 */
+@@ -338,7 +344,7 @@ void nfs_close_local_fh(struct nfs_file_localio *nfl)
+ 	 */
+ 	spin_lock(&nfs_uuid->lock);
+ 	list_del_init(&nfl->list);
+-	wake_up_var_locked(&nfl->nfs_uuid, &nfs_uuid->lock);
++	wake_up_var_locked(nfs_uuid, &nfs_uuid->lock);
+ 	spin_unlock(&nfs_uuid->lock);
+ }
+ EXPORT_SYMBOL_GPL(nfs_close_local_fh);
+diff --git a/fs/nfsd/localio.c b/fs/nfsd/localio.c
+index 80d9ff6608a7b9..519bbdedcb1170 100644
+--- a/fs/nfsd/localio.c
++++ b/fs/nfsd/localio.c
+@@ -103,10 +103,11 @@ nfsd_open_local_fh(struct net *net, struct auth_domain *dom,
+ 			if (nfsd_file_get(new) == NULL)
+ 				goto again;
+ 			/*
+-			 * Drop the ref we were going to install and the
+-			 * one we were going to return.
++			 * Drop the ref we were going to install (both file and
++			 * net) and the one we were going to return (only file).
+ 			 */
+ 			nfsd_file_put(localio);
++			nfsd_net_put(net);
+ 			nfsd_file_put(localio);
+ 			localio = new;
+ 		}
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index 9abdc4b758138b..a7c8eac5761a92 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -466,7 +466,15 @@ static int __nfsd_setattr(struct dentry *dentry, struct iattr *iap)
+ 	if (!iap->ia_valid)
+ 		return 0;
+ 
+-	iap->ia_valid |= ATTR_CTIME;
++	/*
++	 * If ATTR_DELEG is set, then this is an update from a client that
++	 * holds a delegation. If this is an update for only the atime, the
++	 * ctime should not be changed. If the update contains the mtime
++	 * too, then ATTR_CTIME should already be set.
++	 */
++	if (!(iap->ia_valid & ATTR_DELEG))
++		iap->ia_valid |= ATTR_CTIME;
++
+ 	return notify_change(&nop_mnt_idmap, dentry, iap, NULL);
+ }
+ 
+diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c
+index 6d386080faf221..7834eadf40a713 100644
+--- a/fs/notify/fanotify/fanotify.c
++++ b/fs/notify/fanotify/fanotify.c
+@@ -454,7 +454,13 @@ static int fanotify_encode_fh(struct fanotify_fh *fh, struct inode *inode,
+ 	dwords = fh_len >> 2;
+ 	type = exportfs_encode_fid(inode, buf, &dwords);
+ 	err = -EINVAL;
+-	if (type <= 0 || type == FILEID_INVALID || fh_len != dwords << 2)
++	/*
++	 * Unlike file_handle, type and len of struct fanotify_fh are u8.
++	 * Traditionally, filesystem return handle_type < 0xff, but there
++	 * Traditionally, filesystems return handle_type < 0xff, but there
++	 * is no enforcement for that in the vfs.
++	BUILD_BUG_ON(MAX_HANDLE_SZ > 0xff || FILEID_INVALID > 0xff);
++	if (type <= 0 || type >= FILEID_INVALID || fh_len != dwords << 2)
+ 		goto out_err;
+ 
+ 	fh->type = type;
+diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
+index 9b6a3f8d2e7c5c..fbecda79fa84bd 100644
+--- a/fs/ntfs3/file.c
++++ b/fs/ntfs3/file.c
+@@ -394,7 +394,10 @@ static int ntfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ 		}
+ 
+ 		if (ni->i_valid < to) {
+-			inode_lock(inode);
++			if (!inode_trylock(inode)) {
++				err = -EAGAIN;
++				goto out;
++			}
+ 			err = ntfs_extend_initialized_size(file, ni,
+ 							   ni->i_valid, to);
+ 			inode_unlock(inode);
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index b7a83200f2cc0a..83593ecbe57e15 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -3003,8 +3003,7 @@ int ni_add_name(struct ntfs_inode *dir_ni, struct ntfs_inode *ni,
+  * ni_rename - Remove one name and insert new name.
+  */
+ int ni_rename(struct ntfs_inode *dir_ni, struct ntfs_inode *new_dir_ni,
+-	      struct ntfs_inode *ni, struct NTFS_DE *de, struct NTFS_DE *new_de,
+-	      bool *is_bad)
++	      struct ntfs_inode *ni, struct NTFS_DE *de, struct NTFS_DE *new_de)
+ {
+ 	int err;
+ 	struct NTFS_DE *de2 = NULL;
+@@ -3027,8 +3026,8 @@ int ni_rename(struct ntfs_inode *dir_ni, struct ntfs_inode *new_dir_ni,
+ 	err = ni_add_name(new_dir_ni, ni, new_de);
+ 	if (!err) {
+ 		err = ni_remove_name(dir_ni, ni, de, &de2, &undo);
+-		if (err && ni_remove_name(new_dir_ni, ni, new_de, &de2, &undo))
+-			*is_bad = true;
++		WARN_ON(err && ni_remove_name(new_dir_ni, ni, new_de, &de2,
++			&undo));
+ 	}
+ 
+ 	/*
+diff --git a/fs/ntfs3/namei.c b/fs/ntfs3/namei.c
+index 652735a0b0c498..fec451381a8862 100644
+--- a/fs/ntfs3/namei.c
++++ b/fs/ntfs3/namei.c
+@@ -244,7 +244,7 @@ static int ntfs_rename(struct mnt_idmap *idmap, struct inode *dir,
+ 	struct ntfs_inode *ni = ntfs_i(inode);
+ 	struct inode *new_inode = d_inode(new_dentry);
+ 	struct NTFS_DE *de, *new_de;
+-	bool is_same, is_bad;
++	bool is_same;
+ 	/*
+ 	 * de		- memory of PATH_MAX bytes:
+ 	 * [0-1024)	- original name (dentry->d_name)
+@@ -313,12 +313,8 @@ static int ntfs_rename(struct mnt_idmap *idmap, struct inode *dir,
+ 	if (dir_ni != new_dir_ni)
+ 		ni_lock_dir2(new_dir_ni);
+ 
+-	is_bad = false;
+-	err = ni_rename(dir_ni, new_dir_ni, ni, de, new_de, &is_bad);
+-	if (is_bad) {
+-		/* Restore after failed rename failed too. */
+-		_ntfs_bad_inode(inode);
+-	} else if (!err) {
++	err = ni_rename(dir_ni, new_dir_ni, ni, de, new_de);
++	if (!err) {
+ 		simple_rename_timestamp(dir, dentry, new_dir, new_dentry);
+ 		mark_inode_dirty(inode);
+ 		mark_inode_dirty(dir);
+diff --git a/fs/ntfs3/ntfs_fs.h b/fs/ntfs3/ntfs_fs.h
+index d628977e2556db..a79cf4a63b256a 100644
+--- a/fs/ntfs3/ntfs_fs.h
++++ b/fs/ntfs3/ntfs_fs.h
+@@ -581,8 +581,7 @@ int ni_add_name(struct ntfs_inode *dir_ni, struct ntfs_inode *ni,
+ 		struct NTFS_DE *de);
+ 
+ int ni_rename(struct ntfs_inode *dir_ni, struct ntfs_inode *new_dir_ni,
+-	      struct ntfs_inode *ni, struct NTFS_DE *de, struct NTFS_DE *new_de,
+-	      bool *is_bad);
++	      struct ntfs_inode *ni, struct NTFS_DE *de, struct NTFS_DE *new_de);
+ 
+ bool ni_is_dirty(struct inode *inode);
+ int ni_set_compress(struct inode *inode, bool compr);
+diff --git a/fs/orangefs/orangefs-debugfs.c b/fs/orangefs/orangefs-debugfs.c
+index f7095c91660c34..e8e3badbc2ec06 100644
+--- a/fs/orangefs/orangefs-debugfs.c
++++ b/fs/orangefs/orangefs-debugfs.c
+@@ -769,8 +769,8 @@ static void do_k_string(void *k_mask, int index)
+ 
+ 	if (*mask & s_kmod_keyword_mask_map[index].mask_val) {
+ 		if ((strlen(kernel_debug_string) +
+-		     strlen(s_kmod_keyword_mask_map[index].keyword))
+-			< ORANGEFS_MAX_DEBUG_STRING_LEN - 1) {
++		     strlen(s_kmod_keyword_mask_map[index].keyword) + 1)
++			< ORANGEFS_MAX_DEBUG_STRING_LEN) {
+ 				strcat(kernel_debug_string,
+ 				       s_kmod_keyword_mask_map[index].keyword);
+ 				strcat(kernel_debug_string, ",");
+@@ -797,7 +797,7 @@ static void do_c_string(void *c_mask, int index)
+ 	    (mask->mask2 & cdm_array[index].mask2)) {
+ 		if ((strlen(client_debug_string) +
+ 		     strlen(cdm_array[index].keyword) + 1)
+-			< ORANGEFS_MAX_DEBUG_STRING_LEN - 2) {
++			< ORANGEFS_MAX_DEBUG_STRING_LEN) {
+ 				strcat(client_debug_string,
+ 				       cdm_array[index].keyword);
+ 				strcat(client_debug_string, ",");
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index a3e22803cddf2d..e0e50914ab25f2 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -569,6 +569,8 @@ static void pde_set_flags(struct proc_dir_entry *pde)
+ 	if (pde->proc_ops->proc_compat_ioctl)
+ 		pde->flags |= PROC_ENTRY_proc_compat_ioctl;
+ #endif
++	if (pde->proc_ops->proc_lseek)
++		pde->flags |= PROC_ENTRY_proc_lseek;
+ }
+ 
+ struct proc_dir_entry *proc_create_data(const char *name, umode_t mode,
+diff --git a/fs/proc/inode.c b/fs/proc/inode.c
+index 3604b616311c27..129490151be147 100644
+--- a/fs/proc/inode.c
++++ b/fs/proc/inode.c
+@@ -473,7 +473,7 @@ static int proc_reg_open(struct inode *inode, struct file *file)
+ 	typeof_member(struct proc_ops, proc_open) open;
+ 	struct pde_opener *pdeo;
+ 
+-	if (!pde->proc_ops->proc_lseek)
++	if (!pde_has_proc_lseek(pde))
+ 		file->f_mode &= ~FMODE_LSEEK;
+ 
+ 	if (pde_is_permanent(pde)) {
+diff --git a/fs/proc/internal.h b/fs/proc/internal.h
+index 96122e91c64597..3d48ffe72583a1 100644
+--- a/fs/proc/internal.h
++++ b/fs/proc/internal.h
+@@ -99,6 +99,11 @@ static inline bool pde_has_proc_compat_ioctl(const struct proc_dir_entry *pde)
+ #endif
+ }
+ 
++static inline bool pde_has_proc_lseek(const struct proc_dir_entry *pde)
++{
++	return pde->flags & PROC_ENTRY_proc_lseek;
++}
++
+ extern struct kmem_cache *proc_dir_entry_cache;
+ void pde_free(struct proc_dir_entry *pde);
+ 
+diff --git a/fs/smb/client/cifs_debug.c b/fs/smb/client/cifs_debug.c
+index c0196be0e65fc0..9092051776fc1b 100644
+--- a/fs/smb/client/cifs_debug.c
++++ b/fs/smb/client/cifs_debug.c
+@@ -432,10 +432,8 @@ static int cifs_debug_data_proc_show(struct seq_file *m, void *v)
+ 			server->smbd_conn->receive_credit_target);
+ 		seq_printf(m, "\nPending send_pending: %x ",
+ 			atomic_read(&server->smbd_conn->send_pending));
+-		seq_printf(m, "\nReceive buffers count_receive_queue: %x "
+-			"count_empty_packet_queue: %x",
+-			server->smbd_conn->count_receive_queue,
+-			server->smbd_conn->count_empty_packet_queue);
++		seq_printf(m, "\nReceive buffers count_receive_queue: %x ",
++			server->smbd_conn->count_receive_queue);
+ 		seq_printf(m, "\nMR responder_resources: %x "
+ 			"max_frmr_depth: %x mr_type: %x",
+ 			server->smbd_conn->responder_resources,
+diff --git a/fs/smb/client/cifsencrypt.c b/fs/smb/client/cifsencrypt.c
+index 35892df7335c75..6be850d2a34677 100644
+--- a/fs/smb/client/cifsencrypt.c
++++ b/fs/smb/client/cifsencrypt.c
+@@ -343,7 +343,7 @@ static struct ntlmssp2_name *find_next_av(struct cifs_ses *ses,
+ 	len = AV_LEN(av);
+ 	if (AV_TYPE(av) == NTLMSSP_AV_EOL)
+ 		return NULL;
+-	if (!len || (u8 *)av + sizeof(*av) + len > end)
++	if ((u8 *)av + sizeof(*av) + len > end)
+ 		return NULL;
+ 	return av;
+ }
+@@ -363,7 +363,7 @@ static int find_av_name(struct cifs_ses *ses, u16 type, char **name, u16 maxlen)
+ 
+ 	av_for_each_entry(ses, av) {
+ 		len = AV_LEN(av);
+-		if (AV_TYPE(av) != type)
++		if (AV_TYPE(av) != type || !len)
+ 			continue;
+ 		if (!IS_ALIGNED(len, sizeof(__le16))) {
+ 			cifs_dbg(VFS | ONCE, "%s: bad length(%u) for type %u\n",
+diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
+index a08c42363ffc84..a34d94cf6d6d9b 100644
+--- a/fs/smb/client/cifsfs.c
++++ b/fs/smb/client/cifsfs.c
+@@ -724,7 +724,7 @@ cifs_show_options(struct seq_file *s, struct dentry *root)
+ 	else
+ 		seq_puts(s, ",nativesocket");
+ 	seq_show_option(s, "symlink",
+-			cifs_symlink_type_str(get_cifs_symlink_type(cifs_sb)));
++			cifs_symlink_type_str(cifs_symlink_type(cifs_sb)));
+ 
+ 	seq_printf(s, ",rsize=%u", cifs_sb->ctx->rsize);
+ 	seq_printf(s, ",wsize=%u", cifs_sb->ctx->wsize);
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index 7ebed5e856dc84..87ea186774741b 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -3362,18 +3362,15 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ 		struct net *net = cifs_net_ns(server);
+ 		struct sock *sk;
+ 
+-		rc = __sock_create(net, sfamily, SOCK_STREAM,
+-				   IPPROTO_TCP, &server->ssocket, 1);
++		rc = sock_create_kern(net, sfamily, SOCK_STREAM,
++				      IPPROTO_TCP, &server->ssocket);
+ 		if (rc < 0) {
+ 			cifs_server_dbg(VFS, "Error %d creating socket\n", rc);
+ 			return rc;
+ 		}
+ 
+ 		sk = server->ssocket->sk;
+-		__netns_tracker_free(net, &sk->ns_tracker, false);
+-		sk->sk_net_refcnt = 1;
+-		get_net_track(net, &sk->ns_tracker, GFP_KERNEL);
+-		sock_inuse_add(net, 1);
++		sk_net_refcnt_upgrade(sk);
+ 
+ 		/* BB other socket options to set KEEPALIVE, NODELAY? */
+ 		cifs_dbg(FYI, "Socket created\n");
+diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c
+index 59ccc2229ab300..3d0bb068f825fc 100644
+--- a/fs/smb/client/fs_context.c
++++ b/fs/smb/client/fs_context.c
+@@ -1674,6 +1674,7 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ 				pr_warn_once("conflicting posix mount options specified\n");
+ 			ctx->linux_ext = 1;
+ 			ctx->no_linux_ext = 0;
++			ctx->nonativesocket = 1; /* POSIX mounts use NFS-style reparse points */
+ 		}
+ 		break;
+ 	case Opt_nocase:
+@@ -1851,24 +1852,6 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ 	return -EINVAL;
+ }
+ 
+-enum cifs_symlink_type get_cifs_symlink_type(struct cifs_sb_info *cifs_sb)
+-{
+-	if (cifs_sb->ctx->symlink_type == CIFS_SYMLINK_TYPE_DEFAULT) {
+-		if (cifs_sb->ctx->mfsymlinks)
+-			return CIFS_SYMLINK_TYPE_MFSYMLINKS;
+-		else if (cifs_sb->ctx->sfu_emul)
+-			return CIFS_SYMLINK_TYPE_SFU;
+-		else if (cifs_sb->ctx->linux_ext && !cifs_sb->ctx->no_linux_ext)
+-			return CIFS_SYMLINK_TYPE_UNIX;
+-		else if (cifs_sb->ctx->reparse_type != CIFS_REPARSE_TYPE_NONE)
+-			return CIFS_SYMLINK_TYPE_NATIVE;
+-		else
+-			return CIFS_SYMLINK_TYPE_NONE;
+-	} else {
+-		return cifs_sb->ctx->symlink_type;
+-	}
+-}
+-
+ int smb3_init_fs_context(struct fs_context *fc)
+ {
+ 	struct smb3_fs_context *ctx;
+diff --git a/fs/smb/client/fs_context.h b/fs/smb/client/fs_context.h
+index 9e83302ce4b801..b0fec6b9a23b4f 100644
+--- a/fs/smb/client/fs_context.h
++++ b/fs/smb/client/fs_context.h
+@@ -341,7 +341,23 @@ struct smb3_fs_context {
+ 
+ extern const struct fs_parameter_spec smb3_fs_parameters[];
+ 
+-extern enum cifs_symlink_type get_cifs_symlink_type(struct cifs_sb_info *cifs_sb);
++static inline enum cifs_symlink_type cifs_symlink_type(struct cifs_sb_info *cifs_sb)
++{
++	bool posix = cifs_sb_master_tcon(cifs_sb)->posix_extensions;
++
++	if (cifs_sb->ctx->symlink_type != CIFS_SYMLINK_TYPE_DEFAULT)
++		return cifs_sb->ctx->symlink_type;
++
++	if (cifs_sb->ctx->mfsymlinks)
++		return CIFS_SYMLINK_TYPE_MFSYMLINKS;
++	else if (cifs_sb->ctx->sfu_emul)
++		return CIFS_SYMLINK_TYPE_SFU;
++	else if (cifs_sb->ctx->linux_ext && !cifs_sb->ctx->no_linux_ext)
++		return posix ? CIFS_SYMLINK_TYPE_NATIVE : CIFS_SYMLINK_TYPE_UNIX;
++	else if (cifs_sb->ctx->reparse_type != CIFS_REPARSE_TYPE_NONE)
++		return CIFS_SYMLINK_TYPE_NATIVE;
++	return CIFS_SYMLINK_TYPE_NONE;
++}
+ 
+ extern int smb3_init_fs_context(struct fs_context *fc);
+ extern void smb3_cleanup_fs_context_contents(struct smb3_fs_context *ctx);
+diff --git a/fs/smb/client/link.c b/fs/smb/client/link.c
+index 769752ad2c5ce8..e2075f1aebc96c 100644
+--- a/fs/smb/client/link.c
++++ b/fs/smb/client/link.c
+@@ -606,14 +606,7 @@ cifs_symlink(struct mnt_idmap *idmap, struct inode *inode,
+ 
+ 	/* BB what if DFS and this volume is on different share? BB */
+ 	rc = -EOPNOTSUPP;
+-	switch (get_cifs_symlink_type(cifs_sb)) {
+-	case CIFS_SYMLINK_TYPE_DEFAULT:
+-		/* should not happen, get_cifs_symlink_type() resolves the default */
+-		break;
+-
+-	case CIFS_SYMLINK_TYPE_NONE:
+-		break;
+-
++	switch (cifs_symlink_type(cifs_sb)) {
+ 	case CIFS_SYMLINK_TYPE_UNIX:
+ #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+ 		if (pTcon->unix_ext) {
+@@ -653,6 +646,8 @@ cifs_symlink(struct mnt_idmap *idmap, struct inode *inode,
+ 			goto symlink_exit;
+ 		}
+ 		break;
++	default:
++		break;
+ 	}
+ 
+ 	if (rc == 0) {
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index 5fa29a97ac154b..4f6c320b41c971 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -38,7 +38,7 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ 				struct dentry *dentry, struct cifs_tcon *tcon,
+ 				const char *full_path, const char *symname)
+ {
+-	switch (get_cifs_symlink_type(CIFS_SB(inode->i_sb))) {
++	switch (cifs_symlink_type(CIFS_SB(inode->i_sb))) {
+ 	case CIFS_SYMLINK_TYPE_NATIVE:
+ 		return create_native_symlink(xid, inode, dentry, tcon, full_path, symname);
+ 	case CIFS_SYMLINK_TYPE_NFS:
+diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
+index 754e94a0e07f50..c661a8e6c18b85 100644
+--- a/fs/smb/client/smbdirect.c
++++ b/fs/smb/client/smbdirect.c
+@@ -13,8 +13,6 @@
+ #include "cifsproto.h"
+ #include "smb2proto.h"
+ 
+-static struct smbd_response *get_empty_queue_buffer(
+-		struct smbd_connection *info);
+ static struct smbd_response *get_receive_buffer(
+ 		struct smbd_connection *info);
+ static void put_receive_buffer(
+@@ -23,8 +21,6 @@ static void put_receive_buffer(
+ static int allocate_receive_buffers(struct smbd_connection *info, int num_buf);
+ static void destroy_receive_buffers(struct smbd_connection *info);
+ 
+-static void put_empty_packet(
+-		struct smbd_connection *info, struct smbd_response *response);
+ static void enqueue_reassembly(
+ 		struct smbd_connection *info,
+ 		struct smbd_response *response, int data_length);
+@@ -391,7 +387,6 @@ static bool process_negotiation_response(
+ static void smbd_post_send_credits(struct work_struct *work)
+ {
+ 	int ret = 0;
+-	int use_receive_queue = 1;
+ 	int rc;
+ 	struct smbd_response *response;
+ 	struct smbd_connection *info =
+@@ -407,18 +402,9 @@ static void smbd_post_send_credits(struct work_struct *work)
+ 	if (info->receive_credit_target >
+ 		atomic_read(&info->receive_credits)) {
+ 		while (true) {
+-			if (use_receive_queue)
+-				response = get_receive_buffer(info);
+-			else
+-				response = get_empty_queue_buffer(info);
+-			if (!response) {
+-				/* now switch to empty packet queue */
+-				if (use_receive_queue) {
+-					use_receive_queue = 0;
+-					continue;
+-				} else
+-					break;
+-			}
++			response = get_receive_buffer(info);
++			if (!response)
++				break;
+ 
+ 			response->type = SMBD_TRANSFER_DATA;
+ 			response->first_segment = false;
+@@ -466,7 +452,6 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_RECV) {
+ 		log_rdma_recv(INFO, "wc->status=%d opcode=%d\n",
+ 			wc->status, wc->opcode);
+-		smbd_disconnect_rdma_connection(info);
+ 		goto error;
+ 	}
+ 
+@@ -483,18 +468,15 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		info->full_packet_received = true;
+ 		info->negotiate_done =
+ 			process_negotiation_response(response, wc->byte_len);
++		put_receive_buffer(info, response);
+ 		complete(&info->negotiate_completion);
+-		break;
++		return;
+ 
+ 	/* SMBD data transfer packet */
+ 	case SMBD_TRANSFER_DATA:
+ 		data_transfer = smbd_response_payload(response);
+ 		data_length = le32_to_cpu(data_transfer->data_length);
+ 
+-		/*
+-		 * If this is a packet with data playload place the data in
+-		 * reassembly queue and wake up the reading thread
+-		 */
+ 		if (data_length) {
+ 			if (info->full_packet_received)
+ 				response->first_segment = true;
+@@ -503,16 +485,7 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 				info->full_packet_received = false;
+ 			else
+ 				info->full_packet_received = true;
+-
+-			enqueue_reassembly(
+-				info,
+-				response,
+-				data_length);
+-		} else
+-			put_empty_packet(info, response);
+-
+-		if (data_length)
+-			wake_up_interruptible(&info->wait_reassembly_queue);
++		}
+ 
+ 		atomic_dec(&info->receive_credits);
+ 		info->receive_credit_target =
+@@ -540,15 +513,27 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 			info->keep_alive_requested = KEEP_ALIVE_PENDING;
+ 		}
+ 
+-		return;
++		/*
++		 * If this is a packet with a data payload, place the data in
++		 * the reassembly queue and wake up the reading thread.
++		 */
++		if (data_length) {
++			enqueue_reassembly(info, response, data_length);
++			wake_up_interruptible(&info->wait_reassembly_queue);
++		} else
++			put_receive_buffer(info, response);
+ 
+-	default:
+-		log_rdma_recv(ERR,
+-			"unexpected response type=%d\n", response->type);
++		return;
+ 	}
+ 
++	/*
++	 * This is an internal error!
++	 */
++	log_rdma_recv(ERR, "unexpected response type=%d\n", response->type);
++	WARN_ON_ONCE(response->type != SMBD_TRANSFER_DATA);
+ error:
+ 	put_receive_buffer(info, response);
++	smbd_disconnect_rdma_connection(info);
+ }
+ 
+ static struct rdma_cm_id *smbd_create_id(
+@@ -1069,6 +1054,7 @@ static int smbd_post_recv(
+ 	if (rc) {
+ 		ib_dma_unmap_single(sc->ib.dev, response->sge.addr,
+ 				    response->sge.length, DMA_FROM_DEVICE);
++		response->sge.length = 0;
+ 		smbd_disconnect_rdma_connection(info);
+ 		log_rdma_recv(ERR, "ib_post_recv failed rc=%d\n", rc);
+ 	}
+@@ -1113,17 +1099,6 @@ static int smbd_negotiate(struct smbd_connection *info)
+ 	return rc;
+ }
+ 
+-static void put_empty_packet(
+-		struct smbd_connection *info, struct smbd_response *response)
+-{
+-	spin_lock(&info->empty_packet_queue_lock);
+-	list_add_tail(&response->list, &info->empty_packet_queue);
+-	info->count_empty_packet_queue++;
+-	spin_unlock(&info->empty_packet_queue_lock);
+-
+-	queue_work(info->workqueue, &info->post_send_credits_work);
+-}
+-
+ /*
+  * Implement Connection.FragmentReassemblyBuffer defined in [MS-SMBD] 3.1.1.1
+  * This is a queue for reassembling upper layer payload and present to upper
+@@ -1172,25 +1147,6 @@ static struct smbd_response *_get_first_reassembly(struct smbd_connection *info)
+ 	return ret;
+ }
+ 
+-static struct smbd_response *get_empty_queue_buffer(
+-		struct smbd_connection *info)
+-{
+-	struct smbd_response *ret = NULL;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&info->empty_packet_queue_lock, flags);
+-	if (!list_empty(&info->empty_packet_queue)) {
+-		ret = list_first_entry(
+-			&info->empty_packet_queue,
+-			struct smbd_response, list);
+-		list_del(&ret->list);
+-		info->count_empty_packet_queue--;
+-	}
+-	spin_unlock_irqrestore(&info->empty_packet_queue_lock, flags);
+-
+-	return ret;
+-}
+-
+ /*
+  * Get a receive buffer
+  * For each remote send, we need to post a receive. The receive buffers are
+@@ -1228,8 +1184,13 @@ static void put_receive_buffer(
+ 	struct smbdirect_socket *sc = &info->socket;
+ 	unsigned long flags;
+ 
+-	ib_dma_unmap_single(sc->ib.dev, response->sge.addr,
+-		response->sge.length, DMA_FROM_DEVICE);
++	if (likely(response->sge.length != 0)) {
++		ib_dma_unmap_single(sc->ib.dev,
++				    response->sge.addr,
++				    response->sge.length,
++				    DMA_FROM_DEVICE);
++		response->sge.length = 0;
++	}
+ 
+ 	spin_lock_irqsave(&info->receive_queue_lock, flags);
+ 	list_add_tail(&response->list, &info->receive_queue);
+@@ -1255,10 +1216,6 @@ static int allocate_receive_buffers(struct smbd_connection *info, int num_buf)
+ 	spin_lock_init(&info->receive_queue_lock);
+ 	info->count_receive_queue = 0;
+ 
+-	INIT_LIST_HEAD(&info->empty_packet_queue);
+-	spin_lock_init(&info->empty_packet_queue_lock);
+-	info->count_empty_packet_queue = 0;
+-
+ 	init_waitqueue_head(&info->wait_receive_queues);
+ 
+ 	for (i = 0; i < num_buf; i++) {
+@@ -1267,6 +1224,7 @@ static int allocate_receive_buffers(struct smbd_connection *info, int num_buf)
+ 			goto allocate_failed;
+ 
+ 		response->info = info;
++		response->sge.length = 0;
+ 		list_add_tail(&response->list, &info->receive_queue);
+ 		info->count_receive_queue++;
+ 	}
+@@ -1292,9 +1250,6 @@ static void destroy_receive_buffers(struct smbd_connection *info)
+ 
+ 	while ((response = get_receive_buffer(info)))
+ 		mempool_free(response, info->response_mempool);
+-
+-	while ((response = get_empty_queue_buffer(info)))
+-		mempool_free(response, info->response_mempool);
+ }
+ 
+ /* Implement idle connection timer [MS-SMBD] 3.1.6.2 */
+@@ -1381,8 +1336,7 @@ void smbd_destroy(struct TCP_Server_Info *server)
+ 
+ 	log_rdma_event(INFO, "free receive buffers\n");
+ 	wait_event(info->wait_receive_queues,
+-		info->count_receive_queue + info->count_empty_packet_queue
+-			== sp->recv_credit_max);
++		info->count_receive_queue == sp->recv_credit_max);
+ 	destroy_receive_buffers(info);
+ 
+ 	/*
+@@ -1680,8 +1634,10 @@ static struct smbd_connection *_smbd_get_connection(
+ 		goto rdma_connect_failed;
+ 	}
+ 
+-	wait_event_interruptible(
+-		info->conn_wait, sc->status != SMBDIRECT_SOCKET_CONNECTING);
++	wait_event_interruptible_timeout(
++		info->conn_wait,
++		sc->status != SMBDIRECT_SOCKET_CONNECTING,
++		msecs_to_jiffies(RDMA_RESOLVE_TIMEOUT));
+ 
+ 	if (sc->status != SMBDIRECT_SOCKET_CONNECTED) {
+ 		log_rdma_event(ERR, "rdma_connect failed port=%d\n", port);
+diff --git a/fs/smb/client/smbdirect.h b/fs/smb/client/smbdirect.h
+index 3d552ab27e0f3d..fb8db71735f322 100644
+--- a/fs/smb/client/smbdirect.h
++++ b/fs/smb/client/smbdirect.h
+@@ -110,10 +110,6 @@ struct smbd_connection {
+ 	int count_receive_queue;
+ 	spinlock_t receive_queue_lock;
+ 
+-	struct list_head empty_packet_queue;
+-	int count_empty_packet_queue;
+-	spinlock_t empty_packet_queue_lock;
+-
+ 	wait_queue_head_t wait_receive_queues;
+ 
+ 	/* Reassembly queue */
+diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h
+index dd3e0e3f7bf046..31dd1caac1e8a8 100644
+--- a/fs/smb/server/connection.h
++++ b/fs/smb/server/connection.h
+@@ -46,6 +46,7 @@ struct ksmbd_conn {
+ 	struct mutex			srv_mutex;
+ 	int				status;
+ 	unsigned int			cli_cap;
++	__be32				inet_addr;
+ 	char				*request_buf;
+ 	struct ksmbd_transport		*transport;
+ 	struct nls_table		*local_nls;
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index f1c7ed1a6ca59d..55a7887fdad758 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -1621,11 +1621,24 @@ static int krb5_authenticate(struct ksmbd_work *work,
+ 
+ 	rsp->SecurityBufferLength = cpu_to_le16(out_len);
+ 
+-	if ((conn->sign || server_conf.enforced_signing) ||
++	/*
++	 * If the session state is SMB2_SESSION_VALID, we can assume
++	 * that this is a reauthentication. The user/password
++	 * has already been verified, so return here.
++	 */
++	if (sess->state == SMB2_SESSION_VALID) {
++		if (conn->binding)
++			goto binding_session;
++		return 0;
++	}
++
++	if ((rsp->SessionFlags != SMB2_SESSION_FLAG_IS_GUEST_LE &&
++	    (conn->sign || server_conf.enforced_signing)) ||
+ 	    (req->SecurityMode & SMB2_NEGOTIATE_SIGNING_REQUIRED))
+ 		sess->sign = true;
+ 
+-	if (smb3_encryption_negotiated(conn)) {
++	if (smb3_encryption_negotiated(conn) &&
++	    !(req->Flags & SMB2_SESSION_REQ_FLAG_BINDING)) {
+ 		retval = conn->ops->generate_encryptionkey(conn, sess);
+ 		if (retval) {
+ 			ksmbd_debug(SMB,
+@@ -1638,6 +1651,7 @@ static int krb5_authenticate(struct ksmbd_work *work,
+ 		sess->sign = false;
+ 	}
+ 
++binding_session:
+ 	if (conn->dialect >= SMB30_PROT_ID) {
+ 		chann = lookup_chann_list(sess, conn);
+ 		if (!chann) {
+@@ -1833,8 +1847,6 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ 				ksmbd_conn_set_good(conn);
+ 				sess->state = SMB2_SESSION_VALID;
+ 			}
+-			kfree(sess->Preauth_HashValue);
+-			sess->Preauth_HashValue = NULL;
+ 		} else if (conn->preferred_auth_mech == KSMBD_AUTH_NTLMSSP) {
+ 			if (negblob->MessageType == NtLmNegotiate) {
+ 				rc = ntlm_negotiate(work, negblob, negblob_len, rsp);
+@@ -1861,8 +1873,6 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ 						kfree(preauth_sess);
+ 					}
+ 				}
+-				kfree(sess->Preauth_HashValue);
+-				sess->Preauth_HashValue = NULL;
+ 			} else {
+ 				pr_info_ratelimited("Unknown NTLMSSP message type : 0x%x\n",
+ 						le32_to_cpu(negblob->MessageType));
+diff --git a/fs/smb/server/smb_common.c b/fs/smb/server/smb_common.c
+index 425c756bcfb862..b23203a1c2865a 100644
+--- a/fs/smb/server/smb_common.c
++++ b/fs/smb/server/smb_common.c
+@@ -515,7 +515,7 @@ int ksmbd_extract_shortname(struct ksmbd_conn *conn, const char *longname,
+ 
+ 	p = strrchr(longname, '.');
+ 	if (p == longname) { /*name starts with a dot*/
+-		strscpy(extension, "___", strlen("___"));
++		strscpy(extension, "___", sizeof(extension));
+ 	} else {
+ 		if (p) {
+ 			p++;
+diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c
+index c6cbe0d56e3212..8d366db5f60547 100644
+--- a/fs/smb/server/transport_rdma.c
++++ b/fs/smb/server/transport_rdma.c
+@@ -129,9 +129,6 @@ struct smb_direct_transport {
+ 	spinlock_t		recvmsg_queue_lock;
+ 	struct list_head	recvmsg_queue;
+ 
+-	spinlock_t		empty_recvmsg_queue_lock;
+-	struct list_head	empty_recvmsg_queue;
+-
+ 	int			send_credit_target;
+ 	atomic_t		send_credits;
+ 	spinlock_t		lock_new_recv_credits;
+@@ -268,40 +265,19 @@ smb_direct_recvmsg *get_free_recvmsg(struct smb_direct_transport *t)
+ static void put_recvmsg(struct smb_direct_transport *t,
+ 			struct smb_direct_recvmsg *recvmsg)
+ {
+-	ib_dma_unmap_single(t->cm_id->device, recvmsg->sge.addr,
+-			    recvmsg->sge.length, DMA_FROM_DEVICE);
++	if (likely(recvmsg->sge.length != 0)) {
++		ib_dma_unmap_single(t->cm_id->device,
++				    recvmsg->sge.addr,
++				    recvmsg->sge.length,
++				    DMA_FROM_DEVICE);
++		recvmsg->sge.length = 0;
++	}
+ 
+ 	spin_lock(&t->recvmsg_queue_lock);
+ 	list_add(&recvmsg->list, &t->recvmsg_queue);
+ 	spin_unlock(&t->recvmsg_queue_lock);
+ }
+ 
+-static struct
+-smb_direct_recvmsg *get_empty_recvmsg(struct smb_direct_transport *t)
+-{
+-	struct smb_direct_recvmsg *recvmsg = NULL;
+-
+-	spin_lock(&t->empty_recvmsg_queue_lock);
+-	if (!list_empty(&t->empty_recvmsg_queue)) {
+-		recvmsg = list_first_entry(&t->empty_recvmsg_queue,
+-					   struct smb_direct_recvmsg, list);
+-		list_del(&recvmsg->list);
+-	}
+-	spin_unlock(&t->empty_recvmsg_queue_lock);
+-	return recvmsg;
+-}
+-
+-static void put_empty_recvmsg(struct smb_direct_transport *t,
+-			      struct smb_direct_recvmsg *recvmsg)
+-{
+-	ib_dma_unmap_single(t->cm_id->device, recvmsg->sge.addr,
+-			    recvmsg->sge.length, DMA_FROM_DEVICE);
+-
+-	spin_lock(&t->empty_recvmsg_queue_lock);
+-	list_add_tail(&recvmsg->list, &t->empty_recvmsg_queue);
+-	spin_unlock(&t->empty_recvmsg_queue_lock);
+-}
+-
+ static void enqueue_reassembly(struct smb_direct_transport *t,
+ 			       struct smb_direct_recvmsg *recvmsg,
+ 			       int data_length)
+@@ -386,9 +362,6 @@ static struct smb_direct_transport *alloc_transport(struct rdma_cm_id *cm_id)
+ 	spin_lock_init(&t->recvmsg_queue_lock);
+ 	INIT_LIST_HEAD(&t->recvmsg_queue);
+ 
+-	spin_lock_init(&t->empty_recvmsg_queue_lock);
+-	INIT_LIST_HEAD(&t->empty_recvmsg_queue);
+-
+ 	init_waitqueue_head(&t->wait_send_pending);
+ 	atomic_set(&t->send_pending, 0);
+ 
+@@ -548,13 +521,13 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	t = recvmsg->transport;
+ 
+ 	if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_RECV) {
++		put_recvmsg(t, recvmsg);
+ 		if (wc->status != IB_WC_WR_FLUSH_ERR) {
+ 			pr_err("Recv error. status='%s (%d)' opcode=%d\n",
+ 			       ib_wc_status_msg(wc->status), wc->status,
+ 			       wc->opcode);
+ 			smb_direct_disconnect_rdma_connection(t);
+ 		}
+-		put_empty_recvmsg(t, recvmsg);
+ 		return;
+ 	}
+ 
+@@ -568,7 +541,8 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	switch (recvmsg->type) {
+ 	case SMB_DIRECT_MSG_NEGOTIATE_REQ:
+ 		if (wc->byte_len < sizeof(struct smb_direct_negotiate_req)) {
+-			put_empty_recvmsg(t, recvmsg);
++			put_recvmsg(t, recvmsg);
++			smb_direct_disconnect_rdma_connection(t);
+ 			return;
+ 		}
+ 		t->negotiation_requested = true;
+@@ -576,7 +550,7 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		t->status = SMB_DIRECT_CS_CONNECTED;
+ 		enqueue_reassembly(t, recvmsg, 0);
+ 		wake_up_interruptible(&t->wait_status);
+-		break;
++		return;
+ 	case SMB_DIRECT_MSG_DATA_TRANSFER: {
+ 		struct smb_direct_data_transfer *data_transfer =
+ 			(struct smb_direct_data_transfer *)recvmsg->packet;
+@@ -585,7 +559,8 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 
+ 		if (wc->byte_len <
+ 		    offsetof(struct smb_direct_data_transfer, padding)) {
+-			put_empty_recvmsg(t, recvmsg);
++			put_recvmsg(t, recvmsg);
++			smb_direct_disconnect_rdma_connection(t);
+ 			return;
+ 		}
+ 
+@@ -593,7 +568,8 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		if (data_length) {
+ 			if (wc->byte_len < sizeof(struct smb_direct_data_transfer) +
+ 			    (u64)data_length) {
+-				put_empty_recvmsg(t, recvmsg);
++				put_recvmsg(t, recvmsg);
++				smb_direct_disconnect_rdma_connection(t);
+ 				return;
+ 			}
+ 
+@@ -605,16 +581,11 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 			else
+ 				t->full_packet_received = true;
+ 
+-			enqueue_reassembly(t, recvmsg, (int)data_length);
+-			wake_up_interruptible(&t->wait_reassembly_queue);
+-
+ 			spin_lock(&t->receive_credit_lock);
+ 			receive_credits = --(t->recv_credits);
+ 			avail_recvmsg_count = t->count_avail_recvmsg;
+ 			spin_unlock(&t->receive_credit_lock);
+ 		} else {
+-			put_empty_recvmsg(t, recvmsg);
+-
+ 			spin_lock(&t->receive_credit_lock);
+ 			receive_credits = --(t->recv_credits);
+ 			avail_recvmsg_count = ++(t->count_avail_recvmsg);
+@@ -636,11 +607,23 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		if (is_receive_credit_post_required(receive_credits, avail_recvmsg_count))
+ 			mod_delayed_work(smb_direct_wq,
+ 					 &t->post_recv_credits_work, 0);
+-		break;
++
++		if (data_length) {
++			enqueue_reassembly(t, recvmsg, (int)data_length);
++			wake_up_interruptible(&t->wait_reassembly_queue);
++		} else
++			put_recvmsg(t, recvmsg);
++
++		return;
+ 	}
+-	default:
+-		break;
+ 	}
++
++	/*
++	 * This is an internal error!
++	 */
++	WARN_ON_ONCE(recvmsg->type != SMB_DIRECT_MSG_DATA_TRANSFER);
++	put_recvmsg(t, recvmsg);
++	smb_direct_disconnect_rdma_connection(t);
+ }
+ 
+ static int smb_direct_post_recv(struct smb_direct_transport *t,
+@@ -670,6 +653,7 @@ static int smb_direct_post_recv(struct smb_direct_transport *t,
+ 		ib_dma_unmap_single(t->cm_id->device,
+ 				    recvmsg->sge.addr, recvmsg->sge.length,
+ 				    DMA_FROM_DEVICE);
++		recvmsg->sge.length = 0;
+ 		smb_direct_disconnect_rdma_connection(t);
+ 		return ret;
+ 	}
+@@ -811,7 +795,6 @@ static void smb_direct_post_recv_credits(struct work_struct *work)
+ 	struct smb_direct_recvmsg *recvmsg;
+ 	int receive_credits, credits = 0;
+ 	int ret;
+-	int use_free = 1;
+ 
+ 	spin_lock(&t->receive_credit_lock);
+ 	receive_credits = t->recv_credits;
+@@ -819,18 +802,9 @@ static void smb_direct_post_recv_credits(struct work_struct *work)
+ 
+ 	if (receive_credits < t->recv_credit_target) {
+ 		while (true) {
+-			if (use_free)
+-				recvmsg = get_free_recvmsg(t);
+-			else
+-				recvmsg = get_empty_recvmsg(t);
+-			if (!recvmsg) {
+-				if (use_free) {
+-					use_free = 0;
+-					continue;
+-				} else {
+-					break;
+-				}
+-			}
++			recvmsg = get_free_recvmsg(t);
++			if (!recvmsg)
++				break;
+ 
+ 			recvmsg->type = SMB_DIRECT_MSG_DATA_TRANSFER;
+ 			recvmsg->first_segment = false;
+@@ -1806,8 +1780,6 @@ static void smb_direct_destroy_pools(struct smb_direct_transport *t)
+ 
+ 	while ((recvmsg = get_free_recvmsg(t)))
+ 		mempool_free(recvmsg, t->recvmsg_mempool);
+-	while ((recvmsg = get_empty_recvmsg(t)))
+-		mempool_free(recvmsg, t->recvmsg_mempool);
+ 
+ 	mempool_destroy(t->recvmsg_mempool);
+ 	t->recvmsg_mempool = NULL;
+@@ -1863,6 +1835,7 @@ static int smb_direct_create_pools(struct smb_direct_transport *t)
+ 		if (!recvmsg)
+ 			goto err;
+ 		recvmsg->transport = t;
++		recvmsg->sge.length = 0;
+ 		list_add(&recvmsg->list, &t->recvmsg_queue);
+ 	}
+ 	t->count_avail_recvmsg = t->recv_credit_max;
+diff --git a/fs/smb/server/transport_tcp.c b/fs/smb/server/transport_tcp.c
+index 4e9f98db9ff409..d72588f33b9cd1 100644
+--- a/fs/smb/server/transport_tcp.c
++++ b/fs/smb/server/transport_tcp.c
+@@ -87,6 +87,7 @@ static struct tcp_transport *alloc_transport(struct socket *client_sk)
+ 		return NULL;
+ 	}
+ 
++	conn->inet_addr = inet_sk(client_sk->sk)->inet_daddr;
+ 	conn->transport = KSMBD_TRANS(t);
+ 	KSMBD_TRANS(t)->conn = conn;
+ 	KSMBD_TRANS(t)->ops = &ksmbd_tcp_transport_ops;
+@@ -230,6 +231,8 @@ static int ksmbd_kthread_fn(void *p)
+ {
+ 	struct socket *client_sk = NULL;
+ 	struct interface *iface = (struct interface *)p;
++	struct inet_sock *csk_inet;
++	struct ksmbd_conn *conn;
+ 	int ret;
+ 
+ 	while (!kthread_should_stop()) {
+@@ -248,6 +251,20 @@ static int ksmbd_kthread_fn(void *p)
+ 			continue;
+ 		}
+ 
++		/*
++		 * Limit repeated connections from clients with the same IP address.
++		 */
++		csk_inet = inet_sk(client_sk->sk);
++		down_read(&conn_list_lock);
++		list_for_each_entry(conn, &conn_list, conns_list)
++			if (csk_inet->inet_daddr == conn->inet_addr) {
++				ret = -EAGAIN;
++				break;
++			}
++		up_read(&conn_list_lock);
++		if (ret == -EAGAIN)
++			continue;
++
+ 		if (server_conf.max_connections &&
+ 		    atomic_inc_return(&active_num_conn) >= server_conf.max_connections) {
+ 			pr_info_ratelimited("Limit the maximum number of connections(%u)\n",
+diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c
+index 134cabdd60eb3b..70b933b0f7c9fb 100644
+--- a/fs/smb/server/vfs.c
++++ b/fs/smb/server/vfs.c
+@@ -546,7 +546,8 @@ int ksmbd_vfs_getattr(const struct path *path, struct kstat *stat)
+ {
+ 	int err;
+ 
+-	err = vfs_getattr(path, stat, STATX_BTIME, AT_STATX_SYNC_AS_STAT);
++	err = vfs_getattr(path, stat, STATX_BASIC_STATS | STATX_BTIME,
++			AT_STATX_SYNC_AS_STAT);
+ 	if (err)
+ 		pr_err("getattr failed, err %d\n", err);
+ 	return err;
+diff --git a/include/linux/audit.h b/include/linux/audit.h
+index 0050ef288ab3ce..a394614ccd0b81 100644
+--- a/include/linux/audit.h
++++ b/include/linux/audit.h
+@@ -417,7 +417,7 @@ extern int __audit_log_bprm_fcaps(struct linux_binprm *bprm,
+ extern void __audit_log_capset(const struct cred *new, const struct cred *old);
+ extern void __audit_mmap_fd(int fd, int flags);
+ extern void __audit_openat2_how(struct open_how *how);
+-extern void __audit_log_kern_module(char *name);
++extern void __audit_log_kern_module(const char *name);
+ extern void __audit_fanotify(u32 response, struct fanotify_response_info_audit_rule *friar);
+ extern void __audit_tk_injoffset(struct timespec64 offset);
+ extern void __audit_ntp_log(const struct audit_ntp_data *ad);
+@@ -519,7 +519,7 @@ static inline void audit_openat2_how(struct open_how *how)
+ 		__audit_openat2_how(how);
+ }
+ 
+-static inline void audit_log_kern_module(char *name)
++static inline void audit_log_kern_module(const char *name)
+ {
+ 	if (!audit_dummy_context())
+ 		__audit_log_kern_module(name);
+@@ -677,9 +677,8 @@ static inline void audit_mmap_fd(int fd, int flags)
+ static inline void audit_openat2_how(struct open_how *how)
+ { }
+ 
+-static inline void audit_log_kern_module(char *name)
+-{
+-}
++static inline void audit_log_kern_module(const char *name)
++{ }
+ 
+ static inline void audit_fanotify(u32 response, struct fanotify_response_info_audit_rule *friar)
+ { }
+diff --git a/include/linux/fortify-string.h b/include/linux/fortify-string.h
+index e4ce1cae03bf77..b3b53f8c1b28ef 100644
+--- a/include/linux/fortify-string.h
++++ b/include/linux/fortify-string.h
+@@ -596,7 +596,7 @@ __FORTIFY_INLINE bool fortify_memcpy_chk(__kernel_size_t size,
+ 	if (p_size != SIZE_MAX && p_size < size)
+ 		fortify_panic(func, FORTIFY_WRITE, p_size, size, true);
+ 	else if (q_size != SIZE_MAX && q_size < size)
+-		fortify_panic(func, FORTIFY_READ, p_size, size, true);
++		fortify_panic(func, FORTIFY_READ, q_size, size, true);
+ 
+ 	/*
+ 	 * Warn when writing beyond destination field size.
+diff --git a/include/linux/fs_context.h b/include/linux/fs_context.h
+index a19e4bd32e4d31..7773eb870039c4 100644
+--- a/include/linux/fs_context.h
++++ b/include/linux/fs_context.h
+@@ -200,7 +200,7 @@ void logfc(struct fc_log *log, const char *prefix, char level, const char *fmt,
+  */
+ #define infof(fc, fmt, ...) __logfc(fc, 'i', fmt, ## __VA_ARGS__)
+ #define info_plog(p, fmt, ...) __plog(p, 'i', fmt, ## __VA_ARGS__)
+-#define infofc(p, fmt, ...) __plog((&(fc)->log), 'i', fmt, ## __VA_ARGS__)
++#define infofc(fc, fmt, ...) __plog((&(fc)->log), 'i', fmt, ## __VA_ARGS__)
+ 
+ /**
+  * warnf - Store supplementary warning message
+diff --git a/include/linux/if_team.h b/include/linux/if_team.h
+index cdc684e04a2fb6..ce97d891cf720f 100644
+--- a/include/linux/if_team.h
++++ b/include/linux/if_team.h
+@@ -191,8 +191,6 @@ struct team {
+ 
+ 	const struct header_ops *header_ops_cache;
+ 
+-	struct mutex lock; /* used for overall locking, e.g. port lists write */
+-
+ 	/*
+ 	 * List of enabled ports and their count
+ 	 */
+@@ -223,7 +221,6 @@ struct team {
+ 		atomic_t count_pending;
+ 		struct delayed_work dw;
+ 	} mcast_rejoin;
+-	struct lock_class_key team_lock_key;
+ 	long mode_priv[TEAM_MODE_PRIV_LONGS];
+ };
+ 
+diff --git a/include/linux/ioprio.h b/include/linux/ioprio.h
+index b25377b6ea98dd..5210e8371238f1 100644
+--- a/include/linux/ioprio.h
++++ b/include/linux/ioprio.h
+@@ -60,7 +60,8 @@ static inline int __get_task_ioprio(struct task_struct *p)
+ 	int prio;
+ 
+ 	if (!ioc)
+-		return IOPRIO_DEFAULT;
++		return IOPRIO_PRIO_VALUE(task_nice_ioclass(p),
++					 task_nice_ioprio(p));
+ 
+ 	if (p != current)
+ 		lockdep_assert_held(&p->alloc_lock);
+diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h
+index 6822cfa5f4ad31..9d2467f982ad46 100644
+--- a/include/linux/mlx5/device.h
++++ b/include/linux/mlx5/device.h
+@@ -280,6 +280,7 @@ enum {
+ 	MLX5_MKEY_MASK_SMALL_FENCE	= 1ull << 23,
+ 	MLX5_MKEY_MASK_RELAXED_ORDERING_WRITE	= 1ull << 25,
+ 	MLX5_MKEY_MASK_FREE			= 1ull << 29,
++	MLX5_MKEY_MASK_PAGE_SIZE_5		= 1ull << 42,
+ 	MLX5_MKEY_MASK_RELAXED_ORDERING_READ	= 1ull << 47,
+ };
+ 
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 2e4584e1bfcd9a..82b7bea9fa7cc7 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -33,6 +33,7 @@
+ #include <linux/slab.h>
+ #include <linux/cacheinfo.h>
+ #include <linux/rcuwait.h>
++#include <linux/sched/mm.h>
+ 
+ struct mempolicy;
+ struct anon_vma;
+@@ -716,6 +717,10 @@ static inline void vma_refcount_put(struct vm_area_struct *vma)
+  * reused and attached to a different mm before we lock it.
+  * Returns the vma on success, NULL on failure to lock and EAGAIN if vma got
+  * detached.
++ *
++ * WARNING! The vma passed to this function cannot be used if the function
++ * fails to lock it because in certain cases the RCU lock is dropped and then
++ * reacquired. Once the RCU lock is dropped, the vma can be concurrently freed.
+  */
+ static inline struct vm_area_struct *vma_start_read(struct mm_struct *mm,
+ 						    struct vm_area_struct *vma)
+@@ -745,6 +750,31 @@ static inline struct vm_area_struct *vma_start_read(struct mm_struct *mm,
+ 	}
+ 
+ 	rwsem_acquire_read(&vma->vmlock_dep_map, 0, 1, _RET_IP_);
++
++	/*
++	 * If vma got attached to another mm from under us, that mm is not
++	 * stable and can be freed in the narrow window after vma->vm_refcnt
++	 * is dropped and before rcuwait_wake_up(mm) is called. Grab it before
++	 * releasing vma->vm_refcnt.
++	 */
++	if (unlikely(vma->vm_mm != mm)) {
++		/* Use a copy of vm_mm in case vma is freed after we drop vm_refcnt */
++		struct mm_struct *other_mm = vma->vm_mm;
++
++		/*
++		 * __mmdrop() is a heavy operation and we don't need RCU
++		 * protection here. Release RCU lock during these operations.
++		 * We reinstate the RCU read lock as the caller expects it to
++		 * be held when this function returns even on error.
++		 */
++		rcu_read_unlock();
++		mmgrab(other_mm);
++		vma_refcount_put(vma);
++		mmdrop(other_mm);
++		rcu_read_lock();
++		return NULL;
++	}
++
+ 	/*
+ 	 * Overflow of vm_lock_seq/mm_lock_seq might produce false locked result.
+ 	 * False unlocked result is impossible because we modify and check
+diff --git a/include/linux/moduleparam.h b/include/linux/moduleparam.h
+index bfb85fd13e1fae..110e9d09de2436 100644
+--- a/include/linux/moduleparam.h
++++ b/include/linux/moduleparam.h
+@@ -282,10 +282,9 @@ struct kparam_array
+ #define __moduleparam_const const
+ #endif
+ 
+-/* This is the fundamental function for registering boot/module
+-   parameters. */
++/* This is the fundamental function for registering boot/module parameters. */
+ #define __module_param_call(prefix, name, ops, arg, perm, level, flags)	\
+-	/* Default value instead of permissions? */			\
++	static_assert(sizeof(""prefix) - 1 <= MAX_PARAM_PREFIX_LEN);	\
+ 	static const char __param_str_##name[] = prefix #name;		\
+ 	static struct kernel_param __moduleparam_const __param_##name	\
+ 	__used __section("__param")					\
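
The static_assert() added to __module_param_call() above turns an
over-long prefix into a build failure instead of a silently broken
parameter name. The ""prefix spelling relies on string-literal pasting,
so sizeof() acts as a compile-time strlen() and a non-literal argument
fails to compile. The same pattern in isolation, with a hypothetical
limit:

	/* sizeof on a string literal counts the trailing NUL, hence -1. */
	#define ASSERT_PREFIX_FITS(p, max) \
		static_assert(sizeof("" p) - 1 <= (max), "prefix too long")
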
+diff --git a/include/linux/padata.h b/include/linux/padata.h
+index 0146daf3443066..765f2778e264a5 100644
+--- a/include/linux/padata.h
++++ b/include/linux/padata.h
+@@ -90,8 +90,6 @@ struct padata_cpumask {
+  * @processed: Number of already processed objects.
+  * @cpu: Next CPU to be processed.
+  * @cpumask: The cpumasks in use for parallel and serial workers.
+- * @reorder_work: work struct for reordering.
+- * @lock: Reorder lock.
+  */
+ struct parallel_data {
+ 	struct padata_shell		*ps;
+@@ -102,8 +100,6 @@ struct parallel_data {
+ 	unsigned int			processed;
+ 	int				cpu;
+ 	struct padata_cpumask		cpumask;
+-	struct work_struct		reorder_work;
+-	spinlock_t                      ____cacheline_aligned lock;
+ };
+ 
+ /**
+diff --git a/include/linux/pps_kernel.h b/include/linux/pps_kernel.h
+index c7abce28ed2995..aab0aebb529e02 100644
+--- a/include/linux/pps_kernel.h
++++ b/include/linux/pps_kernel.h
+@@ -52,6 +52,7 @@ struct pps_device {
+ 	int current_mode;			/* PPS mode at event time */
+ 
+ 	unsigned int last_ev;			/* last PPS event id */
++	unsigned int last_fetched_ev;		/* last fetched PPS event id */
+ 	wait_queue_head_t queue;		/* PPS event queue */
+ 
+ 	unsigned int id;			/* PPS source unique ID */
+diff --git a/include/linux/proc_fs.h b/include/linux/proc_fs.h
+index ea62201c74c402..703d0c76cc9a0a 100644
+--- a/include/linux/proc_fs.h
++++ b/include/linux/proc_fs.h
+@@ -27,6 +27,7 @@ enum {
+ 
+ 	PROC_ENTRY_proc_read_iter	= 1U << 1,
+ 	PROC_ENTRY_proc_compat_ioctl	= 1U << 2,
++	PROC_ENTRY_proc_lseek		= 1U << 3,
+ };
+ 
+ struct proc_ops {
+diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
+index f1fd3a8044e0ec..dd10c22299ab82 100644
+--- a/include/linux/psi_types.h
++++ b/include/linux/psi_types.h
+@@ -84,11 +84,9 @@ enum psi_aggregators {
+ struct psi_group_cpu {
+ 	/* 1st cacheline updated by the scheduler */
+ 
+-	/* Aggregator needs to know of concurrent changes */
+-	seqcount_t seq ____cacheline_aligned_in_smp;
+-
+ 	/* States of the tasks belonging to this group */
+-	unsigned int tasks[NR_PSI_TASK_COUNTS];
++	unsigned int tasks[NR_PSI_TASK_COUNTS]
++			____cacheline_aligned_in_smp;
+ 
+ 	/* Aggregate pressure state derived from the tasks */
+ 	u32 state_mask;
+diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
+index 56e27263acf892..00e232f3c2e880 100644
+--- a/include/linux/ring_buffer.h
++++ b/include/linux/ring_buffer.h
+@@ -152,9 +152,7 @@ ring_buffer_consume(struct trace_buffer *buffer, int cpu, u64 *ts,
+ 		    unsigned long *lost_events);
+ 
+ struct ring_buffer_iter *
+-ring_buffer_read_prepare(struct trace_buffer *buffer, int cpu, gfp_t flags);
+-void ring_buffer_read_prepare_sync(void);
+-void ring_buffer_read_start(struct ring_buffer_iter *iter);
++ring_buffer_read_start(struct trace_buffer *buffer, int cpu, gfp_t flags);
+ void ring_buffer_read_finish(struct ring_buffer_iter *iter);
+ 
+ struct ring_buffer_event *
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index b974a277975a8a..fad2fc972d23ab 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -3032,6 +3032,29 @@ static inline void skb_reset_transport_header(struct sk_buff *skb)
+ 	skb->transport_header = offset;
+ }
+ 
++/**
++ * skb_reset_transport_header_careful - conditionally reset transport header
++ * @skb: buffer to alter
++ *
++ * Hardened version of skb_reset_transport_header().
++ *
++ * Returns: true if the operation was a success.
++ */
++static inline bool __must_check
++skb_reset_transport_header_careful(struct sk_buff *skb)
++{
++	long offset = skb->data - skb->head;
++
++	if (unlikely(offset != (typeof(skb->transport_header))offset))
++		return false;
++
++	if (unlikely(offset == (typeof(skb->transport_header))~0U))
++		return false;
++
++	skb->transport_header = offset;
++	return true;
++}
++
+ static inline void skb_set_transport_header(struct sk_buff *skb,
+ 					    const int offset)
+ {
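
skb_reset_transport_header_careful() above hardens the plain reset in
two ways: it rejects an offset that would be truncated when stored in
the 16-bit transport_header field, and it rejects the all-ones value
reserved to mean "offset unset". The same check pattern in isolation,
assuming a 16-bit destination field:

	#include <stdbool.h>
	#include <stdint.h>

	static bool store_offset_checked(uint16_t *field, long offset)
	{
		if (offset != (uint16_t)offset)		/* would truncate */
			return false;
		if ((uint16_t)offset == 0xffffU)	/* reserved "unset" */
			return false;
		*field = (uint16_t)offset;
		return true;
	}
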
+diff --git a/include/linux/usb/usbnet.h b/include/linux/usb/usbnet.h
+index 0b9f1e598e3a6b..4bc6bb01a0eb8b 100644
+--- a/include/linux/usb/usbnet.h
++++ b/include/linux/usb/usbnet.h
+@@ -76,6 +76,7 @@ struct usbnet {
+ #		define EVENT_LINK_CHANGE	11
+ #		define EVENT_SET_RX_MODE	12
+ #		define EVENT_NO_IP_ALIGN	13
++#		define EVENT_LINK_CARRIER_ON	14
+ /* This one is special, as it indicates that the device is going away
+  * there are cyclic dependencies between tasklet, timer and bh
+  * that must be broken
+diff --git a/include/linux/vfio.h b/include/linux/vfio.h
+index 707b00772ce1ff..eb563f538dee51 100644
+--- a/include/linux/vfio.h
++++ b/include/linux/vfio.h
+@@ -105,6 +105,9 @@ struct vfio_device {
+  * @match: Optional device name match callback (return: 0 for no-match, >0 for
+  *         match, -errno for abort (ex. match with insufficient or incorrect
+  *         additional args)
++ * @match_token_uuid: Optional device token match/validation. Return 0
++ *         if the uuid is valid for the device, -errno otherwise. uuid is NULL
++ *         if none was provided.
+  * @dma_unmap: Called when userspace unmaps IOVA from the container
+  *             this device is attached to.
+  * @device_feature: Optional, fill in the VFIO_DEVICE_FEATURE ioctl
+@@ -132,6 +135,7 @@ struct vfio_device_ops {
+ 	int	(*mmap)(struct vfio_device *vdev, struct vm_area_struct *vma);
+ 	void	(*request)(struct vfio_device *vdev, unsigned int count);
+ 	int	(*match)(struct vfio_device *vdev, char *buf);
++	int	(*match_token_uuid)(struct vfio_device *vdev, const uuid_t *uuid);
+ 	void	(*dma_unmap)(struct vfio_device *vdev, u64 iova, u64 length);
+ 	int	(*device_feature)(struct vfio_device *device, u32 flags,
+ 				  void __user *arg, size_t argsz);
+diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
+index fbb472dd99b361..f541044e42a2ad 100644
+--- a/include/linux/vfio_pci_core.h
++++ b/include/linux/vfio_pci_core.h
+@@ -122,6 +122,8 @@ ssize_t vfio_pci_core_write(struct vfio_device *core_vdev, const char __user *bu
+ int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma);
+ void vfio_pci_core_request(struct vfio_device *core_vdev, unsigned int count);
+ int vfio_pci_core_match(struct vfio_device *core_vdev, char *buf);
++int vfio_pci_core_match_token_uuid(struct vfio_device *core_vdev,
++				   const uuid_t *uuid);
+ int vfio_pci_core_enable(struct vfio_pci_core_device *vdev);
+ void vfio_pci_core_disable(struct vfio_pci_core_device *vdev);
+ void vfio_pci_core_finish_enable(struct vfio_pci_core_device *vdev);
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index f47dfb8b5be799..ebe01eb2826441 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -2633,6 +2633,7 @@ struct hci_ev_le_conn_complete {
+ #define LE_EXT_ADV_DIRECT_IND		0x0004
+ #define LE_EXT_ADV_SCAN_RSP		0x0008
+ #define LE_EXT_ADV_LEGACY_PDU		0x0010
++#define LE_EXT_ADV_DATA_STATUS_MASK	0x0060
+ #define LE_EXT_ADV_EVT_TYPE_MASK	0x007f
+ 
+ #define ADDR_LE_DEV_PUBLIC		0x00
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index d22468bb4341c6..351a9057e70eef 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -29,6 +29,7 @@
+ #include <linux/idr.h>
+ #include <linux/leds.h>
+ #include <linux/rculist.h>
++#include <linux/spinlock.h>
+ #include <linux/srcu.h>
+ 
+ #include <net/bluetooth/hci.h>
+@@ -93,6 +94,7 @@ struct discovery_state {
+ 	u16			uuid_count;
+ 	u8			(*uuids)[16];
+ 	unsigned long		name_resolve_timeout;
++	spinlock_t		lock;
+ };
+ 
+ #define SUSPEND_NOTIFIER_TIMEOUT	msecs_to_jiffies(2000) /* 2 seconds */
+@@ -885,6 +887,7 @@ static inline void iso_recv(struct hci_conn *hcon, struct sk_buff *skb,
+ 
+ static inline void discovery_init(struct hci_dev *hdev)
+ {
++	spin_lock_init(&hdev->discovery.lock);
+ 	hdev->discovery.state = DISCOVERY_STOPPED;
+ 	INIT_LIST_HEAD(&hdev->discovery.all);
+ 	INIT_LIST_HEAD(&hdev->discovery.unknown);
+@@ -899,8 +902,11 @@ static inline void hci_discovery_filter_clear(struct hci_dev *hdev)
+ 	hdev->discovery.report_invalid_rssi = true;
+ 	hdev->discovery.rssi = HCI_RSSI_INVALID;
+ 	hdev->discovery.uuid_count = 0;
++
++	spin_lock(&hdev->discovery.lock);
+ 	kfree(hdev->discovery.uuids);
+ 	hdev->discovery.uuids = NULL;
++	spin_unlock(&hdev->discovery.lock);
+ }
+ 
+ bool hci_discovery_active(struct hci_dev *hdev);
+diff --git a/include/net/dst.h b/include/net/dst.h
+index 78c78cdce0e9a7..32dafbab4cd0d6 100644
+--- a/include/net/dst.h
++++ b/include/net/dst.h
+@@ -456,7 +456,7 @@ INDIRECT_CALLABLE_DECLARE(int ip_output(struct net *, struct sock *,
+ /* Output packet to network from transport.  */
+ static inline int dst_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+-	return INDIRECT_CALL_INET(skb_dst(skb)->output,
++	return INDIRECT_CALL_INET(READ_ONCE(skb_dst(skb)->output),
+ 				  ip6_output, ip_output,
+ 				  net, sk, skb);
+ }
+@@ -466,7 +466,7 @@ INDIRECT_CALLABLE_DECLARE(int ip_local_deliver(struct sk_buff *));
+ /* Input packet from network to transport.  */
+ static inline int dst_input(struct sk_buff *skb)
+ {
+-	return INDIRECT_CALL_INET(skb_dst(skb)->input,
++	return INDIRECT_CALL_INET(READ_ONCE(skb_dst(skb)->input),
+ 				  ip6_input, ip_local_deliver, skb);
+ }
+ 
+@@ -561,6 +561,26 @@ static inline void skb_dst_update_pmtu_no_confirm(struct sk_buff *skb, u32 mtu)
+ 		dst->ops->update_pmtu(dst, NULL, skb, mtu, false);
+ }
+ 
++static inline struct net_device *dst_dev(const struct dst_entry *dst)
++{
++	return READ_ONCE(dst->dev);
++}
++
++static inline struct net_device *skb_dst_dev(const struct sk_buff *skb)
++{
++	return dst_dev(skb_dst(skb));
++}
++
++static inline struct net *skb_dst_dev_net(const struct sk_buff *skb)
++{
++	return dev_net(skb_dst_dev(skb));
++}
++
++static inline struct net *skb_dst_dev_net_rcu(const struct sk_buff *skb)
++{
++	return dev_net_rcu(skb_dst_dev(skb));
++}
++
+ struct dst_entry *dst_blackhole_check(struct dst_entry *dst, u32 cookie);
+ void dst_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk,
+ 			       struct sk_buff *skb, u32 mtu, bool confirm_neigh);
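
The READ_ONCE() annotations in dst_output()/dst_input() and the new
dst_dev() helpers allow these fields to be read locklessly while
another CPU rewrites them (see the matching WRITE_ONCE() in the
lwtunnel hunk below). At their core the macros are single volatile
accesses the compiler can neither tear nor re-load; a simplified
stand-in (the real kernel versions add compile-time size checks):

	#define READ_ONCE(x)	 (*(const volatile typeof(x) *)&(x))
	#define WRITE_ONCE(x, v) (*(volatile typeof(x) *)&(x) = (v))
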
+diff --git a/include/net/lwtunnel.h b/include/net/lwtunnel.h
+index 39cd50300a1897..eabe80d52a6c2c 100644
+--- a/include/net/lwtunnel.h
++++ b/include/net/lwtunnel.h
+@@ -140,12 +140,12 @@ int bpf_lwt_push_ip_encap(struct sk_buff *skb, void *hdr, u32 len,
+ static inline void lwtunnel_set_redirect(struct dst_entry *dst)
+ {
+ 	if (lwtunnel_output_redirect(dst->lwtstate)) {
+-		dst->lwtstate->orig_output = dst->output;
+-		dst->output = lwtunnel_output;
++		dst->lwtstate->orig_output = READ_ONCE(dst->output);
++		WRITE_ONCE(dst->output, lwtunnel_output);
+ 	}
+ 	if (lwtunnel_input_redirect(dst->lwtstate)) {
+-		dst->lwtstate->orig_input = dst->input;
+-		dst->input = lwtunnel_input;
++		dst->lwtstate->orig_input = READ_ONCE(dst->input);
++		WRITE_ONCE(dst->input, lwtunnel_input);
+ 	}
+ }
+ #else
+diff --git a/include/net/tc_act/tc_ctinfo.h b/include/net/tc_act/tc_ctinfo.h
+index f071c1d70a25e1..a04bcac7adf4b6 100644
+--- a/include/net/tc_act/tc_ctinfo.h
++++ b/include/net/tc_act/tc_ctinfo.h
+@@ -18,9 +18,9 @@ struct tcf_ctinfo_params {
+ struct tcf_ctinfo {
+ 	struct tc_action common;
+ 	struct tcf_ctinfo_params __rcu *params;
+-	u64 stats_dscp_set;
+-	u64 stats_dscp_error;
+-	u64 stats_cpmark_set;
++	atomic64_t stats_dscp_set;
++	atomic64_t stats_dscp_error;
++	atomic64_t stats_cpmark_set;
+ };
+ 
+ enum {
+diff --git a/include/net/udp.h b/include/net/udp.h
+index 6e89520e100dcb..b59462c5b97a4a 100644
+--- a/include/net/udp.h
++++ b/include/net/udp.h
+@@ -586,6 +586,16 @@ static inline struct sk_buff *udp_rcv_segment(struct sock *sk,
+ {
+ 	netdev_features_t features = NETIF_F_SG;
+ 	struct sk_buff *segs;
++	int drop_count;
++
++	/*
++	 * Segmentation in the UDP receive path is only for UDP GRO; drop UDP
++	 * fragmentation offload (UFO) packets.
++	 */
++	if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP) {
++		drop_count = 1;
++		goto drop;
++	}
+ 
+ 	/* Avoid csum recalculation by skb_segment unless userspace explicitly
+ 	 * asks for the final checksum values
+@@ -609,16 +619,18 @@ static inline struct sk_buff *udp_rcv_segment(struct sock *sk,
+ 	 */
+ 	segs = __skb_gso_segment(skb, features, false);
+ 	if (IS_ERR_OR_NULL(segs)) {
+-		int segs_nr = skb_shinfo(skb)->gso_segs;
+-
+-		atomic_add(segs_nr, &sk->sk_drops);
+-		SNMP_ADD_STATS(__UDPX_MIB(sk, ipv4), UDP_MIB_INERRORS, segs_nr);
+-		kfree_skb(skb);
+-		return NULL;
++		drop_count = skb_shinfo(skb)->gso_segs;
++		goto drop;
+ 	}
+ 
+ 	consume_skb(skb);
+ 	return segs;
++
++drop:
++	atomic_add(drop_count, &sk->sk_drops);
++	SNMP_ADD_STATS(__UDPX_MIB(sk, ipv4), UDP_MIB_INERRORS, drop_count);
++	kfree_skb(skb);
++	return NULL;
+ }
+ 
+ static inline void udp_post_segment_fix_csum(struct sk_buff *skb)
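
The udp_rcv_segment() rework funnels both failure cases (a stray UFO
packet and a failed GSO segmentation) through a single drop label, so
the sk_drops/SNMP accounting and the kfree_skb() appear exactly once.
The shape of the pattern, with hypothetical helpers:

	static int consume_item(struct item *it)
	{
		int drop_count;

		if (reject_early(it)) {		/* cheap pre-check */
			drop_count = 1;
			goto drop;
		}
		if (split_item(it) < 0) {	/* expensive step failed */
			drop_count = it->nr_parts;
			goto drop;
		}
		return 0;

	drop:
		account_drops(drop_count);	/* written exactly once */
		free_item(it);
		return -1;
	}
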
+diff --git a/include/sound/tas2781-tlv.h b/include/sound/tas2781-tlv.h
+index d87263e43fdb61..ef9b9f19d21205 100644
+--- a/include/sound/tas2781-tlv.h
++++ b/include/sound/tas2781-tlv.h
+@@ -15,7 +15,7 @@
+ #ifndef __TAS2781_TLV_H__
+ #define __TAS2781_TLV_H__
+ 
+-static const __maybe_unused DECLARE_TLV_DB_SCALE(dvc_tlv, -10000, 100, 0);
++static const __maybe_unused DECLARE_TLV_DB_SCALE(dvc_tlv, -10000, 50, 0);
+ static const __maybe_unused DECLARE_TLV_DB_SCALE(amp_vol_tlv, 1100, 50, 0);
+ 
+ #endif
+diff --git a/include/trace/events/power.h b/include/trace/events/power.h
+index 9253e83b9bb433..ff0974e9be9afa 100644
+--- a/include/trace/events/power.h
++++ b/include/trace/events/power.h
+@@ -99,28 +99,6 @@ DEFINE_EVENT(psci_domain_idle, psci_domain_idle_exit,
+ 	TP_ARGS(cpu_id, state, s2idle)
+ );
+ 
+-TRACE_EVENT(powernv_throttle,
+-
+-	TP_PROTO(int chip_id, const char *reason, int pmax),
+-
+-	TP_ARGS(chip_id, reason, pmax),
+-
+-	TP_STRUCT__entry(
+-		__field(int, chip_id)
+-		__string(reason, reason)
+-		__field(int, pmax)
+-	),
+-
+-	TP_fast_assign(
+-		__entry->chip_id = chip_id;
+-		__assign_str(reason);
+-		__entry->pmax = pmax;
+-	),
+-
+-	TP_printk("Chip %d Pmax %d %s", __entry->chip_id,
+-		  __entry->pmax, __get_str(reason))
+-);
+-
+ TRACE_EVENT(pstate_sample,
+ 
+ 	TP_PROTO(u32 core_busy,
+diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
+index 97e2c4510e6954..dbb907eae44397 100644
+--- a/include/uapi/drm/panthor_drm.h
++++ b/include/uapi/drm/panthor_drm.h
+@@ -293,6 +293,9 @@ struct drm_panthor_gpu_info {
+ 	/** @as_present: Bitmask encoding the number of address-space exposed by the MMU. */
+ 	__u32 as_present;
+ 
++	/** @pad0: MBZ. */
++	__u32 pad0;
++
+ 	/** @shader_present: Bitmask encoding the shader cores exposed by the GPU. */
+ 	__u64 shader_present;
+ 
+diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
+index 5764f315137f99..75100bf009baf5 100644
+--- a/include/uapi/linux/vfio.h
++++ b/include/uapi/linux/vfio.h
+@@ -905,10 +905,12 @@ struct vfio_device_feature {
+  * VFIO_DEVICE_BIND_IOMMUFD - _IOR(VFIO_TYPE, VFIO_BASE + 18,
+  *				   struct vfio_device_bind_iommufd)
+  * @argsz:	 User filled size of this data.
+- * @flags:	 Must be 0.
++ * @flags:	 Must be 0 or a combination of VFIO_DEVICE_BIND_* flags
+  * @iommufd:	 iommufd to bind.
+  * @out_devid:	 The device id generated by this bind. devid is a handle for
+  *		 this device/iommufd bond and can be used in IOMMUFD commands.
++ * @token_uuid_ptr: Valid if VFIO_DEVICE_BIND_FLAG_TOKEN. Points to a 16-byte
++ *                  UUID in the same format as VFIO_DEVICE_FEATURE_PCI_VF_TOKEN.
+  *
+  * Bind a vfio_device to the specified iommufd.
+  *
+@@ -917,13 +919,21 @@ struct vfio_device_feature {
+  *
+  * Unbind is automatically conducted when device fd is closed.
+  *
++ * A token is sometimes required to open the device. Unless this is known to
++ * be needed, VFIO_DEVICE_BIND_FLAG_TOKEN should not be set and token_uuid_ptr
++ * is ignored. The only case today is a PF/VF relationship, where the VF bind
++ * must be provided the same token as the VFIO_DEVICE_FEATURE_PCI_VF_TOKEN
++ * provided to the PF.
++ *
+  * Return: 0 on success, -errno on failure.
+  */
+ struct vfio_device_bind_iommufd {
+ 	__u32		argsz;
+ 	__u32		flags;
++#define VFIO_DEVICE_BIND_FLAG_TOKEN (1 << 0)
+ 	__s32		iommufd;
+ 	__u32		out_devid;
++	__aligned_u64	token_uuid_ptr;
+ };
+ 
+ #define VFIO_DEVICE_BIND_IOMMUFD	_IO(VFIO_TYPE, VFIO_BASE + 18)
+diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
+index d4b3e2ae1314d1..e72f2655459e45 100644
+--- a/include/uapi/linux/vhost.h
++++ b/include/uapi/linux/vhost.h
+@@ -235,4 +235,33 @@
+  */
+ #define VHOST_VDPA_GET_VRING_SIZE	_IOWR(VHOST_VIRTIO, 0x82,	\
+ 					      struct vhost_vring_state)
++
++/* fork_owner values for vhost */
++#define VHOST_FORK_OWNER_KTHREAD 0
++#define VHOST_FORK_OWNER_TASK 1
++
++/**
++ * VHOST_SET_FORK_FROM_OWNER - Set the fork_owner flag for the vhost device.
++ * This ioctl must be called before VHOST_SET_OWNER.
++ * Only available when CONFIG_VHOST_ENABLE_FORK_OWNER_CONTROL=y
++ *
++ * @param fork_owner: An 8-bit value that determines the vhost thread mode
++ *
++ * When fork_owner is set to VHOST_FORK_OWNER_TASK (the default value):
++ *   - Vhost will create vhost workers as tasks forked from the owner,
++ *     inheriting all of the owner's attributes.
++ *
++ * When fork_owner is set to VHOST_FORK_OWNER_KTHREAD:
++ *   - Vhost will create vhost workers as kernel threads.
++ */
++#define VHOST_SET_FORK_FROM_OWNER _IOW(VHOST_VIRTIO, 0x83, __u8)
++
++/**
++ * VHOST_GET_FORK_FROM_OWNER - Get the current fork_owner flag for the vhost device.
++ * Only available when CONFIG_VHOST_ENABLE_FORK_OWNER_CONTROL=y
++ *
++ * @return: An 8-bit value indicating the current thread mode.
++ */
++#define VHOST_GET_FORK_FROM_OWNER _IOR(VHOST_VIRTIO, 0x84, __u8)
++
+ #endif
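
From userspace the new pair is used like other vhost ioctls that take a
pointer to the value; a minimal sketch, assuming vhost_fd is an open
vhost device fd and with error handling trimmed:

	__u8 mode = VHOST_FORK_OWNER_KTHREAD;

	/* Must precede VHOST_SET_OWNER, per the comment above. */
	if (ioctl(vhost_fd, VHOST_SET_FORK_FROM_OWNER, &mode) < 0)
		perror("VHOST_SET_FORK_FROM_OWNER");
	if (ioctl(vhost_fd, VHOST_GET_FORK_FROM_OWNER, &mode) < 0)
		perror("VHOST_GET_FORK_FROM_OWNER");
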
+diff --git a/init/Kconfig b/init/Kconfig
+index bf3a920064bec1..b2367239ac9d92 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -1761,7 +1761,7 @@ config IO_URING
+ 
+ config GCOV_PROFILE_URING
+ 	bool "Enable GCOV profiling on the io_uring subsystem"
+-	depends on GCOV_KERNEL
++	depends on IO_URING && GCOV_KERNEL
+ 	help
+ 	  Enable GCOV profiling on the io_uring subsystem, to facilitate
+ 	  code coverage testing.
+diff --git a/kernel/audit.h b/kernel/audit.h
+index 0211cb307d3028..2a24d01c5fb0e2 100644
+--- a/kernel/audit.h
++++ b/kernel/audit.h
+@@ -200,7 +200,7 @@ struct audit_context {
+ 			int			argc;
+ 		} execve;
+ 		struct {
+-			char			*name;
++			const char		*name;
+ 		} module;
+ 		struct {
+ 			struct audit_ntp_data	ntp_data;
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 78fd876a5473fb..eb98cd6fe91fb5 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -2864,7 +2864,7 @@ void __audit_openat2_how(struct open_how *how)
+ 	context->type = AUDIT_OPENAT2;
+ }
+ 
+-void __audit_log_kern_module(char *name)
++void __audit_log_kern_module(const char *name)
+ {
+ 	struct audit_context *context = audit_context();
+ 
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index c20babbf998f4e..93e49b0c218ba9 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -778,7 +778,10 @@ bool is_bpf_text_address(unsigned long addr)
+ 
+ struct bpf_prog *bpf_prog_ksym_find(unsigned long addr)
+ {
+-	struct bpf_ksym *ksym = bpf_ksym_find(addr);
++	struct bpf_ksym *ksym;
++
++	WARN_ON_ONCE(!rcu_read_lock_held());
++	ksym = bpf_ksym_find(addr);
+ 
+ 	return ksym && ksym->prog ?
+ 	       container_of(ksym, struct bpf_prog_aux, ksym)->prog :
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index 52d02bc0abb2b9..3312442bc38934 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -2864,9 +2864,16 @@ static bool bpf_stack_walker(void *cookie, u64 ip, u64 sp, u64 bp)
+ 	struct bpf_throw_ctx *ctx = cookie;
+ 	struct bpf_prog *prog;
+ 
+-	if (!is_bpf_text_address(ip))
+-		return !ctx->cnt;
++	/*
++	 * The RCU read lock is held to safely traverse the latch tree, but we
++	 * don't need its protection when accessing the prog, since it has an
++	 * active stack frame on the current stack trace, and won't disappear.
++	 */
++	rcu_read_lock();
+ 	prog = bpf_prog_ksym_find(ip);
++	rcu_read_unlock();
++	if (!prog)
++		return !ctx->cnt;
+ 	ctx->cnt++;
+ 	if (bpf_is_subprog(prog))
+ 		return true;
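
With bpf_prog_ksym_find() now asserting rcu_read_lock_held(), the
caller brackets only the latch-tree lookup with the RCU read lock; the
returned prog stays valid afterwards because it has a live frame on the
stack being unwound. The generic shape of that idiom, with hypothetical
names:

	rcu_read_lock();
	obj = lookup(key);	/* only the tree walk needs RCU */
	rcu_read_unlock();
	if (obj)
		use(obj);	/* obj pinned for an independent reason */
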
+diff --git a/kernel/bpf/preload/Kconfig b/kernel/bpf/preload/Kconfig
+index c9d45c9d6918d1..f9b11d01c3b50d 100644
+--- a/kernel/bpf/preload/Kconfig
++++ b/kernel/bpf/preload/Kconfig
+@@ -10,7 +10,6 @@ menuconfig BPF_PRELOAD
+ 	# The dependency on !COMPILE_TEST prevents it from being enabled
+ 	# in allmodconfig or allyesconfig configurations
+ 	depends on !COMPILE_TEST
+-	select USERMODE_DRIVER
+ 	help
+ 	  This builds kernel module with several embedded BPF programs that are
+ 	  pinned into BPF FS mount point as human readable files that are
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index c12dfbeb78a744..a1ecad2944a8da 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -23657,6 +23657,7 @@ static bool can_jump(struct bpf_insn *insn)
+ 	case BPF_JSLT:
+ 	case BPF_JSLE:
+ 	case BPF_JCOND:
++	case BPF_JSET:
+ 		return true;
+ 	}
+ 
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 5a2ed3344392fa..033270a281b5d6 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6790,10 +6790,20 @@ static vm_fault_t perf_mmap_pfn_mkwrite(struct vm_fault *vmf)
+ 	return vmf->pgoff == 0 ? 0 : VM_FAULT_SIGBUS;
+ }
+ 
++static int perf_mmap_may_split(struct vm_area_struct *vma, unsigned long addr)
++{
++	/*
++	 * Forbid splitting perf mappings to prevent refcount leaks due to
++	 * the resulting non-matching offsets and sizes. See open()/close().
++	 */
++	return -EINVAL;
++}
++
+ static const struct vm_operations_struct perf_mmap_vmops = {
+ 	.open		= perf_mmap_open,
+ 	.close		= perf_mmap_close, /* non mergeable */
+ 	.pfn_mkwrite	= perf_mmap_pfn_mkwrite,
++	.may_split	= perf_mmap_may_split,
+ };
+ 
+ static int map_range(struct perf_buffer *rb, struct vm_area_struct *vma)
+@@ -6988,8 +6998,6 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 			ret = 0;
+ 			goto unlock;
+ 		}
+-
+-		atomic_set(&rb->aux_mmap_count, 1);
+ 	}
+ 
+ 	user_lock_limit = sysctl_perf_event_mlock >> (PAGE_SHIFT - 10);
+@@ -7052,15 +7060,16 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 		perf_event_update_time(event);
+ 		perf_event_init_userpage(event);
+ 		perf_event_update_userpage(event);
++		ret = 0;
+ 	} else {
+ 		ret = rb_alloc_aux(rb, event, vma->vm_pgoff, nr_pages,
+ 				   event->attr.aux_watermark, flags);
+-		if (!ret)
++		if (!ret) {
++			atomic_set(&rb->aux_mmap_count, 1);
+ 			rb->aux_mmap_locked = extra;
++		}
+ 	}
+ 
+-	ret = 0;
+-
+ unlock:
+ 	if (!ret) {
+ 		atomic_long_add(user_extra, &user->locked_vm);
+@@ -7068,6 +7077,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 
+ 		atomic_inc(&event->mmap_count);
+ 	} else if (rb) {
++		/* AUX allocation failed */
+ 		atomic_dec(&rb->mmap_count);
+ 	}
+ aux_unlock:
+@@ -7075,6 +7085,9 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 		mutex_unlock(aux_mutex);
+ 	mutex_unlock(&event->mmap_mutex);
+ 
++	if (ret)
++		return ret;
++
+ 	/*
+ 	 * Since pinned accounting is per vm we cannot allow fork() to copy our
+ 	 * vma.
+@@ -7082,12 +7095,19 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+ 	vma->vm_ops = &perf_mmap_vmops;
+ 
+-	if (!ret)
+-		ret = map_range(rb, vma);
+-
+-	if (!ret && event->pmu->event_mapped)
++	if (event->pmu->event_mapped)
+ 		event->pmu->event_mapped(event, vma->vm_mm);
+ 
++	 * Try to map it into the page table. On failure, invoke
++	 * Try to map it into the page table. On fail, invoke
++	 * perf_mmap_close() to undo the above, as the callsite expects
++	 * full cleanup in this case and therefore does not invoke
++	 * vmops::close().
++	 */
++	ret = map_range(rb, vma);
++	if (ret)
++		perf_mmap_close(vma);
++
+ 	return ret;
+ }
+ 
+diff --git a/kernel/kcsan/kcsan_test.c b/kernel/kcsan/kcsan_test.c
+index 6ce73cceaf535e..1305bc0e24796c 100644
+--- a/kernel/kcsan/kcsan_test.c
++++ b/kernel/kcsan/kcsan_test.c
+@@ -533,7 +533,7 @@ static void test_barrier_nothreads(struct kunit *test)
+ 	struct kcsan_scoped_access *reorder_access = NULL;
+ #endif
+ 	arch_spinlock_t arch_spinlock = __ARCH_SPIN_LOCK_UNLOCKED;
+-	atomic_t dummy;
++	atomic_t dummy = ATOMIC_INIT(0);
+ 
+ 	KCSAN_TEST_REQUIRES(test, reorder_access != NULL);
+ 	KCSAN_TEST_REQUIRES(test, IS_ENABLED(CONFIG_SMP));
+diff --git a/kernel/module/main.c b/kernel/module/main.c
+index 9d8a845d946657..05da78b6a6c141 100644
+--- a/kernel/module/main.c
++++ b/kernel/module/main.c
+@@ -3298,7 +3298,7 @@ static int load_module(struct load_info *info, const char __user *uargs,
+ 
+ 	module_allocated = true;
+ 
+-	audit_log_kern_module(mod->name);
++	audit_log_kern_module(info->name);
+ 
+ 	/* Reserve our place in the list. */
+ 	err = add_unformed_module(mod);
+@@ -3460,8 +3460,10 @@ static int load_module(struct load_info *info, const char __user *uargs,
+ 	 * failures once the proper module was allocated and
+ 	 * before that.
+ 	 */
+-	if (!module_allocated)
++	if (!module_allocated) {
++		audit_log_kern_module(info->name ? info->name : "?");
+ 		mod_stat_bump_becoming(info, flags);
++	}
+ 	free_copy(info, flags);
+ 	return err;
+ }
+diff --git a/kernel/padata.c b/kernel/padata.c
+index 7eee94166357a0..25cd3406477ab8 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -261,20 +261,17 @@ EXPORT_SYMBOL(padata_do_parallel);
+  *   be parallel processed by another cpu and is not yet present in
+  *   the cpu's reorder queue.
+  */
+-static struct padata_priv *padata_find_next(struct parallel_data *pd,
+-					    bool remove_object)
++static struct padata_priv *padata_find_next(struct parallel_data *pd, int cpu,
++					    unsigned int processed)
+ {
+ 	struct padata_priv *padata;
+ 	struct padata_list *reorder;
+-	int cpu = pd->cpu;
+ 
+ 	reorder = per_cpu_ptr(pd->reorder_list, cpu);
+ 
+ 	spin_lock(&reorder->lock);
+-	if (list_empty(&reorder->list)) {
+-		spin_unlock(&reorder->lock);
+-		return NULL;
+-	}
++	if (list_empty(&reorder->list))
++		goto notfound;
+ 
+ 	padata = list_entry(reorder->list.next, struct padata_priv, list);
+ 
+@@ -282,97 +279,52 @@ static struct padata_priv *padata_find_next(struct parallel_data *pd,
+ 	 * Checks the rare case where two or more parallel jobs have hashed to
+ 	 * the same CPU and one of the later ones finishes first.
+ 	 */
+-	if (padata->seq_nr != pd->processed) {
+-		spin_unlock(&reorder->lock);
+-		return NULL;
+-	}
+-
+-	if (remove_object) {
+-		list_del_init(&padata->list);
+-		++pd->processed;
+-		pd->cpu = cpumask_next_wrap(cpu, pd->cpumask.pcpu);
+-	}
++	if (padata->seq_nr != processed)
++		goto notfound;
+ 
++	list_del_init(&padata->list);
+ 	spin_unlock(&reorder->lock);
+ 	return padata;
++
++notfound:
++	pd->processed = processed;
++	pd->cpu = cpu;
++	spin_unlock(&reorder->lock);
++	return NULL;
+ }
+ 
+-static void padata_reorder(struct parallel_data *pd)
++static void padata_reorder(struct padata_priv *padata)
+ {
++	struct parallel_data *pd = padata->pd;
+ 	struct padata_instance *pinst = pd->ps->pinst;
+-	int cb_cpu;
+-	struct padata_priv *padata;
+-	struct padata_serial_queue *squeue;
+-	struct padata_list *reorder;
++	unsigned int processed;
++	int cpu;
+ 
+-	/*
+-	 * We need to ensure that only one cpu can work on dequeueing of
+-	 * the reorder queue the time. Calculating in which percpu reorder
+-	 * queue the next object will arrive takes some time. A spinlock
+-	 * would be highly contended. Also it is not clear in which order
+-	 * the objects arrive to the reorder queues. So a cpu could wait to
+-	 * get the lock just to notice that there is nothing to do at the
+-	 * moment. Therefore we use a trylock and let the holder of the lock
+-	 * care for all the objects enqueued during the holdtime of the lock.
+-	 */
+-	if (!spin_trylock_bh(&pd->lock))
+-		return;
++	processed = pd->processed;
++	cpu = pd->cpu;
+ 
+-	while (1) {
+-		padata = padata_find_next(pd, true);
++	do {
++		struct padata_serial_queue *squeue;
++		int cb_cpu;
+ 
+-		/*
+-		 * If the next object that needs serialization is parallel
+-		 * processed by another cpu and is still on it's way to the
+-		 * cpu's reorder queue, nothing to do for now.
+-		 */
+-		if (!padata)
+-			break;
++		cpu = cpumask_next_wrap(cpu, pd->cpumask.pcpu);
++		processed++;
+ 
+ 		cb_cpu = padata->cb_cpu;
+ 		squeue = per_cpu_ptr(pd->squeue, cb_cpu);
+ 
+ 		spin_lock(&squeue->serial.lock);
+ 		list_add_tail(&padata->list, &squeue->serial.list);
+-		spin_unlock(&squeue->serial.lock);
+-
+ 		queue_work_on(cb_cpu, pinst->serial_wq, &squeue->work);
+-	}
+ 
+-	spin_unlock_bh(&pd->lock);
+-
+-	/*
+-	 * The next object that needs serialization might have arrived to
+-	 * the reorder queues in the meantime.
+-	 *
+-	 * Ensure reorder queue is read after pd->lock is dropped so we see
+-	 * new objects from another task in padata_do_serial.  Pairs with
+-	 * smp_mb in padata_do_serial.
+-	 */
+-	smp_mb();
+-
+-	reorder = per_cpu_ptr(pd->reorder_list, pd->cpu);
+-	if (!list_empty(&reorder->list) && padata_find_next(pd, false)) {
+ 		/*
+-		 * Other context(eg. the padata_serial_worker) can finish the request.
+-		 * To avoid UAF issue, add pd ref here, and put pd ref after reorder_work finish.
++		 * If the next object that needs serialization is parallel
++		 * processed by another cpu and is still on its way to the
++		 * cpu's reorder queue, end the loop.
+ 		 */
+-		padata_get_pd(pd);
+-		if (!queue_work(pinst->serial_wq, &pd->reorder_work))
+-			padata_put_pd(pd);
+-	}
+-}
+-
+-static void invoke_padata_reorder(struct work_struct *work)
+-{
+-	struct parallel_data *pd;
+-
+-	local_bh_disable();
+-	pd = container_of(work, struct parallel_data, reorder_work);
+-	padata_reorder(pd);
+-	local_bh_enable();
+-	/* Pairs with putting the reorder_work in the serial_wq */
+-	padata_put_pd(pd);
++		padata = padata_find_next(pd, cpu, processed);
++		spin_unlock(&squeue->serial.lock);
++	} while (padata);
+ }
+ 
+ static void padata_serial_worker(struct work_struct *serial_work)
+@@ -423,6 +375,7 @@ void padata_do_serial(struct padata_priv *padata)
+ 	struct padata_list *reorder = per_cpu_ptr(pd->reorder_list, hashed_cpu);
+ 	struct padata_priv *cur;
+ 	struct list_head *pos;
++	bool gotit = true;
+ 
+ 	spin_lock(&reorder->lock);
+ 	/* Sort in ascending order of sequence number. */
+@@ -432,17 +385,14 @@ void padata_do_serial(struct padata_priv *padata)
+ 		if ((signed int)(cur->seq_nr - padata->seq_nr) < 0)
+ 			break;
+ 	}
+-	list_add(&padata->list, pos);
++	if (padata->seq_nr != pd->processed) {
++		gotit = false;
++		list_add(&padata->list, pos);
++	}
+ 	spin_unlock(&reorder->lock);
+ 
+-	/*
+-	 * Ensure the addition to the reorder list is ordered correctly
+-	 * with the trylock of pd->lock in padata_reorder.  Pairs with smp_mb
+-	 * in padata_reorder.
+-	 */
+-	smp_mb();
+-
+-	padata_reorder(pd);
++	if (gotit)
++		padata_reorder(padata);
+ }
+ EXPORT_SYMBOL(padata_do_serial);
+ 
+@@ -632,9 +582,7 @@ static struct parallel_data *padata_alloc_pd(struct padata_shell *ps)
+ 	padata_init_squeues(pd);
+ 	pd->seq_nr = -1;
+ 	refcount_set(&pd->refcnt, 1);
+-	spin_lock_init(&pd->lock);
+ 	pd->cpu = cpumask_first(pd->cpumask.pcpu);
+-	INIT_WORK(&pd->reorder_work, invoke_padata_reorder);
+ 
+ 	return pd;
+ 
+@@ -1144,12 +1092,6 @@ void padata_free_shell(struct padata_shell *ps)
+ 	if (!ps)
+ 		return;
+ 
+-	/*
+-	 * Wait for all _do_serial calls to finish to avoid touching
+-	 * freed pd's and ps's.
+-	 */
+-	synchronize_rcu();
+-
+ 	mutex_lock(&ps->pinst->lock);
+ 	list_del(&ps->list);
+ 	pd = rcu_dereference_protected(ps->pd, 1);
+diff --git a/kernel/rcu/refscale.c b/kernel/rcu/refscale.c
+index f11a7c2af778cd..ab7fcdc94cc08f 100644
+--- a/kernel/rcu/refscale.c
++++ b/kernel/rcu/refscale.c
+@@ -85,7 +85,7 @@ torture_param(int, holdoff, IS_BUILTIN(CONFIG_RCU_REF_SCALE_TEST) ? 10 : 0,
+ // Number of typesafe_lookup structures, that is, the degree of concurrency.
+ torture_param(long, lookup_instances, 0, "Number of typesafe_lookup structures.");
+ // Number of loops per experiment, all readers execute operations concurrently.
+-torture_param(long, loops, 10000, "Number of loops per experiment.");
++torture_param(int, loops, 10000, "Number of loops per experiment.");
+ // Number of readers, with -1 defaulting to about 75% of the CPUs.
+ torture_param(int, nreaders, -1, "Number of readers, -1 for 75% of CPUs.");
+ // Number of runs.
+@@ -1140,7 +1140,7 @@ static void
+ ref_scale_print_module_parms(const struct ref_scale_ops *cur_ops, const char *tag)
+ {
+ 	pr_alert("%s" SCALE_FLAG
+-		 "--- %s:  verbose=%d verbose_batched=%d shutdown=%d holdoff=%d lookup_instances=%ld loops=%ld nreaders=%d nruns=%d readdelay=%d\n", scale_type, tag,
++		 "--- %s:  verbose=%d verbose_batched=%d shutdown=%d holdoff=%d lookup_instances=%ld loops=%d nreaders=%d nruns=%d readdelay=%d\n", scale_type, tag,
+ 		 verbose, verbose_batched, shutdown, holdoff, lookup_instances, loops, nreaders, nruns, readdelay);
+ }
+ 
+@@ -1238,12 +1238,16 @@ ref_scale_init(void)
+ 	// Reader tasks (default to ~75% of online CPUs).
+ 	if (nreaders < 0)
+ 		nreaders = (num_online_cpus() >> 1) + (num_online_cpus() >> 2);
+-	if (WARN_ONCE(loops <= 0, "%s: loops = %ld, adjusted to 1\n", __func__, loops))
++	if (WARN_ONCE(loops <= 0, "%s: loops = %d, adjusted to 1\n", __func__, loops))
+ 		loops = 1;
+ 	if (WARN_ONCE(nreaders <= 0, "%s: nreaders = %d, adjusted to 1\n", __func__, nreaders))
+ 		nreaders = 1;
+ 	if (WARN_ONCE(nruns <= 0, "%s: nruns = %d, adjusted to 1\n", __func__, nruns))
+ 		nruns = 1;
++	if (WARN_ONCE(loops > INT_MAX / nreaders,
++		      "%s: nreaders * loops will overflow, adjusted loops to %d",
++		      __func__, INT_MAX / nreaders))
++		loops = INT_MAX / nreaders;
+ 	reader_tasks = kcalloc(nreaders, sizeof(reader_tasks[0]),
+ 			       GFP_KERNEL);
+ 	if (!reader_tasks) {
+diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
+index fa269d34167a90..6b3118a4dde379 100644
+--- a/kernel/rcu/tree_nocb.h
++++ b/kernel/rcu/tree_nocb.h
+@@ -276,7 +276,7 @@ static void wake_nocb_gp_defer(struct rcu_data *rdp, int waketype,
+ 	 * callback storms, no need to wake up too early.
+ 	 */
+ 	if (waketype == RCU_NOCB_WAKE_LAZY &&
+-	    rdp->nocb_defer_wakeup == RCU_NOCB_WAKE_NOT) {
++	    rdp_gp->nocb_defer_wakeup == RCU_NOCB_WAKE_NOT) {
+ 		mod_timer(&rdp_gp->nocb_timer, jiffies + rcu_get_jiffies_lazy_flush());
+ 		WRITE_ONCE(rdp_gp->nocb_defer_wakeup, waketype);
+ 	} else if (waketype == RCU_NOCB_WAKE_BYPASS) {
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 89019a14082642..65f3b2cc891da6 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -2976,7 +2976,14 @@ void dl_clear_root_domain(struct root_domain *rd)
+ 	int i;
+ 
+ 	guard(raw_spinlock_irqsave)(&rd->dl_bw.lock);
++
++	/*
++	 * Reset total_bw to zero and extra_bw to max_bw so that the next
++	 * loop will add the dl-server contributions back properly.
++	 */
+ 	rd->dl_bw.total_bw = 0;
++	for_each_cpu(i, rd->span)
++		cpu_rq(i)->dl.extra_bw = cpu_rq(i)->dl.max_bw;
+ 
+ 	/*
+ 	 * dl_servers are not tasks. Since dl_add_task_root_domain ignores
+diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
+index 1396674fa7227a..e0ad56b26171d1 100644
+--- a/kernel/sched/psi.c
++++ b/kernel/sched/psi.c
+@@ -172,17 +172,35 @@ struct psi_group psi_system = {
+ 	.pcpu = &system_group_pcpu,
+ };
+ 
++static DEFINE_PER_CPU(seqcount_t, psi_seq) = SEQCNT_ZERO(psi_seq);
++
++static inline void psi_write_begin(int cpu)
++{
++	write_seqcount_begin(per_cpu_ptr(&psi_seq, cpu));
++}
++
++static inline void psi_write_end(int cpu)
++{
++	write_seqcount_end(per_cpu_ptr(&psi_seq, cpu));
++}
++
++static inline u32 psi_read_begin(int cpu)
++{
++	return read_seqcount_begin(per_cpu_ptr(&psi_seq, cpu));
++}
++
++static inline bool psi_read_retry(int cpu, u32 seq)
++{
++	return read_seqcount_retry(per_cpu_ptr(&psi_seq, cpu), seq);
++}
++
+ static void psi_avgs_work(struct work_struct *work);
+ 
+ static void poll_timer_fn(struct timer_list *t);
+ 
+ static void group_init(struct psi_group *group)
+ {
+-	int cpu;
+-
+ 	group->enabled = true;
+-	for_each_possible_cpu(cpu)
+-		seqcount_init(&per_cpu_ptr(group->pcpu, cpu)->seq);
+ 	group->avg_last_update = sched_clock();
+ 	group->avg_next_update = group->avg_last_update + psi_period;
+ 	mutex_init(&group->avgs_lock);
+@@ -262,14 +280,14 @@ static void get_recent_times(struct psi_group *group, int cpu,
+ 
+ 	/* Snapshot a coherent view of the CPU state */
+ 	do {
+-		seq = read_seqcount_begin(&groupc->seq);
++		seq = psi_read_begin(cpu);
+ 		now = cpu_clock(cpu);
+ 		memcpy(times, groupc->times, sizeof(groupc->times));
+ 		state_mask = groupc->state_mask;
+ 		state_start = groupc->state_start;
+ 		if (cpu == current_cpu)
+ 			memcpy(tasks, groupc->tasks, sizeof(groupc->tasks));
+-	} while (read_seqcount_retry(&groupc->seq, seq));
++	} while (psi_read_retry(cpu, seq));
+ 
+ 	/* Calculate state time deltas against the previous snapshot */
+ 	for (s = 0; s < NR_PSI_STATES; s++) {
+@@ -768,30 +786,20 @@ static void record_times(struct psi_group_cpu *groupc, u64 now)
+ 		groupc->times[PSI_NONIDLE] += delta;
+ }
+ 
++#define for_each_group(iter, group) \
++	for (typeof(group) iter = group; iter; iter = iter->parent)
++
+ static void psi_group_change(struct psi_group *group, int cpu,
+ 			     unsigned int clear, unsigned int set,
+-			     bool wake_clock)
++			     u64 now, bool wake_clock)
+ {
+ 	struct psi_group_cpu *groupc;
+ 	unsigned int t, m;
+ 	u32 state_mask;
+-	u64 now;
+ 
+ 	lockdep_assert_rq_held(cpu_rq(cpu));
+ 	groupc = per_cpu_ptr(group->pcpu, cpu);
+ 
+-	/*
+-	 * First we update the task counts according to the state
+-	 * change requested through the @clear and @set bits.
+-	 *
+-	 * Then if the cgroup PSI stats accounting enabled, we
+-	 * assess the aggregate resource states this CPU's tasks
+-	 * have been in since the last change, and account any
+-	 * SOME and FULL time these may have resulted in.
+-	 */
+-	write_seqcount_begin(&groupc->seq);
+-	now = cpu_clock(cpu);
+-
+ 	/*
+ 	 * Start with TSK_ONCPU, which doesn't have a corresponding
+ 	 * task count - it's just a boolean flag directly encoded in
+@@ -843,7 +851,6 @@ static void psi_group_change(struct psi_group *group, int cpu,
+ 
+ 		groupc->state_mask = state_mask;
+ 
+-		write_seqcount_end(&groupc->seq);
+ 		return;
+ 	}
+ 
+@@ -864,8 +871,6 @@ static void psi_group_change(struct psi_group *group, int cpu,
+ 
+ 	groupc->state_mask = state_mask;
+ 
+-	write_seqcount_end(&groupc->seq);
+-
+ 	if (state_mask & group->rtpoll_states)
+ 		psi_schedule_rtpoll_work(group, 1, false);
+ 
+@@ -900,24 +905,29 @@ static void psi_flags_change(struct task_struct *task, int clear, int set)
+ void psi_task_change(struct task_struct *task, int clear, int set)
+ {
+ 	int cpu = task_cpu(task);
+-	struct psi_group *group;
++	u64 now;
+ 
+ 	if (!task->pid)
+ 		return;
+ 
+ 	psi_flags_change(task, clear, set);
+ 
+-	group = task_psi_group(task);
+-	do {
+-		psi_group_change(group, cpu, clear, set, true);
+-	} while ((group = group->parent));
++	psi_write_begin(cpu);
++	now = cpu_clock(cpu);
++	for_each_group(group, task_psi_group(task))
++		psi_group_change(group, cpu, clear, set, now, true);
++	psi_write_end(cpu);
+ }
+ 
+ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ 		     bool sleep)
+ {
+-	struct psi_group *group, *common = NULL;
++	struct psi_group *common = NULL;
+ 	int cpu = task_cpu(prev);
++	u64 now;
++
++	psi_write_begin(cpu);
++	now = cpu_clock(cpu);
+ 
+ 	if (next->pid) {
+ 		psi_flags_change(next, 0, TSK_ONCPU);
+@@ -926,16 +936,15 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ 		 * ancestors with @prev, those will already have @prev's
+ 		 * TSK_ONCPU bit set, and we can stop the iteration there.
+ 		 */
+-		group = task_psi_group(next);
+-		do {
+-			if (per_cpu_ptr(group->pcpu, cpu)->state_mask &
+-			    PSI_ONCPU) {
++		for_each_group(group, task_psi_group(next)) {
++			struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
++
++			if (groupc->state_mask & PSI_ONCPU) {
+ 				common = group;
+ 				break;
+ 			}
+-
+-			psi_group_change(group, cpu, 0, TSK_ONCPU, true);
+-		} while ((group = group->parent));
++			psi_group_change(group, cpu, 0, TSK_ONCPU, now, true);
++		}
+ 	}
+ 
+ 	if (prev->pid) {
+@@ -968,12 +977,11 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ 
+ 		psi_flags_change(prev, clear, set);
+ 
+-		group = task_psi_group(prev);
+-		do {
++		for_each_group(group, task_psi_group(prev)) {
+ 			if (group == common)
+ 				break;
+-			psi_group_change(group, cpu, clear, set, wake_clock);
+-		} while ((group = group->parent));
++			psi_group_change(group, cpu, clear, set, now, wake_clock);
++		}
+ 
+ 		/*
+ 		 * TSK_ONCPU is handled up to the common ancestor. If there are
+@@ -983,20 +991,21 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ 		 */
+ 		if ((prev->psi_flags ^ next->psi_flags) & ~TSK_ONCPU) {
+ 			clear &= ~TSK_ONCPU;
+-			for (; group; group = group->parent)
+-				psi_group_change(group, cpu, clear, set, wake_clock);
++			for_each_group(group, common)
++				psi_group_change(group, cpu, clear, set, now, wake_clock);
+ 		}
+ 	}
++	psi_write_end(cpu);
+ }
+ 
+ #ifdef CONFIG_IRQ_TIME_ACCOUNTING
+ void psi_account_irqtime(struct rq *rq, struct task_struct *curr, struct task_struct *prev)
+ {
+ 	int cpu = task_cpu(curr);
+-	struct psi_group *group;
+ 	struct psi_group_cpu *groupc;
+ 	s64 delta;
+ 	u64 irq;
++	u64 now;
+ 
+ 	if (static_branch_likely(&psi_disabled) || !irqtime_enabled())
+ 		return;
+@@ -1005,8 +1014,7 @@ void psi_account_irqtime(struct rq *rq, struct task_struct *curr, struct task_st
+ 		return;
+ 
+ 	lockdep_assert_rq_held(rq);
+-	group = task_psi_group(curr);
+-	if (prev && task_psi_group(prev) == group)
++	if (prev && task_psi_group(prev) == task_psi_group(curr))
+ 		return;
+ 
+ 	irq = irq_time_read(cpu);
+@@ -1015,25 +1023,22 @@ void psi_account_irqtime(struct rq *rq, struct task_struct *curr, struct task_st
+ 		return;
+ 	rq->psi_irq_time = irq;
+ 
+-	do {
+-		u64 now;
++	psi_write_begin(cpu);
++	now = cpu_clock(cpu);
+ 
++	for_each_group(group, task_psi_group(curr)) {
+ 		if (!group->enabled)
+ 			continue;
+ 
+ 		groupc = per_cpu_ptr(group->pcpu, cpu);
+ 
+-		write_seqcount_begin(&groupc->seq);
+-		now = cpu_clock(cpu);
+-
+ 		record_times(groupc, now);
+ 		groupc->times[PSI_IRQ_FULL] += delta;
+ 
+-		write_seqcount_end(&groupc->seq);
+-
+ 		if (group->rtpoll_states & (1 << PSI_IRQ_FULL))
+ 			psi_schedule_rtpoll_work(group, 1, false);
+-	} while ((group = group->parent));
++	}
++	psi_write_end(cpu);
+ }
+ #endif
+ 
+@@ -1221,12 +1226,14 @@ void psi_cgroup_restart(struct psi_group *group)
+ 		return;
+ 
+ 	for_each_possible_cpu(cpu) {
+-		struct rq *rq = cpu_rq(cpu);
+-		struct rq_flags rf;
++		u64 now;
+ 
+-		rq_lock_irq(rq, &rf);
+-		psi_group_change(group, cpu, 0, 0, true);
+-		rq_unlock_irq(rq, &rf);
++		guard(rq_lock_irq)(cpu_rq(cpu));
++
++		psi_write_begin(cpu);
++		now = cpu_clock(cpu);
++		psi_group_change(group, cpu, 0, 0, now, true);
++		psi_write_end(cpu);
+ 	}
+ }
+ #endif /* CONFIG_CGROUPS */
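
The PSI rework above replaces one seqcount per group per CPU with a
single per-CPU seqcount whose write side brackets an entire task state
change across all groups, so a reader retries at most once per update
rather than once per group. The read side keeps the classic seqcount
loop, as in get_recent_times(); sc and shared_state are placeholders:

	unsigned int seq;

	do {
		seq = read_seqcount_begin(&sc);
		copy = shared_state;	/* snapshot the protected data */
	} while (read_seqcount_retry(&sc, seq));	/* redo if a write overlapped */
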
+diff --git a/kernel/trace/power-traces.c b/kernel/trace/power-traces.c
+index 21bb161c231698..f2fe33573e544a 100644
+--- a/kernel/trace/power-traces.c
++++ b/kernel/trace/power-traces.c
+@@ -17,5 +17,4 @@
+ EXPORT_TRACEPOINT_SYMBOL_GPL(suspend_resume);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(cpu_idle);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(cpu_frequency);
+-EXPORT_TRACEPOINT_SYMBOL_GPL(powernv_throttle);
+ 
+diff --git a/kernel/trace/preemptirq_delay_test.c b/kernel/trace/preemptirq_delay_test.c
+index 314ffc143039c5..acb0c971a4082a 100644
+--- a/kernel/trace/preemptirq_delay_test.c
++++ b/kernel/trace/preemptirq_delay_test.c
+@@ -117,12 +117,15 @@ static int preemptirq_delay_run(void *data)
+ {
+ 	int i;
+ 	int s = MIN(burst_size, NR_TEST_FUNCS);
+-	struct cpumask cpu_mask;
++	cpumask_var_t cpu_mask;
++
++	if (!alloc_cpumask_var(&cpu_mask, GFP_KERNEL))
++		return -ENOMEM;
+ 
+ 	if (cpu_affinity > -1) {
+-		cpumask_clear(&cpu_mask);
+-		cpumask_set_cpu(cpu_affinity, &cpu_mask);
+-		if (set_cpus_allowed_ptr(current, &cpu_mask))
++		cpumask_clear(cpu_mask);
++		cpumask_set_cpu(cpu_affinity, cpu_mask);
++		if (set_cpus_allowed_ptr(current, cpu_mask))
+ 			pr_err("cpu_affinity:%d, failed\n", cpu_affinity);
+ 	}
+ 
+@@ -139,6 +142,8 @@ static int preemptirq_delay_run(void *data)
+ 
+ 	__set_current_state(TASK_RUNNING);
+ 
++	free_cpumask_var(cpu_mask);
++
+ 	return 0;
+ }
+ 
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 67707ff28fc519..f84210ee691e63 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -5835,24 +5835,20 @@ ring_buffer_consume(struct trace_buffer *buffer, int cpu, u64 *ts,
+ EXPORT_SYMBOL_GPL(ring_buffer_consume);
+ 
+ /**
+- * ring_buffer_read_prepare - Prepare for a non consuming read of the buffer
++ * ring_buffer_read_start - start a non consuming read of the buffer
+  * @buffer: The ring buffer to read from
+  * @cpu: The cpu buffer to iterate over
+  * @flags: gfp flags to use for memory allocation
+  *
+- * This performs the initial preparations necessary to iterate
+- * through the buffer.  Memory is allocated, buffer resizing
+- * is disabled, and the iterator pointer is returned to the caller.
+- *
+- * After a sequence of ring_buffer_read_prepare calls, the user is
+- * expected to make at least one call to ring_buffer_read_prepare_sync.
+- * Afterwards, ring_buffer_read_start is invoked to get things going
+- * for real.
++ * This creates an iterator to allow non-consuming iteration through
++ * the buffer. If the buffer is disabled for writing, it will produce
++ * the same information each time, but if the buffer is still being
++ * written to, iteration will stop at the first encounter with a write.
+  *
+- * This overall must be paired with ring_buffer_read_finish.
++ * Must be paired with ring_buffer_read_finish.
+  */
+ struct ring_buffer_iter *
+-ring_buffer_read_prepare(struct trace_buffer *buffer, int cpu, gfp_t flags)
++ring_buffer_read_start(struct trace_buffer *buffer, int cpu, gfp_t flags)
+ {
+ 	struct ring_buffer_per_cpu *cpu_buffer;
+ 	struct ring_buffer_iter *iter;
+@@ -5878,51 +5874,12 @@ ring_buffer_read_prepare(struct trace_buffer *buffer, int cpu, gfp_t flags)
+ 
+ 	atomic_inc(&cpu_buffer->resize_disabled);
+ 
+-	return iter;
+-}
+-EXPORT_SYMBOL_GPL(ring_buffer_read_prepare);
+-
+-/**
+- * ring_buffer_read_prepare_sync - Synchronize a set of prepare calls
+- *
+- * All previously invoked ring_buffer_read_prepare calls to prepare
+- * iterators will be synchronized.  Afterwards, read_buffer_read_start
+- * calls on those iterators are allowed.
+- */
+-void
+-ring_buffer_read_prepare_sync(void)
+-{
+-	synchronize_rcu();
+-}
+-EXPORT_SYMBOL_GPL(ring_buffer_read_prepare_sync);
+-
+-/**
+- * ring_buffer_read_start - start a non consuming read of the buffer
+- * @iter: The iterator returned by ring_buffer_read_prepare
+- *
+- * This finalizes the startup of an iteration through the buffer.
+- * The iterator comes from a call to ring_buffer_read_prepare and
+- * an intervening ring_buffer_read_prepare_sync must have been
+- * performed.
+- *
+- * Must be paired with ring_buffer_read_finish.
+- */
+-void
+-ring_buffer_read_start(struct ring_buffer_iter *iter)
+-{
+-	struct ring_buffer_per_cpu *cpu_buffer;
+-	unsigned long flags;
+-
+-	if (!iter)
+-		return;
+-
+-	cpu_buffer = iter->cpu_buffer;
+-
+-	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
++	guard(raw_spinlock_irqsave)(&cpu_buffer->reader_lock);
+ 	arch_spin_lock(&cpu_buffer->lock);
+ 	rb_iter_reset(iter);
+ 	arch_spin_unlock(&cpu_buffer->lock);
+-	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
++
++	return iter;
+ }
+ EXPORT_SYMBOL_GPL(ring_buffer_read_start);
+ 
+diff --git a/kernel/trace/rv/monitors/scpd/Kconfig b/kernel/trace/rv/monitors/scpd/Kconfig
+index b9114fbf680f99..682d0416188b39 100644
+--- a/kernel/trace/rv/monitors/scpd/Kconfig
++++ b/kernel/trace/rv/monitors/scpd/Kconfig
+@@ -2,7 +2,7 @@
+ #
+ config RV_MON_SCPD
+ 	depends on RV
+-	depends on PREEMPT_TRACER
++	depends on TRACE_PREEMPT_TOGGLE
+ 	depends on RV_MON_SCHED
+ 	default y
+ 	select DA_MON_EVENTS_IMPLICIT
+diff --git a/kernel/trace/rv/monitors/sncid/Kconfig b/kernel/trace/rv/monitors/sncid/Kconfig
+index 76bcfef4fd1032..3a5639feaaaf69 100644
+--- a/kernel/trace/rv/monitors/sncid/Kconfig
++++ b/kernel/trace/rv/monitors/sncid/Kconfig
+@@ -2,7 +2,7 @@
+ #
+ config RV_MON_SNCID
+ 	depends on RV
+-	depends on IRQSOFF_TRACER
++	depends on TRACE_IRQFLAGS
+ 	depends on RV_MON_SCHED
+ 	default y
+ 	select DA_MON_EVENTS_IMPLICIT
+diff --git a/kernel/trace/rv/monitors/snep/Kconfig b/kernel/trace/rv/monitors/snep/Kconfig
+index 77527f97123250..7dd54f434ff758 100644
+--- a/kernel/trace/rv/monitors/snep/Kconfig
++++ b/kernel/trace/rv/monitors/snep/Kconfig
+@@ -2,7 +2,7 @@
+ #
+ config RV_MON_SNEP
+ 	depends on RV
+-	depends on PREEMPT_TRACER
++	depends on TRACE_PREEMPT_TOGGLE
+ 	depends on RV_MON_SCHED
+ 	default y
+ 	select DA_MON_EVENTS_IMPLICIT
+diff --git a/kernel/trace/rv/monitors/wip/Kconfig b/kernel/trace/rv/monitors/wip/Kconfig
+index e464b9294865b5..87a26195792b44 100644
+--- a/kernel/trace/rv/monitors/wip/Kconfig
++++ b/kernel/trace/rv/monitors/wip/Kconfig
+@@ -2,7 +2,7 @@
+ #
+ config RV_MON_WIP
+ 	depends on RV
+-	depends on PREEMPT_TRACER
++	depends on TRACE_PREEMPT_TOGGLE
+ 	select DA_MON_EVENTS_IMPLICIT
+ 	bool "wip monitor"
+ 	help
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 14e1e1ed55058d..db3fd111b10a92 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -4640,21 +4640,15 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
+ 	if (iter->cpu_file == RING_BUFFER_ALL_CPUS) {
+ 		for_each_tracing_cpu(cpu) {
+ 			iter->buffer_iter[cpu] =
+-				ring_buffer_read_prepare(iter->array_buffer->buffer,
+-							 cpu, GFP_KERNEL);
+-		}
+-		ring_buffer_read_prepare_sync();
+-		for_each_tracing_cpu(cpu) {
+-			ring_buffer_read_start(iter->buffer_iter[cpu]);
++				ring_buffer_read_start(iter->array_buffer->buffer,
++						       cpu, GFP_KERNEL);
+ 			tracing_iter_reset(iter, cpu);
+ 		}
+ 	} else {
+ 		cpu = iter->cpu_file;
+ 		iter->buffer_iter[cpu] =
+-			ring_buffer_read_prepare(iter->array_buffer->buffer,
+-						 cpu, GFP_KERNEL);
+-		ring_buffer_read_prepare_sync();
+-		ring_buffer_read_start(iter->buffer_iter[cpu]);
++			ring_buffer_read_start(iter->array_buffer->buffer,
++					       cpu, GFP_KERNEL);
+ 		tracing_iter_reset(iter, cpu);
+ 	}
+ 
+diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
+index cca676f651b108..8fc5323a2ed380 100644
+--- a/kernel/trace/trace_events_filter.c
++++ b/kernel/trace/trace_events_filter.c
+@@ -1342,13 +1342,14 @@ struct filter_list {
+ 
+ struct filter_head {
+ 	struct list_head	list;
+-	struct rcu_head		rcu;
++	union {
++		struct rcu_head		rcu;
++		struct rcu_work		rwork;
++	};
+ };
+ 
+-
+-static void free_filter_list(struct rcu_head *rhp)
++static void free_filter_list(struct filter_head *filter_list)
+ {
+-	struct filter_head *filter_list = container_of(rhp, struct filter_head, rcu);
+ 	struct filter_list *filter_item, *tmp;
+ 
+ 	list_for_each_entry_safe(filter_item, tmp, &filter_list->list, list) {
+@@ -1359,9 +1360,20 @@ static void free_filter_list(struct rcu_head *rhp)
+ 	kfree(filter_list);
+ }
+ 
++static void free_filter_list_work(struct work_struct *work)
++{
++	struct filter_head *filter_list;
++
++	filter_list = container_of(to_rcu_work(work), struct filter_head, rwork);
++	free_filter_list(filter_list);
++}
++
+ static void free_filter_list_tasks(struct rcu_head *rhp)
+ {
+-	call_rcu(rhp, free_filter_list);
++	struct filter_head *filter_list = container_of(rhp, struct filter_head, rcu);
++
++	INIT_RCU_WORK(&filter_list->rwork, free_filter_list_work);
++	queue_rcu_work(system_wq, &filter_list->rwork);
+ }
+ 
+ /*
+@@ -1458,7 +1470,7 @@ static void filter_free_subsystem_filters(struct trace_subsystem_dir *dir,
+ 	tracepoint_synchronize_unregister();
+ 
+ 	if (head)
+-		free_filter_list(&head->rcu);
++		free_filter_list(head);
+ 
+ 	list_for_each_entry(file, &tr->events, list) {
+ 		if (file->system != dir || !file->filter)
+@@ -2303,7 +2315,7 @@ static int process_system_preds(struct trace_subsystem_dir *dir,
+ 	return 0;
+  fail:
+ 	/* No call succeeded */
+-	free_filter_list(&filter_list->rcu);
++	free_filter_list(filter_list);
+ 	parse_error(pe, FILT_ERR_BAD_SUBSYS_FILTER, 0);
+ 	return -EINVAL;
+  fail_mem:
+@@ -2313,7 +2325,7 @@ static int process_system_preds(struct trace_subsystem_dir *dir,
+ 	if (!fail)
+ 		delay_free_filter(filter_list);
+ 	else
+-		free_filter_list(&filter_list->rcu);
++		free_filter_list(filter_list);
+ 
+ 	return -ENOMEM;
+ }
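
Switching from call_rcu() to queue_rcu_work() above still defers the
free until after a full RCU grace period, but the handler then runs
from a workqueue in process context rather than as a softirq callback,
so it is allowed to sleep. The idiom:

	/* After a grace period, free_obj_work() runs from a workqueue. */
	INIT_RCU_WORK(&obj->rwork, free_obj_work);
	queue_rcu_work(system_wq, &obj->rwork);
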
+diff --git a/kernel/trace/trace_kdb.c b/kernel/trace/trace_kdb.c
+index 1e72d20b3c2fc6..1981d00e1f5d0d 100644
+--- a/kernel/trace/trace_kdb.c
++++ b/kernel/trace/trace_kdb.c
+@@ -43,17 +43,15 @@ static void ftrace_dump_buf(int skip_entries, long cpu_file)
+ 	if (cpu_file == RING_BUFFER_ALL_CPUS) {
+ 		for_each_tracing_cpu(cpu) {
+ 			iter.buffer_iter[cpu] =
+-			ring_buffer_read_prepare(iter.array_buffer->buffer,
+-						 cpu, GFP_ATOMIC);
+-			ring_buffer_read_start(iter.buffer_iter[cpu]);
++			ring_buffer_read_start(iter.array_buffer->buffer,
++					       cpu, GFP_ATOMIC);
+ 			tracing_iter_reset(&iter, cpu);
+ 		}
+ 	} else {
+ 		iter.cpu_file = cpu_file;
+ 		iter.buffer_iter[cpu_file] =
+-			ring_buffer_read_prepare(iter.array_buffer->buffer,
++			ring_buffer_read_start(iter.array_buffer->buffer,
+ 						 cpu_file, GFP_ATOMIC);
+-		ring_buffer_read_start(iter.buffer_iter[cpu_file]);
+ 		tracing_iter_reset(&iter, cpu_file);
+ 	}
+ 
+diff --git a/kernel/ucount.c b/kernel/ucount.c
+index 8686e329b8f2ce..f629db485a0797 100644
+--- a/kernel/ucount.c
++++ b/kernel/ucount.c
+@@ -199,7 +199,7 @@ void put_ucounts(struct ucounts *ucounts)
+ 	}
+ }
+ 
+-static inline bool atomic_long_inc_below(atomic_long_t *v, int u)
++static inline bool atomic_long_inc_below(atomic_long_t *v, long u)
+ {
+ 	long c, old;
+ 	c = atomic_long_read(v);
+diff --git a/lib/tests/fortify_kunit.c b/lib/tests/fortify_kunit.c
+index 29ffc62a71e3f9..fc9c76f026d636 100644
+--- a/lib/tests/fortify_kunit.c
++++ b/lib/tests/fortify_kunit.c
+@@ -1003,8 +1003,8 @@ static void fortify_test_memcmp(struct kunit *test)
+ {
+ 	char one[] = "My mind is going ...";
+ 	char two[] = "My mind is going ... I can feel it.";
+-	size_t one_len = sizeof(one) - 1;
+-	size_t two_len = sizeof(two) - 1;
++	volatile size_t one_len = sizeof(one) - 1;
++	volatile size_t two_len = sizeof(two) - 1;
+ 
+ 	OPTIMIZER_HIDE_VAR(one_len);
+ 	OPTIMIZER_HIDE_VAR(two_len);
+diff --git a/mm/hmm.c b/mm/hmm.c
+index 082f7b7c0b9ebc..0e4c01c051608d 100644
+--- a/mm/hmm.c
++++ b/mm/hmm.c
+@@ -173,6 +173,7 @@ static inline unsigned long hmm_pfn_flags_order(unsigned long order)
+ 	return order << HMM_PFN_ORDER_SHIFT;
+ }
+ 
++#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ static inline unsigned long pmd_to_hmm_pfn_flags(struct hmm_range *range,
+ 						 pmd_t pmd)
+ {
+@@ -183,7 +184,6 @@ static inline unsigned long pmd_to_hmm_pfn_flags(struct hmm_range *range,
+ 	       hmm_pfn_flags_order(PMD_SHIFT - PAGE_SHIFT);
+ }
+ 
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
+ 			      unsigned long end, unsigned long hmm_pfns[],
+ 			      pmd_t pmd)
+diff --git a/mm/memory.c b/mm/memory.c
+index 2c7d9bb28e88ec..1df793ce2e6e8f 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -6554,8 +6554,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
+ 	 */
+ 
+ 	/* Check if the vma we locked is the right one. */
+-	if (unlikely(vma->vm_mm != mm ||
+-		     address < vma->vm_start || address >= vma->vm_end))
++	if (unlikely(address < vma->vm_start || address >= vma->vm_end))
+ 		goto inval_end_read;
+ 
+ 	rcu_read_unlock();
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 12882a39759b44..6b2ecbcf069ef1 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -5930,8 +5930,8 @@ struct folio *shmem_read_folio_gfp(struct address_space *mapping,
+ 	struct folio *folio;
+ 	int error;
+ 
+-	error = shmem_get_folio_gfp(inode, index, 0, &folio, SGP_CACHE,
+-				    gfp, NULL, NULL);
++	error = shmem_get_folio_gfp(inode, index, i_size_read(inode),
++				    &folio, SGP_CACHE, gfp, NULL, NULL);
+ 	if (error)
+ 		return ERR_PTR(error);
+ 
+diff --git a/mm/slub.c b/mm/slub.c
+index be8b09e09d3043..5c73b956615fe6 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -4929,12 +4929,12 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
+  * When slub_debug_orig_size() is off, krealloc() only knows about the bucket
+  * size of an allocation (but not the exact size it was allocated with) and
+  * hence implements the following semantics for shrinking and growing buffers
+- * with __GFP_ZERO.
++ * with __GFP_ZERO::
+  *
+- *         new             bucket
+- * 0       size             size
+- * |--------|----------------|
+- * |  keep  |      zero      |
++ *           new             bucket
++ *   0       size             size
++ *   |--------|----------------|
++ *   |  keep  |      zero      |
+  *
+  * Otherwise, the original allocation size 'orig_size' could be used to
+  * precisely clear the requested size, and the new size will also be stored
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index 412ccd6543b34d..626fcb9e79e312 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -1115,6 +1115,7 @@ static void swap_range_alloc(struct swap_info_struct *si,
+ 		if (vm_swap_full())
+ 			schedule_work(&si->reclaim_work);
+ 	}
++	atomic_long_sub(nr_entries, &nr_swap_pages);
+ }
+ 
+ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
+@@ -1313,7 +1314,6 @@ int folio_alloc_swap(struct folio *folio, gfp_t gfp)
+ 	if (add_to_swap_cache(folio, entry, gfp | __GFP_NOMEMALLOC, NULL))
+ 		goto out_free;
+ 
+-	atomic_long_sub(size, &nr_swap_pages);
+ 	return 0;
+ 
+ out_free:
+@@ -3138,43 +3138,30 @@ static unsigned long read_swap_header(struct swap_info_struct *si,
+ 	return maxpages;
+ }
+ 
+-static int setup_swap_map_and_extents(struct swap_info_struct *si,
+-					union swap_header *swap_header,
+-					unsigned char *swap_map,
+-					unsigned long maxpages,
+-					sector_t *span)
++static int setup_swap_map(struct swap_info_struct *si,
++			  union swap_header *swap_header,
++			  unsigned char *swap_map,
++			  unsigned long maxpages)
+ {
+-	unsigned int nr_good_pages;
+ 	unsigned long i;
+-	int nr_extents;
+-
+-	nr_good_pages = maxpages - 1;	/* omit header page */
+ 
++	swap_map[0] = SWAP_MAP_BAD; /* omit header page */
+ 	for (i = 0; i < swap_header->info.nr_badpages; i++) {
+ 		unsigned int page_nr = swap_header->info.badpages[i];
+ 		if (page_nr == 0 || page_nr > swap_header->info.last_page)
+ 			return -EINVAL;
+ 		if (page_nr < maxpages) {
+ 			swap_map[page_nr] = SWAP_MAP_BAD;
+-			nr_good_pages--;
++			si->pages--;
+ 		}
+ 	}
+ 
+-	if (nr_good_pages) {
+-		swap_map[0] = SWAP_MAP_BAD;
+-		si->max = maxpages;
+-		si->pages = nr_good_pages;
+-		nr_extents = setup_swap_extents(si, span);
+-		if (nr_extents < 0)
+-			return nr_extents;
+-		nr_good_pages = si->pages;
+-	}
+-	if (!nr_good_pages) {
++	if (!si->pages) {
+ 		pr_warn("Empty swap-file\n");
+ 		return -EINVAL;
+ 	}
+ 
+-	return nr_extents;
++	return 0;
+ }
+ 
+ #define SWAP_CLUSTER_INFO_COLS						\
+@@ -3214,13 +3201,17 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
+ 	 * Mark unusable pages as unavailable. The clusters aren't
+ 	 * marked free yet, so no list operations are involved yet.
+ 	 *
+-	 * See setup_swap_map_and_extents(): header page, bad pages,
++	 * See setup_swap_map(): header page, bad pages,
+ 	 * and the EOF part of the last cluster.
+ 	 */
+ 	inc_cluster_info_page(si, cluster_info, 0);
+-	for (i = 0; i < swap_header->info.nr_badpages; i++)
+-		inc_cluster_info_page(si, cluster_info,
+-				      swap_header->info.badpages[i]);
++	for (i = 0; i < swap_header->info.nr_badpages; i++) {
++		unsigned int page_nr = swap_header->info.badpages[i];
++
++		if (page_nr >= maxpages)
++			continue;
++		inc_cluster_info_page(si, cluster_info, page_nr);
++	}
+ 	for (i = maxpages; i < round_up(maxpages, SWAPFILE_CLUSTER); i++)
+ 		inc_cluster_info_page(si, cluster_info, i);
+ 
+@@ -3360,6 +3351,21 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
+ 		goto bad_swap_unlock_inode;
+ 	}
+ 
++	si->max = maxpages;
++	si->pages = maxpages - 1;
++	nr_extents = setup_swap_extents(si, &span);
++	if (nr_extents < 0) {
++		error = nr_extents;
++		goto bad_swap_unlock_inode;
++	}
++	if (si->pages != si->max - 1) {
++		pr_err("swap:%u != (max:%u - 1)\n", si->pages, si->max);
++		error = -EINVAL;
++		goto bad_swap_unlock_inode;
++	}
++
++	maxpages = si->max;
++
+ 	/* OK, set up the swap map and apply the bad block list */
+ 	swap_map = vzalloc(maxpages);
+ 	if (!swap_map) {
+@@ -3371,12 +3377,9 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
+ 	if (error)
+ 		goto bad_swap_unlock_inode;
+ 
+-	nr_extents = setup_swap_map_and_extents(si, swap_header, swap_map,
+-						maxpages, &span);
+-	if (unlikely(nr_extents < 0)) {
+-		error = nr_extents;
++	error = setup_swap_map(si, swap_header, swap_map, maxpages);
++	if (error)
+ 		goto bad_swap_unlock_inode;
+-	}
+ 
+ 	/*
+ 	 * Use kvmalloc_array instead of bitmap_zalloc as the allocation order might
+diff --git a/net/bluetooth/coredump.c b/net/bluetooth/coredump.c
+index 819eacb3876228..720cb79adf9648 100644
+--- a/net/bluetooth/coredump.c
++++ b/net/bluetooth/coredump.c
+@@ -249,15 +249,15 @@ static void hci_devcd_dump(struct hci_dev *hdev)
+ 
+ 	size = hdev->dump.tail - hdev->dump.head;
+ 
+-	/* Emit a devcoredump with the available data */
+-	dev_coredumpv(&hdev->dev, hdev->dump.head, size, GFP_KERNEL);
+-
+ 	/* Send a copy to monitor as a diagnostic packet */
+ 	skb = bt_skb_alloc(size, GFP_ATOMIC);
+ 	if (skb) {
+ 		skb_put_data(skb, hdev->dump.head, size);
+ 		hci_recv_diag(hdev, skb);
+ 	}
++
++	/* Emit a devcoredump with the available data */
++	dev_coredumpv(&hdev->dev, hdev->dump.head, size, GFP_KERNEL);
+ }
+ 
+ static void hci_devcd_handle_pkt_complete(struct hci_dev *hdev,
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index cf4b30ac9e0e57..c1dd8d78701fe5 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6239,6 +6239,11 @@ static void hci_le_adv_report_evt(struct hci_dev *hdev, void *data,
+ 
+ static u8 ext_evt_type_to_legacy(struct hci_dev *hdev, u16 evt_type)
+ {
++	u16 pdu_type = evt_type & ~LE_EXT_ADV_DATA_STATUS_MASK;
++
++	if (!pdu_type)
++		return LE_ADV_NONCONN_IND;
++
+ 	if (evt_type & LE_EXT_ADV_LEGACY_PDU) {
+ 		switch (evt_type) {
+ 		case LE_LEGACY_ADV_IND:
+@@ -6270,8 +6275,7 @@ static u8 ext_evt_type_to_legacy(struct hci_dev *hdev, u16 evt_type)
+ 	if (evt_type & LE_EXT_ADV_SCAN_IND)
+ 		return LE_ADV_SCAN_IND;
+ 
+-	if (evt_type == LE_EXT_ADV_NON_CONN_IND ||
+-	    evt_type & LE_EXT_ADV_DIRECT_IND)
++	if (evt_type & LE_EXT_ADV_DIRECT_IND)
+ 		return LE_ADV_NONCONN_IND;
+ 
+ invalid:
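
The new ext_evt_type_to_legacy() prologue strips the data-status bits before looking at the PDU properties: a truncated extended advertising report can arrive with only the data-status field set, and such events previously fell through to the invalid label. A tiny userspace sketch of the masking, assuming LE_EXT_ADV_DATA_STATUS_MASK is 0x0060 (bits 5-6 of the event type), as in the bluetooth headers:

#include <stdio.h>
#include <stdint.h>

#define LE_EXT_ADV_DATA_STATUS_MASK 0x0060	/* assumed header value */

int main(void)
{
	/* An "incomplete, more data to come" report: only the data-status
	 * bits are set, no PDU property bits at all. */
	uint16_t evt_type = 0x0020;
	uint16_t pdu_type = evt_type & ~LE_EXT_ADV_DATA_STATUS_MASK;

	printf("pdu_type = 0x%04x -> %s\n", pdu_type,
	       pdu_type ? "inspect property bits"
		        : "treat as LE_ADV_NONCONN_IND");
	return 0;
}
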
+diff --git a/net/caif/cfctrl.c b/net/caif/cfctrl.c
+index 20139fa1be1fff..06b604cf9d58c0 100644
+--- a/net/caif/cfctrl.c
++++ b/net/caif/cfctrl.c
+@@ -351,17 +351,154 @@ int cfctrl_cancel_req(struct cflayer *layr, struct cflayer *adap_layer)
+ 	return found;
+ }
+ 
++static int cfctrl_link_setup(struct cfctrl *cfctrl, struct cfpkt *pkt, u8 cmdrsp)
++{
++	u8 len;
++	u8 linkid = 0;
++	enum cfctrl_srv serv;
++	enum cfctrl_srv servtype;
++	u8 endpoint;
++	u8 physlinkid;
++	u8 prio;
++	u8 tmp;
++	u8 *cp;
++	int i;
++	struct cfctrl_link_param linkparam;
++	struct cfctrl_request_info rsp, *req;
++
++	memset(&linkparam, 0, sizeof(linkparam));
++
++	tmp = cfpkt_extr_head_u8(pkt);
++
++	serv = tmp & CFCTRL_SRV_MASK;
++	linkparam.linktype = serv;
++
++	servtype = tmp >> 4;
++	linkparam.chtype = servtype;
++
++	tmp = cfpkt_extr_head_u8(pkt);
++	physlinkid = tmp & 0x07;
++	prio = tmp >> 3;
++
++	linkparam.priority = prio;
++	linkparam.phyid = physlinkid;
++	endpoint = cfpkt_extr_head_u8(pkt);
++	linkparam.endpoint = endpoint & 0x03;
++
++	switch (serv) {
++	case CFCTRL_SRV_VEI:
++	case CFCTRL_SRV_DBG:
++		if (CFCTRL_ERR_BIT & cmdrsp)
++			break;
++		/* Link ID */
++		linkid = cfpkt_extr_head_u8(pkt);
++		break;
++	case CFCTRL_SRV_VIDEO:
++		tmp = cfpkt_extr_head_u8(pkt);
++		linkparam.u.video.connid = tmp;
++		if (CFCTRL_ERR_BIT & cmdrsp)
++			break;
++		/* Link ID */
++		linkid = cfpkt_extr_head_u8(pkt);
++		break;
++
++	case CFCTRL_SRV_DATAGRAM:
++		linkparam.u.datagram.connid = cfpkt_extr_head_u32(pkt);
++		if (CFCTRL_ERR_BIT & cmdrsp)
++			break;
++		/* Link ID */
++		linkid = cfpkt_extr_head_u8(pkt);
++		break;
++	case CFCTRL_SRV_RFM:
++		/* Construct a frame, convert
++		 * DatagramConnectionID
++		 * to network format long and copy it out...
++		 */
++		linkparam.u.rfm.connid = cfpkt_extr_head_u32(pkt);
++		cp = (u8 *) linkparam.u.rfm.volume;
++		for (tmp = cfpkt_extr_head_u8(pkt);
++		     cfpkt_more(pkt) && tmp != '\0';
++		     tmp = cfpkt_extr_head_u8(pkt))
++			*cp++ = tmp;
++		*cp = '\0';
++
++		if (CFCTRL_ERR_BIT & cmdrsp)
++			break;
++		/* Link ID */
++		linkid = cfpkt_extr_head_u8(pkt);
++
++		break;
++	case CFCTRL_SRV_UTIL:
++		/* Construct a frame, convert
++		 * DatagramConnectionID
++		 * to network format long and copy it out...
++		 */
++		/* Fifosize KB */
++		linkparam.u.utility.fifosize_kb = cfpkt_extr_head_u16(pkt);
++		/* Fifosize bufs */
++		linkparam.u.utility.fifosize_bufs = cfpkt_extr_head_u16(pkt);
++		/* name */
++		cp = (u8 *) linkparam.u.utility.name;
++		caif_assert(sizeof(linkparam.u.utility.name)
++			     >= UTILITY_NAME_LENGTH);
++		for (i = 0; i < UTILITY_NAME_LENGTH && cfpkt_more(pkt); i++) {
++			tmp = cfpkt_extr_head_u8(pkt);
++			*cp++ = tmp;
++		}
++		/* Length */
++		len = cfpkt_extr_head_u8(pkt);
++		linkparam.u.utility.paramlen = len;
++		/* Param Data */
++		cp = linkparam.u.utility.params;
++		while (cfpkt_more(pkt) && len--) {
++			tmp = cfpkt_extr_head_u8(pkt);
++			*cp++ = tmp;
++		}
++		if (CFCTRL_ERR_BIT & cmdrsp)
++			break;
++		/* Link ID */
++		linkid = cfpkt_extr_head_u8(pkt);
++		/* Length */
++		len = cfpkt_extr_head_u8(pkt);
++		/* Param Data */
++		cfpkt_extr_head(pkt, NULL, len);
++		break;
++	default:
++		pr_warn("Request setup, invalid type (%d)\n", serv);
++		return -1;
++	}
++
++	rsp.cmd = CFCTRL_CMD_LINK_SETUP;
++	rsp.param = linkparam;
++	spin_lock_bh(&cfctrl->info_list_lock);
++	req = cfctrl_remove_req(cfctrl, &rsp);
++
++	if (CFCTRL_ERR_BIT == (CFCTRL_ERR_BIT & cmdrsp) ||
++		cfpkt_erroneous(pkt)) {
++		pr_err("Invalid O/E bit or parse error "
++				"on CAIF control channel\n");
++		cfctrl->res.reject_rsp(cfctrl->serv.layer.up, 0,
++				       req ? req->client_layer : NULL);
++	} else {
++		cfctrl->res.linksetup_rsp(cfctrl->serv.layer.up, linkid,
++					  serv, physlinkid,
++					  req ?  req->client_layer : NULL);
++	}
++
++	kfree(req);
++
++	spin_unlock_bh(&cfctrl->info_list_lock);
++
++	return 0;
++}
++
+ static int cfctrl_recv(struct cflayer *layer, struct cfpkt *pkt)
+ {
+ 	u8 cmdrsp;
+ 	u8 cmd;
+-	int ret = -1;
+-	u8 len;
+-	u8 param[255];
++	int ret = 0;
+ 	u8 linkid = 0;
+ 	struct cfctrl *cfctrl = container_obj(layer);
+-	struct cfctrl_request_info rsp, *req;
+-
+ 
+ 	cmdrsp = cfpkt_extr_head_u8(pkt);
+ 	cmd = cmdrsp & CFCTRL_CMD_MASK;
+@@ -374,150 +511,7 @@ static int cfctrl_recv(struct cflayer *layer, struct cfpkt *pkt)
+ 
+ 	switch (cmd) {
+ 	case CFCTRL_CMD_LINK_SETUP:
+-		{
+-			enum cfctrl_srv serv;
+-			enum cfctrl_srv servtype;
+-			u8 endpoint;
+-			u8 physlinkid;
+-			u8 prio;
+-			u8 tmp;
+-			u8 *cp;
+-			int i;
+-			struct cfctrl_link_param linkparam;
+-			memset(&linkparam, 0, sizeof(linkparam));
+-
+-			tmp = cfpkt_extr_head_u8(pkt);
+-
+-			serv = tmp & CFCTRL_SRV_MASK;
+-			linkparam.linktype = serv;
+-
+-			servtype = tmp >> 4;
+-			linkparam.chtype = servtype;
+-
+-			tmp = cfpkt_extr_head_u8(pkt);
+-			physlinkid = tmp & 0x07;
+-			prio = tmp >> 3;
+-
+-			linkparam.priority = prio;
+-			linkparam.phyid = physlinkid;
+-			endpoint = cfpkt_extr_head_u8(pkt);
+-			linkparam.endpoint = endpoint & 0x03;
+-
+-			switch (serv) {
+-			case CFCTRL_SRV_VEI:
+-			case CFCTRL_SRV_DBG:
+-				if (CFCTRL_ERR_BIT & cmdrsp)
+-					break;
+-				/* Link ID */
+-				linkid = cfpkt_extr_head_u8(pkt);
+-				break;
+-			case CFCTRL_SRV_VIDEO:
+-				tmp = cfpkt_extr_head_u8(pkt);
+-				linkparam.u.video.connid = tmp;
+-				if (CFCTRL_ERR_BIT & cmdrsp)
+-					break;
+-				/* Link ID */
+-				linkid = cfpkt_extr_head_u8(pkt);
+-				break;
+-
+-			case CFCTRL_SRV_DATAGRAM:
+-				linkparam.u.datagram.connid =
+-				    cfpkt_extr_head_u32(pkt);
+-				if (CFCTRL_ERR_BIT & cmdrsp)
+-					break;
+-				/* Link ID */
+-				linkid = cfpkt_extr_head_u8(pkt);
+-				break;
+-			case CFCTRL_SRV_RFM:
+-				/* Construct a frame, convert
+-				 * DatagramConnectionID
+-				 * to network format long and copy it out...
+-				 */
+-				linkparam.u.rfm.connid =
+-				    cfpkt_extr_head_u32(pkt);
+-				cp = (u8 *) linkparam.u.rfm.volume;
+-				for (tmp = cfpkt_extr_head_u8(pkt);
+-				     cfpkt_more(pkt) && tmp != '\0';
+-				     tmp = cfpkt_extr_head_u8(pkt))
+-					*cp++ = tmp;
+-				*cp = '\0';
+-
+-				if (CFCTRL_ERR_BIT & cmdrsp)
+-					break;
+-				/* Link ID */
+-				linkid = cfpkt_extr_head_u8(pkt);
+-
+-				break;
+-			case CFCTRL_SRV_UTIL:
+-				/* Construct a frame, convert
+-				 * DatagramConnectionID
+-				 * to network format long and copy it out...
+-				 */
+-				/* Fifosize KB */
+-				linkparam.u.utility.fifosize_kb =
+-				    cfpkt_extr_head_u16(pkt);
+-				/* Fifosize bufs */
+-				linkparam.u.utility.fifosize_bufs =
+-				    cfpkt_extr_head_u16(pkt);
+-				/* name */
+-				cp = (u8 *) linkparam.u.utility.name;
+-				caif_assert(sizeof(linkparam.u.utility.name)
+-					     >= UTILITY_NAME_LENGTH);
+-				for (i = 0;
+-				     i < UTILITY_NAME_LENGTH
+-				     && cfpkt_more(pkt); i++) {
+-					tmp = cfpkt_extr_head_u8(pkt);
+-					*cp++ = tmp;
+-				}
+-				/* Length */
+-				len = cfpkt_extr_head_u8(pkt);
+-				linkparam.u.utility.paramlen = len;
+-				/* Param Data */
+-				cp = linkparam.u.utility.params;
+-				while (cfpkt_more(pkt) && len--) {
+-					tmp = cfpkt_extr_head_u8(pkt);
+-					*cp++ = tmp;
+-				}
+-				if (CFCTRL_ERR_BIT & cmdrsp)
+-					break;
+-				/* Link ID */
+-				linkid = cfpkt_extr_head_u8(pkt);
+-				/* Length */
+-				len = cfpkt_extr_head_u8(pkt);
+-				/* Param Data */
+-				cfpkt_extr_head(pkt, &param, len);
+-				break;
+-			default:
+-				pr_warn("Request setup, invalid type (%d)\n",
+-					serv);
+-				goto error;
+-			}
+-
+-			rsp.cmd = cmd;
+-			rsp.param = linkparam;
+-			spin_lock_bh(&cfctrl->info_list_lock);
+-			req = cfctrl_remove_req(cfctrl, &rsp);
+-
+-			if (CFCTRL_ERR_BIT == (CFCTRL_ERR_BIT & cmdrsp) ||
+-				cfpkt_erroneous(pkt)) {
+-				pr_err("Invalid O/E bit or parse error "
+-						"on CAIF control channel\n");
+-				cfctrl->res.reject_rsp(cfctrl->serv.layer.up,
+-						       0,
+-						       req ? req->client_layer
+-						       : NULL);
+-			} else {
+-				cfctrl->res.linksetup_rsp(cfctrl->serv.
+-							  layer.up, linkid,
+-							  serv, physlinkid,
+-							  req ? req->
+-							  client_layer : NULL);
+-			}
+-
+-			kfree(req);
+-
+-			spin_unlock_bh(&cfctrl->info_list_lock);
+-		}
++		ret = cfctrl_link_setup(cfctrl, pkt, cmdrsp);
+ 		break;
+ 	case CFCTRL_CMD_LINK_DESTROY:
+ 		linkid = cfpkt_extr_head_u8(pkt);
+@@ -544,9 +538,9 @@ static int cfctrl_recv(struct cflayer *layer, struct cfpkt *pkt)
+ 		break;
+ 	default:
+ 		pr_err("Unrecognized Control Frame\n");
++		ret = -1;
+ 		goto error;
+ 	}
+-	ret = 0;
+ error:
+ 	cfpkt_destroy(pkt);
+ 	return ret;
+diff --git a/net/core/dst.c b/net/core/dst.c
+index 795ca07e28a4ef..b3a12c7c08af0c 100644
+--- a/net/core/dst.c
++++ b/net/core/dst.c
+@@ -148,9 +148,9 @@ void dst_dev_put(struct dst_entry *dst)
+ 	dst->obsolete = DST_OBSOLETE_DEAD;
+ 	if (dst->ops->ifdown)
+ 		dst->ops->ifdown(dst, dev);
+-	dst->input = dst_discard;
+-	dst->output = dst_discard_out;
+-	dst->dev = blackhole_netdev;
++	WRITE_ONCE(dst->input, dst_discard);
++	WRITE_ONCE(dst->output, dst_discard_out);
++	WRITE_ONCE(dst->dev, blackhole_netdev);
+ 	netdev_ref_replace(dev, blackhole_netdev, &dst->dev_tracker,
+ 			   GFP_ATOMIC);
+ }
+@@ -263,7 +263,7 @@ unsigned int dst_blackhole_mtu(const struct dst_entry *dst)
+ {
+ 	unsigned int mtu = dst_metric_raw(dst, RTAX_MTU);
+ 
+-	return mtu ? : dst->dev->mtu;
++	return mtu ? : dst_dev(dst)->mtu;
+ }
+ EXPORT_SYMBOL_GPL(dst_blackhole_mtu);
+ 
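Several hunks in this patch (dst.c above, and the route.c, sock.c and ip6_fib.c changes below) wrap lockless reads and writes in READ_ONCE()/WRITE_ONCE(). These force single, untorn accesses and document the intentional data race for KCSAN. A userspace approximation of the idiom, with simplified volatile-cast definitions standing in for the kernel macros:

#include <stdio.h>

/* Simplified stand-ins for the kernel's READ_ONCE/WRITE_ONCE: one
 * volatile access the compiler may not tear, fuse, or re-load. */
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

struct dst { void *dev; };

int main(void)
{
	static int blackhole;			/* stand-in for blackhole_netdev */
	struct dst d = { .dev = NULL };

	WRITE_ONCE(d.dev, &blackhole);		/* writer side: dst_dev_put() */
	printf("dev=%p\n", READ_ONCE(d.dev));	/* reader side: dst_dev() */
	return 0;
}
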
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 34f91c3aacb2f1..ac2cb6eba56e16 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -9463,6 +9463,9 @@ static bool flow_dissector_is_valid_access(int off, int size,
+ 	if (off < 0 || off >= sizeof(struct __sk_buff))
+ 		return false;
+ 
++	if (off % size != 0)
++		return false;
++
+ 	if (type == BPF_WRITE)
+ 		return false;
+ 
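This flow-dissector hook (and the matching hunk in nf_bpf_link.c further down) adds the standard verifier requirement that context loads be naturally aligned: a 4-byte read may only start at a multiple of 4. A runnable sketch of the check, with a hypothetical context size:

#include <stdio.h>

/* Verifier-style context access check (sketch): the load must lie
 * inside the context and start at a size-aligned offset. */
static int valid_access(int off, int size, int ctx_size)
{
	if (off < 0 || off >= ctx_size)
		return 0;
	if (off % size != 0)	/* the check added by these hunks */
		return 0;
	return 1;
}

int main(void)
{
	printf("off=4 size=4: %d\n", valid_access(4, 4, 192));	/* 1 */
	printf("off=3 size=4: %d\n", valid_access(3, 4, 192));	/* 0 */
	return 0;
}
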
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index a07249b59ae1e3..559841334f1a2b 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -368,6 +368,43 @@ static void pneigh_queue_purge(struct sk_buff_head *list, struct net *net,
+ 	}
+ }
+ 
++static void neigh_flush_one(struct neighbour *n)
++{
++	hlist_del_rcu(&n->hash);
++	hlist_del_rcu(&n->dev_list);
++
++	write_lock(&n->lock);
++
++	neigh_del_timer(n);
++	neigh_mark_dead(n);
++
++	if (refcount_read(&n->refcnt) != 1) {
++		/* The most unpleasant situation.
++		 * We must destroy neighbour entry,
++		 * but someone still uses it.
++		 *
++		 * The destroy will be delayed until
++		 * the last user releases us, but
++		 * we must kill timers etc. and move
++		 * it to safe state.
++		 */
++		__skb_queue_purge(&n->arp_queue);
++		n->arp_queue_len_bytes = 0;
++		WRITE_ONCE(n->output, neigh_blackhole);
++
++		if (n->nud_state & NUD_VALID)
++			n->nud_state = NUD_NOARP;
++		else
++			n->nud_state = NUD_NONE;
++
++		neigh_dbg(2, "neigh %p is stray\n", n);
++	}
++
++	write_unlock(&n->lock);
++
++	neigh_cleanup_and_release(n);
++}
++
+ static void neigh_flush_dev(struct neigh_table *tbl, struct net_device *dev,
+ 			    bool skip_perm)
+ {
+@@ -381,32 +418,24 @@ static void neigh_flush_dev(struct neigh_table *tbl, struct net_device *dev,
+ 		if (skip_perm && n->nud_state & NUD_PERMANENT)
+ 			continue;
+ 
+-		hlist_del_rcu(&n->hash);
+-		hlist_del_rcu(&n->dev_list);
+-		write_lock(&n->lock);
+-		neigh_del_timer(n);
+-		neigh_mark_dead(n);
+-		if (refcount_read(&n->refcnt) != 1) {
+-			/* The most unpleasant situation.
+-			 * We must destroy neighbour entry,
+-			 * but someone still uses it.
+-			 *
+-			 * The destroy will be delayed until
+-			 * the last user releases us, but
+-			 * we must kill timers etc. and move
+-			 * it to safe state.
+-			 */
+-			__skb_queue_purge(&n->arp_queue);
+-			n->arp_queue_len_bytes = 0;
+-			WRITE_ONCE(n->output, neigh_blackhole);
+-			if (n->nud_state & NUD_VALID)
+-				n->nud_state = NUD_NOARP;
+-			else
+-				n->nud_state = NUD_NONE;
+-			neigh_dbg(2, "neigh %p is stray\n", n);
+-		}
+-		write_unlock(&n->lock);
+-		neigh_cleanup_and_release(n);
++		neigh_flush_one(n);
++	}
++}
++
++static void neigh_flush_table(struct neigh_table *tbl)
++{
++	struct neigh_hash_table *nht;
++	int i;
++
++	nht = rcu_dereference_protected(tbl->nht,
++					lockdep_is_held(&tbl->lock));
++
++	for (i = 0; i < (1 << nht->hash_shift); i++) {
++		struct hlist_node *tmp;
++		struct neighbour *n;
++
++		neigh_for_each_in_bucket_safe(n, tmp, &nht->hash_heads[i])
++			neigh_flush_one(n);
+ 	}
+ }
+ 
+@@ -422,7 +451,12 @@ static int __neigh_ifdown(struct neigh_table *tbl, struct net_device *dev,
+ 			  bool skip_perm)
+ {
+ 	write_lock_bh(&tbl->lock);
+-	neigh_flush_dev(tbl, dev, skip_perm);
++	if (likely(dev)) {
++		neigh_flush_dev(tbl, dev, skip_perm);
++	} else {
++		DEBUG_NET_WARN_ON_ONCE(skip_perm);
++		neigh_flush_table(tbl);
++	}
+ 	pneigh_ifdown_and_unlock(tbl, dev);
+ 	pneigh_queue_purge(&tbl->proxy_queue, dev ? dev_net(dev) : NULL,
+ 			   tbl->family);
+diff --git a/net/core/netpoll.c b/net/core/netpoll.c
+index 6ad84d4a2b464b..63477a6dd6e965 100644
+--- a/net/core/netpoll.c
++++ b/net/core/netpoll.c
+@@ -831,6 +831,13 @@ int netpoll_setup(struct netpoll *np)
+ 	if (err)
+ 		goto flush;
+ 	rtnl_unlock();
++
++	/* Make sure all NAPI polls which started before dev->npinfo
++	 * was visible have exited before we start calling NAPI poll.
++	 * NAPI skips locking if dev->npinfo is NULL.
++	 */
++	synchronize_rcu();
++
+ 	return 0;
+ 
+ flush:
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 34c51eb1a14fb4..83c78379932e23 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -656,6 +656,13 @@ static void sk_psock_backlog(struct work_struct *work)
+ 	bool ingress;
+ 	int ret;
+ 
++	/* If sk is quickly removed from the map and then added back, the old
++	 * psock should not be scheduled, because there are now two psocks
++	 * pointing to the same sk.
++	 */
++	if (!sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
++		return;
++
+ 	/* Increment the psock refcnt to synchronize with close(fd) path in
+ 	 * sock_map_close(), ensuring we wait for backlog thread completion
+ 	 * before sk_socket freed. If refcnt increment fails, it indicates
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 3e8c548cb1f878..bcd0d6c757ce17 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2557,8 +2557,8 @@ static u32 sk_dst_gso_max_size(struct sock *sk, struct dst_entry *dst)
+ 		   !ipv6_addr_v4mapped(&sk->sk_v6_rcv_saddr));
+ #endif
+ 	/* pairs with the WRITE_ONCE() in netif_set_gso(_ipv4)_max_size() */
+-	max_size = is_ipv6 ? READ_ONCE(dst->dev->gso_max_size) :
+-			READ_ONCE(dst->dev->gso_ipv4_max_size);
++	max_size = is_ipv6 ? READ_ONCE(dst_dev(dst)->gso_max_size) :
++			READ_ONCE(dst_dev(dst)->gso_ipv4_max_size);
+ 	if (max_size > GSO_LEGACY_MAX_SIZE && !sk_is_tcp(sk))
+ 		max_size = GSO_LEGACY_MAX_SIZE;
+ 
+@@ -2569,7 +2569,7 @@ void sk_setup_caps(struct sock *sk, struct dst_entry *dst)
+ {
+ 	u32 max_segs = 1;
+ 
+-	sk->sk_route_caps = dst->dev->features;
++	sk->sk_route_caps = dst_dev(dst)->features;
+ 	if (sk_is_tcp(sk)) {
+ 		struct inet_connection_sock *icsk = inet_csk(sk);
+ 
+@@ -2587,7 +2587,7 @@ void sk_setup_caps(struct sock *sk, struct dst_entry *dst)
+ 			sk->sk_route_caps |= NETIF_F_SG | NETIF_F_HW_CSUM;
+ 			sk->sk_gso_max_size = sk_dst_gso_max_size(sk, dst);
+ 			/* pairs with the WRITE_ONCE() in netif_set_gso_max_segs() */
+-			max_segs = max_t(u32, READ_ONCE(dst->dev->gso_max_segs), 1);
++			max_segs = max_t(u32, READ_ONCE(dst_dev(dst)->gso_max_segs), 1);
+ 		}
+ 	}
+ 	sk->sk_gso_max_segs = max_segs;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 5d7c7efea66cc6..e686f088bc67fb 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1684,8 +1684,8 @@ struct rtable *rt_dst_clone(struct net_device *dev, struct rtable *rt)
+ 		else if (rt->rt_gw_family == AF_INET6)
+ 			new_rt->rt_gw6 = rt->rt_gw6;
+ 
+-		new_rt->dst.input = rt->dst.input;
+-		new_rt->dst.output = rt->dst.output;
++		new_rt->dst.input = READ_ONCE(rt->dst.input);
++		new_rt->dst.output = READ_ONCE(rt->dst.output);
+ 		new_rt->dst.error = rt->dst.error;
+ 		new_rt->dst.lastuse = jiffies;
+ 		new_rt->dst.lwtstate = lwtstate_get(rt->dst.lwtstate);
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index bce2a111cc9e05..ca24c2ea359b15 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -4986,8 +4986,9 @@ static void tcp_ofo_queue(struct sock *sk)
+ 
+ 		if (before(TCP_SKB_CB(skb)->seq, dsack_high)) {
+ 			__u32 dsack = dsack_high;
++
+ 			if (before(TCP_SKB_CB(skb)->end_seq, dsack_high))
+-				dsack_high = TCP_SKB_CB(skb)->end_seq;
++				dsack = TCP_SKB_CB(skb)->end_seq;
+ 			tcp_dsack_extend(sk, TCP_SKB_CB(skb)->seq, dsack);
+ 		}
+ 		p = rb_next(p);
+@@ -5055,6 +5056,7 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
+ 		return;
+ 	}
+ 
++	tcp_measure_rcv_mss(sk, skb);
+ 	/* Disable header prediction. */
+ 	tp->pred_flags = 0;
+ 	inet_csk_schedule_ack(sk);
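
The tcp_ofo_queue() fix is subtle: the old code lowered dsack_high itself when an skb ended below it, shrinking the D-SACK range reported for every later skb in the loop, while the fix clamps only the per-skb dsack value. A self-contained illustration using the kernel's sequence-number comparison:

#include <stdio.h>
#include <stdint.h>

/* Wrap-safe TCP sequence compare, as in the kernel's before() */
static int before(uint32_t s1, uint32_t s2)
{
	return (int32_t)(s1 - s2) < 0;
}

int main(void)
{
	uint32_t dsack_high = 3000;		/* upper edge to D-SACK */
	uint32_t end_seq = 2000;		/* first out-of-order skb */
	uint32_t dsack = dsack_high;

	/* Fixed logic: clamp only the per-skb value ... */
	if (before(end_seq, dsack_high))
		dsack = end_seq;
	/* ... so dsack_high still covers the remaining skbs in the loop. */
	printf("dsack=%u dsack_high=%u\n", dsack, dsack_high); /* 2000 3000 */
	return 0;
}
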
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index bf727149fdece6..3ac7e15d8d2370 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -433,15 +433,17 @@ struct fib6_dump_arg {
+ static int fib6_rt_dump(struct fib6_info *rt, struct fib6_dump_arg *arg)
+ {
+ 	enum fib_event_type fib_event = FIB_EVENT_ENTRY_REPLACE;
++	unsigned int nsiblings;
+ 	int err;
+ 
+ 	if (!rt || rt == arg->net->ipv6.fib6_null_entry)
+ 		return 0;
+ 
+-	if (rt->fib6_nsiblings)
++	nsiblings = READ_ONCE(rt->fib6_nsiblings);
++	if (nsiblings)
+ 		err = call_fib6_multipath_entry_notifier(arg->nb, fib_event,
+ 							 rt,
+-							 rt->fib6_nsiblings,
++							 nsiblings,
+ 							 arg->extack);
+ 	else
+ 		err = call_fib6_entry_notifier(arg->nb, fib_event, rt,
+@@ -1119,7 +1121,7 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ 
+ 			if (rt6_duplicate_nexthop(iter, rt)) {
+ 				if (rt->fib6_nsiblings)
+-					rt->fib6_nsiblings = 0;
++					WRITE_ONCE(rt->fib6_nsiblings, 0);
+ 				if (!(iter->fib6_flags & RTF_EXPIRES))
+ 					return -EEXIST;
+ 				if (!(rt->fib6_flags & RTF_EXPIRES)) {
+@@ -1148,7 +1150,8 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ 			 */
+ 			if (rt_can_ecmp &&
+ 			    rt6_qualify_for_ecmp(iter))
+-				rt->fib6_nsiblings++;
++				WRITE_ONCE(rt->fib6_nsiblings,
++					   rt->fib6_nsiblings + 1);
+ 		}
+ 
+ 		if (iter->fib6_metric > rt->fib6_metric)
+@@ -1198,7 +1201,8 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ 		fib6_nsiblings = 0;
+ 		list_for_each_entry_safe(sibling, temp_sibling,
+ 					 &rt->fib6_siblings, fib6_siblings) {
+-			sibling->fib6_nsiblings++;
++			WRITE_ONCE(sibling->fib6_nsiblings,
++				   sibling->fib6_nsiblings + 1);
+ 			BUG_ON(sibling->fib6_nsiblings != rt->fib6_nsiblings);
+ 			fib6_nsiblings++;
+ 		}
+@@ -1243,8 +1247,9 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ 				list_for_each_entry_safe(sibling, next_sibling,
+ 							 &rt->fib6_siblings,
+ 							 fib6_siblings)
+-					sibling->fib6_nsiblings--;
+-				rt->fib6_nsiblings = 0;
++					WRITE_ONCE(sibling->fib6_nsiblings,
++						   sibling->fib6_nsiblings - 1);
++				WRITE_ONCE(rt->fib6_nsiblings, 0);
+ 				list_del_rcu(&rt->fib6_siblings);
+ 				rt6_multipath_rebalance(next_sibling);
+ 				return err;
+@@ -1961,8 +1966,9 @@ static void fib6_del_route(struct fib6_table *table, struct fib6_node *fn,
+ 			notify_del = true;
+ 		list_for_each_entry_safe(sibling, next_sibling,
+ 					 &rt->fib6_siblings, fib6_siblings)
+-			sibling->fib6_nsiblings--;
+-		rt->fib6_nsiblings = 0;
++			WRITE_ONCE(sibling->fib6_nsiblings,
++				   sibling->fib6_nsiblings - 1);
++		WRITE_ONCE(rt->fib6_nsiblings, 0);
+ 		list_del_rcu(&rt->fib6_siblings);
+ 		rt6_multipath_rebalance(next_sibling);
+ 	}
+diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
+index 9822163428b028..fce91183797a60 100644
+--- a/net/ipv6/ip6_offload.c
++++ b/net/ipv6/ip6_offload.c
+@@ -148,7 +148,9 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
+ 
+ 	ops = rcu_dereference(inet6_offloads[proto]);
+ 	if (likely(ops && ops->callbacks.gso_segment)) {
+-		skb_reset_transport_header(skb);
++		if (!skb_reset_transport_header_careful(skb))
++			goto out;
++
+ 		segs = ops->callbacks.gso_segment(skb, features);
+ 		if (!segs)
+ 			skb->network_header = skb_mac_header(skb) + nhoff - skb->head;
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index 3276cde5ebd704..63c90dae6cbf7b 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -2039,6 +2039,7 @@ static int ip6mr_forward2(struct net *net, struct mr_table *mrt,
+ 			  struct sk_buff *skb, int vifi)
+ {
+ 	struct vif_device *vif = &mrt->vif_table[vifi];
++	struct net_device *indev = skb->dev;
+ 	struct net_device *vif_dev;
+ 	struct ipv6hdr *ipv6h;
+ 	struct dst_entry *dst;
+@@ -2101,7 +2102,7 @@ static int ip6mr_forward2(struct net *net, struct mr_table *mrt,
+ 	IP6CB(skb)->flags |= IP6SKB_FORWARDED;
+ 
+ 	return NF_HOOK(NFPROTO_IPV6, NF_INET_FORWARD,
+-		       net, NULL, skb, skb->dev, vif_dev,
++		       net, NULL, skb, indev, skb->dev,
+ 		       ip6mr_forward2_finish);
+ 
+ out_free:
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 96f1621e2381c8..aa1341fc99331e 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -5249,7 +5249,8 @@ static void ip6_route_mpath_notify(struct fib6_info *rt,
+ 	 */
+ 	rcu_read_lock();
+ 
+-	if ((nlflags & NLM_F_APPEND) && rt_last && rt_last->fib6_nsiblings) {
++	if ((nlflags & NLM_F_APPEND) && rt_last &&
++	    READ_ONCE(rt_last->fib6_nsiblings)) {
+ 		rt = list_first_or_null_rcu(&rt_last->fib6_siblings,
+ 					    struct fib6_info,
+ 					    fib6_siblings);
+@@ -5596,32 +5597,34 @@ static int rt6_nh_nlmsg_size(struct fib6_nh *nh, void *arg)
+ 
+ static size_t rt6_nlmsg_size(struct fib6_info *f6i)
+ {
++	struct fib6_info *sibling;
++	struct fib6_nh *nh;
+ 	int nexthop_len;
+ 
+ 	if (f6i->nh) {
+ 		nexthop_len = nla_total_size(4); /* RTA_NH_ID */
+ 		nexthop_for_each_fib6_nh(f6i->nh, rt6_nh_nlmsg_size,
+ 					 &nexthop_len);
+-	} else {
+-		struct fib6_nh *nh = f6i->fib6_nh;
+-		struct fib6_info *sibling;
+-
+-		nexthop_len = 0;
+-		if (f6i->fib6_nsiblings) {
+-			rt6_nh_nlmsg_size(nh, &nexthop_len);
+-
+-			rcu_read_lock();
++		goto common;
++	}
+ 
+-			list_for_each_entry_rcu(sibling, &f6i->fib6_siblings,
+-						fib6_siblings) {
+-				rt6_nh_nlmsg_size(sibling->fib6_nh, &nexthop_len);
+-			}
++	rcu_read_lock();
++retry:
++	nh = f6i->fib6_nh;
++	nexthop_len = 0;
++	if (READ_ONCE(f6i->fib6_nsiblings)) {
++		rt6_nh_nlmsg_size(nh, &nexthop_len);
+ 
+-			rcu_read_unlock();
++		list_for_each_entry_rcu(sibling, &f6i->fib6_siblings,
++					fib6_siblings) {
++			rt6_nh_nlmsg_size(sibling->fib6_nh, &nexthop_len);
++			if (!READ_ONCE(f6i->fib6_nsiblings))
++				goto retry;
+ 		}
+-		nexthop_len += lwtunnel_get_encap_size(nh->fib_nh_lws);
+ 	}
+-
++	rcu_read_unlock();
++	nexthop_len += lwtunnel_get_encap_size(nh->fib_nh_lws);
++common:
+ 	return NLMSG_ALIGN(sizeof(struct rtmsg))
+ 	       + nla_total_size(16) /* RTA_SRC */
+ 	       + nla_total_size(16) /* RTA_DST */
+@@ -5780,7 +5783,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 		if (dst->lwtstate &&
+ 		    lwtunnel_fill_encap(skb, dst->lwtstate, RTA_ENCAP, RTA_ENCAP_TYPE) < 0)
+ 			goto nla_put_failure;
+-	} else if (rt->fib6_nsiblings) {
++	} else if (READ_ONCE(rt->fib6_nsiblings)) {
+ 		struct fib6_info *sibling;
+ 		struct nlattr *mp;
+ 
+@@ -5882,16 +5885,21 @@ static bool fib6_info_uses_dev(const struct fib6_info *f6i,
+ 	if (f6i->fib6_nh->fib_nh_dev == dev)
+ 		return true;
+ 
+-	if (f6i->fib6_nsiblings) {
+-		struct fib6_info *sibling, *next_sibling;
++	if (READ_ONCE(f6i->fib6_nsiblings)) {
++		const struct fib6_info *sibling;
+ 
+-		list_for_each_entry_safe(sibling, next_sibling,
+-					 &f6i->fib6_siblings, fib6_siblings) {
+-			if (sibling->fib6_nh->fib_nh_dev == dev)
++		rcu_read_lock();
++		list_for_each_entry_rcu(sibling, &f6i->fib6_siblings,
++					fib6_siblings) {
++			if (sibling->fib6_nh->fib_nh_dev == dev) {
++				rcu_read_unlock();
+ 				return true;
++			}
++			if (!READ_ONCE(f6i->fib6_nsiblings))
++				break;
+ 		}
++		rcu_read_unlock();
+ 	}
+-
+ 	return false;
+ }
+ 
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 4a8d9c3ea480f6..d9f96a962fa603 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -1109,13 +1109,13 @@ ieee80211_copy_rnr_beacon(u8 *pos, struct cfg80211_rnr_elems *dst,
+ {
+ 	int i, offset = 0;
+ 
++	dst->cnt = src->cnt;
+ 	for (i = 0; i < src->cnt; i++) {
+ 		memcpy(pos + offset, src->elem[i].data, src->elem[i].len);
+ 		dst->elem[i].len = src->elem[i].len;
+ 		dst->elem[i].data = pos + offset;
+ 		offset += dst->elem[i].len;
+ 	}
+-	dst->cnt = src->cnt;
+ 
+ 	return offset;
+ }
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index 6b6de43d9420ac..1bad353d8a772b 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -407,9 +407,20 @@ void ieee80211_link_info_change_notify(struct ieee80211_sub_if_data *sdata,
+ 
+ 	WARN_ON_ONCE(changed & BSS_CHANGED_VIF_CFG_FLAGS);
+ 
+-	if (!changed || sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
++	if (!changed)
+ 		return;
+ 
++	switch (sdata->vif.type) {
++	case NL80211_IFTYPE_AP_VLAN:
++		return;
++	case NL80211_IFTYPE_MONITOR:
++		if (!ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF))
++			return;
++		break;
++	default:
++		break;
++	}
++
+ 	if (!check_sdata_in_driver(sdata))
+ 		return;
+ 
+diff --git a/net/mac80211/tdls.c b/net/mac80211/tdls.c
+index 2f92e7c7f203b6..49c92c5d3909de 100644
+--- a/net/mac80211/tdls.c
++++ b/net/mac80211/tdls.c
+@@ -1422,7 +1422,7 @@ int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev,
+ 	if (!(wiphy->flags & WIPHY_FLAG_SUPPORTS_TDLS))
+ 		return -EOPNOTSUPP;
+ 
+-	if (sdata->vif.type != NL80211_IFTYPE_STATION)
++	if (sdata->vif.type != NL80211_IFTYPE_STATION || !sdata->vif.cfg.assoc)
+ 		return -EINVAL;
+ 
+ 	switch (oper) {
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 695db38ccfb41a..506523803cc01a 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -622,6 +622,12 @@ ieee80211_tx_h_select_key(struct ieee80211_tx_data *tx)
+ 	else
+ 		tx->key = NULL;
+ 
++	if (info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP) {
++		if (tx->key && tx->key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE)
++			info->control.hw_key = &tx->key->conf;
++		return TX_CONTINUE;
++	}
++
+ 	if (tx->key) {
+ 		bool skip_hw = false;
+ 
+@@ -1438,7 +1444,7 @@ static void ieee80211_txq_enqueue(struct ieee80211_local *local,
+ {
+ 	struct fq *fq = &local->fq;
+ 	struct fq_tin *tin = &txqi->tin;
+-	u32 flow_idx = fq_flow_idx(fq, skb);
++	u32 flow_idx;
+ 
+ 	ieee80211_set_skb_enqueue_time(skb);
+ 
+@@ -1454,6 +1460,7 @@ static void ieee80211_txq_enqueue(struct ieee80211_local *local,
+ 			IEEE80211_TX_INTCFL_NEED_TXPROCESSING;
+ 		__skb_queue_tail(&txqi->frags, skb);
+ 	} else {
++		flow_idx = fq_flow_idx(fq, skb);
+ 		fq_tin_enqueue(fq, tin, flow_idx, skb,
+ 			       fq_skb_free_func);
+ 	}
+@@ -3887,6 +3894,7 @@ struct sk_buff *ieee80211_tx_dequeue(struct ieee80211_hw *hw,
+ 	 * The key can be removed while the packet was queued, so need to call
+ 	 * this here to get the current key.
+ 	 */
++	info->control.hw_key = NULL;
+ 	r = ieee80211_tx_h_select_key(&tx);
+ 	if (r != TX_CONTINUE) {
+ 		ieee80211_free_txskb(&local->hw, skb);
+@@ -4109,7 +4117,9 @@ void __ieee80211_schedule_txq(struct ieee80211_hw *hw,
+ 
+ 	spin_lock_bh(&local->active_txq_lock[txq->ac]);
+ 
+-	has_queue = force || txq_has_queue(txq);
++	has_queue = force ||
++		    (!test_bit(IEEE80211_TXQ_STOP, &txqi->flags) &&
++		     txq_has_queue(txq));
+ 	if (list_empty(&txqi->schedule_order) &&
+ 	    (has_queue || ieee80211_txq_keep_active(txqi))) {
+ 		/* If airtime accounting is active, always enqueue STAs at the
+diff --git a/net/netfilter/nf_bpf_link.c b/net/netfilter/nf_bpf_link.c
+index 06b08484470034..c12250e50a8b29 100644
+--- a/net/netfilter/nf_bpf_link.c
++++ b/net/netfilter/nf_bpf_link.c
+@@ -17,7 +17,7 @@ static unsigned int nf_hook_run_bpf(void *bpf_prog, struct sk_buff *skb,
+ 		.skb = skb,
+ 	};
+ 
+-	return bpf_prog_run(prog, &ctx);
++	return bpf_prog_run_pin_on_cpu(prog, &ctx);
+ }
+ 
+ struct bpf_nf_link {
+@@ -295,6 +295,9 @@ static bool nf_is_valid_access(int off, int size, enum bpf_access_type type,
+ 	if (off < 0 || off >= sizeof(struct bpf_nf_ctx))
+ 		return false;
+ 
++	if (off % size != 0)
++		return false;
++
+ 	if (type == BPF_WRITE)
+ 		return false;
+ 
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index a133e1c175ce9c..843f2c3ce73c6e 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1130,11 +1130,6 @@ static int nf_tables_fill_table_info(struct sk_buff *skb, struct net *net,
+ 			 NFTA_TABLE_PAD))
+ 		goto nla_put_failure;
+ 
+-	if (event == NFT_MSG_DELTABLE) {
+-		nlmsg_end(skb, nlh);
+-		return 0;
+-	}
+-
+ 	if (nla_put_be32(skb, NFTA_TABLE_FLAGS,
+ 			 htonl(table->flags & NFT_TABLE_F_MASK)))
+ 		goto nla_put_failure;
+@@ -1993,11 +1988,6 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net,
+ 			 NFTA_CHAIN_PAD))
+ 		goto nla_put_failure;
+ 
+-	if (event == NFT_MSG_DELCHAIN && !hook_list) {
+-		nlmsg_end(skb, nlh);
+-		return 0;
+-	}
+-
+ 	if (nft_is_base_chain(chain)) {
+ 		const struct nft_base_chain *basechain = nft_base_chain(chain);
+ 		struct nft_stats __percpu *stats;
+@@ -3991,7 +3981,7 @@ void nf_tables_rule_destroy(const struct nft_ctx *ctx, struct nft_rule *rule)
+ /* can only be used if rule is no longer visible to dumps */
+ static void nf_tables_rule_release(const struct nft_ctx *ctx, struct nft_rule *rule)
+ {
+-	lockdep_commit_lock_is_held(ctx->net);
++	WARN_ON_ONCE(!lockdep_commit_lock_is_held(ctx->net));
+ 
+ 	nft_rule_expr_deactivate(ctx, rule, NFT_TRANS_RELEASE);
+ 	nf_tables_rule_destroy(ctx, rule);
+@@ -4788,11 +4778,6 @@ static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx,
+ 			 NFTA_SET_PAD))
+ 		goto nla_put_failure;
+ 
+-	if (event == NFT_MSG_DELSET) {
+-		nlmsg_end(skb, nlh);
+-		return 0;
+-	}
+-
+ 	if (set->flags != 0)
+ 		if (nla_put_be32(skb, NFTA_SET_FLAGS, htonl(set->flags)))
+ 			goto nla_put_failure;
+@@ -5785,7 +5770,7 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 			      struct nft_set_binding *binding,
+ 			      enum nft_trans_phase phase)
+ {
+-	lockdep_commit_lock_is_held(ctx->net);
++	WARN_ON_ONCE(!lockdep_commit_lock_is_held(ctx->net));
+ 
+ 	switch (phase) {
+ 	case NFT_TRANS_PREPARE_ERROR:
+@@ -8276,11 +8261,6 @@ static int nf_tables_fill_obj_info(struct sk_buff *skb, struct net *net,
+ 			 NFTA_OBJ_PAD))
+ 		goto nla_put_failure;
+ 
+-	if (event == NFT_MSG_DELOBJ) {
+-		nlmsg_end(skb, nlh);
+-		return 0;
+-	}
+-
+ 	if (nla_put_be32(skb, NFTA_OBJ_TYPE, htonl(obj->ops->type->type)) ||
+ 	    nla_put_be32(skb, NFTA_OBJ_USE, htonl(obj->use)) ||
+ 	    nft_object_dump(skb, NFTA_OBJ_DATA, obj, reset))
+@@ -9298,11 +9278,6 @@ static int nf_tables_fill_flowtable_info(struct sk_buff *skb, struct net *net,
+ 			 NFTA_FLOWTABLE_PAD))
+ 		goto nla_put_failure;
+ 
+-	if (event == NFT_MSG_DELFLOWTABLE && !hook_list) {
+-		nlmsg_end(skb, nlh);
+-		return 0;
+-	}
+-
+ 	if (nla_put_be32(skb, NFTA_FLOWTABLE_USE, htonl(flowtable->use)) ||
+ 	    nla_put_be32(skb, NFTA_FLOWTABLE_FLAGS, htonl(flowtable->data.flags)))
+ 		goto nla_put_failure;
+diff --git a/net/netfilter/xt_nfacct.c b/net/netfilter/xt_nfacct.c
+index 7c6bf1c168131a..0ca1cdfc4095b6 100644
+--- a/net/netfilter/xt_nfacct.c
++++ b/net/netfilter/xt_nfacct.c
+@@ -38,8 +38,8 @@ nfacct_mt_checkentry(const struct xt_mtchk_param *par)
+ 
+ 	nfacct = nfnl_acct_find_get(par->net, info->name);
+ 	if (nfacct == NULL) {
+-		pr_info_ratelimited("accounting object `%s' does not exists\n",
+-				    info->name);
++		pr_info_ratelimited("accounting object `%.*s' does not exist\n",
++				    NFACCT_NAME_MAX, info->name);
+ 		return -ENOENT;
+ 	}
+ 	info->nfacct = nfacct;
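
The xt_nfacct message fix does two things: it corrects the grammar ("does not exist") and bounds the name with "%.*s", since info->name comes from userspace and need not be NUL-terminated within its buffer. A userspace demonstration of why the precision matters, with NFACCT_NAME_MAX assumed to be 32 as in the uapi header:

#include <stdio.h>

#define NFACCT_NAME_MAX 32	/* assumed uapi value */

int main(void)
{
	/* A name buffer that is NOT NUL-terminated within its bound. */
	char name[NFACCT_NAME_MAX];
	for (int i = 0; i < NFACCT_NAME_MAX; i++)
		name[i] = 'A';

	/* "%.*s" stops after NFACCT_NAME_MAX bytes even without a NUL,
	 * where a plain "%s" would read past the end of the buffer. */
	printf("`%.*s'\n", NFACCT_NAME_MAX, name);
	return 0;
}
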
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index ab1482e7c9fdb7..0184e74f8adfdf 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -4573,10 +4573,10 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ 	spin_lock(&po->bind_lock);
+ 	was_running = packet_sock_flag(po, PACKET_SOCK_RUNNING);
+ 	num = po->num;
+-	if (was_running) {
+-		WRITE_ONCE(po->num, 0);
++	WRITE_ONCE(po->num, 0);
++	if (was_running)
+ 		__unregister_prot_hook(sk, false);
+-	}
++
+ 	spin_unlock(&po->bind_lock);
+ 
+ 	synchronize_net();
+@@ -4608,10 +4608,10 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ 	mutex_unlock(&po->pg_vec_lock);
+ 
+ 	spin_lock(&po->bind_lock);
+-	if (was_running) {
+-		WRITE_ONCE(po->num, num);
++	WRITE_ONCE(po->num, num);
++	if (was_running)
+ 		register_prot_hook(sk);
+-	}
++
+ 	spin_unlock(&po->bind_lock);
+ 	if (pg_vec && (po->tp_version > TPACKET_V2)) {
+ 		/* Because we don't support block-based V3 on tx-ring */
+diff --git a/net/sched/act_ctinfo.c b/net/sched/act_ctinfo.c
+index 5b1241ddc75851..93ab3bcd6d3106 100644
+--- a/net/sched/act_ctinfo.c
++++ b/net/sched/act_ctinfo.c
+@@ -44,9 +44,9 @@ static void tcf_ctinfo_dscp_set(struct nf_conn *ct, struct tcf_ctinfo *ca,
+ 				ipv4_change_dsfield(ip_hdr(skb),
+ 						    INET_ECN_MASK,
+ 						    newdscp);
+-				ca->stats_dscp_set++;
++				atomic64_inc(&ca->stats_dscp_set);
+ 			} else {
+-				ca->stats_dscp_error++;
++				atomic64_inc(&ca->stats_dscp_error);
+ 			}
+ 		}
+ 		break;
+@@ -57,9 +57,9 @@ static void tcf_ctinfo_dscp_set(struct nf_conn *ct, struct tcf_ctinfo *ca,
+ 				ipv6_change_dsfield(ipv6_hdr(skb),
+ 						    INET_ECN_MASK,
+ 						    newdscp);
+-				ca->stats_dscp_set++;
++				atomic64_inc(&ca->stats_dscp_set);
+ 			} else {
+-				ca->stats_dscp_error++;
++				atomic64_inc(&ca->stats_dscp_error);
+ 			}
+ 		}
+ 		break;
+@@ -72,7 +72,7 @@ static void tcf_ctinfo_cpmark_set(struct nf_conn *ct, struct tcf_ctinfo *ca,
+ 				  struct tcf_ctinfo_params *cp,
+ 				  struct sk_buff *skb)
+ {
+-	ca->stats_cpmark_set++;
++	atomic64_inc(&ca->stats_cpmark_set);
+ 	skb->mark = READ_ONCE(ct->mark) & cp->cpmarkmask;
+ }
+ 
+@@ -323,15 +323,18 @@ static int tcf_ctinfo_dump(struct sk_buff *skb, struct tc_action *a,
+ 	}
+ 
+ 	if (nla_put_u64_64bit(skb, TCA_CTINFO_STATS_DSCP_SET,
+-			      ci->stats_dscp_set, TCA_CTINFO_PAD))
++			      atomic64_read(&ci->stats_dscp_set),
++			      TCA_CTINFO_PAD))
+ 		goto nla_put_failure;
+ 
+ 	if (nla_put_u64_64bit(skb, TCA_CTINFO_STATS_DSCP_ERROR,
+-			      ci->stats_dscp_error, TCA_CTINFO_PAD))
++			      atomic64_read(&ci->stats_dscp_error),
++			      TCA_CTINFO_PAD))
+ 		goto nla_put_failure;
+ 
+ 	if (nla_put_u64_64bit(skb, TCA_CTINFO_STATS_CPMARK_SET,
+-			      ci->stats_cpmark_set, TCA_CTINFO_PAD))
++			      atomic64_read(&ci->stats_cpmark_set),
++			      TCA_CTINFO_PAD))
+ 		goto nla_put_failure;
+ 
+ 	spin_unlock_bh(&ci->tcf_lock);
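
act_ctinfo's stats are bumped on the datapath without taking the action's tcf_lock, so plain u64 increments could race and be lost or torn; the hunks above convert them to atomic64_t and read them back with atomic64_read() at dump time. A userspace analogue of the pattern using C11 atomics:

#include <stdatomic.h>
#include <stdio.h>

int main(void)
{
	/* Stand-in for the kernel's atomic64_t counter: lockless,
	 * relaxed increments from many contexts, plain read at dump. */
	_Atomic long stats_dscp_set = 0;

	atomic_fetch_add_explicit(&stats_dscp_set, 1, memory_order_relaxed);
	printf("dscp_set = %ld\n",
	       atomic_load_explicit(&stats_dscp_set, memory_order_relaxed));
	return 0;
}
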
+diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
+index 51d4013b612198..f3e5ef9a959256 100644
+--- a/net/sched/sch_mqprio.c
++++ b/net/sched/sch_mqprio.c
+@@ -152,7 +152,7 @@ static int mqprio_parse_opt(struct net_device *dev, struct tc_mqprio_qopt *qopt,
+ static const struct
+ nla_policy mqprio_tc_entry_policy[TCA_MQPRIO_TC_ENTRY_MAX + 1] = {
+ 	[TCA_MQPRIO_TC_ENTRY_INDEX]	= NLA_POLICY_MAX(NLA_U32,
+-							 TC_QOPT_MAX_QUEUE),
++							 TC_QOPT_MAX_QUEUE - 1),
+ 	[TCA_MQPRIO_TC_ENTRY_FP]	= NLA_POLICY_RANGE(NLA_U32,
+ 							   TC_FP_EXPRESS,
+ 							   TC_FP_PREEMPTIBLE),
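
The mqprio policy fix is an off-by-one: TCA_MQPRIO_TC_ENTRY_INDEX indexes arrays sized TC_QOPT_MAX_QUEUE, so the largest valid value is TC_QOPT_MAX_QUEUE - 1, but NLA_POLICY_MAX() previously accepted the array size itself. A minimal illustration, taking TC_QOPT_MAX_QUEUE as 16 per the uapi headers:

#include <stdio.h>

#define TC_QOPT_MAX_QUEUE 16	/* array size; valid indices are 0..15 */

int main(void)
{
	long fp[TC_QOPT_MAX_QUEUE] = { 0 };
	unsigned int idx = TC_QOPT_MAX_QUEUE;	/* accepted by the old policy */

	if (idx <= TC_QOPT_MAX_QUEUE - 1)	/* the corrected bound */
		fp[idx] = 1;
	else
		printf("index %u rejected, would have written past fp[]\n",
		       idx);
	return 0;
}
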
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index fdd79d3ccd8ce7..eafc316ae319e3 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -973,6 +973,41 @@ static int parse_attr(struct nlattr *tb[], int maxtype, struct nlattr *nla,
+ 	return 0;
+ }
+ 
++static const struct Qdisc_class_ops netem_class_ops;
++
++static int check_netem_in_tree(struct Qdisc *sch, bool duplicates,
++			       struct netlink_ext_ack *extack)
++{
++	struct Qdisc *root, *q;
++	unsigned int i;
++
++	root = qdisc_root_sleeping(sch);
++
++	if (sch != root && root->ops->cl_ops == &netem_class_ops) {
++		if (duplicates ||
++		    ((struct netem_sched_data *)qdisc_priv(root))->duplicate)
++			goto err;
++	}
++
++	if (!qdisc_dev(root))
++		return 0;
++
++	hash_for_each(qdisc_dev(root)->qdisc_hash, i, q, hash) {
++		if (sch != q && q->ops->cl_ops == &netem_class_ops) {
++			if (duplicates ||
++			    ((struct netem_sched_data *)qdisc_priv(q))->duplicate)
++				goto err;
++		}
++	}
++
++	return 0;
++
++err:
++	NL_SET_ERR_MSG(extack,
++		       "netem: cannot mix duplicating netems with other netems in tree");
++	return -EINVAL;
++}
++
+ /* Parse netlink message to set options */
+ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ 			struct netlink_ext_ack *extack)
+@@ -1031,6 +1066,11 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ 	q->gap = qopt->gap;
+ 	q->counter = 0;
+ 	q->loss = qopt->loss;
++
++	ret = check_netem_in_tree(sch, qopt->duplicate, extack);
++	if (ret)
++		goto unlock;
++
+ 	q->duplicate = qopt->duplicate;
+ 
+ 	/* for compatibility with earlier versions.
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 2b14c81a87e5c4..85d84f39e220c7 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -43,6 +43,11 @@ static struct static_key_false taprio_have_working_mqprio;
+ #define TAPRIO_SUPPORTED_FLAGS \
+ 	(TCA_TAPRIO_ATTR_FLAG_TXTIME_ASSIST | TCA_TAPRIO_ATTR_FLAG_FULL_OFFLOAD)
+ #define TAPRIO_FLAGS_INVALID U32_MAX
++/* Minimum value for picos_per_byte to ensure non-zero duration
++ * for minimum-sized Ethernet frames (ETH_ZLEN = 60).
++ * 60 * 17 > PSEC_PER_NSEC (1000)
++ */
++#define TAPRIO_PICOS_PER_BYTE_MIN 17
+ 
+ struct sched_entry {
+ 	/* Durations between this GCL entry and the GCL entry where the
+@@ -1284,7 +1289,8 @@ static void taprio_start_sched(struct Qdisc *sch,
+ }
+ 
+ static void taprio_set_picos_per_byte(struct net_device *dev,
+-				      struct taprio_sched *q)
++				      struct taprio_sched *q,
++				      struct netlink_ext_ack *extack)
+ {
+ 	struct ethtool_link_ksettings ecmd;
+ 	int speed = SPEED_10;
+@@ -1300,6 +1306,15 @@ static void taprio_set_picos_per_byte(struct net_device *dev,
+ 
+ skip:
+ 	picos_per_byte = (USEC_PER_SEC * 8) / speed;
++	if (picos_per_byte < TAPRIO_PICOS_PER_BYTE_MIN) {
++		if (!extack)
++			pr_warn("Link speed %d is too high. Schedule may be inaccurate.\n",
++				speed);
++		NL_SET_ERR_MSG_FMT_MOD(extack,
++				       "Link speed %d is too high. Schedule may be inaccurate.",
++				       speed);
++		picos_per_byte = TAPRIO_PICOS_PER_BYTE_MIN;
++	}
+ 
+ 	atomic64_set(&q->picos_per_byte, picos_per_byte);
+ 	netdev_dbg(dev, "taprio: set %s's picos_per_byte to: %lld, linkspeed: %d\n",
+@@ -1324,7 +1339,7 @@ static int taprio_dev_notifier(struct notifier_block *nb, unsigned long event,
+ 		if (dev != qdisc_dev(q->root))
+ 			continue;
+ 
+-		taprio_set_picos_per_byte(dev, q);
++		taprio_set_picos_per_byte(dev, q, NULL);
+ 
+ 		stab = rtnl_dereference(q->root->stab);
+ 
+@@ -1848,7 +1863,7 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt,
+ 	q->flags = taprio_flags;
+ 
+ 	/* Needed for length_to_duration() during netlink attribute parsing */
+-	taprio_set_picos_per_byte(dev, q);
++	taprio_set_picos_per_byte(dev, q, extack);
+ 
+ 	err = taprio_parse_mqprio_opt(dev, mqprio, extack, q->flags);
+ 	if (err < 0)
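
TAPRIO_PICOS_PER_BYTE_MIN exists because picos_per_byte = (USEC_PER_SEC * 8) / speed, with speed in Mb/s, underflows on very fast links: below 17 ps/byte, a minimum-size Ethernet frame (ETH_ZLEN = 60 bytes) rounds to a zero-nanosecond duration, which breaks the schedule math. The arithmetic, worked as a runnable check:

#include <stdio.h>

#define USEC_PER_SEC 1000000LL
#define PSEC_PER_NSEC 1000LL
#define ETH_ZLEN 60
#define TAPRIO_PICOS_PER_BYTE_MIN 17

int main(void)
{
	long long speed = 800000;	/* 800 Gb/s link, in Mb/s */
	long long ppb = (USEC_PER_SEC * 8) / speed;	/* 10 ps/byte */

	/* Without the clamp, a minimum frame's duration rounds to 0 ns: */
	printf("raw:     %lld ps/byte -> %lld ns/frame\n",
	       ppb, ETH_ZLEN * ppb / PSEC_PER_NSEC);
	if (ppb < TAPRIO_PICOS_PER_BYTE_MIN)
		ppb = TAPRIO_PICOS_PER_BYTE_MIN;
	/* 60 * 17 = 1020 ps > 1000 ps, so at least 1 ns per frame: */
	printf("clamped: %lld ps/byte -> %lld ns/frame\n",
	       ppb, ETH_ZLEN * ppb / PSEC_PER_NSEC);
	return 0;
}
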
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 72e5a01df3d352..d6d06c64fbb25e 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -257,20 +257,47 @@ svc_tcp_sock_process_cmsg(struct socket *sock, struct msghdr *msg,
+ }
+ 
+ static int
+-svc_tcp_sock_recv_cmsg(struct svc_sock *svsk, struct msghdr *msg)
++svc_tcp_sock_recv_cmsg(struct socket *sock, unsigned int *msg_flags)
+ {
+ 	union {
+ 		struct cmsghdr	cmsg;
+ 		u8		buf[CMSG_SPACE(sizeof(u8))];
+ 	} u;
+-	struct socket *sock = svsk->sk_sock;
++	u8 alert[2];
++	struct kvec alert_kvec = {
++		.iov_base = alert,
++		.iov_len = sizeof(alert),
++	};
++	struct msghdr msg = {
++		.msg_flags = *msg_flags,
++		.msg_control = &u,
++		.msg_controllen = sizeof(u),
++	};
++	int ret;
++
++	iov_iter_kvec(&msg.msg_iter, ITER_DEST, &alert_kvec, 1,
++		      alert_kvec.iov_len);
++	ret = sock_recvmsg(sock, &msg, MSG_DONTWAIT);
++	if (ret > 0 &&
++	    tls_get_record_type(sock->sk, &u.cmsg) == TLS_RECORD_TYPE_ALERT) {
++		iov_iter_revert(&msg.msg_iter, ret);
++		ret = svc_tcp_sock_process_cmsg(sock, &msg, &u.cmsg, -EAGAIN);
++	}
++	return ret;
++}
++
++static int
++svc_tcp_sock_recvmsg(struct svc_sock *svsk, struct msghdr *msg)
++{
+ 	int ret;
++	struct socket *sock = svsk->sk_sock;
+ 
+-	msg->msg_control = &u;
+-	msg->msg_controllen = sizeof(u);
+ 	ret = sock_recvmsg(sock, msg, MSG_DONTWAIT);
+-	if (unlikely(msg->msg_controllen != sizeof(u)))
+-		ret = svc_tcp_sock_process_cmsg(sock, msg, &u.cmsg, ret);
++	if (msg->msg_flags & MSG_CTRUNC) {
++		msg->msg_flags &= ~(MSG_CTRUNC | MSG_EOR);
++		if (ret == 0 || ret == -EIO)
++			ret = svc_tcp_sock_recv_cmsg(sock, &msg->msg_flags);
++	}
+ 	return ret;
+ }
+ 
+@@ -321,7 +348,7 @@ static ssize_t svc_tcp_read_msg(struct svc_rqst *rqstp, size_t buflen,
+ 		iov_iter_advance(&msg.msg_iter, seek);
+ 		buflen -= seek;
+ 	}
+-	len = svc_tcp_sock_recv_cmsg(svsk, &msg);
++	len = svc_tcp_sock_recvmsg(svsk, &msg);
+ 	if (len > 0)
+ 		svc_flush_bvec(bvec, len, seek);
+ 
+@@ -1019,7 +1046,7 @@ static ssize_t svc_tcp_read_marker(struct svc_sock *svsk,
+ 		iov.iov_base = ((char *)&svsk->sk_marker) + svsk->sk_tcplen;
+ 		iov.iov_len  = want;
+ 		iov_iter_kvec(&msg.msg_iter, ITER_DEST, &iov, 1, want);
+-		len = svc_tcp_sock_recv_cmsg(svsk, &msg);
++		len = svc_tcp_sock_recvmsg(svsk, &msg);
+ 		if (len < 0)
+ 			return len;
+ 		svsk->sk_tcplen += len;
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 4b10ecf4c26538..9b2328b727b6d1 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -358,7 +358,7 @@ xs_alloc_sparse_pages(struct xdr_buf *buf, size_t want, gfp_t gfp)
+ 
+ static int
+ xs_sock_process_cmsg(struct socket *sock, struct msghdr *msg,
+-		     struct cmsghdr *cmsg, int ret)
++		     unsigned int *msg_flags, struct cmsghdr *cmsg, int ret)
+ {
+ 	u8 content_type = tls_get_record_type(sock->sk, cmsg);
+ 	u8 level, description;
+@@ -371,7 +371,7 @@ xs_sock_process_cmsg(struct socket *sock, struct msghdr *msg,
+ 		 * record, even though there might be more frames
+ 		 * waiting to be decrypted.
+ 		 */
+-		msg->msg_flags &= ~MSG_EOR;
++		*msg_flags &= ~MSG_EOR;
+ 		break;
+ 	case TLS_RECORD_TYPE_ALERT:
+ 		tls_alert_recv(sock->sk, msg, &level, &description);
+@@ -386,19 +386,33 @@ xs_sock_process_cmsg(struct socket *sock, struct msghdr *msg,
+ }
+ 
+ static int
+-xs_sock_recv_cmsg(struct socket *sock, struct msghdr *msg, int flags)
++xs_sock_recv_cmsg(struct socket *sock, unsigned int *msg_flags, int flags)
+ {
+ 	union {
+ 		struct cmsghdr	cmsg;
+ 		u8		buf[CMSG_SPACE(sizeof(u8))];
+ 	} u;
++	u8 alert[2];
++	struct kvec alert_kvec = {
++		.iov_base = alert,
++		.iov_len = sizeof(alert),
++	};
++	struct msghdr msg = {
++		.msg_flags = *msg_flags,
++		.msg_control = &u,
++		.msg_controllen = sizeof(u),
++	};
+ 	int ret;
+ 
+-	msg->msg_control = &u;
+-	msg->msg_controllen = sizeof(u);
+-	ret = sock_recvmsg(sock, msg, flags);
+-	if (msg->msg_controllen != sizeof(u))
+-		ret = xs_sock_process_cmsg(sock, msg, &u.cmsg, ret);
++	iov_iter_kvec(&msg.msg_iter, ITER_DEST, &alert_kvec, 1,
++		      alert_kvec.iov_len);
++	ret = sock_recvmsg(sock, &msg, flags);
++	if (ret > 0 &&
++	    tls_get_record_type(sock->sk, &u.cmsg) == TLS_RECORD_TYPE_ALERT) {
++		iov_iter_revert(&msg.msg_iter, ret);
++		ret = xs_sock_process_cmsg(sock, &msg, msg_flags, &u.cmsg,
++					   -EAGAIN);
++	}
+ 	return ret;
+ }
+ 
+@@ -408,7 +422,13 @@ xs_sock_recvmsg(struct socket *sock, struct msghdr *msg, int flags, size_t seek)
+ 	ssize_t ret;
+ 	if (seek != 0)
+ 		iov_iter_advance(&msg->msg_iter, seek);
+-	ret = xs_sock_recv_cmsg(sock, msg, flags);
++	ret = sock_recvmsg(sock, msg, flags);
++	/* Handle TLS inband control message lazily */
++	if (msg->msg_flags & MSG_CTRUNC) {
++		msg->msg_flags &= ~(MSG_CTRUNC | MSG_EOR);
++		if (ret == 0 || ret == -EIO)
++			ret = xs_sock_recv_cmsg(sock, &msg->msg_flags, flags);
++	}
+ 	return ret > 0 ? ret + seek : ret;
+ }
+ 
+@@ -434,7 +454,7 @@ xs_read_discard(struct socket *sock, struct msghdr *msg, int flags,
+ 		size_t count)
+ {
+ 	iov_iter_discard(&msg->msg_iter, ITER_DEST, count);
+-	return xs_sock_recv_cmsg(sock, msg, flags);
++	return xs_sock_recvmsg(sock, msg, flags, 0);
+ }
+ 
+ #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index fc88e34b7f33fe..549d1ea01a72a7 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -872,6 +872,19 @@ static int bpf_exec_tx_verdict(struct sk_msg *msg, struct sock *sk,
+ 		delta = msg->sg.size;
+ 		psock->eval = sk_psock_msg_verdict(sk, psock, msg);
+ 		delta -= msg->sg.size;
++
++		if ((s32)delta > 0) {
++			/* It indicates that we executed bpf_msg_pop_data(),
++			 * causing the plaintext data size to decrease.
++			 * Therefore the encrypted data size also needs to
++			 * correspondingly decrease. We only need to subtract
++			 * delta to calculate the new ciphertext length since
++			 * ktls does not support block encryption.
++			 */
++			struct sk_msg *enc = &ctx->open_rec->msg_encrypted;
++
++			sk_msg_trim(sk, enc, enc->sg.size - delta);
++		}
+ 	}
+ 	if (msg->cork_bytes && msg->cork_bytes > msg->sg.size &&
+ 	    !enospc && !full_record) {
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index c50184eddb4455..78efde2442d423 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -689,7 +689,8 @@ static int __vsock_bind_connectible(struct vsock_sock *vsk,
+ 		unsigned int i;
+ 
+ 		for (i = 0; i < MAX_PORT_RETRIES; i++) {
+-			if (port <= LAST_RESERVED_PORT)
++			if (port == VMADDR_PORT_ANY ||
++			    port <= LAST_RESERVED_PORT)
+ 				port = LAST_RESERVED_PORT + 1;
+ 
+ 			new_addr.svm_port = port++;
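
The vsock auto-bind loop previously only pushed ports at or below LAST_RESERVED_PORT up to 1024, so if the rolling counter ever reached VMADDR_PORT_ANY (U32_MAX) the sentinel itself could be handed out as a real port. A sketch of the corrected check, with the uapi constants written out:

#include <stdio.h>
#include <stdint.h>

#define VMADDR_PORT_ANY    ((uint32_t)-1)	/* "pick any port" sentinel */
#define LAST_RESERVED_PORT 1023

int main(void)
{
	uint32_t port = VMADDR_PORT_ANY;

	/* Old check: 0xffffffff is not <= 1023, so the sentinel slipped
	 * through; the fix tests for it explicitly. */
	if (port == VMADDR_PORT_ANY || port <= LAST_RESERVED_PORT)
		port = LAST_RESERVED_PORT + 1;
	printf("first auto-bind port: %u\n", port);	/* 1024 */
	return 0;
}
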
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 0c7e8389bc49e8..5b348aefd77db4 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -16892,6 +16892,7 @@ static int nl80211_set_sar_specs(struct sk_buff *skb, struct genl_info *info)
+ 	if (!sar_spec)
+ 		return -ENOMEM;
+ 
++	sar_spec->num_sub_specs = specs;
+ 	sar_spec->type = type;
+ 	specs = 0;
+ 	nla_for_each_nested(spec_list, tb[NL80211_SAR_ATTR_SPECS], rem) {
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index c1752b31734faa..92e04370fa63a8 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -4229,6 +4229,8 @@ static void cfg80211_check_and_end_cac(struct cfg80211_registered_device *rdev)
+ 	struct wireless_dev *wdev;
+ 	unsigned int link_id;
+ 
++	guard(wiphy)(&rdev->wiphy);
++
+ 	/* If we finished CAC or received radar, we should end any
+ 	 * CAC running on the same channels.
+ 	 * the check !cfg80211_chandef_dfs_usable contain 2 options:
+diff --git a/rust/kernel/miscdevice.rs b/rust/kernel/miscdevice.rs
+index 15d10e5c1db7da..188ae10d33196e 100644
+--- a/rust/kernel/miscdevice.rs
++++ b/rust/kernel/miscdevice.rs
+@@ -44,7 +44,13 @@ pub const fn into_raw<T: MiscDevice>(self) -> bindings::miscdevice {
+ ///
+ /// # Invariants
+ ///
+-/// `inner` is a registered misc device.
++/// - `inner` contains a `struct miscdevice` that is registered using
++///   `misc_register()`.
++/// - This registration remains valid for the entire lifetime of the
++///   [`MiscDeviceRegistration`] instance.
++/// - Deregistration occurs exactly once in [`Drop`] via `misc_deregister()`.
++/// - `inner` wraps a valid, pinned `miscdevice` created using
++///   [`MiscDeviceOptions::into_raw`].
+ #[repr(transparent)]
+ #[pin_data(PinnedDrop)]
+ pub struct MiscDeviceRegistration<T> {
+diff --git a/samples/mei/mei-amt-version.c b/samples/mei/mei-amt-version.c
+index 867debd3b9124c..1d7254bcb44cb7 100644
+--- a/samples/mei/mei-amt-version.c
++++ b/samples/mei/mei-amt-version.c
+@@ -69,11 +69,11 @@
+ #include <string.h>
+ #include <fcntl.h>
+ #include <sys/ioctl.h>
++#include <sys/time.h>
+ #include <unistd.h>
+ #include <errno.h>
+ #include <stdint.h>
+ #include <stdbool.h>
+-#include <bits/wordsize.h>
+ #include <linux/mei.h>
+ 
+ /*****************************************************************************
+diff --git a/scripts/gdb/linux/constants.py.in b/scripts/gdb/linux/constants.py.in
+index f795302ddfa8b3..c3886739a02891 100644
+--- a/scripts/gdb/linux/constants.py.in
++++ b/scripts/gdb/linux/constants.py.in
+@@ -74,12 +74,12 @@ if IS_BUILTIN(CONFIG_MODULES):
+     LX_GDBPARSED(MOD_RO_AFTER_INIT)
+ 
+ /* linux/mount.h */
+-LX_VALUE(MNT_NOSUID)
+-LX_VALUE(MNT_NODEV)
+-LX_VALUE(MNT_NOEXEC)
+-LX_VALUE(MNT_NOATIME)
+-LX_VALUE(MNT_NODIRATIME)
+-LX_VALUE(MNT_RELATIME)
++LX_GDBPARSED(MNT_NOSUID)
++LX_GDBPARSED(MNT_NODEV)
++LX_GDBPARSED(MNT_NOEXEC)
++LX_GDBPARSED(MNT_NOATIME)
++LX_GDBPARSED(MNT_NODIRATIME)
++LX_GDBPARSED(MNT_RELATIME)
+ 
+ /* linux/threads.h */
+ LX_VALUE(NR_CPUS)
+diff --git a/scripts/kconfig/qconf.cc b/scripts/kconfig/qconf.cc
+index eaa465b0ccf9c4..49607555d343bb 100644
+--- a/scripts/kconfig/qconf.cc
++++ b/scripts/kconfig/qconf.cc
+@@ -478,7 +478,7 @@ void ConfigList::updateListAllForAll()
+ 	while (it.hasNext()) {
+ 		ConfigList *list = it.next();
+ 
+-		list->updateList();
++		list->updateListAll();
+ 	}
+ }
+ 
+diff --git a/security/apparmor/include/match.h b/security/apparmor/include/match.h
+index 536ce3abd5986a..27cf23b0396bc8 100644
+--- a/security/apparmor/include/match.h
++++ b/security/apparmor/include/match.h
+@@ -137,17 +137,15 @@ aa_state_t aa_dfa_matchn_until(struct aa_dfa *dfa, aa_state_t start,
+ 
+ void aa_dfa_free_kref(struct kref *kref);
+ 
+-#define WB_HISTORY_SIZE 24
++/* This needs to be a power of 2 */
++#define WB_HISTORY_SIZE 32
+ struct match_workbuf {
+-	unsigned int count;
+ 	unsigned int pos;
+ 	unsigned int len;
+-	unsigned int size;	/* power of 2, same as history size */
+-	unsigned int history[WB_HISTORY_SIZE];
++	aa_state_t history[WB_HISTORY_SIZE];
+ };
+ #define DEFINE_MATCH_WB(N)		\
+ struct match_workbuf N = {		\
+-	.count = 0,			\
+ 	.pos = 0,			\
+ 	.len = 0,			\
+ }
+diff --git a/security/apparmor/match.c b/security/apparmor/match.c
+index f2d9c57f879439..c5a91600842a16 100644
+--- a/security/apparmor/match.c
++++ b/security/apparmor/match.c
+@@ -679,34 +679,35 @@ aa_state_t aa_dfa_matchn_until(struct aa_dfa *dfa, aa_state_t start,
+ 	return state;
+ }
+ 
+-#define inc_wb_pos(wb)						\
+-do {								\
++#define inc_wb_pos(wb)							\
++do {									\
++	BUILD_BUG_ON_NOT_POWER_OF_2(WB_HISTORY_SIZE);			\
+ 	wb->pos = (wb->pos + 1) & (WB_HISTORY_SIZE - 1);		\
+-	wb->len = (wb->len + 1) & (WB_HISTORY_SIZE - 1);		\
++	wb->len = (wb->len + 1) > WB_HISTORY_SIZE ? WB_HISTORY_SIZE :	\
++				wb->len + 1;				\
+ } while (0)
+ 
+ /* For DFAs that don't support extended tagging of states */
++/* adjust is only set if is_loop returns true */
+ static bool is_loop(struct match_workbuf *wb, aa_state_t state,
+ 		    unsigned int *adjust)
+ {
+-	aa_state_t pos = wb->pos;
+-	aa_state_t i;
++	int pos = wb->pos;
++	int i;
+ 
+ 	if (wb->history[pos] < state)
+ 		return false;
+ 
+-	for (i = 0; i <= wb->len; i++) {
++	for (i = 0; i < wb->len; i++) {
+ 		if (wb->history[pos] == state) {
+ 			*adjust = i;
+ 			return true;
+ 		}
+-		if (pos == 0)
+-			pos = WB_HISTORY_SIZE;
+-		pos--;
++		/* -1 wraps to WB_HISTORY_SIZE - 1 */
++		pos = (pos - 1) & (WB_HISTORY_SIZE - 1);
+ 	}
+ 
+-	*adjust = i;
+-	return true;
++	return false;
+ }
+ 
+ static aa_state_t leftmatch_fb(struct aa_dfa *dfa, aa_state_t start,
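
The AppArmor change above turns the match history into a true power-of-two ring: inc_wb_pos saturates len instead of masking it, and is_loop walks backwards with a single mask and no longer reports a false loop when the state is absent. A self-contained userspace model of the same ring (field names kept, size as in the patch; unsigned arithmetic makes the -1 wrap well defined):

#include <stdio.h>

#define WB_HISTORY_SIZE 32		/* must be a power of 2 */

struct wb {
	unsigned int pos, len;
	unsigned int history[WB_HISTORY_SIZE];
};

static void push(struct wb *wb, unsigned int state)
{
	wb->history[wb->pos] = state;
	wb->pos = (wb->pos + 1) & (WB_HISTORY_SIZE - 1);
	wb->len = wb->len + 1 > WB_HISTORY_SIZE ? WB_HISTORY_SIZE : wb->len + 1;
}

static int find_back(const struct wb *wb, unsigned int state)
{
	unsigned int pos = wb->pos;

	for (unsigned int i = 0; i < wb->len; i++) {
		/* -1 wraps to WB_HISTORY_SIZE - 1, as in is_loop() */
		pos = (pos - 1) & (WB_HISTORY_SIZE - 1);
		if (wb->history[pos] == state)
			return (int)i;
	}
	return -1;			/* absent: no false loop reported */
}

int main(void)
{
	struct wb wb = { 0 };

	for (unsigned int s = 1; s <= 40; s++)
		push(&wb, s);
	printf("distance to 38: %d\n", find_back(&wb, 38));
	return 0;
}
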
+diff --git a/security/apparmor/policy_unpack_test.c b/security/apparmor/policy_unpack_test.c
+index 5b2ba88ae9e24b..cf18744dafe264 100644
+--- a/security/apparmor/policy_unpack_test.c
++++ b/security/apparmor/policy_unpack_test.c
+@@ -9,6 +9,8 @@
+ #include "include/policy.h"
+ #include "include/policy_unpack.h"
+ 
++#include <linux/unaligned.h>
++
+ #define TEST_STRING_NAME "TEST_STRING"
+ #define TEST_STRING_DATA "testing"
+ #define TEST_STRING_BUF_OFFSET \
+@@ -80,7 +82,7 @@ static struct aa_ext *build_aa_ext_struct(struct policy_unpack_fixture *puf,
+ 	*(buf + 1) = strlen(TEST_U32_NAME) + 1;
+ 	strscpy(buf + 3, TEST_U32_NAME, e->end - (void *)(buf + 3));
+ 	*(buf + 3 + strlen(TEST_U32_NAME) + 1) = AA_U32;
+-	*((__le32 *)(buf + 3 + strlen(TEST_U32_NAME) + 2)) = cpu_to_le32(TEST_U32_DATA);
++	put_unaligned_le32(TEST_U32_DATA, buf + 3 + strlen(TEST_U32_NAME) + 2);
+ 
+ 	buf = e->start + TEST_NAMED_U64_BUF_OFFSET;
+ 	*buf = AA_NAME;
+@@ -103,7 +105,7 @@ static struct aa_ext *build_aa_ext_struct(struct policy_unpack_fixture *puf,
+ 	*(buf + 1) = strlen(TEST_ARRAY_NAME) + 1;
+ 	strscpy(buf + 3, TEST_ARRAY_NAME, e->end - (void *)(buf + 3));
+ 	*(buf + 3 + strlen(TEST_ARRAY_NAME) + 1) = AA_ARRAY;
+-	*((__le16 *)(buf + 3 + strlen(TEST_ARRAY_NAME) + 2)) = cpu_to_le16(TEST_ARRAY_SIZE);
++	put_unaligned_le16(TEST_ARRAY_SIZE, buf + 3 + strlen(TEST_ARRAY_NAME) + 2);
+ 
+ 	return e;
+ }
+diff --git a/security/landlock/id.c b/security/landlock/id.c
+index 56f7cc0fc7440f..838c3ed7bb822e 100644
+--- a/security/landlock/id.c
++++ b/security/landlock/id.c
+@@ -119,6 +119,12 @@ static u64 get_id_range(size_t number_of_ids, atomic64_t *const counter,
+ 
+ #ifdef CONFIG_SECURITY_LANDLOCK_KUNIT_TEST
+ 
++static u8 get_random_u8_positive(void)
++{
++	/* max() evaluates its arguments once. */
++	return max(1, get_random_u8());
++}
++
+ static void test_range1_rand0(struct kunit *const test)
+ {
+ 	atomic64_t counter;
+@@ -127,9 +133,10 @@ static void test_range1_rand0(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(1, &counter, 0), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 1);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 1);
+ }
+ 
+ static void test_range1_rand1(struct kunit *const test)
+@@ -140,9 +147,10 @@ static void test_range1_rand1(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(1, &counter, 1), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 2);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 2);
+ }
+ 
+ static void test_range1_rand15(struct kunit *const test)
+@@ -153,9 +161,10 @@ static void test_range1_rand15(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(1, &counter, 15), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 16);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 16);
+ }
+ 
+ static void test_range1_rand16(struct kunit *const test)
+@@ -166,9 +175,10 @@ static void test_range1_rand16(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(1, &counter, 16), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 1);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 1);
+ }
+ 
+ static void test_range2_rand0(struct kunit *const test)
+@@ -179,9 +189,10 @@ static void test_range2_rand0(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(2, &counter, 0), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 2);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 2);
+ }
+ 
+ static void test_range2_rand1(struct kunit *const test)
+@@ -192,9 +203,10 @@ static void test_range2_rand1(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(2, &counter, 1), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 3);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 3);
+ }
+ 
+ static void test_range2_rand2(struct kunit *const test)
+@@ -205,9 +217,10 @@ static void test_range2_rand2(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(2, &counter, 2), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 4);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 4);
+ }
+ 
+ static void test_range2_rand15(struct kunit *const test)
+@@ -218,9 +231,10 @@ static void test_range2_rand15(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(2, &counter, 15), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 17);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 17);
+ }
+ 
+ static void test_range2_rand16(struct kunit *const test)
+@@ -231,9 +245,10 @@ static void test_range2_rand16(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(2, &counter, 16), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 2);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 2);
+ }
+ 
+ #endif /* CONFIG_SECURITY_LANDLOCK_KUNIT_TEST */
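
get_random_u8_positive() above clamps a random byte to [1, 255] so the tests never request a zero-sized ID range; the in-tree comment matters because the kernel's max() evaluates each argument exactly once. A userspace sketch of the same clamp, reading the random byte once explicitly since a naive MAX macro would evaluate it twice:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint8_t get_random_u8(void) { return (uint8_t)(rand() & 0xff); }

static uint8_t get_random_u8_positive(void)
{
	uint8_t r = get_random_u8();	/* read once, then clamp */

	return r ? r : 1;
}

int main(void)
{
	for (int i = 0; i < 5; i++)
		printf("%u\n", get_random_u8_positive());
	return 0;
}
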
+diff --git a/sound/pci/hda/cs35l56_hda.c b/sound/pci/hda/cs35l56_hda.c
+index 235d22049aa9fa..c9c8ec8d2474d8 100644
+--- a/sound/pci/hda/cs35l56_hda.c
++++ b/sound/pci/hda/cs35l56_hda.c
+@@ -874,6 +874,52 @@ static int cs35l56_hda_system_resume(struct device *dev)
+ 	return 0;
+ }
+ 
++static int cs35l56_hda_fixup_yoga9(struct cs35l56_hda *cs35l56, int *bus_addr)
++{
++	/* The cirrus,dev-index property has the wrong values */
++	switch (*bus_addr) {
++	case 0x30:
++		cs35l56->index = 1;
++		return 0;
++	case 0x31:
++		cs35l56->index = 0;
++		return 0;
++	default:
++		/* There is a pseudo-address for broadcast to both amps - ignore it */
++		dev_dbg(cs35l56->base.dev, "Ignoring I2C address %#x\n", *bus_addr);
++		return 0;
++	}
++}
++
++static const struct {
++	const char *sub;
++	int (*fixup_fn)(struct cs35l56_hda *cs35l56, int *bus_addr);
++} cs35l56_hda_fixups[] = {
++	{
++		.sub = "17AA390B", /* Lenovo Yoga Book 9i GenX */
++		.fixup_fn = cs35l56_hda_fixup_yoga9,
++	},
++};
++
++static int cs35l56_hda_apply_platform_fixups(struct cs35l56_hda *cs35l56, const char *sub,
++					     int *bus_addr)
++{
++	int i;
++
++	if (IS_ERR(sub))
++		return 0;
++
++	for (i = 0; i < ARRAY_SIZE(cs35l56_hda_fixups); i++) {
++		if (strcasecmp(cs35l56_hda_fixups[i].sub, sub) == 0) {
++			dev_dbg(cs35l56->base.dev, "Applying fixup for %s\n",
++				cs35l56_hda_fixups[i].sub);
++			return (cs35l56_hda_fixups[i].fixup_fn)(cs35l56, bus_addr);
++		}
++	}
++
++	return 0;
++}
++
+ static int cs35l56_hda_read_acpi(struct cs35l56_hda *cs35l56, int hid, int id)
+ {
+ 	u32 values[HDA_MAX_COMPONENTS];
+@@ -898,39 +944,47 @@ static int cs35l56_hda_read_acpi(struct cs35l56_hda *cs35l56, int hid, int id)
+ 		ACPI_COMPANION_SET(cs35l56->base.dev, adev);
+ 	}
+ 
+-	property = "cirrus,dev-index";
+-	ret = device_property_count_u32(cs35l56->base.dev, property);
+-	if (ret <= 0)
+-		goto err;
+-
+-	if (ret > ARRAY_SIZE(values)) {
+-		ret = -EINVAL;
+-		goto err;
+-	}
+-	nval = ret;
++	/* Initialize things that could be overwritten by a fixup */
++	cs35l56->index = -1;
+ 
+-	ret = device_property_read_u32_array(cs35l56->base.dev, property, values, nval);
++	sub = acpi_get_subsystem_id(ACPI_HANDLE(cs35l56->base.dev));
++	ret = cs35l56_hda_apply_platform_fixups(cs35l56, sub, &id);
+ 	if (ret)
+-		goto err;
++		return ret;
+ 
+-	cs35l56->index = -1;
+-	for (i = 0; i < nval; i++) {
+-		if (values[i] == id) {
+-			cs35l56->index = i;
+-			break;
+-		}
+-	}
+-	/*
+-	 * It's not an error for the ID to be missing: for I2C there can be
+-	 * an alias address that is not a real device. So reject silently.
+-	 */
+ 	if (cs35l56->index == -1) {
+-		dev_dbg(cs35l56->base.dev, "No index found in %s\n", property);
+-		ret = -ENODEV;
+-		goto err;
+-	}
++		property = "cirrus,dev-index";
++		ret = device_property_count_u32(cs35l56->base.dev, property);
++		if (ret <= 0)
++			goto err;
+ 
+-	sub = acpi_get_subsystem_id(ACPI_HANDLE(cs35l56->base.dev));
++		if (ret > ARRAY_SIZE(values)) {
++			ret = -EINVAL;
++			goto err;
++		}
++		nval = ret;
++
++		ret = device_property_read_u32_array(cs35l56->base.dev, property, values, nval);
++		if (ret)
++			goto err;
++
++		for (i = 0; i < nval; i++) {
++			if (values[i] == id) {
++				cs35l56->index = i;
++				break;
++			}
++		}
++
++		/*
++		 * It's not an error for the ID to be missing: for I2C there can be
++		 * an alias address that is not a real device. So reject silently.
++		 */
++		if (cs35l56->index == -1) {
++			dev_dbg(cs35l56->base.dev, "No index found in %s\n", property);
++			ret = -ENODEV;
++			goto err;
++		}
++	}
+ 
+ 	if (IS_ERR(sub)) {
+ 		dev_info(cs35l56->base.dev,
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index d40197fb5fbd58..77432e06f3e32c 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -4802,7 +4802,8 @@ static int ca0132_alt_select_out(struct hda_codec *codec)
+ 	if (err < 0)
+ 		goto exit;
+ 
+-	if (ca0132_alt_select_out_quirk_set(codec) < 0)
++	err = ca0132_alt_select_out_quirk_set(codec);
++	if (err < 0)
+ 		goto exit;
+ 
+ 	switch (spec->cur_out_type) {
+@@ -4892,6 +4893,8 @@ static int ca0132_alt_select_out(struct hda_codec *codec)
+ 				spec->bass_redirection_val);
+ 	else
+ 		err = ca0132_alt_surround_set_bass_redirection(codec, 0);
++	if (err < 0)
++		goto exit;
+ 
+ 	/* Unmute DSP now that we're done with output selection. */
+ 	err = dspio_set_uint_param(codec, 0x96,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 3c93d213571777..271e335610e0e9 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7497,6 +7497,9 @@ static void alc287_fixup_yoga9_14iap7_bass_spk_pin(struct hda_codec *codec,
+ 	};
+ 	struct alc_spec *spec = codec->spec;
+ 
++	/* Support Audio mute LED and Mic mute LED on keyboard */
++	hda_fixup_ideapad_acpi(codec, fix, action);
++
+ 	switch (action) {
+ 	case HDA_FIXUP_ACT_PRE_PROBE:
+ 		snd_hda_apply_pincfgs(codec, pincfgs);
+@@ -10738,6 +10741,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8a0f, "HP Pavilion 14-ec1xxx", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8a20, "HP Laptop 15s-fq5xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x8a25, "HP Victus 16-d1xxx (MB 8A25)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
++	SND_PCI_QUIRK(0x103c, 0x8a26, "HP Victus 16-d1xxx (MB 8A26)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8a28, "HP Envy 13", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8a29, "HP Envy 15", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8a2a, "HP Envy 15", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10796,6 +10800,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8bbe, "HP Victus 16-r0xxx (MB 8BBE)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8bc8, "HP Victus 15-fa1xxx", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8bcd, "HP Omen 16-xd0xxx", ALC245_FIXUP_HP_MUTE_LED_V1_COEFBIT),
++	SND_PCI_QUIRK(0x103c, 0x8bd4, "HP Victus 16-s0xxx (MB 8BD4)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8bdd, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8bde, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8bdf, "HP Envy 15", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10848,6 +10853,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8c91, "HP EliteBook 660", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8c96, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8c97, "HP ZBook", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++	SND_PCI_QUIRK(0x103c, 0x8c99, "HP Victus 16-r1xxx (MB 8C99)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8c9c, "HP Victus 16-s1xxx (MB 8C9C)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8ca1, "HP ZBook Power", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8ca2, "HP ZBook Power", ALC236_FIXUP_HP_GPIO_LED),
+diff --git a/sound/soc/amd/acp/acp-pci.c b/sound/soc/amd/acp/acp-pci.c
+index 0b2aa33cc426f9..2591b1a1c5e002 100644
+--- a/sound/soc/amd/acp/acp-pci.c
++++ b/sound/soc/amd/acp/acp-pci.c
+@@ -137,26 +137,26 @@ static int acp_pci_probe(struct pci_dev *pci, const struct pci_device_id *pci_id
+ 		chip->name = "acp_asoc_renoir";
+ 		chip->rsrc = &rn_rsrc;
+ 		chip->acp_hw_ops_init = acp31_hw_ops_init;
+-		chip->machines = &snd_soc_acpi_amd_acp_machines;
++		chip->machines = snd_soc_acpi_amd_acp_machines;
+ 		break;
+ 	case 0x6f:
+ 		chip->name = "acp_asoc_rembrandt";
+ 		chip->rsrc = &rmb_rsrc;
+ 		chip->acp_hw_ops_init = acp6x_hw_ops_init;
+-		chip->machines = &snd_soc_acpi_amd_rmb_acp_machines;
++		chip->machines = snd_soc_acpi_amd_rmb_acp_machines;
+ 		break;
+ 	case 0x63:
+ 		chip->name = "acp_asoc_acp63";
+ 		chip->rsrc = &acp63_rsrc;
+ 		chip->acp_hw_ops_init = acp63_hw_ops_init;
+-		chip->machines = &snd_soc_acpi_amd_acp63_acp_machines;
++		chip->machines = snd_soc_acpi_amd_acp63_acp_machines;
+ 		break;
+ 	case 0x70:
+ 	case 0x71:
+ 		chip->name = "acp_asoc_acp70";
+ 		chip->rsrc = &acp70_rsrc;
+ 		chip->acp_hw_ops_init = acp70_hw_ops_init;
+-		chip->machines = &snd_soc_acpi_amd_acp70_acp_machines;
++		chip->machines = snd_soc_acpi_amd_acp70_acp_machines;
+ 		break;
+ 	default:
+ 		dev_err(dev, "Unsupported device revision:0x%x\n", pci->revision);
+diff --git a/sound/soc/amd/acp/amd-acpi-mach.c b/sound/soc/amd/acp/amd-acpi-mach.c
+index d95047d2ee945e..27da2a862f1c22 100644
+--- a/sound/soc/amd/acp/amd-acpi-mach.c
++++ b/sound/soc/amd/acp/amd-acpi-mach.c
+@@ -8,12 +8,12 @@
+ 
+ #include <sound/soc-acpi.h>
+ 
+-struct snd_soc_acpi_codecs amp_rt1019 = {
++static struct snd_soc_acpi_codecs amp_rt1019 = {
+ 	.num_codecs = 1,
+ 	.codecs = {"10EC1019"}
+ };
+ 
+-struct snd_soc_acpi_codecs amp_max = {
++static struct snd_soc_acpi_codecs amp_max = {
+ 	.num_codecs = 1,
+ 	.codecs = {"MX98360A"}
+ };
+diff --git a/sound/soc/amd/acp/amd.h b/sound/soc/amd/acp/amd.h
+index 863e74fcee437e..cb8d97122f95c7 100644
+--- a/sound/soc/amd/acp/amd.h
++++ b/sound/soc/amd/acp/amd.h
+@@ -243,10 +243,10 @@ extern struct acp_resource rmb_rsrc;
+ extern struct acp_resource acp63_rsrc;
+ extern struct acp_resource acp70_rsrc;
+ 
+-extern struct snd_soc_acpi_mach snd_soc_acpi_amd_acp_machines;
+-extern struct snd_soc_acpi_mach snd_soc_acpi_amd_rmb_acp_machines;
+-extern struct snd_soc_acpi_mach snd_soc_acpi_amd_acp63_acp_machines;
+-extern struct snd_soc_acpi_mach snd_soc_acpi_amd_acp70_acp_machines;
++extern struct snd_soc_acpi_mach snd_soc_acpi_amd_acp_machines[];
++extern struct snd_soc_acpi_mach snd_soc_acpi_amd_rmb_acp_machines[];
++extern struct snd_soc_acpi_mach snd_soc_acpi_amd_acp63_acp_machines[];
++extern struct snd_soc_acpi_mach snd_soc_acpi_amd_acp70_acp_machines[];
+ 
+ extern const struct snd_soc_dai_ops asoc_acp_cpu_dai_ops;
+ extern const struct snd_soc_dai_ops acp_dmic_dai_ops;
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 1689b6b22598e2..e362c2865ec131 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -409,6 +409,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "M6500RC"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "M6501RM"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+@@ -528,6 +535,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "OMEN by HP Gaming Laptop 16z-n000"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "HP"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Victus by HP Gaming Laptop 15-fb1xxx"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+@@ -577,6 +591,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "8A7F"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "HP"),
++			DMI_MATCH(DMI_BOARD_NAME, "8A81"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+diff --git a/sound/soc/fsl/fsl_xcvr.c b/sound/soc/fsl/fsl_xcvr.c
+index 83aea341c1b609..f877dcb2570aa6 100644
+--- a/sound/soc/fsl/fsl_xcvr.c
++++ b/sound/soc/fsl/fsl_xcvr.c
+@@ -1395,7 +1395,7 @@ static irqreturn_t irq0_isr(int irq, void *devid)
+ 	if (isr & FSL_XCVR_IRQ_NEW_CS) {
+ 		dev_dbg(dev, "Received new CS block\n");
+ 		isr_clr |= FSL_XCVR_IRQ_NEW_CS;
+-		if (!xcvr->soc_data->spdif_only) {
++		if (xcvr->soc_data->fw_name) {
+ 			/* Data RAM is 4KiB, last two pages: 8 and 9. Select page 8. */
+ 			regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_CTRL,
+ 					   FSL_XCVR_EXT_CTRL_PAGE_MASK,
+@@ -1423,6 +1423,26 @@ static irqreturn_t irq0_isr(int irq, void *devid)
+ 				/* clear CS control register */
+ 				memset_io(reg_ctrl, 0, sizeof(val));
+ 			}
++		} else {
++			regmap_read(xcvr->regmap, FSL_XCVR_RX_CS_DATA_0,
++				    (u32 *)&xcvr->rx_iec958.status[0]);
++			regmap_read(xcvr->regmap, FSL_XCVR_RX_CS_DATA_1,
++				    (u32 *)&xcvr->rx_iec958.status[4]);
++			regmap_read(xcvr->regmap, FSL_XCVR_RX_CS_DATA_2,
++				    (u32 *)&xcvr->rx_iec958.status[8]);
++			regmap_read(xcvr->regmap, FSL_XCVR_RX_CS_DATA_3,
++				    (u32 *)&xcvr->rx_iec958.status[12]);
++			regmap_read(xcvr->regmap, FSL_XCVR_RX_CS_DATA_4,
++				    (u32 *)&xcvr->rx_iec958.status[16]);
++			regmap_read(xcvr->regmap, FSL_XCVR_RX_CS_DATA_5,
++				    (u32 *)&xcvr->rx_iec958.status[20]);
++			for (i = 0; i < 6; i++) {
++				val = *(u32 *)(xcvr->rx_iec958.status + i * 4);
++				*(u32 *)(xcvr->rx_iec958.status + i * 4) =
++					bitrev32(val);
++			}
++			regmap_set_bits(xcvr->regmap, FSL_XCVR_RX_DPTH_CTRL,
++					FSL_XCVR_RX_DPTH_CTRL_CSA);
+ 		}
+ 	}
+ 	if (isr & FSL_XCVR_IRQ_NEW_UD) {
+@@ -1497,6 +1517,7 @@ static const struct fsl_xcvr_soc_data fsl_xcvr_imx93_data = {
+ };
+ 
+ static const struct fsl_xcvr_soc_data fsl_xcvr_imx95_data = {
++	.fw_name = "imx/xcvr/xcvr-imx95.bin",
+ 	.spdif_only = true,
+ 	.use_phy = true,
+ 	.use_edma = true,
+@@ -1786,7 +1807,7 @@ static int fsl_xcvr_runtime_resume(struct device *dev)
+ 		}
+ 	}
+ 
+-	if (xcvr->mode == FSL_XCVR_MODE_EARC) {
++	if (xcvr->soc_data->fw_name) {
+ 		ret = fsl_xcvr_load_firmware(xcvr);
+ 		if (ret) {
+ 			dev_err(dev, "failed to load firmware.\n");
+diff --git a/sound/soc/intel/boards/Kconfig b/sound/soc/intel/boards/Kconfig
+index 4db7931ba561eb..4a932423293621 100644
+--- a/sound/soc/intel/boards/Kconfig
++++ b/sound/soc/intel/boards/Kconfig
+@@ -11,7 +11,7 @@ menuconfig SND_SOC_INTEL_MACH
+ 	 kernel: saying N will just cause the configurator to skip all
+ 	 the questions about Intel ASoC machine drivers.
+ 
+-if SND_SOC_INTEL_MACH
++if SND_SOC_INTEL_MACH && (SND_SOC_SOF_INTEL_COMMON || !SND_SOC_SOF_INTEL_COMMON)
+ 
+ config SND_SOC_INTEL_USER_FRIENDLY_LONG_NAMES
+ 	bool "Use more user friendly long card names"
+diff --git a/sound/soc/mediatek/common/mtk-afe-platform-driver.c b/sound/soc/mediatek/common/mtk-afe-platform-driver.c
+index 6b633058394140..70fd05d5ff486c 100644
+--- a/sound/soc/mediatek/common/mtk-afe-platform-driver.c
++++ b/sound/soc/mediatek/common/mtk-afe-platform-driver.c
+@@ -120,7 +120,9 @@ int mtk_afe_pcm_new(struct snd_soc_component *component,
+ 	struct mtk_base_afe *afe = snd_soc_component_get_drvdata(component);
+ 
+ 	size = afe->mtk_afe_hardware->buffer_bytes_max;
+-	snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV, afe->dev, 0, size);
++	snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV, afe->dev,
++				       afe->preallocate_buffers ? size : 0,
++				       size);
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/mediatek/common/mtk-base-afe.h b/sound/soc/mediatek/common/mtk-base-afe.h
+index f51578b6c50a35..a406f2e3e7a878 100644
+--- a/sound/soc/mediatek/common/mtk-base-afe.h
++++ b/sound/soc/mediatek/common/mtk-base-afe.h
+@@ -117,6 +117,7 @@ struct mtk_base_afe {
+ 	struct mtk_base_afe_irq *irqs;
+ 	int irqs_size;
+ 	int memif_32bit_supported;
++	bool preallocate_buffers;
+ 
+ 	struct list_head sub_dais;
+ 	struct snd_soc_dai_driver *dai_drivers;
+diff --git a/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c b/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
+index 04ed0cfec1741e..f93d6348fdf89a 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
++++ b/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
+@@ -13,6 +13,7 @@
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
++#include <linux/of_reserved_mem.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/pm_runtime.h>
+ #include <sound/soc.h>
+@@ -1070,6 +1071,12 @@ static int mt8173_afe_pcm_dev_probe(struct platform_device *pdev)
+ 
+ 	afe->dev = &pdev->dev;
+ 
++	ret = of_reserved_mem_device_init(&pdev->dev);
++	if (ret) {
++		dev_info(&pdev->dev, "no reserved memory found, pre-allocating buffers instead\n");
++		afe->preallocate_buffers = true;
++	}
++
+ 	irq_id = platform_get_irq(pdev, 0);
+ 	if (irq_id <= 0)
+ 		return irq_id < 0 ? irq_id : -ENXIO;
+diff --git a/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c b/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c
+index d083b4bf0f9545..e7378bee8e50f2 100644
+--- a/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c
++++ b/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c
+@@ -10,6 +10,7 @@
+ #include <linux/mfd/syscon.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
++#include <linux/of_reserved_mem.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/reset.h>
+ 
+@@ -1094,6 +1095,12 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe->dev = &pdev->dev;
+ 	dev = afe->dev;
+ 
++	ret = of_reserved_mem_device_init(dev);
++	if (ret) {
++		dev_info(dev, "no reserved memory found, pre-allocating buffers instead\n");
++		afe->preallocate_buffers = true;
++	}
++
+ 	/* initial audio related clock */
+ 	ret = mt8183_init_clock(afe);
+ 	if (ret) {
+diff --git a/sound/soc/mediatek/mt8186/mt8186-afe-pcm.c b/sound/soc/mediatek/mt8186/mt8186-afe-pcm.c
+index db7c93401bee69..c73b4664e53e1b 100644
+--- a/sound/soc/mediatek/mt8186/mt8186-afe-pcm.c
++++ b/sound/soc/mediatek/mt8186/mt8186-afe-pcm.c
+@@ -10,6 +10,7 @@
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
++#include <linux/of_reserved_mem.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/reset.h>
+ #include <sound/soc.h>
+@@ -2835,6 +2836,12 @@ static int mt8186_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe_priv = afe->platform_priv;
+ 	afe->dev = &pdev->dev;
+ 
++	ret = of_reserved_mem_device_init(dev);
++	if (ret) {
++		dev_info(dev, "no reserved memory found, pre-allocating buffers instead\n");
++		afe->preallocate_buffers = true;
++	}
++
+ 	afe->base_addr = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(afe->base_addr))
+ 		return PTR_ERR(afe->base_addr);
+diff --git a/sound/soc/mediatek/mt8192/mt8192-afe-pcm.c b/sound/soc/mediatek/mt8192/mt8192-afe-pcm.c
+index fd6af74d799579..3d32fe46118ece 100644
+--- a/sound/soc/mediatek/mt8192/mt8192-afe-pcm.c
++++ b/sound/soc/mediatek/mt8192/mt8192-afe-pcm.c
+@@ -12,6 +12,7 @@
+ #include <linux/mfd/syscon.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
++#include <linux/of_reserved_mem.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/reset.h>
+ #include <sound/soc.h>
+@@ -2179,6 +2180,12 @@ static int mt8192_afe_pcm_dev_probe(struct platform_device *pdev)
+ 
+ 	afe->dev = dev;
+ 
++	ret = of_reserved_mem_device_init(dev);
++	if (ret) {
++		dev_info(dev, "no reserved memory found, pre-allocating buffers instead\n");
++		afe->preallocate_buffers = true;
++	}
++
+ 	/* init audio related clock */
+ 	ret = mt8192_init_clock(afe);
+ 	if (ret) {
+diff --git a/sound/soc/sdca/sdca_functions.c b/sound/soc/sdca/sdca_functions.c
+index 493f390f087add..15aa57a07c73f1 100644
+--- a/sound/soc/sdca/sdca_functions.c
++++ b/sound/soc/sdca/sdca_functions.c
+@@ -880,7 +880,8 @@ static int find_sdca_entity_control(struct device *dev, struct sdca_entity *enti
+ 			control->value = tmp;
+ 			control->has_fixed = true;
+ 		}
+-
++		fallthrough;
++	case SDCA_ACCESS_MODE_RO:
+ 		control->deferrable = fwnode_property_read_bool(control_node,
+ 								"mipi-sdca-control-deferrable");
+ 		break;
+diff --git a/sound/soc/sdca/sdca_regmap.c b/sound/soc/sdca/sdca_regmap.c
+index 4b78188cfcebd8..394058a0537c33 100644
+--- a/sound/soc/sdca/sdca_regmap.c
++++ b/sound/soc/sdca/sdca_regmap.c
+@@ -72,12 +72,18 @@ bool sdca_regmap_readable(struct sdca_function_data *function, unsigned int reg)
+ 	if (!control)
+ 		return false;
+ 
++	if (!(BIT(SDW_SDCA_CTL_CNUM(reg)) & control->cn_list))
++		return false;
++
+ 	switch (control->mode) {
+ 	case SDCA_ACCESS_MODE_RW:
+ 	case SDCA_ACCESS_MODE_RO:
+-	case SDCA_ACCESS_MODE_DUAL:
+ 	case SDCA_ACCESS_MODE_RW1S:
+ 	case SDCA_ACCESS_MODE_RW1C:
++		if (SDW_SDCA_NEXT_CTL(0) & reg)
++			return false;
++		fallthrough;
++	case SDCA_ACCESS_MODE_DUAL:
+ 		/* No access to registers marked solely for device use */
+ 		return control->layers & ~SDCA_ACCESS_LAYER_DEVICE;
+ 	default:
+@@ -104,11 +110,17 @@ bool sdca_regmap_writeable(struct sdca_function_data *function, unsigned int reg
+ 	if (!control)
+ 		return false;
+ 
++	if (!(BIT(SDW_SDCA_CTL_CNUM(reg)) & control->cn_list))
++		return false;
++
+ 	switch (control->mode) {
+ 	case SDCA_ACCESS_MODE_RW:
+-	case SDCA_ACCESS_MODE_DUAL:
+ 	case SDCA_ACCESS_MODE_RW1S:
+ 	case SDCA_ACCESS_MODE_RW1C:
++		if (SDW_SDCA_NEXT_CTL(0) & reg)
++			return false;
++		fallthrough;
++	case SDCA_ACCESS_MODE_DUAL:
+ 		/* No access to registers marked solely for device use */
+ 		return control->layers & ~SDCA_ACCESS_LAYER_DEVICE;
+ 	default:
+diff --git a/sound/soc/soc-dai.c b/sound/soc/soc-dai.c
+index a210089747d004..32f46a38682b79 100644
+--- a/sound/soc/soc-dai.c
++++ b/sound/soc/soc-dai.c
+@@ -259,13 +259,15 @@ int snd_soc_dai_set_tdm_slot(struct snd_soc_dai *dai,
+ 		&rx_mask,
+ 	};
+ 
+-	if (dai->driver->ops &&
+-	    dai->driver->ops->xlate_tdm_slot_mask)
+-		ret = dai->driver->ops->xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
+-	else
+-		ret = snd_soc_xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
+-	if (ret)
+-		goto err;
++	if (slots) {
++		if (dai->driver->ops &&
++		    dai->driver->ops->xlate_tdm_slot_mask)
++			ret = dai->driver->ops->xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
++		else
++			ret = snd_soc_xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
++		if (ret)
++			goto err;
++	}
+ 
+ 	for_each_pcm_streams(stream)
+ 		snd_soc_dai_tdm_mask_set(dai, stream, *tdm_mask[stream]);
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index 8d4dd11c9aef1d..a629e0eacb20eb 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -399,28 +399,32 @@ EXPORT_SYMBOL_GPL(snd_soc_put_volsw_sx);
+ static int snd_soc_clip_to_platform_max(struct snd_kcontrol *kctl)
+ {
+ 	struct soc_mixer_control *mc = (struct soc_mixer_control *)kctl->private_value;
+-	struct snd_ctl_elem_value uctl;
++	struct snd_ctl_elem_value *uctl;
+ 	int ret;
+ 
+ 	if (!mc->platform_max)
+ 		return 0;
+ 
+-	ret = kctl->get(kctl, &uctl);
++	uctl = kzalloc(sizeof(*uctl), GFP_KERNEL);
++	if (!uctl)
++		return -ENOMEM;
++
++	ret = kctl->get(kctl, uctl);
+ 	if (ret < 0)
+-		return ret;
++		goto out;
+ 
+-	if (uctl.value.integer.value[0] > mc->platform_max)
+-		uctl.value.integer.value[0] = mc->platform_max;
++	if (uctl->value.integer.value[0] > mc->platform_max)
++		uctl->value.integer.value[0] = mc->platform_max;
+ 
+ 	if (snd_soc_volsw_is_stereo(mc) &&
+-	    uctl.value.integer.value[1] > mc->platform_max)
+-		uctl.value.integer.value[1] = mc->platform_max;
++	    uctl->value.integer.value[1] > mc->platform_max)
++		uctl->value.integer.value[1] = mc->platform_max;
+ 
+-	ret = kctl->put(kctl, &uctl);
+-	if (ret < 0)
+-		return ret;
++	ret = kctl->put(kctl, uctl);
+ 
+-	return 0;
++out:
++	kfree(uctl);
++	return ret;
+ }
+ 
+ /**
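
The soc-ops change moves struct snd_ctl_elem_value (roughly 1 KiB) off the kernel stack onto the heap, with a single kfree() exit path. A userspace analog of that shape, with calloc() standing in for kzalloc() and illustrative get/put callbacks:

#include <stdio.h>
#include <stdlib.h>

struct elem_value { long value[128]; };	/* stand-in for snd_ctl_elem_value */

static int get_cb(struct elem_value *v) { v->value[0] = 99; return 0; }
static int put_cb(const struct elem_value *v) { return v->value[0] <= 80 ? 0 : -1; }

static int clip_to_max(long max)
{
	struct elem_value *uctl;
	int ret;

	uctl = calloc(1, sizeof(*uctl));	/* kzalloc() in the kernel */
	if (!uctl)
		return -1;

	ret = get_cb(uctl);
	if (ret < 0)
		goto out;

	if (uctl->value[0] > max)
		uctl->value[0] = max;
	ret = put_cb(uctl);
out:
	free(uctl);				/* single cleanup path */
	return ret;
}

int main(void)
{
	printf("clip: %d\n", clip_to_max(80));
	return 0;
}
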
+diff --git a/sound/soc/sof/intel/Kconfig b/sound/soc/sof/intel/Kconfig
+index dc1d21de4ab792..4f27f8c8debf8a 100644
+--- a/sound/soc/sof/intel/Kconfig
++++ b/sound/soc/sof/intel/Kconfig
+@@ -266,9 +266,10 @@ config SND_SOC_SOF_METEORLAKE
+ 
+ config SND_SOC_SOF_INTEL_LNL
+ 	tristate
++	select SOUNDWIRE_INTEL if SND_SOC_SOF_INTEL_SOUNDWIRE != n
+ 	select SND_SOC_SOF_HDA_GENERIC
+ 	select SND_SOC_SOF_INTEL_SOUNDWIRE_LINK_BASELINE
+-	select SND_SOF_SOF_HDA_SDW_BPT if SND_SOC_SOF_INTEL_SOUNDWIRE
++	select SND_SOF_SOF_HDA_SDW_BPT if SND_SOC_SOF_INTEL_SOUNDWIRE != n
+ 	select SND_SOC_SOF_IPC4
+ 	select SND_SOC_SOF_INTEL_MTL
+ 
+diff --git a/sound/usb/mixer_scarlett2.c b/sound/usb/mixer_scarlett2.c
+index 288d22e6a0b253..b176b5e36d651e 100644
+--- a/sound/usb/mixer_scarlett2.c
++++ b/sound/usb/mixer_scarlett2.c
+@@ -2351,6 +2351,8 @@ static int scarlett2_usb(
+ 	struct scarlett2_usb_packet *req, *resp = NULL;
+ 	size_t req_buf_size = struct_size(req, data, req_size);
+ 	size_t resp_buf_size = struct_size(resp, data, resp_size);
++	int retries = 0;
++	const int max_retries = 5;
+ 	int err;
+ 
+ 	req = kmalloc(req_buf_size, GFP_KERNEL);
+@@ -2374,10 +2376,15 @@ static int scarlett2_usb(
+ 	if (req_size)
+ 		memcpy(req->data, req_data, req_size);
+ 
++retry:
+ 	err = scarlett2_usb_tx(dev, private->bInterfaceNumber,
+ 			       req, req_buf_size);
+ 
+ 	if (err != req_buf_size) {
++		if (err == -EPROTO && ++retries <= max_retries) {
++			msleep(5 * (1 << (retries - 1)));
++			goto retry;
++		}
+ 		usb_audio_err(
+ 			mixer->chip,
+ 			"%s USB request result cmd %x was %d\n",
+@@ -3971,8 +3978,13 @@ static int scarlett2_input_select_ctl_info(
+ 		goto unlock;
+ 
+ 	/* Loop through each input */
+-	for (i = 0; i < inputs; i++)
++	for (i = 0; i < inputs; i++) {
+ 		values[i] = kasprintf(GFP_KERNEL, "Input %d", i + 1);
++		if (!values[i]) {
++			err = -ENOMEM;
++			goto unlock;
++		}
++	}
+ 
+ 	err = snd_ctl_enum_info(uinfo, 1, i,
+ 				(const char * const *)values);
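
The scarlett2 retry above backs off as 5, 10, 20, 40, 80 ms across at most five -EPROTO failures. This sketch reproduces that schedule; do_tx() is a stand-in that fails twice before succeeding, and usleep() plays the role of msleep():

#include <stdio.h>
#include <unistd.h>

#define EPROTO 71

static int attempts;
static int do_tx(void) { return ++attempts < 3 ? -EPROTO : 0; }

int main(void)
{
	const int max_retries = 5;
	int retries = 0, err;

retry:
	err = do_tx();
	if (err == -EPROTO && ++retries <= max_retries) {
		unsigned int ms = 5u * (1u << (retries - 1));	/* 5, 10, 20, 40, 80 */

		printf("retry %d after %u ms\n", retries, ms);
		usleep(ms * 1000);	/* msleep() in the kernel */
		goto retry;
	}
	printf("final err=%d after %d attempts\n", err, attempts);
	return err ? 1 : 0;
}
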
+diff --git a/sound/x86/intel_hdmi_audio.c b/sound/x86/intel_hdmi_audio.c
+index 7fcc528a020462..b10157c6c277eb 100644
+--- a/sound/x86/intel_hdmi_audio.c
++++ b/sound/x86/intel_hdmi_audio.c
+@@ -1767,7 +1767,7 @@ static int __hdmi_lpe_audio_probe(struct platform_device *pdev)
+ 		/* setup private data which can be retrieved when required */
+ 		pcm->private_data = ctx;
+ 		pcm->info_flags = 0;
+-		strscpy(pcm->name, card->shortname, strlen(card->shortname));
++		strscpy(pcm->name, card->shortname, sizeof(pcm->name));
+ 		/* setup the ops for playback */
+ 		snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &had_pcm_ops);
+ 
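
The one-liner above fixes a strscpy() bound: the size argument must be the destination capacity, not the source length, since a source-derived bound silently drops the final character (strscpy reserves a byte for the NUL) and stops guarding the destination at all. Demonstration with a local reimplementation of strscpy():

#include <stdio.h>
#include <string.h>

static long strscpy(char *dst, const char *src, size_t count)
{
	size_t len;

	if (!count)
		return -7;		/* -E2BIG in the kernel */
	len = strnlen(src, count - 1);	/* leave room for the NUL */
	memcpy(dst, src, len);
	dst[len] = '\0';
	return src[len] ? -7 : (long)len;
}

int main(void)
{
	char name[32];
	const char *shortname = "Intel HDMI/DP LPE Audio";

	strscpy(name, shortname, strlen(shortname));	/* wrong: drops the 'o' */
	printf("bad:  %s\n", name);
	strscpy(name, shortname, sizeof(name));		/* right: dest capacity */
	printf("good: %s\n", name);
	return 0;
}
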
+diff --git a/tools/bpf/bpftool/net.c b/tools/bpf/bpftool/net.c
+index 64f958f437b01e..cfc6f944f7c33a 100644
+--- a/tools/bpf/bpftool/net.c
++++ b/tools/bpf/bpftool/net.c
+@@ -366,17 +366,18 @@ static int dump_link_nlmsg(void *cookie, void *msg, struct nlattr **tb)
+ {
+ 	struct bpf_netdev_t *netinfo = cookie;
+ 	struct ifinfomsg *ifinfo = msg;
++	struct ip_devname_ifindex *tmp;
+ 
+ 	if (netinfo->filter_idx > 0 && netinfo->filter_idx != ifinfo->ifi_index)
+ 		return 0;
+ 
+ 	if (netinfo->used_len == netinfo->array_len) {
+-		netinfo->devices = realloc(netinfo->devices,
+-			(netinfo->array_len + 16) *
+-			sizeof(struct ip_devname_ifindex));
+-		if (!netinfo->devices)
++		tmp = realloc(netinfo->devices,
++			(netinfo->array_len + 16) * sizeof(struct ip_devname_ifindex));
++		if (!tmp)
+ 			return -ENOMEM;
+ 
++		netinfo->devices = tmp;
+ 		netinfo->array_len += 16;
+ 	}
+ 	netinfo->devices[netinfo->used_len].ifindex = ifinfo->ifi_index;
+@@ -395,6 +396,7 @@ static int dump_class_qdisc_nlmsg(void *cookie, void *msg, struct nlattr **tb)
+ {
+ 	struct bpf_tcinfo_t *tcinfo = cookie;
+ 	struct tcmsg *info = msg;
++	struct tc_kind_handle *tmp;
+ 
+ 	if (tcinfo->is_qdisc) {
+ 		/* skip clsact qdisc */
+@@ -406,11 +408,12 @@ static int dump_class_qdisc_nlmsg(void *cookie, void *msg, struct nlattr **tb)
+ 	}
+ 
+ 	if (tcinfo->used_len == tcinfo->array_len) {
+-		tcinfo->handle_array = realloc(tcinfo->handle_array,
++		tmp = realloc(tcinfo->handle_array,
+ 			(tcinfo->array_len + 16) * sizeof(struct tc_kind_handle));
+-		if (!tcinfo->handle_array)
++		if (!tmp)
+ 			return -ENOMEM;
+ 
++		tcinfo->handle_array = tmp;
+ 		tcinfo->array_len += 16;
+ 	}
+ 	tcinfo->handle_array[tcinfo->used_len].handle = info->tcm_handle;
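
Both bpftool hunks fix the classic realloc() leak: assigning the result straight back to the only pointer loses the old block when realloc() fails. A temporary keeps the original allocation reachable, as in this minimal demo:

#include <stdio.h>
#include <stdlib.h>

static int grow(int **arr, size_t *len, size_t add)
{
	int *tmp = realloc(*arr, (*len + add) * sizeof(**arr));

	if (!tmp)
		return -1;	/* *arr is still valid and freeable */
	*arr = tmp;
	*len += add;
	return 0;
}

int main(void)
{
	int *devices = NULL;
	size_t len = 0;

	if (grow(&devices, &len, 16) == 0)
		printf("capacity %zu\n", len);
	free(devices);
	return 0;
}
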
+diff --git a/tools/cgroup/memcg_slabinfo.py b/tools/cgroup/memcg_slabinfo.py
+index 270c28a0d09801..6bf4bde77903c3 100644
+--- a/tools/cgroup/memcg_slabinfo.py
++++ b/tools/cgroup/memcg_slabinfo.py
+@@ -146,11 +146,11 @@ def detect_kernel_config():
+ 
+ 
+ def for_each_slab(prog):
+-    PGSlab = ~prog.constant('PG_slab')
++    slabtype = prog.constant('PGTY_slab')
+ 
+     for page in for_each_page(prog):
+         try:
+-            if page.page_type.value_() == PGSlab:
++            if (page.page_type.value_() >> 24) == slabtype:
+                 yield cast('struct slab *', page)
+         except FaultError:
+             pass
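
My reading of the script change above: page types moved from inverted flag bits (the old ~PG_slab comparison) to a small enum held in the top byte of page->page_type, so the test becomes a shift-and-compare. Sketch with an illustrative PGTY_slab value, not taken from the source:

#include <stdint.h>
#include <stdio.h>

#define PGTY_slab 0xf5u		/* illustrative value, not from the source */

static int is_slab_page(uint32_t page_type)
{
	return (page_type >> 24) == PGTY_slab;	/* type lives in bits 24-31 */
}

int main(void)
{
	uint32_t pt = (PGTY_slab << 24) | 0x00001234u;	/* low bits hold other data */

	printf("slab? %d\n", is_slab_page(pt));
	return 0;
}
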
+diff --git a/tools/lib/subcmd/help.c b/tools/lib/subcmd/help.c
+index 8561b0f01a2476..9ef569492560ef 100644
+--- a/tools/lib/subcmd/help.c
++++ b/tools/lib/subcmd/help.c
+@@ -9,6 +9,7 @@
+ #include <sys/stat.h>
+ #include <unistd.h>
+ #include <dirent.h>
++#include <assert.h>
+ #include "subcmd-util.h"
+ #include "help.h"
+ #include "exec-cmd.h"
+@@ -82,10 +83,11 @@ void exclude_cmds(struct cmdnames *cmds, struct cmdnames *excludes)
+ 				ci++;
+ 				cj++;
+ 			} else {
+-				zfree(&cmds->names[cj]);
+-				cmds->names[cj++] = cmds->names[ci++];
++				cmds->names[cj++] = cmds->names[ci];
++				cmds->names[ci++] = NULL;
+ 			}
+ 		} else if (cmp == 0) {
++			zfree(&cmds->names[ci]);
+ 			ci++;
+ 			ei++;
+ 		} else if (cmp > 0) {
+@@ -94,12 +96,12 @@ void exclude_cmds(struct cmdnames *cmds, struct cmdnames *excludes)
+ 	}
+ 	if (ci != cj) {
+ 		while (ci < cmds->cnt) {
+-			zfree(&cmds->names[cj]);
+-			cmds->names[cj++] = cmds->names[ci++];
++			cmds->names[cj++] = cmds->names[ci];
++			cmds->names[ci++] = NULL;
+ 		}
+ 	}
+ 	for (ci = cj; ci < cmds->cnt; ci++)
+-		zfree(&cmds->names[ci]);
++		assert(cmds->names[ci] == NULL);
+ 	cmds->cnt = cj;
+ }
+ 
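
The exclude_cmds() rework above pins down ownership while compacting a sorted name array against an exclude list: a kept entry moves and its old slot is NULLed, an excluded entry is freed exactly once, and the trailing loop becomes an assertion that nothing was left behind. The same rule in miniature:

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void exclude(char **names, size_t *cnt, const char *drop)
{
	size_t ci = 0, cj = 0;

	for (; ci < *cnt; ci++) {
		if (strcmp(names[ci], drop) == 0) {
			free(names[ci]);		/* excluded: freed once */
			names[ci] = NULL;
		} else {
			names[cj++] = names[ci];	/* kept: moved... */
			if (cj - 1 != ci)
				names[ci] = NULL;	/* ...old slot cleared */
		}
	}
	for (size_t i = cj; i < *cnt; i++)
		assert(names[i] == NULL);		/* nothing left to leak */
	*cnt = cj;
}

int main(void)
{
	char *names[3] = { strdup("annotate"), strdup("record"), strdup("report") };
	size_t cnt = 3;

	exclude(names, &cnt, "record");
	for (size_t i = 0; i < cnt; i++) {
		printf("%s\n", names[i]);
		free(names[i]);
	}
	return 0;
}
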
+diff --git a/tools/lib/subcmd/run-command.c b/tools/lib/subcmd/run-command.c
+index 0a764c25c384f0..b7510f83209a0a 100644
+--- a/tools/lib/subcmd/run-command.c
++++ b/tools/lib/subcmd/run-command.c
+@@ -5,6 +5,7 @@
+ #include <ctype.h>
+ #include <fcntl.h>
+ #include <string.h>
++#include <linux/compiler.h>
+ #include <linux/string.h>
+ #include <errno.h>
+ #include <sys/wait.h>
+@@ -216,10 +217,20 @@ static int wait_or_whine(struct child_process *cmd, bool block)
+ 	return result;
+ }
+ 
++/*
++ * Conservative estimate of the number of characters needed to hold a
++ * decoded integer: assume each 3 bits needs one character byte, plus a
++ * possible sign character.
++ */
++#ifndef is_signed_type
++#define is_signed_type(type) (((type)(-1)) < (type)1)
++#endif
++#define MAX_STRLEN_TYPE(type) (sizeof(type) * 8 / 3 + (is_signed_type(type) ? 1 : 0))
++
+ int check_if_command_finished(struct child_process *cmd)
+ {
+ #ifdef __linux__
+-	char filename[FILENAME_MAX + 12];
++	char filename[6 + MAX_STRLEN_TYPE(typeof(cmd->pid)) + 7 + 1];
+ 	char status_line[256];
+ 	FILE *status_file;
+ 
+@@ -227,7 +238,7 @@ int check_if_command_finished(struct child_process *cmd)
+ 	 * Check by reading /proc/<pid>/status as calling waitpid causes
+ 	 * stdout/stderr to be closed and data lost.
+ 	 */
+-	sprintf(filename, "/proc/%d/status", cmd->pid);
++	sprintf(filename, "/proc/%u/status", cmd->pid);
+ 	status_file = fopen(filename, "r");
+ 	if (status_file == NULL) {
+ 		/* Open failed, assume finish_command was called. */
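
The buffer above is sized from the pid type itself: 3 bits per character over-estimates the decimal digits, and the literal 6 + ... + 7 + 1 covers "/proc/", "/status" and the NUL. A standalone check of that arithmetic:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

#define is_signed_type(type) (((type)(-1)) < (type)1)
#define MAX_STRLEN_TYPE(type) (sizeof(type) * 8 / 3 + (is_signed_type(type) ? 1 : 0))

int main(void)
{
	char filename[6 + MAX_STRLEN_TYPE(pid_t) + 7 + 1];

	/* 32-bit pid_t: 32/3 = 10 digit bytes + 1 sign byte -> 25 total */
	printf("sizeof(filename) = %zu\n", sizeof(filename));
	snprintf(filename, sizeof(filename), "/proc/%u/status", (unsigned)getpid());
	printf("%s\n", filename);
	return 0;
}
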
+diff --git a/tools/perf/.gitignore b/tools/perf/.gitignore
+index 5aaf73df6700dc..b64302a761444d 100644
+--- a/tools/perf/.gitignore
++++ b/tools/perf/.gitignore
+@@ -48,8 +48,6 @@ libbpf/
+ libperf/
+ libsubcmd/
+ libsymbol/
+-libtraceevent/
+-libtraceevent_plugins/
+ fixdep
+ Documentation/doc.dep
+ python_ext_build/
+diff --git a/tools/perf/builtin-sched.c b/tools/perf/builtin-sched.c
+index 26ece6e9bfd167..4bbebd6ef2e4a7 100644
+--- a/tools/perf/builtin-sched.c
++++ b/tools/perf/builtin-sched.c
+@@ -994,7 +994,7 @@ thread_atoms_search(struct rb_root_cached *root, struct thread *thread,
+ 		else if (cmp < 0)
+ 			node = node->rb_right;
+ 		else {
+-			BUG_ON(thread != atoms->thread);
++			BUG_ON(!RC_CHK_EQUAL(thread, atoms->thread));
+ 			return atoms;
+ 		}
+ 	}
+@@ -1111,6 +1111,21 @@ add_sched_in_event(struct work_atoms *atoms, u64 timestamp)
+ 	atoms->nb_atoms++;
+ }
+ 
++static void free_work_atoms(struct work_atoms *atoms)
++{
++	struct work_atom *atom, *tmp;
++
++	if (atoms == NULL)
++		return;
++
++	list_for_each_entry_safe(atom, tmp, &atoms->work_list, list) {
++		list_del(&atom->list);
++		free(atom);
++	}
++	thread__zput(atoms->thread);
++	free(atoms);
++}
++
+ static int latency_switch_event(struct perf_sched *sched,
+ 				struct evsel *evsel,
+ 				struct perf_sample *sample,
+@@ -1634,6 +1649,7 @@ static int map_switch_event(struct perf_sched *sched, struct evsel *evsel,
+ 	const char *color = PERF_COLOR_NORMAL;
+ 	char stimestamp[32];
+ 	const char *str;
++	int ret = -1;
+ 
+ 	BUG_ON(this_cpu.cpu >= MAX_CPUS || this_cpu.cpu < 0);
+ 
+@@ -1664,17 +1680,20 @@ static int map_switch_event(struct perf_sched *sched, struct evsel *evsel,
+ 	sched_in = map__findnew_thread(sched, machine, -1, next_pid);
+ 	sched_out = map__findnew_thread(sched, machine, -1, prev_pid);
+ 	if (sched_in == NULL || sched_out == NULL)
+-		return -1;
++		goto out;
+ 
+ 	tr = thread__get_runtime(sched_in);
+-	if (tr == NULL) {
+-		thread__put(sched_in);
+-		return -1;
+-	}
++	if (tr == NULL)
++		goto out;
++
++	thread__put(sched->curr_thread[this_cpu.cpu]);
++	thread__put(sched->curr_out_thread[this_cpu.cpu]);
+ 
+ 	sched->curr_thread[this_cpu.cpu] = thread__get(sched_in);
+ 	sched->curr_out_thread[this_cpu.cpu] = thread__get(sched_out);
+ 
++	ret = 0;
++
+ 	str = thread__comm_str(sched_in);
+ 	new_shortname = 0;
+ 	if (!tr->shortname[0]) {
+@@ -1769,12 +1788,10 @@ static int map_switch_event(struct perf_sched *sched, struct evsel *evsel,
+ 	color_fprintf(stdout, color, "\n");
+ 
+ out:
+-	if (sched->map.task_name)
+-		thread__put(sched_out);
+-
++	thread__put(sched_out);
+ 	thread__put(sched_in);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int process_sched_switch_event(const struct perf_tool *tool,
+@@ -2018,6 +2035,16 @@ static u64 evsel__get_time(struct evsel *evsel, u32 cpu)
+ 	return r->last_time[cpu];
+ }
+ 
++static void timehist__evsel_priv_destructor(void *priv)
++{
++	struct evsel_runtime *r = priv;
++
++	if (r) {
++		free(r->last_time);
++		free(r);
++	}
++}
++
+ static int comm_width = 30;
+ 
+ static char *timehist_get_commstr(struct thread *thread)
+@@ -2311,8 +2338,10 @@ static void save_task_callchain(struct perf_sched *sched,
+ 		return;
+ 	}
+ 
+-	if (!sched->show_callchain || sample->callchain == NULL)
++	if (!sched->show_callchain || sample->callchain == NULL) {
++		thread__put(thread);
+ 		return;
++	}
+ 
+ 	cursor = get_tls_callchain_cursor();
+ 
+@@ -2321,10 +2350,12 @@ static void save_task_callchain(struct perf_sched *sched,
+ 		if (verbose > 0)
+ 			pr_err("Failed to resolve callchain. Skipping\n");
+ 
++		thread__put(thread);
+ 		return;
+ 	}
+ 
+ 	callchain_cursor_commit(cursor);
++	thread__put(thread);
+ 
+ 	while (true) {
+ 		struct callchain_cursor_node *node;
+@@ -2401,8 +2432,17 @@ static void free_idle_threads(void)
+ 		return;
+ 
+ 	for (i = 0; i < idle_max_cpu; ++i) {
+-		if ((idle_threads[i]))
+-			thread__delete(idle_threads[i]);
++		struct thread *idle = idle_threads[i];
++
++		if (idle) {
++			struct idle_thread_runtime *itr;
++
++			itr = thread__priv(idle);
++			if (itr)
++				thread__put(itr->last_thread);
++
++			thread__delete(idle);
++		}
+ 	}
+ 
+ 	free(idle_threads);
+@@ -2439,7 +2479,7 @@ static struct thread *get_idle_thread(int cpu)
+ 		}
+ 	}
+ 
+-	return idle_threads[cpu];
++	return thread__get(idle_threads[cpu]);
+ }
+ 
+ static void save_idle_callchain(struct perf_sched *sched,
+@@ -2494,7 +2534,8 @@ static struct thread *timehist_get_thread(struct perf_sched *sched,
+ 			if (itr == NULL)
+ 				return NULL;
+ 
+-			itr->last_thread = thread;
++			thread__put(itr->last_thread);
++			itr->last_thread = thread__get(thread);
+ 
+ 			/* copy task callchain when entering to idle */
+ 			if (evsel__intval(evsel, sample, "next_pid") == 0)
+@@ -2565,6 +2606,7 @@ static void timehist_print_wakeup_event(struct perf_sched *sched,
+ 	/* show wakeup unless both awakee and awaker are filtered */
+ 	if (timehist_skip_sample(sched, thread, evsel, sample) &&
+ 	    timehist_skip_sample(sched, awakened, evsel, sample)) {
++		thread__put(thread);
+ 		return;
+ 	}
+ 
+@@ -2581,6 +2623,8 @@ static void timehist_print_wakeup_event(struct perf_sched *sched,
+ 	printf("awakened: %s", timehist_get_commstr(awakened));
+ 
+ 	printf("\n");
++
++	thread__put(thread);
+ }
+ 
+ static int timehist_sched_wakeup_ignore(const struct perf_tool *tool __maybe_unused,
+@@ -2609,8 +2653,10 @@ static int timehist_sched_wakeup_event(const struct perf_tool *tool,
+ 		return -1;
+ 
+ 	tr = thread__get_runtime(thread);
+-	if (tr == NULL)
++	if (tr == NULL) {
++		thread__put(thread);
+ 		return -1;
++	}
+ 
+ 	if (tr->ready_to_run == 0)
+ 		tr->ready_to_run = sample->time;
+@@ -2620,6 +2666,7 @@ static int timehist_sched_wakeup_event(const struct perf_tool *tool,
+ 	    !perf_time__skip_sample(&sched->ptime, sample->time))
+ 		timehist_print_wakeup_event(sched, evsel, sample, machine, thread);
+ 
++	thread__put(thread);
+ 	return 0;
+ }
+ 
+@@ -2647,6 +2694,7 @@ static void timehist_print_migration_event(struct perf_sched *sched,
+ 
+ 	if (timehist_skip_sample(sched, thread, evsel, sample) &&
+ 	    timehist_skip_sample(sched, migrated, evsel, sample)) {
++		thread__put(thread);
+ 		return;
+ 	}
+ 
+@@ -2674,6 +2722,7 @@ static void timehist_print_migration_event(struct perf_sched *sched,
+ 	printf(" cpu %d => %d", ocpu, dcpu);
+ 
+ 	printf("\n");
++	thread__put(thread);
+ }
+ 
+ static int timehist_migrate_task_event(const struct perf_tool *tool,
+@@ -2693,8 +2742,10 @@ static int timehist_migrate_task_event(const struct perf_tool *tool,
+ 		return -1;
+ 
+ 	tr = thread__get_runtime(thread);
+-	if (tr == NULL)
++	if (tr == NULL) {
++		thread__put(thread);
+ 		return -1;
++	}
+ 
+ 	tr->migrations++;
+ 	tr->migrated = sample->time;
+@@ -2704,6 +2755,7 @@ static int timehist_migrate_task_event(const struct perf_tool *tool,
+ 		timehist_print_migration_event(sched, evsel, sample,
+ 							machine, thread);
+ 	}
++	thread__put(thread);
+ 
+ 	return 0;
+ }
+@@ -2726,10 +2778,10 @@ static void timehist_update_task_prio(struct evsel *evsel,
+ 		return;
+ 
+ 	tr = thread__get_runtime(thread);
+-	if (tr == NULL)
+-		return;
++	if (tr != NULL)
++		tr->prio = next_prio;
+ 
+-	tr->prio = next_prio;
++	thread__put(thread);
+ }
+ 
+ static int timehist_sched_change_event(const struct perf_tool *tool,
+@@ -2741,7 +2793,7 @@ static int timehist_sched_change_event(const struct perf_tool *tool,
+ 	struct perf_sched *sched = container_of(tool, struct perf_sched, tool);
+ 	struct perf_time_interval *ptime = &sched->ptime;
+ 	struct addr_location al;
+-	struct thread *thread;
++	struct thread *thread = NULL;
+ 	struct thread_runtime *tr = NULL;
+ 	u64 tprev, t = sample->time;
+ 	int rc = 0;
+@@ -2865,6 +2917,7 @@ static int timehist_sched_change_event(const struct perf_tool *tool,
+ 
+ 	evsel__save_time(evsel, sample->time, sample->cpu);
+ 
++	thread__put(thread);
+ 	addr_location__exit(&al);
+ 	return rc;
+ }
+@@ -3286,6 +3339,8 @@ static int perf_sched__timehist(struct perf_sched *sched)
+ 
+ 	setup_pager();
+ 
++	evsel__set_priv_destructor(timehist__evsel_priv_destructor);
++
+ 	/* prefer sched_waking if it is captured */
+ 	if (evlist__find_tracepoint_by_name(session->evlist, "sched:sched_waking"))
+ 		handlers[1].handler = timehist_sched_wakeup_ignore;
+@@ -3386,13 +3441,13 @@ static void __merge_work_atoms(struct rb_root_cached *root, struct work_atoms *d
+ 			this->total_runtime += data->total_runtime;
+ 			this->nb_atoms += data->nb_atoms;
+ 			this->total_lat += data->total_lat;
+-			list_splice(&data->work_list, &this->work_list);
++			list_splice_init(&data->work_list, &this->work_list);
+ 			if (this->max_lat < data->max_lat) {
+ 				this->max_lat = data->max_lat;
+ 				this->max_lat_start = data->max_lat_start;
+ 				this->max_lat_end = data->max_lat_end;
+ 			}
+-			zfree(&data);
++			free_work_atoms(data);
+ 			return;
+ 		}
+ 	}
+@@ -3471,7 +3526,6 @@ static int perf_sched__lat(struct perf_sched *sched)
+ 		work_list = rb_entry(next, struct work_atoms, node);
+ 		output_lat_thread(sched, work_list);
+ 		next = rb_next(next);
+-		thread__zput(work_list->thread);
+ 	}
+ 
+ 	printf(" -----------------------------------------------------------------------------------------------------------------\n");
+@@ -3485,6 +3539,13 @@ static int perf_sched__lat(struct perf_sched *sched)
+ 
+ 	rc = 0;
+ 
++	while ((next = rb_first_cached(&sched->sorted_atom_root))) {
++		struct work_atoms *data;
++
++		data = rb_entry(next, struct work_atoms, node);
++		rb_erase_cached(next, &sched->sorted_atom_root);
++		free_work_atoms(data);
++	}
+ out_free_cpus_switch_event:
+ 	free_cpus_switch_event(sched);
+ 	return rc;
+@@ -3556,10 +3617,10 @@ static int perf_sched__map(struct perf_sched *sched)
+ 
+ 	sched->curr_out_thread = calloc(MAX_CPUS, sizeof(*(sched->curr_out_thread)));
+ 	if (!sched->curr_out_thread)
+-		return rc;
++		goto out_free_curr_thread;
+ 
+ 	if (setup_cpus_switch_event(sched))
+-		goto out_free_curr_thread;
++		goto out_free_curr_out_thread;
+ 
+ 	if (setup_map_cpus(sched))
+ 		goto out_free_cpus_switch_event;
+@@ -3590,7 +3651,14 @@ static int perf_sched__map(struct perf_sched *sched)
+ out_free_cpus_switch_event:
+ 	free_cpus_switch_event(sched);
+ 
++out_free_curr_out_thread:
++	for (int i = 0; i < MAX_CPUS; i++)
++		thread__put(sched->curr_out_thread[i]);
++	zfree(&sched->curr_out_thread);
++
+ out_free_curr_thread:
++	for (int i = 0; i < MAX_CPUS; i++)
++		thread__put(sched->curr_thread[i]);
+ 	zfree(&sched->curr_thread);
+ 	return rc;
+ }
+@@ -3898,13 +3966,15 @@ int cmd_sched(int argc, const char **argv)
+ 	if (!argc)
+ 		usage_with_options(sched_usage, sched_options);
+ 
++	thread__set_priv_destructor(free);
++
+ 	/*
+ 	 * Aliased to 'perf script' for now:
+ 	 */
+ 	if (!strcmp(argv[0], "script")) {
+-		return cmd_script(argc, argv);
++		ret = cmd_script(argc, argv);
+ 	} else if (strlen(argv[0]) > 2 && strstarts("record", argv[0])) {
+-		return __cmd_record(argc, argv);
++		ret = __cmd_record(argc, argv);
+ 	} else if (strlen(argv[0]) > 2 && strstarts("latency", argv[0])) {
+ 		sched.tp_handler = &lat_ops;
+ 		if (argc > 1) {
+@@ -3913,7 +3983,7 @@ int cmd_sched(int argc, const char **argv)
+ 				usage_with_options(latency_usage, latency_options);
+ 		}
+ 		setup_sorting(&sched, latency_options, latency_usage);
+-		return perf_sched__lat(&sched);
++		ret = perf_sched__lat(&sched);
+ 	} else if (!strcmp(argv[0], "map")) {
+ 		if (argc) {
+ 			argc = parse_options(argc, argv, map_options, map_usage, 0);
+@@ -3924,13 +3994,14 @@ int cmd_sched(int argc, const char **argv)
+ 				sched.map.task_names = strlist__new(sched.map.task_name, NULL);
+ 				if (sched.map.task_names == NULL) {
+ 					fprintf(stderr, "Failed to parse task names\n");
+-					return -1;
++					ret = -1;
++					goto out;
+ 				}
+ 			}
+ 		}
+ 		sched.tp_handler = &map_ops;
+ 		setup_sorting(&sched, latency_options, latency_usage);
+-		return perf_sched__map(&sched);
++		ret = perf_sched__map(&sched);
+ 	} else if (strlen(argv[0]) > 2 && strstarts("replay", argv[0])) {
+ 		sched.tp_handler = &replay_ops;
+ 		if (argc) {
+@@ -3938,7 +4009,7 @@ int cmd_sched(int argc, const char **argv)
+ 			if (argc)
+ 				usage_with_options(replay_usage, replay_options);
+ 		}
+-		return perf_sched__replay(&sched);
++		ret = perf_sched__replay(&sched);
+ 	} else if (!strcmp(argv[0], "timehist")) {
+ 		if (argc) {
+ 			argc = parse_options(argc, argv, timehist_options,
+@@ -3954,19 +4025,19 @@ int cmd_sched(int argc, const char **argv)
+ 				parse_options_usage(NULL, timehist_options, "w", true);
+ 			if (sched.show_next)
+ 				parse_options_usage(NULL, timehist_options, "n", true);
+-			return -EINVAL;
++			ret = -EINVAL;
++			goto out;
+ 		}
+ 		ret = symbol__validate_sym_arguments();
+-		if (ret)
+-			return ret;
+-
+-		return perf_sched__timehist(&sched);
++		if (!ret)
++			ret = perf_sched__timehist(&sched);
+ 	} else {
+ 		usage_with_options(sched_usage, sched_options);
+ 	}
+ 
++out:
+ 	/* free usage string allocated by parse_options_subcommand */
+ 	free((void *)sched_usage[0]);
+ 
+-	return 0;
++	return ret;
+ }
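
Most of the builtin-sched churn above is reference-count hygiene: every lookup that returns a counted thread is paired with a put on all exit paths, and replacing a cached pointer puts the old value before taking a reference on the new one. A minimal get/put model of that replace rule:

#include <stdio.h>
#include <stdlib.h>

struct thread { int refcnt; int tid; };

static struct thread *thread_get(struct thread *t)
{
	if (t)
		t->refcnt++;
	return t;
}

static void thread_put(struct thread *t)
{
	if (t && --t->refcnt == 0) {
		printf("freeing tid %d\n", t->tid);
		free(t);
	}
}

/* The pattern from map_switch_event()/timehist_get_thread(): drop the old
 * reference held in the slot before caching the new one. */
static void cache_thread(struct thread **slot, struct thread *t)
{
	thread_put(*slot);
	*slot = thread_get(t);
}

int main(void)
{
	struct thread *a = calloc(1, sizeof(*a));
	struct thread *b = calloc(1, sizeof(*b));
	struct thread *slot = NULL;

	a->refcnt = b->refcnt = 1;	/* creator's references */
	a->tid = 100;
	b->tid = 200;

	cache_thread(&slot, a);
	cache_thread(&slot, b);		/* puts the cached ref on a first */
	thread_put(a);			/* drop creator refs */
	thread_put(b);
	thread_put(slot);		/* finally frees b */
	return 0;
}
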
+diff --git a/tools/perf/tests/bp_account.c b/tools/perf/tests/bp_account.c
+index 4cb7d486b5c178..047433c977bc9d 100644
+--- a/tools/perf/tests/bp_account.c
++++ b/tools/perf/tests/bp_account.c
+@@ -104,6 +104,7 @@ static int bp_accounting(int wp_cnt, int share)
+ 		fd_wp = wp_event((void *)&the_var, &attr_new);
+ 		TEST_ASSERT_VAL("failed to create max wp\n", fd_wp != -1);
+ 		pr_debug("wp max created\n");
++		close(fd_wp);
+ 	}
+ 
+ 	for (i = 0; i < wp_cnt; i++)
+diff --git a/tools/perf/util/build-id.c b/tools/perf/util/build-id.c
+index e763e8d99a4367..ee00313d5d7e2a 100644
+--- a/tools/perf/util/build-id.c
++++ b/tools/perf/util/build-id.c
+@@ -864,7 +864,7 @@ static int dso__cache_build_id(struct dso *dso, struct machine *machine,
+ 	char *allocated_name = NULL;
+ 	int ret = 0;
+ 
+-	if (!dso__has_build_id(dso))
++	if (!dso__has_build_id(dso) || !dso__hit(dso))
+ 		return 0;
+ 
+ 	if (dso__is_kcore(dso)) {
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 3c030da2e477c7..08fd9b9afcf8fd 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -1652,6 +1652,15 @@ static void evsel__free_config_terms(struct evsel *evsel)
+ 	free_config_terms(&evsel->config_terms);
+ }
+ 
++static void (*evsel__priv_destructor)(void *priv);
++
++void evsel__set_priv_destructor(void (*destructor)(void *priv))
++{
++	assert(evsel__priv_destructor == NULL);
++
++	evsel__priv_destructor = destructor;
++}
++
+ void evsel__exit(struct evsel *evsel)
+ {
+ 	assert(list_empty(&evsel->core.node));
+@@ -1680,6 +1689,8 @@ void evsel__exit(struct evsel *evsel)
+ 	hashmap__free(evsel->per_pkg_mask);
+ 	evsel->per_pkg_mask = NULL;
+ 	zfree(&evsel->metric_events);
++	if (evsel__priv_destructor)
++		evsel__priv_destructor(evsel->priv);
+ 	perf_evsel__object.fini(evsel);
+ 	if (evsel__tool_event(evsel) == TOOL_PMU__EVENT_SYSTEM_TIME ||
+ 	    evsel__tool_event(evsel) == TOOL_PMU__EVENT_USER_TIME)
+diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
+index aae431d63d6477..b7f8b29f30eaf4 100644
+--- a/tools/perf/util/evsel.h
++++ b/tools/perf/util/evsel.h
+@@ -270,6 +270,8 @@ void evsel__init(struct evsel *evsel, struct perf_event_attr *attr, int idx);
+ void evsel__exit(struct evsel *evsel);
+ void evsel__delete(struct evsel *evsel);
+ 
++void evsel__set_priv_destructor(void (*destructor)(void *priv));
++
+ struct callchain_param;
+ 
+ void evsel__config(struct evsel *evsel, struct record_opts *opts,
+diff --git a/tools/perf/util/hwmon_pmu.c b/tools/perf/util/hwmon_pmu.c
+index 3cce77fc80041d..cf7156c7e3bc9f 100644
+--- a/tools/perf/util/hwmon_pmu.c
++++ b/tools/perf/util/hwmon_pmu.c
+@@ -344,7 +344,7 @@ static int hwmon_pmu__read_events(struct hwmon_pmu *pmu)
+ 
+ struct perf_pmu *hwmon_pmu__new(struct list_head *pmus, int hwmon_dir, const char *sysfs_name, const char *name)
+ {
+-	char buf[32];
++	char buf[64];
+ 	struct hwmon_pmu *hwm;
+ 
+ 	hwm = zalloc(sizeof(*hwm));
+diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
+index 5152fd5a6eada0..7ed3c3cadd6ab1 100644
+--- a/tools/perf/util/parse-events.c
++++ b/tools/perf/util/parse-events.c
+@@ -1733,13 +1733,11 @@ static int parse_events__modifier_list(struct parse_events_state *parse_state,
+ 		int eH = group ? evsel->core.attr.exclude_host : 0;
+ 		int eG = group ? evsel->core.attr.exclude_guest : 0;
+ 		int exclude = eu | ek | eh;
+-		int exclude_GH = group ? evsel->exclude_GH : 0;
++		int exclude_GH = eG | eH;
+ 
+ 		if (mod.user) {
+ 			if (!exclude)
+ 				exclude = eu = ek = eh = 1;
+-			if (!exclude_GH && !perf_guest && exclude_GH_default)
+-				eG = 1;
+ 			eu = 0;
+ 		}
+ 		if (mod.kernel) {
+@@ -1762,6 +1760,13 @@ static int parse_events__modifier_list(struct parse_events_state *parse_state,
+ 				exclude_GH = eG = eH = 1;
+ 			eH = 0;
+ 		}
++		if (!exclude_GH && exclude_GH_default) {
++			if (perf_host)
++				eG = 1;
++			else if (perf_guest)
++				eH = 1;
++		}
++
+ 		evsel->core.attr.exclude_user   = eu;
+ 		evsel->core.attr.exclude_kernel = ek;
+ 		evsel->core.attr.exclude_hv     = eh;
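
The rewritten modifier block above changes what happens when an event
string carries no G or H modifier: the default exclusion is now derived
from where perf is running rather than being special-cased under the
user modifier. Reduced to the bare decision (a freestanding sketch, not
the parse-events API):

	static void default_guest_host(int perf_host, int perf_guest,
				       int *eG, int *eH)
	{
		/* On a host, drop guest samples by default; on a guest,
		 * drop host samples. Explicit G/H modifiers skip this. */
		if (perf_host)
			*eG = 1;
		else if (perf_guest)
			*eH = 1;
	}
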
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index 11540219481ba8..9c9e28bbb24545 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -1414,6 +1414,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
+ 				goto out_err;
+ 			}
+ 		}
++		map__zput(new_node->map);
+ 		free(new_node);
+ 	}
+ 
+diff --git a/tools/power/cpupower/utils/idle_monitor/cpupower-monitor.c b/tools/power/cpupower/utils/idle_monitor/cpupower-monitor.c
+index ad493157f826f3..e8b3841d5c0f81 100644
+--- a/tools/power/cpupower/utils/idle_monitor/cpupower-monitor.c
++++ b/tools/power/cpupower/utils/idle_monitor/cpupower-monitor.c
+@@ -121,10 +121,8 @@ void print_header(int topology_depth)
+ 	switch (topology_depth) {
+ 	case TOPOLOGY_DEPTH_PKG:
+ 		printf(" PKG|");
+-		break;
+ 	case TOPOLOGY_DEPTH_CORE:
+ 		printf("CORE|");
+-		break;
+ 	case	TOPOLOGY_DEPTH_CPU:
+ 		printf(" CPU|");
+ 		break;
+@@ -167,10 +165,8 @@ void print_results(int topology_depth, int cpu)
+ 	switch (topology_depth) {
+ 	case TOPOLOGY_DEPTH_PKG:
+ 		printf("%4d|", cpu_top.core_info[cpu].pkg);
+-		break;
+ 	case TOPOLOGY_DEPTH_CORE:
+ 		printf("%4d|", cpu_top.core_info[cpu].core);
+-		break;
+ 	case TOPOLOGY_DEPTH_CPU:
+ 		printf("%4d|", cpu_top.core_info[cpu].cpu);
+ 		break;
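
Dropping the break statements in both switches makes the fall-through
intentional: when package-level topology is known, the PKG case now also
emits the CORE and CPU columns. A reduced example of the idiom (the enum
is hypothetical):

	#include <stdio.h>

	enum depth { DEPTH_PKG, DEPTH_CORE, DEPTH_CPU };

	static void print_header(enum depth d)
	{
		switch (d) {
		case DEPTH_PKG:
			printf(" PKG|");
			/* fall through */
		case DEPTH_CORE:
			printf("CORE|");
			/* fall through */
		case DEPTH_CPU:
			printf(" CPU|");
			break;
		}
		printf("\n");
	}
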
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index ab79854cb296e4..444b6bfb4683f3 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -2385,7 +2385,6 @@ unsigned long long bic_lookup(char *name_list, enum show_hide_mode mode)
+ 
+ 		}
+ 		if (i == MAX_BIC) {
+-			fprintf(stderr, "deferred %s\n", name_list);
+ 			if (mode == SHOW_LIST) {
+ 				deferred_add_names[deferred_add_index++] = name_list;
+ 				if (deferred_add_index >= MAX_DEFERRED) {
+@@ -9594,6 +9593,7 @@ int fork_it(char **argv)
+ 	timersub(&tv_odd, &tv_even, &tv_delta);
+ 	if (for_all_cpus_2(delta_cpu, ODD_COUNTERS, EVEN_COUNTERS))
+ 		fprintf(outf, "%s: Counter reset detected\n", progname);
++	delta_platform(&platform_counters_odd, &platform_counters_even);
+ 
+ 	compute_average(EVEN_COUNTERS);
+ 	format_all_counters(EVEN_COUNTERS);
+@@ -10314,9 +10314,6 @@ void probe_cpuidle_residency(void)
+ 	int min_state = 1024, max_state = 0;
+ 	char *sp;
+ 
+-	if (!DO_BIC(BIC_pct_idle))
+-		return;
+-
+ 	for (state = 10; state >= 0; --state) {
+ 
+ 		sprintf(path, "/sys/devices/system/cpu/cpu%d/cpuidle/state%d/name", base_cpu, state);
+diff --git a/tools/testing/selftests/alsa/utimer-test.c b/tools/testing/selftests/alsa/utimer-test.c
+index 32ee3ce577216b..37964f311a3397 100644
+--- a/tools/testing/selftests/alsa/utimer-test.c
++++ b/tools/testing/selftests/alsa/utimer-test.c
+@@ -135,6 +135,7 @@ TEST_F(timer_f, utimer) {
+ 	pthread_join(ticking_thread, NULL);
+ 	ASSERT_EQ(total_ticks, TICKS_COUNT);
+ 	pclose(rfp);
++	free(buf);
+ }
+ 
+ TEST(wrong_timers_test) {
+diff --git a/tools/testing/selftests/arm64/fp/sve-ptrace.c b/tools/testing/selftests/arm64/fp/sve-ptrace.c
+index 577b6e05e860c9..c499d5789dd53f 100644
+--- a/tools/testing/selftests/arm64/fp/sve-ptrace.c
++++ b/tools/testing/selftests/arm64/fp/sve-ptrace.c
+@@ -253,7 +253,7 @@ static void ptrace_set_get_vl(pid_t child, const struct vec_type *type,
+ 		return;
+ 	}
+ 
+-	ksft_test_result(new_sve->vl = prctl_vl, "Set %s VL %u\n",
++	ksft_test_result(new_sve->vl == prctl_vl, "Set %s VL %u\n",
+ 			 type->name, vl);
+ 
+ 	free(new_sve);
+diff --git a/tools/testing/selftests/bpf/bpf_atomic.h b/tools/testing/selftests/bpf/bpf_atomic.h
+index a9674e54432228..c550e571196774 100644
+--- a/tools/testing/selftests/bpf/bpf_atomic.h
++++ b/tools/testing/selftests/bpf/bpf_atomic.h
+@@ -61,7 +61,7 @@ extern bool CONFIG_X86_64 __kconfig __weak;
+ 
+ #define smp_mb()                                 \
+ 	({                                       \
+-		unsigned long __val;             \
++		volatile unsigned long __val;    \
+ 		__sync_fetch_and_add(&__val, 0); \
+ 	})
+ 
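
The volatile qualifier above is what keeps the dummy read-modify-write
alive: without it the compiler may prove __val unused, delete the
__sync_fetch_and_add(), and with it the full barrier it emits. A
compilable sketch of the same construct (illustrative only):

	#define my_smp_mb()                              \
		({                                       \
			volatile unsigned long __val;    \
			__sync_fetch_and_add(&__val, 0); \
		})

	volatile int data, flag;

	void producer(void)
	{
		data = 42;
		my_smp_mb();	/* order the data store before the flag */
		flag = 1;
	}
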
+diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
+index 4ee1148d22be3d..1cfed83156b035 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
++++ b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
+@@ -924,6 +924,8 @@ static void redir_partial(int family, int sotype, int sock_map, int parser_map)
+ 		goto close;
+ 
+ 	n = xsend(c1, buf, sizeof(buf), 0);
++	if (n == -1)
++		goto close;
+ 	if (n < sizeof(buf))
+ 		FAIL("incomplete write");
+ 
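
The added -1 check matters because of the signed/unsigned trap in the
line after it: n is signed while sizeof(buf) is a size_t, so in
"n < sizeof(buf)" a -1 converts to SIZE_MAX and the incomplete-write
branch could never fire on error. A standalone demonstration:

	#include <stdio.h>

	int main(void)
	{
		char buf[512];
		long n = -1;	/* what a failed send() reports */

		if ((unsigned long)n < sizeof(buf))
			printf("short write detected\n");
		else
			printf("-1 compares as %lu, so the check is skipped\n",
			       (unsigned long)n);
		return 0;
	}
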
+diff --git a/tools/testing/selftests/bpf/veristat.c b/tools/testing/selftests/bpf/veristat.c
+index a18972ffdeb6f2..2ff2c064f04577 100644
+--- a/tools/testing/selftests/bpf/veristat.c
++++ b/tools/testing/selftests/bpf/veristat.c
+@@ -344,6 +344,7 @@ static error_t parse_arg(int key, char *arg, struct argp_state *state)
+ 			fprintf(stderr, "invalid top N specifier: %s\n", arg);
+ 			argp_usage(state);
+ 		}
++		break;
+ 	case 'C':
+ 		env.comparison_mode = true;
+ 		break;
+diff --git a/tools/testing/selftests/breakpoints/step_after_suspend_test.c b/tools/testing/selftests/breakpoints/step_after_suspend_test.c
+index 8d275f03e977f5..8d233ac95696be 100644
+--- a/tools/testing/selftests/breakpoints/step_after_suspend_test.c
++++ b/tools/testing/selftests/breakpoints/step_after_suspend_test.c
+@@ -127,22 +127,42 @@ int run_test(int cpu)
+ 	return KSFT_PASS;
+ }
+ 
++/*
++ * Reads the suspend success count from sysfs.
++ * Returns the count on success or exits on failure.
++ */
++static int get_suspend_success_count_or_fail(void)
++{
++	FILE *fp;
++	int val;
++
++	fp = fopen("/sys/power/suspend_stats/success", "r");
++	if (!fp)
++		ksft_exit_fail_msg(
++			"Failed to open suspend_stats/success: %s\n",
++			strerror(errno));
++
++	if (fscanf(fp, "%d", &val) != 1) {
++		fclose(fp);
++		ksft_exit_fail_msg(
++			"Failed to read suspend success count\n");
++	}
++
++	fclose(fp);
++	return val;
++}
++
+ void suspend(void)
+ {
+-	int power_state_fd;
+ 	int timerfd;
+ 	int err;
++	int count_before;
++	int count_after;
+ 	struct itimerspec spec = {};
+ 
+ 	if (getuid() != 0)
+ 		ksft_exit_skip("Please run the test as root - Exiting.\n");
+ 
+-	power_state_fd = open("/sys/power/state", O_RDWR);
+-	if (power_state_fd < 0)
+-		ksft_exit_fail_msg(
+-			"open(\"/sys/power/state\") failed %s)\n",
+-			strerror(errno));
+-
+ 	timerfd = timerfd_create(CLOCK_BOOTTIME_ALARM, 0);
+ 	if (timerfd < 0)
+ 		ksft_exit_fail_msg("timerfd_create() failed\n");
+@@ -152,14 +172,15 @@ void suspend(void)
+ 	if (err < 0)
+ 		ksft_exit_fail_msg("timerfd_settime() failed\n");
+ 
++	count_before = get_suspend_success_count_or_fail();
++
+ 	system("(echo mem > /sys/power/state) 2> /dev/null");
+ 
+-	timerfd_gettime(timerfd, &spec);
+-	if (spec.it_value.tv_sec != 0 || spec.it_value.tv_nsec != 0)
++	count_after = get_suspend_success_count_or_fail();
++	if (count_after <= count_before)
+ 		ksft_exit_fail_msg("Failed to enter Suspend state\n");
+ 
+ 	close(timerfd);
+-	close(power_state_fd);
+ }
+ 
+ int main(int argc, char **argv)
+diff --git a/tools/testing/selftests/drivers/net/hw/tso.py b/tools/testing/selftests/drivers/net/hw/tso.py
+index 3370827409aa02..5fddb5056a2053 100755
+--- a/tools/testing/selftests/drivers/net/hw/tso.py
++++ b/tools/testing/selftests/drivers/net/hw/tso.py
+@@ -102,7 +102,7 @@ def build_tunnel(cfg, outer_ipver, tun_info):
+     remote_addr = cfg.remote_addr_v[outer_ipver]
+ 
+     tun_type = tun_info[0]
+-    tun_arg  = tun_info[2]
++    tun_arg  = tun_info[1]
+     ip(f"link add {tun_type}-ksft type {tun_type} {tun_arg} local {local_addr} remote {remote_addr} dev {cfg.ifname}")
+     defer(ip, f"link del {tun_type}-ksft")
+     ip(f"link set dev {tun_type}-ksft up")
+@@ -119,15 +119,30 @@ def build_tunnel(cfg, outer_ipver, tun_info):
+     return remote_v4, remote_v6
+ 
+ 
++def restore_wanted_features(cfg):
++    features_cmd = ""
++    for feature in cfg.hw_features:
++        setting = "on" if feature in cfg.wanted_features else "off"
++        features_cmd += f" {feature} {setting}"
++    try:
++        ethtool(f"-K {cfg.ifname} {features_cmd}")
++    except Exception as e:
++        ksft_pr(f"WARNING: failure restoring wanted features: {e}")
++
++
+ def test_builder(name, cfg, outer_ipver, feature, tun=None, inner_ipver=None):
+     """Construct specific tests from the common template."""
+     def f(cfg):
+         cfg.require_ipver(outer_ipver)
++        defer(restore_wanted_features, cfg)
+ 
+         if not cfg.have_stat_super_count and \
+            not cfg.have_stat_wire_count:
+             raise KsftSkipEx(f"Device does not support LSO queue stats")
+ 
++        if feature not in cfg.hw_features:
++            raise KsftSkipEx(f"Device does not support {feature}")
++
+         ipver = outer_ipver
+         if tun:
+             remote_v4, remote_v6 = build_tunnel(cfg, ipver, tun)
+@@ -136,36 +151,21 @@ def test_builder(name, cfg, outer_ipver, feature, tun=None, inner_ipver=None):
+             remote_v4 = cfg.remote_addr_v["4"]
+             remote_v6 = cfg.remote_addr_v["6"]
+ 
+-        tun_partial = tun and tun[1]
+-        # Tunnel which can silently fall back to gso-partial
+-        has_gso_partial = tun and 'tx-gso-partial' in cfg.features
+-
+-        # For TSO4 via partial we need mangleid
+-        if ipver == "4" and feature in cfg.partial_features:
+-            ksft_pr("Testing with mangleid enabled")
+-            if 'tx-tcp-mangleid-segmentation' not in cfg.features:
+-                ethtool(f"-K {cfg.ifname} tx-tcp-mangleid-segmentation on")
+-                defer(ethtool, f"-K {cfg.ifname} tx-tcp-mangleid-segmentation off")
+-
+         # First test without the feature enabled.
+         ethtool(f"-K {cfg.ifname} {feature} off")
+-        if has_gso_partial:
+-            ethtool(f"-K {cfg.ifname} tx-gso-partial off")
+         run_one_stream(cfg, ipver, remote_v4, remote_v6, should_lso=False)
+ 
+-        # Now test with the feature enabled.
+-        # For compatible tunnels only - just GSO partial, not specific feature.
+-        if has_gso_partial:
++        ethtool(f"-K {cfg.ifname} tx-gso-partial off")
++        ethtool(f"-K {cfg.ifname} tx-tcp-mangleid-segmentation off")
++        if feature in cfg.partial_features:
+             ethtool(f"-K {cfg.ifname} tx-gso-partial on")
+-            run_one_stream(cfg, ipver, remote_v4, remote_v6,
+-                           should_lso=tun_partial)
++            if ipver == "4":
++                ksft_pr("Testing with mangleid enabled")
++                ethtool(f"-K {cfg.ifname} tx-tcp-mangleid-segmentation on")
+ 
+         # Full feature enabled.
+-        if feature in cfg.features:
+-            ethtool(f"-K {cfg.ifname} {feature} on")
+-            run_one_stream(cfg, ipver, remote_v4, remote_v6, should_lso=True)
+-        else:
+-            raise KsftXfailEx(f"Device does not support {feature}")
++        ethtool(f"-K {cfg.ifname} {feature} on")
++        run_one_stream(cfg, ipver, remote_v4, remote_v6, should_lso=True)
+ 
+     f.__name__ = name + ((outer_ipver + "_") if tun else "") + "ipv" + inner_ipver
+     return f
+@@ -176,23 +176,39 @@ def query_nic_features(cfg) -> None:
+     cfg.have_stat_super_count = False
+     cfg.have_stat_wire_count = False
+ 
+-    cfg.features = set()
+     features = cfg.ethnl.features_get({"header": {"dev-index": cfg.ifindex}})
+-    for f in features["active"]["bits"]["bit"]:
+-        cfg.features.add(f["name"])
++
++    cfg.wanted_features = set()
++    for f in features["wanted"]["bits"]["bit"]:
++        cfg.wanted_features.add(f["name"])
++
++    cfg.hw_features = set()
++    hw_all_features_cmd = ""
++    for f in features["hw"]["bits"]["bit"]:
++        if f.get("value", False):
++            feature = f["name"]
++            cfg.hw_features.add(feature)
++            hw_all_features_cmd += f" {feature} on"
++    try:
++        ethtool(f"-K {cfg.ifname} {hw_all_features_cmd}")
++    except Exception as e:
++        ksft_pr(f"WARNING: failure enabling all hw features: {e}")
++        ksft_pr("partial gso feature detection may be impacted")
+ 
+     # Check which features are supported via GSO partial
+     cfg.partial_features = set()
+-    if 'tx-gso-partial' in cfg.features:
++    if 'tx-gso-partial' in cfg.hw_features:
+         ethtool(f"-K {cfg.ifname} tx-gso-partial off")
+ 
+         no_partial = set()
+         features = cfg.ethnl.features_get({"header": {"dev-index": cfg.ifindex}})
+         for f in features["active"]["bits"]["bit"]:
+             no_partial.add(f["name"])
+-        cfg.partial_features = cfg.features - no_partial
++        cfg.partial_features = cfg.hw_features - no_partial
+         ethtool(f"-K {cfg.ifname} tx-gso-partial on")
+ 
++    restore_wanted_features(cfg)
++
+     stats = cfg.netnl.qstats_get({"ifindex": cfg.ifindex}, dump=True)
+     if stats:
+         if 'tx-hw-gso-packets' in stats[0]:
+@@ -211,13 +227,14 @@ def main() -> None:
+         query_nic_features(cfg)
+ 
+         test_info = (
+-            # name,       v4/v6  ethtool_feature              tun:(type,    partial, args)
+-            ("",            "4", "tx-tcp-segmentation",           None),
+-            ("",            "6", "tx-tcp6-segmentation",          None),
+-            ("vxlan",        "", "tx-udp_tnl-segmentation",       ("vxlan",  True,  "id 100 dstport 4789 noudpcsum")),
+-            ("vxlan_csum",   "", "tx-udp_tnl-csum-segmentation",  ("vxlan",  False, "id 100 dstport 4789 udpcsum")),
+-            ("gre",         "4", "tx-gre-segmentation",           ("gre",    False,  "")),
+-            ("gre",         "6", "tx-gre-segmentation",           ("ip6gre", False,  "")),
++            # name,       v4/v6  ethtool_feature               tun:(type, args, inner ip versions)
++            ("",           "4", "tx-tcp-segmentation",         None),
++            ("",           "6", "tx-tcp6-segmentation",        None),
++            ("vxlan",      "4", "tx-udp_tnl-segmentation",     ("vxlan", "id 100 dstport 4789 noudpcsum", ("4", "6"))),
++            ("vxlan",      "6", "tx-udp_tnl-segmentation",     ("vxlan", "id 100 dstport 4789 udp6zerocsumtx udp6zerocsumrx", ("4", "6"))),
++            ("vxlan_csum", "", "tx-udp_tnl-csum-segmentation", ("vxlan", "id 100 dstport 4789 udpcsum", ("4", "6"))),
++            ("gre",        "4", "tx-gre-segmentation",         ("gre",   "", ("4", "6"))),
++            ("gre",        "6", "tx-gre-segmentation",         ("ip6gre","", ("4", "6"))),
+         )
+ 
+         cases = []
+@@ -227,11 +244,13 @@ def main() -> None:
+                 if info[1] and outer_ipver != info[1]:
+                     continue
+ 
+-                cases.append(test_builder(info[0], cfg, outer_ipver, info[2],
+-                                          tun=info[3], inner_ipver="4"))
+                 if info[3]:
+-                    cases.append(test_builder(info[0], cfg, outer_ipver, info[2],
+-                                              tun=info[3], inner_ipver="6"))
++                    cases += [
++                        test_builder(info[0], cfg, outer_ipver, info[2], info[3], inner_ipver)
++                        for inner_ipver in info[3][2]
++                    ]
++                else:
++                    cases.append(test_builder(info[0], cfg, outer_ipver, info[2], None, outer_ipver))
+ 
+         ksft_run(cases=cases, args=(cfg, ))
+     ksft_exit()
+diff --git a/tools/testing/selftests/drivers/net/lib/py/env.py b/tools/testing/selftests/drivers/net/lib/py/env.py
+index ad5ff645183acd..98bfc1e9e9ca4e 100644
+--- a/tools/testing/selftests/drivers/net/lib/py/env.py
++++ b/tools/testing/selftests/drivers/net/lib/py/env.py
+@@ -259,7 +259,7 @@ class NetDrvEpEnv(NetDrvEnvBase):
+             if not self._require_cmd(comm, "local"):
+                 raise KsftSkipEx("Test requires command: " + comm)
+         if remote:
+-            if not self._require_cmd(comm, "remote"):
++            if not self._require_cmd(comm, "remote", host=self.remote):
+                 raise KsftSkipEx("Test requires (remote) command: " + comm)
+ 
+     def wait_hw_stats_settle(self):
+diff --git a/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc b/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
+index b7c8f29c09a978..65916bb55dfbbf 100644
+--- a/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
++++ b/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
+@@ -14,11 +14,35 @@ fail() { #msg
+     exit_fail
+ }
+ 
++# As reading the trace file can last forever, simply look for 3
++# different events, then stop reading the file. If there are not 3
++# different events, the test has failed.
++check_unique() {
++    cat trace | grep -v '^#' | awk '
++	BEGIN { cnt = 0; }
++	{
++	    for (i = 0; i < cnt; i++) {
++		if (event[i] == $5) {
++		    break;
++		}
++	    }
++	    if (i == cnt) {
++		event[cnt++] = $5;
++		if (cnt > 2) {
++		    exit;
++		}
++	    }
++	}
++	END {
++	    printf "%d", cnt;
++	}'
++}
++
+ echo 'sched:*' > set_event
+ 
+ yield
+ 
+-count=`head -n 100 trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
++count=`check_unique`
+ if [ $count -lt 3 ]; then
+     fail "at least fork, exec and exit events should be recorded"
+ fi
+@@ -29,7 +53,7 @@ echo 1 > events/sched/enable
+ 
+ yield
+ 
+-count=`head -n 100 trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
++count=`check_unique`
+ if [ $count -lt 3 ]; then
+     fail "at least fork, exec and exit events should be recorded"
+ fi
+diff --git a/tools/testing/selftests/landlock/audit.h b/tools/testing/selftests/landlock/audit.h
+index 18a6014920b5f8..b16986aa64427b 100644
+--- a/tools/testing/selftests/landlock/audit.h
++++ b/tools/testing/selftests/landlock/audit.h
+@@ -403,11 +403,12 @@ static int audit_init_filter_exe(struct audit_filter *filter, const char *path)
+ 	/* It is assumed that there are no filtering rules already. */
+ 	filter->record_type = AUDIT_EXE;
+ 	if (!path) {
+-		filter->exe_len = readlink("/proc/self/exe", filter->exe,
+-					   sizeof(filter->exe) - 1);
+-		if (filter->exe_len < 0)
++		int ret = readlink("/proc/self/exe", filter->exe,
++				   sizeof(filter->exe) - 1);
++		if (ret < 0)
+ 			return -errno;
+ 
++		filter->exe_len = ret;
+ 		return 0;
+ 	}
+ 
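
The intermediate int above is the whole fix: readlink() returns a
ssize_t, and storing it straight into the exe_len field made the error
test depend on that field's type. The same pattern in isolation (the
helper name is made up):

	#include <unistd.h>

	/* Capture the ssize_t result in a local first, so -1 stays
	 * visible regardless of the type it is later stored in. */
	static int read_self_exe(char *buf, size_t len)
	{
		int ret = readlink("/proc/self/exe", buf, len - 1);

		if (ret < 0)
			return -1;
		buf[ret] = '\0';
		return ret;
	}
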
+diff --git a/tools/testing/selftests/landlock/audit_test.c b/tools/testing/selftests/landlock/audit_test.c
+index cfc571afd0eb81..46d02d49835aae 100644
+--- a/tools/testing/selftests/landlock/audit_test.c
++++ b/tools/testing/selftests/landlock/audit_test.c
+@@ -7,6 +7,7 @@
+ 
+ #define _GNU_SOURCE
+ #include <errno.h>
++#include <fcntl.h>
+ #include <limits.h>
+ #include <linux/landlock.h>
+ #include <pthread.h>
+diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
+index 2e8243a65b5070..d2298da320a673 100755
+--- a/tools/testing/selftests/net/rtnetlink.sh
++++ b/tools/testing/selftests/net/rtnetlink.sh
+@@ -673,6 +673,11 @@ kci_test_ipsec_offload()
+ 	sysfsf=$sysfsd/ipsec
+ 	sysfsnet=/sys/bus/netdevsim/devices/netdevsim0/net/
+ 	probed=false
++	esp4_offload_probed_default=false
++
++	if lsmod | grep -q esp4_offload; then
++		esp4_offload_probed_default=true
++	fi
+ 
+ 	if ! mount | grep -q debugfs; then
+ 		mount -t debugfs none /sys/kernel/debug/ &> /dev/null
+@@ -766,6 +771,7 @@ EOF
+ 	fi
+ 
+ 	# clean up any leftovers
++	! "$esp4_offload_probed_default" && lsmod | grep -q esp4_offload && rmmod esp4_offload
+ 	echo 0 > /sys/bus/netdevsim/del_device
+ 	$probed && rmmod netdevsim
+ 
+diff --git a/tools/testing/selftests/perf_events/.gitignore b/tools/testing/selftests/perf_events/.gitignore
+index ee93dc4969b8b5..4931b3b6bbd397 100644
+--- a/tools/testing/selftests/perf_events/.gitignore
++++ b/tools/testing/selftests/perf_events/.gitignore
+@@ -2,3 +2,4 @@
+ sigtrap_threads
+ remove_on_exec
+ watermark_signal
++mmap
+diff --git a/tools/testing/selftests/perf_events/Makefile b/tools/testing/selftests/perf_events/Makefile
+index 70e3ff21127890..2e5d85770dfead 100644
+--- a/tools/testing/selftests/perf_events/Makefile
++++ b/tools/testing/selftests/perf_events/Makefile
+@@ -2,5 +2,5 @@
+ CFLAGS += -Wl,-no-as-needed -Wall $(KHDR_INCLUDES)
+ LDFLAGS += -lpthread
+ 
+-TEST_GEN_PROGS := sigtrap_threads remove_on_exec watermark_signal
++TEST_GEN_PROGS := sigtrap_threads remove_on_exec watermark_signal mmap
+ include ../lib.mk
+diff --git a/tools/testing/selftests/perf_events/mmap.c b/tools/testing/selftests/perf_events/mmap.c
+new file mode 100644
+index 00000000000000..ea0427aac1f98f
+--- /dev/null
++++ b/tools/testing/selftests/perf_events/mmap.c
+@@ -0,0 +1,236 @@
++// SPDX-License-Identifier: GPL-2.0-only
++#define _GNU_SOURCE
++
++#include <dirent.h>
++#include <sched.h>
++#include <stdbool.h>
++#include <stdio.h>
++#include <unistd.h>
++
++#include <sys/ioctl.h>
++#include <sys/mman.h>
++#include <sys/syscall.h>
++#include <sys/types.h>
++
++#include <linux/perf_event.h>
++
++#include "../kselftest_harness.h"
++
++#define RB_SIZE		0x3000
++#define AUX_SIZE	0x10000
++#define AUX_OFFS	0x4000
++
++#define HOLE_SIZE	0x1000
++
++/* Reserve space for rb and aux, with room for shrink-beyond-VMA testing. */
++#define REGION_SIZE	(2 * RB_SIZE + 2 * AUX_SIZE)
++#define REGION_AUX_OFFS (2 * RB_SIZE)
++
++#define MAP_BASE	1
++#define MAP_AUX		2
++
++#define EVENT_SRC_DIR	"/sys/bus/event_source/devices"
++
++FIXTURE(perf_mmap)
++{
++	int		fd;
++	void		*ptr;
++	void		*region;
++};
++
++FIXTURE_VARIANT(perf_mmap)
++{
++	bool		aux;
++	unsigned long	ptr_size;
++};
++
++FIXTURE_VARIANT_ADD(perf_mmap, rb)
++{
++	.aux = false,
++	.ptr_size = RB_SIZE,
++};
++
++FIXTURE_VARIANT_ADD(perf_mmap, aux)
++{
++	.aux = true,
++	.ptr_size = AUX_SIZE,
++};
++
++static bool read_event_type(struct dirent *dent, __u32 *type)
++{
++	char typefn[512];
++	FILE *fp;
++	int res;
++
++	snprintf(typefn, sizeof(typefn), "%s/%s/type", EVENT_SRC_DIR, dent->d_name);
++	fp = fopen(typefn, "r");
++	if (!fp)
++		return false;
++
++	res = fscanf(fp, "%u", type);
++	fclose(fp);
++	return res > 0;
++}
++
++FIXTURE_SETUP(perf_mmap)
++{
++	struct perf_event_attr attr = {
++		.size		= sizeof(attr),
++		.disabled	= 1,
++		.exclude_kernel	= 1,
++		.exclude_hv	= 1,
++	};
++	struct perf_event_attr attr_ok = {};
++	unsigned int eacces = 0, map = 0;
++	struct perf_event_mmap_page *rb;
++	struct dirent *dent;
++	void *aux, *region;
++	DIR *dir;
++
++	self->ptr = NULL;
++
++	dir = opendir(EVENT_SRC_DIR);
++	if (!dir)
++		SKIP(return, "perf not available.");
++
++	region = mmap(NULL, REGION_SIZE, PROT_NONE, MAP_ANON | MAP_PRIVATE, -1, 0);
++	ASSERT_NE(region, MAP_FAILED);
++	self->region = region;
++
++	// Try to find a suitable event on this system
++	while ((dent = readdir(dir))) {
++		int fd;
++
++		if (!read_event_type(dent, &attr.type))
++			continue;
++
++		fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
++		if (fd < 0) {
++			if (errno == EACCES)
++				eacces++;
++			continue;
++		}
++
++		// Check whether the event supports mmap()
++		rb = mmap(region, RB_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);
++		if (rb == MAP_FAILED) {
++			close(fd);
++			continue;
++		}
++
++		if (!map) {
++			// Save the event in case no AUX-capable event is found
++			attr_ok = attr;
++			map = MAP_BASE;
++		}
++
++		if (!variant->aux)
++			continue;
++
++		rb->aux_offset = AUX_OFFS;
++		rb->aux_size = AUX_SIZE;
++
++		// Check whether it supports an AUX buffer
++		aux = mmap(region + REGION_AUX_OFFS, AUX_SIZE, PROT_READ | PROT_WRITE,
++			   MAP_SHARED | MAP_FIXED, fd, AUX_OFFS);
++		if (aux == MAP_FAILED) {
++			munmap(rb, RB_SIZE);
++			close(fd);
++			continue;
++		}
++
++		attr_ok = attr;
++		map = MAP_AUX;
++		munmap(aux, AUX_SIZE);
++		munmap(rb, RB_SIZE);
++		close(fd);
++		break;
++	}
++	closedir(dir);
++
++	if (!map) {
++		if (!eacces)
++			SKIP(return, "No mappable perf event found.");
++		else
++			SKIP(return, "No permissions for perf_event_open()");
++	}
++
++	self->fd = syscall(SYS_perf_event_open, &attr_ok, 0, -1, -1, 0);
++	ASSERT_NE(self->fd, -1);
++
++	rb = mmap(region, RB_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, self->fd, 0);
++	ASSERT_NE(rb, MAP_FAILED);
++
++	if (!variant->aux) {
++		self->ptr = rb;
++		return;
++	}
++
++	if (map != MAP_AUX)
++		SKIP(return, "No AUX event found.");
++
++	rb->aux_offset = AUX_OFFS;
++	rb->aux_size = AUX_SIZE;
++	aux = mmap(region + REGION_AUX_OFFS, AUX_SIZE, PROT_READ | PROT_WRITE,
++		   MAP_SHARED | MAP_FIXED, self->fd, AUX_OFFS);
++	ASSERT_NE(aux, MAP_FAILED);
++	self->ptr = aux;
++}
++
++FIXTURE_TEARDOWN(perf_mmap)
++{
++	ASSERT_EQ(munmap(self->region, REGION_SIZE), 0);
++	if (self->fd != -1)
++		ASSERT_EQ(close(self->fd), 0);
++}
++
++TEST_F(perf_mmap, remap)
++{
++	void *tmp, *ptr = self->ptr;
++	unsigned long size = variant->ptr_size;
++
++	// Test the invalid remaps
++	ASSERT_EQ(mremap(ptr, size, HOLE_SIZE, MREMAP_MAYMOVE), MAP_FAILED);
++	ASSERT_EQ(mremap(ptr + HOLE_SIZE, size, HOLE_SIZE, MREMAP_MAYMOVE), MAP_FAILED);
++	ASSERT_EQ(mremap(ptr + size - HOLE_SIZE, HOLE_SIZE, size, MREMAP_MAYMOVE), MAP_FAILED);
++	// Shrink the end of the mapping such that we only unmap past the end
++	// of the VMA, which should succeed and poke a hole into the PROT_NONE region
++	ASSERT_NE(mremap(ptr + size - HOLE_SIZE, size, HOLE_SIZE, MREMAP_MAYMOVE), MAP_FAILED);
++
++	// Remap the whole buffer to a new address
++	tmp = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
++	ASSERT_NE(tmp, MAP_FAILED);
++
++	// Try remapping from one hole size into the VMA, which would split it; this should fail
++	ASSERT_EQ(mremap(ptr + HOLE_SIZE, size - HOLE_SIZE, size - HOLE_SIZE,
++			 MREMAP_MAYMOVE | MREMAP_FIXED, tmp), MAP_FAILED);
++	// Remapping the whole thing should succeed fine
++	ptr = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, tmp);
++	ASSERT_EQ(ptr, tmp);
++	ASSERT_EQ(munmap(tmp, size), 0);
++}
++
++TEST_F(perf_mmap, unmap)
++{
++	unsigned long size = variant->ptr_size;
++
++	// Try to poke holes into the mappings
++	ASSERT_NE(munmap(self->ptr, HOLE_SIZE), 0);
++	ASSERT_NE(munmap(self->ptr + HOLE_SIZE, HOLE_SIZE), 0);
++	ASSERT_NE(munmap(self->ptr + size - HOLE_SIZE, HOLE_SIZE), 0);
++}
++
++TEST_F(perf_mmap, map)
++{
++	unsigned long size = variant->ptr_size;
++
++	// Try to poke holes into the mappings by mapping anonymous memory over it
++	ASSERT_EQ(mmap(self->ptr, HOLE_SIZE, PROT_READ | PROT_WRITE,
++		       MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0), MAP_FAILED);
++	ASSERT_EQ(mmap(self->ptr + HOLE_SIZE, HOLE_SIZE, PROT_READ | PROT_WRITE,
++		       MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0), MAP_FAILED);
++	ASSERT_EQ(mmap(self->ptr + size - HOLE_SIZE, HOLE_SIZE, PROT_READ | PROT_WRITE,
++		       MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0), MAP_FAILED);
++}
++
++TEST_HARNESS_MAIN
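
The new test pins down the invariant that perf ring buffers and AUX
buffers may only be unmapped or moved as a whole; partial munmap(),
mremap() and overlaying mmap() calls must all fail. The shape of each
check, trimmed to one case (illustrative):

	#include <stdio.h>
	#include <sys/mman.h>

	static int expect_partial_unmap_fails(void *ptr, unsigned long hole)
	{
		if (munmap(ptr, hole) == 0) {
			fprintf(stderr, "partial unmap was allowed\n");
			return -1;
		}
		return 0;
	}
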
+diff --git a/tools/testing/selftests/syscall_user_dispatch/sud_test.c b/tools/testing/selftests/syscall_user_dispatch/sud_test.c
+index d975a67673299f..48cf01aeec3e77 100644
+--- a/tools/testing/selftests/syscall_user_dispatch/sud_test.c
++++ b/tools/testing/selftests/syscall_user_dispatch/sud_test.c
+@@ -79,6 +79,21 @@ TEST_SIGNAL(dispatch_trigger_sigsys, SIGSYS)
+ 	}
+ }
+ 
++static void prctl_valid(struct __test_metadata *_metadata,
++			unsigned long op, unsigned long off,
++			unsigned long size, void *sel)
++{
++	EXPECT_EQ(0, prctl(PR_SET_SYSCALL_USER_DISPATCH, op, off, size, sel));
++}
++
++static void prctl_invalid(struct __test_metadata *_metadata,
++			  unsigned long op, unsigned long off,
++			  unsigned long size, void *sel, int err)
++{
++	EXPECT_EQ(-1, prctl(PR_SET_SYSCALL_USER_DISPATCH, op, off, size, sel));
++	EXPECT_EQ(err, errno);
++}
++
+ TEST(bad_prctl_param)
+ {
+ 	char sel = SYSCALL_DISPATCH_FILTER_ALLOW;
+@@ -86,57 +101,42 @@ TEST(bad_prctl_param)
+ 
+ 	/* Invalid op */
+ 	op = -1;
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0, 0, &sel);
+-	ASSERT_EQ(EINVAL, errno);
++	prctl_invalid(_metadata, op, 0, 0, &sel, EINVAL);
+ 
+ 	/* PR_SYS_DISPATCH_OFF */
+ 	op = PR_SYS_DISPATCH_OFF;
+ 
+ 	/* offset != 0 */
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x1, 0x0, 0);
+-	EXPECT_EQ(EINVAL, errno);
++	prctl_invalid(_metadata, op, 0x1, 0x0, 0, EINVAL);
+ 
+ 	/* len != 0 */
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x0, 0xff, 0);
+-	EXPECT_EQ(EINVAL, errno);
++	prctl_invalid(_metadata, op, 0x0, 0xff, 0, EINVAL);
+ 
+ 	/* sel != NULL */
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x0, 0x0, &sel);
+-	EXPECT_EQ(EINVAL, errno);
++	prctl_invalid(_metadata, op, 0x0, 0x0, &sel, EINVAL);
+ 
+ 	/* Valid parameter */
+-	errno = 0;
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x0, 0x0, 0x0);
+-	EXPECT_EQ(0, errno);
++	prctl_valid(_metadata, op, 0x0, 0x0, 0x0);
+ 
+ 	/* PR_SYS_DISPATCH_ON */
+ 	op = PR_SYS_DISPATCH_ON;
+ 
+ 	/* Dispatcher region is bad (offset > 0 && len == 0) */
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x1, 0x0, &sel);
+-	EXPECT_EQ(EINVAL, errno);
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, op, -1L, 0x0, &sel);
+-	EXPECT_EQ(EINVAL, errno);
++	prctl_invalid(_metadata, op, 0x1, 0x0, &sel, EINVAL);
++	prctl_invalid(_metadata, op, -1L, 0x0, &sel, EINVAL);
+ 
+ 	/* Invalid selector */
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x0, 0x1, (void *) -1);
+-	ASSERT_EQ(EFAULT, errno);
++	prctl_invalid(_metadata, op, 0x0, 0x1, (void *) -1, EFAULT);
+ 
+ 	/*
+ 	 * Dispatcher range overflows unsigned long
+ 	 */
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, PR_SYS_DISPATCH_ON, 1, -1L, &sel);
+-	ASSERT_EQ(EINVAL, errno) {
+-		TH_LOG("Should reject bad syscall range");
+-	}
++	prctl_invalid(_metadata, PR_SYS_DISPATCH_ON, 1, -1L, &sel, EINVAL);
+ 
+ 	/*
+	 * Allowed range overflows unsigned long
+ 	 */
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, PR_SYS_DISPATCH_ON, -1L, 0x1, &sel);
+-	ASSERT_EQ(EINVAL, errno) {
+-		TH_LOG("Should reject bad syscall range");
+-	}
++	prctl_invalid(_metadata, PR_SYS_DISPATCH_ON, -1L, 0x1, &sel, EINVAL);
+ }
+ 
+ /*
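
For reference, the valid PR_SYS_DISPATCH_ON call that prctl_valid()
exercises takes an allowed region plus a selector byte; a minimal sketch
using the uapi constants (values and wrapper name are illustrative):

	#include <sys/prctl.h>
	#include <linux/prctl.h>

	static char sel = SYSCALL_DISPATCH_FILTER_ALLOW;

	/* While sel holds SYSCALL_DISPATCH_FILTER_BLOCK, syscalls issued
	 * from outside [start, start + len) are delivered as SIGSYS. */
	static int sud_on(unsigned long start, unsigned long len)
	{
		return prctl(PR_SET_SYSCALL_USER_DISPATCH,
			     PR_SYS_DISPATCH_ON, start, len, &sel);
	}
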
+diff --git a/tools/testing/selftests/vDSO/vdso_test_chacha.c b/tools/testing/selftests/vDSO/vdso_test_chacha.c
+index 8757f738b0b1a7..0aad682b12c883 100644
+--- a/tools/testing/selftests/vDSO/vdso_test_chacha.c
++++ b/tools/testing/selftests/vDSO/vdso_test_chacha.c
+@@ -76,7 +76,8 @@ static void reference_chacha20_blocks(uint8_t *dst_bytes, const uint32_t *key, u
+ 
+ void __weak __arch_chacha20_blocks_nostack(uint8_t *dst_bytes, const uint32_t *key, uint32_t *counter, size_t nblocks)
+ {
+-	ksft_exit_skip("Not implemented on architecture\n");
++	ksft_test_result_skip("Not implemented on architecture\n");
++	ksft_finished();
+ }
+ 
+ int main(int argc, char *argv[])
+diff --git a/tools/verification/rv/src/in_kernel.c b/tools/verification/rv/src/in_kernel.c
+index c0dcee795c0de0..4bb746ea6e1735 100644
+--- a/tools/verification/rv/src/in_kernel.c
++++ b/tools/verification/rv/src/in_kernel.c
+@@ -431,7 +431,7 @@ ikm_event_handler(struct trace_seq *s, struct tep_record *record,
+ 
+ 	if (config_has_id && (config_my_pid == id))
+ 		return 0;
+-	else if (config_my_pid && (config_my_pid == pid))
++	else if (config_my_pid == pid)
+ 		return 0;
+ 
+ 	tep_print_event(trace_event->tep, s, record, "%16s-%-8d [%.3d] ",
+@@ -734,7 +734,7 @@ static int parse_arguments(char *monitor_name, int argc, char **argv)
+ 			config_reactor = optarg;
+ 			break;
+ 		case 's':
+-			config_my_pid = 0;
++			config_my_pid = -1;
+ 			break;
+ 		case 't':
+ 			config_trace = 1;
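
Switching the 's' sentinel from 0 to -1 is what allows the simpler
equality test earlier in the hunk: pid 0 belongs to a real task, so 0
could not serve as both "unset" and a filterable pid, while -1 can never
match a real one. Reduced to its essence (illustrative):

	/* -1 never equals a real pid, so plain equality suffices. */
	static int filtered_out(int config_my_pid, int pid)
	{
		return config_my_pid == pid;
	}
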


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-08-21  1:10 Arisu Tachibana
  0 siblings, 0 replies; 19+ messages in thread
From: Arisu Tachibana @ 2025-08-21  1:10 UTC (permalink / raw
  To: gentoo-commits

commit:     c997089d42039a606c52a799b986107c8f28b14c
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 21 01:05:40 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Aug 21 01:05:40 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c997089d

Linux patch 6.15.11

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README              |     4 +
 1010_linux-6.15.11.patch | 25771 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 25775 insertions(+)

diff --git a/0000_README b/0000_README
index c97b9061..e249871f 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  1009_linux-6.15.10.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.15.10
 
+Patch:  1010_linux-6.15.11.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.15.11
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1010_linux-6.15.11.patch b/1010_linux-6.15.11.patch
new file mode 100644
index 00000000..a8f338ef
--- /dev/null
+++ b/1010_linux-6.15.11.patch
@@ -0,0 +1,25771 @@
+diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst
+index e8032990854922..b9a75e7122a834 100644
+--- a/Documentation/filesystems/fscrypt.rst
++++ b/Documentation/filesystems/fscrypt.rst
+@@ -140,9 +140,8 @@ However, these ioctls have some limitations:
+   were wiped.  To partially solve this, you can add init_on_free=1 to
+   your kernel command line.  However, this has a performance cost.
+ 
+-- Secret keys might still exist in CPU registers, in crypto
+-  accelerator hardware (if used by the crypto API to implement any of
+-  the algorithms), or in other places not explicitly considered here.
++- Secret keys might still exist in CPU registers or in other places
++  not explicitly considered here.
+ 
+ Limitations of v1 policies
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
+@@ -377,9 +376,12 @@ the work is done by XChaCha12, which is much faster than AES when AES
+ acceleration is unavailable.  For more information about Adiantum, see
+ `the Adiantum paper <https://eprint.iacr.org/2018/720.pdf>`_.
+ 
+-The (AES-128-CBC-ESSIV, AES-128-CBC-CTS) pair exists only to support
+-systems whose only form of AES acceleration is an off-CPU crypto
+-accelerator such as CAAM or CESA that does not support XTS.
++The (AES-128-CBC-ESSIV, AES-128-CBC-CTS) pair was added to try to
++provide a more efficient option for systems that lack AES instructions
++in the CPU but do have a non-inline crypto engine such as CAAM or CESA
++that supports AES-CBC (and not AES-XTS).  This is deprecated.  It has
++been shown that just doing AES on the CPU is actually faster.
++Moreover, Adiantum is faster still and is recommended on such systems.
+ 
+ The remaining mode pairs are the "national pride ciphers":
+ 
+@@ -1285,22 +1287,13 @@ this by validating all top-level encryption policies prior to access.
+ Inline encryption support
+ =========================
+ 
+-By default, fscrypt uses the kernel crypto API for all cryptographic
+-operations (other than HKDF, which fscrypt partially implements
+-itself).  The kernel crypto API supports hardware crypto accelerators,
+-but only ones that work in the traditional way where all inputs and
+-outputs (e.g. plaintexts and ciphertexts) are in memory.  fscrypt can
+-take advantage of such hardware, but the traditional acceleration
+-model isn't particularly efficient and fscrypt hasn't been optimized
+-for it.
+-
+-Instead, many newer systems (especially mobile SoCs) have *inline
+-encryption hardware* that can encrypt/decrypt data while it is on its
+-way to/from the storage device.  Linux supports inline encryption
+-through a set of extensions to the block layer called *blk-crypto*.
+-blk-crypto allows filesystems to attach encryption contexts to bios
+-(I/O requests) to specify how the data will be encrypted or decrypted
+-in-line.  For more information about blk-crypto, see
++Many newer systems (especially mobile SoCs) have *inline encryption
++hardware* that can encrypt/decrypt data while it is on its way to/from
++the storage device.  Linux supports inline encryption through a set of
++extensions to the block layer called *blk-crypto*.  blk-crypto allows
++filesystems to attach encryption contexts to bios (I/O requests) to
++specify how the data will be encrypted or decrypted in-line.  For more
++information about blk-crypto, see
+ :ref:`Documentation/block/inline-encryption.rst <inline_encryption>`.
+ 
+ On supported filesystems (currently ext4 and f2fs), fscrypt can use
+diff --git a/Documentation/firmware-guide/acpi/i2c-muxes.rst b/Documentation/firmware-guide/acpi/i2c-muxes.rst
+index 3a8997ccd7c4b6..f366539acd792a 100644
+--- a/Documentation/firmware-guide/acpi/i2c-muxes.rst
++++ b/Documentation/firmware-guide/acpi/i2c-muxes.rst
+@@ -14,7 +14,7 @@ Consider this topology::
+     |      |   | 0x70 |--CH01--> i2c client B (0x50)
+     +------+   +------+
+ 
+-which corresponds to the following ASL::
++which corresponds to the following ASL (in the scope of \_SB)::
+ 
+     Device (SMB1)
+     {
+@@ -24,7 +24,7 @@ which corresponds to the following ASL::
+             Name (_HID, ...)
+             Name (_CRS, ResourceTemplate () {
+                 I2cSerialBus (0x70, ControllerInitiated, I2C_SPEED,
+-                            AddressingMode7Bit, "^SMB1", 0x00,
++                            AddressingMode7Bit, "\\_SB.SMB1", 0x00,
+                             ResourceConsumer,,)
+             }
+ 
+@@ -37,7 +37,7 @@ which corresponds to the following ASL::
+                     Name (_HID, ...)
+                     Name (_CRS, ResourceTemplate () {
+                         I2cSerialBus (0x50, ControllerInitiated, I2C_SPEED,
+-                                    AddressingMode7Bit, "^CH00", 0x00,
++                                    AddressingMode7Bit, "\\_SB.SMB1.CH00", 0x00,
+                                     ResourceConsumer,,)
+                     }
+                 }
+@@ -52,7 +52,7 @@ which corresponds to the following ASL::
+                     Name (_HID, ...)
+                     Name (_CRS, ResourceTemplate () {
+                         I2cSerialBus (0x50, ControllerInitiated, I2C_SPEED,
+-                                    AddressingMode7Bit, "^CH01", 0x00,
++                                    AddressingMode7Bit, "\\_SB.SMB1.CH01", 0x00,
+                                     ResourceConsumer,,)
+                     }
+                 }
+diff --git a/Documentation/sphinx/kernel_abi.py b/Documentation/sphinx/kernel_abi.py
+index db6f0380de94cb..4c4375201b9ec3 100644
+--- a/Documentation/sphinx/kernel_abi.py
++++ b/Documentation/sphinx/kernel_abi.py
+@@ -146,8 +146,10 @@ class KernelCmd(Directive):
+                 n += 1
+ 
+             if f != old_f:
+-                # Add the file to Sphinx build dependencies
+-                env.note_dependency(os.path.abspath(f))
++                # Add the file to Sphinx build dependencies if the file exists
++                fname = os.path.join(srctree, f)
++                if os.path.isfile(fname):
++                    env.note_dependency(fname)
+ 
+                 old_f = f
+ 
+diff --git a/Makefile b/Makefile
+index 7831d9cd2e6cdb..3a9650df9bb955 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 15
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm/mach-rockchip/platsmp.c b/arch/arm/mach-rockchip/platsmp.c
+index 36915a073c2340..f432d22bfed844 100644
+--- a/arch/arm/mach-rockchip/platsmp.c
++++ b/arch/arm/mach-rockchip/platsmp.c
+@@ -279,11 +279,6 @@ static void __init rockchip_smp_prepare_cpus(unsigned int max_cpus)
+ 	}
+ 
+ 	if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9) {
+-		if (rockchip_smp_prepare_sram(node)) {
+-			of_node_put(node);
+-			return;
+-		}
+-
+ 		/* enable the SCU power domain */
+ 		pmu_set_power_domain(PMU_PWRDN_SCU, true);
+ 
+@@ -316,11 +311,19 @@ static void __init rockchip_smp_prepare_cpus(unsigned int max_cpus)
+ 		asm ("mrc p15, 1, %0, c9, c0, 2\n" : "=r" (l2ctlr));
+ 		ncores = ((l2ctlr >> 24) & 0x3) + 1;
+ 	}
+-	of_node_put(node);
+ 
+ 	/* Make sure that all cores except the first are really off */
+ 	for (i = 1; i < ncores; i++)
+ 		pmu_set_power_domain(0 + i, false);
++
++	if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9) {
++		if (rockchip_smp_prepare_sram(node)) {
++			of_node_put(node);
++			return;
++		}
++	}
++
++	of_node_put(node);
+ }
+ 
+ static void __init rk3036_smp_prepare_cpus(unsigned int max_cpus)
+diff --git a/arch/arm/mach-tegra/reset.c b/arch/arm/mach-tegra/reset.c
+index d5c805adf7a82b..ea706fac63587a 100644
+--- a/arch/arm/mach-tegra/reset.c
++++ b/arch/arm/mach-tegra/reset.c
+@@ -63,7 +63,7 @@ static void __init tegra_cpu_reset_handler_enable(void)
+ 	BUG_ON(is_enabled);
+ 	BUG_ON(tegra_cpu_reset_handler_size > TEGRA_IRAM_RESET_HANDLER_SIZE);
+ 
+-	memcpy(iram_base, (void *)__tegra_cpu_reset_handler_start,
++	memcpy_toio(iram_base, (void *)__tegra_cpu_reset_handler_start,
+ 			tegra_cpu_reset_handler_size);
+ 
+ 	err = call_firmware_op(set_cpu_boot_addr, 0, reset_address);
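
memcpy_toio() is required above because the IRAM base is an ioremapped
__iomem pointer; plain memcpy() is not a valid accessor for MMIO space.
The pattern on its own (the region and names are placeholders):

	#include <linux/io.h>

	static void __iomem *sram;	/* set up earlier via ioremap() */

	static void install_blob(const void *blob, size_t size)
	{
		memcpy_toio(sram, blob, size);
	}
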
+diff --git a/arch/arm64/boot/dts/ti/k3-j722s-evm.dts b/arch/arm64/boot/dts/ti/k3-j722s-evm.dts
+index 0bf2e182166244..a11f47703f50b0 100644
+--- a/arch/arm64/boot/dts/ti/k3-j722s-evm.dts
++++ b/arch/arm64/boot/dts/ti/k3-j722s-evm.dts
+@@ -598,7 +598,7 @@ p05-hog {
+ 			/* P05 - USB2.0_MUX_SEL */
+ 			gpio-hog;
+ 			gpios = <5 GPIO_ACTIVE_LOW>;
+-			output-high;
++			output-low;
+ 		};
+ 
+ 		p01_hog: p01-hog {
+diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
+index a407f9cd549edc..c07a58b96329d8 100644
+--- a/arch/arm64/include/asm/acpi.h
++++ b/arch/arm64/include/asm/acpi.h
+@@ -150,7 +150,7 @@ acpi_set_mailbox_entry(int cpu, struct acpi_madt_generic_interrupt *processor)
+ {}
+ #endif
+ 
+-static inline const char *acpi_get_enable_method(int cpu)
++static __always_inline const char *acpi_get_enable_method(int cpu)
+ {
+ 	if (acpi_psci_present())
+ 		return "psci";
+diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
+index b9a66fc146c9fa..4d529ff7ba513a 100644
+--- a/arch/arm64/kernel/acpi.c
++++ b/arch/arm64/kernel/acpi.c
+@@ -197,6 +197,8 @@ static int __init acpi_fadt_sanity_check(void)
+  */
+ void __init acpi_boot_table_init(void)
+ {
++	int ret;
++
+ 	/*
+ 	 * Enable ACPI instead of device tree unless
+ 	 * - ACPI has been disabled explicitly (acpi=off), or
+@@ -250,10 +252,12 @@ void __init acpi_boot_table_init(void)
+ 		 * behaviour, use acpi=nospcr to disable console in ACPI SPCR
+ 		 * table as default serial console.
+ 		 */
+-		acpi_parse_spcr(earlycon_acpi_spcr_enable,
++		ret = acpi_parse_spcr(earlycon_acpi_spcr_enable,
+ 			!param_acpi_nospcr);
+-		pr_info("Use ACPI SPCR as default console: %s\n",
+-				param_acpi_nospcr ? "No" : "Yes");
++		if (!ret || param_acpi_nospcr || !IS_ENABLED(CONFIG_ACPI_SPCR_TABLE))
++			pr_info("Use ACPI SPCR as default console: No\n");
++		else
++			pr_info("Use ACPI SPCR as default console: Yes\n");
+ 
+ 		if (IS_ENABLED(CONFIG_ACPI_BGRT))
+ 			acpi_table_parse(ACPI_SIG_BGRT, acpi_parse_bgrt);
+diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
+index 1d9d51d7627fd4..f6494c09421442 100644
+--- a/arch/arm64/kernel/stacktrace.c
++++ b/arch/arm64/kernel/stacktrace.c
+@@ -152,6 +152,8 @@ kunwind_recover_return_address(struct kunwind_state *state)
+ 		orig_pc = kretprobe_find_ret_addr(state->task,
+ 						  (void *)state->common.fp,
+ 						  &state->kr_cur);
++		if (!orig_pc)
++			return -EINVAL;
+ 		state->common.pc = orig_pc;
+ 		state->flags.kretprobe = 1;
+ 	}
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index 529cff825531c8..35eed1942d85a1 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -931,6 +931,7 @@ void __noreturn panic_bad_stack(struct pt_regs *regs, unsigned long esr, unsigne
+ 
+ void __noreturn arm64_serror_panic(struct pt_regs *regs, unsigned long esr)
+ {
++	add_taint(TAINT_MACHINE_CHECK, LOCKDEP_STILL_OK);
+ 	console_verbose();
+ 
+ 	pr_crit("SError Interrupt on CPU%d, code 0x%016lx -- %s\n",
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index 11eb8d1adc8418..f590dc71ce9980 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -838,6 +838,7 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
+ 		 */
+ 		siaddr  = untagged_addr(far);
+ 	}
++	add_taint(TAINT_MACHINE_CHECK, LOCKDEP_STILL_OK);
+ 	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
+ 
+ 	return 0;
+diff --git a/arch/arm64/mm/ptdump_debugfs.c b/arch/arm64/mm/ptdump_debugfs.c
+index 68bf1a125502da..1e308328c07966 100644
+--- a/arch/arm64/mm/ptdump_debugfs.c
++++ b/arch/arm64/mm/ptdump_debugfs.c
+@@ -1,6 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/debugfs.h>
+-#include <linux/memory_hotplug.h>
+ #include <linux/seq_file.h>
+ 
+ #include <asm/ptdump.h>
+@@ -9,9 +8,7 @@ static int ptdump_show(struct seq_file *m, void *v)
+ {
+ 	struct ptdump_info *info = m->private;
+ 
+-	get_online_mems();
+ 	ptdump_walk(m, info);
+-	put_online_mems();
+ 	return 0;
+ }
+ DEFINE_SHOW_ATTRIBUTE(ptdump);
+diff --git a/arch/loongarch/kernel/env.c b/arch/loongarch/kernel/env.c
+index 27144de5c5fe4f..c0a5dc9aeae287 100644
+--- a/arch/loongarch/kernel/env.c
++++ b/arch/loongarch/kernel/env.c
+@@ -39,16 +39,19 @@ void __init init_environ(void)
+ 
+ static int __init init_cpu_fullname(void)
+ {
+-	struct device_node *root;
+ 	int cpu, ret;
+-	char *model;
++	char *cpuname;
++	const char *model;
++	struct device_node *root;
+ 
+ 	/* Parsing cpuname from DTS model property */
+ 	root = of_find_node_by_path("/");
+-	ret = of_property_read_string(root, "model", (const char **)&model);
++	ret = of_property_read_string(root, "model", &model);
++	if (ret == 0) {
++		cpuname = kstrdup(model, GFP_KERNEL);
++		loongson_sysconf.cpuname = strsep(&cpuname, " ");
++	}
+ 	of_node_put(root);
+-	if (ret == 0)
+-		loongson_sysconf.cpuname = strsep(&model, " ");
+ 
+ 	if (loongson_sysconf.cpuname && !strncmp(loongson_sysconf.cpuname, "Loongson", 8)) {
+ 		for (cpu = 0; cpu < NR_CPUS; cpu++)
+diff --git a/arch/loongarch/kernel/relocate_kernel.S b/arch/loongarch/kernel/relocate_kernel.S
+index 84e6de2fd97354..8b5140ac9ea112 100644
+--- a/arch/loongarch/kernel/relocate_kernel.S
++++ b/arch/loongarch/kernel/relocate_kernel.S
+@@ -109,4 +109,4 @@ SYM_CODE_END(kexec_smp_wait)
+ relocate_new_kernel_end:
+ 
+ 	.section ".data"
+-SYM_DATA(relocate_new_kernel_size, .long relocate_new_kernel_end - relocate_new_kernel)
++SYM_DATA(relocate_new_kernel_size, .quad relocate_new_kernel_end - relocate_new_kernel)
+diff --git a/arch/loongarch/kernel/unwind_orc.c b/arch/loongarch/kernel/unwind_orc.c
+index d623935a75471c..ece04b5207a4a3 100644
+--- a/arch/loongarch/kernel/unwind_orc.c
++++ b/arch/loongarch/kernel/unwind_orc.c
+@@ -507,7 +507,7 @@ bool unwind_next_frame(struct unwind_state *state)
+ 
+ 	state->pc = bt_address(pc);
+ 	if (!state->pc) {
+-		pr_err("cannot find unwind pc at %pK\n", (void *)pc);
++		pr_err("cannot find unwind pc at %p\n", (void *)pc);
+ 		goto err;
+ 	}
+ 
+diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
+index fa1500d4aa3e3a..5ba3249cea98a2 100644
+--- a/arch/loongarch/net/bpf_jit.c
++++ b/arch/loongarch/net/bpf_jit.c
+@@ -208,11 +208,9 @@ bool bpf_jit_supports_far_kfunc_call(void)
+ 	return true;
+ }
+ 
+-/* initialized on the first pass of build_body() */
+-static int out_offset = -1;
+-static int emit_bpf_tail_call(struct jit_ctx *ctx)
++static int emit_bpf_tail_call(struct jit_ctx *ctx, int insn)
+ {
+-	int off;
++	int off, tc_ninsn = 0;
+ 	u8 tcc = tail_call_reg(ctx);
+ 	u8 a1 = LOONGARCH_GPR_A1;
+ 	u8 a2 = LOONGARCH_GPR_A2;
+@@ -222,7 +220,7 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+ 	const int idx0 = ctx->idx;
+ 
+ #define cur_offset (ctx->idx - idx0)
+-#define jmp_offset (out_offset - (cur_offset))
++#define jmp_offset (tc_ninsn - (cur_offset))
+ 
+ 	/*
+ 	 * a0: &ctx
+@@ -232,6 +230,7 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+ 	 * if (index >= array->map.max_entries)
+ 	 *	 goto out;
+ 	 */
++	tc_ninsn = insn ? ctx->offset[insn+1] - ctx->offset[insn] : ctx->offset[0];
+ 	off = offsetof(struct bpf_array, map.max_entries);
+ 	emit_insn(ctx, ldwu, t1, a1, off);
+ 	/* bgeu $a2, $t1, jmp_offset */
+@@ -263,15 +262,6 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+ 	emit_insn(ctx, ldd, t3, t2, off);
+ 	__build_epilogue(ctx, true);
+ 
+-	/* out: */
+-	if (out_offset == -1)
+-		out_offset = cur_offset;
+-	if (cur_offset != out_offset) {
+-		pr_err_once("tail_call out_offset = %d, expected %d!\n",
+-			    cur_offset, out_offset);
+-		return -1;
+-	}
+-
+ 	return 0;
+ 
+ toofar:
+@@ -916,7 +906,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
+ 	/* tail call */
+ 	case BPF_JMP | BPF_TAIL_CALL:
+ 		mark_tail_call(ctx);
+-		if (emit_bpf_tail_call(ctx) < 0)
++		if (emit_bpf_tail_call(ctx, i) < 0)
+ 			return -EINVAL;
+ 		break;
+ 
+@@ -1342,7 +1332,6 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ 	if (tmp_blinded)
+ 		bpf_jit_prog_release_other(prog, prog == orig_prog ? tmp : orig_prog);
+ 
+-	out_offset = -1;
+ 
+ 	return prog;
+ 
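
The rework above drops the file-scope out_offset that was captured on
the first build_body() pass and could not be correct once a program
contained more than one tail call; the jump target is now derived from
the per-instruction offset table. The arithmetic in isolation
(illustrative):

	/* Number of JITed instructions emitted for BPF insn i; the tail
	 * call's "out" label sits at the end of that range. */
	static int tail_call_ninsn(const int *offset, int i)
	{
		return i ? offset[i + 1] - offset[i] : offset[0];
	}
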
+diff --git a/arch/loongarch/vdso/Makefile b/arch/loongarch/vdso/Makefile
+index ccd2c5e135c66f..d8316f99348240 100644
+--- a/arch/loongarch/vdso/Makefile
++++ b/arch/loongarch/vdso/Makefile
+@@ -36,7 +36,7 @@ endif
+ 
+ # VDSO linker flags.
+ ldflags-y := -Bsymbolic --no-undefined -soname=linux-vdso.so.1 \
+-	$(filter -E%,$(KBUILD_CFLAGS)) -nostdlib -shared --build-id -T
++	$(filter -E%,$(KBUILD_CFLAGS)) -shared --build-id -T
+ 
+ #
+ # Shared build commands.
+diff --git a/arch/mips/include/asm/vpe.h b/arch/mips/include/asm/vpe.h
+index 61fd4d0aeda41f..c0769dc4b85321 100644
+--- a/arch/mips/include/asm/vpe.h
++++ b/arch/mips/include/asm/vpe.h
+@@ -119,4 +119,12 @@ void cleanup_tc(struct tc *tc);
+ 
+ int __init vpe_module_init(void);
+ void __exit vpe_module_exit(void);
++
++#ifdef CONFIG_MIPS_VPE_LOADER_MT
++void *vpe_alloc(void);
++int vpe_start(void *vpe, unsigned long start);
++int vpe_stop(void *vpe);
++int vpe_free(void *vpe);
++#endif /* CONFIG_MIPS_VPE_LOADER_MT */
++
+ #endif /* _ASM_VPE_H */
+diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
+index b630604c577f9f..02aa6a04a21da4 100644
+--- a/arch/mips/kernel/process.c
++++ b/arch/mips/kernel/process.c
+@@ -690,18 +690,20 @@ unsigned long mips_stack_top(void)
+ 	}
+ 
+ 	/* Space for the VDSO, data page & GIC user page */
+-	top -= PAGE_ALIGN(current->thread.abi->vdso->size);
+-	top -= PAGE_SIZE;
+-	top -= mips_gic_present() ? PAGE_SIZE : 0;
++	if (current->thread.abi) {
++		top -= PAGE_ALIGN(current->thread.abi->vdso->size);
++		top -= PAGE_SIZE;
++		top -= mips_gic_present() ? PAGE_SIZE : 0;
++
++		/* Space to randomize the VDSO base */
++		if (current->flags & PF_RANDOMIZE)
++			top -= VDSO_RANDOMIZE_SIZE;
++	}
+ 
+ 	/* Space for cache colour alignment */
+ 	if (cpu_has_dc_aliases)
+ 		top -= shm_align_mask + 1;
+ 
+-	/* Space to randomize the VDSO base */
+-	if (current->flags & PF_RANDOMIZE)
+-		top -= VDSO_RANDOMIZE_SIZE;
+-
+ 	return top;
+ }
+ 
+diff --git a/arch/mips/lantiq/falcon/sysctrl.c b/arch/mips/lantiq/falcon/sysctrl.c
+index 1187729d8cbb1b..357543996ee661 100644
+--- a/arch/mips/lantiq/falcon/sysctrl.c
++++ b/arch/mips/lantiq/falcon/sysctrl.c
+@@ -214,19 +214,16 @@ void __init ltq_soc_init(void)
+ 	of_node_put(np_syseth);
+ 	of_node_put(np_sysgpe);
+ 
+-	if ((request_mem_region(res_status.start, resource_size(&res_status),
+-				res_status.name) < 0) ||
+-		(request_mem_region(res_ebu.start, resource_size(&res_ebu),
+-				res_ebu.name) < 0) ||
+-		(request_mem_region(res_sys[0].start,
+-				resource_size(&res_sys[0]),
+-				res_sys[0].name) < 0) ||
+-		(request_mem_region(res_sys[1].start,
+-				resource_size(&res_sys[1]),
+-				res_sys[1].name) < 0) ||
+-		(request_mem_region(res_sys[2].start,
+-				resource_size(&res_sys[2]),
+-				res_sys[2].name) < 0))
++	if ((!request_mem_region(res_status.start, resource_size(&res_status),
++				 res_status.name)) ||
++	    (!request_mem_region(res_ebu.start, resource_size(&res_ebu),
++				 res_ebu.name)) ||
++	    (!request_mem_region(res_sys[0].start, resource_size(&res_sys[0]),
++				 res_sys[0].name)) ||
++	    (!request_mem_region(res_sys[1].start, resource_size(&res_sys[1]),
++				 res_sys[1].name)) ||
++	    (!request_mem_region(res_sys[2].start, resource_size(&res_sys[2]),
++				 res_sys[2].name)))
+ 		pr_err("Failed to request core resources");
+ 
+ 	status_membase = ioremap(res_status.start,
+diff --git a/arch/parisc/Makefile b/arch/parisc/Makefile
+index 21b8166a688394..9cd9aa3d16f29a 100644
+--- a/arch/parisc/Makefile
++++ b/arch/parisc/Makefile
+@@ -139,7 +139,7 @@ palo lifimage: vmlinuz
+ 	fi
+ 	@if test ! -f "$(PALOCONF)"; then \
+ 		cp $(srctree)/arch/parisc/defpalo.conf $(objtree)/palo.conf; \
+-		echo 'A generic palo config file ($(objree)/palo.conf) has been created for you.'; \
++		echo 'A generic palo config file ($(objtree)/palo.conf) has been created for you.'; \
+ 		echo 'You should check it and re-run "make palo".'; \
+ 		echo 'WARNING: the "lifimage" file is now placed in this directory by default!'; \
+ 		false; \
+diff --git a/arch/powerpc/include/asm/floppy.h b/arch/powerpc/include/asm/floppy.h
+index f8ce178b43b783..34abf8bea2ccd6 100644
+--- a/arch/powerpc/include/asm/floppy.h
++++ b/arch/powerpc/include/asm/floppy.h
+@@ -144,9 +144,12 @@ static int hard_dma_setup(char *addr, unsigned long size, int mode, int io)
+ 		bus_addr = 0;
+ 	}
+ 
+-	if (!bus_addr)	/* need to map it */
++	if (!bus_addr) {	/* need to map it */
+ 		bus_addr = dma_map_single(&isa_bridge_pcidev->dev, addr, size,
+ 					  dir);
++		if (dma_mapping_error(&isa_bridge_pcidev->dev, bus_addr))
++			return -ENOMEM;
++	}
+ 
+ 	/* remember this one as prev */
+ 	prev_addr = addr;
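
The added check follows the canonical DMA API rule: every
dma_map_single() result must be vetted with dma_mapping_error() before
use, because the returned handle has no in-band failure value. The
pattern on its own (dev, addr and the helper name are placeholders):

	#include <linux/dma-mapping.h>

	static int map_one(struct device *dev, void *addr, size_t size,
			   enum dma_data_direction dir, dma_addr_t *out)
	{
		dma_addr_t h = dma_map_single(dev, addr, size, dir);

		if (dma_mapping_error(dev, h))
			return -ENOMEM;
		*out = h;
		return 0;
	}
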
+diff --git a/arch/powerpc/platforms/512x/mpc512x_lpbfifo.c b/arch/powerpc/platforms/512x/mpc512x_lpbfifo.c
+index 9668b052cd4b3a..f251e0f6826204 100644
+--- a/arch/powerpc/platforms/512x/mpc512x_lpbfifo.c
++++ b/arch/powerpc/platforms/512x/mpc512x_lpbfifo.c
+@@ -240,10 +240,8 @@ static int mpc512x_lpbfifo_kick(void)
+ 	dma_conf.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+ 
+ 	/* Make DMA channel work with LPB FIFO data register */
+-	if (dma_dev->device_config(lpbfifo.chan, &dma_conf)) {
+-		ret = -EINVAL;
+-		goto err_dma_prep;
+-	}
++	if (dma_dev->device_config(lpbfifo.chan, &dma_conf))
++		return -EINVAL;
+ 
+ 	sg_init_table(&sg, 1);
+ 
+diff --git a/arch/riscv/boot/dts/thead/th1520.dtsi b/arch/riscv/boot/dts/thead/th1520.dtsi
+index 527336417765d8..0aae4e6a5b3320 100644
+--- a/arch/riscv/boot/dts/thead/th1520.dtsi
++++ b/arch/riscv/boot/dts/thead/th1520.dtsi
+@@ -286,8 +286,9 @@ gmac1: ethernet@ffe7060000 {
+ 			reg-names = "dwmac", "apb";
+ 			interrupts = <67 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "macirq";
+-			clocks = <&clk CLK_GMAC_AXI>, <&clk CLK_GMAC1>;
+-			clock-names = "stmmaceth", "pclk";
++			clocks = <&clk CLK_GMAC_AXI>, <&clk CLK_GMAC1>,
++				 <&clk CLK_PERISYS_APB4_HCLK>;
++			clock-names = "stmmaceth", "pclk", "apb";
+ 			snps,pbl = <32>;
+ 			snps,fixed-burst;
+ 			snps,multicast-filter-bins = <64>;
+@@ -308,8 +309,9 @@ gmac0: ethernet@ffe7070000 {
+ 			reg-names = "dwmac", "apb";
+ 			interrupts = <66 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "macirq";
+-			clocks = <&clk CLK_GMAC_AXI>, <&clk CLK_GMAC0>;
+-			clock-names = "stmmaceth", "pclk";
++			clocks = <&clk CLK_GMAC_AXI>, <&clk CLK_GMAC0>,
++				 <&clk CLK_PERISYS_APB4_HCLK>;
++			clock-names = "stmmaceth", "pclk", "apb";
+ 			snps,pbl = <32>;
+ 			snps,fixed-burst;
+ 			snps,multicast-filter-bins = <64>;
+diff --git a/arch/riscv/mm/ptdump.c b/arch/riscv/mm/ptdump.c
+index 9d5f657a251b32..1289cc6d3700cd 100644
+--- a/arch/riscv/mm/ptdump.c
++++ b/arch/riscv/mm/ptdump.c
+@@ -6,7 +6,6 @@
+ #include <linux/efi.h>
+ #include <linux/init.h>
+ #include <linux/debugfs.h>
+-#include <linux/memory_hotplug.h>
+ #include <linux/seq_file.h>
+ #include <linux/ptdump.h>
+ 
+@@ -371,9 +370,7 @@ bool ptdump_check_wx(void)
+ 
+ static int ptdump_show(struct seq_file *m, void *v)
+ {
+-	get_online_mems();
+ 	ptdump_walk(m, m->private);
+-	put_online_mems();
+ 
+ 	return 0;
+ }
+diff --git a/arch/s390/include/asm/timex.h b/arch/s390/include/asm/timex.h
+index bed8d0b5a282c0..59dfb8780f62ad 100644
+--- a/arch/s390/include/asm/timex.h
++++ b/arch/s390/include/asm/timex.h
+@@ -196,13 +196,6 @@ static inline unsigned long get_tod_clock_fast(void)
+ 	asm volatile("stckf %0" : "=Q" (clk) : : "cc");
+ 	return clk;
+ }
+-
+-static inline cycles_t get_cycles(void)
+-{
+-	return (cycles_t) get_tod_clock() >> 2;
+-}
+-#define get_cycles get_cycles
+-
+ int get_phys_clock(unsigned long *clock);
+ void init_cpu_timer(void);
+ 
+@@ -230,6 +223,12 @@ static inline unsigned long get_tod_clock_monotonic(void)
+ 	return tod;
+ }
+ 
++static inline cycles_t get_cycles(void)
++{
++	return (cycles_t)get_tod_clock_monotonic() >> 2;
++}
++#define get_cycles get_cycles
++
+ /**
+  * tod_to_ns - convert a TOD format value to nanoseconds
+  * @todval: to be converted TOD format value
+diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c
+index 54cf0923050f2d..9b4b5ccda323ac 100644
+--- a/arch/s390/kernel/early.c
++++ b/arch/s390/kernel/early.c
+@@ -154,6 +154,7 @@ void __init __do_early_pgm_check(struct pt_regs *regs)
+ 
+ 	regs->int_code = lc->pgm_int_code;
+ 	regs->int_parm_long = lc->trans_exc_code;
++	regs->last_break = lc->pgm_last_break;
+ 	ip = __rewind_psw(regs->psw, regs->int_code >> 16);
+ 
+ 	/* Monitor Event? Might be a warning */
+diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c
+index fed17d407a4442..cb7ed55e24d206 100644
+--- a/arch/s390/kernel/time.c
++++ b/arch/s390/kernel/time.c
+@@ -580,7 +580,7 @@ static int stp_sync_clock(void *data)
+ 		atomic_dec(&sync->cpus);
+ 		/* Wait for in_sync to be set. */
+ 		while (READ_ONCE(sync->in_sync) == 0)
+-			__udelay(1);
++			;
+ 	}
+ 	if (sync->in_sync != 1)
+ 		/* Didn't work. Clear per-cpu in sync bit again. */
+diff --git a/arch/s390/mm/dump_pagetables.c b/arch/s390/mm/dump_pagetables.c
+index d3e943752fa0a4..39ee1f57be3dc8 100644
+--- a/arch/s390/mm/dump_pagetables.c
++++ b/arch/s390/mm/dump_pagetables.c
+@@ -205,11 +205,9 @@ static int ptdump_show(struct seq_file *m, void *v)
+ 		.marker = markers,
+ 	};
+ 
+-	get_online_mems();
+ 	mutex_lock(&cpa_mutex);
+ 	ptdump_walk_pgd(&st.ptdump, &init_mm, NULL);
+ 	mutex_unlock(&cpa_mutex);
+-	put_online_mems();
+ 	return 0;
+ }
+ DEFINE_SHOW_ATTRIBUTE(ptdump);
+diff --git a/arch/um/include/asm/thread_info.h b/arch/um/include/asm/thread_info.h
+index f9ad06fcc991a2..eb9b3a6d99e847 100644
+--- a/arch/um/include/asm/thread_info.h
++++ b/arch/um/include/asm/thread_info.h
+@@ -50,7 +50,11 @@ struct thread_info {
+ #define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
+ #define _TIF_MEMDIE		(1 << TIF_MEMDIE)
+ #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
++#define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
+ #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
+ #define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
+ 
++#define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL | \
++				 _TIF_NOTIFY_RESUME)
++
+ #endif
+diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
+index 0cd6fad3d908d4..1be644de9e41ec 100644
+--- a/arch/um/kernel/process.c
++++ b/arch/um/kernel/process.c
+@@ -82,14 +82,18 @@ struct task_struct *__switch_to(struct task_struct *from, struct task_struct *to
+ void interrupt_end(void)
+ {
+ 	struct pt_regs *regs = &current->thread.regs;
+-
+-	if (need_resched())
+-		schedule();
+-	if (test_thread_flag(TIF_SIGPENDING) ||
+-	    test_thread_flag(TIF_NOTIFY_SIGNAL))
+-		do_signal(regs);
+-	if (test_thread_flag(TIF_NOTIFY_RESUME))
+-		resume_user_mode_work(regs);
++	unsigned long thread_flags;
++
++	thread_flags = read_thread_flags();
++	while (thread_flags & _TIF_WORK_MASK) {
++		if (thread_flags & _TIF_NEED_RESCHED)
++			schedule();
++		if (thread_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
++			do_signal(regs);
++		if (thread_flags & _TIF_NOTIFY_RESUME)
++			resume_user_mode_work(regs);
++		thread_flags = read_thread_flags();
++	}
+ }
+ 
+ int get_current_pid(void)
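[Editor's note: the interrupt_end() rewrite above turns a one-shot sequence of checks into a loop that re-reads the thread flags until no work remains, so work raised while handling earlier work (for instance a signal queued during schedule()) is not carried back to user space unhandled. A sketch of the loop shape — handle_resched(), handle_signals() and handle_resume() are hypothetical stand-ins for the real handlers:

static void exit_to_user_work(void)
{
	unsigned long flags = read_thread_flags();

	while (flags & _TIF_WORK_MASK) {
		if (flags & _TIF_NEED_RESCHED)
			handle_resched();
		if (flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
			handle_signals();
		if (flags & _TIF_NOTIFY_RESUME)
			handle_resume();
		/* re-sample: the handlers above may have raised new work */
		flags = read_thread_flags();
	}
}
]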
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 8980786686bffe..cc516375327e6f 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1658,6 +1658,10 @@ static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical)
+ 	return dest_mode_logical ? APIC_DEST_LOGICAL : APIC_DEST_PHYSICAL;
+ }
+ 
++enum kvm_x86_run_flags {
++	KVM_RUN_FORCE_IMMEDIATE_EXIT	= BIT(0),
++};
++
+ struct kvm_x86_ops {
+ 	const char *name;
+ 
+@@ -1738,7 +1742,7 @@ struct kvm_x86_ops {
+ 
+ 	int (*vcpu_pre_run)(struct kvm_vcpu *vcpu);
+ 	enum exit_fastpath_completion (*vcpu_run)(struct kvm_vcpu *vcpu,
+-						  bool force_immediate_exit);
++						  u64 run_flags);
+ 	int (*handle_exit)(struct kvm_vcpu *vcpu,
+ 		enum exit_fastpath_completion exit_fastpath);
+ 	int (*skip_emulated_instruction)(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 0f6bc28db1828a..82089a464d30ac 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -70,10 +70,9 @@ void (*x86_return_thunk)(void) __ro_after_init = __x86_return_thunk;
+ 
+ static void __init set_return_thunk(void *thunk)
+ {
+-	if (x86_return_thunk != __x86_return_thunk)
+-		pr_warn("x86/bugs: return thunk changed\n");
+-
+ 	x86_return_thunk = thunk;
++
++	pr_info("active return thunk: %ps\n", thunk);
+ }
+ 
+ /* Update SPEC_CTRL MSR and its cached copy unconditionally */
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index b567ec94b7fa54..ae6278f6b5e47e 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -4319,9 +4319,9 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_in
+ 	guest_state_exit_irqoff();
+ }
+ 
+-static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
+-					  bool force_immediate_exit)
++static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
+ {
++	bool force_immediate_exit = run_flags & KVM_RUN_FORCE_IMMEDIATE_EXIT;
+ 	struct vcpu_svm *svm = to_svm(vcpu);
+ 	bool spec_ctrl_intercepted = msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL);
+ 
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 5504d9e9fd32cb..86febb03332d55 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -2653,10 +2653,11 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ 	if (vmx->nested.nested_run_pending &&
+ 	    (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS)) {
+ 		kvm_set_dr(vcpu, 7, vmcs12->guest_dr7);
+-		vmcs_write64(GUEST_IA32_DEBUGCTL, vmcs12->guest_ia32_debugctl);
++		vmx_guest_debugctl_write(vcpu, vmcs12->guest_ia32_debugctl &
++					       vmx_get_supported_debugctl(vcpu, false));
+ 	} else {
+ 		kvm_set_dr(vcpu, 7, vcpu->arch.dr7);
+-		vmcs_write64(GUEST_IA32_DEBUGCTL, vmx->nested.pre_vmenter_debugctl);
++		vmx_guest_debugctl_write(vcpu, vmx->nested.pre_vmenter_debugctl);
+ 	}
+ 	if (kvm_mpx_supported() && (!vmx->nested.nested_run_pending ||
+ 	    !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
+@@ -3146,7 +3147,8 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
+ 		return -EINVAL;
+ 
+ 	if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS) &&
+-	    CC(!kvm_dr7_valid(vmcs12->guest_dr7)))
++	    (CC(!kvm_dr7_valid(vmcs12->guest_dr7)) ||
++	     CC(!vmx_is_valid_debugctl(vcpu, vmcs12->guest_ia32_debugctl, false))))
+ 		return -EINVAL;
+ 
+ 	if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PAT) &&
+@@ -3520,7 +3522,7 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
+ 
+ 	if (!vmx->nested.nested_run_pending ||
+ 	    !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS))
+-		vmx->nested.pre_vmenter_debugctl = vmcs_read64(GUEST_IA32_DEBUGCTL);
++		vmx->nested.pre_vmenter_debugctl = vmx_guest_debugctl_read();
+ 	if (kvm_mpx_supported() &&
+ 	    (!vmx->nested.nested_run_pending ||
+ 	     !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
+@@ -4598,6 +4600,12 @@ static void sync_vmcs02_to_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
+ 		(vmcs12->vm_entry_controls & ~VM_ENTRY_IA32E_MODE) |
+ 		(vm_entry_controls_get(to_vmx(vcpu)) & VM_ENTRY_IA32E_MODE);
+ 
++	/*
++	 * Note!  Save DR7, but intentionally don't grab DEBUGCTL from vmcs02.
++	 * Writes to DEBUGCTL that aren't intercepted by L1 are immediately
++	 * propagated to vmcs12 (see vmx_set_msr()), as the value loaded into
++	 * vmcs02 doesn't strictly track vmcs12.
++	 */
+ 	if (vmcs12->vm_exit_controls & VM_EXIT_SAVE_DEBUG_CONTROLS)
+ 		vmcs12->guest_dr7 = vcpu->arch.dr7;
+ 
+@@ -4788,7 +4796,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
+ 	__vmx_set_segment(vcpu, &seg, VCPU_SREG_LDTR);
+ 
+ 	kvm_set_dr(vcpu, 7, 0x400);
+-	vmcs_write64(GUEST_IA32_DEBUGCTL, 0);
++	vmx_guest_debugctl_write(vcpu, 0);
+ 
+ 	if (nested_vmx_load_msr(vcpu, vmcs12->vm_exit_msr_load_addr,
+ 				vmcs12->vm_exit_msr_load_count))
+diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
+index 77012b2eca0e9a..010789057487a8 100644
+--- a/arch/x86/kvm/vmx/pmu_intel.c
++++ b/arch/x86/kvm/vmx/pmu_intel.c
+@@ -605,11 +605,11 @@ static void intel_pmu_reset(struct kvm_vcpu *vcpu)
+  */
+ static void intel_pmu_legacy_freezing_lbrs_on_pmi(struct kvm_vcpu *vcpu)
+ {
+-	u64 data = vmcs_read64(GUEST_IA32_DEBUGCTL);
++	u64 data = vmx_guest_debugctl_read();
+ 
+ 	if (data & DEBUGCTLMSR_FREEZE_LBRS_ON_PMI) {
+ 		data &= ~DEBUGCTLMSR_LBR;
+-		vmcs_write64(GUEST_IA32_DEBUGCTL, data);
++		vmx_guest_debugctl_write(vcpu, data);
+ 	}
+ }
+ 
+@@ -679,7 +679,7 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
+ 
+ 	if (!lbr_desc->event) {
+ 		vmx_disable_lbr_msrs_passthrough(vcpu);
+-		if (vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR)
++		if (vmx_guest_debugctl_read() & DEBUGCTLMSR_LBR)
+ 			goto warn;
+ 		if (test_bit(INTEL_PMC_IDX_FIXED_VLBR, pmu->pmc_in_use))
+ 			goto warn;
+@@ -701,7 +701,7 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
+ 
+ static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
+ {
+-	if (!(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR))
++	if (!(vmx_guest_debugctl_read() & DEBUGCTLMSR_LBR))
+ 		intel_pmu_release_guest_lbr_event(vcpu);
+ }
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 9ed43bd03c9d5c..f2c3ccda950109 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -2155,7 +2155,7 @@ int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 			msr_info->data = vmx->pt_desc.guest.addr_a[index / 2];
+ 		break;
+ 	case MSR_IA32_DEBUGCTLMSR:
+-		msr_info->data = vmcs_read64(GUEST_IA32_DEBUGCTL);
++		msr_info->data = vmx_guest_debugctl_read();
+ 		break;
+ 	default:
+ 	find_uret_msr:
+@@ -2180,7 +2180,7 @@ static u64 nested_vmx_truncate_sysenter_addr(struct kvm_vcpu *vcpu,
+ 	return (unsigned long)data;
+ }
+ 
+-static u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated)
++u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated)
+ {
+ 	u64 debugctl = 0;
+ 
+@@ -2199,6 +2199,18 @@ static u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated
+ 	return debugctl;
+ }
+ 
++bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data, bool host_initiated)
++{
++	u64 invalid;
++
++	invalid = data & ~vmx_get_supported_debugctl(vcpu, host_initiated);
++	if (invalid & (DEBUGCTLMSR_BTF | DEBUGCTLMSR_LBR)) {
++		kvm_pr_unimpl_wrmsr(vcpu, MSR_IA32_DEBUGCTLMSR, data);
++		invalid &= ~(DEBUGCTLMSR_BTF | DEBUGCTLMSR_LBR);
++	}
++	return !invalid;
++}
++
+ /*
+  * Writes msr value into the appropriate "register".
+  * Returns 0 on success, non-0 otherwise.
+@@ -2267,29 +2279,22 @@ int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		}
+ 		vmcs_writel(GUEST_SYSENTER_ESP, data);
+ 		break;
+-	case MSR_IA32_DEBUGCTLMSR: {
+-		u64 invalid;
+-
+-		invalid = data & ~vmx_get_supported_debugctl(vcpu, msr_info->host_initiated);
+-		if (invalid & (DEBUGCTLMSR_BTF|DEBUGCTLMSR_LBR)) {
+-			kvm_pr_unimpl_wrmsr(vcpu, msr_index, data);
+-			data &= ~(DEBUGCTLMSR_BTF|DEBUGCTLMSR_LBR);
+-			invalid &= ~(DEBUGCTLMSR_BTF|DEBUGCTLMSR_LBR);
+-		}
+-
+-		if (invalid)
++	case MSR_IA32_DEBUGCTLMSR:
++		if (!vmx_is_valid_debugctl(vcpu, data, msr_info->host_initiated))
+ 			return 1;
+ 
++		data &= vmx_get_supported_debugctl(vcpu, msr_info->host_initiated);
++
+ 		if (is_guest_mode(vcpu) && get_vmcs12(vcpu)->vm_exit_controls &
+ 						VM_EXIT_SAVE_DEBUG_CONTROLS)
+ 			get_vmcs12(vcpu)->guest_ia32_debugctl = data;
+ 
+-		vmcs_write64(GUEST_IA32_DEBUGCTL, data);
++		vmx_guest_debugctl_write(vcpu, data);
++
+ 		if (intel_pmu_lbr_is_enabled(vcpu) && !to_vmx(vcpu)->lbr_desc.event &&
+ 		    (data & DEBUGCTLMSR_LBR))
+ 			intel_pmu_create_guest_lbr_event(vcpu);
+ 		return 0;
+-	}
+ 	case MSR_IA32_BNDCFGS:
+ 		if (!kvm_mpx_supported() ||
+ 		    (!msr_info->host_initiated &&
+@@ -4857,7 +4862,8 @@ static void init_vmcs(struct vcpu_vmx *vmx)
+ 	vmcs_write32(GUEST_SYSENTER_CS, 0);
+ 	vmcs_writel(GUEST_SYSENTER_ESP, 0);
+ 	vmcs_writel(GUEST_SYSENTER_EIP, 0);
+-	vmcs_write64(GUEST_IA32_DEBUGCTL, 0);
++
++	vmx_guest_debugctl_write(&vmx->vcpu, 0);
+ 
+ 	if (cpu_has_vmx_tpr_shadow()) {
+ 		vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, 0);
+@@ -7410,8 +7416,9 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
+ 	guest_state_exit_irqoff();
+ }
+ 
+-fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
++fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
+ {
++	bool force_immediate_exit = run_flags & KVM_RUN_FORCE_IMMEDIATE_EXIT;
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+ 	unsigned long cr3, cr4;
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 951e44dc9d0ea3..172d3c0d0ef91c 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -437,6 +437,19 @@ static inline void vmx_set_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr,
+ 
+ void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu);
+ 
++u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated);
++bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data, bool host_initiated);
++
++static inline void vmx_guest_debugctl_write(struct kvm_vcpu *vcpu, u64 val)
++{
++	vmcs_write64(GUEST_IA32_DEBUGCTL, val);
++}
++
++static inline u64 vmx_guest_debugctl_read(void)
++{
++	return vmcs_read64(GUEST_IA32_DEBUGCTL);
++}
++
+ /*
+  * Note, early Intel manuals have the write-low and read-high bitmap offsets
+  * the wrong way round.  The bitmaps control MSRs 0x00000000-0x00001fff and
+diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
+index 430773a5ef8e3e..0c01042659e7d2 100644
+--- a/arch/x86/kvm/vmx/x86_ops.h
++++ b/arch/x86/kvm/vmx/x86_ops.h
+@@ -21,7 +21,7 @@ void vmx_vm_destroy(struct kvm *kvm);
+ int vmx_vcpu_precreate(struct kvm *kvm);
+ int vmx_vcpu_create(struct kvm_vcpu *vcpu);
+ int vmx_vcpu_pre_run(struct kvm_vcpu *vcpu);
+-fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit);
++fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags);
+ void vmx_vcpu_free(struct kvm_vcpu *vcpu);
+ void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
+ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 7bae91eb7b2345..cb482d59c3df17 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -10724,6 +10724,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		dm_request_for_irq_injection(vcpu) &&
+ 		kvm_cpu_accept_dm_intr(vcpu);
+ 	fastpath_t exit_fastpath;
++	u64 run_flags;
+ 
+ 	bool req_immediate_exit = false;
+ 
+@@ -10968,8 +10969,11 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		goto cancel_injection;
+ 	}
+ 
+-	if (req_immediate_exit)
++	run_flags = 0;
++	if (req_immediate_exit) {
++		run_flags |= KVM_RUN_FORCE_IMMEDIATE_EXIT;
+ 		kvm_make_request(KVM_REQ_EVENT, vcpu);
++	}
+ 
+ 	fpregs_assert_state_consistent();
+ 	if (test_thread_flag(TIF_NEED_FPU_LOAD))
+@@ -11005,8 +11009,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		WARN_ON_ONCE((kvm_vcpu_apicv_activated(vcpu) != kvm_vcpu_apicv_active(vcpu)) &&
+ 			     (kvm_get_apic_mode(vcpu) != LAPIC_MODE_DISABLED));
+ 
+-		exit_fastpath = kvm_x86_call(vcpu_run)(vcpu,
+-						       req_immediate_exit);
++		exit_fastpath = kvm_x86_call(vcpu_run)(vcpu, run_flags);
+ 		if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
+ 			break;
+ 
+@@ -11018,6 +11021,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 			break;
+ 		}
+ 
++		run_flags = 0;
++
+ 		/* Note, VM-Exits that go down the "slow" path are accounted below. */
+ 		++vcpu->stat.exits;
+ 	}
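[Editor's note: the KVM changes above replace the bool force_immediate_exit parameter of the vcpu_run hook with a u64 run_flags word, and the x86.c hunk clears run_flags before re-entering the guest on the fastpath so a forced exit applies to the first entry only. A sketch of the interface style — the names below are illustrative, mirroring the kvm_x86_run_flags enum added above:

enum my_run_flags {
	MY_RUN_FORCE_IMMEDIATE_EXIT = BIT(0),
	/* future per-run conditions extend here without signature churn */
};

static int my_vcpu_run(struct my_vcpu *vcpu, u64 run_flags)
{
	bool force_immediate_exit = run_flags & MY_RUN_FORCE_IMMEDIATE_EXIT;

	return enter_guest(vcpu, force_immediate_exit);
}
]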
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index abd80dc135623a..67c42d276e8ab1 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -701,17 +701,13 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
+ {
+ 	struct bfq_data *bfqd = data->q->elevator->elevator_data;
+ 	struct bfq_io_cq *bic = bfq_bic_lookup(data->q);
+-	int depth;
+-	unsigned limit = data->q->nr_requests;
+-	unsigned int act_idx;
++	unsigned int limit, act_idx;
+ 
+ 	/* Sync reads have full depth available */
+-	if (op_is_sync(opf) && !op_is_write(opf)) {
+-		depth = 0;
+-	} else {
+-		depth = bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(opf)];
+-		limit = (limit * depth) >> bfqd->full_depth_shift;
+-	}
++	if (op_is_sync(opf) && !op_is_write(opf))
++		limit = data->q->nr_requests;
++	else
++		limit = bfqd->async_depths[!!bfqd->wr_busy_queues][op_is_sync(opf)];
+ 
+ 	for (act_idx = 0; bic && act_idx < bfqd->num_actuators; act_idx++) {
+ 		/* Fast path to check if bfqq is already allocated. */
+@@ -725,14 +721,16 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
+ 		 * available requests and thus starve other entities.
+ 		 */
+ 		if (bfqq_request_over_limit(bfqd, bic, opf, act_idx, limit)) {
+-			depth = 1;
++			limit = 1;
+ 			break;
+ 		}
+ 	}
++
+ 	bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u",
+-		__func__, bfqd->wr_busy_queues, op_is_sync(opf), depth);
+-	if (depth)
+-		data->shallow_depth = depth;
++		__func__, bfqd->wr_busy_queues, op_is_sync(opf), limit);
++
++	if (limit < data->q->nr_requests)
++		data->shallow_depth = limit;
+ }
+ 
+ static struct bfq_queue *
+@@ -7128,9 +7126,8 @@ void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg)
+  */
+ static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt)
+ {
+-	unsigned int depth = 1U << bt->sb.shift;
++	unsigned int nr_requests = bfqd->queue->nr_requests;
+ 
+-	bfqd->full_depth_shift = bt->sb.shift;
+ 	/*
+ 	 * In-word depths if no bfq_queue is being weight-raised:
+ 	 * leaving 25% of tags only for sync reads.
+@@ -7142,13 +7139,13 @@ static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt)
+ 	 * limit 'something'.
+ 	 */
+ 	/* no more than 50% of tags for async I/O */
+-	bfqd->word_depths[0][0] = max(depth >> 1, 1U);
++	bfqd->async_depths[0][0] = max(nr_requests >> 1, 1U);
+ 	/*
+ 	 * no more than 75% of tags for sync writes (25% extra tags
+ 	 * w.r.t. async I/O, to prevent async I/O from starving sync
+ 	 * writes)
+ 	 */
+-	bfqd->word_depths[0][1] = max((depth * 3) >> 2, 1U);
++	bfqd->async_depths[0][1] = max((nr_requests * 3) >> 2, 1U);
+ 
+ 	/*
+ 	 * In-word depths in case some bfq_queue is being weight-
+@@ -7158,9 +7155,9 @@ static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt)
+ 	 * shortage.
+ 	 */
+ 	/* no more than ~18% of tags for async I/O */
+-	bfqd->word_depths[1][0] = max((depth * 3) >> 4, 1U);
++	bfqd->async_depths[1][0] = max((nr_requests * 3) >> 4, 1U);
+ 	/* no more than ~37% of tags for sync writes (~20% extra tags) */
+-	bfqd->word_depths[1][1] = max((depth * 6) >> 4, 1U);
++	bfqd->async_depths[1][1] = max((nr_requests * 6) >> 4, 1U);
+ }
+ 
+ static void bfq_depth_updated(struct blk_mq_hw_ctx *hctx)
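[Editor's note: bfq's depth limits are now computed in request units from q->nr_requests rather than as sbitmap per-word depths. For a hypothetical queue with nr_requests = 64, the four async_depths values above work out as:

/* Worked example, assuming nr_requests = 64 (hypothetical):
 *   async_depths[0][0] = max(64 >> 1, 1)       = 32  (~50%, async, no WR)
 *   async_depths[0][1] = max((64 * 3) >> 2, 1) = 48  (~75%, sync writes)
 *   async_depths[1][0] = max((64 * 3) >> 4, 1) = 12  (~18%, async, WR)
 *   async_depths[1][1] = max((64 * 6) >> 4, 1) = 24  (~37%, sync writes, WR)
 * bfq_limit_depth() then sets data->shallow_depth only when the chosen
 * limit is below nr_requests.
 */
]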
+diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
+index 687a3a7ba78478..31217f196f4f1b 100644
+--- a/block/bfq-iosched.h
++++ b/block/bfq-iosched.h
+@@ -813,8 +813,7 @@ struct bfq_data {
+ 	 * Depth limits used in bfq_limit_depth (see comments on the
+ 	 * function)
+ 	 */
+-	unsigned int word_depths[2][2];
+-	unsigned int full_depth_shift;
++	unsigned int async_depths[2][2];
+ 
+ 	/*
+ 	 * Number of independent actuators. This is equal to 1 in
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index c2697db5910912..5e6f6f2fda96f7 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -3117,8 +3117,10 @@ void blk_mq_submit_bio(struct bio *bio)
+ 	if (blk_mq_attempt_bio_merge(q, bio, nr_segs))
+ 		goto queue_exit;
+ 
+-	if (blk_queue_is_zoned(q) && blk_zone_plug_bio(bio, nr_segs))
+-		goto queue_exit;
++	if (bio_needs_zone_write_plugging(bio)) {
++		if (blk_zone_plug_bio(bio, nr_segs))
++			goto queue_exit;
++	}
+ 
+ new_request:
+ 	if (rq) {
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index 47a31e1c090937..7a76ff3696c4de 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -797,7 +797,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
+ 	}
+ 
+ 	/* chunk_sectors a multiple of the physical block size? */
+-	if ((t->chunk_sectors << 9) & (t->physical_block_size - 1)) {
++	if (t->chunk_sectors % (t->physical_block_size >> SECTOR_SHIFT)) {
+ 		t->chunk_sectors = 0;
+ 		t->flags |= BLK_FLAG_MISALIGNED;
+ 		ret = -1;
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 351d659280e116..efe71b1a1da138 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -1116,25 +1116,7 @@ bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs)
+ {
+ 	struct block_device *bdev = bio->bi_bdev;
+ 
+-	if (!bdev->bd_disk->zone_wplugs_hash)
+-		return false;
+-
+-	/*
+-	 * If the BIO already has the plugging flag set, then it was already
+-	 * handled through this path and this is a submission from the zone
+-	 * plug bio submit work.
+-	 */
+-	if (bio_flagged(bio, BIO_ZONE_WRITE_PLUGGING))
+-		return false;
+-
+-	/*
+-	 * We do not need to do anything special for empty flush BIOs, e.g
+-	 * BIOs such as issued by blkdev_issue_flush(). The is because it is
+-	 * the responsibility of the user to first wait for the completion of
+-	 * write operations for flush to have any effect on the persistence of
+-	 * the written data.
+-	 */
+-	if (op_is_flush(bio->bi_opf) && !bio_sectors(bio))
++	if (WARN_ON_ONCE(!bdev->bd_disk->zone_wplugs_hash))
+ 		return false;
+ 
+ 	/*
+diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
+index 0f0f8452609a11..d9ef304d1ba96a 100644
+--- a/block/kyber-iosched.c
++++ b/block/kyber-iosched.c
+@@ -157,10 +157,7 @@ struct kyber_queue_data {
+ 	 */
+ 	struct sbitmap_queue domain_tokens[KYBER_NUM_DOMAINS];
+ 
+-	/*
+-	 * Async request percentage, converted to per-word depth for
+-	 * sbitmap_get_shallow().
+-	 */
++	/* Number of allowed async requests. */
+ 	unsigned int async_depth;
+ 
+ 	struct kyber_cpu_latency __percpu *cpu_latency;
+@@ -454,10 +451,8 @@ static void kyber_depth_updated(struct blk_mq_hw_ctx *hctx)
+ {
+ 	struct kyber_queue_data *kqd = hctx->queue->elevator->elevator_data;
+ 	struct blk_mq_tags *tags = hctx->sched_tags;
+-	unsigned int shift = tags->bitmap_tags.sb.shift;
+-
+-	kqd->async_depth = (1U << shift) * KYBER_ASYNC_PERCENT / 100U;
+ 
++	kqd->async_depth = hctx->queue->nr_requests * KYBER_ASYNC_PERCENT / 100U;
+ 	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, kqd->async_depth);
+ }
+ 
+diff --git a/block/mq-deadline.c b/block/mq-deadline.c
+index 754f6b7415cdce..ed0e0f70fb831d 100644
+--- a/block/mq-deadline.c
++++ b/block/mq-deadline.c
+@@ -487,20 +487,6 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
+ 	return rq;
+ }
+ 
+-/*
+- * 'depth' is a number in the range 1..INT_MAX representing a number of
+- * requests. Scale it with a factor (1 << bt->sb.shift) / q->nr_requests since
+- * 1..(1 << bt->sb.shift) is the range expected by sbitmap_get_shallow().
+- * Values larger than q->nr_requests have the same effect as q->nr_requests.
+- */
+-static int dd_to_word_depth(struct blk_mq_hw_ctx *hctx, unsigned int qdepth)
+-{
+-	struct sbitmap_queue *bt = &hctx->sched_tags->bitmap_tags;
+-	const unsigned int nrr = hctx->queue->nr_requests;
+-
+-	return ((qdepth << bt->sb.shift) + nrr - 1) / nrr;
+-}
+-
+ /*
+  * Called by __blk_mq_alloc_request(). The shallow_depth value set by this
+  * function is used by __blk_mq_get_tag().
+@@ -517,7 +503,7 @@ static void dd_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
+ 	 * Throttle asynchronous requests and writes such that these requests
+ 	 * do not block the allocation of synchronous requests.
+ 	 */
+-	data->shallow_depth = dd_to_word_depth(data->hctx, dd->async_depth);
++	data->shallow_depth = dd->async_depth;
+ }
+ 
+ /* Called by blk_mq_update_nr_requests(). */
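[Editor's note: kyber and mq-deadline get the same treatment — shallow depths are expressed directly in requests, and the old conversion into per-word sbitmap units (dd_to_word_depth()) is dropped. Assuming KYBER_ASYNC_PERCENT of 75 and a hypothetical nr_requests of 256:

/* Worked example (hypothetical nr_requests = 256):
 *   kyber:    async_depth = 256 * 75 / 100 = 192 requests
 *   deadline: data->shallow_depth = dd->async_depth, unscaled
 */
]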
+diff --git a/crypto/jitterentropy-kcapi.c b/crypto/jitterentropy-kcapi.c
+index c24d4ff2b4a8b0..1266eb790708b8 100644
+--- a/crypto/jitterentropy-kcapi.c
++++ b/crypto/jitterentropy-kcapi.c
+@@ -144,7 +144,7 @@ int jent_hash_time(void *hash_state, __u64 time, u8 *addtl,
+ 	 * Inject the data from the previous loop into the pool. This data is
+ 	 * not considered to contain any entropy, but it stirs the pool a bit.
+ 	 */
+-	ret = crypto_shash_update(desc, intermediary, sizeof(intermediary));
++	ret = crypto_shash_update(hash_state_desc, intermediary, sizeof(intermediary));
+ 	if (ret)
+ 		goto err;
+ 
+@@ -157,11 +157,12 @@ int jent_hash_time(void *hash_state, __u64 time, u8 *addtl,
+ 	 * conditioning operation to have an identical amount of input data
+ 	 * according to section 3.1.5.
+ 	 */
+-	if (!stuck) {
+-		ret = crypto_shash_update(hash_state_desc, (u8 *)&time,
+-					  sizeof(__u64));
++	if (stuck) {
++		time = 0;
+ 	}
+ 
++	ret = crypto_shash_update(hash_state_desc, (u8 *)&time, sizeof(__u64));
++
+ err:
+ 	shash_desc_zero(desc);
+ 	memzero_explicit(intermediary, sizeof(intermediary));
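[Editor's note: the jitterentropy fix above makes every conditioning step absorb the same amount of input, per the section 3.1.5 requirement cited in the patch: a "stuck" time sample is hashed as zero instead of being skipped. A sketch of the resulting flow — "desc" is a hypothetical, already-initialized shash descriptor:

static int absorb_time(struct shash_desc *desc, __u64 time, int stuck)
{
	if (stuck)
		time = 0;	/* contribute no entropy, keep length fixed */

	return crypto_shash_update(desc, (u8 *)&time, sizeof(time));
}
]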
+diff --git a/drivers/accel/habanalabs/common/memory.c b/drivers/accel/habanalabs/common/memory.c
+index 601fdbe701790b..61472a381904ec 100644
+--- a/drivers/accel/habanalabs/common/memory.c
++++ b/drivers/accel/habanalabs/common/memory.c
+@@ -1829,9 +1829,6 @@ static void hl_release_dmabuf(struct dma_buf *dmabuf)
+ 	struct hl_dmabuf_priv *hl_dmabuf = dmabuf->priv;
+ 	struct hl_ctx *ctx;
+ 
+-	if (!hl_dmabuf)
+-		return;
+-
+ 	ctx = hl_dmabuf->ctx;
+ 
+ 	if (hl_dmabuf->memhash_hnode)
+@@ -1859,7 +1856,12 @@ static int export_dmabuf(struct hl_ctx *ctx,
+ {
+ 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ 	struct hl_device *hdev = ctx->hdev;
+-	int rc, fd;
++	CLASS(get_unused_fd, fd)(flags);
++
++	if (fd < 0) {
++		dev_err(hdev->dev, "failed to get a file descriptor for a dma-buf, %d\n", fd);
++		return fd;
++	}
+ 
+ 	exp_info.ops = &habanalabs_dmabuf_ops;
+ 	exp_info.size = total_size;
+@@ -1872,13 +1874,6 @@ static int export_dmabuf(struct hl_ctx *ctx,
+ 		return PTR_ERR(hl_dmabuf->dmabuf);
+ 	}
+ 
+-	fd = dma_buf_fd(hl_dmabuf->dmabuf, flags);
+-	if (fd < 0) {
+-		dev_err(hdev->dev, "failed to get a file descriptor for a dma-buf, %d\n", fd);
+-		rc = fd;
+-		goto err_dma_buf_put;
+-	}
+-
+ 	hl_dmabuf->ctx = ctx;
+ 	hl_ctx_get(hl_dmabuf->ctx);
+ 	atomic_inc(&ctx->hdev->dmabuf_export_cnt);
+@@ -1890,13 +1885,9 @@ static int export_dmabuf(struct hl_ctx *ctx,
+ 	get_file(ctx->hpriv->file_priv->filp);
+ 
+ 	*dmabuf_fd = fd;
++	fd_install(take_fd(fd), hl_dmabuf->dmabuf->file);
+ 
+ 	return 0;
+-
+-err_dma_buf_put:
+-	hl_dmabuf->dmabuf->priv = NULL;
+-	dma_buf_put(hl_dmabuf->dmabuf);
+-	return rc;
+ }
+ 
+ static int validate_export_params_common(struct hl_device *hdev, u64 addr, u64 size, u64 offset)
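[Editor's note: export_dmabuf() above switches to scope-based file-descriptor management — CLASS(get_unused_fd, ...) reserves the descriptor up front and releases it automatically on any error return, and take_fd() disarms that cleanup only at the moment fd_install() publishes the file. A minimal sketch of the idiom, where "make_file()" is a hypothetical constructor:

static int export_fd(int flags, int *out_fd)
{
	struct file *filp;
	CLASS(get_unused_fd, fd)(flags);

	if (fd < 0)
		return fd;		/* nothing reserved, nothing to undo */

	filp = make_file();
	if (IS_ERR(filp))
		return PTR_ERR(filp);	/* fd put back automatically */

	*out_fd = fd;
	fd_install(take_fd(fd), filp);	/* success: hand off ownership */
	return 0;
}
]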
+diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
+index 7cf6101cb4c731..2a99f5eb69629a 100644
+--- a/drivers/acpi/acpi_processor.c
++++ b/drivers/acpi/acpi_processor.c
+@@ -275,7 +275,7 @@ static inline int acpi_processor_hotadd_init(struct acpi_processor *pr,
+ 
+ static int acpi_processor_get_info(struct acpi_device *device)
+ {
+-	union acpi_object object = { 0 };
++	union acpi_object object = { .processor = { 0 } };
+ 	struct acpi_buffer buffer = { sizeof(union acpi_object), &object };
+ 	struct acpi_processor *pr = acpi_driver_data(device);
+ 	int device_declaration = 0;
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index 0f3c663c1b0a33..ce9b8e8a5d0974 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -902,6 +902,17 @@ static bool ghes_do_proc(struct ghes *ghes,
+ 		}
+ 	}
+ 
++	/*
++	 * If no memory failure work is queued for abnormal synchronous
++	 * errors, do a force kill.
++	 */
++	if (sync && !queued) {
++		dev_err(ghes->dev,
++			HW_ERR GHES_PFX "%s:%d: synchronous unrecoverable error (SIGBUS)\n",
++			current->comm, task_pid_nr(current));
++		force_sig(SIGBUS);
++	}
++
+ 	return queued;
+ }
+ 
+@@ -1088,6 +1099,8 @@ static void __ghes_panic(struct ghes *ghes,
+ 
+ 	__ghes_print_estatus(KERN_EMERG, ghes->generic, estatus);
+ 
++	add_taint(TAINT_MACHINE_CHECK, LOCKDEP_STILL_OK);
++
+ 	ghes_clear_estatus(ghes, estatus, buf_paddr, fixmap_idx);
+ 
+ 	if (!panic_timeout)
+diff --git a/drivers/acpi/prmt.c b/drivers/acpi/prmt.c
+index e549914a636c66..be033bbb126a44 100644
+--- a/drivers/acpi/prmt.c
++++ b/drivers/acpi/prmt.c
+@@ -85,8 +85,6 @@ static u64 efi_pa_va_lookup(efi_guid_t *guid, u64 pa)
+ 		}
+ 	}
+ 
+-	pr_warn("Failed to find VA for GUID: %pUL, PA: 0x%llx", guid, pa);
+-
+ 	return 0;
+ }
+ 
+@@ -154,13 +152,37 @@ acpi_parse_prmt(union acpi_subtable_headers *header, const unsigned long end)
+ 		guid_copy(&th->guid, (guid_t *)handler_info->handler_guid);
+ 		th->handler_addr =
+ 			(void *)efi_pa_va_lookup(&th->guid, handler_info->handler_address);
++		/*
++		 * Print a warning message if handler_addr is zero which is not expected to
++		 * ever happen.
++		 */
++		if (unlikely(!th->handler_addr))
++			pr_warn("Failed to find VA of handler for GUID: %pUL, PA: 0x%llx",
++				&th->guid, handler_info->handler_address);
+ 
+ 		th->static_data_buffer_addr =
+ 			efi_pa_va_lookup(&th->guid, handler_info->static_data_buffer_address);
++		/*
++		 * According to the PRM specification, static_data_buffer_address can be zero,
++		 * so avoid printing a warning message in that case.  Otherwise, if the
++		 * return value of efi_pa_va_lookup() is zero, print the message.
++		 */
++		if (unlikely(!th->static_data_buffer_addr && handler_info->static_data_buffer_address))
++			pr_warn("Failed to find VA of static data buffer for GUID: %pUL, PA: 0x%llx",
++				&th->guid, handler_info->static_data_buffer_address);
+ 
+ 		th->acpi_param_buffer_addr =
+ 			efi_pa_va_lookup(&th->guid, handler_info->acpi_param_buffer_address);
+ 
++		/*
++		 * According to the PRM specification, acpi_param_buffer_address can be zero,
++		 * so avoid printing a warning message in that case.  Otherwise, if the
++		 * return value of efi_pa_va_lookup() is zero, print the message.
++		 */
++		if (unlikely(!th->acpi_param_buffer_addr && handler_info->acpi_param_buffer_address))
++			pr_warn("Failed to find VA of acpi param buffer for GUID: %pUL, PA: 0x%llx",
++				&th->guid, handler_info->acpi_param_buffer_address);
++
+ 	} while (++cur_handler < tm->handler_count && (handler_info = get_next_handler(handler_info)));
+ 
+ 	return 0;
+diff --git a/drivers/acpi/processor_perflib.c b/drivers/acpi/processor_perflib.c
+index 53996f1a2d80e9..afe2e0037b3e1b 100644
+--- a/drivers/acpi/processor_perflib.c
++++ b/drivers/acpi/processor_perflib.c
+@@ -172,6 +172,9 @@ void acpi_processor_ppc_init(struct cpufreq_policy *policy)
+ {
+ 	unsigned int cpu;
+ 
++	if (ignore_ppc == 1)
++		return;
++
+ 	for_each_cpu(cpu, policy->related_cpus) {
+ 		struct acpi_processor *pr = per_cpu(processors, cpu);
+ 		int ret;
+@@ -192,6 +195,14 @@ void acpi_processor_ppc_init(struct cpufreq_policy *policy)
+ 		if (ret < 0)
+ 			pr_err("Failed to add freq constraint for CPU%d (%d)\n",
+ 			       cpu, ret);
++
++		if (!pr->performance)
++			continue;
++
++		ret = acpi_processor_get_platform_limit(pr);
++		if (ret)
++			pr_err("Failed to update freq constraint for CPU%d (%d)\n",
++			       cpu, ret);
+ 	}
+ }
+ 
+diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c
+index c88735c5484854..486af3ec3c0272 100644
+--- a/drivers/android/binder_alloc_selftest.c
++++ b/drivers/android/binder_alloc_selftest.c
+@@ -142,12 +142,12 @@ static void binder_selftest_free_buf(struct binder_alloc *alloc,
+ 	for (i = 0; i < BUFFER_NUM; i++)
+ 		binder_alloc_free_buf(alloc, buffers[seq[i]]);
+ 
+-	for (i = 0; i < end / PAGE_SIZE; i++) {
+ 		/**
+ 		 * Error message on a free page can be false positive
+ 		 * if binder shrinker ran during binder_alloc_free_buf
+ 		 * calls above.
+ 		 */
++	for (i = 0; i <= (end - 1) / PAGE_SIZE; i++) {
+ 		if (list_empty(page_to_lru(alloc->pages[i]))) {
+ 			pr_err_size_seq(sizes, seq);
+ 			pr_err("expect lru but is %s at page index %d\n",
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 931da4749e8086..f0c4e225172d49 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -1784,11 +1784,21 @@ static void ahci_update_initial_lpm_policy(struct ata_port *ap)
+ 		return;
+ 	}
+ 
++	/* If no Partial or no Slumber, we cannot support DIPM. */
++	if ((ap->host->flags & ATA_HOST_NO_PART) ||
++	    (ap->host->flags & ATA_HOST_NO_SSC)) {
++		ata_port_dbg(ap, "Host does not support DIPM\n");
++		ap->flags |= ATA_FLAG_NO_DIPM;
++	}
++
+ 	/* If no LPM states are supported by the HBA, do not bother with LPM */
+ 	if ((ap->host->flags & ATA_HOST_NO_PART) &&
+ 	    (ap->host->flags & ATA_HOST_NO_SSC) &&
+ 	    (ap->host->flags & ATA_HOST_NO_DEVSLP)) {
+-		ata_port_dbg(ap, "no LPM states supported, not enabling LPM\n");
++		ata_port_dbg(ap,
++			"No LPM states supported, forcing LPM max_power\n");
++		ap->flags |= ATA_FLAG_NO_LPM;
++		ap->target_lpm_policy = ATA_LPM_MAX_POWER;
+ 		return;
+ 	}
+ 
+diff --git a/drivers/ata/ata_piix.c b/drivers/ata/ata_piix.c
+index d441246fa357a1..6e19ae97e6c87f 100644
+--- a/drivers/ata/ata_piix.c
++++ b/drivers/ata/ata_piix.c
+@@ -1089,6 +1089,7 @@ static struct ata_port_operations ich_pata_ops = {
+ };
+ 
+ static struct attribute *piix_sidpr_shost_attrs[] = {
++	&dev_attr_link_power_management_supported.attr,
+ 	&dev_attr_link_power_management_policy.attr,
+ 	NULL
+ };
+diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
+index 22afa4ff860d18..9b5b1e4c7148cb 100644
+--- a/drivers/ata/libahci.c
++++ b/drivers/ata/libahci.c
+@@ -111,6 +111,7 @@ static DEVICE_ATTR(em_buffer, S_IWUSR | S_IRUGO,
+ static DEVICE_ATTR(em_message_supported, S_IRUGO, ahci_show_em_supported, NULL);
+ 
+ static struct attribute *ahci_shost_attrs[] = {
++	&dev_attr_link_power_management_supported.attr,
+ 	&dev_attr_link_power_management_policy.attr,
+ 	&dev_attr_em_message_type.attr,
+ 	&dev_attr_em_message.attr,
+diff --git a/drivers/ata/libata-sata.c b/drivers/ata/libata-sata.c
+index 2e4463d3a3561f..547a719b151051 100644
+--- a/drivers/ata/libata-sata.c
++++ b/drivers/ata/libata-sata.c
+@@ -900,14 +900,52 @@ static const char *ata_lpm_policy_names[] = {
+ 	[ATA_LPM_MIN_POWER]		= "min_power",
+ };
+ 
++/*
++ * Check if a port supports link power management.
++ * Must be called with the port locked.
++ */
++static bool ata_scsi_lpm_supported(struct ata_port *ap)
++{
++	struct ata_link *link;
++	struct ata_device *dev;
++
++	if (ap->flags & ATA_FLAG_NO_LPM)
++		return false;
++
++	ata_for_each_link(link, ap, EDGE) {
++		ata_for_each_dev(dev, &ap->link, ENABLED) {
++			if (dev->quirks & ATA_QUIRK_NOLPM)
++				return false;
++		}
++	}
++
++	return true;
++}
++
++static ssize_t ata_scsi_lpm_supported_show(struct device *dev,
++				 struct device_attribute *attr, char *buf)
++{
++	struct Scsi_Host *shost = class_to_shost(dev);
++	struct ata_port *ap = ata_shost_to_port(shost);
++	unsigned long flags;
++	bool supported;
++
++	spin_lock_irqsave(ap->lock, flags);
++	supported = ata_scsi_lpm_supported(ap);
++	spin_unlock_irqrestore(ap->lock, flags);
++
++	return sysfs_emit(buf, "%d\n", supported);
++}
++DEVICE_ATTR(link_power_management_supported, S_IRUGO,
++	    ata_scsi_lpm_supported_show, NULL);
++EXPORT_SYMBOL_GPL(dev_attr_link_power_management_supported);
++
+ static ssize_t ata_scsi_lpm_store(struct device *device,
+ 				  struct device_attribute *attr,
+ 				  const char *buf, size_t count)
+ {
+ 	struct Scsi_Host *shost = class_to_shost(device);
+ 	struct ata_port *ap = ata_shost_to_port(shost);
+-	struct ata_link *link;
+-	struct ata_device *dev;
+ 	enum ata_lpm_policy policy;
+ 	unsigned long flags;
+ 
+@@ -924,13 +962,9 @@ static ssize_t ata_scsi_lpm_store(struct device *device,
+ 
+ 	spin_lock_irqsave(ap->lock, flags);
+ 
+-	ata_for_each_link(link, ap, EDGE) {
+-		ata_for_each_dev(dev, &ap->link, ENABLED) {
+-			if (dev->quirks & ATA_QUIRK_NOLPM) {
+-				count = -EOPNOTSUPP;
+-				goto out_unlock;
+-			}
+-		}
++	if (!ata_scsi_lpm_supported(ap)) {
++		count = -EOPNOTSUPP;
++		goto out_unlock;
+ 	}
+ 
+ 	ap->target_lpm_policy = policy;
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index c55a7c70bc1a88..1ef26216f9718a 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -1854,6 +1854,11 @@ void pm_runtime_reinit(struct device *dev)
+ 				pm_runtime_put(dev->parent);
+ 		}
+ 	}
++	/*
++	 * Clear power.needs_force_resume in case it has been set by
++	 * pm_runtime_force_suspend() invoked from a driver remove callback.
++	 */
++	dev->power.needs_force_resume = false;
+ }
+ 
+ /**
+diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
+index e5a2e5f7887b86..975024cf03c594 100644
+--- a/drivers/block/drbd/drbd_receiver.c
++++ b/drivers/block/drbd/drbd_receiver.c
+@@ -2500,7 +2500,11 @@ static int handle_write_conflicts(struct drbd_device *device,
+ 			peer_req->w.cb = superseded ? e_send_superseded :
+ 						   e_send_retry_write;
+ 			list_add_tail(&peer_req->w.list, &device->done_ee);
+-			queue_work(connection->ack_sender, &peer_req->peer_device->send_acks_work);
++			/* put is in drbd_send_acks_wf() */
++			kref_get(&device->kref);
++			if (!queue_work(connection->ack_sender,
++					&peer_req->peer_device->send_acks_work))
++				kref_put(&device->kref, drbd_destroy_device);
+ 
+ 			err = -ENOENT;
+ 			goto out;
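[Editor's note: the drbd hunk above pins the device across the asynchronous ack work — a reference is taken before queueing, and dropped immediately if queue_work() reports the work item was already pending, since in that case the earlier queueing already owns the reference the handler will put. The general shape, with illustrative names; my_obj_release() is the kref release callback:

static void kick_work(struct my_obj *obj)
{
	kref_get(&obj->kref);			/* put is in the handler */
	if (!queue_work(obj->wq, &obj->work))
		kref_put(&obj->kref, my_obj_release);	/* already queued */
}
]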
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 3999056877572e..8220521b998418 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1432,17 +1432,34 @@ static int loop_set_dio(struct loop_device *lo, unsigned long arg)
+ 	return 0;
+ }
+ 
+-static int loop_set_block_size(struct loop_device *lo, unsigned long arg)
++static int loop_set_block_size(struct loop_device *lo, blk_mode_t mode,
++			       struct block_device *bdev, unsigned long arg)
+ {
+ 	struct queue_limits lim;
+ 	unsigned int memflags;
+ 	int err = 0;
+ 
+-	if (lo->lo_state != Lo_bound)
+-		return -ENXIO;
++	/*
++	 * If we don't hold exclusive handle for the device, upgrade to it
++	 * here to avoid changing device under exclusive owner.
++	 */
++	if (!(mode & BLK_OPEN_EXCL)) {
++		err = bd_prepare_to_claim(bdev, loop_set_block_size, NULL);
++		if (err)
++			return err;
++	}
++
++	err = mutex_lock_killable(&lo->lo_mutex);
++	if (err)
++		goto abort_claim;
++
++	if (lo->lo_state != Lo_bound) {
++		err = -ENXIO;
++		goto unlock;
++	}
+ 
+ 	if (lo->lo_queue->limits.logical_block_size == arg)
+-		return 0;
++		goto unlock;
+ 
+ 	sync_blockdev(lo->lo_device);
+ 	invalidate_bdev(lo->lo_device);
+@@ -1455,6 +1472,11 @@ static int loop_set_block_size(struct loop_device *lo, unsigned long arg)
+ 	loop_update_dio(lo);
+ 	blk_mq_unfreeze_queue(lo->lo_queue, memflags);
+ 
++unlock:
++	mutex_unlock(&lo->lo_mutex);
++abort_claim:
++	if (!(mode & BLK_OPEN_EXCL))
++		bd_abort_claiming(bdev, loop_set_block_size);
+ 	return err;
+ }
+ 
+@@ -1473,9 +1495,6 @@ static int lo_simple_ioctl(struct loop_device *lo, unsigned int cmd,
+ 	case LOOP_SET_DIRECT_IO:
+ 		err = loop_set_dio(lo, arg);
+ 		break;
+-	case LOOP_SET_BLOCK_SIZE:
+-		err = loop_set_block_size(lo, arg);
+-		break;
+ 	default:
+ 		err = -EINVAL;
+ 	}
+@@ -1530,9 +1549,12 @@ static int lo_ioctl(struct block_device *bdev, blk_mode_t mode,
+ 		break;
+ 	case LOOP_GET_STATUS64:
+ 		return loop_get_status64(lo, argp);
++	case LOOP_SET_BLOCK_SIZE:
++		if (!(mode & BLK_OPEN_WRITE) && !capable(CAP_SYS_ADMIN))
++			return -EPERM;
++		return loop_set_block_size(lo, mode, bdev, arg);
+ 	case LOOP_SET_CAPACITY:
+ 	case LOOP_SET_DIRECT_IO:
+-	case LOOP_SET_BLOCK_SIZE:
+ 		if (!(mode & BLK_OPEN_WRITE) && !capable(CAP_SYS_ADMIN))
+ 			return -EPERM;
+ 		fallthrough;
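[Editor's note: LOOP_SET_BLOCK_SIZE now temporarily claims the block device when the ioctl caller does not already hold it exclusively, so the logical block size cannot change under an exclusive owner, and the claim is released on every exit path. A sketch of the claim/abort bracket — "apply_change()" is hypothetical, and the function pointer itself serves as the holder token, as in the patch above:

static int change_under_claim(struct block_device *bdev, blk_mode_t mode)
{
	int err;

	if (!(mode & BLK_OPEN_EXCL)) {
		err = bd_prepare_to_claim(bdev, change_under_claim, NULL);
		if (err)
			return err;
	}

	err = apply_change(bdev);

	if (!(mode & BLK_OPEN_EXCL))
		bd_abort_claiming(bdev, change_under_claim);
	return err;
}
]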
+diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
+index b5727dea15bde7..7af21fe6767172 100644
+--- a/drivers/block/sunvdc.c
++++ b/drivers/block/sunvdc.c
+@@ -957,8 +957,10 @@ static bool vdc_port_mpgroup_check(struct vio_dev *vdev)
+ 	dev = device_find_child(vdev->dev.parent, &port_data,
+ 				vdc_device_probed);
+ 
+-	if (dev)
++	if (dev) {
++		put_device(dev);
+ 		return true;
++	}
+ 
+ 	return false;
+ }
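[Editor's note: the sunvdc fix above restores reference balance — device_find_child() hands back its match with a reference held, so a caller that only needs a yes/no answer must put_device() the result. The rule in isolation, where "match_fn" is a hypothetical match callback:

static bool has_matching_child(struct device *parent, void *key)
{
	struct device *dev = device_find_child(parent, key, match_fn);

	if (!dev)
		return false;

	put_device(dev);	/* balance the ref device_find_child() took */
	return true;
}
]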
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index a2ba32319b6f6e..c83de1772fdd31 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -515,6 +515,7 @@ static const struct usb_device_id quirks_table[] = {
+ 	/* Realtek 8851BE Bluetooth devices */
+ 	{ USB_DEVICE(0x0bda, 0xb850), .driver_info = BTUSB_REALTEK },
+ 	{ USB_DEVICE(0x13d3, 0x3600), .driver_info = BTUSB_REALTEK },
++	{ USB_DEVICE(0x13d3, 0x3601), .driver_info = BTUSB_REALTEK },
+ 
+ 	/* Realtek 8851BU Bluetooth devices */
+ 	{ USB_DEVICE(0x3625, 0x010b), .driver_info = BTUSB_REALTEK |
+@@ -709,6 +710,8 @@ static const struct usb_device_id quirks_table[] = {
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0489, 0xe139), .driver_info = BTUSB_MEDIATEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x0489, 0xe14e), .driver_info = BTUSB_MEDIATEK |
++						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0489, 0xe14f), .driver_info = BTUSB_MEDIATEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0489, 0xe150), .driver_info = BTUSB_MEDIATEK |
+diff --git a/drivers/bus/mhi/host/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c
+index cd274f4dae938c..e5af0beed05026 100644
+--- a/drivers/bus/mhi/host/pci_generic.c
++++ b/drivers/bus/mhi/host/pci_generic.c
+@@ -43,6 +43,7 @@
+  * @mru_default: default MRU size for MBIM network packets
+  * @sideband_wake: Devices using dedicated sideband GPIO for wakeup instead
+  *		   of inband wake support (such as sdx24)
++ * @no_m3: M3 not supported
+  */
+ struct mhi_pci_dev_info {
+ 	const struct mhi_controller_config *config;
+@@ -54,6 +55,7 @@ struct mhi_pci_dev_info {
+ 	unsigned int dma_data_width;
+ 	unsigned int mru_default;
+ 	bool sideband_wake;
++	bool no_m3;
+ };
+ 
+ #define MHI_CHANNEL_CONFIG_UL(ch_num, ch_name, el_count, ev_ring) \
+@@ -295,6 +297,7 @@ static const struct mhi_pci_dev_info mhi_qcom_qdu100_info = {
+ 	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
+ 	.dma_data_width = 32,
+ 	.sideband_wake = false,
++	.no_m3 = true,
+ };
+ 
+ static const struct mhi_channel_config mhi_qcom_sa8775p_channels[] = {
+@@ -818,6 +821,16 @@ static const struct mhi_pci_dev_info mhi_telit_fn920c04_info = {
+ 	.edl_trigger = true,
+ };
+ 
++static const struct mhi_pci_dev_info mhi_telit_fn990b40_info = {
++	.name = "telit-fn990b40",
++	.config = &modem_telit_fn920c04_config,
++	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
++	.dma_data_width = 32,
++	.sideband_wake = false,
++	.mru_default = 32768,
++	.edl_trigger = true,
++};
++
+ static const struct mhi_pci_dev_info mhi_netprisma_lcur57_info = {
+ 	.name = "netprisma-lcur57",
+ 	.edl = "qcom/prog_firehose_sdx24.mbn",
+@@ -865,6 +878,9 @@ static const struct pci_device_id mhi_pci_id_table[] = {
+ 		.driver_data = (kernel_ulong_t) &mhi_telit_fe990a_info },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0308),
+ 		.driver_data = (kernel_ulong_t) &mhi_qcom_sdx65_info },
++	/* Telit FN990B40 (sdx72) */
++	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_QCOM, 0x0309, 0x1c5d, 0x201a),
++		.driver_data = (kernel_ulong_t) &mhi_telit_fn990b40_info },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0309),
+ 		.driver_data = (kernel_ulong_t) &mhi_qcom_sdx75_info },
+ 	/* QDU100, x100-DU */
+@@ -1309,8 +1325,8 @@ static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	/* start health check */
+ 	mod_timer(&mhi_pdev->health_check_timer, jiffies + HEALTH_CHECK_PERIOD);
+ 
+-	/* Only allow runtime-suspend if PME capable (for wakeup) */
+-	if (pci_pme_capable(pdev, PCI_D3hot)) {
++	/* Allow runtime suspend only if both PME from D3Hot and M3 are supported */
++	if (pci_pme_capable(pdev, PCI_D3hot) && !(info->no_m3)) {
+ 		pm_runtime_set_autosuspend_delay(&pdev->dev, 2000);
+ 		pm_runtime_use_autosuspend(&pdev->dev);
+ 		pm_runtime_mark_last_busy(&pdev->dev);
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index 6047c9600e03aa..808d0d21350984 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -4615,10 +4615,10 @@ static int handle_one_recv_msg(struct ipmi_smi *intf,
+ 		 * The NetFN and Command in the response is not even
+ 		 * marginally correct.
+ 		 */
+-		dev_warn(intf->si_dev,
+-			 "BMC returned incorrect response, expected netfn %x cmd %x, got netfn %x cmd %x\n",
+-			 (msg->data[0] >> 2) | 1, msg->data[1],
+-			 msg->rsp[0] >> 2, msg->rsp[1]);
++		dev_warn_ratelimited(intf->si_dev,
++				     "BMC returned incorrect response, expected netfn %x cmd %x, got netfn %x cmd %x\n",
++				     (msg->data[0] >> 2) | 1, msg->data[1],
++				     msg->rsp[0] >> 2, msg->rsp[1]);
+ 
+ 		goto return_unspecified;
+ 	}
+diff --git a/drivers/char/ipmi/ipmi_watchdog.c b/drivers/char/ipmi/ipmi_watchdog.c
+index f1875b2bebbc73..f798d4cbf8a778 100644
+--- a/drivers/char/ipmi/ipmi_watchdog.c
++++ b/drivers/char/ipmi/ipmi_watchdog.c
+@@ -1186,14 +1186,8 @@ static struct ipmi_smi_watcher smi_watcher = {
+ 	.smi_gone = ipmi_smi_gone
+ };
+ 
+-static int action_op(const char *inval, char *outval)
++static int action_op_set_val(const char *inval)
+ {
+-	if (outval)
+-		strcpy(outval, action);
+-
+-	if (!inval)
+-		return 0;
+-
+ 	if (strcmp(inval, "reset") == 0)
+ 		action_val = WDOG_TIMEOUT_RESET;
+ 	else if (strcmp(inval, "none") == 0)
+@@ -1204,18 +1198,26 @@ static int action_op(const char *inval, char *outval)
+ 		action_val = WDOG_TIMEOUT_POWER_DOWN;
+ 	else
+ 		return -EINVAL;
+-	strcpy(action, inval);
+ 	return 0;
+ }
+ 
+-static int preaction_op(const char *inval, char *outval)
++static int action_op(const char *inval, char *outval)
+ {
++	int rv;
++
+ 	if (outval)
+-		strcpy(outval, preaction);
++		strcpy(outval, action);
+ 
+ 	if (!inval)
+ 		return 0;
++	rv = action_op_set_val(inval);
++	if (!rv)
++		strcpy(action, inval);
++	return rv;
++}
+ 
++static int preaction_op_set_val(const char *inval)
++{
+ 	if (strcmp(inval, "pre_none") == 0)
+ 		preaction_val = WDOG_PRETIMEOUT_NONE;
+ 	else if (strcmp(inval, "pre_smi") == 0)
+@@ -1228,18 +1230,26 @@ static int preaction_op(const char *inval, char *outval)
+ 		preaction_val = WDOG_PRETIMEOUT_MSG_INT;
+ 	else
+ 		return -EINVAL;
+-	strcpy(preaction, inval);
+ 	return 0;
+ }
+ 
+-static int preop_op(const char *inval, char *outval)
++static int preaction_op(const char *inval, char *outval)
+ {
++	int rv;
++
+ 	if (outval)
+-		strcpy(outval, preop);
++		strcpy(outval, preaction);
+ 
+ 	if (!inval)
+ 		return 0;
++	rv = preaction_op_set_val(inval);
++	if (!rv)
++		strcpy(preaction, inval);
++	return 0;
++}
+ 
++static int preop_op_set_val(const char *inval)
++{
+ 	if (strcmp(inval, "preop_none") == 0)
+ 		preop_val = WDOG_PREOP_NONE;
+ 	else if (strcmp(inval, "preop_panic") == 0)
+@@ -1248,7 +1258,22 @@ static int preop_op(const char *inval, char *outval)
+ 		preop_val = WDOG_PREOP_GIVE_DATA;
+ 	else
+ 		return -EINVAL;
+-	strcpy(preop, inval);
++	return 0;
++}
++
++static int preop_op(const char *inval, char *outval)
++{
++	int rv;
++
++	if (outval)
++		strcpy(outval, preop);
++
++	if (!inval)
++		return 0;
++
++	rv = preop_op_set_val(inval);
++	if (!rv)
++		strcpy(preop, inval);
+ 	return 0;
+ }
+ 
+@@ -1285,18 +1310,18 @@ static int __init ipmi_wdog_init(void)
+ {
+ 	int rv;
+ 
+-	if (action_op(action, NULL)) {
++	if (action_op_set_val(action)) {
+ 		action_op("reset", NULL);
+ 		pr_info("Unknown action '%s', defaulting to reset\n", action);
+ 	}
+ 
+-	if (preaction_op(preaction, NULL)) {
++	if (preaction_op_set_val(preaction)) {
+ 		preaction_op("pre_none", NULL);
+ 		pr_info("Unknown preaction '%s', defaulting to none\n",
+ 			preaction);
+ 	}
+ 
+-	if (preop_op(preop, NULL)) {
++	if (preop_op_set_val(preop)) {
+ 		preop_op("preop_none", NULL);
+ 		pr_info("Unknown preop '%s', defaulting to none\n", preop);
+ 	}
+diff --git a/drivers/char/misc.c b/drivers/char/misc.c
+index dda466f9181acf..30178e20d962d4 100644
+--- a/drivers/char/misc.c
++++ b/drivers/char/misc.c
+@@ -314,8 +314,8 @@ static int __init misc_init(void)
+ 	if (err)
+ 		goto fail_remove;
+ 
+-	err = -EIO;
+-	if (__register_chrdev(MISC_MAJOR, 0, MINORMASK + 1, "misc", &misc_fops))
++	err = __register_chrdev(MISC_MAJOR, 0, MINORMASK + 1, "misc", &misc_fops);
++	if (err < 0)
+ 		goto fail_printk;
+ 	return 0;
+ 
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index 8d7e4da6ed538a..8d18b33aa62d08 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -82,6 +82,13 @@ static bool tpm_chip_req_canceled(struct tpm_chip *chip, u8 status)
+ 	return chip->ops->req_canceled(chip, status);
+ }
+ 
++static bool tpm_transmit_completed(u8 status, struct tpm_chip *chip)
++{
++	u8 status_masked = status & chip->ops->req_complete_mask;
++
++	return status_masked == chip->ops->req_complete_val;
++}
++
+ static ssize_t tpm_try_transmit(struct tpm_chip *chip, void *buf, size_t bufsiz)
+ {
+ 	struct tpm_header *header = buf;
+@@ -129,8 +136,7 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip, void *buf, size_t bufsiz)
+ 	stop = jiffies + tpm_calc_ordinal_duration(chip, ordinal);
+ 	do {
+ 		u8 status = tpm_chip_status(chip);
+-		if ((status & chip->ops->req_complete_mask) ==
+-		    chip->ops->req_complete_val)
++		if (tpm_transmit_completed(status, chip))
+ 			goto out_recv;
+ 
+ 		if (tpm_chip_req_canceled(chip, status)) {
+@@ -142,6 +148,13 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip, void *buf, size_t bufsiz)
+ 		rmb();
+ 	} while (time_before(jiffies, stop));
+ 
++	/*
++	 * Check for completion one more time, just in case the device reported
++	 * it while the driver was sleeping in the busy loop above.
++	 */
++	if (tpm_transmit_completed(tpm_chip_status(chip), chip))
++		goto out_recv;
++
+ 	tpm_chip_cancel(chip);
+ 	dev_err(&chip->dev, "Operation Timed out\n");
+ 	return -ETIME;
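[Editor's note: the TPM change above closes a classic poll-loop race — the loop can sleep past its deadline in the very interval the device completes, so completion is checked once more before the command is failed with -ETIME. The shape of the fix, where "device_done()" is a hypothetical status predicate:

static int wait_for_completion_polled(unsigned long timeout)
{
	unsigned long stop = jiffies + timeout;

	do {
		if (device_done())
			return 0;
		usleep_range(100, 200);
	} while (time_before(jiffies, stop));

	/* one last look: completion may have landed while we slept */
	if (device_done())
		return 0;

	return -ETIME;
}
]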
+diff --git a/drivers/char/tpm/tpm_crb_ffa.c b/drivers/char/tpm/tpm_crb_ffa.c
+index 3169a87a56b606..430b99c04124cf 100644
+--- a/drivers/char/tpm/tpm_crb_ffa.c
++++ b/drivers/char/tpm/tpm_crb_ffa.c
+@@ -109,6 +109,7 @@ struct tpm_crb_ffa {
+ };
+ 
+ static struct tpm_crb_ffa *tpm_crb_ffa;
++static struct ffa_driver tpm_crb_ffa_driver;
+ 
+ static int tpm_crb_ffa_to_linux_errno(int errno)
+ {
+@@ -162,13 +163,23 @@ static int tpm_crb_ffa_to_linux_errno(int errno)
+  */
+ int tpm_crb_ffa_init(void)
+ {
++	int ret = 0;
++
++	if (!IS_MODULE(CONFIG_TCG_ARM_CRB_FFA)) {
++		ret = ffa_register(&tpm_crb_ffa_driver);
++		if (ret) {
++			tpm_crb_ffa = ERR_PTR(-ENODEV);
++			return ret;
++		}
++	}
++
+ 	if (!tpm_crb_ffa)
+-		return -ENOENT;
++		ret = -ENOENT;
+ 
+ 	if (IS_ERR_VALUE(tpm_crb_ffa))
+-		return -ENODEV;
++		ret = -ENODEV;
+ 
+-	return 0;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(tpm_crb_ffa_init);
+ 
+@@ -341,7 +352,9 @@ static struct ffa_driver tpm_crb_ffa_driver = {
+ 	.id_table = tpm_crb_ffa_device_id,
+ };
+ 
++#ifdef MODULE
+ module_ffa_driver(tpm_crb_ffa_driver);
++#endif
+ 
+ MODULE_AUTHOR("Arm");
+ MODULE_DESCRIPTION("TPM CRB FFA driver");
+diff --git a/drivers/clk/qcom/dispcc-sm8750.c b/drivers/clk/qcom/dispcc-sm8750.c
+index 877b40d50e6ff5..ca09da111a50e8 100644
+--- a/drivers/clk/qcom/dispcc-sm8750.c
++++ b/drivers/clk/qcom/dispcc-sm8750.c
+@@ -393,7 +393,7 @@ static struct clk_rcg2 disp_cc_mdss_byte0_clk_src = {
+ 		.name = "disp_cc_mdss_byte0_clk_src",
+ 		.parent_data = disp_cc_parent_data_1,
+ 		.num_parents = ARRAY_SIZE(disp_cc_parent_data_1),
+-		.flags = CLK_SET_RATE_PARENT,
++		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+ 		.ops = &clk_byte2_ops,
+ 	},
+ };
+@@ -408,7 +408,7 @@ static struct clk_rcg2 disp_cc_mdss_byte1_clk_src = {
+ 		.name = "disp_cc_mdss_byte1_clk_src",
+ 		.parent_data = disp_cc_parent_data_1,
+ 		.num_parents = ARRAY_SIZE(disp_cc_parent_data_1),
+-		.flags = CLK_SET_RATE_PARENT,
++		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+ 		.ops = &clk_byte2_ops,
+ 	},
+ };
+@@ -712,7 +712,7 @@ static struct clk_rcg2 disp_cc_mdss_pclk0_clk_src = {
+ 		.name = "disp_cc_mdss_pclk0_clk_src",
+ 		.parent_data = disp_cc_parent_data_1,
+ 		.num_parents = ARRAY_SIZE(disp_cc_parent_data_1),
+-		.flags = CLK_SET_RATE_PARENT,
++		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+ 		.ops = &clk_pixel_ops,
+ 	},
+ };
+@@ -727,7 +727,7 @@ static struct clk_rcg2 disp_cc_mdss_pclk1_clk_src = {
+ 		.name = "disp_cc_mdss_pclk1_clk_src",
+ 		.parent_data = disp_cc_parent_data_1,
+ 		.num_parents = ARRAY_SIZE(disp_cc_parent_data_1),
+-		.flags = CLK_SET_RATE_PARENT,
++		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+ 		.ops = &clk_pixel_ops,
+ 	},
+ };
+@@ -742,7 +742,7 @@ static struct clk_rcg2 disp_cc_mdss_pclk2_clk_src = {
+ 		.name = "disp_cc_mdss_pclk2_clk_src",
+ 		.parent_data = disp_cc_parent_data_1,
+ 		.num_parents = ARRAY_SIZE(disp_cc_parent_data_1),
+-		.flags = CLK_SET_RATE_PARENT,
++		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+ 		.ops = &clk_pixel_ops,
+ 	},
+ };
+diff --git a/drivers/clk/qcom/gcc-ipq5018.c b/drivers/clk/qcom/gcc-ipq5018.c
+index 70f5dcb96700f5..24eb4c40da6346 100644
+--- a/drivers/clk/qcom/gcc-ipq5018.c
++++ b/drivers/clk/qcom/gcc-ipq5018.c
+@@ -1371,7 +1371,7 @@ static struct clk_branch gcc_xo_clk = {
+ 				&gcc_xo_clk_src.clkr.hw,
+ 			},
+ 			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT,
++			.flags = CLK_SET_RATE_PARENT | CLK_IS_CRITICAL,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+diff --git a/drivers/clk/qcom/gcc-ipq8074.c b/drivers/clk/qcom/gcc-ipq8074.c
+index 7258ba5c09001e..1329ea28d70313 100644
+--- a/drivers/clk/qcom/gcc-ipq8074.c
++++ b/drivers/clk/qcom/gcc-ipq8074.c
+@@ -1895,10 +1895,10 @@ static const struct freq_conf ftbl_nss_port6_tx_clk_src_125[] = {
+ static const struct freq_multi_tbl ftbl_nss_port6_tx_clk_src[] = {
+ 	FMS(19200000, P_XO, 1, 0, 0),
+ 	FM(25000000, ftbl_nss_port6_tx_clk_src_25),
+-	FMS(78125000, P_UNIPHY1_RX, 4, 0, 0),
++	FMS(78125000, P_UNIPHY2_TX, 4, 0, 0),
+ 	FM(125000000, ftbl_nss_port6_tx_clk_src_125),
+-	FMS(156250000, P_UNIPHY1_RX, 2, 0, 0),
+-	FMS(312500000, P_UNIPHY1_RX, 1, 0, 0),
++	FMS(156250000, P_UNIPHY2_TX, 2, 0, 0),
++	FMS(312500000, P_UNIPHY2_TX, 1, 0, 0),
+ 	{ }
+ };
+ 
+diff --git a/drivers/clk/renesas/rzg2l-cpg.c b/drivers/clk/renesas/rzg2l-cpg.c
+index b91dfbfb01e31c..0dd10c3e2919e0 100644
+--- a/drivers/clk/renesas/rzg2l-cpg.c
++++ b/drivers/clk/renesas/rzg2l-cpg.c
+@@ -1388,10 +1388,6 @@ rzg2l_cpg_register_mod_clk(const struct rzg2l_mod_clk *mod,
+ 		goto fail;
+ 	}
+ 
+-	clk = clock->hw.clk;
+-	dev_dbg(dev, "Module clock %pC at %lu Hz\n", clk, clk_get_rate(clk));
+-	priv->clks[id] = clk;
+-
+ 	if (mod->is_coupled) {
+ 		struct mstp_clock *sibling;
+ 
+@@ -1403,6 +1399,10 @@ rzg2l_cpg_register_mod_clk(const struct rzg2l_mod_clk *mod,
+ 		}
+ 	}
+ 
++	clk = clock->hw.clk;
++	dev_dbg(dev, "Module clock %pC at %lu Hz\n", clk, clk_get_rate(clk));
++	priv->clks[id] = clk;
++
+ 	return;
+ 
+ fail:
+diff --git a/drivers/clk/samsung/clk-exynos850.c b/drivers/clk/samsung/clk-exynos850.c
+index cf7e08cca78e04..56f27697c76b13 100644
+--- a/drivers/clk/samsung/clk-exynos850.c
++++ b/drivers/clk/samsung/clk-exynos850.c
+@@ -1360,7 +1360,7 @@ static const unsigned long cpucl1_clk_regs[] __initconst = {
+ 	CLK_CON_GAT_GATE_CLK_CPUCL1_CPU,
+ };
+ 
+-/* List of parent clocks for Muxes in CMU_CPUCL0 */
++/* List of parent clocks for Muxes in CMU_CPUCL1 */
+ PNAME(mout_pll_cpucl1_p)		 = { "oscclk", "fout_cpucl1_pll" };
+ PNAME(mout_cpucl1_switch_user_p)	 = { "oscclk", "dout_cpucl1_switch" };
+ PNAME(mout_cpucl1_dbg_user_p)		 = { "oscclk", "dout_cpucl1_dbg" };
+diff --git a/drivers/clk/samsung/clk-gs101.c b/drivers/clk/samsung/clk-gs101.c
+index f9c3d68d449c35..70b26db9b95ad0 100644
+--- a/drivers/clk/samsung/clk-gs101.c
++++ b/drivers/clk/samsung/clk-gs101.c
+@@ -1154,7 +1154,7 @@ static const struct samsung_div_clock cmu_top_div_clks[] __initconst = {
+ 	    CLK_CON_DIV_CLKCMU_G2D_MSCL, 0, 4),
+ 	DIV(CLK_DOUT_CMU_G3AA_G3AA, "dout_cmu_g3aa_g3aa", "gout_cmu_g3aa_g3aa",
+ 	    CLK_CON_DIV_CLKCMU_G3AA_G3AA, 0, 4),
+-	DIV(CLK_DOUT_CMU_G3D_SWITCH, "dout_cmu_g3d_busd", "gout_cmu_g3d_busd",
++	DIV(CLK_DOUT_CMU_G3D_BUSD, "dout_cmu_g3d_busd", "gout_cmu_g3d_busd",
+ 	    CLK_CON_DIV_CLKCMU_G3D_BUSD, 0, 4),
+ 	DIV(CLK_DOUT_CMU_G3D_GLB, "dout_cmu_g3d_glb", "gout_cmu_g3d_glb",
+ 	    CLK_CON_DIV_CLKCMU_G3D_GLB, 0, 4),
+@@ -2129,7 +2129,7 @@ PNAME(mout_hsi0_usbdpdbg_user_p)	= { "oscclk",
+ 					    "dout_cmu_hsi0_usbdpdbg" };
+ PNAME(mout_hsi0_bus_p)			= { "mout_hsi0_bus_user",
+ 					    "mout_hsi0_alt_user" };
+-PNAME(mout_hsi0_usb20_ref_p)		= { "fout_usb_pll",
++PNAME(mout_hsi0_usb20_ref_p)		= { "mout_pll_usb",
+ 					    "mout_hsi0_tcxo_user" };
+ PNAME(mout_hsi0_usb31drd_p)		= { "fout_usb_pll",
+ 					    "mout_hsi0_usb31drd_user",
+diff --git a/drivers/clk/tegra/clk-periph.c b/drivers/clk/tegra/clk-periph.c
+index 0626650a7011cc..c9fc52a36fce9c 100644
+--- a/drivers/clk/tegra/clk-periph.c
++++ b/drivers/clk/tegra/clk-periph.c
+@@ -51,7 +51,7 @@ static int clk_periph_determine_rate(struct clk_hw *hw,
+ 	struct tegra_clk_periph *periph = to_clk_periph(hw);
+ 	const struct clk_ops *div_ops = periph->div_ops;
+ 	struct clk_hw *div_hw = &periph->divider.hw;
+-	unsigned long rate;
++	long rate;
+ 
+ 	__clk_hw_set_clk(div_hw, hw);
+ 
+@@ -59,7 +59,7 @@ static int clk_periph_determine_rate(struct clk_hw *hw,
+ 	if (rate < 0)
+ 		return rate;
+ 
+-	req->rate = rate;
++	req->rate = (unsigned long)rate;
+ 	return 0;
+ }
+ 
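The clk-periph hunk above is a classic signedness fix: the divider's round_rate() reports failure as a negative errno packed into a signed long, so keeping the intermediate in an unsigned long makes the rate < 0 test unreachable and lets a huge bogus rate through. A standalone sketch of the pitfall, plain C with an illustrative round_rate stand-in:

    #include <stdio.h>

    /* Stand-in for a round_rate()-style callback: negative errno on failure. */
    static long round_rate(long requested)
    {
        return requested > 0 ? requested : -22;    /* -EINVAL */
    }

    int main(void)
    {
        unsigned long broken = round_rate(-1);  /* errno wraps to a huge value */
        long fixed = round_rate(-1);

        if (fixed < 0)  /* reachable only with the signed type */
            printf("caught error %ld (broken copy reads %lu)\n", fixed, broken);
        return 0;
    }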
+diff --git a/drivers/clk/thead/clk-th1520-ap.c b/drivers/clk/thead/clk-th1520-ap.c
+index 6ab89245af1217..c8ebacc6934ab6 100644
+--- a/drivers/clk/thead/clk-th1520-ap.c
++++ b/drivers/clk/thead/clk-th1520-ap.c
+@@ -799,11 +799,12 @@ static CCU_GATE(CLK_AON2CPU_A2X, aon2cpu_a2x_clk, "aon2cpu-a2x", axi4_cpusys2_ac
+ 		0x134, BIT(8), 0);
+ static CCU_GATE(CLK_X2X_CPUSYS, x2x_cpusys_clk, "x2x-cpusys", axi4_cpusys2_aclk_pd,
+ 		0x134, BIT(7), 0);
+-static CCU_GATE(CLK_CPU2AON_X2H, cpu2aon_x2h_clk, "cpu2aon-x2h", axi_aclk_pd, 0x138, BIT(8), 0);
++static CCU_GATE(CLK_CPU2AON_X2H, cpu2aon_x2h_clk, "cpu2aon-x2h", axi_aclk_pd,
++		0x138, BIT(8), CLK_IGNORE_UNUSED);
+ static CCU_GATE(CLK_CPU2PERI_X2H, cpu2peri_x2h_clk, "cpu2peri-x2h", axi4_cpusys2_aclk_pd,
+ 		0x140, BIT(9), CLK_IGNORE_UNUSED);
+ static CCU_GATE(CLK_PERISYS_APB1_HCLK, perisys_apb1_hclk, "perisys-apb1-hclk", perisys_ahb_hclk_pd,
+-		0x150, BIT(9), 0);
++		0x150, BIT(9), CLK_IGNORE_UNUSED);
+ static CCU_GATE(CLK_PERISYS_APB2_HCLK, perisys_apb2_hclk, "perisys-apb2-hclk", perisys_ahb_hclk_pd,
+ 		0x150, BIT(10), CLK_IGNORE_UNUSED);
+ static CCU_GATE(CLK_PERISYS_APB3_HCLK, perisys_apb3_hclk, "perisys-apb3-hclk", perisys_ahb_hclk_pd,
+diff --git a/drivers/comedi/comedi_fops.c b/drivers/comedi/comedi_fops.c
+index 07bc81a706b4d3..bd8a44ea62d2d0 100644
+--- a/drivers/comedi/comedi_fops.c
++++ b/drivers/comedi/comedi_fops.c
+@@ -787,6 +787,7 @@ static int is_device_busy(struct comedi_device *dev)
+ 	struct comedi_subdevice *s;
+ 	int i;
+ 
++	lockdep_assert_held_write(&dev->attach_lock);
+ 	lockdep_assert_held(&dev->mutex);
+ 	if (!dev->attached)
+ 		return 0;
+@@ -795,7 +796,16 @@ static int is_device_busy(struct comedi_device *dev)
+ 		s = &dev->subdevices[i];
+ 		if (s->busy)
+ 			return 1;
+-		if (s->async && comedi_buf_is_mmapped(s))
++		if (!s->async)
++			continue;
++		if (comedi_buf_is_mmapped(s))
++			return 1;
++		/*
++		 * There may be tasks still waiting on the subdevice's wait
++		 * queue, although they should already be about to be removed
++		 * from it since the subdevice has no active async command.
++		 */
++		if (wq_has_sleeper(&s->async->wait_head))
+ 			return 1;
+ 	}
+ 
+@@ -825,15 +835,22 @@ static int do_devconfig_ioctl(struct comedi_device *dev,
+ 		return -EPERM;
+ 
+ 	if (!arg) {
+-		if (is_device_busy(dev))
+-			return -EBUSY;
++		int rc = 0;
++
+ 		if (dev->attached) {
+-			struct module *driver_module = dev->driver->module;
++			down_write(&dev->attach_lock);
++			if (is_device_busy(dev)) {
++				rc = -EBUSY;
++			} else {
++				struct module *driver_module =
++					dev->driver->module;
+ 
+-			comedi_device_detach(dev);
+-			module_put(driver_module);
++				comedi_device_detach_locked(dev);
++				module_put(driver_module);
++			}
++			up_write(&dev->attach_lock);
+ 		}
+-		return 0;
++		return rc;
+ 	}
+ 
+ 	if (copy_from_user(&it, arg, sizeof(it)))
+diff --git a/drivers/comedi/comedi_internal.h b/drivers/comedi/comedi_internal.h
+index 9b3631a654c895..cf10ba016ebc81 100644
+--- a/drivers/comedi/comedi_internal.h
++++ b/drivers/comedi/comedi_internal.h
+@@ -50,6 +50,7 @@ extern struct mutex comedi_drivers_list_lock;
+ int insn_inval(struct comedi_device *dev, struct comedi_subdevice *s,
+ 	       struct comedi_insn *insn, unsigned int *data);
+ 
++void comedi_device_detach_locked(struct comedi_device *dev);
+ void comedi_device_detach(struct comedi_device *dev);
+ int comedi_device_attach(struct comedi_device *dev,
+ 			 struct comedi_devconfig *it);
+diff --git a/drivers/comedi/drivers.c b/drivers/comedi/drivers.c
+index 9e4b7c840a8f5a..f1dc854928c176 100644
+--- a/drivers/comedi/drivers.c
++++ b/drivers/comedi/drivers.c
+@@ -158,7 +158,7 @@ static void comedi_device_detach_cleanup(struct comedi_device *dev)
+ 	int i;
+ 	struct comedi_subdevice *s;
+ 
+-	lockdep_assert_held(&dev->attach_lock);
++	lockdep_assert_held_write(&dev->attach_lock);
+ 	lockdep_assert_held(&dev->mutex);
+ 	if (dev->subdevices) {
+ 		for (i = 0; i < dev->n_subdevices; i++) {
+@@ -196,16 +196,23 @@ static void comedi_device_detach_cleanup(struct comedi_device *dev)
+ 	comedi_clear_hw_dev(dev);
+ }
+ 
+-void comedi_device_detach(struct comedi_device *dev)
++void comedi_device_detach_locked(struct comedi_device *dev)
+ {
++	lockdep_assert_held_write(&dev->attach_lock);
+ 	lockdep_assert_held(&dev->mutex);
+ 	comedi_device_cancel_all(dev);
+-	down_write(&dev->attach_lock);
+ 	dev->attached = false;
+ 	dev->detach_count++;
+ 	if (dev->driver)
+ 		dev->driver->detach(dev);
+ 	comedi_device_detach_cleanup(dev);
++}
++
++void comedi_device_detach(struct comedi_device *dev)
++{
++	lockdep_assert_held(&dev->mutex);
++	down_write(&dev->attach_lock);
++	comedi_device_detach_locked(dev);
+ 	up_write(&dev->attach_lock);
+ }
+ 
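The comedi rework above follows the usual _locked naming convention: the new comedi_device_detach_locked() asserts that the caller already holds attach_lock for writing, while the original entry point becomes a thin wrapper that takes and releases the lock. That is what lets do_devconfig_ioctl() run the busy check and the detach inside one write-side critical section. A hedged sketch of the pattern, with illustrative names, assuming <linux/rwsem.h> and <linux/lockdep.h>:

    struct my_device {
        struct rw_semaphore attach_lock;
        /* ... */
    };

    /* Caller must hold attach_lock for writing. */
    static void my_detach_locked(struct my_device *dev)
    {
        lockdep_assert_held_write(&dev->attach_lock);
        /* ... teardown that must not race with attach/busy checks ... */
    }

    /* Public variant: takes the lock itself. */
    static void my_detach(struct my_device *dev)
    {
        down_write(&dev->attach_lock);
        my_detach_locked(dev);
        up_write(&dev->attach_lock);
    }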
+diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
+index cb93f00bafdbaf..156c1e516cc845 100644
+--- a/drivers/cpufreq/cppc_cpufreq.c
++++ b/drivers/cpufreq/cppc_cpufreq.c
+@@ -816,7 +816,7 @@ static struct freq_attr *cppc_cpufreq_attr[] = {
+ };
+ 
+ static struct cpufreq_driver cppc_cpufreq_driver = {
+-	.flags = CPUFREQ_CONST_LOOPS,
++	.flags = CPUFREQ_CONST_LOOPS | CPUFREQ_NEED_UPDATE_LIMITS,
+ 	.verify = cppc_verify_policy,
+ 	.target = cppc_cpufreq_set_target,
+ 	.get = cppc_cpufreq_get_rate,
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 5c84d56341e2ed..50aff4afb422a2 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -2785,10 +2785,12 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
+ 	pr_debug("starting governor %s failed\n", policy->governor->name);
+ 	if (old_gov) {
+ 		policy->governor = old_gov;
+-		if (cpufreq_init_governor(policy))
++		if (cpufreq_init_governor(policy)) {
+ 			policy->governor = NULL;
+-		else
+-			cpufreq_start_governor(policy);
++		} else if (cpufreq_start_governor(policy)) {
++			cpufreq_exit_governor(policy);
++			policy->governor = NULL;
++		}
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index f9205fe199b890..c1c42c273d3869 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -2656,6 +2656,8 @@ static const struct x86_cpu_id intel_pstate_cpu_ids[] = {
+ 	X86_MATCH(INTEL_TIGERLAKE,		core_funcs),
+ 	X86_MATCH(INTEL_SAPPHIRERAPIDS_X,	core_funcs),
+ 	X86_MATCH(INTEL_EMERALDRAPIDS_X,	core_funcs),
++	X86_MATCH(INTEL_GRANITERAPIDS_D,	core_funcs),
++	X86_MATCH(INTEL_GRANITERAPIDS_X,	core_funcs),
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(x86cpu, intel_pstate_cpu_ids);
+diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
+index 39aa0aea61c662..711517bd43a168 100644
+--- a/drivers/cpuidle/governors/menu.c
++++ b/drivers/cpuidle/governors/menu.c
+@@ -97,6 +97,14 @@ static inline int which_bucket(u64 duration_ns)
+ 
+ static DEFINE_PER_CPU(struct menu_device, menu_devices);
+ 
++static void menu_update_intervals(struct menu_device *data, unsigned int interval_us)
++{
++	/* Update the repeating-pattern data. */
++	data->intervals[data->interval_ptr++] = interval_us;
++	if (data->interval_ptr >= INTERVALS)
++		data->interval_ptr = 0;
++}
++
+ static void menu_update(struct cpuidle_driver *drv, struct cpuidle_device *dev);
+ 
+ /*
+@@ -222,6 +230,14 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 	if (data->needs_update) {
+ 		menu_update(drv, dev);
+ 		data->needs_update = 0;
++	} else if (!dev->last_residency_ns) {
++		/*
++		 * This happens when the driver rejects the previously selected
++		 * idle state and returns an error, so update the recent
++		 * intervals table to prevent invalid information from being
++		 * used going forward.
++		 */
++		menu_update_intervals(data, UINT_MAX);
+ 	}
+ 
+ 	/* Find the shortest expected idle interval. */
+@@ -482,10 +498,7 @@ static void menu_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
+ 
+ 	data->correction_factor[data->bucket] = new_factor;
+ 
+-	/* update the repeating-pattern data */
+-	data->intervals[data->interval_ptr++] = ktime_to_us(measured_ns);
+-	if (data->interval_ptr >= INTERVALS)
+-		data->interval_ptr = 0;
++	menu_update_intervals(data, ktime_to_us(measured_ns));
+ }
+ 
+ /**
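Factoring menu_update_intervals() out, as above, gives the governor a single writer for its fixed-size ring of recent sleep intervals, so the new error path (a driver that rejected the chosen idle state) can poison the history with UINT_MAX through exactly the same code as the normal update. The ring itself is just an index that wraps; a self-contained sketch:

    #define INTERVALS 8    /* matches the governor's window size */

    struct ring {
        unsigned int intervals[INTERVALS];
        int interval_ptr;
    };

    static void ring_push(struct ring *r, unsigned int interval_us)
    {
        r->intervals[r->interval_ptr++] = interval_us;
        if (r->interval_ptr >= INTERVALS)
            r->interval_ptr = 0;    /* overwrite the oldest sample */
    }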
+diff --git a/drivers/crypto/ccp/sp-pci.c b/drivers/crypto/ccp/sp-pci.c
+index 2ebc878da16095..224edaaa737b6c 100644
+--- a/drivers/crypto/ccp/sp-pci.c
++++ b/drivers/crypto/ccp/sp-pci.c
+@@ -451,6 +451,7 @@ static const struct psp_vdata pspv6 = {
+ 	.cmdresp_reg		= 0x10944,	/* C2PMSG_17 */
+ 	.cmdbuff_addr_lo_reg	= 0x10948,	/* C2PMSG_18 */
+ 	.cmdbuff_addr_hi_reg	= 0x1094c,	/* C2PMSG_19 */
++	.bootloader_info_reg	= 0x109ec,	/* C2PMSG_59 */
+ 	.feature_reg            = 0x109fc,	/* C2PMSG_63 */
+ 	.inten_reg              = 0x10510,	/* P2CMSG_INTEN */
+ 	.intsts_reg             = 0x10514,	/* P2CMSG_INTSTS */
+diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+index 61b5e1c5d0192c..1550c3818383ab 100644
+--- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c
++++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+@@ -1491,11 +1491,13 @@ static void hpre_ecdh_cb(struct hpre_ctx *ctx, void *resp)
+ 	if (overtime_thrhld && hpre_is_bd_timeout(req, overtime_thrhld))
+ 		atomic64_inc(&dfx[HPRE_OVER_THRHLD_CNT].value);
+ 
++	/* Do unmap before data processing */
++	hpre_ecdh_hw_data_clr_all(ctx, req, areq->dst, areq->src);
++
+ 	p = sg_virt(areq->dst);
+ 	memmove(p, p + ctx->key_sz - curve_sz, curve_sz);
+ 	memmove(p + curve_sz, p + areq->dst_len - curve_sz, curve_sz);
+ 
+-	hpre_ecdh_hw_data_clr_all(ctx, req, areq->dst, areq->src);
+ 	kpp_request_complete(areq, ret);
+ 
+ 	atomic64_inc(&dfx[HPRE_RECV_CNT].value);
+@@ -1808,9 +1810,11 @@ static void hpre_curve25519_cb(struct hpre_ctx *ctx, void *resp)
+ 	if (overtime_thrhld && hpre_is_bd_timeout(req, overtime_thrhld))
+ 		atomic64_inc(&dfx[HPRE_OVER_THRHLD_CNT].value);
+ 
++	/* Do unmap before data processing */
++	hpre_curve25519_hw_data_clr_all(ctx, req, areq->dst, areq->src);
++
+ 	hpre_key_to_big_end(sg_virt(areq->dst), CURVE25519_KEY_SIZE);
+ 
+-	hpre_curve25519_hw_data_clr_all(ctx, req, areq->dst, areq->src);
+ 	kpp_request_complete(areq, ret);
+ 
+ 	atomic64_inc(&dfx[HPRE_RECV_CNT].value);
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
+index 42c5484ce66a34..3a818ac89295c9 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
+@@ -1494,6 +1494,7 @@ int otx2_cpt_discover_eng_capabilities(struct otx2_cptpf_dev *cptpf)
+ 	dma_addr_t rptr_baddr;
+ 	struct pci_dev *pdev;
+ 	u32 len, compl_rlen;
++	int timeout = 10000;
+ 	int ret, etype;
+ 	void *rptr;
+ 
+@@ -1556,16 +1557,27 @@ int otx2_cpt_discover_eng_capabilities(struct otx2_cptpf_dev *cptpf)
+ 							 etype);
+ 		otx2_cpt_fill_inst(&inst, &iq_cmd, rptr_baddr);
+ 		lfs->ops->send_cmd(&inst, 1, &cptpf->lfs.lf[0]);
++		timeout = 10000;
+ 
+ 		while (lfs->ops->cpt_get_compcode(result) ==
+-						OTX2_CPT_COMPLETION_CODE_INIT)
++						OTX2_CPT_COMPLETION_CODE_INIT) {
+ 			cpu_relax();
++			udelay(1);
++			timeout--;
++			if (!timeout) {
++				ret = -ENODEV;
++				cptpf->is_eng_caps_discovered = false;
++				dev_warn(&pdev->dev, "Timeout on CPT load_fvc completion poll\n");
++				goto error_no_response;
++			}
++		}
+ 
+ 		cptpf->eng_caps[etype].u = be64_to_cpup(rptr);
+ 	}
+-	dma_unmap_single(&pdev->dev, rptr_baddr, len, DMA_BIDIRECTIONAL);
+ 	cptpf->is_eng_caps_discovered = true;
+ 
++error_no_response:
++	dma_unmap_single(&pdev->dev, rptr_baddr, len, DMA_BIDIRECTIONAL);
+ free_result:
+ 	kfree(result);
+ lf_cleanup:
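The OcteonTX2 change above turns an unbounded cpu_relax() spin on a hardware completion code into a bounded poll: roughly 10 ms at one microsecond per iteration, returning -ENODEV and unwinding through the unmap path when the engine never answers. A minimal sketch of the shape; completion_code() and CODE_INIT are illustrative stand-ins for the driver's compcode accessor:

    static int poll_for_completion(const volatile u64 *result)
    {
        int timeout = 10000;    /* ~10 ms at udelay(1) per pass */

        while (completion_code(*result) == CODE_INIT) {    /* illustrative test */
            cpu_relax();
            udelay(1);
            if (!--timeout)
                return -ENODEV;    /* caller unmaps and cleans up */
        }
        return 0;
    }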
+diff --git a/drivers/devfreq/governor_userspace.c b/drivers/devfreq/governor_userspace.c
+index d1aa6806b683ac..175de0c0b50e08 100644
+--- a/drivers/devfreq/governor_userspace.c
++++ b/drivers/devfreq/governor_userspace.c
+@@ -9,6 +9,7 @@
+ #include <linux/slab.h>
+ #include <linux/device.h>
+ #include <linux/devfreq.h>
++#include <linux/kstrtox.h>
+ #include <linux/pm.h>
+ #include <linux/mutex.h>
+ #include <linux/module.h>
+@@ -39,10 +40,13 @@ static ssize_t set_freq_store(struct device *dev, struct device_attribute *attr,
+ 	unsigned long wanted;
+ 	int err = 0;
+ 
++	err = kstrtoul(buf, 0, &wanted);
++	if (err)
++		return err;
++
+ 	mutex_lock(&devfreq->lock);
+ 	data = devfreq->governor_data;
+ 
+-	sscanf(buf, "%lu", &wanted);
+ 	data->user_frequency = wanted;
+ 	data->valid = true;
+ 	err = update_devfreq(devfreq);
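The devfreq change above is worth imitating anywhere a sysfs store parses numbers: kstrtoul() rejects trailing garbage and overflow and fails before the lock is ever taken, whereas the removed sscanf() ignored parse failures and could leave the value uninitialized. A sketch of the shape, assuming <linux/kstrtox.h>:

    static int freq_store_parse(const char *buf, unsigned long *out)
    {
        unsigned long wanted;
        int err;

        err = kstrtoul(buf, 0, &wanted);    /* base 0: "10", "0x16", "022" */
        if (err)
            return err;    /* -EINVAL or -ERANGE; *out untouched */

        *out = wanted;
        return 0;
    }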
+diff --git a/drivers/dma/stm32/stm32-dma.c b/drivers/dma/stm32/stm32-dma.c
+index 917f8e9223739a..0e39f99bce8be8 100644
+--- a/drivers/dma/stm32/stm32-dma.c
++++ b/drivers/dma/stm32/stm32-dma.c
+@@ -744,7 +744,7 @@ static void stm32_dma_handle_chan_done(struct stm32_dma_chan *chan, u32 scr)
+ 		/* cyclic while CIRC/DBM disable => post resume reconfiguration needed */
+ 		if (!(scr & (STM32_DMA_SCR_CIRC | STM32_DMA_SCR_DBM)))
+ 			stm32_dma_post_resume_reconfigure(chan);
+-		else if (scr & STM32_DMA_SCR_DBM)
++		else if (scr & STM32_DMA_SCR_DBM && chan->desc->num_sgs > 2)
+ 			stm32_dma_configure_next_sg(chan);
+ 	} else {
+ 		chan->busy = false;
+diff --git a/drivers/edac/synopsys_edac.c b/drivers/edac/synopsys_edac.c
+index 5ed32a3299c44e..51143b3257de2d 100644
+--- a/drivers/edac/synopsys_edac.c
++++ b/drivers/edac/synopsys_edac.c
+@@ -332,20 +332,26 @@ struct synps_edac_priv {
+ #endif
+ };
+ 
++enum synps_platform_type {
++	ZYNQ,
++	ZYNQMP,
++	SYNPS,
++};
++
+ /**
+  * struct synps_platform_data -  synps platform data structure.
++ * @platform:		Identifies the target hardware platform
+  * @get_error_info:	Get EDAC error info.
+  * @get_mtype:		Get mtype.
+  * @get_dtype:		Get dtype.
+- * @get_ecc_state:	Get ECC state.
+  * @get_mem_info:	Get EDAC memory info
+  * @quirks:		To differentiate IPs.
+  */
+ struct synps_platform_data {
++	enum synps_platform_type platform;
+ 	int (*get_error_info)(struct synps_edac_priv *priv);
+ 	enum mem_type (*get_mtype)(const void __iomem *base);
+ 	enum dev_type (*get_dtype)(const void __iomem *base);
+-	bool (*get_ecc_state)(void __iomem *base);
+ #ifdef CONFIG_EDAC_DEBUG
+ 	u64 (*get_mem_info)(struct synps_edac_priv *priv);
+ #endif
+@@ -720,51 +726,38 @@ static enum dev_type zynqmp_get_dtype(const void __iomem *base)
+ 	return dt;
+ }
+ 
+-/**
+- * zynq_get_ecc_state - Return the controller ECC enable/disable status.
+- * @base:	DDR memory controller base address.
+- *
+- * Get the ECC enable/disable status of the controller.
+- *
+- * Return: true if enabled, otherwise false.
+- */
+-static bool zynq_get_ecc_state(void __iomem *base)
++static bool get_ecc_state(struct synps_edac_priv *priv)
+ {
++	u32 ecctype, clearval;
+ 	enum dev_type dt;
+-	u32 ecctype;
+-
+-	dt = zynq_get_dtype(base);
+-	if (dt == DEV_UNKNOWN)
+-		return false;
+ 
+-	ecctype = readl(base + SCRUB_OFST) & SCRUB_MODE_MASK;
+-	if ((ecctype == SCRUB_MODE_SECDED) && (dt == DEV_X2))
+-		return true;
+-
+-	return false;
+-}
+-
+-/**
+- * zynqmp_get_ecc_state - Return the controller ECC enable/disable status.
+- * @base:	DDR memory controller base address.
+- *
+- * Get the ECC enable/disable status for the controller.
+- *
+- * Return: a ECC status boolean i.e true/false - enabled/disabled.
+- */
+-static bool zynqmp_get_ecc_state(void __iomem *base)
+-{
+-	enum dev_type dt;
+-	u32 ecctype;
+-
+-	dt = zynqmp_get_dtype(base);
+-	if (dt == DEV_UNKNOWN)
+-		return false;
+-
+-	ecctype = readl(base + ECC_CFG0_OFST) & SCRUB_MODE_MASK;
+-	if ((ecctype == SCRUB_MODE_SECDED) &&
+-	    ((dt == DEV_X2) || (dt == DEV_X4) || (dt == DEV_X8)))
+-		return true;
++	if (priv->p_data->platform == ZYNQ) {
++		dt = zynq_get_dtype(priv->baseaddr);
++		if (dt == DEV_UNKNOWN)
++			return false;
++
++		ecctype = readl(priv->baseaddr + SCRUB_OFST) & SCRUB_MODE_MASK;
++		if (ecctype == SCRUB_MODE_SECDED && dt == DEV_X2) {
++			clearval = ECC_CTRL_CLR_CE_ERR | ECC_CTRL_CLR_UE_ERR;
++			writel(clearval, priv->baseaddr + ECC_CTRL_OFST);
++			writel(0x0, priv->baseaddr + ECC_CTRL_OFST);
++			return true;
++		}
++	} else {
++		dt = zynqmp_get_dtype(priv->baseaddr);
++		if (dt == DEV_UNKNOWN)
++			return false;
++
++		ecctype = readl(priv->baseaddr + ECC_CFG0_OFST) & SCRUB_MODE_MASK;
++		if (ecctype == SCRUB_MODE_SECDED &&
++		    (dt == DEV_X2 || dt == DEV_X4 || dt == DEV_X8)) {
++			clearval = readl(priv->baseaddr + ECC_CLR_OFST) |
++			ECC_CTRL_CLR_CE_ERR | ECC_CTRL_CLR_CE_ERRCNT |
++			ECC_CTRL_CLR_UE_ERR | ECC_CTRL_CLR_UE_ERRCNT;
++			writel(clearval, priv->baseaddr + ECC_CLR_OFST);
++			return true;
++		}
++	}
+ 
+ 	return false;
+ }
+@@ -934,18 +927,18 @@ static int setup_irq(struct mem_ctl_info *mci,
+ }
+ 
+ static const struct synps_platform_data zynq_edac_def = {
++	.platform = ZYNQ,
+ 	.get_error_info	= zynq_get_error_info,
+ 	.get_mtype	= zynq_get_mtype,
+ 	.get_dtype	= zynq_get_dtype,
+-	.get_ecc_state	= zynq_get_ecc_state,
+ 	.quirks		= 0,
+ };
+ 
+ static const struct synps_platform_data zynqmp_edac_def = {
++	.platform = ZYNQMP,
+ 	.get_error_info	= zynqmp_get_error_info,
+ 	.get_mtype	= zynqmp_get_mtype,
+ 	.get_dtype	= zynqmp_get_dtype,
+-	.get_ecc_state	= zynqmp_get_ecc_state,
+ #ifdef CONFIG_EDAC_DEBUG
+ 	.get_mem_info	= zynqmp_get_mem_info,
+ #endif
+@@ -957,10 +950,10 @@ static const struct synps_platform_data zynqmp_edac_def = {
+ };
+ 
+ static const struct synps_platform_data synopsys_edac_def = {
++	.platform = SYNPS,
+ 	.get_error_info	= zynqmp_get_error_info,
+ 	.get_mtype	= zynqmp_get_mtype,
+ 	.get_dtype	= zynqmp_get_dtype,
+-	.get_ecc_state	= zynqmp_get_ecc_state,
+ 	.quirks         = (DDR_ECC_INTR_SUPPORT | DDR_ECC_INTR_SELF_CLEAR
+ #ifdef CONFIG_EDAC_DEBUG
+ 			  | DDR_ECC_DATA_POISON_SUPPORT
+@@ -1390,10 +1383,6 @@ static int mc_probe(struct platform_device *pdev)
+ 	if (!p_data)
+ 		return -ENODEV;
+ 
+-	if (!p_data->get_ecc_state(baseaddr)) {
+-		edac_printk(KERN_INFO, EDAC_MC, "ECC not enabled\n");
+-		return -ENXIO;
+-	}
+ 
+ 	layers[0].type = EDAC_MC_LAYER_CHIP_SELECT;
+ 	layers[0].size = SYNPS_EDAC_NR_CSROWS;
+@@ -1413,6 +1402,12 @@ static int mc_probe(struct platform_device *pdev)
+ 	priv = mci->pvt_info;
+ 	priv->baseaddr = baseaddr;
+ 	priv->p_data = p_data;
++	if (!get_ecc_state(priv)) {
++		edac_printk(KERN_INFO, EDAC_MC, "ECC not enabled\n");
++		rc = -ENODEV;
++		goto free_edac_mc;
++	}
++
+ 	spin_lock_init(&priv->reglock);
+ 
+ 	mc_init(mci, pdev);
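The synopsys_edac rework above replaces the per-platform get_ecc_state() callback with a platform enum plus one shared helper, which also lets the probe defer the ECC check until after the mci and private data exist, so the failure path can unwind through free_edac_mc. The dispatch itself reduces to a switch on the enum; an illustrative sketch with stub helpers:

    enum platform_type { PLAT_ZYNQ, PLAT_ZYNQMP, PLAT_SYNPS };

    static bool zynq_ecc_enabled(void *base);      /* illustrative stubs */
    static bool zynqmp_ecc_enabled(void *base);

    static bool ecc_enabled(enum platform_type plat, void *base)
    {
        switch (plat) {
        case PLAT_ZYNQ:
            return zynq_ecc_enabled(base);
        default:
            /* ZynqMP and the generic Synopsys IP share one layout. */
            return zynqmp_ecc_enabled(base);
        }
    }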
+diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c
+index 37eb2e6c2f9f4d..65bf1685350a2a 100644
+--- a/drivers/firmware/arm_ffa/driver.c
++++ b/drivers/firmware/arm_ffa/driver.c
+@@ -2059,7 +2059,7 @@ static int __init ffa_init(void)
+ 	kfree(drv_info);
+ 	return ret;
+ }
+-module_init(ffa_init);
++rootfs_initcall(ffa_init);
+ 
+ static void __exit ffa_exit(void)
+ {
+diff --git a/drivers/firmware/arm_scmi/scmi_power_control.c b/drivers/firmware/arm_scmi/scmi_power_control.c
+index 21f467a9294288..955736336061d2 100644
+--- a/drivers/firmware/arm_scmi/scmi_power_control.c
++++ b/drivers/firmware/arm_scmi/scmi_power_control.c
+@@ -46,6 +46,7 @@
+ #include <linux/math.h>
+ #include <linux/module.h>
+ #include <linux/mutex.h>
++#include <linux/pm.h>
+ #include <linux/printk.h>
+ #include <linux/reboot.h>
+ #include <linux/scmi_protocol.h>
+@@ -324,12 +325,7 @@ static int scmi_userspace_notifier(struct notifier_block *nb,
+ 
+ static void scmi_suspend_work_func(struct work_struct *work)
+ {
+-	struct scmi_syspower_conf *sc =
+-		container_of(work, struct scmi_syspower_conf, suspend_work);
+-
+ 	pm_suspend(PM_SUSPEND_MEM);
+-
+-	sc->state = SCMI_SYSPOWER_IDLE;
+ }
+ 
+ static int scmi_syspower_probe(struct scmi_device *sdev)
+@@ -354,6 +350,7 @@ static int scmi_syspower_probe(struct scmi_device *sdev)
+ 	sc->required_transition = SCMI_SYSTEM_MAX;
+ 	sc->userspace_nb.notifier_call = &scmi_userspace_notifier;
+ 	sc->dev = &sdev->dev;
++	dev_set_drvdata(&sdev->dev, sc);
+ 
+ 	INIT_WORK(&sc->suspend_work, scmi_suspend_work_func);
+ 
+@@ -363,6 +360,18 @@ static int scmi_syspower_probe(struct scmi_device *sdev)
+ 						       NULL, &sc->userspace_nb);
+ }
+ 
++static int scmi_system_power_resume(struct device *dev)
++{
++	struct scmi_syspower_conf *sc = dev_get_drvdata(dev);
++
++	sc->state = SCMI_SYSPOWER_IDLE;
++	return 0;
++}
++
++static const struct dev_pm_ops scmi_system_power_pmops = {
++	SYSTEM_SLEEP_PM_OPS(NULL, scmi_system_power_resume)
++};
++
+ static const struct scmi_device_id scmi_id_table[] = {
+ 	{ SCMI_PROTOCOL_SYSTEM, "syspower" },
+ 	{ },
+@@ -370,6 +379,9 @@ static const struct scmi_device_id scmi_id_table[] = {
+ MODULE_DEVICE_TABLE(scmi, scmi_id_table);
+ 
+ static struct scmi_driver scmi_system_power_driver = {
++	.driver	= {
++		.pm = pm_sleep_ptr(&scmi_system_power_pmops),
++	},
+ 	.name = "scmi-system-power",
+ 	.probe = scmi_syspower_probe,
+ 	.id_table = scmi_id_table,
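Moving the state reset out of the suspend worker, as above, appears to matter because the worker is not a safe place to declare the transition finished; a system-resume callback is the first point where marking the state machine idle again is guaranteed to happen as part of the resume sequence proper. Wiring a resume-only hook takes very little; a sketch assuming <linux/pm.h>, with my_conf and MY_IDLE as illustrative names:

    static int my_syspower_resume(struct device *dev)
    {
        struct my_conf *sc = dev_get_drvdata(dev);    /* set at probe */

        sc->state = MY_IDLE;    /* system fully resumed: accept new requests */
        return 0;
    }

    static const struct dev_pm_ops my_syspower_pmops = {
        SYSTEM_SLEEP_PM_OPS(NULL, my_syspower_resume)    /* no suspend hook */
    };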
+diff --git a/drivers/firmware/tegra/Kconfig b/drivers/firmware/tegra/Kconfig
+index cde1ab8bd9d1cb..91f2320c0d0f89 100644
+--- a/drivers/firmware/tegra/Kconfig
++++ b/drivers/firmware/tegra/Kconfig
+@@ -2,7 +2,7 @@
+ menu "Tegra firmware driver"
+ 
+ config TEGRA_IVC
+-	bool "Tegra IVC protocol"
++	bool "Tegra IVC protocol" if COMPILE_TEST
+ 	depends on ARCH_TEGRA
+ 	help
+ 	  IVC (Inter-VM Communication) protocol is part of the IPC
+@@ -13,8 +13,9 @@ config TEGRA_IVC
+ 
+ config TEGRA_BPMP
+ 	bool "Tegra BPMP driver"
+-	depends on ARCH_TEGRA && TEGRA_HSP_MBOX && TEGRA_IVC
++	depends on ARCH_TEGRA && TEGRA_HSP_MBOX
+ 	depends on !CPU_BIG_ENDIAN
++	select TEGRA_IVC
+ 	help
+ 	  BPMP (Boot and Power Management Processor) is designed to off-loading
+ 	  the PM functions which include clock/DVFS/thermal/power from the CPU.
+diff --git a/drivers/gpio/gpio-loongson-64bit.c b/drivers/gpio/gpio-loongson-64bit.c
+index 286a3876ed0c08..fd8686d8583a63 100644
+--- a/drivers/gpio/gpio-loongson-64bit.c
++++ b/drivers/gpio/gpio-loongson-64bit.c
+@@ -220,6 +220,7 @@ static const struct loongson_gpio_chip_data loongson_gpio_ls2k2000_data0 = {
+ 	.conf_offset = 0x0,
+ 	.in_offset = 0xc,
+ 	.out_offset = 0x8,
++	.inten_offset = 0x14,
+ };
+ 
+ static const struct loongson_gpio_chip_data loongson_gpio_ls2k2000_data1 = {
+@@ -228,6 +229,7 @@ static const struct loongson_gpio_chip_data loongson_gpio_ls2k2000_data1 = {
+ 	.conf_offset = 0x0,
+ 	.in_offset = 0x20,
+ 	.out_offset = 0x10,
++	.inten_offset = 0x30,
+ };
+ 
+ static const struct loongson_gpio_chip_data loongson_gpio_ls2k2000_data2 = {
+@@ -244,6 +246,7 @@ static const struct loongson_gpio_chip_data loongson_gpio_ls3a5000_data = {
+ 	.conf_offset = 0x0,
+ 	.in_offset = 0xc,
+ 	.out_offset = 0x8,
++	.inten_offset = 0x14,
+ };
+ 
+ static const struct loongson_gpio_chip_data loongson_gpio_ls7a_data = {
+@@ -252,6 +255,7 @@ static const struct loongson_gpio_chip_data loongson_gpio_ls7a_data = {
+ 	.conf_offset = 0x800,
+ 	.in_offset = 0xa00,
+ 	.out_offset = 0x900,
++	.inten_offset = 0xb00,
+ };
+ 
+ /* LS7A2000 chipset GPIO */
+@@ -261,6 +265,7 @@ static const struct loongson_gpio_chip_data loongson_gpio_ls7a2000_data0 = {
+ 	.conf_offset = 0x800,
+ 	.in_offset = 0xa00,
+ 	.out_offset = 0x900,
++	.inten_offset = 0xb00,
+ };
+ 
+ /* LS7A2000 ACPI GPIO */
+@@ -279,6 +284,7 @@ static const struct loongson_gpio_chip_data loongson_gpio_ls3a6000_data = {
+ 	.conf_offset = 0x0,
+ 	.in_offset = 0xc,
+ 	.out_offset = 0x8,
++	.inten_offset = 0x14,
+ };
+ 
+ static const struct of_device_id loongson_gpio_of_match[] = {
+diff --git a/drivers/gpio/gpio-mlxbf2.c b/drivers/gpio/gpio-mlxbf2.c
+index 6f3dda6b635fa2..390f2e74a9d819 100644
+--- a/drivers/gpio/gpio-mlxbf2.c
++++ b/drivers/gpio/gpio-mlxbf2.c
+@@ -397,7 +397,7 @@ mlxbf2_gpio_probe(struct platform_device *pdev)
+ 	gc->ngpio = npins;
+ 	gc->owner = THIS_MODULE;
+ 
+-	irq = platform_get_irq(pdev, 0);
++	irq = platform_get_irq_optional(pdev, 0);
+ 	if (irq >= 0) {
+ 		girq = &gs->gc.irq;
+ 		gpio_irq_chip_set_chip(girq, &mlxbf2_gpio_irq_chip);
+diff --git a/drivers/gpio/gpio-mlxbf3.c b/drivers/gpio/gpio-mlxbf3.c
+index 9875e34bde72a4..ed29b07d16c190 100644
+--- a/drivers/gpio/gpio-mlxbf3.c
++++ b/drivers/gpio/gpio-mlxbf3.c
+@@ -190,9 +190,7 @@ static int mlxbf3_gpio_probe(struct platform_device *pdev)
+ 	struct mlxbf3_gpio_context *gs;
+ 	struct gpio_irq_chip *girq;
+ 	struct gpio_chip *gc;
+-	char *colon_ptr;
+ 	int ret, irq;
+-	long num;
+ 
+ 	gs = devm_kzalloc(dev, sizeof(*gs), GFP_KERNEL);
+ 	if (!gs)
+@@ -229,39 +227,25 @@ static int mlxbf3_gpio_probe(struct platform_device *pdev)
+ 	gc->owner = THIS_MODULE;
+ 	gc->add_pin_ranges = mlxbf3_gpio_add_pin_ranges;
+ 
+-	colon_ptr = strchr(dev_name(dev), ':');
+-	if (!colon_ptr) {
+-		dev_err(dev, "invalid device name format\n");
+-		return -EINVAL;
+-	}
+-
+-	ret = kstrtol(++colon_ptr, 16, &num);
+-	if (ret) {
+-		dev_err(dev, "invalid device instance\n");
+-		return ret;
+-	}
+-
+-	if (!num) {
+-		irq = platform_get_irq(pdev, 0);
+-		if (irq >= 0) {
+-			girq = &gs->gc.irq;
+-			gpio_irq_chip_set_chip(girq, &gpio_mlxbf3_irqchip);
+-			girq->default_type = IRQ_TYPE_NONE;
+-			/* This will let us handle the parent IRQ in the driver */
+-			girq->num_parents = 0;
+-			girq->parents = NULL;
+-			girq->parent_handler = NULL;
+-			girq->handler = handle_bad_irq;
+-
+-			/*
+-			 * Directly request the irq here instead of passing
+-			 * a flow-handler because the irq is shared.
+-			 */
+-			ret = devm_request_irq(dev, irq, mlxbf3_gpio_irq_handler,
+-					       IRQF_SHARED, dev_name(dev), gs);
+-			if (ret)
+-				return dev_err_probe(dev, ret, "failed to request IRQ");
+-		}
++	irq = platform_get_irq_optional(pdev, 0);
++	if (irq >= 0) {
++		girq = &gs->gc.irq;
++		gpio_irq_chip_set_chip(girq, &gpio_mlxbf3_irqchip);
++		girq->default_type = IRQ_TYPE_NONE;
++		/* This will let us handle the parent IRQ in the driver */
++		girq->num_parents = 0;
++		girq->parents = NULL;
++		girq->parent_handler = NULL;
++		girq->handler = handle_bad_irq;
++
++		/*
++		 * Directly request the irq here instead of passing
++		 * a flow-handler because the irq is shared.
++		 */
++		ret = devm_request_irq(dev, irq, mlxbf3_gpio_irq_handler,
++				       IRQF_SHARED, dev_name(dev), gs);
++		if (ret)
++			return dev_err_probe(dev, ret, "failed to request IRQ");
+ 	}
+ 
+ 	platform_set_drvdata(pdev, gs);
+diff --git a/drivers/gpio/gpio-tps65912.c b/drivers/gpio/gpio-tps65912.c
+index fab771cb6a87bf..bac757c191c2ea 100644
+--- a/drivers/gpio/gpio-tps65912.c
++++ b/drivers/gpio/gpio-tps65912.c
+@@ -49,10 +49,13 @@ static int tps65912_gpio_direction_output(struct gpio_chip *gc,
+ 					  unsigned offset, int value)
+ {
+ 	struct tps65912_gpio *gpio = gpiochip_get_data(gc);
++	int ret;
+ 
+ 	/* Set the initial value */
+-	regmap_update_bits(gpio->tps->regmap, TPS65912_GPIO1 + offset,
+-			   GPIO_SET_MASK, value ? GPIO_SET_MASK : 0);
++	ret = regmap_update_bits(gpio->tps->regmap, TPS65912_GPIO1 + offset,
++				 GPIO_SET_MASK, value ? GPIO_SET_MASK : 0);
++	if (ret)
++		return ret;
+ 
+ 	return regmap_update_bits(gpio->tps->regmap, TPS65912_GPIO1 + offset,
+ 				  GPIO_CFG_MASK, GPIO_CFG_MASK);
+diff --git a/drivers/gpio/gpio-virtio.c b/drivers/gpio/gpio-virtio.c
+index ac39da17a29bb8..608353feb0f350 100644
+--- a/drivers/gpio/gpio-virtio.c
++++ b/drivers/gpio/gpio-virtio.c
+@@ -526,7 +526,6 @@ static const char **virtio_gpio_get_names(struct virtio_gpio *vgpio,
+ 
+ static int virtio_gpio_probe(struct virtio_device *vdev)
+ {
+-	struct virtio_gpio_config config;
+ 	struct device *dev = &vdev->dev;
+ 	struct virtio_gpio *vgpio;
+ 	struct irq_chip *gpio_irq_chip;
+@@ -539,9 +538,11 @@ static int virtio_gpio_probe(struct virtio_device *vdev)
+ 		return -ENOMEM;
+ 
+ 	/* Read configuration */
+-	virtio_cread_bytes(vdev, 0, &config, sizeof(config));
+-	gpio_names_size = le32_to_cpu(config.gpio_names_size);
+-	ngpio = le16_to_cpu(config.ngpio);
++	gpio_names_size =
++		virtio_cread32(vdev, offsetof(struct virtio_gpio_config,
++					      gpio_names_size));
++	ngpio =  virtio_cread16(vdev, offsetof(struct virtio_gpio_config,
++					       ngpio));
+ 	if (!ngpio) {
+ 		dev_err(dev, "Number of GPIOs can't be zero\n");
+ 		return -EINVAL;
+diff --git a/drivers/gpio/gpio-wcd934x.c b/drivers/gpio/gpio-wcd934x.c
+index 2bba27b13947f1..cfa7b0a50c8e33 100644
+--- a/drivers/gpio/gpio-wcd934x.c
++++ b/drivers/gpio/gpio-wcd934x.c
+@@ -46,9 +46,12 @@ static int wcd_gpio_direction_output(struct gpio_chip *chip, unsigned int pin,
+ 				     int val)
+ {
+ 	struct wcd_gpio_data *data = gpiochip_get_data(chip);
++	int ret;
+ 
+-	regmap_update_bits(data->map, WCD_REG_DIR_CTL_OFFSET,
+-			   WCD_PIN_MASK(pin), WCD_PIN_MASK(pin));
++	ret = regmap_update_bits(data->map, WCD_REG_DIR_CTL_OFFSET,
++				 WCD_PIN_MASK(pin), WCD_PIN_MASK(pin));
++	if (ret)
++		return ret;
+ 
+ 	return regmap_update_bits(data->map, WCD_REG_VAL_CTL_OFFSET,
+ 				  WCD_PIN_MASK(pin),
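The tps65912 and wcd934x hunks apply the same rule from opposite ends of the GPIO tree: a regmap write can fail on the bus, so its return value must be checked before the follow-up write that depends on it. The shape both fixes converge on, with illustrative register names:

    static int gpio_dir_out(struct regmap *map, unsigned int val_reg,
                            unsigned int dir_reg, unsigned int mask, int value)
    {
        int ret;

        ret = regmap_update_bits(map, val_reg, mask, value ? mask : 0);
        if (ret)
            return ret;    /* first write failed: stop and report */

        return regmap_update_bits(map, dir_reg, mask, mask);
    }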
+diff --git a/drivers/gpu/drm/amd/amdgpu/aldebaran.c b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
+index e13fbd97414126..9569dc16dd3dac 100644
+--- a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
++++ b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
+@@ -71,18 +71,29 @@ aldebaran_get_reset_handler(struct amdgpu_reset_control *reset_ctl,
+ 	return NULL;
+ }
+ 
++static inline uint32_t aldebaran_get_ip_block_mask(struct amdgpu_device *adev)
++{
++	uint32_t ip_block_mask = BIT(AMD_IP_BLOCK_TYPE_GFX) |
++				 BIT(AMD_IP_BLOCK_TYPE_SDMA);
++
++	if (adev->aid_mask)
++		ip_block_mask |= BIT(AMD_IP_BLOCK_TYPE_IH);
++
++	return ip_block_mask;
++}
++
+ static int aldebaran_mode2_suspend_ip(struct amdgpu_device *adev)
+ {
++	uint32_t ip_block_mask = aldebaran_get_ip_block_mask(adev);
++	uint32_t ip_block;
+ 	int r, i;
+ 
+ 	amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
+ 	amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
+ 
+ 	for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
+-		if (!(adev->ip_blocks[i].version->type ==
+-			      AMD_IP_BLOCK_TYPE_GFX ||
+-		      adev->ip_blocks[i].version->type ==
+-			      AMD_IP_BLOCK_TYPE_SDMA))
++		ip_block = BIT(adev->ip_blocks[i].version->type);
++		if (!(ip_block_mask & ip_block))
+ 			continue;
+ 
+ 		r = amdgpu_ip_block_suspend(&adev->ip_blocks[i]);
+@@ -200,8 +211,10 @@ aldebaran_mode2_perform_reset(struct amdgpu_reset_control *reset_ctl,
+ static int aldebaran_mode2_restore_ip(struct amdgpu_device *adev)
+ {
+ 	struct amdgpu_firmware_info *ucode_list[AMDGPU_UCODE_ID_MAXIMUM];
++	uint32_t ip_block_mask = aldebaran_get_ip_block_mask(adev);
+ 	struct amdgpu_firmware_info *ucode;
+ 	struct amdgpu_ip_block *cmn_block;
++	struct amdgpu_ip_block *ih_block;
+ 	int ucode_count = 0;
+ 	int i, r;
+ 
+@@ -243,6 +256,18 @@ static int aldebaran_mode2_restore_ip(struct amdgpu_device *adev)
+ 	if (r)
+ 		return r;
+ 
++	if (ip_block_mask & BIT(AMD_IP_BLOCK_TYPE_IH)) {
++		ih_block = amdgpu_device_ip_get_ip_block(adev,
++							 AMD_IP_BLOCK_TYPE_IH);
++		if (unlikely(!ih_block)) {
++			dev_err(adev->dev, "Failed to get IH handle\n");
++			return -EINVAL;
++		}
++		r = amdgpu_ip_block_resume(ih_block);
++		if (r)
++			return r;
++	}
++
+ 	/* Reinit GFXHUB */
+ 	adev->gfxhub.funcs->init(adev);
+ 	r = adev->gfxhub.funcs->gart_enable(adev);
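aldebaran_get_ip_block_mask() above swaps a chain of type comparisons for a bitmask, so making the IH block conditional on adev->aid_mask costs one OR at mask-build time while the suspend loop stays a single membership test. The idiom in isolation:

    #include <stdbool.h>
    #include <stdint.h>

    #define BIT(n)    (1u << (n))

    enum ip_block { IP_GFX, IP_SDMA, IP_IH };

    static bool selected(uint32_t mask, enum ip_block b)
    {
        return mask & BIT(b);    /* same cost for any number of blocks */
    }

    /* Building the mask:
     *   uint32_t mask = BIT(IP_GFX) | BIT(IP_SDMA);
     *   if (have_aid)
     *       mask |= BIT(IP_IH);
     */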
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cper.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cper.c
+index 360e07a5c7c1fb..1a1f30654b1478 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cper.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cper.c
+@@ -212,7 +212,7 @@ int amdgpu_cper_entry_fill_bad_page_threshold_section(struct amdgpu_device *adev
+ 		   NONSTD_SEC_OFFSET(hdr->sec_cnt, idx));
+ 
+ 	amdgpu_cper_entry_fill_section_desc(adev, section_desc, true, false,
+-					    CPER_SEV_NUM, RUNTIME, NONSTD_SEC_LEN,
++					    CPER_SEV_FATAL, RUNTIME, NONSTD_SEC_LEN,
+ 					    NONSTD_SEC_OFFSET(hdr->sec_cnt, idx));
+ 
+ 	section->hdr.valid_bits.err_info_cnt = 1;
+@@ -326,7 +326,9 @@ int amdgpu_cper_generate_bp_threshold_record(struct amdgpu_device *adev)
+ 		return -ENOMEM;
+ 	}
+ 
+-	amdgpu_cper_entry_fill_hdr(adev, bp_threshold, AMDGPU_CPER_TYPE_BP_THRESHOLD, CPER_SEV_NUM);
++	amdgpu_cper_entry_fill_hdr(adev, bp_threshold,
++				   AMDGPU_CPER_TYPE_BP_THRESHOLD,
++				   CPER_SEV_FATAL);
+ 	ret = amdgpu_cper_entry_fill_bad_page_threshold_section(adev, bp_threshold, 0);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
+index 02138aa557935e..dfb6cfd8376069 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
+@@ -88,8 +88,8 @@ int amdgpu_map_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ 	}
+ 
+ 	r = amdgpu_vm_bo_map(adev, *bo_va, csa_addr, 0, size,
+-			     AMDGPU_PTE_READABLE | AMDGPU_PTE_WRITEABLE |
+-			     AMDGPU_PTE_EXECUTABLE);
++			     AMDGPU_VM_PAGE_READABLE | AMDGPU_VM_PAGE_WRITEABLE |
++			     AMDGPU_VM_PAGE_EXECUTABLE);
+ 
+ 	if (r) {
+ 		DRM_ERROR("failed to do bo_map on static CSA, err=%d\n", r);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
+index e979a6086178c7..1a7ec674f57949 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
+@@ -475,6 +475,8 @@ int amdgpu_ras_eeprom_reset_table(struct amdgpu_ras_eeprom_control *control)
+ 
+ 	control->ras_num_recs = 0;
+ 	control->ras_num_bad_pages = 0;
++	control->ras_num_mca_recs = 0;
++	control->ras_num_pa_recs = 0;
+ 	control->ras_fri = 0;
+ 
+ 	amdgpu_dpm_send_hbm_bad_pages_num(adev, control->ras_num_bad_pages);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+index 07c936e90d8e40..78f9e86ccc0990 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+@@ -648,9 +648,8 @@ static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man,
+ 	list_for_each_entry(block, &vres->blocks, link)
+ 		vis_usage += amdgpu_vram_mgr_vis_size(adev, block);
+ 
+-	amdgpu_vram_mgr_do_reserve(man);
+-
+ 	drm_buddy_free_list(mm, &vres->blocks, vres->flags);
++	amdgpu_vram_mgr_do_reserve(man);
+ 	mutex_unlock(&mgr->lock);
+ 
+ 	atomic64_sub(vis_usage, &mgr->vis_usage);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index f6d71bf7c89c20..75e7af5c5a3e32 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2998,6 +2998,19 @@ static void hpd_rx_irq_work_suspend(struct amdgpu_display_manager *dm)
+ 	}
+ }
+ 
++static int dm_cache_state(struct amdgpu_device *adev)
++{
++	int r;
++
++	adev->dm.cached_state = drm_atomic_helper_suspend(adev_to_drm(adev));
++	if (IS_ERR(adev->dm.cached_state)) {
++		r = PTR_ERR(adev->dm.cached_state);
++		adev->dm.cached_state = NULL;
++	}
++
++	return adev->dm.cached_state ? 0 : r;
++}
++
+ static int dm_prepare_suspend(struct amdgpu_ip_block *ip_block)
+ {
+ 	struct amdgpu_device *adev = ip_block->adev;
+@@ -3006,11 +3019,8 @@ static int dm_prepare_suspend(struct amdgpu_ip_block *ip_block)
+ 		return 0;
+ 
+ 	WARN_ON(adev->dm.cached_state);
+-	adev->dm.cached_state = drm_atomic_helper_suspend(adev_to_drm(adev));
+-	if (IS_ERR(adev->dm.cached_state))
+-		return PTR_ERR(adev->dm.cached_state);
+ 
+-	return 0;
++	return dm_cache_state(adev);
+ }
+ 
+ static int dm_suspend(struct amdgpu_ip_block *ip_block)
+@@ -3044,9 +3054,10 @@ static int dm_suspend(struct amdgpu_ip_block *ip_block)
+ 	}
+ 
+ 	if (!adev->dm.cached_state) {
+-		adev->dm.cached_state = drm_atomic_helper_suspend(adev_to_drm(adev));
+-		if (IS_ERR(adev->dm.cached_state))
+-			return PTR_ERR(adev->dm.cached_state);
++		int r = dm_cache_state(adev);
++
++		if (r)
++			return r;
+ 	}
+ 
+ 	s3_handle_hdmi_cec(adev_to_drm(adev), true);
+@@ -5298,7 +5309,8 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
+ 
+ static void amdgpu_dm_destroy_drm_device(struct amdgpu_display_manager *dm)
+ {
+-	drm_atomic_private_obj_fini(&dm->atomic_obj);
++	if (dm->atomic_obj.state)
++		drm_atomic_private_obj_fini(&dm->atomic_obj);
+ }
+ 
+ /******************************************************************************
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+index e140b7a04d7246..d63038ec4ec70c 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+@@ -127,8 +127,10 @@ bool amdgpu_dm_link_setup_psr(struct dc_stream_state *stream)
+ 		psr_config.allow_multi_disp_optimizations =
+ 			(amdgpu_dc_feature_mask & DC_PSR_ALLOW_MULTI_DISP_OPT);
+ 
+-		if (!psr_su_set_dsc_slice_height(dc, link, stream, &psr_config))
+-			return false;
++		if (link->psr_settings.psr_version == DC_PSR_VERSION_SU_1) {
++			if (!psr_su_set_dsc_slice_height(dc, link, stream, &psr_config))
++				return false;
++		}
+ 
+ 		ret = dc_link_setup_psr(link, stream, &psr_config, &psr_context);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 40561c4deb3c9d..e4cb5e3d34d4a9 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -5335,8 +5335,8 @@ bool dc_update_planes_and_stream(struct dc *dc,
+ 	else
+ 		ret = update_planes_and_stream_v2(dc, srf_updates,
+ 			surface_count, stream, stream_update);
+-
+-	if (ret)
++	if (ret && (dc->ctx->dce_version >= DCN_VERSION_3_2 ||
++		dc->ctx->dce_version == DCN_VERSION_3_01))
+ 		clear_update_flags(srf_updates, surface_count, stream);
+ 
+ 	return ret;
+@@ -5367,7 +5367,7 @@ void dc_commit_updates_for_stream(struct dc *dc,
+ 		ret = update_planes_and_stream_v1(dc, srf_updates, surface_count, stream,
+ 				stream_update, state);
+ 
+-	if (ret)
++	if (ret && dc->ctx->dce_version >= DCN_VERSION_3_2)
+ 		clear_update_flags(srf_updates, surface_count, stream);
+ }
+ 
+@@ -6266,11 +6266,13 @@ struct dc_power_profile dc_get_power_profile_for_dc_state(const struct dc_state
+  */
+ bool dc_get_host_router_index(const struct dc_link *link, unsigned int *host_router_index)
+ {
+-	struct dc *dc = link->ctx->dc;
++	struct dc *dc;
+ 
+-	if (link->ep_type != DISPLAY_ENDPOINT_USB4_DPIA)
++	if (!link || !host_router_index || link->ep_type != DISPLAY_ENDPOINT_USB4_DPIA)
+ 		return false;
+ 
++	dc = link->ctx->dc;
++
+ 	if (link->link_index < dc->lowest_dpia_link_index)
+ 		return false;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index 846c9c51f2d981..977ec7c1c21287 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -273,14 +273,13 @@ void dcn20_setup_gsl_group_as_lock(
+ 	}
+ 
+ 	/* at this point we want to program whether it's to enable or disable */
+-	if (pipe_ctx->stream_res.tg->funcs->set_gsl != NULL &&
+-		pipe_ctx->stream_res.tg->funcs->set_gsl_source_select != NULL) {
++	if (pipe_ctx->stream_res.tg->funcs->set_gsl != NULL) {
+ 		pipe_ctx->stream_res.tg->funcs->set_gsl(
+ 			pipe_ctx->stream_res.tg,
+ 			&gsl);
+-
+-		pipe_ctx->stream_res.tg->funcs->set_gsl_source_select(
+-			pipe_ctx->stream_res.tg, group_idx,	enable ? 4 : 0);
++		if (pipe_ctx->stream_res.tg->funcs->set_gsl_source_select != NULL)
++			pipe_ctx->stream_res.tg->funcs->set_gsl_source_select(
++				pipe_ctx->stream_res.tg, group_idx, enable ? 4 : 0);
+ 	} else
+ 		BREAK_TO_DEBUGGER();
+ }
+@@ -946,7 +945,7 @@ enum dc_status dcn20_enable_stream_timing(
+ 		return DC_ERROR_UNEXPECTED;
+ 	}
+ 
+-	hws->funcs.wait_for_blank_complete(pipe_ctx->stream_res.opp);
++	fsleep(stream->timing.v_total * (stream->timing.h_total * 10000u / stream->timing.pix_clk_100hz));
+ 
+ 	params.vertical_total_min = stream->adjust.v_total_min;
+ 	params.vertical_total_max = stream->adjust.v_total_max;
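The dcn20 hunk replaces the blank-complete wait with a sleep of one full frame computed from the stream timing: pix_clk_100hz is the pixel clock in 100 Hz units, so h_total * 10000 / pix_clk_100hz is the line time in microseconds and multiplying by v_total gives the frame. Worked through for a common 1080p60 timing (standalone C):

    #include <stdio.h>

    int main(void)
    {
        unsigned int h_total = 2200, v_total = 1125;    /* CEA 1080p60 */
        unsigned int pix_clk_100hz = 1485000;           /* 148.5 MHz */

        /* per-line division truncates: 14 us here (exact: ~14.81 us),
         * so frame_us comes out 15750 against an exact ~16667 us */
        unsigned int frame_us = v_total * (h_total * 10000u / pix_clk_100hz);

        printf("sleep %u us per frame (exact: ~16667 us)\n", frame_us);
        return 0;
    }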
+diff --git a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
+index 53c961f86d43c0..2c1dcde5e3eacb 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
++++ b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
+@@ -140,7 +140,8 @@ void link_blank_dp_stream(struct dc_link *link, bool hw_init)
+ 				}
+ 		}
+ 
+-		if ((!link->wa_flags.dp_keep_receiver_powered) || hw_init)
++		if (((!link->wa_flags.dp_keep_receiver_powered) || hw_init) &&
++			(link->type != dc_connection_none))
+ 			dpcd_write_rx_power_ctrl(link, false);
+ 	}
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/mpc/dcn401/dcn401_mpc.c b/drivers/gpu/drm/amd/display/dc/mpc/dcn401/dcn401_mpc.c
+index ad67197557ca1b..63fb6777c1fdbd 100644
+--- a/drivers/gpu/drm/amd/display/dc/mpc/dcn401/dcn401_mpc.c
++++ b/drivers/gpu/drm/amd/display/dc/mpc/dcn401/dcn401_mpc.c
+@@ -571,7 +571,7 @@ void mpc401_get_gamut_remap(struct mpc *mpc,
+ 	struct mpc_grph_gamut_adjustment *adjust)
+ {
+ 	uint16_t arr_reg_val[12] = {0};
+-	uint32_t mode_select;
++	uint32_t mode_select = MPCC_GAMUT_REMAP_MODE_SELECT_0;
+ 
+ 	read_gamut_remap(mpc, mpcc_id, arr_reg_val, adjust->mpcc_gamut_remap_block_id, &mode_select);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
+index f47cd281d6e7e5..7aab08151e72c5 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
+@@ -926,6 +926,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ 	.seamless_boot_odm_combine = true,
+ 	.enable_legacy_fast_update = true,
+ 	.using_dml2 = false,
++	.disable_dsc_power_gate = true,
+ };
+ 
+ static const struct dc_panel_config panel_config_defaults = {
+diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn35.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn35.c
+index 72a0f078cd1a58..2884977a3dd2f0 100644
+--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn35.c
++++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn35.c
+@@ -92,19 +92,15 @@ void dmub_dcn35_reset(struct dmub_srv *dmub)
+ 	uint32_t in_reset, is_enabled, scratch, i, pwait_mode;
+ 
+ 	REG_GET(DMCUB_CNTL2, DMCUB_SOFT_RESET, &in_reset);
++	REG_GET(DMCUB_CNTL, DMCUB_ENABLE, &is_enabled);
+ 
+-	if (in_reset == 0) {
++	if (in_reset == 0 && is_enabled != 0) {
+ 		cmd.bits.status = 1;
+ 		cmd.bits.command_code = DMUB_GPINT__STOP_FW;
+ 		cmd.bits.param = 0;
+ 
+ 		dmub->hw_funcs.set_gpint(dmub, cmd);
+ 
+-		/**
+-		 * Timeout covers both the ACK and the wait
+-		 * for remaining work to finish.
+-		 */
+-
+ 		for (i = 0; i < timeout; ++i) {
+ 			if (dmub->hw_funcs.is_gpint_acked(dmub, cmd))
+ 				break;
+@@ -130,11 +126,9 @@ void dmub_dcn35_reset(struct dmub_srv *dmub)
+ 		/* Force reset in case we timed out, DMCUB is likely hung. */
+ 	}
+ 
+-	REG_GET(DMCUB_CNTL, DMCUB_ENABLE, &is_enabled);
+-
+ 	if (is_enabled) {
+ 		REG_UPDATE(DMCUB_CNTL2, DMCUB_SOFT_RESET, 1);
+-		REG_UPDATE(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET, 1);
++		udelay(1);
+ 		REG_UPDATE(DMCUB_CNTL, DMCUB_ENABLE, 0);
+ 	}
+ 
+@@ -160,11 +154,7 @@ void dmub_dcn35_reset_release(struct dmub_srv *dmub)
+ 		     LONO_SOCCLK_GATE_DISABLE, 1,
+ 		     LONO_DMCUBCLK_GATE_DISABLE, 1);
+ 
+-	REG_UPDATE(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET, 1);
+-	udelay(1);
+ 	REG_UPDATE_2(DMCUB_CNTL, DMCUB_ENABLE, 1, DMCUB_TRACEPORT_EN, 1);
+-	REG_UPDATE(DMCUB_CNTL2, DMCUB_SOFT_RESET, 1);
+-	udelay(1);
+ 	REG_UPDATE(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET, 0);
+ 	REG_UPDATE(DMCUB_CNTL2, DMCUB_SOFT_RESET, 0);
+ }
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index 922def51685b0a..21da5123d95e83 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -1398,6 +1398,8 @@ static ssize_t amdgpu_set_pp_power_profile_mode(struct device *dev,
+ 			if (ret)
+ 				return -EINVAL;
+ 			parameter_size++;
++			if (!tmp_str)
++				break;
+ 			while (isspace(*tmp_str))
+ 				tmp_str++;
+ 		}
+@@ -3618,6 +3620,9 @@ static int parse_input_od_command_lines(const char *buf,
+ 			return -EINVAL;
+ 		parameter_size++;
+ 
++		if (!tmp_str)
++			break;
++
+ 		while (isspace(*tmp_str))
+ 			tmp_str++;
+ 	}
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+index a55ea76d739969..2c9869feba610f 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+@@ -666,7 +666,6 @@ static int vangogh_print_clk_levels(struct smu_context *smu,
+ {
+ 	DpmClocks_t *clk_table = smu->smu_table.clocks_table;
+ 	SmuMetrics_t metrics;
+-	struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
+ 	int i, idx, size = 0, ret = 0;
+ 	uint32_t cur_value = 0, value = 0, count = 0;
+ 	bool cur_value_match_level = false;
+@@ -682,31 +681,25 @@ static int vangogh_print_clk_levels(struct smu_context *smu,
+ 
+ 	switch (clk_type) {
+ 	case SMU_OD_SCLK:
+-		if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL) {
+-			size += sysfs_emit_at(buf, size, "%s:\n", "OD_SCLK");
+-			size += sysfs_emit_at(buf, size, "0: %10uMhz\n",
+-			(smu->gfx_actual_hard_min_freq > 0) ? smu->gfx_actual_hard_min_freq : smu->gfx_default_hard_min_freq);
+-			size += sysfs_emit_at(buf, size, "1: %10uMhz\n",
+-			(smu->gfx_actual_soft_max_freq > 0) ? smu->gfx_actual_soft_max_freq : smu->gfx_default_soft_max_freq);
+-		}
++		size += sysfs_emit_at(buf, size, "%s:\n", "OD_SCLK");
++		size += sysfs_emit_at(buf, size, "0: %10uMhz\n",
++		(smu->gfx_actual_hard_min_freq > 0) ? smu->gfx_actual_hard_min_freq : smu->gfx_default_hard_min_freq);
++		size += sysfs_emit_at(buf, size, "1: %10uMhz\n",
++		(smu->gfx_actual_soft_max_freq > 0) ? smu->gfx_actual_soft_max_freq : smu->gfx_default_soft_max_freq);
+ 		break;
+ 	case SMU_OD_CCLK:
+-		if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL) {
+-			size += sysfs_emit_at(buf, size, "CCLK_RANGE in Core%d:\n",  smu->cpu_core_id_select);
+-			size += sysfs_emit_at(buf, size, "0: %10uMhz\n",
+-			(smu->cpu_actual_soft_min_freq > 0) ? smu->cpu_actual_soft_min_freq : smu->cpu_default_soft_min_freq);
+-			size += sysfs_emit_at(buf, size, "1: %10uMhz\n",
+-			(smu->cpu_actual_soft_max_freq > 0) ? smu->cpu_actual_soft_max_freq : smu->cpu_default_soft_max_freq);
+-		}
++		size += sysfs_emit_at(buf, size, "CCLK_RANGE in Core%d:\n",  smu->cpu_core_id_select);
++		size += sysfs_emit_at(buf, size, "0: %10uMhz\n",
++		(smu->cpu_actual_soft_min_freq > 0) ? smu->cpu_actual_soft_min_freq : smu->cpu_default_soft_min_freq);
++		size += sysfs_emit_at(buf, size, "1: %10uMhz\n",
++		(smu->cpu_actual_soft_max_freq > 0) ? smu->cpu_actual_soft_max_freq : smu->cpu_default_soft_max_freq);
+ 		break;
+ 	case SMU_OD_RANGE:
+-		if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL) {
+-			size += sysfs_emit_at(buf, size, "%s:\n", "OD_RANGE");
+-			size += sysfs_emit_at(buf, size, "SCLK: %7uMhz %10uMhz\n",
+-				smu->gfx_default_hard_min_freq, smu->gfx_default_soft_max_freq);
+-			size += sysfs_emit_at(buf, size, "CCLK: %7uMhz %10uMhz\n",
+-				smu->cpu_default_soft_min_freq, smu->cpu_default_soft_max_freq);
+-		}
++		size += sysfs_emit_at(buf, size, "%s:\n", "OD_RANGE");
++		size += sysfs_emit_at(buf, size, "SCLK: %7uMhz %10uMhz\n",
++			smu->gfx_default_hard_min_freq, smu->gfx_default_soft_max_freq);
++		size += sysfs_emit_at(buf, size, "CCLK: %7uMhz %10uMhz\n",
++			smu->cpu_default_soft_min_freq, smu->cpu_default_soft_max_freq);
+ 		break;
+ 	case SMU_SOCCLK:
+ 		/* the level 3 ~ 6 of socclk use the same frequency for vangogh */
+diff --git a/drivers/gpu/drm/clients/drm_client_setup.c b/drivers/gpu/drm/clients/drm_client_setup.c
+index e17265039ca800..e460ad354de281 100644
+--- a/drivers/gpu/drm/clients/drm_client_setup.c
++++ b/drivers/gpu/drm/clients/drm_client_setup.c
+@@ -2,6 +2,7 @@
+ 
+ #include <drm/clients/drm_client_setup.h>
+ #include <drm/drm_device.h>
++#include <drm/drm_drv.h>
+ #include <drm/drm_fourcc.h>
+ #include <drm/drm_print.h>
+ 
+@@ -31,6 +32,10 @@ MODULE_PARM_DESC(active,
+  */
+ void drm_client_setup(struct drm_device *dev, const struct drm_format_info *format)
+ {
++	if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
++		drm_dbg(dev, "driver does not support mode-setting, skipping DRM clients\n");
++		return;
++	}
+ 
+ #ifdef CONFIG_DRM_FBDEV_EMULATION
+ 	if (!strcmp(drm_client_default, "fbdev")) {
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
+index 4e938bad808cc6..ccefdcb833ddc9 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.c
++++ b/drivers/gpu/drm/i915/display/intel_psr.c
+@@ -3171,7 +3171,9 @@ static void intel_psr_configure_full_frame_update(struct intel_dp *intel_dp)
+ 
+ static void _psr_invalidate_handle(struct intel_dp *intel_dp)
+ {
+-	if (intel_dp->psr.psr2_sel_fetch_enabled) {
++	struct intel_display *display = to_intel_display(intel_dp);
++
++	if (DISPLAY_VER(display) < 20 && intel_dp->psr.psr2_sel_fetch_enabled) {
+ 		if (!intel_dp->psr.psr2_sel_fetch_cff_enabled) {
+ 			intel_dp->psr.psr2_sel_fetch_cff_enabled = true;
+ 			intel_psr_configure_full_frame_update(intel_dp);
+@@ -3259,7 +3261,7 @@ static void _psr_flush_handle(struct intel_dp *intel_dp)
+ 	struct intel_display *display = to_intel_display(intel_dp);
+ 	struct drm_i915_private *dev_priv = to_i915(display->drm);
+ 
+-	if (intel_dp->psr.psr2_sel_fetch_enabled) {
++	if (DISPLAY_VER(display) < 20 && intel_dp->psr.psr2_sel_fetch_enabled) {
+ 		if (intel_dp->psr.psr2_sel_fetch_cff_enabled) {
+ 			/* can we turn CFF off? */
+ 			if (intel_dp->psr.busy_frontbuffer_bits == 0)
+@@ -3276,11 +3278,13 @@ static void _psr_flush_handle(struct intel_dp *intel_dp)
+ 		 * existing SU configuration
+ 		 */
+ 		intel_psr_configure_full_frame_update(intel_dp);
+-	}
+ 
+-	intel_psr_force_update(intel_dp);
++		intel_psr_force_update(intel_dp);
++	} else {
++		intel_psr_exit(intel_dp);
++	}
+ 
+-	if (!intel_dp->psr.psr2_sel_fetch_enabled && !intel_dp->psr.active &&
++	if ((!intel_dp->psr.psr2_sel_fetch_enabled || DISPLAY_VER(display) >= 20) &&
+ 	    !intel_dp->psr.busy_frontbuffer_bits)
+ 		queue_work(dev_priv->unordered_wq, &intel_dp->psr.work);
+ }
+diff --git a/drivers/gpu/drm/imagination/pvr_power.c b/drivers/gpu/drm/imagination/pvr_power.c
+index 850b318605da4c..d97613c6a0a9ba 100644
+--- a/drivers/gpu/drm/imagination/pvr_power.c
++++ b/drivers/gpu/drm/imagination/pvr_power.c
+@@ -317,6 +317,63 @@ pvr_power_device_idle(struct device *dev)
+ 	return pvr_power_is_idle(pvr_dev) ? 0 : -EBUSY;
+ }
+ 
++static int
++pvr_power_clear_error(struct pvr_device *pvr_dev)
++{
++	struct device *dev = from_pvr_device(pvr_dev)->dev;
++	int err;
++
++	/* Ensure the device state is known and nothing is happening past this point */
++	pm_runtime_disable(dev);
++
++	/* Attempt to clear the runtime PM error by setting the current state again */
++	if (pm_runtime_status_suspended(dev))
++		err = pm_runtime_set_suspended(dev);
++	else
++		err = pm_runtime_set_active(dev);
++
++	if (err) {
++		drm_err(from_pvr_device(pvr_dev),
++			"%s: Failed to clear runtime PM error (new error %d)\n",
++			__func__, err);
++	}
++
++	pm_runtime_enable(dev);
++
++	return err;
++}
++
++/**
++ * pvr_power_get_clear() - Acquire a power reference, correcting any errors
++ * @pvr_dev: Device pointer
++ *
++ * Attempt to acquire a power reference on the device. If the runtime PM
++ * is in error state, attempt to clear the error and retry.
++ *
++ * Returns:
++ *  * 0 on success, or
++ *  * Any error code returned by pvr_power_get() or the runtime PM API.
++ */
++static int
++pvr_power_get_clear(struct pvr_device *pvr_dev)
++{
++	int err;
++
++	err = pvr_power_get(pvr_dev);
++	if (err == 0)
++		return err;
++
++	drm_warn(from_pvr_device(pvr_dev),
++		 "%s: pvr_power_get returned error %d, attempting recovery\n",
++		 __func__, err);
++
++	err = pvr_power_clear_error(pvr_dev);
++	if (err)
++		return err;
++
++	return pvr_power_get(pvr_dev);
++}
++
+ /**
+  * pvr_power_reset() - Reset the GPU
+  * @pvr_dev: Device pointer
+@@ -341,7 +398,7 @@ pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
+ 	 * Take a power reference during the reset. This should prevent any interference with the
+ 	 * power state during reset.
+ 	 */
+-	WARN_ON(pvr_power_get(pvr_dev));
++	WARN_ON(pvr_power_get_clear(pvr_dev));
+ 
+ 	down_write(&pvr_dev->reset_sem);
+ 
+diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
+index 5df20cbeafb8bf..12de9314f6cd0d 100644
+--- a/drivers/gpu/drm/msm/Makefile
++++ b/drivers/gpu/drm/msm/Makefile
+@@ -196,6 +196,11 @@ ADRENO_HEADERS = \
+ 	generated/a4xx.xml.h \
+ 	generated/a5xx.xml.h \
+ 	generated/a6xx.xml.h \
++	generated/a6xx_descriptors.xml.h \
++	generated/a6xx_enums.xml.h \
++	generated/a6xx_perfcntrs.xml.h \
++	generated/a7xx_enums.xml.h \
++	generated/a7xx_perfcntrs.xml.h \
+ 	generated/a6xx_gmu.xml.h \
+ 	generated/adreno_common.xml.h \
+ 	generated/adreno_pm4.xml.h \
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_catalog.c b/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
+index 53e2ff4406d8f0..b975809efed4f0 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
+@@ -1343,7 +1343,7 @@ static const uint32_t a7xx_pwrup_reglist_regs[] = {
+ 	REG_A6XX_RB_NC_MODE_CNTL,
+ 	REG_A6XX_RB_CMP_DBG_ECO_CNTL,
+ 	REG_A7XX_GRAS_NC_MODE_CNTL,
+-	REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE,
++	REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE_ENABLE,
+ 	REG_A6XX_UCHE_GBIF_GX_CONFIG,
+ 	REG_A6XX_UCHE_CLIENT_PF,
+ 	REG_A6XX_TPL1_DBG_ECO_CNTL1,
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+index 9201a53dd341bf..6e71f617fc3d0d 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+@@ -6,6 +6,10 @@
+ 
+ 
+ #include "adreno_gpu.h"
++#include "a6xx_enums.xml.h"
++#include "a7xx_enums.xml.h"
++#include "a6xx_perfcntrs.xml.h"
++#include "a7xx_perfcntrs.xml.h"
+ #include "a6xx.xml.h"
+ 
+ #include "a6xx_gmu.h"
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+index 341a72a6740182..a85d3df7a5facf 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+@@ -158,7 +158,7 @@ static int a6xx_crashdumper_run(struct msm_gpu *gpu,
+ 	/* Make sure all pending memory writes are posted */
+ 	wmb();
+ 
+-	gpu_write64(gpu, REG_A6XX_CP_CRASH_SCRIPT_BASE, dumper->iova);
++	gpu_write64(gpu, REG_A6XX_CP_CRASH_DUMP_SCRIPT_BASE, dumper->iova);
+ 
+ 	gpu_write(gpu, REG_A6XX_CP_CRASH_DUMP_CNTL, 1);
+ 
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h
+index e545106c70be71..95d93ac6812a4d 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h
+@@ -212,7 +212,7 @@ static const struct a6xx_shader_block {
+ 	SHADER(A6XX_SP_LB_5_DATA, 0x200),
+ 	SHADER(A6XX_SP_CB_BINDLESS_DATA, 0x800),
+ 	SHADER(A6XX_SP_CB_LEGACY_DATA, 0x280),
+-	SHADER(A6XX_SP_UAV_DATA, 0x80),
++	SHADER(A6XX_SP_GFX_UAV_BASE_DATA, 0x80),
+ 	SHADER(A6XX_SP_INST_TAG, 0x80),
+ 	SHADER(A6XX_SP_CB_BINDLESS_TAG, 0x80),
+ 	SHADER(A6XX_SP_TMO_UMO_TAG, 0x80),
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
+index 9b5e27d2373c68..43f58cafda7033 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
+@@ -209,7 +209,7 @@ void a6xx_preempt_hw_init(struct msm_gpu *gpu)
+ 	gpu_write64(gpu, REG_A6XX_CP_CONTEXT_SWITCH_SMMU_INFO, 0);
+ 
+ 	/* Enable the GMEM save/restore feature for preemption */
+-	gpu_write(gpu, REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE, 0x1);
++	gpu_write(gpu, REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE_ENABLE, 0x1);
+ 
+ 	/* Reset the preemption state */
+ 	set_preempt_state(a6xx_gpu, PREEMPT_NONE);
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h b/drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h
+index 9a327d543f27de..e02cabb39f194c 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h
++++ b/drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h
+@@ -1311,8 +1311,8 @@ static struct a6xx_indexed_registers gen7_9_0_cp_indexed_reg_list[] = {
+ 		REG_A7XX_CP_BV_SQE_UCODE_DBG_DATA, 0x08000},
+ 	{ "CP_BV_SQE_STAT_ADDR", REG_A7XX_CP_BV_SQE_STAT_ADDR,
+ 		REG_A7XX_CP_BV_SQE_STAT_DATA, 0x00040},
+-	{ "CP_RESOURCE_TBL", REG_A7XX_CP_RESOURCE_TBL_DBG_ADDR,
+-		REG_A7XX_CP_RESOURCE_TBL_DBG_DATA, 0x04100},
++	{ "CP_RESOURCE_TBL", REG_A7XX_CP_RESOURCE_TABLE_DBG_ADDR,
++		REG_A7XX_CP_RESOURCE_TABLE_DBG_DATA, 0x04100},
+ 	{ "CP_LPAC_DRAW_STATE_ADDR", REG_A7XX_CP_LPAC_DRAW_STATE_ADDR,
+ 		REG_A7XX_CP_LPAC_DRAW_STATE_DATA, 0x00200},
+ 	{ "CP_LPAC_ROQ", REG_A7XX_CP_LPAC_ROQ_DBG_ADDR,
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index c3588dc9e53764..b4e73b015d02a7 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -551,6 +551,7 @@ static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj,
+ 					   u32 metadata_size)
+ {
+ 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
++	void *new_metadata;
+ 	void *buf;
+ 	int ret;
+ 
+@@ -568,8 +569,14 @@ static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj,
+ 	if (ret)
+ 		goto out;
+ 
+-	msm_obj->metadata =
++	new_metadata =
+ 		krealloc(msm_obj->metadata, metadata_size, GFP_KERNEL);
++	if (!new_metadata) {
++		ret = -ENOMEM;
++		goto out;
++	}
++
++	msm_obj->metadata = new_metadata;
+ 	msm_obj->metadata_size = metadata_size;
+ 	memcpy(msm_obj->metadata, buf, metadata_size);
+ 
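The check added above closes a leak-and-clobber bug: assigning krealloc()'s
result straight to msm_obj->metadata would lose the old buffer whenever the
reallocation failed, since krealloc() returns NULL on failure and leaves the
original allocation intact. The safe shape of the idiom, as a sketch with
illustrative names:

	void *tmp;

	tmp = krealloc(obj->buf, new_size, GFP_KERNEL);
	if (!tmp)
		return -ENOMEM;		/* obj->buf is untouched and still owned */
	obj->buf = tmp;			/* commit the pointer only on success */
	obj->buf_size = new_size;
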
+diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
+index ebc9ba66efb89d..eeb3b65dd4d13e 100644
+--- a/drivers/gpu/drm/msm/msm_gem.c
++++ b/drivers/gpu/drm/msm/msm_gem.c
+@@ -963,7 +963,8 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
+ 	uint64_t off = drm_vma_node_start(&obj->vma_node);
+ 	const char *madv;
+ 
+-	msm_gem_lock(obj);
++	if (!msm_gem_trylock(obj))
++		return;
+ 
+ 	stats->all.count++;
+ 	stats->all.size += obj->size;
+diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
+index 85f0257e83dab6..748053f70ca7a7 100644
+--- a/drivers/gpu/drm/msm/msm_gem.h
++++ b/drivers/gpu/drm/msm/msm_gem.h
+@@ -188,6 +188,12 @@ msm_gem_lock(struct drm_gem_object *obj)
+ 	dma_resv_lock(obj->resv, NULL);
+ }
+ 
++static inline bool __must_check
++msm_gem_trylock(struct drm_gem_object *obj)
++{
++	return dma_resv_trylock(obj->resv);
++}
++
+ static inline int
+ msm_gem_lock_interruptible(struct drm_gem_object *obj)
+ {
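msm_gem_trylock() is a thin __must_check wrapper over dma_resv_trylock(),
and the msm_gem.c hunk earlier uses it so the debugfs describe path skips
objects whose reservation lock is contended instead of sleeping on them. A
sketch of the intended call shape (the unlock pairing is assumed, not shown
in the hunk):

	/* Sketch: best-effort inspection that never blocks on a busy object */
	if (!msm_gem_trylock(obj))
		return;			/* object is busy; skip it this pass */

	/* ... read-only inspection of the object ... */

	msm_gem_unlock(obj);
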
+diff --git a/drivers/gpu/drm/msm/registers/adreno/a6xx.xml b/drivers/gpu/drm/msm/registers/adreno/a6xx.xml
+index 2db425abf0f3cc..d860fd94feae85 100644
+--- a/drivers/gpu/drm/msm/registers/adreno/a6xx.xml
++++ b/drivers/gpu/drm/msm/registers/adreno/a6xx.xml
+@@ -5,6 +5,11 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ <import file="freedreno_copyright.xml"/>
+ <import file="adreno/adreno_common.xml"/>
+ <import file="adreno/adreno_pm4.xml"/>
++<import file="adreno/a6xx_enums.xml"/>
++<import file="adreno/a7xx_enums.xml"/>
++<import file="adreno/a6xx_perfcntrs.xml"/>
++<import file="adreno/a7xx_perfcntrs.xml"/>
++<import file="adreno/a6xx_descriptors.xml"/>
+ 
+ <!--
+ Each register that is actually being used by driver should have "usage" defined,
+@@ -20,2205 +25,6 @@ is either overwritten by renderpass/blit (ib2) or not used if not overwritten
+ by a particular renderpass/blit.
+ -->
+ 
+-<!-- these might be same as a5xx -->
+-<enum name="a6xx_tile_mode">
+-	<value name="TILE6_LINEAR" value="0"/>
+-	<value name="TILE6_2" value="2"/>
+-	<value name="TILE6_3" value="3"/>
+-</enum>
+-
+-<enum name="a6xx_format">
+-	<value value="0x02" name="FMT6_A8_UNORM"/>
+-	<value value="0x03" name="FMT6_8_UNORM"/>
+-	<value value="0x04" name="FMT6_8_SNORM"/>
+-	<value value="0x05" name="FMT6_8_UINT"/>
+-	<value value="0x06" name="FMT6_8_SINT"/>
+-
+-	<value value="0x08" name="FMT6_4_4_4_4_UNORM"/>
+-	<value value="0x0a" name="FMT6_5_5_5_1_UNORM"/>
+-	<value value="0x0c" name="FMT6_1_5_5_5_UNORM"/> <!-- read only -->
+-	<value value="0x0e" name="FMT6_5_6_5_UNORM"/>
+-
+-	<value value="0x0f" name="FMT6_8_8_UNORM"/>
+-	<value value="0x10" name="FMT6_8_8_SNORM"/>
+-	<value value="0x11" name="FMT6_8_8_UINT"/>
+-	<value value="0x12" name="FMT6_8_8_SINT"/>
+-	<value value="0x13" name="FMT6_L8_A8_UNORM"/>
+-
+-	<value value="0x15" name="FMT6_16_UNORM"/>
+-	<value value="0x16" name="FMT6_16_SNORM"/>
+-	<value value="0x17" name="FMT6_16_FLOAT"/>
+-	<value value="0x18" name="FMT6_16_UINT"/>
+-	<value value="0x19" name="FMT6_16_SINT"/>
+-
+-	<value value="0x21" name="FMT6_8_8_8_UNORM"/>
+-	<value value="0x22" name="FMT6_8_8_8_SNORM"/>
+-	<value value="0x23" name="FMT6_8_8_8_UINT"/>
+-	<value value="0x24" name="FMT6_8_8_8_SINT"/>
+-
+-	<value value="0x30" name="FMT6_8_8_8_8_UNORM"/>
+-	<value value="0x31" name="FMT6_8_8_8_X8_UNORM"/> <!-- samples 1 for alpha -->
+-	<value value="0x32" name="FMT6_8_8_8_8_SNORM"/>
+-	<value value="0x33" name="FMT6_8_8_8_8_UINT"/>
+-	<value value="0x34" name="FMT6_8_8_8_8_SINT"/>
+-
+-	<value value="0x35" name="FMT6_9_9_9_E5_FLOAT"/>
+-
+-	<value value="0x36" name="FMT6_10_10_10_2_UNORM"/>
+-	<value value="0x37" name="FMT6_10_10_10_2_UNORM_DEST"/>
+-	<value value="0x39" name="FMT6_10_10_10_2_SNORM"/>
+-	<value value="0x3a" name="FMT6_10_10_10_2_UINT"/>
+-	<value value="0x3b" name="FMT6_10_10_10_2_SINT"/>
+-
+-	<value value="0x42" name="FMT6_11_11_10_FLOAT"/>
+-
+-	<value value="0x43" name="FMT6_16_16_UNORM"/>
+-	<value value="0x44" name="FMT6_16_16_SNORM"/>
+-	<value value="0x45" name="FMT6_16_16_FLOAT"/>
+-	<value value="0x46" name="FMT6_16_16_UINT"/>
+-	<value value="0x47" name="FMT6_16_16_SINT"/>
+-
+-	<value value="0x48" name="FMT6_32_UNORM"/>
+-	<value value="0x49" name="FMT6_32_SNORM"/>
+-	<value value="0x4a" name="FMT6_32_FLOAT"/>
+-	<value value="0x4b" name="FMT6_32_UINT"/>
+-	<value value="0x4c" name="FMT6_32_SINT"/>
+-	<value value="0x4d" name="FMT6_32_FIXED"/>
+-
+-	<value value="0x58" name="FMT6_16_16_16_UNORM"/>
+-	<value value="0x59" name="FMT6_16_16_16_SNORM"/>
+-	<value value="0x5a" name="FMT6_16_16_16_FLOAT"/>
+-	<value value="0x5b" name="FMT6_16_16_16_UINT"/>
+-	<value value="0x5c" name="FMT6_16_16_16_SINT"/>
+-
+-	<value value="0x60" name="FMT6_16_16_16_16_UNORM"/>
+-	<value value="0x61" name="FMT6_16_16_16_16_SNORM"/>
+-	<value value="0x62" name="FMT6_16_16_16_16_FLOAT"/>
+-	<value value="0x63" name="FMT6_16_16_16_16_UINT"/>
+-	<value value="0x64" name="FMT6_16_16_16_16_SINT"/>
+-
+-	<value value="0x65" name="FMT6_32_32_UNORM"/>
+-	<value value="0x66" name="FMT6_32_32_SNORM"/>
+-	<value value="0x67" name="FMT6_32_32_FLOAT"/>
+-	<value value="0x68" name="FMT6_32_32_UINT"/>
+-	<value value="0x69" name="FMT6_32_32_SINT"/>
+-	<value value="0x6a" name="FMT6_32_32_FIXED"/>
+-
+-	<value value="0x70" name="FMT6_32_32_32_UNORM"/>
+-	<value value="0x71" name="FMT6_32_32_32_SNORM"/>
+-	<value value="0x72" name="FMT6_32_32_32_UINT"/>
+-	<value value="0x73" name="FMT6_32_32_32_SINT"/>
+-	<value value="0x74" name="FMT6_32_32_32_FLOAT"/>
+-	<value value="0x75" name="FMT6_32_32_32_FIXED"/>
+-
+-	<value value="0x80" name="FMT6_32_32_32_32_UNORM"/>
+-	<value value="0x81" name="FMT6_32_32_32_32_SNORM"/>
+-	<value value="0x82" name="FMT6_32_32_32_32_FLOAT"/>
+-	<value value="0x83" name="FMT6_32_32_32_32_UINT"/>
+-	<value value="0x84" name="FMT6_32_32_32_32_SINT"/>
+-	<value value="0x85" name="FMT6_32_32_32_32_FIXED"/>
+-
+-	<value value="0x8c" name="FMT6_G8R8B8R8_422_UNORM"/> <!-- UYVY -->
+-	<value value="0x8d" name="FMT6_R8G8R8B8_422_UNORM"/> <!-- YUYV -->
+-	<value value="0x8e" name="FMT6_R8_G8B8_2PLANE_420_UNORM"/> <!-- NV12 -->
+-	<value value="0x8f" name="FMT6_NV21"/>
+-	<value value="0x90" name="FMT6_R8_G8_B8_3PLANE_420_UNORM"/> <!-- YV12 -->
+-
+-	<value value="0x91" name="FMT6_Z24_UNORM_S8_UINT_AS_R8G8B8A8"/>
+-
+-	<!-- Note: tiling/UBWC for these may be different from equivalent formats
+-	For example FMT6_NV12_Y is not compatible with FMT6_8_UNORM
+-	-->
+-	<value value="0x94" name="FMT6_NV12_Y"/>
+-	<value value="0x95" name="FMT6_NV12_UV"/>
+-	<value value="0x96" name="FMT6_NV12_VU"/>
+-	<value value="0x97" name="FMT6_NV12_4R"/>
+-	<value value="0x98" name="FMT6_NV12_4R_Y"/>
+-	<value value="0x99" name="FMT6_NV12_4R_UV"/>
+-	<value value="0x9a" name="FMT6_P010"/>
+-	<value value="0x9b" name="FMT6_P010_Y"/>
+-	<value value="0x9c" name="FMT6_P010_UV"/>
+-	<value value="0x9d" name="FMT6_TP10"/>
+-	<value value="0x9e" name="FMT6_TP10_Y"/>
+-	<value value="0x9f" name="FMT6_TP10_UV"/>
+-
+-	<value value="0xa0" name="FMT6_Z24_UNORM_S8_UINT"/>
+-
+-	<value value="0xab" name="FMT6_ETC2_RG11_UNORM"/>
+-	<value value="0xac" name="FMT6_ETC2_RG11_SNORM"/>
+-	<value value="0xad" name="FMT6_ETC2_R11_UNORM"/>
+-	<value value="0xae" name="FMT6_ETC2_R11_SNORM"/>
+-	<value value="0xaf" name="FMT6_ETC1"/>
+-	<value value="0xb0" name="FMT6_ETC2_RGB8"/>
+-	<value value="0xb1" name="FMT6_ETC2_RGBA8"/>
+-	<value value="0xb2" name="FMT6_ETC2_RGB8A1"/>
+-	<value value="0xb3" name="FMT6_DXT1"/>
+-	<value value="0xb4" name="FMT6_DXT3"/>
+-	<value value="0xb5" name="FMT6_DXT5"/>
+-	<value value="0xb7" name="FMT6_RGTC1_UNORM"/>
+-	<value value="0xb8" name="FMT6_RGTC1_SNORM"/>
+-	<value value="0xbb" name="FMT6_RGTC2_UNORM"/>
+-	<value value="0xbc" name="FMT6_RGTC2_SNORM"/>
+-	<value value="0xbe" name="FMT6_BPTC_UFLOAT"/>
+-	<value value="0xbf" name="FMT6_BPTC_FLOAT"/>
+-	<value value="0xc0" name="FMT6_BPTC"/>
+-	<value value="0xc1" name="FMT6_ASTC_4x4"/>
+-	<value value="0xc2" name="FMT6_ASTC_5x4"/>
+-	<value value="0xc3" name="FMT6_ASTC_5x5"/>
+-	<value value="0xc4" name="FMT6_ASTC_6x5"/>
+-	<value value="0xc5" name="FMT6_ASTC_6x6"/>
+-	<value value="0xc6" name="FMT6_ASTC_8x5"/>
+-	<value value="0xc7" name="FMT6_ASTC_8x6"/>
+-	<value value="0xc8" name="FMT6_ASTC_8x8"/>
+-	<value value="0xc9" name="FMT6_ASTC_10x5"/>
+-	<value value="0xca" name="FMT6_ASTC_10x6"/>
+-	<value value="0xcb" name="FMT6_ASTC_10x8"/>
+-	<value value="0xcc" name="FMT6_ASTC_10x10"/>
+-	<value value="0xcd" name="FMT6_ASTC_12x10"/>
+-	<value value="0xce" name="FMT6_ASTC_12x12"/>
+-
+-	<!-- for sampling stencil (integer, 2nd channel), not available on a630 -->
+-	<value value="0xea" name="FMT6_Z24_UINT_S8_UINT"/>
+-
+-	<!-- Not a hw enum, used internally in driver -->
+-	<value value="0xff" name="FMT6_NONE"/>
+-
+-</enum>
+-
+-<!-- probably same as a5xx -->
+-<enum name="a6xx_polygon_mode">
+-	<value name="POLYMODE6_POINTS" value="1"/>
+-	<value name="POLYMODE6_LINES" value="2"/>
+-	<value name="POLYMODE6_TRIANGLES" value="3"/>
+-</enum>
+-
+-<enum name="a6xx_depth_format">
+-	<value name="DEPTH6_NONE" value="0"/>
+-	<value name="DEPTH6_16" value="1"/>
+-	<value name="DEPTH6_24_8" value="2"/>
+-	<value name="DEPTH6_32" value="4"/>
+-</enum>
+-
+-<bitset name="a6x_cp_protect" inline="yes">
+-	<bitfield name="BASE_ADDR" low="0" high="17"/>
+-	<bitfield name="MASK_LEN" low="18" high="30"/>
+-	<bitfield name="READ" pos="31" type="boolean"/>
+-</bitset>
+-
+-<enum name="a6xx_shader_id">
+-	<value value="0x9" name="A6XX_TP0_TMO_DATA"/>
+-	<value value="0xa" name="A6XX_TP0_SMO_DATA"/>
+-	<value value="0xb" name="A6XX_TP0_MIPMAP_BASE_DATA"/>
+-	<value value="0x19" name="A6XX_TP1_TMO_DATA"/>
+-	<value value="0x1a" name="A6XX_TP1_SMO_DATA"/>
+-	<value value="0x1b" name="A6XX_TP1_MIPMAP_BASE_DATA"/>
+-	<value value="0x29" name="A6XX_SP_INST_DATA"/>
+-	<value value="0x2a" name="A6XX_SP_LB_0_DATA"/>
+-	<value value="0x2b" name="A6XX_SP_LB_1_DATA"/>
+-	<value value="0x2c" name="A6XX_SP_LB_2_DATA"/>
+-	<value value="0x2d" name="A6XX_SP_LB_3_DATA"/>
+-	<value value="0x2e" name="A6XX_SP_LB_4_DATA"/>
+-	<value value="0x2f" name="A6XX_SP_LB_5_DATA"/>
+-	<value value="0x30" name="A6XX_SP_CB_BINDLESS_DATA"/>
+-	<value value="0x31" name="A6XX_SP_CB_LEGACY_DATA"/>
+-	<value value="0x32" name="A6XX_SP_UAV_DATA"/>
+-	<value value="0x33" name="A6XX_SP_INST_TAG"/>
+-	<value value="0x34" name="A6XX_SP_CB_BINDLESS_TAG"/>
+-	<value value="0x35" name="A6XX_SP_TMO_UMO_TAG"/>
+-	<value value="0x36" name="A6XX_SP_SMO_TAG"/>
+-	<value value="0x37" name="A6XX_SP_STATE_DATA"/>
+-	<value value="0x49" name="A6XX_HLSQ_CHUNK_CVS_RAM"/>
+-	<value value="0x4a" name="A6XX_HLSQ_CHUNK_CPS_RAM"/>
+-	<value value="0x4b" name="A6XX_HLSQ_CHUNK_CVS_RAM_TAG"/>
+-	<value value="0x4c" name="A6XX_HLSQ_CHUNK_CPS_RAM_TAG"/>
+-	<value value="0x4d" name="A6XX_HLSQ_ICB_CVS_CB_BASE_TAG"/>
+-	<value value="0x4e" name="A6XX_HLSQ_ICB_CPS_CB_BASE_TAG"/>
+-	<value value="0x50" name="A6XX_HLSQ_CVS_MISC_RAM"/>
+-	<value value="0x51" name="A6XX_HLSQ_CPS_MISC_RAM"/>
+-	<value value="0x52" name="A6XX_HLSQ_INST_RAM"/>
+-	<value value="0x53" name="A6XX_HLSQ_GFX_CVS_CONST_RAM"/>
+-	<value value="0x54" name="A6XX_HLSQ_GFX_CPS_CONST_RAM"/>
+-	<value value="0x55" name="A6XX_HLSQ_CVS_MISC_RAM_TAG"/>
+-	<value value="0x56" name="A6XX_HLSQ_CPS_MISC_RAM_TAG"/>
+-	<value value="0x57" name="A6XX_HLSQ_INST_RAM_TAG"/>
+-	<value value="0x58" name="A6XX_HLSQ_GFX_CVS_CONST_RAM_TAG"/>
+-	<value value="0x59" name="A6XX_HLSQ_GFX_CPS_CONST_RAM_TAG"/>
+-	<value value="0x5a" name="A6XX_HLSQ_PWR_REST_RAM"/>
+-	<value value="0x5b" name="A6XX_HLSQ_PWR_REST_TAG"/>
+-	<value value="0x60" name="A6XX_HLSQ_DATAPATH_META"/>
+-	<value value="0x61" name="A6XX_HLSQ_FRONTEND_META"/>
+-	<value value="0x62" name="A6XX_HLSQ_INDIRECT_META"/>
+-	<value value="0x63" name="A6XX_HLSQ_BACKEND_META"/>
+-	<value value="0x70" name="A6XX_SP_LB_6_DATA"/>
+-	<value value="0x71" name="A6XX_SP_LB_7_DATA"/>
+-	<value value="0x73" name="A6XX_HLSQ_INST_RAM_1"/>
+-</enum>
+-
+-<enum name="a7xx_statetype_id">
+-	<value value="0" name="A7XX_TP0_NCTX_REG"/>
+-	<value value="1" name="A7XX_TP0_CTX0_3D_CVS_REG"/>
+-	<value value="2" name="A7XX_TP0_CTX0_3D_CPS_REG"/>
+-	<value value="3" name="A7XX_TP0_CTX1_3D_CVS_REG"/>
+-	<value value="4" name="A7XX_TP0_CTX1_3D_CPS_REG"/>
+-	<value value="5" name="A7XX_TP0_CTX2_3D_CPS_REG"/>
+-	<value value="6" name="A7XX_TP0_CTX3_3D_CPS_REG"/>
+-	<value value="9" name="A7XX_TP0_TMO_DATA"/>
+-	<value value="10" name="A7XX_TP0_SMO_DATA"/>
+-	<value value="11" name="A7XX_TP0_MIPMAP_BASE_DATA"/>
+-	<value value="32" name="A7XX_SP_NCTX_REG"/>
+-	<value value="33" name="A7XX_SP_CTX0_3D_CVS_REG"/>
+-	<value value="34" name="A7XX_SP_CTX0_3D_CPS_REG"/>
+-	<value value="35" name="A7XX_SP_CTX1_3D_CVS_REG"/>
+-	<value value="36" name="A7XX_SP_CTX1_3D_CPS_REG"/>
+-	<value value="37" name="A7XX_SP_CTX2_3D_CPS_REG"/>
+-	<value value="38" name="A7XX_SP_CTX3_3D_CPS_REG"/>
+-	<value value="39" name="A7XX_SP_INST_DATA"/>
+-	<value value="40" name="A7XX_SP_INST_DATA_1"/>
+-	<value value="41" name="A7XX_SP_LB_0_DATA"/>
+-	<value value="42" name="A7XX_SP_LB_1_DATA"/>
+-	<value value="43" name="A7XX_SP_LB_2_DATA"/>
+-	<value value="44" name="A7XX_SP_LB_3_DATA"/>
+-	<value value="45" name="A7XX_SP_LB_4_DATA"/>
+-	<value value="46" name="A7XX_SP_LB_5_DATA"/>
+-	<value value="47" name="A7XX_SP_LB_6_DATA"/>
+-	<value value="48" name="A7XX_SP_LB_7_DATA"/>
+-	<value value="49" name="A7XX_SP_CB_RAM"/>
+-	<value value="50" name="A7XX_SP_LB_13_DATA"/>
+-	<value value="51" name="A7XX_SP_LB_14_DATA"/>
+-	<value value="52" name="A7XX_SP_INST_TAG"/>
+-	<value value="53" name="A7XX_SP_INST_DATA_2"/>
+-	<value value="54" name="A7XX_SP_TMO_TAG"/>
+-	<value value="55" name="A7XX_SP_SMO_TAG"/>
+-	<value value="56" name="A7XX_SP_STATE_DATA"/>
+-	<value value="57" name="A7XX_SP_HWAVE_RAM"/>
+-	<value value="58" name="A7XX_SP_L0_INST_BUF"/>
+-	<value value="59" name="A7XX_SP_LB_8_DATA"/>
+-	<value value="60" name="A7XX_SP_LB_9_DATA"/>
+-	<value value="61" name="A7XX_SP_LB_10_DATA"/>
+-	<value value="62" name="A7XX_SP_LB_11_DATA"/>
+-	<value value="63" name="A7XX_SP_LB_12_DATA"/>
+-	<value value="64" name="A7XX_HLSQ_DATAPATH_DSTR_META"/>
+-	<value value="67" name="A7XX_HLSQ_L2STC_TAG_RAM"/>
+-	<value value="68" name="A7XX_HLSQ_L2STC_INFO_CMD"/>
+-	<value value="69" name="A7XX_HLSQ_CVS_BE_CTXT_BUF_RAM_TAG"/>
+-	<value value="70" name="A7XX_HLSQ_CPS_BE_CTXT_BUF_RAM_TAG"/>
+-	<value value="71" name="A7XX_HLSQ_GFX_CVS_BE_CTXT_BUF_RAM"/>
+-	<value value="72" name="A7XX_HLSQ_GFX_CPS_BE_CTXT_BUF_RAM"/>
+-	<value value="73" name="A7XX_HLSQ_CHUNK_CVS_RAM"/>
+-	<value value="74" name="A7XX_HLSQ_CHUNK_CPS_RAM"/>
+-	<value value="75" name="A7XX_HLSQ_CHUNK_CVS_RAM_TAG"/>
+-	<value value="76" name="A7XX_HLSQ_CHUNK_CPS_RAM_TAG"/>
+-	<value value="77" name="A7XX_HLSQ_ICB_CVS_CB_BASE_TAG"/>
+-	<value value="78" name="A7XX_HLSQ_ICB_CPS_CB_BASE_TAG"/>
+-	<value value="79" name="A7XX_HLSQ_CVS_MISC_RAM"/>
+-	<value value="80" name="A7XX_HLSQ_CPS_MISC_RAM"/>
+-	<value value="81" name="A7XX_HLSQ_CPS_MISC_RAM_1"/>
+-	<value value="82" name="A7XX_HLSQ_INST_RAM"/>
+-	<value value="83" name="A7XX_HLSQ_GFX_CVS_CONST_RAM"/>
+-	<value value="84" name="A7XX_HLSQ_GFX_CPS_CONST_RAM"/>
+-	<value value="85" name="A7XX_HLSQ_CVS_MISC_RAM_TAG"/>
+-	<value value="86" name="A7XX_HLSQ_CPS_MISC_RAM_TAG"/>
+-	<value value="87" name="A7XX_HLSQ_INST_RAM_TAG"/>
+-	<value value="88" name="A7XX_HLSQ_GFX_CVS_CONST_RAM_TAG"/>
+-	<value value="89" name="A7XX_HLSQ_GFX_CPS_CONST_RAM_TAG"/>
+-	<value value="90" name="A7XX_HLSQ_GFX_LOCAL_MISC_RAM"/>
+-	<value value="91" name="A7XX_HLSQ_GFX_LOCAL_MISC_RAM_TAG"/>
+-	<value value="92" name="A7XX_HLSQ_INST_RAM_1"/>
+-	<value value="93" name="A7XX_HLSQ_STPROC_META"/>
+-	<value value="94" name="A7XX_HLSQ_BV_BE_META"/>
+-	<value value="95" name="A7XX_HLSQ_INST_RAM_2"/>
+-	<value value="96" name="A7XX_HLSQ_DATAPATH_META"/>
+-	<value value="97" name="A7XX_HLSQ_FRONTEND_META"/>
+-	<value value="98" name="A7XX_HLSQ_INDIRECT_META"/>
+-	<value value="99" name="A7XX_HLSQ_BACKEND_META"/>
+-</enum>
+-
+-<enum name="a6xx_debugbus_id">
+-	<value value="0x1" name="A6XX_DBGBUS_CP"/>
+-	<value value="0x2" name="A6XX_DBGBUS_RBBM"/>
+-	<value value="0x3" name="A6XX_DBGBUS_VBIF"/>
+-	<value value="0x4" name="A6XX_DBGBUS_HLSQ"/>
+-	<value value="0x5" name="A6XX_DBGBUS_UCHE"/>
+-	<value value="0x6" name="A6XX_DBGBUS_DPM"/>
+-	<value value="0x7" name="A6XX_DBGBUS_TESS"/>
+-	<value value="0x8" name="A6XX_DBGBUS_PC"/>
+-	<value value="0x9" name="A6XX_DBGBUS_VFDP"/>
+-	<value value="0xa" name="A6XX_DBGBUS_VPC"/>
+-	<value value="0xb" name="A6XX_DBGBUS_TSE"/>
+-	<value value="0xc" name="A6XX_DBGBUS_RAS"/>
+-	<value value="0xd" name="A6XX_DBGBUS_VSC"/>
+-	<value value="0xe" name="A6XX_DBGBUS_COM"/>
+-	<value value="0x10" name="A6XX_DBGBUS_LRZ"/>
+-	<value value="0x11" name="A6XX_DBGBUS_A2D"/>
+-	<value value="0x12" name="A6XX_DBGBUS_CCUFCHE"/>
+-	<value value="0x13" name="A6XX_DBGBUS_GMU_CX"/>
+-	<value value="0x14" name="A6XX_DBGBUS_RBP"/>
+-	<value value="0x15" name="A6XX_DBGBUS_DCS"/>
+-	<value value="0x16" name="A6XX_DBGBUS_DBGC"/>
+-	<value value="0x17" name="A6XX_DBGBUS_CX"/>
+-	<value value="0x18" name="A6XX_DBGBUS_GMU_GX"/>
+-	<value value="0x19" name="A6XX_DBGBUS_TPFCHE"/>
+-	<value value="0x1a" name="A6XX_DBGBUS_GBIF_GX"/>
+-	<value value="0x1d" name="A6XX_DBGBUS_GPC"/>
+-	<value value="0x1e" name="A6XX_DBGBUS_LARC"/>
+-	<value value="0x1f" name="A6XX_DBGBUS_HLSQ_SPTP"/>
+-	<value value="0x20" name="A6XX_DBGBUS_RB_0"/>
+-	<value value="0x21" name="A6XX_DBGBUS_RB_1"/>
+-	<value value="0x22" name="A6XX_DBGBUS_RB_2"/>
+-	<value value="0x24" name="A6XX_DBGBUS_UCHE_WRAPPER"/>
+-	<value value="0x28" name="A6XX_DBGBUS_CCU_0"/>
+-	<value value="0x29" name="A6XX_DBGBUS_CCU_1"/>
+-	<value value="0x2a" name="A6XX_DBGBUS_CCU_2"/>
+-	<value value="0x38" name="A6XX_DBGBUS_VFD_0"/>
+-	<value value="0x39" name="A6XX_DBGBUS_VFD_1"/>
+-	<value value="0x3a" name="A6XX_DBGBUS_VFD_2"/>
+-	<value value="0x3b" name="A6XX_DBGBUS_VFD_3"/>
+-	<value value="0x3c" name="A6XX_DBGBUS_VFD_4"/>
+-	<value value="0x3d" name="A6XX_DBGBUS_VFD_5"/>
+-	<value value="0x40" name="A6XX_DBGBUS_SP_0"/>
+-	<value value="0x41" name="A6XX_DBGBUS_SP_1"/>
+-	<value value="0x42" name="A6XX_DBGBUS_SP_2"/>
+-	<value value="0x48" name="A6XX_DBGBUS_TPL1_0"/>
+-	<value value="0x49" name="A6XX_DBGBUS_TPL1_1"/>
+-	<value value="0x4a" name="A6XX_DBGBUS_TPL1_2"/>
+-	<value value="0x4b" name="A6XX_DBGBUS_TPL1_3"/>
+-	<value value="0x4c" name="A6XX_DBGBUS_TPL1_4"/>
+-	<value value="0x4d" name="A6XX_DBGBUS_TPL1_5"/>
+-	<value value="0x58" name="A6XX_DBGBUS_SPTP_0"/>
+-	<value value="0x59" name="A6XX_DBGBUS_SPTP_1"/>
+-	<value value="0x5a" name="A6XX_DBGBUS_SPTP_2"/>
+-	<value value="0x5b" name="A6XX_DBGBUS_SPTP_3"/>
+-	<value value="0x5c" name="A6XX_DBGBUS_SPTP_4"/>
+-	<value value="0x5d" name="A6XX_DBGBUS_SPTP_5"/>
+-</enum>
+-
+-<enum name="a7xx_state_location">
+-	<value value="0" name="A7XX_HLSQ_STATE"/>
+-	<value value="1" name="A7XX_HLSQ_DP"/>
+-	<value value="2" name="A7XX_SP_TOP"/>
+-	<value value="3" name="A7XX_USPTP"/>
+-	<value value="4" name="A7XX_HLSQ_DP_STR"/>
+-</enum>
+-
+-<enum name="a7xx_pipe">
+-	<value value="0" name="A7XX_PIPE_NONE"/>
+-	<value value="1" name="A7XX_PIPE_BR"/>
+-	<value value="2" name="A7XX_PIPE_BV"/>
+-	<value value="3" name="A7XX_PIPE_LPAC"/>
+-</enum>
+-
+-<enum name="a7xx_cluster">
+-	<value value="0" name="A7XX_CLUSTER_NONE"/>
+-	<value value="1" name="A7XX_CLUSTER_FE"/>
+-	<value value="2" name="A7XX_CLUSTER_SP_VS"/>
+-	<value value="3" name="A7XX_CLUSTER_PC_VS"/>
+-	<value value="4" name="A7XX_CLUSTER_GRAS"/>
+-	<value value="5" name="A7XX_CLUSTER_SP_PS"/>
+-	<value value="6" name="A7XX_CLUSTER_VPC_PS"/>
+-	<value value="7" name="A7XX_CLUSTER_PS"/>
+-</enum>
+-
+-<enum name="a7xx_debugbus_id">
+-	<value value="1" name="A7XX_DBGBUS_CP_0_0"/>
+-	<value value="2" name="A7XX_DBGBUS_CP_0_1"/>
+-	<value value="3" name="A7XX_DBGBUS_RBBM"/>
+-	<value value="5" name="A7XX_DBGBUS_GBIF_GX"/>
+-	<value value="6" name="A7XX_DBGBUS_GBIF_CX"/>
+-	<value value="7" name="A7XX_DBGBUS_HLSQ"/>
+-	<value value="9" name="A7XX_DBGBUS_UCHE_0"/>
+-	<value value="10" name="A7XX_DBGBUS_UCHE_1"/>
+-	<value value="13" name="A7XX_DBGBUS_TESS_BR"/>
+-	<value value="14" name="A7XX_DBGBUS_TESS_BV"/>
+-	<value value="17" name="A7XX_DBGBUS_PC_BR"/>
+-	<value value="18" name="A7XX_DBGBUS_PC_BV"/>
+-	<value value="21" name="A7XX_DBGBUS_VFDP_BR"/>
+-	<value value="22" name="A7XX_DBGBUS_VFDP_BV"/>
+-	<value value="25" name="A7XX_DBGBUS_VPC_BR"/>
+-	<value value="26" name="A7XX_DBGBUS_VPC_BV"/>
+-	<value value="29" name="A7XX_DBGBUS_TSE_BR"/>
+-	<value value="30" name="A7XX_DBGBUS_TSE_BV"/>
+-	<value value="33" name="A7XX_DBGBUS_RAS_BR"/>
+-	<value value="34" name="A7XX_DBGBUS_RAS_BV"/>
+-	<value value="37" name="A7XX_DBGBUS_VSC"/>
+-	<value value="39" name="A7XX_DBGBUS_COM_0"/>
+-	<value value="43" name="A7XX_DBGBUS_LRZ_BR"/>
+-	<value value="44" name="A7XX_DBGBUS_LRZ_BV"/>
+-	<value value="47" name="A7XX_DBGBUS_UFC_0"/>
+-	<value value="48" name="A7XX_DBGBUS_UFC_1"/>
+-	<value value="55" name="A7XX_DBGBUS_GMU_GX"/>
+-	<value value="59" name="A7XX_DBGBUS_DBGC"/>
+-	<value value="60" name="A7XX_DBGBUS_CX"/>
+-	<value value="61" name="A7XX_DBGBUS_GMU_CX"/>
+-	<value value="62" name="A7XX_DBGBUS_GPC_BR"/>
+-	<value value="63" name="A7XX_DBGBUS_GPC_BV"/>
+-	<value value="66" name="A7XX_DBGBUS_LARC"/>
+-	<value value="68" name="A7XX_DBGBUS_HLSQ_SPTP"/>
+-	<value value="70" name="A7XX_DBGBUS_RB_0"/>
+-	<value value="71" name="A7XX_DBGBUS_RB_1"/>
+-	<value value="72" name="A7XX_DBGBUS_RB_2"/>
+-	<value value="73" name="A7XX_DBGBUS_RB_3"/>
+-	<value value="74" name="A7XX_DBGBUS_RB_4"/>
+-	<value value="75" name="A7XX_DBGBUS_RB_5"/>
+-	<value value="102" name="A7XX_DBGBUS_UCHE_WRAPPER"/>
+-	<value value="106" name="A7XX_DBGBUS_CCU_0"/>
+-	<value value="107" name="A7XX_DBGBUS_CCU_1"/>
+-	<value value="108" name="A7XX_DBGBUS_CCU_2"/>
+-	<value value="109" name="A7XX_DBGBUS_CCU_3"/>
+-	<value value="110" name="A7XX_DBGBUS_CCU_4"/>
+-	<value value="111" name="A7XX_DBGBUS_CCU_5"/>
+-	<value value="138" name="A7XX_DBGBUS_VFD_BR_0"/>
+-	<value value="139" name="A7XX_DBGBUS_VFD_BR_1"/>
+-	<value value="140" name="A7XX_DBGBUS_VFD_BR_2"/>
+-	<value value="141" name="A7XX_DBGBUS_VFD_BR_3"/>
+-	<value value="142" name="A7XX_DBGBUS_VFD_BR_4"/>
+-	<value value="143" name="A7XX_DBGBUS_VFD_BR_5"/>
+-	<value value="144" name="A7XX_DBGBUS_VFD_BR_6"/>
+-	<value value="145" name="A7XX_DBGBUS_VFD_BR_7"/>
+-	<value value="202" name="A7XX_DBGBUS_VFD_BV_0"/>
+-	<value value="203" name="A7XX_DBGBUS_VFD_BV_1"/>
+-	<value value="204" name="A7XX_DBGBUS_VFD_BV_2"/>
+-	<value value="205" name="A7XX_DBGBUS_VFD_BV_3"/>
+-	<value value="234" name="A7XX_DBGBUS_USP_0"/>
+-	<value value="235" name="A7XX_DBGBUS_USP_1"/>
+-	<value value="236" name="A7XX_DBGBUS_USP_2"/>
+-	<value value="237" name="A7XX_DBGBUS_USP_3"/>
+-	<value value="238" name="A7XX_DBGBUS_USP_4"/>
+-	<value value="239" name="A7XX_DBGBUS_USP_5"/>
+-	<value value="266" name="A7XX_DBGBUS_TP_0"/>
+-	<value value="267" name="A7XX_DBGBUS_TP_1"/>
+-	<value value="268" name="A7XX_DBGBUS_TP_2"/>
+-	<value value="269" name="A7XX_DBGBUS_TP_3"/>
+-	<value value="270" name="A7XX_DBGBUS_TP_4"/>
+-	<value value="271" name="A7XX_DBGBUS_TP_5"/>
+-	<value value="272" name="A7XX_DBGBUS_TP_6"/>
+-	<value value="273" name="A7XX_DBGBUS_TP_7"/>
+-	<value value="274" name="A7XX_DBGBUS_TP_8"/>
+-	<value value="275" name="A7XX_DBGBUS_TP_9"/>
+-	<value value="276" name="A7XX_DBGBUS_TP_10"/>
+-	<value value="277" name="A7XX_DBGBUS_TP_11"/>
+-	<value value="330" name="A7XX_DBGBUS_USPTP_0"/>
+-	<value value="331" name="A7XX_DBGBUS_USPTP_1"/>
+-	<value value="332" name="A7XX_DBGBUS_USPTP_2"/>
+-	<value value="333" name="A7XX_DBGBUS_USPTP_3"/>
+-	<value value="334" name="A7XX_DBGBUS_USPTP_4"/>
+-	<value value="335" name="A7XX_DBGBUS_USPTP_5"/>
+-	<value value="336" name="A7XX_DBGBUS_USPTP_6"/>
+-	<value value="337" name="A7XX_DBGBUS_USPTP_7"/>
+-	<value value="338" name="A7XX_DBGBUS_USPTP_8"/>
+-	<value value="339" name="A7XX_DBGBUS_USPTP_9"/>
+-	<value value="340" name="A7XX_DBGBUS_USPTP_10"/>
+-	<value value="341" name="A7XX_DBGBUS_USPTP_11"/>
+-	<value value="396" name="A7XX_DBGBUS_CCHE_0"/>
+-	<value value="397" name="A7XX_DBGBUS_CCHE_1"/>
+-	<value value="398" name="A7XX_DBGBUS_CCHE_2"/>
+-	<value value="408" name="A7XX_DBGBUS_VPC_DSTR_0"/>
+-	<value value="409" name="A7XX_DBGBUS_VPC_DSTR_1"/>
+-	<value value="410" name="A7XX_DBGBUS_VPC_DSTR_2"/>
+-	<value value="411" name="A7XX_DBGBUS_HLSQ_DP_STR_0"/>
+-	<value value="412" name="A7XX_DBGBUS_HLSQ_DP_STR_1"/>
+-	<value value="413" name="A7XX_DBGBUS_HLSQ_DP_STR_2"/>
+-	<value value="414" name="A7XX_DBGBUS_HLSQ_DP_STR_3"/>
+-	<value value="415" name="A7XX_DBGBUS_HLSQ_DP_STR_4"/>
+-	<value value="416" name="A7XX_DBGBUS_HLSQ_DP_STR_5"/>
+-	<value value="443" name="A7XX_DBGBUS_UFC_DSTR_0"/>
+-	<value value="444" name="A7XX_DBGBUS_UFC_DSTR_1"/>
+-	<value value="445" name="A7XX_DBGBUS_UFC_DSTR_2"/>
+-	<value value="446" name="A7XX_DBGBUS_CGC_SUBCORE"/>
+-	<value value="447" name="A7XX_DBGBUS_CGC_CORE"/>
+-</enum>
+-
+-<enum name="a6xx_cp_perfcounter_select">
+-	<value value="0" name="PERF_CP_ALWAYS_COUNT"/>
+-	<value value="1" name="PERF_CP_BUSY_GFX_CORE_IDLE"/>
+-	<value value="2" name="PERF_CP_BUSY_CYCLES"/>
+-	<value value="3" name="PERF_CP_NUM_PREEMPTIONS"/>
+-	<value value="4" name="PERF_CP_PREEMPTION_REACTION_DELAY"/>
+-	<value value="5" name="PERF_CP_PREEMPTION_SWITCH_OUT_TIME"/>
+-	<value value="6" name="PERF_CP_PREEMPTION_SWITCH_IN_TIME"/>
+-	<value value="7" name="PERF_CP_DEAD_DRAWS_IN_BIN_RENDER"/>
+-	<value value="8" name="PERF_CP_PREDICATED_DRAWS_KILLED"/>
+-	<value value="9" name="PERF_CP_MODE_SWITCH"/>
+-	<value value="10" name="PERF_CP_ZPASS_DONE"/>
+-	<value value="11" name="PERF_CP_CONTEXT_DONE"/>
+-	<value value="12" name="PERF_CP_CACHE_FLUSH"/>
+-	<value value="13" name="PERF_CP_LONG_PREEMPTIONS"/>
+-	<value value="14" name="PERF_CP_SQE_I_CACHE_STARVE"/>
+-	<value value="15" name="PERF_CP_SQE_IDLE"/>
+-	<value value="16" name="PERF_CP_SQE_PM4_STARVE_RB_IB"/>
+-	<value value="17" name="PERF_CP_SQE_PM4_STARVE_SDS"/>
+-	<value value="18" name="PERF_CP_SQE_MRB_STARVE"/>
+-	<value value="19" name="PERF_CP_SQE_RRB_STARVE"/>
+-	<value value="20" name="PERF_CP_SQE_VSD_STARVE"/>
+-	<value value="21" name="PERF_CP_VSD_DECODE_STARVE"/>
+-	<value value="22" name="PERF_CP_SQE_PIPE_OUT_STALL"/>
+-	<value value="23" name="PERF_CP_SQE_SYNC_STALL"/>
+-	<value value="24" name="PERF_CP_SQE_PM4_WFI_STALL"/>
+-	<value value="25" name="PERF_CP_SQE_SYS_WFI_STALL"/>
+-	<value value="26" name="PERF_CP_SQE_T4_EXEC"/>
+-	<value value="27" name="PERF_CP_SQE_LOAD_STATE_EXEC"/>
+-	<value value="28" name="PERF_CP_SQE_SAVE_SDS_STATE"/>
+-	<value value="29" name="PERF_CP_SQE_DRAW_EXEC"/>
+-	<value value="30" name="PERF_CP_SQE_CTXT_REG_BUNCH_EXEC"/>
+-	<value value="31" name="PERF_CP_SQE_EXEC_PROFILED"/>
+-	<value value="32" name="PERF_CP_MEMORY_POOL_EMPTY"/>
+-	<value value="33" name="PERF_CP_MEMORY_POOL_SYNC_STALL"/>
+-	<value value="34" name="PERF_CP_MEMORY_POOL_ABOVE_THRESH"/>
+-	<value value="35" name="PERF_CP_AHB_WR_STALL_PRE_DRAWS"/>
+-	<value value="36" name="PERF_CP_AHB_STALL_SQE_GMU"/>
+-	<value value="37" name="PERF_CP_AHB_STALL_SQE_WR_OTHER"/>
+-	<value value="38" name="PERF_CP_AHB_STALL_SQE_RD_OTHER"/>
+-	<value value="39" name="PERF_CP_CLUSTER0_EMPTY"/>
+-	<value value="40" name="PERF_CP_CLUSTER1_EMPTY"/>
+-	<value value="41" name="PERF_CP_CLUSTER2_EMPTY"/>
+-	<value value="42" name="PERF_CP_CLUSTER3_EMPTY"/>
+-	<value value="43" name="PERF_CP_CLUSTER4_EMPTY"/>
+-	<value value="44" name="PERF_CP_CLUSTER5_EMPTY"/>
+-	<value value="45" name="PERF_CP_PM4_DATA"/>
+-	<value value="46" name="PERF_CP_PM4_HEADERS"/>
+-	<value value="47" name="PERF_CP_VBIF_READ_BEATS"/>
+-	<value value="48" name="PERF_CP_VBIF_WRITE_BEATS"/>
+-	<value value="49" name="PERF_CP_SQE_INSTR_COUNTER"/>
+-</enum>
+-
+-<enum name="a6xx_rbbm_perfcounter_select">
+-	<value value="0" name="PERF_RBBM_ALWAYS_COUNT"/>
+-	<value value="1" name="PERF_RBBM_ALWAYS_ON"/>
+-	<value value="2" name="PERF_RBBM_TSE_BUSY"/>
+-	<value value="3" name="PERF_RBBM_RAS_BUSY"/>
+-	<value value="4" name="PERF_RBBM_PC_DCALL_BUSY"/>
+-	<value value="5" name="PERF_RBBM_PC_VSD_BUSY"/>
+-	<value value="6" name="PERF_RBBM_STATUS_MASKED"/>
+-	<value value="7" name="PERF_RBBM_COM_BUSY"/>
+-	<value value="8" name="PERF_RBBM_DCOM_BUSY"/>
+-	<value value="9" name="PERF_RBBM_VBIF_BUSY"/>
+-	<value value="10" name="PERF_RBBM_VSC_BUSY"/>
+-	<value value="11" name="PERF_RBBM_TESS_BUSY"/>
+-	<value value="12" name="PERF_RBBM_UCHE_BUSY"/>
+-	<value value="13" name="PERF_RBBM_HLSQ_BUSY"/>
+-</enum>
+-
+-<enum name="a6xx_pc_perfcounter_select">
+-	<value value="0" name="PERF_PC_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_PC_WORKING_CYCLES"/>
+-	<value value="2" name="PERF_PC_STALL_CYCLES_VFD"/>
+-	<value value="3" name="PERF_PC_STALL_CYCLES_TSE"/>
+-	<value value="4" name="PERF_PC_STALL_CYCLES_VPC"/>
+-	<value value="5" name="PERF_PC_STALL_CYCLES_UCHE"/>
+-	<value value="6" name="PERF_PC_STALL_CYCLES_TESS"/>
+-	<value value="7" name="PERF_PC_STALL_CYCLES_TSE_ONLY"/>
+-	<value value="8" name="PERF_PC_STALL_CYCLES_VPC_ONLY"/>
+-	<value value="9" name="PERF_PC_PASS1_TF_STALL_CYCLES"/>
+-	<value value="10" name="PERF_PC_STARVE_CYCLES_FOR_INDEX"/>
+-	<value value="11" name="PERF_PC_STARVE_CYCLES_FOR_TESS_FACTOR"/>
+-	<value value="12" name="PERF_PC_STARVE_CYCLES_FOR_VIZ_STREAM"/>
+-	<value value="13" name="PERF_PC_STARVE_CYCLES_FOR_POSITION"/>
+-	<value value="14" name="PERF_PC_STARVE_CYCLES_DI"/>
+-	<value value="15" name="PERF_PC_VIS_STREAMS_LOADED"/>
+-	<value value="16" name="PERF_PC_INSTANCES"/>
+-	<value value="17" name="PERF_PC_VPC_PRIMITIVES"/>
+-	<value value="18" name="PERF_PC_DEAD_PRIM"/>
+-	<value value="19" name="PERF_PC_LIVE_PRIM"/>
+-	<value value="20" name="PERF_PC_VERTEX_HITS"/>
+-	<value value="21" name="PERF_PC_IA_VERTICES"/>
+-	<value value="22" name="PERF_PC_IA_PRIMITIVES"/>
+-	<value value="23" name="PERF_PC_GS_PRIMITIVES"/>
+-	<value value="24" name="PERF_PC_HS_INVOCATIONS"/>
+-	<value value="25" name="PERF_PC_DS_INVOCATIONS"/>
+-	<value value="26" name="PERF_PC_VS_INVOCATIONS"/>
+-	<value value="27" name="PERF_PC_GS_INVOCATIONS"/>
+-	<value value="28" name="PERF_PC_DS_PRIMITIVES"/>
+-	<value value="29" name="PERF_PC_VPC_POS_DATA_TRANSACTION"/>
+-	<value value="30" name="PERF_PC_3D_DRAWCALLS"/>
+-	<value value="31" name="PERF_PC_2D_DRAWCALLS"/>
+-	<value value="32" name="PERF_PC_NON_DRAWCALL_GLOBAL_EVENTS"/>
+-	<value value="33" name="PERF_TESS_BUSY_CYCLES"/>
+-	<value value="34" name="PERF_TESS_WORKING_CYCLES"/>
+-	<value value="35" name="PERF_TESS_STALL_CYCLES_PC"/>
+-	<value value="36" name="PERF_TESS_STARVE_CYCLES_PC"/>
+-	<value value="37" name="PERF_PC_TSE_TRANSACTION"/>
+-	<value value="38" name="PERF_PC_TSE_VERTEX"/>
+-	<value value="39" name="PERF_PC_TESS_PC_UV_TRANS"/>
+-	<value value="40" name="PERF_PC_TESS_PC_UV_PATCHES"/>
+-	<value value="41" name="PERF_PC_TESS_FACTOR_TRANS"/>
+-</enum>
+-
+-<enum name="a6xx_vfd_perfcounter_select">
+-	<value value="0" name="PERF_VFD_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_VFD_STALL_CYCLES_UCHE"/>
+-	<value value="2" name="PERF_VFD_STALL_CYCLES_VPC_ALLOC"/>
+-	<value value="3" name="PERF_VFD_STALL_CYCLES_SP_INFO"/>
+-	<value value="4" name="PERF_VFD_STALL_CYCLES_SP_ATTR"/>
+-	<value value="5" name="PERF_VFD_STARVE_CYCLES_UCHE"/>
+-	<value value="6" name="PERF_VFD_RBUFFER_FULL"/>
+-	<value value="7" name="PERF_VFD_ATTR_INFO_FIFO_FULL"/>
+-	<value value="8" name="PERF_VFD_DECODED_ATTRIBUTE_BYTES"/>
+-	<value value="9" name="PERF_VFD_NUM_ATTRIBUTES"/>
+-	<value value="10" name="PERF_VFD_UPPER_SHADER_FIBERS"/>
+-	<value value="11" name="PERF_VFD_LOWER_SHADER_FIBERS"/>
+-	<value value="12" name="PERF_VFD_MODE_0_FIBERS"/>
+-	<value value="13" name="PERF_VFD_MODE_1_FIBERS"/>
+-	<value value="14" name="PERF_VFD_MODE_2_FIBERS"/>
+-	<value value="15" name="PERF_VFD_MODE_3_FIBERS"/>
+-	<value value="16" name="PERF_VFD_MODE_4_FIBERS"/>
+-	<value value="17" name="PERF_VFD_TOTAL_VERTICES"/>
+-	<value value="18" name="PERF_VFDP_STALL_CYCLES_VFD"/>
+-	<value value="19" name="PERF_VFDP_STALL_CYCLES_VFD_INDEX"/>
+-	<value value="20" name="PERF_VFDP_STALL_CYCLES_VFD_PROG"/>
+-	<value value="21" name="PERF_VFDP_STARVE_CYCLES_PC"/>
+-	<value value="22" name="PERF_VFDP_VS_STAGE_WAVES"/>
+-</enum>
+-
+-<enum name="a6xx_hlsq_perfcounter_select">
+-	<value value="0" name="PERF_HLSQ_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_HLSQ_STALL_CYCLES_UCHE"/>
+-	<value value="2" name="PERF_HLSQ_STALL_CYCLES_SP_STATE"/>
+-	<value value="3" name="PERF_HLSQ_STALL_CYCLES_SP_FS_STAGE"/>
+-	<value value="4" name="PERF_HLSQ_UCHE_LATENCY_CYCLES"/>
+-	<value value="5" name="PERF_HLSQ_UCHE_LATENCY_COUNT"/>
+-	<value value="6" name="PERF_HLSQ_FS_STAGE_1X_WAVES"/>
+-	<value value="7" name="PERF_HLSQ_FS_STAGE_2X_WAVES"/>
+-	<value value="8" name="PERF_HLSQ_QUADS"/>
+-	<value value="9" name="PERF_HLSQ_CS_INVOCATIONS"/>
+-	<value value="10" name="PERF_HLSQ_COMPUTE_DRAWCALLS"/>
+-	<value value="11" name="PERF_HLSQ_FS_DATA_WAIT_PROGRAMMING"/>
+-	<value value="12" name="PERF_HLSQ_DUAL_FS_PROG_ACTIVE"/>
+-	<value value="13" name="PERF_HLSQ_DUAL_VS_PROG_ACTIVE"/>
+-	<value value="14" name="PERF_HLSQ_FS_BATCH_COUNT_ZERO"/>
+-	<value value="15" name="PERF_HLSQ_VS_BATCH_COUNT_ZERO"/>
+-	<value value="16" name="PERF_HLSQ_WAVE_PENDING_NO_QUAD"/>
+-	<value value="17" name="PERF_HLSQ_WAVE_PENDING_NO_PRIM_BASE"/>
+-	<value value="18" name="PERF_HLSQ_STALL_CYCLES_VPC"/>
+-	<value value="19" name="PERF_HLSQ_PIXELS"/>
+-	<value value="20" name="PERF_HLSQ_DRAW_MODE_SWITCH_VSFS_SYNC"/>
+-</enum>
+-
+-<enum name="a6xx_vpc_perfcounter_select">
+-	<value value="0" name="PERF_VPC_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_VPC_WORKING_CYCLES"/>
+-	<value value="2" name="PERF_VPC_STALL_CYCLES_UCHE"/>
+-	<value value="3" name="PERF_VPC_STALL_CYCLES_VFD_WACK"/>
+-	<value value="4" name="PERF_VPC_STALL_CYCLES_HLSQ_PRIM_ALLOC"/>
+-	<value value="5" name="PERF_VPC_STALL_CYCLES_PC"/>
+-	<value value="6" name="PERF_VPC_STALL_CYCLES_SP_LM"/>
+-	<value value="7" name="PERF_VPC_STARVE_CYCLES_SP"/>
+-	<value value="8" name="PERF_VPC_STARVE_CYCLES_LRZ"/>
+-	<value value="9" name="PERF_VPC_PC_PRIMITIVES"/>
+-	<value value="10" name="PERF_VPC_SP_COMPONENTS"/>
+-	<value value="11" name="PERF_VPC_STALL_CYCLES_VPCRAM_POS"/>
+-	<value value="12" name="PERF_VPC_LRZ_ASSIGN_PRIMITIVES"/>
+-	<value value="13" name="PERF_VPC_RB_VISIBLE_PRIMITIVES"/>
+-	<value value="14" name="PERF_VPC_LM_TRANSACTION"/>
+-	<value value="15" name="PERF_VPC_STREAMOUT_TRANSACTION"/>
+-	<value value="16" name="PERF_VPC_VS_BUSY_CYCLES"/>
+-	<value value="17" name="PERF_VPC_PS_BUSY_CYCLES"/>
+-	<value value="18" name="PERF_VPC_VS_WORKING_CYCLES"/>
+-	<value value="19" name="PERF_VPC_PS_WORKING_CYCLES"/>
+-	<value value="20" name="PERF_VPC_STARVE_CYCLES_RB"/>
+-	<value value="21" name="PERF_VPC_NUM_VPCRAM_READ_POS"/>
+-	<value value="22" name="PERF_VPC_WIT_FULL_CYCLES"/>
+-	<value value="23" name="PERF_VPC_VPCRAM_FULL_CYCLES"/>
+-	<value value="24" name="PERF_VPC_LM_FULL_WAIT_FOR_INTP_END"/>
+-	<value value="25" name="PERF_VPC_NUM_VPCRAM_WRITE"/>
+-	<value value="26" name="PERF_VPC_NUM_VPCRAM_READ_SO"/>
+-	<value value="27" name="PERF_VPC_NUM_ATTR_REQ_LM"/>
+-</enum>
+-
+-<enum name="a6xx_tse_perfcounter_select">
+-	<value value="0" name="PERF_TSE_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_TSE_CLIPPING_CYCLES"/>
+-	<value value="2" name="PERF_TSE_STALL_CYCLES_RAS"/>
+-	<value value="3" name="PERF_TSE_STALL_CYCLES_LRZ_BARYPLANE"/>
+-	<value value="4" name="PERF_TSE_STALL_CYCLES_LRZ_ZPLANE"/>
+-	<value value="5" name="PERF_TSE_STARVE_CYCLES_PC"/>
+-	<value value="6" name="PERF_TSE_INPUT_PRIM"/>
+-	<value value="7" name="PERF_TSE_INPUT_NULL_PRIM"/>
+-	<value value="8" name="PERF_TSE_TRIVAL_REJ_PRIM"/>
+-	<value value="9" name="PERF_TSE_CLIPPED_PRIM"/>
+-	<value value="10" name="PERF_TSE_ZERO_AREA_PRIM"/>
+-	<value value="11" name="PERF_TSE_FACENESS_CULLED_PRIM"/>
+-	<value value="12" name="PERF_TSE_ZERO_PIXEL_PRIM"/>
+-	<value value="13" name="PERF_TSE_OUTPUT_NULL_PRIM"/>
+-	<value value="14" name="PERF_TSE_OUTPUT_VISIBLE_PRIM"/>
+-	<value value="15" name="PERF_TSE_CINVOCATION"/>
+-	<value value="16" name="PERF_TSE_CPRIMITIVES"/>
+-	<value value="17" name="PERF_TSE_2D_INPUT_PRIM"/>
+-	<value value="18" name="PERF_TSE_2D_ALIVE_CYCLES"/>
+-	<value value="19" name="PERF_TSE_CLIP_PLANES"/>
+-</enum>
+-
+-<enum name="a6xx_ras_perfcounter_select">
+-	<value value="0" name="PERF_RAS_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_RAS_SUPERTILE_ACTIVE_CYCLES"/>
+-	<value value="2" name="PERF_RAS_STALL_CYCLES_LRZ"/>
+-	<value value="3" name="PERF_RAS_STARVE_CYCLES_TSE"/>
+-	<value value="4" name="PERF_RAS_SUPER_TILES"/>
+-	<value value="5" name="PERF_RAS_8X4_TILES"/>
+-	<value value="6" name="PERF_RAS_MASKGEN_ACTIVE"/>
+-	<value value="7" name="PERF_RAS_FULLY_COVERED_SUPER_TILES"/>
+-	<value value="8" name="PERF_RAS_FULLY_COVERED_8X4_TILES"/>
+-	<value value="9" name="PERF_RAS_PRIM_KILLED_INVISILBE"/>
+-	<value value="10" name="PERF_RAS_SUPERTILE_GEN_ACTIVE_CYCLES"/>
+-	<value value="11" name="PERF_RAS_LRZ_INTF_WORKING_CYCLES"/>
+-	<value value="12" name="PERF_RAS_BLOCKS"/>
+-</enum>
+-
+-<enum name="a6xx_uche_perfcounter_select">
+-	<value value="0" name="PERF_UCHE_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_UCHE_STALL_CYCLES_ARBITER"/>
+-	<value value="2" name="PERF_UCHE_VBIF_LATENCY_CYCLES"/>
+-	<value value="3" name="PERF_UCHE_VBIF_LATENCY_SAMPLES"/>
+-	<value value="4" name="PERF_UCHE_VBIF_READ_BEATS_TP"/>
+-	<value value="5" name="PERF_UCHE_VBIF_READ_BEATS_VFD"/>
+-	<value value="6" name="PERF_UCHE_VBIF_READ_BEATS_HLSQ"/>
+-	<value value="7" name="PERF_UCHE_VBIF_READ_BEATS_LRZ"/>
+-	<value value="8" name="PERF_UCHE_VBIF_READ_BEATS_SP"/>
+-	<value value="9" name="PERF_UCHE_READ_REQUESTS_TP"/>
+-	<value value="10" name="PERF_UCHE_READ_REQUESTS_VFD"/>
+-	<value value="11" name="PERF_UCHE_READ_REQUESTS_HLSQ"/>
+-	<value value="12" name="PERF_UCHE_READ_REQUESTS_LRZ"/>
+-	<value value="13" name="PERF_UCHE_READ_REQUESTS_SP"/>
+-	<value value="14" name="PERF_UCHE_WRITE_REQUESTS_LRZ"/>
+-	<value value="15" name="PERF_UCHE_WRITE_REQUESTS_SP"/>
+-	<value value="16" name="PERF_UCHE_WRITE_REQUESTS_VPC"/>
+-	<value value="17" name="PERF_UCHE_WRITE_REQUESTS_VSC"/>
+-	<value value="18" name="PERF_UCHE_EVICTS"/>
+-	<value value="19" name="PERF_UCHE_BANK_REQ0"/>
+-	<value value="20" name="PERF_UCHE_BANK_REQ1"/>
+-	<value value="21" name="PERF_UCHE_BANK_REQ2"/>
+-	<value value="22" name="PERF_UCHE_BANK_REQ3"/>
+-	<value value="23" name="PERF_UCHE_BANK_REQ4"/>
+-	<value value="24" name="PERF_UCHE_BANK_REQ5"/>
+-	<value value="25" name="PERF_UCHE_BANK_REQ6"/>
+-	<value value="26" name="PERF_UCHE_BANK_REQ7"/>
+-	<value value="27" name="PERF_UCHE_VBIF_READ_BEATS_CH0"/>
+-	<value value="28" name="PERF_UCHE_VBIF_READ_BEATS_CH1"/>
+-	<value value="29" name="PERF_UCHE_GMEM_READ_BEATS"/>
+-	<value value="30" name="PERF_UCHE_TPH_REF_FULL"/>
+-	<value value="31" name="PERF_UCHE_TPH_VICTIM_FULL"/>
+-	<value value="32" name="PERF_UCHE_TPH_EXT_FULL"/>
+-	<value value="33" name="PERF_UCHE_VBIF_STALL_WRITE_DATA"/>
+-	<value value="34" name="PERF_UCHE_DCMP_LATENCY_SAMPLES"/>
+-	<value value="35" name="PERF_UCHE_DCMP_LATENCY_CYCLES"/>
+-	<value value="36" name="PERF_UCHE_VBIF_READ_BEATS_PC"/>
+-	<value value="37" name="PERF_UCHE_READ_REQUESTS_PC"/>
+-	<value value="38" name="PERF_UCHE_RAM_READ_REQ"/>
+-	<value value="39" name="PERF_UCHE_RAM_WRITE_REQ"/>
+-</enum>
+-
+-<enum name="a6xx_tp_perfcounter_select">
+-	<value value="0" name="PERF_TP_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_TP_STALL_CYCLES_UCHE"/>
+-	<value value="2" name="PERF_TP_LATENCY_CYCLES"/>
+-	<value value="3" name="PERF_TP_LATENCY_TRANS"/>
+-	<value value="4" name="PERF_TP_FLAG_CACHE_REQUEST_SAMPLES"/>
+-	<value value="5" name="PERF_TP_FLAG_CACHE_REQUEST_LATENCY"/>
+-	<value value="6" name="PERF_TP_L1_CACHELINE_REQUESTS"/>
+-	<value value="7" name="PERF_TP_L1_CACHELINE_MISSES"/>
+-	<value value="8" name="PERF_TP_SP_TP_TRANS"/>
+-	<value value="9" name="PERF_TP_TP_SP_TRANS"/>
+-	<value value="10" name="PERF_TP_OUTPUT_PIXELS"/>
+-	<value value="11" name="PERF_TP_FILTER_WORKLOAD_16BIT"/>
+-	<value value="12" name="PERF_TP_FILTER_WORKLOAD_32BIT"/>
+-	<value value="13" name="PERF_TP_QUADS_RECEIVED"/>
+-	<value value="14" name="PERF_TP_QUADS_OFFSET"/>
+-	<value value="15" name="PERF_TP_QUADS_SHADOW"/>
+-	<value value="16" name="PERF_TP_QUADS_ARRAY"/>
+-	<value value="17" name="PERF_TP_QUADS_GRADIENT"/>
+-	<value value="18" name="PERF_TP_QUADS_1D"/>
+-	<value value="19" name="PERF_TP_QUADS_2D"/>
+-	<value value="20" name="PERF_TP_QUADS_BUFFER"/>
+-	<value value="21" name="PERF_TP_QUADS_3D"/>
+-	<value value="22" name="PERF_TP_QUADS_CUBE"/>
+-	<value value="23" name="PERF_TP_DIVERGENT_QUADS_RECEIVED"/>
+-	<value value="24" name="PERF_TP_PRT_NON_RESIDENT_EVENTS"/>
+-	<value value="25" name="PERF_TP_OUTPUT_PIXELS_POINT"/>
+-	<value value="26" name="PERF_TP_OUTPUT_PIXELS_BILINEAR"/>
+-	<value value="27" name="PERF_TP_OUTPUT_PIXELS_MIP"/>
+-	<value value="28" name="PERF_TP_OUTPUT_PIXELS_ANISO"/>
+-	<value value="29" name="PERF_TP_OUTPUT_PIXELS_ZERO_LOD"/>
+-	<value value="30" name="PERF_TP_FLAG_CACHE_REQUESTS"/>
+-	<value value="31" name="PERF_TP_FLAG_CACHE_MISSES"/>
+-	<value value="32" name="PERF_TP_L1_5_L2_REQUESTS"/>
+-	<value value="33" name="PERF_TP_2D_OUTPUT_PIXELS"/>
+-	<value value="34" name="PERF_TP_2D_OUTPUT_PIXELS_POINT"/>
+-	<value value="35" name="PERF_TP_2D_OUTPUT_PIXELS_BILINEAR"/>
+-	<value value="36" name="PERF_TP_2D_FILTER_WORKLOAD_16BIT"/>
+-	<value value="37" name="PERF_TP_2D_FILTER_WORKLOAD_32BIT"/>
+-	<value value="38" name="PERF_TP_TPA2TPC_TRANS"/>
+-	<value value="39" name="PERF_TP_L1_MISSES_ASTC_1TILE"/>
+-	<value value="40" name="PERF_TP_L1_MISSES_ASTC_2TILE"/>
+-	<value value="41" name="PERF_TP_L1_MISSES_ASTC_4TILE"/>
+-	<value value="42" name="PERF_TP_L1_5_L2_COMPRESS_REQS"/>
+-	<value value="43" name="PERF_TP_L1_5_L2_COMPRESS_MISS"/>
+-	<value value="44" name="PERF_TP_L1_BANK_CONFLICT"/>
+-	<value value="45" name="PERF_TP_L1_5_MISS_LATENCY_CYCLES"/>
+-	<value value="46" name="PERF_TP_L1_5_MISS_LATENCY_TRANS"/>
+-	<value value="47" name="PERF_TP_QUADS_CONSTANT_MULTIPLIED"/>
+-	<value value="48" name="PERF_TP_FRONTEND_WORKING_CYCLES"/>
+-	<value value="49" name="PERF_TP_L1_TAG_WORKING_CYCLES"/>
+-	<value value="50" name="PERF_TP_L1_DATA_WRITE_WORKING_CYCLES"/>
+-	<value value="51" name="PERF_TP_PRE_L1_DECOM_WORKING_CYCLES"/>
+-	<value value="52" name="PERF_TP_BACKEND_WORKING_CYCLES"/>
+-	<value value="53" name="PERF_TP_FLAG_CACHE_WORKING_CYCLES"/>
+-	<value value="54" name="PERF_TP_L1_5_CACHE_WORKING_CYCLES"/>
+-	<value value="55" name="PERF_TP_STARVE_CYCLES_SP"/>
+-	<value value="56" name="PERF_TP_STARVE_CYCLES_UCHE"/>
+-</enum>
+-
+-<enum name="a6xx_sp_perfcounter_select">
+-	<value value="0" name="PERF_SP_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_SP_ALU_WORKING_CYCLES"/>
+-	<value value="2" name="PERF_SP_EFU_WORKING_CYCLES"/>
+-	<value value="3" name="PERF_SP_STALL_CYCLES_VPC"/>
+-	<value value="4" name="PERF_SP_STALL_CYCLES_TP"/>
+-	<value value="5" name="PERF_SP_STALL_CYCLES_UCHE"/>
+-	<value value="6" name="PERF_SP_STALL_CYCLES_RB"/>
+-	<value value="7" name="PERF_SP_NON_EXECUTION_CYCLES"/>
+-	<value value="8" name="PERF_SP_WAVE_CONTEXTS"/>
+-	<value value="9" name="PERF_SP_WAVE_CONTEXT_CYCLES"/>
+-	<value value="10" name="PERF_SP_FS_STAGE_WAVE_CYCLES"/>
+-	<value value="11" name="PERF_SP_FS_STAGE_WAVE_SAMPLES"/>
+-	<value value="12" name="PERF_SP_VS_STAGE_WAVE_CYCLES"/>
+-	<value value="13" name="PERF_SP_VS_STAGE_WAVE_SAMPLES"/>
+-	<value value="14" name="PERF_SP_FS_STAGE_DURATION_CYCLES"/>
+-	<value value="15" name="PERF_SP_VS_STAGE_DURATION_CYCLES"/>
+-	<value value="16" name="PERF_SP_WAVE_CTRL_CYCLES"/>
+-	<value value="17" name="PERF_SP_WAVE_LOAD_CYCLES"/>
+-	<value value="18" name="PERF_SP_WAVE_EMIT_CYCLES"/>
+-	<value value="19" name="PERF_SP_WAVE_NOP_CYCLES"/>
+-	<value value="20" name="PERF_SP_WAVE_WAIT_CYCLES"/>
+-	<value value="21" name="PERF_SP_WAVE_FETCH_CYCLES"/>
+-	<value value="22" name="PERF_SP_WAVE_IDLE_CYCLES"/>
+-	<value value="23" name="PERF_SP_WAVE_END_CYCLES"/>
+-	<value value="24" name="PERF_SP_WAVE_LONG_SYNC_CYCLES"/>
+-	<value value="25" name="PERF_SP_WAVE_SHORT_SYNC_CYCLES"/>
+-	<value value="26" name="PERF_SP_WAVE_JOIN_CYCLES"/>
+-	<value value="27" name="PERF_SP_LM_LOAD_INSTRUCTIONS"/>
+-	<value value="28" name="PERF_SP_LM_STORE_INSTRUCTIONS"/>
+-	<value value="29" name="PERF_SP_LM_ATOMICS"/>
+-	<value value="30" name="PERF_SP_GM_LOAD_INSTRUCTIONS"/>
+-	<value value="31" name="PERF_SP_GM_STORE_INSTRUCTIONS"/>
+-	<value value="32" name="PERF_SP_GM_ATOMICS"/>
+-	<value value="33" name="PERF_SP_VS_STAGE_TEX_INSTRUCTIONS"/>
+-	<value value="34" name="PERF_SP_VS_STAGE_EFU_INSTRUCTIONS"/>
+-	<value value="35" name="PERF_SP_VS_STAGE_FULL_ALU_INSTRUCTIONS"/>
+-	<value value="36" name="PERF_SP_VS_STAGE_HALF_ALU_INSTRUCTIONS"/>
+-	<value value="37" name="PERF_SP_FS_STAGE_TEX_INSTRUCTIONS"/>
+-	<value value="38" name="PERF_SP_FS_STAGE_CFLOW_INSTRUCTIONS"/>
+-	<value value="39" name="PERF_SP_FS_STAGE_EFU_INSTRUCTIONS"/>
+-	<value value="40" name="PERF_SP_FS_STAGE_FULL_ALU_INSTRUCTIONS"/>
+-	<value value="41" name="PERF_SP_FS_STAGE_HALF_ALU_INSTRUCTIONS"/>
+-	<value value="42" name="PERF_SP_FS_STAGE_BARY_INSTRUCTIONS"/>
+-	<value value="43" name="PERF_SP_VS_INSTRUCTIONS"/>
+-	<value value="44" name="PERF_SP_FS_INSTRUCTIONS"/>
+-	<value value="45" name="PERF_SP_ADDR_LOCK_COUNT"/>
+-	<value value="46" name="PERF_SP_UCHE_READ_TRANS"/>
+-	<value value="47" name="PERF_SP_UCHE_WRITE_TRANS"/>
+-	<value value="48" name="PERF_SP_EXPORT_VPC_TRANS"/>
+-	<value value="49" name="PERF_SP_EXPORT_RB_TRANS"/>
+-	<value value="50" name="PERF_SP_PIXELS_KILLED"/>
+-	<value value="51" name="PERF_SP_ICL1_REQUESTS"/>
+-	<value value="52" name="PERF_SP_ICL1_MISSES"/>
+-	<value value="53" name="PERF_SP_HS_INSTRUCTIONS"/>
+-	<value value="54" name="PERF_SP_DS_INSTRUCTIONS"/>
+-	<value value="55" name="PERF_SP_GS_INSTRUCTIONS"/>
+-	<value value="56" name="PERF_SP_CS_INSTRUCTIONS"/>
+-	<value value="57" name="PERF_SP_GPR_READ"/>
+-	<value value="58" name="PERF_SP_GPR_WRITE"/>
+-	<value value="59" name="PERF_SP_FS_STAGE_HALF_EFU_INSTRUCTIONS"/>
+-	<value value="60" name="PERF_SP_VS_STAGE_HALF_EFU_INSTRUCTIONS"/>
+-	<value value="61" name="PERF_SP_LM_BANK_CONFLICTS"/>
+-	<value value="62" name="PERF_SP_TEX_CONTROL_WORKING_CYCLES"/>
+-	<value value="63" name="PERF_SP_LOAD_CONTROL_WORKING_CYCLES"/>
+-	<value value="64" name="PERF_SP_FLOW_CONTROL_WORKING_CYCLES"/>
+-	<value value="65" name="PERF_SP_LM_WORKING_CYCLES"/>
+-	<value value="66" name="PERF_SP_DISPATCHER_WORKING_CYCLES"/>
+-	<value value="67" name="PERF_SP_SEQUENCER_WORKING_CYCLES"/>
+-	<value value="68" name="PERF_SP_LOW_EFFICIENCY_STARVED_BY_TP"/>
+-	<value value="69" name="PERF_SP_STARVE_CYCLES_HLSQ"/>
+-	<value value="70" name="PERF_SP_NON_EXECUTION_LS_CYCLES"/>
+-	<value value="71" name="PERF_SP_WORKING_EU"/>
+-	<value value="72" name="PERF_SP_ANY_EU_WORKING"/>
+-	<value value="73" name="PERF_SP_WORKING_EU_FS_STAGE"/>
+-	<value value="74" name="PERF_SP_ANY_EU_WORKING_FS_STAGE"/>
+-	<value value="75" name="PERF_SP_WORKING_EU_VS_STAGE"/>
+-	<value value="76" name="PERF_SP_ANY_EU_WORKING_VS_STAGE"/>
+-	<value value="77" name="PERF_SP_WORKING_EU_CS_STAGE"/>
+-	<value value="78" name="PERF_SP_ANY_EU_WORKING_CS_STAGE"/>
+-	<value value="79" name="PERF_SP_GPR_READ_PREFETCH"/>
+-	<value value="80" name="PERF_SP_GPR_READ_CONFLICT"/>
+-	<value value="81" name="PERF_SP_GPR_WRITE_CONFLICT"/>
+-	<value value="82" name="PERF_SP_GM_LOAD_LATENCY_CYCLES"/>
+-	<value value="83" name="PERF_SP_GM_LOAD_LATENCY_SAMPLES"/>
+-	<value value="84" name="PERF_SP_EXECUTABLE_WAVES"/>
+-</enum>
+-
+-<enum name="a6xx_rb_perfcounter_select">
+-	<value value="0" name="PERF_RB_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_RB_STALL_CYCLES_HLSQ"/>
+-	<value value="2" name="PERF_RB_STALL_CYCLES_FIFO0_FULL"/>
+-	<value value="3" name="PERF_RB_STALL_CYCLES_FIFO1_FULL"/>
+-	<value value="4" name="PERF_RB_STALL_CYCLES_FIFO2_FULL"/>
+-	<value value="5" name="PERF_RB_STARVE_CYCLES_SP"/>
+-	<value value="6" name="PERF_RB_STARVE_CYCLES_LRZ_TILE"/>
+-	<value value="7" name="PERF_RB_STARVE_CYCLES_CCU"/>
+-	<value value="8" name="PERF_RB_STARVE_CYCLES_Z_PLANE"/>
+-	<value value="9" name="PERF_RB_STARVE_CYCLES_BARY_PLANE"/>
+-	<value value="10" name="PERF_RB_Z_WORKLOAD"/>
+-	<value value="11" name="PERF_RB_HLSQ_ACTIVE"/>
+-	<value value="12" name="PERF_RB_Z_READ"/>
+-	<value value="13" name="PERF_RB_Z_WRITE"/>
+-	<value value="14" name="PERF_RB_C_READ"/>
+-	<value value="15" name="PERF_RB_C_WRITE"/>
+-	<value value="16" name="PERF_RB_TOTAL_PASS"/>
+-	<value value="17" name="PERF_RB_Z_PASS"/>
+-	<value value="18" name="PERF_RB_Z_FAIL"/>
+-	<value value="19" name="PERF_RB_S_FAIL"/>
+-	<value value="20" name="PERF_RB_BLENDED_FXP_COMPONENTS"/>
+-	<value value="21" name="PERF_RB_BLENDED_FP16_COMPONENTS"/>
+-	<value value="22" name="PERF_RB_PS_INVOCATIONS"/>
+-	<value value="23" name="PERF_RB_2D_ALIVE_CYCLES"/>
+-	<value value="24" name="PERF_RB_2D_STALL_CYCLES_A2D"/>
+-	<value value="25" name="PERF_RB_2D_STARVE_CYCLES_SRC"/>
+-	<value value="26" name="PERF_RB_2D_STARVE_CYCLES_SP"/>
+-	<value value="27" name="PERF_RB_2D_STARVE_CYCLES_DST"/>
+-	<value value="28" name="PERF_RB_2D_VALID_PIXELS"/>
+-	<value value="29" name="PERF_RB_3D_PIXELS"/>
+-	<value value="30" name="PERF_RB_BLENDER_WORKING_CYCLES"/>
+-	<value value="31" name="PERF_RB_ZPROC_WORKING_CYCLES"/>
+-	<value value="32" name="PERF_RB_CPROC_WORKING_CYCLES"/>
+-	<value value="33" name="PERF_RB_SAMPLER_WORKING_CYCLES"/>
+-	<value value="34" name="PERF_RB_STALL_CYCLES_CCU_COLOR_READ"/>
+-	<value value="35" name="PERF_RB_STALL_CYCLES_CCU_COLOR_WRITE"/>
+-	<value value="36" name="PERF_RB_STALL_CYCLES_CCU_DEPTH_READ"/>
+-	<value value="37" name="PERF_RB_STALL_CYCLES_CCU_DEPTH_WRITE"/>
+-	<value value="38" name="PERF_RB_STALL_CYCLES_VPC"/>
+-	<value value="39" name="PERF_RB_2D_INPUT_TRANS"/>
+-	<value value="40" name="PERF_RB_2D_OUTPUT_RB_DST_TRANS"/>
+-	<value value="41" name="PERF_RB_2D_OUTPUT_RB_SRC_TRANS"/>
+-	<value value="42" name="PERF_RB_BLENDED_FP32_COMPONENTS"/>
+-	<value value="43" name="PERF_RB_COLOR_PIX_TILES"/>
+-	<value value="44" name="PERF_RB_STALL_CYCLES_CCU"/>
+-	<value value="45" name="PERF_RB_EARLY_Z_ARB3_GRANT"/>
+-	<value value="46" name="PERF_RB_LATE_Z_ARB3_GRANT"/>
+-	<value value="47" name="PERF_RB_EARLY_Z_SKIP_GRANT"/>
+-</enum>
+-
+-<enum name="a6xx_vsc_perfcounter_select">
+-	<value value="0" name="PERF_VSC_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_VSC_WORKING_CYCLES"/>
+-	<value value="2" name="PERF_VSC_STALL_CYCLES_UCHE"/>
+-	<value value="3" name="PERF_VSC_EOT_NUM"/>
+-	<value value="4" name="PERF_VSC_INPUT_TILES"/>
+-</enum>
+-
+-<enum name="a6xx_ccu_perfcounter_select">
+-	<value value="0" name="PERF_CCU_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_CCU_STALL_CYCLES_RB_DEPTH_RETURN"/>
+-	<value value="2" name="PERF_CCU_STALL_CYCLES_RB_COLOR_RETURN"/>
+-	<value value="3" name="PERF_CCU_STARVE_CYCLES_FLAG_RETURN"/>
+-	<value value="4" name="PERF_CCU_DEPTH_BLOCKS"/>
+-	<value value="5" name="PERF_CCU_COLOR_BLOCKS"/>
+-	<value value="6" name="PERF_CCU_DEPTH_BLOCK_HIT"/>
+-	<value value="7" name="PERF_CCU_COLOR_BLOCK_HIT"/>
+-	<value value="8" name="PERF_CCU_PARTIAL_BLOCK_READ"/>
+-	<value value="9" name="PERF_CCU_GMEM_READ"/>
+-	<value value="10" name="PERF_CCU_GMEM_WRITE"/>
+-	<value value="11" name="PERF_CCU_DEPTH_READ_FLAG0_COUNT"/>
+-	<value value="12" name="PERF_CCU_DEPTH_READ_FLAG1_COUNT"/>
+-	<value value="13" name="PERF_CCU_DEPTH_READ_FLAG2_COUNT"/>
+-	<value value="14" name="PERF_CCU_DEPTH_READ_FLAG3_COUNT"/>
+-	<value value="15" name="PERF_CCU_DEPTH_READ_FLAG4_COUNT"/>
+-	<value value="16" name="PERF_CCU_DEPTH_READ_FLAG5_COUNT"/>
+-	<value value="17" name="PERF_CCU_DEPTH_READ_FLAG6_COUNT"/>
+-	<value value="18" name="PERF_CCU_DEPTH_READ_FLAG8_COUNT"/>
+-	<value value="19" name="PERF_CCU_COLOR_READ_FLAG0_COUNT"/>
+-	<value value="20" name="PERF_CCU_COLOR_READ_FLAG1_COUNT"/>
+-	<value value="21" name="PERF_CCU_COLOR_READ_FLAG2_COUNT"/>
+-	<value value="22" name="PERF_CCU_COLOR_READ_FLAG3_COUNT"/>
+-	<value value="23" name="PERF_CCU_COLOR_READ_FLAG4_COUNT"/>
+-	<value value="24" name="PERF_CCU_COLOR_READ_FLAG5_COUNT"/>
+-	<value value="25" name="PERF_CCU_COLOR_READ_FLAG6_COUNT"/>
+-	<value value="26" name="PERF_CCU_COLOR_READ_FLAG8_COUNT"/>
+-	<value value="27" name="PERF_CCU_2D_RD_REQ"/>
+-	<value value="28" name="PERF_CCU_2D_WR_REQ"/>
+-</enum>
+-
+-<enum name="a6xx_lrz_perfcounter_select">
+-	<value value="0" name="PERF_LRZ_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_LRZ_STARVE_CYCLES_RAS"/>
+-	<value value="2" name="PERF_LRZ_STALL_CYCLES_RB"/>
+-	<value value="3" name="PERF_LRZ_STALL_CYCLES_VSC"/>
+-	<value value="4" name="PERF_LRZ_STALL_CYCLES_VPC"/>
+-	<value value="5" name="PERF_LRZ_STALL_CYCLES_FLAG_PREFETCH"/>
+-	<value value="6" name="PERF_LRZ_STALL_CYCLES_UCHE"/>
+-	<value value="7" name="PERF_LRZ_LRZ_READ"/>
+-	<value value="8" name="PERF_LRZ_LRZ_WRITE"/>
+-	<value value="9" name="PERF_LRZ_READ_LATENCY"/>
+-	<value value="10" name="PERF_LRZ_MERGE_CACHE_UPDATING"/>
+-	<value value="11" name="PERF_LRZ_PRIM_KILLED_BY_MASKGEN"/>
+-	<value value="12" name="PERF_LRZ_PRIM_KILLED_BY_LRZ"/>
+-	<value value="13" name="PERF_LRZ_VISIBLE_PRIM_AFTER_LRZ"/>
+-	<value value="14" name="PERF_LRZ_FULL_8X8_TILES"/>
+-	<value value="15" name="PERF_LRZ_PARTIAL_8X8_TILES"/>
+-	<value value="16" name="PERF_LRZ_TILE_KILLED"/>
+-	<value value="17" name="PERF_LRZ_TOTAL_PIXEL"/>
+-	<value value="18" name="PERF_LRZ_VISIBLE_PIXEL_AFTER_LRZ"/>
+-	<value value="19" name="PERF_LRZ_FULLY_COVERED_TILES"/>
+-	<value value="20" name="PERF_LRZ_PARTIAL_COVERED_TILES"/>
+-	<value value="21" name="PERF_LRZ_FEEDBACK_ACCEPT"/>
+-	<value value="22" name="PERF_LRZ_FEEDBACK_DISCARD"/>
+-	<value value="23" name="PERF_LRZ_FEEDBACK_STALL"/>
+-	<value value="24" name="PERF_LRZ_STALL_CYCLES_RB_ZPLANE"/>
+-	<value value="25" name="PERF_LRZ_STALL_CYCLES_RB_BPLANE"/>
+-	<value value="26" name="PERF_LRZ_STALL_CYCLES_VC"/>
+-	<value value="27" name="PERF_LRZ_RAS_MASK_TRANS"/>
+-</enum>
+-
+-<enum name="a6xx_cmp_perfcounter_select">
+-	<value value="0" name="PERF_CMPDECMP_STALL_CYCLES_ARB"/>
+-	<value value="1" name="PERF_CMPDECMP_VBIF_LATENCY_CYCLES"/>
+-	<value value="2" name="PERF_CMPDECMP_VBIF_LATENCY_SAMPLES"/>
+-	<value value="3" name="PERF_CMPDECMP_VBIF_READ_DATA_CCU"/>
+-	<value value="4" name="PERF_CMPDECMP_VBIF_WRITE_DATA_CCU"/>
+-	<value value="5" name="PERF_CMPDECMP_VBIF_READ_REQUEST"/>
+-	<value value="6" name="PERF_CMPDECMP_VBIF_WRITE_REQUEST"/>
+-	<value value="7" name="PERF_CMPDECMP_VBIF_READ_DATA"/>
+-	<value value="8" name="PERF_CMPDECMP_VBIF_WRITE_DATA"/>
+-	<value value="9" name="PERF_CMPDECMP_FLAG_FETCH_CYCLES"/>
+-	<value value="10" name="PERF_CMPDECMP_FLAG_FETCH_SAMPLES"/>
+-	<value value="11" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG1_COUNT"/>
+-	<value value="12" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG2_COUNT"/>
+-	<value value="13" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG3_COUNT"/>
+-	<value value="14" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG4_COUNT"/>
+-	<value value="15" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG5_COUNT"/>
+-	<value value="16" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG6_COUNT"/>
+-	<value value="17" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG8_COUNT"/>
+-	<value value="18" name="PERF_CMPDECMP_COLOR_WRITE_FLAG1_COUNT"/>
+-	<value value="19" name="PERF_CMPDECMP_COLOR_WRITE_FLAG2_COUNT"/>
+-	<value value="20" name="PERF_CMPDECMP_COLOR_WRITE_FLAG3_COUNT"/>
+-	<value value="21" name="PERF_CMPDECMP_COLOR_WRITE_FLAG4_COUNT"/>
+-	<value value="22" name="PERF_CMPDECMP_COLOR_WRITE_FLAG5_COUNT"/>
+-	<value value="23" name="PERF_CMPDECMP_COLOR_WRITE_FLAG6_COUNT"/>
+-	<value value="24" name="PERF_CMPDECMP_COLOR_WRITE_FLAG8_COUNT"/>
+-	<value value="25" name="PERF_CMPDECMP_2D_STALL_CYCLES_VBIF_REQ"/>
+-	<value value="26" name="PERF_CMPDECMP_2D_STALL_CYCLES_VBIF_WR"/>
+-	<value value="27" name="PERF_CMPDECMP_2D_STALL_CYCLES_VBIF_RETURN"/>
+-	<value value="28" name="PERF_CMPDECMP_2D_RD_DATA"/>
+-	<value value="29" name="PERF_CMPDECMP_2D_WR_DATA"/>
+-	<value value="30" name="PERF_CMPDECMP_VBIF_READ_DATA_UCHE_CH0"/>
+-	<value value="31" name="PERF_CMPDECMP_VBIF_READ_DATA_UCHE_CH1"/>
+-	<value value="32" name="PERF_CMPDECMP_2D_OUTPUT_TRANS"/>
+-	<value value="33" name="PERF_CMPDECMP_VBIF_WRITE_DATA_UCHE"/>
+-	<value value="34" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG0_COUNT"/>
+-	<value value="35" name="PERF_CMPDECMP_COLOR_WRITE_FLAG0_COUNT"/>
+-	<value value="36" name="PERF_CMPDECMP_COLOR_WRITE_FLAGALPHA_COUNT"/>
+-	<value value="37" name="PERF_CMPDECMP_2D_BUSY_CYCLES"/>
+-	<value value="38" name="PERF_CMPDECMP_2D_REORDER_STARVE_CYCLES"/>
+-	<value value="39" name="PERF_CMPDECMP_2D_PIXELS"/>
+-</enum>
+-
+-<!--
+-Used in a6xx_2d_blit_cntl.  The value mostly seems to correlate with the
+-component type/size, so it likely selects the internal format used for
+-blending.  The one exception is that 16b unorm and 32b float use the
+-same value; perhaps 16b unorm is uncommon enough that it was easier to
+-just upconvert it to 32b float internally.
+-
+- 8b unorm:  10 (sometimes 0; is the high bit part of something else?)
+-16b unorm:   4
+-
+-32b int:     7
+-16b int:     6
+- 8b int:     5
+-
+-32b float:   4
+-16b float:   3
+- -->
+-<enum name="a6xx_2d_ifmt">
+-	<value value="0x10" name="R2D_UNORM8"/>
+-	<value value="0x7"  name="R2D_INT32"/>
+-	<value value="0x6"  name="R2D_INT16"/>
+-	<value value="0x5"  name="R2D_INT8"/>
+-	<value value="0x4"  name="R2D_FLOAT32"/>
+-	<value value="0x3"  name="R2D_FLOAT16"/>
+-	<value value="0x1"  name="R2D_UNORM8_SRGB"/>
+-	<value value="0x0"  name="R2D_RAW"/>
+-</enum>
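
For illustration, here is a minimal C sketch of the mapping the comment
above speculates about; pick_2d_ifmt() and its format-classification
parameters are hypothetical names for this sketch, not the driver's
actual code:

#include <stdio.h>

enum a6xx_2d_ifmt {
	R2D_RAW         = 0x0,
	R2D_UNORM8_SRGB = 0x1,
	R2D_FLOAT16     = 0x3,
	R2D_FLOAT32     = 0x4,
	R2D_INT8        = 0x5,
	R2D_INT16       = 0x6,
	R2D_INT32       = 0x7,
	R2D_UNORM8      = 0x10,
};

/* Classify by component type/size per the table above.  Note how 16b
 * unorm falls through to R2D_FLOAT32, matching the observation that it
 * shares value 4 with 32b float. */
static enum a6xx_2d_ifmt
pick_2d_ifmt(int bits, int is_int, int is_float, int is_srgb)
{
	if (is_int)
		return bits == 32 ? R2D_INT32 :
		       bits == 16 ? R2D_INT16 : R2D_INT8;
	if (is_float)
		return bits == 32 ? R2D_FLOAT32 : R2D_FLOAT16;
	if (is_srgb)
		return R2D_UNORM8_SRGB;
	/* unorm: 8b has its own encoding, 16b is upconverted to 32b float */
	return bits == 8 ? R2D_UNORM8 : R2D_FLOAT32;
}

int main(void)
{
	printf("16b unorm -> 0x%x (same as 32b float)\n",
	       pick_2d_ifmt(16, 0, 0, 0));
	return 0;
}
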
+-
+-<enum name="a6xx_ztest_mode">
+-	<doc>Allow early z-test and early-lrz (if applicable)</doc>
+-	<value value="0x0" name="A6XX_EARLY_Z"/>
+-	<doc>Disable early z-test and early-lrz test (if applicable)</doc>
+-	<value value="0x1" name="A6XX_LATE_Z"/>
+-	<doc>
+-		A special mode that allows early-lrz test but disables
+-		early-z test.  This may sound odd, since lrz-test happens
+-		before z-test, but as long as two conditions are maintained
+-		it allows using lrz-test in cases where the fragment shader
+-		has kill/discard:
+-
+-		1) Disable lrz-write whenever it is uncertain during the
+-		   binning pass that a fragment will pass, i.e. if the frag
+-		   shader has kill, writes z, or alpha/stencil test is
+-		   enabled.  (For correctness, lrz-write must also be
+-		   disabled when blend is enabled.)  This is analogous to
+-		   how a z-prepass works.
+-
+-		2) Disable both lrz-write and lrz-test if a depth-test
+-		   direction reversal is detected.  Due to condition (1),
+-		   the contents of the lrz buffer are a conservative
+-		   estimate of the depth buffer during the draw pass:
+-		   geometry that is known for certain to be invisible will
+-		   fail lrz-test, while geometry that may be visible (or
+-		   contributes to blend) will pass it.
+-
+-		This lets us keep early-lrz-test when the frag shader does
+-		not write z (i.e. the z-value is known before the FS runs)
+-		and has no side effects (image/ssbo writes, etc), but does
+-		have kill/discard.  That turns out to be a common enough
+-		case that testing against the conservative lrz buffer is
+-		useful for discarding fragments known to be invisible.
+-	</doc>
+-	<value value="0x2" name="A6XX_EARLY_LRZ_LATE_Z"/>
+-	<doc>Not a real hw value; used internally by Mesa</doc>
+-	<value value="0x3" name="A6XX_INVALID_ZTEST"/>
+-</enum>
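
A minimal sketch of the mode selection the <doc> above implies; struct
fs_info and pick_ztest_mode() are hypothetical names assumed for this
sketch, not the actual driver logic:

#include <stdbool.h>
#include <stdio.h>

enum a6xx_ztest_mode {
	A6XX_EARLY_Z          = 0x0,
	A6XX_LATE_Z           = 0x1,
	A6XX_EARLY_LRZ_LATE_Z = 0x2,
};

struct fs_info {
	bool writes_z;         /* frag shader writes depth */
	bool has_kill;         /* frag shader uses kill/discard */
	bool has_side_effects; /* image/ssbo writes, etc */
};

static enum a6xx_ztest_mode pick_ztest_mode(const struct fs_info *fs)
{
	if (fs->writes_z || fs->has_side_effects)
		return A6XX_LATE_Z;            /* z unknown before FS, or FS must run */
	if (fs->has_kill)
		return A6XX_EARLY_LRZ_LATE_Z;  /* keep lrz-test, defer z-test */
	return A6XX_EARLY_Z;
}

int main(void)
{
	struct fs_info fs = { .has_kill = true };
	printf("ztest mode = 0x%x\n", pick_ztest_mode(&fs));
	return 0;
}
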
+-
+-<enum name="a6xx_tess_spacing">
+-	<value value="0x0" name="TESS_EQUAL"/>
+-	<value value="0x2" name="TESS_FRACTIONAL_ODD"/>
+-	<value value="0x3" name="TESS_FRACTIONAL_EVEN"/>
+-</enum>
+-<enum name="a6xx_tess_output">
+-	<value value="0x0" name="TESS_POINTS"/>
+-	<value value="0x1" name="TESS_LINES"/>
+-	<value value="0x2" name="TESS_CW_TRIS"/>
+-	<value value="0x3" name="TESS_CCW_TRIS"/>
+-</enum>
+-
+-<enum name="a7xx_cp_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_CP_ALWAYS_COUNT"/>
+-	<value value="1" name="A7XX_PERF_CP_BUSY_GFX_CORE_IDLE"/>
+-	<value value="2" name="A7XX_PERF_CP_BUSY_CYCLES"/>
+-	<value value="3" name="A7XX_PERF_CP_NUM_PREEMPTIONS"/>
+-	<value value="4" name="A7XX_PERF_CP_PREEMPTION_REACTION_DELAY"/>
+-	<value value="5" name="A7XX_PERF_CP_PREEMPTION_SWITCH_OUT_TIME"/>
+-	<value value="6" name="A7XX_PERF_CP_PREEMPTION_SWITCH_IN_TIME"/>
+-	<value value="7" name="A7XX_PERF_CP_DEAD_DRAWS_IN_BIN_RENDER"/>
+-	<value value="8" name="A7XX_PERF_CP_PREDICATED_DRAWS_KILLED"/>
+-	<value value="9" name="A7XX_PERF_CP_MODE_SWITCH"/>
+-	<value value="10" name="A7XX_PERF_CP_ZPASS_DONE"/>
+-	<value value="11" name="A7XX_PERF_CP_CONTEXT_DONE"/>
+-	<value value="12" name="A7XX_PERF_CP_CACHE_FLUSH"/>
+-	<value value="13" name="A7XX_PERF_CP_LONG_PREEMPTIONS"/>
+-	<value value="14" name="A7XX_PERF_CP_SQE_I_CACHE_STARVE"/>
+-	<value value="15" name="A7XX_PERF_CP_SQE_IDLE"/>
+-	<value value="16" name="A7XX_PERF_CP_SQE_PM4_STARVE_RB_IB"/>
+-	<value value="17" name="A7XX_PERF_CP_SQE_PM4_STARVE_SDS"/>
+-	<value value="18" name="A7XX_PERF_CP_SQE_MRB_STARVE"/>
+-	<value value="19" name="A7XX_PERF_CP_SQE_RRB_STARVE"/>
+-	<value value="20" name="A7XX_PERF_CP_SQE_VSD_STARVE"/>
+-	<value value="21" name="A7XX_PERF_CP_VSD_DECODE_STARVE"/>
+-	<value value="22" name="A7XX_PERF_CP_SQE_PIPE_OUT_STALL"/>
+-	<value value="23" name="A7XX_PERF_CP_SQE_SYNC_STALL"/>
+-	<value value="24" name="A7XX_PERF_CP_SQE_PM4_WFI_STALL"/>
+-	<value value="25" name="A7XX_PERF_CP_SQE_SYS_WFI_STALL"/>
+-	<value value="26" name="A7XX_PERF_CP_SQE_T4_EXEC"/>
+-	<value value="27" name="A7XX_PERF_CP_SQE_LOAD_STATE_EXEC"/>
+-	<value value="28" name="A7XX_PERF_CP_SQE_SAVE_SDS_STATE"/>
+-	<value value="29" name="A7XX_PERF_CP_SQE_DRAW_EXEC"/>
+-	<value value="30" name="A7XX_PERF_CP_SQE_CTXT_REG_BUNCH_EXEC"/>
+-	<value value="31" name="A7XX_PERF_CP_SQE_EXEC_PROFILED"/>
+-	<value value="32" name="A7XX_PERF_CP_MEMORY_POOL_EMPTY"/>
+-	<value value="33" name="A7XX_PERF_CP_MEMORY_POOL_SYNC_STALL"/>
+-	<value value="34" name="A7XX_PERF_CP_MEMORY_POOL_ABOVE_THRESH"/>
+-	<value value="35" name="A7XX_PERF_CP_AHB_WR_STALL_PRE_DRAWS"/>
+-	<value value="36" name="A7XX_PERF_CP_AHB_STALL_SQE_GMU"/>
+-	<value value="37" name="A7XX_PERF_CP_AHB_STALL_SQE_WR_OTHER"/>
+-	<value value="38" name="A7XX_PERF_CP_AHB_STALL_SQE_RD_OTHER"/>
+-	<value value="39" name="A7XX_PERF_CP_CLUSTER0_EMPTY"/>
+-	<value value="40" name="A7XX_PERF_CP_CLUSTER1_EMPTY"/>
+-	<value value="41" name="A7XX_PERF_CP_CLUSTER2_EMPTY"/>
+-	<value value="42" name="A7XX_PERF_CP_CLUSTER3_EMPTY"/>
+-	<value value="43" name="A7XX_PERF_CP_CLUSTER4_EMPTY"/>
+-	<value value="44" name="A7XX_PERF_CP_CLUSTER5_EMPTY"/>
+-	<value value="45" name="A7XX_PERF_CP_PM4_DATA"/>
+-	<value value="46" name="A7XX_PERF_CP_PM4_HEADERS"/>
+-	<value value="47" name="A7XX_PERF_CP_VBIF_READ_BEATS"/>
+-	<value value="48" name="A7XX_PERF_CP_VBIF_WRITE_BEATS"/>
+-	<value value="49" name="A7XX_PERF_CP_SQE_INSTR_COUNTER"/>
+-	<value value="50" name="A7XX_PERF_CP_RESERVED_50"/>
+-	<value value="51" name="A7XX_PERF_CP_RESERVED_51"/>
+-	<value value="52" name="A7XX_PERF_CP_RESERVED_52"/>
+-	<value value="53" name="A7XX_PERF_CP_RESERVED_53"/>
+-	<value value="54" name="A7XX_PERF_CP_RESERVED_54"/>
+-	<value value="55" name="A7XX_PERF_CP_RESERVED_55"/>
+-	<value value="56" name="A7XX_PERF_CP_RESERVED_56"/>
+-	<value value="57" name="A7XX_PERF_CP_RESERVED_57"/>
+-	<value value="58" name="A7XX_PERF_CP_RESERVED_58"/>
+-	<value value="59" name="A7XX_PERF_CP_RESERVED_59"/>
+-	<value value="60" name="A7XX_PERF_CP_CLUSTER0_FULL"/>
+-	<value value="61" name="A7XX_PERF_CP_CLUSTER1_FULL"/>
+-	<value value="62" name="A7XX_PERF_CP_CLUSTER2_FULL"/>
+-	<value value="63" name="A7XX_PERF_CP_CLUSTER3_FULL"/>
+-	<value value="64" name="A7XX_PERF_CP_CLUSTER4_FULL"/>
+-	<value value="65" name="A7XX_PERF_CP_CLUSTER5_FULL"/>
+-	<value value="66" name="A7XX_PERF_CP_CLUSTER6_FULL"/>
+-	<value value="67" name="A7XX_PERF_CP_CLUSTER6_EMPTY"/>
+-	<value value="68" name="A7XX_PERF_CP_ICACHE_MISSES"/>
+-	<value value="69" name="A7XX_PERF_CP_ICACHE_HITS"/>
+-	<value value="70" name="A7XX_PERF_CP_ICACHE_STALL"/>
+-	<value value="71" name="A7XX_PERF_CP_DCACHE_MISSES"/>
+-	<value value="72" name="A7XX_PERF_CP_DCACHE_HITS"/>
+-	<value value="73" name="A7XX_PERF_CP_DCACHE_STALLS"/>
+-	<value value="74" name="A7XX_PERF_CP_AQE_SQE_STALL"/>
+-	<value value="75" name="A7XX_PERF_CP_SQE_AQE_STARVE"/>
+-	<value value="76" name="A7XX_PERF_CP_PREEMPT_LATENCY"/>
+-	<value value="77" name="A7XX_PERF_CP_SQE_MD8_STALL_CYCLES"/>
+-	<value value="78" name="A7XX_PERF_CP_SQE_MESH_EXEC_CYCLES"/>
+-	<value value="79" name="A7XX_PERF_CP_AQE_NUM_AS_CHUNKS"/>
+-	<value value="80" name="A7XX_PERF_CP_AQE_NUM_MS_CHUNKS"/>
+-</enum>
+-
+-<enum name="a7xx_rbbm_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_RBBM_ALWAYS_COUNT"/>
+-	<value value="1" name="A7XX_PERF_RBBM_ALWAYS_ON"/>
+-	<value value="2" name="A7XX_PERF_RBBM_TSE_BUSY"/>
+-	<value value="3" name="A7XX_PERF_RBBM_RAS_BUSY"/>
+-	<value value="4" name="A7XX_PERF_RBBM_PC_DCALL_BUSY"/>
+-	<value value="5" name="A7XX_PERF_RBBM_PC_VSD_BUSY"/>
+-	<value value="6" name="A7XX_PERF_RBBM_STATUS_MASKED"/>
+-	<value value="7" name="A7XX_PERF_RBBM_COM_BUSY"/>
+-	<value value="8" name="A7XX_PERF_RBBM_DCOM_BUSY"/>
+-	<value value="9" name="A7XX_PERF_RBBM_VBIF_BUSY"/>
+-	<value value="10" name="A7XX_PERF_RBBM_VSC_BUSY"/>
+-	<value value="11" name="A7XX_PERF_RBBM_TESS_BUSY"/>
+-	<value value="12" name="A7XX_PERF_RBBM_UCHE_BUSY"/>
+-	<value value="13" name="A7XX_PERF_RBBM_HLSQ_BUSY"/>
+-</enum>
+-
+-<enum name="a7xx_pc_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_PC_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_PC_WORKING_CYCLES"/>
+-	<value value="2" name="A7XX_PERF_PC_STALL_CYCLES_VFD"/>
+-	<value value="3" name="A7XX_PERF_PC_RESERVED"/>
+-	<value value="4" name="A7XX_PERF_PC_STALL_CYCLES_VPC"/>
+-	<value value="5" name="A7XX_PERF_PC_STALL_CYCLES_UCHE"/>
+-	<value value="6" name="A7XX_PERF_PC_STALL_CYCLES_TESS"/>
+-	<value value="7" name="A7XX_PERF_PC_STALL_CYCLES_VFD_ONLY"/>
+-	<value value="8" name="A7XX_PERF_PC_STALL_CYCLES_VPC_ONLY"/>
+-	<value value="9" name="A7XX_PERF_PC_PASS1_TF_STALL_CYCLES"/>
+-	<value value="10" name="A7XX_PERF_PC_STARVE_CYCLES_FOR_INDEX"/>
+-	<value value="11" name="A7XX_PERF_PC_STARVE_CYCLES_FOR_TESS_FACTOR"/>
+-	<value value="12" name="A7XX_PERF_PC_STARVE_CYCLES_FOR_VIZ_STREAM"/>
+-	<value value="13" name="A7XX_PERF_PC_STARVE_CYCLES_DI"/>
+-	<value value="14" name="A7XX_PERF_PC_VIS_STREAMS_LOADED"/>
+-	<value value="15" name="A7XX_PERF_PC_INSTANCES"/>
+-	<value value="16" name="A7XX_PERF_PC_VPC_PRIMITIVES"/>
+-	<value value="17" name="A7XX_PERF_PC_DEAD_PRIM"/>
+-	<value value="18" name="A7XX_PERF_PC_LIVE_PRIM"/>
+-	<value value="19" name="A7XX_PERF_PC_VERTEX_HITS"/>
+-	<value value="20" name="A7XX_PERF_PC_IA_VERTICES"/>
+-	<value value="21" name="A7XX_PERF_PC_IA_PRIMITIVES"/>
+-	<value value="22" name="A7XX_PERF_PC_RESERVED_22"/>
+-	<value value="23" name="A7XX_PERF_PC_HS_INVOCATIONS"/>
+-	<value value="24" name="A7XX_PERF_PC_DS_INVOCATIONS"/>
+-	<value value="25" name="A7XX_PERF_PC_VS_INVOCATIONS"/>
+-	<value value="26" name="A7XX_PERF_PC_GS_INVOCATIONS"/>
+-	<value value="27" name="A7XX_PERF_PC_DS_PRIMITIVES"/>
+-	<value value="28" name="A7XX_PERF_PC_3D_DRAWCALLS"/>
+-	<value value="29" name="A7XX_PERF_PC_2D_DRAWCALLS"/>
+-	<value value="30" name="A7XX_PERF_PC_NON_DRAWCALL_GLOBAL_EVENTS"/>
+-	<value value="31" name="A7XX_PERF_PC_TESS_BUSY_CYCLES"/>
+-	<value value="32" name="A7XX_PERF_PC_TESS_WORKING_CYCLES"/>
+-	<value value="33" name="A7XX_PERF_PC_TESS_STALL_CYCLES_PC"/>
+-	<value value="34" name="A7XX_PERF_PC_TESS_STARVE_CYCLES_PC"/>
+-	<value value="35" name="A7XX_PERF_PC_TESS_SINGLE_PRIM_CYCLES"/>
+-	<value value="36" name="A7XX_PERF_PC_TESS_PC_UV_TRANS"/>
+-	<value value="37" name="A7XX_PERF_PC_TESS_PC_UV_PATCHES"/>
+-	<value value="38" name="A7XX_PERF_PC_TESS_FACTOR_TRANS"/>
+-	<value value="39" name="A7XX_PERF_PC_TAG_CHECKED_VERTICES"/>
+-	<value value="40" name="A7XX_PERF_PC_MESH_VS_WAVES"/>
+-	<value value="41" name="A7XX_PERF_PC_MESH_DRAWS"/>
+-	<value value="42" name="A7XX_PERF_PC_MESH_DEAD_DRAWS"/>
+-	<value value="43" name="A7XX_PERF_PC_MESH_MVIS_EN_DRAWS"/>
+-	<value value="44" name="A7XX_PERF_PC_MESH_DEAD_PRIM"/>
+-	<value value="45" name="A7XX_PERF_PC_MESH_LIVE_PRIM"/>
+-	<value value="46" name="A7XX_PERF_PC_MESH_PA_EN_PRIM"/>
+-	<value value="47" name="A7XX_PERF_PC_STARVE_CYCLES_FOR_MVIS_STREAM"/>
+-	<value value="48" name="A7XX_PERF_PC_STARVE_CYCLES_PREDRAW"/>
+-	<value value="49" name="A7XX_PERF_PC_STALL_CYCLES_COMPUTE_GFX"/>
+-	<value value="50" name="A7XX_PERF_PC_STALL_CYCLES_GFX_COMPUTE"/>
+-	<value value="51" name="A7XX_PERF_PC_TESS_PC_MULTI_PATCH_TRANS"/>
+-</enum>
+-
+-<enum name="a7xx_vfd_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_VFD_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_VFD_STALL_CYCLES_UCHE"/>
+-	<value value="2" name="A7XX_PERF_VFD_STALL_CYCLES_VPC_ALLOC"/>
+-	<value value="3" name="A7XX_PERF_VFD_STALL_CYCLES_SP_INFO"/>
+-	<value value="4" name="A7XX_PERF_VFD_STALL_CYCLES_SP_ATTR"/>
+-	<value value="5" name="A7XX_PERF_VFD_STARVE_CYCLES_UCHE"/>
+-	<value value="6" name="A7XX_PERF_VFD_RBUFFER_FULL"/>
+-	<value value="7" name="A7XX_PERF_VFD_ATTR_INFO_FIFO_FULL"/>
+-	<value value="8" name="A7XX_PERF_VFD_DECODED_ATTRIBUTE_BYTES"/>
+-	<value value="9" name="A7XX_PERF_VFD_NUM_ATTRIBUTES"/>
+-	<value value="10" name="A7XX_PERF_VFD_UPPER_SHADER_FIBERS"/>
+-	<value value="11" name="A7XX_PERF_VFD_LOWER_SHADER_FIBERS"/>
+-	<value value="12" name="A7XX_PERF_VFD_MODE_0_FIBERS"/>
+-	<value value="13" name="A7XX_PERF_VFD_MODE_1_FIBERS"/>
+-	<value value="14" name="A7XX_PERF_VFD_MODE_2_FIBERS"/>
+-	<value value="15" name="A7XX_PERF_VFD_MODE_3_FIBERS"/>
+-	<value value="16" name="A7XX_PERF_VFD_MODE_4_FIBERS"/>
+-	<value value="17" name="A7XX_PERF_VFD_TOTAL_VERTICES"/>
+-	<value value="18" name="A7XX_PERF_VFDP_STALL_CYCLES_VFD"/>
+-	<value value="19" name="A7XX_PERF_VFDP_STALL_CYCLES_VFD_INDEX"/>
+-	<value value="20" name="A7XX_PERF_VFDP_STALL_CYCLES_VFD_PROG"/>
+-	<value value="21" name="A7XX_PERF_VFDP_STARVE_CYCLES_PC"/>
+-	<value value="22" name="A7XX_PERF_VFDP_VS_STAGE_WAVES"/>
+-	<value value="23" name="A7XX_PERF_VFD_STALL_CYCLES_PRG_END_FE"/>
+-	<value value="24" name="A7XX_PERF_VFD_STALL_CYCLES_CBSYNC"/>
+-</enum>
+-
+-<enum name="a7xx_hlsq_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_HLSQ_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_HLSQ_STALL_CYCLES_UCHE"/>
+-	<value value="2" name="A7XX_PERF_HLSQ_STALL_CYCLES_SP_STATE"/>
+-	<value value="3" name="A7XX_PERF_HLSQ_STALL_CYCLES_SP_FS_STAGE"/>
+-	<value value="4" name="A7XX_PERF_HLSQ_UCHE_LATENCY_CYCLES"/>
+-	<value value="5" name="A7XX_PERF_HLSQ_UCHE_LATENCY_COUNT"/>
+-	<value value="6" name="A7XX_PERF_HLSQ_RESERVED_6"/>
+-	<value value="7" name="A7XX_PERF_HLSQ_RESERVED_7"/>
+-	<value value="8" name="A7XX_PERF_HLSQ_RESERVED_8"/>
+-	<value value="9" name="A7XX_PERF_HLSQ_RESERVED_9"/>
+-	<value value="10" name="A7XX_PERF_HLSQ_COMPUTE_DRAWCALLS"/>
+-	<value value="11" name="A7XX_PERF_HLSQ_FS_DATA_WAIT_PROGRAMMING"/>
+-	<value value="12" name="A7XX_PERF_HLSQ_DUAL_FS_PROG_ACTIVE"/>
+-	<value value="13" name="A7XX_PERF_HLSQ_DUAL_VS_PROG_ACTIVE"/>
+-	<value value="14" name="A7XX_PERF_HLSQ_FS_BATCH_COUNT_ZERO"/>
+-	<value value="15" name="A7XX_PERF_HLSQ_VS_BATCH_COUNT_ZERO"/>
+-	<value value="16" name="A7XX_PERF_HLSQ_WAVE_PENDING_NO_QUAD"/>
+-	<value value="17" name="A7XX_PERF_HLSQ_WAVE_PENDING_NO_PRIM_BASE"/>
+-	<value value="18" name="A7XX_PERF_HLSQ_STALL_CYCLES_VPC"/>
+-	<value value="19" name="A7XX_PERF_HLSQ_RESERVED_19"/>
+-	<value value="20" name="A7XX_PERF_HLSQ_DRAW_MODE_SWITCH_VSFS_SYNC"/>
+-	<value value="21" name="A7XX_PERF_HLSQ_VSBR_STALL_CYCLES"/>
+-	<value value="22" name="A7XX_PERF_HLSQ_FS_STALL_CYCLES"/>
+-	<value value="23" name="A7XX_PERF_HLSQ_LPAC_STALL_CYCLES"/>
+-	<value value="24" name="A7XX_PERF_HLSQ_BV_STALL_CYCLES"/>
+-	<value value="25" name="A7XX_PERF_HLSQ_VSBR_DEREF_CYCLES"/>
+-	<value value="26" name="A7XX_PERF_HLSQ_FS_DEREF_CYCLES"/>
+-	<value value="27" name="A7XX_PERF_HLSQ_LPAC_DEREF_CYCLES"/>
+-	<value value="28" name="A7XX_PERF_HLSQ_BV_DEREF_CYCLES"/>
+-	<value value="29" name="A7XX_PERF_HLSQ_VSBR_S2W_CYCLES"/>
+-	<value value="30" name="A7XX_PERF_HLSQ_FS_S2W_CYCLES"/>
+-	<value value="31" name="A7XX_PERF_HLSQ_LPAC_S2W_CYCLES"/>
+-	<value value="32" name="A7XX_PERF_HLSQ_BV_S2W_CYCLES"/>
+-	<value value="33" name="A7XX_PERF_HLSQ_VSBR_WAIT_FS_S2W"/>
+-	<value value="34" name="A7XX_PERF_HLSQ_FS_WAIT_VS_S2W"/>
+-	<value value="35" name="A7XX_PERF_HLSQ_LPAC_WAIT_VS_S2W"/>
+-	<value value="36" name="A7XX_PERF_HLSQ_BV_WAIT_FS_S2W"/>
+-	<value value="37" name="A7XX_PERF_HLSQ_VS_WAIT_CONST_RESOURCE"/>
+-	<value value="38" name="A7XX_PERF_HLSQ_FS_WAIT_SAME_VS_S2W"/>
+-	<value value="39" name="A7XX_PERF_HLSQ_FS_STARVING_SP"/>
+-	<value value="40" name="A7XX_PERF_HLSQ_VS_DATA_WAIT_PROGRAMMING"/>
+-	<value value="41" name="A7XX_PERF_HLSQ_BV_DATA_WAIT_PROGRAMMING"/>
+-	<value value="42" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXTS_VS"/>
+-	<value value="43" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXT_CYCLES_VS"/>
+-	<value value="44" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXTS_FS"/>
+-	<value value="45" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXT_CYCLES_FS"/>
+-	<value value="46" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXTS_BV"/>
+-	<value value="47" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXT_CYCLES_BV"/>
+-	<value value="48" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXTS_LPAC"/>
+-	<value value="49" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXT_CYCLES_LPAC"/>
+-	<value value="50" name="A7XX_PERF_HLSQ_SPTROC_STCHE_WARMUP_INC_VS"/>
+-	<value value="51" name="A7XX_PERF_HLSQ_SPTROC_STCHE_WARMUP_INC_FS"/>
+-	<value value="52" name="A7XX_PERF_HLSQ_SPTROC_STCHE_WARMUP_INC_BV"/>
+-	<value value="53" name="A7XX_PERF_HLSQ_SPTROC_STCHE_WARMUP_INC_LPAC"/>
+-	<value value="54" name="A7XX_PERF_HLSQ_SPTROC_STCHE_MISS_INC_VS"/>
+-	<value value="55" name="A7XX_PERF_HLSQ_SPTROC_STCHE_MISS_INC_FS"/>
+-	<value value="56" name="A7XX_PERF_HLSQ_SPTROC_STCHE_MISS_INC_BV"/>
+-	<value value="57" name="A7XX_PERF_HLSQ_SPTROC_STCHE_MISS_INC_LPAC"/>
+-</enum>
+-
+-<enum name="a7xx_vpc_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_VPC_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_VPC_WORKING_CYCLES"/>
+-	<value value="2" name="A7XX_PERF_VPC_STALL_CYCLES_UCHE"/>
+-	<value value="3" name="A7XX_PERF_VPC_STALL_CYCLES_VFD_WACK"/>
+-	<value value="4" name="A7XX_PERF_VPC_STALL_CYCLES_HLSQ_PRIM_ALLOC"/>
+-	<value value="5" name="A7XX_PERF_VPC_RESERVED_5"/>
+-	<value value="6" name="A7XX_PERF_VPC_STALL_CYCLES_SP_LM"/>
+-	<value value="7" name="A7XX_PERF_VPC_STARVE_CYCLES_SP"/>
+-	<value value="8" name="A7XX_PERF_VPC_STARVE_CYCLES_LRZ"/>
+-	<value value="9" name="A7XX_PERF_VPC_PC_PRIMITIVES"/>
+-	<value value="10" name="A7XX_PERF_VPC_SP_COMPONENTS"/>
+-	<value value="11" name="A7XX_PERF_VPC_STALL_CYCLES_VPCRAM_POS"/>
+-	<value value="12" name="A7XX_PERF_VPC_LRZ_ASSIGN_PRIMITIVES"/>
+-	<value value="13" name="A7XX_PERF_VPC_RB_VISIBLE_PRIMITIVES"/>
+-	<value value="14" name="A7XX_PERF_VPC_LM_TRANSACTION"/>
+-	<value value="15" name="A7XX_PERF_VPC_STREAMOUT_TRANSACTION"/>
+-	<value value="16" name="A7XX_PERF_VPC_VS_BUSY_CYCLES"/>
+-	<value value="17" name="A7XX_PERF_VPC_PS_BUSY_CYCLES"/>
+-	<value value="18" name="A7XX_PERF_VPC_VS_WORKING_CYCLES"/>
+-	<value value="19" name="A7XX_PERF_VPC_PS_WORKING_CYCLES"/>
+-	<value value="20" name="A7XX_PERF_VPC_STARVE_CYCLES_RB"/>
+-	<value value="21" name="A7XX_PERF_VPC_NUM_VPCRAM_READ_POS"/>
+-	<value value="22" name="A7XX_PERF_VPC_WIT_FULL_CYCLES"/>
+-	<value value="23" name="A7XX_PERF_VPC_VPCRAM_FULL_CYCLES"/>
+-	<value value="24" name="A7XX_PERF_VPC_LM_FULL_WAIT_FOR_INTP_END"/>
+-	<value value="25" name="A7XX_PERF_VPC_NUM_VPCRAM_WRITE"/>
+-	<value value="26" name="A7XX_PERF_VPC_NUM_VPCRAM_READ_SO"/>
+-	<value value="27" name="A7XX_PERF_VPC_NUM_ATTR_REQ_LM"/>
+-	<value value="28" name="A7XX_PERF_VPC_STALL_CYCLE_TSE"/>
+-	<value value="29" name="A7XX_PERF_VPC_TSE_PRIMITIVES"/>
+-	<value value="30" name="A7XX_PERF_VPC_GS_PRIMITIVES"/>
+-	<value value="31" name="A7XX_PERF_VPC_TSE_TRANSACTIONS"/>
+-	<value value="32" name="A7XX_PERF_VPC_STALL_CYCLES_CCU"/>
+-	<value value="33" name="A7XX_PERF_VPC_NUM_WM_HIT"/>
+-	<value value="34" name="A7XX_PERF_VPC_STALL_DQ_WACK"/>
+-	<value value="35" name="A7XX_PERF_VPC_STALL_CYCLES_CCHE"/>
+-	<value value="36" name="A7XX_PERF_VPC_STARVE_CYCLES_CCHE"/>
+-	<value value="37" name="A7XX_PERF_VPC_NUM_PA_REQ"/>
+-	<value value="38" name="A7XX_PERF_VPC_NUM_LM_REQ_HIT"/>
+-	<value value="39" name="A7XX_PERF_VPC_CCHE_REQBUF_FULL"/>
+-	<value value="40" name="A7XX_PERF_VPC_STALL_CYCLES_LM_ACK"/>
+-	<value value="41" name="A7XX_PERF_VPC_STALL_CYCLES_PRG_END_FE"/>
+-	<value value="42" name="A7XX_PERF_VPC_STALL_CYCLES_PRG_END_PCVS"/>
+-	<value value="43" name="A7XX_PERF_VPC_STALL_CYCLES_PRG_END_VPCPS"/>
+-</enum>
+-
+-<enum name="a7xx_tse_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_TSE_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_TSE_CLIPPING_CYCLES"/>
+-	<value value="2" name="A7XX_PERF_TSE_STALL_CYCLES_RAS"/>
+-	<value value="3" name="A7XX_PERF_TSE_STALL_CYCLES_LRZ_BARYPLANE"/>
+-	<value value="4" name="A7XX_PERF_TSE_STALL_CYCLES_LRZ_ZPLANE"/>
+-	<value value="5" name="A7XX_PERF_TSE_STARVE_CYCLES_PC"/>
+-	<value value="6" name="A7XX_PERF_TSE_INPUT_PRIM"/>
+-	<value value="7" name="A7XX_PERF_TSE_INPUT_NULL_PRIM"/>
+-	<value value="8" name="A7XX_PERF_TSE_TRIVAL_REJ_PRIM"/>
+-	<value value="9" name="A7XX_PERF_TSE_CLIPPED_PRIM"/>
+-	<value value="10" name="A7XX_PERF_TSE_ZERO_AREA_PRIM"/>
+-	<value value="11" name="A7XX_PERF_TSE_FACENESS_CULLED_PRIM"/>
+-	<value value="12" name="A7XX_PERF_TSE_ZERO_PIXEL_PRIM"/>
+-	<value value="13" name="A7XX_PERF_TSE_OUTPUT_NULL_PRIM"/>
+-	<value value="14" name="A7XX_PERF_TSE_OUTPUT_VISIBLE_PRIM"/>
+-	<value value="15" name="A7XX_PERF_TSE_CINVOCATION"/>
+-	<value value="16" name="A7XX_PERF_TSE_CPRIMITIVES"/>
+-	<value value="17" name="A7XX_PERF_TSE_2D_INPUT_PRIM"/>
+-	<value value="18" name="A7XX_PERF_TSE_2D_ALIVE_CYCLES"/>
+-	<value value="19" name="A7XX_PERF_TSE_CLIP_PLANES"/>
+-</enum>
+-
+-<enum name="a7xx_ras_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_RAS_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_RAS_SUPERTILE_ACTIVE_CYCLES"/>
+-	<value value="2" name="A7XX_PERF_RAS_STALL_CYCLES_LRZ"/>
+-	<value value="3" name="A7XX_PERF_RAS_STARVE_CYCLES_TSE"/>
+-	<value value="4" name="A7XX_PERF_RAS_SUPER_TILES"/>
+-	<value value="5" name="A7XX_PERF_RAS_8X4_TILES"/>
+-	<value value="6" name="A7XX_PERF_RAS_MASKGEN_ACTIVE"/>
+-	<value value="7" name="A7XX_PERF_RAS_FULLY_COVERED_SUPER_TILES"/>
+-	<value value="8" name="A7XX_PERF_RAS_FULLY_COVERED_8X4_TILES"/>
+-	<value value="9" name="A7XX_PERF_RAS_PRIM_KILLED_INVISILBE"/>
+-	<value value="10" name="A7XX_PERF_RAS_SUPERTILE_GEN_ACTIVE_CYCLES"/>
+-	<value value="11" name="A7XX_PERF_RAS_LRZ_INTF_WORKING_CYCLES"/>
+-	<value value="12" name="A7XX_PERF_RAS_BLOCKS"/>
+-	<value value="13" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_0_WORKING_CC_l2"/>
+-	<value value="14" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_1_WORKING_CC_l2"/>
+-	<value value="15" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_2_WORKING_CC_l2"/>
+-	<value value="16" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_3_WORKING_CC_l2"/>
+-	<value value="17" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_4_WORKING_CC_l2"/>
+-	<value value="18" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_5_WORKING_CC_l2"/>
+-	<value value="19" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_6_WORKING_CC_l2"/>
+-	<value value="20" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_7_WORKING_CC_l2"/>
+-	<value value="21" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_8_WORKING_CC_l2"/>
+-	<value value="22" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_9_WORKING_CC_l2"/>
+-	<value value="23" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_10_WORKING_CC_l2"/>
+-	<value value="24" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_11_WORKING_CC_l2"/>
+-	<value value="25" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_12_WORKING_CC_l2"/>
+-	<value value="26" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_13_WORKING_CC_l2"/>
+-	<value value="27" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_14_WORKING_CC_l2"/>
+-	<value value="28" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_15_WORKING_CC_l2"/>
+-	<value value="29" name="A7XX_PERF_RAS_FALSE_PARTIAL_STILE"/>
+-</enum>
+-
+-<enum name="a7xx_uche_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_UCHE_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_UCHE_STALL_CYCLES_ARBITER"/>
+-	<value value="2" name="A7XX_PERF_UCHE_VBIF_LATENCY_CYCLES"/>
+-	<value value="3" name="A7XX_PERF_UCHE_VBIF_LATENCY_SAMPLES"/>
+-	<value value="4" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_TP"/>
+-	<value value="5" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_VFD"/>
+-	<value value="6" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_HLSQ"/>
+-	<value value="7" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_LRZ"/>
+-	<value value="8" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_SP"/>
+-	<value value="9" name="A7XX_PERF_UCHE_READ_REQUESTS_TP"/>
+-	<value value="10" name="A7XX_PERF_UCHE_READ_REQUESTS_VFD"/>
+-	<value value="11" name="A7XX_PERF_UCHE_READ_REQUESTS_HLSQ"/>
+-	<value value="12" name="A7XX_PERF_UCHE_READ_REQUESTS_LRZ"/>
+-	<value value="13" name="A7XX_PERF_UCHE_READ_REQUESTS_SP"/>
+-	<value value="14" name="A7XX_PERF_UCHE_WRITE_REQUESTS_LRZ"/>
+-	<value value="15" name="A7XX_PERF_UCHE_WRITE_REQUESTS_SP"/>
+-	<value value="16" name="A7XX_PERF_UCHE_WRITE_REQUESTS_VPC"/>
+-	<value value="17" name="A7XX_PERF_UCHE_WRITE_REQUESTS_VSC"/>
+-	<value value="18" name="A7XX_PERF_UCHE_EVICTS"/>
+-	<value value="19" name="A7XX_PERF_UCHE_BANK_REQ0"/>
+-	<value value="20" name="A7XX_PERF_UCHE_BANK_REQ1"/>
+-	<value value="21" name="A7XX_PERF_UCHE_BANK_REQ2"/>
+-	<value value="22" name="A7XX_PERF_UCHE_BANK_REQ3"/>
+-	<value value="23" name="A7XX_PERF_UCHE_BANK_REQ4"/>
+-	<value value="24" name="A7XX_PERF_UCHE_BANK_REQ5"/>
+-	<value value="25" name="A7XX_PERF_UCHE_BANK_REQ6"/>
+-	<value value="26" name="A7XX_PERF_UCHE_BANK_REQ7"/>
+-	<value value="27" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_CH0"/>
+-	<value value="28" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_CH1"/>
+-	<value value="29" name="A7XX_PERF_UCHE_GMEM_READ_BEATS"/>
+-	<value value="30" name="A7XX_PERF_UCHE_TPH_REF_FULL"/>
+-	<value value="31" name="A7XX_PERF_UCHE_TPH_VICTIM_FULL"/>
+-	<value value="32" name="A7XX_PERF_UCHE_TPH_EXT_FULL"/>
+-	<value value="33" name="A7XX_PERF_UCHE_VBIF_STALL_WRITE_DATA"/>
+-	<value value="34" name="A7XX_PERF_UCHE_DCMP_LATENCY_SAMPLES"/>
+-	<value value="35" name="A7XX_PERF_UCHE_DCMP_LATENCY_CYCLES"/>
+-	<value value="36" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_PC"/>
+-	<value value="37" name="A7XX_PERF_UCHE_READ_REQUESTS_PC"/>
+-	<value value="38" name="A7XX_PERF_UCHE_RAM_READ_REQ"/>
+-	<value value="39" name="A7XX_PERF_UCHE_RAM_WRITE_REQ"/>
+-	<value value="40" name="A7XX_PERF_UCHE_STARVED_CYCLES_VBIF_DECMP"/>
+-	<value value="41" name="A7XX_PERF_UCHE_STALL_CYCLES_DECMP"/>
+-	<value value="42" name="A7XX_PERF_UCHE_ARBITER_STALL_CYCLES_VBIF"/>
+-	<value value="43" name="A7XX_PERF_UCHE_READ_REQUESTS_TP_UBWC"/>
+-	<value value="44" name="A7XX_PERF_UCHE_READ_REQUESTS_TP_NONUBWC"/>
+-	<value value="45" name="A7XX_PERF_UCHE_READ_REQUESTS_TP_GMEM"/>
+-	<value value="46" name="A7XX_PERF_UCHE_LONG_LINE_ALL_EVICTS_KAILUA"/>
+-	<value value="47" name="A7XX_PERF_UCHE_LONG_LINE_PARTIAL_EVICTS_KAILUA"/>
+-	<value value="48" name="A7XX_PERF_UCHE_TPH_CONFLICT_CL_CCHE"/>
+-	<value value="49" name="A7XX_PERF_UCHE_TPH_CONFLICT_CL_OTHER_KAILUA"/>
+-	<value value="50" name="A7XX_PERF_UCHE_DBANK_CONFLICT_CL_CCHE"/>
+-	<value value="51" name="A7XX_PERF_UCHE_DBANK_CONFLICT_CL_OTHER_CLIENTS"/>
+-	<value value="52" name="A7XX_PERF_UCHE_VBIF_WRITE_BEATS_CH0"/>
+-	<value value="53" name="A7XX_PERF_UCHE_VBIF_WRITE_BEATS_CH1"/>
+-	<value value="54" name="A7XX_PERF_UCHE_CCHE_TPH_QUEUE_FULL"/>
+-	<value value="55" name="A7XX_PERF_UCHE_CCHE_DPH_QUEUE_FULL"/>
+-	<value value="56" name="A7XX_PERF_UCHE_GMEM_WRITE_BEATS"/>
+-	<value value="57" name="A7XX_PERF_UCHE_UBWC_READ_BEATS"/>
+-	<value value="58" name="A7XX_PERF_UCHE_UBWC_WRITE_BEATS"/>
+-</enum>
+-
+-<enum name="a7xx_tp_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_TP_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_TP_STALL_CYCLES_UCHE"/>
+-	<value value="2" name="A7XX_PERF_TP_LATENCY_CYCLES"/>
+-	<value value="3" name="A7XX_PERF_TP_LATENCY_TRANS"/>
+-	<value value="4" name="A7XX_PERF_TP_FLAG_FIFO_DELAY_SAMPLES"/>
+-	<value value="5" name="A7XX_PERF_TP_FLAG_FIFO_DELAY_CYCLES"/>
+-	<value value="6" name="A7XX_PERF_TP_L1_CACHELINE_REQUESTS"/>
+-	<value value="7" name="A7XX_PERF_TP_L1_CACHELINE_MISSES"/>
+-	<value value="8" name="A7XX_PERF_TP_SP_TP_TRANS"/>
+-	<value value="9" name="A7XX_PERF_TP_TP_SP_TRANS"/>
+-	<value value="10" name="A7XX_PERF_TP_OUTPUT_PIXELS"/>
+-	<value value="11" name="A7XX_PERF_TP_FILTER_WORKLOAD_16BIT"/>
+-	<value value="12" name="A7XX_PERF_TP_FILTER_WORKLOAD_32BIT"/>
+-	<value value="13" name="A7XX_PERF_TP_QUADS_RECEIVED"/>
+-	<value value="14" name="A7XX_PERF_TP_QUADS_OFFSET"/>
+-	<value value="15" name="A7XX_PERF_TP_QUADS_SHADOW"/>
+-	<value value="16" name="A7XX_PERF_TP_QUADS_ARRAY"/>
+-	<value value="17" name="A7XX_PERF_TP_QUADS_GRADIENT"/>
+-	<value value="18" name="A7XX_PERF_TP_QUADS_1D"/>
+-	<value value="19" name="A7XX_PERF_TP_QUADS_2D"/>
+-	<value value="20" name="A7XX_PERF_TP_QUADS_BUFFER"/>
+-	<value value="21" name="A7XX_PERF_TP_QUADS_3D"/>
+-	<value value="22" name="A7XX_PERF_TP_QUADS_CUBE"/>
+-	<value value="23" name="A7XX_PERF_TP_DIVERGENT_QUADS_RECEIVED"/>
+-	<value value="24" name="A7XX_PERF_TP_PRT_NON_RESIDENT_EVENTS"/>
+-	<value value="25" name="A7XX_PERF_TP_OUTPUT_PIXELS_POINT"/>
+-	<value value="26" name="A7XX_PERF_TP_OUTPUT_PIXELS_BILINEAR"/>
+-	<value value="27" name="A7XX_PERF_TP_OUTPUT_PIXELS_MIP"/>
+-	<value value="28" name="A7XX_PERF_TP_OUTPUT_PIXELS_ANISO"/>
+-	<value value="29" name="A7XX_PERF_TP_OUTPUT_PIXELS_ZERO_LOD"/>
+-	<value value="30" name="A7XX_PERF_TP_FLAG_CACHE_REQUESTS"/>
+-	<value value="31" name="A7XX_PERF_TP_FLAG_CACHE_MISSES"/>
+-	<value value="32" name="A7XX_PERF_TP_L1_5_L2_REQUESTS"/>
+-	<value value="33" name="A7XX_PERF_TP_2D_OUTPUT_PIXELS"/>
+-	<value value="34" name="A7XX_PERF_TP_2D_OUTPUT_PIXELS_POINT"/>
+-	<value value="35" name="A7XX_PERF_TP_2D_OUTPUT_PIXELS_BILINEAR"/>
+-	<value value="36" name="A7XX_PERF_TP_2D_FILTER_WORKLOAD_16BIT"/>
+-	<value value="37" name="A7XX_PERF_TP_2D_FILTER_WORKLOAD_32BIT"/>
+-	<value value="38" name="A7XX_PERF_TP_TPA2TPC_TRANS"/>
+-	<value value="39" name="A7XX_PERF_TP_L1_MISSES_ASTC_1TILE"/>
+-	<value value="40" name="A7XX_PERF_TP_L1_MISSES_ASTC_2TILE"/>
+-	<value value="41" name="A7XX_PERF_TP_L1_MISSES_ASTC_4TILE"/>
+-	<value value="42" name="A7XX_PERF_TP_L1_5_COMPRESS_REQS"/>
+-	<value value="43" name="A7XX_PERF_TP_L1_5_L2_COMPRESS_MISS"/>
+-	<value value="44" name="A7XX_PERF_TP_L1_BANK_CONFLICT"/>
+-	<value value="45" name="A7XX_PERF_TP_L1_5_MISS_LATENCY_CYCLES"/>
+-	<value value="46" name="A7XX_PERF_TP_L1_5_MISS_LATENCY_TRANS"/>
+-	<value value="47" name="A7XX_PERF_TP_QUADS_CONSTANT_MULTIPLIED"/>
+-	<value value="48" name="A7XX_PERF_TP_FRONTEND_WORKING_CYCLES"/>
+-	<value value="49" name="A7XX_PERF_TP_L1_TAG_WORKING_CYCLES"/>
+-	<value value="50" name="A7XX_PERF_TP_L1_DATA_WRITE_WORKING_CYCLES"/>
+-	<value value="51" name="A7XX_PERF_TP_PRE_L1_DECOM_WORKING_CYCLES"/>
+-	<value value="52" name="A7XX_PERF_TP_BACKEND_WORKING_CYCLES"/>
+-	<value value="53" name="A7XX_PERF_TP_L1_5_CACHE_WORKING_CYCLES"/>
+-	<value value="54" name="A7XX_PERF_TP_STARVE_CYCLES_SP"/>
+-	<value value="55" name="A7XX_PERF_TP_STARVE_CYCLES_UCHE"/>
+-	<value value="56" name="A7XX_PERF_TP_STALL_CYCLES_UFC"/>
+-	<value value="57" name="A7XX_PERF_TP_FORMAT_DECOMP"/>
+-	<value value="58" name="A7XX_PERF_TP_FILTER_POINT_FP16"/>
+-	<value value="59" name="A7XX_PERF_TP_FILTER_POINT_FP32"/>
+-	<value value="60" name="A7XX_PERF_TP_LATENCY_FIFO_FULL"/>
+-	<value value="61" name="A7XX_PERF_TP_RESERVED_61"/>
+-	<value value="62" name="A7XX_PERF_TP_RESERVED_62"/>
+-	<value value="63" name="A7XX_PERF_TP_RESERVED_63"/>
+-	<value value="64" name="A7XX_PERF_TP_RESERVED_64"/>
+-	<value value="65" name="A7XX_PERF_TP_RESERVED_65"/>
+-	<value value="66" name="A7XX_PERF_TP_RESERVED_66"/>
+-	<value value="67" name="A7XX_PERF_TP_RESERVED_67"/>
+-	<value value="68" name="A7XX_PERF_TP_RESERVED_68"/>
+-	<value value="69" name="A7XX_PERF_TP_RESERVED_69"/>
+-	<value value="70" name="A7XX_PERF_TP_RESERVED_70"/>
+-	<value value="71" name="A7XX_PERF_TP_RESERVED_71"/>
+-	<value value="72" name="A7XX_PERF_TP_RESERVED_72"/>
+-	<value value="73" name="A7XX_PERF_TP_RESERVED_73"/>
+-	<value value="74" name="A7XX_PERF_TP_RESERVED_74"/>
+-	<value value="75" name="A7XX_PERF_TP_RESERVED_75"/>
+-	<value value="76" name="A7XX_PERF_TP_RESERVED_76"/>
+-	<value value="77" name="A7XX_PERF_TP_RESERVED_77"/>
+-	<value value="78" name="A7XX_PERF_TP_RESERVED_78"/>
+-	<value value="79" name="A7XX_PERF_TP_RESERVED_79"/>
+-	<value value="80" name="A7XX_PERF_TP_RESERVED_80"/>
+-	<value value="81" name="A7XX_PERF_TP_RESERVED_81"/>
+-	<value value="82" name="A7XX_PERF_TP_RESERVED_82"/>
+-	<value value="83" name="A7XX_PERF_TP_RESERVED_83"/>
+-	<value value="84" name="A7XX_PERF_TP_RESERVED_84"/>
+-	<value value="85" name="A7XX_PERF_TP_RESERVED_85"/>
+-	<value value="86" name="A7XX_PERF_TP_RESERVED_86"/>
+-	<value value="87" name="A7XX_PERF_TP_RESERVED_87"/>
+-	<value value="88" name="A7XX_PERF_TP_RESERVED_88"/>
+-	<value value="89" name="A7XX_PERF_TP_RESERVED_89"/>
+-	<value value="90" name="A7XX_PERF_TP_RESERVED_90"/>
+-	<value value="91" name="A7XX_PERF_TP_RESERVED_91"/>
+-	<value value="92" name="A7XX_PERF_TP_RESERVED_92"/>
+-	<value value="93" name="A7XX_PERF_TP_RESERVED_93"/>
+-	<value value="94" name="A7XX_PERF_TP_RESERVED_94"/>
+-	<value value="95" name="A7XX_PERF_TP_RESERVED_95"/>
+-	<value value="96" name="A7XX_PERF_TP_RESERVED_96"/>
+-	<value value="97" name="A7XX_PERF_TP_RESERVED_97"/>
+-	<value value="98" name="A7XX_PERF_TP_RESERVED_98"/>
+-	<value value="99" name="A7XX_PERF_TP_RESERVED_99"/>
+-	<value value="100" name="A7XX_PERF_TP_RESERVED_100"/>
+-	<value value="101" name="A7XX_PERF_TP_RESERVED_101"/>
+-	<value value="102" name="A7XX_PERF_TP_RESERVED_102"/>
+-	<value value="103" name="A7XX_PERF_TP_RESERVED_103"/>
+-	<value value="104" name="A7XX_PERF_TP_RESERVED_104"/>
+-	<value value="105" name="A7XX_PERF_TP_RESERVED_105"/>
+-	<value value="106" name="A7XX_PERF_TP_RESERVED_106"/>
+-	<value value="107" name="A7XX_PERF_TP_RESERVED_107"/>
+-	<value value="108" name="A7XX_PERF_TP_RESERVED_108"/>
+-	<value value="109" name="A7XX_PERF_TP_RESERVED_109"/>
+-	<value value="110" name="A7XX_PERF_TP_RESERVED_110"/>
+-	<value value="111" name="A7XX_PERF_TP_RESERVED_111"/>
+-	<value value="112" name="A7XX_PERF_TP_RESERVED_112"/>
+-	<value value="113" name="A7XX_PERF_TP_RESERVED_113"/>
+-	<value value="114" name="A7XX_PERF_TP_RESERVED_114"/>
+-	<value value="115" name="A7XX_PERF_TP_RESERVED_115"/>
+-	<value value="116" name="A7XX_PERF_TP_RESERVED_116"/>
+-	<value value="117" name="A7XX_PERF_TP_RESERVED_117"/>
+-	<value value="118" name="A7XX_PERF_TP_RESERVED_118"/>
+-	<value value="119" name="A7XX_PERF_TP_RESERVED_119"/>
+-	<value value="120" name="A7XX_PERF_TP_RESERVED_120"/>
+-	<value value="121" name="A7XX_PERF_TP_RESERVED_121"/>
+-	<value value="122" name="A7XX_PERF_TP_RESERVED_122"/>
+-	<value value="123" name="A7XX_PERF_TP_RESERVED_123"/>
+-	<value value="124" name="A7XX_PERF_TP_RESERVED_124"/>
+-	<value value="125" name="A7XX_PERF_TP_RESERVED_125"/>
+-	<value value="126" name="A7XX_PERF_TP_RESERVED_126"/>
+-	<value value="127" name="A7XX_PERF_TP_RESERVED_127"/>
+-	<value value="128" name="A7XX_PERF_TP_FORMAT_DECOMP_BILINEAR"/>
+-	<value value="129" name="A7XX_PERF_TP_PACKED_POINT_BOTH_VALID_FP16"/>
+-	<value value="130" name="A7XX_PERF_TP_PACKED_POINT_SINGLE_VALID_FP16"/>
+-	<value value="131" name="A7XX_PERF_TP_PACKED_POINT_BOTH_VALID_FP32"/>
+-	<value value="132" name="A7XX_PERF_TP_PACKED_POINT_SINGLE_VALID_FP32"/>
+-</enum>
+-
+-<enum name="a7xx_sp_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_SP_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_SP_ALU_WORKING_CYCLES"/>
+-	<value value="2" name="A7XX_PERF_SP_EFU_WORKING_CYCLES"/>
+-	<value value="3" name="A7XX_PERF_SP_STALL_CYCLES_VPC"/>
+-	<value value="4" name="A7XX_PERF_SP_STALL_CYCLES_TP"/>
+-	<value value="5" name="A7XX_PERF_SP_STALL_CYCLES_UCHE"/>
+-	<value value="6" name="A7XX_PERF_SP_STALL_CYCLES_RB"/>
+-	<value value="7" name="A7XX_PERF_SP_NON_EXECUTION_CYCLES"/>
+-	<value value="8" name="A7XX_PERF_SP_WAVE_CONTEXTS"/>
+-	<value value="9" name="A7XX_PERF_SP_WAVE_CONTEXT_CYCLES"/>
+-	<value value="10" name="A7XX_PERF_SP_STAGE_WAVE_CYCLES"/>
+-	<value value="11" name="A7XX_PERF_SP_STAGE_WAVE_SAMPLES"/>
+-	<value value="12" name="A7XX_PERF_SP_VS_STAGE_WAVE_CYCLES"/>
+-	<value value="13" name="A7XX_PERF_SP_VS_STAGE_WAVE_SAMPLES"/>
+-	<value value="14" name="A7XX_PERF_SP_FS_STAGE_DURATION_CYCLES"/>
+-	<value value="15" name="A7XX_PERF_SP_VS_STAGE_DURATION_CYCLES"/>
+-	<value value="16" name="A7XX_PERF_SP_WAVE_CTRL_CYCLES"/>
+-	<value value="17" name="A7XX_PERF_SP_WAVE_LOAD_CYCLES"/>
+-	<value value="18" name="A7XX_PERF_SP_WAVE_EMIT_CYCLES"/>
+-	<value value="19" name="A7XX_PERF_SP_WAVE_NOP_CYCLES"/>
+-	<value value="20" name="A7XX_PERF_SP_WAVE_WAIT_CYCLES"/>
+-	<value value="21" name="A7XX_PERF_SP_WAVE_FETCH_CYCLES"/>
+-	<value value="22" name="A7XX_PERF_SP_WAVE_IDLE_CYCLES"/>
+-	<value value="23" name="A7XX_PERF_SP_WAVE_END_CYCLES"/>
+-	<value value="24" name="A7XX_PERF_SP_WAVE_LONG_SYNC_CYCLES"/>
+-	<value value="25" name="A7XX_PERF_SP_WAVE_SHORT_SYNC_CYCLES"/>
+-	<value value="26" name="A7XX_PERF_SP_WAVE_JOIN_CYCLES"/>
+-	<value value="27" name="A7XX_PERF_SP_LM_LOAD_INSTRUCTIONS"/>
+-	<value value="28" name="A7XX_PERF_SP_LM_STORE_INSTRUCTIONS"/>
+-	<value value="29" name="A7XX_PERF_SP_LM_ATOMICS"/>
+-	<value value="30" name="A7XX_PERF_SP_GM_LOAD_INSTRUCTIONS"/>
+-	<value value="31" name="A7XX_PERF_SP_GM_STORE_INSTRUCTIONS"/>
+-	<value value="32" name="A7XX_PERF_SP_GM_ATOMICS"/>
+-	<value value="33" name="A7XX_PERF_SP_VS_STAGE_TEX_INSTRUCTIONS"/>
+-	<value value="34" name="A7XX_PERF_SP_VS_STAGE_EFU_INSTRUCTIONS"/>
+-	<value value="35" name="A7XX_PERF_SP_VS_STAGE_FULL_ALU_INSTRUCTIONS"/>
+-	<value value="36" name="A7XX_PERF_SP_VS_STAGE_HALF_ALU_INSTRUCTIONS"/>
+-	<value value="37" name="A7XX_PERF_SP_FS_STAGE_TEX_INSTRUCTIONS"/>
+-	<value value="38" name="A7XX_PERF_SP_FS_STAGE_CFLOW_INSTRUCTIONS"/>
+-	<value value="39" name="A7XX_PERF_SP_FS_STAGE_EFU_INSTRUCTIONS"/>
+-	<value value="40" name="A7XX_PERF_SP_FS_STAGE_FULL_ALU_INSTRUCTIONS"/>
+-	<value value="41" name="A7XX_PERF_SP_FS_STAGE_HALF_ALU_INSTRUCTIONS"/>
+-	<value value="42" name="A7XX_PERF_SP_FS_STAGE_BARY_INSTRUCTIONS"/>
+-	<value value="43" name="A7XX_PERF_SP_VS_INSTRUCTIONS"/>
+-	<value value="44" name="A7XX_PERF_SP_FS_INSTRUCTIONS"/>
+-	<value value="45" name="A7XX_PERF_SP_ADDR_LOCK_COUNT"/>
+-	<value value="46" name="A7XX_PERF_SP_UCHE_READ_TRANS"/>
+-	<value value="47" name="A7XX_PERF_SP_UCHE_WRITE_TRANS"/>
+-	<value value="48" name="A7XX_PERF_SP_EXPORT_VPC_TRANS"/>
+-	<value value="49" name="A7XX_PERF_SP_EXPORT_RB_TRANS"/>
+-	<value value="50" name="A7XX_PERF_SP_PIXELS_KILLED"/>
+-	<value value="51" name="A7XX_PERF_SP_ICL1_REQUESTS"/>
+-	<value value="52" name="A7XX_PERF_SP_ICL1_MISSES"/>
+-	<value value="53" name="A7XX_PERF_SP_HS_INSTRUCTIONS"/>
+-	<value value="54" name="A7XX_PERF_SP_DS_INSTRUCTIONS"/>
+-	<value value="55" name="A7XX_PERF_SP_GS_INSTRUCTIONS"/>
+-	<value value="56" name="A7XX_PERF_SP_CS_INSTRUCTIONS"/>
+-	<value value="57" name="A7XX_PERF_SP_GPR_READ"/>
+-	<value value="58" name="A7XX_PERF_SP_GPR_WRITE"/>
+-	<value value="59" name="A7XX_PERF_SP_FS_STAGE_HALF_EFU_INSTRUCTIONS"/>
+-	<value value="60" name="A7XX_PERF_SP_VS_STAGE_HALF_EFU_INSTRUCTIONS"/>
+-	<value value="61" name="A7XX_PERF_SP_LM_BANK_CONFLICTS"/>
+-	<value value="62" name="A7XX_PERF_SP_TEX_CONTROL_WORKING_CYCLES"/>
+-	<value value="63" name="A7XX_PERF_SP_LOAD_CONTROL_WORKING_CYCLES"/>
+-	<value value="64" name="A7XX_PERF_SP_FLOW_CONTROL_WORKING_CYCLES"/>
+-	<value value="65" name="A7XX_PERF_SP_LM_WORKING_CYCLES"/>
+-	<value value="66" name="A7XX_PERF_SP_DISPATCHER_WORKING_CYCLES"/>
+-	<value value="67" name="A7XX_PERF_SP_SEQUENCER_WORKING_CYCLES"/>
+-	<value value="68" name="A7XX_PERF_SP_LOW_EFFICIENCY_STARVED_BY_TP"/>
+-	<value value="69" name="A7XX_PERF_SP_STARVE_CYCLES_HLSQ"/>
+-	<value value="70" name="A7XX_PERF_SP_NON_EXECUTION_LS_CYCLES"/>
+-	<value value="71" name="A7XX_PERF_SP_WORKING_EU"/>
+-	<value value="72" name="A7XX_PERF_SP_ANY_EU_WORKING"/>
+-	<value value="73" name="A7XX_PERF_SP_WORKING_EU_FS_STAGE"/>
+-	<value value="74" name="A7XX_PERF_SP_ANY_EU_WORKING_FS_STAGE"/>
+-	<value value="75" name="A7XX_PERF_SP_WORKING_EU_VS_STAGE"/>
+-	<value value="76" name="A7XX_PERF_SP_ANY_EU_WORKING_VS_STAGE"/>
+-	<value value="77" name="A7XX_PERF_SP_WORKING_EU_CS_STAGE"/>
+-	<value value="78" name="A7XX_PERF_SP_ANY_EU_WORKING_CS_STAGE"/>
+-	<value value="79" name="A7XX_PERF_SP_GPR_READ_PREFETCH"/>
+-	<value value="80" name="A7XX_PERF_SP_GPR_READ_CONFLICT"/>
+-	<value value="81" name="A7XX_PERF_SP_GPR_WRITE_CONFLICT"/>
+-	<value value="82" name="A7XX_PERF_SP_GM_LOAD_LATENCY_CYCLES"/>
+-	<value value="83" name="A7XX_PERF_SP_GM_LOAD_LATENCY_SAMPLES"/>
+-	<value value="84" name="A7XX_PERF_SP_EXECUTABLE_WAVES"/>
+-	<value value="85" name="A7XX_PERF_SP_ICL1_MISS_FETCH_CYCLES"/>
+-	<value value="86" name="A7XX_PERF_SP_WORKING_EU_LPAC"/>
+-	<value value="87" name="A7XX_PERF_SP_BYPASS_BUSY_CYCLES"/>
+-	<value value="88" name="A7XX_PERF_SP_ANY_EU_WORKING_LPAC"/>
+-	<value value="89" name="A7XX_PERF_SP_WAVE_ALU_CYCLES"/>
+-	<value value="90" name="A7XX_PERF_SP_WAVE_EFU_CYCLES"/>
+-	<value value="91" name="A7XX_PERF_SP_WAVE_INT_CYCLES"/>
+-	<value value="92" name="A7XX_PERF_SP_WAVE_CSP_CYCLES"/>
+-	<value value="93" name="A7XX_PERF_SP_EWAVE_CONTEXTS"/>
+-	<value value="94" name="A7XX_PERF_SP_EWAVE_CONTEXT_CYCLES"/>
+-	<value value="95" name="A7XX_PERF_SP_LPAC_BUSY_CYCLES"/>
+-	<value value="96" name="A7XX_PERF_SP_LPAC_INSTRUCTIONS"/>
+-	<value value="97" name="A7XX_PERF_SP_FS_STAGE_1X_WAVES"/>
+-	<value value="98" name="A7XX_PERF_SP_FS_STAGE_2X_WAVES"/>
+-	<value value="99" name="A7XX_PERF_SP_QUADS"/>
+-	<value value="100" name="A7XX_PERF_SP_CS_INVOCATIONS"/>
+-	<value value="101" name="A7XX_PERF_SP_PIXELS"/>
+-	<value value="102" name="A7XX_PERF_SP_LPAC_DRAWCALLS"/>
+-	<value value="103" name="A7XX_PERF_SP_PI_WORKING_CYCLES"/>
+-	<value value="104" name="A7XX_PERF_SP_WAVE_INPUT_CYCLES"/>
+-	<value value="105" name="A7XX_PERF_SP_WAVE_OUTPUT_CYCLES"/>
+-	<value value="106" name="A7XX_PERF_SP_WAVE_HWAVE_WAIT_CYCLES"/>
+-	<value value="107" name="A7XX_PERF_SP_WAVE_HWAVE_SYNC"/>
+-	<value value="108" name="A7XX_PERF_SP_OUTPUT_3D_PIXELS"/>
+-	<value value="109" name="A7XX_PERF_SP_FULL_ALU_MAD_INSTRUCTIONS"/>
+-	<value value="110" name="A7XX_PERF_SP_HALF_ALU_MAD_INSTRUCTIONS"/>
+-	<value value="111" name="A7XX_PERF_SP_FULL_ALU_MUL_INSTRUCTIONS"/>
+-	<value value="112" name="A7XX_PERF_SP_HALF_ALU_MUL_INSTRUCTIONS"/>
+-	<value value="113" name="A7XX_PERF_SP_FULL_ALU_ADD_INSTRUCTIONS"/>
+-	<value value="114" name="A7XX_PERF_SP_HALF_ALU_ADD_INSTRUCTIONS"/>
+-	<value value="115" name="A7XX_PERF_SP_BARY_FP32_INSTRUCTIONS"/>
+-	<value value="116" name="A7XX_PERF_SP_ALU_GPR_READ_CYCLES"/>
+-	<value value="117" name="A7XX_PERF_SP_ALU_DATA_FORWARDING_CYCLES"/>
+-	<value value="118" name="A7XX_PERF_SP_LM_FULL_CYCLES"/>
+-	<value value="119" name="A7XX_PERF_SP_TEXTURE_FETCH_LATENCY_CYCLES"/>
+-	<value value="120" name="A7XX_PERF_SP_TEXTURE_FETCH_LATENCY_SAMPLES"/>
+-	<value value="121" name="A7XX_PERF_SP_FS_STAGE_PI_TEX_INSTRUCTION"/>
+-	<value value="122" name="A7XX_PERF_SP_RAY_QUERY_INSTRUCTIONS"/>
+-	<value value="123" name="A7XX_PERF_SP_RBRT_KICKOFF_FIBERS"/>
+-	<value value="124" name="A7XX_PERF_SP_RBRT_KICKOFF_DQUADS"/>
+-	<value value="125" name="A7XX_PERF_SP_RTU_BUSY_CYCLES"/>
+-	<value value="126" name="A7XX_PERF_SP_RTU_L0_HITS"/>
+-	<value value="127" name="A7XX_PERF_SP_RTU_L0_MISSES"/>
+-	<value value="128" name="A7XX_PERF_SP_RTU_L0_HIT_ON_MISS"/>
+-	<value value="129" name="A7XX_PERF_SP_RTU_STALL_CYCLES_WAVE_QUEUE"/>
+-	<value value="130" name="A7XX_PERF_SP_RTU_STALL_CYCLES_L0_HIT_QUEUE"/>
+-	<value value="131" name="A7XX_PERF_SP_RTU_STALL_CYCLES_L0_MISS_QUEUE"/>
+-	<value value="132" name="A7XX_PERF_SP_RTU_STALL_CYCLES_L0D_IDX_QUEUE"/>
+-	<value value="133" name="A7XX_PERF_SP_RTU_STALL_CYCLES_L0DATA"/>
+-	<value value="134" name="A7XX_PERF_SP_RTU_STALL_CYCLES_REPLACE_CNT"/>
+-	<value value="135" name="A7XX_PERF_SP_RTU_STALL_CYCLES_MRG_CNT"/>
+-	<value value="136" name="A7XX_PERF_SP_RTU_STALL_CYCLES_UCHE"/>
+-	<value value="137" name="A7XX_PERF_SP_RTU_OPERAND_FETCH_STALL_CYCLES_L0"/>
+-	<value value="138" name="A7XX_PERF_SP_RTU_OPERAND_FETCH_STALL_CYCLES_INS_FIFO"/>
+-	<value value="139" name="A7XX_PERF_SP_RTU_BVH_FETCH_LATENCY_CYCLES"/>
+-	<value value="140" name="A7XX_PERF_SP_RTU_BVH_FETCH_LATENCY_SAMPLES"/>
+-	<value value="141" name="A7XX_PERF_SP_STCHE_MISS_INC_VS"/>
+-	<value value="142" name="A7XX_PERF_SP_STCHE_MISS_INC_FS"/>
+-	<value value="143" name="A7XX_PERF_SP_STCHE_MISS_INC_BV"/>
+-	<value value="144" name="A7XX_PERF_SP_STCHE_MISS_INC_LPAC"/>
+-	<value value="145" name="A7XX_PERF_SP_VGPR_ACTIVE_CONTEXTS"/>
+-	<value value="146" name="A7XX_PERF_SP_PGPR_ALLOC_CONTEXTS"/>
+-	<value value="147" name="A7XX_PERF_SP_VGPR_ALLOC_CONTEXTS"/>
+-	<value value="148" name="A7XX_PERF_SP_RTU_RAY_BOX_INTERSECTIONS"/>
+-	<value value="149" name="A7XX_PERF_SP_RTU_RAY_TRIANGLE_INTERSECTIONS"/>
+-	<value value="150" name="A7XX_PERF_SP_SCH_STALL_CYCLES_RTU"/>
+-</enum>
+-
+-<enum name="a7xx_rb_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_RB_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_RB_STALL_CYCLES_HLSQ"/>
+-	<value value="2" name="A7XX_PERF_RB_STALL_CYCLES_FIFO0_FULL"/>
+-	<value value="3" name="A7XX_PERF_RB_STALL_CYCLES_FIFO1_FULL"/>
+-	<value value="4" name="A7XX_PERF_RB_STALL_CYCLES_FIFO2_FULL"/>
+-	<value value="5" name="A7XX_PERF_RB_STARVE_CYCLES_SP"/>
+-	<value value="6" name="A7XX_PERF_RB_STARVE_CYCLES_LRZ_TILE"/>
+-	<value value="7" name="A7XX_PERF_RB_STARVE_CYCLES_CCU"/>
+-	<value value="8" name="A7XX_PERF_RB_STARVE_CYCLES_Z_PLANE"/>
+-	<value value="9" name="A7XX_PERF_RB_STARVE_CYCLES_BARY_PLANE"/>
+-	<value value="10" name="A7XX_PERF_RB_Z_WORKLOAD"/>
+-	<value value="11" name="A7XX_PERF_RB_HLSQ_ACTIVE"/>
+-	<value value="12" name="A7XX_PERF_RB_Z_READ"/>
+-	<value value="13" name="A7XX_PERF_RB_Z_WRITE"/>
+-	<value value="14" name="A7XX_PERF_RB_C_READ"/>
+-	<value value="15" name="A7XX_PERF_RB_C_WRITE"/>
+-	<value value="16" name="A7XX_PERF_RB_TOTAL_PASS"/>
+-	<value value="17" name="A7XX_PERF_RB_Z_PASS"/>
+-	<value value="18" name="A7XX_PERF_RB_Z_FAIL"/>
+-	<value value="19" name="A7XX_PERF_RB_S_FAIL"/>
+-	<value value="20" name="A7XX_PERF_RB_BLENDED_FXP_COMPONENTS"/>
+-	<value value="21" name="A7XX_PERF_RB_BLENDED_FP16_COMPONENTS"/>
+-	<value value="22" name="A7XX_PERF_RB_PS_INVOCATIONS"/>
+-	<value value="23" name="A7XX_PERF_RB_2D_ALIVE_CYCLES"/>
+-	<value value="24" name="A7XX_PERF_RB_2D_STALL_CYCLES_A2D"/>
+-	<value value="25" name="A7XX_PERF_RB_2D_STARVE_CYCLES_SRC"/>
+-	<value value="26" name="A7XX_PERF_RB_2D_STARVE_CYCLES_SP"/>
+-	<value value="27" name="A7XX_PERF_RB_2D_STARVE_CYCLES_DST"/>
+-	<value value="28" name="A7XX_PERF_RB_2D_VALID_PIXELS"/>
+-	<value value="29" name="A7XX_PERF_RB_3D_PIXELS"/>
+-	<value value="30" name="A7XX_PERF_RB_BLENDER_WORKING_CYCLES"/>
+-	<value value="31" name="A7XX_PERF_RB_ZPROC_WORKING_CYCLES"/>
+-	<value value="32" name="A7XX_PERF_RB_CPROC_WORKING_CYCLES"/>
+-	<value value="33" name="A7XX_PERF_RB_SAMPLER_WORKING_CYCLES"/>
+-	<value value="34" name="A7XX_PERF_RB_STALL_CYCLES_CCU_COLOR_READ"/>
+-	<value value="35" name="A7XX_PERF_RB_STALL_CYCLES_CCU_COLOR_WRITE"/>
+-	<value value="36" name="A7XX_PERF_RB_STALL_CYCLES_CCU_DEPTH_READ"/>
+-	<value value="37" name="A7XX_PERF_RB_STALL_CYCLES_CCU_DEPTH_WRITE"/>
+-	<value value="38" name="A7XX_PERF_RB_STALL_CYCLES_VPC"/>
+-	<value value="39" name="A7XX_PERF_RB_2D_INPUT_TRANS"/>
+-	<value value="40" name="A7XX_PERF_RB_2D_OUTPUT_RB_DST_TRANS"/>
+-	<value value="41" name="A7XX_PERF_RB_2D_OUTPUT_RB_SRC_TRANS"/>
+-	<value value="42" name="A7XX_PERF_RB_BLENDED_FP32_COMPONENTS"/>
+-	<value value="43" name="A7XX_PERF_RB_COLOR_PIX_TILES"/>
+-	<value value="44" name="A7XX_PERF_RB_STALL_CYCLES_CCU"/>
+-	<value value="45" name="A7XX_PERF_RB_EARLY_Z_ARB3_GRANT"/>
+-	<value value="46" name="A7XX_PERF_RB_LATE_Z_ARB3_GRANT"/>
+-	<value value="47" name="A7XX_PERF_RB_EARLY_Z_SKIP_GRANT"/>
+-	<value value="48" name="A7XX_PERF_RB_VRS_1x1_QUADS"/>
+-	<value value="49" name="A7XX_PERF_RB_VRS_2x1_QUADS"/>
+-	<value value="50" name="A7XX_PERF_RB_VRS_1x2_QUADS"/>
+-	<value value="51" name="A7XX_PERF_RB_VRS_2x2_QUADS"/>
+-	<value value="52" name="A7XX_PERF_RB_VRS_4x2_QUADS"/>
+-	<value value="53" name="A7XX_PERF_RB_VRS_4x4_QUADS"/>
+-</enum>
+-
+-<enum name="a7xx_vsc_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_VSC_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_VSC_WORKING_CYCLES"/>
+-	<value value="2" name="A7XX_PERF_VSC_STALL_CYCLES_UCHE"/>
+-	<value value="3" name="A7XX_PERF_VSC_EOT_NUM"/>
+-	<value value="4" name="A7XX_PERF_VSC_INPUT_TILES"/>
+-</enum>
+-
+-<enum name="a7xx_ccu_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_CCU_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_CCU_STALL_CYCLES_RB_DEPTH_RETURN"/>
+-	<value value="2" name="A7XX_PERF_CCU_STALL_CYCLES_RB_COLOR_RETURN"/>
+-	<value value="3" name="A7XX_PERF_CCU_DEPTH_BLOCKS"/>
+-	<value value="4" name="A7XX_PERF_CCU_COLOR_BLOCKS"/>
+-	<value value="5" name="A7XX_PERF_CCU_DEPTH_BLOCK_HIT"/>
+-	<value value="6" name="A7XX_PERF_CCU_COLOR_BLOCK_HIT"/>
+-	<value value="7" name="A7XX_PERF_CCU_PARTIAL_BLOCK_READ"/>
+-	<value value="8" name="A7XX_PERF_CCU_GMEM_READ"/>
+-	<value value="9" name="A7XX_PERF_CCU_GMEM_WRITE"/>
+-	<value value="10" name="A7XX_PERF_CCU_2D_RD_REQ"/>
+-	<value value="11" name="A7XX_PERF_CCU_2D_WR_REQ"/>
+-	<value value="12" name="A7XX_PERF_CCU_UBWC_COLOR_BLOCKS_CONCURRENT"/>
+-	<value value="13" name="A7XX_PERF_CCU_UBWC_DEPTH_BLOCKS_CONCURRENT"/>
+-	<value value="14" name="A7XX_PERF_CCU_COLOR_RESOLVE_DROPPED"/>
+-	<value value="15" name="A7XX_PERF_CCU_DEPTH_RESOLVE_DROPPED"/>
+-	<value value="16" name="A7XX_PERF_CCU_COLOR_RENDER_CONCURRENT"/>
+-	<value value="17" name="A7XX_PERF_CCU_DEPTH_RENDER_CONCURRENT"/>
+-	<value value="18" name="A7XX_PERF_CCU_COLOR_RESOLVE_AFTER_RENDER"/>
+-	<value value="19" name="A7XX_PERF_CCU_DEPTH_RESOLVE_AFTER_RENDER"/>
+-	<value value="20" name="A7XX_PERF_CCU_GMEM_EXTRA_DEPTH_READ"/>
+-	<value value="21" name="A7XX_PERF_CCU_GMEM_COLOR_READ_4AA"/>
+-	<value value="22" name="A7XX_PERF_CCU_GMEM_COLOR_READ_4AA_FULL"/>
+-</enum>
+-
+-<enum name="a7xx_lrz_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_LRZ_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_LRZ_STARVE_CYCLES_RAS"/>
+-	<value value="2" name="A7XX_PERF_LRZ_STALL_CYCLES_RB"/>
+-	<value value="3" name="A7XX_PERF_LRZ_STALL_CYCLES_VSC"/>
+-	<value value="4" name="A7XX_PERF_LRZ_STALL_CYCLES_VPC"/>
+-	<value value="5" name="A7XX_PERF_LRZ_STALL_CYCLES_FLAG_PREFETCH"/>
+-	<value value="6" name="A7XX_PERF_LRZ_STALL_CYCLES_UCHE"/>
+-	<value value="7" name="A7XX_PERF_LRZ_LRZ_READ"/>
+-	<value value="8" name="A7XX_PERF_LRZ_LRZ_WRITE"/>
+-	<value value="9" name="A7XX_PERF_LRZ_READ_LATENCY"/>
+-	<value value="10" name="A7XX_PERF_LRZ_MERGE_CACHE_UPDATING"/>
+-	<value value="11" name="A7XX_PERF_LRZ_PRIM_KILLED_BY_MASKGEN"/>
+-	<value value="12" name="A7XX_PERF_LRZ_PRIM_KILLED_BY_LRZ"/>
+-	<value value="13" name="A7XX_PERF_LRZ_VISIBLE_PRIM_AFTER_LRZ"/>
+-	<value value="14" name="A7XX_PERF_LRZ_FULL_8X8_TILES"/>
+-	<value value="15" name="A7XX_PERF_LRZ_PARTIAL_8X8_TILES"/>
+-	<value value="16" name="A7XX_PERF_LRZ_TILE_KILLED"/>
+-	<value value="17" name="A7XX_PERF_LRZ_TOTAL_PIXEL"/>
+-	<value value="18" name="A7XX_PERF_LRZ_VISIBLE_PIXEL_AFTER_LRZ"/>
+-	<value value="19" name="A7XX_PERF_LRZ_FEEDBACK_ACCEPT"/>
+-	<value value="20" name="A7XX_PERF_LRZ_FEEDBACK_DISCARD"/>
+-	<value value="21" name="A7XX_PERF_LRZ_FEEDBACK_STALL"/>
+-	<value value="22" name="A7XX_PERF_LRZ_STALL_CYCLES_RB_ZPLANE"/>
+-	<value value="23" name="A7XX_PERF_LRZ_STALL_CYCLES_RB_BPLANE"/>
+-	<value value="24" name="A7XX_PERF_LRZ_RAS_MASK_TRANS"/>
+-	<value value="25" name="A7XX_PERF_LRZ_STALL_CYCLES_MVC"/>
+-	<value value="26" name="A7XX_PERF_LRZ_TILE_KILLED_BY_IMAGE_VRS"/>
+-	<value value="27" name="A7XX_PERF_LRZ_TILE_KILLED_BY_Z"/>
+-</enum>
+-
+-<enum name="a7xx_cmp_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_CMPDECMP_STALL_CYCLES_ARB"/>
+-	<value value="1" name="A7XX_PERF_CMPDECMP_VBIF_LATENCY_CYCLES"/>
+-	<value value="2" name="A7XX_PERF_CMPDECMP_VBIF_LATENCY_SAMPLES"/>
+-	<value value="3" name="A7XX_PERF_CMPDECMP_VBIF_READ_DATA_CCU"/>
+-	<value value="4" name="A7XX_PERF_CMPDECMP_VBIF_WRITE_DATA_CCU"/>
+-	<value value="5" name="A7XX_PERF_CMPDECMP_VBIF_READ_REQUEST"/>
+-	<value value="6" name="A7XX_PERF_CMPDECMP_VBIF_WRITE_REQUEST"/>
+-	<value value="7" name="A7XX_PERF_CMPDECMP_VBIF_READ_DATA"/>
+-	<value value="8" name="A7XX_PERF_CMPDECMP_VBIF_WRITE_DATA"/>
+-	<value value="9" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG1_COUNT"/>
+-	<value value="10" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG2_COUNT"/>
+-	<value value="11" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG3_COUNT"/>
+-	<value value="12" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG4_COUNT"/>
+-	<value value="13" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG5_COUNT"/>
+-	<value value="14" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG6_COUNT"/>
+-	<value value="15" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG8_COUNT"/>
+-	<value value="16" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG1_COUNT"/>
+-	<value value="17" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG2_COUNT"/>
+-	<value value="18" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG3_COUNT"/>
+-	<value value="19" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG4_COUNT"/>
+-	<value value="20" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG5_COUNT"/>
+-	<value value="21" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG6_COUNT"/>
+-	<value value="22" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG8_COUNT"/>
+-	<value value="23" name="A7XX_PERF_CMPDECMP_VBIF_READ_DATA_UCHE_CH0"/>
+-	<value value="24" name="A7XX_PERF_CMPDECMP_VBIF_READ_DATA_UCHE_CH1"/>
+-	<value value="25" name="A7XX_PERF_CMPDECMP_VBIF_WRITE_DATA_UCHE"/>
+-	<value value="26" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG0_COUNT"/>
+-	<value value="27" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG0_COUNT"/>
+-	<value value="28" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAGALPHA_COUNT"/>
+-	<value value="29" name="A7XX_PERF_CMPDECMP_RESOLVE_EVENTS"/>
+-	<value value="30" name="A7XX_PERF_CMPDECMP_CONCURRENT_RESOLVE_EVENTS"/>
+-	<value value="31" name="A7XX_PERF_CMPDECMP_DROPPED_CLEAR_EVENTS"/>
+-	<value value="32" name="A7XX_PERF_CMPDECMP_ST_BLOCKS_CONCURRENT"/>
+-	<value value="33" name="A7XX_PERF_CMPDECMP_LRZ_ST_BLOCKS_CONCURRENT"/>
+-	<value value="34" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG0_COUNT"/>
+-	<value value="35" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG1_COUNT"/>
+-	<value value="36" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG2_COUNT"/>
+-	<value value="37" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG3_COUNT"/>
+-	<value value="38" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG4_COUNT"/>
+-	<value value="39" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG5_COUNT"/>
+-	<value value="40" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG6_COUNT"/>
+-	<value value="41" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG8_COUNT"/>
+-	<value value="42" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG0_COUNT"/>
+-	<value value="43" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG1_COUNT"/>
+-	<value value="44" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG2_COUNT"/>
+-	<value value="45" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG3_COUNT"/>
+-	<value value="46" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG4_COUNT"/>
+-	<value value="47" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG5_COUNT"/>
+-	<value value="48" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG6_COUNT"/>
+-	<value value="49" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG8_COUNT"/>
+-</enum>
+-
+-<enum name="a7xx_gbif_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_GBIF_RESERVED_0"/>
+-	<value value="1" name="A7XX_PERF_GBIF_RESERVED_1"/>
+-	<value value="2" name="A7XX_PERF_GBIF_RESERVED_2"/>
+-	<value value="3" name="A7XX_PERF_GBIF_RESERVED_3"/>
+-	<value value="4" name="A7XX_PERF_GBIF_RESERVED_4"/>
+-	<value value="5" name="A7XX_PERF_GBIF_RESERVED_5"/>
+-	<value value="6" name="A7XX_PERF_GBIF_RESERVED_6"/>
+-	<value value="7" name="A7XX_PERF_GBIF_RESERVED_7"/>
+-	<value value="8" name="A7XX_PERF_GBIF_RESERVED_8"/>
+-	<value value="9" name="A7XX_PERF_GBIF_RESERVED_9"/>
+-	<value value="10" name="A7XX_PERF_GBIF_AXI0_READ_REQUESTS_TOTAL"/>
+-	<value value="11" name="A7XX_PERF_GBIF_AXI1_READ_REQUESTS_TOTAL"/>
+-	<value value="12" name="A7XX_PERF_GBIF_RESERVED_12"/>
+-	<value value="13" name="A7XX_PERF_GBIF_RESERVED_13"/>
+-	<value value="14" name="A7XX_PERF_GBIF_RESERVED_14"/>
+-	<value value="15" name="A7XX_PERF_GBIF_RESERVED_15"/>
+-	<value value="16" name="A7XX_PERF_GBIF_RESERVED_16"/>
+-	<value value="17" name="A7XX_PERF_GBIF_RESERVED_17"/>
+-	<value value="18" name="A7XX_PERF_GBIF_RESERVED_18"/>
+-	<value value="19" name="A7XX_PERF_GBIF_RESERVED_19"/>
+-	<value value="20" name="A7XX_PERF_GBIF_RESERVED_20"/>
+-	<value value="21" name="A7XX_PERF_GBIF_RESERVED_21"/>
+-	<value value="22" name="A7XX_PERF_GBIF_AXI0_WRITE_REQUESTS_TOTAL"/>
+-	<value value="23" name="A7XX_PERF_GBIF_AXI1_WRITE_REQUESTS_TOTAL"/>
+-	<value value="24" name="A7XX_PERF_GBIF_RESERVED_24"/>
+-	<value value="25" name="A7XX_PERF_GBIF_RESERVED_25"/>
+-	<value value="26" name="A7XX_PERF_GBIF_RESERVED_26"/>
+-	<value value="27" name="A7XX_PERF_GBIF_RESERVED_27"/>
+-	<value value="28" name="A7XX_PERF_GBIF_RESERVED_28"/>
+-	<value value="29" name="A7XX_PERF_GBIF_RESERVED_29"/>
+-	<value value="30" name="A7XX_PERF_GBIF_RESERVED_30"/>
+-	<value value="31" name="A7XX_PERF_GBIF_RESERVED_31"/>
+-	<value value="32" name="A7XX_PERF_GBIF_RESERVED_32"/>
+-	<value value="33" name="A7XX_PERF_GBIF_RESERVED_33"/>
+-	<value value="34" name="A7XX_PERF_GBIF_AXI0_READ_DATA_BEATS_TOTAL"/>
+-	<value value="35" name="A7XX_PERF_GBIF_AXI1_READ_DATA_BEATS_TOTAL"/>
+-	<value value="36" name="A7XX_PERF_GBIF_RESERVED_36"/>
+-	<value value="37" name="A7XX_PERF_GBIF_RESERVED_37"/>
+-	<value value="38" name="A7XX_PERF_GBIF_RESERVED_38"/>
+-	<value value="39" name="A7XX_PERF_GBIF_RESERVED_39"/>
+-	<value value="40" name="A7XX_PERF_GBIF_RESERVED_40"/>
+-	<value value="41" name="A7XX_PERF_GBIF_RESERVED_41"/>
+-	<value value="42" name="A7XX_PERF_GBIF_RESERVED_42"/>
+-	<value value="43" name="A7XX_PERF_GBIF_RESERVED_43"/>
+-	<value value="44" name="A7XX_PERF_GBIF_RESERVED_44"/>
+-	<value value="45" name="A7XX_PERF_GBIF_RESERVED_45"/>
+-	<value value="46" name="A7XX_PERF_GBIF_AXI0_WRITE_DATA_BEATS_TOTAL"/>
+-	<value value="47" name="A7XX_PERF_GBIF_AXI1_WRITE_DATA_BEATS_TOTAL"/>
+-	<value value="48" name="A7XX_PERF_GBIF_RESERVED_48"/>
+-	<value value="49" name="A7XX_PERF_GBIF_RESERVED_49"/>
+-	<value value="50" name="A7XX_PERF_GBIF_RESERVED_50"/>
+-	<value value="51" name="A7XX_PERF_GBIF_RESERVED_51"/>
+-	<value value="52" name="A7XX_PERF_GBIF_RESERVED_52"/>
+-	<value value="53" name="A7XX_PERF_GBIF_RESERVED_53"/>
+-	<value value="54" name="A7XX_PERF_GBIF_RESERVED_54"/>
+-	<value value="55" name="A7XX_PERF_GBIF_RESERVED_55"/>
+-	<value value="56" name="A7XX_PERF_GBIF_RESERVED_56"/>
+-	<value value="57" name="A7XX_PERF_GBIF_RESERVED_57"/>
+-	<value value="58" name="A7XX_PERF_GBIF_RESERVED_58"/>
+-	<value value="59" name="A7XX_PERF_GBIF_RESERVED_59"/>
+-	<value value="60" name="A7XX_PERF_GBIF_RESERVED_60"/>
+-	<value value="61" name="A7XX_PERF_GBIF_RESERVED_61"/>
+-	<value value="62" name="A7XX_PERF_GBIF_RESERVED_62"/>
+-	<value value="63" name="A7XX_PERF_GBIF_RESERVED_63"/>
+-	<value value="64" name="A7XX_PERF_GBIF_RESERVED_64"/>
+-	<value value="65" name="A7XX_PERF_GBIF_RESERVED_65"/>
+-	<value value="66" name="A7XX_PERF_GBIF_RESERVED_66"/>
+-	<value value="67" name="A7XX_PERF_GBIF_RESERVED_67"/>
+-	<value value="68" name="A7XX_PERF_GBIF_CYCLES_CH0_HELD_OFF_RD_ALL"/>
+-	<value value="69" name="A7XX_PERF_GBIF_CYCLES_CH1_HELD_OFF_RD_ALL"/>
+-	<value value="70" name="A7XX_PERF_GBIF_CYCLES_CH0_HELD_OFF_WR_ALL"/>
+-	<value value="71" name="A7XX_PERF_GBIF_CYCLES_CH1_HELD_OFF_WR_ALL"/>
+-	<value value="72" name="A7XX_PERF_GBIF_AXI_CH0_REQUEST_HELD_OFF"/>
+-	<value value="73" name="A7XX_PERF_GBIF_AXI_CH1_REQUEST_HELD_OFF"/>
+-	<value value="74" name="A7XX_PERF_GBIF_AXI_REQUEST_HELD_OFF"/>
+-	<value value="75" name="A7XX_PERF_GBIF_AXI_CH0_WRITE_DATA_HELD_OFF"/>
+-	<value value="76" name="A7XX_PERF_GBIF_AXI_CH1_WRITE_DATA_HELD_OFF"/>
+-	<value value="77" name="A7XX_PERF_GBIF_AXI_ALL_WRITE_DATA_HELD_OFF"/>
+-	<value value="78" name="A7XX_PERF_GBIF_AXI_ALL_READ_BEATS"/>
+-	<value value="79" name="A7XX_PERF_GBIF_AXI_ALL_WRITE_BEATS"/>
+-	<value value="80" name="A7XX_PERF_GBIF_AXI_ALL_BEATS"/>
+-</enum>
+-
+-<enum name="a7xx_ufc_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_UFC_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_UFC_READ_DATA_VBIF"/>
+-	<value value="2" name="A7XX_PERF_UFC_WRITE_DATA_VBIF"/>
+-	<value value="3" name="A7XX_PERF_UFC_READ_REQUEST_VBIF"/>
+-	<value value="4" name="A7XX_PERF_UFC_WRITE_REQUEST_VBIF"/>
+-	<value value="5" name="A7XX_PERF_UFC_LRZ_FILTER_HIT"/>
+-	<value value="6" name="A7XX_PERF_UFC_LRZ_FILTER_MISS"/>
+-	<value value="7" name="A7XX_PERF_UFC_CRE_FILTER_HIT"/>
+-	<value value="8" name="A7XX_PERF_UFC_CRE_FILTER_MISS"/>
+-	<value value="9" name="A7XX_PERF_UFC_SP_FILTER_HIT"/>
+-	<value value="10" name="A7XX_PERF_UFC_SP_FILTER_MISS"/>
+-	<value value="11" name="A7XX_PERF_UFC_SP_REQUESTS"/>
+-	<value value="12" name="A7XX_PERF_UFC_TP_FILTER_HIT"/>
+-	<value value="13" name="A7XX_PERF_UFC_TP_FILTER_MISS"/>
+-	<value value="14" name="A7XX_PERF_UFC_TP_REQUESTS"/>
+-	<value value="15" name="A7XX_PERF_UFC_MAIN_HIT_LRZ_PREFETCH"/>
+-	<value value="16" name="A7XX_PERF_UFC_MAIN_HIT_CRE_PREFETCH"/>
+-	<value value="17" name="A7XX_PERF_UFC_MAIN_HIT_SP_PREFETCH"/>
+-	<value value="18" name="A7XX_PERF_UFC_MAIN_HIT_TP_PREFETCH"/>
+-	<value value="19" name="A7XX_PERF_UFC_MAIN_HIT_UBWC_READ"/>
+-	<value value="20" name="A7XX_PERF_UFC_MAIN_HIT_UBWC_WRITE"/>
+-	<value value="21" name="A7XX_PERF_UFC_MAIN_MISS_LRZ_PREFETCH"/>
+-	<value value="22" name="A7XX_PERF_UFC_MAIN_MISS_CRE_PREFETCH"/>
+-	<value value="23" name="A7XX_PERF_UFC_MAIN_MISS_SP_PREFETCH"/>
+-	<value value="24" name="A7XX_PERF_UFC_MAIN_MISS_TP_PREFETCH"/>
+-	<value value="25" name="A7XX_PERF_UFC_MAIN_MISS_UBWC_READ"/>
+-	<value value="26" name="A7XX_PERF_UFC_MAIN_MISS_UBWC_WRITE"/>
+-	<value value="27" name="A7XX_PERF_UFC_UBWC_READ_UFC_TRANS"/>
+-	<value value="28" name="A7XX_PERF_UFC_UBWC_WRITE_UFC_TRANS"/>
+-	<value value="29" name="A7XX_PERF_UFC_STALL_CYCLES_GBIF_CMD"/>
+-	<value value="30" name="A7XX_PERF_UFC_STALL_CYCLES_GBIF_RDATA"/>
+-	<value value="31" name="A7XX_PERF_UFC_STALL_CYCLES_GBIF_WDATA"/>
+-	<value value="32" name="A7XX_PERF_UFC_STALL_CYCLES_UBWC_WR_FLAG"/>
+-	<value value="33" name="A7XX_PERF_UFC_STALL_CYCLES_UBWC_FLAG_RTN"/>
+-	<value value="34" name="A7XX_PERF_UFC_STALL_CYCLES_UBWC_EVENT"/>
+-	<value value="35" name="A7XX_PERF_UFC_LRZ_PREFETCH_STALLED_CYCLES"/>
+-	<value value="36" name="A7XX_PERF_UFC_CRE_PREFETCH_STALLED_CYCLES"/>
+-	<value value="37" name="A7XX_PERF_UFC_SPTP_PREFETCH_STALLED_CYCLES"/>
+-	<value value="38" name="A7XX_PERF_UFC_UBWC_RD_STALLED_CYCLES"/>
+-	<value value="39" name="A7XX_PERF_UFC_UBWC_WR_STALLED_CYCLES"/>
+-	<value value="40" name="A7XX_PERF_UFC_PREFETCH_STALLED_CYCLES"/>
+-	<value value="41" name="A7XX_PERF_UFC_EVICTION_STALLED_CYCLES"/>
+-	<value value="42" name="A7XX_PERF_UFC_LOCK_STALLED_CYCLES"/>
+-	<value value="43" name="A7XX_PERF_UFC_MISS_LATENCY_CYCLES"/>
+-	<value value="44" name="A7XX_PERF_UFC_MISS_LATENCY_SAMPLES"/>
+-	<value value="45" name="A7XX_PERF_UFC_UBWC_REQ_STALLED_CYCLES"/>
+-	<value value="46" name="A7XX_PERF_UFC_TP_HINT_TAG_MISS"/>
+-	<value value="47" name="A7XX_PERF_UFC_TP_HINT_TAG_HIT_RDY"/>
+-	<value value="48" name="A7XX_PERF_UFC_TP_HINT_TAG_HIT_NRDY"/>
+-	<value value="49" name="A7XX_PERF_UFC_TP_HINT_IS_FCLEAR"/>
+-	<value value="50" name="A7XX_PERF_UFC_TP_HINT_IS_ALPHA0"/>
+-	<value value="51" name="A7XX_PERF_UFC_SP_L1_FILTER_HIT"/>
+-	<value value="52" name="A7XX_PERF_UFC_SP_L1_FILTER_MISS"/>
+-	<value value="53" name="A7XX_PERF_UFC_SP_L1_FILTER_REQUESTS"/>
+-	<value value="54" name="A7XX_PERF_UFC_TP_L1_TAG_HIT_RDY"/>
+-	<value value="55" name="A7XX_PERF_UFC_TP_L1_TAG_HIT_NRDY"/>
+-	<value value="56" name="A7XX_PERF_UFC_TP_L1_TAG_MISS"/>
+-	<value value="57" name="A7XX_PERF_UFC_TP_L1_FILTER_REQUESTS"/>
+-</enum>
+-
+ <domain name="A6XX" width="32" prefix="variant" varset="chip">
+ 	<bitset name="A6XX_RBBM_INT_0_MASK" inline="no" varset="chip">
+ 		<bitfield name="RBBM_GPU_IDLE" pos="0" type="boolean"/>
+@@ -2371,7 +177,7 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x08ab" name="CP_CONTEXT_SWITCH_LEVEL_STATUS" variants="A7XX-"/>
+ 	<array offset="0x08D0" name="CP_PERFCTR_CP_SEL" stride="1" length="14"/>
+ 	<array offset="0x08e0" name="CP_BV_PERFCTR_CP_SEL" stride="1" length="7" variants="A7XX-"/>
+-	<reg64 offset="0x0900" name="CP_CRASH_SCRIPT_BASE"/>
++	<reg64 offset="0x0900" name="CP_CRASH_DUMP_SCRIPT_BASE"/>
+ 	<reg32 offset="0x0902" name="CP_CRASH_DUMP_CNTL"/>
+ 	<reg32 offset="0x0903" name="CP_CRASH_DUMP_STATUS"/>
+ 	<reg32 offset="0x0908" name="CP_SQE_STAT_ADDR"/>
+@@ -2400,22 +206,22 @@ to upconvert to 32b float internally?
+ 	-->
+ 	<reg64 offset="0x0934" name="CP_VSD_BASE"/>
+ 
+-	<bitset name="a6xx_roq_stat" inline="yes">
++	<bitset name="a6xx_roq_status" inline="yes">
+ 		<bitfield name="RPTR" low="0" high="9"/>
+ 		<bitfield name="WPTR" low="16" high="25"/>
+ 	</bitset>
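The renamed a6xx_roq_status bitset packs the ROQ read and write pointers into one register. A minimal C decode sketch, assuming only the bit positions given above (RPTR in bits 0-9, WPTR in bits 16-25); the helper names are illustrative:

    #include <stdint.h>

    /* Decode a CP_ROQ_*_STATUS value per the a6xx_roq_status layout. */
    static inline uint32_t roq_rptr(uint32_t status)
    {
        return status & 0x3ff;          /* RPTR, bits 0..9 */
    }

    static inline uint32_t roq_wptr(uint32_t status)
    {
        return (status >> 16) & 0x3ff;  /* WPTR, bits 16..25 */
    }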
+-	<reg32 offset="0x0939" name="CP_ROQ_RB_STAT" type="a6xx_roq_stat"/>
+-	<reg32 offset="0x093a" name="CP_ROQ_IB1_STAT" type="a6xx_roq_stat"/>
+-	<reg32 offset="0x093b" name="CP_ROQ_IB2_STAT" type="a6xx_roq_stat"/>
+-	<reg32 offset="0x093c" name="CP_ROQ_SDS_STAT" type="a6xx_roq_stat"/>
+-	<reg32 offset="0x093d" name="CP_ROQ_MRB_STAT" type="a6xx_roq_stat"/>
+-	<reg32 offset="0x093e" name="CP_ROQ_VSD_STAT" type="a6xx_roq_stat"/>
+-
+-	<reg32 offset="0x0943" name="CP_IB1_DWORDS"/>
+-	<reg32 offset="0x0944" name="CP_IB2_DWORDS"/>
+-	<reg32 offset="0x0945" name="CP_SDS_DWORDS"/>
+-	<reg32 offset="0x0946" name="CP_MRB_DWORDS"/>
+-	<reg32 offset="0x0947" name="CP_VSD_DWORDS"/>
++	<reg32 offset="0x0939" name="CP_ROQ_RB_STATUS" type="a6xx_roq_status"/>
++	<reg32 offset="0x093a" name="CP_ROQ_IB1_STATUS" type="a6xx_roq_status"/>
++	<reg32 offset="0x093b" name="CP_ROQ_IB2_STATUS" type="a6xx_roq_status"/>
++	<reg32 offset="0x093c" name="CP_ROQ_SDS_STATUS" type="a6xx_roq_status"/>
++	<reg32 offset="0x093d" name="CP_ROQ_MRB_STATUS" type="a6xx_roq_status"/>
++	<reg32 offset="0x093e" name="CP_ROQ_VSD_STATUS" type="a6xx_roq_status"/>
++
++	<reg32 offset="0x0943" name="CP_IB1_INIT_SIZE"/>
++	<reg32 offset="0x0944" name="CP_IB2_INIT_SIZE"/>
++	<reg32 offset="0x0945" name="CP_SDS_INIT_SIZE"/>
++	<reg32 offset="0x0946" name="CP_MRB_INIT_SIZE"/>
++	<reg32 offset="0x0947" name="CP_VSD_INIT_SIZE"/>
+ 
+ 	<reg32 offset="0x0948" name="CP_ROQ_AVAIL_RB">
+ 		<doc>number of remaining dwords incl current dword being consumed?</doc>
+@@ -2451,6 +257,7 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x098D" name="CP_AHB_CNTL"/>
+ 	<reg32 offset="0x0A00" name="CP_APERTURE_CNTL_HOST" variants="A6XX"/>
+ 	<reg32 offset="0x0A00" name="CP_APERTURE_CNTL_HOST" type="a7xx_aperture_cntl" variants="A7XX-"/>
++	<reg32 offset="0x0A01" name="CP_APERTURE_CNTL_SQE" variants="A6XX"/>
+ 	<reg32 offset="0x0A03" name="CP_APERTURE_CNTL_CD" variants="A6XX"/>
+ 	<reg32 offset="0x0A03" name="CP_APERTURE_CNTL_CD" type="a7xx_aperture_cntl" variants="A7XX-"/>
+ 
+@@ -2468,8 +275,8 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x0a97" name="CP_BV_MEM_POOL_DBG_DATA" variants="A7XX-"/>
+ 	<reg64 offset="0x0a98" name="CP_BV_RB_RPTR_ADDR" variants="A7XX-"/>
+ 
+-	<reg32 offset="0x0a9a" name="CP_RESOURCE_TBL_DBG_ADDR" variants="A7XX-"/>
+-	<reg32 offset="0x0a9b" name="CP_RESOURCE_TBL_DBG_DATA" variants="A7XX-"/>
++	<reg32 offset="0x0a9a" name="CP_RESOURCE_TABLE_DBG_ADDR" variants="A7XX-"/>
++	<reg32 offset="0x0a9b" name="CP_RESOURCE_TABLE_DBG_DATA" variants="A7XX-"/>
+ 	<reg32 offset="0x0ad0" name="CP_BV_APRIV_CNTL" variants="A7XX-"/>
+ 	<reg32 offset="0x0ada" name="CP_BV_CHICKEN_DBG" variants="A7XX-"/>
+ 
+@@ -2619,28 +426,17 @@ to upconvert to 32b float internally?
+ 	    vertices in, number of primitives assembled etc.
+ 	-->
+ 
+-	<reg32 offset="0x0540" name="RBBM_PRIMCTR_0_LO"/>  <!-- vs vertices in -->
+-	<reg32 offset="0x0541" name="RBBM_PRIMCTR_0_HI"/>
+-	<reg32 offset="0x0542" name="RBBM_PRIMCTR_1_LO"/>  <!-- vs primitives out -->
+-	<reg32 offset="0x0543" name="RBBM_PRIMCTR_1_HI"/>
+-	<reg32 offset="0x0544" name="RBBM_PRIMCTR_2_LO"/>  <!-- hs vertices in -->
+-	<reg32 offset="0x0545" name="RBBM_PRIMCTR_2_HI"/>
+-	<reg32 offset="0x0546" name="RBBM_PRIMCTR_3_LO"/>  <!-- hs patches out -->
+-	<reg32 offset="0x0547" name="RBBM_PRIMCTR_3_HI"/>
+-	<reg32 offset="0x0548" name="RBBM_PRIMCTR_4_LO"/>  <!-- dss vertices in -->
+-	<reg32 offset="0x0549" name="RBBM_PRIMCTR_4_HI"/>
+-	<reg32 offset="0x054a" name="RBBM_PRIMCTR_5_LO"/>  <!-- ds primitives out -->
+-	<reg32 offset="0x054b" name="RBBM_PRIMCTR_5_HI"/>
+-	<reg32 offset="0x054c" name="RBBM_PRIMCTR_6_LO"/>  <!-- gs primitives in -->
+-	<reg32 offset="0x054d" name="RBBM_PRIMCTR_6_HI"/>
+-	<reg32 offset="0x054e" name="RBBM_PRIMCTR_7_LO"/>  <!-- gs primitives out -->
+-	<reg32 offset="0x054f" name="RBBM_PRIMCTR_7_HI"/>
+-	<reg32 offset="0x0550" name="RBBM_PRIMCTR_8_LO"/>  <!-- gs primitives out -->
+-	<reg32 offset="0x0551" name="RBBM_PRIMCTR_8_HI"/>
+-	<reg32 offset="0x0552" name="RBBM_PRIMCTR_9_LO"/>  <!-- raster primitives in -->
+-	<reg32 offset="0x0553" name="RBBM_PRIMCTR_9_HI"/>
+-	<reg32 offset="0x0554" name="RBBM_PRIMCTR_10_LO"/>
+-	<reg32 offset="0x0555" name="RBBM_PRIMCTR_10_HI"/>
++	<reg64 offset="0x0540" name="RBBM_PIPESTAT_IAVERTICES"/>
++	<reg64 offset="0x0542" name="RBBM_PIPESTAT_IAPRIMITIVES"/>
++	<reg64 offset="0x0544" name="RBBM_PIPESTAT_VSINVOCATIONS"/>
++	<reg64 offset="0x0546" name="RBBM_PIPESTAT_HSINVOCATIONS"/>
++	<reg64 offset="0x0548" name="RBBM_PIPESTAT_DSINVOCATIONS"/>
++	<reg64 offset="0x054a" name="RBBM_PIPESTAT_GSINVOCATIONS"/>
++	<reg64 offset="0x054c" name="RBBM_PIPESTAT_GSPRIMITIVES"/>
++	<reg64 offset="0x054e" name="RBBM_PIPESTAT_CINVOCATIONS"/>
++	<reg64 offset="0x0550" name="RBBM_PIPESTAT_CPRIMITIVES"/>
++	<reg64 offset="0x0552" name="RBBM_PIPESTAT_PSINVOCATIONS"/>
++	<reg64 offset="0x0554" name="RBBM_PIPESTAT_CSINVOCATIONS"/>
+ 
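The LO/HI PRIMCTR pairs collapse into single 64-bit RBBM_PIPESTAT_* registers whose names follow the familiar pipeline-statistics query fields. A hedged sketch of indexing them, assuming consecutive two-dword registers starting at 0x0540; read_reg64() is a hypothetical accessor, not a real driver API:

    #include <stdint.h>

    extern uint64_t read_reg64(uint32_t dword_offset); /* hypothetical MMIO helper */

    enum a6xx_pipestat {
        PIPESTAT_IAVERTICES,    /* 0x0540 */
        PIPESTAT_IAPRIMITIVES,  /* 0x0542 */
        PIPESTAT_VSINVOCATIONS, /* 0x0544 */
        /* ...continues in register order through CSINVOCATIONS at 0x0554 */
    };

    static uint64_t read_pipestat(enum a6xx_pipestat which)
    {
        return read_reg64(0x0540 + 2 * which); /* each counter spans two dwords */
    }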
+ 	<reg32 offset="0xF400" name="RBBM_SECVID_TRUST_CNTL"/>
+ 	<reg64 offset="0xF800" name="RBBM_SECVID_TSB_TRUSTED_BASE"/>
+@@ -2779,7 +575,7 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x0011f" name="RBBM_CGC_P2S_TRIG_CMD" variants="A7XX-"/>
+ 	<reg32 offset="0x00120" name="RBBM_CLOCK_CNTL_TEX_FCHE"/>
+ 	<reg32 offset="0x00121" name="RBBM_CLOCK_DELAY_TEX_FCHE"/>
+-	<reg32 offset="0x00122" name="RBBM_CLOCK_HYST_TEX_FCHE"/>
++	<reg32 offset="0x00122" name="RBBM_CLOCK_HYST_TEX_FCHE" variants="A6XX"/>
+ 	<reg32 offset="0x00122" name="RBBM_CGC_P2S_STATUS" variants="A7XX-">
+ 		<bitfield name="TXDONE" pos="0" type="boolean"/>
+ 	</reg32>
+@@ -2840,7 +636,7 @@ to upconvert to 32b float internally?
+ 	</reg32>
+ 	<reg32 offset="0x062f" name="DBGC_CFG_DBGBUS_TRACE_BUF1"/>
+ 	<reg32 offset="0x0630" name="DBGC_CFG_DBGBUS_TRACE_BUF2"/>
+-	<array offset="0x0CD8" name="VSC_PERFCTR_VSC_SEL" stride="1" length="2"/>
++	<array offset="0x0CD8" name="VSC_PERFCTR_VSC_SEL" stride="1" length="2" variants="A6XX"/>
+ 	<reg32 offset="0x0CD8" name="VSC_UNKNOWN_0CD8" variants="A7XX">
+ 		<doc>
+ 			Set to true when binning; not changed afterwards
+@@ -2936,8 +732,8 @@ to upconvert to 32b float internally?
+ 		<bitfield name="WIDTH" low="0" high="7" shr="5" type="uint"/>
+ 		<bitfield name="HEIGHT" low="8" high="16" shr="4" type="uint"/>
+ 	</reg32>
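Fields with a shr attribute, like WIDTH (shr=5) and HEIGHT (shr=4) above, store the value right-shifted by that amount, so the programmed quantity must be a multiple of 1 << shr. A small sketch of the packing this implies; the names are illustrative:

    #include <assert.h>
    #include <stdint.h>

    /* Pack a value into a bitfield declared with shr=N: the register holds
     * value >> N, so the input must be aligned to (1 << N). */
    static inline uint32_t pack_shr(uint32_t value, unsigned lo, unsigned shr)
    {
        assert((value & ((1u << shr) - 1)) == 0);
        return (value >> shr) << lo;
    }

    /* e.g. for the fields above:
     *   reg = pack_shr(width, 0, 5) | pack_shr(height, 8, 4);
     */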
+-	<reg64 offset="0x0c03" name="VSC_DRAW_STRM_SIZE_ADDRESS" type="waddress" usage="cmd"/>
+-	<reg32 offset="0x0c06" name="VSC_BIN_COUNT" usage="rp_blit">
++	<reg64 offset="0x0c03" name="VSC_SIZE_BASE" type="waddress" usage="cmd"/>
++	<reg32 offset="0x0c06" name="VSC_EXPANDED_BIN_CNTL" usage="rp_blit">
+ 		<bitfield name="NX" low="1" high="10" type="uint"/>
+ 		<bitfield name="NY" low="11" high="20" type="uint"/>
+ 	</reg32>
+@@ -2967,14 +763,14 @@ to upconvert to 32b float internally?
+ 
+ 	LIMIT is set to PITCH - 64, to make room for a bit of overflow
+ 	 -->
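In code form, the relationship from the comment is just a fixed 64-byte slack below the stride; a one-line sketch (the function name is illustrative):

    #include <stdint.h>

    /* LENGTH (formerly LIMIT) leaves 64 bytes of room for overflow. */
    static inline uint32_t vsc_stream_length(uint32_t pitch)
    {
        return pitch - 64;
    }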
+-	<reg64 offset="0x0c30" name="VSC_PRIM_STRM_ADDRESS" type="waddress" usage="cmd"/>
+-	<reg32 offset="0x0c32" name="VSC_PRIM_STRM_PITCH" usage="cmd"/>
+-	<reg32 offset="0x0c33" name="VSC_PRIM_STRM_LIMIT" usage="cmd"/>
+-	<reg64 offset="0x0c34" name="VSC_DRAW_STRM_ADDRESS" type="waddress" usage="cmd"/>
+-	<reg32 offset="0x0c36" name="VSC_DRAW_STRM_PITCH" usage="cmd"/>
+-	<reg32 offset="0x0c37" name="VSC_DRAW_STRM_LIMIT" usage="cmd"/>
+-
+-	<array offset="0x0c38" name="VSC_STATE" stride="1" length="32" usage="rp_blit">
++	<reg64 offset="0x0c30" name="VSC_PIPE_DATA_PRIM_BASE" type="waddress" usage="cmd"/>
++	<reg32 offset="0x0c32" name="VSC_PIPE_DATA_PRIM_STRIDE" usage="cmd"/>
++	<reg32 offset="0x0c33" name="VSC_PIPE_DATA_PRIM_LENGTH" usage="cmd"/>
++	<reg64 offset="0x0c34" name="VSC_PIPE_DATA_DRAW_BASE" type="waddress" usage="cmd"/>
++	<reg32 offset="0x0c36" name="VSC_PIPE_DATA_DRAW_STRIDE" usage="cmd"/>
++	<reg32 offset="0x0c37" name="VSC_PIPE_DATA_DRAW_LENGTH" usage="cmd"/>
++
++	<array offset="0x0c38" name="VSC_CHANNEL_VISIBILITY" stride="1" length="32" usage="rp_blit">
+ 		<doc>
+ 			Seems to be a bitmap of which tiles mapped to the VSC
+ 			pipe contain geometry.
+@@ -2985,7 +781,7 @@ to upconvert to 32b float internally?
+ 		<reg32 offset="0x0" name="REG"/>
+ 	</array>
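If VSC_CHANNEL_VISIBILITY is indeed a per-pipe bitmap of tiles containing geometry, a driver could skip empty tiles with a check like this sketch (assuming one 32-bit REG per pipe and per-pipe tile indices; purely illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    static inline bool tile_has_geometry(const uint32_t visibility[32],
                                         unsigned pipe, unsigned tile)
    {
        return (visibility[pipe] >> tile) & 1;
    }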
+ 
+-	<array offset="0x0c58" name="VSC_PRIM_STRM_SIZE" stride="1" length="32" variants="A6XX" usage="rp_blit">
++	<array offset="0x0c58" name="VSC_PIPE_DATA_PRIM_SIZE" stride="1" length="32" variants="A6XX" usage="rp_blit">
+ 			Has the size of data written to the corresponding VSC_PRIM_STRM
+ 			Has the size of data written to corresponding VSC_PRIM_STRM
+ 			buffer.
+@@ -2993,10 +789,10 @@ to upconvert to 32b float internally?
+ 		<reg32 offset="0x0" name="REG"/>
+ 	</array>
+ 
+-	<array offset="0x0c78" name="VSC_DRAW_STRM_SIZE" stride="1" length="32" variants="A6XX" usage="rp_blit">
++	<array offset="0x0c78" name="VSC_PIPE_DATA_DRAW_SIZE" stride="1" length="32" variants="A6XX" usage="rp_blit">
+ 		<doc>
+ 			Has the size of data written to the corresponding VSC pipe, i.e.
+-			same thing that is written out to VSC_DRAW_STRM_SIZE_ADDRESS_LO/HI
++			same thing that is written out to VSC_SIZE_BASE
+ 		</doc>
+ 		<reg32 offset="0x0" name="REG"/>
+ 	</array>
+@@ -3028,17 +824,17 @@ to upconvert to 32b float internally?
+ 		<bitfield name="PERSP_DIVISION_DISABLE" pos="9" type="boolean"/>
+ 	</reg32>
+ 
+-	<bitset name="a6xx_gras_xs_cl_cntl" inline="yes">
++	<bitset name="a6xx_gras_xs_clip_cull_distance" inline="yes">
+ 		<bitfield name="CLIP_MASK" low="0" high="7"/>
+ 		<bitfield name="CULL_MASK" low="8" high="15"/>
+ 	</bitset>
+-	<reg32 offset="0x8001" name="GRAS_VS_CL_CNTL" type="a6xx_gras_xs_cl_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x8002" name="GRAS_DS_CL_CNTL" type="a6xx_gras_xs_cl_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x8003" name="GRAS_GS_CL_CNTL" type="a6xx_gras_xs_cl_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x8004" name="GRAS_MAX_LAYER_INDEX" low="0" high="10" type="uint" usage="rp_blit"/>
++	<reg32 offset="0x8001" name="GRAS_CL_VS_CLIP_CULL_DISTANCE" type="a6xx_gras_xs_clip_cull_distance" usage="rp_blit"/>
++	<reg32 offset="0x8002" name="GRAS_CL_DS_CLIP_CULL_DISTANCE" type="a6xx_gras_xs_clip_cull_distance" usage="rp_blit"/>
++	<reg32 offset="0x8003" name="GRAS_CL_GS_CLIP_CULL_DISTANCE" type="a6xx_gras_xs_clip_cull_distance" usage="rp_blit"/>
++	<reg32 offset="0x8004" name="GRAS_CL_ARRAY_SIZE" low="0" high="10" type="uint" usage="rp_blit"/>
+ 
+-	<reg32 offset="0x8005" name="GRAS_CNTL" usage="rp_blit">
+-		<!-- see also RB_RENDER_CONTROL0 -->
++	<reg32 offset="0x8005" name="GRAS_CL_INTERP_CNTL" usage="rp_blit">
++		<!-- see also RB_INTERP_CNTL -->
+ 		<bitfield name="IJ_PERSP_PIXEL" pos="0" type="boolean"/>
+ 		<bitfield name="IJ_PERSP_CENTROID" pos="1" type="boolean"/>
+ 		<bitfield name="IJ_PERSP_SAMPLE" pos="2" type="boolean"/>
+@@ -3067,7 +863,7 @@ to upconvert to 32b float internally?
+ 	<!-- <reg32 offset="0x80f0" name="GRAS_UNKNOWN_80F0" type="a6xx_reg_xy"/> -->
+ 
+ 	<!-- 0x8006-0x800f invalid -->
+-	<array offset="0x8010" name="GRAS_CL_VPORT" stride="6" length="16" usage="rp_blit">
++	<array offset="0x8010" name="GRAS_CL_VIEWPORT" stride="6" length="16" usage="rp_blit">
+ 		<reg32 offset="0" name="XOFFSET" type="float"/>
+ 		<reg32 offset="1" name="XSCALE" type="float"/>
+ 		<reg32 offset="2" name="YOFFSET" type="float"/>
+@@ -3075,7 +871,7 @@ to upconvert to 32b float internally?
+ 		<reg32 offset="4" name="ZOFFSET" type="float"/>
+ 		<reg32 offset="5" name="ZSCALE" type="float"/>
+ 	</array>
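GRAS_CL_VIEWPORT holds the viewport transform in the usual scale/offset form. As a sketch, here is the conventional derivation from an (x, y, w, h, zmin, zmax) viewport, assuming a [0,1] NDC depth range; a real driver would also handle Y-flip and precision details:

    struct a6xx_viewport {
        float xoffset, xscale;
        float yoffset, yscale;
        float zoffset, zscale;
    };

    static struct a6xx_viewport
    viewport_from_rect(float x, float y, float w, float h,
                       float zmin, float zmax)
    {
        struct a6xx_viewport vp;
        vp.xscale  = w * 0.5f;        /* NDC x in [-1,1] -> pixels */
        vp.xoffset = x + w * 0.5f;
        vp.yscale  = h * 0.5f;
        vp.yoffset = y + h * 0.5f;
        vp.zscale  = zmax - zmin;     /* NDC z in [0,1] -> depth range */
        vp.zoffset = zmin;
        return vp;
    }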
+-	<array offset="0x8070" name="GRAS_CL_Z_CLAMP" stride="2" length="16" usage="rp_blit">
++	<array offset="0x8070" name="GRAS_CL_VIEWPORT_ZCLAMP" stride="2" length="16" usage="rp_blit">
+ 		<reg32 offset="0" name="MIN" type="float"/>
+ 		<reg32 offset="1" name="MAX" type="float"/>
+ 	</array>
+@@ -3124,7 +920,12 @@ to upconvert to 32b float internally?
+ 
+ 	<reg32 offset="0x8099" name="GRAS_SU_CONSERVATIVE_RAS_CNTL" usage="cmd">
+ 		<bitfield name="CONSERVATIVERASEN" pos="0" type="boolean"/>
+-		<bitfield name="SHIFTAMOUNT" low="1" high="2"/>
++		<enum name="a6xx_shift_amount">
++			<value value="0" name="NO_SHIFT"/>
++			<value value="1" name="HALF_PIXEL_SHIFT"/>
++			<value value="2" name="FULL_PIXEL_SHIFT"/>
++		</enum>
++		<bitfield name="SHIFTAMOUNT" low="1" high="2" type="a6xx_shift_amount"/>
+ 		<bitfield name="INNERCONSERVATIVERASEN" pos="3" type="boolean"/>
+ 		<bitfield name="UNK4" low="4" high="5"/>
+ 	</reg32>
+@@ -3133,13 +934,13 @@ to upconvert to 32b float internally?
+ 		<bitfield name="LINELENGTHEN" pos="1" type="boolean"/>
+ 	</reg32>
+ 
+-	<bitset name="a6xx_gras_layer_cntl" inline="yes">
++	<bitset name="a6xx_gras_us_xs_siv_cntl" inline="yes">
+ 		<bitfield name="WRITES_LAYER" pos="0" type="boolean"/>
+ 		<bitfield name="WRITES_VIEW" pos="1" type="boolean"/>
+ 	</bitset>
+-	<reg32 offset="0x809b" name="GRAS_VS_LAYER_CNTL" type="a6xx_gras_layer_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x809c" name="GRAS_GS_LAYER_CNTL" type="a6xx_gras_layer_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x809d" name="GRAS_DS_LAYER_CNTL" type="a6xx_gras_layer_cntl" usage="rp_blit"/>
++	<reg32 offset="0x809b" name="GRAS_SU_VS_SIV_CNTL" type="a6xx_gras_us_xs_siv_cntl" usage="rp_blit"/>
++	<reg32 offset="0x809c" name="GRAS_SU_GS_SIV_CNTL" type="a6xx_gras_us_xs_siv_cntl" usage="rp_blit"/>
++	<reg32 offset="0x809d" name="GRAS_SU_DS_SIV_CNTL" type="a6xx_gras_us_xs_siv_cntl" usage="rp_blit"/>
+ 	<!-- 0x809e/0x809f invalid -->
+ 
+ 	<enum name="a6xx_sequenced_thread_dist">
+@@ -3213,13 +1014,13 @@ to upconvert to 32b float internally?
+ 	<enum name="a6xx_lrz_feedback_mask">
+ 		<value value="0x0" name="LRZ_FEEDBACK_NONE"/>
+ 		<value value="0x1" name="LRZ_FEEDBACK_EARLY_Z"/>
+-		<value value="0x2" name="LRZ_FEEDBACK_EARLY_LRZ_LATE_Z"/>
++		<value value="0x2" name="LRZ_FEEDBACK_EARLY_Z_LATE_Z"/>
+ 		<!-- We don't have a flag type and this flags combination is often used -->
+-		<value value="0x3" name="LRZ_FEEDBACK_EARLY_Z_OR_EARLY_LRZ_LATE_Z"/>
++		<value value="0x3" name="LRZ_FEEDBACK_EARLY_Z_OR_EARLY_Z_LATE_Z"/>
+ 		<value value="0x4" name="LRZ_FEEDBACK_LATE_Z"/>
+ 	</enum>
+ 
+-	<reg32 offset="0x80a1" name="GRAS_BIN_CONTROL" usage="rp_blit">
++	<reg32 offset="0x80a1" name="GRAS_SC_BIN_CNTL" usage="rp_blit">
+ 		<bitfield name="BINW" low="0" high="5" shr="5" type="uint"/>
+ 		<bitfield name="BINH" low="8" high="14" shr="4" type="uint"/>
+ 		<bitfield name="RENDER_MODE" low="18" high="20" type="a6xx_render_mode"/>
+@@ -3235,22 +1036,22 @@ to upconvert to 32b float internally?
+ 		<bitfield name="UNK27" pos="27"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x80a2" name="GRAS_RAS_MSAA_CNTL" usage="rp_blit">
++	<reg32 offset="0x80a2" name="GRAS_SC_RAS_MSAA_CNTL" usage="rp_blit">
+ 		<bitfield name="SAMPLES" low="0" high="1" type="a3xx_msaa_samples"/>
+ 		<bitfield name="UNK2" pos="2"/>
+ 		<bitfield name="UNK3" pos="3"/>
+ 	</reg32>
+-	<reg32 offset="0x80a3" name="GRAS_DEST_MSAA_CNTL" usage="rp_blit">
++	<reg32 offset="0x80a3" name="GRAS_SC_DEST_MSAA_CNTL" usage="rp_blit">
+ 		<bitfield name="SAMPLES" low="0" high="1" type="a3xx_msaa_samples"/>
+ 		<bitfield name="MSAA_DISABLE" pos="2" type="boolean"/>
+ 	</reg32>
+ 
+-	<bitset name="a6xx_sample_config" inline="yes">
++	<bitset name="a6xx_msaa_sample_pos_cntl" inline="yes">
+ 		<bitfield name="UNK0" pos="0"/>
+ 		<bitfield name="LOCATION_ENABLE" pos="1" type="boolean"/>
+ 	</bitset>
+ 
+-	<bitset name="a6xx_sample_locations" inline="yes">
++	<bitset name="a6xx_programmable_msaa_pos" inline="yes">
+ 		<bitfield name="SAMPLE_0_X" low="0" high="3" radix="4" type="fixed"/>
+ 		<bitfield name="SAMPLE_0_Y" low="4" high="7" radix="4" type="fixed"/>
+ 		<bitfield name="SAMPLE_1_X" low="8" high="11" radix="4" type="fixed"/>
+@@ -3261,9 +1062,9 @@ to upconvert to 32b float internally?
+ 		<bitfield name="SAMPLE_3_Y" low="28" high="31" radix="4" type="fixed"/>
+ 	</bitset>
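Each sample position is a 4-bit fixed-point value with radix 4, i.e. sixteenths of a pixel, with sample n's X in bits 8n..8n+3 and Y in bits 8n+4..8n+7. A packing sketch, assuming positions in [0, 1):

    #include <stdint.h>

    /* Convert a [0,1) position to 0.4 fixed point (sixteenths). */
    static inline uint32_t fixed4(float pos)
    {
        return (uint32_t)(pos * 16.0f) & 0xf;
    }

    static inline uint32_t pack_sample_pos(unsigned n, float x, float y)
    {
        return (fixed4(x) << (8 * n)) | (fixed4(y) << (8 * n + 4));
    }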
+ 
+-	<reg32 offset="0x80a4" name="GRAS_SAMPLE_CONFIG" type="a6xx_sample_config" usage="rp_blit"/>
+-	<reg32 offset="0x80a5" name="GRAS_SAMPLE_LOCATION_0" type="a6xx_sample_locations" usage="rp_blit"/>
+-	<reg32 offset="0x80a6" name="GRAS_SAMPLE_LOCATION_1" type="a6xx_sample_locations" usage="rp_blit"/>
++	<reg32 offset="0x80a4" name="GRAS_SC_MSAA_SAMPLE_POS_CNTL" type="a6xx_msaa_sample_pos_cntl" usage="rp_blit"/>
++	<reg32 offset="0x80a5" name="GRAS_SC_PROGRAMMABLE_MSAA_POS_0" type="a6xx_programmable_msaa_pos" usage="rp_blit"/>
++	<reg32 offset="0x80a6" name="GRAS_SC_PROGRAMMABLE_MSAA_POS_1" type="a6xx_programmable_msaa_pos" usage="rp_blit"/>
+ 
+ 	<reg32 offset="0x80a7" name="GRAS_UNKNOWN_80A7" variants="A7XX-" usage="cmd"/>
+ 
+@@ -3286,13 +1087,36 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x80f0" name="GRAS_SC_WINDOW_SCISSOR_TL" type="a6xx_reg_xy" usage="rp_blit"/>
+ 	<reg32 offset="0x80f1" name="GRAS_SC_WINDOW_SCISSOR_BR" type="a6xx_reg_xy" usage="rp_blit"/>
+ 
+-	<!-- 0x80f4 - 0x80fa are used for VK_KHR_fragment_shading_rate -->
+-	<reg64 offset="0x80f4" name="GRAS_UNKNOWN_80F4" variants="A7XX-" usage="cmd"/>
+-	<reg64 offset="0x80f5" name="GRAS_UNKNOWN_80F5" variants="A7XX-" usage="cmd"/>
+-	<reg64 offset="0x80f6" name="GRAS_UNKNOWN_80F6" variants="A7XX-" usage="cmd"/>
+-	<reg64 offset="0x80f8" name="GRAS_UNKNOWN_80F8" variants="A7XX-" usage="cmd"/>
+-	<reg64 offset="0x80f9" name="GRAS_UNKNOWN_80F9" variants="A7XX-" usage="cmd"/>
+-	<reg64 offset="0x80fa" name="GRAS_UNKNOWN_80FA" variants="A7XX-" usage="cmd"/>
++	<enum name="a6xx_fsr_combiner">
++		<value value="0" name="FSR_COMBINER_OP_KEEP"/>
++		<value value="1" name="FSR_COMBINER_OP_REPLACE"/>
++		<value value="2" name="FSR_COMBINER_OP_MIN"/>
++		<value value="3" name="FSR_COMBINER_OP_MAX"/>
++		<value value="4" name="FSR_COMBINER_OP_MUL"/>
++	</enum>
++
++	<reg32 offset="0x80f4" name="GRAS_VRS_CONFIG" variants="A7XX-" usage="rp_blit">
++		<bitfield name="PIPELINE_FSR_ENABLE" pos="0" type="boolean"/>
++		<bitfield name="FRAG_SIZE_X" low="1" high="2" type="uint"/>
++		<bitfield name="FRAG_SIZE_Y" low="3" high="4" type="uint"/>
++		<bitfield name="COMBINER_OP_1" low="5" high="7" type="a6xx_fsr_combiner"/>
++		<bitfield name="COMBINER_OP_2" low="8" high="10" type="a6xx_fsr_combiner"/>
++		<bitfield name="ATTACHMENT_FSR_ENABLE" pos="13" type="boolean"/>
++		<bitfield name="PRIMITIVE_FSR_ENABLE" pos="20" type="boolean"/>
++	</reg32>
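The combiner ops look like Vulkan's fragment-shading-rate combiners (a = the rate so far, b = the next source). A hedged sketch of the presumed semantics applied to per-axis fragment sizes; note the FRAG_SIZE fields above are likely log2 values, in which case MUL becomes an add of the exponents:

    #include <stdint.h>

    enum fsr_op { FSR_KEEP, FSR_REPLACE, FSR_MIN, FSR_MAX, FSR_MUL };

    static uint32_t fsr_combine(enum fsr_op op, uint32_t a, uint32_t b)
    {
        switch (op) {
        case FSR_KEEP:    return a;
        case FSR_REPLACE: return b;
        case FSR_MIN:     return a < b ? a : b;
        case FSR_MAX:     return a > b ? a : b;
        case FSR_MUL:     return a * b; /* clamped to the max supported rate */
        }
        return a;
    }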
++	<reg32 offset="0x80f5" name="GRAS_QUALITY_BUFFER_INFO" variants="A7XX-" usage="rp_blit">
++		<bitfield name="LAYERED" pos="0" type="boolean"/>
++		<bitfield name="TILE_MODE" low="1" high="2" type="a6xx_tile_mode"/>
++	</reg32>
++	<reg32 offset="0x80f6" name="GRAS_QUALITY_BUFFER_DIMENSION" variants="A7XX-" usage="rp_blit">
++		<bitfield name="WIDTH" low="0" high="15" type="uint"/>
++		<bitfield name="HEIGHT" low="16" high="31" type="uint"/>
++	</reg32>
++	<reg64 offset="0x80f8" name="GRAS_QUALITY_BUFFER_BASE" variants="A7XX-" type="waddress" usage="rp_blit"/>
++	<reg32 offset="0x80fa" name="GRAS_QUALITY_BUFFER_PITCH" variants="A7XX-" usage="rp_blit">
++		<bitfield name="PITCH" shr="6" low="0" high="7" type="uint"/>
++		<bitfield name="ARRAY_PITCH" shr="6" low="10" high="28" type="uint"/>
++	</reg32>
+ 
+ 	<enum name="a6xx_lrz_dir_status">
+ 		<value value="0x1" name="LRZ_DIR_LE"/>
+@@ -3313,7 +1137,7 @@ to upconvert to 32b float internally?
+ 		</doc>
+ 		<bitfield name="FC_ENABLE" pos="3" type="boolean" variants="A6XX"/>
+ 		<!-- set when depth-test + depth-write enabled -->
+-		<bitfield name="Z_TEST_ENABLE" pos="4" type="boolean"/>
++		<bitfield name="Z_WRITE_ENABLE" pos="4" type="boolean"/>
+ 		<bitfield name="Z_BOUNDS_ENABLE" pos="5" type="boolean"/>
+ 		<bitfield name="DIR" low="6" high="7" type="a6xx_lrz_dir_status"/>
+ 		<doc>
+@@ -3339,14 +1163,13 @@ to upconvert to 32b float internally?
+ 		<bitfield name="FRAGCOORDSAMPLEMODE" low="1" high="2" type="a6xx_fragcoord_sample_mode"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x8102" name="GRAS_LRZ_MRT_BUF_INFO_0" usage="rp_blit">
++	<reg32 offset="0x8102" name="GRAS_LRZ_MRT_BUFFER_INFO_0" usage="rp_blit">
+ 		<bitfield name="COLOR_FORMAT" low="0" high="7" type="a6xx_format"/>
+ 	</reg32>
+ 	<reg64 offset="0x8103" name="GRAS_LRZ_BUFFER_BASE" align="256" type="waddress" usage="rp_blit"/>
+ 	<reg32 offset="0x8105" name="GRAS_LRZ_BUFFER_PITCH" usage="rp_blit">
+-		<!-- TODO: fix the shr fields -->
+ 		<bitfield name="PITCH" low="0" high="7" shr="5" type="uint"/>
+-		<bitfield name="ARRAY_PITCH" low="10" high="28" shr="4" type="uint"/>
++		<bitfield name="ARRAY_PITCH" low="10" high="28" shr="8" type="uint"/>
+ 	</reg32>
+ 
+ 	<!--
+@@ -3381,18 +1204,18 @@ to upconvert to 32b float internally?
+ 	 -->
+ 	<reg64 offset="0x8106" name="GRAS_LRZ_FAST_CLEAR_BUFFER_BASE" align="64" type="waddress" usage="rp_blit"/>
+ 	<!-- 0x8108 invalid -->
+-	<reg32 offset="0x8109" name="GRAS_SAMPLE_CNTL" usage="rp_blit">
++	<reg32 offset="0x8109" name="GRAS_LRZ_PS_SAMPLEFREQ_CNTL" usage="rp_blit">
+ 		<bitfield name="PER_SAMP_MODE" pos="0" type="boolean"/>
+ 	</reg32>
+ 	<!--
+ 	LRZ buffer represents a single array layer + mip level, and there is
+ 	a single buffer per depth image. Thus to reuse LRZ between renderpasses
+ 	it is necessary to track the depth view used in the past renderpass, which
+-	GRAS_LRZ_DEPTH_VIEW is for.
+-	GRAS_LRZ_CNTL checks if current value of GRAS_LRZ_DEPTH_VIEW is equal to
++	GRAS_LRZ_VIEW_INFO is for.
++	GRAS_LRZ_CNTL checks if current value of GRAS_LRZ_VIEW_INFO is equal to
+ 	the value stored in the LRZ buffer; if not, LRZ is disabled.
+ 	-->
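In code form, the reuse rule above is an equality check between the view the LRZ buffer was last written with and the view of the current renderpass; a sketch with an illustrative struct:

    #include <stdbool.h>
    #include <stdint.h>

    struct lrz_view {
        uint32_t base_layer, layer_count, base_mip_level;
    };

    /* LRZ stays enabled only if the stored view matches the current one. */
    static inline bool lrz_view_matches(struct lrz_view stored,
                                        struct lrz_view cur)
    {
        return stored.base_layer     == cur.base_layer &&
               stored.layer_count    == cur.layer_count &&
               stored.base_mip_level == cur.base_mip_level;
    }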
+-	<reg32 offset="0x810a" name="GRAS_LRZ_DEPTH_VIEW" usage="cmd">
++	<reg32 offset="0x810a" name="GRAS_LRZ_VIEW_INFO" usage="cmd">
+ 		<bitfield name="BASE_LAYER" low="0" high="10" type="uint"/>
+ 		<bitfield name="LAYER_COUNT" low="16" high="26" type="uint"/>
+ 		<bitfield name="BASE_MIP_LEVEL" low="28" high="31" type="uint"/>
+@@ -3408,7 +1231,7 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8110" name="GRAS_UNKNOWN_8110" low="0" high="1" usage="cmd"/>
+ 
+ 	<!-- A bit tentative but it's a color and it is followed by LRZ_CLEAR -->
+-	<reg32 offset="0x8111" name="GRAS_LRZ_CLEAR_DEPTH_F32" type="float" variants="A7XX-"/>
++	<reg32 offset="0x8111" name="GRAS_LRZ_DEPTH_CLEAR" type="float" variants="A7XX-"/>
+ 
+ 	<reg32 offset="0x8113" name="GRAS_LRZ_DEPTH_BUFFER_INFO" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="DEPTH_FORMAT" low="0" high="2" type="a6xx_depth_format"/>
+@@ -3430,7 +1253,7 @@ to upconvert to 32b float internally?
+ 		<value value="0x5" name="ROTATE_VFLIP"/>
+ 	</enum>
+ 
+-	<bitset name="a6xx_2d_blit_cntl" inline="yes">
++	<bitset name="a6xx_a2d_bit_cntl" inline="yes">
+ 		<bitfield name="ROTATE" low="0" high="2" type="a6xx_rotation"/>
+ 		<bitfield name="OVERWRITEEN" pos="3" type="boolean"/>
+ 		<bitfield name="UNK4" low="4" high="6"/>
+@@ -3447,22 +1270,22 @@ to upconvert to 32b float internally?
+ 		<bitfield name="UNK30" pos="30" type="boolean" variants="A7XX-"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0x8400" name="GRAS_2D_BLIT_CNTL" type="a6xx_2d_blit_cntl" usage="rp_blit"/>
++	<reg32 offset="0x8400" name="GRAS_A2D_BLT_CNTL" type="a6xx_a2d_bit_cntl" usage="rp_blit"/>
+ 	<!-- note: the low 8 bits for src coords are valid, probably fixed point.
+ 	     That would be a bit weird though, since we subtract 1 from the BR coords.
+ 	     Apparently signed; the gallium driver uses negative coords and it works?
+ 	 -->
+-	<reg32 offset="0x8401" name="GRAS_2D_SRC_TL_X" low="8" high="24" type="int" usage="rp_blit"/>
+-	<reg32 offset="0x8402" name="GRAS_2D_SRC_BR_X" low="8" high="24" type="int" usage="rp_blit"/>
+-	<reg32 offset="0x8403" name="GRAS_2D_SRC_TL_Y" low="8" high="24" type="int" usage="rp_blit"/>
+-	<reg32 offset="0x8404" name="GRAS_2D_SRC_BR_Y" low="8" high="24" type="int" usage="rp_blit"/>
+-	<reg32 offset="0x8405" name="GRAS_2D_DST_TL" type="a6xx_reg_xy" usage="rp_blit"/>
+-	<reg32 offset="0x8406" name="GRAS_2D_DST_BR" type="a6xx_reg_xy" usage="rp_blit"/>
++	<reg32 offset="0x8401" name="GRAS_A2D_SRC_XMIN" low="8" high="24" type="int" usage="rp_blit"/>
++	<reg32 offset="0x8402" name="GRAS_A2D_SRC_XMAX" low="8" high="24" type="int" usage="rp_blit"/>
++	<reg32 offset="0x8403" name="GRAS_A2D_SRC_YMIN" low="8" high="24" type="int" usage="rp_blit"/>
++	<reg32 offset="0x8404" name="GRAS_A2D_SRC_YMAX" low="8" high="24" type="int" usage="rp_blit"/>
++	<reg32 offset="0x8405" name="GRAS_A2D_DEST_TL" type="a6xx_reg_xy" usage="rp_blit"/>
++	<reg32 offset="0x8406" name="GRAS_A2D_DEST_BR" type="a6xx_reg_xy" usage="rp_blit"/>
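Taking the note above at face value, with bits 0-7 as a fractional part, encoding a source coordinate would be a shift into 24.8 fixed point, with XMAX/YMAX set to the last texel (BR minus one). This is speculative, matching only what the comment suggests:

    #include <stdint.h>

    /* 24.8 fixed point; the cast avoids shifting a negative value. */
    static inline int32_t a2d_src_coord(int32_t texel)
    {
        return (int32_t)((uint32_t)texel << 8);
    }

    /* e.g. XMIN = a2d_src_coord(x0), XMAX = a2d_src_coord(x1 - 1) */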
+ 	<reg32 offset="0x8407" name="GRAS_2D_UNKNOWN_8407" low="0" high="31"/>
+ 	<reg32 offset="0x8408" name="GRAS_2D_UNKNOWN_8408" low="0" high="31"/>
+ 	<reg32 offset="0x8409" name="GRAS_2D_UNKNOWN_8409" low="0" high="31"/>
+-	<reg32 offset="0x840a" name="GRAS_2D_RESOLVE_CNTL_1" type="a6xx_reg_xy" usage="rp_blit"/>
+-	<reg32 offset="0x840b" name="GRAS_2D_RESOLVE_CNTL_2" type="a6xx_reg_xy" usage="rp_blit"/>
++	<reg32 offset="0x840a" name="GRAS_A2D_SCISSOR_TL" type="a6xx_reg_xy" usage="rp_blit"/>
++	<reg32 offset="0x840b" name="GRAS_A2D_SCISSOR_BR" type="a6xx_reg_xy" usage="rp_blit"/>
+ 	<!-- 0x840c-0x85ff invalid -->
+ 
+ 	<!-- always 0x880 ? (and 0 in a640/a650 traces?) -->
+@@ -3481,7 +1304,7 @@ to upconvert to 32b float internally?
+ 	-->
+ 
+ 	<!-- same as GRAS_BIN_CONTROL, but without bit 27: -->
+-	<reg32 offset="0x8800" name="RB_BIN_CONTROL" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0x8800" name="RB_CNTL" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="BINW" low="0" high="5" shr="5" type="uint"/>
+ 		<bitfield name="BINH" low="8" high="14" shr="4" type="uint"/>
+ 		<bitfield name="RENDER_MODE" low="18" high="20" type="a6xx_render_mode"/>
+@@ -3490,7 +1313,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="LRZ_FEEDBACK_ZMODE_MASK" low="24" high="26" type="a6xx_lrz_feedback_mask"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x8800" name="RB_BIN_CONTROL" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0x8800" name="RB_CNTL" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="BINW" low="0" high="5" shr="5" type="uint"/>
+ 		<bitfield name="BINH" low="8" high="14" shr="4" type="uint"/>
+ 		<bitfield name="RENDER_MODE" low="18" high="20" type="a6xx_render_mode"/>
+@@ -3501,8 +1324,7 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8801" name="RB_RENDER_CNTL" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="CCUSINGLECACHELINESIZE" low="3" high="5"/>
+ 		<bitfield name="EARLYVIZOUTEN" pos="6" type="boolean"/>
+-		<!-- set during binning pass: -->
+-		<bitfield name="BINNING" pos="7" type="boolean"/>
++		<bitfield name="FS_DISABLE" pos="7" type="boolean"/>
+ 		<bitfield name="UNK8" low="8" high="10"/>
+ 		<bitfield name="RASTER_MODE" pos="8" type="a6xx_raster_mode"/>
+ 		<bitfield name="RASTER_DIRECTION" low="9" high="10" type="a6xx_raster_direction"/>
+@@ -3515,15 +1337,14 @@ to upconvert to 32b float internally?
+ 	</reg32>
+ 	<reg32 offset="0x8801" name="RB_RENDER_CNTL" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="EARLYVIZOUTEN" pos="6" type="boolean"/>
+-		<!-- set during binning pass: -->
+-		<bitfield name="BINNING" pos="7" type="boolean"/>
++		<bitfield name="FS_DISABLE" pos="7" type="boolean"/>
+ 		<bitfield name="RASTER_MODE" pos="8" type="a6xx_raster_mode"/>
+ 		<bitfield name="RASTER_DIRECTION" low="9" high="10" type="a6xx_raster_direction"/>
+ 		<bitfield name="CONSERVATIVERASEN" pos="11" type="boolean"/>
+ 		<bitfield name="INNERCONSERVATIVERASEN" pos="12" type="boolean"/>
+ 	</reg32>
+ 	<reg32 offset="0x8116" name="GRAS_SU_RENDER_CNTL" variants="A7XX-" usage="rp_blit">
+-		<bitfield name="BINNING" pos="7" type="boolean"/>
++		<bitfield name="FS_DISABLE" pos="7" type="boolean"/>
+ 	</reg32>
+ 
+ 	<reg32 offset="0x8802" name="RB_RAS_MSAA_CNTL" usage="rp_blit">
+@@ -3536,16 +1357,16 @@ to upconvert to 32b float internally?
+ 		<bitfield name="MSAA_DISABLE" pos="2" type="boolean"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x8804" name="RB_SAMPLE_CONFIG" type="a6xx_sample_config" usage="rp_blit"/>
+-	<reg32 offset="0x8805" name="RB_SAMPLE_LOCATION_0" type="a6xx_sample_locations" usage="rp_blit"/>
+-	<reg32 offset="0x8806" name="RB_SAMPLE_LOCATION_1" type="a6xx_sample_locations" usage="rp_blit"/>
++	<reg32 offset="0x8804" name="RB_MSAA_SAMPLE_POS_CNTL" type="a6xx_msaa_sample_pos_cntl" usage="rp_blit"/>
++	<reg32 offset="0x8805" name="RB_PROGRAMMABLE_MSAA_POS_0" type="a6xx_programmable_msaa_pos" usage="rp_blit"/>
++	<reg32 offset="0x8806" name="RB_PROGRAMMABLE_MSAA_POS_1" type="a6xx_programmable_msaa_pos" usage="rp_blit"/>
+ 	<!-- 0x8807-0x8808 invalid -->
+ 	<!--
+ 	note: maybe not actually called RB_RENDER_CONTROLn (since RB_RENDER_CNTL
+ 	name comes from kernel and is probably right)
+ 	 -->
+-	<reg32 offset="0x8809" name="RB_RENDER_CONTROL0" usage="rp_blit">
+-		<!-- see also GRAS_CNTL -->
++	<reg32 offset="0x8809" name="RB_INTERP_CNTL" usage="rp_blit">
++		<!-- see also GRAS_CL_INTERP_CNTL -->
+ 		<bitfield name="IJ_PERSP_PIXEL" pos="0" type="boolean"/>
+ 		<bitfield name="IJ_PERSP_CENTROID" pos="1" type="boolean"/>
+ 		<bitfield name="IJ_PERSP_SAMPLE" pos="2" type="boolean"/>
+@@ -3555,7 +1376,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="COORD_MASK" low="6" high="9" type="hex"/>
+ 		<bitfield name="UNK10" pos="10" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0x880a" name="RB_RENDER_CONTROL1" usage="rp_blit">
++	<reg32 offset="0x880a" name="RB_PS_INPUT_CNTL" usage="rp_blit">
+ 		<!-- enable bits for various FS sysvalue regs: -->
+ 		<bitfield name="SAMPLEMASK" pos="0" type="boolean"/>
+ 		<bitfield name="POSTDEPTHCOVERAGE" pos="1" type="boolean"/>
+@@ -3567,16 +1388,16 @@ to upconvert to 32b float internally?
+ 		<bitfield name="FOVEATION" pos="8" type="boolean"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x880b" name="RB_FS_OUTPUT_CNTL0" usage="rp_blit">
++	<reg32 offset="0x880b" name="RB_PS_OUTPUT_CNTL" usage="rp_blit">
+ 		<bitfield name="DUAL_COLOR_IN_ENABLE" pos="0" type="boolean"/>
+ 		<bitfield name="FRAG_WRITES_Z" pos="1" type="boolean"/>
+ 		<bitfield name="FRAG_WRITES_SAMPMASK" pos="2" type="boolean"/>
+ 		<bitfield name="FRAG_WRITES_STENCILREF" pos="3" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0x880c" name="RB_FS_OUTPUT_CNTL1" usage="rp_blit">
++	<reg32 offset="0x880c" name="RB_PS_MRT_CNTL" usage="rp_blit">
+ 		<bitfield name="MRT" low="0" high="3" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0x880d" name="RB_RENDER_COMPONENTS" usage="rp_blit">
++	<reg32 offset="0x880d" name="RB_PS_OUTPUT_MASK" usage="rp_blit">
+ 		<bitfield name="RT0" low="0" high="3"/>
+ 		<bitfield name="RT1" low="4" high="7"/>
+ 		<bitfield name="RT2" low="8" high="11"/>
+@@ -3608,7 +1429,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="SRGB_MRT7" pos="7" type="boolean"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x8810" name="RB_SAMPLE_CNTL" usage="rp_blit">
++	<reg32 offset="0x8810" name="RB_PS_SAMPLEFREQ_CNTL" usage="rp_blit">
+ 		<bitfield name="PER_SAMP_MODE" pos="0" type="boolean"/>
+ 	</reg32>
+ 	<reg32 offset="0x8811" name="RB_UNKNOWN_8811" low="4" high="6" usage="cmd"/>
+@@ -3672,18 +1493,18 @@ to upconvert to 32b float internally?
+ 		<reg32 offset="0x7" name="BASE_GMEM" low="12" high="31" shr="12"/>
+ 	</array>
+ 
+-	<reg32 offset="0x8860" name="RB_BLEND_RED_F32" type="float" usage="rp_blit"/>
+-	<reg32 offset="0x8861" name="RB_BLEND_GREEN_F32" type="float" usage="rp_blit"/>
+-	<reg32 offset="0x8862" name="RB_BLEND_BLUE_F32" type="float" usage="rp_blit"/>
+-	<reg32 offset="0x8863" name="RB_BLEND_ALPHA_F32" type="float" usage="rp_blit"/>
+-	<reg32 offset="0x8864" name="RB_ALPHA_CONTROL" usage="cmd">
++	<reg32 offset="0x8860" name="RB_BLEND_CONSTANT_RED_FP32" type="float" usage="rp_blit"/>
++	<reg32 offset="0x8861" name="RB_BLEND_CONSTANT_GREEN_FP32" type="float" usage="rp_blit"/>
++	<reg32 offset="0x8862" name="RB_BLEND_CONSTANT_BLUE_FP32" type="float" usage="rp_blit"/>
++	<reg32 offset="0x8863" name="RB_BLEND_CONSTANT_ALPHA_FP32" type="float" usage="rp_blit"/>
++	<reg32 offset="0x8864" name="RB_ALPHA_TEST_CNTL" usage="cmd">
+ 		<bitfield name="ALPHA_REF" low="0" high="7" type="hex"/>
+ 		<bitfield name="ALPHA_TEST" pos="8" type="boolean"/>
+ 		<bitfield name="ALPHA_TEST_FUNC" low="9" high="11" type="adreno_compare_func"/>
+ 	</reg32>
+ 	<reg32 offset="0x8865" name="RB_BLEND_CNTL" usage="rp_blit">
+ 		<!-- per-mrt enable bit -->
+-		<bitfield name="ENABLE_BLEND" low="0" high="7"/>
++		<bitfield name="BLEND_READS_DEST" low="0" high="7"/>
+ 		<bitfield name="INDEPENDENT_BLEND" pos="8" type="boolean"/>
+ 		<bitfield name="DUAL_COLOR_IN_ENABLE" pos="9" type="boolean"/>
+ 		<bitfield name="ALPHA_TO_COVERAGE" pos="10" type="boolean"/>
+@@ -3726,12 +1547,12 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8873" name="RB_DEPTH_BUFFER_PITCH" low="0" high="13" shr="6" type="uint" usage="rp_blit"/>
+ 	<reg32 offset="0x8874" name="RB_DEPTH_BUFFER_ARRAY_PITCH" low="0" high="27" shr="6" type="uint" usage="rp_blit"/>
+ 	<reg64 offset="0x8875" name="RB_DEPTH_BUFFER_BASE" type="waddress" align="64" usage="rp_blit"/>
+-	<reg32 offset="0x8877" name="RB_DEPTH_BUFFER_BASE_GMEM" low="12" high="31" shr="12" usage="rp_blit"/>
++	<reg32 offset="0x8877" name="RB_DEPTH_GMEM_BASE" low="12" high="31" shr="12" usage="rp_blit"/>
+ 
+-	<reg32 offset="0x8878" name="RB_Z_BOUNDS_MIN" type="float" usage="rp_blit"/>
+-	<reg32 offset="0x8879" name="RB_Z_BOUNDS_MAX" type="float" usage="rp_blit"/>
++	<reg32 offset="0x8878" name="RB_DEPTH_BOUND_MIN" type="float" usage="rp_blit"/>
++	<reg32 offset="0x8879" name="RB_DEPTH_BOUND_MAX" type="float" usage="rp_blit"/>
+ 	<!-- 0x887a-0x887f invalid -->
+-	<reg32 offset="0x8880" name="RB_STENCIL_CONTROL" usage="rp_blit">
++	<reg32 offset="0x8880" name="RB_STENCIL_CNTL" usage="rp_blit">
+ 		<bitfield name="STENCIL_ENABLE" pos="0" type="boolean"/>
+ 		<bitfield name="STENCIL_ENABLE_BF" pos="1" type="boolean"/>
+ 		<!--
+@@ -3753,11 +1574,11 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8115" name="GRAS_SU_STENCIL_CNTL" usage="rp_blit">
+ 		<bitfield name="STENCIL_ENABLE" pos="0" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0x8881" name="RB_STENCIL_INFO" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0x8881" name="RB_STENCIL_BUFFER_INFO" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="SEPARATE_STENCIL" pos="0" type="boolean"/>
+ 		<bitfield name="UNK1" pos="1" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0x8881" name="RB_STENCIL_INFO" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0x8881" name="RB_STENCIL_BUFFER_INFO" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="SEPARATE_STENCIL" pos="0" type="boolean"/>
+ 		<bitfield name="UNK1" pos="1" type="boolean"/>
+ 		<bitfield name="TILEMODE" low="2" high="3" type="a6xx_tile_mode"/>
+@@ -3765,22 +1586,22 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8882" name="RB_STENCIL_BUFFER_PITCH" low="0" high="11" shr="6" type="uint" usage="rp_blit"/>
+ 	<reg32 offset="0x8883" name="RB_STENCIL_BUFFER_ARRAY_PITCH" low="0" high="23" shr="6" type="uint" usage="rp_blit"/>
+ 	<reg64 offset="0x8884" name="RB_STENCIL_BUFFER_BASE" type="waddress" align="64" usage="rp_blit"/>
+-	<reg32 offset="0x8886" name="RB_STENCIL_BUFFER_BASE_GMEM" low="12" high="31" shr="12" usage="rp_blit"/>
+-	<reg32 offset="0x8887" name="RB_STENCILREF" usage="rp_blit">
++	<reg32 offset="0x8886" name="RB_STENCIL_GMEM_BASE" low="12" high="31" shr="12" usage="rp_blit"/>
++	<reg32 offset="0x8887" name="RB_STENCIL_REF_CNTL" usage="rp_blit">
+ 		<bitfield name="REF" low="0" high="7"/>
+ 		<bitfield name="BFREF" low="8" high="15"/>
+ 	</reg32>
+-	<reg32 offset="0x8888" name="RB_STENCILMASK" usage="rp_blit">
++	<reg32 offset="0x8888" name="RB_STENCIL_MASK" usage="rp_blit">
+ 		<bitfield name="MASK" low="0" high="7"/>
+ 		<bitfield name="BFMASK" low="8" high="15"/>
+ 	</reg32>
+-	<reg32 offset="0x8889" name="RB_STENCILWRMASK" usage="rp_blit">
++	<reg32 offset="0x8889" name="RB_STENCIL_WRITE_MASK" usage="rp_blit">
+ 		<bitfield name="WRMASK" low="0" high="7"/>
+ 		<bitfield name="BFWRMASK" low="8" high="15"/>
+ 	</reg32>
+ 	<!-- 0x888a-0x888f invalid -->
+ 	<reg32 offset="0x8890" name="RB_WINDOW_OFFSET" type="a6xx_reg_xy" usage="rp_blit"/>
+-	<reg32 offset="0x8891" name="RB_SAMPLE_COUNT_CONTROL" usage="cmd">
++	<reg32 offset="0x8891" name="RB_SAMPLE_COUNTER_CNTL" usage="cmd">
+ 		<bitfield name="DISABLE" pos="0" type="boolean"/>
+ 		<bitfield name="COPY" pos="1" type="boolean"/>
+ 	</reg32>
+@@ -3791,27 +1612,27 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8899" name="RB_UNKNOWN_8899" variants="A7XX-" usage="cmd"/>
+ 	<!-- 0x8899-0x88bf invalid -->
+ 	<!-- clamps depth value for depth test/write -->
+-	<reg32 offset="0x88c0" name="RB_Z_CLAMP_MIN" type="float" usage="rp_blit"/>
+-	<reg32 offset="0x88c1" name="RB_Z_CLAMP_MAX" type="float" usage="rp_blit"/>
++	<reg32 offset="0x88c0" name="RB_VIEWPORT_ZCLAMP_MIN" type="float" usage="rp_blit"/>
++	<reg32 offset="0x88c1" name="RB_VIEWPORT_ZCLAMP_MAX" type="float" usage="rp_blit"/>
+ 	<!-- 0x88c2-0x88cf invalid-->
+-	<reg32 offset="0x88d0" name="RB_UNKNOWN_88D0" usage="rp_blit">
++	<reg32 offset="0x88d0" name="RB_RESOLVE_CNTL_0" usage="rp_blit">
+ 		<bitfield name="UNK0" low="0" high="12"/>
+ 		<bitfield name="UNK16" low="16" high="26"/>
+ 	</reg32>
+-	<reg32 offset="0x88d1" name="RB_BLIT_SCISSOR_TL" type="a6xx_reg_xy" usage="rp_blit"/>
+-	<reg32 offset="0x88d2" name="RB_BLIT_SCISSOR_BR" type="a6xx_reg_xy" usage="rp_blit"/>
++	<reg32 offset="0x88d1" name="RB_RESOLVE_CNTL_1" type="a6xx_reg_xy" usage="rp_blit"/>
++	<reg32 offset="0x88d2" name="RB_RESOLVE_CNTL_2" type="a6xx_reg_xy" usage="rp_blit"/>
+ 	<!-- weird to duplicate other regs from same block?? -->
+-	<reg32 offset="0x88d3" name="RB_BIN_CONTROL2" usage="rp_blit">
++	<reg32 offset="0x88d3" name="RB_RESOLVE_CNTL_3" usage="rp_blit">
+ 		<bitfield name="BINW" low="0" high="5" shr="5" type="uint"/>
+ 		<bitfield name="BINH" low="8" high="14" shr="4" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0x88d4" name="RB_WINDOW_OFFSET2" type="a6xx_reg_xy" usage="rp_blit"/>
+-	<reg32 offset="0x88d5" name="RB_BLIT_GMEM_MSAA_CNTL" usage="rp_blit">
++	<reg32 offset="0x88d4" name="RB_RESOLVE_WINDOW_OFFSET" type="a6xx_reg_xy" usage="rp_blit"/>
++	<reg32 offset="0x88d5" name="RB_RESOLVE_GMEM_BUFFER_INFO" usage="rp_blit">
+ 		<bitfield name="SAMPLES" low="3" high="4" type="a3xx_msaa_samples"/>
+ 	</reg32>
+-	<reg32 offset="0x88d6" name="RB_BLIT_BASE_GMEM" low="12" high="31" shr="12" usage="rp_blit"/>
++	<reg32 offset="0x88d6" name="RB_RESOLVE_GMEM_BUFFER_BASE" low="12" high="31" shr="12" usage="rp_blit"/>
+ 	<!-- s/DST_FORMAT/DST_INFO/ probably: -->
+-	<reg32 offset="0x88d7" name="RB_BLIT_DST_INFO" usage="rp_blit">
++	<reg32 offset="0x88d7" name="RB_RESOLVE_SYSTEM_BUFFER_INFO" usage="rp_blit">
+ 		<bitfield name="TILE_MODE" low="0" high="1" type="a6xx_tile_mode"/>
+ 		<bitfield name="FLAGS" pos="2" type="boolean"/>
+ 		<bitfield name="SAMPLES" low="3" high="4" type="a3xx_msaa_samples"/>
+@@ -3820,25 +1641,31 @@ to upconvert to 32b float internally?
+ 		<bitfield name="UNK15" pos="15" type="boolean"/>
+ 		<bitfield name="MUTABLEEN" pos="16" type="boolean" variants="A7XX-"/>
+ 	</reg32>
+-	<reg64 offset="0x88d8" name="RB_BLIT_DST" type="waddress" align="64" usage="rp_blit"/>
+-	<reg32 offset="0x88da" name="RB_BLIT_DST_PITCH" low="0" high="15" shr="6" type="uint" usage="rp_blit"/>
++	<reg64 offset="0x88d8" name="RB_RESOLVE_SYSTEM_BUFFER_BASE" type="waddress" align="64" usage="rp_blit"/>
++	<reg32 offset="0x88da" name="RB_RESOLVE_SYSTEM_BUFFER_PITCH" low="0" high="15" shr="6" type="uint" usage="rp_blit"/>
+ 	<!-- array-pitch is size of layer -->
+-	<reg32 offset="0x88db" name="RB_BLIT_DST_ARRAY_PITCH" low="0" high="28" shr="6" type="uint" usage="rp_blit"/>
+-	<reg64 offset="0x88dc" name="RB_BLIT_FLAG_DST" type="waddress" align="64" usage="rp_blit"/>
+-	<reg32 offset="0x88de" name="RB_BLIT_FLAG_DST_PITCH" usage="rp_blit">
++	<reg32 offset="0x88db" name="RB_RESOLVE_SYSTEM_BUFFER_ARRAY_PITCH" low="0" high="28" shr="6" type="uint" usage="rp_blit"/>
++	<reg64 offset="0x88dc" name="RB_RESOLVE_SYSTEM_FLAG_BUFFER_BASE" type="waddress" align="64" usage="rp_blit"/>
++	<reg32 offset="0x88de" name="RB_RESOLVE_SYSTEM_FLAG_BUFFER_PITCH" usage="rp_blit">
+ 		<bitfield name="PITCH" low="0" high="10" shr="6" type="uint"/>
+ 		<bitfield name="ARRAY_PITCH" low="11" high="27" shr="7" type="uint"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x88df" name="RB_BLIT_CLEAR_COLOR_DW0" usage="rp_blit"/>
+-	<reg32 offset="0x88e0" name="RB_BLIT_CLEAR_COLOR_DW1" usage="rp_blit"/>
+-	<reg32 offset="0x88e1" name="RB_BLIT_CLEAR_COLOR_DW2" usage="rp_blit"/>
+-	<reg32 offset="0x88e2" name="RB_BLIT_CLEAR_COLOR_DW3" usage="rp_blit"/>
++	<reg32 offset="0x88df" name="RB_RESOLVE_CLEAR_COLOR_DW0" usage="rp_blit"/>
++	<reg32 offset="0x88e0" name="RB_RESOLVE_CLEAR_COLOR_DW1" usage="rp_blit"/>
++	<reg32 offset="0x88e1" name="RB_RESOLVE_CLEAR_COLOR_DW2" usage="rp_blit"/>
++	<reg32 offset="0x88e2" name="RB_RESOLVE_CLEAR_COLOR_DW3" usage="rp_blit"/>
++
++	<enum name="a6xx_blit_event_type">
++		<value value="0x0" name="BLIT_EVENT_STORE"/>
++		<value value="0x1" name="BLIT_EVENT_STORE_AND_CLEAR"/>
++		<value value="0x2" name="BLIT_EVENT_CLEAR"/>
++		<value value="0x3" name="BLIT_EVENT_LOAD"/>
++	</enum>
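These event types line up naturally with renderpass attachment handling: BLIT_EVENT_LOAD restores GMEM from system memory, BLIT_EVENT_STORE resolves it back, and the CLEAR variants fold in a clear. A hedged mapping sketch from Vulkan-style load ops; the choice for don't-care contents is an assumption, not documented behavior:

    #include <stdbool.h>

    enum blit_event_type {
        BLIT_EVENT_STORE, BLIT_EVENT_STORE_AND_CLEAR,
        BLIT_EVENT_CLEAR, BLIT_EVENT_LOAD,
    };

    static enum blit_event_type
    begin_pass_event(bool load_op_load, bool load_op_clear)
    {
        if (load_op_load)  return BLIT_EVENT_LOAD;  /* sysmem -> GMEM restore */
        if (load_op_clear) return BLIT_EVENT_CLEAR; /* clear directly in GMEM */
        return BLIT_EVENT_CLEAR; /* don't-care: a clear is presumably cheapest */
    }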
+ 
+ 	<!-- seems somewhat similar to what we called RB_CLEAR_CNTL on a5xx: -->
+-	<reg32 offset="0x88e3" name="RB_BLIT_INFO" usage="rp_blit">
+-		<bitfield name="UNK0" pos="0" type="boolean"/> <!-- s8 stencil restore/clear?  But also color restore? -->
+-		<bitfield name="GMEM" pos="1" type="boolean"/> <!-- set for restore and clear to gmem? -->
++	<reg32 offset="0x88e3" name="RB_RESOLVE_OPERATION" usage="rp_blit">
++		<bitfield name="TYPE" low="0" high="1" type="a6xx_blit_event_type"/>
+ 		<bitfield name="SAMPLE_0" pos="2" type="boolean"/> <!-- takes sample 0 instead of averaging -->
+ 		<bitfield name="DEPTH" pos="3" type="boolean"/> <!-- z16/z32/z24s8/x24x8 clear or resolve? -->
+ 		<doc>
+@@ -3853,16 +1680,20 @@ to upconvert to 32b float internally?
+ 		<!-- set when this is the last resolve on a650+ -->
+ 		<bitfield name="LAST" low="8" high="9"/>
+ 		<!--
+-			a618 GLES: color render target number being resolved for RM6_RESOLVE, 0x8 for depth, 0x9 for separate stencil.
+-			a618 VK: 0x8 for depth RM6_RESOLVE, 0x9 for separate stencil, 0 otherwise.
+-
+-			We believe this is related to concurrent resolves
++			a618 GLES: color render target number being resolved for CCU_RESOLVE, 0x8 for depth, 0x9 for separate stencil.
++			a618 VK: 0x8 for depth CCU_RESOLVE, 0x9 for separate stencil, 0 otherwise.
++			a7xx VK: 0x8 for depth, 0x9 for separate stencil, 0x0 to 0x7 used for concurrent resolves of color render
++			targets inside a given resolve group.
+ 		 -->
+ 		<bitfield name="BUFFER_ID" low="12" high="15"/>
+ 	</reg32>
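The BUFFER_ID convention spelled out in the doc is easy to capture directly; a sketch assuming the a7xx Vulkan behavior (0x8 for depth, 0x9 for separate stencil, the MRT index otherwise):

    #include <stdbool.h>

    /* Color resolves use the MRT index (0x0..0x7) so they can proceed
     * concurrently within a resolve group. */
    static unsigned resolve_buffer_id(bool is_depth, bool is_sep_stencil,
                                      unsigned mrt)
    {
        if (is_depth)       return 0x8;
        if (is_sep_stencil) return 0x9;
        return mrt;
    }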
+-	<reg32 offset="0x88e4" name="RB_UNKNOWN_88E4" variants="A7XX-" usage="rp_blit">
+-		<!-- Value conditioned based on predicate, changed before blits -->
+-		<bitfield name="UNK0" pos="0" type="boolean"/>
++
++	<enum name="a7xx_blit_clear_mode">
++		<value value="0x0" name="CLEAR_MODE_SYSMEM"/>
++		<value value="0x1" name="CLEAR_MODE_GMEM"/>
++	</enum>
++	<reg32 offset="0x88e4" name="RB_CLEAR_TARGET" variants="A7XX-" usage="rp_blit">
++			<bitfield name="CLEAR_MODE" pos="0" type="a7xx_blit_clear_mode"/>
+ 	</reg32>
+ 
+ 	<enum name="a6xx_ccu_cache_size">
+@@ -3871,7 +1702,7 @@ to upconvert to 32b float internally?
+ 		<value value="0x2" name="CCU_CACHE_SIZE_QUARTER"/>
+ 		<value value="0x3" name="CCU_CACHE_SIZE_EIGHTH"/>
+ 	</enum>
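The cache-size enum reads as a divisor exponent: each step halves the usable CCU cache. A one-liner capturing that, assuming value N simply means the full size shifted right by N:

    #include <stdint.h>

    /* FULL=0, HALF=1, QUARTER=2, EIGHTH=3 -> bytes = full >> sel */
    static inline uint32_t ccu_cache_bytes(uint32_t full_bytes, uint32_t sel)
    {
        return full_bytes >> sel;
    }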
+-	<reg32 offset="0x88e5" name="RB_CCU_CNTL2" variants="A7XX-" usage="cmd">
++	<reg32 offset="0x88e5" name="RB_CCU_CACHE_CNTL" variants="A7XX-" usage="cmd">
+ 		<bitfield name="DEPTH_OFFSET_HI" pos="0" type="hex"/>
+ 		<bitfield name="COLOR_OFFSET_HI" pos="2" type="hex"/>
+ 		<bitfield name="DEPTH_CACHE_SIZE" low="10" high="11" type="a6xx_ccu_cache_size"/>
+@@ -3895,7 +1726,13 @@ to upconvert to 32b float internally?
+ 		<bitfield name="PITCH" low="0" high="10" shr="6" type="uint"/>
+ 		<bitfield name="ARRAY_PITCH" low="11" high="23" shr="7" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0x88f4" name="RB_UNKNOWN_88F4" low="0" high="2"/>
++
++	<reg32 offset="0x88f4" name="RB_VRS_CONFIG" usage="rp_blit">
++		<bitfield name="UNK2" pos="2" type="boolean"/>
++		<bitfield name="PIPELINE_FSR_ENABLE" pos="4" type="boolean"/>
++		<bitfield name="ATTACHMENT_FSR_ENABLE" pos="5" type="boolean"/>
++		<bitfield name="PRIMITIVE_FSR_ENABLE" pos="18" type="boolean"/>
++	</reg32>
+ 	<!-- Connected to VK_EXT_fragment_density_map? -->
+ 	<reg32 offset="0x88f5" name="RB_UNKNOWN_88F5" variants="A7XX-"/>
+ 	<!-- 0x88f6-0x88ff invalid -->
+@@ -3906,7 +1743,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="UNK8" low="8" high="10"/>
+ 		<bitfield name="ARRAY_PITCH" low="11" high="27" shr="7" type="uint"/>
+ 	</reg32>
+-	<array offset="0x8903" name="RB_MRT_FLAG_BUFFER" stride="3" length="8" usage="rp_blit">
++	<array offset="0x8903" name="RB_COLOR_FLAG_BUFFER" stride="3" length="8" usage="rp_blit">
+ 		<reg64 offset="0" name="ADDR" type="waddress" align="64"/>
+ 		<reg32 offset="2" name="PITCH">
+ 			<bitfield name="PITCH" low="0" high="10" shr="6" type="uint"/>
+@@ -3915,10 +1752,10 @@ to upconvert to 32b float internally?
+ 	</array>
+ 	<!-- 0x891b-0x8926 invalid -->
+ 	<doc>
+-		RB_SAMPLE_COUNT_ADDR register is used up to (and including) a730. After that
++		RB_SAMPLE_COUNTER_BASE register is used up to (and including) a730. After that
+ 		the address is specified through CP_EVENT_WRITE7::WRITE_SAMPLE_COUNT.
+ 	</doc>
+-	<reg64 offset="0x8927" name="RB_SAMPLE_COUNT_ADDR" type="waddress" align="16" usage="cmd"/>
++	<reg64 offset="0x8927" name="RB_SAMPLE_COUNTER_BASE" type="waddress" align="16" usage="cmd"/>
+ 	<!-- 0x8929-0x89ff invalid -->
+ 
+ 	<!-- TODO: there are some registers in the 0x8a00-0x8bff range -->
+@@ -3932,10 +1769,10 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8a20" name="RB_UNKNOWN_8A20" variants="A6XX" usage="rp_blit"/>
+ 	<reg32 offset="0x8a30" name="RB_UNKNOWN_8A30" variants="A6XX" usage="rp_blit"/>
+ 
+-	<reg32 offset="0x8c00" name="RB_2D_BLIT_CNTL" type="a6xx_2d_blit_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x8c01" name="RB_2D_UNKNOWN_8C01" low="0" high="31" usage="rp_blit"/>
++	<reg32 offset="0x8c00" name="RB_A2D_BLT_CNTL" type="a6xx_a2d_bit_cntl" usage="rp_blit"/>
++	<reg32 offset="0x8c01" name="RB_A2D_PIXEL_CNTL" low="0" high="31" usage="rp_blit"/>
+ 
+-	<bitset name="a6xx_2d_src_surf_info" inline="yes">
++	<bitset name="a6xx_a2d_src_texture_info" inline="yes">
+ 		<bitfield name="COLOR_FORMAT" low="0" high="7" type="a6xx_format"/>
+ 		<bitfield name="TILE_MODE" low="8" high="9" type="a6xx_tile_mode"/>
+ 		<bitfield name="COLOR_SWAP" low="10" high="11" type="a3xx_color_swap"/>
+@@ -3954,7 +1791,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="MUTABLEEN" pos="29" type="boolean" variants="A7XX-"/>
+ 	</bitset>
+ 
+-	<bitset name="a6xx_2d_dst_surf_info" inline="yes">
++	<bitset name="a6xx_a2d_dest_buffer_info" inline="yes">
+ 		<bitfield name="COLOR_FORMAT" low="0" high="7" type="a6xx_format"/>
+ 		<bitfield name="TILE_MODE" low="8" high="9" type="a6xx_tile_mode"/>
+ 		<bitfield name="COLOR_SWAP" low="10" high="11" type="a3xx_color_swap"/>
+@@ -3965,26 +1802,26 @@ to upconvert to 32b float internally?
+ 	</bitset>
+ 
+ 	<!-- 0x8c02-0x8c16 invalid -->
+-	<reg32 offset="0x8c17" name="RB_2D_DST_INFO" type="a6xx_2d_dst_surf_info" usage="rp_blit"/>
+-	<reg64 offset="0x8c18" name="RB_2D_DST" type="waddress" align="64" usage="rp_blit"/>
+-	<reg32 offset="0x8c1a" name="RB_2D_DST_PITCH" low="0" high="15" shr="6" type="uint" usage="rp_blit"/>
++	<reg32 offset="0x8c17" name="RB_A2D_DEST_BUFFER_INFO" type="a6xx_a2d_dest_buffer_info" usage="rp_blit"/>
++	<reg64 offset="0x8c18" name="RB_A2D_DEST_BUFFER_BASE" type="waddress" align="64" usage="rp_blit"/>
++	<reg32 offset="0x8c1a" name="RB_A2D_DEST_BUFFER_PITCH" low="0" high="15" shr="6" type="uint" usage="rp_blit"/>
+ 	<!-- this is a guess but seems likely (for NV12/IYUV): -->
+-	<reg64 offset="0x8c1b" name="RB_2D_DST_PLANE1" type="waddress" align="64" usage="rp_blit"/>
+-	<reg32 offset="0x8c1d" name="RB_2D_DST_PLANE_PITCH" low="0" high="15" shr="6" type="uint" usage="rp_blit"/>
+-	<reg64 offset="0x8c1e" name="RB_2D_DST_PLANE2" type="waddress" align="64" usage="rp_blit"/>
++	<reg64 offset="0x8c1b" name="RB_A2D_DEST_BUFFER_BASE_1" type="waddress" align="64" usage="rp_blit"/>
++	<reg32 offset="0x8c1d" name="RB_A2D_DEST_BUFFER_PITCH_1" low="0" high="15" shr="6" type="uint" usage="rp_blit"/>
++	<reg64 offset="0x8c1e" name="RB_A2D_DEST_BUFFER_BASE_2" type="waddress" align="64" usage="rp_blit"/>
+ 
+-	<reg64 offset="0x8c20" name="RB_2D_DST_FLAGS" type="waddress" align="64" usage="rp_blit"/>
+-	<reg32 offset="0x8c22" name="RB_2D_DST_FLAGS_PITCH" low="0" high="7" shr="6" type="uint" usage="rp_blit"/>
++	<reg64 offset="0x8c20" name="RB_A2D_DEST_FLAG_BUFFER_BASE" type="waddress" align="64" usage="rp_blit"/>
++	<reg32 offset="0x8c22" name="RB_A2D_DEST_FLAG_BUFFER_PITCH" low="0" high="7" shr="6" type="uint" usage="rp_blit"/>
+ 	<!-- this is a guess but seems likely (for NV12 with UBWC): -->
+-	<reg64 offset="0x8c23" name="RB_2D_DST_FLAGS_PLANE" type="waddress" align="64" usage="rp_blit"/>
+-	<reg32 offset="0x8c25" name="RB_2D_DST_FLAGS_PLANE_PITCH" low="0" high="7" shr="6" type="uint" usage="rp_blit"/>
++	<reg64 offset="0x8c23" name="RB_A2D_DEST_FLAG_BUFFER_BASE_1" type="waddress" align="64" usage="rp_blit"/>
++	<reg32 offset="0x8c25" name="RB_A2D_DEST_FLAG_BUFFER_PITCH_1" low="0" high="7" shr="6" type="uint" usage="rp_blit"/>
+ 
+ 	<!-- TODO: 0x8c26-0x8c33 are all full 32-bit registers -->
+ 	<!-- unlike a5xx, these are per channel values rather than packed -->
+-	<reg32 offset="0x8c2c" name="RB_2D_SRC_SOLID_C0" usage="rp_blit"/>
+-	<reg32 offset="0x8c2d" name="RB_2D_SRC_SOLID_C1" usage="rp_blit"/>
+-	<reg32 offset="0x8c2e" name="RB_2D_SRC_SOLID_C2" usage="rp_blit"/>
+-	<reg32 offset="0x8c2f" name="RB_2D_SRC_SOLID_C3" usage="rp_blit"/>
++	<reg32 offset="0x8c2c" name="RB_A2D_CLEAR_COLOR_DW0" usage="rp_blit"/>
++	<reg32 offset="0x8c2d" name="RB_A2D_CLEAR_COLOR_DW1" usage="rp_blit"/>
++	<reg32 offset="0x8c2e" name="RB_A2D_CLEAR_COLOR_DW2" usage="rp_blit"/>
++	<reg32 offset="0x8c2f" name="RB_A2D_CLEAR_COLOR_DW3" usage="rp_blit"/>
+ 
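Since each of these registers takes one full 32-bit channel value rather than
the packed a5xx layout, a clear color is emitted as four separate writes. A
minimal sketch, assuming a hypothetical emit_write() ring helper and a float
render-target format (using the raw float bit pattern per channel is my
assumption; integer formats would encode differently):

    #include <stdint.h>
    #include <string.h>

    struct ring;                                 /* opaque, hypothetical */
    void emit_write(struct ring *r, uint32_t reg, uint32_t val);

    static uint32_t fui(float f)                 /* float bits as u32 */
    {
            uint32_t u;
            memcpy(&u, &f, sizeof(u));
            return u;
    }

    /* RB_A2D_CLEAR_COLOR_DW0..DW3 stand for the 0x8c2c-0x8c2f offsets above. */
    static void emit_a2d_clear_color(struct ring *r, const float c[4])
    {
            emit_write(r, RB_A2D_CLEAR_COLOR_DW0, fui(c[0]));  /* R */
            emit_write(r, RB_A2D_CLEAR_COLOR_DW1, fui(c[1]));  /* G */
            emit_write(r, RB_A2D_CLEAR_COLOR_DW2, fui(c[2]));  /* B */
            emit_write(r, RB_A2D_CLEAR_COLOR_DW3, fui(c[3]));  /* A */
    }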
+ 	<reg32 offset="0x8c34" name="RB_UNKNOWN_8C34" variants="A7XX-" usage="cmd"/>
+ 
+@@ -3996,7 +1833,7 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8e04" name="RB_DBG_ECO_CNTL" usage="cmd"/> <!-- TODO: valid mask 0xfffffeff -->
+ 	<reg32 offset="0x8e05" name="RB_ADDR_MODE_CNTL" pos="0" type="a5xx_address_mode"/>
+ 	<!-- 0x02080000 in GMEM, zero otherwise?  -->
+-	<reg32 offset="0x8e06" name="RB_UNKNOWN_8E06" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0x8e06" name="RB_CCU_DBG_ECO_CNTL" variants="A7XX-" usage="cmd"/>
+ 
+ 	<reg32 offset="0x8e07" name="RB_CCU_CNTL" usage="cmd" variants="A6XX">
+ 		<bitfield name="GMEM_FAST_CLEAR_DISABLE" pos="0" type="boolean"/>
+@@ -4017,10 +1854,21 @@ to upconvert to 32b float internally?
+ 		<bitfield name="COLOR_OFFSET" low="23" high="31" shr="12" type="hex"/>
+ 		<!-- TODO: valid mask 0xfffffc1f -->
+ 	</reg32>
++	<enum name="a7xx_concurrent_resolve_mode">
++		<value value="0x0" name="CONCURRENT_RESOLVE_MODE_DISABLED"/>
++		<value value="0x1" name="CONCURRENT_RESOLVE_MODE_1"/>
++		<value value="0x2" name="CONCURRENT_RESOLVE_MODE_2"/>
++	</enum>
++	<enum name="a7xx_concurrent_unresolve_mode">
++		<value value="0x0" name="CONCURRENT_UNRESOLVE_MODE_DISABLED"/>
++		<value value="0x1" name="CONCURRENT_UNRESOLVE_MODE_PARTIAL"/>
++		<value value="0x3" name="CONCURRENT_UNRESOLVE_MODE_FULL"/>
++	</enum>
+ 	<reg32 offset="0x8e07" name="RB_CCU_CNTL" usage="cmd" variants="A7XX-">
+ 		<bitfield name="GMEM_FAST_CLEAR_DISABLE" pos="0" type="boolean"/>
+-		<bitfield name="CONCURRENT_RESOLVE" pos="2" type="boolean"/>
+-		<!-- rest of the bits were moved to RB_CCU_CNTL2 -->
++		<bitfield name="CONCURRENT_RESOLVE_MODE" low="2" high="3" type="a7xx_concurrent_resolve_mode"/>
++		<bitfield name="CONCURRENT_UNRESOLVE_MODE" low="5" high="6" type="a7xx_concurrent_unresolve_mode"/>
++		<!-- rest of the bits were moved to RB_CCU_CACHE_CNTL -->
+ 	</reg32>
+ 	<reg32 offset="0x8e08" name="RB_NC_MODE_CNTL">
+ 		<bitfield name="MODE" pos="0" type="boolean"/>
+@@ -4046,9 +1894,9 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8e3d" name="RB_RB_SUB_BLOCK_SEL_CNTL_CD"/>
+ 	<!-- 0x8e3e-0x8e4f invalid -->
+ 	<!-- GMEM save/restore for preemption: -->
+-	<reg32 offset="0x8e50" name="RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE" pos="0" type="boolean"/>
++	<reg32 offset="0x8e50" name="RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE_ENABLE" pos="0" type="boolean"/>
+ 	<!-- address for GMEM save/restore? -->
+-	<reg32 offset="0x8e51" name="RB_UNKNOWN_8E51" type="waddress" align="1"/>
++	<reg32 offset="0x8e51" name="RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE_ADDR" type="waddress" align="1"/>
+ 	<!-- 0x8e53-0x8e7f invalid -->
+ 	<reg32 offset="0x8e79" name="RB_UNKNOWN_8E79" variants="A7XX-" usage="cmd"/>
+ 	<!-- 0x8e80-0x8e83 are valid -->
+@@ -4069,38 +1917,38 @@ to upconvert to 32b float internally?
+ 		<bitfield name="CLIP_DIST_03_LOC" low="8" high="15" type="uint"/>
+ 		<bitfield name="CLIP_DIST_47_LOC" low="16" high="23" type="uint"/>
+ 	</bitset>
+-	<reg32 offset="0x9101" name="VPC_VS_CLIP_CNTL" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9102" name="VPC_GS_CLIP_CNTL" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9103" name="VPC_DS_CLIP_CNTL" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9101" name="VPC_VS_CLIP_CULL_CNTL" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9102" name="VPC_GS_CLIP_CULL_CNTL" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9103" name="VPC_DS_CLIP_CULL_CNTL" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
+ 
+-	<reg32 offset="0x9311" name="VPC_VS_CLIP_CNTL_V2" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9312" name="VPC_GS_CLIP_CNTL_V2" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9313" name="VPC_DS_CLIP_CNTL_V2" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9311" name="VPC_VS_CLIP_CULL_CNTL_V2" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9312" name="VPC_GS_CLIP_CULL_CNTL_V2" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9313" name="VPC_DS_CLIP_CULL_CNTL_V2" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
+ 
+-	<bitset name="a6xx_vpc_xs_layer_cntl" inline="yes">
++	<bitset name="a6xx_vpc_xs_siv_cntl" inline="yes">
+ 		<bitfield name="LAYERLOC" low="0" high="7" type="uint"/>
+ 		<bitfield name="VIEWLOC" low="8" high="15" type="uint"/>
+ 		<bitfield name="SHADINGRATELOC" low="16" high="23" type="uint" variants="A7XX-"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0x9104" name="VPC_VS_LAYER_CNTL" type="a6xx_vpc_xs_layer_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9105" name="VPC_GS_LAYER_CNTL" type="a6xx_vpc_xs_layer_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9106" name="VPC_DS_LAYER_CNTL" type="a6xx_vpc_xs_layer_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9104" name="VPC_VS_SIV_CNTL" type="a6xx_vpc_xs_siv_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9105" name="VPC_GS_SIV_CNTL" type="a6xx_vpc_xs_siv_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9106" name="VPC_DS_SIV_CNTL" type="a6xx_vpc_xs_siv_cntl" usage="rp_blit"/>
+ 
+-	<reg32 offset="0x9314" name="VPC_VS_LAYER_CNTL_V2" type="a6xx_vpc_xs_layer_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9315" name="VPC_GS_LAYER_CNTL_V2" type="a6xx_vpc_xs_layer_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9316" name="VPC_DS_LAYER_CNTL_V2" type="a6xx_vpc_xs_layer_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9314" name="VPC_VS_SIV_CNTL_V2" type="a6xx_vpc_xs_siv_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9315" name="VPC_GS_SIV_CNTL_V2" type="a6xx_vpc_xs_siv_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9316" name="VPC_DS_SIV_CNTL_V2" type="a6xx_vpc_xs_siv_cntl" usage="rp_blit"/>
+ 
+ 	<reg32 offset="0x9107" name="VPC_UNKNOWN_9107" variants="A6XX" usage="rp_blit">
+-		<!-- this mirrors PC_RASTER_CNTL::DISCARD, although it seems it's unused -->
++		<!-- this mirrors VPC_RAST_STREAM_CNTL::DISCARD, although it seems it's unused -->
+ 		<bitfield name="RASTER_DISCARD" pos="0" type="boolean"/>
+ 		<bitfield name="UNK2" pos="2" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0x9108" name="VPC_POLYGON_MODE" usage="rp_blit">
++	<reg32 offset="0x9108" name="VPC_RAST_CNTL" usage="rp_blit">
+ 		<bitfield name="MODE" low="0" high="1" type="a6xx_polygon_mode"/>
+ 	</reg32>
+ 
+-	<bitset name="a6xx_primitive_cntl_0" inline="yes">
++	<bitset name="a6xx_pc_cntl" inline="yes">
+ 		<bitfield name="PRIMITIVE_RESTART" pos="0" type="boolean"/>
+ 		<bitfield name="PROVOKING_VTX_LAST" pos="1" type="boolean"/>
+ 		<bitfield name="D3D_VERTEX_ORDERING" pos="2" type="boolean">
+@@ -4113,7 +1961,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="UNK3" pos="3" type="boolean"/>
+ 	</bitset>
+ 
+-	<bitset name="a6xx_primitive_cntl_5" inline="yes">
++	<bitset name="a6xx_gs_param_0" inline="yes">
+ 		<doc>
+ 		  geometry shader
+ 		</doc>
+@@ -4125,7 +1973,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="UNK18" pos="18"/>
+ 	</bitset>
+ 
+-	<bitset name="a6xx_multiview_cntl" inline="yes">
++	<bitset name="a6xx_stereo_rendering_cntl" inline="yes">
+ 		<bitfield name="ENABLE" pos="0" type="boolean"/>
+ 		<bitfield name="DISABLEMULTIPOS" pos="1" type="boolean">
+ 			<doc>
+@@ -4139,10 +1987,10 @@ to upconvert to 32b float internally?
+ 		<bitfield name="VIEWS" low="2" high="6" type="uint"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0x9109" name="VPC_PRIMITIVE_CNTL_0" type="a6xx_primitive_cntl_0" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0x910a" name="VPC_PRIMITIVE_CNTL_5" type="a6xx_primitive_cntl_5" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0x910b" name="VPC_MULTIVIEW_MASK" type="hex" low="0" high="15" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0x910c" name="VPC_MULTIVIEW_CNTL" type="a6xx_multiview_cntl" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0x9109" name="VPC_PC_CNTL" type="a6xx_pc_cntl" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0x910a" name="VPC_GS_PARAM_0" type="a6xx_gs_param_0" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0x910b" name="VPC_STEREO_RENDERING_VIEWMASK" type="hex" low="0" high="15" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0x910c" name="VPC_STEREO_RENDERING_CNTL" type="a6xx_stereo_rendering_cntl" variants="A7XX-" usage="rp_blit"/>
+ 
+ 	<enum name="a6xx_varying_interp_mode">
+ 		<value value="0" name="INTERP_SMOOTH"/>
+@@ -4159,11 +2007,11 @@ to upconvert to 32b float internally?
+ 	</enum>
+ 
+ 	<!-- 0x9109-0x91ff invalid -->
+-	<array offset="0x9200" name="VPC_VARYING_INTERP" stride="1" length="8" usage="rp_blit">
++	<array offset="0x9200" name="VPC_VARYING_INTERP_MODE" stride="1" length="8" usage="rp_blit">
+ 		<doc>Packed array of a6xx_varying_interp_mode</doc>
+ 		<reg32 offset="0x0" name="MODE"/>
+ 	</array>
+-	<array offset="0x9208" name="VPC_VARYING_PS_REPL" stride="1" length="8" usage="rp_blit">
++	<array offset="0x9208" name="VPC_VARYING_REPLACE_MODE_0" stride="1" length="8" usage="rp_blit">
+ 		<doc>Packed array of a6xx_varying_ps_repl_mode</doc>
+ 		<reg32 offset="0x0" name="MODE"/>
+ 	</array>
+@@ -4172,12 +2020,12 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x9210" name="VPC_UNKNOWN_9210" low="0" high="31" variants="A6XX" usage="cmd"/>
+ 	<reg32 offset="0x9211" name="VPC_UNKNOWN_9211" low="0" high="31" variants="A6XX" usage="cmd"/>
+ 
+-	<array offset="0x9212" name="VPC_VAR" stride="1" length="4" usage="rp_blit">
++	<array offset="0x9212" name="VPC_VARYING_LM_TRANSFER_CNTL_0" stride="1" length="4" usage="rp_blit">
+ 		<!-- one bit per varying component: -->
+ 		<reg32 offset="0" name="DISABLE"/>
+ 	</array>
+ 
+-	<reg32 offset="0x9216" name="VPC_SO_CNTL" usage="rp_blit">
++	<reg32 offset="0x9216" name="VPC_SO_MAPPING_WPTR" usage="rp_blit">
+ 		<!--
+ 			Choose which DWORD to write to. There is an array of
+ 			(4 * 64) DWORDs, dumped in the devcoredump at
+@@ -4198,7 +2046,7 @@ to upconvert to 32b float internally?
+ 			When EmitStreamVertex(N) happens, the HW goes to DWORD
+ 			64 * N and then "executes" the next 64 DWORDs.
+ 
+-			This field is auto-incremented when VPC_SO_PROG is
++			This field is auto-incremented when VPC_SO_MAPPING_PORT is
+ 			written to.
+ 		-->
+ 		<bitfield name="ADDR" low="0" high="7" type="hex"/>
+@@ -4206,7 +2054,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="RESET" pos="16" type="boolean"/>
+ 	</reg32>
+ 	<!-- special register, write multiple times to load SO program (not readable) -->
+-	<reg32 offset="0x9217" name="VPC_SO_PROG" usage="rp_blit">
++	<reg32 offset="0x9217" name="VPC_SO_MAPPING_PORT" usage="rp_blit">
+ 		<bitfield name="A_BUF" low="0" high="1" type="uint"/>
+ 		<bitfield name="A_OFF" low="2" high="10" shr="2" type="uint"/>
+ 		<bitfield name="A_EN" pos="11" type="boolean"/>
+@@ -4215,7 +2063,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="B_EN" pos="23" type="boolean"/>
+ 	</reg32>
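Together these two registers form a small write-port interface: the WPTR
register selects a DWORD address in the (4 * 64)-entry mapping RAM, and every
write to the PORT register stores one entry there and auto-increments the
address. A sketch of loading the 64-DWORD block for one stream, with
hypothetical struct ring / emit_write() helpers standing in for a real
command-stream builder:

    #include <stdint.h>

    struct ring;                                 /* opaque, hypothetical */
    void emit_write(struct ring *r, uint32_t reg, uint32_t val);

    static void upload_so_mapping(struct ring *r, unsigned stream,
                                  const uint32_t *entries, unsigned count)
    {
            /* Point the write pointer at DWORD 64 * N for stream N. */
            emit_write(r, REG_VPC_SO_MAPPING_WPTR, 64 * stream);

            /* Each PORT write stores one entry and the address
             * auto-increments, so no further WPTR writes are needed. */
            for (unsigned i = 0; i < count; i++)
                    emit_write(r, REG_VPC_SO_MAPPING_PORT, entries[i]);
    }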
+ 
+-	<reg64 offset="0x9218" name="VPC_SO_STREAM_COUNTS" type="waddress" align="32" usage="cmd"/>
++	<reg64 offset="0x9218" name="VPC_SO_QUERY_BASE" type="waddress" align="32" usage="cmd"/>
+ 
+ 	<array offset="0x921a" name="VPC_SO" stride="7" length="4" usage="cmd">
+ 		<reg64 offset="0" name="BUFFER_BASE" type="waddress" align="32"/>
+@@ -4225,14 +2073,14 @@ to upconvert to 32b float internally?
+ 		<reg64 offset="5" name="FLUSH_BASE" type="waddress" align="32"/>
+ 	</array>
+ 
+-	<reg32 offset="0x9236" name="VPC_POINT_COORD_INVERT" usage="cmd">
++	<reg32 offset="0x9236" name="VPC_REPLACE_MODE_CNTL" usage="cmd">
+ 		<bitfield name="INVERT" pos="0" type="boolean"/>
+ 	</reg32>
+ 	<!-- 0x9237-0x92ff invalid -->
+ 	<!-- always 0x0 ? -->
+ 	<reg32 offset="0x9300" name="VPC_UNKNOWN_9300" low="0" high="2" usage="cmd"/>
+ 
+-	<bitset name="a6xx_vpc_xs_pack" inline="yes">
++	<bitset name="a6xx_vpc_xs_cntl" inline="yes">
+ 		<doc>
+ 			num of varyings plus four for gl_Position (plus one if gl_PointSize)
+ 			plus # of transform-feedback (streamout) varyings if using the
+@@ -4249,11 +2097,11 @@ to upconvert to 32b float internally?
+ 			</doc>
+ 		</bitfield>
+ 	</bitset>
+-	<reg32 offset="0x9301" name="VPC_VS_PACK" type="a6xx_vpc_xs_pack" usage="rp_blit"/>
+-	<reg32 offset="0x9302" name="VPC_GS_PACK" type="a6xx_vpc_xs_pack" usage="rp_blit"/>
+-	<reg32 offset="0x9303" name="VPC_DS_PACK" type="a6xx_vpc_xs_pack" usage="rp_blit"/>
++	<reg32 offset="0x9301" name="VPC_VS_CNTL" type="a6xx_vpc_xs_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9302" name="VPC_GS_CNTL" type="a6xx_vpc_xs_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9303" name="VPC_DS_CNTL" type="a6xx_vpc_xs_cntl" usage="rp_blit"/>
+ 
+-	<reg32 offset="0x9304" name="VPC_CNTL_0" usage="rp_blit">
++	<reg32 offset="0x9304" name="VPC_PS_CNTL" usage="rp_blit">
+ 		<bitfield name="NUMNONPOSVAR" low="0" high="7" type="uint"/>
+ 		<!-- for fixed-function (i.e. no GS) gl_PrimitiveID in FS -->
+ 		<bitfield name="PRIMIDLOC" low="8" high="15" type="uint"/>
+@@ -4272,7 +2120,7 @@ to upconvert to 32b float internally?
+ 		</bitfield>
+ 	</reg32>
+ 
+-	<reg32 offset="0x9305" name="VPC_SO_STREAM_CNTL" usage="rp_blit">
++	<reg32 offset="0x9305" name="VPC_SO_CNTL" usage="rp_blit">
+ 		<!--
+ 		It's offset by 1, and 0 means "disabled"
+ 		-->
+@@ -4282,19 +2130,19 @@ to upconvert to 32b float internally?
+ 		<bitfield name="BUF3_STREAM" low="9" high="11" type="uint"/>
+ 		<bitfield name="STREAM_ENABLE" low="15" high="18" type="hex"/>
+ 	</reg32>
+-	<reg32 offset="0x9306" name="VPC_SO_DISABLE" usage="rp_blit">
++	<reg32 offset="0x9306" name="VPC_SO_OVERRIDE" usage="rp_blit">
+ 		<bitfield name="DISABLE" pos="0" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0x9307" name="VPC_POLYGON_MODE2" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0x9307" name="VPC_PS_RAST_CNTL" variants="A6XX-" usage="rp_blit"> <!-- A702 + A7xx -->
+ 		<bitfield name="MODE" low="0" high="1" type="a6xx_polygon_mode"/>
+ 	</reg32>
+-	<reg32 offset="0x9308" name="VPC_ATTR_BUF_SIZE_GMEM" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0x9308" name="VPC_ATTR_BUF_GMEM_SIZE" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="SIZE_GMEM" low="0" high="31"/>
+ 	</reg32>
+-	<reg32 offset="0x9309" name="VPC_ATTR_BUF_BASE_GMEM" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0x9309" name="VPC_ATTR_BUF_GMEM_BASE" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="BASE_GMEM" low="0" high="31"/>
+ 	</reg32>
+-	<reg32 offset="0x9b09" name="PC_ATTR_BUF_SIZE_GMEM" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0x9b09" name="PC_ATTR_BUF_GMEM_SIZE" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="SIZE_GMEM" low="0" high="31"/>
+ 	</reg32>
+ 
+@@ -4311,15 +2159,15 @@ to upconvert to 32b float internally?
+ 	<!-- TODO: regs from 0x9624-0x963a -->
+ 	<!-- 0x963b-0x97ff invalid -->
+ 
+-	<reg32 offset="0x9800" name="PC_TESS_NUM_VERTEX" low="0" high="5" type="uint" usage="rp_blit"/>
++	<reg32 offset="0x9800" name="PC_HS_PARAM_0" low="0" high="5" type="uint" usage="rp_blit"/>
+ 
+ 	<!-- always 0x0 ? -->
+-	<reg32 offset="0x9801" name="PC_HS_INPUT_SIZE" usage="rp_blit">
++	<reg32 offset="0x9801" name="PC_HS_PARAM_1" usage="rp_blit">
+ 		<bitfield name="SIZE" low="0" high="10" type="uint"/>
+ 		<bitfield name="UNK13" pos="13"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x9802" name="PC_TESS_CNTL" usage="rp_blit">
++	<reg32 offset="0x9802" name="PC_DS_PARAM" usage="rp_blit">
+ 		<bitfield name="SPACING" low="0" high="1" type="a6xx_tess_spacing"/>
+ 		<bitfield name="OUTPUT" low="2" high="3" type="a6xx_tess_output"/>
+ 	</reg32>
+@@ -4334,7 +2182,7 @@ to upconvert to 32b float internally?
+ 	</reg32>
+ 
+ 	<!-- New in a6xx gen3+ -->
+-	<reg32 offset="0x9808" name="PC_SO_STREAM_CNTL" usage="rp_blit">
++	<reg32 offset="0x9808" name="PC_DGEN_SO_CNTL" usage="rp_blit">
+ 		<bitfield name="STREAM_ENABLE" low="15" high="18" type="hex"/>
+ 	</reg32>
+ 
+@@ -4344,15 +2192,15 @@ to upconvert to 32b float internally?
+ 	<!-- 0x980b-0x983f invalid -->
+ 
+ 	<!-- 0x9840 - 0x9842 are not readable -->
+-	<reg32 offset="0x9840" name="PC_DRAW_CMD">
++	<reg32 offset="0x9840" name="PC_DRAW_INITIATOR">
+ 		<bitfield name="STATE_ID" low="0" high="7"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x9841" name="PC_DISPATCH_CMD">
++	<reg32 offset="0x9841" name="PC_KERNEL_INITIATOR">
+ 		<bitfield name="STATE_ID" low="0" high="7"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x9842" name="PC_EVENT_CMD">
++	<reg32 offset="0x9842" name="PC_EVENT_INITIATOR">
+ 		<!-- I think only the low bit is actually used? -->
+ 		<bitfield name="STATE_ID" low="16" high="23"/>
+ 		<bitfield name="EVENT" low="0" high="6" type="vgt_event_type"/>
+@@ -4367,27 +2215,27 @@ to upconvert to 32b float internally?
+ 
+ 	<!-- 0x9843-0x997f invalid -->
+ 
+-	<reg32 offset="0x9981" name="PC_POLYGON_MODE" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0x9981" name="PC_DGEN_RAST_CNTL" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="MODE" low="0" high="1" type="a6xx_polygon_mode"/>
+ 	</reg32>
+-	<reg32 offset="0x9809" name="PC_POLYGON_MODE" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0x9809" name="PC_DGEN_RAST_CNTL" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="MODE" low="0" high="1" type="a6xx_polygon_mode"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x9980" name="PC_RASTER_CNTL" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0x9980" name="VPC_RAST_STREAM_CNTL" variants="A6XX" usage="rp_blit">
+ 		<!-- which stream to send to GRAS -->
+ 		<bitfield name="STREAM" low="0" high="1" type="uint"/>
+ 		<!-- discard primitives before rasterization -->
+ 		<bitfield name="DISCARD" pos="2" type="boolean"/>
+ 	</reg32>
+-	<!-- VPC_RASTER_CNTL -->
+-	<reg32 offset="0x9107" name="PC_RASTER_CNTL" variants="A7XX-" usage="rp_blit">
++	<!-- VPC_RAST_STREAM_CNTL -->
++	<reg32 offset="0x9107" name="VPC_RAST_STREAM_CNTL" variants="A7XX-" usage="rp_blit">
+ 		<!-- which stream to send to GRAS -->
+ 		<bitfield name="STREAM" low="0" high="1" type="uint"/>
+ 		<!-- discard primitives before rasterization -->
+ 		<bitfield name="DISCARD" pos="2" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0x9317" name="PC_RASTER_CNTL_V2" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0x9317" name="VPC_RAST_STREAM_CNTL_V2" variants="A7XX-" usage="rp_blit">
+ 		<!-- which stream to send to GRAS -->
+ 		<bitfield name="STREAM" low="0" high="1" type="uint"/>
+ 		<!-- discard primitives before rasterization -->
+@@ -4397,17 +2245,17 @@ to upconvert to 32b float internally?
+ 	<!-- Both are a750+.
+ 	     Probably needed to correctly overlap execution of several draws.
+ 	-->
+-	<reg32 offset="0x9885" name="PC_TESS_PARAM_SIZE" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0x9885" name="PC_HS_BUFFER_SIZE" variants="A7XX-" usage="cmd"/>
+ 	<!-- The blob adds a bit more space ({0x10, 0x20, 0x30, or 0x40} bytes), but the
+ 	     meaning of this additional space is not known.
+ 	-->
+-	<reg32 offset="0x9886" name="PC_TESS_FACTOR_SIZE" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0x9886" name="PC_TF_BUFFER_SIZE" variants="A7XX-" usage="cmd"/>
+ 
+ 	<!-- 0x9982-0x9aff invalid -->
+ 
+-	<reg32 offset="0x9b00" name="PC_PRIMITIVE_CNTL_0" type="a6xx_primitive_cntl_0" usage="rp_blit"/>
++	<reg32 offset="0x9b00" name="PC_CNTL" type="a6xx_pc_cntl" usage="rp_blit"/>
+ 
+-	<bitset name="a6xx_xs_out_cntl" inline="yes">
++	<bitset name="a6xx_pc_xs_cntl" inline="yes">
+ 		<doc>
+ 			num of varyings plus four for gl_Position (plus one if gl_PointSize)
+ 			plus # of transform-feedback (streamout) varyings if using the
+@@ -4417,19 +2265,19 @@ to upconvert to 32b float internally?
+ 		<bitfield name="PSIZE" pos="8" type="boolean"/>
+ 		<bitfield name="LAYER" pos="9" type="boolean"/>
+ 		<bitfield name="VIEW" pos="10" type="boolean"/>
+-		<!-- note: PC_VS_OUT_CNTL doesn't have the PRIMITIVE_ID bit -->
++		<!-- note: PC_VS_CNTL doesn't have the PRIMITIVE_ID bit -->
+ 		<bitfield name="PRIMITIVE_ID" pos="11" type="boolean"/>
+ 		<bitfield name="CLIP_MASK" low="16" high="23" type="uint"/>
+ 		<bitfield name="SHADINGRATE" pos="24" type="boolean" variants="A7XX-"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0x9b01" name="PC_VS_OUT_CNTL" type="a6xx_xs_out_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9b02" name="PC_GS_OUT_CNTL" type="a6xx_xs_out_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9b01" name="PC_VS_CNTL" type="a6xx_pc_xs_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9b02" name="PC_GS_CNTL" type="a6xx_pc_xs_cntl" usage="rp_blit"/>
+ 	<!-- since HS can't output anything, only PRIMITIVE_ID is valid -->
+-	<reg32 offset="0x9b03" name="PC_HS_OUT_CNTL" type="a6xx_xs_out_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9b04" name="PC_DS_OUT_CNTL" type="a6xx_xs_out_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9b03" name="PC_HS_CNTL" type="a6xx_pc_xs_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9b04" name="PC_DS_CNTL" type="a6xx_pc_xs_cntl" usage="rp_blit"/>
+ 
+-	<reg32 offset="0x9b05" name="PC_PRIMITIVE_CNTL_5" type="a6xx_primitive_cntl_5" usage="rp_blit"/>
++	<reg32 offset="0x9b05" name="PC_GS_PARAM_0" type="a6xx_gs_param_0" usage="rp_blit"/>
+ 
+ 	<reg32 offset="0x9b06" name="PC_PRIMITIVE_CNTL_6" variants="A6XX" usage="rp_blit">
+ 		<doc>
+@@ -4438,9 +2286,9 @@ to upconvert to 32b float internally?
+ 		<bitfield name="STRIDE_IN_VPC" low="0" high="10" type="uint"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x9b07" name="PC_MULTIVIEW_CNTL" type="a6xx_multiview_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9b07" name="PC_STEREO_RENDERING_CNTL" type="a6xx_stereo_rendering_cntl" usage="rp_blit"/>
+ 	<!-- mask of enabled views, doesn't exist on A630 -->
+-	<reg32 offset="0x9b08" name="PC_MULTIVIEW_MASK" type="hex" low="0" high="15" usage="rp_blit"/>
++	<reg32 offset="0x9b08" name="PC_STEREO_RENDERING_VIEWMASK" type="hex" low="0" high="15" usage="rp_blit"/>
+ 	<!-- 0x9b09-0x9bff invalid -->
+ 	<reg32 offset="0x9c00" name="PC_2D_EVENT_CMD">
+ 		<!-- special register (but note first 8 bits can be written/read) -->
+@@ -4451,31 +2299,31 @@ to upconvert to 32b float internally?
+ 	<!-- TODO: 0x9e00-0xa000 range incomplete -->
+ 	<reg32 offset="0x9e00" name="PC_DBG_ECO_CNTL"/>
+ 	<reg32 offset="0x9e01" name="PC_ADDR_MODE_CNTL" type="a5xx_address_mode"/>
+-	<reg64 offset="0x9e04" name="PC_DRAW_INDX_BASE"/>
+-	<reg32 offset="0x9e06" name="PC_DRAW_FIRST_INDX" type="uint"/>
+-	<reg32 offset="0x9e07" name="PC_DRAW_MAX_INDICES" type="uint"/>
+-	<reg64 offset="0x9e08" name="PC_TESSFACTOR_ADDR" variants="A6XX" type="waddress" align="32" usage="cmd"/>
+-	<reg64 offset="0x9810" name="PC_TESSFACTOR_ADDR" variants="A7XX-" type="waddress" align="32" usage="cmd"/>
++	<reg64 offset="0x9e04" name="PC_DMA_BASE"/>
++	<reg32 offset="0x9e06" name="PC_DMA_OFFSET" type="uint"/>
++	<reg32 offset="0x9e07" name="PC_DMA_SIZE" type="uint"/>
++	<reg64 offset="0x9e08" name="PC_TESS_BASE" variants="A6XX" type="waddress" align="32" usage="cmd"/>
++	<reg64 offset="0x9810" name="PC_TESS_BASE" variants="A7XX-" type="waddress" align="32" usage="cmd"/>
+ 
+-	<reg32 offset="0x9e0b" name="PC_DRAW_INITIATOR" type="vgt_draw_initiator_a4xx">
++	<reg32 offset="0x9e0b" name="PC_DRAWCALL_CNTL" type="vgt_draw_initiator_a4xx">
+ 		<doc>
+ 			Possibly not really "initiating" the draw but the layout is similar
+ 			to VGT_DRAW_INITIATOR on older gens
+ 		</doc>
+ 	</reg32>
+-	<reg32 offset="0x9e0c" name="PC_DRAW_NUM_INSTANCES" type="uint"/>
+-	<reg32 offset="0x9e0d" name="PC_DRAW_NUM_INDICES" type="uint"/>
++	<reg32 offset="0x9e0c" name="PC_DRAWCALL_INSTANCE_NUM" type="uint"/>
++	<reg32 offset="0x9e0d" name="PC_DRAWCALL_SIZE" type="uint"/>
+ 
+ 	<!-- These match the contents of CP_SET_BIN_DATA (not written directly) -->
+-	<reg32 offset="0x9e11" name="PC_VSTREAM_CONTROL">
++	<reg32 offset="0x9e11" name="PC_VIS_STREAM_CNTL">
+ 		<bitfield name="UNK0" low="0" high="15"/>
+ 		<bitfield name="VSC_SIZE" low="16" high="21" type="uint"/>
+ 		<bitfield name="VSC_N" low="22" high="26" type="uint"/>
+ 	</reg32>
+-	<reg64 offset="0x9e12" name="PC_BIN_PRIM_STRM" type="waddress" align="32"/>
+-	<reg64 offset="0x9e14" name="PC_BIN_DRAW_STRM" type="waddress" align="32"/>
++	<reg64 offset="0x9e12" name="PC_PVIS_STREAM_BIN_BASE" type="waddress" align="32"/>
++	<reg64 offset="0x9e14" name="PC_DVIS_STREAM_BIN_BASE" type="waddress" align="32"/>
+ 
+-	<reg32 offset="0x9e1c" name="PC_VISIBILITY_OVERRIDE">
++	<reg32 offset="0x9e1c" name="PC_DRAWCALL_CNTL_OVERRIDE">
+ 		<doc>Written by CP_SET_VISIBILITY_OVERRIDE handler</doc>
+ 		<bitfield name="OVERRIDE" pos="0" type="boolean"/>
+ 	</reg32>
+@@ -4488,18 +2336,18 @@ to upconvert to 32b float internally?
+ 	<!-- always 0x0 -->
+ 	<reg32 offset="0x9e72" name="PC_UNKNOWN_9E72" usage="cmd"/>
+ 
+-	<reg32 offset="0xa000" name="VFD_CONTROL_0" usage="rp_blit">
++	<reg32 offset="0xa000" name="VFD_CNTL_0" usage="rp_blit">
+ 		<bitfield name="FETCH_CNT" low="0" high="5" type="uint"/>
+ 		<bitfield name="DECODE_CNT" low="8" high="13" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xa001" name="VFD_CONTROL_1" usage="rp_blit">
++	<reg32 offset="0xa001" name="VFD_CNTL_1" usage="rp_blit">
+ 		<bitfield name="REGID4VTX" low="0" high="7" type="a3xx_regid"/>
+ 		<bitfield name="REGID4INST" low="8" high="15" type="a3xx_regid"/>
+ 		<bitfield name="REGID4PRIMID" low="16" high="23" type="a3xx_regid"/>
+ 		<!-- only used for VS in non-multi-position-output case -->
+ 		<bitfield name="REGID4VIEWID" low="24" high="31" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xa002" name="VFD_CONTROL_2" usage="rp_blit">
++	<reg32 offset="0xa002" name="VFD_CNTL_2" usage="rp_blit">
+ 		<bitfield name="REGID_HSRELPATCHID" low="0" high="7" type="a3xx_regid">
+ 			<doc>
+ 				This is the ID of the current patch within the
+@@ -4512,32 +2360,32 @@ to upconvert to 32b float internally?
+ 		</bitfield>
+ 		<bitfield name="REGID_INVOCATIONID" low="8" high="15" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xa003" name="VFD_CONTROL_3" usage="rp_blit">
++	<reg32 offset="0xa003" name="VFD_CNTL_3" usage="rp_blit">
+ 		<bitfield name="REGID_DSPRIMID" low="0" high="7" type="a3xx_regid"/>
+ 		<bitfield name="REGID_DSRELPATCHID" low="8" high="15" type="a3xx_regid"/>
+ 		<bitfield name="REGID_TESSX" low="16" high="23" type="a3xx_regid"/>
+ 		<bitfield name="REGID_TESSY" low="24" high="31" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xa004" name="VFD_CONTROL_4" usage="rp_blit">
++	<reg32 offset="0xa004" name="VFD_CNTL_4" usage="rp_blit">
+ 		<bitfield name="UNK0" low="0" high="7" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xa005" name="VFD_CONTROL_5" usage="rp_blit">
++	<reg32 offset="0xa005" name="VFD_CNTL_5" usage="rp_blit">
+ 		<bitfield name="REGID_GSHEADER" low="0" high="7" type="a3xx_regid"/>
+ 		<bitfield name="UNK8" low="8" high="15" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xa006" name="VFD_CONTROL_6" usage="rp_blit">
++	<reg32 offset="0xa006" name="VFD_CNTL_6" usage="rp_blit">
+ 		<!--
+ 			True if gl_PrimitiveID is read via the FS
+ 		-->
+ 		<bitfield name="PRIMID4PSEN" pos="0" type="boolean"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xa007" name="VFD_MODE_CNTL" usage="cmd">
++	<reg32 offset="0xa007" name="VFD_RENDER_MODE" usage="cmd">
+ 		<bitfield name="RENDER_MODE" low="0" high="2" type="a6xx_render_mode"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xa008" name="VFD_MULTIVIEW_CNTL" type="a6xx_multiview_cntl" usage="rp_blit"/>
+-	<reg32 offset="0xa009" name="VFD_ADD_OFFSET" usage="cmd">
++	<reg32 offset="0xa008" name="VFD_STEREO_RENDERING_CNTL" type="a6xx_stereo_rendering_cntl" usage="rp_blit"/>
++	<reg32 offset="0xa009" name="VFD_MODE_CNTL" usage="cmd">
+ 		<!-- add VFD_INDEX_OFFSET to REGID4VTX -->
+ 		<bitfield name="VERTEX" pos="0" type="boolean"/>
+ 		<!-- add VFD_INSTANCE_START_OFFSET to REGID4INST -->
+@@ -4546,14 +2394,14 @@ to upconvert to 32b float internally?
+ 
+ 	<reg32 offset="0xa00e" name="VFD_INDEX_OFFSET" usage="rp_blit"/>
+ 	<reg32 offset="0xa00f" name="VFD_INSTANCE_START_OFFSET" usage="rp_blit"/>
+-	<array offset="0xa010" name="VFD_FETCH" stride="4" length="32" usage="rp_blit">
++	<array offset="0xa010" name="VFD_VERTEX_BUFFER" stride="4" length="32" usage="rp_blit">
+ 		<reg64 offset="0x0" name="BASE" type="address" align="1"/>
+ 		<reg32 offset="0x2" name="SIZE" type="uint"/>
+ 		<reg32 offset="0x3" name="STRIDE" low="0" high="11" type="uint"/>
+ 	</array>
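The VFD_VERTEX_BUFFER array above supplies up to 32 buffer descriptors (base
address, size, stride); the VFD_FETCH_INSTR array that follows refers back to
it by IDX plus a byte OFFSET within the vertex. A sketch of binding one
attribute, where the REG_* macros and emit helpers are assumed stand-ins for
generated register headers rather than real driver API:

    #include <stdint.h>

    struct ring;                                 /* opaque, hypothetical */
    void emit_write(struct ring *r, uint32_t reg, uint32_t val);
    void emit_write64(struct ring *r, uint32_t reg, uint64_t val);

    static void emit_vertex_attr(struct ring *r, unsigned slot,
                                 uint64_t iova, uint32_t size,
                                 uint32_t stride, uint32_t attr_offset)
    {
            /* VFD_VERTEX_BUFFER[slot]: where the vertex data lives. */
            emit_write64(r, REG_VFD_VERTEX_BUFFER_BASE(slot), iova);
            emit_write(r, REG_VFD_VERTEX_BUFFER_SIZE(slot), size);
            emit_write(r, REG_VFD_VERTEX_BUFFER_STRIDE(slot), stride);

            /* VFD_FETCH_INSTR[slot].INSTR: IDX (bits 0..4) picks the buffer
             * entry above, OFFSET (bits 5..16) is the attribute's byte
             * offset within a vertex. */
            emit_write(r, REG_VFD_FETCH_INSTR_INSTR(slot),
                       (slot & 0x1f) | ((attr_offset & 0xfff) << 5));
    }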
+-	<array offset="0xa090" name="VFD_DECODE" stride="2" length="32" usage="rp_blit">
++	<array offset="0xa090" name="VFD_FETCH_INSTR" stride="2" length="32" usage="rp_blit">
+ 		<reg32 offset="0x0" name="INSTR">
+-			<!-- IDX and byte OFFSET into VFD_FETCH -->
++			<!-- IDX and byte OFFSET into VFD_VERTEX_BUFFER -->
+ 			<bitfield name="IDX" low="0" high="4" type="uint"/>
+ 			<bitfield name="OFFSET" low="5" high="16"/>
+ 			<bitfield name="INSTANCED" pos="17" type="boolean"/>
+@@ -4573,7 +2421,7 @@ to upconvert to 32b float internally?
+ 
+ 	<reg32 offset="0xa0f8" name="VFD_POWER_CNTL" low="0" high="2" usage="rp_blit"/>
+ 
+-	<reg32 offset="0xa600" name="VFD_UNKNOWN_A600" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xa600" name="VFD_DBG_ECO_CNTL" variants="A7XX-" usage="cmd"/>
+ 
+ 	<reg32 offset="0xa601" name="VFD_ADDR_MODE_CNTL" type="a5xx_address_mode"/>
+ 	<array offset="0xa610" name="VFD_PERFCTR_VFD_SEL" stride="1" length="8" variants="A6XX"/>
+@@ -4588,7 +2436,7 @@ to upconvert to 32b float internally?
+ 		<value value="1" name="THREAD128"/>
+ 	</enum>
+ 
+-	<bitset name="a6xx_sp_xs_ctrl_reg0" inline="yes">
++	<bitset name="a6xx_sp_xs_cntl_0" inline="yes">
+ 		<!-- if set to SINGLE, only use 1 concurrent wave on each SP -->
+ 		<bitfield name="THREADMODE" pos="0" type="a3xx_threadmode"/>
+ 		<!--
+@@ -4620,7 +2468,7 @@ to upconvert to 32b float internally?
+ 		-->
+ 		<bitfield name="BINDLESS_TEX" pos="0" type="boolean"/>
+ 		<bitfield name="BINDLESS_SAMP" pos="1" type="boolean"/>
+-		<bitfield name="BINDLESS_IBO" pos="2" type="boolean"/>
++		<bitfield name="BINDLESS_UAV" pos="2" type="boolean"/>
+ 		<bitfield name="BINDLESS_UBO" pos="3" type="boolean"/>
+ 
+ 		<bitfield name="ENABLED" pos="8" type="boolean"/>
+@@ -4630,17 +2478,17 @@ to upconvert to 32b float internally?
+ 		 -->
+ 		<bitfield name="NTEX" low="9" high="16" type="uint"/>
+ 		<bitfield name="NSAMP" low="17" high="21" type="uint"/>
+-		<bitfield name="NIBO" low="22" high="28" type="uint"/>
++		<bitfield name="NUAV" low="22" high="28" type="uint"/>
+ 	</bitset>
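A sketch of packing this config word from per-stage resource counts; the
shifts and masks follow the bitfields above, while the helper itself and its
inputs are assumptions:

    #include <stdint.h>

    static uint32_t sp_xs_config(unsigned ntex, unsigned nsamp, unsigned nuav)
    {
            uint32_t v = 0;

            if (ntex || nsamp || nuav)
                    v |= 1u << 8;                /* ENABLED */
            v |= (ntex  & 0xff) << 9;            /* NTEX,  bits 9..16  */
            v |= (nsamp & 0x1f) << 17;           /* NSAMP, bits 17..21 */
            v |= (nuav  & 0x7f) << 22;           /* NUAV,  bits 22..28 */
            return v;
    }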
+ 
+-	<bitset name="a6xx_sp_xs_prim_cntl" inline="yes">
++	<bitset name="a6xx_sp_xs_output_cntl" inline="yes">
+ 		<!-- # of VS outputs including pos/psize -->
+ 		<bitfield name="OUT" low="0" high="5" type="uint"/>
+ 		<!-- FLAGS_REGID only for GS -->
+ 		<bitfield name="FLAGS_REGID" low="6" high="13" type="a3xx_regid"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0xa800" name="SP_VS_CTRL_REG0" type="a6xx_sp_xs_ctrl_reg0" usage="rp_blit">
++	<reg32 offset="0xa800" name="SP_VS_CNTL_0" type="a6xx_sp_xs_cntl_0" usage="rp_blit">
+ 		<!--
+ 		This field actually controls all geometry stages. TCS, TES, and
+ 		GS must have the same mergedregs setting as VS.
+@@ -4665,10 +2513,10 @@ to upconvert to 32b float internally?
+ 	</reg32>
+ 	<!-- bitmask of true/false conditions for VS brac.N instructions,
+ 	     bit N corresponds to brac.N -->
+-	<reg32 offset="0xa801" name="SP_VS_BRANCH_COND" type="hex"/>
++	<reg32 offset="0xa801" name="SP_VS_BOOLEAN_CF_MASK" type="hex"/>
+ 	<!-- # of VS outputs including pos/psize -->
+-	<reg32 offset="0xa802" name="SP_VS_PRIMITIVE_CNTL" type="a6xx_sp_xs_prim_cntl" usage="rp_blit"/>
+-	<array offset="0xa803" name="SP_VS_OUT" stride="1" length="16" usage="rp_blit">
++	<reg32 offset="0xa802" name="SP_VS_OUTPUT_CNTL" type="a6xx_sp_xs_output_cntl" usage="rp_blit"/>
++	<array offset="0xa803" name="SP_VS_OUTPUT" stride="1" length="16" usage="rp_blit">
+ 		<reg32 offset="0x0" name="REG">
+ 			<bitfield name="A_REGID" low="0" high="7" type="a3xx_regid"/>
+ 			<bitfield name="A_COMPMASK" low="8" high="11" type="hex"/>
+@@ -4678,12 +2526,12 @@ to upconvert to 32b float internally?
+ 	</array>
+ 	<!--
+ 	Starting with a5xx, position/psize outputs from shader end up in the
+-	SP_VS_OUT map, with highest OUTLOCn position.  (Generally they are
++	SP_VS_OUTPUT map, with highest OUTLOCn position.  (Generally they are
+ 	the last entries too, except when gl_PointCoord is used, blob inserts
+ 	an extra varying after, but with a lower OUTLOC position.  If present,
+ 	psize is last, preceded by position.)
+ 	 -->
+-	<array offset="0xa813" name="SP_VS_VPC_DST" stride="1" length="8" usage="rp_blit">
++	<array offset="0xa813" name="SP_VS_VPC_DEST" stride="1" length="8" usage="rp_blit">
+ 		<reg32 offset="0x0" name="REG">
+ 			<bitfield name="OUTLOC0" low="0" high="7" type="uint"/>
+ 			<bitfield name="OUTLOC1" low="8" high="15" type="uint"/>
+@@ -4752,7 +2600,7 @@ to upconvert to 32b float internally?
+ 		</bitfield>
+ 	</bitset>
+ 
+-	<bitset name="a6xx_sp_xs_pvt_mem_hw_stack_offset" inline="yes">
++	<bitset name="a6xx_sp_xs_pvt_mem_stack_offset" inline="yes">
+ 		<doc>
+ 			This seems to be the equivalent of HWSTACKOFFSET in
+ 			a3xx. The ldp/stp offset formula above isn't affected by
+@@ -4763,18 +2611,18 @@ to upconvert to 32b float internally?
+ 		<bitfield name="OFFSET" low="0" high="18" shr="11"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0xa81b" name="SP_VS_OBJ_FIRST_EXEC_OFFSET" type="uint" usage="rp_blit"/>
+-	<reg64 offset="0xa81c" name="SP_VS_OBJ_START" type="address" align="32" usage="rp_blit"/>
++	<reg32 offset="0xa81b" name="SP_VS_PROGRAM_COUNTER_OFFSET" type="uint" usage="rp_blit"/>
++	<reg64 offset="0xa81c" name="SP_VS_BASE" type="address" align="32" usage="rp_blit"/>
+ 	<reg32 offset="0xa81e" name="SP_VS_PVT_MEM_PARAM" type="a6xx_sp_xs_pvt_mem_param" usage="rp_blit"/>
+-	<reg64 offset="0xa81f" name="SP_VS_PVT_MEM_ADDR" type="waddress" align="32" usage="rp_blit"/>
++	<reg64 offset="0xa81f" name="SP_VS_PVT_MEM_BASE" type="waddress" align="32" usage="rp_blit"/>
+ 	<reg32 offset="0xa821" name="SP_VS_PVT_MEM_SIZE" type="a6xx_sp_xs_pvt_mem_size" usage="rp_blit"/>
+-	<reg32 offset="0xa822" name="SP_VS_TEX_COUNT" low="0" high="7" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa822" name="SP_VS_TSIZE" low="0" high="7" type="uint" usage="rp_blit"/>
+ 	<reg32 offset="0xa823" name="SP_VS_CONFIG" type="a6xx_sp_xs_config" usage="rp_blit"/>
+-	<reg32 offset="0xa824" name="SP_VS_INSTRLEN" low="0" high="27" type="uint" usage="rp_blit"/>
+-	<reg32 offset="0xa825" name="SP_VS_PVT_MEM_HW_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_hw_stack_offset" usage="rp_blit"/>
+-	<reg32 offset="0xa82d" name="SP_VS_VGPR_CONFIG" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xa824" name="SP_VS_INSTR_SIZE" low="0" high="27" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa825" name="SP_VS_PVT_MEM_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_stack_offset" usage="rp_blit"/>
++	<reg32 offset="0xa82d" name="SP_VS_VGS_CNTL" variants="A7XX-" usage="cmd"/>
+ 
+-	<reg32 offset="0xa830" name="SP_HS_CTRL_REG0" type="a6xx_sp_xs_ctrl_reg0" usage="rp_blit">
++	<reg32 offset="0xa830" name="SP_HS_CNTL_0" type="a6xx_sp_xs_cntl_0" usage="rp_blit">
+ 		<!-- There is no mergedregs bit, that comes from the VS. -->
+ 		<bitfield name="EARLYPREAMBLE" pos="20" type="boolean"/>
+ 	</reg32>
+@@ -4782,32 +2630,32 @@ to upconvert to 32b float internally?
+ 	Total size of local storage in dwords divided by the wave size.
+ 	The maximum value is 64. With the wave size being always 64 for HS,
+ 	the maximum size of local storage should be:
+-	 64 (wavesize) * 64 (SP_HS_WAVE_INPUT_SIZE) * 4 = 16k
++	 64 (wavesize) * 64 (SP_HS_CNTL_1) * 4 = 16k
+ 	-->
+-	<reg32 offset="0xa831" name="SP_HS_WAVE_INPUT_SIZE" low="0" high="7" type="uint" usage="rp_blit"/>
+-	<reg32 offset="0xa832" name="SP_HS_BRANCH_COND" type="hex" usage="rp_blit"/>
++	<reg32 offset="0xa831" name="SP_HS_CNTL_1" low="0" high="7" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa832" name="SP_HS_BOOLEAN_CF_MASK" type="hex" usage="rp_blit"/>
+ 
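The comment above boils down to: the register holds the local-storage size in
dwords divided by the 64-wide wave, capped at 64 (which gives the 16k
maximum). As a small sketch (the helper name and round-up policy are mine):

    #include <stdint.h>

    static uint32_t hs_cntl_1(uint32_t local_storage_bytes)
    {
            /* dwords per wave, rounded up: bytes / 4 / 64 */
            uint32_t v = (local_storage_bytes / 4 + 63) / 64;

            return v > 64 ? 64 : v;              /* 64 * 64 * 4 = 16k max */
    }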
+ 	<!-- TODO: exact same layout as 0xa81b-0xa825 -->
+-	<reg32 offset="0xa833" name="SP_HS_OBJ_FIRST_EXEC_OFFSET" type="uint" usage="rp_blit"/>
+-	<reg64 offset="0xa834" name="SP_HS_OBJ_START" type="address" align="32" usage="rp_blit"/>
++	<reg32 offset="0xa833" name="SP_HS_PROGRAM_COUNTER_OFFSET" type="uint" usage="rp_blit"/>
++	<reg64 offset="0xa834" name="SP_HS_BASE" type="address" align="32" usage="rp_blit"/>
+ 	<reg32 offset="0xa836" name="SP_HS_PVT_MEM_PARAM" type="a6xx_sp_xs_pvt_mem_param" usage="rp_blit"/>
+-	<reg64 offset="0xa837" name="SP_HS_PVT_MEM_ADDR" type="waddress" align="32" usage="rp_blit"/>
++	<reg64 offset="0xa837" name="SP_HS_PVT_MEM_BASE" type="waddress" align="32" usage="rp_blit"/>
+ 	<reg32 offset="0xa839" name="SP_HS_PVT_MEM_SIZE" type="a6xx_sp_xs_pvt_mem_size" usage="rp_blit"/>
+-	<reg32 offset="0xa83a" name="SP_HS_TEX_COUNT" low="0" high="7" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa83a" name="SP_HS_TSIZE" low="0" high="7" type="uint" usage="rp_blit"/>
+ 	<reg32 offset="0xa83b" name="SP_HS_CONFIG" type="a6xx_sp_xs_config" usage="rp_blit"/>
+-	<reg32 offset="0xa83c" name="SP_HS_INSTRLEN" low="0" high="27" type="uint" usage="rp_blit"/>
+-	<reg32 offset="0xa83d" name="SP_HS_PVT_MEM_HW_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_hw_stack_offset" usage="rp_blit"/>
+-	<reg32 offset="0xa82f" name="SP_HS_VGPR_CONFIG" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xa83c" name="SP_HS_INSTR_SIZE" low="0" high="27" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa83d" name="SP_HS_PVT_MEM_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_stack_offset" usage="rp_blit"/>
++	<reg32 offset="0xa82f" name="SP_HS_VGS_CNTL" variants="A7XX-" usage="cmd"/>
+ 
+-	<reg32 offset="0xa840" name="SP_DS_CTRL_REG0" type="a6xx_sp_xs_ctrl_reg0" usage="rp_blit">
++	<reg32 offset="0xa840" name="SP_DS_CNTL_0" type="a6xx_sp_xs_cntl_0" usage="rp_blit">
+ 		<!-- There is no mergedregs bit, that comes from the VS. -->
+ 		<bitfield name="EARLYPREAMBLE" pos="20" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0xa841" name="SP_DS_BRANCH_COND" type="hex"/>
++	<reg32 offset="0xa841" name="SP_DS_BOOLEAN_CF_MASK" type="hex"/>
+ 
+ 	<!-- TODO: exact same layout as 0xa802-0xa81a -->
+-	<reg32 offset="0xa842" name="SP_DS_PRIMITIVE_CNTL" type="a6xx_sp_xs_prim_cntl" usage="rp_blit"/>
+-	<array offset="0xa843" name="SP_DS_OUT" stride="1" length="16" usage="rp_blit">
++	<reg32 offset="0xa842" name="SP_DS_OUTPUT_CNTL" type="a6xx_sp_xs_output_cntl" usage="rp_blit"/>
++	<array offset="0xa843" name="SP_DS_OUTPUT" stride="1" length="16" usage="rp_blit">
+ 		<reg32 offset="0x0" name="REG">
+ 			<bitfield name="A_REGID" low="0" high="7" type="a3xx_regid"/>
+ 			<bitfield name="A_COMPMASK" low="8" high="11" type="hex"/>
+@@ -4815,7 +2663,7 @@ to upconvert to 32b float internally?
+ 			<bitfield name="B_COMPMASK" low="24" high="27" type="hex"/>
+ 		</reg32>
+ 	</array>
+-	<array offset="0xa853" name="SP_DS_VPC_DST" stride="1" length="8" usage="rp_blit">
++	<array offset="0xa853" name="SP_DS_VPC_DEST" stride="1" length="8" usage="rp_blit">
+ 		<reg32 offset="0x0" name="REG">
+ 			<bitfield name="OUTLOC0" low="0" high="7" type="uint"/>
+ 			<bitfield name="OUTLOC1" low="8" high="15" type="uint"/>
+@@ -4825,22 +2673,22 @@ to upconvert to 32b float internally?
+ 	</array>
+ 
+ 	<!-- TODO: exact same layout as 0xa81b-0xa825 -->
+-	<reg32 offset="0xa85b" name="SP_DS_OBJ_FIRST_EXEC_OFFSET" type="uint" usage="rp_blit"/>
+-	<reg64 offset="0xa85c" name="SP_DS_OBJ_START" type="address" align="32" usage="rp_blit"/>
++	<reg32 offset="0xa85b" name="SP_DS_PROGRAM_COUNTER_OFFSET" type="uint" usage="rp_blit"/>
++	<reg64 offset="0xa85c" name="SP_DS_BASE" type="address" align="32" usage="rp_blit"/>
+ 	<reg32 offset="0xa85e" name="SP_DS_PVT_MEM_PARAM" type="a6xx_sp_xs_pvt_mem_param" usage="rp_blit"/>
+-	<reg64 offset="0xa85f" name="SP_DS_PVT_MEM_ADDR" type="waddress" align="32" usage="rp_blit"/>
++	<reg64 offset="0xa85f" name="SP_DS_PVT_MEM_BASE" type="waddress" align="32" usage="rp_blit"/>
+ 	<reg32 offset="0xa861" name="SP_DS_PVT_MEM_SIZE" type="a6xx_sp_xs_pvt_mem_size" usage="rp_blit"/>
+-	<reg32 offset="0xa862" name="SP_DS_TEX_COUNT" low="0" high="7" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa862" name="SP_DS_TSIZE" low="0" high="7" type="uint" usage="rp_blit"/>
+ 	<reg32 offset="0xa863" name="SP_DS_CONFIG" type="a6xx_sp_xs_config" usage="rp_blit"/>
+-	<reg32 offset="0xa864" name="SP_DS_INSTRLEN" low="0" high="27" type="uint" usage="rp_blit"/>
+-	<reg32 offset="0xa865" name="SP_DS_PVT_MEM_HW_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_hw_stack_offset" usage="rp_blit"/>
+-	<reg32 offset="0xa868" name="SP_DS_VGPR_CONFIG" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xa864" name="SP_DS_INSTR_SIZE" low="0" high="27" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa865" name="SP_DS_PVT_MEM_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_stack_offset" usage="rp_blit"/>
++	<reg32 offset="0xa868" name="SP_DS_VGS_CNTL" variants="A7XX-" usage="cmd"/>
+ 
+-	<reg32 offset="0xa870" name="SP_GS_CTRL_REG0" type="a6xx_sp_xs_ctrl_reg0" usage="rp_blit">
++	<reg32 offset="0xa870" name="SP_GS_CNTL_0" type="a6xx_sp_xs_cntl_0" usage="rp_blit">
+ 		<!-- There is no mergedregs bit, that comes from the VS. -->
+ 		<bitfield name="EARLYPREAMBLE" pos="20" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0xa871" name="SP_GS_PRIM_SIZE" low="0" high="7" type="uint" usage="rp_blit">
++	<reg32 offset="0xa871" name="SP_GS_CNTL_1" low="0" high="7" type="uint" usage="rp_blit">
+ 		<doc>
+ 			Normally the size of the output of the last stage in
+ 			dwords. It should be programmed as follows:
+@@ -4854,11 +2702,11 @@ to upconvert to 32b float internally?
+ 			doesn't matter in practice.
+ 		</doc>
+ 	</reg32>
+-	<reg32 offset="0xa872" name="SP_GS_BRANCH_COND" type="hex" usage="rp_blit"/>
++	<reg32 offset="0xa872" name="SP_GS_BOOLEAN_CF_MASK" type="hex" usage="rp_blit"/>
+ 
+ 	<!-- TODO: exact same layout as 0xa802-0xa81a -->
+-	<reg32 offset="0xa873" name="SP_GS_PRIMITIVE_CNTL" type="a6xx_sp_xs_prim_cntl" usage="rp_blit"/>
+-	<array offset="0xa874" name="SP_GS_OUT" stride="1" length="16" usage="rp_blit">
++	<reg32 offset="0xa873" name="SP_GS_OUTPUT_CNTL" type="a6xx_sp_xs_output_cntl" usage="rp_blit"/>
++	<array offset="0xa874" name="SP_GS_OUTPUT" stride="1" length="16" usage="rp_blit">
+ 		<reg32 offset="0x0" name="REG">
+ 			<bitfield name="A_REGID" low="0" high="7" type="a3xx_regid"/>
+ 			<bitfield name="A_COMPMASK" low="8" high="11" type="hex"/>
+@@ -4867,7 +2715,7 @@ to upconvert to 32b float internally?
+ 		</reg32>
+ 	</array>
+ 
+-	<array offset="0xa884" name="SP_GS_VPC_DST" stride="1" length="8" usage="rp_blit">
++	<array offset="0xa884" name="SP_GS_VPC_DEST" stride="1" length="8" usage="rp_blit">
+ 		<reg32 offset="0x0" name="REG">
+ 			<bitfield name="OUTLOC0" low="0" high="7" type="uint"/>
+ 			<bitfield name="OUTLOC1" low="8" high="15" type="uint"/>
+@@ -4877,29 +2725,29 @@ to upconvert to 32b float internally?
+ 	</array>
+ 
+ 	<!-- TODO: exact same layout as 0xa81b-0xa825 -->
+-	<reg32 offset="0xa88c" name="SP_GS_OBJ_FIRST_EXEC_OFFSET" type="uint" usage="rp_blit"/>
+-	<reg64 offset="0xa88d" name="SP_GS_OBJ_START" type="address" align="32" usage="rp_blit"/>
++	<reg32 offset="0xa88c" name="SP_GS_PROGRAM_COUNTER_OFFSET" type="uint" usage="rp_blit"/>
++	<reg64 offset="0xa88d" name="SP_GS_BASE" type="address" align="32" usage="rp_blit"/>
+ 	<reg32 offset="0xa88f" name="SP_GS_PVT_MEM_PARAM" type="a6xx_sp_xs_pvt_mem_param" usage="rp_blit"/>
+-	<reg64 offset="0xa890" name="SP_GS_PVT_MEM_ADDR" type="waddress" align="32" usage="rp_blit"/>
++	<reg64 offset="0xa890" name="SP_GS_PVT_MEM_BASE" type="waddress" align="32" usage="rp_blit"/>
+ 	<reg32 offset="0xa892" name="SP_GS_PVT_MEM_SIZE" type="a6xx_sp_xs_pvt_mem_size" usage="rp_blit"/>
+-	<reg32 offset="0xa893" name="SP_GS_TEX_COUNT" low="0" high="7" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa893" name="SP_GS_TSIZE" low="0" high="7" type="uint" usage="rp_blit"/>
+ 	<reg32 offset="0xa894" name="SP_GS_CONFIG" type="a6xx_sp_xs_config" usage="rp_blit"/>
+-	<reg32 offset="0xa895" name="SP_GS_INSTRLEN" low="0" high="27" type="uint" usage="rp_blit"/>
+-	<reg32 offset="0xa896" name="SP_GS_PVT_MEM_HW_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_hw_stack_offset" usage="rp_blit"/>
+-	<reg32 offset="0xa899" name="SP_GS_VGPR_CONFIG" variants="A7XX-" usage="cmd"/>
+-
+-	<reg64 offset="0xa8a0" name="SP_VS_TEX_SAMP" type="address" align="16" usage="cmd"/>
+-	<reg64 offset="0xa8a2" name="SP_HS_TEX_SAMP" type="address" align="16" usage="cmd"/>
+-	<reg64 offset="0xa8a4" name="SP_DS_TEX_SAMP" type="address" align="16" usage="cmd"/>
+-	<reg64 offset="0xa8a6" name="SP_GS_TEX_SAMP" type="address" align="16" usage="cmd"/>
+-	<reg64 offset="0xa8a8" name="SP_VS_TEX_CONST" type="address" align="64" usage="cmd"/>
+-	<reg64 offset="0xa8aa" name="SP_HS_TEX_CONST" type="address" align="64" usage="cmd"/>
+-	<reg64 offset="0xa8ac" name="SP_DS_TEX_CONST" type="address" align="64" usage="cmd"/>
+-	<reg64 offset="0xa8ae" name="SP_GS_TEX_CONST" type="address" align="64" usage="cmd"/>
++	<reg32 offset="0xa895" name="SP_GS_INSTR_SIZE" low="0" high="27" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa896" name="SP_GS_PVT_MEM_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_stack_offset" usage="rp_blit"/>
++	<reg32 offset="0xa899" name="SP_GS_VGS_CNTL" variants="A7XX-" usage="cmd"/>
++
++	<reg64 offset="0xa8a0" name="SP_VS_SAMPLER_BASE" type="address" align="16" usage="cmd"/>
++	<reg64 offset="0xa8a2" name="SP_HS_SAMPLER_BASE" type="address" align="16" usage="cmd"/>
++	<reg64 offset="0xa8a4" name="SP_DS_SAMPLER_BASE" type="address" align="16" usage="cmd"/>
++	<reg64 offset="0xa8a6" name="SP_GS_SAMPLER_BASE" type="address" align="16" usage="cmd"/>
++	<reg64 offset="0xa8a8" name="SP_VS_TEXMEMOBJ_BASE" type="address" align="64" usage="cmd"/>
++	<reg64 offset="0xa8aa" name="SP_HS_TEXMEMOBJ_BASE" type="address" align="64" usage="cmd"/>
++	<reg64 offset="0xa8ac" name="SP_DS_TEXMEMOBJ_BASE" type="address" align="64" usage="cmd"/>
++	<reg64 offset="0xa8ae" name="SP_GS_TEXMEMOBJ_BASE" type="address" align="64" usage="cmd"/>
+ 
+ 	<!-- TODO: 4 unknown bool registers 0xa8c0-0xa8c3 -->
+ 
+-	<reg32 offset="0xa980" name="SP_FS_CTRL_REG0" type="a6xx_sp_xs_ctrl_reg0" usage="rp_blit">
++	<reg32 offset="0xa980" name="SP_PS_CNTL_0" type="a6xx_sp_xs_cntl_0" usage="rp_blit">
+ 		<bitfield name="THREADSIZE" pos="20" type="a6xx_threadsize"/>
+ 		<bitfield name="UNK21" pos="21" type="boolean"/>
+ 		<bitfield name="VARYING" pos="22" type="boolean"/>
+@@ -4909,8 +2757,7 @@ to upconvert to 32b float internally?
+ 				fine derivatives and quad subgroup ops.
+ 			</doc>
+ 		</bitfield>
+-		<!-- note: vk blob uses bit24 -->
+-		<bitfield name="UNK24" pos="24" type="boolean"/>
++		<bitfield name="INOUTREGOVERLAP" pos="24" type="boolean"/>
+ 		<bitfield name="UNK25" pos="25" type="boolean"/>
+ 		<bitfield name="PIXLODENABLE" pos="26" type="boolean">
+ 			<doc>
+@@ -4923,12 +2770,12 @@ to upconvert to 32b float internally?
+ 		<bitfield name="EARLYPREAMBLE" pos="28" type="boolean"/>
+ 		<bitfield name="MERGEDREGS" pos="31" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0xa981" name="SP_FS_BRANCH_COND" type="hex"/>
+-	<reg32 offset="0xa982" name="SP_FS_OBJ_FIRST_EXEC_OFFSET" type="uint" usage="rp_blit"/>
+-	<reg64 offset="0xa983" name="SP_FS_OBJ_START" type="address" align="32" usage="rp_blit"/>
+-	<reg32 offset="0xa985" name="SP_FS_PVT_MEM_PARAM" type="a6xx_sp_xs_pvt_mem_param" usage="rp_blit"/>
+-	<reg64 offset="0xa986" name="SP_FS_PVT_MEM_ADDR" type="waddress" align="32" usage="rp_blit"/>
+-	<reg32 offset="0xa988" name="SP_FS_PVT_MEM_SIZE" type="a6xx_sp_xs_pvt_mem_size" usage="rp_blit"/>
++	<reg32 offset="0xa981" name="SP_PS_BOOLEAN_CF_MASK" type="hex"/>
++	<reg32 offset="0xa982" name="SP_PS_PROGRAM_COUNTER_OFFSET" type="uint" usage="rp_blit"/>
++	<reg64 offset="0xa983" name="SP_PS_BASE" type="address" align="32" usage="rp_blit"/>
++	<reg32 offset="0xa985" name="SP_PS_PVT_MEM_PARAM" type="a6xx_sp_xs_pvt_mem_param" usage="rp_blit"/>
++	<reg64 offset="0xa986" name="SP_PS_PVT_MEM_BASE" type="waddress" align="32" usage="rp_blit"/>
++	<reg32 offset="0xa988" name="SP_PS_PVT_MEM_SIZE" type="a6xx_sp_xs_pvt_mem_size" usage="rp_blit"/>
+ 
+ 	<reg32 offset="0xa989" name="SP_BLEND_CNTL" usage="rp_blit">
+ 		<!-- per-mrt enable bit -->
+@@ -4948,7 +2795,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="SRGB_MRT6" pos="6" type="boolean"/>
+ 		<bitfield name="SRGB_MRT7" pos="7" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0xa98b" name="SP_FS_RENDER_COMPONENTS" usage="rp_blit">
++	<reg32 offset="0xa98b" name="SP_PS_OUTPUT_MASK" usage="rp_blit">
+ 		<bitfield name="RT0" low="0" high="3"/>
+ 		<bitfield name="RT1" low="4" high="7"/>
+ 		<bitfield name="RT2" low="8" high="11"/>
+@@ -4958,17 +2805,17 @@ to upconvert to 32b float internally?
+ 		<bitfield name="RT6" low="24" high="27"/>
+ 		<bitfield name="RT7" low="28" high="31"/>
+ 	</reg32>
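Each render target gets one 4-bit nibble of component enables here. A sketch
of packing the register from per-MRT writemasks (the bit 0 = R ... bit 3 = A
ordering is my assumption):

    #include <stdint.h>

    static uint32_t ps_output_mask(const uint8_t writemask[8], unsigned mrts)
    {
            uint32_t v = 0;

            for (unsigned i = 0; i < mrts && i < 8; i++)
                    v |= (uint32_t)(writemask[i] & 0xf) << (4 * i);
            return v;
    }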
+-	<reg32 offset="0xa98c" name="SP_FS_OUTPUT_CNTL0" usage="rp_blit">
++	<reg32 offset="0xa98c" name="SP_PS_OUTPUT_CNTL" usage="rp_blit">
+ 		<bitfield name="DUAL_COLOR_IN_ENABLE" pos="0" type="boolean"/>
+ 		<bitfield name="DEPTH_REGID" low="8" high="15" type="a3xx_regid"/>
+ 		<bitfield name="SAMPMASK_REGID" low="16" high="23" type="a3xx_regid"/>
+ 		<bitfield name="STENCILREF_REGID" low="24" high="31" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xa98d" name="SP_FS_OUTPUT_CNTL1" usage="rp_blit">
++	<reg32 offset="0xa98d" name="SP_PS_MRT_CNTL" usage="rp_blit">
+ 		<bitfield name="MRT" low="0" high="3" type="uint"/>
+ 	</reg32>
+ 
+-	<array offset="0xa98e" name="SP_FS_OUTPUT" stride="1" length="8" usage="rp_blit">
++	<array offset="0xa98e" name="SP_PS_OUTPUT" stride="1" length="8" usage="rp_blit">
+ 		<doc>per MRT</doc>
+ 		<reg32 offset="0x0" name="REG">
+ 			<bitfield name="REGID" low="0" high="7" type="a3xx_regid"/>
+@@ -4976,7 +2823,7 @@ to upconvert to 32b float internally?
+ 		</reg32>
+ 	</array>
+ 
+-	<array offset="0xa996" name="SP_FS_MRT" stride="1" length="8" usage="rp_blit">
++	<array offset="0xa996" name="SP_PS_MRT" stride="1" length="8" usage="rp_blit">
+ 		<reg32 offset="0" name="REG">
+ 			<bitfield name="COLOR_FORMAT" low="0" high="7" type="a6xx_format"/>
+ 			<bitfield name="COLOR_SINT" pos="8" type="boolean"/>
+@@ -4985,7 +2832,7 @@ to upconvert to 32b float internally?
+ 		</reg32>
+ 	</array>
+ 
+-	<reg32 offset="0xa99e" name="SP_FS_PREFETCH_CNTL" usage="rp_blit">
++	<reg32 offset="0xa99e" name="SP_PS_INITIAL_TEX_LOAD_CNTL" usage="rp_blit">
+ 		<bitfield name="COUNT" low="0" high="2" type="uint"/>
+ 		<bitfield name="IJ_WRITE_DISABLE" pos="3" type="boolean"/>
+ 		<doc>
+@@ -5002,7 +2849,7 @@ to upconvert to 32b float internally?
+ 		<!-- Blob never uses it -->
+ 		<bitfield name="CONSTSLOTID4COORD" low="16" high="24" type="uint" variants="A7XX-"/>
+ 	</reg32>
+-	<array offset="0xa99f" name="SP_FS_PREFETCH" stride="1" length="4" variants="A6XX" usage="rp_blit">
++	<array offset="0xa99f" name="SP_PS_INITIAL_TEX_LOAD" stride="1" length="4" variants="A6XX" usage="rp_blit">
+ 		<reg32 offset="0" name="CMD" variants="A6XX">
+ 			<bitfield name="SRC" low="0" high="6" type="uint"/>
+ 			<bitfield name="SAMP_ID" low="7" high="10" type="uint"/>
+@@ -5016,7 +2863,7 @@ to upconvert to 32b float internally?
+ 			<bitfield name="CMD" low="29" high="31" type="a6xx_tex_prefetch_cmd"/>
+ 		</reg32>
+ 	</array>
+-	<array offset="0xa99f" name="SP_FS_PREFETCH" stride="1" length="4" variants="A7XX-" usage="rp_blit">
++	<array offset="0xa99f" name="SP_PS_INITIAL_TEX_LOAD" stride="1" length="4" variants="A7XX-" usage="rp_blit">
+ 		<reg32 offset="0" name="CMD" variants="A7XX-">
+ 			<bitfield name="SRC" low="0" high="6" type="uint"/>
+ 			<bitfield name="SAMP_ID" low="7" high="9" type="uint"/>
+@@ -5028,22 +2875,23 @@ to upconvert to 32b float internally?
+ 			<bitfield name="CMD" low="26" high="29" type="a6xx_tex_prefetch_cmd"/>
+ 		</reg32>
+ 	</array>
+-	<array offset="0xa9a3" name="SP_FS_BINDLESS_PREFETCH" stride="1" length="4" usage="rp_blit">
++	<array offset="0xa9a3" name="SP_PS_INITIAL_TEX_INDEX" stride="1" length="4" usage="rp_blit">
+ 		<reg32 offset="0" name="CMD">
+ 			<bitfield name="SAMP_ID" low="0" high="15" type="uint"/>
+ 			<bitfield name="TEX_ID" low="16" high="31" type="uint"/>
+ 		</reg32>
+ 	</array>
+-	<reg32 offset="0xa9a7" name="SP_FS_TEX_COUNT" low="0" high="7" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa9a7" name="SP_PS_TSIZE" low="0" high="7" type="uint" usage="rp_blit"/>
+ 	<reg32 offset="0xa9a8" name="SP_UNKNOWN_A9A8" low="0" high="16" usage="cmd"/> <!-- always 0x0 ? -->
+-	<reg32 offset="0xa9a9" name="SP_FS_PVT_MEM_HW_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_hw_stack_offset" usage="rp_blit"/>
++	<reg32 offset="0xa9a9" name="SP_PS_PVT_MEM_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_stack_offset" usage="rp_blit"/>
++	<reg32 offset="0xa9ab" name="SP_PS_UNKNOWN_A9AB" variants="A7XX-" usage="cmd"/>
+ 
+ 	<!-- TODO: unknown bool register at 0xa9aa, likely same as 0xa8c0-0xa8c3 but for FS -->
+ 
+ 
+ 
+ 
+-	<reg32 offset="0xa9b0" name="SP_CS_CTRL_REG0" type="a6xx_sp_xs_ctrl_reg0" usage="cmd">
++	<reg32 offset="0xa9b0" name="SP_CS_CNTL_0" type="a6xx_sp_xs_cntl_0" usage="cmd">
+ 		<bitfield name="THREADSIZE" pos="20" type="a6xx_threadsize"/>
+ 		<!-- seems to make SP use less concurrent threads when possible? -->
+ 		<bitfield name="UNK21" pos="21" type="boolean"/>
+@@ -5053,8 +2901,15 @@ to upconvert to 32b float internally?
+ 		<bitfield name="MERGEDREGS" pos="31" type="boolean"/>
+ 	</reg32>
+ 
++	<enum name="a6xx_const_ram_mode">
++		<value value="0x0" name="CONSTLEN_128"/>
++		<value value="0x1" name="CONSTLEN_192"/>
++		<value value="0x2" name="CONSTLEN_256"/>
++		<value value="0x3" name="CONSTLEN_512"/> <!-- a7xx only -->
++	</enum>
++
+ 	<!-- set for compute shaders -->
+-	<reg32 offset="0xa9b1" name="SP_CS_UNKNOWN_A9B1" usage="cmd">
++	<reg32 offset="0xa9b1" name="SP_CS_CNTL_1" usage="cmd">
+ 		<bitfield name="SHARED_SIZE" low="0" high="4" type="uint">
+ 			<doc>
+ 				If 0 - all 32k of shared storage is enabled, otherwise
+@@ -5065,32 +2920,36 @@ to upconvert to 32b float internally?
+ 				always return 0)
+ 			</doc>
+ 		</bitfield>
+-		<bitfield name="UNK5" pos="5" type="boolean"/>
+-		<!-- always 1 ? -->
+-		<bitfield name="UNK6" pos="6" type="boolean"/>
++		<bitfield name="CONSTANTRAMMODE" low="5" high="6" type="a6xx_const_ram_mode">
++			<doc>
++				This defines the split between consts and local
++				memory in the Local Buffer. The programmed value
++				must be at least the actual CONSTLEN.
++			</doc>
++		</bitfield>
+ 	</reg32>
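++	<!--
++	Worked example for CONSTANTRAMMODE above (illustrative numbers,
++	assuming the CONSTLEN_* names count vec4 consts the same way
++	CONSTLEN does): a CS whose actual CONSTLEN is 150 does not fit
++	in CONSTLEN_128, so the smallest legal setting is CONSTLEN_192,
++	leaving the rest of the Local Buffer for local memory.
++	 -->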
+-	<reg32 offset="0xa9b2" name="SP_CS_BRANCH_COND" type="hex" usage="cmd"/>
+-	<reg32 offset="0xa9b3" name="SP_CS_OBJ_FIRST_EXEC_OFFSET" type="uint" usage="cmd"/>
+-	<reg64 offset="0xa9b4" name="SP_CS_OBJ_START" type="address" align="32" usage="cmd"/>
++	<reg32 offset="0xa9b2" name="SP_CS_BOOLEAN_CF_MASK" type="hex" usage="cmd"/>
++	<reg32 offset="0xa9b3" name="SP_CS_PROGRAM_COUNTER_OFFSET" type="uint" usage="cmd"/>
++	<reg64 offset="0xa9b4" name="SP_CS_BASE" type="address" align="32" usage="cmd"/>
+ 	<reg32 offset="0xa9b6" name="SP_CS_PVT_MEM_PARAM" type="a6xx_sp_xs_pvt_mem_param" usage="cmd"/>
+-	<reg64 offset="0xa9b7" name="SP_CS_PVT_MEM_ADDR" align="32" usage="cmd"/>
++	<reg64 offset="0xa9b7" name="SP_CS_PVT_MEM_BASE" align="32" usage="cmd"/>
+ 	<reg32 offset="0xa9b9" name="SP_CS_PVT_MEM_SIZE" type="a6xx_sp_xs_pvt_mem_size" usage="cmd"/>
+-	<reg32 offset="0xa9ba" name="SP_CS_TEX_COUNT" low="0" high="7" type="uint" usage="cmd"/>
++	<reg32 offset="0xa9ba" name="SP_CS_TSIZE" low="0" high="7" type="uint" usage="cmd"/>
+ 	<reg32 offset="0xa9bb" name="SP_CS_CONFIG" type="a6xx_sp_xs_config" usage="cmd"/>
+-	<reg32 offset="0xa9bc" name="SP_CS_INSTRLEN" low="0" high="27" type="uint" usage="cmd"/>
+-	<reg32 offset="0xa9bd" name="SP_CS_PVT_MEM_HW_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_hw_stack_offset" usage="cmd"/>
++	<reg32 offset="0xa9bc" name="SP_CS_INSTR_SIZE" low="0" high="27" type="uint" usage="cmd"/>
++	<reg32 offset="0xa9bd" name="SP_CS_PVT_MEM_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_stack_offset" usage="cmd"/>
+ 	<reg32 offset="0xa9be" name="SP_CS_UNKNOWN_A9BE" variants="A7XX-" usage="cmd"/>
+-	<reg32 offset="0xa9c5" name="SP_CS_VGPR_CONFIG" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xa9c5" name="SP_CS_VGS_CNTL" variants="A7XX-" usage="cmd"/>
+ 
+-	<!-- new in a6xx gen4, matches HLSQ_CS_CNTL_0 -->
+-	<reg32 offset="0xa9c2" name="SP_CS_CNTL_0" usage="cmd">
++	<!-- new in a6xx gen4, matches SP_CS_CONST_CONFIG_0 -->
++	<reg32 offset="0xa9c2" name="SP_CS_WIE_CNTL_0" usage="cmd">
+ 		<bitfield name="WGIDCONSTID" low="0" high="7" type="a3xx_regid"/>
+ 		<bitfield name="WGSIZECONSTID" low="8" high="15" type="a3xx_regid"/>
+ 		<bitfield name="WGOFFSETCONSTID" low="16" high="23" type="a3xx_regid"/>
+ 		<bitfield name="LOCALIDREGID" low="24" high="31" type="a3xx_regid"/>
+ 	</reg32>
+-	<!-- new in a6xx gen4, matches HLSQ_CS_CNTL_1 -->
+-	<reg32 offset="0xa9c3" name="SP_CS_CNTL_1" variants="A6XX" usage="cmd">
++	<!-- new in a6xx gen4, matches SP_CS_WGE_CNTL -->
++	<reg32 offset="0xa9c3" name="SP_CS_WIE_CNTL_1" variants="A6XX" usage="cmd">
+ 		<!-- gl_LocalInvocationIndex -->
+ 		<bitfield name="LINEARLOCALIDREGID" low="0" high="7" type="a3xx_regid"/>
+ 		<!-- a650 has 6 "SP cores" (but 3 "SP"). this makes it use only
+@@ -5102,7 +2961,18 @@ to upconvert to 32b float internally?
+ 		<bitfield name="THREADSIZE_SCALAR" pos="10" type="boolean"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xa9c3" name="SP_CS_CNTL_1" variants="A7XX-" usage="cmd">
++	<enum name="a7xx_workitem_rast_order">
++		<value value="0x0" name="WORKITEMRASTORDER_LINEAR"/>
++		<doc>
++			This is a fixed tiling, with 4x4 invocation outer tiles
++			containing 2x2 invocation inner tiles. The intent is to
++			improve cache locality with textures and images accessed
++			using gl_LocalInvocationID.
++		</doc>
++		<value value="0x1" name="WORKITEMRASTORDER_TILED"/>
++	</enum>
++
++	<reg32 offset="0xa9c3" name="SP_CS_WIE_CNTL_1" variants="A7XX-" usage="cmd">
+ 		<!-- gl_LocalInvocationIndex -->
+ 		<bitfield name="LINEARLOCALIDREGID" low="0" high="7" type="a3xx_regid"/>
+ 		<!-- Must match SP_CS_CTRL -->
+@@ -5110,18 +2980,16 @@ to upconvert to 32b float internally?
+ 		<!-- 1 thread per wave (would hang if THREAD128 is also set) -->
+ 		<bitfield name="THREADSIZE_SCALAR" pos="9" type="boolean"/>
+ 
+-		<!-- Affects getone. If enabled, getone sometimes executed 1? less times
+-		     than there are subgroups.
+-		 -->
+-		<bitfield name="UNK15" pos="15" type="boolean"/>
++		<doc>How invocations/fibers within a workgroup are tiled.</doc>
++		<bitfield name="WORKITEMRASTORDER" pos="15" type="a7xx_workitem_rast_order"/>
+ 	</reg32>
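++	<!--
++	Sketch of WORKITEMRASTORDER_TILED (one plausible layout consistent
++	with the doc in a7xx_workitem_rast_order above): invocations 0..3
++	fill the 2x2 inner tile at local (0,0), 4..7 the next inner tile,
++	and so on through the 4x4 outer tile, instead of walking whole
++	rows of the workgroup as in WORKITEMRASTORDER_LINEAR.
++	 -->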
+ 
+ 	<!-- TODO: two 64kb aligned addresses at a9d0/a9d2 -->
+ 
+-	<reg64 offset="0xa9e0" name="SP_FS_TEX_SAMP" type="address" align="16" usage="rp_blit"/>
+-	<reg64 offset="0xa9e2" name="SP_CS_TEX_SAMP" type="address" align="16" usage="cmd"/>
+-	<reg64 offset="0xa9e4" name="SP_FS_TEX_CONST" type="address" align="64" usage="rp_blit"/>
+-	<reg64 offset="0xa9e6" name="SP_CS_TEX_CONST" type="address" align="64" usage="cmd"/>
++	<reg64 offset="0xa9e0" name="SP_PS_SAMPLER_BASE" type="address" align="16" usage="rp_blit"/>
++	<reg64 offset="0xa9e2" name="SP_CS_SAMPLER_BASE" type="address" align="16" usage="cmd"/>
++	<reg64 offset="0xa9e4" name="SP_PS_TEXMEMOBJ_BASE" type="address" align="64" usage="rp_blit"/>
++	<reg64 offset="0xa9e6" name="SP_CS_TEXMEMOBJ_BASE" type="address" align="64" usage="cmd"/>
+ 
+ 	<enum name="a6xx_bindless_descriptor_size">
+ 		<doc>
+@@ -5146,18 +3014,19 @@ to upconvert to 32b float internally?
+ 	</array>
+ 
+ 	<!--
+-	IBO state for compute shader:
++	UAV state for compute shader:
+ 	 -->
+-	<reg64 offset="0xa9f2" name="SP_CS_IBO" type="address" align="16"/>
+-	<reg32 offset="0xaa00" name="SP_CS_IBO_COUNT" low="0" high="6" type="uint"/>
++	<reg64 offset="0xa9f2" name="SP_CS_UAV_BASE" type="address" align="16" variants="A6XX"/>
++	<reg64 offset="0xa9f8" name="SP_CS_UAV_BASE" type="address" align="16" variants="A7XX"/>
++	<reg32 offset="0xaa00" name="SP_CS_USIZE" low="0" high="6" type="uint"/>
+ 
+ 	<!-- Correlated with avgs/uvgs usage in FS -->
+-	<reg32 offset="0xaa01" name="SP_FS_VGPR_CONFIG" type="uint" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xaa01" name="SP_PS_VGS_CNTL" type="uint" variants="A7XX-" usage="cmd"/>
+ 
+-	<reg32 offset="0xaa02" name="SP_PS_ALIASED_COMPONENTS_CONTROL" variants="A7XX-" usage="cmd">
++	<reg32 offset="0xaa02" name="SP_PS_OUTPUT_CONST_CNTL" variants="A7XX-" usage="cmd">
+ 		<bitfield name="ENABLED" pos="0" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0xaa03" name="SP_PS_ALIASED_COMPONENTS" variants="A7XX-" usage="cmd">
++	<reg32 offset="0xaa03" name="SP_PS_OUTPUT_CONST_MASK" variants="A7XX-" usage="cmd">
+ 		<doc>
+ 			Specify for which components the output color should be read
+ 			from alias, e.g. for:
+@@ -5167,7 +3036,7 @@ to upconvert to 32b float internally?
+ 				alias.1.b32.0 r1.x, c4.x
+ 				alias.1.b32.0 r0.x, c0.x
+ 
+-			the SP_PS_ALIASED_COMPONENTS would be 0x00001111
++			the SP_PS_OUTPUT_CONST_MASK would be 0x00001111
+ 		</doc>
+ 
+ 		<bitfield name="RT0" low="0" high="3"/>
+@@ -5193,7 +3062,7 @@ to upconvert to 32b float internally?
+ 		<value value="0x2" name="ISAMMODE_GL"/>
+ 	</enum>
+ 
+-	<reg32 offset="0xab00" name="SP_MODE_CONTROL" usage="rp_blit">
++	<reg32 offset="0xab00" name="SP_MODE_CNTL" usage="rp_blit">
+ 	  <!--
+ 	  When set, half register loads from the constant file will
+ 	  load a 32-bit value (so hc0.y loads the same value as c0.y)
+@@ -5210,16 +3079,16 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0xab01" name="SP_UNKNOWN_AB01" variants="A7XX-" usage="cmd"/>
+ 	<reg32 offset="0xab02" name="SP_UNKNOWN_AB02" variants="A7XX-" usage="cmd"/>
+ 
+-	<reg32 offset="0xab04" name="SP_FS_CONFIG" type="a6xx_sp_xs_config" usage="rp_blit"/>
+-	<reg32 offset="0xab05" name="SP_FS_INSTRLEN" low="0" high="27" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xab04" name="SP_PS_CONFIG" type="a6xx_sp_xs_config" usage="rp_blit"/>
++	<reg32 offset="0xab05" name="SP_PS_INSTR_SIZE" low="0" high="27" type="uint" usage="rp_blit"/>
+ 
+-	<array offset="0xab10" name="SP_BINDLESS_BASE" stride="2" length="5" variants="A6XX" usage="rp_blit">
++	<array offset="0xab10" name="SP_GFX_BINDLESS_BASE" stride="2" length="5" variants="A6XX" usage="rp_blit">
+ 		<reg64 offset="0" name="DESCRIPTOR" variants="A6XX">
+ 			<bitfield name="DESC_SIZE" low="0" high="1" type="a6xx_bindless_descriptor_size"/>
+ 			<bitfield name="ADDR" low="2" high="63" shr="2" type="address"/>
+ 		</reg64>
+ 	</array>
+-	<array offset="0xab0a" name="SP_BINDLESS_BASE" stride="2" length="8" variants="A7XX-" usage="rp_blit">
++	<array offset="0xab0a" name="SP_GFX_BINDLESS_BASE" stride="2" length="8" variants="A7XX-" usage="rp_blit">
+ 		<reg64 offset="0" name="DESCRIPTOR" variants="A7XX-">
+ 			<bitfield name="DESC_SIZE" low="0" high="1" type="a6xx_bindless_descriptor_size"/>
+ 			<bitfield name="ADDR" low="2" high="63" shr="2" type="address"/>
+@@ -5227,15 +3096,15 @@ to upconvert to 32b float internally?
+ 	</array>
+ 
+ 	<!--
+-	Combined IBO state for 3d pipe, used for Image and SSBO write/atomic
+-	instructions VS/HS/DS/GS/FS.  See SP_CS_IBO_* for compute shaders.
++	Combined UAV state for 3d pipe, used for Image and SSBO write/atomic
++	instructions in VS/HS/DS/GS/FS.  See SP_CS_UAV_BASE_* for compute shaders.
+ 	 -->
+-	<reg64 offset="0xab1a" name="SP_IBO" type="address" align="16" usage="cmd"/>
+-	<reg32 offset="0xab20" name="SP_IBO_COUNT" low="0" high="6" type="uint" usage="cmd"/>
++	<reg64 offset="0xab1a" name="SP_GFX_UAV_BASE" type="address" align="16" usage="cmd"/>
++	<reg32 offset="0xab20" name="SP_GFX_USIZE" low="0" high="6" type="uint" usage="cmd"/>
+ 
+ 	<reg32 offset="0xab22" name="SP_UNKNOWN_AB22" variants="A7XX-" usage="cmd"/>
+ 
+-	<bitset name="a6xx_sp_2d_dst_format" inline="yes">
++	<bitset name="a6xx_sp_a2d_output_info" inline="yes">
+ 		<bitfield name="NORM" pos="0" type="boolean"/>
+ 		<bitfield name="SINT" pos="1" type="boolean"/>
+ 		<bitfield name="UINT" pos="2" type="boolean"/>
+@@ -5248,8 +3117,8 @@ to upconvert to 32b float internally?
+ 		<bitfield name="MASK" low="12" high="15"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0xacc0" name="SP_2D_DST_FORMAT" type="a6xx_sp_2d_dst_format" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xa9bf" name="SP_2D_DST_FORMAT" type="a6xx_sp_2d_dst_format" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xacc0" name="SP_A2D_OUTPUT_INFO" type="a6xx_sp_a2d_output_info" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xa9bf" name="SP_A2D_OUTPUT_INFO" type="a6xx_sp_a2d_output_info" variants="A7XX-" usage="rp_blit"/>
+ 
+ 	<reg32 offset="0xae00" name="SP_DBG_ECO_CNTL" usage="cmd"/>
+ 	<reg32 offset="0xae01" name="SP_ADDR_MODE_CNTL" pos="0" type="a5xx_address_mode"/>
+@@ -5257,16 +3126,16 @@ to upconvert to 32b float internally?
+ 		<!-- TODO: valid bits 0x3c3f, see kernel -->
+ 	</reg32>
+ 	<reg32 offset="0xae03" name="SP_CHICKEN_BITS" usage="cmd"/>
+-	<reg32 offset="0xae04" name="SP_FLOAT_CNTL" usage="cmd">
++	<reg32 offset="0xae04" name="SP_NC_MODE_CNTL_2" usage="cmd">
+ 		<bitfield name="F16_NO_INF" pos="3" type="boolean"/>
+ 	</reg32>
+ 
+ 	<reg32 offset="0xae06" name="SP_UNKNOWN_AE06" variants="A7XX-" usage="cmd"/>
+-	<reg32 offset="0xae08" name="SP_UNKNOWN_AE08" variants="A7XX-" usage="cmd"/>
+-	<reg32 offset="0xae09" name="SP_UNKNOWN_AE09" variants="A7XX-" usage="cmd"/>
+-	<reg32 offset="0xae0a" name="SP_UNKNOWN_AE0A" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xae08" name="SP_CHICKEN_BITS_1" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xae09" name="SP_CHICKEN_BITS_2" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xae0a" name="SP_CHICKEN_BITS_3" variants="A7XX-" usage="cmd"/>
+ 
+-	<reg32 offset="0xae0f" name="SP_PERFCTR_ENABLE" usage="cmd">
++	<reg32 offset="0xae0f" name="SP_PERFCTR_SHADER_MASK" usage="cmd">
+ 		<!-- some perfcntrs are affected by a per-stage enable bit
+ 		     (PERF_SP_ALU_WORKING_CYCLES for example)
+ 		     TODO: verify position of HS/DS/GS bits -->
+@@ -5281,7 +3150,7 @@ to upconvert to 32b float internally?
+ 	<array offset="0xae60" name="SP_PERFCTR_HLSQ_SEL" stride="1" length="6" variants="A7XX-"/>
+ 	<reg32 offset="0xae6a" name="SP_UNKNOWN_AE6A" variants="A7XX-" usage="cmd"/>
+ 	<reg32 offset="0xae6b" name="SP_UNKNOWN_AE6B" variants="A7XX-" usage="cmd"/>
+-	<reg32 offset="0xae6c" name="SP_UNKNOWN_AE6C" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xae6c" name="SP_HLSQ_DBG_ECO_CNTL" variants="A7XX-" usage="cmd"/>
+ 	<reg32 offset="0xae6d" name="SP_READ_SEL" variants="A7XX-">
+ 		<bitfield name="LOCATION" low="18" high="19" type="a7xx_state_location"/>
+ 		<bitfield name="PIPE" low="16" high="17" type="a7xx_pipe"/>
+@@ -5301,33 +3170,44 @@ to upconvert to 32b float internally?
+ 	"a6xx_sp_ps_tp_cluster" but this actually specifies the border
+ 	color base for compute shaders.
+ 	-->
+-	<reg64 offset="0xb180" name="SP_PS_TP_BORDER_COLOR_BASE_ADDR" type="address" align="128" usage="cmd"/>
++	<reg64 offset="0xb180" name="TPL1_CS_BORDER_COLOR_BASE" type="address" align="128" usage="cmd"/>
+ 	<reg32 offset="0xb182" name="SP_UNKNOWN_B182" low="0" high="2" usage="cmd"/>
+ 	<reg32 offset="0xb183" name="SP_UNKNOWN_B183" low="0" high="23" usage="cmd"/>
+ 
+ 	<reg32 offset="0xb190" name="SP_UNKNOWN_B190"/>
+ 	<reg32 offset="0xb191" name="SP_UNKNOWN_B191"/>
+ 
+-	<!-- could be all the stuff below here is actually TPL1?? -->
+-
+-	<reg32 offset="0xb300" name="SP_TP_RAS_MSAA_CNTL" usage="rp_blit">
++	<reg32 offset="0xb300" name="TPL1_RAS_MSAA_CNTL" usage="rp_blit">
+ 		<bitfield name="SAMPLES" low="0" high="1" type="a3xx_msaa_samples"/>
+ 		<bitfield name="UNK2" low="2" high="3"/>
+ 	</reg32>
+-	<reg32 offset="0xb301" name="SP_TP_DEST_MSAA_CNTL" usage="rp_blit">
++	<reg32 offset="0xb301" name="TPL1_DEST_MSAA_CNTL" usage="rp_blit">
+ 		<bitfield name="SAMPLES" low="0" high="1" type="a3xx_msaa_samples"/>
+ 		<bitfield name="MSAA_DISABLE" pos="2" type="boolean"/>
+ 	</reg32>
+ 
+ 	<!-- looks to work in the same way as a5xx: -->
+-	<reg64 offset="0xb302" name="SP_TP_BORDER_COLOR_BASE_ADDR" type="address" align="128" usage="cmd"/>
+-	<reg32 offset="0xb304" name="SP_TP_SAMPLE_CONFIG" type="a6xx_sample_config" usage="rp_blit"/>
+-	<reg32 offset="0xb305" name="SP_TP_SAMPLE_LOCATION_0" type="a6xx_sample_locations" usage="rp_blit"/>
+-	<reg32 offset="0xb306" name="SP_TP_SAMPLE_LOCATION_1" type="a6xx_sample_locations" usage="rp_blit"/>
+-	<reg32 offset="0xb307" name="SP_TP_WINDOW_OFFSET" type="a6xx_reg_xy" usage="rp_blit"/>
+-	<reg32 offset="0xb309" name="SP_TP_MODE_CNTL" usage="cmd">
++	<reg64 offset="0xb302" name="TPL1_GFX_BORDER_COLOR_BASE" type="address" align="128" usage="cmd"/>
++	<reg32 offset="0xb304" name="TPL1_MSAA_SAMPLE_POS_CNTL" type="a6xx_msaa_sample_pos_cntl" usage="rp_blit"/>
++	<reg32 offset="0xb305" name="TPL1_PROGRAMMABLE_MSAA_POS_0" type="a6xx_programmable_msaa_pos" usage="rp_blit"/>
++	<reg32 offset="0xb306" name="TPL1_PROGRAMMABLE_MSAA_POS_1" type="a6xx_programmable_msaa_pos" usage="rp_blit"/>
++	<reg32 offset="0xb307" name="TPL1_WINDOW_OFFSET" type="a6xx_reg_xy" usage="rp_blit"/>
++
++	<enum name="a6xx_coord_round">
++		<value value="0" name="COORD_TRUNCATE"/>
++		<value value="1" name="COORD_ROUND_NEAREST_EVEN"/>
++	</enum>
++
++	<enum name="a6xx_nearest_mode">
++		<value value="0" name="ROUND_CLAMP_TRUNCATE"/>
++		<value value="1" name="CLAMP_ROUND_TRUNCATE"/>
++	</enum>
++
++	<reg32 offset="0xb309" name="TPL1_MODE_CNTL" usage="cmd">
+ 		<bitfield name="ISAMMODE" low="0" high="1" type="a6xx_isam_mode"/>
+-		<bitfield name="UNK3" low="2" high="7"/>
++		<bitfield name="TEXCOORDROUNDMODE" pos="2" type="a6xx_coord_round"/>
++		<bitfield name="NEARESTMIPSNAP" pos="5" type="a6xx_nearest_mode"/>
++		<bitfield name="DESTDATATYPEOVERRIDE" pos="7" type="boolean"/>
+ 	</reg32>
+ 	<reg32 offset="0xb310" name="SP_UNKNOWN_B310" variants="A7XX-" usage="cmd"/>
+ 
+@@ -5336,42 +3216,45 @@ to upconvert to 32b float internally?
+ 	badly named or the functionality moved in a6xx.  But downstream kernel
+ 	calls this "a6xx_sp_ps_tp_2d_cluster"
+ 	 -->
+-	<reg32 offset="0xb4c0" name="SP_PS_2D_SRC_INFO" type="a6xx_2d_src_surf_info" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb4c1" name="SP_PS_2D_SRC_SIZE" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb4c0" name="TPL1_A2D_SRC_TEXTURE_INFO" type="a6xx_a2d_src_texture_info" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb4c1" name="TPL1_A2D_SRC_TEXTURE_SIZE" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="WIDTH" low="0" high="14" type="uint"/>
+ 		<bitfield name="HEIGHT" low="15" high="29" type="uint"/>
+ 	</reg32>
+-	<reg64 offset="0xb4c2" name="SP_PS_2D_SRC" type="address" align="16" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb4c4" name="SP_PS_2D_SRC_PITCH" variants="A6XX" usage="rp_blit">
++	<reg64 offset="0xb4c2" name="TPL1_A2D_SRC_TEXTURE_BASE" type="address" align="16" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb4c4" name="TPL1_A2D_SRC_TEXTURE_PITCH" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="UNK0" low="0" high="8"/>
+ 		<bitfield name="PITCH" low="9" high="23" shr="6" type="uint"/>
+ 	</reg32>
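++	<!--
++	Note: per the usual rules-fd shr="6" convention, PITCH holds the
++	byte pitch right-shifted by 6, e.g. a 4096-byte pitch is written
++	as 4096 >> 6 = 64, so the pitch must be 64-byte aligned.
++	 -->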
+ 
+-	<reg32 offset="0xb2c0" name="SP_PS_2D_SRC_INFO" type="a6xx_2d_src_surf_info" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xb2c1" name="SP_PS_2D_SRC_SIZE" variants="A7XX">
++	<reg32 offset="0xb2c0" name="TPL1_A2D_SRC_TEXTURE_INFO" type="a6xx_a2d_src_texture_info" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xb2c1" name="TPL1_A2D_SRC_TEXTURE_SIZE" variants="A7XX">
+ 		<bitfield name="WIDTH" low="0" high="14" type="uint"/>
+ 		<bitfield name="HEIGHT" low="15" high="29" type="uint"/>
+ 	</reg32>
+-	<reg64 offset="0xb2c2" name="SP_PS_2D_SRC" type="address" align="16" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xb2c4" name="SP_PS_2D_SRC_PITCH" variants="A7XX">
+-		<bitfield name="UNK0" low="0" high="8"/>
+-		<bitfield name="PITCH" low="9" high="23" shr="6" type="uint"/>
++	<reg64 offset="0xb2c2" name="TPL1_A2D_SRC_TEXTURE_BASE" type="address" align="16" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xb2c4" name="TPL1_A2D_SRC_TEXTURE_PITCH" variants="A7XX">
++		<!--
++		Bits 3..9 must be zero unless 'TPL1_A2D_BLT_CNTL::TYPE'
++		is A6XX_TEX_IMG_BUFFER, which allows for lower alignment.
++		 -->
++		<bitfield name="PITCH" low="3" high="23" type="uint"/>
+ 	</reg32>
+ 
+ 	<!-- planes for NV12, etc. (TODO: not tested) -->
+-	<reg64 offset="0xb4c5" name="SP_PS_2D_SRC_PLANE1" type="address" align="16" variants="A6XX"/>
+-	<reg32 offset="0xb4c7" name="SP_PS_2D_SRC_PLANE_PITCH" low="0" high="11" shr="6" type="uint" variants="A6XX"/>
+-	<reg64 offset="0xb4c8" name="SP_PS_2D_SRC_PLANE2" type="address" align="16" variants="A6XX"/>
++	<reg64 offset="0xb4c5" name="TPL1_A2D_SRC_TEXTURE_BASE_1" type="address" align="16" variants="A6XX"/>
++	<reg32 offset="0xb4c7" name="TPL1_A2D_SRC_TEXTURE_PITCH_1" low="0" high="11" shr="6" type="uint" variants="A6XX"/>
++	<reg64 offset="0xb4c8" name="TPL1_A2D_SRC_TEXTURE_BASE_2" type="address" align="16" variants="A6XX"/>
+ 
+-	<reg64 offset="0xb2c5" name="SP_PS_2D_SRC_PLANE1" type="address" align="16" variants="A7XX-"/>
+-	<reg32 offset="0xb2c7" name="SP_PS_2D_SRC_PLANE_PITCH" low="0" high="11" shr="6" type="uint" variants="A7XX-"/>
+-	<reg64 offset="0xb2c8" name="SP_PS_2D_SRC_PLANE2" type="address" align="16" variants="A7XX-"/>
++	<reg64 offset="0xb2c5" name="TPL1_A2D_SRC_TEXTURE_BASE_1" type="address" align="16" variants="A7XX-"/>
++	<reg32 offset="0xb2c7" name="TPL1_A2D_SRC_TEXTURE_PITCH_1" low="0" high="11" shr="6" type="uint" variants="A7XX-"/>
++	<reg64 offset="0xb2c8" name="TPL1_A2D_SRC_TEXTURE_BASE_2" type="address" align="16" variants="A7XX-"/>
+ 
+-	<reg64 offset="0xb4ca" name="SP_PS_2D_SRC_FLAGS" type="address" align="16" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb4cc" name="SP_PS_2D_SRC_FLAGS_PITCH" low="0" high="7" shr="6" type="uint" variants="A6XX" usage="rp_blit"/>
++	<reg64 offset="0xb4ca" name="TPL1_A2D_SRC_TEXTURE_FLAG_BASE" type="address" align="16" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb4cc" name="TPL1_A2D_SRC_TEXTURE_FLAG_PITCH" low="0" high="7" shr="6" type="uint" variants="A6XX" usage="rp_blit"/>
+ 
+-	<reg64 offset="0xb2ca" name="SP_PS_2D_SRC_FLAGS" type="address" align="16" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xb2cc" name="SP_PS_2D_SRC_FLAGS_PITCH" low="0" high="7" shr="6" type="uint" variants="A7XX-" usage="rp_blit"/>
++	<reg64 offset="0xb2ca" name="TPL1_A2D_SRC_TEXTURE_FLAG_BASE" type="address" align="16" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xb2cc" name="TPL1_A2D_SRC_TEXTURE_FLAG_PITCH" low="0" high="7" shr="6" type="uint" variants="A7XX-" usage="rp_blit"/>
+ 
+ 	<reg32 offset="0xb4cd" name="SP_PS_UNKNOWN_B4CD" low="6" high="31" variants="A6XX"/>
+ 	<reg32 offset="0xb4ce" name="SP_PS_UNKNOWN_B4CE" low="0" high="31" variants="A6XX"/>
+@@ -5383,8 +3266,12 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0xb2ce" name="SP_PS_UNKNOWN_B4CE" low="0" high="31" variants="A7XX"/>
+ 	<reg32 offset="0xb2cf" name="SP_PS_UNKNOWN_B4CF" low="0" high="30" variants="A7XX"/>
+ 	<reg32 offset="0xb2d0" name="SP_PS_UNKNOWN_B4D0" low="0" high="29" variants="A7XX"/>
+-	<reg32 offset="0xb2d1" name="SP_PS_2D_WINDOW_OFFSET" type="a6xx_reg_xy" variants="A7XX"/>
+-	<reg32 offset="0xb2d2" name="SP_PS_UNKNOWN_B2D2" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xb2d1" name="TPL1_A2D_WINDOW_OFFSET" type="a6xx_reg_xy" variants="A7XX"/>
++	<reg32 offset="0xb2d2" name="TPL1_A2D_BLT_CNTL" variants="A7XX-" usage="rp_blit">
++		<bitfield name="RAW_COPY" pos="0" type="boolean"/>
++		<bitfield name="START_OFFSET_TEXELS" low="16" high="21"/>
++		<bitfield name="TYPE" low="29" high="31" type="a6xx_tex_type"/>
++	</reg32>
+ 	<reg32 offset="0xab21" name="SP_WINDOW_OFFSET" type="a6xx_reg_xy" variants="A7XX-" usage="rp_blit"/>
+ 
+ 	<!-- always 0x100000 or 0x1000000? -->
+@@ -5422,34 +3309,44 @@ to upconvert to 32b float internally?
+ 
+ 	<!-- TODO: 4 more perfcntr sel at 0xb620 ? -->
+ 
+-	<bitset name="a6xx_hlsq_xs_cntl" inline="yes">
++	<bitset name="a6xx_xs_const_config" inline="yes">
+ 		<bitfield name="CONSTLEN" low="0" high="7" shr="2" type="uint"/>
+ 		<bitfield name="ENABLED" pos="8" type="boolean"/>
+ 		<bitfield name="READ_IMM_SHARED_CONSTS" pos="9" type="boolean" variants="A7XX-"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0xb800" name="HLSQ_VS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb801" name="HLSQ_HS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb802" name="HLSQ_DS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb803" name="HLSQ_GS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb800" name="SP_VS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb801" name="SP_HS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb802" name="SP_DS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb803" name="SP_GS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A6XX" usage="rp_blit"/>
+ 
+-	<reg32 offset="0xa827" name="HLSQ_VS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa83f" name="HLSQ_HS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa867" name="HLSQ_DS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa898" name="HLSQ_GS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa827" name="SP_VS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa83f" name="SP_HS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa867" name="SP_DS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa898" name="SP_GS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A7XX-" usage="rp_blit"/>
+ 
+-	<reg32 offset="0xa9aa" name="HLSQ_FS_UNKNOWN_A9AA" variants="A7XX-" usage="rp_blit">
+-		<!-- Tentatively named, appears to disable consts being loaded via CP_LOAD_STATE6_FRAG -->
+-		<bitfield name="CONSTS_LOAD_DISABLE" pos="0" type="boolean"/>
++	<reg32 offset="0xa9aa" name="SP_RENDER_CNTL" variants="A7XX-" usage="rp_blit">
++		<bitfield name="FS_DISABLE" pos="0" type="boolean"/>
+ 	</reg32>
+ 
+-	<!-- Always 0 -->
+-	<reg32 offset="0xa9ac" name="HLSQ_UNKNOWN_A9AC" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xa9ac" name="SP_DITHER_CNTL" variants="A7XX-" usage="cmd">
++		<bitfield name="DITHER_MODE_MRT0" low="0"  high="1"  type="adreno_rb_dither_mode"/>
++		<bitfield name="DITHER_MODE_MRT1" low="2"  high="3"  type="adreno_rb_dither_mode"/>
++		<bitfield name="DITHER_MODE_MRT2" low="4"  high="5"  type="adreno_rb_dither_mode"/>
++		<bitfield name="DITHER_MODE_MRT3" low="6"  high="7"  type="adreno_rb_dither_mode"/>
++		<bitfield name="DITHER_MODE_MRT4" low="8"  high="9"  type="adreno_rb_dither_mode"/>
++		<bitfield name="DITHER_MODE_MRT5" low="10" high="11" type="adreno_rb_dither_mode"/>
++		<bitfield name="DITHER_MODE_MRT6" low="12" high="13" type="adreno_rb_dither_mode"/>
++		<bitfield name="DITHER_MODE_MRT7" low="14" high="15" type="adreno_rb_dither_mode"/>
++	</reg32>
+ 
+-	<!-- Used in VK_KHR_fragment_shading_rate -->
+-	<reg32 offset="0xa9ad" name="HLSQ_UNKNOWN_A9AD" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xa9ad" name="SP_VRS_CONFIG" variants="A7XX-" usage="rp_blit">
++		<bitfield name="PIPELINE_FSR_ENABLE" pos="0" type="boolean"/>
++		<bitfield name="ATTACHMENT_FSR_ENABLE" pos="1" type="boolean"/>
++		<bitfield name="PRIMITIVE_FSR_ENABLE" pos="3" type="boolean"/>
++	</reg32>
+ 
+-	<reg32 offset="0xa9ae" name="HLSQ_UNKNOWN_A9AE" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9ae" name="SP_PS_CNTL_1" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="SYSVAL_REGS_COUNT" low="0" high="7" type="uint"/>
+ 		<!-- UNK8 is set on a730/a740 -->
+ 		<bitfield name="UNK8" pos="8" type="boolean"/>
+@@ -5462,94 +3359,94 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0xb823" name="HLSQ_LOAD_STATE_GEOM_DATA"/>
+ 
+ 
+-	<bitset name="a6xx_hlsq_fs_cntl_0" inline="yes">
++	<bitset name="a6xx_sp_ps_wave_cntl" inline="yes">
+ 		<!-- must match SP_FS_CTRL -->
+ 		<bitfield name="THREADSIZE" pos="0" type="a6xx_threadsize"/>
+ 		<bitfield name="VARYINGS" pos="1" type="boolean"/>
+ 		<bitfield name="UNK2" low="2" high="11"/>
+ 	</bitset>
+-	<bitset name="a6xx_hlsq_control_3_reg" inline="yes">
++	<bitset name="a6xx_sp_reg_prog_id_1" inline="yes">
+ 		<!-- register loaded with position (bary.f) -->
+ 		<bitfield name="IJ_PERSP_PIXEL" low="0" high="7" type="a3xx_regid"/>
+ 		<bitfield name="IJ_LINEAR_PIXEL" low="8" high="15" type="a3xx_regid"/>
+ 		<bitfield name="IJ_PERSP_CENTROID" low="16" high="23" type="a3xx_regid"/>
+ 		<bitfield name="IJ_LINEAR_CENTROID" low="24" high="31" type="a3xx_regid"/>
+ 	</bitset>
+-	<bitset name="a6xx_hlsq_control_4_reg" inline="yes">
++	<bitset name="a6xx_sp_reg_prog_id_2" inline="yes">
+ 		<bitfield name="IJ_PERSP_SAMPLE" low="0" high="7" type="a3xx_regid"/>
+ 		<bitfield name="IJ_LINEAR_SAMPLE" low="8" high="15" type="a3xx_regid"/>
+ 		<bitfield name="XYCOORDREGID" low="16" high="23" type="a3xx_regid"/>
+ 		<bitfield name="ZWCOORDREGID" low="24" high="31" type="a3xx_regid"/>
+ 	</bitset>
+-	<bitset name="a6xx_hlsq_control_5_reg" inline="yes">
++	<bitset name="a6xx_sp_reg_prog_id_3" inline="yes">
+ 		<bitfield name="LINELENGTHREGID" low="0" high="7" type="a3xx_regid"/>
+ 		<bitfield name="FOVEATIONQUALITYREGID" low="8" high="15" type="a3xx_regid"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0xb980" type="a6xx_hlsq_fs_cntl_0" name="HLSQ_FS_CNTL_0" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb980" type="a6xx_sp_ps_wave_cntl" name="SP_PS_WAVE_CNTL" variants="A6XX" usage="rp_blit"/>
+ 	<reg32 offset="0xb981" name="HLSQ_UNKNOWN_B981" pos="0" type="boolean" variants="A6XX"/> <!-- never used by blob -->
+-	<reg32 offset="0xb982" name="HLSQ_CONTROL_1_REG" low="0" high="2" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb982" name="SP_LB_PARAM_LIMIT" low="0" high="2" variants="A6XX" usage="rp_blit">
+ 		<!-- Sets the maximum number of primitives allowed in one FS wave minus one, similarly to the
+ 				 A3xx field, except that it's not necessary to set it to anything but the maximum, since
+ 				 the hardware will simply emit smaller waves when it runs out of space.	-->
+ 		<bitfield name="PRIMALLOCTHRESHOLD" low="0" high="2" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xb983" name="HLSQ_CONTROL_2_REG" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb983" name="SP_REG_PROG_ID_0" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="FACEREGID" low="0" high="7" type="a3xx_regid"/>
+ 		<!-- SAMPLEID is loaded into a half-precision register: -->
+ 		<bitfield name="SAMPLEID" low="8" high="15" type="a3xx_regid"/>
+ 		<bitfield name="SAMPLEMASK" low="16" high="23" type="a3xx_regid"/>
+ 		<bitfield name="CENTERRHW" low="24" high="31" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xb984" type="a6xx_hlsq_control_3_reg" name="HLSQ_CONTROL_3_REG" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb985" type="a6xx_hlsq_control_4_reg" name="HLSQ_CONTROL_4_REG" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb986" type="a6xx_hlsq_control_5_reg" name="HLSQ_CONTROL_5_REG" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb987" name="HLSQ_CS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A6XX" usage="cmd"/>
+-	<reg32 offset="0xa9c6" type="a6xx_hlsq_fs_cntl_0" name="HLSQ_FS_CNTL_0" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa9c7" name="HLSQ_CONTROL_1_REG" low="0" high="2" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xb984" type="a6xx_sp_reg_prog_id_1" name="SP_REG_PROG_ID_1" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb985" type="a6xx_sp_reg_prog_id_2" name="SP_REG_PROG_ID_2" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb986" type="a6xx_sp_reg_prog_id_3" name="SP_REG_PROG_ID_3" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb987" name="SP_CS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A6XX" usage="cmd"/>
++	<reg32 offset="0xa9c6" type="a6xx_sp_ps_wave_cntl" name="SP_PS_WAVE_CNTL" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa9c7" name="SP_LB_PARAM_LIMIT" low="0" high="2" variants="A7XX-" usage="rp_blit">
+ 			<bitfield name="PRIMALLOCTHRESHOLD" low="0" high="2" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xa9c8" name="HLSQ_CONTROL_2_REG" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9c8" name="SP_REG_PROG_ID_0" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="FACEREGID" low="0" high="7" type="a3xx_regid"/>
+ 		<!-- SAMPLEID is loaded into a half-precision register: -->
+ 		<bitfield name="SAMPLEID" low="8" high="15" type="a3xx_regid"/>
+ 		<bitfield name="SAMPLEMASK" low="16" high="23" type="a3xx_regid"/>
+ 		<bitfield name="CENTERRHW" low="24" high="31" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xa9c9" type="a6xx_hlsq_control_3_reg" name="HLSQ_CONTROL_3_REG" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa9ca" type="a6xx_hlsq_control_4_reg" name="HLSQ_CONTROL_4_REG" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa9cb" type="a6xx_hlsq_control_5_reg" name="HLSQ_CONTROL_5_REG" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa9cd" name="HLSQ_CS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xa9c9" type="a6xx_sp_reg_prog_id_1" name="SP_REG_PROG_ID_1" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa9ca" type="a6xx_sp_reg_prog_id_2" name="SP_REG_PROG_ID_2" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa9cb" type="a6xx_sp_reg_prog_id_3" name="SP_REG_PROG_ID_3" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa9cd" name="SP_CS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A7XX-" usage="cmd"/>
+ 
+ 	<!-- TODO: what does KERNELDIM do exactly (blob sets it differently from turnip) -->
+-	<reg32 offset="0xb990" name="HLSQ_CS_NDRANGE_0" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb990" name="SP_CS_NDRANGE_0" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="KERNELDIM" low="0" high="1" type="uint"/>
+ 		<!-- localsize is value minus one: -->
+ 		<bitfield name="LOCALSIZEX" low="2" high="11" type="uint"/>
+ 		<bitfield name="LOCALSIZEY" low="12" high="21" type="uint"/>
+ 		<bitfield name="LOCALSIZEZ" low="22" high="31" type="uint"/>
+ 	</reg32>
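++	<!--
++	Example of the minus-one encoding (made-up sizes): a 64x4x1
++	workgroup is programmed as LOCALSIZEX=63, LOCALSIZEY=3,
++	LOCALSIZEZ=0.
++	 -->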
+-	<reg32 offset="0xb991" name="HLSQ_CS_NDRANGE_1" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb991" name="SP_CS_NDRANGE_1" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="GLOBALSIZE_X" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xb992" name="HLSQ_CS_NDRANGE_2" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb992" name="SP_CS_NDRANGE_2" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="GLOBALOFF_X" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xb993" name="HLSQ_CS_NDRANGE_3" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb993" name="SP_CS_NDRANGE_3" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="GLOBALSIZE_Y" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xb994" name="HLSQ_CS_NDRANGE_4" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb994" name="SP_CS_NDRANGE_4" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="GLOBALOFF_Y" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xb995" name="HLSQ_CS_NDRANGE_5" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb995" name="SP_CS_NDRANGE_5" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="GLOBALSIZE_Z" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xb996" name="HLSQ_CS_NDRANGE_6" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb996" name="SP_CS_NDRANGE_6" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="GLOBALOFF_Z" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xb997" name="HLSQ_CS_CNTL_0" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb997" name="SP_CS_CONST_CONFIG_0" variants="A6XX" usage="rp_blit">
+ 		<!-- these are all vec3. first 3 need to be high regs
+-		     WGSIZECONSTID is the local size (from HLSQ_CS_NDRANGE_0)
++		     WGSIZECONSTID is the local size (from SP_CS_NDRANGE_0)
+ 		     WGOFFSETCONSTID is WGIDCONSTID*WGSIZECONSTID
+ 		-->
+ 		<bitfield name="WGIDCONSTID" low="0" high="7" type="a3xx_regid"/>
+@@ -5557,7 +3454,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="WGOFFSETCONSTID" low="16" high="23" type="a3xx_regid"/>
+ 		<bitfield name="LOCALIDREGID" low="24" high="31" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xb998" name="HLSQ_CS_CNTL_1" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb998" name="SP_CS_WGE_CNTL" variants="A6XX" usage="rp_blit">
+ 		<!-- gl_LocalInvocationIndex -->
+ 		<bitfield name="LINEARLOCALIDREGID" low="0" high="7" type="a3xx_regid"/>
+ 		<!-- a650 has 6 "SP cores" (but 3 "SP"). this makes it use only
+@@ -5569,40 +3466,40 @@ to upconvert to 32b float internally?
+ 		<bitfield name="THREADSIZE_SCALAR" pos="10" type="boolean"/>
+ 	</reg32>
+ 	<!--note: vulkan blob doesn't use these -->
+-	<reg32 offset="0xb999" name="HLSQ_CS_KERNEL_GROUP_X" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb99a" name="HLSQ_CS_KERNEL_GROUP_Y" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb99b" name="HLSQ_CS_KERNEL_GROUP_Z" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb999" name="SP_CS_KERNEL_GROUP_X" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb99a" name="SP_CS_KERNEL_GROUP_Y" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb99b" name="SP_CS_KERNEL_GROUP_Z" variants="A6XX" usage="rp_blit"/>
+ 
+ 	<!-- TODO: what does KERNELDIM do exactly (blob sets it differently from turnip) -->
+-	<reg32 offset="0xa9d4" name="HLSQ_CS_NDRANGE_0" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9d4" name="SP_CS_NDRANGE_0" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="KERNELDIM" low="0" high="1" type="uint"/>
+ 		<!-- localsize is value minus one: -->
+ 		<bitfield name="LOCALSIZEX" low="2" high="11" type="uint"/>
+ 		<bitfield name="LOCALSIZEY" low="12" high="21" type="uint"/>
+ 		<bitfield name="LOCALSIZEZ" low="22" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xa9d5" name="HLSQ_CS_NDRANGE_1" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9d5" name="SP_CS_NDRANGE_1" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="GLOBALSIZE_X" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xa9d6" name="HLSQ_CS_NDRANGE_2" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9d6" name="SP_CS_NDRANGE_2" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="GLOBALOFF_X" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xa9d7" name="HLSQ_CS_NDRANGE_3" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9d7" name="SP_CS_NDRANGE_3" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="GLOBALSIZE_Y" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xa9d8" name="HLSQ_CS_NDRANGE_4" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9d8" name="SP_CS_NDRANGE_4" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="GLOBALOFF_Y" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xa9d9" name="HLSQ_CS_NDRANGE_5" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9d9" name="SP_CS_NDRANGE_5" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="GLOBALSIZE_Z" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xa9da" name="HLSQ_CS_NDRANGE_6" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9da" name="SP_CS_NDRANGE_6" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="GLOBALOFF_Z" low="0" high="31" type="uint"/>
+ 	</reg32>
+ 	<!--note: vulkan blob doesn't use these -->
+-	<reg32 offset="0xa9dc" name="HLSQ_CS_KERNEL_GROUP_X" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa9dd" name="HLSQ_CS_KERNEL_GROUP_Y" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa9de" name="HLSQ_CS_KERNEL_GROUP_Z" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa9dc" name="SP_CS_KERNEL_GROUP_X" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa9dd" name="SP_CS_KERNEL_GROUP_Y" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa9de" name="SP_CS_KERNEL_GROUP_Z" variants="A7XX-" usage="rp_blit"/>
+ 
+ 	<enum name="a7xx_cs_yalign">
+ 		<value name="CS_YALIGN_1" value="8"/>
+@@ -5611,19 +3508,29 @@ to upconvert to 32b float internally?
+ 		<value name="CS_YALIGN_8" value="1"/>
+ 	</enum>
+ 
+-	<reg32 offset="0xa9db" name="HLSQ_CS_CNTL_1" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9db" name="SP_CS_WGE_CNTL" variants="A7XX-" usage="rp_blit">
+ 		<!-- gl_LocalInvocationIndex -->
+ 		<bitfield name="LINEARLOCALIDREGID" low="0" high="7" type="a3xx_regid"/>
+ 		<!-- Must match SP_CS_CTRL -->
+ 		<bitfield name="THREADSIZE" pos="9" type="a6xx_threadsize"/>
+-		<bitfield name="UNK11" pos="11" type="boolean"/>
+-		<bitfield name="UNK22" pos="22" type="boolean"/>
+-		<bitfield name="UNK26" pos="26" type="boolean"/>
+-		<bitfield name="YALIGN" low="27" high="30" type="a7xx_cs_yalign"/>
++		<doc>
++			When this bit is enabled, the dispatch order interleaves
++			the z coordinate instead of launching all workgroups
++			with z=0, then all with z=1 and so on.
++		</doc>
++		<bitfield name="WORKGROUPRASTORDERZFIRSTEN" pos="11" type="boolean"/>
++		<doc>
++			When both fields are non-zero, the dispatcher uses
++			these tile sizes to launch workgroups in a tiled
++			manner whenever the x and y workgroup counts are
++			both greater than 1.
++		</doc>
++		<bitfield name="WGTILEWIDTH" low="20" high="25"/>
++		<bitfield name="WGTILEHEIGHT" low="26" high="31"/>
+ 	</reg32>
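++	<!--
++	Worked example for the tile fields (hypothetical values): with
++	WGTILEWIDTH=4 and WGTILEHEIGHT=2 on an 8x4 grid, the first eight
++	workgroups launched cover x=0..3, y=0..1 before the dispatcher
++	moves to the next 4x2 tile, rather than the whole first row.
++	 -->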
+ 
+-	<reg32 offset="0xa9df" name="HLSQ_CS_LOCAL_SIZE" variants="A7XX-" usage="cmd">
+-		<!-- localsize is value minus one: -->
++	<reg32 offset="0xa9df" name="SP_CS_NDRANGE_7" variants="A7XX-" usage="cmd">
++		<!-- The size of the last workgroup. localsize is value minus one: -->
+ 		<bitfield name="LOCALSIZEX" low="2" high="11" type="uint"/>
+ 		<bitfield name="LOCALSIZEY" low="12" high="21" type="uint"/>
+ 		<bitfield name="LOCALSIZEZ" low="22" high="31" type="uint"/>
+@@ -5641,29 +3548,27 @@ to upconvert to 32b float internally?
+ 		</reg64>
+ 	</array>
+ 
+-	<!-- new in a6xx gen4, mirror of SP_CS_UNKNOWN_A9B1? -->
+-	<reg32 offset="0xb9d0" name="HLSQ_CS_UNKNOWN_B9D0" variants="A6XX" usage="cmd">
++	<!-- new in a6xx gen4, mirror of SP_CS_CNTL_1? -->
++	<reg32 offset="0xb9d0" name="HLSQ_CS_CTRL_REG1" variants="A6XX" usage="cmd">
+ 		<bitfield name="SHARED_SIZE" low="0" high="4" type="uint"/>
+-		<bitfield name="UNK5" pos="5" type="boolean"/>
+-		<!-- always 1 ? -->
+-		<bitfield name="UNK6" pos="6" type="boolean"/>
++		<bitfield name="CONSTANTRAMMODE" low="5" high="6" type="a6xx_const_ram_mode"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xbb00" name="HLSQ_DRAW_CMD" variants="A6XX">
++	<reg32 offset="0xbb00" name="SP_DRAW_INITIATOR" variants="A6XX">
+ 		<bitfield name="STATE_ID" low="0" high="7"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xbb01" name="HLSQ_DISPATCH_CMD" variants="A6XX">
++	<reg32 offset="0xbb01" name="SP_KERNEL_INITIATOR" variants="A6XX">
+ 		<bitfield name="STATE_ID" low="0" high="7"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xbb02" name="HLSQ_EVENT_CMD" variants="A6XX">
++	<reg32 offset="0xbb02" name="SP_EVENT_INITIATOR" variants="A6XX">
+ 		<!-- I think only the low bit is actually used? -->
+ 		<bitfield name="STATE_ID" low="16" high="23"/>
+ 		<bitfield name="EVENT" low="0" high="6" type="vgt_event_type"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xbb08" name="HLSQ_INVALIDATE_CMD" variants="A6XX" usage="cmd">
++	<reg32 offset="0xbb08" name="SP_UPDATE_CNTL" variants="A6XX" usage="cmd">
+ 		<doc>
+ 			This register clears pending loads queued up by
+ 			CP_LOAD_STATE6. Each bit resets a particular kind(s) of
+@@ -5678,8 +3583,8 @@ to upconvert to 32b float internally?
+ 		<bitfield name="FS_STATE" pos="4" type="boolean"/>
+ 		<bitfield name="CS_STATE" pos="5" type="boolean"/>
+ 
+-		<bitfield name="CS_IBO" pos="6" type="boolean"/>
+-		<bitfield name="GFX_IBO" pos="7" type="boolean"/>
++		<bitfield name="CS_UAV" pos="6" type="boolean"/>
++		<bitfield name="GFX_UAV" pos="7" type="boolean"/>
+ 
+ 		<!-- Note: these only do something when HLSQ_SHARED_CONSTS is set to 1 -->
+ 		<bitfield name="CS_SHARED_CONST" pos="19" type="boolean"/>
+@@ -5690,20 +3595,20 @@ to upconvert to 32b float internally?
+ 		<bitfield name="GFX_BINDLESS" low="14" high="18" type="hex"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xab1c" name="HLSQ_DRAW_CMD" variants="A7XX-">
++	<reg32 offset="0xab1c" name="SP_DRAW_INITIATOR" variants="A7XX-">
+ 		<bitfield name="STATE_ID" low="0" high="7"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xab1d" name="HLSQ_DISPATCH_CMD" variants="A7XX-">
++	<reg32 offset="0xab1d" name="SP_KERNEL_INITIATOR" variants="A7XX-">
+ 		<bitfield name="STATE_ID" low="0" high="7"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xab1e" name="HLSQ_EVENT_CMD" variants="A7XX-">
++	<reg32 offset="0xab1e" name="SP_EVENT_INITIATOR" variants="A7XX-">
+ 		<bitfield name="STATE_ID" low="16" high="23"/>
+ 		<bitfield name="EVENT" low="0" high="6" type="vgt_event_type"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xab1f" name="HLSQ_INVALIDATE_CMD" variants="A7XX-" usage="cmd">
++	<reg32 offset="0xab1f" name="SP_UPDATE_CNTL" variants="A7XX-" usage="cmd">
+ 		<doc>
+ 			This register clears pending loads queued up by
+ 			CP_LOAD_STATE6. Each bit resets a particular kind(s) of
+@@ -5718,18 +3623,18 @@ to upconvert to 32b float internally?
+ 		<bitfield name="FS_STATE" pos="4" type="boolean"/>
+ 		<bitfield name="CS_STATE" pos="5" type="boolean"/>
+ 
+-		<bitfield name="CS_IBO" pos="6" type="boolean"/>
+-		<bitfield name="GFX_IBO" pos="7" type="boolean"/>
++		<bitfield name="CS_UAV" pos="6" type="boolean"/>
++		<bitfield name="GFX_UAV" pos="7" type="boolean"/>
+ 
+ 		<!-- SS6_BINDLESS: one bit per bindless base -->
+ 		<bitfield name="CS_BINDLESS" low="9" high="16" type="hex"/>
+ 		<bitfield name="GFX_BINDLESS" low="17" high="24" type="hex"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xbb10" name="HLSQ_FS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xab03" name="HLSQ_FS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xbb10" name="SP_PS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xab03" name="SP_PS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A7XX-" usage="rp_blit"/>
+ 
+-	<array offset="0xab40" name="HLSQ_SHARED_CONSTS_IMM" stride="1" length="64" variants="A7XX-"/>
++	<array offset="0xab40" name="SP_SHARED_CONSTANT_GFX_0" stride="1" length="64" variants="A7XX-"/>
+ 
+ 	<reg32 offset="0xbb11" name="HLSQ_SHARED_CONSTS" variants="A6XX" usage="cmd">
+ 		<doc>
+@@ -5738,7 +3643,7 @@ to upconvert to 32b float internally?
+ 			const pool and 16 in the geometry const pool although
+ 			only 8 are actually used (why?) and they are mapped to
+ 			c504-c511 in each stage. Both VS and FS shared consts
+-			are written using ST6_CONSTANTS/SB6_IBO, so that both
++			are written using ST6_CONSTANTS/SB6_UAV, so that both
+ 			the geometry and FS shared consts can be written at once
+ 			by using CP_LOAD_STATE6 rather than
+ 			CP_LOAD_STATE6_FRAG/CP_LOAD_STATE6_GEOM. In addition
+@@ -5747,13 +3652,13 @@ to upconvert to 32b float internally?
+ 
+ 			There is also a separate shared constant pool for CS,
+ 			which is loaded through CP_LOAD_STATE6_FRAG with
+-			ST6_UBO/ST6_IBO. However the only real difference for CS
++			ST6_UBO/ST6_UAV. However the only real difference for CS
+ 			is the dword units.
+ 		</doc>
+ 		<bitfield name="ENABLE" pos="0" type="boolean"/>
+ 	</reg32>
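++	<!--
++	E.g. with ENABLE set, shared const slot n (n = 0..7) shows up as
++	c(504+n) in each stage that reads it, per the mapping described
++	above.
++	 -->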
+ 
+-	<!-- mirror of SP_BINDLESS_BASE -->
++	<!-- mirror of SP_GFX_BINDLESS_BASE -->
+ 	<array offset="0xbb20" name="HLSQ_BINDLESS_BASE" stride="2" length="5" variants="A6XX" usage="cmd">
+ 		<reg64 offset="0" name="DESCRIPTOR">
+ 			<bitfield name="DESC_SIZE" low="0" high="1" type="a6xx_bindless_descriptor_size"/>
+@@ -5788,10 +3693,10 @@ to upconvert to 32b float internally?
+ 		sequence. The sequence used internally for an event looks like:
+ 		- write EVENT_CMD pipe register
+ 		- write CP_EVENT_START
+-		- write HLSQ_EVENT_CMD with event or HLSQ_DRAW_CMD
+-		- write PC_EVENT_CMD with event or PC_DRAW_CMD
+-		- write HLSQ_EVENT_CMD(CONTEXT_DONE)
+-		- write PC_EVENT_CMD(CONTEXT_DONE)
++		- write SP_EVENT_INITIATOR with event or SP_DRAW_INITIATOR
++		- write PC_EVENT_INITIATOR with event or PC_DRAW_INITIATOR
++		- write SP_EVENT_INITIATOR(CONTEXT_DONE)
++		- write PC_EVENT_INITIATOR(CONTEXT_DONE)
+ 		- write CP_EVENT_END
+ 		Writing to CP_EVENT_END seems to actually trigger the context roll
+ 	-->
+@@ -5809,193 +3714,6 @@ to upconvert to 32b float internally?
+ 	</reg32>
+ </domain>
+ 
+-<!-- Seems basically the same as a5xx, maybe move to common.xml.. -->
+-<domain name="A6XX_TEX_SAMP" width="32">
+-	<doc>Texture sampler dwords</doc>
+-	<enum name="a6xx_tex_filter"> <!-- same as a4xx? -->
+-		<value name="A6XX_TEX_NEAREST" value="0"/>
+-		<value name="A6XX_TEX_LINEAR" value="1"/>
+-		<value name="A6XX_TEX_ANISO" value="2"/>
+-		<value name="A6XX_TEX_CUBIC" value="3"/> <!-- a650 only -->
+-	</enum>
+-	<enum name="a6xx_tex_clamp"> <!-- same as a4xx? -->
+-		<value name="A6XX_TEX_REPEAT" value="0"/>
+-		<value name="A6XX_TEX_CLAMP_TO_EDGE" value="1"/>
+-		<value name="A6XX_TEX_MIRROR_REPEAT" value="2"/>
+-		<value name="A6XX_TEX_CLAMP_TO_BORDER" value="3"/>
+-		<value name="A6XX_TEX_MIRROR_CLAMP" value="4"/>
+-	</enum>
+-	<enum name="a6xx_tex_aniso"> <!-- same as a4xx? -->
+-		<value name="A6XX_TEX_ANISO_1" value="0"/>
+-		<value name="A6XX_TEX_ANISO_2" value="1"/>
+-		<value name="A6XX_TEX_ANISO_4" value="2"/>
+-		<value name="A6XX_TEX_ANISO_8" value="3"/>
+-		<value name="A6XX_TEX_ANISO_16" value="4"/>
+-	</enum>
+-	<enum name="a6xx_reduction_mode">
+-		<value name="A6XX_REDUCTION_MODE_AVERAGE" value="0"/>
+-		<value name="A6XX_REDUCTION_MODE_MIN" value="1"/>
+-		<value name="A6XX_REDUCTION_MODE_MAX" value="2"/>
+-	</enum>
+-
+-	<reg32 offset="0" name="0">
+-		<bitfield name="MIPFILTER_LINEAR_NEAR" pos="0" type="boolean"/>
+-		<bitfield name="XY_MAG" low="1" high="2" type="a6xx_tex_filter"/>
+-		<bitfield name="XY_MIN" low="3" high="4" type="a6xx_tex_filter"/>
+-		<bitfield name="WRAP_S" low="5" high="7" type="a6xx_tex_clamp"/>
+-		<bitfield name="WRAP_T" low="8" high="10" type="a6xx_tex_clamp"/>
+-		<bitfield name="WRAP_R" low="11" high="13" type="a6xx_tex_clamp"/>
+-		<bitfield name="ANISO" low="14" high="16" type="a6xx_tex_aniso"/>
+-		<bitfield name="LOD_BIAS" low="19" high="31" type="fixed" radix="8"/><!-- no idea how many bits for real -->
+-	</reg32>
+-	<reg32 offset="1" name="1">
+-		<bitfield name="CLAMPENABLE" pos="0" type="boolean">
+-			<doc>
+-				clamp result to [0, 1] if the format is unorm or
+-				[-1, 1] if the format is snorm, *after*
+-				filtering. Has no effect for other formats.
+-			</doc>
+-		</bitfield>
+-		<bitfield name="COMPARE_FUNC" low="1" high="3" type="adreno_compare_func"/>
+-		<bitfield name="CUBEMAPSEAMLESSFILTOFF" pos="4" type="boolean"/>
+-		<bitfield name="UNNORM_COORDS" pos="5" type="boolean"/>
+-		<bitfield name="MIPFILTER_LINEAR_FAR" pos="6" type="boolean"/>
+-		<bitfield name="MAX_LOD" low="8" high="19" type="ufixed" radix="8"/>
+-		<bitfield name="MIN_LOD" low="20" high="31" type="ufixed" radix="8"/>
+-	</reg32>
+-	<reg32 offset="2" name="2">
+-		<bitfield name="REDUCTION_MODE" low="0" high="1" type="a6xx_reduction_mode"/>
+-		<bitfield name="CHROMA_LINEAR" pos="5" type="boolean"/>
+-		<bitfield name="BCOLOR" low="7" high="31"/>
+-	</reg32>
+-	<reg32 offset="3" name="3"/>
+-</domain>
+-
+-<domain name="A6XX_TEX_CONST" width="32" varset="chip">
+-	<doc>Texture constant dwords</doc>
+-	<enum name="a6xx_tex_swiz"> <!-- same as a4xx? -->
+-		<value name="A6XX_TEX_X" value="0"/>
+-		<value name="A6XX_TEX_Y" value="1"/>
+-		<value name="A6XX_TEX_Z" value="2"/>
+-		<value name="A6XX_TEX_W" value="3"/>
+-		<value name="A6XX_TEX_ZERO" value="4"/>
+-		<value name="A6XX_TEX_ONE" value="5"/>
+-	</enum>
+-	<enum name="a6xx_tex_type"> <!-- same as a4xx? -->
+-		<value name="A6XX_TEX_1D" value="0"/>
+-		<value name="A6XX_TEX_2D" value="1"/>
+-		<value name="A6XX_TEX_CUBE" value="2"/>
+-		<value name="A6XX_TEX_3D" value="3"/>
+-		<value name="A6XX_TEX_BUFFER" value="4"/>
+-	</enum>
+-	<reg32 offset="0" name="0">
+-		<bitfield name="TILE_MODE" low="0" high="1" type="a6xx_tile_mode"/>
+-		<bitfield name="SRGB" pos="2" type="boolean"/>
+-		<bitfield name="SWIZ_X" low="4" high="6" type="a6xx_tex_swiz"/>
+-		<bitfield name="SWIZ_Y" low="7" high="9" type="a6xx_tex_swiz"/>
+-		<bitfield name="SWIZ_Z" low="10" high="12" type="a6xx_tex_swiz"/>
+-		<bitfield name="SWIZ_W" low="13" high="15" type="a6xx_tex_swiz"/>
+-		<bitfield name="MIPLVLS" low="16" high="19" type="uint"/>
+-		<!-- overlaps with MIPLVLS -->
+-		<bitfield name="CHROMA_MIDPOINT_X" pos="16" type="boolean"/>
+-		<bitfield name="CHROMA_MIDPOINT_Y" pos="18" type="boolean"/>
+-		<bitfield name="SAMPLES" low="20" high="21" type="a3xx_msaa_samples"/>
+-		<bitfield name="FMT" low="22" high="29" type="a6xx_format"/>
+-		<!--
+-			Why is the swap needed in addition to SWIZ_*? The swap
+-			is performed before border color replacement, while the
+-			swizzle is applied after after it.
+-		-->
+-		<bitfield name="SWAP" low="30" high="31" type="a3xx_color_swap"/>
+-	</reg32>
+-	<reg32 offset="1" name="1">
+-		<bitfield name="WIDTH" low="0" high="14" type="uint"/>
+-		<bitfield name="HEIGHT" low="15" high="29" type="uint"/>
+-		<bitfield name="MUTABLEEN" pos="31" type="boolean" variants="A7XX-"/>
+-	</reg32>
+-	<reg32 offset="2" name="2">
+-		<!--
+-			These fields overlap PITCH, and are used instead of
+-			PITCH/PITCHALIGN when TYPE is A6XX_TEX_BUFFER.
+-		 -->
+-		<doc> probably for D3D structured UAVs, normally set to 1 </doc>
+-		<bitfield name="STRUCTSIZETEXELS" low="4" high="15" type="uint"/>
+-		<bitfield name="STARTOFFSETTEXELS" low="16" high="21" type="uint"/>
+-
+-		<!-- minimum pitch (for mipmap levels): log2(pitchalign / 64) -->
+-		<bitfield name="PITCHALIGN" low="0" high="3" type="uint"/>
+-		<doc>Pitch in bytes (so actually stride)</doc>
+-		<bitfield name="PITCH" low="7" high="28" type="uint"/>
+-		<bitfield name="TYPE" low="29" high="31" type="a6xx_tex_type"/>
+-	</reg32>
+-	<reg32 offset="3" name="3">
+-		<!--
+-		ARRAY_PITCH is basically LAYERSZ for the first mipmap level, and
+-		for 3d textures (laid out mipmap level first) MIN_LAYERSZ is the
+-		layer size at the point that it stops being reduced moving to
+-		higher (smaller) mipmap levels
+-		 -->
+-		<bitfield name="ARRAY_PITCH" low="0" high="22" shr="12" type="uint"/>
+-		<bitfield name="MIN_LAYERSZ" low="23" high="26" shr="12"/>
+-		<!--
+-		by default levels with w < 16 are linear
+-		TILE_ALL makes all levels have tiling
+-		seems required when using UBWC, since all levels have UBWC (can possibly be disabled?)
+-		 -->
+-		<bitfield name="TILE_ALL" pos="27" type="boolean"/>
+-		<bitfield name="FLAG" pos="28" type="boolean"/>
+-	</reg32>
+-	<!-- for 2-3 plane format, BASE is flag buffer address (if enabled)
+-	     the address of the non-flag base buffer is determined automatically,
+-	     and must follow the flag buffer
+-	 -->
+-	<reg32 offset="4" name="4">
+-		<bitfield name="BASE_LO" low="5" high="31" shr="5"/>
+-	</reg32>
+-	<reg32 offset="5" name="5">
+-		<bitfield name="BASE_HI" low="0" high="16"/>
+-		<bitfield name="DEPTH" low="17" high="29" type="uint"/>
+-	</reg32>
+-	<reg32 offset="6" name="6">
+-		<!-- overlaps with PLANE_PITCH -->
+-		<bitfield name="MIN_LOD_CLAMP" low="0" high="11" type="ufixed" radix="8"/>
+-		<!-- pitch for plane 2 / plane 3 -->
+-		<bitfield name="PLANE_PITCH" low="8" high="31" type="uint"/>
+-	</reg32>
+-	<!-- 7/8 is plane 2 address for planar formats -->
+-	<reg32 offset="7" name="7">
+-		<bitfield name="FLAG_LO" low="5" high="31" shr="5"/>
+-	</reg32>
+-	<reg32 offset="8" name="8">
+-		<bitfield name="FLAG_HI" low="0" high="16"/>
+-	</reg32>
+-	<!-- 9/10 is plane 3 address for planar formats -->
+-	<reg32 offset="9" name="9">
+-		<bitfield name="FLAG_BUFFER_ARRAY_PITCH" low="0" high="16" shr="4" type="uint"/>
+-	</reg32>
+-	<reg32 offset="10" name="10">
+-		<bitfield name="FLAG_BUFFER_PITCH" low="0" high="6" shr="6" type="uint"/>
+-		<!-- log2 size of the first level, required for mipmapping -->
+-		<bitfield name="FLAG_BUFFER_LOGW" low="8" high="11" type="uint"/>
+-		<bitfield name="FLAG_BUFFER_LOGH" low="12" high="15" type="uint"/>
+-	</reg32>
+-	<reg32 offset="11" name="11"/>
+-	<reg32 offset="12" name="12"/>
+-	<reg32 offset="13" name="13"/>
+-	<reg32 offset="14" name="14"/>
+-	<reg32 offset="15" name="15"/>
+-</domain>
+-
+-<domain name="A6XX_UBO" width="32">
+-	<reg32 offset="0" name="0">
+-		<bitfield name="BASE_LO" low="0" high="31"/>
+-	</reg32>
+-	<reg32 offset="1" name="1">
+-		<bitfield name="BASE_HI" low="0" high="16"/>
+-		<bitfield name="SIZE" low="17" high="31"/> <!-- size in vec4 (4xDWORD) units -->
+-	</reg32>
+-</domain>
+-
+ <domain name="A6XX_PDC" width="32">
+ 	<reg32 offset="0x1140" name="GPU_ENABLE_PDC"/>
+ 	<reg32 offset="0x1148" name="GPU_SEQ_START_ADDR"/>
+diff --git a/drivers/gpu/drm/msm/registers/adreno/a6xx_descriptors.xml b/drivers/gpu/drm/msm/registers/adreno/a6xx_descriptors.xml
+new file mode 100644
+index 00000000000000..307d43dda8a254
+--- /dev/null
++++ b/drivers/gpu/drm/msm/registers/adreno/a6xx_descriptors.xml
+@@ -0,0 +1,198 @@
++<?xml version="1.0" encoding="UTF-8"?>
++<database xmlns="http://nouveau.freedesktop.org/"
++xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
++xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
++<import file="freedreno_copyright.xml"/>
++<import file="adreno/adreno_common.xml"/>
++<import file="adreno/adreno_pm4.xml"/>
++<import file="adreno/a6xx_enums.xml"/>
++
++<domain name="A6XX_TEX_SAMP" width="32">
++	<doc>Texture sampler dwords</doc>
++	<enum name="a6xx_tex_filter"> <!-- same as a4xx? -->
++		<value name="A6XX_TEX_NEAREST" value="0"/>
++		<value name="A6XX_TEX_LINEAR" value="1"/>
++		<value name="A6XX_TEX_ANISO" value="2"/>
++		<value name="A6XX_TEX_CUBIC" value="3"/> <!-- a650 only -->
++	</enum>
++	<enum name="a6xx_tex_clamp"> <!-- same as a4xx? -->
++		<value name="A6XX_TEX_REPEAT" value="0"/>
++		<value name="A6XX_TEX_CLAMP_TO_EDGE" value="1"/>
++		<value name="A6XX_TEX_MIRROR_REPEAT" value="2"/>
++		<value name="A6XX_TEX_CLAMP_TO_BORDER" value="3"/>
++		<value name="A6XX_TEX_MIRROR_CLAMP" value="4"/>
++	</enum>
++	<enum name="a6xx_tex_aniso"> <!-- same as a4xx? -->
++		<value name="A6XX_TEX_ANISO_1" value="0"/>
++		<value name="A6XX_TEX_ANISO_2" value="1"/>
++		<value name="A6XX_TEX_ANISO_4" value="2"/>
++		<value name="A6XX_TEX_ANISO_8" value="3"/>
++		<value name="A6XX_TEX_ANISO_16" value="4"/>
++	</enum>
++	<enum name="a6xx_reduction_mode">
++		<value name="A6XX_REDUCTION_MODE_AVERAGE" value="0"/>
++		<value name="A6XX_REDUCTION_MODE_MIN" value="1"/>
++		<value name="A6XX_REDUCTION_MODE_MAX" value="2"/>
++	</enum>
++	<enum name="a6xx_fast_border_color">
++		<!--                           R B G A -->
++		<value name="A6XX_BORDER_COLOR_0_0_0_0" value="0"/>
++		<value name="A6XX_BORDER_COLOR_0_0_0_1" value="1"/>
++		<value name="A6XX_BORDER_COLOR_1_1_1_0" value="2"/>
++		<value name="A6XX_BORDER_COLOR_1_1_1_1" value="3"/>
++	</enum>
++
++	<reg32 offset="0" name="0">
++		<bitfield name="MIPFILTER_LINEAR_NEAR" pos="0" type="boolean"/>
++		<bitfield name="XY_MAG" low="1" high="2" type="a6xx_tex_filter"/>
++		<bitfield name="XY_MIN" low="3" high="4" type="a6xx_tex_filter"/>
++		<bitfield name="WRAP_S" low="5" high="7" type="a6xx_tex_clamp"/>
++		<bitfield name="WRAP_T" low="8" high="10" type="a6xx_tex_clamp"/>
++		<bitfield name="WRAP_R" low="11" high="13" type="a6xx_tex_clamp"/>
++		<bitfield name="ANISO" low="14" high="16" type="a6xx_tex_aniso"/>
++		<bitfield name="LOD_BIAS" low="19" high="31" type="fixed" radix="8"/><!-- no idea how many bits for real -->
++	</reg32>
++	<reg32 offset="1" name="1">
++		<bitfield name="CLAMPENABLE" pos="0" type="boolean">
++			<doc>
++				clamp result to [0, 1] if the format is unorm or
++				[-1, 1] if the format is snorm, *after*
++				filtering. Has no effect for other formats.
++			</doc>
++		</bitfield>
++		<bitfield name="COMPARE_FUNC" low="1" high="3" type="adreno_compare_func"/>
++		<bitfield name="CUBEMAPSEAMLESSFILTOFF" pos="4" type="boolean"/>
++		<bitfield name="UNNORM_COORDS" pos="5" type="boolean"/>
++		<bitfield name="MIPFILTER_LINEAR_FAR" pos="6" type="boolean"/>
++		<bitfield name="MAX_LOD" low="8" high="19" type="ufixed" radix="8"/>
++		<bitfield name="MIN_LOD" low="20" high="31" type="ufixed" radix="8"/>
++	</reg32>
++	<reg32 offset="2" name="2">
++		<bitfield name="REDUCTION_MODE" low="0" high="1" type="a6xx_reduction_mode"/>
++		<bitfield name="FASTBORDERCOLOR" low="2" high="3" type="a6xx_fast_border_color"/>
++		<bitfield name="FASTBORDERCOLOREN" pos="4" type="boolean"/>
++		<bitfield name="CHROMA_LINEAR" pos="5" type="boolean"/>
++		<bitfield name="BCOLOR" low="7" high="31"/>
++	</reg32>
++	<reg32 offset="3" name="3"/>
++</domain>
++
++<domain name="A6XX_TEX_CONST" width="32" varset="chip">
++	<doc>Texture constant dwords</doc>
++	<enum name="a6xx_tex_swiz"> <!-- same as a4xx? -->
++		<value name="A6XX_TEX_X" value="0"/>
++		<value name="A6XX_TEX_Y" value="1"/>
++		<value name="A6XX_TEX_Z" value="2"/>
++		<value name="A6XX_TEX_W" value="3"/>
++		<value name="A6XX_TEX_ZERO" value="4"/>
++		<value name="A6XX_TEX_ONE" value="5"/>
++	</enum>
++	<reg32 offset="0" name="0">
++		<bitfield name="TILE_MODE" low="0" high="1" type="a6xx_tile_mode"/>
++		<bitfield name="SRGB" pos="2" type="boolean"/>
++		<bitfield name="SWIZ_X" low="4" high="6" type="a6xx_tex_swiz"/>
++		<bitfield name="SWIZ_Y" low="7" high="9" type="a6xx_tex_swiz"/>
++		<bitfield name="SWIZ_Z" low="10" high="12" type="a6xx_tex_swiz"/>
++		<bitfield name="SWIZ_W" low="13" high="15" type="a6xx_tex_swiz"/>
++		<bitfield name="MIPLVLS" low="16" high="19" type="uint"/>
++		<!-- overlaps with MIPLVLS -->
++		<bitfield name="CHROMA_MIDPOINT_X" pos="16" type="boolean"/>
++		<bitfield name="CHROMA_MIDPOINT_Y" pos="18" type="boolean"/>
++		<bitfield name="SAMPLES" low="20" high="21" type="a3xx_msaa_samples"/>
++		<bitfield name="FMT" low="22" high="29" type="a6xx_format"/>
++		<!--
++			Why is the swap needed in addition to SWIZ_*? The swap
++			is performed before border color replacement, while the
++			swizzle is applied after it.
++		-->
++		<bitfield name="SWAP" low="30" high="31" type="a3xx_color_swap"/>
++	</reg32>
++	<reg32 offset="1" name="1">
++		<bitfield name="WIDTH" low="0" high="14" type="uint"/>
++		<bitfield name="HEIGHT" low="15" high="29" type="uint"/>
++		<bitfield name="MUTABLEEN" pos="31" type="boolean" variants="A7XX-"/>
++	</reg32>
++	<reg32 offset="2" name="2">
++		<!--
++			These fields overlap PITCH, and are used instead of
++			PITCH/PITCHALIGN when TYPE is A6XX_TEX_BUFFER.
++		 -->
++		<doc> probably for D3D structured UAVs, normally set to 1 </doc>
++		<bitfield name="STRUCTSIZETEXELS" low="4" high="15" type="uint"/>
++		<bitfield name="STARTOFFSETTEXELS" low="16" high="21" type="uint"/>
++
++		<!-- minimum pitch (for mipmap levels): log2(pitchalign / 64) -->
++		<bitfield name="PITCHALIGN" low="0" high="3" type="uint"/>
++		<doc>Pitch in bytes (so actually stride)</doc>
++		<bitfield name="PITCH" low="7" high="28" type="uint"/>
++		<bitfield name="TYPE" low="29" high="31" type="a6xx_tex_type"/>
++	</reg32>
++	<reg32 offset="3" name="3">
++		<!--
++		ARRAY_PITCH is basically LAYERSZ for the first mipmap level. For
++		3d textures (laid out mipmap level first), MIN_LAYERSZ is the
++		layer size at the point where it stops shrinking as you move to
++		higher (smaller) mipmap levels.
++		 -->
++		<bitfield name="ARRAY_PITCH" low="0" high="22" shr="12" type="uint"/>
++		<bitfield name="MIN_LAYERSZ" low="23" high="26" shr="12"/>
++		<!--
++		By default levels with w < 16 are linear; TILE_ALL makes all
++		levels tiled. Seems to be required when using UBWC, since all
++		levels have UBWC (can possibly be disabled?)
++		 -->
++		<bitfield name="TILE_ALL" pos="27" type="boolean"/>
++		<bitfield name="FLAG" pos="28" type="boolean"/>
++	</reg32>
++	<!-- for 2-3 plane format, BASE is flag buffer address (if enabled)
++	     the address of the non-flag base buffer is determined automatically,
++	     and must follow the flag buffer
++	 -->
++	<reg32 offset="4" name="4">
++		<bitfield name="BASE_LO" low="5" high="31" shr="5"/>
++	</reg32>
++	<reg32 offset="5" name="5">
++		<bitfield name="BASE_HI" low="0" high="16"/>
++		<bitfield name="DEPTH" low="17" high="29" type="uint"/>
++	</reg32>
++	<reg32 offset="6" name="6">
++		<!-- overlaps with PLANE_PITCH -->
++		<bitfield name="MIN_LOD_CLAMP" low="0" high="11" type="ufixed" radix="8"/>
++		<!-- pitch for plane 2 / plane 3 -->
++		<bitfield name="PLANE_PITCH" low="8" high="31" type="uint"/>
++	</reg32>
++	<!-- 7/8 is plane 2 address for planar formats -->
++	<reg32 offset="7" name="7">
++		<bitfield name="FLAG_LO" low="5" high="31" shr="5"/>
++	</reg32>
++	<reg32 offset="8" name="8">
++		<bitfield name="FLAG_HI" low="0" high="16"/>
++	</reg32>
++	<!-- 9/10 is plane 3 address for planar formats -->
++	<reg32 offset="9" name="9">
++		<bitfield name="FLAG_BUFFER_ARRAY_PITCH" low="0" high="16" shr="4" type="uint"/>
++	</reg32>
++	<reg32 offset="10" name="10">
++		<bitfield name="FLAG_BUFFER_PITCH" low="0" high="6" shr="6" type="uint"/>
++		<!-- log2 size of the first level, required for mipmapping -->
++		<bitfield name="FLAG_BUFFER_LOGW" low="8" high="11" type="uint"/>
++		<bitfield name="FLAG_BUFFER_LOGH" low="12" high="15" type="uint"/>
++	</reg32>
++	<reg32 offset="11" name="11"/>
++	<reg32 offset="12" name="12"/>
++	<reg32 offset="13" name="13"/>
++	<reg32 offset="14" name="14"/>
++	<reg32 offset="15" name="15"/>
++</domain>
++
++<domain name="A6XX_UBO" width="32">
++	<reg32 offset="0" name="0">
++		<bitfield name="BASE_LO" low="0" high="31"/>
++	</reg32>
++	<reg32 offset="1" name="1">
++		<bitfield name="BASE_HI" low="0" high="16"/>
++		<bitfield name="SIZE" low="17" high="31"/> <!-- size in vec4 (4xDWORD) units -->
++	</reg32>
++</domain>
++
++</database>
+diff --git a/drivers/gpu/drm/msm/registers/adreno/a6xx_enums.xml b/drivers/gpu/drm/msm/registers/adreno/a6xx_enums.xml
+new file mode 100644
+index 00000000000000..665539b098c632
+--- /dev/null
++++ b/drivers/gpu/drm/msm/registers/adreno/a6xx_enums.xml
+@@ -0,0 +1,383 @@
++<?xml version="1.0" encoding="UTF-8"?>
++<database xmlns="http://nouveau.freedesktop.org/"
++xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
++xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
++<import file="freedreno_copyright.xml"/>
++<import file="adreno/adreno_common.xml"/>
++<import file="adreno/adreno_pm4.xml"/>
++
++<enum name="a6xx_tile_mode">
++	<value name="TILE6_LINEAR" value="0"/>
++	<value name="TILE6_2" value="2"/>
++	<value name="TILE6_3" value="3"/>
++</enum>
++
++<enum name="a6xx_format">
++	<value value="0x02" name="FMT6_A8_UNORM"/>
++	<value value="0x03" name="FMT6_8_UNORM"/>
++	<value value="0x04" name="FMT6_8_SNORM"/>
++	<value value="0x05" name="FMT6_8_UINT"/>
++	<value value="0x06" name="FMT6_8_SINT"/>
++
++	<value value="0x08" name="FMT6_4_4_4_4_UNORM"/>
++	<value value="0x0a" name="FMT6_5_5_5_1_UNORM"/>
++	<value value="0x0c" name="FMT6_1_5_5_5_UNORM"/> <!-- read only -->
++	<value value="0x0e" name="FMT6_5_6_5_UNORM"/>
++
++	<value value="0x0f" name="FMT6_8_8_UNORM"/>
++	<value value="0x10" name="FMT6_8_8_SNORM"/>
++	<value value="0x11" name="FMT6_8_8_UINT"/>
++	<value value="0x12" name="FMT6_8_8_SINT"/>
++	<value value="0x13" name="FMT6_L8_A8_UNORM"/>
++
++	<value value="0x15" name="FMT6_16_UNORM"/>
++	<value value="0x16" name="FMT6_16_SNORM"/>
++	<value value="0x17" name="FMT6_16_FLOAT"/>
++	<value value="0x18" name="FMT6_16_UINT"/>
++	<value value="0x19" name="FMT6_16_SINT"/>
++
++	<value value="0x21" name="FMT6_8_8_8_UNORM"/>
++	<value value="0x22" name="FMT6_8_8_8_SNORM"/>
++	<value value="0x23" name="FMT6_8_8_8_UINT"/>
++	<value value="0x24" name="FMT6_8_8_8_SINT"/>
++
++	<value value="0x30" name="FMT6_8_8_8_8_UNORM"/>
++	<value value="0x31" name="FMT6_8_8_8_X8_UNORM"/> <!-- samples 1 for alpha -->
++	<value value="0x32" name="FMT6_8_8_8_8_SNORM"/>
++	<value value="0x33" name="FMT6_8_8_8_8_UINT"/>
++	<value value="0x34" name="FMT6_8_8_8_8_SINT"/>
++
++	<value value="0x35" name="FMT6_9_9_9_E5_FLOAT"/>
++
++	<value value="0x36" name="FMT6_10_10_10_2_UNORM"/>
++	<value value="0x37" name="FMT6_10_10_10_2_UNORM_DEST"/>
++	<value value="0x39" name="FMT6_10_10_10_2_SNORM"/>
++	<value value="0x3a" name="FMT6_10_10_10_2_UINT"/>
++	<value value="0x3b" name="FMT6_10_10_10_2_SINT"/>
++
++	<value value="0x42" name="FMT6_11_11_10_FLOAT"/>
++
++	<value value="0x43" name="FMT6_16_16_UNORM"/>
++	<value value="0x44" name="FMT6_16_16_SNORM"/>
++	<value value="0x45" name="FMT6_16_16_FLOAT"/>
++	<value value="0x46" name="FMT6_16_16_UINT"/>
++	<value value="0x47" name="FMT6_16_16_SINT"/>
++
++	<value value="0x48" name="FMT6_32_UNORM"/>
++	<value value="0x49" name="FMT6_32_SNORM"/>
++	<value value="0x4a" name="FMT6_32_FLOAT"/>
++	<value value="0x4b" name="FMT6_32_UINT"/>
++	<value value="0x4c" name="FMT6_32_SINT"/>
++	<value value="0x4d" name="FMT6_32_FIXED"/>
++
++	<value value="0x58" name="FMT6_16_16_16_UNORM"/>
++	<value value="0x59" name="FMT6_16_16_16_SNORM"/>
++	<value value="0x5a" name="FMT6_16_16_16_FLOAT"/>
++	<value value="0x5b" name="FMT6_16_16_16_UINT"/>
++	<value value="0x5c" name="FMT6_16_16_16_SINT"/>
++
++	<value value="0x60" name="FMT6_16_16_16_16_UNORM"/>
++	<value value="0x61" name="FMT6_16_16_16_16_SNORM"/>
++	<value value="0x62" name="FMT6_16_16_16_16_FLOAT"/>
++	<value value="0x63" name="FMT6_16_16_16_16_UINT"/>
++	<value value="0x64" name="FMT6_16_16_16_16_SINT"/>
++
++	<value value="0x65" name="FMT6_32_32_UNORM"/>
++	<value value="0x66" name="FMT6_32_32_SNORM"/>
++	<value value="0x67" name="FMT6_32_32_FLOAT"/>
++	<value value="0x68" name="FMT6_32_32_UINT"/>
++	<value value="0x69" name="FMT6_32_32_SINT"/>
++	<value value="0x6a" name="FMT6_32_32_FIXED"/>
++
++	<value value="0x70" name="FMT6_32_32_32_UNORM"/>
++	<value value="0x71" name="FMT6_32_32_32_SNORM"/>
++	<value value="0x72" name="FMT6_32_32_32_UINT"/>
++	<value value="0x73" name="FMT6_32_32_32_SINT"/>
++	<value value="0x74" name="FMT6_32_32_32_FLOAT"/>
++	<value value="0x75" name="FMT6_32_32_32_FIXED"/>
++
++	<value value="0x80" name="FMT6_32_32_32_32_UNORM"/>
++	<value value="0x81" name="FMT6_32_32_32_32_SNORM"/>
++	<value value="0x82" name="FMT6_32_32_32_32_FLOAT"/>
++	<value value="0x83" name="FMT6_32_32_32_32_UINT"/>
++	<value value="0x84" name="FMT6_32_32_32_32_SINT"/>
++	<value value="0x85" name="FMT6_32_32_32_32_FIXED"/>
++
++	<value value="0x8c" name="FMT6_G8R8B8R8_422_UNORM"/> <!-- UYVY -->
++	<value value="0x8d" name="FMT6_R8G8R8B8_422_UNORM"/> <!-- YUYV -->
++	<value value="0x8e" name="FMT6_R8_G8B8_2PLANE_420_UNORM"/> <!-- NV12 -->
++	<value value="0x8f" name="FMT6_NV21"/>
++	<value value="0x90" name="FMT6_R8_G8_B8_3PLANE_420_UNORM"/> <!-- YV12 -->
++
++	<value value="0x91" name="FMT6_Z24_UNORM_S8_UINT_AS_R8G8B8A8"/>
++
++	<!-- Note: tiling/UBWC for these may be different from equivalent formats
++	For example FMT6_NV12_Y is not compatible with FMT6_8_UNORM
++	-->
++	<value value="0x94" name="FMT6_NV12_Y"/>
++	<value value="0x95" name="FMT6_NV12_UV"/>
++	<value value="0x96" name="FMT6_NV12_VU"/>
++	<value value="0x97" name="FMT6_NV12_4R"/>
++	<value value="0x98" name="FMT6_NV12_4R_Y"/>
++	<value value="0x99" name="FMT6_NV12_4R_UV"/>
++	<value value="0x9a" name="FMT6_P010"/>
++	<value value="0x9b" name="FMT6_P010_Y"/>
++	<value value="0x9c" name="FMT6_P010_UV"/>
++	<value value="0x9d" name="FMT6_TP10"/>
++	<value value="0x9e" name="FMT6_TP10_Y"/>
++	<value value="0x9f" name="FMT6_TP10_UV"/>
++
++	<value value="0xa0" name="FMT6_Z24_UNORM_S8_UINT"/>
++
++	<value value="0xab" name="FMT6_ETC2_RG11_UNORM"/>
++	<value value="0xac" name="FMT6_ETC2_RG11_SNORM"/>
++	<value value="0xad" name="FMT6_ETC2_R11_UNORM"/>
++	<value value="0xae" name="FMT6_ETC2_R11_SNORM"/>
++	<value value="0xaf" name="FMT6_ETC1"/>
++	<value value="0xb0" name="FMT6_ETC2_RGB8"/>
++	<value value="0xb1" name="FMT6_ETC2_RGBA8"/>
++	<value value="0xb2" name="FMT6_ETC2_RGB8A1"/>
++	<value value="0xb3" name="FMT6_DXT1"/>
++	<value value="0xb4" name="FMT6_DXT3"/>
++	<value value="0xb5" name="FMT6_DXT5"/>
++	<value value="0xb6" name="FMT6_RGTC1_UNORM"/>
++	<value value="0xb7" name="FMT6_RGTC1_UNORM_FAST"/>
++	<value value="0xb8" name="FMT6_RGTC1_SNORM"/>
++	<value value="0xb9" name="FMT6_RGTC1_SNORM_FAST"/>
++	<value value="0xba" name="FMT6_RGTC2_UNORM"/>
++	<value value="0xbb" name="FMT6_RGTC2_UNORM_FAST"/>
++	<value value="0xbc" name="FMT6_RGTC2_SNORM"/>
++	<value value="0xbd" name="FMT6_RGTC2_SNORM_FAST"/>
++	<value value="0xbe" name="FMT6_BPTC_UFLOAT"/>
++	<value value="0xbf" name="FMT6_BPTC_FLOAT"/>
++	<value value="0xc0" name="FMT6_BPTC"/>
++	<value value="0xc1" name="FMT6_ASTC_4x4"/>
++	<value value="0xc2" name="FMT6_ASTC_5x4"/>
++	<value value="0xc3" name="FMT6_ASTC_5x5"/>
++	<value value="0xc4" name="FMT6_ASTC_6x5"/>
++	<value value="0xc5" name="FMT6_ASTC_6x6"/>
++	<value value="0xc6" name="FMT6_ASTC_8x5"/>
++	<value value="0xc7" name="FMT6_ASTC_8x6"/>
++	<value value="0xc8" name="FMT6_ASTC_8x8"/>
++	<value value="0xc9" name="FMT6_ASTC_10x5"/>
++	<value value="0xca" name="FMT6_ASTC_10x6"/>
++	<value value="0xcb" name="FMT6_ASTC_10x8"/>
++	<value value="0xcc" name="FMT6_ASTC_10x10"/>
++	<value value="0xcd" name="FMT6_ASTC_12x10"/>
++	<value value="0xce" name="FMT6_ASTC_12x12"/>
++
++	<!-- for sampling stencil (integer, 2nd channel), not available on a630 -->
++	<value value="0xea" name="FMT6_Z24_UINT_S8_UINT"/>
++
++	<!-- Not a hw enum, used internally in driver -->
++	<value value="0xff" name="FMT6_NONE"/>
++
++</enum>
++
++<!-- probably same as a5xx -->
++<enum name="a6xx_polygon_mode">
++	<value name="POLYMODE6_POINTS" value="1"/>
++	<value name="POLYMODE6_LINES" value="2"/>
++	<value name="POLYMODE6_TRIANGLES" value="3"/>
++</enum>
++
++<enum name="a6xx_depth_format">
++	<value name="DEPTH6_NONE" value="0"/>
++	<value name="DEPTH6_16" value="1"/>
++	<value name="DEPTH6_24_8" value="2"/>
++	<value name="DEPTH6_32" value="4"/>
++</enum>
++
++<bitset name="a6x_cp_protect" inline="yes">
++	<bitfield name="BASE_ADDR" low="0" high="17"/>
++	<bitfield name="MASK_LEN" low="18" high="30"/>
++	<bitfield name="READ" pos="31" type="boolean"/>
++</bitset>
++
++<enum name="a6xx_shader_id">
++	<value value="0x9" name="A6XX_TP0_TMO_DATA"/>
++	<value value="0xa" name="A6XX_TP0_SMO_DATA"/>
++	<value value="0xb" name="A6XX_TP0_MIPMAP_BASE_DATA"/>
++	<value value="0x19" name="A6XX_TP1_TMO_DATA"/>
++	<value value="0x1a" name="A6XX_TP1_SMO_DATA"/>
++	<value value="0x1b" name="A6XX_TP1_MIPMAP_BASE_DATA"/>
++	<value value="0x29" name="A6XX_SP_INST_DATA"/>
++	<value value="0x2a" name="A6XX_SP_LB_0_DATA"/>
++	<value value="0x2b" name="A6XX_SP_LB_1_DATA"/>
++	<value value="0x2c" name="A6XX_SP_LB_2_DATA"/>
++	<value value="0x2d" name="A6XX_SP_LB_3_DATA"/>
++	<value value="0x2e" name="A6XX_SP_LB_4_DATA"/>
++	<value value="0x2f" name="A6XX_SP_LB_5_DATA"/>
++	<value value="0x30" name="A6XX_SP_CB_BINDLESS_DATA"/>
++	<value value="0x31" name="A6XX_SP_CB_LEGACY_DATA"/>
++	<value value="0x32" name="A6XX_SP_GFX_UAV_BASE_DATA"/>
++	<value value="0x33" name="A6XX_SP_INST_TAG"/>
++	<value value="0x34" name="A6XX_SP_CB_BINDLESS_TAG"/>
++	<value value="0x35" name="A6XX_SP_TMO_UMO_TAG"/>
++	<value value="0x36" name="A6XX_SP_SMO_TAG"/>
++	<value value="0x37" name="A6XX_SP_STATE_DATA"/>
++	<value value="0x49" name="A6XX_HLSQ_CHUNK_CVS_RAM"/>
++	<value value="0x4a" name="A6XX_HLSQ_CHUNK_CPS_RAM"/>
++	<value value="0x4b" name="A6XX_HLSQ_CHUNK_CVS_RAM_TAG"/>
++	<value value="0x4c" name="A6XX_HLSQ_CHUNK_CPS_RAM_TAG"/>
++	<value value="0x4d" name="A6XX_HLSQ_ICB_CVS_CB_BASE_TAG"/>
++	<value value="0x4e" name="A6XX_HLSQ_ICB_CPS_CB_BASE_TAG"/>
++	<value value="0x50" name="A6XX_HLSQ_CVS_MISC_RAM"/>
++	<value value="0x51" name="A6XX_HLSQ_CPS_MISC_RAM"/>
++	<value value="0x52" name="A6XX_HLSQ_INST_RAM"/>
++	<value value="0x53" name="A6XX_HLSQ_GFX_CVS_CONST_RAM"/>
++	<value value="0x54" name="A6XX_HLSQ_GFX_CPS_CONST_RAM"/>
++	<value value="0x55" name="A6XX_HLSQ_CVS_MISC_RAM_TAG"/>
++	<value value="0x56" name="A6XX_HLSQ_CPS_MISC_RAM_TAG"/>
++	<value value="0x57" name="A6XX_HLSQ_INST_RAM_TAG"/>
++	<value value="0x58" name="A6XX_HLSQ_GFX_CVS_CONST_RAM_TAG"/>
++	<value value="0x59" name="A6XX_HLSQ_GFX_CPS_CONST_RAM_TAG"/>
++	<value value="0x5a" name="A6XX_HLSQ_PWR_REST_RAM"/>
++	<value value="0x5b" name="A6XX_HLSQ_PWR_REST_TAG"/>
++	<value value="0x60" name="A6XX_HLSQ_DATAPATH_META"/>
++	<value value="0x61" name="A6XX_HLSQ_FRONTEND_META"/>
++	<value value="0x62" name="A6XX_HLSQ_INDIRECT_META"/>
++	<value value="0x63" name="A6XX_HLSQ_BACKEND_META"/>
++	<value value="0x70" name="A6XX_SP_LB_6_DATA"/>
++	<value value="0x71" name="A6XX_SP_LB_7_DATA"/>
++	<value value="0x73" name="A6XX_HLSQ_INST_RAM_1"/>
++</enum>
++
++<enum name="a6xx_debugbus_id">
++	<value value="0x1" name="A6XX_DBGBUS_CP"/>
++	<value value="0x2" name="A6XX_DBGBUS_RBBM"/>
++	<value value="0x3" name="A6XX_DBGBUS_VBIF"/>
++	<value value="0x4" name="A6XX_DBGBUS_HLSQ"/>
++	<value value="0x5" name="A6XX_DBGBUS_UCHE"/>
++	<value value="0x6" name="A6XX_DBGBUS_DPM"/>
++	<value value="0x7" name="A6XX_DBGBUS_TESS"/>
++	<value value="0x8" name="A6XX_DBGBUS_PC"/>
++	<value value="0x9" name="A6XX_DBGBUS_VFDP"/>
++	<value value="0xa" name="A6XX_DBGBUS_VPC"/>
++	<value value="0xb" name="A6XX_DBGBUS_TSE"/>
++	<value value="0xc" name="A6XX_DBGBUS_RAS"/>
++	<value value="0xd" name="A6XX_DBGBUS_VSC"/>
++	<value value="0xe" name="A6XX_DBGBUS_COM"/>
++	<value value="0x10" name="A6XX_DBGBUS_LRZ"/>
++	<value value="0x11" name="A6XX_DBGBUS_A2D"/>
++	<value value="0x12" name="A6XX_DBGBUS_CCUFCHE"/>
++	<value value="0x13" name="A6XX_DBGBUS_GMU_CX"/>
++	<value value="0x14" name="A6XX_DBGBUS_RBP"/>
++	<value value="0x15" name="A6XX_DBGBUS_DCS"/>
++	<value value="0x16" name="A6XX_DBGBUS_DBGC"/>
++	<value value="0x17" name="A6XX_DBGBUS_CX"/>
++	<value value="0x18" name="A6XX_DBGBUS_GMU_GX"/>
++	<value value="0x19" name="A6XX_DBGBUS_TPFCHE"/>
++	<value value="0x1a" name="A6XX_DBGBUS_GBIF_GX"/>
++	<value value="0x1d" name="A6XX_DBGBUS_GPC"/>
++	<value value="0x1e" name="A6XX_DBGBUS_LARC"/>
++	<value value="0x1f" name="A6XX_DBGBUS_HLSQ_SPTP"/>
++	<value value="0x20" name="A6XX_DBGBUS_RB_0"/>
++	<value value="0x21" name="A6XX_DBGBUS_RB_1"/>
++	<value value="0x22" name="A6XX_DBGBUS_RB_2"/>
++	<value value="0x24" name="A6XX_DBGBUS_UCHE_WRAPPER"/>
++	<value value="0x28" name="A6XX_DBGBUS_CCU_0"/>
++	<value value="0x29" name="A6XX_DBGBUS_CCU_1"/>
++	<value value="0x2a" name="A6XX_DBGBUS_CCU_2"/>
++	<value value="0x38" name="A6XX_DBGBUS_VFD_0"/>
++	<value value="0x39" name="A6XX_DBGBUS_VFD_1"/>
++	<value value="0x3a" name="A6XX_DBGBUS_VFD_2"/>
++	<value value="0x3b" name="A6XX_DBGBUS_VFD_3"/>
++	<value value="0x3c" name="A6XX_DBGBUS_VFD_4"/>
++	<value value="0x3d" name="A6XX_DBGBUS_VFD_5"/>
++	<value value="0x40" name="A6XX_DBGBUS_SP_0"/>
++	<value value="0x41" name="A6XX_DBGBUS_SP_1"/>
++	<value value="0x42" name="A6XX_DBGBUS_SP_2"/>
++	<value value="0x48" name="A6XX_DBGBUS_TPL1_0"/>
++	<value value="0x49" name="A6XX_DBGBUS_TPL1_1"/>
++	<value value="0x4a" name="A6XX_DBGBUS_TPL1_2"/>
++	<value value="0x4b" name="A6XX_DBGBUS_TPL1_3"/>
++	<value value="0x4c" name="A6XX_DBGBUS_TPL1_4"/>
++	<value value="0x4d" name="A6XX_DBGBUS_TPL1_5"/>
++	<value value="0x58" name="A6XX_DBGBUS_SPTP_0"/>
++	<value value="0x59" name="A6XX_DBGBUS_SPTP_1"/>
++	<value value="0x5a" name="A6XX_DBGBUS_SPTP_2"/>
++	<value value="0x5b" name="A6XX_DBGBUS_SPTP_3"/>
++	<value value="0x5c" name="A6XX_DBGBUS_SPTP_4"/>
++	<value value="0x5d" name="A6XX_DBGBUS_SPTP_5"/>
++</enum>
++
++<!--
++Used in a6xx_a2d_bit_cntl. The value mostly seems to correlate to the
++component type/size, so I think it relates to the internal format used for
++blending?  The one exception is that 16b unorm and 32b float use the
++same value... maybe 16b unorm is uncommon enough that it was just easier
++to upconvert to 32b float internally?
++
++ 8b unorm:  10 (sometimes 0, is the high bit part of something else?)
++16b unorm:   4
++
++32b int:     7
++16b int:     6
++ 8b int:     5
++
++32b float:   4
++16b float:   3
++ -->
++<enum name="a6xx_2d_ifmt">
++	<value value="0x10" name="R2D_UNORM8"/>
++	<value value="0x7"  name="R2D_INT32"/>
++	<value value="0x6"  name="R2D_INT16"/>
++	<value value="0x5"  name="R2D_INT8"/>
++	<value value="0x4"  name="R2D_FLOAT32"/>
++	<value value="0x3"  name="R2D_FLOAT16"/>
++	<value value="0x1"  name="R2D_UNORM8_SRGB"/>
++	<value value="0x0"  name="R2D_RAW"/>
++</enum>
++
++<enum name="a6xx_tex_type">
++	<value name="A6XX_TEX_1D" value="0"/>
++	<value name="A6XX_TEX_2D" value="1"/>
++	<value name="A6XX_TEX_CUBE" value="2"/>
++	<value name="A6XX_TEX_3D" value="3"/>
++	<value name="A6XX_TEX_BUFFER" value="4"/>
++	<doc>
++		A special buffer type for use as the source of buffer-to-image
++		copies, with lower alignment requirements than A6XX_TEX_2D.
++		Available since A7XX.
++	</doc>
++	<value name="A6XX_TEX_IMG_BUFFER" value="5"/>
++</enum>
++
++<enum name="a6xx_ztest_mode">
++	<doc>Allow early z-test and early-lrz (if applicable)</doc>
++	<value value="0x0" name="A6XX_EARLY_Z"/>
++	<doc>Disable early z-test and early-lrz test (if applicable)</doc>
++	<value value="0x1" name="A6XX_LATE_Z"/>
++	<doc>
++		A special mode that allows early-lrz (if applicable) or early-z
++		tests, but also does late-z tests at which point it writes depth.
++
++		This mode is used when a fragment can be killed (via discard or
++		sample mask) after the early-z test and also writes depth. In that
++		case depth can only be written at the late-z stage, but it is still
++		OK to use early-z to discard fragments.
++
++		However this mode is not compatible with:
++		- Lack of D/S attachment
++		- Stencil writes on stencil or depth test failures
++		- Per-sample shading
++	</doc>
++	<value value="0x2" name="A6XX_EARLY_Z_LATE_Z"/>
++	<doc>Not a real hw value, used internally by mesa</doc>
++	<value value="0x3" name="A6XX_INVALID_ZTEST"/>
++</enum>
++
++<enum name="a6xx_tess_spacing">
++	<value value="0x0" name="TESS_EQUAL"/>
++	<value value="0x2" name="TESS_FRACTIONAL_ODD"/>
++	<value value="0x3" name="TESS_FRACTIONAL_EVEN"/>
++</enum>
++<enum name="a6xx_tess_output">
++	<value value="0x0" name="TESS_POINTS"/>
++	<value value="0x1" name="TESS_LINES"/>
++	<value value="0x2" name="TESS_CW_TRIS"/>
++	<value value="0x3" name="TESS_CCW_TRIS"/>
++</enum>
++
++</database>
+diff --git a/drivers/gpu/drm/msm/registers/adreno/a6xx_perfcntrs.xml b/drivers/gpu/drm/msm/registers/adreno/a6xx_perfcntrs.xml
+new file mode 100644
+index 00000000000000..c446a2eb11202f
+--- /dev/null
++++ b/drivers/gpu/drm/msm/registers/adreno/a6xx_perfcntrs.xml
+@@ -0,0 +1,600 @@
++<?xml version="1.0" encoding="UTF-8"?>
++<database xmlns="http://nouveau.freedesktop.org/"
++xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
++xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
++<import file="freedreno_copyright.xml"/>
++<import file="adreno/adreno_common.xml"/>
++<import file="adreno/adreno_pm4.xml"/>
++
++<enum name="a6xx_cp_perfcounter_select">
++	<value value="0" name="PERF_CP_ALWAYS_COUNT"/>
++	<value value="1" name="PERF_CP_BUSY_GFX_CORE_IDLE"/>
++	<value value="2" name="PERF_CP_BUSY_CYCLES"/>
++	<value value="3" name="PERF_CP_NUM_PREEMPTIONS"/>
++	<value value="4" name="PERF_CP_PREEMPTION_REACTION_DELAY"/>
++	<value value="5" name="PERF_CP_PREEMPTION_SWITCH_OUT_TIME"/>
++	<value value="6" name="PERF_CP_PREEMPTION_SWITCH_IN_TIME"/>
++	<value value="7" name="PERF_CP_DEAD_DRAWS_IN_BIN_RENDER"/>
++	<value value="8" name="PERF_CP_PREDICATED_DRAWS_KILLED"/>
++	<value value="9" name="PERF_CP_MODE_SWITCH"/>
++	<value value="10" name="PERF_CP_ZPASS_DONE"/>
++	<value value="11" name="PERF_CP_CONTEXT_DONE"/>
++	<value value="12" name="PERF_CP_CACHE_FLUSH"/>
++	<value value="13" name="PERF_CP_LONG_PREEMPTIONS"/>
++	<value value="14" name="PERF_CP_SQE_I_CACHE_STARVE"/>
++	<value value="15" name="PERF_CP_SQE_IDLE"/>
++	<value value="16" name="PERF_CP_SQE_PM4_STARVE_RB_IB"/>
++	<value value="17" name="PERF_CP_SQE_PM4_STARVE_SDS"/>
++	<value value="18" name="PERF_CP_SQE_MRB_STARVE"/>
++	<value value="19" name="PERF_CP_SQE_RRB_STARVE"/>
++	<value value="20" name="PERF_CP_SQE_VSD_STARVE"/>
++	<value value="21" name="PERF_CP_VSD_DECODE_STARVE"/>
++	<value value="22" name="PERF_CP_SQE_PIPE_OUT_STALL"/>
++	<value value="23" name="PERF_CP_SQE_SYNC_STALL"/>
++	<value value="24" name="PERF_CP_SQE_PM4_WFI_STALL"/>
++	<value value="25" name="PERF_CP_SQE_SYS_WFI_STALL"/>
++	<value value="26" name="PERF_CP_SQE_T4_EXEC"/>
++	<value value="27" name="PERF_CP_SQE_LOAD_STATE_EXEC"/>
++	<value value="28" name="PERF_CP_SQE_SAVE_SDS_STATE"/>
++	<value value="29" name="PERF_CP_SQE_DRAW_EXEC"/>
++	<value value="30" name="PERF_CP_SQE_CTXT_REG_BUNCH_EXEC"/>
++	<value value="31" name="PERF_CP_SQE_EXEC_PROFILED"/>
++	<value value="32" name="PERF_CP_MEMORY_POOL_EMPTY"/>
++	<value value="33" name="PERF_CP_MEMORY_POOL_SYNC_STALL"/>
++	<value value="34" name="PERF_CP_MEMORY_POOL_ABOVE_THRESH"/>
++	<value value="35" name="PERF_CP_AHB_WR_STALL_PRE_DRAWS"/>
++	<value value="36" name="PERF_CP_AHB_STALL_SQE_GMU"/>
++	<value value="37" name="PERF_CP_AHB_STALL_SQE_WR_OTHER"/>
++	<value value="38" name="PERF_CP_AHB_STALL_SQE_RD_OTHER"/>
++	<value value="39" name="PERF_CP_CLUSTER0_EMPTY"/>
++	<value value="40" name="PERF_CP_CLUSTER1_EMPTY"/>
++	<value value="41" name="PERF_CP_CLUSTER2_EMPTY"/>
++	<value value="42" name="PERF_CP_CLUSTER3_EMPTY"/>
++	<value value="43" name="PERF_CP_CLUSTER4_EMPTY"/>
++	<value value="44" name="PERF_CP_CLUSTER5_EMPTY"/>
++	<value value="45" name="PERF_CP_PM4_DATA"/>
++	<value value="46" name="PERF_CP_PM4_HEADERS"/>
++	<value value="47" name="PERF_CP_VBIF_READ_BEATS"/>
++	<value value="48" name="PERF_CP_VBIF_WRITE_BEATS"/>
++	<value value="49" name="PERF_CP_SQE_INSTR_COUNTER"/>
++</enum>
++
++<enum name="a6xx_rbbm_perfcounter_select">
++	<value value="0" name="PERF_RBBM_ALWAYS_COUNT"/>
++	<value value="1" name="PERF_RBBM_ALWAYS_ON"/>
++	<value value="2" name="PERF_RBBM_TSE_BUSY"/>
++	<value value="3" name="PERF_RBBM_RAS_BUSY"/>
++	<value value="4" name="PERF_RBBM_PC_DCALL_BUSY"/>
++	<value value="5" name="PERF_RBBM_PC_VSD_BUSY"/>
++	<value value="6" name="PERF_RBBM_STATUS_MASKED"/>
++	<value value="7" name="PERF_RBBM_COM_BUSY"/>
++	<value value="8" name="PERF_RBBM_DCOM_BUSY"/>
++	<value value="9" name="PERF_RBBM_VBIF_BUSY"/>
++	<value value="10" name="PERF_RBBM_VSC_BUSY"/>
++	<value value="11" name="PERF_RBBM_TESS_BUSY"/>
++	<value value="12" name="PERF_RBBM_UCHE_BUSY"/>
++	<value value="13" name="PERF_RBBM_HLSQ_BUSY"/>
++</enum>
++
++<enum name="a6xx_pc_perfcounter_select">
++	<value value="0" name="PERF_PC_BUSY_CYCLES"/>
++	<value value="1" name="PERF_PC_WORKING_CYCLES"/>
++	<value value="2" name="PERF_PC_STALL_CYCLES_VFD"/>
++	<value value="3" name="PERF_PC_STALL_CYCLES_TSE"/>
++	<value value="4" name="PERF_PC_STALL_CYCLES_VPC"/>
++	<value value="5" name="PERF_PC_STALL_CYCLES_UCHE"/>
++	<value value="6" name="PERF_PC_STALL_CYCLES_TESS"/>
++	<value value="7" name="PERF_PC_STALL_CYCLES_TSE_ONLY"/>
++	<value value="8" name="PERF_PC_STALL_CYCLES_VPC_ONLY"/>
++	<value value="9" name="PERF_PC_PASS1_TF_STALL_CYCLES"/>
++	<value value="10" name="PERF_PC_STARVE_CYCLES_FOR_INDEX"/>
++	<value value="11" name="PERF_PC_STARVE_CYCLES_FOR_TESS_FACTOR"/>
++	<value value="12" name="PERF_PC_STARVE_CYCLES_FOR_VIZ_STREAM"/>
++	<value value="13" name="PERF_PC_STARVE_CYCLES_FOR_POSITION"/>
++	<value value="14" name="PERF_PC_STARVE_CYCLES_DI"/>
++	<value value="15" name="PERF_PC_VIS_STREAMS_LOADED"/>
++	<value value="16" name="PERF_PC_INSTANCES"/>
++	<value value="17" name="PERF_PC_VPC_PRIMITIVES"/>
++	<value value="18" name="PERF_PC_DEAD_PRIM"/>
++	<value value="19" name="PERF_PC_LIVE_PRIM"/>
++	<value value="20" name="PERF_PC_VERTEX_HITS"/>
++	<value value="21" name="PERF_PC_IA_VERTICES"/>
++	<value value="22" name="PERF_PC_IA_PRIMITIVES"/>
++	<value value="23" name="PERF_PC_GS_PRIMITIVES"/>
++	<value value="24" name="PERF_PC_HS_INVOCATIONS"/>
++	<value value="25" name="PERF_PC_DS_INVOCATIONS"/>
++	<value value="26" name="PERF_PC_VS_INVOCATIONS"/>
++	<value value="27" name="PERF_PC_GS_INVOCATIONS"/>
++	<value value="28" name="PERF_PC_DS_PRIMITIVES"/>
++	<value value="29" name="PERF_PC_VPC_POS_DATA_TRANSACTION"/>
++	<value value="30" name="PERF_PC_3D_DRAWCALLS"/>
++	<value value="31" name="PERF_PC_2D_DRAWCALLS"/>
++	<value value="32" name="PERF_PC_NON_DRAWCALL_GLOBAL_EVENTS"/>
++	<value value="33" name="PERF_TESS_BUSY_CYCLES"/>
++	<value value="34" name="PERF_TESS_WORKING_CYCLES"/>
++	<value value="35" name="PERF_TESS_STALL_CYCLES_PC"/>
++	<value value="36" name="PERF_TESS_STARVE_CYCLES_PC"/>
++	<value value="37" name="PERF_PC_TSE_TRANSACTION"/>
++	<value value="38" name="PERF_PC_TSE_VERTEX"/>
++	<value value="39" name="PERF_PC_TESS_PC_UV_TRANS"/>
++	<value value="40" name="PERF_PC_TESS_PC_UV_PATCHES"/>
++	<value value="41" name="PERF_PC_TESS_FACTOR_TRANS"/>
++</enum>
++
++<enum name="a6xx_vfd_perfcounter_select">
++	<value value="0" name="PERF_VFD_BUSY_CYCLES"/>
++	<value value="1" name="PERF_VFD_STALL_CYCLES_UCHE"/>
++	<value value="2" name="PERF_VFD_STALL_CYCLES_VPC_ALLOC"/>
++	<value value="3" name="PERF_VFD_STALL_CYCLES_SP_INFO"/>
++	<value value="4" name="PERF_VFD_STALL_CYCLES_SP_ATTR"/>
++	<value value="5" name="PERF_VFD_STARVE_CYCLES_UCHE"/>
++	<value value="6" name="PERF_VFD_RBUFFER_FULL"/>
++	<value value="7" name="PERF_VFD_ATTR_INFO_FIFO_FULL"/>
++	<value value="8" name="PERF_VFD_DECODED_ATTRIBUTE_BYTES"/>
++	<value value="9" name="PERF_VFD_NUM_ATTRIBUTES"/>
++	<value value="10" name="PERF_VFD_UPPER_SHADER_FIBERS"/>
++	<value value="11" name="PERF_VFD_LOWER_SHADER_FIBERS"/>
++	<value value="12" name="PERF_VFD_MODE_0_FIBERS"/>
++	<value value="13" name="PERF_VFD_MODE_1_FIBERS"/>
++	<value value="14" name="PERF_VFD_MODE_2_FIBERS"/>
++	<value value="15" name="PERF_VFD_MODE_3_FIBERS"/>
++	<value value="16" name="PERF_VFD_MODE_4_FIBERS"/>
++	<value value="17" name="PERF_VFD_TOTAL_VERTICES"/>
++	<value value="18" name="PERF_VFDP_STALL_CYCLES_VFD"/>
++	<value value="19" name="PERF_VFDP_STALL_CYCLES_VFD_INDEX"/>
++	<value value="20" name="PERF_VFDP_STALL_CYCLES_VFD_PROG"/>
++	<value value="21" name="PERF_VFDP_STARVE_CYCLES_PC"/>
++	<value value="22" name="PERF_VFDP_VS_STAGE_WAVES"/>
++</enum>
++
++<enum name="a6xx_hlsq_perfcounter_select">
++	<value value="0" name="PERF_HLSQ_BUSY_CYCLES"/>
++	<value value="1" name="PERF_HLSQ_STALL_CYCLES_UCHE"/>
++	<value value="2" name="PERF_HLSQ_STALL_CYCLES_SP_STATE"/>
++	<value value="3" name="PERF_HLSQ_STALL_CYCLES_SP_FS_STAGE"/>
++	<value value="4" name="PERF_HLSQ_UCHE_LATENCY_CYCLES"/>
++	<value value="5" name="PERF_HLSQ_UCHE_LATENCY_COUNT"/>
++	<value value="6" name="PERF_HLSQ_FS_STAGE_1X_WAVES"/>
++	<value value="7" name="PERF_HLSQ_FS_STAGE_2X_WAVES"/>
++	<value value="8" name="PERF_HLSQ_QUADS"/>
++	<value value="9" name="PERF_HLSQ_CS_INVOCATIONS"/>
++	<value value="10" name="PERF_HLSQ_COMPUTE_DRAWCALLS"/>
++	<value value="11" name="PERF_HLSQ_FS_DATA_WAIT_PROGRAMMING"/>
++	<value value="12" name="PERF_HLSQ_DUAL_FS_PROG_ACTIVE"/>
++	<value value="13" name="PERF_HLSQ_DUAL_VS_PROG_ACTIVE"/>
++	<value value="14" name="PERF_HLSQ_FS_BATCH_COUNT_ZERO"/>
++	<value value="15" name="PERF_HLSQ_VS_BATCH_COUNT_ZERO"/>
++	<value value="16" name="PERF_HLSQ_WAVE_PENDING_NO_QUAD"/>
++	<value value="17" name="PERF_HLSQ_WAVE_PENDING_NO_PRIM_BASE"/>
++	<value value="18" name="PERF_HLSQ_STALL_CYCLES_VPC"/>
++	<value value="19" name="PERF_HLSQ_PIXELS"/>
++	<value value="20" name="PERF_HLSQ_DRAW_MODE_SWITCH_VSFS_SYNC"/>
++</enum>
++
++<enum name="a6xx_vpc_perfcounter_select">
++	<value value="0" name="PERF_VPC_BUSY_CYCLES"/>
++	<value value="1" name="PERF_VPC_WORKING_CYCLES"/>
++	<value value="2" name="PERF_VPC_STALL_CYCLES_UCHE"/>
++	<value value="3" name="PERF_VPC_STALL_CYCLES_VFD_WACK"/>
++	<value value="4" name="PERF_VPC_STALL_CYCLES_HLSQ_PRIM_ALLOC"/>
++	<value value="5" name="PERF_VPC_STALL_CYCLES_PC"/>
++	<value value="6" name="PERF_VPC_STALL_CYCLES_SP_LM"/>
++	<value value="7" name="PERF_VPC_STARVE_CYCLES_SP"/>
++	<value value="8" name="PERF_VPC_STARVE_CYCLES_LRZ"/>
++	<value value="9" name="PERF_VPC_PC_PRIMITIVES"/>
++	<value value="10" name="PERF_VPC_SP_COMPONENTS"/>
++	<value value="11" name="PERF_VPC_STALL_CYCLES_VPCRAM_POS"/>
++	<value value="12" name="PERF_VPC_LRZ_ASSIGN_PRIMITIVES"/>
++	<value value="13" name="PERF_VPC_RB_VISIBLE_PRIMITIVES"/>
++	<value value="14" name="PERF_VPC_LM_TRANSACTION"/>
++	<value value="15" name="PERF_VPC_STREAMOUT_TRANSACTION"/>
++	<value value="16" name="PERF_VPC_VS_BUSY_CYCLES"/>
++	<value value="17" name="PERF_VPC_PS_BUSY_CYCLES"/>
++	<value value="18" name="PERF_VPC_VS_WORKING_CYCLES"/>
++	<value value="19" name="PERF_VPC_PS_WORKING_CYCLES"/>
++	<value value="20" name="PERF_VPC_STARVE_CYCLES_RB"/>
++	<value value="21" name="PERF_VPC_NUM_VPCRAM_READ_POS"/>
++	<value value="22" name="PERF_VPC_WIT_FULL_CYCLES"/>
++	<value value="23" name="PERF_VPC_VPCRAM_FULL_CYCLES"/>
++	<value value="24" name="PERF_VPC_LM_FULL_WAIT_FOR_INTP_END"/>
++	<value value="25" name="PERF_VPC_NUM_VPCRAM_WRITE"/>
++	<value value="26" name="PERF_VPC_NUM_VPCRAM_READ_SO"/>
++	<value value="27" name="PERF_VPC_NUM_ATTR_REQ_LM"/>
++</enum>
++
++<enum name="a6xx_tse_perfcounter_select">
++	<value value="0" name="PERF_TSE_BUSY_CYCLES"/>
++	<value value="1" name="PERF_TSE_CLIPPING_CYCLES"/>
++	<value value="2" name="PERF_TSE_STALL_CYCLES_RAS"/>
++	<value value="3" name="PERF_TSE_STALL_CYCLES_LRZ_BARYPLANE"/>
++	<value value="4" name="PERF_TSE_STALL_CYCLES_LRZ_ZPLANE"/>
++	<value value="5" name="PERF_TSE_STARVE_CYCLES_PC"/>
++	<value value="6" name="PERF_TSE_INPUT_PRIM"/>
++	<value value="7" name="PERF_TSE_INPUT_NULL_PRIM"/>
++	<value value="8" name="PERF_TSE_TRIVAL_REJ_PRIM"/>
++	<value value="9" name="PERF_TSE_CLIPPED_PRIM"/>
++	<value value="10" name="PERF_TSE_ZERO_AREA_PRIM"/>
++	<value value="11" name="PERF_TSE_FACENESS_CULLED_PRIM"/>
++	<value value="12" name="PERF_TSE_ZERO_PIXEL_PRIM"/>
++	<value value="13" name="PERF_TSE_OUTPUT_NULL_PRIM"/>
++	<value value="14" name="PERF_TSE_OUTPUT_VISIBLE_PRIM"/>
++	<value value="15" name="PERF_TSE_CINVOCATION"/>
++	<value value="16" name="PERF_TSE_CPRIMITIVES"/>
++	<value value="17" name="PERF_TSE_2D_INPUT_PRIM"/>
++	<value value="18" name="PERF_TSE_2D_ALIVE_CYCLES"/>
++	<value value="19" name="PERF_TSE_CLIP_PLANES"/>
++</enum>
++
++<enum name="a6xx_ras_perfcounter_select">
++	<value value="0" name="PERF_RAS_BUSY_CYCLES"/>
++	<value value="1" name="PERF_RAS_SUPERTILE_ACTIVE_CYCLES"/>
++	<value value="2" name="PERF_RAS_STALL_CYCLES_LRZ"/>
++	<value value="3" name="PERF_RAS_STARVE_CYCLES_TSE"/>
++	<value value="4" name="PERF_RAS_SUPER_TILES"/>
++	<value value="5" name="PERF_RAS_8X4_TILES"/>
++	<value value="6" name="PERF_RAS_MASKGEN_ACTIVE"/>
++	<value value="7" name="PERF_RAS_FULLY_COVERED_SUPER_TILES"/>
++	<value value="8" name="PERF_RAS_FULLY_COVERED_8X4_TILES"/>
++	<value value="9" name="PERF_RAS_PRIM_KILLED_INVISILBE"/>
++	<value value="10" name="PERF_RAS_SUPERTILE_GEN_ACTIVE_CYCLES"/>
++	<value value="11" name="PERF_RAS_LRZ_INTF_WORKING_CYCLES"/>
++	<value value="12" name="PERF_RAS_BLOCKS"/>
++</enum>
++
++<enum name="a6xx_uche_perfcounter_select">
++	<value value="0" name="PERF_UCHE_BUSY_CYCLES"/>
++	<value value="1" name="PERF_UCHE_STALL_CYCLES_ARBITER"/>
++	<value value="2" name="PERF_UCHE_VBIF_LATENCY_CYCLES"/>
++	<value value="3" name="PERF_UCHE_VBIF_LATENCY_SAMPLES"/>
++	<value value="4" name="PERF_UCHE_VBIF_READ_BEATS_TP"/>
++	<value value="5" name="PERF_UCHE_VBIF_READ_BEATS_VFD"/>
++	<value value="6" name="PERF_UCHE_VBIF_READ_BEATS_HLSQ"/>
++	<value value="7" name="PERF_UCHE_VBIF_READ_BEATS_LRZ"/>
++	<value value="8" name="PERF_UCHE_VBIF_READ_BEATS_SP"/>
++	<value value="9" name="PERF_UCHE_READ_REQUESTS_TP"/>
++	<value value="10" name="PERF_UCHE_READ_REQUESTS_VFD"/>
++	<value value="11" name="PERF_UCHE_READ_REQUESTS_HLSQ"/>
++	<value value="12" name="PERF_UCHE_READ_REQUESTS_LRZ"/>
++	<value value="13" name="PERF_UCHE_READ_REQUESTS_SP"/>
++	<value value="14" name="PERF_UCHE_WRITE_REQUESTS_LRZ"/>
++	<value value="15" name="PERF_UCHE_WRITE_REQUESTS_SP"/>
++	<value value="16" name="PERF_UCHE_WRITE_REQUESTS_VPC"/>
++	<value value="17" name="PERF_UCHE_WRITE_REQUESTS_VSC"/>
++	<value value="18" name="PERF_UCHE_EVICTS"/>
++	<value value="19" name="PERF_UCHE_BANK_REQ0"/>
++	<value value="20" name="PERF_UCHE_BANK_REQ1"/>
++	<value value="21" name="PERF_UCHE_BANK_REQ2"/>
++	<value value="22" name="PERF_UCHE_BANK_REQ3"/>
++	<value value="23" name="PERF_UCHE_BANK_REQ4"/>
++	<value value="24" name="PERF_UCHE_BANK_REQ5"/>
++	<value value="25" name="PERF_UCHE_BANK_REQ6"/>
++	<value value="26" name="PERF_UCHE_BANK_REQ7"/>
++	<value value="27" name="PERF_UCHE_VBIF_READ_BEATS_CH0"/>
++	<value value="28" name="PERF_UCHE_VBIF_READ_BEATS_CH1"/>
++	<value value="29" name="PERF_UCHE_GMEM_READ_BEATS"/>
++	<value value="30" name="PERF_UCHE_TPH_REF_FULL"/>
++	<value value="31" name="PERF_UCHE_TPH_VICTIM_FULL"/>
++	<value value="32" name="PERF_UCHE_TPH_EXT_FULL"/>
++	<value value="33" name="PERF_UCHE_VBIF_STALL_WRITE_DATA"/>
++	<value value="34" name="PERF_UCHE_DCMP_LATENCY_SAMPLES"/>
++	<value value="35" name="PERF_UCHE_DCMP_LATENCY_CYCLES"/>
++	<value value="36" name="PERF_UCHE_VBIF_READ_BEATS_PC"/>
++	<value value="37" name="PERF_UCHE_READ_REQUESTS_PC"/>
++	<value value="38" name="PERF_UCHE_RAM_READ_REQ"/>
++	<value value="39" name="PERF_UCHE_RAM_WRITE_REQ"/>
++</enum>
++
++<enum name="a6xx_tp_perfcounter_select">
++	<value value="0" name="PERF_TP_BUSY_CYCLES"/>
++	<value value="1" name="PERF_TP_STALL_CYCLES_UCHE"/>
++	<value value="2" name="PERF_TP_LATENCY_CYCLES"/>
++	<value value="3" name="PERF_TP_LATENCY_TRANS"/>
++	<value value="4" name="PERF_TP_FLAG_CACHE_REQUEST_SAMPLES"/>
++	<value value="5" name="PERF_TP_FLAG_CACHE_REQUEST_LATENCY"/>
++	<value value="6" name="PERF_TP_L1_CACHELINE_REQUESTS"/>
++	<value value="7" name="PERF_TP_L1_CACHELINE_MISSES"/>
++	<value value="8" name="PERF_TP_SP_TP_TRANS"/>
++	<value value="9" name="PERF_TP_TP_SP_TRANS"/>
++	<value value="10" name="PERF_TP_OUTPUT_PIXELS"/>
++	<value value="11" name="PERF_TP_FILTER_WORKLOAD_16BIT"/>
++	<value value="12" name="PERF_TP_FILTER_WORKLOAD_32BIT"/>
++	<value value="13" name="PERF_TP_QUADS_RECEIVED"/>
++	<value value="14" name="PERF_TP_QUADS_OFFSET"/>
++	<value value="15" name="PERF_TP_QUADS_SHADOW"/>
++	<value value="16" name="PERF_TP_QUADS_ARRAY"/>
++	<value value="17" name="PERF_TP_QUADS_GRADIENT"/>
++	<value value="18" name="PERF_TP_QUADS_1D"/>
++	<value value="19" name="PERF_TP_QUADS_2D"/>
++	<value value="20" name="PERF_TP_QUADS_BUFFER"/>
++	<value value="21" name="PERF_TP_QUADS_3D"/>
++	<value value="22" name="PERF_TP_QUADS_CUBE"/>
++	<value value="23" name="PERF_TP_DIVERGENT_QUADS_RECEIVED"/>
++	<value value="24" name="PERF_TP_PRT_NON_RESIDENT_EVENTS"/>
++	<value value="25" name="PERF_TP_OUTPUT_PIXELS_POINT"/>
++	<value value="26" name="PERF_TP_OUTPUT_PIXELS_BILINEAR"/>
++	<value value="27" name="PERF_TP_OUTPUT_PIXELS_MIP"/>
++	<value value="28" name="PERF_TP_OUTPUT_PIXELS_ANISO"/>
++	<value value="29" name="PERF_TP_OUTPUT_PIXELS_ZERO_LOD"/>
++	<value value="30" name="PERF_TP_FLAG_CACHE_REQUESTS"/>
++	<value value="31" name="PERF_TP_FLAG_CACHE_MISSES"/>
++	<value value="32" name="PERF_TP_L1_5_L2_REQUESTS"/>
++	<value value="33" name="PERF_TP_2D_OUTPUT_PIXELS"/>
++	<value value="34" name="PERF_TP_2D_OUTPUT_PIXELS_POINT"/>
++	<value value="35" name="PERF_TP_2D_OUTPUT_PIXELS_BILINEAR"/>
++	<value value="36" name="PERF_TP_2D_FILTER_WORKLOAD_16BIT"/>
++	<value value="37" name="PERF_TP_2D_FILTER_WORKLOAD_32BIT"/>
++	<value value="38" name="PERF_TP_TPA2TPC_TRANS"/>
++	<value value="39" name="PERF_TP_L1_MISSES_ASTC_1TILE"/>
++	<value value="40" name="PERF_TP_L1_MISSES_ASTC_2TILE"/>
++	<value value="41" name="PERF_TP_L1_MISSES_ASTC_4TILE"/>
++	<value value="42" name="PERF_TP_L1_5_L2_COMPRESS_REQS"/>
++	<value value="43" name="PERF_TP_L1_5_L2_COMPRESS_MISS"/>
++	<value value="44" name="PERF_TP_L1_BANK_CONFLICT"/>
++	<value value="45" name="PERF_TP_L1_5_MISS_LATENCY_CYCLES"/>
++	<value value="46" name="PERF_TP_L1_5_MISS_LATENCY_TRANS"/>
++	<value value="47" name="PERF_TP_QUADS_CONSTANT_MULTIPLIED"/>
++	<value value="48" name="PERF_TP_FRONTEND_WORKING_CYCLES"/>
++	<value value="49" name="PERF_TP_L1_TAG_WORKING_CYCLES"/>
++	<value value="50" name="PERF_TP_L1_DATA_WRITE_WORKING_CYCLES"/>
++	<value value="51" name="PERF_TP_PRE_L1_DECOM_WORKING_CYCLES"/>
++	<value value="52" name="PERF_TP_BACKEND_WORKING_CYCLES"/>
++	<value value="53" name="PERF_TP_FLAG_CACHE_WORKING_CYCLES"/>
++	<value value="54" name="PERF_TP_L1_5_CACHE_WORKING_CYCLES"/>
++	<value value="55" name="PERF_TP_STARVE_CYCLES_SP"/>
++	<value value="56" name="PERF_TP_STARVE_CYCLES_UCHE"/>
++</enum>
++
++<enum name="a6xx_sp_perfcounter_select">
++	<value value="0" name="PERF_SP_BUSY_CYCLES"/>
++	<value value="1" name="PERF_SP_ALU_WORKING_CYCLES"/>
++	<value value="2" name="PERF_SP_EFU_WORKING_CYCLES"/>
++	<value value="3" name="PERF_SP_STALL_CYCLES_VPC"/>
++	<value value="4" name="PERF_SP_STALL_CYCLES_TP"/>
++	<value value="5" name="PERF_SP_STALL_CYCLES_UCHE"/>
++	<value value="6" name="PERF_SP_STALL_CYCLES_RB"/>
++	<value value="7" name="PERF_SP_NON_EXECUTION_CYCLES"/>
++	<value value="8" name="PERF_SP_WAVE_CONTEXTS"/>
++	<value value="9" name="PERF_SP_WAVE_CONTEXT_CYCLES"/>
++	<value value="10" name="PERF_SP_FS_STAGE_WAVE_CYCLES"/>
++	<value value="11" name="PERF_SP_FS_STAGE_WAVE_SAMPLES"/>
++	<value value="12" name="PERF_SP_VS_STAGE_WAVE_CYCLES"/>
++	<value value="13" name="PERF_SP_VS_STAGE_WAVE_SAMPLES"/>
++	<value value="14" name="PERF_SP_FS_STAGE_DURATION_CYCLES"/>
++	<value value="15" name="PERF_SP_VS_STAGE_DURATION_CYCLES"/>
++	<value value="16" name="PERF_SP_WAVE_CTRL_CYCLES"/>
++	<value value="17" name="PERF_SP_WAVE_LOAD_CYCLES"/>
++	<value value="18" name="PERF_SP_WAVE_EMIT_CYCLES"/>
++	<value value="19" name="PERF_SP_WAVE_NOP_CYCLES"/>
++	<value value="20" name="PERF_SP_WAVE_WAIT_CYCLES"/>
++	<value value="21" name="PERF_SP_WAVE_FETCH_CYCLES"/>
++	<value value="22" name="PERF_SP_WAVE_IDLE_CYCLES"/>
++	<value value="23" name="PERF_SP_WAVE_END_CYCLES"/>
++	<value value="24" name="PERF_SP_WAVE_LONG_SYNC_CYCLES"/>
++	<value value="25" name="PERF_SP_WAVE_SHORT_SYNC_CYCLES"/>
++	<value value="26" name="PERF_SP_WAVE_JOIN_CYCLES"/>
++	<value value="27" name="PERF_SP_LM_LOAD_INSTRUCTIONS"/>
++	<value value="28" name="PERF_SP_LM_STORE_INSTRUCTIONS"/>
++	<value value="29" name="PERF_SP_LM_ATOMICS"/>
++	<value value="30" name="PERF_SP_GM_LOAD_INSTRUCTIONS"/>
++	<value value="31" name="PERF_SP_GM_STORE_INSTRUCTIONS"/>
++	<value value="32" name="PERF_SP_GM_ATOMICS"/>
++	<value value="33" name="PERF_SP_VS_STAGE_TEX_INSTRUCTIONS"/>
++	<value value="34" name="PERF_SP_VS_STAGE_EFU_INSTRUCTIONS"/>
++	<value value="35" name="PERF_SP_VS_STAGE_FULL_ALU_INSTRUCTIONS"/>
++	<value value="36" name="PERF_SP_VS_STAGE_HALF_ALU_INSTRUCTIONS"/>
++	<value value="37" name="PERF_SP_FS_STAGE_TEX_INSTRUCTIONS"/>
++	<value value="38" name="PERF_SP_FS_STAGE_CFLOW_INSTRUCTIONS"/>
++	<value value="39" name="PERF_SP_FS_STAGE_EFU_INSTRUCTIONS"/>
++	<value value="40" name="PERF_SP_FS_STAGE_FULL_ALU_INSTRUCTIONS"/>
++	<value value="41" name="PERF_SP_FS_STAGE_HALF_ALU_INSTRUCTIONS"/>
++	<value value="42" name="PERF_SP_FS_STAGE_BARY_INSTRUCTIONS"/>
++	<value value="43" name="PERF_SP_VS_INSTRUCTIONS"/>
++	<value value="44" name="PERF_SP_FS_INSTRUCTIONS"/>
++	<value value="45" name="PERF_SP_ADDR_LOCK_COUNT"/>
++	<value value="46" name="PERF_SP_UCHE_READ_TRANS"/>
++	<value value="47" name="PERF_SP_UCHE_WRITE_TRANS"/>
++	<value value="48" name="PERF_SP_EXPORT_VPC_TRANS"/>
++	<value value="49" name="PERF_SP_EXPORT_RB_TRANS"/>
++	<value value="50" name="PERF_SP_PIXELS_KILLED"/>
++	<value value="51" name="PERF_SP_ICL1_REQUESTS"/>
++	<value value="52" name="PERF_SP_ICL1_MISSES"/>
++	<value value="53" name="PERF_SP_HS_INSTRUCTIONS"/>
++	<value value="54" name="PERF_SP_DS_INSTRUCTIONS"/>
++	<value value="55" name="PERF_SP_GS_INSTRUCTIONS"/>
++	<value value="56" name="PERF_SP_CS_INSTRUCTIONS"/>
++	<value value="57" name="PERF_SP_GPR_READ"/>
++	<value value="58" name="PERF_SP_GPR_WRITE"/>
++	<value value="59" name="PERF_SP_FS_STAGE_HALF_EFU_INSTRUCTIONS"/>
++	<value value="60" name="PERF_SP_VS_STAGE_HALF_EFU_INSTRUCTIONS"/>
++	<value value="61" name="PERF_SP_LM_BANK_CONFLICTS"/>
++	<value value="62" name="PERF_SP_TEX_CONTROL_WORKING_CYCLES"/>
++	<value value="63" name="PERF_SP_LOAD_CONTROL_WORKING_CYCLES"/>
++	<value value="64" name="PERF_SP_FLOW_CONTROL_WORKING_CYCLES"/>
++	<value value="65" name="PERF_SP_LM_WORKING_CYCLES"/>
++	<value value="66" name="PERF_SP_DISPATCHER_WORKING_CYCLES"/>
++	<value value="67" name="PERF_SP_SEQUENCER_WORKING_CYCLES"/>
++	<value value="68" name="PERF_SP_LOW_EFFICIENCY_STARVED_BY_TP"/>
++	<value value="69" name="PERF_SP_STARVE_CYCLES_HLSQ"/>
++	<value value="70" name="PERF_SP_NON_EXECUTION_LS_CYCLES"/>
++	<value value="71" name="PERF_SP_WORKING_EU"/>
++	<value value="72" name="PERF_SP_ANY_EU_WORKING"/>
++	<value value="73" name="PERF_SP_WORKING_EU_FS_STAGE"/>
++	<value value="74" name="PERF_SP_ANY_EU_WORKING_FS_STAGE"/>
++	<value value="75" name="PERF_SP_WORKING_EU_VS_STAGE"/>
++	<value value="76" name="PERF_SP_ANY_EU_WORKING_VS_STAGE"/>
++	<value value="77" name="PERF_SP_WORKING_EU_CS_STAGE"/>
++	<value value="78" name="PERF_SP_ANY_EU_WORKING_CS_STAGE"/>
++	<value value="79" name="PERF_SP_GPR_READ_PREFETCH"/>
++	<value value="80" name="PERF_SP_GPR_READ_CONFLICT"/>
++	<value value="81" name="PERF_SP_GPR_WRITE_CONFLICT"/>
++	<value value="82" name="PERF_SP_GM_LOAD_LATENCY_CYCLES"/>
++	<value value="83" name="PERF_SP_GM_LOAD_LATENCY_SAMPLES"/>
++	<value value="84" name="PERF_SP_EXECUTABLE_WAVES"/>
++</enum>
++
++<enum name="a6xx_rb_perfcounter_select">
++	<value value="0" name="PERF_RB_BUSY_CYCLES"/>
++	<value value="1" name="PERF_RB_STALL_CYCLES_HLSQ"/>
++	<value value="2" name="PERF_RB_STALL_CYCLES_FIFO0_FULL"/>
++	<value value="3" name="PERF_RB_STALL_CYCLES_FIFO1_FULL"/>
++	<value value="4" name="PERF_RB_STALL_CYCLES_FIFO2_FULL"/>
++	<value value="5" name="PERF_RB_STARVE_CYCLES_SP"/>
++	<value value="6" name="PERF_RB_STARVE_CYCLES_LRZ_TILE"/>
++	<value value="7" name="PERF_RB_STARVE_CYCLES_CCU"/>
++	<value value="8" name="PERF_RB_STARVE_CYCLES_Z_PLANE"/>
++	<value value="9" name="PERF_RB_STARVE_CYCLES_BARY_PLANE"/>
++	<value value="10" name="PERF_RB_Z_WORKLOAD"/>
++	<value value="11" name="PERF_RB_HLSQ_ACTIVE"/>
++	<value value="12" name="PERF_RB_Z_READ"/>
++	<value value="13" name="PERF_RB_Z_WRITE"/>
++	<value value="14" name="PERF_RB_C_READ"/>
++	<value value="15" name="PERF_RB_C_WRITE"/>
++	<value value="16" name="PERF_RB_TOTAL_PASS"/>
++	<value value="17" name="PERF_RB_Z_PASS"/>
++	<value value="18" name="PERF_RB_Z_FAIL"/>
++	<value value="19" name="PERF_RB_S_FAIL"/>
++	<value value="20" name="PERF_RB_BLENDED_FXP_COMPONENTS"/>
++	<value value="21" name="PERF_RB_BLENDED_FP16_COMPONENTS"/>
++	<value value="22" name="PERF_RB_PS_INVOCATIONS"/>
++	<value value="23" name="PERF_RB_2D_ALIVE_CYCLES"/>
++	<value value="24" name="PERF_RB_2D_STALL_CYCLES_A2D"/>
++	<value value="25" name="PERF_RB_2D_STARVE_CYCLES_SRC"/>
++	<value value="26" name="PERF_RB_2D_STARVE_CYCLES_SP"/>
++	<value value="27" name="PERF_RB_2D_STARVE_CYCLES_DST"/>
++	<value value="28" name="PERF_RB_2D_VALID_PIXELS"/>
++	<value value="29" name="PERF_RB_3D_PIXELS"/>
++	<value value="30" name="PERF_RB_BLENDER_WORKING_CYCLES"/>
++	<value value="31" name="PERF_RB_ZPROC_WORKING_CYCLES"/>
++	<value value="32" name="PERF_RB_CPROC_WORKING_CYCLES"/>
++	<value value="33" name="PERF_RB_SAMPLER_WORKING_CYCLES"/>
++	<value value="34" name="PERF_RB_STALL_CYCLES_CCU_COLOR_READ"/>
++	<value value="35" name="PERF_RB_STALL_CYCLES_CCU_COLOR_WRITE"/>
++	<value value="36" name="PERF_RB_STALL_CYCLES_CCU_DEPTH_READ"/>
++	<value value="37" name="PERF_RB_STALL_CYCLES_CCU_DEPTH_WRITE"/>
++	<value value="38" name="PERF_RB_STALL_CYCLES_VPC"/>
++	<value value="39" name="PERF_RB_2D_INPUT_TRANS"/>
++	<value value="40" name="PERF_RB_2D_OUTPUT_RB_DST_TRANS"/>
++	<value value="41" name="PERF_RB_2D_OUTPUT_RB_SRC_TRANS"/>
++	<value value="42" name="PERF_RB_BLENDED_FP32_COMPONENTS"/>
++	<value value="43" name="PERF_RB_COLOR_PIX_TILES"/>
++	<value value="44" name="PERF_RB_STALL_CYCLES_CCU"/>
++	<value value="45" name="PERF_RB_EARLY_Z_ARB3_GRANT"/>
++	<value value="46" name="PERF_RB_LATE_Z_ARB3_GRANT"/>
++	<value value="47" name="PERF_RB_EARLY_Z_SKIP_GRANT"/>
++</enum>
++
++<enum name="a6xx_vsc_perfcounter_select">
++	<value value="0" name="PERF_VSC_BUSY_CYCLES"/>
++	<value value="1" name="PERF_VSC_WORKING_CYCLES"/>
++	<value value="2" name="PERF_VSC_STALL_CYCLES_UCHE"/>
++	<value value="3" name="PERF_VSC_EOT_NUM"/>
++	<value value="4" name="PERF_VSC_INPUT_TILES"/>
++</enum>
++
++<enum name="a6xx_ccu_perfcounter_select">
++	<value value="0" name="PERF_CCU_BUSY_CYCLES"/>
++	<value value="1" name="PERF_CCU_STALL_CYCLES_RB_DEPTH_RETURN"/>
++	<value value="2" name="PERF_CCU_STALL_CYCLES_RB_COLOR_RETURN"/>
++	<value value="3" name="PERF_CCU_STARVE_CYCLES_FLAG_RETURN"/>
++	<value value="4" name="PERF_CCU_DEPTH_BLOCKS"/>
++	<value value="5" name="PERF_CCU_COLOR_BLOCKS"/>
++	<value value="6" name="PERF_CCU_DEPTH_BLOCK_HIT"/>
++	<value value="7" name="PERF_CCU_COLOR_BLOCK_HIT"/>
++	<value value="8" name="PERF_CCU_PARTIAL_BLOCK_READ"/>
++	<value value="9" name="PERF_CCU_GMEM_READ"/>
++	<value value="10" name="PERF_CCU_GMEM_WRITE"/>
++	<value value="11" name="PERF_CCU_DEPTH_READ_FLAG0_COUNT"/>
++	<value value="12" name="PERF_CCU_DEPTH_READ_FLAG1_COUNT"/>
++	<value value="13" name="PERF_CCU_DEPTH_READ_FLAG2_COUNT"/>
++	<value value="14" name="PERF_CCU_DEPTH_READ_FLAG3_COUNT"/>
++	<value value="15" name="PERF_CCU_DEPTH_READ_FLAG4_COUNT"/>
++	<value value="16" name="PERF_CCU_DEPTH_READ_FLAG5_COUNT"/>
++	<value value="17" name="PERF_CCU_DEPTH_READ_FLAG6_COUNT"/>
++	<value value="18" name="PERF_CCU_DEPTH_READ_FLAG8_COUNT"/>
++	<value value="19" name="PERF_CCU_COLOR_READ_FLAG0_COUNT"/>
++	<value value="20" name="PERF_CCU_COLOR_READ_FLAG1_COUNT"/>
++	<value value="21" name="PERF_CCU_COLOR_READ_FLAG2_COUNT"/>
++	<value value="22" name="PERF_CCU_COLOR_READ_FLAG3_COUNT"/>
++	<value value="23" name="PERF_CCU_COLOR_READ_FLAG4_COUNT"/>
++	<value value="24" name="PERF_CCU_COLOR_READ_FLAG5_COUNT"/>
++	<value value="25" name="PERF_CCU_COLOR_READ_FLAG6_COUNT"/>
++	<value value="26" name="PERF_CCU_COLOR_READ_FLAG8_COUNT"/>
++	<value value="27" name="PERF_CCU_2D_RD_REQ"/>
++	<value value="28" name="PERF_CCU_2D_WR_REQ"/>
++</enum>
++
++<enum name="a6xx_lrz_perfcounter_select">
++	<value value="0" name="PERF_LRZ_BUSY_CYCLES"/>
++	<value value="1" name="PERF_LRZ_STARVE_CYCLES_RAS"/>
++	<value value="2" name="PERF_LRZ_STALL_CYCLES_RB"/>
++	<value value="3" name="PERF_LRZ_STALL_CYCLES_VSC"/>
++	<value value="4" name="PERF_LRZ_STALL_CYCLES_VPC"/>
++	<value value="5" name="PERF_LRZ_STALL_CYCLES_FLAG_PREFETCH"/>
++	<value value="6" name="PERF_LRZ_STALL_CYCLES_UCHE"/>
++	<value value="7" name="PERF_LRZ_LRZ_READ"/>
++	<value value="8" name="PERF_LRZ_LRZ_WRITE"/>
++	<value value="9" name="PERF_LRZ_READ_LATENCY"/>
++	<value value="10" name="PERF_LRZ_MERGE_CACHE_UPDATING"/>
++	<value value="11" name="PERF_LRZ_PRIM_KILLED_BY_MASKGEN"/>
++	<value value="12" name="PERF_LRZ_PRIM_KILLED_BY_LRZ"/>
++	<value value="13" name="PERF_LRZ_VISIBLE_PRIM_AFTER_LRZ"/>
++	<value value="14" name="PERF_LRZ_FULL_8X8_TILES"/>
++	<value value="15" name="PERF_LRZ_PARTIAL_8X8_TILES"/>
++	<value value="16" name="PERF_LRZ_TILE_KILLED"/>
++	<value value="17" name="PERF_LRZ_TOTAL_PIXEL"/>
++	<value value="18" name="PERF_LRZ_VISIBLE_PIXEL_AFTER_LRZ"/>
++	<value value="19" name="PERF_LRZ_FULLY_COVERED_TILES"/>
++	<value value="20" name="PERF_LRZ_PARTIAL_COVERED_TILES"/>
++	<value value="21" name="PERF_LRZ_FEEDBACK_ACCEPT"/>
++	<value value="22" name="PERF_LRZ_FEEDBACK_DISCARD"/>
++	<value value="23" name="PERF_LRZ_FEEDBACK_STALL"/>
++	<value value="24" name="PERF_LRZ_STALL_CYCLES_RB_ZPLANE"/>
++	<value value="25" name="PERF_LRZ_STALL_CYCLES_RB_BPLANE"/>
++	<value value="26" name="PERF_LRZ_STALL_CYCLES_VC"/>
++	<value value="27" name="PERF_LRZ_RAS_MASK_TRANS"/>
++</enum>
++
++<enum name="a6xx_cmp_perfcounter_select">
++	<value value="0" name="PERF_CMPDECMP_STALL_CYCLES_ARB"/>
++	<value value="1" name="PERF_CMPDECMP_VBIF_LATENCY_CYCLES"/>
++	<value value="2" name="PERF_CMPDECMP_VBIF_LATENCY_SAMPLES"/>
++	<value value="3" name="PERF_CMPDECMP_VBIF_READ_DATA_CCU"/>
++	<value value="4" name="PERF_CMPDECMP_VBIF_WRITE_DATA_CCU"/>
++	<value value="5" name="PERF_CMPDECMP_VBIF_READ_REQUEST"/>
++	<value value="6" name="PERF_CMPDECMP_VBIF_WRITE_REQUEST"/>
++	<value value="7" name="PERF_CMPDECMP_VBIF_READ_DATA"/>
++	<value value="8" name="PERF_CMPDECMP_VBIF_WRITE_DATA"/>
++	<value value="9" name="PERF_CMPDECMP_FLAG_FETCH_CYCLES"/>
++	<value value="10" name="PERF_CMPDECMP_FLAG_FETCH_SAMPLES"/>
++	<value value="11" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG1_COUNT"/>
++	<value value="12" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG2_COUNT"/>
++	<value value="13" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG3_COUNT"/>
++	<value value="14" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG4_COUNT"/>
++	<value value="15" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG5_COUNT"/>
++	<value value="16" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG6_COUNT"/>
++	<value value="17" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG8_COUNT"/>
++	<value value="18" name="PERF_CMPDECMP_COLOR_WRITE_FLAG1_COUNT"/>
++	<value value="19" name="PERF_CMPDECMP_COLOR_WRITE_FLAG2_COUNT"/>
++	<value value="20" name="PERF_CMPDECMP_COLOR_WRITE_FLAG3_COUNT"/>
++	<value value="21" name="PERF_CMPDECMP_COLOR_WRITE_FLAG4_COUNT"/>
++	<value value="22" name="PERF_CMPDECMP_COLOR_WRITE_FLAG5_COUNT"/>
++	<value value="23" name="PERF_CMPDECMP_COLOR_WRITE_FLAG6_COUNT"/>
++	<value value="24" name="PERF_CMPDECMP_COLOR_WRITE_FLAG8_COUNT"/>
++	<value value="25" name="PERF_CMPDECMP_2D_STALL_CYCLES_VBIF_REQ"/>
++	<value value="26" name="PERF_CMPDECMP_2D_STALL_CYCLES_VBIF_WR"/>
++	<value value="27" name="PERF_CMPDECMP_2D_STALL_CYCLES_VBIF_RETURN"/>
++	<value value="28" name="PERF_CMPDECMP_2D_RD_DATA"/>
++	<value value="29" name="PERF_CMPDECMP_2D_WR_DATA"/>
++	<value value="30" name="PERF_CMPDECMP_VBIF_READ_DATA_UCHE_CH0"/>
++	<value value="31" name="PERF_CMPDECMP_VBIF_READ_DATA_UCHE_CH1"/>
++	<value value="32" name="PERF_CMPDECMP_2D_OUTPUT_TRANS"/>
++	<value value="33" name="PERF_CMPDECMP_VBIF_WRITE_DATA_UCHE"/>
++	<value value="34" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG0_COUNT"/>
++	<value value="35" name="PERF_CMPDECMP_COLOR_WRITE_FLAG0_COUNT"/>
++	<value value="36" name="PERF_CMPDECMP_COLOR_WRITE_FLAGALPHA_COUNT"/>
++	<value value="37" name="PERF_CMPDECMP_2D_BUSY_CYCLES"/>
++	<value value="38" name="PERF_CMPDECMP_2D_REORDER_STARVE_CYCLES"/>
++	<value value="39" name="PERF_CMPDECMP_2D_PIXELS"/>
++</enum>
++
++</database>
+diff --git a/drivers/gpu/drm/msm/registers/adreno/a7xx_enums.xml b/drivers/gpu/drm/msm/registers/adreno/a7xx_enums.xml
+new file mode 100644
+index 00000000000000..661b0dd0f675ba
+--- /dev/null
++++ b/drivers/gpu/drm/msm/registers/adreno/a7xx_enums.xml
+@@ -0,0 +1,223 @@
++<?xml version="1.0" encoding="UTF-8"?>
++<database xmlns="http://nouveau.freedesktop.org/"
++xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
++xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
++<import file="freedreno_copyright.xml"/>
++<import file="adreno/adreno_common.xml"/>
++<import file="adreno/adreno_pm4.xml"/>
++
++<enum name="a7xx_statetype_id">
++	<value value="0" name="A7XX_TP0_NCTX_REG"/>
++	<value value="1" name="A7XX_TP0_CTX0_3D_CVS_REG"/>
++	<value value="2" name="A7XX_TP0_CTX0_3D_CPS_REG"/>
++	<value value="3" name="A7XX_TP0_CTX1_3D_CVS_REG"/>
++	<value value="4" name="A7XX_TP0_CTX1_3D_CPS_REG"/>
++	<value value="5" name="A7XX_TP0_CTX2_3D_CPS_REG"/>
++	<value value="6" name="A7XX_TP0_CTX3_3D_CPS_REG"/>
++	<value value="9" name="A7XX_TP0_TMO_DATA"/>
++	<value value="10" name="A7XX_TP0_SMO_DATA"/>
++	<value value="11" name="A7XX_TP0_MIPMAP_BASE_DATA"/>
++	<value value="32" name="A7XX_SP_NCTX_REG"/>
++	<value value="33" name="A7XX_SP_CTX0_3D_CVS_REG"/>
++	<value value="34" name="A7XX_SP_CTX0_3D_CPS_REG"/>
++	<value value="35" name="A7XX_SP_CTX1_3D_CVS_REG"/>
++	<value value="36" name="A7XX_SP_CTX1_3D_CPS_REG"/>
++	<value value="37" name="A7XX_SP_CTX2_3D_CPS_REG"/>
++	<value value="38" name="A7XX_SP_CTX3_3D_CPS_REG"/>
++	<value value="39" name="A7XX_SP_INST_DATA"/>
++	<value value="40" name="A7XX_SP_INST_DATA_1"/>
++	<value value="41" name="A7XX_SP_LB_0_DATA"/>
++	<value value="42" name="A7XX_SP_LB_1_DATA"/>
++	<value value="43" name="A7XX_SP_LB_2_DATA"/>
++	<value value="44" name="A7XX_SP_LB_3_DATA"/>
++	<value value="45" name="A7XX_SP_LB_4_DATA"/>
++	<value value="46" name="A7XX_SP_LB_5_DATA"/>
++	<value value="47" name="A7XX_SP_LB_6_DATA"/>
++	<value value="48" name="A7XX_SP_LB_7_DATA"/>
++	<value value="49" name="A7XX_SP_CB_RAM"/>
++	<value value="50" name="A7XX_SP_LB_13_DATA"/>
++	<value value="51" name="A7XX_SP_LB_14_DATA"/>
++	<value value="52" name="A7XX_SP_INST_TAG"/>
++	<value value="53" name="A7XX_SP_INST_DATA_2"/>
++	<value value="54" name="A7XX_SP_TMO_TAG"/>
++	<value value="55" name="A7XX_SP_SMO_TAG"/>
++	<value value="56" name="A7XX_SP_STATE_DATA"/>
++	<value value="57" name="A7XX_SP_HWAVE_RAM"/>
++	<value value="58" name="A7XX_SP_L0_INST_BUF"/>
++	<value value="59" name="A7XX_SP_LB_8_DATA"/>
++	<value value="60" name="A7XX_SP_LB_9_DATA"/>
++	<value value="61" name="A7XX_SP_LB_10_DATA"/>
++	<value value="62" name="A7XX_SP_LB_11_DATA"/>
++	<value value="63" name="A7XX_SP_LB_12_DATA"/>
++	<value value="64" name="A7XX_HLSQ_DATAPATH_DSTR_META"/>
++	<value value="67" name="A7XX_HLSQ_L2STC_TAG_RAM"/>
++	<value value="68" name="A7XX_HLSQ_L2STC_INFO_CMD"/>
++	<value value="69" name="A7XX_HLSQ_CVS_BE_CTXT_BUF_RAM_TAG"/>
++	<value value="70" name="A7XX_HLSQ_CPS_BE_CTXT_BUF_RAM_TAG"/>
++	<value value="71" name="A7XX_HLSQ_GFX_CVS_BE_CTXT_BUF_RAM"/>
++	<value value="72" name="A7XX_HLSQ_GFX_CPS_BE_CTXT_BUF_RAM"/>
++	<value value="73" name="A7XX_HLSQ_CHUNK_CVS_RAM"/>
++	<value value="74" name="A7XX_HLSQ_CHUNK_CPS_RAM"/>
++	<value value="75" name="A7XX_HLSQ_CHUNK_CVS_RAM_TAG"/>
++	<value value="76" name="A7XX_HLSQ_CHUNK_CPS_RAM_TAG"/>
++	<value value="77" name="A7XX_HLSQ_ICB_CVS_CB_BASE_TAG"/>
++	<value value="78" name="A7XX_HLSQ_ICB_CPS_CB_BASE_TAG"/>
++	<value value="79" name="A7XX_HLSQ_CVS_MISC_RAM"/>
++	<value value="80" name="A7XX_HLSQ_CPS_MISC_RAM"/>
++	<value value="81" name="A7XX_HLSQ_CPS_MISC_RAM_1"/>
++	<value value="82" name="A7XX_HLSQ_INST_RAM"/>
++	<value value="83" name="A7XX_HLSQ_GFX_CVS_CONST_RAM"/>
++	<value value="84" name="A7XX_HLSQ_GFX_CPS_CONST_RAM"/>
++	<value value="85" name="A7XX_HLSQ_CVS_MISC_RAM_TAG"/>
++	<value value="86" name="A7XX_HLSQ_CPS_MISC_RAM_TAG"/>
++	<value value="87" name="A7XX_HLSQ_INST_RAM_TAG"/>
++	<value value="88" name="A7XX_HLSQ_GFX_CVS_CONST_RAM_TAG"/>
++	<value value="89" name="A7XX_HLSQ_GFX_CPS_CONST_RAM_TAG"/>
++	<value value="90" name="A7XX_HLSQ_GFX_LOCAL_MISC_RAM"/>
++	<value value="91" name="A7XX_HLSQ_GFX_LOCAL_MISC_RAM_TAG"/>
++	<value value="92" name="A7XX_HLSQ_INST_RAM_1"/>
++	<value value="93" name="A7XX_HLSQ_STPROC_META"/>
++	<value value="94" name="A7XX_HLSQ_BV_BE_META"/>
++	<value value="95" name="A7XX_HLSQ_INST_RAM_2"/>
++	<value value="96" name="A7XX_HLSQ_DATAPATH_META"/>
++	<value value="97" name="A7XX_HLSQ_FRONTEND_META"/>
++	<value value="98" name="A7XX_HLSQ_INDIRECT_META"/>
++	<value value="99" name="A7XX_HLSQ_BACKEND_META"/>
++</enum>
++
++<enum name="a7xx_state_location">
++	<value value="0" name="A7XX_HLSQ_STATE"/>
++	<value value="1" name="A7XX_HLSQ_DP"/>
++	<value value="2" name="A7XX_SP_TOP"/>
++	<value value="3" name="A7XX_USPTP"/>
++	<value value="4" name="A7XX_HLSQ_DP_STR"/>
++</enum>
++
++<enum name="a7xx_pipe">
++	<value value="0" name="A7XX_PIPE_NONE"/>
++	<value value="1" name="A7XX_PIPE_BR"/>
++	<value value="2" name="A7XX_PIPE_BV"/>
++	<value value="3" name="A7XX_PIPE_LPAC"/>
++</enum>
++
++<enum name="a7xx_cluster">
++	<value value="0" name="A7XX_CLUSTER_NONE"/>
++	<value value="1" name="A7XX_CLUSTER_FE"/>
++	<value value="2" name="A7XX_CLUSTER_SP_VS"/>
++	<value value="3" name="A7XX_CLUSTER_PC_VS"/>
++	<value value="4" name="A7XX_CLUSTER_GRAS"/>
++	<value value="5" name="A7XX_CLUSTER_SP_PS"/>
++	<value value="6" name="A7XX_CLUSTER_VPC_PS"/>
++	<value value="7" name="A7XX_CLUSTER_PS"/>
++</enum>
++
++<enum name="a7xx_debugbus_id">
++	<value value="1" name="A7XX_DBGBUS_CP_0_0"/>
++	<value value="2" name="A7XX_DBGBUS_CP_0_1"/>
++	<value value="3" name="A7XX_DBGBUS_RBBM"/>
++	<value value="5" name="A7XX_DBGBUS_GBIF_GX"/>
++	<value value="6" name="A7XX_DBGBUS_GBIF_CX"/>
++	<value value="7" name="A7XX_DBGBUS_HLSQ"/>
++	<value value="9" name="A7XX_DBGBUS_UCHE_0"/>
++	<value value="10" name="A7XX_DBGBUS_UCHE_1"/>
++	<value value="13" name="A7XX_DBGBUS_TESS_BR"/>
++	<value value="14" name="A7XX_DBGBUS_TESS_BV"/>
++	<value value="17" name="A7XX_DBGBUS_PC_BR"/>
++	<value value="18" name="A7XX_DBGBUS_PC_BV"/>
++	<value value="21" name="A7XX_DBGBUS_VFDP_BR"/>
++	<value value="22" name="A7XX_DBGBUS_VFDP_BV"/>
++	<value value="25" name="A7XX_DBGBUS_VPC_BR"/>
++	<value value="26" name="A7XX_DBGBUS_VPC_BV"/>
++	<value value="29" name="A7XX_DBGBUS_TSE_BR"/>
++	<value value="30" name="A7XX_DBGBUS_TSE_BV"/>
++	<value value="33" name="A7XX_DBGBUS_RAS_BR"/>
++	<value value="34" name="A7XX_DBGBUS_RAS_BV"/>
++	<value value="37" name="A7XX_DBGBUS_VSC"/>
++	<value value="39" name="A7XX_DBGBUS_COM_0"/>
++	<value value="43" name="A7XX_DBGBUS_LRZ_BR"/>
++	<value value="44" name="A7XX_DBGBUS_LRZ_BV"/>
++	<value value="47" name="A7XX_DBGBUS_UFC_0"/>
++	<value value="48" name="A7XX_DBGBUS_UFC_1"/>
++	<value value="55" name="A7XX_DBGBUS_GMU_GX"/>
++	<value value="59" name="A7XX_DBGBUS_DBGC"/>
++	<value value="60" name="A7XX_DBGBUS_CX"/>
++	<value value="61" name="A7XX_DBGBUS_GMU_CX"/>
++	<value value="62" name="A7XX_DBGBUS_GPC_BR"/>
++	<value value="63" name="A7XX_DBGBUS_GPC_BV"/>
++	<value value="66" name="A7XX_DBGBUS_LARC"/>
++	<value value="68" name="A7XX_DBGBUS_HLSQ_SPTP"/>
++	<value value="70" name="A7XX_DBGBUS_RB_0"/>
++	<value value="71" name="A7XX_DBGBUS_RB_1"/>
++	<value value="72" name="A7XX_DBGBUS_RB_2"/>
++	<value value="73" name="A7XX_DBGBUS_RB_3"/>
++	<value value="74" name="A7XX_DBGBUS_RB_4"/>
++	<value value="75" name="A7XX_DBGBUS_RB_5"/>
++	<value value="102" name="A7XX_DBGBUS_UCHE_WRAPPER"/>
++	<value value="106" name="A7XX_DBGBUS_CCU_0"/>
++	<value value="107" name="A7XX_DBGBUS_CCU_1"/>
++	<value value="108" name="A7XX_DBGBUS_CCU_2"/>
++	<value value="109" name="A7XX_DBGBUS_CCU_3"/>
++	<value value="110" name="A7XX_DBGBUS_CCU_4"/>
++	<value value="111" name="A7XX_DBGBUS_CCU_5"/>
++	<value value="138" name="A7XX_DBGBUS_VFD_BR_0"/>
++	<value value="139" name="A7XX_DBGBUS_VFD_BR_1"/>
++	<value value="140" name="A7XX_DBGBUS_VFD_BR_2"/>
++	<value value="141" name="A7XX_DBGBUS_VFD_BR_3"/>
++	<value value="142" name="A7XX_DBGBUS_VFD_BR_4"/>
++	<value value="143" name="A7XX_DBGBUS_VFD_BR_5"/>
++	<value value="144" name="A7XX_DBGBUS_VFD_BR_6"/>
++	<value value="145" name="A7XX_DBGBUS_VFD_BR_7"/>
++	<value value="202" name="A7XX_DBGBUS_VFD_BV_0"/>
++	<value value="203" name="A7XX_DBGBUS_VFD_BV_1"/>
++	<value value="204" name="A7XX_DBGBUS_VFD_BV_2"/>
++	<value value="205" name="A7XX_DBGBUS_VFD_BV_3"/>
++	<value value="234" name="A7XX_DBGBUS_USP_0"/>
++	<value value="235" name="A7XX_DBGBUS_USP_1"/>
++	<value value="236" name="A7XX_DBGBUS_USP_2"/>
++	<value value="237" name="A7XX_DBGBUS_USP_3"/>
++	<value value="238" name="A7XX_DBGBUS_USP_4"/>
++	<value value="239" name="A7XX_DBGBUS_USP_5"/>
++	<value value="266" name="A7XX_DBGBUS_TP_0"/>
++	<value value="267" name="A7XX_DBGBUS_TP_1"/>
++	<value value="268" name="A7XX_DBGBUS_TP_2"/>
++	<value value="269" name="A7XX_DBGBUS_TP_3"/>
++	<value value="270" name="A7XX_DBGBUS_TP_4"/>
++	<value value="271" name="A7XX_DBGBUS_TP_5"/>
++	<value value="272" name="A7XX_DBGBUS_TP_6"/>
++	<value value="273" name="A7XX_DBGBUS_TP_7"/>
++	<value value="274" name="A7XX_DBGBUS_TP_8"/>
++	<value value="275" name="A7XX_DBGBUS_TP_9"/>
++	<value value="276" name="A7XX_DBGBUS_TP_10"/>
++	<value value="277" name="A7XX_DBGBUS_TP_11"/>
++	<value value="330" name="A7XX_DBGBUS_USPTP_0"/>
++	<value value="331" name="A7XX_DBGBUS_USPTP_1"/>
++	<value value="332" name="A7XX_DBGBUS_USPTP_2"/>
++	<value value="333" name="A7XX_DBGBUS_USPTP_3"/>
++	<value value="334" name="A7XX_DBGBUS_USPTP_4"/>
++	<value value="335" name="A7XX_DBGBUS_USPTP_5"/>
++	<value value="336" name="A7XX_DBGBUS_USPTP_6"/>
++	<value value="337" name="A7XX_DBGBUS_USPTP_7"/>
++	<value value="338" name="A7XX_DBGBUS_USPTP_8"/>
++	<value value="339" name="A7XX_DBGBUS_USPTP_9"/>
++	<value value="340" name="A7XX_DBGBUS_USPTP_10"/>
++	<value value="341" name="A7XX_DBGBUS_USPTP_11"/>
++	<value value="396" name="A7XX_DBGBUS_CCHE_0"/>
++	<value value="397" name="A7XX_DBGBUS_CCHE_1"/>
++	<value value="398" name="A7XX_DBGBUS_CCHE_2"/>
++	<value value="408" name="A7XX_DBGBUS_VPC_DSTR_0"/>
++	<value value="409" name="A7XX_DBGBUS_VPC_DSTR_1"/>
++	<value value="410" name="A7XX_DBGBUS_VPC_DSTR_2"/>
++	<value value="411" name="A7XX_DBGBUS_HLSQ_DP_STR_0"/>
++	<value value="412" name="A7XX_DBGBUS_HLSQ_DP_STR_1"/>
++	<value value="413" name="A7XX_DBGBUS_HLSQ_DP_STR_2"/>
++	<value value="414" name="A7XX_DBGBUS_HLSQ_DP_STR_3"/>
++	<value value="415" name="A7XX_DBGBUS_HLSQ_DP_STR_4"/>
++	<value value="416" name="A7XX_DBGBUS_HLSQ_DP_STR_5"/>
++	<value value="443" name="A7XX_DBGBUS_UFC_DSTR_0"/>
++	<value value="444" name="A7XX_DBGBUS_UFC_DSTR_1"/>
++	<value value="445" name="A7XX_DBGBUS_UFC_DSTR_2"/>
++	<value value="446" name="A7XX_DBGBUS_CGC_SUBCORE"/>
++	<value value="447" name="A7XX_DBGBUS_CGC_CORE"/>
++</enum>
++
++</database>
+diff --git a/drivers/gpu/drm/msm/registers/adreno/a7xx_perfcntrs.xml b/drivers/gpu/drm/msm/registers/adreno/a7xx_perfcntrs.xml
+new file mode 100644
+index 00000000000000..9bf78b0a854b12
+--- /dev/null
++++ b/drivers/gpu/drm/msm/registers/adreno/a7xx_perfcntrs.xml
+@@ -0,0 +1,1030 @@
++<?xml version="1.0" encoding="UTF-8"?>
++<database xmlns="http://nouveau.freedesktop.org/"
++xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
++xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
++<import file="freedreno_copyright.xml"/>
++<import file="adreno/adreno_common.xml"/>
++<import file="adreno/adreno_pm4.xml"/>
++
++<enum name="a7xx_cp_perfcounter_select">
++	<value value="0" name="A7XX_PERF_CP_ALWAYS_COUNT"/>
++	<value value="1" name="A7XX_PERF_CP_BUSY_GFX_CORE_IDLE"/>
++	<value value="2" name="A7XX_PERF_CP_BUSY_CYCLES"/>
++	<value value="3" name="A7XX_PERF_CP_NUM_PREEMPTIONS"/>
++	<value value="4" name="A7XX_PERF_CP_PREEMPTION_REACTION_DELAY"/>
++	<value value="5" name="A7XX_PERF_CP_PREEMPTION_SWITCH_OUT_TIME"/>
++	<value value="6" name="A7XX_PERF_CP_PREEMPTION_SWITCH_IN_TIME"/>
++	<value value="7" name="A7XX_PERF_CP_DEAD_DRAWS_IN_BIN_RENDER"/>
++	<value value="8" name="A7XX_PERF_CP_PREDICATED_DRAWS_KILLED"/>
++	<value value="9" name="A7XX_PERF_CP_MODE_SWITCH"/>
++	<value value="10" name="A7XX_PERF_CP_ZPASS_DONE"/>
++	<value value="11" name="A7XX_PERF_CP_CONTEXT_DONE"/>
++	<value value="12" name="A7XX_PERF_CP_CACHE_FLUSH"/>
++	<value value="13" name="A7XX_PERF_CP_LONG_PREEMPTIONS"/>
++	<value value="14" name="A7XX_PERF_CP_SQE_I_CACHE_STARVE"/>
++	<value value="15" name="A7XX_PERF_CP_SQE_IDLE"/>
++	<value value="16" name="A7XX_PERF_CP_SQE_PM4_STARVE_RB_IB"/>
++	<value value="17" name="A7XX_PERF_CP_SQE_PM4_STARVE_SDS"/>
++	<value value="18" name="A7XX_PERF_CP_SQE_MRB_STARVE"/>
++	<value value="19" name="A7XX_PERF_CP_SQE_RRB_STARVE"/>
++	<value value="20" name="A7XX_PERF_CP_SQE_VSD_STARVE"/>
++	<value value="21" name="A7XX_PERF_CP_VSD_DECODE_STARVE"/>
++	<value value="22" name="A7XX_PERF_CP_SQE_PIPE_OUT_STALL"/>
++	<value value="23" name="A7XX_PERF_CP_SQE_SYNC_STALL"/>
++	<value value="24" name="A7XX_PERF_CP_SQE_PM4_WFI_STALL"/>
++	<value value="25" name="A7XX_PERF_CP_SQE_SYS_WFI_STALL"/>
++	<value value="26" name="A7XX_PERF_CP_SQE_T4_EXEC"/>
++	<value value="27" name="A7XX_PERF_CP_SQE_LOAD_STATE_EXEC"/>
++	<value value="28" name="A7XX_PERF_CP_SQE_SAVE_SDS_STATE"/>
++	<value value="29" name="A7XX_PERF_CP_SQE_DRAW_EXEC"/>
++	<value value="30" name="A7XX_PERF_CP_SQE_CTXT_REG_BUNCH_EXEC"/>
++	<value value="31" name="A7XX_PERF_CP_SQE_EXEC_PROFILED"/>
++	<value value="32" name="A7XX_PERF_CP_MEMORY_POOL_EMPTY"/>
++	<value value="33" name="A7XX_PERF_CP_MEMORY_POOL_SYNC_STALL"/>
++	<value value="34" name="A7XX_PERF_CP_MEMORY_POOL_ABOVE_THRESH"/>
++	<value value="35" name="A7XX_PERF_CP_AHB_WR_STALL_PRE_DRAWS"/>
++	<value value="36" name="A7XX_PERF_CP_AHB_STALL_SQE_GMU"/>
++	<value value="37" name="A7XX_PERF_CP_AHB_STALL_SQE_WR_OTHER"/>
++	<value value="38" name="A7XX_PERF_CP_AHB_STALL_SQE_RD_OTHER"/>
++	<value value="39" name="A7XX_PERF_CP_CLUSTER0_EMPTY"/>
++	<value value="40" name="A7XX_PERF_CP_CLUSTER1_EMPTY"/>
++	<value value="41" name="A7XX_PERF_CP_CLUSTER2_EMPTY"/>
++	<value value="42" name="A7XX_PERF_CP_CLUSTER3_EMPTY"/>
++	<value value="43" name="A7XX_PERF_CP_CLUSTER4_EMPTY"/>
++	<value value="44" name="A7XX_PERF_CP_CLUSTER5_EMPTY"/>
++	<value value="45" name="A7XX_PERF_CP_PM4_DATA"/>
++	<value value="46" name="A7XX_PERF_CP_PM4_HEADERS"/>
++	<value value="47" name="A7XX_PERF_CP_VBIF_READ_BEATS"/>
++	<value value="48" name="A7XX_PERF_CP_VBIF_WRITE_BEATS"/>
++	<value value="49" name="A7XX_PERF_CP_SQE_INSTR_COUNTER"/>
++	<value value="50" name="A7XX_PERF_CP_RESERVED_50"/>
++	<value value="51" name="A7XX_PERF_CP_RESERVED_51"/>
++	<value value="52" name="A7XX_PERF_CP_RESERVED_52"/>
++	<value value="53" name="A7XX_PERF_CP_RESERVED_53"/>
++	<value value="54" name="A7XX_PERF_CP_RESERVED_54"/>
++	<value value="55" name="A7XX_PERF_CP_RESERVED_55"/>
++	<value value="56" name="A7XX_PERF_CP_RESERVED_56"/>
++	<value value="57" name="A7XX_PERF_CP_RESERVED_57"/>
++	<value value="58" name="A7XX_PERF_CP_RESERVED_58"/>
++	<value value="59" name="A7XX_PERF_CP_RESERVED_59"/>
++	<value value="60" name="A7XX_PERF_CP_CLUSTER0_FULL"/>
++	<value value="61" name="A7XX_PERF_CP_CLUSTER1_FULL"/>
++	<value value="62" name="A7XX_PERF_CP_CLUSTER2_FULL"/>
++	<value value="63" name="A7XX_PERF_CP_CLUSTER3_FULL"/>
++	<value value="64" name="A7XX_PERF_CP_CLUSTER4_FULL"/>
++	<value value="65" name="A7XX_PERF_CP_CLUSTER5_FULL"/>
++	<value value="66" name="A7XX_PERF_CP_CLUSTER6_FULL"/>
++	<value value="67" name="A7XX_PERF_CP_CLUSTER6_EMPTY"/>
++	<value value="68" name="A7XX_PERF_CP_ICACHE_MISSES"/>
++	<value value="69" name="A7XX_PERF_CP_ICACHE_HITS"/>
++	<value value="70" name="A7XX_PERF_CP_ICACHE_STALL"/>
++	<value value="71" name="A7XX_PERF_CP_DCACHE_MISSES"/>
++	<value value="72" name="A7XX_PERF_CP_DCACHE_HITS"/>
++	<value value="73" name="A7XX_PERF_CP_DCACHE_STALLS"/>
++	<value value="74" name="A7XX_PERF_CP_AQE_SQE_STALL"/>
++	<value value="75" name="A7XX_PERF_CP_SQE_AQE_STARVE"/>
++	<value value="76" name="A7XX_PERF_CP_PREEMPT_LATENCY"/>
++	<value value="77" name="A7XX_PERF_CP_SQE_MD8_STALL_CYCLES"/>
++	<value value="78" name="A7XX_PERF_CP_SQE_MESH_EXEC_CYCLES"/>
++	<value value="79" name="A7XX_PERF_CP_AQE_NUM_AS_CHUNKS"/>
++	<value value="80" name="A7XX_PERF_CP_AQE_NUM_MS_CHUNKS"/>
++</enum>
++
++<enum name="a7xx_rbbm_perfcounter_select">
++	<value value="0" name="A7XX_PERF_RBBM_ALWAYS_COUNT"/>
++	<value value="1" name="A7XX_PERF_RBBM_ALWAYS_ON"/>
++	<value value="2" name="A7XX_PERF_RBBM_TSE_BUSY"/>
++	<value value="3" name="A7XX_PERF_RBBM_RAS_BUSY"/>
++	<value value="4" name="A7XX_PERF_RBBM_PC_DCALL_BUSY"/>
++	<value value="5" name="A7XX_PERF_RBBM_PC_VSD_BUSY"/>
++	<value value="6" name="A7XX_PERF_RBBM_STATUS_MASKED"/>
++	<value value="7" name="A7XX_PERF_RBBM_COM_BUSY"/>
++	<value value="8" name="A7XX_PERF_RBBM_DCOM_BUSY"/>
++	<value value="9" name="A7XX_PERF_RBBM_VBIF_BUSY"/>
++	<value value="10" name="A7XX_PERF_RBBM_VSC_BUSY"/>
++	<value value="11" name="A7XX_PERF_RBBM_TESS_BUSY"/>
++	<value value="12" name="A7XX_PERF_RBBM_UCHE_BUSY"/>
++	<value value="13" name="A7XX_PERF_RBBM_HLSQ_BUSY"/>
++</enum>
++
++<enum name="a7xx_pc_perfcounter_select">
++	<value value="0" name="A7XX_PERF_PC_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_PC_WORKING_CYCLES"/>
++	<value value="2" name="A7XX_PERF_PC_STALL_CYCLES_VFD"/>
++	<value value="3" name="A7XX_PERF_PC_RESERVED"/>
++	<value value="4" name="A7XX_PERF_PC_STALL_CYCLES_VPC"/>
++	<value value="5" name="A7XX_PERF_PC_STALL_CYCLES_UCHE"/>
++	<value value="6" name="A7XX_PERF_PC_STALL_CYCLES_TESS"/>
++	<value value="7" name="A7XX_PERF_PC_STALL_CYCLES_VFD_ONLY"/>
++	<value value="8" name="A7XX_PERF_PC_STALL_CYCLES_VPC_ONLY"/>
++	<value value="9" name="A7XX_PERF_PC_PASS1_TF_STALL_CYCLES"/>
++	<value value="10" name="A7XX_PERF_PC_STARVE_CYCLES_FOR_INDEX"/>
++	<value value="11" name="A7XX_PERF_PC_STARVE_CYCLES_FOR_TESS_FACTOR"/>
++	<value value="12" name="A7XX_PERF_PC_STARVE_CYCLES_FOR_VIZ_STREAM"/>
++	<value value="13" name="A7XX_PERF_PC_STARVE_CYCLES_DI"/>
++	<value value="14" name="A7XX_PERF_PC_VIS_STREAMS_LOADED"/>
++	<value value="15" name="A7XX_PERF_PC_INSTANCES"/>
++	<value value="16" name="A7XX_PERF_PC_VPC_PRIMITIVES"/>
++	<value value="17" name="A7XX_PERF_PC_DEAD_PRIM"/>
++	<value value="18" name="A7XX_PERF_PC_LIVE_PRIM"/>
++	<value value="19" name="A7XX_PERF_PC_VERTEX_HITS"/>
++	<value value="20" name="A7XX_PERF_PC_IA_VERTICES"/>
++	<value value="21" name="A7XX_PERF_PC_IA_PRIMITIVES"/>
++	<value value="22" name="A7XX_PERF_PC_RESERVED_22"/>
++	<value value="23" name="A7XX_PERF_PC_HS_INVOCATIONS"/>
++	<value value="24" name="A7XX_PERF_PC_DS_INVOCATIONS"/>
++	<value value="25" name="A7XX_PERF_PC_VS_INVOCATIONS"/>
++	<value value="26" name="A7XX_PERF_PC_GS_INVOCATIONS"/>
++	<value value="27" name="A7XX_PERF_PC_DS_PRIMITIVES"/>
++	<value value="28" name="A7XX_PERF_PC_3D_DRAWCALLS"/>
++	<value value="29" name="A7XX_PERF_PC_2D_DRAWCALLS"/>
++	<value value="30" name="A7XX_PERF_PC_NON_DRAWCALL_GLOBAL_EVENTS"/>
++	<value value="31" name="A7XX_PERF_PC_TESS_BUSY_CYCLES"/>
++	<value value="32" name="A7XX_PERF_PC_TESS_WORKING_CYCLES"/>
++	<value value="33" name="A7XX_PERF_PC_TESS_STALL_CYCLES_PC"/>
++	<value value="34" name="A7XX_PERF_PC_TESS_STARVE_CYCLES_PC"/>
++	<value value="35" name="A7XX_PERF_PC_TESS_SINGLE_PRIM_CYCLES"/>
++	<value value="36" name="A7XX_PERF_PC_TESS_PC_UV_TRANS"/>
++	<value value="37" name="A7XX_PERF_PC_TESS_PC_UV_PATCHES"/>
++	<value value="38" name="A7XX_PERF_PC_TESS_FACTOR_TRANS"/>
++	<value value="39" name="A7XX_PERF_PC_TAG_CHECKED_VERTICES"/>
++	<value value="40" name="A7XX_PERF_PC_MESH_VS_WAVES"/>
++	<value value="41" name="A7XX_PERF_PC_MESH_DRAWS"/>
++	<value value="42" name="A7XX_PERF_PC_MESH_DEAD_DRAWS"/>
++	<value value="43" name="A7XX_PERF_PC_MESH_MVIS_EN_DRAWS"/>
++	<value value="44" name="A7XX_PERF_PC_MESH_DEAD_PRIM"/>
++	<value value="45" name="A7XX_PERF_PC_MESH_LIVE_PRIM"/>
++	<value value="46" name="A7XX_PERF_PC_MESH_PA_EN_PRIM"/>
++	<value value="47" name="A7XX_PERF_PC_STARVE_CYCLES_FOR_MVIS_STREAM"/>
++	<value value="48" name="A7XX_PERF_PC_STARVE_CYCLES_PREDRAW"/>
++	<value value="49" name="A7XX_PERF_PC_STALL_CYCLES_COMPUTE_GFX"/>
++	<value value="50" name="A7XX_PERF_PC_STALL_CYCLES_GFX_COMPUTE"/>
++	<value value="51" name="A7XX_PERF_PC_TESS_PC_MULTI_PATCH_TRANS"/>
++</enum>
++
++<enum name="a7xx_vfd_perfcounter_select">
++	<value value="0" name="A7XX_PERF_VFD_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_VFD_STALL_CYCLES_UCHE"/>
++	<value value="2" name="A7XX_PERF_VFD_STALL_CYCLES_VPC_ALLOC"/>
++	<value value="3" name="A7XX_PERF_VFD_STALL_CYCLES_SP_INFO"/>
++	<value value="4" name="A7XX_PERF_VFD_STALL_CYCLES_SP_ATTR"/>
++	<value value="5" name="A7XX_PERF_VFD_STARVE_CYCLES_UCHE"/>
++	<value value="6" name="A7XX_PERF_VFD_RBUFFER_FULL"/>
++	<value value="7" name="A7XX_PERF_VFD_ATTR_INFO_FIFO_FULL"/>
++	<value value="8" name="A7XX_PERF_VFD_DECODED_ATTRIBUTE_BYTES"/>
++	<value value="9" name="A7XX_PERF_VFD_NUM_ATTRIBUTES"/>
++	<value value="10" name="A7XX_PERF_VFD_UPPER_SHADER_FIBERS"/>
++	<value value="11" name="A7XX_PERF_VFD_LOWER_SHADER_FIBERS"/>
++	<value value="12" name="A7XX_PERF_VFD_MODE_0_FIBERS"/>
++	<value value="13" name="A7XX_PERF_VFD_MODE_1_FIBERS"/>
++	<value value="14" name="A7XX_PERF_VFD_MODE_2_FIBERS"/>
++	<value value="15" name="A7XX_PERF_VFD_MODE_3_FIBERS"/>
++	<value value="16" name="A7XX_PERF_VFD_MODE_4_FIBERS"/>
++	<value value="17" name="A7XX_PERF_VFD_TOTAL_VERTICES"/>
++	<value value="18" name="A7XX_PERF_VFDP_STALL_CYCLES_VFD"/>
++	<value value="19" name="A7XX_PERF_VFDP_STALL_CYCLES_VFD_INDEX"/>
++	<value value="20" name="A7XX_PERF_VFDP_STALL_CYCLES_VFD_PROG"/>
++	<value value="21" name="A7XX_PERF_VFDP_STARVE_CYCLES_PC"/>
++	<value value="22" name="A7XX_PERF_VFDP_VS_STAGE_WAVES"/>
++	<value value="23" name="A7XX_PERF_VFD_STALL_CYCLES_PRG_END_FE"/>
++	<value value="24" name="A7XX_PERF_VFD_STALL_CYCLES_CBSYNC"/>
++</enum>
++
++<enum name="a7xx_hlsq_perfcounter_select">
++	<value value="0" name="A7XX_PERF_HLSQ_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_HLSQ_STALL_CYCLES_UCHE"/>
++	<value value="2" name="A7XX_PERF_HLSQ_STALL_CYCLES_SP_STATE"/>
++	<value value="3" name="A7XX_PERF_HLSQ_STALL_CYCLES_SP_FS_STAGE"/>
++	<value value="4" name="A7XX_PERF_HLSQ_UCHE_LATENCY_CYCLES"/>
++	<value value="5" name="A7XX_PERF_HLSQ_UCHE_LATENCY_COUNT"/>
++	<value value="6" name="A7XX_PERF_HLSQ_RESERVED_6"/>
++	<value value="7" name="A7XX_PERF_HLSQ_RESERVED_7"/>
++	<value value="8" name="A7XX_PERF_HLSQ_RESERVED_8"/>
++	<value value="9" name="A7XX_PERF_HLSQ_RESERVED_9"/>
++	<value value="10" name="A7XX_PERF_HLSQ_COMPUTE_DRAWCALLS"/>
++	<value value="11" name="A7XX_PERF_HLSQ_FS_DATA_WAIT_PROGRAMMING"/>
++	<value value="12" name="A7XX_PERF_HLSQ_DUAL_FS_PROG_ACTIVE"/>
++	<value value="13" name="A7XX_PERF_HLSQ_DUAL_VS_PROG_ACTIVE"/>
++	<value value="14" name="A7XX_PERF_HLSQ_FS_BATCH_COUNT_ZERO"/>
++	<value value="15" name="A7XX_PERF_HLSQ_VS_BATCH_COUNT_ZERO"/>
++	<value value="16" name="A7XX_PERF_HLSQ_WAVE_PENDING_NO_QUAD"/>
++	<value value="17" name="A7XX_PERF_HLSQ_WAVE_PENDING_NO_PRIM_BASE"/>
++	<value value="18" name="A7XX_PERF_HLSQ_STALL_CYCLES_VPC"/>
++	<value value="19" name="A7XX_PERF_HLSQ_RESERVED_19"/>
++	<value value="20" name="A7XX_PERF_HLSQ_DRAW_MODE_SWITCH_VSFS_SYNC"/>
++	<value value="21" name="A7XX_PERF_HLSQ_VSBR_STALL_CYCLES"/>
++	<value value="22" name="A7XX_PERF_HLSQ_FS_STALL_CYCLES"/>
++	<value value="23" name="A7XX_PERF_HLSQ_LPAC_STALL_CYCLES"/>
++	<value value="24" name="A7XX_PERF_HLSQ_BV_STALL_CYCLES"/>
++	<value value="25" name="A7XX_PERF_HLSQ_VSBR_DEREF_CYCLES"/>
++	<value value="26" name="A7XX_PERF_HLSQ_FS_DEREF_CYCLES"/>
++	<value value="27" name="A7XX_PERF_HLSQ_LPAC_DEREF_CYCLES"/>
++	<value value="28" name="A7XX_PERF_HLSQ_BV_DEREF_CYCLES"/>
++	<value value="29" name="A7XX_PERF_HLSQ_VSBR_S2W_CYCLES"/>
++	<value value="30" name="A7XX_PERF_HLSQ_FS_S2W_CYCLES"/>
++	<value value="31" name="A7XX_PERF_HLSQ_LPAC_S2W_CYCLES"/>
++	<value value="32" name="A7XX_PERF_HLSQ_BV_S2W_CYCLES"/>
++	<value value="33" name="A7XX_PERF_HLSQ_VSBR_WAIT_FS_S2W"/>
++	<value value="34" name="A7XX_PERF_HLSQ_FS_WAIT_VS_S2W"/>
++	<value value="35" name="A7XX_PERF_HLSQ_LPAC_WAIT_VS_S2W"/>
++	<value value="36" name="A7XX_PERF_HLSQ_BV_WAIT_FS_S2W"/>
++	<value value="37" name="A7XX_PERF_HLSQ_VS_WAIT_CONST_RESOURCE"/>
++	<value value="38" name="A7XX_PERF_HLSQ_FS_WAIT_SAME_VS_S2W"/>
++	<value value="39" name="A7XX_PERF_HLSQ_FS_STARVING_SP"/>
++	<value value="40" name="A7XX_PERF_HLSQ_VS_DATA_WAIT_PROGRAMMING"/>
++	<value value="41" name="A7XX_PERF_HLSQ_BV_DATA_WAIT_PROGRAMMING"/>
++	<value value="42" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXTS_VS"/>
++	<value value="43" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXT_CYCLES_VS"/>
++	<value value="44" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXTS_FS"/>
++	<value value="45" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXT_CYCLES_FS"/>
++	<value value="46" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXTS_BV"/>
++	<value value="47" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXT_CYCLES_BV"/>
++	<value value="48" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXTS_LPAC"/>
++	<value value="49" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXT_CYCLES_LPAC"/>
++	<value value="50" name="A7XX_PERF_HLSQ_SPTROC_STCHE_WARMUP_INC_VS"/>
++	<value value="51" name="A7XX_PERF_HLSQ_SPTROC_STCHE_WARMUP_INC_FS"/>
++	<value value="52" name="A7XX_PERF_HLSQ_SPTROC_STCHE_WARMUP_INC_BV"/>
++	<value value="53" name="A7XX_PERF_HLSQ_SPTROC_STCHE_WARMUP_INC_LPAC"/>
++	<value value="54" name="A7XX_PERF_HLSQ_SPTROC_STCHE_MISS_INC_VS"/>
++	<value value="55" name="A7XX_PERF_HLSQ_SPTROC_STCHE_MISS_INC_FS"/>
++	<value value="56" name="A7XX_PERF_HLSQ_SPTROC_STCHE_MISS_INC_BV"/>
++	<value value="57" name="A7XX_PERF_HLSQ_SPTROC_STCHE_MISS_INC_LPAC"/>
++</enum>
++
++<enum name="a7xx_vpc_perfcounter_select">
++	<value value="0" name="A7XX_PERF_VPC_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_VPC_WORKING_CYCLES"/>
++	<value value="2" name="A7XX_PERF_VPC_STALL_CYCLES_UCHE"/>
++	<value value="3" name="A7XX_PERF_VPC_STALL_CYCLES_VFD_WACK"/>
++	<value value="4" name="A7XX_PERF_VPC_STALL_CYCLES_HLSQ_PRIM_ALLOC"/>
++	<value value="5" name="A7XX_PERF_VPC_RESERVED_5"/>
++	<value value="6" name="A7XX_PERF_VPC_STALL_CYCLES_SP_LM"/>
++	<value value="7" name="A7XX_PERF_VPC_STARVE_CYCLES_SP"/>
++	<value value="8" name="A7XX_PERF_VPC_STARVE_CYCLES_LRZ"/>
++	<value value="9" name="A7XX_PERF_VPC_PC_PRIMITIVES"/>
++	<value value="10" name="A7XX_PERF_VPC_SP_COMPONENTS"/>
++	<value value="11" name="A7XX_PERF_VPC_STALL_CYCLES_VPCRAM_POS"/>
++	<value value="12" name="A7XX_PERF_VPC_LRZ_ASSIGN_PRIMITIVES"/>
++	<value value="13" name="A7XX_PERF_VPC_RB_VISIBLE_PRIMITIVES"/>
++	<value value="14" name="A7XX_PERF_VPC_LM_TRANSACTION"/>
++	<value value="15" name="A7XX_PERF_VPC_STREAMOUT_TRANSACTION"/>
++	<value value="16" name="A7XX_PERF_VPC_VS_BUSY_CYCLES"/>
++	<value value="17" name="A7XX_PERF_VPC_PS_BUSY_CYCLES"/>
++	<value value="18" name="A7XX_PERF_VPC_VS_WORKING_CYCLES"/>
++	<value value="19" name="A7XX_PERF_VPC_PS_WORKING_CYCLES"/>
++	<value value="20" name="A7XX_PERF_VPC_STARVE_CYCLES_RB"/>
++	<value value="21" name="A7XX_PERF_VPC_NUM_VPCRAM_READ_POS"/>
++	<value value="22" name="A7XX_PERF_VPC_WIT_FULL_CYCLES"/>
++	<value value="23" name="A7XX_PERF_VPC_VPCRAM_FULL_CYCLES"/>
++	<value value="24" name="A7XX_PERF_VPC_LM_FULL_WAIT_FOR_INTP_END"/>
++	<value value="25" name="A7XX_PERF_VPC_NUM_VPCRAM_WRITE"/>
++	<value value="26" name="A7XX_PERF_VPC_NUM_VPCRAM_READ_SO"/>
++	<value value="27" name="A7XX_PERF_VPC_NUM_ATTR_REQ_LM"/>
++	<value value="28" name="A7XX_PERF_VPC_STALL_CYCLE_TSE"/>
++	<value value="29" name="A7XX_PERF_VPC_TSE_PRIMITIVES"/>
++	<value value="30" name="A7XX_PERF_VPC_GS_PRIMITIVES"/>
++	<value value="31" name="A7XX_PERF_VPC_TSE_TRANSACTIONS"/>
++	<value value="32" name="A7XX_PERF_VPC_STALL_CYCLES_CCU"/>
++	<value value="33" name="A7XX_PERF_VPC_NUM_WM_HIT"/>
++	<value value="34" name="A7XX_PERF_VPC_STALL_DQ_WACK"/>
++	<value value="35" name="A7XX_PERF_VPC_STALL_CYCLES_CCHE"/>
++	<value value="36" name="A7XX_PERF_VPC_STARVE_CYCLES_CCHE"/>
++	<value value="37" name="A7XX_PERF_VPC_NUM_PA_REQ"/>
++	<value value="38" name="A7XX_PERF_VPC_NUM_LM_REQ_HIT"/>
++	<value value="39" name="A7XX_PERF_VPC_CCHE_REQBUF_FULL"/>
++	<value value="40" name="A7XX_PERF_VPC_STALL_CYCLES_LM_ACK"/>
++	<value value="41" name="A7XX_PERF_VPC_STALL_CYCLES_PRG_END_FE"/>
++	<value value="42" name="A7XX_PERF_VPC_STALL_CYCLES_PRG_END_PCVS"/>
++	<value value="43" name="A7XX_PERF_VPC_STALL_CYCLES_PRG_END_VPCPS"/>
++</enum>
++
++<enum name="a7xx_tse_perfcounter_select">
++	<value value="0" name="A7XX_PERF_TSE_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_TSE_CLIPPING_CYCLES"/>
++	<value value="2" name="A7XX_PERF_TSE_STALL_CYCLES_RAS"/>
++	<value value="3" name="A7XX_PERF_TSE_STALL_CYCLES_LRZ_BARYPLANE"/>
++	<value value="4" name="A7XX_PERF_TSE_STALL_CYCLES_LRZ_ZPLANE"/>
++	<value value="5" name="A7XX_PERF_TSE_STARVE_CYCLES_PC"/>
++	<value value="6" name="A7XX_PERF_TSE_INPUT_PRIM"/>
++	<value value="7" name="A7XX_PERF_TSE_INPUT_NULL_PRIM"/>
++	<value value="8" name="A7XX_PERF_TSE_TRIVAL_REJ_PRIM"/>
++	<value value="9" name="A7XX_PERF_TSE_CLIPPED_PRIM"/>
++	<value value="10" name="A7XX_PERF_TSE_ZERO_AREA_PRIM"/>
++	<value value="11" name="A7XX_PERF_TSE_FACENESS_CULLED_PRIM"/>
++	<value value="12" name="A7XX_PERF_TSE_ZERO_PIXEL_PRIM"/>
++	<value value="13" name="A7XX_PERF_TSE_OUTPUT_NULL_PRIM"/>
++	<value value="14" name="A7XX_PERF_TSE_OUTPUT_VISIBLE_PRIM"/>
++	<value value="15" name="A7XX_PERF_TSE_CINVOCATION"/>
++	<value value="16" name="A7XX_PERF_TSE_CPRIMITIVES"/>
++	<value value="17" name="A7XX_PERF_TSE_2D_INPUT_PRIM"/>
++	<value value="18" name="A7XX_PERF_TSE_2D_ALIVE_CYCLES"/>
++	<value value="19" name="A7XX_PERF_TSE_CLIP_PLANES"/>
++</enum>
++
++<enum name="a7xx_ras_perfcounter_select">
++	<value value="0" name="A7XX_PERF_RAS_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_RAS_SUPERTILE_ACTIVE_CYCLES"/>
++	<value value="2" name="A7XX_PERF_RAS_STALL_CYCLES_LRZ"/>
++	<value value="3" name="A7XX_PERF_RAS_STARVE_CYCLES_TSE"/>
++	<value value="4" name="A7XX_PERF_RAS_SUPER_TILES"/>
++	<value value="5" name="A7XX_PERF_RAS_8X4_TILES"/>
++	<value value="6" name="A7XX_PERF_RAS_MASKGEN_ACTIVE"/>
++	<value value="7" name="A7XX_PERF_RAS_FULLY_COVERED_SUPER_TILES"/>
++	<value value="8" name="A7XX_PERF_RAS_FULLY_COVERED_8X4_TILES"/>
++	<value value="9" name="A7XX_PERF_RAS_PRIM_KILLED_INVISILBE"/>
++	<value value="10" name="A7XX_PERF_RAS_SUPERTILE_GEN_ACTIVE_CYCLES"/>
++	<value value="11" name="A7XX_PERF_RAS_LRZ_INTF_WORKING_CYCLES"/>
++	<value value="12" name="A7XX_PERF_RAS_BLOCKS"/>
++	<value value="13" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_0_WORKING_CC_l2"/>
++	<value value="14" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_1_WORKING_CC_l2"/>
++	<value value="15" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_2_WORKING_CC_l2"/>
++	<value value="16" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_3_WORKING_CC_l2"/>
++	<value value="17" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_4_WORKING_CC_l2"/>
++	<value value="18" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_5_WORKING_CC_l2"/>
++	<value value="19" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_6_WORKING_CC_l2"/>
++	<value value="20" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_7_WORKING_CC_l2"/>
++	<value value="21" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_8_WORKING_CC_l2"/>
++	<value value="22" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_9_WORKING_CC_l2"/>
++	<value value="23" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_10_WORKING_CC_l2"/>
++	<value value="24" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_11_WORKING_CC_l2"/>
++	<value value="25" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_12_WORKING_CC_l2"/>
++	<value value="26" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_13_WORKING_CC_l2"/>
++	<value value="27" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_14_WORKING_CC_l2"/>
++	<value value="28" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_15_WORKING_CC_l2"/>
++	<value value="29" name="A7XX_PERF_RAS_FALSE_PARTIAL_STILE"/>
++
++</enum>
++
++<enum name="a7xx_uche_perfcounter_select">
++	<value value="0" name="A7XX_PERF_UCHE_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_UCHE_STALL_CYCLES_ARBITER"/>
++	<value value="2" name="A7XX_PERF_UCHE_VBIF_LATENCY_CYCLES"/>
++	<value value="3" name="A7XX_PERF_UCHE_VBIF_LATENCY_SAMPLES"/>
++	<value value="4" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_TP"/>
++	<value value="5" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_VFD"/>
++	<value value="6" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_HLSQ"/>
++	<value value="7" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_LRZ"/>
++	<value value="8" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_SP"/>
++	<value value="9" name="A7XX_PERF_UCHE_READ_REQUESTS_TP"/>
++	<value value="10" name="A7XX_PERF_UCHE_READ_REQUESTS_VFD"/>
++	<value value="11" name="A7XX_PERF_UCHE_READ_REQUESTS_HLSQ"/>
++	<value value="12" name="A7XX_PERF_UCHE_READ_REQUESTS_LRZ"/>
++	<value value="13" name="A7XX_PERF_UCHE_READ_REQUESTS_SP"/>
++	<value value="14" name="A7XX_PERF_UCHE_WRITE_REQUESTS_LRZ"/>
++	<value value="15" name="A7XX_PERF_UCHE_WRITE_REQUESTS_SP"/>
++	<value value="16" name="A7XX_PERF_UCHE_WRITE_REQUESTS_VPC"/>
++	<value value="17" name="A7XX_PERF_UCHE_WRITE_REQUESTS_VSC"/>
++	<value value="18" name="A7XX_PERF_UCHE_EVICTS"/>
++	<value value="19" name="A7XX_PERF_UCHE_BANK_REQ0"/>
++	<value value="20" name="A7XX_PERF_UCHE_BANK_REQ1"/>
++	<value value="21" name="A7XX_PERF_UCHE_BANK_REQ2"/>
++	<value value="22" name="A7XX_PERF_UCHE_BANK_REQ3"/>
++	<value value="23" name="A7XX_PERF_UCHE_BANK_REQ4"/>
++	<value value="24" name="A7XX_PERF_UCHE_BANK_REQ5"/>
++	<value value="25" name="A7XX_PERF_UCHE_BANK_REQ6"/>
++	<value value="26" name="A7XX_PERF_UCHE_BANK_REQ7"/>
++	<value value="27" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_CH0"/>
++	<value value="28" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_CH1"/>
++	<value value="29" name="A7XX_PERF_UCHE_GMEM_READ_BEATS"/>
++	<value value="30" name="A7XX_PERF_UCHE_TPH_REF_FULL"/>
++	<value value="31" name="A7XX_PERF_UCHE_TPH_VICTIM_FULL"/>
++	<value value="32" name="A7XX_PERF_UCHE_TPH_EXT_FULL"/>
++	<value value="33" name="A7XX_PERF_UCHE_VBIF_STALL_WRITE_DATA"/>
++	<value value="34" name="A7XX_PERF_UCHE_DCMP_LATENCY_SAMPLES"/>
++	<value value="35" name="A7XX_PERF_UCHE_DCMP_LATENCY_CYCLES"/>
++	<value value="36" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_PC"/>
++	<value value="37" name="A7XX_PERF_UCHE_READ_REQUESTS_PC"/>
++	<value value="38" name="A7XX_PERF_UCHE_RAM_READ_REQ"/>
++	<value value="39" name="A7XX_PERF_UCHE_RAM_WRITE_REQ"/>
++	<value value="40" name="A7XX_PERF_UCHE_STARVED_CYCLES_VBIF_DECMP"/>
++	<value value="41" name="A7XX_PERF_UCHE_STALL_CYCLES_DECMP"/>
++	<value value="42" name="A7XX_PERF_UCHE_ARBITER_STALL_CYCLES_VBIF"/>
++	<value value="43" name="A7XX_PERF_UCHE_READ_REQUESTS_TP_UBWC"/>
++	<value value="44" name="A7XX_PERF_UCHE_READ_REQUESTS_TP_NONUBWC"/>
++	<value value="45" name="A7XX_PERF_UCHE_READ_REQUESTS_TP_GMEM"/>
++	<value value="46" name="A7XX_PERF_UCHE_LONG_LINE_ALL_EVICTS_KAILUA"/>
++	<value value="47" name="A7XX_PERF_UCHE_LONG_LINE_PARTIAL_EVICTS_KAILUA"/>
++	<value value="48" name="A7XX_PERF_UCHE_TPH_CONFLICT_CL_CCHE"/>
++	<value value="49" name="A7XX_PERF_UCHE_TPH_CONFLICT_CL_OTHER_KAILUA"/>
++	<value value="50" name="A7XX_PERF_UCHE_DBANK_CONFLICT_CL_CCHE"/>
++	<value value="51" name="A7XX_PERF_UCHE_DBANK_CONFLICT_CL_OTHER_CLIENTS"/>
++	<value value="52" name="A7XX_PERF_UCHE_VBIF_WRITE_BEATS_CH0"/>
++	<value value="53" name="A7XX_PERF_UCHE_VBIF_WRITE_BEATS_CH1"/>
++	<value value="54" name="A7XX_PERF_UCHE_CCHE_TPH_QUEUE_FULL"/>
++	<value value="55" name="A7XX_PERF_UCHE_CCHE_DPH_QUEUE_FULL"/>
++	<value value="56" name="A7XX_PERF_UCHE_GMEM_WRITE_BEATS"/>
++	<value value="57" name="A7XX_PERF_UCHE_UBWC_READ_BEATS"/>
++	<value value="58" name="A7XX_PERF_UCHE_UBWC_WRITE_BEATS"/>
++</enum>
++
++<enum name="a7xx_tp_perfcounter_select">
++	<value value="0" name="A7XX_PERF_TP_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_TP_STALL_CYCLES_UCHE"/>
++	<value value="2" name="A7XX_PERF_TP_LATENCY_CYCLES"/>
++	<value value="3" name="A7XX_PERF_TP_LATENCY_TRANS"/>
++	<value value="4" name="A7XX_PERF_TP_FLAG_FIFO_DELAY_SAMPLES"/>
++	<value value="5" name="A7XX_PERF_TP_FLAG_FIFO_DELAY_CYCLES"/>
++	<value value="6" name="A7XX_PERF_TP_L1_CACHELINE_REQUESTS"/>
++	<value value="7" name="A7XX_PERF_TP_L1_CACHELINE_MISSES"/>
++	<value value="8" name="A7XX_PERF_TP_SP_TP_TRANS"/>
++	<value value="9" name="A7XX_PERF_TP_TP_SP_TRANS"/>
++	<value value="10" name="A7XX_PERF_TP_OUTPUT_PIXELS"/>
++	<value value="11" name="A7XX_PERF_TP_FILTER_WORKLOAD_16BIT"/>
++	<value value="12" name="A7XX_PERF_TP_FILTER_WORKLOAD_32BIT"/>
++	<value value="13" name="A7XX_PERF_TP_QUADS_RECEIVED"/>
++	<value value="14" name="A7XX_PERF_TP_QUADS_OFFSET"/>
++	<value value="15" name="A7XX_PERF_TP_QUADS_SHADOW"/>
++	<value value="16" name="A7XX_PERF_TP_QUADS_ARRAY"/>
++	<value value="17" name="A7XX_PERF_TP_QUADS_GRADIENT"/>
++	<value value="18" name="A7XX_PERF_TP_QUADS_1D"/>
++	<value value="19" name="A7XX_PERF_TP_QUADS_2D"/>
++	<value value="20" name="A7XX_PERF_TP_QUADS_BUFFER"/>
++	<value value="21" name="A7XX_PERF_TP_QUADS_3D"/>
++	<value value="22" name="A7XX_PERF_TP_QUADS_CUBE"/>
++	<value value="23" name="A7XX_PERF_TP_DIVERGENT_QUADS_RECEIVED"/>
++	<value value="24" name="A7XX_PERF_TP_PRT_NON_RESIDENT_EVENTS"/>
++	<value value="25" name="A7XX_PERF_TP_OUTPUT_PIXELS_POINT"/>
++	<value value="26" name="A7XX_PERF_TP_OUTPUT_PIXELS_BILINEAR"/>
++	<value value="27" name="A7XX_PERF_TP_OUTPUT_PIXELS_MIP"/>
++	<value value="28" name="A7XX_PERF_TP_OUTPUT_PIXELS_ANISO"/>
++	<value value="29" name="A7XX_PERF_TP_OUTPUT_PIXELS_ZERO_LOD"/>
++	<value value="30" name="A7XX_PERF_TP_FLAG_CACHE_REQUESTS"/>
++	<value value="31" name="A7XX_PERF_TP_FLAG_CACHE_MISSES"/>
++	<value value="32" name="A7XX_PERF_TP_L1_5_L2_REQUESTS"/>
++	<value value="33" name="A7XX_PERF_TP_2D_OUTPUT_PIXELS"/>
++	<value value="34" name="A7XX_PERF_TP_2D_OUTPUT_PIXELS_POINT"/>
++	<value value="35" name="A7XX_PERF_TP_2D_OUTPUT_PIXELS_BILINEAR"/>
++	<value value="36" name="A7XX_PERF_TP_2D_FILTER_WORKLOAD_16BIT"/>
++	<value value="37" name="A7XX_PERF_TP_2D_FILTER_WORKLOAD_32BIT"/>
++	<value value="38" name="A7XX_PERF_TP_TPA2TPC_TRANS"/>
++	<value value="39" name="A7XX_PERF_TP_L1_MISSES_ASTC_1TILE"/>
++	<value value="40" name="A7XX_PERF_TP_L1_MISSES_ASTC_2TILE"/>
++	<value value="41" name="A7XX_PERF_TP_L1_MISSES_ASTC_4TILE"/>
++	<value value="42" name="A7XX_PERF_TP_L1_5_COMPRESS_REQS"/>
++	<value value="43" name="A7XX_PERF_TP_L1_5_L2_COMPRESS_MISS"/>
++	<value value="44" name="A7XX_PERF_TP_L1_BANK_CONFLICT"/>
++	<value value="45" name="A7XX_PERF_TP_L1_5_MISS_LATENCY_CYCLES"/>
++	<value value="46" name="A7XX_PERF_TP_L1_5_MISS_LATENCY_TRANS"/>
++	<value value="47" name="A7XX_PERF_TP_QUADS_CONSTANT_MULTIPLIED"/>
++	<value value="48" name="A7XX_PERF_TP_FRONTEND_WORKING_CYCLES"/>
++	<value value="49" name="A7XX_PERF_TP_L1_TAG_WORKING_CYCLES"/>
++	<value value="50" name="A7XX_PERF_TP_L1_DATA_WRITE_WORKING_CYCLES"/>
++	<value value="51" name="A7XX_PERF_TP_PRE_L1_DECOM_WORKING_CYCLES"/>
++	<value value="52" name="A7XX_PERF_TP_BACKEND_WORKING_CYCLES"/>
++	<value value="53" name="A7XX_PERF_TP_L1_5_CACHE_WORKING_CYCLES"/>
++	<value value="54" name="A7XX_PERF_TP_STARVE_CYCLES_SP"/>
++	<value value="55" name="A7XX_PERF_TP_STARVE_CYCLES_UCHE"/>
++	<value value="56" name="A7XX_PERF_TP_STALL_CYCLES_UFC"/>
++	<value value="57" name="A7XX_PERF_TP_FORMAT_DECOMP"/>
++	<value value="58" name="A7XX_PERF_TP_FILTER_POINT_FP16"/>
++	<value value="59" name="A7XX_PERF_TP_FILTER_POINT_FP32"/>
++	<value value="60" name="A7XX_PERF_TP_LATENCY_FIFO_FULL"/>
++	<value value="61" name="A7XX_PERF_TP_RESERVED_61"/>
++	<value value="62" name="A7XX_PERF_TP_RESERVED_62"/>
++	<value value="63" name="A7XX_PERF_TP_RESERVED_63"/>
++	<value value="64" name="A7XX_PERF_TP_RESERVED_64"/>
++	<value value="65" name="A7XX_PERF_TP_RESERVED_65"/>
++	<value value="66" name="A7XX_PERF_TP_RESERVED_66"/>
++	<value value="67" name="A7XX_PERF_TP_RESERVED_67"/>
++	<value value="68" name="A7XX_PERF_TP_RESERVED_68"/>
++	<value value="69" name="A7XX_PERF_TP_RESERVED_69"/>
++	<value value="70" name="A7XX_PERF_TP_RESERVED_70"/>
++	<value value="71" name="A7XX_PERF_TP_RESERVED_71"/>
++	<value value="72" name="A7XX_PERF_TP_RESERVED_72"/>
++	<value value="73" name="A7XX_PERF_TP_RESERVED_73"/>
++	<value value="74" name="A7XX_PERF_TP_RESERVED_74"/>
++	<value value="75" name="A7XX_PERF_TP_RESERVED_75"/>
++	<value value="76" name="A7XX_PERF_TP_RESERVED_76"/>
++	<value value="77" name="A7XX_PERF_TP_RESERVED_77"/>
++	<value value="78" name="A7XX_PERF_TP_RESERVED_78"/>
++	<value value="79" name="A7XX_PERF_TP_RESERVED_79"/>
++	<value value="80" name="A7XX_PERF_TP_RESERVED_80"/>
++	<value value="81" name="A7XX_PERF_TP_RESERVED_81"/>
++	<value value="82" name="A7XX_PERF_TP_RESERVED_82"/>
++	<value value="83" name="A7XX_PERF_TP_RESERVED_83"/>
++	<value value="84" name="A7XX_PERF_TP_RESERVED_84"/>
++	<value value="85" name="A7XX_PERF_TP_RESERVED_85"/>
++	<value value="86" name="A7XX_PERF_TP_RESERVED_86"/>
++	<value value="87" name="A7XX_PERF_TP_RESERVED_87"/>
++	<value value="88" name="A7XX_PERF_TP_RESERVED_88"/>
++	<value value="89" name="A7XX_PERF_TP_RESERVED_89"/>
++	<value value="90" name="A7XX_PERF_TP_RESERVED_90"/>
++	<value value="91" name="A7XX_PERF_TP_RESERVED_91"/>
++	<value value="92" name="A7XX_PERF_TP_RESERVED_92"/>
++	<value value="93" name="A7XX_PERF_TP_RESERVED_93"/>
++	<value value="94" name="A7XX_PERF_TP_RESERVED_94"/>
++	<value value="95" name="A7XX_PERF_TP_RESERVED_95"/>
++	<value value="96" name="A7XX_PERF_TP_RESERVED_96"/>
++	<value value="97" name="A7XX_PERF_TP_RESERVED_97"/>
++	<value value="98" name="A7XX_PERF_TP_RESERVED_98"/>
++	<value value="99" name="A7XX_PERF_TP_RESERVED_99"/>
++	<value value="100" name="A7XX_PERF_TP_RESERVED_100"/>
++	<value value="101" name="A7XX_PERF_TP_RESERVED_101"/>
++	<value value="102" name="A7XX_PERF_TP_RESERVED_102"/>
++	<value value="103" name="A7XX_PERF_TP_RESERVED_103"/>
++	<value value="104" name="A7XX_PERF_TP_RESERVED_104"/>
++	<value value="105" name="A7XX_PERF_TP_RESERVED_105"/>
++	<value value="106" name="A7XX_PERF_TP_RESERVED_106"/>
++	<value value="107" name="A7XX_PERF_TP_RESERVED_107"/>
++	<value value="108" name="A7XX_PERF_TP_RESERVED_108"/>
++	<value value="109" name="A7XX_PERF_TP_RESERVED_109"/>
++	<value value="110" name="A7XX_PERF_TP_RESERVED_110"/>
++	<value value="111" name="A7XX_PERF_TP_RESERVED_111"/>
++	<value value="112" name="A7XX_PERF_TP_RESERVED_112"/>
++	<value value="113" name="A7XX_PERF_TP_RESERVED_113"/>
++	<value value="114" name="A7XX_PERF_TP_RESERVED_114"/>
++	<value value="115" name="A7XX_PERF_TP_RESERVED_115"/>
++	<value value="116" name="A7XX_PERF_TP_RESERVED_116"/>
++	<value value="117" name="A7XX_PERF_TP_RESERVED_117"/>
++	<value value="118" name="A7XX_PERF_TP_RESERVED_118"/>
++	<value value="119" name="A7XX_PERF_TP_RESERVED_119"/>
++	<value value="120" name="A7XX_PERF_TP_RESERVED_120"/>
++	<value value="121" name="A7XX_PERF_TP_RESERVED_121"/>
++	<value value="122" name="A7XX_PERF_TP_RESERVED_122"/>
++	<value value="123" name="A7XX_PERF_TP_RESERVED_123"/>
++	<value value="124" name="A7XX_PERF_TP_RESERVED_124"/>
++	<value value="125" name="A7XX_PERF_TP_RESERVED_125"/>
++	<value value="126" name="A7XX_PERF_TP_RESERVED_126"/>
++	<value value="127" name="A7XX_PERF_TP_RESERVED_127"/>
++	<value value="128" name="A7XX_PERF_TP_FORMAT_DECOMP_BILINEAR"/>
++	<value value="129" name="A7XX_PERF_TP_PACKED_POINT_BOTH_VALID_FP16"/>
++	<value value="130" name="A7XX_PERF_TP_PACKED_POINT_SINGLE_VALID_FP16"/>
++	<value value="131" name="A7XX_PERF_TP_PACKED_POINT_BOTH_VALID_FP32"/>
++	<value value="132" name="A7XX_PERF_TP_PACKED_POINT_SINGLE_VALID_FP32"/>
++</enum>
++
++<enum name="a7xx_sp_perfcounter_select">
++	<value value="0" name="A7XX_PERF_SP_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_SP_ALU_WORKING_CYCLES"/>
++	<value value="2" name="A7XX_PERF_SP_EFU_WORKING_CYCLES"/>
++	<value value="3" name="A7XX_PERF_SP_STALL_CYCLES_VPC"/>
++	<value value="4" name="A7XX_PERF_SP_STALL_CYCLES_TP"/>
++	<value value="5" name="A7XX_PERF_SP_STALL_CYCLES_UCHE"/>
++	<value value="6" name="A7XX_PERF_SP_STALL_CYCLES_RB"/>
++	<value value="7" name="A7XX_PERF_SP_NON_EXECUTION_CYCLES"/>
++	<value value="8" name="A7XX_PERF_SP_WAVE_CONTEXTS"/>
++	<value value="9" name="A7XX_PERF_SP_WAVE_CONTEXT_CYCLES"/>
++	<value value="10" name="A7XX_PERF_SP_STAGE_WAVE_CYCLES"/>
++	<value value="11" name="A7XX_PERF_SP_STAGE_WAVE_SAMPLES"/>
++	<value value="12" name="A7XX_PERF_SP_VS_STAGE_WAVE_CYCLES"/>
++	<value value="13" name="A7XX_PERF_SP_VS_STAGE_WAVE_SAMPLES"/>
++	<value value="14" name="A7XX_PERF_SP_FS_STAGE_DURATION_CYCLES"/>
++	<value value="15" name="A7XX_PERF_SP_VS_STAGE_DURATION_CYCLES"/>
++	<value value="16" name="A7XX_PERF_SP_WAVE_CTRL_CYCLES"/>
++	<value value="17" name="A7XX_PERF_SP_WAVE_LOAD_CYCLES"/>
++	<value value="18" name="A7XX_PERF_SP_WAVE_EMIT_CYCLES"/>
++	<value value="19" name="A7XX_PERF_SP_WAVE_NOP_CYCLES"/>
++	<value value="20" name="A7XX_PERF_SP_WAVE_WAIT_CYCLES"/>
++	<value value="21" name="A7XX_PERF_SP_WAVE_FETCH_CYCLES"/>
++	<value value="22" name="A7XX_PERF_SP_WAVE_IDLE_CYCLES"/>
++	<value value="23" name="A7XX_PERF_SP_WAVE_END_CYCLES"/>
++	<value value="24" name="A7XX_PERF_SP_WAVE_LONG_SYNC_CYCLES"/>
++	<value value="25" name="A7XX_PERF_SP_WAVE_SHORT_SYNC_CYCLES"/>
++	<value value="26" name="A7XX_PERF_SP_WAVE_JOIN_CYCLES"/>
++	<value value="27" name="A7XX_PERF_SP_LM_LOAD_INSTRUCTIONS"/>
++	<value value="28" name="A7XX_PERF_SP_LM_STORE_INSTRUCTIONS"/>
++	<value value="29" name="A7XX_PERF_SP_LM_ATOMICS"/>
++	<value value="30" name="A7XX_PERF_SP_GM_LOAD_INSTRUCTIONS"/>
++	<value value="31" name="A7XX_PERF_SP_GM_STORE_INSTRUCTIONS"/>
++	<value value="32" name="A7XX_PERF_SP_GM_ATOMICS"/>
++	<value value="33" name="A7XX_PERF_SP_VS_STAGE_TEX_INSTRUCTIONS"/>
++	<value value="34" name="A7XX_PERF_SP_VS_STAGE_EFU_INSTRUCTIONS"/>
++	<value value="35" name="A7XX_PERF_SP_VS_STAGE_FULL_ALU_INSTRUCTIONS"/>
++	<value value="36" name="A7XX_PERF_SP_VS_STAGE_HALF_ALU_INSTRUCTIONS"/>
++	<value value="37" name="A7XX_PERF_SP_FS_STAGE_TEX_INSTRUCTIONS"/>
++	<value value="38" name="A7XX_PERF_SP_FS_STAGE_CFLOW_INSTRUCTIONS"/>
++	<value value="39" name="A7XX_PERF_SP_FS_STAGE_EFU_INSTRUCTIONS"/>
++	<value value="40" name="A7XX_PERF_SP_FS_STAGE_FULL_ALU_INSTRUCTIONS"/>
++	<value value="41" name="A7XX_PERF_SP_FS_STAGE_HALF_ALU_INSTRUCTIONS"/>
++	<value value="42" name="A7XX_PERF_SP_FS_STAGE_BARY_INSTRUCTIONS"/>
++	<value value="43" name="A7XX_PERF_SP_VS_INSTRUCTIONS"/>
++	<value value="44" name="A7XX_PERF_SP_FS_INSTRUCTIONS"/>
++	<value value="45" name="A7XX_PERF_SP_ADDR_LOCK_COUNT"/>
++	<value value="46" name="A7XX_PERF_SP_UCHE_READ_TRANS"/>
++	<value value="47" name="A7XX_PERF_SP_UCHE_WRITE_TRANS"/>
++	<value value="48" name="A7XX_PERF_SP_EXPORT_VPC_TRANS"/>
++	<value value="49" name="A7XX_PERF_SP_EXPORT_RB_TRANS"/>
++	<value value="50" name="A7XX_PERF_SP_PIXELS_KILLED"/>
++	<value value="51" name="A7XX_PERF_SP_ICL1_REQUESTS"/>
++	<value value="52" name="A7XX_PERF_SP_ICL1_MISSES"/>
++	<value value="53" name="A7XX_PERF_SP_HS_INSTRUCTIONS"/>
++	<value value="54" name="A7XX_PERF_SP_DS_INSTRUCTIONS"/>
++	<value value="55" name="A7XX_PERF_SP_GS_INSTRUCTIONS"/>
++	<value value="56" name="A7XX_PERF_SP_CS_INSTRUCTIONS"/>
++	<value value="57" name="A7XX_PERF_SP_GPR_READ"/>
++	<value value="58" name="A7XX_PERF_SP_GPR_WRITE"/>
++	<value value="59" name="A7XX_PERF_SP_FS_STAGE_HALF_EFU_INSTRUCTIONS"/>
++	<value value="60" name="A7XX_PERF_SP_VS_STAGE_HALF_EFU_INSTRUCTIONS"/>
++	<value value="61" name="A7XX_PERF_SP_LM_BANK_CONFLICTS"/>
++	<value value="62" name="A7XX_PERF_SP_TEX_CONTROL_WORKING_CYCLES"/>
++	<value value="63" name="A7XX_PERF_SP_LOAD_CONTROL_WORKING_CYCLES"/>
++	<value value="64" name="A7XX_PERF_SP_FLOW_CONTROL_WORKING_CYCLES"/>
++	<value value="65" name="A7XX_PERF_SP_LM_WORKING_CYCLES"/>
++	<value value="66" name="A7XX_PERF_SP_DISPATCHER_WORKING_CYCLES"/>
++	<value value="67" name="A7XX_PERF_SP_SEQUENCER_WORKING_CYCLES"/>
++	<value value="68" name="A7XX_PERF_SP_LOW_EFFICIENCY_STARVED_BY_TP"/>
++	<value value="69" name="A7XX_PERF_SP_STARVE_CYCLES_HLSQ"/>
++	<value value="70" name="A7XX_PERF_SP_NON_EXECUTION_LS_CYCLES"/>
++	<value value="71" name="A7XX_PERF_SP_WORKING_EU"/>
++	<value value="72" name="A7XX_PERF_SP_ANY_EU_WORKING"/>
++	<value value="73" name="A7XX_PERF_SP_WORKING_EU_FS_STAGE"/>
++	<value value="74" name="A7XX_PERF_SP_ANY_EU_WORKING_FS_STAGE"/>
++	<value value="75" name="A7XX_PERF_SP_WORKING_EU_VS_STAGE"/>
++	<value value="76" name="A7XX_PERF_SP_ANY_EU_WORKING_VS_STAGE"/>
++	<value value="77" name="A7XX_PERF_SP_WORKING_EU_CS_STAGE"/>
++	<value value="78" name="A7XX_PERF_SP_ANY_EU_WORKING_CS_STAGE"/>
++	<value value="79" name="A7XX_PERF_SP_GPR_READ_PREFETCH"/>
++	<value value="80" name="A7XX_PERF_SP_GPR_READ_CONFLICT"/>
++	<value value="81" name="A7XX_PERF_SP_GPR_WRITE_CONFLICT"/>
++	<value value="82" name="A7XX_PERF_SP_GM_LOAD_LATENCY_CYCLES"/>
++	<value value="83" name="A7XX_PERF_SP_GM_LOAD_LATENCY_SAMPLES"/>
++	<value value="84" name="A7XX_PERF_SP_EXECUTABLE_WAVES"/>
++	<value value="85" name="A7XX_PERF_SP_ICL1_MISS_FETCH_CYCLES"/>
++	<value value="86" name="A7XX_PERF_SP_WORKING_EU_LPAC"/>
++	<value value="87" name="A7XX_PERF_SP_BYPASS_BUSY_CYCLES"/>
++	<value value="88" name="A7XX_PERF_SP_ANY_EU_WORKING_LPAC"/>
++	<value value="89" name="A7XX_PERF_SP_WAVE_ALU_CYCLES"/>
++	<value value="90" name="A7XX_PERF_SP_WAVE_EFU_CYCLES"/>
++	<value value="91" name="A7XX_PERF_SP_WAVE_INT_CYCLES"/>
++	<value value="92" name="A7XX_PERF_SP_WAVE_CSP_CYCLES"/>
++	<value value="93" name="A7XX_PERF_SP_EWAVE_CONTEXTS"/>
++	<value value="94" name="A7XX_PERF_SP_EWAVE_CONTEXT_CYCLES"/>
++	<value value="95" name="A7XX_PERF_SP_LPAC_BUSY_CYCLES"/>
++	<value value="96" name="A7XX_PERF_SP_LPAC_INSTRUCTIONS"/>
++	<value value="97" name="A7XX_PERF_SP_FS_STAGE_1X_WAVES"/>
++	<value value="98" name="A7XX_PERF_SP_FS_STAGE_2X_WAVES"/>
++	<value value="99" name="A7XX_PERF_SP_QUADS"/>
++	<value value="100" name="A7XX_PERF_SP_CS_INVOCATIONS"/>
++	<value value="101" name="A7XX_PERF_SP_PIXELS"/>
++	<value value="102" name="A7XX_PERF_SP_LPAC_DRAWCALLS"/>
++	<value value="103" name="A7XX_PERF_SP_PI_WORKING_CYCLES"/>
++	<value value="104" name="A7XX_PERF_SP_WAVE_INPUT_CYCLES"/>
++	<value value="105" name="A7XX_PERF_SP_WAVE_OUTPUT_CYCLES"/>
++	<value value="106" name="A7XX_PERF_SP_WAVE_HWAVE_WAIT_CYCLES"/>
++	<value value="107" name="A7XX_PERF_SP_WAVE_HWAVE_SYNC"/>
++	<value value="108" name="A7XX_PERF_SP_OUTPUT_3D_PIXELS"/>
++	<value value="109" name="A7XX_PERF_SP_FULL_ALU_MAD_INSTRUCTIONS"/>
++	<value value="110" name="A7XX_PERF_SP_HALF_ALU_MAD_INSTRUCTIONS"/>
++	<value value="111" name="A7XX_PERF_SP_FULL_ALU_MUL_INSTRUCTIONS"/>
++	<value value="112" name="A7XX_PERF_SP_HALF_ALU_MUL_INSTRUCTIONS"/>
++	<value value="113" name="A7XX_PERF_SP_FULL_ALU_ADD_INSTRUCTIONS"/>
++	<value value="114" name="A7XX_PERF_SP_HALF_ALU_ADD_INSTRUCTIONS"/>
++	<value value="115" name="A7XX_PERF_SP_BARY_FP32_INSTRUCTIONS"/>
++	<value value="116" name="A7XX_PERF_SP_ALU_GPR_READ_CYCLES"/>
++	<value value="117" name="A7XX_PERF_SP_ALU_DATA_FORWARDING_CYCLES"/>
++	<value value="118" name="A7XX_PERF_SP_LM_FULL_CYCLES"/>
++	<value value="119" name="A7XX_PERF_SP_TEXTURE_FETCH_LATENCY_CYCLES"/>
++	<value value="120" name="A7XX_PERF_SP_TEXTURE_FETCH_LATENCY_SAMPLES"/>
++	<value value="121" name="A7XX_PERF_SP_FS_STAGE_PI_TEX_INSTRUCTION"/>
++	<value value="122" name="A7XX_PERF_SP_RAY_QUERY_INSTRUCTIONS"/>
++	<value value="123" name="A7XX_PERF_SP_RBRT_KICKOFF_FIBERS"/>
++	<value value="124" name="A7XX_PERF_SP_RBRT_KICKOFF_DQUADS"/>
++	<value value="125" name="A7XX_PERF_SP_RTU_BUSY_CYCLES"/>
++	<value value="126" name="A7XX_PERF_SP_RTU_L0_HITS"/>
++	<value value="127" name="A7XX_PERF_SP_RTU_L0_MISSES"/>
++	<value value="128" name="A7XX_PERF_SP_RTU_L0_HIT_ON_MISS"/>
++	<value value="129" name="A7XX_PERF_SP_RTU_STALL_CYCLES_WAVE_QUEUE"/>
++	<value value="130" name="A7XX_PERF_SP_RTU_STALL_CYCLES_L0_HIT_QUEUE"/>
++	<value value="131" name="A7XX_PERF_SP_RTU_STALL_CYCLES_L0_MISS_QUEUE"/>
++	<value value="132" name="A7XX_PERF_SP_RTU_STALL_CYCLES_L0D_IDX_QUEUE"/>
++	<value value="133" name="A7XX_PERF_SP_RTU_STALL_CYCLES_L0DATA"/>
++	<value value="134" name="A7XX_PERF_SP_RTU_STALL_CYCLES_REPLACE_CNT"/>
++	<value value="135" name="A7XX_PERF_SP_RTU_STALL_CYCLES_MRG_CNT"/>
++	<value value="136" name="A7XX_PERF_SP_RTU_STALL_CYCLES_UCHE"/>
++	<value value="137" name="A7XX_PERF_SP_RTU_OPERAND_FETCH_STALL_CYCLES_L0"/>
++	<value value="138" name="A7XX_PERF_SP_RTU_OPERAND_FETCH_STALL_CYCLES_INS_FIFO"/>
++	<value value="139" name="A7XX_PERF_SP_RTU_BVH_FETCH_LATENCY_CYCLES"/>
++	<value value="140" name="A7XX_PERF_SP_RTU_BVH_FETCH_LATENCY_SAMPLES"/>
++	<value value="141" name="A7XX_PERF_SP_STCHE_MISS_INC_VS"/>
++	<value value="142" name="A7XX_PERF_SP_STCHE_MISS_INC_FS"/>
++	<value value="143" name="A7XX_PERF_SP_STCHE_MISS_INC_BV"/>
++	<value value="144" name="A7XX_PERF_SP_STCHE_MISS_INC_LPAC"/>
++	<value value="145" name="A7XX_PERF_SP_VGPR_ACTIVE_CONTEXTS"/>
++	<value value="146" name="A7XX_PERF_SP_PGPR_ALLOC_CONTEXTS"/>
++	<value value="147" name="A7XX_PERF_SP_VGPR_ALLOC_CONTEXTS"/>
++	<value value="148" name="A7XX_PERF_SP_RTU_RAY_BOX_INTERSECTIONS"/>
++	<value value="149" name="A7XX_PERF_SP_RTU_RAY_TRIANGLE_INTERSECTIONS"/>
++	<value value="150" name="A7XX_PERF_SP_SCH_STALL_CYCLES_RTU"/>
++</enum>
++
++<enum name="a7xx_rb_perfcounter_select">
++	<value value="0" name="A7XX_PERF_RB_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_RB_STALL_CYCLES_HLSQ"/>
++	<value value="2" name="A7XX_PERF_RB_STALL_CYCLES_FIFO0_FULL"/>
++	<value value="3" name="A7XX_PERF_RB_STALL_CYCLES_FIFO1_FULL"/>
++	<value value="4" name="A7XX_PERF_RB_STALL_CYCLES_FIFO2_FULL"/>
++	<value value="5" name="A7XX_PERF_RB_STARVE_CYCLES_SP"/>
++	<value value="6" name="A7XX_PERF_RB_STARVE_CYCLES_LRZ_TILE"/>
++	<value value="7" name="A7XX_PERF_RB_STARVE_CYCLES_CCU"/>
++	<value value="8" name="A7XX_PERF_RB_STARVE_CYCLES_Z_PLANE"/>
++	<value value="9" name="A7XX_PERF_RB_STARVE_CYCLES_BARY_PLANE"/>
++	<value value="10" name="A7XX_PERF_RB_Z_WORKLOAD"/>
++	<value value="11" name="A7XX_PERF_RB_HLSQ_ACTIVE"/>
++	<value value="12" name="A7XX_PERF_RB_Z_READ"/>
++	<value value="13" name="A7XX_PERF_RB_Z_WRITE"/>
++	<value value="14" name="A7XX_PERF_RB_C_READ"/>
++	<value value="15" name="A7XX_PERF_RB_C_WRITE"/>
++	<value value="16" name="A7XX_PERF_RB_TOTAL_PASS"/>
++	<value value="17" name="A7XX_PERF_RB_Z_PASS"/>
++	<value value="18" name="A7XX_PERF_RB_Z_FAIL"/>
++	<value value="19" name="A7XX_PERF_RB_S_FAIL"/>
++	<value value="20" name="A7XX_PERF_RB_BLENDED_FXP_COMPONENTS"/>
++	<value value="21" name="A7XX_PERF_RB_BLENDED_FP16_COMPONENTS"/>
++	<value value="22" name="A7XX_PERF_RB_PS_INVOCATIONS"/>
++	<value value="23" name="A7XX_PERF_RB_2D_ALIVE_CYCLES"/>
++	<value value="24" name="A7XX_PERF_RB_2D_STALL_CYCLES_A2D"/>
++	<value value="25" name="A7XX_PERF_RB_2D_STARVE_CYCLES_SRC"/>
++	<value value="26" name="A7XX_PERF_RB_2D_STARVE_CYCLES_SP"/>
++	<value value="27" name="A7XX_PERF_RB_2D_STARVE_CYCLES_DST"/>
++	<value value="28" name="A7XX_PERF_RB_2D_VALID_PIXELS"/>
++	<value value="29" name="A7XX_PERF_RB_3D_PIXELS"/>
++	<value value="30" name="A7XX_PERF_RB_BLENDER_WORKING_CYCLES"/>
++	<value value="31" name="A7XX_PERF_RB_ZPROC_WORKING_CYCLES"/>
++	<value value="32" name="A7XX_PERF_RB_CPROC_WORKING_CYCLES"/>
++	<value value="33" name="A7XX_PERF_RB_SAMPLER_WORKING_CYCLES"/>
++	<value value="34" name="A7XX_PERF_RB_STALL_CYCLES_CCU_COLOR_READ"/>
++	<value value="35" name="A7XX_PERF_RB_STALL_CYCLES_CCU_COLOR_WRITE"/>
++	<value value="36" name="A7XX_PERF_RB_STALL_CYCLES_CCU_DEPTH_READ"/>
++	<value value="37" name="A7XX_PERF_RB_STALL_CYCLES_CCU_DEPTH_WRITE"/>
++	<value value="38" name="A7XX_PERF_RB_STALL_CYCLES_VPC"/>
++	<value value="39" name="A7XX_PERF_RB_2D_INPUT_TRANS"/>
++	<value value="40" name="A7XX_PERF_RB_2D_OUTPUT_RB_DST_TRANS"/>
++	<value value="41" name="A7XX_PERF_RB_2D_OUTPUT_RB_SRC_TRANS"/>
++	<value value="42" name="A7XX_PERF_RB_BLENDED_FP32_COMPONENTS"/>
++	<value value="43" name="A7XX_PERF_RB_COLOR_PIX_TILES"/>
++	<value value="44" name="A7XX_PERF_RB_STALL_CYCLES_CCU"/>
++	<value value="45" name="A7XX_PERF_RB_EARLY_Z_ARB3_GRANT"/>
++	<value value="46" name="A7XX_PERF_RB_LATE_Z_ARB3_GRANT"/>
++	<value value="47" name="A7XX_PERF_RB_EARLY_Z_SKIP_GRANT"/>
++	<value value="48" name="A7XX_PERF_RB_VRS_1x1_QUADS"/>
++	<value value="49" name="A7XX_PERF_RB_VRS_2x1_QUADS"/>
++	<value value="50" name="A7XX_PERF_RB_VRS_1x2_QUADS"/>
++	<value value="51" name="A7XX_PERF_RB_VRS_2x2_QUADS"/>
++	<value value="52" name="A7XX_PERF_RB_VRS_4x2_QUADS"/>
++	<value value="53" name="A7XX_PERF_RB_VRS_4x4_QUADS"/>
++</enum>
++
++<enum name="a7xx_vsc_perfcounter_select">
++	<value value="0" name="A7XX_PERF_VSC_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_VSC_WORKING_CYCLES"/>
++	<value value="2" name="A7XX_PERF_VSC_STALL_CYCLES_UCHE"/>
++	<value value="3" name="A7XX_PERF_VSC_EOT_NUM"/>
++	<value value="4" name="A7XX_PERF_VSC_INPUT_TILES"/>
++</enum>
++
++<enum name="a7xx_ccu_perfcounter_select">
++	<value value="0" name="A7XX_PERF_CCU_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_CCU_STALL_CYCLES_RB_DEPTH_RETURN"/>
++	<value value="2" name="A7XX_PERF_CCU_STALL_CYCLES_RB_COLOR_RETURN"/>
++	<value value="3" name="A7XX_PERF_CCU_DEPTH_BLOCKS"/>
++	<value value="4" name="A7XX_PERF_CCU_COLOR_BLOCKS"/>
++	<value value="5" name="A7XX_PERF_CCU_DEPTH_BLOCK_HIT"/>
++	<value value="6" name="A7XX_PERF_CCU_COLOR_BLOCK_HIT"/>
++	<value value="7" name="A7XX_PERF_CCU_PARTIAL_BLOCK_READ"/>
++	<value value="8" name="A7XX_PERF_CCU_GMEM_READ"/>
++	<value value="9" name="A7XX_PERF_CCU_GMEM_WRITE"/>
++	<value value="10" name="A7XX_PERF_CCU_2D_RD_REQ"/>
++	<value value="11" name="A7XX_PERF_CCU_2D_WR_REQ"/>
++	<value value="12" name="A7XX_PERF_CCU_UBWC_COLOR_BLOCKS_CONCURRENT"/>
++	<value value="13" name="A7XX_PERF_CCU_UBWC_DEPTH_BLOCKS_CONCURRENT"/>
++	<value value="14" name="A7XX_PERF_CCU_COLOR_RESOLVE_DROPPED"/>
++	<value value="15" name="A7XX_PERF_CCU_DEPTH_RESOLVE_DROPPED"/>
++	<value value="16" name="A7XX_PERF_CCU_COLOR_RENDER_CONCURRENT"/>
++	<value value="17" name="A7XX_PERF_CCU_DEPTH_RENDER_CONCURRENT"/>
++	<value value="18" name="A7XX_PERF_CCU_COLOR_RESOLVE_AFTER_RENDER"/>
++	<value value="19" name="A7XX_PERF_CCU_DEPTH_RESOLVE_AFTER_RENDER"/>
++	<value value="20" name="A7XX_PERF_CCU_GMEM_EXTRA_DEPTH_READ"/>
++	<value value="21" name="A7XX_PERF_CCU_GMEM_COLOR_READ_4AA"/>
++	<value value="22" name="A7XX_PERF_CCU_GMEM_COLOR_READ_4AA_FULL"/>
++</enum>
++
++<enum name="a7xx_lrz_perfcounter_select">
++	<value value="0" name="A7XX_PERF_LRZ_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_LRZ_STARVE_CYCLES_RAS"/>
++	<value value="2" name="A7XX_PERF_LRZ_STALL_CYCLES_RB"/>
++	<value value="3" name="A7XX_PERF_LRZ_STALL_CYCLES_VSC"/>
++	<value value="4" name="A7XX_PERF_LRZ_STALL_CYCLES_VPC"/>
++	<value value="5" name="A7XX_PERF_LRZ_STALL_CYCLES_FLAG_PREFETCH"/>
++	<value value="6" name="A7XX_PERF_LRZ_STALL_CYCLES_UCHE"/>
++	<value value="7" name="A7XX_PERF_LRZ_LRZ_READ"/>
++	<value value="8" name="A7XX_PERF_LRZ_LRZ_WRITE"/>
++	<value value="9" name="A7XX_PERF_LRZ_READ_LATENCY"/>
++	<value value="10" name="A7XX_PERF_LRZ_MERGE_CACHE_UPDATING"/>
++	<value value="11" name="A7XX_PERF_LRZ_PRIM_KILLED_BY_MASKGEN"/>
++	<value value="12" name="A7XX_PERF_LRZ_PRIM_KILLED_BY_LRZ"/>
++	<value value="13" name="A7XX_PERF_LRZ_VISIBLE_PRIM_AFTER_LRZ"/>
++	<value value="14" name="A7XX_PERF_LRZ_FULL_8X8_TILES"/>
++	<value value="15" name="A7XX_PERF_LRZ_PARTIAL_8X8_TILES"/>
++	<value value="16" name="A7XX_PERF_LRZ_TILE_KILLED"/>
++	<value value="17" name="A7XX_PERF_LRZ_TOTAL_PIXEL"/>
++	<value value="18" name="A7XX_PERF_LRZ_VISIBLE_PIXEL_AFTER_LRZ"/>
++	<value value="19" name="A7XX_PERF_LRZ_FEEDBACK_ACCEPT"/>
++	<value value="20" name="A7XX_PERF_LRZ_FEEDBACK_DISCARD"/>
++	<value value="21" name="A7XX_PERF_LRZ_FEEDBACK_STALL"/>
++	<value value="22" name="A7XX_PERF_LRZ_STALL_CYCLES_RB_ZPLANE"/>
++	<value value="23" name="A7XX_PERF_LRZ_STALL_CYCLES_RB_BPLANE"/>
++	<value value="24" name="A7XX_PERF_LRZ_RAS_MASK_TRANS"/>
++	<value value="25" name="A7XX_PERF_LRZ_STALL_CYCLES_MVC"/>
++	<value value="26" name="A7XX_PERF_LRZ_TILE_KILLED_BY_IMAGE_VRS"/>
++	<value value="27" name="A7XX_PERF_LRZ_TILE_KILLED_BY_Z"/>
++</enum>
++
++<enum name="a7xx_cmp_perfcounter_select">
++	<value value="0" name="A7XX_PERF_CMPDECMP_STALL_CYCLES_ARB"/>
++	<value value="1" name="A7XX_PERF_CMPDECMP_VBIF_LATENCY_CYCLES"/>
++	<value value="2" name="A7XX_PERF_CMPDECMP_VBIF_LATENCY_SAMPLES"/>
++	<value value="3" name="A7XX_PERF_CMPDECMP_VBIF_READ_DATA_CCU"/>
++	<value value="4" name="A7XX_PERF_CMPDECMP_VBIF_WRITE_DATA_CCU"/>
++	<value value="5" name="A7XX_PERF_CMPDECMP_VBIF_READ_REQUEST"/>
++	<value value="6" name="A7XX_PERF_CMPDECMP_VBIF_WRITE_REQUEST"/>
++	<value value="7" name="A7XX_PERF_CMPDECMP_VBIF_READ_DATA"/>
++	<value value="8" name="A7XX_PERF_CMPDECMP_VBIF_WRITE_DATA"/>
++	<value value="9" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG1_COUNT"/>
++	<value value="10" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG2_COUNT"/>
++	<value value="11" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG3_COUNT"/>
++	<value value="12" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG4_COUNT"/>
++	<value value="13" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG5_COUNT"/>
++	<value value="14" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG6_COUNT"/>
++	<value value="15" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG8_COUNT"/>
++	<value value="16" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG1_COUNT"/>
++	<value value="17" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG2_COUNT"/>
++	<value value="18" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG3_COUNT"/>
++	<value value="19" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG4_COUNT"/>
++	<value value="20" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG5_COUNT"/>
++	<value value="21" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG6_COUNT"/>
++	<value value="22" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG8_COUNT"/>
++	<value value="23" name="A7XX_PERF_CMPDECMP_VBIF_READ_DATA_UCHE_CH0"/>
++	<value value="24" name="A7XX_PERF_CMPDECMP_VBIF_READ_DATA_UCHE_CH1"/>
++	<value value="25" name="A7XX_PERF_CMPDECMP_VBIF_WRITE_DATA_UCHE"/>
++	<value value="26" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG0_COUNT"/>
++	<value value="27" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG0_COUNT"/>
++	<value value="28" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAGALPHA_COUNT"/>
++	<value value="29" name="A7XX_PERF_CMPDECMP_RESOLVE_EVENTS"/>
++	<value value="30" name="A7XX_PERF_CMPDECMP_CONCURRENT_RESOLVE_EVENTS"/>
++	<value value="31" name="A7XX_PERF_CMPDECMP_DROPPED_CLEAR_EVENTS"/>
++	<value value="32" name="A7XX_PERF_CMPDECMP_ST_BLOCKS_CONCURRENT"/>
++	<value value="33" name="A7XX_PERF_CMPDECMP_LRZ_ST_BLOCKS_CONCURRENT"/>
++	<value value="34" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG0_COUNT"/>
++	<value value="35" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG1_COUNT"/>
++	<value value="36" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG2_COUNT"/>
++	<value value="37" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG3_COUNT"/>
++	<value value="38" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG4_COUNT"/>
++	<value value="39" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG5_COUNT"/>
++	<value value="40" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG6_COUNT"/>
++	<value value="41" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG8_COUNT"/>
++	<value value="42" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG0_COUNT"/>
++	<value value="43" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG1_COUNT"/>
++	<value value="44" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG2_COUNT"/>
++	<value value="45" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG3_COUNT"/>
++	<value value="46" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG4_COUNT"/>
++	<value value="47" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG5_COUNT"/>
++	<value value="48" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG6_COUNT"/>
++	<value value="49" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG8_COUNT"/>
++</enum>
++
++<enum name="a7xx_gbif_perfcounter_select">
++	<value value="0" name="A7XX_PERF_GBIF_RESERVED_0"/>
++	<value value="1" name="A7XX_PERF_GBIF_RESERVED_1"/>
++	<value value="2" name="A7XX_PERF_GBIF_RESERVED_2"/>
++	<value value="3" name="A7XX_PERF_GBIF_RESERVED_3"/>
++	<value value="4" name="A7XX_PERF_GBIF_RESERVED_4"/>
++	<value value="5" name="A7XX_PERF_GBIF_RESERVED_5"/>
++	<value value="6" name="A7XX_PERF_GBIF_RESERVED_6"/>
++	<value value="7" name="A7XX_PERF_GBIF_RESERVED_7"/>
++	<value value="8" name="A7XX_PERF_GBIF_RESERVED_8"/>
++	<value value="9" name="A7XX_PERF_GBIF_RESERVED_9"/>
++	<value value="10" name="A7XX_PERF_GBIF_AXI0_READ_REQUESTS_TOTAL"/>
++	<value value="11" name="A7XX_PERF_GBIF_AXI1_READ_REQUESTS_TOTAL"/>
++	<value value="12" name="A7XX_PERF_GBIF_RESERVED_12"/>
++	<value value="13" name="A7XX_PERF_GBIF_RESERVED_13"/>
++	<value value="14" name="A7XX_PERF_GBIF_RESERVED_14"/>
++	<value value="15" name="A7XX_PERF_GBIF_RESERVED_15"/>
++	<value value="16" name="A7XX_PERF_GBIF_RESERVED_16"/>
++	<value value="17" name="A7XX_PERF_GBIF_RESERVED_17"/>
++	<value value="18" name="A7XX_PERF_GBIF_RESERVED_18"/>
++	<value value="19" name="A7XX_PERF_GBIF_RESERVED_19"/>
++	<value value="20" name="A7XX_PERF_GBIF_RESERVED_20"/>
++	<value value="21" name="A7XX_PERF_GBIF_RESERVED_21"/>
++	<value value="22" name="A7XX_PERF_GBIF_AXI0_WRITE_REQUESTS_TOTAL"/>
++	<value value="23" name="A7XX_PERF_GBIF_AXI1_WRITE_REQUESTS_TOTAL"/>
++	<value value="24" name="A7XX_PERF_GBIF_RESERVED_24"/>
++	<value value="25" name="A7XX_PERF_GBIF_RESERVED_25"/>
++	<value value="26" name="A7XX_PERF_GBIF_RESERVED_26"/>
++	<value value="27" name="A7XX_PERF_GBIF_RESERVED_27"/>
++	<value value="28" name="A7XX_PERF_GBIF_RESERVED_28"/>
++	<value value="29" name="A7XX_PERF_GBIF_RESERVED_29"/>
++	<value value="30" name="A7XX_PERF_GBIF_RESERVED_30"/>
++	<value value="31" name="A7XX_PERF_GBIF_RESERVED_31"/>
++	<value value="32" name="A7XX_PERF_GBIF_RESERVED_32"/>
++	<value value="33" name="A7XX_PERF_GBIF_RESERVED_33"/>
++	<value value="34" name="A7XX_PERF_GBIF_AXI0_READ_DATA_BEATS_TOTAL"/>
++	<value value="35" name="A7XX_PERF_GBIF_AXI1_READ_DATA_BEATS_TOTAL"/>
++	<value value="36" name="A7XX_PERF_GBIF_RESERVED_36"/>
++	<value value="37" name="A7XX_PERF_GBIF_RESERVED_37"/>
++	<value value="38" name="A7XX_PERF_GBIF_RESERVED_38"/>
++	<value value="39" name="A7XX_PERF_GBIF_RESERVED_39"/>
++	<value value="40" name="A7XX_PERF_GBIF_RESERVED_40"/>
++	<value value="41" name="A7XX_PERF_GBIF_RESERVED_41"/>
++	<value value="42" name="A7XX_PERF_GBIF_RESERVED_42"/>
++	<value value="43" name="A7XX_PERF_GBIF_RESERVED_43"/>
++	<value value="44" name="A7XX_PERF_GBIF_RESERVED_44"/>
++	<value value="45" name="A7XX_PERF_GBIF_RESERVED_45"/>
++	<value value="46" name="A7XX_PERF_GBIF_AXI0_WRITE_DATA_BEATS_TOTAL"/>
++	<value value="47" name="A7XX_PERF_GBIF_AXI1_WRITE_DATA_BEATS_TOTAL"/>
++	<value value="48" name="A7XX_PERF_GBIF_RESERVED_48"/>
++	<value value="49" name="A7XX_PERF_GBIF_RESERVED_49"/>
++	<value value="50" name="A7XX_PERF_GBIF_RESERVED_50"/>
++	<value value="51" name="A7XX_PERF_GBIF_RESERVED_51"/>
++	<value value="52" name="A7XX_PERF_GBIF_RESERVED_52"/>
++	<value value="53" name="A7XX_PERF_GBIF_RESERVED_53"/>
++	<value value="54" name="A7XX_PERF_GBIF_RESERVED_54"/>
++	<value value="55" name="A7XX_PERF_GBIF_RESERVED_55"/>
++	<value value="56" name="A7XX_PERF_GBIF_RESERVED_56"/>
++	<value value="57" name="A7XX_PERF_GBIF_RESERVED_57"/>
++	<value value="58" name="A7XX_PERF_GBIF_RESERVED_58"/>
++	<value value="59" name="A7XX_PERF_GBIF_RESERVED_59"/>
++	<value value="60" name="A7XX_PERF_GBIF_RESERVED_60"/>
++	<value value="61" name="A7XX_PERF_GBIF_RESERVED_61"/>
++	<value value="62" name="A7XX_PERF_GBIF_RESERVED_62"/>
++	<value value="63" name="A7XX_PERF_GBIF_RESERVED_63"/>
++	<value value="64" name="A7XX_PERF_GBIF_RESERVED_64"/>
++	<value value="65" name="A7XX_PERF_GBIF_RESERVED_65"/>
++	<value value="66" name="A7XX_PERF_GBIF_RESERVED_66"/>
++	<value value="67" name="A7XX_PERF_GBIF_RESERVED_67"/>
++	<value value="68" name="A7XX_PERF_GBIF_CYCLES_CH0_HELD_OFF_RD_ALL"/>
++	<value value="69" name="A7XX_PERF_GBIF_CYCLES_CH1_HELD_OFF_RD_ALL"/>
++	<value value="70" name="A7XX_PERF_GBIF_CYCLES_CH0_HELD_OFF_WR_ALL"/>
++	<value value="71" name="A7XX_PERF_GBIF_CYCLES_CH1_HELD_OFF_WR_ALL"/>
++	<value value="72" name="A7XX_PERF_GBIF_AXI_CH0_REQUEST_HELD_OFF"/>
++	<value value="73" name="A7XX_PERF_GBIF_AXI_CH1_REQUEST_HELD_OFF"/>
++	<value value="74" name="A7XX_PERF_GBIF_AXI_REQUEST_HELD_OFF"/>
++	<value value="75" name="A7XX_PERF_GBIF_AXI_CH0_WRITE_DATA_HELD_OFF"/>
++	<value value="76" name="A7XX_PERF_GBIF_AXI_CH1_WRITE_DATA_HELD_OFF"/>
++	<value value="77" name="A7XX_PERF_GBIF_AXI_ALL_WRITE_DATA_HELD_OFF"/>
++	<value value="78" name="A7XX_PERF_GBIF_AXI_ALL_READ_BEATS"/>
++	<value value="79" name="A7XX_PERF_GBIF_AXI_ALL_WRITE_BEATS"/>
++	<value value="80" name="A7XX_PERF_GBIF_AXI_ALL_BEATS"/>
++</enum>
++
++<enum name="a7xx_ufc_perfcounter_select">
++	<value value="0" name="A7XX_PERF_UFC_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_UFC_READ_DATA_VBIF"/>
++	<value value="2" name="A7XX_PERF_UFC_WRITE_DATA_VBIF"/>
++	<value value="3" name="A7XX_PERF_UFC_READ_REQUEST_VBIF"/>
++	<value value="4" name="A7XX_PERF_UFC_WRITE_REQUEST_VBIF"/>
++	<value value="5" name="A7XX_PERF_UFC_LRZ_FILTER_HIT"/>
++	<value value="6" name="A7XX_PERF_UFC_LRZ_FILTER_MISS"/>
++	<value value="7" name="A7XX_PERF_UFC_CRE_FILTER_HIT"/>
++	<value value="8" name="A7XX_PERF_UFC_CRE_FILTER_MISS"/>
++	<value value="9" name="A7XX_PERF_UFC_SP_FILTER_HIT"/>
++	<value value="10" name="A7XX_PERF_UFC_SP_FILTER_MISS"/>
++	<value value="11" name="A7XX_PERF_UFC_SP_REQUESTS"/>
++	<value value="12" name="A7XX_PERF_UFC_TP_FILTER_HIT"/>
++	<value value="13" name="A7XX_PERF_UFC_TP_FILTER_MISS"/>
++	<value value="14" name="A7XX_PERF_UFC_TP_REQUESTS"/>
++	<value value="15" name="A7XX_PERF_UFC_MAIN_HIT_LRZ_PREFETCH"/>
++	<value value="16" name="A7XX_PERF_UFC_MAIN_HIT_CRE_PREFETCH"/>
++	<value value="17" name="A7XX_PERF_UFC_MAIN_HIT_SP_PREFETCH"/>
++	<value value="18" name="A7XX_PERF_UFC_MAIN_HIT_TP_PREFETCH"/>
++	<value value="19" name="A7XX_PERF_UFC_MAIN_HIT_UBWC_READ"/>
++	<value value="20" name="A7XX_PERF_UFC_MAIN_HIT_UBWC_WRITE"/>
++	<value value="21" name="A7XX_PERF_UFC_MAIN_MISS_LRZ_PREFETCH"/>
++	<value value="22" name="A7XX_PERF_UFC_MAIN_MISS_CRE_PREFETCH"/>
++	<value value="23" name="A7XX_PERF_UFC_MAIN_MISS_SP_PREFETCH"/>
++	<value value="24" name="A7XX_PERF_UFC_MAIN_MISS_TP_PREFETCH"/>
++	<value value="25" name="A7XX_PERF_UFC_MAIN_MISS_UBWC_READ"/>
++	<value value="26" name="A7XX_PERF_UFC_MAIN_MISS_UBWC_WRITE"/>
++	<value value="27" name="A7XX_PERF_UFC_UBWC_READ_UFC_TRANS"/>
++	<value value="28" name="A7XX_PERF_UFC_UBWC_WRITE_UFC_TRANS"/>
++	<value value="29" name="A7XX_PERF_UFC_STALL_CYCLES_GBIF_CMD"/>
++	<value value="30" name="A7XX_PERF_UFC_STALL_CYCLES_GBIF_RDATA"/>
++	<value value="31" name="A7XX_PERF_UFC_STALL_CYCLES_GBIF_WDATA"/>
++	<value value="32" name="A7XX_PERF_UFC_STALL_CYCLES_UBWC_WR_FLAG"/>
++	<value value="33" name="A7XX_PERF_UFC_STALL_CYCLES_UBWC_FLAG_RTN"/>
++	<value value="34" name="A7XX_PERF_UFC_STALL_CYCLES_UBWC_EVENT"/>
++	<value value="35" name="A7XX_PERF_UFC_LRZ_PREFETCH_STALLED_CYCLES"/>
++	<value value="36" name="A7XX_PERF_UFC_CRE_PREFETCH_STALLED_CYCLES"/>
++	<value value="37" name="A7XX_PERF_UFC_SPTP_PREFETCH_STALLED_CYCLES"/>
++	<value value="38" name="A7XX_PERF_UFC_UBWC_RD_STALLED_CYCLES"/>
++	<value value="39" name="A7XX_PERF_UFC_UBWC_WR_STALLED_CYCLES"/>
++	<value value="40" name="A7XX_PERF_UFC_PREFETCH_STALLED_CYCLES"/>
++	<value value="41" name="A7XX_PERF_UFC_EVICTION_STALLED_CYCLES"/>
++	<value value="42" name="A7XX_PERF_UFC_LOCK_STALLED_CYCLES"/>
++	<value value="43" name="A7XX_PERF_UFC_MISS_LATENCY_CYCLES"/>
++	<value value="44" name="A7XX_PERF_UFC_MISS_LATENCY_SAMPLES"/>
++	<value value="45" name="A7XX_PERF_UFC_UBWC_REQ_STALLED_CYCLES"/>
++	<value value="46" name="A7XX_PERF_UFC_TP_HINT_TAG_MISS"/>
++	<value value="47" name="A7XX_PERF_UFC_TP_HINT_TAG_HIT_RDY"/>
++	<value value="48" name="A7XX_PERF_UFC_TP_HINT_TAG_HIT_NRDY"/>
++	<value value="49" name="A7XX_PERF_UFC_TP_HINT_IS_FCLEAR"/>
++	<value value="50" name="A7XX_PERF_UFC_TP_HINT_IS_ALPHA0"/>
++	<value value="51" name="A7XX_PERF_UFC_SP_L1_FILTER_HIT"/>
++	<value value="52" name="A7XX_PERF_UFC_SP_L1_FILTER_MISS"/>
++	<value value="53" name="A7XX_PERF_UFC_SP_L1_FILTER_REQUESTS"/>
++	<value value="54" name="A7XX_PERF_UFC_TP_L1_TAG_HIT_RDY"/>
++	<value value="55" name="A7XX_PERF_UFC_TP_L1_TAG_HIT_NRDY"/>
++	<value value="56" name="A7XX_PERF_UFC_TP_L1_TAG_MISS"/>
++	<value value="57" name="A7XX_PERF_UFC_TP_L1_FILTER_REQUESTS"/>
++</enum>
++
++</database>
+diff --git a/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml b/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
+index 46271340162280..7abc08635495ce 100644
+--- a/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
++++ b/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
+@@ -21,9 +21,9 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 	<value name="HLSQ_FLUSH" value="7" variants="A3XX-A4XX"/>
+ 	<value name="VIZQUERY_END" value="8" variants="A2XX"/>
+ 	<value name="SC_WAIT_WC" value="9" variants="A2XX"/>
+-	<value name="WRITE_PRIMITIVE_COUNTS" value="9" variants="A6XX"/>
+-	<value name="START_PRIMITIVE_CTRS" value="11" variants="A6XX"/>
+-	<value name="STOP_PRIMITIVE_CTRS" value="12" variants="A6XX"/>
++	<value name="WRITE_PRIMITIVE_COUNTS" value="9" variants="A6XX-"/>
++	<value name="START_PRIMITIVE_CTRS" value="11" variants="A6XX-"/>
++	<value name="STOP_PRIMITIVE_CTRS" value="12" variants="A6XX-"/>
+ 	<!-- Not sure that these 4 events don't have the same meaning as on A5XX+ -->
+ 	<value name="RST_PIX_CNT" value="13" variants="A2XX-A4XX"/>
+ 	<value name="RST_VTX_CNT" value="14" variants="A2XX-A4XX"/>
+@@ -31,8 +31,8 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 	<value name="STAT_EVENT" value="16" variants="A2XX-A4XX"/>
+ 	<value name="CACHE_FLUSH_AND_INV_TS_EVENT" value="20" variants="A2XX-A4XX"/>
+ 	<doc>
+-		If A6XX_RB_SAMPLE_COUNT_CONTROL.copy is true, writes OQ Z passed
+-		sample counts to RB_SAMPLE_COUNT_ADDR.  This writes to main
++		If A6XX_RB_SAMPLE_COUNTER_CNTL.copy is true, writes OQ Z passed
++		sample counts to RB_SAMPLE_COUNTER_BASE.  This writes to main
+ 		memory, skipping UCHE.
+ 	</doc>
+ 	<value name="ZPASS_DONE" value="21"/>
+@@ -97,6 +97,13 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 	</doc>
+ 	<value name="BLIT" value="30" variants="A5XX-"/>
+ 
++	<doc>
++	Flip between the primary and secondary LRZ buffers. This is used
++	for concurrent binning, so that BV can write to one buffer while
++	BR reads from the other.
++	</doc>
++	<value name="LRZ_FLIP_BUFFER" value="36" variants="A7XX"/>
++
+ 	<doc>
+ 		Clears based on GRAS_LRZ_CNTL configuration, could clear
+ 		fast-clear buffer or LRZ direction.
+@@ -114,6 +121,7 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 	<value name="BLIT_OP_FILL_2D" value="39" variants="A5XX-"/>
+ 	<value name="BLIT_OP_COPY_2D" value="40" variants="A5XX-A6XX"/>
+ 	<value name="UNK_40" value="40" variants="A7XX"/>
++	<value name="LRZ_Q_CACHE_INVALIDATE" value="41" variants="A7XX"/>
+ 	<value name="BLIT_OP_SCALE_2D" value="42" variants="A5XX-"/>
+ 	<value name="CONTEXT_DONE_2D" value="43" variants="A5XX-"/>
+ 	<value name="UNK_2C" value="44" variants="A5XX-"/>
+@@ -372,7 +380,7 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 	<value name="CP_LOAD_STATE" value="0x30" variants="A3XX"/>
+ 	<value name="CP_LOAD_STATE4" value="0x30" variants="A4XX-A5XX"/>
+ 	<doc>Conditionally load an IB based on a flag, prefetch enabled</doc>
+-	<value name="CP_COND_INDIRECT_BUFFER_PFE" value="0x3a"/>
++	<value name="CP_COND_INDIRECT_BUFFER_PFE" value="0x3a" variants="A3XX-A5XX"/>
+ 	<doc>Conditionally load an IB based on a flag, prefetch disabled</doc>
+ 	<value name="CP_COND_INDIRECT_BUFFER_PFD" value="0x32" variants="A3XX"/>
+ 	<doc>Load a buffer with pre-fetch enabled</doc>
+@@ -538,7 +546,7 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 	<value name="CP_LOAD_STATE6_GEOM" value="0x32" variants="A6XX-"/>
+ 	<value name="CP_LOAD_STATE6_FRAG" value="0x34" variants="A6XX-"/>
+ 	<!--
+-	Note: For IBO state (Image/SSBOs) which have shared state across
++	Note: For UAV state (Image/SSBOs) which have shared state across
+ 	shader stages, for 3d pipeline CP_LOAD_STATE6 is used.  But for
+ 	compute shaders, CP_LOAD_STATE6_FRAG is used.  Possibly they are
+ 	interchangeable.
+@@ -567,7 +575,7 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 	<value name="IN_PREEMPT" value="0x0f" variants="A6XX-"/>
+ 
+ 	<!-- TODO do these exist on A5xx? -->
+-	<value name="CP_SCRATCH_WRITE" value="0x4c" variants="A6XX"/>
++	<value name="CP_SCRATCH_WRITE" value="0x4c" variants="A6XX-"/>
+ 	<value name="CP_REG_TO_MEM_OFFSET_MEM" value="0x74" variants="A6XX-"/>
+ 	<value name="CP_REG_TO_MEM_OFFSET_REG" value="0x72" variants="A6XX-"/>
+ 	<value name="CP_WAIT_MEM_GTE" value="0x14" variants="A6XX"/>
+@@ -650,6 +658,11 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 
+ 	<doc>Reset various on-chip state used for synchronization</doc>
+ 	<value name="CP_RESET_CONTEXT_STATE" value="0x1f" variants="A7XX-"/>
++
++	<doc>Invalidates the "CCHE" introduced on a740</doc>
++	<value name="CP_CCHE_INVALIDATE" value="0x3a" variants="A7XX-"/>
++
++	<value name="CP_SCOPE_CNTL" value="0x6c" variants="A7XX-"/>
+ </enum>
+ 
+ 
+@@ -792,14 +805,14 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 		<value name="SB6_GS_SHADER" value="0xb"/>
+ 		<value name="SB6_FS_SHADER" value="0xc"/>
+ 		<value name="SB6_CS_SHADER" value="0xd"/>
+-		<value name="SB6_IBO"       value="0xe"/>
+-		<value name="SB6_CS_IBO"    value="0xf"/>
++		<value name="SB6_UAV"       value="0xe"/>
++		<value name="SB6_CS_UAV"    value="0xf"/>
+ 	</enum>
+ 	<enum name="a6xx_state_type">
+ 		<value name="ST6_SHADER" value="0"/>
+ 		<value name="ST6_CONSTANTS" value="1"/>
+ 		<value name="ST6_UBO" value="2"/>
+-		<value name="ST6_IBO" value="3"/>
++		<value name="ST6_UAV" value="3"/>
+ 	</enum>
+ 	<enum name="a6xx_state_src">
+ 		<value name="SS6_DIRECT" value="0"/>
+@@ -1121,39 +1134,93 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 	</reg32>
+ </domain>
+ 
++<enum name="a7xx_abs_mask_mode">
++	<value name="ABS_MASK" value="0x1"/>
++	<value name="NO_ABS_MASK" value="0x0"/>
++</enum>
++
+ <domain name="CP_SET_BIN_DATA5" width="32">
+ 	<reg32 offset="0" name="0">
++		<bitfield name="VSC_MASK" low="0" high="15" type="hex">
++			<doc>
++				A mask of bins, starting at VSC_N, whose
++				visibility is OR'd together. A value of 0 is
++				interpreted as 1 (i.e. just use VSC_N for
++				visibility) for backwards compatibility. Only
++				exists on a7xx.
++			</doc>
++		</bitfield>
+ 		<!-- equiv to PC_VSTREAM_CONTROL.SIZE on a3xx/a4xx: -->
+ 		<bitfield name="VSC_SIZE" low="16" high="21" type="uint"/>
+ 		<!-- equiv to PC_VSTREAM_CONTROL.N on a3xx/a4xx: -->
+ 		<bitfield name="VSC_N" low="22" high="26" type="uint"/>
++		<bitfield name="ABS_MASK" pos="28" type="a7xx_abs_mask_mode" addvariant="yes">
++			<doc>
++				If this field is 1, VSC_MASK and VSC_N are
++				ignored and instead a new ordinal immediately
++				after specifies the full 32-bit mask of bins
++				to use. The mask is "absolute" instead of
++				relative to VSC_N.
++			</doc>
++		</bitfield>
+ 	</reg32>
+-	<!-- BIN_DATA_ADDR -> VSC_PIPE[p].DATA_ADDRESS -->
+-	<reg32 offset="1" name="1">
+-		<bitfield name="BIN_DATA_ADDR_LO" low="0" high="31" type="hex"/>
+-	</reg32>
+-	<reg32 offset="2" name="2">
+-		<bitfield name="BIN_DATA_ADDR_HI" low="0" high="31" type="hex"/>
+-	</reg32>
+-	<!-- BIN_SIZE_ADDRESS -> VSC_SIZE_ADDRESS + (p * 4)-->
+-	<reg32 offset="3" name="3">
+-		<bitfield name="BIN_SIZE_ADDRESS_LO" low="0" high="31"/>
+-	</reg32>
+-	<reg32 offset="4" name="4">
+-		<bitfield name="BIN_SIZE_ADDRESS_HI" low="0" high="31"/>
+-	</reg32>
+-	<!-- new on a6xx, where BIN_DATA_ADDR is the DRAW_STRM: -->
+-	<reg32 offset="5" name="5">
+-		<bitfield name="BIN_PRIM_STRM_LO" low="0" high="31"/>
+-	</reg32>
+-	<reg32 offset="6" name="6">
+-		<bitfield name="BIN_PRIM_STRM_HI" low="0" high="31"/>
+-	</reg32>
+-	<!--
+-		a7xx adds a few more addresses to the end of the pkt
+-	 -->
+-	<reg64 offset="7" name="7"/>
+-	<reg64 offset="9" name="9"/>
++	<stripe varset="a7xx_abs_mask_mode" variants="NO_ABS_MASK">
++		<!-- BIN_DATA_ADDR -> VSC_PIPE[p].DATA_ADDRESS -->
++		<reg32 offset="1" name="1">
++			<bitfield name="BIN_DATA_ADDR_LO" low="0" high="31" type="hex"/>
++		</reg32>
++		<reg32 offset="2" name="2">
++			<bitfield name="BIN_DATA_ADDR_HI" low="0" high="31" type="hex"/>
++		</reg32>
++		<!-- BIN_SIZE_ADDRESS -> VSC_SIZE_ADDRESS + (p * 4)-->
++		<reg32 offset="3" name="3">
++			<bitfield name="BIN_SIZE_ADDRESS_LO" low="0" high="31"/>
++		</reg32>
++		<reg32 offset="4" name="4">
++			<bitfield name="BIN_SIZE_ADDRESS_HI" low="0" high="31"/>
++		</reg32>
++		<!-- new on a6xx, where BIN_DATA_ADDR is the DRAW_STRM: -->
++		<reg32 offset="5" name="5">
++			<bitfield name="BIN_PRIM_STRM_LO" low="0" high="31"/>
++		</reg32>
++		<reg32 offset="6" name="6">
++			<bitfield name="BIN_PRIM_STRM_HI" low="0" high="31"/>
++		</reg32>
++		<!--
++			a7xx adds a few more addresses to the end of the pkt
++		 -->
++		<reg64 offset="7" name="7"/>
++		<reg64 offset="9" name="9"/>
++	</stripe>
++	<stripe varset="a7xx_abs_mask_mode" variants="ABS_MASK">
++		<reg32 offset="1" name="ABS_MASK"/>
++		<!-- BIN_DATA_ADDR -> VSC_PIPE[p].DATA_ADDRESS -->
++		<reg32 offset="2" name="2">
++			<bitfield name="BIN_DATA_ADDR_LO" low="0" high="31" type="hex"/>
++		</reg32>
++		<reg32 offset="3" name="3">
++			<bitfield name="BIN_DATA_ADDR_HI" low="0" high="31" type="hex"/>
++		</reg32>
++		<!-- BIN_SIZE_ADDRESS -> VSC_SIZE_ADDRESS + (p * 4)-->
++		<reg32 offset="4" name="4">
++			<bitfield name="BIN_SIZE_ADDRESS_LO" low="0" high="31"/>
++		</reg32>
++		<reg32 offset="5" name="5">
++			<bitfield name="BIN_SIZE_ADDRESS_HI" low="0" high="31"/>
++		</reg32>
++		<!-- new on a6xx, where BIN_DATA_ADDR is the DRAW_STRM: -->
++		<reg32 offset="6" name="6">
++			<bitfield name="BIN_PRIM_STRM_LO" low="0" high="31"/>
++		</reg32>
++		<reg32 offset="7" name="7">
++			<bitfield name="BIN_PRIM_STRM_HI" low="0" high="31"/>
++		</reg32>
++		<!--
++			a7xx adds a few more addresses to the end of the pkt
++		 -->
++		<reg64 offset="8" name="8"/>
++		<reg64 offset="10" name="10"/>
++	</stripe>
+ </domain>
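
Since the VSC_MASK semantics documented above take a moment to parse, here is a minimal C sketch of the relative-mask visibility test — a hypothetical helper, not part of the patch; the per-pipe "bin_vis" bitmap layout is an assumption:

    #include <stdbool.h>
    #include <stdint.h>

    /* Visibility of up to 16 bins starting at VSC_N is OR'd together;
     * a VSC_MASK of 0 degrades to testing bin VSC_N alone.
     */
    static bool bins_visible(uint32_t bin_vis, unsigned int vsc_n,
                             uint32_t vsc_mask)
    {
        if (!vsc_mask)
            vsc_mask = 1;   /* 0 is interpreted as "just VSC_N" */

        return ((bin_vis >> vsc_n) & (vsc_mask & 0xffff)) != 0;
    }
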
+ 
+ <domain name="CP_SET_BIN_DATA5_OFFSET" width="32">
+@@ -1164,23 +1231,42 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+                 stream is recorded.
+ 	</doc>
+ 	<reg32 offset="0" name="0">
++		<bitfield name="VSC_MASK" low="0" high="15" type="hex"/>
+ 		<!-- equiv to PC_VSTREAM_CONTROL.SIZE on a3xx/a4xx: -->
+ 		<bitfield name="VSC_SIZE" low="16" high="21" type="uint"/>
+ 		<!-- equiv to PC_VSTREAM_CONTROL.N on a3xx/a4xx: -->
+ 		<bitfield name="VSC_N" low="22" high="26" type="uint"/>
++		<bitfield name="ABS_MASK" pos="28" type="a7xx_abs_mask_mode" addvariant="yes"/>
+ 	</reg32>
+-	<!-- BIN_DATA_ADDR -> VSC_PIPE[p].DATA_ADDRESS -->
+-	<reg32 offset="1" name="1">
+-		<bitfield name="BIN_DATA_OFFSET" low="0" high="31" type="uint"/>
+-	</reg32>
+-	<!-- BIN_SIZE_ADDRESS -> VSC_SIZE_ADDRESS + (p * 4)-->
+-	<reg32 offset="2" name="2">
+-		<bitfield name="BIN_SIZE_OFFSET" low="0" high="31" type="uint"/>
+-	</reg32>
+-	<!-- BIN_DATA2_ADDR -> VSC_PIPE[p].DATA2_ADDRESS -->
+-	<reg32 offset="3" name="3">
+-		<bitfield name="BIN_DATA2_OFFSET" low="0" high="31" type="uint"/>
+-	</reg32>
++	<stripe varset="a7xx_abs_mask_mode" variants="NO_ABS_MASK">
++		<!-- BIN_DATA_ADDR -> VSC_PIPE[p].DATA_ADDRESS -->
++		<reg32 offset="1" name="1">
++			<bitfield name="BIN_DATA_OFFSET" low="0" high="31" type="uint"/>
++		</reg32>
++		<!-- BIN_SIZE_ADDRESS -> VSC_SIZE_ADDRESS + (p * 4)-->
++		<reg32 offset="2" name="2">
++			<bitfield name="BIN_SIZE_OFFSET" low="0" high="31" type="uint"/>
++		</reg32>
++		<!-- BIN_DATA2_ADDR -> VSC_PIPE[p].DATA2_ADDRESS -->
++		<reg32 offset="3" name="3">
++			<bitfield name="BIN_DATA2_OFFSET" low="0" high="31" type="uint"/>
++		</reg32>
++	</stripe>
++	<stripe varset="a7xx_abs_mask_mode" variants="ABS_MASK">
++		<reg32 offset="1" name="ABS_MASK"/>
++		<!-- BIN_DATA_ADDR -> VSC_PIPE[p].DATA_ADDRESS -->
++		<reg32 offset="2" name="2">
++			<bitfield name="BIN_DATA_OFFSET" low="0" high="31" type="uint"/>
++		</reg32>
++		<!-- BIN_SIZE_ADDRESS -> VSC_SIZE_ADDRESS + (p * 4)-->
++		<reg32 offset="3" name="3">
++			<bitfield name="BIN_SIZE_OFFSET" low="0" high="31" type="uint"/>
++		</reg32>
++		<!-- BIN_DATA2_ADDR -> VSC_PIPE[p].DATA2_ADDRESS -->
++		<reg32 offset="4" name="4">
++			<bitfield name="BIN_DATA2_OFFSET" low="0" high="31" type="uint"/>
++		</reg32>
++	</stripe>
+ </domain>
+ 
+ <domain name="CP_REG_RMW" width="32">
+@@ -1198,6 +1284,9 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 	</doc>
+ 	<reg32 offset="0" name="0">
+ 		<bitfield name="DST_REG" low="0" high="17" type="hex"/>
++		<bitfield name="DST_SCRATCH" pos="19" type="boolean" varset="chip" variants="A7XX-"/>
++		<!-- skip implied CP_WAIT_FOR_IDLE + CP_WAIT_FOR_ME -->
++		<bitfield name="SKIP_WAIT_FOR_ME" pos="23" type="boolean" varset="chip" variants="A7XX-"/>
+ 		<bitfield name="ROTATE" low="24" high="28" type="uint"/>
+ 		<bitfield name="SRC1_ADD" pos="29" type="boolean"/>
+ 		<bitfield name="SRC1_IS_REG" pos="30" type="boolean"/>
+@@ -1348,6 +1437,8 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 		<bitfield name="SCRATCH" low="20" high="22" type="uint"/>
+ 		<!-- number of registers/dwords copied is CNT + 1. -->
+ 		<bitfield name="CNT" low="24" high="26" type="uint"/>
++		<!-- skip implied CP_WAIT_FOR_IDLE + CP_WAIT_FOR_ME -->
++		<bitfield name="SKIP_WAIT_FOR_ME" pos="27" type="boolean" varset="chip" variants="A7XX-"/>
+ 	</reg32>
+ </domain>
+ 
+@@ -1655,8 +1746,8 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 		<bitfield name="WRITE_SAMPLE_COUNT" pos="12" type="boolean"/>
+ 		<!-- Write sample count at (iova + 16) -->
+ 		<bitfield name="SAMPLE_COUNT_END_OFFSET" pos="13" type="boolean"/>
+-		<!-- *(iova + 8) = *(iova + 16) - *iova -->
+-		<bitfield name="WRITE_SAMPLE_COUNT_DIFF" pos="14" type="boolean"/>
++		<!-- *(iova + 8) += *(iova + 16) - *iova -->
++		<bitfield name="WRITE_ACCUM_SAMPLE_COUNT_DIFF" pos="14" type="boolean"/>
+ 
+ 		<!-- Next 4 flags are valid to set only when concurrent binning is enabled -->
+ 		<!-- Increment 16b BV counter. Valid only in BV pipe -->
+@@ -1670,15 +1761,11 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 		<bitfield name="WRITE_DST" pos="24" type="event_write_dst" addvariant="yes"/>
+ 		<!-- Writes into WRITE_DST from WRITE_SRC. RB_DONE_TS requires WRITE_ENABLED. -->
+ 		<bitfield name="WRITE_ENABLED" pos="27" type="boolean"/>
++		<bitfield name="IRQ" pos="31" type="boolean"/>
+ 	</reg32>
+ 
+ 	<stripe varset="event_write_dst" variants="EV_DST_RAM">
+-		<reg32 offset="1" name="1">
+-			<bitfield name="ADDR_0_LO" low="0" high="31"/>
+-		</reg32>
+-		<reg32 offset="2" name="2">
+-			<bitfield name="ADDR_0_HI" low="0" high="31"/>
+-		</reg32>
++		<reg64 offset="1" name="1" type="waddress"/>
+ 		<reg32 offset="3" name="3">
+ 			<bitfield name="PAYLOAD_0" low="0" high="31"/>
+ 		</reg32>
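
The renamed WRITE_ACCUM_SAMPLE_COUNT_DIFF comment above is terse; the same arithmetic spelled out as a C sketch, assuming 64-bit sample counters at byte offsets 0, 8 and 16 from iova:

    #include <stdint.h>

    /* The delta between the end (iova + 16) and start (iova + 0) sample
     * counts is added to the value at iova + 8, not stored over it.
     */
    static void accum_sample_count_diff(uint64_t *iova)
    {
        iova[1] += iova[2] - iova[0];   /* byte offsets 8, 16, 0 */
    }
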
+@@ -1773,13 +1860,23 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 
+ <domain name="CP_SET_MARKER" width="32" varset="chip" prefix="chip" variants="A6XX-">
+ 	<doc>Tell CP the current operation mode; this indicates the save and restore procedure</doc>
++	<enum name="set_marker_mode">
++		<value value="0" name="SET_RENDER_MODE"/>
++		<!-- IFPC - inter-frame power collapse -->
++		<value value="1" name="SET_IFPC_MODE"/>
++	</enum>
++	<enum name="a6xx_ifpc_mode">
++		<value value="0" name="IFPC_ENABLE"/>
++		<value value="1" name="IFPC_DISABLE"/>
++	</enum>
+ 	<enum name="a6xx_marker">
+-		<value value="1" name="RM6_BYPASS"/>
+-		<value value="2" name="RM6_BINNING"/>
+-		<value value="4" name="RM6_GMEM"/>
+-		<value value="5" name="RM6_ENDVIS"/>
+-		<value value="6" name="RM6_RESOLVE"/>
+-		<value value="7" name="RM6_YIELD"/>
++		<value value="1" name="RM6_DIRECT_RENDER"/>
++		<value value="2" name="RM6_BIN_VISIBILITY"/>
++		<value value="3" name="RM6_BIN_DIRECT"/>
++		<value value="4" name="RM6_BIN_RENDER_START"/>
++		<value value="5" name="RM6_BIN_END_OF_DRAWS"/>
++		<value value="6" name="RM6_BIN_RESOLVE"/>
++		<value value="7" name="RM6_BIN_RENDER_END"/>
+ 		<value value="8" name="RM6_COMPUTE"/>
+ 		<value value="0xc" name="RM6_BLIT2DSCALE"/>  <!-- no-op (at least on current sqe fw) -->
+ 
+@@ -1789,23 +1886,40 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 		-->
+ 		<value value="0xd" name="RM6_IB1LIST_START"/>
+ 		<value value="0xe" name="RM6_IB1LIST_END"/>
+-		<!-- IFPC - inter-frame power collapse -->
+-		<value value="0x100" name="RM6_IFPC_ENABLE"/>
+-		<value value="0x101" name="RM6_IFPC_DISABLE"/>
+ 	</enum>
+ 	<reg32 offset="0" name="0">
++		<!-- if b8 is set, the low bits are interpreted differently (and b4 ignored) -->
++		<bitfield name="MARKER_MODE" pos="8" type="set_marker_mode" addvariant="yes"/>
++
++		<bitfield name="MODE" low="0" high="3" type="a6xx_marker" varset="set_marker_mode" variants="SET_RENDER_MODE"/>
++		<!-- used by preemption to determine if GMEM needs to be saved or not -->
++		<bitfield name="USES_GMEM" pos="4" type="boolean" varset="set_marker_mode" variants="SET_RENDER_MODE"/>
++
++		<bitfield name="IFPC_MODE" pos="0" type="a6xx_ifpc_mode" varset="set_marker_mode" variants="SET_IFPC_MODE"/>
++
+ 		<!--
+-			NOTE: blob driver and some versions of freedreno/turnip set
+-			b4, which is unused (at least by current sqe fw), but interferes
+-			with parsing if we extend the size of the bitfield to include
+-			b8 (only sent by kernel mode driver).  Really, the way the
+-			parsing works in the firmware, only b0-b3 are considered, but
+-			if b8 is set, the low bits are interpreted differently.  To
+-			model this, without getting confused by spurious b4, this is
+-			described as two overlapping bitfields:
+-		 -->
+-		<bitfield name="MODE" low="0" high="8" type="a6xx_marker"/>
+-		<bitfield name="MARKER" low="0" high="3" type="a6xx_marker"/>
++			CP_SET_MARKER is used with these bits to create a
++			critical section around a workaround for ray tracing.
++			The workaround happens after BVH building, and appears
++			to invalidate the RTU's BVH node cache. It ensures that
++			only one of BR/BV/LPAC executes the workaround at a
++			time, and that no RT draws run on BV/LPAC while the
++			workaround executes on BR (and vice versa: no RT draws
++			run on BV/BR while the workaround executes on LPAC), by
++			hooking subsequent CP_EVENT_WRITE/CP_DRAW_*/CP_EXEC_CS.
++			The blob usage is:
++
++			CP_SET_MARKER(RT_WA_START)
++			... workaround here ...
++			CP_SET_MARKER(RT_WA_END)
++			...
++			CP_SET_MARKER(SHADER_USES_RT)
++			CP_DRAW_INDX(...) or CP_EXEC_CS(...)
++		-->
++		<bitfield name="SHADER_USES_RT" pos="9" type="boolean" variants="A7XX-"/>
++		<bitfield name="RT_WA_START" pos="10" type="boolean" variants="A7XX-"/>
++		<bitfield name="RT_WA_END" pos="11" type="boolean" variants="A7XX-"/>
+ 	</reg32>
+ </domain>
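
A rough sketch of packing the first CP_SET_MARKER dword under the two MARKER_MODE interpretations above; the helpers are hypothetical, with the bit positions taken from the bitfield definitions:

    #include <stdbool.h>
    #include <stdint.h>

    /* SET_RENDER_MODE (b8 clear): b0-b3 carry an a6xx_marker value and
     * b4 flags that the pass uses GMEM (consulted by preemption).
     */
    static uint32_t cp_set_marker_render(uint32_t rm6_marker, bool uses_gmem)
    {
        return (rm6_marker & 0xf) | ((uint32_t)uses_gmem << 4);
    }

    /* SET_IFPC_MODE (b8 set): b0 is IFPC_ENABLE (0) or IFPC_DISABLE (1). */
    static uint32_t cp_set_marker_ifpc(bool disable)
    {
        return (1u << 8) | (uint32_t)disable;
    }
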
+ 
+@@ -1832,9 +1946,9 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 			If concurrent binning is disabled then BR also does binning so it will also
+ 			write the "real" registers in BR.
+ 		-->
+-		<value value="8" name="DRAW_STRM_ADDRESS"/>
+-		<value value="9" name="DRAW_STRM_SIZE_ADDRESS"/>
+-		<value value="10" name="PRIM_STRM_ADDRESS"/>
++		<value value="8" name="VSC_PIPE_DATA_DRAW_BASE"/>
++		<value value="9" name="VSC_SIZE_BASE"/>
++		<value value="10" name="VSC_PIPE_DATA_PRIM_BASE"/>
+ 		<value value="11" name="UNK_STRM_ADDRESS"/>
+ 		<value value="12" name="UNK_STRM_SIZE_ADDRESS"/>
+ 
+@@ -1935,11 +2049,11 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 			a bitmask of which modes pass the test.
+ 		-->
+ 
+-		<!-- RM6_BINNING -->
++		<!-- RM6_BIN_VISIBILITY -->
+ 		<bitfield name="BINNING" pos="25" variants="RENDER_MODE" type="boolean"/>
+ 		<!-- all others -->
+ 		<bitfield name="GMEM" pos="26" variants="RENDER_MODE" type="boolean"/>
+-		<!-- RM6_BYPASS -->
++		<!-- RM6_DIRECT_RENDER -->
+ 		<bitfield name="SYSMEM" pos="27" variants="RENDER_MODE" type="boolean"/>
+ 
+ 		<bitfield name="BV" pos="25" variants="THREAD_MODE" type="boolean"/>
+@@ -2014,10 +2128,10 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 
+ <domain name="CP_SET_AMBLE" width="32">
+ 	<doc>
+-                Used by the userspace and kernel drivers to set various IB's
+-                which are executed during context save/restore for handling
+-                state that isn't restored by the context switch routine itself.
+-  </doc>
++		Used by the userspace and kernel drivers to set various IB's
++		which are executed during context save/restore for handling
++		state that isn't restored by the context switch routine itself.
++	</doc>
+ 	<enum name="amble_type">
+ 		<value name="PREAMBLE_AMBLE_TYPE" value="0">
+ 			<doc>Executed unconditionally when switching back to the context.</doc>
+@@ -2087,12 +2201,12 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 		<value name="UNK_EVENT_WRITE" value="0x4"/>
+ 		<doc>
+ 			Tracks GRAS_LRZ_CNTL::GREATER, GRAS_LRZ_CNTL::DIR, and
+-			GRAS_LRZ_DEPTH_VIEW with previous values, and if one of
++			GRAS_LRZ_VIEW_INFO with previous values, and if one of
+ 			the following is true:
+ 			- GRAS_LRZ_CNTL::GREATER has changed
+ 			- GRAS_LRZ_CNTL::DIR has changed, the old value is not
+ 			  CUR_DIR_GE, and the new value is not CUR_DIR_DISABLED
+-			- GRAS_LRZ_DEPTH_VIEW has changed
++			- GRAS_LRZ_VIEW_INFO has changed
+ 			then it does a LRZ_FLUSH with GRAS_LRZ_CNTL::ENABLE
+ 			forced to 1.
+ 			Only exists in a650_sqe.fw.
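
The three-way condition in that doc reads most naturally as a single boolean expression; a sketch with invented state-tracking types (only CUR_DIR_GE and CUR_DIR_DISABLED are names from the doc):

    #include <stdbool.h>
    #include <stdint.h>

    enum lrz_dir { CUR_DIR_DISABLED, CUR_DIR_LT, CUR_DIR_GE, CUR_DIR_UNSET };

    struct lrz_track {
        bool greater;        /* GRAS_LRZ_CNTL::GREATER */
        enum lrz_dir dir;    /* GRAS_LRZ_CNTL::DIR */
        uint32_t view_info;  /* GRAS_LRZ_VIEW_INFO */
    };

    static bool need_forced_lrz_flush(const struct lrz_track *old,
                                      const struct lrz_track *cur)
    {
        return old->greater != cur->greater ||
               (old->dir != cur->dir &&
                old->dir != CUR_DIR_GE &&
                cur->dir != CUR_DIR_DISABLED) ||
               old->view_info != cur->view_info;
    }
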
+@@ -2207,7 +2321,7 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 
+ <domain name="CP_MEM_TO_SCRATCH_MEM" width="32">
+ 	<doc>
+-		Best guess is that it is a faster way to fetch all the VSC_STATE registers
++		Best guess is that it is a faster way to fetch all the VSC_CHANNEL_VISIBILITY registers
+ 		and keep them in a local scratch memory instead of fetching them
+ 		every time IBs are skipped.
+ 	</doc>
+@@ -2260,6 +2374,16 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 	</reg32>
+ </domain>
+ 
++<domain name="CP_SCOPE_CNTL" width="32">
++	<enum name="cp_scope">
++		<value value="0" name="INTERRUPTS"/>
++	</enum>
++	<reg32 offset="0" name="0">
++		<bitfield name="DISABLE_PREEMPTION" pos="0" type="boolean"/>
++		<bitfield low="28" high="31" name="SCOPE" type="cp_scope"/>
++	</reg32>
++</domain>
++
+ <domain name="CP_INDIRECT_BUFFER" width="32" varset="chip" prefix="chip" variants="A5XX-">
+ 	<reg64 offset="0" name="IB_BASE" type="address"/>
+ 	<reg32 offset="2" name="2">
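
Before moving on: the new CP_SCOPE_CNTL packet above is a single dword, so a one-line packing helper captures it (hypothetical; b0 and b28-b31 per the bitfields, with scope 0 = INTERRUPTS the only value defined so far):

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t cp_scope_cntl(bool disable_preemption, uint32_t scope)
    {
        return (uint32_t)disable_preemption | ((scope & 0xf) << 28);
    }
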
+diff --git a/drivers/gpu/drm/panel/panel-raydium-rm67200.c b/drivers/gpu/drm/panel/panel-raydium-rm67200.c
+index 64b685dc11f65b..6d4d00d4cd7459 100644
+--- a/drivers/gpu/drm/panel/panel-raydium-rm67200.c
++++ b/drivers/gpu/drm/panel/panel-raydium-rm67200.c
+@@ -318,6 +318,7 @@ static void w552793baa_setup(struct mipi_dsi_multi_context *ctx)
+ static int raydium_rm67200_prepare(struct drm_panel *panel)
+ {
+ 	struct raydium_rm67200 *ctx = to_raydium_rm67200(panel);
++	struct mipi_dsi_multi_context mctx = { .dsi = ctx->dsi };
+ 	int ret;
+ 
+ 	ret = regulator_bulk_enable(ctx->num_supplies, ctx->supplies);
+@@ -328,6 +329,12 @@ static int raydium_rm67200_prepare(struct drm_panel *panel)
+ 
+ 	msleep(60);
+ 
++	ctx->panel_info->panel_setup(&mctx);
++	mipi_dsi_dcs_exit_sleep_mode_multi(&mctx);
++	mipi_dsi_msleep(&mctx, 120);
++	mipi_dsi_dcs_set_display_on_multi(&mctx);
++	mipi_dsi_msleep(&mctx, 30);
++
+ 	return 0;
+ }
+ 
+@@ -343,20 +350,6 @@ static int raydium_rm67200_unprepare(struct drm_panel *panel)
+ 	return 0;
+ }
+ 
+-static int raydium_rm67200_enable(struct drm_panel *panel)
+-{
+-	struct raydium_rm67200 *rm67200 = to_raydium_rm67200(panel);
+-	struct mipi_dsi_multi_context ctx = { .dsi = rm67200->dsi };
+-
+-	rm67200->panel_info->panel_setup(&ctx);
+-	mipi_dsi_dcs_exit_sleep_mode_multi(&ctx);
+-	mipi_dsi_msleep(&ctx, 120);
+-	mipi_dsi_dcs_set_display_on_multi(&ctx);
+-	mipi_dsi_msleep(&ctx, 30);
+-
+-	return ctx.accum_err;
+-}
+-
+ static int raydium_rm67200_disable(struct drm_panel *panel)
+ {
+ 	struct raydium_rm67200 *rm67200 = to_raydium_rm67200(panel);
+@@ -381,7 +374,6 @@ static const struct drm_panel_funcs raydium_rm67200_funcs = {
+ 	.prepare = raydium_rm67200_prepare,
+ 	.unprepare = raydium_rm67200_unprepare,
+ 	.get_modes = raydium_rm67200_get_modes,
+-	.enable = raydium_rm67200_enable,
+ 	.disable = raydium_rm67200_disable,
+ };
+ 
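
For readers unfamiliar with the mipi_dsi_multi_context pattern the relocated init code relies on: every *_multi helper becomes a no-op once ctx.accum_err is non-zero, so a long command sequence needs only one error check at the end. A minimal usage sketch (the function name is hypothetical):

    #include <drm/drm_mipi_dsi.h>

    static int panel_init_sketch(struct mipi_dsi_device *dsi)
    {
        struct mipi_dsi_multi_context ctx = { .dsi = dsi };

        mipi_dsi_dcs_exit_sleep_mode_multi(&ctx);  /* skipped if accum_err is set */
        mipi_dsi_msleep(&ctx, 120);
        mipi_dsi_dcs_set_display_on_multi(&ctx);

        return ctx.accum_err;  /* first recorded error, or 0 */
    }
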
+diff --git a/drivers/gpu/drm/renesas/rz-du/rzg2l_mipi_dsi.c b/drivers/gpu/drm/renesas/rz-du/rzg2l_mipi_dsi.c
+index 4550c6d847962f..ec8baecb9ba5e0 100644
+--- a/drivers/gpu/drm/renesas/rz-du/rzg2l_mipi_dsi.c
++++ b/drivers/gpu/drm/renesas/rz-du/rzg2l_mipi_dsi.c
+@@ -584,6 +584,9 @@ rzg2l_mipi_dsi_bridge_mode_valid(struct drm_bridge *bridge,
+ 	if (mode->clock > 148500)
+ 		return MODE_CLOCK_HIGH;
+ 
++	if (mode->clock < 5803)
++		return MODE_CLOCK_LOW;
++
+ 	return MODE_OK;
+ }
+ 
+diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
+index bfea608a7106e2..40eaedd433a71b 100644
+--- a/drivers/gpu/drm/scheduler/sched_main.c
++++ b/drivers/gpu/drm/scheduler/sched_main.c
+@@ -1335,6 +1335,18 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_
+ }
+ EXPORT_SYMBOL(drm_sched_init);
+ 
++static void drm_sched_cancel_remaining_jobs(struct drm_gpu_scheduler *sched)
++{
++	struct drm_sched_job *job, *tmp;
++
++	/* All other accessors are stopped. No locking necessary. */
++	list_for_each_entry_safe_reverse(job, tmp, &sched->pending_list, list) {
++		sched->ops->cancel_job(job);
++		list_del(&job->list);
++		sched->ops->free_job(job);
++	}
++}
++
+ /**
+  * drm_sched_fini - Destroy a gpu scheduler
+  *
+@@ -1342,19 +1354,11 @@ EXPORT_SYMBOL(drm_sched_init);
+  *
+  * Tears down and cleans up the scheduler.
+  *
+- * This stops submission of new jobs to the hardware through
+- * drm_sched_backend_ops.run_job(). Consequently, drm_sched_backend_ops.free_job()
+- * will not be called for all jobs still in drm_gpu_scheduler.pending_list.
+- * There is no solution for this currently. Thus, it is up to the driver to make
+- * sure that:
+- *
+- *  a) drm_sched_fini() is only called after for all submitted jobs
+- *     drm_sched_backend_ops.free_job() has been called or that
+- *  b) the jobs for which drm_sched_backend_ops.free_job() has not been called
+- *     after drm_sched_fini() ran are freed manually.
+- *
+- * FIXME: Take care of the above problem and prevent this function from leaking
+- * the jobs in drm_gpu_scheduler.pending_list under any circumstances.
++ * This stops submission of new jobs to the hardware through &struct
++ * drm_sched_backend_ops.run_job. If &struct drm_sched_backend_ops.cancel_job
++ * is implemented, all jobs will be canceled through it and afterwards cleaned
++ * up through &struct drm_sched_backend_ops.free_job. If cancel_job is not
++ * implemented, memory could leak.
+  */
+ void drm_sched_fini(struct drm_gpu_scheduler *sched)
+ {
+@@ -1384,6 +1388,10 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched)
+ 	/* Confirm no work left behind accessing device structures */
+ 	cancel_delayed_work_sync(&sched->work_tdr);
+ 
++	/* Avoid memory leaks if supported by the driver. */
++	if (sched->ops->cancel_job)
++		drm_sched_cancel_remaining_jobs(sched);
++
+ 	if (sched->own_submit_wq)
+ 		destroy_workqueue(sched->submit_wq);
+ 	sched->ready = false;
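
The new teardown path only avoids the leak when a driver implements the cancel_job callback; a hedged sketch of a minimal implementation — the driver types and the hw_fence field are assumptions, not taken from this patch:

    #include <drm/gpu_scheduler.h>
    #include <linux/dma-fence.h>

    struct my_job {
        struct drm_sched_job base;
        struct dma_fence *hw_fence;
    };

    /* Mark the job's hardware fence as canceled and signal it, so waiters
     * are unblocked before free_job() reclaims the job.
     */
    static void my_sched_cancel_job(struct drm_sched_job *sched_job)
    {
        struct my_job *job = container_of(sched_job, struct my_job, base);

        dma_fence_set_error(job->hw_fence, -ECANCELED);
        dma_fence_signal(job->hw_fence);
    }
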
+diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
+index c2ea865be65720..c060c90b89c021 100644
+--- a/drivers/gpu/drm/ttm/ttm_pool.c
++++ b/drivers/gpu/drm/ttm/ttm_pool.c
+@@ -1132,7 +1132,6 @@ void ttm_pool_fini(struct ttm_pool *pool)
+ }
+ EXPORT_SYMBOL(ttm_pool_fini);
+ 
+-/* As long as pages are available make sure to release at least one */
+ static unsigned long ttm_pool_shrinker_scan(struct shrinker *shrink,
+ 					    struct shrink_control *sc)
+ {
+@@ -1140,9 +1139,12 @@ static unsigned long ttm_pool_shrinker_scan(struct shrinker *shrink,
+ 
+ 	do
+ 		num_freed += ttm_pool_shrink();
+-	while (!num_freed && atomic_long_read(&allocated_pages));
++	while (num_freed < sc->nr_to_scan &&
++	       atomic_long_read(&allocated_pages));
+ 
+-	return num_freed;
++	sc->nr_scanned = num_freed;
++
++	return num_freed ?: SHRINK_STOP;
+ }
+ 
+ /* Return the number of pages available or SHRINK_EMPTY if we have none */
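
For context, the shrinker contract the rewritten scan loop now follows, as a generic sketch with hypothetical pool helpers: scan up to sc->nr_to_scan pages, report progress through sc->nr_scanned, and return SHRINK_STOP once nothing can be reclaimed:

    #include <linux/shrinker.h>

    static unsigned long my_pool_pages(void);
    static unsigned long my_pool_free_one(void);

    static unsigned long my_shrinker_scan(struct shrinker *shrink,
                                          struct shrink_control *sc)
    {
        unsigned long freed = 0;

        /* Free until the requested batch is scanned or the pool is empty. */
        while (freed < sc->nr_to_scan && my_pool_pages())
            freed += my_pool_free_one();

        sc->nr_scanned = freed;              /* progress feedback for the core */
        return freed ? freed : SHRINK_STOP;  /* no progress: stop this pass */
    }
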
+diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
+index 7e5a60c5581396..bb84528276cdab 100644
+--- a/drivers/gpu/drm/ttm/ttm_resource.c
++++ b/drivers/gpu/drm/ttm/ttm_resource.c
+@@ -558,6 +558,9 @@ int ttm_resource_manager_evict_all(struct ttm_device *bdev,
+ 		cond_resched();
+ 	} while (!ret);
+ 
++	if (ret && ret != -ENOENT)
++		return ret;
++
+ 	spin_lock(&man->move_lock);
+ 	fence = dma_fence_get(man->move);
+ 	spin_unlock(&man->move_lock);
+diff --git a/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h b/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
+index 4c39f01e4f5286..a3f421e2adc03b 100644
+--- a/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
++++ b/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
+@@ -20,6 +20,8 @@ struct xe_exec_queue;
+ struct xe_guc_exec_queue {
+ 	/** @q: Backpointer to parent xe_exec_queue */
+ 	struct xe_exec_queue *q;
++	/** @rcu: For safe freeing of exported dma fences */
++	struct rcu_head rcu;
+ 	/** @sched: GPU scheduler for this xe_exec_queue */
+ 	struct xe_gpu_scheduler sched;
+ 	/** @entity: Scheduler entity for this xe_exec_queue */
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index 71ddd26ec30e80..00ff6197e06ff0 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -1282,7 +1282,11 @@ static void __guc_exec_queue_fini_async(struct work_struct *w)
+ 	xe_sched_entity_fini(&ge->entity);
+ 	xe_sched_fini(&ge->sched);
+ 
+-	kfree(ge);
++	/*
++	 * RCU free due to the sched being exported via DRM scheduler fences
++	 * (timeline name).
++	 */
++	kfree_rcu(ge, rcu);
+ 	xe_exec_queue_fini(q);
+ 	xe_pm_runtime_put(guc_to_xe(guc));
+ }
+@@ -1465,6 +1469,7 @@ static int guc_exec_queue_init(struct xe_exec_queue *q)
+ 
+ 	q->guc = ge;
+ 	ge->q = q;
++	init_rcu_head(&ge->rcu);
+ 	init_waitqueue_head(&ge->suspend_wait);
+ 
+ 	for (i = 0; i < MAX_STATIC_MSG_TYPE; ++i)
+diff --git a/drivers/gpu/drm/xe/xe_hw_fence.c b/drivers/gpu/drm/xe/xe_hw_fence.c
+index 0b4f12be3692ab..6e2221b606885f 100644
+--- a/drivers/gpu/drm/xe/xe_hw_fence.c
++++ b/drivers/gpu/drm/xe/xe_hw_fence.c
+@@ -100,6 +100,9 @@ void xe_hw_fence_irq_finish(struct xe_hw_fence_irq *irq)
+ 		spin_unlock_irqrestore(&irq->lock, flags);
+ 		dma_fence_end_signalling(tmp);
+ 	}
++
++	/* Safe release of the irq->lock used in dma_fence_init. */
++	synchronize_rcu();
+ }
+ 
+ void xe_hw_fence_irq_run(struct xe_hw_fence_irq *irq)
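
Both xe changes above enforce the same lifetime rule: a published dma_fence can be inspected after its creator is torn down, so the spinlock handed to dma_fence_init() (and the timeline name) must stay valid for an RCU grace period after the last visible reference. A minimal sketch of the pattern, with hypothetical names:

    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    /* Object whose lock was passed to dma_fence_init(): fence users may
     * still dereference the lock under RCU, so defer the free.
     */
    struct my_fence_ctx {
        spinlock_t lock;
        struct rcu_head rcu;
    };

    static void my_fence_ctx_free(struct my_fence_ctx *ctx)
    {
        kfree_rcu(ctx, rcu);  /* rather than an immediate kfree(ctx) */
    }
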
+diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
+index 5e65830dad2585..4552e9e82b99b7 100644
+--- a/drivers/gpu/drm/xe/xe_query.c
++++ b/drivers/gpu/drm/xe/xe_query.c
+@@ -368,6 +368,7 @@ static int query_gt_list(struct xe_device *xe, struct drm_xe_device_query *query
+ 	struct drm_xe_query_gt_list __user *query_ptr =
+ 		u64_to_user_ptr(query->data);
+ 	struct drm_xe_query_gt_list *gt_list;
++	int iter = 0;
+ 	u8 id;
+ 
+ 	if (query->size == 0) {
+@@ -385,12 +386,12 @@ static int query_gt_list(struct xe_device *xe, struct drm_xe_device_query *query
+ 
+ 	for_each_gt(gt, xe, id) {
+ 		if (xe_gt_is_media_type(gt))
+-			gt_list->gt_list[id].type = DRM_XE_QUERY_GT_TYPE_MEDIA;
++			gt_list->gt_list[iter].type = DRM_XE_QUERY_GT_TYPE_MEDIA;
+ 		else
+-			gt_list->gt_list[id].type = DRM_XE_QUERY_GT_TYPE_MAIN;
+-		gt_list->gt_list[id].tile_id = gt_to_tile(gt)->id;
+-		gt_list->gt_list[id].gt_id = gt->info.id;
+-		gt_list->gt_list[id].reference_clock = gt->info.reference_clock;
++			gt_list->gt_list[iter].type = DRM_XE_QUERY_GT_TYPE_MAIN;
++		gt_list->gt_list[iter].tile_id = gt_to_tile(gt)->id;
++		gt_list->gt_list[iter].gt_id = gt->info.id;
++		gt_list->gt_list[iter].reference_clock = gt->info.reference_clock;
+ 		/*
+ 		 * The mem_regions indexes in the mask below need to
+ 		 * directly identify the struct
+@@ -406,19 +407,21 @@ static int query_gt_list(struct xe_device *xe, struct drm_xe_device_query *query
+ 		 * assumption.
+ 		 */
+ 		if (!IS_DGFX(xe))
+-			gt_list->gt_list[id].near_mem_regions = 0x1;
++			gt_list->gt_list[iter].near_mem_regions = 0x1;
+ 		else
+-			gt_list->gt_list[id].near_mem_regions =
++			gt_list->gt_list[iter].near_mem_regions =
+ 				BIT(gt_to_tile(gt)->id) << 1;
+-		gt_list->gt_list[id].far_mem_regions = xe->info.mem_region_mask ^
+-			gt_list->gt_list[id].near_mem_regions;
++		gt_list->gt_list[iter].far_mem_regions = xe->info.mem_region_mask ^
++			gt_list->gt_list[iter].near_mem_regions;
+ 
+-		gt_list->gt_list[id].ip_ver_major =
++		gt_list->gt_list[iter].ip_ver_major =
+ 			REG_FIELD_GET(GMD_ID_ARCH_MASK, gt->info.gmdid);
+-		gt_list->gt_list[id].ip_ver_minor =
++		gt_list->gt_list[iter].ip_ver_minor =
+ 			REG_FIELD_GET(GMD_ID_RELEASE_MASK, gt->info.gmdid);
+-		gt_list->gt_list[id].ip_ver_rev =
++		gt_list->gt_list[iter].ip_ver_rev =
+ 			REG_FIELD_GET(GMD_ID_REVID, gt->info.gmdid);
++
++		iter++;
+ 	}
+ 
+ 	if (copy_to_user(query_ptr, gt_list, size)) {
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 9ebbaf8cc1310a..bd4b4cc785f75b 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -663,9 +663,9 @@ static int hid_parser_main(struct hid_parser *parser, struct hid_item *item)
+ 	default:
+ 		if (item->tag >= HID_MAIN_ITEM_TAG_RESERVED_MIN &&
+ 			item->tag <= HID_MAIN_ITEM_TAG_RESERVED_MAX)
+-			hid_warn(parser->device, "reserved main item tag 0x%x\n", item->tag);
++			hid_warn_ratelimited(parser->device, "reserved main item tag 0x%x\n", item->tag);
+ 		else
+-			hid_warn(parser->device, "unknown main item tag 0x%x\n", item->tag);
++			hid_warn_ratelimited(parser->device, "unknown main item tag 0x%x\n", item->tag);
+ 		ret = 0;
+ 	}
+ 
+diff --git a/drivers/hid/hid-magicmouse.c b/drivers/hid/hid-magicmouse.c
+index adfa329e917b43..6d916c6c7d4ae1 100644
+--- a/drivers/hid/hid-magicmouse.c
++++ b/drivers/hid/hid-magicmouse.c
+@@ -779,16 +779,30 @@ static void magicmouse_enable_mt_work(struct work_struct *work)
+ 		hid_err(msc->hdev, "unable to request touch data (%d)\n", ret);
+ }
+ 
++static bool is_usb_magicmouse2(__u32 vendor, __u32 product)
++{
++	if (vendor != USB_VENDOR_ID_APPLE)
++		return false;
++	return product == USB_DEVICE_ID_APPLE_MAGICMOUSE2;
++}
++
++static bool is_usb_magictrackpad2(__u32 vendor, __u32 product)
++{
++	if (vendor != USB_VENDOR_ID_APPLE)
++		return false;
++	return product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++	       product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC;
++}
++
+ static int magicmouse_fetch_battery(struct hid_device *hdev)
+ {
+ #ifdef CONFIG_HID_BATTERY_STRENGTH
+ 	struct hid_report_enum *report_enum;
+ 	struct hid_report *report;
+ 
+-	if (!hdev->battery || hdev->vendor != USB_VENDOR_ID_APPLE ||
+-	    (hdev->product != USB_DEVICE_ID_APPLE_MAGICMOUSE2 &&
+-	     hdev->product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 &&
+-	     hdev->product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC))
++	if (!hdev->battery ||
++	    (!is_usb_magicmouse2(hdev->vendor, hdev->product) &&
++	     !is_usb_magictrackpad2(hdev->vendor, hdev->product)))
+ 		return -1;
+ 
+ 	report_enum = &hdev->report_enum[hdev->battery_report_type];
+@@ -850,16 +864,17 @@ static int magicmouse_probe(struct hid_device *hdev,
+ 		return ret;
+ 	}
+ 
+-	timer_setup(&msc->battery_timer, magicmouse_battery_timer_tick, 0);
+-	mod_timer(&msc->battery_timer,
+-		  jiffies + msecs_to_jiffies(USB_BATTERY_TIMEOUT_MS));
+-	magicmouse_fetch_battery(hdev);
++	if (is_usb_magicmouse2(id->vendor, id->product) ||
++	    is_usb_magictrackpad2(id->vendor, id->product)) {
++		timer_setup(&msc->battery_timer, magicmouse_battery_timer_tick, 0);
++		mod_timer(&msc->battery_timer,
++			  jiffies + msecs_to_jiffies(USB_BATTERY_TIMEOUT_MS));
++		magicmouse_fetch_battery(hdev);
++	}
+ 
+-	if (id->vendor == USB_VENDOR_ID_APPLE &&
+-	    (id->product == USB_DEVICE_ID_APPLE_MAGICMOUSE2 ||
+-	     ((id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
+-	       id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) &&
+-	      hdev->type != HID_TYPE_USBMOUSE)))
++	if (is_usb_magicmouse2(id->vendor, id->product) ||
++	    (is_usb_magictrackpad2(id->vendor, id->product) &&
++	     hdev->type != HID_TYPE_USBMOUSE))
+ 		return 0;
+ 
+ 	if (!msc->input) {
+@@ -915,7 +930,10 @@ static int magicmouse_probe(struct hid_device *hdev,
+ 
+ 	return 0;
+ err_stop_hw:
+-	timer_delete_sync(&msc->battery_timer);
++	if (is_usb_magicmouse2(id->vendor, id->product) ||
++	    is_usb_magictrackpad2(id->vendor, id->product))
++		timer_delete_sync(&msc->battery_timer);
++
+ 	hid_hw_stop(hdev);
+ 	return ret;
+ }
+@@ -926,7 +944,9 @@ static void magicmouse_remove(struct hid_device *hdev)
+ 
+ 	if (msc) {
+ 		cancel_delayed_work_sync(&msc->work);
+-		timer_delete_sync(&msc->battery_timer);
++		if (is_usb_magicmouse2(hdev->vendor, hdev->product) ||
++		    is_usb_magictrackpad2(hdev->vendor, hdev->product))
++			timer_delete_sync(&msc->battery_timer);
+ 	}
+ 
+ 	hid_hw_stop(hdev);
+@@ -943,10 +963,8 @@ static const __u8 *magicmouse_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ 	 *   0x05, 0x01,       // Usage Page (Generic Desktop)        0
+ 	 *   0x09, 0x02,       // Usage (Mouse)                       2
+ 	 */
+-	if (hdev->vendor == USB_VENDOR_ID_APPLE &&
+-	    (hdev->product == USB_DEVICE_ID_APPLE_MAGICMOUSE2 ||
+-	     hdev->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
+-	     hdev->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) &&
++	if ((is_usb_magicmouse2(hdev->vendor, hdev->product) ||
++	     is_usb_magictrackpad2(hdev->vendor, hdev->product)) &&
+ 	    *rsize == 83 && rdesc[46] == 0x84 && rdesc[58] == 0x85) {
+ 		hid_info(hdev,
+ 			 "fixing up magicmouse battery report descriptor\n");
+diff --git a/drivers/hwmon/emc2305.c b/drivers/hwmon/emc2305.c
+index 234c54956a4bdf..1dbe3f26467d38 100644
+--- a/drivers/hwmon/emc2305.c
++++ b/drivers/hwmon/emc2305.c
+@@ -299,6 +299,12 @@ static int emc2305_set_single_tz(struct device *dev, int idx)
+ 		dev_err(dev, "Failed to register cooling device %s\n", emc2305_fan_name[idx]);
+ 		return PTR_ERR(data->cdev_data[cdev_idx].cdev);
+ 	}
++
++	if (data->cdev_data[cdev_idx].cur_state > 0)
++		/* Update pwm when temperature is above trips */
++		pwm = EMC2305_PWM_STATE2DUTY(data->cdev_data[cdev_idx].cur_state,
++					     data->max_state, EMC2305_FAN_MAX);
++
+ 	/* Set minimal PWM speed. */
+ 	if (data->pwm_separate) {
+ 		ret = emc2305_set_pwm(dev, pwm, cdev_idx);
+@@ -312,10 +318,10 @@ static int emc2305_set_single_tz(struct device *dev, int idx)
+ 		}
+ 	}
+ 	data->cdev_data[cdev_idx].cur_state =
+-		EMC2305_PWM_DUTY2STATE(data->pwm_min[cdev_idx], data->max_state,
++		EMC2305_PWM_DUTY2STATE(pwm, data->max_state,
+ 				       EMC2305_FAN_MAX);
+ 	data->cdev_data[cdev_idx].last_hwmon_state =
+-		EMC2305_PWM_DUTY2STATE(data->pwm_min[cdev_idx], data->max_state,
++		EMC2305_PWM_DUTY2STATE(pwm, data->max_state,
+ 				       EMC2305_FAN_MAX);
+ 	return 0;
+ }
+diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c
+index d2499f302b5083..f43067f6797e94 100644
+--- a/drivers/i2c/i2c-core-acpi.c
++++ b/drivers/i2c/i2c-core-acpi.c
+@@ -370,6 +370,7 @@ static const struct acpi_device_id i2c_acpi_force_100khz_device_ids[] = {
+ 	 * the device works without issues on Windows at what is expected to be
+ 	 * a 400KHz frequency. The root cause of the issue is not known.
+ 	 */
++	{ "DLL0945", 0 },
+ 	{ "ELAN06FA", 0 },
+ 	{}
+ };
+diff --git a/drivers/i3c/internals.h b/drivers/i3c/internals.h
+index 433f6088b7cec8..ce04aa4f269e09 100644
+--- a/drivers/i3c/internals.h
++++ b/drivers/i3c/internals.h
+@@ -9,6 +9,7 @@
+ #define I3C_INTERNALS_H
+ 
+ #include <linux/i3c/master.h>
++#include <linux/io.h>
+ 
+ void i3c_bus_normaluse_lock(struct i3c_bus *bus);
+ void i3c_bus_normaluse_unlock(struct i3c_bus *bus);
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index fd81871609d95b..dfa0bad991cf72 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -1439,7 +1439,7 @@ static int i3c_master_retrieve_dev_info(struct i3c_dev_desc *dev)
+ 
+ 	if (dev->info.bcr & I3C_BCR_HDR_CAP) {
+ 		ret = i3c_master_gethdrcap_locked(master, &dev->info);
+-		if (ret)
++		if (ret && ret != -ENOTSUPP)
+ 			return ret;
+ 	}
+ 
+@@ -2467,6 +2467,8 @@ static int i3c_i2c_notifier_call(struct notifier_block *nb, unsigned long action
+ 	case BUS_NOTIFY_DEL_DEVICE:
+ 		ret = i3c_master_i2c_detach(adap, client);
+ 		break;
++	default:
++		ret = -EINVAL;
+ 	}
+ 	i3c_bus_maintenance_unlock(&master->bus);
+ 
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
+index 976f5be54e36d0..039dc42dd50989 100644
+--- a/drivers/idle/intel_idle.c
++++ b/drivers/idle/intel_idle.c
+@@ -1665,7 +1665,7 @@ static const struct x86_cpu_id intel_idle_ids[] __initconst = {
+ };
+ 
+ static const struct x86_cpu_id intel_mwait_ids[] __initconst = {
+-	X86_MATCH_VENDOR_FAM_FEATURE(INTEL, 6, X86_FEATURE_MWAIT, NULL),
++	X86_MATCH_VENDOR_FAM_FEATURE(INTEL, X86_FAMILY_ANY, X86_FEATURE_MWAIT, NULL),
+ 	{}
+ };
+ 
+diff --git a/drivers/iio/adc/ad7768-1.c b/drivers/iio/adc/ad7768-1.c
+index 5e0be36af0c5c2..32063f54f36460 100644
+--- a/drivers/iio/adc/ad7768-1.c
++++ b/drivers/iio/adc/ad7768-1.c
+@@ -202,6 +202,24 @@ static int ad7768_spi_reg_write(struct ad7768_state *st,
+ 	return spi_write(st->spi, st->data.d8, 2);
+ }
+ 
++static int ad7768_send_sync_pulse(struct ad7768_state *st)
++{
++	/*
++	 * The datasheet specifies a minimum SYNC_IN pulse width of 1.5 × Tmclk,
++	 * where Tmclk is the MCLK period. The supported MCLK frequencies range
++	 * from 0.6 MHz to 17 MHz, which corresponds to a minimum SYNC_IN pulse
++	 * width of approximately 2.5 µs in the worst-case scenario (0.6 MHz).
++	 *
++	 * Add a delay to ensure the pulse width is always sufficient to
++	 * trigger synchronization.
++	 */
++	gpiod_set_value_cansleep(st->gpio_sync_in, 1);
++	fsleep(3);
++	gpiod_set_value_cansleep(st->gpio_sync_in, 0);
++
++	return 0;
++}
++
+ static int ad7768_set_mode(struct ad7768_state *st,
+ 			   enum ad7768_conv_mode mode)
+ {
+@@ -289,10 +307,7 @@ static int ad7768_set_dig_fil(struct ad7768_state *st,
+ 		return ret;
+ 
+ 	/* A sync-in pulse is required every time the filter dec rate changes */
+-	gpiod_set_value(st->gpio_sync_in, 1);
+-	gpiod_set_value(st->gpio_sync_in, 0);
+-
+-	return 0;
++	return ad7768_send_sync_pulse(st);
+ }
+ 
+ static int ad7768_set_freq(struct ad7768_state *st,
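
The worst-case pulse width quoted in the new ad7768_send_sync_pulse() comment
follows directly from the datasheet figure; a standalone check of the
arithmetic, using only the values stated in that comment:

#include <stdio.h>

int main(void)
{
	double mclk_min_hz = 0.6e6;		/* slowest supported MCLK */
	double t_mclk_us = 1e6 / mclk_min_hz;	/* MCLK period in microseconds */
	double min_pulse_us = 1.5 * t_mclk_us;	/* datasheet: 1.5 x Tmclk */

	/* Prints 2.50 us; fsleep(3) sleeps for at least 3 us, so the pulse
	 * is wide enough even at the slowest supported MCLK. */
	printf("worst-case minimum SYNC_IN pulse: %.2f us\n", min_pulse_us);
	return 0;
}
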
+diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c
+index 4c5f8d29a559fe..6b3ef7ef403e00 100644
+--- a/drivers/iio/adc/ad_sigma_delta.c
++++ b/drivers/iio/adc/ad_sigma_delta.c
+@@ -489,7 +489,7 @@ static int ad_sd_buffer_postenable(struct iio_dev *indio_dev)
+ 			return ret;
+ 	}
+ 
+-	samples_buf_size = ALIGN(slot * indio_dev->channels[0].scan_type.storagebits, 8);
++	samples_buf_size = ALIGN(slot * indio_dev->channels[0].scan_type.storagebits / 8, 8);
+ 	samples_buf_size += sizeof(int64_t);
+ 	samples_buf = devm_krealloc(&sigma_delta->spi->dev, sigma_delta->samples_buf,
+ 				    samples_buf_size, GFP_KERNEL);
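
The ad_sigma_delta one-liner is a bits-versus-bytes fix: scan_type.storagebits
counts bits, so the old expression over-allocated the samples buffer roughly
eightfold. A quick check with an illustrative four-slot, 32-bit layout:

#include <stdio.h>

#define ALIGN(x, a)  (((x) + (a) - 1) & ~((a) - 1))	/* a: power of 2 */

int main(void)
{
	unsigned int slot = 4, storagebits = 32;	/* illustrative layout */

	/* storagebits counts bits, so the byte-sized buffer math needs /8 */
	printf("old: %u bytes\n", ALIGN(slot * storagebits, 8));	/* 128 */
	printf("new: %u bytes\n", ALIGN(slot * storagebits / 8, 8));	/*  16 */
	return 0;
}
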
+diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
+index a872643e8039fc..e9b7a64192913a 100644
+--- a/drivers/infiniband/core/nldev.c
++++ b/drivers/infiniband/core/nldev.c
+@@ -1469,10 +1469,11 @@ static const struct nldev_fill_res_entry fill_entries[RDMA_RESTRACK_MAX] = {
+ 
+ };
+ 
+-static int res_get_common_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
+-			       struct netlink_ext_ack *extack,
+-			       enum rdma_restrack_type res_type,
+-			       res_fill_func_t fill_func)
++static noinline_for_stack int
++res_get_common_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
++		    struct netlink_ext_ack *extack,
++		    enum rdma_restrack_type res_type,
++		    res_fill_func_t fill_func)
+ {
+ 	const struct nldev_fill_res_entry *fe = &fill_entries[res_type];
+ 	struct nlattr *tb[RDMA_NLDEV_ATTR_MAX];
+@@ -2263,10 +2264,10 @@ static int nldev_stat_del_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	return ret;
+ }
+ 
+-static int stat_get_doit_default_counter(struct sk_buff *skb,
+-					 struct nlmsghdr *nlh,
+-					 struct netlink_ext_ack *extack,
+-					 struct nlattr *tb[])
++static noinline_for_stack int
++stat_get_doit_default_counter(struct sk_buff *skb, struct nlmsghdr *nlh,
++			      struct netlink_ext_ack *extack,
++			      struct nlattr *tb[])
+ {
+ 	struct rdma_hw_stats *stats;
+ 	struct nlattr *table_attr;
+@@ -2356,8 +2357,9 @@ static int stat_get_doit_default_counter(struct sk_buff *skb,
+ 	return ret;
+ }
+ 
+-static int stat_get_doit_qp(struct sk_buff *skb, struct nlmsghdr *nlh,
+-			    struct netlink_ext_ack *extack, struct nlattr *tb[])
++static noinline_for_stack int
++stat_get_doit_qp(struct sk_buff *skb, struct nlmsghdr *nlh,
++		 struct netlink_ext_ack *extack, struct nlattr *tb[])
+ 
+ {
+ 	static enum rdma_nl_counter_mode mode;
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 063801384b2b04..3a627acb82ce13 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -4738,7 +4738,7 @@ static int UVERBS_HANDLER(BNXT_RE_METHOD_GET_TOGGLE_MEM)(struct uverbs_attr_bund
+ 		return err;
+ 
+ 	err = uverbs_copy_to(attrs, BNXT_RE_TOGGLE_MEM_MMAP_OFFSET,
+-			     &offset, sizeof(length));
++			     &offset, sizeof(offset));
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
+index 7ead8746b79b38..f2c530ab85a563 100644
+--- a/drivers/infiniband/hw/hfi1/affinity.c
++++ b/drivers/infiniband/hw/hfi1/affinity.c
+@@ -964,31 +964,35 @@ static void find_hw_thread_mask(uint hw_thread_no, cpumask_var_t hw_thread_mask,
+ 				struct hfi1_affinity_node_list *affinity)
+ {
+ 	int possible, curr_cpu, i;
+-	uint num_cores_per_socket = node_affinity.num_online_cpus /
++	uint num_cores_per_socket;
++
++	cpumask_copy(hw_thread_mask, &affinity->proc.mask);
++
++	if (affinity->num_core_siblings == 0)
++		return;
++
++	num_cores_per_socket = node_affinity.num_online_cpus /
+ 					affinity->num_core_siblings /
+ 						node_affinity.num_online_nodes;
+ 
+-	cpumask_copy(hw_thread_mask, &affinity->proc.mask);
+-	if (affinity->num_core_siblings > 0) {
+-		/* Removing other siblings not needed for now */
+-		possible = cpumask_weight(hw_thread_mask);
+-		curr_cpu = cpumask_first(hw_thread_mask);
+-		for (i = 0;
+-		     i < num_cores_per_socket * node_affinity.num_online_nodes;
+-		     i++)
+-			curr_cpu = cpumask_next(curr_cpu, hw_thread_mask);
+-
+-		for (; i < possible; i++) {
+-			cpumask_clear_cpu(curr_cpu, hw_thread_mask);
+-			curr_cpu = cpumask_next(curr_cpu, hw_thread_mask);
+-		}
++	/* Removing other siblings not needed for now */
++	possible = cpumask_weight(hw_thread_mask);
++	curr_cpu = cpumask_first(hw_thread_mask);
++	for (i = 0;
++	     i < num_cores_per_socket * node_affinity.num_online_nodes;
++	     i++)
++		curr_cpu = cpumask_next(curr_cpu, hw_thread_mask);
+ 
+-		/* Identifying correct HW threads within physical cores */
+-		cpumask_shift_left(hw_thread_mask, hw_thread_mask,
+-				   num_cores_per_socket *
+-				   node_affinity.num_online_nodes *
+-				   hw_thread_no);
++	for (; i < possible; i++) {
++		cpumask_clear_cpu(curr_cpu, hw_thread_mask);
++		curr_cpu = cpumask_next(curr_cpu, hw_thread_mask);
+ 	}
++
++	/* Identifying correct HW threads within physical cores */
++	cpumask_shift_left(hw_thread_mask, hw_thread_mask,
++			   num_cores_per_socket *
++			   node_affinity.num_online_nodes *
++			   hw_thread_no);
+ }
+ 
+ int hfi1_get_proc_affinity(int node)
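
The find_hw_thread_mask() rework is a guard-clause refactor: the old code
divided by affinity->num_core_siblings in the initializer, before anything had
confirmed it was non-zero. A reduced sketch of the pattern, with illustrative
names rather than the driver's:

#include <stdio.h>

/* The divisor is only used after the zero check, so the division can no
 * longer fault; the old code performed it before the guard. */
static unsigned int cores_per_socket(unsigned int online_cpus,
				     unsigned int core_siblings,
				     unsigned int online_nodes)
{
	if (core_siblings == 0)
		return 0;
	return online_cpus / core_siblings / online_nodes;
}

int main(void)
{
	printf("%u\n", cores_per_socket(64, 2, 2));	/* 16 */
	printf("%u\n", cores_per_socket(64, 0, 2));	/* 0, no divide-by-zero */
	return 0;
}
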
+diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
+index 6432bce7d08317..4ed2b1ea296e75 100644
+--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
++++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
+@@ -332,18 +332,17 @@ static int siw_tcp_sendpages(struct socket *s, struct page **page, int offset,
+ 		if (!sendpage_ok(page[i]))
+ 			msg.msg_flags &= ~MSG_SPLICE_PAGES;
+ 		bvec_set_page(&bvec, page[i], bytes, offset);
+-		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
++		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, bytes);
+ 
+ try_page_again:
+ 		lock_sock(sk);
+-		rv = tcp_sendmsg_locked(sk, &msg, size);
++		rv = tcp_sendmsg_locked(sk, &msg, bytes);
+ 		release_sock(sk);
+ 
+ 		if (rv > 0) {
+ 			size -= rv;
+ 			sent += rv;
+ 			if (rv != bytes) {
+-				offset += rv;
+ 				bytes -= rv;
+ 				goto try_page_again;
+ 			}
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index 4f4c9e376fc4fe..15ffa8b99476ee 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -366,6 +366,7 @@ static const struct of_device_id qcom_smmu_client_of_match[] __maybe_unused = {
+ 	{ .compatible = "qcom,sdm670-mdss" },
+ 	{ .compatible = "qcom,sdm845-mdss" },
+ 	{ .compatible = "qcom,sdm845-mss-pil" },
++	{ .compatible = "qcom,sm6115-mdss" },
+ 	{ .compatible = "qcom,sm6350-mdss" },
+ 	{ .compatible = "qcom,sm6375-mdss" },
+ 	{ .compatible = "qcom,sm8150-mdss" },
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 024fb7c36d884b..91fe53d671e6af 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -1845,6 +1845,18 @@ static int domain_setup_first_level(struct intel_iommu *iommu,
+ 					  (pgd_t *)pgd, flags, old);
+ }
+ 
++static bool domain_need_iotlb_sync_map(struct dmar_domain *domain,
++				       struct intel_iommu *iommu)
++{
++	if (cap_caching_mode(iommu->cap) && !domain->use_first_level)
++		return true;
++
++	if (rwbf_quirk || cap_rwbf(iommu->cap))
++		return true;
++
++	return false;
++}
++
+ static int dmar_domain_attach_device(struct dmar_domain *domain,
+ 				     struct device *dev)
+ {
+@@ -1882,6 +1894,8 @@ static int dmar_domain_attach_device(struct dmar_domain *domain,
+ 	if (ret)
+ 		goto out_block_translation;
+ 
++	domain->iotlb_sync_map |= domain_need_iotlb_sync_map(domain, iommu);
++
+ 	return 0;
+ 
+ out_block_translation:
+@@ -4020,7 +4034,10 @@ static bool risky_device(struct pci_dev *pdev)
+ static int intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
+ 				      unsigned long iova, size_t size)
+ {
+-	cache_tag_flush_range_np(to_dmar_domain(domain), iova, iova + size - 1);
++	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
++
++	if (dmar_domain->iotlb_sync_map)
++		cache_tag_flush_range_np(dmar_domain, iova, iova + size - 1);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
+index 814f5ed2001898..0f7cd96f1ed0fa 100644
+--- a/drivers/iommu/intel/iommu.h
++++ b/drivers/iommu/intel/iommu.h
+@@ -615,6 +615,9 @@ struct dmar_domain {
+ 	u8 has_mappings:1;		/* Has mappings configured through
+ 					 * iommu_map() interface.
+ 					 */
++	u8 iotlb_sync_map:1;		/* Need to flush IOTLB cache or write
++					 * buffer when creating mappings.
++					 */
+ 
+ 	spinlock_t lock;		/* Protect device tracking lists */
+ 	struct list_head devices;	/* all devices' list */
+diff --git a/drivers/iommu/iommufd/io_pagetable.c b/drivers/iommu/iommufd/io_pagetable.c
+index 8a790e597e1253..c3a316fd1ef972 100644
+--- a/drivers/iommu/iommufd/io_pagetable.c
++++ b/drivers/iommu/iommufd/io_pagetable.c
+@@ -70,36 +70,45 @@ struct iopt_area *iopt_area_contig_next(struct iopt_area_contig_iter *iter)
+ 	return iter->area;
+ }
+ 
+-static bool __alloc_iova_check_hole(struct interval_tree_double_span_iter *span,
+-				    unsigned long length,
+-				    unsigned long iova_alignment,
+-				    unsigned long page_offset)
++static bool __alloc_iova_check_range(unsigned long *start, unsigned long last,
++				     unsigned long length,
++				     unsigned long iova_alignment,
++				     unsigned long page_offset)
+ {
+-	if (span->is_used || span->last_hole - span->start_hole < length - 1)
++	unsigned long aligned_start;
++
++	/* ALIGN_UP() */
++	if (check_add_overflow(*start, iova_alignment - 1, &aligned_start))
+ 		return false;
++	aligned_start &= ~(iova_alignment - 1);
++	aligned_start |= page_offset;
+ 
+-	span->start_hole = ALIGN(span->start_hole, iova_alignment) |
+-			   page_offset;
+-	if (span->start_hole > span->last_hole ||
+-	    span->last_hole - span->start_hole < length - 1)
++	if (aligned_start >= last || last - aligned_start < length - 1)
+ 		return false;
++	*start = aligned_start;
+ 	return true;
+ }
+ 
+-static bool __alloc_iova_check_used(struct interval_tree_span_iter *span,
++static bool __alloc_iova_check_hole(struct interval_tree_double_span_iter *span,
+ 				    unsigned long length,
+ 				    unsigned long iova_alignment,
+ 				    unsigned long page_offset)
+ {
+-	if (span->is_hole || span->last_used - span->start_used < length - 1)
++	if (span->is_used)
+ 		return false;
++	return __alloc_iova_check_range(&span->start_hole, span->last_hole,
++					length, iova_alignment, page_offset);
++}
+ 
+-	span->start_used = ALIGN(span->start_used, iova_alignment) |
+-			   page_offset;
+-	if (span->start_used > span->last_used ||
+-	    span->last_used - span->start_used < length - 1)
++static bool __alloc_iova_check_used(struct interval_tree_span_iter *span,
++				    unsigned long length,
++				    unsigned long iova_alignment,
++				    unsigned long page_offset)
++{
++	if (span->is_hole)
+ 		return false;
+-	return true;
++	return __alloc_iova_check_range(&span->start_used, span->last_used,
++					length, iova_alignment, page_offset);
+ }
+ 
+ /*
+@@ -743,8 +752,10 @@ static int iopt_unmap_iova_range(struct io_pagetable *iopt, unsigned long start,
+ 			iommufd_access_notify_unmap(iopt, area_first, length);
+ 			/* Something is not responding to unmap requests. */
+ 			tries++;
+-			if (WARN_ON(tries > 100))
+-				return -EDEADLOCK;
++			if (WARN_ON(tries > 100)) {
++				rc = -EDEADLOCK;
++				goto out_unmapped;
++			}
+ 			goto again;
+ 		}
+ 
+@@ -766,6 +777,7 @@ static int iopt_unmap_iova_range(struct io_pagetable *iopt, unsigned long start,
+ out_unlock_iova:
+ 	up_write(&iopt->iova_rwsem);
+ 	up_read(&iopt->domains_rwsem);
++out_unmapped:
+ 	if (unmapped)
+ 		*unmapped = unmapped_bytes;
+ 	return rc;
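
Both check helpers now funnel through __alloc_iova_check_range(), where the
open-coded ALIGN is preceded by check_add_overflow() so that
start + (alignment - 1) cannot silently wrap. The same pattern in plain C,
with __builtin_add_overflow() standing in for the kernel helper:

#include <stdbool.h>
#include <stdio.h>

static bool align_up_checked(unsigned long start, unsigned long align,
			     unsigned long *out)
{
	unsigned long tmp;

	if (__builtin_add_overflow(start, align - 1, &tmp))
		return false;		/* start + align - 1 wrapped around */
	*out = tmp & ~(align - 1);
	return true;
}

int main(void)
{
	unsigned long aligned;

	if (align_up_checked(0xfffffffffffff000UL, 0x1000UL, &aligned))
		printf("aligned: %#lx\n", aligned);	/* already aligned */
	if (!align_up_checked(0xfffffffffffff001UL, 0x1000UL, &aligned))
		printf("overflow detected, hole rejected\n");
	return 0;
}
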
+diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
+index bca8053864b2ce..1c2284297354ac 100644
+--- a/drivers/irqchip/irq-mips-gic.c
++++ b/drivers/irqchip/irq-mips-gic.c
+@@ -375,9 +375,13 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
+ 	/*
+ 	 * The GIC specifies that we can only route an interrupt to one VP(E),
+ 	 * ie. CPU in Linux parlance, at a time. Therefore we always route to
+-	 * the first online CPU in the mask.
++	 * the first forced or online CPU in the mask.
+ 	 */
+-	cpu = cpumask_first_and(cpumask, cpu_online_mask);
++	if (force)
++		cpu = cpumask_first(cpumask);
++	else
++		cpu = cpumask_first_and(cpumask, cpu_online_mask);
++
+ 	if (cpu >= NR_CPUS)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/irqchip/irq-renesas-rzv2h.c b/drivers/irqchip/irq-renesas-rzv2h.c
+index 0f0fd7d4dfdf2c..f1f7869b49cb91 100644
+--- a/drivers/irqchip/irq-renesas-rzv2h.c
++++ b/drivers/irqchip/irq-renesas-rzv2h.c
+@@ -394,7 +394,9 @@ static const struct irq_chip rzv2h_icu_chip = {
+ 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
+ 	.irq_set_type		= rzv2h_icu_set_type,
+ 	.irq_set_affinity	= irq_chip_set_affinity_parent,
+-	.flags			= IRQCHIP_SET_TYPE_MASKED,
++	.flags			= IRQCHIP_MASK_ON_SUSPEND |
++				  IRQCHIP_SET_TYPE_MASKED |
++				  IRQCHIP_SKIP_SET_WAKE,
+ };
+ 
+ static int rzv2h_icu_alloc(struct irq_domain *domain, unsigned int virq, unsigned int nr_irqs,
+diff --git a/drivers/leds/flash/leds-qcom-flash.c b/drivers/leds/flash/leds-qcom-flash.c
+index b4c19be51c4da7..89cf5120f5d55b 100644
+--- a/drivers/leds/flash/leds-qcom-flash.c
++++ b/drivers/leds/flash/leds-qcom-flash.c
+@@ -117,7 +117,7 @@ enum {
+ 	REG_MAX_COUNT,
+ };
+ 
+-static struct reg_field mvflash_3ch_regs[REG_MAX_COUNT] = {
++static const struct reg_field mvflash_3ch_regs[REG_MAX_COUNT] = {
+ 	REG_FIELD(0x08, 0, 7),			/* status1	*/
+ 	REG_FIELD(0x09, 0, 7),                  /* status2	*/
+ 	REG_FIELD(0x0a, 0, 7),                  /* status3	*/
+@@ -132,7 +132,7 @@ static struct reg_field mvflash_3ch_regs[REG_MAX_COUNT] = {
+ 	REG_FIELD(0x58, 0, 2),			/* therm_thrsh3 */
+ };
+ 
+-static struct reg_field mvflash_4ch_regs[REG_MAX_COUNT] = {
++static const struct reg_field mvflash_4ch_regs[REG_MAX_COUNT] = {
+ 	REG_FIELD(0x06, 0, 7),			/* status1	*/
+ 	REG_FIELD(0x07, 0, 6),			/* status2	*/
+ 	REG_FIELD(0x09, 0, 7),			/* status3	*/
+@@ -854,11 +854,17 @@ static int qcom_flash_led_probe(struct platform_device *pdev)
+ 	if (val == FLASH_SUBTYPE_3CH_PM8150_VAL || val == FLASH_SUBTYPE_3CH_PMI8998_VAL) {
+ 		flash_data->hw_type = QCOM_MVFLASH_3CH;
+ 		flash_data->max_channels = 3;
+-		regs = mvflash_3ch_regs;
++		regs = devm_kmemdup(dev, mvflash_3ch_regs, sizeof(mvflash_3ch_regs),
++				    GFP_KERNEL);
++		if (!regs)
++			return -ENOMEM;
+ 	} else if (val == FLASH_SUBTYPE_4CH_VAL) {
+ 		flash_data->hw_type = QCOM_MVFLASH_4CH;
+ 		flash_data->max_channels = 4;
+-		regs = mvflash_4ch_regs;
++		regs = devm_kmemdup(dev, mvflash_4ch_regs, sizeof(mvflash_4ch_regs),
++				    GFP_KERNEL);
++		if (!regs)
++			return -ENOMEM;
+ 
+ 		rc = regmap_read(regmap, reg_base + FLASH_REVISION_REG, &val);
+ 		if (rc < 0) {
+@@ -880,6 +886,7 @@ static int qcom_flash_led_probe(struct platform_device *pdev)
+ 		dev_err(dev, "Failed to allocate regmap field, rc=%d\n", rc);
+ 		return rc;
+ 	}
++	devm_kfree(dev, regs); /* devm_regmap_field_bulk_alloc() makes copies */
+ 
+ 	platform_set_drvdata(pdev, flash_data);
+ 	mutex_init(&flash_data->lock);
+diff --git a/drivers/leds/leds-lp50xx.c b/drivers/leds/leds-lp50xx.c
+index 02cb1565a9fb62..94f8ef6b482c91 100644
+--- a/drivers/leds/leds-lp50xx.c
++++ b/drivers/leds/leds-lp50xx.c
+@@ -476,6 +476,7 @@ static int lp50xx_probe_dt(struct lp50xx *priv)
+ 			return -ENOMEM;
+ 
+ 		fwnode_for_each_child_node(child, led_node) {
++			int multi_index;
+ 			ret = fwnode_property_read_u32(led_node, "color",
+ 						       &color_id);
+ 			if (ret) {
+@@ -483,8 +484,16 @@ static int lp50xx_probe_dt(struct lp50xx *priv)
+ 				dev_err(priv->dev, "Cannot read color\n");
+ 				return ret;
+ 			}
++			ret = fwnode_property_read_u32(led_node, "reg", &multi_index);
++			if (ret != 0) {
++				dev_err(priv->dev, "reg must be set\n");
++				return -EINVAL;
++			} else if (multi_index >= LP50XX_LEDS_PER_MODULE) {
++				dev_err(priv->dev, "reg %i out of range\n", multi_index);
++				return -EINVAL;
++			}
+ 
+-			mc_led_info[num_colors].color_index = color_id;
++			mc_led_info[multi_index].color_index = color_id;
+ 			num_colors++;
+ 		}
+ 
+diff --git a/drivers/leds/trigger/ledtrig-netdev.c b/drivers/leds/trigger/ledtrig-netdev.c
+index 4e048e08c4fdec..c15efe3e50780f 100644
+--- a/drivers/leds/trigger/ledtrig-netdev.c
++++ b/drivers/leds/trigger/ledtrig-netdev.c
+@@ -68,7 +68,6 @@ struct led_netdev_data {
+ 	unsigned int last_activity;
+ 
+ 	unsigned long mode;
+-	unsigned long blink_delay;
+ 	int link_speed;
+ 	__ETHTOOL_DECLARE_LINK_MODE_MASK(supported_link_modes);
+ 	u8 duplex;
+@@ -87,10 +86,6 @@ static void set_baseline_state(struct led_netdev_data *trigger_data)
+ 	/* Already validated, hw control is possible with the requested mode */
+ 	if (trigger_data->hw_control) {
+ 		led_cdev->hw_control_set(led_cdev, trigger_data->mode);
+-		if (led_cdev->blink_set) {
+-			led_cdev->blink_set(led_cdev, &trigger_data->blink_delay,
+-					    &trigger_data->blink_delay);
+-		}
+ 
+ 		return;
+ 	}
+@@ -459,11 +454,10 @@ static ssize_t interval_store(struct device *dev,
+ 			      size_t size)
+ {
+ 	struct led_netdev_data *trigger_data = led_trigger_get_drvdata(dev);
+-	struct led_classdev *led_cdev = trigger_data->led_cdev;
+ 	unsigned long value;
+ 	int ret;
+ 
+-	if (trigger_data->hw_control && !led_cdev->blink_set)
++	if (trigger_data->hw_control)
+ 		return -EINVAL;
+ 
+ 	ret = kstrtoul(buf, 0, &value);
+@@ -472,13 +466,9 @@ static ssize_t interval_store(struct device *dev,
+ 
+ 	/* impose some basic bounds on the timer interval */
+ 	if (value >= 5 && value <= 10000) {
+-		if (trigger_data->hw_control) {
+-			trigger_data->blink_delay = value;
+-		} else {
+-			cancel_delayed_work_sync(&trigger_data->work);
++		cancel_delayed_work_sync(&trigger_data->work);
+ 
+-			atomic_set(&trigger_data->interval, msecs_to_jiffies(value));
+-		}
++		atomic_set(&trigger_data->interval, msecs_to_jiffies(value));
+ 		set_baseline_state(trigger_data);	/* resets timer */
+ 	}
+ 
+diff --git a/drivers/md/dm-ps-historical-service-time.c b/drivers/md/dm-ps-historical-service-time.c
+index b49e10d76d0302..2c8626a83de437 100644
+--- a/drivers/md/dm-ps-historical-service-time.c
++++ b/drivers/md/dm-ps-historical-service-time.c
+@@ -541,8 +541,10 @@ static int __init dm_hst_init(void)
+ {
+ 	int r = dm_register_path_selector(&hst_ps);
+ 
+-	if (r < 0)
++	if (r < 0) {
+ 		DMERR("register failed %d", r);
++		return r;
++	}
+ 
+ 	DMINFO("version " HST_VERSION " loaded");
+ 
+diff --git a/drivers/md/dm-ps-queue-length.c b/drivers/md/dm-ps-queue-length.c
+index e305f05ad1e5e8..eb543e6431e038 100644
+--- a/drivers/md/dm-ps-queue-length.c
++++ b/drivers/md/dm-ps-queue-length.c
+@@ -260,8 +260,10 @@ static int __init dm_ql_init(void)
+ {
+ 	int r = dm_register_path_selector(&ql_ps);
+ 
+-	if (r < 0)
++	if (r < 0) {
+ 		DMERR("register failed %d", r);
++		return r;
++	}
+ 
+ 	DMINFO("version " QL_VERSION " loaded");
+ 
+diff --git a/drivers/md/dm-ps-round-robin.c b/drivers/md/dm-ps-round-robin.c
+index d1745b123dc19c..66a15ac0c22c8b 100644
+--- a/drivers/md/dm-ps-round-robin.c
++++ b/drivers/md/dm-ps-round-robin.c
+@@ -220,8 +220,10 @@ static int __init dm_rr_init(void)
+ {
+ 	int r = dm_register_path_selector(&rr_ps);
+ 
+-	if (r < 0)
++	if (r < 0) {
+ 		DMERR("register failed %d", r);
++		return r;
++	}
+ 
+ 	DMINFO("version " RR_VERSION " loaded");
+ 
+diff --git a/drivers/md/dm-ps-service-time.c b/drivers/md/dm-ps-service-time.c
+index 969d31c40272e2..f8c43aecdb27ad 100644
+--- a/drivers/md/dm-ps-service-time.c
++++ b/drivers/md/dm-ps-service-time.c
+@@ -341,8 +341,10 @@ static int __init dm_st_init(void)
+ {
+ 	int r = dm_register_path_selector(&st_ps);
+ 
+-	if (r < 0)
++	if (r < 0) {
+ 		DMERR("register failed %d", r);
++		return r;
++	}
+ 
+ 	DMINFO("version " ST_VERSION " loaded");
+ 
+diff --git a/drivers/md/dm-stripe.c b/drivers/md/dm-stripe.c
+index a1b7535c508a7f..8f61030d3b2dfe 100644
+--- a/drivers/md/dm-stripe.c
++++ b/drivers/md/dm-stripe.c
+@@ -459,6 +459,7 @@ static void stripe_io_hints(struct dm_target *ti,
+ 	struct stripe_c *sc = ti->private;
+ 	unsigned int chunk_size = sc->chunk_size << SECTOR_SHIFT;
+ 
++	limits->chunk_sectors = sc->chunk_size;
+ 	limits->io_min = chunk_size;
+ 	limits->io_opt = chunk_size * sc->stripes;
+ }
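
Note the units in stripe_io_hints(): chunk_sectors is expressed in 512-byte
sectors while io_min and io_opt are byte values, which is why only the latter
two take the << SECTOR_SHIFT. With illustrative numbers:

#include <stdio.h>

#define SECTOR_SHIFT 9	/* 512-byte sectors, as in the block layer */

int main(void)
{
	unsigned int chunk_size = 256;	/* stripe chunk, in sectors */

	/* chunk_sectors is sector-based; io_min/io_opt are byte-based */
	printf("chunk_sectors=%u io_min=%u bytes\n",
	       chunk_size, chunk_size << SECTOR_SHIFT);
	return 0;
}
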
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index dd074c8ecbbad0..587a9833c1f67b 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -900,17 +900,17 @@ static bool dm_table_supports_dax(struct dm_table *t,
+ 	return true;
+ }
+ 
+-static int device_is_rq_stackable(struct dm_target *ti, struct dm_dev *dev,
+-				  sector_t start, sector_t len, void *data)
++static int device_is_not_rq_stackable(struct dm_target *ti, struct dm_dev *dev,
++				      sector_t start, sector_t len, void *data)
+ {
+ 	struct block_device *bdev = dev->bdev;
+ 	struct request_queue *q = bdev_get_queue(bdev);
+ 
+ 	/* request-based cannot stack on partitions! */
+ 	if (bdev_is_partition(bdev))
+-		return false;
++		return true;
+ 
+-	return queue_is_mq(q);
++	return !queue_is_mq(q);
+ }
+ 
+ static int dm_table_determine_type(struct dm_table *t)
+@@ -1006,7 +1006,7 @@ static int dm_table_determine_type(struct dm_table *t)
+ 
+ 	/* Non-request-stackable devices can't be used for request-based dm */
+ 	if (!ti->type->iterate_devices ||
+-	    !ti->type->iterate_devices(ti, device_is_rq_stackable, NULL)) {
++	    ti->type->iterate_devices(ti, device_is_not_rq_stackable, NULL)) {
+ 		DMERR("table load rejected: including non-request-stackable devices");
+ 		return -EINVAL;
+ 	}
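
The predicate rename is a De Morgan inversion: iterate_devices() reports
whether the callback returned true for any device, so "all devices are
request-stackable" has to be phrased as "no device is not stackable". A toy
version of the pattern, with made-up data rather than the dm structures:

#include <stdbool.h>
#include <stdio.h>

/* iterate_devices-style walk: true if fn() holds for ANY element. To check
 * a property of ALL elements, iterate the negated predicate:
 * all(p) == !any(!p). */
static bool any(const int *v, int n, bool (*fn)(int))
{
	for (int i = 0; i < n; i++)
		if (fn(v[i]))
			return true;
	return false;
}

static bool is_not_rq_stackable(int is_mq) { return !is_mq; }

int main(void)
{
	int devices[] = { 1, 1, 0 };	/* last device is not request-based */

	if (any(devices, 3, is_not_rq_stackable))
		printf("table load rejected\n");
	return 0;
}
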
+diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c
+index 6141fc25d8421a..c38bd6e4c27377 100644
+--- a/drivers/md/dm-zoned-target.c
++++ b/drivers/md/dm-zoned-target.c
+@@ -1061,7 +1061,7 @@ static int dmz_iterate_devices(struct dm_target *ti,
+ 	struct dmz_target *dmz = ti->private;
+ 	unsigned int zone_nr_sectors = dmz_zone_nr_sectors(dmz->metadata);
+ 	sector_t capacity;
+-	int i, r;
++	int i, r = 0;
+ 
+ 	for (i = 0; i < dmz->nr_ddevs; i++) {
+ 		capacity = dmz->dev[i].capacity & ~(zone_nr_sectors - 1);
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 240f6dab8ddafb..c2089c38f855ab 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1790,19 +1790,35 @@ static void init_clone_info(struct clone_info *ci, struct dm_io *io,
+ }
+ 
+ #ifdef CONFIG_BLK_DEV_ZONED
+-static inline bool dm_zone_bio_needs_split(struct mapped_device *md,
+-					   struct bio *bio)
++static inline bool dm_zone_bio_needs_split(struct bio *bio)
+ {
+ 	/*
+-	 * For mapped device that need zone append emulation, we must
+-	 * split any large BIO that straddles zone boundaries.
++	 * Special case the zone operations that cannot or should not be split.
+ 	 */
+-	return dm_emulate_zone_append(md) && bio_straddles_zones(bio) &&
+-		!bio_flagged(bio, BIO_ZONE_WRITE_PLUGGING);
++	switch (bio_op(bio)) {
++	case REQ_OP_ZONE_APPEND:
++	case REQ_OP_ZONE_FINISH:
++	case REQ_OP_ZONE_RESET:
++	case REQ_OP_ZONE_RESET_ALL:
++		return false;
++	default:
++		break;
++	}
++
++	/*
++	 * When mapped devices use the block layer zone write plugging, we must
++	 * split any large BIO to the mapped device limits so that we do not
++	 * submit BIOs spanning zone boundaries, and to avoid potential
++	 * deadlocks with queue freeze operations.
++	 */
++	return bio_needs_zone_write_plugging(bio) || bio_straddles_zones(bio);
+ }
++
+ static inline bool dm_zone_plug_bio(struct mapped_device *md, struct bio *bio)
+ {
+-	return dm_emulate_zone_append(md) && blk_zone_plug_bio(bio, 0);
++	if (!bio_needs_zone_write_plugging(bio))
++		return false;
++	return blk_zone_plug_bio(bio, 0);
+ }
+ 
+ static blk_status_t __send_zone_reset_all_emulated(struct clone_info *ci,
+@@ -1918,8 +1934,7 @@ static blk_status_t __send_zone_reset_all(struct clone_info *ci)
+ }
+ 
+ #else
+-static inline bool dm_zone_bio_needs_split(struct mapped_device *md,
+-					   struct bio *bio)
++static inline bool dm_zone_bio_needs_split(struct bio *bio)
+ {
+ 	return false;
+ }
+@@ -1946,9 +1961,7 @@ static void dm_split_and_process_bio(struct mapped_device *md,
+ 
+ 	is_abnormal = is_abnormal_io(bio);
+ 	if (static_branch_unlikely(&zoned_enabled)) {
+-		/* Special case REQ_OP_ZONE_RESET_ALL as it cannot be split. */
+-		need_split = (bio_op(bio) != REQ_OP_ZONE_RESET_ALL) &&
+-			(is_abnormal || dm_zone_bio_needs_split(md, bio));
++		need_split = is_abnormal || dm_zone_bio_needs_split(bio);
+ 	} else {
+ 		need_split = is_abnormal;
+ 	}
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 6a55374a6ba37d..e946abe62084bd 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -4019,6 +4019,7 @@ static int raid10_set_queue_limits(struct mddev *mddev)
+ 	md_init_stacking_limits(&lim);
+ 	lim.max_write_zeroes_sectors = 0;
+ 	lim.io_min = mddev->chunk_sectors << 9;
++	lim.chunk_sectors = mddev->chunk_sectors;
+ 	lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
+ 	lim.features |= BLK_FEAT_ATOMIC_WRITES;
+ 	err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
+diff --git a/drivers/media/dvb-frontends/dib7000p.c b/drivers/media/dvb-frontends/dib7000p.c
+index c5582d4fa5be85..40c5b1dc7d91a2 100644
+--- a/drivers/media/dvb-frontends/dib7000p.c
++++ b/drivers/media/dvb-frontends/dib7000p.c
+@@ -2193,6 +2193,8 @@ static int w7090p_tuner_write_serpar(struct i2c_adapter *i2c_adap, struct i2c_ms
+ 	struct dib7000p_state *state = i2c_get_adapdata(i2c_adap);
+ 	u8 n_overflow = 1;
+ 	u16 i = 1000;
++	if (msg[0].len < 3)
++		return -EOPNOTSUPP;
+ 	u16 serpar_num = msg[0].buf[0];
+ 
+ 	while (n_overflow == 1 && i) {
+@@ -2212,6 +2214,8 @@ static int w7090p_tuner_read_serpar(struct i2c_adapter *i2c_adap, struct i2c_msg
+ 	struct dib7000p_state *state = i2c_get_adapdata(i2c_adap);
+ 	u8 n_overflow = 1, n_empty = 1;
+ 	u16 i = 1000;
++	if (msg[0].len < 1 || msg[1].len < 2)
++		return -EOPNOTSUPP;
+ 	u16 serpar_num = msg[0].buf[0];
+ 	u16 read_word;
+ 
+@@ -2256,8 +2260,12 @@ static int dib7090p_rw_on_apb(struct i2c_adapter *i2c_adap,
+ 	u16 word;
+ 
+ 	if (num == 1) {		/* write */
++		if (msg[0].len < 3)
++			return -EOPNOTSUPP;
+ 		dib7000p_write_word(state, apb_address, ((msg[0].buf[1] << 8) | (msg[0].buf[2])));
+ 	} else {
++		if (msg[1].len < 2)
++			return -EOPNOTSUPP;
+ 		word = dib7000p_read_word(state, apb_address);
+ 		msg[1].buf[0] = (word >> 8) & 0xff;
+ 		msg[1].buf[1] = (word) & 0xff;
+diff --git a/drivers/media/i2c/hi556.c b/drivers/media/i2c/hi556.c
+index aed258211b8a88..d3cc65b67855c8 100644
+--- a/drivers/media/i2c/hi556.c
++++ b/drivers/media/i2c/hi556.c
+@@ -1321,7 +1321,12 @@ static int hi556_resume(struct device *dev)
+ 		return ret;
+ 	}
+ 
+-	gpiod_set_value_cansleep(hi556->reset_gpio, 0);
++	if (hi556->reset_gpio) {
++		/* Assert reset for at least 2 ms on back-to-back off-on cycles */
++		usleep_range(2000, 2200);
++		gpiod_set_value_cansleep(hi556->reset_gpio, 0);
++	}
++
+ 	usleep_range(5000, 5500);
+ 	return 0;
+ }
+diff --git a/drivers/media/i2c/lt6911uxe.c b/drivers/media/i2c/lt6911uxe.c
+index 24857d683fcfcf..bdefdd157e69ca 100644
+--- a/drivers/media/i2c/lt6911uxe.c
++++ b/drivers/media/i2c/lt6911uxe.c
+@@ -600,7 +600,7 @@ static int lt6911uxe_probe(struct i2c_client *client)
+ 
+ 	v4l2_i2c_subdev_init(&lt6911uxe->sd, client, &lt6911uxe_subdev_ops);
+ 
+-	lt6911uxe->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_IN);
++	lt6911uxe->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
+ 	if (IS_ERR(lt6911uxe->reset_gpio))
+ 		return dev_err_probe(dev, PTR_ERR(lt6911uxe->reset_gpio),
+ 				     "failed to get reset gpio\n");
+diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
+index dcef93e1a3bcdf..2c56832e006518 100644
+--- a/drivers/media/i2c/tc358743.c
++++ b/drivers/media/i2c/tc358743.c
+@@ -114,7 +114,7 @@ static inline struct tc358743_state *to_state(struct v4l2_subdev *sd)
+ 
+ /* --------------- I2C --------------- */
+ 
+-static void i2c_rd(struct v4l2_subdev *sd, u16 reg, u8 *values, u32 n)
++static int i2c_rd(struct v4l2_subdev *sd, u16 reg, u8 *values, u32 n)
+ {
+ 	struct tc358743_state *state = to_state(sd);
+ 	struct i2c_client *client = state->i2c_client;
+@@ -140,6 +140,7 @@ static void i2c_rd(struct v4l2_subdev *sd, u16 reg, u8 *values, u32 n)
+ 		v4l2_err(sd, "%s: reading register 0x%x from 0x%x failed: %d\n",
+ 				__func__, reg, client->addr, err);
+ 	}
++	return err != ARRAY_SIZE(msgs);
+ }
+ 
+ static void i2c_wr(struct v4l2_subdev *sd, u16 reg, u8 *values, u32 n)
+@@ -196,15 +197,24 @@ static void i2c_wr(struct v4l2_subdev *sd, u16 reg, u8 *values, u32 n)
+ 	}
+ }
+ 
+-static noinline u32 i2c_rdreg(struct v4l2_subdev *sd, u16 reg, u32 n)
++static noinline u32 i2c_rdreg_err(struct v4l2_subdev *sd, u16 reg, u32 n,
++				  int *err)
+ {
++	int error;
+ 	__le32 val = 0;
+ 
+-	i2c_rd(sd, reg, (u8 __force *)&val, n);
++	error = i2c_rd(sd, reg, (u8 __force *)&val, n);
++	if (err)
++		*err = error;
+ 
+ 	return le32_to_cpu(val);
+ }
+ 
++static inline u32 i2c_rdreg(struct v4l2_subdev *sd, u16 reg, u32 n)
++{
++	return i2c_rdreg_err(sd, reg, n, NULL);
++}
++
+ static noinline void i2c_wrreg(struct v4l2_subdev *sd, u16 reg, u32 val, u32 n)
+ {
+ 	__le32 raw = cpu_to_le32(val);
+@@ -233,6 +243,13 @@ static u16 i2c_rd16(struct v4l2_subdev *sd, u16 reg)
+ 	return i2c_rdreg(sd, reg, 2);
+ }
+ 
++static int i2c_rd16_err(struct v4l2_subdev *sd, u16 reg, u16 *value)
++{
++	int err;
++	*value = i2c_rdreg_err(sd, reg, 2, &err);
++	return err;
++}
++
+ static void i2c_wr16(struct v4l2_subdev *sd, u16 reg, u16 val)
+ {
+ 	i2c_wrreg(sd, reg, val, 2);
+@@ -1691,12 +1708,23 @@ static int tc358743_enum_mbus_code(struct v4l2_subdev *sd,
+ 	return 0;
+ }
+ 
++static u32 tc358743_g_colorspace(u32 code)
++{
++	switch (code) {
++	case MEDIA_BUS_FMT_RGB888_1X24:
++		return V4L2_COLORSPACE_SRGB;
++	case MEDIA_BUS_FMT_UYVY8_1X16:
++		return V4L2_COLORSPACE_SMPTE170M;
++	default:
++		return 0;
++	}
++}
++
+ static int tc358743_get_fmt(struct v4l2_subdev *sd,
+ 		struct v4l2_subdev_state *sd_state,
+ 		struct v4l2_subdev_format *format)
+ {
+ 	struct tc358743_state *state = to_state(sd);
+-	u8 vi_rep = i2c_rd8(sd, VI_REP);
+ 
+ 	if (format->pad != 0)
+ 		return -EINVAL;
+@@ -1706,23 +1734,7 @@ static int tc358743_get_fmt(struct v4l2_subdev *sd,
+ 	format->format.height = state->timings.bt.height;
+ 	format->format.field = V4L2_FIELD_NONE;
+ 
+-	switch (vi_rep & MASK_VOUT_COLOR_SEL) {
+-	case MASK_VOUT_COLOR_RGB_FULL:
+-	case MASK_VOUT_COLOR_RGB_LIMITED:
+-		format->format.colorspace = V4L2_COLORSPACE_SRGB;
+-		break;
+-	case MASK_VOUT_COLOR_601_YCBCR_LIMITED:
+-	case MASK_VOUT_COLOR_601_YCBCR_FULL:
+-		format->format.colorspace = V4L2_COLORSPACE_SMPTE170M;
+-		break;
+-	case MASK_VOUT_COLOR_709_YCBCR_FULL:
+-	case MASK_VOUT_COLOR_709_YCBCR_LIMITED:
+-		format->format.colorspace = V4L2_COLORSPACE_REC709;
+-		break;
+-	default:
+-		format->format.colorspace = 0;
+-		break;
+-	}
++	format->format.colorspace = tc358743_g_colorspace(format->format.code);
+ 
+ 	return 0;
+ }
+@@ -1736,19 +1748,14 @@ static int tc358743_set_fmt(struct v4l2_subdev *sd,
+ 	u32 code = format->format.code; /* is overwritten by get_fmt */
+ 	int ret = tc358743_get_fmt(sd, sd_state, format);
+ 
+-	format->format.code = code;
++	if (code == MEDIA_BUS_FMT_RGB888_1X24 ||
++	    code == MEDIA_BUS_FMT_UYVY8_1X16)
++		format->format.code = code;
++	format->format.colorspace = tc358743_g_colorspace(format->format.code);
+ 
+ 	if (ret)
+ 		return ret;
+ 
+-	switch (code) {
+-	case MEDIA_BUS_FMT_RGB888_1X24:
+-	case MEDIA_BUS_FMT_UYVY8_1X16:
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+ 	if (format->which == V4L2_SUBDEV_FORMAT_TRY)
+ 		return 0;
+ 
+@@ -1972,8 +1979,19 @@ static int tc358743_probe_of(struct tc358743_state *state)
+ 	state->pdata.refclk_hz = clk_get_rate(refclk);
+ 	state->pdata.ddc5v_delay = DDC5V_DELAY_100_MS;
+ 	state->pdata.enable_hdcp = false;
+-	/* A FIFO level of 16 should be enough for 2-lane 720p60 at 594 MHz. */
+-	state->pdata.fifo_level = 16;
++	/*
++	 * Ideally the FIFO trigger level should be set based on the input and
++	 * output data rates, but the calculations required are buried in
++	 * Toshiba's register settings spreadsheet.
++	 * A value of 16 works with a 594Mbps data rate for 720p60 (using 2
++	 * lanes) and 1080p60 (using 4 lanes), but fails when the data rate
++	 * is increased, or a lower pixel clock is used that results in the
++	 * CSI reading out faster than the data arrives.
++	 *
++	 * A value of 374 works with both of those modes at 594Mbps, and with
++	 * most modes at 972Mbps.
++	 */
++	state->pdata.fifo_level = 374;
+ 	/*
+ 	 * The PLL input clock is obtained by dividing refclk by pll_prd.
+ 	 * It must be between 6 MHz and 40 MHz, lower frequency is better.
+@@ -2061,6 +2079,7 @@ static int tc358743_probe(struct i2c_client *client)
+ 	struct tc358743_platform_data *pdata = client->dev.platform_data;
+ 	struct v4l2_subdev *sd;
+ 	u16 irq_mask = MASK_HDMI_MSK | MASK_CSI_MSK;
++	u16 chipid;
+ 	int err;
+ 
+ 	if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_BYTE_DATA))
+@@ -2092,7 +2111,8 @@ static int tc358743_probe(struct i2c_client *client)
+ 	sd->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE | V4L2_SUBDEV_FL_HAS_EVENTS;
+ 
+ 	/* i2c access */
+-	if ((i2c_rd16(sd, CHIPID) & MASK_CHIPID) != 0) {
++	if (i2c_rd16_err(sd, CHIPID, &chipid) ||
++	    (chipid & MASK_CHIPID) != 0) {
+ 		v4l2_info(sd, "not a TC358743 on address 0x%x\n",
+ 			  client->addr << 1);
+ 		return -ENODEV;
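
The tc358743 change threads an error path through reads that previously
returned only a value: i2c_rdreg_err()/i2c_rd16_err() report failure via an
out-parameter while the legacy names remain thin wrappers. The shape of that
pattern, with illustrative names and a fake failing register:

#include <errno.h>
#include <stdio.h>

static int rd16_err(unsigned int reg, unsigned short *value)
{
	if (reg == 0xdead)		/* pretend the I2C transfer failed */
		return -EIO;
	*value = 0x1234;
	return 0;
}

static unsigned short rd16(unsigned int reg)
{
	unsigned short v = 0;

	rd16_err(reg, &v);		/* error deliberately ignored */
	return v;
}

int main(void)
{
	unsigned short chipid;

	if (rd16_err(0xdead, &chipid))	/* probe must distinguish I/O errors */
		printf("no device found\n");
	printf("legacy read: %#x\n", rd16(0));
	return 0;
}
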
+diff --git a/drivers/media/pci/intel/ipu-bridge.c b/drivers/media/pci/intel/ipu-bridge.c
+index 1cb7458556004d..a081e44d786e0d 100644
+--- a/drivers/media/pci/intel/ipu-bridge.c
++++ b/drivers/media/pci/intel/ipu-bridge.c
+@@ -60,6 +60,8 @@ static const struct ipu_sensor_config ipu_supported_sensors[] = {
+ 	IPU_SENSOR_CONFIG("INT33BE", 1, 419200000),
+ 	/* Omnivision OV2740 */
+ 	IPU_SENSOR_CONFIG("INT3474", 1, 180000000),
++	/* Omnivision OV5670 */
++	IPU_SENSOR_CONFIG("INT3479", 1, 422400000),
+ 	/* Omnivision OV8865 */
+ 	IPU_SENSOR_CONFIG("INT347A", 1, 360000000),
+ 	/* Omnivision OV7251 */
+diff --git a/drivers/media/platform/qcom/iris/iris_buffer.c b/drivers/media/platform/qcom/iris/iris_buffer.c
+index e5c5a564fcb81e..7dd5730a867af7 100644
+--- a/drivers/media/platform/qcom/iris/iris_buffer.c
++++ b/drivers/media/platform/qcom/iris/iris_buffer.c
+@@ -593,10 +593,13 @@ int iris_vb2_buffer_done(struct iris_inst *inst, struct iris_buffer *buf)
+ 
+ 	vb2 = &vbuf->vb2_buf;
+ 
+-	if (buf->flags & V4L2_BUF_FLAG_ERROR)
++	if (buf->flags & V4L2_BUF_FLAG_ERROR) {
+ 		state = VB2_BUF_STATE_ERROR;
+-	else
+-		state = VB2_BUF_STATE_DONE;
++		vb2_set_plane_payload(vb2, 0, 0);
++		vb2->timestamp = 0;
++		v4l2_m2m_buf_done(vbuf, state);
++		return 0;
++	}
+ 
+ 	vbuf->flags |= buf->flags;
+ 
+@@ -616,6 +619,8 @@ int iris_vb2_buffer_done(struct iris_inst *inst, struct iris_buffer *buf)
+ 			v4l2_m2m_mark_stopped(m2m_ctx);
+ 		}
+ 	}
++
++	state = VB2_BUF_STATE_DONE;
+ 	vb2->timestamp = buf->timestamp;
+ 	v4l2_m2m_buf_done(vbuf, state);
+ 
+diff --git a/drivers/media/platform/qcom/iris/iris_hfi_gen1_defines.h b/drivers/media/platform/qcom/iris/iris_hfi_gen1_defines.h
+index 9f246816a28620..93b5f838c2901c 100644
+--- a/drivers/media/platform/qcom/iris/iris_hfi_gen1_defines.h
++++ b/drivers/media/platform/qcom/iris/iris_hfi_gen1_defines.h
+@@ -117,6 +117,8 @@
+ #define HFI_FRAME_NOTCODED				0x7f002000
+ #define HFI_FRAME_YUV					0x7f004000
+ #define HFI_UNUSED_PICT					0x10000000
++#define HFI_BUFFERFLAG_DATACORRUPT			0x00000008
++#define HFI_BUFFERFLAG_DROP_FRAME			0x20000000
+ 
+ struct hfi_pkt_hdr {
+ 	u32 size;
+diff --git a/drivers/media/platform/qcom/iris/iris_hfi_gen1_response.c b/drivers/media/platform/qcom/iris/iris_hfi_gen1_response.c
+index b72d503dd74018..91d95eed68aa29 100644
+--- a/drivers/media/platform/qcom/iris/iris_hfi_gen1_response.c
++++ b/drivers/media/platform/qcom/iris/iris_hfi_gen1_response.c
+@@ -481,6 +481,12 @@ static void iris_hfi_gen1_session_ftb_done(struct iris_inst *inst, void *packet)
+ 	buf->attr |= BUF_ATTR_DEQUEUED;
+ 	buf->attr |= BUF_ATTR_BUFFER_DONE;
+ 
++	if (hfi_flags & HFI_BUFFERFLAG_DATACORRUPT)
++		flags |= V4L2_BUF_FLAG_ERROR;
++
++	if (hfi_flags & HFI_BUFFERFLAG_DROP_FRAME)
++		flags |= V4L2_BUF_FLAG_ERROR;
++
+ 	buf->flags |= flags;
+ 
+ 	iris_vb2_buffer_done(inst, buf);
+diff --git a/drivers/media/platform/qcom/venus/hfi_msgs.c b/drivers/media/platform/qcom/venus/hfi_msgs.c
+index 0a041b4db9efc5..cf0d97cbc4631f 100644
+--- a/drivers/media/platform/qcom/venus/hfi_msgs.c
++++ b/drivers/media/platform/qcom/venus/hfi_msgs.c
+@@ -33,8 +33,9 @@ static void event_seq_changed(struct venus_core *core, struct venus_inst *inst,
+ 	struct hfi_buffer_requirements *bufreq;
+ 	struct hfi_extradata_input_crop *crop;
+ 	struct hfi_dpb_counts *dpb_count;
++	u32 ptype, rem_bytes;
++	u32 size_read = 0;
+ 	u8 *data_ptr;
+-	u32 ptype;
+ 
+ 	inst->error = HFI_ERR_NONE;
+ 
+@@ -44,86 +45,118 @@ static void event_seq_changed(struct venus_core *core, struct venus_inst *inst,
+ 		break;
+ 	default:
+ 		inst->error = HFI_ERR_SESSION_INVALID_PARAMETER;
+-		goto done;
++		inst->ops->event_notify(inst, EVT_SYS_EVENT_CHANGE, &event);
++		return;
+ 	}
+ 
+ 	event.event_type = pkt->event_data1;
+ 
+ 	num_properties_changed = pkt->event_data2;
+-	if (!num_properties_changed) {
+-		inst->error = HFI_ERR_SESSION_INSUFFICIENT_RESOURCES;
+-		goto done;
+-	}
++	if (!num_properties_changed)
++		goto error;
+ 
+ 	data_ptr = (u8 *)&pkt->ext_event_data[0];
++	rem_bytes = pkt->shdr.hdr.size - sizeof(*pkt);
++
+ 	do {
++		if (rem_bytes < sizeof(u32))
++			goto error;
+ 		ptype = *((u32 *)data_ptr);
++
++		data_ptr += sizeof(u32);
++		rem_bytes -= sizeof(u32);
++
+ 		switch (ptype) {
+ 		case HFI_PROPERTY_PARAM_FRAME_SIZE:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(struct hfi_framesize))
++				goto error;
++
+ 			frame_sz = (struct hfi_framesize *)data_ptr;
+ 			event.width = frame_sz->width;
+ 			event.height = frame_sz->height;
+-			data_ptr += sizeof(*frame_sz);
++			size_read = sizeof(struct hfi_framesize);
+ 			break;
+ 		case HFI_PROPERTY_PARAM_PROFILE_LEVEL_CURRENT:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(struct hfi_profile_level))
++				goto error;
++
+ 			profile_level = (struct hfi_profile_level *)data_ptr;
+ 			event.profile = profile_level->profile;
+ 			event.level = profile_level->level;
+-			data_ptr += sizeof(*profile_level);
++			size_read = sizeof(struct hfi_profile_level);
+ 			break;
+ 		case HFI_PROPERTY_PARAM_VDEC_PIXEL_BITDEPTH:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(struct hfi_bit_depth))
++				goto error;
++
+ 			pixel_depth = (struct hfi_bit_depth *)data_ptr;
+ 			event.bit_depth = pixel_depth->bit_depth;
+-			data_ptr += sizeof(*pixel_depth);
++			size_read = sizeof(struct hfi_bit_depth);
+ 			break;
+ 		case HFI_PROPERTY_PARAM_VDEC_PIC_STRUCT:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(struct hfi_pic_struct))
++				goto error;
++
+ 			pic_struct = (struct hfi_pic_struct *)data_ptr;
+ 			event.pic_struct = pic_struct->progressive_only;
+-			data_ptr += sizeof(*pic_struct);
++			size_read = sizeof(struct hfi_pic_struct);
+ 			break;
+ 		case HFI_PROPERTY_PARAM_VDEC_COLOUR_SPACE:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(struct hfi_colour_space))
++				goto error;
++
+ 			colour_info = (struct hfi_colour_space *)data_ptr;
+ 			event.colour_space = colour_info->colour_space;
+-			data_ptr += sizeof(*colour_info);
++			size_read = sizeof(struct hfi_colour_space);
+ 			break;
+ 		case HFI_PROPERTY_CONFIG_VDEC_ENTROPY:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(u32))
++				goto error;
++
+ 			event.entropy_mode = *(u32 *)data_ptr;
+-			data_ptr += sizeof(u32);
++			size_read = sizeof(u32);
+ 			break;
+ 		case HFI_PROPERTY_CONFIG_BUFFER_REQUIREMENTS:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(struct hfi_buffer_requirements))
++				goto error;
++
+ 			bufreq = (struct hfi_buffer_requirements *)data_ptr;
+ 			event.buf_count = hfi_bufreq_get_count_min(bufreq, ver);
+-			data_ptr += sizeof(*bufreq);
++			size_read = sizeof(struct hfi_buffer_requirements);
+ 			break;
+ 		case HFI_INDEX_EXTRADATA_INPUT_CROP:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(struct hfi_extradata_input_crop))
++				goto error;
++
+ 			crop = (struct hfi_extradata_input_crop *)data_ptr;
+ 			event.input_crop.left = crop->left;
+ 			event.input_crop.top = crop->top;
+ 			event.input_crop.width = crop->width;
+ 			event.input_crop.height = crop->height;
+-			data_ptr += sizeof(*crop);
++			size_read = sizeof(struct hfi_extradata_input_crop);
+ 			break;
+ 		case HFI_PROPERTY_PARAM_VDEC_DPB_COUNTS:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(struct hfi_dpb_counts))
++				goto error;
++
+ 			dpb_count = (struct hfi_dpb_counts *)data_ptr;
+ 			event.buf_count = dpb_count->fw_min_cnt;
+-			data_ptr += sizeof(*dpb_count);
++			size_read = sizeof(struct hfi_dpb_counts);
+ 			break;
+ 		default:
++			size_read = 0;
+ 			break;
+ 		}
++		data_ptr += size_read;
++		rem_bytes -= size_read;
+ 		num_properties_changed--;
+ 	} while (num_properties_changed > 0);
+ 
+-done:
++	inst->ops->event_notify(inst, EVT_SYS_EVENT_CHANGE, &event);
++	return;
++
++error:
++	inst->error = HFI_ERR_SESSION_INSUFFICIENT_RESOURCES;
+ 	inst->ops->event_notify(inst, EVT_SYS_EVENT_CHANGE, &event);
+ }
+ 
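
The reworked event_seq_changed() walks the variable-length property list
against an explicit rem_bytes budget: each property ID and each payload is
bounds-checked before it is read, and the cursor advances by exactly the bytes
consumed. A self-contained sketch of that discipline, with invented property
IDs and payload sizes (the driver's are the HFI_PROPERTY_* structures):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static int parse_props(const uint8_t *p, size_t rem, unsigned int nprops)
{
	while (nprops--) {
		uint32_t type;
		size_t payload;

		if (rem < sizeof(type))
			return -1;		/* truncated packet */
		memcpy(&type, p, sizeof(type));
		p += sizeof(type);
		rem -= sizeof(type);

		payload = (type == 1) ? 8 : 0;	/* per-type payload size */
		if (rem < payload)
			return -1;
		p += payload;
		rem -= payload;
	}
	return 0;
}

int main(void)
{
	uint8_t pkt[12] = { 0 };
	uint32_t t = 1;

	memcpy(pkt, &t, sizeof(t));	/* property ID 1, 8-byte payload */
	printf("%d\n", parse_props(pkt, sizeof(pkt), 1));	/* 0: fits */
	printf("%d\n", parse_props(pkt, 8, 1));			/* -1: short */
	return 0;
}
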
+diff --git a/drivers/media/platform/raspberrypi/rp1-cfe/cfe.c b/drivers/media/platform/raspberrypi/rp1-cfe/cfe.c
+index 69a5f23e795439..9af9efc0d7e7a7 100644
+--- a/drivers/media/platform/raspberrypi/rp1-cfe/cfe.c
++++ b/drivers/media/platform/raspberrypi/rp1-cfe/cfe.c
+@@ -1025,9 +1025,6 @@ static int cfe_queue_setup(struct vb2_queue *vq, unsigned int *nbuffers,
+ 	cfe_dbg(cfe, "%s: [%s] type:%u\n", __func__, node_desc[node->id].name,
+ 		node->buffer_queue.type);
+ 
+-	if (vq->max_num_buffers + *nbuffers < 3)
+-		*nbuffers = 3 - vq->max_num_buffers;
+-
+ 	if (*nplanes) {
+ 		if (sizes[0] < size) {
+ 			cfe_err(cfe, "sizes[0] %i < size %u\n", sizes[0], size);
+@@ -1999,6 +1996,7 @@ static int cfe_register_node(struct cfe_device *cfe, int id)
+ 	q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
+ 	q->lock = &node->lock;
+ 	q->min_queued_buffers = 1;
++	q->min_reqbufs_allocation = 3;
+ 	q->dev = &cfe->pdev->dev;
+ 
+ 	ret = vb2_queue_init(q);
+diff --git a/drivers/media/usb/hdpvr/hdpvr-i2c.c b/drivers/media/usb/hdpvr/hdpvr-i2c.c
+index 070559b01b01b8..54956a8ff15e86 100644
+--- a/drivers/media/usb/hdpvr/hdpvr-i2c.c
++++ b/drivers/media/usb/hdpvr/hdpvr-i2c.c
+@@ -165,10 +165,16 @@ static const struct i2c_algorithm hdpvr_algo = {
+ 	.functionality = hdpvr_functionality,
+ };
+ 
++/* prevent invalid 0-length usb_control_msg */
++static const struct i2c_adapter_quirks hdpvr_quirks = {
++	.flags = I2C_AQ_NO_ZERO_LEN_READ,
++};
++
+ static const struct i2c_adapter hdpvr_i2c_adapter_template = {
+ 	.name   = "Hauppauge HD PVR I2C",
+ 	.owner  = THIS_MODULE,
+ 	.algo   = &hdpvr_algo,
++	.quirks = &hdpvr_quirks,
+ };
+ 
+ static int hdpvr_activate_ir(struct hdpvr_device *dev)
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index 44b6513c526421..f24272d483a2d7 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -1483,14 +1483,28 @@ static u32 uvc_get_ctrl_bitmap(struct uvc_control *ctrl,
+ 	return ~0;
+ }
+ 
++/*
++ * Maximum retry count to avoid spurious errors with controls. Increasing this
++ * value does not seem to produce better results in the tested hardware.
++ */
++#define MAX_QUERY_RETRIES 2
++
+ static int __uvc_queryctrl_boundaries(struct uvc_video_chain *chain,
+ 				      struct uvc_control *ctrl,
+ 				      struct uvc_control_mapping *mapping,
+ 				      struct v4l2_query_ext_ctrl *v4l2_ctrl)
+ {
+ 	if (!ctrl->cached) {
+-		int ret = uvc_ctrl_populate_cache(chain, ctrl);
+-		if (ret < 0)
++		unsigned int retries;
++		int ret;
++
++		for (retries = 0; retries < MAX_QUERY_RETRIES; retries++) {
++			ret = uvc_ctrl_populate_cache(chain, ctrl);
++			if (ret != -EIO)
++				break;
++		}
++
++		if (ret)
+ 			return ret;
+ 	}
+ 
+@@ -1567,6 +1581,7 @@ static int __uvc_query_v4l2_ctrl(struct uvc_video_chain *chain,
+ {
+ 	struct uvc_control_mapping *master_map = NULL;
+ 	struct uvc_control *master_ctrl = NULL;
++	int ret;
+ 
+ 	memset(v4l2_ctrl, 0, sizeof(*v4l2_ctrl));
+ 	v4l2_ctrl->id = mapping->id;
+@@ -1587,18 +1602,31 @@ static int __uvc_query_v4l2_ctrl(struct uvc_video_chain *chain,
+ 		__uvc_find_control(ctrl->entity, mapping->master_id,
+ 				   &master_map, &master_ctrl, 0, 0);
+ 	if (master_ctrl && (master_ctrl->info.flags & UVC_CTRL_FLAG_GET_CUR)) {
++		unsigned int retries;
+ 		s32 val;
+ 		int ret;
+ 
+ 		if (WARN_ON(uvc_ctrl_mapping_is_compound(master_map)))
+ 			return -EIO;
+ 
+-		ret = __uvc_ctrl_get(chain, master_ctrl, master_map, &val);
+-		if (ret < 0)
+-			return ret;
++		for (retries = 0; retries < MAX_QUERY_RETRIES; retries++) {
++			ret = __uvc_ctrl_get(chain, master_ctrl, master_map,
++					     &val);
++			if (!ret)
++				break;
++			if (ret < 0 && ret != -EIO)
++				return ret;
++		}
+ 
+-		if (val != mapping->master_manual)
+-			v4l2_ctrl->flags |= V4L2_CTRL_FLAG_INACTIVE;
++		if (ret == -EIO) {
++			dev_warn_ratelimited(&chain->dev->udev->dev,
++					     "UVC non compliance: Error %d querying master control %x (%s)\n",
++					     ret, master_map->id,
++					     uvc_map_get_name(master_map));
++		} else {
++			if (val != mapping->master_manual)
++				v4l2_ctrl->flags |= V4L2_CTRL_FLAG_INACTIVE;
++		}
+ 	}
+ 
+ 	v4l2_ctrl->elem_size = uvc_mapping_v4l2_size(mapping);
+@@ -1613,7 +1641,18 @@ static int __uvc_query_v4l2_ctrl(struct uvc_video_chain *chain,
+ 		return 0;
+ 	}
+ 
+-	return __uvc_queryctrl_boundaries(chain, ctrl, mapping, v4l2_ctrl);
++	ret = __uvc_queryctrl_boundaries(chain, ctrl, mapping, v4l2_ctrl);
++	if (ret && !mapping->disabled) {
++		dev_warn(&chain->dev->udev->dev,
++			 "UVC non compliance: permanently disabling control %x (%s), due to error %d\n",
++			 mapping->id, uvc_map_get_name(mapping), ret);
++		mapping->disabled = true;
++	}
++
++	if (mapping->disabled)
++		v4l2_ctrl->flags |= V4L2_CTRL_FLAG_DISABLED;
++
++	return 0;
+ }
+ 
+ int uvc_query_v4l2_ctrl(struct uvc_video_chain *chain,
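
Both uvc_ctrl.c query paths above now share one retry shape: loop up to
MAX_QUERY_RETRIES, retrying only on the transient -EIO and exiting immediately
on success or any other error. Reduced to a standalone sketch with a stand-in
flaky query():

#include <errno.h>
#include <stdio.h>

#define MAX_QUERY_RETRIES 2

static int attempts;

/* Illustrative flaky query: fails with -EIO once, then succeeds. */
static int query(void)
{
	return (++attempts < 2) ? -EIO : 0;
}

int main(void)
{
	unsigned int retries;
	int ret;

	/* Retry only the transient -EIO; success or any other error exits. */
	for (retries = 0; retries < MAX_QUERY_RETRIES; retries++) {
		ret = query();
		if (ret != -EIO)
			break;
	}
	printf("ret=%d after %d attempt(s)\n", ret, attempts);
	return 0;
}
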
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 25e9aea81196e0..6828931830841f 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -344,6 +344,9 @@ static int uvc_parse_format(struct uvc_device *dev,
+ 	u8 ftype;
+ 	int ret;
+ 
++	if (buflen < 4)
++		return -EINVAL;
++
+ 	format->type = buffer[2];
+ 	format->index = buffer[3];
+ 	format->frames = frames;
+@@ -2511,6 +2514,15 @@ static const struct uvc_device_info uvc_quirk_force_y8 = {
+  * Sort these by vendor/product ID.
+  */
+ static const struct usb_device_id uvc_ids[] = {
++	/* HP Webcam HD 2300 */
++	{ .match_flags		= USB_DEVICE_ID_MATCH_DEVICE
++				| USB_DEVICE_ID_MATCH_INT_INFO,
++	  .idVendor		= 0x03f0,
++	  .idProduct		= 0xe207,
++	  .bInterfaceClass	= USB_CLASS_VIDEO,
++	  .bInterfaceSubClass	= 1,
++	  .bInterfaceProtocol	= 0,
++	  .driver_info		= (kernel_ulong_t)&uvc_quirk_stream_no_fid },
+ 	/* Quanta ACER HD User Facing */
+ 	{ .match_flags		= USB_DEVICE_ID_MATCH_DEVICE
+ 				| USB_DEVICE_ID_MATCH_INT_INFO,
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index e3567aeb0007c1..2e377e7b9e8159 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -262,6 +262,15 @@ static void uvc_fixup_video_ctrl(struct uvc_streaming *stream,
+ 
+ 		ctrl->dwMaxPayloadTransferSize = bandwidth;
+ 	}
++
++	if (stream->intf->num_altsetting > 1 &&
++	    ctrl->dwMaxPayloadTransferSize > stream->maxpsize) {
++		dev_warn_ratelimited(&stream->intf->dev,
++				     "UVC non compliance: the max payload transmission size (%u) exceeds the size of the ep max packet (%u). Using the max size.\n",
++				     ctrl->dwMaxPayloadTransferSize,
++				     stream->maxpsize);
++		ctrl->dwMaxPayloadTransferSize = stream->maxpsize;
++	}
+ }
+ 
+ static size_t uvc_video_ctrl_size(struct uvc_streaming *stream)
+@@ -1433,12 +1442,6 @@ static void uvc_video_decode_meta(struct uvc_streaming *stream,
+ 	if (!meta_buf || length == 2)
+ 		return;
+ 
+-	if (meta_buf->length - meta_buf->bytesused <
+-	    length + sizeof(meta->ns) + sizeof(meta->sof)) {
+-		meta_buf->error = 1;
+-		return;
+-	}
+-
+ 	has_pts = mem[1] & UVC_STREAM_PTS;
+ 	has_scr = mem[1] & UVC_STREAM_SCR;
+ 
+@@ -1459,6 +1462,12 @@ static void uvc_video_decode_meta(struct uvc_streaming *stream,
+ 				  !memcmp(scr, stream->clock.last_scr, 6)))
+ 		return;
+ 
++	if (meta_buf->length - meta_buf->bytesused <
++	    length + sizeof(meta->ns) + sizeof(meta->sof)) {
++		meta_buf->error = 1;
++		return;
++	}
++
+ 	meta = (struct uvc_meta_buf *)((u8 *)meta_buf->mem + meta_buf->bytesused);
+ 	local_irq_save(flags);
+ 	time = uvc_video_get_time();
+diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
+index b9f8eb62ba1d82..11d6e3c2ebdfba 100644
+--- a/drivers/media/usb/uvc/uvcvideo.h
++++ b/drivers/media/usb/uvc/uvcvideo.h
+@@ -134,6 +134,8 @@ struct uvc_control_mapping {
+ 	s32 master_manual;
+ 	u32 slave_ids[2];
+ 
++	bool disabled;
++
+ 	const struct uvc_control_mapping *(*filter_mapping)
+ 				(struct uvc_video_chain *chain,
+ 				struct uvc_control *ctrl);
+diff --git a/drivers/media/v4l2-core/v4l2-common.c b/drivers/media/v4l2-core/v4l2-common.c
+index e4b2de3833ee3d..ac0f8a0865c3a8 100644
+--- a/drivers/media/v4l2-core/v4l2-common.c
++++ b/drivers/media/v4l2-core/v4l2-common.c
+@@ -312,6 +312,12 @@ const struct v4l2_format_info *v4l2_format_info(u32 format)
+ 		{ .format = V4L2_PIX_FMT_NV61M,   .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 2, .comp_planes = 2, .bpp = { 1, 2, 0, 0 }, .bpp_div = { 1, 1, 1, 1 }, .hdiv = 2, .vdiv = 1 },
+ 		{ .format = V4L2_PIX_FMT_P012M,   .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 2, .comp_planes = 2, .bpp = { 2, 4, 0, 0 }, .bpp_div = { 1, 1, 1, 1 }, .hdiv = 2, .vdiv = 2 },
+ 
++		/* Tiled YUV formats, non contiguous variant */
++		{ .format = V4L2_PIX_FMT_NV12MT,        .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 2, .comp_planes = 2, .bpp = { 1, 2, 0, 0 }, .bpp_div = { 1, 1, 1, 1 }, .hdiv = 2, .vdiv = 2,
++		  .block_w = { 64, 32, 0, 0 },	.block_h = { 32, 16, 0, 0 }},
++		{ .format = V4L2_PIX_FMT_NV12MT_16X16,  .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 2, .comp_planes = 2, .bpp = { 1, 2, 0, 0 }, .bpp_div = { 1, 1, 1, 1 }, .hdiv = 2, .vdiv = 2,
++		  .block_w = { 16,  8, 0, 0 },	.block_h = { 16,  8, 0, 0 }},
++
+ 		/* Bayer RGB formats */
+ 		{ .format = V4L2_PIX_FMT_SBGGR8,	.pixel_enc = V4L2_PIXEL_ENC_BAYER, .mem_planes = 1, .comp_planes = 1, .bpp = { 1, 0, 0, 0 }, .bpp_div = { 1, 1, 1, 1 }, .hdiv = 1, .vdiv = 1 },
+ 		{ .format = V4L2_PIX_FMT_SGBRG8,	.pixel_enc = V4L2_PIXEL_ENC_BAYER, .mem_planes = 1, .comp_planes = 1, .bpp = { 1, 0, 0, 0 }, .bpp_div = { 1, 1, 1, 1 }, .hdiv = 1, .vdiv = 1 },
+@@ -494,10 +500,10 @@ s64 __v4l2_get_link_freq_ctrl(struct v4l2_ctrl_handler *handler,
+ 
+ 		freq = div_u64(v4l2_ctrl_g_ctrl_int64(ctrl) * mul, div);
+ 
+-		pr_warn("%s: Link frequency estimated using pixel rate: result might be inaccurate\n",
+-			__func__);
+-		pr_warn("%s: Consider implementing support for V4L2_CID_LINK_FREQ in the transmitter driver\n",
+-			__func__);
++		pr_warn_once("%s: Link frequency estimated using pixel rate: result might be inaccurate\n",
++			     __func__);
++		pr_warn_once("%s: Consider implementing support for V4L2_CID_LINK_FREQ in the transmitter driver\n",
++			     __func__);
+ 	}
+ 
+ 	return freq > 0 ? freq : -EINVAL;
+diff --git a/drivers/mfd/axp20x.c b/drivers/mfd/axp20x.c
+index e9914e8a29a33c..25c639b348cd69 100644
+--- a/drivers/mfd/axp20x.c
++++ b/drivers/mfd/axp20x.c
+@@ -1053,7 +1053,8 @@ static const struct mfd_cell axp152_cells[] = {
+ };
+ 
+ static struct mfd_cell axp313a_cells[] = {
+-	MFD_CELL_NAME("axp20x-regulator"),
++	/* The AXP323 is sometimes paired with an AXP717 as a sub-PMIC */
++	MFD_CELL_BASIC("axp20x-regulator", NULL, NULL, 0, 1),
+ 	MFD_CELL_RES("axp313a-pek", axp313a_pek_resources),
+ };
+ 
+diff --git a/drivers/mfd/cros_ec_dev.c b/drivers/mfd/cros_ec_dev.c
+index 9f84a52b48d6a8..dc80a272726bb1 100644
+--- a/drivers/mfd/cros_ec_dev.c
++++ b/drivers/mfd/cros_ec_dev.c
+@@ -87,7 +87,6 @@ static const struct mfd_cell cros_ec_sensorhub_cells[] = {
+ };
+ 
+ static const struct mfd_cell cros_usbpd_charger_cells[] = {
+-	{ .name = "cros-charge-control", },
+ 	{ .name = "cros-usbpd-charger", },
+ 	{ .name = "cros-usbpd-logger", },
+ };
+@@ -112,6 +111,10 @@ static const struct mfd_cell cros_ec_ucsi_cells[] = {
+ 	{ .name = "cros_ec_ucsi", },
+ };
+ 
++static const struct mfd_cell cros_ec_charge_control_cells[] = {
++	{ .name = "cros-charge-control", },
++};
++
+ static const struct cros_feature_to_cells cros_subdevices[] = {
+ 	{
+ 		.id		= EC_FEATURE_CEC,
+@@ -148,6 +151,11 @@ static const struct cros_feature_to_cells cros_subdevices[] = {
+ 		.mfd_cells	= cros_ec_keyboard_leds_cells,
+ 		.num_cells	= ARRAY_SIZE(cros_ec_keyboard_leds_cells),
+ 	},
++	{
++		.id		= EC_FEATURE_CHARGER,
++		.mfd_cells	= cros_ec_charge_control_cells,
++		.num_cells	= ARRAY_SIZE(cros_ec_charge_control_cells),
++	},
+ };
+ 
+ static const struct mfd_cell cros_ec_platform_cells[] = {
+diff --git a/drivers/misc/cardreader/rtsx_usb.c b/drivers/misc/cardreader/rtsx_usb.c
+index 7314c8d9ae75a7..0e2cffe447d93c 100644
+--- a/drivers/misc/cardreader/rtsx_usb.c
++++ b/drivers/misc/cardreader/rtsx_usb.c
+@@ -698,6 +698,12 @@ static void rtsx_usb_disconnect(struct usb_interface *intf)
+ }
+ 
+ #ifdef CONFIG_PM
++static int rtsx_usb_resume_child(struct device *dev, void *data)
++{
++	pm_request_resume(dev);
++	return 0;
++}
++
+ static int rtsx_usb_suspend(struct usb_interface *intf, pm_message_t message)
+ {
+ 	struct rtsx_ucr *ucr =
+@@ -713,8 +719,10 @@ static int rtsx_usb_suspend(struct usb_interface *intf, pm_message_t message)
+ 			mutex_unlock(&ucr->dev_mutex);
+ 
+ 			/* Defer the autosuspend if card exists */
+-			if (val & (SD_CD | MS_CD))
++			if (val & (SD_CD | MS_CD)) {
++				device_for_each_child(&intf->dev, NULL, rtsx_usb_resume_child);
+ 				return -EAGAIN;
++			}
+ 		} else {
+ 			/* There is an ongoing operation*/
+ 			return -EAGAIN;
+@@ -724,12 +732,6 @@ static int rtsx_usb_suspend(struct usb_interface *intf, pm_message_t message)
+ 	return 0;
+ }
+ 
+-static int rtsx_usb_resume_child(struct device *dev, void *data)
+-{
+-	pm_request_resume(dev);
+-	return 0;
+-}
+-
+ static int rtsx_usb_resume(struct usb_interface *intf)
+ {
+ 	device_for_each_child(&intf->dev, NULL, rtsx_usb_resume_child);
+diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
+index 67176caf54163a..1958c043ac142b 100644
+--- a/drivers/misc/mei/bus.c
++++ b/drivers/misc/mei/bus.c
+@@ -1301,10 +1301,16 @@ static void mei_dev_bus_put(struct mei_device *bus)
+ static void mei_cl_bus_dev_release(struct device *dev)
+ {
+ 	struct mei_cl_device *cldev = to_mei_cl_device(dev);
++	struct mei_device *mdev = cldev->cl->dev;
++	struct mei_cl *cl;
+ 
+ 	mei_cl_flush_queues(cldev->cl, NULL);
+ 	mei_me_cl_put(cldev->me_cl);
+ 	mei_dev_bus_put(cldev->bus);
++
++	list_for_each_entry(cl, &mdev->file_list, link)
++		WARN_ON(cl == cldev->cl);
++
+ 	kfree(cldev->cl);
+ 	kfree(cldev);
+ }
+diff --git a/drivers/mmc/host/rtsx_usb_sdmmc.c b/drivers/mmc/host/rtsx_usb_sdmmc.c
+index d229c2b83ea99e..8c35cb85a9c0e9 100644
+--- a/drivers/mmc/host/rtsx_usb_sdmmc.c
++++ b/drivers/mmc/host/rtsx_usb_sdmmc.c
+@@ -1029,9 +1029,7 @@ static int sd_set_power_mode(struct rtsx_usb_sdmmc *host,
+ 		err = sd_power_on(host);
+ 	}
+ 
+-	if (!err)
+-		host->power_mode = power_mode;
+-
++	host->power_mode = power_mode;
+ 	return err;
+ }
+ 
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index 57bd49eea77724..20cee9f44b4cd5 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -1564,6 +1564,7 @@ static void sdhci_msm_check_power_status(struct sdhci_host *host, u32 req_type)
+ {
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
++	struct mmc_host *mmc = host->mmc;
+ 	bool done = false;
+ 	u32 val = SWITCHABLE_SIGNALING_VOLTAGE;
+ 	const struct sdhci_msm_offset *msm_offset =
+@@ -1621,6 +1622,12 @@ static void sdhci_msm_check_power_status(struct sdhci_host *host, u32 req_type)
+ 				 "%s: pwr_irq for req: (%d) timed out\n",
+ 				 mmc_hostname(host->mmc), req_type);
+ 	}
++
++	if ((req_type & REQ_BUS_ON) && mmc->card && !mmc->ops->get_cd(mmc)) {
++		sdhci_writeb(host, 0, SDHCI_POWER_CONTROL);
++		host->pwr = 0;
++	}
++
+ 	pr_debug("%s: %s: request %d done\n", mmc_hostname(host->mmc),
+ 			__func__, req_type);
+ }
+@@ -1679,6 +1686,13 @@ static void sdhci_msm_handle_pwr_irq(struct sdhci_host *host, int irq)
+ 		udelay(10);
+ 	}
+ 
++	if ((irq_status & CORE_PWRCTL_BUS_ON) && mmc->card &&
++	    !mmc->ops->get_cd(mmc)) {
++		msm_host_writel(msm_host, CORE_PWRCTL_BUS_FAIL, host,
++				msm_offset->core_pwrctl_ctl);
++		return;
++	}
++
+ 	/* Handle BUS ON/OFF*/
+ 	if (irq_status & CORE_PWRCTL_BUS_ON) {
+ 		pwr_state = REQ_BUS_ON;
+diff --git a/drivers/net/can/ti_hecc.c b/drivers/net/can/ti_hecc.c
+index 644e8b8eb91e74..e6d6661a908ab1 100644
+--- a/drivers/net/can/ti_hecc.c
++++ b/drivers/net/can/ti_hecc.c
+@@ -383,7 +383,7 @@ static void ti_hecc_start(struct net_device *ndev)
+ 	 * overflows instead of the hardware silently dropping the
+ 	 * messages.
+ 	 */
+-	mbx_mask = ~BIT(HECC_RX_LAST_MBOX);
++	mbx_mask = ~BIT_U32(HECC_RX_LAST_MBOX);
+ 	hecc_write(priv, HECC_CANOPC, mbx_mask);
+ 
+ 	/* Enable interrupts */
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index dc2f4adac9bc96..d15d912690c40e 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -361,18 +361,23 @@ static void b53_set_forwarding(struct b53_device *dev, int enable)
+ 
+ 	b53_write8(dev, B53_CTRL_PAGE, B53_SWITCH_MODE, mgmt);
+ 
+-	/* Include IMP port in dumb forwarding mode
+-	 */
+-	b53_read8(dev, B53_CTRL_PAGE, B53_SWITCH_CTRL, &mgmt);
+-	mgmt |= B53_MII_DUMB_FWDG_EN;
+-	b53_write8(dev, B53_CTRL_PAGE, B53_SWITCH_CTRL, mgmt);
+-
+-	/* Look at B53_UC_FWD_EN and B53_MC_FWD_EN to decide whether
+-	 * frames should be flooded or not.
+-	 */
+-	b53_read8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, &mgmt);
+-	mgmt |= B53_UC_FWD_EN | B53_MC_FWD_EN | B53_IPMC_FWD_EN;
+-	b53_write8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, mgmt);
++	if (!is5325(dev)) {
++		/* Include IMP port in dumb forwarding mode */
++		b53_read8(dev, B53_CTRL_PAGE, B53_SWITCH_CTRL, &mgmt);
++		mgmt |= B53_MII_DUMB_FWDG_EN;
++		b53_write8(dev, B53_CTRL_PAGE, B53_SWITCH_CTRL, mgmt);
++
++		/* Look at B53_UC_FWD_EN and B53_MC_FWD_EN to decide whether
++		 * frames should be flooded or not.
++		 */
++		b53_read8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, &mgmt);
++		mgmt |= B53_UC_FWD_EN | B53_MC_FWD_EN | B53_IPMC_FWD_EN;
++		b53_write8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, mgmt);
++	} else {
++		b53_read8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, &mgmt);
++		mgmt |= B53_IP_MCAST_25;
++		b53_write8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, mgmt);
++	}
+ }
+ 
+ static void b53_enable_vlan(struct b53_device *dev, int port, bool enable,
+@@ -529,6 +534,10 @@ void b53_imp_vlan_setup(struct dsa_switch *ds, int cpu_port)
+ 	unsigned int i;
+ 	u16 pvlan;
+ 
++	/* BCM5325 CPU port is at 8 */
++	if ((is5325(dev) || is5365(dev)) && cpu_port == B53_CPU_PORT_25)
++		cpu_port = B53_CPU_PORT;
++
+ 	/* Enable the IMP port to be in the same VLAN as the other ports
+ 	 * on a per-port basis such that we only have Port i and IMP in
+ 	 * the same VLAN.
+@@ -579,6 +588,9 @@ static void b53_port_set_learning(struct b53_device *dev, int port,
+ {
+ 	u16 reg;
+ 
++	if (is5325(dev))
++		return;
++
+ 	b53_read16(dev, B53_CTRL_PAGE, B53_DIS_LEARNING, &reg);
+ 	if (learning)
+ 		reg &= ~BIT(port);
+@@ -615,6 +627,19 @@ int b53_setup_port(struct dsa_switch *ds, int port)
+ 	if (dsa_is_user_port(ds, port))
+ 		b53_set_eap_mode(dev, port, EAP_MODE_SIMPLIFIED);
+ 
++	if (is5325(dev) &&
++	    in_range(port, 1, 4)) {
++		u8 reg;
++
++		b53_read8(dev, B53_CTRL_PAGE, B53_PD_MODE_CTRL_25, &reg);
++		reg &= ~PD_MODE_POWER_DOWN_PORT(0);
++		if (dsa_is_unused_port(ds, port))
++			reg |= PD_MODE_POWER_DOWN_PORT(port);
++		else
++			reg &= ~PD_MODE_POWER_DOWN_PORT(port);
++		b53_write8(dev, B53_CTRL_PAGE, B53_PD_MODE_CTRL_25, reg);
++	}
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(b53_setup_port);
+@@ -1257,6 +1282,8 @@ static void b53_force_link(struct b53_device *dev, int port, int link)
+ 	if (port == dev->imp_port) {
+ 		off = B53_PORT_OVERRIDE_CTRL;
+ 		val = PORT_OVERRIDE_EN;
++	} else if (is5325(dev)) {
++		return;
+ 	} else {
+ 		off = B53_GMII_PORT_OVERRIDE_CTRL(port);
+ 		val = GMII_PO_EN;
+@@ -1281,6 +1308,8 @@ static void b53_force_port_config(struct b53_device *dev, int port,
+ 	if (port == dev->imp_port) {
+ 		off = B53_PORT_OVERRIDE_CTRL;
+ 		val = PORT_OVERRIDE_EN;
++	} else if (is5325(dev)) {
++		return;
+ 	} else {
+ 		off = B53_GMII_PORT_OVERRIDE_CTRL(port);
+ 		val = GMII_PO_EN;
+@@ -1311,10 +1340,19 @@ static void b53_force_port_config(struct b53_device *dev, int port,
+ 		return;
+ 	}
+ 
+-	if (rx_pause)
+-		reg |= PORT_OVERRIDE_RX_FLOW;
+-	if (tx_pause)
+-		reg |= PORT_OVERRIDE_TX_FLOW;
++	if (rx_pause) {
++		if (is5325(dev))
++			reg |= PORT_OVERRIDE_LP_FLOW_25;
++		else
++			reg |= PORT_OVERRIDE_RX_FLOW;
++	}
++
++	if (tx_pause) {
++		if (is5325(dev))
++			reg |= PORT_OVERRIDE_LP_FLOW_25;
++		else
++			reg |= PORT_OVERRIDE_TX_FLOW;
++	}
+ 
+ 	b53_write8(dev, B53_CTRL_PAGE, off, reg);
+ }
+@@ -2165,7 +2203,13 @@ int b53_br_flags_pre(struct dsa_switch *ds, int port,
+ 		     struct switchdev_brport_flags flags,
+ 		     struct netlink_ext_ack *extack)
+ {
+-	if (flags.mask & ~(BR_FLOOD | BR_MCAST_FLOOD | BR_LEARNING))
++	struct b53_device *dev = ds->priv;
++	unsigned long mask = (BR_FLOOD | BR_MCAST_FLOOD);
++
++	if (!is5325(dev))
++		mask |= BR_LEARNING;
++
++	if (flags.mask & ~mask)
+ 		return -EINVAL;
+ 
+ 	return 0;
+diff --git a/drivers/net/dsa/b53/b53_regs.h b/drivers/net/dsa/b53/b53_regs.h
+index 1fbc5a204bc721..f2caf8fe569984 100644
+--- a/drivers/net/dsa/b53/b53_regs.h
++++ b/drivers/net/dsa/b53/b53_regs.h
+@@ -95,17 +95,22 @@
+ #define   PORT_OVERRIDE_SPEED_10M	(0 << PORT_OVERRIDE_SPEED_S)
+ #define   PORT_OVERRIDE_SPEED_100M	(1 << PORT_OVERRIDE_SPEED_S)
+ #define   PORT_OVERRIDE_SPEED_1000M	(2 << PORT_OVERRIDE_SPEED_S)
++#define   PORT_OVERRIDE_LP_FLOW_25	BIT(3) /* BCM5325 only */
+ #define   PORT_OVERRIDE_RV_MII_25	BIT(4) /* BCM5325 only */
+ #define   PORT_OVERRIDE_RX_FLOW		BIT(4)
+ #define   PORT_OVERRIDE_TX_FLOW		BIT(5)
+ #define   PORT_OVERRIDE_SPEED_2000M	BIT(6) /* BCM5301X only, requires setting 1000M */
+ #define   PORT_OVERRIDE_EN		BIT(7) /* Use the register contents */
+ 
+-/* Power-down mode control */
++/* Power-down mode control (8 bit) */
+ #define B53_PD_MODE_CTRL_25		0x0f
++#define  PD_MODE_PORT_MASK		0x1f
++/* Bit 0 also powers down the switch. */
++#define  PD_MODE_POWER_DOWN_PORT(i)	BIT(i)
+ 
+ /* IP Multicast control (8 bit) */
+ #define B53_IP_MULTICAST_CTRL		0x21
++#define  B53_IP_MCAST_25		BIT(0)
+ #define  B53_IPMC_FWD_EN		BIT(1)
+ #define  B53_UC_FWD_EN			BIT(6)
+ #define  B53_MC_FWD_EN			BIT(7)
+diff --git a/drivers/net/ethernet/agere/et131x.c b/drivers/net/ethernet/agere/et131x.c
+index b398adacda915e..5c52bfc09210af 100644
+--- a/drivers/net/ethernet/agere/et131x.c
++++ b/drivers/net/ethernet/agere/et131x.c
+@@ -2459,6 +2459,10 @@ static int nic_send_packet(struct et131x_adapter *adapter, struct tcb *tcb)
+ 							  skb->data,
+ 							  skb_headlen(skb),
+ 							  DMA_TO_DEVICE);
++				if (dma_mapping_error(&adapter->pdev->dev,
++						      dma_addr))
++					return -ENOMEM;
++
+ 				desc[frag].addr_lo = lower_32_bits(dma_addr);
+ 				desc[frag].addr_hi = upper_32_bits(dma_addr);
+ 				frag++;
+@@ -2468,6 +2472,10 @@ static int nic_send_packet(struct et131x_adapter *adapter, struct tcb *tcb)
+ 							  skb->data,
+ 							  skb_headlen(skb) / 2,
+ 							  DMA_TO_DEVICE);
++				if (dma_mapping_error(&adapter->pdev->dev,
++						      dma_addr))
++					return -ENOMEM;
++
+ 				desc[frag].addr_lo = lower_32_bits(dma_addr);
+ 				desc[frag].addr_hi = upper_32_bits(dma_addr);
+ 				frag++;
+@@ -2478,6 +2486,10 @@ static int nic_send_packet(struct et131x_adapter *adapter, struct tcb *tcb)
+ 							  skb_headlen(skb) / 2,
+ 							  skb_headlen(skb) / 2,
+ 							  DMA_TO_DEVICE);
++				if (dma_mapping_error(&adapter->pdev->dev,
++						      dma_addr))
++					goto unmap_first_out;
++
+ 				desc[frag].addr_lo = lower_32_bits(dma_addr);
+ 				desc[frag].addr_hi = upper_32_bits(dma_addr);
+ 				frag++;
+@@ -2489,6 +2501,9 @@ static int nic_send_packet(struct et131x_adapter *adapter, struct tcb *tcb)
+ 						    0,
+ 						    desc[frag].len_vlan,
+ 						    DMA_TO_DEVICE);
++			if (dma_mapping_error(&adapter->pdev->dev, dma_addr))
++				goto unmap_out;
++
+ 			desc[frag].addr_lo = lower_32_bits(dma_addr);
+ 			desc[frag].addr_hi = upper_32_bits(dma_addr);
+ 			frag++;
+@@ -2578,6 +2593,27 @@ static int nic_send_packet(struct et131x_adapter *adapter, struct tcb *tcb)
+ 		       &adapter->regs->global.watchdog_timer);
+ 	}
+ 	return 0;
++
++unmap_out:
++	// Unmap the body of the packet with map_page
++	while (--i) {
++		frag--;
++		dma_addr = desc[frag].addr_lo;
++		dma_addr |= (u64)desc[frag].addr_hi << 32;
++		dma_unmap_page(&adapter->pdev->dev, dma_addr,
++			       desc[frag].len_vlan, DMA_TO_DEVICE);
++	}
++
++unmap_first_out:
++	// Unmap the header with map_single
++	while (frag--) {
++		dma_addr = desc[frag].addr_lo;
++		dma_addr |= (u64)desc[frag].addr_hi << 32;
++		dma_unmap_single(&adapter->pdev->dev, dma_addr,
++				 desc[frag].len_vlan, DMA_TO_DEVICE);
++	}
++
++	return -ENOMEM;
+ }
+ 
+ static int send_packet(struct sk_buff *skb, struct et131x_adapter *adapter)
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_hw.h b/drivers/net/ethernet/aquantia/atlantic/aq_hw.h
+index 42c0efc1b45581..4e66fd9b2ab1dc 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_hw.h
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_hw.h
+@@ -113,6 +113,8 @@ struct aq_stats_s {
+ #define AQ_HW_POWER_STATE_D0   0U
+ #define AQ_HW_POWER_STATE_D3   3U
+ 
++#define	AQ_FW_WAKE_ON_LINK_RTPM BIT(10)
++
+ #define AQ_HW_FLAG_STARTED     0x00000004U
+ #define AQ_HW_FLAG_STOPPING    0x00000008U
+ #define AQ_HW_FLAG_RESETTING   0x00000010U
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils_fw.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils_fw.c
+index 52e2070a4a2f0c..7370e3f76b6208 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils_fw.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils_fw.c
+@@ -462,6 +462,44 @@ static int aq_a2_fw_get_mac_temp(struct aq_hw_s *self, int *temp)
+ 	return aq_a2_fw_get_phy_temp(self, temp);
+ }
+ 
++static int aq_a2_fw_set_wol_params(struct aq_hw_s *self, const u8 *mac, u32 wol)
++{
++	struct mac_address_aligned_s mac_address;
++	struct link_control_s link_control;
++	struct wake_on_lan_s wake_on_lan;
++
++	memcpy(mac_address.aligned.mac_address, mac, ETH_ALEN);
++	hw_atl2_shared_buffer_write(self, mac_address, mac_address);
++
++	memset(&wake_on_lan, 0, sizeof(wake_on_lan));
++
++	if (wol & WAKE_MAGIC)
++		wake_on_lan.wake_on_magic_packet = 1U;
++
++	if (wol & (WAKE_PHY | AQ_FW_WAKE_ON_LINK_RTPM))
++		wake_on_lan.wake_on_link_up = 1U;
++
++	hw_atl2_shared_buffer_write(self, sleep_proxy, wake_on_lan);
++
++	hw_atl2_shared_buffer_get(self, link_control, link_control);
++	link_control.mode = AQ_HOST_MODE_SLEEP_PROXY;
++	hw_atl2_shared_buffer_write(self, link_control, link_control);
++
++	return hw_atl2_shared_buffer_finish_ack(self);
++}
++
++static int aq_a2_fw_set_power(struct aq_hw_s *self, unsigned int power_state,
++			      const u8 *mac)
++{
++	u32 wol = self->aq_nic_cfg->wol;
++	int err = 0;
++
++	if (wol)
++		err = aq_a2_fw_set_wol_params(self, mac, wol);
++
++	return err;
++}
++
+ static int aq_a2_fw_set_eee_rate(struct aq_hw_s *self, u32 speed)
+ {
+ 	struct link_options_s link_options;
+@@ -605,6 +643,7 @@ const struct aq_fw_ops aq_a2_fw_ops = {
+ 	.set_state          = aq_a2_fw_set_state,
+ 	.update_link_status = aq_a2_fw_update_link_status,
+ 	.update_stats       = aq_a2_fw_update_stats,
++	.set_power          = aq_a2_fw_set_power,
+ 	.get_mac_temp       = aq_a2_fw_get_mac_temp,
+ 	.get_phy_temp       = aq_a2_fw_get_phy_temp,
+ 	.set_eee_rate       = aq_a2_fw_set_eee_rate,
+diff --git a/drivers/net/ethernet/atheros/ag71xx.c b/drivers/net/ethernet/atheros/ag71xx.c
+index 67b654889caed0..d1be8928b98590 100644
+--- a/drivers/net/ethernet/atheros/ag71xx.c
++++ b/drivers/net/ethernet/atheros/ag71xx.c
+@@ -1213,6 +1213,11 @@ static bool ag71xx_fill_rx_buf(struct ag71xx *ag, struct ag71xx_buf *buf,
+ 	buf->rx.rx_buf = data;
+ 	buf->rx.dma_addr = dma_map_single(&ag->pdev->dev, data, ag->rx_buf_size,
+ 					  DMA_FROM_DEVICE);
++	if (dma_mapping_error(&ag->pdev->dev, buf->rx.dma_addr)) {
++		skb_free_frag(data);
++		buf->rx.rx_buf = NULL;
++		return false;
++	}
+ 	desc->data = (u32)buf->rx.dma_addr + offset;
+ 	return true;
+ }
+@@ -1511,6 +1516,10 @@ static netdev_tx_t ag71xx_hard_start_xmit(struct sk_buff *skb,
+ 
+ 	dma_addr = dma_map_single(&ag->pdev->dev, skb->data, skb->len,
+ 				  DMA_TO_DEVICE);
++	if (dma_mapping_error(&ag->pdev->dev, dma_addr)) {
++		netif_dbg(ag, tx_err, ndev, "DMA mapping error\n");
++		goto err_drop;
++	}
+ 
+ 	i = ring->curr & ring_mask;
+ 	desc = ag71xx_ring_desc(ring, i);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index d66519ce57af08..8021d97f3f2291 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -3779,7 +3779,6 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
+ 	if (BNXT_RX_PAGE_MODE(bp))
+ 		pp.pool_size += bp->rx_ring_size;
+ 	pp.nid = numa_node;
+-	pp.napi = &rxr->bnapi->napi;
+ 	pp.netdev = bp->dev;
+ 	pp.dev = &bp->pdev->dev;
+ 	pp.dma_dir = bp->rx_dir;
+@@ -3807,6 +3806,12 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
+ 	return PTR_ERR(pool);
+ }
+ 
++static void bnxt_enable_rx_page_pool(struct bnxt_rx_ring_info *rxr)
++{
++	page_pool_enable_direct_recycling(rxr->head_pool, &rxr->bnapi->napi);
++	page_pool_enable_direct_recycling(rxr->page_pool, &rxr->bnapi->napi);
++}
++
+ static int bnxt_alloc_rx_agg_bmap(struct bnxt *bp, struct bnxt_rx_ring_info *rxr)
+ {
+ 	u16 mem_size;
+@@ -3845,6 +3850,7 @@ static int bnxt_alloc_rx_rings(struct bnxt *bp)
+ 		rc = bnxt_alloc_rx_page_pool(bp, rxr, cpu_node);
+ 		if (rc)
+ 			return rc;
++		bnxt_enable_rx_page_pool(rxr);
+ 
+ 		rc = xdp_rxq_info_reg(&rxr->xdp_rxq, bp->dev, i, 0);
+ 		if (rc < 0)
+@@ -15998,6 +16004,7 @@ static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
+ 			goto err_reset;
+ 	}
+ 
++	bnxt_enable_rx_page_pool(rxr);
+ 	napi_enable_locked(&bnapi->napi);
+ 	bnxt_db_nq_arm(bp, &cpr->cp_db, cpr->cp_raw_cons);
+ 
+diff --git a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
+index 608cc6af5af1c7..aa80c370223237 100644
+--- a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
++++ b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
+@@ -1429,9 +1429,9 @@ static acpi_status bgx_acpi_match_id(acpi_handle handle, u32 lvl,
+ {
+ 	struct acpi_buffer string = { ACPI_ALLOCATE_BUFFER, NULL };
+ 	struct bgx *bgx = context;
+-	char bgx_sel[5];
++	char bgx_sel[7];
+ 
+-	snprintf(bgx_sel, 5, "BGX%d", bgx->bgx_id);
++	snprintf(bgx_sel, sizeof(bgx_sel), "BGX%d", bgx->bgx_id);
+ 	if (ACPI_FAILURE(acpi_get_name(handle, ACPI_SINGLE_NAME, &string))) {
+ 		pr_warn("Invalid link device\n");
+ 		return AE_OK;
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index 3d2e2159211917..490af665942947 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -1465,10 +1465,10 @@ static void be_tx_timeout(struct net_device *netdev, unsigned int txqueue)
+ 						 ntohs(tcphdr->source));
+ 					dev_info(dev, "TCP dest port %d\n",
+ 						 ntohs(tcphdr->dest));
+-					dev_info(dev, "TCP sequence num %d\n",
+-						 ntohs(tcphdr->seq));
+-					dev_info(dev, "TCP ack_seq %d\n",
+-						 ntohs(tcphdr->ack_seq));
++					dev_info(dev, "TCP sequence num %u\n",
++						 ntohl(tcphdr->seq));
++					dev_info(dev, "TCP ack_seq %u\n",
++						 ntohl(tcphdr->ack_seq));
+ 				} else if (ip_hdr(skb)->protocol ==
+ 					   IPPROTO_UDP) {
+ 					udphdr = udp_hdr(skb);
+diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c
+index 17ec35e75a6563..0e4258257efc7a 100644
+--- a/drivers/net/ethernet/faraday/ftgmac100.c
++++ b/drivers/net/ethernet/faraday/ftgmac100.c
+@@ -1730,16 +1730,17 @@ static int ftgmac100_setup_mdio(struct net_device *netdev)
+ static void ftgmac100_phy_disconnect(struct net_device *netdev)
+ {
+ 	struct ftgmac100 *priv = netdev_priv(netdev);
++	struct phy_device *phydev = netdev->phydev;
+ 
+-	if (!netdev->phydev)
++	if (!phydev)
+ 		return;
+ 
+-	phy_disconnect(netdev->phydev);
++	phy_disconnect(phydev);
+ 	if (of_phy_is_fixed_link(priv->dev->of_node))
+ 		of_phy_deregister_fixed_link(priv->dev->of_node);
+ 
+ 	if (priv->use_ncsi)
+-		fixed_phy_unregister(netdev->phydev);
++		fixed_phy_unregister(phydev);
+ }
+ 
+ static void ftgmac100_destroy_mdio(struct net_device *netdev)
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+index 4948b4906584e0..ebdaa3e7e10682 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+@@ -28,7 +28,6 @@
+ #include <linux/percpu.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/sort.h>
+-#include <linux/phy_fixed.h>
+ #include <linux/bpf.h>
+ #include <linux/bpf_trace.h>
+ #include <soc/fsl/bman.h>
+@@ -3151,7 +3150,6 @@ static const struct net_device_ops dpaa_ops = {
+ 	.ndo_stop = dpaa_eth_stop,
+ 	.ndo_tx_timeout = dpaa_tx_timeout,
+ 	.ndo_get_stats64 = dpaa_get_stats64,
+-	.ndo_change_carrier = fixed_phy_change_carrier,
+ 	.ndo_set_mac_address = dpaa_set_mac_address,
+ 	.ndo_validate_addr = eth_validate_addr,
+ 	.ndo_set_rx_mode = dpaa_set_rx_mode,
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+index 9986f6e1f58774..7fc01baef280c0 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+@@ -401,8 +401,10 @@ static int dpaa_get_ts_info(struct net_device *net_dev,
+ 		of_node_put(ptp_node);
+ 	}
+ 
+-	if (ptp_dev)
++	if (ptp_dev) {
+ 		ptp = platform_get_drvdata(ptp_dev);
++		put_device(&ptp_dev->dev);
++	}
+ 
+ 	if (ptp)
+ 		info->phc_index = ptp->phc_index;
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+index 203862ec111427..33a07b40c3e702 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+@@ -924,19 +924,29 @@ static int enetc_pf_register_with_ierb(struct pci_dev *pdev)
+ {
+ 	struct platform_device *ierb_pdev;
+ 	struct device_node *ierb_node;
++	int ret;
+ 
+ 	ierb_node = of_find_compatible_node(NULL, NULL,
+ 					    "fsl,ls1028a-enetc-ierb");
+-	if (!ierb_node || !of_device_is_available(ierb_node))
++	if (!ierb_node)
+ 		return -ENODEV;
+ 
++	if (!of_device_is_available(ierb_node)) {
++		of_node_put(ierb_node);
++		return -ENODEV;
++	}
++
+ 	ierb_pdev = of_find_device_by_node(ierb_node);
+ 	of_node_put(ierb_node);
+ 
+ 	if (!ierb_pdev)
+ 		return -EPROBE_DEFER;
+ 
+-	return enetc_ierb_register_pf(ierb_pdev, pdev);
++	ret = enetc_ierb_register_pf(ierb_pdev, pdev);
++
++	put_device(&ierb_pdev->dev);
++
++	return ret;
+ }
+ 
+ static struct enetc_si *enetc_psi_create(struct pci_dev *pdev)
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 17e9bddb9ddd58..651b73163b6ee9 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3124,27 +3124,25 @@ static int fec_enet_us_to_itr_clock(struct net_device *ndev, int us)
+ static void fec_enet_itr_coal_set(struct net_device *ndev)
+ {
+ 	struct fec_enet_private *fep = netdev_priv(ndev);
+-	int rx_itr, tx_itr;
++	u32 rx_itr = 0, tx_itr = 0;
++	int rx_ictt, tx_ictt;
+ 
+-	/* Must be greater than zero to avoid unpredictable behavior */
+-	if (!fep->rx_time_itr || !fep->rx_pkts_itr ||
+-	    !fep->tx_time_itr || !fep->tx_pkts_itr)
+-		return;
+-
+-	/* Select enet system clock as Interrupt Coalescing
+-	 * timer Clock Source
+-	 */
+-	rx_itr = FEC_ITR_CLK_SEL;
+-	tx_itr = FEC_ITR_CLK_SEL;
++	rx_ictt = fec_enet_us_to_itr_clock(ndev, fep->rx_time_itr);
++	tx_ictt = fec_enet_us_to_itr_clock(ndev, fep->tx_time_itr);
+ 
+-	/* set ICFT and ICTT */
+-	rx_itr |= FEC_ITR_ICFT(fep->rx_pkts_itr);
+-	rx_itr |= FEC_ITR_ICTT(fec_enet_us_to_itr_clock(ndev, fep->rx_time_itr));
+-	tx_itr |= FEC_ITR_ICFT(fep->tx_pkts_itr);
+-	tx_itr |= FEC_ITR_ICTT(fec_enet_us_to_itr_clock(ndev, fep->tx_time_itr));
++	if (rx_ictt > 0 && fep->rx_pkts_itr > 1) {
++		/* Enable with enet system clock as Interrupt Coalescing timer Clock Source */
++		rx_itr = FEC_ITR_EN | FEC_ITR_CLK_SEL;
++		rx_itr |= FEC_ITR_ICFT(fep->rx_pkts_itr);
++		rx_itr |= FEC_ITR_ICTT(rx_ictt);
++	}
+ 
+-	rx_itr |= FEC_ITR_EN;
+-	tx_itr |= FEC_ITR_EN;
++	if (tx_ictt > 0 && fep->tx_pkts_itr > 1) {
++		/* Enable with enet system clock as Interrupt Coalescing timer Clock Source */
++		tx_itr = FEC_ITR_EN | FEC_ITR_CLK_SEL;
++		tx_itr |= FEC_ITR_ICFT(fep->tx_pkts_itr);
++		tx_itr |= FEC_ITR_ICTT(tx_ictt);
++	}
+ 
+ 	writel(tx_itr, fep->hwp + FEC_TXIC0);
+ 	writel(rx_itr, fep->hwp + FEC_RXIC0);
+diff --git a/drivers/net/ethernet/freescale/gianfar_ethtool.c b/drivers/net/ethernet/freescale/gianfar_ethtool.c
+index 781d92e703cb3d..c9992ed4e30135 100644
+--- a/drivers/net/ethernet/freescale/gianfar_ethtool.c
++++ b/drivers/net/ethernet/freescale/gianfar_ethtool.c
+@@ -1466,8 +1466,10 @@ static int gfar_get_ts_info(struct net_device *dev,
+ 	if (ptp_node) {
+ 		ptp_dev = of_find_device_by_node(ptp_node);
+ 		of_node_put(ptp_node);
+-		if (ptp_dev)
++		if (ptp_dev) {
+ 			ptp = platform_get_drvdata(ptp_dev);
++			put_device(&ptp_dev->dev);
++		}
+ 	}
+ 
+ 	if (ptp)
+diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
+index 3e8fc33cc11fdb..7f2334006f9f3b 100644
+--- a/drivers/net/ethernet/google/gve/gve_adminq.c
++++ b/drivers/net/ethernet/google/gve/gve_adminq.c
+@@ -564,6 +564,7 @@ static int gve_adminq_issue_cmd(struct gve_priv *priv,
+ 		break;
+ 	default:
+ 		dev_err(&priv->pdev->dev, "unknown AQ command opcode %d\n", opcode);
++		return -EINVAL;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/hisilicon/hibmcge/hbg_err.c b/drivers/net/ethernet/hisilicon/hibmcge/hbg_err.c
+index ff3295b60a69a8..dee1e86811576d 100644
+--- a/drivers/net/ethernet/hisilicon/hibmcge/hbg_err.c
++++ b/drivers/net/ethernet/hisilicon/hibmcge/hbg_err.c
+@@ -53,9 +53,11 @@ static int hbg_reset_prepare(struct hbg_priv *priv, enum hbg_reset_type type)
+ {
+ 	int ret;
+ 
+-	ASSERT_RTNL();
++	if (test_and_set_bit(HBG_NIC_STATE_RESETTING, &priv->state))
++		return -EBUSY;
+ 
+ 	if (netif_running(priv->netdev)) {
++		clear_bit(HBG_NIC_STATE_RESETTING, &priv->state);
+ 		dev_warn(&priv->pdev->dev,
+ 			 "failed to reset because port is up\n");
+ 		return -EBUSY;
+@@ -64,7 +66,6 @@ static int hbg_reset_prepare(struct hbg_priv *priv, enum hbg_reset_type type)
+ 	netif_device_detach(priv->netdev);
+ 
+ 	priv->reset_type = type;
+-	set_bit(HBG_NIC_STATE_RESETTING, &priv->state);
+ 	clear_bit(HBG_NIC_STATE_RESET_FAIL, &priv->state);
+ 	ret = hbg_hw_event_notify(priv, HBG_HW_EVENT_RESET);
+ 	if (ret) {
+@@ -83,28 +84,25 @@ static int hbg_reset_done(struct hbg_priv *priv, enum hbg_reset_type type)
+ 	    type != priv->reset_type)
+ 		return 0;
+ 
+-	ASSERT_RTNL();
+-
+-	clear_bit(HBG_NIC_STATE_RESETTING, &priv->state);
+ 	ret = hbg_rebuild(priv);
+ 	if (ret) {
+ 		set_bit(HBG_NIC_STATE_RESET_FAIL, &priv->state);
++		clear_bit(HBG_NIC_STATE_RESETTING, &priv->state);
+ 		dev_err(&priv->pdev->dev, "failed to rebuild after reset\n");
+ 		return ret;
+ 	}
+ 
+ 	netif_device_attach(priv->netdev);
++	clear_bit(HBG_NIC_STATE_RESETTING, &priv->state);
+ 
+ 	dev_info(&priv->pdev->dev, "reset done\n");
+ 	return ret;
+ }
+ 
+-/* must be protected by rtnl lock */
+ int hbg_reset(struct hbg_priv *priv)
+ {
+ 	int ret;
+ 
+-	ASSERT_RTNL();
+ 	ret = hbg_reset_prepare(priv, HBG_RESET_TYPE_FUNCTION);
+ 	if (ret)
+ 		return ret;
+@@ -169,7 +167,6 @@ static void hbg_pci_err_reset_prepare(struct pci_dev *pdev)
+ 	struct net_device *netdev = pci_get_drvdata(pdev);
+ 	struct hbg_priv *priv = netdev_priv(netdev);
+ 
+-	rtnl_lock();
+ 	hbg_reset_prepare(priv, HBG_RESET_TYPE_FLR);
+ }
+ 
+@@ -179,7 +176,6 @@ static void hbg_pci_err_reset_done(struct pci_dev *pdev)
+ 	struct hbg_priv *priv = netdev_priv(netdev);
+ 
+ 	hbg_reset_done(priv, HBG_RESET_TYPE_FLR);
+-	rtnl_unlock();
+ }
+ 
+ static const struct pci_error_handlers hbg_pci_err_handler = {
+diff --git a/drivers/net/ethernet/hisilicon/hibmcge/hbg_hw.c b/drivers/net/ethernet/hisilicon/hibmcge/hbg_hw.c
+index 9b65eef62b3fba..2844124f306dae 100644
+--- a/drivers/net/ethernet/hisilicon/hibmcge/hbg_hw.c
++++ b/drivers/net/ethernet/hisilicon/hibmcge/hbg_hw.c
+@@ -12,6 +12,8 @@
+ 
+ #define HBG_HW_EVENT_WAIT_TIMEOUT_US	(2 * 1000 * 1000)
+ #define HBG_HW_EVENT_WAIT_INTERVAL_US	(10 * 1000)
++#define HBG_MAC_LINK_WAIT_TIMEOUT_US	(500 * 1000)
++#define HBG_MAC_LINK_WAIT_INTERVAL_US	(5 * 1000)
+ /* little endian or big endian.
+  * ctrl means packet description, data means skb packet data
+  */
+@@ -213,6 +215,9 @@ void hbg_hw_fill_buffer(struct hbg_priv *priv, u32 buffer_dma_addr)
+ 
+ void hbg_hw_adjust_link(struct hbg_priv *priv, u32 speed, u32 duplex)
+ {
++	u32 link_status;
++	int ret;
++
+ 	hbg_hw_mac_enable(priv, HBG_STATUS_DISABLE);
+ 
+ 	hbg_reg_write_field(priv, HBG_REG_PORT_MODE_ADDR,
+@@ -224,8 +229,14 @@ void hbg_hw_adjust_link(struct hbg_priv *priv, u32 speed, u32 duplex)
+ 
+ 	hbg_hw_mac_enable(priv, HBG_STATUS_ENABLE);
+ 
+-	if (!hbg_reg_read_field(priv, HBG_REG_AN_NEG_STATE_ADDR,
+-				HBG_REG_AN_NEG_STATE_NP_LINK_OK_B))
++	/* wait for the MAC link to come up */
++	ret = readl_poll_timeout(priv->io_base + HBG_REG_AN_NEG_STATE_ADDR,
++				 link_status,
++				 FIELD_GET(HBG_REG_AN_NEG_STATE_NP_LINK_OK_B,
++					   link_status),
++				 HBG_MAC_LINK_WAIT_INTERVAL_US,
++				 HBG_MAC_LINK_WAIT_TIMEOUT_US);
++	if (ret)
+ 		hbg_np_link_fail_task_schedule(priv);
+ }
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hibmcge/hbg_txrx.h b/drivers/net/ethernet/hisilicon/hibmcge/hbg_txrx.h
+index 2883a5899ae29e..8b6110599e10db 100644
+--- a/drivers/net/ethernet/hisilicon/hibmcge/hbg_txrx.h
++++ b/drivers/net/ethernet/hisilicon/hibmcge/hbg_txrx.h
+@@ -29,7 +29,12 @@ static inline bool hbg_fifo_is_full(struct hbg_priv *priv, enum hbg_dir dir)
+ 
+ static inline u32 hbg_get_queue_used_num(struct hbg_ring *ring)
+ {
+-	return (ring->ntu + ring->len - ring->ntc) % ring->len;
++	u32 len = READ_ONCE(ring->len);
++
++	if (!len)
++		return 0;
++
++	return (READ_ONCE(ring->ntu) + len - READ_ONCE(ring->ntc)) % len;
+ }
+ 
+ netdev_tx_t hbg_net_start_xmit(struct sk_buff *skb, struct net_device *netdev);
+diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
+index 70dbf80f3bb75b..a2b346d91879e5 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf.h
++++ b/drivers/net/ethernet/intel/idpf/idpf.h
+@@ -369,10 +369,28 @@ struct idpf_rss_data {
+ 	u32 *cached_lut;
+ };
+ 
++/**
++ * struct idpf_q_coalesce - User defined coalescing configuration values for
++ *			   a single queue.
++ * @tx_intr_mode: Dynamic TX ITR or not
++ * @rx_intr_mode: Dynamic RX ITR or not
++ * @tx_coalesce_usecs: TX interrupt throttling rate
++ * @rx_coalesce_usecs: RX interrupt throttling rate
++ *
++ * Used to restore user coalescing configuration after a reset.
++ */
++struct idpf_q_coalesce {
++	u32 tx_intr_mode;
++	u32 rx_intr_mode;
++	u32 tx_coalesce_usecs;
++	u32 rx_coalesce_usecs;
++};
++
+ /**
+  * struct idpf_vport_user_config_data - User defined configuration values for
+  *					each vport.
+  * @rss_data: See struct idpf_rss_data
++ * @q_coalesce: Array of per queue coalescing data
+  * @num_req_tx_qs: Number of user requested TX queues through ethtool
+  * @num_req_rx_qs: Number of user requested RX queues through ethtool
+  * @num_req_txq_desc: Number of user requested TX queue descriptors through
+@@ -386,6 +404,7 @@ struct idpf_rss_data {
+  */
+ struct idpf_vport_user_config_data {
+ 	struct idpf_rss_data rss_data;
++	struct idpf_q_coalesce *q_coalesce;
+ 	u16 num_req_tx_qs;
+ 	u16 num_req_rx_qs;
+ 	u32 num_req_txq_desc;
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
+index f72420cf68216c..f0f0ced0d95fed 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
+@@ -1089,12 +1089,14 @@ static int idpf_get_per_q_coalesce(struct net_device *netdev, u32 q_num,
+ /**
+  * __idpf_set_q_coalesce - set ITR values for specific queue
+  * @ec: ethtool structure from user to update ITR settings
++ * @q_coal: per queue coalesce settings
+  * @qv: queue vector for which itr values has to be set
+  * @is_rxq: is queue type rx
+  *
+  * Returns 0 on success, negative otherwise.
+  */
+ static int __idpf_set_q_coalesce(const struct ethtool_coalesce *ec,
++				 struct idpf_q_coalesce *q_coal,
+ 				 struct idpf_q_vector *qv, bool is_rxq)
+ {
+ 	u32 use_adaptive_coalesce, coalesce_usecs;
+@@ -1138,20 +1140,25 @@ static int __idpf_set_q_coalesce(const struct ethtool_coalesce *ec,
+ 
+ 	if (is_rxq) {
+ 		qv->rx_itr_value = coalesce_usecs;
++		q_coal->rx_coalesce_usecs = coalesce_usecs;
+ 		if (use_adaptive_coalesce) {
+ 			qv->rx_intr_mode = IDPF_ITR_DYNAMIC;
++			q_coal->rx_intr_mode = IDPF_ITR_DYNAMIC;
+ 		} else {
+ 			qv->rx_intr_mode = !IDPF_ITR_DYNAMIC;
+-			idpf_vport_intr_write_itr(qv, qv->rx_itr_value,
+-						  false);
++			q_coal->rx_intr_mode = !IDPF_ITR_DYNAMIC;
++			idpf_vport_intr_write_itr(qv, coalesce_usecs, false);
+ 		}
+ 	} else {
+ 		qv->tx_itr_value = coalesce_usecs;
++		q_coal->tx_coalesce_usecs = coalesce_usecs;
+ 		if (use_adaptive_coalesce) {
+ 			qv->tx_intr_mode = IDPF_ITR_DYNAMIC;
++			q_coal->tx_intr_mode = IDPF_ITR_DYNAMIC;
+ 		} else {
+ 			qv->tx_intr_mode = !IDPF_ITR_DYNAMIC;
+-			idpf_vport_intr_write_itr(qv, qv->tx_itr_value, true);
++			q_coal->tx_intr_mode = !IDPF_ITR_DYNAMIC;
++			idpf_vport_intr_write_itr(qv, coalesce_usecs, true);
+ 		}
+ 	}
+ 
+@@ -1164,6 +1171,7 @@ static int __idpf_set_q_coalesce(const struct ethtool_coalesce *ec,
+ /**
+  * idpf_set_q_coalesce - set ITR values for specific queue
+  * @vport: vport associated to the queue that need updating
++ * @q_coal: per queue coalesce settings
+  * @ec: coalesce settings to program the device with
+  * @q_num: update ITR/INTRL (coalesce) settings for this queue number/index
+  * @is_rxq: is queue type rx
+@@ -1171,6 +1179,7 @@ static int __idpf_set_q_coalesce(const struct ethtool_coalesce *ec,
+  * Return 0 on success, and negative on failure
+  */
+ static int idpf_set_q_coalesce(const struct idpf_vport *vport,
++			       struct idpf_q_coalesce *q_coal,
+ 			       const struct ethtool_coalesce *ec,
+ 			       int q_num, bool is_rxq)
+ {
+@@ -1179,7 +1188,7 @@ static int idpf_set_q_coalesce(const struct idpf_vport *vport,
+ 	qv = is_rxq ? idpf_find_rxq_vec(vport, q_num) :
+ 		      idpf_find_txq_vec(vport, q_num);
+ 
+-	if (qv && __idpf_set_q_coalesce(ec, qv, is_rxq))
++	if (qv && __idpf_set_q_coalesce(ec, q_coal, qv, is_rxq))
+ 		return -EINVAL;
+ 
+ 	return 0;
+@@ -1200,9 +1209,13 @@ static int idpf_set_coalesce(struct net_device *netdev,
+ 			     struct netlink_ext_ack *extack)
+ {
+ 	struct idpf_netdev_priv *np = netdev_priv(netdev);
++	struct idpf_vport_user_config_data *user_config;
++	struct idpf_q_coalesce *q_coal;
+ 	struct idpf_vport *vport;
+ 	int i, err = 0;
+ 
++	user_config = &np->adapter->vport_config[np->vport_idx]->user_config;
++
+ 	idpf_vport_ctrl_lock(netdev);
+ 	vport = idpf_netdev_to_vport(netdev);
+ 
+@@ -1210,13 +1223,15 @@ static int idpf_set_coalesce(struct net_device *netdev,
+ 		goto unlock_mutex;
+ 
+ 	for (i = 0; i < vport->num_txq; i++) {
+-		err = idpf_set_q_coalesce(vport, ec, i, false);
++		q_coal = &user_config->q_coalesce[i];
++		err = idpf_set_q_coalesce(vport, q_coal, ec, i, false);
+ 		if (err)
+ 			goto unlock_mutex;
+ 	}
+ 
+ 	for (i = 0; i < vport->num_rxq; i++) {
+-		err = idpf_set_q_coalesce(vport, ec, i, true);
++		q_coal = &user_config->q_coalesce[i];
++		err = idpf_set_q_coalesce(vport, q_coal, ec, i, true);
+ 		if (err)
+ 			goto unlock_mutex;
+ 	}
+@@ -1238,20 +1253,25 @@ static int idpf_set_coalesce(struct net_device *netdev,
+ static int idpf_set_per_q_coalesce(struct net_device *netdev, u32 q_num,
+ 				   struct ethtool_coalesce *ec)
+ {
++	struct idpf_netdev_priv *np = netdev_priv(netdev);
++	struct idpf_vport_user_config_data *user_config;
++	struct idpf_q_coalesce *q_coal;
+ 	struct idpf_vport *vport;
+ 	int err;
+ 
+ 	idpf_vport_ctrl_lock(netdev);
+ 	vport = idpf_netdev_to_vport(netdev);
++	user_config = &np->adapter->vport_config[np->vport_idx]->user_config;
++	q_coal = &user_config->q_coalesce[q_num];
+ 
+-	err = idpf_set_q_coalesce(vport, ec, q_num, false);
++	err = idpf_set_q_coalesce(vport, q_coal, ec, q_num, false);
+ 	if (err) {
+ 		idpf_vport_ctrl_unlock(netdev);
+ 
+ 		return err;
+ 	}
+ 
+-	err = idpf_set_q_coalesce(vport, ec, q_num, true);
++	err = idpf_set_q_coalesce(vport, q_coal, ec, q_num, true);
+ 
+ 	idpf_vport_ctrl_unlock(netdev);
+ 
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+index fe96e20573660f..72454fb3469578 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+@@ -1094,8 +1094,10 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
+ 	if (!vport)
+ 		return vport;
+ 
++	num_max_q = max(max_q->max_txq, max_q->max_rxq);
+ 	if (!adapter->vport_config[idx]) {
+ 		struct idpf_vport_config *vport_config;
++		struct idpf_q_coalesce *q_coal;
+ 
+ 		vport_config = kzalloc(sizeof(*vport_config), GFP_KERNEL);
+ 		if (!vport_config) {
+@@ -1104,6 +1106,21 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
+ 			return NULL;
+ 		}
+ 
++		q_coal = kcalloc(num_max_q, sizeof(*q_coal), GFP_KERNEL);
++		if (!q_coal) {
++			kfree(vport_config);
++			kfree(vport);
++
++			return NULL;
++		}
++		for (int i = 0; i < num_max_q; i++) {
++			q_coal[i].tx_intr_mode = IDPF_ITR_DYNAMIC;
++			q_coal[i].tx_coalesce_usecs = IDPF_ITR_TX_DEF;
++			q_coal[i].rx_intr_mode = IDPF_ITR_DYNAMIC;
++			q_coal[i].rx_coalesce_usecs = IDPF_ITR_RX_DEF;
++		}
++		vport_config->user_config.q_coalesce = q_coal;
++
+ 		adapter->vport_config[idx] = vport_config;
+ 	}
+ 
+@@ -1113,7 +1130,6 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
+ 	vport->default_vport = adapter->num_alloc_vports <
+ 			       idpf_get_default_vports(adapter);
+ 
+-	num_max_q = max(max_q->max_txq, max_q->max_rxq);
+ 	vport->q_vector_idxs = kcalloc(num_max_q, sizeof(u16), GFP_KERNEL);
+ 	if (!vport->q_vector_idxs)
+ 		goto free_vport;
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_main.c b/drivers/net/ethernet/intel/idpf/idpf_main.c
+index b35713036a54ab..8b3298e8e4f1e6 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_main.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_main.c
+@@ -62,6 +62,7 @@ static void idpf_remove(struct pci_dev *pdev)
+ 	destroy_workqueue(adapter->vc_event_wq);
+ 
+ 	for (i = 0; i < adapter->max_vports; i++) {
++		kfree(adapter->vport_config[i]->user_config.q_coalesce);
+ 		kfree(adapter->vport_config[i]);
+ 		adapter->vport_config[i] = NULL;
+ 	}
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+index aa16e4c1edbb8b..59fe0ca17eec01 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+@@ -4194,9 +4194,13 @@ static void idpf_vport_intr_napi_add_all(struct idpf_vport *vport)
+ int idpf_vport_intr_alloc(struct idpf_vport *vport)
+ {
+ 	u16 txqs_per_vector, rxqs_per_vector, bufqs_per_vector;
++	struct idpf_vport_user_config_data *user_config;
+ 	struct idpf_q_vector *q_vector;
++	struct idpf_q_coalesce *q_coal;
+ 	u32 complqs_per_vector, v_idx;
++	u16 idx = vport->idx;
+ 
++	user_config = &vport->adapter->vport_config[idx]->user_config;
+ 	vport->q_vectors = kcalloc(vport->num_q_vectors,
+ 				   sizeof(struct idpf_q_vector), GFP_KERNEL);
+ 	if (!vport->q_vectors)
+@@ -4214,14 +4218,15 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport)
+ 
+ 	for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
+ 		q_vector = &vport->q_vectors[v_idx];
++		q_coal = &user_config->q_coalesce[v_idx];
+ 		q_vector->vport = vport;
+ 
+-		q_vector->tx_itr_value = IDPF_ITR_TX_DEF;
+-		q_vector->tx_intr_mode = IDPF_ITR_DYNAMIC;
++		q_vector->tx_itr_value = q_coal->tx_coalesce_usecs;
++		q_vector->tx_intr_mode = q_coal->tx_intr_mode;
+ 		q_vector->tx_itr_idx = VIRTCHNL2_ITR_IDX_1;
+ 
+-		q_vector->rx_itr_value = IDPF_ITR_RX_DEF;
+-		q_vector->rx_intr_mode = IDPF_ITR_DYNAMIC;
++		q_vector->rx_itr_value = q_coal->rx_coalesce_usecs;
++		q_vector->rx_intr_mode = q_coal->rx_intr_mode;
+ 		q_vector->rx_itr_idx = VIRTCHNL2_ITR_IDX_0;
+ 
+ 		q_vector->tx = kcalloc(txqs_per_vector, sizeof(*q_vector->tx),
+diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c
+index e212a4ba92751f..499ca700012599 100644
+--- a/drivers/net/ethernet/mediatek/mtk_wed.c
++++ b/drivers/net/ethernet/mediatek/mtk_wed.c
+@@ -2794,7 +2794,6 @@ void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth,
+ 	if (!pdev)
+ 		goto err_of_node_put;
+ 
+-	get_device(&pdev->dev);
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0)
+ 		goto err_put_device;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
+index f0744a45db92c3..4e461cb03b83dd 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
+@@ -374,7 +374,7 @@ void mlx5e_reactivate_qos_sq(struct mlx5e_priv *priv, u16 qid, struct netdev_que
+ void mlx5e_reset_qdisc(struct net_device *dev, u16 qid)
+ {
+ 	struct netdev_queue *dev_queue = netdev_get_tx_queue(dev, qid);
+-	struct Qdisc *qdisc = dev_queue->qdisc_sleeping;
++	struct Qdisc *qdisc = rtnl_dereference(dev_queue->qdisc_sleeping);
+ 
+ 	if (!qdisc)
+ 		return;
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index 7707a9e53c439d..48cb5d30b5f6f6 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -3526,10 +3526,6 @@ void ionic_lif_free(struct ionic_lif *lif)
+ 	lif->info = NULL;
+ 	lif->info_pa = 0;
+ 
+-	/* unmap doorbell page */
+-	ionic_bus_unmap_dbpage(lif->ionic, lif->kern_dbpage);
+-	lif->kern_dbpage = NULL;
+-
+ 	mutex_destroy(&lif->config_lock);
+ 	mutex_destroy(&lif->queue_lock);
+ 
+@@ -3555,6 +3551,9 @@ void ionic_lif_deinit(struct ionic_lif *lif)
+ 	ionic_lif_qcq_deinit(lif, lif->notifyqcq);
+ 	ionic_lif_qcq_deinit(lif, lif->adminqcq);
+ 
++	ionic_bus_unmap_dbpage(lif->ionic, lif->kern_dbpage);
++	lif->kern_dbpage = NULL;
++
+ 	ionic_lif_reset(lif);
+ }
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-thead.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-thead.c
+index c72ee759aae58b..f2946bea0bc268 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-thead.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-thead.c
+@@ -211,6 +211,7 @@ static int thead_dwmac_probe(struct platform_device *pdev)
+ 	struct stmmac_resources stmmac_res;
+ 	struct plat_stmmacenet_data *plat;
+ 	struct thead_dwmac *dwmac;
++	struct clk *apb_clk;
+ 	void __iomem *apb;
+ 	int ret;
+ 
+@@ -224,6 +225,19 @@ static int thead_dwmac_probe(struct platform_device *pdev)
+ 		return dev_err_probe(&pdev->dev, PTR_ERR(plat),
+ 				     "dt configuration failed\n");
+ 
++	/*
++	 * The APB clock is essential for accessing glue registers. However,
++	 * old devicetrees don't describe it correctly. We continue to probe
++	 * and emit a warning if it isn't present.
++	 */
++	apb_clk = devm_clk_get_enabled(&pdev->dev, "apb");
++	if (PTR_ERR(apb_clk) == -ENOENT)
++		dev_warn(&pdev->dev,
++			 "cannot get apb clock, link may break after speed changes\n");
++	else if (IS_ERR(apb_clk))
++		return dev_err_probe(&pdev->dev, PTR_ERR(apb_clk),
++				     "failed to get apb clock\n");
++
+ 	dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL);
+ 	if (!dwmac)
+ 		return -ENOMEM;
+diff --git a/drivers/net/ethernet/ti/icssg/icss_iep.c b/drivers/net/ethernet/ti/icssg/icss_iep.c
+index 2a1c43316f462b..d8c9fe1d98c475 100644
+--- a/drivers/net/ethernet/ti/icssg/icss_iep.c
++++ b/drivers/net/ethernet/ti/icssg/icss_iep.c
+@@ -621,7 +621,8 @@ static int icss_iep_pps_enable(struct icss_iep *iep, int on)
+ 
+ static int icss_iep_extts_enable(struct icss_iep *iep, u32 index, int on)
+ {
+-	u32 val, cap, ret = 0;
++	u32 val, cap;
++	int ret = 0;
+ 
+ 	mutex_lock(&iep->ptp_clk_mutex);
+ 
+@@ -685,10 +686,16 @@ struct icss_iep *icss_iep_get_idx(struct device_node *np, int idx)
+ 	struct platform_device *pdev;
+ 	struct device_node *iep_np;
+ 	struct icss_iep *iep;
++	int ret;
+ 
+ 	iep_np = of_parse_phandle(np, "ti,iep", idx);
+-	if (!iep_np || !of_device_is_available(iep_np))
++	if (!iep_np)
++		return ERR_PTR(-ENODEV);
++
++	if (!of_device_is_available(iep_np)) {
++		of_node_put(iep_np);
+ 		return ERR_PTR(-ENODEV);
++	}
+ 
+ 	pdev = of_find_device_by_node(iep_np);
+ 	of_node_put(iep_np);
+@@ -698,21 +705,28 @@ struct icss_iep *icss_iep_get_idx(struct device_node *np, int idx)
+ 		return ERR_PTR(-EPROBE_DEFER);
+ 
+ 	iep = platform_get_drvdata(pdev);
+-	if (!iep)
+-		return ERR_PTR(-EPROBE_DEFER);
++	if (!iep) {
++		ret = -EPROBE_DEFER;
++		goto err_put_pdev;
++	}
+ 
+ 	device_lock(iep->dev);
+ 	if (iep->client_np) {
+ 		device_unlock(iep->dev);
+ 		dev_err(iep->dev, "IEP is already acquired by %s",
+ 			iep->client_np->name);
+-		return ERR_PTR(-EBUSY);
++		ret = -EBUSY;
++		goto err_put_pdev;
+ 	}
+ 	iep->client_np = np;
+ 	device_unlock(iep->dev);
+-	get_device(iep->dev);
+ 
+ 	return iep;
++
++err_put_pdev:
++	put_device(&pdev->dev);
++
++	return ERR_PTR(ret);
+ }
+ EXPORT_SYMBOL_GPL(icss_iep_get_idx);
+ 
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.c b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+index 2f5c4335dec388..008d7772740078 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+@@ -50,6 +50,8 @@
+ /* CTRLMMR_ICSSG_RGMII_CTRL register bits */
+ #define ICSSG_CTRL_RGMII_ID_MODE                BIT(24)
+ 
++static void emac_adjust_link(struct net_device *ndev);
++
+ static int emac_get_tx_ts(struct prueth_emac *emac,
+ 			  struct emac_tx_ts_response *rsp)
+ {
+@@ -266,6 +268,10 @@ static int prueth_emac_common_start(struct prueth *prueth)
+ 		ret = icssg_config(prueth, emac, slice);
+ 		if (ret)
+ 			goto disable_class;
++
++		mutex_lock(&emac->ndev->phydev->lock);
++		emac_adjust_link(emac->ndev);
++		mutex_unlock(&emac->ndev->phydev->lock);
+ 	}
+ 
+ 	ret = prueth_emac_start(prueth);
+diff --git a/drivers/net/hamradio/bpqether.c b/drivers/net/hamradio/bpqether.c
+index 0e0fe32d2da4e3..045c5177262eaf 100644
+--- a/drivers/net/hamradio/bpqether.c
++++ b/drivers/net/hamradio/bpqether.c
+@@ -138,7 +138,7 @@ static inline struct net_device *bpq_get_ax25_dev(struct net_device *dev)
+ 
+ static inline int dev_is_ethdev(struct net_device *dev)
+ {
+-	return dev->type == ARPHRD_ETHER && strncmp(dev->name, "dummy", 5);
++	return dev->type == ARPHRD_ETHER && !netdev_need_ops_lock(dev);
+ }
+ 
+ /* ------------------------------------------------------------------------ */
+diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
+index cb6f5482d203e1..7397c693f984af 100644
+--- a/drivers/net/hyperv/hyperv_net.h
++++ b/drivers/net/hyperv/hyperv_net.h
+@@ -1061,6 +1061,7 @@ struct net_device_context {
+ 	struct net_device __rcu *vf_netdev;
+ 	struct netvsc_vf_pcpu_stats __percpu *vf_stats;
+ 	struct delayed_work vf_takeover;
++	struct delayed_work vfns_work;
+ 
+ 	/* 1: allocated, serial number is valid. 0: not allocated */
+ 	u32 vf_alloc;
+@@ -1075,6 +1076,8 @@ struct net_device_context {
+ 	struct netvsc_device_info *saved_netvsc_dev_info;
+ };
+ 
++void netvsc_vfns_work(struct work_struct *w);
++
+ /* Azure hosts don't support non-TCP port numbers in hashing for fragmented
+  * packets. We can use ethtool to change UDP hash level when necessary.
+  */
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 8c73f710510202..501c5f71645c17 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2530,6 +2530,7 @@ static int netvsc_probe(struct hv_device *dev,
+ 	spin_lock_init(&net_device_ctx->lock);
+ 	INIT_LIST_HEAD(&net_device_ctx->reconfig_events);
+ 	INIT_DELAYED_WORK(&net_device_ctx->vf_takeover, netvsc_vf_setup);
++	INIT_DELAYED_WORK(&net_device_ctx->vfns_work, netvsc_vfns_work);
+ 
+ 	net_device_ctx->vf_stats
+ 		= netdev_alloc_pcpu_stats(struct netvsc_vf_pcpu_stats);
+@@ -2674,6 +2675,8 @@ static void netvsc_remove(struct hv_device *dev)
+ 	cancel_delayed_work_sync(&ndev_ctx->dwork);
+ 
+ 	rtnl_lock();
++	cancel_delayed_work_sync(&ndev_ctx->vfns_work);
++
+ 	nvdev = rtnl_dereference(ndev_ctx->nvdev);
+ 	if (nvdev) {
+ 		cancel_work_sync(&nvdev->subchan_work);
+@@ -2715,6 +2718,7 @@ static int netvsc_suspend(struct hv_device *dev)
+ 	cancel_delayed_work_sync(&ndev_ctx->dwork);
+ 
+ 	rtnl_lock();
++	cancel_delayed_work_sync(&ndev_ctx->vfns_work);
+ 
+ 	nvdev = rtnl_dereference(ndev_ctx->nvdev);
+ 	if (nvdev == NULL) {
+@@ -2808,6 +2812,27 @@ static void netvsc_event_set_vf_ns(struct net_device *ndev)
+ 	}
+ }
+ 
++void netvsc_vfns_work(struct work_struct *w)
++{
++	struct net_device_context *ndev_ctx =
++		container_of(w, struct net_device_context, vfns_work.work);
++	struct net_device *ndev;
++
++	if (!rtnl_trylock()) {
++		schedule_delayed_work(&ndev_ctx->vfns_work, 1);
++		return;
++	}
++
++	ndev = hv_get_drvdata(ndev_ctx->device_ctx);
++	if (!ndev)
++		goto out;
++
++	netvsc_event_set_vf_ns(ndev);
++
++out:
++	rtnl_unlock();
++}
++
+ /*
+  * On Hyper-V, every VF interface is matched with a corresponding
+  * synthetic interface. The synthetic interface is presented first
+@@ -2818,10 +2843,12 @@ static int netvsc_netdev_event(struct notifier_block *this,
+ 			       unsigned long event, void *ptr)
+ {
+ 	struct net_device *event_dev = netdev_notifier_info_to_dev(ptr);
++	struct net_device_context *ndev_ctx;
+ 	int ret = 0;
+ 
+ 	if (event_dev->netdev_ops == &device_ops && event == NETDEV_REGISTER) {
+-		netvsc_event_set_vf_ns(event_dev);
++		ndev_ctx = netdev_priv(event_dev);
++		schedule_delayed_work(&ndev_ctx->vfns_work, 0);
+ 		return NOTIFY_DONE;
+ 	}
+ 
+diff --git a/drivers/net/pcs/pcs-xpcs-plat.c b/drivers/net/pcs/pcs-xpcs-plat.c
+index 629315f1e57cb3..9dcaf7a66113ed 100644
+--- a/drivers/net/pcs/pcs-xpcs-plat.c
++++ b/drivers/net/pcs/pcs-xpcs-plat.c
+@@ -66,7 +66,7 @@ static int xpcs_mmio_read_reg_indirect(struct dw_xpcs_plat *pxpcs,
+ 	switch (pxpcs->reg_width) {
+ 	case 4:
+ 		writel(page, pxpcs->reg_base + (DW_VR_CSR_VIEWPORT << 2));
+-		ret = readl(pxpcs->reg_base + (ofs << 2));
++		ret = readl(pxpcs->reg_base + (ofs << 2)) & 0xffff;
+ 		break;
+ 	default:
+ 		writew(page, pxpcs->reg_base + (DW_VR_CSR_VIEWPORT << 1));
+@@ -124,7 +124,7 @@ static int xpcs_mmio_read_reg_direct(struct dw_xpcs_plat *pxpcs,
+ 
+ 	switch (pxpcs->reg_width) {
+ 	case 4:
+-		ret = readl(pxpcs->reg_base + (csr << 2));
++		ret = readl(pxpcs->reg_base + (csr << 2)) & 0xffff;
+ 		break;
+ 	default:
+ 		ret = readw(pxpcs->reg_base + (csr << 1));
+diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c
+index 9b1de54fd4835c..f871f11d192130 100644
+--- a/drivers/net/phy/broadcom.c
++++ b/drivers/net/phy/broadcom.c
+@@ -655,7 +655,7 @@ static int bcm5481x_read_abilities(struct phy_device *phydev)
+ {
+ 	struct device_node *np = phydev->mdio.dev.of_node;
+ 	struct bcm54xx_phy_priv *priv = phydev->priv;
+-	int i, val, err;
++	int i, val, err, aneg;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(bcm54811_linkmodes); i++)
+ 		linkmode_clear_bit(bcm54811_linkmodes[i], phydev->supported);
+@@ -676,9 +676,19 @@ static int bcm5481x_read_abilities(struct phy_device *phydev)
+ 		if (val < 0)
+ 			return val;
+ 
++		/* BCM54811 is not capable of LDS but the corresponding bit
++		 * in LRESR is set to 1 and marked "Ignore" in the datasheet.
++		 * So we must report the BCM54811 as unable to auto-negotiate
++		 * in BroadR-Reach mode.
++		 */
++		if (BRCM_PHY_MODEL(phydev) == PHY_ID_BCM54811)
++			aneg = 0;
++		else
++			aneg = val & LRESR_LDSABILITY;
++
+ 		linkmode_mod_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ 				 phydev->supported,
+-				 val & LRESR_LDSABILITY);
++				 aneg);
+ 		linkmode_mod_bit(ETHTOOL_LINK_MODE_100baseT1_Full_BIT,
+ 				 phydev->supported,
+ 				 val & LRESR_100_1PAIR);
+@@ -735,8 +745,15 @@ static int bcm54811_config_aneg(struct phy_device *phydev)
+ 
+ 	/* Aneg firstly. */
+ 	if (priv->brr_mode) {
+-		/* BCM54811 is only capable of autonegotiation in IEEE mode */
+-		phydev->autoneg = 0;
++		/* BCM54811 is only capable of autonegotiation in IEEE mode.
++		 * In BroadR-Reach mode, disable Long Distance Signaling, the
++		 * BRR-mode autoneg supported on other Broadcom PHYs. The
++		 * LDSEN bit is marked as "Reserved" and "Default 1, must be
++		 * written to 0 after every device reset" in the datasheet.
++		 */
++		ret = phy_modify(phydev, MII_BCM54XX_LRECR, LRECR_LDSEN, 0);
++		if (ret < 0)
++			return ret;
+ 		ret = bcm_config_lre_aneg(phydev, false);
+ 	} else {
+ 		ret = genphy_config_aneg(phydev);
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index e2c6569d8c45ca..d2498b31aab6b3 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -472,6 +472,8 @@ static const struct kszphy_type ksz8051_type = {
+ 
+ static const struct kszphy_type ksz8081_type = {
+ 	.led_mode_reg		= MII_KSZPHY_CTRL_2,
++	.cable_diag_reg		= KSZ8081_LMD,
++	.pair_mask		= KSZPHY_WIRE_PAIR_MASK,
+ 	.has_broadcast_disable	= true,
+ 	.has_nand_tree_disable	= true,
+ 	.has_rmii_ref_clk_sel	= true,
+@@ -5408,6 +5410,14 @@ static int lan8841_suspend(struct phy_device *phydev)
+ 	return kszphy_generic_suspend(phydev);
+ }
+ 
++static int ksz9131_resume(struct phy_device *phydev)
++{
++	if (phydev->suspended && phy_interface_is_rgmii(phydev))
++		ksz9131_config_rgmii_delay(phydev);
++
++	return kszphy_resume(phydev);
++}
++
+ static struct phy_driver ksphy_driver[] = {
+ {
+ 	.phy_id		= PHY_ID_KS8737,
+@@ -5654,7 +5664,7 @@ static struct phy_driver ksphy_driver[] = {
+ 	.get_strings	= kszphy_get_strings,
+ 	.get_stats	= kszphy_get_stats,
+ 	.suspend	= kszphy_suspend,
+-	.resume		= kszphy_resume,
++	.resume		= ksz9131_resume,
+ 	.cable_test_start	= ksz9x31_cable_test_start,
+ 	.cable_test_get_status	= ksz9x31_cable_test_get_status,
+ 	.get_features	= ksz9477_get_features,
+diff --git a/drivers/net/phy/smsc.c b/drivers/net/phy/smsc.c
+index b6489da5cfcdfb..48487149c22528 100644
+--- a/drivers/net/phy/smsc.c
++++ b/drivers/net/phy/smsc.c
+@@ -785,6 +785,7 @@ static struct phy_driver smsc_phy_driver[] = {
+ 
+ 	/* PHY_BASIC_FEATURES */
+ 
++	.flags		= PHY_RST_AFTER_CLK_EN,
+ 	.probe		= smsc_phy_probe,
+ 
+ 	/* basic functions */
+diff --git a/drivers/net/thunderbolt/main.c b/drivers/net/thunderbolt/main.c
+index 0a53ec293d0408..dcaa62377808c2 100644
+--- a/drivers/net/thunderbolt/main.c
++++ b/drivers/net/thunderbolt/main.c
+@@ -396,9 +396,9 @@ static void tbnet_tear_down(struct tbnet *net, bool send_logout)
+ 
+ 		ret = tb_xdomain_disable_paths(net->xd,
+ 					       net->local_transmit_path,
+-					       net->rx_ring.ring->hop,
++					       net->tx_ring.ring->hop,
+ 					       net->remote_transmit_path,
+-					       net->tx_ring.ring->hop);
++					       net->rx_ring.ring->hop);
+ 		if (ret)
+ 			netdev_warn(net->dev, "failed to disable DMA paths\n");
+ 
+@@ -662,9 +662,9 @@ static void tbnet_connected_work(struct work_struct *work)
+ 		goto err_free_rx_buffers;
+ 
+ 	ret = tb_xdomain_enable_paths(net->xd, net->local_transmit_path,
+-				      net->rx_ring.ring->hop,
++				      net->tx_ring.ring->hop,
+ 				      net->remote_transmit_path,
+-				      net->tx_ring.ring->hop);
++				      net->rx_ring.ring->hop);
+ 	if (ret) {
+ 		netdev_err(net->dev, "failed to enable DMA paths\n");
+ 		goto err_free_tx_buffers;
+@@ -924,8 +924,12 @@ static int tbnet_open(struct net_device *dev)
+ 
+ 	netif_carrier_off(dev);
+ 
+-	ring = tb_ring_alloc_tx(xd->tb->nhi, -1, TBNET_RING_SIZE,
+-				RING_FLAG_FRAME);
++	flags = RING_FLAG_FRAME;
++	/* Only enable full E2E if the other end supports it too */
++	if (tbnet_e2e && net->svc->prtcstns & TBNET_E2E)
++		flags |= RING_FLAG_E2E;
++
++	ring = tb_ring_alloc_tx(xd->tb->nhi, -1, TBNET_RING_SIZE, flags);
+ 	if (!ring) {
+ 		netdev_err(dev, "failed to allocate Tx ring\n");
+ 		return -ENOMEM;
+@@ -944,11 +948,6 @@ static int tbnet_open(struct net_device *dev)
+ 	sof_mask = BIT(TBIP_PDF_FRAME_START);
+ 	eof_mask = BIT(TBIP_PDF_FRAME_END);
+ 
+-	flags = RING_FLAG_FRAME;
+-	/* Only enable full E2E if the other end supports it too */
+-	if (tbnet_e2e && net->svc->prtcstns & TBNET_E2E)
+-		flags |= RING_FLAG_E2E;
+-
+ 	ring = tb_ring_alloc_rx(xd->tb->nhi, -1, TBNET_RING_SIZE, flags,
+ 				net->tx_ring.ring->hop, sof_mask,
+ 				eof_mask, tbnet_start_poll, net);
+diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c
+index 9b0318fb50b55c..d9f5942ccc447b 100644
+--- a/drivers/net/usb/asix_devices.c
++++ b/drivers/net/usb/asix_devices.c
+@@ -676,6 +676,7 @@ static int ax88772_init_mdio(struct usbnet *dev)
+ 	priv->mdio->read = &asix_mdio_bus_read;
+ 	priv->mdio->write = &asix_mdio_bus_write;
+ 	priv->mdio->name = "Asix MDIO Bus";
++	priv->mdio->phy_mask = ~(BIT(priv->phy_addr) | BIT(AX_EMBD_PHY_ADDR));
+ 	/* mii bus name is usb-<usb bus number>-<usb device number> */
+ 	snprintf(priv->mdio->id, MII_BUS_ID_SIZE, "usb-%03d:%03d",
+ 		 dev->udev->bus->busnum, dev->udev->devnum);
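On an MDIO bus, a set bit in phy_mask tells the scan code to skip that address, so the mask added above restricts probing to the external PHY and the embedded PHY only. A self-contained illustration of the mask arithmetic, with hypothetical addresses:

/* A set bit in phy_mask means "do not probe this MDIO address". */
#include <stdio.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

int main(void)
{
	unsigned int ext_phy_addr = 0x10;  /* hypothetical external PHY */
	unsigned int embd_phy_addr = 0x18; /* hypothetical embedded PHY */
	uint32_t phy_mask = ~(BIT(ext_phy_addr) | BIT(embd_phy_addr));

	/* Exactly two bits are clear: only those addresses get scanned. */
	printf("phy_mask = 0x%08x\n", (unsigned int)phy_mask);
	return 0;
}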
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index 34e82f1e37d964..ea0e5e276cd6d1 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -892,6 +892,10 @@ int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_
+ 		}
+ 	}
+ 
++	if (ctx->func_desc)
++		ctx->filtering_supported = !!(ctx->func_desc->bmNetworkCapabilities
++			& USB_CDC_NCM_NCAP_ETH_FILTER);
++
+ 	iface_no = ctx->data->cur_altsetting->desc.bInterfaceNumber;
+ 
+ 	/* Device-specific flags */
+@@ -1898,6 +1902,14 @@ static void cdc_ncm_status(struct usbnet *dev, struct urb *urb)
+ 	}
+ }
+ 
++static void cdc_ncm_update_filter(struct usbnet *dev)
++{
++	struct cdc_ncm_ctx *ctx = (struct cdc_ncm_ctx *)dev->data[0];
++
++	if (ctx->filtering_supported)
++		usbnet_cdc_update_filter(dev);
++}
++
+ static const struct driver_info cdc_ncm_info = {
+ 	.description = "CDC NCM (NO ZLP)",
+ 	.flags = FLAG_POINTTOPOINT | FLAG_NO_SETINT | FLAG_MULTI_PACKET
+@@ -1908,7 +1920,7 @@ static const struct driver_info cdc_ncm_info = {
+ 	.status = cdc_ncm_status,
+ 	.rx_fixup = cdc_ncm_rx_fixup,
+ 	.tx_fixup = cdc_ncm_tx_fixup,
+-	.set_rx_mode = usbnet_cdc_update_filter,
++	.set_rx_mode = cdc_ncm_update_filter,
+ };
+ 
+ /* Same as cdc_ncm_info, but with FLAG_SEND_ZLP  */
+@@ -1922,7 +1934,7 @@ static const struct driver_info cdc_ncm_zlp_info = {
+ 	.status = cdc_ncm_status,
+ 	.rx_fixup = cdc_ncm_rx_fixup,
+ 	.tx_fixup = cdc_ncm_tx_fixup,
+-	.set_rx_mode = usbnet_cdc_update_filter,
++	.set_rx_mode = cdc_ncm_update_filter,
+ };
+ 
+ /* Same as cdc_ncm_info, but with FLAG_SEND_ZLP */
+@@ -1964,7 +1976,7 @@ static const struct driver_info wwan_info = {
+ 	.status = cdc_ncm_status,
+ 	.rx_fixup = cdc_ncm_rx_fixup,
+ 	.tx_fixup = cdc_ncm_tx_fixup,
+-	.set_rx_mode = usbnet_cdc_update_filter,
++	.set_rx_mode = cdc_ncm_update_filter,
+ };
+ 
+ /* Same as wwan_info, but with FLAG_NOARP  */
+@@ -1978,7 +1990,7 @@ static const struct driver_info wwan_noarp_info = {
+ 	.status = cdc_ncm_status,
+ 	.rx_fixup = cdc_ncm_rx_fixup,
+ 	.tx_fixup = cdc_ncm_tx_fixup,
+-	.set_rx_mode = usbnet_cdc_update_filter,
++	.set_rx_mode = cdc_ncm_update_filter,
+ };
+ 
+ static const struct usb_device_id cdc_devs[] = {
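The new cdc_ncm_update_filter() wrapper only forwards rx-mode changes when the parsed NCM functional descriptor advertises packet-filter support; devices without USB_CDC_NCM_NCAP_ETH_FILTER would otherwise be sent SetEthernetPacketFilter requests they cannot handle. A self-contained sketch of the capability gate (the struct is a stand-in for the parsed descriptor):

#include <stdbool.h>
#include <stdio.h>

#define USB_CDC_NCM_NCAP_ETH_FILTER (1 << 0) /* per the CDC NCM spec */

struct ncm_func_desc {
	unsigned char bmNetworkCapabilities;
};

static bool filtering_supported(const struct ncm_func_desc *desc)
{
	/* Only issue rx-filter control requests when the device
	 * advertises packet-filter support.
	 */
	return desc &&
	       (desc->bmNetworkCapabilities & USB_CDC_NCM_NCAP_ETH_FILTER);
}

int main(void)
{
	struct ncm_func_desc d = { .bmNetworkCapabilities = 0x01 };

	printf("filtering %s\n",
	       filtering_supported(&d) ? "supported" : "skipped");
	return 0;
}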
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index f5647ee0addec2..e56901bb6ebc43 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1361,6 +1361,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1057, 2)},	/* Telit FN980 */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1060, 2)},	/* Telit LN920 */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1070, 2)},	/* Telit FN990A */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1077, 2)},	/* Telit FN990A w/audio */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1080, 2)}, /* Telit FE990A */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x10a0, 0)}, /* Telit FN920C04 */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x10a4, 0)}, /* Telit FN920C04 */
+diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
+index 995a7207bdf871..f357a7ac70ac47 100644
+--- a/drivers/net/wan/lapbether.c
++++ b/drivers/net/wan/lapbether.c
+@@ -81,7 +81,7 @@ static struct lapbethdev *lapbeth_get_x25_dev(struct net_device *dev)
+ 
+ static __inline__ int dev_is_ethdev(struct net_device *dev)
+ {
+-	return dev->type == ARPHRD_ETHER && strncmp(dev->name, "dummy", 5);
++	return dev->type == ARPHRD_ETHER && !netdev_need_ops_lock(dev);
+ }
+ 
+ /* ------------------------------------------------------------------------ */
+diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
+index 6d336e39d6738b..1adc35e37f401b 100644
+--- a/drivers/net/wireless/ath/ath10k/core.c
++++ b/drivers/net/wireless/ath/ath10k/core.c
+@@ -2491,12 +2491,50 @@ static int ath10k_init_hw_params(struct ath10k *ar)
+ 	return 0;
+ }
+ 
++static bool ath10k_core_needs_recovery(struct ath10k *ar)
++{
++	long time_left;
++
++	/* Sometimes a recovery attempt fails and every subsequent attempt
++	 * fails as well, so bail out to avoid looping in recovery forever.
++	 */
++	if (atomic_read(&ar->fail_cont_count) >= ATH10K_RECOVERY_MAX_FAIL_COUNT) {
++		ath10k_err(ar, "consecutive fail %d times, will shutdown driver!",
++			   atomic_read(&ar->fail_cont_count));
++		ar->state = ATH10K_STATE_WEDGED;
++		return false;
++	}
++
++	ath10k_dbg(ar, ATH10K_DBG_BOOT, "total recovery count: %d", ++ar->recovery_count);
++
++	if (atomic_read(&ar->pending_recovery)) {
++		/* Another recovery work can be queued before the previous one
++		 * has completed; the second one would then destroy the first,
++		 * so wait here to avoid that.
++		 */
++		time_left = wait_for_completion_timeout(&ar->driver_recovery,
++							ATH10K_RECOVERY_TIMEOUT_HZ);
++		if (time_left) {
++			ath10k_warn(ar, "previous recovery succeeded, skip this!\n");
++			return false;
++		}
++
++		/* The previous recovery failed; bump the consecutive failure count. */
++		atomic_inc(&ar->fail_cont_count);
++
++		/* Avoid having multiple recoveries at the same time. */
++		return false;
++	}
++
++	atomic_inc(&ar->pending_recovery);
++
++	return true;
++}
++
+ void ath10k_core_start_recovery(struct ath10k *ar)
+ {
+-	if (test_and_set_bit(ATH10K_FLAG_RESTARTING, &ar->dev_flags)) {
+-		ath10k_warn(ar, "already restarting\n");
++	if (!ath10k_core_needs_recovery(ar))
+ 		return;
+-	}
+ 
+ 	queue_work(ar->workqueue, &ar->restart_work);
+ }
+@@ -2532,6 +2570,8 @@ static void ath10k_core_restart(struct work_struct *work)
+ 	struct ath10k *ar = container_of(work, struct ath10k, restart_work);
+ 	int ret;
+ 
++	reinit_completion(&ar->driver_recovery);
++
+ 	set_bit(ATH10K_FLAG_CRASH_FLUSH, &ar->dev_flags);
+ 
+ 	/* Place a barrier to make sure the compiler doesn't reorder
+@@ -2596,8 +2636,6 @@ static void ath10k_core_restart(struct work_struct *work)
+ 	if (ret)
+ 		ath10k_warn(ar, "failed to send firmware crash dump via devcoredump: %d",
+ 			    ret);
+-
+-	complete(&ar->driver_recovery);
+ }
+ 
+ static void ath10k_core_set_coverage_class_work(struct work_struct *work)
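The recovery rework replaces the single ATH10K_FLAG_RESTARTING bit with a small gate: an atomically taken pending flag, a completion that only fires when recovery finishes, and a consecutive-failure counter that wedges the device after ATH10K_RECOVERY_MAX_FAIL_COUNT attempts. A single-threaded toy model of that gate (kernel atomics and completions are reduced to plain ints purely for illustration, and the failure count is bumped on completion rather than on the next request):

#include <stdbool.h>
#include <stdio.h>

#define MAX_FAIL 4

static int pending_recovery;
static int fail_cont_count;
static bool wedged;

static bool needs_recovery(void)
{
	if (fail_cont_count >= MAX_FAIL) {
		wedged = true;  /* too many consecutive failures: give up */
		return false;
	}
	if (pending_recovery)
		return false;   /* never run two recoveries at once */
	pending_recovery = 1;
	return true;
}

static void recovery_done(bool ok)
{
	pending_recovery = 0;
	fail_cont_count = ok ? 0 : fail_cont_count + 1;
}

int main(void)
{
	for (int i = 0; i < 6 && !wedged; i++)
		if (needs_recovery())
			recovery_done(false); /* simulate failing recoveries */

	printf("wedged=%d fails=%d\n", wedged, fail_cont_count);
	return 0;
}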
+diff --git a/drivers/net/wireless/ath/ath10k/core.h b/drivers/net/wireless/ath/ath10k/core.h
+index 446dca74f06a63..85e16c945b5c20 100644
+--- a/drivers/net/wireless/ath/ath10k/core.h
++++ b/drivers/net/wireless/ath/ath10k/core.h
+@@ -4,6 +4,7 @@
+  * Copyright (c) 2011-2017 Qualcomm Atheros, Inc.
+  * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
+  * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+  */
+ 
+ #ifndef _CORE_H_
+@@ -87,6 +88,8 @@
+ 				  IEEE80211_IFACE_SKIP_SDATA_NOT_IN_DRIVER)
+ #define ATH10K_ITER_RESUME_FLAGS (IEEE80211_IFACE_ITER_RESUME_ALL |\
+ 				  IEEE80211_IFACE_SKIP_SDATA_NOT_IN_DRIVER)
++#define ATH10K_RECOVERY_TIMEOUT_HZ			(5 * HZ)
++#define ATH10K_RECOVERY_MAX_FAIL_COUNT			4
+ 
+ struct ath10k;
+ 
+@@ -865,9 +868,6 @@ enum ath10k_dev_flags {
+ 	/* Per Station statistics service */
+ 	ATH10K_FLAG_PEER_STATS,
+ 
+-	/* Indicates that ath10k device is during recovery process and not complete */
+-	ATH10K_FLAG_RESTARTING,
+-
+ 	/* protected by conf_mutex */
+ 	ATH10K_FLAG_NAPI_ENABLED,
+ };
+@@ -1211,6 +1211,11 @@ struct ath10k {
+ 	struct work_struct bundle_tx_work;
+ 	struct work_struct tx_complete_work;
+ 
++	atomic_t pending_recovery;
++	unsigned int recovery_count;
++	/* consecutive recovery failure count */
++	atomic_t fail_cont_count;
++
+ 	/* cycle count is reported twice for each visited channel during scan.
+ 	 * access protected by data_lock
+ 	 */
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index c61b95a928dac1..0a53ec55b46bec 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -8139,7 +8139,12 @@ static void ath10k_reconfig_complete(struct ieee80211_hw *hw,
+ 		ath10k_info(ar, "device successfully recovered\n");
+ 		ar->state = ATH10K_STATE_ON;
+ 		ieee80211_wake_queues(ar->hw);
+-		clear_bit(ATH10K_FLAG_RESTARTING, &ar->dev_flags);
++
++		/* Clear recovery state. */
++		complete(&ar->driver_recovery);
++		atomic_set(&ar->fail_cont_count, 0);
++		atomic_set(&ar->pending_recovery, 0);
++
+ 		if (ar->hw_params.hw_restart_disconnect) {
+ 			list_for_each_entry(arvif, &ar->arvifs, list) {
+ 				if (arvif->is_up && arvif->vdev_type == WMI_VDEV_TYPE_STA)
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index 5e061f7525a6bd..09066e6aca4025 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -4,6 +4,7 @@
+  * Copyright (c) 2011-2017 Qualcomm Atheros, Inc.
+  * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
+  * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+  */
+ 
+ #include <linux/skbuff.h>
+@@ -1941,6 +1942,11 @@ int ath10k_wmi_cmd_send(struct ath10k *ar, struct sk_buff *skb, u32 cmd_id)
+ 	}
+ 
+ 	wait_event_timeout(ar->wmi.tx_credits_wq, ({
++		if (ar->state == ATH10K_STATE_WEDGED) {
++			ret = -ESHUTDOWN;
++			ath10k_dbg(ar, ATH10K_DBG_WMI,
++				   "drop wmi command %d, hardware is wedged\n", cmd_id);
++		}
+ 		/* try to send pending beacons first. they take priority */
+ 		ath10k_wmi_tx_beacons_nowait(ar);
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/core.h b/drivers/net/wireless/ath/ath12k/core.h
+index f5f1ec796f7c55..4cff5e42eb34d2 100644
+--- a/drivers/net/wireless/ath/ath12k/core.h
++++ b/drivers/net/wireless/ath/ath12k/core.h
+@@ -303,6 +303,10 @@ struct ath12k_link_vif {
+ 	struct ath12k_rekey_data rekey_data;
+ 
+ 	u8 current_cntdown_counter;
++
++	bool group_key_valid;
++	struct wmi_vdev_install_key_arg group_key;
++	bool pairwise_key_done;
+ };
+ 
+ struct ath12k_vif {
+diff --git a/drivers/net/wireless/ath/ath12k/dp.c b/drivers/net/wireless/ath/ath12k/dp.c
+index 34e1bd2934ce3d..a53e4ebdcbfcbd 100644
+--- a/drivers/net/wireless/ath/ath12k/dp.c
++++ b/drivers/net/wireless/ath/ath12k/dp.c
+@@ -84,6 +84,7 @@ int ath12k_dp_peer_setup(struct ath12k *ar, int vdev_id, const u8 *addr)
+ 	ret = ath12k_dp_rx_peer_frag_setup(ar, addr, vdev_id);
+ 	if (ret) {
+ 		ath12k_warn(ab, "failed to setup rx defrag context\n");
++		tid--;
+ 		goto peer_clean;
+ 	}
+ 
+@@ -101,7 +102,7 @@ int ath12k_dp_peer_setup(struct ath12k *ar, int vdev_id, const u8 *addr)
+ 		return -ENOENT;
+ 	}
+ 
+-	for (; tid >= 0; tid--)
++	for (tid--; tid >= 0; tid--)
+ 		ath12k_dp_rx_peer_tid_delete(ar, peer, tid);
+ 
+ 	spin_unlock_bh(&ab->base_lock);
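The two hunks above fix a classic partial-unwind off-by-one: when setup fails at index tid, only entries 0..tid-1 were fully initialized, so the teardown loop must start at tid - 1 (hence the tid-- on the frag-setup error path and the new loop header). A self-contained sketch of the corrected pattern:

#include <stdio.h>

#define N 8

static int setup_one(int i)     { return i == 5 ? -1 : 0; } /* fail at 5 */
static void teardown_one(int i) { printf("teardown %d\n", i); }

int main(void)
{
	int i;

	for (i = 0; i < N; i++) {
		if (setup_one(i) < 0)
			goto unwind;
	}
	return 0;

unwind:
	/* Entry i failed part-way; only entries 0..i-1 are complete. */
	for (i--; i >= 0; i--)
		teardown_one(i);
	return 1;
}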
+diff --git a/drivers/net/wireless/ath/ath12k/hw.c b/drivers/net/wireless/ath/ath12k/hw.c
+index a5fa3b6a831ae4..e0e766a0b6d63c 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.c
++++ b/drivers/net/wireless/ath/ath12k/hw.c
+@@ -1090,7 +1090,7 @@ static const struct ath12k_hw_params ath12k_hw_params[] = {
+ 		.download_calib = true,
+ 		.supports_suspend = false,
+ 		.tcl_ring_retry = true,
+-		.reoq_lut_support = false,
++		.reoq_lut_support = true,
+ 		.supports_shadow_regs = false,
+ 
+ 		.num_tcl_banks = 48,
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index 029376c574967a..21558e8286eae8 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -4645,14 +4645,13 @@ static int ath12k_install_key(struct ath12k_link_vif *arvif,
+ 		.key_len = key->keylen,
+ 		.key_data = key->key,
+ 		.key_flags = flags,
++		.ieee80211_key_cipher = key->cipher,
+ 		.macaddr = macaddr,
+ 	};
+ 	struct ath12k_vif *ahvif = arvif->ahvif;
+ 
+ 	lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy);
+ 
+-	reinit_completion(&ar->install_key_done);
+-
+ 	if (test_bit(ATH12K_FLAG_HW_CRYPTO_DISABLED, &ar->ab->dev_flags))
+ 		return 0;
+ 
+@@ -4661,7 +4660,7 @@ static int ath12k_install_key(struct ath12k_link_vif *arvif,
+ 		/* arg.key_cipher = WMI_CIPHER_NONE; */
+ 		arg.key_len = 0;
+ 		arg.key_data = NULL;
+-		goto install;
++		goto check_order;
+ 	}
+ 
+ 	switch (key->cipher) {
+@@ -4689,19 +4688,82 @@ static int ath12k_install_key(struct ath12k_link_vif *arvif,
+ 		key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV |
+ 			      IEEE80211_KEY_FLAG_RESERVE_TAILROOM;
+ 
++check_order:
++	if (ahvif->vdev_type == WMI_VDEV_TYPE_STA &&
++	    arg.key_flags == WMI_KEY_GROUP) {
++		if (cmd == SET_KEY) {
++			if (arvif->pairwise_key_done) {
++				ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
++					   "vdev %u pairwise key done, go install group key\n",
++					   arg.vdev_id);
++				goto install;
++			} else {
++				/* WCN7850 firmware requires pairwise key to be installed
++				 * before group key. In case group key comes first, cache
++				 * it and return. Will revisit it once pairwise key gets
++				 * installed.
++				 */
++				arvif->group_key = arg;
++				arvif->group_key_valid = true;
++				ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
++					   "vdev %u group key before pairwise key, cache and skip\n",
++					   arg.vdev_id);
++
++				ret = 0;
++				goto out;
++			}
++		} else {
++			arvif->group_key_valid = false;
++		}
++	}
++
+ install:
+-	ret = ath12k_wmi_vdev_install_key(arvif->ar, &arg);
++	reinit_completion(&ar->install_key_done);
+ 
++	ret = ath12k_wmi_vdev_install_key(arvif->ar, &arg);
+ 	if (ret)
+ 		return ret;
+ 
+ 	if (!wait_for_completion_timeout(&ar->install_key_done, 1 * HZ))
+ 		return -ETIMEDOUT;
+ 
+-	if (ether_addr_equal(macaddr, arvif->bssid))
+-		ahvif->key_cipher = key->cipher;
++	if (ether_addr_equal(arg.macaddr, arvif->bssid))
++		ahvif->key_cipher = arg.ieee80211_key_cipher;
++
++	if (ar->install_key_status) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	if (ahvif->vdev_type == WMI_VDEV_TYPE_STA &&
++	    arg.key_flags == WMI_KEY_PAIRWISE) {
++		if (cmd == SET_KEY) {
++			arvif->pairwise_key_done = true;
++			if (arvif->group_key_valid) {
++				/* Install cached GTK */
++				arvif->group_key_valid = false;
++				arg = arvif->group_key;
++				ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
++					   "vdev %u pairwise key done, group key ready, go install\n",
++					   arg.vdev_id);
++				goto install;
++			}
++		} else {
++			arvif->pairwise_key_done = false;
++		}
++	}
++
++out:
++	if (ret) {
++		/* In case of failure, userspace may not issue DISABLE_KEY
++		 * but trigger re-connection directly, so manually reset the
++		 * state here.
++		 */
++		arvif->group_key_valid = false;
++		arvif->pairwise_key_done = false;
++	}
+ 
+-	return ar->install_key_status ? -EINVAL : 0;
++	return ret;
+ }
+ 
+ static int ath12k_clear_peer_keys(struct ath12k_link_vif *arvif,
+@@ -11337,6 +11399,7 @@ static int ath12k_mac_hw_register(struct ath12k_hw *ah)
+ 
+ 	wiphy->mbssid_max_interfaces = mbssid_max_interfaces;
+ 	wiphy->ema_max_profile_periodicity = TARGET_EMA_MAX_PROFILE_PERIOD;
++	ieee80211_hw_set(hw, SUPPORTS_MULTI_BSSID);
+ 
+ 	if (is_6ghz) {
+ 		wiphy_ext_feature_set(wiphy,
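The key-ordering change reduces to a small state machine: a group key that arrives before the pairwise key is parked in arvif->group_key, and a successful pairwise install flushes the parked key; any failure resets both flags so a userspace-driven reconnect starts clean. A self-contained model of the ordering (install_hw() stands in for the firmware command):

#include <stdbool.h>
#include <stdio.h>

enum key_type { PAIRWISE, GROUP };

static bool pairwise_done;
static bool group_cached;

static void install_hw(enum key_type t)
{
	printf("install %s key\n", t == PAIRWISE ? "pairwise" : "group");
}

static void set_key(enum key_type t)
{
	if (t == GROUP && !pairwise_done) {
		group_cached = true;  /* park the GTK for later */
		return;
	}
	install_hw(t);
	if (t == PAIRWISE) {
		pairwise_done = true;
		if (group_cached) {   /* flush the parked GTK */
			group_cached = false;
			install_hw(GROUP);
		}
	}
}

int main(void)
{
	set_key(GROUP);    /* arrives first: cached, not installed */
	set_key(PAIRWISE); /* installs pairwise, then the cached group key */
	return 0;
}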
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index 9ebe4b573f7e30..3a39a1be3f7cf5 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -5465,6 +5465,11 @@ static int wmi_process_mgmt_tx_comp(struct ath12k *ar, u32 desc_id,
+ 	dma_unmap_single(ar->ab->dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE);
+ 
+ 	info = IEEE80211_SKB_CB(msdu);
++	memset(&info->status, 0, sizeof(info->status));
++
++	/* skip tx rate update from ieee80211_status */
++	info->status.rates[0].idx = -1;
++
+ 	if ((!(info->flags & IEEE80211_TX_CTL_NO_ACK)) && !status)
+ 		info->flags |= IEEE80211_TX_STAT_ACK;
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.h b/drivers/net/wireless/ath/ath12k/wmi.h
+index bd7312f3cf24aa..d0cc697a418e10 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.h
++++ b/drivers/net/wireless/ath/ath12k/wmi.h
+@@ -3725,6 +3725,7 @@ struct wmi_vdev_install_key_arg {
+ 	u32 key_idx;
+ 	u32 key_flags;
+ 	u32 key_cipher;
++	u32 ieee80211_key_cipher;
+ 	u32 key_len;
+ 	u32 key_txmic_len;
+ 	u32 key_rxmic_len;
+diff --git a/drivers/net/wireless/intel/iwlegacy/4965-mac.c b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+index dc8c408902e6a6..4d2148147b947e 100644
+--- a/drivers/net/wireless/intel/iwlegacy/4965-mac.c
++++ b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+@@ -1575,8 +1575,11 @@ il4965_tx_cmd_build_rate(struct il_priv *il,
+ 	    || rate_idx > RATE_COUNT_LEGACY)
+ 		rate_idx = rate_lowest_index(&il->bands[info->band], sta);
+ 	/* For 5 GHZ band, remap mac80211 rate indices into driver indices */
+-	if (info->band == NL80211_BAND_5GHZ)
++	if (info->band == NL80211_BAND_5GHZ) {
+ 		rate_idx += IL_FIRST_OFDM_RATE;
++		if (rate_idx > IL_LAST_OFDM_RATE)
++			rate_idx = IL_LAST_OFDM_RATE;
++	}
+ 	/* Get PLCP rate for tx_cmd->rate_n_flags */
+ 	rate_plcp = il_rates[rate_idx].plcp;
+ 	/* Zero out flags for this packet */
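After remapping 5 GHz indices by IL_FIRST_OFDM_RATE, the result could walk past the end of the legacy rate table; clamping to IL_LAST_OFDM_RATE keeps the il_rates[rate_idx] lookup below in bounds. A self-contained demo of the remap-then-clamp step (table size and constants are hypothetical):

#include <stdio.h>

#define FIRST_OFDM 4
#define LAST_OFDM  11

static const int plcp[LAST_OFDM + 1]; /* stand-in rate table */

int main(void)
{
	int rate_idx = 9;       /* mac80211 index on 5 GHz */

	rate_idx += FIRST_OFDM; /* remap into driver indices: 13 */
	if (rate_idx > LAST_OFDM)
		rate_idx = LAST_OFDM; /* clamp: no out-of-bounds read */

	printf("idx=%d plcp=%d\n", rate_idx, plcp[rate_idx]);
	return 0;
}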
+diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/rs.c b/drivers/net/wireless/intel/iwlwifi/dvm/rs.c
+index 8879e668ef0da0..ed964103281ed5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/dvm/rs.c
++++ b/drivers/net/wireless/intel/iwlwifi/dvm/rs.c
+@@ -2899,7 +2899,7 @@ static void rs_fill_link_cmd(struct iwl_priv *priv,
+ 		/* Repeat initial/next rate.
+ 		 * For legacy IWL_NUMBER_TRY == 1, this loop will not execute.
+ 		 * For HT IWL_HT_NUMBER_TRY == 3, this executes twice. */
+-		while (repeat_rate > 0 && (index < LINK_QUAL_MAX_RETRY_NUM)) {
++		while (repeat_rate > 0 && index < (LINK_QUAL_MAX_RETRY_NUM - 1)) {
+ 			if (is_legacy(tbl_type.lq_type)) {
+ 				if (ant_toggle_cnt < NUM_TRY_BEFORE_ANT_TOGGLE)
+ 					ant_toggle_cnt++;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+index 03f639fbf9b610..3df15ff3e9d256 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+@@ -3006,6 +3006,7 @@ int iwl_fw_dbg_collect(struct iwl_fw_runtime *fwrt,
+ 	struct iwl_fw_dump_desc *desc;
+ 	unsigned int delay = 0;
+ 	bool monitor_only = false;
++	int ret;
+ 
+ 	if (trigger) {
+ 		u16 occurrences = le16_to_cpu(trigger->occurrences) - 1;
+@@ -3036,7 +3037,11 @@ int iwl_fw_dbg_collect(struct iwl_fw_runtime *fwrt,
+ 	desc->trig_desc.type = cpu_to_le32(trig);
+ 	memcpy(desc->trig_desc.data, str, len);
+ 
+-	return iwl_fw_dbg_collect_desc(fwrt, desc, monitor_only, delay);
++	ret = iwl_fw_dbg_collect_desc(fwrt, desc, monitor_only, delay);
++	if (ret)
++		kfree(desc);
++
++	return ret;
+ }
+ IWL_EXPORT_SYMBOL(iwl_fw_dbg_collect);
+ 
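The leak class fixed here is "ownership transfers only on success": iwl_fw_dbg_collect_desc() takes the descriptor when it returns 0, so on any error the caller must free it. A self-contained sketch of the pattern:

#include <stdlib.h>

struct desc { int type; };

static int consume(struct desc *d)
{
	if (!d->type)
		return -1; /* rejected: caller keeps ownership */
	free(d);           /* accepted: ownership transferred */
	return 0;
}

static int collect(int type)
{
	struct desc *d = malloc(sizeof(*d));
	int ret;

	if (!d)
		return -1;
	d->type = type;

	ret = consume(d);
	if (ret)
		free(d); /* callee rejected it; don't leak */

	return ret;
}

int main(void)
{
	return collect(1); /* both collect(1) and collect(0) are leak-free */
}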
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/agg.c b/drivers/net/wireless/intel/iwlwifi/mld/agg.c
+index 687a9450ac9840..aab11c61a82d69 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/agg.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/agg.c
+@@ -303,10 +303,15 @@ iwl_mld_reorder(struct iwl_mld *mld, struct napi_struct *napi,
+ 	 * already ahead and it will be dropped.
+ 	 * If the last sub-frame is not on this queue - we will get frame
+ 	 * release notification with up to date NSSN.
++	 * If this is the first frame that is stored in the buffer, the head_sn
++	 * may be outdated. Update it based on the last NSSN to make sure it
++	 * will be released when the frame release notification arrives.
+ 	 */
+ 	if (!amsdu || last_subframe)
+ 		iwl_mld_reorder_release_frames(mld, sta, napi, baid_data,
+ 					       buffer, nssn);
++	else if (buffer->num_stored == 1)
++		buffer->head_sn = nssn;
+ 
+ 	return IWL_MLD_BUFFERED_SKB;
+ }
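The same two-line fix appears again for iwlwifi/mvm/rxmq.c below: when the very first frame lands in the reorder buffer, head_sn may still point at a sequence number from long ago, so it is fast-forwarded to the notification's NSSN; otherwise the eventual frame-release notification could never release the buffered frame. A toy model of the fast-forward:

#include <stdio.h>

struct reorder_buf {
	unsigned int head_sn;
	unsigned int num_stored;
};

static void store_frame(struct reorder_buf *b, unsigned int nssn)
{
	b->num_stored++;
	/* First buffered frame: head_sn may be stale, so align it
	 * with the latest NSSN (the fix above).
	 */
	if (b->num_stored == 1)
		b->head_sn = nssn;
}

static void release_up_to(struct reorder_buf *b, unsigned int nssn)
{
	while (b->num_stored && b->head_sn != nssn) {
		b->head_sn++;    /* release the frame at head_sn */
		b->num_stored--; /* simplified bookkeeping */
	}
}

int main(void)
{
	struct reorder_buf b = { .head_sn = 3, .num_stored = 0 };

	store_frame(&b, 100);   /* head_sn jumps 3 -> 100 */
	release_up_to(&b, 102); /* the frame can now be released */
	printf("head_sn=%u stored=%u\n", b.head_sn, b.num_stored);
	return 0;
}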
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/iface.h b/drivers/net/wireless/intel/iwlwifi/mld/iface.h
+index ec14d0736cee6e..586bfed450c5ac 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/iface.h
++++ b/drivers/net/wireless/intel/iwlwifi/mld/iface.h
+@@ -84,6 +84,8 @@ enum iwl_mld_emlsr_exit {
+  * @last_exit_reason: Reason for the last EMLSR exit
+  * @last_exit_ts: Time of the last EMLSR exit (if @last_exit_reason is non-zero)
+  * @exit_repeat_count: Number of times EMLSR was exited for the same reason
++ * @last_entry_ts: the time of the last EMLSR entry (if iwl_mld_emlsr_active()
++ *	is true)
+  * @unblock_tpt_wk: Unblock EMLSR because the throughput limit was reached
+  * @check_tpt_wk: a worker to check if IWL_MLD_EMLSR_BLOCKED_TPT should be
+  *	added, for example if there is no longer enough traffic.
+@@ -102,6 +104,7 @@ struct iwl_mld_emlsr {
+ 		enum iwl_mld_emlsr_exit last_exit_reason;
+ 		unsigned long last_exit_ts;
+ 		u8 exit_repeat_count;
++		unsigned long last_entry_ts;
+ 	);
+ 
+ 	struct wiphy_work unblock_tpt_wk;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/link.c b/drivers/net/wireless/intel/iwlwifi/mld/link.c
+index 82a4979a3af3c9..5f7628c65e3cf0 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/link.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/link.c
+@@ -863,21 +863,23 @@ void iwl_mld_handle_missed_beacon_notif(struct iwl_mld *mld,
+ {
+ 	const struct iwl_missed_beacons_notif *notif = (const void *)pkt->data;
+ 	union iwl_dbg_tlv_tp_data tp_data = { .fw_pkt = pkt };
+-	u32 link_id = le32_to_cpu(notif->link_id);
++	u32 fw_link_id = le32_to_cpu(notif->link_id);
+ 	u32 missed_bcon = le32_to_cpu(notif->consec_missed_beacons);
+ 	u32 missed_bcon_since_rx =
+ 		le32_to_cpu(notif->consec_missed_beacons_since_last_rx);
+ 	u32 scnd_lnk_bcn_lost =
+ 		le32_to_cpu(notif->consec_missed_beacons_other_link);
+ 	struct ieee80211_bss_conf *link_conf =
+-		iwl_mld_fw_id_to_link_conf(mld, link_id);
++		iwl_mld_fw_id_to_link_conf(mld, fw_link_id);
+ 	u32 bss_param_ch_cnt_link_id;
+ 	struct ieee80211_vif *vif;
++	u8 link_id;
+ 
+ 	if (WARN_ON(!link_conf))
+ 		return;
+ 
+ 	vif = link_conf->vif;
++	link_id = link_conf->link_id;
+ 	bss_param_ch_cnt_link_id = link_conf->bss_param_ch_cnt_link_id;
+ 
+ 	IWL_DEBUG_INFO(mld,
+@@ -889,7 +891,7 @@ void iwl_mld_handle_missed_beacon_notif(struct iwl_mld *mld,
+ 
+ 	mld->trans->dbg.dump_file_name_ext_valid = true;
+ 	snprintf(mld->trans->dbg.dump_file_name_ext, IWL_FW_INI_MAX_NAME,
+-		 "LinkId_%d_MacType_%d", link_id,
++		 "LinkId_%d_MacType_%d", fw_link_id,
+ 		 iwl_mld_mac80211_iftype_to_fw(vif));
+ 
+ 	iwl_dbg_tlv_time_point(&mld->fwrt,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mld/mac80211.c
+index 2d5233dc3e2423..8a020d161f4ad2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/mac80211.c
+@@ -995,6 +995,7 @@ int iwl_mld_assign_vif_chanctx(struct ieee80211_hw *hw,
+ 
+ 		/* Indicate to mac80211 that EML is enabled */
+ 		vif->driver_flags |= IEEE80211_VIF_EML_ACTIVE;
++		mld_vif->emlsr.last_entry_ts = jiffies;
+ 
+ 		if (vif->active_links & BIT(mld_vif->emlsr.selected_links))
+ 			mld_vif->emlsr.primary = mld_vif->emlsr.selected_primary;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/mlo.c b/drivers/net/wireless/intel/iwlwifi/mld/mlo.c
+index a870e169e2655c..962a27e8d791ce 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/mlo.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/mlo.c
+@@ -508,10 +508,12 @@ void iwl_mld_emlsr_check_tpt(struct wiphy *wiphy, struct wiphy_work *wk)
+ 	/*
+ 	 * TPT is unblocked, need to check if the TPT criteria is still met.
+ 	 *
+-	 * If EMLSR is active, then we also need to check the secondar link
+-	 * requirements.
++	 * If EMLSR is active for at least 5 seconds, then we also
++	 * need to check the secondary link requirements.
+ 	 */
+-	if (iwl_mld_emlsr_active(vif)) {
++	if (iwl_mld_emlsr_active(vif) &&
++	    time_is_before_jiffies(mld_vif->emlsr.last_entry_ts +
++				   IWL_MLD_TPT_COUNT_WINDOW)) {
+ 		sec_link_id = iwl_mld_get_other_link(vif, iwl_mld_get_primary_link(vif));
+ 		sec_link = iwl_mld_link_dereference_check(mld_vif, sec_link_id);
+ 		if (WARN_ON_ONCE(!sec_link))
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/scan.c b/drivers/net/wireless/intel/iwlwifi/mld/scan.c
+index 7ec04318ec2fcf..13b9ae18dd7c9f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/scan.c
+@@ -359,7 +359,7 @@ iwl_mld_scan_fits(struct iwl_mld *mld, int n_ssids,
+ 		  struct ieee80211_scan_ies *ies, int n_channels)
+ {
+ 	return ((n_ssids <= PROBE_OPTION_MAX) &&
+-		(n_channels <= mld->fw->ucode_capa.n_scan_channels) &
++		(n_channels <= mld->fw->ucode_capa.n_scan_channels) &&
+ 		(ies->common_ie_len + ies->len[NL80211_BAND_2GHZ] +
+ 		 ies->len[NL80211_BAND_5GHZ] + ies->len[NL80211_BAND_6GHZ] <=
+ 		 iwl_mld_scan_max_template_size()));
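This hunk (and its twin in mvm/scan.c below) replaces a bitwise & between boolean-valued comparisons with logical &&. With strictly 0/1 operands the two happen to agree, but & does not short-circuit and gives the wrong truth value as soon as an operand is any other non-zero value:

#include <stdio.h>

int main(void)
{
	int a = 2, b = 1;

	printf("a && b = %d\n", a && b); /* 1: logical AND of truth values */
	printf("a & b  = %d\n", a & b);  /* 0: bitwise AND of 2 and 1 */
	return 0;
}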
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/scan.h b/drivers/net/wireless/intel/iwlwifi/mld/scan.h
+index 3ae940d55065c5..4044cac3f086bd 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/scan.h
++++ b/drivers/net/wireless/intel/iwlwifi/mld/scan.h
+@@ -130,7 +130,7 @@ struct iwl_mld_scan {
+ 	void *cmd;
+ 	unsigned long last_6ghz_passive_jiffies;
+ 	unsigned long last_start_time_jiffies;
+-	unsigned long last_mlo_scan_time;
++	u64 last_mlo_scan_time;
+ };
+ 
+ #endif /* __iwl_mld_scan_h__ */
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+index 3e8b7168af01ce..a30ef33525ec30 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+@@ -2387,6 +2387,7 @@ static void iwl_mvm_convert_gtk_v2(struct iwl_wowlan_status_data *status,
+ 
+ 	status->gtk[0].len = data->key_len;
+ 	status->gtk[0].flags = data->key_flags;
++	status->gtk[0].id = status->gtk[0].flags & IWL_WOWLAN_GTK_IDX_MASK;
+ 
+ 	memcpy(status->gtk[0].key, data->key, sizeof(data->key));
+ 
+@@ -2737,6 +2738,7 @@ iwl_mvm_send_wowlan_get_status(struct iwl_mvm *mvm, u8 sta_id)
+ 		 * currently used key.
+ 		 */
+ 		status->gtk[0].flags = v6->gtk.key_index | BIT(7);
++		status->gtk[0].id = v6->gtk.key_index;
+ 	} else if (notif_ver == 7) {
+ 		struct iwl_wowlan_status_v7 *v7 = (void *)cmd.resp_pkt->data;
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+index 14ea89f931bbf3..ff59a322fbcb9c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+@@ -854,10 +854,15 @@ static bool iwl_mvm_reorder(struct iwl_mvm *mvm,
+ 	 * already ahead and it will be dropped.
+ 	 * If the last sub-frame is not on this queue - we will get frame
+ 	 * release notification with up to date NSSN.
++	 * If this is the first frame that is stored in the buffer, the head_sn
++	 * may be outdated. Update it based on the last NSSN to make sure it
++	 * will be released when the frame release notification arrives.
+ 	 */
+ 	if (!amsdu || last_subframe)
+ 		iwl_mvm_release_frames(mvm, sta, napi, baid_data,
+ 				       buffer, nssn);
++	else if (buffer->num_stored == 1)
++		buffer->head_sn = nssn;
+ 
+ 	spin_unlock_bh(&buffer->lock);
+ 	return true;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index 60bd9c7e5f03d8..3dbda1e4a522b2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -835,7 +835,7 @@ static inline bool iwl_mvm_scan_fits(struct iwl_mvm *mvm, int n_ssids,
+ 				     int n_channels)
+ {
+ 	return ((n_ssids <= PROBE_OPTION_MAX) &&
+-		(n_channels <= mvm->fw->ucode_capa.n_scan_channels) &
++		(n_channels <= mvm->fw->ucode_capa.n_scan_channels) &&
+ 		(ies->common_ie_len +
+ 		 ies->len[NL80211_BAND_2GHZ] + ies->len[NL80211_BAND_5GHZ] +
+ 		 ies->len[NL80211_BAND_6GHZ] <=
+diff --git a/drivers/net/wireless/mediatek/mt76/channel.c b/drivers/net/wireless/mediatek/mt76/channel.c
+index cc2d888e3f17a5..77b75792eb488e 100644
+--- a/drivers/net/wireless/mediatek/mt76/channel.c
++++ b/drivers/net/wireless/mediatek/mt76/channel.c
+@@ -173,13 +173,13 @@ void mt76_unassign_vif_chanctx(struct ieee80211_hw *hw,
+ 	if (!mlink)
+ 		goto out;
+ 
+-	if (link_conf != &vif->bss_conf)
++	if (mlink != (struct mt76_vif_link *)vif->drv_priv)
+ 		rcu_assign_pointer(mvif->link[link_id], NULL);
+ 
+ 	dev->drv->vif_link_remove(phy, vif, link_conf, mlink);
+ 	mlink->ctx = NULL;
+ 
+-	if (link_conf != &vif->bss_conf)
++	if (mlink != (struct mt76_vif_link *)vif->drv_priv)
+ 		kfree_rcu(mlink, rcu_head);
+ 
+ out:
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index d7cd467b812fc7..cc92eb9e5b1d07 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -1852,6 +1852,9 @@ mt76_vif_link(struct mt76_dev *dev, struct ieee80211_vif *vif, int link_id)
+ 	struct mt76_vif_link *mlink = (struct mt76_vif_link *)vif->drv_priv;
+ 	struct mt76_vif_data *mvif = mlink->mvif;
+ 
++	if (!link_id)
++		return mlink;
++
+ 	return mt76_dereference(mvif->link[link_id], dev);
+ }
+ 
+@@ -1862,7 +1865,7 @@ mt76_vif_conf_link(struct mt76_dev *dev, struct ieee80211_vif *vif,
+ 	struct mt76_vif_link *mlink = (struct mt76_vif_link *)vif->drv_priv;
+ 	struct mt76_vif_data *mvif = mlink->mvif;
+ 
+-	if (link_conf == &vif->bss_conf)
++	if (link_conf == &vif->bss_conf || !link_conf->link_id)
+ 		return mlink;
+ 
+ 	return mt76_dereference(mvif->link[link_conf->link_id], dev);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index 3643c72bb68d4d..cac176ee5152fb 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -208,6 +208,9 @@ mt7915_mcu_set_timeout(struct mt76_dev *mdev, int cmd)
+ 	case MCU_EXT_CMD_BSS_INFO_UPDATE:
+ 		mdev->mcu.timeout = 2 * HZ;
+ 		return;
++	case MCU_EXT_CMD_EFUSE_BUFFER_MODE:
++		mdev->mcu.timeout = 10 * HZ;
++		return;
+ 	default:
+ 		break;
+ 	}
+@@ -2092,16 +2095,21 @@ static int mt7915_load_firmware(struct mt7915_dev *dev)
+ {
+ 	int ret;
+ 
+-	/* make sure fw is download state */
+-	if (mt7915_firmware_state(dev, false)) {
+-		/* restart firmware once */
+-		mt76_connac_mcu_restart(&dev->mt76);
+-		ret = mt7915_firmware_state(dev, false);
+-		if (ret) {
+-			dev_err(dev->mt76.dev,
+-				"Firmware is not ready for download\n");
+-			return ret;
+-		}
++	/* Release the semaphore if a previous failed attempt left it taken */
++	ret = mt76_connac_mcu_patch_sem_ctrl(&dev->mt76, false);
++	if (ret != PATCH_REL_SEM_SUCCESS) {
++		dev_err(dev->mt76.dev, "Could not release semaphore\n");
++		/* Continue anyway */
++	}
++
++	/* Always restart MCU firmware */
++	mt76_connac_mcu_restart(&dev->mt76);
++
++	/* Check if MCU is ready */
++	ret = mt7915_firmware_state(dev, false);
++	if (ret) {
++		dev_err(dev->mt76.dev, "Firmware did not enter download state\n");
++		return ret;
+ 	}
+ 
+ 	ret = mt76_connac2_load_patch(&dev->mt76, fw_name_var(dev, ROM_PATCH));
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+index 3646806088e9a7..8f8c3104284358 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+@@ -1059,9 +1059,9 @@ int mt7996_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+ 		if (wcid->offchannel)
+ 			mlink = rcu_dereference(mvif->mt76.offchannel_link);
+ 		if (!mlink)
+-			mlink = &mvif->deflink.mt76;
++			mlink = rcu_dereference(mvif->mt76.link[wcid->link_id]);
+ 
+-		txp->fw.bss_idx = mlink->idx;
++		txp->fw.bss_idx = mlink ? mlink->idx : mvif->deflink.mt76.idx;
+ 	}
+ 
+ 	txp->fw.token = cpu_to_le16(id);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/pci.c b/drivers/net/wireless/realtek/rtlwifi/pci.c
+index 898f597f70a96d..d080469264cf89 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/pci.c
++++ b/drivers/net/wireless/realtek/rtlwifi/pci.c
+@@ -572,8 +572,11 @@ static int _rtl_pci_init_one_rxdesc(struct ieee80211_hw *hw,
+ 		dma_map_single(&rtlpci->pdev->dev, skb_tail_pointer(skb),
+ 			       rtlpci->rxbuffersize, DMA_FROM_DEVICE);
+ 	bufferaddress = *((dma_addr_t *)skb->cb);
+-	if (dma_mapping_error(&rtlpci->pdev->dev, bufferaddress))
++	if (dma_mapping_error(&rtlpci->pdev->dev, bufferaddress)) {
++		if (!new_skb)
++			kfree_skb(skb);
+ 		return 0;
++	}
+ 	rtlpci->rx_ring[rxring_idx].rx_buf[desc_idx] = skb;
+ 	if (rtlpriv->use_new_trx_flow) {
+ 		/* skb->cb may be 64 bit address */
+@@ -802,13 +805,19 @@ static void _rtl_pci_rx_interrupt(struct ieee80211_hw *hw)
+ 		skb = new_skb;
+ no_new:
+ 		if (rtlpriv->use_new_trx_flow) {
+-			_rtl_pci_init_one_rxdesc(hw, skb, (u8 *)buffer_desc,
+-						 rxring_idx,
+-						 rtlpci->rx_ring[rxring_idx].idx);
++			if (!_rtl_pci_init_one_rxdesc(hw, skb, (u8 *)buffer_desc,
++						      rxring_idx,
++						      rtlpci->rx_ring[rxring_idx].idx)) {
++				if (new_skb)
++					dev_kfree_skb_any(skb);
++			}
+ 		} else {
+-			_rtl_pci_init_one_rxdesc(hw, skb, (u8 *)pdesc,
+-						 rxring_idx,
+-						 rtlpci->rx_ring[rxring_idx].idx);
++			if (!_rtl_pci_init_one_rxdesc(hw, skb, (u8 *)pdesc,
++						      rxring_idx,
++						      rtlpci->rx_ring[rxring_idx].idx)) {
++				if (new_skb)
++					dev_kfree_skb_any(skb);
++			}
+ 			if (rtlpci->rx_ring[rxring_idx].idx ==
+ 			    rtlpci->rxringcount - 1)
+ 				rtlpriv->cfg->ops->set_desc(hw, (u8 *)pdesc,
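Both changes plug the same hole: an skb the driver just allocated must be freed when its DMA mapping fails, and a failed _rtl_pci_init_one_rxdesc() must not leave that fresh skb dangling. A kernel-style sketch of the map-or-free pattern (not a drop-in; it assumes <linux/dma-mapping.h> and <linux/skbuff.h>, and the helper name is hypothetical):

static int map_rx_skb(struct device *dev, struct sk_buff *skb,
		      size_t len, bool we_own_skb)
{
	dma_addr_t addr = dma_map_single(dev, skb_tail_pointer(skb),
					 len, DMA_FROM_DEVICE);

	if (dma_mapping_error(dev, addr)) {
		if (we_own_skb)
			kfree_skb(skb); /* don't leak the fresh skb */
		return -ENOMEM;
	}
	/* ... hand addr to the RX descriptor ... */
	return 0;
}

The we_own_skb flag mirrors the new_skb checks above: an skb being recycled from the ring must not be freed here, only one this path allocated itself.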
+diff --git a/drivers/net/wireless/realtek/rtw89/chan.c b/drivers/net/wireless/realtek/rtw89/chan.c
+index f60e93870b093c..d343681c2c0010 100644
+--- a/drivers/net/wireless/realtek/rtw89/chan.c
++++ b/drivers/net/wireless/realtek/rtw89/chan.c
+@@ -2680,6 +2680,9 @@ int rtw89_chanctx_ops_assign_vif(struct rtw89_dev *rtwdev,
+ 	rtwvif_link->chanctx_assigned = true;
+ 	cfg->ref_count++;
+ 
++	if (rtwdev->scanning)
++		rtw89_hw_scan_abort(rtwdev, rtwdev->scan_info.scanning_vif);
++
+ 	if (list_empty(&rtwvif->mgnt_entry))
+ 		list_add_tail(&rtwvif->mgnt_entry, &mgnt->active_list);
+ 
+@@ -2719,6 +2722,9 @@ void rtw89_chanctx_ops_unassign_vif(struct rtw89_dev *rtwdev,
+ 	rtwvif_link->chanctx_assigned = false;
+ 	cfg->ref_count--;
+ 
++	if (rtwdev->scanning)
++		rtw89_hw_scan_abort(rtwdev, rtwdev->scan_info.scanning_vif);
++
+ 	if (!rtw89_vif_is_active_role(rtwvif))
+ 		list_del_init(&rtwvif->mgnt_entry);
+ 
+diff --git a/drivers/net/wireless/realtek/rtw89/coex.c b/drivers/net/wireless/realtek/rtw89/coex.c
+index 5ccf0cbaed2fa4..ea3664103fbf86 100644
+--- a/drivers/net/wireless/realtek/rtw89/coex.c
++++ b/drivers/net/wireless/realtek/rtw89/coex.c
+@@ -3836,13 +3836,13 @@ void rtw89_btc_set_policy_v1(struct rtw89_dev *rtwdev, u16 policy_type)
+ 
+ 		switch (policy_type) {
+ 		case BTC_CXP_OFFE_2GBWISOB: /* for normal-case */
+-			_slot_set(btc, CXST_E2G, 0, tbl_w1, SLOT_ISO);
++			_slot_set(btc, CXST_E2G, 5, tbl_w1, SLOT_ISO);
+ 			_slot_set_le(btc, CXST_EBT, s_def[CXST_EBT].dur,
+ 				     s_def[CXST_EBT].cxtbl, s_def[CXST_EBT].cxtype);
+ 			_slot_set_dur(btc, CXST_EBT, dur_2);
+ 			break;
+ 		case BTC_CXP_OFFE_2GISOB: /* for bt no-link */
+-			_slot_set(btc, CXST_E2G, 0, cxtbl[1], SLOT_ISO);
++			_slot_set(btc, CXST_E2G, 5, cxtbl[1], SLOT_ISO);
+ 			_slot_set_le(btc, CXST_EBT, s_def[CXST_EBT].dur,
+ 				     s_def[CXST_EBT].cxtbl, s_def[CXST_EBT].cxtype);
+ 			_slot_set_dur(btc, CXST_EBT, dur_2);
+@@ -3868,15 +3868,15 @@ void rtw89_btc_set_policy_v1(struct rtw89_dev *rtwdev, u16 policy_type)
+ 			break;
+ 		case BTC_CXP_OFFE_2GBWMIXB:
+ 			if (a2dp->exist)
+-				_slot_set(btc, CXST_E2G, 0, cxtbl[2], SLOT_MIX);
++				_slot_set(btc, CXST_E2G, 5, cxtbl[2], SLOT_MIX);
+ 			else
+-				_slot_set(btc, CXST_E2G, 0, tbl_w1, SLOT_MIX);
++				_slot_set(btc, CXST_E2G, 5, tbl_w1, SLOT_MIX);
+ 			_slot_set_le(btc, CXST_EBT, s_def[CXST_EBT].dur,
+ 				     s_def[CXST_EBT].cxtbl, s_def[CXST_EBT].cxtype);
+ 			break;
+ 		case BTC_CXP_OFFE_WL: /* for 4-way */
+-			_slot_set(btc, CXST_E2G, 0, cxtbl[1], SLOT_MIX);
+-			_slot_set(btc, CXST_EBT, 0, cxtbl[1], SLOT_MIX);
++			_slot_set(btc, CXST_E2G, 5, cxtbl[1], SLOT_MIX);
++			_slot_set(btc, CXST_EBT, 5, cxtbl[1], SLOT_MIX);
+ 			break;
+ 		default:
+ 			break;
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index 69546a0394942b..628b64457a1766 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -2788,6 +2788,9 @@ static enum rtw89_ps_mode rtw89_update_ps_mode(struct rtw89_dev *rtwdev)
+ {
+ 	const struct rtw89_chip_info *chip = rtwdev->chip;
+ 
++	if (rtwdev->hci.type != RTW89_HCI_TYPE_PCIE)
++		return RTW89_PS_MODE_NONE;
++
+ 	if (rtw89_disable_ps_mode || !chip->ps_mode_supported ||
+ 	    RTW89_CHK_FW_FEATURE(NO_DEEP_PS, &rtwdev->fw))
+ 		return RTW89_PS_MODE_NONE;
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index 6c52b0425f2ea9..4d7d0197736fb4 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -6446,13 +6446,18 @@ static int rtw89_fw_read_c2h_reg(struct rtw89_dev *rtwdev,
+ 	const struct rtw89_chip_info *chip = rtwdev->chip;
+ 	struct rtw89_fw_info *fw_info = &rtwdev->fw;
+ 	const u32 *c2h_reg = chip->c2h_regs;
+-	u32 ret;
++	u32 ret, timeout;
+ 	u8 i, val;
+ 
+ 	info->id = RTW89_FWCMD_C2HREG_FUNC_NULL;
+ 
++	if (rtwdev->hci.type == RTW89_HCI_TYPE_USB)
++		timeout = RTW89_C2H_TIMEOUT_USB;
++	else
++		timeout = RTW89_C2H_TIMEOUT;
++
+ 	ret = read_poll_timeout_atomic(rtw89_read8, val, val, 1,
+-				       RTW89_C2H_TIMEOUT, false, rtwdev,
++				       timeout, false, rtwdev,
+ 				       chip->c2h_ctrl_reg);
+ 	if (ret) {
+ 		rtw89_warn(rtwdev, "c2h reg timeout\n");
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.h b/drivers/net/wireless/realtek/rtw89/fw.h
+index 55255b48bdb71b..43e569b90e18ba 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.h
++++ b/drivers/net/wireless/realtek/rtw89/fw.h
+@@ -112,6 +112,8 @@ struct rtw89_h2creg_sch_tx_en {
+ #define RTW89_C2HREG_HDR_LEN 2
+ #define RTW89_H2CREG_HDR_LEN 2
+ #define RTW89_C2H_TIMEOUT 1000000
++#define RTW89_C2H_TIMEOUT_USB 4000
++
+ struct rtw89_mac_c2h_info {
+ 	u8 id;
+ 	u8 content_len;
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c
+index b4841f948ec1c7..4d76a5e4796793 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.c
++++ b/drivers/net/wireless/realtek/rtw89/mac.c
+@@ -1441,6 +1441,23 @@ void rtw89_mac_notify_wake(struct rtw89_dev *rtwdev)
+ 	rtw89_mac_send_rpwm(rtwdev, state, true);
+ }
+ 
++static void rtw89_mac_power_switch_boot_mode(struct rtw89_dev *rtwdev)
++{
++	u32 boot_mode;
++
++	if (rtwdev->hci.type != RTW89_HCI_TYPE_USB)
++		return;
++
++	boot_mode = rtw89_read32_mask(rtwdev, R_AX_GPIO_MUXCFG, B_AX_BOOT_MODE);
++	if (!boot_mode)
++		return;
++
++	rtw89_write32_clr(rtwdev, R_AX_SYS_PW_CTRL, B_AX_APFN_ONMAC);
++	rtw89_write32_clr(rtwdev, R_AX_SYS_STATUS1, B_AX_AUTO_WLPON);
++	rtw89_write32_clr(rtwdev, R_AX_GPIO_MUXCFG, B_AX_BOOT_MODE);
++	rtw89_write32_clr(rtwdev, R_AX_RSV_CTRL, B_AX_R_DIS_PRST);
++}
++
+ static int rtw89_mac_power_switch(struct rtw89_dev *rtwdev, bool on)
+ {
+ #define PWR_ACT 1
+@@ -1451,6 +1468,8 @@ static int rtw89_mac_power_switch(struct rtw89_dev *rtwdev, bool on)
+ 	int ret;
+ 	u8 val;
+ 
++	rtw89_mac_power_switch_boot_mode(rtwdev);
++
+ 	if (on) {
+ 		cfg_seq = chip->pwr_on_seq;
+ 		cfg_func = chip->ops->pwr_on_func;
+diff --git a/drivers/net/wireless/realtek/rtw89/reg.h b/drivers/net/wireless/realtek/rtw89/reg.h
+index c776954ad360dc..f33dc82a641e56 100644
+--- a/drivers/net/wireless/realtek/rtw89/reg.h
++++ b/drivers/net/wireless/realtek/rtw89/reg.h
+@@ -182,6 +182,7 @@
+ 
+ #define R_AX_SYS_STATUS1 0x00F4
+ #define B_AX_SEL_0XC0_MASK GENMASK(17, 16)
++#define B_AX_AUTO_WLPON BIT(10)
+ #define B_AX_PAD_HCI_SEL_V2_MASK GENMASK(5, 3)
+ #define MAC_AX_HCI_SEL_SDIO_UART 0
+ #define MAC_AX_HCI_SEL_MULTI_USB 1
+diff --git a/drivers/net/wireless/realtek/rtw89/wow.c b/drivers/net/wireless/realtek/rtw89/wow.c
+index 17eee58503cb36..ea2d3ad8391a8d 100644
+--- a/drivers/net/wireless/realtek/rtw89/wow.c
++++ b/drivers/net/wireless/realtek/rtw89/wow.c
+@@ -1413,6 +1413,8 @@ static void rtw89_fw_release_pno_pkt_list(struct rtw89_dev *rtwdev,
+ static int rtw89_pno_scan_update_probe_req(struct rtw89_dev *rtwdev,
+ 					   struct rtw89_vif_link *rtwvif_link)
+ {
++	static const u8 basic_rate_ie[] = {WLAN_EID_SUPP_RATES, 0x08,
++		 0x0c, 0x12, 0x18, 0x24, 0x30, 0x48, 0x60, 0x6c};
+ 	struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ 	struct cfg80211_sched_scan_request *nd_config = rtw_wow->nd_config;
+ 	u8 num = nd_config->n_match_sets, i;
+@@ -1424,10 +1426,11 @@ static int rtw89_pno_scan_update_probe_req(struct rtw89_dev *rtwdev,
+ 		skb = ieee80211_probereq_get(rtwdev->hw, rtwvif_link->mac_addr,
+ 					     nd_config->match_sets[i].ssid.ssid,
+ 					     nd_config->match_sets[i].ssid.ssid_len,
+-					     nd_config->ie_len);
++					     nd_config->ie_len + sizeof(basic_rate_ie));
+ 		if (!skb)
+ 			return -ENOMEM;
+ 
++		skb_put_data(skb, basic_rate_ie, sizeof(basic_rate_ie));
+ 		skb_put_data(skb, nd_config->ie, nd_config->ie_len);
+ 
+ 		info = kzalloc(sizeof(*info), GFP_KERNEL);
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 5091e1fa4a0df6..186d8ec1eaa61b 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -637,8 +637,6 @@ static int xennet_xdp_xmit_one(struct net_device *dev,
+ 	tx_stats->packets++;
+ 	u64_stats_update_end(&tx_stats->syncp);
+ 
+-	xennet_tx_buf_gc(queue);
+-
+ 	return 0;
+ }
+ 
+@@ -848,9 +846,6 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
+ 	tx_stats->packets++;
+ 	u64_stats_update_end(&tx_stats->syncp);
+ 
+-	/* Note: It is not safe to access skb after xennet_tx_buf_gc()! */
+-	xennet_tx_buf_gc(queue);
+-
+ 	if (!netfront_tx_slot_available(queue))
+ 		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
+ 
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 776c867fb64d57..5396282015a25e 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1888,8 +1888,28 @@ static int nvme_pci_configure_admin_queue(struct nvme_dev *dev)
+ 	 * might be pointing at!
+ 	 */
+ 	result = nvme_disable_ctrl(&dev->ctrl, false);
+-	if (result < 0)
+-		return result;
++	if (result < 0) {
++		struct pci_dev *pdev = to_pci_dev(dev->dev);
++
++		/*
++		 * The NVMe Controller Reset method did not get an expected
++		 * CSTS.RDY transition, so something with the device appears to
++		 * be stuck. Use the lower level and bigger hammer PCIe
++		 * Function Level Reset to attempt restoring the device to its
++		 * initial state, and try again.
++		 */
++		result = pcie_reset_flr(pdev, false);
++		if (result < 0)
++			return result;
++
++		pci_restore_state(pdev);
++		result = nvme_disable_ctrl(&dev->ctrl, false);
++		if (result < 0)
++			return result;
++
++		dev_info(dev->ctrl.device,
++			"controller reset completed after pcie flr\n");
++	}
+ 
+ 	result = nvme_alloc_queue(dev, 0, NVME_AQ_DEPTH);
+ 	if (result)
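The reset escalation here is: try the controller's own CSTS.RDY-based disable first, and only if that fails fall back to PCIe Function Level Reset, restore config space, and retry. A hedged sketch of that ladder using the same kernel APIs as the hunk (helper name hypothetical):

static int nvme_disable_with_flr_fallback(struct nvme_dev *dev,
					  struct pci_dev *pdev)
{
	int ret = nvme_disable_ctrl(&dev->ctrl, false);

	if (ret < 0) {
		ret = pcie_reset_flr(pdev, false); /* the bigger hammer */
		if (ret < 0)
			return ret;
		pci_restore_state(pdev); /* FLR clears config space */
		ret = nvme_disable_ctrl(&dev->ctrl, false);
	}
	return ret;
}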
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index b882ee6ef40f6e..d0ee9e2a32947c 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1776,9 +1776,14 @@ static int nvme_tcp_start_tls(struct nvme_ctrl *nctrl,
+ 			qid, ret);
+ 		tls_handshake_cancel(queue->sock->sk);
+ 	} else {
+-		dev_dbg(nctrl->device,
+-			"queue %d: TLS handshake complete, error %d\n",
+-			qid, queue->tls_err);
++		if (queue->tls_err) {
++			dev_err(nctrl->device,
++				"queue %d: TLS handshake complete, error %d\n",
++				qid, queue->tls_err);
++		} else {
++			dev_dbg(nctrl->device,
++				"queue %d: TLS handshake complete\n", qid);
++		}
+ 		ret = queue->tls_err;
+ 	}
+ 	return ret;
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index af370628e58393..99c58ee09fbb0b 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -816,13 +816,11 @@ int pci_acpi_program_hp_params(struct pci_dev *dev)
+ bool pciehp_is_native(struct pci_dev *bridge)
+ {
+ 	const struct pci_host_bridge *host;
+-	u32 slot_cap;
+ 
+ 	if (!IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE))
+ 		return false;
+ 
+-	pcie_capability_read_dword(bridge, PCI_EXP_SLTCAP, &slot_cap);
+-	if (!(slot_cap & PCI_EXP_SLTCAP_HPC))
++	if (!bridge->is_pciehp)
+ 		return false;
+ 
+ 	if (pcie_ports_native)
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 027a71c9c06f13..9ac0321bbb5c41 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -3030,8 +3030,12 @@ static const struct dmi_system_id bridge_d3_blacklist[] = {
+  * pci_bridge_d3_possible - Is it possible to put the bridge into D3
+  * @bridge: Bridge to check
+  *
+- * This function checks if it is possible to move the bridge to D3.
+  * Currently we only allow D3 for some PCIe ports and for Thunderbolt.
++ *
++ * Return: Whether it is possible to move the bridge to D3.
++ *
++ * The return value is guaranteed to be constant across the entire lifetime
++ * of the bridge, including its hot-removal.
+  */
+ bool pci_bridge_d3_possible(struct pci_dev *bridge)
+ {
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index e03dcab8c3df56..19010c3828648f 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -1678,7 +1678,7 @@ void set_pcie_hotplug_bridge(struct pci_dev *pdev)
+ 
+ 	pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &reg32);
+ 	if (reg32 & PCI_EXP_SLTCAP_HPC)
+-		pdev->is_hotplug_bridge = 1;
++		pdev->is_hotplug_bridge = pdev->is_pciehp = 1;
+ }
+ 
+ static void set_pcie_thunderbolt(struct pci_dev *dev)
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index 403850b1040d3d..e4bf181842fabf 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -2662,6 +2662,7 @@ static struct platform_driver arm_cmn_driver = {
+ 		.name = "arm-cmn",
+ 		.of_match_table = of_match_ptr(arm_cmn_of_match),
+ 		.acpi_match_table = ACPI_PTR(arm_cmn_acpi_match),
++		.suppress_bind_attrs = true,
+ 	},
+ 	.probe = arm_cmn_probe,
+ 	.remove = arm_cmn_remove,
+diff --git a/drivers/perf/arm-ni.c b/drivers/perf/arm-ni.c
+index 9396d243415f48..c30a67fe2ae3ce 100644
+--- a/drivers/perf/arm-ni.c
++++ b/drivers/perf/arm-ni.c
+@@ -709,6 +709,7 @@ static struct platform_driver arm_ni_driver = {
+ 		.name = "arm-ni",
+ 		.of_match_table = of_match_ptr(arm_ni_of_match),
+ 		.acpi_match_table = ACPI_PTR(arm_ni_acpi_match),
++		.suppress_bind_attrs = true,
+ 	},
+ 	.probe = arm_ni_probe,
+ 	.remove = arm_ni_remove,
+diff --git a/drivers/perf/cxl_pmu.c b/drivers/perf/cxl_pmu.c
+index d6693519eaee2e..948e7c067dd2f1 100644
+--- a/drivers/perf/cxl_pmu.c
++++ b/drivers/perf/cxl_pmu.c
+@@ -873,7 +873,7 @@ static int cxl_pmu_probe(struct device *dev)
+ 		return rc;
+ 	irq = rc;
+ 
+-	irq_name = devm_kasprintf(dev, GFP_KERNEL, "%s_overflow\n", dev_name);
++	irq_name = devm_kasprintf(dev, GFP_KERNEL, "%s_overflow", dev_name);
+ 	if (!irq_name)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/phy/rockchip/phy-rockchip-pcie.c b/drivers/phy/rockchip/phy-rockchip-pcie.c
+index bd44af36c67a5a..4e2dfd01adf2ff 100644
+--- a/drivers/phy/rockchip/phy-rockchip-pcie.c
++++ b/drivers/phy/rockchip/phy-rockchip-pcie.c
+@@ -30,9 +30,8 @@
+ #define PHY_CFG_ADDR_SHIFT    1
+ #define PHY_CFG_DATA_MASK     0xf
+ #define PHY_CFG_ADDR_MASK     0x3f
+-#define PHY_CFG_RD_MASK       0x3ff
+ #define PHY_CFG_WR_ENABLE     1
+-#define PHY_CFG_WR_DISABLE    1
++#define PHY_CFG_WR_DISABLE    0
+ #define PHY_CFG_WR_SHIFT      0
+ #define PHY_CFG_WR_MASK       1
+ #define PHY_CFG_PLL_LOCK      0x10
+@@ -160,6 +159,12 @@ static int rockchip_pcie_phy_power_on(struct phy *phy)
+ 
+ 	guard(mutex)(&rk_phy->pcie_mutex);
+ 
++	regmap_write(rk_phy->reg_base,
++		     rk_phy->phy_data->pcie_laneoff,
++		     HIWORD_UPDATE(!PHY_LANE_IDLE_OFF,
++				   PHY_LANE_IDLE_MASK,
++				   PHY_LANE_IDLE_A_SHIFT + inst->index));
++
+ 	if (rk_phy->pwr_cnt++) {
+ 		return 0;
+ 	}
+@@ -176,12 +181,6 @@ static int rockchip_pcie_phy_power_on(struct phy *phy)
+ 				   PHY_CFG_ADDR_MASK,
+ 				   PHY_CFG_ADDR_SHIFT));
+ 
+-	regmap_write(rk_phy->reg_base,
+-		     rk_phy->phy_data->pcie_laneoff,
+-		     HIWORD_UPDATE(!PHY_LANE_IDLE_OFF,
+-				   PHY_LANE_IDLE_MASK,
+-				   PHY_LANE_IDLE_A_SHIFT + inst->index));
+-
+ 	/*
+ 	 * No documented timeout value for phy operation below,
+ 	 * so we make it large enough here. And we use loop-break
+diff --git a/drivers/pinctrl/stm32/pinctrl-stm32.c b/drivers/pinctrl/stm32/pinctrl-stm32.c
+index cc0b4d1d7cfff0..ba9dbb21b49751 100644
+--- a/drivers/pinctrl/stm32/pinctrl-stm32.c
++++ b/drivers/pinctrl/stm32/pinctrl-stm32.c
+@@ -408,6 +408,7 @@ static struct irq_chip stm32_gpio_irq_chip = {
+ 	.irq_set_wake	= irq_chip_set_wake_parent,
+ 	.irq_request_resources = stm32_gpio_irq_request_resources,
+ 	.irq_release_resources = stm32_gpio_irq_release_resources,
++	.irq_set_affinity = IS_ENABLED(CONFIG_SMP) ? irq_chip_set_affinity_parent : NULL,
+ };
+ 
+ static int stm32_gpio_domain_translate(struct irq_domain *d,
+diff --git a/drivers/platform/chrome/cros_ec_sensorhub.c b/drivers/platform/chrome/cros_ec_sensorhub.c
+index 50cdae67fa3204..9bad8f72680ea8 100644
+--- a/drivers/platform/chrome/cros_ec_sensorhub.c
++++ b/drivers/platform/chrome/cros_ec_sensorhub.c
+@@ -8,6 +8,7 @@
+ 
+ #include <linux/init.h>
+ #include <linux/device.h>
++#include <linux/delay.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/module.h>
+ #include <linux/platform_data/cros_ec_commands.h>
+@@ -18,6 +19,7 @@
+ #include <linux/types.h>
+ 
+ #define DRV_NAME		"cros-ec-sensorhub"
++#define CROS_EC_CMD_INFO_RETRIES 50
+ 
+ static void cros_ec_sensorhub_free_sensor(void *arg)
+ {
+@@ -53,7 +55,7 @@ static int cros_ec_sensorhub_register(struct device *dev,
+ 	int sensor_type[MOTIONSENSE_TYPE_MAX] = { 0 };
+ 	struct cros_ec_command *msg = sensorhub->msg;
+ 	struct cros_ec_dev *ec = sensorhub->ec;
+-	int ret, i;
++	int ret, i, retries;
+ 	char *name;
+ 
+ 
+@@ -65,12 +67,25 @@ static int cros_ec_sensorhub_register(struct device *dev,
+ 		sensorhub->params->cmd = MOTIONSENSE_CMD_INFO;
+ 		sensorhub->params->info.sensor_num = i;
+ 
+-		ret = cros_ec_cmd_xfer_status(ec->ec_dev, msg);
++		retries = CROS_EC_CMD_INFO_RETRIES;
++		do {
++			ret = cros_ec_cmd_xfer_status(ec->ec_dev, msg);
++			if (ret == -EBUSY) {
++				/* The EC is still busy initializing sensors. */
++				usleep_range(5000, 6000);
++				retries--;
++			}
++		} while (ret == -EBUSY && retries);
++
+ 		if (ret < 0) {
+-			dev_warn(dev, "no info for EC sensor %d : %d/%d\n",
+-				 i, ret, msg->result);
++			dev_err(dev, "no info for EC sensor %d : %d/%d\n",
++				i, ret, msg->result);
+ 			continue;
+ 		}
++		if (retries < CROS_EC_CMD_INFO_RETRIES) {
++			dev_warn(dev, "%d retries needed to bring up sensor %d\n",
++				 CROS_EC_CMD_INFO_RETRIES - retries, i);
++		}
+ 
+ 		switch (sensorhub->resp->info.type) {
+ 		case MOTIONSENSE_TYPE_ACCEL:
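The probe path now tolerates an EC that answers -EBUSY while its sensors are still initializing: retry up to CROS_EC_CMD_INFO_RETRIES times with a short sleep, and only escalate to dev_err() once the retries are exhausted. A self-contained version of the bounded-retry loop:

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

#define RETRIES 50

static int query_sensor(int attempt)
{
	return attempt < 3 ? -EBUSY : 0; /* busy for the first attempts */
}

int main(void)
{
	int retries = RETRIES, attempt = 0, ret;

	do {
		ret = query_sensor(attempt++);
		if (ret == -EBUSY) {
			usleep(5000); /* give the EC time to finish init */
			retries--;
		}
	} while (ret == -EBUSY && retries);

	printf("ret=%d after %d retries\n", ret, RETRIES - retries);
	return ret ? 1 : 0;
}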
+diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c
+index 7678e3d05fd36f..f437b594055cff 100644
+--- a/drivers/platform/chrome/cros_ec_typec.c
++++ b/drivers/platform/chrome/cros_ec_typec.c
+@@ -1272,8 +1272,8 @@ static int cros_typec_probe(struct platform_device *pdev)
+ 
+ 	typec->ec = dev_get_drvdata(pdev->dev.parent);
+ 	if (!typec->ec) {
+-		dev_err(dev, "couldn't find parent EC device\n");
+-		return -ENODEV;
++		dev_warn(dev, "couldn't find parent EC device\n");
++		return -EPROBE_DEFER;
+ 	}
+ 
+ 	platform_set_drvdata(pdev, typec);
+diff --git a/drivers/platform/x86/amd/pmc/pmc-quirks.c b/drivers/platform/x86/amd/pmc/pmc-quirks.c
+index 7ed12c1d3b34c0..04686ae1e976bd 100644
+--- a/drivers/platform/x86/amd/pmc/pmc-quirks.c
++++ b/drivers/platform/x86/amd/pmc/pmc-quirks.c
+@@ -189,6 +189,15 @@ static const struct dmi_system_id fwbug_list[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "82XQ"),
+ 		}
+ 	},
++	/* https://gitlab.freedesktop.org/drm/amd/-/issues/4434 */
++	{
++		.ident = "Lenovo Yoga 6 13ALC6",
++		.driver_data = &quirk_s2idle_bug,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "82ND"),
++		}
++	},
+ 	/* https://gitlab.freedesktop.org/drm/amd/-/issues/2684 */
+ 	{
+ 		.ident = "HP Laptop 15s-eq2xxx",
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 657625dd60a06c..dc1fc069fed9c7 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -558,12 +558,12 @@ static unsigned long __init tpacpi_check_quirks(
+ 	return 0;
+ }
+ 
+-static inline bool __pure __init tpacpi_is_lenovo(void)
++static __always_inline bool __pure __init tpacpi_is_lenovo(void)
+ {
+ 	return thinkpad_id.vendor == PCI_VENDOR_ID_LENOVO;
+ }
+ 
+-static inline bool __pure __init tpacpi_is_ibm(void)
++static __always_inline bool __pure __init tpacpi_is_ibm(void)
+ {
+ 	return thinkpad_id.vendor == PCI_VENDOR_ID_IBM;
+ }
+diff --git a/drivers/pmdomain/imx/imx8m-blk-ctrl.c b/drivers/pmdomain/imx/imx8m-blk-ctrl.c
+index 912802b5215bd2..5c83e5599f1ea6 100644
+--- a/drivers/pmdomain/imx/imx8m-blk-ctrl.c
++++ b/drivers/pmdomain/imx/imx8m-blk-ctrl.c
+@@ -665,6 +665,11 @@ static const struct imx8m_blk_ctrl_data imx8mn_disp_blk_ctl_dev_data = {
+ #define  LCDIF_1_RD_HURRY	GENMASK(15, 13)
+ #define  LCDIF_0_RD_HURRY	GENMASK(12, 10)
+ 
++#define ISI_CACHE_CTRL		0x50
++#define  ISI_V_WR_HURRY		GENMASK(28, 26)
++#define  ISI_U_WR_HURRY		GENMASK(25, 23)
++#define  ISI_Y_WR_HURRY		GENMASK(22, 20)
++
+ static int imx8mp_media_power_notifier(struct notifier_block *nb,
+ 				unsigned long action, void *data)
+ {
+@@ -694,6 +699,11 @@ static int imx8mp_media_power_notifier(struct notifier_block *nb,
+ 		regmap_set_bits(bc->regmap, LCDIF_ARCACHE_CTRL,
+ 				FIELD_PREP(LCDIF_1_RD_HURRY, 7) |
+ 				FIELD_PREP(LCDIF_0_RD_HURRY, 7));
++		/* Same here for ISI */
++		regmap_set_bits(bc->regmap, ISI_CACHE_CTRL,
++				FIELD_PREP(ISI_V_WR_HURRY, 7) |
++				FIELD_PREP(ISI_U_WR_HURRY, 7) |
++				FIELD_PREP(ISI_Y_WR_HURRY, 7));
+ 	}
+ 
+ 	return NOTIFY_OK;
+diff --git a/drivers/pmdomain/ti/Kconfig b/drivers/pmdomain/ti/Kconfig
+index 67c608bf7ed026..5386b362a7ab25 100644
+--- a/drivers/pmdomain/ti/Kconfig
++++ b/drivers/pmdomain/ti/Kconfig
+@@ -10,7 +10,7 @@ if SOC_TI
+ config TI_SCI_PM_DOMAINS
+ 	tristate "TI SCI PM Domains Driver"
+ 	depends on TI_SCI_PROTOCOL
+-	depends on PM_GENERIC_DOMAINS
++	select PM_GENERIC_DOMAINS if PM
+ 	help
+ 	  Generic power domain implementation for TI device implementing
+ 	  the TI SCI protocol.
+diff --git a/drivers/power/supply/qcom_battmgr.c b/drivers/power/supply/qcom_battmgr.c
+index fe27676fbc7cd1..2d50830610e9aa 100644
+--- a/drivers/power/supply/qcom_battmgr.c
++++ b/drivers/power/supply/qcom_battmgr.c
+@@ -981,6 +981,8 @@ static unsigned int qcom_battmgr_sc8280xp_parse_technology(const char *chemistry
+ {
+ 	if (!strncmp(chemistry, "LIO", BATTMGR_CHEMISTRY_LEN))
+ 		return POWER_SUPPLY_TECHNOLOGY_LION;
++	if (!strncmp(chemistry, "LIP", BATTMGR_CHEMISTRY_LEN))
++		return POWER_SUPPLY_TECHNOLOGY_LIPO;
+ 
+ 	pr_err("Unknown battery technology '%s'\n", chemistry);
+ 	return POWER_SUPPLY_TECHNOLOGY_UNKNOWN;
+diff --git a/drivers/pps/clients/pps-gpio.c b/drivers/pps/clients/pps-gpio.c
+index 374ceefd6f2a40..2866636b0554cb 100644
+--- a/drivers/pps/clients/pps-gpio.c
++++ b/drivers/pps/clients/pps-gpio.c
+@@ -210,8 +210,8 @@ static int pps_gpio_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* register IRQ interrupt handler */
+-	ret = devm_request_irq(dev, data->irq, pps_gpio_irq_handler,
+-			get_irqf_trigger_flags(data), data->info.name, data);
++	ret = request_irq(data->irq, pps_gpio_irq_handler,
++			  get_irqf_trigger_flags(data), data->info.name, data);
+ 	if (ret) {
+ 		pps_unregister_source(data->pps);
+ 		dev_err(dev, "failed to acquire IRQ %d\n", data->irq);
+@@ -228,6 +228,7 @@ static void pps_gpio_remove(struct platform_device *pdev)
+ {
+ 	struct pps_gpio_device_data *data = platform_get_drvdata(pdev);
+ 
++	free_irq(data->irq, data);
+ 	pps_unregister_source(data->pps);
+ 	timer_delete_sync(&data->echo_timer);
+ 	/* reset echo pin in any case */
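Dropping devm_request_irq() here is about teardown ordering: devm-managed IRQs are released only after ->remove() returns, so the handler could still fire against a PPS source that pps_unregister_source() has already torn down. Managing the IRQ by hand lets remove free it first. A hedged sketch of the safe order (helper name hypothetical, fields as in the driver):

static void pps_gpio_teardown(struct pps_gpio_device_data *data)
{
	free_irq(data->irq, data);        /* 1: no more interrupts */
	pps_unregister_source(data->pps); /* 2: now safe to unregister */
}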
+diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c
+index 36f57d7b4a6671..1cc06b7cb17ef5 100644
+--- a/drivers/ptp/ptp_clock.c
++++ b/drivers/ptp/ptp_clock.c
+@@ -96,7 +96,7 @@ static int ptp_clock_settime(struct posix_clock *pc, const struct timespec64 *tp
+ 	struct ptp_clock *ptp = container_of(pc, struct ptp_clock, clock);
+ 
+ 	if (ptp_clock_freerun(ptp)) {
+-		pr_err("ptp: physical clock is free running\n");
++		pr_err_ratelimited("ptp: physical clock is free running\n");
+ 		return -EBUSY;
+ 	}
+ 
+diff --git a/drivers/ptp/ptp_private.h b/drivers/ptp/ptp_private.h
+index a6aad743c282f4..b352df4cd3f972 100644
+--- a/drivers/ptp/ptp_private.h
++++ b/drivers/ptp/ptp_private.h
+@@ -24,6 +24,11 @@
+ #define PTP_DEFAULT_MAX_VCLOCKS 20
+ #define PTP_MAX_CHANNELS 2048
+ 
++enum {
++	PTP_LOCK_PHYSICAL = 0,
++	PTP_LOCK_VIRTUAL,
++};
++
+ struct timestamp_event_queue {
+ 	struct ptp_extts_event buf[PTP_MAX_TIMESTAMPS];
+ 	int head;
+diff --git a/drivers/ptp/ptp_vclock.c b/drivers/ptp/ptp_vclock.c
+index 7febfdcbde8bc6..8ed4b85989242f 100644
+--- a/drivers/ptp/ptp_vclock.c
++++ b/drivers/ptp/ptp_vclock.c
+@@ -154,6 +154,11 @@ static long ptp_vclock_refresh(struct ptp_clock_info *ptp)
+ 	return PTP_VCLOCK_REFRESH_INTERVAL;
+ }
+ 
++static void ptp_vclock_set_subclass(struct ptp_clock *ptp)
++{
++	lockdep_set_subclass(&ptp->clock.rwsem, PTP_LOCK_VIRTUAL);
++}
++
+ static const struct ptp_clock_info ptp_vclock_info = {
+ 	.owner		= THIS_MODULE,
+ 	.name		= "ptp virtual clock",
+@@ -213,6 +218,8 @@ struct ptp_vclock *ptp_vclock_register(struct ptp_clock *pclock)
+ 		return NULL;
+ 	}
+ 
++	ptp_vclock_set_subclass(vclock->clock);
++
+ 	timecounter_init(&vclock->tc, &vclock->cc, 0);
+ 	ptp_schedule_worker(vclock->clock, PTP_VCLOCK_REFRESH_INTERVAL);
+ 
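
The ptp_private.h/ptp_vclock.c hunks give a virtual clock's rwsem its own
lockdep subclass, so nesting it under the physical clock's rwsem is no longer
reported as recursive locking of a single lock class. A minimal standalone
sketch (everything except lockdep_set_subclass() is illustrative):

	#include <linux/lockdep.h>
	#include <linux/rwsem.h>

	enum { LOCK_PHYSICAL = 0, LOCK_VIRTUAL };

	static void nested_example(struct rw_semaphore *phys,
				   struct rw_semaphore *virt)
	{
		lockdep_set_subclass(virt, LOCK_VIRTUAL);

		down_write(phys);	/* subclass 0 */
		down_write(virt);	/* subclass 1: no false deadlock splat */
		up_write(virt);
		up_write(phys);
	}
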
+diff --git a/drivers/remoteproc/imx_rproc.c b/drivers/remoteproc/imx_rproc.c
+index 74299af1d7f10a..627e57a88db218 100644
+--- a/drivers/remoteproc/imx_rproc.c
++++ b/drivers/remoteproc/imx_rproc.c
+@@ -1029,8 +1029,8 @@ static int imx_rproc_clk_enable(struct imx_rproc *priv)
+ 	struct device *dev = priv->dev;
+ 	int ret;
+ 
+-	/* Remote core is not under control of Linux */
+-	if (dcfg->method == IMX_RPROC_NONE)
++	/* Remote core is not under the control of Linux, or it is managed by the SCU API */
++	if (dcfg->method == IMX_RPROC_NONE || dcfg->method == IMX_RPROC_SCU_API)
+ 		return 0;
+ 
+ 	priv->clk = devm_clk_get(dev, NULL);
+diff --git a/drivers/reset/Kconfig b/drivers/reset/Kconfig
+index 99f6f9784e6865..c9bbdc6ac382c6 100644
+--- a/drivers/reset/Kconfig
++++ b/drivers/reset/Kconfig
+@@ -51,8 +51,8 @@ config RESET_BERLIN
+ 
+ config RESET_BRCMSTB
+ 	tristate "Broadcom STB reset controller"
+-	depends on ARCH_BRCMSTB || COMPILE_TEST
+-	default ARCH_BRCMSTB
++	depends on ARCH_BRCMSTB || ARCH_BCM2835 || COMPILE_TEST
++	default ARCH_BRCMSTB || ARCH_BCM2835
+ 	help
+ 	  This enables the reset controller driver for Broadcom STB SoCs using
+ 	  a SUN_TOP_CTRL_SW_INIT style controller.
+@@ -60,11 +60,11 @@ config RESET_BRCMSTB
+ config RESET_BRCMSTB_RESCAL
+ 	tristate "Broadcom STB RESCAL reset controller"
+ 	depends on HAS_IOMEM
+-	depends on ARCH_BRCMSTB || COMPILE_TEST
+-	default ARCH_BRCMSTB
++	depends on ARCH_BRCMSTB || ARCH_BCM2835 || COMPILE_TEST
++	default ARCH_BRCMSTB || ARCH_BCM2835
+ 	help
+ 	  This enables the RESCAL reset controller for SATA, PCIe0, or PCIe1 on
+-	  BCM7216.
++	  BCM7216 or the BCM2712.
+ 
+ config RESET_EYEQ
+ 	bool "Mobileye EyeQ reset controller"
+diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c
+index c8a666de9cbe91..1960d1bd851cb0 100644
+--- a/drivers/rtc/rtc-ds1307.c
++++ b/drivers/rtc/rtc-ds1307.c
+@@ -279,6 +279,13 @@ static int ds1307_get_time(struct device *dev, struct rtc_time *t)
+ 		if (tmp & DS1340_BIT_OSF)
+ 			return -EINVAL;
+ 		break;
++	case ds_1341:
++		ret = regmap_read(ds1307->regmap, DS1337_REG_STATUS, &tmp);
++		if (ret)
++			return ret;
++		if (tmp & DS1337_BIT_OSF)
++			return -EINVAL;
++		break;
+ 	case ds_1388:
+ 		ret = regmap_read(ds1307->regmap, DS1388_REG_FLAG, &tmp);
+ 		if (ret)
+@@ -377,6 +384,10 @@ static int ds1307_set_time(struct device *dev, struct rtc_time *t)
+ 		regmap_update_bits(ds1307->regmap, DS1340_REG_FLAG,
+ 				   DS1340_BIT_OSF, 0);
+ 		break;
++	case ds_1341:
++		regmap_update_bits(ds1307->regmap, DS1337_REG_STATUS,
++				   DS1337_BIT_OSF, 0);
++		break;
+ 	case ds_1388:
+ 		regmap_update_bits(ds1307->regmap, DS1388_REG_FLAG,
+ 				   DS1388_BIT_OSF, 0);
+@@ -1813,10 +1824,8 @@ static int ds1307_probe(struct i2c_client *client)
+ 		regmap_write(ds1307->regmap, DS1337_REG_CONTROL,
+ 			     regs[0]);
+ 
+-		/* oscillator fault?  clear flag, and warn */
++		/* oscillator fault? warn */
+ 		if (regs[1] & DS1337_BIT_OSF) {
+-			regmap_write(ds1307->regmap, DS1337_REG_STATUS,
+-				     regs[1] & ~DS1337_BIT_OSF);
+ 			dev_warn(ds1307->dev, "SET TIME!\n");
+ 		}
+ 		break;
+diff --git a/drivers/s390/char/sclp.c b/drivers/s390/char/sclp.c
+index 840be75e75d419..9a55e2d04e633f 100644
+--- a/drivers/s390/char/sclp.c
++++ b/drivers/s390/char/sclp.c
+@@ -719,7 +719,7 @@ sclp_sync_wait(void)
+ 	timeout = 0;
+ 	if (timer_pending(&sclp_request_timer)) {
+ 		/* Get timeout TOD value */
+-		timeout = get_tod_clock_fast() +
++		timeout = get_tod_clock_monotonic() +
+ 			  sclp_tod_from_jiffies(sclp_request_timer.expires -
+ 						jiffies);
+ 	}
+@@ -739,7 +739,7 @@ sclp_sync_wait(void)
+ 	/* Loop until driver state indicates finished request */
+ 	while (sclp_running_state != sclp_running_state_idle) {
+ 		/* Check for expired request timer */
+-		if (get_tod_clock_fast() > timeout && timer_delete(&sclp_request_timer))
++		if (get_tod_clock_monotonic() > timeout && timer_delete(&sclp_request_timer))
+ 			sclp_request_timer.function(&sclp_request_timer);
+ 		cpu_relax();
+ 	}
+diff --git a/drivers/scsi/aacraid/comminit.c b/drivers/scsi/aacraid/comminit.c
+index 28cf18955a088e..726c8531b7d3fb 100644
+--- a/drivers/scsi/aacraid/comminit.c
++++ b/drivers/scsi/aacraid/comminit.c
+@@ -481,8 +481,7 @@ void aac_define_int_mode(struct aac_dev *dev)
+ 	    pci_find_capability(dev->pdev, PCI_CAP_ID_MSIX)) {
+ 		min_msix = 2;
+ 		i = pci_alloc_irq_vectors(dev->pdev,
+-					  min_msix, msi_count,
+-					  PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
++					  min_msix, msi_count, PCI_IRQ_MSIX);
+ 		if (i > 0) {
+ 			dev->msi_enabled = 1;
+ 			msi_count = i;
+diff --git a/drivers/scsi/bfa/bfad_im.c b/drivers/scsi/bfa/bfad_im.c
+index a719a18f0fbcf7..f56e008ee52b1d 100644
+--- a/drivers/scsi/bfa/bfad_im.c
++++ b/drivers/scsi/bfa/bfad_im.c
+@@ -706,6 +706,7 @@ bfad_im_probe(struct bfad_s *bfad)
+ 
+ 	if (bfad_thread_workq(bfad) != BFA_STATUS_OK) {
+ 		kfree(im);
++		bfad->im = NULL;
+ 		return BFA_STATUS_FAILED;
+ 	}
+ 
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index 1ddaf72283403a..09d5724db32a0f 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -3184,7 +3184,8 @@ iscsi_conn_setup(struct iscsi_cls_session *cls_session, int dd_size,
+ 		return NULL;
+ 	conn = cls_conn->dd_data;
+ 
+-	conn->dd_data = cls_conn->dd_data + sizeof(*conn);
++	if (dd_size)
++		conn->dd_data = cls_conn->dd_data + sizeof(*conn);
+ 	conn->session = session;
+ 	conn->cls_conn = cls_conn;
+ 	conn->c_stage = ISCSI_CONN_INITIAL_STAGE;
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index 3fd1aa5cc78cc8..1b601e45bc45c1 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -6289,7 +6289,6 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
+ 			}
+ 			phba->nvmeio_trc_on = 1;
+ 			phba->nvmeio_trc_output_idx = 0;
+-			phba->nvmeio_trc = NULL;
+ 		} else {
+ nvmeio_off:
+ 			phba->nvmeio_trc_size = 0;
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index c256c3edd66392..97263b8e1bf87d 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -183,7 +183,8 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ 
+ 	/* Don't schedule a worker thread event if the vport is going down. */
+ 	if (test_bit(FC_UNLOADING, &vport->load_flag) ||
+-	    !test_bit(HBA_SETUP, &phba->hba_flag)) {
++	    (phba->sli_rev == LPFC_SLI_REV4 &&
++	    !test_bit(HBA_SETUP, &phba->hba_flag))) {
+ 
+ 		spin_lock_irqsave(&ndlp->lock, iflags);
+ 		ndlp->rport = NULL;
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index 9edf80b14b1a0f..8862343aae554e 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -390,6 +390,10 @@ lpfc_sli4_vport_delete_fcp_xri_aborted(struct lpfc_vport *vport)
+ 	if (!(vport->cfg_enable_fc4_type & LPFC_ENABLE_FCP))
+ 		return;
+ 
++	/* may be called before queues are established if hba_setup fails */
++	if (!phba->sli4_hba.hdwq)
++		return;
++
+ 	spin_lock_irqsave(&phba->hbalock, iflag);
+ 	for (idx = 0; idx < phba->cfg_hdw_queue; idx++) {
+ 		qp = &phba->sli4_hba.hdwq[idx];
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c
+index c186b892150fe8..5ecea2c7584f44 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_os.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_os.c
+@@ -49,6 +49,13 @@ static void mpi3mr_send_event_ack(struct mpi3mr_ioc *mrioc, u8 event,
+ 
+ #define MPI3_EVENT_WAIT_FOR_DEVICES_TO_REFRESH	(0xFFFE)
+ 
++/*
++ * SAS Log info code for an NCQ collateral abort after an NCQ error:
++ * IOC_LOGINFO_PREFIX_PL | PL_LOGINFO_CODE_SATA_NCQ_FAIL_ALL_CMDS_AFTR_ERR
++ * See: drivers/message/fusion/lsi/mpi_log_sas.h
++ */
++#define IOC_LOGINFO_SATA_NCQ_FAIL_AFTER_ERR	0x31080000
++
+ /**
+  * mpi3mr_host_tag_for_scmd - Get host tag for a scmd
+  * @mrioc: Adapter instance reference
+@@ -3397,7 +3404,18 @@ void mpi3mr_process_op_reply_desc(struct mpi3mr_ioc *mrioc,
+ 		scmd->result = DID_NO_CONNECT << 16;
+ 		break;
+ 	case MPI3_IOCSTATUS_SCSI_IOC_TERMINATED:
+-		scmd->result = DID_SOFT_ERROR << 16;
++		if (ioc_loginfo == IOC_LOGINFO_SATA_NCQ_FAIL_AFTER_ERR) {
++			/*
++			 * This is an ATA NCQ command aborted due to another NCQ
++			 * command failure. We must retry this command
++			 * immediately but without incrementing its retry
++			 * counter.
++			 */
++			WARN_ON_ONCE(xfer_count != 0);
++			scmd->result = DID_IMM_RETRY << 16;
++		} else {
++			scmd->result = DID_SOFT_ERROR << 16;
++		}
+ 		break;
+ 	case MPI3_IOCSTATUS_SCSI_TASK_TERMINATED:
+ 	case MPI3_IOCSTATUS_SCSI_EXT_TERMINATED:
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 0f900ddb3047c7..967af259118e72 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -195,6 +195,14 @@ struct sense_info {
+ #define MPT3SAS_PORT_ENABLE_COMPLETE (0xFFFD)
+ #define MPT3SAS_ABRT_TASK_SET (0xFFFE)
+ #define MPT3SAS_REMOVE_UNRESPONDING_DEVICES (0xFFFF)
++
++/*
++ * SAS Log info code for an NCQ collateral abort after an NCQ error:
++ * IOC_LOGINFO_PREFIX_PL | PL_LOGINFO_CODE_SATA_NCQ_FAIL_ALL_CMDS_AFTR_ERR
++ * See: drivers/message/fusion/lsi/mpi_log_sas.h
++ */
++#define IOC_LOGINFO_SATA_NCQ_FAIL_AFTER_ERR	0x31080000
++
+ /**
+  * struct fw_event_work - firmware event struct
+  * @list: link list framework
+@@ -5814,6 +5822,17 @@ _scsih_io_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index, u32 reply)
+ 			scmd->result = DID_TRANSPORT_DISRUPTED << 16;
+ 			goto out;
+ 		}
++		if (log_info == IOC_LOGINFO_SATA_NCQ_FAIL_AFTER_ERR) {
++			/*
++			 * This is a ATA NCQ command aborted due to another NCQ
++			 * command failure. We must retry this command
++			 * immediately but without incrementing its retry
++			 * counter.
++			 */
++			WARN_ON_ONCE(xfer_cnt != 0);
++			scmd->result = DID_IMM_RETRY << 16;
++			break;
++		}
+ 		if (log_info == 0x31110630) {
+ 			if (scmd->retries > 2) {
+ 				scmd->result = DID_NO_CONNECT << 16;
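
Both the mpi3mr and mpt3sas hunks map log-info 0x31080000 to DID_IMM_RETRY,
whose documented meaning is "retry without decrementing the retry count" --
exactly what a collateral NCQ abort needs, since the command itself did not
fail. A reduced sketch of the shared pattern (the helper name is illustrative,
not taken from either driver):

	#include <scsi/scsi.h>
	#include <scsi/scsi_cmnd.h>

	#define LOGINFO_SATA_NCQ_FAIL_AFTER_ERR	0x31080000

	static void set_ncq_collateral_result(struct scsi_cmnd *scmd, u32 log_info)
	{
		if (log_info == LOGINFO_SATA_NCQ_FAIL_AFTER_ERR)
			scmd->result = DID_IMM_RETRY << 16;  /* uncounted retry */
		else
			scmd->result = DID_SOFT_ERROR << 16; /* counted retry */
	}
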
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 5b373c53c0369e..c4074f062d9311 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -4677,8 +4677,12 @@ pm80xx_chip_phy_start_req(struct pm8001_hba_info *pm8001_ha, u8 phy_id)
+ 		&pm8001_ha->phy[phy_id].dev_sas_addr, SAS_ADDR_SIZE);
+ 	payload.sas_identify.phy_id = phy_id;
+ 
+-	return pm8001_mpi_build_cmd(pm8001_ha, 0, opcode, &payload,
++	ret = pm8001_mpi_build_cmd(pm8001_ha, 0, opcode, &payload,
+ 				    sizeof(payload), 0);
++	if (ret < 0)
++		pm8001_tag_free(pm8001_ha, tag);
++
++	return ret;
+ }
+ 
+ /**
+@@ -4704,8 +4708,12 @@ static int pm80xx_chip_phy_stop_req(struct pm8001_hba_info *pm8001_ha,
+ 	payload.tag = cpu_to_le32(tag);
+ 	payload.phy_id = cpu_to_le32(phy_id);
+ 
+-	return pm8001_mpi_build_cmd(pm8001_ha, 0, opcode, &payload,
++	ret = pm8001_mpi_build_cmd(pm8001_ha, 0, opcode, &payload,
+ 				    sizeof(payload), 0);
++	if (ret < 0)
++		pm8001_tag_free(pm8001_ha, tag);
++
++	return ret;
+ }
+ 
+ /*
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index 4833b8fe251b88..396fcf194b6b37 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -1899,7 +1899,7 @@ int scsi_scan_host_selected(struct Scsi_Host *shost, unsigned int channel,
+ 
+ 	return 0;
+ }
+-
++EXPORT_SYMBOL(scsi_scan_host_selected);
+ static void scsi_sysfs_add_devices(struct Scsi_Host *shost)
+ {
+ 	struct scsi_device *sdev;
+diff --git a/drivers/scsi/scsi_transport_sas.c b/drivers/scsi/scsi_transport_sas.c
+index 351b028ef8938e..d69c7c444a3116 100644
+--- a/drivers/scsi/scsi_transport_sas.c
++++ b/drivers/scsi/scsi_transport_sas.c
+@@ -40,6 +40,8 @@
+ #include <scsi/scsi_transport_sas.h>
+ 
+ #include "scsi_sas_internal.h"
++#include "scsi_priv.h"
++
+ struct sas_host_attrs {
+ 	struct list_head rphy_list;
+ 	struct mutex lock;
+@@ -1683,32 +1685,66 @@ int scsi_is_sas_rphy(const struct device *dev)
+ }
+ EXPORT_SYMBOL(scsi_is_sas_rphy);
+ 
+-
+-/*
+- * SCSI scan helper
+- */
+-
+-static int sas_user_scan(struct Scsi_Host *shost, uint channel,
+-		uint id, u64 lun)
++static void scan_channel_zero(struct Scsi_Host *shost, uint id, u64 lun)
+ {
+ 	struct sas_host_attrs *sas_host = to_sas_host_attrs(shost);
+ 	struct sas_rphy *rphy;
+ 
+-	mutex_lock(&sas_host->lock);
+ 	list_for_each_entry(rphy, &sas_host->rphy_list, list) {
+ 		if (rphy->identify.device_type != SAS_END_DEVICE ||
+ 		    rphy->scsi_target_id == -1)
+ 			continue;
+ 
+-		if ((channel == SCAN_WILD_CARD || channel == 0) &&
+-		    (id == SCAN_WILD_CARD || id == rphy->scsi_target_id)) {
++		if (id == SCAN_WILD_CARD || id == rphy->scsi_target_id) {
+ 			scsi_scan_target(&rphy->dev, 0, rphy->scsi_target_id,
+ 					 lun, SCSI_SCAN_MANUAL);
+ 		}
+ 	}
+-	mutex_unlock(&sas_host->lock);
++}
+ 
+-	return 0;
++/*
++ * SCSI scan helper
++ */
++
++static int sas_user_scan(struct Scsi_Host *shost, uint channel,
++		uint id, u64 lun)
++{
++	struct sas_host_attrs *sas_host = to_sas_host_attrs(shost);
++	int res = 0;
++	int i;
++
++	switch (channel) {
++	case 0:
++		mutex_lock(&sas_host->lock);
++		scan_channel_zero(shost, id, lun);
++		mutex_unlock(&sas_host->lock);
++		break;
++
++	case SCAN_WILD_CARD:
++		mutex_lock(&sas_host->lock);
++		scan_channel_zero(shost, id, lun);
++		mutex_unlock(&sas_host->lock);
++
++		for (i = 1; i <= shost->max_channel; i++) {
++			res = scsi_scan_host_selected(shost, i, id, lun,
++						      SCSI_SCAN_MANUAL);
++			if (res)
++				goto exit_scan;
++		}
++		break;
++
++	default:
++		if (channel < shost->max_channel) {
++			res = scsi_scan_host_selected(shost, channel, id, lun,
++						      SCSI_SCAN_MANUAL);
++		} else {
++			res = -EINVAL;
++		}
++		break;
++	}
++
++exit_scan:
++	return res;
+ }
+ 
+ 
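
With this rework, sas_user_scan() scans channel 0 from the rphy list, fans a
wildcard out to every remaining channel via scsi_scan_host_selected(), and
rejects channel numbers beyond the host's range. This path is driven from the
host's sysfs scan attribute; for example (host number illustrative):

	# scan channel 1, all targets, all LUNs
	echo "1 - -" > /sys/class/scsi_host/host0/scan
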
+diff --git a/drivers/soc/qcom/mdt_loader.c b/drivers/soc/qcom/mdt_loader.c
+index b2c0fb55d4ae67..44589d10b15b50 100644
+--- a/drivers/soc/qcom/mdt_loader.c
++++ b/drivers/soc/qcom/mdt_loader.c
+@@ -83,7 +83,7 @@ ssize_t qcom_mdt_get_size(const struct firmware *fw)
+ 	int i;
+ 
+ 	ehdr = (struct elf32_hdr *)fw->data;
+-	phdrs = (struct elf32_phdr *)(ehdr + 1);
++	phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff);
+ 
+ 	for (i = 0; i < ehdr->e_phnum; i++) {
+ 		phdr = &phdrs[i];
+@@ -135,7 +135,7 @@ void *qcom_mdt_read_metadata(const struct firmware *fw, size_t *data_len,
+ 	void *data;
+ 
+ 	ehdr = (struct elf32_hdr *)fw->data;
+-	phdrs = (struct elf32_phdr *)(ehdr + 1);
++	phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff);
+ 
+ 	if (ehdr->e_phnum < 2)
+ 		return ERR_PTR(-EINVAL);
+@@ -215,7 +215,7 @@ int qcom_mdt_pas_init(struct device *dev, const struct firmware *fw,
+ 	int i;
+ 
+ 	ehdr = (struct elf32_hdr *)fw->data;
+-	phdrs = (struct elf32_phdr *)(ehdr + 1);
++	phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff);
+ 
+ 	for (i = 0; i < ehdr->e_phnum; i++) {
+ 		phdr = &phdrs[i];
+@@ -270,7 +270,7 @@ static bool qcom_mdt_bins_are_split(const struct firmware *fw, const char *fw_na
+ 	int i;
+ 
+ 	ehdr = (struct elf32_hdr *)fw->data;
+-	phdrs = (struct elf32_phdr *)(ehdr + 1);
++	phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff);
+ 
+ 	for (i = 0; i < ehdr->e_phnum; i++) {
+ 		/*
+@@ -312,7 +312,7 @@ static int __qcom_mdt_load(struct device *dev, const struct firmware *fw,
+ 
+ 	is_split = qcom_mdt_bins_are_split(fw, fw_name);
+ 	ehdr = (struct elf32_hdr *)fw->data;
+-	phdrs = (struct elf32_phdr *)(ehdr + 1);
++	phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff);
+ 
+ 	for (i = 0; i < ehdr->e_phnum; i++) {
+ 		phdr = &phdrs[i];
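
All five mdt_loader hunks make the same correction: the ELF program header
table lives at the offset named by e_phoff and is only conventionally adjacent
to the ELF header, so '(ehdr + 1)' misparses any image with padding or extra
bytes between the two. A minimal sketch (the helper name is illustrative):

	#include <linux/elf.h>
	#include <linux/types.h>

	static const struct elf32_phdr *phdr_table(const void *image)
	{
		const struct elf32_hdr *ehdr = image;

		/* e_phoff is the table's byte offset from the file start. */
		return (const struct elf32_phdr *)((const u8 *)image +
						   ehdr->e_phoff);
	}
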
+diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
+index cb82e887b51d44..fdab2b1067dbb1 100644
+--- a/drivers/soc/qcom/rpmh-rsc.c
++++ b/drivers/soc/qcom/rpmh-rsc.c
+@@ -1072,7 +1072,7 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
+ 	drv->ver.minor = rsc_id & (MINOR_VER_MASK << MINOR_VER_SHIFT);
+ 	drv->ver.minor >>= MINOR_VER_SHIFT;
+ 
+-	if (drv->ver.major == 3)
++	if (drv->ver.major >= 3)
+ 		drv->regs = rpmh_rsc_reg_offset_ver_3_0;
+ 	else
+ 		drv->regs = rpmh_rsc_reg_offset_ver_2_7;
+diff --git a/drivers/soundwire/amd_manager.c b/drivers/soundwire/amd_manager.c
+index 7a671a7861979c..7ed9c8c0b4c8f1 100644
+--- a/drivers/soundwire/amd_manager.c
++++ b/drivers/soundwire/amd_manager.c
+@@ -1074,6 +1074,7 @@ static void amd_sdw_manager_remove(struct platform_device *pdev)
+ 	int ret;
+ 
+ 	pm_runtime_disable(&pdev->dev);
++	cancel_work_sync(&amd_manager->amd_sdw_work);
+ 	amd_disable_sdw_interrupts(amd_manager);
+ 	sdw_bus_master_delete(&amd_manager->bus);
+ 	ret = amd_disable_sdw_manager(amd_manager);
+@@ -1178,10 +1179,10 @@ static int __maybe_unused amd_pm_prepare(struct device *dev)
+ 	 * device is not in runtime suspend state, observed that device alerts are missing
+ 	 * without pm_prepare on AMD platforms in clockstop mode0.
+ 	 */
+-	if (amd_manager->power_mode_mask & AMD_SDW_CLK_STOP_MODE) {
+-		ret = pm_request_resume(dev);
++	if (amd_manager->power_mode_mask) {
++		ret = pm_runtime_resume(dev);
+ 		if (ret < 0) {
+-			dev_err(bus->dev, "pm_request_resume failed: %d\n", ret);
++			dev_err(bus->dev, "pm_runtime_resume failed: %d\n", ret);
+ 			return 0;
+ 		}
+ 	}
+diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c
+index 39aecd34c6414b..1a70f80c2514d3 100644
+--- a/drivers/soundwire/bus.c
++++ b/drivers/soundwire/bus.c
+@@ -1756,15 +1756,15 @@ static int sdw_handle_slave_alerts(struct sdw_slave *slave)
+ 
+ 		/* Update the Slave driver */
+ 		if (slave_notify) {
++			if (slave->prop.use_domain_irq && slave->irq)
++				handle_nested_irq(slave->irq);
++
+ 			mutex_lock(&slave->sdw_dev_lock);
+ 
+ 			if (slave->probed) {
+ 				struct device *dev = &slave->dev;
+ 				struct sdw_driver *drv = drv_to_sdw_driver(dev->driver);
+ 
+-				if (slave->prop.use_domain_irq && slave->irq)
+-					handle_nested_irq(slave->irq);
+-
+ 				if (drv->ops && drv->ops->interrupt_callback) {
+ 					slave_intr.sdca_cascade = sdca_cascade;
+ 					slave_intr.control_port = clear;
+diff --git a/drivers/staging/gpib/ni_usb/ni_usb_gpib.c b/drivers/staging/gpib/ni_usb/ni_usb_gpib.c
+index 9f1b9927f025cc..56d3b62249b8ab 100644
+--- a/drivers/staging/gpib/ni_usb/ni_usb_gpib.c
++++ b/drivers/staging/gpib/ni_usb/ni_usb_gpib.c
+@@ -2069,10 +2069,10 @@ static int ni_usb_hs_wait_for_ready(struct ni_usb_priv *ni_priv)
+ 		}
+ 		if (buffer[++j] != 0x0) { // [6]
+ 			ready = 1;
+-			// NI-USB-HS+ sends 0xf here
++			// NI-USB-HS+ sends 0xf or 0x19 here
+ 			if (buffer[j] != 0x2 && buffer[j] != 0xe && buffer[j] != 0xf &&
+-			    buffer[j] != 0x16)	{
+-				dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x2, 0xe, 0xf or 0x16\n",
++			    buffer[j] != 0x16 && buffer[j] != 0x19) {
++				dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x2, 0xe, 0xf, 0x16 or 0x19\n",
+ 					j, (int)buffer[j]);
+ 				unexpected = 1;
+ 			}
+@@ -2100,11 +2100,11 @@ static int ni_usb_hs_wait_for_ready(struct ni_usb_priv *ni_priv)
+ 				j, (int)buffer[j]);
+ 			unexpected = 1;
+ 		}
+-		if (buffer[++j] != 0x0) {
++		if (buffer[++j] != 0x0) { // [10] MC usb-488 sends 0x7 here, new HS+ sends 0x59
+ 			ready = 1;
+-			if (buffer[j] != 0x96 && buffer[j] != 0x7 && buffer[j] != 0x6e) {
+-// [10] MC usb-488 sends 0x7 here
+-				dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x96, 0x07 or 0x6e\n",
++			if (buffer[j] != 0x96 && buffer[j] != 0x7 && buffer[j] != 0x6e &&
++			    buffer[j] != 0x59) {
++				dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x96, 0x07, 0x6e or 0x59\n",
+ 					j, (int)buffer[j]);
+ 				unexpected = 1;
+ 			}
+diff --git a/drivers/target/target_core_fabric_lib.c b/drivers/target/target_core_fabric_lib.c
+index 43f47e3aa4482c..ec7bc6e3022891 100644
+--- a/drivers/target/target_core_fabric_lib.c
++++ b/drivers/target/target_core_fabric_lib.c
+@@ -257,11 +257,41 @@ static int iscsi_get_pr_transport_id_len(
+ 	return len;
+ }
+ 
+-static char *iscsi_parse_pr_out_transport_id(
++static void sas_parse_pr_out_transport_id(char *buf, char *i_str)
++{
++	char hex[17] = {};
++
++	bin2hex(hex, buf + 4, 8);
++	snprintf(i_str, TRANSPORT_IQN_LEN, "naa.%s", hex);
++}
++
++static void srp_parse_pr_out_transport_id(char *buf, char *i_str)
++{
++	char hex[33] = {};
++
++	bin2hex(hex, buf + 8, 16);
++	snprintf(i_str, TRANSPORT_IQN_LEN, "0x%s", hex);
++}
++
++static void fcp_parse_pr_out_transport_id(char *buf, char *i_str)
++{
++	snprintf(i_str, TRANSPORT_IQN_LEN, "%8phC", buf + 8);
++}
++
++static void sbp_parse_pr_out_transport_id(char *buf, char *i_str)
++{
++	char hex[17] = {};
++
++	bin2hex(hex, buf + 8, 8);
++	snprintf(i_str, TRANSPORT_IQN_LEN, "%s", hex);
++}
++
++static bool iscsi_parse_pr_out_transport_id(
+ 	struct se_portal_group *se_tpg,
+ 	char *buf,
+ 	u32 *out_tid_len,
+-	char **port_nexus_ptr)
++	char **port_nexus_ptr,
++	char *i_str)
+ {
+ 	char *p;
+ 	int i;
+@@ -282,7 +312,7 @@ static char *iscsi_parse_pr_out_transport_id(
+ 	if ((format_code != 0x00) && (format_code != 0x40)) {
+ 		pr_err("Illegal format code: 0x%02x for iSCSI"
+ 			" Initiator Transport ID\n", format_code);
+-		return NULL;
++		return false;
+ 	}
+ 	/*
+ 	 * If the caller wants the TransportID Length, we set that value for the
+@@ -306,7 +336,7 @@ static char *iscsi_parse_pr_out_transport_id(
+ 			pr_err("Unable to locate \",i,0x\" separator"
+ 				" for Initiator port identifier: %s\n",
+ 				&buf[4]);
+-			return NULL;
++			return false;
+ 		}
+ 		*p = '\0'; /* Terminate iSCSI Name */
+ 		p += 5; /* Skip over ",i,0x" separator */
+@@ -339,7 +369,8 @@ static char *iscsi_parse_pr_out_transport_id(
+ 	} else
+ 		*port_nexus_ptr = NULL;
+ 
+-	return &buf[4];
++	strscpy(i_str, &buf[4], TRANSPORT_IQN_LEN);
++	return true;
+ }
+ 
+ int target_get_pr_transport_id_len(struct se_node_acl *nacl,
+@@ -387,33 +418,35 @@ int target_get_pr_transport_id(struct se_node_acl *nacl,
+ 	}
+ }
+ 
+-const char *target_parse_pr_out_transport_id(struct se_portal_group *tpg,
+-		char *buf, u32 *out_tid_len, char **port_nexus_ptr)
++bool target_parse_pr_out_transport_id(struct se_portal_group *tpg,
++		char *buf, u32 *out_tid_len, char **port_nexus_ptr, char *i_str)
+ {
+-	u32 offset;
+-
+ 	switch (tpg->proto_id) {
+ 	case SCSI_PROTOCOL_SAS:
+ 		/*
+ 		 * Assume the FORMAT CODE 00b from spc4r17, 7.5.4.7 TransportID
+ 		 * for initiator ports using SCSI over SAS Serial SCSI Protocol.
+ 		 */
+-		offset = 4;
++		sas_parse_pr_out_transport_id(buf, i_str);
+ 		break;
+-	case SCSI_PROTOCOL_SBP:
+ 	case SCSI_PROTOCOL_SRP:
++		srp_parse_pr_out_transport_id(buf, i_str);
++		break;
+ 	case SCSI_PROTOCOL_FCP:
+-		offset = 8;
++		fcp_parse_pr_out_transport_id(buf, i_str);
++		break;
++	case SCSI_PROTOCOL_SBP:
++		sbp_parse_pr_out_transport_id(buf, i_str);
+ 		break;
+ 	case SCSI_PROTOCOL_ISCSI:
+ 		return iscsi_parse_pr_out_transport_id(tpg, buf, out_tid_len,
+-					port_nexus_ptr);
++					port_nexus_ptr, i_str);
+ 	default:
+ 		pr_err("Unknown proto_id: 0x%02x\n", tpg->proto_id);
+-		return NULL;
++		return false;
+ 	}
+ 
+ 	*port_nexus_ptr = NULL;
+ 	*out_tid_len = 24;
+-	return buf + offset;
++	return true;
+ }
+diff --git a/drivers/target/target_core_internal.h b/drivers/target/target_core_internal.h
+index 408be26d2e9b4d..20aab1f505655c 100644
+--- a/drivers/target/target_core_internal.h
++++ b/drivers/target/target_core_internal.h
+@@ -103,8 +103,8 @@ int	target_get_pr_transport_id_len(struct se_node_acl *nacl,
+ int	target_get_pr_transport_id(struct se_node_acl *nacl,
+ 		struct t10_pr_registration *pr_reg, int *format_code,
+ 		unsigned char *buf);
+-const char *target_parse_pr_out_transport_id(struct se_portal_group *tpg,
+-		char *buf, u32 *out_tid_len, char **port_nexus_ptr);
++bool target_parse_pr_out_transport_id(struct se_portal_group *tpg,
++		char *buf, u32 *out_tid_len, char **port_nexus_ptr, char *i_str);
+ 
+ /* target_core_hba.c */
+ struct se_hba *core_alloc_hba(const char *, u32, u32);
+diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c
+index 70905805cb1756..83e172c92238d9 100644
+--- a/drivers/target/target_core_pr.c
++++ b/drivers/target/target_core_pr.c
+@@ -1478,11 +1478,12 @@ core_scsi3_decode_spec_i_port(
+ 	LIST_HEAD(tid_dest_list);
+ 	struct pr_transport_id_holder *tidh_new, *tidh, *tidh_tmp;
+ 	unsigned char *buf, *ptr, proto_ident;
+-	const unsigned char *i_str = NULL;
++	unsigned char i_str[TRANSPORT_IQN_LEN];
+ 	char *iport_ptr = NULL, i_buf[PR_REG_ISID_ID_LEN];
+ 	sense_reason_t ret;
+ 	u32 tpdl, tid_len = 0;
+ 	u32 dest_rtpi = 0;
++	bool tid_found;
+ 
+ 	/*
+ 	 * Allocate a struct pr_transport_id_holder and setup the
+@@ -1571,9 +1572,9 @@ core_scsi3_decode_spec_i_port(
+ 			dest_rtpi = tmp_lun->lun_tpg->tpg_rtpi;
+ 
+ 			iport_ptr = NULL;
+-			i_str = target_parse_pr_out_transport_id(tmp_tpg,
+-					ptr, &tid_len, &iport_ptr);
+-			if (!i_str)
++			tid_found = target_parse_pr_out_transport_id(tmp_tpg,
++					ptr, &tid_len, &iport_ptr, i_str);
++			if (!tid_found)
+ 				continue;
+ 			/*
+ 			 * Determine if this SCSI device server requires that
+@@ -3153,13 +3154,14 @@ core_scsi3_emulate_pro_register_and_move(struct se_cmd *cmd, u64 res_key,
+ 	struct t10_pr_registration *pr_reg, *pr_res_holder, *dest_pr_reg;
+ 	struct t10_reservation *pr_tmpl = &dev->t10_pr;
+ 	unsigned char *buf;
+-	const unsigned char *initiator_str;
++	unsigned char initiator_str[TRANSPORT_IQN_LEN];
+ 	char *iport_ptr = NULL, i_buf[PR_REG_ISID_ID_LEN] = { };
+ 	u32 tid_len, tmp_tid_len;
+ 	int new_reg = 0, type, scope, matching_iname;
+ 	sense_reason_t ret;
+ 	unsigned short rtpi;
+ 	unsigned char proto_ident;
++	bool tid_found;
+ 
+ 	if (!se_sess || !se_lun) {
+ 		pr_err("SPC-3 PR: se_sess || struct se_lun is NULL!\n");
+@@ -3278,9 +3280,9 @@ core_scsi3_emulate_pro_register_and_move(struct se_cmd *cmd, u64 res_key,
+ 		ret = TCM_INVALID_PARAMETER_LIST;
+ 		goto out;
+ 	}
+-	initiator_str = target_parse_pr_out_transport_id(dest_se_tpg,
+-			&buf[24], &tmp_tid_len, &iport_ptr);
+-	if (!initiator_str) {
++	tid_found = target_parse_pr_out_transport_id(dest_se_tpg,
++			&buf[24], &tmp_tid_len, &iport_ptr, initiator_str);
++	if (!tid_found) {
+ 		pr_err("SPC-3 PR REGISTER_AND_MOVE: Unable to locate"
+ 			" initiator_str from Transport ID\n");
+ 		ret = TCM_INVALID_PARAMETER_LIST;
+diff --git a/drivers/thermal/qcom/qcom-spmi-temp-alarm.c b/drivers/thermal/qcom/qcom-spmi-temp-alarm.c
+index a81e7d6e865f91..4b91cc13ce3472 100644
+--- a/drivers/thermal/qcom/qcom-spmi-temp-alarm.c
++++ b/drivers/thermal/qcom/qcom-spmi-temp-alarm.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+  * Copyright (c) 2011-2015, 2017, 2020, The Linux Foundation. All rights reserved.
++ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+  */
+ 
+ #include <linux/bitops.h>
+@@ -16,6 +17,7 @@
+ 
+ #include "../thermal_hwmon.h"
+ 
++#define QPNP_TM_REG_DIG_MINOR		0x00
+ #define QPNP_TM_REG_DIG_MAJOR		0x01
+ #define QPNP_TM_REG_TYPE		0x04
+ #define QPNP_TM_REG_SUBTYPE		0x05
+@@ -31,7 +33,7 @@
+ #define STATUS_GEN2_STATE_MASK		GENMASK(6, 4)
+ #define STATUS_GEN2_STATE_SHIFT		4
+ 
+-#define SHUTDOWN_CTRL1_OVERRIDE_S2	BIT(6)
++#define SHUTDOWN_CTRL1_OVERRIDE_STAGE2	BIT(6)
+ #define SHUTDOWN_CTRL1_THRESHOLD_MASK	GENMASK(1, 0)
+ 
+ #define SHUTDOWN_CTRL1_RATE_25HZ	BIT(3)
+@@ -78,6 +80,7 @@ struct qpnp_tm_chip {
+ 	/* protects .thresh, .stage and chip registers */
+ 	struct mutex			lock;
+ 	bool				initialized;
++	bool				require_stage2_shutdown;
+ 
+ 	struct iio_channel		*adc;
+ 	const long			(*temp_map)[THRESH_COUNT][STAGE_COUNT];
+@@ -220,13 +223,13 @@ static int qpnp_tm_update_critical_trip_temp(struct qpnp_tm_chip *chip,
+ {
+ 	long stage2_threshold_min = (*chip->temp_map)[THRESH_MIN][1];
+ 	long stage2_threshold_max = (*chip->temp_map)[THRESH_MAX][1];
+-	bool disable_s2_shutdown = false;
++	bool disable_stage2_shutdown = false;
+ 	u8 reg;
+ 
+ 	WARN_ON(!mutex_is_locked(&chip->lock));
+ 
+ 	/*
+-	 * Default: S2 and S3 shutdown enabled, thresholds at
++	 * Default: Stage 2 and Stage 3 shutdown enabled, thresholds at
+ 	 * lowest threshold set, monitoring at 25Hz
+ 	 */
+ 	reg = SHUTDOWN_CTRL1_RATE_25HZ;
+@@ -241,12 +244,12 @@ static int qpnp_tm_update_critical_trip_temp(struct qpnp_tm_chip *chip,
+ 		chip->thresh = THRESH_MAX -
+ 			((stage2_threshold_max - temp) /
+ 			 TEMP_THRESH_STEP);
+-		disable_s2_shutdown = true;
++		disable_stage2_shutdown = true;
+ 	} else {
+ 		chip->thresh = THRESH_MAX;
+ 
+ 		if (chip->adc)
+-			disable_s2_shutdown = true;
++			disable_stage2_shutdown = true;
+ 		else
+ 			dev_warn(chip->dev,
+ 				 "No ADC is configured and critical temperature %d mC is above the maximum stage 2 threshold of %ld mC! Configuring stage 2 shutdown at %ld mC.\n",
+@@ -255,8 +258,8 @@ static int qpnp_tm_update_critical_trip_temp(struct qpnp_tm_chip *chip,
+ 
+ skip:
+ 	reg |= chip->thresh;
+-	if (disable_s2_shutdown)
+-		reg |= SHUTDOWN_CTRL1_OVERRIDE_S2;
++	if (disable_stage2_shutdown && !chip->require_stage2_shutdown)
++		reg |= SHUTDOWN_CTRL1_OVERRIDE_STAGE2;
+ 
+ 	return qpnp_tm_write(chip, QPNP_TM_REG_SHUTDOWN_CTRL1, reg);
+ }
+@@ -350,8 +353,8 @@ static int qpnp_tm_probe(struct platform_device *pdev)
+ {
+ 	struct qpnp_tm_chip *chip;
+ 	struct device_node *node;
+-	u8 type, subtype, dig_major;
+-	u32 res;
++	u8 type, subtype, dig_major, dig_minor;
++	u32 res, dig_revision;
+ 	int ret, irq;
+ 
+ 	node = pdev->dev.of_node;
+@@ -402,6 +405,11 @@ static int qpnp_tm_probe(struct platform_device *pdev)
+ 		return dev_err_probe(&pdev->dev, ret,
+ 				     "could not read dig_major\n");
+ 
++	ret = qpnp_tm_read(chip, QPNP_TM_REG_DIG_MINOR, &dig_minor);
++	if (ret < 0)
++		return dev_err_probe(&pdev->dev, ret,
++				     "could not read dig_minor\n");
++
+ 	if (type != QPNP_TM_TYPE || (subtype != QPNP_TM_SUBTYPE_GEN1
+ 				     && subtype != QPNP_TM_SUBTYPE_GEN2)) {
+ 		dev_err(&pdev->dev, "invalid type 0x%02x or subtype 0x%02x\n",
+@@ -415,6 +423,23 @@ static int qpnp_tm_probe(struct platform_device *pdev)
+ 	else
+ 		chip->temp_map = &temp_map_gen1;
+ 
++	if (chip->subtype == QPNP_TM_SUBTYPE_GEN2) {
++		dig_revision = (dig_major << 8) | dig_minor;
++		/*
++		 * Check if stage 2 automatic partial shutdown must remain
++		 * enabled to avoid potential repeated faults upon reaching
++		 * over-temperature stage 3.
++		 */
++		switch (dig_revision) {
++		case 0x0001:
++		case 0x0002:
++		case 0x0100:
++		case 0x0101:
++			chip->require_stage2_shutdown = true;
++			break;
++		}
++	}
++
+ 	/*
+ 	 * Register the sensor before initializing the hardware to be able to
+ 	 * read the trip points. get_temp() returns the default temperature
+diff --git a/drivers/thermal/thermal_sysfs.c b/drivers/thermal/thermal_sysfs.c
+index 24b9055a0b6c51..d80612506a334a 100644
+--- a/drivers/thermal/thermal_sysfs.c
++++ b/drivers/thermal/thermal_sysfs.c
+@@ -40,10 +40,13 @@ temp_show(struct device *dev, struct device_attribute *attr, char *buf)
+ 
+ 	ret = thermal_zone_get_temp(tz, &temperature);
+ 
+-	if (ret)
+-		return ret;
++	if (!ret)
++		return sprintf(buf, "%d\n", temperature);
+ 
+-	return sprintf(buf, "%d\n", temperature);
++	if (ret == -EAGAIN)
++		return -ENODATA;
++
++	return ret;
+ }
+ 
+ static ssize_t
+diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c
+index 144d0232a70c11..b692618ed9d4f4 100644
+--- a/drivers/thunderbolt/domain.c
++++ b/drivers/thunderbolt/domain.c
+@@ -36,7 +36,7 @@ static bool match_service_id(const struct tb_service_id *id,
+ 			return false;
+ 	}
+ 
+-	if (id->match_flags & TBSVC_MATCH_PROTOCOL_VERSION) {
++	if (id->match_flags & TBSVC_MATCH_PROTOCOL_REVISION) {
+ 		if (id->protocol_revision != svc->prtcrevs)
+ 			return false;
+ 	}
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 88669972d9a036..8658377dbe1c19 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -1337,28 +1337,28 @@ static void uart_sanitize_serial_rs485_delays(struct uart_port *port,
+ 	if (!port->rs485_supported.delay_rts_before_send) {
+ 		if (rs485->delay_rts_before_send) {
+ 			dev_warn_ratelimited(port->dev,
+-				"%s (%d): RTS delay before sending not supported\n",
++				"%s (%u): RTS delay before sending not supported\n",
+ 				port->name, port->line);
+ 		}
+ 		rs485->delay_rts_before_send = 0;
+ 	} else if (rs485->delay_rts_before_send > RS485_MAX_RTS_DELAY) {
+ 		rs485->delay_rts_before_send = RS485_MAX_RTS_DELAY;
+ 		dev_warn_ratelimited(port->dev,
+-			"%s (%d): RTS delay before sending clamped to %u ms\n",
++			"%s (%u): RTS delay before sending clamped to %u ms\n",
+ 			port->name, port->line, rs485->delay_rts_before_send);
+ 	}
+ 
+ 	if (!port->rs485_supported.delay_rts_after_send) {
+ 		if (rs485->delay_rts_after_send) {
+ 			dev_warn_ratelimited(port->dev,
+-				"%s (%d): RTS delay after sending not supported\n",
++				"%s (%u): RTS delay after sending not supported\n",
+ 				port->name, port->line);
+ 		}
+ 		rs485->delay_rts_after_send = 0;
+ 	} else if (rs485->delay_rts_after_send > RS485_MAX_RTS_DELAY) {
+ 		rs485->delay_rts_after_send = RS485_MAX_RTS_DELAY;
+ 		dev_warn_ratelimited(port->dev,
+-			"%s (%d): RTS delay after sending clamped to %u ms\n",
++			"%s (%u): RTS delay after sending clamped to %u ms\n",
+ 			port->name, port->line, rs485->delay_rts_after_send);
+ 	}
+ }
+@@ -1388,14 +1388,14 @@ static void uart_sanitize_serial_rs485(struct uart_port *port, struct serial_rs4
+ 			rs485->flags &= ~SER_RS485_RTS_AFTER_SEND;
+ 
+ 			dev_warn_ratelimited(port->dev,
+-				"%s (%d): invalid RTS setting, using RTS_ON_SEND instead\n",
++				"%s (%u): invalid RTS setting, using RTS_ON_SEND instead\n",
+ 				port->name, port->line);
+ 		} else {
+ 			rs485->flags |= SER_RS485_RTS_AFTER_SEND;
+ 			rs485->flags &= ~SER_RS485_RTS_ON_SEND;
+ 
+ 			dev_warn_ratelimited(port->dev,
+-				"%s (%d): invalid RTS setting, using RTS_AFTER_SEND instead\n",
++				"%s (%u): invalid RTS setting, using RTS_AFTER_SEND instead\n",
+ 				port->name, port->line);
+ 		}
+ 	}
+@@ -1834,7 +1834,7 @@ static void uart_wait_until_sent(struct tty_struct *tty, int timeout)
+ 
+ 	expire = jiffies + timeout;
+ 
+-	pr_debug("uart_wait_until_sent(%d), jiffies=%lu, expire=%lu...\n",
++	pr_debug("uart_wait_until_sent(%u), jiffies=%lu, expire=%lu...\n",
+ 		port->line, jiffies, expire);
+ 
+ 	/*
+@@ -2029,7 +2029,7 @@ static void uart_line_info(struct seq_file *m, struct uart_state *state)
+ 		return;
+ 
+ 	mmio = uport->iotype >= UPIO_MEM;
+-	seq_printf(m, "%d: uart:%s %s%08llX irq:%d",
++	seq_printf(m, "%u: uart:%s %s%08llX irq:%u",
+ 			uport->line, uart_type(uport),
+ 			mmio ? "mmio:0x" : "port:",
+ 			mmio ? (unsigned long long)uport->mapbase
+@@ -2051,18 +2051,18 @@ static void uart_line_info(struct seq_file *m, struct uart_state *state)
+ 		if (pm_state != UART_PM_STATE_ON)
+ 			uart_change_pm(state, pm_state);
+ 
+-		seq_printf(m, " tx:%d rx:%d",
++		seq_printf(m, " tx:%u rx:%u",
+ 				uport->icount.tx, uport->icount.rx);
+ 		if (uport->icount.frame)
+-			seq_printf(m, " fe:%d",	uport->icount.frame);
++			seq_printf(m, " fe:%u",	uport->icount.frame);
+ 		if (uport->icount.parity)
+-			seq_printf(m, " pe:%d",	uport->icount.parity);
++			seq_printf(m, " pe:%u",	uport->icount.parity);
+ 		if (uport->icount.brk)
+-			seq_printf(m, " brk:%d", uport->icount.brk);
++			seq_printf(m, " brk:%u", uport->icount.brk);
+ 		if (uport->icount.overrun)
+-			seq_printf(m, " oe:%d", uport->icount.overrun);
++			seq_printf(m, " oe:%u", uport->icount.overrun);
+ 		if (uport->icount.buf_overrun)
+-			seq_printf(m, " bo:%d", uport->icount.buf_overrun);
++			seq_printf(m, " bo:%u", uport->icount.buf_overrun);
+ 
+ #define INFOBIT(bit, str) \
+ 	if (uport->mctrl & (bit)) \
+@@ -2554,7 +2554,7 @@ uart_report_port(struct uart_driver *drv, struct uart_port *port)
+ 		break;
+ 	}
+ 
+-	pr_info("%s%s%s at %s (irq = %d, base_baud = %d) is a %s\n",
++	pr_info("%s%s%s at %s (irq = %u, base_baud = %u) is a %s\n",
+ 	       port->dev ? dev_name(port->dev) : "",
+ 	       port->dev ? ": " : "",
+ 	       port->name,
+@@ -2562,7 +2562,7 @@ uart_report_port(struct uart_driver *drv, struct uart_port *port)
+ 
+ 	/* The magic multiplier feature is a bit obscure, so report it too.  */
+ 	if (port->flags & UPF_MAGIC_MULTIPLIER)
+-		pr_info("%s%s%s extra baud rates supported: %d, %d",
++		pr_info("%s%s%s extra baud rates supported: %u, %u",
+ 			port->dev ? dev_name(port->dev) : "",
+ 			port->dev ? ": " : "",
+ 			port->name,
+@@ -2961,7 +2961,7 @@ static ssize_t close_delay_show(struct device *dev,
+ 	struct tty_port *port = dev_get_drvdata(dev);
+ 
+ 	uart_get_info(port, &tmp);
+-	return sprintf(buf, "%d\n", tmp.close_delay);
++	return sprintf(buf, "%u\n", tmp.close_delay);
+ }
+ 
+ static ssize_t closing_wait_show(struct device *dev,
+@@ -2971,7 +2971,7 @@ static ssize_t closing_wait_show(struct device *dev,
+ 	struct tty_port *port = dev_get_drvdata(dev);
+ 
+ 	uart_get_info(port, &tmp);
+-	return sprintf(buf, "%d\n", tmp.closing_wait);
++	return sprintf(buf, "%u\n", tmp.closing_wait);
+ }
+ 
+ static ssize_t custom_divisor_show(struct device *dev,
+@@ -2991,7 +2991,7 @@ static ssize_t io_type_show(struct device *dev,
+ 	struct tty_port *port = dev_get_drvdata(dev);
+ 
+ 	uart_get_info(port, &tmp);
+-	return sprintf(buf, "%d\n", tmp.io_type);
++	return sprintf(buf, "%u\n", tmp.io_type);
+ }
+ 
+ static ssize_t iomem_base_show(struct device *dev,
+@@ -3011,7 +3011,7 @@ static ssize_t iomem_reg_shift_show(struct device *dev,
+ 	struct tty_port *port = dev_get_drvdata(dev);
+ 
+ 	uart_get_info(port, &tmp);
+-	return sprintf(buf, "%d\n", tmp.iomem_reg_shift);
++	return sprintf(buf, "%u\n", tmp.iomem_reg_shift);
+ }
+ 
+ static ssize_t console_show(struct device *dev,
+@@ -3147,7 +3147,7 @@ static int serial_core_add_one_port(struct uart_driver *drv, struct uart_port *u
+ 	state->pm_state = UART_PM_STATE_UNDEFINED;
+ 	uart_port_set_cons(uport, drv->cons);
+ 	uport->minor = drv->tty_driver->minor_start + uport->line;
+-	uport->name = kasprintf(GFP_KERNEL, "%s%d", drv->dev_name,
++	uport->name = kasprintf(GFP_KERNEL, "%s%u", drv->dev_name,
+ 				drv->tty_driver->name_base + uport->line);
+ 	if (!uport->name)
+ 		return -ENOMEM;
+@@ -3186,7 +3186,7 @@ static int serial_core_add_one_port(struct uart_driver *drv, struct uart_port *u
+ 		device_set_wakeup_capable(tty_dev, 1);
+ 	} else {
+ 		uport->flags |= UPF_DEAD;
+-		dev_err(uport->dev, "Cannot register tty device on line %d\n",
++		dev_err(uport->dev, "Cannot register tty device on line %u\n",
+ 		       uport->line);
+ 	}
+ 
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index c2ecfa3c83496f..5a334e370f4d66 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1520,6 +1520,12 @@ static int acm_probe(struct usb_interface *intf,
+ 			goto err_remove_files;
+ 	}
+ 
++	if (quirks & CLEAR_HALT_CONDITIONS) {
++		/* errors intentionally ignored */
++		usb_clear_halt(usb_dev, acm->in);
++		usb_clear_halt(usb_dev, acm->out);
++	}
++
+ 	tty_dev = tty_port_register_device(&acm->port, acm_tty_driver, minor,
+ 			&control_interface->dev);
+ 	if (IS_ERR(tty_dev)) {
+@@ -1527,11 +1533,6 @@ static int acm_probe(struct usb_interface *intf,
+ 		goto err_release_data_interface;
+ 	}
+ 
+-	if (quirks & CLEAR_HALT_CONDITIONS) {
+-		usb_clear_halt(usb_dev, acm->in);
+-		usb_clear_halt(usb_dev, acm->out);
+-	}
+-
+ 	dev_info(&intf->dev, "ttyACM%d: USB ACM device\n", minor);
+ 
+ 	return 0;
+diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c
+index 13bd4ec4ea5f7a..76259739471a3c 100644
+--- a/drivers/usb/core/config.c
++++ b/drivers/usb/core/config.c
+@@ -107,8 +107,14 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
+ 	 */
+ 	desc = (struct usb_ss_ep_comp_descriptor *) buffer;
+ 
+-	if (desc->bDescriptorType != USB_DT_SS_ENDPOINT_COMP ||
+-			size < USB_DT_SS_EP_COMP_SIZE) {
++	if (size < USB_DT_SS_EP_COMP_SIZE) {
++		dev_notice(ddev,
++			   "invalid SuperSpeed endpoint companion descriptor "
++			   "of length %d, skipping\n", size);
++		return;
++	}
++
++	if (desc->bDescriptorType != USB_DT_SS_ENDPOINT_COMP) {
+ 		dev_notice(ddev, "No SuperSpeed endpoint companion for config %d "
+ 				" interface %d altsetting %d ep %d: "
+ 				"using minimum values\n",
+diff --git a/drivers/usb/core/urb.c b/drivers/usb/core/urb.c
+index 5e52a35486afbe..120de3c499d226 100644
+--- a/drivers/usb/core/urb.c
++++ b/drivers/usb/core/urb.c
+@@ -500,7 +500,7 @@ int usb_submit_urb(struct urb *urb, gfp_t mem_flags)
+ 
+ 	/* Check that the pipe's type matches the endpoint's type */
+ 	if (usb_pipe_type_check(urb->dev, urb->pipe))
+-		dev_WARN(&dev->dev, "BOGUS urb xfer, pipe %x != type %x\n",
++		dev_warn_once(&dev->dev, "BOGUS urb xfer, pipe %x != type %x\n",
+ 			usb_pipetype(urb->pipe), pipetypes[xfertype]);
+ 
+ 	/* Check against a simple/standard policy */
+diff --git a/drivers/usb/dwc3/dwc3-xilinx.c b/drivers/usb/dwc3/dwc3-xilinx.c
+index 4ca7f6240d07df..09c3c5c226ab40 100644
+--- a/drivers/usb/dwc3/dwc3-xilinx.c
++++ b/drivers/usb/dwc3/dwc3-xilinx.c
+@@ -422,6 +422,7 @@ static const struct dev_pm_ops dwc3_xlnx_dev_pm_ops = {
+ static struct platform_driver dwc3_xlnx_driver = {
+ 	.probe		= dwc3_xlnx_probe,
+ 	.remove		= dwc3_xlnx_remove,
++	.shutdown	= dwc3_xlnx_remove,
+ 	.driver		= {
+ 		.name		= "dwc3-xilinx",
+ 		.of_match_table	= dwc3_xlnx_of_match,
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index a5e7980ac1031f..2a503de0a8813a 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -1166,6 +1166,8 @@ int xhci_setup_addressable_virt_dev(struct xhci_hcd *xhci, struct usb_device *ud
+ 	ep0_ctx->deq = cpu_to_le64(dev->eps[0].ring->first_seg->dma |
+ 				   dev->eps[0].ring->cycle_state);
+ 
++	ep0_ctx->tx_info = cpu_to_le32(EP_AVG_TRB_LENGTH(8));
++
+ 	trace_xhci_setup_addressable_virt_device(dev);
+ 
+ 	/* Steps 7 and 8 were done in xhci_alloc_virt_device() */
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index b720e04ce7d86c..fc3ae7318046b2 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1376,12 +1376,15 @@ static void xhci_kill_endpoint_urbs(struct xhci_hcd *xhci,
+  */
+ void xhci_hc_died(struct xhci_hcd *xhci)
+ {
++	bool notify;
+ 	int i, j;
+ 
+ 	if (xhci->xhc_state & XHCI_STATE_DYING)
+ 		return;
+ 
+-	xhci_err(xhci, "xHCI host controller not responding, assume dead\n");
++	notify = !(xhci->xhc_state & XHCI_STATE_REMOVING);
++	if (notify)
++		xhci_err(xhci, "xHCI host controller not responding, assume dead\n");
+ 	xhci->xhc_state |= XHCI_STATE_DYING;
+ 
+ 	xhci_cleanup_command_queue(xhci);
+@@ -1395,7 +1398,7 @@ void xhci_hc_died(struct xhci_hcd *xhci)
+ 	}
+ 
+ 	/* inform usb core hc died if PCI remove isn't already handling it */
+-	if (!(xhci->xhc_state & XHCI_STATE_REMOVING))
++	if (notify)
+ 		usb_hc_died(xhci_to_hcd(xhci));
+ }
+ 
+@@ -4337,7 +4340,8 @@ static int queue_command(struct xhci_hcd *xhci, struct xhci_command *cmd,
+ 
+ 	if ((xhci->xhc_state & XHCI_STATE_DYING) ||
+ 		(xhci->xhc_state & XHCI_STATE_HALTED)) {
+-		xhci_dbg(xhci, "xHCI dying or halted, can't queue_command\n");
++		xhci_dbg(xhci, "xHCI dying or halted, can't queue_command. state: 0x%x\n",
++			 xhci->xhc_state);
+ 		return -ESHUTDOWN;
+ 	}
+ 
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index cb9f35acb1f94f..cb29aa49ceba03 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -120,7 +120,8 @@ int xhci_halt(struct xhci_hcd *xhci)
+ 	ret = xhci_handshake(&xhci->op_regs->status,
+ 			STS_HALT, STS_HALT, XHCI_MAX_HALT_USEC);
+ 	if (ret) {
+-		xhci_warn(xhci, "Host halt failed, %d\n", ret);
++		if (!(xhci->xhc_state & XHCI_STATE_DYING))
++			xhci_warn(xhci, "Host halt failed, %d\n", ret);
+ 		return ret;
+ 	}
+ 
+@@ -179,7 +180,8 @@ int xhci_reset(struct xhci_hcd *xhci, u64 timeout_us)
+ 	state = readl(&xhci->op_regs->status);
+ 
+ 	if (state == ~(u32)0) {
+-		xhci_warn(xhci, "Host not accessible, reset failed.\n");
++		if (!(xhci->xhc_state & XHCI_STATE_DYING))
++			xhci_warn(xhci, "Host not accessible, reset failed.\n");
+ 		return -ENODEV;
+ 	}
+ 
+diff --git a/drivers/usb/typec/mux/intel_pmc_mux.c b/drivers/usb/typec/mux/intel_pmc_mux.c
+index 65dda9183e6fb6..1698428654abbe 100644
+--- a/drivers/usb/typec/mux/intel_pmc_mux.c
++++ b/drivers/usb/typec/mux/intel_pmc_mux.c
+@@ -754,7 +754,7 @@ static int pmc_usb_probe(struct platform_device *pdev)
+ 
+ 	pmc->ipc = devm_intel_scu_ipc_dev_get(&pdev->dev);
+ 	if (!pmc->ipc)
+-		return -ENODEV;
++		return -EPROBE_DEFER;
+ 
+ 	pmc->dev = &pdev->dev;
+ 
+diff --git a/drivers/usb/typec/tcpm/fusb302.c b/drivers/usb/typec/tcpm/fusb302.c
+index f15c63d3a8f441..870a71f953f6cd 100644
+--- a/drivers/usb/typec/tcpm/fusb302.c
++++ b/drivers/usb/typec/tcpm/fusb302.c
+@@ -104,6 +104,7 @@ struct fusb302_chip {
+ 	bool vconn_on;
+ 	bool vbus_on;
+ 	bool charge_on;
++	bool pd_rx_on;
+ 	bool vbus_present;
+ 	enum typec_cc_polarity cc_polarity;
+ 	enum typec_cc_status cc1;
+@@ -841,6 +842,11 @@ static int tcpm_set_pd_rx(struct tcpc_dev *dev, bool on)
+ 	int ret = 0;
+ 
+ 	mutex_lock(&chip->lock);
++	if (chip->pd_rx_on == on) {
++		fusb302_log(chip, "pd is already %s", str_on_off(on));
++		goto done;
++	}
++
+ 	ret = fusb302_pd_rx_flush(chip);
+ 	if (ret < 0) {
+ 		fusb302_log(chip, "cannot flush pd rx buffer, ret=%d", ret);
+@@ -863,6 +869,8 @@ static int tcpm_set_pd_rx(struct tcpc_dev *dev, bool on)
+ 			    str_on_off(on), ret);
+ 		goto done;
+ 	}
++
++	chip->pd_rx_on = on;
+ 	fusb302_log(chip, "pd := %s", str_on_off(on));
+ done:
+ 	mutex_unlock(&chip->lock);
+diff --git a/drivers/usb/typec/tcpm/tcpci_maxim_core.c b/drivers/usb/typec/tcpm/tcpci_maxim_core.c
+index b5a5ed40faea9c..ff3604be79da73 100644
+--- a/drivers/usb/typec/tcpm/tcpci_maxim_core.c
++++ b/drivers/usb/typec/tcpm/tcpci_maxim_core.c
+@@ -421,21 +421,6 @@ static irqreturn_t max_tcpci_isr(int irq, void *dev_id)
+ 	return IRQ_WAKE_THREAD;
+ }
+ 
+-static int max_tcpci_init_alert(struct max_tcpci_chip *chip, struct i2c_client *client)
+-{
+-	int ret;
+-
+-	ret = devm_request_threaded_irq(chip->dev, client->irq, max_tcpci_isr, max_tcpci_irq,
+-					(IRQF_TRIGGER_LOW | IRQF_ONESHOT), dev_name(chip->dev),
+-					chip);
+-
+-	if (ret < 0)
+-		return ret;
+-
+-	enable_irq_wake(client->irq);
+-	return 0;
+-}
+-
+ static int max_tcpci_start_toggling(struct tcpci *tcpci, struct tcpci_data *tdata,
+ 				    enum typec_cc_status cc)
+ {
+@@ -532,7 +517,9 @@ static int max_tcpci_probe(struct i2c_client *client)
+ 
+ 	chip->port = tcpci_get_tcpm_port(chip->tcpci);
+ 
+-	ret = max_tcpci_init_alert(chip, client);
++	ret = devm_request_threaded_irq(&client->dev, client->irq, max_tcpci_isr, max_tcpci_irq,
++					(IRQF_TRIGGER_LOW | IRQF_ONESHOT), dev_name(chip->dev),
++					chip);
+ 	if (ret < 0)
+ 		return dev_err_probe(&client->dev, ret,
+ 				     "IRQ initialization failed\n");
+@@ -544,6 +531,32 @@ static int max_tcpci_probe(struct i2c_client *client)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_PM_SLEEP
++static int max_tcpci_resume(struct device *dev)
++{
++	struct i2c_client *client = to_i2c_client(dev);
++	int ret = 0;
++
++	if (client->irq && device_may_wakeup(dev))
++		ret = disable_irq_wake(client->irq);
++
++	return ret;
++}
++
++static int max_tcpci_suspend(struct device *dev)
++{
++	struct i2c_client *client = to_i2c_client(dev);
++	int ret = 0;
++
++	if (client->irq && device_may_wakeup(dev))
++		ret = enable_irq_wake(client->irq);
++
++	return ret;
++}
++#endif /* CONFIG_PM_SLEEP */
++
++static SIMPLE_DEV_PM_OPS(max_tcpci_pm_ops, max_tcpci_suspend, max_tcpci_resume);
++
+ static const struct i2c_device_id max_tcpci_id[] = {
+ 	{ "maxtcpc" },
+ 	{ }
+@@ -562,6 +575,7 @@ static struct i2c_driver max_tcpci_i2c_driver = {
+ 	.driver = {
+ 		.name = "maxtcpc",
+ 		.of_match_table = of_match_ptr(max_tcpci_of_match),
++		.pm = &max_tcpci_pm_ops,
+ 	},
+ 	.probe = max_tcpci_probe,
+ 	.id_table = max_tcpci_id,
+diff --git a/drivers/usb/typec/ucsi/cros_ec_ucsi.c b/drivers/usb/typec/ucsi/cros_ec_ucsi.c
+index 4ec1c6d2231096..eed2a7d0ebc634 100644
+--- a/drivers/usb/typec/ucsi/cros_ec_ucsi.c
++++ b/drivers/usb/typec/ucsi/cros_ec_ucsi.c
+@@ -137,6 +137,7 @@ static int cros_ucsi_sync_control(struct ucsi *ucsi, u64 cmd, u32 *cci,
+ static const struct ucsi_operations cros_ucsi_ops = {
+ 	.read_version = cros_ucsi_read_version,
+ 	.read_cci = cros_ucsi_read_cci,
++	.poll_cci = cros_ucsi_read_cci,
+ 	.read_message_in = cros_ucsi_read_message_in,
+ 	.async_control = cros_ucsi_async_control,
+ 	.sync_control = cros_ucsi_sync_control,
+diff --git a/drivers/usb/typec/ucsi/psy.c b/drivers/usb/typec/ucsi/psy.c
+index 62ac6973040502..62a9d68bb66d21 100644
+--- a/drivers/usb/typec/ucsi/psy.c
++++ b/drivers/usb/typec/ucsi/psy.c
+@@ -164,7 +164,7 @@ static int ucsi_psy_get_current_max(struct ucsi_connector *con,
+ 	case UCSI_CONSTAT_PWR_OPMODE_DEFAULT:
+ 	/* UCSI can't tell b/w DCP/CDP or USB2/3x1/3x2 SDP chargers */
+ 	default:
+-		val->intval = 0;
++		val->intval = UCSI_TYPEC_DEFAULT_CURRENT * 1000;
+ 		break;
+ 	}
+ 	return 0;
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 01ce858a1a2b34..8ff31963970bb3 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -1246,6 +1246,7 @@ static void ucsi_handle_connector_change(struct work_struct *work)
+ 
+ 	if (change & UCSI_CONSTAT_POWER_DIR_CHANGE) {
+ 		typec_set_pwr_role(con->port, role);
++		ucsi_port_psy_changed(con);
+ 
+ 		/* Complete pending power role swap */
+ 		if (!completion_done(&con->complete))
+diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h
+index 70910232a05d74..1ae068a9284438 100644
+--- a/drivers/usb/typec/ucsi/ucsi.h
++++ b/drivers/usb/typec/ucsi/ucsi.h
+@@ -479,9 +479,10 @@ struct ucsi {
+ #define UCSI_MAX_SVID		5
+ #define UCSI_MAX_ALTMODES	(UCSI_MAX_SVID * 6)
+ 
+-#define UCSI_TYPEC_VSAFE5V	5000
+-#define UCSI_TYPEC_1_5_CURRENT	1500
+-#define UCSI_TYPEC_3_0_CURRENT	3000
++#define UCSI_TYPEC_VSAFE5V		5000
++#define UCSI_TYPEC_DEFAULT_CURRENT	 100
++#define UCSI_TYPEC_1_5_CURRENT		1500
++#define UCSI_TYPEC_3_0_CURRENT		3000
+ 
+ struct ucsi_connector {
+ 	int num;
+diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c
+index 11eda6b207f13f..6d36b3b4cd30c5 100644
+--- a/drivers/vfio/pci/mlx5/cmd.c
++++ b/drivers/vfio/pci/mlx5/cmd.c
+@@ -1538,8 +1538,8 @@ int mlx5vf_start_page_tracker(struct vfio_device *vdev,
+ 	log_max_msg_size = MLX5_CAP_ADV_VIRTUALIZATION(mdev, pg_track_log_max_msg_size);
+ 	max_msg_size = (1ULL << log_max_msg_size);
+ 	/* The RQ must hold at least 4 WQEs/messages for successful QP creation */
+-	if (rq_size < 4 * max_msg_size)
+-		rq_size = 4 * max_msg_size;
++	if (rq_size < 4ULL * max_msg_size)
++		rq_size = 4ULL * max_msg_size;
+ 
+ 	memset(tracker, 0, sizeof(*tracker));
+ 	tracker->uar = mlx5_get_uars_page(mdev);
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index ba5d91e576af16..a685e01f73f5e2 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -648,6 +648,13 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
+ 
+ 	while (npage) {
+ 		if (!batch->size) {
++			/*
++			 * Large mappings may take a while to repeatedly refill
++			 * the batch, so conditionally relinquish the CPU when
++			 * needed to avoid stalls.
++			 */
++			cond_resched();
++
+ 			/* Empty batch, so refill it. */
+ 			ret = vaddr_get_pfns(mm, vaddr, npage, dma->prot,
+ 					     &pfn, batch);
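
The cond_resched() above runs once per batch refill, which bounds how long a
very large pin can monopolize a CPU on non-preemptible kernels. The shape of
the loop, reduced to a sketch (pin_one_batch() is a hypothetical stand-in for
the refill-and-pin step):

	#include <linux/sched.h>

	static unsigned long pin_one_batch(void)
	{
		return 1;	/* stub standing in for vaddr_get_pfns() */
	}

	static void pin_all(unsigned long npage)
	{
		while (npage) {
			cond_resched();	/* yield if rescheduling is due */
			npage -= pin_one_batch();
		}
	}
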
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index 79b0b7cd28601a..71604668e53f60 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -2971,6 +2971,9 @@ int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
+ 	}
+ 	r = __vhost_add_used_n(vq, heads, count);
+ 
++	if (r < 0)
++		return r;
++
+ 	/* Make sure buffer is written before we update index. */
+ 	smp_wmb();
+ 	if (vhost_put_used_idx(vq)) {
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index 55c6686f091e79..c21484d15f0cbd 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -660,7 +660,7 @@ config FB_ATMEL
+ 
+ config FB_NVIDIA
+ 	tristate "nVidia Framebuffer Support"
+-	depends on FB && PCI
++	depends on FB && PCI && HAS_IOPORT
+ 	select FB_CFB_FILLRECT
+ 	select FB_CFB_COPYAREA
+ 	select FB_CFB_IMAGEBLIT
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 2b2d36c021ba55..9eb880a11fd8ce 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -825,7 +825,8 @@ static void con2fb_init_display(struct vc_data *vc, struct fb_info *info,
+ 				   fg_vc->vc_rows);
+ 	}
+ 
+-	update_screen(vc_cons[fg_console].d);
++	if (fg_console != unit)
++		update_screen(vc_cons[fg_console].d);
+ }
+ 
+ /**
+@@ -1362,6 +1363,7 @@ static void fbcon_set_disp(struct fb_info *info, struct fb_var_screeninfo *var,
+ 	struct vc_data *svc;
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 	int rows, cols;
++	unsigned long ret = 0;
+ 
+ 	p = &fb_display[unit];
+ 
+@@ -1412,11 +1414,10 @@ static void fbcon_set_disp(struct fb_info *info, struct fb_var_screeninfo *var,
+ 	rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ 	cols /= vc->vc_font.width;
+ 	rows /= vc->vc_font.height;
+-	vc_resize(vc, cols, rows);
++	ret = vc_resize(vc, cols, rows);
+ 
+-	if (con_is_visible(vc)) {
++	if (con_is_visible(vc) && !ret)
+ 		update_screen(vc);
+-	}
+ }
+ 
+ static __inline__ void ywrap_up(struct vc_data *vc, int count)
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index eca2498f243685..6a033bf17ab602 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -403,6 +403,9 @@ static int do_register_framebuffer(struct fb_info *fb_info)
+ 		if (!registered_fb[i])
+ 			break;
+ 
++	if (i >= FB_MAX)
++		return -ENXIO;
++
+ 	if (!fb_info->modelist.prev || !fb_info->modelist.next)
+ 		INIT_LIST_HEAD(&fb_info->modelist);
+ 
+diff --git a/drivers/virt/coco/efi_secret/efi_secret.c b/drivers/virt/coco/efi_secret/efi_secret.c
+index 1864f9f80617e0..f2da4819ec3b7d 100644
+--- a/drivers/virt/coco/efi_secret/efi_secret.c
++++ b/drivers/virt/coco/efi_secret/efi_secret.c
+@@ -136,15 +136,7 @@ static int efi_secret_unlink(struct inode *dir, struct dentry *dentry)
+ 		if (s->fs_files[i] == dentry)
+ 			s->fs_files[i] = NULL;
+ 
+-	/*
+-	 * securityfs_remove tries to lock the directory's inode, but we reach
+-	 * the unlink callback when it's already locked
+-	 */
+-	inode_unlock(dir);
+-	securityfs_remove(dentry);
+-	inode_lock(dir);
+-
+-	return 0;
++	return simple_unlink(inode, dentry);
+ }
+ 
+ static const struct inode_operations efi_secret_dir_inode_operations = {
+diff --git a/drivers/watchdog/dw_wdt.c b/drivers/watchdog/dw_wdt.c
+index 26efca9ae0e7d2..c3fbb6068c5201 100644
+--- a/drivers/watchdog/dw_wdt.c
++++ b/drivers/watchdog/dw_wdt.c
+@@ -644,6 +644,8 @@ static int dw_wdt_drv_probe(struct platform_device *pdev)
+ 	} else {
+ 		wdd->timeout = DW_WDT_DEFAULT_SECONDS;
+ 		watchdog_init_timeout(wdd, 0, dev);
++		/* Limit timeout value to hardware constraints. */
++		dw_wdt_set_timeout(wdd, wdd->timeout);
+ 	}
+ 
+ 	platform_set_drvdata(pdev, dw_wdt);
+diff --git a/drivers/watchdog/iTCO_wdt.c b/drivers/watchdog/iTCO_wdt.c
+index 7672582fa40764..e9bf53929b53d0 100644
+--- a/drivers/watchdog/iTCO_wdt.c
++++ b/drivers/watchdog/iTCO_wdt.c
+@@ -601,7 +601,11 @@ static int iTCO_wdt_probe(struct platform_device *pdev)
+ 	/* Check that the heartbeat value is within its range;
+ 	   if not, reset to the default */
+ 	if (iTCO_wdt_set_timeout(&p->wddev, heartbeat)) {
+-		iTCO_wdt_set_timeout(&p->wddev, WATCHDOG_TIMEOUT);
++		ret = iTCO_wdt_set_timeout(&p->wddev, WATCHDOG_TIMEOUT);
++		if (ret != 0) {
++			dev_err(dev, "Failed to set watchdog timeout (%d)\n", WATCHDOG_TIMEOUT);
++			return ret;
++		}
+ 		dev_info(dev, "timeout value out of range, using %d\n",
+ 			WATCHDOG_TIMEOUT);
+ 	}
+diff --git a/drivers/watchdog/sbsa_gwdt.c b/drivers/watchdog/sbsa_gwdt.c
+index 5f23913ce3b49c..6ce1bfb3906413 100644
+--- a/drivers/watchdog/sbsa_gwdt.c
++++ b/drivers/watchdog/sbsa_gwdt.c
+@@ -75,11 +75,17 @@
+ #define SBSA_GWDT_VERSION_MASK  0xF
+ #define SBSA_GWDT_VERSION_SHIFT 16
+ 
++#define SBSA_GWDT_IMPL_MASK	0x7FF
++#define SBSA_GWDT_IMPL_SHIFT	0
++#define SBSA_GWDT_IMPL_MEDIATEK	0x426
++
+ /**
+  * struct sbsa_gwdt - Internal representation of the SBSA GWDT
+  * @wdd:		kernel watchdog_device structure
+  * @clk:		store the System Counter clock frequency, in Hz.
+  * @version:            store the architecture version
++ * @need_ws0_race_workaround:
++ *			indicate whether to adjust wdd->timeout to avoid a race with WS0
+  * @refresh_base:	Virtual address of the watchdog refresh frame
+  * @control_base:	Virtual address of the watchdog control frame
+  */
+@@ -87,6 +93,7 @@ struct sbsa_gwdt {
+ 	struct watchdog_device	wdd;
+ 	u32			clk;
+ 	int			version;
++	bool			need_ws0_race_workaround;
+ 	void __iomem		*refresh_base;
+ 	void __iomem		*control_base;
+ };
+@@ -161,6 +168,31 @@ static int sbsa_gwdt_set_timeout(struct watchdog_device *wdd,
+ 		 */
+ 		sbsa_gwdt_reg_write(((u64)gwdt->clk / 2) * timeout, gwdt);
+ 
++	/*
++	 * Some watchdog hardware has a race condition where it will ignore
++	 * sbsa_gwdt_keepalive() if it is called at the exact moment that a
++	 * timeout occurs and WS0 is being asserted. Unfortunately, the default
++	 * behavior of the watchdog core is very likely to trigger this race
++	 * when action=0 because it programs WOR to be half of the desired
++	 * timeout, and watchdog_next_keepalive() chooses the exact same time to
++	 * send keepalive pings.
++	 *
++	 * This triggers a race where sbsa_gwdt_keepalive() can be called right
++	 * as WS0 is being asserted, and affected hardware will ignore that
++	 * write and continue to assert WS0. After another (timeout / 2)
++	 * seconds, the same race happens again. If the driver wins then the
++	 * explicit refresh will reset WS0 to false but if the hardware wins,
++	 * then WS1 is asserted and the system resets.
++	 *
++	 * Avoid the problem by scheduling keepalive heartbeats one second later
++	 * than the WOR timeout.
++	 *
++	 * This workaround might not be needed in a future revision of the
++	 * hardware.
++	 */
++	if (gwdt->need_ws0_race_workaround)
++		wdd->min_hw_heartbeat_ms = timeout * 500 + 1000;
++
+ 	return 0;
+ }
+ 
+@@ -202,12 +234,15 @@ static int sbsa_gwdt_keepalive(struct watchdog_device *wdd)
+ static void sbsa_gwdt_get_version(struct watchdog_device *wdd)
+ {
+ 	struct sbsa_gwdt *gwdt = watchdog_get_drvdata(wdd);
+-	int ver;
++	int iidr, ver, impl;
+ 
+-	ver = readl(gwdt->control_base + SBSA_GWDT_W_IIDR);
+-	ver = (ver >> SBSA_GWDT_VERSION_SHIFT) & SBSA_GWDT_VERSION_MASK;
++	iidr = readl(gwdt->control_base + SBSA_GWDT_W_IIDR);
++	ver = (iidr >> SBSA_GWDT_VERSION_SHIFT) & SBSA_GWDT_VERSION_MASK;
++	impl = (iidr >> SBSA_GWDT_IMPL_SHIFT) & SBSA_GWDT_IMPL_MASK;
+ 
+ 	gwdt->version = ver;
++	gwdt->need_ws0_race_workaround =
++		!action && (impl == SBSA_GWDT_IMPL_MEDIATEK);
+ }
+ 
+ static int sbsa_gwdt_start(struct watchdog_device *wdd)
+@@ -299,6 +334,15 @@ static int sbsa_gwdt_probe(struct platform_device *pdev)
+ 	else
+ 		wdd->max_hw_heartbeat_ms = GENMASK_ULL(47, 0) / gwdt->clk * 1000;
+ 
++	if (gwdt->need_ws0_race_workaround) {
++		/*
++		 * A timeout of 3 seconds means that WOR will be set to 1.5
++		 * seconds and the heartbeat will be scheduled every 2.5
++		 * seconds.
++		 */
++		wdd->min_timeout = 3;
++	}
++
+ 	status = readl(cf_base + SBSA_GWDT_WCS);
+ 	if (status & SBSA_GWDT_WCS_WS1) {
+ 		dev_warn(dev, "System reset by WDT.\n");
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index a8129f1ce78c7b..289163ca9ac704 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -34,6 +34,19 @@ int btrfs_should_fragment_free_space(const struct btrfs_block_group *block_group
+ }
+ #endif
+ 
++static inline bool has_unwritten_metadata(struct btrfs_block_group *block_group)
++{
++	/* The meta_write_pointer is available only on the zoned setup. */
++	if (!btrfs_is_zoned(block_group->fs_info))
++		return false;
++
++	if (block_group->flags & BTRFS_BLOCK_GROUP_DATA)
++		return false;
++
++	return block_group->start + block_group->alloc_offset >
++		block_group->meta_write_pointer;
++}
++
+ /*
+  * Return target flags in extended format or 0 if restripe for this chunk_type
+  * is not in progress
+@@ -1246,6 +1259,15 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+ 		goto out;
+ 
+ 	spin_lock(&block_group->lock);
++	/*
++	 * Hitting this WARN means we removed a block group with an unwritten
++	 * region. It will cause "unable to find chunk map for logical" errors.
++	 */
++	if (WARN_ON(has_unwritten_metadata(block_group)))
++		btrfs_warn(fs_info,
++			   "block group %llu is removed before metadata write out",
++			   block_group->start);
++
+ 	set_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags);
+ 
+ 	/*
+@@ -1589,8 +1611,9 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
+ 		 * needing to allocate extents from the block group.
+ 		 */
+ 		used = btrfs_space_info_used(space_info, true);
+-		if (space_info->total_bytes - block_group->length < used &&
+-		    block_group->zone_unusable < block_group->length) {
++		if ((space_info->total_bytes - block_group->length < used &&
++		     block_group->zone_unusable < block_group->length) ||
++		    has_unwritten_metadata(block_group)) {
+ 			/*
+ 			 * Add a reference for the list, compensate for the ref
+ 			 * drop under the "next" label for the
+@@ -1619,8 +1642,10 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
+ 		ret = btrfs_zone_finish(block_group);
+ 		if (ret < 0) {
+ 			btrfs_dec_block_group_ro(block_group);
+-			if (ret == -EAGAIN)
++			if (ret == -EAGAIN) {
++				btrfs_link_bg_list(block_group, &retry_list);
+ 				ret = 0;
++			}
+ 			goto next;
+ 		}
+ 
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 648531fe09002c..8abc066ce51fb4 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -2872,6 +2872,7 @@ static noinline int insert_new_root(struct btrfs_trans_handle *trans,
+ 	if (ret < 0) {
+ 		int ret2;
+ 
++		btrfs_clear_buffer_dirty(trans, c);
+ 		ret2 = btrfs_free_tree_block(trans, btrfs_root_id(root), c, 0, 1);
+ 		if (ret2 < 0)
+ 			btrfs_abort_transaction(trans, ret2);
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 957230abd8271c..74582f3ee53b78 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -3625,6 +3625,21 @@ btrfs_release_block_group(struct btrfs_block_group *cache,
+ 	btrfs_put_block_group(cache);
+ }
+ 
++static bool find_free_extent_check_size_class(const struct find_free_extent_ctl *ffe_ctl,
++					      const struct btrfs_block_group *bg)
++{
++	if (ffe_ctl->policy == BTRFS_EXTENT_ALLOC_ZONED)
++		return true;
++	if (!btrfs_block_group_should_use_size_class(bg))
++		return true;
++	if (ffe_ctl->loop >= LOOP_WRONG_SIZE_CLASS)
++		return true;
++	if (ffe_ctl->loop >= LOOP_UNSET_SIZE_CLASS &&
++	    bg->size_class == BTRFS_BG_SZ_NONE)
++		return true;
++	return ffe_ctl->size_class == bg->size_class;
++}
++
+ /*
+  * Helper function for find_free_extent().
+  *
+@@ -3646,7 +3661,8 @@ static int find_free_extent_clustered(struct btrfs_block_group *bg,
+ 	if (!cluster_bg)
+ 		goto refill_cluster;
+ 	if (cluster_bg != bg && (cluster_bg->ro ||
+-	    !block_group_bits(cluster_bg, ffe_ctl->flags)))
++	    !block_group_bits(cluster_bg, ffe_ctl->flags) ||
++	    !find_free_extent_check_size_class(ffe_ctl, cluster_bg)))
+ 		goto release_cluster;
+ 
+ 	offset = btrfs_alloc_from_cluster(cluster_bg, last_ptr,
+@@ -4202,21 +4218,6 @@ static int find_free_extent_update_loop(struct btrfs_fs_info *fs_info,
+ 	return -ENOSPC;
+ }
+ 
+-static bool find_free_extent_check_size_class(struct find_free_extent_ctl *ffe_ctl,
+-					      struct btrfs_block_group *bg)
+-{
+-	if (ffe_ctl->policy == BTRFS_EXTENT_ALLOC_ZONED)
+-		return true;
+-	if (!btrfs_block_group_should_use_size_class(bg))
+-		return true;
+-	if (ffe_ctl->loop >= LOOP_WRONG_SIZE_CLASS)
+-		return true;
+-	if (ffe_ctl->loop >= LOOP_UNSET_SIZE_CLASS &&
+-	    bg->size_class == BTRFS_BG_SZ_NONE)
+-		return true;
+-	return ffe_ctl->size_class == bg->size_class;
+-}
+-
+ static int prepare_allocation_clustered(struct btrfs_fs_info *fs_info,
+ 					struct find_free_extent_ctl *ffe_ctl,
+ 					struct btrfs_space_info *space_info,
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index f18f4d59d389e4..ec83a24d463f02 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -2028,7 +2028,7 @@ static int nocow_one_range(struct btrfs_inode *inode, struct folio *locked_folio
+ 	 * cleared by the caller.
+ 	 */
+ 	if (ret < 0)
+-		btrfs_cleanup_ordered_extents(inode, file_pos, end);
++		btrfs_cleanup_ordered_extents(inode, file_pos, len);
+ 	return ret;
+ }
+ 
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index b70ef4455610a0..d250a658eab863 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -4832,7 +4832,8 @@ static int btrfs_uring_encoded_read(struct io_uring_cmd *cmd, unsigned int issue
+ #if defined(CONFIG_64BIT) && defined(CONFIG_COMPAT)
+ 		copy_end = offsetofend(struct btrfs_ioctl_encoded_io_args_32, flags);
+ #else
+-		return -ENOTTY;
++		ret = -ENOTTY;
++		goto out_acct;
+ #endif
+ 	} else {
+ 		copy_end = copy_end_kernel;
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index d6fa36674270f7..294e11cb5b7ec6 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -630,22 +630,30 @@ bool btrfs_check_quota_leak(const struct btrfs_fs_info *fs_info)
+ 
+ /*
+  * This is called from close_ctree() or open_ctree() or btrfs_quota_disable(),
+- * first two are in single-threaded paths.And for the third one, we have set
+- * quota_root to be null with qgroup_lock held before, so it is safe to clean
+- * up the in-memory structures without qgroup_lock held.
++ * first two are in single-threaded paths.
+  */
+ void btrfs_free_qgroup_config(struct btrfs_fs_info *fs_info)
+ {
+ 	struct rb_node *n;
+ 	struct btrfs_qgroup *qgroup;
+ 
++	/*
++	 * btrfs_quota_disable() can be called concurrently with
++	 * btrfs_qgroup_rescan() -> qgroup_rescan_zero_tracking(), so take the
++	 * lock.
++	 */
++	spin_lock(&fs_info->qgroup_lock);
+ 	while ((n = rb_first(&fs_info->qgroup_tree))) {
+ 		qgroup = rb_entry(n, struct btrfs_qgroup, node);
+ 		rb_erase(n, &fs_info->qgroup_tree);
+ 		__del_qgroup_rb(qgroup);
++		spin_unlock(&fs_info->qgroup_lock);
+ 		btrfs_sysfs_del_one_qgroup(fs_info, qgroup);
+ 		kfree(qgroup);
++		spin_lock(&fs_info->qgroup_lock);
+ 	}
++	spin_unlock(&fs_info->qgroup_lock);
++
+ 	/*
+ 	 * We call btrfs_free_qgroup_config() when unmounting
+ 	 * filesystem and disabling quota, so we set qgroup_ulist
+@@ -1354,11 +1362,14 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
+ 
+ 	/*
+ 	 * We have nothing held here and no trans handle, just return the error
+-	 * if there is one.
++	 * if there is one and set back the quota enabled bit since we didn't
++	 * actually disable quotas.
+ 	 */
+ 	ret = flush_reservations(fs_info);
+-	if (ret)
++	if (ret) {
++		set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
+ 		return ret;
++	}
+ 
+ 	/*
+ 	 * 1 For the root item
+@@ -1470,7 +1481,6 @@ static int __qgroup_excl_accounting(struct btrfs_fs_info *fs_info, u64 ref_root,
+ 				    struct btrfs_qgroup *src, int sign)
+ {
+ 	struct btrfs_qgroup *qgroup;
+-	struct btrfs_qgroup *cur;
+ 	LIST_HEAD(qgroup_list);
+ 	u64 num_bytes = src->excl;
+ 	int ret = 0;
+@@ -1480,7 +1490,7 @@ static int __qgroup_excl_accounting(struct btrfs_fs_info *fs_info, u64 ref_root,
+ 		goto out;
+ 
+ 	qgroup_iterator_add(&qgroup_list, qgroup);
+-	list_for_each_entry(cur, &qgroup_list, iterator) {
++	list_for_each_entry(qgroup, &qgroup_list, iterator) {
+ 		struct btrfs_qgroup_list *glist;
+ 
+ 		qgroup->rfer += sign * num_bytes;
+@@ -1679,9 +1689,6 @@ int btrfs_create_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
+ 	struct btrfs_qgroup *prealloc = NULL;
+ 	int ret = 0;
+ 
+-	if (btrfs_qgroup_mode(fs_info) == BTRFS_QGROUP_MODE_DISABLED)
+-		return 0;
+-
+ 	mutex_lock(&fs_info->qgroup_ioctl_lock);
+ 	if (!fs_info->quota_root) {
+ 		ret = -ENOTCONN;
+@@ -4037,12 +4044,21 @@ btrfs_qgroup_rescan(struct btrfs_fs_info *fs_info)
+ 	qgroup_rescan_zero_tracking(fs_info);
+ 
+ 	mutex_lock(&fs_info->qgroup_rescan_lock);
+-	fs_info->qgroup_rescan_running = true;
+-	btrfs_queue_work(fs_info->qgroup_rescan_workers,
+-			 &fs_info->qgroup_rescan_work);
++	/*
++	 * The rescan worker is only for full accounting qgroups, check if it's
++	 * enabled as it is pointless to queue it otherwise. A concurrent quota
++	 * disable may also have just cleared BTRFS_FS_QUOTA_ENABLED.
++	 */
++	if (btrfs_qgroup_full_accounting(fs_info)) {
++		fs_info->qgroup_rescan_running = true;
++		btrfs_queue_work(fs_info->qgroup_rescan_workers,
++				 &fs_info->qgroup_rescan_work);
++	} else {
++		ret = -ENOTCONN;
++	}
+ 	mutex_unlock(&fs_info->qgroup_rescan_lock);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ int btrfs_qgroup_wait_for_completion(struct btrfs_fs_info *fs_info,
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index e17bcb03459519..6a32bf2b5670a2 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -593,6 +593,25 @@ static struct btrfs_root *create_reloc_root(struct btrfs_trans_handle *trans,
+ 	if (btrfs_root_id(root) == objectid) {
+ 		u64 commit_root_gen;
+ 
++		/*
++		 * Relocation will wait for cleaner thread, and any half-dropped
++		 * subvolume will be fully cleaned up at mount time.
++		 * So here we shouldn't hit a subvolume with non-zero drop_progress.
++		 *
++		 * If this isn't the case, error out since it can make us attempt to
++		 * drop references for extents that were already dropped before.
++		 */
++		if (unlikely(btrfs_disk_key_objectid(&root->root_item.drop_progress))) {
++			struct btrfs_key cpu_key;
++
++			btrfs_disk_key_to_cpu(&cpu_key, &root->root_item.drop_progress);
++			btrfs_err(fs_info,
++	"cannot relocate partially dropped subvolume %llu, drop progress key (%llu %u %llu)",
++				  objectid, cpu_key.objectid, cpu_key.type, cpu_key.offset);
++			ret = -EUCLEAN;
++			goto fail;
++		}
++
+ 		/* called by btrfs_init_reloc_root */
+ 		ret = btrfs_copy_root(trans, root, root->commit_root, &eb,
+ 				      BTRFS_TREE_RELOC_OBJECTID);
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 0c8c58c4f29b78..186a79b824f12d 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -4,6 +4,7 @@
+  */
+ 
+ #include <linux/bsearch.h>
++#include <linux/falloc.h>
+ #include <linux/fs.h>
+ #include <linux/file.h>
+ #include <linux/sort.h>
+@@ -5465,6 +5466,30 @@ static int send_update_extent(struct send_ctx *sctx,
+ 	return ret;
+ }
+ 
++static int send_fallocate(struct send_ctx *sctx, u32 mode, u64 offset, u64 len)
++{
++	struct fs_path *path;
++	int ret;
++
++	path = get_cur_inode_path(sctx);
++	if (IS_ERR(path))
++		return PTR_ERR(path);
++
++	ret = begin_cmd(sctx, BTRFS_SEND_C_FALLOCATE);
++	if (ret < 0)
++		return ret;
++
++	TLV_PUT_PATH(sctx, BTRFS_SEND_A_PATH, path);
++	TLV_PUT_U32(sctx, BTRFS_SEND_A_FALLOCATE_MODE, mode);
++	TLV_PUT_U64(sctx, BTRFS_SEND_A_FILE_OFFSET, offset);
++	TLV_PUT_U64(sctx, BTRFS_SEND_A_SIZE, len);
++
++	ret = send_cmd(sctx);
++
++tlv_put_failure:
++	return ret;
++}
++
+ static int send_hole(struct send_ctx *sctx, u64 end)
+ {
+ 	struct fs_path *p = NULL;
+@@ -5472,6 +5497,14 @@ static int send_hole(struct send_ctx *sctx, u64 end)
+ 	u64 offset = sctx->cur_inode_last_extent;
+ 	int ret = 0;
+ 
++	/*
++	 * Starting with send stream v2 we have fallocate and can use it to
++	 * punch holes instead of sending writes full of zeroes.
++	 */
++	if (proto_cmd_ok(sctx, BTRFS_SEND_C_FALLOCATE))
++		return send_fallocate(sctx, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
++				      offset, end - offset);
++
+ 	/*
+ 	 * A hole that starts at EOF or beyond it. Since we do not yet support
+ 	 * fallocate (for extent preallocation and hole punching), sending a
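
On the receiving side the new command maps to an ordinary hole punch. A
minimal user-space sketch of that operation (standalone C; the file name is
invented for illustration):

/*
 * Sketch of the hole punch a receiver performs for the new
 * BTRFS_SEND_C_FALLOCATE command; illustrative only.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("received-file", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Punch a 1 MiB hole at offset 4096 without changing i_size. */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      4096, 1 << 20) < 0)
		perror("fallocate");
	close(fd);
	return 0;
}
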
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index f26a394a9ec5f0..2a14cc56956c0d 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -1736,8 +1736,10 @@ static noinline int create_pending_snapshot(struct btrfs_trans_handle *trans,
+ 
+ 	ret = btrfs_create_qgroup(trans, objectid);
+ 	if (ret && ret != -EEXIST) {
+-		btrfs_abort_transaction(trans, ret);
+-		goto fail;
++		if (ret != -ENOTCONN || btrfs_qgroup_enabled(fs_info)) {
++			btrfs_abort_transaction(trans, ret);
++			goto fail;
++		}
+ 	}
+ 
+ 	/*
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index e05140ce95be9f..1b03e020db4a8f 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -321,8 +321,7 @@ struct walk_control {
+ 
+ 	/*
+ 	 * Ignore any items from the inode currently being processed. Needs
+-	 * to be set every time we find a BTRFS_INODE_ITEM_KEY and we are in
+-	 * the LOG_WALK_REPLAY_INODES stage.
++	 * to be set every time we find a BTRFS_INODE_ITEM_KEY.
+ 	 */
+ 	bool ignore_cur_inode;
+ 
+@@ -1380,6 +1379,8 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 	dir = btrfs_iget_logging(parent_objectid, root);
+ 	if (IS_ERR(dir)) {
+ 		ret = PTR_ERR(dir);
++		if (ret == -ENOENT)
++			ret = 0;
+ 		dir = NULL;
+ 		goto out;
+ 	}
+@@ -1395,6 +1396,8 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 		if (log_ref_ver) {
+ 			ret = extref_get_fields(eb, ref_ptr, &name,
+ 						&ref_index, &parent_objectid);
++			if (ret)
++				goto out;
+ 			/*
+ 			 * parent object can change from one array
+ 			 * item to another.
+@@ -1404,14 +1407,30 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 				if (IS_ERR(dir)) {
+ 					ret = PTR_ERR(dir);
+ 					dir = NULL;
++					/*
++					 * A new parent dir may not have been
++					 * logged and may not exist in the subvolume
++					 * tree, see the comment above before
++					 * the loop when getting the first
++					 * parent dir.
++					 */
++					if (ret == -ENOENT) {
++						/*
++						 * The next extref may refer to
++						 * another parent dir that
++						 * exists, so continue.
++						 */
++						ret = 0;
++						goto next;
++					}
+ 					goto out;
+ 				}
+ 			}
+ 		} else {
+ 			ret = ref_get_fields(eb, ref_ptr, &name, &ref_index);
++			if (ret)
++				goto out;
+ 		}
+-		if (ret)
+-			goto out;
+ 
+ 		ret = inode_in_dir(root, path, btrfs_ino(dir), btrfs_ino(inode),
+ 				   ref_index, &name);
+@@ -1445,10 +1464,11 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 		}
+ 		/* Else, ret == 1, we already have a perfect match, we're done. */
+ 
++next:
+ 		ref_ptr = (unsigned long)(ref_ptr + ref_struct_size) + name.len;
+ 		kfree(name.name);
+ 		name.name = NULL;
+-		if (log_ref_ver) {
++		if (log_ref_ver && dir) {
+ 			iput(&dir->vfs_inode);
+ 			dir = NULL;
+ 		}
+@@ -2410,23 +2430,30 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 
+ 	nritems = btrfs_header_nritems(eb);
+ 	for (i = 0; i < nritems; i++) {
+-		btrfs_item_key_to_cpu(eb, &key, i);
++		struct btrfs_inode_item *inode_item;
+ 
+-		/* inode keys are done during the first stage */
+-		if (key.type == BTRFS_INODE_ITEM_KEY &&
+-		    wc->stage == LOG_WALK_REPLAY_INODES) {
+-			struct btrfs_inode_item *inode_item;
+-			u32 mode;
++		btrfs_item_key_to_cpu(eb, &key, i);
+ 
+-			inode_item = btrfs_item_ptr(eb, i,
+-					    struct btrfs_inode_item);
++		if (key.type == BTRFS_INODE_ITEM_KEY) {
++			inode_item = btrfs_item_ptr(eb, i, struct btrfs_inode_item);
+ 			/*
+-			 * If we have a tmpfile (O_TMPFILE) that got fsync'ed
+-			 * and never got linked before the fsync, skip it, as
+-			 * replaying it is pointless since it would be deleted
+-			 * later. We skip logging tmpfiles, but it's always
+-			 * possible we are replaying a log created with a kernel
+-			 * that used to log tmpfiles.
++			 * An inode with no links is either:
++			 *
++			 * 1) A tmpfile (O_TMPFILE) that got fsync'ed and never
++			 *    got linked before the fsync, skip it, as replaying
++			 *    it is pointless since it would be deleted later.
++			 *    We skip logging tmpfiles, but it's always possible
++			 *    we are replaying a log created with a kernel that
++			 *    used to log tmpfiles;
++			 *
++			 * 2) A non-tmpfile which got its last link deleted
++			 *    while holding an open fd on it and later got
++			 *    fsynced through that fd. We always log the
++			 *    parent inodes when inode->last_unlink_trans is
++			 *    set to the current transaction, so ignore all the
++			 *    inode items for this inode. We will delete the
++			 *    inode when processing the parent directory with
++			 *    replay_dir_deletes().
+ 			 */
+ 			if (btrfs_inode_nlink(eb, inode_item) == 0) {
+ 				wc->ignore_cur_inode = true;
+@@ -2434,8 +2461,14 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 			} else {
+ 				wc->ignore_cur_inode = false;
+ 			}
+-			ret = replay_xattr_deletes(wc->trans, root, log,
+-						   path, key.objectid);
++		}
++
++		/* Inode keys are done during the first stage. */
++		if (key.type == BTRFS_INODE_ITEM_KEY &&
++		    wc->stage == LOG_WALK_REPLAY_INODES) {
++			u32 mode;
++
++			ret = replay_xattr_deletes(wc->trans, root, log, path, key.objectid);
+ 			if (ret)
+ 				break;
+ 			mode = btrfs_inode_mode(eb, inode_item);
+@@ -2516,9 +2549,8 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 			   key.type == BTRFS_INODE_EXTREF_KEY) {
+ 			ret = add_inode_ref(wc->trans, root, log, path,
+ 					    eb, i, &key);
+-			if (ret && ret != -ENOENT)
++			if (ret)
+ 				break;
+-			ret = 0;
+ 		} else if (key.type == BTRFS_EXTENT_DATA_KEY) {
+ 			ret = replay_one_extent(wc->trans, root, path,
+ 						eb, i, &key);
+@@ -2539,14 +2571,14 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ /*
+  * Correctly adjust the reserved bytes occupied by a log tree extent buffer
+  */
+-static void unaccount_log_buffer(struct btrfs_fs_info *fs_info, u64 start)
++static int unaccount_log_buffer(struct btrfs_fs_info *fs_info, u64 start)
+ {
+ 	struct btrfs_block_group *cache;
+ 
+ 	cache = btrfs_lookup_block_group(fs_info, start);
+ 	if (!cache) {
+ 		btrfs_err(fs_info, "unable to find block group for %llu", start);
+-		return;
++		return -ENOENT;
+ 	}
+ 
+ 	spin_lock(&cache->space_info->lock);
+@@ -2557,27 +2589,22 @@ static void unaccount_log_buffer(struct btrfs_fs_info *fs_info, u64 start)
+ 	spin_unlock(&cache->space_info->lock);
+ 
+ 	btrfs_put_block_group(cache);
++
++	return 0;
+ }
+ 
+ static int clean_log_buffer(struct btrfs_trans_handle *trans,
+ 			    struct extent_buffer *eb)
+ {
+-	int ret;
+-
+ 	btrfs_tree_lock(eb);
+ 	btrfs_clear_buffer_dirty(trans, eb);
+ 	wait_on_extent_buffer_writeback(eb);
+ 	btrfs_tree_unlock(eb);
+ 
+-	if (trans) {
+-		ret = btrfs_pin_reserved_extent(trans, eb);
+-		if (ret)
+-			return ret;
+-	} else {
+-		unaccount_log_buffer(eb->fs_info, eb->start);
+-	}
++	if (trans)
++		return btrfs_pin_reserved_extent(trans, eb);
+ 
+-	return 0;
++	return unaccount_log_buffer(eb->fs_info, eb->start);
+ }
+ 
+ static noinline int walk_down_log_tree(struct btrfs_trans_handle *trans,
+@@ -4208,6 +4235,9 @@ static void fill_inode_item(struct btrfs_trans_handle *trans,
+ 	btrfs_set_token_timespec_nsec(&token, &item->ctime,
+ 				      inode_get_ctime_nsec(inode));
+ 
++	btrfs_set_timespec_sec(leaf, &item->otime, BTRFS_I(inode)->i_otime_sec);
++	btrfs_set_timespec_nsec(leaf, &item->otime, BTRFS_I(inode)->i_otime_nsec);
++
+ 	/*
+ 	 * We do not need to set the nbytes field, in fact during a fast fsync
+ 	 * its value may not even be correct, since a fast fsync does not wait
+@@ -7276,11 +7306,14 @@ int btrfs_recover_log_trees(struct btrfs_root *log_root_tree)
+ 
+ 		wc.replay_dest->log_root = log;
+ 		ret = btrfs_record_root_in_trans(trans, wc.replay_dest);
+-		if (ret)
++		if (ret) {
+ 			/* The loop needs to continue due to the root refs */
+ 			btrfs_abort_transaction(trans, ret);
+-		else
++		} else {
+ 			ret = walk_log_tree(trans, log, &wc);
++			if (ret)
++				btrfs_abort_transaction(trans, ret);
++		}
+ 
+ 		if (!ret && wc.stage == LOG_WALK_REPLAY_ALL) {
+ 			ret = fixup_inode_link_counts(trans, wc.replay_dest,
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index 4a3e02b49f2957..b66242f080265e 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -2477,8 +2477,8 @@ bool btrfs_zoned_should_reclaim(const struct btrfs_fs_info *fs_info)
+ {
+ 	struct btrfs_fs_devices *fs_devices = fs_info->fs_devices;
+ 	struct btrfs_device *device;
++	u64 total = btrfs_super_total_bytes(fs_info->super_copy);
+ 	u64 used = 0;
+-	u64 total = 0;
+ 	u64 factor;
+ 
+ 	ASSERT(btrfs_is_zoned(fs_info));
+@@ -2491,7 +2491,6 @@ bool btrfs_zoned_should_reclaim(const struct btrfs_fs_info *fs_info)
+ 		if (!device->bdev)
+ 			continue;
+ 
+-		total += device->disk_total_bytes;
+ 		used += device->bytes_used;
+ 	}
+ 	mutex_unlock(&fs_devices->device_list_mutex);
+@@ -2545,7 +2544,7 @@ int btrfs_zone_finish_one_bg(struct btrfs_fs_info *fs_info)
+ 
+ 		spin_lock(&block_group->lock);
+ 		if (block_group->reserved || block_group->alloc_offset == 0 ||
+-		    (block_group->flags & BTRFS_BLOCK_GROUP_SYSTEM) ||
++		    !(block_group->flags & BTRFS_BLOCK_GROUP_DATA) ||
+ 		    test_bit(BLOCK_GROUP_FLAG_ZONED_DATA_RELOC, &block_group->runtime_flags)) {
+ 			spin_unlock(&block_group->lock);
+ 			continue;
+diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
+index 8371e4e1f596a9..25bcfcc2d70637 100644
+--- a/fs/crypto/fscrypt_private.h
++++ b/fs/crypto/fscrypt_private.h
+@@ -27,6 +27,23 @@
+  */
+ #define FSCRYPT_MIN_KEY_SIZE	16
+ 
++/*
++ * This mask is passed as the third argument to the crypto_alloc_*() functions
++ * to prevent fscrypt from using the Crypto API drivers for non-inline crypto
++ * engines.  Those drivers have been problematic for fscrypt.  fscrypt users
++ * have reported hangs and even incorrect en/decryption with these drivers.
++ * Since going to the driver, off CPU, and back again is really slow, such
++ * drivers can be over 50 times slower than the CPU-based code for fscrypt's
++ * workload.  Even on platforms that lack AES instructions on the CPU, using the
++ * offloads has been shown to be slower, even staying with AES.  (Of course,
++ * Adiantum is faster still, and is the recommended option on such platforms...)
++ *
++ * Note that fscrypt also supports inline crypto engines.  Those don't use the
++ * Crypto API and work much better than the old-style (non-inline) engines.
++ */
++#define FSCRYPT_CRYPTOAPI_MASK \
++	(CRYPTO_ALG_ALLOCATES_MEMORY | CRYPTO_ALG_KERN_DRIVER_ONLY)
++
+ #define FSCRYPT_CONTEXT_V1	1
+ #define FSCRYPT_CONTEXT_V2	2
+ 
+diff --git a/fs/crypto/hkdf.c b/fs/crypto/hkdf.c
+index 855a0f4b7318b7..604704f2b50edd 100644
+--- a/fs/crypto/hkdf.c
++++ b/fs/crypto/hkdf.c
+@@ -56,7 +56,7 @@ int fscrypt_init_hkdf(struct fscrypt_hkdf *hkdf, const u8 *master_key,
+ 	u8 prk[HKDF_HASHLEN];
+ 	int err;
+ 
+-	hmac_tfm = crypto_alloc_shash(HKDF_HMAC_ALG, 0, 0);
++	hmac_tfm = crypto_alloc_shash(HKDF_HMAC_ALG, 0, FSCRYPT_CRYPTOAPI_MASK);
+ 	if (IS_ERR(hmac_tfm)) {
+ 		fscrypt_err(NULL, "Error allocating " HKDF_HMAC_ALG ": %ld",
+ 			    PTR_ERR(hmac_tfm));
+diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
+index b4fe01ea4bd4c9..2896046a49771c 100644
+--- a/fs/crypto/keysetup.c
++++ b/fs/crypto/keysetup.c
+@@ -103,7 +103,8 @@ fscrypt_allocate_skcipher(struct fscrypt_mode *mode, const u8 *raw_key,
+ 	struct crypto_skcipher *tfm;
+ 	int err;
+ 
+-	tfm = crypto_alloc_skcipher(mode->cipher_str, 0, 0);
++	tfm = crypto_alloc_skcipher(mode->cipher_str, 0,
++				    FSCRYPT_CRYPTOAPI_MASK);
+ 	if (IS_ERR(tfm)) {
+ 		if (PTR_ERR(tfm) == -ENOENT) {
+ 			fscrypt_warn(inode,
+diff --git a/fs/crypto/keysetup_v1.c b/fs/crypto/keysetup_v1.c
+index cf3b58ec32ccec..d19d1d4c2e7e53 100644
+--- a/fs/crypto/keysetup_v1.c
++++ b/fs/crypto/keysetup_v1.c
+@@ -52,7 +52,8 @@ static int derive_key_aes(const u8 *master_key,
+ 	struct skcipher_request *req = NULL;
+ 	DECLARE_CRYPTO_WAIT(wait);
+ 	struct scatterlist src_sg, dst_sg;
+-	struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);
++	struct crypto_skcipher *tfm =
++		crypto_alloc_skcipher("ecb(aes)", 0, FSCRYPT_CRYPTOAPI_MASK);
+ 
+ 	if (IS_ERR(tfm)) {
+ 		res = PTR_ERR(tfm);
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index 6e57b9cc6ed2e0..cfe454dbf41521 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -313,8 +313,8 @@ static int erofs_read_superblock(struct super_block *sb)
+ 	sbi->islotbits = ilog2(sizeof(struct erofs_inode_compact));
+ 	if (erofs_sb_has_48bit(sbi) && dsb->rootnid_8b) {
+ 		sbi->root_nid = le64_to_cpu(dsb->rootnid_8b);
+-		sbi->dif0.blocks = (sbi->dif0.blocks << 32) |
+-				le16_to_cpu(dsb->rb.blocks_hi);
++		sbi->dif0.blocks = sbi->dif0.blocks |
++				((u64)le16_to_cpu(dsb->rb.blocks_hi) << 32);
+ 	} else {
+ 		sbi->root_nid = le16_to_cpu(dsb->rb.rootnid_2b);
+ 	}
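
The old expression shifted the already-read low 32 bits into the high half
and OR'ed blocks_hi into the bottom, inverting the field layout; the fix
keeps the low half in place and puts blocks_hi in bits 32..47. A standalone
sketch with made-up values:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t blocks_lo = 0x89abcdef; /* low 32 bits, already read */
	uint16_t blocks_hi = 0x0123;     /* high 16 bits from the superblock */

	uint64_t old = (blocks_lo << 32) | blocks_hi;           /* wrong */
	uint64_t new = blocks_lo | ((uint64_t)blocks_hi << 32); /* fixed */

	printf("old: %#llx\n", (unsigned long long)old); /* 0x89abcdef00000123 */
	printf("new: %#llx\n", (unsigned long long)new); /* 0x12389abcdef */
	return 0;
}
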
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 0fbf5dfedb24e2..7a7b044daadc49 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -218,6 +218,7 @@ struct eventpoll {
+ 	/* used to optimize loop detection check */
+ 	u64 gen;
+ 	struct hlist_head refs;
++	u8 loop_check_depth;
+ 
+ 	/*
+ 	 * usage count, used together with epitem->dying to
+@@ -2140,23 +2141,24 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
+ }
+ 
+ /**
+- * ep_loop_check_proc - verify that adding an epoll file inside another
+- *                      epoll structure does not violate the constraints, in
+- *                      terms of closed loops, or too deep chains (which can
+- *                      result in excessive stack usage).
++ * ep_loop_check_proc - verify that adding an epoll file @ep inside another
++ *                      epoll file does not create closed loops, and
++ *                      determine the depth of the subtree starting at @ep
+  *
+  * @ep: the &struct eventpoll to be currently checked.
+  * @depth: Current depth of the path being checked.
+  *
+- * Return: %zero if adding the epoll @file inside current epoll
+- *          structure @ep does not violate the constraints, or %-1 otherwise.
++ * Return: depth of the subtree, or INT_MAX if we found a loop or went too deep.
+  */
+ static int ep_loop_check_proc(struct eventpoll *ep, int depth)
+ {
+-	int error = 0;
++	int result = 0;
+ 	struct rb_node *rbp;
+ 	struct epitem *epi;
+ 
++	if (ep->gen == loop_check_gen)
++		return ep->loop_check_depth;
++
+ 	mutex_lock_nested(&ep->mtx, depth + 1);
+ 	ep->gen = loop_check_gen;
+ 	for (rbp = rb_first_cached(&ep->rbr); rbp; rbp = rb_next(rbp)) {
+@@ -2164,13 +2166,11 @@ static int ep_loop_check_proc(struct eventpoll *ep, int depth)
+ 		if (unlikely(is_file_epoll(epi->ffd.file))) {
+ 			struct eventpoll *ep_tovisit;
+ 			ep_tovisit = epi->ffd.file->private_data;
+-			if (ep_tovisit->gen == loop_check_gen)
+-				continue;
+ 			if (ep_tovisit == inserting_into || depth > EP_MAX_NESTS)
+-				error = -1;
++				result = INT_MAX;
+ 			else
+-				error = ep_loop_check_proc(ep_tovisit, depth + 1);
+-			if (error != 0)
++				result = max(result, ep_loop_check_proc(ep_tovisit, depth + 1) + 1);
++			if (result > EP_MAX_NESTS)
+ 				break;
+ 		} else {
+ 			/*
+@@ -2184,9 +2184,27 @@ static int ep_loop_check_proc(struct eventpoll *ep, int depth)
+ 			list_file(epi->ffd.file);
+ 		}
+ 	}
++	ep->loop_check_depth = result;
+ 	mutex_unlock(&ep->mtx);
+ 
+-	return error;
++	return result;
++}
++
++/**
++ * ep_get_upwards_depth_proc - determine depth of @ep when traversed upwards
++ */
++static int ep_get_upwards_depth_proc(struct eventpoll *ep, int depth)
++{
++	int result = 0;
++	struct epitem *epi;
++
++	if (ep->gen == loop_check_gen)
++		return ep->loop_check_depth;
++	hlist_for_each_entry_rcu(epi, &ep->refs, fllink)
++		result = max(result, ep_get_upwards_depth_proc(epi->ep, depth + 1) + 1);
++	ep->gen = loop_check_gen;
++	ep->loop_check_depth = result;
++	return result;
+ }
+ 
+ /**
+@@ -2202,8 +2220,22 @@ static int ep_loop_check_proc(struct eventpoll *ep, int depth)
+  */
+ static int ep_loop_check(struct eventpoll *ep, struct eventpoll *to)
+ {
++	int depth, upwards_depth;
++
+ 	inserting_into = ep;
+-	return ep_loop_check_proc(to, 0);
++	/*
++	 * Check how deep down we can get from @to, and whether it is possible
++	 * to loop up to @ep.
++	 */
++	depth = ep_loop_check_proc(to, 0);
++	if (depth > EP_MAX_NESTS)
++		return -1;
++	/* Check how far up we can go from @ep. */
++	rcu_read_lock();
++	upwards_depth = ep_get_upwards_depth_proc(ep, 0);
++	rcu_read_unlock();
++
++	return (depth + 1 + upwards_depth > EP_MAX_NESTS) ? -1 : 0;
+ }
+ 
+ static void clear_tfile_check_list(void)
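
Taken together, the two traversals reject an insert when the downward depth
of the subtree being added, plus one for the new link, plus the upward depth
above the insertion point exceeds EP_MAX_NESTS. A toy user-space model of
the memoized downward walk (graph, names, and sizes invented):

#include <stdio.h>

#define EP_MAX_NESTS 4
#define NNODES 5

/* child[i] >= 0 means epoll instance i watches epoll instance child[i] */
static int child[NNODES] = { 1, 2, 3, -1, -1 };
static int gen[NNODES], depth_memo[NNODES], cur_gen;

static int subtree_depth(int ep)
{
	int d = 0;

	if (gen[ep] == cur_gen)        /* memoized, like loop_check_depth */
		return depth_memo[ep];
	gen[ep] = cur_gen;
	if (child[ep] >= 0)
		d = subtree_depth(child[ep]) + 1;
	depth_memo[ep] = d;
	return d;
}

int main(void)
{
	int down, up = 2; /* pretend the insertion point is already 2 deep */

	cur_gen = 1;
	down = subtree_depth(0); /* chain 0 -> 1 -> 2 -> 3: depth 3 */
	printf("%s\n", down + 1 + up > EP_MAX_NESTS ? "reject" : "ok");
	return 0;
}
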
+diff --git a/fs/exfat/dir.c b/fs/exfat/dir.c
+index 3103b932b67461..ee060e26f51d2a 100644
+--- a/fs/exfat/dir.c
++++ b/fs/exfat/dir.c
+@@ -996,6 +996,7 @@ int exfat_find_dir_entry(struct super_block *sb, struct exfat_inode_info *ei,
+ 	struct exfat_hint_femp candi_empty;
+ 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
+ 	int num_entries = exfat_calc_num_entries(p_uniname);
++	unsigned int clu_count = 0;
+ 
+ 	if (num_entries < 0)
+ 		return num_entries;
+@@ -1133,6 +1134,10 @@ int exfat_find_dir_entry(struct super_block *sb, struct exfat_inode_info *ei,
+ 		} else {
+ 			if (exfat_get_next_cluster(sb, &clu.dir))
+ 				return -EIO;
++
++			/* break if the cluster chain includes a loop */
++			if (unlikely(++clu_count > EXFAT_DATA_CLUSTER_COUNT(sbi)))
++				goto not_found;
+ 		}
+ 	}
+ 
+@@ -1195,6 +1200,7 @@ int exfat_count_dir_entries(struct super_block *sb, struct exfat_chain *p_dir)
+ 	int i, count = 0;
+ 	int dentries_per_clu;
+ 	unsigned int entry_type;
++	unsigned int clu_count = 0;
+ 	struct exfat_chain clu;
+ 	struct exfat_dentry *ep;
+ 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
+@@ -1227,6 +1233,12 @@ int exfat_count_dir_entries(struct super_block *sb, struct exfat_chain *p_dir)
+ 		} else {
+ 			if (exfat_get_next_cluster(sb, &(clu.dir)))
+ 				return -EIO;
++
++			if (unlikely(++clu_count > sbi->used_clusters)) {
++				exfat_fs_error(sb, "FAT or bitmap is corrupted");
++				return -EIO;
++			}
++
+ 		}
+ 	}
+ 
+diff --git a/fs/exfat/fatent.c b/fs/exfat/fatent.c
+index 23065f948ae752..232cc7f8ab92fc 100644
+--- a/fs/exfat/fatent.c
++++ b/fs/exfat/fatent.c
+@@ -490,5 +490,15 @@ int exfat_count_num_clusters(struct super_block *sb,
+ 	}
+ 
+ 	*ret_count = count;
++
++	/*
++	 * since exfat_count_used_clusters() is not called, sbi->used_clusters
++	 * cannot be used here.
++	 */
++	if (unlikely(i == sbi->num_clusters && clu != EXFAT_EOF_CLUSTER)) {
++		exfat_fs_error(sb, "The cluster chain has a loop");
++		return -EIO;
++	}
++
+ 	return 0;
+ }
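
The bound used here is the pigeonhole argument: a chain that takes more
steps than there are data clusters must revisit a cluster. A standalone
sketch with a tiny fake FAT:

#include <stdio.h>

#define EOF_CLUSTER 0xffffffffu
#define NCLUSTERS   4

/* a tiny fake FAT: cluster 3 points back to cluster 1 */
static unsigned int fat[NCLUSTERS] = { 1, 2, 3, 1 };

int main(void)
{
	unsigned int clu = 0, count = 0;

	while (clu != EOF_CLUSTER) {
		if (++count > NCLUSTERS) { /* the same bound as clu_count */
			puts("cluster chain has a loop");
			return 1;
		}
		clu = fat[clu];
	}
	puts("chain terminated");
	return 0;
}
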
+diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c
+index fede0283d6e21f..f5f1c4e8a29fd2 100644
+--- a/fs/exfat/namei.c
++++ b/fs/exfat/namei.c
+@@ -890,6 +890,7 @@ static int exfat_check_dir_empty(struct super_block *sb,
+ {
+ 	int i, dentries_per_clu;
+ 	unsigned int type;
++	unsigned int clu_count = 0;
+ 	struct exfat_chain clu;
+ 	struct exfat_dentry *ep;
+ 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
+@@ -926,6 +927,10 @@ static int exfat_check_dir_empty(struct super_block *sb,
+ 		} else {
+ 			if (exfat_get_next_cluster(sb, &(clu.dir)))
+ 				return -EIO;
++
++			/* break if the cluster chain includes a loop */
++			if (unlikely(++clu_count > EXFAT_DATA_CLUSTER_COUNT(sbi)))
++				break;
+ 		}
+ 	}
+ 
+diff --git a/fs/exfat/super.c b/fs/exfat/super.c
+index 7ed858937d45d2..3a9ec75ab45254 100644
+--- a/fs/exfat/super.c
++++ b/fs/exfat/super.c
+@@ -341,13 +341,12 @@ static void exfat_hash_init(struct super_block *sb)
+ 		INIT_HLIST_HEAD(&sbi->inode_hashtable[i]);
+ }
+ 
+-static int exfat_read_root(struct inode *inode)
++static int exfat_read_root(struct inode *inode, struct exfat_chain *root_clu)
+ {
+ 	struct super_block *sb = inode->i_sb;
+ 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
+ 	struct exfat_inode_info *ei = EXFAT_I(inode);
+-	struct exfat_chain cdir;
+-	int num_subdirs, num_clu = 0;
++	int num_subdirs;
+ 
+ 	exfat_chain_set(&ei->dir, sbi->root_dir, 0, ALLOC_FAT_CHAIN);
+ 	ei->entry = -1;
+@@ -360,12 +359,9 @@ static int exfat_read_root(struct inode *inode)
+ 	ei->hint_stat.clu = sbi->root_dir;
+ 	ei->hint_femp.eidx = EXFAT_HINT_NONE;
+ 
+-	exfat_chain_set(&cdir, sbi->root_dir, 0, ALLOC_FAT_CHAIN);
+-	if (exfat_count_num_clusters(sb, &cdir, &num_clu))
+-		return -EIO;
+-	i_size_write(inode, num_clu << sbi->cluster_size_bits);
++	i_size_write(inode, EXFAT_CLU_TO_B(root_clu->size, sbi));
+ 
+-	num_subdirs = exfat_count_dir_entries(sb, &cdir);
++	num_subdirs = exfat_count_dir_entries(sb, root_clu);
+ 	if (num_subdirs < 0)
+ 		return -EIO;
+ 	set_nlink(inode, num_subdirs + EXFAT_MIN_SUBDIR);
+@@ -578,7 +574,8 @@ static int exfat_verify_boot_region(struct super_block *sb)
+ }
+ 
+ /* mount the file system volume */
+-static int __exfat_fill_super(struct super_block *sb)
++static int __exfat_fill_super(struct super_block *sb,
++		struct exfat_chain *root_clu)
+ {
+ 	int ret;
+ 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
+@@ -595,6 +592,18 @@ static int __exfat_fill_super(struct super_block *sb)
+ 		goto free_bh;
+ 	}
+ 
++	/*
++	 * Call exfat_count_num_clusters() before searching for up-case and
++	 * bitmap directory entries to avoid an infinite loop if they are missing
++	 * and the cluster chain includes a loop.
++	 */
++	exfat_chain_set(root_clu, sbi->root_dir, 0, ALLOC_FAT_CHAIN);
++	ret = exfat_count_num_clusters(sb, root_clu, &root_clu->size);
++	if (ret) {
++		exfat_err(sb, "failed to count the number of clusters in root");
++		goto free_bh;
++	}
++
+ 	ret = exfat_create_upcase_table(sb);
+ 	if (ret) {
+ 		exfat_err(sb, "failed to load upcase table");
+@@ -627,6 +636,7 @@ static int exfat_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	struct exfat_sb_info *sbi = sb->s_fs_info;
+ 	struct exfat_mount_options *opts = &sbi->options;
+ 	struct inode *root_inode;
++	struct exfat_chain root_clu;
+ 	int err;
+ 
+ 	if (opts->allow_utime == (unsigned short)-1)
+@@ -645,7 +655,7 @@ static int exfat_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	sb->s_time_min = EXFAT_MIN_TIMESTAMP_SECS;
+ 	sb->s_time_max = EXFAT_MAX_TIMESTAMP_SECS;
+ 
+-	err = __exfat_fill_super(sb);
++	err = __exfat_fill_super(sb, &root_clu);
+ 	if (err) {
+ 		exfat_err(sb, "failed to recognize exfat type");
+ 		goto check_nls_io;
+@@ -680,7 +690,7 @@ static int exfat_fill_super(struct super_block *sb, struct fs_context *fc)
+ 
+ 	root_inode->i_ino = EXFAT_ROOT_INO;
+ 	inode_set_iversion(root_inode, 1);
+-	err = exfat_read_root(root_inode);
++	err = exfat_read_root(root_inode, &root_clu);
+ 	if (err) {
+ 		exfat_err(sb, "failed to initialize root inode");
+ 		goto put_inode;
+diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c
+index 30f8201c155f40..177b1f852b63ac 100644
+--- a/fs/ext2/inode.c
++++ b/fs/ext2/inode.c
+@@ -895,9 +895,19 @@ int ext2_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ 		u64 start, u64 len)
+ {
+ 	int ret;
++	loff_t i_size;
+ 
+ 	inode_lock(inode);
+-	len = min_t(u64, len, i_size_read(inode));
++	i_size = i_size_read(inode);
++	/*
++	 * iomap_fiemap() returns EINVAL for 0 length. Make sure we don't trim
++	 * length to 0 but still trim the range as much as possible since
++	 * ext2_get_blocks() iterates unmapped space block by block which is
++	 * slow.
++	 */
++	if (i_size == 0)
++		i_size = 1;
++	len = min_t(u64, len, i_size);
+ 	ret = iomap_fiemap(inode, fieinfo, start, len, &ext2_iomap_ops);
+ 	inode_unlock(inode);
+ 
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index f27d9da53fb75d..372058706cce82 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -303,7 +303,11 @@ static int ext4_create_inline_data(handle_t *handle,
+ 	if (error)
+ 		goto out;
+ 
+-	BUG_ON(!is.s.not_found);
++	if (!is.s.not_found) {
++		EXT4_ERROR_INODE(inode, "unexpected inline data xattr");
++		error = -EFSCORRUPTED;
++		goto out;
++	}
+ 
+ 	error = ext4_xattr_ibody_set(handle, inode, &i, &is);
+ 	if (error) {
+@@ -354,7 +358,11 @@ static int ext4_update_inline_data(handle_t *handle, struct inode *inode,
+ 	if (error)
+ 		goto out;
+ 
+-	BUG_ON(is.s.not_found);
++	if (is.s.not_found) {
++		EXT4_ERROR_INODE(inode, "missing inline data xattr");
++		error = -EFSCORRUPTED;
++		goto out;
++	}
+ 
+ 	len -= EXT4_MIN_INLINE_DATA_SIZE;
+ 	value = kzalloc(len, GFP_NOFS);
+@@ -1904,7 +1912,12 @@ int ext4_inline_data_truncate(struct inode *inode, int *has_inline)
+ 			if ((err = ext4_xattr_ibody_find(inode, &i, &is)) != 0)
+ 				goto out_error;
+ 
+-			BUG_ON(is.s.not_found);
++			if (is.s.not_found) {
++				EXT4_ERROR_INODE(inode,
++						 "missing inline data xattr");
++				err = -EFSCORRUPTED;
++				goto out_error;
++			}
+ 
+ 			value_len = le32_to_cpu(is.s.here->e_value_size);
+ 			value = kmalloc(value_len, GFP_NOFS);
+diff --git a/fs/ext4/mballoc-test.c b/fs/ext4/mballoc-test.c
+index d634c12f198474..f018bc8424c7cb 100644
+--- a/fs/ext4/mballoc-test.c
++++ b/fs/ext4/mballoc-test.c
+@@ -155,6 +155,7 @@ static struct super_block *mbt_ext4_alloc_super_block(void)
+ 	bgl_lock_init(sbi->s_blockgroup_lock);
+ 
+ 	sbi->s_es = &fsb->es;
++	sbi->s_sb = sb;
+ 	sb->s_fs_info = sbi;
+ 
+ 	up_write(&sb->s_umount);
+@@ -802,6 +803,10 @@ static void test_mb_mark_used(struct kunit *test)
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	grp->bb_free = EXT4_CLUSTERS_PER_GROUP(sb);
++	grp->bb_largest_free_order = -1;
++	grp->bb_avg_fragment_size_order = -1;
++	INIT_LIST_HEAD(&grp->bb_largest_free_order_node);
++	INIT_LIST_HEAD(&grp->bb_avg_fragment_size_node);
+ 	mbt_generate_test_ranges(sb, ranges, TEST_RANGE_COUNT);
+ 	for (i = 0; i < TEST_RANGE_COUNT; i++)
+ 		test_mb_mark_used_range(test, &e4b, ranges[i].start,
+@@ -875,6 +880,10 @@ static void test_mb_free_blocks(struct kunit *test)
+ 	ext4_unlock_group(sb, TEST_GOAL_GROUP);
+ 
+ 	grp->bb_free = 0;
++	grp->bb_largest_free_order = -1;
++	grp->bb_avg_fragment_size_order = -1;
++	INIT_LIST_HEAD(&grp->bb_largest_free_order_node);
++	INIT_LIST_HEAD(&grp->bb_avg_fragment_size_node);
+ 	memset(bitmap, 0xff, sb->s_blocksize);
+ 
+ 	mbt_generate_test_ranges(sb, ranges, TEST_RANGE_COUNT);
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 1e98c5be4e0ad5..fb7093ea30236f 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -841,30 +841,30 @@ static void
+ mb_update_avg_fragment_size(struct super_block *sb, struct ext4_group_info *grp)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+-	int new_order;
++	int new, old;
+ 
+-	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) || grp->bb_fragments == 0)
++	if (!test_opt2(sb, MB_OPTIMIZE_SCAN))
+ 		return;
+ 
+-	new_order = mb_avg_fragment_size_order(sb,
+-					grp->bb_free / grp->bb_fragments);
+-	if (new_order == grp->bb_avg_fragment_size_order)
++	old = grp->bb_avg_fragment_size_order;
++	new = grp->bb_fragments == 0 ? -1 :
++	      mb_avg_fragment_size_order(sb, grp->bb_free / grp->bb_fragments);
++	if (new == old)
+ 		return;
+ 
+-	if (grp->bb_avg_fragment_size_order != -1) {
+-		write_lock(&sbi->s_mb_avg_fragment_size_locks[
+-					grp->bb_avg_fragment_size_order]);
++	if (old >= 0) {
++		write_lock(&sbi->s_mb_avg_fragment_size_locks[old]);
+ 		list_del(&grp->bb_avg_fragment_size_node);
+-		write_unlock(&sbi->s_mb_avg_fragment_size_locks[
+-					grp->bb_avg_fragment_size_order]);
++		write_unlock(&sbi->s_mb_avg_fragment_size_locks[old]);
++	}
++
++	grp->bb_avg_fragment_size_order = new;
++	if (new >= 0) {
++		write_lock(&sbi->s_mb_avg_fragment_size_locks[new]);
++		list_add_tail(&grp->bb_avg_fragment_size_node,
++				&sbi->s_mb_avg_fragment_size[new]);
++		write_unlock(&sbi->s_mb_avg_fragment_size_locks[new]);
+ 	}
+-	grp->bb_avg_fragment_size_order = new_order;
+-	write_lock(&sbi->s_mb_avg_fragment_size_locks[
+-					grp->bb_avg_fragment_size_order]);
+-	list_add_tail(&grp->bb_avg_fragment_size_node,
+-		&sbi->s_mb_avg_fragment_size[grp->bb_avg_fragment_size_order]);
+-	write_unlock(&sbi->s_mb_avg_fragment_size_locks[
+-					grp->bb_avg_fragment_size_order]);
+ }
+ 
+ /*
+@@ -1150,33 +1150,28 @@ static void
+ mb_set_largest_free_order(struct super_block *sb, struct ext4_group_info *grp)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+-	int i;
++	int new, old = grp->bb_largest_free_order;
+ 
+-	for (i = MB_NUM_ORDERS(sb) - 1; i >= 0; i--)
+-		if (grp->bb_counters[i] > 0)
++	for (new = MB_NUM_ORDERS(sb) - 1; new >= 0; new--)
++		if (grp->bb_counters[new] > 0)
+ 			break;
++
+ 	/* No need to move between order lists? */
+-	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) ||
+-	    i == grp->bb_largest_free_order) {
+-		grp->bb_largest_free_order = i;
++	if (new == old)
+ 		return;
+-	}
+ 
+-	if (grp->bb_largest_free_order >= 0) {
+-		write_lock(&sbi->s_mb_largest_free_orders_locks[
+-					      grp->bb_largest_free_order]);
++	if (old >= 0 && !list_empty(&grp->bb_largest_free_order_node)) {
++		write_lock(&sbi->s_mb_largest_free_orders_locks[old]);
+ 		list_del_init(&grp->bb_largest_free_order_node);
+-		write_unlock(&sbi->s_mb_largest_free_orders_locks[
+-					      grp->bb_largest_free_order]);
++		write_unlock(&sbi->s_mb_largest_free_orders_locks[old]);
+ 	}
+-	grp->bb_largest_free_order = i;
+-	if (grp->bb_largest_free_order >= 0 && grp->bb_free) {
+-		write_lock(&sbi->s_mb_largest_free_orders_locks[
+-					      grp->bb_largest_free_order]);
++
++	grp->bb_largest_free_order = new;
++	if (test_opt2(sb, MB_OPTIMIZE_SCAN) && new >= 0 && grp->bb_free) {
++		write_lock(&sbi->s_mb_largest_free_orders_locks[new]);
+ 		list_add_tail(&grp->bb_largest_free_order_node,
+-		      &sbi->s_mb_largest_free_orders[grp->bb_largest_free_order]);
+-		write_unlock(&sbi->s_mb_largest_free_orders_locks[
+-					      grp->bb_largest_free_order]);
++			      &sbi->s_mb_largest_free_orders[new]);
++		write_unlock(&sbi->s_mb_largest_free_orders_locks[new]);
+ 	}
+ }
+ 
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index f2de3c886c086b..72c8d8cb6fe8ab 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1044,6 +1044,18 @@ int f2fs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 	if (unlikely(f2fs_cp_error(F2FS_I_SB(inode))))
+ 		return -EIO;
+ 
++	err = setattr_prepare(idmap, dentry, attr);
++	if (err)
++		return err;
++
++	err = fscrypt_prepare_setattr(dentry, attr);
++	if (err)
++		return err;
++
++	err = fsverity_prepare_setattr(dentry, attr);
++	if (err)
++		return err;
++
+ 	if (unlikely(IS_IMMUTABLE(inode)))
+ 		return -EPERM;
+ 
+@@ -1062,18 +1074,6 @@ int f2fs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 			return -EINVAL;
+ 	}
+ 
+-	err = setattr_prepare(idmap, dentry, attr);
+-	if (err)
+-		return err;
+-
+-	err = fscrypt_prepare_setattr(dentry, attr);
+-	if (err)
+-		return err;
+-
+-	err = fsverity_prepare_setattr(dentry, attr);
+-	if (err)
+-		return err;
+-
+ 	if (is_quota_modification(idmap, inode, attr)) {
+ 		err = f2fs_dquot_initialize(inode);
+ 		if (err)
+diff --git a/fs/fhandle.c b/fs/fhandle.c
+index 3e092ae6d142ae..66ff60591d17b2 100644
+--- a/fs/fhandle.c
++++ b/fs/fhandle.c
+@@ -88,7 +88,7 @@ static long do_sys_name_to_handle(const struct path *path,
+ 		if (fh_flags & EXPORT_FH_CONNECTABLE) {
+ 			handle->handle_type |= FILEID_IS_CONNECTABLE;
+ 			if (d_is_dir(path->dentry))
+-				fh_flags |= FILEID_IS_DIR;
++				handle->handle_type |= FILEID_IS_DIR;
+ 		}
+ 		retval = 0;
+ 	}
+diff --git a/fs/file.c b/fs/file.c
+index b6db031545e650..6d2275c3be9c69 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -197,6 +197,21 @@ static struct fdtable *alloc_fdtable(unsigned int slots_wanted)
+ 			return ERR_PTR(-EMFILE);
+ 	}
+ 
++	/*
++	 * Check if the allocation size would exceed INT_MAX. kvmalloc_array()
++	 * and kvmalloc() will warn if the allocation size is greater than
++	 * INT_MAX, as filp_cachep objects are not __GFP_NOWARN.
++	 *
++	 * This can happen when sysctl_nr_open is set to a very high value and
++	 * a process tries to use a file descriptor near that limit. For example,
++	 * if sysctl_nr_open is set to 1073741816 (0x3ffffff8) - which is what
++	 * systemd typically sets it to - then trying to use a file descriptor
++	 * close to that value will require allocating a file descriptor table
++	 * that exceeds 8GB in size.
++	 */
++	if (unlikely(nr > INT_MAX / sizeof(struct file *)))
++		return ERR_PTR(-EMFILE);
++
+ 	fdt = kmalloc(sizeof(struct fdtable), GFP_KERNEL_ACCOUNT);
+ 	if (!fdt)
+ 		goto out;
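
A quick standalone computation of the bound, assuming sizeof(struct file *)
is 8 as on 64-bit kernels:

#include <limits.h>
#include <stdio.h>

int main(void)
{
	unsigned long long nr = 0x3ffffff8ULL; /* systemd's usual nr_open */
	unsigned long long bytes = nr * 8;     /* 8 = sizeof(struct file *) */

	printf("fd array: %llu bytes (about %.1f GB)\n", bytes, bytes / 1e9);
	printf("exceeds INT_MAX: %s\n", bytes > INT_MAX ? "yes" : "no");
	return 0;
}
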
+diff --git a/fs/gfs2/dir.c b/fs/gfs2/dir.c
+index dbf1aede744c12..509e2f0d97e787 100644
+--- a/fs/gfs2/dir.c
++++ b/fs/gfs2/dir.c
+@@ -60,6 +60,7 @@
+ #include <linux/crc32.h>
+ #include <linux/vmalloc.h>
+ #include <linux/bio.h>
++#include <linux/log2.h>
+ 
+ #include "gfs2.h"
+ #include "incore.h"
+@@ -912,7 +913,6 @@ static int dir_make_exhash(struct inode *inode)
+ 	struct qstr args;
+ 	struct buffer_head *bh, *dibh;
+ 	struct gfs2_leaf *leaf;
+-	int y;
+ 	u32 x;
+ 	__be64 *lp;
+ 	u64 bn;
+@@ -979,9 +979,7 @@ static int dir_make_exhash(struct inode *inode)
+ 	i_size_write(inode, sdp->sd_sb.sb_bsize / 2);
+ 	gfs2_add_inode_blocks(&dip->i_inode, 1);
+ 	dip->i_diskflags |= GFS2_DIF_EXHASH;
+-
+-	for (x = sdp->sd_hash_ptrs, y = -1; x; x >>= 1, y++) ;
+-	dip->i_depth = y;
++	dip->i_depth = ilog2(sdp->sd_hash_ptrs);
+ 
+ 	gfs2_dinode_out(dip, dibh->b_data);
+ 
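
A quick check that the removed shift loop and ilog2() agree for the powers
of two sd_hash_ptrs can take (standalone C; __builtin_clz stands in for the
kernel's ilog2()):

#include <stdio.h>

static int shift_loop(unsigned int v)
{
	unsigned int x;
	int y;

	for (x = v, y = -1; x; x >>= 1, y++)
		;
	return y;
}

int main(void)
{
	unsigned int v;

	for (v = 1; v <= 1024; v <<= 1)
		printf("%4u: loop=%d ilog2=%d\n", v, shift_loop(v),
		       31 - __builtin_clz(v));
	return 0;
}
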
+diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
+index 116efe335c3212..0cf9da2ef622a9 100644
+--- a/fs/gfs2/glops.c
++++ b/fs/gfs2/glops.c
+@@ -11,6 +11,7 @@
+ #include <linux/bio.h>
+ #include <linux/posix_acl.h>
+ #include <linux/security.h>
++#include <linux/log2.h>
+ 
+ #include "gfs2.h"
+ #include "incore.h"
+@@ -450,6 +451,11 @@ static int gfs2_dinode_in(struct gfs2_inode *ip, const void *buf)
+ 		gfs2_consist_inode(ip);
+ 		return -EIO;
+ 	}
++	if ((ip->i_diskflags & GFS2_DIF_EXHASH) &&
++	    depth < ilog2(sdp->sd_hash_ptrs)) {
++		gfs2_consist_inode(ip);
++		return -EIO;
++	}
+ 	ip->i_depth = (u8)depth;
+ 	ip->i_entries = be32_to_cpu(str->di_entries);
+ 
+diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c
+index 9dc8885c95d072..66ee10929736f6 100644
+--- a/fs/gfs2/meta_io.c
++++ b/fs/gfs2/meta_io.c
+@@ -103,6 +103,7 @@ const struct address_space_operations gfs2_meta_aops = {
+ 	.invalidate_folio = block_invalidate_folio,
+ 	.writepages = gfs2_aspace_writepages,
+ 	.release_folio = gfs2_release_folio,
++	.migrate_folio = buffer_migrate_folio_norefs,
+ };
+ 
+ const struct address_space_operations gfs2_rgrp_aops = {
+@@ -110,6 +111,7 @@ const struct address_space_operations gfs2_rgrp_aops = {
+ 	.invalidate_folio = block_invalidate_folio,
+ 	.writepages = gfs2_aspace_writepages,
+ 	.release_folio = gfs2_release_folio,
++	.migrate_folio = buffer_migrate_folio_norefs,
+ };
+ 
+ /**
+diff --git a/fs/hfs/bfind.c b/fs/hfs/bfind.c
+index ef9498a6e88acd..34e9804e0f3601 100644
+--- a/fs/hfs/bfind.c
++++ b/fs/hfs/bfind.c
+@@ -16,6 +16,9 @@ int hfs_find_init(struct hfs_btree *tree, struct hfs_find_data *fd)
+ {
+ 	void *ptr;
+ 
++	if (!tree || !fd)
++		return -EINVAL;
++
+ 	fd->tree = tree;
+ 	fd->bnode = NULL;
+ 	ptr = kmalloc(tree->max_key_len * 2 + 4, GFP_KERNEL);
+diff --git a/fs/hfs/bnode.c b/fs/hfs/bnode.c
+index cb823a8a6ba960..e8cd1a31f2470c 100644
+--- a/fs/hfs/bnode.c
++++ b/fs/hfs/bnode.c
+@@ -15,6 +15,48 @@
+ 
+ #include "btree.h"
+ 
++static inline
++bool is_bnode_offset_valid(struct hfs_bnode *node, int off)
++{
++	bool is_valid = off < node->tree->node_size;
++
++	if (!is_valid) {
++		pr_err("requested invalid offset: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off);
++	}
++
++	return is_valid;
++}
++
++static inline
++int check_and_correct_requested_length(struct hfs_bnode *node, int off, int len)
++{
++	unsigned int node_size;
++
++	if (!is_bnode_offset_valid(node, off))
++		return 0;
++
++	node_size = node->tree->node_size;
++
++	if ((off + len) > node_size) {
++		int new_len = (int)node_size - off;
++
++		pr_err("requested length has been corrected: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d, "
++		       "requested_len %d, corrected_len %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off, len, new_len);
++
++		return new_len;
++	}
++
++	return len;
++}
++
+ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
+ {
+ 	struct page *page;
+@@ -22,6 +64,20 @@ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
+ 	int bytes_read;
+ 	int bytes_to_read;
+ 
++	if (!is_bnode_offset_valid(node, off))
++		return;
++
++	if (len == 0) {
++		pr_err("requested zero length: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d, len %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off, len);
++		return;
++	}
++
++	len = check_and_correct_requested_length(node, off, len);
++
+ 	off += node->page_offset;
+ 	pagenum = off >> PAGE_SHIFT;
+ 	off &= ~PAGE_MASK; /* compute page offset for the first page */
+@@ -80,6 +136,20 @@ void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len)
+ {
+ 	struct page *page;
+ 
++	if (!is_bnode_offset_valid(node, off))
++		return;
++
++	if (len == 0) {
++		pr_err("requested zero length: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d, len %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off, len);
++		return;
++	}
++
++	len = check_and_correct_requested_length(node, off, len);
++
+ 	off += node->page_offset;
+ 	page = node->page[0];
+ 
+@@ -104,6 +174,20 @@ void hfs_bnode_clear(struct hfs_bnode *node, int off, int len)
+ {
+ 	struct page *page;
+ 
++	if (!is_bnode_offset_valid(node, off))
++		return;
++
++	if (len == 0) {
++		pr_err("requested zero length: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d, len %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off, len);
++		return;
++	}
++
++	len = check_and_correct_requested_length(node, off, len);
++
+ 	off += node->page_offset;
+ 	page = node->page[0];
+ 
+@@ -119,6 +203,10 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
+ 	hfs_dbg(BNODE_MOD, "copybytes: %u,%u,%u\n", dst, src, len);
+ 	if (!len)
+ 		return;
++
++	len = check_and_correct_requested_length(src_node, src, len);
++	len = check_and_correct_requested_length(dst_node, dst, len);
++
+ 	src += src_node->page_offset;
+ 	dst += dst_node->page_offset;
+ 	src_page = src_node->page[0];
+@@ -136,6 +224,10 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
+ 	hfs_dbg(BNODE_MOD, "movebytes: %u,%u,%u\n", dst, src, len);
+ 	if (!len)
+ 		return;
++
++	len = check_and_correct_requested_length(node, src, len);
++	len = check_and_correct_requested_length(node, dst, len);
++
+ 	src += node->page_offset;
+ 	dst += node->page_offset;
+ 	page = node->page[0];
+@@ -482,6 +574,7 @@ void hfs_bnode_put(struct hfs_bnode *node)
+ 		if (test_bit(HFS_BNODE_DELETED, &node->flags)) {
+ 			hfs_bnode_unhash(node);
+ 			spin_unlock(&tree->hash_lock);
++			hfs_bnode_clear(node, 0, tree->node_size);
+ 			hfs_bmap_free(node);
+ 			hfs_bnode_free(node);
+ 			return;
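
The clamping rule the two helpers implement, restated as a standalone
function with a fake node size (illustration only):

#include <stdio.h>

static int clamp_len(unsigned int node_size, int off, int len)
{
	if (off >= (int)node_size)
		return 0;                    /* invalid offset: nothing to copy */
	if (off + len > (int)node_size)
		return (int)node_size - off; /* corrected length */
	return len;
}

int main(void)
{
	printf("%d\n", clamp_len(512, 480, 64)); /* 32: truncated */
	printf("%d\n", clamp_len(512, 600, 16)); /* 0: offset invalid */
	printf("%d\n", clamp_len(512, 100, 16)); /* 16: unchanged */
	return 0;
}
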
+diff --git a/fs/hfs/btree.c b/fs/hfs/btree.c
+index 2fa4b1f8cc7fb0..e86e1e235658fa 100644
+--- a/fs/hfs/btree.c
++++ b/fs/hfs/btree.c
+@@ -21,8 +21,12 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id, btree_keycmp ke
+ 	struct hfs_btree *tree;
+ 	struct hfs_btree_header_rec *head;
+ 	struct address_space *mapping;
+-	struct page *page;
++	struct folio *folio;
++	struct buffer_head *bh;
+ 	unsigned int size;
++	u16 dblock;
++	sector_t start_block;
++	loff_t offset;
+ 
+ 	tree = kzalloc(sizeof(*tree), GFP_KERNEL);
+ 	if (!tree)
+@@ -75,12 +79,40 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id, btree_keycmp ke
+ 	unlock_new_inode(tree->inode);
+ 
+ 	mapping = tree->inode->i_mapping;
+-	page = read_mapping_page(mapping, 0, NULL);
+-	if (IS_ERR(page))
++	folio = filemap_grab_folio(mapping, 0);
++	if (IS_ERR(folio))
+ 		goto free_inode;
+ 
++	folio_zero_range(folio, 0, folio_size(folio));
++
++	dblock = hfs_ext_find_block(HFS_I(tree->inode)->first_extents, 0);
++	start_block = HFS_SB(sb)->fs_start + (dblock * HFS_SB(sb)->fs_div);
++
++	size = folio_size(folio);
++	offset = 0;
++	while (size > 0) {
++		size_t len;
++
++		bh = sb_bread(sb, start_block);
++		if (!bh) {
++			pr_err("unable to read tree header\n");
++			goto put_folio;
++		}
++
++		len = min_t(size_t, folio_size(folio), sb->s_blocksize);
++		memcpy_to_folio(folio, offset, bh->b_data, sb->s_blocksize);
++
++		brelse(bh);
++
++		start_block++;
++		offset += len;
++		size -= len;
++	}
++
++	folio_mark_uptodate(folio);
++
+ 	/* Load the header */
+-	head = (struct hfs_btree_header_rec *)(kmap_local_page(page) +
++	head = (struct hfs_btree_header_rec *)(kmap_local_folio(folio, 0) +
+ 					       sizeof(struct hfs_bnode_desc));
+ 	tree->root = be32_to_cpu(head->root);
+ 	tree->leaf_count = be32_to_cpu(head->leaf_count);
+@@ -95,22 +127,22 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id, btree_keycmp ke
+ 
+ 	size = tree->node_size;
+ 	if (!is_power_of_2(size))
+-		goto fail_page;
++		goto fail_folio;
+ 	if (!tree->node_count)
+-		goto fail_page;
++		goto fail_folio;
+ 	switch (id) {
+ 	case HFS_EXT_CNID:
+ 		if (tree->max_key_len != HFS_MAX_EXT_KEYLEN) {
+ 			pr_err("invalid extent max_key_len %d\n",
+ 			       tree->max_key_len);
+-			goto fail_page;
++			goto fail_folio;
+ 		}
+ 		break;
+ 	case HFS_CAT_CNID:
+ 		if (tree->max_key_len != HFS_MAX_CAT_KEYLEN) {
+ 			pr_err("invalid catalog max_key_len %d\n",
+ 			       tree->max_key_len);
+-			goto fail_page;
++			goto fail_folio;
+ 		}
+ 		break;
+ 	default:
+@@ -121,12 +153,15 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id, btree_keycmp ke
+ 	tree->pages_per_bnode = (tree->node_size + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ 
+ 	kunmap_local(head);
+-	put_page(page);
++	folio_unlock(folio);
++	folio_put(folio);
+ 	return tree;
+ 
+-fail_page:
++fail_folio:
+ 	kunmap_local(head);
+-	put_page(page);
++put_folio:
++	folio_unlock(folio);
++	folio_put(folio);
+ free_inode:
+ 	tree->inode->i_mapping->a_ops = &hfs_aops;
+ 	iput(tree->inode);
+diff --git a/fs/hfs/extent.c b/fs/hfs/extent.c
+index 4a0ce131e233fe..580c62981dbd3d 100644
+--- a/fs/hfs/extent.c
++++ b/fs/hfs/extent.c
+@@ -71,7 +71,7 @@ int hfs_ext_keycmp(const btree_key *key1, const btree_key *key2)
+  *
+  * Find a block within an extent record
+  */
+-static u16 hfs_ext_find_block(struct hfs_extent *ext, u16 off)
++u16 hfs_ext_find_block(struct hfs_extent *ext, u16 off)
+ {
+ 	int i;
+ 	u16 count;
+diff --git a/fs/hfs/hfs_fs.h b/fs/hfs/hfs_fs.h
+index a0c7cb0f79fcc9..732c5c4c7545d6 100644
+--- a/fs/hfs/hfs_fs.h
++++ b/fs/hfs/hfs_fs.h
+@@ -190,6 +190,7 @@ extern const struct inode_operations hfs_dir_inode_operations;
+ 
+ /* extent.c */
+ extern int hfs_ext_keycmp(const btree_key *, const btree_key *);
++extern u16 hfs_ext_find_block(struct hfs_extent *ext, u16 off);
+ extern int hfs_free_fork(struct super_block *, struct hfs_cat_file *, int);
+ extern int hfs_ext_write_extent(struct inode *);
+ extern int hfs_extend_file(struct inode *);
+diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
+index 079ea80534f7de..14f4995588ff03 100644
+--- a/fs/hfsplus/bnode.c
++++ b/fs/hfsplus/bnode.c
+@@ -18,12 +18,68 @@
+ #include "hfsplus_fs.h"
+ #include "hfsplus_raw.h"
+ 
++static inline
++bool is_bnode_offset_valid(struct hfs_bnode *node, int off)
++{
++	bool is_valid = off < node->tree->node_size;
++
++	if (!is_valid) {
++		pr_err("requested invalid offset: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off);
++	}
++
++	return is_valid;
++}
++
++static inline
++int check_and_correct_requested_length(struct hfs_bnode *node, int off, int len)
++{
++	unsigned int node_size;
++
++	if (!is_bnode_offset_valid(node, off))
++		return 0;
++
++	node_size = node->tree->node_size;
++
++	if ((off + len) > node_size) {
++		int new_len = (int)node_size - off;
++
++		pr_err("requested length has been corrected: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d, "
++		       "requested_len %d, corrected_len %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off, len, new_len);
++
++		return new_len;
++	}
++
++	return len;
++}
++
+ /* Copy a specified range of bytes from the raw data of a node */
+ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
+ {
+ 	struct page **pagep;
+ 	int l;
+ 
++	if (!is_bnode_offset_valid(node, off))
++		return;
++
++	if (len == 0) {
++		pr_err("requested zero length: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d, len %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off, len);
++		return;
++	}
++
++	len = check_and_correct_requested_length(node, off, len);
++
+ 	off += node->page_offset;
+ 	pagep = node->page + (off >> PAGE_SHIFT);
+ 	off &= ~PAGE_MASK;
+@@ -81,6 +137,20 @@ void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len)
+ 	struct page **pagep;
+ 	int l;
+ 
++	if (!is_bnode_offset_valid(node, off))
++		return;
++
++	if (len == 0) {
++		pr_err("requested zero length: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d, len %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off, len);
++		return;
++	}
++
++	len = check_and_correct_requested_length(node, off, len);
++
+ 	off += node->page_offset;
+ 	pagep = node->page + (off >> PAGE_SHIFT);
+ 	off &= ~PAGE_MASK;
+@@ -109,6 +179,20 @@ void hfs_bnode_clear(struct hfs_bnode *node, int off, int len)
+ 	struct page **pagep;
+ 	int l;
+ 
++	if (!is_bnode_offset_valid(node, off))
++		return;
++
++	if (len == 0) {
++		pr_err("requested zero length: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d, len %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off, len);
++		return;
++	}
++
++	len = check_and_correct_requested_length(node, off, len);
++
+ 	off += node->page_offset;
+ 	pagep = node->page + (off >> PAGE_SHIFT);
+ 	off &= ~PAGE_MASK;
+@@ -133,6 +217,10 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
+ 	hfs_dbg(BNODE_MOD, "copybytes: %u,%u,%u\n", dst, src, len);
+ 	if (!len)
+ 		return;
++
++	len = check_and_correct_requested_length(src_node, src, len);
++	len = check_and_correct_requested_length(dst_node, dst, len);
++
+ 	src += src_node->page_offset;
+ 	dst += dst_node->page_offset;
+ 	src_page = src_node->page + (src >> PAGE_SHIFT);
+@@ -187,6 +275,10 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
+ 	hfs_dbg(BNODE_MOD, "movebytes: %u,%u,%u\n", dst, src, len);
+ 	if (!len)
+ 		return;
++
++	len = check_and_correct_requested_length(node, src, len);
++	len = check_and_correct_requested_length(node, dst, len);
++
+ 	src += node->page_offset;
+ 	dst += node->page_offset;
+ 	if (dst > src) {
+diff --git a/fs/hfsplus/unicode.c b/fs/hfsplus/unicode.c
+index 73342c925a4b6e..36b6cf2a3abba4 100644
+--- a/fs/hfsplus/unicode.c
++++ b/fs/hfsplus/unicode.c
+@@ -132,7 +132,14 @@ int hfsplus_uni2asc(struct super_block *sb,
+ 
+ 	op = astr;
+ 	ip = ustr->unicode;
++
+ 	ustrlen = be16_to_cpu(ustr->length);
++	if (ustrlen > HFSPLUS_MAX_STRLEN) {
++		ustrlen = HFSPLUS_MAX_STRLEN;
++		pr_err("invalid length %u has been corrected to %d\n",
++			be16_to_cpu(ustr->length), ustrlen);
++	}
++
+ 	len = *len_p;
+ 	ce1 = NULL;
+ 	compose = !test_bit(HFSPLUS_SB_NODECOMPOSE, &HFSPLUS_SB(sb)->flags);
+diff --git a/fs/hfsplus/xattr.c b/fs/hfsplus/xattr.c
+index 9a1a93e3888b92..18dc3d254d218c 100644
+--- a/fs/hfsplus/xattr.c
++++ b/fs/hfsplus/xattr.c
+@@ -172,7 +172,11 @@ static int hfsplus_create_attributes_file(struct super_block *sb)
+ 		return PTR_ERR(attr_file);
+ 	}
+ 
+-	BUG_ON(i_size_read(attr_file) != 0);
++	if (i_size_read(attr_file) != 0) {
++		err = -EIO;
++		pr_err("detected inconsistent attributes file, running fsck.hfsplus is recommended.\n");
++		goto end_attr_file_creation;
++	}
+ 
+ 	hip = HFSPLUS_I(attr_file);
+ 
+diff --git a/fs/jfs/file.c b/fs/jfs/file.c
+index 01b6912e60f808..742cadd1f37e84 100644
+--- a/fs/jfs/file.c
++++ b/fs/jfs/file.c
+@@ -44,6 +44,9 @@ static int jfs_open(struct inode *inode, struct file *file)
+ {
+ 	int rc;
+ 
++	if (S_ISREG(inode->i_mode) && inode->i_size < 0)
++		return -EIO;
++
+ 	if ((rc = dquot_file_open(inode, file)))
+ 		return rc;
+ 
+diff --git a/fs/jfs/inode.c b/fs/jfs/inode.c
+index 60fc92dee24d20..81e6b18e81e1b5 100644
+--- a/fs/jfs/inode.c
++++ b/fs/jfs/inode.c
+@@ -145,9 +145,9 @@ void jfs_evict_inode(struct inode *inode)
+ 	if (!inode->i_nlink && !is_bad_inode(inode)) {
+ 		dquot_initialize(inode);
+ 
++		truncate_inode_pages_final(&inode->i_data);
+ 		if (JFS_IP(inode)->fileset == FILESYSTEM_I) {
+ 			struct inode *ipimap = JFS_SBI(inode->i_sb)->ipimap;
+-			truncate_inode_pages_final(&inode->i_data);
+ 
+ 			if (test_cflag(COMMIT_Freewmap, inode))
+ 				jfs_free_zero_link(inode);
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 5a877261c3fe48..cdfa699cd7c8fa 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -1389,6 +1389,12 @@ dbAllocAG(struct bmap * bmp, int agno, s64 nblocks, int l2nb, s64 * results)
+ 	    (1 << (L2LPERCTL - (bmp->db_agheight << 1))) / bmp->db_agwidth;
+ 	ti = bmp->db_agstart + bmp->db_agwidth * (agno & (agperlev - 1));
+ 
++	if (ti < 0 || ti >= le32_to_cpu(dcp->nleafs)) {
++		jfs_error(bmp->db_ipbmap->i_sb, "Corrupt dmapctl page\n");
++		release_metapage(mp);
++		return -EIO;
++	}
++
+ 	/* dmap control page trees fan-out by 4 and a single allocation
+ 	 * group may be described by 1 or 2 subtrees within the ag level
+ 	 * dmap control page, depending upon the ag size. examine the ag's
+diff --git a/fs/libfs.c b/fs/libfs.c
+index 14e0e9b18c8ed8..1149c62a801fd3 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -612,7 +612,7 @@ void simple_recursive_removal(struct dentry *dentry,
+ 		struct dentry *victim = NULL, *child;
+ 		struct inode *inode = this->d_inode;
+ 
+-		inode_lock(inode);
++		inode_lock_nested(inode, I_MUTEX_CHILD);
+ 		if (d_is_dir(this))
+ 			inode->i_flags |= S_DEAD;
+ 		while ((child = find_next_child(this, victim)) == NULL) {
+@@ -624,7 +624,7 @@ void simple_recursive_removal(struct dentry *dentry,
+ 			victim = this;
+ 			this = this->d_parent;
+ 			inode = this->d_inode;
+-			inode_lock(inode);
++			inode_lock_nested(inode, I_MUTEX_CHILD);
+ 			if (simple_positive(victim)) {
+ 				d_invalidate(victim);	// avoid lost mounts
+ 				if (d_is_dir(victim))
+diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
+index 47189476b5538b..5d6edafbed202a 100644
+--- a/fs/nfs/blocklayout/blocklayout.c
++++ b/fs/nfs/blocklayout/blocklayout.c
+@@ -149,8 +149,8 @@ do_add_page_to_bio(struct bio *bio, int npg, enum req_op op, sector_t isect,
+ 
+ 	/* limit length to what the device mapping allows */
+ 	end = disk_addr + *len;
+-	if (end >= map->start + map->len)
+-		*len = map->start + map->len - disk_addr;
++	if (end >= map->disk_offset + map->len)
++		*len = map->disk_offset + map->len - disk_addr;
+ 
+ retry:
+ 	if (!bio) {
+diff --git a/fs/nfs/blocklayout/dev.c b/fs/nfs/blocklayout/dev.c
+index cab8809f0e0f48..44306ac22353be 100644
+--- a/fs/nfs/blocklayout/dev.c
++++ b/fs/nfs/blocklayout/dev.c
+@@ -257,10 +257,11 @@ static bool bl_map_stripe(struct pnfs_block_dev *dev, u64 offset,
+ 	struct pnfs_block_dev *child;
+ 	u64 chunk;
+ 	u32 chunk_idx;
++	u64 disk_chunk;
+ 	u64 disk_offset;
+ 
+ 	chunk = div_u64(offset, dev->chunk_size);
+-	div_u64_rem(chunk, dev->nr_children, &chunk_idx);
++	disk_chunk = div_u64_rem(chunk, dev->nr_children, &chunk_idx);
+ 
+ 	if (chunk_idx >= dev->nr_children) {
+ 		dprintk("%s: invalid chunk idx %d (%lld/%lld)\n",
+@@ -273,7 +274,7 @@ static bool bl_map_stripe(struct pnfs_block_dev *dev, u64 offset,
+ 	offset = chunk * dev->chunk_size;
+ 
+ 	/* disk offset of the stripe */
+-	disk_offset = div_u64(offset, dev->nr_children);
++	disk_offset = disk_chunk * dev->chunk_size;
+ 
+ 	child = &dev->children[chunk_idx];
+ 	child->map(child, disk_offset, map);
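
The corrected math above keeps the quotient that div_u64_rem() returns and scales it by the chunk size, instead of dividing the rounded byte offset by the child count. A userspace model with made-up numbers shows why the two differ:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t offset = 450, chunk_size = 64, nr_children = 3;

	uint64_t chunk = offset / chunk_size;           /* logical chunk: 7 */
	uint32_t chunk_idx = chunk % nr_children;       /* child device: 1 */
	uint64_t disk_chunk = chunk / nr_children;      /* chunk on child: 2 */
	uint64_t disk_offset = disk_chunk * chunk_size; /* fixed math: 128 */

	printf("child %u, disk offset %llu\n", chunk_idx,
	       (unsigned long long)disk_offset);
	return 0;
}

Logical chunk 7 lands on child 7 % 3 == 1 at disk chunk 7 / 3 == 2, hence disk offset 128; the old formula would have produced 448 / 3 == 149, a misaligned address inside the child device.
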
+diff --git a/fs/nfs/blocklayout/extent_tree.c b/fs/nfs/blocklayout/extent_tree.c
+index 8f7cff7a42938e..0add0f329816b0 100644
+--- a/fs/nfs/blocklayout/extent_tree.c
++++ b/fs/nfs/blocklayout/extent_tree.c
+@@ -552,6 +552,15 @@ static int ext_tree_encode_commit(struct pnfs_block_layout *bl, __be32 *p,
+ 	return ret;
+ }
+ 
++/**
++ * ext_tree_prepare_commit - encode extents that need to be committed
++ * @arg: layout commit data
++ *
++ * Return values:
++ *   %0: Success, all required extents are encoded
++ *   %-ENOSPC: Some extents are encoded, but not all, due to RPC size limit
++ *   %-ENOMEM: Out of memory, extents not encoded
++ */
+ int
+ ext_tree_prepare_commit(struct nfs4_layoutcommit_args *arg)
+ {
+@@ -568,12 +577,12 @@ ext_tree_prepare_commit(struct nfs4_layoutcommit_args *arg)
+ 	start_p = page_address(arg->layoutupdate_page);
+ 	arg->layoutupdate_pages = &arg->layoutupdate_page;
+ 
+-retry:
+-	ret = ext_tree_encode_commit(bl, start_p + 1, buffer_size, &count, &arg->lastbytewritten);
++	ret = ext_tree_encode_commit(bl, start_p + 1, buffer_size,
++			&count, &arg->lastbytewritten);
+ 	if (unlikely(ret)) {
+ 		ext_tree_free_commitdata(arg, buffer_size);
+ 
+-		buffer_size = ext_tree_layoutupdate_size(bl, count);
++		buffer_size = NFS_SERVER(arg->inode)->wsize;
+ 		count = 0;
+ 
+ 		arg->layoutupdate_pages =
+@@ -588,7 +597,8 @@ ext_tree_prepare_commit(struct nfs4_layoutcommit_args *arg)
+ 			return -ENOMEM;
+ 		}
+ 
+-		goto retry;
++		ret = ext_tree_encode_commit(bl, start_p + 1, buffer_size,
++				&count, &arg->lastbytewritten);
+ 	}
+ 
+ 	*start_p = cpu_to_be32(count);
+@@ -608,7 +618,7 @@ ext_tree_prepare_commit(struct nfs4_layoutcommit_args *arg)
+ 	}
+ 
+ 	dprintk("%s found %zu ranges\n", __func__, count);
+-	return 0;
++	return ret;
+ }
+ 
+ void
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c
+index d8fe7c0e7e052d..188cac04f14ca6 100644
+--- a/fs/nfs/client.c
++++ b/fs/nfs/client.c
+@@ -682,6 +682,44 @@ struct nfs_client *nfs_init_client(struct nfs_client *clp,
+ }
+ EXPORT_SYMBOL_GPL(nfs_init_client);
+ 
++static void nfs4_server_set_init_caps(struct nfs_server *server)
++{
++#if IS_ENABLED(CONFIG_NFS_V4)
++	/* Set the basic capabilities */
++	server->caps = server->nfs_client->cl_mvops->init_caps;
++	if (server->flags & NFS_MOUNT_NORDIRPLUS)
++		server->caps &= ~NFS_CAP_READDIRPLUS;
++	if (server->nfs_client->cl_proto == XPRT_TRANSPORT_RDMA)
++		server->caps &= ~NFS_CAP_READ_PLUS;
++
++	/*
++	 * Don't use NFS uid/gid mapping if we're using AUTH_SYS or lower
++	 * authentication.
++	 */
++	if (nfs4_disable_idmapping &&
++	    server->client->cl_auth->au_flavor == RPC_AUTH_UNIX)
++		server->caps |= NFS_CAP_UIDGID_NOMAP;
++#endif
++}
++
++void nfs_server_set_init_caps(struct nfs_server *server)
++{
++	switch (server->nfs_client->rpc_ops->version) {
++	case 2:
++		server->caps = NFS_CAP_HARDLINKS | NFS_CAP_SYMLINKS;
++		break;
++	case 3:
++		server->caps = NFS_CAP_HARDLINKS | NFS_CAP_SYMLINKS;
++		if (!(server->flags & NFS_MOUNT_NORDIRPLUS))
++			server->caps |= NFS_CAP_READDIRPLUS;
++		break;
++	default:
++		nfs4_server_set_init_caps(server);
++		break;
++	}
++}
++EXPORT_SYMBOL_GPL(nfs_server_set_init_caps);
++
+ /*
+  * Create a version 2 or 3 client
+  */
+@@ -726,7 +764,6 @@ static int nfs_init_server(struct nfs_server *server,
+ 	/* Initialise the client representation from the mount data */
+ 	server->flags = ctx->flags;
+ 	server->options = ctx->options;
+-	server->caps |= NFS_CAP_HARDLINKS | NFS_CAP_SYMLINKS;
+ 
+ 	switch (clp->rpc_ops->version) {
+ 	case 2:
+@@ -762,6 +799,8 @@ static int nfs_init_server(struct nfs_server *server,
+ 	if (error < 0)
+ 		goto error;
+ 
++	nfs_server_set_init_caps(server);
++
+ 	/* Preserve the values of mount_server-related mount options */
+ 	if (ctx->mount_server.addrlen) {
+ 		memcpy(&server->mountd_address, &ctx->mount_server.address,
+@@ -936,7 +975,6 @@ void nfs_server_copy_userdata(struct nfs_server *target, struct nfs_server *sour
+ 	target->acregmax = source->acregmax;
+ 	target->acdirmin = source->acdirmin;
+ 	target->acdirmax = source->acdirmax;
+-	target->caps = source->caps;
+ 	target->options = source->options;
+ 	target->auth_info = source->auth_info;
+ 	target->port = source->port;
+@@ -1170,6 +1208,8 @@ struct nfs_server *nfs_clone_server(struct nfs_server *source,
+ 	if (error < 0)
+ 		goto out_free_server;
+ 
++	nfs_server_set_init_caps(server);
++
+ 	/* probe the filesystem info for this server filesystem */
+ 	error = nfs_probe_server(server, fh);
+ 	if (error < 0)
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index d8f768254f1665..9dcbc339649221 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -232,7 +232,7 @@ extern struct nfs_client *
+ nfs4_find_client_sessionid(struct net *, const struct sockaddr *,
+ 				struct nfs4_sessionid *, u32);
+ extern struct nfs_server *nfs_create_server(struct fs_context *);
+-extern void nfs4_server_set_init_caps(struct nfs_server *);
++extern void nfs_server_set_init_caps(struct nfs_server *);
+ extern struct nfs_server *nfs4_create_server(struct fs_context *);
+ extern struct nfs_server *nfs4_create_referral_server(struct fs_context *);
+ extern int nfs4_update_server(struct nfs_server *server, const char *hostname,
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index 162c85a83a14ae..dccf628850a727 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -1088,24 +1088,6 @@ static void nfs4_session_limit_xasize(struct nfs_server *server)
+ #endif
+ }
+ 
+-void nfs4_server_set_init_caps(struct nfs_server *server)
+-{
+-	/* Set the basic capabilities */
+-	server->caps |= server->nfs_client->cl_mvops->init_caps;
+-	if (server->flags & NFS_MOUNT_NORDIRPLUS)
+-			server->caps &= ~NFS_CAP_READDIRPLUS;
+-	if (server->nfs_client->cl_proto == XPRT_TRANSPORT_RDMA)
+-		server->caps &= ~NFS_CAP_READ_PLUS;
+-
+-	/*
+-	 * Don't use NFS uid/gid mapping if we're using AUTH_SYS or lower
+-	 * authentication.
+-	 */
+-	if (nfs4_disable_idmapping &&
+-			server->client->cl_auth->au_flavor == RPC_AUTH_UNIX)
+-		server->caps |= NFS_CAP_UIDGID_NOMAP;
+-}
+-
+ static int nfs4_server_common_setup(struct nfs_server *server,
+ 		struct nfs_fh *mntfh, bool auth_probe)
+ {
+@@ -1120,7 +1102,7 @@ static int nfs4_server_common_setup(struct nfs_server *server,
+ 	if (error < 0)
+ 		goto out;
+ 
+-	nfs4_server_set_init_caps(server);
++	nfs_server_set_init_caps(server);
+ 
+ 	/* Probe the root fh to retrieve its FSID and filehandle */
+ 	error = nfs4_get_rootfh(server, mntfh, auth_probe);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 3c1ef174aa813d..151765d619cf49 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -4083,7 +4083,7 @@ int nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *fhandle)
+ 	};
+ 	int err;
+ 
+-	nfs4_server_set_init_caps(server);
++	nfs_server_set_init_caps(server);
+ 	do {
+ 		err = nfs4_handle_exception(server,
+ 				_nfs4_server_capabilities(server, fhandle),
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 1a7ec68bde1532..3fd0971bf16fcf 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -3340,6 +3340,7 @@ pnfs_layoutcommit_inode(struct inode *inode, bool sync)
+ 	struct nfs_inode *nfsi = NFS_I(inode);
+ 	loff_t end_pos;
+ 	int status;
++	bool mark_as_dirty = false;
+ 
+ 	if (!pnfs_layoutcommit_outstanding(inode))
+ 		return 0;
+@@ -3391,19 +3392,23 @@ pnfs_layoutcommit_inode(struct inode *inode, bool sync)
+ 	if (ld->prepare_layoutcommit) {
+ 		status = ld->prepare_layoutcommit(&data->args);
+ 		if (status) {
+-			put_cred(data->cred);
++			if (status != -ENOSPC)
++				put_cred(data->cred);
+ 			spin_lock(&inode->i_lock);
+ 			set_bit(NFS_INO_LAYOUTCOMMIT, &nfsi->flags);
+ 			if (end_pos > nfsi->layout->plh_lwb)
+ 				nfsi->layout->plh_lwb = end_pos;
+-			goto out_unlock;
++			if (status != -ENOSPC)
++				goto out_unlock;
++			spin_unlock(&inode->i_lock);
++			mark_as_dirty = true;
+ 		}
+ 	}
+ 
+ 
+ 	status = nfs4_proc_layoutcommit(data, sync);
+ out:
+-	if (status)
++	if (status || mark_as_dirty)
+ 		mark_inode_dirty_sync(inode);
+ 	dprintk("<-- %s status %d\n", __func__, status);
+ 	return status;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 59a693f22452b8..50eaf05c33bbac 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4693,10 +4693,16 @@ nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
+ 	}
+ 	status = nfs_ok;
+ 	if (conf) {
+-		old = unconf;
+-		unhash_client_locked(old);
+-		nfsd4_change_callback(conf, &unconf->cl_cb_conn);
+-	} else {
++		if (get_client_locked(conf) == nfs_ok) {
++			old = unconf;
++			unhash_client_locked(old);
++			nfsd4_change_callback(conf, &unconf->cl_cb_conn);
++		} else {
++			conf = NULL;
++		}
++	}
++
++	if (!conf) {
+ 		old = find_confirmed_client_by_name(&unconf->cl_name, nn);
+ 		if (old) {
+ 			status = nfserr_clid_inuse;
+@@ -4713,10 +4719,14 @@ nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
+ 			}
+ 			trace_nfsd_clid_replaced(&old->cl_clientid);
+ 		}
++		status = get_client_locked(unconf);
++		if (status != nfs_ok) {
++			old = NULL;
++			goto out;
++		}
+ 		move_to_confirmed(unconf);
+ 		conf = unconf;
+ 	}
+-	get_client_locked(conf);
+ 	spin_unlock(&nn->client_lock);
+ 	if (conf == unconf)
+ 		fsnotify_dentry(conf->cl_nfsd_info_dentry, FS_MODIFY);
+@@ -6318,6 +6328,20 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
+ 		status = nfs4_check_deleg(cl, open, &dp);
+ 		if (status)
+ 			goto out;
++		if (dp && nfsd4_is_deleg_cur(open) &&
++				(dp->dl_stid.sc_file != fp)) {
++			/*
++			 * RFC8881 section 8.2.4 mandates the server to return
++			 * RFC8881 section 8.2.4 requires the server to return
++			 * not match the current filehandle. However returning
++			 * NFS4ERR_BAD_STATEID in the OPEN can cause the client
++			 * to repeatedly retry the operation with the same
++			 * stateid, since the stateid itself is valid. To avoid
++			 * this situation, NFSD returns NFS4ERR_INVAL instead.
++			 */
++			status = nfserr_inval;
++			goto out;
++		}
+ 		stp = nfsd4_find_and_lock_existing_open(fp, open);
+ 	} else {
+ 		open->op_file = NULL;
+diff --git a/fs/ntfs3/dir.c b/fs/ntfs3/dir.c
+index b6da80c69ca634..600e66035c1b70 100644
+--- a/fs/ntfs3/dir.c
++++ b/fs/ntfs3/dir.c
+@@ -304,6 +304,9 @@ static inline bool ntfs_dir_emit(struct ntfs_sb_info *sbi,
+ 	if (sbi->options->nohidden && (fname->dup.fa & FILE_ATTRIBUTE_HIDDEN))
+ 		return true;
+ 
++	if (fname->name_len + sizeof(struct NTFS_DE) > le16_to_cpu(e->size))
++		return true;
++
+ 	name_len = ntfs_utf16_to_nls(sbi, fname->name, fname->name_len, name,
+ 				     PATH_MAX);
+ 	if (name_len <= 0) {
+diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
+index 0f0d27d4644a9b..8214970f4594f1 100644
+--- a/fs/ntfs3/inode.c
++++ b/fs/ntfs3/inode.c
+@@ -1062,10 +1062,10 @@ int inode_read_data(struct inode *inode, void *data, size_t bytes)
+  * Number of bytes for REPARSE_DATA_BUFFER(IO_REPARSE_TAG_SYMLINK)
+  * for unicode string of @uni_len length.
+  */
+-static inline u32 ntfs_reparse_bytes(u32 uni_len)
++static inline u32 ntfs_reparse_bytes(u32 uni_len, bool is_absolute)
+ {
+ 	/* Header + unicode string + decorated unicode string. */
+-	return sizeof(short) * (2 * uni_len + 4) +
++	return sizeof(short) * (2 * uni_len + (is_absolute ? 4 : 0)) +
+ 	       offsetof(struct REPARSE_DATA_BUFFER,
+ 			SymbolicLinkReparseBuffer.PathBuffer);
+ }
+@@ -1078,8 +1078,11 @@ ntfs_create_reparse_buffer(struct ntfs_sb_info *sbi, const char *symname,
+ 	struct REPARSE_DATA_BUFFER *rp;
+ 	__le16 *rp_name;
+ 	typeof(rp->SymbolicLinkReparseBuffer) *rs;
++	bool is_absolute;
+ 
+-	rp = kzalloc(ntfs_reparse_bytes(2 * size + 2), GFP_NOFS);
++	is_absolute = (strlen(symname) > 1 && symname[1] == ':');
++
++	rp = kzalloc(ntfs_reparse_bytes(2 * size + 2, is_absolute), GFP_NOFS);
+ 	if (!rp)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -1094,7 +1097,7 @@ ntfs_create_reparse_buffer(struct ntfs_sb_info *sbi, const char *symname,
+ 		goto out;
+ 
+ 	/* err = the length of unicode name of symlink. */
+-	*nsize = ntfs_reparse_bytes(err);
++	*nsize = ntfs_reparse_bytes(err, is_absolute);
+ 
+ 	if (*nsize > sbi->reparse.max_size) {
+ 		err = -EFBIG;
+@@ -1114,7 +1117,7 @@ ntfs_create_reparse_buffer(struct ntfs_sb_info *sbi, const char *symname,
+ 
+ 	/* PrintName + SubstituteName. */
+ 	rs->SubstituteNameOffset = cpu_to_le16(sizeof(short) * err);
+-	rs->SubstituteNameLength = cpu_to_le16(sizeof(short) * err + 8);
++	rs->SubstituteNameLength = cpu_to_le16(sizeof(short) * err + (is_absolute ? 8 : 0));
+ 	rs->PrintNameLength = rs->SubstituteNameOffset;
+ 
+ 	/*
+@@ -1122,16 +1125,18 @@ ntfs_create_reparse_buffer(struct ntfs_sb_info *sbi, const char *symname,
+ 	 * parse this path.
+ 	 * 0-absolute path 1- relative path (SYMLINK_FLAG_RELATIVE).
+ 	 */
+-	rs->Flags = 0;
++	rs->Flags = cpu_to_le32(is_absolute ? 0 : SYMLINK_FLAG_RELATIVE);
+ 
+-	memmove(rp_name + err + 4, rp_name, sizeof(short) * err);
++	memmove(rp_name + err + (is_absolute ? 4 : 0), rp_name, sizeof(short) * err);
+ 
+-	/* Decorate SubstituteName. */
+-	rp_name += err;
+-	rp_name[0] = cpu_to_le16('\\');
+-	rp_name[1] = cpu_to_le16('?');
+-	rp_name[2] = cpu_to_le16('?');
+-	rp_name[3] = cpu_to_le16('\\');
++	if (is_absolute) {
++		/* Decorate SubstituteName. */
++		rp_name += err;
++		rp_name[0] = cpu_to_le16('\\');
++		rp_name[1] = cpu_to_le16('?');
++		rp_name[2] = cpu_to_le16('?');
++		rp_name[3] = cpu_to_le16('\\');
++	}
+ 
+ 	return rp;
+ out:
+diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
+index 40b6bce12951ec..89aadc6cdd8779 100644
+--- a/fs/ocfs2/aops.c
++++ b/fs/ocfs2/aops.c
+@@ -1071,6 +1071,7 @@ static int ocfs2_grab_folios_for_write(struct address_space *mapping,
+ 			if (IS_ERR(wc->w_folios[i])) {
+ 				ret = PTR_ERR(wc->w_folios[i]);
+ 				mlog_errno(ret);
++				wc->w_folios[i] = NULL;
+ 				goto out;
+ 			}
+ 		}
+diff --git a/fs/orangefs/orangefs-debugfs.c b/fs/orangefs/orangefs-debugfs.c
+index e8e3badbc2ec06..1c375fb650185c 100644
+--- a/fs/orangefs/orangefs-debugfs.c
++++ b/fs/orangefs/orangefs-debugfs.c
+@@ -396,7 +396,7 @@ static ssize_t orangefs_debug_read(struct file *file,
+ 		goto out;
+ 
+ 	mutex_lock(&orangefs_debug_lock);
+-	sprintf_ret = sprintf(buf, "%s", (char *)file->private_data);
++	sprintf_ret = scnprintf(buf, ORANGEFS_MAX_DEBUG_STRING_LEN, "%s", (char *)file->private_data);
+ 	mutex_unlock(&orangefs_debug_lock);
+ 
+ 	read_ret = simple_read_from_buffer(ubuf, count, ppos, buf, sprintf_ret);
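
The switch to scnprintf() above matters twice over: the write is bounded by the buffer size, and the return value is the number of bytes actually emitted, which is what simple_read_from_buffer() expects. Userspace has no scnprintf(), but its semantics can be modelled by clamping snprintf()'s return value, as in this sketch:

#include <stdio.h>

int main(void)
{
	char buf[8];
	/* snprintf() returns the would-be length; clamp it to recover
	 * scnprintf()-style "bytes actually written" semantics. */
	int n = snprintf(buf, sizeof(buf), "%s", "0123456789");
	int emitted = n < (int)sizeof(buf) ? n : (int)sizeof(buf) - 1;

	printf("returned %d, emitted %d (\"%s\")\n", n, emitted, buf);
	return 0;
}

This prints: returned 10, emitted 7 ("0123456").
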
+diff --git a/fs/pidfs.c b/fs/pidfs.c
+index 005976025ce9fd..b8fe3495ccbf43 100644
+--- a/fs/pidfs.c
++++ b/fs/pidfs.c
+@@ -854,6 +854,8 @@ static int pidfs_init_fs_context(struct fs_context *fc)
+ 	if (!ctx)
+ 		return -ENOMEM;
+ 
++	fc->s_iflags |= SB_I_NOEXEC;
++	fc->s_iflags |= SB_I_NODEV;
+ 	ctx->ops = &pidfs_sops;
+ 	ctx->eops = &pidfs_export_operations;
+ 	ctx->dops = &pidfs_dentry_operations;
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index e57e323817e78e..3b8eaa7722c85b 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -1020,10 +1020,13 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
+ {
+ 	struct mem_size_stats *mss = walk->private;
+ 	struct vm_area_struct *vma = walk->vma;
+-	pte_t ptent = huge_ptep_get(walk->mm, addr, pte);
+ 	struct folio *folio = NULL;
+ 	bool present = false;
++	spinlock_t *ptl;
++	pte_t ptent;
+ 
++	ptl = huge_pte_lock(hstate_vma(vma), walk->mm, pte);
++	ptent = huge_ptep_get(walk->mm, addr, pte);
+ 	if (pte_present(ptent)) {
+ 		folio = page_folio(pte_page(ptent));
+ 		present = true;
+@@ -1042,6 +1045,7 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
+ 		else
+ 			mss->private_hugetlb += huge_page_size(hstate_vma(vma));
+ 	}
++	spin_unlock(ptl);
+ 	return 0;
+ }
+ #else
+diff --git a/fs/smb/client/cifsencrypt.c b/fs/smb/client/cifsencrypt.c
+index 6be850d2a34677..3cc68624690876 100644
+--- a/fs/smb/client/cifsencrypt.c
++++ b/fs/smb/client/cifsencrypt.c
+@@ -532,17 +532,67 @@ CalcNTLMv2_response(const struct cifs_ses *ses, char *ntlmv2_hash, struct shash_
+ 	return rc;
+ }
+ 
++/*
++ * Set up NTLMv2 response blob with SPN (cifs/<hostname>) appended to the
++ * existing list of AV pairs.
++ */
++static int set_auth_key_response(struct cifs_ses *ses)
++{
++	size_t baselen = CIFS_SESS_KEY_SIZE + sizeof(struct ntlmv2_resp);
++	size_t len, spnlen, tilen = 0, num_avs = 2 /* SPN + EOL */;
++	struct TCP_Server_Info *server = ses->server;
++	char *spn __free(kfree) = NULL;
++	struct ntlmssp2_name *av;
++	char *rsp = NULL;
++	int rc;
++
++	spnlen = strlen(server->hostname);
++	len = sizeof("cifs/") + spnlen;
++	spn = kmalloc(len, GFP_KERNEL);
++	if (!spn) {
++		rc = -ENOMEM;
++		goto out;
++	}
++
++	spnlen = scnprintf(spn, len, "cifs/%.*s",
++			   (int)spnlen, server->hostname);
++
++	av_for_each_entry(ses, av)
++		tilen += sizeof(*av) + AV_LEN(av);
++
++	len = baselen + tilen + spnlen * sizeof(__le16) + num_avs * sizeof(*av);
++	rsp = kmalloc(len, GFP_KERNEL);
++	if (!rsp) {
++		rc = -ENOMEM;
++		goto out;
++	}
++
++	memcpy(rsp + baselen, ses->auth_key.response, tilen);
++	av = (void *)(rsp + baselen + tilen);
++	av->type = cpu_to_le16(NTLMSSP_AV_TARGET_NAME);
++	av->length = cpu_to_le16(spnlen * sizeof(__le16));
++	cifs_strtoUTF16((__le16 *)av->data, spn, spnlen, ses->local_nls);
++	av = (void *)((__u8 *)av + sizeof(*av) + AV_LEN(av));
++	av->type = cpu_to_le16(NTLMSSP_AV_EOL);
++	av->length = 0;
++
++	rc = 0;
++	ses->auth_key.len = len;
++out:
++	ses->auth_key.response = rsp;
++	return rc;
++}
++
+ int
+ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
+ {
+ 	struct shash_desc *hmacmd5 = NULL;
+-	int rc;
+-	int baselen;
+-	unsigned int tilen;
++	unsigned char *tiblob = NULL; /* target info blob */
+ 	struct ntlmv2_resp *ntlmv2;
+ 	char ntlmv2_hash[16];
+-	unsigned char *tiblob = NULL; /* target info blob */
+ 	__le64 rsp_timestamp;
++	__u64 cc;
++	int rc;
+ 
+ 	if (nls_cp == NULL) {
+ 		cifs_dbg(VFS, "%s called with nls_cp==NULL\n", __func__);
+@@ -588,32 +638,25 @@ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
+ 	 * (as Windows 7 does)
+ 	 */
+ 	rsp_timestamp = find_timestamp(ses);
++	get_random_bytes(&cc, sizeof(cc));
+ 
+-	baselen = CIFS_SESS_KEY_SIZE + sizeof(struct ntlmv2_resp);
+-	tilen = ses->auth_key.len;
+-	tiblob = ses->auth_key.response;
++	cifs_server_lock(ses->server);
+ 
+-	ses->auth_key.response = kmalloc(baselen + tilen, GFP_KERNEL);
+-	if (!ses->auth_key.response) {
+-		rc = -ENOMEM;
++	tiblob = ses->auth_key.response;
++	rc = set_auth_key_response(ses);
++	if (rc) {
+ 		ses->auth_key.len = 0;
+-		goto setup_ntlmv2_rsp_ret;
++		goto unlock;
+ 	}
+-	ses->auth_key.len += baselen;
+ 
+ 	ntlmv2 = (struct ntlmv2_resp *)
+ 			(ses->auth_key.response + CIFS_SESS_KEY_SIZE);
+ 	ntlmv2->blob_signature = cpu_to_le32(0x00000101);
+ 	ntlmv2->reserved = 0;
+ 	ntlmv2->time = rsp_timestamp;
+-
+-	get_random_bytes(&ntlmv2->client_chal, sizeof(ntlmv2->client_chal));
++	ntlmv2->client_chal = cc;
+ 	ntlmv2->reserved2 = 0;
+ 
+-	memcpy(ses->auth_key.response + baselen, tiblob, tilen);
+-
+-	cifs_server_lock(ses->server);
+-
+ 	rc = cifs_alloc_hash("hmac(md5)", &hmacmd5);
+ 	if (rc) {
+ 		cifs_dbg(VFS, "Could not allocate HMAC-MD5, rc=%d\n", rc);
+diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
+index 0e509a0433fb67..4a6d1d83630fce 100644
+--- a/fs/smb/client/cifssmb.c
++++ b/fs/smb/client/cifssmb.c
+@@ -4000,6 +4000,12 @@ CIFSFindFirst(const unsigned int xid, struct cifs_tcon *tcon,
+ 			pSMB->FileName[name_len] = 0;
+ 			pSMB->FileName[name_len+1] = 0;
+ 			name_len += 2;
++		} else if (!searchName[0]) {
++			pSMB->FileName[0] = CIFS_DIR_SEP(cifs_sb);
++			pSMB->FileName[1] = 0;
++			pSMB->FileName[2] = 0;
++			pSMB->FileName[3] = 0;
++			name_len = 4;
+ 		}
+ 	} else {
+ 		name_len = copy_path_name(pSMB->FileName, searchName);
+@@ -4011,6 +4017,10 @@ CIFSFindFirst(const unsigned int xid, struct cifs_tcon *tcon,
+ 			pSMB->FileName[name_len] = '*';
+ 			pSMB->FileName[name_len+1] = 0;
+ 			name_len += 2;
++		} else if (!searchName[0]) {
++			pSMB->FileName[0] = CIFS_DIR_SEP(cifs_sb);
++			pSMB->FileName[1] = 0;
++			name_len = 2;
+ 		}
+ 	}
+ 
+diff --git a/fs/smb/client/compress.c b/fs/smb/client/compress.c
+index 766b4de13da76a..db709f5cd2e1ff 100644
+--- a/fs/smb/client/compress.c
++++ b/fs/smb/client/compress.c
+@@ -155,58 +155,29 @@ static int cmp_bkt(const void *_a, const void *_b)
+ }
+ 
+ /*
+- * TODO:
+- * Support other iter types, if required.
+- * Only ITER_XARRAY is supported for now.
++ * Collect some 2K samples with 2K gaps between them.

+  */
+-static int collect_sample(const struct iov_iter *iter, ssize_t max, u8 *sample)
++static int collect_sample(const struct iov_iter *source, ssize_t max, u8 *sample)
+ {
+-	struct folio *folios[16], *folio;
+-	unsigned int nr, i, j, npages;
+-	loff_t start = iter->xarray_start + iter->iov_offset;
+-	pgoff_t last, index = start / PAGE_SIZE;
+-	size_t len, off, foff;
+-	void *p;
+-	int s = 0;
+-
+-	last = (start + max - 1) / PAGE_SIZE;
+-	do {
+-		nr = xa_extract(iter->xarray, (void **)folios, index, last, ARRAY_SIZE(folios),
+-				XA_PRESENT);
+-		if (nr == 0)
+-			return -EIO;
+-
+-		for (i = 0; i < nr; i++) {
+-			folio = folios[i];
+-			npages = folio_nr_pages(folio);
+-			foff = start - folio_pos(folio);
+-			off = foff % PAGE_SIZE;
+-
+-			for (j = foff / PAGE_SIZE; j < npages; j++) {
+-				size_t len2;
+-
+-				len = min_t(size_t, max, PAGE_SIZE - off);
+-				len2 = min_t(size_t, len, SZ_2K);
+-
+-				p = kmap_local_page(folio_page(folio, j));
+-				memcpy(&sample[s], p, len2);
+-				kunmap_local(p);
+-
+-				s += len2;
+-
+-				if (len2 < SZ_2K || s >= max - SZ_2K)
+-					return s;
+-
+-				max -= len;
+-				if (max <= 0)
+-					return s;
+-
+-				start += len;
+-				off = 0;
+-				index++;
+-			}
+-		}
+-	} while (nr == ARRAY_SIZE(folios));
++	struct iov_iter iter = *source;
++	size_t s = 0;
++
++	while (iov_iter_count(&iter) >= SZ_2K) {
++		size_t part = umin(umin(iov_iter_count(&iter), SZ_2K), max);
++		size_t n;
++
++		n = copy_from_iter(sample + s, part, &iter);
++		if (n != part)
++			return -EFAULT;
++
++		s += n;
++		max -= n;
++
++		if (iov_iter_count(&iter) < PAGE_SIZE - SZ_2K)
++			break;
++
++		iov_iter_advance(&iter, SZ_2K);
++	}
+ 
+ 	return s;
+ }
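
The rewritten sampler above no longer walks the xarray by hand; it copies up to 2K from the iterator, skips a 2K gap, and repeats until the sample buffer or the data runs out. An approximate userspace model of that loop, with memcpy() standing in for copy_from_iter() and the stop conditions simplified:

#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define SZ_2K 2048u

/* Take 2K chunks with 2K gaps between them until the sample buffer
 * (max) is full or the input runs out. */
static size_t collect_sample(const unsigned char *data, size_t len,
			     unsigned char *sample, size_t max)
{
	size_t s = 0, pos = 0;

	while (pos + SZ_2K <= len && max > 0) {
		size_t part = SZ_2K < max ? SZ_2K : max;

		memcpy(sample + s, data + pos, part);
		s += part;
		max -= part;
		pos += part + SZ_2K;	/* skip a 2K gap */
	}
	return s;
}

int main(void)
{
	unsigned char data[16384] = { 0 }, sample[8192];

	printf("sampled %zu bytes\n",
	       collect_sample(data, sizeof(data), sample, sizeof(sample)));
	return 0;
}
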
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index 87ea186774741b..1fdfe3aa281e99 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -4198,7 +4198,6 @@ cifs_negotiate_protocol(const unsigned int xid, struct cifs_ses *ses,
+ 		return 0;
+ 	}
+ 
+-	server->lstrp = jiffies;
+ 	server->tcpStatus = CifsInNegotiate;
+ 	server->neg_start = jiffies;
+ 	spin_unlock(&server->srv_lock);
+diff --git a/fs/smb/client/sess.c b/fs/smb/client/sess.c
+index 330bc3d25badd8..0a8c2fcc9dedf1 100644
+--- a/fs/smb/client/sess.c
++++ b/fs/smb/client/sess.c
+@@ -332,6 +332,7 @@ cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server)
+ 	struct cifs_server_iface *old_iface = NULL;
+ 	struct cifs_server_iface *last_iface = NULL;
+ 	struct sockaddr_storage ss;
++	int retry = 0;
+ 
+ 	spin_lock(&ses->chan_lock);
+ 	chan_index = cifs_ses_get_chan_index(ses, server);
+@@ -360,6 +361,7 @@ cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server)
+ 		return;
+ 	}
+ 
++try_again:
+ 	last_iface = list_last_entry(&ses->iface_list, struct cifs_server_iface,
+ 				     iface_head);
+ 	iface_min_speed = last_iface->speed;
+@@ -397,6 +399,13 @@ cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server)
+ 	}
+ 
+ 	if (list_entry_is_head(iface, &ses->iface_list, iface_head)) {
++		list_for_each_entry(iface, &ses->iface_list, iface_head)
++			iface->weight_fulfilled = 0;
++
++		/* see if it can be satisfied in second attempt */
++		if (!retry++)
++			goto try_again;
++
+ 		iface = NULL;
+ 		cifs_dbg(FYI, "unable to find a suitable iface\n");
+ 	}
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index fcbc4048f10644..b9a6beb3d34c98 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -772,6 +772,13 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
+ 			bytes_left -= sizeof(*p);
+ 			break;
+ 		}
++		/* Validate that Next doesn't point beyond the buffer */
++		if (next > bytes_left) {
++			cifs_dbg(VFS, "%s: invalid Next pointer %zu > %zd\n",
++				 __func__, next, bytes_left);
++			rc = -EINVAL;
++			goto out;
++		}
+ 		p = (struct network_interface_info_ioctl_rsp *)((u8 *)p+next);
+ 		bytes_left -= next;
+ 	}
+@@ -783,7 +790,9 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
+ 	}
+ 
+ 	/* Azure rounds the buffer size up 8, to a 16 byte boundary */
+-	if ((bytes_left > 8) || p->Next)
++	if ((bytes_left > 8) ||
++	    (bytes_left >= offsetof(struct network_interface_info_ioctl_rsp, Next)
++	     + sizeof(p->Next) && p->Next))
+ 		cifs_dbg(VFS, "%s: incomplete interface info\n", __func__);
+ 
+ 	ses->iface_last_update = jiffies;
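
The added check rejects a Next offset that points past the bytes still remaining in the response buffer. The underlying pattern — walking a self-describing record chain inside an untrusted buffer — in a userspace sketch with illustrative types rather than the real SMB structures:

#include <stdint.h>
#include <stdio.h>

struct rec {
	uint32_t next;		/* byte offset to the next record, 0 = last */
	uint32_t payload;	/* stand-in for the real record body */
};

static int walk(const uint8_t *buf, size_t len)
{
	size_t off = 0;

	for (;;) {
		const struct rec *r;

		if (len - off < sizeof(*r))
			return -1;	/* truncated record */
		r = (const struct rec *)(buf + off);
		if (!r->next)
			return 0;	/* end of chain */
		if (r->next > len - off)
			return -1;	/* Next points past the buffer */
		off += r->next;
	}
}

int main(void)
{
	struct rec chain[2] = { { sizeof(struct rec), 1 }, { 0, 2 } };

	printf("%d\n", walk((const uint8_t *)chain, sizeof(chain)));
	return 0;
}

Without the "next > len - off" test, a crafted chain could step the cursor out of the buffer and turn the walk into an out-of-bounds read.
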
+diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
+index c661a8e6c18b85..b9bb531717a651 100644
+--- a/fs/smb/client/smbdirect.c
++++ b/fs/smb/client/smbdirect.c
+@@ -277,18 +277,20 @@ static void send_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	log_rdma_send(INFO, "smbd_request 0x%p completed wc->status=%d\n",
+ 		request, wc->status);
+ 
+-	if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_SEND) {
+-		log_rdma_send(ERR, "wc->status=%d wc->opcode=%d\n",
+-			wc->status, wc->opcode);
+-		smbd_disconnect_rdma_connection(request->info);
+-	}
+-
+ 	for (i = 0; i < request->num_sge; i++)
+ 		ib_dma_unmap_single(sc->ib.dev,
+ 			request->sge[i].addr,
+ 			request->sge[i].length,
+ 			DMA_TO_DEVICE);
+ 
++	if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_SEND) {
++		log_rdma_send(ERR, "wc->status=%d wc->opcode=%d\n",
++			wc->status, wc->opcode);
++		mempool_free(request, info->request_mempool);
++		smbd_disconnect_rdma_connection(info);
++		return;
++	}
++
+ 	if (atomic_dec_and_test(&request->info->send_pending))
+ 		wake_up(&request->info->wait_send_pending);
+ 
+@@ -1314,10 +1316,6 @@ void smbd_destroy(struct TCP_Server_Info *server)
+ 	log_rdma_event(INFO, "cancelling idle timer\n");
+ 	cancel_delayed_work_sync(&info->idle_timer_work);
+ 
+-	log_rdma_event(INFO, "wait for all send posted to IB to finish\n");
+-	wait_event(info->wait_send_pending,
+-		atomic_read(&info->send_pending) == 0);
+-
+ 	/* It's not possible for upper layer to get to reassembly */
+ 	log_rdma_event(INFO, "drain the reassembly queue\n");
+ 	do {
+@@ -1691,7 +1689,6 @@ static struct smbd_connection *_smbd_get_connection(
+ 	cancel_delayed_work_sync(&info->idle_timer_work);
+ 	destroy_caches_and_workqueue(info);
+ 	sc->status = SMBDIRECT_SOCKET_NEGOTIATE_FAILED;
+-	init_waitqueue_head(&info->conn_wait);
+ 	rdma_disconnect(sc->rdma.cm_id);
+ 	wait_event(info->conn_wait,
+ 		sc->status == SMBDIRECT_SOCKET_DISCONNECTED);
+@@ -1963,7 +1960,11 @@ int smbd_send(struct TCP_Server_Info *server,
+ 	 */
+ 
+ 	wait_event(info->wait_send_pending,
+-		atomic_read(&info->send_pending) == 0);
++		atomic_read(&info->send_pending) == 0 ||
++		sc->status != SMBDIRECT_SOCKET_CONNECTED);
++
++	if (sc->status != SMBDIRECT_SOCKET_CONNECTED && rc == 0)
++		rc = -EAGAIN;
+ 
+ 	return rc;
+ }
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 55a7887fdad758..1e379de6b22bce 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -6035,7 +6035,6 @@ static int smb2_create_link(struct ksmbd_work *work,
+ {
+ 	char *link_name = NULL, *target_name = NULL, *pathname = NULL;
+ 	struct path path, parent_path;
+-	bool file_present = false;
+ 	int rc;
+ 
+ 	if (buf_len < (u64)sizeof(struct smb2_file_link_info) +
+@@ -6068,11 +6067,8 @@ static int smb2_create_link(struct ksmbd_work *work,
+ 	if (rc) {
+ 		if (rc != -ENOENT)
+ 			goto out;
+-	} else
+-		file_present = true;
+-
+-	if (file_info->ReplaceIfExists) {
+-		if (file_present) {
++	} else {
++		if (file_info->ReplaceIfExists) {
+ 			rc = ksmbd_vfs_remove_file(work, &path);
+ 			if (rc) {
+ 				rc = -EINVAL;
+@@ -6080,21 +6076,17 @@ static int smb2_create_link(struct ksmbd_work *work,
+ 					    link_name);
+ 				goto out;
+ 			}
+-		}
+-	} else {
+-		if (file_present) {
++		} else {
+ 			rc = -EEXIST;
+ 			ksmbd_debug(SMB, "link already exists\n");
+ 			goto out;
+ 		}
++		ksmbd_vfs_kern_path_unlock(&parent_path, &path);
+ 	}
+-
+ 	rc = ksmbd_vfs_link(work, target_name, link_name);
+ 	if (rc)
+ 		rc = -EINVAL;
+ out:
+-	if (file_present)
+-		ksmbd_vfs_kern_path_unlock(&parent_path, &path);
+ 
+ 	if (!IS_ERR(link_name))
+ 		kfree(link_name);
+diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
+index cb1af30b49f580..2c4a0c0202f67f 100644
+--- a/fs/tracefs/inode.c
++++ b/fs/tracefs/inode.c
+@@ -465,9 +465,20 @@ static int tracefs_d_revalidate(struct inode *inode, const struct qstr *name,
+ 	return !(ei && ei->is_freed);
+ }
+ 
++static int tracefs_d_delete(const struct dentry *dentry)
++{
++	/*
++	 * We want to keep eventfs dentries around but not tracefs
++	 * ones. eventfs dentries have content in d_fsdata.
++	 * Use d_fsdata to determine if it's a eventfs dentry or not.
++	 * Use d_fsdata to determine if it's an eventfs dentry or not.
++	return dentry->d_fsdata == NULL;
++}
++
+ static const struct dentry_operations tracefs_dentry_operations = {
+ 	.d_revalidate = tracefs_d_revalidate,
+ 	.d_release = tracefs_d_release,
++	.d_delete = tracefs_d_delete,
+ };
+ 
+ static int tracefs_fill_super(struct super_block *sb, struct fs_context *fc)
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 1c8a736b33097e..b2f168b0a0d18e 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -1440,7 +1440,7 @@ static int udf_load_logicalvol(struct super_block *sb, sector_t block,
+ 	struct genericPartitionMap *gpm;
+ 	uint16_t ident;
+ 	struct buffer_head *bh;
+-	unsigned int table_len;
++	unsigned int table_len, part_map_count;
+ 	int ret;
+ 
+ 	bh = udf_read_tagged(sb, block, block, &ident);
+@@ -1461,7 +1461,16 @@ static int udf_load_logicalvol(struct super_block *sb, sector_t block,
+ 					   "logical volume");
+ 	if (ret)
+ 		goto out_bh;
+-	ret = udf_sb_alloc_partition_maps(sb, le32_to_cpu(lvd->numPartitionMaps));
++
++	part_map_count = le32_to_cpu(lvd->numPartitionMaps);
++	if (part_map_count > table_len / sizeof(struct genericPartitionMap1)) {
++		udf_err(sb, "error loading logical volume descriptor: "
++			"Too many partition maps (%u > %u)\n", part_map_count,
++			table_len / (unsigned)sizeof(struct genericPartitionMap1));
++		ret = -EIO;
++		goto out_bh;
++	}
++	ret = udf_sb_alloc_partition_maps(sb, part_map_count);
+ 	if (ret)
+ 		goto out_bh;
+ 
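
The idea is the same as the bnode clamping earlier in this batch: an on-disk element count must not imply more bytes than the descriptor actually carries, or the allocation and the parse loop that follows are sized by attacker-controlled data. A userspace sketch of the check, with an illustrative map size:

#include <stdint.h>
#include <stdio.h>

struct map1 {		/* stand-in for struct genericPartitionMap1 */
	uint8_t bytes[6];
};

static int check_map_count(uint32_t count, uint32_t table_len)
{
	if (count > table_len / sizeof(struct map1))
		return -1;	/* table cannot hold that many maps */
	return 0;
}

int main(void)
{
	/* a 64-byte table holds at most 10 six-byte maps */
	printf("%d %d\n", check_map_count(10, 64), check_map_count(1000, 64));
	return 0;
}
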
+diff --git a/fs/xfs/scrub/trace.h b/fs/xfs/scrub/trace.h
+index d7c4ced47c1567..4368f08e91c68e 100644
+--- a/fs/xfs/scrub/trace.h
++++ b/fs/xfs/scrub/trace.h
+@@ -479,7 +479,7 @@ DECLARE_EVENT_CLASS(xchk_dqiter_class,
+ 		__field(xfs_exntst_t, state)
+ 	),
+ 	TP_fast_assign(
+-		__entry->dev = cursor->sc->ip->i_mount->m_super->s_dev;
++		__entry->dev = cursor->sc->mp->m_super->s_dev;
+ 		__entry->dqtype = cursor->dqtype;
+ 		__entry->ino = cursor->quota_ip->i_ino;
+ 		__entry->cur_id = cursor->id;
+diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
+index 50928a7ae98e3d..b8742bd569d873 100644
+--- a/include/drm/gpu_scheduler.h
++++ b/include/drm/gpu_scheduler.h
+@@ -466,6 +466,24 @@ struct drm_sched_backend_ops {
+          * and it's time to clean it up.
+ 	 */
+ 	void (*free_job)(struct drm_sched_job *sched_job);
++
++	/**
++	 * @cancel_job: Used by the scheduler to guarantee remaining jobs' fences
++	 * get signaled in drm_sched_fini().
++	 *
++	 * Used by the scheduler to cancel all jobs that have not been executed
++	 * with &struct drm_sched_backend_ops.run_job by the time
++	 * drm_sched_fini() gets invoked.
++	 *
++	 * Drivers need to signal the passed job's hardware fence with an
++	 * appropriate error code (e.g., -ECANCELED) in this callback. They
++	 * must not free the job.
++	 *
++	 * The scheduler will only call this callback once it has stopped calling
++	 * all other callbacks forever, with the exception of &struct
++	 * drm_sched_backend_ops.free_job.
++	 */
++	void (*cancel_job)(struct drm_sched_job *sched_job);
+ };
+ 
+ /**
+diff --git a/include/linux/acpi.h b/include/linux/acpi.h
+index fc372bbaa54769..01c4c6e3924fcd 100644
+--- a/include/linux/acpi.h
++++ b/include/linux/acpi.h
+@@ -1491,7 +1491,7 @@ int acpi_parse_spcr(bool enable_earlycon, bool enable_console);
+ #else
+ static inline int acpi_parse_spcr(bool enable_earlycon, bool enable_console)
+ {
+-	return 0;
++	return -ENODEV;
+ }
+ #endif
+ 
+diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
+index dce7615c35e7e3..f3f52ebc3e1edc 100644
+--- a/include/linux/blk_types.h
++++ b/include/linux/blk_types.h
+@@ -342,11 +342,11 @@ enum req_op {
+ 	/* Close a zone */
+ 	REQ_OP_ZONE_CLOSE	= (__force blk_opf_t)11,
+ 	/* Transition a zone to full */
+-	REQ_OP_ZONE_FINISH	= (__force blk_opf_t)12,
++	REQ_OP_ZONE_FINISH	= (__force blk_opf_t)13,
+ 	/* reset a zone write pointer */
+-	REQ_OP_ZONE_RESET	= (__force blk_opf_t)13,
++	REQ_OP_ZONE_RESET	= (__force blk_opf_t)15,
+ 	/* reset all the zone present on the device */
+-	REQ_OP_ZONE_RESET_ALL	= (__force blk_opf_t)15,
++	REQ_OP_ZONE_RESET_ALL	= (__force blk_opf_t)17,
+ 
+ 	/* Driver private requests */
+ 	REQ_OP_DRV_IN		= (__force blk_opf_t)34,
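
The renumbering above is not cosmetic. The block layer encodes the data direction in the low bit of the opcode (op_is_write() is essentially op & 1), so moving the zone-management ops to odd values is what makes them count as write operations. A userspace model assuming that low-bit convention:

#include <stdbool.h>
#include <stdio.h>

enum req_op {
	REQ_OP_READ		= 0,
	REQ_OP_WRITE		= 1,
	REQ_OP_ZONE_FINISH	= 13,
	REQ_OP_ZONE_RESET	= 15,
	REQ_OP_ZONE_RESET_ALL	= 17,
};

static bool op_is_write(unsigned int op)
{
	return op & 1;	/* low bit set: the op modifies the device */
}

int main(void)
{
	printf("%d %d\n", op_is_write(REQ_OP_ZONE_FINISH),
	       op_is_write(REQ_OP_READ));	/* prints: 1 0 */
	return 0;
}
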
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 7c2a66995518ad..1f63f6411d1f58 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -835,6 +835,55 @@ static inline unsigned int disk_nr_zones(struct gendisk *disk)
+ {
+ 	return disk->nr_zones;
+ }
++
++/**
++ * bio_needs_zone_write_plugging - Check if a BIO needs to be handled with zone
++ *				   write plugging
++ * @bio: The BIO being submitted
++ *
++ * Return true whenever @bio execution needs to be handled through zone
++ * write plugging (using blk_zone_plug_bio()). Return false otherwise.
++ */
++static inline bool bio_needs_zone_write_plugging(struct bio *bio)
++{
++	enum req_op op = bio_op(bio);
++
++	/*
++	 * Only zoned block devices have a zone write plug hash table. But not
++	 * all of them have one (e.g. DM devices may not need one).
++	 */
++	if (!bio->bi_bdev->bd_disk->zone_wplugs_hash)
++		return false;
++
++	/* Only write operations need zone write plugging. */
++	if (!op_is_write(op))
++		return false;
++
++	/* Ignore empty flush */
++	if (op_is_flush(bio->bi_opf) && !bio_sectors(bio))
++		return false;
++
++	/* Ignore BIOs that already have been handled by zone write plugging. */
++	if (bio_flagged(bio, BIO_ZONE_WRITE_PLUGGING))
++		return false;
++
++	/*
++	 * All zone write operations must be handled through zone write plugging
++	 * using blk_zone_plug_bio().
++	 */
++	switch (op) {
++	case REQ_OP_ZONE_APPEND:
++	case REQ_OP_WRITE:
++	case REQ_OP_WRITE_ZEROES:
++	case REQ_OP_ZONE_FINISH:
++	case REQ_OP_ZONE_RESET:
++	case REQ_OP_ZONE_RESET_ALL:
++		return true;
++	default:
++		return false;
++	}
++}
++
+ bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs);
+ 
+ /**
+@@ -864,6 +913,12 @@ static inline unsigned int disk_nr_zones(struct gendisk *disk)
+ {
+ 	return 0;
+ }
++
++static inline bool bio_needs_zone_write_plugging(struct bio *bio)
++{
++	return false;
++}
++
+ static inline bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs)
+ {
+ 	return false;
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index daae1d6d11a744..ada27535a19550 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -1233,6 +1233,8 @@ void hid_quirks_exit(__u16 bus);
+ 	dev_notice(&(hid)->dev, fmt, ##__VA_ARGS__)
+ #define hid_warn(hid, fmt, ...)				\
+ 	dev_warn(&(hid)->dev, fmt, ##__VA_ARGS__)
++#define hid_warn_ratelimited(hid, fmt, ...)				\
++	dev_warn_ratelimited(&(hid)->dev, fmt, ##__VA_ARGS__)
+ #define hid_info(hid, fmt, ...)				\
+ 	dev_info(&(hid)->dev, fmt, ##__VA_ARGS__)
+ #define hid_dbg(hid, fmt, ...)				\
+diff --git a/include/linux/hypervisor.h b/include/linux/hypervisor.h
+index 9efbc54e35e596..be5417303ecf69 100644
+--- a/include/linux/hypervisor.h
++++ b/include/linux/hypervisor.h
+@@ -37,6 +37,9 @@ static inline bool hypervisor_isolated_pci_functions(void)
+ 	if (IS_ENABLED(CONFIG_S390))
+ 		return true;
+ 
++	if (IS_ENABLED(CONFIG_LOONGARCH))
++		return true;
++
+ 	return jailhouse_paravirt();
+ }
+ 
+diff --git a/include/linux/if_vlan.h b/include/linux/if_vlan.h
+index 38456b42cdb556..b9f699799cf6e9 100644
+--- a/include/linux/if_vlan.h
++++ b/include/linux/if_vlan.h
+@@ -79,11 +79,6 @@ static inline struct vlan_ethhdr *skb_vlan_eth_hdr(const struct sk_buff *skb)
+ /* found in socket.c */
+ extern void vlan_ioctl_set(int (*hook)(struct net *, void __user *));
+ 
+-static inline bool is_vlan_dev(const struct net_device *dev)
+-{
+-        return dev->priv_flags & IFF_802_1Q_VLAN;
+-}
+-
+ #define skb_vlan_tag_present(__skb)	(!!(__skb)->vlan_all)
+ #define skb_vlan_tag_get(__skb)		((__skb)->vlan_tci)
+ #define skb_vlan_tag_get_id(__skb)	((__skb)->vlan_tci & VLAN_VID_MASK)
+@@ -200,6 +195,11 @@ struct vlan_dev_priv {
+ #endif
+ };
+ 
++static inline bool is_vlan_dev(const struct net_device *dev)
++{
++	return dev->priv_flags & IFF_802_1Q_VLAN;
++}
++
+ static inline struct vlan_dev_priv *vlan_dev_priv(const struct net_device *dev)
+ {
+ 	return netdev_priv(dev);
+@@ -237,6 +237,11 @@ extern void vlan_vids_del_by_dev(struct net_device *dev,
+ extern bool vlan_uses_dev(const struct net_device *dev);
+ 
+ #else
++static inline bool is_vlan_dev(const struct net_device *dev)
++{
++	return false;
++}
++
+ static inline struct net_device *
+ __vlan_find_dev_deep_rcu(struct net_device *real_dev,
+ 		     __be16 vlan_proto, u16 vlan_id)
+@@ -254,19 +259,19 @@ vlan_for_each(struct net_device *dev,
+ 
+ static inline struct net_device *vlan_dev_real_dev(const struct net_device *dev)
+ {
+-	BUG();
++	WARN_ON_ONCE(1);
+ 	return NULL;
+ }
+ 
+ static inline u16 vlan_dev_vlan_id(const struct net_device *dev)
+ {
+-	BUG();
++	WARN_ON_ONCE(1);
+ 	return 0;
+ }
+ 
+ static inline __be16 vlan_dev_vlan_proto(const struct net_device *dev)
+ {
+-	BUG();
++	WARN_ON_ONCE(1);
+ 	return 0;
+ }
+ 
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index ec3b0c9c2a8cff..85b390f9953dee 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -545,6 +545,7 @@ typedef void (*ata_postreset_fn_t)(struct ata_link *link, unsigned int *classes)
+ 
+ extern struct device_attribute dev_attr_unload_heads;
+ #ifdef CONFIG_SATA_HOST
++extern struct device_attribute dev_attr_link_power_management_supported;
+ extern struct device_attribute dev_attr_link_power_management_policy;
+ extern struct device_attribute dev_attr_ncq_prio_supported;
+ extern struct device_attribute dev_attr_ncq_prio_enable;
+diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
+index 0dc0cf2863e2ad..7a805796fcfd07 100644
+--- a/include/linux/memory-tiers.h
++++ b/include/linux/memory-tiers.h
+@@ -18,7 +18,7 @@
+  * adistance value (slightly faster) than default DRAM adistance to be part of
+  * the same memory tier.
+  */
+-#define MEMTIER_ADISTANCE_DRAM	((4 * MEMTIER_CHUNK_SIZE) + (MEMTIER_CHUNK_SIZE >> 1))
++#define MEMTIER_ADISTANCE_DRAM	((4L * MEMTIER_CHUNK_SIZE) + (MEMTIER_CHUNK_SIZE >> 1))
+ 
+ struct memory_tier;
+ struct memory_dev_type {
+diff --git a/include/linux/packing.h b/include/linux/packing.h
+index 0589d70bbe0434..20ae4d452c7bb4 100644
+--- a/include/linux/packing.h
++++ b/include/linux/packing.h
+@@ -5,8 +5,12 @@
+ #ifndef _LINUX_PACKING_H
+ #define _LINUX_PACKING_H
+ 
+-#include <linux/types.h>
++#include <linux/array_size.h>
+ #include <linux/bitops.h>
++#include <linux/build_bug.h>
++#include <linux/minmax.h>
++#include <linux/stddef.h>
++#include <linux/types.h>
+ 
+ #define GEN_PACKED_FIELD_STRUCT(__type) \
+ 	struct packed_field_ ## __type { \
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 081e5c0a3ddf4e..0b03176f07aa92 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -328,6 +328,11 @@ struct rcec_ea;
+  *			determined (e.g., for Root Complex Integrated
+  *			Endpoints without the relevant Capability
+  *			Registers).
++ * @is_hotplug_bridge:	Hotplug bridge of any kind (e.g. PCIe Hot-Plug Capable,
++ *			Conventional PCI Hot-Plug, ACPI slot).
++ *			Such bridges are allocated additional MMIO and bus
++ *			number resources to allow for hierarchy expansion.
++ * @is_pciehp:		PCIe Hot-Plug Capable bridge.
+  */
+ struct pci_dev {
+ 	struct list_head bus_list;	/* Node in per-bus list */
+@@ -453,6 +458,7 @@ struct pci_dev {
+ 	unsigned int	is_physfn:1;
+ 	unsigned int	is_virtfn:1;
+ 	unsigned int	is_hotplug_bridge:1;
++	unsigned int	is_pciehp:1;
+ 	unsigned int	shpc_managed:1;		/* SHPC owned by shpchp */
+ 	unsigned int	is_thunderbolt:1;	/* Thunderbolt controller */
+ 	/*
+diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
+index 189140bf11fc40..4adf4b364fcda9 100644
+--- a/include/linux/sbitmap.h
++++ b/include/linux/sbitmap.h
+@@ -213,12 +213,12 @@ int sbitmap_get(struct sbitmap *sb);
+  * sbitmap_get_shallow() - Try to allocate a free bit from a &struct sbitmap,
+  * limiting the depth used from each word.
+  * @sb: Bitmap to allocate from.
+- * @shallow_depth: The maximum number of bits to allocate from a single word.
++ * @shallow_depth: The maximum number of bits to allocate from the bitmap.
+  *
+  * This rather specific operation allows for having multiple users with
+  * different allocation limits. E.g., there can be a high-priority class that
+  * uses sbitmap_get() and a low-priority class that uses sbitmap_get_shallow()
+- * with a @shallow_depth of (1 << (@sb->shift - 1)). Then, the low-priority
++ * with a @shallow_depth of (sb->depth >> 1). Then, the low-priority
+  * class can only allocate half of the total bits in the bitmap, preventing it
+  * from starving out the high-priority class.
+  *
+@@ -478,7 +478,7 @@ unsigned long __sbitmap_queue_get_batch(struct sbitmap_queue *sbq, int nr_tags,
+  * sbitmap_queue, limiting the depth used from each word, with preemption
+  * already disabled.
+  * @sbq: Bitmap queue to allocate from.
+- * @shallow_depth: The maximum number of bits to allocate from a single word.
++ * @shallow_depth: The maximum number of bits to allocate from the queue.
+  * See sbitmap_get_shallow().
+  *
+  * If you call this, make sure to call sbitmap_queue_min_shallow_depth() after
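
The comment fix tracks a semantics change: @shallow_depth now caps allocations across the whole bitmap instead of within each word. With made-up numbers, the same half-capacity policy is expressed differently under the two readings:

#include <stdio.h>

int main(void)
{
	unsigned int depth = 256, shift = 6;	/* four 64-bit words */
	unsigned int words = depth >> shift;

	printf("old reading: cap %u bits in each of %u words\n",
	       1u << (shift - 1), words);
	printf("new reading: cap %u bits across the whole bitmap\n",
	       depth >> 1);
	return 0;
}
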
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index fad2fc972d23ab..2c768882191f0c 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -3687,7 +3687,13 @@ static inline void *skb_frag_address(const skb_frag_t *frag)
+  */
+ static inline void *skb_frag_address_safe(const skb_frag_t *frag)
+ {
+-	void *ptr = page_address(skb_frag_page(frag));
++	struct page *page = skb_frag_page(frag);
++	void *ptr;
++
++	if (!page)
++		return NULL;
++
++	ptr = page_address(page);
+ 	if (unlikely(!ptr))
+ 		return NULL;
+ 
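
skb_frag_address_safe() previously guarded only against page_address() returning NULL for an unmapped highmem page; after the change it also tolerates a frag that has no page at all. The NULL-propagating accessor pattern, modelled in userspace:

#include <stdio.h>

struct frag {
	const char *page;	/* NULL when the frag is unreadable */
	unsigned int off;
};

static const char *frag_address_safe(const struct frag *f)
{
	if (!f->page)
		return NULL;	/* propagate: let the caller bail out */
	return f->page + f->off;
}

int main(void)
{
	struct frag f = { NULL, 0 };

	printf("%s\n", frag_address_safe(&f) ? "mapped" : "unmapped");
	return 0;
}
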
+diff --git a/include/linux/usb/cdc_ncm.h b/include/linux/usb/cdc_ncm.h
+index 2d207cb4837dbf..4ac082a6317381 100644
+--- a/include/linux/usb/cdc_ncm.h
++++ b/include/linux/usb/cdc_ncm.h
+@@ -119,6 +119,7 @@ struct cdc_ncm_ctx {
+ 	u32 timer_interval;
+ 	u32 max_ndp_size;
+ 	u8 is_ndp16;
++	u8 filtering_supported;
+ 	union {
+ 		struct usb_cdc_ncm_ndp16 *delayed_ndp16;
+ 		struct usb_cdc_ncm_ndp32 *delayed_ndp32;
+diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
+index 36fb3edfa403d9..6c00687539cf46 100644
+--- a/include/linux/virtio_vsock.h
++++ b/include/linux/virtio_vsock.h
+@@ -111,7 +111,12 @@ static inline size_t virtio_vsock_skb_len(struct sk_buff *skb)
+ 	return (size_t)(skb_end_pointer(skb) - skb->head);
+ }
+ 
+-#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE	(1024 * 4)
++/* Dimension the RX SKB so that the entire thing fits exactly into
++ * a single 4KiB page. This avoids wasting memory due to alloc_skb()
++ * rounding up to the next page order and also means that we
++ * don't leave higher-order pages sitting around in the RX queue.
++ */
++#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE	SKB_WITH_OVERHEAD(1024 * 4)
+ #define VIRTIO_VSOCK_MAX_BUF_SIZE		0xFFFFFFFFUL
+ #define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE		(1024 * 64)
+ 
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index ebe01eb2826441..702b526541e6f3 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -2851,6 +2851,12 @@ struct hci_evt_le_big_sync_estabilished {
+ 	__le16  bis[];
+ } __packed;
+ 
++#define HCI_EVT_LE_BIG_SYNC_LOST 0x1e
++struct hci_evt_le_big_sync_lost {
++	__u8    handle;
++	__u8    reason;
++} __packed;
++
+ #define HCI_EVT_LE_BIG_INFO_ADV_REPORT	0x22
+ struct hci_evt_le_big_info_adv_report {
+ 	__le16  sync_handle;
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 351a9057e70eef..1d62f0cce19502 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1348,7 +1348,8 @@ hci_conn_hash_lookup_big_sync_pend(struct hci_dev *hdev,
+ }
+ 
+ static inline struct hci_conn *
+-hci_conn_hash_lookup_big_state(struct hci_dev *hdev, __u8 handle,  __u16 state)
++hci_conn_hash_lookup_big_state(struct hci_dev *hdev, __u8 handle, __u16 state,
++			       __u8 role)
+ {
+ 	struct hci_conn_hash *h = &hdev->conn_hash;
+ 	struct hci_conn  *c;
+@@ -1356,7 +1357,7 @@ hci_conn_hash_lookup_big_state(struct hci_dev *hdev, __u8 handle,  __u16 state)
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != BIS_LINK || c->state != state)
++		if (c->type != BIS_LINK || c->state != state || c->role != role)
+ 			continue;
+ 
+ 		if (handle == c->iso_qos.bcast.big) {
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index 75f2e5782887ff..71db571617ba60 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -633,7 +633,7 @@ ieee80211_get_sband_iftype_data(const struct ieee80211_supported_band *sband,
+ 	const struct ieee80211_sband_iftype_data *data;
+ 	int i;
+ 
+-	if (WARN_ON(iftype >= NL80211_IFTYPE_MAX))
++	if (WARN_ON(iftype >= NUM_NL80211_IFTYPES))
+ 		return NULL;
+ 
+ 	if (iftype == NL80211_IFTYPE_AP_VLAN)
+diff --git a/include/net/ip_vs.h b/include/net/ip_vs.h
+index ff406ef4fd4aab..29a36709e7f35c 100644
+--- a/include/net/ip_vs.h
++++ b/include/net/ip_vs.h
+@@ -1163,6 +1163,14 @@ static inline const struct cpumask *sysctl_est_cpulist(struct netns_ipvs *ipvs)
+ 		return housekeeping_cpumask(HK_TYPE_KTHREAD);
+ }
+ 
++static inline const struct cpumask *sysctl_est_preferred_cpulist(struct netns_ipvs *ipvs)
++{
++	if (ipvs->est_cpulist_valid)
++		return ipvs->sysctl_est_cpulist;
++	else
++		return NULL;
++}
++
+ static inline int sysctl_est_nice(struct netns_ipvs *ipvs)
+ {
+ 	return ipvs->sysctl_est_nice;
+@@ -1270,6 +1278,11 @@ static inline const struct cpumask *sysctl_est_cpulist(struct netns_ipvs *ipvs)
+ 	return housekeeping_cpumask(HK_TYPE_KTHREAD);
+ }
+ 
++static inline const struct cpumask *sysctl_est_preferred_cpulist(struct netns_ipvs *ipvs)
++{
++	return NULL;
++}
++
+ static inline int sysctl_est_nice(struct netns_ipvs *ipvs)
+ {
+ 	return IPVS_EST_NICE;
+diff --git a/include/net/kcm.h b/include/net/kcm.h
+index 441e993be634ce..d9c35e71ecea40 100644
+--- a/include/net/kcm.h
++++ b/include/net/kcm.h
+@@ -71,7 +71,6 @@ struct kcm_sock {
+ 	struct list_head wait_psock_list;
+ 	struct sk_buff *seq_skb;
+ 	struct mutex tx_mutex;
+-	u32 tx_stopped : 1;
+ 
+ 	/* Don't use bit fields here, these are set under different locks */
+ 	bool tx_wait;
+diff --git a/include/net/mac80211.h b/include/net/mac80211.h
+index 5349df59615711..5989cacb9d506a 100644
+--- a/include/net/mac80211.h
++++ b/include/net/mac80211.h
+@@ -4296,6 +4296,8 @@ struct ieee80211_prep_tx_info {
+  * @mgd_complete_tx: Notify the driver that the response frame for a previously
+  *	transmitted frame announced with @mgd_prepare_tx was received, the data
+  *	is filled similarly to @mgd_prepare_tx though the duration is not used.
++ *	Note that this isn't always called for each mgd_prepare_tx() call; for
++ *	example, for SAE the 'confirm' messages can be on the air in any order.
+  *
+  * @mgd_protect_tdls_discover: Protect a TDLS discovery session. After sending
+  *	a TDLS discovery-request, we expect a reply to arrive on the AP's
+@@ -4460,6 +4462,8 @@ struct ieee80211_prep_tx_info {
+  *	new links bitmaps may be 0 if going from/to a non-MLO situation.
+  *	The @old array contains pointers to the old bss_conf structures
+  *	that were already removed, in case they're needed.
++ *	Note that removal of a link should always succeed, so the return value
++ *	will be ignored in a removal-only case.
+  *	This callback can sleep.
+  * @change_sta_links: Change the valid links of a station, similar to
+  *	@change_vif_links. This callback can sleep.
+diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
+index 431b593de70937..1509a536cb855a 100644
+--- a/include/net/page_pool/types.h
++++ b/include/net/page_pool/types.h
+@@ -265,6 +265,8 @@ struct page_pool *page_pool_create_percpu(const struct page_pool_params *params,
+ struct xdp_mem_info;
+ 
+ #ifdef CONFIG_PAGE_POOL
++void page_pool_enable_direct_recycling(struct page_pool *pool,
++				       struct napi_struct *napi);
+ void page_pool_disable_direct_recycling(struct page_pool *pool);
+ void page_pool_destroy(struct page_pool *pool);
+ void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
+diff --git a/include/sound/sdca_function.h b/include/sound/sdca_function.h
+index 253654568a41ee..1403a9f469769b 100644
+--- a/include/sound/sdca_function.h
++++ b/include/sound/sdca_function.h
+@@ -16,6 +16,8 @@ struct device;
+ struct sdca_entity;
+ struct sdca_function_desc;
+ 
++#define SDCA_NO_INTERRUPT -1
++
+ /*
+  * The addressing space for SDCA relies on 7 bits for Entities, so a
+  * maximum of 128 Entities per function can be represented.
+diff --git a/include/trace/events/thp.h b/include/trace/events/thp.h
+index f50048af5fcc28..c8fe879d5828bd 100644
+--- a/include/trace/events/thp.h
++++ b/include/trace/events/thp.h
+@@ -8,6 +8,7 @@
+ #include <linux/types.h>
+ #include <linux/tracepoint.h>
+ 
++#ifdef CONFIG_PPC_BOOK3S_64
+ DECLARE_EVENT_CLASS(hugepage_set,
+ 
+ 	    TP_PROTO(unsigned long addr, unsigned long pte),
+@@ -66,6 +67,7 @@ DEFINE_EVENT(hugepage_update, hugepage_update_pud,
+ 	    TP_PROTO(unsigned long addr, unsigned long pud, unsigned long clr, unsigned long set),
+ 	    TP_ARGS(addr, pud, clr, set)
+ );
++#endif /* CONFIG_PPC_BOOK3S_64 */
+ 
+ DECLARE_EVENT_CLASS(migration_pmd,
+ 
+diff --git a/include/uapi/linux/in6.h b/include/uapi/linux/in6.h
+index ff8d21f9e95b77..5a47339ef7d768 100644
+--- a/include/uapi/linux/in6.h
++++ b/include/uapi/linux/in6.h
+@@ -152,7 +152,6 @@ struct in6_flowlabel_req {
+ /*
+  *	IPV6 socket options
+  */
+-#if __UAPI_DEF_IPV6_OPTIONS
+ #define IPV6_ADDRFORM		1
+ #define IPV6_2292PKTINFO	2
+ #define IPV6_2292HOPOPTS	3
+@@ -169,8 +168,10 @@ struct in6_flowlabel_req {
+ #define IPV6_MULTICAST_IF	17
+ #define IPV6_MULTICAST_HOPS	18
+ #define IPV6_MULTICAST_LOOP	19
++#if __UAPI_DEF_IPV6_OPTIONS
+ #define IPV6_ADD_MEMBERSHIP	20
+ #define IPV6_DROP_MEMBERSHIP	21
++#endif
+ #define IPV6_ROUTER_ALERT	22
+ #define IPV6_MTU_DISCOVER	23
+ #define IPV6_MTU		24
+@@ -203,7 +204,6 @@ struct in6_flowlabel_req {
+ #define IPV6_IPSEC_POLICY	34
+ #define IPV6_XFRM_POLICY	35
+ #define IPV6_HDRINCL		36
+-#endif
+ 
+ /*
+  * Multicast:
+diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
+index 8f1fc12bac4620..1bbf9c13eef340 100644
+--- a/include/uapi/linux/io_uring.h
++++ b/include/uapi/linux/io_uring.h
+@@ -50,7 +50,7 @@ struct io_uring_sqe {
+ 	};
+ 	__u32	len;		/* buffer size or number of iovecs */
+ 	union {
+-		__kernel_rwf_t	rw_flags;
++		__u32		rw_flags;
+ 		__u32		fsync_flags;
+ 		__u16		poll_events;	/* compatibility */
+ 		__u32		poll32_events;	/* word-reversed for BE */
+diff --git a/io_uring/memmap.c b/io_uring/memmap.c
+index 07f8a5cbd37ec7..97f42523f7f2f4 100644
+--- a/io_uring/memmap.c
++++ b/io_uring/memmap.c
+@@ -155,7 +155,7 @@ static int io_region_allocate_pages(struct io_ring_ctx *ctx,
+ 				    unsigned long mmap_offset)
+ {
+ 	gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_ZERO | __GFP_NOWARN;
+-	unsigned long size = mr->nr_pages << PAGE_SHIFT;
++	size_t size = (size_t) mr->nr_pages << PAGE_SHIFT;
+ 	unsigned long nr_allocated;
+ 	struct page **pages;
+ 	void *p;
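The cast matters because the shift is otherwise evaluated in the width of
nr_pages before any widening happens. A self-contained demonstration,
assuming a 32-bit nr_pages field on an LP64 build (the usual shape of this
bug):

#include <stdio.h>
#include <stddef.h>

#define PAGE_SHIFT 12

int main(void)
{
    unsigned int nr_pages = (1u << 20) + 5;          /* just over 4 GiB of pages */
    size_t wrapped = nr_pages << PAGE_SHIFT;         /* 32-bit shift wraps first */
    size_t correct = (size_t)nr_pages << PAGE_SHIFT; /* widen, then shift */

    printf("wrapped: %zu\ncorrect: %zu\n", wrapped, correct);
    return 0;
}
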
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 4dac01ff7ac772..fb1e6b925d519e 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -482,6 +482,15 @@ static int io_bundle_nbufs(struct io_async_msghdr *kmsg, int ret)
+ 	return nbufs;
+ }
+ 
++static int io_net_kbuf_recyle(struct io_kiocb *req,
++			      struct io_async_msghdr *kmsg, int len)
++{
++	req->flags |= REQ_F_BL_NO_RECYCLE;
++	if (req->flags & REQ_F_BUFFERS_COMMIT)
++		io_kbuf_commit(req, req->buf_list, len, io_bundle_nbufs(kmsg, len));
++	return IOU_RETRY;
++}
++
+ static inline bool io_send_finish(struct io_kiocb *req, int *ret,
+ 				  struct io_async_msghdr *kmsg,
+ 				  unsigned issue_flags)
+@@ -550,8 +559,7 @@ int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
+ 			kmsg->msg.msg_controllen = 0;
+ 			kmsg->msg.msg_control = NULL;
+ 			sr->done_io += ret;
+-			req->flags |= REQ_F_BL_NO_RECYCLE;
+-			return -EAGAIN;
++			return io_net_kbuf_recyle(req, kmsg, ret);
+ 		}
+ 		if (ret == -ERESTARTSYS)
+ 			ret = -EINTR;
+@@ -661,8 +669,7 @@ int io_send(struct io_kiocb *req, unsigned int issue_flags)
+ 			sr->len -= ret;
+ 			sr->buf += ret;
+ 			sr->done_io += ret;
+-			req->flags |= REQ_F_BL_NO_RECYCLE;
+-			return -EAGAIN;
++			return io_net_kbuf_recyle(req, kmsg, ret);
+ 		}
+ 		if (ret == -ERESTARTSYS)
+ 			ret = -EINTR;
+@@ -1034,8 +1041,7 @@ int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
+ 		}
+ 		if (ret > 0 && io_net_retry(sock, flags)) {
+ 			sr->done_io += ret;
+-			req->flags |= REQ_F_BL_NO_RECYCLE;
+-			return IOU_RETRY;
++			return io_net_kbuf_recyle(req, kmsg, ret);
+ 		}
+ 		if (ret == -ERESTARTSYS)
+ 			ret = -EINTR;
+@@ -1175,8 +1181,7 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
+ 			sr->len -= ret;
+ 			sr->buf += ret;
+ 			sr->done_io += ret;
+-			req->flags |= REQ_F_BL_NO_RECYCLE;
+-			return -EAGAIN;
++			return io_net_kbuf_recyle(req, kmsg, ret);
+ 		}
+ 		if (ret == -ERESTARTSYS)
+ 			ret = -EINTR;
+@@ -1461,8 +1466,7 @@ int io_send_zc(struct io_kiocb *req, unsigned int issue_flags)
+ 			zc->len -= ret;
+ 			zc->buf += ret;
+ 			zc->done_io += ret;
+-			req->flags |= REQ_F_BL_NO_RECYCLE;
+-			return -EAGAIN;
++			return io_net_kbuf_recyle(req, kmsg, ret);
+ 		}
+ 		if (ret == -ERESTARTSYS)
+ 			ret = -EINTR;
+@@ -1532,8 +1536,7 @@ int io_sendmsg_zc(struct io_kiocb *req, unsigned int issue_flags)
+ 
+ 		if (ret > 0 && io_net_retry(sock, flags)) {
+ 			sr->done_io += ret;
+-			req->flags |= REQ_F_BL_NO_RECYCLE;
+-			return -EAGAIN;
++			return io_net_kbuf_recyle(req, kmsg, ret);
+ 		}
+ 		if (ret == -ERESTARTSYS)
+ 			ret = -EINTR;
+diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
+index 7409af671c9ee2..4478abb6ca1a5f 100644
+--- a/io_uring/rsrc.c
++++ b/io_uring/rsrc.c
+@@ -55,7 +55,7 @@ int __io_account_mem(struct user_struct *user, unsigned long nr_pages)
+ 	return 0;
+ }
+ 
+-static void io_unaccount_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
++void io_unaccount_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
+ {
+ 	if (ctx->user)
+ 		__io_unaccount_mem(ctx->user, nr_pages);
+@@ -64,7 +64,7 @@ static void io_unaccount_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
+ 		atomic64_sub(nr_pages, &ctx->mm_account->pinned_vm);
+ }
+ 
+-static int io_account_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
++int io_account_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
+ {
+ 	int ret;
+ 
+diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
+index ba00ee6f0650e8..0d550772466188 100644
+--- a/io_uring/rsrc.h
++++ b/io_uring/rsrc.h
+@@ -146,6 +146,8 @@ int io_files_update(struct io_kiocb *req, unsigned int issue_flags);
+ int io_files_update_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe);
+ 
+ int __io_account_mem(struct user_struct *user, unsigned long nr_pages);
++int io_account_mem(struct io_ring_ctx *ctx, unsigned long nr_pages);
++void io_unaccount_mem(struct io_ring_ctx *ctx, unsigned long nr_pages);
+ 
+ static inline void __io_unaccount_mem(struct user_struct *user,
+ 				      unsigned long nr_pages)
+diff --git a/io_uring/rw.c b/io_uring/rw.c
+index 039e063f7091eb..0942a5201e5ce6 100644
+--- a/io_uring/rw.c
++++ b/io_uring/rw.c
+@@ -284,7 +284,7 @@ static int __io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+ 
+ 	rw->addr = READ_ONCE(sqe->addr);
+ 	rw->len = READ_ONCE(sqe->len);
+-	rw->flags = READ_ONCE(sqe->rw_flags);
++	rw->flags = (__force rwf_t) READ_ONCE(sqe->rw_flags);
+ 
+ 	attr_type_mask = READ_ONCE(sqe->attr_type_mask);
+ 	if (attr_type_mask) {
+diff --git a/kernel/.gitignore b/kernel/.gitignore
+index c6b299a6b7866d..a501bfc8069425 100644
+--- a/kernel/.gitignore
++++ b/kernel/.gitignore
+@@ -1,3 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ /config_data
+ /kheaders.md5
++/kheaders-objlist
++/kheaders-srclist
+diff --git a/kernel/Makefile b/kernel/Makefile
+index 434929de17ef27..96a8798c9c640b 100644
+--- a/kernel/Makefile
++++ b/kernel/Makefile
+@@ -156,11 +156,48 @@ filechk_cat = cat $<
+ $(obj)/config_data: $(KCONFIG_CONFIG) FORCE
+ 	$(call filechk,cat)
+ 
++# kheaders_data.tar.xz
+ $(obj)/kheaders.o: $(obj)/kheaders_data.tar.xz
+ 
+-quiet_cmd_genikh = CHK     $(obj)/kheaders_data.tar.xz
+-      cmd_genikh = $(CONFIG_SHELL) $(srctree)/kernel/gen_kheaders.sh $@
+-$(obj)/kheaders_data.tar.xz: FORCE
+-	$(call cmd,genikh)
++quiet_cmd_kheaders_data = GEN     $@
++      cmd_kheaders_data = "$<" "$@" "$(obj)/kheaders-srclist" "$(obj)/kheaders-objlist"
++      cmd_kheaders_data_dep = cat $(depfile) >> $(dot-target).cmd; rm -f $(depfile)
+ 
+-clean-files := kheaders_data.tar.xz kheaders.md5
++define rule_kheaders_data
++	$(call cmd_and_savecmd,kheaders_data)
++	$(call cmd,kheaders_data_dep)
++endef
++
++targets += kheaders_data.tar.xz
++$(obj)/kheaders_data.tar.xz: $(src)/gen_kheaders.sh $(obj)/kheaders-srclist $(obj)/kheaders-objlist $(obj)/kheaders.md5 FORCE
++	$(call if_changed_rule,kheaders_data)
++
++# generated headers in objtree
++#
++# include/generated/utsversion.h is ignored because it is generated
++# after gen_kheaders.sh is executed. (utsversion.h is unneeded for kheaders)
++filechk_kheaders_objlist = \
++	for d in include "arch/$(SRCARCH)/include"; do \
++		find "$${d}/generated" ! -path "include/generated/utsversion.h" -a -name "*.h" -print; \
++	done
++
++$(obj)/kheaders-objlist: FORCE
++	$(call filechk,kheaders_objlist)
++
++# non-generated headers in srctree
++filechk_kheaders_srclist = \
++	for d in include "arch/$(SRCARCH)/include"; do \
++		find "$(srctree)/$${d}" -path "$(srctree)/$${d}/generated" -prune -o -name "*.h" -print; \
++	done
++
++$(obj)/kheaders-srclist: FORCE
++	$(call filechk,kheaders_srclist)
++
++# Some files are symlinks. If symlinks are changed, kheaders_data.tar.xz should
++# be rebuilt.
++filechk_kheaders_md5sum = xargs -r -a $< stat -c %N | md5sum
++
++$(obj)/kheaders.md5: $(obj)/kheaders-srclist FORCE
++	$(call filechk,kheaders_md5sum)
++
++clean-files := kheaders.md5 kheaders-srclist kheaders-objlist
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index a1ecad2944a8da..ef6c108c468dd5 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -404,7 +404,8 @@ static bool reg_not_null(const struct bpf_reg_state *reg)
+ 		type == PTR_TO_MAP_KEY ||
+ 		type == PTR_TO_SOCK_COMMON ||
+ 		(type == PTR_TO_BTF_ID && is_trusted_reg(reg)) ||
+-		type == PTR_TO_MEM;
++		type == PTR_TO_MEM ||
++		type == CONST_PTR_TO_MAP;
+ }
+ 
+ static struct btf_record *reg_btf_record(const struct bpf_reg_state *reg)
+@@ -16014,6 +16015,10 @@ static void regs_refine_cond_op(struct bpf_reg_state *reg1, struct bpf_reg_state
+ 		if (!is_reg_const(reg2, is_jmp32))
+ 			break;
+ 		val = reg_const_value(reg2, is_jmp32);
++		/* Forget the ranges before narrowing tnums, to avoid invariant
++		 * violations if we're on a dead branch.
++		 */
++		__mark_reg_unbounded(reg1);
+ 		if (is_jmp32) {
+ 			t = tnum_and(tnum_subreg(reg1->var_off), tnum_const(~val));
+ 			reg1->var_off = tnum_with_subreg(reg1->var_off, t);
+diff --git a/kernel/gen_kheaders.sh b/kernel/gen_kheaders.sh
+index c9e5dc068e854f..0ff7beabb21a7b 100755
+--- a/kernel/gen_kheaders.sh
++++ b/kernel/gen_kheaders.sh
+@@ -4,79 +4,33 @@
+ # This script generates an archive consisting of kernel headers
+ # for CONFIG_IKHEADERS.
+ set -e
+-sfile="$(readlink -f "$0")"
+-outdir="$(pwd)"
+ tarfile=$1
+-tmpdir=$outdir/${tarfile%/*}/.tmp_dir
+-
+-dir_list="
+-include/
+-arch/$SRCARCH/include/
+-"
+-
+-# Support incremental builds by skipping archive generation
+-# if timestamps of files being archived are not changed.
+-
+-# This block is useful for debugging the incremental builds.
+-# Uncomment it for debugging.
+-# if [ ! -f /tmp/iter ]; then iter=1; echo 1 > /tmp/iter;
+-# else iter=$(($(cat /tmp/iter) + 1)); echo $iter > /tmp/iter; fi
+-# find $all_dirs -name "*.h" | xargs ls -l > /tmp/ls-$iter
+-
+-all_dirs=
+-if [ "$building_out_of_srctree" ]; then
+-	for d in $dir_list; do
+-		all_dirs="$all_dirs $srctree/$d"
+-	done
+-fi
+-all_dirs="$all_dirs $dir_list"
+-
+-# include/generated/utsversion.h is ignored because it is generated after this
+-# script is executed. (utsversion.h is unneeded for kheaders)
+-#
+-# When Kconfig regenerates include/generated/autoconf.h, its timestamp is
+-# updated, but the contents might be still the same. When any CONFIG option is
+-# changed, Kconfig touches the corresponding timestamp file include/config/*.
+-# Hence, the md5sum detects the configuration change anyway. We do not need to
+-# check include/generated/autoconf.h explicitly.
+-#
+-# Ignore them for md5 calculation to avoid pointless regeneration.
+-headers_md5="$(find $all_dirs -name "*.h" -a			\
+-		! -path include/generated/utsversion.h -a	\
+-		! -path include/generated/autoconf.h		|
+-		xargs ls -l | md5sum | cut -d ' ' -f1)"
+-
+-# Any changes to this script will also cause a rebuild of the archive.
+-this_file_md5="$(ls -l $sfile | md5sum | cut -d ' ' -f1)"
+-if [ -f $tarfile ]; then tarfile_md5="$(md5sum $tarfile | cut -d ' ' -f1)"; fi
+-if [ -f kernel/kheaders.md5 ] &&
+-	[ "$(head -n 1 kernel/kheaders.md5)" = "$headers_md5" ] &&
+-	[ "$(head -n 2 kernel/kheaders.md5 | tail -n 1)" = "$this_file_md5" ] &&
+-	[ "$(tail -n 1 kernel/kheaders.md5)" = "$tarfile_md5" ]; then
+-		exit
+-fi
+-
+-echo "  GEN     $tarfile"
++srclist=$2
++objlist=$3
++
++dir=$(dirname "${tarfile}")
++tmpdir=${dir}/.tmp_dir
++depfile=${dir}/.$(basename "${tarfile}").d
++
++# Generate the dependency list.
++{
++	echo
++	echo "deps_${tarfile} := \\"
++	sed 's:\(.*\):  \1 \\:' "${srclist}"
++	sed -n '/^include\/generated\/autoconf\.h$/!s:\(.*\):  \1 \\:p' "${objlist}"
++	echo
++	echo "${tarfile}: \$(deps_${tarfile})"
++	echo
++	echo "\$(deps_${tarfile}):"
++
++} > "${depfile}"
+ 
+ rm -rf "${tmpdir}"
+ mkdir "${tmpdir}"
+ 
+-if [ "$building_out_of_srctree" ]; then
+-	(
+-		cd $srctree
+-		for f in $dir_list
+-			do find "$f" -name "*.h";
+-		done | tar -c -f - -T - | tar -xf - -C "${tmpdir}"
+-	)
+-fi
+-
+-for f in $dir_list;
+-	do find "$f" -name "*.h";
+-done | tar -c -f - -T - | tar -xf - -C "${tmpdir}"
+-
+-# Always exclude include/generated/utsversion.h
+-# Otherwise, the contents of the tarball may vary depending on the build steps.
+-rm -f "${tmpdir}/include/generated/utsversion.h"
++# shellcheck disable=SC2154 # srctree is passed as an env variable
++sed "s:^${srctree}/::" "${srclist}" | tar -c -f - -C "${srctree}" -T - | tar -xf - -C "${tmpdir}"
++tar -c -f - -T "${objlist}" | tar -xf - -C "${tmpdir}"
+ 
+ # Remove comments except SDPX lines
+ # Use a temporary file to store directory contents to prevent find/xargs from
+@@ -92,8 +46,4 @@ tar "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \
+     --owner=0 --group=0 --sort=name --numeric-owner --mode=u=rw,go=r,a+X \
+     -I $XZ -cf $tarfile -C "${tmpdir}/" . > /dev/null
+ 
+-echo $headers_md5 > kernel/kheaders.md5
+-echo "$this_file_md5" >> kernel/kheaders.md5
+-echo "$(md5sum $tarfile | cut -d ' ' -f1)" >> kernel/kheaders.md5
+-
+ rm -rf "${tmpdir}"
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index 77c44924cf54a3..800c8fc46b0868 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -894,6 +894,7 @@ int kthread_affine_preferred(struct task_struct *p, const struct cpumask *mask)
+ 
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(kthread_affine_preferred);
+ 
+ /*
+  * Re-affine kthreads according to their preferences
+diff --git a/kernel/module/main.c b/kernel/module/main.c
+index 05da78b6a6c141..4e4e0fd11ddee4 100644
+--- a/kernel/module/main.c
++++ b/kernel/module/main.c
+@@ -727,14 +727,16 @@ SYSCALL_DEFINE2(delete_module, const char __user *, name_user,
+ 	struct module *mod;
+ 	char name[MODULE_NAME_LEN];
+ 	char buf[MODULE_FLAGS_BUF_SIZE];
+-	int ret, forced = 0;
++	int ret, len, forced = 0;
+ 
+ 	if (!capable(CAP_SYS_MODULE) || modules_disabled)
+ 		return -EPERM;
+ 
+-	if (strncpy_from_user(name, name_user, MODULE_NAME_LEN-1) < 0)
+-		return -EFAULT;
+-	name[MODULE_NAME_LEN-1] = '\0';
++	len = strncpy_from_user(name, name_user, MODULE_NAME_LEN);
++	if (len == 0 || len == MODULE_NAME_LEN)
++		return -ENOENT;
++	if (len < 0)
++		return len;
+ 
+ 	audit_log_kern_module(name);
+ 
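For readers unfamiliar with strncpy_from_user()'s return convention --
string length on success (excluding the NUL), `count` when the source did
not fit, negative error otherwise -- here is a userspace model of the new
checks. copy_name() is a stand-in that mimics that convention, not the
kernel helper itself:

#include <stdio.h>
#include <string.h>

#define MODULE_NAME_LEN 56
#define ERR_NOENT       (-2)

static long copy_name(char *dst, const char *src, long size)
{
    for (long i = 0; i < size; i++) {
        dst[i] = src[i];
        if (!src[i])
            return i;   /* length, NUL included in the copy */
    }
    return size;        /* did not fit: destination unterminated */
}

static long check(const char *user_name)
{
    char name[MODULE_NAME_LEN];
    long len = copy_name(name, user_name, MODULE_NAME_LEN);

    if (len == 0 || len == MODULE_NAME_LEN)
        return ERR_NOENT;   /* empty, or overlong/unterminated */
    return len;
}

int main(void)
{
    char overlong[MODULE_NAME_LEN + 8];

    memset(overlong, 'a', sizeof(overlong) - 1);
    overlong[sizeof(overlong) - 1] = '\0';

    printf("\"snd\"    -> %ld\n", check("snd"));    /*  3 */
    printf("\"\"       -> %ld\n", check(""));       /* -2 */
    printf("overlong -> %ld\n", check(overlong));   /* -2 */
    return 0;
}
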
+diff --git a/kernel/power/console.c b/kernel/power/console.c
+index fcdf0e14a47d47..19c48aa5355d2b 100644
+--- a/kernel/power/console.c
++++ b/kernel/power/console.c
+@@ -16,6 +16,7 @@
+ #define SUSPEND_CONSOLE	(MAX_NR_CONSOLES-1)
+ 
+ static int orig_fgconsole, orig_kmsg;
++static bool vt_switch_done;
+ 
+ static DEFINE_MUTEX(vt_switch_mutex);
+ 
+@@ -136,17 +137,21 @@ void pm_prepare_console(void)
+ 	if (orig_fgconsole < 0)
+ 		return;
+ 
++	vt_switch_done = true;
++
+ 	orig_kmsg = vt_kmsg_redirect(SUSPEND_CONSOLE);
+ 	return;
+ }
+ 
+ void pm_restore_console(void)
+ {
+-	if (!pm_vt_switch())
++	if (!pm_vt_switch() && !vt_switch_done)
+ 		return;
+ 
+ 	if (orig_fgconsole >= 0) {
+ 		vt_move_to_console(orig_fgconsole, 0);
+ 		vt_kmsg_redirect(orig_kmsg);
+ 	}
++
++	vt_switch_done = false;
+ }
+diff --git a/kernel/printk/nbcon.c b/kernel/printk/nbcon.c
+index fd12efcc4aeda8..e7a3af81b17397 100644
+--- a/kernel/printk/nbcon.c
++++ b/kernel/printk/nbcon.c
+@@ -214,8 +214,9 @@ static void nbcon_seq_try_update(struct nbcon_context *ctxt, u64 new_seq)
+ 
+ /**
+  * nbcon_context_try_acquire_direct - Try to acquire directly
+- * @ctxt:	The context of the caller
+- * @cur:	The current console state
++ * @ctxt:		The context of the caller
++ * @cur:		The current console state
++ * @is_reacquire:	Whether this acquire is a reacquire
+  *
+  * Acquire the console when it is released. Also acquire the console when
+  * the current owner has a lower priority and the console is in a safe state.
+@@ -225,17 +226,17 @@ static void nbcon_seq_try_update(struct nbcon_context *ctxt, u64 new_seq)
+  *
+  * Errors:
+  *
+- *	-EPERM:		A panic is in progress and this is not the panic CPU.
+- *			Or the current owner or waiter has the same or higher
+- *			priority. No acquire method can be successful in
+- *			this case.
++ *	-EPERM:		A panic is in progress and this is neither the panic
++ *			CPU nor a reacquire. Or the current owner or
++ *			waiter has the same or higher priority. No acquire
++ *			method can be successful in these cases.
+  *
+  *	-EBUSY:		The current owner has a lower priority but the console
+  *			in an unsafe state. The caller should try using
+  *			the handover acquire method.
+  */
+ static int nbcon_context_try_acquire_direct(struct nbcon_context *ctxt,
+-					    struct nbcon_state *cur)
++					    struct nbcon_state *cur, bool is_reacquire)
+ {
+ 	unsigned int cpu = smp_processor_id();
+ 	struct console *con = ctxt->console;
+@@ -243,14 +244,20 @@ static int nbcon_context_try_acquire_direct(struct nbcon_context *ctxt,
+ 
+ 	do {
+ 		/*
+-		 * Panic does not imply that the console is owned. However, it
+-		 * is critical that non-panic CPUs during panic are unable to
+-		 * acquire ownership in order to satisfy the assumptions of
+-		 * nbcon_waiter_matches(). In particular, the assumption that
+-		 * lower priorities are ignored during panic.
++		 * Panic does not imply that the console is owned. However,
++		 * since all non-panic CPUs are stopped during panic(), it
++		 * is safer to have them avoid gaining console ownership.
++		 *
++		 * If this acquire is a reacquire (and an unsafe takeover
++		 * has not previously occurred) then it is allowed to attempt
++		 * a direct acquire in panic. This gives console drivers an
++		 * opportunity to perform any necessary cleanup if they were
++		 * interrupted by the panic CPU while printing.
+ 		 */
+-		if (other_cpu_in_panic())
++		if (other_cpu_in_panic() &&
++		    (!is_reacquire || cur->unsafe_takeover)) {
+ 			return -EPERM;
++		}
+ 
+ 		if (ctxt->prio <= cur->prio || ctxt->prio <= cur->req_prio)
+ 			return -EPERM;
+@@ -301,8 +308,9 @@ static bool nbcon_waiter_matches(struct nbcon_state *cur, int expected_prio)
+ 	 * Event #1 implies this context is EMERGENCY.
+ 	 * Event #2 implies the new context is PANIC.
+ 	 * Event #3 occurs when panic() has flushed the console.
+-	 * Events #4 and #5 are not possible due to the other_cpu_in_panic()
+-	 * check in nbcon_context_try_acquire_direct().
++	 * Event #4 occurs when a non-panic CPU reacquires.
++	 * Event #5 is not possible due to the other_cpu_in_panic() check
++	 *          in nbcon_context_try_acquire_handover().
+ 	 */
+ 
+ 	return (cur->req_prio == expected_prio);
+@@ -431,6 +439,16 @@ static int nbcon_context_try_acquire_handover(struct nbcon_context *ctxt,
+ 	WARN_ON_ONCE(ctxt->prio <= cur->prio || ctxt->prio <= cur->req_prio);
+ 	WARN_ON_ONCE(!cur->unsafe);
+ 
++	/*
++	 * Panic does not imply that the console is owned. However, it
++	 * is critical that non-panic CPUs during panic are unable to
++	 * wait for a handover in order to satisfy the assumptions of
++	 * nbcon_waiter_matches(). In particular, the assumption that
++	 * lower priorities are ignored during panic.
++	 */
++	if (other_cpu_in_panic())
++		return -EPERM;
++
+ 	/* Handover is not possible on the same CPU. */
+ 	if (cur->cpu == cpu)
+ 		return -EBUSY;
+@@ -558,7 +576,8 @@ static struct printk_buffers panic_nbcon_pbufs;
+ 
+ /**
+  * nbcon_context_try_acquire - Try to acquire nbcon console
+- * @ctxt:	The context of the caller
++ * @ctxt:		The context of the caller
++ * @is_reacquire:	Whether this acquire is a reacquire
+  *
+  * Context:	Under @ctxt->con->device_lock() or local_irq_save().
+  * Return:	True if the console was acquired. False otherwise.
+@@ -568,7 +587,7 @@ static struct printk_buffers panic_nbcon_pbufs;
+  * in an unsafe state. Otherwise, on success the caller may assume
+  * the console is not in an unsafe state.
+  */
+-static bool nbcon_context_try_acquire(struct nbcon_context *ctxt)
++static bool nbcon_context_try_acquire(struct nbcon_context *ctxt, bool is_reacquire)
+ {
+ 	unsigned int cpu = smp_processor_id();
+ 	struct console *con = ctxt->console;
+@@ -577,7 +596,7 @@ static bool nbcon_context_try_acquire(struct nbcon_context *ctxt)
+ 
+ 	nbcon_state_read(con, &cur);
+ try_again:
+-	err = nbcon_context_try_acquire_direct(ctxt, &cur);
++	err = nbcon_context_try_acquire_direct(ctxt, &cur, is_reacquire);
+ 	if (err != -EBUSY)
+ 		goto out;
+ 
+@@ -913,7 +932,7 @@ void nbcon_reacquire_nobuf(struct nbcon_write_context *wctxt)
+ {
+ 	struct nbcon_context *ctxt = &ACCESS_PRIVATE(wctxt, ctxt);
+ 
+-	while (!nbcon_context_try_acquire(ctxt))
++	while (!nbcon_context_try_acquire(ctxt, true))
+ 		cpu_relax();
+ 
+ 	nbcon_write_context_set_buf(wctxt, NULL, 0);
+@@ -1101,7 +1120,7 @@ static bool nbcon_emit_one(struct nbcon_write_context *wctxt, bool use_atomic)
+ 		cant_migrate();
+ 	}
+ 
+-	if (!nbcon_context_try_acquire(ctxt))
++	if (!nbcon_context_try_acquire(ctxt, false))
+ 		goto out;
+ 
+ 	/*
+@@ -1486,7 +1505,7 @@ static int __nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq,
+ 	ctxt->prio			= nbcon_get_default_prio();
+ 	ctxt->allow_unsafe_takeover	= allow_unsafe_takeover;
+ 
+-	if (!nbcon_context_try_acquire(ctxt))
++	if (!nbcon_context_try_acquire(ctxt, false))
+ 		return -EPERM;
+ 
+ 	while (nbcon_seq_read(con) < stop_seq) {
+@@ -1762,7 +1781,7 @@ bool nbcon_device_try_acquire(struct console *con)
+ 	ctxt->console	= con;
+ 	ctxt->prio	= NBCON_PRIO_NORMAL;
+ 
+-	if (!nbcon_context_try_acquire(ctxt))
++	if (!nbcon_context_try_acquire(ctxt, false))
+ 		return false;
+ 
+ 	if (!nbcon_context_enter_unsafe(ctxt))
+diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
+index 4fa7772be18318..a6202cbef9c53c 100644
+--- a/kernel/rcu/rcutorture.c
++++ b/kernel/rcu/rcutorture.c
+@@ -458,7 +458,7 @@ rcu_read_delay(struct torture_random_state *rrsp, struct rt_read_seg *rtrsp)
+ 	    !(torture_random(rrsp) % (nrealreaders * 2000 * longdelay_ms))) {
+ 		started = cur_ops->get_gp_seq();
+ 		ts = rcu_trace_clock_local();
+-		if (preempt_count() & (SOFTIRQ_MASK | HARDIRQ_MASK))
++		if ((preempt_count() & HARDIRQ_MASK) || softirq_count())
+ 			longdelay_ms = 5; /* Avoid triggering BH limits. */
+ 		mdelay(longdelay_ms);
+ 		rtrsp->rt_delay_ms = longdelay_ms;
+@@ -1922,7 +1922,7 @@ static void rcutorture_one_extend_check(char *s, int curstate, int new, int old,
+ 		return;
+ 
+ 	WARN_ONCE((curstate & (RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH)) &&
+-		  !(preempt_count() & SOFTIRQ_MASK), ROEC_ARGS);
++		  !softirq_count(), ROEC_ARGS);
+ 	WARN_ONCE((curstate & (RCUTORTURE_RDR_PREEMPT | RCUTORTURE_RDR_SCHED)) &&
+ 		  !(preempt_count() & PREEMPT_MASK), ROEC_ARGS);
+ 	WARN_ONCE(cur_ops->readlock_nesting &&
+@@ -1936,7 +1936,7 @@ static void rcutorture_one_extend_check(char *s, int curstate, int new, int old,
+ 
+ 	WARN_ONCE(cur_ops->extendables &&
+ 		  !(curstate & (RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH)) &&
+-		  (preempt_count() & SOFTIRQ_MASK), ROEC_ARGS);
++		  softirq_count(), ROEC_ARGS);
+ 
+ 	/*
+ 	 * non-preemptible RCU in a preemptible kernel uses preempt_disable()
+@@ -1957,6 +1957,9 @@ static void rcutorture_one_extend_check(char *s, int curstate, int new, int old,
+ 	if (!IS_ENABLED(CONFIG_PREEMPT_RCU))
+ 		mask |= RCUTORTURE_RDR_PREEMPT | RCUTORTURE_RDR_SCHED;
+ 
++	if (IS_ENABLED(CONFIG_PREEMPT_RT) && softirq_count())
++		mask |= RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
++
+ 	WARN_ONCE(cur_ops->readlock_nesting && !(curstate & mask) &&
+ 		  cur_ops->readlock_nesting() > 0, ROEC_ARGS);
+ }
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 37778d913f0115..680f3117c6b6d6 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -4229,6 +4229,8 @@ int rcutree_prepare_cpu(unsigned int cpu)
+ 	rdp->rcu_iw_gp_seq = rdp->gp_seq - 1;
+ 	trace_rcu_grace_period(rcu_state.name, rdp->gp_seq, TPS("cpuonl"));
+ 	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
++
++	rcu_preempt_deferred_qs_init(rdp);
+ 	rcu_spawn_rnp_kthreads(rnp);
+ 	rcu_spawn_cpu_nocb_kthread(cpu);
+ 	ASSERT_EXCLUSIVE_WRITER(rcu_state.n_online_cpus);
+diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
+index 1bba2225e7448b..8ba04b179416a0 100644
+--- a/kernel/rcu/tree.h
++++ b/kernel/rcu/tree.h
+@@ -174,6 +174,17 @@ struct rcu_snap_record {
+ 	unsigned long   jiffies;	/* Track jiffies value */
+ };
+ 
++/*
++ * An IRQ work (deferred_qs_iw) is used by RCU to get the scheduler's attention
++ * to report quiescent states at the soonest possible time.
++ * The request can be in one of the following states:
++ * - DEFER_QS_IDLE: An IRQ work has not yet been scheduled.
++ * - DEFER_QS_PENDING: An IRQ work was scheduled but either not yet run, or it
++ *                     ran and we still haven't reported a quiescent state.
++ */
++#define DEFER_QS_IDLE		0
++#define DEFER_QS_PENDING	1
++
+ /* Per-CPU data for read-copy update. */
+ struct rcu_data {
+ 	/* 1) quiescent-state and grace-period handling : */
+@@ -191,7 +202,7 @@ struct rcu_data {
+ 					/*  during and after the last grace */
+ 					/* period it is aware of. */
+ 	struct irq_work defer_qs_iw;	/* Obtain later scheduler attention. */
+-	bool defer_qs_iw_pending;	/* Scheduler attention pending? */
++	int defer_qs_iw_pending;	/* Scheduler attention pending? */
+ 	struct work_struct strict_work;	/* Schedule readers for strict GPs. */
+ 
+ 	/* 2) batch handling */
+@@ -476,6 +487,7 @@ static int rcu_print_task_exp_stall(struct rcu_node *rnp);
+ static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp);
+ static void rcu_flavor_sched_clock_irq(int user);
+ static void dump_blkd_tasks(struct rcu_node *rnp, int ncheck);
++static void rcu_preempt_deferred_qs_init(struct rcu_data *rdp);
+ static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags);
+ static void rcu_preempt_boost_start_gp(struct rcu_node *rnp);
+ static bool rcu_is_callbacks_kthread(struct rcu_data *rdp);
+diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
+index 6b3118a4dde379..2083d2343bd456 100644
+--- a/kernel/rcu/tree_nocb.h
++++ b/kernel/rcu/tree_nocb.h
+@@ -1152,7 +1152,6 @@ static bool rcu_nocb_rdp_offload_wait_cond(struct rcu_data *rdp)
+ static int rcu_nocb_rdp_offload(struct rcu_data *rdp)
+ {
+ 	int wake_gp;
+-	struct rcu_data *rdp_gp = rdp->nocb_gp_rdp;
+ 
+ 	WARN_ON_ONCE(cpu_online(rdp->cpu));
+ 	/*
+@@ -1162,7 +1161,7 @@ static int rcu_nocb_rdp_offload(struct rcu_data *rdp)
+ 	if (!rdp->nocb_gp_rdp)
+ 		return -EINVAL;
+ 
+-	if (WARN_ON_ONCE(!rdp_gp->nocb_gp_kthread))
++	if (WARN_ON_ONCE(!rdp->nocb_gp_kthread))
+ 		return -EINVAL;
+ 
+ 	pr_info("Offloading %d\n", rdp->cpu);
+@@ -1172,7 +1171,7 @@ static int rcu_nocb_rdp_offload(struct rcu_data *rdp)
+ 
+ 	wake_gp = rcu_nocb_queue_toggle_rdp(rdp);
+ 	if (wake_gp)
+-		wake_up_process(rdp_gp->nocb_gp_kthread);
++		wake_up_process(rdp->nocb_gp_kthread);
+ 
+ 	swait_event_exclusive(rdp->nocb_state_wq,
+ 			      rcu_nocb_rdp_offload_wait_cond(rdp));
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index 3c0bbbbb686fe2..f5fea66c2b0df1 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -486,13 +486,16 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
+ 	struct rcu_node *rnp;
+ 	union rcu_special special;
+ 
++	rdp = this_cpu_ptr(&rcu_data);
++	if (rdp->defer_qs_iw_pending == DEFER_QS_PENDING)
++		rdp->defer_qs_iw_pending = DEFER_QS_IDLE;
++
+ 	/*
+ 	 * If RCU core is waiting for this CPU to exit its critical section,
+ 	 * report the fact that it has exited.  Because irqs are disabled,
+ 	 * t->rcu_read_unlock_special cannot change.
+ 	 */
+ 	special = t->rcu_read_unlock_special;
+-	rdp = this_cpu_ptr(&rcu_data);
+ 	if (!special.s && !rdp->cpu_no_qs.b.exp) {
+ 		local_irq_restore(flags);
+ 		return;
+@@ -624,10 +627,29 @@ notrace void rcu_preempt_deferred_qs(struct task_struct *t)
+  */
+ static void rcu_preempt_deferred_qs_handler(struct irq_work *iwp)
+ {
++	unsigned long flags;
+ 	struct rcu_data *rdp;
+ 
+ 	rdp = container_of(iwp, struct rcu_data, defer_qs_iw);
+-	rdp->defer_qs_iw_pending = false;
++	local_irq_save(flags);
++
++	 * If the IRQ work handler happens to run in the middle of an RCU read-side
++	 * If the IRQ work handler happens to run in the middle of RCU read-side
++	 * critical section, it could be ineffective in getting the scheduler's
++	 * attention to report a deferred quiescent state (the whole point of the
++	 * IRQ work). For this reason, requeue the IRQ work.
++	 *
++	 * Basically, we want to avoid the following situation:
++	 * 1. rcu_read_unlock() queues IRQ work (state -> DEFER_QS_PENDING)
++	 * 2. CPU enters new rcu_read_lock()
++	 * 3. IRQ work runs but cannot report QS due to rcu_preempt_depth() > 0
++	 * 4. rcu_read_unlock() does not re-queue work (state still PENDING)
++	 * 5. Deferred QS reporting does not happen.
++	 */
++	if (rcu_preempt_depth() > 0)
++		WRITE_ONCE(rdp->defer_qs_iw_pending, DEFER_QS_IDLE);
++
++	local_irq_restore(flags);
+ }
+ 
+ /*
+@@ -673,17 +695,11 @@ static void rcu_read_unlock_special(struct task_struct *t)
+ 			set_tsk_need_resched(current);
+ 			set_preempt_need_resched();
+ 			if (IS_ENABLED(CONFIG_IRQ_WORK) && irqs_were_disabled &&
+-			    expboost && !rdp->defer_qs_iw_pending && cpu_online(rdp->cpu)) {
++			    expboost && rdp->defer_qs_iw_pending != DEFER_QS_PENDING &&
++			    cpu_online(rdp->cpu)) {
+ 				// Get scheduler to re-evaluate and call hooks.
+ 				// If !IRQ_WORK, FQS scan will eventually IPI.
+-				if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD) &&
+-				    IS_ENABLED(CONFIG_PREEMPT_RT))
+-					rdp->defer_qs_iw = IRQ_WORK_INIT_HARD(
+-								rcu_preempt_deferred_qs_handler);
+-				else
+-					init_irq_work(&rdp->defer_qs_iw,
+-						      rcu_preempt_deferred_qs_handler);
+-				rdp->defer_qs_iw_pending = true;
++				rdp->defer_qs_iw_pending = DEFER_QS_PENDING;
+ 				irq_work_queue_on(&rdp->defer_qs_iw, rdp->cpu);
+ 			}
+ 		}
+@@ -822,6 +838,10 @@ dump_blkd_tasks(struct rcu_node *rnp, int ncheck)
+ 	}
+ }
+ 
++static void rcu_preempt_deferred_qs_init(struct rcu_data *rdp)
++{
++	rdp->defer_qs_iw = IRQ_WORK_INIT_HARD(rcu_preempt_deferred_qs_handler);
++}
+ #else /* #ifdef CONFIG_PREEMPT_RCU */
+ 
+ /*
+@@ -1021,6 +1041,8 @@ dump_blkd_tasks(struct rcu_node *rnp, int ncheck)
+ 	WARN_ON_ONCE(!list_empty(&rnp->blkd_tasks));
+ }
+ 
++static void rcu_preempt_deferred_qs_init(struct rcu_data *rdp) { }
++
+ #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
+ 
+ /*
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 65f3b2cc891da6..d86b211f2c141f 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -3249,6 +3249,9 @@ void sched_dl_do_global(void)
+ 	if (global_rt_runtime() != RUNTIME_INF)
+ 		new_bw = to_ratio(global_rt_period(), global_rt_runtime());
+ 
++	for_each_possible_cpu(cpu)
++		init_dl_rq_bw_ratio(&cpu_rq(cpu)->dl);
++
+ 	for_each_possible_cpu(cpu) {
+ 		rcu_read_lock_sched();
+ 
+@@ -3264,7 +3267,6 @@ void sched_dl_do_global(void)
+ 		raw_spin_unlock_irqrestore(&dl_b->lock, flags);
+ 
+ 		rcu_read_unlock_sched();
+-		init_dl_rq_bw_ratio(&cpu_rq(cpu)->dl);
+ 	}
+ }
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 138d9f4658d5f8..23264fe111e515 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -12170,8 +12170,14 @@ static inline bool update_newidle_cost(struct sched_domain *sd, u64 cost)
+ 		/*
+ 		 * Track max cost of a domain to make sure to not delay the
+ 		 * next wakeup on the CPU.
++		 *
++		 * sched_balance_newidle() bumps the cost whenever newidle
++		 * balance fails, and we don't want things to grow out of
++		 * control.  Use the sysctl_sched_migration_cost as the upper
++		 * limit, plus a little extra to avoid off-by-ones.
+ 		 */
+-		sd->max_newidle_lb_cost = cost;
++		sd->max_newidle_lb_cost =
++			min(cost, sysctl_sched_migration_cost + 200);
+ 		sd->last_decay_max_lb_cost = jiffies;
+ 	} else if (time_after(jiffies, sd->last_decay_max_lb_cost + HZ)) {
+ 		/*
+@@ -12863,10 +12869,17 @@ static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf)
+ 
+ 			t1 = sched_clock_cpu(this_cpu);
+ 			domain_cost = t1 - t0;
+-			update_newidle_cost(sd, domain_cost);
+-
+ 			curr_cost += domain_cost;
+ 			t0 = t1;
++
++			/*
++			 * Failing newidle means it is not effective;
++			 * bump the cost so we end up doing less of it.
++			 */
++			if (!pulled_task)
++				domain_cost = (3 * sd->max_newidle_lb_cost) / 2;
++
++			update_newidle_cost(sd, domain_cost);
+ 		}
+ 
+ 		/*
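Taken together, the 3/2 bump on a failed balance and the new clamp in
update_newidle_cost() make the stored cost converge instead of growing
without bound. A toy simulation (500000 ns is the usual
sysctl_sched_migration_cost default, but it is tunable, so treat it as an
assumption):

#include <stdio.h>

int main(void)
{
    unsigned long long migration_cost = 500000, max_cost = 10000;

    for (int fail = 1; fail <= 12; fail++) {
        /* failed newidle balance: bump to 3/2 of the running max */
        unsigned long long cost = 3 * max_cost / 2;

        /* update_newidle_cost() with the new upper limit */
        if (cost > max_cost)
            max_cost = cost < migration_cost + 200
                     ? cost : migration_cost + 200;

        printf("failure %2d: max_newidle_lb_cost = %llu\n", fail, max_cost);
    }
    return 0;
}
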
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index bfcb8b0a1e2c46..41a216fec3ce0d 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -2936,6 +2936,12 @@ static int sched_rt_handler(const struct ctl_table *table, int write, void *buff
+ 	sched_domains_mutex_unlock();
+ 	mutex_unlock(&mutex);
+ 
++	/*
++	 * After changing maximum available bandwidth for DEADLINE, we need to
++	 * recompute per root domain and per cpus variables accordingly.
++	 */
++	rebuild_sched_domains();
++
+ 	return ret;
+ }
+ 
+diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
+index ba7ff14f5339b5..f9b3aa9afb1784 100644
+--- a/kernel/trace/fprobe.c
++++ b/kernel/trace/fprobe.c
+@@ -352,7 +352,7 @@ static void fprobe_return(struct ftrace_graph_ret *trace,
+ 	size_words = SIZE_IN_LONG(size);
+ 	ret_ip = ftrace_regs_get_instruction_pointer(fregs);
+ 
+-	preempt_disable();
++	preempt_disable_notrace();
+ 
+ 	curr = 0;
+ 	while (size_words > curr) {
+@@ -368,7 +368,7 @@ static void fprobe_return(struct ftrace_graph_ret *trace,
+ 		}
+ 		curr += size;
+ 	}
+-	preempt_enable();
++	preempt_enable_notrace();
+ }
+ NOKPROBE_SYMBOL(fprobe_return);
+ 
+diff --git a/kernel/trace/rv/rv_trace.h b/kernel/trace/rv/rv_trace.h
+index 422b75f58891eb..99c3801616d400 100644
+--- a/kernel/trace/rv/rv_trace.h
++++ b/kernel/trace/rv/rv_trace.h
+@@ -129,8 +129,9 @@ DECLARE_EVENT_CLASS(error_da_monitor_id,
+ #endif /* CONFIG_DA_MON_EVENTS_ID */
+ #endif /* _TRACE_RV_H */
+ 
+-/* This part ust be outside protection */
++/* This part must be outside protection */
+ #undef TRACE_INCLUDE_PATH
+ #define TRACE_INCLUDE_PATH .
++#undef TRACE_INCLUDE_FILE
+ #define TRACE_INCLUDE_FILE rv_trace
+ #include <trace/define_trace.h>
+diff --git a/lib/sbitmap.c b/lib/sbitmap.c
+index d3412984170c03..c07e3cd82e29d7 100644
+--- a/lib/sbitmap.c
++++ b/lib/sbitmap.c
+@@ -208,8 +208,28 @@ static int sbitmap_find_bit_in_word(struct sbitmap_word *map,
+ 	return nr;
+ }
+ 
++static unsigned int __map_depth_with_shallow(const struct sbitmap *sb,
++					     int index,
++					     unsigned int shallow_depth)
++{
++	u64 shallow_word_depth;
++	unsigned int word_depth, reminder;
++
++	word_depth = __map_depth(sb, index);
++	if (shallow_depth >= sb->depth)
++		return word_depth;
++
++	shallow_word_depth = word_depth * shallow_depth;
++	reminder = do_div(shallow_word_depth, sb->depth);
++
++	if (reminder >= (index + 1) * word_depth)
++		shallow_word_depth++;
++
++	return (unsigned int)shallow_word_depth;
++}
++
+ static int sbitmap_find_bit(struct sbitmap *sb,
+-			    unsigned int depth,
++			    unsigned int shallow_depth,
+ 			    unsigned int index,
+ 			    unsigned int alloc_hint,
+ 			    bool wrap)
+@@ -218,12 +238,12 @@ static int sbitmap_find_bit(struct sbitmap *sb,
+ 	int nr = -1;
+ 
+ 	for (i = 0; i < sb->map_nr; i++) {
+-		nr = sbitmap_find_bit_in_word(&sb->map[index],
+-					      min_t(unsigned int,
+-						    __map_depth(sb, index),
+-						    depth),
+-					      alloc_hint, wrap);
++		unsigned int depth = __map_depth_with_shallow(sb, index,
++							      shallow_depth);
+ 
++		if (depth)
++			nr = sbitmap_find_bit_in_word(&sb->map[index], depth,
++						      alloc_hint, wrap);
+ 		if (nr != -1) {
+ 			nr += index << sb->shift;
+ 			break;
+@@ -406,27 +426,9 @@ EXPORT_SYMBOL_GPL(sbitmap_bitmap_show);
+ static unsigned int sbq_calc_wake_batch(struct sbitmap_queue *sbq,
+ 					unsigned int depth)
+ {
+-	unsigned int wake_batch;
+-	unsigned int shallow_depth;
+-
+-	/*
+-	 * Each full word of the bitmap has bits_per_word bits, and there might
+-	 * be a partial word. There are depth / bits_per_word full words and
+-	 * depth % bits_per_word bits left over. In bitwise arithmetic:
+-	 *
+-	 * bits_per_word = 1 << shift
+-	 * depth / bits_per_word = depth >> shift
+-	 * depth % bits_per_word = depth & ((1 << shift) - 1)
+-	 *
+-	 * Each word can be limited to sbq->min_shallow_depth bits.
+-	 */
+-	shallow_depth = min(1U << sbq->sb.shift, sbq->min_shallow_depth);
+-	depth = ((depth >> sbq->sb.shift) * shallow_depth +
+-		 min(depth & ((1U << sbq->sb.shift) - 1), shallow_depth));
+-	wake_batch = clamp_t(unsigned int, depth / SBQ_WAIT_QUEUES, 1,
+-			     SBQ_WAKE_BATCH);
+-
+-	return wake_batch;
++	return clamp_t(unsigned int,
++		       min(depth, sbq->min_shallow_depth) / SBQ_WAIT_QUEUES,
++		       1, SBQ_WAKE_BATCH);
+ }
+ 
+ int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
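The net effect is that @shallow_depth now limits the bitmap as a whole
rather than each word, with the allowance split across words in proportion
to their size. A userspace replica of the __map_depth_with_shallow()
arithmetic, shown for a 48-bit map (a 32-bit word plus a 16-bit tail) and a
shallow depth of 10; in these examples the per-word limits add up exactly
to the requested depth:

#include <stdio.h>

static unsigned int shallow_for_word(unsigned int word_depth,
                                     unsigned int index,
                                     unsigned int depth,
                                     unsigned int shallow_depth)
{
    unsigned long long scaled;
    unsigned long long rem;

    if (shallow_depth >= depth)
        return word_depth;

    scaled = (unsigned long long)word_depth * shallow_depth;
    rem = scaled % depth;   /* do_div() in the kernel version */
    scaled /= depth;

    /* hand the rounding remainder to the lower-indexed words */
    if (rem >= (index + 1ULL) * word_depth)
        scaled++;
    return (unsigned int)scaled;
}

int main(void)
{
    unsigned int words[] = { 32, 16 }, depth = 48, shallow = 10, sum = 0;

    for (unsigned int i = 0; i < 2; i++) {
        unsigned int d = shallow_for_word(words[i], i, depth, shallow);

        printf("word %u: %u of %u bits usable\n", i, d, words[i]);
        sum += d;
    }
    printf("total: %u (requested shallow depth %u)\n", sum, shallow);
    return 0;
}
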
+diff --git a/mm/damon/core.c b/mm/damon/core.c
+index 629c9a1adff8ac..18d8eff71e0b96 100644
+--- a/mm/damon/core.c
++++ b/mm/damon/core.c
+@@ -978,6 +978,7 @@ static int damos_commit(struct damos *dst, struct damos *src)
+ 		return err;
+ 
+ 	dst->wmarks = src->wmarks;
++	dst->target_nid = src->target_nid;
+ 
+ 	err = damos_commit_filters(dst, src);
+ 	return err;
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 47d76d03ce30b4..213780e73eaa8a 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -1516,10 +1516,9 @@ static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma)
+ }
+ 
+ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
+-		pud_t *pud, pfn_t pfn, bool write)
++		pud_t *pud, pfn_t pfn, pgprot_t prot, bool write)
+ {
+ 	struct mm_struct *mm = vma->vm_mm;
+-	pgprot_t prot = vma->vm_page_prot;
+ 	pud_t entry;
+ 
+ 	if (!pud_none(*pud)) {
+@@ -1581,7 +1580,7 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
+ 	track_pfn_insert(vma, &pgprot, pfn);
+ 
+ 	ptl = pud_lock(vma->vm_mm, vmf->pud);
+-	insert_pfn_pud(vma, addr, vmf->pud, pfn, write);
++	insert_pfn_pud(vma, addr, vmf->pud, pfn, pgprot, write);
+ 	spin_unlock(ptl);
+ 
+ 	return VM_FAULT_NOPAGE;
+@@ -1625,7 +1624,7 @@ vm_fault_t vmf_insert_folio_pud(struct vm_fault *vmf, struct folio *folio,
+ 		add_mm_counter(mm, mm_counter_file(folio), HPAGE_PUD_NR);
+ 	}
+ 	insert_pfn_pud(vma, addr, vmf->pud, pfn_to_pfn_t(folio_pfn(folio)),
+-		write);
++		       vma->vm_page_prot, write);
+ 	spin_unlock(ptl);
+ 
+ 	return VM_FAULT_NOPAGE;
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index c12cef3eeb32a5..4876791465d3df 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -475,6 +475,7 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
+ {
+ 	unsigned long flags;
+ 	struct kmemleak_object *object;
++	bool warn = false;
+ 
+ 	/* try the slab allocator first */
+ 	if (object_cache) {
+@@ -493,8 +494,10 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
+ 	else if (mem_pool_free_count)
+ 		object = &mem_pool[--mem_pool_free_count];
+ 	else
+-		pr_warn_once("Memory pool empty, consider increasing CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE\n");
++		warn = true;
+ 	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
++	if (warn)
++		pr_warn_once("Memory pool empty, consider increasing CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE\n");
+ 
+ 	return object;
+ }
+@@ -2172,6 +2175,7 @@ static const struct file_operations kmemleak_fops = {
+ static void __kmemleak_do_cleanup(void)
+ {
+ 	struct kmemleak_object *object, *tmp;
++	unsigned int cnt = 0;
+ 
+ 	/*
+ 	 * Kmemleak has already been disabled, no need for RCU list traversal
+@@ -2180,6 +2184,10 @@ static void __kmemleak_do_cleanup(void)
+ 	list_for_each_entry_safe(object, tmp, &object_list, object_list) {
+ 		__remove_object(object);
+ 		__delete_object(object);
++
++		/* Call cond_resched() once per 64 iterations to avoid soft lockups */
++		if (!(++cnt & 0x3f))
++			cond_resched();
+ 	}
+ }
+ 
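The `!(++cnt & 0x3f)` test is the usual mask idiom for acting once every 64
iterations; a minimal standalone check:

#include <stdio.h>

int main(void)
{
    unsigned int cnt = 0, fired = 0;

    for (int i = 0; i < 256; i++) {
        if (!(++cnt & 0x3f))
            fired++;    /* the cleanup loop calls cond_resched() here */
    }
    printf("fired %u times in 256 iterations\n", fired);    /* prints 4 */
    return 0;
}
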
+diff --git a/mm/ptdump.c b/mm/ptdump.c
+index 106e1d66e9f9ee..3e78bf33da420d 100644
+--- a/mm/ptdump.c
++++ b/mm/ptdump.c
+@@ -153,6 +153,7 @@ void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm, pgd_t *pgd)
+ {
+ 	const struct ptdump_range *range = st->range;
+ 
++	get_online_mems();
+ 	mmap_write_lock(mm);
+ 	while (range->start != range->end) {
+ 		walk_page_range_novma(mm, range->start, range->end,
+@@ -160,6 +161,7 @@ void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm, pgd_t *pgd)
+ 		range++;
+ 	}
+ 	mmap_write_unlock(mm);
++	put_online_mems();
+ 
+ 	/* Flush out the last page */
+ 	st->note_page(st, 0, -1, 0);
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 6b2ecbcf069ef1..311dc1c9d3b9ab 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -884,7 +884,9 @@ static int shmem_add_to_page_cache(struct folio *folio,
+ 				   pgoff_t index, void *expected, gfp_t gfp)
+ {
+ 	XA_STATE_ORDER(xas, &mapping->i_pages, index, folio_order(folio));
+-	long nr = folio_nr_pages(folio);
++	unsigned long nr = folio_nr_pages(folio);
++	swp_entry_t iter, swap;
++	void *entry;
+ 
+ 	VM_BUG_ON_FOLIO(index != round_down(index, nr), folio);
+ 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+@@ -896,14 +898,25 @@ static int shmem_add_to_page_cache(struct folio *folio,
+ 
+ 	gfp &= GFP_RECLAIM_MASK;
+ 	folio_throttle_swaprate(folio, gfp);
++	swap = radix_to_swp_entry(expected);
+ 
+ 	do {
++		iter = swap;
+ 		xas_lock_irq(&xas);
+-		if (expected != xas_find_conflict(&xas)) {
+-			xas_set_err(&xas, -EEXIST);
+-			goto unlock;
++		xas_for_each_conflict(&xas, entry) {
++			/*
++			 * The range must either be empty, or filled with
++			 * expected swap entries. Shmem swap entries are never
++			 * partially freed without split of both entry and
++			 * partially freed without splitting both the entry
++			 * and the folio, so there shouldn't be any holes.
++			if (!expected || entry != swp_to_radix_entry(iter)) {
++				xas_set_err(&xas, -EEXIST);
++				goto unlock;
++			}
++			iter.val += 1 << xas_get_order(&xas);
+ 		}
+-		if (expected && xas_find_conflict(&xas)) {
++		if (expected && iter.val - nr != swap.val) {
+ 			xas_set_err(&xas, -EEXIST);
+ 			goto unlock;
+ 		}
+@@ -2330,7 +2343,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
+ 			error = -ENOMEM;
+ 			goto failed;
+ 		}
+-	} else if (order != folio_order(folio)) {
++	} else if (order > folio_order(folio)) {
+ 		/*
+ 		 * Swap readahead may swap in order 0 folios into swapcache
+ 		 * asynchronously, while the shmem mapping can still stores
+@@ -2353,15 +2366,23 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
+ 
+ 			swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
+ 		}
++	} else if (order < folio_order(folio)) {
++		swap.val = round_down(swap.val, 1 << folio_order(folio));
++		index = round_down(index, 1 << folio_order(folio));
+ 	}
+ 
+ alloced:
+-	/* We have to do this with folio locked to prevent races */
++	/*
++	 * We have to do this with the folio locked to prevent races.
++	 * entry matches the folio; that's enough to ensure the folio
++	 * entry matches the folio, that's enough to ensure the folio
++	 * is not used outside of shmem, as shmem swap entries
++	 * and swap cache folios are never partially freed.
++	 */
+ 	folio_lock(folio);
+ 	if ((!skip_swapcache && !folio_test_swapcache(folio)) ||
+-	    folio->swap.val != swap.val ||
+ 	    !shmem_confirm_swap(mapping, index, swap) ||
+-	    xa_get_order(&mapping->i_pages, index) != folio_order(folio)) {
++	    folio->swap.val != swap.val) {
+ 		error = -EEXIST;
+ 		goto unlock;
+ 	}
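A simplified userspace model of the new conflict walk may help. It assumes
order-0 entries throughout (the kernel also steps by 1 << xas_get_order()
for large entries): every conflict must be the next expected swap value,
and the walk must account for exactly nr slots, otherwise the insert fails
with -EEXIST:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

static bool swap_range_matches(const unsigned long *entries, size_t n,
                               unsigned long swap, unsigned long nr)
{
    unsigned long iter = swap;

    for (size_t i = 0; i < n; i++) {
        if (entries[i] != iter)     /* foreign or out-of-order entry */
            return false;
        iter++;                     /* one slot per order-0 entry */
    }
    return iter - swap == nr;       /* a hole leaves the walk short */
}

int main(void)
{
    unsigned long ok[]   = { 100, 101, 102, 103 };
    unsigned long hole[] = { 100, 101, 103 };   /* 102 missing */

    printf("contiguous: %d\n", swap_range_matches(ok, 4, 100, 4));      /* 1 */
    printf("with hole:  %d\n", swap_range_matches(hole, 3, 100, 4));    /* 0 */
    return 0;
}
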
+diff --git a/mm/slub.c b/mm/slub.c
+index 5c73b956615fe6..b8d75fcabbc7ee 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -4268,7 +4268,12 @@ static void *___kmalloc_large_node(size_t size, gfp_t flags, int node)
+ 		flags = kmalloc_fix_flags(flags);
+ 
+ 	flags |= __GFP_COMP;
+-	folio = (struct folio *)alloc_pages_node_noprof(node, flags, order);
++
++	if (node == NUMA_NO_NODE)
++		folio = (struct folio *)alloc_pages_noprof(flags, order);
++	else
++		folio = (struct folio *)__alloc_pages_noprof(flags, order, node, NULL);
++
+ 	if (folio) {
+ 		ptr = folio_address(folio);
+ 		lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B,
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index 416c573ed36316..6bbc6cba80da9f 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -1829,13 +1829,16 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
+ 			/* Check if we can move the pmd without splitting it. */
+ 			if (move_splits_huge_pmd(dst_addr, src_addr, src_start + len) ||
+ 			    !pmd_none(dst_pmdval)) {
+-				struct folio *folio = pmd_folio(*src_pmd);
+-
+-				if (!folio || (!is_huge_zero_folio(folio) &&
+-					       !PageAnonExclusive(&folio->page))) {
+-					spin_unlock(ptl);
+-					err = -EBUSY;
+-					break;
++				/* Can be a migration entry */
++				if (pmd_present(*src_pmd)) {
++					struct folio *folio = pmd_folio(*src_pmd);
++
++					if (!is_huge_zero_folio(folio) &&
++					    !PageAnonExclusive(&folio->page)) {
++						spin_unlock(ptl);
++						err = -EBUSY;
++						break;
++					}
+ 				}
+ 
+ 				spin_unlock(ptl);
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index fccdb864af7264..082cca18db2eb1 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -2144,7 +2144,8 @@ struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ 	struct hci_link *link;
+ 
+ 	/* Look for any BIS that is open for rebinding */
+-	conn = hci_conn_hash_lookup_big_state(hdev, qos->bcast.big, BT_OPEN);
++	conn = hci_conn_hash_lookup_big_state(hdev, qos->bcast.big, BT_OPEN,
++					      HCI_ROLE_MASTER);
+ 	if (conn) {
+ 		memcpy(qos, &conn->iso_qos, sizeof(*qos));
+ 		conn->state = BT_CONNECTED;
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index c1dd8d78701fe5..b83995898da098 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6880,7 +6880,8 @@ static void hci_le_create_big_complete_evt(struct hci_dev *hdev, void *data,
+ 
+ 	/* Connect all BISes that are bound to the BIG */
+ 	while ((conn = hci_conn_hash_lookup_big_state(hdev, ev->handle,
+-						      BT_BOUND))) {
++						      BT_BOUND,
++						      HCI_ROLE_MASTER))) {
+ 		if (ev->status) {
+ 			hci_connect_cfm(conn, ev->status);
+ 			hci_conn_del(conn);
+@@ -6996,6 +6997,37 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ 	hci_dev_unlock(hdev);
+ }
+ 
++static void hci_le_big_sync_lost_evt(struct hci_dev *hdev, void *data,
++				     struct sk_buff *skb)
++{
++	struct hci_evt_le_big_sync_lost *ev = data;
++	struct hci_conn *bis, *conn;
++
++	bt_dev_dbg(hdev, "big handle 0x%2.2x", ev->handle);
++
++	hci_dev_lock(hdev);
++
++	/* Delete the pa sync connection */
++	bis = hci_conn_hash_lookup_pa_sync_big_handle(hdev, ev->handle);
++	if (bis) {
++		conn = hci_conn_hash_lookup_pa_sync_handle(hdev,
++							   bis->sync_handle);
++		if (conn)
++			hci_conn_del(conn);
++	}
++
++	/* Delete each bis connection */
++	while ((bis = hci_conn_hash_lookup_big_state(hdev, ev->handle,
++						     BT_CONNECTED,
++						     HCI_ROLE_SLAVE))) {
++		clear_bit(HCI_CONN_BIG_SYNC, &bis->flags);
++		hci_disconn_cfm(bis, ev->reason);
++		hci_conn_del(bis);
++	}
++
++	hci_dev_unlock(hdev);
++}
++
+ static void hci_le_big_info_adv_report_evt(struct hci_dev *hdev, void *data,
+ 					   struct sk_buff *skb)
+ {
+@@ -7119,6 +7151,11 @@ static const struct hci_le_ev {
+ 		     hci_le_big_sync_established_evt,
+ 		     sizeof(struct hci_evt_le_big_sync_estabilished),
+ 		     HCI_MAX_EVENT_SIZE),
++	/* [0x1e = HCI_EVT_LE_BIG_SYNC_LOST] */
++	HCI_LE_EV_VL(HCI_EVT_LE_BIG_SYNC_LOST,
++		     hci_le_big_sync_lost_evt,
++		     sizeof(struct hci_evt_le_big_sync_lost),
++		     HCI_MAX_EVENT_SIZE),
+ 	/* [0x22 = HCI_EVT_LE_BIG_INFO_ADV_REPORT] */
+ 	HCI_LE_EV_VL(HCI_EVT_LE_BIG_INFO_ADV_REPORT,
+ 		     hci_le_big_info_adv_report_evt,
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index 022b86797acdc5..4ad5296d79345d 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -118,7 +118,7 @@ static void hci_sock_free_cookie(struct sock *sk)
+ 	int id = hci_pi(sk)->cookie;
+ 
+ 	if (id) {
+-		hci_pi(sk)->cookie = 0xffffffff;
++		hci_pi(sk)->cookie = 0;
+ 		ida_free(&sock_cookie_ida, id);
+ 	}
+ }
+diff --git a/net/core/ieee8021q_helpers.c b/net/core/ieee8021q_helpers.c
+index 759a9b9f3f898b..669b357b73b2d7 100644
+--- a/net/core/ieee8021q_helpers.c
++++ b/net/core/ieee8021q_helpers.c
+@@ -7,6 +7,11 @@
+ #include <net/dscp.h>
+ #include <net/ieee8021q.h>
+ 
++/* verify that the table covers all 8 traffic types */
++#define TT_MAP_SIZE_OK(tbl)                                 \
++	compiletime_assert(ARRAY_SIZE(tbl) == IEEE8021Q_TT_MAX, \
++			   #tbl " size mismatch")
++
+ /* The following arrays map Traffic Types (TT) to traffic classes (TC) for
+  * different number of queues as shown in the example provided by
+  * IEEE 802.1Q-2022 in Annex I "I.3 Traffic type to traffic class mapping" and
+@@ -101,51 +106,28 @@ int ieee8021q_tt_to_tc(enum ieee8021q_traffic_type tt, unsigned int num_queues)
+ 
+ 	switch (num_queues) {
+ 	case 8:
+-		compiletime_assert(ARRAY_SIZE(ieee8021q_8queue_tt_tc_map) !=
+-				   IEEE8021Q_TT_MAX - 1,
+-				   "ieee8021q_8queue_tt_tc_map != max - 1");
++		TT_MAP_SIZE_OK(ieee8021q_8queue_tt_tc_map);
+ 		return ieee8021q_8queue_tt_tc_map[tt];
+ 	case 7:
+-		compiletime_assert(ARRAY_SIZE(ieee8021q_7queue_tt_tc_map) !=
+-				   IEEE8021Q_TT_MAX - 1,
+-				   "ieee8021q_7queue_tt_tc_map != max - 1");
+-
++		TT_MAP_SIZE_OK(ieee8021q_7queue_tt_tc_map);
+ 		return ieee8021q_7queue_tt_tc_map[tt];
+ 	case 6:
+-		compiletime_assert(ARRAY_SIZE(ieee8021q_6queue_tt_tc_map) !=
+-				   IEEE8021Q_TT_MAX - 1,
+-				   "ieee8021q_6queue_tt_tc_map != max - 1");
+-
++		TT_MAP_SIZE_OK(ieee8021q_6queue_tt_tc_map);
+ 		return ieee8021q_6queue_tt_tc_map[tt];
+ 	case 5:
+-		compiletime_assert(ARRAY_SIZE(ieee8021q_5queue_tt_tc_map) !=
+-				   IEEE8021Q_TT_MAX - 1,
+-				   "ieee8021q_5queue_tt_tc_map != max - 1");
+-
++		TT_MAP_SIZE_OK(ieee8021q_5queue_tt_tc_map);
+ 		return ieee8021q_5queue_tt_tc_map[tt];
+ 	case 4:
+-		compiletime_assert(ARRAY_SIZE(ieee8021q_4queue_tt_tc_map) !=
+-				   IEEE8021Q_TT_MAX - 1,
+-				   "ieee8021q_4queue_tt_tc_map != max - 1");
+-
++		TT_MAP_SIZE_OK(ieee8021q_4queue_tt_tc_map);
+ 		return ieee8021q_4queue_tt_tc_map[tt];
+ 	case 3:
+-		compiletime_assert(ARRAY_SIZE(ieee8021q_3queue_tt_tc_map) !=
+-				   IEEE8021Q_TT_MAX - 1,
+-				   "ieee8021q_3queue_tt_tc_map != max - 1");
+-
++		TT_MAP_SIZE_OK(ieee8021q_3queue_tt_tc_map);
+ 		return ieee8021q_3queue_tt_tc_map[tt];
+ 	case 2:
+-		compiletime_assert(ARRAY_SIZE(ieee8021q_2queue_tt_tc_map) !=
+-				   IEEE8021Q_TT_MAX - 1,
+-				   "ieee8021q_2queue_tt_tc_map != max - 1");
+-
++		TT_MAP_SIZE_OK(ieee8021q_2queue_tt_tc_map);
+ 		return ieee8021q_2queue_tt_tc_map[tt];
+ 	case 1:
+-		compiletime_assert(ARRAY_SIZE(ieee8021q_1queue_tt_tc_map) !=
+-				   IEEE8021Q_TT_MAX - 1,
+-				   "ieee8021q_1queue_tt_tc_map != max - 1");
+-
++		TT_MAP_SIZE_OK(ieee8021q_1queue_tt_tc_map);
+ 		return ieee8021q_1queue_tt_tc_map[tt];
+ 	}
+ 
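
The helper macro above folds eight open-coded compiletime_assert() calls, whose condition had also drifted (they asserted the size was *not* IEEE8021Q_TT_MAX - 1 instead of checking it equals IEEE8021Q_TT_MAX), into one correct check. The same build-time guard can be sketched in plain C11 with _Static_assert; the names below are illustrative:

#include <stddef.h>

#define TT_MAX 8
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Fail the build unless the map covers every traffic type. */
#define TT_MAP_SIZE_OK(tbl) \
	_Static_assert(ARRAY_SIZE(tbl) == TT_MAX, #tbl " size mismatch")

static const int map8[TT_MAX] = { 0, 1, 2, 3, 4, 5, 6, 7 };
TT_MAP_SIZE_OK(map8);         /* compiles */
/* TT_MAP_SIZE_OK of a 7-entry table would abort the build here. */

int main(void) { return map8[0]; }
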
+diff --git a/net/core/page_pool.c b/net/core/page_pool.c
+index 3eabe78c93f4c2..ef870c21e854e2 100644
+--- a/net/core/page_pool.c
++++ b/net/core/page_pool.c
+@@ -1201,6 +1201,35 @@ void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
+ 	pool->xdp_mem_id = mem->id;
+ }
+ 
++/**
++ * page_pool_enable_direct_recycling() - mark page pool as owned by NAPI
++ * @pool: page pool to modify
++ * @napi: NAPI instance to associate the page pool with
++ *
++ * Associate a page pool with a NAPI instance for lockless page recycling.
++ * This is useful when a new page pool has to be added to a NAPI instance
++ * without disabling that NAPI instance, to mark the point at which control
++ * path "hands over" the page pool to the NAPI instance. In most cases a driver
++ * can simply set the @napi field in struct page_pool_params, and does not
++ * have to call this helper.
++ *
++ * The function is idempotent, but does not implement any refcounting.
++ * A single page_pool_disable_direct_recycling() call will disable recycling,
++ * no matter how many times enable was called.
++ */
++void page_pool_enable_direct_recycling(struct page_pool *pool,
++				       struct napi_struct *napi)
++{
++	if (READ_ONCE(pool->p.napi) == napi)
++		return;
++	WARN_ON(!napi || pool->p.napi);
++
++	mutex_lock(&page_pools_lock);
++	WRITE_ONCE(pool->p.napi, napi);
++	mutex_unlock(&page_pools_lock);
++}
++EXPORT_SYMBOL(page_pool_enable_direct_recycling);
++
+ void page_pool_disable_direct_recycling(struct page_pool *pool)
+ {
+ 	/* Disable direct recycling based on pool->cpuid.
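
A hedged sketch of the hand-over contract documented above: enabling twice with the same owner is a no-op, a conflicting owner is flagged, and one disable clears any number of enables. The pool/owner types below are userspace stand-ins, not the page_pool or NAPI structures:

#include <pthread.h>
#include <stdio.h>

struct pool { void *owner; };

static pthread_mutex_t pools_lock = PTHREAD_MUTEX_INITIALIZER;

static void enable_direct(struct pool *p, void *owner)
{
	if (p->owner == owner)          /* idempotent fast path */
		return;
	if (!owner || p->owner)         /* conflicting hand-over */
		fprintf(stderr, "warn: owner conflict\n");

	pthread_mutex_lock(&pools_lock);
	p->owner = owner;
	pthread_mutex_unlock(&pools_lock);
}

static void disable_direct(struct pool *p)
{
	pthread_mutex_lock(&pools_lock);
	p->owner = NULL;        /* one call undoes any number of enables */
	pthread_mutex_unlock(&pools_lock);
}

int main(void)
{
	struct pool p = { 0 };
	int napi;                /* stands in for a NAPI instance */

	enable_direct(&p, &napi);
	enable_direct(&p, &napi);        /* no-op */
	disable_direct(&p);
	return 0;
}
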
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index e686f088bc67fb..50b3d270f60664 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -2578,7 +2578,6 @@ static struct rtable *__mkroute_output(const struct fib_result *res,
+ 	do_cache = true;
+ 	if (type == RTN_BROADCAST) {
+ 		flags |= RTCF_BROADCAST | RTCF_LOCAL;
+-		fi = NULL;
+ 	} else if (type == RTN_MULTICAST) {
+ 		flags |= RTCF_MULTICAST | RTCF_LOCAL;
+ 		if (!ip_check_mc_rcu(in_dev, fl4->daddr, fl4->saddr,
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index a1aca630867779..4245522d420142 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -61,7 +61,7 @@ static struct sk_buff *__skb_udp_tunnel_segment(struct sk_buff *skb,
+ 	remcsum = !!(skb_shinfo(skb)->gso_type & SKB_GSO_TUNNEL_REMCSUM);
+ 	skb->remcsum_offload = remcsum;
+ 
+-	need_ipsec = skb_dst(skb) && dst_xfrm(skb_dst(skb));
++	need_ipsec = (skb_dst(skb) && dst_xfrm(skb_dst(skb))) || skb_sec_path(skb);
+ 	/* Try to offload checksum if possible */
+ 	offload_csum = !!(need_csum &&
+ 			  !need_ipsec &&
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 4ef3a3c166c6e9..7881637f651876 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -2229,13 +2229,12 @@ void addrconf_dad_failure(struct sk_buff *skb, struct inet6_ifaddr *ifp)
+ 	in6_ifa_put(ifp);
+ }
+ 
+-/* Join to solicited addr multicast group.
+- * caller must hold RTNL */
++/* Join to solicited addr multicast group. */
+ void addrconf_join_solict(struct net_device *dev, const struct in6_addr *addr)
+ {
+ 	struct in6_addr maddr;
+ 
+-	if (dev->flags&(IFF_LOOPBACK|IFF_NOARP))
++	if (READ_ONCE(dev->flags) & (IFF_LOOPBACK | IFF_NOARP))
+ 		return;
+ 
+ 	addrconf_addr_solict_mult(addr, &maddr);
+@@ -3860,7 +3859,7 @@ static int addrconf_ifdown(struct net_device *dev, bool unregister)
+ 	 *	   Do not dev_put!
+ 	 */
+ 	if (unregister) {
+-		idev->dead = 1;
++		WRITE_ONCE(idev->dead, 1);
+ 
+ 		/* protected by rtnl_lock */
+ 		RCU_INIT_POINTER(dev->ip6_ptr, NULL);
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index 616bf4c0c8fd91..b91538e90c54a7 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -945,23 +945,22 @@ static void inet6_ifmcaddr_notify(struct net_device *dev,
+ static int __ipv6_dev_mc_inc(struct net_device *dev,
+ 			     const struct in6_addr *addr, unsigned int mode)
+ {
+-	struct ifmcaddr6 *mc;
+ 	struct inet6_dev *idev;
+-
+-	ASSERT_RTNL();
++	struct ifmcaddr6 *mc;
+ 
+ 	/* we need to take a reference on idev */
+ 	idev = in6_dev_get(dev);
+-
+ 	if (!idev)
+ 		return -EINVAL;
+ 
+-	if (idev->dead) {
++	mutex_lock(&idev->mc_lock);
++
++	if (READ_ONCE(idev->dead)) {
++		mutex_unlock(&idev->mc_lock);
+ 		in6_dev_put(idev);
+ 		return -ENODEV;
+ 	}
+ 
+-	mutex_lock(&idev->mc_lock);
+ 	for_each_mc_mclock(idev, mc) {
+ 		if (ipv6_addr_equal(&mc->mca_addr, addr)) {
+ 			mc->mca_users++;
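
The mcast.c reordering above is the classic check-under-lock fix: testing idev->dead before taking mc_lock leaves a window in which teardown can mark the device dead between the check and the list walk. A minimal pthread sketch of the same pattern (field and function names are illustrative):

#include <pthread.h>
#include <errno.h>

struct dev {
	pthread_mutex_t lock;
	int dead;
	int users;
};

static int dev_use(struct dev *d)
{
	pthread_mutex_lock(&d->lock);
	if (d->dead) {                  /* checked only under the lock */
		pthread_mutex_unlock(&d->lock);
		return -ENODEV;
	}
	d->users++;                     /* safe: teardown is excluded */
	pthread_mutex_unlock(&d->lock);
	return 0;
}

static void dev_teardown(struct dev *d)
{
	pthread_mutex_lock(&d->lock);
	d->dead = 1;
	pthread_mutex_unlock(&d->lock);
}

int main(void)
{
	struct dev d = { PTHREAD_MUTEX_INITIALIZER, 0, 0 };

	dev_use(&d);
	dev_teardown(&d);
	return dev_use(&d) == -ENODEV ? 0 : 1;
}
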
+diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
+index 24aec295a51cf7..8c0577cd764f76 100644
+--- a/net/kcm/kcmsock.c
++++ b/net/kcm/kcmsock.c
+@@ -429,7 +429,7 @@ static void psock_write_space(struct sock *sk)
+ 
+ 	/* Check if the socket is reserved so someone is waiting for sending. */
+ 	kcm = psock->tx_kcm;
+-	if (kcm && !unlikely(kcm->tx_stopped))
++	if (kcm)
+ 		queue_work(kcm_wq, &kcm->tx_work);
+ 
+ 	spin_unlock_bh(&mux->lock);
+@@ -1688,12 +1688,6 @@ static int kcm_release(struct socket *sock)
+ 	 */
+ 	__skb_queue_purge(&sk->sk_write_queue);
+ 
+-	/* Set tx_stopped. This is checked when psock is bound to a kcm and we
+-	 * get a writespace callback. This prevents further work being queued
+-	 * from the callback (unbinding the psock occurs after canceling work.
+-	 */
+-	kcm->tx_stopped = 1;
+-
+ 	release_sock(sk);
+ 
+ 	spin_lock_bh(&mux->lock);
+@@ -1709,7 +1703,7 @@ static int kcm_release(struct socket *sock)
+ 	/* Cancel work. After this point there should be no outside references
+ 	 * to the kcm socket.
+ 	 */
+-	cancel_work_sync(&kcm->tx_work);
++	disable_work_sync(&kcm->tx_work);
+ 
+ 	lock_sock(sk);
+ 	psock = kcm->tx_psock;
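
The kcm change above swaps cancel_work_sync() plus a hand-rolled tx_stopped flag for disable_work_sync(), which both cancels pending work and rejects future queueing. A toy userspace model of that distinction (this mirrors the semantics loosely; it is not the workqueue API):

#include <stdio.h>

struct work {
	int queued;
	int disabled;
};

static int queue_work(struct work *w)
{
	if (w->disabled || w->queued)
		return 0;               /* rejected or already queued */
	w->queued = 1;
	return 1;
}

static void cancel_work_sync(struct work *w)
{
	w->queued = 0;                  /* can still be requeued later */
}

static void disable_work_sync(struct work *w)
{
	w->disabled = 1;                /* blocks queue_work() from now on */
	w->queued = 0;
}

int main(void)
{
	struct work w = { 0, 0 };

	cancel_work_sync(&w);
	printf("after cancel, requeue %s\n",
	       queue_work(&w) ? "succeeds" : "fails");

	w.queued = 0;
	disable_work_sync(&w);
	printf("after disable, requeue %s\n",
	       queue_work(&w) ? "succeeds" : "fails");
	return 0;
}
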
+diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c
+index 3aaf5abf1acc13..e0fdeaafc4893e 100644
+--- a/net/mac80211/chan.c
++++ b/net/mac80211/chan.c
+@@ -1381,6 +1381,7 @@ ieee80211_link_use_reserved_reassign(struct ieee80211_link_data *link)
+ 		goto out;
+ 	}
+ 
++	link->radar_required = link->reserved_radar_required;
+ 	list_move(&link->assigned_chanctx_list, &new_ctx->assigned_links);
+ 	rcu_assign_pointer(link_conf->chanctx_conf, &new_ctx->conf);
+ 
+diff --git a/net/mac80211/ht.c b/net/mac80211/ht.c
+index 32390d8a9d753e..1c82a28b03de4b 100644
+--- a/net/mac80211/ht.c
++++ b/net/mac80211/ht.c
+@@ -9,7 +9,7 @@
+  * Copyright 2007, Michael Wu <flamingice@sourmilk.net>
+  * Copyright 2007-2010, Intel Corporation
+  * Copyright 2017	Intel Deutschland GmbH
+- * Copyright(c) 2020-2024 Intel Corporation
++ * Copyright(c) 2020-2025 Intel Corporation
+  */
+ 
+ #include <linux/ieee80211.h>
+@@ -603,3 +603,41 @@ void ieee80211_request_smps(struct ieee80211_vif *vif, unsigned int link_id,
+ }
+ /* this might change ... don't want non-open drivers using it */
+ EXPORT_SYMBOL_GPL(ieee80211_request_smps);
++
++void ieee80211_ht_handle_chanwidth_notif(struct ieee80211_local *local,
++					 struct ieee80211_sub_if_data *sdata,
++					 struct sta_info *sta,
++					 struct link_sta_info *link_sta,
++					 u8 chanwidth, enum nl80211_band band)
++{
++	enum ieee80211_sta_rx_bandwidth max_bw, new_bw;
++	struct ieee80211_supported_band *sband;
++	struct sta_opmode_info sta_opmode = {};
++
++	lockdep_assert_wiphy(local->hw.wiphy);
++
++	if (chanwidth == IEEE80211_HT_CHANWIDTH_20MHZ)
++		max_bw = IEEE80211_STA_RX_BW_20;
++	else
++		max_bw = ieee80211_sta_cap_rx_bw(link_sta);
++
++	/* set cur_max_bandwidth and recalc sta bw */
++	link_sta->cur_max_bandwidth = max_bw;
++	new_bw = ieee80211_sta_cur_vht_bw(link_sta);
++
++	if (link_sta->pub->bandwidth == new_bw)
++		return;
++
++	link_sta->pub->bandwidth = new_bw;
++	sband = local->hw.wiphy->bands[band];
++	sta_opmode.bw =
++		ieee80211_sta_rx_bw_to_chan_width(link_sta);
++	sta_opmode.changed = STA_OPMODE_MAX_BW_CHANGED;
++
++	rate_control_rate_update(local, sband, link_sta,
++				 IEEE80211_RC_BW_CHANGED);
++	cfg80211_sta_opmode_change_notify(sdata->dev,
++					  sta->addr,
++					  &sta_opmode,
++					  GFP_KERNEL);
++}
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index e0b44dbebe001b..b504e09cc457cb 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -2205,6 +2205,12 @@ u8 ieee80211_mcs_to_chains(const struct ieee80211_mcs_info *mcs);
+ enum nl80211_smps_mode
+ ieee80211_smps_mode_to_smps_mode(enum ieee80211_smps_mode smps);
+ 
++void ieee80211_ht_handle_chanwidth_notif(struct ieee80211_local *local,
++					 struct ieee80211_sub_if_data *sdata,
++					 struct sta_info *sta,
++					 struct link_sta_info *link_sta,
++					 u8 chanwidth, enum nl80211_band band);
++
+ /* VHT */
+ void
+ ieee80211_vht_cap_ie_to_sta_vht_cap(struct ieee80211_sub_if_data *sdata,
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 7d93e5aa595b28..e062d2d7b3be29 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -1521,6 +1521,35 @@ static void ieee80211_iface_process_skb(struct ieee80211_local *local,
+ 				break;
+ 			}
+ 		}
++	} else if (ieee80211_is_action(mgmt->frame_control) &&
++		   mgmt->u.action.category == WLAN_CATEGORY_HT) {
++		switch (mgmt->u.action.u.ht_smps.action) {
++		case WLAN_HT_ACTION_NOTIFY_CHANWIDTH: {
++			u8 chanwidth = mgmt->u.action.u.ht_notify_cw.chanwidth;
++			struct ieee80211_rx_status *status;
++			struct link_sta_info *link_sta;
++			struct sta_info *sta;
++
++			sta = sta_info_get_bss(sdata, mgmt->sa);
++			if (!sta)
++				break;
++
++			status = IEEE80211_SKB_RXCB(skb);
++			if (!status->link_valid)
++				link_sta = &sta->deflink;
++			else
++				link_sta = rcu_dereference_protected(sta->link[status->link_id],
++							lockdep_is_held(&local->hw.wiphy->mtx));
++			if (link_sta)
++				ieee80211_ht_handle_chanwidth_notif(local, sdata, sta,
++								    link_sta, chanwidth,
++								    status->band);
++			break;
++		}
++		default:
++			WARN_ON(1);
++			break;
++		}
+ 	} else if (ieee80211_is_action(mgmt->frame_control) &&
+ 		   mgmt->u.action.category == WLAN_CATEGORY_VHT) {
+ 		switch (mgmt->u.action.u.vht_group_notif.action_code) {
+diff --git a/net/mac80211/link.c b/net/mac80211/link.c
+index 4f7b7d0f64f24b..d71eabe5abf8d8 100644
+--- a/net/mac80211/link.c
++++ b/net/mac80211/link.c
+@@ -2,7 +2,7 @@
+ /*
+  * MLO link handling
+  *
+- * Copyright (C) 2022-2024 Intel Corporation
++ * Copyright (C) 2022-2025 Intel Corporation
+  */
+ #include <linux/slab.h>
+ #include <linux/kernel.h>
+@@ -368,6 +368,13 @@ static int ieee80211_vif_update_links(struct ieee80211_sub_if_data *sdata,
+ 			ieee80211_update_apvlan_links(sdata);
+ 	}
+ 
++	/*
++	 * Ignore errors if we are only removing links as removal should
++	 * always succeed
++	 */
++	if (!new_links)
++		ret = 0;
++
+ 	if (ret) {
+ 		/* restore config */
+ 		memcpy(sdata->link, old_data, sizeof(old_data));
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index dc8df3129c007e..160821e42524de 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -1218,18 +1218,36 @@ EXPORT_SYMBOL_IF_MAC80211_KUNIT(ieee80211_determine_chan_mode);
+ 
+ static int ieee80211_config_bw(struct ieee80211_link_data *link,
+ 			       struct ieee802_11_elems *elems,
+-			       bool update, u64 *changed,
+-			       const char *frame)
++			       bool update, u64 *changed, u16 stype)
+ {
+ 	struct ieee80211_channel *channel = link->conf->chanreq.oper.chan;
+ 	struct ieee80211_sub_if_data *sdata = link->sdata;
+ 	struct ieee80211_chan_req chanreq = {};
+ 	struct cfg80211_chan_def ap_chandef;
+ 	enum ieee80211_conn_mode ap_mode;
++	const char *frame;
+ 	u32 vht_cap_info = 0;
+ 	u16 ht_opmode;
+ 	int ret;
+ 
++	switch (stype) {
++	case IEEE80211_STYPE_BEACON:
++		frame = "beacon";
++		break;
++	case IEEE80211_STYPE_ASSOC_RESP:
++		frame = "assoc response";
++		break;
++	case IEEE80211_STYPE_REASSOC_RESP:
++		frame = "reassoc response";
++		break;
++	case IEEE80211_STYPE_ACTION:
++		/* the only action frame that gets here */
++		frame = "ML reconf response";
++		break;
++	default:
++		return -EINVAL;
++	}
++
+ 	/* don't track any bandwidth changes in legacy/S1G modes */
+ 	if (link->u.mgd.conn.mode == IEEE80211_CONN_MODE_LEGACY ||
+ 	    link->u.mgd.conn.mode == IEEE80211_CONN_MODE_S1G)
+@@ -1278,7 +1296,9 @@ static int ieee80211_config_bw(struct ieee80211_link_data *link,
+ 			ieee80211_min_bw_limit_from_chandef(&chanreq.oper))
+ 		ieee80211_chandef_downgrade(&chanreq.oper, NULL);
+ 
+-	if (ap_chandef.chan->band == NL80211_BAND_6GHZ &&
++	/* TPE element is not present in (re)assoc/ML reconfig response */
++	if (stype == IEEE80211_STYPE_BEACON &&
++	    ap_chandef.chan->band == NL80211_BAND_6GHZ &&
+ 	    link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_HE) {
+ 		ieee80211_rearrange_tpe(&elems->tpe, &ap_chandef,
+ 					&chanreq.oper);
+@@ -2515,7 +2535,8 @@ ieee80211_sta_abort_chanswitch(struct ieee80211_link_data *link)
+ 	if (!local->ops->abort_channel_switch)
+ 		return;
+ 
+-	ieee80211_link_unreserve_chanctx(link);
++	if (rcu_access_pointer(link->conf->chanctx_conf))
++		ieee80211_link_unreserve_chanctx(link);
+ 
+ 	ieee80211_vif_unblock_queues_csa(sdata);
+ 
+@@ -4733,6 +4754,7 @@ static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
+ 	struct ieee80211_prep_tx_info info = {
+ 		.subtype = IEEE80211_STYPE_AUTH,
+ 	};
++	bool sae_need_confirm = false;
+ 
+ 	lockdep_assert_wiphy(sdata->local->hw.wiphy);
+ 
+@@ -4778,6 +4800,8 @@ static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
+ 				jiffies + IEEE80211_AUTH_WAIT_SAE_RETRY;
+ 			ifmgd->auth_data->timeout_started = true;
+ 			run_again(sdata, ifmgd->auth_data->timeout);
++			if (auth_transaction == 1)
++				sae_need_confirm = true;
+ 			goto notify_driver;
+ 		}
+ 
+@@ -4820,6 +4844,9 @@ static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
+ 	     ifmgd->auth_data->expected_transaction == 2)) {
+ 		if (!ieee80211_mark_sta_auth(sdata))
+ 			return; /* ignore frame -- wait for timeout */
++	} else if (ifmgd->auth_data->algorithm == WLAN_AUTH_SAE &&
++		   auth_transaction == 1) {
++		sae_need_confirm = true;
+ 	} else if (ifmgd->auth_data->algorithm == WLAN_AUTH_SAE &&
+ 		   auth_transaction == 2) {
+ 		sdata_info(sdata, "SAE peer confirmed\n");
+@@ -4828,7 +4855,8 @@ static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
+ 
+ 	cfg80211_rx_mlme_mgmt(sdata->dev, (u8 *)mgmt, len);
+ notify_driver:
+-	drv_mgd_complete_tx(sdata->local, sdata, &info);
++	if (!sae_need_confirm)
++		drv_mgd_complete_tx(sdata->local, sdata, &info);
+ }
+ 
+ #define case_WLAN(type) \
+@@ -5290,7 +5318,9 @@ static bool ieee80211_assoc_config_link(struct ieee80211_link_data *link,
+ 	/* check/update if AP changed anything in assoc response vs. scan */
+ 	if (ieee80211_config_bw(link, elems,
+ 				link_id == assoc_data->assoc_link_id,
+-				changed, "assoc response")) {
++				changed,
++				le16_to_cpu(mgmt->frame_control) &
++					IEEE80211_FCTL_STYPE)) {
+ 		ret = false;
+ 		goto out;
+ 	}
+@@ -7478,7 +7508,8 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_link_data *link,
+ 
+ 	changed |= ieee80211_recalc_twt_req(sdata, sband, link, link_sta, elems);
+ 
+-	if (ieee80211_config_bw(link, elems, true, &changed, "beacon")) {
++	if (ieee80211_config_bw(link, elems, true, &changed,
++				IEEE80211_STYPE_BEACON)) {
+ 		ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DEAUTH,
+ 				       WLAN_REASON_DEAUTH_LEAVING,
+ 				       true, deauth_buf);
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index e73431549ce77e..7b801dd3f569a2 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -3576,41 +3576,18 @@ ieee80211_rx_h_action(struct ieee80211_rx_data *rx)
+ 			goto handled;
+ 		}
+ 		case WLAN_HT_ACTION_NOTIFY_CHANWIDTH: {
+-			struct ieee80211_supported_band *sband;
+ 			u8 chanwidth = mgmt->u.action.u.ht_notify_cw.chanwidth;
+-			enum ieee80211_sta_rx_bandwidth max_bw, new_bw;
+-			struct sta_opmode_info sta_opmode = {};
++
++			if (chanwidth != IEEE80211_HT_CHANWIDTH_20MHZ &&
++			    chanwidth != IEEE80211_HT_CHANWIDTH_ANY)
++				goto invalid;
+ 
+ 			/* If it doesn't support 40 MHz it can't change ... */
+ 			if (!(rx->link_sta->pub->ht_cap.cap &
+-					IEEE80211_HT_CAP_SUP_WIDTH_20_40))
+-				goto handled;
+-
+-			if (chanwidth == IEEE80211_HT_CHANWIDTH_20MHZ)
+-				max_bw = IEEE80211_STA_RX_BW_20;
+-			else
+-				max_bw = ieee80211_sta_cap_rx_bw(rx->link_sta);
+-
+-			/* set cur_max_bandwidth and recalc sta bw */
+-			rx->link_sta->cur_max_bandwidth = max_bw;
+-			new_bw = ieee80211_sta_cur_vht_bw(rx->link_sta);
+-
+-			if (rx->link_sta->pub->bandwidth == new_bw)
++				IEEE80211_HT_CAP_SUP_WIDTH_20_40))
+ 				goto handled;
+ 
+-			rx->link_sta->pub->bandwidth = new_bw;
+-			sband = rx->local->hw.wiphy->bands[status->band];
+-			sta_opmode.bw =
+-				ieee80211_sta_rx_bw_to_chan_width(rx->link_sta);
+-			sta_opmode.changed = STA_OPMODE_MAX_BW_CHANGED;
+-
+-			rate_control_rate_update(local, sband, rx->link_sta,
+-						 IEEE80211_RC_BW_CHANGED);
+-			cfg80211_sta_opmode_change_notify(sdata->dev,
+-							  rx->sta->addr,
+-							  &sta_opmode,
+-							  GFP_ATOMIC);
+-			goto handled;
++			goto queue;
+ 		}
+ 		default:
+ 			goto invalid;
+@@ -4234,10 +4211,16 @@ static bool ieee80211_rx_data_set_sta(struct ieee80211_rx_data *rx,
+ 		rx->link_sta = NULL;
+ 	}
+ 
+-	if (link_id < 0)
+-		rx->link = &rx->sdata->deflink;
+-	else if (!ieee80211_rx_data_set_link(rx, link_id))
++	if (link_id < 0) {
++		if (ieee80211_vif_is_mld(&rx->sdata->vif) &&
++		    sta && !sta->sta.valid_links)
++			rx->link =
++				rcu_dereference(rx->sdata->link[sta->deflink.link_id]);
++		else
++			rx->link = &rx->sdata->deflink;
++	} else if (!ieee80211_rx_data_set_link(rx, link_id)) {
+ 		return false;
++	}
+ 
+ 	return true;
+ }
+diff --git a/net/mctp/af_mctp.c b/net/mctp/af_mctp.c
+index 9b12ca97f41282..9d5db3feedec57 100644
+--- a/net/mctp/af_mctp.c
++++ b/net/mctp/af_mctp.c
+@@ -73,7 +73,6 @@ static int mctp_bind(struct socket *sock, struct sockaddr *addr, int addrlen)
+ 
+ 	lock_sock(sk);
+ 
+-	/* TODO: allow rebind */
+ 	if (sk_hashed(sk)) {
+ 		rc = -EADDRINUSE;
+ 		goto out_release;
+@@ -629,15 +628,36 @@ static void mctp_sk_close(struct sock *sk, long timeout)
+ static int mctp_sk_hash(struct sock *sk)
+ {
+ 	struct net *net = sock_net(sk);
++	struct sock *existing;
++	struct mctp_sock *msk;
++	int rc;
++
++	msk = container_of(sk, struct mctp_sock, sk);
+ 
+ 	/* Bind lookup runs under RCU, remain live during that. */
+ 	sock_set_flag(sk, SOCK_RCU_FREE);
+ 
+ 	mutex_lock(&net->mctp.bind_lock);
++
++	/* Prevent duplicate binds. */
++	sk_for_each(existing, &net->mctp.binds) {
++		struct mctp_sock *mex =
++			container_of(existing, struct mctp_sock, sk);
++
++		if (mex->bind_type == msk->bind_type &&
++		    mex->bind_addr == msk->bind_addr &&
++		    mex->bind_net == msk->bind_net) {
++			rc = -EADDRINUSE;
++			goto out;
++		}
++	}
++
+ 	sk_add_node_rcu(sk, &net->mctp.binds);
+-	mutex_unlock(&net->mctp.bind_lock);
++	rc = 0;
+ 
+-	return 0;
++out:
++	mutex_unlock(&net->mctp.bind_lock);
++	return rc;
+ }
+ 
+ static void mctp_sk_unhash(struct sock *sk)
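
The mctp_sk_hash() hunk above rejects a second socket binding the same (type, addr, net) tuple while holding the bind lock. A self-contained sketch of the same walk-then-insert check, using a plain linked list in place of the kernel's hash list (names are illustrative):

#include <errno.h>
#include <stddef.h>

struct bind_entry {
	int type, addr, net;
	struct bind_entry *next;
};

static struct bind_entry *binds;   /* protected by a bind lock (elided) */

static int bind_hash(struct bind_entry *e)
{
	struct bind_entry *it;

	for (it = binds; it; it = it->next)
		if (it->type == e->type && it->addr == e->addr &&
		    it->net == e->net)
			return -EADDRINUSE;

	e->next = binds;               /* no duplicate: insert at head */
	binds = e;
	return 0;
}

int main(void)
{
	struct bind_entry a = { 1, 8, 0, NULL };
	struct bind_entry b = { 1, 8, 0, NULL };

	return (bind_hash(&a) == 0 && bind_hash(&b) == -EADDRINUSE) ? 0 : 1;
}
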
+diff --git a/net/ncsi/internal.h b/net/ncsi/internal.h
+index 2c260f33b55cc5..ad1f671ffc37fa 100644
+--- a/net/ncsi/internal.h
++++ b/net/ncsi/internal.h
+@@ -110,7 +110,7 @@ struct ncsi_channel_version {
+ 	u8   update;		/* NCSI version update */
+ 	char alpha1;		/* NCSI version alpha1 */
+ 	char alpha2;		/* NCSI version alpha2 */
+-	u8  fw_name[12];	/* Firmware name string                */
++	u8  fw_name[12 + 1];	/* Firmware name string                */
+ 	u32 fw_version;		/* Firmware version                   */
+ 	u16 pci_ids[4];		/* PCI identification                 */
+ 	u32 mf_id;		/* Manufacture ID                     */
+diff --git a/net/ncsi/ncsi-rsp.c b/net/ncsi/ncsi-rsp.c
+index 8668888c5a2f99..d5ed80731e8928 100644
+--- a/net/ncsi/ncsi-rsp.c
++++ b/net/ncsi/ncsi-rsp.c
+@@ -775,6 +775,7 @@ static int ncsi_rsp_handler_gvi(struct ncsi_request *nr)
+ 	ncv->alpha1 = rsp->alpha1;
+ 	ncv->alpha2 = rsp->alpha2;
+ 	memcpy(ncv->fw_name, rsp->fw_name, 12);
++	ncv->fw_name[12] = '\0';
+ 	ncv->fw_version = ntohl(rsp->fw_version);
+ 	for (i = 0; i < ARRAY_SIZE(ncv->pci_ids); i++)
+ 		ncv->pci_ids[i] = ntohs(rsp->pci_ids[i]);
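
The two ncsi hunks above cooperate: the struct gains one spare byte and the response handler pins a terminator after copying the fixed 12-byte, not necessarily NUL-terminated, wire field. The same pattern in standalone C:

#include <stdio.h>
#include <string.h>

#define FW_NAME_LEN 12

int main(void)
{
	/* 12 bytes on the wire, no NUL */
	const char wire[FW_NAME_LEN] =
		{ 'f','w','-','1','2','c','h','a','r','s','!','!' };
	char fw_name[FW_NAME_LEN + 1];           /* one spare byte */

	memcpy(fw_name, wire, FW_NAME_LEN);
	fw_name[FW_NAME_LEN] = '\0';             /* always terminated */
	printf("%s\n", fw_name);                 /* cannot overrun */
	return 0;
}
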
+diff --git a/net/netfilter/ipvs/ip_vs_est.c b/net/netfilter/ipvs/ip_vs_est.c
+index f821ad2e19b35e..15049b82673272 100644
+--- a/net/netfilter/ipvs/ip_vs_est.c
++++ b/net/netfilter/ipvs/ip_vs_est.c
+@@ -265,7 +265,8 @@ int ip_vs_est_kthread_start(struct netns_ipvs *ipvs,
+ 	}
+ 
+ 	set_user_nice(kd->task, sysctl_est_nice(ipvs));
+-	set_cpus_allowed_ptr(kd->task, sysctl_est_cpulist(ipvs));
++	if (sysctl_est_preferred_cpulist(ipvs))
++		kthread_affine_preferred(kd->task, sysctl_est_preferred_cpulist(ipvs));
+ 
+ 	pr_info("starting estimator thread %d...\n", kd->id);
+ 	wake_up_process(kd->task);
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 2cc0fde233447e..5fdcae45e0bc49 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -884,8 +884,6 @@ ctnetlink_conntrack_event(unsigned int events, const struct nf_ct_event *item)
+ 
+ static int ctnetlink_done(struct netlink_callback *cb)
+ {
+-	if (cb->args[1])
+-		nf_ct_put((struct nf_conn *)cb->args[1]);
+ 	kfree(cb->data);
+ 	return 0;
+ }
+@@ -1208,19 +1206,26 @@ static int ctnetlink_filter_match(struct nf_conn *ct, void *data)
+ 	return 0;
+ }
+ 
++static unsigned long ctnetlink_get_id(const struct nf_conn *ct)
++{
++	unsigned long id = nf_ct_get_id(ct);
++
++	return id ? id : 1;
++}
++
+ static int
+ ctnetlink_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ {
+ 	unsigned int flags = cb->data ? NLM_F_DUMP_FILTERED : 0;
+ 	struct net *net = sock_net(skb->sk);
+-	struct nf_conn *ct, *last;
++	unsigned long last_id = cb->args[1];
+ 	struct nf_conntrack_tuple_hash *h;
+ 	struct hlist_nulls_node *n;
+ 	struct nf_conn *nf_ct_evict[8];
++	struct nf_conn *ct;
+ 	int res, i;
+ 	spinlock_t *lockp;
+ 
+-	last = (struct nf_conn *)cb->args[1];
+ 	i = 0;
+ 
+ 	local_bh_disable();
+@@ -1257,7 +1262,7 @@ ctnetlink_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ 				continue;
+ 
+ 			if (cb->args[1]) {
+-				if (ct != last)
++				if (ctnetlink_get_id(ct) != last_id)
+ 					continue;
+ 				cb->args[1] = 0;
+ 			}
+@@ -1270,8 +1275,7 @@ ctnetlink_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ 					    NFNL_MSG_TYPE(cb->nlh->nlmsg_type),
+ 					    ct, true, flags);
+ 			if (res < 0) {
+-				nf_conntrack_get(&ct->ct_general);
+-				cb->args[1] = (unsigned long)ct;
++				cb->args[1] = ctnetlink_get_id(ct);
+ 				spin_unlock(lockp);
+ 				goto out;
+ 			}
+@@ -1284,12 +1288,10 @@ ctnetlink_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ 	}
+ out:
+ 	local_bh_enable();
+-	if (last) {
++	if (last_id) {
+ 		/* nf ct hash resize happened, now clear the leftover. */
+-		if ((struct nf_conn *)cb->args[1] == last)
++		if (cb->args[1] == last_id)
+ 			cb->args[1] = 0;
+-
+-		nf_ct_put(last);
+ 	}
+ 
+ 	while (i) {
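
The ctnetlink rework above resumes an interrupted dump from a stable non-zero conntrack id instead of keeping a refcounted pointer in cb->args[1], with 0 reserved to mean "no cursor" (hence the id-0-to-1 remap). A userspace sketch of that cursor scheme over a plain array:

#include <stdio.h>

struct rec { unsigned long id; };

static unsigned long get_id(const struct rec *r)
{
	return r->id ? r->id : 1;       /* 0 is the "no cursor" value */
}

/* Emit up to batch records, starting at the cursor; return the new
 * cursor (0 when the table is exhausted). */
static unsigned long dump(struct rec *tbl, int n, unsigned long cursor,
			  int batch)
{
	int i, emitted = 0;

	for (i = 0; i < n; i++) {
		if (cursor) {
			if (get_id(&tbl[i]) != cursor)
				continue;       /* skip until cursor found */
			cursor = 0;
		}
		if (emitted == batch)
			return get_id(&tbl[i]); /* resume here next call */
		printf("rec %lu\n", tbl[i].id);
		emitted++;
	}
	return 0;
}

int main(void)
{
	struct rec tbl[] = { {10}, {20}, {30}, {40} };
	unsigned long c = 0;

	do {
		c = dump(tbl, 4, c, 2);
	} while (c);
	return 0;
}
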
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index c5855069bdaba0..9e4e25f2458f99 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -1219,7 +1219,7 @@ static void pipapo_free_scratch(const struct nft_pipapo_match *m, unsigned int c
+ 
+ 	mem = s;
+ 	mem -= s->align_off;
+-	kfree(mem);
++	kvfree(mem);
+ }
+ 
+ /**
+@@ -1240,10 +1240,9 @@ static int pipapo_realloc_scratch(struct nft_pipapo_match *clone,
+ 		void *scratch_aligned;
+ 		u32 align_off;
+ #endif
+-		scratch = kzalloc_node(struct_size(scratch, map,
+-						   bsize_max * 2) +
+-				       NFT_PIPAPO_ALIGN_HEADROOM,
+-				       GFP_KERNEL_ACCOUNT, cpu_to_node(i));
++		scratch = kvzalloc_node(struct_size(scratch, map, bsize_max * 2) +
++					NFT_PIPAPO_ALIGN_HEADROOM,
++					GFP_KERNEL_ACCOUNT, cpu_to_node(i));
+ 		if (!scratch) {
+ 			/* On failure, there's no need to undo previous
+ 			 * allocations: this means that some scratch maps have
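
The pipapo change above moves the scratch maps to kvzalloc_node(), so the matching free must become kvfree(): a kvmalloc-family allocation may live in vmalloc space that plain kfree() cannot handle. A toy illustration of why the free side has to know which backend satisfied the allocation (the explicit tag below is illustrative; kvfree() actually distinguishes the backends by address):

#include <stdlib.h>

enum backend { SMALL, LARGE };

struct hdr { enum backend be; };

static void *xvalloc(size_t sz)
{
	/* pretend large requests come from a different backend */
	struct hdr *h = malloc(sizeof(*h) + sz);

	if (!h)
		return NULL;
	h->be = sz > 4096 ? LARGE : SMALL;
	return h + 1;
}

static void xvfree(void *p)
{
	if (!p)
		return;
	/* a free that only knew the SMALL kind would be wrong for
	 * LARGE; the dispatching free handles both, like kvfree() */
	free((struct hdr *)p - 1);
}

int main(void)
{
	void *p = xvalloc(64 * 1024);

	xvfree(p);
	return 0;
}
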
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 6332a0e0659675..0fc3f045fb65a3 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1218,7 +1218,7 @@ int netlink_attachskb(struct sock *sk, struct sk_buff *skb,
+ 	nlk = nlk_sk(sk);
+ 	rmem = atomic_add_return(skb->truesize, &sk->sk_rmem_alloc);
+ 
+-	if ((rmem == skb->truesize || rmem < READ_ONCE(sk->sk_rcvbuf)) &&
++	if ((rmem == skb->truesize || rmem <= READ_ONCE(sk->sk_rcvbuf)) &&
+ 	    !test_bit(NETLINK_S_CONGESTED, &nlk->state)) {
+ 		netlink_skb_set_owner_r(skb, sk);
+ 		return 0;
+diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
+index 037f764822b965..82635dd2cfa59f 100644
+--- a/net/sched/sch_ets.c
++++ b/net/sched/sch_ets.c
+@@ -651,6 +651,12 @@ static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt,
+ 
+ 	sch_tree_lock(sch);
+ 
++	for (i = nbands; i < oldbands; i++) {
++		if (i >= q->nstrict && q->classes[i].qdisc->q.qlen)
++			list_del_init(&q->classes[i].alist);
++		qdisc_purge_queue(q->classes[i].qdisc);
++	}
++
+ 	WRITE_ONCE(q->nbands, nbands);
+ 	for (i = nstrict; i < q->nstrict; i++) {
+ 		if (q->classes[i].qdisc->q.qlen) {
+@@ -658,11 +664,6 @@ static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt,
+ 			q->classes[i].deficit = quanta[i];
+ 		}
+ 	}
+-	for (i = q->nbands; i < oldbands; i++) {
+-		if (i >= q->nstrict && q->classes[i].qdisc->q.qlen)
+-			list_del_init(&q->classes[i].alist);
+-		qdisc_purge_queue(q->classes[i].qdisc);
+-	}
+ 	WRITE_ONCE(q->nstrict, nstrict);
+ 	memcpy(q->prio2band, priomap, sizeof(priomap));
+ 
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index 0c0d2757f6f8df..6fcdcaeed40e97 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -117,7 +117,7 @@ int sctp_rcv(struct sk_buff *skb)
+ 	 * it's better to just linearize it otherwise crc computing
+ 	 * takes longer.
+ 	 */
+-	if ((!is_gso && skb_linearize(skb)) ||
++	if (((!is_gso || skb_cloned(skb)) && skb_linearize(skb)) ||
+ 	    !pskb_may_pull(skb, sizeof(struct sctphdr)))
+ 		goto discard_it;
+ 
+diff --git a/net/tls/tls.h b/net/tls/tls.h
+index 774859b63f0ded..4e077068e6d98a 100644
+--- a/net/tls/tls.h
++++ b/net/tls/tls.h
+@@ -196,7 +196,7 @@ void tls_strp_msg_done(struct tls_strparser *strp);
+ int tls_rx_msg_size(struct tls_strparser *strp, struct sk_buff *skb);
+ void tls_rx_msg_ready(struct tls_strparser *strp);
+ 
+-void tls_strp_msg_load(struct tls_strparser *strp, bool force_refresh);
++bool tls_strp_msg_load(struct tls_strparser *strp, bool force_refresh);
+ int tls_strp_msg_cow(struct tls_sw_context_rx *ctx);
+ struct sk_buff *tls_strp_msg_detach(struct tls_sw_context_rx *ctx);
+ int tls_strp_msg_hold(struct tls_strparser *strp, struct sk_buff_head *dst);
+diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
+index 095cf31bae0ba9..d71643b494a1ae 100644
+--- a/net/tls/tls_strp.c
++++ b/net/tls/tls_strp.c
+@@ -475,7 +475,7 @@ static void tls_strp_load_anchor_with_queue(struct tls_strparser *strp, int len)
+ 	strp->stm.offset = offset;
+ }
+ 
+-void tls_strp_msg_load(struct tls_strparser *strp, bool force_refresh)
++bool tls_strp_msg_load(struct tls_strparser *strp, bool force_refresh)
+ {
+ 	struct strp_msg *rxm;
+ 	struct tls_msg *tlm;
+@@ -484,8 +484,11 @@ void tls_strp_msg_load(struct tls_strparser *strp, bool force_refresh)
+ 	DEBUG_NET_WARN_ON_ONCE(!strp->stm.full_len);
+ 
+ 	if (!strp->copy_mode && force_refresh) {
+-		if (WARN_ON(tcp_inq(strp->sk) < strp->stm.full_len))
+-			return;
++		if (unlikely(tcp_inq(strp->sk) < strp->stm.full_len)) {
++			WRITE_ONCE(strp->msg_ready, 0);
++			memset(&strp->stm, 0, sizeof(strp->stm));
++			return false;
++		}
+ 
+ 		tls_strp_load_anchor_with_queue(strp, strp->stm.full_len);
+ 	}
+@@ -495,6 +498,8 @@ void tls_strp_msg_load(struct tls_strparser *strp, bool force_refresh)
+ 	rxm->offset	= strp->stm.offset;
+ 	tlm = tls_msg(strp->anchor);
+ 	tlm->control	= strp->mark;
++
++	return true;
+ }
+ 
+ /* Called with lock held on lower socket */
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 549d1ea01a72a7..51c98a007ddac4 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1384,7 +1384,8 @@ tls_rx_rec_wait(struct sock *sk, struct sk_psock *psock, bool nonblock,
+ 			return sock_intr_errno(timeo);
+ 	}
+ 
+-	tls_strp_msg_load(&ctx->strp, released);
++	if (unlikely(!tls_strp_msg_load(&ctx->strp, released)))
++		return tls_rx_rec_wait(sk, psock, nonblock, false);
+ 
+ 	return 1;
+ }
+diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
+index f0e48e6911fc46..f01f9e8781061e 100644
+--- a/net/vmw_vsock/virtio_transport.c
++++ b/net/vmw_vsock/virtio_transport.c
+@@ -307,7 +307,7 @@ virtio_transport_cancel_pkt(struct vsock_sock *vsk)
+ 
+ static void virtio_vsock_rx_fill(struct virtio_vsock *vsock)
+ {
+-	int total_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE + VIRTIO_VSOCK_SKB_HEADROOM;
++	int total_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
+ 	struct scatterlist pkt, *p;
+ 	struct virtqueue *vq;
+ 	struct sk_buff *skb;
+diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c
+index 05d44a4435189c..fd88a32d43d685 100644
+--- a/net/wireless/mlme.c
++++ b/net/wireless/mlme.c
+@@ -850,7 +850,8 @@ int cfg80211_mlme_mgmt_tx(struct cfg80211_registered_device *rdev,
+ 
+ 	mgmt = (const struct ieee80211_mgmt *)params->buf;
+ 
+-	if (!ieee80211_is_mgmt(mgmt->frame_control))
++	if (!ieee80211_is_mgmt(mgmt->frame_control) ||
++	    ieee80211_has_order(mgmt->frame_control))
+ 		return -EINVAL;
+ 
+ 	stype = le16_to_cpu(mgmt->frame_control) & IEEE80211_FCTL_STYPE;
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index a2d3a5f3b4852c..a6c28985840133 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -415,10 +415,12 @@ bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
+ 	struct net_device *dev = x->xso.dev;
+ 	bool check_tunnel_size;
+ 
+-	if (x->xso.type == XFRM_DEV_OFFLOAD_UNSPECIFIED)
++	if (!x->type_offload ||
++	    (x->xso.type == XFRM_DEV_OFFLOAD_UNSPECIFIED && x->encap))
+ 		return false;
+ 
+-	if ((dev == xfrm_dst_path(dst)->dev) && !xdst->child->xfrm) {
++	if ((!dev || dev == xfrm_dst_path(dst)->dev) &&
++	    !xdst->child->xfrm) {
+ 		mtu = xfrm_state_mtu(x, xdst->child_mtu_cached);
+ 		if (skb->len <= mtu)
+ 			goto ok;
+@@ -430,6 +432,9 @@ bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
+ 	return false;
+ 
+ ok:
++	if (!dev)
++		return true;
++
+ 	check_tunnel_size = x->xso.type == XFRM_DEV_OFFLOAD_PACKET &&
+ 			    x->props.mode == XFRM_MODE_TUNNEL;
+ 	switch (x->props.family) {
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 0cf516b4e6d929..f57bb78fb12a27 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -1702,6 +1702,26 @@ struct xfrm_state *xfrm_state_lookup_byspi(struct net *net, __be32 spi,
+ }
+ EXPORT_SYMBOL(xfrm_state_lookup_byspi);
+ 
++static struct xfrm_state *xfrm_state_lookup_spi_proto(struct net *net, __be32 spi, u8 proto)
++{
++	struct xfrm_state *x;
++	unsigned int i;
++
++	rcu_read_lock();
++	for (i = 0; i <= net->xfrm.state_hmask; i++) {
++		hlist_for_each_entry_rcu(x, &net->xfrm.state_byspi[i], byspi) {
++			if (x->id.spi == spi && x->id.proto == proto) {
++				if (!xfrm_state_hold_rcu(x))
++					continue;
++				rcu_read_unlock();
++				return x;
++			}
++		}
++	}
++	rcu_read_unlock();
++	return NULL;
++}
++
+ static void __xfrm_state_insert(struct xfrm_state *x)
+ {
+ 	struct net *net = xs_net(x);
+@@ -2538,10 +2558,8 @@ int xfrm_alloc_spi(struct xfrm_state *x, u32 low, u32 high,
+ 	unsigned int h;
+ 	struct xfrm_state *x0;
+ 	int err = -ENOENT;
+-	__be32 minspi = htonl(low);
+-	__be32 maxspi = htonl(high);
++	u32 range = high - low + 1;
+ 	__be32 newspi = 0;
+-	u32 mark = x->mark.v & x->mark.m;
+ 
+ 	spin_lock_bh(&x->lock);
+ 	if (x->km.state == XFRM_STATE_DEAD) {
+@@ -2555,38 +2573,34 @@ int xfrm_alloc_spi(struct xfrm_state *x, u32 low, u32 high,
+ 
+ 	err = -ENOENT;
+ 
+-	if (minspi == maxspi) {
+-		x0 = xfrm_state_lookup(net, mark, &x->id.daddr, minspi, x->id.proto, x->props.family);
+-		if (x0) {
+-			NL_SET_ERR_MSG(extack, "Requested SPI is already in use");
+-			xfrm_state_put(x0);
++	for (h = 0; h < range; h++) {
++		u32 spi = (low == high) ? low : get_random_u32_inclusive(low, high);
++		newspi = htonl(spi);
++
++		spin_lock_bh(&net->xfrm.xfrm_state_lock);
++		x0 = xfrm_state_lookup_spi_proto(net, newspi, x->id.proto);
++		if (!x0) {
++			x->id.spi = newspi;
++			h = xfrm_spi_hash(net, &x->id.daddr, newspi, x->id.proto, x->props.family);
++			XFRM_STATE_INSERT(byspi, &x->byspi, net->xfrm.state_byspi + h, x->xso.type);
++			spin_unlock_bh(&net->xfrm.xfrm_state_lock);
++			err = 0;
+ 			goto unlock;
+ 		}
+-		newspi = minspi;
+-	} else {
+-		u32 spi = 0;
+-		for (h = 0; h < high-low+1; h++) {
+-			spi = get_random_u32_inclusive(low, high);
+-			x0 = xfrm_state_lookup(net, mark, &x->id.daddr, htonl(spi), x->id.proto, x->props.family);
+-			if (x0 == NULL) {
+-				newspi = htonl(spi);
+-				break;
+-			}
+-			xfrm_state_put(x0);
++		xfrm_state_put(x0);
++		spin_unlock_bh(&net->xfrm.xfrm_state_lock);
++
++		if (signal_pending(current)) {
++			err = -ERESTARTSYS;
++			goto unlock;
+ 		}
++
++		if (low == high)
++			break;
+ 	}
+-	if (newspi) {
+-		spin_lock_bh(&net->xfrm.xfrm_state_lock);
+-		x->id.spi = newspi;
+-		h = xfrm_spi_hash(net, &x->id.daddr, x->id.spi, x->id.proto, x->props.family);
+-		XFRM_STATE_INSERT(byspi, &x->byspi, net->xfrm.state_byspi + h,
+-				  x->xso.type);
+-		spin_unlock_bh(&net->xfrm.xfrm_state_lock);
+ 
+-		err = 0;
+-	} else {
++	if (err)
+ 		NL_SET_ERR_MSG(extack, "No SPI available in the requested range");
+-	}
+ 
+ unlock:
+ 	spin_unlock_bh(&x->lock);
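
The reworked xfrm_alloc_spi() above probes random SPIs in the inclusive [low, high] window, bails out per-iteration on signals, and degenerates to a single attempt when low == high. A standalone sketch of the probing loop, with a flat array standing in for the by-SPI hash (and none of the locking):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int in_use(unsigned int spi, const unsigned int *used, int n)
{
	for (int i = 0; i < n; i++)
		if (used[i] == spi)
			return 1;
	return 0;
}

static int alloc_spi(unsigned int low, unsigned int high,
		     const unsigned int *used, int n, unsigned int *out)
{
	unsigned int range = high - low + 1;

	for (unsigned int h = 0; h < range; h++) {
		unsigned int spi = (low == high) ?
			low : low + (unsigned int)(rand() % range);

		if (!in_use(spi, used, n)) {
			*out = spi;
			return 0;
		}
		if (low == high)
			break;          /* the one requested SPI is taken */
	}
	return -1;                      /* no SPI found in the range */
}

int main(void)
{
	unsigned int used[] = { 256, 257 }, spi;

	srand((unsigned int)time(NULL));
	if (alloc_spi(256, 260, used, 2, &spi))
		return 1;
	printf("spi %u\n", spi);
	return 0;
}

As in the kernel loop, random probing is bounded by the range size, so a free value is very likely but not guaranteed to be found on a crowded range.
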
+diff --git a/rust/Makefile b/rust/Makefile
+index 913b31d25bc4d1..1cf688d5997d2f 100644
+--- a/rust/Makefile
++++ b/rust/Makefile
+@@ -62,6 +62,10 @@ core-cfgs = \
+ 
+ core-edition := $(if $(call rustc-min-version,108700),2024,2021)
+ 
++# `rustdoc` did not save the target modifiers, thus workaround for
++# the time being (https://github.com/rust-lang/rust/issues/144521).
++rustdoc_modifiers_workaround := $(if $(call rustc-min-version,108800),-Cunsafe-allow-abi-mismatch=fixed-x18)
++
+ # `rustc` recognizes `--remap-path-prefix` since 1.26.0, but `rustdoc` only
+ # since Rust 1.81.0. Moreover, `rustdoc` ICEs on out-of-tree builds since Rust
+ # 1.82.0 (https://github.com/rust-lang/rust/issues/138520). Thus workaround both
+@@ -74,6 +78,7 @@ quiet_cmd_rustdoc = RUSTDOC $(if $(rustdoc_host),H, ) $<
+ 		-Zunstable-options --generate-link-to-definition \
+ 		--output $(rustdoc_output) \
+ 		--crate-name $(subst rustdoc-,,$@) \
++		$(rustdoc_modifiers_workaround) \
+ 		$(if $(rustdoc_host),,--sysroot=/dev/null) \
+ 		@$(objtree)/include/generated/rustc_cfg $<
+ 
+@@ -103,14 +108,14 @@ rustdoc: rustdoc-core rustdoc-macros rustdoc-compiler_builtins \
+ rustdoc-macros: private rustdoc_host = yes
+ rustdoc-macros: private rustc_target_flags = --crate-type proc-macro \
+     --extern proc_macro
+-rustdoc-macros: $(src)/macros/lib.rs FORCE
++rustdoc-macros: $(src)/macros/lib.rs rustdoc-clean FORCE
+ 	+$(call if_changed,rustdoc)
+ 
+ # Starting with Rust 1.82.0, skipping `-Wrustdoc::unescaped_backticks` should
+ # not be needed -- see https://github.com/rust-lang/rust/pull/128307.
+ rustdoc-core: private skip_flags = --edition=2021 -Wrustdoc::unescaped_backticks
+ rustdoc-core: private rustc_target_flags = --edition=$(core-edition) $(core-cfgs)
+-rustdoc-core: $(RUST_LIB_SRC)/core/src/lib.rs FORCE
++rustdoc-core: $(RUST_LIB_SRC)/core/src/lib.rs rustdoc-clean FORCE
+ 	+$(call if_changed,rustdoc)
+ 
+ rustdoc-compiler_builtins: $(src)/compiler_builtins.rs rustdoc-core FORCE
+@@ -122,7 +127,8 @@ rustdoc-ffi: $(src)/ffi.rs rustdoc-core FORCE
+ rustdoc-pin_init_internal: private rustdoc_host = yes
+ rustdoc-pin_init_internal: private rustc_target_flags = --cfg kernel \
+     --extern proc_macro --crate-type proc-macro
+-rustdoc-pin_init_internal: $(src)/pin-init/internal/src/lib.rs FORCE
++rustdoc-pin_init_internal: $(src)/pin-init/internal/src/lib.rs \
++    rustdoc-clean FORCE
+ 	+$(call if_changed,rustdoc)
+ 
+ rustdoc-pin_init: private rustdoc_host = yes
+@@ -140,6 +146,9 @@ rustdoc-kernel: $(src)/kernel/lib.rs rustdoc-core rustdoc-ffi rustdoc-macros \
+     $(obj)/bindings.o FORCE
+ 	+$(call if_changed,rustdoc)
+ 
++rustdoc-clean: FORCE
++	$(Q)rm -rf $(rustdoc_output)
++
+ quiet_cmd_rustc_test_library = $(RUSTC_OR_CLIPPY_QUIET) TL $<
+       cmd_rustc_test_library = \
+ 	OBJTREE=$(abspath $(objtree)) \
+@@ -212,6 +221,7 @@ quiet_cmd_rustdoc_test_kernel = RUSTDOC TK $<
+ 		--extern bindings --extern uapi \
+ 		--no-run --crate-name kernel -Zunstable-options \
+ 		--sysroot=/dev/null \
++		$(rustdoc_modifiers_workaround) \
+ 		--test-builder $(objtree)/scripts/rustdoc_test_builder \
+ 		$< $(rustdoc_test_kernel_quiet); \
+ 	$(objtree)/scripts/rustdoc_test_gen
+diff --git a/samples/damon/wsse.c b/samples/damon/wsse.c
+index e20238a249e7b5..e941958b103249 100644
+--- a/samples/damon/wsse.c
++++ b/samples/damon/wsse.c
+@@ -89,6 +89,8 @@ static void damon_sample_wsse_stop(void)
+ 		put_pid(target_pidp);
+ }
+ 
++static bool init_called;
++
+ static int damon_sample_wsse_enable_store(
+ 		const char *val, const struct kernel_param *kp)
+ {
+@@ -114,7 +116,15 @@ static int damon_sample_wsse_enable_store(
+ 
+ static int __init damon_sample_wsse_init(void)
+ {
+-	return 0;
++	int err = 0;
++
++	init_called = true;
++	if (enable) {
++		err = damon_sample_wsse_start();
++		if (err)
++			enable = false;
++	}
++	return err;
+ }
+ 
+ module_init(damon_sample_wsse_init);
+diff --git a/scripts/kconfig/gconf.c b/scripts/kconfig/gconf.c
+index c0f46f18906073..0caf0ced13df4a 100644
+--- a/scripts/kconfig/gconf.c
++++ b/scripts/kconfig/gconf.c
+@@ -748,7 +748,7 @@ static void renderer_edited(GtkCellRendererText * cell,
+ 	struct symbol *sym;
+ 
+ 	if (!gtk_tree_model_get_iter(model2, &iter, path))
+-		return;
++		goto free;
+ 
+ 	gtk_tree_model_get(model2, &iter, COL_MENU, &menu, -1);
+ 	sym = menu->sym;
+@@ -760,6 +760,7 @@ static void renderer_edited(GtkCellRendererText * cell,
+ 
+ 	update_tree(&rootmenu, NULL);
+ 
++free:
+ 	gtk_tree_path_free(path);
+ }
+ 
+@@ -942,13 +943,14 @@ on_treeview2_key_press_event(GtkWidget * widget,
+ void
+ on_treeview2_cursor_changed(GtkTreeView * treeview, gpointer user_data)
+ {
++	GtkTreeModel *model = gtk_tree_view_get_model(treeview);
+ 	GtkTreeSelection *selection;
+ 	GtkTreeIter iter;
+ 	struct menu *menu;
+ 
+ 	selection = gtk_tree_view_get_selection(treeview);
+-	if (gtk_tree_selection_get_selected(selection, &model2, &iter)) {
+-		gtk_tree_model_get(model2, &iter, COL_MENU, &menu, -1);
++	if (gtk_tree_selection_get_selected(selection, &model, &iter)) {
++		gtk_tree_model_get(model, &iter, COL_MENU, &menu, -1);
+ 		text_insert_help(menu);
+ 	}
+ }
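
The gconf fix above is a standard goto-cleanup conversion: the early return skipped the trailing gtk_tree_path_free() and leaked the GtkTreePath. The shape of the fix in self-contained C, with free() standing in for the GTK release call:

#include <stdlib.h>

struct path { int dummy; };

static int handle(struct path *p, int valid)
{
	int ret = -1;

	if (!valid)
		goto free;      /* was: return, leaking p */

	/* ... real work on p ... */
	ret = 0;

free:
	free(p);                /* single exit point frees in all cases */
	return ret;
}

int main(void)
{
	return handle(calloc(1, sizeof(struct path)), 0) == -1 ? 0 : 1;
}
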
+diff --git a/scripts/kconfig/lxdialog/inputbox.c b/scripts/kconfig/lxdialog/inputbox.c
+index 3c6e24b20f5be6..5e4a131724f288 100644
+--- a/scripts/kconfig/lxdialog/inputbox.c
++++ b/scripts/kconfig/lxdialog/inputbox.c
+@@ -39,8 +39,10 @@ int dialog_inputbox(const char *title, const char *prompt, int height, int width
+ 
+ 	if (!init)
+ 		instr[0] = '\0';
+-	else
+-		strcpy(instr, init);
++	else {
++		strncpy(instr, init, sizeof(dialog_input_result) - 1);
++		instr[sizeof(dialog_input_result) - 1] = '\0';
++	}
+ 
+ do_resize:
+ 	if (getmaxy(stdscr) <= (height - INPUTBOX_HEIGHT_MIN))
+diff --git a/scripts/kconfig/lxdialog/menubox.c b/scripts/kconfig/lxdialog/menubox.c
+index 6e6244df0c56e3..d4c19b7beebbd4 100644
+--- a/scripts/kconfig/lxdialog/menubox.c
++++ b/scripts/kconfig/lxdialog/menubox.c
+@@ -264,7 +264,7 @@ int dialog_menu(const char *title, const char *prompt,
+ 		if (key < 256 && isalpha(key))
+ 			key = tolower(key);
+ 
+-		if (strchr("ynmh", key))
++		if (strchr("ynmh ", key))
+ 			i = max_choice;
+ 		else {
+ 			for (i = choice + 1; i < max_choice; i++) {
+diff --git a/scripts/kconfig/nconf.c b/scripts/kconfig/nconf.c
+index c0b2dabf6c894f..ae1fe5f6032703 100644
+--- a/scripts/kconfig/nconf.c
++++ b/scripts/kconfig/nconf.c
+@@ -593,6 +593,8 @@ static void item_add_str(const char *fmt, ...)
+ 		tmp_str,
+ 		sizeof(k_menu_items[index].str));
+ 
++	k_menu_items[index].str[sizeof(k_menu_items[index].str) - 1] = '\0';
++
+ 	free_item(curses_menu_items[index]);
+ 	curses_menu_items[index] = new_item(
+ 			k_menu_items[index].str,
+diff --git a/scripts/kconfig/nconf.gui.c b/scripts/kconfig/nconf.gui.c
+index 4bfdf8ac2a9a34..7206437e784a0a 100644
+--- a/scripts/kconfig/nconf.gui.c
++++ b/scripts/kconfig/nconf.gui.c
+@@ -359,6 +359,7 @@ int dialog_inputbox(WINDOW *main_window,
+ 	x = (columns-win_cols)/2;
+ 
+ 	strncpy(result, init, *result_len);
++	result[*result_len - 1] = '\0';
+ 
+ 	/* create the windows */
+ 	win = newwin(win_lines, win_cols, y, x);
+diff --git a/security/apparmor/domain.c b/security/apparmor/domain.c
+index 5939bd9a9b9bb0..08ca9057f82b02 100644
+--- a/security/apparmor/domain.c
++++ b/security/apparmor/domain.c
+@@ -508,6 +508,7 @@ static const char *next_name(int xtype, const char *name)
+  * @name: returns: name tested to find label (NOT NULL)
+  *
+  * Returns: refcounted label, or NULL on failure (MAYBE NULL)
++ *          @name will always be set with the last name tried
+  */
+ struct aa_label *x_table_lookup(struct aa_profile *profile, u32 xindex,
+ 				const char **name)
+@@ -517,6 +518,7 @@ struct aa_label *x_table_lookup(struct aa_profile *profile, u32 xindex,
+ 	struct aa_label *label = NULL;
+ 	u32 xtype = xindex & AA_X_TYPE_MASK;
+ 	int index = xindex & AA_X_INDEX_MASK;
++	const char *next;
+ 
+ 	AA_BUG(!name);
+ 
+@@ -524,25 +526,27 @@ struct aa_label *x_table_lookup(struct aa_profile *profile, u32 xindex,
+ 	/* TODO: move lookup parsing to unpack time so this is a straight
+ 	 *       index into the resultant label
+ 	 */
+-	for (*name = rules->file->trans.table[index]; !label && *name;
+-	     *name = next_name(xtype, *name)) {
++	for (next = rules->file->trans.table[index]; next;
++	     next = next_name(xtype, next)) {
++		const char *lookup = (*next == '&') ? next + 1 : next;
++		*name = next;
+ 		if (xindex & AA_X_CHILD) {
+-			struct aa_profile *new_profile;
+-			/* release by caller */
+-			new_profile = aa_find_child(profile, *name);
+-			if (new_profile)
+-				label = &new_profile->label;
++			/* TODO: switch to parse to get stack of child */
++			struct aa_profile *new = aa_find_child(profile, lookup);
++
++			if (new)
++				/* release by caller */
++				return &new->label;
+ 			continue;
+ 		}
+-		label = aa_label_parse(&profile->label, *name, GFP_KERNEL,
++		label = aa_label_parse(&profile->label, lookup, GFP_KERNEL,
+ 				       true, false);
+-		if (IS_ERR(label))
+-			label = NULL;
++		if (!IS_ERR_OR_NULL(label))
++			/* release by caller */
++			return label;
+ 	}
+ 
+-	/* released by caller */
+-
+-	return label;
++	return NULL;
+ }
+ 
+ /**
+@@ -567,9 +571,9 @@ static struct aa_label *x_to_label(struct aa_profile *profile,
+ 	struct aa_ruleset *rules = list_first_entry(&profile->rules,
+ 						    typeof(*rules), list);
+ 	struct aa_label *new = NULL;
++	struct aa_label *stack = NULL;
+ 	struct aa_ns *ns = profile->ns;
+ 	u32 xtype = xindex & AA_X_TYPE_MASK;
+-	const char *stack = NULL;
+ 
+ 	switch (xtype) {
+ 	case AA_X_NONE:
+@@ -578,13 +582,14 @@ static struct aa_label *x_to_label(struct aa_profile *profile,
+ 		break;
+ 	case AA_X_TABLE:
+ 		/* TODO: fix when perm mapping done at unload */
+-		stack = rules->file->trans.table[xindex & AA_X_INDEX_MASK];
+-		if (*stack != '&') {
+-			/* released by caller */
+-			new = x_table_lookup(profile, xindex, lookupname);
+-			stack = NULL;
++		/* released by caller
++		 * if NULL for both stack and direct, try the fallback
++		 */
++		new = x_table_lookup(profile, xindex, lookupname);
++		if (!new || **lookupname != '&')
+ 			break;
+-		}
++		stack = new;
++		new = NULL;
+ 		fallthrough;	/* to X_NAME */
+ 	case AA_X_NAME:
+ 		if (xindex & AA_X_CHILD)
+@@ -599,6 +604,7 @@ static struct aa_label *x_to_label(struct aa_profile *profile,
+ 		break;
+ 	}
+ 
++	/* fallback transition check */
+ 	if (!new) {
+ 		if (xindex & AA_X_INHERIT) {
+ 			/* (p|c|n)ix - don't change profile but do
+@@ -617,12 +623,12 @@ static struct aa_label *x_to_label(struct aa_profile *profile,
+ 		/* base the stack on post domain transition */
+ 		struct aa_label *base = new;
+ 
+-		new = aa_label_parse(base, stack, GFP_KERNEL, true, false);
+-		if (IS_ERR(new))
+-			new = NULL;
++		new = aa_label_merge(base, stack, GFP_KERNEL);
++		/* null on error */
+ 		aa_put_label(base);
+ 	}
+ 
++	aa_put_label(stack);
+ 	/* released by caller */
+ 	return new;
+ }
+diff --git a/security/apparmor/file.c b/security/apparmor/file.c
+index d52a5b14dad4c7..62bc46e037588a 100644
+--- a/security/apparmor/file.c
++++ b/security/apparmor/file.c
+@@ -423,9 +423,11 @@ int aa_path_link(const struct cred *subj_cred,
+ {
+ 	struct path link = { .mnt = new_dir->mnt, .dentry = new_dentry };
+ 	struct path target = { .mnt = new_dir->mnt, .dentry = old_dentry };
++	struct inode *inode = d_backing_inode(old_dentry);
++	vfsuid_t vfsuid = i_uid_into_vfsuid(mnt_idmap(target.mnt), inode);
+ 	struct path_cond cond = {
+-		d_backing_inode(old_dentry)->i_uid,
+-		d_backing_inode(old_dentry)->i_mode
++		.uid = vfsuid_into_kuid(vfsuid),
++		.mode = inode->i_mode,
+ 	};
+ 	char *buffer = NULL, *buffer2 = NULL;
+ 	struct aa_profile *profile;
+diff --git a/security/apparmor/include/lib.h b/security/apparmor/include/lib.h
+index f11a0db7f51da4..e83f45e936a7d4 100644
+--- a/security/apparmor/include/lib.h
++++ b/security/apparmor/include/lib.h
+@@ -48,7 +48,11 @@ extern struct aa_dfa *stacksplitdfa;
+ #define AA_BUG_FMT(X, fmt, args...)					\
+ 	WARN((X), "AppArmor WARN %s: (" #X "): " fmt, __func__, ##args)
+ #else
+-#define AA_BUG_FMT(X, fmt, args...) no_printk(fmt, ##args)
++#define AA_BUG_FMT(X, fmt, args...)					\
++	do {								\
++		BUILD_BUG_ON_INVALID(X);				\
++		no_printk(fmt, ##args);					\
++	} while (0)
+ #endif
+ 
+ #define AA_ERROR(fmt, args...)						\
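
The lib.h hunk above makes the non-debug AA_BUG_FMT() still reference its condition via BUILD_BUG_ON_INVALID(), so an expression that no longer compiles (say, after a rename) breaks the build even when debugging is off, without ever being evaluated at runtime. The trick in standalone C, using sizeof() as the no-evaluation context:

#include <stdio.h>

#define CHECK_VALID(X) ((void)sizeof(!!(X)))

#ifdef DEBUG
#define MY_BUG(X) \
	do { if (X) fprintf(stderr, "bug: " #X "\n"); } while (0)
#else
#define MY_BUG(X) CHECK_VALID(X)   /* compile-checked, never evaluated */
#endif

int main(void)
{
	int refs = 1;

	MY_BUG(refs < 0);          /* a typo like "refz < 0" would fail
				    * to build even without -DDEBUG */
	return 0;
}
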
+diff --git a/security/inode.c b/security/inode.c
+index da3ab44c8e571f..58cc60c50498d2 100644
+--- a/security/inode.c
++++ b/security/inode.c
+@@ -159,7 +159,6 @@ static struct dentry *securityfs_create_dentry(const char *name, umode_t mode,
+ 		inode->i_fop = fops;
+ 	}
+ 	d_instantiate(dentry, inode);
+-	dget(dentry);
+ 	inode_unlock(dir);
+ 	return dentry;
+ 
+@@ -306,7 +305,6 @@ void securityfs_remove(struct dentry *dentry)
+ 			simple_rmdir(dir, dentry);
+ 		else
+ 			simple_unlink(dir, dentry);
+-		dput(dentry);
+ 	}
+ 	inode_unlock(dir);
+ 	simple_release_fs(&mount, &mount_count);
+diff --git a/security/landlock/syscalls.c b/security/landlock/syscalls.c
+index 33eafb71e4f31b..0116e9f93ffe30 100644
+--- a/security/landlock/syscalls.c
++++ b/security/landlock/syscalls.c
+@@ -303,7 +303,6 @@ static int get_path_from_fd(const s32 fd, struct path *const path)
+ 	if ((fd_file(f)->f_op == &ruleset_fops) ||
+ 	    (fd_file(f)->f_path.mnt->mnt_flags & MNT_INTERNAL) ||
+ 	    (fd_file(f)->f_path.dentry->d_sb->s_flags & SB_NOUSER) ||
+-	    d_is_negative(fd_file(f)->f_path.dentry) ||
+ 	    IS_PRIVATE(d_backing_inode(fd_file(f)->f_path.dentry)))
+ 		return -EBADFD;
+ 
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 853ac5bb33ff2a..ecb71bf1859d40 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -24,6 +24,7 @@
+ #include <sound/minors.h>
+ #include <linux/uio.h>
+ #include <linux/delay.h>
++#include <linux/bitops.h>
+ 
+ #include "pcm_local.h"
+ 
+@@ -3130,13 +3131,23 @@ struct snd_pcm_sync_ptr32 {
+ static snd_pcm_uframes_t recalculate_boundary(struct snd_pcm_runtime *runtime)
+ {
+ 	snd_pcm_uframes_t boundary;
++	snd_pcm_uframes_t border;
++	int order;
+ 
+ 	if (! runtime->buffer_size)
+ 		return 0;
+-	boundary = runtime->buffer_size;
+-	while (boundary * 2 <= 0x7fffffffUL - runtime->buffer_size)
+-		boundary *= 2;
+-	return boundary;
++
++	border = 0x7fffffffUL - runtime->buffer_size;
++	if (runtime->buffer_size > border)
++		return runtime->buffer_size;
++
++	order = __fls(border) - __fls(runtime->buffer_size);
++	boundary = runtime->buffer_size << order;
++
++	if (boundary <= border)
++		return boundary;
++	else
++		return boundary / 2;
+ }
+ 
+ static int snd_pcm_ioctl_sync_ptr_compat(struct snd_pcm_substream *substream,
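
recalculate_boundary() above replaces the doubling loop with a single shift: order is the number of doublings that keeps buffer_size at or below border, and the final compare backs off one step if the shift overshot. A standalone version using __builtin_clzl in place of the kernel's __fls() (same result, checked against the old loop):

#include <assert.h>
#include <limits.h>

static unsigned long fls_ul(unsigned long x)
{
	/* index of the most significant set bit, like the kernel __fls() */
	return (unsigned long)(sizeof(long) * CHAR_BIT - 1) -
	       (unsigned long)__builtin_clzl(x);
}

static unsigned long boundary(unsigned long buffer_size)
{
	unsigned long border, b, order;

	if (!buffer_size)
		return 0;
	border = 0x7fffffffUL - buffer_size;
	if (buffer_size > border)
		return buffer_size;

	order = fls_ul(border) - fls_ul(buffer_size);
	b = buffer_size << order;
	return b <= border ? b : b / 2;  /* back off if the shift overshot */
}

int main(void)
{
	/* matches the old doubling loop for a typical buffer size */
	unsigned long old = 4096;

	while (old * 2 <= 0x7fffffffUL - 4096)
		old *= 2;
	assert(boundary(4096) == old);
	return 0;
}
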
+diff --git a/sound/pci/hda/cs35l41_hda.c b/sound/pci/hda/cs35l41_hda.c
+index 5dc021976c7905..4397630496fd7e 100644
+--- a/sound/pci/hda/cs35l41_hda.c
++++ b/sound/pci/hda/cs35l41_hda.c
+@@ -2060,3 +2060,5 @@ MODULE_IMPORT_NS("SND_SOC_CS_AMP_LIB");
+ MODULE_AUTHOR("Lucas Tanure, Cirrus Logic Inc, <tanureal@opensource.cirrus.com>");
+ MODULE_LICENSE("GPL");
+ MODULE_IMPORT_NS("FW_CS_DSP");
++MODULE_FIRMWARE("cirrus/cs35l41-*.wmfw");
++MODULE_FIRMWARE("cirrus/cs35l41-*.bin");
+diff --git a/sound/pci/hda/cs35l56_hda.c b/sound/pci/hda/cs35l56_hda.c
+index c9c8ec8d2474d8..7e92ad955e7891 100644
+--- a/sound/pci/hda/cs35l56_hda.c
++++ b/sound/pci/hda/cs35l56_hda.c
+@@ -1178,3 +1178,7 @@ MODULE_IMPORT_NS("SND_SOC_CS_AMP_LIB");
+ MODULE_AUTHOR("Richard Fitzgerald <rf@opensource.cirrus.com>");
+ MODULE_AUTHOR("Simon Trimmer <simont@opensource.cirrus.com>");
+ MODULE_LICENSE("GPL");
++MODULE_FIRMWARE("cirrus/cs35l54-*.wmfw");
++MODULE_FIRMWARE("cirrus/cs35l54-*.bin");
++MODULE_FIRMWARE("cirrus/cs35l56-*.wmfw");
++MODULE_FIRMWARE("cirrus/cs35l56-*.bin");
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index b436d436831bbf..7b090c9539748e 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -639,24 +639,16 @@ static void hda_jackpoll_work(struct work_struct *work)
+ 	struct hda_codec *codec =
+ 		container_of(work, struct hda_codec, jackpoll_work.work);
+ 
+-	/* for non-polling trigger: we need nothing if already powered on */
+-	if (!codec->jackpoll_interval && snd_hdac_is_power_on(&codec->core))
++	if (!codec->jackpoll_interval)
+ 		return;
+ 
+ 	/* the power-up/down sequence triggers the runtime resume */
+-	snd_hda_power_up_pm(codec);
++	snd_hda_power_up(codec);
+ 	/* update jacks manually if polling is required, too */
+-	if (codec->jackpoll_interval) {
+-		snd_hda_jack_set_dirty_all(codec);
+-		snd_hda_jack_poll_all(codec);
+-	}
+-	snd_hda_power_down_pm(codec);
+-
+-	if (!codec->jackpoll_interval)
+-		return;
+-
+-	schedule_delayed_work(&codec->jackpoll_work,
+-			      codec->jackpoll_interval);
++	snd_hda_jack_set_dirty_all(codec);
++	snd_hda_jack_poll_all(codec);
++	schedule_delayed_work(&codec->jackpoll_work, codec->jackpoll_interval);
++	snd_hda_power_down(codec);
+ }
+ 
+ /* release all pincfg lists */
+@@ -2926,12 +2918,12 @@ static void hda_call_codec_resume(struct hda_codec *codec)
+ 		snd_hda_regmap_sync(codec);
+ 	}
+ 
+-	if (codec->jackpoll_interval)
+-		hda_jackpoll_work(&codec->jackpoll_work.work);
+-	else
+-		snd_hda_jack_report_sync(codec);
++	snd_hda_jack_report_sync(codec);
+ 	codec->core.dev.power.power_state = PMSG_ON;
+ 	snd_hdac_leave_pm(&codec->core);
++	if (codec->jackpoll_interval)
++		schedule_delayed_work(&codec->jackpoll_work,
++				      codec->jackpoll_interval);
+ }
+ 
+ static int hda_codec_runtime_suspend(struct device *dev)
+@@ -2943,8 +2935,6 @@ static int hda_codec_runtime_suspend(struct device *dev)
+ 	if (!codec->card)
+ 		return 0;
+ 
+-	cancel_delayed_work_sync(&codec->jackpoll_work);
+-
+ 	state = hda_call_codec_suspend(codec);
+ 	if (codec->link_down_at_suspend ||
+ 	    (codec_has_clkstop(codec) && codec_has_epss(codec) &&
+@@ -2952,10 +2942,6 @@ static int hda_codec_runtime_suspend(struct device *dev)
+ 		snd_hdac_codec_link_down(&codec->core);
+ 	snd_hda_codec_display_power(codec, false);
+ 
+-	if (codec->bus->jackpoll_in_suspend &&
+-		(dev->power.power_state.event != PM_EVENT_SUSPEND))
+-		schedule_delayed_work(&codec->jackpoll_work,
+-					codec->jackpoll_interval);
+ 	return 0;
+ }
+ 
+@@ -3051,6 +3037,7 @@ void snd_hda_codec_shutdown(struct hda_codec *codec)
+ 	if (!codec->core.registered)
+ 		return;
+ 
++	codec->jackpoll_interval = 0; /* don't poll any longer */
+ 	cancel_delayed_work_sync(&codec->jackpoll_work);
+ 	list_for_each_entry(cpcm, &codec->pcm_list_head, list)
+ 		snd_pcm_suspend_all(cpcm->pcm);
+@@ -3117,10 +3104,11 @@ int snd_hda_codec_build_controls(struct hda_codec *codec)
+ 	if (err < 0)
+ 		return err;
+ 
++	snd_hda_jack_report_sync(codec); /* call at the last init point */
+ 	if (codec->jackpoll_interval)
+-		hda_jackpoll_work(&codec->jackpoll_work.work);
+-	else
+-		snd_hda_jack_report_sync(codec); /* call at the last init point */
++		schedule_delayed_work(&codec->jackpoll_work,
++				      codec->jackpoll_interval);
++
+ 	sync_power_up_states(codec);
+ 	return 0;
+ }
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 77432e06f3e32c..a2f57d7424bb84 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -4410,7 +4410,7 @@ static int add_tuning_control(struct hda_codec *codec,
+ 	}
+ 	knew.private_value =
+ 		HDA_COMPOSE_AMP_VAL(nid, 1, 0, type);
+-	sprintf(namestr, "%s %s Volume", name, dirstr[dir]);
++	snprintf(namestr, sizeof(namestr), "%s %s Volume", name, dirstr[dir]);
+ 	return snd_hda_ctl_add(codec, nid, snd_ctl_new1(&knew, codec));
+ }
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 271e335610e0e9..1c421518570e4a 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -11376,6 +11376,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1854, 0x0440, "LG CQ6", ALC256_FIXUP_HEADPHONE_AMP_VOL),
+ 	SND_PCI_QUIRK(0x1854, 0x0441, "LG CQ6 AIO", ALC256_FIXUP_HEADPHONE_AMP_VOL),
+ 	SND_PCI_QUIRK(0x1854, 0x0488, "LG gram 16 (16Z90R)", ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS),
++	SND_PCI_QUIRK(0x1854, 0x0489, "LG gram 16 (16Z90R-A)", ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS),
+ 	SND_PCI_QUIRK(0x1854, 0x048a, "LG gram 17 (17ZD90R)", ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS),
+ 	SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
+ 	SND_PCI_QUIRK(0x19e5, 0x320f, "Huawei WRT-WX9 ", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+@@ -11405,6 +11406,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1ee7, 0x2078, "HONOR BRB-X M1010", ALC2XX_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1f66, 0x0105, "Ayaneo Portable Game Player", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x2014, 0x800a, "Positivo ARN50", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -11421,6 +11423,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0xf111, 0x0001, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0xf111, 0x0006, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0xf111, 0x0009, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0xf111, 0x000b, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0xf111, 0x000c, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+ 
+ #if 0
+diff --git a/sound/pci/intel8x0.c b/sound/pci/intel8x0.c
+index e4bb99f71c2c9e..95f0bd2e15323c 100644
+--- a/sound/pci/intel8x0.c
++++ b/sound/pci/intel8x0.c
+@@ -2249,7 +2249,7 @@ static int snd_intel8x0_mixer(struct intel8x0 *chip, int ac97_clock,
+ 			tmp |= chip->ac97_sdin[0] << ICH_DI1L_SHIFT;
+ 			for (i = 1; i < 4; i++) {
+ 				if (pcm->r[0].codec[i]) {
+-					tmp |= chip->ac97_sdin[pcm->r[0].codec[1]->num] << ICH_DI2L_SHIFT;
++					tmp |= chip->ac97_sdin[pcm->r[0].codec[i]->num] << ICH_DI2L_SHIFT;
+ 					break;
+ 				}
+ 			}
+diff --git a/sound/soc/codecs/hdac_hdmi.c b/sound/soc/codecs/hdac_hdmi.c
+index 1139a2754ca337..056d98154682a7 100644
+--- a/sound/soc/codecs/hdac_hdmi.c
++++ b/sound/soc/codecs/hdac_hdmi.c
+@@ -1232,7 +1232,8 @@ static int hdac_hdmi_parse_eld(struct hdac_device *hdev,
+ 						>> DRM_ELD_VER_SHIFT;
+ 
+ 	if (ver != ELD_VER_CEA_861D && ver != ELD_VER_PARTIAL) {
+-		dev_err(&hdev->dev, "HDMI: Unknown ELD version %d\n", ver);
++		dev_err_ratelimited(&hdev->dev,
++				    "HDMI: Unknown ELD version %d\n", ver);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -1240,7 +1241,8 @@ static int hdac_hdmi_parse_eld(struct hdac_device *hdev,
+ 		DRM_ELD_MNL_MASK) >> DRM_ELD_MNL_SHIFT;
+ 
+ 	if (mnl > ELD_MAX_MNL) {
+-		dev_err(&hdev->dev, "HDMI: MNL Invalid %d\n", mnl);
++		dev_err_ratelimited(&hdev->dev,
++				    "HDMI: MNL Invalid %d\n", mnl);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -1299,8 +1301,8 @@ static void hdac_hdmi_present_sense(struct hdac_hdmi_pin *pin,
+ 
+ 	if (!port->eld.monitor_present || !port->eld.eld_valid) {
+ 
+-		dev_err(&hdev->dev, "%s: disconnect for pin:port %d:%d\n",
+-						__func__, pin->nid, port->id);
++		dev_dbg(&hdev->dev, "%s: disconnect for pin:port %d:%d\n",
++			__func__, pin->nid, port->id);
+ 
+ 		/*
+ 		 * PCMs are not registered during device probe, so don't
+diff --git a/sound/soc/codecs/rt5640.c b/sound/soc/codecs/rt5640.c
+index 21a18012b4c0db..55881a5669e2b4 100644
+--- a/sound/soc/codecs/rt5640.c
++++ b/sound/soc/codecs/rt5640.c
+@@ -3013,6 +3013,11 @@ static int rt5640_i2c_probe(struct i2c_client *i2c)
+ 	}
+ 
+ 	regmap_read(rt5640->regmap, RT5640_VENDOR_ID2, &val);
++	if (val != RT5640_DEVICE_ID) {
++		usleep_range(60000, 100000);
++		regmap_read(rt5640->regmap, RT5640_VENDOR_ID2, &val);
++	}
++
+ 	if (val != RT5640_DEVICE_ID) {
+ 		dev_err(&i2c->dev,
+ 			"Device with ID register %#x is not rt5640/39\n", val);
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index f244e36799759f..01944860a26442 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -777,9 +777,9 @@ static void fsl_sai_config_disable(struct fsl_sai *sai, int dir)
+ 	 * are running concurrently.
+ 	 */
+ 	/* Software Reset */
+-	regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), FSL_SAI_CSR_SR);
++	regmap_update_bits(sai->regmap, FSL_SAI_xCSR(tx, ofs), FSL_SAI_CSR_SR, FSL_SAI_CSR_SR);
+ 	/* Clear SR bit to finish the reset */
+-	regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), 0);
++	regmap_update_bits(sai->regmap, FSL_SAI_xCSR(tx, ofs), FSL_SAI_CSR_SR, 0);
+ }
+ 
+ static int fsl_sai_trigger(struct snd_pcm_substream *substream, int cmd,
+@@ -898,11 +898,11 @@ static int fsl_sai_dai_probe(struct snd_soc_dai *cpu_dai)
+ 	unsigned int ofs = sai->soc_data->reg_offset;
+ 
+ 	/* Software Reset for both Tx and Rx */
+-	regmap_write(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR);
+-	regmap_write(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR);
++	regmap_update_bits(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR, FSL_SAI_CSR_SR);
++	regmap_update_bits(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR, FSL_SAI_CSR_SR);
+ 	/* Clear SR bit to finish the reset */
+-	regmap_write(sai->regmap, FSL_SAI_TCSR(ofs), 0);
+-	regmap_write(sai->regmap, FSL_SAI_RCSR(ofs), 0);
++	regmap_update_bits(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR, 0);
++	regmap_update_bits(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR, 0);
+ 
+ 	regmap_update_bits(sai->regmap, FSL_SAI_TCR1(ofs),
+ 			   FSL_SAI_CR1_RFW_MASK(sai->soc_data->fifo_depth),
+@@ -1790,11 +1790,11 @@ static int fsl_sai_runtime_resume(struct device *dev)
+ 
+ 	regcache_cache_only(sai->regmap, false);
+ 	regcache_mark_dirty(sai->regmap);
+-	regmap_write(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR);
+-	regmap_write(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR);
++	regmap_update_bits(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR, FSL_SAI_CSR_SR);
++	regmap_update_bits(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR, FSL_SAI_CSR_SR);
+ 	usleep_range(1000, 2000);
+-	regmap_write(sai->regmap, FSL_SAI_TCSR(ofs), 0);
+-	regmap_write(sai->regmap, FSL_SAI_RCSR(ofs), 0);
++	regmap_update_bits(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR, 0);
++	regmap_update_bits(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR, 0);
+ 
+ 	ret = regcache_sync(sai->regmap);
+ 	if (ret)
+diff --git a/sound/soc/intel/avs/core.c b/sound/soc/intel/avs/core.c
+index cbbc656fcc3f86..6ef49f10e19f71 100644
+--- a/sound/soc/intel/avs/core.c
++++ b/sound/soc/intel/avs/core.c
+@@ -446,6 +446,8 @@ static int avs_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ 	adev = devm_kzalloc(dev, sizeof(*adev), GFP_KERNEL);
+ 	if (!adev)
+ 		return -ENOMEM;
++	bus = &adev->base.core;
++
+ 	ret = avs_bus_init(adev, pci, id);
+ 	if (ret < 0) {
+ 		dev_err(dev, "failed to init avs bus: %d\n", ret);
+@@ -456,7 +458,6 @@ static int avs_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	bus = &adev->base.core;
+ 	bus->addr = pci_resource_start(pci, 0);
+ 	bus->remap_addr = pci_ioremap_bar(pci, 0);
+ 	if (!bus->remap_addr) {
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 095d08b3fc8249..c479b65d73df8e 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -741,6 +741,14 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 		},
+ 		.driver_data = (void *)(SOC_SDW_CODEC_SPKR),
+ 	},
++	{
++		.callback = sof_sdw_quirk_cb,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Alienware"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0CCC")
++		},
++		.driver_data = (void *)(SOC_SDW_CODEC_SPKR),
++	},
+ 	/* Pantherlake devices*/
+ 	{
+ 		.callback = sof_sdw_quirk_cb,
+diff --git a/sound/soc/qcom/lpass-platform.c b/sound/soc/qcom/lpass-platform.c
+index 9946f12254b396..b456e096f138fd 100644
+--- a/sound/soc/qcom/lpass-platform.c
++++ b/sound/soc/qcom/lpass-platform.c
+@@ -202,7 +202,6 @@ static int lpass_platform_pcmops_open(struct snd_soc_component *component,
+ 	struct regmap *map;
+ 	unsigned int dai_id = cpu_dai->driver->id;
+ 
+-	component->id = dai_id;
+ 	data = kzalloc(sizeof(*data), GFP_KERNEL);
+ 	if (!data)
+ 		return -ENOMEM;
+@@ -1190,13 +1189,14 @@ static int lpass_platform_pcmops_suspend(struct snd_soc_component *component)
+ {
+ 	struct lpass_data *drvdata = snd_soc_component_get_drvdata(component);
+ 	struct regmap *map;
+-	unsigned int dai_id = component->id;
+ 
+-	if (dai_id == LPASS_DP_RX)
++	if (drvdata->hdmi_port_enable) {
+ 		map = drvdata->hdmiif_map;
+-	else
+-		map = drvdata->lpaif_map;
++		regcache_cache_only(map, true);
++		regcache_mark_dirty(map);
++	}
+ 
++	map = drvdata->lpaif_map;
+ 	regcache_cache_only(map, true);
+ 	regcache_mark_dirty(map);
+ 
+@@ -1207,14 +1207,19 @@ static int lpass_platform_pcmops_resume(struct snd_soc_component *component)
+ {
+ 	struct lpass_data *drvdata = snd_soc_component_get_drvdata(component);
+ 	struct regmap *map;
+-	unsigned int dai_id = component->id;
++	int ret;
+ 
+-	if (dai_id == LPASS_DP_RX)
++	if (drvdata->hdmi_port_enable) {
+ 		map = drvdata->hdmiif_map;
+-	else
+-		map = drvdata->lpaif_map;
++		regcache_cache_only(map, false);
++		ret = regcache_sync(map);
++		if (ret)
++			return ret;
++	}
+ 
++	map = drvdata->lpaif_map;
+ 	regcache_cache_only(map, false);
++
+ 	return regcache_sync(map);
+ }
+ 
+@@ -1224,7 +1229,9 @@ static int lpass_platform_copy(struct snd_soc_component *component,
+ 			       unsigned long bytes)
+ {
+ 	struct snd_pcm_runtime *rt = substream->runtime;
+-	unsigned int dai_id = component->id;
++	struct snd_soc_pcm_runtime *soc_runtime = snd_soc_substream_to_rtd(substream);
++	struct snd_soc_dai *cpu_dai = snd_soc_rtd_to_cpu(soc_runtime, 0);
++	unsigned int dai_id = cpu_dai->driver->id;
+ 	int ret = 0;
+ 
+ 	void __iomem *dma_buf = (void __iomem *) (rt->dma_area + pos +
+diff --git a/sound/soc/sdca/sdca_functions.c b/sound/soc/sdca/sdca_functions.c
+index 15aa57a07c73f1..e1803f0dc59040 100644
+--- a/sound/soc/sdca/sdca_functions.c
++++ b/sound/soc/sdca/sdca_functions.c
+@@ -912,6 +912,8 @@ static int find_sdca_entity_control(struct device *dev, struct sdca_entity *enti
+ 				       &tmp);
+ 	if (!ret)
+ 		control->interrupt_position = tmp;
++	else
++		control->interrupt_position = SDCA_NO_INTERRUPT;
+ 
+ 	control->label = find_sdca_control_label(dev, entity, control);
+ 	if (!control->label)
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 3f97d1f132c640..0db6db16f28b9e 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -1139,6 +1139,9 @@ static int snd_soc_compensate_channel_connection_map(struct snd_soc_card *card,
+ void snd_soc_remove_pcm_runtime(struct snd_soc_card *card,
+ 				struct snd_soc_pcm_runtime *rtd)
+ {
++	if (!rtd)
++		return;
++
+ 	lockdep_assert_held(&client_mutex);
+ 
+ 	/*
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index b7818388984e3a..227f86752b1ea0 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -739,6 +739,10 @@ static int snd_soc_dapm_set_bias_level(struct snd_soc_dapm_context *dapm,
+ out:
+ 	trace_snd_soc_bias_level_done(dapm, level);
+ 
++	/* success */
++	if (ret == 0)
++		snd_soc_dapm_init_bias_level(dapm, level);
++
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index 14aa8ecc4bc426..554808d2dc29f7 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -2368,14 +2368,25 @@ static int sof_dspless_widget_ready(struct snd_soc_component *scomp, int index,
+ 				    struct snd_soc_dapm_widget *w,
+ 				    struct snd_soc_tplg_dapm_widget *tw)
+ {
++	struct snd_soc_tplg_private *priv = &tw->priv;
++	int ret;
++
++	/* for snd_soc_dapm_widget.no_wname_in_kcontrol_name */
++	ret = sof_parse_tokens(scomp, w, dapm_widget_tokens,
++			       ARRAY_SIZE(dapm_widget_tokens),
++			       priv->array, le32_to_cpu(priv->size));
++	if (ret < 0) {
++		dev_err(scomp->dev, "failed to parse dapm widget tokens for %s\n",
++			w->name);
++		return ret;
++	}
++
+ 	if (WIDGET_IS_DAI(w->id)) {
+ 		static const struct sof_topology_token dai_tokens[] = {
+ 			{SOF_TKN_DAI_TYPE, SND_SOC_TPLG_TUPLE_TYPE_STRING, get_token_dai_type, 0}};
+ 		struct snd_sof_dev *sdev = snd_soc_component_get_drvdata(scomp);
+-		struct snd_soc_tplg_private *priv = &tw->priv;
+ 		struct snd_sof_widget *swidget;
+ 		struct snd_sof_dai *sdai;
+-		int ret;
+ 
+ 		swidget = kzalloc(sizeof(*swidget), GFP_KERNEL);
+ 		if (!swidget)
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index a90673d4382219..8435ca833a424c 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -2153,15 +2153,15 @@ static int dell_dock_mixer_init(struct usb_mixer_interface *mixer)
+ #define SND_RME_CLK_FREQMUL_SHIFT		18
+ #define SND_RME_CLK_FREQMUL_MASK		0x7
+ #define SND_RME_CLK_SYSTEM(x) \
+-	((x >> SND_RME_CLK_SYSTEM_SHIFT) & SND_RME_CLK_SYSTEM_MASK)
++	(((x) >> SND_RME_CLK_SYSTEM_SHIFT) & SND_RME_CLK_SYSTEM_MASK)
+ #define SND_RME_CLK_AES(x) \
+-	((x >> SND_RME_CLK_AES_SHIFT) & SND_RME_CLK_AES_SPDIF_MASK)
++	(((x) >> SND_RME_CLK_AES_SHIFT) & SND_RME_CLK_AES_SPDIF_MASK)
+ #define SND_RME_CLK_SPDIF(x) \
+-	((x >> SND_RME_CLK_SPDIF_SHIFT) & SND_RME_CLK_AES_SPDIF_MASK)
++	(((x) >> SND_RME_CLK_SPDIF_SHIFT) & SND_RME_CLK_AES_SPDIF_MASK)
+ #define SND_RME_CLK_SYNC(x) \
+-	((x >> SND_RME_CLK_SYNC_SHIFT) & SND_RME_CLK_SYNC_MASK)
++	(((x) >> SND_RME_CLK_SYNC_SHIFT) & SND_RME_CLK_SYNC_MASK)
+ #define SND_RME_CLK_FREQMUL(x) \
+-	((x >> SND_RME_CLK_FREQMUL_SHIFT) & SND_RME_CLK_FREQMUL_MASK)
++	(((x) >> SND_RME_CLK_FREQMUL_SHIFT) & SND_RME_CLK_FREQMUL_MASK)
+ #define SND_RME_CLK_AES_LOCK			0x1
+ #define SND_RME_CLK_AES_SYNC			0x4
+ #define SND_RME_CLK_SPDIF_LOCK			0x2
+@@ -2170,9 +2170,9 @@ static int dell_dock_mixer_init(struct usb_mixer_interface *mixer)
+ #define SND_RME_SPDIF_FORMAT_SHIFT		5
+ #define SND_RME_BINARY_MASK			0x1
+ #define SND_RME_SPDIF_IF(x) \
+-	((x >> SND_RME_SPDIF_IF_SHIFT) & SND_RME_BINARY_MASK)
++	(((x) >> SND_RME_SPDIF_IF_SHIFT) & SND_RME_BINARY_MASK)
+ #define SND_RME_SPDIF_FORMAT(x) \
+-	((x >> SND_RME_SPDIF_FORMAT_SHIFT) & SND_RME_BINARY_MASK)
++	(((x) >> SND_RME_SPDIF_FORMAT_SHIFT) & SND_RME_BINARY_MASK)
+ 
+ static const u32 snd_rme_rate_table[] = {
+ 	32000, 44100, 48000, 50000,
+diff --git a/sound/usb/stream.c b/sound/usb/stream.c
+index aa91d63749f2ca..1cb52373e70f64 100644
+--- a/sound/usb/stream.c
++++ b/sound/usb/stream.c
+@@ -341,20 +341,28 @@ snd_pcm_chmap_elem *convert_chmap_v3(struct uac3_cluster_header_descriptor
+ 
+ 	len = le16_to_cpu(cluster->wLength);
+ 	c = 0;
+-	p += sizeof(struct uac3_cluster_header_descriptor);
++	p += sizeof(*cluster);
++	len -= sizeof(*cluster);
+ 
+-	while (((p - (void *)cluster) < len) && (c < channels)) {
++	while (len > 0 && (c < channels)) {
+ 		struct uac3_cluster_segment_descriptor *cs_desc = p;
+ 		u16 cs_len;
+ 		u8 cs_type;
+ 
++		if (len < sizeof(*p))
++			break;
+ 		cs_len = le16_to_cpu(cs_desc->wLength);
++		if (len < cs_len)
++			break;
+ 		cs_type = cs_desc->bSegmentType;
+ 
+ 		if (cs_type == UAC3_CHANNEL_INFORMATION) {
+ 			struct uac3_cluster_information_segment_descriptor *is = p;
+ 			unsigned char map;
+ 
++			if (cs_len < sizeof(*is))
++				break;
++
+ 			/*
+ 			 * TODO: this conversion is not complete, update it
+ 			 * after adding UAC3 values to asound.h
+@@ -456,6 +464,7 @@ snd_pcm_chmap_elem *convert_chmap_v3(struct uac3_cluster_header_descriptor
+ 			chmap->map[c++] = map;
+ 		}
+ 		p += cs_len;
++		len -= cs_len;
+ 	}
+ 
+ 	if (channels < c)
+@@ -880,7 +889,7 @@ snd_usb_get_audioformat_uac3(struct snd_usb_audio *chip,
+ 	u64 badd_formats = 0;
+ 	unsigned int num_channels;
+ 	struct audioformat *fp;
+-	u16 cluster_id, wLength;
++	u16 cluster_id, wLength, cluster_wLength;
+ 	int clock = 0;
+ 	int err;
+ 
+@@ -1010,6 +1019,16 @@ snd_usb_get_audioformat_uac3(struct snd_usb_audio *chip,
+ 		return ERR_PTR(-EIO);
+ 	}
+ 
++	cluster_wLength = le16_to_cpu(cluster->wLength);
++	if (cluster_wLength < sizeof(*cluster) ||
++	    cluster_wLength > wLength) {
++		dev_err(&dev->dev,
++			"%u:%d : invalid Cluster Descriptor size\n",
++			iface_no, altno);
++		kfree(cluster);
++		return ERR_PTR(-EIO);
++	}
++
+ 	num_channels = cluster->bNrChannels;
+ 	chmap = convert_chmap_v3(cluster);
+ 	kfree(cluster);
+diff --git a/sound/usb/validate.c b/sound/usb/validate.c
+index 6fe206f6e91105..4f4e8e87a14cd0 100644
+--- a/sound/usb/validate.c
++++ b/sound/usb/validate.c
+@@ -221,6 +221,17 @@ static bool validate_uac3_feature_unit(const void *p,
+ 	return d->bLength >= sizeof(*d) + 4 + 2;
+ }
+ 
++static bool validate_uac3_power_domain_unit(const void *p,
++					    const struct usb_desc_validator *v)
++{
++	const struct uac3_power_domain_descriptor *d = p;
++
++	if (d->bLength < sizeof(*d))
++		return false;
++	/* baEntities[] + wPDomainDescrStr */
++	return d->bLength >= sizeof(*d) + d->bNrEntities + 2;
++}
++
+ static bool validate_midi_out_jack(const void *p,
+ 				   const struct usb_desc_validator *v)
+ {
+@@ -285,6 +296,7 @@ static const struct usb_desc_validator audio_validators[] = {
+ 	      struct uac3_clock_multiplier_descriptor),
+ 	/* UAC_VERSION_3, UAC3_SAMPLE_RATE_CONVERTER: not implemented yet */
+ 	/* UAC_VERSION_3, UAC3_CONNECTORS: not implemented yet */
++	FUNC(UAC_VERSION_3, UAC3_POWER_DOMAIN, validate_uac3_power_domain_unit),
+ 	{ } /* terminator */
+ };
+ 
+diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
+index cd5963cb605873..2b7f2bd3a7dbc7 100644
+--- a/tools/bpf/bpftool/main.c
++++ b/tools/bpf/bpftool/main.c
+@@ -534,9 +534,9 @@ int main(int argc, char **argv)
+ 		usage();
+ 
+ 	if (version_requested)
+-		return do_version(argc, argv);
+-
+-	ret = cmd_select(commands, argc, argv, do_help);
++		ret = do_version(argc, argv);
++	else
++		ret = cmd_select(commands, argc, argv, do_help);
+ 
+ 	if (json_output)
+ 		jsonw_destroy(&json_wtr);
+diff --git a/tools/include/nolibc/std.h b/tools/include/nolibc/std.h
+index 933bc0be7e1c6b..a9d8b5b51f37f8 100644
+--- a/tools/include/nolibc/std.h
++++ b/tools/include/nolibc/std.h
+@@ -20,6 +20,8 @@
+ 
+ #include "stdint.h"
+ 
++#include <linux/types.h>
++
+ /* those are commonly provided by sys/types.h */
+ typedef unsigned int          dev_t;
+ typedef unsigned long         ino_t;
+@@ -31,6 +33,6 @@ typedef unsigned long       nlink_t;
+ typedef   signed long         off_t;
+ typedef   signed long     blksize_t;
+ typedef   signed long      blkcnt_t;
+-typedef   signed long        time_t;
++typedef __kernel_old_time_t  time_t;
+ 
+ #endif /* _NOLIBC_STD_H */
+diff --git a/tools/include/nolibc/types.h b/tools/include/nolibc/types.h
+index b26a5d0c417c7c..9d606c7138a86f 100644
+--- a/tools/include/nolibc/types.h
++++ b/tools/include/nolibc/types.h
+@@ -127,7 +127,7 @@ typedef struct {
+ 		int __fd = (fd);					\
+ 		if (__fd >= 0)						\
+ 			__set->fds[__fd / FD_SETIDXMASK] &=		\
+-				~(1U << (__fd & FX_SETBITMASK));	\
++				~(1U << (__fd & FD_SETBITMASK));	\
+ 	} while (0)
+ 
+ #define FD_SET(fd, set) do {						\
+@@ -144,7 +144,7 @@ typedef struct {
+ 		int __r = 0;						\
+ 		if (__fd >= 0)						\
+ 			__r = !!(__set->fds[__fd / FD_SETIDXMASK] &	\
+-1U << (__fd & FD_SET_BITMASK));						\
++1U << (__fd & FD_SETBITMASK));						\
+ 		__r;							\
+ 	})
+ 
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index c8e29c52d28c02..faa4ca04986c33 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -4582,6 +4582,11 @@ static int bpf_program__record_reloc(struct bpf_program *prog,
+ 
+ 	/* arena data relocation */
+ 	if (shdr_idx == obj->efile.arena_data_shndx) {
++		if (obj->arena_map_idx < 0) {
++			pr_warn("prog '%s': bad arena data relocation at insn %u, no arena maps defined\n",
++				prog->name, insn_idx);
++			return -LIBBPF_ERRNO__RELOC;
++		}
+ 		reloc_desc->type = RELO_DATA;
+ 		reloc_desc->insn_idx = insn_idx;
+ 		reloc_desc->map_idx = obj->arena_map_idx;
+@@ -9216,7 +9221,7 @@ int bpf_object__gen_loader(struct bpf_object *obj, struct gen_loader_opts *opts)
+ 		return libbpf_err(-EFAULT);
+ 	if (!OPTS_VALID(opts, gen_loader_opts))
+ 		return libbpf_err(-EINVAL);
+-	gen = calloc(sizeof(*gen), 1);
++	gen = calloc(1, sizeof(*gen));
+ 	if (!gen)
+ 		return libbpf_err(-ENOMEM);
+ 	gen->opts = opts;
+diff --git a/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c b/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
+index 73b6b10cbdd291..5ae02c3d5b64b7 100644
+--- a/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
++++ b/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
+@@ -240,9 +240,9 @@ static int mperf_stop(void)
+ 	int cpu;
+ 
+ 	for (cpu = 0; cpu < cpu_count; cpu++) {
+-		mperf_measure_stats(cpu);
+-		mperf_get_tsc(&tsc_at_measure_end[cpu]);
+ 		clock_gettime(CLOCK_REALTIME, &time_end[cpu]);
++		mperf_get_tsc(&tsc_at_measure_end[cpu]);
++		mperf_measure_stats(cpu);
+ 	}
+ 
+ 	return 0;
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index 444b6bfb4683f3..dd3c32ab9ec1fd 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -67,6 +67,7 @@
+ #include <stdbool.h>
+ #include <assert.h>
+ #include <linux/kernel.h>
++#include <limits.h>
+ 
+ #define UNUSED(x) (void)(x)
+ 
+@@ -6526,8 +6527,16 @@ int check_for_cap_sys_rawio(void)
+ 	int ret = 0;
+ 
+ 	caps = cap_get_proc();
+-	if (caps == NULL)
++	if (caps == NULL) {
++		/*
++		 * CONFIG_MULTIUSER=n kernels have no cap_get_proc()
++		 * Allow them to continue and attempt to access MSRs
++		 */
++		if (errno == ENOSYS)
++			return 0;
++
+ 		return 1;
++	}
+ 
+ 	if (cap_get_flag(caps, CAP_SYS_RAWIO, CAP_EFFECTIVE, &cap_flag_value)) {
+ 		ret = 1;
+@@ -6690,7 +6699,8 @@ static void probe_intel_uncore_frequency_legacy(void)
+ 			sprintf(path_base, "/sys/devices/system/cpu/intel_uncore_frequency/package_%02d_die_%02d", i,
+ 				j);
+ 
+-			if (access(path_base, R_OK))
++			sprintf(path, "%s/current_freq_khz", path_base);
++			if (access(path, R_OK))
+ 				continue;
+ 
+ 			BIC_PRESENT(BIC_UNCORE_MHZ);
+diff --git a/tools/scripts/Makefile.include b/tools/scripts/Makefile.include
+index 5158250988cea8..ded48263dd5e05 100644
+--- a/tools/scripts/Makefile.include
++++ b/tools/scripts/Makefile.include
+@@ -101,7 +101,9 @@ else ifneq ($(CROSS_COMPILE),)
+ # Allow userspace to override CLANG_CROSS_FLAGS to specify their own
+ # sysroots and flags or to avoid the GCC call in pure Clang builds.
+ ifeq ($(CLANG_CROSS_FLAGS),)
+-CLANG_CROSS_FLAGS := --target=$(notdir $(CROSS_COMPILE:%-=%))
++CLANG_TARGET := $(notdir $(CROSS_COMPILE:%-=%))
++CLANG_TARGET := $(subst s390-linux,s390x-linux,$(CLANG_TARGET))
++CLANG_CROSS_FLAGS := --target=$(CLANG_TARGET)
+ GCC_TOOLCHAIN_DIR := $(dir $(shell which $(CROSS_COMPILE)gcc 2>/dev/null))
+ ifneq ($(GCC_TOOLCHAIN_DIR),)
+ CLANG_CROSS_FLAGS += --prefix=$(GCC_TOOLCHAIN_DIR)$(notdir $(CROSS_COMPILE))
+diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl
+index a5f7fdd0c1fbbb..e1d31e2aa948ff 100755
+--- a/tools/testing/ktest/ktest.pl
++++ b/tools/testing/ktest/ktest.pl
+@@ -1371,7 +1371,10 @@ sub __eval_option {
+ 	# If a variable contains itself, use the default var
+ 	if (($var eq $name) && defined($opt{$var})) {
+ 	    $o = $opt{$var};
+-	    $retval = "$retval$o";
++	    # Only append if the default doesn't contain itself
++	    if ($o !~ m/\$\{$var\}/) {
++		$retval = "$retval$o";
++	    }
+ 	} elsif (defined($opt{$o})) {
+ 	    $o = $opt{$o};
+ 	    $retval = "$retval$o";
+diff --git a/tools/testing/selftests/arm64/fp/sve-ptrace.c b/tools/testing/selftests/arm64/fp/sve-ptrace.c
+index c499d5789dd53f..16320aeaff857b 100644
+--- a/tools/testing/selftests/arm64/fp/sve-ptrace.c
++++ b/tools/testing/selftests/arm64/fp/sve-ptrace.c
+@@ -170,7 +170,7 @@ static void ptrace_set_get_inherit(pid_t child, const struct vec_type *type)
+ 	memset(&sve, 0, sizeof(sve));
+ 	sve.size = sizeof(sve);
+ 	sve.vl = sve_vl_from_vq(SVE_VQ_MIN);
+-	sve.flags = SVE_PT_VL_INHERIT;
++	sve.flags = SVE_PT_VL_INHERIT | SVE_PT_REGS_SVE;
+ 	ret = set_sve(child, type, &sve);
+ 	if (ret != 0) {
+ 		ksft_test_result_fail("Failed to set %s SVE_PT_VL_INHERIT\n",
+@@ -235,6 +235,7 @@ static void ptrace_set_get_vl(pid_t child, const struct vec_type *type,
+ 	/* Set the VL by doing a set with no register payload */
+ 	memset(&sve, 0, sizeof(sve));
+ 	sve.size = sizeof(sve);
++	sve.flags = SVE_PT_REGS_SVE;
+ 	sve.vl = vl;
+ 	ret = set_sve(child, type, &sve);
+ 	if (ret != 0) {
+diff --git a/tools/testing/selftests/bpf/prog_tests/ringbuf.c b/tools/testing/selftests/bpf/prog_tests/ringbuf.c
+index da430df45aa497..d1e4cb28a72c6b 100644
+--- a/tools/testing/selftests/bpf/prog_tests/ringbuf.c
++++ b/tools/testing/selftests/bpf/prog_tests/ringbuf.c
+@@ -97,7 +97,7 @@ static void ringbuf_write_subtest(void)
+ 	if (!ASSERT_OK_PTR(skel, "skel_open"))
+ 		return;
+ 
+-	skel->maps.ringbuf.max_entries = 0x4000;
++	skel->maps.ringbuf.max_entries = 0x40000;
+ 
+ 	err = test_ringbuf_write_lskel__load(skel);
+ 	if (!ASSERT_OK(err, "skel_load"))
+@@ -108,7 +108,7 @@ static void ringbuf_write_subtest(void)
+ 	mmap_ptr = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, rb_fd, 0);
+ 	if (!ASSERT_OK_PTR(mmap_ptr, "rw_cons_pos"))
+ 		goto cleanup;
+-	*mmap_ptr = 0x3000;
++	*mmap_ptr = 0x30000;
+ 	ASSERT_OK(munmap(mmap_ptr, page_size), "unmap_rw");
+ 
+ 	skel->bss->pid = getpid();
+diff --git a/tools/testing/selftests/bpf/prog_tests/user_ringbuf.c b/tools/testing/selftests/bpf/prog_tests/user_ringbuf.c
+index d424e7ecbd12d0..9fd3ae98732102 100644
+--- a/tools/testing/selftests/bpf/prog_tests/user_ringbuf.c
++++ b/tools/testing/selftests/bpf/prog_tests/user_ringbuf.c
+@@ -21,8 +21,7 @@
+ #include "../progs/test_user_ringbuf.h"
+ 
+ static const long c_sample_size = sizeof(struct sample) + BPF_RINGBUF_HDR_SZ;
+-static const long c_ringbuf_size = 1 << 12; /* 1 small page */
+-static const long c_max_entries = c_ringbuf_size / c_sample_size;
++static long c_ringbuf_size, c_max_entries;
+ 
+ static void drain_current_samples(void)
+ {
+@@ -424,7 +423,9 @@ static void test_user_ringbuf_loop(void)
+ 	uint32_t remaining_samples = total_samples;
+ 	int err;
+ 
+-	BUILD_BUG_ON(total_samples <= c_max_entries);
++	if (!ASSERT_LT(c_max_entries, total_samples, "compare_c_max_entries"))
++		return;
++
+ 	err = load_skel_create_user_ringbuf(&skel, &ringbuf);
+ 	if (err)
+ 		return;
+@@ -686,6 +687,9 @@ void test_user_ringbuf(void)
+ {
+ 	int i;
+ 
++	c_ringbuf_size = getpagesize(); /* 1 page */
++	c_max_entries = c_ringbuf_size / c_sample_size;
++
+ 	for (i = 0; i < ARRAY_SIZE(success_tests); i++) {
+ 		if (!test__start_subtest(success_tests[i].test_name))
+ 			continue;
+diff --git a/tools/testing/selftests/bpf/progs/test_ringbuf_write.c b/tools/testing/selftests/bpf/progs/test_ringbuf_write.c
+index 350513c0e4c985..f063a0013f8506 100644
+--- a/tools/testing/selftests/bpf/progs/test_ringbuf_write.c
++++ b/tools/testing/selftests/bpf/progs/test_ringbuf_write.c
+@@ -26,11 +26,11 @@ int test_ringbuf_write(void *ctx)
+ 	if (cur_pid != pid)
+ 		return 0;
+ 
+-	sample1 = bpf_ringbuf_reserve(&ringbuf, 0x3000, 0);
++	sample1 = bpf_ringbuf_reserve(&ringbuf, 0x30000, 0);
+ 	if (!sample1)
+ 		return 0;
+ 	/* first one can pass */
+-	sample2 = bpf_ringbuf_reserve(&ringbuf, 0x3000, 0);
++	sample2 = bpf_ringbuf_reserve(&ringbuf, 0x30000, 0);
+ 	if (!sample2) {
+ 		bpf_ringbuf_discard(sample1, 0);
+ 		__sync_fetch_and_add(&discarded, 1);
+diff --git a/tools/testing/selftests/bpf/progs/verifier_unpriv.c b/tools/testing/selftests/bpf/progs/verifier_unpriv.c
+index a4a5e207160404..28200f068ce53c 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_unpriv.c
++++ b/tools/testing/selftests/bpf/progs/verifier_unpriv.c
+@@ -619,7 +619,7 @@ __naked void pass_pointer_to_tail_call(void)
+ 
+ SEC("socket")
+ __description("unpriv: cmp map pointer with zero")
+-__success __failure_unpriv __msg_unpriv("R1 pointer comparison")
++__success __success_unpriv
+ __retval(0)
+ __naked void cmp_map_pointer_with_zero(void)
+ {
+diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc b/tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc
+index 4b994b6df5ac30..ed81eaf2afd6d9 100644
+--- a/tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc
++++ b/tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc
+@@ -29,7 +29,7 @@ ftrace_filter_check 'schedule*' '^schedule.*$'
+ ftrace_filter_check '*pin*lock' '.*pin.*lock$'
+ 
+ # filter by start*mid*
+-ftrace_filter_check 'mutex*try*' '^mutex.*try.*'
++ftrace_filter_check 'mutex*unl*' '^mutex.*unl.*'
+ 
+ # Advanced full-glob matching feature is recently supported.
+ # Skip the tests if we are sure the kernel does not support it.
+diff --git a/tools/testing/selftests/futex/include/futextest.h b/tools/testing/selftests/futex/include/futextest.h
+index ddbcfc9b7bac4a..7a5fd1d5355e7e 100644
+--- a/tools/testing/selftests/futex/include/futextest.h
++++ b/tools/testing/selftests/futex/include/futextest.h
+@@ -47,6 +47,17 @@ typedef volatile u_int32_t futex_t;
+ 					 FUTEX_PRIVATE_FLAG)
+ #endif
+ 
++/*
++ * SYS_futex is expected from system C library, in glibc some 32-bit
++ * architectures (e.g. RV32) are using 64-bit time_t, therefore it doesn't have
++ * SYS_futex defined but just SYS_futex_time64. Define SYS_futex as
++ * SYS_futex_time64 in this situation to ensure the compilation and the
++ * compatibility.
++ */
++#if !defined(SYS_futex) && defined(SYS_futex_time64)
++#define SYS_futex SYS_futex_time64
++#endif
++
+ /**
+  * futex() - SYS_futex syscall wrapper
+  * @uaddr:	address of first futex
+diff --git a/tools/testing/selftests/net/netfilter/config b/tools/testing/selftests/net/netfilter/config
+index 43d8b500d391a2..8cc6036f97dc48 100644
+--- a/tools/testing/selftests/net/netfilter/config
++++ b/tools/testing/selftests/net/netfilter/config
+@@ -91,4 +91,4 @@ CONFIG_XFRM_STATISTICS=y
+ CONFIG_NET_PKTGEN=m
+ CONFIG_TUN=m
+ CONFIG_INET_DIAG=m
+-CONFIG_SCTP_DIAG=m
++CONFIG_INET_SCTP_DIAG=m
+diff --git a/tools/testing/selftests/vDSO/vdso_test_getrandom.c b/tools/testing/selftests/vDSO/vdso_test_getrandom.c
+index 95057f7567db22..ff8d5675da2b0e 100644
+--- a/tools/testing/selftests/vDSO/vdso_test_getrandom.c
++++ b/tools/testing/selftests/vDSO/vdso_test_getrandom.c
+@@ -242,6 +242,7 @@ static void kselftest(void)
+ 	pid_t child;
+ 
+ 	ksft_print_header();
++	vgetrandom_init();
+ 	ksft_set_plan(2);
+ 
+ 	for (size_t i = 0; i < 1000; ++i) {
+@@ -295,8 +296,6 @@ static void usage(const char *argv0)
+ 
+ int main(int argc, char *argv[])
+ {
+-	vgetrandom_init();
+-
+ 	if (argc == 1) {
+ 		kselftest();
+ 		return 0;
+@@ -306,6 +305,9 @@ int main(int argc, char *argv[])
+ 		usage(argv[0]);
+ 		return 1;
+ 	}
++
++	vgetrandom_init();
++
+ 	if (!strcmp(argv[1], "bench-single"))
+ 		bench_single();
+ 	else if (!strcmp(argv[1], "bench-multi"))
+diff --git a/tools/verification/dot2/dot2k.py b/tools/verification/dot2/dot2k.py
+index 745d35a4a37918..dd4b5528a4f23e 100644
+--- a/tools/verification/dot2/dot2k.py
++++ b/tools/verification/dot2/dot2k.py
+@@ -35,6 +35,7 @@ class dot2k(Dot2c):
+             self.states = []
+             self.main_c = self.__read_file(self.monitor_templates_dir + "main_container.c")
+             self.main_h = self.__read_file(self.monitor_templates_dir + "main_container.h")
++            self.kconfig = self.__read_file(self.monitor_templates_dir + "Kconfig_container")
+         else:
+             super().__init__(file_path, extra_params.get("model_name"))
+ 
+@@ -44,7 +45,7 @@ class dot2k(Dot2c):
+             self.monitor_type = MonitorType
+             self.main_c = self.__read_file(self.monitor_templates_dir + "main.c")
+             self.trace_h = self.__read_file(self.monitor_templates_dir + "trace.h")
+-        self.kconfig = self.__read_file(self.monitor_templates_dir + "Kconfig")
++            self.kconfig = self.__read_file(self.monitor_templates_dir + "Kconfig")
+         self.enum_suffix = "_%s" % self.name
+         self.description = extra_params.get("description", self.name) or "auto-generated"
+         self.auto_patch = extra_params.get("auto_patch")
+diff --git a/tools/verification/dot2/dot2k_templates/Kconfig_container b/tools/verification/dot2/dot2k_templates/Kconfig_container
+new file mode 100644
+index 00000000000000..a606111949c27e
+--- /dev/null
++++ b/tools/verification/dot2/dot2k_templates/Kconfig_container
+@@ -0,0 +1,5 @@
++config RV_MON_%%MODEL_NAME_UP%%
++	depends on RV
++	bool "%%MODEL_NAME%% monitor"
++	help
++	  %%DESCRIPTION%%


* [gentoo-commits] proj/linux-patches:6.15 commit in: /
@ 2025-08-21  4:00 Arisu Tachibana
  0 siblings, 0 replies; 19+ messages in thread
From: Arisu Tachibana @ 2025-08-21  4:00 UTC (permalink / raw
  To: gentoo-commits

commit:     61ceb38aa297025c438f9f416b64850b459b0cc7
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 21 04:00:01 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Aug 21 04:00:01 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=61ceb38a

Remove 1900_btrfs_fix_log_tree_replay_failure.patch

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                  |   4 -
 1900_btrfs_fix_log_tree_replay_failure.patch | 143 ---------------------------
 2 files changed, 147 deletions(-)

diff --git a/0000_README b/0000_README
index e249871f..49603ffb 100644
--- a/0000_README
+++ b/0000_README
@@ -99,10 +99,6 @@ Patch:  1730_parisc-Disable-prctl.patch
 From:   https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
 Desc:   prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
 
-Patch:  1900_btrfs_fix_log_tree_replay_failure.patch
-From:   https://gitlab.com/cki-project/kernel-ark/-/commit/e6c71b29fab08fd0ab55d2f83c4539d68d543895
-Desc:   btrfs: fix log tree replay failure due to file with 0 links and extents
-
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1900_btrfs_fix_log_tree_replay_failure.patch b/1900_btrfs_fix_log_tree_replay_failure.patch
deleted file mode 100644
index 335bb7f2..00000000
--- a/1900_btrfs_fix_log_tree_replay_failure.patch
+++ /dev/null
@@ -1,143 +0,0 @@
-From e6c71b29fab08fd0ab55d2f83c4539d68d543895 Mon Sep 17 00:00:00 2001
-From: Filipe Manana <fdmanana@suse.com>
-Date: Wed, 30 Jul 2025 19:18:37 +0100
-Subject: [PATCH] btrfs: fix log tree replay failure due to file with 0 links
- and extents
-
-If we log a new inode (not persisted in a past transaction) that has 0
-links and extents, then log another inode with a higher inode number, we
-end up failing to replay the log tree with -EINVAL. The steps for
-this are:
-
-1) create new file A
-2) write some data to file A
-3) open an fd on file A
-4) unlink file A
-5) fsync file A using the previously open fd
-6) create file B (has higher inode number than file A)
-7) fsync file B
-8) power fail before current transaction commits
-
-Now when attempting to mount the fs, the log replay will fail with
--ENOENT at replay_one_extent() when attempting to replay the first
-extent of file A. The failure comes when trying to open the inode for
-file A in the subvolume tree, since it doesn't exist.
-
-Before commit 5f61b961599a ("btrfs: fix inode lookup error handling
-during log replay"), the returned error was -EIO instead of -ENOENT,
-since we converted any errors when attempting to read an inode during
-log replay to -EIO.
-
-The reason for this is that the log replay procedure fails to ignore
-the current inode when we are at the LOG_WALK_REPLAY_ALL stage, the
-current inode has 0 links, and the last inode we processed in the previous
-stage has a non-zero link count. In other words, the issue is that at
-replay_one_extent() we only update wc->ignore_cur_inode if the current
-replay stage is LOG_WALK_REPLAY_INODES.
-
-Fix this by updating wc->ignore_cur_inode whenever we find an inode item
-regardless of the current replay stage. This is a simple solution and easy
-to backport, but later we can explore alternatives such as not logging
-extents or inode items other than the inode item for inodes with a link
-count of 0.
-
-The problem with the wc->ignore_cur_inode logic has been around since
-commit f2d72f42d5fa ("Btrfs: fix warning when replaying log after fsync
-of a tmpfile") but it only became frequent to hit since the more recent
-commit 5e85262e542d ("btrfs: fix fsync of files with no hard links not
-persisting deletion"), because we stopped skipping inodes with a link
-count of 0 when logging, whereas before the problem would only be triggered
-if trying to replay a log tree created with an older kernel which has a
-logged inode with 0 links.
-
-A test case for fstests will be submitted soon.
-
-Reported-by: Peter Jung <ptr1337@cachyos.org>
-Link: https://lore.kernel.org/linux-btrfs/fce139db-4458-4788-bb97-c29acf6cb1df@cachyos.org/
-Reported-by: burneddi <burneddi@protonmail.com>
-Link: https://lore.kernel.org/linux-btrfs/lh4W-Lwc0Mbk-QvBhhQyZxf6VbM3E8VtIvU3fPIQgweP_Q1n7wtlUZQc33sYlCKYd-o6rryJQfhHaNAOWWRKxpAXhM8NZPojzsJPyHMf2qY=@protonmail.com/#t
-Reported-by: Russell Haley <yumpusamongus@gmail.com>
-Link: https://lore.kernel.org/linux-btrfs/598ecc75-eb80-41b3-83c2-f2317fbb9864@gmail.com/
-Fixes: f2d72f42d5fa ("Btrfs: fix warning when replaying log after fsync of a tmpfile")
-Reviewed-by: Boris Burkov <boris@bur.io>
-Signed-off-by: Filipe Manana <fdmanana@suse.com>
----
- fs/btrfs/tree-log.c | 45 +++++++++++++++++++++++++++++----------------
- 1 file changed, 29 insertions(+), 16 deletions(-)
-
-diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
-index e05140ce95be9..2fb9e7bfc9077 100644
---- a/fs/btrfs/tree-log.c
-+++ b/fs/btrfs/tree-log.c
-@@ -321,8 +321,7 @@ struct walk_control {
- 
- 	/*
- 	 * Ignore any items from the inode currently being processed. Needs
--	 * to be set every time we find a BTRFS_INODE_ITEM_KEY and we are in
--	 * the LOG_WALK_REPLAY_INODES stage.
-+	 * to be set every time we find a BTRFS_INODE_ITEM_KEY.
- 	 */
- 	bool ignore_cur_inode;
- 
-@@ -2410,23 +2409,30 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
- 
- 	nritems = btrfs_header_nritems(eb);
- 	for (i = 0; i < nritems; i++) {
--		btrfs_item_key_to_cpu(eb, &key, i);
-+		struct btrfs_inode_item *inode_item;
- 
--		/* inode keys are done during the first stage */
--		if (key.type == BTRFS_INODE_ITEM_KEY &&
--		    wc->stage == LOG_WALK_REPLAY_INODES) {
--			struct btrfs_inode_item *inode_item;
--			u32 mode;
-+		btrfs_item_key_to_cpu(eb, &key, i);
- 
--			inode_item = btrfs_item_ptr(eb, i,
--					    struct btrfs_inode_item);
-+		if (key.type == BTRFS_INODE_ITEM_KEY) {
-+			inode_item = btrfs_item_ptr(eb, i, struct btrfs_inode_item);
- 			/*
--			 * If we have a tmpfile (O_TMPFILE) that got fsync'ed
--			 * and never got linked before the fsync, skip it, as
--			 * replaying it is pointless since it would be deleted
--			 * later. We skip logging tmpfiles, but it's always
--			 * possible we are replaying a log created with a kernel
--			 * that used to log tmpfiles.
-+			 * An inode with no links is either:
-+			 *
-+			 * 1) A tmpfile (O_TMPFILE) that got fsync'ed and never
-+			 *    got linked before the fsync, skip it, as replaying
-+			 *    it is pointless since it would be deleted later.
-+			 *    We skip logging tmpfiles, but it's always possible
-+			 *    we are replaying a log created with a kernel that
-+			 *    used to log tmpfiles;
-+			 *
-+			 * 2) A non-tmpfile which got its last link deleted
-+			 *    while holding an open fd on it and later got
-+			 *    fsynced through that fd. We always log the
-+			 *    parent inodes when inode->last_unlink_trans is
-+			 *    set to the current transaction, so ignore all the
-+			 *    inode items for this inode. We will delete the
-+			 *    inode when processing the parent directory with
-+			 *    replay_dir_deletes().
- 			 */
- 			if (btrfs_inode_nlink(eb, inode_item) == 0) {
- 				wc->ignore_cur_inode = true;
-@@ -2434,6 +2440,13 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
- 			} else {
- 				wc->ignore_cur_inode = false;
- 			}
-+		}
-+
-+		/* Inode keys are done during the first stage. */
-+		if (key.type == BTRFS_INODE_ITEM_KEY &&
-+		    wc->stage == LOG_WALK_REPLAY_INODES) {
-+			 u32 mode;
-+
- 			ret = replay_xattr_deletes(wc->trans, root, log,
- 						   path, key.objectid);
- 			if (ret)
--- 
-GitLab
-
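
The eight reproduction steps described in the removed patch above translate
into a short userspace sequence. The following is a minimal sketch, not the
fstests case the patch says will be submitted: it assumes /mnt is a freshly
mounted btrfs filesystem, the file names A and B are arbitrary, and the
power failure of step 8 has to be simulated externally (fstests typically
uses dm-flakey for that).

/* Minimal sketch of the reproduction steps listed in the patch above.
 * Assumptions: /mnt is a mounted btrfs filesystem and the power cut of
 * step 8 is simulated externally (e.g. via dm-flakey); this program only
 * drives steps 1-7 and then parks itself.
 */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	int fd_a, fd_b;

	memset(buf, 0xaa, sizeof(buf));

	fd_a = open("/mnt/A", O_CREAT | O_WRONLY, 0644);  /* 1) create file A (this fd is step 3) */
	if (fd_a < 0)
		return 1;
	if (write(fd_a, buf, sizeof(buf)) < 0)            /* 2) write some data to A */
		return 1;
	unlink("/mnt/A");                                 /* 4) drop A's last link; the fd stays open */
	fsync(fd_a);                                      /* 5) fsync through the fd: a 0-link inode is logged */

	fd_b = open("/mnt/B", O_CREAT | O_WRONLY, 0644);  /* 6) create B, assumed to get a higher inode number */
	if (fd_b < 0)
		return 1;
	fsync(fd_b);                                      /* 7) fsync B so the log tree holds both inodes */

	pause();                                          /* 8) cut power here, before the transaction commits */
	return 0;
}

With the bug present, the next mount replays the log tree,
replay_one_extent() reaches file A's extents, fails to find the inode in
the subvolume tree, and log replay aborts; the fix quoted above makes
wc->ignore_cur_inode cover such inodes in every replay stage, not just
LOG_WALK_REPLAY_INODES.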

